Sheaf on an algebraic stack
In algebraic geometry, a quasi-coherent sheaf on an algebraic stack ${\mathfrak {X}}$ is a generalization of a quasi-coherent sheaf on a scheme. The most concrete description is that it is the data consisting of, for each scheme S in the base category and each $\xi $ in ${\mathfrak {X}}(S)$, a quasi-coherent sheaf $F_{\xi }$ on S, together with maps implementing the compatibility conditions among the $F_{\xi }$'s.
For a Deligne–Mumford stack, there is a simpler description in terms of a presentation $U\to {\mathfrak {X}}$: a quasi-coherent sheaf on ${\mathfrak {X}}$ is one obtained by descending a quasi-coherent sheaf on U.[1] A quasi-coherent sheaf on a Deligne–Mumford stack generalizes an orbibundle (in a sense).
Constructible sheaves (e.g., as ℓ-adic sheaves) can also be defined on an algebraic stack and they appear as coefficients of cohomology of a stack.
Definition
The following definition is that of (Arbarello, Cornalba & Griffiths 2011, Ch. XIII, Definition 2.1).
Let ${\mathfrak {X}}$ be a category fibered in groupoids over the category of schemes of finite type over a field with the structure functor p. Then a quasi-coherent sheaf on ${\mathfrak {X}}$ is the data consisting of:
1. for each object $\xi $, a quasi-coherent sheaf $F_{\xi }$ on the scheme $p(\xi )$,
2. for each morphism $H:\xi \to \eta $ in ${\mathfrak {X}}$ and $h=p(H):p(\xi )\to p(\eta )$ in the base category, an isomorphism
$\rho _{H}:h^{*}(F_{\eta }){\overset {\simeq }{\to }}F_{\xi }$
satisfying the cocycle condition: for each pair $H_{1}:\xi _{1}\to \xi _{2},H_{2}:\xi _{2}\to \xi _{3}$,
$h_{1}^{*}h_{2}^{*}F_{\xi _{3}}{\overset {h_{1}^{*}(\rho _{H_{2}})}{\to }}h_{1}^{*}F_{\xi _{2}}{\overset {\rho _{H_{1}}}{\to }}F_{\xi _{1}}$ equals $h_{1}^{*}h_{2}^{*}F_{\xi _{3}}{\overset {\sim }{=}}(h_{2}\circ h_{1})^{*}F_{\xi _{3}}{\overset {\rho _{H_{2}\circ H_{1}}}{\to }}F_{\xi _{1}}$.
(cf. equivariant sheaf.)
Examples
• The Hodge bundle on the moduli stack of algebraic curves of fixed genus.
ℓ-adic formalism
The ℓ-adic formalism (theory of ℓ-adic sheaves) extends to algebraic stacks.
See also
• Hopf algebroid - encodes the data of quasi-coherent sheaves on a prestack presentable as a groupoid internal to affine schemes (or projective schemes using graded Hopf algebroids)
Notes
1. Arbarello, Cornalba & Griffiths 2011, Ch. XIII., § 2.
References
• Arbarello, Enrico; Cornalba, Maurizio; Griffiths, Phillip (2011). Geometry of algebraic curves. Vol. II, with a contribution by Joseph Daniel Harris. Grundlehren der mathematischen Wissenschaften. Vol. 268. doi:10.1007/978-3-540-69392-5. ISBN 978-3-540-42688-2. MR 2807457.
• Behrend, Kai A. (2003). "Derived 𝑙-adic categories for algebraic stacks". Memoirs of the American Mathematical Society. 163 (774). doi:10.1090/memo/0774.
• Laumon, Gérard; Moret-Bailly, Laurent (2000). Champs algébriques. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics. Vol. 39. Berlin, New York: Springer-Verlag. doi:10.1007/978-3-540-24899-6. ISBN 978-3-540-65761-3. MR 1771927.
• Olsson, Martin (2007). "Sheaves on Artin stacks". Journal für die reine und angewandte Mathematik (Crelle's Journal). 2007 (603): 55–112. doi:10.1515/CRELLE.2007.012. S2CID 15445962. Editorial note: This paper corrects a mistake in Laumon and Moret-Bailly's Champs algébriques.
• Rydh, David (2016). "Approximation of Sheaves on Algebraic Stacks". International Mathematics Research Notices. 2016 (3): 717–737. arXiv:1408.6698. doi:10.1093/imrn/rnv142.
External links
• https://mathoverflow.net/questions/69035/the-category-of-l-adic-sheaves
• Brian Lawrence, Adic Formalism, Part 2 (notes, March 1, 2017): http://math.stanford.edu/~conrad/Weil2seminar/Notes/L16.pdf
Shear matrix
In mathematics (particularly linear algebra), a shear matrix or transvection is an elementary matrix that represents the addition of a multiple of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of the zero elements with a non-zero value.
The name shear reflects the fact that the matrix represents a shear transformation. Geometrically, the transformation displaces each point parallel to a fixed axis (the axis whose row in the matrix contains the shear element) by an amount proportional to one of the point's other coordinates, so pairs of points that were separated purely along that other coordinate direction are replaced by pairs whose separation is no longer purely axial but has two vector components. The shear axis itself is left fixed, and is therefore always an eigenvector of S.
Definition
A typical shear matrix is of the form
$S={\begin{pmatrix}1&0&0&\lambda &0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{pmatrix}}.$
This matrix shears parallel to the x axis: it adds $\lambda $ times the fourth coordinate to the first coordinate, so each point is displaced in the x direction by an amount proportional to its fourth coordinate.
A shear parallel to the x axis results in $x'=x+\lambda y$ and $y'=y$. In matrix form:
${\begin{pmatrix}x'\\y'\end{pmatrix}}={\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.$
Similarly, a shear parallel to the y axis has $x'=x$ and $y'=y+\lambda x$. In matrix form:
${\begin{pmatrix}x'\\y'\end{pmatrix}}={\begin{pmatrix}1&0\\\lambda &1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.$
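As a quick numerical illustration (an illustrative sketch assuming NumPy, not part of the original article; λ = 2 is an arbitrary choice), the two 2D shears above can be applied directly to a point:

import numpy as np

lam = 2.0  # shear factor λ (illustrative value)

# Shear parallel to the x axis: x' = x + λy, y' = y
S_x = np.array([[1.0, lam],
                [0.0, 1.0]])
# Shear parallel to the y axis: x' = x, y' = y + λx
S_y = np.array([[1.0, 0.0],
                [lam, 1.0]])

p = np.array([3.0, 4.0])   # the point (x, y) = (3, 4)
print(S_x @ p)             # [11.  4.], since x' = 3 + 2*4
print(S_y @ p)             # [ 3. 10.], since y' = 4 + 2*3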
In 3D space this matrix shears the YZ plane into the diagonal plane passing through the three points $(0,0,0)$, $(\lambda ,1,0)$, $(\mu ,0,1)$:
$S={\begin{pmatrix}1&\lambda &\mu \\0&1&0\\0&0&1\end{pmatrix}}.$
The determinant of a shear matrix is always 1: in the expansion of the determinant, every term that uses the shear element must also use at least one of the zero entries of the matrix, so all such terms vanish and only the product of the diagonal entries, which are all 1, remains. Thus every shear matrix has an inverse, and the inverse is simply a shear matrix with the shear element negated, representing a shear transformation in the opposite direction. In fact, this is part of an easily derived more general result: if S is a shear matrix with shear element λ, then S^n is a shear matrix whose shear element is simply nλ. Hence, raising a shear matrix to a power n multiplies its shear factor by n.
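These facts are easy to check numerically; the following sketch (assuming NumPy) verifies the determinant, the inverse, and the power rule for a 2D shear:

import numpy as np

lam = 1.5
S = np.array([[1.0, lam],
              [0.0, 1.0]])

print(np.linalg.det(S))              # 1.0 (up to floating-point rounding): a shear preserves area
print(np.linalg.inv(S))              # [[1. -1.5], [0. 1.]]: a shear by -λ
print(np.linalg.matrix_power(S, 4))  # [[1. 6.], [0. 1.]]: shear element is 4λ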
Properties
If S is an n × n shear matrix, then:
• S has rank n and therefore is invertible
• 1 is the only eigenvalue of S, so det S = 1 and tr S = n
• the eigenspace of S (associated with the eigenvalue 1) has n − 1 dimensions.
• S is defective
• S is asymmetric
• S may be made into a block matrix by at most 1 column interchange and 1 row interchange operation
• the area, volume, or any higher order interior capacity of a polytope is invariant under the shear transformation of the polytope's vertices.
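A short numerical check of the properties listed above, for a 4 × 4 shear (an illustrative sketch assuming NumPy):

import numpy as np

n, lam = 4, 2.0
S = np.eye(n)
S[0, 2] = lam   # place the shear element at an off-diagonal position

print(np.linalg.matrix_rank(S))              # 4: full rank, hence invertible
print(np.trace(S), np.linalg.det(S))         # 4.0 1.0
print(np.allclose(np.linalg.eigvals(S), 1))  # True: 1 is the only eigenvalue
# The eigenspace for eigenvalue 1 is the null space of S - I; since S - I has
# rank 1, that eigenspace has dimension n - 1 = 3, so S is defective.
print(np.linalg.matrix_rank(S - np.eye(n)))  # 1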
Composition
Two or more shear transformations can be combined.
If two shear matrices are $ {\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}$ and $ {\begin{pmatrix}1&0\\\mu &1\end{pmatrix}}$
then their composition matrix is
${\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}{\begin{pmatrix}1&0\\\mu &1\end{pmatrix}}={\begin{pmatrix}1+\lambda \mu &\lambda \\\mu &1\end{pmatrix}},$
which also has determinant 1, so that area is preserved.
In particular, if $\lambda =\mu $, we have
${\begin{pmatrix}1+\lambda ^{2}&\lambda \\\lambda &1\end{pmatrix}},$
which is a positive definite matrix.
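A brief check of this composition (an illustrative sketch assuming NumPy; the values of λ and μ are arbitrary):

import numpy as np

lam, mu = 0.5, 2.0
A = np.array([[1.0, lam], [0.0, 1.0]])   # shear parallel to the x axis
B = np.array([[1.0, 0.0], [mu, 1.0]])    # shear parallel to the y axis

C = A @ B
print(C)                 # [[1 + λμ, λ], [μ, 1]] = [[2.0, 0.5], [2.0, 1.0]]
print(np.linalg.det(C))  # 1.0: the composition still preserves area

# With λ = μ the product is symmetric and its eigenvalues are positive,
# confirming that it is positive definite.
lam = mu = 0.8
P = np.array([[1 + lam**2, lam], [lam, 1.0]])
print(np.linalg.eigvalsh(P))  # both eigenvalues > 0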
Applications
• Shear matrices are often used in computer graphics.[1][2][3]
See also
• Transformation matrix
Notes
1. Foley et al. (1991, pp. 207–208, 216–217)
2. Geometric Tools for Computer Graphics, Philip J. Schneider and David H. Eberly, pp. 154–157
3. Computer Graphics, Apurva A. Desai, pp. 162–164
References
• Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1991), Computer Graphics: Principles and Practice (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-12110-7
Computer graphics
Vector graphics
• Diffusion curve
• Pixel
2D graphics
2.5D
• Isometric graphics
• Mode 7
• Parallax scrolling
• Ray casting
• Skybox
• Alpha compositing
• Layers
• Text-to-image
3D graphics
• 3D projection
• 3D rendering
• (Image-based
• Spectral
• Unbiased)
• Aliasing
• Anisotropic filtering
• Cel shading
• Lighting
• Global illumination
• Hidden-surface determination
• Polygon mesh
• (Triangle mesh)
• Shading
• Deferred
• Surface triangulation
• Wire-frame model
Concepts
• Affine transformation
• Back-face culling
• Clipping
• Collision detection
• Planar projection
• Rendering
• Rotation
• Scaling
• Shadow mapping
• Shadow volume
• Shear matrix
• Translation
Algorithms
• List of computer graphics algorithms
Matrix classes
Explicitly constrained entries
• Alternant
• Anti-diagonal
• Anti-Hermitian
• Anti-symmetric
• Arrowhead
• Band
• Bidiagonal
• Bisymmetric
• Block-diagonal
• Block
• Block tridiagonal
• Boolean
• Cauchy
• Centrosymmetric
• Conference
• Complex Hadamard
• Copositive
• Diagonally dominant
• Diagonal
• Discrete Fourier Transform
• Elementary
• Equivalent
• Frobenius
• Generalized permutation
• Hadamard
• Hankel
• Hermitian
• Hessenberg
• Hollow
• Integer
• Logical
• Matrix unit
• Metzler
• Moore
• Nonnegative
• Pentadiagonal
• Permutation
• Persymmetric
• Polynomial
• Quaternionic
• Signature
• Skew-Hermitian
• Skew-symmetric
• Skyline
• Sparse
• Sylvester
• Symmetric
• Toeplitz
• Triangular
• Tridiagonal
• Vandermonde
• Walsh
• Z
Constant
• Exchange
• Hilbert
• Identity
• Lehmer
• Of ones
• Pascal
• Pauli
• Redheffer
• Shift
• Zero
Conditions on eigenvalues or eigenvectors
• Companion
• Convergent
• Defective
• Definite
• Diagonalizable
• Hurwitz
• Positive-definite
• Stieltjes
Satisfying conditions on products or inverses
• Congruent
• Idempotent or Projection
• Invertible
• Involutory
• Nilpotent
• Normal
• Orthogonal
• Unimodular
• Unipotent
• Unitary
• Totally unimodular
• Weighing
With specific applications
• Adjugate
• Alternating sign
• Augmented
• Bézout
• Carleman
• Cartan
• Circulant
• Cofactor
• Commutation
• Confusion
• Coxeter
• Distance
• Duplication and elimination
• Euclidean distance
• Fundamental (linear differential equation)
• Generator
• Gram
• Hessian
• Householder
• Jacobian
• Moment
• Payoff
• Pick
• Random
• Rotation
• Seifert
• Shear
• Similarity
• Symplectic
• Totally positive
• Transformation
Used in statistics
• Centering
• Correlation
• Covariance
• Design
• Doubly stochastic
• Fisher information
• Hat
• Precision
• Stochastic
• Transition
Used in graph theory
• Adjacency
• Biadjacency
• Degree
• Edmonds
• Incidence
• Laplacian
• Seidel adjacency
• Tutte
Used in science and engineering
• Cabibbo–Kobayashi–Maskawa
• Density
• Fundamental (computer vision)
• Fuzzy associative
• Gamma
• Gell-Mann
• Hamiltonian
• Irregular
• Overlap
• S
• State transition
• Substitution
• Z (chemistry)
Related terms
• Jordan normal form
• Linear independence
• Matrix exponential
• Matrix representation of conic sections
• Perfect matrix
• Pseudoinverse
• Row echelon form
• Wronskian
• Mathematics portal
• List of matrices
• Category:Matrices
Shearer's inequality
Shearer's inequality, also known as Shearer's lemma, is an inequality in information theory relating the entropy of a set of variables to the entropies of a collection of subsets. It is named for the mathematician James B. Shearer.
Concretely, it states that if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, 2, ..., d} such that every integer between 1 and d lies in at least r of these subsets, then
$H[(X_{1},\dots ,X_{d})]\leq {\frac {1}{r}}\sum _{i=1}^{n}H[(X_{j})_{j\in S_{i}}]$
where $H$ is entropy and $(X_{j})_{j\in S_{i}}$ is the Cartesian product of random variables $X_{j}$ with indices j in $S_{i}$.[1]
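For a concrete check (an illustrative sketch, not from the source), take d = 3 binary random variables and the subsets S1 = {1,2}, S2 = {2,3}, S3 = {1,3}, so that every index lies in r = 2 of them; the script below compares both sides of the inequality for a random joint distribution:

import itertools
import math
import random

def entropy(dist):
    # Shannon entropy (in bits) of a dict mapping outcomes to probabilities.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    # Marginal distribution of the coordinates listed in idx.
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

random.seed(0)
weights = [random.random() for _ in range(8)]
total = sum(weights)
joint = {outcome: w / total
         for outcome, w in zip(itertools.product([0, 1], repeat=3), weights)}

subsets = [(0, 1), (1, 2), (0, 2)]   # S1, S2, S3 (0-based); each index appears r = 2 times
lhs = entropy(joint)
rhs = sum(entropy(marginal(joint, s)) for s in subsets) / 2
print(lhs, rhs, lhs <= rhs + 1e-12)  # Shearer: H(X1,X2,X3) <= (H(X1,X2) + H(X2,X3) + H(X1,X3)) / 2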
Combinatorial version
Let ${\mathcal {F}}$ be a family of subsets of [n] (possibly with repeats) with each $i\in [n]$ included in at least $t$ members of ${\mathcal {F}}$. Let ${\mathcal {A}}$ be another family of subsets of [n]. Then
$|{\mathcal {A}}|\leq \prod _{F\in {\mathcal {F}}}|\operatorname {trace} _{F}({\mathcal {A}})|^{1/t}$
where $\operatorname {trace} _{F}({\mathcal {A}})=\{A\cap F:A\in {\mathcal {A}}\}$ is the set of possible intersections of elements of ${\mathcal {A}}$ with $F$.[2]
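As a small sanity check of the combinatorial version (an illustrative example, not from the source): with n = 3, the family F = {{1,2}, {2,3}, {1,3}} covers each element t = 2 times, and taking A to be all subsets of {1,2,3} makes the bound tight:

from itertools import combinations

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

F = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]   # each element covered t = 2 times
A = powerset({1, 2, 3})

traces = [len({a & f for a in A}) for f in F]   # |trace_F(A)| for each F
bound = 1.0
for size in traces:
    bound *= size ** (1 / 2)
print(len(A), traces, bound)   # 8 [4, 4, 4] 8.0, so |A| <= product holds with equality here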
See also
• Lovász local lemma
References
1. Chung, F.R.K.; Graham, R.L.; Frankl, P.; Shearer, J.B. (1986). "Some Intersection Theorems for Ordered Sets and Graphs". J. Comb. Theory A. 43: 23–37. doi:10.1016/0097-3165(86)90019-1.
2. Galvin, David (2014-06-30). "Three tutorial lectures on entropy and counting". arXiv:1406.7872 [math.CO].
Shearlet
In applied mathematical analysis, shearlets are a multiscale framework which allows efficient encoding of anisotropic features in multivariate problem classes. Originally, shearlets were introduced in 2006[1] for the analysis and sparse approximation of functions $f\in L^{2}(\mathbb {R} ^{2})$. They are a natural extension of wavelets, to accommodate the fact that multivariate functions are typically governed by anisotropic features such as edges in images, since wavelets, as isotropic objects, are not capable of capturing such phenomena.
Shearlets are constructed by parabolic scaling, shearing, and translation applied to a few generating functions. At fine scales, they are essentially supported within skinny and directional ridges following the parabolic scaling law, which reads length² ≈ width. Similar to wavelets, shearlets arise from the affine group and allow a unified treatment of the continuum and digital situation leading to faithful implementations. Although they do not constitute an orthonormal basis for $L^{2}(\mathbb {R} ^{2})$, they still form a frame allowing stable expansions of arbitrary functions $f\in L^{2}(\mathbb {R} ^{2})$.
One of the most important properties of shearlets is their ability to provide optimally sparse approximations (in the sense of optimality in [2]) for cartoon-like functions $f$. In imaging sciences, cartoon-like functions serve as a model for anisotropic features and are compactly supported in $[0,1]^{2}$ while being $C^{2}$ apart from a closed piecewise $C^{2}$ singularity curve with bounded curvature. The decay rate of the $L^{2}$-error of the $N$-term shearlet approximation obtained by taking the $N$ largest coefficients from the shearlet expansion is in fact optimal up to a log-factor:[3][4]
$\|f-f_{N}\|_{L^{2}}^{2}\leq CN^{-2}(\log N)^{3},\quad N\to \infty ,$
where the constant $C$ depends only on the maximum curvature of the singularity curve and the maximum magnitudes of $f$, $f'$ and $f''$. This approximation rate significantly improves the best $N$-term approximation rate of wavelets, which provide only $O(N^{-1})$ for this class of functions.
Shearlets are to date the only directional representation system that provides sparse approximation of anisotropic features together with a unified treatment of the continuum and digital realm allowing faithful implementation. Extensions of shearlet systems to $L^{2}(\mathbb {R} ^{d}),d\geq 2$ are also available. A comprehensive presentation of the theory and applications of shearlets can be found in [5].
Definition
Continuous shearlet systems
Geometric effects of parabolic scaling and shearing with several parameters a and s.
The construction of continuous shearlet systems is based on parabolic scaling matrices
$A_{a}={\begin{bmatrix}a&0\\0&a^{1/2}\end{bmatrix}},\quad a>0$
as a means to change the resolution, on shear matrices
$S_{s}={\begin{bmatrix}1&s\\0&1\end{bmatrix}},\quad s\in \mathbb {R} $
as a means to change the orientation, and finally on translations to change the positioning. In comparison to curvelets, shearlets use shearings instead of rotations, the advantage being that the shear operator $S_{s}$ leaves the integer lattice invariant in case $s\in \mathbb {Z} $, i.e., $S_{s}\mathbb {Z} ^{2}\subseteq \mathbb {Z} ^{2}.$ This indeed allows a unified treatment of the continuum and digital realm, thereby guaranteeing a faithful digital implementation.
For $\psi \in L^{2}(\mathbb {R} ^{2})$ the continuous shearlet system generated by $\psi $ is then defined as
$\operatorname {SH} _{\mathrm {cont} }(\psi )=\{\psi _{a,s,t}=a^{3/4}\psi (S_{s}A_{a}(\cdot -t))\mid a>0,s\in \mathbb {R} ,t\in \mathbb {R} ^{2}\},$
and the corresponding continuous shearlet transform is given by the map
$f\mapsto {\mathcal {SH}}_{\psi }f(a,s,t)=\langle f,\psi _{a,s,t}\rangle ,\quad f\in L^{2}(\mathbb {R} ^{2}),\quad (a,s,t)\in \mathbb {R} _{>0}\times \mathbb {R} \times \mathbb {R} ^{2}.$
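The following sketch (an illustrative example assuming NumPy, not part of the original article; the generator ψ used here is just a smooth separable placeholder, not an admissible classical shearlet generator) evaluates a single atom $\psi _{a,s,t}$ on a grid using the parabolic scaling and shear matrices defined above:

import numpy as np

def A(a):   # parabolic scaling matrix A_a
    return np.array([[a, 0.0], [0.0, np.sqrt(a)]])

def S(s):   # shear matrix S_s
    return np.array([[1.0, s], [0.0, 1.0]])

def psi(y):   # placeholder generator (a wavelet-like bump); NOT a true classical shearlet
    return (1 - y[0] ** 2) * np.exp(-(y[0] ** 2 + y[1] ** 2) / 2)

def shearlet_atom(x, a, s, t):
    # psi_{a,s,t}(x) = a^(3/4) * psi(S_s A_a (x - t))
    y = S(s) @ A(a) @ (np.asarray(x, dtype=float) - np.asarray(t, dtype=float))
    return a ** 0.75 * psi(y)

a, s, t = 0.25, 0.5, (0.0, 0.0)
xs = np.linspace(-2.0, 2.0, 5)
grid = np.array([[shearlet_atom((x1, x2), a, s, t) for x1 in xs] for x2 in xs])
print(grid.round(3))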
Discrete shearlet systems
A discrete version of shearlet systems can be directly obtained from $\operatorname {SH} _{\mathrm {cont} }(\psi )$ by discretizing the parameter set $\mathbb {R} _{>0}\times \mathbb {R} \times \mathbb {R} ^{2}.$ There are numerous approaches for this but the most popular one is given by
$\{(2^{j},k,A_{2^{j}}^{-1}S_{k}^{-1}m)\mid j\in \mathbb {Z} ,k\in \mathbb {Z} ,m\in \mathbb {Z} ^{2}\}\subseteq \mathbb {R} _{>0}\times \mathbb {R} \times \mathbb {R} ^{2}.$
From this, the discrete shearlet system associated with the shearlet generator $\psi $ is defined by
$\operatorname {SH} (\psi )=\{\psi _{j,k,m}=2^{3j/4}\psi (S_{k}A_{2^{j}}\cdot {}-m)\mid j\in \mathbb {Z} ,k\in \mathbb {Z} ,m\in \mathbb {Z} ^{2}\},$
and the associated discrete shearlet transform is defined by
$f\mapsto {\mathcal {SH}}_{\psi }f(j,k,m)=\langle f,\psi _{j,k,m}\rangle ,\quad f\in L^{2}(\mathbb {R} ^{2}),\quad (j,k,m)\in \mathbb {Z} \times \mathbb {Z} \times \mathbb {Z} ^{2}.$
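The correspondence between a discrete index $(j,k,m)$ and the continuous parameters $(a,s,t)=(2^{j},k,A_{2^{j}}^{-1}S_{k}^{-1}m)$ can be verified numerically; a small sketch (assuming NumPy):

import numpy as np

def A(a):
    return np.array([[a, 0.0], [0.0, np.sqrt(a)]])

def S(s):
    return np.array([[1.0, s], [0.0, 1.0]])

# Pick a discrete index (j, k, m) and form the corresponding (a, s, t).
j, k, m = 2, 1, np.array([3.0, -1.0])
a, s = 2.0 ** j, float(k)
t = np.linalg.inv(A(a)) @ np.linalg.inv(S(s)) @ m

# Then S_s A_a (x - t) = S_k A_{2^j} x - m for every x, so the discrete atom
# psi_{j,k,m} coincides with the continuous atom psi_{a,s,t}; check at a random point.
x = np.random.default_rng(0).normal(size=2)
print(np.allclose(S(s) @ A(a) @ (x - t), S(k) @ A(2.0 ** j) @ x - m))   # True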
Examples
Trapezoidal frequency support of the classical shearlet.
Frequency tiling of the (discrete) classical shearlet system.
Let $\psi _{1}\in L^{2}(\mathbb {R} )$ be a function satisfying the discrete Calderón condition, i.e.,
$\sum _{j\in \mathbb {Z} }|{\hat {\psi }}_{1}(2^{-j}\xi )|^{2}=1,{\text{for a.e. }}\xi \in \mathbb {R} ,$
with ${\hat {\psi }}_{1}\in C^{\infty }(\mathbb {R} )$ and $\operatorname {supp} {\hat {\psi }}_{1}\subseteq [-{\tfrac {1}{2}},-{\tfrac {1}{16}}]\cup [{\tfrac {1}{16}},{\tfrac {1}{2}}],$ where ${\hat {\psi }}_{1}$ denotes the Fourier transform of $\psi _{1}.$ For instance, one can choose $\psi _{1}$ to be a Meyer wavelet. Furthermore, let $\psi _{2}\in L^{2}(\mathbb {R} )$ be such that ${\hat {\psi }}_{2}\in C^{\infty }(\mathbb {R} ),$ $\operatorname {supp} {\hat {\psi }}_{2}\subseteq [-1,1]$ and
$\sum _{k=-1}^{1}|{\hat {\psi }}_{2}(\xi +k)|^{2}=1,{\text{for a.e. }}\xi \in \left[-1,1\right].$
One typically chooses ${\hat {\psi }}_{2}$ to be a smooth bump function. Then $\psi \in L^{2}(\mathbb {R} ^{2})$ given by
${\hat {\psi }}(\xi )={\hat {\psi }}_{1}(\xi _{1}){\hat {\psi }}_{2}\left({\tfrac {\xi _{2}}{\xi _{1}}}\right),\quad \xi =(\xi _{1},\xi _{2})\in \mathbb {R} ^{2},$
is called a classical shearlet. It can be shown that the corresponding discrete shearlet system $\operatorname {SH} (\psi )$ constitutes a Parseval frame for $L^{2}(\mathbb {R} ^{2})$ consisting of bandlimited functions.[5]
Another example are compactly supported shearlet systems, where a compactly supported function $\psi \in L^{2}(\mathbb {R} ^{2})$ can be chosen so that $\operatorname {SH} (\psi )$ forms a frame for $L^{2}(\mathbb {R} ^{2})$.[4][6][7][8] In this case, all shearlet elements in $\operatorname {SH} (\psi )$ are compactly supported providing superior spatial localization compared to the classical shearlets, which are bandlimited. Although a compactly supported shearlet system does not generally form a Parseval frame, any function $f\in L^{2}(\mathbb {R} ^{2})$ can be represented by the shearlet expansion due to its frame property.
Cone-adapted shearlets
One drawback of shearlets defined as above is the directional bias of shearlet elements associated with large shearing parameters. This effect is already recognizable in the frequency tiling of classical shearlets (see the figure in the Examples section above), where the frequency support of a shearlet increasingly aligns along the $\xi _{2}$-axis as the shearing parameter $s$ goes to infinity. This causes serious problems when analyzing a function whose Fourier transform is concentrated around the $\xi _{2}$-axis.
To deal with this problem, the frequency domain is divided into a low-frequency part and two conic regions (see Figure):
${\begin{aligned}{\mathcal {R}}&=\left\{(\xi _{1},\xi _{2})\in \mathbb {R} ^{2}\mid |\xi _{1}|,|\xi _{2}|\leq 1\right\},\\{\mathcal {C}}_{\mathrm {h} }&=\left\{(\xi _{1},\xi _{2})\in \mathbb {R} ^{2}\mid |\xi _{2}/\xi _{1}|\leq 1,|\xi _{1}|>1\right\},\\{\mathcal {C}}_{\mathrm {v} }&=\left\{(\xi _{1},\xi _{2})\in \mathbb {R} ^{2}\mid |\xi _{1}/\xi _{2}|\leq 1,|\xi _{2}|>1\right\}.\end{aligned}}$
The associated cone-adapted discrete shearlet system consists of three parts, each one corresponding to one of these frequency domains. It is generated by three functions $\phi ,\psi ,{\tilde {\psi }}\in L^{2}(\mathbb {R} ^{2})$ and a lattice sampling factor $c=(c_{1},c_{2})\in (\mathbb {R} _{>0})^{2}:$
$\operatorname {SH} (\phi ,\psi ,{\tilde {\psi }};c)=\Phi (\phi ;c_{1})\cup \Psi (\psi ;c)\cup {\tilde {\Psi }}({\tilde {\psi }};c),$
where
${\begin{aligned}\Phi (\phi ;c_{1})&=\{\phi _{m}=\phi (\cdot {}-c_{1}m)\mid m\in \mathbb {Z} ^{2}\},\\\Psi (\psi ;c)&=\{\psi _{j,k,m}=2^{3j/4}\psi (S_{k}A_{2^{j}}\cdot {}-M_{c}m)\mid j\geq 0,|k|\leq \lceil 2^{j/2}\rceil ,m\in \mathbb {Z} ^{2}\},\\{\tilde {\Psi }}({\tilde {\psi }};c)&=\{{\tilde {\psi }}_{j,k,m}=2^{3j/4}{\tilde {\psi }}({\tilde {S}}_{k}{\tilde {A}}_{2^{j}}\cdot {}-{\tilde {M}}_{c}m)\mid j\geq 0,|k|\leq \lceil 2^{j/2}\rceil ,m\in \mathbb {Z} ^{2}\},\end{aligned}}$
with
${\begin{aligned}&{\tilde {A}}_{a}={\begin{bmatrix}a^{1/2}&0\\0&a\end{bmatrix}},\;a>0,\quad {\tilde {S}}_{s}={\begin{bmatrix}1&0\\s&1\end{bmatrix}},\;s\in \mathbb {R} ,\quad M_{c}={\begin{bmatrix}c_{1}&0\\0&c_{2}\end{bmatrix}},\quad {\text{and}}\quad {\tilde {M}}_{c}={\begin{bmatrix}c_{2}&0\\0&c_{1}\end{bmatrix}}.\end{aligned}}$
The systems $\Psi (\psi )$ and ${\tilde {\Psi }}({\tilde {\psi }})$ basically differ in the reversed roles of $x_{1}$ and $x_{2}$. Thus, they correspond to the conic regions ${\mathcal {C}}_{\mathrm {h} }$ and ${\mathcal {C}}_{\mathrm {v} }$, respectively. Finally, the scaling function $\phi $ is associated with the low-frequency part ${\mathcal {R}}$.
Applications
• Image processing and computer sciences[5]
• Denoising
• Inverse problems
• Image enhancement
• Edge detection
• Inpainting
• Image separation
• PDEs[5]
• Resolution of the wavefront set
• Transport equations
• Coorbit theory, characterization of smoothness spaces[5]
• Differential geometry: manifold learning
Generalizations and extensions
• 3D-Shearlets [7][9]
• $\alpha $-Shearlets [7]
• Parabolic molecules [10]
• Cylindrical Shearlets [11][12]
See also
• Wavelet transform
• Curvelet transform
• Contourlet transform
• Bandelet transform
• Chirplet transform
• Noiselet transform
References
1. Guo, Kanghui; Kutyniok, Gitta; Labate, Demetrio (2006). "Sparse multidimensional representations using anisotropic dilation and shear operators". In Chen, G.; Lai, M. J. (eds.). Wavelets and Splines (Athens, GA, 2005). Nashville, TN: Nashboro Press. pp. 189–201.
2. Donoho, David L. (2001). "Sparse components of images and optimal atomic decompositions". Constructive Approximation. 17 (3): 353–382. CiteSeerX 10.1.1.379.8993.
3. Guo, Kanghui; Labate, Demetrio (2007). "Optimally sparse multidimensional representation using shearlets". SIAM Journal on Mathematical Analysis. 39 (1): 298–318.
4. Kutyniok, Gitta; Lim, Wang-Q (2011). "Compactly supported shearlets are optimally sparse". Journal of Approximation Theory. 163 (11): 1564–1589.
5. Kutyniok, Gitta; Labate, Demetrio, eds. (2012). Shearlets: Multiscale Analysis for Multivariate Data. Springer. ISBN 0-8176-8315-1.
6. Kittipoom, Pisamai; Kutyniok, Gitta; Lim, Wang-Q (2012). "Construction of compactly supported shearlet frames". Constructive Approximation. 35 (1): 21–72. arXiv:1003.5481.
7. Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q (2012). "Optimally sparse approximations of 3D functions by compactly supported shearlet frames". SIAM Journal on Mathematical Analysis. 44 (4): 2962–3017. arXiv:1109.5993.
8. Banerjee, Purnendu; Chaudhuri, B. B. (2014). "Video text localization using wavelet and shearlet transforms". In Coüasnon, Bertrand; Ringger, Eric K. (eds.). Document Recognition and Retrieval XXI. Proc. SPIE. Vol. 9021. p. 90210B. arXiv:1307.4990. doi:10.1117/12.2036077. S2CID 10659099.
9. Guo, Kanghui; Labate, Demetrio (2013). "The construction of smooth Parseval frames of shearlets". Mathematical Modelling of Natural Phenomena. 8 (1): 82–105.
10. Grohs, Philipp; Kutyniok, Gitta (2012). "Parabolic molecules". arXiv:1206.1958 [math.FA]. To appear in Foundations of Computational Mathematics.
11. Easley, Glenn R.; Guo, Kanghui; Labate, Demetrio; Pahari, Basanta R. (2020-08-10). "Optimally Sparse Representations of Cartoon-Like Cylindrical Data". The Journal of Geometric Analysis. 39 (9): 8926–8946. doi:10.1007/s12220-020-00493-0. S2CID 221675372. Retrieved 2022-01-22.
12. Bernhard, Bernhard G.; Labate, Demetrio; Pahari, Basanta R. (2019-10-29). "Smooth projections and the construction of smooth Parseval frames of shearlets". Advances in Computational Mathematics. 45 (5–6): 3241–3264. doi:10.1007/s10444-019-09736-3. S2CID 210118010. Retrieved 2022-01-22.
External links
• Homepage of Gitta Kutyniok
• Homepage of Demetrio Labate
Presheaf with transfers
In algebraic geometry, a presheaf with transfers is, roughly, a presheaf that, like a cohomology theory, comes with pushforwards ("transfer" maps). Precisely, it is, by definition, a contravariant additive functor from the category of finite correspondences (defined below) to the category of abelian groups (in category theory, "presheaf" is another term for a contravariant functor).
When a presheaf F with transfers is restricted to the subcategory of smooth separated schemes, it can be viewed as a presheaf on that category equipped with extra maps $F(Y)\to F(X)$ coming not only from morphisms of schemes but also from finite correspondences from X to Y.
A presheaf F with transfers is said to be $\mathbb {A} ^{1}$-homotopy invariant if $F(X)\simeq F(X\times \mathbb {A} ^{1})$ for every X.
For example, Chow groups as well as motivic cohomology groups form presheaves with transfers.
Finite correspondence
See also: Correspondence (algebraic geometry)
Let $X,Y$ be algebraic schemes (i.e., separated and of finite type over a field) and suppose $X$ is smooth. Then an elementary correspondence is an irreducible closed subscheme $W\subset X_{i}\times Y$, $X_{i}$ some connected component of X, such that the projection $\operatorname {Supp} (W)\to X_{i}$ is finite and surjective.[1] Let $\operatorname {Cor} (X,Y)$ be the free abelian group generated by elementary correspondences from X to Y; elements of $\operatorname {Cor} (X,Y)$ are then called finite correspondences.
The category of finite correspondences, denoted by $Cor$, is the category where the objects are smooth algebraic schemes over a field; where a Hom set is given as: $\operatorname {Hom} (X,Y)=\operatorname {Cor} (X,Y)$ and where the composition is defined as in intersection theory: given elementary correspondences $\alpha $ from $X$ to $Y$ and $\beta $ from $Y$ to $Z$, their composition is:
$\beta \circ \alpha =p_{{13},*}(p_{12}^{*}\alpha \cdot p_{23}^{*}\beta )$
where $\cdot $ denotes the intersection product and $p_{12}:X\times Y\times Z\to X\times Y$, etc. Note that the category $Cor$ is an additive category since each Hom set $\operatorname {Cor} (X,Y)$ is an abelian group.
This category contains the category ${\textbf {Sm}}$ of smooth algebraic schemes as a subcategory in the following sense: there is a faithful functor ${\textbf {Sm}}\to Cor$ that sends an object to itself and a morphism $f:X\to Y$ to the graph of $f$.
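For instance (a standard check, spelled out here for concreteness): if $f:X\to Y$ and $g:Y\to Z$ are morphisms of smooth schemes with graphs $\Gamma _{f}\subset X\times Y$ and $\Gamma _{g}\subset Y\times Z$, then $p_{12}^{-1}(\Gamma _{f})\cap p_{23}^{-1}(\Gamma _{g})=\{(x,f(x),g(f(x)))\}\cong X$ maps isomorphically under $p_{13}$ onto the graph of $g\circ f$, so
$[\Gamma _{g}]\circ [\Gamma _{f}]=p_{13,*}(p_{12}^{*}[\Gamma _{f}]\cdot p_{23}^{*}[\Gamma _{g}])=[\Gamma _{g\circ f}],$
which is why the assignment $f\mapsto \Gamma _{f}$ is compatible with composition.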
With the product of schemes taken as the monoid operation, the category $Cor$ is a symmetric monoidal category.
Sheaves with transfers
The basic notion underlying all of the different theories is that of a presheaf with transfers. These are contravariant additive functors
$F:{\text{Cor}}_{k}\to {\text{Ab}}$
and their associated category is typically denoted $\mathbf {PST} (k)$, or just $\mathbf {PST} $ if the underlying field is understood. Each of the categories in this section is an abelian category, hence suitable for doing homological algebra.
Étale sheaves with transfers
These are defined as presheaves with transfers whose restriction to any scheme $X$ is an étale sheaf. That is, a presheaf with transfers $F$ is an étale sheaf with transfers if for every étale cover $U\to X$ the sequence
$0\to F(X){\xrightarrow {\text{diag}}}F(U){\xrightarrow {(+,-)}}F(U\times _{X}U)$
is exact and there is an isomorphism
$F(X\coprod Y)=F(X)\oplus F(Y)$
for any fixed smooth schemes $X,Y$.
Nisnevich sheaves with transfers
There is a similar definition for Nisnevich sheaves with transfers, where the étale topology is replaced by the Nisnevich topology.
Examples
Units
The sheaf of units ${\mathcal {O}}^{*}$ is a presheaf with transfers. Any correspondence $W\subset X\times Y$ induces a finite map of degree $N$ over $X$, hence there is the induced morphism
${\mathcal {O}}^{*}(Y)\to {\mathcal {O}}^{*}(W){\xrightarrow {N}}{\mathcal {O}}^{*}(X)$[2]
showing it is a presheaf with transfers.
Representable functors
One of the basic examples of presheaves with transfers is given by representable functors. Given a smooth scheme $X$ there is a presheaf with transfers $\mathbb {Z} _{tr}(X)$ sending $U\mapsto {\text{Hom}}_{Cor}(U,X)$.[2]
Representable functor associated to a point
The associated presheaf with transfers of ${\text{Spec}}(k)$ is denoted $\mathbb {Z} $.
Pointed schemes
Another class of elementary examples comes from pointed schemes $(X,x)$ with $x:{\text{Spec}}(k)\to X$. This morphism induces a morphism $x_{*}:\mathbb {Z} \to \mathbb {Z} _{tr}(X)$ whose cokernel is denoted $\mathbb {Z} _{tr}(X,x)$. There is a splitting coming from the structure morphism $X\to {\text{Spec}}(k)$, so there is an induced map $\mathbb {Z} _{tr}(X)\to \mathbb {Z} $, hence $\mathbb {Z} _{tr}(X)\cong \mathbb {Z} \oplus \mathbb {Z} _{tr}(X,x)$.
Representable functor associated to A1-0
There is a representable functor associated to the pointed scheme $\mathbb {G} _{m}=(\mathbb {A} ^{1}-\{0\},1)$ denoted $\mathbb {Z} _{tr}(\mathbb {G} _{m})$.
Smash product of pointed schemes
Given a finite family of pointed schemes $(X_{i},x_{i})$ there is an associated presheaf with transfers $\mathbb {Z} _{tr}((X_{1},x_{1})\wedge \cdots \wedge (X_{n},x_{n}))$, also denoted $\mathbb {Z} _{tr}(X_{1}\wedge \cdots \wedge X_{n})$[2] from their Smash product. This is defined as the cokernel of
${\text{coker}}\left(\bigoplus _{i}\mathbb {Z} _{tr}(X_{1}\times \cdots \times {\hat {X}}_{i}\times \cdots \times X_{n}){\xrightarrow {id\times \cdots \times x_{i}\times \cdots \times id}}\mathbb {Z} _{tr}(X_{1}\times \cdots \times X_{n})\right)$
For example, given two pointed schemes $(X,x),(Y,y)$, there is the associated presheaf with transfers $\mathbb {Z} _{tr}(X\wedge Y)$ equal to the cokernel of
$\mathbb {Z} _{tr}(X)\oplus \mathbb {Z} _{tr}(Y){\xrightarrow {\begin{bmatrix}1\times y&x\times 1\end{bmatrix}}}\mathbb {Z} _{tr}(X\times Y)$[3]
This is analogous to the smash product in topology since $X\wedge Y=(X\times Y)/(X\vee Y)$ where the equivalence relation mods out $X\times \{y\}\cup \{x\}\times Y$.
Smash powers of a single space
The $q$-fold smash power of a pointed scheme $(X,x)$ gives a presheaf with transfers denoted $\mathbb {Z} _{tr}(X^{\wedge q})=\mathbb {Z} _{tr}(X\wedge \cdots \wedge X)$. One example of this construction is $\mathbb {Z} _{tr}(\mathbb {G} _{m}^{\wedge q})$, which is used in the definition of the motivic complexes $\mathbb {Z} (q)$ used in motivic cohomology.
Homotopy invariant sheaves
A presheaf with transfers $F$ is homotopy invariant if the projection morphism $p:X\times \mathbb {A} ^{1}\to X$ induces an isomorphism $p^{*}:F(X)\to F(X\times \mathbb {A} ^{1})$ for every smooth scheme $X$. There is a construction associating a homotopy invariant presheaf with transfers[2] to every presheaf with transfers $F$, using an analogue of simplicial homology.
Simplicial homology
There is a scheme
$\Delta ^{n}={\text{Spec}}\left({\frac {k[x_{0},\ldots ,x_{n}]}{\sum _{0\leq i\leq n}x_{i}-1}}\right)$
giving a cosimplicial scheme $\Delta ^{*}$, where the morphisms $\partial _{j}:\Delta ^{n}\to \Delta ^{n+1}$ are given by $x_{j}=0$. That is,
${\frac {k[x_{0},\ldots ,x_{n+1}]}{(\sum _{0\leq i\leq n+1}x_{i}-1)}}\to {\frac {k[x_{0},\ldots ,x_{n+1}]}{(\sum _{0\leq i\leq n+1}x_{i}-1,\,x_{j})}}$
gives the induced morphism $\partial _{j}$. Then, to a presheaf with transfers $F$, there is an associated complex of presheaves with transfers $C_{*}F$ sending
$C_{i}F:U\mapsto F(U\times \Delta ^{i})$
and has the induced chain morphisms
$\sum _{i=0}^{j}(-1)^{i}\partial _{i}^{*}:C_{j}F\to C_{j-1}F$
giving a complex of presheaves with transfers. The homology presheaves with transfers $H_{i}(C_{*}F)$ are homotopy invariant. In particular, $H_{0}(C_{*}F)$ is the universal homotopy invariant presheaf with transfers associated to $F$.
Relation with Chow group of zero cycles
Denote $H_{0}^{sing}(X/k):=H_{0}(C_{*}\mathbb {Z} _{tr}(X))({\text{Spec}}(k))$. There is an induced surjection $H_{0}^{sing}(X/k)\to {\text{CH}}_{0}(X)$ which is an isomorphism for $X$ projective.
Zeroth homology of Ztr(X)
The zeroth homology $H_{0}(C_{*}\mathbb {Z} _{tr}(Y))(X)$ is ${\text{Hom}}_{Cor}(X,Y)/\mathbb {A} ^{1}{\text{ homotopy}}$, where the homotopy equivalence is given as follows. Two finite correspondences $f,g:X\to Y$ are $\mathbb {A} ^{1}$-homotopy equivalent if there is a finite correspondence $h:X\times \mathbb {A} ^{1}\to Y$ such that $h|_{X\times 0}=f$ and $h|_{X\times 1}=g$.
Motivic complexes
For Voevodsky's category of mixed motives, the motive $M(X)$ associated to $X$ is the class of $C_{*}\mathbb {Z} _{tr}(X)$ in $DM_{Nis}^{eff,-}(k,R)$. One of the elementary motivic complexes is $\mathbb {Z} (q)$ for $q\geq 1$, defined by the class of
$\mathbb {Z} (q)=C_{*}\mathbb {Z} _{tr}(\mathbb {G} _{m}^{\wedge q})[-q]$[2]
For an abelian group $A$, such as $\mathbb {Z} /\ell $, there is a motivic complex $A(q)=\mathbb {Z} (q)\otimes A$. These give the motivic cohomology groups defined by
$H^{p,q}(X,\mathbb {Z} )=\mathbb {H} _{Zar}^{p}(X,\mathbb {Z} (q))$
since the motivic complexes $\mathbb {Z} (q)$ restrict to a complex of Zariski sheaves on $X$.[2] These are called the $p$-th motivic cohomology groups of weight $q$. They can also be extended to any abelian group $A$,
$H^{p,q}(X,A)=\mathbb {H} _{Zar}^{p}(X,A(q))$
giving motivic cohomology with coefficients in $A$ of weight $q$.
Special cases
There are a few special cases which can be analyzed explicitly. Namely, when $q=0,1$. These results can be found in the fourth lecture of the Clay Math book.
Z(0)
In this case, $\mathbb {Z} (0)\cong \mathbb {Z} _{tr}(\mathbb {G} _{m}^{\wedge 0})$ which is quasi-isomorphic to $\mathbb {Z} $ (top of page 17),[2] hence the weight $0$ cohomology groups are isomorphic to
$H^{p,0}(X,\mathbb {Z} )={\begin{cases}\mathbb {Z} (X)&{\text{if }}p=0\\0&{\text{otherwise}}\end{cases}}$
where $\mathbb {Z} (X)={\text{Hom}}_{Cor}(X,{\text{Spec}}(k))$.
Z(1)
This case requires more work, but the end result is a quasi-isomorphism between $\mathbb {Z} (1)$ and ${\mathcal {O}}^{*}[-1]$. This gives the two motivic cohomology groups
${\begin{aligned}H^{1,1}(X,\mathbb {Z} )&=H_{Zar}^{0}(X,{\mathcal {O}}^{*})={\mathcal {O}}^{*}(X)\\H^{2,1}(X,\mathbb {Z} )&=H_{Zar}^{1}(X,{\mathcal {O}}^{*})={\text{Pic}}(X)\end{aligned}}$
where the middle cohomology groups are Zariski cohomology.
General case: Z(n)
In general, over a perfect field $k$, there is a nice description of $\mathbb {Z} (n)$ in terms of presheaves with transfers $\mathbb {Z} _{tr}(\mathbb {P} ^{n})$. There is a quasi-isomorphism
$C_{*}(\mathbb {Z} _{tr}(\mathbb {P} ^{n})/\mathbb {Z} _{tr}(\mathbb {P} ^{n-1}))\simeq C_{*}\mathbb {Z} _{tr}(\mathbb {G} _{m}^{\wedge n})[n]$
hence
$\mathbb {Z} (n)\simeq C_{*}(\mathbb {Z} _{tr}(\mathbb {P} ^{n})/\mathbb {Z} _{tr}(\mathbb {P} ^{n-1}))[-2n]$
which is found using splitting techniques along with a series of quasi-isomorphisms. The details are in lecture 15 of the Clay Math book.
See also
• Relative cycle
• Motivic cohomology
• Mixed motives (math)
• Étale topology
• Nisnevich topology
References
1. Mazza, Voevodsky & Weibel 2006, Definition 1.1.
2. Lecture Notes on Motivic Cohomology (PDF). Clay Math. pp. 13, 15–16, 17, 21, 22.
3. Note $X\cong X\times \{y\}$ giving $\mathbb {Z} _{tr}(X\times \{y\})\cong \mathbb {Z} _{tr}(X)$
• Mazza, Carlo; Voevodsky, Vladimir; Weibel, Charles (2006), Lecture notes on motivic cohomology, Clay Mathematics Monographs, vol. 2, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3847-1, MR 2242284
External links
• https://ncatlab.org/nlab/show/sheaf+with+transfer
Sheila Oates Williams
Sheila Oates Williams (born 1939[1], also published as Sheila Oates and Sheila Oates Macdonald)[2] is a British and Australian mathematician specializing in abstract algebra. She is the namesake of the Oates–Powell theorem in group theory, and a winner of the B. H. Neumann Award.
Education and career
Sheila Oates was originally from Cornwall, where her father was a primary school headmaster in Tintagel. She was educated at Sir James Smith's Grammar School, and was inspired to become a mathematician by a teacher there, Alfred Hooper. She read mathematics at St Hugh's College, Oxford, with Ida Busbridge as her tutor, and continued at Oxford as a doctoral student of Graham Higman.[3] She completed her doctorate (D.Phil.) in 1963.[4]
She became a lecturer and fellow at St Hilda's College, Oxford, before moving to Australia in 1965. In 1966, she took a position as senior lecturer at the University of Newcastle and later moved again to the University of Queensland, as reader. She retired in 1997.[3]
Contributions
As a student at Oxford, together with Martin B. Powell, another student of Higman,[3] she proved the Oates–Powell theorem. This is an analogue for group theory of Hilbert's basis theorem,[5] and states that every finite group has a finite system of axioms from which all equations that are true of the group can be derived. That is, every finite group is finitely based.[6][7]
As well as for her research, Williams is known for her work setting Australian mathematics competitions, including the International Mathematical Olympiad in 1988 and the Australian Mathematics Competition. She also participated several times in the Australian edition of the Mastermind television quiz show.[3]
Recognition
Williams was a 2002 recipient of the B. H. Neumann Award for Excellence in Mathematics Enrichment of the Australian Maths Trust.[3][8]
References
1. Birth year from Library of Congress catalog entry, retrieved 2021-05-29
2. Neumann, Bernhard (1999), "Professor Cheryl Praeger, mathematician", Interviews with Australian scientists, Australian Academy of Science, retrieved 2021-05-29, Sheila started as Sheila Oates, became Sheila Macdonald and now is Sheila Williams, which is a very good case against a woman changing her professional name on marriage (her first marriage was to Neil Macdonald)
3. Taylor, Peter, 2002 B. H. Neumann Award Recipients, Australian Maths Trust, retrieved 2021-05-29
4. Sheila Oates Williams at the Mathematics Genealogy Project
5. Chandler, Bruce; Magnus, Wilhelm (1982), "Varieties of groups", The History of Combinatorial Group Theory, Studies in the History of Mathematics and Physical Sciences, vol. 9, Springer, pp. 157–161, doi:10.1007/978-1-4613-9487-7_19
6. Neumann, Hanna (1967), "5.2 The theorem of Oates and Powell", Varieties of Groups, Springer, pp. 151–161, doi:10.1007/978-3-642-88599-0, MR 0215899
7. Oates, Sheila; Powell, M. B. (1964), "Identical relations in finite groups", Journal of Algebra, 1: 11–39, doi:10.1016/0021-8693(64)90004-3, MR 0161904
8. "B. H. Neumann Awards Given" (PDF), Mathematics People, Notices of the American Mathematical Society, 49 (10): 1270, November 2002
Shekel function
The Shekel function is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for optimization techniques.
The mathematical form of a function in $n$ dimensions with $m$ maxima is:
$f({\vec {x}})=\sum _{i=1}^{m}\;\left(c_{i}+\sum \limits _{j=1}^{n}(x_{j}-a_{ji})^{2}\right)^{-1}$
or, similarly,
$f(x_{1},x_{2},...,x_{n-1},x_{n})=\sum _{i=1}^{m}\;\left(c_{i}+\sum \limits _{j=1}^{n}(x_{j}-a_{ij})^{2}\right)^{-1}$
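A direct implementation of the second form above (a plain-Python sketch; the matrix a and vector c below are arbitrary illustrative values, not the standard Shekel-5/7/10 constants):

def shekel(x, a, c):
    # f(x) = sum_i 1 / (c_i + sum_j (x_j - a_ij)^2), with a indexed as a[i][j].
    total = 0.0
    for a_i, c_i in zip(a, c):
        total += 1.0 / (c_i + sum((x_j - a_ij) ** 2 for x_j, a_ij in zip(x, a_i)))
    return total

# Two maxima in two dimensions (illustrative values only).
a = [[1.0, 1.0],
     [4.0, 4.0]]
c = [0.1, 0.2]
print(shekel([1.0, 1.0], a, c))   # ~10.05, near the first maximum (roughly 1/c_1)
print(shekel([2.5, 2.5], a, c))   # ~0.43, in between the two maxima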
Global minima
Numerically certified global minima and the corresponding solutions were obtained using interval methods for up to $n=10$.[1]
References
Shekel, J. 1971. "Test Functions for Multimodal Search Techniques." Fifth Annual Princeton Conference on Information Science and Systems.
1. Vanaret C. (2015) Hybridization of interval methods and evolutionary algorithms for solving difficult optimization problems. PhD thesis. Ecole Nationale de l'Aviation Civile. Institut National Polytechnique de Toulouse, France.
See also
• Test functions for optimization
Shelah cardinal
In axiomatic set theory, Shelah cardinals are a kind of large cardinal. A cardinal $\kappa $ is called Shelah iff for every $f:\kappa \rightarrow \kappa $ there exist a transitive class $N$ and an elementary embedding $j:V\rightarrow N$ with critical point $\kappa $ such that $V_{j(f)(\kappa )}\subset N$.
A Shelah cardinal has a normal ultrafilter containing the set of weakly hyper-Woodin cardinals below it.
References
• Ernest Schimmerling, Woodin cardinals, Shelah cardinals and the Mitchell-Steel core model, Proceedings of the American Mathematical Society 130/11, pp. 3385-3391, 2002, online
Sheldon Katz
Sheldon H. Katz (19 December 1956, Brooklyn) is an American mathematician, specializing in algebraic geometry and its applications to string theory.[1]
Background and career
In 1973 Katz won first prize in the U.S.A. Mathematical Olympiad. He received in 1976 his bachelor's degree from MIT and in 1980 his Ph.D. from Princeton University under Robert C. Gunning with thesis Deformations of Linear Systems, Divisors and Weierstrass Points on Curves.[2] At the University of Utah, he was an instructor from 1980 to 1984. At the University of Oklahoma he was an assistant professor from 1984 to 1987. At Oklahoma State University, he became in 1987 an assistant professor, in 1989 an associate professor, in 1994 a full professor, in 1997 Southwestern Bell Professor, and in 1999 Regents Professor. Since 2001 he has been a professor at the University of Illinois, Urbana-Champaign, where he was chair of the department in 2006–2011.
For the academic year 1982/83 he was a visiting scholar at the Institute for Advanced Study.[3] He was a visiting professor at the Mittag-Leffler Institute (1997), at Duke University (1991/92) and at the University of Bayreuth (1989).
His research on algebraic geometry and its applications to string theory (including mirror symmetry) and supersymmetry has been published in prestigious journals in mathematics and physics.
In 2013 he was elected a Fellow of the American Mathematical Society.
Selected publications
Articles
• with Bruce Crauder: Crauder, Bruce; Katz, Sheldon (1989). "Cremona transformations with smooth irreducible fundamental locus". American Journal of Mathematics. 111 (2): 289–307. doi:10.2307/2374511. JSTOR 2374511.
• with Alberto Albano: Albano, Alberto; Katz, Sheldon (1991). "Lines on the Fermat quintic threefold and the infinitesimal generalized Hodge conjecture". Trans. Amer. Math. Soc. 324 (1): 353–368. doi:10.1090/s0002-9947-1991-1024767-6. MR 1024767.
• with David R. Morrison and M. Ronen Plesser: Katz, Sheldon; Morrison, David R.; Plesser, M. Ronen (1996). "Enhanced gauge symmetry in type II string theory". Nuclear Physics B. 477 (1): 105–140. arXiv:hep-th/9601108. Bibcode:1996NuPhB.477..105K. doi:10.1016/0550-3213(96)00331-8. S2CID 14482596.
• with Eric Sharpe: Katz, Sheldon; Sharpe, Eric (2003). "D-branes, open string vertex operators, and Ext groups". Adv. Theor. Math. Phys. 6 (6): 979–1030. arXiv:hep-th/0208104. Bibcode:2002hep.th....8104K. doi:10.4310/ATMP.2002.v6.n6.a1. S2CID 14199444.
• with Tony Pantev and E. Sharpe: Katz, Sheldon; Pantev, Tony; Sharpe, Eric (2003). "D-branes, orbifolds, and Ext groups". Nuclear Physics B. 673 (1): 263–300. arXiv:hep-th/0212218. Bibcode:2003NuPhB.673..263K. doi:10.1016/j.nuclphysb.2003.09.022. S2CID 17710799.
• with Andrei Caldararu and E. Sharpe: Caldararu, Andrei; Katz, Sheldon; Sharpe, Eric (2004). "D-branes, B fields, and Ext groups". Advances in Theoretical and Mathematical Physics. 7 (3): 381–404. arXiv:hep-th/0302099. doi:10.4310/atmp.2003.v7.n3.a1. S2CID 7443345.
• with Ron Donagi and E. Sharpe: Donagi, Ron; Katz, Sheldon; Sharpe, Eric (2005). "Spectra of D-branes with Higgs vevs". Advances in Theoretical and Mathematical Physics. 8 (5): 813–859. arXiv:hep-th/0309270. doi:10.4310/atmp.2004.v8.n5.a3. S2CID 229996.
• with D. Morrison, Sakura Schäfer-Nameki, and James Sully: Katz, Sheldon; Morrison, David R.; Schäfer-Nameki, Sakura; Sully, James (2011). "Tate's algorithm and F-theory". JHEP. 94 (8): 1108. arXiv:1106.3854. Bibcode:2011JHEP...08..094K. doi:10.1007/JHEP08(2011)094. S2CID 119184488.
• with Jinwon Choi and Albrecht Klemm: Choi, Jinwon; Katz, Sheldon; Klemm, Albrecht (2014). "The refined BPS index from stable pair invariants". Communications in Mathematical Physics. 328 (3): 903–954. arXiv:1210.4403. Bibcode:2014CMaPh.328..903C. doi:10.1007/s00220-014-1978-0. S2CID 119708881.
Books
• with Rahul Pandharipande, Cumrun Vafa, Ravi Vakil, Eric Zaslow, Kentaro Hori, Albrecht Klemm, Richard Thomas: Mirror Symmetry, Clay Mathematics Monographs, vol. 1, 2003
• with David A. Cox: Mirror Symmetry and Algebraic Geometry. AMS. 1999. ISBN 9780821821275.[4]
• Enumerative Geometry and String Theory. Student Mathematical Library. AMS. 2006. ISBN 9780821836873.[5][6]
References
1. homepage of Sheldon Katz at the University of Illinois at Urbana-Champaign
2. Sheldon Katz at the Mathematics Genealogy Project
3. Katz, Sheldon H. | Institute for Advanced Study
4. Batyrev, V. (2000). "Review: Mirror symmetry and algebraic geometry by David A. Cox and Sheldon Katz" (PDF). Bull. Amer. Math. Soc. (N.S.). 37 (4): 473–476. doi:10.1090/s0273-0979-00-00875-2.
5. William J. Satzer (11 July 2006). "Review: Enumerative Geometry and String Theory by Sheldon Katz". Mathematical Association of America.
6. "Review: Enumerative Geometry and String Theory". European Mathematical Society. 23 October 2011.
Sheldon M. Ross
Sheldon M. Ross is the Daniel J. Epstein Chair and Professor at the USC Viterbi School of Engineering. He is the author of several books in the field of probability.[1]
Biography
Ross received his B.S. degree in mathematics from Brooklyn College in 1963, his M.S. degree in mathematics from Purdue University in 1964, and his Ph.D. degree in statistics from Stanford University in 1968, studying under Gerald Lieberman and Cyrus Derman. He served as a professor at the University of California, Berkeley from 1976 until joining the USC Viterbi School of Engineering in 2004. He serves as an editor for several journals, including Probability in the Engineering and Informational Sciences. In 2013 he became a fellow of the Institute for Operations Research and the Management Sciences.
In 1978, he formulated what became known as Ross's conjecture in queuing theory,[2] which was solved three years later by Tomasz Rolski at Poland's Wroclaw University.[3]
Selected publications
• Ross S. M. (1970) Applied Probability Models with Optimization Applications. Holden-Day: San Francisco, CA.
• Ross S. M. (1972) Introduction to Probability Models. Academic Press: Waltham, MA.
• Ross S. M. (1976) A First Course in Probability. MacMillan Publishing Company: London.
• Ross S. M. (1982) Stochastic Processes. John Wiley & Sons: New York.
• Ross S. M. (1983) Introduction to Stochastic Dynamic Programming. Academic Press: Waltham, MA.
• Ross S. M. (1995) Introductory Statistics. Academic Press: Waltham, MA.
• Ross S. M. (1996) Simulation. Academic Press: Waltham, MA.
• Derman C. & Ross S. M. (1997) Statistical Aspects of Quality Control. Academic Press: Waltham, MA.
• Ross S. M. (1999) An Elementary Introduction to Mathematical Finance: Options and Other Topics. Cambridge University Press: Cambridge.
• Ross S. M. (2000) Topics in Finite and Discrete Mathematics. Cambridge University Press: Cambridge.
• Ross S. M. (2001) Probability Models for Computer Science. Academic Press: Waltham, MA.
References
1. INFORMS. "Ross, Sheldon M." INFORMS. Retrieved 2021-04-12.
2. Ross, Sheldon M. (September 1978). "Average delay in queues with non-stationary Poisson arrivals". Journal of Applied Probability. 15 (3): 602–609. doi:10.2307/3213122. ISSN 0021-9002. JSTOR 3213122. S2CID 122948002.
3. Rolski, Tomasz (September 1981). "Queues with non-stationary input stream: Ross's conjecture". Advances in Applied Probability. 13 (3): 603–618. doi:10.2307/1426787. ISSN 0001-8678. JSTOR 1426787. S2CID 124842629.
External links
• Sheldon M. Ross publications indexed by Google Scholar
Shellsort
Shellsort, also known as Shell sort or Shell's method, is an in-place comparison sort. It can be seen as either a generalization of sorting by exchange (bubble sort) or sorting by insertion (insertion sort).[3] The method starts by sorting pairs of elements far apart from each other, then progressively reducing the gap between elements to be compared. By starting with far apart elements, it can move some out-of-place elements into position faster than a simple nearest neighbor exchange. Donald Shell published the first version of this sort in 1959.[4][5] The running time of Shellsort is heavily dependent on the gap sequence it uses. For many practical variants, determining their time complexity remains an open problem.
Shellsort
[Animation: Shellsort with gaps 23, 10, 4, 1 in action]
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) (worst known worst-case gap sequence); O(n log² n) (best known worst-case gap sequence)[1]
Best-case performance: O(n log n) (most gap sequences); O(n log² n) (best known worst-case gap sequence)[2]
Average performance: depends on gap sequence
Worst-case space complexity: O(n) total, O(1) auxiliary
Description
Shellsort is an optimization of insertion sort that allows the exchange of items that are far apart. The idea is to arrange the list of elements so that, starting anywhere, taking every hth element produces a sorted list. Such a list is said to be h-sorted. It can also be thought of as h interleaved lists, each individually sorted.[6] Beginning with large values of h allows elements to move long distances in the original list, reducing large amounts of disorder quickly, and leaving less work for smaller h-sort steps to do.[7] If the list is then k-sorted for some smaller integer k, then the list remains h-sorted. Following this idea for a decreasing sequence of h values ending in 1 is guaranteed to leave a sorted list in the end.[6]
In simplistic terms, this means if we have an array of 1024 numbers, our first gap (h) could be 512. We then run through the list comparing each element in the first half to the element in the second half. Our second gap (k) is 256, which breaks the array into four sections (starting at 0, 256, 512, 768), and we make sure the first items in each section are sorted relative to each other, then the second item in each section, and so on. In practice the gap sequence could be anything, but the last gap is always 1 to finish the sort (effectively finishing with an ordinary insertion sort).
An example run of Shellsort with gaps 5, 3 and 1 is shown below.
                 a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12
Input data:      62 83 18 53 07 17 95 86 47 69  25  28
After 5-sorting: 17 28 18 47 07 25 83 86 53 69  62  95
After 3-sorting: 17 07 18 47 28 25 69 62 53 83  86  95
After 1-sorting: 07 17 18 25 28 47 53 62 69 83  86  95
The first pass, 5-sorting, performs insertion sort on five separate subarrays (a1, a6, a11), (a2, a7, a12), (a3, a8), (a4, a9), (a5, a10). For instance, it changes the subarray (a1, a6, a11) from (62, 17, 25) to (17, 25, 62). The next pass, 3-sorting, performs insertion sort on the three subarrays (a1, a4, a7, a10), (a2, a5, a8, a11), (a3, a6, a9, a12). The last pass, 1-sorting, is an ordinary insertion sort of the entire array (a1,..., a12).
As the example illustrates, the subarrays that Shellsort operates on are initially short; later they are longer but almost ordered. In both cases insertion sort works efficiently.
Unlike insertion sort, Shellsort is not a stable sort since gapped insertions transport equal elements past one another and thus lose their original order. It is an adaptive sorting algorithm in that it executes faster when the input is partially sorted.
Pseudocode
Using Marcin Ciura's gap sequence, with an inner insertion sort.
# Sort an array a[0...n-1].
gaps = [701, 301, 132, 57, 23, 10, 4, 1]  # Ciura gap sequence

# Start with the largest gap and work down to a gap of 1,
# similar to insertion sort but using gap instead of 1 in each step.
foreach (gap in gaps)
{
    # Do a gapped insertion sort for this gap size.
    # The first gap elements a[0..gap-1] are trivially in gapped order;
    # each pass of the loop below extends the gap-sorted prefix by one element.
    for (i = gap; i < n; i += 1)
    {
        # save a[i] in temp and make a hole at position i
        temp = a[i]
        # shift earlier gap-sorted elements up until the correct location for a[i] is found
        for (j = i; (j >= gap) && (a[j - gap] > temp); j -= gap)
        {
            a[j] = a[j - gap]
        }
        # put temp (the original a[i]) in its correct location
        a[j] = temp
    }
}
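A direct, runnable translation of the pseudocode into Python (an illustrative sketch, not part of the original article):

def shellsort(a):
    # Sort the list a in place using Shellsort with the Ciura gap sequence.
    gaps = [701, 301, 132, 57, 23, 10, 4, 1]
    n = len(a)
    for gap in gaps:
        # Gapped insertion sort for this gap size.
        for i in range(gap, n):
            temp = a[i]
            j = i
            # Shift earlier gap-sorted elements up until the right spot for temp is found.
            while j >= gap and a[j - gap] > temp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
    return a

data = [62, 83, 18, 53, 7, 17, 95, 86, 47, 69, 25, 28]
print(shellsort(data))   # [7, 17, 18, 25, 28, 47, 53, 62, 69, 83, 86, 95]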
Gap sequences
The question of deciding which gap sequence to use is difficult. Every gap sequence that contains 1 yields a correct sort (as this makes the final pass an ordinary insertion sort); however, the properties of the versions of Shellsort obtained this way can be very different. Too few gaps slows down the passes, and too many gaps produces an overhead.
The table below compares most proposed gap sequences published so far. Some of them have decreasing elements that depend on the size of the sorted array (N). Others are increasing infinite sequences, whose elements less than N should be used in reverse order.
OEIS General term (k ≥ 1) Concrete gaps Worst-case time complexity Author and year of publication
$\left\lfloor {\frac {N}{2^{k}}}\right\rfloor $ $1,2,\ldots ,\left\lfloor {\frac {N}{4}}\right\rfloor ,\left\lfloor {\frac {N}{2}}\right\rfloor $ $\Theta \left(N^{2}\right)$ [e.g. when N = 2p] Shell, 1959[4]
$2\left\lfloor {\frac {N}{2^{k+1}}}\right\rfloor +1$ $1,3,\ldots ,\;2\left\lfloor {\frac {N}{8}}\right\rfloor +1,\;\;2\left\lfloor {\frac {N}{4}}\right\rfloor +1$ $\Theta \left(N^{\frac {3}{2}}\right)$ Frank & Lazarus, 1960[8]
A000225 $2^{k}-1$ $1,3,7,15,31,63,\ldots $ $\Theta \left(N^{\frac {3}{2}}\right)$ Hibbard, 1963[9]
A083318 $2^{k}+1$, prefixed with 1 $1,3,5,9,17,33,65,\ldots $ $\Theta \left(N^{\frac {3}{2}}\right)$ Papernov & Stasevich, 1965[10]
A003586 Successive numbers of the form $2^{p}3^{q}$ (3-smooth numbers) $1,2,3,4,6,8,9,12,\ldots $ $\Theta \left(N\log ^{2}N\right)$ Pratt, 1971[1]
A003462 ${\frac {3^{k}-1}{2}}$, not greater than $\left\lceil {\frac {N}{3}}\right\rceil $ $1,4,13,40,121,\ldots $ $\Theta \left(N^{\frac {3}{2}}\right)$ Knuth, 1973,[3] based on Pratt, 1971[1]
A036569 ${\begin{aligned}&\prod \limits _{I}a_{q},{\hbox{where}}\\a_{0}={}&3\\a_{q}={}&\min \left\{n\in \mathbb {N} \colon n\geq \left({\frac {5}{2}}\right)^{q+1},\forall p\colon 0\leq p<q\Rightarrow \gcd(a_{p},n)=1\right\}\\I={}&\left\{0\leq q<r\mid q\neq {\frac {1}{2}}\left(r^{2}+r\right)-k\right\}\\r={}&\left\lfloor {\sqrt {2k+{\sqrt {2k}}}}\right\rfloor \end{aligned}}$ $1,3,7,21,48,112,\ldots $ $O\left(N^{1+{\sqrt {\frac {8\ln \left(5/2\right)}{\ln(N)}}}}\right)$ Incerpi & Sedgewick, 1985,[11] Knuth[3]
A036562 $4^{k}+3\cdot 2^{k-1}+1$, prefixed with 1 $1,8,23,77,281,\ldots $ $O\left(N^{\frac {4}{3}}\right)$ Sedgewick, 1982[6]
A033622 ${\begin{cases}9\left(2^{k}-2^{\frac {k}{2}}\right)+1&k{\text{ even}},\\8\cdot 2^{k}-6\cdot 2^{(k+1)/2}+1&k{\text{ odd}}\end{cases}}$ $1,5,19,41,109,\ldots $ $O\left(N^{\frac {4}{3}}\right)$ Sedgewick, 1986[12]
$h_{k}=\max \left\{\left\lfloor {\frac {5h_{k-1}-1}{11}}\right\rfloor ,1\right\},h_{0}=N$ $1,\ldots ,\left\lfloor {\frac {5}{11}}\left\lfloor {\frac {5N-1}{11}}\right\rfloor -{\frac {1}{11}}\right\rfloor ,\left\lfloor {\frac {5N-1}{11}}\right\rfloor $ Unknown Gonnet & Baeza-Yates, 1991[13]
A108870 $\left\lceil {\frac {1}{5}}\left(9\cdot \left({\frac {9}{4}}\right)^{k-1}-4\right)\right\rceil $ $1,4,9,20,46,103,\ldots $ Unknown Tokuda, 1992[14]
A102549 Unknown (experimentally derived) $1,4,10,23,57,132,301,701$ Unknown Ciura, 2001[15]
$\left\lceil {\frac {\gamma ^{k}-1}{\gamma -1}}\right\rceil ,\gamma =2.243609061420001\ldots $ $1,4,9,20,45,102,\ldots $ Unknown Lee, 2021[16]
When the binary representation of N contains many consecutive zeroes, Shellsort using Shell's original gap sequence makes $\Theta (N^{2})$ comparisons in the worst case. For instance, this case occurs for N equal to a power of two when elements greater and smaller than the median occupy odd and even positions respectively, since they are compared only in the last pass.
Although it has higher complexity than the O(N log N) that is optimal for comparison sorts, Pratt's version lends itself to sorting networks and has the same asymptotic gate complexity as Batcher's bitonic sorter.
Gonnet and Baeza-Yates observed that Shellsort makes the fewest comparisons on average when the ratios of successive gaps are roughly equal to 2.2.[13] This is why their sequence with ratio 2.2 and Tokuda's sequence with ratio 2.25 prove efficient. However, it is not known why this is so. Sedgewick recommends using gaps which have low greatest common divisors or are pairwise coprime.[17] Gaps which are odd numbers seem to work well in practice: 25% reductions have been observed by avoiding even-numbered gaps. Gaps which avoid multiples of 3 and 5 seem to produce small benefits of < 10%.
With respect to the average number of comparisons, Ciura's sequence[15] has the best known performance; gaps beyond 701 were not determined experimentally, but the sequence can be extended further according to the recursive formula $h_{k}=\lfloor 2.25h_{k-1}\rfloor $.
Tokuda's sequence, defined by the simple formula $h_{k}=\lceil h'_{k}\rceil $, where $h'_{k}=2.25h'_{k-1}+1$, $h'_{1}=1$, can be recommended for practical applications.
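Both extension rules are easy to compute. The following Python sketch (illustrative; the function names are ad hoc) generates the extended Ciura gaps via $h_{k}=\lfloor 2.25h_{k-1}\rfloor $ and Tokuda's gaps via $h_{k}=\lceil h'_{k}\rceil $, stopping once a gap reaches the array size n.

import math

def ciura_extended(n):
    # Ciura's experimental gaps, extended by h_k = floor(2.25 * h_{k-1}).
    gaps = [1, 4, 10, 23, 57, 132, 301, 701]
    while int(2.25 * gaps[-1]) < n:
        gaps.append(int(2.25 * gaps[-1]))
    return gaps

def tokuda(n):
    # Tokuda's gaps: h_k = ceil(h'_k), where h'_k = 2.25 * h'_{k-1} + 1 and h'_1 = 1.
    gaps, h = [], 1.0
    while math.ceil(h) < n:
        gaps.append(math.ceil(h))
        h = 2.25 * h + 1
    return gaps

For n = 1000 these give 1, 4, 10, 23, 57, 132, 301, 701 and 1, 4, 9, 20, 46, 103, 233, 525 respectively; during sorting the gaps are applied in decreasing order.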
If the maximum input size is small, as may occur if Shellsort is used on small subarrays by another recursive sorting algorithm such as quicksort or merge sort, then it is possible to tabulate an optimal sequence for each input size.[18]
Computational complexity
The following property holds: after h2-sorting of any h1-sorted array, the array remains h1-sorted.[19] Every h1-sorted and h2-sorted array is also (a1h1+a2h2)-sorted, for any nonnegative integers a1 and a2. The worst-case complexity of Shellsort is therefore connected with the Frobenius problem: for given integers h1,..., hn with gcd = 1, the Frobenius number g(h1,..., hn) is the greatest integer that cannot be represented as a1h1+ ... +anhn with nonnegative integer a1,..., an. Using known formulae for Frobenius numbers, we can determine the worst-case complexity of Shellsort for several classes of gap sequences.[20] Proven results are shown in the above table.
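This preservation property is easy to check empirically. The sketch below (Python; h_sort and is_h_sorted are ad hoc helper names, not from the literature) h-sorts an array by gapped insertion and verifies that 3-sorting a 5-sorted array leaves it 5-sorted.

import random

def h_sort(a, h):
    # One gapped insertion-sort pass with gap h (in place).
    for i in range(h, len(a)):
        temp, j = a[i], i
        while j >= h and a[j - h] > temp:
            a[j] = a[j - h]
            j -= h
        a[j] = temp

def is_h_sorted(a, h):
    return all(a[i] <= a[i + h] for i in range(len(a) - h))

a = [random.randrange(100) for _ in range(50)]
h_sort(a, 5)              # a is now 5-sorted
h_sort(a, 3)              # 3-sorting it...
print(is_h_sorted(a, 5))  # ...prints True: it is still 5-sorted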
Mark Allen Weiss proved that Shellsort runs in O(N log N) time when the input array is in reverse order.[21]
With respect to the average number of operations, none of the proven results concerns a practical gap sequence. For gaps that are powers of two, Espelid computed this average as $0.5349N{\sqrt {N}}-0.4387N-0.097{\sqrt {N}}+O(1)$.[22] Knuth determined the average complexity of sorting an N-element array with two gaps (h, 1) to be ${\frac {2N^{2}}{h}}+{\sqrt {\pi N^{3}h}}$.[3] It follows that a two-pass Shellsort with $h=\Theta (N^{1/3})$ makes on average $O(N^{5/3})$ comparisons/inversions/running time. Yao found the average complexity of a three-pass Shellsort.[23] His result was refined by Janson and Knuth:[24] the average number of comparisons/inversions/running time made during a Shellsort with three gaps (ch, cg, 1), where h and g are coprime, is ${\frac {N^{2}}{4ch}}+O(N)$ in the first pass, ${\frac {1}{8g}}{\sqrt {\frac {\pi }{ch}}}(h-1)N^{3/2}+O(hN)$ in the second pass and $\psi (h,g)N+{\frac {1}{8}}{\sqrt {\frac {\pi }{c}}}(c-1)N^{3/2}+O\left((c-1)gh^{1/2}N\right)+O\left(c^{2}g^{3}h^{2}\right)$ in the third pass. ψ(h, g) in the last formula is a complicated function asymptotically equal to ${\sqrt {\frac {\pi h}{128}}}g+O\left(g^{-1/2}h^{1/2}\right)+O\left(gh^{-1/2}\right)$. In particular, when $h=\Theta (N^{7/15})$ and $g=\Theta (N^{1/5})$, the average time of sorting is $O(N^{23/15})$.
Based on experiments, it is conjectured that Shellsort with Hibbard's gap sequence runs in $O(N^{5/4})$ average time,[3] and that Gonnet and Baeza-Yates's sequence requires on average 0.41N ln N (ln ln N + 1/6) element moves.[13] Approximations of the average number of operations formerly put forward for other sequences fail when sorted arrays contain millions of elements.
In empirical comparisons, the average number of element comparisons made by various variants of Shellsort is divided by the theoretical lower bound, i.e. $\log _{2}N!$, where the sequence 1, 4, 10, 23, 57, 132, 301, 701 has been extended according to the formula $h_{k}=\lfloor 2.25h_{k-1}\rfloor $.
Applying the theory of Kolmogorov complexity, Jiang, Li, and Vitányi[25] proved the following lower bound for the order of the average number of operations/running time in a p-pass Shellsort: $\Omega (pN^{1+1/p})$ when $p\leq \log _{2}N$ and $\Omega (pN)$ when $p>\log _{2}N$. Therefore, Shellsort has prospects of running in an average time that asymptotically grows like $N\log N$ only when using gap sequences whose number of gaps grows in proportion to the logarithm of the array size. It is, however, unknown whether Shellsort can reach this asymptotic order of average-case complexity, which is optimal for comparison sorts. The lower bound was improved by Vitányi[26] for every number of passes $p$ to $\Omega (N\sum _{k=1}^{p}h_{k-1}/h_{k})$ where $h_{0}=N$. This result implies, for example, the Jiang-Li-Vitányi lower bound for all $p$-pass increment sequences and improves that lower bound for particular increment sequences. In fact, all bounds (lower and upper) currently known for the average case are precisely matched by this lower bound. For example, this gives the new result that the Janson-Knuth upper bound is matched by the resulting lower bound for the increment sequence used, showing that three-pass Shellsort for this increment sequence uses $\Theta (N^{23/15})$ comparisons/inversions/running time. The formula also allows searching for increment sequences that yield previously unknown lower bounds; for example, for the four-pass increment sequence $h_{1}=n^{11/16},$ $h_{2}=n^{7/16},$ $h_{3}=n^{3/16},$ $h_{4}=1$, the lower bound becomes $T=\Omega \left(n\cdot \left(n^{1-11/16}+n^{11/16-7/16}+n^{7/16-3/16}+n^{3/16}\right)\right)=\Omega (n^{1+5/16})=\Omega (n^{21/16})$, which is greater than the general bound $\Omega (pn^{1+1/p})=\Omega (n^{5/4})$.
The worst-case complexity of any version of Shellsort is of higher order: Plaxton, Poonen, and Suel showed that it grows at least as rapidly as $\Omega \left(N\left({\log N \over \log \log N}\right)^{2}\right)$.[27][28] Robert Cypher proved a stronger lower bound: $\Omega \left(N{{(\log N)^{2}} \over {\log \log N}}\right)$ when $h_{s+1}>h_{s}$ for all $s$.[29]
Applications
Shellsort performs more operations and has a higher cache miss ratio than quicksort. However, since it can be implemented using little code and does not use the call stack, some implementations of the qsort function in the C standard library targeted at embedded systems use it instead of quicksort. Shellsort is, for example, used in the uClibc library.[30] For similar reasons, Shellsort was used in the Linux kernel in the past.[31]
Shellsort can also serve as a sub-algorithm of introspective sort, to sort short subarrays and to prevent a slowdown when the recursion depth exceeds a given limit. This principle is employed, for instance, in the bzip2 compressor.[32]
See also
• Comb sort
References
1. Pratt, Vaughan Ronald (1979). Shellsort and Sorting Networks (Outstanding Dissertations in the Computer Sciences) (PDF). Garland. ISBN 978-0-8240-4406-0. Archived (PDF) from the original on 7 September 2021.
2. "Shellsort & Comparisons".
3. Knuth, Donald E. (1997). "Shell's method". The Art of Computer Programming. Volume 3: Sorting and Searching (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 83–95. ISBN 978-0-201-89685-5.
4. Shell, D. L. (1959). "A High-Speed Sorting Procedure" (PDF). Communications of the ACM. 2 (7): 30–32. doi:10.1145/368370.368387. S2CID 28572656.
5. Some older textbooks and references call this the "Shell–Metzner" sort after Marlene Metzner Norton, but according to Metzner, "I had nothing to do with the sort, and my name should never have been attached to it." See "Shell sort". National Institute of Standards and Technology. Retrieved 17 July 2007.
6. Sedgewick, Robert (1998). Algorithms in C. Vol. 1 (3rd ed.). Addison-Wesley. pp. 273–281. ISBN 978-0-201-31452-6.
7. Kernighan, Brian W.; Ritchie, Dennis M. (1996). The C Programming Language (2nd ed.). Prentice Hall. p. 62. ISBN 978-7-302-02412-5.
8. Frank, R. M.; Lazarus, R. B. (1960). "A High-Speed Sorting Procedure". Communications of the ACM. 3 (1): 20–22. doi:10.1145/366947.366957. S2CID 34066017.
9. Hibbard, Thomas N. (1963). "An Empirical Study of Minimal Storage Sorting". Communications of the ACM. 6 (5): 206–213. doi:10.1145/366552.366557. S2CID 12146844.
10. Papernov, A. A.; Stasevich, G. V. (1965). "A Method of Information Sorting in Computer Memories" (PDF). Problems of Information Transmission. 1 (3): 63–75.
11. Incerpi, Janet; Sedgewick, Robert (1985). "Improved Upper Bounds on Shellsort" (PDF). Journal of Computer and System Sciences. 31 (2): 210–224. doi:10.1016/0022-0000(85)90042-x.
12. Sedgewick, Robert (1986). "A New Upper Bound for Shellsort". Journal of Algorithms. 7 (2): 159–173. doi:10.1016/0196-6774(86)90001-5.
13. Gonnet, Gaston H.; Baeza-Yates, Ricardo (1991). "Shellsort". Handbook of Algorithms and Data Structures: In Pascal and C (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 161–163. ISBN 978-0-201-41607-7. Extensive experiments indicate that the sequence defined by α = 0.45454 < 5/11 performs significantly better than other sequences. The easiest way to compute ⌊0.45454n⌋ is by (5 * n - 1)/11 using integer arithmetic.
14. Tokuda, Naoyuki (1992). "An Improved Shellsort". In van Leeuven, Jan (ed.). Proceedings of the IFIP 12th World Computer Congress on Algorithms, Software, Architecture. Amsterdam: North-Holland Publishing Co. pp. 449–457. ISBN 978-0-444-89747-3.
15. Ciura, Marcin (2001). "Best Increments for the Average Case of Shellsort" (PDF). In Freiwalds, Rusins (ed.). Proceedings of the 13th International Symposium on Fundamentals of Computation Theory. London: Springer-Verlag. pp. 106–117. ISBN 978-3-540-42487-1. Archived from the original (PDF) on 23 September 2018.
16. Lee, Ying Wai (2021). "Empirically Improved Tokuda Gap Sequence in Shellsort". arXiv:2112.11112 [cs.DS].
17. Sedgewick, Robert (1998). "Shellsort". Algorithms in C++, Parts 1–4: Fundamentals, Data Structure, Sorting, Searching. Reading, Massachusetts: Addison-Wesley. pp. 285–292. ISBN 978-0-201-35088-3.
18. Forshell, Olof (22 May 2018). "How to choose the lengths of my sub sequences for a shell sort?". Stack Overflow. Additional commentary at Fastest gap sequence for shell sort? (23 May 2018).
19. Gale, David; Karp, Richard M. (April 1972). "A Phenomenon in the Theory of Sorting" (PDF). Journal of Computer and System Sciences. 6 (2): 103–115. doi:10.1016/S0022-0000(72)80016-3.
20. Selmer, Ernst S. (March 1989). "On Shellsort and the Frobenius Problem" (PDF). BIT Numerical Mathematics. 29 (1): 37–40. doi:10.1007/BF01932703. hdl:1956/19572. S2CID 32467267.
21. Weiss, Mark Allen (1989). "A good case for Shellsort". Congressus Numerantium. 73: 59–62.
22. Espelid, Terje O. (December 1973). "Analysis of a Shellsort Algorithm". BIT Numerical Mathematics. 13 (4): 394–400. doi:10.1007/BF01933401. S2CID 119443598. The quoted result is equation (8) on p. 399.
23. Yao, Andrew Chi-Chih (1980). "An Analysis of (h, k, 1)-Shellsort" (PDF). Journal of Algorithms. 1 (1): 14–50. doi:10.1016/0196-6774(80)90003-6. S2CID 3054966. STAN-CS-79-726. Archived from the original (PDF) on 4 March 2019.
24. Janson, Svante; Knuth, Donald E. (1997). "Shellsort with Three Increments" (PDF). Random Structures and Algorithms. 10 (1–2): 125–142. arXiv:cs/9608105. CiteSeerX 10.1.1.54.9911. doi:10.1002/(SICI)1098-2418(199701/03)10:1/2<125::AID-RSA6>3.0.CO;2-X.
25. Jiang, Tao; Li, Ming; Vitányi, Paul (September 2000). "A Lower Bound on the Average-Case Complexity of Shellsort" (PDF). Journal of the ACM. 47 (5): 905–911. arXiv:cs/9906008. CiteSeerX 10.1.1.6.6508. doi:10.1145/355483.355488. S2CID 3265123.
26. Vitányi, Paul (March 2018). "On the average-case complexity of Shellsort" (PDF). Random Structures and Algorithms. 52 (2): 354–363. arXiv:1501.06461. doi:10.1002/rsa.20737. S2CID 6833808.
27. Plaxton, C. Greg; Poonen, Bjorn; Suel, Torsten (24–27 October 1992). "Improved lower bounds for Shellsort" (PDF). Proceedings of the 33rd Annual Symposium on Foundations of Computer Science. Vol. 33. Pittsburgh, United States. pp. 226–235. CiteSeerX 10.1.1.43.1393. doi:10.1109/SFCS.1992.267769. ISBN 978-0-8186-2900-6. S2CID 15095863.
28. Plaxton, C. Greg; Suel, Torsten (May 1997). "Lower Bounds for Shellsort" (PDF). Journal of Algorithms. 23 (2): 221–240. CiteSeerX 10.1.1.460.2429. doi:10.1006/jagm.1996.0825.
29. Cypher, Robert (1993). "A Lower Bound on the Size of Shellsort Sorting Networks". SIAM Journal on Computing. 22: 62–71. doi:10.1137/0222006.
30. Novoa, Manuel III. "libc/stdlib/stdlib.c". Retrieved 29 October 2014.
31. "kernel/groups.c". GitHub. Retrieved 5 May 2012.
32. Julian Seward. "bzip2/blocksort.c". Retrieved 30 March 2011.
Bibliography
• Knuth, Donald E. (1997). "Shell's method". The Art of Computer Programming. Volume 3: Sorting and Searching (2nd ed.). Reading, Massachusetts: Addison-Wesley. pp. 83–95. ISBN 978-0-201-89685-5.
• Analysis of Shellsort and Related Algorithms, Robert Sedgewick, Fourth European Symposium on Algorithms, Barcelona, September 1996.
External links
The Wikibook Algorithm implementation has a page on the topic of: Shell sort
• Animated Sorting Algorithms: Shell Sort at the Wayback Machine (archived 10 March 2015) – graphical demonstration
• Shellsort with gaps 5, 3, 1 as a Hungarian folk dance
Shell integration
Shell integration (the shell method in integral calculus) is a method for calculating the volume of a solid of revolution, when integrating along an axis perpendicular to the axis of revolution. This is in contrast to disc integration which integrates along the axis parallel to the axis of revolution.
Definition
The shell method goes as follows: Consider a volume in three dimensions obtained by rotating a cross-section in the xy-plane around the y-axis. Suppose the cross-section is defined by the graph of the positive function f(x) on the interval [a, b]. Then the formula for the volume will be:
$2\pi \int _{a}^{b}xf(x)\,dx$
If the cross-section is instead defined by a function f(y) of the y coordinate and the axis of rotation is the x-axis, then the formula becomes:
$2\pi \int _{a}^{b}yf(y)\,dy$
If the region is rotated around the line x = h, then the formula becomes:[1]
${\begin{cases}\displaystyle 2\pi \int _{a}^{b}(x-h)f(x)\,dx,&{\text{if}}\ h\leq a<b\\\displaystyle 2\pi \int _{a}^{b}(h-x)f(x)\,dx,&{\text{if}}\ a<b\leq h,\end{cases}}$
and for rotations around y = k it becomes
${\begin{cases}\displaystyle 2\pi \int _{a}^{b}(y-k)f(y)\,dy,&{\text{if}}\ k\leq a<b\\\displaystyle 2\pi \int _{a}^{b}(k-y)f(y)\,dy,&{\text{if}}\ a<b\leq k.\end{cases}}$
The formula is derived by computing the double integral in polar coordinates.
Example
Consider the volume whose cross section on the interval [1, 2] is defined by:
$y=(x-1)^{2}(x-2)^{2}$
With disc integration we would need to solve for x given y, and because the volume is hollow in the middle we would find two functions: one defining the inner solid and one defining the outer solid. After integrating each of these functions with the disc method, we would subtract one result from the other to yield the desired volume.
With the shell method all we need is the following formula:
$2\pi \int _{1}^{2}x((x-1)^{2}(x-2)^{2})\,dx$
By expanding the polynomial the integral becomes very simple. In the end we find the volume is π/10 cubic units.
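The result can also be checked symbolically. A minimal sketch using SymPy (assuming the library is available) evaluates the shell-method integral and confirms the value π/10.

import sympy as sp

x = sp.symbols('x')
f = (x - 1)**2 * (x - 2)**2
# Shell method: V = 2*pi * integral of x*f(x) over [1, 2]
volume = 2 * sp.pi * sp.integrate(x * f, (x, 1, 2))
print(volume)  # pi/10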
See also
• Solid of revolution
• Disc integration
References
1. Heckman, Dave (2014). "Volume – Shell Method" (PDF). Retrieved 2016-09-28.
• Weisstein, Eric W. "Method of Shells". MathWorld.
• Frank Ayres, Elliott Mendelson. Schaum's Outlines: Calculus. McGraw-Hill Professional 2008, ISBN 978-0-07-150861-2. pp. 244–248 (online copy, p. 244, at Google Books)
Shelling (topology)
In mathematics, a shelling of a simplicial complex is a way of gluing it together from its maximal simplices (simplices that are not a face of another simplex) in a well-behaved way. A complex admitting a shelling is called shellable.
Definition
A d-dimensional simplicial complex is called pure if its maximal simplices all have dimension d. Let $\Delta $ be a finite or countably infinite simplicial complex. An ordering $C_{1},C_{2},\ldots $ of the maximal simplices of $\Delta $ is a shelling if the complex
$B_{k}:={\Big (}\bigcup _{i=1}^{k-1}C_{i}{\Big )}\cap C_{k}$
is pure and of dimension $\dim C_{k}-1$ for all $k=2,3,\ldots $. That is, the "new" simplex $C_{k}$ meets the previous simplices along some union $B_{k}$ of top-dimensional simplices of the boundary of $C_{k}$. If $B_{k}$ is the entire boundary of $C_{k}$ then $C_{k}$ is called spanning.
For $\Delta $ not necessarily countable, one can define a shelling as a well-ordering of the maximal simplices of $\Delta $ having analogous properties.
Properties
• A shellable complex is homotopy equivalent to a wedge sum of spheres, one for each spanning simplex of corresponding dimension.
• A shellable complex may admit many different shellings, but the number of spanning simplices and their dimensions do not depend on the choice of shelling. This follows from the previous property.
Examples
• Every Coxeter complex, and more generally every building (in the sense of Tits), is shellable.[1]
• The boundary complex of a (convex) polytope is shellable.[2][3] Note that here, shellability is generalized to the case of polyhedral complexes (that are not necessarily simplicial).
• There is an unshellable triangulation of the tetrahedron.[4]
Notes
1. Björner, Anders (1984). "Some combinatorial and algebraic properties of Coxeter complexes and Tits buildings". Advances in Mathematics. 52 (3): 173–212. doi:10.1016/0001-8708(84)90021-5. ISSN 0001-8708.
2. Bruggesser, H.; Mani, P. "Shellable Decompositions of Cells and Spheres". Mathematica Scandinavica. 29: 197–205. doi:10.7146/math.scand.a-11045.
3. Ziegler, Günter M. "8.2. Shelling polytopes". Lectures on polytopes. Springer. pp. 239–246. doi:10.1007/978-1-4613-8431-1_8.
4. Rudin, Mary Ellen (1958). "An unshellable triangulation of a tetrahedron". Bulletin of the American Mathematical Society. 64 (3): 90–91. doi:10.1090/s0002-9904-1958-10168-8. ISSN 1088-9485.
References
• Kozlov, Dmitry (2008). Combinatorial Algebraic Topology. Berlin: Springer. ISBN 978-3-540-71961-8.
Antimatroid
In mathematics, an antimatroid is a formal system that describes processes in which a set is built up by including elements one at a time, and in which an element, once available for inclusion, remains available until it is included.[1] Antimatroids are commonly axiomatized in two equivalent ways, either as a set system modeling the possible states of such a process, or as a formal language modeling the different sequences in which elements may be included. Dilworth (1940) was the first to study antimatroids, using yet another axiomatization based on lattice theory, and they have been frequently rediscovered in other contexts.[2]
The axioms defining antimatroids as set systems are very similar to those of matroids, but whereas matroids are defined by an exchange axiom, antimatroids are defined instead by an anti-exchange axiom, from which their name derives. Antimatroids can be viewed as a special case of greedoids and of semimodular lattices, and as a generalization of partial orders and of distributive lattices. Antimatroids are equivalent, by complementation, to convex geometries, a combinatorial abstraction of convex sets in geometry.
Antimatroids have been applied to model precedence constraints in scheduling problems, potential event sequences in simulations, task planning in artificial intelligence, and the states of knowledge of human learners.
Definitions
An antimatroid can be defined as a finite family ${\mathcal {F}}$ of finite sets, called feasible sets, with the following two properties:[3]
• The union of any two feasible sets is also feasible. That is, ${\mathcal {F}}$ is closed under unions.
• If $S$ is a nonempty feasible set, then $S$ contains an element $x$ for which $S\setminus \{x\}$ (the set formed by removing $x$ from $S$) is also feasible. That is, ${\mathcal {F}}$ is an accessible set system.
Antimatroids also have an equivalent definition as a formal language, that is, as a set of strings defined from a finite alphabet of symbols. A string that belongs to this set is called a word of the language. A language ${\mathcal {L}}$ defining an antimatroid must satisfy the following properties:[4]
• Every symbol of the alphabet occurs in at least one word of ${\mathcal {L}}$.
• Each word of ${\mathcal {L}}$ contains at most one copy of each symbol. A language with this property is called normal.[5]
• Every prefix of a word in ${\mathcal {L}}$ is also in ${\mathcal {L}}$. A language with this property is called hereditary.[5]
• If $S$ and $T$ are words in ${\mathcal {L}}$, and $S$ contains at least one symbol that is not in $T$, then there is a symbol $x$ in $S$ such that the concatenation $Tx$ is another word in ${\mathcal {L}}$.
The equivalence of these two forms of definition can be seen as follows. If ${\mathcal {L}}$ is an antimatroid defined as a formal language, then the sets of symbols in words of ${\mathcal {L}}$ form an accessible union-closed set system. It is accessible by the hereditary property of strings, and it can be shown to be union-closed by repeated application of the concatenation property of strings. In the other direction, from an accessible union-closed set system ${\mathcal {F}}$, the language of normal strings whose prefixes all have sets of symbols belonging to ${\mathcal {F}}$ meets the requirements for a formal language to be an antimatroid. These two transformations are the inverses of each other: transforming a formal language into a set family and back, or vice versa, produces the same system. Thus, these two definitions lead to mathematically equivalent classes of objects.[6]
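These set-system axioms can be tested directly on a concrete family of sets. The following Python sketch (illustrative only) checks that a family of frozensets is union-closed and accessible, and applies it to the chain antimatroid of the string abcd described in the next section.

from itertools import combinations

def is_antimatroid(family):
    # The family is given as frozensets and should contain the empty set.
    fam = set(family)
    # Union-closed: the union of any two feasible sets is feasible.
    union_closed = all(a | b in fam for a, b in combinations(fam, 2))
    # Accessible: every nonempty feasible set has a removable element.
    accessible = all(any(s - {x} in fam for x in s) for s in fam if s)
    return union_closed and accessible

chain = [frozenset(), frozenset('a'), frozenset('ab'),
         frozenset('abc'), frozenset('abcd')]
print(is_antimatroid(chain))  # True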
Examples
The following systems provide examples of antimatroids:
Chain antimatroids
The prefixes of a single string, and the sets of symbols in these prefixes, form an antimatroid. For instance the chain antimatroid defined by the string $abcd$ has as its formal language the set of strings
$\{\varepsilon ,a,ab,abc,abcd\}$
(where $\varepsilon $ denotes the empty string) and as its family of feasible sets the family[7]
${\bigl \{}\emptyset ,\{a\},\{a,b\},\{a,b,c\},\{a,b,c,d\}{\bigr \}}.$
Poset antimatroids
The lower sets of a finite partially ordered set form an antimatroid, with the full-length words of the antimatroid forming the linear extensions of the partial order.[8] By Birkhoff's representation theorem for distributive lattices, the feasible sets in a poset antimatroid (ordered by set inclusion) form a distributive lattice, and all distributive lattices can be formed in this way. Thus, antimatroids can be seen as generalizations of distributive lattices. A chain antimatroid is the special case of a poset antimatroid for a total order.[7]
Shelling antimatroids
A shelling sequence of a finite set $U$ of points in the Euclidean plane or a higher-dimensional Euclidean space is formed by repeatedly removing vertices of the convex hull. The feasible sets of the antimatroid formed by these sequences are the intersections of $U$ with the complement of a convex set.[7] Every antimatroid is isomorphic to a shelling antimatroid of points in a sufficiently high-dimensional space.[9]
Perfect elimination
A perfect elimination ordering of a chordal graph is an ordering of its vertices such that, for each vertex $v$, the neighbors of $v$ that occur later than $v$ in the ordering form a clique. The prefixes of perfect elimination orderings of a chordal graph form an antimatroid.[10]
Chip-firing games
Chip-firing games such as the abelian sandpile model are defined by a directed graph together with a system of "chips" placed on its vertices. Whenever the number of chips on a vertex $v$ is at least as large as the number of edges out of $v$, it is possible to fire $v$, moving one chip to each neighboring vertex. The event that $v$ fires for the $i$th time can only happen if it has already fired $i-1$ times and accumulated $i\cdot \deg(v)$ total chips. These conditions do not depend on the ordering of previous firings, and remain true until $v$ fires, so any given graph and initial placement of chips for which the system terminates defines an antimatroid on the pairs $(v,i)$. A consequence of the antimatroid property of these systems is that, for a given initial state, the number of times each vertex fires and the eventual stable state of the system do not depend on the firing order.[11]
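The order-independence described above can be illustrated with a small simulation. In the Python sketch below, the graph and chip placement are arbitrary illustrative choices (the graph is acyclic, so the process necessarily terminates); vertices are fired in random order until no vertex can fire, and the per-vertex firing counts come out the same for every order.

import random

edges = {0: [1, 3], 1: [2], 2: [3], 3: []}   # vertex 3 is a sink
initial_chips = {0: 4, 1: 0, 2: 0, 3: 0}

def stabilize(rng):
    chips = dict(initial_chips)
    fired = {v: 0 for v in edges}
    while True:
        ready = [v for v in edges if edges[v] and chips[v] >= len(edges[v])]
        if not ready:
            return fired
        v = rng.choice(ready)               # fire an arbitrary ready vertex
        chips[v] -= len(edges[v])
        for w in edges[v]:
            chips[w] += 1
        fired[v] += 1

runs = [stabilize(random.Random(seed)) for seed in range(5)]
print(all(r == runs[0] for r in runs))      # True: firing counts do not depend on the order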
Paths and basic words
In the set theoretic axiomatization of an antimatroid there are certain special sets called paths that determine the whole antimatroid, in the sense that the sets of the antimatroid are exactly the unions of paths.[12] If $S$ is any feasible set of the antimatroid, an element $x$ that can be removed from $S$ to form another feasible set is called an endpoint of $S$, and a feasible set that has only one endpoint is called a path of the antimatroid.[13] The family of paths can be partially ordered by set inclusion, forming the path poset of the antimatroid.[14]
For every feasible set $S$ in the antimatroid, and every element $x$ of $S$, one may find a path subset of $S$ for which $x$ is an endpoint: to do so, remove one at a time elements other than $x$ until no such removal leaves a feasible subset. Therefore, each feasible set in an antimatroid is the union of its path subsets.[12] If $S$ is not a path, each subset in this union is a proper subset of $S$. But, if $S$ is itself a path with endpoint $x$, each proper subset of $S$ that belongs to the antimatroid excludes $x$. Therefore, the paths of an antimatroid are exactly the feasible sets that do not equal the unions of their proper feasible subsets. Equivalently, a given family of sets ${\mathcal {P}}$ forms the family of paths of an antimatroid if and only if, for each $S$ in ${\mathcal {P}}$, the union of subsets of $S$ in ${\mathcal {P}}$ has one fewer element than $S$ itself.[15] If so, ${\mathcal {F}}$ itself is the family of unions of subsets of ${\mathcal {P}}$.[12]
In the formal language formalization of an antimatroid, the longest strings are called basic words. Each basic word forms a permutation of the whole alphabet.[16] If $B$ is the set of basic words, ${\mathcal {L}}$ can be defined from $B$ as the set of prefixes of words in $B$.[17]
Convex geometries
See also: Convex set, Convex geometry, and Closure operator
If ${\mathcal {F}}$ is the set system defining an antimatroid, with $U$ equal to the union of the sets in ${\mathcal {F}}$, then the family of sets
${\mathcal {G}}=\{U\setminus S\mid S\in {\mathcal {F}}\}$
complementary to the sets in ${\mathcal {F}}$ is sometimes called a convex geometry and the sets in ${\mathcal {G}}$ are called convex sets. For instance, in a shelling antimatroid, the convex sets are intersections of the given point set with convex subsets of Euclidean space. The set system defining a convex geometry must be closed under intersections. For any set $S$ in ${\mathcal {G}}$ that is not equal to $U$ there must be an element $x$ not in $S$ that can be added to $S$ to form another set in ${\mathcal {G}}$.[18]
A convex geometry can also be defined in terms of a closure operator $\tau $ that maps any subset of $U$ to its minimal closed superset. To be a closure operator, $\tau $ should have the following properties:[19]
• $\tau (\emptyset )=\emptyset $: the closure of the empty set is empty.
• For every subset $S$ of $U$, $S$ is a subset of $\tau (S)$ and $\tau (S)=\tau {\bigl (}\tau (S){\bigr )}$.
• Whenever $S\subset T\subset U$, $\tau (S)$ is a subset of $\tau (T)$.
The family of closed sets resulting from a closure operation of this type is necessarily closed under intersections, but might not be a convex geometry. The closure operators that define convex geometries also satisfy an additional anti-exchange axiom:
• If $S$ is a subset of $U$, and $y$ and $z$ are distinct elements of $U$ that do not belong to $\tau (S)$, but $z$ does belong to $\tau (S\cup \{y\})$, then $y$ does not belong to $\tau (S\cup \{z\})$.[19]
A closure operation satisfying this axiom is called an anti-exchange closure. If $S$ is a closed set in an anti-exchange closure, then the anti-exchange axiom determines a partial order on the elements not belonging to $S$, where $x\leq y$ in the partial order when $x$ belongs to $\tau (S\cup \{y\})$. If $x$ is a minimal element of this partial order, then $S\cup \{x\}$ is closed. That is, the family of closed sets of an anti-exchange closure has the property that for any set other than the universal set there is an element $x$ that can be added to it to produce another closed set. This property is complementary to the accessibility property of antimatroids, and the fact that intersections of closed sets are closed is complementary to the property that unions of feasible sets in an antimatroid are feasible. Therefore, the complements of the closed sets of any anti-exchange closure form an antimatroid.[18]
The undirected graphs in which the convex sets (subsets of vertices that contain all shortest paths between vertices in the subset) form a convex geometry are exactly the Ptolemaic graphs.[20]
Join-distributive lattices
Every two feasible sets of an antimatroid have a unique least upper bound (their union) and a unique greatest lower bound (the union of the sets in the antimatroid that are contained in both of them). Therefore, the feasible sets of an antimatroid, partially ordered by set inclusion, form a lattice. Various important features of an antimatroid can be interpreted in lattice-theoretic terms; for instance the paths of an antimatroid are the join-irreducible elements of the corresponding lattice, and the basic words of the antimatroid correspond to maximal chains in the lattice. The lattices that arise from antimatroids in this way generalize the finite distributive lattices, and can be characterized in several different ways.
• The description originally considered by Dilworth (1940) concerns meet-irreducible elements of the lattice. For each element $x$ of an antimatroid, there exists a unique maximal feasible set $S_{x}$ that does not contain $x$: $S_{x}$ can be constructed as the union of all feasible sets not containing $x$. This set $S_{x}$ is automatically meet-irreducible, meaning that it is not the meet of any two larger lattice elements. This is true because every feasible superset of $S_{x}$ contains $x$, and the same is therefore also true of every intersection of feasible supersets. Every element of an arbitrary lattice can be decomposed as a meet of meet-irreducible sets, often in multiple ways, but in the lattice corresponding to an antimatroid each element $T$ has a unique minimal family of meet-irreducible sets whose meet is $T$; this family consists of the sets $S_{x}$ for the elements $x$ such that $T\cup \{x\}$ is feasible. That is, the lattice has unique meet-irreducible decompositions.
• A second characterization concerns the intervals in the lattice, the sublattices defined by a pair of lattice elements $x\leq y$ consisting of all lattice elements $z$ with $x\leq z\leq y$. An interval is atomistic if every element in it is the join of atoms (the minimal elements above the bottom element $x$), and it is Boolean if it is isomorphic to the lattice of all subsets of a finite set. For an antimatroid, every interval that is atomistic is also boolean.
• Thirdly, the lattices arising from antimatroids are semimodular lattices, lattices that satisfy the upper semimodular law that for every two elements $x$ and $y$, if $y$ covers $x\wedge y$ then $x\vee y$ covers $x$. Translating this condition into the feasible sets of an antimatroid, if a feasible set $Y$ has only one element not belonging to another feasible set $X$ then that one element may be added to $X$ to form another set in the antimatroid. Additionally, the lattice of an antimatroid has the meet-semidistributive property: for all lattice elements $x$, $y$, and $z$, if $x\wedge y$ and $x\wedge z$ equal each other then they also both equal $x\wedge (y\vee z)$. A semimodular and meet-semidistributive lattice is called a join-distributive lattice.
These three characterizations are equivalent: any lattice with unique meet-irreducible decompositions has boolean atomistic intervals and is join-distributive, any lattice with boolean atomistic intervals has unique meet-irreducible decompositions and is join-distributive, and any join-distributive lattice has unique meet-irreducible decompositions and boolean atomistic intervals.[21] Thus, we may refer to a lattice with any of these three properties as join-distributive. Any antimatroid gives rise to a finite join-distributive lattice, and any finite join-distributive lattice comes from an antimatroid in this way.[22] Another equivalent characterization of finite join-distributive lattices is that they are graded (any two maximal chains have the same length), and the length of a maximal chain equals the number of meet-irreducible elements of the lattice.[23] The antimatroid representing a finite join-distributive lattice can be recovered from the lattice: the elements of the antimatroid can be taken to be the meet-irreducible elements of the lattice, and the feasible set corresponding to any element $x$ of the lattice consists of the set of meet-irreducible elements $y$ such that $y$ is not greater than or equal to $x$ in the lattice.
This representation of any finite join-distributive lattice as an accessible family of sets closed under unions (that is, as an antimatroid) may be viewed as an analogue of Birkhoff's representation theorem under which any finite distributive lattice has a representation as a family of sets closed under unions and intersections.
Supersolvable antimatroids
Motivated by a problem of defining partial orders on the elements of a Coxeter group, Armstrong (2009) studied antimatroids which are also supersolvable lattices. A supersolvable antimatroid is defined by a totally ordered collection of elements, and a family of sets of these elements. The family must include the empty set. Additionally, it must have the property that if two sets $A$ and $B$ belong to the family, if the set-theoretic difference $B\setminus A$ is nonempty, and if $x$ is the smallest element of $B\setminus A$, then $A\cup \{x\}$ also belongs to the family. As Armstrong observes, any family of sets of this type forms an antimatroid. Armstrong also provides a lattice-theoretic characterization of the antimatroids that this construction can form.[24]
Join operation and convex dimension
If ${\mathcal {A}}$ and ${\mathcal {B}}$ are two antimatroids, both described as a family of sets over the same universe of elements, then another antimatroid, the join of ${\mathcal {A}}$ and ${\mathcal {B}}$, can be formed as follows:
${\mathcal {A}}\vee {\mathcal {B}}=\{S\cup T\mid S\in {\mathcal {A}}\wedge T\in {\mathcal {B}}\}.$
This is a different operation than the join considered in the lattice-theoretic characterizations of antimatroids: it combines two antimatroids to form another antimatroid, rather than combining two sets in an antimatroid to form another set. The family of all antimatroids over the same universe forms a semilattice with this join operation.[25]
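As a set-family operation the join is short to write down. The following Python sketch (illustrative, again using frozensets) forms the join of the two chain antimatroids of the words ab and ba; the result is the family of all subsets of {a, b}, which requires both basic words and so has convex dimension two.

def join(A, B):
    # Join of two antimatroids over the same universe: all unions S ∪ T.
    return {s | t for s in A for t in B}

A = {frozenset(), frozenset('a'), frozenset('ab')}   # chain antimatroid of "ab"
B = {frozenset(), frozenset('b'), frozenset('ab')}   # chain antimatroid of "ba"
print(sorted(map(sorted, join(A, B))))
# [[], ['a'], ['a', 'b'], ['b']]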
Joins are closely related to a closure operation that maps formal languages to antimatroids, where the closure of a language ${\mathcal {L}}$ is the intersection of all antimatroids containing ${\mathcal {L}}$ as a sublanguage. This closure has as its feasible sets the unions of prefixes of strings in ${\mathcal {L}}$. In terms of this closure operation, the join is the closure of the union of the languages of ${\mathcal {A}}$ and ${\mathcal {B}}$. Every antimatroid can be represented as a join of a family of chain antimatroids, or equivalently as the closure of a set of basic words; the convex dimension of an antimatroid ${\mathcal {A}}$ is the minimum number of chain antimatroids (or equivalently the minimum number of basic words) in such a representation. If ${\mathfrak {F}}$ is a family of chain antimatroids whose basic words all belong to ${\mathcal {A}}$, then ${\mathfrak {F}}$ generates ${\mathcal {A}}$ if and only if the feasible sets of ${\mathfrak {F}}$ include all paths of ${\mathcal {A}}$. The paths of ${\mathcal {A}}$ belonging to a single chain antimatroid must form a chain in the path poset of ${\mathcal {A}}$, so the convex dimension of an antimatroid equals the minimum number of chains needed to cover the path poset, which by Dilworth's theorem equals the width of the path poset.[26]
If one has a representation of an antimatroid as the closure of a set of $d$ basic words, then this representation can be used to map the feasible sets of the antimatroid to points in $d$-dimensional Euclidean space: assign one coordinate per basic word $W$, and make the coordinate value of a feasible set $S$ be the length of the longest prefix of $W$ that is a subset of $S$. With this embedding, $S$ is a subset of another feasible set $T$ if and only if the coordinates for $S$ are all less than or equal to the corresponding coordinates of $T$. Therefore, the order dimension of the inclusion ordering of the feasible sets is at most equal to the convex dimension of the antimatroid.[27] However, in general these two dimensions may be very different: there exist antimatroids with order dimension three but with arbitrarily large convex dimension.[28]
Enumeration
The number of possible antimatroids on a set of elements grows rapidly with the number of elements in the set. For sets of one, two, three, etc. elements, the number of distinct antimatroids is[29]
$1,3,22,485,59386,133059751,\dots \,.$
Applications
Both the precedence and release-time constraints in the standard notation for theoretical scheduling problems may be modeled by antimatroids. Boyd & Faigle (1990) use antimatroids to generalize a greedy algorithm of Eugene Lawler for optimally solving single-processor scheduling problems with precedence constraints, in which the goal is to minimize the maximum penalty incurred by the late scheduling of a task.
Glasserman & Yao (1994) use antimatroids to model the ordering of events in discrete event simulation systems.
Parmar (2003) uses antimatroids to model progress towards a goal in artificial intelligence planning problems.
In Optimality Theory, a mathematical model for the development of natural language based on optimization under constraints, grammars are logically equivalent to antimatroids.[30]
In mathematical psychology, antimatroids have been used to describe feasible states of knowledge of a human learner. Each element of the antimatroid represents a concept that is to be understood by the learner, or a class of problems that he or she might be able to solve correctly, and the sets of elements that form the antimatroid represent possible sets of concepts that could be understood by a single person. The axioms defining an antimatroid may be phrased informally as stating that learning one concept can never prevent the learner from learning another concept, and that any feasible state of knowledge can be reached by learning a single concept at a time. The task of a knowledge assessment system is to infer the set of concepts known by a given learner by analyzing his or her responses to a small and well-chosen set of problems. In this context antimatroids have also been called "learning spaces" and "well-graded knowledge spaces".[31]
Notes
1. See Korte, Lovász & Schrader (1991) for a comprehensive survey of antimatroid theory with many additional references.
2. Two early references are Edelman (1980) and Jamison (1980); Jamison was the first to use the term "antimatroid". Monjardet (1985) surveys the history of rediscovery of antimatroids.
3. See e.g. Kempner & Levit (2003), Definition 2.1 and Proposition 2.3, p. 2.
4. Korte, Lovász & Schrader (1991), p. 22.
5. Korte, Lovász & Schrader (1991), p. 5.
6. Korte, Lovász & Schrader (1991), Theorem 1.4, p. 24.
7. Gordon (1997).
8. Korte, Lovász & Schrader (1991), pp. 24–25.
9. Kashiwabara, Nakamura & Okamoto (2005).
10. Gordon (1997) describes several results related to antimatroids of this type, but these antimatroids were mentioned earlier e.g. by Korte, Lovász & Schrader (1991). Chandran et al. (2003) use the connection to antimatroids as part of an algorithm for efficiently listing all perfect elimination orderings of a given chordal graph.
11. Björner, Lovász & Shor (1991); Knauer (2009).
12. Korte, Lovász & Schrader (1991), Lemma 3.12, p. 31.
13. Korte, Lovász & Schrader (1991), p. 31.
14. Korte, Lovász & Schrader (1991), pp. 39–43.
15. See Korte, Lovász & Schrader (1991), Theorem 3.13, p. 32, which defines paths as rooted sets, sets with a distinguished element, and states an equivalent characterization on the families of rooted sets that form the paths of antimatroids.
16. Korte, Lovász & Schrader (1991), pp. 6, 22.
17. See Korte, Lovász & Schrader (1991), p. 22: "any word in an antimatroid can be extended to a basic word".
18. Korte, Lovász & Schrader (1991), Theorem 1.1, p. 21.
19. Korte, Lovász & Schrader (1991), p. 20.
20. Farber & Jamison (1986).
21. Adaricheva, Gorbunov & Tumanov (2003), Theorems 1.7 and 1.9; Armstrong (2009), Theorem 2.7.
22. Edelman (1980), Theorem 3.3; Armstrong (2009), Theorem 2.8.
23. Monjardet (1985) credits a dual form of this characterization to several papers from the 1960s by S. P. Avann.
24. Armstrong (2009).
25. Korte, Lovász & Schrader (1991), p. 42; Eppstein (2008), Section 7.2; Falmagne et al. (2013), section 14.4.
26. Edelman & Saks (1988); Korte, Lovász & Schrader (1991), Theorem 6.9.
27. Korte, Lovász & Schrader (1991), Corollary 6.10.
28. Eppstein (2008), Figure 15.
29. Sloane, N. J. A. (ed.), "Sequence A119770", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation
30. Merchant & Riggle (2016).
31. Doignon & Falmagne (1999).
References
• Adaricheva, K. V.; Gorbunov, V. A.; Tumanov, V. I. (2003), "Join-semidistributive lattices and convex geometries", Advances in Mathematics, 173 (1): 1–49, doi:10.1016/S0001-8708(02)00011-7.
• Armstrong, Drew (2009), "The sorting order on a Coxeter group", Journal of Combinatorial Theory, Series A, 116 (8): 1285–1305, arXiv:0712.1047, doi:10.1016/j.jcta.2009.03.009, MR 2568800, S2CID 15474840.
• Birkhoff, Garrett; Bennett, M. K. (1985), "The convexity lattice of a poset", Order, 2 (3): 223–242, doi:10.1007/BF00333128, S2CID 118907732
• Björner, Anders; Lovász, László; Shor, Peter W. (1991), "Chip-firing games on graphs", European Journal of Combinatorics, 12 (4): 283–291, doi:10.1016/S0195-6698(13)80111-4, MR 1120415
• Björner, Anders; Ziegler, Günter M. (1992), "Introduction to greedoids", in White, Neil (ed.), Matroid Applications, Encyclopedia of Mathematics and its Applications, vol. 40, Cambridge: Cambridge University Press, pp. 284–357, doi:10.1017/CBO9780511662041.009, ISBN 0-521-38165-7, MR 1165537
• Boyd, E. Andrew; Faigle, Ulrich (1990), "An algorithmic characterization of antimatroids", Discrete Applied Mathematics, 28 (3): 197–205, doi:10.1016/0166-218X(90)90002-T, hdl:1911/101636.
• Chandran, L. S.; Ibarra, L.; Ruskey, F.; Sawada, J. (2003), "Generating and characterizing the perfect elimination orderings of a chordal graph" (PDF), Theoretical Computer Science, 307 (2): 303–317, doi:10.1016/S0304-3975(03)00221-4
• Dilworth, Robert P. (1940), "Lattices with unique irreducible decompositions", Annals of Mathematics, 41 (4): 771–777, doi:10.2307/1968857, JSTOR 1968857.
• Doignon, Jean-Paul; Falmagne, Jean-Claude (1999), Knowledge Spaces, Springer-Verlag, ISBN 3-540-64501-2.
• Edelman, Paul H. (1980), "Meet-distributive lattices and the anti-exchange closure", Algebra Universalis, 10 (1): 290–299, doi:10.1007/BF02482912, S2CID 120403229.
• Edelman, Paul H.; Saks, Michael E. (1988), "Combinatorial representation and convex dimension of convex geometries", Order, 5 (1): 23–32, doi:10.1007/BF00143895, S2CID 119826035.
• Eppstein, David (2008), Learning sequences, arXiv:0803.4030. Partially adapted as Chapters 13 and 14 of Falmagne, Jean-Claude; Albert, Dietrich; Doble, Chris; Eppstein, David; Hu, Xiangen, eds. (2013), Knowledge Spaces: Applications in Education, Springer-Verlag, doi:10.1007/978-3-642-35329-1, ISBN 978-3-642-35328-4.
• Farber, Martin; Jamison, Robert E. (1986), "Convexity in graphs and hypergraphs", SIAM Journal on Algebraic and Discrete Methods, 7 (3): 433–444, doi:10.1137/0607049, hdl:10338.dmlcz/127659, MR 0844046.
• Glasserman, Paul; Yao, David D. (1994), Monotone Structure in Discrete Event Systems, Wiley Series in Probability and Statistics, Wiley Interscience, ISBN 978-0-471-58041-6.
• Gordon, Gary (1997), "A β invariant for greedoids and antimatroids", Electronic Journal of Combinatorics, 4 (1): Research Paper 13, doi:10.37236/1298, MR 1445628.
• Jamison, Robert (1980), "Copoints in antimatroids", Proceedings of the Eleventh Southeastern Conference on Combinatorics, Graph Theory and Computing (Florida Atlantic Univ., Boca Raton, Fla., 1980), Vol. II, Congressus Numerantium, vol. 29, pp. 535–544, MR 0608454.
• Kashiwabara, Kenji; Nakamura, Masataka; Okamoto, Yoshio (2005), "The affine representation theorem for abstract convex geometries", Computational Geometry, 30 (2): 129–144, CiteSeerX 10.1.1.14.4965, doi:10.1016/j.comgeo.2004.05.001, MR 2107032.
• Kempner, Yulia; Levit, Vadim E. (2003), "Correspondence between two antimatroid algorithmic characterizations", Electronic Journal of Combinatorics, 10: Research Paper 44, arXiv:math/0307013, Bibcode:2003math......7013K, doi:10.37236/1737, MR 2014531, S2CID 11015967
• Knauer, Kolja (2009), "Chip-firing, antimatroids, and polyhedra", European Conference on Combinatorics, Graph Theory and Applications (EuroComb 2009), Electronic Notes in Discrete Mathematics, vol. 34, pp. 9–13, doi:10.1016/j.endm.2009.07.002, MR 2591410
• Korte, Bernhard; Lovász, László; Schrader, Rainer (1991), Greedoids, Springer-Verlag, pp. 19–43, ISBN 3-540-18190-3.
• Merchant, Nazarré; Riggle, Jason (2016), "OT grammars, beyond partial orders: ERC sets and antimatroids", Natural Language & Linguistic Theory, 34: 241–269, doi:10.1007/s11049-015-9297-5, S2CID 170567540.
• Monjardet, Bernard (1985), "A use for frequently rediscovering a concept", Order, 1 (4): 415–417, doi:10.1007/BF00582748, S2CID 119378521.
• Parmar, Aarati (2003), "Some Mathematical Structures Underlying Efficient Planning", AAAI Spring Symposium on Logical Formalization of Commonsense Reasoning (PDF).
Shelly Harvey
Shelly Lynn Harvey is a professor of Mathematics at Rice University. Her research interests include knot theory, low-dimensional topology, and group theory.[1]
Nationality: American
Alma mater: Rice University
Fields: Mathematics
Institutions: Rice University
Doctoral advisor: Tim Cochran
Early life
Harvey grew up in Rancho Cucamonga, California and graduated from California Polytechnic State University in 1997.[1][2] She received her Ph.D. from Rice University in 2002 under the supervision of Tim Cochran.[1][2][3] After postdoctoral studies at the University of California, San Diego and the Massachusetts Institute of Technology, she returned to Rice University in 2005 as the first female tenure-track mathematician there.[1][2]
Recognitions
Harvey was a Sloan Fellow in 2006. In 2012, she became one of the inaugural fellows of the American Mathematical Society.[4]
Selected publications
• Cochran, Tim D.; Harvey, Shelly (2008), "Homology and derived series of groups. II. Dwyer's theorem", Geometry & Topology, 12 (1): 199–232, arXiv:math/0609484, doi:10.2140/gt.2008.12.199, MR 2377249, S2CID 10589479.
• Cochran, Tim D.; Harvey, Shelly; Leidy, Constance (2009), "Knot concordance and higher-order Blanchfield duality", Geometry & Topology, 13 (3): 1419–1482, arXiv:0710.3082, doi:10.2140/gt.2009.13.1419, MR 2496049, S2CID 8072597.
• Cochran, Tim D.; Harvey, Shelly; Leidy, Constance (2011), "Primary decomposition and the fractal nature of knot concordance", Mathematische Annalen, 351 (2): 443–508, arXiv:0906.1373, doi:10.1007/s00208-010-0604-5, MR 2836668, S2CID 7556758.
References
1. Curriculum vitae, retrieved 2014-12-21.
2. Rao, Anita (2012), Shelly Harvey: Knot your typical California Girl!, Association for Women in Mathematics, retrieved 2014-12-21.
3. Shelly Harvey at the Mathematics Genealogy Project
4. List of Fellows of the American Mathematical Society, retrieved 2014-12-21.
External links
• Shelly Harvey's official website
• Shelly Harvey publications indexed by Google Scholar
|
Wikipedia
|
Shelly M. Jones
Shelly Monica Jones (born November 2, 1964) is an American mathematics educator. She is an associate professor of mathematics education at Central Connecticut State University.
Shelly M. Jones
Born: November 2, 1964
Occupation: Associate Professor
Employer: Central Connecticut State University
Notable work: Women Who Count: Honoring African American Women Mathematicians (2019)
Early life and education
Jones is African-American; she was raised in Bridgeport, Connecticut and went on to study computer science at Spelman College, graduating in 1986.[1][2] Jones received a master's degree in mathematics education from the University of Bridgeport and a Ph.D. in mathematics education from Illinois State University.[3]
Career and research
Jones is an associate professor at Central Connecticut State University in New Britain, Connecticut. She teaches undergraduate and graduate content, curriculum, and methods courses. Her focus includes culturally relevant mathematics, where she explains cognitively demanding mathematics skills from a relevant cultural perspective.[1] In addition, Jones's specialties include integrating elementary school mathematics and music, and the effects of college students’ attitudes and beliefs about mathematics on their success in college.
Jones' accomplishments have earned her recognition by Mathematically Gifted & Black as a Black History Month 2019 Honoree.[1]
Book
Jones is the author of the book Women Who Count: Honoring African American Women Mathematicians, published in 2019 by the American Mathematical Society.[4]
References
1. "Black History Month Honoree 2019: Shelly M. Jones". Mathematically Gifted & Black. 25 February 2019. Retrieved 4 June 2020.{{cite web}}: CS1 maint: url-status (link)
2. "Alumnae Features : Dr. Shelly Jones". Spelman Women to Watch. Retrieved 4 June 2020.{{cite web}}: CS1 maint: url-status (link)
3. Provost, Kerri. "Women in Science Spotlight: Dr. Shelly M. Jones". Connecticut Space Center. Retrieved 4 June 2020.{{cite web}}: CS1 maint: url-status (link)
4. Women Who Count: Honoring African American Women Mathematicians, ISBN 978-1470448899. Reviews:
• Knecht, Amanda. Mathematical Reviews. MR 3966443.{{cite journal}}: CS1 maint: untitled periodical (link)
• Dietz, Geoffrey (November 2019). "Women Who Count". MAA Reviews. Mathematical Association of America.
• Clark, Kathleen M. (June 2020). British Journal for the History of Mathematics. 35 (3): 253–255. doi:10.1080/26375451.2020.1778282. S2CID 222003448.{{cite journal}}: CS1 maint: untitled periodical (link)
External links
• Women Who Count: Honoring African American Women Mathematicians (official website)
• Culturally relevant pedagogy in mathematics: A critical need (TEDxCCSU video)
|
Wikipedia
|
Inverse distance weighting
Inverse distance weighting (IDW) is a type of deterministic method for multivariate interpolation with a known scattered set of points. The assigned values to unknown points are calculated with a weighted average of the values available at the known points. This method can also be used to create spatial weights matrices in spatial autocorrelation analyses (e.g. Moran's I).[1]
The name given to this type of method was motivated by the weighted average applied, since it resorts to the inverse of the distance to each known point ("amount of proximity") when assigning weights.
Definition of the problem
The expected result is a discrete assignment of the unknown function $u$ in a study region:
$u(x):x\to \mathbb {R} ,\quad x\in \mathbf {D} \subset \mathbb {R} ^{n},$
where $\mathbf {D} $ is the study region.
The set of $N$ known data points can be described as a list of tuples:
$[(x_{1},u_{1}),(x_{2},u_{2}),...,(x_{N},u_{N})].$
The function is to be "smooth" (continuous and once differentiable), to be exact ($u(x_{i})=u_{i}$) and to meet the user's intuitive expectations about the phenomenon under investigation. Furthermore, the function should be suitable for a computer application at a reasonable cost (nowadays, a basic implementation will probably make use of parallel resources).
Shepard's method
Historical reference
At the Harvard Laboratory for Computer Graphics and Spatial Analysis, beginning in 1965, a varied collection of scientists converged to rethink, among other things, what are now called geographic information systems.[2]
The motive force behind the Laboratory, Howard Fisher, conceived an improved computer mapping program that he called SYMAP, whose interpolation he wanted to improve from the start. He showed Harvard College freshmen his work on SYMAP, and many of them participated in Laboratory events. One freshman, Donald Shepard, decided to overhaul the interpolation in SYMAP, resulting in his famous article from 1968.[3]
Shepard's algorithm was also influenced by the theoretical approach of William Warntz and others at the Lab who worked with spatial analysis. He conducted a number of experiments with the exponent of distance, deciding on something closer to the gravity model (exponent of -2). Shepard implemented not just basic inverse distance weighting, but also allowed barriers (permeable and absolute) to interpolation.
Other research centers were working on interpolation at this time, particularly University of Kansas and their SURFACE II program. Still, the features of SYMAP were state-of-the-art, even though programmed by an undergraduate.
Basic form
Given a set of sample points $\{\mathbf {x} _{i},u_{i}|{\text{for }}\mathbf {x} _{i}\in \mathbb {R} ^{n},u_{i}\in \mathbb {R} \}_{i=1}^{N}$, the IDW interpolation function $u(\mathbf {x} ):\mathbb {R} ^{n}\to \mathbb {R} $ is defined as:
$u(\mathbf {x} )={\begin{cases}{\dfrac {\sum _{i=1}^{N}{w_{i}(\mathbf {x} )u_{i}}}{\sum _{i=1}^{N}{w_{i}(\mathbf {x} )}}},&{\text{if }}d(\mathbf {x} ,\mathbf {x} _{i})\neq 0{\text{ for all }}i,\\u_{i},&{\text{if }}d(\mathbf {x} ,\mathbf {x} _{i})=0{\text{ for some }}i,\end{cases}}$
where
$w_{i}(\mathbf {x} )={\frac {1}{d(\mathbf {x} ,\mathbf {x} _{i})^{p}}}$
is a simple IDW weighting function, as defined by Shepard.[3] Here x denotes an interpolated (arbitrary) point, xi is an interpolating (known) point, $d$ is a given distance (metric operator) from the known point xi to the unknown point x, N is the total number of known points used in the interpolation, and $p$ is a positive real number, called the power parameter.
Here weight decreases as distance increases from the interpolated points. Greater values of $p$ assign greater influence to values closest to the interpolated point, with the result turning into a mosaic of tiles (a Voronoi diagram) with nearly constant interpolated value for large values of p. For two dimensions, power parameters $p\leq 2$ cause the interpolated values to be dominated by points far away, since with a density $\rho $ of data points and neighboring points between distances $r_{0}$ to $R$, the summed weight is approximately
$\sum _{j}w_{j}\approx \int _{r_{0}}^{R}{\frac {2\pi r\rho \,dr}{r^{p}}}=2\pi \rho \int _{r_{0}}^{R}r^{1-p}\,dr,$
which diverges for $R\rightarrow \infty $ and $p\leq 2$. For M dimensions, the same argument holds for $p\leq M$. For the choice of value for p, one can consider the degree of smoothing desired in the interpolation, the density and distribution of samples being interpolated, and the maximum distance over which an individual sample is allowed to influence the surrounding ones.
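For illustration, a minimal Python sketch of the basic form (the function name and the small test data are illustrative choices, not part of Shepard's paper):

```python
import numpy as np

def idw_interpolate(x, xi, ui, p=2.0):
    """Shepard's basic inverse distance weighting evaluated at query points x.

    x  : (M, n) array of points where the interpolant is evaluated
    xi : (N, n) array of known (interpolating) points
    ui : (N,)   array of known values
    p  : power parameter (p > 0)
    """
    x = np.atleast_2d(x)
    d = np.linalg.norm(x[:, None, :] - xi[None, :, :], axis=-1)  # (M, N) pairwise distances
    u = np.empty(len(x))
    for k, dk in enumerate(d):
        hit = dk == 0.0
        if hit.any():                      # query coincides with a known point: interpolation is exact
            u[k] = ui[hit.argmax()]
        else:
            w = 1.0 / dk**p                # weights 1 / d(x, x_i)^p
            u[k] = np.sum(w * ui) / np.sum(w)
    return u

# Four known points at the corners of the unit square; query the centre and a corner.
xi = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ui = np.array([0.0, 1.0, 1.0, 2.0])
print(idw_interpolate(np.array([[0.5, 0.5], [0.0, 0.0]]), xi, ui))  # [1.0, 0.0]
```

Raising p concentrates the weight on the nearest samples, producing the near-constant tiles described above; lowering it toward the dimension of the space lets distant points dominate.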
Shepard's method is a consequence of minimizing a functional related to a measure of deviations between the tuple of the interpolated point {x, u} and the tuples of interpolating points {xi, ui}, defined as:
$\phi (\mathbf {x} ,u)=\left(\sum _{i=0}^{N}{\frac {(u-u_{i})^{2}}{d(\mathbf {x} ,\mathbf {x} _{i})^{p}}}\right)^{\frac {1}{p}},$
derived from the minimizing condition:
${\frac {\partial \phi (\mathbf {x} ,u)}{\partial u}}=0.$
The method can easily be extended to spaces of other dimensions, and it is in fact a generalization of Lagrange approximation to multidimensional spaces. A modified version of the algorithm designed for trivariate interpolation was developed by Robert J. Renka[4] and is available in Netlib as algorithm 661 in the TOMS library.
Example in 1 dimension
Modified Shepard's method
Another modification of Shepard's method calculates the interpolated value using only the nearest neighbors within an R-sphere (instead of the full sample). Weights are slightly modified in this case:
$w_{k}(\mathbf {x} )=\left({\frac {\max(0,R-d(\mathbf {x} ,\mathbf {x} _{k}))}{Rd(\mathbf {x} ,\mathbf {x} _{k})}}\right)^{2}.$
When combined with a fast spatial search structure (such as a k-d tree), this becomes an efficient O(N log N) interpolation method suitable for large-scale problems.
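A sketch of the modified weights (illustrative; the radius R and the random test data are arbitrary, and for simplicity the distances are computed against all samples rather than only the neighbours returned by a spatial index):

```python
import numpy as np

def modified_shepard(x, xi, ui, R):
    """Interpolate at a single query point x using only samples inside the R-sphere."""
    d = np.linalg.norm(xi - x, axis=1)
    if np.any(d == 0.0):
        return ui[d.argmin()]                        # exact at a known point
    w = (np.maximum(0.0, R - d) / (R * d)) ** 2      # zero for samples outside the R-sphere
    if w.sum() == 0.0:
        raise ValueError("no sample points inside radius R")
    return np.sum(w * ui) / np.sum(w)

rng = np.random.default_rng(0)
xi = rng.random((200, 2))                            # scattered sample locations in the unit square
ui = np.sin(4 * xi[:, 0]) + xi[:, 1]                 # sample values
print(modified_shepard(np.array([0.5, 0.5]), xi, ui, R=0.2))
```

Replacing the full distance scan with a radius query on a k-d tree is what yields the N log N behaviour mentioned above.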
See also
• Field (geography)
• Gravity model
• Kernel density estimation
• Spatial analysis
• Tobler's first law of geography
• Tobler's second law of geography
References
1. "Spatial Autocorrelation (Global Moran's I) (Spatial Statistics)". ArcGIS Pro Documentation. ESRI. Retrieved 13 September 2022.
2. Chrisman, Nicholas. "History of the Harvard Laboratory for Computer Graphics: a Poster Exhibit" (PDF).
3. Shepard, Donald (1968). "A two-dimensional interpolation function for irregularly-spaced data". Proceedings of the 1968 ACM National Conference. pp. 517–524. doi:10.1145/800186.810616.
4. Robert Renka, Professor Emeritus, University of North Texas
|
Wikipedia
|
Complex reflection group
In mathematics, a complex reflection group is a finite group acting on a finite-dimensional complex vector space that is generated by complex reflections: non-trivial elements that fix a complex hyperplane pointwise.
Complex reflection groups arise in the study of the invariant theory of polynomial rings. In the mid-20th century, they were completely classified in work of Shephard and Todd. Special cases include the symmetric group of permutations, the dihedral groups, and more generally all finite real reflection groups (the Coxeter groups or Weyl groups, including the symmetry groups of regular polyhedra).
Definition
A (complex) reflection r (sometimes also called pseudo reflection or unitary reflection) of a finite-dimensional complex vector space V is an element $r\in GL(V)$ of finite order that fixes a complex hyperplane pointwise, that is, the fixed-space $\operatorname {Fix} (r):=\operatorname {ker} (r-\operatorname {Id} _{V})$ has codimension 1.
A (finite) complex reflection group $W\subseteq GL(V)$ is a finite subgroup of $GL(V)$ that is generated by reflections.
Properties
Any real reflection group becomes a complex reflection group if we extend the scalars from R to C. In particular, all finite Coxeter groups or Weyl groups give examples of complex reflection groups.
A complex reflection group W is irreducible if the only W-invariant proper subspace of the corresponding vector space is the origin. In this case, the dimension of the vector space is called the rank of W.
The Coxeter number $h$ of an irreducible complex reflection group W of rank $n$ is defined as $h={\frac {|{\mathcal {R}}|+|{\mathcal {A}}|}{n}}$ where ${\mathcal {R}}$ denotes the set of reflections and ${\mathcal {A}}$ denotes the set of reflecting hyperplanes. In the case of real reflection groups, this definition reduces to the usual definition of the Coxeter number for finite Coxeter systems.
Classification
Any complex reflection group is a product of irreducible complex reflection groups, acting on the sum of the corresponding vector spaces.[1] So it is sufficient to classify the irreducible complex reflection groups.
The irreducible complex reflection groups were classified by G. C. Shephard and J. A. Todd (1954). They proved that every irreducible belonged to an infinite family G(m, p, n) depending on 3 positive integer parameters (with p dividing m) or was one of 34 exceptional cases, which they numbered from 4 to 37.[2] The group G(m, 1, n) is the generalized symmetric group; equivalently, it is the wreath product of the symmetric group Sym(n) by a cyclic group of order m. As a matrix group, its elements may be realized as monomial matrices whose nonzero elements are mth roots of unity.
The group G(m, p, n) is an index-p subgroup of G(m, 1, n). G(m, p, n) is of order $m^{n}n!/p$. As matrices, it may be realized as the subset in which the product of the nonzero entries is an (m/p)th root of unity (rather than just an mth root). Algebraically, G(m, p, n) is a semidirect product of an abelian group of order $m^{n}/p$ by the symmetric group Sym(n); the elements of the abelian group are of the form $(\theta ^{a_{1}},\theta ^{a_{2}},\ldots ,\theta ^{a_{n}})$, where θ is a primitive mth root of unity and $\textstyle \sum a_{i}\equiv 0{\pmod {p}}$, and Sym(n) acts by permutations of the coordinates.[3]
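For small parameters the monomial-matrix description can be enumerated directly; the following brute-force sketch (illustrative only) builds G(m, p, n) and confirms the order $m^{n}n!/p$:

```python
import itertools
import math
import numpy as np

def G(m, p, n):
    """Enumerate G(m, p, n) as n-by-n monomial matrices over the m-th roots of unity."""
    assert m % p == 0
    theta = np.exp(2j * np.pi / m)
    elements = []
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]                    # permutation matrix (the Sym(n) part)
        for exps in itertools.product(range(m), repeat=n):
            if sum(exps) % p != 0:                   # product of entries must be an (m/p)-th root of unity
                continue
            D = np.diag([theta ** e for e in exps])  # diagonal part (the abelian normal subgroup)
            elements.append(D @ P)
    return elements

for m, p, n in [(3, 1, 2), (4, 2, 2), (2, 2, 3)]:
    print((m, p, n), len(G(m, p, n)), m ** n * math.factorial(n) // p)
```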
The group G(m,p,n) acts irreducibly on Cn except in the cases m = 1, n > 1 (the symmetric group) and G(2, 2, 2) (the Klein four-group). In these cases, Cn splits as a sum of irreducible representations of dimensions 1 and n − 1.
Coxeter groups
When m = 2, the representation described in the previous section consists of matrices with real entries, and hence in these cases G(m,p,n) is a finite Coxeter group. In particular:[4]
• G(1, 1, n) has type An−1 = [3,3,...,3,3]; it is the symmetric group of order n!
• G(2, 1, n) has type Bn = [3,3,...,3,4]; it is the hyperoctahedral group of order $2^{n}n!$
• G(2, 2, n) has type Dn = [3,3,...,31,1], of order $2^{n}n!/2$.
In addition, when m = p and n = 2, the group G(p, p, 2) is the dihedral group of order 2p; as a Coxeter group, it has type I2(p) = [p] (and it is the Weyl group G2 when p = 6).
Other special cases and coincidences
The only cases when two groups G(m, p, n) are isomorphic as complex reflection groups are that G(ma, pa, 1) is isomorphic to G(mb, pb, 1) for any positive integers a, b (and both are isomorphic to the cyclic group of order m/p). However, there are other cases when two such groups are isomorphic as abstract groups.
The groups G(3, 3, 2) and G(1, 1, 3) are isomorphic to the symmetric group Sym(3). The groups G(2, 2, 3) and G(1, 1, 4) are isomorphic to the symmetric group Sym(4). Both G(2, 1, 2) and G(4, 4, 2) are isomorphic to the dihedral group of order 8. And the groups G(2p, p, 1) are cyclic of order 2, as is G(1, 1, 2).
List of irreducible complex reflection groups
There are a few duplicates in the first 3 lines of this list; see the previous section for details.
• ST is the Shephard–Todd number of the reflection group.
• Rank is the dimension of the complex vector space the group acts on.
• Structure describes the structure of the group. The symbol * stands for a central product of two groups. For rank 2, the quotient by the (cyclic) center is the group of rotations of a tetrahedron, octahedron, or icosahedron (T = Alt(4), O = Sym(4), I = Alt(5), of orders 12, 24, 60), as stated in the table. For the notation 21+4, see extra special group.
• Order is the number of elements of the group.
• Reflections describes the number of reflections: $2^{6}4^{12}$ means that there are 6 reflections of order 2 and 12 of order 4.
• Degrees gives the degrees of the fundamental invariants of the ring of polynomial invariants. For example, the invariants of group number 4 form a polynomial ring with 2 generators of degrees 4 and 6.
ST Rank Structure and names / Coxeter names Order Reflections Degrees Codegrees
1 n−1 Symmetric group G(1,1,n) = Sym(n) n! 2n(n − 1)/2 2, 3, ...,n 0,1,...,n − 2
2 n G(m,p,n) m > 1, n > 1, p|m (G(2,2,2) is reducible) mnn!/p 2mn(n−1)/2,dnφ(d) (d|m/p, d > 1) m,2m,..,(n − 1)m; mn/p 0,m,..., (n − 1)m if p < m; 0,m,...,(n − 2)m, (n − 1)m − n if p = m
2 2 G(p,1,2) p > 1,p[4]2 or 2p2 2p,d2φ(d) (d|p, d > 1) p; 2p 0,p
2 2 Dihedral group G(p,p,2) p > 2[p] or 2p 2p 2,p 0,p-2
3 1 Cyclic group G(p,1,1) = Zpp[] or p dφ(d) (d|p, d > 1) p 0
4 2 W(L2), Z2.T3[3]3 or , ⟨2,3,3⟩ 24 38 4,6 0,2
5 2 Z6.T3[4]3 or 72 316 6,12 0,6
6 2 Z4.T3[6]2 or 48 2638 4,12 0,8
7 2 Z12.T‹3,3,3›2 or ⟨2,3,3⟩6 144 26316 12,12 0,12
8 2 Z4.O4[3]4 or 96 26412 8,12 0,4
9 2 Z8.O4[6]2 or or ⟨2,3,4⟩4 192 218412 8,24 0,16
10 2 Z12.O4[4]3 or 288 26316412 12,24 0,12
11 2 Z24.O⟨2,3,4⟩12 576 218316412 24,24 0,24
12 2 Z2.O= GL2(F3)⟨2,3,4⟩ 48 212 6,8 0,10
13 2 Z4.O⟨2,3,4⟩2 96 218 8,12 0,16
14 2 Z6.O3[8]2 or 144 212316 6,24 0,18
15 2 Z12.O⟨2,3,4⟩6 288 218316 12,24 0,24
16 2 Z10.I, ⟨2,3,5⟩×Z55[3]5 or 600 548 20,30 0,10
17 2 Z20.I5[6]2 or 1200 230548 20,60 0,40
18 2 Z30.I5[4]3 or 1800 340548 30,60 0,30
19 2 Z60.I⟨2,3,5⟩30 3600 230340548 60,60 0,60
20 2 Z6.I3[5]3 or 360 340 12,30 0,18
21 2 Z12.I3[10]2 or 720 230340 12,60 0,48
22 2 Z4.I⟨2,3,5⟩2 240 230 12,20 0,28
23 3 W(H3) = Z2 × PSL2(5)[5,3], 120 215 2,6,10 0,4,8
24 3 W(J3(4)) = Z2 × PSL2(7), Klein[1 1 14]4, 336 221 4,6,14 0,8,10
25 3 W(L3) = W(P3) = 31+2.SL2(3) Hessian3[3]3[3]3, 648 324 6,9,12 0,3,6
26 3 W(M3) =Z2 ×31+2.SL2(3) Hessian2[4]3[3]3, 1296 29 324 6,12,18 0,6,12
27 3 W(J3(5)) = Z2 × (Z3.Alt(6)), Valentiner [1 1 15]4, [1 1 14]5, 2160 2^45 6,12,30 0,18,24
28 4 W(F4) = (SL2(3)* SL2(3)).(Z2 × Z2)[3,4,3], 1152 212+12 2,6,8,12 0,4,6,10
29 4 W(N4) = (Z4*21 + 4).Sym(5)[1 1 2]4, 7680 240 4,8,12,20 0,8,12,16
30 4 W(H4) = (SL2(5)*SL2(5)).Z2[5,3,3], 14400 260 2,12,20,30 0,10,18,28
31 4 W(EN4) = W(O4) = (Z4*21 + 4).Sp4(2) 46080 260 8,12,20,24 0,12,16,28
32 4 W(L4) = Z3 × Sp4(3)3[3]3[3]3[3]3, 155520 380 12,18,24,30 0,6,12,18
33 5 W(K5) = Z2 ×Ω5(3) = Z2 × PSp4(3)= Z2 × PSU4(2)[1 2 2]3, 51840 245 4,6,10,12,18 0,6,8,12,14
34 6 W(K6) = Z3.Ω−6(3).Z2, Mitchell's group [1 2 3]3, 39191040 2^126 6,12,18,24,30,42 0,12,18,24,30,36
35 6 W(E6) = SO5(3) = O−6(2) = PSp4(3).Z2 = PSU4(2).Z2 [32,2,1], 51840 2^36 2,5,6,8,9,12 0,3,4,6,7,10
36 7 W(E7) = Z2 ×Sp6(2)[33,2,1], 2903040 263 2,6,8,10,12,14,18 0,4,6,8,10,12,16
37 8 W(E8) = Z2.O+8(2) [34,2,1], 696729600 2^120 2,8,12,14,18,20,24,30 0,6,10,12,16,18,22,28
For more information, including diagrams, presentations, and codegrees of complex reflection groups, see the tables in (Michel Broué, Gunter Malle & Raphaël Rouquier 1998).
Degrees
Shephard and Todd proved that a finite group acting on a complex vector space is a complex reflection group if and only if its ring of invariants is a polynomial ring (Chevalley–Shephard–Todd theorem). For $\ell $ being the rank of the reflection group, the degrees $d_{1}\leq d_{2}\leq \ldots \leq d_{\ell }$ of the generators of the ring of invariants are called degrees of W and are listed in the column above headed "degrees". They also showed that many other invariants of the group are determined by the degrees as follows:
• The center of an irreducible reflection group is cyclic of order equal to the greatest common divisor of the degrees.
• The order of a complex reflection group is the product of its degrees.
• The number of reflections is the sum of the degrees minus the rank.
• An irreducible complex reflection group comes from a real reflection group if and only if it has an invariant of degree 2.
• The degrees di satisfy the formula $\prod _{i=1}^{\ell }(q+d_{i}-1)=\sum _{w\in W}q^{\dim(V^{w})}.$
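The last identity can be checked by brute force in small cases. The sketch below is an illustration for the permutation action of Sym(3) on $\mathbb {C} ^{3}$ (a reducible reflection group whose invariants are the elementary symmetric polynomials, of degrees 1, 2, 3); it compares the two sides as polynomials in q:

```python
import itertools
import numpy as np
import sympy as sp

q = sp.symbols('q')
n = 3
degrees = [1, 2, 3]                                  # degrees of Sym(3) acting on C^3

lhs = sp.Mul(*[q + d - 1 for d in degrees])          # prod_i (q + d_i - 1)

rhs = 0
for perm in itertools.permutations(range(n)):
    w = np.eye(n)[list(perm)]                        # permutation matrix for w
    fix_dim = int(n - np.linalg.matrix_rank(w - np.eye(n)))   # dim V^w
    rhs += q ** fix_dim                              # sum over w of q^{dim V^w}

print(sp.expand(lhs - rhs))                          # 0: both sides equal q(q+1)(q+2)
```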
Codegrees
For $\ell $ being the rank of the reflection group, the codegrees $d_{1}^{*}\geq d_{2}^{*}\geq \ldots \geq d_{\ell }^{*}$ of W can be defined by $\prod _{i=1}^{\ell }(q-d_{i}^{*}-1)=\sum _{w\in W}\det(w)q^{\dim(V^{w})}.$
• For a real reflection group, the codegrees are the degrees minus 2.
• The number of reflection hyperplanes is the sum of the codegrees plus the rank.
Well-generated complex reflection groups
By definition, every complex reflection group is generated by its reflections. The set of reflections is not a minimal generating set, however, and every irreducible complex reflection group of rank n has a minimal generating set consisting of either n or n + 1 reflections. In the former case, the group is said to be well-generated.
The property of being well-generated is equivalent to the condition $d_{i}+d_{i}^{*}=d_{\ell }$ for all $1\leq i\leq \ell $. Thus, for example, one can read off from the classification that the group G(m, p, n) is well-generated if and only if p = 1 or m.
For irreducible well-generated complex reflection groups, the Coxeter number h defined above equals the largest degree, $h=d_{\ell }$. A reducible complex reflection group is said to be well-generated if it is a product of irreducible well-generated complex reflection groups. Every finite real reflection group is well-generated.
Shephard groups
The well-generated complex reflection groups include a subset called the Shephard groups. These groups are the symmetry groups of regular complex polytopes. In particular, they include the symmetry groups of regular real polyhedra. The Shephard groups may be characterized as the complex reflection groups that admit a "Coxeter-like" presentation with a linear diagram. That is, a Shephard group has associated positive integers p1, ..., pn and q1, ..., qn − 1 such that there is a generating set s1, ..., sn satisfying the relations
$(s_{i})^{p_{i}}=1$ for i = 1, ..., n,
$s_{i}s_{j}=s_{j}s_{i}$ if $|i-j|>1$,
and
$s_{i}s_{i+1}s_{i}s_{i+1}\cdots =s_{i+1}s_{i}s_{i+1}s_{i}\cdots $ where the products on both sides have qi terms, for i = 1, ..., n − 1.
This information is sometimes collected in the Coxeter-type symbol p1[q1]p2[q2] ... [qn − 1]pn, as seen in the table above.
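As a small concrete illustration, the symmetric group Sym(3) is the Shephard group 2[3]2: two generators of order p1 = p2 = 2 whose alternating relation has q1 = 3 terms on each side. The sketch below (with explicitly chosen, illustrative 2 × 2 reflection matrices) verifies the presentation numerically:

```python
import numpy as np

def reflection(angle):
    """Reflection of the real plane across the line through the origin at the given angle."""
    c, s = np.cos(2 * angle), np.sin(2 * angle)
    return np.array([[c, s], [s, -c]])

s1 = reflection(0.0)            # reflect across the x-axis
s2 = reflection(np.pi / 3)      # reflect across the line at 60 degrees

I = np.eye(2)
print(np.allclose(s1 @ s1, I), np.allclose(s2 @ s2, I))   # (s_i)^{p_i} = 1 with p_i = 2
print(np.allclose(s1 @ s2 @ s1, s2 @ s1 @ s2))            # s1 s2 s1 = s2 s1 s2 (q_1 = 3 terms each side)
```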
Among groups in the infinite family G(m, p, n), the Shephard groups are those in which p = 1. There are also 18 exceptional Shephard groups, of which three are real.[5][6]
Cartan matrices
An extended Cartan matrix defines the unitary group. Shephard groups of rank n have n generators. Ordinary Cartan matrices have diagonal elements 2, while unitary reflections do not have this restriction.[7] For example, the rank 1 group of order p (with symbol p[]) is defined by the 1 × 1 matrix $\left[1-e^{2\pi i/p}\right]$.
Given: $\zeta _{p}=e^{2\pi i/p},\omega =\zeta _{3}=e^{2\pi i/3}={\tfrac {1}{2}}(-1+i{\sqrt {3}}),\zeta _{4}=e^{2\pi i/4}=i,\zeta _{5}=e^{2\pi i/5}={\tfrac {1}{4}}(\left({\sqrt {5}}-1\right)+i{\sqrt {2(5+{\sqrt {5}})}}),\tau ={\tfrac {1+{\sqrt {5}}}{2}},\lambda ={\tfrac {1+i{\sqrt {7}}}{2}},\omega ={\tfrac {-1+i{\sqrt {3}}}{2}}$.
Rank 1
Group Cartan Group Cartan
2[]$\left[{\begin{matrix}2\end{matrix}}\right]$ 3[]$\left[{\begin{matrix}1-\omega \end{matrix}}\right]$
4[]$\left[{\begin{matrix}1-i\end{matrix}}\right]$ 5[]$\left[{\begin{matrix}1-\zeta _{5}\end{matrix}}\right]$
Rank 2
Group Cartan Group Cartan
G4 3[3]3$\left[{\begin{smallmatrix}1-\omega &1\\-\omega &1-\omega \end{smallmatrix}}\right]$ G5 3[4]3$\left[{\begin{smallmatrix}1-\omega &1\\-2\omega &1-\omega \end{smallmatrix}}\right]$
G6 2[6]3$\left[{\begin{smallmatrix}2&1\\1-\omega +i\omega ^{2}&1-\omega \end{smallmatrix}}\right]$ G8 4[3]4$\left[{\begin{smallmatrix}1-i&1\\-i&1-i\end{smallmatrix}}\right]$
G9 2[6]4$\left[{\begin{smallmatrix}2&1\\(1+{\sqrt {2}})\zeta _{8}&1+i\end{smallmatrix}}\right]$ G10 3[4]4$\left[{\begin{smallmatrix}1-\omega &1\\-i-\omega &1-i\end{smallmatrix}}\right]$
G14 3[8]2$\left[{\begin{smallmatrix}1-\omega &1\\1-\omega +\omega ^{2}{\sqrt {2}}&2\end{smallmatrix}}\right]$ G16 5[3]5$\left[{\begin{smallmatrix}1-\zeta _{5}&1\\-\zeta _{5}&1-\zeta _{5}\end{smallmatrix}}\right]$
G17 2[6]5$\left[{\begin{smallmatrix}2&1\\1-\zeta _{5}-i\zeta ^{3}&1-\zeta _{5}\end{smallmatrix}}\right]$ G18 3[4]5$\left[{\begin{smallmatrix}1-\omega &1\\-\omega -\zeta _{5}&1-\zeta _{5}\end{smallmatrix}}\right]$
G20 3[5]3$\left[{\begin{smallmatrix}1-\omega &1\\\omega (\tau -2)&1-\omega \end{smallmatrix}}\right]$ G21 2[10]3$\left[{\begin{smallmatrix}2&1\\1-\omega -i\omega ^{2}\tau &1-\omega \end{smallmatrix}}\right]$
Rank 3
Group Cartan Group Cartan
G22 <5,3,2>2$\left[{\begin{smallmatrix}2&\tau +i-1&-i+1\\-\tau -i-1&2&i\\i-1&-i&2\end{smallmatrix}}\right]$ G23 [5,3]$\left[{\begin{smallmatrix}2&-\tau &0\\-\tau &2&-1\\0&-1&2\end{smallmatrix}}\right]$
G24 [1 1 14]4$\left[{\begin{smallmatrix}2&-1&-\lambda \\-1&2&-1\\1+\lambda &-1&2\end{smallmatrix}}\right]$ G25 3[3]3[3]3$\left[{\begin{smallmatrix}1-\omega &\omega ^{2}&0\\-\omega ^{2}&1-\omega &-\omega ^{2}\\0&\omega ^{2}&1-\omega \end{smallmatrix}}\right]$
G26 3[3]3[4]2$\left[{\begin{smallmatrix}1-\omega &-\omega ^{2}&0\\\omega ^{2}&1-\omega &-1\\0&-1+\omega &2\end{smallmatrix}}\right]$ G27 [1 1 15]4$\left[{\begin{smallmatrix}2&-\tau &-\omega \\-\tau &2&-\omega ^{2}\\-\omega ^{2}&\omega &2\end{smallmatrix}}\right]$
Rank 4
Group Cartan Group Cartan
G28 [3,4,3]$\left[{\begin{smallmatrix}2&-1&0&0\\-1&2&-2&0\\0&-1&2&-1\\0&0&-1&2\end{smallmatrix}}\right]$ G29 [1 1 2]4$\left[{\begin{smallmatrix}2&-1&i+1&0\\-1&2&-i&0\\-i+1&i&2&-1\\0&0&-1&2\end{smallmatrix}}\right]$
G30 [5,3,3]$\left[{\begin{smallmatrix}2&-\tau &0&0\\-\tau &2&-1&0\\0&-1&2&-1\\0&0&-1&2\end{smallmatrix}}\right]$ G32 3[3]3[3]3$\left[{\begin{smallmatrix}1-\omega &\omega ^{2}&0&0\\-\omega ^{2}&1-\omega &-\omega ^{2}&0\\0&\omega ^{2}&1-\omega &\omega ^{2}\\0&0&-\omega ^{2}&1-\omega \end{smallmatrix}}\right]$
Rank 5
Group Cartan Group Cartan
G31 O4$\left[{\begin{smallmatrix}2&-1&i+1&0&-i+1\\-1&2&-i&0&0\\-i+1&i&2&-1&-i+1\\0&0&-1&2&-1\\i+1&0&i+1&-1&2\end{smallmatrix}}\right]$ G33 [1 2 2]3$\left[{\begin{smallmatrix}2&-1&0&0&0\\-1&2&-1&-1&0\\0&-1&2&-\omega &0\\0&-1&-\omega ^{2}&2&-\omega ^{2}\\0&0&0&-\omega &2\end{smallmatrix}}\right]$
References
1. Lehrer and Taylor, Theorem 1.27.
2. Lehrer and Taylor, p. 271.
3. Lehrer and Taylor, Section 2.2.
4. Lehrer and Taylor, Example 2.11.
5. Peter Orlik, Victor Reiner, Anne V. Shepler. The sign representation for Shephard groups. Mathematische Annalen. March 2002, Volume 322, Issue 3, pp 477–492. DOI:10.1007/s002080200001
6. Coxeter, H. S. M.; Regular Complex Polytopes, Cambridge University Press, 1974.
7. Unitary Reflection Groups, pp.91-93
• Broué, Michel; Malle, Gunter; Rouquier, Raphaël (1995), "On complex reflection groups and their associated braid groups" (PDF), Representations of groups (Banff, AB, 1994), CMS Conf. Proc., vol. 16, Providence, R.I.: American Mathematical Society, pp. 1–13, MR 1357192
• Broué, Michel; Malle, Gunter; Rouquier, Raphaël (1998), "Complex reflection groups, braid groups, Hecke algebras", Journal für die reine und angewandte Mathematik, 1998 (500): 127–190, CiteSeerX 10.1.1.128.2907, doi:10.1515/crll.1998.064, ISSN 0075-4102, MR 1637497
• Deligne, Pierre (1972), "Les immeubles des groupes de tresses généralisés", Inventiones Mathematicae, 17 (4): 273–302, Bibcode:1972InMat..17..273D, doi:10.1007/BF01406236, ISSN 0020-9910, MR 0422673, S2CID 123680847
• Hiller, Howard, Geometry of Coxeter groups. Research Notes in Mathematics, 54. Pitman (Advanced Publishing Program), Boston, Mass.-London, 1982. iv+213 pp. ISBN 0-273-08517-4
• Lehrer, Gustav I.; Taylor, Donald E. (2009), Unitary reflection groups, Australian Mathematical Society Lecture Series, vol. 20, Cambridge University Press, ISBN 978-0-521-74989-3, MR 2542964
• Shephard, G. C.; Todd, J. A. (1954), "Finite unitary reflection groups", Canadian Journal of Mathematics, Canadian Mathematical Society, 6: 274–304, doi:10.4153/CJM-1954-028-3, ISSN 0008-414X, MR 0059914, S2CID 3342221
• Coxeter, Finite Groups Generated by Unitary Reflections, 1966, 4. The Graphical Notation, Table of n-dimensional groups generated by n Unitary Reflections. pp. 422–423
External links
• MAGMA Computational Algebra System page
|
Wikipedia
|
Shephard's problem
In mathematics, Shephard's problem is the following geometrical question asked by Geoffrey Colin Shephard in 1964: if K and L are centrally symmetric convex bodies in n-dimensional Euclidean space such that whenever K and L are projected onto a hyperplane, the volume of the projection of K is smaller than the volume of the projection of L, then does it follow that the volume of K is smaller than that of L?[1]
In this case, "centrally symmetric" means that the reflection of K in the origin, −K, is a translate of K, and similarly for L. If πk : Rn → Πk is a projection of Rn onto some k-dimensional hyperplane Πk (not necessarily a coordinate hyperplane) and Vk denotes k-dimensional volume, Shephard's problem is to determine the truth or falsity of the implication
$V_{k}(\pi _{k}(K))\leq V_{k}(\pi _{k}(L)){\mbox{ for all }}1\leq k<n\implies V_{n}(K)\leq V_{n}(L).$
Vk(πk(K)) is sometimes known as the brightness of K and the function Vk o πk as a (k-dimensional) brightness function.
In dimensions n = 1 and 2, the answer to Shephard's problem is "yes". In 1967, however, Petty and Schneider showed that the answer is "no" for every n ≥ 3.[2][3] The solution of Shephard's problem requires Minkowski's first inequality for convex bodies and the notion of projection bodies of convex bodies.
See also
• Busemann–Petty problem
Notes
1. Shephard 1964.
2. Petty 1967.
3. Schneider 1967.
References
• Gardner, Richard J. (2002). "The Brunn-Minkowski inequality". Bulletin of the American Mathematical Society. New Series. 39 (3): 355–405 (electronic). doi:10.1090/S0273-0979-02-00941-2.
• Petty, Clinton M. (1967). "Projection bodies". Proceedings of the Colloquium on Convexity (Copenhagen, 1965). Kobenhavns Univ. Mat. Inst., Copenhagen. pp. 234–241. MR 0216369.
• Schneider, Rolf (1967). "Zur einem Problem von Shephard über die Projektionen konvexer Körper". Mathematische Zeitschrift (in German). 101: 71–82. doi:10.1007/BF01135693.
• Shephard, G. C. (1964), "Shadow systems of convex sets", Israel Journal of Mathematics, 2 (4): 229–236, doi:10.1007/BF02759738, ISSN 0021-2172, MR 0179686
|
Wikipedia
|
Chevalley–Shephard–Todd theorem
In mathematics, the Chevalley–Shephard–Todd theorem in invariant theory of finite groups states that the ring of invariants of a finite group acting on a complex vector space is a polynomial ring if and only if the group is generated by pseudoreflections. In the case of subgroups of the complex general linear group the theorem was first proved by G. C. Shephard and J. A. Todd (1954) who gave a case-by-case proof. Claude Chevalley (1955) soon afterwards gave a uniform proof. It has been extended to finite linear groups over an arbitrary field in the non-modular case by Jean-Pierre Serre.
Statement of the theorem
Let V be a finite-dimensional vector space over a field K and let G be a finite subgroup of the general linear group GL(V). An element s of GL(V) is called a pseudoreflection if it fixes a codimension 1 subspace of V and is not the identity transformation I, or equivalently, if the kernel Ker (s − I) has codimension one in V. Assume that the order of G is relatively prime to the characteristic of K (the so-called non-modular case). Then the following properties are equivalent:[1]
• (A) The group G is generated by pseudoreflections.
• (B) The algebra of invariants K[V]G is a (free) polynomial algebra.
• (B′) The algebra of invariants K[V]G is a regular ring.
• (C) The algebra K[V] is a free module over K[V]G.
• (C′) The algebra K[V] is a projective module over K[V]G.
In the case when the field K is the field C of complex numbers, the first condition is usually stated as "G is a complex reflection group". Shephard and Todd derived a full classification of such groups.
Examples
• Let V be one-dimensional. Then any finite group faithfully acting on V is a subgroup of the multiplicative group of the field K, and hence a cyclic group. It follows that G consists of roots of unity of order dividing n, where n is its order, so G is generated by pseudoreflections. In this case, K[V] = K[x] is the polynomial ring in one variable and the algebra of invariants of G is the subalgebra generated by xn, hence it is a polynomial algebra.
• Let V = Kn be the standard n-dimensional vector space and G be the symmetric group Sn acting by permutations of the elements of the standard basis. The symmetric group is generated by transpositions (ij), which act by reflections on V. On the other hand, by the main theorem of symmetric functions, the algebra of invariants is the polynomial algebra generated by the elementary symmetric functions e1, ... en.
• Let V = K2 and G be the cyclic group of order 2 acting by ±I. In this case, G is not generated by pseudoreflections, since the nonidentity element s of G acts without fixed points, so that dim Ker (s − I) = 0. On the other hand, the algebra of invariants is the subalgebra of K[V] = K[x, y] generated by the homogeneous elements x2, xy, and y2 of degree 2. This subalgebra is not a polynomial algebra because of the relation x2y2 = (xy)2.
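The contrast in the last two examples can also be seen numerically through Molien's formula, which gives the Hilbert series of the invariant ring as ${\tfrac {1}{|G|}}\sum _{g\in G}\det(1-tg)^{-1}$. The sketch below (illustrative, for K = C) shows that the series for Sym(2) factors as $1/((1-t)(1-t^{2}))$, as expected for a polynomial invariant ring, while the series for {±I} does not have this product form:

```python
import sympy as sp

t = sp.symbols('t')

def molien(group):
    """Hilbert series of the invariant ring, computed from Molien's formula."""
    n = group[0].shape[0]
    return sp.simplify(sum(1 / (sp.eye(n) - t * g).det() for g in group) / len(group))

# Sym(2) acting on C^2 by permutation matrices: generated by one reflection.
sym2 = [sp.eye(2), sp.Matrix([[0, 1], [1, 0]])]
# {+I, -I} acting on C^2: the non-identity element is not a pseudoreflection.
pm1 = [sp.eye(2), -sp.eye(2)]

print(sp.factor(molien(sym2)))   # equals 1/((1 - t)*(1 - t**2)): invariant generators of degrees 1, 2
print(sp.factor(molien(pm1)))    # equals (1 + t**2)/((1 - t**2)**2): not a product of 1/(1 - t^d) factors
```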
Generalizations
Broer (2007) gave an extension of the Chevalley–Shephard–Todd theorem to positive characteristic.
There has been much work on the question of when a reductive algebraic group acting on a vector space has a polynomial ring of invariants. In the case when the algebraic group is simple, all cases when the invariant ring is polynomial have been classified by Schwarz (1978).
In general, the ring of invariants of a finite group acting linearly on a complex vector space is Cohen-Macaulay, so it is a finite rank free module over a polynomial subring.
Notes
1. See, e.g.: Bourbaki, Lie, chap. V, §5, nº5, theorem 4 for equivalence of (A), (B) and (C); page 26 of for equivalence of (A) and (B′); pages 6–18 of Archived 2014-07-29 at the Wayback Machine for equivalence of (C) and (C′) for a proof of (B′)⇒(A).
References
• Bourbaki, Nicolas, Éléments de mathématiques : Groupes et algèbres de Lie (English translation: Bourbaki, Nicolas, Elements of Mathematics: Lie Groups and Lie Algebras)
• Broer, Abraham (2007), On Chevalley-Shephard-Todd's theorem in positive characteristic, arXiv:0709.0715, Bibcode:2007arXiv0709.0715B
• Chevalley, Claude (1955), "Invariants of finite groups generated by reflections", Amer. J. Math., 77 (4): 778–782, doi:10.2307/2372597, JSTOR 2372597, S2CID 14952813
• Neusel, Mara D.; Smith, Larry (2002), Invariant Theory of Finite Groups, American Mathematical Society, ISBN 978-0-8218-2916-5
• Shephard, G. C.; Todd, J. A. (1954), "Finite unitary reflection groups", Can. J. Math., 6: 274–304, doi:10.4153/CJM-1954-028-3
• Schwarz, G. (1978), "Representations of simple Lie groups with regular rings of invariants", Invent. Math., 49 (2): 167–191, Bibcode:1978InMat..49..167S, doi:10.1007/BF01403085
• Smith, Larry (1997), "Polynomial invariants of finite groups. A survey of recent developments", Bull. Amer. Math. Soc., 34 (3): 211–250, doi:10.1090/S0273-0979-97-00724-6, MR 1433171
• Springer, T. A. (1977), Invariant Theory, Springer, ISBN 978-0-387-08242-4
|
Wikipedia
|
Sheppard's correction
In statistics, Sheppard's corrections are approximate corrections to estimates of moments computed from binned data. The concept is named after William Fleetwood Sheppard.
Let $m_{k}$ be the measured kth moment, ${\hat {\mu }}_{k}$ the corresponding corrected moment, and $c$ the breadth of the class interval (i.e., the bin width). No correction is necessary for the mean (first moment about zero). The first few measured and corrected moments about the mean are then related as follows:
${\begin{aligned}{\hat {\mu }}_{2}&=m_{2}-{\frac {1}{12}}c^{2}\\{\hat {\mu }}_{3}&=m_{3}\\{\hat {\mu }}_{4}&=m_{4}-{\frac {1}{2}}m_{2}c^{2}+{\frac {7}{240}}c^{4}.\end{aligned}}$
When the data come from a normally distributed population, binning and using the midpoint of the bin as the observed value results in an overestimate of the variance; that is why the correction to the variance is negative. The reason the uncorrected estimate of the variance is an overestimate is that the error is negatively correlated with the observation. For the uniform distribution, the error is uncorrelated with the observation, so the correction should be $+c^{2}/12$, which is the variance of the error itself, rather than $-c^{2}/12$. Thus Sheppard's correction is biased in favor of population distributions in which the error is negatively correlated with the observation.
The cumulants of the sum of the grouped variable and the uniform variable are the sums of the cumulants. Since the odd cumulants of a uniform distribution are zero, only even moments are affected.
The second and fourth cumulants of the uniform distribution on $(-0.5c,0.5c)$ are respectively $c^{2}/12$ and $-c^{4}/120$.
The correction to moments can be derived from the relation between cumulants and moments.
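A small simulation sketch (illustrative; the sample size, standard deviation, and bin width are arbitrary choices) showing the overestimation for normal data and the effect of the correction:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, c = 1.0, 0.5                        # true standard deviation and bin width
x = rng.normal(0.0, sigma, size=1_000_000)

binned = np.round(x / c) * c               # replace each value by the midpoint of its bin
m2 = binned.var()                          # measured second central moment
corrected = m2 - c**2 / 12                 # Sheppard's correction

print(f"true variance    : {sigma**2:.4f}")
print(f"binned variance  : {m2:.4f}")      # about 1.02, i.e. biased upward by roughly c^2/12
print(f"after correction : {corrected:.4f}")
```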
References
• Weisstein, Eric W. "Sheppard's Correction". MathWorld—A Wolfram Web Resource. Retrieved March 2, 2014.
• Weatherburn, C.E. (1949), A first course in mathematical statistics, Cambridge University Press
|
Wikipedia
|
Schrödinger equation
The Schrödinger equation is a linear partial differential equation that governs the wave function of a quantum-mechanical system.[1]: 1–2 Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.[2][3]
Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of a wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.[4]: II:268
The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics". Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation when relativistic effects are not significant.
Definition
Preliminaries
Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension:
$i\hbar {\frac {\partial }{\partial t}}\Psi (x,t)=\left[-{\frac {\hbar ^{2}}{2m}}{\frac {\partial ^{2}}{\partial x^{2}}}+V(x,t)\right]\Psi (x,t).$
Here, $\Psi (x,t)$ is a wave function, a function that assigns a complex number to each point $x$ at each time $t$. The parameter $m$ is the mass of the particle, and $V(x,t)$ is the potential that represents the environment in which the particle exists.[5]: 74 The constant $i$ is the imaginary unit, and $\hbar $ is the reduced Planck constant, which has units of action (energy multiplied by time).[5]: 10
Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac,[6] David Hilbert,[7] John von Neumann,[8] and Hermann Weyl[9] defines the state of a quantum mechanical system to be a vector $|\psi \rangle $ belonging to a (separable) Hilbert space ${\mathcal {H}}$. This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys $\langle \psi |\psi \rangle =1$. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable complex-valued functions $L^{2}(\mathbb {R} ^{3})$,[10] while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors $\mathbb {C} ^{2}$ with the usual inner product.[5]: 322
Physical quantities of interest – position, momentum, energy, spin – are represented by "observables", which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue $\lambda $ is non-degenerate and the probability is given by $|\langle \lambda |\psi \rangle |^{2}$, where $|\lambda \rangle $ is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by $\langle \psi |P_{\lambda }|\psi \rangle $, where $P_{\lambda }$ is the projector onto its associated eigenspace.[note 1]
A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes introduce fictitious "bases" for a Hilbert space comprising elements outside that space. These are invented for calculational convenience and do not represent physical states.[11]: 100–105 Thus, a position-space wave function $\Psi (x,t)$ as used above can be written as the inner product of a time-dependent state vector $|\Psi (t)\rangle $ with unphysical but convenient "position eigenstates" $|x\rangle $:
$\Psi (x,t)=\langle x|\Psi (t)\rangle .$
Time-dependent equation
The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:[12]: 143
Time-dependent Schrödinger equation (general)
$i\hbar {\frac {d}{dt}}\vert \Psi (t)\rangle ={\hat {H}}\vert \Psi (t)\rangle $
where $t$ is time, $\vert \Psi (t)\rangle $ is the state vector of the quantum system ($\Psi $ being the Greek letter psi), and ${\hat {H}}$ is an observable, the Hamiltonian operator.
The term "Schrödinger equation" can refer to both the general equation, or the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory).
To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function.[5]: 78 For example, given a wave function in position space $\Psi (x,t)$ as above, we have
$\Pr(x,t)=|\Psi (x,t)|^{2}.$
Time-independent equation
The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.
Time-independent Schrödinger equation (general)
$\operatorname {\hat {H}} |\Psi \rangle =E|\Psi \rangle $
where $E$ is the energy of the system.[5]: 134 This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) $E$.
Properties
Linearity
The Schrödinger equation is a linear differential equation, meaning that if two state vectors $|\psi _{1}\rangle $ and $|\psi _{2}\rangle $ are solutions, then so is any linear combination
$|\psi \rangle =a|\psi _{1}\rangle +b|\psi _{2}\rangle $
of the two state vectors where a and b are any complex numbers.[13]: 25 Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. In this basis, a time-dependent state vector $|\Psi (t)\rangle $ can be written as the linear combination
$|\Psi (t)\rangle =\sum _{n}A_{n}e^{{-iE_{n}t}/\hbar }|\psi _{E_{n}}\rangle ,$
where $A_{n}$ are complex numbers and the vectors $|\psi _{E_{n}}\rangle $ are solutions of the time-independent equation ${\hat {H}}|\psi _{E_{n}}\rangle =E_{n}|\psi _{E_{n}}\rangle $.
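A short numerical illustration of this expansion (a sketch with an arbitrary two-level Hamiltonian and units in which ħ = 1; the matrix is not tied to any particular physical system):

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])                     # an arbitrary Hermitian two-level Hamiltonian

E, V = np.linalg.eigh(H)                       # energies E_n and eigenvectors (columns of V)

psi0 = np.array([1.0, 0.0], dtype=complex)     # initial state |Psi(0)>
A = V.conj().T @ psi0                          # coefficients A_n = <psi_{E_n} | Psi(0)>

def psi(t):
    """|Psi(t)> = sum_n A_n exp(-i E_n t / hbar) |psi_{E_n}>."""
    return V @ (A * np.exp(-1j * E * t / hbar))

print(np.allclose(psi(0.0), psi0))             # reproduces the initial state at t = 0
print(np.linalg.norm(psi(0.7)))                # the norm stays 1 at later times
```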
Unitarity
Further information: Wigner's theorem
Holding the Hamiltonian ${\hat {H}}$ constant, the Schrödinger equation has the solution[12]
$|\Psi (t)\rangle =e^{-i{\hat {H}}t/\hbar }|\Psi (0)\rangle .$
The operator ${\hat {U}}(t)=e^{-i{\hat {H}}t/\hbar }$ is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space.[13] Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is $|\Psi (0)\rangle $, then the state at a later time $t$ will be given by
$|\Psi (t)\rangle ={\hat {U}}(t)|\Psi (0)\rangle $
for some unitary operator ${\hat {U}}(t)$. Conversely, suppose that ${\hat {U}}(t)$ is a continuous family of unitary operators parameterized by $t$. Without loss of generality,[14] the parameterization can be chosen so that ${\hat {U}}(0)$ is the identity operator and that ${\hat {U}}(t/N)^{N}={\hat {U}}(t)$ for any $N>0$. Then ${\hat {U}}(t)$ depends upon the parameter $t$ in such a way that
${\hat {U}}(t)=e^{-i{\hat {G}}t}$
for some self-adjoint operator ${\hat {G}}$, called the generator of the family ${\hat {U}}(t)$. A Hamiltonian is just such a generator (up to the factor of Planck's constant that would be set to 1 in natural units). To see that the generator is Hermitian, note that with ${\hat {U}}(\delta t)\approx {\hat {U}}(0)-i{\hat {G}}\delta t$, we have
${\hat {U}}(\delta t)^{\dagger }{\hat {U}}(\delta t)\approx ({\hat {U}}(0)^{\dagger }+i{\hat {G}}^{\dagger }\delta t)({\hat {U}}(0)-i{\hat {G}}\delta t)=I+i\delta t({\hat {G}}^{\dagger }-{\hat {G}})+O(\delta t^{2}),$
so ${\hat {U}}(t)$ is unitary only if, to first order, its derivative is Hermitian.[15]
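A quick numerical check of these statements (illustrative; a random Hermitian matrix stands in for the Hamiltonian, with ħ = 1):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                          # Hermitian generator ("Hamiltonian")

t = 0.37
U = expm(-1j * H * t)                             # time-evolution operator, hbar = 1

print(np.allclose(U.conj().T @ U, np.eye(4)))     # unitary: U(t)^dagger U(t) = I
print(np.allclose(U @ U, expm(-1j * H * 2 * t)))  # composition: U(t)^2 = U(2t)
```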
Changes of basis
The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. This is illustrated by the position-space and momentum-space Schrödinger equations for a nonrelativistic, spinless particle.[11]: 182 The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term:
$i\hbar {\frac {d}{dt}}|\Psi (t)\rangle =\left({\frac {1}{2m}}{\hat {p}}^{2}+{\hat {V}}\right)|\Psi (t)\rangle .$
Writing $\mathbf {r} $ for a three-dimensional position vector and $\mathbf {p} $ for a three-dimensional momentum vector, the position-space Schrödinger equation is
$i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).$
The momentum-space counterpart involves the Fourier transforms of the wave function and the potential:
$i\hbar {\frac {\partial }{\partial t}}{\tilde {\Psi }}(\mathbf {p} ,t)={\frac {\mathbf {p} ^{2}}{2m}}{\tilde {\Psi }}(\mathbf {p} ,t)+(2\pi \hbar )^{-3/2}\int d^{3}\mathbf {p} '\,{\tilde {V}}(\mathbf {p} -\mathbf {p} '){\tilde {\Psi }}(\mathbf {p} ',t).$
The functions $\Psi (\mathbf {r} ,t)$ and ${\tilde {\Psi }}(\mathbf {p} ,t)$ are derived from $|\Psi (t)\rangle $ by
$\Psi (\mathbf {r} ,t)=\langle \mathbf {r} |\Psi (t)\rangle ,$
${\tilde {\Psi }}(\mathbf {p} ,t)=\langle \mathbf {p} |\Psi (t)\rangle ,$
where $|\mathbf {r} \rangle $ and $|\mathbf {p} \rangle $ do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space.
When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables $x$ and $p$ are promoted to self-adjoint operators ${\hat {x}}$ and ${\hat {p}}$ that satisfy the canonical commutation relation
$[{\hat {x}},{\hat {p}}]=i\hbar .$
This implies that[11]: 190
$\langle x|{\hat {p}}|\Psi \rangle =-i\hbar {\frac {d}{dx}}\Psi (x),$
so the action of the momentum operator ${\hat {p}}$ in the position-space representation is $ -i\hbar {\frac {d}{dx}}$. Thus, ${\hat {p}}^{2}$ becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian $\nabla ^{2}$.
The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform. In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples ${\tilde {\Psi }}(p)$ with ${\tilde {\Psi }}(p+K)$ for only discrete reciprocal lattice vectors $K$. This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.
Probability current
The Schrödinger equation is consistent with local probability conservation.[11]: 238 Multiplying the Schrödinger equation by the complex-conjugate wave function, multiplying the complex conjugate of the Schrödinger equation by the wave function, and subtracting the two gives the continuity equation for probability:
${\frac {\partial }{\partial t}}\rho \left(\mathbf {r} ,t\right)+\nabla \cdot \mathbf {j} =0,$
where
$\rho =|\Psi |^{2}=\Psi ^{*}(\mathbf {r} ,t)\Psi (\mathbf {r} ,t)$
is the probability density (probability per unit volume, * denotes complex conjugate), and
$\mathbf {j} ={\frac {1}{2m}}\left(\Psi ^{*}{\hat {\mathbf {p} }}\Psi -\Psi {\hat {\mathbf {p} }}\Psi ^{*}\right)$
is the probability current (flow per unit area).
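A discretised illustration (a sketch with ħ = m = 1 and an arbitrary Gaussian envelope): for $\Psi =A(x)e^{ikx}$ with real A, the expression above reduces to $j=\rho \,\hbar k/m$, which the finite-difference evaluation below reproduces.

```python
import numpy as np

hbar = m = 1.0
k = 2.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

A = np.exp(-x**2 / 2)                          # real Gaussian envelope
psi = A * np.exp(1j * k * x)                   # wave packet Psi = A(x) e^{ikx}

rho = np.abs(psi)**2                           # probability density
dpsi = np.gradient(psi, dx)                    # d(Psi)/dx by central differences
j = (hbar / m) * np.imag(np.conj(psi) * dpsi)  # equivalent form of (1/2m)(Psi* p Psi - Psi p Psi*)

print(np.allclose(j, (hbar * k / m) * rho, atol=1e-3))   # j = rho * hbar k / m for this packet
```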
Separation of variables
If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads:
$i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=\left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} )\right]\Psi (\mathbf {r} ,t).$
The operator on the left side depends only on time; the one on the right side depends only on space.
Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts[16]
$\Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )\tau (t),$
where $\psi (\mathbf {r} )$ is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and $\tau (t)$ is a function of time only. Substituting this expression for $\Psi $ into the time dependent left hand side shows that $\tau (t)$ is a phase factor:
$\Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )e^{-i{Et/\hbar }}.$
A solution of this type is called stationary, since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.[12]: 143ff
The spatial part of the full wave function solves:[17]
$\nabla ^{2}\psi (\mathbf {r} )+{\frac {2m}{\hbar ^{2}}}\left[E-V(\mathbf {r} )\right]\psi (\mathbf {r} )=0.$
where the energy $E$ appears in the phase factor.
This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-dependent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.
Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated,
$\psi (\mathbf {r} )=\psi _{x}(x)\psi _{y}(y)\psi _{z}(z),$
or radial and angular coordinates might be separated:
$\psi (\mathbf {r} )=\psi _{r}(r)\psi _{\theta }(\theta )\psi _{\phi }(\phi ).$
Examples
Particle in a box
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain region and infinite potential energy outside.[11]: 77–78 For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written
$-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .$
With the differential operator defined by
${\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}$
the previous equation is evocative of the classic kinetic energy analogue,
${\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,$
with state $\psi $ in this case having energy $E$ coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are
$\psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}$
or, from Euler's formula,
$\psi (x)=C\sin(kx)+D\cos(kx).$
The infinite potential walls of the box determine the values of $C,D,$ and $k$ at $x=0$ and $x=L$ where $\psi $ must be zero. Thus, at $x=0$,
$\psi (0)=0=C\sin(0)+D\cos(0)=D$
and $D=0$. At $x=L$,
$\psi (L)=0=C\sin(kL),$
in which $C$ cannot be zero as this would conflict with the postulate that $\psi $ has norm 1. Therefore, since $\sin(kL)=0$, $kL$ must be an integer multiple of $\pi $,
$k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .$
This constraint on $k$ implies a constraint on the energy levels, yielding
$E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.$
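A quick numerical cross-check of this energy formula is possible by discretizing the kinetic-energy operator on a grid with ψ = 0 at the walls and comparing the lowest eigenvalues of the resulting matrix with ħ²π²n²/(2mL²). The sketch below assumes NumPy and SciPy; units with ħ = m = L = 1 are an arbitrary choice.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbar = m = L = 1.0
N = 2000                        # number of interior grid points
dx = L / (N + 1)

# Finite-difference form of -(hbar^2/2m) d^2/dx^2 with psi(0) = psi(L) = 0
diag = np.full(N, hbar**2 / (m * dx**2))
off  = np.full(N - 1, -hbar**2 / (2 * m * dx**2))
E_numeric, _ = eigh_tridiagonal(diag, off)

n = np.arange(1, 6)
E_exact = hbar**2 * np.pi**2 * n**2 / (2 * m * L**2)
print(np.round(E_numeric[:5], 4))
print(np.round(E_exact, 4))     # the two rows agree closely
```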
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
Harmonic oscillator
The Schrödinger equation for this situation is
$E\psi =-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\psi +{\frac {1}{2}}m\omega ^{2}x^{2}\psi ,$
where $x$ is the displacement and $\omega $ the angular frequency. Furthermore, the harmonic oscillator can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules,[18] and atoms or ions in lattices,[19] and to approximate other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics.
The solutions in position space are
$\psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\ \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\ e^{-{\frac {m\omega x^{2}}{2\hbar }}}\ {\mathcal {H}}_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),$
where $n\in \{0,1,2,\ldots \}$, and the functions ${\mathcal {H}}_{n}$ are the Hermite polynomials of order $n$. The solution set may be generated by
$\psi _{n}(x)={\frac {1}{\sqrt {n!}}}\left({\sqrt {\frac {m\omega }{2\hbar }}}\right)^{n}\left(x-{\frac {\hbar }{m\omega }}{\frac {d}{dx}}\right)^{n}\left({\frac {m\omega }{\pi \hbar }}\right)^{\frac {1}{4}}e^{\frac {-m\omega x^{2}}{2\hbar }}.$
The eigenvalues are
$E_{n}=\left(n+{\frac {1}{2}}\right)\hbar \omega .$
The case $n=0$ is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian.[20]
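As a sanity check on these formulas, the following sketch (assuming NumPy and SciPy; ħ = m = ω = 1 is an arbitrary choice of units) builds the first few ψₙ from Hermite polynomials and verifies numerically that they are orthonormal.

```python
import numpy as np
from scipy.special import hermite, factorial

hbar = m = omega = 1.0
x = np.linspace(-10, 10, 4001)

def psi(n, x):
    """n-th harmonic-oscillator eigenfunction in position space."""
    prefac = (m * omega / (np.pi * hbar)) ** 0.25 / np.sqrt(2.0**n * factorial(n))
    xi = np.sqrt(m * omega / hbar) * x
    return prefac * np.exp(-xi**2 / 2) * hermite(n)(xi)

# Overlap matrix <psi_a | psi_b> by quadrature; should be the identity
S = np.array([[np.trapz(psi(a, x) * psi(b, x), x) for b in range(4)] for a in range(4)])
print(np.round(S, 6))
```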
The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.[11]: 352
Hydrogen atom
The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is
$E\psi =-{\frac {\hbar ^{2}}{2\mu }}\nabla ^{2}\psi -{\frac {q^{2}}{4\pi \varepsilon _{0}r}}\psi $
where $q$ is the electron charge, $\mathbf {r} $ is the position of the electron relative to the nucleus, $r=|\mathbf {r} |$ is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein $\varepsilon _{0}$ is the permittivity of free space and
$\mu ={\frac {m_{q}m_{p}}{m_{q}+m_{p}}}$
is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass $m_{p}$ and the electron of mass $m_{q}$. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common centre of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.
The Schrödinger equation for a hydrogen atom can be solved by separation of variables.[21] In this case, spherical polar coordinates are the most convenient. Thus,
$\psi (r,\theta ,\varphi )=R(r)Y_{\ell }^{m}(\theta ,\varphi )=R(r)\Theta (\theta )\Phi (\varphi ),$
where $R$ are radial functions and $Y_{\ell }^{m}(\theta ,\varphi )$ are spherical harmonics of degree $\ell $ and order $m$. This is the only atom for which the Schrödinger equation has been solved exactly; multi-electron atoms require approximate methods. The family of solutions is:[22]
$\psi _{n\ell m}(r,\theta ,\varphi )={\sqrt {\left({\frac {2}{na_{0}}}\right)^{3}{\frac {(n-\ell -1)!}{2n[(n+\ell )!]}}}}e^{-r/na_{0}}\left({\frac {2r}{na_{0}}}\right)^{\ell }L_{n-\ell -1}^{2\ell +1}\left({\frac {2r}{na_{0}}}\right)\cdot Y_{\ell }^{m}(\theta ,\varphi )$
where
• $a_{0}={\frac {4\pi \varepsilon _{0}\hbar ^{2}}{m_{q}q^{2}}}$ is the Bohr radius,
• $L_{n-\ell -1}^{2\ell +1}(\cdots )$ are the generalized Laguerre polynomials of degree $n-\ell -1$,
• $n,\ell ,m$ are the principal, azimuthal, and magnetic quantum numbers respectively, which take the values $n=1,2,3,\dots ,$ $\ell =0,1,2,\dots ,n-1,$ $m=-\ell ,\dots ,\ell .$
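To make the normalization convention concrete, a short numerical check (a sketch assuming NumPy and SciPy, and atomic units with a₀ = 1) evaluates the radial factor of ψ_{nℓm} with SciPy's generalized Laguerre polynomials and confirms that $\int _{0}^{\infty }|R_{n\ell }(r)|^{2}r^{2}\,dr=1$ for several $(n,\ell )$.

```python
import numpy as np
from scipy.special import genlaguerre, factorial

a0 = 1.0   # Bohr radius in atomic units

def R(n, l, r):
    """Radial factor of psi_{n l m} with the normalization quoted above."""
    rho = 2 * r / (n * a0)
    norm = np.sqrt((2 / (n * a0))**3 * factorial(n - l - 1) / (2 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r = np.linspace(0, 200, 200001)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    print(n, l, np.trapz(np.abs(R(n, l, r))**2 * r**2, r))   # each should be ~1.0
```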
Approximate solutions
It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. Accordingly, approximate solutions are obtained using techniques like variational methods and WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory.
Semiclassical limit
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics.[23]: 302 The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential $V$, the Ehrenfest theorem says
$m{\frac {d}{dt}}\langle x\rangle =\langle p\rangle ;\quad {\frac {d}{dt}}\langle p\rangle =-\left\langle V'(X)\right\rangle .$
Although the first of these equations is consistent with the classical behavior, the second is not: If the pair $(\langle X\rangle ,\langle P\rangle )$ were to satisfy Newton's second law, the right-hand side of the second equation would have to be
$-V'\left(\left\langle X\right\rangle \right)$
which is typically not the same as $-\left\langle V'(X)\right\rangle $. For a general $V'$, therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however, $V'$ is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories.
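The gap between these two quantities is easy to exhibit numerically. The sketch below (assuming NumPy; the quartic potential and the Gaussian probability density are arbitrary illustrative choices) compares $\left\langle V'(X)\right\rangle $ with $V'\left(\left\langle X\right\rangle \right)$ for $V(x)=x^{4}$, where they differ, and for a linear $V'$, where they coincide.

```python
import numpy as np

x = np.linspace(-15, 15, 30001)
x0, sigma = 1.0, 0.8                      # wave-packet centre and width
rho = np.exp(-(x - x0)**2 / (2 * sigma**2))
rho /= np.trapz(rho, x)                   # normalized probability density |psi|^2

def expect(f):
    return np.trapz(f * rho, x)

mean_x = expect(x)

# Quartic potential: V'(x) = 4x^3; the expectation differs from the value at the mean
print(expect(4 * x**3), 4 * mean_x**3)    # roughly 11.7 versus 4.0 -> not equal

# Linear V'(x) = x (harmonic potential): the two agree
print(expect(x), mean_x)
```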
For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point $x_{0}$, then $V'\left(\left\langle X\right\rangle \right)$ and $\left\langle V'(X)\right\rangle $ will be almost the same, since both will be approximately equal to $V'(x_{0})$. In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.
The Schrödinger equation in its general form
$i\hbar {\frac {\partial }{\partial t}}\Psi \left(\mathbf {r} ,t\right)={\hat {H}}\Psi \left(\mathbf {r} ,t\right)$
is closely related to the Hamilton–Jacobi equation (HJE)
$-{\frac {\partial }{\partial t}}S(q_{i},t)=H\left(q_{i},{\frac {\partial S}{\partial q_{i}}},t\right)$
where $S$ is the classical action and $H$ is the Hamiltonian function (not operator).[23]: 308 Here the generalized coordinates $q_{i}$ for $i=1,2,3$ (used in the context of the HJE) can be set to the position in Cartesian coordinates as $\mathbf {r} =(q_{1},q_{2},q_{3})=(x,y,z)$.
Substituting
$\Psi ={\sqrt {\rho (\mathbf {r} ,t)}}e^{iS(\mathbf {r} ,t)/\hbar }$
where $\rho $ is the probability density, into the Schrödinger equation and then taking the limit $\hbar \to 0$ in the resulting equation yield the Hamilton–Jacobi equation.
Density matrices
Main article: Density matrix
Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead.[23]: 74 A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written
${\hat {\rho }}=|\Psi \rangle \langle \Psi |.$
The density-matrix analogue of the Schrödinger equation for wave functions is[24][25]
$i\hbar {\frac {\partial {\hat {\rho }}}{\partial t}}=[{\hat {H}},{\hat {\rho }}],$
where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices.[23]: 312 If the Hamiltonian is time-independent, this equation can be easily solved to yield
${\hat {\rho }}(t)=e^{-i{\hat {H}}t/\hbar }{\hat {\rho }}(0)e^{i{\hat {H}}t/\hbar }.$
More generally, if the unitary operator ${\hat {U}}(t)$ describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by
${\hat {\rho }}(t)={\hat {U}}(t){\hat {\rho }}(0){\hat {U}}(t)^{\dagger }.$
Unitary evolution of a density matrix conserves its von Neumann entropy.[23]: 267
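A minimal sketch of this statement (assuming NumPy and SciPy; the 2×2 Hamiltonian and the initial mixed state are arbitrary illustrative choices) evolves a density matrix with the propagator $e^{-i{\hat {H}}t/\hbar }$ and checks that the trace and the von Neumann entropy are unchanged.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])            # a 2x2 Hermitian Hamiltonian
rho0 = np.array([[0.7, 0.2], [0.2, 0.3]])          # a mixed initial state with trace 1

def entropy(rho):
    """von Neumann entropy -Tr(rho log rho)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

t = 2.5
U = expm(-1j * H * t / hbar)                        # unitary propagator
rho_t = U @ rho0 @ U.conj().T                       # rho(t) = U rho(0) U^dagger

# Trace stays 1 and the entropy of rho(t) equals that of rho(0)
print(np.trace(rho_t).real, entropy(rho0), entropy(rho_t))
```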
Relativistic quantum physics and quantum field theory
The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. One reason is that it is essentially invariant under Galilean transformations, which comprise the symmetry group of Newtonian dynamics.[note 2] Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use.[27] A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method.[28][29][30][31]
Klein–Gordon and Dirac equations
Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation
$E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2},$
instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation,
$-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\psi +\nabla ^{2}\psi ={\frac {m^{2}c^{2}}{\hbar ^{2}}}\psi ,$
was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices $\alpha _{1},\alpha _{2},\alpha _{3},\beta $. Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read
$\left(\beta mc^{2}+c\left(\sum _{n\mathop {=} 1}^{3}\alpha _{n}p_{n}\right)\right)\psi =i\hbar {\frac {\partial \psi }{\partial t}}.$
This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is:
${\hat {H}}_{\text{Dirac}}=\gamma ^{0}\left[c{\boldsymbol {\gamma }}\cdot \left({\hat {\mathbf {p} }}-q\mathbf {A} \right)+mc^{2}+\gamma ^{0}q\varphi \right],$
in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1⁄2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle.
For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass).
In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields.
Fock space
As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function $\Psi (x,t)$. This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space.[27] However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways.[32]
History
Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum $p$ of a photon is inversely proportional to its wavelength $\lambda $, or proportional to its wave number $k$:
$p={\frac {h}{\lambda }}=\hbar k,$
where $h$ is Planck's constant and $\hbar ={h}/{2\pi }$ is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.[33] These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum $L$ according to
$L=n{\frac {h}{2\pi }}=n\hbar .$
According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:
$n\lambda =2\pi r.$
This approach essentially confined the electron wave in one dimension, along a circular orbit of radius $r$.
In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation.[34][35] Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom. Unfortunately the paper was rejected by the Physical Review, as recounted by Kamen.[36]
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.[37]
The equation he found is[38]
$i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi (\mathbf {r} ,t)+V(\mathbf {r} )\Psi (\mathbf {r} ,t).$
However, by that time, Arnold Sommerfeld had refined the Bohr model with relativistic corrections.[39][40] Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):
$\left(E+{\frac {e^{2}}{r}}\right)^{2}\psi (x)=-\nabla ^{2}\psi (x)+m^{2}\psi (x).$
He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925.[41]
While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl[42]: 3 ) Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926.[42]: 1 [43] Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave $\Psi (\mathbf {x} ,t)$, moving in a potential well $V$, created by the proton. This computation accurately reproduced the energy levels of the Bohr model.
The Schrödinger equation details the behavior of $\Psi $ but says nothing of its nature. Schrödinger tried to interpret the real part of $\Psi {\frac {\partial \Psi ^{*}}{\partial t}}$ as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of $\Psi $ is a charge density. This approach was, however, unsuccessful.[note 3] In 1926, just a few days after this paper was published, Max Born successfully interpreted $\Psi $ as the probability amplitude, whose modulus squared is equal to probability density.[44]: 220 Later, Schrödinger himself explained this interpretation as follows:[47]
The already ... mentioned psi-function.... is now the means for predicting probability of measurement results. In it is embodied the momentarily attained sum of theoretically based future expectation, somewhat as laid down in a catalog.
— Erwin Schrödinger
Interpretation
The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts.
In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule.[23][48][note 4] Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort.[51][52]
Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation.[53][54][note 5] This interpretation, formulated independently in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[56] This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule?[57] Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful.[58][59][60]
Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal (a price exacted by Bell's theorem). It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation.[61]
See also
• Eckhaus equation
• Pauli equation
• Fokker–Planck equation
• List of things named after Erwin Schrödinger
• Logarithmic Schrödinger equation
• Nonlinear Schrödinger equation
• Quantum channel
• Relation between Schrödinger's equation and the path integral formulation of quantum mechanics
• Schrödinger picture
• Wigner quasiprobability distribution
Notes
1. This rule for obtaining probabilities from a state vector implies that vectors that only differ by an overall phase are physically equivalent; $|\psi \rangle $ and $e^{i\alpha }|\psi \rangle $ represent the same quantum states. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space.
2. More precisely, the effect of a Galilean transformation upon the Schrödinger equation can be canceled by a phase transformation of the wave function that leaves the probabilities, as calculated via the Born rule, unchanged.[26]
3. For details, see Moore,[44]: 219 Jammer,[45]: 24–25 and Karam.[46]
4. One difficulty in discussing the philosophical position of "the Copenhagen interpretation" is that there is no single, authoritative source that establishes what the interpretation is. Another complication is that the philosophical background familiar to Einstein, Bohr, Heisenberg, and contemporaries is much less so to physicists and even philosophers of physics in more recent times.[49][50]
5. Schrödinger's later writings also contain elements resembling the modal interpretation originated by Bas van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as physical and treating it as information became interchangeable.[55]
References
1. Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 978-0-13-111892-8.
2. "Physicist Erwin Schrödinger's Google doodle marks quantum mechanics work". The Guardian. 13 August 2013. Retrieved 25 August 2013.
3. Schrödinger, E. (1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules" (PDF). Physical Review. 28 (6): 1049–70. Bibcode:1926PhRv...28.1049S. doi:10.1103/PhysRev.28.1049. Archived from the original (PDF) on 17 December 2008.
4. Whittaker, Edmund T. (1989). A history of the theories of aether & electricity. 2: The modern theories, 1900 - 1926 (Repr ed.). New York: Dover Publ. ISBN 978-0-486-26126-3.
5. Zwiebach, Barton (2022). Mastering Quantum Mechanics: Essentials, Theory, and Applications. MIT Press. ISBN 978-0-262-04613-8. OCLC 1347739457.
6. Dirac, Paul Adrien Maurice (1930). The Principles of Quantum Mechanics. Oxford: Clarendon Press.
7. Hilbert, David (2009). Sauer, Tilman; Majer, Ulrich (eds.). Lectures on the Foundations of Physics 1915–1927: Relativity, Quantum Theory and Epistemology. Springer. doi:10.1007/b12915. ISBN 978-3-540-20606-4. OCLC 463777694.
8. von Neumann, John (1932). Mathematische Grundlagen der Quantenmechanik. Berlin: Springer. English translation: Mathematical Foundations of Quantum Mechanics. Translated by Beyer, Robert T. Princeton University Press. 1955.
9. Weyl, Hermann (1950) [1931]. The Theory of Groups and Quantum Mechanics. Translated by Robertson, H. P. Dover. ISBN 978-0-486-60269-1. Translated from the German Gruppentheorie und Quantenmechanik (2nd ed.). S. Hirzel Verlag. 1931.
10. Samuel S. Holland (2012). Applied Analysis by the Hilbert Space Method: An Introduction with Applications to the Wave, Heat, and Schrödinger Equations (herdruk ed.). Courier Corporation. p. 190. ISBN 978-0-486-13929-6. Extract of page 190
11. Cohen-Tannoudji, Claude; Diu, Bernard; Laloë, Franck (2005). Quantum Mechanics. Translated by Hemley, Susan Reid; Ostrowsky, Nicole; Ostrowsky, Dan. John Wiley & Sons. ISBN 0-471-16433-X.
12. Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). Kluwer Academic/Plenum Publishers. ISBN 978-0-306-44790-7.
13. Rieffel, Eleanor G.; Polak, Wolfgang H. (4 March 2011). Quantum Computing: A Gentle Introduction. MIT Press. ISBN 978-0-262-01506-6.
14. Yaffe, Laurence G. (2015). "Chapter 6: Symmetries" (PDF). Physics 226: Particles and Symmetries. Retrieved 1 January 2021.
15. Sakurai, J. J.; Napolitano, J. (2017). Modern Quantum Mechanics (Second ed.). Cambridge: Cambridge University Press. p. 68. ISBN 978-1-108-49999-6. OCLC 1105708539.
16. Singh, Chandralekha (March 2008). "Student understanding of quantum mechanics at the beginning of graduate instruction". American Journal of Physics. 76 (3): 277–287. arXiv:1602.06660. Bibcode:2008AmJPh..76..277S. doi:10.1119/1.2825387. ISSN 0002-9505. S2CID 118493003.
17. Adams, C.S; Sigel, M; Mlynek, J (1994). "Atom optics". Physics Reports. Elsevier BV. 240 (3): 143–210. Bibcode:1994PhR...240..143A. doi:10.1016/0370-1573(94)90066-3. ISSN 0370-1573.
18. Atkins, P. W. (1978). Physical Chemistry. Oxford University Press. ISBN 0-19-855148-7.
19. Hook, J. R.; Hall, H. E. (2010). Solid State Physics. Manchester Physics Series (2nd ed.). John Wiley & Sons. ISBN 978-0-471-92804-1.
20. Townsend, John S. (2012). "Chapter 7: The One-Dimensional Harmonic Oscillator". A Modern Approach to Quantum Mechanics. University Science Books. pp. 247–250, 254–5, 257, 272. ISBN 978-1-891389-78-8.
21. Tipler, P. A.; Mosca, G. (2008). Physics for Scientists and Engineers – with Modern Physics (6th ed.). Freeman. ISBN 978-0-7167-8964-2.
22. Griffiths, David J. (2008). Introduction to Elementary Particles. Wiley-VCH. pp. 162–. ISBN 978-3-527-40601-2. Retrieved 27 June 2011.
23. Peres, Asher (1993). Quantum Theory: Concepts and Methods. Kluwer. ISBN 0-7923-2549-4. OCLC 28854083.
24. Breuer, Heinz; Petruccione, Francesco (2002). The theory of open quantum systems. p. 110. ISBN 978-0-19-852063-4.
25. Schwabl, Franz (2002). Statistical mechanics. p. 16. ISBN 978-3-540-43163-3.
26. Home, Dipankar (2013). Conceptual Foundations of Quantum Physics. Springer US. pp. 4–5. ISBN 9781475798081. OCLC 1157340444.
27. Coleman, Sidney (8 November 2018). Derbes, David; Ting, Yuan-sen; Chen, Bryan Gin-ge; Sohn, Richard; Griffiths, David; Hill, Brian (eds.). Lectures Of Sidney Coleman On Quantum Field Theory. World Scientific Publishing. ISBN 978-9-814-63253-9. OCLC 1057736838.
28. Symanzik, K. (6 July 1981). "Schrödinger representation and Casimir effect in renormalizable quantum field theory". Nuclear Physics B. 190 (1): 1–44. Bibcode:1981NuPhB.190....1S. doi:10.1016/0550-3213(81)90482-X. ISSN 0550-3213.
29. Kiefer, Claus (15 March 1992). "Functional Schrödinger equation for scalar QED". Physical Review D. 45 (6): 2044–2056. Bibcode:1992PhRvD..45.2044K. doi:10.1103/PhysRevD.45.2044. ISSN 0556-2821. PMID 10014577.
30. Hatfield, Brian (1992). Quantum Field Theory of Point Particles and Strings. Cambridge, Mass.: Perseus Books. ISBN 978-1-4294-8516-6. OCLC 170230278.
31. Islam, Jamal Nazrul (May 1994). "The Schrödinger equation in quantum field theory". Foundations of Physics. 24 (5): 593–630. Bibcode:1994FoPh...24..593I. doi:10.1007/BF02054667. ISSN 0015-9018. S2CID 120883802.
32. Srednicki, Mark Allen (2012). Quantum Field Theory. Cambridge: Cambridge University Press. ISBN 978-0-521-86449-7. OCLC 71808151.
33. de Broglie, L. (1925). "Recherches sur la théorie des quanta" [On the Theory of Quanta] (PDF). Annales de Physique (in French). 10 (3): 22–128. Bibcode:1925AnPh...10...22D. doi:10.1051/anphys/192510030022. Archived from the original (PDF) on 9 May 2009.
34. Weissman, M. B.; V. V. Iliev; I. Gutman (2008). "A pioneer remembered: biographical notes about Arthur Constant Lunn" (PDF). Communications in Mathematical and in Computer Chemistry. 59 (3): 687–708.
35. Samuel I. Weissman; Michael Weissman (1997). "Alan Sokal's Hoax and A. Lunn's Theory of Quantum Mechanics". Physics Today. 50 (6): 15. Bibcode:1997PhT....50f..15W. doi:10.1063/1.881789.
36. Kamen, Martin D. (1985). Radiant Science, Dark Politics. Berkeley and Los Angeles, California: University of California Press. pp. 29–32. ISBN 978-0-520-04929-1.
37. Schrödinger, E. (1984). Collected papers. Friedrich Vieweg und Sohn. ISBN 978-3-7001-0573-2. See introduction to first 1926 paper.
38. Lerner, R. G.; Trigg, G. L. (1991). Encyclopaedia of Physics (2nd ed.). VHC publishers. ISBN 0-89573-752-3.
39. Sommerfeld, A. (1919). Atombau und Spektrallinien (in German). Braunschweig: Friedrich Vieweg und Sohn. ISBN 978-3-87144-484-5.
40. For an English source, see Haar, T. (1967). The Old Quantum Theory. Oxford, New York: Pergamon Press.
41. Teresi, Dick (7 January 1990). "The Lone Ranger of Quantum Mechanics". The New York Times. ISSN 0362-4331. Retrieved 13 October 2020.
42. Schrödinger, Erwin (1982). Collected Papers on Wave Mechanics (3rd ed.). American Mathematical Society. ISBN 978-0-8218-3524-1.
43. Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem; von Erwin Schrödinger". Annalen der Physik (in German). 384 (4): 361–377. Bibcode:1926AnP...384..361S. doi:10.1002/andp.19263840404.
44. Moore, W. J. (1992). Schrödinger: Life and Thought. Cambridge University Press. ISBN 978-0-521-43767-7.
45. Jammer, Max (1974). Philosophy of Quantum Mechanics: The interpretations of quantum mechanics in historical perspective. Wiley-Interscience. ISBN 9780471439585.
46. Karam, Ricardo (June 2020). "Schrödinger's original struggles with a complex wave function". American Journal of Physics. 88 (6): 433–438. Bibcode:2020AmJPh..88..433K. doi:10.1119/10.0000852. ISSN 0002-9505. S2CID 219513834.
47. Erwin Schrödinger, "The Present situation in Quantum Mechanics", p. 9 of 22. The English version was translated by John D. Trimmer. The translation first appeared first in Proceedings of the American Philosophical Society, 124, 323–338. It later appeared as Section I.11 of Part I of Quantum Theory and Measurement by J. A. Wheeler and W. H. Zurek, eds., Princeton University Press, New Jersey 1983, ISBN 0691083169.
48. Omnès, R. (1994). The Interpretation of Quantum Mechanics. Princeton University Press. ISBN 978-0-691-03669-4. OCLC 439453957.
49. Faye, Jan (2019). "Copenhagen Interpretation of Quantum Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
50. Chevalley, Catherine (1999). "Why Do We Find Bohr Obscure?". In Greenberger, Daniel; Reiter, Wolfgang L.; Zeilinger, Anton (eds.). Epistemological and Experimental Perspectives on Quantum Physics. Springer Science+Business Media. pp. 59–74. doi:10.1007/978-94-017-1454-9. ISBN 978-9-04815-354-1.
51. van Fraassen, Bas C. (April 2010). "Rovelli's World". Foundations of Physics. 40 (4): 390–417. Bibcode:2010FoPh...40..390V. doi:10.1007/s10701-009-9326-5. ISSN 0015-9018. S2CID 17217776.
52. Healey, Richard (2016). "Quantum-Bayesian and Pragmatist Views of Quantum Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
53. Deutsch, David (2010). "Apart from Universes". In S. Saunders; J. Barrett; A. Kent; D. Wallace (eds.). Many Worlds? Everett, Quantum Theory and Reality. Oxford University Press.
54. Schrödinger, Erwin (1996). Bitbol, Michel (ed.). The Interpretation of Quantum Mechanics: Dublin Seminars (1949–1955) and other unpublished essays. OxBow Press.
55. Bitbol, Michel (1996). Schrödinger's Philosophy of Quantum Mechanics. Dordrecht: Springer Netherlands. ISBN 978-94-009-1772-9. OCLC 851376153.
56. Barrett, Jeffrey (2018). "Everett's Relative-State Formulation of Quantum Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
57. Wallace, David (2003). "Everettian Rationality: defending Deutsch's approach to probability in the Everett interpretation". Stud. Hist. Phil. Mod. Phys. 34 (3): 415–438. arXiv:quant-ph/0303050. Bibcode:2003SHPMP..34..415W. doi:10.1016/S1355-2198(03)00036-4. S2CID 1921913.
58. Ballentine, L. E. (1973). "Can the statistical postulate of quantum theory be derived?—A critique of the many-universes interpretation". Foundations of Physics. 3 (2): 229–240. Bibcode:1973FoPh....3..229B. doi:10.1007/BF00708440. S2CID 121747282.
59. Landsman, N. P. (2008). "The Born rule and its interpretation" (PDF). In Weinert, F.; Hentschel, K.; Greenberger, D.; Falkenburg, B. (eds.). Compendium of Quantum Physics. Springer. ISBN 978-3-540-70622-9. The conclusion seems to be that no generally accepted derivation of the Born rule has been given to date, but this does not imply that such a derivation is impossible in principle.
60. Kent, Adrian (2010). "One world versus many: The inadequacy of Everettian accounts of evolution, probability, and scientific confirmation". In S. Saunders; J. Barrett; A. Kent; D. Wallace (eds.). Many Worlds? Everett, Quantum Theory and Reality. Oxford University Press. arXiv:0905.0624. Bibcode:2009arXiv0905.0624K.
61. Goldstein, Sheldon (2017). "Bohmian Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
External links
Wikiquote has quotations related to Schrödinger equation.
• "Schrödinger equation". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
• Quantum Cook Book (PDF) and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware
• The Modern Revolution in Physics – an online textbook.
• Quantum Physics I at MIT OpenCourseWare
Sherman–Morrison formula
In mathematics, in particular linear algebra, the Sherman–Morrison formula,[1][2][3] named after Jack Sherman and Winifred J. Morrison, computes the inverse of the sum of an invertible matrix $A$ and the outer product, $uv^{\textsf {T}}$, of vectors $u$ and $v$. The Sherman–Morrison formula is a special case of the Woodbury formula. Though named after Sherman and Morrison, it appeared already in earlier publications.[4]
Statement
Suppose $A\in \mathbb {R} ^{n\times n}$ is an invertible square matrix and $u,v\in \mathbb {R} ^{n}$ are column vectors. Then $A+uv^{\textsf {T}}$ is invertible if and only if $1+v^{\textsf {T}}A^{-1}u\neq 0$. In this case,
$\left(A+uv^{\textsf {T}}\right)^{-1}=A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}.$
Here, $uv^{\textsf {T}}$ is the outer product of two vectors $u$ and $v$. The general form shown here is the one published by Bartlett.[5]
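A direct numerical check of the formula (a sketch assuming NumPy; the random test matrix and vectors are arbitrary) compares the right-hand side above with a freshly computed inverse of $A+uv^{\textsf {T}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

A_inv = np.linalg.inv(A)
denom = 1.0 + (v.T @ A_inv @ u).item()            # must be nonzero for invertibility
update = A_inv - (A_inv @ u @ v.T @ A_inv) / denom

direct = np.linalg.inv(A + u @ v.T)
print(np.max(np.abs(update - direct)))            # ~1e-15: the two inverses agree
```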
Proof
($\Leftarrow $) To prove that the backward direction ($1+v^{\textsf {T}}A^{-1}u\neq 0$ implies $A+uv^{\textsf {T}}$ is invertible with inverse given as above) is true, we verify the properties of the inverse. A matrix $Y$ (in this case the right-hand side of the Sherman–Morrison formula) is the inverse of a matrix $X$ (in this case $A+uv^{\textsf {T}}$) if and only if $XY=YX=I$.
We first verify that the right hand side ($Y$) satisfies $XY=I$.
${\begin{aligned}XY&=\left(A+uv^{\textsf {T}}\right)\left(A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\right)\\[6pt]&=AA^{-1}+uv^{\textsf {T}}A^{-1}-{AA^{-1}uv^{\textsf {T}}A^{-1}+uv^{\textsf {T}}A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-{uv^{\textsf {T}}A^{-1}+uv^{\textsf {T}}A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-{u\left(1+v^{\textsf {T}}A^{-1}u\right)v^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-uv^{\textsf {T}}A^{-1}\\[6pt]&=I.\end{aligned}}$
To end the proof of this direction, we need to show that $YX=I$ in a similar way as above:
$YX=\left(A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\right)(A+uv^{\textsf {T}})=I.$
(In fact, the last step can be avoided since for square matrices $X$ and $Y$, $XY=I$ is equivalent to $YX=I$.)
($\Rightarrow $) Reciprocally, if $1+v^{\textsf {T}}A^{-1}u=0$, then via the matrix determinant lemma, $\det \!\left(A+uv^{\textsf {T}}\right)=(1+v^{\textsf {T}}A^{-1}u)\det(A)=0$, so $\left(A+uv^{\textsf {T}}\right)$ is not invertible.
Application
If the inverse of $A$ is already known, the formula provides a numerically cheap way to compute the inverse of $A$ corrected by the matrix $uv^{\textsf {T}}$ (depending on the point of view, the correction may be seen as a perturbation or as a rank-1 update). The computation is relatively cheap because the inverse of $A+uv^{\textsf {T}}$ does not have to be computed from scratch (which in general is expensive), but can be computed by correcting (or perturbing) $A^{-1}$.
Using unit columns (columns from the identity matrix) for $u$ or $v$, individual columns or rows of $A$ may be manipulated and a correspondingly updated inverse computed relatively cheaply in this way.[6] In the general case, where $A^{-1}$ is an $n$-by-$n$ matrix and $u$ and $v$ are arbitrary vectors of dimension $n$, the whole matrix is updated[5] and the computation takes $3n^{2}$ scalar multiplications.[7] If $u$ is a unit column, the computation takes only $2n^{2}$ scalar multiplications. The same goes if $v$ is a unit column. If both $u$ and $v$ are unit columns, the computation takes only $n^{2}$ scalar multiplications.
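For instance, replacing a single column of $A$ is a rank-1 update with a unit column in the role of $v$: writing the new matrix as $A+(c_{\text{new}}-c_{\text{old}})e_{j}^{\textsf {T}}$ lets the updated inverse be obtained from the old one without recomputing it from scratch. The sketch below assumes NumPy; the column index and the data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, j = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
A_inv = np.linalg.inv(A)

# Replace column j of A by a new column c_new
c_new = rng.standard_normal(n)
u = c_new - A[:, j]                 # difference of old and new columns
e_j = np.zeros(n); e_j[j] = 1.0     # unit column playing the role of v

# Sherman–Morrison update; since v = e_j, v^T A^{-1} is just row j of A^{-1}
denom = 1.0 + A_inv[j, :] @ u
A_inv_new = A_inv - np.outer(A_inv @ u, A_inv[j, :]) / denom

A_new = A.copy(); A_new[:, j] = c_new
print(np.max(np.abs(A_inv_new - np.linalg.inv(A_new))))   # ~1e-15
```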
This formula also has application in theoretical physics. Namely, in quantum field theory, one uses this formula to calculate the propagator of a spin-1 field.[8] The inverse propagator (as it appears in the Lagrangian) has the form $\left(A+uv^{\textsf {T}}\right)$. One uses the Sherman–Morrison formula to calculate the inverse (satisfying certain time-ordering boundary conditions) of the inverse propagator—or simply the (Feynman) propagator—which is needed to perform any perturbative calculation[9] involving the spin-1 field.
Alternative verification
Following is an alternate verification of the Sherman–Morrison formula using the easily verifiable identity
$\left(I+wv^{\textsf {T}}\right)^{-1}=I-{\frac {wv^{\textsf {T}}}{1+v^{\textsf {T}}w}}$.
Let
$u=Aw,\quad {\text{and}}\quad A+uv^{\textsf {T}}=A\left(I+wv^{\textsf {T}}\right),$
then
$\left(A+uv^{\textsf {T}}\right)^{-1}=\left(I+wv^{\textsf {T}}\right)^{-1}A^{-1}=\left(I-{\frac {wv^{\textsf {T}}}{1+v^{\textsf {T}}w}}\right)A^{-1}$.
Substituting $w=A^{-1}u$ gives
$\left(A+uv^{\textsf {T}}\right)^{-1}=\left(I-{\frac {A^{-1}uv^{\textsf {T}}}{1+v^{\textsf {T}}A^{-1}u}}\right)A^{-1}=A^{-1}-{\frac {A^{-1}uv^{\textsf {T}}A^{-1}}{1+v^{\textsf {T}}A^{-1}u}}$
Generalization (Woodbury matrix identity)
Given a square invertible $n\times n$ matrix $A$, an $n\times k$ matrix $U$, and a $k\times n$ matrix $V$, let $B$ be an $n\times n$ matrix such that $B=A+UV$. Then, assuming $\left(I_{k}+VA^{-1}U\right)$ is invertible, we have
$B^{-1}=A^{-1}-A^{-1}U\left(I_{k}+VA^{-1}U\right)^{-1}VA^{-1}.$
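The same kind of numerical check works for this generalization (a sketch assuming NumPy; the dimensions n = 6, k = 2 and the random data are arbitrary): only a k × k system has to be inverted in order to update the n × n inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))

A_inv = np.linalg.inv(A)
middle = np.linalg.inv(np.eye(k) + V @ A_inv @ U)      # k x k inverse instead of n x n
B_inv = A_inv - A_inv @ U @ middle @ V @ A_inv

print(np.max(np.abs(B_inv - np.linalg.inv(A + U @ V))))   # ~1e-15
```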
See also
• The matrix determinant lemma performs a rank-1 update to a determinant.
• Woodbury matrix identity
• Quasi-Newton method
• Binomial inverse theorem
• Bunch–Nielsen–Sorensen formula
• Maxwell stress tensor contains an application of the Sherman–Morrison formula.
References
1. Sherman, Jack; Morrison, Winifred J. (1949). "Adjustment of an Inverse Matrix Corresponding to Changes in the Elements of a Given Column or a Given Row of the Original Matrix (abstract)". Annals of Mathematical Statistics. 20: 621. doi:10.1214/aoms/1177729959.
2. Sherman, Jack; Morrison, Winifred J. (1950). "Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix". Annals of Mathematical Statistics. 21 (1): 124–127. doi:10.1214/aoms/1177729893. MR 0035118. Zbl 0037.00901.
3. Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 2.7.1 Sherman–Morrison Formula", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
4. Hager, William W. (1989). "Updating the inverse of a matrix" (PDF). SIAM Review. 31 (2): 221–239. doi:10.1137/1031049. JSTOR 2030425. MR 0997457. S2CID 7967459.
5. Bartlett, Maurice S. (1951). "An Inverse Matrix Adjustment Arising in Discriminant Analysis". Annals of Mathematical Statistics. 22 (1): 107–111. doi:10.1214/aoms/1177729698. MR 0040068. Zbl 0042.38203.
6. Langville, Amy N.; and Meyer, Carl D.; "Google's PageRank and Beyond: The Science of Search Engine Rankings", Princeton University Press, 2006, p. 156
7. Update of the inverse matrix by the Sherman–Morrison formula
8. Propagator#Spin 1
9. "Perturbative quantum field theory".
External links
• Weisstein, Eric W. "Sherman–Morrison formula". MathWorld.
Sherman K. Stein
Sherman Kopald Stein (born August 11, 1926) is an American mathematician and an author of mathematics textbooks. He is a professor emeritus at the University of California, Davis. His writings have won the Lester R. Ford Award and the Beckenbach Book Prize.
Life
Stein was born on August 11, 1926, in Minneapolis; his father was a bookbinder. He graduated from the California Institute of Technology in 1946.[1] He completed his doctorate at Columbia University in 1952. His dissertation, The Homology of the Two-Fold Symmetric Product, was supervised by Paul Althaus Smith.[2]
Stein worked as a mathematics instructor at Princeton University for a year,[3] and then joined the mathematics faculty at the University of California, Davis in 1953. He retired in 1993.[1]
Books
Stein is the author of:[1]
• Mathematics: The Man-Made Universe: An Introduction to the Spirit of Mathematics (W. H. Freeman, 1963; 3rd ed., Dover, 1998)[4]
• Calculus in the First Three Dimensions (McGraw-Hill, 1967)[5]
• Calculus for the Natural and Social Sciences (McGraw-Hill, 1968)[6]
• Calculus and Analytic Geometry (McGraw-Hill, 1968; 5th ed., 1992)[7]
• Elementary Algebra: A Guided Inquiry (with Calvin D. Crabill, Houghton Mifflin, 1972)[8]
• Geometry: A Guided Inquiry (with G. D. Chakerian and Calvin D. Crabill, Houghton Mifflin, 1972)[9]
• Algebra II/Trigonometry (with Calvin D. Crabill, W. H. Freeman, 1976)[10]
• An Introduction to Differential Equations (with Anthony Barcellos, McGraw-Hill, 1994)
• Algebra and Tiling: Homomorphisms in the Service of Geometry (with Sándor Szabó, Mathematical Association of America, 1994)[11]
• Strength in Numbers: Discovering the Joy and Power of Mathematics in Everyday Life (Wiley, 1996)[12]
• Archimedes: What Did He Do besides Cry Eureka? (Mathematical Association of America, 1999)[13]
• How the Other Half Thinks: Adventures in Mathematical Reasoning (McGraw-Hill, 2001; reprinted as Adventures in Mathematical Reasoning, Dover, 2016)[14]
• Survival Guide for Outsiders: How to Protect Yourself from Politicians, Experts and Other Insiders (BookSurge, 2010).[15]
The book Algebra and Tiling: Homomorphisms in the Service of Geometry, written by Stein and Szabó, won the 1998 Beckenbach Book Prize of the Mathematical Association of America.[16]
Other contributions
Stein's doctoral research was in topology, but his research interests later shifted to abstract algebra and combinatorics. In combinatorics, he is known for formulating the tripod packing problem. The tripods of this problem are infinite polycubes, the unions of the lattice cubes along three axis-parallel rays, and they have also been called "Stein corners" in honor of his contributions to this problem.[17] Stein is also known as one of the independent discoverers of Fáry's theorem,[18] and for his contributions to equidissection, the partition of polygons into triangles of equal area.[19]
Stein won the Lester R. Ford Award of the Mathematical Association of America in 1975 for a paper on the connections between group theory and tessellations.[3]
References
1. "Stein, Sherman K. 1926-", Gale Contemporary Authors, 2009
2. Sherman K. Stein at the Mathematics Genealogy Project
3. "Algebraic tiling", Lester R. Ford Awards, Mathematical Association of America, retrieved 2019-02-23
4. Reviews of Mathematics: The Man-Made Universe:
• Wilcox, L. R. (June 1963), "Mathematics for the General Reader", Science, New Series, 140 (3573): 1298–1299, JSTOR 1711190
• Wagner, John (December 1963), The Mathematics Teacher, 56 (8): 639, JSTOR 27956938
• Matthews, Geoffrey (February 1964), The Mathematical Gazette, 48 (363): 112–113, doi:10.2307/3614350, JSTOR 3614350
• Kochendörffer, R. (1965), Biometrische Zeitschrift, 7 (2): 136, doi:10.1002/bimj.19650070245
• Cogan, E. J. (March 1970), The American Mathematical Monthly, 77 (3): 317–318, doi:10.2307/2317735, JSTOR 2317735
• Broadbent, T. A. A. (May 1970), The Mathematical Gazette, 54 (388): 167, doi:10.2307/3612117, JSTOR 3612117
• Engelsohn, Harold (December 1970), The American Mathematical Monthly, 77 (10): 1121–1122, doi:10.2307/2316123, JSTOR 2316123
• Sidney, Stuart Jay (September–October 1976), American Scientist, 64 (5): 581, JSTOR 27847529
• Spangler, Richard C. (November 1976), The Mathematics Teacher, 69 (7): 619, JSTOR 27960635
• Cooper, B. B. (May 1977), Mathematics in School, 6 (3): 35, JSTOR 30212437
• Dudley, Underwood (July 2011), "Review", MAA Reviews
5. Review of Calculus in the First Three Dimensions:
• Becker, Glenn (October 2017), "Review", MAA Reviews
6. Review of Calculus for the Natural and Social Sciences:
• Loewen, Kenneth (January 1971), The American Mathematical Monthly, 78 (1): 94–95, doi:10.2307/2317508, JSTOR 2317508
7. Reviews of Calculus and Analytic Geometry:
• Green, George (March 1974), The Mathematics Teacher, 67 (3): 247, JSTOR 27959645
• Phillips, Charles; Stanek, Jean Chan (February 1976), The American Mathematical Monthly, 83 (2): 145–146, doi:10.2307/2977009, JSTOR 2977009
• Cron, Joe (February 1988), The Mathematics Teacher, 81 (2): 152, JSTOR 27965734
8. Review of Elementary Algebra: A Guided Inquiry:
• Munro, H. Bernice (October 1972), The Mathematics Teacher, 65 (6): 549, JSTOR 27958993
9. Reviews of Geometry: A Guided Inquiry:
• Peak, Philip (February 1973), The Mathematics Teacher, 66 (2): 152, JSTOR 27959222
• Dull, Arthur P. (Spring 1974), The Two-Year College Mathematics Journal, 5 (2): 57–58, doi:10.2307/3026575, JSTOR 3026575
10. Review of Algebra II/Trigonometry:
• Bristol, James D. (December 1976), The Mathematics Teacher, 69 (8): 694–695, JSTOR 27960669
11. Reviews of Algebra and Tiling:
• Kenyon, Richard (1995), Mathematical Reviews, MR 1311249
• Walton, William L. (December 1995), The Mathematics Teacher, 88 (9): 778, JSTOR 27969590
• Post, K.A. (1998), Mededelingen van Het Wiskundig Genootschap, 41: 255–256
• Mainardi, Fabio (May 2008), "Review", MAA Reviews
12. Reviews of Strength in Numbers:
• Devlin, Keith (December 1996), "Review", MAA Reviews
• Rauff, James V. (April 1997), The Mathematics Teacher, 90 (4): 334, JSTOR 27970171
• Dorgan, Karen (May 1997), Mathematics Teaching in the Middle School, 2 (6): 444, JSTOR 41181637
• Galovich, Jennifer R. (August–September 1997), The American Mathematical Monthly, 104 (7): 677–679, doi:10.2307/2975071, JSTOR 2975071
• Braden, Lawrence S. (September–October 1997), "Review", American Scientist, 85 (5): 488–489, ProQuest 215266093
13. Reviews of Archimedes: What Did He Do besides Cry Eureka?:
• Sandifer, Ed (August 1999), "Review", MAA Reviews
• Sonnabend, Thomas (March 2000), The Mathematics Teacher, 93 (3): 256, JSTOR 27971362
• Keenan, Tim (July 2007), "Archimedes: What Did He Do Besides Cry Eureka?", Convergence
14. Reviews of Adventures in Mathematical Reasoning:
• Craine, Timothy V. (May 2002), The Mathematics Teacher, 95 (5): 394, JSTOR 20871067
• Langton, Stacy G. (August 2002), "Review", MAA Reviews
• Barbeau, Edward J. (September 2002), "Review" (PDF), Notices of the American Mathematical Society, 49 (8): 905–910
• Sheffield, Linda Jensen (April 2005), "Review", School Science and Mathematics, 105 (4): 214, doi:10.1111/j.1949-8594.2005.tb18160.x, ProQuest 195208263
• Abdi, S. Wali (March 2008), School Science and Mathematics, 108 (3): 121–123, doi:10.1111/j.1949-8594.2008.tb17815.x
• Dietz, Geoffrey (December 2016), "Review", MAA Reviews
15. Review of Survival Guide for Outsiders:
• Vestal, Donald L. (October 2010), "Review", MAA Reviews
16. Beckenbach Book Prize, Mathematical Association of America, retrieved 2019-02-23
17. Golomb, S. W. (1969), "A general formulation of error metrics", IEEE Transactions on Information Theory, IT-15: 425–426, doi:10.1109/tit.1969.1054308, MR 0243902
18. Harary, Frank (1979), "Independent discoveries in graph theory", Topics in graph theory (New York, 1977), Ann. New York Acad. Sci., vol. 328, New York Acad. Sci., New York, pp. 1–4, MR 0557880
19. Monsky, Paul (September 1990), "A conjecture of Stein on plane dissections", Mathematische Zeitschrift, 205 (1): 583–592, doi:10.1007/BF02571264, S2CID 122009844, Zbl 0693.51008
Sherman function
The Sherman function describes the dependence of electron-atom scattering events on the spin of the scattered electrons.[1] It was first evaluated theoretically by the physicist Noah Sherman, and it allows the polarization of an electron beam to be measured in Mott scattering experiments.[2] A correct evaluation of the Sherman function associated with a particular experimental setup is of vital importance in spin-polarized photoemission spectroscopy, an experimental technique that provides information about the magnetic behaviour of a sample.[3]
Background
Polarization and spin-orbit coupling
When an electron beam is polarized, there is an imbalance between the number of spin-up electrons, $n_{up}$, and spin-down electrons, $n_{down}$. The imbalance is quantified by the polarization $P$,[4] defined as
$P={\frac {n_{up}-n_{down}}{n_{up}+n_{down}}}$.
When an electron collides with a nucleus, the scattering event is governed mainly by the Coulomb interaction, which is the leading term in the Hamiltonian. A correction due to spin–orbit coupling can also be taken into account, and its effect on the Hamiltonian can be evaluated with perturbation theory. In the rest frame of the electron, the spin–orbit interaction arises from the interaction of the spin magnetic moment of the electron
${\boldsymbol {\mu }}_{S}=-g_{\text{s}}\mu _{\text{B}}{\frac {\mathbf {S} }{\hbar }},$
with the magnetic field that the electron sees, due to its orbital motion around the nucleus, whose expression in the non-relativistic limit is:
$\mathbf {B} ={\frac {1}{m_{\text{e}}ec^{2}}}{\frac {1}{r}}{\frac {\partial U(r)}{\partial r}}\mathbf {L} .$
In these expressions $\mathbf {S} $ is the spin angular momentum, $\mu _{\text{B}}$ is the Bohr magneton, $g_{\text{s}}$ is the g-factor, $\hbar $ is the reduced Planck constant, $m_{\text{e}}$ is the electron mass, $e$ is the elementary charge, $c$ is the speed of light, $U=eV$ is the potential energy of the electron and $\mathbf {L} =\mathbf {r} \times \mathbf {p} $ is the orbital angular momentum.
Due to spin–orbit coupling, a new term appears in the Hamiltonian, whose expression is[5]
$V_{\text{SO}}={\boldsymbol {\mu _{s}}}\cdot \mathbf {B} $.
Due to this effect, electrons will be scattered with different probabilities at different angles. Since the spin-orbit coupling is enhanced when the involved nuclei possess a high atomic number Z, the target is usually made of heavy metals, such as mercury,[1] gold[6] and thorium.[7]
Asymmetry
If we place two detectors at the same angle from the target, one on the right and one on the left, they will generally measure a different number of electrons $n_{R}$ and $n_{L}$. Consequently it is possible to define the asymmetry $A$, as[2]
$A={\frac {n_{R}-n_{L}}{n_{R}+n_{L}}}$.
The Sherman function $S(\theta )$ is a measure of the probability of a spin-up electron to be scattered, at a specific angle $\theta $, to the right or to the left of the target, due to spin-orbit coupling.[8][9] It can assume values ranging from -1 (spin-up electron is scattered with 100% probability to the left of the target) to +1 (spin-up electron is scattered with 100% probability to the right of the target). The value of the Sherman function depends on the energy of the incoming electron, evaluated via the parameter $\beta ={\frac {v}{c}}$.[1] When $S(\theta )=0$, spin-up electrons will be scattered with the same probability to the right and to the left of the target.[1]
Then it is possible to write
$n_{R}={\frac {n_{up}[1+S(\theta )]+n_{down}[1-S(\theta )]}{2}}$
$n_{L}={\frac {n_{up}[1-S(\theta )]+n_{down}[1+S(\theta )]}{2}}.$
Plugging these formulas inside the definition of asymmetry, it is possible to obtain a simple expression for the evaluation of the asymmetry at a specific angle $\theta $,[10] i.e.:
$A=PS(\theta )$.
Theoretical calculations are available for different atomic targets[1][11] and for a specific target, as a function of the angle.[8]
Application
To measure the polarization of an electron beam, a Mott detector is required.[12] To maximize the spin–orbit coupling, the electrons must pass close to the nuclei of the target; to achieve this, a system of electron optics is usually present to accelerate the beam up to keV[13] or MeV[14] energies. Since standard electron detectors count electrons regardless of their spin,[15] any information about the original polarization of the beam is lost after scattering off the target. Nevertheless, by measuring the difference in the counts of the two detectors, the asymmetry can be evaluated and, if the Sherman function is known from a previous calibration, the polarization can be calculated by inverting the last formula, $P=A/S(\theta )$.[10]
To characterize the in-plane polarization completely, setups with four channeltrons are available, two devoted to the left–right measurement and two to the up–down measurement.[7]
Example
As an example of the working principle of a Mott detector, suppose $S(\theta )=0.5$. If an electron beam with a 3:1 ratio of spin-up to spin-down electrons collides with the target, it is split between the two detectors in a 5:3 ratio, according to the previous equations, with an asymmetry of 25%.
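The arithmetic of this example can be reproduced directly from the formulas above. The following is a minimal sketch in Python (the function and variable names are illustrative, not part of any standard analysis package):

def detector_counts(n_up, n_down, S):
    """Counts in the right and left detectors and the resulting asymmetry,
    following the expressions for n_R, n_L and A = P*S(theta) given above."""
    n_R = (n_up * (1 + S) + n_down * (1 - S)) / 2
    n_L = (n_up * (1 - S) + n_down * (1 + S)) / 2
    A = (n_R - n_L) / (n_R + n_L)
    return n_R, n_L, A

n_R, n_L, A = detector_counts(3, 1, 0.5)   # 3:1 spin-up to spin-down, S(theta) = 0.5
print(n_R, n_L)       # 2.5 1.5 -> a 5:3 split between the two detectors
print(A)              # 0.25    -> a 25% asymmetry
P = (3 - 1) / (3 + 1)
print(A == P * 0.5)   # True: the asymmetry equals P*S(theta), so P = A / S(theta)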
See also
• Spin–orbit interaction
• Mott scattering
• Photoemission spectroscopy
References
1. Sherman, Noah (15 September 1956). "Coulomb Scattering of Relativistic Electrons by Point Nuclei". Physical Review. 103 (6): 1601–1607. Bibcode:1956PhRv..103.1601S. doi:10.1103/physrev.103.1601.
2. Mott, Nevill Francis (January 1997). "The scattering of electrons by atoms". Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character. 127 (806): 658–665. doi:10.1098/rspa.1930.0082.
3. Nishide, Akinori; Takeichi, Yasuo; Okuda, Taichi; Taskin, Alexey A; Hirahara, Toru; Nakatsuji, Kan; Komori, Fumio; Kakizaki, Akito; Ando, Yoichi; Matsuda, Iwao (17 June 2010). "Spin-polarized surface bands of a three-dimensional topological insulator studied by high-resolution spin- and angle-resolved photoemission spectroscopy". New Journal of Physics. 12 (6): 065011. Bibcode:2010NJPh...12f5011N. doi:10.1088/1367-2630/12/6/065011.
4. Mayne, K. I. (July 1969). "Polarized electron beams". Contemporary Physics. 10 (4): 387–412. Bibcode:1969ConPh..10..387M. doi:10.1080/00107516908204794.
5. Griffiths, Davis J. Introduction to quantum mechanics (2nd ed.). Pearson Prentice Hall. ISBN 0131118927.
6. Ciullo, Giuseppe; Contalbrigo, Marco; Lenisa, Paolo (2009). Polarized Sources, Targets and Polarimetry : Proceedings of the 13th International Workshop. World Scientific Publishing Co Pte Ltd. p. 337. ISBN 9781283148580.
7. Berti, G.; Calloni, A.; Brambilla, A.; Bussetti, G.; Duò, L.; Ciccacci, F. (July 2014). "Direct observation of spin-resolved full and empty electron states in ferromagnetic surfaces". Review of Scientific Instruments. 85 (7): 073901. Bibcode:2014RScI...85g3901B. doi:10.1063/1.4885447. hdl:11311/825526. PMID 25085146. S2CID 38096215.
8. Chao, Alexander W.; Mess, Karl H. (2013). Handbook of accelerator physics and engineering (Second ed.). World scientific. pp. 756–757. ISBN 978-9814415859.
9. Joachim, Kessler (1976). Polarized electrons. Springer-Verlag. p. 49. ISBN 978-3-662-12721-6.
10. Sherman, Noah; Nelson, Donald F. (15 June 1959). "Determination of Electron Polarization by Means of Mott Scattering". Physical Review. 114 (6): 1541–1542. Bibcode:1959PhRv..114.1541S. doi:10.1103/PhysRev.114.1541.
11. Czyżewski, Zbigniew; MacCallum, Danny O’Neill; Romig, Alton; Joy, David C. (October 1990). "Calculations of Mott scattering cross section". Journal of Applied Physics. 68 (7): 3066–3072. Bibcode:1990JAP....68.3066C. doi:10.1063/1.346400.
12. Nelson, D. F.; Pidd, R. W. (1 May 1959). "Measurement of the Mott Asymmetry in Double Scattering of Electrons". Physical Review. 114 (3): 728–735. Bibcode:1959PhRv..114..728N. doi:10.1103/PhysRev.114.728. hdl:2027.42/6796.
13. Petrov, V. N.; Landolt, M.; Galaktionov, M. S.; Yushenkov, B. V. (December 1997). "A new compact 60 kV Mott polarimeter for spin polarized electron spectroscopy". Review of Scientific Instruments. 68 (12): 4385–4389. Bibcode:1997RScI...68.4385P. doi:10.1063/1.1148400.
14. Steigerwald, M. "MeV Mott Polarimetry at Jefferson Lab" (PDF). Retrieved 25 June 2020.
15. Ladislas Wiza, Joseph (June 1979). "Microchannel plate detectors". Nuclear Instruments and Methods. 162 (1–3): 587–601. Bibcode:1979NucIM.162..587L. doi:10.1016/0029-554X(79)90734-1.
Sherman–Takeda theorem
In mathematics, the Sherman–Takeda theorem states that if A is a C*-algebra then its double dual is a W*-algebra, and is isomorphic to the weak closure of A in the universal representation of A.
The theorem was announced by Sherman (1950) and proved by Takeda (1954). The double dual of A is called the universal enveloping W*-algebra of A.
References
• Sherman, S. (1950), "The second adjoint of a C* algebra", Proceedings of the International Congress of Mathematicians 1950 (PDF), vol. 1, Providence, R.I.: American Mathematical Society, p. 470
• Takeda, Zirô (1954), "Conjugate spaces of operator algebras", Proceedings of the Japan Academy, 30 (2): 90–95, doi:10.3792/pja/1195526177, ISSN 0021-4280, MR 0063578
Delta operator
In mathematics, a delta operator is a shift-equivariant linear operator $Q\colon \mathbb {K} [x]\longrightarrow \mathbb {K} [x]$ on the vector space of polynomials in a variable $x$ over a field $\mathbb {K} $ that reduces degrees by one.
To say that $Q$ is shift-equivariant means that if $g(x)=f(x+a)$, then
${(Qg)(x)=(Qf)(x+a)}.\,$
In other words, if $f$ is a "shift" of $g$, then $Qf$ is also a shift of $Qg$, and has the same "shifting vector" $a$.
To say that an operator reduces degree by one means that if $f$ is a polynomial of degree $n$, then $Qf$ is either a polynomial of degree $n-1$, or, in case $n=0$, $Qf$ is 0.
Sometimes a delta operator is defined to be a shift-equivariant linear transformation on polynomials in $x$ that maps $x$ to a nonzero constant. Seemingly weaker than the definition given above, this latter characterization can be shown to be equivalent to the stated definition when $\mathbb {K} $ has characteristic zero, since shift-equivariance is a fairly strong condition.
Examples
• The forward difference operator
$(\Delta f)(x)=f(x+1)-f(x)\,$
is a delta operator.
• Differentiation with respect to x, written as D, is also a delta operator.
• Any operator of the form
$\sum _{k=1}^{\infty }c_{k}D^{k}$
(where $D^{n}(f)=f^{(n)}$ is the nth derivative) with $c_{1}\neq 0$ is a delta operator. It can be shown that all delta operators can be written in this form. For example, the difference operator given above can be expanded as
$\Delta =e^{D}-1=\sum _{k=1}^{\infty }{\frac {D^{k}}{k!}}.$
• The generalized derivative of time scale calculus which unifies the forward difference operator with the derivative of standard calculus is a delta operator.
• In computer science and cybernetics, the term "discrete-time delta operator" (δ) is generally taken to mean a difference operator
${(\delta f)(x)={{f(x+\Delta t)-f(x)} \over {\Delta t}}},$
the Euler approximation of the usual derivative with a discrete sample time $\Delta t$, as illustrated in the sketch below. The delta formulation offers a significant number of numerical advantages over the shift operator at fast sampling rates.
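A small numerical illustration (a sketch in Python, not tied to any particular control-systems library): the discrete-time delta operator applied to a sampled function approaches the ordinary derivative as the sample time shrinks.

from math import sin, cos

def delta(f, dt):
    """Discrete-time delta operator: x -> (f(x + dt) - f(x)) / dt."""
    return lambda x: (f(x + dt) - f(x)) / dt

df = delta(sin, 1e-6)
print(df(1.0))   # approximately 0.5403
print(cos(1.0))  # 0.5403..., the exact derivative of sin at x = 1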
Basic polynomials
Every delta operator $Q$ has a unique sequence of "basic polynomials", a polynomial sequence defined by three conditions:
• $p_{0}(x)=1;$
• $p_{n}(0)=0;$
• $(Qp_{n})(x)=np_{n-1}(x){\text{ for all }}n\in \mathbb {N} .$
Such a sequence of basic polynomials is always of binomial type, and conversely every sequence of binomial type arises as the sequence of basic polynomials of some delta operator. If the first two conditions above are dropped, then the third condition says this polynomial sequence is a Sheffer sequence—a more general concept.
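For example, the basic polynomials of the forward difference operator are the falling factorials $x(x-1)\cdots (x-n+1)$. A minimal symbolic check of the defining property $(Qp_{n})(x)=np_{n-1}(x)$, sketched here with the third-party SymPy library:

import sympy as sp

x = sp.symbols('x')

def forward_difference(p):
    """The forward difference operator: (delta p)(x) = p(x + 1) - p(x)."""
    return sp.expand(p.subs(x, x + 1) - p)

def falling_factorial(n):
    """x(x-1)...(x-n+1), the n-th basic polynomial of the forward difference (1 for n = 0)."""
    return sp.expand(sp.Mul(*[x - k for k in range(n)]))

for n in range(1, 6):
    lhs = forward_difference(falling_factorial(n))
    rhs = sp.expand(n * falling_factorial(n - 1))
    assert sp.simplify(lhs - rhs) == 0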
See also
• Pincherle derivative
• Shift operator
• Umbral calculus
References
• Nikol'Skii, Nikolai Kapitonovich (1986), Treatise on the shift operator: spectral function theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-15021-5
External links
• Weisstein, Eric W. "Delta Operator". MathWorld.
Bitwise operation
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands.
On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.[1]
Bitwise operators
In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first (i.e., the rightmost) one.
NOT
See also: Ones' complement
The bitwise NOT, or bitwise complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example:
NOT 0111 (decimal 7)
= 1000 (decimal 8)
NOT 10101011 (decimal 171)
= 01010100 (decimal 84)
The result is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = -x − 1.
For unsigned integers, the bitwise complement of a number is the "mirror reflection" of the number across the half-way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 - x, which can be visualized on a graph as a downward line that effectively "flips" an increasing range from 0 to 255, to a decreasing range from 255 to 0. A simple but illustrative example use is to invert a grayscale image where each pixel is stored as an unsigned integer.
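A short illustration in Python (whose integers are unbounded, so an explicit 8-bit mask stands in for a fixed-width register):

x = 171                     # 10101011 in binary
print(bin((~x) & 0xFF))     # 0b1010100 (decimal 84): the 8-bit ones' complement of x
print(255 - x)              # 84: the "mirror reflection" across the 8-bit range
print(~x)                   # -172: consistent with NOT x = -x - 1 under two's complement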
AND
A bitwise AND is a binary operation that takes two equal-length binary representations and performs the logical AND operation on each pair of the corresponding bits. Thus, if both bits in the compared position are 1, the bit in the resulting binary representation is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0). For example:
0101 (decimal 5)
AND 0011 (decimal 3)
= 0001 (decimal 1)
The operation may be used to determine whether a particular bit is set (1) or cleared (0). For example, given a bit pattern 0011 (decimal 3), to determine whether the second bit is set we use a bitwise AND with a bit pattern containing 1 only in the second bit:
0011 (decimal 3)
AND 0010 (decimal 2)
= 0010 (decimal 2)
Because the result 0010 is non-zero, we know the second bit in the original pattern was set. This is often called bit masking. (By analogy, the use of masking tape covers, or masks, portions that should not be altered or portions that are not of interest. In this case, the 0 values mask the bits that are not of interest.)
The bitwise AND may be used to clear selected bits (or flags) of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible.
For example, 0110 (decimal 6) can be considered a set of four flags, where the first and fourth flags are clear (0), and the second and third flags are set (1). The third flag may be cleared by using a bitwise AND with the pattern that has a zero only in the third bit:
0110 (decimal 6)
AND 1011 (decimal 11)
= 0010 (decimal 2)
Because of this property, it becomes easy to check the parity of a binary number by checking the value of its lowest-order bit. Using the example above:
0110 (decimal 6)
AND 0001 (decimal 1)
= 0000 (decimal 0)
Because 6 AND 1 is zero, 6 is divisible by two and therefore even.
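The masking, flag-clearing and parity checks above can be written compactly in any language with bitwise operators; a Python sketch:

flags = 0b0110                    # decimal 6: second and third flags set
print((flags & 0b0010) != 0)      # True: the second bit is set
print(bin(flags & 0b1011))        # 0b10 (decimal 2): the third flag cleared
print((flags & 1) == 0)           # True: the lowest bit is 0, so 6 is even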
OR
A bitwise OR is a binary operation that takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example:
0101 (decimal 5)
OR 0011 (decimal 3)
= 0111 (decimal 7)
The bitwise OR may be used to set to 1 the selected bits of the register described above. For example, the fourth bit of 0010 (decimal 2) may be set by performing a bitwise OR with the pattern with only the fourth bit set:
0010 (decimal 2)
OR 1000 (decimal 8)
= 1010 (decimal 10)
XOR
A bitwise XOR is a binary operation that takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only one of the bits is 1, but will be 0 if both are 0 or both are 1. In other words, XOR compares two bits: the result is 1 if the two bits are different, and 0 if they are the same. For example:
0101 (decimal 5)
XOR 0011 (decimal 3)
= 0110 (decimal 6)
The bitwise XOR may be used to invert selected bits in a register (also called toggle or flip). Any bit may be toggled by XORing it with 1. For example, given the bit pattern 0010 (decimal 2) the second and fourth bits may be toggled by a bitwise XOR with a bit pattern containing 1 in the second and fourth positions:
0010 (decimal 2)
XOR 1010 (decimal 10)
= 1000 (decimal 8)
This technique may be used to manipulate bit patterns representing sets of Boolean states.
Assembly language programmers and optimizing compilers sometimes use XOR as a short-cut to setting the value of a register to zero. Performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and less memory than loading a zero value and saving it to the register.
If the set of bit strings of fixed length n (i.e. machine words) is thought of as an n-dimensional vector space ${\bf {F}}_{2}^{n}$ over the field ${\bf {F}}_{2}$, then vector addition corresponds to the bitwise XOR.
Mathematical equivalents
Assuming $x\geq y$, for the non-negative integers, the bitwise operations can be written as follows:
${\begin{aligned}\operatorname {NOT} x&=\sum _{n=0}^{\lfloor \log _{2}(x)\rfloor }2^{n}\left[\left(\left\lfloor {\frac {x}{2^{n}}}\right\rfloor {\bmod {2}}+1\right){\bmod {2}}\right]=2^{\left\lfloor \log _{2}(x)\right\rfloor +1}-1-x\\x\operatorname {AND} y&=\sum _{n=0}^{\lfloor \log _{2}(x)\rfloor }2^{n}\left(\left\lfloor {\frac {x}{2^{n}}}\right\rfloor {\bmod {2}}\right)\left(\left\lfloor {\frac {y}{2^{n}}}\right\rfloor {\bmod {2}}\right)\\x\operatorname {OR} y&=\sum _{n=0}^{\lfloor \log _{2}(x)\rfloor }2^{n}\left(\left(\left\lfloor {\frac {x}{2^{n}}}\right\rfloor {\bmod {2}}\right)+\left(\left\lfloor {\frac {y}{2^{n}}}\right\rfloor {\bmod {2}}\right)-\left(\left\lfloor {\frac {x}{2^{n}}}\right\rfloor {\bmod {2}}\right)\left(\left\lfloor {\frac {y}{2^{n}}}\right\rfloor {\bmod {2}}\right)\right)\\x\operatorname {XOR} y&=\sum _{n=0}^{\lfloor \log _{2}(x)\rfloor }2^{n}\left(\left[\left(\left\lfloor {\frac {x}{2^{n}}}\right\rfloor {\bmod {2}}\right)+\left(\left\lfloor {\frac {y}{2^{n}}}\right\rfloor {\bmod {2}}\right)\right]{\bmod {2}}\right)=\sum _{n=0}^{\lfloor \log _{2}(x)\rfloor }2^{n}\left[\left(\left\lfloor {\frac {x}{2^{n}}}\right\rfloor +\left\lfloor {\frac {y}{2^{n}}}\right\rfloor \right){\bmod {2}}\right]\end{aligned}}$
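These closed forms can be checked numerically against a language's built-in operators. A sketch in Python, valid under the same assumptions (x ≥ y ≥ 0, with x ≥ 1 so that log2(x) is defined):

from math import floor, log2

def bit(v, n):
    return (v // 2 ** n) % 2        # floor(v / 2^n) mod 2

def and_sum(x, y):
    return sum(2 ** n * bit(x, n) * bit(y, n) for n in range(floor(log2(x)) + 1))

def xor_sum(x, y):
    return sum(2 ** n * ((bit(x, n) + bit(y, n)) % 2) for n in range(floor(log2(x)) + 1))

for x, y in [(5, 3), (12, 10), (255, 170)]:
    assert and_sum(x, y) == x & y
    assert xor_sum(x, y) == x ^ y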
Truth table for all binary logical operators
There are 16 possible truth functions of two binary variables; this defines a truth table.
Here are the bitwise equivalents of the sixteen operations on two bits p and q:
 #   Name      p=1,q=1  p=1,q=0  p=0,q=1  p=0,q=0   Bitwise equivalent
 0   F            0        0        0        0      0
 1   NOR          0        0        0        1      NOT (p OR q)
 2   Xq           0        0        1        0      (NOT p) AND q
 3   ¬p           0        0        1        1      NOT p
 4   ↛            0        1        0        0      p AND (NOT q)
 5   ¬q           0        1        0        1      NOT q
 6   XOR          0        1        1        0      p XOR q
 7   NAND         0        1        1        1      NOT (p AND q)
 8   AND          1        0        0        0      p AND q
 9   XNOR         1        0        0        1      NOT (p XOR q)
10   q            1        0        1        0      q
11   If/then      1        0        1        1      (NOT p) OR q
12   p            1        1        0        0      p
13   Then/if      1        1        0        1      p OR (NOT q)
14   OR           1        1        1        0      p OR q
15   T            1        1        1        1      1
Bit shifts
The bit shifts are sometimes considered bitwise operations, because they treat a value as a series of bits rather than as a numerical quantity. In these operations, the digits are moved, or shifted, to the left or right. Registers in a computer processor have a fixed width, so some bits will be "shifted out" of the register at one end, while the same number of bits are "shifted in" from the other end; the differences between bit shift operators lie in how they determine the values of the shifted-in bits.
Bit addressing
If the width of the register (frequently 32 or even 64) is larger than the number of bits (usually 8) of the smallest addressable unit, frequently called byte, the shift operations induce an addressing scheme from the bytes to the bits. Thereby the orientations "left" and "right" are taken from the standard writing of numbers in a place-value notation, such that a left shift increases and a right shift decreases the value of the number ― if the left digits are read first, this makes up a big-endian orientation. Disregarding the boundary effects at both ends of the register, arithmetic and logical shift operations behave the same, and a shift by 8 bit positions transports the bit pattern by 1 byte position in the following way:
Little-endian ordering: a left shift by 8 positions increases the byte address by 1,
a right shift by 8 positions decreases the byte address by 1.
Big-endian ordering: a left shift by 8 positions decreases the byte address by 1,
a right shift by 8 positions increases the byte address by 1.
Arithmetic shift
In an arithmetic shift, the bits that are shifted out of either end are discarded. In a left arithmetic shift, zeros are shifted in on the right; in a right arithmetic shift, the sign bit (the MSB in two's complement) is shifted in on the left, thus preserving the sign of the operand.
This example uses an 8-bit register, interpreted as two's complement:
00010111 (decimal +23) LEFT-SHIFT
= 00101110 (decimal +46)
10010111 (decimal −105) RIGHT-SHIFT
= 11001011 (decimal −53)
In the first case, the leftmost digit was shifted past the end of the register, and a new 0 was shifted into the rightmost position. In the second case, the rightmost 1 was shifted out (perhaps into the carry flag), and a new 1 was copied into the leftmost position, preserving the sign of the number. Multiple shifts are sometimes shortened to a single shift by some number of digits. For example:
00010111 (decimal +23) LEFT-SHIFT-BY-TWO
= 01011100 (decimal +92)
A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to taking the floor of division by 2^n. If the binary number is treated as ones' complement, then the same right-shift operation results in division by 2^n and rounding toward zero.
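Python's shift operators on its arbitrary-precision integers behave arithmetically (right-shifting a negative value floors the division), so the examples above can be reproduced directly:

print(23 << 1)      # 46:  a left shift by one doubles the value
print(23 << 2)      # 92:  shifting left by n multiplies by 2**n
print(-105 >> 1)    # -53: an arithmetic right shift, i.e. floor(-105 / 2)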
Logical shift
In a logical shift, zeros are shifted in to replace the discarded bits. Therefore, the logical and arithmetic left-shifts are exactly the same.
However, as the logical right-shift inserts value 0 bits into the most significant bit, instead of copying the sign bit, it is ideal for unsigned binary numbers, while the arithmetic right-shift is ideal for signed two's complement binary numbers.
Circular shift
Further information: Circular shift
Another form of shift is the circular shift, bitwise rotation or bit rotation.
Rotate
In this operation, sometimes called rotate no carry, the bits are "rotated" as if the left and right ends of the register were joined. The value that is shifted into the right during a left-shift is whatever value was shifted out on the left, and vice versa for a right-shift operation. This is useful if it is necessary to retain all the existing bits, and is frequently used in digital cryptography.
Rotate through carry
Rotate through carry is a variant of the rotate operation, where the bit that is shifted in (on either end) is the old value of the carry flag, and the bit that is shifted out (on the other end) becomes the new value of the carry flag.
A single rotate through carry can simulate a logical or arithmetic shift of one position by setting up the carry flag beforehand. For example, if the carry flag contains 0, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is a logical right-shift, and if the carry flag contains a copy of the sign bit, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is an arithmetic right-shift. For this reason, some microcontrollers such as low end PICs just have rotate and rotate through carry, and don't bother with arithmetic or logical shift instructions.
Rotate through carry is especially useful when performing shifts on numbers larger than the processor's native word size, because if a large number is stored in two registers, the bit that is shifted off one end of the first register must come in at the other end of the second. With rotate-through-carry, that bit is "saved" in the carry flag during the first shift, ready to shift in during the second shift without any extra preparation.
In high-level languages
Further information: Circular shift § Implementing circular shifts
In C family of languages
In C and C++ languages, the logical shift operators are "<<" for left shift and ">>" for right shift. The number of places to shift is given as the second argument to the operator. For example,
x = y << 2;
assigns x the result of shifting y to the left by two bits, which is equivalent to a multiplication by four.
Shifts can result in implementation-defined behavior or undefined behavior, so care must be taken when using them. The result of shifting by a bit count greater than or equal to the word's size is undefined behavior in C and C++.[2][3] Right-shifting a negative value is implementation-defined and not recommended by good coding practice;[4] the result of left-shifting a signed value is undefined if the result cannot be represented in the result type.[2]
In C#, the right-shift is an arithmetic shift when the first operand is an int or long. If the first operand is of type uint or ulong, the right-shift is a logical shift.[5]
Circular shifts
The C family of languages lacks a rotate operator (although C++20 provides std::rotl and std::rotr), but one can be synthesized from the shift operators. Care must be taken to ensure the statement is well formed, to avoid undefined behavior and timing attacks in software with security requirements.[6] For example, a naive implementation that left-rotates a 32-bit unsigned value x by n positions is simply
uint32_t x = ..., n = ...;
uint32_t y = (x << n) | (x >> (32 - n));
However, a shift by 0 bits results in undefined behavior in the right-hand expression (x >> (32 - n)) because 32 - 0 is 32, and 32 is outside the range 0–31 inclusive. A second try might result in
uint32_t x = ..., n = ...;
uint32_t y = n ? (x << n) | (x >> (32 - n)) : x;
where the shift amount is tested to ensure that it does not introduce undefined behavior. However, the branch adds an additional code path and presents an opportunity for timing analysis and attack, which is often not acceptable in high-integrity software.[6] In addition, the code compiles to multiple machine instructions, which is often less efficient than the processor's native instruction.
To avoid the undefined behavior and branches under GCC and Clang, the following is recommended. The pattern is recognized by many compilers, and the compiler will emit a single rotate instruction:[7][8][9]
uint32_t x = ..., n = ...;
uint32_t y = (x << n) | (x >> (-n & 31));
There are also compiler-specific intrinsics implementing circular shifts, like _rotl8, _rotl16, _rotr8, _rotr16 in Microsoft Visual C++. Clang provides some rotate intrinsics for Microsoft compatibility that suffer from the problems above.[9] GCC does not offer rotate intrinsics. Intel also provides x86 intrinsics.
Java
In Java, all integer types are signed, so the "<<" and ">>" operators perform arithmetic shifts. Java adds the operator ">>>" to perform logical right shifts, but since the logical and arithmetic left-shift operations are identical for signed integers, there is no "<<<" operator in Java.
More details of Java shift operators:[10]
• The operators << (left shift), >> (signed right shift), and >>> (unsigned right shift) are called the shift operators.
• The type of the shift expression is the promoted type of the left-hand operand. For example, aByte >>> 2 is equivalent to ((int) aByte) >>> 2.
• If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x1f (0b11111).[11] The shift distance actually used is therefore always in the range 0 to 31, inclusive.
• If the promoted type of the left-hand operand is long, then only the six lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x3f (0b111111).[11] The shift distance actually used is therefore always in the range 0 to 63, inclusive.
• The value of n >>> s is n right-shifted s bit positions with zero-extension.
• In bit and shift operations, the type byte is implicitly converted to int. If the byte value is negative (its highest bit is one), then ones are used to fill the extra bits in the int (sign extension). So byte b1 = -5; int i = b1 | 0x0200; will result in i == -5.
JavaScript
JavaScript provides the same bitwise operators, which treat their operands as 32-bit integers and operate on each pair of corresponding bits.[12]
Pascal
In Pascal, as well as in all its dialects (such as Object Pascal and Standard Pascal), the logical left and right shift operators are "shl" and "shr", respectively. Even for signed integers, shr behaves like a logical shift, and does not copy the sign bit. The number of places to shift is given as the second argument. For example, the following assigns x the result of shifting y to the left by two bits:
x := y shl 2;
Other
• popcount, used in cryptography
• count leading zeros
Applications
Bitwise operations are necessary particularly in lower-level programming such as device drivers, low-level graphics, communications protocol packet assembly, and decoding.
Although machines often have efficient built-in instructions for performing arithmetic and logical operations, all these operations can be performed by combining the bitwise operators and zero-testing in various ways.[13] For example, here is a pseudocode implementation of ancient Egyptian multiplication showing how to multiply two arbitrary integers a and b (a greater than b) using only bitshifts and addition:
c ← 0
while b ≠ 0
if (b and 1) ≠ 0
c ← c + a
left shift a by 1
right shift b by 1
return c
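A direct transcription of this pseudocode into Python (a sketch assuming non-negative integer inputs):

def multiply(a, b):
    """Ancient Egyptian multiplication using only shifts, AND and addition."""
    c = 0
    while b != 0:
        if b & 1:        # lowest bit of b set?
            c += a
        a <<= 1          # left shift a by 1 (doubling)
        b >>= 1          # right shift b by 1 (halving)
    return c

assert multiply(23, 7) == 161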
Another example is a pseudocode implementation of addition, showing how to calculate a sum of two integers a and b using bitwise operators and zero-testing:
while a ≠ 0
c ← b and a
b ← b xor a
left shift c by 1
a ← c
return b
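The same loop transcribed into Python (again a sketch for non-negative integers; with fixed-width or negative operands the carries would need masking):

def add(a, b):
    """Addition built from XOR (sum without carries) and AND (the carry bits)."""
    while a != 0:
        c = b & a        # carry bits
        b = b ^ a        # sum ignoring carries
        a = c << 1       # carries move one position to the left
    return b

assert add(23, 7) == 30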
Boolean algebra
Main article: Boolean algebra
Sometimes it is useful to simplify complex expressions made up of bitwise operations, for example when writing compilers. The goal of a compiler is to translate a high level programming language into the most efficient machine code possible. Boolean algebra is used to simplify complex bitwise expressions.
AND
• x & y = y & x
• x & (y & z) = (x & y) & z
• x & 0xFFFF = x[14]
• x & 0 = 0
• x & x = x
OR
• x | y = y | x
• x | (y | z) = (x | y) | z
• x | 0 = x
• x | 0xFFFF = 0xFFFF
• x | x = x
NOT
• ~(~x) = x
XOR
• x ^ y = y ^ x
• x ^ (y ^ z) = (x ^ y) ^ z
• x ^ 0 = x
• x ^ y ^ y = x
• x ^ x = 0
• x ^ 0xFFFF = ~x
Additionally, XOR can be composed using the 3 basic operations (AND, OR, NOT)
• a ^ b = (a | b) & (~a | ~b)
• a ^ b = (a & ~b) | (~a & b)
Others
• x | (x & y) = x
• x & (x | y) = x
• ~(x | y) = ~x & ~y
• ~(x & y) = ~x | ~y
• x | (y & z) = (x | y) & (x | z)
• x & (y | z) = (x & y) | (x & z)
• x & (y ^ z) = (x & y) ^ (x & z)
• x + y = (x ^ y) + ((x & y) << 1)
• x - y = ~(~x + y)
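Several of the identities above can be spot-checked mechanically. The following sketch relies on Python's arbitrary-precision integers, which behave like infinitely sign-extended two's-complement values:

import random

for _ in range(1000):
    x, y = random.getrandbits(16), random.getrandbits(16)
    assert x ^ y == (x | y) & (~x | ~y)
    assert x ^ y == (x & ~y) | (~x & y)
    assert x + y == (x ^ y) + ((x & y) << 1)
    assert x - y == ~(~x + y)
    assert ~(x | y) == ~x & ~y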
Inverses and solving equations
It can be hard to solve for variables in boolean algebra, because unlike regular algebra, several operations do not have inverses. Operations without inverses lose some of the original data bits when they are performed, and it is not possible to recover this missing information.
• Has inverse
• NOT
• XOR
• Rotate left
• Rotate right
• No inverse
• AND
• OR
• Shift left
• Shift right
Order of operations
Operations at the top of this list are executed first. See the main article for a more complete list.
• ( )
• ~ -[15]
• * / %
• + -[16]
• << >>
• &
• ^
• |
See also
• Arithmetic logic unit
• Bit manipulation
• Bitboard
• Bitwise operations in C
• Boolean algebra (logic)
• Double dabble
• Find first set
• Karnaugh map
• Logic gate
• Logical operator
• Primitive data type
References
1. "CMicrotek Low-power Design Blog". CMicrotek. Retrieved 2015-08-12.
2. JTC1/SC22/WG14 N843 "C programming language", section 6.5.7
3. "Arithmetic operators - cppreference.com". en.cppreference.com. Retrieved 2016-07-06.
4. "INT13-C. Use bitwise operators only on unsigned operands". CERT: Secure Coding Standards. Software Engineering Institute, Carnegie Mellon University. Retrieved 2015-09-07.
5. "Operator (C# Reference)". Microsoft. Retrieved 2013-07-14.
6. "Near constant time rotate that does not violate the standards?". Stack Exchange Network. Retrieved 2015-08-12.
7. "Poor optimization of portable rotate idiom". GNU GCC Project. Retrieved 2015-08-11.
8. "Circular rotate that does not violate C/C++ standard?". Intel Developer Forums. Retrieved 2015-08-12.
9. "Constant not propagated into inline assembly, results in "constraint 'I' expects an integer constant expression"". LLVM Project. Retrieved 2015-08-11.
10. The Java Language Specification, section 15.19. Shift Operators
11. "Chapter 15. Expressions". oracle.com.
12. "JavaScript Bitwise". W3Schools.com.
13. "Synthesizing arithmetic operations using bit-shifting tricks". Bisqwit.iki.fi. 2014-02-15. Retrieved 2014-03-08.
14. Throughout this article, 0xFFFF means that all the bits in your data type need to be set to 1. The exact number of bits depends on the width of the data type.
15. - is negation here, not subtraction
16. - is subtraction here, not negation
External links
• Online Bitwise Calculator supports Bitwise AND, OR and XOR
• XORcat, a tool for bitwise-XOR files/streams
• Division using bitshifts
• "Bitwise Operations Mod N" by Enrique Zeleny, Wolfram Demonstrations Project.
• "Plots Of Compositions Of Bitwise Operations" by Enrique Zeleny, The Wolfram Demonstrations Project.
Shift theorem
In mathematics, the (exponential) shift theorem is a theorem about polynomial differential operators (D-operators) and exponential functions. It permits one to eliminate, in certain cases, the exponential from under the D-operators.
Statement
The theorem states that, if P(D) is a polynomial D-operator, then, for any sufficiently differentiable function y,
$P(D)(e^{ax}y)\equiv e^{ax}P(D+a)y.$
To prove the result, proceed by induction. Note that only the special case
$P(D)=D^{n}$
needs to be proved, since the general result then follows by linearity of D-operators.
The result is clearly true for n = 1 since
$D(e^{ax}y)=e^{ax}(D+a)y.$
Now suppose the result true for n = k, that is,
$D^{k}(e^{ax}y)=e^{ax}(D+a)^{k}y.$
Then,
${\begin{aligned}D^{k+1}(e^{ax}y)&\equiv {\frac {d}{dx}}\left\{e^{ax}\left(D+a\right)^{k}y\right\}\\&{}=e^{ax}{\frac {d}{dx}}\left\{\left(D+a\right)^{k}y\right\}+ae^{ax}\left\{\left(D+a\right)^{k}y\right\}\\&{}=e^{ax}\left\{\left({\frac {d}{dx}}+a\right)\left(D+a\right)^{k}y\right\}\\&{}=e^{ax}(D+a)^{k+1}y.\end{aligned}}$
This completes the proof.
The shift theorem can be applied equally well to inverse operators:
${\frac {1}{P(D)}}(e^{ax}y)=e^{ax}{\frac {1}{P(D+a)}}y.$
Related
There is a similar version of the shift theorem for Laplace transforms (with $f(t-a)$ taken to be zero for $t<a$):
$e^{-as}{\mathcal {L}}\{f(t)\}={\mathcal {L}}\{f(t-a)\}.$
Examples
The exponential shift theorem can be used to speed up the calculation of higher derivatives of functions that are given by the product of an exponential and another function. For instance, if $f(x)=\sin(x)e^{x}$, one has that
${\begin{aligned}D^{3}f&=D^{3}(e^{x}\sin(x))=e^{x}(D+1)^{3}\sin(x)\\&=e^{x}\left(D^{3}+3D^{2}+3D+1\right)\sin(x)\\&=e^{x}\left(-\cos(x)-3\sin(x)+3\cos(x)+\sin(x)\right)\end{aligned}}$
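This computation is easy to confirm symbolically; a minimal check using the third-party SymPy library:

import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.sin(x)

lhs = sp.diff(f, x, 3)                   # D^3 (e^x sin x) computed directly
rhs = sp.exp(x) * (-sp.cos(x) - 3*sp.sin(x) + 3*sp.cos(x) + sp.sin(x))
assert sp.simplify(lhs - rhs) == 0       # agrees with e^x (D + 1)^3 sin x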
Another application of the exponential shift theorem is to solve linear differential equations whose characteristic polynomial has repeated roots.[1]
Notes
1. See the article homogeneous equation with constant coefficients for more details.
References
• Morris, Tenenbaum; Pollard, Harry (1985). Ordinary differential equations : an elementary textbook for students of mathematics, engineering, and the sciences. New York: Dover Publications. ISBN 0486649407. OCLC 12188701.
Shift operator
In mathematics, and in particular functional analysis, the shift operator, also known as the translation operator, is an operator that takes a function x ↦ f(x) to its translation x ↦ f(x + a).[1] In time series analysis, the shift operator is called the lag operator.
This article is about shift operators in mathematics. For operators in computer programming languages, see Bit shift. For the shift operator of group schemes, see Verschiebung operator.
Shift operators are examples of linear operators, important for their simplicity and natural occurrence. The shift operator action on functions of a real variable plays an important role in harmonic analysis; for example, it appears in the definitions of almost periodic functions, positive-definite functions, derivatives, and convolution.[2] Shifts of sequences (functions of an integer variable) appear in diverse areas such as Hardy spaces, the theory of abelian varieties, and the theory of symbolic dynamics, for which the baker's map is an explicit representation. The shift (translation) functor of a triangulated category is a categorified analogue of the shift operator.
Definition
Functions of a real variable
The shift operator $T^{t}$ (where $t\in \mathbb {R} $) takes a function $f$ on $\mathbb {R} $ to its translation $f_{t}$,
$T^{t}f(x)=f_{t}(x)=f(x+t)~.$
A practical operational calculus representation of the linear operator $T^{t}$ in terms of the plain derivative ${\tfrac {d}{dx}}$ was introduced by Lagrange,
$T^{t}=e^{t{\frac {d}{dx}}}~,$
which may be interpreted operationally through its formal Taylor expansion in t; and whose action on the monomial $x^{n}$ is evident by the binomial theorem, and hence on all series in x, and so all functions f(x) as above.[3] This, then, is a formal encoding of the Taylor expansion in Heaviside's calculus.
The operator thus provides the prototype[4] for Lie's celebrated advective flow for Abelian groups,
$\exp \left(t\beta (x){\frac {d}{dx}}\right)f(x)=\exp \left(t{\frac {d}{dh}}\right)F(h)=F(h+t)=f\left(h^{-1}(h(x)+t)\right),$
where the canonical coordinates h (Abel functions) are defined such that
$h'(x)\equiv {\frac {1}{\beta (x)}}~,\qquad f(x)\equiv F(h(x)).$
For example, it easily follows that $\beta (x)=x$ yields scaling,
$\exp \left(tx{\frac {d}{dx}}\right)f(x)=f(e^{t}x),$
hence $\exp \left(i\pi x{\tfrac {d}{dx}}\right)f(x)=f(-x)$ (parity); likewise, $\beta (x)=x^{2}$ yields[5]
$\exp \left(tx^{2}{\frac {d}{dx}}\right)f(x)=f\left({\frac {x}{1-tx}}\right),$
$\beta (x)={\tfrac {1}{x}}$ yields
$\exp \left({\frac {t}{x}}{\frac {d}{dx}}\right)f(x)=f\left({\sqrt {x^{2}+2t}}\right),$
$\beta (x)=e^{x}$ yields
$\exp \left(te^{x}{\frac {d}{dx}}\right)f(x)=f\left(\ln \left({\frac {1}{e^{-x}-t}}\right)\right),$
etc.
The initial condition of the flow and the group property completely determine the entire Lie flow, providing a solution to the translation functional equation[6]
$f_{t}(f_{\tau }(x))=f_{t+\tau }(x).$
Sequences
Main article: Shift space
The left shift operator acts on one-sided infinite sequence of numbers by
$S^{*}:(a_{1},a_{2},a_{3},\ldots )\mapsto (a_{2},a_{3},a_{4},\ldots )$
and on two-sided infinite sequences by
$T:(a_{k})_{k\,=\,-\infty }^{\infty }\mapsto (a_{k+1})_{k\,=\,-\infty }^{\infty }.$
The right shift operator acts on one-sided infinite sequence of numbers by
$S:(a_{1},a_{2},a_{3},\ldots )\mapsto (0,a_{1},a_{2},\ldots )$
and on two-sided infinite sequences by
$T^{-1}:(a_{k})_{k\,=\,-\infty }^{\infty }\mapsto (a_{k-1})_{k\,=\,-\infty }^{\infty }.$
The right and left shift operators acting on two-sided infinite sequences are called bilateral shifts.
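On a finite window of a one-sided sequence, the two unilateral shifts act as follows (a minimal Python sketch; an actual sequence is of course infinite):

def left_shift(a):
    """S*: drop the first term, (a1, a2, a3, ...) -> (a2, a3, ...)."""
    return list(a[1:])

def right_shift(a):
    """S: prepend a zero, (a1, a2, a3, ...) -> (0, a1, a2, ...)."""
    return [0] + list(a)

a = [1, 2, 3, 4]
print(left_shift(a))               # [2, 3, 4]
print(right_shift(a))              # [0, 1, 2, 3, 4]
print(left_shift(right_shift(a)))  # [1, 2, 3, 4]: S*S is the identity, while SS* is not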
Abelian groups
In general, as illustrated above, if F is a function on an abelian group G, and h is an element of G, the shift operator $T^{g}$ maps F to[6][7]
$F_{g}(h)=F(h+g).$
Properties of the shift operator
The shift operator acting on real- or complex-valued functions or sequences is a linear operator which preserves most of the standard norms which appear in functional analysis. Therefore, it is usually a continuous operator with norm one.
Action on Hilbert spaces
The shift operator acting on two-sided sequences is a unitary operator on $\ell _{2}(\mathbb {Z} ).$ The shift operator acting on functions of a real variable is a unitary operator on $L_{2}(\mathbb {R} ).$
In both cases, the (left) shift operator satisfies the following commutation relation with the Fourier transform:
${\mathcal {F}}T^{t}=M^{t}{\mathcal {F}},$
where $M^{t}$ is the multiplication operator by $\exp(itx)$. Therefore, the spectrum of $T^{t}$ is the unit circle.
The one-sided shift S acting on $\ell _{2}(\mathbb {N} )$ is a proper isometry with range equal to all vectors which vanish in the first coordinate. The operator S is a compression of $T^{-1}$, in the sense that
$T^{-1}y=Sx{\text{ for each }}x\in \ell ^{2}(\mathbb {N} ),$
where y is the vector in $\ell _{2}(\mathbb {Z} )$ with $y_{i}=x_{i}$ for i ≥ 0 and $y_{i}=0$ for i < 0. This observation is at the heart of the construction of many unitary dilations of isometries.
The spectrum of S is the unit disk. The shift S is one example of a Fredholm operator; it has Fredholm index −1.
Generalization
Jean Delsarte introduced the notion of generalized shift operator (also called generalized displacement operator); it was further developed by Boris Levitan.[2][8][9]
A family of operators $\{L^{x}\}_{x\in X}$ acting on a space Φ of functions from a set X to $\mathbb {C} $ is called a family of generalized shift operators if the following properties hold:
1. Associativity: let $(R^{y}f)(x)=(L^{x}f)(y).$ Then $L^{x}R^{y}=R^{y}L^{x}.$
2. There exists e in X such that Le is the identity operator.
In this case, the set X is called a hypergroup.
See also
• Arithmetic shift
• Logical shift
• Finite difference
• Translation operator (quantum mechanics)
Notes
1. Weisstein, Eric W. "Shift Operator". MathWorld.
2. Marchenko, V. A. (2006). "The generalized shift, transformation operators, and inverse problems". Mathematical events of the twentieth century. Berlin: Springer. pp. 145–162. doi:10.1007/3-540-29462-7_8. ISBN 978-3-540-23235-3. MR 2182783.
3. Jordan, Charles, (1939/1965). Calculus of Finite Differences, (AMS Chelsea Publishing), ISBN 978-0828400336 .
4. Hamermesh, M. (1989), Group Theory and Its Application to Physical Problems (Dover Books on Physics), ISBN 978-0486661810, Ch. 8-6, pp. 294–295, online.
5. p 75 of Georg Scheffers (1891): Sophus Lie, Vorlesungen Ueber Differentialgleichungen Mit Bekannten Infinitesimalen Transformationen, Teubner, Leipzig, 1891. ISBN 978-3743343078 online
6. Aczel, J (2006), Lectures on Functional Equations and Their Applications (Dover Books on Mathematics, 2006), Ch. 6, ISBN 978-0486445236 .
7. "A one-parameter continuous group is equivalent to a group of translations". M Hamermesh, ibid.
8. Levitan, B.M.; Litvinov, G.L. (2001) [1994], "Generalized displacement operators", Encyclopedia of Mathematics, EMS Press
9. Bredikhina, E.A. (2001) [1994], "Almost-periodic function", Encyclopedia of Mathematics, EMS Press
Bibliography
• Partington, Jonathan R. (March 15, 2004). Linear Operators and Linear Systems. Cambridge University Press. doi:10.1017/cbo9780511616693. ISBN 978-0-521-83734-7.
• Marvin Rosenblum and James Rovnyak, Hardy Classes and Operator Theory, (1985) Oxford University Press.
Shift matrix
In mathematics, a shift matrix is a binary matrix with ones only on the superdiagonal or subdiagonal, and zeroes elsewhere. A shift matrix U with ones on the superdiagonal is an upper shift matrix; the corresponding subdiagonal matrix L is known as a lower shift matrix. The (i, j) components of U and L are
$U_{ij}=\delta _{i+1,j},\quad L_{ij}=\delta _{i,j+1},$
where $\delta _{ij}$ is the Kronecker delta symbol.
For example, the 5×5 shift matrices are
$U_{5}={\begin{pmatrix}0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\\0&0&0&0&0\end{pmatrix}}\quad L_{5}={\begin{pmatrix}0&0&0&0&0\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\end{pmatrix}}.$
Clearly, the transpose of a lower shift matrix is an upper shift matrix and vice versa.
As a linear transformation, a lower shift matrix shifts the components of a column vector one position down, with a zero appearing in the first position. An upper shift matrix shifts the components of a column vector one position up, with a zero appearing in the last position.[1]
Premultiplying a matrix A by a lower shift matrix results in the elements of A being shifted downward by one position, with zeroes appearing in the top row. Postmultiplication by a lower shift matrix results in a shift left. Similar operations involving an upper shift matrix result in the opposite shift.
Clearly all finite-dimensional shift matrices are nilpotent; an n by n shift matrix S becomes the null matrix when raised to the power of its dimension n.
Shift matrices act on shift spaces. The infinite-dimensional shift matrices are particularly important for the study of ergodic systems. Important examples of infinite-dimensional shifts are the Bernoulli shift, which acts as a shift on Cantor space, and the Gauss map, which acts as a shift on the space of continued fractions (that is, on Baire space.)
Properties
Let L and U be the n by n lower and upper shift matrices, respectively. The following properties hold for both U and L. Let us therefore only list the properties for U:
• det(U) = 0
• trace(U) = 0
• rank(U) = n − 1
• The characteristic polynomial of U is
$p_{U}(\lambda )=(-1)^{n}\lambda ^{n}.$
• Un = 0. This follows from the previous property by the Cayley–Hamilton theorem.
• The permanent of U is 0.
The following properties show how U and L are related:
• LT = U; UT = L
• The null spaces of U and L are
$N(U)=\operatorname {span} \left\{(1,0,\ldots ,0)^{\mathsf {T}}\right\},$
$N(L)=\operatorname {span} \left\{(0,\ldots ,0,1)^{\mathsf {T}}\right\}.$
• The spectrum of U and L is $\{0\}$. The algebraic multiplicity of 0 is n, and its geometric multiplicity is 1. From the expressions for the null spaces, it follows that (up to a scaling) the only eigenvector for U is $(1,0,\ldots ,0)^{\mathsf {T}}$, and the only eigenvector for L is $(0,\ldots ,0,1)^{\mathsf {T}}$.
• For LU and UL we have
$UL=I-\operatorname {diag} (0,\ldots ,0,1),$
$LU=I-\operatorname {diag} (1,0,\ldots ,0).$
These matrices are both idempotent, symmetric, and have the same rank as U and L.
• $L^{n-a}U^{n-a}+U^{a}L^{a}=U^{n-a}L^{n-a}+L^{a}U^{a}=I$ (the identity matrix), for any integer a between 0 and n inclusive.
If N is any nilpotent matrix, then N is similar to a block diagonal matrix of the form
${\begin{pmatrix}S_{1}&0&\ldots &0\\0&S_{2}&\ldots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\ldots &S_{r}\end{pmatrix}}$
where each of the blocks S1, S2, ..., Sr is a shift matrix (possibly of different sizes).[2][3]
Examples
$S={\begin{pmatrix}0&0&0&0&0\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\end{pmatrix}};\quad A={\begin{pmatrix}1&1&1&1&1\\1&2&2&2&1\\1&2&3&2&1\\1&2&2&2&1\\1&1&1&1&1\end{pmatrix}}.$
Then,
$SA={\begin{pmatrix}0&0&0&0&0\\1&1&1&1&1\\1&2&2&2&1\\1&2&3&2&1\\1&2&2&2&1\end{pmatrix}};\quad AS={\begin{pmatrix}1&1&1&1&0\\2&2&2&1&0\\2&3&2&1&0\\2&2&2&1&0\\1&1&1&1&0\end{pmatrix}}.$
Clearly there are many possible permutations. For example, $S^{\mathsf {T}}AS$ is equal to the matrix A shifted up and left along the main diagonal.
$S^{\mathsf {T}}AS={\begin{pmatrix}2&2&2&1&0\\2&3&2&1&0\\2&2&2&1&0\\1&1&1&1&0\\0&0&0&0&0\end{pmatrix}}.$
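The matrices of this example can be reproduced with the same kind of NumPy computation (again only a sketch, with names mirroring S and A above):

import numpy as np

S = np.eye(5, k=-1)                  # the lower shift matrix S of this example
A = np.array([[1, 1, 1, 1, 1],
              [1, 2, 2, 2, 1],
              [1, 2, 3, 2, 1],
              [1, 2, 2, 2, 1],
              [1, 1, 1, 1, 1]])

print(S @ A)          # A shifted down by one row
print(A @ S)          # A shifted left by one column
print(S.T @ A @ S)    # A shifted up and left along the main diagonal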
See also
• Clock and shift matrices
• Nilpotent matrix
• Subshift of finite type
Notes
1. Beauregard & Fraleigh (1973, p. 312)
2. Beauregard & Fraleigh (1973, pp. 312, 313)
3. Herstein (1964, p. 250)
References
• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
• Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
External links
• Shift Matrix - entry in the Matrix Reference Manual
Matrix classes
Explicitly constrained entries
• Alternant
• Anti-diagonal
• Anti-Hermitian
• Anti-symmetric
• Arrowhead
• Band
• Bidiagonal
• Bisymmetric
• Block-diagonal
• Block
• Block tridiagonal
• Boolean
• Cauchy
• Centrosymmetric
• Conference
• Complex Hadamard
• Copositive
• Diagonally dominant
• Diagonal
• Discrete Fourier Transform
• Elementary
• Equivalent
• Frobenius
• Generalized permutation
• Hadamard
• Hankel
• Hermitian
• Hessenberg
• Hollow
• Integer
• Logical
• Matrix unit
• Metzler
• Moore
• Nonnegative
• Pentadiagonal
• Permutation
• Persymmetric
• Polynomial
• Quaternionic
• Signature
• Skew-Hermitian
• Skew-symmetric
• Skyline
• Sparse
• Sylvester
• Symmetric
• Toeplitz
• Triangular
• Tridiagonal
• Vandermonde
• Walsh
• Z
Constant
• Exchange
• Hilbert
• Identity
• Lehmer
• Of ones
• Pascal
• Pauli
• Redheffer
• Shift
• Zero
Conditions on eigenvalues or eigenvectors
• Companion
• Convergent
• Defective
• Definite
• Diagonalizable
• Hurwitz
• Positive-definite
• Stieltjes
Satisfying conditions on products or inverses
• Congruent
• Idempotent or Projection
• Invertible
• Involutory
• Nilpotent
• Normal
• Orthogonal
• Unimodular
• Unipotent
• Unitary
• Totally unimodular
• Weighing
With specific applications
• Adjugate
• Alternating sign
• Augmented
• Bézout
• Carleman
• Cartan
• Circulant
• Cofactor
• Commutation
• Confusion
• Coxeter
• Distance
• Duplication and elimination
• Euclidean distance
• Fundamental (linear differential equation)
• Generator
• Gram
• Hessian
• Householder
• Jacobian
• Moment
• Payoff
• Pick
• Random
• Rotation
• Seifert
• Shear
• Similarity
• Symplectic
• Totally positive
• Transformation
Used in statistics
• Centering
• Correlation
• Covariance
• Design
• Doubly stochastic
• Fisher information
• Hat
• Precision
• Stochastic
• Transition
Used in graph theory
• Adjacency
• Biadjacency
• Degree
• Edmonds
• Incidence
• Laplacian
• Seidel adjacency
• Tutte
Used in science and engineering
• Cabibbo–Kobayashi–Maskawa
• Density
• Fundamental (computer vision)
• Fuzzy associative
• Gamma
• Gell-Mann
• Hamiltonian
• Irregular
• Overlap
• S
• State transition
• Substitution
• Z (chemistry)
Related terms
• Jordan normal form
• Linear independence
• Matrix exponential
• Matrix representation of conic sections
• Perfect matrix
• Pseudoinverse
• Row echelon form
• Wronskian
• Mathematics portal
• List of matrices
• Category:Matrices
Shift rule
The shift rule is a mathematical rule for sequences and series.
Here $n$ and $N$ are natural numbers.
For sequences, the rule states that if $(a_{n})$ is a sequence, then it converges if and only if $(a_{n+N})$ also converges, and in this case both sequences converge to the same limit.[1]
For series, the rule states that the series $\sum \limits _{n=1}^{\infty }a_{n}$ converges to a number if and only if $\sum \limits _{n=1}^{\infty }a_{n+N}$ converges.[2]
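For example, the convergent geometric series $\sum \limits _{n=1}^{\infty }2^{-n}=1$ has convergent shifts $\sum \limits _{n=1}^{\infty }2^{-(n+N)}=2^{-N}$ for every $N$, while the divergent harmonic series $\sum \limits _{n=1}^{\infty }{\frac {1}{n}}$ has divergent shifts $\sum \limits _{n=1}^{\infty }{\frac {1}{n+N}}$. Note that for series, unlike for sequences, the shifted sum need not equal the original sum.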
References
1. Ueltschi, Daniel (2011), Analysis –MA131 (PDF), University of Warwick, p. 31.
2. Alcock, Lara (2014), How to Think About Analysis, Oxford University Press, p. 102, ISBN 9780191035371.
Shifting nth root algorithm
The shifting nth root algorithm is an algorithm for extracting the nth root of a positive real number which proceeds iteratively by shifting in n digits of the radicand, starting with the most significant, and produces one digit of the root on each iteration, in a manner similar to long division.
Algorithm
Notation
Let $B$ be the base of the number system you are using, and $n$ be the degree of the root to be extracted. Let $x$ be the radicand processed thus far, $y$ be the root extracted thus far, and $r$ be the remainder. Let $\alpha $ be the next $n$ digits of the radicand, and $\beta $ be the next digit of the root. Let $x'$ be the new value of $x$ for the next iteration, $y'$ be the new value of $y$ for the next iteration, and $r'$ be the new value of $r$ for the next iteration. These are all integers.
Invariants
At each iteration, the invariants $y^{n}+r=x$ and $(y+1)^{n}>x$ will hold. Thus $y$ is the largest integer less than or equal to the $n$th root of $x$, and $r$ is the remainder.
Initialization
The initial values of $x,y$, and $r$ should be 0. The value of $\alpha $ for the first iteration should be the most significant aligned block of $n$ digits of the radicand. An aligned block of $n$ digits means a block of digits aligned so that the decimal point falls between blocks. For example, in 123.4 the most significant aligned block of two digits is 01, the next most significant is 23, and the third most significant is 40.
Main loop
On each iteration we shift in $n$ digits of the radicand, so we have $x'=B^{n}x+\alpha $ and we produce one digit of the root, so we have $y'=By+\beta $. The first invariant implies that $r'=x'-y'^{n}$. We want to choose $\beta $ so that the invariants described above hold. It turns out that there is always exactly one such choice, as will be proved below.
Proof of existence and uniqueness of $\beta $
By definition of a digit, $0\leq \beta <B$, and by definition of a block of digits, $0\leq \alpha <B^{n}$
The first invariant says that:
$x'=y'^{n}+r'$
or
$B^{n}x+\alpha =(By+\beta )^{n}+r'.$
So, pick the largest integer $\beta $ such that
$(By+\beta )^{n}\leq B^{n}x+\alpha .$
Such a $\beta $ always exists, because $\beta =0$ is admissible: in that case the condition reads $B^{n}y^{n}\leq B^{n}x+\alpha $, which holds since $y^{n}\leq x$ and $\alpha \geq 0$. Thus, there will always be a $\beta $ that satisfies the first invariant.
Now consider the second invariant. It says:
$(y'+1)^{n}>x'$
or
$(By+\beta +1)^{n}>B^{n}x+\alpha .$
Now, if $\beta $ is not the largest admissible $\beta $ for the first invariant as described above, then $\beta +1$ is also admissible, and we have
$(By+\beta +1)^{n}\leq B^{n}x+\alpha .$
This violates the second invariant, so to satisfy both invariants we must pick the largest $\beta $ allowed by the first invariant. Thus we have proven the existence and uniqueness of $\beta $.
To summarize, on each iteration:
1. Let $\alpha $ be the next aligned block of digits from the radicand
2. Let $x'=B^{n}x+\alpha $
3. Let $\beta $ be the largest $\beta $ such that $(By+\beta )^{n}\leq B^{n}x+\alpha $
4. Let $y'=By+\beta $
5. Let $r'=x'-y'^{n}$
Now, note that $x=y^{n}+r$, so the condition
$(By+\beta )^{n}\leq B^{n}x+\alpha $
is equivalent to
$(By+\beta )^{n}-B^{n}y^{n}\leq B^{n}r+\alpha $
and
$r'=x'-y'^{n}=B^{n}x+\alpha -(By+\beta )^{n}$
is equivalent to
$r'=B^{n}r+\alpha -((By+\beta )^{n}-B^{n}y^{n}).$
Thus, we do not actually need $x$, and since $r=x-y^{n}$ and $x<(y+1)^{n}$, $r<(y+1)^{n}-y^{n}$ or $r<ny^{n-1}+O(y^{n-2})$, or $r<nx^{{n-1} \over n}+O(x^{{n-2} \over n})$, so by using $r$ instead of $x$ we save time and space by a factor of 1/$n$. Also, the $B^{n}y^{n}$ we subtract in the new test cancels the one in $(By+\beta )^{n}$, so now the highest power of $y$ we have to evaluate is $y^{n-1}$ rather than $y^{n}$.
Summary
1. Initialize $r$ and $y$ to 0.
2. Repeat until desired precision is obtained:
1. Let $\alpha $ be the next aligned block of digits from the radicand.
2. Let $\beta $ be the largest $\beta $ such that $(By+\beta )^{n}-B^{n}y^{n}\leq B^{n}r+\alpha .$
3. Let $y'=By+\beta $.
4. Let $r'=B^{n}r+\alpha -((By+\beta )^{n}-B^{n}y^{n}).$
5. Assign $y\leftarrow y'$ and $r\leftarrow r'.$
3. $y$ is the largest integer such that $y^{n}\leq xB^{k}$, and $y^{n}+r=xB^{k}$, where $k$ is the number of digits of the radicand after the decimal point that have been consumed (a negative number if the algorithm has not reached the decimal point yet).
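The summary can be carried out entirely in exact integer arithmetic. The following Python sketch illustrates the procedure described above (the function name, the default base B = 10 and the number of fractional digits are choices made for this example, not part of the algorithm's description):

def shifting_nth_root(x, n, fractional_digits=5, B=10):
    # Split the radicand into aligned blocks of n base-B digits, most
    # significant first, then append zero blocks for the fractional digits
    # of the root that are wanted.
    blocks = []
    while x > 0:
        blocks.append(x % B**n)
        x //= B**n
    blocks.reverse()
    blocks += [0] * fractional_digits

    y, r = 0, 0
    for alpha in blocks:
        # Largest digit beta with (B*y + beta)^n - B^n*y^n <= B^n*r + alpha
        # (beta = 0 always qualifies, so the maximum exists).
        beta = max(b for b in range(B)
                   if (B*y + b)**n - B**n * y**n <= B**n * r + alpha)
        r = B**n * r + alpha - ((B*y + beta)**n - B**n * y**n)
        y = B*y + beta
    return y, r  # y**n + r == radicand * B**(n*fractional_digits)

# The first digits of the cube root of 5 (compare the worked example below):
print(shifting_nth_root(5, 3)[0])   # 170997, i.e. 5**(1/3) = 1.70997...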
Paper-and-pencil nth roots
As noted above, this algorithm is similar to long division, and it lends itself to the same notation:
1. 4 4 2 2 4
——————————————————————
_ 3/ 3.000 000 000 000 000
\/ 1 = 3(10×0)^2×1 + 3(10×0)×1^2 + 1^3
—
2 000
1 744 = 3(10×1)^2×4 + 3(10×1)×4^2 + 4^3
—————
256 000
241 984 = 3(10×14)^2×4 + 3(10×14)×4^2 + 4^3
———————
14 016 000
12 458 888 = 3(10×144)^2×2 + 3(10×144)×2^2 + 2^3
——————————
1 557 112 000
1 247 791 448 = 3(10×1442)^2×2 + 3(10×1442)×2^2 + 2^3
—————————————
309 320 552 000
249 599 823 424 = 3(10×14422)^2×4 + 3(10×14422)×4^2 + 4^3
———————————————
59 720 728 576
Note that after the first iteration or two the leading term dominates the $(By+\beta )^{n}-B^{n}y^{n}$, so we can get an often correct first guess at $\beta $ by dividing $B^{n}r+\alpha $ by $nB^{n-1}y^{n-1}$.
Performance
On each iteration, the most time-consuming task is to select $\beta $. We know that there are $B$ possible values, so we can find $\beta $ using $O(\log(B))$ comparisons. Each comparison will require evaluating $(By+\beta )^{n}-B^{n}y^{n}$. In the kth iteration, $y$ has $k$ digits, and the polynomial can be evaluated with $2n-4$ multiplications of up to $k(n-1)$ digits and $n-2$ additions of up to $k(n-1)$ digits, once we know the powers of $y$ and $\beta $ up through $n-1$ for $y$ and $n$ for $\beta $. $\beta $ has a restricted range, so we can get the powers of $\beta $ in constant time. We can get the powers of $y$ with $n-2$ multiplications of up to $k(n-1)$ digits. Assuming $n$-digit multiplication takes time $O(n^{2})$ and addition takes time $O(n)$, we take time $O(k^{2}n^{2})$ for each comparison, or time $O(k^{2}n^{2}\log(B))$ to pick $\beta $. The remainder of the algorithm is addition and subtraction that takes time $O(k)$, so each iteration takes $O(k^{2}n^{2}\log(B))$. For all $k$ digits, we need time $O(k^{3}n^{2}\log(B))$.
The only internal storage needed is $r$, which is $O(k)$ digits on the kth iteration. That this algorithm does not have bounded memory usage puts an upper bound on the number of digits which can be computed mentally, unlike the more elementary algorithms of arithmetic. Unfortunately, any bounded memory state machine with periodic inputs can only produce periodic outputs, so there are no such algorithms which can compute irrational numbers from rational ones, and thus no bounded memory root extraction algorithms.
Note that increasing the base increases the time needed to pick $\beta $ by a factor of $O(\log(B))$, but decreases the number of digits needed to achieve a given precision by the same factor, and since the algorithm is cubic time in the number of digits, increasing the base gives an overall speedup of $O(\log ^{2}(B))$. When the base is larger than the radicand, the algorithm degenerates to binary search, so it follows that this algorithm is not useful for computing roots with a computer, as it is always outperformed by much simpler binary search, and has the same memory complexity.
Examples
Square root of 2 in binary
1. 0 1 1 0 1
------------------
_ / 10.00 00 00 00 00 1
\/ 1 + 1
----- ----
1 00 100
0 + 0
-------- -----
1 00 00 1001
10 01 + 1
----------- ------
1 11 00 10101
1 01 01 + 1
---------- -------
1 11 00 101100
0 + 0
---------- --------
1 11 00 00 1011001
1 01 10 01 1
----------
1 01 11 remainder
Square root of 3
1. 7 3 2 0 5
----------------------
_ / 3.00 00 00 00 00
\/ 1 = 20×0×1+1^2
-
2 00
1 89 = 20×1×7+7^2 (27 x 7)
----
11 00
10 29 = 20×17×3+3^2 (343 x 3)
-----
71 00
69 24 = 20×173×2+2^2 (3462 x 2)
-----
1 76 00
0 = 20×1732×0+0^2 (34640 x 0)
-------
1 76 00 00
1 73 20 25 = 20×17320×5+5^2 (346405 x 5)
----------
2 79 75
Cube root of 5
1. 7 0 9 9 7
----------------------
_ 3/ 5. 000 000 000 000 000
\/ 1 = 300×(0^2)×1+30×0×(1^2)+1^3
-
4 000
3 913 = 300×(1^2)×7+30×1×(7^2)+7^3
-----
87 000
0 = 300×(17^2)×0+30×17×(0^2)+0^3
-------
87 000 000
78 443 829 = 300×(170^2)×9+30×170×(9^2)+9^3
----------
8 556 171 000
7 889 992 299 = 300×(1709^2)×9+30×1709×(9^2)+9^3
-------------
666 178 701 000
614 014 317 973 = 300×(17099^2)×7+30×17099×(7^2)+7^3
---------------
52 164 383 027
Fourth root of 7
1. 6 2 6 5 7
---------------------------
_ 4/ 7.0000 0000 0000 0000 0000
\/ 1 = 4000×(0^3)×1+600×(0^2)×(1^2)+40×0×(1^3)+1^4
-
6 0000
5 5536 = 4000×(1^3)×6+600×(1^2)×(6^2)+40×1×(6^3)+6^4
------
4464 0000
3338 7536 = 4000×(16^3)×2+600×(16^2)×(2^2)+40×16×(2^3)+2^4
---------
1125 2464 0000
1026 0494 3376 = 4000×(162^3)×6+600×(162^2)×(6^2)+40×162×(6^3)+6^4
--------------
99 1969 6624 0000
86 0185 1379 0625 = 4000×(1626^3)×5+600×(1626^2)×(5^2)+
----------------- 40×1626×(5^3)+5^4
13 1784 5244 9375 0000
12 0489 2414 6927 3201 = 4000×(16265^3)×7+600×(16265^2)×(7^2)+
---------------------- 40×16265×(7^3)+7^4
1 1295 2830 2447 6799
See also
• Methods of computing square roots
• nth root algorithm
External links
• Why the square root algorithm works "Home School Math". Also related pages giving examples of the long-division-like pencil and paper method for square roots.
• Reflections on The Square Root of Two "Medium". With an example of a C++ implementation.
Peng Shige
Peng Shige (Chinese: 彭实戈, born December 8, 1947 in Binzhou, Shandong) is a Chinese mathematician noted for his contributions in stochastic analysis and mathematical finance.
Peng Shige
彭实戈
Born: December 8, 1947, Binzhou, Shandong, China
Alma mater: Shandong University, Paris Dauphine University, University of Provence, Fudan University
Known for: BSDE, mathematical finance
Scientific career
Fields: Mathematics, mathematical finance
Institutions: Shandong University, Fudan University, Chinese Academy of Sciences
Chinese name
Traditional Chinese: 彭實戈
Simplified Chinese: 彭实戈
Hanyu Pinyin (Standard Mandarin): Péng Shígē
Biography
Peng Shige was born in Binzhou and raised in Shandong; his parents' hometown is Haifeng County in south-eastern Guangdong. He is a grandnephew of the famous revolutionary Peng Pai, and his grandfather (Peng Pai's brother) is also recognized as a "revolutionary martyr" by the nation.[1][2][3] He was sent to the countryside to work with farmers as an "educated youth" from 1968 to 1971, studied in the Department of Physics of Shandong University from 1971 to 1974, and went to work at the university's Institute of Mathematics in 1978. In 1983 he took an opportunity to enter Paris Dauphine University in France under the supervision of Alain Bensoussan, who was a student of Jacques-Louis Lions. He obtained PhDs from Paris Dauphine University in 1985[4] and from the University of Provence in 1986. He then returned to China and did postdoctoral research at Fudan University before becoming a professor at Shandong University in 1990. In 1992 he was awarded the Habilitation à Diriger des Recherches by the University of Provence, and in 1999 he was promoted to Distinguished Professor of the Ministry of Education of China (Cheung Kong Scholarship Programme).[2][3]
Academic contributions
Peng generalized the stochastic maximum principle in stochastic optimal control. In a paper published in 1990 with Étienne Pardoux, he founded the general theory (including nonlinear expectation) of backward stochastic differential equations (BSDEs), though linear BSDEs had been introduced by Jean-Michel Bismut in 1973.[5] Soon afterwards, Feynman–Kac-type connections between BSDEs and certain kinds of elliptic and parabolic partial differential equations (PDEs), such as the Hamilton–Jacobi–Bellman equation, were obtained, in which the solutions of these PDEs can be interpreted in the classical or viscosity sense. As a particular case, the solution of the Black–Scholes equation can be represented as the solution of a simple linear BSDE, which can be regarded as a starting point for the applications of BSDEs in mathematical finance. A type of nonlinear expectation, called the g-expectation, was also derived from the theory of BSDEs. General theories of nonlinear expectations were developed later, with various applications in utility theory and the theory of dynamic risk measures.
Honours
Peng was elected an academician of the Chinese Academy of Sciences in 2005. As one of the invited speakers, he gave a one-hour plenary lecture[6] at the International Congress of Mathematicians at Hyderabad, India, on August 24, 2010.[7][8][9][10] He was appointed a "Global Scholar" for the academic years 2011–2014 by Princeton University, hosted by the university's departments of mathematics, operations research and financial engineering, and the Program in Applied and Computational Mathematics, as he "is a global leader in the field of probability theory and financial mathematics."[11][12][13] In March 2015, as one of six or seven nominees, Peng was nominated for the Abel Prize by the Norwegian mathematician Bernt Øksendal.[5] In September 2020, he was awarded the Future Science Prize in mathematics and computer science.[14]
References
1. Gu, Shengnan (谷胜男) (October 16, 2012). "彭实戈:数学家的荣誉与责任 (Peng Shige: The honors and responsibilities of mathematicians)". People.com (in Chinese). Beijing, China: People's Daily. Retrieved 12 February 2019.
2. 黄强 (Huang Qiang) (October 7, 2001). "眷恋——记数学家彭实戈 (Love - Reporting mathematician Peng Shige)". www.cas.cn (in Chinese). Beijing, China: Chinese Academy of Sciences. Retrieved 14 February 2019.
3. 张兴华 (Zhang Xinghua) (January 15, 2009). "彭实戈:中国金融数学第一人 (Peng Shige: The number one in China's mathematical finance)". paper.jyb.cn (in Chinese). Beijing, China: 中国教育报 China Education Daily. Retrieved 14 February 2019.
4. Shige Peng at the Mathematics Genealogy Project.
5. Øksendal, Bernt (March 18, 2015). "Bernt Øksendal foreslår Etienne Pardoux og Shige Peng". www.mn.uio.no (in Norwegian and English). Blindern, Norway: University of Oslo. Retrieved 18 February 2019.
6. Official web page of The International Congress of Mathematicians (ICM 2010): PLENARY SPEAKERS/Invited Speakers Archived 2011-07-17 at the Wayback Machine
7. "Chinese Mathematician to Deliver Report at ICM". Chinese Academy of Sciences. Archived from the original on 2012-03-25. Retrieved 2009-05-29.
8. "Professor Peng gave a lecture at Xiamen University" (in Chinese). School of Mathematical Sciences Xiamen University. 2009-04-24. Archived from the original on 2011-07-07. Retrieved 2009-04-30.
9. "News at Shandong University" (in Chinese). Shandong University. 2009-04-28. Archived from the original on 2011-07-07. Retrieved 2009-04-30.
10. "More news from other websites". 2009-04-27. Archived from the original on 2011-07-07. Retrieved 2009-04-30.
11. Eric Quiñones (September 22, 2011). "Four new Global Scholars set to visit campus". Princeton, NJ 08544, USA. News at Princeton.
12. Council for International Teaching and Research, Princeton University. "2011-12 Princeton Global Scholar Shige Peng". Princeton, NJ 08540, USA: Princeton University. "Archived copy" (PDF). Archived from the original (PDF) on 2012-04-25. Retrieved 2012-02-12.
13. Wang, Qian (王倩) (December 9, 2014). "美国诺贝尔经济学奖得主在山东大学谈金融创新 (American Nobel Laureate in Economics talks about financial innovation at Shandong University)" (in Chinese). Beijing, China: China Daily. Retrieved 12 February 2019.
14. Future Science Prize Committee (September 6, 2020). "Announcement of 2020 Future Science Prize Winners: Tingdong Zhang, Zhenyi Wang, Ke Lu, Shige Peng". www.futureprize.org.
External links
• Peng Shige at the Mathematics Genealogy Project
• Peng's homepage at Shandong University
• Peng's research group at Shandong University
• School of Mathematics, Shandong University
• School of Mathematics, Shandong University (Chinese)
• Financial Engineering: Shige Peng, the pioneer financial mathematician talks to us about some of the latest developments in predicting the way money works. An interview during The 8th International Congress on Industrial and Applied Mathematics 2015.
Authority control
International
• ISNI
• VIAF
National
• Germany
Academics
• DBLP
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
Shigefumi Mori
Shigefumi Mori (森 重文, Mori Shigefumi, born February 23, 1951) is a Japanese mathematician, known for his work in algebraic geometry, particularly in relation to the classification of three-folds.
Shigefumi Mori
Born: February 23, 1951, Nagoya, Japan
Nationality: Japanese
Alma mater: Kyoto University
Known for: Algebraic geometry, minimal model program
Awards: Fields Medal (1990), Cole Prize (1990)
Scientific career
Fields: Mathematics
Institutions: Nagoya University, Kyoto University
Thesis: The Endomorphism Rings of Some Abelian Varieties (1978)
Doctoral advisor: Masayoshi Nagata
Career
Mori completed his Ph.D., titled "The Endomorphism Rings of Some Abelian Varieties", under Masayoshi Nagata at Kyoto University in 1978.[1] He was a visiting professor at Harvard University during 1977–1980, at the Institute for Advanced Study in 1981–82, at Columbia University in 1985–87, and at the University of Utah for periods during 1987–89 and again during 1991–92. He has been a professor at Kyoto University since 1990.
Work
He generalized the classical approach to the classification of algebraic surfaces to the classification of algebraic three-folds. The classical approach used the concept of minimal models of algebraic surfaces. He found that the concept of minimal models can be applied to three-folds as well if we allow some singularities on them. The extension of Mori's results to dimensions higher than three is called the minimal model program and is an active area of research in algebraic geometry.
He was elected president of the International Mathematical Union in 2014, becoming the first head of the group from East Asia.[2]
Awards
He was awarded the Fields Medal in 1990 at the International Congress of Mathematicians.
In 2021, he received the Order of Culture.[3]
Major publications
• Mori, Shigefumi (1979). "Projective Manifolds with Ample Tangent Bundles". Annals of Mathematics. 110 (3): 593–606. doi:10.2307/1971241. JSTOR 1971241.
• Mori, Shigefumi; Mukai, Shigeru (1981). "Classification of Fano 3-folds with B2≥2". Manuscripta Mathematica. 36 (2): 147–162. doi:10.1007/BF01170131. S2CID 189831516.
• Mori, Shigefumi; Mukai, Shigeru (2003). "Classification of Fano 3-folds with B2≥2. (Erratum)". Manuscripta Mathematica. 110 (3): 407. doi:10.1007/s00229-002-0336-2. S2CID 121266346.
• Mori, Shigefumi (1982). "Threefolds Whose Canonical Bundles Are Not Numerically Effective". Annals of Mathematics. 116 (1): 133–176. doi:10.2307/2007050. JSTOR 2007050.
• Mori, Shigefumi (1988). "Flip theorem and the existence of minimal models for 3-folds". Journal of the American Mathematical Society. 1 (1): 117–253. doi:10.1090/S0894-0347-1988-0924704-X. JSTOR 1990969.
• Kollár, János; Miyaoka, Yoichi; Mori, Shigefumi. Rationally connected varieties. J. Algebraic Geom. 1 (1992), no. 3, 429–448.
• Kollár, János; Miyaoka, Yoichi; Mori, Shigefumi (1992). "Rational connectedness and boundedness of Fano manifolds". Journal of Differential Geometry. 36 (3). doi:10.4310/jdg/1214453188. S2CID 118102421.
• Kollár, János; Mori, Shigefumi (1992). "Classification of three-dimensional flips". Journal of the American Mathematical Society. 5 (3): 533–703. doi:10.1090/S0894-0347-1992-1149195-9. JSTOR 2152704.
• Keel, Sean; Mori, Shigefumi (1997). "Quotients by Groupoids". Annals of Mathematics. 145 (1): 193–213. doi:10.2307/2951828. JSTOR 2951828. S2CID 17830187.
• Kollár, János; Mori, Shigefumi. Birational geometry of algebraic varieties. With the collaboration of C. H. Clemens and A. Corti. Translated from the 1998 Japanese original. Cambridge Tracts in Mathematics, 134. Cambridge University Press, Cambridge, 1998. viii+254 pp. ISBN 0-521-63277-3
See also
• Keel–Mori theorem
References
1. Shigefumi Mori at the Mathematics Genealogy Project
2. "Kyoto University professor elected head of International Mathematical Union". The Japan Times Online. 2014-08-12.
3. "長嶋茂雄さんら9人文化勲章 功労者に加山雄三さんら". Jiji.com. Retrieved October 26, 2021.
• O'Connor, John J.; Robertson, Edmund F., "Shigefumi Mori", MacTutor History of Mathematics Archive, University of St Andrews
• Heisuke Hironaka, The work of Shigefumi Mori. Fields Medallists Lectures, Michael F. Atiyah (Editor), Daniel Iagolnitzer (Editor); World Scientific Publishing, 2007. ISBN 981-02-3117-2
External links
Fields Medalists
• 1936 Ahlfors
• Douglas
• 1950 Schwartz
• Selberg
• 1954 Kodaira
• Serre
• 1958 Roth
• Thom
• 1962 Hörmander
• Milnor
• 1966 Atiyah
• Cohen
• Grothendieck
• Smale
• 1970 Baker
• Hironaka
• Novikov
• Thompson
• 1974 Bombieri
• Mumford
• 1978 Deligne
• Fefferman
• Margulis
• Quillen
• 1982 Connes
• Thurston
• Yau
• 1986 Donaldson
• Faltings
• Freedman
• 1990 Drinfeld
• Jones
• Mori
• Witten
• 1994 Bourgain
• Lions
• Yoccoz
• Zelmanov
• 1998 Borcherds
• Gowers
• Kontsevich
• McMullen
• 2002 Lafforgue
• Voevodsky
• 2006 Okounkov
• Perelman
• Tao
• Werner
• 2010 Lindenstrauss
• Ngô
• Smirnov
• Villani
• 2014 Avila
• Bhargava
• Hairer
• Mirzakhani
• 2018 Birkar
• Figalli
• Scholze
• Venkatesh
• 2022 Duminil-Copin
• Huh
• Maynard
• Viazovska
• Category
• Mathematics portal
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Germany
• Israel
• United States
• Japan
• Netherlands
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
Shihoko Ishii
Shihoko Ishii (Japanese: 石井志保子, born 1950)[1] is a Japanese mathematician and professor at the University of Tokyo. Her research area is algebraic geometry.[2]
Shihoko Ishii
Born: 1950
Nationality: Japan
Alma mater: Tokyo Metropolitan University
Known for: Singularity theory
Scientific career
Fields: Mathematics
Institutions: University of Tokyo
Education
Ishii received her bachelor's degree from Tokyo Women's Christian University in 1973 and her master's degree from Waseda University in 1975. She earned her PhD from Tokyo Metropolitan University in 1983.[3]
Research
Ishii's research focuses on singularity theory. She studies arc spaces, a mathematical concept related to jets: arc spaces are varieties encapsulating information about curves on another variety.[3]
Awards and honours
Ishii received the Saruhashi Prize for accomplishments by a Japanese woman researcher in the natural sciences in 1995.[4] As a postdoc, Ishii was inspired by reading a profile of Fumiko Yonezawa, a physicist and former winner of the Saruhashi prize.[3]
Ishii received the Algebra Prize from the Mathematical Society of Japan in 2011.[5]
References
1. Birth year from ISNI authority control file, retrieved 2018-11-28.
2. "Ishii, Shihoko". MathSciNet. Mathematical Reviews. Retrieved 8 April 2017.
3. "Shihoko Ishii". European Women in Mathematics. Retrieved 8 April 2017.
4. Sumiko Otsubo (2008). "Women Scientists and Gender Ideology". In Robertson, Jennifer (ed.). A Companion to the Anthropology of Japan. p. 474.
5. "Prizes by Research Sections". Mathematical Society of Japan. Retrieved 8 April 2017.
Authority control
International
• ISNI
• VIAF
National
• Israel
• United States
• Japan
• Czech Republic
Academics
• CiNii
• MathSciNet
• zbMATH
Other
• IdRef
Shili Lin
Shili Lin is a statistician who studies the applications of statistics to genomic data. She is a professor of statistics at Ohio State University,[1] and is president-elect of the Caucus for Women in Statistics.[1][2]
Shili Lin
Alma mater: University of Washington
Scientific career
Institutions: Ohio State University
Thesis: Markov Chain Monte Carlo Estimates Of Probabilities On Complex Structures (1993)
Doctoral advisor: Elizabeth A. Thompson
Lin earned her Ph.D. in 1993 from the University of Washington. Her dissertation, supervised by Elizabeth A. Thompson, was Markov Chain Monte Carlo Estimates Of Probabilities On Complex Structures.[3] After working as a Neyman Visiting Assistant Professor at the University of California, Berkeley, she joined the Ohio State faculty in 1995.[1]
She has been a fellow of the American Statistical Association since 2004, and a fellow of the American Association for the Advancement of Science since 2009.[1]
References
1. "Shili Lin, Professor of Statistics", People, Ohio State University Department of Statistics, retrieved 2017-10-23
2. Governing Council, Caucus for Women in Statistics, March 29, 2016, retrieved 2017-10-23
3. Shili Lin at the Mathematics Genealogy Project
External links
• Home page
Authority control
International
• ISNI
• VIAF
National
• Israel
• United States
• Czech Republic
Academics
• MathSciNet
• Mathematics Genealogy Project
Other
• IdRef
Shimizu L-function
In mathematics, the Shimizu L-function, introduced by Hideo Shimizu (1963), is a Dirichlet series associated to a totally real algebraic number field. Michael Francis Atiyah, H. Donnelly, and I. M. Singer (1983) defined the signature defect of the boundary of a manifold as the eta invariant, the value at s=0 of their eta function, and used this to show that Hirzebruch's signature defect of a cusp of a Hilbert modular surface can be expressed in terms of the value at s=0 or 1 of a Shimizu L-function.
Definition
Suppose that K is a totally real algebraic number field, M is a lattice in the field, and V is a subgroup of maximal rank of the group of totally positive units preserving the lattice. The Shimizu L-series is given by
$L(M,V,s)=\sum _{\mu \in \{M-0\}/V}{\frac {\operatorname {sign} N(\mu )}{|N(\mu )|^{s}}}$
References
• Atiyah, Michael Francis; Donnelly, H.; Singer, I. M. (1982), "Geometry and analysis of Shimizu L-functions", Proceedings of the National Academy of Sciences of the United States of America, 79 (18): 5751, Bibcode:1982PNAS...79.5751A, doi:10.1073/pnas.79.18.5751, ISSN 0027-8424, JSTOR 12685, MR 0674920, PMC 346984, PMID 16593231
• Atiyah, Michael Francis; Donnelly, H.; Singer, I. M. (1983), "Eta invariants, signature defects of cusps, and values of L-functions", Annals of Mathematics, Second Series, 118 (1): 131–177, doi:10.2307/2006957, ISSN 0003-486X, JSTOR 2006957, MR 0707164
• Shimizu, Hideo (1963), "On discontinuous groups operating on the product of the upper half planes", Annals of Mathematics, Second Series, 77 (1): 33–71, doi:10.2307/1970201, ISSN 0003-486X, JSTOR 1970201, MR 0145106
Shimura correspondence
In number theory, the Shimura correspondence is a correspondence between modular forms F of half integral weight k+1/2, and modular forms f of even weight 2k, discovered by Goro Shimura (1973). It has the property that the eigenvalue of a Hecke operator Tn2 on F is equal to the eigenvalue of Tn on f.
Let $f$ be a holomorphic cusp form with weight $(2k+1)/2$ and character $\chi $. For any prime number p, let
$\sum _{n=1}^{\infty }\Lambda (n)n^{-s}=\prod _{p}(1-\omega _{p}p^{-s}+(\chi _{p})^{2}p^{2k-1-2s})^{-1}\ ,$
where the $\omega _{p}$ are the eigenvalues of the Hecke operators $T(p^{2})$ determined by p.
Using the functional equation of L-function, Shimura showed that
$F(z)=\sum _{n=1}^{\infty }\Lambda (n)q^{n}$
is a holomorphic modular form of weight 2k and character $\chi ^{2}$.
Shimura's proof uses the Rankin-Selberg convolution of $f(z)$ with the theta series $\theta _{\psi }(z)=\sum _{n=-\infty }^{\infty }\psi (n)n^{\nu }e^{2i\pi n^{2}z}\ ({\scriptstyle \nu ={\frac {1-\psi (-1)}{2}}})$ for various Dirichlet characters $\psi $ then applies Weil's converse theorem.
See also
• Theta correspondence
References
• Bump, D. (2001) [1994], "Shimura correspondence", Encyclopedia of Mathematics, EMS Press
• Shimura, Goro (1973), "On modular forms of half integral weight", Annals of Mathematics, Second Series, 97: 440–481, doi:10.2307/1970831, ISSN 0003-486X, JSTOR 1970831, MR 0332663
Shimura variety
In number theory, a Shimura variety is a higher-dimensional analogue of a modular curve that arises as a quotient variety of a Hermitian symmetric space by a congruence subgroup of a reductive algebraic group defined over Q. Shimura varieties are not algebraic varieties but are families of algebraic varieties. Shimura curves are the one-dimensional Shimura varieties. Hilbert modular surfaces and Siegel modular varieties are among the best known classes of Shimura varieties.
Special instances of Shimura varieties were originally introduced by Goro Shimura in the course of his generalization of the complex multiplication theory. Shimura showed that while initially defined analytically, they are arithmetic objects, in the sense that they admit models defined over a number field, the reflex field of the Shimura variety. In the 1970s, Pierre Deligne created an axiomatic framework for the work of Shimura. In 1979, Robert Langlands remarked that Shimura varieties form a natural realm of examples for which equivalence between motivic and automorphic L-functions postulated in the Langlands program can be tested. Automorphic forms realized in the cohomology of a Shimura variety are more amenable to study than general automorphic forms; in particular, there is a construction attaching Galois representations to them.[1]
Definition
Shimura datum
Let S = ResC/R Gm be the Weil restriction of the multiplicative group from complex numbers to real numbers. It is a real algebraic group, whose group of R-points, S(R), is C* and group of C-points is C*×C*. A Shimura datum is a pair (G, X) consisting of a (connected) reductive algebraic group G defined over the field Q of rational numbers and a G(R)-conjugacy class X of homomorphisms h: S → GR satisfying the following axioms:
• For any h in X, only the weights (0,0), (1,−1), (−1,1) may occur in ${\mathfrak {g}}\otimes \mathbb {C} $, i.e. the complexified Lie algebra of G decomposes into a direct sum
${\mathfrak {g}}\otimes \mathbb {C} ={\mathfrak {k}}\oplus {\mathfrak {p}}^{+}\oplus {\mathfrak {p}}^{-},$
where for any z ∈ S, h(z) acts trivially on the first summand and via $z/{\bar {z}}$ (respectively, ${\bar {z}}/z$) on the second (respectively, third) summand.
• The adjoint action of h(i) induces a Cartan involution on the adjoint group of GR.
• The adjoint group of GR does not admit a factor H defined over Q such that the projection of h on H is trivial.
It follows from these axioms that X has a unique structure of a complex manifold (possibly, disconnected) such that for every representation ρ: GR → GL(V), the family (V, ρ ⋅ h) is a holomorphic family of Hodge structures; moreover, it forms a variation of Hodge structure, and X is a finite disjoint union of hermitian symmetric domains.
Shimura variety
Let Aƒ be the ring of finite adeles of Q. For every sufficiently small compact open subgroup K of G(Aƒ), the double coset space
$\operatorname {Sh} _{K}(G,X)=G(\mathbb {Q} )\backslash X\times G(\mathbb {A} _{f})/K$
is a finite disjoint union of locally symmetric varieties of the form $\Gamma _{i}\backslash X^{+}$, where the plus superscript indicates a connected component. The varieties ShK(G,X) are complex algebraic varieties and they form an inverse system over all sufficiently small compact open subgroups K. This inverse system
$(\operatorname {Sh} _{K}(G,X))_{K}$
admits a natural right action of G(Aƒ). It is called the Shimura variety associated with the Shimura datum (G, X) and denoted Sh(G, X).
History
For special types of hermitian symmetric domains and congruence subgroups Γ, algebraic varieties of the form Γ \ X = ShK(G,X) and their compactifications were introduced in a series of papers of Goro Shimura during the 1960s. Shimura's approach, later presented in his monograph, was largely phenomenological, pursuing the widest generalizations of the reciprocity law formulation of complex multiplication theory. In retrospect, the name "Shimura variety" was introduced by Deligne, who proceeded to isolate the abstract features that played a role in Shimura's theory. In Deligne's formulation, Shimura varieties are parameter spaces of certain types of Hodge structures. Thus they form a natural higher-dimensional generalization of modular curves viewed as moduli spaces of elliptic curves with level structure. In many cases, the moduli problems to which Shimura varieties are solutions have been likewise identified.
Examples
Let F be a totally real number field and D a quaternion division algebra over F. The multiplicative group D× gives rise to a canonical Shimura variety. Its dimension d is the number of infinite places over which D splits. In particular, if d = 1 (for example, if F = Q and D ⊗ R ≅ M2(R)), fixing a sufficiently small arithmetic subgroup of D×, one gets a Shimura curve, and curves arising from this construction are already compact (i.e. projective).
Some examples of Shimura curves with explicitly known equations are given by the Hurwitz curves of low genus:
• Klein quartic (genus 3)
• Macbeath surface (genus 7)
• First Hurwitz triplet (genus 14)
and by the Fermat curve of degree 7.[2]
Other examples of Shimura varieties include Picard modular surfaces and Hilbert modular surfaces, also known as Hilbert–Blumenthal varieties.
Canonical models and special points
Each Shimura variety can be defined over a canonical number field E called the reflex field. This important result due to Shimura shows that Shimura varieties, which a priori are only complex manifolds, have an algebraic field of definition and, therefore, arithmetical significance. It forms the starting point in his formulation of the reciprocity law, where an important role is played by certain arithmetically defined special points.
The qualitative nature of the Zariski closure of sets of special points on a Shimura variety is described by the André–Oort conjecture. Conditional results have been obtained on this conjecture, assuming a generalized Riemann hypothesis.[3]
Role in the Langlands program
Shimura varieties play an outstanding role in the Langlands program. The prototypical theorem, the Eichler–Shimura congruence relation, implies that the Hasse–Weil zeta function of a modular curve is a product of L-functions associated to explicitly determined modular forms of weight 2. Indeed, it was in the process of generalization of this theorem that Goro Shimura introduced his varieties and proved his reciprocity law. Zeta functions of Shimura varieties associated with the group GL2 over other number fields and its inner forms (i.e. multiplicative groups of quaternion algebras) were studied by Eichler, Shimura, Kuga, Sato, and Ihara. On the basis of their results, Robert Langlands made a prediction that the Hasse-Weil zeta function of any algebraic variety W defined over a number field would be a product of positive and negative powers of automorphic L-functions, i.e. it should arise from a collection of automorphic representations.[1] However philosophically natural it may be to expect such a description, statements of this type have only been proved when W is a Shimura variety.[4] In the words of Langlands:
To show that all L-functions associated to Shimura varieties – thus to any motive defined by a Shimura variety – can be expressed in terms of the automorphic L-functions of [his paper of 1970] is weaker, even very much weaker, than to show that all motivic L-functions are equal to such L-functions. Moreover, although the stronger statement is expected to be valid, there is, so far as I know, no very compelling reason to expect that all motivic L-functions will be attached to Shimura varieties.[5]
Notes
1. Langlands, Robert (1979). "Automorphic Representations, Shimura Varieties, and Motives. Ein Märchen" (PDF). In Borel, Armand; Casselman, William (eds.). Automorphic Forms, Representations, and L-Functions: Symposium in Pure Mathematics. Vol. XXXIII Part 1. Chelsea Publishing Company. pp. 205–246.
2. Elkies, section 4.4 (pp. 94–97) in (Levy 1999).
3. http://people.math.jussieu.fr/~klingler/papiers/KY12.pdf
4. Qualification: many examples are known, and the sense in which they all "come from" Shimura varieties is a somewhat abstract one.
5. Langlands, Robert (1979). "Automorphic Representations, Shimura Varieties, and Motives. Ein Märchen" (PDF). In Borel, Armand; Casselman, William (eds.). Automorphic Forms, Representations, and L-Functions: Symposium in Pure Mathematics. Vol. XXXIII Part 1. Chelsea Publishing Company. p. 208.
References
• Alsina, Montserrat; Bayer, Pilar (2004), Quaternion orders, quadratic forms, and Shimura curves, CRM Monograph Series, vol. 22, Providence, RI: American Mathematical Society, ISBN 0-8218-3359-6, Zbl 1073.11040
• James Arthur, David Ellwood, and Robert Kottwitz (ed) Harmonic Analysis, the Trace Formula and Shimura Varieties, Clay Mathematics Proceedings, vol 4, AMS, 2005 ISBN 978-0-8218-3844-0
• Pierre Deligne, Travaux de Shimura. Séminaire Bourbaki, 23ème année (1970/71), Exp. No. 389, pp. 123–165. Lecture Notes in Math., Vol. 244, Springer, Berlin, 1971. MR0498581, Numdam
• Pierre Deligne, Variétés de Shimura: interprétation modulaire, et techniques de construction de modèles canoniques, in Automorphic forms, representations and L-functions, Proc. Sympos. Pure Math., XXXIII (Corvallis, OR, 1977), Part 2, pp. 247–289, Amer. Math. Soc., Providence, R.I., 1979. MR0546620
• Pierre Deligne, James S. Milne, Arthur Ogus, Kuang-yen Shi, Hodge cycles, motives, and Shimura varieties. Lecture Notes in Mathematics, 900. Springer-Verlag, Berlin-New York, 1982. ii+414 pp. ISBN 3-540-11174-3 MR0654325
• Levy, Silvio, ed. (1999), The eightfold way, Mathematical Sciences Research Institute Publications, vol. 35, Cambridge University Press, ISBN 978-0-521-66066-2, MR 1722410, Zbl 0941.00006, paperback edition by Cambridge University Press, 2001, ISBN 978-0-521-00419-0. Read This: The Eightfold Way, reviewed by Ruth Michler.
• Milne, J.S. (2001) [1994], "Shimura variety", Encyclopedia of Mathematics, EMS Press
• J. Milne, Shimura varieties and motives, in U. Jannsen, S. Kleiman. J.-P. Serre (ed.), Motives, Proc. Symp. Pure Math, 55:2, Amer. Math. Soc. (1994), pp. 447–523
• J. S. Milne, Introduction to Shimura varieties, in Arthur, Ellwood, and Kottwitz (2005)
• Harry Reimann, The semi-simple zeta function of quaternionic Shimura varieties, Lecture Notes in Mathematics, 1657, Springer, 1997
• Goro Shimura, The Collected Works of Goro Shimura (2003), vol 1–5
• Goro Shimura Introduction to Arithmetic Theory of Automorphic Functions
Shimura's reciprocity law
In mathematics, Shimura's reciprocity law, introduced by Shimura (1971), describes the action of ideles of imaginary quadratic fields on the values of modular functions at singular moduli. It forms a part of the Kronecker Jugendtraum, explicit class field theory for such fields. There are also higher-dimensional generalizations.
References
• Shimura, Goro (1971), Introduction to the arithmetic theory of automorphic functions, Publications of the Mathematical Society of Japan, vol. 11, Tokyo: Iwanami Shoten, Zbl 0221.10029
Shimura subgroup
In mathematics, the Shimura subgroup Σ(N) is a subgroup of the Jacobian of the modular curve X0(N) of level N, given by the kernel of the natural map to the Jacobian of X1(N). It is named after Goro Shimura. There is a similar subgroup Σ(N,D) associated to Shimura curves of quaternion algebras.
References
• Ling, San; Oesterlé, Joseph (1991), "The Shimura subgroup of J0(N)", Astérisque (196): 171–203, ISSN 0303-1179, MR 1141458
• Mazur, Barry (1977), "Modular curves and the Eisenstein ideal", Publications Mathématiques de l'IHÉS (47): 33–186, ISSN 1618-1913, MR 0488287
• Ribet, Kenneth A. (1984), "Congruence relations between modular forms", Proceedings of the International Congress of Mathematicians, Vol. 1 (Warsaw, 1983), Warszawa: PWN, pp. 503–514, MR 0804706
• Ribet, Kenneth A. (1988), "On the component groups and the Shimura subgroup of J0(N)", Séminaire de Théorie des Nombres, 1987–1988 (Talence, 1987–1988), Talence: Univ. Bordeaux I, pp. Exp. No. 6, 10, MR 0993107
Stephen Shing-Toung Yau
Stephen Shing-Toung Yau (Chinese: 丘成棟; pinyin: Qiū Chéngdòng; born 1952) is a Chinese-American mathematician. He is a Distinguished Professor Emeritus at the University of Illinois at Chicago, and currently teaches at Tsinghua University. He is a Fellow of the Institute of Electrical and Electronics Engineers and the American Mathematical Society.
Not to be confused with his brother, Shing-Tung Yau.
Stephen Shing-Toung Yau
丘成棟
Born: 1952, Jiaoling County, China
Nationality: American
Alma mater: Chinese University of Hong Kong (BSc), State University of New York at Stony Brook (MA, PhD)
Occupation: Mathematician
Years active: 1976–present
Employer: Tsinghua University
Known for: Discovering the "Yau algebra" and "Yau number"; co-founding Journal of Algebraic Geometry and Communications in Information and Systems
Biography
Shing-Toung Yau was born in 1952 in British Hong Kong,[1] with his ancestral home in Jiaoling County, Guangdong, China. He is the younger brother of Fields Medalist Shing-Tung Yau.[2]
After graduating from the Chinese University of Hong Kong, he pursued mathematics at the State University of New York at Stony Brook, where he studied under Henry Laufer[1] and earned his M.A. in 1974 and Ph.D. in 1976.[3][4]
He was a member of the Institute for Advanced Study in Princeton from 1976 to 1977 and from 1981 to 1982, and was a Benjamin Peirce Assistant Professor at Harvard University from 1977 to 1980. He subsequently taught at the University of Illinois at Chicago for more than 30 years and was named a UIC Distinguished Professor in 2005.[4]
From 2002 to 2011, Yau also served as the Zi-Jiang Professor at East China Normal University in Shanghai, as well as Director of the university's Institute of Mathematics.[3] After retiring from UIC in 2012, he joined Tsinghua University as a full-time professor.[4]
Research
Among Yau's research interests are bioinformatics, complex algebraic geometry, singularities theory, and nonlinear filtering.[4] He published nearly 300 papers and established the "Yau algebra" and the "Yau number".[5] He served as Chairman of the IEEE International Conference on Control and Information and co-founded the Journal of Algebraic Geometry in 1991. He founded the journal Communications in Information and Systems in 2000, and has served as Editor-in-Chief since its inception.[4]
Awards
Yau was awarded the Sloan Research Fellowship in 1980 and the Guggenheim Fellowship in 2000. He was elected an IEEE Fellow in 2003 and a fellow of the American Mathematical Society in 2013.[4]
References
1. "学术报告". Harbin Engineering University (in Chinese). 2012-08-10. Archived from the original on 17 August 2019. Retrieved 2019-08-17.
2. "丘成桐院士关注家乡蕉岭仓海诗廊文化建设项目". Eastday (in Chinese). 2018-06-06. Archived from the original on 17 August 2019. Retrieved 2019-08-17.
3. "Stephen Shing-Toung Yau" (PDF). University of Illinois at Chicago. 2019-07-11. Retrieved 2019-08-16.{{cite web}}: CS1 maint: url-status (link)
4. "Stephen S. -T Yau". IEEE Xplore. Retrieved 2019-08-17.{{cite web}}: CS1 maint: url-status (link)
5. 瑛, 杨 (2012-08-13). "复几何、奇点理论及相关领域国际会议在清华举行". Tsinghua University (in Chinese). Archived from the original on 17 August 2019. Retrieved 2019-08-17.
Authority control
International
• FAST
• ISNI
• VIAF
National
• France
• BnF data
• Germany
• Israel
• United States
• Sweden
• Netherlands
Academics
• CiNii
• DBLP
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• zbMATH
Shintani's unit theorem
In mathematics, Shintani's unit theorem introduced by Shintani (1976, proposition 4) is a refinement of Dirichlet's unit theorem and states that a subgroup of finite index of the totally positive units of a number field has a fundamental domain given by a rational polyhedric cone in the Minkowski space of the field (Neukirch 1999, p. 507).
References
• Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
• Shintani, Takuro (1976), "On evaluation of zeta functions of totally real algebraic number fields at non-positive integers", Journal of the Faculty of Science. University of Tokyo. Section IA. Mathematics, 23 (2): 393–417, ISSN 0040-8980, MR 0427231, Zbl 0349.12007
• Shintani, Takuro (1981), "A remark on zeta functions of algebraic number fields", Automorphic forms, representation theory and arithmetic (Bombay, 1979), Tata Inst. Fund. Res. Studies in Math., vol. 10, Bombay: Tata Inst. Fundamental Res., pp. 255–260, ISBN 3-540-10697-9, MR 0633664
External links
• Mathematical pictures by Paul Gunnells
Shintani zeta function
In mathematics, a Shintani zeta function or Shintani L-function is a generalization of the Riemann zeta function. They were first studied by Takuro Shintani (1976). They include Hurwitz zeta functions and Barnes zeta functions.
For the Shintani zeta function of a vector space, see Prehomogeneous vector space.
Definition
Let $P(\mathbf {x} )$ be a polynomial in the variables $\mathbf {x} =(x_{1},\dots ,x_{r})$ with real coefficients such that $P(\mathbf {x} )$ is a product of linear polynomials with positive coefficients, that is, $P(\mathbf {x} )=P_{1}(\mathbf {x} )P_{2}(\mathbf {x} )\cdots P_{k}(\mathbf {x} )$, where
$P_{i}(\mathbf {x} )=a_{i1}x_{1}+a_{i2}x_{2}+\cdots +a_{ir}x_{r}+b_{i},$
where $a_{ij}>0$, $b_{i}>0$ and $k=\deg P$. The Shintani zeta function in the variable $s$ is given by (the meromorphic continuation of)
$\zeta (P;s)=\sum _{x_{1},\dots ,x_{r}=1}^{\infty }{\frac {1}{P(\mathbf {x} )^{s}}}.$
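Where this series converges absolutely (for sufficiently large real part of $s$), it can be explored numerically by truncation. The following Python sketch is only an illustration of the definition (the function name and cut-off are arbitrary choices made for this example); it checks the one-variable case $P(x)=x+1$, for which the series equals $\zeta (s)-1$:

from itertools import product

def shintani_zeta_truncated(P, s, r, cutoff=2000):
    # Partial sum of the defining series over x_1, ..., x_r in {1, ..., cutoff};
    # meaningful only where the full series converges absolutely.
    return sum(P(x) ** (-s) for x in product(range(1, cutoff + 1), repeat=r))

# One variable, P(x) = x + 1: the series is zeta(s) - 1.
print(shintani_zeta_truncated(lambda x: x[0] + 1, 3, r=1, cutoff=20000))   # ~0.2020569, i.e. zeta(3) - 1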
The multi-variable version
The definition of Shintani zeta function has a straightforward generalization to a zeta function in several variables $(s_{1},\dots ,s_{k})$ given by
$\sum _{x_{1},\dots ,x_{r}=1}^{\infty }{\frac {1}{P_{1}(\mathbf {x} )^{s_{1}}\cdots P_{k}(\mathbf {x} )^{s_{k}}}}.$
The special case when k = 1 is the Barnes zeta function.
Relation to Witten zeta functions
Just like Shintani zeta functions, Witten zeta functions are defined by polynomials which are products of linear forms with non-negative coefficients. Witten zeta functions are however not special cases of Shintani zeta functions because in Witten zeta functions the linear forms are allowed to have some coefficients equal to zero. For example, the polynomial $(x+1)(y+1)(x+y+2)/2$ defines the Witten zeta function of $SU(3)$ but the linear form $x+1$ has $y$-coefficient equal to zero.
References
• Hida, Haruzo (1993), Elementary theory of L-functions and Eisenstein series, London Mathematical Society Student Texts, vol. 26, Cambridge University Press, ISBN 978-0-521-43411-9, MR 1216135, Zbl 0942.11024
• Shintani, Takuro (1976), "On evaluation of zeta functions of totally real algebraic number fields at non-positive integers", Journal of the Faculty of Science. University of Tokyo. Section IA. Mathematics, 23 (2): 393–417, ISSN 0040-8980, MR 0427231, Zbl 0349.12007
Shioda modular surface
In mathematics, a Shioda modular surface is one of the elliptic surfaces studied by Shioda (1972).
References
• Barth, Wolf; Hulek, Klaus (1985), "Projective models of Shioda modular surfaces", Manuscripta Mathematica, Springer Berlin / Heidelberg, 50 (1): 73–132, doi:10.1007/BF01168828, ISSN 0025-2611, MR 0784140
• Kodaira, Kunihiko (1963), "On compact analytic surfaces. III", Annals of Mathematics, Second Series, 78: 1–40, doi:10.2307/1970500, ISSN 0003-486X, MR 0184257
• Shioda, Tetsuji (1972), "On elliptic modular surfaces", Journal of the Mathematical Society of Japan, 24: 20–59, doi:10.2969/jmsj/02410020, ISSN 0025-5645, MR 0429918
Anania Shirakatsi
Anania Shirakatsi (Armenian: Անանիա Շիրակացի, Anania Širakac’i, anglicized: Ananias of Shirak) was a 7th-century Armenian polymath and natural philosopher, author of extant works covering mathematics, astronomy, geography, chronology, and other fields. Little is known for certain of his life outside of his own writings, but he is considered the father of the exact and natural sciences in Armenia—the first Armenian mathematician, astronomer,[2][3] and cosmographer.[4]
Anania Shirakatsi
1963 statue of Anania Shirakatsi holding a globe at the entrance of the Matenadaran
Born: c. 610, Shirak, Ayrarat,[1] Sasanian Armenia
Died: c. 685 (aged around 75), Arminiya
Nationality: Armenian
Era: Early Middle Ages
School: Hellenizing School
Main interests: Mathematics, astronomy, geography, chronology
Influences: Ancient Greek philosophy, Yeghishe, David the Invincible
Influenced: All subsequent scientists in Armenia, especially Grigor Magistros and Hovhannes Imastaser
Seen as part of the Armenian Hellenizing School and as the last lay scholar in Christian Armenia until the 11th century,[5][6] Anania was educated primarily by Tychicus in Trebizond. He composed science textbooks and the first known geographic work in classical Armenian (Ashkharhatsuyts),[7] which provides detailed information about Greater Armenia, Persia and the Caucasus (Georgia and Caucasian Albania).
In mathematics, his accomplishments include the earliest known table of results of the four basic operations,[3][8] the earliest known collection of recreational math puzzles and problems,[9] and the earliest book of math problems in Armenian.[10] He also devised a system of mathematical notation based on the Armenian alphabet, although he was the only writer known to have used it.
Name
His name is usually anglicized as Ananias of Shirak (Širak).[11] Anania is the Armenian variant of the biblical name Ananias, itself the Greek version of the Hebrew Hananiah.[12] The second part of his name denotes his place of origin, the region of Shirak (Širak),[2] though it may have become a sort of surname.[13] In some manuscripts, he is called Shirakuni (Շիրակունի) and Shirakavantsi (Շիրակաւանցի).[14]
Life
Background
Anania[a] Shirakatsi lived in the 7th century.[15] The dates of his birth and death have not been definitively established. Robert H. Hewsen noted in 1968 that Anania is widely believed to have been born between 595 and 600;[5] a quarter-century later he settled on c. 610 as a birthdate and 685 as the year he died.[16] Agop Jack Hacikyan et al. place his birth in the early 600s but agree on 685.[17] James R. Russell, Edward G. Mathews, and Theo van Lint also concur with 610–685,[18][4][3] while Greenwood suggests c. 600–670.[10] Vardanyan places his death in the early 690s.[19]
Anania is the only classical Armenian scholar to have written an autobiography.[5] It is a brief text, characterized as "somewhat self-congratulatory"[6] and "more a statement of academic pedigree" than autobiography.[20] It was probably written as the preface to one of his scholarly works, possibly the K'nnikon.[4] He was the son of Hovhannes/Yovhannes and was born in the village of Anania/Aneank' (Անեանք) or in the town of Shirakavan (Yerazgavors),[21] in the canton of Shirak (Širak), in the central Armenian province of Ayrarat.[22] Aneank' may be connected to the later city of Ani, the Bagratid Armenian capital.[23]
Anania probably came from a noble family.[4] Since his name is sometimes spelled as "Shirakuni" (Շիրակունի), Hewsen argued that he may have belonged to the house of the Kamsarakan or Arsharuni princes of Shirak and Aršarunik’, respectively.[5] Greenwood suggests that it is more likely that Anania came from the lesser nobility in Shirak, who served the house of Kamsarakan.[24] Broutian describes his father as a "minor Armenian nobleman."[25] Vardanyan believes he either came from the Kamsarakan family or that they were his patrons.[26]
Anania is traditionally thought to have been buried in the village of Anavank'; however, the tradition probably originated from the name of the village.[3]
Education
Anania received his early education at the local Armenian schools, possibly at Dprevank monastery,[27] where he studied sacred texts and earlier Armenian authors.[5][28] Due to the lack of teachers and books in Armenia, he decided to travel to the Byzantine Empire (the "land of the Greeks") to study mathematics.[10][28] He first traveled to Theodosiopolis and then to the Byzantine-controlled province of Fourth Armenia (probably Martyropolis),[1] where he studied under the mathematician Christosatur for six months.[1] He then left to find a better teacher and learned about Tychicus,[lower-alpha 2] who was based at the monastery (or martyrium) of Saint Eugenios in Trebizond.[5][10] Redgate placed this in the 620s.[1] Greenwood has speculated that Tychicus, not mentioned elsewhere, may actually be Stephanus of Alexandria.[29]
Anania devoted a significant part of his autobiography to Tychicus (born c. 560), with whom he spent eight years in the 620s or 630s.[30] Tychicus had studied the Armenian language and its literature while serving in the Byzantine army in Armenia.[6][31] Wounded by the Persians, he retired from the military and later studied in Alexandria, Rome, and Constantinople.[6][31] Tychicus later returned to his native Trebizond, where he established a school c. 615.[31] Tychicus taught many students from Constantinople (including from the imperial court) and was renowned among Byzantine kings.[32][10] He gave Anania special attention and taught him what Anania called a "perfect knowledge of mathematics".[17] In Tychicus's vast library, Anania found "everything, exoteric and esoteric",[18] including sacred and secular Greek authors as well as works on the sciences, medicine, chronology, and history.[10][31][33] James R. Russell argued that his library may have included Pythagorean and alchemical books.[18] Anania considered Tychicus to have been "predestined by God for the introduction of science into Armenia."[31]
Educator and scientist
Anania himself established a school in Armenia upon his return.[17] That school, the first in Armenia to teach the quadrivium, is presumed to have been located in his native Shirak.[3][31] He was disappointed with the laziness of his students and their departure after learning the basics.[31] Anania complained about Armenians' lack of interest in mathematics,[6] writing that they "love neither learning, nor knowledge."[33] Nicholas Adontz considered it an exaggeration, "if not an absolute slander, to deny the Armenian innate love of investigation."[34] The 12th-century chronicler Samuel of Ani listed five of Shirakatsi's students,[3] who are otherwise unknown.[35] Anania financed his research in several fields with the money he earned teaching.[31]
Relationship with the Armenian Church
Anania had a close relationship with the Church.[36] Several scholars consider him a church ideologist akin to Cosmas Indicopleustes,[36] whom he actually criticized.[37] Hacikyan et al. describe Anania as a "devout Christian and well versed in the Bible" who "made some attempts to reconcile science and Scripture."[38] In his later years, Anania may have been a monk in the Armenian Church.[5] This is based on his religious discourses and attempts to date the feasts of the church.[39] John A. C. Greppin doubts that Anania was ever in any religious order.[40]
Hewsen noted that some of Anania's "more revolutionary ideas" were suppressed by the Armenian Church after his death.[41] Greppin noted that Anania, a largely secular author, had fallen into a "bad clerical odor."[40] Soviet historians represented him as a founder of irreligious and anti-clerical thought in Armenia, who pioneered double-truth theory.[42] Vazgen Chaloyan called him a "progressive representative of the feudal period of Armenian science."[43] Gevorg Khrlopian went as far as to argue that Anania was an enemy of the Armenian Church and fought against its obscurantism.[36] Hewsen opposed this view, suggesting that, instead, he was an "independent thinker of sorts."[5]
Philosophy
Anania is considered by modern scholars to be a representative of the Hellenizing School since many of his works were based on classical Greek sources.[44][45] He was the first Armenian scholar to have "imported a set of scientific notions, and examples of their applications, from the Greek-speaking schools" into Armenia.[46] He was well versed in Greek literature,[47] and the influence of Greek syntax is evident in his works.[48] Anania was also knowledgeable about native Armenian and Iranian cultural traditions;[10] several of his works provide important information on late Sassanian Iran.[10]
James R. Russell describes him as an alchemist and a Pythagorean who "does not usually rely on mythology to explain natural phenomena."[49] Anania accepted the importance of experience, observation, rational practice and theory, and was influenced by the ideas of the 5th-century Neoplatonist philosopher Davit Anhaght (the Invincible) and the Greek philosophers Thales of Miletus, Hippocrates, Democritus, Plato, Aristotle, Zeno of Citium, Epicurus, Ptolemy, Pappus of Alexandria, and Cosmas Indicopleustes.[50] Aristotle's On the Heavens had a significant influence on Anania's thought.[33] According to Gevorg Khrlopian, Anania was heavily influenced by Yeghishe's An Interpretation of Creation, the anonymous Interpretation of the Categories of Aristotle, and the works of Davit Anhaght,[51] who had established Neoplatonism in Armenian thought.[50] Anania was also the first Armenian scholar to quote Philo of Alexandria.[28]
Anania was the last known lay scholar in Christian Armenia until Grigor Magistros Pahlavuni in the 11th century.[5][6] He advocated rationalism in studying nature and attacked superstitious beliefs and astrology as the "babblings of the foolish."[52][53] He adopted the classical theory of the four elements, which held that all matter is composed of fire, air, water, and earth. He believed that while God directly created these elements, He did not interfere with the "natural course of the development of things." He asserted that the creation, existence, and decay of natural bodies and phenomena occurred through the union of these elements—without the interference of God.[54] Both living and non-living matter came into existence from a synthesis of the four elements.[54]
Anania accepted that the Earth is round, describing it as "like an egg with a spherical yolk (the globe) surrounded by a layer of white (the atmosphere) and covered with a hard shell (the sky)."[36] He accurately explained solar and lunar eclipses, the phases of the Moon, and the structure of the Milky Way,[52] describing the latter as a "mass of dense but faintly luminous stars."[36] Anania also correctly attributed tides to the influence of the Moon.[41][53] He described the topmost sphere as the aether (arp'i), the source of light and heat (through the Sun).[55]
Works
Anania was a polymath and natural philosopher.[56] About 40 works in various disciplines have been attributed to Anania, but only half are extant. They include studies and translations in mathematics, astronomy, cosmology, geography, chronology, and meteorology.[3] Many of his works are believed to have been part of the K'nnikon (Քննիկոն, from "canon", Greek: Kanonikon), completed circa 666,[3][57] and used as the standard science textbook in medieval Armenia.[4][58] According to Greenwood, the K'nnikon was a "fluid compilation, whose contents fluctuated over time, reflecting the interests and resources of different teachers and practitioners."[59]
Modern scholars have praised Anania's writing as concise, simple, and direct, noting that it holds the reader's attention and that he cites examples to illustrate his points.[52][50]
Mathematical
Anania was primarily devoted to mathematics,[4] which he considered the "mother of all knowledge."[3][5] His mathematical books were used as textbooks in Armenia.[6]
Of Anania's several mathematical works, the most important is the book of arithmetic (Hamaroghut’iun, Համարողութիւն; or T'vabanut'iun, Թւաբանութիւն),[60] a comprehensive collection of tables on the four basic operations.[52] It is the earliest known extant work of its kind.[3][8] The tables extend up to 80 million, the largest number they contain.[8] A theoretical part, which may have accompanied the tables, is believed lost.[52]
Problems and Solutions (alternatively translated as On Questions and Answers), a collection of 24 arithmetical problems and their solutions, is based on the application of fractions;[52][9] it is the earliest such work in Armenian. Many of its problems allude to real-world situations: six connect to the princely house of Shirak, the Kamsarakans,[8] and at least three to Iran.[10] Greenwood calls the problems "a rich source for seventh-century history whose value has not been sufficiently recognized."[61]
The third work, probably an appendix of the book of arithmetic, is titled Xraxc'anakank (Խրախճանականք), literally "things for festive occasions". It has been translated into English as Mathematical Pastimes,[60] Fun with Arithmetic or Problems for Amusement. It also contains 24 problems "intended for mathematical entertainment in social gatherings."[52] According to Mathews this may be the oldest extant text of its kind.[9]
Numerical notation
For his mathematical works, Anania developed a unique numerical notation based on 12 letters of the Armenian alphabet. For the units, he used the first nine letters of the Armenian script (Ա, Բ, Գ, Դ, Ե, Զ, Է, Ը, Թ), similar to the standard traditional Armenian numerical system. The letters used for 10, 100, and 1000 were also identical to the traditional Armenian system (Ժ, Ճ, Ռ), but all other numbers up to 10,000 were written using these 12 letters. For instance, 50 would be written ԵԺ (5×10) and not Ծ as in the standard system. Thus, the notation is multiplicative-additive as opposed to the ciphered-additive standard system and requires knowing 12 letters, instead of 36, to write numbers less than 10,000. Numbers greater than that could be written using multiplicative combinations of just 2 or 3 signs, but using all 36 letters.[62]
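The multiplicative-additive scheme can be illustrated with a short sketch. It is a reconstruction for numbers below 10,000 based only on the description above; the handling of an implicit multiplier of one (for example, writing 10 as Ժ rather than ԱԺ) is an assumption of the sketch, not something attested in the sources cited here.

```python
# A minimal sketch of the multiplicative-additive principle described above.
# Letter values follow the article's description; the treatment of a "1"
# multiplier (e.g. writing 10 as Ժ rather than ԱԺ) is an assumption.

UNITS = {1: "Ա", 2: "Բ", 3: "Գ", 4: "Դ", 5: "Ե", 6: "Զ", 7: "Է", 8: "Ը", 9: "Թ"}
PLACES = {10: "Ժ", 100: "Ճ", 1000: "Ռ"}

def shirakatsi_numeral(n: int) -> str:
    """Render 1 <= n < 10000 with the 12-letter multiplicative-additive scheme."""
    if not 1 <= n < 10000:
        raise ValueError("sketch only covers 1..9999")
    parts = []
    for place in (1000, 100, 10):
        digit, n = divmod(n, place)
        if digit:
            # multiplier letter (omitted for 1, by assumption) + place letter
            parts.append((UNITS[digit] if digit > 1 else "") + PLACES[place])
    if n:  # remaining units digit, written with a single letter
        parts.append(UNITS[n])
    return "".join(parts)

print(shirakatsi_numeral(50))    # ԵԺ  (5 x 10), as in the example above
print(shirakatsi_numeral(3))     # Գ
print(shirakatsi_numeral(1963))  # ՌԹՃԶԺԳ under the stated assumptions
```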
Stephen Chrisomalis believes this system was created by Anania since it only occurs in his works and is not found in Greek, Syriac, Hebrew, or any other alphabetic numeral system.[63] Allen Shaw has argued it was just a variant of the Armenian numerals designed specifically for the representation of large numbers.[64] No other writer used it.[63]
Astronomical
One of Anania's most significant works is the Cosmology (Տիեզերագիտութիւն, Tiezeragitut’iun).[65] Abrahamian's version is composed of ten chapters, with an introduction titled "In the Fulfillment of a Promise", implying a patron.[66] It covers the sun, the moon, celestial spheres, constellations, the Milky Way, and meteorological changes.[67]
Sources used for parts of the Cosmology include the Bible (mostly the Pentateuch and Psalms) and works by the Church Fathers. Anania cites the work of Basil of Caesarea, Gregory the Illuminator, and Amphiolocus (perhaps of Iconium).[68] Some chapters of the work, such as "On Clouds" (also called "On the Sky" or "Concerning the Skies"), are largely based on Basil's Hexameron.[69] Anania also repeats classical Greek notions in the fields of astronomy, physics, and meteorology.[70] Pambakian wrote about the significance of the Cosmology:
In conclusion, when viewed as a Byzantine text, the Cosmology's originality may be sought in the way in which pagan and Christian traditions are combined (and often intrinsically intertwined), and much may be learned about the ‘making of science’ in the context of the seventh-century. Within the specific context of Armenian literature, this text deals with many aspects of natural philosophy in unprecedented depth.[71]
Another of Anania's astronomical works, Tables of the Motions of the Moon (Խորանք ընթացիք լուսոյ, xorank‘ ĕnt'ac'ik' lusoy),[60] is based on the works of Meton of Athens and his own observations.[72]
Perpetual calendar
In 667 Anania was invited by Catholicos Anastas I of Akori (r. 661/2–667) to the Armenian Church's central seat at Dvin to establish a fixed calendar of the movable and immovable feasts of the Armenian Church.[6][73] The result was a perpetual calendar based on a 532-year cycle (ՇԼԲ բոլորակ),[4] combining the solar cycle and the lunar cycle, since the two coincide every 532 years. Such a cycle had first been proposed by Victorius of Aquitaine in 457 and adopted by the Church of Alexandria.[36] Anania's calendar was never implemented by the Armenian Church;[4][6] according to Hovhannes Draskhanakerttsi, Anastas's death prevented a church council from ratifying it.[74]
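The length of the cycle is simple period arithmetic: assuming the usual 28-year solar cycle and the 19-year Metonic lunar cycle (standard values, not stated explicitly in the sources cited here), the two first return to the same alignment after

$\operatorname{lcm}(28,19)=28\times 19=532$

years, since 28 and 19 share no common factor.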
Geographical
The Ashkharhatsuyts[6] (classical Armenian: Աշխարհացոյց, Ašxarhac'oyc', lit. "showing the world") is an anonymously transmitted geographical description of the known world, believed to have been written sometime between 610 and 636.[75] According to Elizabeth Redgate, it was written "probably shortly before AD 636".[76] Its authorship has been disputed in the modern period; formerly believed to have been the work of Movses Khorenatsi, it is now attributed by most scholars to Anania.[77] Hewsen calls it "one of the most valuable works to come down to us from Armenian antiquity."[78]
The Armenian Geography—as it is alternatively known—has been especially important for research into the history and geography of Greater Armenia, the Caucasus (Georgia and Caucasian Albania) and the Sasanian Empire,[10] which are all described in detail.[38][78] The territories are described before the Arab invasions and conquests.[16] The information on Armenia is not found elsewhere in historical sources,[9] as it is the only known Armenian geographical work prior to the 13th century.[7]
The Ashkharhatsuyts has survived in long and short recensions.[78] According to the scholarly consensus, the long recension was the original.[79] For the description of Europe, North Africa and Asia (all the known world from Spain to China),[9] it largely uses Greek sources, namely the now lost geography of Pappus of Alexandria (4th century), which in turn, is based on the Geography of Ptolemy (2nd century).[4][10][78] According to Hewsen, it is the "last work based on ancient geographical knowledge written before the Renaissance."[36]
It was one of the earliest secular Armenian works to be published (in 1668 by Voskan Yerevantsi).[80] It has been translated into four languages: English, Latin (both 1736), French (1819), and Russian (1877).[81] In 1877, Kerovbe Patkanian first attributed it to Anania as the most probable author.[82]
Another geographical work of Anania, The Itinerary (Մղոնաչափք, Mghonach'ap'k' or Młonač'ap'k'), may have been a part of the Ashkharhatsuyts. It presents six routes from Dvin, Armenia's capital at that time, to the major settlements in different directions, with distances in miles (մղոն, mghon), referring to the Arabic mile of 1,917.6 metres (6,291 ft), according to Hakob Manandian.[83]
Chronology
Anania's major chronological work, the Chronicle, listed important events in order of their occurrence.[6] Written between 686 and 690, it is composed of two parts: a universal chronicle, utilizing the lost works of Annianus of Alexandria and the lost Roman imperial sequence from Eusebius's Chronographia, and an ecclesiastical history from a miaphysite perspective, which records the six ecumenical councils.[84]
Another chronological work, known as the Calendar (Tomar), included texts and tables about the calendars of 15 peoples: Armenians, Hebrews, Arabs, Macedonians, Romans, Syrians, Greeks, Egyptians, Ethiopians, Athenians, Bithynians, Cappadocians, Georgians, Caucasian Albanians, and Persians.[85][86] The calendars of the Armenians, Romans, Hebrews, Syrians, Greeks, and Egyptians contain texts, while those of the other peoples give only the names of the months and their lengths.[87]
Other
Anania wrote several books on weights and measures. He extensively used the work of Epiphanius of Salamis to present the system of weights used by the Greeks, Jews, and Syrians, and his own knowledge as well as other sources for those of the Armenians and Persians.[88][9]
Anania wrote several works on precious stones,[39] music, and the known languages of the world.[9]
Anania's discourses on Christmas/Epiphany and Easter are discussions on the dates of the two feasts. In the first, he uses a lost work he ascribes to Polycarp of Smyrna and insists that the Armenian custom of celebrating Christmas and the Epiphany on the same date is truer to the holidays' intent than celebrating them separately as is common elsewhere in the Christian world.[39][60]
Traditions and legends
Anania also wrote on herbal medicine, though none of his medical writings have survived. He is traditionally credited with the discovery of the miraculous flower called hamasp'iwr or hamaspiur (համասփիւռ).[18] One 16th-century manuscript mentions that he dealt with its therapeutic properties. The plant has been identified by modern scholars as Silene latifolia (white campion). He is credited with discovering it in Dzoghakert (near modern Taşburun, Iğdır, Turkey)[89] and using it medically.[90][91][92]
According to a later legend, he taught alchemy to the king of Venice.[18]
Legacy
Influence in the Middle Ages
Anania laid the foundations of the exact sciences in Armenia and greatly influenced many Armenian scholars who came after him.[38][lower-alpha 3] Hovhannes Imastaser (Hovhannes Sarkavag) and other medieval scholars extensively cited and incorporated Anania's works.[50] In a 1037 letter, Grigor Magistros, a scholar from the Pahlavuni noble family, asked Catholicos Petros Getadardz for Anania's manuscripts of his K'nnikon, which had been locked up at the catholicosate for centuries.[97][98] Grigor used these as a textbook at his school at the Sanahin Monastery.[99] Anania may also have influenced Byzantine Armenian scholars, such as the 9th-century philosopher Leo[100] and the 14th-century mathematician and grammarian Nicholas Artabasdos Rhabdas.[101]
Reemergence in the modern period
In print, passing references to Anania appeared as early as 1742, in the work of Paghtasar Dpir, but it was not until the latter half of the 19th century that Anania and his work became a subject of scholarly study.[102] In 1877 the Armenian linguist and philologist Kerovbe Patkanian published a collection of Anania's works in the original classical Armenian at St Petersburg University.[103] Titled Sundry Studies (Մնացորդք բանից, Mnats'ordk' banits'),[104] it is the first print publication of his works.[38] Galust Ter-Mkrtchian published a number of Anania's works in 1896.[103] Joseph Orbeli, an Armenian member of the Russian Academy of Sciences, published a Russian translation of Anania's Problems and Solutions in 1918.[103]
Systematic study and publication of his works began in the Soviet period.[103] Ashot G. Abrahamian, who began his research at the Matenadaran in the 1930s, first published one of Anania's arithmetical texts in 1939,[103] followed by a complete compilation of Anania's work in 1944.[105]
Abrahamian's work was not received with universal acclaim. One critic objected to his 1944 compilation for attributing disputed works to Anania.[106] Abrahamian and Garegin Petrosian published an updated edition in 1979.[107] Some criticism persisted: Varag Arakelian noted a number of errors in translations from classical Armenian and concluded that a new translation of Anania's works was needed.[108] Another Soviet scholar, Suren T. Eremian, studied the Geography. He insisted on Anania's authorship and published his research in 1963.[109]
The first translation of Anania's work into a European language was made by the British Orientalist Frederick Cornwallis Conybeare, who translated Anania's On Christmas into English in 1896, followed by On Easter and Anania's autobiography in 1897.[110][103] Lemerle noted that Conybeare translated Anania's autobiography from a Russian translation and that it contains numerous serious errors.[111] Renewed interest in Anania's work emerged in the West in the 1960s. A French translation of his autobiography by Haïg Berbérian appeared in 1964.[112] Robert H. Hewsen authored an introductory article on Anania's life and scholarship in 1968.[113]
Greenwood argues that studying Anania and his works "resonated with twentieth-century political beliefs and offered a suitable subject for academic research in ways that works on medieval theology or Biblical exegesis did not. Anania came to be projected as a national hero from the distant Armenian past, linking and affirming past and present identities."[114]
Modern assessment
Anania is considered by modern scholars the "father of the exact sciences in Armenia."[115] Modern historians regard him as the greatest scientist of medieval Armenia[116] and, possibly, of all Armenian history up to the 20th-century astrophysicist Viktor Ambartsumian.[lower-alpha 4][117] He is widely regarded as the founder of the natural sciences in the country.[42] He was the first classical Armenian scholar to study mathematics and several scientific subjects, such as cosmography and chronology.[118][111] Nicholas Adontz argued that Anania "occupied the same position in Armenian education as Leo [the Mathematician] did in Byzantine education. He was the first to sow the seeds of science among the Armenians."[119] Hacikyan et al. wrote in The Heritage of Armenian Literature:
Shirakatsi was an educator and an organizer of ideas and materials rather than an original thinker. He was often in the forefront of scientific thinking, but at other times he repeated the accepted theories of his time.[38]
Shirakatsi was one of six scholars whose statues were erected in front of the Matenadaran, the museum-institute of Armenian manuscripts in Yerevan, in the 1960s.[120] Another statue was erected in the front yard of Yerevan State University. A crater on the Moon was named after Shirakatsi in 1979.[lower-alpha 5]
In independent post-Soviet Armenia, Anania Shirakatsi has been commemorated in various ways. In 1993 the Medal of Anania Shirakatsi, a state award, was established, given for "significant activities, inventions, and discoveries in the spheres of economy, engineering, architecture, science, and technology." In 2005 the Central Bank of Armenia issued a commemorative coin, while HayPost issued a stamp dedicated to Anania Shirakatsi.[lower-alpha 6]
References
Notes
1. He will be referred to by this name throughout this article, as surnames were not widely used in Armenia at the time.
2. also transliterated as Tukhikos[3] and Tykhikos;[18] Greek: Τύχικος, Classical Armenian: Տիւքիկոս
3. These include Hovhannes Draskhanakerttsi (d. 925),[38] Anania of Narek (d. 980s),[9] Grigor Magistros (d. 1058),[9] Stepanos Asoghik (11th century),[38] Hovhannes Kozern (11th century),[9] Hovhannes Imastaser (Sarkavag) (d. 1129),[93] Nerses Shnorhali (d. 1173),[94] Samuel Anetsi (12th century),[95] Vanakan (d. 1250),[50] Kirakos Gandzaketsi (d. 1271),[38] Hovhannes Erznkatsi (d. 1293),[96] Grigor Tatevatsi (d. 1410),[9] and Hakob Ghrimetsi (d. 1426).[94]
4. Avagyan, Sona (5 October 2010). "Ոչ ոք չէր նկատել, որ Շիրակացու 1400-ամյակն է". Hetq (in Armenian). Archived from the original on 6 December 2019.
5. Gazetteer of Planetary Nomenclature 1994 (PDF). U.S. Geological Survey. 1995. p. 78.
6. "Republic of Armenia: Medal of Anania Shirakatsi". medals.org.uk. Medals of the World.
"The Medal of Anania Shirakatsi". president.am. The Office to the President of the Republic of Armenia.
"Collector Coins: 2005 – Anania Shirakatsi". cba.am. Central Bank of Armenia.
"2005 – Armenian Stamps Stamps of Armenia Հայկական նամականիշեր". armenianstamps.org.
Citations
1. Redgate 2000, p. 188.
2. Hewsen 1968, p. 32.
3. Mathews 2008a, p. 70.
4. van Lint 2018, p. 68.
5. Hewsen 1968, p. 34.
6. Thomson 1997, p. 221.
7. Thomson 1997, p. 222.
8. Hewsen 1968, p. 42.
9. Mathews 2008a, p. 71.
10. Greenwood 2018.
11. Hewsen 1968, p. 32; van Lint 2018, p. 68; Redgate 2000, p. 188
12. Acharian 1942, p. 148.
13. Hayrapetian 1941, p. 3.
14. Hayrapetian 1941, p. 3; Vardanyan 2013, p. 9; Broutian 2009, p. 2; Abeghian 1944, p. 374
15. Hewsen 1968, p. 32; Hacikyan et al. 2002, p. 56; Sarafian 1930, p. 99; Thomson 1997, p. 220
16. Hewsen 1992, p. 15.
17. Hacikyan et al. 2002, p. 56.
18. Russell 2004, p. 293.
19. Vardanyan 2013, p. 17.
20. Greenwood 2011, p. 142.
21. Abeghian 1944, p. 374; Acharian 1942, p. 149; Tumanian et al. 1974, p. 362
22. van Lint 2018, p. 68; Greenwood 2018; Mathews 2008a, p. 70; Hewsen 1968, p. 34
23. Greenwood 2018; Greenwood 2011, p. 144; Hayrapetian 1941, p. 4
24. Greenwood 2011, p. 145.
25. Broutian 2009, p. 2.
26. Vardanyan 2013, p. 9.
27. Tumanian et al. 1974, p. 362.
28. Terian 1980, p. 180.
29. Pambakian 2018, p. 11.
30. Hewsen 1968, p. 35; Greenwood 2011, p. 179; Vasiliev 1945, p. 492
31. Hewsen 1968, p. 35.
32. Hewsen 1968, pp. 34–35.
33. Terian 1980, p. 181.
34. Adontz 1950, p. 72.
35. Greenwood 2011, p. 157.
36. Hewsen 1968, p. 36.
37. Hewsen 1968, pp. 36–37.
38. Hacikyan et al. 2002, p. 58.
39. Hewsen 1968, p. 45.
40. Greppin 1995, p. 679.
41. Hewsen 1968, p. 38.
42. Tumanian et al. 1974, p. 364.
43. Chaloyan 1964, p. 168.
44. Mathews 2008b, p. 365.
45. Adontz 1970, p. 163.
46. Pambakian 2018, p. 9.
47. Vasiliev 1945, p. 492.
48. Terian 1980, p. 182.
49. Russell 2004, p. 295.
50. Hewsen 1968, p. 40.
51. Khrlopian 1964, p. 178.
52. Hacikyan et al. 2002, p. 57.
53. Jeu 1973, p. 252.
54. Hewsen 1968, p. 37.
55. Hewsen 1968, pp. 37–38.
56. Pambakian 2018, p. 11; Greenwood 2018; Hacikyan et al. 2002, p. 56
57. Hewsen 1992, p. 14.
58. Broutian 2009, p. 4.
59. Greenwood 2011, p. 136.
60. Pambakian 2018, p. 13.
61. Greenwood 2011, p. 177.
62. Chrisomalis 2010, pp. 175–176.
63. Chrisomalis 2010, p. 177.
64. Shaw 1939.
65. Pambakian 2018, p. 10.
66. Pambakian 2018, p. 14.
67. Greenwood 2018; Hacikyan et al. 2002, p. 57; Pambakian 2018, p. 15
68. Pambakian 2018, p. 16.
69. Mathews 2008a, p. 70; Thomson 1997, p. 221; Pambakian 2018, p. 16
70. Pambakian 2018, p. 17.
71. Pambakian 2018, p. 22.
72. Hewsen 1968, p. 41.
73. Hewsen 1968, pp. 35–36.
74. Greenwood 2011, p. 131.
75. Hewsen 1992, p. 13.
76. Redgate 2000, p. 6.
77. van Lint 2018, p. 68; Greppin 1995, p. 679; Mathews 2008a, p. 71; Hewsen 1992; Pambakian 2018, p. 12
78. Hewsen 1992, p. 1.
79. Greppin 1995, pp. 679–680.
80. Hewsen 1992, p. 4.
81. Hewsen 1992, pp. 4–5.
82. Hewsen 1992, p. 8.
83. Hewsen 1968, p. 44.
84. Greenwood 2008, p. 197.
85. Broutian 2009, p. 5.
86. Hewsen 1968, pp. 44–45.
87. Broutian 2009, pp. 8–9.
88. Hewsen 1968, pp. 43–44.
89. Hakobian, T. Kh.; Melik-Bakhshian, St. T. [in Armenian]; Barseghian, H. Kh. [in Armenian] (1991). "Ձող(ա)կերտ [Dzogh(a)kert]". Հայաստանի և հարակից շրջանների տեղանունների բառարան [Dictionary of Toponyms of Armenia and Surrounding Regions] Volume III (in Armenian). Yerevan University Press. pp. 481.
90. Vardanian, Stella (1999). "Medicine in Armenia". In Greppin, John A. C.; Savage-Smith, Emilie; Gueriguian, John L. (eds.). The Diffusion of Greco-Roman Medicine into the Middle East and the Caucasus. Delmar, New York: Caravan Books. p. 188-189. ISBN 0-88206-096-1.
91. Vardanian, S. A. (1984). "Բժշկություն [Medicine]". Հայ ժողովրդի պատմություն [History of the Armenian People] Volume II (in Armenian). Yerevan: Armenian SSR Academy of Sciences Press. p. 558.
92. Avdalbekyan, S. (1976). "«Պատմութիւն համասփիւռ ծաղկին» [History of the hamaspyur flower]". Patma-Banasirakan Handes (in Armenian). 3: 258–259. Archived from the original on 2022-11-21.
93. Abrahamian & Petrosian 1979, p. 23; Hewsen 1968, p. 40; Mathews 2008a, p. 71; Hacikyan et al. 2002, p. 58
94. Abrahamian & Petrosian 1979, p. 23.
95. Abrahamian & Petrosian 1979, p. 23; Mathews 2008a, p. 71
96. Abrahamian & Petrosian 1979, p. 23; Mathews 2008a, p. 71; Chaloyan 1964, p. 169
97. Matevosian 1994, pp. 17–18.
98. Greenwood 2011, p. 133.
99. Matevosian 1994, p. 21.
100. Adontz 1950, p. 73.
101. Chaloyan 1964, p. 169.
102. Gyulumyan 2012, pp. 6, 9.
103. Hewsen 1968, p. 33.
104. Patkanian 1877.
105. Abrahamian 1944.
106. Simyonov 1947, pp. 97–100.
107. Abrahamian & Petrosian 1979.
108. Arakelian 1981, p. 300.
109. Eremian 1963.
110. Conybeare 1897.
111. Lemerle 2017, p. 90.
112. Berbérian 1964.
113. Hewsen 1968.
114. Greenwood 2011, p. 134.
115. Terian 1980, p. 180; Mathews 2008a, p. 70; Hewsen 1968, p. 32; Lemerle 2017, p. 90
116. Hewsen 1968, p. 32; Broutian 2009, p. 2
117. Danielyan 2008, p. 256.
118. Thomson 1997, p. 220.
119. Adontz 1950, pp. 70–71.
120. Greenwood 2011, p. 135.
Bibliography
Books on Anania
• Patkanian, Kerovbe (1877). Անանիայի Շիրակունւոյ Մնացորդք բանից [Sundry Studies of Anania Shirakatsi] (in Armenian). Saint Petersburg University.
• Abrahamian, Ashot G. (1944). Անանիա Շիրակացու մատենագրությունը (in Armenian). Yerevan: Armenian SSR Matenadaran Press.
• Abrahamian, Ashot G.; Petrosian, Garegin B. (1979). Անանիա Շիրակացի․ Մատենագրություն [Anania Shirakatsi: Writings] (in Armenian). Yerevan: Sovetakan grogh. online
• Eremian, Suren (1963). Հայաստանը ըստ "Աշխարհացոյց"-ի (in Armenian). Yerevan: Armenian Academy of Sciences Press.
• Gyulumyan, O., ed. (2012). Անանիա Շիրակացի: Կենսամատենագիտություն [Anania Shirakatsi: Bibliography] (in Armenian). Yerevan: National Library of Armenia. ISBN 978-99930-65-86-9. PDF (archived)
• Hewsen, Robert H. (1992). The Geography of Ananias of Širak. Wiesbaden: Ludwig Reichert Verlag.
• Khrlopian, Gevorg T. (1964). Անանիա Շիրակացու աշխարհայացքը [Anania Shirakatsi's worldview]. Yerevan: Yerevan State University. OCLC 37519113.
General books
• Adontz, Nicholas (1970). Armenia in the Period of Justinian. Translated by Nina Garsoïan. Lisbon: Calouste Gulbenkian Foundation.
• Lemerle, Paul (2017) [1971]. Le premier humanisme byzantin [Byzantine Humanism: The First Phase]. Lindsay Helen, Ann Moffatt (trans.). Leiden: Brill. pp. 90–94. ISBN 9789004344594.
• Sarafian, Kevork A. (1930). History of Education in Armenia. Press of the La Verne Leader.
• Acharian, Hrachia (1942). Հայոց անձնանունների բառարան [Dictionary of Personal Names] (in Armenian). Vol. 1. Yerevan State University. pp. 148-149.
• Redgate, A. E. (2000). The Armenians. Oxford: Blackwell Publishing. ISBN 9780631220374.
Book chapters on Anania
• Abeghian, Manuk (1944). "Անանիա Շիրակացի [Anania Shirakatsi]". Հայոց Հին Գրականության Պատմություն [History of Ancient Armenian Literature] (in Armenian). Yerevan: Armenian SSR Academy of Sciences. pp. 373-387.
• Chrisomalis, Stephen (2010). "Shirakatsi's Notation". Numerical Notation: A Comparative History. Cambridge University Press. pp. 175-177. ISBN 9780521878180.
• Hacikyan, Agop Jack; Basmajian, Gabriel; Franchuk, Edward S.; Ouzounian, Nourhan (2002). "Anania Shirakatsi (Anania of Shirak)". The Heritage of Armenian Literature: From the sixth to the eighteenth century. Detroit: Wayne State University Press. pp. 56-80. ISBN 9780814330234.
• Mathews, Edward G. Jr. (2008a). "Anania of Shirak". In Keyser, Paul T.; Irby-Massie, Georgia L. (eds.). Encyclopedia of Ancient Natural Scientists: The Greek Tradition and its Many Heirs. Routledge. pp. 70-71. ISBN 9781134298020.
• Mathews, Edward G. Jr. (2008b). "Hellenizing School (Arm. Yunaban Drpoc; ca 570 – ca 730)". In Keyser, Paul T.; Irby-Massie, Georgia L. (eds.). Encyclopedia of Ancient Natural Scientists: The Greek Tradition and its Many Heirs. Routledge. ISBN 9781134298020.
• Terian, Abraham (1980). "The Hellenizing School: Its Time, Place, and Scope of Activities Reconsidered". In Nina Garsoïan; Thomas F. Mathews; Robert W. Thomson (eds.). East of Byzantium: Syria and Armenia in the Formative Period. Washington, D.C.: Dumbarton Oaks. pp. 175-186.
• Thomson, Robert W. (1997). "Armenian Literary Culture Through the Eleventh Century". In Hovannisian, Richard G. (ed.). The Armenian People from Ancient to Modern Times: Volume I: The Dynastic Periods: From Antiquity to the Fourteenth Century. New York: St. Martin's Press. pp. 199–240.
• van Lint, Theo (2018). "Ananias of Shirak (Anania Shirakats'i)". In Nicholson, Oliver (ed.). The Oxford Dictionary of Late Antiquity. Oxford University Press. p. 68. ISBN 978-0-19-881624-9.
Encyclopedia articles
• Greenwood, Timothy William (9 April 2018). "Ananias of Shirak (Anania Širakac'i)". Encyclopædia Iranica. online
• Tumanian, B.; Matevosian, A.; Chaloyan, V. [in Armenian]; Abrahamian, A. [in Armenian]; Tahmizian, N. (1974). "Անանիա Շիրակացի [Anania Shirakatsi]". Soviet Armenian Encyclopedia Volume I (in Armenian). pp. 362-364.
Journal articles
• Adontz, Nicholas (1950). Translated by J. G. M. "Role of the Armenians in Byzantine Science" (PDF). The Armenian Review. 3: 55–73. Archived from the original (PDF) on 2017-08-01.
• Conybeare, F. E. (1897). "Ananias of Shirak (A. D. 600—650 c.)". Byzantinische Zeitschrift. 6 (3): 572–584. doi:10.1515/byzs.1897.6.3.572. S2CID 194109254.
• Hewsen, Robert H. (1968). "Science in Seventh-Century Armenia: Ananias of Širak". Isis. History of Science Society. 59 (1): 32–45. doi:10.1086/350333. JSTOR 227850. S2CID 145014073.
• Vasiliev, Alexander (1945). "Reviewed Work: Armenia and the Byzantine Empire. A Brief Study of Armenian Art and Civilization by Sirapie Der Nersessian". Speculum. 20 (4): 492. doi:10.2307/2856749. JSTOR 2856749.
• Simyonov, L. (1947). "Պրոֆ. դոկտոր Ա. Աբրահամյան. – "Անանիա Շիրակացու մատենագրությունը"։ Երևան, Հայպետհրատ, 1944 թ." Bulletin of the Academy of Sciences of the Armenian SSR: Social Sciences (in Armenian) (3): 97–100.
• Arakelian, V. D. [in Armenian] (1981). "Անանիա Շիրակացի, Մատենագրություն, թարգմանությունը, աոաջաբանը և ծանոթագրությունները Ա. Գ. Աբրահամյանի և Գ. Բ. Պետրոսյանի, Երևան, 1979 [Ananya Shirakatsi: Selected Works, transl. by A. G. Abrahamian and G. B. Petrossian]". Patma-Banasirakan Handes (in Armenian) (4): 295–300.
• Berbérian, Haïg (1964). "Autobiographie d'Anania Sirakac'i". Revue des Études Arméniennes (in French) (1): 189–194.
• Shaw, Allen A. (1939). "An Overlooked Numeral System of Antiquity". National Mathematics Magazine. Mathematical Association of America. 13 (8): 368–372. doi:10.2307/3028489. JSTOR 3028489.
• Jeu, Bernard (1973). "A Note on Some Armenian Philosophers". Studies in Soviet Thought. Springer. 13 (3/4): 251–264. doi:10.1007/BF01043876. JSTOR 20098573. S2CID 143364705.
• Pambakian, Stephanie (2018). "Tradition and Innovation in the Cosmology of Anania Širakac'i" (PDF). Eurasiatica. Edizioni Ca’ Foscari. 11: 9–24. doi:10.30687/978-88-6969-279-6/001. ISBN 978-88-6969-280-2. Archived from the original on 2019-04-12. Retrieved 2019-03-30.
• Matevosian, Artashes [in Armenian] (1994). "Գրիգոր Մագիստրոսը և Անանիա Շիրակացու "Քննիկոնը"" (PDF). Banber Matenadarani (in Armenian). Matenadaran (16): 16–30.
• Greenwood, Tim (2011). "A Reassessment of the Life and Mathematical Problems of Anania Širakac'i". Revue des Études Arméniennes. 33: 131–186.
• Greppin, John A. C. (1995). "Comments on Early Armenian Knowledge of Botany as Revealed in the Geography of Ananias of Shirak". Journal of the American Oriental Society. 115 (4): 679–684. doi:10.2307/604736. JSTOR 604736.
• Hayrapetian, S. (1941). "Անանիա Շիրակացու կյանքն ու գործունեությունը [Life and career of Anania Shirakatsi]" (PDF). Banber Matenadarani (in Armenian). Matenadaran (1).
• Vardanyan, Vahram (2013). "Անանիա Շիրակացին' հայոց հոգևոր և քաղաքական ինքնուրույնության ջահակիր [Anania Shirakatsi: An Armenian Torchbearer of Spiritual and Political Autonomy]". Lraber Hasarakakan Gitutyunneri (in Armenian) (3): 9–19.
• Greenwood, Tim (2008). ""New Light from the East": Chronography and Ecclesiastical History through a Late Seventh-Century Armenian Source". Journal of Early Christian Studies. 16 (2): 197–254. doi:10.1353/earl.0.0018. S2CID 170843259.
• Broutian, Grigor (2009). "Persian and Arabic Calendars as Presented by Anania Shirkatsi". Tarikh-e Elm. University of Tehran. 7 (1): 1–17.
• Danielyan, Eduard L. [in Armenian] (2008). "The Contribution of Academician Victor Hambartsumyan to the History of Armenian and World Cosmological Thought and Astronomy". Lraber Hasarakakan Gitutyunneri (3): 256–259.
• Chaloyan, Vazgen K. [in Armenian] (1964). "Անանիա Շիրակացու բնափիլիսոփայական հայացքները [The natural-philosophical views of Anania Shirakatsi]". Natural Sciences and History of Technology in Armenia (in Armenian). Armenian Academy of Sciences (3): 142–171.
• Russell, James R. (2004). "The Dream Vision of Anania Širakac'i". Armenian and Iranian Studies. Cambridge, Massachusetts: Harvard University Press; originally published in Revue des Études Arméniennes 21, 1988–89, pp. 159–170.
Further reading
• Tee, Garry J. (1972). "Two Armenian Savants". Prudentia. University of Auckland. 4 (2).
• Mahé, Jean-Pierre (1987). "Quadrivium et cursus d'études au VIIe siècle en Arménie et dans le monde byzantin d'après le "K'nnikon" d'Anania Širakac'i". Travaux et Mémoires (in French). Centre de recherche d'Histoire et Civilisation de Byzance. 10: 159–206.
External links
• Media related to Anania Shirakatsi at Wikimedia Commons
• Quotations related to Anania Shirakatsi at Wikiquote
|
Wikipedia
|
Shirley Kallek
Shirley Kallek (November 23, 1926 in Roselle, New Jersey – May 20, 1983) was an American economic statistician known for her work at the United States Census Bureau.[1] She was president of the Caucus for Women in Statistics[2] and of the Washington Statistical Society.[3]
Shirley Kallek
Born: November 23, 1926, Roselle, New Jersey, United States
Died: May 20, 1983 (aged 56)
Alma mater: Hunter College (BEc), New York University (MA)
Occupation: Economic statistician
Employer: United States Census Bureau
Organizations: Caucus for Women in Statistics, Washington Statistical Society
Awards: Department of Commerce Gold Medal
Biography
Early life and education
Kallek was born on November 23, 1926, in Roselle, New Jersey.[4][5] She did her undergraduate studies at Hunter College, where she earned a bachelor's degree in 1947. She completed a master's degree in 1949 at New York University.[4][5]
Career
After completing her studies, Kallek took a position as an analyst for the National Air Transportation Association, but resigned because she was paid roughly half the salary of new male employees and was offered only a 10% raise when she complained.[6] She started her own consulting business in 1950, and began working for the Census Bureau in 1955.[4][6]
In 1970, she became chief of the Economic Statistics and Surveys Division and chief of the Economic Censuses Staff at the Census Bureau.[4][7] She was associate director for economic fields for the census from 1974 to 1983.[6]
Service and later life
Kallek was Jewish. She became one of the founders of Temple Micah in Washington, DC, and the temple's first treasurer.[8] She also served as president of the Caucus for Women in Statistics in 1980,[2] and as president of the Washington Statistical Society for 1981–1982.[3]
She died from cancer on May 20, 1983.[9]
Recognition
Kallek was elected as a Fellow of the American Statistical Association in 1972 "for her innovative work in developing new data series, especially on minority business enterprise, and for her outstanding contribution to the improvement of existing industry statistics through effective administration and improved application of computer techniques".[7] She won the Department of Commerce Gold Medal in 1975,[4] and was posthumously given a presidential award for outstanding service for 1983.[10]
For many years after her death, the Shirley Kallek Memorial Lecture was an annual component of the Research Conference of the Census Bureau. The first such lecture was given by Alan Greenspan in 1985.[5]
References
1. "Shirley Kallek", Notable alumni, United States Census Bureau, retrieved 2018-12-21
2. Presidents 1971–2017 (PDF), Caucus for Women in Statistics, retrieved 2018-12-19
3. Washington Statistical Society Past and Present 1896 to 2012 (PDF), Washington Statistical Society, retrieved 2018-12-21
4. Briefing Handbook: Chief Economist, Bureau of the Census, Bureau of Economic Analysis (PDF), US Department of Commerce, 1977 – via Gerald R. Ford Museum
5. Waite, Charles E. (1995), "Economic statistics: Something old, something new, something borrowed, and something blue", Proceedings of the Annual Research Conference of the United States Bureau of the Census, pp. 3–14
6. Biles, Elmer S. (April 27, 1983), Oral history: Shirley Kallek (PDF), United States Census Bureau, retrieved 2018-12-21
7. "New ASA Fellows—1972", The American Statistician, 26 (4): 48–49, October 1972, doi:10.1080/00031305.1972.10477369
8. Early Days, Temple Micah, retrieved 2018-12-21
9. "In Remembrance of Shirley Kallek", APDU Newsletter, Association of Public Data Users, July 1983; "In Memoriam", Data User News, Bureau of the Census, 18 (6): 1, June 1983; "Shirley Kallek dies of cancer", Amstat News, American Statistical Association: 2, 1983
10. Causey, Mike (February 19, 1984), "Outstanding 1983 Workers Awarded $10,000", Washington Post
|
Wikipedia
|
Shirley Pledger
Shirley Pledger is a New Zealand mathematician and statistician known for her work on mark and recapture methods for estimating wildlife populations.[1] She is an emeritus professor in the School of Mathematics and Statistics of Victoria University of Wellington.[2]
Education and career
Pledger became a student at Victoria University of Wellington in 1961, choosing mathematics over physics because of its more welcoming environment for women at that time. She specialised in algebraic topology and earned a master's degree. Originally intending to go into secondary-school education, she was instead persuaded by the department head, Professor J. T. Campbell, to become a lecturer in mathematics at Victoria University of Wellington in 1965. She married another new lecturer in mathematics, Ken Pledger, in 1967, and gave up her lectureship in 1970 shortly before the birth of their first child.[3]
A few years later she returned to academia as an instructor in statistics at Wellington Polytechnic and in 1980 she obtained a lecturer position in statistics at Victoria University of Wellington. While working there, she completed a Ph.D. in statistics in 1999, concerning mark and recapture methods.[3] She was given a chair as a professor of biometrics at the university in 2011.[3][1]
Recognition
Pledger is the 2014 winner of the Campbell Award of the New Zealand Statistical Association.[3]
References
1. Prof Shirley Pledger's Inaugural Lecture, Victoria University of Wellington School of Mathematics and Statistics, 13 June 2011, retrieved 2020-08-26
2. Staff, Victoria University of Wellington School of Mathematics and Statistics, retrieved 2020-08-26
3. "Campbell Award", Australian & New Zealand Journal of Statistics, 58 (2): 145–148, June 2016, doi:10.1111/anzs.12160, S2CID 247695647
External links
• Home page
|
Wikipedia
|
Jordan algebra
In abstract algebra, a Jordan algebra is a nonassociative algebra over a field whose multiplication satisfies the following axioms:
1. $xy=yx$ (commutative law)
2. $(xy)(xx)=x(y(xx))$ (Jordan identity).
The product of two elements x and y in a Jordan algebra is also denoted x ∘ y, particularly to avoid confusion with the product of a related associative algebra.
The axioms imply[1] that a Jordan algebra is power-associative, meaning that $x^{n}=x\cdots x$ is independent of how we parenthesize this expression. They also imply[1] that $x^{m}(x^{n}y)=x^{n}(x^{m}y)$ for all positive integers m and n. Thus, we may equivalently define a Jordan algebra to be a commutative, power-associative algebra such that for any element $x$, the operations of multiplying by powers $x^{n}$ all commute.
Jordan algebras were introduced by Pascual Jordan (1933) in an effort to formalize the notion of an algebra of observables in quantum electrodynamics. It was soon shown that the algebras were not useful in this context; however, they have since found many applications in mathematics.[2] The algebras were originally called "r-number systems", but were renamed "Jordan algebras" by Abraham Adrian Albert (1946), who began the systematic study of general Jordan algebras.
Special Jordan algebras
Given an associative algebra A (not of characteristic 2), one can construct a Jordan algebra A+ on the same underlying vector space, with the same addition. Notice first that an associative algebra is a Jordan algebra if and only if it is commutative. If it is not commutative, we can define a new multiplication on A that is commutative and in fact makes it a Jordan algebra. The new multiplication x ∘ y is the Jordan product:
$x\circ y={\frac {xy+yx}{2}}.$
This defines a Jordan algebra A+, and we call these Jordan algebras, as well as any subalgebras of these Jordan algebras, special Jordan algebras. All other Jordan algebras are called exceptional Jordan algebras. The Shirshov–Cohn theorem states that any Jordan algebra with two generators is special.[3] Related to this, Macdonald's theorem states that any polynomial in three variables, that has degree one in one of the variables, and that vanishes in every special Jordan algebra, vanishes in every Jordan algebra.[4]
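As a quick sanity check of this construction (an illustration only, not taken from the references below), the sketch verifies commutativity and the Jordan identity numerically for A+ built from random real matrices:

```python
# A quick numerical sanity check (an illustration, not taken from the references
# below): the Jordan product on matrices is commutative and satisfies the
# Jordan identity (x∘y)∘(x∘x) = x∘(y∘(x∘x)).
import numpy as np

def jordan(x, y):
    """Jordan product x∘y = (xy + yx)/2 on square matrices."""
    return (x @ y + y @ x) / 2

rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 4, 4))          # two random 4x4 real matrices

assert np.allclose(jordan(x, y), jordan(y, x))                              # commutativity
x_sq = jordan(x, x)
assert np.allclose(jordan(jordan(x, y), x_sq), jordan(x, jordan(y, x_sq)))  # Jordan identity
print("A+ built from 4x4 real matrices passes both checks.")
```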
Hermitian Jordan algebras
If (A, σ) is an associative algebra with an involution σ, then if σ(x)=x and σ(y)=y it follows that $ \sigma (xy+yx)=xy+yx.$ Thus the set of all elements fixed by the involution (sometimes called the hermitian elements) form a subalgebra of A+, which is sometimes denoted H(A,σ).
Examples
1. The set of self-adjoint real, complex, or quaternionic matrices with multiplication
$(xy+yx)/2$
form a special Jordan algebra.
2. The set of 3×3 self-adjoint matrices over the octonions, again with multiplication
$(xy+yx)/2,$
is a 27-dimensional exceptional Jordan algebra (it is exceptional because the octonions are not associative). This was the first example of an Albert algebra. Its automorphism group is the exceptional Lie group F4. Since over the complex numbers this is the only simple exceptional Jordan algebra up to isomorphism,[5] it is often referred to as "the" exceptional Jordan algebra. Over the real numbers there are three isomorphism classes of simple exceptional Jordan algebras.[5]
Derivations and structure algebra
A derivation of a Jordan algebra A is an endomorphism D of A such that D(xy) = D(x)y+xD(y). The derivations form a Lie algebra der(A). The Jordan identity implies that if x and y are elements of A, then the endomorphism sending z to x(yz)−y(xz) is a derivation. Thus the direct sum of A and der(A) can be made into a Lie algebra, called the structure algebra of A, str(A).
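The claim that z ↦ x(yz) − y(xz), that is, the commutator of the two multiplication operators, acts as a derivation can likewise be checked numerically; the sketch below does so in the special Jordan algebra of matrices (again an illustration, not taken from the cited references):

```python
# A quick numerical check (an illustration, not taken from the references below):
# in the special Jordan algebra of matrices with x∘y = (xy + yx)/2, the map
# D(z) = x∘(y∘z) - y∘(x∘z) is a derivation, i.e. D(z∘w) = D(z)∘w + z∘D(w).
import numpy as np

def jordan(a, b):
    return (a @ b + b @ a) / 2

rng = np.random.default_rng(2)
x, y, z, w = rng.standard_normal((4, 3, 3))   # four random 3x3 real matrices

def D(m):
    return jordan(x, jordan(y, m)) - jordan(y, jordan(x, m))

assert np.allclose(D(jordan(z, w)), jordan(D(z), w) + jordan(z, D(w)))
print("The commutator of multiplications by x and y acts as a derivation, as claimed.")
```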
A simple example is provided by the Hermitian Jordan algebras H(A,σ). In this case any element x of A with σ(x)=−x defines a derivation. In many important examples, the structure algebra of H(A,σ) is A.
Derivation and structure algebras also form part of Tits' construction of the Freudenthal magic square.
Formally real Jordan algebras
A (possibly nonassociative) algebra over the real numbers is said to be formally real if it satisfies the property that a sum of n squares can only vanish if each one vanishes individually. In 1932, Jordan attempted to axiomatize quantum theory by saying that the algebra of observables of any quantum system should be a formally real algebra that is commutative (xy = yx) and power-associative (the associative law holds for products involving only x, so that powers of any element x are unambiguously defined). He proved that any such algebra is a Jordan algebra.
Not every Jordan algebra is formally real, but Jordan, von Neumann & Wigner (1934) classified the finite-dimensional formally real Jordan algebras, also called Euclidean Jordan algebras. Every formally real Jordan algebra can be written as a direct sum of so-called simple ones, which are not themselves direct sums in a nontrivial way. In finite dimensions, the simple formally real Jordan algebras come in four infinite families, together with one exceptional case:
• The Jordan algebra of n×n self-adjoint real matrices, as above.
• The Jordan algebra of n×n self-adjoint complex matrices, as above.
• The Jordan algebra of n×n self-adjoint quaternionic matrices, as above.
• The Jordan algebra freely generated by Rn with the relations
$x^{2}=\langle x,x\rangle $
where the right-hand side is defined using the usual inner product on Rn. This is sometimes called a spin factor or a Jordan algebra of Clifford type.
• The Jordan algebra of 3×3 self-adjoint octonionic matrices, as above (an exceptional Jordan algebra called the Albert algebra).
Of these possibilities, so far it appears that nature makes use only of the n×n complex matrices as algebras of observables. However, the spin factors play a role in special relativity, and all the formally real Jordan algebras are related to projective geometry.
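The spin factor admits a standard concrete model on R ⊕ Rn, with product (a, u)∘(b, v) = (ab + ⟨u,v⟩, av + bu) and unit (1, 0); this explicit presentation is an assumption of the sketch below rather than a statement from the sources above. The sketch checks the defining relation x² = ⟨x,x⟩ for vectors and the Jordan identity numerically:

```python
# A sketch of one standard concrete model of the spin factor on R ⊕ R^n (this
# particular presentation is an assumption of the sketch, not quoted from the
# sources): (a, u)∘(b, v) = (ab + <u, v>, a v + b u), with unit (1, 0).
import numpy as np

def spin_product(x, y):
    a, u = x
    b, v = y
    return (a * b + u @ v, a * v + b * u)

rng = np.random.default_rng(1)
n = 5
u = rng.standard_normal(n)
x = (rng.standard_normal(), rng.standard_normal(n))
y = (rng.standard_normal(), rng.standard_normal(n))

# A pure vector (0, u) squares to <u, u> times the unit, matching x^2 = <x, x>.
sq = spin_product((0.0, u), (0.0, u))
assert np.isclose(sq[0], u @ u) and np.allclose(sq[1], 0.0)

# The Jordan identity (x∘y)∘(x∘x) = x∘(y∘(x∘x)) holds for random elements.
x_sq = spin_product(x, x)
lhs = spin_product(spin_product(x, y), x_sq)
rhs = spin_product(x, spin_product(y, x_sq))
assert np.isclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1])
print("The spin-factor product passes both checks.")
```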
Peirce decomposition
If e is an idempotent in a Jordan algebra A (e2 = e) and R is the operation of multiplication by e, then
• R(2R − 1)(R − 1) = 0
so the only eigenvalues of R are 0, 1/2, 1. If the Jordan algebra A is finite-dimensional over a field of characteristic not 2, this implies that it is a direct sum of subspaces A = A0(e) ⊕ A1/2(e) ⊕ A1(e) of the three eigenspaces. This decomposition was first considered by Jordan, von Neumann & Wigner (1934) for totally real Jordan algebras. It was later studied in full generality by Albert (1947) and called the Peirce decomposition of A relative to the idempotent e.[6]
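As a small illustration (an ad-hoc numerical check, not drawn from the cited sources), one can take an idempotent in the special Jordan algebra of 3×3 real symmetric matrices and confirm that the multiplication operator R has no eigenvalues other than 0, 1/2 and 1:

```python
# An ad-hoc numerical illustration (not drawn from the cited sources): in the
# special Jordan algebra of 3x3 real symmetric matrices, multiplication by an
# idempotent e has only the eigenvalues 0, 1/2 and 1, as R(2R - 1)(R - 1) = 0 predicts.
import itertools
import numpy as np

INDEX = list(itertools.combinations_with_replacement(range(3), 2))  # upper-triangle positions

def jordan(x, y):
    return (x @ y + y @ x) / 2

def coords(m):
    """Coordinates of a symmetric matrix in the basis of elementary symmetric matrices."""
    return np.array([m[i, j] for i, j in INDEX])

basis = []
for i, j in INDEX:
    b = np.zeros((3, 3))
    b[i, j] = b[j, i] = 1.0
    basis.append(b)

e = np.diag([1.0, 1.0, 0.0])                                # an idempotent: e∘e = e
R = np.column_stack([coords(jordan(e, b)) for b in basis])  # matrix of the map X ↦ e∘X

print(sorted(np.round(np.linalg.eigvals(R).real, 6)))       # only 0.0, 0.5 and 1.0 appear
```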
Special kinds and generalizations
Infinite-dimensional Jordan algebras
In 1979, Efim Zelmanov classified infinite-dimensional simple (and prime non-degenerate) Jordan algebras. They are either of Hermitian or Clifford type. In particular, the only exceptional simple Jordan algebras are finite-dimensional Albert algebras, which have dimension 27.
Jordan operator algebras
Main article: Jordan operator algebra
The theory of operator algebras has been extended to cover Jordan operator algebras.
The counterparts of C*-algebras are JB algebras, which in finite dimensions are called Euclidean Jordan algebras. The norm on the real Jordan algebra must be complete and satisfy the axioms:
$\displaystyle {\|a\circ b\|\leq \|a\|\cdot \|b\|,\,\,\,\|a^{2}\|=\|a\|^{2},\,\,\,\|a^{2}\|\leq \|a^{2}+b^{2}\|.}$
These axioms guarantee that the Jordan algebra is formally real, so that, if a sum of squares of terms is zero, those terms must be zero. The complexifications of JB algebras are called Jordan C*-algebras or JB*-algebras. They have been used extensively in complex geometry to extend Koecher's Jordan algebraic treatment of bounded symmetric domains to infinite dimensions. Not all JB algebras can be realized as Jordan algebras of self-adjoint operators on a Hilbert space, exactly as in finite dimensions. The exceptional Albert algebra is the common obstruction.
The analogue of von Neumann algebras among Jordan operator algebras is the class of JBW algebras. These turn out to be JB algebras which, as Banach spaces, are the dual spaces of Banach spaces. Much of the structure theory of von Neumann algebras can be carried over to JBW algebras. In particular the JBW factors—those with center reduced to R—are completely understood in terms of von Neumann algebras. Apart from the exceptional Albert algebra, all JBW factors can be realised as Jordan algebras of self-adjoint operators on a Hilbert space, closed in the weak operator topology. Of these, the spin factors can be constructed very simply from real Hilbert spaces. All other JBW factors are either the self-adjoint part of a von Neumann factor or its fixed-point subalgebra under a period-2 *-antiautomorphism of the von Neumann factor.[7]
Jordan rings
A Jordan ring is a generalization of Jordan algebras, requiring only that the Jordan ring be over a general ring rather than a field. Alternatively one can define a Jordan ring as a commutative nonassociative ring that respects the Jordan identity.
Jordan superalgebras
Jordan superalgebras were introduced by Kac, Kantor and Kaplansky; these are $\mathbb {Z} /2$-graded algebras $J_{0}\oplus J_{1}$ where $J_{0}$ is a Jordan algebra and $J_{1}$ has a "Lie-like" product with values in $J_{0}$.[8]
Any $\mathbb {Z} /2$-graded associative algebra $A_{0}\oplus A_{1}$ becomes a Jordan superalgebra with respect to the graded Jordan brace
$\{x_{i},y_{j}\}=x_{i}y_{j}+(-1)^{ij}y_{j}x_{i}\ .$
Jordan simple superalgebras over an algebraically closed field of characteristic 0 were classified by Kac (1977). They include several families and some exceptional algebras, notably $K_{3}$ and $K_{10}$.
J-structures
The concept of J-structure was introduced by Springer (1973) to develop a theory of Jordan algebras using linear algebraic groups and axioms taking the Jordan inversion as basic operation and Hua's identity as a basic relation. In characteristic not equal to 2 the theory of J-structures is essentially the same as that of Jordan algebras.
Quadratic Jordan algebras
Quadratic Jordan algebras are a generalization of (linear) Jordan algebras introduced by Kevin McCrimmon (1966). The fundamental identities of the quadratic representation of a linear Jordan algebra are used as axioms to define a quadratic Jordan algebra over a field of arbitrary characteristic. There is a uniform description of finite-dimensional simple quadratic Jordan algebras, independent of characteristic: in characteristic not equal to 2 the theory of quadratic Jordan algebras reduces to that of linear Jordan algebras.
See also
• Freudenthal algebra
• Jordan triple system
• Jordan pair
• Kantor–Koecher–Tits construction
• Scorza variety
Notes
1. Jacobson 1968, pp. 35–36, specifically remark before (56) and theorem 8
2. Dahn, Ryan (2023-01-01). "Nazis, émigrés, and abstract mathematics". Physics Today. 76 (1): 44–50.
3. McCrimmon 2004, p. 100
4. McCrimmon 2004, p. 99
5. Springer & Veldkamp 2000, §5.8, p. 153
6. McCrimmon 2004, pp. 99 et seq, 235 et seq
7. See:
• Hanche-Olsen & Størmer 1984
• Upmeier 1985
• Upmeier 1987
• Faraut & Koranyi 1994
8. McCrimmon 2004, pp. 9–10
References
• Albert, A. Adrian (1946), "On Jordan algebras of linear transformations", Transactions of the American Mathematical Society, 59 (3): 524–555, doi:10.1090/S0002-9947-1946-0016759-3, ISSN 0002-9947, JSTOR 1990270, MR 0016759
• Albert, A. Adrian (1947), "A structure theory for Jordan algebras", Annals of Mathematics, Second Series, 48 (3): 546–567, doi:10.2307/1969128, ISSN 0003-486X, JSTOR 1969128, MR 0021546
• Baez, John C. (2002). "§3: Projective Octonionic Geometry". The Octonions. pp. 145–205. doi:10.1090/S0273-0979-01-00934-X. MR 1886087. S2CID 586512. Online HTML version.
• Faraut, J.; Koranyi, A. (1994), Analysis on symmetric cones, Oxford Mathematical Monographs, Oxford University Press, ISBN 0198534779
• Hanche-Olsen, H.; Størmer, E. (1984), Jordan operator algebras, Monographs and Studies in Mathematics, vol. 21, Pitman, ISBN 0273086197
• Jacobson, Nathan (2008) [1968], Structure and representations of Jordan algebras, American Mathematical Society Colloquium Publications, vol. 39, Providence, R.I.: American Mathematical Society, ISBN 9780821831793, MR 0251099
• Jordan, Pascual (1933), "Über Verallgemeinerungsmöglichkeiten des Formalismus der Quantenmechanik", Nachr. Akad. Wiss. Göttingen. Math. Phys. Kl. I, 41: 209–217
• Jordan, P.; von Neumann, J.; Wigner, E. (1934), "On an algebraic generalization of the quantum mechanical formalism", Annals of Mathematics, 35 (1): 29–64, doi:10.2307/1968117, JSTOR 1968117
• Kac, Victor G (1977), "Classification of simple Z-graded Lie superalgebras and simple Jordan superalgebras", Communications in Algebra, 5 (13): 1375–1400, doi:10.1080/00927877708822224, ISSN 0092-7872, MR 0498755
• McCrimmon, Kevin (1966), "A general theory of Jordan rings", Proc. Natl. Acad. Sci. U.S.A., 56 (4): 1072–1079, Bibcode:1966PNAS...56.1072M, doi:10.1073/pnas.56.4.1072, JSTOR 57792, MR 0202783, PMC 220000, PMID 16591377, Zbl 0139.25502
• McCrimmon, Kevin (2004), A taste of Jordan algebras, Universitext, Berlin, New York: Springer-Verlag, doi:10.1007/b97489, ISBN 978-0-387-95447-9, MR 2014924, Zbl 1044.17001, Errata
• Ichiro Satake (1980), Algebraic Structures of Symmetric Domains, Princeton University Press, ISBN 978-0-691-08271-4. Review
• Schafer, Richard D. (1996), An introduction to nonassociative algebras, Courier Dover Publications, ISBN 978-0-486-68813-8, Zbl 0145.25601
• Zhevlakov, K.A.; Slin'ko, A.M.; Shestakov, I.P.; Shirshov, A.I. (1982) [1978]. Rings that are nearly associative. Academic Press. ISBN 0-12-779850-1. MR 0518614. Zbl 0487.17001.
• Slin'ko, A.M. (2001) [1994], "Jordan algebra", Encyclopedia of Mathematics, EMS Press
• Springer, Tonny A. (1998) [1973], Jordan algebras and algebraic groups, Classics in Mathematics, Springer-Verlag, doi:10.1007/978-3-642-61970-0, ISBN 978-3-540-63632-8, MR 1490836, Zbl 1024.17018
• Springer, Tonny A.; Veldkamp, Ferdinand D. (2000) [1963], Octonions, Jordan algebras and exceptional groups, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, doi:10.1007/978-3-662-12622-6, ISBN 978-3-540-66337-9, MR 1763974
• Upmeier, H. (1985), Symmetric Banach manifolds and Jordan C∗-algebras, North-Holland Mathematics Studies, vol. 104, ISBN 0444876510
• Upmeier, H. (1987), Jordan algebras in analysis, operator theory, and quantum mechanics, CBMS Regional Conference Series in Mathematics, vol. 67, American Mathematical Society, ISBN 082180717X
Further reading
• Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998), The book of involutions, Colloquium Publications, vol. 44, With a preface by J. Tits, Providence, RI: American Mathematical Society, ISBN 0-8218-0904-0, Zbl 0955.16001
External links
• Jordan algebra at PlanetMath
• Jordan-Banach and Jordan-Lie algebras at PlanetMath
|
Wikipedia
|
Shlomo Sternberg
Shlomo Zvi Sternberg (born 1936) is an American mathematician known for his work in geometry, particularly symplectic geometry and Lie theory.
Shlomo Sternberg
Born: November 20, 1936
Alma mater: Johns Hopkins University
Awards: Guggenheim Fellowship, 1974
Scientific career
Fields: Mathematics
Institutions: Harvard University; New York University; University of Chicago
Thesis: Some Problems in Discrete Nonlinear Transformations in One and Two Dimensions (1955)
Doctoral advisor: Aurel Friedrich Wintner
Doctoral students: Victor Guillemin, Ravindra Kulkarni, Yael Karshon, Steve Shnider, Israel Michael Sigal, Sandy Zabell
Website: https://www.math.harvard.edu/people/sternberg-shlomo/
Education and career
Sternberg earned his PhD in 1955 from Johns Hopkins University, with a thesis entitled "Some Problems in Discrete Nonlinear Transformations in One and Two Dimensions", supervised by Aurel Wintner.[1]
After postdoctoral work at New York University (1956–1957) and an instructorship at the University of Chicago (1957–1959), Sternberg joined the Mathematics Department at Harvard University in 1959, where he was George Putnam Professor of Pure and Applied Mathematics until 2017. Since 2017, he has been Emeritus Professor at the Harvard Mathematics Department.[2]
Among other honors, Sternberg was awarded a Guggenheim fellowship in 1974[3] and an honorary doctorate by the University of Mannheim in 1991.[4][5] He delivered the AMS Colloquium Lecture in 1990[6] and the Hebrew University's Albert Einstein Memorial Lecture in 2006.[7]
Sternberg was elected a member of the American Academy of Arts and Sciences in 1969,[8] of the National Academy of Sciences in 1986,[9] of the Spanish Royal Academy of Sciences in 1999,[10] and of the American Philosophical Society in 2010.[11]
Research
Sternberg's first well-known published result, based on his PhD thesis, is known as the "Sternberg linearization theorem" which asserts that a smooth map near a hyperbolic fixed point can be made linear by a smooth change of coordinates provided that certain non-resonance conditions are satisfied. He also proved generalizations of the Birkhoff canonical form theorems for volume preserving mappings in n-dimensions and symplectic mappings, all in the smooth case.[12][13][14]
In the 1960s Sternberg became involved with Isadore Singer in the project of revisiting Élie Cartan's papers from the early 1900s on the classification of the simple transitive infinite Lie pseudogroups, and of relating Cartan's results to recent results in the theory of G-structures and supplying rigorous (by present-day standards) proofs of his main theorems.[15] Also, together with Victor Guillemin and Daniel Quillen, he extended this classification to a larger class of pseudogroups: the primitive infinite pseudogroups. As a by-product, they also obtained the "integrability of characteristics" theorem for over-determined systems of partial differential equations.[16]
Sternberg provided major contributions also to the topic of Lie group actions on symplectic manifolds, in particular involving various aspects of the theory of symplectic reduction. For instance, together with Bertram Kostant he showed how to use reduction techniques to give a rigorous mathematical treatment of what is known in the physics literature as the BRS quantization procedure.[17] Together with David Kazhdan and Bertram Kostant, he showed how one can simplify the analysis of dynamical systems of Calogero type by describing them as symplectic reductions of much simpler systems.[18] Together with Victor Guillemin he gave the first rigorous formulation and proof of a hitherto vague assertion about Lie group actions on symplectic manifolds, namely the Quantization commutes with reduction conjecture.[19]
This last work was also the inspiration for a result in equivariant symplectic geometry that disclosed for the first time a surprising and unexpected connection between the theory of Hamiltonian torus actions on compact symplectic manifolds and the theory of convex polytopes. This theorem, the "AGS convexity theorem," was simultaneously proved by Guillemin-Sternberg[20] and Michael Atiyah[21] in the early 1980s.
Sternberg's contributions to symplectic geometry and Lie theory have also included a number of basic textbooks on these subjects, among them the three graduate level texts with Victor Guillemin: "Geometric Asymptotics,"[22] "Symplectic Techniques in Physics",[23] and "Semi-Classical Analysis".[24] His "Lectures on Differential Geometry"[25] is a popular standard textbook for upper-level undergraduate courses on differential manifolds, the calculus of variations, Lie theory and the geometry of G-structures. He also published the more recent "Curvature in mathematics and physics".[26]
Sternberg has, in addition, played a role in recent developments in theoretical physics. He has worked with Yuval Ne'eman on supersymmetry in elementary particle physics, exploring from this perspective the Higgs mechanism, the method of spontaneous symmetry breaking and a unified approach to the theory of quarks and leptons.[27]
Religion
Sternberg is Jewish and a Rabbi.[8] He was among the mathematicians who debunked the mathematics foundations of Michael Drosnin's controversial claims in The Bible Code.[28][29][30]
Sternberg is described by rabbi Herschel Schachter of Yeshiva University as "a big genius in learning and math" who played a role in establishing that swordfish is kosher.[31]
Selected monographs and books
• Shlomo Sternberg (2019) A Mathematical Companion to Quantum Mechanics Dover Publications ISBN 9780486826899 ISBN 0486826899
• Shlomo Zvi Sternberg and Lynn Harold Loomis (2014) Advanced Calculus (Revised Edition) World Scientific Publishing ISBN 978-981-4583-92-3; 978-981-4583-93-0
• Victor Guillemin and Shlomo Sternberg (2013) Semi-Classical Analysis International Press of Boston ISBN 978-1571462763
• Shlomo Sternberg (2012) Lectures on Symplectic Geometry (in Mandarin) Lecture notes of the Mathematical Science Center of Tsinghua University, International Press ISBN 978-7-302-29498-6
• Shlomo Sternberg (2012) Curvature in Mathematics and Physics Dover Publications, Inc. ISBN 978-0486478555[32]
• Sternberg, Shlomo (2010). Dynamical Systems Dover Publications, Inc. ISBN 978-0486477053
• Shlomo Sternberg (2004), Lie algebras, Harvard University
• Victor Guillemin and Shlomo Sternberg (1999) Supersymmetry and Equivariant de Rham Theory 1999 Springer Verlag ISBN 978-3540647973
• Victor Guillemin, Eugene Lerman, and Shlomo Sternberg, (1996) Symplectic Fibrations and Multiplicity Diagrams Cambridge University Press
• Shlomo Sternberg (1994) Group Theory and Physics Cambridge University Press. ISBN 0-521-24870-1[33]
• Steven Shnider and Shlomo Sternberg (1993) Quantum Groups. From Coalgebras to Drinfeld Algebras: A Guided Tour (Mathematical Physics Ser.) International Press
• Victor Guillemin and Shlomo Sternberg (1990) Variations on a Theme by Kepler; reprint, 2006 Colloquium Publications ISBN 978-0821841846
• Paul Bamberg and Shlomo Sternberg (1988) A Course in Mathematics for Students of Physics Volume 1 1991 Cambridge University Press. ISBN 978-0521406499
• Paul Bamberg and Shlomo Sternberg (1988) A Course in Mathematics for Students of Physics Volume 2 1991 Cambridge University Press. ISBN 978-0521406505
• Victor Guillemin and Shlomo Sternberg (1984) Symplectic Techniques in Physics, 1990 Cambridge University Press ISBN 978-0521389907[34]
• Guillemin, Victor and Sternberg, Shlomo (1977) Geometric asymptotics Providence, RI: American Mathematical Society. ISBN 0-8218-1514-8; reprinted in 1990 as an on-line book
• Shlomo Sternberg (1969) Celestial Mechanics Part I W.A. Benjamin[35][36]
• Shlomo Sternberg (1969) Celestial Mechanics Part II W.A. Benjamin[35]
• Lynn H. Loomis, and Shlomo Sternberg (1968) Advanced Calculus Boston (World Scientific Publishing Company 2014); text available on-line
• Victor Guillemin and Shlomo Sternberg (1966) Deformation Theory of Pseudogroup Structures American Mathematical Society
• Shlomo Sternberg (1964) Lectures on differential geometry New York: Chelsea (1093) ISBN 0-8284-0316-3.[37]
• I. M. Singer and Shlomo Sternberg (1965) The infinite groups of Lie and Cartan. Part I. The transitive groups, Journal d'Analyse Mathématique 15, 1—114.[15]
See also
• Symplectic manifold
• Symplectic topology
References
1. "Shlomo Sternberg – The Mathematics Genealogy Project". mathgenealogy.org. Retrieved June 25, 2022.
2. "Harvard Mathematics Department Alumini, Faculty, Staff, Students & More".
3. "Shlomo Sternberg". John Simon Guggenheim Memorial Foundation. Retrieved June 25, 2022.
4. "Honors". Universität Mannheim. Retrieved June 25, 2022.
5. "Historical List". Universität Mannheim. Retrieved June 25, 2022.
6. "Colloquium Lectures". American Mathematical Society. Retrieved June 26, 2022.
7. "The Annual Albert Einstein Memorial Lecture".
8. "Shlomo Zvi Sternberg". American Academy of Arts & Sciences. Retrieved June 25, 2022.
9. "Shlomo Sternberg". nasonline.org. Retrieved June 25, 2022.
10. "Relación de académicos desde el año 1847 hasta el 2003" [List of academics from 1847 to 2003] (PDF). Real Academia de Ciencias Exactas, Físicas y Naturales (in Spanish). 2003.
11. "APS Member History". search.amphilsoc.org. Retrieved June 25, 2022.
12. Sternberg, Shlomo (1958). "On the Structure of Local Homeomorphisms of Euclidean n-Space, II". American Journal of Mathematics. 80 (3): 623–631. doi:10.2307/2372774. ISSN 0002-9327. JSTOR 2372774.
13. Sternberg, Shlomo (1957). "Local Contractions and a Theorem of Poincare". American Journal of Mathematics. 79 (4): 809–824. doi:10.2307/2372437. ISSN 0002-9327. JSTOR 2372437.
14. Bruhat, François (1960–1961). "Travaux de Sternberg". Séminaire Bourbaki. 6: 179–196. ISSN 0303-1179.
15. Singer, I. M.; Sternberg, Shlomo (December 1, 1965). "The infinite groups of Lie and Cartan Part I, (The transitive groups)". Journal d'Analyse Mathématique. 15 (1): 1–114. doi:10.1007/BF02787690. ISSN 1565-8538. S2CID 123124081.
16. Guillemin, V.; Quillen, D.; Sternberg, S. (1966). "The classification of the complex primitive infinite pseudogroups". Proceedings of the National Academy of Sciences. 55 (4): 687–690. doi:10.1073/pnas.55.4.687. ISSN 0027-8424. PMC 224211. PMID 16591345.
17. Kostant, Bertram; Sternberg, Shlomo (May 15, 1987). "Symplectic reduction, BRS cohomology, and infinite-dimensional Clifford algebras". Annals of Physics. 176 (1): 49–113. doi:10.1016/0003-4916(87)90178-3. ISSN 0003-4916.
18. Kazhdan, D.; Kostant, B.; Sternberg, S. (1978). "Hamiltonian group actions and dynamical systems of Calogero type". Communications on Pure and Applied Mathematics. 31 (4): 481–507. doi:10.1002/cpa.3160310405.
19. Guillemin, V.; Sternberg, S. (October 1, 1982). "Geometric quantization and multiplicities of group representations". Inventiones Mathematicae. 67 (3): 515–538. doi:10.1007/BF01398934. ISSN 1432-1297. S2CID 121632102.
20. Guillemin, V.; Sternberg, S. (October 1, 1982). "Convexity properties of the moment mapping". Inventiones Mathematicae. 67 (3): 491–513. doi:10.1007/BF01398933. ISSN 1432-1297. S2CID 189830182.
21. Atiyah, M. F. (1982). "Convexity and Commuting Hamiltonians". Bulletin of the London Mathematical Society. 14 (1): 1–15. doi:10.1112/blms/14.1.1.
22. Sternberg, Shlomo (December 31, 1977). Geometric Asymptotics. American Mathematical Society. ISBN 0821816330.
23. Sternberg, Shlomo (May 25, 1990). Symplectic Techniques in Physics. Cambridge University Press. ISBN 0521389909.
24. Sternberg, Shlomo (September 11, 2013). Semi-Classical Analysis. International Press of Boston. ISBN 978-1571462763.
25. Sternberg, Shlomo (March 11, 1999). Lectures on Differential Geometry. American Mathematical Society. ISBN 0821813854.
26. Sternberg, Shlomo (August 22, 2012). Curvature in mathematics and physics. Dover Books on Mathematics. ISBN 978-0486478555.
27. Ne'eman, Yuval; Sternberg, Shlomo (1980). "Internal supersymmetry and unification". Proceedings of the National Academy of Sciences. 77 (6): 3127–3131. doi:10.1073/pnas.77.6.3127. ISSN 0027-8424. PMC 349566. PMID 16592837.
28. Jackson, Allyn; Sternberg, Shlomo (1997). "The Bible Code" (PDF). Notices of the AMS. 44 (8): 935–939.
29. Sternberg, Shlomo (August 1997). "Snake Oil for Sale". Bible Review. 13 (4).
30. Mag, J. A. (June 1, 2008). "Torah Codes Revisited". Jewish Action. Retrieved June 26, 2022.
31. Schachter, Hershel (April 2018). "Is Swordfish Kosher?". The Jewish Press.
32. Ruane, P. N. (November 8, 2012). "Review of Curvature in Mathematics and Physics by Shlomo Sternberg". MAA Reviews, maa.org.
33. Humphreys, James E. (1995). "Review: Group theory and physics by S. Sternberg" (PDF). Bull. Amer. Math. Soc. (N.S.). 32 (4): 455–457. doi:10.1090/s0273-0979-1995-00612-9.
34. Duistermaat, J. J. (1988). "Review: Symplectic techniques in physics by Victor Guillemin and Shlomo Sternberg" (PDF). Bull. Amer. Math. Soc. (N.S.). 18 (1): 97–100. doi:10.1090/s0273-0979-1988-15620-0.
35. Arnold, V. (1972). "Review of Celestial Mechanics I, II by S. Sternberg" (PDF). Bull. Amer. Math. Soc. 78 (6): 962–963. doi:10.1090/s0002-9904-1972-13067-2.
36. Pollard, Harry (1976). "Review of Celestial Mechanics, Part I by Shlomo Sternberg". SIAM Review. 18 (1): 132. doi:10.1137/1018021.
37. Hermann, R. (1965). "Review: Lectures on differential geometry by S. Sternberg" (PDF). Bull. Amer. Math. Soc. 71 (1): 332–337. doi:10.1090/S0002-9904-1965-11286-1.
External links
• Sternberg's home page at Harvard has links to a half dozen on-line books
• Shlomo Sternberg at the Mathematics Genealogy Project
|
Wikipedia
|
Shmuel Friedland
Shmuel Friedland (born 1944 in Tashkent, Uzbek Soviet Socialist Republic)[1] is an Israeli-American mathematician.
Shmuel Friedland
Born: 1944, Tashkent
Nationality: Israeli-American
Alma mater: Technion – Israel Institute of Technology
Occupation: Mathematician
Friedland studied at the Technion – Israel Institute of Technology, graduating in 1967 with a bachelor's degree and in 1971 with a doctorate of science under the supervision of Binjamin Schwarz.[2] As a postdoc, Friedland was at the Weizmann Institute in 1972/73, at Stanford University in 1973/74, and at the Institute for Advanced Study in 1974/75. He then taught at the Hebrew University of Jerusalem, where he became a full professor in 1982. In 1985 he became a professor at the University of Illinois at Chicago.[3]
Besides linear algebra (matrix theory), Friedland does research on a wide variety of mathematics, including complex dynamics and applied mathematics. With Elizabeth Gross, he proved a set-theoretic version of the salmon conjecture posed by Elizabeth S. Allman.[4]
With Miroslav Fiedler and Israel Gohberg, Friedland shared the first Hans Schneider Prize, awarded by the International Linear Algebra Society in 1993. He was elected a Fellow of the American Mathematical Society (Class of 2019). He was also selected as a 2021 SIAM Fellow, "for deep and varied contributions to mathematics, especially linear algebra, matrix theory, and matrix computations".[5]
Selected publications
• "Nonoscillation and integral inequalities", Bull. Amer. Math. Soc., vol. 80, 1974, pp. 715–717. doi:10.1090/S0002-9904-1974-13565-2
• with Samuel Karlin: "Some inequalities for the spectral radius of nonnegative matrices and applications", Duke Mathematical Journal, vol. 42, 1975, pp. 459–490. (subscription required)
• Nonoscillation, disconjugacy and integral inequalities, Memoirs Amer. Math. Soc. 176, 1976
• with Walter K. Hayman: "Eigenvalue inequalities for the Dirichlet problem on spheres and the growth of subharmonic functions", Commentarii Mathematici Helvetici 51, no. 1 (1976): 133–161. doi:10.1007/BF02568147
• "On an inverse problem for nonnegative and eventually nonnegative matrices", Israel Journal of Mathematics, vol. 29, no. 1, 1978, 43–60. doi:10.1007/BF02760401
• "A lower bound for the permanent of doubly stochastic matrices", Annals of Mathematics, vol. 110, 1979, pp. 167–176. JSTOR 1971250
• with Nimrod Moiseyev: "Association of resonance states with the incomplete spectrum of finite complex-scaled Hamiltonian matrices", Physical Review A, vol. 22, no. 2, 1980, 618–624. doi:10.1103/PhysRevA.22.618
• "Convex spectral functions", Linear and Multilinear Algebra, vol. 9, no. 4, 1981, 299–316. doi:10.1080/03081088108817381
• with Carl R. de Boor and Allan Pincus: "Inverses of infinite sign regular matrices", Trans. Amer. Math. Soc., vol. 274, 1982, pp. 59–68. doi:10.1090/S0002-9947-1982-0670918-7
• "Simultaneous similarity of matrices", Bull. Amer. Math. Soc., vol. 8, 1983, pp. 93–95. doi:10.1090/S0273-0979-1983-15094-2
• "Simultaneous similarity of matrices", Advances in Mathematics, vol. 50, 1983, pp. 189–265. doi:10.1016/0001-8708(83)90044-0
• with Joel W. Robbin and John H. Sylvester: "On the crossing rule", Communications in Pure and Applied Mathematics, vol. 37, 1984, pp. 19–37. doi:10.1002/cpa.3160370104
• with Noga Alon and Gil Kalai: "Regular subgraphs of almost regular graphs", Journal of Combinatorial Theory, Series B, vol. 37, no. 1, 1984, 79–91. doi:10.1016/0095-8956(84)90047-9
• with John Willard Milnor: "Dynamical properties of plane polynomial automorphisms", Journal of Ergodic Theory & Dynamical Systems, vol. 9, 1989, pp. 67–99. doi:10.1017/S014338570000482X
• "Entropy of polynomial and rational maps", Annals of Mathematics, vol. 133, 1991, pp. 359–368. JSTOR 2944341
• with Sa'ar Hersonsky: "Jorgensen's inequality for discrete groups in normed algebras", Duke Mathematical Journal, vol. 69, 1993, pp. 593–614. (subscription required)
• with Vlad Gheorghiu and Gilad Gour: "Universal uncertainty relations", Physical Review Letters, vol. 111, 2013, p. 230401 doi:10.1103/PhysRevLett.111.230401
• with Stéphane Gaubert and Lixing Han: "Perron–Frobenius theorem for nonnegative multilinear forms and extensions", Linear Algebra and its Applications, vol. 438, no. 2, 2013, pp. 738–749. doi:10.1016/j.laa.2011.02.042
• with Giorgio Ottaviani: "The number of singular vector tuples and uniqueness of best rank one approximation of tensors", Foundations of Computational Mathematics, vol. 14, 2014, pp. 1209–1242. doi:10.1007/s10208-014-9194-z
• Matrices: Algebra, Analysis and Applications, World Scientific 2015
• with Lek-Heng Lim: "The computational complexity of duality", SIAM Journal on Optimization, vol. 26, no. 4, 2016, 2378–2393. doi:10.1137/16M105887X
• with Jinjie Zhang and Lek-Heng Lim: "Grothendieck constant is norm of Strassen matrix multiplication tensor", arXiv preprint arXiv:1711.04427, 2017. (See Grothendieck inequality.)
• with Mohsen Aliabadi, Linear Algebra and Matrices, SIAM 2018
References
1. biographical information from membership book, Institute of Advanced Study, 1980
2. Shmuel Friedland at the Mathematics Genealogy Project
3. "Shmuel Friedland". Mathematics Department, University of Illinois at Chicago.
4. Friedland, Shmuel; Gross, Elizabeth (2012). "A proof of the set-theoretic version of the Salmon conjecture". Journal of Algebra. 356: 374–379. arXiv:1104.1776. doi:10.1016/j.jalgebra.2012.01.017. S2CID 18426982. arXiv preprint
5. "SIAM Announces Class of 2021 Fellows". March 31, 2021. Retrieved 2021-04-03.
External links
• Interview of Shmuel Friedland by Lek-Heng Lim for IMAGE, The Bulletin of the International Linear Algebra Society, Fall 2017
|
Wikipedia
|
Shmuel Onn
Shmuel Onn (Hebrew: שמואל און; born 1960) is a mathematician, Professor of Operations Research and Dresner Chair at the Technion - Israel Institute of Technology.[1] He is known for his contributions to integer programming and nonlinear combinatorial optimization.[2]
Shmuel Onn
Born: 1960, Israel
Nationality: Israeli
Alma mater: Technion; Cornell University
Spouse: Ruth
Children: Amos and Naomi
Scientific career
Fields: Operations research, Mathematics
Institutions: Technion
Thesis: Discrete Geometry, Group Representations and Combinatorial Optimization: an Interplay (1992)
Doctoral advisor: Louis J. Billera, Bernd Sturmfels, Leslie E. Trotter, Jr.
Education
Shmuel Onn received his elementary education at Kadoorie.[3] He received his B.Sc. (cum laude) in Electrical Engineering from Technion in 1980, and, following his obligatory service in the Navy, received his M.Sc. from Technion in 1987.[3] Onn obtained his Ph.D. in operations research from Cornell University, with minors in applied mathematics and computer science, in 1992. His thesis, "Discrete Geometry, Group Representations and Combinatorial Optimization: an Interplay", was advised by Louis J. Billera, Bernd Sturmfels, and Leslie E. Trotter Jr.[4]
During 1992–1993 he was a postdoctoral fellow at DIMACS,[5] and during 1993-1994 he was an Alexander von Humboldt postdoctoral fellow at the University of Passau, Germany.[3]
Career
In 1994 Onn joined the Faculty of Data and Decision Sciences of Technion, where he is currently Professor and Dresner Chair. He was also a visiting professor and Nachdiplom Lecturer at the Institute for Mathematical Research, ETH Zürich, in 2009,[6] and a visiting professor at the Mathematics Department of the University of California at Davis (2001–2002).[7] Onn has also been a long-term visitor at various mathematical research institutes, including Mittag-Leffler in Stockholm, MSRI in Berkeley,[8] and Oberwolfach in Germany.[9] He also served as Associate Editor for Mathematics of Operations Research in 2010–2016[10] and Associate Editor for Discrete Optimization in 2004–2010.[3]
Onn advised several students and postdoctoral researchers who proceeded to pursue academic careers, including Antoine Deza, Sharon Aviran, Tal Raviv, Nir Halman, and Martin Koutecký.[11]
Research
Shmuel Onn is known for his contributions to integer programming and nonlinear combinatorial optimization. In particular, he developed an algorithmic theory of linear and nonlinear integer programming in variable dimension using Graver bases.[2] This work introduced the theory of block-structured and n-fold integer programming,[12][13] and the broader theory of sparse and bounded tree-depth integer programming, shown to be fixed-parameter tractable.[14][15][16] These theories were followed up by other authors,[17][18][19][20][21][22] and have applications in a variety of areas.[23] [24][25][26][27][28]
Some other contributions of Onn include a framework that uses edge-directions for solving convex multi-criteria combinatorial optimization problems and its applications,[29][30][31] a universality theorem showing that every integer program is one over slim three-dimensional tables,[32][33] the settling of the complexity of hypergraph degree sequences,[34] and the introduction of colorful linear programming.[35]
Honors and awards
• 2010, INFORMS Computing Society (ICS) Prize.[36]
• 2009, Nachdiplom Lecturer, Institute for Mathematical Research, ETH Zürich.[6]
Books
• Nonlinear discrete optimization: An algorithmic theory. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2010.[2]
Personal life
Shmuel is married to Ruth. They have two children, Amos and Naomi, and live in Haifa.
External links
• Shmuel Onn (personal page), Technion
• Shmuel Onn, Technion
• Video Lecture Series on Nonlinear Discrete Optimization at MSRI, Berkeley
References
1. Shmuel Onn, Technion
2. Shmuel Onn. Nonlinear discrete optimization: An algorithmic theory, European Mathematical Society, 2010
3. Abridged CV, Technion
4. Shmuel Onn, Mathematics Genealogy Project
5. Past DIMACS Postdocs, Rutgers University
6. Nachdiplom lectures - Past lectures, ETH
7. Mathematics Colloquia and Seminars, University of California at Davis
8. Personal Profile of Dr. Shmuel Onn, Mathematical Sciences Research Institute
9. Research in Pairs - 2007 (PDF), Mathematical Research Institute of Oberwolfach
10. "Editorial Board", Mathematics of Operations Research, INFORMS, 40 (4): c2–c3, 2015, doi:10.1287/moor.2015.eb.v404
11. Students and Postdoctorants, Technion
12. Raymond Hemmecke; Shmuel Onn; Lyubov Romanchuk (2013). "N-fold integer programming in cubic time". Mathematical Programming. 137 (1–2): 325–341. arXiv:1101.3267. doi:10.1007/s10107-011-0490-y. S2CID 964450.
13. Jesus De Loera; Raymond Hemmecke; Shmuel Onn; Robert Weismantel (2008). "N-fold integer programming". Discrete Optimization. In Memory of George B. Dantzig. 5 (2): 231–241. doi:10.1016/j.disopt.2006.06.006. S2CID 997926.
14. Martin Koutecký; Shmuel Onn (2021). "Sparse Integer Programming is FPT". Bulletin of the European Association for Theoretical Computer Science. 2 (134): 69–71.
15. Friedrich Eisenbrand; Christoph Hunkenschroder; Kim-Manuel Klein; Martin Koutecký; Asaf Levin; Shmuel Onn (2019). "An algorithmic theory of integer programming". arXiv:1904.01361 [math.OC].
16. Martin Koutecký; Asaf Levin; Shmuel Onn (2018). "A parameterized strongly polynomial algorithm for block structured integer programs" (PDF). ICALP. Leibniz International Proceedings in Informatics (LIPIcs). 107: 85:1–85:14. arXiv:1802.05859. doi:10.4230/LIPIcs.ICALP.2018.85. ISBN 9783959770767. S2CID 3336201.
17. Jana Cslovjecsek; Friedrich Eisenbrand; Christoph Hunkenschroder; Kim-Manuel Klein; Lars Rohwedder; Robert Weismantel (2021). "Block-Structured Integer and Linear Programming in Strongly Polynomial and Near Linear Time". SODA: 1666–1681. arXiv:2002.07745.
18. Cornelius Brand; Martin Koutecký; Sebastian Ordyniak (2021). "Parameterized Algorithms for MILPs with Small Treedepth". AAAI. 35 (14): 12249–12257. arXiv:1912.03501. doi:10.1609/aaai.v35i14.17454. S2CID 208909901.
19. Timothy F. N. Chan; Jacob W. Cooper; Martin Koutecký; Daniel Král'; Kristýna Pekárková (2020). "Matrices of Optimal Tree-Depth and Row-Invariant Parameterized Algorithm for Integer Programming" (PDF). ICALP : 26:1–26:19. arXiv:1907.06688.
20. Klaus Jansen; Alexandra Lassota; Lars Rohwedder (2019). "Near-Linear Time Algorithm for n-fold ILPs via Color Coding". ICALP. Leibniz International Proceedings in Informatics (LIPIcs). 132: 75:1–75:13. doi:10.4230/LIPIcs.ICALP.2019.75. ISBN 9783959771092. S2CID 53300379.
21. Eduard Eiben; Robert Ganian; Dusan Knop; Kim-Manuel Klein; Sebastian Ordyniak; Michal Pilipczuk; Marcin Wrochna (2019). "Integer Programming and Incidence Treedepth". Integer Programming and Combinatorial Optimization (PDF). Lecture Notes in Computer Science. Vol. 11480. pp. 194–204. arXiv:2012.00079. doi:10.1007/978-3-030-17953-3_15. ISBN 978-3-030-17953-3. S2CID 142503705.
22. Friedrich Eisenbrand; Christoph Hunkenschröder; Kim-Manuel Klein (2018). "Faster Algorithms for Integer Programs with Block Structure" (PDF). ICALP: 49:1–49:13. arXiv:1802.06289.
23. Dusan Knop; Martin Koutecký; Matthias Mnich (2020). "Voting and Bribing in Single-Exponential Time". ACM Transactions on Economics and Computation. 8 (3): 12:1–12:28. doi:10.1145/3396855. S2CID 218529858.
24. Robert Bredereck; Piotr Faliszewski; Rolf Niedermeier; Piotr Skowron; Nimrod Talmon (2020). "Mixed integer programming with convex/concave constraints: Fixed-parameter tractability and applications to multicovering and voting". Theoretical Computer Science. 814: 86–105. arXiv:1709.02850. doi:10.1016/j.tcs.2020.01.017. S2CID 3227033.
25. Dusan Knop; Martin Koutecký; Matthias Mnich (2020). "Combinatorial n-fold integer programming and applications". Mathematical Programming. 184 (1): 1–34. doi:10.1007/s10107-019-01402-2. S2CID 213316783.
26. Klaus Jansen; Kim-Manuel Klein; Marten Maack; Malin Rau (2019). "Empowering the Configuration-IP - New PTAS Results for Scheduling with Setups Times". ITCS - Innovations in Theoretical Computer Science. Leibniz International Proceedings in Informatics (LIPIcs). 124: 44:1–44:19. doi:10.4230/LIPIcs.ITCS.2019.44. ISBN 9783959770958. S2CID 24006600.
27. Dusan Knop; Martin Koutecký (2018). "Scheduling meets n-fold integer programming". Journal of Scheduling. 21 (5): 493–503. arXiv:1603.02611. doi:10.1007/s10951-017-0550-0. S2CID 9627563.
28. Lin Chen; Dániel Marx (2018). "Covering a tree with rooted subtrees - parameterized and approximation algorithms" (PDF). SODA: 2801–2820.
29. Shmuel Onn; Uriel Rothblum (2004). "Convex combinatorial optimization" (PDF). Discrete & Computational Geometry. 32 (4): 549–566. doi:10.1007/s00454-004-1138-y. S2CID 803661.
30. Eric Babson; Shmuel Onn; Rekha Thomas (2003). "The Hilbert zonotope and a polynomial time algorithm for universal Grobner bases". Advances in Applied Mathematics. 30 (3): 529–544. doi:10.1016/S0196-8858(02)00509-2. S2CID 7178467.
31. Frank Hwang; Shmuel Onn; Uriel Rothblum (1999). "A polynomial time algorithm for shaped partition problems". SIAM Journal on Optimization. 10: 70–81. doi:10.1137/S1052623497344002.
32. Jesus De Loera; Shmuel Onn (2006). "All linear and integer programs are slim 3-way transportation programs" (PDF). SIAM Journal on Optimization. 17 (3): 806–821. doi:10.1137/040610623.
33. Jesus De Loera; Shmuel Onn (2004). "The complexity of three-way statistical tables" (PDF). SIAM Journal on Computing. 33 (4): 819–836. arXiv:math/0207200. doi:10.1137/S0097539702403803. S2CID 14941545.
34. Antoine Deza; Asaf Levin; Syed M. Meesum; Shmuel Onn (2018). "Optimization over degree sequences". SIAM Journal on Discrete Mathematics. 32 (3): 2067–2079. arXiv:1706.03951. doi:10.1137/17M1134482. S2CID 52039639.
35. Imre Bárány; Shmuel Onn (1997). "Colourful linear programming and its relatives" (PDF). Mathematics of Operations Research. 22 (3): 550–567. doi:10.1287/moor.22.3.550.
36. Shmuel Onn - 2010 INFORMS Computing Society Prize, INFORMS
|
Wikipedia
|
Favard's theorem
In mathematics, Favard's theorem, also called the Shohat–Favard theorem, states that a sequence of polynomials satisfying a suitable 3-term recurrence relation is a sequence of orthogonal polynomials. The theorem was introduced in the theory of orthogonal polynomials by Favard (1935) and Shohat (1938), though essentially the same theorem was used by Stieltjes in the theory of continued fractions many years before Favard's paper, and was rediscovered several times by other authors before Favard's work.
Statement
Suppose that y0 = 1, y1, ... is a sequence of polynomials where yn has degree n. If this is a sequence of orthogonal polynomials for some positive weight function then it satisfies a 3-term recurrence relation. Favard's theorem is roughly a converse of this, and states that if these polynomials satisfy a 3-term recurrence relation of the form
$y_{n+1}=(x-c_{n})y_{n}-d_{n}y_{n-1}$
for some numbers cn and dn, then the polynomials yn form an orthogonal sequence for some linear functional Λ with Λ(1)=1; in other words Λ(ymyn) = 0 if m ≠ n.
The linear functional Λ is unique, and is given by Λ(1) = 1, Λ(yn) = 0 if n > 0.
The functional Λ satisfies $\Lambda (y_{n}^{2})=d_{n}\Lambda (y_{n-1}^{2})$, which implies that Λ is positive definite if (and only if) the numbers cn are real and the numbers dn are positive.
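The construction of Λ from the recurrence can be checked symbolically. The following sketch (not taken from the references below; it assumes SymPy is available, and the recurrence coefficients are arbitrary illustrative choices) builds y0, ..., yN from a 3-term recurrence, determines the moments of Λ from the conditions Λ(1) = 1 and Λ(yn) = 0, and verifies the orthogonality Λ(ymyn) = 0:

```python
import sympy as sp

x = sp.symbols('x')
K = 3                    # verify orthogonality among y_0, ..., y_K
N = 2 * K                # products y_m*y_n have degree up to 2K, so build up to y_N

# Arbitrary (hypothetical) recurrence coefficients: c_n real, d_n nonzero.
c = [sp.Rational(n, 3) for n in range(N)]
d = [sp.Rational(1, n + 2) for n in range(N)]

# y_{n+1} = (x - c_n) y_n - d_n y_{n-1}, with y_0 = 1 and y_1 = x - c_0.
y = [sp.Integer(1), sp.expand(x - c[0])]
for n in range(1, N):
    y.append(sp.expand((x - c[n]) * y[n] - d[n] * y[n - 1]))

# The linear functional is determined by its moments mu_k = Lambda(x^k).
mu = sp.symbols(f'mu0:{N + 1}')

def Lam(p):
    coeffs = sp.Poly(sp.expand(p), x).all_coeffs()[::-1]   # lowest degree first
    return sum(ci * mu[i] for i, ci in enumerate(coeffs))

# Defining conditions: Lambda(1) = 1 and Lambda(y_n) = 0 for n >= 1.
sol = sp.solve([Lam(y[0]) - 1] + [Lam(y[n]) for n in range(1, N + 1)], mu)

# Favard's theorem: the y_n are orthogonal with respect to Lambda.
for m in range(K + 1):
    for n in range(m + 1, K + 1):
        assert Lam(y[m] * y[n]).subs(sol) == 0
print("Lambda(y_m y_n) = 0 verified for all m != n up to degree", K)
```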
See also
• Jacobi operator
• James Alexander Shohat
References
• Chihara, Theodore Seio (1978), An introduction to orthogonal polynomials, Mathematics and its Applications, vol. 13, New York: Gordon and Breach Science Publishers, ISBN 978-0-677-04150-6, MR 0481884 Reprinted by Dover 2011, ISBN 978-0-486-47929-3
• Favard, J. (1935), "Sur les polynomes de Tchebicheff.", C. R. Acad. Sci. Paris (in French), 200: 2052–2053, JFM 61.0288.01
• Rahman, Q. I.; Schmeisser, G. (2002), Analytic theory of polynomials, London Mathematical Society Monographs. New Series, vol. 26, Oxford: Oxford University Press, pp. 15–16, ISBN 0-19-853493-0, Zbl 1072.30006
• Subbotin, Yu. N. (2001) [1994], "Favard theorem", Encyclopedia of Mathematics, EMS Press
• Shohat, J. (1938), "Sur les polynômes orthogonaux généralises.", C. R. Acad. Sci. Paris (in French), 207: 556–558, Zbl 0019.40503
|
Wikipedia
|
Shooting method
In numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to an initial value problem. It involves finding solutions to the initial value problem for different initial conditions until one finds the solution that also satisfies the boundary conditions of the boundary value problem. In layman's terms, one "shoots" out trajectories in different directions from one boundary until one finds the trajectory that "hits" the other boundary condition.
Mathematical description
Suppose one wants to solve the boundary-value problem
$y''(t)=f(t,y(t),y'(t)),\quad y(t_{0})=y_{0},\quad y(t_{1})=y_{1}.$
Let $y(t;a)$ solve the initial-value problem
$y''(t)=f(t,y(t),y'(t)),\quad y(t_{0})=y_{0},\quad y'(t_{0})=a.$
If $y(t_{1};a)=y_{1}$, then $y(t;a)$ is also a solution of the boundary-value problem. The shooting method is the process of solving the initial value problem for many different values of $a$ until one finds the solution $y(t;a)$ that satisfies the desired boundary conditions. Typically, one does so numerically. The solution(s) correspond to root(s) of
$F(a)=y(t_{1};a)-y_{1}.$
To systematically vary the shooting parameter $a$ and find the root, one can employ standard root-finding algorithms like the bisection method or Newton's method.
Roots of $F$ and solutions to the boundary value problem are equivalent. If $a$ is a root of $F$, then $y(t;a)$ is a solution of the boundary value problem. Conversely, if the boundary value problem has a solution $y(t)$, it is also the unique solution $y(t;a)$ of the initial value problem where $a=y'(t_{0})$, so $a$ is a root of $F$.
Etymology and intuition
The term "shooting method" has its origin in artillery. An analogy for the shooting method is to
• place a cannon at the position $y(t_{0})=y_{0}$, then
• vary the angle $a=y'(t_{0})$ of the cannon, then
• fire the cannon until it hits the boundary value $y(t_{1})=y_{1}$.
Between each shot, the direction of the cannon is adjusted based on the previous shot, so every shot hits closer than the previous one. The trajectory that "hits" the desired boundary value is the solution to the boundary value problem — hence the name "shooting method".
Linear shooting method
The boundary value problem is linear if f has the form
$f(t,y(t),y'(t))=p(t)y'(t)+q(t)y(t)+r(t).$
In this case, the solution to the boundary value problem is usually given by:
$y(t)=y_{(1)}(t)+{\frac {y_{1}-y_{(1)}(t_{1})}{y_{(2)}(t_{1})}}y_{(2)}(t)$
where $y_{(1)}(t)$ is the solution to the initial value problem:
$y_{(1)}''(t)=p(t)y_{(1)}'(t)+q(t)y_{(1)}(t)+r(t),\quad y_{(1)}(t_{0})=y_{0},\quad y_{(1)}'(t_{0})=0,$
and $y_{(2)}(t)$ is the solution to the initial value problem:
$y_{(2)}''(t)=p(t)y_{(2)}'(t)+q(t)y_{(2)}(t),\quad y_{(2)}(t_{0})=0,\quad y_{(2)}'(t_{0})=1.$
See the proof for the precise condition under which this result holds.[1]
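As an illustration of the formula above (not part of the cited sources), the following sketch solves the linear problem y″ = −y, y(0) = 0, y(1) = 1 by combining the two auxiliary initial value problems; it assumes SciPy and NumPy are available, and the coefficient functions p, q, r are hypothetical choices made for the example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example linear BVP: y'' = p(t) y' + q(t) y + r(t) with p = 0, q = -1, r = 0,
# i.e. y'' = -y, subject to y(0) = 0 and y(1) = 1 (exact solution sin(t)/sin(1)).
p = lambda t: 0.0
q = lambda t: -1.0
r = lambda t: 0.0
t0, t1, y0, y1 = 0.0, 1.0, 0.0, 1.0
ts = np.linspace(t0, t1, 101)

def ivp(init, include_r):
    f = lambda t, y: [y[1], p(t) * y[1] + q(t) * y[0] + (r(t) if include_r else 0.0)]
    return solve_ivp(f, (t0, t1), init, t_eval=ts, rtol=1e-9).y[0]

u = ivp([y0, 0.0], True)            # y_(1): full equation, y(t0) = y0, y'(t0) = 0
v = ivp([0.0, 1.0], False)          # y_(2): homogeneous equation, y(t0) = 0, y'(t0) = 1
y = u + (y1 - u[-1]) / v[-1] * v    # the combination given above
print(abs(y[-1] - y1))              # residual at t1, should be ~0
```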
Examples
Standard boundary value problem
A boundary value problem is given as follows by Stoer and Bulirsch[2] (Section 7.3.1).
$w''(t)={\frac {3}{2}}w^{2}(t),\quad w(0)=4,\quad w(1)=1$
The initial value problem
$w''(t)={\frac {3}{2}}w^{2}(t),\quad w(0)=4,\quad w'(0)=s$
was solved for s = −1, −2, −3, ..., −100, and F(s) = w(1;s) − 1 plotted in Figure 2. Inspecting the plot of F, we see that there are roots near −8 and −36. Some trajectories of w(t;s) are shown in Figure 1.
Stoer and Bulirsch[2] state that there are two solutions, which can be found by algebraic methods.
These correspond to the initial conditions w′(0) = −8 and w′(0) = −35.9 (approximately).
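A minimal numerical sketch of this example (not from the cited text; it assumes SciPy is available, and the bracketing intervals are read off the plot of F described above) is:

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# w'' = (3/2) w^2 with boundary conditions w(0) = 4 and w(1) = 1.
def rhs(t, y):
    w, dw = y
    return [dw, 1.5 * w * w]

def F(s):
    """F(s) = w(1; s) - 1, where w(.; s) solves the IVP with w(0) = 4, w'(0) = s."""
    sol = solve_ivp(rhs, (0.0, 1.0), [4.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# Brackets chosen around the roots near -8 and -36 seen in the plot of F.
s1 = brentq(F, -10.0, -6.0)
s2 = brentq(F, -40.0, -30.0)
print(s1, s2)   # approximately -8 and -35.9
```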
Eigenvalue problem
The shooting method can also be used to solve eigenvalue problems. Consider the time-independent Schrödinger equation for the quantum harmonic oscillator
$-{\frac {1}{2}}\psi _{n}''(x)+{\frac {1}{2}}x^{2}\psi _{n}(x)=E_{n}\psi _{n}(x).$
In quantum mechanics, one seeks normalizable wavefunctions $\psi _{n}(x)$ and their corresponding energies subject to the boundary conditions
$\psi _{n}(x\rightarrow +\infty )=\psi _{n}(x\rightarrow -\infty )=0.$
The problem can be solved analytically to find the energies $E_{n}=n+1/2$ for $n=0,1,2,\dots $, but also serves as an excellent illustration of the shooting method. To apply it, first note some general properties of the Schrödinger equation:
• If $\psi _{n}(x)$ is an eigenfunction, so is $C\psi _{n}(x)$ for any nonzero constant $C$.
• The $n$-th excited state $\psi _{n}(x)$ has $n$ roots where $\psi _{n}(x)=0$.
• For even $n$, the $n$-th excited state $\psi _{n}(x)=\psi _{n}(-x)$ is symmetric and nonzero at the origin.
• For odd $n$, the $n$-th excited state $\psi _{n}(x)=-\psi _{n}(-x)$ is antisymmetric and thus zero at the origin.
To find the $n$-th excited state $\psi _{n}(x)$ and its energy $E_{n}$, the shooting method is then to:
1. Guess some energy $E_{n}$.
2. Integrate the Schrödinger equation. For example, use the central finite difference
$-{\frac {1}{2}}{\frac {\psi _{n}^{i+1}-2\psi _{n}^{i}+\psi _{n}^{i-1}}{{\Delta x}^{2}}}+{\frac {1}{2}}(x^{i})^{2}\psi _{n}^{i}=E_{n}\psi _{n}^{i}.$
• If $n$ is even, set $\psi _{n}^{0}$ to some arbitrary number (say $\psi _{n}^{0}=1$ — the wavefunction can be normalized after integration anyway) and use the symmetric property to find all remaining $\psi _{n}^{i}$.
• If $n$ is odd, set $\psi _{n}^{0}=0$ and $\psi _{n}^{1}$ to some arbitrary number (say $\psi _{n}^{1}=1$ — the wavefunction can be normalized after integration anyway) and find all remaining $\psi _{n}^{i}$.
3. Count the roots of $\psi _{n}$ and refine the guess for the energy $E_{n}$.
• If there are $n$ or fewer roots, the guessed energy is too low, so increase it and repeat the process.
• If there are more than $n$ roots, the guessed energy is too high, so decrease it and repeat the process.
The energy-guessing can be done with the bisection method, and the process can be terminated when the energy difference is sufficiently small. Then one can take any energy in the interval to be the correct energy.
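A compact sketch of this procedure for the ground state (n = 0) is shown below; it is illustrative only (not from the cited sources), assumes SciPy is available, and replaces the infinite boundary by a finite cutoff X_MAX. For the ground state, counting nodes reduces to checking the sign of the trial wavefunction at the cutoff, so a standard root finder can stand in for the bisection described above:

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

X_MAX = 6.0   # finite stand-in for the boundary condition psi(x -> infinity) = 0

def psi_at_cutoff(E):
    """Integrate psi'' = (x^2 - 2E) psi from x = 0 with the even-state
    initial conditions psi(0) = 1, psi'(0) = 0, and return psi(X_MAX)."""
    sol = solve_ivp(lambda x, y: [y[1], (x**2 - 2.0 * E) * y[0]],
                    (0.0, X_MAX), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Below the eigenvalue the trial solution diverges to +infinity at the cutoff;
# above it the solution acquires a node and diverges to -infinity, so the
# sign change of psi(X_MAX) brackets the ground-state energy E_0 = 1/2.
E0 = brentq(psi_at_cutoff, 0.3, 0.7)
print(E0)   # ~0.5
```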
See also
• Direct multiple shooting method
• Computation of radiowave attenuation in the atmosphere
Notes
1. Mathews, John H.; Fink, Kurtis K. (2004). "9.8 Boundary Value Problems". Numerical methods using MATLAB (PDF) (4th ed.). Upper Saddle River, N.J.: Pearson. ISBN 0-13-065248-2. Archived from the original (PDF) on 9 December 2006.
2. Stoer, J. and Bulirsch, R. Introduction to Numerical Analysis. New York: Springer-Verlag, 1980.
References
• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 18.1. The Shooting Method". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
External links
• Brief Description of ODEPACK (at Netlib; contains LSODE)
• Shooting method of solving boundary value problems – Notes, PPT, Maple, Mathcad, Matlab, Mathematica at Holistic Numerical Methods Institute
|
Wikipedia
|
Shor's algorithm
Shor's algorithm is a quantum algorithm for finding the prime factors of an integer. It was developed in 1994 by the American mathematician Peter Shor.[1][2] It is one of the few known quantum algorithms with compelling potential applications and strong evidence of superpolynomial speedup compared to the best known classical (that is, non-quantum) algorithms.[3] On the other hand, factoring numbers of practical significance requires far more qubits than will be available in the near future.[4] Another concern is that noise in quantum circuits may undermine results,[5] requiring additional qubits for quantum error correction.
Shor proposed multiple similar algorithms solving the factoring problem, the discrete logarithm problem, and the period finding problem. "Shor's algorithm" usually refers to his algorithm solving factoring, but may also refer to each of the three. The discrete logarithm algorithm and the factoring algorithm are instances of the period finding algorithm, and all three are instances of the hidden subgroup problem.
Shor's algorithm makes it possible to factor an integer $N$ on a quantum computer in polylogarithmic time, meaning that the running time of the algorithm is polynomial in $\log N$.[6] Specifically, it takes quantum gates of order $O\!\left((\log N)^{2}(\log \log N)(\log \log \log N)\right)$ using fast multiplication,[7] or even $O\!\left((\log N)^{2}(\log \log N)\right)$ utilizing the asymptotically fastest multiplication algorithm currently known due to Harvey and van der Hoeven,[8] thus demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is consequently in the complexity class BQP. This is significantly faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time: $O\!\left(e^{1.9(\log N)^{1/3}(\log \log N)^{2/3}}\right)$.[9]
Feasibility and impact
If a quantum computer with a sufficient number of qubits could operate without succumbing to quantum noise and other quantum-decoherence phenomena, then Shor's algorithm could be used to break public-key cryptography schemes, such as
• The RSA scheme
• The Finite Field Diffie-Hellman key exchange
• The Elliptic Curve Diffie-Hellman key exchange[10]
RSA is based on the assumption that factoring large integers is computationally intractable. As far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor integers in polynomial time. However, Shor's algorithm shows that factoring integers is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers, and for the study of new quantum-computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.
Physical implementation
Given the high error rates of contemporary quantum computers and too few qubits to use quantum error correction, laboratory demonstrations obtain correct results only in a fraction of attempts.
In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored $15$ into $3\times 5$, using an NMR implementation of a quantum computer with $7$ qubits.[11] After IBM's implementation, two independent groups implemented Shor's algorithm using photonic qubits, emphasizing that multi-qubit entanglement was observed when running Shor's algorithm circuits.[12][13] In 2012, the factorization of $15$ was performed with solid-state qubits.[14] Later, in 2012, the factorization of $21$ was achieved.[15] In 2019, an attempt was made to factor the number $35$ using Shor's algorithm on an IBM Q System One, but the algorithm failed because of accumulating errors.[16] Though larger numbers have been factored by quantum computers using other algorithms,[17] these algorithms are similar to classical brute-force checking of factors, so unlike Shor's algorithm, they are not expected to ever perform better than classical factoring algorithms.[18]
Theoretical analyses of Shor's algorithm assume a quantum computer free of noise and errors. However, near-term practical implementations will have to deal with such undesired phenomena (when more qubits are available, quantum error correction can help). In 2023, Jin-Yi Cai studied the impact of noise and concluded that "Shor's Algorithm Does Not Factor Large Integers in the Presence of Noise."[5]
Algorithm
The problem that we are trying to solve is: given an odd composite number $N$, find its integer factors.
To achieve this, Shor's algorithm consists of two parts:
1. A classical reduction of the factoring problem to the problem of order-finding. This reduction is similar to that used for other factoring algorithms, such as the quadratic sieve.
2. A quantum algorithm to solve the order-finding problem.
Classical reduction
A complete factoring algorithm is possible using additional classical methods if we are able to factor $N$ into just two integers $p$ and $q$; the algorithm therefore only needs to achieve that.
A basic observation is that, using Euclid's algorithm, we can always compute the GCD between two integers efficiently. In particular, this means we can check efficiently whether $N$ is even, in which case 2 is trivially a factor. Let us thus assume that $N$ is odd for the remainder of this discussion. Afterwards, we can use efficient classical algorithms to check if $N$ is a prime power;[19] again, the rest of the algorithm requires that $N$ is not a prime power, and if it is, $N$ has been completely factored.
If those easy cases do not produce a nontrivial factor of $N$, the algorithm proceeds to handle the remaining case. We pick a random integer $2\leq a<N$. A possible nontrivial divisor of $N$ can be found by computing $\gcd(a,N)$, which can be done classically and efficiently using the Euclidean algorithm. If this produces a nontrivial factor (meaning $\gcd(a,N)\neq 1$), the algorithm is finished, and the other nontrivial factor is $ {\frac {N}{\gcd(a,N)}}$. If a nontrivial factor was not identified, then that means that $N$ and the choice of $a$ are coprime. Here, the algorithm runs the quantum subroutine, which will return the order $r$ of $a$, meaning
$a^{r}\equiv 1{\bmod {N}}.$
The quantum subroutine requires that $a$ and $N$ are coprime,[2] which is true since at this point in the algorithm, $\gcd(a,N)$ did not produce a nontrivial factor of $N$. It can be seen from the equivalence that $N$ divides $a^{r}-1$, written $N\mid a^{r}-1$. This can be factored using difference of squares:
$N\mid (a^{r/2}-1)(a^{r/2}+1)$
Since we have factored the expression in this way, the algorithm does not work for odd $r$ (because $a^{r/2}$ must be an integer), meaning the algorithm would have to restart with a new $a$. Hereafter we can therefore assume $r$ is even. It cannot be the case that $N\mid a^{r/2}-1$, since this would imply $a^{r/2}\equiv 1{\bmod {N}}$, contradicting the fact that $r$ is the smallest positive integer with this property. At this point, it may or may not be the case that $N\mid a^{r/2}+1$. If it is not true that $N\mid a^{r/2}+1$, then that means we are able to find a nontrivial factor of $N$. We compute
$d=\gcd(N,a^{r/2}-1)$
If $d=1$, then that means $N\mid a^{r/2}+1$ was true, and a nontrivial factor of $N$ cannot be obtained from $a$, so the algorithm must restart with a new $a$. Otherwise, we have found a nontrivial factor of $N$, with the other being $ {\frac {N}{d}}$, and the algorithm is finished. For this step, it is also equivalent to compute $\gcd(N,a^{r/2}+1)$; it will produce a nontrivial factor if $\gcd(N,a^{r/2}-1)$ is nontrivial, and will not if it is trivial (where $N\mid a^{r/2}+1$). Restated concisely, the algorithm is as follows: let $N$ be odd, and not a prime power. We want to output two nontrivial factors of $N$.
1. Pick a random number $1<a<N$.
2. Compute $K=\gcd(a,N)$, the greatest common divisor of $a$ and $N$.
3. If $K\neq 1$, then $K$ is a nontrivial factor of $N$, with the other factor being $ {\frac {N}{K}}$ and we are done.
4. Otherwise, use the quantum subroutine to find the order $r$ of $a$.
5. If $r$ is odd, then go back to step 1.
6. Compute $g=\gcd(N,a^{r/2}+1)$. If $g$ is nontrivial, the other factor is $ {\frac {N}{g}}$, and we're done. Otherwise, go back to step 1.
It has been shown that this is likely to succeed after a few runs.[2]
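A sketch of this classical part in Python is shown below. It is illustrative only: the quantum order-finding subroutine is replaced by a brute-force classical search (feasible only for tiny $N$), and the helper names are made up for the example.

```python
from math import gcd
from random import randrange

def order(a, N):
    """Stand-in for the quantum subroutine: brute-force order finding.
    Only feasible for tiny N; the quantum circuit is what makes this step fast."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N):
    """Classical reduction: N is assumed odd and not a prime power."""
    while True:
        a = randrange(2, N)
        K = gcd(a, N)
        if K != 1:                          # step 3: lucky guess shares a factor
            return K, N // K
        r = order(a, N)                     # step 4: order of a modulo N
        if r % 2 == 1:
            continue                        # step 5: odd order, retry with a new a
        g = gcd(N, pow(a, r // 2, N) + 1)   # step 6
        if 1 < g < N:
            return g, N // g
        # otherwise g is trivial: retry with a new a

print(factor(15))   # e.g. (3, 5) or (5, 3)
```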
Quantum order-finding subroutine
The goal of the quantum subroutine of Shor's algorithm is, given coprime integers $N$ and $1<a<N$, to find the order $r$ of $a$ modulo $N$, which is the smallest positive integer such that $a^{r}\equiv 1{\pmod {N}}$. To achieve this, Shor's algorithm uses a quantum circuit involving two registers. The second register uses $n$ qubits, where $n$ is the smallest integer such that $N\leq 2^{n}$. The size of the first register determines how accurate of an approximation the circuit produces. It can be shown that using $2n+1$ qubits gives sufficient accuracy to find $r$. The exact quantum circuit depends on the parameters $a$ and $N$, which define the problem.
The algorithm consists of two main steps:
1. Use quantum phase estimation with unitary $U$ representing the operation of multiplying by $a$ (modulo $N$), and input state $|0\rangle ^{\otimes 2n+1}\otimes |1\rangle $ (where the second register is $|1\rangle $ made from $n$ qubits). The eigenvalues of this $U$ encode information about the period, and $|1\rangle $ can be seen to be writable as a sum of its eigenvectors. Thanks to these properties, the quantum phase estimation stage gives as output a random integer of the form ${\frac {j}{r}}2^{2n+1}$ for random $j=0,1,...,r-1$.
2. Use the continued fractions algorithm to extract the period $r$ from the measurement outcomes obtained in the previous stage. This is a procedure to post-process (with a classical computer) the measurement data obtained from measuring the output quantum states, and retrieve the period.
The connection with quantum phase estimation was not discussed in the original formulation of Shor's algorithm,[2] but was later proposed by Kitaev.[20]
Quantum phase estimation
In general the quantum phase estimation algorithm, for any unitary $U$ and eigenstate $|\psi \rangle $ such that $U|\psi \rangle =e^{2\pi i\theta }|\psi \rangle $, sends inputs states $|0\rangle |\psi \rangle $ into output states close to $|\phi \rangle |\psi \rangle $, where $\phi $ is an integer close to $2^{2n+1}\theta $. In other words, it sends each eigenstate $|\psi _{j}\rangle $ of $U$ into a state close to the associated eigenvalue. For the purposes of quantum order-finding, we employ this strategy using the unitary defined by the action
$U|k\rangle ={\begin{cases}|ak{\pmod {N}}\rangle &0\leq k<N,\\|k\rangle &N\leq k<2^{n}.\end{cases}}$
The action of $U$ on states $|k\rangle $ with $N\leq k<2^{n}$ is not crucial to the functioning of the algorithm, but needs to be included to ensure the overall transformation is a well-defined quantum gate. Implementing the circuit for quantum phase estimation with $U$ requires being able to efficiently implement the gates $U^{2^{j}}$. This can be accomplished via modular exponentiation, which is the slowest part of the algorithm. The gate thus defined satisfies $U^{r}=I$, which immediately implies that its eigenvalues are the $r$-th roots of unity $\omega _{r}^{k}=e^{2\pi ik/r}$. Furthermore, each eigenvalue $\omega _{r}^{k}$ has an eigenvector of the form $ |\psi _{j}\rangle =r^{-1/2}\sum _{k=0}^{r-1}\omega _{r}^{-kj}|a^{k}\rangle $, and these eigenvectors are such that
${\begin{aligned}{\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|\psi _{j}\rangle &={\frac {1}{r}}\sum _{j=0}^{r-1}\sum _{k=0}^{r-1}\omega _{r}^{jk}|a^{k}\rangle \\&=|1\rangle +{\frac {1}{r}}\sum _{k=1}^{r-1}\left(\sum _{j=0}^{r-1}\omega _{r}^{jk}\right)|a^{k}\rangle =|1\rangle ,\end{aligned}}$
where the last identity follows from the geometric series formula, which implies $ \sum _{j=0}^{r-1}\omega _{r}^{jk}=0$.
Using quantum phase estimation on an input state $|0\rangle ^{\otimes 2n+1}|\psi _{j}\rangle $ would result in an output $|\phi _{j}\rangle |\psi _{j}\rangle $ with each $\phi _{j}$ representing a superposition of integers that approximate $2^{2n+1}j/r$, with the most accurate measurement having a probability of $ {\frac {4}{\pi ^{2}}}\approx 0.405$ of being measured (which can be made arbitrarily high using extra qubits). Thus using as input $|0\rangle ^{\otimes 2n+1}|1\rangle $ instead, the output is a superposition of such states with $j=0,...,r-1$. In other words, using this input amounts to running quantum phase estimation on a superposition of eigenvectors of $U$. More explicitly, the quantum phase estimation circuit implements the transformation
$|0\rangle ^{\otimes 2n+1}|1\rangle ={\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|0\rangle ^{\otimes 2n+1}|\psi _{j}\rangle \to {\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|\phi _{j}\rangle |\psi _{j}\rangle .$
Measuring the first register, we now have a balanced probability $1/r$ to find each $|\phi _{j}\rangle $, each one giving an integer approximation to $2^{2n+1}j/r$, which can be divided by $2^{2n+1}$ to get a decimal approximation for $j/r$.
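For very small cases this measurement distribution can be simulated classically. The sketch below (illustrative only, assuming NumPy is available; the choice $N=15$, $a=7$ is a toy case with $r=4$) prepares the state after modular exponentiation, conditions on one value of the second register, applies a discrete Fourier transform to the first register, and shows that the probability concentrates on integers near $j\,2^{2n+1}/r$:

```python
import numpy as np

N, a = 15, 7                     # toy case: the order of 7 modulo 15 is r = 4
n = 4                            # smallest n with N <= 2^n
m = 2 * n + 1                    # first-register size
Q = 2 ** m                       # = 2^(2n+1) = 512

# First register uniform over x, second register holding a^x mod N.
x = np.arange(Q)
f = np.array([pow(a, int(k), N) for k in x])

# Condition on one measured value of the second register (here f = 1),
# then Fourier-transform the surviving first-register amplitudes.
amp = (f == 1).astype(complex)
amp /= np.linalg.norm(amp)
probs = np.abs(np.fft.fft(amp)) ** 2 / Q

# The distribution peaks at the integers j*Q/r, here 0, 128, 256, 384.
print(np.sort(np.argsort(probs)[-4:]))
```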
Continued fraction algorithm to retrieve the period
Then, we apply the continued fractions algorithm to find integers $ b$ and $ c$, where $ {\frac {b}{c}}$ gives the best fraction approximation for the approximation measured from the circuit, for $ b,c<N$ and coprime $ b$ and $ c$. The number of qubits in the first register, $2n+1$, which determines the accuracy of the approximation, guarantees that
${\frac {b}{c}}={\frac {j}{r}}$
given the best approximation from the superposition of $ |\phi _{j}\rangle $ was measured (which can be made arbitrarily likely by using extra bits and truncating the output). However, while $ b$ and $ c$ are coprime, it may be the case that $ j$ and $ r$ are not coprime. Because of that, $ b$ and $ c$ may have lost some factors that were in $ j$ and $ r$. This can be remedied by rerunning the quantum subroutine an arbitrary number of times, to produce a list of fraction approximations
${\frac {b_{1}}{c_{1}}},\;{\frac {b_{2}}{c_{2}}},\;\ldots ,\;{\frac {b_{s}}{c_{s}}}$
where $ s$ is the number of times the algorithm was run. Each $ c_{k}$ will have different factors taken out of it because the circuit will (likely) have measured multiple different possible values of $ j$. To recover the actual $ r$ value, we can take the least common multiple of each $ c_{k}$:
$\mathrm {lcm} (c_{1},c_{2},\ldots ,c_{s})$
The least common multiple will be the order $ r$ of the original integer $ a$ with high probability.
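A minimal post-processing sketch (illustrative only; the listed measurement outcomes are hypothetical values of the form $j\,2^{2n+1}/r$ for the same toy case $N=15$, $r=4$, and Python's Fraction.limit_denominator performs the continued-fraction step) is:

```python
from fractions import Fraction
from math import lcm            # Python 3.9+

Q, N = 2 ** 9, 15               # Q = 2^(2n+1) for the toy case (true r = 4)
measurements = [128, 384, 256]  # hypothetical outcomes k, each close to j*Q/r

candidates = []
for k in measurements:
    frac = Fraction(k, Q).limit_denominator(N - 1)   # best b/c with c < N
    if frac.denominator > 1:
        candidates.append(frac.denominator)          # each c_k divides r

r = lcm(*candidates)            # recover the order with high probability
print(r)                        # 4; note 256 alone would only give c = 2
```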
Choosing the size of the first register
Phase estimation requires choosing the size of the first register to determine the accuracy of the algorithm, and for the quantum subroutine of Shor's algorithm, $2n+1$ qubits is sufficient to guarantee that the optimal bitstring measured from phase estimation (meaning the $|k\rangle $ where $ k/2^{2n+1}$ is the most accurate approximation of the phase from phase estimation) will allow the actual value of $r$ to be recovered.
Each $|\phi _{j}\rangle $ before measurement in Shor's algorithm represents a superposition of integers approximating $2^{2n+1}j/r$. Let $|k\rangle $ represent the most optimal integer in $|\phi _{j}\rangle $. The following theorem guarantees that the continued fractions algorithm will recover $j/r$ from $k/2^{2{n}+1}$:
Theorem — If $j$ and $r$ are $n$ bit integers, and
$\left\vert {\frac {j}{r}}-\phi \right\vert \leq {\frac {1}{2r^{2}}}$
then the continued fractions algorithm run on $\phi $ will recover both $ {\frac {j}{\gcd(j,\;r)}}$ and $ {\frac {r}{\gcd(j,\;r)}}$.[3]
As $k$ is the optimal bitstring from phase estimation, $k/2^{2{n}+1}$ is accurate to $j/r$ by $2n+1$ bits. Thus,
$\left\vert {\frac {j}{r}}-{\frac {k}{2^{2n+1}}}\right\vert \leq {\frac {1}{2^{2{n}+1}}}\leq {\frac {1}{2N^{2}}}\leq {\frac {1}{2r^{2}}}$
which implies that the continued fractions algorithm will recover $j$ and $r$ (or $j$ and $r$ with their greatest common divisor divided out).
The bottleneck
The runtime bottleneck of Shor's algorithm is quantum modular exponentiation, which is far slower than the quantum Fourier transform and the classical pre- and post-processing. There are several approaches to constructing and optimizing circuits for modular exponentiation. The simplest and (currently) most practical approach is to mimic conventional arithmetic circuits with reversible gates, starting with ripple-carry adders. Knowing the base and the modulus of exponentiation facilitates further optimizations.[21][22] Reversible circuits typically use on the order of $n^{3}$ gates for $n$ qubits. Alternative techniques improve gate counts asymptotically by using quantum Fourier transforms, but are not competitive with fewer than 600 qubits owing to their high constant factors.
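For orientation, the arithmetic that such reversible circuits emulate is ordinary square-and-multiply modular exponentiation. A plain classical Python sketch of that routine (equivalent to the built-in pow(a, x, N); not a quantum circuit) is:

def modexp(a, x, N):
    # Square-and-multiply: computes a**x mod N with O(log x) modular multiplications.
    # Reversible-circuit implementations condition each multiplication by
    # a**(2**i) mod N on one qubit of the exponent register.
    result = 1
    base = a % N
    while x > 0:
        if x & 1:                     # current exponent bit is 1
            result = (result * base) % N
        base = (base * base) % N      # square for the next bit
        x >>= 1
    return result

assert modexp(7, 256, 15) == pow(7, 256, 15)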
Period finding and discrete logarithms
Shor's algorithms for the discrete logarithm and the order-finding problems are instances of an algorithm solving the period-finding problem. All three are instances of the hidden subgroup problem.
Shor's algorithm for discrete logarithms
Given a group $G$ with order $p$ and generator $g\in G$, suppose we know that $x=g^{r}\in G$, for some $r\in \mathbb {Z} _{p}$, and we wish to compute $r$, which is the discrete logarithm: $r={\log _{g}}(x)$. Consider the abelian group $\mathbb {Z} _{p}\times \mathbb {Z} _{p}$, where each factor corresponds to modular addition of values. Now, consider the function
$f\colon \mathbb {Z} _{p}\times \mathbb {Z} _{p}\to G\;;\;f(a,b)=g^{a}x^{-b}.$
This gives us an abelian hidden subgroup problem, where $f$ corresponds to a group homomorphism. The kernel corresponds to the multiples of $(r,1)$. So, if we can find the kernel, we can find $r$. A quantum algorithm for solving this problem exists. This algorithm is, like the factor-finding algorithm, due to Peter Shor and both are implemented by creating a superposition through using Hadamard gates, followed by implementing $f$ as a quantum transform, followed finally by a quantum Fourier transform.[3] Due to this, the quantum algorithm for computing the discrete logarithm is also occasionally referred to as "Shor's Algorithm."
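The kernel structure can be checked classically on a toy group (this is only an illustration of the hidden-subgroup structure, not the quantum algorithm; the order-11 subgroup of the integers modulo 23 generated by 3, and the value r = 7, are arbitrary choices for the example):

# Hidden-subgroup structure behind the discrete-log algorithm (classical check only).
mod, p, g = 23, 11, 3            # g = 3 generates a subgroup of order p = 11 in (Z/23Z)*
r = 7                            # the (secret) discrete logarithm
x = pow(g, r, mod)               # x = g**r

def f(a, b):
    # f(a, b) = g**a * x**(-b) in the group
    return (pow(g, a, mod) * pow(x, (-b) % p, mod)) % mod

# The kernel {(a, b) : f(a, b) = f(0, 0) = 1} consists exactly of the
# multiples of (r, 1) in Z_p x Z_p, so finding it reveals r.
kernel = {(a, b) for a in range(p) for b in range(p) if f(a, b) == 1}
assert kernel == {((k * r) % p, k % p) for k in range(p)}
print(sorted(kernel)[:3])        # first few kernel elements, e.g. [(0, 0), (1, 8), (2, 5)]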
The order-finding problem can also be viewed as a hidden subgroup problem.[3] To see this, consider the group of integers under addition, and for a given $a\in \mathbb {Z} $ such that: $a^{r}=1$, the function
$f\colon \mathbb {Z} \to \mathbb {Z} \;;\;f(x)=a^{x},\;f(x+r)=f(x).$
For any finite abelian group $G$, a quantum algorithm exists for solving the hidden subgroup for $G$ in polynomial time.[3]
See also
• GEECM, a factorization algorithm said to be "often much faster than Shor's"[23]
• Grover's algorithm
References
1. Shor, P.W. (1994). "Algorithms for quantum computation: Discrete logarithms and factoring". Proceedings 35th Annual Symposium on Foundations of Computer Science. IEEE Comput. Soc. Press. pp. 124–134. doi:10.1109/sfcs.1994.365700. ISBN 0818665807. S2CID 15291489.
2. Shor, Peter W. (October 1997). "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer". SIAM Journal on Computing. 26 (5): 1484–1509. arXiv:quant-ph/9508027. doi:10.1137/S0097539795293172. ISSN 0097-5397. S2CID 2337707.
3. Nielsen, Michael A.; Chuang, Isaac L. (9 December 2010). Quantum Computation and Quantum Information (PDF) (7th ed.). Cambridge University Press. ISBN 978-1-107-00217-3. Archived (PDF) from the original on 2019-07-11. Retrieved 24 April 2022.
4. Gidney, Craig; Ekerå, Martin (2021). "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits". Quantum. 5: 433. arXiv:1905.09749. doi:10.22331/q-2021-04-15-433. S2CID 162183806.
5. cai, Jin-Yi (June 15, 2023). "Shor's Algorithm Does Not Factor Large Integers in the Presence of Noise". arXiv:2306.10072 [quant-ph].
6. See also pseudo-polynomial time.
7. Beckman, David; Chari, Amalavoyal N.; Devabhaktuni, Srikrishna; Preskill, John (1996). "Efficient Networks for Quantum Factoring" (PDF). Physical Review A. 54 (2): 1034–1063. arXiv:quant-ph/9602016. Bibcode:1996PhRvA..54.1034B. doi:10.1103/PhysRevA.54.1034. PMID 9913575. S2CID 2231795.
8. Harvey, David; van Der Hoeven, Joris (2020). "Integer multiplication in time O(n log n)". Annals of Mathematics. doi:10.4007/annals.2021.193.2.4. S2CID 109934776.
9. "Number Field Sieve". wolfram.com. Retrieved 23 October 2015.
10. Roetteler, Martin; Naehrig, Michael; Svore, Krysta M.; Lauter, Kristin E. (2017). "Quantum resource estimates for computing elliptic curve discrete logarithms". In Takagi, Tsuyoshi; Peyrin, Thomas (eds.). Advances in Cryptology – ASIACRYPT 2017 – 23rd International Conference on the Theory and Applications of Cryptology and Information Security, Hong Kong, China, December 3–7, 2017, Proceedings, Part II. Lecture Notes in Computer Science. Vol. 10625. Springer. pp. 241–270. arXiv:1706.06752. doi:10.1007/978-3-319-70697-9_9.
11. Vandersypen, Lieven M. K.; Steffen, Matthias; Breyta, Gregory; Yannoni, Costantino S.; Sherwood, Mark H. & Chuang, Isaac L. (2001), "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance" (PDF), Nature, 414 (6866): 883–887, arXiv:quant-ph/0112176, Bibcode:2001Natur.414..883V, CiteSeerX 10.1.1.251.8799, doi:10.1038/414883a, PMID 11780055, S2CID 4400832
12. Lu, Chao-Yang; Browne, Daniel E.; Yang, Tao & Pan, Jian-Wei (2007), "Demonstration of a Compiled Version of Shor's Quantum Factoring Algorithm Using Photonic Qubits" (PDF), Physical Review Letters, 99 (25): 250504, arXiv:0705.1684, Bibcode:2007PhRvL..99y0504L, doi:10.1103/PhysRevLett.99.250504, PMID 18233508, S2CID 5158195
13. Lanyon, B. P.; Weinhold, T. J.; Langford, N. K.; Barbieri, M.; James, D. F. V.; Gilchrist, A. & White, A. G. (2007), "Experimental Demonstration of a Compiled Version of Shor's Algorithm with Quantum Entanglement" (PDF), Physical Review Letters, 99 (25): 250505, arXiv:0705.1398, Bibcode:2007PhRvL..99y0505L, doi:10.1103/PhysRevLett.99.250505, hdl:10072/21608, PMID 18233509, S2CID 10010619
14. Lucero, Erik; Barends, Rami; Chen, Yu; Kelly, Julian; Mariantoni, Matteo; Megrant, Anthony; O'Malley, Peter; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Ted; Yin, Yi; Cleland, Andrew N.; Martinis, John M. (2012). "Computing prime factors with a Josephson phase qubit quantum processor". Nature Physics. 8 (10): 719. arXiv:1202.5707. Bibcode:2012NatPh...8..719L. doi:10.1038/nphys2385. S2CID 44055700.
15. Martín-López, Enrique; Martín-López, Enrique; Laing, Anthony; Lawson, Thomas; Alvarez, Roberto; Zhou, Xiao-Qi; O'Brien, Jeremy L. (12 October 2012). "Experimental realization of Shor's quantum factoring algorithm using qubit recycling". Nature Photonics. 6 (11): 773–776. arXiv:1111.4147. Bibcode:2012NaPho...6..773M. doi:10.1038/nphoton.2012.259. S2CID 46546101.
16. Amico, Mirko; Saleem, Zain H.; Kumph, Muir (2019-07-08). "An Experimental Study of Shor's Factoring Algorithm on IBM Q". Physical Review A. 100 (1): 012305. arXiv:1903.00768. doi:10.1103/PhysRevA.100.012305. ISSN 2469-9926. S2CID 92987546.
17. Karamlou, Amir H.; Simon, William A.; Katabarwa, Amara; Scholten, Travis L.; Peropadre, Borja; Cao, Yudong (2021-10-28). "Analyzing the performance of variational quantum factoring on a superconducting quantum processor". npj Quantum Information. 7 (1): 156. arXiv:2012.07825. Bibcode:2021npjQI...7..156K. doi:10.1038/s41534-021-00478-z. ISSN 2056-6387. S2CID 229156747.
18. "Quantum computing motte-and-baileys". Shtetl-Optimized. 2019-12-28. Retrieved 2021-11-15.
19. Bernstein, Daniel (1998). "Detecting perfect powers in essentially linear time". Mathematics of Computation. 67 (223): 1253–1283. doi:10.1090/S0025-5718-98-00952-1. ISSN 0025-5718.
20. Kitaev, A. Yu (1995-11-20). "Quantum measurements and the Abelian Stabilizer Problem". arXiv:quant-ph/9511026.
21. Markov, Igor L.; Saeedi, Mehdi (2012). "Constant-Optimized Quantum Circuits for Modular Multiplication and Exponentiation". Quantum Information and Computation. 12 (5–6): 361–394. arXiv:1202.6614. Bibcode:2012arXiv1202.6614M. doi:10.26421/QIC12.5-6-1. S2CID 16595181.
22. Markov, Igor L.; Saeedi, Mehdi (2013). "Faster Quantum Number Factoring via Circuit Synthesis". Phys. Rev. A. 87 (1): 012310. arXiv:1301.3210. Bibcode:2013PhRvA..87a2310M. doi:10.1103/PhysRevA.87.012310. S2CID 2246117.
23. Bernstein, Daniel J.; Heninger, Nadia; Lou, Paul; Valenta, Luke (2017). "Post-quantum RSA" (PDF). International Workshop on Post-Quantum Cryptography. Lecture Notes in Computer Science. 10346: 311–329. doi:10.1007/978-3-319-59879-6_18. ISBN 978-3-319-59878-9. Archived (PDF) from the original on 2017-04-20.
Further reading
• Nielsen, Michael A. & Chuang, Isaac L. (2010), Quantum Computation and Quantum Information, 10th Anniversary Edition, Cambridge University Press, ISBN 9781107002173.
• Phillip Kaye, Raymond Laflamme, Michele Mosca, An introduction to quantum computing, Oxford University Press, 2007, ISBN 0-19-857049-X
• "Explanation for the man in the street" by Scott Aaronson, "approved" by Peter Shor. (Shor wrote "Great article, Scott! That’s the best job of explaining quantum computing to the man on the street that I’ve seen."). An alternate metaphor for the QFT was presented in one of the comments. Scott Aaronson suggests the following 12 references as further reading (out of "the 10105000 quantum algorithm tutorials that are already on the web."):
• Shor, Peter W. (1997), "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer", SIAM J. Comput., 26 (5): 1484–1509, arXiv:quant-ph/9508027v2, Bibcode:1999SIAMR..41..303S, doi:10.1137/S0036144598347011. Revised version of the original paper by Peter Shor ("28 pages, LaTeX. This is an expanded version of a paper that appeared in the Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, Nov. 20--22, 1994. Minor revisions made January, 1996").
• Quantum Computing and Shor's Algorithm, Matthew Hayward's Quantum Algorithms Page, 2005-02-17, imsa.edu, LaTeX2HTML version of the original LaTeX document, also available as PDF or postscript document.
• Quantum Computation and Shor's Factoring Algorithm, Ronald de Wolf, CWI and University of Amsterdam, January 12, 1999, 9 page postscript document.
• Shor's Factoring Algorithm, Notes from Lecture 9 of Berkeley CS 294–2, dated 4 Oct 2004, 7 page postscript document.
• Chapter 6 Quantum Computation, 91 page postscript document, Caltech, Preskill, PH229.
• Quantum computation: a tutorial by Samuel L. Braunstein.
• The Quantum States of Shor's Algorithm, by Neal Young, Last modified: Tue May 21 11:47:38 1996.
• III. Breaking RSA Encryption with a Quantum Computer: Shor's Factoring Algorithm, Lecture notes on Quantum computation, Cornell University, Physics 481–681, CS 483; Spring, 2006 by N. David Mermin. Last revised 2006-03-28, 30 page PDF document.
• Lavor, C.; Manssur, L. R. U.; Portugal, R. (2003). "Shor's Algorithm for Factoring Large Integers". arXiv:quant-ph/0303175.
• Lomonaco, Jr (2000). "Shor's Quantum Factoring Algorithm". arXiv:quant-ph/0010034. This paper is a written version of a one-hour lecture given on Peter Shor's quantum factoring algorithm. 22 pages.
• Chapter 20 Quantum Computation, from Computational Complexity: A Modern Approach, Draft of a book: Dated January 2007, Sanjeev Arora and Boaz Barak, Princeton University. Published as Chapter 10 Quantum Computation of Sanjeev Arora, Boaz Barak, "Computational Complexity: A Modern Approach", Cambridge University Press, 2009, ISBN 978-0-521-42426-4
• A Step Toward Quantum Computing: Entangling 10 Billion Particles, from "Discover Magazine", Dated January 19, 2011.
• Josef Gruska - Quantum Computing Challenges also in Mathematics unlimited: 2001 and beyond, Editors Björn Engquist, Wilfried Schmid, Springer, 2001, ISBN 978-3-540-66913-5
External links
• Version 1.0.0 of libquantum: contains a C language implementation of Shor's algorithm with their simulated quantum computer library, but the width variable in shor.c should be set to 1 to improve the runtime complexity.
• PBS Infinite Series created two videos explaining the math behind Shor's algorithm, "How to Break Cryptography" and "Hacking at Quantum Speed with Shor's Algorithm".
Quantum information science
General
• DiVincenzo's criteria
• NISQ era
• Quantum computing
• timeline
• Quantum information
• Quantum programming
• Quantum simulation
• Qubit
• physical vs. logical
• Quantum processors
• cloud-based
Theorems
• Bell's
• Eastin–Knill
• Gleason's
• Gottesman–Knill
• Holevo's
• Margolus–Levitin
• No-broadcasting
• No-cloning
• No-communication
• No-deleting
• No-hiding
• No-teleportation
• PBR
• Threshold
• Solovay–Kitaev
• Purification
Quantum
communication
• Classical capacity
• entanglement-assisted
• quantum capacity
• Entanglement distillation
• Monogamy of entanglement
• LOCC
• Quantum channel
• quantum network
• Quantum teleportation
• quantum gate teleportation
• Superdense coding
Quantum cryptography
• Post-quantum cryptography
• Quantum coin flipping
• Quantum money
• Quantum key distribution
• BB84
• SARG04
• other protocols
• Quantum secret sharing
Quantum algorithms
• Amplitude amplification
• Bernstein–Vazirani
• Boson sampling
• Deutsch–Jozsa
• Grover's
• HHL
• Hidden subgroup
• Quantum annealing
• Quantum counting
• Quantum Fourier transform
• Quantum optimization
• Quantum phase estimation
• Shor's
• Simon's
• VQE
Quantum
complexity theory
• BQP
• EQP
• QIP
• QMA
• PostBQP
Quantum
processor benchmarks
• Quantum supremacy
• Quantum volume
• Randomized benchmarking
• XEB
• Relaxation times
• T1
• T2
Quantum
computing models
• Adiabatic quantum computation
• Continuous-variable quantum information
• One-way quantum computer
• cluster state
• Quantum circuit
• quantum logic gate
• Quantum machine learning
• quantum neural network
• Quantum Turing machine
• Topological quantum computer
Quantum
error correction
• Codes
• CSS
• quantum convolutional
• stabilizer
• Shor
• Bacon–Shor
• Steane
• Toric
• gnu
• Entanglement-assisted
Physical
implementations
Quantum optics
• Cavity QED
• Circuit QED
• Linear optical QC
• KLM protocol
Ultracold atoms
• Optical lattice
• Trapped-ion QC
Spin-based
• Kane QC
• Spin qubit QC
• NV center
• NMR QC
Superconducting
• Charge qubit
• Flux qubit
• Phase qubit
• Transmon
Quantum
programming
• OpenQASM-Qiskit-IBM QX
• Quil-Forest/Rigetti QCS
• Cirq
• Q#
• libquantum
• many others...
• Quantum information science
• Quantum mechanics topics
Number-theoretic algorithms
Primality tests
• AKS
• APR
• Baillie–PSW
• Elliptic curve
• Pocklington
• Fermat
• Lucas
• Lucas–Lehmer
• Lucas–Lehmer–Riesel
• Proth's theorem
• Pépin's
• Quadratic Frobenius
• Solovay–Strassen
• Miller–Rabin
Prime-generating
• Sieve of Atkin
• Sieve of Eratosthenes
• Sieve of Pritchard
• Sieve of Sundaram
• Wheel factorization
Integer factorization
• Continued fraction (CFRAC)
• Dixon's
• Lenstra elliptic curve (ECM)
• Euler's
• Pollard's rho
• p − 1
• p + 1
• Quadratic sieve (QS)
• General number field sieve (GNFS)
• Special number field sieve (SNFS)
• Rational sieve
• Fermat's
• Shanks's square forms
• Trial division
• Shor's
Multiplication
• Ancient Egyptian
• Long
• Karatsuba
• Toom–Cook
• Schönhage–Strassen
• Fürer's
Euclidean division
• Binary
• Chunking
• Fourier
• Goldschmidt
• Newton-Raphson
• Long
• Short
• SRT
Discrete logarithm
• Baby-step giant-step
• Pollard rho
• Pollard kangaroo
• Pohlig–Hellman
• Index calculus
• Function field sieve
Greatest common divisor
• Binary
• Euclidean
• Extended Euclidean
• Lehmer's
Modular square root
• Cipolla
• Pocklington's
• Tonelli–Shanks
• Berlekamp
• Kunerth
Other algorithms
• Chakravala
• Cornacchia
• Exponentiation by squaring
• Integer square root
• Integer relation (LLL; KZ)
• Modular exponentiation
• Montgomery reduction
• Schoof
• Trachtenberg system
• Italics indicate that algorithm is for numbers of special forms
Coastline paradox
The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve–like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus,[1] the first systematic study of this phenomenon was by Lewis Fry Richardson,[2][3] and it was expanded upon by Benoit Mandelbrot.[4][5]
An example of the coastline paradox. If the coastline of Great Britain is measured using units 100 km (62 mi) long, then the length of the coastline is approximately 2,800 km (1,700 mi). With 50 km (31 mi) units, the total length is approximately 3,400 km (2,100 mi), approximately 600 km (370 mi) longer.
The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size.
By contrast, if a circle of radius 5 units is measured crudely with straight lines (for example, with a circumscribed square made of four lines at right angles to one another), the measured perimeter is 40 units. Measuring with greater accuracy, the approximations converge to the true circumference, about 31.42 units. Unlike a coastline, a smooth curve such as a circle has a single well-defined length.
The problem is fundamentally different from the measurement of other, simpler edges. It is possible, for example, to accurately measure the length of a straight, idealized metal bar by using a measurement device to determine that the length is less than a certain amount and greater than another amount—that is, to measure it within a certain degree of uncertainty. The more accurate the measurement device, the closer results will be to the true length of the edge. When measuring a coastline, however, the closer measurement does not result in an increase in accuracy—the measurement only increases in length; unlike with the metal bar, there is no way to obtain a maximum value for the length of the coastline.
In three-dimensional space, the coastline paradox is readily extended to the concept of fractal surfaces, whereby the area of a surface varies depending on the measurement resolution.
Mathematical aspects
The basic concept of length originates from Euclidean distance. In Euclidean geometry, a straight line represents the shortest distance between two points. This line has only one length. On the surface of a sphere, this is replaced by the geodesic length (also called the great circle length), which is measured along the surface curve that exists in the plane containing both endpoints and the center of the sphere. The length of basic curves is more complicated but can also be calculated. Measuring with rulers, one can approximate the length of a curve by summing the lengths of the straight line segments which connect the measurement points.
Using a few straight lines to approximate the length of a curve will produce an estimate lower than the true length; when increasingly short (and thus more numerous) lines are used, the sum approaches the curve's true length. A precise value for this length can be found using calculus, the branch of mathematics enabling the calculation of infinitesimally small distances. In this way a smooth curve can be meaningfully assigned a precise length.
Not all curves can be measured in this way. A fractal is, by definition, a curve whose complexity changes with measurement scale. Whereas approximations of a smooth curve tend to a single value as measurement precision increases, the measured value for a fractal does not converge.
This Sierpiński curve (a type of space-filling curve), which repeats the same pattern on a smaller and smaller scale, continues to increase in length. If understood to iterate within an infinitely subdivisible geometric space, its length tends to infinity. At the same time, the area enclosed by the curve does converge to a precise figure—just as, analogously, the area of an island can be calculated more easily than the length of its coastline.
As the length of a fractal curve always diverges to infinity, if one were to measure a coastline with infinite or near-infinite resolution, the length of the infinitely short kinks in the coastline would add up to infinity.[6] However, this figure relies on the assumption that space can be subdivided into infinitesimal sections. The truth value of this assumption—which underlies Euclidean geometry and serves as a useful model in everyday measurement—is a matter of philosophical speculation, and may or may not reflect the changing realities of "space" and "distance" on the atomic level (approximately the scale of a nanometer).
Coastlines are less definite in their construction than idealized fractals such as the Mandelbrot set because they are formed by various natural events that create patterns in statistically random ways, whereas idealized fractals are formed through repeated iterations of simple, formulaic sequences.[7]
Discovery
Shortly before 1951, Lewis Fry Richardson, in researching the possible effect of border lengths on the probability of war, noticed that the Portuguese reported their measured border with Spain to be 987 km, but the Spanish reported it as 1214 km. This was the beginning of the coastline problem, which is a mathematical uncertainty inherent in the measurement of boundaries that are irregular.[8]
The prevailing method of estimating the length of a border (or coastline) was to lay out n equal straight-line segments of length ℓ with dividers on a map or aerial photograph. Each end of the segment must be on the boundary. Investigating the discrepancies in border estimation, Richardson discovered what is now termed the "Richardson effect": the sum of the segments is monotonically increasing when the common length of the segments is decreased. In effect, the shorter the ruler, the longer the measured border; the Spanish and Portuguese geographers were simply using different-length rulers.
The result most astounding to Richardson is that, under certain circumstances, as ℓ approaches zero, the length of the coastline approaches infinity. Richardson had believed, based on Euclidean geometry, that a coastline would approach a fixed length, as do similar estimations of regular geometric figures. For example, the perimeter of a regular polygon inscribed in a circle approaches the circumference with increasing numbers of sides (and decrease in the length of one side). In geometric measure theory such a smooth curve as the circle that can be approximated by small straight segments with a definite limit is termed a rectifiable curve.
Measuring a coastline
More than a decade after Richardson completed his work, Benoit Mandelbrot developed a new branch of mathematics, fractal geometry, to describe just such non-rectifiable complexes in nature as the infinite coastline.[9] His own definition of the new figure serving as the basis for his study is:[10]
I coined fractal from the Latin adjective fractus. The corresponding Latin verb frangere means "to break:" to create irregular fragments. It is therefore sensible ... that, in addition to "fragmented" ... fractus should also mean "irregular".
A key property of some fractals is self-similarity; that is, at any scale the same general configuration appears. A coastline is perceived as bays alternating with promontories. In the hypothetical situation that a given coastline has this property of self-similarity, then no matter how great any one small section of coastline is magnified, a similar pattern of smaller bays and promontories superimposed on larger bays and promontories appears, right down to the grains of sand. At that scale the coastline appears as a momentarily shifting, potentially infinitely long thread with a stochastic arrangement of bays and promontories formed from the small objects at hand. In such an environment (as opposed to smooth curves) Mandelbrot asserts[9] "coastline length turns out to be an elusive notion that slips between the fingers of those who want to grasp it".
There are different kinds of fractals. A coastline with the stated property is in "a first category of fractals, namely curves whose fractal dimension is greater than 1". That last statement represents an extension by Mandelbrot of Richardson's thought. Mandelbrot's statement of the Richardson effect is:[11]
$L(\varepsilon )\sim F\varepsilon ^{1-D},$
where L, coastline length, a function of the measurement unit ε, is approximated by the expression. F is a constant, and D is a parameter that Richardson found depended on the coastline approximated by L. He gave no theoretical explanation, but Mandelbrot identified D with a non-integer form of the Hausdorff dimension, later the fractal dimension. Rearranging the expression yields
$F\varepsilon ^{-D}\cdot \varepsilon ,$
where $F\varepsilon ^{-D}$ must be the number of units ε required to obtain L. The broken line measuring a coast does not extend in one direction nor does it represent an area, but is intermediate between the two and can be thought of as a band of width 2ε. D is its fractal dimension, ranging between 1 and 2 (and typically less than 1.5). More broken coastlines have greater D, and therefore L is longer for the same ε. D is approximately 1.02 for the coastline of South Africa, and approximately 1.25 for the west coast of Great Britain.[5] For lake shorelines, the typical value of D is 1.28.[12] Mandelbrot showed that D is independent of ε.
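The Richardson relation can be illustrated numerically with the Koch curve, for which a divider of length $\varepsilon =3^{-k}$ measures exactly $4^{k}$ steps; fitting the slope of log L against log ε recovers $D=\log 4/\log 3\approx 1.26$. The following Python sketch assumes nothing beyond the standard library:

import math

def koch_measured_length(k):
    # A divider (ruler) of length eps = 3**(-k) walks the Koch curve in 4**k steps,
    # so the measured length is (4/3)**k.
    eps = 3.0 ** (-k)
    return 4 ** k * eps

# Estimate D from the slope of log L(eps) versus log eps, which equals 1 - D.
ks = [1, 7]
logs = [(math.log(3.0 ** (-k)), math.log(koch_measured_length(k))) for k in ks]
slope = (logs[1][1] - logs[0][1]) / (logs[1][0] - logs[0][0])
print(1 - slope)                 # about 1.2619, i.e. log 4 / log 3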
See also
• Alaska boundary dispute – Alaskan and Canadian claims to the Alaskan Panhandle differed greatly, based on competing interpretations of the ambiguous phrase setting the border at "a line parallel to the windings of the coast", applied to the fjord-dense region.
• Fractal dimension
• Gabriel's horn, a geometric figure with infinite surface area but finite volume
• "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension", a paper by Benoît Mandelbrot
• Paradox of the heap
• Zeno's paradoxes
References
Citations
1. Steinhaus, Hugo (1954). "Length, shape and area". Colloquium Mathematicum. 3 (1): 1–13. doi:10.4064/cm-3-1-1-13. The left bank of the Vistula, when measured with increased precision would furnish lengths ten, hundred and even thousand times as great as the length read off the school map. A statement nearly adequate to reality would be to call most arcs encountered in nature not rectifiable.
2. Vulpiani, Angelo (2014). "Lewis Fry Richardson: scientist, visionary and pacifist". Lettera Matematica. 2 (3): 121–128. doi:10.1007/s40329-014-0063-z. MR 3344519. S2CID 128975381.
3. Richardson, L. F. (1961). "The problem of contiguity: An appendix to statistics of deadly quarrels". General Systems Yearbook. Vol. 6. pp. 139–187.
4. Mandelbrot, B. (1967). "How Long is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension". Science. 156 (3775): 636–638. Bibcode:1967Sci...156..636M. doi:10.1126/science.156.3775.636. PMID 17837158. S2CID 15662830. Archived from the original on 2021-10-19. Retrieved 2021-05-21.
5. Mandelbrot, Benoit (1983). The Fractal Geometry of Nature. W. H. Freeman and Co. pp. 25–33. ISBN 978-0-7167-1186-5.
6. Post & Eisen, p. 550 (see below).
7. Heinz-Otto Peitgen, Hartmut Jürgens, Dietmar Saupe, Chaos and Fractals: New Frontiers of Science; Spring, 2004; p. 424.
8. Richardson, Lewis Fry (1993). "Fractals". In Ashford, Oliver M.; Charnock, H.; Drazin, P. G.; et al. (eds.). The Collected Papers of Lewis Fry Richardson: Meteorology and numerical analysis. Vol. 1. Cambridge University Press. pp. 45–46. ISBN 0-521-38297-1.
9. Mandelbrot 1982, p. 28.
10. Mandelbrot 1982, p. 1.
11. Mandelbrot 1982, pp. 29–31.
12. Seekell, D.; Cael, B.; Lindmark, E.; Byström, P. (2021). "The Fractal Scaling Relationship for River Inlets to Lakes". Geophysical Research Letters. 48 (9): e2021GL093366. Bibcode:2021GeoRL..4893366S. doi:10.1029/2021GL093366. ISSN 1944-8007. S2CID 235508504.
Sources
• Post, David G., and Michael Eisen. "How Long is the Coastline of Law? Thoughts on the Fractal Nature of Legal Systems". Journal of Legal Studies XXIX(1), January 2000.
• Mandelbrot, Benoit B. (1982). "II.5 How long is the coast of Britain?". The Fractal Geometry of Nature. Macmillan. pp. 25–33. ISBN 978-0-7167-1186-5.
External links
• "Coastlines" at Fractal Geometry (ed. Michael Frame, Benoit Mandelbrot, and Nial Neger; maintained for Math 190a at Yale University)
• The Atlas of Canada – Coastline and Shoreline
• NOAA GeoZone Blog on Digital Coast
• What Is The Coastline Paradox? – YouTube video by Veritasium
• The Coastline Paradox Explained – YouTube video by RealLifeLore
Fractals
Characteristics
• Fractal dimensions
• Assouad
• Box-counting
• Higuchi
• Correlation
• Hausdorff
• Packing
• Topological
• Recursion
• Self-similarity
Iterated function
system
• Barnsley fern
• Cantor set
• Koch snowflake
• Menger sponge
• Sierpinski carpet
• Sierpinski triangle
• Apollonian gasket
• Fibonacci word
• Space-filling curve
• Blancmange curve
• De Rham curve
• Minkowski
• Dragon curve
• Hilbert curve
• Koch curve
• Lévy C curve
• Moore curve
• Peano curve
• Sierpiński curve
• Z-order curve
• String
• T-square
• n-flake
• Vicsek fractal
• Hexaflake
• Gosper curve
• Pythagoras tree
• Weierstrass function
Strange attractor
• Multifractal system
L-system
• Fractal canopy
• Space-filling curve
• H tree
Escape-time
fractals
• Burning Ship fractal
• Julia set
• Filled
• Newton fractal
• Douady rabbit
• Lyapunov fractal
• Mandelbrot set
• Misiurewicz point
• Multibrot set
• Newton fractal
• Tricorn
• Mandelbox
• Mandelbulb
Rendering techniques
• Buddhabrot
• Orbit trap
• Pickover stalk
Random fractals
• Brownian motion
• Brownian tree
• Brownian motor
• Fractal landscape
• Lévy flight
• Percolation theory
• Self-avoiding walk
People
• Michael Barnsley
• Georg Cantor
• Bill Gosper
• Felix Hausdorff
• Desmond Paul Henry
• Gaston Julia
• Helge von Koch
• Paul Lévy
• Aleksandr Lyapunov
• Benoit Mandelbrot
• Hamid Naderi Yeganeh
• Lewis Fry Richardson
• Wacław Sierpiński
Other
• "How Long Is the Coast of Britain?"
• Coastline paradox
• Fractal art
• List of fractals by Hausdorff dimension
• The Fractal Geometry of Nature (1982 book)
• The Beauty of Fractals (1986 book)
• Chaos: Making a New Science (1987 book)
• Kaleidoscope
• Chaos theory
Shortlex order
In mathematics, and particularly in the theory of formal languages, shortlex is a total ordering for finite sequences of objects that can themselves be totally ordered. In the shortlex ordering, sequences are primarily sorted by cardinality (length) with the shortest sequences first, and sequences of the same length are sorted into lexicographical order.[1] Shortlex ordering is also called radix, length-lexicographic, military, or genealogical ordering.[2]
In the context of strings on a totally ordered alphabet, the shortlex order is identical to the lexicographical order, except that shorter strings precede longer strings. For example, the shortlex order of the set of strings on the English alphabet (in its usual order) is [ε, a, b, c, ..., z, aa, ab, ac, ..., zz, aaa, aab, aac, ..., zzz, ...], where ε denotes the empty string.
The strings in this ordering over a fixed finite alphabet can be placed into one-to-one order-preserving correspondence with the non-negative integers, giving the bijective numeration system for representing numbers.[3] The shortlex ordering is also important in the theory of automatic groups.[4]
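As a concrete sketch (the alphabet and the word list below are arbitrary choices), shortlex comparison amounts to sorting by the pair (length, string), and enumerating strings in shortlex order realizes the order-preserving correspondence with the non-negative integers:

from itertools import product

alphabet = "abc"                               # any totally ordered alphabet

def shortlex_key(s):
    # Primary key: length; secondary key: ordinary lexicographic order.
    return (len(s), s)

print(sorted(["b", "aa", "", "ab", "a", "ba"], key=shortlex_key))
# ['', 'a', 'b', 'aa', 'ab', 'ba']

def shortlex_enumerate(limit):
    # Yield the first `limit` strings over the alphabet in shortlex order,
    # i.e. the order-preserving bijection with 0, 1, 2, ...
    count = 0
    length = 0
    while count < limit:
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)
            count += 1
            if count == limit:
                return
        length += 1

print(list(shortlex_enumerate(7)))             # ['', 'a', 'b', 'c', 'aa', 'ab', 'ac']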
See also
• Graded lexicographic order
References
1. Sipser, Michael (2012). Introduction to the Theory of Computation (3 ed.). Boston, MA: Cengage Learning. p. 14. ISBN 978-1133187790.
2. Bárány, Vince (2008), "A hierarchy of automatic ω-words having a decidable MSO theory", RAIRO Theoretical Informatics and Applications, 42 (3): 417–450, doi:10.1051/ita:2008008, MR 2434027.
3. Smullyan, R. (1961), "9. Lexicographical ordering; n-adic representation of integers", Theory of Formal Systems, Annals of Mathematics Studies, vol. 47, Princeton University Press, pp. 34–36.
4. Epstein, David B. A.; Cannon, James W.; Holt, Derek F.; Levy, Silvio V. F.; Paterson, Michael S.; Thurston, William P. (1992), Word processing in groups, Boston, MA: Jones and Bartlett Publishers, p. 56, ISBN 0-86720-244-0, MR 1161694.
Short division
In arithmetic, short division is a division algorithm which breaks down a division problem into a series of easier steps. It is an abbreviated form of long division — whereby the products are omitted and the partial remainders are notated as superscripts.
As a result, a short division tableau is shorter than its long division counterpart — though sometimes at the expense of relying on mental arithmetic, which could limit the size of the divisor.
For most people, small integer divisors up to 12 are handled using memorised multiplication tables, although the procedure could also be adapted to the larger divisors as well.
As in all division problems, a number called the dividend is divided by another, called the divisor. The answer to the problem would be the quotient, and in the case of Euclidean division, the remainder would be included as well.
Using short division, arbitrarily large dividends can be handled.[1]
Tableau
Short division does not use the slash (/) or division sign (÷) symbols. Instead, it displays the dividend, divisor, and quotient (when it is found) in a tableau. An example is shown below, representing the division of 500 by 4. The quotient is 125.
${\begin{array}{r}125\\4{\overline {)500}}\\\end{array}}$
Alternatively, the bar may be placed below the number, which means the sum proceeds down the page. This is in distinction to long division, where the space under the dividend is required for workings:
${\begin{array}{r}4{\underline {)500}}\\125\\\end{array}}$
Example
The procedure involves several steps. As an example, consider 950 divided by 4:
1. The dividend and divisor are written in the short division tableau:
$4{\overline {)950}}\ $
Dividing 950 by 4 in a single step would require knowing the multiplication table up to 238 × 4. Instead, the division is reduced to small steps. Starting from the left, enough digits are selected to form a number (called the partial dividend) that is at least 4×1 but smaller than 4×10 (4 being the divisor in this problem). Here, the partial dividend is 9.
2. The first number to be divided by the divisor (4) is the partial dividend (9). One writes the integer part of the result (2) above the division bar over the leftmost digit of the dividend, and one writes the remainder (1) as a small digit above and to the right of the partial dividend (9).
${\begin{matrix}2\\4{\overline {)9^{1}50}}\end{matrix}}$
3. Next one repeats step 2, using the small digit concatenated with the next digit of the dividend to form a new partial dividend (15). Dividing the new partial dividend by the divisor (4), one writes the result as before — the quotient above the next digit of the dividend, and the remainder as a small digit to the upper right. (Here 15 divided by 4 is 3, with a remainder of 3.)
${\begin{matrix}\,\,2\ 3\\4{\overline {)9^{1}5^{3}0}}\\\end{matrix}}$
4. One continues repeating step 2 until there are no digits remaining in the dividend. In this example, we see that 30 divided by 4 is 7 with a remainder of 2. The number written above the bar (237) is the quotient, and the last small digit (2) is the remainder.
${\begin{matrix}\quad 2\ 3\ 7\\4{\overline {)9^{1}5^{3}0^{2}}}\\\end{matrix}}$
5. The answer in this example is 237 with a remainder of 2. Alternatively, we can continue the above procedure if we want to produce a decimal answer. We do this by adding a decimal point and zeroes as necessary at the right of the dividend, and then treating each zero as another digit of the dividend. Thus, the next step in such a calculation would give the following:
${\begin{matrix}\quad 2\ 3\ 7.\ 5\\4{\overline {)9^{1}5^{3}0.^{2}0}}\\\end{matrix}}$
Using the alternative layout the final workings would be:
${\begin{array}{r}4{\underline {)9^{1}5^{3}0.^{2}0}}\\2^{\color {White}1}3^{\color {White}3}7.^{\color {White}2}5\\\end{array}}$
As usual, similar steps can also be used to handle the cases with a decimal dividend, or the cases where the divisor involves multiple digits.[2]
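The written procedure translates directly into code. The following Python sketch (the function name and examples are illustrative) carries each partial remainder to the next digit exactly as the small superscripts do:

def short_division(dividend, divisor):
    # Divide digit by digit, carrying each partial remainder to the next digit.
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        partial = remainder * 10 + int(digit)      # superscript remainder, then next digit
        quotient_digits.append(partial // divisor)
        remainder = partial % divisor
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(short_division(950, 4))                      # (237, 2), matching the worked example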
Prime factoring
A common requirement is to reduce a number to its prime factors. This is used particularly in working with vulgar fractions. The dividend is successively divided by prime numbers, repeating where possible:
${\begin{array}{r}2{\underline {)950}}\\5{\underline {)475}}\\5{\underline {){\color {White}0}95}}\\\ \ \ 19\\\end{array}}$
This results in 950 = 2 × 5² × 19.
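The repeated division can be automated with a plain trial-division routine (a sketch with illustrative names; it divides by every candidate divisor, which for this purpose has the same effect as dividing only by primes):

def prime_factors(n):
    # Repeatedly short-divide by successive candidate divisors (trial division).
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)                          # whatever remains is prime
    return factors

print(prime_factors(950))                          # [2, 5, 5, 19]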
Modulo division
When one is interested only in the remainder of the division, this procedure (a variation of short division) ignores the quotient and tallies only the remainders. It can be used for manual modulo calculation or as a test for even divisibility. The quotient digits are not written down.
The following shows the solution (using short division) of 16762109 divided by seven.
${\begin{matrix}7)16^{2}7^{6}6^{3}2^{4}1^{6}0^{4}9^{0}\end{matrix}}$
The remainder is zero, so 16762109 is exactly divisible by 7.
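Keeping only the running remainder gives a compact divisibility test; a small Python sketch (names illustrative):

def remainder_by_short_division(n, d):
    # Tally only the running remainders (the small superscripts); quotient digits are ignored.
    remainder = 0
    for digit in str(n):
        remainder = (remainder * 10 + int(digit)) % d
    return remainder

print(remainder_by_short_division(16762109, 7))    # 0, so 16762109 is divisible by 7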
See also
• Arbitrary-precision arithmetic
• Chunking (division)
• Division algorithm
• Elementary arithmetic
• Fourier division
• Long division
• Polynomial long division
• Synthetic division
References
1. G.P Quackenbos, LL.D. (1874). "Chapter VII: Division". A Practical Arithmetic. D. Appleton & Company.
2. "Dividing whole numbers -- A complete course in arithmetic". www.themathpage.com. Retrieved 2019-06-23.
External links
• Alternative Division Algorithms: Double Division, Partial Quotients & Column Division, Partial Quotients Movie
• Lesson in Short Division: TheMathPage.com
Shortcut model
An important question in statistical mechanics is the dependence of model behaviour on the dimension of the system. The shortcut model[1][2] was introduced in the course of studying this dependence. The model interpolates between discrete regular lattices of integer dimension.
Introduction
The behaviour of different processes on discrete regular lattices has been studied quite extensively. They show a rich diversity of behaviour, including a non-trivial dependence on the dimension of the regular lattice.[3][4][5][6][7][8][9][10][11] In recent years the study has been extended from regular lattices to complex networks. The shortcut model has been used in studying several processes and their dependence on dimension.
Dimension of complex network
Usually, dimension is defined based on the scaling exponent of some property in the appropriate limit. One property one could use [2] is the scaling of volume with distance. For regular lattices $\textstyle \mathbf {Z} ^{d}$ the number of nodes $\textstyle j$ within a distance $\textstyle r(i,j)$ of node $\textstyle i$ scales as $\textstyle r(i,j)^{d}$.
For systems which arise in physical problems one usually can identify some physical space relations among the vertices. Nodes which are linked directly will have more influence on each other than nodes which are separated by several links. Thus, one could define the distance $\textstyle r(i,j)$ between nodes $\textstyle i$ and $\textstyle j$ as the length of the shortest path connecting the nodes.
For complex networks one can define the volume as the number of nodes $\textstyle j$ within a distance $\textstyle r(i,j)$ of node $\textstyle i$, averaged over $\textstyle i$, and the dimension may be defined as the exponent which determines the scaling behaviour of the volume with distance. For a vector $\textstyle {\vec {n}}=(n_{1},\dots ,n_{d})\in \mathbf {Z} ^{d}$, where $\textstyle d$ is a positive integer, the Euclidean norm $\textstyle \|{\vec {n}}\|$ is defined as the Euclidean distance from the origin to $\textstyle {\vec {n}}$, i.e.,
$\|{\vec {n}}\|={\sqrt {n_{1}^{2}+\cdots +n_{d}^{2}}}.$
However, the definition which generalises to complex networks is the $\textstyle L^{1}$ norm,
$\|{\vec {n}}\|_{1}=\|n_{1}\|+\cdots +\|n_{d}\|.$
The scaling properties hold for both the Euclidean norm and the $\textstyle L^{1}$ norm. The scaling relation is
$V(r)=kr^{d},$
where d is not necessarily an integer for complex networks. $\textstyle k$ is a geometric constant which depends on the complex network. If the scaling relation Eqn. holds, then one can also define the surface area $\textstyle S(r)$ as the number of nodes which are exactly at a distance $\textstyle r$ from a given node, and $\textstyle S(r)$ scales as
$S(r)=kdr^{d-1}.$
A definition based on the complex network zeta function[1] generalises the definition based on the scaling property of the volume with distance[2] and puts it on a mathematically robust footing.
Shortcut model
The shortcut model starts with a network built on a one-dimensional regular lattice. One then adds edges to create shortcuts that join remote parts of the lattice to one another. The starting network is a one-dimensional lattice of $\textstyle N$ vertices with periodic boundary conditions. Each vertex is joined to its neighbors on either side, which results in a system with $\textstyle N$ edges. The network is extended by taking each node in turn and, with probability $\textstyle p$, adding an edge to a new location $\textstyle m$ nodes distant.
The rewiring process allows the model to interpolate between a one-dimensional regular lattice and a two-dimensional regular lattice. When the rewiring probability $\textstyle p=0$, we have a one-dimensional regular lattice of size $\textstyle N$. When $\textstyle p=1$, every node is connected to a new location and the graph is essentially a two-dimensional lattice with $\textstyle m$ and $\textstyle N/m$ nodes in each direction. For $\textstyle p$ between $\textstyle 0$ and $\textstyle 1$, we have a graph which interpolates between the one and two dimensional regular lattices. The graphs we study are parametrized by
${\text{size}}=N,\,$
${\text{shortcut distance}}=m,\,$
${\text{rewiring probability}}=p.\,$
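The construction, together with the volume-scaling estimate of the dimension described above, can be sketched as follows (assuming the networkx library; the graph size, shortcut distance, radii, and function names are illustrative, and the estimates are only approximate at these small scales):

import math
import random
import networkx as nx

def shortcut_graph(N, m, p, seed=0):
    # Ring of N nodes; each node gains, with probability p, an edge to the node m steps ahead.
    rng = random.Random(seed)
    G = nx.cycle_graph(N)
    for i in range(N):
        if rng.random() < p:
            G.add_edge(i, (i + m) % N)
    return G

def average_volume(G, r):
    # Average number of nodes within graph distance r of a node, i.e. V(r).
    total = sum(len(nx.single_source_shortest_path_length(G, v, cutoff=r)) for v in G)
    return total / G.number_of_nodes()

def dimension_estimate(G, r1, r2):
    # Fit the exponent d in V(r) ~ k * r**d from two radii.
    return (math.log(average_volume(G, r2)) - math.log(average_volume(G, r1))) / (math.log(r2) - math.log(r1))

ring = shortcut_graph(N=400, m=20, p=0.0)
full = shortcut_graph(N=400, m=20, p=1.0)          # effectively a 20 x 20 two-dimensional lattice
print(dimension_estimate(ring, 4, 8))              # about 0.9: the one-dimensional limit
print(dimension_estimate(full, 4, 8))              # about 1.8: approaching the two-dimensional limit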
Application to extensiveness of power law potential
One application using the above definition of dimension was to the extensiveness of statistical mechanics systems with a power law potential where the interaction varies with the distance $\textstyle r$ as $\textstyle 1/r^{\alpha }$. In one dimension the system properties like the free energy do not behave extensively when $\textstyle 0\leq \alpha \leq 1$, i.e., they increase faster than N as $\textstyle N\rightarrow \infty $, where N is the number of spins in the system.
Consider the Ising model with the Hamiltonian (with N spins)
$H=-{\frac {1}{2}}\sum _{i,j}J(r(i,j))s_{i}s_{j}$
where $\textstyle s_{i}$ are the spin variables, $\textstyle r(i,j)$ is the distance between node $\textstyle i$ and node $\textstyle j$, and $\textstyle J(r(i,j))$ are the couplings between the spins. When the $\textstyle J(r(i,j))$ have the behaviour $\textstyle 1/r^{\alpha }$, we have the power law potential. For a general complex network the condition on the exponent $\textstyle \alpha $ which preserves extensivity of the Hamiltonian was studied. At zero temperature, the energy per spin is proportional to
$\rho =\sum _{i,j}J(r(i,j)),$
and hence extensivity requires that $\textstyle \rho $ be finite. For a general complex network $\textstyle \rho $ is proportional to the Riemann zeta function $\textstyle \zeta (\alpha -d+1)$. Thus, for the potential to be extensive, one requires
$\alpha >d.\,$
Other processes which have been studied are self-avoiding random walks, and the scaling of the mean path length with the network size. These studies lead to the interesting result that the dimension transitions sharply as the shortcut probability increases from zero.[12] The sharp transition in the dimension has been explained in terms of the combinatorially large number of available paths for points separated by distances large compared to 1.[13]
Conclusion
The shortcut model is useful for studying the dimension dependence of different processes. The processes studied include the behaviour of the power law potential as a function of the dimension, the behaviour of self-avoiding random walks, and the scaling of the mean path length. It may be useful to compare the shortcut model with the small-world network, since the definitions have a lot of similarity. In the small-world network also one starts with a regular lattice and adds shortcuts with probability $\textstyle p$. However, the shortcuts are not constrained to connect to a node a fixed distance ahead. Instead, the other end of the shortcut can connect to any randomly chosen node. As a result, the small world model tends to a random graph rather than a two-dimensional graph as the shortcut probability is increased.
References
1. O. Shanker (2007). "Graph Zeta Function and Dimension of Complex Network". Modern Physics Letters B. 21 (11): 639–644. Bibcode:2007MPLB...21..639S. doi:10.1142/S0217984907013146.
2. O. Shanker (2007). "Defining Dimension of a Complex Network". Modern Physics Letters B. 21 (6): 321–326. Bibcode:2007MPLB...21..321S. doi:10.1142/S0217984907012773.
3. O. Shanker (2006). "Long range 1-d potential at border of thermodynamic limit". Modern Physics Letters B. 20 (11): 649–654. Bibcode:2006MPLB...20..649S. doi:10.1142/S0217984906011128.
4. D. Ruelle (1968). "Statistical mechanics of a one-dimensional lattice gas". Communications in Mathematical Physics. 9 (4): 267–278. Bibcode:1968CMaPh...9..267R. CiteSeerX 10.1.1.456.2973. doi:10.1007/BF01654281. S2CID 120998243.
5. F. Dyson (1969). "Existence of a phase-transition in a one-dimensional Ising ferromagnet". Communications in Mathematical Physics. 12 (2): 91–107. Bibcode:1969CMaPh..12...91D. doi:10.1007/BF01645907. S2CID 122117175.
6. J. Frohlich & T. Spencer (1982). "The phase transition in the one-dimensional Ising Model with 1/r2 interaction energy". Communications in Mathematical Physics. 84 (1): 87–101. Bibcode:1982CMaPh..84...87F. doi:10.1007/BF01208373. S2CID 122722140.
7. M. Aizenman; J.T. Chayes; L. Chayes; C.M. Newman (1988). "Discontinuity of the magnetization in one-dimensional 1/|x−y|2 Ising and Potts models". Journal of Statistical Physics. 50 (1–2): 1–40. Bibcode:1988JSP....50....1A. doi:10.1007/BF01022985. S2CID 17289447.
8. J.Z. Imbrie; C.M. Newman (1988). "An intermediate phase with slow decay of correlations in one dimensional 1/|x−y|2 percolation, Ising and Potts models". Communications in Mathematical Physics. 118 (2): 303. Bibcode:1988CMaPh.118..303I. doi:10.1007/BF01218582. S2CID 117966310.
9. E. Luijten & H.W.J. Blöte (1995). "Monte Carlo method for spin models with long-range interactions". International Journal of Modern Physics C. 6 (3): 359. Bibcode:1995IJMPC...6..359L. CiteSeerX 10.1.1.53.5659. doi:10.1142/S0129183195000265.
10. R.H. Swendson & J.-S. Wang (1987). "Nonuniversal critical dynamics in Monte Carlo simulations". Physical Review Letters. 58 (2): 86–88. Bibcode:1987PhRvL..58...86S. doi:10.1103/PhysRevLett.58.86. PMID 10034599.
11. U. Wolff (1989). "Collective Monte Carlo Updating for Spin Systems". Physical Review Letters. 62 (4): 361–364. Bibcode:1989PhRvL..62..361W. doi:10.1103/PhysRevLett.62.361. PMID 10040213.
12. O. Shanker (2008). "Algorithms for Fractal Dimension Calculation". Modern Physics Letters B. 22 (7): 459–466. Bibcode:2008MPLB...22..459S. doi:10.1142/S0217984908015048.
13. O. Shanker (2008). "Sharp dimension transition in a shortcut model". J. Phys. A. 41 (28): 285001. Bibcode:2008JPhA...41B5001S. doi:10.1088/1751-8113/41/28/285001. S2CID 121474088.
Shortest-path graph
In mathematics and geographic information science, a shortest-path graph is an undirected graph defined from a set of points in the Euclidean plane. The shortest-path graph was proposed with the idea of inferring edges between a point set such that the shortest path taken over the inferred edges will roughly align with the shortest path taken over the imprecise region represented by the point set. The edge set of the shortest-path graph varies based on a single parameter t ≥ 1. When the weight of an edge is defined as its Euclidean length raised to the power of the parameter t ≥ 1, the edge is present in the shortest-path graph if and only if it is the least weight path between its endpoints.[1]
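A direct, quadratic-size sketch of this definition (assuming the networkx library; the point set is illustrative): weight every pair of points by its Euclidean length raised to the power t, compute all-pairs shortest paths, and keep an edge exactly when the edge itself attains the shortest-path weight between its endpoints.

import math
from itertools import combinations
import networkx as nx

def shortest_path_graph(points, t=2.0):
    # Build the complete graph with edge weights |pq|**t, then keep an edge
    # only if it is itself a least-weight path between its endpoints.
    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i, j in combinations(range(len(points)), 2):
        G.add_edge(i, j, weight=math.dist(points[i], points[j]) ** t)
    dist = dict(nx.all_pairs_dijkstra_path_length(G))
    kept = [(i, j) for i, j in G.edges if G[i][j]["weight"] <= dist[i][j] + 1e-12]
    return nx.Graph(kept)

pts = [(0, 0), (1, 0), (2, 0.2), (0.5, 1.5)]
print(sorted(shortest_path_graph(pts, t=2.0).edges()))   # edges of the toy example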
Properties of shortest-path graph
When the configuration parameter t goes to infinity, shortest-path graph become the minimum spanning tree of the point set. The graph is a subgraph of the point set's Gabriel graph and therefore also a subgraph of its Delaunay triangulation.[1]
References
1. de Berg, Mark; Meulemans, Wouter; Speckmann, Bettina (2011). "Delineating imprecise regions via shortest-path graphs". Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems - GIS '11. Vol. 19. pp. 271–280. doi:10.1145/2093973.2094010. ISBN 9781450310314. S2CID 2359926. Retrieved 2 September 2019.
Shortest path faster algorithm
The Shortest Path Faster Algorithm (SPFA) is an improvement of the Bellman–Ford algorithm which computes single-source shortest paths in a weighted directed graph. The algorithm is believed to work well on random sparse graphs and is particularly suitable for graphs that contain negative-weight edges.[1] However, the worst-case complexity of SPFA is the same as that of Bellman–Ford, so for graphs with nonnegative edge weights Dijkstra's algorithm is preferred. The SPFA algorithm was first published by Edward F. Moore in 1959, as a generalization of breadth first search;[2] SPFA is Moore's “Algorithm D.” The name, “Shortest Path Faster Algorithm (SPFA),” was given by FanDing Duan, a Chinese researcher who rediscovered the algorithm in 1994.[3]
Algorithm
Given a weighted directed graph $G=(V,E)$ and a source vertex $s$, the SPFA algorithm finds the shortest path from $s$, to each vertex $v$, in the graph. The length of the shortest path from $s$ to $v$ is stored in $d(v)$ for each vertex $v$.
The basic idea of SPFA is the same as the Bellman-Ford algorithm in that each vertex is used as a candidate to relax its adjacent vertices. The improvement over the latter is that instead of trying all vertices blindly, SPFA maintains a queue of candidate vertices and adds a vertex to the queue only if that vertex is relaxed. This process repeats until no more vertex can be relaxed.
Below is the pseudo-code of the algorithm.[4] Here $Q$ is a first-in, first-out queue of candidate vertices, and $w(u,v)$ is the edge weight of $(u,v)$.
procedure Shortest-Path-Faster-Algorithm(G, s)
1 for each vertex v ≠ s in V(G)
2 d(v) := ∞
3 d(s) := 0
4 push s into Q
5 while Q is not empty do
6 u := poll Q
7 for each edge (u, v) in E(G) do
8 if d(u) + w(u, v) < d(v) then
9 d(v) := d(u) + w(u, v)
10 if v is not in Q then
11 push v into Q
The algorithm can also be applied to an undirected graph by replacing each undirected edge with two directed edges of opposite directions.
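A direct Python rendering of the pseudo-code above (with adjacency lists stored in a dictionary; the example graph and its weights are illustrative) is given below. It assumes that no negative-weight cycle is reachable from the source:

from collections import deque
from math import inf

def spfa(graph, source):
    # graph maps each vertex to a list of (neighbour, weight) pairs.
    # Returns the dictionary of shortest-path distances from the source.
    d = {v: inf for v in graph}
    d[source] = 0
    queue = deque([source])
    in_queue = {v: False for v in graph}
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if d[u] + w < d[v]:          # relax edge (u, v)
                d[v] = d[u] + w
                if not in_queue[v]:      # enqueue v only if it is not already queued
                    queue.append(v)
                    in_queue[v] = True
    return d

g = {
    "s": [("a", 2), ("b", 7)],
    "a": [("b", -3), ("c", 4)],
    "b": [("c", 1)],
    "c": [],
}
print(spfa(g, "s"))   # {'s': 0, 'a': 2, 'b': -1, 'c': 0}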
Proof of correctness
We will prove that the algorithm never computes incorrect shortest path lengths.
Lemma: Whenever the queue is checked for emptiness, any vertex currently capable of causing relaxation is in the queue.
Proof: We want to show that if $dist[w]>dist[u]+w(u,w)$ for any two vertices $u$ and $w$ at the time the condition is checked, $u$ is in the queue. We do so by induction on the number of iterations of the loop that have already occurred. First we note that this certainly holds before the loop is entered: if $u\not =s$, then relaxation is not possible; relaxation is possible from $u=s$, and this is added to the queue immediately before the while loop is entered. Now, consider what happens inside the loop. A vertex $u$ is popped, and is used to relax all its neighbors, if possible. Therefore, immediately after that iteration of the loop, $u$ is not capable of causing any more relaxations (and does not have to be in the queue anymore). However, the relaxation by $u$ might cause some other vertices to become capable of causing relaxation. If there exists some vertex $x$ such that $dist[x]>dist[w]+w(w,x)$ before the current loop iteration, then $w$ is already in the queue. If this condition becomes true during the current loop iteration, then either $dist[x]$ increased, which is impossible, or $dist[w]$ decreased, implying that $w$ was relaxed. But after $w$ is relaxed, it is added to the queue if it is not already present.
Corollary: The algorithm terminates when and only when no further relaxations are possible.
Proof: If no further relaxations are possible, the algorithm continues to remove vertices from the queue, but does not add any more into the queue, because vertices are added only upon successful relaxations. Therefore the queue becomes empty and the algorithm terminates. If any further relaxations are possible, the queue is not empty, and the algorithm continues to run.
The algorithm fails to terminate if negative-weight cycles are reachable from the source, since in that case further relaxations are always possible. In a graph with no cycles of negative weight, when no more relaxations are possible, the correct shortest paths have been computed. Therefore, in graphs containing no cycles of negative weight, the algorithm will never terminate with incorrect shortest path lengths.[5]
Running time
Experiments suggest that the average running time is $O(|E|)$ on random graphs, but the worst-case running time of the algorithm is $\Omega (|V|\cdot |E|)$, same as the Bellman-Ford algorithm.[1][6]
Optimization techniques
The performance of the algorithm is strongly determined by the order in which candidate vertices are used to relax other vertices. In fact, if $Q$ is a priority queue, then the algorithm closely resembles Dijkstra's algorithm. However, since a priority queue is not used here, two techniques are sometimes employed to improve the quality of the queue, which in turn improves the average-case performance (but not the worst-case performance). Both techniques rearrange the order of elements in $Q$ so that vertices closer to the source are processed first. Therefore, when implementing these techniques, $Q$ is no longer a first-in, first-out queue, but rather a normal doubly linked list or double-ended queue.
Small Label First (SLF) technique. In line 11, instead of always pushing vertex $v$ to the end of the queue, we compare $d(v)$ to $d{\big (}{\text{front}}(Q){\big )}$, and insert $v$ to the front of the queue if $d(v)$ is smaller. The pseudo-code for this technique is (after pushing $v$ to the end of the queue in line 11):
procedure Small-Label-First(G, Q)
if d(back(Q)) < d(front(Q)) then
u := pop back of Q
push u into front of Q
Large Label Last (LLL) technique. After line 11, we update the queue so that the first element is smaller than the average, and any element larger than the average is moved to the end of the queue. The pseudo-code is:
procedure Large-Label-Last(G, Q)
x := average of d(v) for all v in Q
while d(front(Q)) > x
u := pop front of Q
push u to back of Q
References
1. About the so-called SPFA algorithm
2. Moore, Edward F. (1959). "The shortest path through a maze". Proceedings of the International Symposium on the Theory of Switching. Harvard University Press. pp. 285–292.
3. Duan, Fanding (1994), "关于最短路径的SPFA快速算法 [About the SPFA algorithm]", Journal of Southwest Jiaotong University, 29 (2): 207–212
4. "Algorithm Gym :: Graph Algorithms".
5. "Shortest Path Faster Algorithm". wcipeg.
6. "Worst test case for SPFA". Retrieved 2023-05-14.
Graph and tree traversal algorithms
• α–β pruning
• A*
• IDA*
• LPA*
• SMA*
• Best-first search
• Beam search
• Bidirectional search
• Breadth-first search
• Lexicographic
• Parallel
• B*
• Depth-first search
• Iterative Deepening
• D*
• Fringe search
• Jump point search
• Monte Carlo tree search
• SSS*
Shortest path
• Bellman–Ford
• Dijkstra's
• Floyd–Warshall
• Johnson's
• Shortest path faster
• Yen's
Minimum spanning tree
• Borůvka's
• Kruskal's
• Prim's
• Reverse-delete
List of graph search algorithms
Optimization: Algorithms, methods, and heuristics
Unconstrained nonlinear
Functions
• Golden-section search
• Interpolation methods
• Line search
• Nelder–Mead method
• Successive parabolic interpolation
Gradients
Convergence
• Trust region
• Wolfe conditions
Quasi–Newton
• Berndt–Hall–Hall–Hausman
• Broyden–Fletcher–Goldfarb–Shanno and L-BFGS
• Davidon–Fletcher–Powell
• Symmetric rank-one (SR1)
Other methods
• Conjugate gradient
• Gauss–Newton
• Gradient
• Mirror
• Levenberg–Marquardt
• Powell's dog leg method
• Truncated Newton
Hessians
• Newton's method
Constrained nonlinear
General
• Barrier methods
• Penalty methods
Differentiable
• Augmented Lagrangian methods
• Sequential quadratic programming
• Successive linear programming
Convex optimization
Convex
minimization
• Cutting-plane method
• Reduced gradient (Frank–Wolfe)
• Subgradient method
Linear and
quadratic
Interior point
• Affine scaling
• Ellipsoid algorithm of Khachiyan
• Projective algorithm of Karmarkar
Basis-exchange
• Simplex algorithm of Dantzig
• Revised simplex algorithm
• Criss-cross algorithm
• Principal pivoting algorithm of Lemke
Combinatorial
Paradigms
• Approximation algorithm
• Dynamic programming
• Greedy algorithm
• Integer programming
• Branch and bound/cut
Graph
algorithms
Minimum
spanning tree
• Borůvka
• Prim
• Kruskal
Shortest path
• Bellman–Ford
• SPFA
• Dijkstra
• Floyd–Warshall
Network flows
• Dinic
• Edmonds–Karp
• Ford–Fulkerson
• Push–relabel maximum flow
Metaheuristics
• Evolutionary algorithm
• Hill climbing
• Local search
• Parallel metaheuristics
• Simulated annealing
• Spiral optimization algorithm
• Tabu search
• Software
|
Wikipedia
|
Shortest-path tree
In mathematics and computer science, a shortest-path tree rooted at a vertex v of a connected, undirected graph G is a spanning tree T of G, such that the path distance from root v to any other vertex u in T is the shortest path distance from v to u in G.
In connected graphs where shortest paths are well-defined (i.e. where there are no negative-length cycles), we may construct a shortest-path tree using the following algorithm:
1. Compute dist(u), the shortest-path distance from root v to vertex u in G using Dijkstra's algorithm or Bellman–Ford algorithm.
2. For all non-root vertices u, we can assign to u a parent vertex pu such that pu is connected to u, and that dist(pu) + edge_dist(pu,u) = dist(u). In case multiple choices for pu exist, choose pu for which there exists a shortest path from v to pu with as few edges as possible; this tie-breaking rule is needed to prevent loops when there exist zero-length cycles.
3. Construct the shortest-path tree using the edges between each node and its parent.
The above algorithm guarantees the existence of shortest-path trees. Like minimum spanning trees, shortest-path trees in general are not unique.
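For graphs with non-negative edge weights, where Dijkstra's algorithm applies and the zero-length-cycle tie-breaking rule is not needed, the construction can be sketched in Python as follows; the adjacency-list format and function name are illustrative assumptions.

import heapq

def shortest_path_tree(adj, root):
    """Return (dist, parent); the parent pointers define a shortest-path tree.
    adj: dict mapping u -> list of (v, weight) pairs of an undirected graph."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u  # u satisfies dist(u) + edge_dist(u, v) = dist(v)
                heapq.heappush(heap, (nd, v))
    return dist, parent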
In graphs for which all edge weights are equal, shortest path trees coincide with breadth-first search trees.
In graphs that have negative cycles, the set of shortest simple paths from v to all other vertices does not necessarily form a tree.
For simple connected graphs, shortest-path trees can be used[1] to suggest a non-linear relationship between two network centrality measures, closeness and degree. By assuming that the branches of the shortest-path trees are statistically similar for any root node in one network, one may show that the size of the branches depends only on the number of branches connected to the root vertex, i.e., the degree of the root node. From this one deduces that the inverse of closeness, a length scale associated with each vertex, varies approximately linearly with the logarithm of degree. The relationship is not exact, but it captures a correlation between closeness and degree in a large number of networks constructed from real data,[1] and this success suggests that shortest-path trees can be a useful approximation in network analysis.
See also
• Shortest path problem
References
1. Evans, Tim S.; Chen, Bingsheng (2022). "Linking the network centrality measures closeness and degree". Communications Physics. 5 (1): 172. doi:10.1038/s42005-022-00949-5. hdl:10044/1/97904. ISSN 2399-3650.
References
Cahn, Robert S. (1998). Wide Area Network Design: Concepts and Tools for Optimization. Networking. Morgan Kaufmann. ISBN 978-1558604582.
|
Wikipedia
|
de Bruijn sequence
In combinatorial mathematics, a de Bruijn sequence of order n on a size-k alphabet A is a cyclic sequence in which every possible length-n string on A occurs exactly once as a substring (i.e., as a contiguous subsequence). Such a sequence is denoted by B(k, n) and has length $k^{n}$, which is also the number of distinct strings of length n on A. Each of these distinct strings, when taken as a substring of B(k, n), must start at a different position, because substrings starting at the same position are not distinct. Therefore, B(k, n) must have at least $k^{n}$ symbols. And since B(k, n) has exactly $k^{n}$ symbols, de Bruijn sequences are optimally short with respect to the property of containing every string of length n at least once.
Not to be confused with the Moser–de Bruijn sequence, an integer sequence from number theory.
The number of distinct de Bruijn sequences B(k, n) is
${\dfrac {\left(k!\right)^{k^{n-1}}}{k^{n}}}.$
The sequences are named after the Dutch mathematician Nicolaas Govert de Bruijn, who wrote about them in 1946.[1] As he later wrote,[2] the existence of de Bruijn sequences for each order together with the above properties were first proved, for the case of alphabets with two elements, by Camille Flye Sainte-Marie (1894). The generalization to larger alphabets is due to Tatyana van Aardenne-Ehrenfest and de Bruijn (1951). Automata for recognizing these sequences are denoted as de Bruijn automata.
In most applications, A = {0,1}.
History
The earliest known example of a de Bruijn sequence comes from Sanskrit prosody where, since the work of Pingala, each possible three-syllable pattern of long and short syllables is given a name, such as 'y' for short–long–long and 'm' for long–long–long. To remember these names, the mnemonic yamātārājabhānasalagām is used, in which each three-syllable pattern occurs starting at its name: 'yamātā' has a short–long–long pattern, 'mātārā' has a long–long–long pattern, and so on, until 'salagām' which has a short–short–long pattern. This mnemonic, equivalent to a de Bruijn sequence on binary 3-tuples, is of unknown antiquity, but is at least as old as Charles Philip Brown's 1869 book on Sanskrit prosody that mentions it and considers it "an ancient line, written by Pāṇini".[3]
In 1894, A. de Rivière raised the question in an issue of the French problem journal L'Intermédiaire des Mathématiciens, of the existence of a circular arrangement of zeroes and ones of size $2^{n}$ that contains all $2^{n}$ binary sequences of length $n$. The problem was solved (in the affirmative), along with the count of $2^{2^{n-1}-n}$ distinct solutions, by Camille Flye Sainte-Marie in the same year.[2] This was largely forgotten, and Martin (1934) proved the existence of such cycles for general alphabet size in place of 2, with an algorithm for constructing them. Finally, when in 1944 Kees Posthumus conjectured the count $2^{2^{n-1}-n}$ for binary sequences, de Bruijn proved the conjecture in 1946, through which the problem became well-known.[2]
Karl Popper independently describes these objects in his The Logic of Scientific Discovery (1934), calling them "shortest random-like sequences".[4]
Examples
• Taking A = {0, 1}, there are two distinct B(2, 3): 00010111 and 11101000, one being the reverse or negation of the other.
• Two of the 16 possible B(2, 4) in the same alphabet are 0000100110101111 and 0000111101100101.
• Two of the 2048 possible B(2, 5) in the same alphabet are 00000100011001010011101011011111 and 00000101001000111110111001101011.
Construction
The de Bruijn sequences can be constructed by taking a Hamiltonian path of an n-dimensional de Bruijn graph over k symbols (or equivalently, an Eulerian cycle of an (n − 1)-dimensional de Bruijn graph).[5]
An alternative construction involves concatenating together, in lexicographic order, all the Lyndon words whose length divides n.[6]
An inverse Burrows–Wheeler transform can be used to generate the required Lyndon words in lexicographic order.[7]
De Bruijn sequences can also be constructed using shift registers[8] or via finite fields.[9]
Example using de Bruijn graph
Goal: to construct a B(2, 4) de Bruijn sequence of length $2^{4}=16$ using an Eulerian cycle on the (n − 1 = 4 − 1 = 3)-dimensional de Bruijn graph.
Each edge in this 3-dimensional de Bruijn graph corresponds to a sequence of four digits: the three digits that label the vertex that the edge is leaving followed by the one that labels the edge. If one traverses the edge labeled 1 from 000, one arrives at 001, thereby indicating the presence of the subsequence 0001 in the de Bruijn sequence. To traverse each edge exactly once is to use each of the 16 four-digit sequences exactly once.
For example, suppose we follow the following Eulerian path through these vertices:
000, 000, 001, 011, 111, 111, 110, 101, 011,
110, 100, 001, 010, 101, 010, 100, 000.
These are the output sequences of length n:
0 0 0 0
_ 0 0 0 1
_ _ 0 0 1 1
This corresponds to the following de Bruijn sequence:
0 0 0 0 1 1 1 1 0 1 1 0 0 1 0 1
The eight vertices appear in the sequence in the following way:
{0 0 0 0} 1 1 1 1 0 1 1 0 0 1 0 1
0 {0 0 0 1} 1 1 1 0 1 1 0 0 1 0 1
0 0 {0 0 1 1} 1 1 0 1 1 0 0 1 0 1
0 0 0 {0 1 1 1} 1 0 1 1 0 0 1 0 1
0 0 0 0 {1 1 1 1} 0 1 1 0 0 1 0 1
0 0 0 0 1 {1 1 1 0} 1 1 0 0 1 0 1
0 0 0 0 1 1 {1 1 0 1} 1 0 0 1 0 1
0 0 0 0 1 1 1 {1 0 1 1} 0 0 1 0 1
0 0 0 0 1 1 1 1 {0 1 1 0} 0 1 0 1
0 0 0 0 1 1 1 1 0 {1 1 0 0} 1 0 1
0 0 0 0 1 1 1 1 0 1 {1 0 0 1} 0 1
0 0 0 0 1 1 1 1 0 1 1 {0 0 1 0} 1
0 0 0 0 1 1 1 1 0 1 1 0 {0 1 0 1}
0} 0 0 0 1 1 1 1 0 1 1 0 0 {1 0 1 ...
... 0 0} 0 0 1 1 1 1 0 1 1 0 0 1 {0 1 ...
... 0 0 0} 0 1 1 1 1 0 1 1 0 0 1 0 {1 ...
...and then we return to the starting point. Each of the eight 3-digit sequences (corresponding to the eight vertices) appears exactly twice, and each of the sixteen 4-digit sequences (corresponding to the 16 edges) appears exactly once.
Example using inverse Burrows–Wheeler transform
Mathematically, an inverse Burrows–Wheeler transform on a word w generates a multi-set of equivalence classes consisting of strings and their rotations.[7] These equivalence classes of strings each contain a Lyndon word as a unique minimum element, so the inverse Burrows–Wheeler transform can be considered to generate a set of Lyndon words. It can be shown that if we perform the inverse Burrows–Wheeler transform on a word w consisting of the size-k alphabet repeated $k^{n-1}$ times (so that it will produce a word the same length as the desired de Bruijn sequence), then the result will be the set of all Lyndon words whose length divides n. It follows that arranging these Lyndon words in lexicographic order will yield a de Bruijn sequence B(k,n), and that this will be the first de Bruijn sequence in lexicographic order. The following method can be used to perform the inverse Burrows–Wheeler transform, using its standard permutation:
1. Sort the characters in the string w, yielding a new string w′
2. Position the string w′ above the string w, and map each letter's position in w′ to its position in w while preserving order. This process defines the Standard Permutation.
3. Write this permutation in cycle notation with the smallest position in each cycle first, and the cycles sorted in increasing order.
4. For each cycle, replace each number with the corresponding letter from string w′ in that position.
5. Each cycle has now become a Lyndon word, and they are arranged in lexicographic order, so dropping the parentheses yields the first de Bruijn sequence.
For example, to construct the smallest B(2,4) de Bruijn sequence of length $2^{4}=16$, repeat the alphabet (ab) 8 times, yielding w=abababababababab. Sort the characters in w, yielding w′=aaaaaaaabbbbbbbb. Position w′ above w and map each element in w′ to the corresponding element in w while preserving order; numbering the columns allows the cycles of the permutation to be read off:
Starting from the left, the cycles of the standard permutation are: (1) (2 3 5 9) (4 7 13 10) (6 11) (8 15 14 12) (16).
Then, replacing each number by the corresponding letter in w′ from that column yields: (a)(aaab)(aabb)(ab)(abbb)(b).
These are all of the Lyndon words whose length divides 4, in lexicographic order, so dropping the parentheses gives B(2,4) = aaaabaabbababbbb.
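The procedure above can be carried out mechanically. The following Python sketch (the function name is chosen for this example) builds the standard permutation with a stable sort, reads off its cycles in order of their smallest positions, and concatenates the resulting Lyndon words; for the alphabet "ab" and n = 4 it reproduces aaaabaabbababbbb.

def de_bruijn_via_ibwt(alphabet: str, n: int) -> str:
    k = len(alphabet)
    w = alphabet * (k ** (n - 1))                       # e.g. "ab" repeated 8 times for B(2, 4)
    order = sorted(range(len(w)), key=lambda i: w[i])   # stable sort: standard permutation j -> order[j]
    w_sorted = [w[i] for i in order]                    # this is w'
    seen = [False] * len(w)
    pieces = []
    for start in range(len(w)):                         # cycles in order of smallest position
        if seen[start]:
            continue
        j, cycle = start, []
        while not seen[j]:
            seen[j] = True
            cycle.append(w_sorted[j])
            j = order[j]
        pieces.append("".join(cycle))                   # each cycle spells a Lyndon word
    return "".join(pieces)

print(de_bruijn_via_ibwt("ab", 4))  # aaaabaabbababbbb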
Algorithm
The following Python code calculates a de Bruijn sequence, given k and n, based on an algorithm from Frank Ruskey's Combinatorial Generation.[10]
from typing import Iterable, Union, Any

def de_bruijn(k: Union[Iterable[Any], int], n: int) -> str:
    """de Bruijn sequence for alphabet k
    and subsequences of length n.
    """
    # Two kinds of alphabet input: an integer expands
    # to a list of integers used as the alphabet...
    if isinstance(k, int):
        alphabet = list(map(str, range(k)))
    else:
        # ...while any other sequence is used as the alphabet directly.
        alphabet = k
        k = len(k)

    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in sequence)

print(de_bruijn(2, 3))
print(de_bruijn("abcd", 2))
which prints
00010111
aabacadbbcbdccdd
Note that these sequences are understood to "wrap around" in a cycle. For example, the first sequence contains 110 and 100 in this fashion.
Uses
De Bruijn cycles are of general use in neuroscience and psychology experiments that examine the effect of stimulus order upon neural systems,[11] and can be specially crafted for use with functional magnetic resonance imaging.[12]
Angle detection
The symbols of a de Bruijn sequence written around a circular object (such as a wheel of a robot) can be used to identify its angle by examining the n consecutive symbols facing a fixed point. This angle-encoding problem is known as the "rotating drum problem".[13] Gray codes can be used as similar rotary positional encoding mechanisms, a method commonly found in rotary encoders.
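A minimal sketch of the decoding step, assuming the window read from the drum is error-free: because every length-n window occurs exactly once, its position in the cyclically extended sequence identifies the rotation. The function name is illustrative.

def angle_from_window(seq: str, window: str) -> int:
    """Rotational position (0 .. len(seq) - 1) at which `window` starts
    in the cyclic de Bruijn sequence `seq`."""
    n = len(window)
    return (seq + seq[: n - 1]).index(window)

print(angle_from_window("00010111", "110"))  # 6, using the B(2, 3) sequence shown above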
Finding least- or most-significant set bit in a word
A de Bruijn sequence can be used to quickly find the index of the least significant set bit ("right-most 1") or the most significant set bit ("left-most 1") in a word using bitwise operations and multiplication.[14] The following example uses a de Bruijn sequence to determine the index of the least significant set bit (equivalent to counting the number of trailing '0' bits) in a 32 bit unsigned integer:
uint8_t lowestBitIndex(uint32_t v)
{
    static const uint8_t BitPositionLookup[32] = // hash table
    {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
    };
    return BitPositionLookup[((uint32_t)((v & -v) * 0x077CB531U)) >> 27];
}
The lowestBitIndex() function returns the index of the least-significant set bit in v, or zero if v has no set bits. The constant 0x077CB531U in the expression is the B (2, 5) sequence 0000 0111 0111 1100 1011 0101 0011 0001 (spaces added for clarity). The operation (v & -v) zeros all bits except the least-significant bit set, resulting in a new value which is a power of 2. This power of 2 is multiplied (arithmetic modulo 232) by the de Bruijn sequence, thus producing a 32-bit product in which the bit sequence of the 5 MSBs is unique for each power of 2. The 5 MSBs are shifted into the LSB positions to produce a hash code in the range [0, 31], which is then used as an index into hash table BitPositionLookup. The selected hash table value is the bit index of the least significant set bit in v.
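The lookup table is itself determined by the chosen de Bruijn constant. The following Python sketch (32-bit arithmetic is emulated with an explicit mask; the names are illustrative) regenerates the table used above and reimplements the same trick:

DEBRUIJN = 0x077CB531
TABLE = [0] * 32
for i in range(32):
    # Multiplying the power of two 2**i by the constant is a left shift by i,
    # so the top 5 bits of the product form a perfect hash of i.
    TABLE[((DEBRUIJN << i) & 0xFFFFFFFF) >> 27] = i

def lowest_bit_index(v: int) -> int:
    """Index of the least significant set bit of a nonzero 32-bit value."""
    return TABLE[(((v & -v) * DEBRUIJN) & 0xFFFFFFFF) >> 27]

print(lowest_bit_index(0b10100000))  # 5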
The following example determines the index of the most significant bit set in a 32 bit unsigned integer:
uint32_t keepHighestBit(uint32_t n)
{
    n |= (n >> 1);
    n |= (n >> 2);
    n |= (n >> 4);
    n |= (n >> 8);
    n |= (n >> 16);
    return n - (n >> 1);
}

uint8_t highestBitIndex(uint32_t v)
{
    static const uint8_t BitPositionLookup[32] = { // hash table
        0, 1, 16, 2, 29, 17, 3, 22, 30, 20, 18, 11, 13, 4, 7, 23,
        31, 15, 28, 21, 19, 10, 12, 6, 14, 27, 9, 5, 26, 8, 25, 24,
    };
    return BitPositionLookup[(keepHighestBit(v) * 0x06EB14F9U) >> 27];
}
In the above example an alternative de Bruijn sequence (0x06EB14F9U) is used, with corresponding reordering of array values. The choice of this particular de Bruijn sequence is arbitrary, but the hash table values must be ordered to match the chosen de Bruijn sequence. The keepHighestBit() function zeros all bits except the most-significant set bit, resulting in a value which is a power of 2, which is then processed as in the previous example.
Brute-force attacks on locks
Table: B(10, 3) with digits read from top to bottom, then left to right;[15] appending "00" yields a string that brute-forces a 3-digit combination lock.
A de Bruijn sequence can be used to shorten a brute-force attack on a PIN-like code lock that does not have an "enter" key and accepts the last n digits entered. For example, a digital door lock with a 4-digit code (each digit having 10 possibilities, from 0 to 9) would have B(10, 4) solutions, each of length $10^{4}=10000$. Therefore, at most 10000 + 3 = 10003 presses (as the solutions are cyclic) are needed to open the lock, whereas trying all codes separately would require 4 × 10000 = 40000 presses.
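Assuming the de_bruijn function from the Algorithm section above is in scope, the full key-press string can be generated directly (a small illustrative sketch):

seq = de_bruijn(10, 4)      # 10,000 digits; cyclically contains every 4-digit code exactly once
keypresses = seq + seq[:3]  # repeat the first 3 digits to cover the codes that wrap around
print(len(keypresses))      # 10003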
f-fold de Bruijn sequences
An f-fold n-ary de Bruijn sequence is an extension of the notion of an n-ary de Bruijn sequence such that a sequence of length $fk^{n}$ contains every possible subsequence of length n exactly f times. For example, for $n=2$ the cyclic sequences 11100010 and 11101000 are two-fold binary de Bruijn sequences. The number of two-fold de Bruijn sequences $N_{n}$ for $n=1$ is $N_{1}=2$; the other known numbers[16] are $N_{2}=5$, $N_{3}=72$, and $N_{4}=43768$.
De Bruijn torus
A de Bruijn torus is a toroidal array with the property that every k-ary m-by-n matrix occurs exactly once.
Such a pattern can be used for two-dimensional positional encoding in a fashion analogous to that described above for rotary encoding. Position can be determined by examining the m-by-n matrix directly adjacent to the sensor, and calculating its position on the de Bruijn torus.
De Bruijn decoding
Computing the position of a particular unique tuple or matrix in a de Bruijn sequence or torus is known as the de Bruijn decoding problem. Efficient $O(n\log n)$ decoding algorithms exist for special, recursively constructed sequences[17] and extend to the two-dimensional case.[18] De Bruijn decoding is of interest, e.g., in cases where large sequences or tori are used for positional encoding.
See also
• Normal number
• Linear-feedback shift register
• n-sequence
• BEST theorem
• Superpermutation
Notes
1. de Bruijn (1946).
2. de Bruijn (1975).
3. Brown (1869); Stein (1963); Kak (2000); Knuth (2006); Hall (2008).
4. Popper (2002).
5. Klein (2013).
6. According to Berstel & Perrin (2007), the sequence generated in this way was first described (with a different generation method) by Martin (1934), and the connection between it and Lyndon words was observed by Fredricksen & Maiorana (1978).
7. Higgins (2012).
8. Goresky & Klapper (2012).
9. Ralston (1982), pp. 136–139.
10. "De Bruijn sequences". Sage. Retrieved 2023-03-07.
11. Aguirre, Mattar & Magis-Weinberg (2011).
12. "De Bruijn cycle generator".
13. van Lint & Wilson (2001).
14. Anderson (1997–2009); Busch (2009)
15. "De Bruijn (DeBruijn) sequence (K=10, n=3)".
16. Osipov (2016).
17. Tuliani (2001).
18. Hurlbert & Isaak (1993).
References
• van Aardenne-Ehrenfest, Tanja; de Bruijn, Nicolaas Govert (1951). "Circuits and trees in oriented linear graphs" (PDF). Simon Stevin. 28: 203–217. MR 0047311.
• Aguirre, G. K.; Mattar, M. G.; Magis-Weinberg, L. (2011). "de Bruijn cycles for neural decoding". NeuroImage. 56 (3): 1293–1300. doi:10.1016/j.neuroimage.2011.02.005. PMC 3104402. PMID 21315160.
• Anderson, Sean Eron (1997–2009). "Bit Twiddling Hacks". Stanford University. Retrieved 2009-02-12.
• Berstel, Jean; Perrin, Dominique (2007). "The origins of combinatorics on words" (PDF). European Journal of Combinatorics. 28 (3): 996–1022. doi:10.1016/j.ejc.2005.07.019. MR 2300777.
• Brown, C. P. (1869). Sanskrit Prosody and Numerical Symbols Explained. p. 28.
• de Bruijn, Nicolaas Govert (1946). "A combinatorial problem" (PDF). Proc. Koninklijke Nederlandse Akademie V. Wetenschappen. 49: 758–764. MR 0018142, Indagationes Mathematicae 8: 461–467{{cite journal}}: CS1 maint: postscript (link)
• de Bruijn, Nicolaas Govert (1975). Acknowledgement of Priority to C. Flye Sainte-Marie on the counting of circular arrangements of 2n zeros and ones that show each n-letter word exactly once (PDF). T.H.-Report 75-WSK-06. Technological University Eindhoven.
• Busch, Philip (2009). "Computing Trailing Zeros HOWTO". Retrieved 2015-01-29.
• Flye Sainte-Marie, Camille (1894). "Solution to question nr. 48". L'Intermédiaire des Mathématiciens. 1: 107–110.
• Goresky, Mark; Klapper, Andrew (2012). "8.2.5 Shift register generation of de Bruijn sequences". Algebraic Shift Register Sequences. Cambridge University Press. pp. 174–175. ISBN 978-1-10701499-2.
• Hall, Rachel W. (2008). "Math for poets and drummers" (PDF). Math Horizons. 15 (3): 10–11. doi:10.1080/10724117.2008.11974752. S2CID 3637061. Archived from the original (PDF) on 2012-02-12. Retrieved 2008-10-22.
• Higgins, Peter (November 2012). "Burrows-Wheeler transforms and de Bruijn words" (PDF). Retrieved 2017-02-11.
• Hurlbert, Glenn; Isaak, Garth (1993). "On the de Bruijn torus problem". Journal of Combinatorial Theory. Series A. 64 (1): 50–62. doi:10.1016/0097-3165(93)90087-O. MR 1239511.
• Kak, Subhash (2000). "Yamātārājabhānasalagāṃ an interesting combinatoric sūtra" (PDF). Indian Journal of History of Science. 35 (2): 123–127. Archived from the original (PDF) on 2014-10-29.
• Klein, Andreas (2013). Stream Ciphers. Springer. p. 59. ISBN 978-1-44715079-4.
• Knuth, Donald Ervin (2006). The Art of Computer Programming, Fascicle 4: Generating All Trees – History of Combinatorial Generation. Addison–Wesley. p. 50. ISBN 978-0-321-33570-8.
• Fredricksen, Harold; Maiorana, James (1978). "Necklaces of beads in k colors and k-ary de Bruijn sequences". Discrete Mathematics. 23 (3): 207–210. doi:10.1016/0012-365X(78)90002-X. MR 0523071.
• Martin, Monroe H. (1934). "A problem in arrangements" (PDF). Bulletin of the American Mathematical Society. 40 (12): 859–864. doi:10.1090/S0002-9904-1934-05988-3. MR 1562989.
• Osipov, Vladimir (2016). "Wavelet Analysis on Symbolic Sequences and Two-Fold de Bruijn Sequences". Journal of Statistical Physics. 164 (1): 142–165. arXiv:1601.02097. Bibcode:2016JSP...164..142O. doi:10.1007/s10955-016-1537-5. ISSN 1572-9613. S2CID 16535836.
• Popper, Karl (2002) [1934]. The logic of scientific discovery. Routledge. p. 294. ISBN 978-0-415-27843-0.
• Ralston, Anthony (1982). "de Bruijn sequences—a model example of the interaction of discrete mathematics and computer science". Mathematics Magazine. 55 (3): 131–143. doi:10.2307/2690079. JSTOR 2690079. MR 0653429.
• Stein, Sherman K. (1963). "Yamátárájabhánasalagám". The Man-made Universe: An Introduction to the Spirit of Mathematics. pp. 110–118. Reprinted in Wardhaugh, Benjamin, ed. (2012), A Wealth of Numbers: An Anthology of 500 Years of Popular Mathematics Writing, Princeton University Press, pp. 139–144.
• Tuliani, Jonathan (2001). "de Bruijn sequences with efficient decoding algorithms". Discrete Mathematics. 226 (1–3): 313–336. doi:10.1016/S0012-365X(00)00117-5. MR 1802599.
• van Lint, J. H.; Wilson, Richard Michael (2001). A Course in Combinatorics. Cambridge University Press. p. 71. ISBN 978-0-52100601-9.
External links
• Weisstein, Eric W. "de Bruijn Sequence". MathWorld.
• OEIS sequence A166315 (Lexicographically smallest binary de Bruijn sequences)
• De Bruijn sequence
• CGI generator
• Applet generator
• Javascript generator and decoder. Implementation of J. Tuliani's algorithm.
• Door code lock
• Minimal arrays containing all sub-array combinations of symbols: De Bruijn sequences and tori
• http://debruijnsequence.org has many kinds of de Bruijn sequences.
|
Wikipedia
|
Hill climbing
In numerical analysis, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found.
For example, hill climbing can be applied to the travelling salesman problem. It is easy to find an initial solution that visits all the cities but will likely be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained.
Hill climbing finds optimal solutions for convex problems – for other problems it will find only local optima (solutions that cannot be improved upon by any neighboring configurations), which are not necessarily the best possible solution (the global optimum) out of all possible solutions (the search space). Examples of algorithms that solve convex problems by hill-climbing include the simplex algorithm for linear programming and binary search.[1]: 253 To attempt to avoid getting stuck in local optima, one could use restarts (i.e. repeated local search), or more complex schemes based on iterations (like iterated local search), or on memory (like reactive search optimization and tabu search), or on memory-less stochastic modifications (like simulated annealing).
The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms. It is used widely in artificial intelligence, for reaching a goal state from a starting node. Different choices for next nodes and starting nodes are used in related algorithms. Although more advanced algorithms such as simulated annealing or tabu search may give better results, in some situations hill climbing works just as well. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems, so long as a small number of increments typically converges on a good solution (the optimal solution or a close approximation). At the other extreme, bubble sort can be viewed as a hill climbing algorithm (every adjacent element exchange decreases the number of disordered element pairs), yet this approach is far from efficient for even modest N, as the number of exchanges required grows quadratically.
Hill climbing is an anytime algorithm: it can return a valid solution even if it's interrupted at any time before it ends.
Mathematical description
Hill climbing attempts to maximize (or minimize) a target function $f(\mathbf {x} )$, where $\mathbf {x} $ is a vector of continuous and/or discrete values. At each iteration, hill climbing will adjust a single element in $\mathbf {x} $ and determine whether the change improves the value of $f(\mathbf {x} )$. (Note that this differs from gradient descent methods, which adjust all of the values in $\mathbf {x} $ at each iteration according to the gradient of the hill.) With hill climbing, any change that improves $f(\mathbf {x} )$ is accepted, and the process continues until no change can be found to improve the value of $f(\mathbf {x} )$. Then $\mathbf {x} $ is said to be "locally optimal".
In discrete vector spaces, each possible value for $\mathbf {x} $ may be visualized as a vertex in a graph. Hill climbing will follow the graph from vertex to vertex, always locally increasing (or decreasing) the value of $f(\mathbf {x} )$, until a local maximum (or local minimum) $x_{m}$ is reached.
Variants
In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the closest to the solution is chosen. Both forms fail if there is no closer node, which may happen if there are local maxima in the search space which are not solutions. Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one.
Stochastic hill climbing does not examine all neighbors before deciding how to move. Rather, it selects a neighbor at random, and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another.
Coordinate descent does a line search along one coordinate direction at the current point in each iteration. Some versions of coordinate descent randomly pick a different coordinate direction each iteration.
Random-restart hill climbing is a meta-algorithm built on top of the hill climbing algorithm. It is also known as Shotgun hill climbing. It iteratively does hill-climbing, each time with a random initial condition $x_{0}$. The best $x_{m}$ is kept: if a new run of hill climbing produces a better $x_{m}$ than the stored state, it replaces the stored state.
Random-restart hill climbing is a surprisingly effective algorithm in many cases. It turns out that it is often better to spend CPU time exploring the space, than carefully optimizing from an initial condition.
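A random-restart wrapper can be sketched generically in Python; here hill_climb is assumed to be any routine that returns a (state, score) pair, and all names are illustrative.

def random_restart(hill_climb, objective, random_start, restarts: int):
    """Keep the best local optimum found over independent hill-climbing runs."""
    best_x, best_score = None, float("-inf")
    for _ in range(restarts):
        x, score = hill_climb(objective, random_start())
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score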
Problems
Local maxima
Hill climbing will not necessarily find the global maximum, but may instead converge on a local maximum. This problem does not occur if the heuristic is convex. However, as many functions are not convex hill climbing may often fail to reach a global maximum. Other local search algorithms try to overcome this problem such as stochastic hill climbing, random walks and simulated annealing.
Ridges and alleys
Ridges are a challenging problem for hill climbers that optimize in continuous spaces. Because hill climbers only adjust one element in the vector at a time, each step will move in an axis-aligned direction. If the target function creates a narrow ridge that ascends in a non-axis-aligned direction (or if the goal is to minimize, a narrow alley that descends in a non-axis-aligned direction), then the hill climber can only ascend the ridge (or descend the alley) by zig-zagging. If the sides of the ridge (or alley) are very steep, then the hill climber may be forced to take very tiny steps as it zig-zags toward a better position. Thus, it may take an unreasonable length of time for it to ascend the ridge (or descend the alley).
By contrast, gradient descent methods can move in any direction that the ridge or alley may ascend or descend. Hence, gradient descent or the conjugate gradient method is generally preferred over hill climbing when the target function is differentiable. Hill climbers, however, have the advantage of not requiring the target function to be differentiable, so hill climbers may be preferred when the target function is complex.
Plateau
Another problem that sometimes occurs with hill climbing is that of a plateau. A plateau is encountered when the search space is flat, or sufficiently flat that the value returned by the target function is indistinguishable from the value returned for nearby regions due to the precision used by the machine to represent its value. In such cases, the hill climber may not be able to determine in which direction it should step, and may wander in a direction that never leads to improvement.
Pseudocode
algorithm Discrete Space Hill Climbing is
    currentNode := startNode
    loop do
        L := NEIGHBORS(currentNode)
        nextEval := −INF
        nextNode := NULL
        for all x in L do
            if EVAL(x) > nextEval then
                nextNode := x
                nextEval := EVAL(x)
        if nextEval ≤ EVAL(currentNode) then
            // Return current node since no better neighbors exist
            return currentNode
        currentNode := nextNode
algorithm Continuous Space Hill Climbing is
    currentPoint := initialPoint          // the zero-magnitude vector is common
    stepSize := initialStepSizes          // a vector of all 1's is common
    acceleration := someAcceleration      // a value such as 1.2 is common
    candidate[0] := −acceleration
    candidate[1] := −1 / acceleration
    candidate[2] := 1 / acceleration
    candidate[3] := acceleration
    bestScore := EVAL(currentPoint)
    loop do
        beforeScore := bestScore
        for each element i in currentPoint do
            beforePoint := currentPoint[i]
            bestStep := 0
            for j from 0 to 3 do          // try each of 4 candidate locations
                step := stepSize[i] × candidate[j]
                currentPoint[i] := beforePoint + step
                score := EVAL(currentPoint)
                if score > bestScore then
                    bestScore := score
                    bestStep := step
            if bestStep is 0 then
                currentPoint[i] := beforePoint
                stepSize[i] := stepSize[i] / acceleration
            else
                currentPoint[i] := beforePoint + bestStep
                stepSize[i] := bestStep   // acceleration
        if (bestScore − beforeScore) < epsilon then
            return currentPoint
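A direct Python transcription of the continuous-space pseudocode above (maximization; the function name and default parameters are illustrative):

def continuous_hill_climb(f, initial_point, step=1.0, acceleration=1.2, epsilon=1e-8):
    x = list(initial_point)
    step_size = [step] * len(x)
    candidates = [-acceleration, -1.0 / acceleration, 1.0 / acceleration, acceleration]
    best_score = f(x)
    while True:
        before_score = best_score
        for i in range(len(x)):
            before_point = x[i]
            best_step = 0.0
            for c in candidates:                  # try each of the 4 candidate steps
                s = step_size[i] * c
                x[i] = before_point + s
                score = f(x)
                if score > best_score:
                    best_score, best_step = score, s
            if best_step == 0.0:
                x[i] = before_point
                step_size[i] /= acceleration      # shrink the step when nothing improved
            else:
                x[i] = before_point + best_step
                step_size[i] = best_step
        if best_score - before_score < epsilon:
            return x, best_score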
Contrast with genetic algorithms and random optimization.
See also
• Gradient descent
• Greedy algorithm
• Tâtonnement
• Mean-shift
• A* search algorithm
References
• Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, pp. 111–114, ISBN 0-13-790395-2
1. Skiena, Steven (2010). The Algorithm Design Manual (2nd ed.). Springer Science+Business Media. ISBN 1-849-96720-2.
Further reading
• Lasry, George (2018). A Methodology for the Cryptanalysis of Classical Ciphers with Search Metaheuristics (PDF). Kassel University Press. ISBN 978-3-7376-0459-8.
External links
• Hill climbing at Wikibooks
|
Wikipedia
|
Shreeram Shankar Abhyankar
Shreeram Shankar Abhyankar (22 July 1930 – 2 November 2012)[1][2] was an Indian American mathematician known for his contributions to algebraic geometry. He, at the time of his death, held the Marshall Distinguished Professor of Mathematics Chair at Purdue University, and was also a professor of computer science and industrial engineering. He is known for Abhyankar's conjecture of finite group theory.
Shreeram Shankar Abhyankar
Shreeram Abhyankar (right) with Alexander Grothendieck (left), Michael Artin in the background, at Montreal, Quebec, Canada in 1970.
Born: 22 July 1930, Ujjain, India
Died: 2 November 2012 (aged 82), West Lafayette, Indiana, USA
Citizenship: United States
Alma mater: University of Mumbai; Harvard University
Known for: Abhyankar's conjecture, Abhyankar's lemma, Abhyankar–Moh theorem
Awards: Chauvenet Prize (1978)
Scientific career
Fields: Mathematics
Institutions: Purdue University
Doctoral advisor: Oscar Zariski
His latest research was in the area of computational and algorithmic algebraic geometry.
Career
Abhyankar was born in a Chitpavan Brahmin family in Ujjain, Madhya Pradesh, India. He earned his B.Sc. from Royal Institute of Science of University of Mumbai in 1951, his M.A. at Harvard University in 1952, and his Ph.D. at Harvard in 1955. His thesis, written under the direction of Oscar Zariski, was titled Local uniformization on algebraic surfaces over modular ground fields.[3][4] Before going to Purdue, he was an associate professor of mathematics at Cornell University and Johns Hopkins University.
Abhyankar was appointed the Marshall Distinguished Professor of Mathematics at Purdue in 1967. His research topics include algebraic geometry (particularly resolution of singularities, a field in which he made significant progress over fields of finite characteristic), commutative algebra, local algebra, valuation theory, theory of functions of several complex variables, quantum electrodynamics, circuit theory, invariant theory, combinatorics, computer-aided design, and robotics. He popularized the Jacobian conjecture.
Death
Abhyankar died of a heart condition on 2 November 2012 at his residence near Purdue University.[5]
Selected publications
• Abhyankar, Shreeram S. (1967). "Local rings of high embedding dimension". American Journal of Mathematics. 89 (4): 1073–1077. doi:10.2307/2373418. JSTOR 2373418. MR 0220723.
• Abhyankar, Shreeram S. (1977). Lectures on expansion techniques in algebraic geometry. Tata Institute of Fundamental Research Lectures on Mathematics and Physics. Vol. 57. Notes by Balwant Singh. Tata Institute of Fundamental Research. MR 0542446.
• Abhyankar, Shreeram S.; Moh, Tzuong-Tsieng (1975). "Embeddings of the line in the plane". Journal für die reine und angewandte Mathematik. 276: 148–166. MR 0379502.
• Zariski, Oscar (1971). Algebraic surfaces. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 61. With appendices by Shreeram S. Abhyankar, Joseph Lipman, and David Mumford (Second supplemented ed.). New York–Heidelberg: Springer-Verlag. MR 0469915.
Honours
Abhyankar has won numerous awards and honours.
• Abhyankar received the Herbert Newby McCoy Award from Purdue University in 1973.
• Fellow of the Indian Academy of Sciences
• Editorial board member of the Indian Journal of Pure and Applied Mathematics
• Chauvenet Prize from the Mathematical Association of America (1978)[6]
• Honorary Doctorate Degree (Docteur Honoris Causa) by the University of Angers in France (29 October 1998)
• Fellow of the American Mathematical Society (2012)[7]
See also
• Abhyankar's conjecture
• Abhyankar's inequality
• Abhyankar's lemma
• Abhyankar–Moh theorem
References
1. "S. S. Abhyankar (1930–2012)" (PDF). Current Science.ac.in. Archived from the original (PDF) on 16 March 2013.
2. by SR Ghorpade. "Remembering Shreeram S. Abhyankar" (PDF). Asia Pacific Mathematics Newsletter. Archived from the original (PDF) on 18 June 2013.
3. Shreeram Shankar Abhyankar at the Mathematics Genealogy Project
4. "Abhyankar biography". Retrieved 20 January 2017.
5. "Shreeram S. Abhyankar. Obituary". Retrieved 24 November 2012.
6. Abhyankar, Shreeram (1976). "Historical ramblings in algebraic geometry and related algebra". Amer. Math. Monthly. 83 (6): 409–448. doi:10.2307/2318338. JSTOR 2318338.
7. List of Fellows of the American Mathematical Society, retrieved 2012-11-03.
External links
Wikimedia Commons has media related to Shreeram Shankar Abhyankar.
• "Homepage of Shreeram Abhyankar".
• "Obituary of Shreeram Abhyankar". Legacy.com.
• Mulay, Shashikant; Sathaye, Avinash (November 2014), "Shreeram Abhyankar (July 22, 1930 – November 2, 2012)" (PDF), Notices of the American Mathematical Society, Providence, RI: American Mathematical Society, 61 (10): 1196–1216, doi:10.1090/noti1175
• Shreeram Shankar Abhyankar at the Mathematics Genealogy Project
Chauvenet Prize recipients
• 1925 G. A. Bliss
• 1929 T. H. Hildebrandt
• 1932 G. H. Hardy
• 1935 Dunham Jackson
• 1938 G. T. Whyburn
• 1941 Saunders Mac Lane
• 1944 R. H. Cameron
• 1947 Paul Halmos
• 1950 Mark Kac
• 1953 E. J. McShane
• 1956 Richard H. Bruck
• 1960 Cornelius Lanczos
• 1963 Philip J. Davis
• 1964 Leon Henkin
• 1965 Jack K. Hale and Joseph P. LaSalle
• 1967 Guido Weiss
• 1968 Mark Kac
• 1970 Shiing-Shen Chern
• 1971 Norman Levinson
• 1972 François Trèves
• 1973 Carl D. Olds
• 1974 Peter D. Lax
• 1975 Martin Davis and Reuben Hersh
• 1976 Lawrence Zalcman
• 1977 W. Gilbert Strang
• 1978 Shreeram S. Abhyankar
• 1979 Neil J. A. Sloane
• 1980 Heinz Bauer
• 1981 Kenneth I. Gross
• 1982 No award given.
• 1983 No award given.
• 1984 R. Arthur Knoebel
• 1985 Carl Pomerance
• 1986 George Miel
• 1987 James H. Wilkinson
• 1988 Stephen Smale
• 1989 Jacob Korevaar
• 1990 David Allen Hoffman
• 1991 W. B. Raymond Lickorish and Kenneth C. Millett
• 1992 Steven G. Krantz
• 1993 David H. Bailey, Jonathan M. Borwein and Peter B. Borwein
• 1994 Barry Mazur
• 1995 Donald G. Saari
• 1996 Joan Birman
• 1997 Tom Hawkins
• 1998 Alan Edelman and Eric Kostlan
• 1999 Michael I. Rosen
• 2000 Don Zagier
• 2001 Carolyn S. Gordon and David L. Webb
• 2002 Ellen Gethner, Stan Wagon, and Brian Wick
• 2003 Thomas C. Hales
• 2004 Edward B. Burger
• 2005 John Stillwell
• 2006 Florian Pfender & Günter M. Ziegler
• 2007 Andrew J. Simoson
• 2008 Andrew Granville
• 2009 Harold P. Boas
• 2010 Brian J. McCartin
• 2011 Bjorn Poonen
• 2012 Dennis DeTurck, Herman Gluck, Daniel Pomerleano & David Shea Vela-Vick
• 2013 Robert Ghrist
• 2014 Ravi Vakil
• 2015 Dana Mackenzie
• 2016 Susan H. Marshall & Donald R. Smith
• 2017 Mark Schilling
• 2018 Daniel J. Velleman
• 2019 Tom Leinster
• 2020 Vladimir Pozdnyakov & J. Michael Steele
• 2021 Travis Kowalski
• 2022 William Dunham, Ezra Brown & Matthew Crawford
|
Wikipedia
|
Shrewd cardinal
In mathematics, a shrewd cardinal is a certain kind of large cardinal number introduced by (Rathjen 1995), extending the definition of indescribable cardinals.
For an ordinal λ, a cardinal number κ is called λ-shrewd if for every proposition φ, and set A ⊆ Vκ with (Vκ+λ, ∈, A) ⊧ φ there exists an α, λ' < κ with (Vα+λ', ∈, A ∩ Vα) ⊧ φ. It is called shrewd if it is λ-shrewd for every λ[1](Definition 4.1) (including λ > κ).
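In symbols, the defining clause reads:
$\kappa {\text{ is }}\lambda {\text{-shrewd}}\iff {\text{for every }}\varphi {\text{ and every }}A\subseteq V_{\kappa }{\text{: if }}(V_{\kappa +\lambda },\in ,A)\models \varphi {\text{, then there exist }}\alpha ,\lambda '<\kappa {\text{ with }}(V_{\alpha +\lambda '},\in ,A\cap V_{\alpha })\models \varphi .$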
This definition extends the concept of indescribability to transfinite levels. A λ-shrewd cardinal is also μ-shrewd for any ordinal μ < λ.[1](Corollary 4.3) Shrewdness was developed by Michael Rathjen as part of his ordinal analysis of Π¹₂-comprehension. It is essentially the nonrecursive analog to the stability property for admissible ordinals.
More generally, a cardinal number κ is called λ-Πm-shrewd if for every Πm proposition φ, and set A ⊆ Vκ with (Vκ+λ, ∈, A) ⊧ φ there exists an α, λ' < κ with (Vα+λ', ∈, A ∩ Vα) ⊧ φ.[1](Definition 4.1) Πm is one of the levels of the Lévy hierarchy, in short one looks at formulas with m-1 alternations of quantifiers with the outermost quantifier being universal.
For finite n, an n-Πm-shrewd cardinal is the same thing as a Πmn-indescribable cardinal.
If κ is a subtle cardinal, then the set of κ-shrewd cardinals is stationary in κ.[1](Lemma 4.6) A cardinal is strongly unfoldable iff it is shrewd.[2]
λ-shrewdness is an improved version of λ-indescribability, as defined in Drake; this cardinal property differs in that the reflected substructure must be (Vα+λ, ∈, A ∩ Vα), making it impossible for a cardinal κ to be κ-indescribable. Also, the monotonicity property is lost: a λ-indescribable cardinal may fail to be α-indescribable for some ordinal α < λ.
References
• Drake, F. R. (1974). Set Theory: An Introduction to Large Cardinals (Studies in Logic and the Foundations of Mathematics ; V. 76). Elsevier Science Ltd. ISBN 0-444-10535-2.
• Rathjen, Michael (2006). "The Art of Ordinal Analysis" (PDF). Archived from the original (PDF) on 2009-12-22. Retrieved 2009-08-13.
• Rathjen, Michael (1995), "Recent advances in ordinal analysis: Π12-CA and related systems", The Bulletin of Symbolic Logic, 1 (4): 468–485, doi:10.2307/421132, ISSN 1079-8986, JSTOR 421132, MR 1369172, S2CID 10648711
1. M. Rathjen, "The Art of Ordinal Analysis". Accessed June 20 2022.
2. Lücke, Philipp (2021). "Strong unfoldability, shrewdness and combinatorial consequences". arXiv:2107.12722 [math.LO]. Accessed 4 July 2023.
|
Wikipedia
|
Shrinkage (statistics)
In statistics, shrinkage is the reduction in the effects of sampling variation. In regression analysis, a fitted relationship appears to perform less well on a new data set than on the data set used for fitting.[1] In particular the value of the coefficient of determination 'shrinks'. This idea is complementary to overfitting and, separately, to the standard adjustment made in the coefficient of determination to compensate for the subjunctive effects of further sampling, like controlling for the potential of new explanatory terms improving the model by chance: that is, the adjustment formula itself provides "shrinkage." But the adjustment formula yields an artificial shrinkage.
A shrinkage estimator is an estimator that, either explicitly or implicitly, incorporates the effects of shrinkage. In loose terms this means that a naive or raw estimate is improved by combining it with other information. The term relates to the notion that the improved estimate is made closer to the value supplied by the 'other information' than the raw estimate. In this sense, shrinkage is used to regularize ill-posed inference problems.
Shrinkage is implicit in Bayesian inference and penalized likelihood inference, and explicit in James–Stein-type inference. In contrast, simple types of maximum-likelihood and least-squares estimation procedures do not include shrinkage effects, although they can be used within shrinkage estimation schemes.
Description
Many standard estimators can be improved, in terms of mean squared error (MSE), by shrinking them towards zero (or any other fixed constant value). In other words, the improvement in the estimate from the corresponding reduction in the width of the confidence interval can outweigh the worsening of the estimate introduced by biasing the estimate towards zero (see bias-variance tradeoff).
Assume that the expected value of the raw estimate is not zero and consider other estimators obtained by multiplying the raw estimate by a certain parameter. A value for this parameter can be specified so as to minimize the MSE of the new estimate. For this value of the parameter, the new estimate will have a smaller MSE than the raw one. Thus it has been improved. An effect here may be to convert an unbiased raw estimate to an improved biased one.
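For a concrete one-dimensional illustration, suppose a raw estimate $X$ has mean $\theta \neq 0$ and variance $v$, and consider the shrunk estimate $aX$ for a constant $a$. Its mean squared error is
$\operatorname {MSE} (aX)=a^{2}v+(a-1)^{2}\theta ^{2},$
which is minimized at $a={\frac {\theta ^{2}}{\theta ^{2}+v}}<1$: the optimal multiplier always pulls the raw estimate towards zero, trading a small bias for a larger reduction in variance.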
Examples
A well-known example arises in the estimation of the population variance by sample variance. For a sample size of n, the use of a divisor n − 1 in the usual formula (Bessel's correction) gives an unbiased estimator, while other divisors have lower MSE, at the expense of bias. The optimal choice of divisor (weighting of shrinkage) depends on the excess kurtosis of the population, as discussed at mean squared error: variance, but one can always do better (in terms of MSE) than the unbiased estimator; for the normal distribution a divisor of n + 1 gives one which has the minimum mean squared error.
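A quick Monte Carlo check of this claim for normally distributed data (a sketch assuming NumPy is available; the sample size and number of trials are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, trials, true_var = 10, 200_000, 1.0
x = rng.standard_normal((trials, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)  # sums of squared deviations
for divisor in (n - 1, n, n + 1):
    mse = ((ss / divisor - true_var) ** 2).mean()
    print(divisor, round(float(mse), 4))  # the divisor n + 1 gives the smallest MSE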
Methods
Types of regression that involve shrinkage estimates include ridge regression, where coefficients derived from a regular least squares regression are brought closer to zero by multiplying by a constant (the shrinkage factor), and lasso regression, where coefficients are brought closer to zero by adding or subtracting a constant.
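For illustration, a minimal comparison using scikit-learn (assuming scikit-learn and NumPy are installed; the synthetic data and penalty strengths are arbitrary choices for this sketch):

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10)
beta[:3] = [3.0, -2.0, 1.5]        # only three truly informative coefficients
y = X @ beta + rng.standard_normal(50)

for model in (LinearRegression(), Ridge(alpha=5.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    # Ridge and lasso coefficients are pulled towards zero relative to least squares
    print(type(model).__name__, np.round(model.coef_, 2))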
The use of shrinkage estimators in the context of regression analysis, where there may be a large number of explanatory variables, has been described by Copas.[2] Here the values of the estimated regression coefficients are shrunk towards zero with the effect of reducing the mean square error of predicted values from the model when applied to new data. A later paper by Copas[3] applies shrinkage in a context where the problem is to predict a binary response on the basis of binary explanatory variables.
Hausser and Strimmer "develop a James-Stein-type shrinkage estimator, resulting in a procedure that is highly efficient statistically as well as computationally. Despite its simplicity, ...it outperforms eight other entropy estimation procedures across a diverse range of sampling scenarios and data-generating models, even in cases of severe undersampling. ...method is fully analytic and hence computationally inexpensive. Moreover, ...procedure simultaneously provides estimates of the entropy and of the cell frequencies. ...The proposed shrinkage estimators of entropy and mutual information, as well as all other investigated entropy estimators, have been implemented in R (R Development Core Team, 2008). A corresponding R package “entropy” was deposited in the R archive CRAN and is accessible at the URL https://cran.r-project.org/web/packages/entropy/ under the GNU General Public License."[4]
See also
• Additive smoothing
• Boosting (machine learning)
• Decision stump
• Chapman estimator
• Principal component regression
• Regularization (mathematics)
• Shrinkage estimation in the estimation of covariance matrices
• Stein's example
• Tikhonov regularization
Statistical software
• Hausser, Jean. "entropy". entropy package for R. Retrieved 2013-03-23.
References
1. Everitt B.S. (2002) Cambridge Dictionary of Statistics (2nd Edition), CUP. ISBN 0-521-81099-X
2. Copas, J.B. (1983). "Regression, Prediction and Shrinkage". Journal of the Royal Statistical Society, Series B. 45 (3): 311–354. JSTOR 2345402. MR 0737642.
3. Copas, J.B. (1993). "The shrinkage of point scoring methods". Journal of the Royal Statistical Society, Series C. 42 (2): 315–331. JSTOR 2986235.
4. Hausser, Jean; Strimmer (2009). "Entropy Inference and the James-Stein Estimator, with Application to Nonlinear Gene Association Networks" (PDF). Journal of Machine Learning Research. 10: 1469–1484. Retrieved 2013-03-23.
|
Wikipedia
|
Shrinking space
In mathematics, in the field of topology, a topological space is said to be a shrinking space if every open cover admits a shrinking. A shrinking of an open cover is another open cover indexed by the same indexing set, with the property that the closure of each open set in the shrinking lies inside the corresponding original open set.[1]
Properties
The following facts are known about shrinking spaces:
• Every shrinking space is normal.[1]
• Every shrinking space is countably paracompact.[1]
• In a normal space, every locally finite, and in fact, every point-finite open cover admits a shrinking.[1]
• Thus, every normal metacompact space is a shrinking space. In particular, every paracompact space is a shrinking space.[1]
These facts are particularly important because shrinking of open covers is a common technique in the theory of differential manifolds and while constructing functions using a partition of unity.
See also
• Topological property – Mathematical property of a space
References
1. Hart, K. P.; Nagata, Jun-iti; Vaughan, J. E. (2003), Encyclopedia of General Topology, Elsevier, p. 199, ISBN 9780080530864.
• General topology, Stephen Willard, definition 15.9 p. 104
|
Wikipedia
|
Scott Aaronson
Scott Joel Aaronson (born May 21, 1981)[1] is an American theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.
Scott Aaronson
Aaronson in 2011
Born: Scott Joel Aaronson, May 21, 1981, Philadelphia, Pennsylvania, United States
Nationality: American
Alma mater
• Cornell University
• University of California, Berkeley
Known for
• Quantum Turing machine with postselection
• Algebrization
• Boson sampling
Spouse: Dana Moshkovitz
Awards
• Alan T. Waterman Award
• PECASE
• Tomassoni–Chisesi Prize
• ACM Prize in Computing
Scientific career
Fields: Computational complexity theory, quantum computing
Institutions
• University of Texas at Austin
• Massachusetts Institute of Technology
• Institute for Advanced Study
• University of Waterloo
Doctoral advisor: Umesh Vazirani
Website: www.scottaaronson.com/blog/
Early life and education
Aaronson grew up in the United States, though he spent a year in Asia when his father—a science writer turned public-relations executive—was posted to Hong Kong.[2] He enrolled in a school there that permitted him to skip ahead several years in math, but upon returning to the US, he found his education restrictive, getting bad grades and having run-ins with teachers. He enrolled in The Clarkson School, a gifted education program run by Clarkson University, which enabled Aaronson to apply for colleges while only in his freshman year of high school.[2] He was accepted into Cornell University, where he obtained his BSc in computer science in 2000,[3] and where he resided at the Telluride House.[4] He then attended the University of California, Berkeley, for his PhD, which he got in 2004 under the supervision of Umesh Vazirani.[5]
Aaronson had shown ability in mathematics from an early age, teaching himself calculus at the age of 11, provoked by symbols in a babysitter's textbook. He discovered computer programming at age 11, and felt he lagged behind peers, who had already been coding for years. In part due to Aaronson getting into advanced mathematics before getting into computer programming, he felt drawn to theoretical computing, particularly computational complexity theory. At Cornell, he became interested in quantum computing and devoted himself to computational complexity and quantum computing.[2]
Career
After postdoctorates at the Institute for Advanced Study and the University of Waterloo, he took a faculty position at MIT in 2007.[3] His primary area of research is quantum computing and computational complexity theory more generally.
In the summer of 2016 he moved from MIT to the University of Texas at Austin as David J. Bruton Jr. Centennial Professor of Computer Science and as the founding director of UT Austin's new Quantum Information Center.[6] In summer 2022 he announced he would be working for a year at OpenAI on theoretical foundations of AI safety.[7][8]
Awards
• Aaronson is one of two winners of the 2012 Alan T. Waterman Award.[9]
• Best Student Paper Awards at the Computational Complexity Conference for the papers "Limitations of Quantum Advice and One-Way Communication" (2004) [10] and "Quantum Certificate Complexity" (2003).[11][12]
• Danny Lewin Best Student Paper Award at the Symposium on Theory of Computing for the paper "Lower Bounds for Local Search by Quantum Arguments" (2004).[13]
• 2009 Presidential Early Career Award for Scientists and Engineers[14]
• 2017 Simons Investigator[15]
• He was elected as an ACM Fellow in 2019 "for contributions to quantum computing and computational complexity".[16]
• He was awarded the 2020 ACM Prize in Computing "for groundbreaking contributions to quantum computing".[17]
Popular work
He is a founder of the Complexity Zoo wiki, which catalogs all classes of computational complexity.[18][19] He is the author of the much-read blog "Shtetl-Optimized".[20]
In an interview with Scientific American, he explains why his blog is called Shtetl-Optimized and discusses his preoccupation with the past:
Shtetls were Jewish villages in pre-Holocaust Eastern Europe. They're where all my ancestors came from—some actually from the same place (Vitebsk) as Marc Chagall, who painted the fiddler on the roof. I watched Fiddler many times as a kid, both the movie and the play. And every time, there was a jolt of recognition, like: "So that's the world I was designed to inhabit. All the aspects of my personality that mark me out as weird today, the obsessive reading and the literal-mindedness and even the rocking back and forth—I probably have them because back then they would've made me a better Talmud scholar, or something."
— Scott Aaronson[21]
He also wrote the essay "Who Can Name The Bigger Number?".[22] The essay, widely circulated in academic computer science, uses the concept of Busy Beaver numbers, as described by Tibor Radó, to illustrate the limits of computability in a pedagogical setting.
He has also taught a graduate-level survey course, "Quantum Computing Since Democritus",[23] for which notes are available online; the material was later published as a book by Cambridge University Press.[24] It weaves together disparate topics into a cohesive whole, including quantum mechanics, complexity, free will, time travel, the anthropic principle and more. Many of these interdisciplinary applications of computational complexity were later fleshed out in his article "Why Philosophers Should Care About Computational Complexity".[25]
An article of Aaronson's, "The Limits of Quantum Computers", was published in Scientific American,[26] and he was a guest speaker at the 2007 Foundational Questions in Science Institute conference.[27] Aaronson is frequently cited in the non-academic press, such as Science News,[28] The Age,[29] ZDNet,[30] Slashdot,[31] New Scientist,[32] The New York Times,[33] and Forbes magazine.[34]
Alleged Love Communications plagiarism
Aaronson was the subject of media attention in October 2007, when he accused Love Communications, a Sydney-based advertising agency, of plagiarizing a lecture[35] he wrote on quantum mechanics in an advertisement of theirs.[36] He alleged that a commercial they made for Ricoh Australia appropriated content almost verbatim from the lecture.[37] Aaronson received an email from the agency claiming to have sought legal advice and saying they did not believe that they were in violation of his copyright.
Dissatisfied, Aaronson pursued the matter, and the agency settled the dispute without admitting wrongdoing by making a charitable contribution to two science organizations of his choice. Concerning this matter, Aaronson stated, "Someone suggested [on my blog] a cameo with the models but if it was between that and a free printer, I think I'd take the printer."[36]
Personal life
Aaronson is married to computer scientist Dana Moshkovitz.[6] Aaronson identifies as Jewish.[38][39][40]
References
1. Aaronson, Scott. "Scott Aaronson". Qwiki.
2. Hardesty, Larry (April 7, 2014). "The complexonaut". mit.edu. Retrieved April 12, 2014.
3. CV from Aaronson's web site
4. Aaronson, Scott (December 5, 2017). "Quickies". Shtetl-Optimized. Retrieved January 30, 2018.
5. Scott Joel Aaronson at the Mathematics Genealogy Project
6. Shtetl-Optimized, "From Boston to Austin", February 28, 2016.
7. "OpenAI is developing a watermark to identify work from its GPT text AI". New Scientist. 2022. Retrieved December 31, 2022.
8. "OpenAI!". Shtetl-Optimized. June 17, 2022. Retrieved December 31, 2022.
9. NSF to Honor Two Early Career Researchers in Computational Science With Alan T. Waterman Award, National Science Foundation, March 8, 2012, retrieved March 8, 2012.
10. Aaronson, Scott (2004). Limitations of Quantum Advice and One-Way Communication. Computational Complexity Conference. pp. 320–332.
11. Aaronson, Scott (2003). Quantum Certificate Complexity. Computational Complexity Conference. pp. 171–178.
12. "Future and Past Conferences". Computational Complexity Conference.
13. "Danny Lewin Best Student Paper Award". ACM.
14. "The Presidential Early Career Award for Scientists and Engineers: Recipient Details: Scott Aaronson". NSF.
15. Simons Investigators Awardees, The Simons Foundation
16. 2019 ACM Fellows Recognized for Far-Reaching Accomplishments that Define the Digital Age, Association for Computing Machinery, retrieved December 11, 2019
17. 2020, Association for Computing Machinery, retrieved April 14, 2021
18. Automata, Computability and Complexity by Elaine Rich (2008) ISBN 0-13-228806-0, p. 589, section "The Complexity Zoo"
19. The Complexity Zoo page (originally) at Qwiki (a quantum physics wiki, Stanford University)
20. "Shtetl-Optimized". scottaaronson.com. Retrieved January 23, 2014.
21. Horgan, John. "Scott Aaronson Answers Every Ridiculously Big Question I Throw at Him". Scientific American. Retrieved June 9, 2021.
22. Aaronson, Scott. "Who Can Name the Bigger Number?". academic personal website. Electrical Engineering and Computer Science, MIT. Retrieved January 2, 2014.
23. "PHYS771 Quantum Computing Since Democritus". scottaaronson.com. Retrieved January 23, 2014.
24. "Quantum Computing Democritus :: Quantum physics, quantum information and quantum computation". cambridge.org. Retrieved January 23, 2014.
25. Aaronson, Scott (2011). "Why Philosophers Should Care About Computational Complexity". arXiv:1108.1791v3 [cs.CC].
26. Aaronson, Scott (February 2008). "The Limits of Quantum Computers". Scientific American. 298 (3): 50–7. Bibcode:2008SciAm.298c..62A. doi:10.1038/scientificamerican0308-62. PMID 18357822.
27. "Foundational Questions in Science Institute conference". The Science Show. ABC Radio. August 18, 2007. Retrieved December 1, 2008.
28. Peterson, Ivars (November 20, 1999). "Quantum Games". Science News. Science Service. 156 (21): 334–335. doi:10.2307/4012018. JSTOR 4012018. Retrieved December 1, 2008.
29. Franklin, Roger (November 17, 2002). "Two-digit theory gets two fingers". The Age. Melbourne. Retrieved December 1, 2008.
30. Judge, Peter (November 9, 2007). "D-Wave's quantum computer ready for latest demo". ZDNet. CNET. Archived from the original on December 26, 2008. Retrieved December 1, 2008.
31. Dawson, Keith (November 29, 2008). "Improving Wikipedia Coverage of Computer Science". Slashdot. Retrieved December 1, 2008.
32. Brooks, Michael (March 31, 2007). "Outside of time: The quantum gravity computer". New Scientist (2597).
33. Pontin, Jason (April 8, 2007). "A Giant Leap Forward in Computing? Maybe Not". The New York Times. Retrieved December 1, 2008.
34. Gomes, Lee (December 12, 2008). "Your World View Doesn't Compute". Forbes. Archived from the original on December 14, 2008.
35. "PHYS771 Lecture 9: Quantum". scottaaronson.com. Retrieved January 20, 2017.
36. Tadros, Edmund (October 3, 2007). "Ad agency cribbed my lecture notes: professor". The Age. Melbourne. Retrieved December 1, 2008.
37. Tadros, Edmund (December 20, 2007). "Ad company settles plagiarism complaint". The Age. Melbourne. Retrieved December 1, 2008.
38. "Statement of Jewish scientists opposing the "judicial reform" in Israel". Shtetl-Optimized. February 16, 2023. Retrieved March 28, 2023.
39. "Statement of concern - Signatories". sites.google.com. Retrieved March 28, 2023.
40. "Sam Bankman-Fried and the geometry of conscience". Shtetl-Optimized. November 13, 2022. Retrieved March 28, 2023. SBF and I both grew up as nerdy kids in middle-class Jewish American families,...
External links
• Scott Aaronson at the Mathematics Genealogy Project
• Aaronson's blog
• Aaronson homepage
• UT Austin Quantum Information Center homepage
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Germany
• Israel
• United States
• Japan
• Czech Republic
• Netherlands
• Poland
Academics
• CiNii
• DBLP
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
|
Wikipedia
|
Shuffle algebra
In mathematics, a shuffle algebra is a Hopf algebra with a basis corresponding to words on some set, whose product is given by the shuffle product X ⧢ Y of two words X, Y: the sum of all ways of interlacing them. The interlacing is given by the riffle shuffle permutation.
The shuffle algebra on a finite set is the graded dual of the universal enveloping algebra of the free Lie algebra on the set.
Over the rational numbers, the shuffle algebra is isomorphic to the polynomial algebra in the Lyndon words.
The shuffle product occurs in generic settings in non-commutative algebras because it preserves the relative order of the factors being multiplied, as the riffle shuffle permutation does. This stands in contrast to the divided power structure, which becomes appropriate when the factors commute.
Shuffle product
The shuffle product of words of lengths m and n is a sum over the (m+n)!/m!n! ways of interleaving the two words, as shown in the following examples:
ab ⧢ xy = abxy + axby + xaby + axyb + xayb + xyab
aaa ⧢ aa = 10aaaaa
It may be defined inductively by[1]
u ⧢ ε = ε ⧢ u = u
ua ⧢ vb = (u ⧢ vb)a + (ua ⧢ v)b
where ε is the empty word, a and b are single elements, and u and v are arbitrary words.
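The inductive definition translates directly into a short recursive program. The following Python sketch is illustrative only; the function name shuffle and the representation of formal sums as a Counter mapping words to integer coefficients are choices made here, not notation from the references:

    from collections import Counter

    def shuffle(u, v):
        """Shuffle product u ⧢ v, returned as a Counter of words with integer coefficients."""
        if not u:
            return Counter({v: 1})   # u ⧢ ε = ε ⧢ u = u
        if not v:
            return Counter({u: 1})
        result = Counter()
        # ua ⧢ vb = (u ⧢ vb)a + (ua ⧢ v)b
        for word, coeff in shuffle(u[:-1], v).items():
            result[word + u[-1]] += coeff
        for word, coeff in shuffle(u, v[:-1]).items():
            result[word + v[-1]] += coeff
        return result

    print(shuffle("ab", "xy"))   # the six interleavings above, each with coefficient 1
    print(shuffle("aaa", "aa"))  # Counter({'aaaaa': 10})

The number of terms counted with multiplicity is the binomial coefficient (m+n)!/m!n!, so the recursion is only practical for short words.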
The shuffle product was introduced by Eilenberg & Mac Lane (1953). The name "shuffle product" refers to the fact that the product can be thought of as a sum over all ways of riffle shuffling two words together: this is the riffle shuffle permutation. The product is commutative and associative.[2]
The shuffle product of two words in some alphabet is often denoted by the shuffle product symbol ⧢ (Unicode character U+29E2 SHUFFLE PRODUCT, derived from the Cyrillic letter ⟨ш⟩ sha).
Infiltration product
The closely related infiltration product was introduced by Chen, Fox & Lyndon (1958). It is defined inductively on words over an alphabet A by
fa ↑ ga = (f ↑ ga)a + (fa ↑ g)a + (f ↑ g)a
fa ↑ gb = (f ↑ gb)a + (fa ↑ g)b
For example:
ab ↑ ab = ab + 2aab + 2abb + 4aabb + 2abab
ab ↑ ba = aba + bab + abab + 2abba + 2baab + baba
The infiltration product is also commutative and associative.[3]
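Under the same representation, the infiltration product can be sketched as follows. This is an illustrative Python sketch assuming the base case f ↑ ε = ε ↑ f = f, which is implicit in the examples above; the function name infiltrate is a choice made here.

    from collections import Counter

    def infiltrate(f, g):
        """Infiltration product f ↑ g as a Counter of words with integer coefficients."""
        if not f:
            return Counter({g: 1})   # assumed base case: ε ↑ g = g
        if not g:
            return Counter({f: 1})   # assumed base case: f ↑ ε = f
        result = Counter()
        u, a = f[:-1], f[-1]
        v, b = g[:-1], g[-1]
        if a == b:
            # fa ↑ ga = (f ↑ ga)a + (fa ↑ g)a + (f ↑ g)a
            for part in (infiltrate(u, g), infiltrate(f, v), infiltrate(u, v)):
                for word, coeff in part.items():
                    result[word + a] += coeff
        else:
            # fa ↑ gb = (f ↑ gb)a + (fa ↑ g)b
            for word, coeff in infiltrate(u, g).items():
                result[word + a] += coeff
            for word, coeff in infiltrate(f, v).items():
                result[word + b] += coeff
        return result

    print(infiltrate("ab", "ab"))  # ab + 2aab + 2abb + 4aabb + 2abab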
See also
• Hopf algebra of permutations
• Zinbiel algebra
References
1. Lothaire 1997, p. 101,126
2. Lothaire 1997, p. 126
3. Lothaire 1997, p. 128
• Chen, Kuo-Tsai; Fox, Ralph H.; Lyndon, Roger C. (1958), "Free differential calculus. IV. The quotient groups of the lower central series", Annals of Mathematics, Second Series, 68 (1): 81–95, doi:10.2307/1970044, JSTOR 1970044, MR 0102539, Zbl 0142.22304
• Eilenberg, Samuel; Mac Lane, Saunders (1953), "On the groups of H(Π,n). I", Annals of Mathematics, Second Series, 58 (1): 55–106, doi:10.2307/1969820, ISSN 0003-486X, JSTOR 1969820, MR 0056295, Zbl 0050.39304
• Green, J. A. (1995), Shuffle algebras, Lie algebras and quantum groups, Textos de Matemática. Série B, vol. 9, Coimbra: Universidade de Coimbra Departamento de Matemática, MR 1399082
• Hazewinkel, M. (2001) [1994], "Shuffle algebra", Encyclopedia of Mathematics, EMS Press
• Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, V. V. (2010), Algebras, rings and modules. Lie algebras and Hopf algebras, Mathematical Surveys and Monographs, vol. 168, American Mathematical Society, doi:10.1090/surv/168, ISBN 978-0-8218-5262-0, MR 2724822, Zbl 1211.16023
• Lothaire, M. (1997), Combinatorics on words, Encyclopedia of Mathematics and Its Applications, vol. 17, Perrin, D.; Reutenauer, C.; Berstel, J.; Pin, J. E.; Pirillo, G.; Foata, D.; Sakarovitch, J.; Simon, I.; Schützenberger, M. P.; Choffrut, C.; Cori, R.; Lyndon, Roger; Rota, Gian-Carlo. Foreword by Roger Lyndon (2nd ed.), Cambridge University Press, ISBN 0-521-59924-5, Zbl 0874.20040
• Reutenauer, Christophe (1993), Free Lie algebras, London Mathematical Society Monographs. New Series, vol. 7, Oxford University Press, ISBN 978-0-19-853679-6, MR 1231799, Zbl 0798.17001
External links
• Shuffle product symbol
|
Wikipedia
|
Euler spiral
An Euler spiral is a curve whose curvature changes linearly with its curve length (the curvature of a circular curve is equal to the reciprocal of the radius). Euler spirals are also commonly referred to as spiros, clothoids, or Cornu spirals. Euler's spiral is a type of superspiral that has the property of a monotonic curvature function.[1]
Euler spirals have applications to diffraction computations. They are also widely used in railway and highway engineering to design transition curves between straight and curved sections of railway or roads. A similar application is also found in photonic integrated circuits. The principle of linear variation of the curvature of the transition curve between a tangent and a circular curve defines the geometry of the Euler spiral:
• Its curvature begins with zero at the straight section (the tangent) and increases linearly with its curve length.
• Where the Euler spiral meets the circular curve, its curvature becomes equal to that of the latter.
Applications
Track transition curve
To travel along a circular path, an object needs to be subject to a centripetal acceleration (for example: the Moon circles around the Earth because of gravity; a car turns its front wheels inward to generate a centripetal force). If a vehicle traveling on a straight path were to suddenly transition to a tangential circular path, it would require centripetal acceleration suddenly switching at the tangent point from zero to the required value; this would be difficult to achieve (think of a driver instantly moving the steering wheel from straight line to turning position, and the car actually doing it), putting mechanical stress on the vehicle's parts, and causing much discomfort (due to lateral jerk).
On early railroads this instant application of lateral force was not an issue, since low speeds and wide-radius curves were employed (the lateral forces on the passengers and the lateral sway were small and tolerable). As the speeds of rail vehicles increased over the years, it became obvious that an easement was necessary, so that the centripetal acceleration would increase linearly with the traveled distance. Given the expression for centripetal acceleration v²/R, the obvious solution is to provide an easement curve whose curvature 1/R increases linearly with the traveled distance. This geometry is an Euler spiral.[2]
Unaware of the solution of the geometry by Leonhard Euler, Rankine cited the cubic curve (a polynomial curve of degree 3), which is an approximation of the Euler spiral for small angular changes in the same way that a parabola is an approximation to a circular curve.
Marie Alfred Cornu (and later some civil engineers) also solved the calculus of the Euler spiral independently. Euler spirals are now widely used in rail and highway engineering for providing a transition or an easement between a tangent and a horizontal circular curve.
Optics
The Cornu spiral can be used to describe a diffraction pattern.[3] Consider a plane wave with phasor amplitude E0e−jkz which is diffracted by a "knife edge" of height h above x = 0 on the z = 0 plane. Then the diffracted wave field can be expressed as
$\mathbf {E} (x,z)=E_{0}e^{-jkz}{\frac {\mathrm {Fr} (\infty )-\mathrm {Fr} \left({\sqrt {\frac {2}{\lambda z}}}(h-x)\right)}{\mathrm {Fr} (\infty )-\mathrm {Fr} (-\infty )}},$
where Fr(x) is the Fresnel integral function, which forms the Cornu spiral on the complex plane.
So, to simplify the calculation of plane wave attenuation as it is diffracted from the knife-edge, one can use the diagram of a Cornu spiral by representing the quantities Fr(a) − Fr(b) as the physical distances between the points represented by Fr(a) and Fr(b) for appropriate a and b. This facilitates a rough computation of the attenuation of the plane wave by the knife edge of height h at a location (x, z) beyond the knife edge.
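As a numerical illustration of this construction, the magnitude |Fr(∞) − Fr(v)| / |Fr(∞) − Fr(−∞)| can be evaluated directly with standard Fresnel-integral routines. The Python sketch below is an assumption-laden illustration rather than part of the cited treatment: it takes the Fresnel integrals in the sin(πt²/2) normalization used by scipy.special.fresnel (the usual knife-edge convention, under which Fr(∞) = (1 + j)/2); with the t² normalization used later in this article the argument would have to be rescaled. The function name and the sample numbers are invented for the example.

    import numpy as np
    from scipy.special import fresnel   # returns (S(v), C(v)) in the sin(pi t^2 / 2) convention

    def knife_edge_attenuation(h, x, z, wavelength):
        """Approximate field attenuation |E / E0| at (x, z) behind a knife edge of height h."""
        v = np.sqrt(2.0 / (wavelength * z)) * (h - x)
        S, C = fresnel(v)
        Fr_v = C + 1j * S
        Fr_inf = 0.5 + 0.5j                 # Fr(+inf); Fr(-inf) = -Fr(+inf)
        return abs(Fr_inf - Fr_v) / abs(Fr_inf - (-Fr_inf))

    # Illustrative numbers: a 3 cm wave, knife edge at h = 1 m, observed 2 m beyond it at x = 0.5 m
    print(knife_edge_attenuation(h=1.0, x=0.5, z=2.0, wavelength=0.03))

At grazing incidence (v = 0) the expression gives 1/2, the familiar 6 dB knife-edge loss.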
Integrated optics
Bends with continuously varying radius of curvature following the Euler spiral are also used to reduce losses in photonic integrated circuits, either in singlemode waveguides,[4][5] to smoothen the abrupt change of curvature and suppress coupling to radiation modes, or in multimode waveguides,[6] in order to suppress coupling to higher order modes and ensure effective singlemode operation. A pioneering and elegant application of the Euler spiral to waveguides was made as early as 1957,[7] with a hollow metal waveguide for microwaves. There the idea was to exploit the fact that a straight metal waveguide can be physically bent to naturally take a gradual bend shape resembling an Euler spiral.
Auto racing
Motorsport author Adam Brouillard has shown the Euler spiral's use in optimizing the racing line during the corner entry portion of a turn.[8]
Typography and digital vector drawing
Raph Levien released Spiro, a toolkit for curve design (especially font design), in 2007[9][10] under a free licence. The toolkit was incorporated shortly afterwards into the font design tool FontForge and the vector drawing application Inkscape.
Map projection
Cutting a sphere along a spiral of width 1/N and flattening out the resulting shape yields an Euler spiral as N tends to infinity.[11] If the sphere is the globe, this produces a map projection whose distortion tends to zero as N tends to infinity.[12]
Whisker shapes
The natural shapes of rats' mystacial pad vibrissae (whiskers) are well approximated by pieces of the Euler spiral. When all these pieces for a single rat are assembled together, they span an interval extending from one coiled domain of the Euler spiral to the other.[13]
Formulation
Symbols
• R – Radius of curvature
• Rc – Radius of the circular curve at the end of the spiral
• θ – Angle of the curve from the beginning of the spiral (infinite R) to a particular point on the spiral; this can also be measured as the angle between the initial tangent and the tangent at the concerned point
• θs – Angle of the full spiral curve
• L, s – Length measured along the spiral curve from its initial position
• Ls, so – Length of the spiral curve
Derivation
The following derivation illustrates an Euler spiral used as an easement (transition) curve between two given curves, in this case a straight line (the negative x axis) and a circle. The spiral starts at the origin in the positive x direction and gradually turns anticlockwise to osculate the circle.
The spiral is a small segment of the full double-ended Euler spiral, lying in the first quadrant.
From the definition of the curvature,
${\frac {1}{R}}={\frac {d\theta }{ds}}\propto s$
i.e.,
${\begin{aligned}Rs={\text{constant}}&=R_{c}s_{o}\\{\frac {d\theta }{ds}}&={\frac {s}{R_{c}s_{o}}}\end{aligned}}$
We write in the format,
${\frac {d\theta }{ds}}=2a^{2}s$
where
$2a^{2}={\frac {1}{R_{c}s_{o}}}$
or
$a={\frac {1}{\sqrt {2R_{c}s_{o}}}}$
thus
$\theta =(as)^{2}$
Now
$x=\int _{0}^{L}\cos \theta \,ds=\int _{0}^{L}\cos \left[\left(as\right)^{2}\right]\,ds$
If
$s'=as$
Then
$ds={\frac {ds'}{a}}$
Thus
${\begin{aligned}x&={\frac {1}{a}}\int _{0}^{L'}\cos \left(s^{2}\right)\,ds\\y&=\int _{0}^{L}\sin \theta \,ds\\&=\int _{0}^{L}\sin \left[\left(as\right)^{2}\right]\,ds\\&={\frac {1}{a}}\int _{0}^{L'}\sin \left({s}^{2}\right)\,ds\end{aligned}}$
Expansion of Fresnel integral
Main article: Fresnel integral
If a = 1, which is the case for normalized Euler curve, then the Cartesian coordinates are given by Fresnel integrals (or Euler integrals):
${\begin{aligned}C(L)&=\int _{0}^{L}\cos \left(s^{2}\right)\,ds\\S(L)&=\int _{0}^{L}\sin \left(s^{2}\right)\,ds\end{aligned}}$
Normalization and conclusion
For a given Euler curve with:
$2RL=2R_{c}L_{s}={\frac {1}{a^{2}}}$
or
${\frac {1}{R}}={\frac {L}{R_{c}L_{s}}}=2a^{2}L$
then
${\begin{aligned}x&={\frac {1}{a}}\int _{0}^{L'}\cos \left(s^{2}\right)\,ds\\y&={\frac {1}{a}}\int _{0}^{L'}\sin \left(s^{2}\right)\,ds\end{aligned}}$
where
${\begin{aligned}L'&=aL\\a&={\frac {1}{\sqrt {2R_{c}L_{s}}}}.\end{aligned}}$
The process of obtaining the coordinates (x, y) of a point on an Euler spiral can thus be described as follows (a numerical sketch follows the list):
• Map L of the original Euler spiral by multiplying with factor a to L′ of the normalized Euler spiral;
• Find (x′, y′) from the Fresnel integrals; and
• Map (x′, y′) to (x, y) by scaling up (denormalize) with factor 1/a. Note that 1/a > 1.
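A minimal Python sketch of this three-step procedure is shown below, evaluating the normalized integrals by numerical quadrature. The function name euler_spiral_point is a choice made for the example; the sample values Rc = 300 m and Ls = 100 m are those of the illustration that follows.

    import numpy as np
    from scipy.integrate import quad

    def euler_spiral_point(L, Rc, Ls):
        """Coordinates (x, y) at arc length L along an Euler spiral with end radius Rc and total length Ls."""
        a = 1.0 / np.sqrt(2.0 * Rc * Ls)                       # scaling factor
        L_norm = a * L                                         # step 1: normalize the arc length
        x_norm, _ = quad(lambda s: np.cos(s**2), 0.0, L_norm)  # step 2: normalized Fresnel-type integrals
        y_norm, _ = quad(lambda s: np.sin(s**2), 0.0, L_norm)
        return x_norm / a, y_norm / a                          # step 3: denormalize by the factor 1/a

    print(euler_spiral_point(L=100.0, Rc=300.0, Ls=100.0))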
In the normalization process,
${\begin{aligned}R'_{c}&={\frac {R_{c}}{\sqrt {2R_{c}L_{s}}}}={\sqrt {\frac {R_{c}}{2L_{s}}}}\\L'_{s}&={\frac {L_{s}}{\sqrt {2R_{c}L_{s}}}}={\sqrt {\frac {L_{s}}{2R_{c}}}}\end{aligned}}$
Then
$2R'_{c}L'_{s}=2{\sqrt {\frac {R_{c}}{2L_{s}}}}{\sqrt {\frac {L_{s}}{2R_{c}}}}={\frac {2}{2}}=1$
Generally the normalization reduces L′ to a small value (less than 1), which gives the Fresnel integrals good convergence characteristics, so that they can be computed with only a few series terms (at the price of increased numerical instability of the calculation, especially for larger θ values).
Illustration
Given:
${\begin{aligned}R_{c}&=300\,\mathrm {m} \\L_{s}&=100\,\mathrm {m} \end{aligned}}$
Then
$\theta _{s}={\frac {L_{s}}{2R_{c}}}={\frac {100}{2\times 300}}={\frac {1}{6}}\ \mathrm {radian} $
and
$2R_{c}L_{s}=60\,000$
We scale down the Euler spiral by √60000, i.e. 100√6, to the normalized Euler spiral that has:
${\begin{aligned}R'_{c}&={\tfrac {3}{\sqrt {6}}}\,\mathrm {m} \\L'_{s}&={\tfrac {1}{\sqrt {6}}}\,\mathrm {m} \\2R'_{c}L'_{s}&=2\times {\tfrac {3}{\sqrt {6}}}\times {\tfrac {1}{\sqrt {6}}}\\&=1\end{aligned}}$
and
$\theta _{s}={\frac {L'_{s}}{2R'_{c}}}={\frac {\frac {1}{\sqrt {6}}}{2\times {\frac {3}{\sqrt {6}}}}}={\frac {1}{6}}\ \mathrm {radian} $
The two angles θs are the same, which confirms that the original and normalized Euler spirals are geometrically similar. The locus of the normalized curve can be determined from the Fresnel integrals, while the locus of the original Euler spiral is obtained by scaling up (denormalizing).
Other properties of normalized Euler spirals
Normalized Euler spirals can be expressed as:
${\begin{aligned}x&=\int _{0}^{L}\cos \left(s^{2}\right)\,ds\\y&=\int _{0}^{L}\sin \left(s^{2}\right)\,ds\end{aligned}}$
or expressed as power series:
${\begin{aligned}x&=\left.\sum _{i=0}^{\infty }{\frac {(-1)^{i}}{(2i)!}}{\frac {s^{4i+1}}{4i+1}}\right|_{0}^{L}&&=\sum _{i=0}^{\infty }{\frac {(-1)^{i}}{(2i)!}}{\frac {L^{4i+1}}{4i+1}}\\y&=\left.\sum _{i=0}^{\infty }{\frac {(-1)^{i}}{(2i+1)!}}{\frac {s^{4i+3}}{4i+3}}\right|_{0}^{L}&&=\sum _{i=0}^{\infty }{\frac {(-1)^{i}}{(2i+1)!}}{\frac {L^{4i+3}}{4i+3}}\end{aligned}}$
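For moderate values of L the truncated power series is a convenient way to evaluate these coordinates without a quadrature routine. The short Python sketch below is illustrative only; the function name and the number of retained terms are arbitrary choices, and its output can be checked against numerical integration of cos(s²) and sin(s²).

    from math import factorial

    def normalized_euler_spiral_series(L, terms=20):
        """(x, y) on the normalized Euler spiral at arc length L, via the truncated power series."""
        x = sum((-1)**i / factorial(2 * i) * L**(4 * i + 1) / (4 * i + 1) for i in range(terms))
        y = sum((-1)**i / factorial(2 * i + 1) * L**(4 * i + 3) / (4 * i + 3) for i in range(terms))
        return x, y

    print(normalized_euler_spiral_series(1.0))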
The normalized Euler spiral will converge to a single point in the limit as the parameter L approaches infinity, which can be expressed as:
${\begin{aligned}x^{\prime }&=\lim _{L\to \infty }\int _{0}^{L}\cos \left(s^{2}\right)\,ds&&={\frac {1}{2}}{\sqrt {\frac {\pi }{2}}}\approx 0.6267\\y^{\prime }&=\lim _{L\to \infty }\int _{0}^{L}\sin \left(s^{2}\right)\,ds&&={\frac {1}{2}}{\sqrt {\frac {\pi }{2}}}\approx 0.6267\end{aligned}}$
Normalized Euler spirals have the following properties:
${\begin{aligned}2R_{c}L_{s}&=1\\\theta _{s}&={\frac {L_{s}}{2R_{c}}}=L_{s}^{2}\end{aligned}}$
and
${\begin{aligned}\theta &=\theta _{s}\cdot {\frac {L^{2}}{L_{s}^{2}}}=L^{2}\\{\frac {1}{R}}&={\frac {d\theta }{dL}}=2L\end{aligned}}$
Note that 2RcLs = 1 also means 1/Rc = 2Ls, in agreement with the last mathematical statement.
See also
• List of spirals
References
1. Ziatdinov, R. (2012), "Family of superspirals with completely monotonic curvature given in terms of Gauss hypergeometric function", Computer Aided Geometric Design, 29 (7): 510–518
2. Constantin (2016-03-07). "The Clothoid". Pwayblog. Retrieved 2023-06-07.
3. Eugene Hecht (1998). Optics (3rd ed.). Addison-Wesley. p. 491. ISBN 978-0-201-30425-1.
4. Kohtoku, M.; et al. (7 July 2005). "New Waveguide Fabrication Techniques for Next-generation PLCs" (PDF). NTT Technical Review. 3 (7): 37–41. Retrieved 24 January 2017.
5. Li, G.; et al. (11 May 2012). "Ultralow-loss, high-density SOI optical waveguide routing for macrochip interconnects". Optics Express. 20 (11): 12035–12039. Bibcode:2012OExpr..2012035L. doi:10.1364/OE.20.012035. PMID 22714189.
6. Cherchi, M.; et al. (18 July 2013). "Dramatic size reduction of waveguide bends on a micron-scale silicon photonic platform". Optics Express. 21 (15): 17814–17823. arXiv:1301.2197. Bibcode:2013OExpr..2117814C. doi:10.1364/OE.21.017814. PMID 23938654.
7. Unger, H.G. (September 1957). "Normal Mode Bends for Circular Electric Waves". The Bell System Technical Journal. 36 (5): 1292–1307. doi:10.1002/j.1538-7305.1957.tb01509.x.
8. Development, Paradigm Shift Driver; Brouillard, Adam (2016-03-18). The Perfect Corner: A Driver's Step-By-Step Guide to Finding Their Own Optimal Line Through the Physics of Racing. Paradigm Shift Motorsport Books. ISBN 9780997382426.
9. "Spiro".
10. "| Spiro 0.01 release | Typophile". www.typophile.com. Archived from the original on 2007-05-10.
11. Bartholdi, Laurent; Henriques, André (2012). "Orange Peels and Fresnel Integrals". The Mathematical Intelligencer. 34 (3): 1–3. arXiv:1202.3033. doi:10.1007/s00283-012-9304-1. ISSN 0343-6993. S2CID 52592272.
12. "A Strange Map Projection (Euler Spiral) - Numberphile". YouTube. Archived from the original on 2021-12-21.
13. Starostin, E.L.; et al. (15 January 2020). "The Euler spiral of rat whiskers". Science Advances. 6 (3): eaax5145. Bibcode:2020SciA....6.5145S. doi:10.1126/sciadv.aax5145. PMC 6962041. PMID 31998835.
Further reading
• Kellogg, Norman Benjamin (1907). The Transition Curve or Curve of Adjustment (3rd ed.). New York: McGraw.
• Weisstein, Eric W. "Cornu Spiral". MathWorld.
• R. Nave, The Cornu spiral, Hyperphysics (2002) (Uses πt²/2 instead of t².)
• Milton Abramowitz and Irene A. Stegun, eds. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover, 1972. (See Chapter 7)
• "Roller Coaster Loop Shapes". Retrieved 2010-11-12.
External links
• Euler's spiral at 2-D Mathematical Curves
• Interactive example with JSXGraph
• Euler's spiral-based map projection
Spirals, curves and helices
Curves
• Algebraic
• Curvature
• Gallery
• List
• Topics
Helices
• Angle
• Antenna
• Boerdijk–Coxeter
• Hemi
• Symmetry
• Triple
Biochemistry
• 310
• Alpha
• Beta
• Double
• Pi
• Polyproline
• Super
• Triple
• Collagen
Spirals
• Archimedean
• Cotes's
• Epispiral
• Hyperbolic
• Poinsot's
• Doyle
• Euler
• Fermat's
• Golden
• Involute
• List
• Logarithmic
• On Spirals
• Padovan
• Theodorus
• Spirangle
• Ulam
|
Wikipedia
|
Siamak Yassemi
Siamak Yassemi (Persian: سیامک یاسمی) is an Iranian mathematician and is currently the Dean of the Faculty of Mathematics, Statistics and Computer Science at the University of Tehran, Iran.[2] He has developed basic techniques that have played important roles in homological algebra. His recent work has established relationships between monomial ideals in commutative algebra and graphs in combinatorics, which has stimulated the development of the new interdisciplinary field of combinatorial commutative algebra.[3] A member of the Academy of Sciences of the Islamic Republic of Iran, he has received the COMSTECH International Award, the 22nd Khwarizmi International Award in Basic Science and the International Award from Tehran University, among others. He was vice president of the University College of Sciences at the University of Tehran for more than three years, ending in 2007, and head of the School of Mathematics at the Institute for Research in Fundamental Sciences for more than two years. In 2015 he became head of the School of Mathematics, Statistics and Computer Science at the University of Tehran. In 2018 he was elected a fellow of The World Academy of Sciences (TWAS), making him the first Iranian mathematician to become a TWAS member.[4] In 2019 he was named Chevalier of the Ordre des Palmes Académiques for his efforts toward extended multi-dimensional cooperation, including scientific research projects (Jundi-Shapur), student and professor exchanges, and several schools and conferences.
Prof.
Siamak Yassemi
NationalityIranian
Alma materUniversity of Tehran
Scientific career
Thesis (1994)
Doctoral advisorHans-Bjørn Foxby
Dean of Faculty of Mathematics, Statistics and Computer Science, University of Tehran, Iran
In office
2013–2021
Succeeded byGholamreza Rokni Lamouki
[1]
Life
Yassemi was born in Khorramshahr, Iran.
Education
Yassemi completed his PhD under the supervision of Hans-Bjørn Foxby at the University of Copenhagen in 1994. He has since devoted a substantial part of his career to mathematical education.[5]
Honours
In 2009 he received the Khwarizmi International Award in basic sciences and, in the same year, the COMSTECH International Award; the title of the prize-winning project was "Homological and Combinatorial Methods in Commutative Algebra". He was an associate member of the Abdus Salam International Centre for Theoretical Physics (Trieste, Italy) for eight years (1996–2004). He has visited the Max Planck Institut für Mathematik in Bonn, the Institut des Hautes Études Scientifiques in Paris, and the Tata Institute of Fundamental Research in Mumbai several times.
In 2018 he was elected a fellow of The World Academy of Sciences (TWAS), making him the first Iranian mathematician to become a TWAS member.[6]
In 2019 he was named Chevalier of the Ordre des Palmes Académiques for distinguished effort on extended multi-dimensional cooperation.
References
1. "صفحه-اصلی - دانشکده ریاضی، آمار و علوم کامپیوتر".
2. "Siamak Yassemi". math.ipm.ac.ir. Retrieved 2019-01-07.
3. "Siamak Yassemi - Google Scholar Citations". scholar.google.com. Retrieved 2019-01-07.
4. "TWAS elects 55 new Fellows". TWAS. Retrieved 2018-02-02.
5. "Yassemi, Siamak". TWAS. Retrieved 2019-01-07.
6. "TWAS elects 55 new Fellows". TWAS. Retrieved 2018-02-02.
External links
• Siamak Yassemi
Authority control: Academics
• Google Scholar
Mathematics in Iran
Mathematicians
Before
20th Century
• Abu al-Wafa' Buzjani
• Jamshīd al-Kāshī (al-Kashi's theorem)
• Omar Khayyam (Khayyam-Pascal's triangle, Khayyam-Saccheri quadrilateral, Khayyam's Solution of Cubic Equations)
• Al-Mahani
• Muhammad Baqir Yazdi
• Nizam al-Din al-Nisapuri
• Al-Nayrizi
• Kushyar Gilani
• Ayn al-Quzat Hamadani
• Al-Isfahani
• Al-Isfizari
• Al-Khwarizmi (Al-jabr)
• Najm al-Din al-Qazwini al-Katibi
• Nasir al-Din al-Tusi
• Al-Biruni
Modern
• Maryam Mirzakhani
• Caucher Birkar
• Sara Zahedi
• Farideh Firoozbakht (Firoozbakht's conjecture)
• S. L. Hakimi (Havel–Hakimi algorithm)
• Siamak Yassemi
• Freydoon Shahidi (Langlands–Shahidi method)
• Hamid Naderi Yeganeh
• Esmail Babolian
• Ramin Takloo-Bighash
• Lotfi A. Zadeh (Fuzzy mathematics, Fuzzy set, Fuzzy logic)
• Ebadollah S. Mahmoodian
• Reza Sarhangi (The Bridges Organization)
• Siavash Shahshahani
• Gholamhossein Mosaheb
• Amin Shokrollahi
• Reza Sadeghi
• Mohammad Mehdi Zahedi
• Mohsen Hashtroodi
• Hossein Zakeri
• Amir Ali Ahmadi
Prize Recipients
Fields Medal
• Maryam Mirzakhani (2014)
• Caucher Birkar (2018)
EMS Prize
• Sara Zahedi (2016)
Satter Prize
• Maryam Mirzakhani (2013)
Organizations
• Iranian Mathematical Society
Institutions
• Institute for Research in Fundamental Sciences
|
Wikipedia
|
Siamese method
The Siamese method, or De la Loubère method, is a simple method to construct magic squares of any odd order (i.e. number squares in which the sums of all rows, columns and diagonals are identical). The method was brought to France in 1688 by the French mathematician and diplomat Simon de la Loubère,[1] as he was returning from his 1687 embassy to the kingdom of Siam.[2][3][4] The Siamese method makes the creation of magic squares straightforward.
Publication
De la Loubère published his findings in his book A new historical relation of the kingdom of Siam (Du Royaume de Siam, 1693), under the chapter entitled The problem of the magical square according to the Indians.[5] Although the method is generally qualified as "Siamese", which refers to de la Loubère's travel to the country of Siam, de la Loubère himself learnt it from a Frenchman named M. Vincent (a doctor, who had first travelled to Persia and then to Siam, and was returning to France with the de la Loubère embassy), who himself had learnt it in the city of Surat in India:[5]
"Mr. Vincent, whom I have so often mentioned in my Relations, seeing me one day in the ship, during our return, studiously to range the Magical Squares after the method of Bachet, informed me that the Indians of Suratte ranged them with much more facility, and taught me their method for the unequal squares only, having, he said, forgot that of the equal"
— Simon de la Loubère, A new historical relation of the kingdom of Siam.[5]
The method
The method was surprising in its effectiveness and simplicity:
"I hope that it will not be unacceptable that I give the rules and the demonstration of this method, which is surprising for its extreme facility to execute a thing, which has appeared difficult to our Mathematicians"
— Simon de la Loubère, A new historical relation of the kingdom of Siam.[5]
First, an arithmetic progression has to be chosen (such as the simple progression 1,2,3,4,5,6,7,8,9 for a square with three rows and columns (the Lo Shu square)).
Then, starting from the central box of the first row with the number 1 (or the first number of any arithmetic progression), the fundamental movement for filling the boxes is diagonally up and right (↗), one step at a time. When a move would leave the square, it is wrapped around to the last row or first column, respectively.
If a filled box is encountered, one moves vertically down one box (↓) instead, then continuing as before.
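The rule above translates directly into a short program. The following Python sketch is not taken from the cited sources; the function name siamese_magic_square and the 0-based indexing are choices made here. Changing the starting cell and the move used on a collision reproduces the variations described later in this article.

    def siamese_magic_square(n):
        """Construct an n × n magic square (n odd) by the Siamese method."""
        if n % 2 == 0:
            raise ValueError("the Siamese method works only for odd orders")
        square = [[0] * n for _ in range(n)]
        row, col = 0, n // 2                    # start in the central box of the first row
        for k in range(1, n * n + 1):
            square[row][col] = k
            up, right = (row - 1) % n, (col + 1) % n   # diagonally up and right, wrapping around
            if square[up][right]:
                row = (row + 1) % n             # box already filled: move vertically down one box
            else:
                row, col = up, right
        return square

    for line in siamese_magic_square(5):
        print(*line)                            # every row, column and diagonal sums to 65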
Order-3 magic squares
step 1
. 1 .
. . .
. . .
step 2
. 1 .
. . .
. . 2
step 3
. 1 .
3 . .
. . 2
step 4
. 1 .
3 . .
4 . 2
step 5
. 1 .
3 5 .
4 . 2
step 6
. 1 6
3 5 .
4 . 2
step 7
. 1 6
3 5 7
4 . 2
step 8
8 1 6
3 5 7
4 . 2
step 9
8 1 6
3 5 7
4 9 2
Order-5 magic squares
Step 1
. . 1 . .
. . . . .
. . . . .
. . . . .
. . . . .
Step 2
. . 1 . .
. . . . .
. . . . .
. . . . 3
. . . 2 .
Step 3
. . 1 . .
. 5 . . .
4 . . . .
. . . . 3
. . . 2 .
Step 4
. . 1 8 .
. 5 7 . .
4 6 . . .
. . . . 3
. . . 2 .
Step 5
. . 1 8 15
. 5 7 14 .
4 6 13 . .
10 12 . . 3
11 . . 2 9
Step 6
17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9
Other sizes
Any square of odd order ("odd-order square") can thus be built into a magic square. The Siamese method does not, however, work for even-order squares (such as 2×2, 4×4, and so on).
Order 3
8 1 6
3 5 7
4 9 2
Order 5
17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9
Order 9
47 58 69 80 1 12 23 34 45
57 68 79 9 11 22 33 44 46
67 78 8 10 21 32 43 54 56
77 7 18 20 31 42 53 55 66
6 17 19 30 41 52 63 65 76
16 27 29 40 51 62 64 75 5
26 28 39 50 61 72 74 4 15
36 38 49 60 71 73 3 14 25
37 48 59 70 81 2 13 24 35
Other values
Any sequence of numbers can be used, provided they form an arithmetic progression (i.e. the difference of any two successive members of the sequence is a constant). Also, any starting number is possible. For example, the following sequence can be used to form an order 3 magic square according to the Siamese method (9 boxes): 5, 10, 15, 20, 25, 30, 35, 40, 45 (the magic sum is 75 for all rows, columns and diagonals).
Order 3
40 5 30
15 25 35
20 45 10
Other starting points
It is possible not to start the arithmetic progression from the middle of the top row, but then only the row and column sums will equal the magic sum, whereas the diagonal sums will differ. The result will thus not be a true magic square:
Order 3
500 700 300
900 200 400
100 600 800
Rotations and reflections
Numerous other magic squares can be deduced from the above by simple rotations and reflections.
Variations
A slightly more complicated variation of this method exists in which the first number is placed in the box just above the center box. The fundamental movement for filling the boxes remains up and right (↗), one step at a time. However, if a filled box is encountered, one moves vertically up two boxes instead, then continuing as before.
Order 5
23 6 19 2 15
10 18 1 14 22
17 5 13 21 9
4 12 25 8 16
11 24 7 20 3
Numerous variants can be obtained by simple rotations and reflections. The next square is equivalent to the above (a simple reflection): the first number is placed in the box just below the center box. The fundamental movement for filling the boxes then becomes diagonally down and right (↘), one step at a time. If a filled box is encountered, one moves vertically down two boxes instead, then continuing as before.[6]
Order 5
11 24 7 20 3
4 12 25 8 16
17 5 13 21 9
10 18 1 14 22
23 6 19 2 15
These variations, although not quite as simple as the basic Siamese method, are equivalent to the methods developed by earlier Arab and European scholars, such as Manuel Moschopoulos (1315), Johann Faulhaber (1580–1635) and Claude Gaspard Bachet de Méziriac (1581–1638), and allow one to create magic squares similar to theirs.[6][7]
See also
• Conway's LUX method for magic squares
• Strachey method for magic squares
Notes and references
1. Higgins, Peter (2008). Number Story: From Counting to Cryptography. New York: Copernicus. p. 54. ISBN 978-1-84800-000-1. footnote 8
2. Mathematical Circles Squared By Phillip E. Johnson, Howard Whitley Eves, p.22
3. CRC Concise Encyclopedia of Mathematics By Eric W. Weisstein, Page 1839
4. The Zen of Magic Squares, Circles, and Stars By Clifford A. Pickover Page 38
5. A new historical relation of the kingdom of Siam p.228
6. A new historical relation of the kingdom of Siam p229
7. The Zen of Magic Squares, Circles, and Stars by Clifford A. Pickover,2002 p.37
|
Wikipedia
|
Siberian Mathematical Journal
The Siberian Mathematical Journal (abbreviated as Sib. Math. J.) is a cover-to-cover English translation of the Russian peer-reviewed mathematics journal Sibirskii Matematicheskii Zhurnal, a publication of the Sobolev Institute of Mathematics of the Siberian Division of the Russian Academy of Sciences (Novosibirsk). Sibirskii Matematicheskii Zhurnal was established in 1960 and the Siberian Mathematical Journal was launched in 1966. It is published by Springer Science+Business Media.
The journal publishes research papers in all branches of mathematics, including functional analysis, differential equations, algebra and logic, geometry and topology, probability theory and mathematical statistics, ill-posed problems of mathematical physics, computational methods of linear algebra, etc.
External links
• Official website
• Print: ISSN 0037-4466
• Online: ISSN 1573-9260
|
Wikipedia
|
Congruence (geometry)
In geometry, two figures or objects are congruent if they have the same shape and size, or if one has the same shape and size as the mirror image of the other.[1]
More formally, two sets of points are called congruent if, and only if, one can be transformed into the other by an isometry, i.e., a combination of rigid motions, namely a translation, a rotation, and a reflection. This means that either object can be repositioned and reflected (but not resized) so as to coincide precisely with the other object. Therefore two distinct plane figures on a piece of paper are congruent if they can be cut out and then matched up completely. Turning the paper over is permitted.
In elementary geometry the word congruent is often used as follows.[2] The word equal is often used in place of congruent for these objects.
• Two line segments are congruent if they have the same length.
• Two angles are congruent if they have the same measure.
• Two circles are congruent if they have the same diameter.
In this sense, the statement that two plane figures are congruent implies that their corresponding characteristics are "congruent" or "equal", including not just their corresponding sides and angles, but also their corresponding diagonals, perimeters, and areas.
The related concept of similarity applies if the objects have the same shape but do not necessarily have the same size. (Most definitions consider congruence to be a form of similarity, although a minority require that the objects have different sizes in order to qualify as similar.)
Determining congruence of polygons
For two polygons to be congruent, they must have the same number of sides (and hence the same number of vertices). Two polygons with n sides are congruent if and only if they each have numerically identical sequences (even if clockwise for one polygon and counterclockwise for the other) side-angle-side-angle-... for n sides and n angles.
Congruence of polygons can be established graphically as follows:
• First, match and label the corresponding vertices of the two figures.
• Second, draw a vector from one of the vertices of the one of the figures to the corresponding vertex of the other figure. Translate the first figure by this vector so that these two vertices match.
• Third, rotate the translated figure about the matched vertex until one pair of corresponding sides matches.
• Fourth, reflect the rotated figure about this matched side until the figures match.
If at any time the step cannot be completed, the polygons are not congruent.
Congruence of triangles
See also: Solution of triangles
Two triangles are congruent if their corresponding sides are equal in length, and their corresponding angles are equal in measure.
Symbolically, we write the congruency and incongruency of two triangles △ABC and △A′B′C′ as follows:
$ABC\cong A'B'C'$
$ABC\ncong A'B'C'$
In many cases it is sufficient to establish the equality of three corresponding parts and use one of the following results to deduce the congruence of the two triangles.
Determining congruence
Sufficient evidence for congruence between two triangles in Euclidean space can be shown through the following comparisons:
• SAS (side-angle-side): If two pairs of sides of two triangles are equal in length, and the included angles are equal in measurement, then the triangles are congruent.
• SSS (side-side-side): If three pairs of sides of two triangles are equal in length, then the triangles are congruent.
• ASA (angle-side-angle): If two pairs of angles of two triangles are equal in measurement, and the included sides are equal in length, then the triangles are congruent.
The ASA postulate was contributed by Thales of Miletus (Greek). In most systems of axioms, the three criteria – SAS, SSS and ASA – are established as theorems. In the School Mathematics Study Group system SAS is taken as one (#15) of 22 postulates.
• AAS (angle-angle-side): If two pairs of angles of two triangles are equal in measurement, and a pair of corresponding non-included sides are equal in length, then the triangles are congruent. AAS is equivalent to an ASA condition, by the fact that if any two angles are given, so is the third angle, since their sum should be 180°. ASA and AAS are sometimes combined into a single condition, AAcorrS – any two angles and a corresponding side.[3]
• RHS (right-angle-hypotenuse-side), also known as HL (hypotenuse-leg): If two right-angled triangles have their hypotenuses equal in length, and a pair of other sides are equal in length, then the triangles are congruent.
Side-side-angle
The SSA condition (side-side-angle) which specifies two sides and a non-included angle (also known as ASS, or angle-side-side) does not by itself prove congruence. In order to show congruence, additional information is required such as the measure of the corresponding angles and in some cases the lengths of the two pairs of corresponding sides. There are a few possible cases:
If two triangles satisfy the SSA condition and the length of the side opposite the angle is greater than or equal to the length of the adjacent side (SSA, or long side-short side-angle), then the two triangles are congruent. The opposite side is sometimes longer when the corresponding angles are acute, but it is always longer when the corresponding angles are right or obtuse. Where the angle is a right angle, also known as the hypotenuse-leg (HL) postulate or the right-angle-hypotenuse-side (RHS) condition, the third side can be calculated using the Pythagorean theorem thus allowing the SSS postulate to be applied.
If two triangles satisfy the SSA condition and the corresponding angles are acute and the length of the side opposite the angle is equal to the length of the adjacent side multiplied by the sine of the angle, then the two triangles are congruent.
If two triangles satisfy the SSA condition and the corresponding angles are acute and the length of the side opposite the angle is greater than the length of the adjacent side multiplied by the sine of the angle (but less than the length of the adjacent side), then the two triangles cannot be shown to be congruent. This is the ambiguous case and two different triangles can be formed from the given information, but further information distinguishing them can lead to a proof of congruence.
Angle-angle-angle
In Euclidean geometry, AAA (angle-angle-angle) (or just AA, since in Euclidean geometry the angles of a triangle add up to 180°) does not provide information regarding the size of the two triangles and hence proves only similarity and not congruence in Euclidean space.
However, in spherical geometry and hyperbolic geometry (where the sum of the angles of a triangle varies with size) AAA is sufficient for congruence on a given curvature of surface.[4]
CPCTC
This acronym stands for Corresponding Parts of Congruent Triangles are Congruent, which is an abbreviated version of the definition of congruent triangles.[5][6]
In more detail, it is a succinct way to say that if triangles ABC and DEF are congruent, that is,
$\triangle ABC\cong \triangle DEF,$
with corresponding pairs of angles at vertices A and D; B and E; and C and F, and with corresponding pairs of sides AB and DE; BC and EF; and CA and FD, then the following statements are true:
${\overline {AB}}\cong {\overline {DE}}$
${\overline {BC}}\cong {\overline {EF}}$
${\overline {AC}}\cong {\overline {DF}}$
$\angle BAC\cong \angle EDF$
$\angle ABC\cong \angle DEF$
$\angle BCA\cong \angle EFD.$
The statement is often used as a justification in elementary geometry proofs when a conclusion of the congruence of parts of two triangles is needed after the congruence of the triangles has been established. For example, if two triangles have been shown to be congruent by the SSS criteria and a statement that corresponding angles are congruent is needed in a proof, then CPCTC may be used as a justification of this statement.
A related theorem is CPCFC, in which "triangles" is replaced with "figures" so that the theorem applies to any pair of polygons or polyhedrons that are congruent.
Definition of congruence in analytic geometry
In a Euclidean system, congruence is fundamental; it is the counterpart of equality for numbers. In analytic geometry, congruence may be defined intuitively thus: two mappings of figures onto one Cartesian coordinate system are congruent if and only if, for any two points in the first mapping, the Euclidean distance between them is equal to the Euclidean distance between the corresponding points in the second mapping.
A more formal definition states that two subsets A and B of Euclidean space Rn are called congruent if there exists an isometry f : Rn → Rn (an element of the Euclidean group E(n)) with f(A) = B. Congruence is an equivalence relation.
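When a correspondence between the points is given, this definition yields a direct computational test: two ordered point sets are congruent exactly when all corresponding pairwise distances agree. The following Python sketch illustrates the test; the function name and tolerance are choices made for the example.

    import numpy as np

    def are_congruent(A, B, tol=1e-9):
        """True if the ordered point sets A and B (each of shape (k, n)) are related by an
        isometry of R^n taking the i-th point of A to the i-th point of B."""
        A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
        if A.shape != B.shape:
            return False
        dist_A = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)  # pairwise distances in A
        dist_B = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances in B
        return bool(np.allclose(dist_A, dist_B, atol=tol))

    # A 3-4-5 right triangle and a rotated, reflected copy of it are congruent:
    print(are_congruent([(0, 0), (3, 0), (0, 4)], [(1, 1), (1, 4), (-3, 1)]))   # True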
Congruent conic sections
Two conic sections are congruent if their eccentricities and one other distinct parameter characterizing them are equal. Their eccentricities establish their shapes, equality of which is sufficient to establish similarity, and the second parameter then establishes size. Since two circles, parabolas, or rectangular hyperbolas always have the same eccentricity (specifically 0 in the case of circles, 1 in the case of parabolas, and ${\sqrt {2}}$ in the case of rectangular hyperbolas), two circles, parabolas, or rectangular hyperbolas need to have only one other common parameter value, establishing their size, for them to be congruent.
Congruent polyhedra
For two polyhedra with the same combinatorial type (that is, the same number E of edges, the same number of faces, and the same number of sides on corresponding faces), there exists a set of E measurements that can establish whether or not the polyhedra are congruent.[7][8] The number is tight, meaning that fewer than E measurements are not enough if the polyhedra are generic among their combinatorial type, but fewer measurements can suffice in special cases. For example, a cube has 12 edges, but 9 measurements are enough to decide whether a polyhedron of that combinatorial type is congruent to a given regular cube.
Congruent triangles on a sphere
Main articles: Solving triangles § Solving spherical triangles, and Spherical trigonometry § Solution of triangles
As with plane triangles, on a sphere two triangles sharing the same sequence of angle-side-angle (ASA) are necessarily congruent (that is, they have three identical sides and three identical angles).[9] This can be seen as follows: One can situate one of the vertices with a given angle at the south pole and run the side with given length up the prime meridian. Knowing both angles at either end of the segment of fixed length ensures that the other two sides emanate with a uniquely determined trajectory, and thus will meet each other at a uniquely determined point; thus ASA is valid.
The congruence theorems side-angle-side (SAS) and side-side-side (SSS) also hold on a sphere; in addition, if two spherical triangles have an identical angle-angle-angle (AAA) sequence, they are congruent (unlike for plane triangles).[9]
The plane-triangle congruence theorem angle-angle-side (AAS) does not hold for spherical triangles.[10] As in plane geometry, side-side-angle (SSA) does not imply congruence.
Notation
A symbol commonly used for congruence is an equals symbol with a tilde above it, ≅, corresponding to the Unicode character 'approximately equal to' (U+2245). In the UK, the three-bar equal sign ≡ (U+2261) is sometimes used.
See also
• Euclidean plane isometry
• Isometry
References
1. Clapham, C.; Nicholson, J. (2009). "Oxford Concise Dictionary of Mathematics, Congruent Figures" (PDF). Addison-Wesley. p. 167. Archived from the original on 29 October 2013. Retrieved 2 June 2017.
2. "Congruence". Math Open Reference. 2009. Retrieved 2 June 2017.
3. Parr, H. E. (1970). Revision Course in School mathematics. Mathematics Textbooks Second Edition. G Bell and Sons Ltd. ISBN 0-7135-1717-4.
4. Cornel, Antonio (2002). Geometry for Secondary Schools. Mathematics Textbooks Second Edition. Bookmark Inc. ISBN 971-569-441-1.
5. Jacobs, Harold R. (1974), Geometry, W.H. Freeman, p. 160, ISBN 0-7167-0456-0 Jacobs uses a slight variation of the phrase
6. "Congruent Triangles". Cliff's Notes. Retrieved 2014-02-04.
7. Borisov, Alexander; Dickinson, Mark; Hastings, Stuart (March 2010). "A Congruence Problem for Polyhedra". American Mathematical Monthly. 117 (3): 232–249. arXiv:0811.4197. doi:10.4169/000298910X480081. S2CID 8166476.
8. Creech, Alexa. "A Congruence Problem" (PDF). Archived from the original (PDF) on November 11, 2013.
9. Bolin, Michael (September 9, 2003). "Exploration of Spherical Geometry" (PDF). pp. 6–7. Archived (PDF) from the original on 2022-10-09.
10. Hollyer, L. "Slide 89 of 112".
External links
Wikimedia Commons has media related to Congruence.
• The SSS at Cut-the-Knot
• The SSA at Cut-the-Knot
• Interactive animations demonstrating Congruent polygons, Congruent angles, Congruent line segments, Congruent triangles at Math Open Reference
Authority control: National
• Germany
|
Wikipedia
|
Edge (geometry)
In geometry, an edge is a particular type of line segment joining two vertices in a polygon, polyhedron, or higher-dimensional polytope.[1] In a polygon, an edge is a line segment on the boundary,[2] and is often called a polygon side. In a polyhedron or more generally a polytope, an edge is a line segment where two faces (or polyhedron sides) meet.[3] A segment joining two vertices while passing through the interior or exterior is not an edge but instead is called a diagonal.
Not to be confused with Edge (graph theory).
• Three edges AB, BC, and CA, each between two vertices of a triangle.
• A polygon is bounded by edges; this square has 4 edges.
• Every edge is shared by two faces in a polyhedron, like this cube.
• Every edge is shared by three or more faces in a 4-polytope, as seen in this projection of a tesseract.
Relation to edges in graphs
In graph theory, an edge is an abstract object connecting two graph vertices, unlike polygon and polyhedron edges which have a concrete geometric representation as a line segment. However, any polyhedron can be represented by its skeleton or edge-skeleton, a graph whose vertices are the geometric vertices of the polyhedron and whose edges correspond to the geometric edges.[4] Conversely, the graphs that are skeletons of three-dimensional polyhedra can be characterized by Steinitz's theorem as being exactly the 3-vertex-connected planar graphs.[5]
Number of edges in a polyhedron
Any convex polyhedron's surface has Euler characteristic
$V-E+F=2,$
where V is the number of vertices, E is the number of edges, and F is the number of faces. This equation is known as Euler's polyhedron formula. Thus the number of edges is 2 less than the sum of the numbers of vertices and faces. For example, a cube has 8 vertices and 6 faces, and hence 12 edges.
Incidences with other faces
In a polygon, two edges meet at each vertex; more generally, by Balinski's theorem, at least d edges meet at every vertex of a d-dimensional convex polytope.[6] Similarly, in a polyhedron, exactly two two-dimensional faces meet at every edge,[7] while in higher dimensional polytopes three or more two-dimensional faces meet at every edge.
Alternative terminology
In the theory of high-dimensional convex polytopes, a facet or side of a d-dimensional polytope is one of its (d − 1)-dimensional features, a ridge is a (d − 2)-dimensional feature, and a peak is a (d − 3)-dimensional feature. Thus, the edges of a polygon are its facets, the edges of a 3-dimensional convex polyhedron are its ridges, and the edges of a 4-dimensional polytope are its peaks.[8]
See also
• Extended side
References
1. Ziegler, Günter M. (1995), Lectures on Polytopes, Graduate Texts in Mathematics, vol. 152, Springer, Definition 2.1, p. 51, ISBN 9780387943657.
2. Weisstein, Eric W. "Polygon Edge." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PolygonEdge.html
3. Weisstein, Eric W. "Polytope Edge." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PolytopeEdge.html
4. Senechal, Marjorie (2013), Shaping Space: Exploring Polyhedra in Nature, Art, and the Geometrical Imagination, Springer, p. 81, ISBN 9780387927145.
5. Pisanski, Tomaž; Randić, Milan (2000), "Bridges between geometry and graph theory", in Gorini, Catherine A. (ed.), Geometry at work, MAA Notes, vol. 53, Washington, DC: Math. Assoc. America, pp. 174–194, MR 1782654. See in particular Theorem 3, p. 176.
6. Balinski, M. L. (1961), "On the graph structure of convex polyhedra in n-space", Pacific Journal of Mathematics, 11 (2): 431–434, doi:10.2140/pjm.1961.11.431, MR 0126765.
7. Wenninger, Magnus J. (1974), Polyhedron Models, Cambridge University Press, p. 1, ISBN 9780521098595.
8. Seidel, Raimund (1986), "Constructing higher-dimensional convex hulls at logarithmic cost per face", Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing (STOC '86), pp. 404–413, doi:10.1145/12130.12172, S2CID 8342016.
External links
• Weisstein, Eric W. "Polygonal edge". MathWorld.
• Weisstein, Eric W. "Polyhedral edge". MathWorld.
|
Wikipedia
|
Side-approximation theorem
In geometric topology, the side-approximation theorem was proved by Bing (1963). It implies that a 2-sphere in R3 can be approximated by polyhedral 2-spheres.
References
• Bing, R. H. (1957), "Approximating surfaces with polyhedral ones", Annals of Mathematics, Second Series, 65: 465–483, doi:10.2307/1970057, ISSN 0003-486X, JSTOR 1970057, MR 0087090
• Bing, R. H. (1963), "Approximating surfaces from the side", Annals of Mathematics, Second Series, 77: 145–192, doi:10.2307/1970203, ISSN 0003-486X, JSTOR 1970203, MR 0150744
|
Wikipedia
|
Blockchain
A blockchain is a distributed ledger with growing lists of records (blocks) that are securely linked together via cryptographic hashes.[1][2][3][4] Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree, where data nodes are represented by leaves). Since each block contains information about the previous block, they effectively form a chain (compare linked list data structure), with each additional block linking to the ones before it. Consequently, blockchain transactions are irreversible in that, once they are recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
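As a toy illustration of this hash-linking, the following Python sketch chains blocks with SHA-256 from the standard hashlib module. It is a didactic model only, with an invented Block class; a real blockchain also includes a consensus protocol, a Merkle tree over the transactions, and network-level machinery.

    import hashlib, json, time

    class Block:
        def __init__(self, transactions, previous_hash):
            self.timestamp = time.time()
            self.transactions = transactions       # stands in for the Merkle tree of transaction data
            self.previous_hash = previous_hash     # cryptographic link to the preceding block
            self.hash = self.compute_hash()

        def compute_hash(self):
            payload = json.dumps(
                {"timestamp": self.timestamp,
                 "transactions": self.transactions,
                 "previous_hash": self.previous_hash},
                sort_keys=True)
            return hashlib.sha256(payload.encode()).hexdigest()

    # Altering any earlier block changes its hash and breaks every later link in the chain.
    genesis = Block(["genesis"], previous_hash="0" * 64)
    second = Block(["Alice pays Bob 1"], previous_hash=genesis.hash)
    third = Block(["Bob pays Carol 1"], previous_hash=second.hash)
    print(third.previous_hash == second.compute_hash())   # True while the chain is untampered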
Blockchains are typically managed by a peer-to-peer (P2P) computer network for use as a public distributed ledger, where nodes collectively adhere to a consensus algorithm protocol to add and validate new transaction blocks. Although blockchain records are not unalterable, since blockchain forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance.[5]
A blockchain was created by a person (or group of people) using the name (or pseudonym) Satoshi Nakamoto in 2008 to serve as the public distributed ledger for bitcoin cryptocurrency transactions, based on previous work by Stuart Haber, W. Scott Stornetta, and Dave Bayer.[6] The implementation of the blockchain within bitcoin made it the first digital currency to solve the double-spending problem without the need of a trusted authority or central server. The bitcoin design has inspired other applications[3][2] and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain may be considered a type of payment rail.[7]
Private blockchains have been proposed for business use. Computerworld called the marketing of such privatized blockchains without a proper security model "snake oil";[8] however, others have argued that permissioned blockchains, if carefully designed, may be more decentralized and therefore more secure in practice than permissionless ones.[4][9]
History
Cryptographer David Chaum first proposed a blockchain-like protocol in his 1982 dissertation "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups."[10] Further work on a cryptographically secured chain of blocks was described in 1991 by Stuart Haber and W. Scott Stornetta.[4][11] They wanted to implement a system wherein document timestamps could not be tampered with. In 1992, Haber, Stornetta, and Dave Bayer incorporated Merkle trees into the design, which improved its efficiency by allowing several document certificates to be collected into one block.[4][12] Under their company Surety, their document certificate hashes have been published in The New York Times every week since 1995.[13]
The first decentralized blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto in 2008. Nakamoto improved the design in an important way using a Hashcash-like method to timestamp blocks without requiring them to be signed by a trusted party and introducing a difficulty parameter to stabilize the rate at which blocks are added to the chain.[4] The design was implemented the following year by Nakamoto as a core component of the cryptocurrency bitcoin, where it serves as the public ledger for all transactions on the network.[3]
In August 2014, the bitcoin blockchain file size, containing records of all transactions that have occurred on the network, reached 20 GB (gigabytes).[14] In January 2015, the size had grown to almost 30 GB, and from January 2016 to January 2017, the bitcoin blockchain grew from 50 GB to 100 GB in size. The ledger size had exceeded 200 GB by early 2020.[15]
The words block and chain were used separately in Satoshi Nakamoto's original paper, but were eventually popularized as a single word, blockchain, by 2016.[16]
According to Accenture, an application of the diffusion of innovations theory suggests that blockchains attained a 13.5% adoption rate within financial services in 2016, therefore reaching the early adopters' phase.[17] Industry trade groups joined to create the Global Blockchain Forum in 2016, an initiative of the Chamber of Digital Commerce.
In May 2018, Gartner found that only 1% of CIOs indicated any kind of blockchain adoption within their organisations, and only 8% of CIOs were in the short-term "planning or [looking at] active experimentation with blockchain".[18] For the year 2019, Gartner reported that 5% of CIOs believed blockchain technology was a 'game-changer' for their business.[19]
Structure and design
A blockchain is a decentralized, distributed, and often public, digital ledger consisting of records called blocks that are used to record transactions across many computers so that any involved block cannot be altered retroactively without the alteration of all subsequent blocks.[3][20] This allows the participants to verify and audit transactions independently and relatively inexpensively.[21] A blockchain database is managed autonomously using a peer-to-peer network and a distributed timestamping server. Records are authenticated by mass collaboration powered by collective self-interests.[22] Such a design facilitates a robust workflow where participants' uncertainty regarding data security is marginal. The use of a blockchain removes the characteristic of infinite reproducibility from a digital asset. It confirms that each unit of value was transferred only once, solving the long-standing problem of double-spending. A blockchain has been described as a value-exchange protocol.[23] A blockchain can maintain title rights because, when properly set up to detail the exchange agreement, it provides a record that compels offer and acceptance.
Logically, a blockchain can be seen as consisting of several layers:[24]
• infrastructure (hardware)
• networking (node discovery, information propagation[25] and verification)
• consensus (proof of work, proof of stake)
• data (blocks, transactions)
• application (smart contracts/decentralized applications, if applicable)
Blocks
Blocks hold batches of valid transactions that are hashed and encoded into a Merkle tree.[3] Each block includes the cryptographic hash of the prior block in the blockchain, linking the two. The linked blocks form a chain.[3] This iterative process confirms the integrity of the previous block, all the way back to the initial block, which is known as the genesis block (Block 0).[26][27] To assure the integrity of a block and the data contained in it, the block is usually digitally signed.[28]
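As a rough illustration of how a batch of transactions is condensed into a single Merkle root, here is a sketch in Python (a simplification: bitcoin, for example, uses double SHA-256 and its own serialization rules, which this example does not reproduce):

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[bytes]) -> bytes:
    # Hash each transaction, then repeatedly hash pairs until one root remains.
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash on odd counts
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"]
print(merkle_root(txs).hex())

Changing any single transaction changes its leaf hash and therefore the root, so a block header only needs to commit to the root in order to commit to every transaction beneath it.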
Sometimes separate blocks can be produced concurrently, creating a temporary fork. In addition to a secure hash-based history, any blockchain has a specified algorithm for scoring different versions of the history so that one with a higher score can be selected over others. Blocks not selected for inclusion in the chain are called orphan blocks.[27] Peers supporting the database have different versions of the history from time to time and keep only the highest-scoring version of the database known to them. Whenever a peer receives a higher-scoring version (usually the old version with a single new block added), it extends or overwrites its own database and retransmits the improvement to its peers. There is never an absolute guarantee that any particular entry will remain in the best version of history forever. Blockchains are typically built so that the score of new blocks adds to that of the blocks beneath them, and peers are given incentives to extend the chain with new blocks rather than overwrite old blocks. Therefore, the probability of an entry becoming superseded decreases exponentially[29] as more blocks are built on top of it, eventually becoming very low.[3][30]: ch. 08 [31] For example, bitcoin uses a proof-of-work system, where the chain with the most cumulative proof-of-work is considered the valid one by the network. There are a number of methods that can be used to demonstrate a sufficient level of computation. Within a blockchain the computation is carried out redundantly rather than in the traditional segregated and parallel manner.[32]
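A minimal sketch of that scoring rule, under the simplifying assumption that each block carries an explicit work value and that a peer simply keeps whichever known chain has the greater cumulative work (real clients also fully validate every block before accepting a competing history):

def chain_score(chain: list[dict]) -> int:
    # Score a chain by summing the work recorded in each block.
    return sum(block["work"] for block in chain)

def choose_chain(local: list[dict], received: list[dict]) -> list[dict]:
    # Keep whichever version of history has the higher cumulative score.
    return received if chain_score(received) > chain_score(local) else local

local_chain = [{"work": 1}, {"work": 1}]
longer_chain = [{"work": 1}, {"work": 1}, {"work": 1}]
print(choose_chain(local_chain, longer_chain) is longer_chain)  # True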
Block time
The block time is the average time it takes for the network to generate one extra block in the blockchain. By the time of block completion, the included data becomes verifiable. In cryptocurrency, this is practically when the transaction takes place, so a shorter block time means faster transactions. The block time for Ethereum is set to between 14 and 15 seconds, while for bitcoin it is on average 10 minutes.[33]
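As a small worked example of how block time can be estimated from observed data (purely illustrative; real published figures are derived from long windows of on-chain timestamps):

def average_block_time(timestamps: list[float]) -> float:
    # Average spacing, in seconds, between consecutive block timestamps.
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical block timestamps (in seconds), roughly ten minutes apart.
ts = [0.0, 590.0, 1210.0, 1795.0, 2400.0]
print(average_block_time(ts))  # 600.0 seconds, i.e. a 10-minute block time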
Hard forks
This section is an excerpt from Fork (blockchain) § Hard fork.
A hard fork is a change to the blockchain protocol that is not backward-compatible and requires all users to upgrade their software in order to continue participating in the network. In a hard fork, the network splits into two separate versions: one that follows the new rules and one that follows the old rules.
For example, Ethereum was hard-forked in 2016 to "make whole" the investors in The DAO, which had been hacked by exploiting a vulnerability in its code. In this case, the fork resulted in a split creating the Ethereum and Ethereum Classic chains. In 2014, the Nxt community was asked to consider a hard fork that would have led to a rollback of the blockchain records to mitigate the effects of a theft of 50 million NXT from a major cryptocurrency exchange. The hard fork proposal was rejected, and some of the funds were recovered after negotiations and a ransom payment. Alternatively, to prevent a permanent split, a majority of nodes using the new software may return to the old rules, as was the case with the bitcoin split on 12 March 2013.[34]
A more recent hard-fork example is that of Bitcoin in 2017, which resulted in a split creating Bitcoin Cash.[35] The network split was mainly due to a disagreement over how to increase the number of transactions per second to accommodate demand.[36]
Decentralization
By storing data across its peer-to-peer network, the blockchain eliminates some risks that come with data being held centrally.[3] The decentralized blockchain may use ad hoc message passing and distributed networking.[37]
In a so-called "51% attack", a single entity gains control of more than half of a network's block-creation power and can then manipulate that specific blockchain record at will, allowing double-spending.[38]
Blockchain security methods include the use of public-key cryptography.[39]: 5 A public key (a long, random-looking string of numbers) is an address on the blockchain. Value tokens sent across the network are recorded as belonging to that address. A private key is like a password that gives its owner access to their digital assets or the means to otherwise interact with the various capabilities that blockchains now support. Data stored on the blockchain is generally considered incorruptible.[3]
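The sketch below illustrates the key, address, and signature relationship using the third-party Python cryptography package (an assumption-heavy stand-in: it uses Ed25519 keys and a truncated SHA-256 of the public key as the "address", whereas, for example, bitcoin uses secp256k1 ECDSA keys and a multi-step Base58Check address encoding):

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()        # the owner's secret
public_key = private_key.public_key()

pub_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
address = hashlib.sha256(pub_bytes).hexdigest()[:40]  # toy address derivation

tx = b"send 5 tokens from " + address.encode() + b" to somebody-else"
signature = private_key.sign(tx)                  # only the key holder can do this

public_key.verify(signature, tx)                  # raises InvalidSignature if forged
print("address:", address)

The point is that anyone can check the signature against the public key, while only the holder of the private key could have produced it.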
Every node in a decentralized system has a copy of the blockchain. Data quality is maintained by massive database replication[40] and computational trust. No centralized "official" copy exists and no user is "trusted" more than any other.[39] Transactions are broadcast to the network using the software. Messages are delivered on a best-effort basis. Early blockchains rely on energy-intensive mining nodes to validate transactions,[27] add them to the block they are building, and then broadcast the completed block to other nodes.[30]: ch. 08 Blockchains use various time-stamping schemes, such as proof-of-work, to serialize changes.[41] Later consensus methods include proof of stake.[27] The growth of a decentralized blockchain is accompanied by the risk of centralization because the computer resources required to process larger amounts of data become more expensive.[42]
Finality
Finality is the level of confidence that the well-formed block recently appended to the blockchain will not be revoked in the future (is "finalized") and thus can be trusted. Most distributed blockchain protocols, whether proof of work or proof of stake, cannot guarantee the finality of a freshly committed block, and instead rely on "probabilistic finality": as the block goes deeper into a blockchain, it is less likely to be altered or reverted by a newly found consensus.[43]
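The exponential decay of this reversal risk can be made concrete with the attacker catch-up calculation from section 11 of the bitcoin whitepaper, which models the attacker's progress as a Poisson process. The sketch below reproduces that calculation in Python (q is the attacker's share of the hash power, z the number of confirmations; this is an illustration of the published formula, not a property of any specific network):

import math

def attacker_success_probability(q: float, z: int) -> float:
    # Probability that an attacker controlling a fraction q of the hash power
    # ever overtakes the honest chain from z blocks behind.
    p = 1.0 - q
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

for z in (0, 2, 6, 10):
    print(z, round(attacker_success_probability(0.1, z), 7))

For q = 0.1 the probability falls below 0.1% by six confirmations, which is why recipients commonly wait for several blocks before treating a payment as settled.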
Byzantine fault tolerance-based proof-of-stake protocols purport to provide so-called "absolute finality": a randomly chosen validator proposes a block, the rest of the validators vote on it, and, if a supermajority decision approves it, the block is irreversibly committed into the blockchain.[43] A modification of this method, an "economic finality", is used in practical protocols, like the Casper protocol used in Ethereum: validators which sign two different blocks at the same position in the blockchain are subject to "slashing", where their leveraged stake is forfeited.[43]
Openness
Open blockchains are more user-friendly than some traditional ownership records, which, while open to the public, still require physical access to view. Because all early blockchains were permissionless, controversy has arisen over the blockchain definition. An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain.[44][45][46][47][48] Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases.[49] Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain.[50]: 30–31 Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision.[44][46] Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion."[8][51]
Permissionless (public) blockchain
An advantage of an open, permissionless, or public blockchain network is that guarding against bad actors is not required and no access control is needed.[29] This means that applications can be added to the network without the approval or trust of others, using the blockchain as a transport layer.[29]
Bitcoin and other cryptocurrencies currently secure their blockchain by requiring new entries to include proof of work. To prolong the blockchain, bitcoin uses Hashcash puzzles. While Hashcash was designed in 1997 by Adam Back, the underlying idea was first proposed by Cynthia Dwork and Moni Naor in their 1992 paper "Pricing via Processing or Combatting Junk Mail".
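A minimal sketch of a Hashcash-style puzzle in Python (illustrative assumptions: difficulty is expressed as a number of leading zero hex digits and a single SHA-256 is used, whereas bitcoin compares a double SHA-256 of an 80-byte block header against a numeric target):

import hashlib
from itertools import count

def proof_of_work(header: bytes, difficulty: int) -> int:
    # Search for a nonce whose hash, combined with the header, starts with
    # `difficulty` zero hex digits.
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

nonce = proof_of_work(b"block header bytes", difficulty=4)
print(nonce, hashlib.sha256(b"block header bytes" + str(nonce).encode()).hexdigest())

Finding a valid nonce requires brute-force search, but anyone can verify the result with a single hash, which is what makes the scheme usable for permissionless block production.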
In 2016, venture capital investment for blockchain-related projects was weakening in the USA but increasing in China.[52] Bitcoin and many other cryptocurrencies use open (public) blockchains. As of April 2018, bitcoin has the highest market capitalization.
Permissioned (private) blockchain
Permissioned blockchains use an access control layer to govern who has access to the network.[53] It has been argued that permissioned blockchains can guarantee a certain level of decentralization, if carefully designed, as opposed to permissionless blockchains, which are often centralized in practice.[9]
Disadvantages of permissioned blockchain
Nikolai Hampton argued in Computerworld that "There is also no need for a '51 percent' attack on a private blockchain, as the private blockchain (most likely) already controls 100 percent of all block creation resources. If you could attack or damage the blockchain creation tools on a private corporate server, you could effectively control 100 percent of their network and alter transactions however you wished."[8] This has a set of particularly profound adverse implications during a financial crisis or debt crisis like the financial crisis of 2007–08, where politically powerful actors may make decisions that favor some groups at the expense of others,[54] and "the bitcoin blockchain is protected by the massive group mining effort. It's unlikely that any private blockchain will try to protect records using gigawatts of computing power — it's time-consuming and expensive."[8] He also said, "Within a private blockchain there is also no 'race'; there's no incentive to use more power or discover blocks faster than competitors. This means that many in-house blockchain solutions will be nothing more than cumbersome databases."[8]
Blockchain analysis
The analysis of public blockchains has become increasingly important with the popularity of bitcoin, Ethereum, litecoin and other cryptocurrencies.[55] A public blockchain allows anyone with the necessary know-how to observe and analyse its data. Understanding and tracing the flow of crypto assets has been an issue for many cryptocurrencies, crypto exchanges and banks,[56][57] driven by accusations that blockchain-enabled cryptocurrencies enable illicit dark-market trade in drugs and weapons, money laundering, and other crimes.[58] A common belief has been that cryptocurrency is private and untraceable, leading many actors to use it for illegal purposes. This is changing: specialised tech companies now provide blockchain tracking services, making crypto exchanges, law enforcement and banks more aware of what is happening with crypto funds and fiat-crypto exchanges. The development, some argue, has led criminals to prioritise the use of new cryptos such as Monero.[59][60][61] The underlying tension is between the public accessibility of blockchain data and the personal privacy of that same data; it is a key debate in cryptocurrency and, ultimately, in blockchain.[62]
Standardisation
In April 2016, Standards Australia submitted a proposal to the International Organization for Standardization to consider developing standards to support blockchain technology. This proposal resulted in the creation of ISO Technical Committee 307, Blockchain and Distributed Ledger Technologies.[63] The technical committee has working groups relating to blockchain terminology, reference architecture, security and privacy, identity, smart contracts, governance and interoperability for blockchain and DLT, as well as standards specific to industry sectors and generic government requirements.[64] More than 50 countries are participating in the standardization process together with external liaisons such as the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the European Commission, the International Federation of Surveyors, the International Telecommunication Union (ITU) and the United Nations Economic Commission for Europe (UNECE).[64]
Many other national standards bodies and open standards bodies are also working on blockchain standards.[65] These include the National Institute of Standards and Technology[66] (NIST), the European Committee for Electrotechnical Standardization[67] (CENELEC), the Institute of Electrical and Electronics Engineers[68] (IEEE), the Organization for the Advancement of Structured Information Standards (OASIS), and some individual participants in the Internet Engineering Task Force[69] (IETF).
Centralized blockchain
Although most blockchain implementations are decentralized and distributed, Oracle launched a centralized blockchain table feature in its Oracle 21c database. The Blockchain Table in Oracle 21c is a centralized ledger that provides immutability. Compared with consensus-based distributed blockchains, such centralized blockchains can normally provide higher transaction throughput and lower latency.[70][71]
Types
Currently, there are at least four types of blockchain networks — public blockchains, private blockchains, consortium blockchains and hybrid blockchains.
Public blockchains
A public blockchain has absolutely no access restrictions. Anyone with an Internet connection can send transactions to it as well as become a validator (i.e., participate in the execution of a consensus protocol).[72] Usually, such networks offer economic incentives for those who secure them and utilize some type of proof-of-stake or proof-of-work algorithm.
Some of the largest, most known public blockchains are the bitcoin blockchain and the Ethereum blockchain.
Private blockchains
A private blockchain is permissioned.[53] One cannot join it unless invited by the network administrators. Participant and validator access is restricted. To distinguish them from open blockchains and from other peer-to-peer decentralized database applications that are not open ad hoc compute clusters, the term distributed ledger technology (DLT) is normally used for private blockchains.
Hybrid blockchains
A hybrid blockchain has a combination of centralized and decentralized features.[73] The exact workings of the chain can vary based on which portions of centralization and decentralization are used.
Sidechains
A sidechain is a blockchain ledger that runs in parallel to a primary blockchain.[74][75] Entries from the primary blockchain (where said entries typically represent digital assets) can be linked to and from the sidechain; this allows the sidechain to otherwise operate independently of the primary blockchain (e.g., by using an alternate means of record keeping or an alternate consensus algorithm).[76]
Consortium blockchain
A consortium blockchain is a type of blockchain that combines elements of both public and private blockchains. In a consortium blockchain, a group of organizations come together to create and operate the blockchain, rather than a single entity. The consortium members jointly manage the blockchain network and are responsible for validating transactions. Consortium blockchains are permissioned, meaning that only certain individuals or organizations are allowed to participate in the network. This allows for greater control over who can access the blockchain and helps to ensure that sensitive information is kept confidential.
Consortium blockchains are commonly used in industries where multiple organizations need to collaborate on a common goal, such as supply chain management or financial services. One advantage of consortium blockchains is that they can be more efficient and scalable than public blockchains, as the number of nodes required to validate transactions is typically smaller. Additionally, consortium blockchains can provide greater security and reliability than private blockchains, as the consortium members work together to maintain the network. Some examples of consortium blockchains include Quorum and Hyperledger.[77]
Uses
Blockchain technology can be integrated into multiple areas. The primary use of blockchains is as a distributed ledger for cryptocurrencies such as bitcoin; there were also a few other operational products that had matured from proof of concept by late 2016.[52] As of 2016, some businesses have been testing the technology and conducting low-level implementation to gauge blockchain's effects on organizational efficiency in their back office.[78]
In 2019, it was estimated that around $2.9 billion was invested in blockchain technology, an 89% increase from the year prior. Additionally, the International Data Corp has estimated that corporate investment into blockchain technology will reach $12.4 billion by 2022.[79] Furthermore, according to PricewaterhouseCoopers (PwC), the second-largest professional services network in the world, blockchain technology has the potential to generate an annual business value of more than $3 trillion by 2030. PwC's estimate is supported by a 2018 study it conducted, in which 600 business executives were surveyed and 84% reported at least some exposure to blockchain technology, indicating significant demand and interest in the technology.[80]
In 2019, the BBC World Service radio and podcast series Fifty Things That Made the Modern Economy identified blockchain as a technology that would have far-reaching consequences for economics and society. The economist and Financial Times journalist and broadcaster Tim Harford discussed why the underlying technology might have much wider applications and the challenges that needed to be overcome.[81] His first broadcast was on 29 June 2019.
The number of blockchain wallets quadrupled to 40 million between 2016 and 2020.[82]
A paper published in 2022 discussed the potential use of blockchain technology in sustainable management.[83]
Cryptocurrencies
Most cryptocurrencies use blockchain technology to record transactions. For example, the bitcoin network and Ethereum network are both based on blockchain.
The criminal enterprise Silk Road, which operated on Tor, utilized cryptocurrency for payments, some of which the US federal government has seized through blockchain analysis and forfeiture proceedings.[84]
Governments have mixed policies on the legality of their citizens or banks owning cryptocurrencies. China implements blockchain technology in several industries including a national digital currency which launched in 2020.[85] To strengthen their respective currencies, Western governments including the European Union and the United States have initiated similar projects.[86]
Smart contracts
Blockchain-based smart contracts are proposed contracts that can be partially or fully executed or enforced without human interaction.[87] One of the main objectives of a smart contract is automated escrow. A key feature of smart contracts is that they do not need a trusted third party (such as a trustee) to act as an intermediary between contracting entities — the blockchain network executes the contract on its own. This may reduce friction between entities when transferring value and could subsequently open the door to a higher level of transaction automation.[88] An IMF staff discussion from 2018 reported that smart contracts based on blockchain technology might reduce moral hazards and optimize the use of contracts in general. But "no viable smart contract systems have yet emerged." Due to the lack of widespread use, their legal status was unclear.[89][90]
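As a toy illustration of the escrow idea, the Python class below models only the state machine (real smart contracts are deterministic programs, e.g. EVM bytecode compiled from languages such as Solidity, executed and verified by every node; nothing here is an actual contract-platform API, and the class names and rules are assumptions for illustration):

class Escrow:
    # Toy escrow: value stays locked until both parties have confirmed.
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.confirmations: set[str] = set()
        self.released = False

    def confirm(self, party: str) -> None:
        if party not in (self.buyer, self.seller):
            raise ValueError("unknown party")
        self.confirmations.add(party)
        # Release automatically once both parties have confirmed;
        # no trusted third party is needed to trigger the payout.
        if self.confirmations == {self.buyer, self.seller}:
            self.released = True

deal = Escrow("alice", "bob", amount=5)
deal.confirm("alice")
deal.confirm("bob")
print(deal.released)  # True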
Financial services
According to Reason, many banks have expressed interest in implementing distributed ledgers for use in banking and are cooperating with companies creating private blockchains,[91][92][93] and according to a September 2016 IBM study, this is occurring faster than expected.[94]
Banks are interested in this technology not least because it has the potential to speed up back office settlement systems.[95] Moreover, as the blockchain industry has reached early maturity, institutional appreciation has grown that it is, practically speaking, the infrastructure of a whole new financial industry, with all the implications that entails.[96]
Banks such as UBS are opening new research labs dedicated to blockchain technology in order to explore how blockchain can be used in financial services to increase efficiency and reduce costs.[97][98]
Berenberg, a German bank, believes that blockchain is an "overhyped technology" that has had a large number of "proofs of concept", but still has major challenges, and very few success stories.[99]
The blockchain has also given rise to initial coin offerings (ICOs) as well as a new category of digital asset called security token offerings (STOs), also sometimes referred to as digital security offerings (DSOs).[100] STOs/DSOs may be conducted privately or on a public, regulated stock exchange and are used to tokenize traditional assets such as company shares as well as more innovative ones like intellectual property, real estate,[101] art, or individual products. A number of companies are active in this space, providing services for compliant tokenization, private STOs, and public STOs.
Games
Blockchain technology, such as cryptocurrencies and non-fungible tokens (NFTs), has been used in video games for monetization. Many live-service games offer in-game customization options, such as character skins or other in-game items, which the players can earn and trade with other players using in-game currency. Some games also allow for trading of virtual items using real-world currency, but this may be illegal in some countries where video games are seen as akin to gambling, and has led to gray market issues such as skin gambling, and thus publishers typically have shied away from allowing players to earn real-world funds from games.[102] Blockchain games typically allow players to trade these in-game items for cryptocurrency, which can then be exchanged for money.[103]
The first known game to use blockchain technologies was CryptoKitties, launched in November 2017, where the player would purchase NFTs with Ethereum cryptocurrency, each NFT consisting of a virtual pet that the player could breed with others to create offspring with combined traits as new NFTs.[104][103] The game made headlines in December 2017 when one virtual pet sold for more than US$100,000.[105] CryptoKitties also illustrated scalability problems for games on Ethereum when it created significant congestion on the Ethereum network in early 2018 with approximately 30% of all Ethereum transactions being for the game.[106][107]
By the early 2020s, there had not been a breakout success in video games using blockchain, as these games tend to focus on using blockchain for speculation instead of more traditional forms of gameplay, which offers limited appeal to most players. Such games also represent a high risk to investors as their revenues can be difficult to predict.[103] However, limited successes of some games, such as Axie Infinity during the COVID-19 pandemic, and corporate plans towards metaverse content, refueled interest in the area of GameFi, a term describing the intersection of video games and financing typically backed by blockchain currency, in the second half of 2021.[108] Several major publishers, including Ubisoft, Electronic Arts, and Take Two Interactive, have stated that blockchain and NFT-based games are under serious consideration for their companies in the future.[109]
In October 2021, Valve Corporation banned blockchain games, including those using cryptocurrency and NFTs, from being hosted on its Steam digital storefront service, which is widely used for personal computer gaming, claiming that this was an extension of their policy banning games that offered in-game items with real-world value. Valve's prior history with gambling, specifically skin gambling, was speculated to be a factor in the decision to ban blockchain games.[110] Journalists and players responded positively to Valve's decision, as blockchain and NFT games have a reputation for scams and fraud among most PC gamers,[102][110] and Epic Games, which runs the Epic Games Store in competition with Steam, said that it would be open to accepting blockchain games in the wake of Valve's refusal.[111]
Supply chain
There have been several different efforts to employ blockchains in supply chain management.
• Precious commodities mining — Blockchain technology has been used for tracking the origins of gemstones and other precious commodities. In 2016, The Wall Street Journal reported that the blockchain technology company Everledger was partnering with IBM's blockchain-based tracking service to trace the origin of diamonds to ensure that they were ethically mined.[112] As of 2019, the Diamond Trading Company (DTC) has been involved in building a diamond trading supply chain product called Tracr.[113]
• Food supply — As of 2018, Walmart and IBM were running a trial to use a blockchain-backed system for supply chain monitoring for lettuce and spinach — all nodes of the blockchain were administered by Walmart and were located on the IBM cloud.[114]
• Fashion industry — The relationship between brands, distributors, and customers in the fashion industry is often opaque, which hinders the sustainable and stable development of the industry. Blockchain has been proposed as a way to make this information transparent and thereby support the industry's sustainable development.[115]
• Motor vehicles — Mercedes-Benz and partner Icertis developed a blockchain prototype used to facilitate consistent documentation of contracts along the supply chain so that the ethical standards and contractual obligations required of its direct suppliers can be passed on to second tier suppliers and beyond.[116][117] In another project, the company uses blockchain technology to track the emissions of climate-relevant gases and the amount of secondary material along the supply chain for its battery cell manufacturers.[118]
Domain names
There are several different efforts to offer domain name services via the blockchain. These domain names can be controlled by the use of a private key, which purports to allow for uncensorable websites. This would also bypass a registrar's ability to suppress domains used for fraud, abuse, or illegal content.[119]
Namecoin is a cryptocurrency that supports the ".bit" top-level domain (TLD). Namecoin was forked from bitcoin in 2011. The .bit TLD is not sanctioned by ICANN, instead requiring an alternative DNS root.[119] As of 2015, .bit was used by 28 websites, out of 120,000 registered names.[120] Namecoin was dropped by OpenNIC in 2019, due to malware concerns and other potential legal issues.[121] Other blockchain alternatives to ICANN include The Handshake Network,[120] EmerDNS, and Unstoppable Domains.[119]
Specific TLDs include ".eth", ".luxe", and ".kred", which are associated with the Ethereum blockchain through the Ethereum Name Service (ENS). The .kred TLD also acts as an alternative to conventional cryptocurrency wallet addresses as a convenience for transferring cryptocurrency.[122]
Other uses
Blockchain technology can be used to create a permanent, public, transparent ledger system for compiling data on sales, tracking digital use and payments to content creators, such as wireless users[123] or musicians.[124] The Gartner 2019 CIO Survey reported 2% of higher education respondents had launched blockchain projects and another 18% were planning academic projects in the next 24 months.[125] In 2017, IBM partnered with ASCAP and PRS for Music to adopt blockchain technology in music distribution.[126] Imogen Heap's Mycelia service has also been proposed as a blockchain-based alternative "that gives artists more control over how their songs and associated data circulate among fans and other musicians."[127][128]
New distribution methods are available for the insurance industry such as peer-to-peer insurance, parametric insurance and microinsurance following the adoption of blockchain.[129][130] The sharing economy and IoT are also set to benefit from blockchains because they involve many collaborating peers.[131] The use of blockchain in libraries is being studied with a grant from the U.S. Institute of Museum and Library Services.[132]
Other blockchain designs include Hyperledger, a collaborative effort from the Linux Foundation to support blockchain-based distributed ledgers, with projects under this initiative including Hyperledger Burrow (by Monax) and Hyperledger Fabric (spearheaded by IBM).[133][134][135] Another is Quorum, a permissioned private blockchain by JPMorgan Chase with private storage, used for contract applications.[136]
Oracle introduced a blockchain table feature in its Oracle 21c database.[70][71]
Blockchain is also being used in peer-to-peer energy trading.[137][138][139]
Blockchain could be used in detecting counterfeits by associating unique identifiers to products, documents and shipments, and storing records associated with transactions that cannot be forged or altered.[140][141] It is however argued that blockchain technology needs to be supplemented with technologies that provide a strong binding between physical objects and blockchain systems,[142] as well as provisions for content creator verification similar to KYC standards.[143] The EUIPO established an Anti-Counterfeiting Blockathon Forum, with the objective of "defining, piloting and implementing" an anti-counterfeiting infrastructure at the European level.[144][145] The Dutch standardisation organisation NEN uses blockchain together with QR codes to authenticate certificates.[146]
As of 30 January 2022, Beijing and Shanghai are among the cities designated by China to trial blockchain applications.[147]
Blockchain interoperability
With the increasing number of blockchain systems appearing, even counting only those that support cryptocurrencies, blockchain interoperability is becoming a topic of major importance. The objective is to support transferring assets from one blockchain system to another. Wegner[148] stated that "interoperability is the ability of two or more software components to cooperate despite differences in language, interface, and execution platform". The objective of blockchain interoperability is therefore to support such cooperation among blockchain systems, despite those kinds of differences.
There are already several blockchain interoperability solutions available.[149] They can be classified into three categories: cryptocurrency interoperability approaches, blockchain engines, and blockchain connectors.
Several individual IETF participants produced the draft of a blockchain interoperability architecture.[150]
Energy consumption concerns
Some cryptocurrencies use blockchain mining, the peer-to-peer computations by which transactions are validated and verified. This process requires a large amount of energy. In June 2018, the Bank for International Settlements criticized the use of public proof-of-work blockchains for their high energy consumption.[151][152][153]
Early concern over the high energy consumption was a factor in later blockchains such as Cardano (2017), Solana (2020) and Polkadot (2020) adopting the less energy-intensive proof-of-stake model. Researchers have estimated that Bitcoin consumes 100,000 times as much energy as proof-of-stake networks.[154][155]
In 2021, a study by Cambridge University determined that Bitcoin (at 121 terawatt-hours per year) used more electricity than Argentina (at 121 TWh) and the Netherlands (109 TWh).[156] According to Digiconomist, one bitcoin transaction required 708 kilowatt-hours of electrical energy, the amount an average U.S. household consumed in 24 days.[157]
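A quick check of the arithmetic behind that household comparison, assuming average US residential consumption of roughly 29 kWh per day (about 10,600 kWh per year):

transaction_kwh = 708           # Digiconomist's per-transaction estimate cited above
household_kwh_per_day = 29      # assumed average US household consumption
print(transaction_kwh / household_kwh_per_day)  # roughly 24 days' worth of household electricity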
In February 2021, U.S. Treasury secretary Janet Yellen called Bitcoin "an extremely inefficient way to conduct transactions", saying "the amount of energy consumed in processing those transactions is staggering".[158] In March 2021, Bill Gates stated that "Bitcoin uses more electricity per transaction than any other method known to mankind", adding "It's not a great climate thing."[159]
Nicholas Weaver, of the International Computer Science Institute at the University of California, Berkeley, examined blockchain's online security, and the energy efficiency of proof-of-work public blockchains, and in both cases found it grossly inadequate.[160][161] The 31-45 TWh of electricity used for bitcoin in 2018 produced 17-23 million tonnes of CO2.[162][163] By 2022, the University of Cambridge and Digiconomist estimated that the two largest proof-of-work blockchains, Bitcoin and Ethereum, together used twice as much electricity in one year as the whole of Sweden, leading to the release of up to 120 million tonnes of CO2 each year.[164]
Some cryptocurrency developers are considering moving from the proof-of-work model to the proof-of-stake model.[165]
Academic research
In October 2014, the MIT Bitcoin Club, with funding from MIT alumni, provided undergraduate students at the Massachusetts Institute of Technology access to $100 of bitcoin. The adoption rates, as studied by Catalini and Tucker (2016), revealed that when people who typically adopt technologies early are given delayed access, they tend to reject the technology.[166] Many universities have founded departments focusing on crypto and blockchain, including MIT, in 2017. In the same year, Edinburgh became "one of the first big European universities to launch a blockchain course", according to the Financial Times.[167]
Adoption decision
Motivations for adopting blockchain technology (an aspect of innovation adoption) have been investigated by researchers. For example, Janssen et al. provided a framework for analysis,[168] and Koens & Poll pointed out that adoption could be heavily driven by non-technical factors.[169] Based on behavioral models, Li[170] has discussed the differences between adoption at the individual and organizational levels.
Collaboration
Scholars in business and management have started studying the role of blockchains to support collaboration.[171][172] It has been argued that blockchains can foster both cooperation (i.e., prevention of opportunistic behavior) and coordination (i.e., communication and information sharing). Thanks to reliability, transparency, traceability of records, and information immutability, blockchains facilitate collaboration in a way that differs both from the traditional use of contracts and from relational norms. Contrary to contracts, blockchains do not directly rely on the legal system to enforce agreements.[173] In addition, contrary to the use of relational norms, blockchains do not require trust or direct connections between collaborators.
Blockchain and internal audit
External video: Blockchain Basics & Cryptography, Gary Gensler, Massachusetts Institute of Technology, 0:30[174]
For internal audits to provide effective oversight of organizational efficiency, auditors will need to change the way they access information, which will increasingly be held in new formats.[175] Blockchain adoption requires a framework to identify the risk of exposure associated with transactions using blockchain. The Institute of Internal Auditors has identified the need for internal auditors to address this transformational technology. New methods are required to develop audit plans that identify threats and risks. The Internal Audit Foundation study, Blockchain and Internal Audit, assesses these factors.[176] The American Institute of Certified Public Accountants has outlined new roles for auditors as a result of blockchain.[177]
Journals
In September 2015, the first peer-reviewed academic journal dedicated to cryptocurrency and blockchain technology research, Ledger, was announced. The inaugural issue was published in December 2016.[178] The journal covers aspects of mathematics, computer science, engineering, law, economics and philosophy that relate to cryptocurrencies.[179][180] The journal encourages authors to digitally sign a file hash of submitted papers, which are then timestamped into the bitcoin blockchain. Authors are also asked to include a personal bitcoin address on the first page of their papers for non-repudiation purposes.[181]
See also
• Changelog – a record of all notable changes made to a project
• Checklist – an informational aid used to reduce failure
• Economics of digitization
• List of blockchains
• Privacy and blockchain
• Version control – a record of all changes (mostly to software projects) in the form of a graph
References
1. Morris, David Z. (15 May 2016). "Leaderless, Blockchain-Based Venture Capital Fund Raises $100 Million, And Counting". Fortune. Archived from the original on 21 May 2016. Retrieved 23 May 2016.
2. Popper, Nathan (21 May 2016). "A Venture Fund With Plenty of Virtual Capital, but No Capitalist". The New York Times. Archived from the original on 22 May 2016. Retrieved 23 May 2016.
3. "Blockchains: The great chain of being sure about things". The Economist. 31 October 2015. Archived from the original on 3 July 2016. Retrieved 18 June 2016. The technology behind bitcoin lets people who do not know or trust each other build a dependable ledger. This has implications far beyond the crypto currency.
4. Narayanan, Arvind; Bonneau, Joseph; Felten, Edward; Miller, Andrew; Goldfeder, Steven (2016). Bitcoin and cryptocurrency technologies: a comprehensive introduction. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-17169-2.
5. Iansiti, Marco; Lakhani, Karim R. (January 2017). "The Truth About Blockchain". Harvard Business Review. Cambridge, Massachusetts: Harvard University. Archived from the original on 18 January 2017. Retrieved 17 January 2017. The technology at the heart of bitcoin and other virtual currencies, blockchain is an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way.
6. Oberhaus, Daniel (27 August 2018). "The World's Oldest Blockchain Has Been Hiding in the New York Times Since 1995". Vice. Retrieved 9 October 2021.
7. Lunn, Bernard (10 February 2018). "Blockchain may finally disrupt payments from Micropayments to credit cards to SWIFT". dailyfintech.com. Archived from the original on 27 September 2018. Retrieved 18 November 2018.
8. Hampton, Nikolai (5 September 2016). "Understanding the blockchain hype: Why much of it is nothing more than snake oil and spin". Computerworld. Archived from the original on 6 September 2016. Retrieved 5 September 2016.
9. Bakos, Yannis; Halaburda, Hanna; Mueller-Bloch, Christoph (February 2021). "When Permissioned Blockchains Deliver More Decentralization Than Permissionless". Communications of the ACM. 64 (2): 20–22. doi:10.1145/3442371. S2CID 231704491.
10. Sherman, Alan T.; Javani, Farid; Zhang, Haibin; Golaszewski, Enis (January 2019). "On the Origins and Variations of Blockchain Technologies". IEEE Security Privacy. 17 (1): 72–77. arXiv:1810.06130. doi:10.1109/MSEC.2019.2893730. ISSN 1558-4046. S2CID 53114747.
11. Haber, Stuart; Stornetta, W. Scott (January 1991). "How to time-stamp a digital document". Journal of Cryptology. 3 (2): 99–111. CiteSeerX 10.1.1.46.8740. doi:10.1007/bf00196791. S2CID 14363020.
12. Bayer, Dave; Haber, Stuart; Stornetta, W. Scott (March 1992). Improving the Efficiency and Reliability of Digital Time-Stamping. pp. 329–334. CiteSeerX 10.1.1.71.4891. doi:10.1007/978-1-4613-9323-8_24. ISBN 978-1-4613-9325-2.
13. Oberhaus, Daniel (27 August 2018). "The World's Oldest Blockchain Has Been Hiding in the New York Times Since 1995". www.vice.com. Retrieved 9 October 2021.
14. Nian, Lam Pak; Chuen, David LEE Kuo (2015). "A Light Touch of Regulation for Virtual Currencies". In Chuen, David LEE Kuo (ed.). Handbook of Digital Currency: Bitcoin, Innovation, Financial Instruments, and Big Data. Academic Press. p. 319. ISBN 978-0-12-802351-8.
15. "Blockchain Size". Archived from the original on 19 May 2020. Retrieved 25 February 2020.
16. Johnsen, Maria (12 May 2020). Blockchain in Digital Marketing: A New Paradigm of Trust. Maria Johnsen. p. 6. ISBN 979-8-6448-7308-1.
17. "The future of blockchain in 8 charts". Raconteur. 27 June 2016. Archived from the original on 2 December 2016. Retrieved 3 December 2016.
18. "Hype Killer - Only 1% of Companies Are Using Blockchain, Gartner Reports | Artificial Lawyer". Artificial Lawyer. 4 May 2018. Archived from the original on 22 May 2018. Retrieved 22 May 2018.
19. Kasey Panetta. (31 October 2018). "Digital Business: CIO Agenda 2019: Exploit Transformational Technologies." Gartner website Retrieved 27 March 2021.
20. Armstrong, Stephen (7 November 2016). "Move over Bitcoin, the blockchain is only just getting started". Wired. Archived from the original on 8 November 2016. Retrieved 9 November 2016.
21. Catalini, Christian; Gans, Joshua S. (23 November 2016). "Some Simple Economics of the Blockchain" (PDF). SSRN. doi:10.2139/ssrn.2874598. hdl:1721.1/130500. S2CID 46904163. SSRN 2874598. Archived (PDF) from the original on 6 March 2020. Retrieved 16 September 2019.
22. Tapscott, Don; Tapscott, Alex (8 May 2016). "Here's Why Blockchains Will Change the World". Fortune. Archived from the original on 13 November 2016. Retrieved 16 November 2016.
23. Bheemaiah, Kariappa (January 2015). "Block Chain 2.0: The Renaissance of Money". Wired. Archived from the original on 14 November 2016. Retrieved 13 November 2016.
24. Chen, Huashan; Pendleton, Marcus; Njilla, Laurent; Xu, Shouhuai (12 June 2020). "A Survey on Ethereum Systems Security: Vulnerabilities, Attacks, and Defenses". ACM Computing Surveys. 53 (3): 3–4. arXiv:1908.04507. doi:10.1145/3391195. ISSN 0360-0300. S2CID 199551841.
25. Shishir, Bhatia (2 February 2006). Structured Information Flow (SIF) Framework for Automating End-to-End Information Flow for Large Organizations (Thesis). Virginia Tech.
26. "Genesis Block Definition". Investopedia. Retrieved 10 August 2022.
27. Bhaskar, Nirupama Devi; Chuen, David LEE Kuo (2015). "Bitcoin Mining Technology". Handbook of Digital Currency. pp. 45–65. doi:10.1016/B978-0-12-802117-0.00003-5. ISBN 978-0-12-802117-0.
28. Knirsch, Unterweger & Engel 2019, p. 2.
29. Antonopoulos, Andreas (20 February 2014). "Bitcoin security model: trust by computation". Radar. O'Reilly. Archived from the original on 31 October 2016. Retrieved 19 November 2016.
30. Antonopoulos, Andreas M. (2014). Mastering Bitcoin. Unlocking Digital Cryptocurrencies. Sebastopol, CA: O'Reilly Media. ISBN 978-1449374037. Archived from the original on 1 December 2016. Retrieved 3 November 2015.
31. Nakamoto, Satoshi (October 2008). "Bitcoin: A Peer-to-Peer Electronic Cash System" (PDF). bitcoin.org. Archived (PDF) from the original on 20 March 2014. Retrieved 28 April 2014.
32. "Permissioned Blockchains". Explainer. Monax. Archived from the original on 20 November 2016. Retrieved 20 November 2016.
33. Kumar, Randhir; Tripathi, Rakesh (November 2019). "Implementation of Distributed File Storage and Access Framework using IPFS and Blockchain". 2019 Fifth International Conference on Image Information Processing (ICIIP). IEEE. pp. 246–251. doi:10.1109/iciip47207.2019.8985677. ISBN 978-1-7281-0899-5. S2CID 211119043.
34. Lee, Timothy (12 March 2013). "Major glitch in Bitcoin network sparks sell-off; price temporarily falls 23%". Arstechnica. Archived from the original on 20 April 2013. Retrieved 25 February 2018.
35. Smith, Oli (21 January 2018). "Bitcoin price RIVAL: Cryptocurrency 'faster than bitcoin' will CHALLENGE market leaders". Express. Retrieved 6 April 2021.
36. "Bitcoin split in two, here's what that means". CNN. 1 August 2017. Retrieved 7 April 2021.
37. Hughes, Laurie; Dwivedi, Yogesh K.; Misra, Santosh K.; Rana, Nripendra P.; Raghavan, Vishnupriya; Akella, Viswanadh (December 2019). "Blockchain research, practice and policy: Applications, benefits, limitations, emerging research themes and research agenda". International Journal of Information Management. 49: 114–129. doi:10.1016/j.ijinfomgt.2019.02.005. hdl:10454/17473. S2CID 116666889.
38. Roberts, Jeff John (29 May 2018). "Bitcoin Spinoff Hacked in Rare '51% Attack'". Fortune. Archived from the original on 22 December 2021. Retrieved 27 December 2022.
39. Brito, Jerry; Castillo, Andrea (2013). Bitcoin: A Primer for Policymakers (PDF) (Report). Fairfax, VA: Mercatus Center, George Mason University. Archived (PDF) from the original on 21 September 2013. Retrieved 22 October 2013.
40. Raval, Siraj (2016). Decentralized Applications: Harnessing Bitcoin's Blockchain Technology. O'Reilly Media, Inc. pp. 1–2. ISBN 978-1-4919-2452-5.
41. Kopfstein, Janus (12 December 2013). "The Mission to Decentralize the Internet". The New Yorker. Archived from the original on 31 December 2014. Retrieved 30 December 2014. The network's 'nodes' — users running the bitcoin software on their computers — collectively check the integrity of other nodes to ensure that no one spends the same coins twice. All transactions are published on a shared public ledger, called the 'block chain.'
42. Gervais, Arthur; Karame, Ghassan O.; Capkun, Vedran; Capkun, Srdjan. "Is Bitcoin a Decentralized Currency?". InfoQ. InfoQ & IEEE computer society. Archived from the original on 10 October 2016. Retrieved 11 October 2016.
43. Deirmentzoglou, Evangelos; Papakyriakopoulos, Georgios; Patsakis, Constantinos (2019). "A Survey on Long-Range Attacks for Proof of Stake Protocols". IEEE Access. 7: 28712–28725. doi:10.1109/ACCESS.2019.2901858. eISSN 2169-3536. S2CID 84185792.
44. Voorhees, Erik (30 October 2015). "It's All About the Blockchain". Money and State. Archived from the original on 1 November 2015. Retrieved 2 November 2015.
45. Reutzel, Bailey (13 July 2015). "A Very Public Conflict Over Private Blockchains". PaymentsSource. New York, NY: SourceMedia, Inc. Archived from the original on 21 April 2016. Retrieved 18 June 2016.
46. Casey MJ (15 April 2015). "Moneybeat/BitBeat: Blockchains Without Coins Stir Tensions in Bitcoin Community". The Wall Street Journal. Archived from the original on 10 June 2016. Retrieved 18 June 2016.
47. "The 'Blockchain Technology' Bandwagon Has A Lesson Left To Learn". dinbits.com. 3 November 2015. Archived from the original on 29 June 2016. Retrieved 18 June 2016.
48. DeRose, Chris (26 June 2015). "Why the Bitcoin Blockchain Beats Out Competitors". American Banker. Archived from the original on 30 March 2016. Retrieved 18 June 2016.
49. Greenspan, Gideon (19 July 2015). "Ending the bitcoin vs blockchain debate". multichain.com. Archived from the original on 8 June 2016. Retrieved 18 June 2016.
50. Tapscott, Don; Tapscott, Alex (May 2016). The Blockchain Revolution: How the Technology Behind Bitcoin is Changing Money, Business, and the World. Portfolio/Penguin. ISBN 978-0-670-06997-2.
51. Barry, Levine (11 June 2018). "A new report bursts the blockchain bubble". MarTech. Archived from the original on 13 July 2018. Retrieved 13 July 2018.
52. Ovenden, James. "Blockchain Top Trends In 2017". The Innovation Enterprise. Archived from the original on 30 November 2016. Retrieved 4 December 2016.
53. Bob Marvin (30 August 2017). "Blockchain: The Invisible Technology That's Changing the World". PC MAG Australia. ZiffDavis, LLC. Archived from the original on 25 September 2017. Retrieved 25 September 2017.
54. O'Keeffe, M.; Terzi, A. (7 July 2015). "The political economy of financial crisis policy". Bruegel. Archived from the original on 19 May 2018. Retrieved 8 May 2018.
55. Dr Garrick Hileman & Michel Rauchs (2017). "GLOBAL CRYPTOCURRENCY BENCHMARKING STUDY" (PDF). Cambridge Centre for Alternative Finance. University of Cambridge Judge Business School. Archived (PDF) from the original on 15 May 2019. Retrieved 15 May 2019 – via crowdfundinsider.
56. Raymaekers, Wim (March 2015). "Cryptocurrency Bitcoin: Disruption, challenges and opportunities". Journal of Payments Strategy & Systems. 9 (1): 30–46. Archived from the original on 15 May 2019. Retrieved 15 May 2019.
57. "Why Crypto Companies Still Can't Open Checking Accounts". 3 March 2019. Archived from the original on 4 June 2019. Retrieved 4 June 2019.
58. Christian Brenig, Rafael Accorsi & Günter Müller (Spring 2015). "Economic Analysis of Cryptocurrency Backed Money Laundering". Association for Information Systems AIS Electronic Library (AISeL). Archived from the original on 28 August 2019. Retrieved 15 May 2019.
59. Greenberg, Andy (25 January 2017). "Monero, the Drug Dealer's Cryptocurrency of Choice, Is on Fire". Wired. ISSN 1059-1028. Archived from the original on 10 December 2018. Retrieved 15 May 2019.
60. Orcutt, Mike. "It's getting harder to hide money in Bitcoin". MIT Technology Review. Retrieved 15 May 2019.
61. "Explainer: 'Privacy coin' Monero offers near total anonymity". Reuters. 15 May 2019. Archived from the original on 15 May 2019. Retrieved 15 May 2019.
62. "An Untraceable Currency? Bitcoin Privacy Concerns - FinTech Weekly". FinTech Magazine Article. 7 April 2018. Archived from the original on 15 May 2019. Retrieved 15 May 2019.
63. "Blockchain". standards.org.au. Standards Australia. Retrieved 21 June 2021.
64. "ISO/TC 307 Blockchain and distributed ledger technologies". iso.org. ISO. 15 September 2020. Retrieved 21 June 2021.
65. Deshmukh, Sumedha; Boulais, Océane; Koens, Tommy. "Global Standards Mapping Initiative: An overview of blockchain technical standards" (PDF). weforum.org. World Economic Forum. Retrieved 23 June 2021.
66. "Blockchain Overview". NIST. 25 September 2019. Retrieved 21 June 2021.
67. "CEN and CENELEC publish a White Paper on standards in Blockchain & Distributed Ledger Technologies". cencenelec.eu. CENELEC. Retrieved 21 June 2021.
68. "Standards". ieee.org. IEEE Blockchain. Retrieved 21 June 2021.
69. Hardjono, Thomas. "An Interoperability Architecture for Blockchain/DLT Gateways". ietf.org. IETF. Retrieved 21 June 2021.
70. "Details: Oracle Blockchain Table". Archived from the original on 20 January 2021. Retrieved 1 January 2022.
71. "Oracle Blockchain Table". Archived from the original on 16 May 2021. Retrieved 1 January 2022.
72. "How Companies Can Leverage Private Blockchains to Improve Efficiency and Streamline Business Processes". Perfectial.
73. Walker, Martin (24 October 2018). Distributed Ledger Technology: Hybrid Approach, Front-to-Back Designing and Changing Trade Processing Infrastructure. ISBN 978-1-78272-389-9.
74. Siraj Raval (18 July 2016). Decentralized Applications: Harnessing Bitcoin's Blockchain Technology. "O'Reilly Media, Inc.". pp. 22–. ISBN 978-1-4919-2452-5.
75. Niaz Chowdhury (16 August 2019). Inside Blockchain, Bitcoin, and Cryptocurrencies. CRC Press. pp. 22–. ISBN 978-1-00-050770-6.
76. U.S. Patent 10,438,290
77. Dib, Omar; Brousmiche, Kei-Leo; Durand, Antoine; Thea, Eric; Hamida, Elyes Ben. "Consortium Blockchains: Overview, Applications and Challenges". International Journal on Advances in Telecommunications. 11 (1/2): 51–64.
78. Katie Martin (27 September 2016). "CLS dips into blockchain to net new currencies". Financial Times. Archived from the original on 9 November 2016. Retrieved 7 November 2016.
79. Castillo, Michael (16 April 2019). "blockchain 50: Billion Dollar Babies". Financial Website. SourceMedia. Retrieved 1 February 2021.
80. Davies, Steve (2018). "PwC's Global Blockchain Survey". Financial Website. SourceMedia. Retrieved 1 February 2021.
81. "BBC Radio 4 - Things That Made the Modern Economy, Series 2, Blockchain". BBC. Retrieved 14 October 2022.
82. Liu, Shanhong (13 March 2020). "Blockchain - Statistics & Facts". Statistics Website. SourceMedia. Retrieved 17 February 2021.
83. Du, Wenbo; Ma, Xiaozhi; Yuan, Hongping; Zhu, Yue (1 August 2022). "Blockchain technology-based sustainable management research: the status quo and a general framework for future application". Environmental Science and Pollution Research. 29 (39): 58648–58663. doi:10.1007/s11356-022-21761-2. ISSN 1614-7499. PMC 9261142. PMID 35794327.
84. KPIX-TV. (5 November 2020). "Silk Road: Feds Seize $1 Billion In Bitcoins Linked To Infamous Silk Road Dark Web Case; 'Where Did The Money Go'". KPIX website Retrieved 28 March 2021.
85. Aditi Kumar and Eric Rosenbach. (20 May 2020). "Could China's Digital Currency Unseat the Dollar?: American Economic and Geopolitical Power Is at Stake". Foreign Affairs website Retrieved 31 March 2021.
86. Staff. (16 February 2021). "The Economist Explains: What is the fuss over central-bank digital currencies?" The Economist website Retrieved 1 April 2021.
87. Franco, Pedro (2014). Understanding Bitcoin: Cryptography, Engineering and Economics. John Wiley & Sons. p. 9. ISBN 978-1-119-01916-9. Archived from the original on 14 February 2017. Retrieved 4 January 2017 – via Google Books.
88. Casey M (16 July 2018). The impact of blockchain technology on finance : a catalyst for change. London, UK. ISBN 978-1-912179-15-2. OCLC 1059331326.{{cite book}}: CS1 maint: location missing publisher (link)
89. Governatori, Guido; Idelberger, Florian; Milosevic, Zoran; Riveret, Regis; Sartor, Giovanni; Xu, Xiwei (2018). "On legal contracts, imperative and declarative smart contracts, and blockchain systems". Artificial Intelligence and Law. 26 (4): 33. doi:10.1007/s10506-018-9223-3. S2CID 3663005.
90. Virtual Currencies and Beyond: Initial Considerations (PDF). IMF Discussion Note. International Monetary Fund. 2016. p. 23. ISBN 978-1-5135-5297-2. Archived (PDF) from the original on 14 April 2018. Retrieved 19 April 2018.
91. Epstein, Jim (6 May 2016). "Is Blockchain Technology a Trojan Horse Behind Wall Street's Walled Garden?". Reason. Archived from the original on 8 July 2016. Retrieved 29 June 2016. mainstream misgivings about working with a system that's open for anyone to use. Many banks are partnering with companies building so-called private blockchains that mimic some aspects of Bitcoin's architecture except they're designed to be closed off and accessible only to chosen parties. ... [but some believe] that open and permission-less blockchains will ultimately prevail even in the banking sector simply because they're more efficient.
92. Redrup, Yolanda (29 June 2016). "ANZ backs private blockchain, but won't go public". Australia Financial Review. Archived from the original on 3 July 2016. Retrieved 7 July 2016. Blockchain networks can be either public or private. Public blockchains have many users and there are no controls over who can read, upload or delete the data and there are an unknown number of pseudonymous participants. In comparison, private blockchains also have multiple data sets, but there are controls in place over who can edit data and there are a known number of participants.
93. Shah, Rakesh (1 March 2018). "How Can The Banking Sector Leverage Blockchain Technology?". PostBox Communications. PostBox Communications Blog. Archived from the original on 17 March 2018. Banks preferably have a notable interest in utilizing Blockchain Technology because it is a great source to avoid fraudulent transactions. Blockchain is considered hassle free, because of the extra level of security it offers.
94. Kelly, Jemima (28 September 2016). "Banks adopting blockchain 'dramatically faster' than expected: IBM". Reuters. Archived from the original on 28 September 2016. Retrieved 28 September 2016.
95. Arnold, Martin (23 September 2013). "IBM in blockchain project with China UnionPay". Financial Times. Archived from the original on 9 November 2016. Retrieved 7 November 2016.
96. Ravichandran, Arvind; Fargo, Christopher; Kappos, David; Portilla, David; Buretta, John; Ngo, Minh Van; Rosenthal-Larrea, Sasha (28 January 2022). "Blockchain in the Banking Sector: A Review of the Landscape and Opportunities".
97. "UBS leads team of banks working on blockchain settlement system". Reuters. 24 August 2016. Archived from the original on 19 May 2017. Retrieved 13 May 2017.
98. "Cryptocurrency Blockchain". capgemini.com. Archived from the original on 5 December 2016. Retrieved 13 May 2017.
99. Kelly, Jemima (31 October 2017). "Top banks and R3 build blockchain-based payments system". Reuters. Archived from the original on 10 July 2018. Retrieved 9 July 2018.
100. "Are Token Assests the Securities of Tomorrow?" (PDF). Deloitte. February 2019. Archived (PDF) from the original on 23 June 2019. Retrieved 26 September 2019.
101. Hammerberg, Jeff (7 November 2021). "Potential impact of blockchain on real estate". Washington Blade.
102. Clark, Mitchell (15 October 2021). "Valve bans blockchain games and NFTs on Steam, Epic will try to make it work". The Verge. Retrieved 8 November 2021.
103. Mozuch, Mo (29 April 2021). "Blockchain Games Twist The Fundamentals Of Online Gaming". Inverse. Retrieved 4 November 2021.
104. "Internet firms try their luck at blockchain games". Asia Times. 22 February 2018. Retrieved 28 February 2018.
105. Evelyn Cheng (6 December 2017). "Meet CryptoKitties, the $100,000 digital beanie babies epitomizing the cryptocurrency mania". CNBC. Archived from the original on 20 November 2018. Retrieved 28 February 2018.
106. Laignee Barron (13 February 2018). "CryptoKitties is Going Mobile. Can Ethereum Handle the Traffic?". Fortune. Archived from the original on 28 October 2018. Retrieved 30 September 2018.
107. "CryptoKitties craze slows down transactions on Ethereum". 12 May 2017. Archived from the original on 12 January 2018.
108. Wells, Charlie; Egkolfopoulou, Misrylena (30 October 2021). "Into the Metaverse: Where Crypto, Gaming and Capitalism Collide". Bloomberg News. Retrieved 11 November 2021.
109. Orland, Kyle (4 November 2021). "Big-name publishers see NFTs as a big part of gaming's future". Ars Technica. Retrieved 4 November 2021.
110. Knoop, Joseph (15 October 2021). "Steam bans all games with NFTs or cryptocurrency". PC Gamer. Retrieved 8 November 2021.
111. Clark, Mitchell (15 October 2021). "Epic says it's 'open' to blockchain games after Steam bans them". The Verge. Retrieved 11 November 2021.
112. Nash, Kim S. (14 July 2016). "IBM Pushes Blockchain into the Supply Chain". The Wall Street Journal. Archived from the original on 18 July 2016. Retrieved 24 July 2016.
113. Gstettner, Stefan (30 July 2019). "How Blockchain Will Redefine Supply Chain Management". Knowledge@Wharton. The Wharton School of the University of Pennsylvania. Retrieved 28 August 2020.
114. Corkery, Michael; Popper, Nathaniel (24 September 2018). "From Farm to Blockchain: Walmart Tracks Its Lettuce". The New York Times. Archived from the original on 5 December 2018. Retrieved 5 December 2018.
115. "Blockchain basics: Utilizing blockchain to improve sustainable supply chains in fashion". Strategic Direction. 37 (5): 25–27. 8 June 2021. doi:10.1108/SD-03-2021-0028. ISSN 0258-0543. S2CID 241322151.
116. Mercedes-Benz Group AG, Once round the block, please!, accessed 24 August 2023
117. Green, W., Mercedes tests blockchain to improve supply chain transparency, Supply Management, published 25 February 2019, accessed 24 August 2023
118. Mercedes-Benz AG, Mercedes-Benz Cars drives "Ambition2039" in the supply chain: blockchain pilot project provides transparency on CO₂ emissions, published 30 January 2020, accessed 24 August 2023
119. Sanders, James; August 28 (28 August 2019). "Blockchain-based Unstoppable Domains is a rehash of a failed idea". TechRepublic. Archived from the original on 19 November 2019. Retrieved 16 April 2020.
120. Orcutt, Mike (4 June 2019). "The ambitious plan to reinvent how websites get their names". MIT Technology Review. Retrieved 17 May 2021.
121. Cimpanu, Catalin (17 July 2019). "OpenNIC drops support for .bit domain names after rampant malware abuse". ZDNet. Retrieved 17 May 2021.
122. ".Kred launches as dual DNS and ENS domain". Domain Name Wire | Domain Name News. 6 March 2020. Archived from the original on 8 March 2020. Retrieved 16 April 2020.
123. K. Kotobi, and S. G. Bilen, "Secure Blockchains for Dynamic Spectrum Access : A Decentralized Database in Moving Cognitive Radio Networks Enhances Security and User Access", IEEE Vehicular Technology Magazine, 2018.
124. "Blockchain Could Be Music's Next Disruptor". 22 September 2016. Archived from the original on 23 September 2016.
125. Susan Moore. (16 October 2019). "Digital Business: 4 Ways Blockchain Will Transform Higher Education". Gartner website Retrieved 27 March 2021.
126. "ASCAP, PRS and SACEM Join Forces for Blockchain Copyright System". Music Business Worldwide. 9 April 2017. Archived from the original on 10 April 2017.
127. Burchardi, K.; Harle, N. (20 January 2018). "The blockchain will disrupt the music business and beyond". Wired UK. Archived from the original on 8 May 2018. Retrieved 8 May 2018.
128. Bartlett, Jamie (6 September 2015). "Imogen Heap: saviour of the music industry?". The Guardian. Archived from the original on 22 April 2016. Retrieved 18 June 2016.
129. Wang, Kevin; Safavi, Ali (29 October 2016). "Blockchain is empowering the future of insurance". Tech Crunch. AOL Inc. Archived from the original on 7 November 2016. Retrieved 7 November 2016.
130. Gatteschi, Valentina; Lamberti, Fabrizio; Demartini, Claudio; Pranteda, Chiara; Santamaría, Víctor (20 February 2018). "Blockchain and Smart Contracts for Insurance: Is the Technology Mature Enough?". Future Internet. 10 (2): 20. doi:10.3390/fi10020020.
131. "Blockchain reaction: Tech companies plan for critical mass" (PDF). Ernst & Young. p. 5. Archived (PDF) from the original on 14 November 2016. Retrieved 13 November 2016.
132. Carrie Smith. Blockchain Reaction: How library professionals are approaching blockchain technology and its potential impact. Archived 12 September 2019 at the Wayback Machine American Libraries March 2019.
133. "IBM Blockchain based on Hyperledger Fabric from the Linux Foundation". IBM.com. 9 January 2018. Archived from the original on 7 December 2017. Retrieved 18 January 2018.
134. Hyperledger (22 January 2019). "Announcing Hyperledger Grid, a new project to help build and deliver supply chain solutions!". Archived from the original on 4 February 2019. Retrieved 8 March 2019.
135. Mearian, Lucas (23 January 2019). "Grid, a new project from the Linux Foundation, will offer developers tools to create supply chain-specific applications running atop distributed ledger technology". Computerworld. Archived from the original on 3 February 2019. Retrieved 8 March 2019.
136. "Why J.P. Morgan Chase Is Building a Blockchain on Ethereum". Fortune. Archived from the original on 2 February 2017. Retrieved 24 January 2017.
137. Andoni, Merlinda; Robu, Valentin; Flynn, David; Abram, Simone; Geach, Dale; Jenkins, David; McCallum, Peter; Peacock, Andrew (2019). "Blockchain technology in the energy sector: A systematic review of challenges and opportunities". Renewable and Sustainable Energy Reviews. 100: 143–174. doi:10.1016/j.rser.2018.10.014. S2CID 116422191.
138. "This Blockchain-Based Energy Platform Is Building A Peer-To-Peer Grid". 16 October 2017. Archived from the original on 7 June 2020. Retrieved 7 June 2020.
139. "Blockchain-based microgrid gives power to consumers in New York". Archived from the original on 22 March 2016. Retrieved 7 June 2020.
140. Ma, Jinhua; Lin, Shih-Ya; Chen, Xin; Sun, Hung-Min; Chen, Yeh-Cheng; Wang, Huaxiong (2020). "A Blockchain-Based Application System for Product Anti-Counterfeiting". IEEE Access. 8: 77642–77652. doi:10.1109/ACCESS.2020.2972026. ISSN 2169-3536. S2CID 214205788.
141. Alzahrani, Naif; Bulusu, Nirupama (15 June 2018). "Block-Supply Chain". Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems. CryBlock'18. Munich, Germany: Association for Computing Machinery. pp. 30–35. doi:10.1145/3211933.3211939. ISBN 978-1-4503-5838-5. S2CID 169188795.
142. Balagurusamy, V. S. K.; Cabral, C.; Coomaraswamy, S.; Delamarche, E.; Dillenberger, D. N.; Dittmann, G.; Friedman, D.; Gökçe, O.; Hinds, N.; Jelitto, J.; Kind, A. (1 March 2019). "Crypto anchors". IBM Journal of Research and Development. 63 (2/3): 4:1–4:12. doi:10.1147/JRD.2019.2900651. ISSN 0018-8646. S2CID 201109790.
143. Jung, Seung Wook (23 June 2021). Lee, Robert (ed.). A Novel Authentication System for Artwork Based on Blockchain (eBook). 20th IEEE/ACIS International Summer Semi-Virtual Conference on Computer and Information Science (ICIS 2021). Studies in Computational Intelligence. Vol. 985. Springer. p. 159 – via Google Books.
144. Brett, Charles (18 April 2018). "EUIPO Blockathon Challenge 2018 -". Enterprise Times. Retrieved 1 September 2020.
145. "EUIPO Anti-Counterfeiting Blockathon Forum".
146. "PT Industrieel Management". PT Industrieel Management. Retrieved 1 September 2020.
147. "China selects pilot zones, application areas for blockchain project". Reuters. 31 January 2022.
148. Wegner, Peter (March 1996). "Interoperability". ACM Computing Surveys. 28: 285–287. doi:10.1145/234313.234424. Retrieved 24 October 2020.
149. Belchior, Rafael; Vasconcelos, André; Guerreiro, Sérgio; Correia, Miguel (May 2020). "A Survey on Blockchain Interoperability: Past, Present, and Future Trends". arXiv:2005.14282 [cs.DC].
150. Hardjono, T.; Hargreaves, M.; Smith, N. (2 October 2020). An Interoperability Architecture for Blockchain Gateways (Technical report). IETF. draft-hardjono-blockchain-interop-arch-00.
151. Hyun Song Shin (June 2018). "Chapter V. Cryptocurrencies: looking beyond the hype" (PDF). BIS 2018 Annual Economic Report. Bank for International Settlements. Archived (PDF) from the original on 18 June 2018. Retrieved 19 June 2018. Put in the simplest terms, the quest for decentralised trust has quickly become an environmental disaster.
152. Janda, Michael (18 June 2018). "Cryptocurrencies like bitcoin cannot replace money, says Bank for International Settlements". ABC (Australia). Archived from the original on 18 June 2018. Retrieved 18 June 2018.
153. Hiltzik, Michael (18 June 2018). "Is this scathing report the death knell for bitcoin?". Los Angeles Times. Archived from the original on 18 June 2018. Retrieved 19 June 2018.
154. Ossinger, Joanna (2 February 2022). "Polkadot Has Least Carbon Footprint, Crypto Researcher Says". Bloomberg. Retrieved 1 June 2022.
155. Jones, Jonathan Spencer (13 September 2021). "Blockchain proof-of-stake – not all are equal". www.smart-energy.com. Archived from the original on 25 September 2021. Retrieved 26 February 2022.
156. Criddle, Christina (February 20, 2021) "Bitcoin consumes 'more electricity than Argentina'." BBC News. (Retrieved April 26, 2021.)
157. Ponciano, Jonathan (March 9, 2021) "Bill Gates Sounds Alarm On Bitcoin's Energy Consumption–Here's Why Crypto Is Bad For Climate Change." Forbes.com. (Retrieved April 26, 2021.)
158. Rowlatt, Justin (February 27, 2021) "How Bitcoin's vast energy use could burst its bubble." BBC News. (Retrieved April 26, 2021.)
159. Sorkin, Andrew et al. (March 9, 2021) "Why Bill Gates Is Worried About Bitcoin." New York Times. (Retrieved April 25, 2021.)
160. Illing, Sean (11 April 2018). "Why Bitcoin is bullshit, explained by an expert". Vox. Archived from the original on 17 July 2018. Retrieved 17 July 2018.
161. Weaver, Nicholas. "Blockchains and Cryptocurrencies: Burn It With Fire". YouTube video. Berkeley School of Information. Archived from the original on 19 February 2019. Retrieved 17 July 2018.
162. Köhler, Susanne; Pizzol, Massimo (20 November 2019). "Life Cycle Assessment of Bitcoin Mining". Environmental Science & Technology. 53 (23): 13598–13606. Bibcode:2019EnST...5313598K. doi:10.1021/acs.est.9b05687. PMID 31746188.
163. Stoll, Christian; Klaaßen, Lena; Gallersdörfer, Ulrich (2019). "The Carbon Footprint of Bitcoin". Joule. 3 (7): 1647–1661. doi:10.1016/j.joule.2019.05.012.
164. "US lawmakers begin probe into Bitcoin miners' high energy use". Business Standard India. 29 January 2022 – via Business Standard.
165. Cuen, Leigh (March 21, 2021) "The debate about cryptocurrency and data consumption." TechCrunch. (Retrieved April 26, 2021.)
166. Catalini, Christian; Tucker, Catherine E. (11 August 2016). "Seeding the S-Curve? The Role of Early Adopters in Diffusion" (PDF). SSRN. doi:10.2139/ssrn.2822729. S2CID 157317501. SSRN 2822729.
167. Arnold, M. (2017) "Universities add blockchain to course list", Financial Times: Masters in Finance, Retrieved 26 January 2022.
168. Janssen, Marijn; Weerakkody, Vishanth; Ismagilova, Elvira; Sivarajah, Uthayasankar; Irani, Zahir (2020). "A framework for analysing blockchain technology adoption: Integrating institutional, market and technical factors". International Journal of Information Management. Elsevier. 50: 302–309. doi:10.1016/j.ijinfomgt.2019.08.012.
169. Koens, Tommy; Poll, Erik (2019), "The Drivers Behind Blockchain Adoption: The Rationality of Irrational Choices", Euro-Par 2018: Parallel Processing Workshops, Lecture Notes in Computer Science, vol. 11339, pp. 535–546, doi:10.1007/978-3-030-10549-5_42, hdl:2066/200787, ISBN 978-3-030-10548-8, S2CID 57662305
170. Li, Jerry (7 April 2020). "Blockchain Technology Adoption: Examining the Fundamental Drivers". Proceedings of the 2020 2nd International Conference on Management Science and Industrial Engineering. Association for Computing Machinery. pp. 253–260. doi:10.1145/3396743.3396750. ISBN 9781450377065. S2CID 218982506. Archived from the original on 5 June 2020 – via ACM Digital Library.
171. Hsieh, Ying-Ying; Vergne, Jean-Philippe; Anderson, Philip; Lakhani, Karim; Reitzig, Markus (12 February 2019). "Correction to: Bitcoin and the rise of decentralized autonomous organizations". Journal of Organization Design. 8 (1): 3. doi:10.1186/s41469-019-0041-1. ISSN 2245-408X.
172. Felin, Teppo; Lakhani, Karim (2018). "What Problems Will You Solve With Blockchain?". MIT Sloan Management Review.
173. Beck, Roman; Mueller-Bloch, Christoph; King, John Leslie (2018). "Governance in the Blockchain Economy: A Framework and Research Agenda". Journal of the Association for Information Systems: 1020–1034. doi:10.17705/1jais.00518. S2CID 69365923.
174. Popper, Nathaniel (27 June 2018). "What is the Blockchain? Explaining the Tech Behind Cryptocurrencies (Published 2018)". The New York Times.
175. Hugh Rooney, Brian Aiken, & Megan Rooney. (2017). Q&A. Is Internal Audit Ready for Blockchain? Technology Innovation Management Review, (10), 41.
176. Richard C. Kloch, Jr Simon J. Little, Blockchain and Internal Audit Internal Audit Foundation, 2019 ISBN 978-1-63454-065-0
177. Alexander, A. (2019). The audit, transformed: New advancements in technology are reshaping this core service. Accounting Today, 33(1)
178. Extance, Andy (30 September 2015). "The future of cryptocurrencies: Bitcoin and beyond". Nature. 526 (7571): 21–23. Bibcode:2015Natur.526...21E. doi:10.1038/526021a. ISSN 0028-0836. OCLC 421716612. PMID 26432223.
179. Ledger (eJournal / eMagazine, 2015). OCLC. OCLC 910895894.
180. Hertig, Alyssa (15 September 2015). "Introducing Ledger, the First Bitcoin-Only Academic Journal". Motherboard. Archived from the original on 10 January 2017. Retrieved 10 January 2017.
181. Rizun, Peter R.; Wilmer, Christopher E.; Burley, Richard Ford; Miller, Andrew (2015). "How to Write and Format an Article for Ledger" (PDF). Ledger. 1 (1): 1–12. doi:10.5195/LEDGER.2015.1 (inactive 1 August 2023). ISSN 2379-5980. OCLC 910895894. Archived (PDF) from the original on 22 September 2015. Retrieved 11 January 2017.{{cite journal}}: CS1 maint: DOI inactive as of August 2023 (link)
|
Wikipedia
|
Small dodecahemicosahedron
In geometry, the small dodecahemicosahedron (or great dodecahemiicosahedron) is a nonconvex uniform polyhedron, indexed as U62. It has 22 faces (12 pentagrams and 10 hexagons), 60 edges, and 30 vertices.[1] Its vertex figure is a crossed quadrilateral.
Small dodecahemicosahedron
Type: Uniform star polyhedron
Elements: F = 22, E = 60, V = 30 (χ = −8)
Faces by sides: 12{5/2} + 10{6}
Coxeter diagram
Wythoff symbol: 5/3 5/2 | 3 (double covering)
Symmetry group: Ih, [5,3], *532
Index references: U62, C78, W100
Dual polyhedron: Small dodecahemicosacron
Vertex figure: 6.5/2.6.5/3
Bowers acronym: Sidhei
It is a hemipolyhedron with ten hexagonal faces passing through the model center.
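As a quick consistency check of the counts above, the Euler characteristic is $\chi =V-E+F=30-60+22=-8$, in agreement with the value listed for this polyhedron.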
Related polyhedra
Its convex hull is the icosidodecahedron. It also shares its edge arrangement with the dodecadodecahedron (having the pentagrammic faces in common), and with the great dodecahemicosahedron (having the hexagonal faces in common).
Dodecadodecahedron
Small dodecahemicosahedron
Great dodecahemicosahedron
Icosidodecahedron (convex hull)
Gallery
Traditional filling
Modulo-2 filling
See also
• List of uniform polyhedra
References
1. Maeder, Roman. "62: small dodecahemicosahedron". MathConsult.
External links
• Weisstein, Eric W. "Small dodecahemicosahedron". MathWorld.
• Uniform polyhedra and duals
|
Wikipedia
|
Sidi's generalized secant method
Sidi's generalized secant method is a root-finding algorithm, that is, a numerical method for solving equations of the form $f(x)=0$ . The method was published by Avram Sidi.[1]
The method is a generalization of the secant method. Like the secant method, it is an iterative method which requires one evaluation of $f$ in each iteration and no derivatives of $f$. The method can converge much faster though, with an order which approaches 2 provided that $f$ satisfies the regularity conditions described below.
Algorithm
We call $\alpha $ the root of $f$, that is, $f(\alpha )=0$. Sidi's method is an iterative method which generates a sequence $\{x_{i}\}$ of approximations of $\alpha $. Starting with k + 1 initial approximations $x_{1},\dots ,x_{k+1}$, the approximation $x_{k+2}$ is calculated in the first iteration, the approximation $x_{k+3}$ is calculated in the second iteration, etc. Each iteration takes as input the last k + 1 approximations and the value of $f$ at those approximations. Hence the nth iteration takes as input the approximations $x_{n},\dots ,x_{n+k}$ and the values $f(x_{n}),\dots ,f(x_{n+k})$.
The number k must be 1 or larger: k = 1, 2, 3, .... It remains fixed during the execution of the algorithm. In order to obtain the starting approximations $x_{1},\dots ,x_{k+1}$ one could carry out a few initializing iterations with a lower value of k.
The approximation $x_{n+k+1}$ is calculated as follows in the nth iteration. An interpolating polynomial $p_{n,k}(x)$ of degree k is fitted to the k + 1 points $(x_{n},f(x_{n})),\dots ,(x_{n+k},f(x_{n+k}))$. With this polynomial, the next approximation $x_{n+k+1}$ of $\alpha $ is calculated as
$x_{n+k+1}=x_{n+k}-{\frac {f(x_{n+k})}{p_{n,k}'(x_{n+k})}}$
(1)
with $p_{n,k}'(x_{n+k})$ the derivative of $p_{n,k}$ at $x_{n+k}$. Having calculated $x_{n+k+1}$ one calculates $f(x_{n+k+1})$ and the algorithm can continue with the (n + 1)th iteration. Clearly, this method requires the function $f$ to be evaluated only once per iteration; it requires no derivatives of $f$.
The iterative cycle is stopped if an appropriate stopping criterion is met. Typically the criterion is that the last calculated approximation is close enough to the sought-after root $\alpha $.
To execute the algorithm effectively, Sidi's method calculates the interpolating polynomial $p_{n,k}(x)$ in its Newton form.
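The procedure above translates directly into code. Below is a minimal Python sketch, assuming only that f is a callable and that the k + 1 starting approximations are supplied as a list; the helper names (divided_differences, sidi_step, sidi_method) and the stopping rule are illustrative choices of this sketch, not taken from Sidi's paper. The Newton form is anchored at the newest point, so its derivative there is a short sum of divided differences.

def divided_differences(xs, fs):
    # Coefficients f[y0], f[y0,y1], ..., f[y0,...,yk] of the Newton form
    # of the interpolating polynomial through the points (xs[i], fs[i]).
    coef = list(fs)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def sidi_step(xs, fs):
    # One iteration: xs, fs hold the last k + 1 approximations (oldest first)
    # and their f-values; returns the next approximation x_{n+k+1}.
    ys, gs = xs[::-1], fs[::-1]          # put the newest point first
    c = divided_differences(ys, gs)
    # p'_{n,k}(x_{n+k}) = sum_{j=1..k} f[y0,...,yj] * prod_{i=1..j-1} (y0 - yi)
    deriv, prod = 0.0, 1.0
    for j in range(1, len(ys)):
        deriv += c[j] * prod
        prod *= ys[0] - ys[j]
    return ys[0] - gs[0] / deriv

def sidi_method(f, starts, tol=1e-12, max_iter=50):
    xs = list(starts)                    # the k + 1 starting approximations
    fs = [f(x) for x in xs]
    for _ in range(max_iter):
        x_next = sidi_step(xs, fs)
        xs.append(x_next); fs.append(f(x_next))
        xs.pop(0); fs.pop(0)             # keep only the newest k + 1 points
        if abs(fs[-1]) < tol:
            break
    return xs[-1]

print(sidi_method(lambda x: x**3 - 2.0, [1.0, 1.5, 1.3]))   # about 1.2599 (k = 2)

Taking only two starting points (k = 1) reduces this sketch to the secant method, matching the discussion of related algorithms below.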
Convergence
Sidi showed that if the function $f$ is (k + 1)-times continuously differentiable in an open interval $I$ containing $\alpha $ (that is, $f\in C^{k+1}(I)$), $\alpha $ is a simple root of $f$ (that is, $f'(\alpha )\neq 0$) and the initial approximations $x_{1},\dots ,x_{k+1}$ are chosen close enough to $\alpha $, then the sequence $\{x_{i}\}$ converges to $\alpha $, meaning that the following limit holds: $\lim \limits _{n\to \infty }x_{n}=\alpha $.
Sidi furthermore showed that
$\lim _{n\to \infty }{\frac {x_{n+1}-\alpha }{\prod _{i=0}^{k}(x_{n-i}-\alpha )}}=L={\frac {(-1)^{k+1}}{(k+1)!}}{\frac {f^{(k+1)}(\alpha )}{f'(\alpha )}},$
and that the sequence converges to $\alpha $ of order $\psi _{k}$, i.e.
$\lim \limits _{n\to \infty }{\frac {|x_{n+1}-\alpha |}{|x_{n}-\alpha |^{\psi _{k}}}}=|L|^{(\psi _{k}-1)/k}$
The order of convergence $\psi _{k}$ is the only positive root of the polynomial
$s^{k+1}-s^{k}-s^{k-1}-\dots -s-1$
We have e.g. $\psi _{1}=(1+{\sqrt {5}})/2$ ≈ 1.6180, $\psi _{2}$ ≈ 1.8393 and $\psi _{3}$ ≈ 1.9276. The order approaches 2 from below as k becomes large: $\lim \limits _{k\to \infty }\psi _{k}=2$.[2][3]
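For illustration, these orders can be recomputed by finding the positive root of the polynomial above numerically; a short Python sketch using bisection on [1, 2], which brackets the root since the polynomial is negative at 1 and positive at 2 (the function name is an illustrative choice):

def sidi_order(k, iterations=100):
    # Positive root of s^(k+1) - s^k - ... - s - 1, found by bisection.
    def q(s):
        return s**(k + 1) - sum(s**j for j in range(k + 1))
    lo, hi = 1.0, 2.0                    # q(1) = -k < 0 and q(2) = 1 > 0
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if q(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print([round(sidi_order(k), 4) for k in (1, 2, 3)])   # [1.618, 1.8393, 1.9276]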
Related algorithms
Sidi's method reduces to the secant method if we take k = 1. In this case the polynomial $p_{n,1}(x)$ is the linear approximation of $f$ around $\alpha $ which is used in the nth iteration of the secant method.
We can expect that the larger we choose k, the better $p_{n,k}(x)$ is an approximation of $f(x)$ around $x=\alpha $. Also, the better $p_{n,k}'(x)$ is an approximation of $f'(x)$ around $x=\alpha $. If we replace $p_{n,k}'$ with $f'$ in (1) we obtain that the next approximation in each iteration is calculated as
$x_{n+k+1}=x_{n+k}-{\frac {f(x_{n+k})}{f'(x_{n+k})}}$
(2)
This is the Newton–Raphson method. It starts off with a single approximation $x_{1}$ so we can take k = 0 in (2). It does not require an interpolating polynomial but instead one has to evaluate the derivative $f'$ in each iteration. Depending on the nature of $f$ this may not be possible or practical.
Once the interpolating polynomial $p_{n,k}(x)$ has been calculated, one can also calculate the next approximation $x_{n+k+1}$ as a solution of $p_{n,k}(x)=0$ instead of using (1). For k = 1 these two methods are identical: it is the secant method. For k = 2 this method is known as Muller's method.[3] For k = 3 this approach involves finding the roots of a cubic function, which is unattractively complicated. This problem becomes worse for even larger values of k. An additional complication is that the equation $p_{n,k}(x)=0$ will in general have multiple solutions and a prescription has to be given which of these solutions is the next approximation $x_{n+k+1}$. Muller does this for the case k = 2 but no such prescriptions appear to exist for k > 2.
References
1. Sidi, Avram, "Generalization Of The Secant Method For Nonlinear Equations", Applied Mathematics E-notes 8 (2008), 115–123, http://www.math.nthu.edu.tw/~amen/2008/070227-1.pdf
2. Traub, J.F., "Iterative Methods for the Solution of Equations", Prentice Hall, Englewood Cliffs, N.J. (1964)
3. Muller, David E., "A Method for Solving Algebraic Equations Using an Automatic Computer", Mathematical Tables and Other Aids to Computation 10 (1956), 208–215
|
Wikipedia
|
Sidon sequence
In number theory, a Sidon sequence is a sequence $A=\{a_{0},a_{1},a_{2},\dots \}$ of natural numbers in which all pairwise sums $a_{i}+a_{j}$ (for $i\leq j$) are different. Sidon sequences are also called Sidon sets; they are named after the Hungarian mathematician Simon Sidon, who introduced the concept in his investigations of Fourier series.
The main problem in the study of Sidon sequences, posed by Sidon,[1] is to find the maximum number of elements that a Sidon sequence can contain, up to some bound $x$. Despite a large body of research,[2] the question has remained unsolved.[3]
Early results
Paul Erdős and Pál Turán proved that, for every $x>0$, the number of elements smaller than $x$ in a Sidon sequence is at most ${\sqrt {x}}+O({\sqrt[{4}]{x}})$. Several years earlier, James Singer had constructed Sidon sequences with ${\sqrt {x}}(1-o(1))$ terms less than x. The bound was improved to ${\sqrt {x}}+{\sqrt[{4}]{x}}+1$ in 1969[4] and to ${\sqrt {x}}+0.998{\sqrt[{4}]{x}}$ in 2023.[5]
In 1994 Erdős offered 500 dollars for a proof or disproof of the bound ${\sqrt {x}}+o(x^{\epsilon })$.[6]
Infinite Sidon sequences
Erdős also showed that, for any particular infinite Sidon sequence $A$ with $A(x)$ denoting the number of its elements up to $x$,
$\liminf _{x\to \infty }{\frac {A(x){\sqrt {\log x}}}{\sqrt {x}}}\leq 1.$
That is, infinite Sidon sequences are thinner than the densest finite Sidon sequences.
For the other direction, Chowla and Mian observed that the greedy algorithm gives an infinite Sidon sequence with $A(x)>c{\sqrt[{3}]{x}}$ for every $x$.[7] Ajtai, Komlós, and Szemerédi improved this with a construction[8] of a Sidon sequence with
$A(x)>{\sqrt[{3}]{x\log x}}.$
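The greedy construction of Chowla and Mian mentioned above is easy to carry out in code; a brief Python sketch (the function name is illustrative, and the straightforward bookkeeping below is only meant for small examples). Started from 1 it produces the Mian–Chowla sequence.

def greedy_sidon(count):
    # Repeatedly accept the smallest integer that keeps all pairwise sums
    # a_i + a_j (i <= j) distinct.
    seq, sums = [], set()
    candidate = 1
    while len(seq) < count:
        new_sums = {candidate + a for a in seq} | {2 * candidate}
        if sums.isdisjoint(new_sums):
            seq.append(candidate)
            sums |= new_sums
        candidate += 1
    return seq

print(greedy_sidon(5))   # [1, 2, 4, 8, 13]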
The best lower bound to date was given by Imre Z. Ruzsa, who proved[9] that a Sidon sequence with
$A(x)>x^{{\sqrt {2}}-1-o(1)}$
exists. Erdős conjectured that an infinite Sidon set $A$ exists for which $A(x)>x^{1/2-o(1)}$ holds. He and Rényi showed[10] the existence of a sequence $\{a_{0},a_{1},\dots \}$ with the conjectural density but satisfying only the weaker property that there is a constant $k$ such that for every natural number $n$ there are at most $k$ solutions of the equation $a_{i}+a_{j}=n$. (To be a Sidon sequence would require that $k=1$.)
Erdős further conjectured that there exists a nonconstant integer-coefficient polynomial whose values at the natural numbers form a Sidon sequence. Specifically, he asked if the set of fifth powers is a Sidon set. Ruzsa came close to this by showing that there is a real number $c$ with $0<c<1$ such that the range of the function $f(x)=x^{5}+\lfloor cx^{4}\rfloor $ is a Sidon sequence, where $\lfloor \ \rfloor $ denotes the integer part. As $c$ is irrational, this function $f(x)$ is not a polynomial. The statement that the set of fifth powers is a Sidon set is a special case of the later conjecture of Lander, Parkin and Selfridge.
Sidon sequences which are asymptotic bases
The existence of Sidon sequences that form an asymptotic basis of order $m$ (meaning that every sufficiently large natural number $n$ can be written as the sum of $m$ numbers from the sequence) has been proved for $m=5$ in 2010,[11] for $m=4$ in 2014,[12] for $m=3+\epsilon $ (the sum of four terms, one of which is smaller than $n^{\epsilon }$, for arbitrarily small positive $\epsilon $) in 2015,[13] and for $m=3$ in a 2023 preprint;[14][15] this last case had been posed as a problem in a 1994 paper of Erdős, Sárközy and Sós.[16]
Relationship to Golomb rulers
All finite Sidon sets are Golomb rulers, and vice versa.
To see this, suppose for a contradiction that $S$ is a Sidon set and not a Golomb ruler. Since it is not a Golomb ruler, there must be four members such that $a_{i}-a_{j}=a_{k}-a_{l}$. It follows that $a_{i}+a_{l}=a_{k}+a_{j}$, which contradicts the proposition that $S$ is a Sidon set. Therefore all Sidon sets must be Golomb rulers. By a similar argument, all Golomb rulers must be Sidon sets.
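The equivalence is easy to test on small examples; a short Python sketch with illustrative function names, checking the pairwise-sum condition (Sidon set) and the pairwise-difference condition (Golomb ruler):

from itertools import combinations, combinations_with_replacement

def is_sidon(s):
    # All sums a_i + a_j with i <= j must be distinct.
    sums = [a + b for a, b in combinations_with_replacement(sorted(s), 2)]
    return len(sums) == len(set(sums))

def is_golomb_ruler(s):
    # All positive differences |a_i - a_j| must be distinct.
    diffs = [abs(a - b) for a, b in combinations(s, 2)]
    return len(diffs) == len(set(diffs))

marks = {0, 1, 4, 9, 11}
print(is_sidon(marks), is_golomb_ruler(marks))   # True True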
See also
• Moser–de Bruijn sequence
• Sumset
References
1. Erdős, P.; Turán, P. (1941), "On a problem of Sidon in additive number theory and on some related problems" (PDF), J. London Math. Soc., 16 (4): 212–215, doi:10.1112/jlms/s1-16.4.212. Addendum, 19 (1944), 208.
2. O'Bryant, K. (2004), "A complete annotated bibliography of work related to Sidon sequences", Electronic Journal of Combinatorics, 11: 39, doi:10.37236/32.
3. Guy, Richard K. (2004), "C9: Packing sums in pairs", Unsolved problems in number theory (3rd ed.), Springer-Verlag, pp. 175–180, ISBN 0-387-20860-7, Zbl 1058.11001
4. Lindström, Bernt (1969). "An inequality for B2-sequences". Journal of Combinatorial Theory. 6 (2): 211–212. doi:10.1016/S0021-9800(69)80124-9.
5. Balogh, József; Füredi, Zoltán; Roy, Souktik (2023-05-28). "An Upper Bound on the Size of Sidon Sets". The American Mathematical Monthly. 130 (5): 437–445. doi:10.1080/00029890.2023.2176667. ISSN 0002-9890. S2CID 232417382.
6. Erdős, Paul (1994). "Some problems in number theory, combinatorics and combinatorial geometry" (PDF). Mathematica Pannonica. 5 (2): 261–269.
7. Mian, Abdul Majid; Chowla, S. (1944), "On the B2 sequences of Sidon", Proc. Natl. Acad. Sci. India A, 14: 3–4, MR 0014114.
8. Ajtai, M.; Komlós, J.; Szemerédi, E. (1981), "A dense infinite Sidon sequence", European Journal of Combinatorics, 2 (1): 1–11, doi:10.1016/s0195-6698(81)80014-5, MR 0611925.
9. Ruzsa, I. Z. (1998), "An infinite Sidon sequence", Journal of Number Theory, 68: 63–71, doi:10.1006/jnth.1997.2192, MR 1492889.
10. Erdős, P.; Rényi, A. (1960), "Additive properties of random sequences of positive integers" (PDF), Acta Arithmetica, 6: 83–110, doi:10.4064/aa-6-1-83-110, MR 0120213.
11. Kiss, S. Z. (2010-07-01). "On Sidon sets which are asymptotic bases". Acta Mathematica Hungarica. 128 (1): 46–58. doi:10.1007/s10474-010-9155-1. ISSN 1588-2632. S2CID 96474687.
12. Kiss, Sándor Z.; Rozgonyi, Eszter; Sándor, Csaba (2014-12-01). "On Sidon sets which are asymptotic bases of order 4". Functiones et Approximatio Commentarii Mathematici. 51 (2). arXiv:1304.5749. doi:10.7169/facm/2014.51.2.10. ISSN 0208-6573. S2CID 119121815.
13. Cilleruelo, Javier (November 2015). "On Sidon sets and asymptotic bases". Proceedings of the London Mathematical Society. 111 (5): 1206–1230. doi:10.1112/plms/pdv050. S2CID 34849568.
14. Pilatte, Cédric (2023-03-16). "A solution to the Erdős–Sárközy–Sós problem on asymptotic Sidon bases of order 3". arXiv:2303.09659v1 [math.NT].
15. "First-Year Graduate Finds Paradoxical Number Set". Quanta Magazine. 2023-06-05. Retrieved 2023-06-13.
16. Erdős, P.; Sárközy, A.; Sós, V. T. (1994-12-31). "On additive properties of general sequences". Discrete Mathematics. 136 (1): 75–99. doi:10.1016/0012-365X(94)00108-U. ISSN 0012-365X. S2CID 38168554.
|
Wikipedia
|
Sidorenko's conjecture
Sidorenko's conjecture is a conjecture in the field of graph theory, posed by Alexander Sidorenko in 1986. Roughly speaking, the conjecture states that for any bipartite graph $H$ and graph $G$ on $n$ vertices with average degree $pn$, there are at least $p^{|E(H)|}n^{|V(H)|}$ labeled copies of $H$ in $G$, up to a small error term. Formally, it provides an intuitive inequality about graph homomorphism densities in graphons. The conjectured inequality can be interpreted as a statement that the density of copies of $H$ in a graph is asymptotically minimized by a random graph, as one would expect a $p^{|E(H)|}$ fraction of possible subgraphs to be a copy of $H$ if each edge exists with probability $p$.
Statement
Let $H$ be a graph. Then $H$ is said to have Sidorenko's property if, for all graphons $W$, the inequality
$t(H,W)\geq t(K_{2},W)^{|E(H)|}$
is true, where $t(H,W)$ is the homomorphism density of $H$ in $W$.
Sidorenko's conjecture (1986) states that every bipartite graph has Sidorenko's property.[1]
If $W$ is a graph $G$, this means that the probability of a uniform random mapping from $V(H)$ to $V(G)$ being a homomorphism is at least the product over each edge in $H$ of the probability of that edge being mapped to an edge in $G$. This roughly means that a randomly chosen graph with fixed number of vertices and average degree has the minimum number of labeled copies of $H$. This is not a surprising conjecture because the right hand side of the inequality is the probability of the mapping being a homomorphism if each edge map is independent. So one should expect the two sides to be at least of the same order. The natural extension to graphons would follow from the fact that every graphon is the limit point of some sequence of graphs.
The requirement that $H$ be bipartite is necessary for it to have Sidorenko's property: if $W$ is a bipartite graph, then $t(K_{3},W)=0$ since $W$ is triangle-free. But $t(K_{2},W)$, which equals twice the number of edges of $W$ divided by $|V(W)|^{2}$, is positive whenever $W$ has an edge, so the inequality $t(K_{3},W)\geq t(K_{2},W)^{3}$ fails and Sidorenko's property does not hold for $K_{3}$. A similar argument shows that no graph with an odd cycle has Sidorenko's property. Since a graph is bipartite if and only if it has no odd cycles, this implies that the only possible graphs that can have Sidorenko's property are bipartite graphs.
Equivalent formulation
Sidorenko's property is equivalent to the following reformulation:
For all graphs $G$, if $G$ has $n$ vertices and an average degree of $pn$, then $t(H,G)\geq p^{|E(H)|}$.
This is equivalent because the number of homomorphisms from $K_{2}$ to $G$ is twice the number of edges in $G$, and the inequality only needs to be checked when $W$ is a graph as previously mentioned.
In this formulation, since the number of non-injective homomorphisms from $H$ to $G$ is at most a constant times $n^{|V(H)|-1}$, Sidorenko's property would imply that there are at least $(p^{|E(H)|}-o(1))n^{|V(H)|}$ labeled copies of $H$ in $G$.
Examples
As previously noted, to prove Sidorenko's property it suffices to demonstrate the inequality for all graphs $G$. Throughout this section, $G$ is a graph on $n$ vertices with average degree $pn$. The quantity $\operatorname {hom} (H,G)$ refers to the number of homomorphisms from $H$ to $G$. This quantity is the same as $n^{|V(H)|}t(H,G)$.
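For small graphs both quantities can be computed by brute force over all maps from $V(H)$ to $V(G)$; a rough Python sketch, exponential in $|V(H)|$ and intended only as an illustration (the adjacency-set encoding of $G$ and the function names are choices of this sketch):

from itertools import product

def hom_count(H_vertices, H_edges, G_adj):
    # Number of maps V(H) -> V(G) sending every edge of H to an edge of G.
    n = len(G_adj)
    total = 0
    for image in product(range(n), repeat=len(H_vertices)):
        assign = dict(zip(H_vertices, image))
        if all(assign[v] in G_adj[assign[u]] for u, v in H_edges):
            total += 1
    return total

def hom_density(H_vertices, H_edges, G_adj):
    return hom_count(H_vertices, H_edges, G_adj) / len(G_adj) ** len(H_vertices)

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}            # G = K_3, vertices 0..2
print(hom_density([0, 1], [(0, 1)], triangle))          # t(K_2, K_3) = 6/9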
Elementary proofs of Sidorenko's property for some graphs follow from the Cauchy–Schwarz inequality or Hölder's inequality. Others can be done by using spectral graph theory, especially noting the observation that the number of walks of length $\ell $ from vertex $i$ to vertex $j$ in $G$ is the entry in the $i$th row and $j$th column of the matrix $A^{\ell }$, where $A$ is the adjacency matrix of $G$.
Cauchy–Schwarz: The 4-cycle C4
By fixing two vertices $u$ and $v$ of $G$, each copy of $C_{4}$ that has $u$ and $v$ on opposite ends can be identified by choosing two (not necessarily distinct) common neighbors of $u$ and $v$. Letting $\operatorname {codeg} (u,v)$ denote the codegree of $u$ and $v$ (i.e. the number of common neighbors), this implies
$\operatorname {hom} (C_{4},G)=\sum _{u,v\in V(G)}\operatorname {codeg} (u,v)^{2}\geq {\frac {1}{n^{2}}}\left(\sum _{u,v\in V(G)}\operatorname {codeg} (u,v)\right)^{2}$
by the Cauchy–Schwarz inequality. The sum has now become a count of all pairs of vertices and their common neighbors, which is the same as the count of all vertices and pairs of their neighbors. So
$\operatorname {hom} (C_{4},G)\geq {\frac {1}{n^{2}}}\left(\sum _{x\in V(G)}\deg(x)^{2}\right)^{2}\geq {\frac {1}{n^{2}}}\left({\frac {1}{n}}\left(\sum _{x\in V(G)}\deg(x)\right)^{2}\right)^{2}={\frac {1}{n^{2}}}\left({\frac {1}{n}}(n\cdot pn)^{2}\right)^{2}=p^{4}n^{4}$
by Cauchy–Schwarz again. So
$t(C_{4},G)={\frac {\operatorname {hom} (C_{4},G)}{n^{4}}}\geq p^{4}$
as desired.
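The bound can also be sanity-checked numerically on a random graph, following the codegree computation above (so that $\operatorname {hom} (C_{4},G)=\sum _{u,v}\operatorname {codeg} (u,v)^{2}$); a small numpy sketch with arbitrary parameter choices:

import numpy as np

rng = np.random.default_rng(0)
n = 60
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T           # adjacency matrix of a random simple graph

p = A.sum() / n**2                       # edge density, so the average degree is p*n
codeg = A @ A                            # codeg[u, v] = number of common neighbors
hom_C4 = (codeg**2).sum()                # equals trace(A^4)
print(hom_C4 / n**4 >= p**4)             # True: t(C_4, G) >= p^4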
Spectral graph theory: The 2k-cycle C2k
Although the Cauchy–Schwarz approach for $C_{4}$ is elegant and elementary, it does not immediately generalize to all even cycles. However, one can apply spectral graph theory to prove that all even cycles have Sidorenko's property. Note that odd cycles are not accounted for in Sidorenko's conjecture because they are not bipartite.
Using the observation about closed paths, it follows that $\operatorname {hom} (C_{2k},G)$ is the sum of the diagonal entries in $A^{2k}$. This is equal to the trace of $A^{2k}$, which in turn is equal to the sum of the $2k$th powers of the eigenvalues of $A$. If $\lambda _{1}\geq \lambda _{2}\geq \dots \geq \lambda _{n}$ are the eigenvalues of $A$, then the min-max theorem implies that
$\lambda _{1}\geq {\frac {\mathbf {1} ^{\intercal }A\mathbf {1} }{\mathbf {1} ^{\intercal }\mathbf {1} }}={\frac {1}{n}}\sum _{x\in V(G)}\deg(x)=pn,$
where $\mathbf {1} $ is the vector with $n$ components, all of which are $1$. But then
$\operatorname {hom} (C_{2k},G)=\sum _{i=1}^{n}\lambda _{i}^{2k}\geq \lambda _{1}^{2k}\geq p^{2k}n^{2k}$
because the eigenvalues of a real symmetric matrix are real. So
$t(C_{2k},G)={\frac {\operatorname {hom} (C_{2k},G)}{n^{2k}}}\geq p^{2k}$
as desired.
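A corresponding numerical check for longer even cycles can use the spectrum directly, since $\operatorname {hom} (C_{2k},G)=\sum _{i}\lambda _{i}^{2k}$; another small, self-contained numpy sketch with arbitrary parameter choices:

import numpy as np

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.25).astype(float)
A = np.triu(A, 1); A = A + A.T           # random simple graph
p = A.sum() / n**2                       # average degree is p*n

eigs = np.linalg.eigvalsh(A)             # real eigenvalues of the symmetric A
for k in (2, 3, 4):
    hom = (eigs**(2 * k)).sum()          # equals trace(A^(2k)) = hom(C_2k, G)
    print(hom >= (p * n)**(2 * k))       # True, consistent with t(C_2k, G) >= p^(2k)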
Entropy: Paths of length 3
J.L. Xiang Li and Balázs Szegedy (2011) introduced the idea of using entropy to prove some cases of Sidorenko's conjecture. Szegedy (2015) later applied the ideas further to prove that an even wider class of bipartite graphs have Sidorenko's property.[2] While Szegedy's proof wound up being abstract and technical, Tim Gowers and Jason Long reduced the argument to a simpler one for specific cases such as paths of length $3$.[3] In essence, the proof chooses a nice probability distribution of choosing the vertices in the path and applies Jensen's inequality (i.e. convexity) to deduce the inequality.
Partial results
Here is a list of some bipartite graphs $H$ which have been shown to have Sidorenko's property. Let $H$ have bipartition $A\sqcup B$.
• Paths have Sidorenko's property, as shown by Mulholland and Smith in 1959 (before Sidorenko formulated the conjecture).[4]
• Trees have Sidorenko's property, generalizing paths. This was shown by Sidorenko in a 1991 paper.[5]
• Cycles of even length have Sidorenko's property as previously shown. Sidorenko also demonstrated this in his 1991 paper.
• Complete bipartite graphs have Sidorenko's property. This was also shown in Sidorenko's 1991 paper.
• Bipartite graphs with $\min\{|A|,|B|\}\leq 4$ have Sidorenko's property. This was also shown in Sidorenko's 1991 paper.
• Hypercube graphs (generalizations of $Q_{3}$) have Sidorenko's property, as shown by Hatami in 2008.[6]
• More generally, norming graphs (as introduced by Hatami) have Sidorenko's property.
• If there is a vertex in $A$ that is neighbors with every vertex in $B$ (or vice versa), then $H$ has Sidorenko's property as shown by Conlon, Fox, and Sudakov in 2010.[7] This proof used the dependent random choice method.
• For all bipartite graphs $H$, there is some positive integer $p$ such that the $p$-blow-up of $H$ has Sidorenko's property. Here, the $p$-blow-up of $H$ is formed by replacing each vertex in $B$ with $p$ copies of itself, each connected with its original neighbors in $A$. This was shown by Conlon and Lee in 2018.[8]
• Some recursive approaches have been attempted, which take a collection of graphs that have Sidorenko's property to create a new graph that has Sidorenko's property. The main progress in this manner was done by Sidorenko in his 1991 paper, Li and Szegedy in 2011,[9] and Kim, Lee, and Lee in 2013.[10]
• Li and Szegedy's paper also used entropy methods to prove the property for a class of graphs called "reflection trees."
• Kim, Lee, and Lee's paper extended this idea to a class of graphs with a tree-like substructure called "tree-arrangeable graphs."
However, there are graphs for which Sidorenko's conjecture is still open. An example is the "Möbius strip" graph $K_{5,5}\setminus C_{10}$, formed by removing a $10$-cycle from the complete bipartite graph with parts of size $5$.
László Lovász proved a local version of Sidorenko's conjecture, i.e. for graphs that are "close" to random graphs in a sense of cut norm.[11]
Forcing conjecture
A sequence of graphs $\{G_{n}\}_{n=1}^{\infty }$ is called quasi-random with density $p$ for some density $0<p<1$ if for every graph $H$,
$t(H,G_{n})=(1+o(1))p^{|E(H)|}.$
The sequence of graphs would thus have properties of the Erdős–Rényi random graph $G(n,p)$.
If the edge density $t(K_{2},G_{n})$ is fixed at $(1+o(1))p$, then the condition implies that the sequence of graphs is near the equality case in Sidorenko's property for every graph $H$.
From Chung, Graham, and Wilson's 1989 paper about quasi-random graphs, it suffices for the $C_{4}$ count to match what would be expected of a random graph (i.e. the condition holds for $H=C_{4}$).[12] The paper also asks which graphs $H$ have this property besides $C_{4}$. Such graphs are called forcing graphs as their count controls the quasi-randomness of a sequence of graphs.
The forcing conjecture states the following:
A graph $H$ is forcing if and only if it is bipartite and not a tree.
It is straightforward to see that if $H$ is forcing, then it is bipartite and not a tree. Some examples of forcing graphs are even cycles (shown by Chung, Graham, and Wilson). Skokan and Thoma showed that all complete bipartite graphs that are not trees are forcing.[13]
Sidorenko's conjecture for graphs of density $p$ follows from the forcing conjecture. Furthermore, the forcing conjecture would show that graphs that are close to equality in Sidorenko's property must satisfy quasi-randomness conditions.[14]
See also
• Common graph
References
1. Sidorenko, Alexander (1993), "A correlation inequality for bipartite graphs", Graphs and Combinatorics, 9 (2–4): 201–204, doi:10.1007/BF02988307
2. Szegedy, Balázs (2015), An information theoretic approach to Sidorenko's conjecture, arXiv:1406.6738
3. Gowers, Tim. "Entropy and Sidorenko's conjecture — after Szegedy". Gowers's Weblog. Retrieved 1 December 2019.
4. Mulholland, H.P.; Smith, Cedric (1959), "An inequality arising in genetical theory", American Mathematical Monthly (66): 673–683, doi:10.1080/00029890.1959.11989387
5. Sidorenko, Alexander (1991), "Inequalities for functionals generated by bipartite graphs", Diskretnaya Matematika (3): 50–65, doi:10.1515/dma.1992.2.5.489
6. Hatami, Hamed (2010), "Graph norms and Sidorenko's conjecture", Israel Journal of Mathematics (175): 125–150, arXiv:0806.0047, doi:10.1007/s11856-010-0005-1
7. Conlon, David; Fox, Jacob; Sudakov, Benny (2010), "An approximate version of Sidorenko's conjecture", Geometric and Functional Analysis (20): 1354–1366, arXiv:1004.4236
8. Conlon, David; Lee, Joonkyung (2018), Sidorenko's conjecture for blow-ups, arXiv:1809.01259
9. Li, J.L. Xiang; Szegedy, Balázs (2011), On the logarithmic calculus and Sidorenko's conjecture, arXiv:1107.1153
10. Kim, Jeong Han; Lee, Choongbum; Lee, Joonkyung (2016), "Two Approaches to Sidorenko's Conjecture", Transactions of the American Mathematical Society, 368 (7): 5057–5074, arXiv:1310.4383, doi:10.1090/tran/6487
11. Lovász, László (2010), Subgraph densities in signed graphons and the local Sidorenko conjecture, arXiv:1004.3026
12. Chung, Fan; Graham, Ronald; Wilson, Richard (1989), "Quasi-random graphs", Combinatorica, 9 (4): 345–362, doi:10.1007/BF02125347
13. Skokan, Jozef; Thoma, Lubos (2004), "Bipartite Subgraphs and Quasi-Randomness", Graphs and Combinatorics, 20 (2): 255–262, doi:10.1007/s00373-004-0556-1
14. Conlon, David; Fox, Jacob; Sudakov, Benny (2010), "An approximate version of Sidorenko's conjecture", Geometric and Functional Analysis (20): 1354–1366, arXiv:1004.4236
|
Wikipedia
|
Seifert fiber space
A Seifert fiber space is a 3-manifold together with a decomposition as a disjoint union of circles. In other words, it is an $S^{1}$-bundle (circle bundle) over a 2-dimensional orbifold. Many 3-manifolds are Seifert fiber spaces, and they account for all compact oriented manifolds in 6 of the 8 Thurston geometries of the geometrization conjecture.
Definition
A Seifert manifold is a closed 3-manifold together with a decomposition into a disjoint union of circles (called fibers) such that each fiber has a tubular neighborhood that forms a standard fibered torus.
A standard fibered torus corresponding to a pair of coprime integers $(a,b)$ with $a>0$ is the surface bundle of the automorphism of a disk given by rotation by an angle of $2\pi b/a$ (with the natural fibering by circles). If $a=1$ the middle fiber is called ordinary, while if $a>1$ the middle fiber is called exceptional. A compact Seifert fiber space has only a finite number of exceptional fibers.
The set of fibers forms a 2-dimensional orbifold, denoted by B and called the base —also called the orbit surface— of the fibration. It has an underlying 2-dimensional surface $B_{0}$, but may have some special orbifold points corresponding to the exceptional fibers.
The definition of Seifert fibration can be generalized in several ways. The Seifert manifold is often allowed to have a boundary (also fibered by circles, so it is a union of tori). When studying non-orientable manifolds, it is sometimes useful to allow fibers to have neighborhoods that look like the surface bundle of a reflection (rather than a rotation) of a disk, so that some fibers have neighborhoods looking like fibered Klein bottles, in which case there may be one-parameter families of exceptional curves. In both of these cases, the base B of the fibration usually has a non-empty boundary.
Classification
Herbert Seifert classified all closed Seifert fibrations in terms of the following invariants. Seifert manifolds are denoted by symbols
$\{b,(\varepsilon ,g);(a_{1},b_{1}),\dots ,(a_{r},b_{r})\}\,$
where: $\varepsilon $ is one of the 6 symbols: $o_{1},o_{2},n_{1},n_{2},n_{3},n_{4}\,$, (or Oo, No, NnI, On, NnII, NnIII in Seifert's original notation) meaning:
• $o_{1}$ if B is orientable and M is orientable.
• $o_{2}$ if B is orientable and M is not orientable.
• $n_{1}$ if B is not orientable and M is not orientable and all generators of $\pi _{1}(B)$ preserve orientation of the fiber.
• $n_{2}$ if B is not orientable and M is orientable, so all generators of $\pi _{1}(B)$ reverse orientation of the fiber.
• $n_{3}$ if B is not orientable and M is not orientable and $g\geq 2$ and exactly one generator of $\pi _{1}(B)$ preserves orientation of the fiber.
• $n_{4}$ if B is not orientable and M is not orientable and $g\geq 3$ and exactly two generators of $\pi _{1}(B)$ preserve orientation of the fiber.
Here
• g is the genus of the underlying 2-manifold of the orbit surface.
• b is an integer, normalized to be 0 or 1 if M is not orientable and normalized to be 0 if in addition some $a_{i}$ is 2.
• $(a_{1},b_{1}),\ldots ,(a_{r},b_{r})$ are the pairs of numbers determining the type of each of the r exceptional orbits. They are normalized so that $0<b_{i}<a_{i}$ when M is orientable, and $0<b_{i}\leq a_{i}/2$ when M is not orientable.
The Seifert fibration of the symbol
$\{b,(\epsilon ,g);(a_{1},b_{1}),\ldots ,(a_{r},b_{r})\}$
can be constructed from that of symbol
$\{0,(\epsilon ,g);\}$
by using surgery to add fibers of types b and $b_{i}/a_{i}$.
If we drop the normalization conditions then the symbol can be changed as follows:
• Changing the sign of both $a_{i}$ and $b_{i}$ has no effect.
• Adding 1 to b and subtracting $a_{i}$ from $b_{i}$ has no effect. (In other words, we can add integers to each of the rational numbers $(b,b_{1}/a_{1},\ldots ,b_{r}/a_{r})$ provided that their sum remains constant.)
• If the manifold is not orientable, changing the sign of $b_{i}$ has no effect.
• Adding a fiber of type (1,0) has no effect.
Every symbol is equivalent under these operations to a unique normalized symbol. When working with unnormalized symbols, the integer b can be set to zero by adding a fiber of type $(1,b)$.
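These moves can be applied mechanically to bring a symbol into normalized form. The following is a minimal sketch for an orientable total space (so the target range is $0<b_{i}<a_{i}$); the function name and input format are made up for illustration, and each pair $(a_{i},b_{i})$ is assumed coprime:

```python
def normalize(b, fibers):
    """Normalize an unnormalized Seifert symbol {b; (eps, g); (a1, b1), ...}
    with orientable total space, using the moves listed above:
    each bi is reduced to the range 0 < bi < ai (compensating by changing b),
    and fibers with ai = 1 are absorbed into b."""
    new_fibers = []
    for a, bi in fibers:
        if a < 0:                 # changing the sign of both entries has no effect
            a, bi = -a, -bi
        if a == 1:                # a (1, k) fiber just adds k to b
            b += bi
            continue
        q, r = divmod(bi, a)      # "add 1 to b, subtract a from bi", q times
        b += q
        new_fibers.append((a, r))
    return b, sorted(new_fibers)

# Two symbols related by the moves above normalize to the same data:
print(normalize(-1, [(2, 1), (3, 1), (5, 1)]))
print(normalize(0, [(2, -1), (3, 1), (5, 1)]))   # same output
```

Both calls return (−1, [(2, 1), (3, 1), (5, 1)]), and the sum b + Σbi/ai is unchanged by the reduction.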
Two closed Seifert oriented or non-orientable fibrations are isomorphic as oriented or non-orientable fibrations if and only if they have the same normalized symbol. However, it is sometimes possible for two Seifert manifolds to be homeomorphic even if they have different normalized symbols, because a few manifolds (such as lens spaces) can have more than one sort of Seifert fibration. Also an oriented fibration under a change of orientation becomes the Seifert fibration whose symbol has the sign of all the bs changed, which after normalization gives it the symbol
$\{-b-r,(\epsilon ,g);(a_{1},a_{1}-b_{1}),\ldots ,(a_{r},a_{r}-b_{r})\}$
and it is homeomorphic to this as an unoriented manifold.
The sum $b+\sum b_{i}/a_{i}$ is an invariant of oriented fibrations, which is zero if and only if the fibration becomes trivial after taking a finite cover of B.
The orbifold Euler characteristic $\chi (B)$ of the orbifold B is given by
$\chi (B)=\chi (B_{0})-\sum (1-1/a_{i})$,
where $\chi (B_{0})$ is the usual Euler characteristic of the underlying topological surface $B_{0}$ of the orbifold B. The behavior of M depends largely on the sign of the orbifold Euler characteristic of B.
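Together with the sum b + Σbi/ai (the Euler number of the fibration, up to a sign convention), the sign of the orbifold Euler characteristic determines the Thurston geometry, as spelled out in the sections below. A minimal sketch for the orientable case (type o1), with a made-up function name and input format:

```python
from fractions import Fraction

def thurston_geometry(b, genus, exceptional):
    """Seifert symbol {b; (o1, genus); (a1, b1), ..., (ar, br)} with orientable
    base and orientable total space.  Returns the orbifold Euler characteristic
    chi(B), the Euler number e = b + sum(bi/ai), and the Thurston geometry."""
    chi = Fraction(2 - 2 * genus) - sum(1 - Fraction(1, a) for a, _ in exceptional)
    e = b + sum(Fraction(bi, ai) for ai, bi in exceptional)
    if chi > 0:
        geometry = "S2 x R" if e == 0 else "spherical"
    elif chi == 0:
        geometry = "Euclidean" if e == 0 else "Nil"
    else:
        geometry = "H2 x R" if e == 0 else "universal cover of SL(2, R)"
    return chi, e, geometry

# Poincare homology sphere {-1; (o1, 0); (2, 1), (3, 1), (5, 1)}: chi = e = 1/30
print(thurston_geometry(-1, 0, [(2, 1), (3, 1), (5, 1)]))   # -> spherical
# 3-torus {0; (o1, 1); }: chi = e = 0
print(thurston_geometry(0, 1, []))                          # -> Euclidean
```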
Fundamental group
The fundamental group of M fits into the exact sequence
$\pi _{1}(S^{1})\rightarrow \pi _{1}(M)\rightarrow \pi _{1}(B)\rightarrow 1$
where $\pi _{1}(B)$ is the orbifold fundamental group of B (which is not the same as the fundamental group of the underlying topological manifold). The image of group $\pi _{1}(S^{1})$ is cyclic, normal, and generated by the element h represented by any regular fiber, but the map from π1(S1) to π1(M) is not always injective.
The fundamental group of M has the following presentation by generators and relations:
B orientable:
$\langle u_{1},v_{1},...u_{g},v_{g},q_{1},...q_{r},h|u_{i}h=h^{\epsilon }u_{i},v_{i}h=h^{\epsilon }v_{i},q_{i}h=hq_{i},q_{j}^{a_{j}}h^{b_{j}}=1,q_{1}...q_{r}[u_{1},v_{1}]...[u_{g},v_{g}]=h^{b}\rangle $
where ε is 1 for type o1, and is −1 for type o2.
B non-orientable:
$\langle v_{1},...,v_{g},q_{1},...q_{r},h|v_{i}h=h^{\epsilon _{i}}v_{i},q_{i}h=hq_{i},q_{j}^{a_{j}}h^{b_{j}}=1,q_{1}...q_{r}v_{1}^{2}...v_{g}^{2}=h^{b}\rangle $
where εi is 1 or −1 depending on whether the corresponding generator vi preserves or reverses orientation of the fiber. (So εi are all 1 for type n1, all −1 for type n2, just the first one is one for type n3, and just the first two are one for type n4.)
Positive orbifold Euler characteristic
The normalized symbols of Seifert fibrations with positive orbifold Euler characteristic are given in the list below. These Seifert manifolds often have many different Seifert fibrations. They have a spherical Thurston geometry if the fundamental group is finite, and an S2×R Thurston geometry if the fundamental group is infinite. Equivalently, the geometry is S2×R if the manifold is non-orientable or if b + Σbi/ai= 0, and spherical geometry otherwise.
{b; (o1, 0);} (b integral) is S2×S1 for b=0, otherwise a lens space L(b,1). In particular, {1; (o1, 0);} =L(1,1) is the 3-sphere.
{b; (o1, 0);(a1, b1)} (b integral) is the lens space L(ba1+b1,a1).
{b; (o1, 0);(a1, b1), (a2, b2)} (b integral) is S2×S1 if ba1a2+a1b2+a2b1 = 0, otherwise the lens space L(ba1a2+a1b2+a2b1, ma2+nb2) where ma1 − n(ba1 +b1) = 1.
{b; (o1, 0);(2, 1), (2, 1), (a3, b3)} (b integral) This is the prism manifold with fundamental group of order 4a3|(b+1)a3+b3| and first homology group of order 4|(b+1)a3+b3|.
{b; (o1, 0);(2, 1), (3, b2), (3, b3)} (b integral) The fundamental group is a central extension of the tetrahedral group of order 12 by a cyclic group.
{b; (o1, 0);(2, 1), (3, b2), (4, b3)} (b integral) The fundamental group is the product of a cyclic group of order |12b+6+4b2 + 3b3| and a double cover of order 48 of the octahedral group of order 24.
{b; (o1, 0);(2, 1), (3, b2), (5, b3)} (b integral) The fundamental group is the product of a cyclic group of order m=|30b+15+10b2 +6b3| and the order 120 perfect double cover of the icosahedral group. The manifolds are quotients of the Poincaré homology sphere by cyclic groups of order m. In particular, {−1; (o1, 0);(2, 1), (3, 1), (5, 1)} is the Poincaré sphere.
{b; (n1, 1);} (b is 0 or 1.) These are the non-orientable 3-manifolds with S2×R geometry. If b is even this is homeomorphic to the projective plane times the circle, otherwise it is homeomorphic to a surface bundle associated to an orientation reversing automorphism of the 2-sphere.
{b; (n1, 1);(a1, b1)} (b is 0 or 1.) These are the non-orientable 3-manifolds with S2×R geometry. If ba1+b1 is even this is homeomorphic to the projective plane times the circle, otherwise it is homeomorphic to a surface bundle associated to an orientation reversing automorphism of the 2-sphere.
{b; (n2, 1);} (b integral.) This is the prism manifold with fundamental group of order 4|b| and first homology group of order 4, except for b=0 when it is a sum of two copies of real projective space, and |b|=1 when it is the lens space with fundamental group of order 4.
{b; (n2, 1);(a1, b1)} (b integral.) This is the (unique) prism manifold with fundamental group of order 4a1|ba1 + b1| and first homology group of order 4a1.
Zero orbifold Euler characteristic
The normalized symbols of Seifert fibrations with zero orbifold Euler characteristic are given in the list below. The manifolds have Euclidean Thurston geometry if they are non-orientable or if b + Σbi/ai = 0, and nil geometry otherwise. Equivalently, the manifold has Euclidean geometry if and only if its fundamental group has an abelian group of finite index. There are 10 Euclidean manifolds, but four of them have two different Seifert fibrations. All surface bundles associated to automorphisms of the 2-torus of trace 2, 1, 0, −1, or −2 are Seifert fibrations with zero orbifold Euler characteristic (the ones for other (Anosov) automorphisms are not Seifert fiber spaces, but have sol geometry). The manifolds with nil geometry all have a unique Seifert fibration, and are characterized by their fundamental groups. The total spaces are all aspherical.
{b; (o1, 0); (3, b1), (3, b2), (3, b3)} (b integral, bi is 1 or 2) For b + Σbi/ai= 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 3 (trace −1) rotation of the 2-torus.
{b; (o1, 0); (2,1), (4, b2), (4, b3)} (b integral, bi is 1 or 3) For b + Σbi/ai= 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 4 (trace 0) rotation of the 2-torus.
{b; (o1, 0); (2, 1), (3, b2), (6, b3)} (b integral, b2 is 1 or 2, b3 is 1 or 5) For b + Σbi/ai= 0 this is an oriented Euclidean 2-torus bundle over the circle, and is the surface bundle associated to an order 6 (trace 1) rotation of the 2-torus.
{b; (o1, 0); (2, 1), (2, 1), (2, 1), (2, 1)} (b integral) These are oriented 2-torus bundles for trace −2 automorphisms of the 2-torus. For b=−2 this is an oriented Euclidean 2-torus bundle over the circle (the surface bundle associated to an order 2 rotation of the 2-torus) and is homeomorphic to {0; (n2, 2);}.
{b; (o1, 1); } (b integral) This is an oriented 2-torus bundle over the circle, given as the surface bundle associated to a trace 2 automorphism of the 2-torus. For b=0 this is Euclidean, and is the 3-torus (the surface bundle associated to the identity map of the 2-torus).
{b; (o2, 1); } (b is 0 or 1) Two non-orientable Euclidean Klein bottle bundles over the circle. The first homology is Z+Z+Z/2Z if b=0, and Z+Z if b=1. The first is the Klein bottle times S1 and other is the surface bundle associated to a Dehn twist of the Klein bottle. They are homeomorphic to the torus bundles {b; (n1, 2);}.
{0; (n1, 1); (2, 1), (2, 1)} Homeomorphic to the non-orientable Euclidean Klein bottle bundle {1; (n3, 2);}, with first homology Z + Z/4Z.
{b; (n1, 2); } (b is 0 or 1) These are the non-orientable Euclidean surface bundles associated with orientation reversing order 2 automorphisms of a 2-torus with no fixed points. The first homology is Z+Z+Z/2Z if b=0, and Z+Z if b=1. They are homeomorphic to the Klein bottle bundles {b; (o2, 1);}.
{b; (n2, 1); (2, 1), (2, 1)} (b integral) For b=−1 this is oriented Euclidean.
{b; (n2, 2); } (b integral) For b=0 this is an oriented Euclidean manifold, homeomorphic to the 2-torus bundle {−2; (o1, 0); (2, 1), (2, 1), (2, 1), (2, 1)} over the circle associated to an order 2 rotation of the 2-torus.
{b; (n3, 2); } (b is 0 or 1) The other two non-orientable Euclidean Klein bottle bundles. The one with b = 1 is homeomorphic to {0; (n1, 1); (2, 1), (2, 1)}. The first homology is Z+Z/2Z+Z/2Z if b=0, and Z+Z/4Z if b=1. These two Klein bottle bundles are surface bundles associated to the y-homeomorphism and the product of this and the twist.
Negative orbifold Euler characteristic
This is the general case. All such Seifert fibrations are determined up to isomorphism by their fundamental group. The total spaces are aspherical (in other words all higher homotopy groups vanish). They have Thurston geometries of type the universal cover of SL2(R), unless some finite cover splits as a product, in which case they have Thurston geometries of type H2×R. This happens if the manifold is non-orientable or b + Σbi/ai= 0.
References
• A.V. Chernavskii (2001) [1994], "Seifert fibration", Encyclopedia of Mathematics, EMS Press
• Herbert Seifert, Topologie dreidimensionaler gefaserter Räume, Acta Mathematica 60 (1933) 147–238 (There is a translation by W. Heil, published by Florida State University in 1976 and found in: Herbert Seifert, William Threlfall, Seifert and Threlfall: a textbook of topology, Pure and Applied Mathematics, Academic Press Inc (1980), vol. 89.)
• Peter Orlik, Seifert manifolds, Lecture Notes in Mathematics 291, Springer (1972).
• Frank Raymond, Classification of the actions of the circle on 3-manifolds, Transactions of the American Mathematical Society 31, (1968) 51–87.
• William H. Jaco, Lectures on 3-manifold topology ISBN 0-8218-1693-4
• William H. Jaco, Peter B. Shalen, Seifert Fibered Spaces in Three Manifolds: Memoirs Series No. 220 (Memoirs of the American Mathematical Society; v. 21, no. 220) ISBN 0-8218-2220-9
• Matthew G. Brin (2007). "Seifert Fibered Spaces: Notes for a course given in the Spring of 1993". arXiv:0711.1346 [math.GT].
• John Hempel, 3-manifolds, American Mathematical Society, ISBN 0-8218-3695-1
• Peter Scott, The geometries of 3-manifolds. (errata), Bull. London Math. Soc. 15 (1983), no. 5, 401–487.
|
Wikipedia
|
Siegel's paradox
Siegel's paradox is the phenomenon that uncertainty about future prices can theoretically push rational consumers to temporarily trade away their preferred consumption goods (or currency) for non-preferred goods (or currency), as part of a plan to trade back to the preferred consumption goods after prices become clearer. For example, in some models, Americans can expect to earn more American dollars on average by investing in Euros, while Europeans can expect to earn more Euros on average by investing in American dollars. The paradox was identified by economist Jeremy Siegel in 1972.[1]
Like the related two envelopes problem, the phenomenon is sometimes labeled a paradox because an agent can seem to trade for something of equal monetary value and yet, paradoxically, seem at the same time to gain monetary value from the trade. Closer analysis shows that the "monetary value" of the trade is ambiguous but that nevertheless such trades are often favorable, depending on the scenario.
Apple/orange example
Economist Fischer Black gave the following illustration in 1995. Suppose that the exchange rate between an "apple" country where consumers prefer apples, and an "orange" country where consumers prefer valuable oranges, is currently 1:1, but will change next year to 2:1 or 1:2 with equal probability. Suppose an apple consumer trades an apple to an orange consumer in exchange for an orange. The apple consumer now has given up an apple for an orange, which next year has an expected value of 1.25 apples. The orange consumer now has given up an orange for an apple, which next year has an expected value of 1.25 oranges. Thus both appear to have benefited from the exchange on average.[1] Mathematically, the apparent surplus is related to Jensen's inequality.[2][3]
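A two-line computation of these expected values makes the role of Jensen's inequality visible (the probabilities are the ones assumed in the example):

```python
# Exchange rate r = oranges per apple next year: 2 or 1/2 with equal probability.
rates = [2.0, 0.5]

expected_oranges_per_apple = sum(rates) / len(rates)                 # E[r]   = 1.25
expected_apples_per_orange = sum(1 / r for r in rates) / len(rates)  # E[1/r] = 1.25
print(expected_oranges_per_apple, expected_apples_per_orange)
# Jensen's inequality: E[1/r] >= 1/E[r], so both sides can exceed 1 at once.
```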
Wine example
A more detailed example is a simplified efficient market with two wines, an American wine and a German wine. In November the wines trade 1:1. In December, most consumers will put exactly twice as much value on the trendier wine as on the non-trendy wine; there is a 50/50 chance that either wine will become the trendy one in December. Thus, the wines are equally likely to trade 1:2 or 2:1 in December. Most consumers care about the nationality only insofar as it influences which wine is trendy. The only exceptions are a single loyalist American consumer, who only drinks American wine and is indifferent to trendiness, and a single loyalist German consumer, who only drinks German wine and is likewise indifferent to trendiness.
The American loyalist counter-intuitively prefers to hold a German rather than an American wine in November, as it has a 50% chance of being tradeable for 0.5 American wines, and a 50% chance of being tradeable for 2 American wines, and thus has an expected value of 1.25 American wines. (All the consumers in this scenario are risk-neutral). Similarly, the German, if she holds an American wine, can be considered to be in possession of an expected value of 1.25 German wines. In this case, the gains from Siegel's paradox are real, and each loyalist gains utility on average by temporarily trading away from their preferred consumption good, due to the large utility gain should they succeed in the gambit of saving up in hopes of buying multiple bottles of the preferred wine should the price plummet in December.
Analyzing the case of the trendy consumers, who are indifferent to the nationality apart from its trendiness, is more complex. Such a consumer, if in possession of American wine, might fallaciously reason: "I currently have 1 American wine. If I trade for a German wine, I will have an expected value of 1.25 American wines. Therefore, I will be better off on average if I adopt a strategy to temporarily trade away, as the American loyalist did." However, this is similar to the "two envelopes problem", and the gains from Siegel's paradox in this case are illusory. The trendy consumer who uses the American loyalist's strategy is left with a 50% chance of 0.5 bottles of a newly popular trendy American wine, and a 50% chance of 2 bottles of a newly unpopular non-trendy American wine; to the trendy consumer this is not a material improvement over having a 50% chance of a bottle of trendy American wine and a 50% chance of having a bottle of non-trendy American wine. Thus, the trendy consumer has merely broken even, on average. Similarly, the trendy consumer also would not gain utility from adopting the German loyalist's strategy.[4]
Applications
While the wine and the apples are toy examples, the paradox has a real-world application to what currencies investors should choose to hold. Fischer Black concluded from analyses similar to the apple/orange example that when investing overseas, investors should not seek to hedge all their currency risk.[1] Other researchers consider such an analysis simplistic. In many circumstances, Siegel's paradox should indeed drive a rational investor to become more willing to embrace modest currency risk. In many other circumstances, they should not; for example, if the exchange rate uncertainty is due to differing rates of inflation with the imposition of purchasing power parity, then something like the "two envelopes" analysis applies, and there may be no particular reason to embrace currency risk.[4]
Geometric mean and reciprocity functions
A different approach to Siegel's paradox is proposed by K. Mallahi-Karai and P. Safari,[5] where they show that the only possible way to avoid making risk-less money in such future-based currency exchanges is to settle on the (weighted) geometric mean of the future exchange rates, or more generally a product of the weighted geometric mean and a so-called reciprocity function. The weights of the geometric mean depend on the probability of the rates occurring in the future, while the reciprocity function can always be taken to be the unit function. What this implies, for instance, in the case of the apple/orange example above, is that the consumers should trade their products for ${\sqrt {2\cdot {\tfrac {1}{2}}}}=1$ units of the other product to avoid an arbitrage. This method will provide currency traders on both sides with a common exchange rate they can safely agree on.
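A minimal sketch of this weighted geometric mean for the apple/orange example, taking the reciprocity function to be the unit function:

```python
from math import exp, log

# Possible future exchange rates and their probabilities (the weights).
outcomes = {2.0: 0.5, 0.5: 0.5}

geometric_mean = exp(sum(p * log(r) for r, p in outcomes.items()))
arithmetic_mean = sum(p * r for r, p in outcomes.items())
print(geometric_mean, arithmetic_mean)   # 1.0 versus 1.25
```

Settling on the geometric mean (here 1.0) rather than the arithmetic mean (1.25) removes the apparent risk-free gain on both sides.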
References
1. Black, Fischer S. (Jan–Feb 1995). "Universal Hedging: Optimizing Currency Risk and Reward in International Equity Portfolios" (PDF). Financial Analysts Journal. 51: 161–167. doi:10.2469/faj.v51.n1.1872.
2. Beenstock, Michael. "Forward Exchange Rates and Siegel's Paradox." Oxford Economic Papers 37.2 (1985): 298–303.
3. Chu, Kam Hon (October 2005). "Solution to the Siegel Paradox". Open Economies Review. 16 (4): 399–405. doi:10.1007/s11079-005-4742-4. S2CID 155056834.
4. Edlin, Aaron S. "Forward Discount Bias, Nalebuff's Envelope Puzzle, and the Siegel Paradox in Foreign Exchange." Topics in Theoretical Economics 2.1 (2002).
5. Mallahi-Karai, Keivan; Safari, Pedram (August 2018). "Future exchange rates and Siegel's paradox". Global Finance Journal. 37: 168–172. arXiv:1805.03347. doi:10.1016/j.gfj.2018.04.007. S2CID 158217351.
|
Wikipedia
|
Siegel G-function
In mathematics, the Siegel G-functions are a class of functions in transcendental number theory introduced by C. L. Siegel. They satisfy a linear differential equation with polynomial coefficients, and the coefficients of their power series expansion lie in a fixed algebraic number field and have heights of at most exponential growth.
This article is about Siegel G-functions. For the general functions introduced by Cornelius Meijer, see Meijer G-function.
Definition
A Siegel G-function is a function given by an infinite power series
$f(z)=\sum _{n=0}^{\infty }a_{n}z^{n}$
where the coefficients an all belong to the same algebraic number field, K, and with the following two properties.
1. f is the solution to a linear differential equation with coefficients that are polynomials in z;
2. the projective height of the first n coefficients is $O(c^{n})$ for some fixed constant c > 0.
The second condition means the coefficients of f grow no faster than a geometric series. Indeed, the functions can be considered as generalisations of geometric series, whence the name G-function, just as E-functions are generalisations of the exponential function.
References
• Beukers, F. (2001) [1994], "G-function", Encyclopedia of Mathematics, EMS Press
• C. L. Siegel, "Über einige Anwendungen diophantischer Approximationen", Ges. Abhandlungen, I, Springer (1966)
|
Wikipedia
|
Siegel identity
In mathematics, Siegel's identity refers to one of two formulae that are used in the resolution of Diophantine equations.
Statement
The first formula is
${\frac {x_{3}-x_{1}}{x_{2}-x_{1}}}+{\frac {x_{2}-x_{3}}{x_{2}-x_{1}}}=1.$
The second is
${\frac {x_{3}-x_{1}}{x_{2}-x_{1}}}\cdot {\frac {t-x_{2}}{t-x_{3}}}+{\frac {x_{2}-x_{3}}{x_{2}-x_{1}}}\cdot {\frac {t-x_{1}}{t-x_{3}}}=1.$
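Both identities follow by clearing denominators; for instance, in the second one the numerator $(x_{3}-x_{1})(t-x_{2})+(x_{2}-x_{3})(t-x_{1})$ simplifies to $(x_{2}-x_{1})(t-x_{3})$. A quick symbolic check (using sympy, as an illustration):

```python
import sympy as sp

x1, x2, x3, t = sp.symbols("x1 x2 x3 t")

first = (x3 - x1) / (x2 - x1) + (x2 - x3) / (x2 - x1) - 1
second = ((x3 - x1) / (x2 - x1)) * ((t - x2) / (t - x3)) \
       + ((x2 - x3) / (x2 - x1)) * ((t - x1) / (t - x3)) - 1

print(sp.simplify(first), sp.simplify(second))   # both print 0
```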
Application
The identities are used in translating Diophantine problems connected with integral points on hyperelliptic curves into S-unit equations.
See also
• Siegel formula
References
• Baker, Alan (1975). Transcendental Number Theory. Cambridge University Press. p. 40. ISBN 0-521-20461-5. Zbl 0297.10013.
• Baker, Alan; Wüstholz, Gisbert (2007). Logarithmic Forms and Diophantine Geometry. New Mathematical Monographs. Vol. 9. Cambridge University Press. p. 53. ISBN 978-0-521-88268-2. Zbl 1145.11004.
• Kubert, Daniel S.; Lang, Serge (1981). Modular Units. Grundlehren der Mathematischen Wissenschaften. Vol. 244. ISBN 0-387-90517-0.
• Lang, Serge (1978). Elliptic Curves: Diophantine Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 231. Springer-Verlag. ISBN 0-387-08489-4.
• Smart, N. P. (1998). The Algorithmic Resolution of Diophantine Equations. London Mathematical Society Student Texts. Vol. 41. Cambridge University Press. pp. 36–37. ISBN 0-521-64633-2.
|
Wikipedia
|
Siegel's lemma
In mathematics, specifically in transcendental number theory and Diophantine approximation, Siegel's lemma refers to bounds on the solutions of linear equations obtained by the construction of auxiliary functions. The existence of these polynomials was proven by Axel Thue;[1] Thue's proof used Dirichlet's box principle. Carl Ludwig Siegel published his lemma in 1929.[2] It is a pure existence theorem for a system of linear equations.
Siegel's lemma has been refined in recent years to produce sharper bounds on the estimates given by the lemma.[3]
Statement
Suppose we are given a system of M linear equations in N unknowns such that N > M, say
$a_{11}X_{1}+\cdots +a_{1N}X_{N}=0$
$\cdots $
$a_{M1}X_{1}+\cdots +a_{MN}X_{N}=0$
where the coefficients are rational integers, not all 0, and bounded by B. The system then has a solution
$(X_{1},X_{2},\dots ,X_{N})$
with the Xs all rational integers, not all 0, and bounded by
$(NB)^{M/(N-M)}.$[4]
Bombieri & Vaaler (1983) gave the following sharper bound for the X's:
$\max |X_{j}|\,\leq \left(D^{-1}{\sqrt {\det(AA^{T})}}\right)^{\!1/(N-M)}$
where D is the greatest common divisor of the M × M minors of the matrix A, and AT is its transpose. Their proof involved replacing the pigeonhole principle by techniques from the geometry of numbers.
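As an illustration of the statement (not of the proofs), a brute-force search for a small solution within the basic bound; the example system and function name are made up, and the search is exponential in N, so it is only sensible for tiny systems:

```python
from itertools import product
from math import floor

def siegel_search(A, B):
    """A is an M x N integer matrix (N > M) with entries bounded by B.
    Search for a nonzero integer solution of A x = 0 with
    max |x_j| <= (N*B)**(M/(N-M)), as guaranteed by Siegel's lemma."""
    M, N = len(A), len(A[0])
    bound = floor((N * B) ** (M / (N - M)))
    for x in product(range(-bound, bound + 1), repeat=N):
        if any(x) and all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A):
            return x
    return None

# One equation in three unknowns, 3*x1 + 7*x2 - 2*x3 = 0, with B = 7:
# the bound is floor(21**(1/2)) = 4, and a solution such as (-4, 2, 1) is found.
print(siegel_search([[3, 7, -2]], 7))
```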
See also
• Diophantine approximation
References
1. Thue, Axel (1909). "Über Annäherungswerte algebraischer Zahlen". J. Reine Angew. Math. 1909 (135): 284–305. doi:10.1515/crll.1909.135.284. S2CID 125903243.
2. Siegel, Carl Ludwig (1929). "Über einige Anwendungen diophantischer Approximationen". Abh. Preuss. Akad. Wiss. Phys. Math. Kl.: 41–69., reprinted in Gesammelte Abhandlungen, volume 1; the lemma is stated on page 213
3. Bombieri, E.; Mueller, J. (1983). "On effective measures of irrationality for ${\scriptscriptstyle {\sqrt[{r}]{a/b}}}$ and related numbers". Journal für die reine und angewandte Mathematik. 342: 173–196.
4. (Hindry & Silverman 2000) Lemma D.4.1, page 316.
• Bombieri, E.; Vaaler, J. (1983). "On Siegel's lemma". Inventiones Mathematicae. 73 (1): 11–32. Bibcode:1983InMat..73...11B. doi:10.1007/BF01393823. S2CID 121274024.
• Hindry, Marc; Silverman, Joseph H. (2000). Diophantine geometry. Graduate Texts in Mathematics. Vol. 201. Berlin, New York: Springer-Verlag. ISBN 978-0-387-98981-5. MR 1745599.
• Wolfgang M. Schmidt. Diophantine approximation. Lecture Notes in Mathematics 785. Springer. (1980 [1996 with minor corrections]) (Pages 125-128 and 283-285)
• Wolfgang M. Schmidt. "Chapter I: Siegel's Lemma and Heights" (pages 1–33). Diophantine approximations and Diophantine equations, Lecture Notes in Mathematics, Springer Verlag 2000.
|
Wikipedia
|
Siegel operator
In mathematics, the Siegel operator is a linear map from (level 1) Siegel modular forms of degree d to Siegel modular forms of degree d − 1, generalizing taking the constant term of a modular form. The kernel is the space of Siegel cusp forms of degree d.
References
• Klingen, Helmut (2003), Introductory Lectures on Siegel Modular Forms, Cambridge University Press, ISBN 0-521-35052-2
• Weissauer, Rainer (1986), Stabile Modulformen und Eisensteinreihen, Lecture Notes in Mathematics, vol. 1219, Berlin: Springer-Verlag, ISBN 3-540-17181-9, MR 0923958
|
Wikipedia
|
Siegel parabolic subgroup
In mathematics, the Siegel parabolic subgroup, named after Carl Ludwig Siegel, is the parabolic subgroup of the symplectic group with abelian radical, given by the matrices of the symplectic group whose lower left quadrant is 0 (for the standard symplectic form).
|
Wikipedia
|
Siegel's theorem on integral points
In mathematics, Siegel's theorem on integral points states that for a smooth algebraic curve C of genus g defined over a number field K, presented in affine space in a given coordinate system, there are only finitely many points on C with coordinates in the ring of integers O of K, provided g > 0.
The theorem was first proved in 1929 by Carl Ludwig Siegel and was the first major result on Diophantine equations that depended only on the genus and not any special algebraic form of the equations. For g > 1 it was superseded by Faltings's theorem in 1983.
History
In 1929, Siegel proved the theorem by combining a version of the Thue–Siegel–Roth theorem, from diophantine approximation, with the Mordell–Weil theorem from diophantine geometry (required in Weil's version, to apply to the Jacobian variety of C).
In 2002, Umberto Zannier and Pietro Corvaja gave a new proof by using a new method based on the subspace theorem.[1]
Effective versions
Siegel's result was ineffective (see effective results in number theory), since Thue's method in diophantine approximation also is ineffective in describing possible very good rational approximations to algebraic numbers. Effective results in some cases derive from Baker's method.
See also
• Diophantine geometry
References
1. Corvaja, P. and Zannier, U. "A subspace theorem approach to integral points on curves", Compte Rendu Acad. Sci., 334, 2002, pp. 267–271 doi:10.1016/S1631-073X(02)02240-9
• Bombieri, Enrico; Gubler, Walter (2006). Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge University Press. ISBN 978-0-521-71229-3. Zbl 1130.11034.
• Lang, Serge (1978). Elliptic curves: Diophantine analysis. Grundlehren der mathematischen Wissenschaften. Vol. 231. pp. 128–153. ISBN 3-540-08489-4. Zbl 0388.10001.
• Siegel, Carl Ludwig (1929). "Über einige Anwendungen diophantischer Approximationen". Sitzungsberichte der Preussischen Akademie der Wissenschaften (in German).
|
Wikipedia
|
Siegel theta series
In mathematics, a Siegel theta series is a Siegel modular form associated to a positive definite lattice, generalizing the 1-variable theta function of a lattice.
Definition
Suppose that L is a positive definite lattice. The Siegel theta series of degree g is defined by
$\Theta _{L}^{g}(T)=\sum _{\lambda \in L^{g}}\exp(\pi iTr(\lambda T\lambda ^{t}))$
where T is an element of the Siegel upper half plane of degree g.
This is a Siegel modular form of degree d and weight dim(L)/2 for some subgroup of the Siegel modular group. If the lattice L is even and unimodular then this is a Siegel modular form for the full Siegel modular group.
When the degree is 1 this is just the usual theta function of a lattice.
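In the degree-1 case the coefficients simply count lattice vectors of each squared norm. A brute-force sketch for the lattice Z^n, where bounding each coordinate by the norm cutoff is exact (the lattice and cutoff are illustrative choices):

```python
from itertools import product
from math import isqrt

def theta_coefficients(dim, max_norm):
    """Number of vectors of Z**dim of each squared norm up to max_norm,
    i.e. the first coefficients of the degree-1 theta series of Z**dim."""
    counts = [0] * (max_norm + 1)
    bound = isqrt(max_norm)
    for v in product(range(-bound, bound + 1), repeat=dim):
        norm = sum(x * x for x in v)
        if norm <= max_norm:
            counts[norm] += 1
    return counts

# For Z^2 these are the sums-of-two-squares counts: 1, 4, 4, 0, 4, 8, 0, 0, 4
print(theta_coefficients(2, 8))
```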
References
• Freitag, E. (1983), Siegelsche Modulfunktionen, Grundlehren der Mathematischen Wissenschaften, vol. 254, Berlin: Springer-Verlag, ISBN 3-540-11661-3, MR 0871067
|
Wikipedia
|
Siegel upper half-space
In mathematics, the Siegel upper half-space of degree g (or genus g) (also called the Siegel upper half-plane) is the set of g × g symmetric matrices over the complex numbers whose imaginary part is positive definite. It was introduced by Siegel (1939). It is the symmetric space associated to the symplectic group Sp(2g, R).
The Siegel upper half-space has properties as a complex manifold that generalize the properties of the upper half-plane, which is the Siegel upper half-space in the special case g = 1. The group of automorphisms preserving the complex structure of the manifold is isomorphic to the symplectic group Sp(2g, R). Just as the two-dimensional hyperbolic metric is the unique (up to scaling) metric on the upper half-plane whose isometry group is the complex automorphism group SL(2, R) = Sp(2, R), the Siegel upper half-space has only one metric up to scaling whose isometry group is Sp(2g, R). Writing a generic matrix Z in the Siegel upper half-space in terms of its real and imaginary parts as Z = X + iY, all metrics with isometry group Sp(2g, R) are proportional to
$ds^{2}={\text{tr}}(Y^{-1}dZY^{-1}d{\bar {Z}}).$
The Siegel upper half-plane can be identified with the set of tame almost complex structures compatible with a symplectic structure $\omega $, on the underlying $2n$ dimensional real vector space $V$, that is, the set of $J\in Hom(V)$ such that $J^{2}=-1$ and $\omega (Jv,v)>0$ for all vectors $v\neq 0$.[1]
See also
• Moduli of abelian varieties
• Paramodular group, a generalization of the Siegel modular group
• Siegel domain, a generalization of the Siegel upper half space
• Siegel modular form, a type of automorphic form defined on the Siegel upper half-space
• Siegel modular variety, a moduli space constructed as a quotient of the Siegel upper half-space
References
1. Bowman
• Bowman, Joshua P. "Some Elementary Results on the Siegel Half-plane" (PDF).
• van der Geer, Gerard (2008), "Siegel modular forms and their applications", in Ranestad, Kristian (ed.), The 1-2-3 of modular forms, Universitext, Berlin: Springer-Verlag, pp. 181–245, doi:10.1007/978-3-540-74119-0, ISBN 978-3-540-74117-6, MR 2409679
• Nielsen, Frank (2020), "The Siegel–Klein Disk: Hilbert Geometry of the Siegel Disk Domain", Entropy, 22 (9): 1019, arXiv:2004.08160, doi:10.3390/e22091019, PMC 7597112, PMID 33286788
• Siegel, Carl Ludwig (1939), "Einführung in die Theorie der Modulfunktionen n-ten Grades", Mathematische Annalen, 116: 617–657, doi:10.1007/BF01597381, ISSN 0025-5831, MR 0001251, S2CID 124337559
|
Wikipedia
|
Siegel–Walfisz theorem
In analytic number theory, the Siegel–Walfisz theorem was obtained by Arnold Walfisz[1] as an application of a theorem by Carl Ludwig Siegel[2] to primes in arithmetic progressions. It is a refinement both of the prime number theorem and of Dirichlet's theorem on primes in arithmetic progressions.
Statement
Define
$\psi (x;q,a)=\sum _{n\,\leq \,x \atop n\,\equiv \,a\!{\pmod {\!q}}}\Lambda (n),$
where $\Lambda $ denotes the von Mangoldt function, and let φ denote Euler's totient function.
Then the theorem states that given any real number N there exists a positive constant $C_{N}$ depending only on N such that
$\psi (x;q,a)={\frac {x}{\varphi (q)}}+O\left(x\exp \left(-C_{N}(\log x)^{\frac {1}{2}}\right)\right),$
whenever (a, q) = 1 and
$q\leq (\log x)^{N}.$
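A small numerical illustration of the main term x/φ(q) (pure Python with trial-division factoring; the values of x, q and a are arbitrary and far too small for the error term to be visible in any sharp sense):

```python
from math import log, gcd

def mangoldt(n):
    """Von Mangoldt function: log p if n is a prime power p**k, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0

def psi(x, q, a):
    return sum(mangoldt(n) for n in range(1, x + 1) if n % q == a % q)

def phi(q):
    return sum(1 for k in range(1, q + 1) if gcd(k, q) == 1)

x, q, a = 10_000, 7, 3
print(psi(x, q, a), x / phi(q))   # the two values should be close
```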
Remarks
The constant $C_{N}$ is not effectively computable because Siegel's theorem is ineffective.
From the theorem we can deduce the following bound regarding the prime number theorem for arithmetic progressions: If, for (a, q) = 1, by $\pi (x;q,a)$ we denote the number of primes less than or equal to x which are congruent to a mod q, then
$\pi (x;q,a)={\frac {{\rm {Li}}(x)}{\varphi (q)}}+O\left(x\exp \left(-{\frac {C_{N}}{2}}(\log x)^{\frac {1}{2}}\right)\right),$
where N, a, q, $C_{N}$ and φ are as in the theorem, and Li denotes the logarithmic integral.
References
1. Walfisz, Arnold (1936). "Zur additiven Zahlentheorie. II" [On additive number theory. II]. Mathematische Zeitschrift (in German). 40 (1): 592–607. doi:10.1007/BF01218882. MR 1545584.
2. Siegel, Carl Ludwig (1935). "Über die Classenzahl quadratischer Zahlkörper" [On the class numbers of quadratic fields]. Acta Arithmetica (in German). 1 (1): 83–86.
|
Wikipedia
|
Sierpiński's constant
Sierpiński's constant is a mathematical constant usually denoted as K. One way of defining it is as the following limit:
$K=\lim _{n\to \infty }\left[\sum _{k=1}^{n}{r_{2}(k) \over k}-\pi \ln n\right]$
where $r_{2}(k)$ is the number of representations of k as a sum of the form $a^{2}+b^{2}$ for integers a and b.
It can be given in closed form as:
${\begin{aligned}K&=\pi \left(2\ln 2+3\ln \pi +2\gamma -4\ln \Gamma \left({\tfrac {1}{4}}\right)\right)\\&=\pi \ln \left({\frac {4\pi ^{3}e^{2\gamma }}{\Gamma \left({\tfrac {1}{4}}\right)^{4}}}\right)\\&=\pi \ln \left({\frac {e^{2\gamma }}{2G^{2}}}\right)\\&=2.584981759579253217065893587383\dots \end{aligned}}$
where $G$ is Gauss's constant and $\gamma $ is the Euler-Mascheroni constant.
Equivalently, let $r_{2}(k)$[1] denote the number of representations of $k$ as a sum of two squares; then the summatory function[2] of $r_{2}(k)/k$ has the asymptotic[3] expansion
$\sum _{k=1}^{n}{r_{2}(k) \over k}=K+\pi \ln n+O(1/{\sqrt {n}}),$
where $K=2.5849817596\dots $ is Sierpiński's constant.
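A brute-force numerical sketch of this partial sum (the cutoff is arbitrary, and convergence is slow, consistent with the error term):

```python
from math import pi, log, isqrt

def r2(k):
    """Number of pairs (a, b) of integers with a*a + b*b == k."""
    count = 0
    for a in range(-isqrt(k), isqrt(k) + 1):
        b_squared = k - a * a
        b = isqrt(b_squared)
        if b * b == b_squared:
            count += 2 if b > 0 else 1
    return count

n = 20_000
partial_sum = sum(r2(k) / k for k in range(1, n + 1))
print(partial_sum - pi * log(n))   # slowly approaches K = 2.58498...
```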
See also
• Wacław Sierpiński
External links
• http://www.plouffe.fr/simon/constants/sierpinski.txt - Sierpiński's constant up to 2000th decimal digit.
• Weisstein, Eric W. "Sierpinski Constant". MathWorld.
• OEIS sequence A062089 (Decimal expansion of Sierpiński's constant)
• https://archive.lib.msu.edu/crcmath/math/math/s/s276.htm
References
1. "r(n)". archive.lib.msu.edu. Retrieved 2021-11-30.
2. "Summatory Function". archive.lib.msu.edu. Retrieved 2021-11-30.
3. "Asymptotic". archive.lib.msu.edu. Retrieved 2021-11-30.
|
Wikipedia
|
Sierpiński triangle
The Sierpiński triangle (sometimes spelled Sierpinski), also called the Sierpiński gasket or Sierpiński sieve, is a fractal attractive fixed set with the overall shape of an equilateral triangle, subdivided recursively into smaller equilateral triangles. Originally constructed as a curve, this is one of the basic examples of self-similar sets—that is, it is a mathematically generated pattern that is reproducible at any magnification or reduction. It is named after the Polish mathematician Wacław Sierpiński, but appeared as a decorative pattern many centuries before the work of Sierpiński.
Constructions
There are many different ways of constructing the Sierpinski triangle.
Removing triangles
The Sierpinski triangle may be constructed from an equilateral triangle by repeated removal of triangular subsets:
1. Start with an equilateral triangle.
2. Subdivide it into four smaller congruent equilateral triangles and remove the central triangle.
3. Repeat step 2 with each of the remaining smaller triangles infinitely.
Each removed triangle (a trema) is topologically an open set.[1] This process of recursively removing triangles is an example of a finite subdivision rule.
Shrinking and duplication
The same sequence of shapes, converging to the Sierpiński triangle, can alternatively be generated by the following steps:
1. Start with any triangle in a plane (any closed, bounded region in the plane will actually work). The canonical Sierpiński triangle uses an equilateral triangle with a base parallel to the horizontal axis (first image).
2. Shrink the triangle to 1/2 height and 1/2 width, make three copies, and position the three shrunken triangles so that each triangle touches the two other triangles at a corner (image 2). Note the emergence of the central hole—because the three shrunken triangles can between them cover only 3/4 of the area of the original. (Holes are an important feature of Sierpinski's triangle.)
3. Repeat step 2 with each of the smaller triangles (image 3 and so on).
Note that this infinite process is not dependent upon the starting shape being a triangle—it is just clearer that way. The first few steps starting, for example, from a square also tend towards a Sierpinski triangle. Michael Barnsley used an image of a fish to illustrate this in his paper "V-variable fractals and superfractals."[2][3]
The actual fractal is what would be obtained after an infinite number of iterations. More formally, one describes it in terms of functions on closed sets of points. If we let dA denote the dilation by a factor of 1/2 about a point A, then the Sierpiński triangle with corners A, B, and C is the fixed set of the transformation dA ∪ dB ∪ dC.
This is an attractive fixed set, so that when the operation is applied to any other set repeatedly, the images converge on the Sierpiński triangle. This is what is happening with the triangle above, but any other set would suffice.
Chaos game
If one takes a point and applies each of the transformations dA, dB, and dC to it randomly, the resulting points will be dense in the Sierpiński triangle, so the following algorithm will again generate arbitrarily close approximations to it:[4]
Start by labeling p1, p2 and p3 as the corners of the Sierpinski triangle, and pick a random point v1. Set vn+1 = 1/2(vn + prn), where rn is a random number 1, 2 or 3. Draw the points v1 to v∞. If the first point v1 is a point on the Sierpiński triangle, then all the points vn lie on the Sierpiński triangle. If v1 lies within the perimeter of the triangle but is not a point of the Sierpiński triangle, then none of the points vn lie on the Sierpiński triangle; however, they converge to it. If v1 is outside the triangle, the only way a point vn will land exactly on the triangle is if vn lies on what would be part of the triangle if the triangle were infinitely large.
Or more simply:
1. Take three points in a plane to form a triangle.
2. Randomly select any point inside the triangle and consider that your current position.
3. Randomly select any one of the three vertex points.
4. Move half the distance from your current position to the selected vertex.
5. Plot the current position.
6. Repeat from step 3.
This method is also called the chaos game, and is an example of an iterated function system. You can start from any point outside or inside the triangle, and it would eventually form the Sierpiński Gasket with a few leftover points (if the starting point lies on the outline of the triangle, there are no leftover points). With pencil and paper, a brief outline is formed after placing approximately one hundred points, and detail begins to appear after a few hundred.
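A minimal Python sketch of the chaos game as described above (the vertex coordinates, starting point and number of points are arbitrary choices, and plotting is optional):

```python
import random

# Vertices of an equilateral triangle (any three non-collinear points work).
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]

def chaos_game(n_points, start=(0.25, 0.25)):
    """Repeatedly move half the distance toward a randomly chosen vertex."""
    x, y = start
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

points = chaos_game(50_000)

# Optional plot, if matplotlib is available:
try:
    import matplotlib.pyplot as plt
    xs, ys = zip(*points)
    plt.scatter(xs, ys, s=0.1)
    plt.gca().set_aspect("equal")
    plt.show()
except ImportError:
    print(points[:5])
```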
Arrowhead construction of Sierpiński gasket
Another construction for the Sierpinski gasket shows that it can be constructed as a curve in the plane. It is formed by a process of repeated modification of simpler curves, analogous to the construction of the Koch snowflake:
1. Start with a single line segment in the plane
2. Repeatedly replace each line segment of the curve with three shorter segments, forming 120° angles at each junction between two consecutive segments, with the first and last segments of the curve either parallel to the original line segment or forming a 60° angle with it.
At every iteration, this construction gives a continuous curve. In the limit, these approach a curve that traces out the Sierpinski triangle by a single continuous directed (infinitely wiggly) path, which is called the Sierpinski arrowhead.[6] In fact, the aim of the original article by Sierpinski of 1915 was to show an example of a curve (a Cantorian curve), as the title of the article itself declares.[7][8]
Cellular automata
The Sierpinski triangle also appears in certain cellular automata (such as Rule 90), including those relating to Conway's Game of Life. For instance, the Life-like cellular automaton B1/S12 when applied to a single cell will generate four approximations of the Sierpinski triangle.[9] A very long, one cell–thick line in standard life will create two mirrored Sierpiński triangles. The time-space diagram of a replicator pattern in a cellular automaton also often resembles a Sierpiński triangle, such as that of the common replicator in HighLife.[10] The Sierpinski triangle can also be found in the Ulam-Warburton automaton and the Hex-Ulam-Warburton automaton.[11]
Pascal's triangle
If one takes Pascal's triangle with $2^{n}$ rows and colors the even numbers white, and the odd numbers black, the result is an approximation to the Sierpiński triangle. More precisely, the limit as $n$ approaches infinity of this parity-colored $2^{n}$-row Pascal triangle is the Sierpinski triangle.[12]
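This parity pattern is easy to generate directly; a minimal sketch (the number of rows and the drawing characters are arbitrary choices):

```python
n = 5
rows = 2 ** n          # print 2**n rows of Pascal's triangle modulo 2
row = [1]
for r in range(rows):
    # odd entries are drawn as '*', even entries as spaces
    print(" " * (rows - r) + " ".join("*" if c % 2 else " " for c in row))
    row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
```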
Towers of Hanoi
The Towers of Hanoi puzzle involves moving disks of different sizes between three pegs, maintaining the property that no disk is ever placed on top of a smaller disk. The states of an $n$-disk puzzle, and the allowable moves from one state to another, form an undirected graph, the Hanoi graph, that can be represented geometrically as the intersection graph of the set of triangles remaining after the $n$th step in the construction of the Sierpinski triangle. Thus, in the limit as $n$ goes to infinity, this sequence of graphs can be interpreted as a discrete analogue of the Sierpinski triangle.[13]
Properties
For integer number of dimensions $d$, when doubling a side of an object, $2^{d}$ copies of it are created, i.e. 2 copies for 1-dimensional object, 4 copies for 2-dimensional object and 8 copies for 3-dimensional object. For the Sierpiński triangle, doubling its side creates 3 copies of itself. Thus the Sierpiński triangle has Hausdorff dimension ${\tfrac {\log 3}{\log 2}}\approx 1.585$, which follows from solving $2^{d}=3$ for $d$.[14]
The area of a Sierpiński triangle is zero (in Lebesgue measure). The area remaining after each iteration is ${\tfrac {3}{4}}$ of the area from the previous iteration, and an infinite number of iterations results in an area approaching zero.[15]
The points of a Sierpinski triangle have a simple characterization in barycentric coordinates.[16] If a point has barycentric coordinates $(0.u_{1}u_{2}u_{3}\dots ,0.v_{1}v_{2}v_{3}\dots ,0.w_{1}w_{2}w_{3}\dots )$, expressed as binary numerals, then the point is in Sierpiński's triangle if and only if $u_{i}+v_{i}+w_{i}=1$ for all $i$.
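A small sketch of this digit test, taking finite prefixes of the binary expansions as input. Dyadic coordinates have two binary expansions, so the condition should be read as holding for a suitable choice of expansions; the examples below are illustrative:

```python
def in_sierpinski(u_bits, v_bits, w_bits):
    """Check u_i + v_i + w_i = 1 for the given prefixes of the expansions."""
    return all(u + v + w == 1 for u, v, w in zip(u_bits, v_bits, w_bits))

# Vertex (1, 0, 0), writing 1 = 0.111... in binary: every position sums to 1.
print(in_sierpinski([1] * 16, [0] * 16, [0] * 16))    # True
# Centroid (1/3, 1/3, 1/3), with 1/3 = 0.010101...: positions sum to 0 or 3.
third = [0, 1] * 8
print(in_sierpinski(third, third, third))             # False
```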
Generalization to other moduli
A generalization of the Sierpiński triangle can also be generated using Pascal's triangle if a different modulus $P$ is used. Iteration $n$ can be generated by taking a Pascal's triangle with $P^{n}$ rows and coloring numbers by their value modulo $P$. As $n$ approaches infinity, a fractal is generated.
The same fractal can be achieved by dividing a triangle into a tessellation of $P^{2}$ similar triangles and removing the triangles that are upside-down from the original, then iterating this step with each smaller triangle.
Conversely, the fractal can also be generated by beginning with a triangle and duplicating it and arranging ${\tfrac {P(P+1)}{2}}$ of the new figures in the same orientation into a larger similar triangle with the vertices of the previous figures touching, then iterating that step.[17]
Analogues in higher dimensions
The Sierpinski tetrahedron or tetrix is the three-dimensional analogue of the Sierpiński triangle, formed by repeatedly shrinking a regular tetrahedron to one half its original height, putting together four copies of this tetrahedron with corners touching, and then repeating the process.
A tetrix constructed from an initial tetrahedron of side-length $L$ has the property that the total surface area remains constant with each iteration. The initial surface area of the (iteration-0) tetrahedron of side-length $L$ is $L^{2}{\sqrt {3}}$. The next iteration consists of four copies with side length ${\tfrac {L}{2}}$, so the total area is $ 4{\bigl (}{\tfrac {L}{2}}{\bigr )}^{2}{\sqrt {3}}=L^{2}{\sqrt {3}}$ again. Subsequent iterations again quadruple the number of copies and halve the side length, preserving the overall area. Meanwhile, the volume of the construction is halved at every step and therefore approaches zero. The limit of this process has neither volume nor surface but, like the Sierpinski gasket, is an intricately connected curve. Its Hausdorff dimension is $ {\tfrac {\log 4}{\log 2}}=2$; here "log" denotes the natural logarithm, the numerator is the logarithm of the number of copies of the shape formed from each copy of the previous iteration, and the denominator is the logarithm of the factor by which these copies are scaled down from the previous iteration. If all points are projected onto a plane that is parallel to two of the outer edges, they exactly fill a square of side length ${\tfrac {L}{\sqrt {2}}}$ without overlap.[18]
History
Wacław Sierpiński described the Sierpiński triangle in 1915. However, similar patterns appear already as a common motif of 13th-century Cosmatesque inlay stonework.[19]
The Apollonian gasket was first described by Apollonius of Perga (3rd century BC) and further analyzed by Gottfried Leibniz (17th century), and is a curved precursor of the 20th-century Sierpiński triangle.[20]
Etymology
The usage of the word "gasket" to refer to the Sierpiński triangle refers to gaskets such as are found in motors, and which sometimes feature a series of holes of decreasing size, similar to the fractal; this usage was coined by Benoit Mandelbrot, who thought the fractal looked similar to "the part that prevents leaks in motors".[21]
See also
• Apollonian gasket, a set of mutually tangent circles with the same combinatorial structure as the Sierpinski triangle
• List of fractals by Hausdorff dimension
• Sierpiński carpet, another fractal named after Sierpiński and formed by repeatedly removing squares from a larger square
• Triforce, a relic in the Legend of Zelda series
References
1. ""Sierpinski Gasket by Trema Removal"".
2. Michael Barnsley; et al. (2003), "V-variable fractals and superfractals", arXiv:math/0312314
3. NOVA (public television program). The Strange New Science of Chaos (episode). Public television station WGBH Boston. Aired 31 January 1989.
4. Feldman, David P. (2012), "17.4 The chaos game", Chaos and Fractals: An Elementary Introduction, Oxford University Press, pp. 178–180, ISBN 9780199566440.
5. Peitgen, Heinz-Otto; Jürgens, Hartmut; Saupe, Dietmar; Maletsky, Evan; Perciante, Terry; and Yunker, Lee (1991). Fractals for the Classroom: Strategic Activities Volume One, p.39. Springer-Verlag, New York. ISBN 0-387-97346-X and ISBN 3-540-97346-X.
6. Prusinkiewicz, P. (1986), "Graphical applications of L-systems" (PDF), Proceedings of Graphics Interface '86 / Vision Interface '86, pp. 247–253.
7. Sierpinski, Waclaw (1915). "Sur une courbe dont tout point est un point de ramification". Compt. Rend. Acad. Sci. Paris. 160: 302–305.
8. Brunori, Paola; Magrone, Paola; Lalli, Laura Tedeschini (2018-07-07), "Imperial Porphiry and Golden Leaf: Sierpinski Triangle in a Medieval Roman Cloister", Advances in Intelligent Systems and Computing, Springer International Publishing, pp. 595–609, doi:10.1007/978-3-319-95588-9_49, ISBN 9783319955872, S2CID 125313277
9. Rumpf, Thomas (2010), "Conway's Game of Life accelerated with OpenCL" (PDF), Proceedings of the Eleventh International Conference on Membrane Computing (CMC 11), pp. 459–462.
10. Bilotta, Eleonora; Pantano, Pietro (Summer 2005), "Emergent patterning phenomena in 2D cellular automata", Artificial Life, 11 (3): 339–362, doi:10.1162/1064546054407167, PMID 16053574, S2CID 7842605.
11. Khovanova, Tanya; Nie, Eric; Puranik, Alok (2014), "The Sierpinski Triangle and the Ulam-Warburton Automaton", Math Horizons, 23 (1): 5–9, arXiv:1408.5937, doi:10.4169/mathhorizons.23.1.5, S2CID 125503155
12. Stewart, Ian (2006), How to Cut a Cake: And other mathematical conundrums, Oxford University Press, p. 145, ISBN 9780191500718.
13. Romik, Dan (2006), "Shortest paths in the Tower of Hanoi graph and finite automata", SIAM Journal on Discrete Mathematics, 20 (3): 610–62, arXiv:math.CO/0310109, doi:10.1137/050628660, MR 2272218, S2CID 8342396.
14. Falconer, Kenneth (1990). Fractal geometry: mathematical foundations and applications. Chichester: John Wiley. p. 120. ISBN 978-0-471-92287-2. Zbl 0689.28003.
15. Helmberg, Gilbert (2007), Getting Acquainted with Fractals, Walter de Gruyter, p. 41, ISBN 9783110190922.
16. "Many ways to form the Sierpinski gasket".
17. Shannon, Kathleen M.; Bardzell, Michael J. (November 2003). "Patterns in Pascal's Triangle – with a Twist". Convergence. Mathematical Association of America. Retrieved 29 March 2015.
18. Jones, Huw; Campa, Aurelio (1993), "Abstract and natural forms from iterated function systems", in Thalmann, N. M.; Thalmann, D. (eds.), Communicating with Virtual Worlds, CGS CG International Series, Tokyo: Springer, pp. 332–344, doi:10.1007/978-4-431-68456-5_27
19. Williams, Kim (December 1997). Stewart, Ian (ed.). "The pavements of the Cosmati". The Mathematical Tourist. The Mathematical Intelligencer. 19 (1): 41–45. doi:10.1007/bf03024339. S2CID 189885713.
20. Mandelbrot B (1983). The Fractal Geometry of Nature. New York: W. H. Freeman. p. 170. ISBN 978-0-7167-1186-5.
Aste T, Weaire D (2008). The Pursuit of Perfect Packing (2nd ed.). New York: Taylor and Francis. pp. 131–138. ISBN 978-1-4200-6817-7.
21. Benedetto, John; Wojciech, Czaja. Integration and Modern Analysis. p. 408.
External links
• "Sierpinski gasket", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Weisstein, Eric W. "Sierpinski Sieve". MathWorld.
• Rothemund, Paul W. K.; Papadakis, Nick; Winfree, Erik (2004). "Algorithmic Self-Assembly of DNA Sierpinski Triangles". PLOS Biology. 2 (12): e424. doi:10.1371/journal.pbio.0020424. PMC 534809. PMID 15583715.
• Sierpinski Gasket by Trema Removal at cut-the-knot
• Sierpinski Gasket and Tower of Hanoi at cut-the-knot
• Real-time GPU generated Sierpinski Triangle in 3D
• Pythagorean triangles, Waclaw Sierpinski, Courier Corporation, 2003
• A067771 Number of vertices in Sierpiński triangle of order n. at OEIS
• Interactive version of the chaos game
|
Wikipedia
|
Sierpiński carpet
The Sierpiński carpet is a plane fractal first described by Wacław Sierpiński in 1916. The carpet is a generalization of the Cantor set to two dimensions; another is Cantor dust.
The technique of subdividing a shape into smaller copies of itself, removing one or more copies, and continuing recursively can be extended to other shapes. For instance, subdividing an equilateral triangle into four equilateral triangles, removing the middle triangle, and recursing leads to the Sierpiński triangle. In three dimensions, a similar construction based on cubes is known as the Menger sponge.
Construction
The construction of the Sierpiński carpet begins with a square. The square is cut into 9 congruent subsquares in a 3-by-3 grid, and the central subsquare is removed. The same procedure is then applied recursively to the remaining 8 subsquares, ad infinitum. It can be realised as the set of points in the unit square whose coordinates written in base three do not both have a digit '1' in the same position, using, where necessary, the repeating (infinite) representation $0.1222\dots =0.2$.[1]
The process of recursively removing squares is an example of a finite subdivision rule.
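A minimal sketch of the base-three membership test at a finite resolution (the depth and drawing characters are arbitrary choices):

```python
def in_carpet(i, j, depth):
    """Cell (i, j) of the 3**depth by 3**depth grid survives iff no ternary
    digit position has the digit 1 in both coordinates."""
    for _ in range(depth):
        if i % 3 == 1 and j % 3 == 1:
            return False
        i, j = i // 3, j // 3
    return True

depth = 3
size = 3 ** depth
for j in range(size):
    print("".join("#" if in_carpet(i, j, depth) else " " for i in range(size)))
```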
Properties
The area of the carpet is zero (in standard Lebesgue measure).
Proof: Denote as ai the area of iteration i. Then ai + 1 = 8/9ai. So ai = (8/9)i, which tends to 0 as i goes to infinity.
The interior of the carpet is empty.
Proof: Suppose by contradiction that there is a point P in the interior of the carpet. Then there is a square centered at P which is entirely contained in the carpet. This square contains a smaller square whose coordinates are multiples of 1/3k for some k. But, if this square has not been previously removed, it must have been holed in iteration k + 1, so it cannot be contained in the carpet – a contradiction.
The Hausdorff dimension of the carpet is ${\frac {\log 8}{\log 3}}\approx 1.8928$.[2]
Sierpiński demonstrated that his carpet is a universal plane curve.[3] That is: the Sierpinski carpet is a compact subset of the plane with Lebesgue covering dimension 1, and every subset of the plane with these properties is homeomorphic to some subset of the Sierpiński carpet.
This "universality" of the Sierpiński carpet is not a true universal property in the sense of category theory: it does not uniquely characterize this space up to homeomorphism. For example, the disjoint union of a Sierpiński carpet and a circle is also a universal plane curve. However, in 1958 Gordon Whyburn[4] uniquely characterized the Sierpiński carpet as follows: any curve that is locally connected and has no 'local cut-points' is homeomorphic to the Sierpinski carpet. Here a local cut-point is a point p for which some connected neighborhood U of p has the property that U − {p} is not connected. So, for example, any point of the circle is a local cut point.
In the same paper Whyburn gave another characterization of the Sierpiński carpet. Recall that a continuum is a nonempty connected compact metric space. Suppose X is a continuum embedded in the plane. Suppose its complement in the plane has countably many connected components C1, C2, C3, ... and suppose:
• the diameter of Ci goes to zero as i → ∞;
• the boundary of Ci and the boundary of Cj are disjoint if i ≠ j;
• the boundary of Ci is a simple closed curve for each i;
• the union of the boundaries of the sets Ci is dense in X.
Then X is homeomorphic to the Sierpiński carpet.
Brownian motion on the Sierpiński carpet
The topic of Brownian motion on the Sierpiński carpet has attracted interest in recent years.[5] Martin Barlow and Richard Bass have shown that a random walk on the Sierpiński carpet diffuses at a slower rate than an unrestricted random walk in the plane. The latter reaches a mean distance proportional to ${\sqrt {n}}$ after n steps, but the random walk on the discrete Sierpiński carpet reaches only a mean distance proportional to ${\sqrt[{\beta }]{n}}$ for some β > 2. They also showed that this random walk satisfies stronger large deviation inequalities (so called "sub-Gaussian inequalities") and that it satisfies the elliptic Harnack inequality without satisfying the parabolic one. The existence of such an example was an open problem for many years.
Wallis sieve
A variation of the Sierpiński carpet, called the Wallis sieve, starts in the same way, by subdividing the unit square into nine smaller squares and removing the middle one. At the next level of subdivision, it subdivides each of the remaining squares into 25 smaller squares and removes the middle one, and it continues at the ith step by subdividing each square into $(2i+1)^{2}$ smaller squares (the odd squares[6]) and removing the middle one. By the Wallis product, the area of the resulting set is π/4, unlike the standard Sierpiński carpet, which has zero limiting area. Although the Wallis sieve has positive Lebesgue measure, no subset that is a Cartesian product of two sets of real numbers has this property, so its Jordan measure is zero.[7]
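The limiting area can be checked numerically: the i-th step keeps a fraction $1-1/(2i+1)^{2}$ of the remaining area, and the partial products approach π/4 (a minimal sketch):

import math

area = 1.0
for i in range(1, 100001):
    area *= 1 - 1 / (2 * i + 1) ** 2   # step i removes one of (2i+1)^2 subsquares from each square
print(area, math.pi / 4)               # both approximately 0.78540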
Applications
Mobile phone and Wi-Fi fractal antennas have been produced in the form of a few iterations of the Sierpiński carpet. Due to their self-similarity and scale invariance, they easily accommodate multiple frequencies. They are also easy to fabricate and smaller than conventional antennas of similar performance, making them well suited to pocket-sized mobile phones.
See also
• List of fractals by Hausdorff dimension
• Menger sponge
References
1. Allouche, Jean-Paul; Shallit, Jeffrey (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. pp. 405–406. ISBN 978-0-521-82332-6. Zbl 1086.11015.
2. Semmes, Stephen (2001). Some Novel Types of Fractal Geometry. Oxford Mathematical Monographs. Oxford University Press. p. 31. ISBN 0-19-850806-9. Zbl 0970.28001.
3. Sierpiński, Wacław (1916). "Sur une courbe cantorienne qui contient une image biunivoque et continue de toute courbe donnée". C. R. Acad. Sci. Paris (in French). 162: 629–632. ISSN 0001-4036. JFM 46.0295.02.
4. Whyburn, Gordon (1958). "Topological characterization of the Sierpinski curve". Fund. Math. 45: 320–324. doi:10.4064/fm-45-1-320-324.
5. Barlow, Martin; Bass, Richard, Brownian motion and harmonic analysis on Sierpiński carpets (PDF)
6. Sloane, N. J. A. (ed.). "Sequence A016754 (Odd squares: a(n) = (2n+1)^2. Also centered octagonal numbers.)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
7. Rummler, Hansklaus (1993). "Squaring the circle with holes". The American Mathematical Monthly. 100 (9): 858–860. doi:10.2307/2324662. JSTOR 2324662. MR 1247533.
External links
Wikimedia Commons has media related to Sierpinski carpet.
• Variations on the Theme of Tremas II
• Sierpiński Cookies
• Sierpinski Carpet Project
• Sierpinski Carpet solved by means of modular arithmetics
Sierpiński number
In number theory, a Sierpiński number is an odd natural number k such that $k\times 2^{n}+1$ is composite for all natural numbers n. In 1960, Wacław Sierpiński proved that there are infinitely many odd integers k which have this property.
In other words, when k is a Sierpiński number, all members of the following set are composite:
$\left\{\,k\cdot 2^{n}+1:n\in \mathbb {N} \,\right\}.$
If the form is instead $k\times 2^{n}-1$, then k is a Riesel number.
Known Sierpiński numbers
The sequence of currently known Sierpiński numbers begins with:
78557, 271129, 271577, 322523, 327739, 482719, 575041, 603713, 903983, 934909, 965431, 1259779, 1290677, 1518781, 1624097, 1639459, 1777613, 2131043, 2131099, 2191531, 2510177, 2541601, 2576089, 2931767, 2931991, ... (sequence A076336 in the OEIS).
The number 78557 was proved to be a Sierpiński number by John Selfridge in 1962, who showed that all numbers of the form $78557\cdot 2^{n}+1$ have a factor in the covering set {3, 5, 7, 13, 19, 37, 73}. For another known Sierpiński number, 271129, the covering set is {3, 5, 7, 13, 17, 241}. Most currently known Sierpiński numbers possess similar covering sets.[1]
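The covering argument can be verified directly. Since the multiplicative order of 2 modulo each prime in {3, 5, 7, 13, 19, 37, 73} divides 36, divisibility of $78557\cdot 2^{n}+1$ by members of the set is periodic in n with period 36, so checking n = 0, …, 35 suffices (a minimal Python sketch):

COVER = [3, 5, 7, 13, 19, 37, 73]

for n in range(36):
    # every residue class of n modulo 36 yields a term divisible by some covering prime
    assert any((78557 * 2 ** n + 1) % p == 0 for p in COVER), n
print("78557 * 2**n + 1 always has a factor in", COVER)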
However, in 1995 A. S. Izotov showed that some fourth powers could be proved to be Sierpiński numbers without establishing a covering set for all values of n. His proof depends on the aurifeuillean factorization $t^{4}\cdot 2^{4m+2}+1=(t^{2}\cdot 2^{2m+1}+t\cdot 2^{m+1}+1)\cdot (t^{2}\cdot 2^{2m+1}-t\cdot 2^{m+1}+1)$. This establishes that all n ≡ 2 (mod 4) give rise to a composite, and so it remains to eliminate only n ≡ 0, 1, 3 (mod 4) using a covering set.[2]
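The identity itself is elementary; writing $x=2^{m}$ it becomes a polynomial identity, which can be confirmed symbolically (SymPy is used here purely for illustration):

from sympy import symbols, expand

t, x = symbols('t x')                                   # x stands for 2**m
lhs = 4 * t**4 * x**4 + 1                               # t^4 * 2^(4m+2) + 1
rhs = (2*t**2*x**2 + 2*t*x + 1) * (2*t**2*x**2 - 2*t*x + 1)
print(expand(rhs - lhs))                                # prints 0, confirming the factorization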
Sierpiński problem
Unsolved problem in mathematics:
Is 78,557 the smallest Sierpiński number?
The Sierpiński problem asks for the value of the smallest Sierpiński number. In private correspondence with Paul Erdős, Selfridge conjectured that 78,557 was the smallest Sierpiński number.[3] No smaller Sierpiński numbers have been discovered, and it is now believed that 78,557 is indeed the smallest Sierpiński number.[4]
To show that 78,557 really is the smallest Sierpiński number, one must show that all the odd numbers smaller than 78,557 are not Sierpiński numbers. That is, for every odd k below 78,557, there needs to exist a positive integer n such that $k\cdot 2^{n}+1$ is prime.[1] As of December 2021, there are only five candidates which have not been eliminated as possible Sierpiński numbers:[5]
k = 21181, 22699, 24737, 55459, and 67607.
The distributed volunteer computing project PrimeGrid is attempting to eliminate all the remaining values of k. As of August 2022, no prime has been found for these values of k, with all $n\leq 37\,408\,811$ having been eliminated.[6]
The most recently eliminated candidate was k = 10223, when the prime $10223\times 2^{31172165}+1$ was discovered by PrimeGrid in October 2016. This number is 9,383,761 digits long.[5]
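The stated digit count follows from the size of the number: $k\cdot 2^{n}+1$ has $\lfloor \log _{10}k+n\log _{10}2\rfloor +1$ decimal digits (a quick check):

import math

k, n = 10223, 31172165
print(math.floor(math.log10(k) + n * math.log10(2)) + 1)   # 9383761 digits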
Prime Sierpiński problem
Unsolved problem in mathematics:
Is 271,129 the smallest prime Sierpiński number?
In 1976, Nathan Mendelsohn determined that the second provable Sierpiński number is the prime k = 271129. The prime Sierpiński problem asks for the value of the smallest prime Sierpiński number, and there is an ongoing "Prime Sierpiński search" which tries to prove that 271129 is the first Sierpiński number which is also a prime. As of November 2018, the nine prime values of k less than 271129 for which a prime of the form $k\cdot 2^{n}+1$ is not known are:[7]
k = 22699, 67607, 79309, 79817, 152267, 156511, 222113, 225931, and 237019.
As of August 2022, no prime has been found for these values of k with $n\leq 27\,315\,111$.[8]
The first two, being less than 78557, are also unsolved cases of the (non-prime) Sierpiński problem described above. The most recently eliminated candidate was k = 168451, when the prime number $168451\times 2^{19375200}+1$ was discovered by PrimeGrid in September 2017. The number is 5,832,522 digits long.[9]
Extended Sierpiński problem
Unsolved problem in mathematics:
Is 271,129 the second Sierpiński number?
Suppose that both preceding Sierpiński problems had finally been solved, showing that 78557 is the smallest Sierpiński number and that 271129 is the smallest prime Sierpiński number. This still leaves unsolved the question of the second Sierpinski number; there could exist a composite Sierpiński number k such that $78557<k<271129$. An ongoing search is trying to prove that 271129 is the second Sierpiński number, by testing all k values between 78557 and 271129, prime or not.
Solving the extended Sierpiński problem, the most demanding of the three posed problems, requires the elimination of 21 remaining candidates $k<271129$, of which nine are prime (see above) and twelve are composite. The latter include k = 21181, 24737, 55459 from the original Sierpiński problem. As of August 2022, the following eight values of k, unique to the extended Sierpiński problem, remain:[10]
k = 91549, 131179, 163187, 200749, 209611, 227723, 229673, and 238411.
As of August 2022, no prime has been found for these values of k with $n\leq 22\,384\,237$.[11]
In December 2019, $99739\times 2^{14019102}+1$ was found to be prime by PrimeGrid, eliminating k = 99739. The number is 4,220,176 digits long.[12]
The most recent elimination was in December 2021, when $202705\times 2^{21320516}+1$ was found to be prime by PrimeGrid, eliminating k = 202705. The number is 6,418,121 digits long.
Simultaneously Sierpiński and Riesel
A number may be simultaneously Sierpiński and Riesel. These are called Brier numbers. The smallest five known examples are 3316923598096294713661, 10439679896374780276373, 11615103277955704975673, 12607110588854501953787, 17855036657007596110949, ... (A076335).[13]
Dual Sierpinski problem
If we take n to be a negative integer, then the number $k\cdot 2^{n}+1$ becomes ${\frac {2^{|n|}+k}{2^{|n|}}}$. When k is odd, this is a fraction in reduced form, with numerator $2^{|n|}+k$. A dual Sierpiński number is defined as an odd natural number k such that $2^{n}+k$ is composite for all natural numbers n. It is conjectured that the set of dual Sierpiński numbers coincides with the set of Sierpiński numbers; for example, $2^{n}+78557$ is composite for all natural numbers n.
For odd values of k the least n such that $2^{n}+k$ is prime are
1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 3, 2, 1, 1, 4, 2, 1, 2, 1, 1, 2, 1, 5, 2, ... (sequence A067760 in the OEIS)
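These values are straightforward to reproduce; a minimal sketch using SymPy's primality test (the search cutoff is an arbitrary assumption of the sketch):

from sympy import isprime

def least_n(k, limit=10000):
    """Smallest n >= 1 with 2**n + k prime, or None if none is found below the cutoff."""
    for n in range(1, limit):
        if isprime(2 ** n + k):
            return n
    return None

print([least_n(k) for k in range(1, 50, 2)])
# [1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 3, 2, 1, 1, 4, 2, 1, 2, 1, 1, 2, 1, 5, 2]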
The odd values of k for which $2^{n}+k$ is composite for all n < k are
773, 2131, 2491, 4471, 5101, 7013, 8543, 10711, 14717, 17659, 19081, 19249, 20273, 21661, 22193, 26213, 28433, ... (sequence A033919 in the OEIS)
See also
• Cullen number
• Proth number
• Riesel number
• Seventeen or Bust
• Woodall number
References
1. Sierpinski number at The Prime Glossary
2. Anatoly S. Izotov (1995). "Note on Sierpinski Numbers" (PDF). Fibonacci Quarterly. 33 (3): 206.
3. Erdős, Paul; Odlyzko, Andrew Michael (May 1, 1979). "On the density of odd integers of the form (p − 1)2−n and related questions". Journal of Number Theory. Elsevier. 11 (2): 258. doi:10.1016/0022-314X(79)90043-X. ISSN 0022-314X.
4. Guy, Richard Kenneth (2005). Unsolved Problems in Number Theory. New York: Springer-Verlag. pp. B21:119–121, F13:383–385. ISBN 978-0-387-20860-2. OCLC 634701581.
5. Seventeen or Bust at PrimeGrid.
6. "Seventeen or Bust statistics". PrimeGrid. Retrieved November 21, 2019.
7. Goetz, Michael (July 10, 2008). "About the Prime Sierpinski Problem". PrimeGrid. Retrieved September 12, 2019.
8. "Prime Sierpinski Problem statistics". PrimeGrid. Retrieved November 21, 2019.
9. Zimmerman, Van (September 29, 2017). "New PSP Mega Prime!". PrimeGrid. Retrieved September 12, 2019.
10. Goetz, Michael (6 April 2018). "Welcome to the Extended Sierpinski Problem". PrimeGrid. Retrieved 21 August 2019.
11. "Extended Sierpinski Problem statistics". www.primegrid.com. Retrieved 6 April 2018.
12. Brown, Scott (13 January 2020). "ESP Mega Prime!". PrimeGrid. Retrieved 18 January 2020.
13. Problem 29.- Brier Numbers
Further reading
• Guy, Richard K. (2004), Unsolved Problems in Number Theory, New York: Springer-Verlag, p. 120, ISBN 0-387-20860-7
External links
• The Sierpinski problem: definition and status
• Weisstein, Eric W. "Sierpinski's composite number theorem". MathWorld.
• Archived at Ghostarchive and the Wayback Machine: Grime, Dr. James. "78557 and Proth Primes" (video). YouTube. Brady Haran. Retrieved 13 November 2017.
Sierpiński set
In mathematics, a Sierpiński set is an uncountable subset of a real vector space whose intersection with every measure-zero set is countable. The existence of Sierpiński sets is independent of the axioms of ZFC. Sierpiński (1924) showed that they exist if the continuum hypothesis is true. On the other hand, they do not exist if Martin's axiom for ℵ1 is true. Sierpiński sets are weakly Luzin sets but are not Luzin sets (Kunen 2011, p. 376).
Not to be confused with Sierpiński space.
Example of a Sierpiński set
Choose a collection of $2^{\aleph _{0}}$ measure-0 subsets of R such that every measure-0 subset is contained in one of them. By the continuum hypothesis, it is possible to enumerate them as Sα for countable ordinals α. For each countable ordinal β choose a real number xβ that is not in any of the sets Sα for α < β, which is possible as the union of these sets has measure 0 so is not the whole of R. Then the uncountable set X of all these real numbers xβ has only a countable number of elements in each set Sα, so is a Sierpiński set.
It is possible for a Sierpiński set to be a subgroup under addition. For this one modifies the construction above by choosing a real number xβ that is not in any of the countable number of sets of the form (Sα + X)/n for α < β, where n is a positive integer and X is an integral linear combination of the numbers xα for α < β. Then the group generated by these numbers is a Sierpiński set and a group under addition. More complicated variations of this construction produce examples of Sierpiński sets that are subfields or real-closed subfields of the real numbers.
References
• Kunen, Kenneth (2011), Set theory, Studies in Logic, vol. 34, London: College Publications, ISBN 978-1-84890-050-9, MR 2905394, Zbl 1262.03001
• Sierpiński, W. (1924), "Sur l'hypothèse du continu ($2^{\aleph _{0}}=\aleph _{1}$)", Fundamenta Mathematicae, 5 (1): 177–187
Sierpiński space
In mathematics, the Sierpiński space (or the connected two-point set) is a finite topological space with two points, only one of which is closed.[1] It is the smallest example of a topological space which is neither trivial nor discrete. It is named after Wacław Sierpiński.
Not to be confused with Sierpiński set.
The Sierpiński space has important relations to the theory of computation and semantics,[2][3] because it is the classifying space for open sets in the Scott topology.
Definition and fundamental properties
Explicitly, the Sierpiński space is a topological space S whose underlying point set is $\{0,1\}$ and whose open sets are
$\{\varnothing ,\{1\},\{0,1\}\}.$
The closed sets are
$\{\varnothing ,\{0\},\{0,1\}\}.$
So the singleton set $\{0\}$ is closed and the set $\{1\}$ is open ($\varnothing =\{\,\}$ is the empty set).
The closure operator on S is determined by
${\overline {\{0\}}}=\{0\},\qquad {\overline {\{1\}}}=\{0,1\}.$
A finite topological space is also uniquely determined by its specialization preorder. For the Sierpiński space this preorder is actually a partial order and given by
$0\leq 0,\qquad 0\leq 1,\qquad 1\leq 1.$
Topological properties
The Sierpiński space $S$ is a special case of both the finite particular point topology (with particular point 1) and the finite excluded point topology (with excluded point 0). Therefore, $S$ has many properties in common with one or both of these families.
Separation
• The points 0 and 1 are topologically distinguishable in S since $\{1\}$ is an open set which contains only one of these points. Therefore, S is a Kolmogorov (T0) space.
• However, S is not T1 since the point 1 is not closed. It follows that S is not Hausdorff, or Tn for any $n\geq 1.$
• S is not regular (or completely regular) since the point 1 and the disjoint closed set $\{0\}$ cannot be separated by neighborhoods. (Also regularity in the presence of T0 would imply Hausdorff.)
• S is vacuously normal and completely normal since there are no nonempty separated sets.
• S is not perfectly normal since the disjoint closed sets $\varnothing $ and $\{0\}$ cannot be precisely separated by a function. Indeed, $\{0\}$ cannot be the zero set of any continuous function $S\to \mathbb {R} $ since every such function is constant.
Connectedness
• The Sierpiński space S is both hyperconnected (since every nonempty open set contains 1) and ultraconnected (since every nonempty closed set contains 0).
• It follows that S is both connected and path connected.
• A path from 0 to 1 in S is given by the function: $f(0)=0$ and $f(t)=1$ for $t>0.$ The function $f:I\to S$ is continuous since $f^{-1}(1)=(0,1]$ which is open in I.
• Like all finite topological spaces, S is locally path connected.
• The Sierpiński space is contractible, so the fundamental group of S is trivial (as are all the higher homotopy groups).
Compactness
• Like all finite topological spaces, the Sierpiński space is both compact and second-countable.
• The compact subset $\{1\}$ of S is not closed, showing that compact subsets of T0 spaces need not be closed.
• Every open cover of S must contain S itself since S is the only open neighborhood of 0. Therefore, every open cover of S has an open subcover consisting of a single set: $\{S\}.$
• It follows that S is fully normal.[4]
Convergence
• Every sequence in S converges to the point 0. This is because the only neighborhood of 0 is S itself.
• A sequence in S converges to 1 if and only if the sequence contains only finitely many terms equal to 0 (i.e. the sequence is eventually just 1's).
• The point 1 is a cluster point of a sequence in S if and only if the sequence contains infinitely many 1's.
• Examples:
• 1 is not a cluster point of $(0,0,0,0,\ldots ).$
• 1 is a cluster point (but not a limit) of $(0,1,0,1,0,1,\ldots ).$
• The sequence $(1,1,1,1,\ldots )$ converges to both 0 and 1.
Metrizability
• The Sierpiński space S is not metrizable or even pseudometrizable since every pseudometric space is completely regular but the Sierpiński space is not even regular.
• S is generated by the hemimetric (or pseudo-quasimetric) $d(0,1)=0$ and $d(1,0)=1.$
Other properties
• There are only three continuous maps from S to itself: the identity map and the constant maps to 0 and 1.
• It follows that the homeomorphism group of S is trivial.
Continuous functions to the Sierpiński space
Let X be an arbitrary set. The set of all functions from X to the set $\{0,1\}$ is typically denoted $2^{X}.$ These functions are precisely the characteristic functions of subsets of X. Each such function is of the form
$\chi _{U}(x)={\begin{cases}1&x\in U\\0&x\not \in U\end{cases}}$
where U is a subset of X. In other words, the set of functions $2^{X}$ is in bijective correspondence with $P(X),$ the power set of X. Every subset U of X has its characteristic function $\chi _{U}$ and every function from X to $\{0,1\}$ is of this form.
Now suppose X is a topological space and let $\{0,1\}$ have the Sierpiński topology. Then a function $\chi _{U}:X\to S$ is continuous if and only if $\chi _{U}^{-1}(1)$ is open in X. But, by definition
$\chi _{U}^{-1}(1)=U.$
So $\chi _{U}$ is continuous if and only if U is open in X. Let $C(X,S)$ denote the set of all continuous maps from X to S and let $T(X)$ denote the topology of X (that is, the family of all open sets). Then we have a bijection from $T(X)$ to $C(X,S)$ which sends the open set $U$ to $\chi _{U}.$
$C(X,S)\cong {\mathcal {T}}(X)$
That is, if we identify $2^{X}$ with $P(X)$ the subset of continuous maps $C(X,S)\subseteq 2^{X}$is precisely the topology of $X:$ $T(X)\subseteq P(X).$
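For a finite space this correspondence can be made completely concrete. The following Python sketch (an illustrative model, with a small example topology chosen arbitrarily) represents a topology as a family of frozensets and confirms that $\chi _{U}$ is continuous into S exactly when U is open:

# Sierpinski space S: points {0, 1}; open sets are {}, {1} and {0, 1}.
S_OPEN = [frozenset(), frozenset({1}), frozenset({0, 1})]

def continuous_to_S(chi, X, topology):
    """Return True if chi : X -> {0,1} is continuous into the Sierpinski space,
    i.e. the preimage of every open set of S is open in X.  Only the preimage
    of {1} gives a nontrivial condition; the others are the empty set and X."""
    return all(
        frozenset(x for x in X if chi(x) in O) in topology
        for O in S_OPEN
    )

# Example: X = {a, b, c} with topology {}, {a}, {a, b}, {a, b, c}.
X = frozenset({'a', 'b', 'c'})
topology = {frozenset(), frozenset({'a'}), frozenset({'a', 'b'}), X}

for U in (frozenset({'a'}), frozenset({'b'})):           # {a} is open, {b} is not
    chi = lambda x, U=U: 1 if x in U else 0              # characteristic function of U
    print(sorted(U), continuous_to_S(chi, X, topology))  # True exactly when U is open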
A particularly notable example of this is the Scott topology for partially ordered sets, in which the Sierpiński space becomes the classifying space for open sets when the characteristic function preserves directed joins.[5]
Categorical description
The above construction can be described nicely using the language of category theory. There is a contravariant functor $T:\mathbf {Top} \to \mathbf {Set} $ from the category of topological spaces to the category of sets which assigns each topological space $X$ its set of open sets $T(X)$ and each continuous function $f:X\to Y$ the preimage map
$f^{-1}:{\mathcal {T}}(Y)\to {\mathcal {T}}(X).$
The statement then becomes: the functor $T$ is represented by $(S,\{1\})$ where $S$ is the Sierpiński space. That is, $T$ is naturally isomorphic to the Hom functor $\operatorname {Hom} (-,S)$ with the natural isomorphism determined by the universal element $\{1\}\in T(S).$ This is generalized by the notion of a presheaf.[6]
The initial topology
Any topological space X has the initial topology induced by the family $C(X,S)$ of continuous functions to Sierpiński space. Indeed, in order to coarsen the topology on X one must remove open sets. But removing the open set U would render $\chi _{U}$ discontinuous. So X has the coarsest topology for which each function in $C(X,S)$ is continuous.
The family of functions $C(X,S)$ separates points in X if and only if X is a T0 space. Two points $x$ and $y$ will be separated by the function $\chi _{U}$ if and only if the open set U contains precisely one of the two points. This is exactly what it means for $x$ and $y$ to be topologically distinguishable.
Therefore, if X is T0, we can embed X as a subspace of a product of Sierpiński spaces, where there is one copy of S for each open set U in X. The embedding map
$e:X\to \prod _{U\in {\mathcal {T}}(X)}S=S^{{\mathcal {T}}(X)}$
is given by
$e(x)_{U}=\chi _{U}(x).\,$
Since subspaces and products of T0 spaces are T0, it follows that a topological space is T0 if and only if it is homeomorphic to a subspace of a power of S.
In algebraic geometry
In algebraic geometry the Sierpiński space arises as the spectrum, $\operatorname {Spec} (R),$ of a discrete valuation ring $R$ such as $\mathbb {Z} _{(p)}$ (the localization of the integers at the prime ideal generated by the prime number $p$). The generic point of $\operatorname {Spec} (R)$, coming from the zero ideal, corresponds to the open point 1, while the special point of $\operatorname {Spec} (R)$, coming from the unique maximal ideal, corresponds to the closed point 0.
See also
• Finite topological space
• List of topologies – List of concrete topologies and topological spaces
• Pseudocircle
Notes
1. Sierpinski space at the nLab
2. An online paper explaining the motivation for applying the notion of “topology” to the investigation of concepts in computer science. Alex Simpson: Mathematical Structures for Semantics (original). Chapter III: Topological Spaces from a Computational Perspective (original). The “References” section provides many online materials on domain theory.
3. Escardó, Martín (2004). Synthetic topology of data types and classical spaces. Electronic Notes in Theoretical Computer Science. Vol. 87. Elsevier. p. 2004. CiteSeerX 10.1.1.129.2886.
4. Steen and Seebach incorrectly list the Sierpiński space as not being fully normal (or fully T4 in their terminology).
5. Scott topology at the nLab
6. Saunders MacLane, Ieke Moerdijk, Sheaves in Geometry and Logic: A First Introduction to Topos Theory, (1992) Springer-Verlag Universitext ISBN 978-0387977102
References
• Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
• Michael Tiefenback (1977) "Topological Genealogy", Mathematics Magazine 50(3): 158–60 doi:10.2307/2689505