Sum-free sequence
In mathematics, a sum-free sequence is an increasing sequence of positive integers,
$a_{1},a_{2},a_{3},\ldots ,$
such that no term $a_{n}$ can be represented as a sum of any subset of the preceding elements of the sequence.
This differs from a sum-free set, where only pairs of sums must be avoided, but where those sums may come from the whole set rather than just the preceding terms.
Example
The powers of two,
1, 2, 4, 8, 16, ...
form a sum-free sequence: each term in the sequence is one more than the sum of all preceding terms, and so cannot be represented as a sum of preceding terms.
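The defining property can be checked by brute force for a finite prefix. Below is a minimal Python sketch (the function name and the ten-term cut-off are our own choices for illustration, not part of the definition):

```python
from itertools import combinations

def is_sum_free_sequence(terms):
    """Check that no term equals the sum of any nonempty subset of the preceding terms."""
    for i, t in enumerate(terms):
        preceding = terms[:i]
        for r in range(1, len(preceding) + 1):
            if any(sum(c) == t for c in combinations(preceding, r)):
                return False
    return True

powers_of_two = [2**k for k in range(10)]   # 1, 2, 4, 8, ...
print(is_sum_free_sequence(powers_of_two))  # True: each term exceeds the sum of all earlier terms
```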
Sums of reciprocals
A set of integers is said to be small if the sum of its reciprocals converges to a finite value. For instance, by the prime number theorem, the prime numbers are not small. Paul Erdős (1962) proved that every sum-free sequence is small, and asked how large the sum of reciprocals could be. For instance, the sum of the reciprocals of the powers of two (a geometric series) is two.
If $R$ denotes the maximum sum of reciprocals of a sum-free sequence, then through subsequent research it is known that $2.0654<R<2.8570$.[1]
Density
It follows from the fact that sum-free sequences are small that they have zero Schnirelmann density; that is, if $A(x)$ is defined to be the number of sequence elements that are less than or equal to $x$, then $A(x)=o(x)$. Erdős (1962) showed that for every sum-free sequence there exists an unbounded sequence of numbers $x_{i}$ for which $A(x_{i})=O(x_{i}^{\varphi -1})$ where $\varphi $ is the golden ratio, and he exhibited a sum-free sequence for which, for all values of $x$, $A(x)=\Omega (x^{2/7})$, subsequently improved to $A(x)=\Omega (x^{1/3})$ by Deshouillers, Erdős and Melfi in 1999 and to $A(x)=\Omega (x^{1/2-\varepsilon })$ by Luczak and Schoen in 2000, who also proved that the exponent 1/2 cannot be further improved.
Notes
1. Levine & O'Sullivan (1977); Abbott (1987); Yang (2009); Chen (2013); Yang (2015).
References
• Abbott, H. L. (1987), "On sum-free sequences", Acta Arithmetica, 48 (1): 93–96, doi:10.4064/aa-48-1-93-96, MR 0893466.
• Chen, Yong Gao (2013), "On the reciprocal sum of a sum-free sequence", Science China Mathematics, 56 (5): 951–966, Bibcode:2013ScChA..56..951C, doi:10.1007/s11425-012-4540-6, S2CID 124005748.
• Deshouillers, Jean-Marc; Erdős, Pál; Melfi, Giuseppe (1999), "On a question about sum-free sequences", Discrete Mathematics, 200 (1–3): 49–54, doi:10.1016/s0012-365x(98)00322-7, MR 1692278.
• Erdős, Pál (1962), "Számelméleti megjegyzések, III. Néhány additív számelméleti problémáról" [Some remarks on number theory, III] (PDF), Matematikai Lapok (in Hungarian), 13: 28–38, MR 0144871.
• Levine, Eugene; O'Sullivan, Joseph (1977), "An upper estimate for the reciprocal sum of a sum-free sequence", Acta Arithmetica, 34 (1): 9–24, doi:10.4064/aa-34-1-9-24, MR 0466016.
• Luczak, Tomasz; Schoen, Tomasz (2000), "On the maximal density of sum-free sets", Acta Arithmetica, 95 (3): 225–229, doi:10.4064/aa-95-3-225-229, MR 1793162.
• Yang, Shi Chun (2009), "Note on the reciprocal sum of a sum-free sequence", Journal of Mathematical Research and Exposition, 29 (4): 753–755, MR 2549677.
• Yang, Shi Chun (2015), "An upper bound for Erdös reciprocal sum of the sum-free sequence", Scientia Sinica Mathematica, 45 (3): 213–232, doi:10.1360/N012014-00121.
Sum-free set
In additive combinatorics and number theory, a subset A of an abelian group G is said to be sum-free if the sumset A + A is disjoint from A. In other words, A is sum-free if the equation $a+b=c$ has no solution with $a,b,c\in A$.
For example, the set of odd numbers is a sum-free subset of the integers, and the set {N + 1, ..., 2N } forms a large sum-free subset of the set {1, ..., 2N }. Fermat's Last Theorem is the statement that, for a given integer n > 2, the set of all nonzero nth powers of the integers is a sum-free set.
Some basic questions that have been asked about sum-free sets are:
• How many sum-free subsets of {1, ..., N } are there, for an integer N? Ben Green has shown[1] that the answer is $O(2^{N/2})$, as predicted by the Cameron–Erdős conjecture.[2]
• How many sum-free sets does an abelian group G contain?[3]
• What is the size of the largest sum-free set that an abelian group G contains?[3]
A sum-free set is said to be maximal if it is not a proper subset of another sum-free set.
Let $f:[1,\infty )\to [1,\infty )$ be the function for which $f(n)$ is the largest number $k$ such that every subset of $[1,\infty )$ of size $n$ has a sum-free subset of size $k$. The function $f$ is subadditive, so by the Fekete subadditivity lemma, $\lim _{n}{\frac {f(n)}{n}}$ exists. Erdős proved that $\lim _{n}{\frac {f(n)}{n}}\geq {\frac {1}{3}}$ and conjectured that equality holds.[4] This was proved by Eberhard, Green, and Manners.[5]
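For small sets, both the sum-free condition and the quantity f(n) can be explored by exhaustive search. A rough Python sketch (exponential time; the helper names are ours), illustrating the bound f(n) ≥ n/3 on a small example:

```python
from itertools import combinations

def is_sum_free(s):
    """True if no element of s is the sum of two (not necessarily distinct) elements of s."""
    s = set(s)
    return all(a + b not in s for a in s for b in s)

def largest_sum_free_subset(elements):
    """Brute force: only feasible for small sets."""
    elements = list(elements)
    for size in range(len(elements), 0, -1):
        for subset in combinations(elements, size):
            if is_sum_free(subset):
                return set(subset)
    return set()

A = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
best = largest_sum_free_subset(A)
print(best, len(best) >= len(A) / 3)  # e.g. {6, 7, 8, 9, 10}; the size is at least n/3
```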
See also
• Erdős–Szemerédi theorem
• Sum-free sequence
References
1. Green, Ben (November 2004). "The Cameron–Erdős conjecture". Bulletin of the London Mathematical Society. 36 (6): 769–778. arXiv:math.NT/0304058. doi:10.1112/S0024609304003650. MR 2083752.
2. P.J. Cameron and P. Erdős, "On the number of sets of integers with various properties", Number Theory (Banff, 1988), de Gruyter, Berlin 1990, pp. 61-79; see Sloane OEIS: A007865
3. Ben Green and Imre Ruzsa, Sum-free sets in abelian groups, 2005.
4. P. Erdős, "Extremal problems in number theory", Matematika, 11:2 (1967), 98–105; Proc. Sympos. Pure Math., Vol. VIII, 1965, 181–189
5. Eberhard, Sean; Green, Ben; Manners, Freddie (2014). "Sets of integers with no large sum-free subset". Annals of Mathematics. 180 (2): 621–652. ISSN 0003-486X.
Sum coloring
In graph theory, a sum coloring of a graph is a labeling of its vertices by positive integers, with no two adjacent vertices having equal labels, that minimizes the sum of the labels. The minimum sum that can be achieved is called the chromatic sum of the graph.[1] Chromatic sums and sum coloring were introduced by Supowit in 1987 using non-graph-theoretic terminology,[2] and first studied in graph theoretic terms by Ewa Kubicka (independently of Supowit) in her 1989 doctoral thesis.[3]
Obtaining the chromatic sum may require using more distinct labels than the chromatic number of the graph, and even when the chromatic number of a graph is bounded, the number of distinct labels needed to obtain the optimal chromatic sum may be arbitrarily large.[4]
Computing the chromatic sum is NP-hard. However it may be computed in linear time for trees and pseudotrees,[5][6] and in polynomial time for outerplanar graphs.[6] There is a constant-factor approximation algorithm for interval graphs and for bipartite graphs.[7][8] The interval graph case remains NP-hard.[9] It is the case arising in Supowit's original application in VLSI design, and also has applications in scheduling.[7]
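For very small graphs, the chromatic sum can be found by exhaustive search over labelings. A minimal Python sketch (not an efficient algorithm; the function name and the path-graph example are our own):

```python
from itertools import product

def chromatic_sum(n_vertices, edges):
    """Brute-force chromatic sum: minimum total of positive labels over proper colorings."""
    # Labels 1..n suffice: a label above n could be swapped for one of the values in 1..n
    # not used by a vertex's at most n-1 neighbours, which only lowers the sum.
    best = None
    for labels in product(range(1, n_vertices + 1), repeat=n_vertices):
        if all(labels[u] != labels[v] for u, v in edges):
            total = sum(labels)
            best = total if best is None else min(best, total)
    return best

# Path on 4 vertices 0-1-2-3: the alternating labeling 1,2,1,2 gives the optimal sum 6.
print(chromatic_sum(4, [(0, 1), (1, 2), (2, 3)]))  # 6
```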
References
1. Małafiejski, Michał (2004), "Sum coloring of graphs", in Kubale, Marek (ed.), Graph Colorings, Contemporary Mathematics, vol. 352, Providence, RI: American Mathematical Society, pp. 55–65, doi:10.1090/conm/352/06372, ISBN 9780821834589, MR 2076989
2. Supowit, K. J. (1987), "Finding a maximum planar subset of a set of nets in a channel", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 6 (1): 93–94, doi:10.1109/tcad.1987.1270250, S2CID 14949711
3. Kubicka, Ewa Maria (1989), The chromatic sum and efficient tree algorithms, Ph.D. thesis, Western Michigan University, MR 2637573
4. Erdős, Paul; Kubicka, Ewa; Schwenk, Allen J. (1990), "Graphs that require many colors to achieve their chromatic sum", Proceedings of the Twentieth Southeastern Conference on Combinatorics, Graph Theory, and Computing (Boca Raton, FL, 1989), Congressus Numerantium, 71: 17–28, MR 1041612
5. Kubicka, Ewa; Schwenk, Allen J. (1989), "An introduction to chromatic sums", Proceedings of the 17th ACM Computer Science Conference (CSC '89), New York, NY, USA: ACM, pp. 39–45, doi:10.1145/75427.75430, ISBN 978-0-89791-299-0, S2CID 28544302
6. Kubicka, Ewa M. (2005), "Polynomial algorithm for finding chromatic sum for unicyclic and outerplanar graphs", Ars Combinatoria, 76: 193–201, MR 2152758
7. Halldórsson, Magnús M.; Kortsarz, Guy; Shachnai, Hadas (2001), "Minimizing average completion of dedicated tasks and interval graphs", Approximation, randomization, and combinatorial optimization (Berkeley, CA, 2001), Lecture Notes in Computer Science, vol. 2129, Berlin: Springer, pp. 114–126, doi:10.1007/3-540-44666-4_15, ISBN 978-3-540-42470-3, MR 1910356
8. Giaro, Krzysztof; Janczewski, Robert; Kubale, Marek; Małafiejski, Michał (2002), "A 27/26-approximation algorithm for the chromatic sum coloring of bipartite graphs", Approximation algorithms for combinatorial optimization, Lecture Notes in Computer Science, vol. 2462, Berlin: Springer, pp. 135–145, doi:10.1007/3-540-45753-4_13, ISBN 978-3-540-44186-1, MR 2091822
9. Marx, Dániel (2005), "A short proof of the NP-completeness of minimum sum interval coloring", Operations Research Letters, 33 (4): 382–384, CiteSeerX 10.1.1.5.2707, doi:10.1016/j.orl.2004.07.006, MR 2127409
Sum of angles of a triangle
In a Euclidean space, the sum of angles of a triangle equals the straight angle (180 degrees, π radians, two right angles, or a half-turn). A triangle has three angles, one at each vertex, bounded by a pair of adjacent sides.
"Triangle postulate" redirects here. Not to be confused with Triangle inequality.
"Angle sum theorem" redirects here. For trigonometric identities concerning sums of angles, see List of trigonometric identities § Angle sum and difference identities.
For a long time it was unknown whether other geometries exist in which this sum is different. The influence of this problem on mathematics was particularly strong during the 19th century. Ultimately, the answer was proven to be positive: in other spaces (geometries) this sum can be greater or less than 180°, but it must then depend on the triangle. Its difference from 180° is a case of angular defect and serves as an important distinction between geometric systems.
Cases
Euclidean geometry
In Euclidean geometry, the triangle postulate states that the sum of the angles of a triangle is two right angles. This postulate is equivalent to the parallel postulate.[1] In the presence of the other axioms of Euclidean geometry, the following statements are equivalent:[2]
• Triangle postulate: The sum of the angles of a triangle is two right angles.
• Playfair's axiom: Given a straight line and a point not on the line, exactly one straight line may be drawn through the point parallel to the given line.
• Proclus' axiom: If a line intersects one of two parallel lines, it must intersect the other also.[3]
• Equidistance postulate: Parallel lines are everywhere equidistant (i.e. the distance from each point on one line to the other line is always the same.)
• Triangle area property: The area of a triangle can be as large as we please.
• Three points property: Three points either lie on a line or lie on a circle.
• Pythagoras' theorem: In a right-angled triangle, the square of the hypotenuse equals the sum of the squares of the other two sides.[1]
Hyperbolic geometry
Main article: Hyperbolic triangle
The sum of the angles of a hyperbolic triangle is less than 180°. The relation between angular defect and the triangle's area was first proven by Johann Heinrich Lambert.[4]
One can easily see how hyperbolic geometry breaks Playfair's axiom, Proclus' axiom (parallelism, defined as non-intersection, is not transitive in a hyperbolic plane), the equidistance postulate (the points on one side of, and equidistant from, a given line do not form a line), and Pythagoras' theorem. A circle[5] cannot have arbitrarily small curvature,[6] so the three points property also fails.
The sum of the angles can be arbitrarily small (but positive). For an ideal triangle, a generalization of hyperbolic triangles, this sum is equal to zero.
Spherical geometry
See also: Triangle § Non-planar triangles
For a spherical triangle, the sum of the angles is greater than 180° and can be up to 540°. Specifically, the sum of the angles is
180° × (1 + 4f ),
where f is the fraction of the sphere's area which is enclosed by the triangle.
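As a quick numerical check of this relation (a minimal sketch; the function is our own): the octant triangle cut out by three mutually perpendicular great circles has three right angles and encloses one eighth of the sphere.

```python
def spherical_angle_sum(f):
    """Angle sum in degrees of a spherical triangle covering fraction f of the sphere's area."""
    return 180 * (1 + 4 * f)

# Octant triangle: three mutually perpendicular great circles, three 90° angles, f = 1/8.
print(spherical_angle_sum(1 / 8))  # 270.0
```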
Note that spherical geometry does not satisfy several of Euclid's axioms (including the parallel postulate.)
Exterior angles
Main article: Internal and external angle
Angles between adjacent sides of a triangle are referred to as interior angles in Euclidean and other geometries. Exterior angles can also be defined, and the Euclidean triangle postulate can be formulated as the exterior angle theorem. One can also consider the sum of all three exterior angles, which equals 360°[7] in the Euclidean case (as for any convex polygon), is less than 360° in the spherical case, and is greater than 360° in the hyperbolic case.
In differential geometry
In the differential geometry of surfaces, the question of a triangle's angular defect is understood as a special case of the Gauss–Bonnet theorem, in which the curvature of a closed curve is not a function but a measure supported on exactly three points, the vertices of the triangle.
See also
• Euclid's Elements
• Foundations of geometry
• Hilbert's axioms
• Saccheri quadrilateral (considered earlier than Saccheri by Omar Khayyám)
• Lambert quadrilateral
References
1. Eric W. Weisstein (2003). CRC concise encyclopedia of mathematics (2nd ed.). p. 2147. ISBN 1-58488-347-2. The parallel postulate is equivalent to the Equidistance postulate, Playfair axiom, Proclus axiom, the Triangle postulate and the Pythagorean theorem.
2. Keith J. Devlin (2000). The Language of Mathematics: Making the Invisible Visible. Macmillan. p. 161. ISBN 0-8050-7254-3.
3. Essentially, the transitivity of parallelism.
4. Ratcliffe, John (2006), Foundations of Hyperbolic Manifolds, Graduate Texts in Mathematics, vol. 149, Springer, p. 99, ISBN 9780387331973, That the area of a hyperbolic triangle is proportional to its angle defect first appeared in Lambert's monograph Theorie der Parallellinien, which was published posthumously in 1786.
5. Defined as the set of points at the fixed distance from its centre.
6. Defined in the differentially-geometrical sense.
7. By the definition of an exterior angle, it sums to a straight angle with its adjacent interior angle. So, the sum of the three exterior angles added to the sum of the three interior angles always gives three straight angles.
Disjunctive sum
In the mathematics of combinatorial games, the sum or disjunctive sum of two games is a game in which the two games are played in parallel, with each player being allowed to move in just one of the games per turn. The sum game finishes when there are no moves left in either of the two parallel games, at which point (in normal play) the last player to move wins. This operation may be extended to disjunctive sums of any number of games, again by playing the games in parallel and moving in exactly one of the games per turn. It is the fundamental operation that is used in the Sprague–Grundy theorem for impartial games and which led to the field of combinatorial game theory for partisan games.
Application to common games
Disjunctive sums arise in games that naturally break up into components or regions that do not interact except in that each player in turn must choose just one component to play in. Examples of such games are Go, Nim, Sprouts, Domineering, the Game of the Amazons, and the map-coloring games.
In such games, each component may be analyzed separately for simplifications that do not affect its outcome or the outcome of its disjunctive sum with other games. Once this analysis has been performed, the components can be combined by taking the disjunctive sum of two games at a time, combining them into a single game with the same outcome as the original game.
Mathematics
The sum operation was formalized by Conway (1976). It is a commutative and associative operation: if two games are combined, the outcome is the same regardless of what order they are combined, and if more than two games are combined, the outcome is the same regardless of how they are grouped.
The negation −G of a game G (the game formed by trading the roles of the two players) forms an additive inverse under disjunctive sums: the game G + −G is a zero game (won by whoever goes second) using a simple echoing strategy in which the second player repeatedly copies the first player's move in the other game. For any two games G and H, the game H + G + −G has the same outcome as H itself (although it may have a larger set of available moves).
Based on these properties, the class of combinatorial games may be thought of as having the structure of an abelian group, although with a proper class of elements rather than (as is more standard for groups) a set of elements. For an important subclass of the games called the surreal numbers, there exists a multiplication operator that extends this group to a field.
For impartial misère play games, an analogous theory of sums can be developed, but with fewer of these properties: these games form a commutative monoid with only one nontrivial invertible element, called star (*), of order two.
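As an illustration of the Sprague–Grundy theorem mentioned above, the following Python sketch (a heap-based example of our own choosing, not from the article) computes Grundy values of multi-heap Nim positions directly from the move rules and checks that the value of the disjunctive sum is the XOR (nim-sum) of the single-heap values:

```python
from functools import lru_cache

# A position is a tuple of Nim heap sizes; a move removes one or more counters from a
# single heap, so the position is a disjunctive sum of single-heap games.

@lru_cache(maxsize=None)
def grundy(position):
    """Grundy value via the mex of the values of all positions reachable in one move."""
    successors = set()
    for i, heap in enumerate(position):
        for take in range(1, heap + 1):
            nxt = position[:i] + (heap - take,) + position[i + 1:]
            successors.add(grundy(tuple(sorted(nxt))))
    g = 0
    while g in successors:
        g += 1
    return g

pos = (3, 5, 7)
# Sprague-Grundy: the value of the disjunctive sum equals the nim-sum of the components.
assert grundy(pos) == grundy((3,)) ^ grundy((5,)) ^ grundy((7,))
print(grundy(pos))  # 1, so the position is a first-player win under normal play
```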
References
• Conway, John Horton (1976), On Numbers and Games, Academic Press.
Divergence of the sum of the reciprocals of the primes
The sum of the reciprocals of all prime numbers diverges; that is:
$\sum _{p{\text{ prime}}}{\frac {1}{p}}={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{11}}+{\frac {1}{13}}+{\frac {1}{17}}+\cdots =\infty $
This was proved by Leonhard Euler in 1737,[1] and strengthens Euclid's 3rd-century-BC result that there are infinitely many prime numbers and Nicole Oresme's 14th-century proof of the divergence of the sum of the reciprocals of the integers (harmonic series).
There are a variety of proofs of Euler's result, including a lower bound for the partial sums stating that
$\sum _{\scriptstyle p{\text{ prime}} \atop \scriptstyle p\leq n}{\frac {1}{p}}\geq \log \log(n+1)-\log {\frac {\pi ^{2}}{6}}$
for all natural numbers n. The double natural logarithm (log log) indicates that the divergence might be very slow, which is indeed the case. See Meissel–Mertens constant.
The harmonic series
First, we describe how Euler originally discovered the result. He was considering the harmonic series
$\sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+\cdots =\infty $
He had already used the following "product formula" to show the existence of infinitely many primes.
$\sum _{n=1}^{\infty }{\frac {1}{n}}=\prod _{p}\left(1+{\frac {1}{p}}+{\frac {1}{p^{2}}}+\cdots \right)=\prod _{p}{\frac {1}{1-p^{-1}}}$
Here the product is taken over the set of all primes.
Such infinite products are today called Euler products. The product above is a reflection of the fundamental theorem of arithmetic. Euler noted that if there were only a finite number of primes, then the product on the right would clearly converge, contradicting the divergence of the harmonic series.
Proofs
Euler's proof
Euler considered the above product formula and proceeded to make a sequence of audacious leaps of logic. First, he took the natural logarithm of each side, then he used the Taylor series expansion for log x as well as the sum of a converging series:
${\begin{aligned}\log \left(\sum _{n=1}^{\infty }{\frac {1}{n}}\right)&{}=\log \left(\prod _{p}{\frac {1}{1-p^{-1}}}\right)=-\sum _{p}\log \left(1-{\frac {1}{p}}\right)\\[5pt]&=\sum _{p}\left({\frac {1}{p}}+{\frac {1}{2p^{2}}}+{\frac {1}{3p^{3}}}+\cdots \right)\\[5pt]&=\sum _{p}{\frac {1}{p}}+{\frac {1}{2}}\sum _{p}{\frac {1}{p^{2}}}+{\frac {1}{3}}\sum _{p}{\frac {1}{p^{3}}}+{\frac {1}{4}}\sum _{p}{\frac {1}{p^{4}}}+\cdots \\[5pt]&=A+{\frac {1}{2}}B+{\frac {1}{3}}C+{\frac {1}{4}}D+\cdots \\[5pt]&=A+K\end{aligned}}$
for a fixed constant K < 1. Then he invoked the relation
$\sum _{n=1}^{\infty }{\frac {1}{n}}=\log \infty ,$
which he explained, for instance in a later 1748 work,[2] by setting x = 1 in the Taylor series expansion
$\log \left({\frac {1}{1-x}}\right)=\sum _{n=1}^{\infty }{\frac {x^{n}}{n}}.$
This allowed him to conclude that
$A={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{11}}+\cdots =\log \log \infty .$
It is almost certain that Euler meant that the sum of the reciprocals of the primes less than n is asymptotic to log log n as n approaches infinity. It turns out this is indeed the case, and a more precise version of this fact was rigorously proved by Franz Mertens in 1874.[3] Thus Euler obtained a correct result by questionable means.
Erdős's proof by upper and lower estimates
The following proof by contradiction comes from Paul Erdős.
Let pi denote the ith prime number. Assume that the sum of the reciprocals of the primes converges.
Then there exists a smallest positive integer k such that
$\sum _{i=k+1}^{\infty }{\frac {1}{p_{i}}}<{\frac {1}{2}}\qquad (1)$
For a positive integer x, let Mx denote the set of those n in {1, 2, ..., x} which are not divisible by any prime greater than pk (or equivalently all n ≤ x which are a product of powers of primes pi ≤ pk). We will now derive an upper and a lower estimate for |Mx|, the number of elements in Mx. For large x, these bounds will turn out to be contradictory.
Upper estimate
Every n in Mx can be written as n = m2r with positive integers m and r, where r is square-free. Since only the k primes p1, ..., pk can show up (with exponent 1) in the prime factorization of r, there are at most 2k different possibilities for r. Furthermore, there are at most √x possible values for m. This gives us the upper estimate
$|M_{x}|\leq 2^{k}{\sqrt {x}}\qquad (2)$
Lower estimate
The remaining x − |Mx| numbers in the set difference {1, 2, ..., x} \ Mx are all divisible by a prime greater than pk. Let Ni,x denote the set of those n in {1, 2, ..., x} which are divisible by the ith prime pi. Then
$\{1,2,\ldots ,x\}\setminus M_{x}=\bigcup _{i=k+1}^{\infty }N_{i,x}$
Since the number of integers in Ni,x is at most x/pi (actually zero for pi > x), we get
$x-|M_{x}|\leq \sum _{i=k+1}^{\infty }|N_{i,x}|<\sum _{i=k+1}^{\infty }{\frac {x}{p_{i}}}$
Using (1), this implies
${\frac {x}{2}}<|M_{x}|\qquad (3)$
This produces a contradiction: when x ≥ 22k + 2, the estimates (2) and (3) cannot both hold, because x/2 ≥ 2k√x.
Proof that the series exhibits log-log growth
Here is another proof that actually gives a lower estimate for the partial sums; in particular, it shows that these sums grow at least as fast as log log n. The proof is due to Ivan Niven,[4] adapted from the product expansion idea of Euler. In the following, a sum or product taken over p always represents a sum or product taken over a specified set of primes.
The proof rests upon the following four inequalities:
• Every positive integer i can be uniquely expressed as the product of a square-free integer and a square as a consequence of the fundamental theorem of arithmetic. Start with
$i=q_{1}^{2{\alpha }_{1}+{\beta }_{1}}\cdot q_{2}^{2{\alpha }_{2}+{\beta }_{2}}\cdots q_{r}^{2{\alpha }_{r}+{\beta }_{r}},$
where the βs are 0 (the corresponding power of prime q is even) or 1 (the corresponding power of prime q is odd). Factor out one copy of all the primes whose β is 1, leaving a product of primes to even powers, itself a square. Relabeling:
$i=(p_{1}p_{2}\cdots p_{s})\cdot b^{2},$
where the first factor, a product of primes to the first power, is square-free. Inverting each $i$ gives the inequality
$\sum _{i=1}^{n}{\frac {1}{i}}\leq \left(\prod _{p\leq n}\left(1+{\frac {1}{p}}\right)\right)\cdot \left(\sum _{k=1}^{n}{\frac {1}{k^{2}}}\right)=A\cdot B.$
To see this, note that
${\frac {1}{i}}={\frac {1}{p_{1}p_{2}\cdots p_{s}}}\cdot {\frac {1}{b^{2}}},$
and
${\begin{aligned}\left(1+{\frac {1}{p_{1}}}\right)\left(1+{\frac {1}{p_{2}}}\right)\ldots \left(1+{\frac {1}{p_{s}}}\right)&=\left({\frac {1}{p_{1}}}\right)\left({\frac {1}{p_{2}}}\right)\cdots \left({\frac {1}{p_{s}}}\right)+\ldots \\&={\frac {1}{p_{1}p_{2}\cdots p_{s}}}+\ldots .\end{aligned}}$
That is, $1/(p_{1}p_{2}\cdots p_{s})$ is one of the summands in the expanded product A. And since $1/b^{2}$ is one of the summands of B, every summand $1/i$ is represented in one of the terms of AB when multiplied out. The inequality follows.
• The upper estimate for the natural logarithm
${\begin{aligned}\log(n+1)&=\int _{1}^{n+1}{\frac {dx}{x}}\\&=\sum _{i=1}^{n}\underbrace {\int _{i}^{i+1}{\frac {dx}{x}}} _{{}\,<\,{\frac {1}{i}}}\\&<\sum _{i=1}^{n}{\frac {1}{i}}\end{aligned}}$
• The lower estimate 1 + x < exp(x) for the exponential function, which holds for all x > 0.
• Let n ≥ 2. The upper bound (using a telescoping sum) for the partial sums (convergence is all we really need)
${\begin{aligned}\sum _{k=1}^{n}{\frac {1}{k^{2}}}&<1+\sum _{k=2}^{n}\underbrace {\left({\frac {1}{k-{\frac {1}{2}}}}-{\frac {1}{k+{\frac {1}{2}}}}\right)} _{=\,{\frac {1}{k^{2}-{\frac {1}{4}}}}\,>\,{\frac {1}{k^{2}}}}\\&=1+{\frac {2}{3}}-{\frac {1}{n+{\frac {1}{2}}}}<{\frac {5}{3}}\end{aligned}}$
Combining all these inequalities, we see that
${\begin{aligned}\log(n+1)&<\sum _{i=1}^{n}{\frac {1}{i}}\\&\leq \prod _{p\leq n}\left(1+{\frac {1}{p}}\right)\sum _{k=1}^{n}{\frac {1}{k^{2}}}\\&<{\frac {5}{3}}\prod _{p\leq n}\exp \left({\frac {1}{p}}\right)\\&={\frac {5}{3}}\exp \left(\sum _{p\leq n}{\frac {1}{p}}\right)\end{aligned}}$
Dividing through by 5/3 and taking the natural logarithm of both sides gives
$\log \log(n+1)-\log {\frac {5}{3}}<\sum _{p\leq n}{\frac {1}{p}}$
as desired. Q.E.D.
Using
$\sum _{k=1}^{\infty }{\frac {1}{k^{2}}}={\frac {\pi ^{2}}{6}}$
(see the Basel problem), the above constant log 5/3 = 0.51082... can be improved to log π²/6 = 0.4977...; in fact it turns out that
$\lim _{n\to \infty }\left(\sum _{p\leq n}{\frac {1}{p}}-\log \log n\right)=M$
where M = 0.261497... is the Meissel–Mertens constant (somewhat analogous to the much more famous Euler–Mascheroni constant).
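The slow approach to M can be observed numerically. A small Python sketch (the sieve helper is our own, and the chosen cut-offs are arbitrary):

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

# The difference between the partial sum and log log n tends to M = 0.261497...
for n in (10**2, 10**4, 10**6):
    s = sum(1 / p for p in primes_up_to(n))
    print(n, round(s - log(log(n)), 6))
```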
Proof from Dusart's inequality
From Dusart's inequality, we get
$p_{n}<n\log n+n\log \log n\quad {\mbox{for }}n\geq 6$
Then
${\begin{aligned}\sum _{n=1}^{\infty }{\frac {1}{p_{n}}}&\geq \sum _{n=6}^{\infty }{\frac {1}{p_{n}}}\\&\geq \sum _{n=6}^{\infty }{\frac {1}{n\log n+n\log \log n}}\\&\geq \sum _{n=6}^{\infty }{\frac {1}{2n\log n}}=\infty \end{aligned}}$
by the integral test for convergence. This shows that the series on the left diverges.
Geometric and harmonic-series proof
The following proof is modified from James A. Clarkson.[5]
Define the k-th tail
$x_{k}=\sum _{n=k+1}^{\infty }{\frac {1}{p_{n}}}.$
Then for $i\geq 0$, the expansion of $(x_{k})^{i}$ contains at least one term for each reciprocal of a positive integer with exactly $i$ prime factors (counting multiplicities) only from the set $\{p_{k+1},p_{k+2},\cdots \}$. It follows that the geometric series $ \sum _{i=0}^{\infty }(x_{k})^{i}$ contains at least one term for each reciprocal of a positive integer not divisible by any $p_{n},n\leq k$. But since $1+j(p_{1}p_{2}\cdots p_{k})$ always satisfies this criterion,
$\sum _{i=0}^{\infty }(x_{k})^{i}>\sum _{j=1}^{\infty }{\frac {1}{1+j(p_{1}p_{2}\cdots p_{k})}}>{\frac {1}{1+p_{1}p_{2}\cdots p_{k}}}\sum _{j=1}^{\infty }{\frac {1}{j}}=\infty $
by the divergence of the harmonic series. This shows that $x_{k}\geq 1$ for all $k$, and since the tails of a convergent series must themselves converge to zero, this proves divergence.
Partial sums
While the partial sums of the reciprocals of the primes eventually exceed any integer value, they never equal an integer.
One proof[6] is by induction: The first partial sum is 1/2, which has the form odd/even. If the nth partial sum (for n ≥ 1) has the form odd/even, then the (n + 1)st sum is
${\frac {\text{odd}}{\text{even}}}+{\frac {1}{p_{n+1}}}={\frac {{\text{odd}}\cdot p_{n+1}+{\text{even}}}{{\text{even}}\cdot p_{n+1}}}={\frac {{\text{odd}}+{\text{even}}}{\text{even}}}={\frac {\text{odd}}{\text{even}}}$
as the (n + 1)st prime pn + 1 is odd; since this sum also has an odd/even form, this partial sum cannot be an integer (because 2 divides the denominator but not the numerator), and the induction continues.
Another proof rewrites the expression for the sum of the first n reciprocals of primes (or indeed the sum of the reciprocals of any set of primes) in terms of the least common denominator, which is the product of all these primes. Then each of these primes divides all but one of the numerator terms and hence does not divide the numerator itself; but each prime does divide the denominator. Thus the expression is irreducible and is non-integer.
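The odd/even argument can be checked with exact rational arithmetic. A minimal Python sketch (the first ten primes are an arbitrary choice):

```python
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
partial = Fraction(0)
for p in primes:
    partial += Fraction(1, p)
    # The numerator stays odd and the denominator stays even, so the sum is never an integer.
    print(p, partial, partial.numerator % 2, partial.denominator % 2)
```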
See also
• Euclid's theorem that there are infinitely many primes
• Small set (combinatorics)
• Brun's theorem, on the convergent sum of reciprocals of the twin primes
• List of sums of reciprocals
References
1. Euler, Leonhard (1737). "Variae observationes circa series infinitas" [Various observations concerning infinite series]. Commentarii Academiae Scientiarum Petropolitanae. 9: 160–188.
2. Euler, Leonhard (1748). Introductio in analysin infinitorum. Tomus Primus [Introduction to Infinite Analysis. Volume I]. Lausanne: Bousquet. p. 228, ex. 1.
3. Mertens, F. (1874). "Ein Beitrag zur analytischen Zahlentheorie". J. Reine Angew. Math. 78: 46–62.
4. Niven, Ivan, "A Proof of the Divergence of Σ 1/p", The American Mathematical Monthly, Vol. 78, No. 3 (Mar. 1971), pp. 272-273. The half-page proof is expanded by William Dunham in Euler: The Master of Us All, pp. 74-76.
5. Clarkson, James (1966). "On the series of prime reciprocals" (PDF). Proc. Amer. Math. Soc. 17: 541.
6. Lord, Nick (2015). "Quick proofs that certain sums of fractions are not integers". The Mathematical Gazette. 99: 128–130. doi:10.1017/mag.2014.16. S2CID 123890989.
Sources
• Dunham, William (1999). Euler: The Master of Us All. MAA. pp. 61–79. ISBN 0-88385-328-0.
External links
• Caldwell, Chris K. "There are infinitely many primes, but, how big of an infinity?".
Canonical normal form
In Boolean algebra, any Boolean function can be expressed in the canonical disjunctive normal form (CDNF)[1] or minterm canonical form, and its dual, the canonical conjunctive normal form (CCNF) or maxterm canonical form. Other canonical forms include the complete sum of prime implicants or Blake canonical form (and its dual), and the algebraic normal form (also called Zhegalkin or Reed–Muller).
This article is about canonical forms particularly in Boolean algebra. It is not to be confused with Canonical form or Normal form.
Minterms are called products because they are the logical AND of a set of variables, and maxterms are called sums because they are the logical OR of a set of variables. These concepts are dual because of their complementary-symmetry relationship as expressed by De Morgan's laws.
Two dual canonical forms of any Boolean function are a "sum of minterms" and a "product of maxterms." The term "Sum of Products" (SoP or SOP) is widely used for the canonical form that is a disjunction (OR) of minterms. Its De Morgan dual is a "Product of Sums" (PoS or POS) for the canonical form that is a conjunction (AND) of maxterms. These forms can be useful for the simplification of these functions, which is of great importance in the optimization of Boolean formulas in general and digital circuits in particular.
Minterms
For a boolean function of $n$ variables ${x_{1},\dots ,x_{n}}$, a product term in which each of the $n$ variables appears once (either in its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator.
For example, $abc$, $ab'c$ and $abc'$ are 3 examples of the 8 minterms for a Boolean function of the three variables $a$, $b$, and $c$. The customary reading of the last of these is a AND b AND NOT-c.
There are 2n minterms of n variables, since a variable in the minterm expression can be in either its direct or its complemented form—two choices per variable.
Indexing minterms
Minterms are often numbered by a binary encoding of the complementation pattern of the variables, where the variables are written in a standard order, usually alphabetical. This convention assigns the value 1 to the direct form ($x_{i}$) and 0 to the complemented form ($x'_{i}$); the index of the minterm is then $\sum \limits _{i=1}^{n}2^{\,n-i}\operatorname {value} (x_{i})$. For example, minterm $abc'$ is numbered 110₂ = 6₁₀ and denoted $m_{6}$.
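A small Python sketch of this numbering (variable values are listed most significant first, matching the alphabetical order of the article's a, b, c example; the function name is ours):

```python
def minterm_index(values):
    """Index of a minterm from the 0/1 values of its variables, most significant first."""
    index = 0
    for v in values:
        index = 2 * index + v
    return index

# a b c' -> a = 1, b = 1, c = 0 (a complemented variable contributes 0):
print(minterm_index([1, 1, 0]))  # 6, i.e. the minterm is denoted m6
```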
Functional equivalence
A given minterm n gives a true value (i.e., 1) for just one combination of the input variables. For example, minterm 5, a b' c, is true only when a and c both are true and b is false—the input arrangement where a = 1, b = 0, c = 1 results in 1.
Given the truth table of a logical function, it is possible to write the function as a "sum of products". This is a special form of disjunctive normal form. For example, if given the truth table for the arithmetic sum bit u of one bit position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:
ci x y   u(ci,x,y)
0  0  0  0
0  0  1  1
0  1  0  1
0  1  1  0
1  0  0  1
1  0  1  0
1  1  0  0
1  1  1  1
Observing that the rows that have an output of 1 are the 2nd, 3rd, 5th, and 8th, we can write u as a sum of minterms $m_{1},m_{2},m_{4},$ and $m_{7}$. If we wish to verify this: $u(ci,x,y)=m_{1}+m_{2}+m_{4}+m_{7}=(ci',x',y)+(ci',x,y')+(ci,x',y')+(ci,x,y)$ evaluated for all 8 combinations of the three variables will match the table.
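This verification can be carried out mechanically. A minimal Python sketch (plain 0/1 arithmetic; the variable names follow the article's ci, x, y):

```python
# Check that the sum (OR) of minterms m1, m2, m4, m7 reproduces the sum bit
# u = ci XOR x XOR y for all eight input combinations.
for ci in (0, 1):
    for x in (0, 1):
        for y in (0, 1):
            m1 = (not ci) and (not x) and y
            m2 = (not ci) and x and (not y)
            m4 = ci and (not x) and (not y)
            m7 = ci and x and y
            sop = m1 or m2 or m4 or m7
            assert int(sop) == ci ^ x ^ y
print("sum-of-minterms form matches the truth table")
```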
Maxterms
For a boolean function of n variables ${x_{1},\dots ,x_{n}}$, a sum term in which each of the n variables appears once (either in its complemented or uncomplemented form) is called a maxterm. Thus, a maxterm is a logical expression of n variables that employs only the complement operator and the disjunction operator. Maxterms are a dual of the minterm idea (i.e., exhibiting a complementary symmetry in all respects). Instead of using ANDs and complements, we use ORs and complements and proceed similarly.
For example, the following are two of the eight maxterms of three variables:
a + b′ + c
a′ + b + c
There are again 2n maxterms of n variables, since a variable in the maxterm expression can also be in either its direct or its complemented form—two choices per variable.
Indexing maxterms
Each maxterm is assigned an index based on the opposite conventional binary encoding used for minterms. The maxterm convention assigns the value 0 to the direct form $(x_{i})$ and 1 to the complemented form $(x'_{i})$. For example, we assign the index 6 to the maxterm $a'+b'+c$ (110) and denote that maxterm as M6. Similarly M0 of these three variables is $a+b+c$ (000) and M7 is $a'+b'+c'$ (111).
Functional equivalence
It is apparent that maxterm n gives a false value (i.e., 0) for just one combination of the input variables. For example, maxterm 5, a′ + b + c′, is false only when a and c both are true and b is false—the input arrangement where a = 1, b = 0, c = 1 results in 0.
If one is given a truth table of a logical function, it is possible to write the function as a "product of sums". This is a special form of conjunctive normal form. For example, if given the truth table for the carry-out bit co of one bit position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:
ci x y   co(ci,x,y)
0  0  0  0
0  0  1  0
0  1  0  0
0  1  1  1
1  0  0  0
1  0  1  1
1  1  0  1
1  1  1  1
Observing that the rows that have an output of 0 are the 1st, 2nd, 3rd, and 5th, we can write co as a product of maxterms $M_{0},M_{1},M_{2}$ and $M_{4}$. If we wish to verify this:
$co(ci,x,y)=M_{0}M_{1}M_{2}M_{4}=(ci+x+y)(ci+x+y')(ci+x'+y)(ci'+x+y)$
evaluated for all 8 combinations of the three variables will match the table.
Dualization
The complement of a minterm is the respective maxterm. This can be easily verified by using de Morgan's law. For example: $M_{5}=a'+b+c'=(ab'c)'=m_{5}'$
Non-canonical PoS and SoP forms
It is often the case that the canonical minterm form can be simplified to an equivalent SoP form. This simplified form would still consist of a sum of product terms. However, in the simplified form, it is possible to have fewer product terms and/or product terms that contain fewer variables. For example, the following 3-variable function:
a b c   f(a,b,c)
0 0 0   0
0 0 1   0
0 1 0   0
0 1 1   1
1 0 0   0
1 0 1   0
1 1 0   0
1 1 1   1
has the canonical minterm representation: $f=a'bc+abc$, but it has an equivalent simplified form: $f=bc$. In this trivial example, it is obvious that $bc=a'bc+abc$, but the simplified form has both fewer product terms, and the term has fewer variables.
The most simplified SoP representation of a function is referred to as a minimal SoP form.
In a similar manner, a canonical maxterm form can have a simplified PoS form.
While this example was simplified by applying normal algebraic methods [$f=(a'+a)bc$], in less obvious cases a convenient method for finding the minimal PoS/SoP form of a function with up to four variables is using a Karnaugh map.
The minimal PoS and SoP forms are important for finding optimal implementations of boolean functions and minimizing logic circuits.
Application example
The sample truth tables for minterms and maxterms above are sufficient to establish the canonical form for a single bit position in the addition of binary numbers, but are not sufficient to design the digital logic unless your inventory of gates includes AND and OR. Where performance is an issue (as in the Apollo Guidance Computer), the available parts are more likely to be NAND and NOR because of the complementing action inherent in transistor logic. The values are defined as voltage states, one near ground and one near the DC supply voltage Vcc, e.g. +5 VDC. If the higher voltage is defined as the 1 "true" value, a NOR gate is the simplest possible useful logical element.
Specifically, a 3-input NOR gate may consist of 3 bipolar junction transistors with their emitters all grounded, their collectors tied together and linked to Vcc through a load impedance. Each base is connected to an input signal, and the common collector point presents the output signal. Any input that is a 1 (high voltage) to its base shorts its transistor's emitter to its collector, causing current to flow through the load impedance, which brings the collector voltage (the output) very near to ground. That result is independent of the other inputs. Only when all 3 input signals are 0 (low voltage) do the emitter-collector impedances of all 3 transistors remain very high. Then very little current flows, and the voltage-divider effect with the load impedance imposes on the collector point a high voltage very near to Vcc.
The complementing property of these gate circuits may seem like a drawback when trying to implement a function in canonical form, but there is a compensating bonus: such a gate with only one input implements the complementing function, which is required frequently in digital logic.
This example assumes the Apollo parts inventory: 3-input NOR gates only, but the discussion is simplified by supposing that 4-input NOR gates are also available (in Apollo, those were compounded out of pairs of 3-input NORs).
Canonical and non-canonical consequences of NOR gates
A set of 8 NOR gates, if their inputs are all combinations of the direct and complement forms of the 3 input variables ci, x, and y, always produce minterms, never maxterms—that is, of the 8 gates required to process all combinations of 3 input variables, only one has the output value 1. That's because a NOR gate, despite its name, could better be viewed (using De Morgan's law) as the AND of the complements of its input signals.
The reason this is not a problem is the duality of minterms and maxterms, i.e. each maxterm is the complement of the like-indexed minterm, and vice versa.
In the minterm example above, we wrote $u(ci,x,y)=m_{1}+m_{2}+m_{4}+m_{7}$ but to perform this with a 4-input NOR gate we need to restate it as a product of sums (PoS), where the sums are the opposite maxterms. That is,
$u(ci,x,y)=\mathrm {AND} (M_{0},M_{3},M_{5},M_{6})=\mathrm {NOR} (m_{0},m_{3},m_{5},m_{6}).$
Truth tables
ci x y   M0 M3 M5 M6   AND   u(ci,x,y)
0  0  0   0  1  1  1    0    0
0  0  1   1  1  1  1    1    1
0  1  0   1  1  1  1    1    1
0  1  1   1  0  1  1    0    0
1  0  0   1  1  1  1    1    1
1  0  1   1  1  0  1    0    0
1  1  0   1  1  1  0    0    0
1  1  1   1  1  1  1    1    1
ci x y   m0 m3 m5 m6   NOR   u(ci,x,y)
0  0  0   1  0  0  0    0    0
0  0  1   0  0  0  0    1    1
0  1  0   0  0  0  0    1    1
0  1  1   0  1  0  0    0    0
1  0  0   0  0  0  0    1    1
1  0  1   0  0  1  0    0    0
1  1  0   0  0  0  1    0    0
1  1  1   0  0  0  0    1    1
In the maxterm example above, we wrote $co(ci,x,y)=M_{0}M_{1}M_{2}M_{4}$ but to perform this with a 4-input NOR gate we need to notice the equality to the NOR of the same minterms. That is,
$co(ci,x,y)=\mathrm {AND} (M_{0},M_{1},M_{2},M_{4})=\mathrm {NOR} (m_{0},m_{1},m_{2},m_{4}).$
Truth tables
ci x y   M0 M1 M2 M4   AND   co(ci,x,y)
0  0  0   0  1  1  1    0    0
0  0  1   1  0  1  1    0    0
0  1  0   1  1  0  1    0    0
0  1  1   1  1  1  1    1    1
1  0  0   1  1  1  0    0    0
1  0  1   1  1  1  1    1    1
1  1  0   1  1  1  1    1    1
1  1  1   1  1  1  1    1    1
ci x y   m0 m1 m2 m4   NOR   co(ci,x,y)
0  0  0   1  0  0  0    0    0
0  0  1   0  1  0  0    0    0
0  1  0   0  0  1  0    0    0
0  1  1   0  0  0  0    1    1
1  0  0   0  0  0  1    0    0
1  0  1   0  0  0  0    1    1
1  1  0   0  0  0  0    1    1
1  1  1   0  0  0  0    1    1
Design trade-offs considered in addition to canonical forms
One might suppose that the work of designing an adder stage is now complete, but we haven't addressed the fact that all 3 of the input variables have to appear in both their direct and complement forms. There's no difficulty about the addends x and y in this respect, because they are static throughout the addition and thus are normally held in latch circuits that routinely have both direct and complement outputs. (The simplest latch circuit made of NOR gates is a pair of gates cross-coupled to make a flip-flop: the output of each is wired as one of the inputs to the other.) There is also no need to create the complement form of the sum u. However, the carry out of one bit position must be passed as the carry into the next bit position in both direct and complement forms. The most straightforward way to do this is to pass co through a 1-input NOR gate and label the output co′, but that would add a gate delay in the worst possible place, slowing down the rippling of carries from right to left. An additional 4-input NOR gate building the canonical form of co′ (out of the opposite minterms as co) solves this problem.
$co'(ci,x,y)=\mathrm {AND} (M_{3},M_{5},M_{6},M_{7})=\mathrm {NOR} (m_{3},m_{5},m_{6},m_{7}).$
Truth tables
ci x y   M3 M5 M6 M7   AND   co'(ci,x,y)
0  0  0   1  1  1  1    1    1
0  0  1   1  1  1  1    1    1
0  1  0   1  1  1  1    1    1
0  1  1   0  1  1  1    0    0
1  0  0   1  1  1  1    1    1
1  0  1   1  0  1  1    0    0
1  1  0   1  1  0  1    0    0
1  1  1   1  1  1  0    0    0
ci x y   m3 m5 m6 m7   NOR   co'(ci,x,y)
0  0  0   0  0  0  0    1    1
0  0  1   0  0  0  0    1    1
0  1  0   0  0  0  0    1    1
0  1  1   1  0  0  0    0    0
1  0  0   0  0  0  0    1    1
1  0  1   0  1  0  0    0    0
1  1  0   0  0  1  0    0    0
1  1  1   0  0  0  1    0    0
The trade-off to maintain full speed in this way includes an unexpected cost (in addition to having to use a bigger gate). If we'd just used that 1-input gate to complement co, there would have been no use for the minterm $m_{7}$, and the gate that generated it could have been eliminated. Nevertheless, it is still a good trade.
Now we could have implemented those functions exactly according to their SoP and PoS canonical forms, by turning NOR gates into the functions specified. A NOR gate is made into an OR gate by passing its output through a 1-input NOR gate; and it is made into an AND gate by passing each of its inputs through a 1-input NOR gate. However, this approach not only increases the number of gates used, but also doubles the number of gate delays processing the signals, cutting the processing speed in half. Consequently, whenever performance is vital, going beyond canonical forms and doing the Boolean algebra to make the unenhanced NOR gates do the job is well worthwhile.
Top-down vs. bottom-up design
We have now seen how the minterm/maxterm tools can be used to design an adder stage in canonical form with the addition of some Boolean algebra, costing just 2 gate delays for each of the outputs. That's the "top-down" way to design the digital circuit for this function, but is it the best way? The discussion has focused on identifying "fastest" as "best," and the augmented canonical form meets that criterion flawlessly, but sometimes other factors predominate. The designer may have a primary goal of minimizing the number of gates, and/or of minimizing the fanouts of signals to other gates since big fanouts reduce resilience to a degraded power supply or other environmental factors. In such a case, a designer may develop the canonical-form design as a baseline, then try a bottom-up development, and finally compare the results.
The bottom-up development involves noticing that u = ci XOR (x XOR y), where XOR means eXclusive OR [true when either input is true but not when both are true], and that co = ci x + x y + y ci. One such development takes twelve NOR gates in all: six 2-input gates and two 1-input gates to produce u in 5 gate delays, plus three 2-input gates and one 3-input gate to produce co′ in 2 gate delays. The canonical baseline took eight 3-input NOR gates plus three 4-input NOR gates to produce u, co and co′ in 2 gate delays. If the circuit inventory actually includes 4-input NOR gates, the top-down canonical design looks like a winner in both gate count and speed. But if (contrary to our convenient supposition) the circuits are actually 3-input NOR gates, of which two are required for each 4-input NOR function, then the canonical design takes 14 gates compared to 12 for the bottom-up approach, but still produces the sum digit u considerably faster. The fanout comparison is tabulated as:
Variables   Top-down    Bottom-up
x           4           1
x'          4           3
y           4           1
y'          4           3
ci          4           1
ci'         4           3
M or m      4@1, 4@2    N/A
x XOR y     N/A         2
Misc        N/A         5@1
Max         4           3
The description of the bottom-up development mentions co′ as an output but not co. Does that design simply never need the direct form of the carry out? Well, yes and no. At each stage, the calculation of co′ depends only on ci′, x′ and y′, which means that the carry propagation ripples along the bit positions just as fast as in the canonical design without ever developing co. The calculation of u, which does require ci to be made from ci′ by a 1-input NOR, is slower but for any word length the design only pays that penalty once (when the leftmost sum digit is developed). That's because those calculations overlap, each in what amounts to its own little pipeline without affecting when the next bit position's sum bit can be calculated. And, to be sure, the co′ out of the leftmost bit position will probably have to be complemented as part of the logic determining whether the addition overflowed. But using 3-input NOR gates, the bottom-up design is very nearly as fast for doing parallel addition on a non-trivial word length, cuts down on the gate count, and uses lower fanouts ... so it wins if gate count and/or fanout are paramount!
We'll leave the exact circuitry of the bottom-up design of which all these statements are true as an exercise for the interested reader, assisted by one more algebraic formula: u = [ci(x XOR y) + ci′(x XOR y)′]′. Decoupling the carry propagation from the sum formation in this way is what elevates the performance of a carry-lookahead adder over that of a ripple carry adder.
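The closing identity can itself be checked exhaustively. A minimal Python sketch (0/1 arithmetic; nothing here depends on the gate-level discussion):

```python
# Check that u = [ci(x XOR y) + ci'(x XOR y)']' equals ci XOR x XOR y for every input.
for ci in (0, 1):
    for x in (0, 1):
        for y in (0, 1):
            s = x ^ y
            u = 1 - ((ci & s) | ((1 - ci) & (1 - s)))  # complement of XNOR(ci, s)
            assert u == ci ^ x ^ y
print("identity verified")
```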
Application in digital circuit design
One application of Boolean algebra is digital circuit design, with one goal to minimize the number of gates and another to minimize the settling time.
There are sixteen possible functions of two variables, but in digital logic hardware, the simplest gate circuits implement only four of them: conjunction (AND), disjunction (inclusive OR), and the respective complements of those (NAND and NOR).
Most gate circuits accept more than 2 input variables; for example, the spaceborne Apollo Guidance Computer, which pioneered the application of integrated circuits in the 1960s, was built with only one type of gate, a 3-input NOR, whose output is true only when all 3 inputs are false.[2][3]
See also
• List of Boolean algebra topics
References
1. Peter J. Pahl; Rudolf Damrath (2012-12-06). Mathematical Foundations of Computational Engineering: A Handbook. Springer Science & Business Media. pp. 15–. ISBN 978-3-642-56893-0.
2. Hall, Eldon C. (1996). Journey to the Moon: The History of the Apollo Guidance Computer. AIAA. ISBN 1-56347-185-X.
3. "APOLLO GUIDANCE COMPUTER (AGC) Schematics". klabs.org. Rich Katz. Retrieved 2021-06-19. To see how NOR gate logic was used in the Apollo Guidance Computer's ALU, select any of the 4-BIT MODULE entries in the Index to Drawings, and expand images as desired.
Further reading
• Bender, Edward A.; Williamson, S. Gill (2005). A Short Course in Discrete Mathematics. Mineola, NY: Dover Publications, Inc. ISBN 0-486-43946-1.
The authors demonstrate a proof that any Boolean (logic) function can be expressed in either disjunctive or conjunctive normal form (cf pages 5–6); the proof simply proceeds by creating all 2N rows of N Boolean variables and demonstrates that each row ("minterm" or "maxterm") has a unique Boolean expression. Any Boolean function of the N variables can be derived from a composite of the rows whose minterm or maxterm are logical 1s ("trues")
• McCluskey, E. J. (1965). Introduction to the Theory of Switching Circuits. NY: McGraw–Hill Book Company. p. 78. LCCN 65-17394. Canonical expressions are defined and described
• Hill, Fredrick J.; Peterson, Gerald R. (1974). Introduction to Switching Theory and Logical Design (2nd ed.). NY: John Wiley & Sons. p. 101. ISBN 0-471-39882-9. Minterm and maxterm designation of functions
External links
• Boole, George (1848). Translated by Wilkins, David R. "The Calculus of Logic". Cambridge and Dublin Mathematical Journal. III: 183–198.
Divisor function
In mathematics, and specifically in number theory, a divisor function is an arithmetic function related to the divisors of an integer. When referred to as the divisor function, it counts the number of divisors of an integer (including 1 and the number itself). It appears in a number of remarkable identities, including relationships on the Riemann zeta function and the Eisenstein series of modular forms. Divisor functions were studied by Ramanujan, who gave a number of important congruences and identities; these are treated separately in the article Ramanujan's sum.
"Robin's theorem" redirects here. For Robbins' theorem in graph theory, see Robbins' theorem.
A related function is the divisor summatory function, which, as the name implies, is a sum over the divisor function.
Definition
The sum of positive divisors function σz(n), for a real or complex number z, is defined as the sum of the zth powers of the positive divisors of n. It can be expressed in sigma notation as
$\sigma _{z}(n)=\sum _{d\mid n}d^{z}\,\!,$
where ${d\mid n}$ is shorthand for "d divides n". The notations d(n), ν(n) and τ(n) (for the German Teiler = divisors) are also used to denote σ0(n), or the number-of-divisors function[1][2] (OEIS: A000005). When z is 1, the function is called the sigma function or sum-of-divisors function,[1][3] and the subscript is often omitted, so σ(n) is the same as σ1(n) (OEIS: A000203).
The aliquot sum s(n) of n is the sum of the proper divisors (that is, the divisors excluding n itself, OEIS: A001065), and equals σ1(n) − n; the aliquot sequence of n is formed by repeatedly applying the aliquot sum function.
Example
For example, σ0(12) is the number of the divisors of 12:
${\begin{aligned}\sigma _{0}(12)&=1^{0}+2^{0}+3^{0}+4^{0}+6^{0}+12^{0}\\&=1+1+1+1+1+1=6,\end{aligned}}$
while σ1(12) is the sum of all the divisors:
${\begin{aligned}\sigma _{1}(12)&=1^{1}+2^{1}+3^{1}+4^{1}+6^{1}+12^{1}\\&=1+2+3+4+6+12=28,\end{aligned}}$
and the aliquot sum s(12) of proper divisors is:
${\begin{aligned}s(12)&=1^{1}+2^{1}+3^{1}+4^{1}+6^{1}\\&=1+2+3+4+6=16.\end{aligned}}$
σ-1(n) is sometimes called the abundancy index of n, and we have:
${\begin{aligned}\sigma _{-1}(12)&=1^{-1}+2^{-1}+3^{-1}+4^{-1}+6^{-1}+12^{-1}\\&={\tfrac {1}{1}}+{\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{4}}+{\tfrac {1}{6}}+{\tfrac {1}{12}}\\&={\tfrac {12}{12}}+{\tfrac {6}{12}}+{\tfrac {4}{12}}+{\tfrac {3}{12}}+{\tfrac {2}{12}}+{\tfrac {1}{12}}\\&={\tfrac {12+6+4+3+2+1}{12}}={\tfrac {28}{12}}={\tfrac {7}{3}}={\tfrac {\sigma _{1}(12)}{12}}\end{aligned}}$
Table of values
The cases x = 2 to 5 are listed in OEIS: A001157 − OEIS: A001160, x = 6 to 24 are listed in OEIS: A013954 − OEIS: A013972.
n    factorization   σ0(n)   σ1(n)   σ2(n)   σ3(n)    σ4(n)
1    1               1       1       1       1        1
2    2               2       3       5       9        17
3    3               2       4       10      28       82
4    2²              3       7       21      73       273
5    5               2       6       26      126      626
6    2×3             4       12      50      252      1394
7    7               2       8       50      344      2402
8    2³              4       15      85      585      4369
9    3²              3       13      91      757      6643
10   2×5             4       18      130     1134     10642
11   11              2       12      122     1332     14642
12   2²×3            6       28      210     2044     22386
13   13              2       14      170     2198     28562
14   2×7             4       24      250     3096     40834
15   3×5             4       24      260     3528     51332
16   2⁴              5       31      341     4681     69905
17   17              2       18      290     4914     83522
18   2×3²            6       39      455     6813     112931
19   19              2       20      362     6860     130322
20   2²×5            6       42      546     9198     170898
21   3×7             4       32      500     9632     196964
22   2×11            4       36      610     11988    248914
23   23              2       24      530     12168    279842
24   2³×3            8       60      850     16380    358258
25   5²              3       31      651     15751    391251
26   2×13            4       42      850     19782    485554
27   3³              4       40      820     20440    538084
28   2²×7            6       56      1050    25112    655746
29   29              2       30      842     24390    707282
30   2×3×5           8       72      1300    31752    872644
31   31              2       32      962     29792    923522
32   2⁵              6       63      1365    37449    1118481
33   3×11            4       48      1220    37296    1200644
34   2×17            4       54      1450    44226    1419874
35   5×7             4       48      1300    43344    1503652
36   2²×3²           9       91      1911    55261    1813539
37   37              2       38      1370    50654    1874162
38   2×19            4       60      1810    61740    2215474
39   3×13            4       56      1700    61544    2342084
40   2³×5            8       90      2210    73710    2734994
41   41              2       42      1682    68922    2825762
42   2×3×7           8       96      2500    86688    3348388
43   43              2       44      1850    79508    3418802
44   2²×11           6       84      2562    97236    3997266
45   3²×5            6       78      2366    95382    4158518
46   2×23            4       72      2650    109512   4757314
47   47              2       48      2210    103824   4879682
48   2⁴×3            10      124     3410    131068   5732210
49   7²              3       57      2451    117993   5767203
50   2×5²            6       93      3255    141759   6651267
Properties
Formulas at prime powers
For a prime number p,
${\begin{aligned}\sigma _{0}(p)&=2\\\sigma _{0}(p^{n})&=n+1\\\sigma _{1}(p)&=p+1\end{aligned}}$
because by definition, the factors of a prime number are 1 and itself. Also, where pn# denotes the primorial,
$\sigma _{0}(p_{n}\#)=2^{n}$
since each of the n prime factors is either included in a divisor or omitted, giving a binary choice ($p_{i}$ or 1) for each of the n terms and hence $2^{n}$ divisors. However, these are not in general the smallest numbers whose number of divisors is a power of two; instead, the smallest such number may be obtained by multiplying together the first n Fermi–Dirac primes, prime powers whose exponent is a power of two.[4]
Clearly, $1<\sigma _{0}(n)<n$ for all $n>2$, and $\sigma _{x}(n)>n$ for all $n>1$, $x>0$ .
The divisor function is multiplicative (since each divisor c of the product mn with $\gcd(m,n)=1$ corresponds uniquely to a divisor a of m and a divisor b of n with c = ab), but not completely multiplicative:
$\gcd(a,b)=1\Longrightarrow \sigma _{x}(ab)=\sigma _{x}(a)\sigma _{x}(b).$
The consequence of this is that, if we write
$n=\prod _{i=1}^{r}p_{i}^{a_{i}}$
where r = ω(n) is the number of distinct prime factors of n, pi is the ith prime factor, and ai is the maximum power of pi by which n is divisible, then we have: [5]
$\sigma _{x}(n)=\prod _{i=1}^{r}\sum _{j=0}^{a_{i}}p_{i}^{jx}=\prod _{i=1}^{r}\left(1+p_{i}^{x}+p_{i}^{2x}+\cdots +p_{i}^{a_{i}x}\right).$
which, when x ≠ 0, is equivalent to the useful formula: [5]
$\sigma _{x}(n)=\prod _{i=1}^{r}{\frac {p_{i}^{(a_{i}+1)x}-1}{p_{i}^{x}-1}}.$
When x = 0, $\sigma _{0}(n)$ is: [5]
$\sigma _{0}(n)=\prod _{i=1}^{r}(a_{i}+1).$
This result can be directly deduced from the fact that all divisors of $n$ are uniquely determined by the distinct tuples $(x_{1},x_{2},...,x_{i},...,x_{r})$ of integers with $0\leq x_{i}\leq a_{i}$ (i.e. $a_{i}+1$ independent choices for each $x_{i}$).
For example, if n is 24, there are two distinct prime factors (p1 is 2; p2 is 3); noting that 24 = 2³×3¹, a1 is 3 and a2 is 1. Thus we can calculate $\sigma _{0}(24)$ as follows:
$\sigma _{0}(24)=\prod _{i=1}^{2}(a_{i}+1)=(3+1)(1+1)=4\cdot 2=8.$
The eight divisors counted by this formula are 1, 2, 4, 8, 3, 6, 12, and 24.
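As an illustration of the product formula, here is a short Python sketch (the helper names are ours, not from any particular library) that factors n and evaluates $\sigma _{x}(n)$ as a product over prime powers:

def prime_factorization(n):
    """Return [(p, a), ...] with n equal to the product of the p**a."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            factors.append((p, a))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def sigma_x(x, n):
    """sigma_x(n) computed via the product over prime powers."""
    result = 1
    for p, a in prime_factorization(n):
        result *= sum(p**(j * x) for j in range(a + 1))
    return result

print(sigma_x(0, 24))   # 8, matching (3+1)(1+1)
print(sigma_x(1, 24))   # 60, the sum of the eight divisors listed above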
Other properties and identities
Euler proved the remarkable recurrence:[6][7][8]
${\begin{aligned}\sigma (n)&=\sigma (n-1)+\sigma (n-2)-\sigma (n-5)-\sigma (n-7)+\sigma (n-12)+\sigma (n-15)+\cdots \\[12mu]&=\sum _{i\in \mathbb {N} }(-1)^{i+1}\left(\sigma \left(n-{\frac {1}{2}}\left(3i^{2}-i\right)\right)+\sigma \left(n-{\frac {1}{2}}\left(3i^{2}+i\right)\right)\right),\end{aligned}}$
where $\sigma (0)=n$ if it occurs and $\sigma (x)=0$ for $x<0$, and ${\tfrac {1}{2}}\left(3i^{2}\mp i\right)$ are consecutive pairs of generalized pentagonal numbers (OEIS: A001318, starting at offset 1). Indeed, Euler proved this by logarithmic differentiation of the identity in his pentagonal number theorem.
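The recurrence can be turned directly into a computation. The following Python sketch (an illustrative implementation of ours under the stated conventions, with $\sigma (0)$ taken to be n) builds σ(1), ..., σ(N) from the generalized pentagonal numbers:

def sigma_table(N):
    """sigma(1..N) via Euler's pentagonal-number recurrence."""
    s = [0] * (N + 1)
    for n in range(1, N + 1):
        total, i = 0, 1
        while (3 * i * i - i) // 2 <= n:
            sign = (-1) ** (i + 1)
            for g in ((3 * i * i - i) // 2, (3 * i * i + i) // 2):
                m = n - g
                if m > 0:
                    total += sign * s[m]
                elif m == 0:
                    total += sign * n   # the convention sigma(0) = n
            i += 1
        s[n] = total
    return s[1:]

print(sigma_table(12))   # [1, 3, 4, 7, 6, 12, 8, 15, 13, 18, 12, 28]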
For a non-square integer, n, every divisor, d, of n is paired with divisor n/d of n and $\sigma _{0}(n)$ is even; for a square integer, one divisor (namely ${\sqrt {n}}$) is not paired with a distinct divisor and $\sigma _{0}(n)$ is odd. Similarly, the number $\sigma _{1}(n)$ is odd if and only if n is a square or twice a square.[9]
We also note s(n) = σ(n) − n. Here s(n) denotes the sum of the proper divisors of n, that is, the divisors of n excluding n itself. This function is used to recognize perfect numbers, which are the n such that s(n) = n. If s(n) > n, then n is an abundant number, and if s(n) < n, then n is a deficient number.
If n is a power of 2, $n=2^{k}$, then $\sigma (n)=2\cdot 2^{k}-1=2n-1$ and $s(n)=n-1$, which makes n almost-perfect.
As an example, for two primes $p,q:p<q$, let
$n=p\,q$.
Then
$\sigma (n)=(p+1)(q+1)=n+1+(p+q),$
$\varphi (n)=(p-1)(q-1)=n+1-(p+q),$
and
$n+1=(\sigma (n)+\varphi (n))/2,$
$p+q=(\sigma (n)-\varphi (n))/2,$
where $\varphi (n)$ is Euler's totient function.
Then, the roots of
$(x-p)(x-q)=x^{2}-(p+q)x+n=x^{2}-[(\sigma (n)-\varphi (n))/2]x+[(\sigma (n)+\varphi (n))/2-1]=0$
express p and q in terms of σ(n) and φ(n) only, requiring no knowledge of n or $p+q$, as
$p=(\sigma (n)-\varphi (n))/4-{\sqrt {[(\sigma (n)-\varphi (n))/4]^{2}-[(\sigma (n)+\varphi (n))/2-1]}},$
$q=(\sigma (n)-\varphi (n))/4+{\sqrt {[(\sigma (n)-\varphi (n))/4]^{2}-[(\sigma (n)+\varphi (n))/2-1]}}.$
Also, knowing n and either $\sigma (n)$ or $\varphi (n)$, or, alternatively, $p+q$ and either $\sigma (n)$ or $\varphi (n)$ allows an easy recovery of p and q.
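A small Python sketch makes the recovery concrete; it solves the displayed quadratic using the sum $p+q=(\sigma (n)-\varphi (n))/2$ and the product $pq=(\sigma (n)+\varphi (n))/2-1$ (the function name and the sample values p = 11, q = 13 are ours):

from math import isqrt

def recover_factors(sigma_n, phi_n):
    """Roots of x^2 - [(sigma-phi)/2] x + [(sigma+phi)/2 - 1] = 0."""
    s = (sigma_n - phi_n) // 2          # p + q
    prod = (sigma_n + phi_n) // 2 - 1   # p * q = n
    disc = s * s - 4 * prod             # a perfect square in this setting
    r = isqrt(disc)
    return (s - r) // 2, (s + r) // 2

# Example with p = 11, q = 13: n = 143, sigma(n) = 12*14 = 168, phi(n) = 10*12 = 120.
print(recover_factors(168, 120))   # (11, 13)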
In 1984, Roger Heath-Brown proved that the equality
$\sigma _{0}(n)=\sigma _{0}(n+1)$
is true for infinitely many values of n, see OEIS: A005237.
Series relations
Two Dirichlet series involving the divisor function are: [10]
$\sum _{n=1}^{\infty }{\frac {\sigma _{a}(n)}{n^{s}}}=\zeta (s)\zeta (s-a)\quad {\text{for}}\quad s>1,s>a+1,$
where $\zeta $ is the Riemann zeta function. The series for d(n) = σ0(n) gives: [10]
$\sum _{n=1}^{\infty }{\frac {d(n)}{n^{s}}}=\zeta ^{2}(s)\quad {\text{for}}\quad s>1,$
and a Ramanujan identity[11]
$\sum _{n=1}^{\infty }{\frac {\sigma _{a}(n)\sigma _{b}(n)}{n^{s}}}={\frac {\zeta (s)\zeta (s-a)\zeta (s-b)\zeta (s-a-b)}{\zeta (2s-a-b)}},$
which is a special case of the Rankin–Selberg convolution.
A Lambert series involving the divisor function is: [12]
$\sum _{n=1}^{\infty }q^{n}\sigma _{a}(n)=\sum _{n=1}^{\infty }\sum _{j=1}^{\infty }n^{a}q^{j\,n}=\sum _{n=1}^{\infty }{\frac {n^{a}q^{n}}{1-q^{n}}}$
for arbitrary complex |q| ≤ 1 and a. This summation also appears as the Fourier series of the Eisenstein series and the invariants of the Weierstrass elliptic functions.
For $k>0$, there is an explicit series representation with Ramanujan sums $c_{m}(n)$ as :[13]
$\sigma _{k}(n)=\zeta (k+1)n^{k}\sum _{m=1}^{\infty }{\frac {c_{m}(n)}{m^{k+1}}}.$
The computation of the first terms of $c_{m}(n)$ shows its oscillations around the "average value" $\zeta (k+1)n^{k}$:
$\sigma _{k}(n)=\zeta (k+1)n^{k}\left[1+{\frac {(-1)^{n}}{2^{k+1}}}+{\frac {2\cos {\frac {2\pi n}{3}}}{3^{k+1}}}+{\frac {2\cos {\frac {\pi n}{2}}}{4^{k+1}}}+\cdots \right]$
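The oscillation can be observed numerically. The following Python sketch (illustrative only; the helpers for the Möbius function, the Ramanujan sum $c_{m}(n)$ and a brute-force ζ are ours) truncates the series at m = M and compares it with $\sigma _{1}(12)=28$:

from math import gcd

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0            # a squared prime factor gives mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def ramanujan_sum(m, n):
    """c_m(n) = sum over d | gcd(m, n) of d * mu(m/d)."""
    g = gcd(m, n)
    return sum(d * mobius(m // d) for d in range(1, g + 1) if g % d == 0)

def zeta(s, terms=10**6):
    return sum(1.0 / j**s for j in range(1, terms + 1))

def sigma_k_series(k, n, M=500):
    """Truncated series zeta(k+1) n^k sum_{m<=M} c_m(n)/m^(k+1)."""
    return zeta(k + 1) * n**k * sum(ramanujan_sum(m, n) / m**(k + 1)
                                    for m in range(1, M + 1))

print(sigma_k_series(1, 12))   # oscillates toward sigma_1(12) = 28 as M grows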
Growth rate
In little-o notation, the divisor function satisfies the inequality:[14][15]
${\mbox{for all }}\varepsilon >0,\quad d(n)=o(n^{\varepsilon }).$
More precisely, Severin Wigert showed that:[15]
$\limsup _{n\to \infty }{\frac {\log d(n)}{\log n/\log \log n}}=\log 2.$
On the other hand, since there are infinitely many prime numbers,[15]
$\liminf _{n\to \infty }d(n)=2.$
In Big-O notation, Peter Gustav Lejeune Dirichlet showed that the average order of the divisor function satisfies the following inequality:[16][17]
${\mbox{for all }}x\geq 1,\sum _{n\leq x}d(n)=x\log x+(2\gamma -1)x+O({\sqrt {x}}),$
where $\gamma $ is the Euler–Mascheroni constant. Improving the bound $O({\sqrt {x}})$ in this formula is known as Dirichlet's divisor problem.
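The average-order estimate is easy to test numerically. The following Python sketch (our own code; the cutoff x = 2000 is arbitrary) compares $\sum _{n\leq x}d(n)$ with the main term $x\log x+(2\gamma -1)x$:

from math import log, sqrt

EULER_GAMMA = 0.5772156649015329

def divisor_count(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

x = 2000
lhs = sum(divisor_count(n) for n in range(1, x + 1))
rhs = x * log(x) + (2 * EULER_GAMMA - 1) * x
# the normalized error |D(x) - main term| / sqrt(x) stays small,
# consistent with the O(sqrt(x)) error bound
print(lhs, rhs, abs(lhs - rhs) / sqrt(x))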
The behaviour of the sigma function is irregular. The asymptotic growth rate of the sigma function can be expressed by: [18]
$\limsup _{n\rightarrow \infty }{\frac {\sigma (n)}{n\,\log \log n}}=e^{\gamma },$
where lim sup is the limit superior. This result is Grönwall's theorem, published in 1913 (Grönwall 1913). His proof uses Mertens' 3rd theorem, which says that:
$\lim _{n\to \infty }{\frac {1}{\log n}}\prod _{p\leq n}{\frac {p}{p-1}}=e^{\gamma },$
where p denotes a prime.
In 1915, Ramanujan proved that under the assumption of the Riemann hypothesis, Robin's inequality
$\ \sigma (n)<e^{\gamma }n\log \log n$ (where γ is the Euler–Mascheroni constant)
holds for all sufficiently large n (Ramanujan 1997). The largest known value that violates the inequality is n = 5040. In 1984, Guy Robin proved that the inequality is true for all n > 5040 if and only if the Riemann hypothesis is true (Robin 1984). This is Robin's theorem, and the inequality became widely known after his work. Robin furthermore showed that if the Riemann hypothesis is false, then there are infinitely many values of n that violate the inequality, and it is known that the smallest such n > 5040 must be superabundant (Akbary & Friggstad 2009). It has been shown that the inequality holds for large odd and square-free integers, and that the Riemann hypothesis is equivalent to the inequality just for n divisible by the fifth power of a prime (Choie et al. 2007).
Robin also proved, unconditionally, that the inequality:
$\ \sigma (n)<e^{\gamma }n\log \log n+{\frac {0.6483\ n}{\log \log n}}$
holds for all n ≥ 3.
A related bound was given by Jeffrey Lagarias in 2002, who proved that the Riemann hypothesis is equivalent to the statement that:
$\sigma (n)<H_{n}+e^{H_{n}}\log(H_{n})$
for every natural number n > 1, where $H_{n}$ is the nth harmonic number (Lagarias 2002).
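Both inequalities are straightforward to test for small n. The following Python sketch (our own code; the range N = 20000 is arbitrary) sieves $\sigma (n)$ and confirms that neither Robin's inequality (for n > 5040) nor Lagarias's inequality (for n > 1) is violated in this range:

from math import exp, log

EULER_GAMMA = 0.5772156649015329
N = 20000

# sieve sigma_1(n) for all n <= N
sigma = [0] * (N + 1)
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        sigma[m] += d

H = 0.0                      # running harmonic number H_n
robin_bad, lagarias_bad = [], []
for n in range(1, N + 1):
    H += 1.0 / n
    if n > 5040 and sigma[n] >= exp(EULER_GAMMA) * n * log(log(n)):
        robin_bad.append(n)
    if n > 1 and sigma[n] >= H + exp(H) * log(H):
        lagarias_bad.append(n)

print(robin_bad, lagarias_bad)   # both lists are empty in this range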
See also
• Divisor sum convolutions, lists a few identities involving the divisor functions
• Euler's totient function, Euler's phi function
• Refactorable number
• Table of divisors
• Unitary divisor
Notes
1. Long (1972, p. 46)
2. Pettofrezzo & Byrkit (1970, p. 63)
3. Pettofrezzo & Byrkit (1970, p. 58)
4. Ramanujan, S. (1915), "Highly Composite Numbers", Proceedings of the London Mathematical Society, s2-14 (1): 347–409, doi:10.1112/plms/s2_14.1.347; see section 47, pp. 405–406, reproduced in Collected Papers of Srinivasa Ramanujan, Cambridge Univ. Press, 2015, pp. 124–125
5. Hardy & Wright (2008), pp. 310 f, §16.7.
6. Euler, Leonhard; Bell, Jordan (2004). "An observation on the sums of divisors". arXiv:math/0411587.
7. https://scholarlycommons.pacific.edu/euler-works/175/, Découverte d'une loi tout extraordinaire des nombres par rapport à la somme de leurs diviseurs
8. https://scholarlycommons.pacific.edu/euler-works/542/, De mirabilis proprietatibus numerorum pentagonalium
9. Gioia & Vaidya (1967).
10. Hardy & Wright (2008), pp. 326–328, §17.5.
11. Hardy & Wright (2008), pp. 334–337, §17.8.
12. Hardy & Wright (2008), pp. 338–341, §17.10.
13. E. Krätzel (1981). Zahlentheorie. Berlin: VEB Deutscher Verlag der Wissenschaften. p. 130. (German)
14. Apostol (1976), p. 296.
15. Hardy & Wright (2008), pp. 342–347, §18.1.
16. Apostol (1976), Theorem 3.3.
17. Hardy & Wright (2008), pp. 347–350, §18.2.
18. Hardy & Wright (2008), pp. 469–471, §22.9.
References
• Akbary, Amir; Friggstad, Zachary (2009), "Superabundant numbers and the Riemann hypothesis" (PDF), American Mathematical Monthly, 116 (3): 273–275, doi:10.4169/193009709X470128, archived from the original (PDF) on 2014-04-11.
• Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001
• Bach, Eric; Shallit, Jeffrey, Algorithmic Number Theory, volume 1, 1996, MIT Press. ISBN 0-262-02405-5, see page 234 in section 8.8.
• Caveney, Geoffrey; Nicolas, Jean-Louis; Sondow, Jonathan (2011), "Robin's theorem, primes, and a new elementary reformulation of the Riemann Hypothesis" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 11: A33, arXiv:1110.5078, Bibcode:2011arXiv1110.5078C
• Choie, YoungJu; Lichiardopol, Nicolas; Moree, Pieter; Solé, Patrick (2007), "On Robin's criterion for the Riemann hypothesis", Journal de théorie des nombres de Bordeaux, 19 (2): 357–372, arXiv:math.NT/0604314, doi:10.5802/jtnb.591, ISSN 1246-7405, MR 2394891, S2CID 3207238, Zbl 1163.11059
• Gioia, A. A.; Vaidya, A. M. (1967), "Amicable numbers with opposite parity", The American Mathematical Monthly, 74 (8): 969–973, doi:10.2307/2315280, JSTOR 2315280, MR 0220659
• Grönwall, Thomas Hakon (1913), "Some asymptotic expressions in the theory of numbers", Transactions of the American Mathematical Society, 14: 113–122, doi:10.1090/S0002-9947-1913-1500940-6
• Hardy, G. H.; Wright, E. M. (2008) [1938], An Introduction to the Theory of Numbers, Revised by D. R. Heath-Brown and J. H. Silverman. Foreword by Andrew Wiles. (6th ed.), Oxford: Oxford University Press, ISBN 978-0-19-921986-5, MR 2445243, Zbl 1159.11001
• Ivić, Aleksandar (1985), The Riemann zeta-function. The theory of the Riemann zeta-function with applications, A Wiley-Interscience Publication, New York etc.: John Wiley & Sons, pp. 385–440, ISBN 0-471-80634-X, Zbl 0556.10026
• Lagarias, Jeffrey C. (2002), "An elementary problem equivalent to the Riemann hypothesis", The American Mathematical Monthly, 109 (6): 534–543, arXiv:math/0008177, doi:10.2307/2695443, ISSN 0002-9890, JSTOR 2695443, MR 1908008, S2CID 15884740
• Long, Calvin T. (1972), Elementary Introduction to Number Theory (2nd ed.), Lexington: D. C. Heath and Company, LCCN 77171950
• Pettofrezzo, Anthony J.; Byrkit, Donald R. (1970), Elements of Number Theory, Englewood Cliffs: Prentice Hall, LCCN 77081766
• Ramanujan, Srinivasa (1997), "Highly composite numbers, annotated by Jean-Louis Nicolas and Guy Robin", The Ramanujan Journal, 1 (2): 119–153, doi:10.1023/A:1009764017495, ISSN 1382-4090, MR 1606180, S2CID 115619659
• Robin, Guy (1984), "Grandes valeurs de la fonction somme des diviseurs et hypothèse de Riemann", Journal de Mathématiques Pures et Appliquées, Neuvième Série, 63 (2): 187–213, ISSN 0021-7824, MR 0774171
• Williams, Kenneth S. (2011), Number theory in the spirit of Liouville, London Mathematical Society Student Texts, vol. 76, Cambridge: Cambridge University Press, ISBN 978-0-521-17562-3, Zbl 1227.11002
External links
• Weisstein, Eric W. "Divisor Function". MathWorld.
• Weisstein, Eric W. "Robin's Theorem". MathWorld.
• Elementary Evaluation of Certain Convolution Sums Involving Divisor Functions PDF of a paper by Huard, Ou, Spearman, and Williams. Contains elementary (i.e. not relying on the theory of modular forms) proofs of divisor sum convolutions, formulas for the number of ways of representing a number as a sum of triangular numbers, and related results.
| Wikipedia |
Sum of radicals
In computational complexity theory, there is an open problem of whether some information about a sum of radicals may be computed in polynomial time depending on the input size, i.e., in the number of bits necessary to represent this sum. It is of importance for many problems in computational geometry, since the computation of the Euclidean distance between two points in the general case involves the computation of a square root, and therefore the perimeter of a polygon or the length of a polygonal chain takes the form of a sum of radicals.[1]
The sum of radicals is defined as a finite linear combination of radicals:
$\sum _{i=1}^{n}k_{i}{\sqrt[{r_{i}}]{x_{i}}},$
where $n,r_{i}$ are natural numbers and $k_{i},x_{i}$ are real numbers.
Most theoretical research in computational geometry of combinatorial character assumes the computational model of infinite precision real RAM, i.e., an abstract computer in which real numbers and operations on them are performed with infinite precision and the input size of a real number and the cost of an elementary operation are constants.[2] However, there is research in computational complexity, especially in computer algebra, where the input size of a number is the number of bits necessary for its representation.[3]
Of particular interest in computational geometry is the problem of determining the sign of the sum of radicals. For instance, the length of a polygonal path in which all vertices have integer coordinates may be expressed using the Pythagorean theorem as a sum of integer square roots, so in order to determine whether one path is longer or shorter than another in a Euclidean shortest path problem, it is necessary to determine the sign of an expression in which the first path's length is subtracted from the second; this expression is a sum of radicals.
In a similar way, the sum of radicals problem is inherent in the problem of minimum-weight triangulation in the Euclidean metric.
In 1991, Blömer proposed a polynomial time Monte Carlo algorithm for determining whether a sum of radicals is zero, or more generally whether it represents a rational number.[4] While Blömer's result does not resolve the computational complexity of finding the sign of the sum of radicals, it does imply that if the latter problem is in class NP, then it is also in co-NP.[4]
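The numerical difficulty can be illustrated as follows. The Python sketch below (a heuristic only, not a polynomial-time decision procedure; the example radicands are chosen by us) compares two sums of integer square roots by evaluating them at increasing precision with the standard decimal module; the open problem is precisely to bound, in polynomial time, how much precision is needed in general:

from decimal import Decimal, getcontext

def radical_sum(integers, digits):
    """Sum of square roots of the given integers at the given precision."""
    getcontext().prec = digits
    return sum(Decimal(k).sqrt() for k in integers)

a = [1005, 1009]   # sqrt(1005) + sqrt(1009)
b = [1002, 1012]   # sqrt(1002) + sqrt(1012); both pairs sum to 2014
for digits in (5, 10, 20, 40):
    diff = radical_sum(a, digits) - radical_sum(b, digits)
    print(digits, diff)   # the computed difference stabilizes once the precision is high enough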
See also
• Nested radicals
• Abel–Ruffini theorem
References
1. Mulzer, Wolfgang; Rote, Günter (2008). "Minimum-weight triangulation is NP-hard". Journal of the ACM. 55 (2): A11:1–A11:29. arXiv:cs/0601002. doi:10.1145/1346330.1346336. MR 2417038.
2. Franco P. Preparata and Michael Ian Shamos (1985). Computational Geometry - An Introduction. Springer-Verlag. ISBN 978-0-387-96131-6. 1st edition; 2nd printing, corrected and expanded, 1988; Russian translation, 1989.
3. Computer Algebra Handbook, 2003, ISBN 3-540-65466-6
4. Blömer, Johannes (1991). "Computing sums of radicals in polynomial time". [1991] Proceedings 32nd Annual Symposium of Foundations of Computer Science. pp. 670–677. doi:10.1109/SFCS.1991.185434. ISBN 978-0-8186-2445-2.
| Wikipedia |
Sum of residues formula
In mathematics, the residue formula says that the sum of the residues of a meromorphic differential form on a smooth proper algebraic curve vanishes.
Statement
In this article, X denotes a proper smooth algebraic curve over a field k. A meromorphic (algebraic) differential form $\omega $ has, at each closed point x in X, a residue which is denoted $\operatorname {res} _{x}\omega $. Since $\omega $ has poles only at finitely many points, in particular the residue vanishes for all but finitely many points. The residue formula states:
$\sum _{x}\operatorname {res} _{x}\omega =0.$
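A standard example illustrates the statement. On the projective line $X=\mathbb {P} ^{1}$ over $k$ with coordinate $t$, the form $\omega =dt/t$ has a simple pole at $t=0$ with $\operatorname {res} _{0}\omega =1$; in the coordinate $u=1/t$ at the point at infinity one has $\omega =-du/u$, so $\operatorname {res} _{\infty }\omega =-1$. All other residues vanish, and the residues indeed sum to zero.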
Proofs
A geometric way of proving the theorem is by reducing the theorem to the case when X is the projective line, and proving it by explicit computations in this case, for example in Altman & Kleiman (1970, Ch. VIII, p. 177).
Tate (1968) proves the theorem using a notion of traces for certain endomorphisms of infinite-dimensional vector spaces. The residue of a differential form $fdg$ can be expressed in terms of traces of endomorphisms on the fraction field $K_{x}$ of the completed local rings ${\hat {\mathcal {O}}}_{X,x}$ which leads to a conceptual proof of the formula. A more recent exposition along similar lines, using more explicitly the notion of Tate vector spaces, is given by Clausen (2009).
References
• Altman, Allen; Kleiman, Steven (1970), Introduction to Grothendieck duality theory, Lecture Notes in Mathematics, vol. 146, Springer, doi:10.1007/BFb0060932, MR 0274461
• Clausen, Dustin (2009), Infinite-dimensional linear algebra, determinant line bundle and Kac–Moody extension, Harvard 2009 seminar notes
• Tate, John (1968), "Residues of differentials on curves", Annales scientifiques de l'École Normale Supérieure, 4, 1 (1): 149–159, doi:10.24033/asens.1162
| Wikipedia |
Series (mathematics)
In mathematics, a series is, roughly speaking, the operation of adding infinitely many quantities, one after the other, to a given starting quantity.[1] The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance.
This article is about infinite sums. For finite sums, see Summation.
For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise.
In modern terminology, any (ordered) infinite sequence $(a_{1},a_{2},a_{3},\ldots )$ of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the ai one after the other. To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like
$a_{1}+a_{2}+a_{3}+\cdots ,$
or, using the summation sign,
$\sum _{i=1}^{\infty }a_{i}.$
The infinite sequence of additions implied by a series cannot be effectively carried on (at least in a finite amount of time). However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity (if the limit exists) of the finite sums of the n first terms of the series, which are called the nth partial sums of the series. That is,
$\sum _{i=1}^{\infty }a_{i}=\lim _{n\to \infty }\sum _{i=1}^{n}a_{i}.$
When this limit exists, one says that the series is convergent or summable, or that the sequence $(a_{1},a_{2},a_{3},\ldots )$ is summable. In this case, the limit is called the sum of the series. Otherwise, the series is said to be divergent.[2]
The notation $ \sum _{i=1}^{\infty }a_{i}$ denotes both the series—that is the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the result of the process. This is a generalization of the similar convention of denoting by $a+b$ both the addition—the process of adding—and its result—the sum of a and b.
Generally, the terms of a series come from a ring, often the field $\mathbb {R} $ of the real numbers or the field $\mathbb {C} $ of the complex numbers. In this case, the set of all series is itself a ring (and even an associative algebra), in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product.
Basic properties
An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form[3]
$a_{0}+a_{1}+a_{2}+\cdots ,$
where $(a_{n})$ is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (an abelian group). This is an expression that is obtained from the list of terms $a_{0},a_{1},\dots $ by laying them side by side, and conjoining them with the symbol "+". A series may also be represented by using summation notation, such as
$\sum _{n=0}^{\infty }a_{n}.$
If an abelian group A of terms has a concept of limit (e.g., if it is a metric space), then some series, the convergent series, can be interpreted as having a value in A, called the sum of the series. This includes the common cases from calculus, in which the group is the field of real numbers or the field of complex numbers. Given a series $ s=\sum _{n=0}^{\infty }a_{n}$, its kth partial sum is[2]
$s_{k}=\sum _{n=0}^{k}a_{n}=a_{0}+a_{1}+\cdots +a_{k}.$
By definition, the series $ \sum _{n=0}^{\infty }a_{n}$ converges to the limit L (or simply sums to L), if the sequence of its partial sums has a limit L.[3] In this case, one usually writes
$L=\sum _{n=0}^{\infty }a_{n}.$
A series is said to be convergent if it converges to some limit, or divergent when it does not. The value of this limit, if it exists, is then the value of the series.
Convergent series
A series Σan is said to converge or to be convergent when the sequence (sk) of partial sums has a finite limit. If the limit of sk is infinite or does not exist, the series is said to diverge.[4][2] When the limit of partial sums exists, it is called the value (or sum) of the series
$\sum _{n=0}^{\infty }a_{n}=\lim _{k\to \infty }s_{k}=\lim _{k\to \infty }\sum _{n=0}^{k}a_{n}.$
An easy way that an infinite series can converge is if all the an are zero for n sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense.
Working out the properties of the series that converge, even if infinitely many terms are nonzero, is the essence of the study of series. Consider the example
$1+{\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+\cdots +{\frac {1}{2^{n}}}+\cdots .$
It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, 1/2, 1/4, etc. There is always room to mark the next segment, because the amount of line remaining is always the same as the last segment marked: When we have marked off 1/2, we still have a piece of length 1/2 unmarked, so we can certainly mark the next 1/4. This argument does not prove that the sum is equal to 2 (although it is), but it does prove that it is at most 2. In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted S, it can be seen that
$S/2={\frac {1+{\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+\cdots }{2}}={\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}+\cdots .$
Therefore,
$S-S/2=1\Rightarrow S=2.$
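The same computation can be carried out numerically. The following Python sketch (our own, using exact rational arithmetic) lists the first partial sums and the remaining gap $2-s_{n}=1/2^{n}$:

from fractions import Fraction

partial = Fraction(0)
for n in range(10):
    partial += Fraction(1, 2**n)    # add the term 1/2^n
    print(n, partial, 2 - partial)  # the gap to 2 is exactly 1/2^n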
The idiom can be extended to other, equivalent notions of series. For instance, a recurring decimal, as in
$x=0.111\dots ,$
encodes the series
$\sum _{n=1}^{\infty }{\frac {1}{10^{n}}}.$
Since these series always converge to real numbers (because of what is called the completeness property of the real numbers), to talk about the series in this way is the same as to talk about the numbers for which they stand. In particular, the decimal expansion 0.111... can be identified with 1/9. This leads to an argument that 9 × 0.111... = 0.999... = 1, which only relies on the fact that the limit laws for series preserve the arithmetic operations; for more detail on this argument, see 0.999....
Examples of numerical series
For other examples, see List of mathematical series and Sums of reciprocals § Infinitely many terms.
• A geometric series is one where each successive term is produced by multiplying the previous term by a constant number (called the common ratio in this context). For example:[2]
$1+{1 \over 2}+{1 \over 4}+{1 \over 8}+{1 \over 16}+\cdots =\sum _{n=0}^{\infty }{1 \over 2^{n}}=2.$
In general, the geometric series
$\sum _{n=0}^{\infty }z^{n}$
converges if and only if $ |z|<1$, in which case it converges to $ {1 \over 1-z}$.
• The harmonic series is the series[5]
$1+{1 \over 2}+{1 \over 3}+{1 \over 4}+{1 \over 5}+\cdots =\sum _{n=1}^{\infty }{1 \over n}.$
The harmonic series is divergent.
• An alternating series is a series where terms alternate signs. Examples:
$1-{1 \over 2}+{1 \over 3}-{1 \over 4}+{1 \over 5}-\cdots =\sum _{n=1}^{\infty }{\left(-1\right)^{n-1} \over n}=\ln(2)\quad $
(alternating harmonic series) and
$-1+{\frac {1}{3}}-{\frac {1}{5}}+{\frac {1}{7}}-{\frac {1}{9}}+\cdots =\sum _{n=1}^{\infty }{\frac {\left(-1\right)^{n}}{2n-1}}=-{\frac {\pi }{4}}$
• A telescoping series
$\sum _{n=1}^{\infty }(b_{n}-b_{n+1})$
converges if the sequence bn converges to a limit L—as n goes to infinity. The value of the series is then b1 − L.
• An arithmetico-geometric series is a generalization of the geometric series, which has coefficients of the common ratio equal to the terms in an arithmetic sequence. Example:
$3+{5 \over 2}+{7 \over 4}+{9 \over 8}+{11 \over 16}+\cdots =\sum _{n=0}^{\infty }{(3+2n) \over 2^{n}}.$
• The p-series
$\sum _{n=1}^{\infty }{\frac {1}{n^{p}}}$
converges if p > 1 and diverges for p ≤ 1, which can be shown with the integral criterion described below in convergence tests. As a function of p, the sum of this series is Riemann's zeta function.
• Hypergeometric series:
$_{r}F_{s}\left[{\begin{matrix}a_{1},a_{2},\dotsc ,a_{r}\\b_{1},b_{2},\dotsc ,b_{s}\end{matrix}};z\right]:=\sum _{n=0}^{\infty }{\frac {(a_{1})_{n}(a_{2})_{n}\dotsb (a_{r})_{n}}{(b_{1})_{n}(b_{2})_{n}\dotsb (b_{s})_{n}\;n!}}z^{n}$
and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics.[6]
• There are some elementary series whose convergence is not yet known/proven. For example, it is unknown whether the Flint Hills series
$\sum _{n=1}^{\infty }{\frac {1}{n^{3}\sin ^{2}n}}$
converges or not. The convergence depends on how well $\pi $ can be approximated with rational numbers (which is unknown as of yet). More specifically, the values of n with large numerical contributions to the sum are the numerators of the continued fraction convergents of $\pi $, a sequence beginning with 1, 3, 22, 333, 355, 103993, ... (sequence A046947 in the OEIS). These are integers n that are close to $m\pi $ for some integer m, so that $\sin n$ is close to $\sin m\pi =0$ and its reciprocal is large.
Pi
Main articles: Pi § Infinite series, Approximations of π, and Harmonic number § Identities involving π
$\sum _{i=1}^{\infty }{\frac {1}{i^{2}}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+{\frac {1}{4^{2}}}+\cdots ={\frac {\pi ^{2}}{6}}$
$\sum _{i=1}^{\infty }{\frac {(-1)^{i+1}(4)}{2i-1}}={\frac {4}{1}}-{\frac {4}{3}}+{\frac {4}{5}}-{\frac {4}{7}}+{\frac {4}{9}}-{\frac {4}{11}}+{\frac {4}{13}}-\cdots =\pi $
Natural logarithm of 2
Main article: Natural logarithm of 2 § Series representations
$\sum _{i=1}^{\infty }{\frac {(-1)^{i+1}}{i}}=\ln 2$
[2]
$\sum _{i=0}^{\infty }{\frac {1}{(2i+1)(2i+2)}}=\ln 2$
$\sum _{i=0}^{\infty }{\frac {(-1)^{i}}{(i+1)(i+2)}}=2\ln(2)-1$
$\sum _{i=1}^{\infty }{\frac {1}{i\left(4i^{2}-1\right)}}=2\ln(2)-1$
$\sum _{i=1}^{\infty }{\frac {1}{2^{i}i}}=\ln 2$
$\sum _{i=1}^{\infty }\left({\frac {1}{3^{i}}}+{\frac {1}{4^{i}}}\right){\frac {1}{i}}=\ln 2$
$\sum _{i=1}^{\infty }{\frac {1}{2i(2i-1)}}=\ln 2$
Natural logarithm base e
Main article: e (mathematical constant)
$\sum _{i=0}^{\infty }{\frac {(-1)^{i}}{i!}}=1-{\frac {1}{1!}}+{\frac {1}{2!}}-{\frac {1}{3!}}+\cdots ={\frac {1}{e}}$
$\sum _{i=0}^{\infty }{\frac {1}{i!}}={\frac {1}{0!}}+{\frac {1}{1!}}+{\frac {1}{2!}}+{\frac {1}{3!}}+{\frac {1}{4!}}+\cdots =e$
Calculus and partial summation as an operation on sequences
Partial summation takes as input a sequence, (an), and gives as output another sequence, (SN). It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted Σ. The inverse operator is the finite difference operator, denoted Δ. These behave as discrete analogues of integration and differentiation, only for series (functions of a natural number) instead of functions of a real variable. For example, the sequence (1, 1, 1, ...) has series (1, 2, 3, 4, ...) as its partial summation, which is analogous to the fact that $ \int _{0}^{x}1\,dt=x.$
In computer science, it is known as prefix sum.
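The analogy can be made concrete in a few lines of Python (an illustrative sketch of ours; the prefix sums use itertools.accumulate from the standard library):

from itertools import accumulate

def prefix_sums(seq):
    """The sequence of partial sums (the operator Sigma)."""
    return list(accumulate(seq))

def finite_differences(seq):
    """The finite difference operator Delta, inverse to Sigma."""
    return [seq[0]] + [seq[i] - seq[i - 1] for i in range(1, len(seq))]

a = [1, 1, 1, 1, 1]
S = prefix_sums(a)                 # [1, 2, 3, 4, 5]
print(S)
print(finite_differences(S) == a)  # True: Delta undoes Sigma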
Properties of series
Series are classified not only by whether they converge or diverge, but also by the properties of the terms an (absolute or conditional convergence); type of convergence of the series (pointwise, uniform); the class of the term an (whether it is a real number, arithmetic progression, trigonometric function); etc.
Non-negative terms
When an is a non-negative real number for every n, the sequence SN of partial sums is non-decreasing. It follows that a series Σan with non-negative terms converges if and only if the sequence SN of partial sums is bounded.
For example, the series
$\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}$
is convergent, because the inequality
${\frac {1}{n^{2}}}\leq {\frac {1}{n-1}}-{\frac {1}{n}},\quad n\geq 2,$
and a telescopic sum argument implies that the partial sums are bounded by 2. The exact value of the original series is the Basel problem.
Grouping
Grouping the terms of a series does not reorder them, so the Riemann series theorem does not apply. The partial sums of the grouped series form a subsequence of the partial sums of the original series, so if the original series converges, the grouped series converges to the same sum. For divergent series the converse can fail: for example, grouping 1 − 1 + 1 − 1 + ⋯ in pairs produces 0 + 0 + 0 + ⋯, which converges. On the other hand, if the grouped series diverges, then the original series must also diverge; this is sometimes useful, as in Oresme's proof that the harmonic series diverges.
Absolute convergence
Main article: Absolute convergence
A series
$\sum _{n=0}^{\infty }a_{n}$
converges absolutely if the series of absolute values
$\sum _{n=0}^{\infty }\left|a_{n}\right|$
converges. This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit.
Conditional convergence
Main article: Conditional convergence
A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. A famous example is the alternating series
$\sum \limits _{n=1}^{\infty }{(-1)^{n+1} \over n}=1-{1 \over 2}+{1 \over 3}-{1 \over 4}+{1 \over 5}-\cdots ,$
which is convergent (and its sum is equal to $\ln 2$), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the $a_{n}$ are real and $S$ is any real number, that one can find a reordering so that the reordered series converges with sum equal to $S$.
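The rearrangement in the Riemann series theorem can be carried out greedily: add unused positive terms until the target is exceeded, then unused negative terms until the partial sum drops below it, and repeat. The Python sketch below (our own code; the target 2.5 is arbitrary) applies this to the alternating harmonic series:

def rearranged_partial_sum(target, num_terms):
    pos, neg = 1, 2          # next positive term 1/pos, next negative term -1/neg
    total, used = 0.0, 0
    while used < num_terms:
        if total <= target:
            total += 1.0 / pos   # take the next unused positive term 1/(odd)
            pos += 2
        else:
            total -= 1.0 / neg   # take the next unused negative term -1/(even)
            neg += 2
        used += 1
    return total

print(rearranged_partial_sum(2.5, 10**6))   # close to the chosen target 2.5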
Abel's test is an important tool for handling semi-convergent series. If a series has the form
$\sum a_{n}=\sum \lambda _{n}b_{n}$
where the partial sums $B_{n}=b_{0}+\cdots +b_{n}$ are bounded, $\lambda _{n}$ has bounded variation, and $\lim \lambda _{n}B_{n}$ exists:
$\sup _{N}\left|\sum _{n=0}^{N}b_{n}\right|<\infty ,\ \ \sum \left|\lambda _{n+1}-\lambda _{n}\right|<\infty \ {\text{and}}\ \lambda _{n}B_{n}\ {\text{converges,}}$
then the series $ \sum a_{n}$ is convergent. This applies to the point-wise convergence of many trigonometric series, as in
$\sum _{n=2}^{\infty }{\frac {\sin(nx)}{\ln n}}$
with $0<x<2\pi $. Abel's method consists in writing $b_{n+1}=B_{n+1}-B_{n}$, and in performing a transformation similar to integration by parts (called summation by parts), that relates the given series $ \sum a_{n}$ to the absolutely convergent series
$\sum (\lambda _{n}-\lambda _{n+1})\,B_{n}.$
Evaluation of truncation errors
The evaluation of truncation errors is an important procedure in numerical analysis (especially validated numerics and computer-assisted proof).
Alternating series
When the conditions of the alternating series test are satisfied by $ S:=\sum _{m=0}^{\infty }(-1)^{m}u_{m}$, there is an exact error evaluation.[7] Set $s_{n}$ to be the partial sum $ s_{n}:=\sum _{m=0}^{n}(-1)^{m}u_{m}$ of the given alternating series $S$. Then the following inequality holds:
$|S-s_{n}|\leq u_{n+1}.$
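For instance, applied to the alternating harmonic series with $u_{m}=1/(m+1)$ and sum $\ln 2$, the bound can be checked numerically (a small Python sketch of ours):

from math import log

def partial_sum(n):
    """s_n for the series sum (-1)^m / (m+1), whose sum is ln 2."""
    return sum((-1)**m / (m + 1) for m in range(n + 1))

S = log(2)
for n in (10, 100, 1000):
    error = abs(S - partial_sum(n))
    bound = 1.0 / (n + 2)          # u_{n+1}
    print(n, error <= bound)       # True in each case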
Taylor series
Taylor's theorem is a statement that includes the evaluation of the error term when the Taylor series is truncated.
Hypergeometric series
By using the ratio of consecutive terms, one can obtain a bound on the error term when the hypergeometric series is truncated.[8]
Matrix exponential
For the matrix exponential:
$\exp(X):=\sum _{k=0}^{\infty }{\frac {1}{k!}}X^{k},\quad X\in \mathbb {C} ^{n\times n},$
the following error evaluation holds (scaling and squaring method):[9][10][11]
$T_{r,s}(X):=\left[\sum _{j=0}^{r}{\frac {1}{j!}}(X/s)^{j}\right]^{s},\quad \|\exp(X)-T_{r,s}(X)\|\leq {\frac {\|X\|^{r+1}}{s^{r}(r+1)!}}\exp(\|X\|).$
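A minimal Python/NumPy sketch of the scaling-and-squaring approximation $T_{r,s}(X)$ (our own code; a long Taylor sum serves as the reference value, and the test matrix is arbitrary) looks as follows:

import numpy as np

def taylor_exp(X, terms=60):
    """Reference value: a long truncation of the Taylor series of exp(X)."""
    result, term = np.eye(len(X)), np.eye(len(X))
    for k in range(1, terms):
        term = term @ X / k          # X^k / k!
        result = result + term
    return result

def T(X, r, s):
    """T_{r,s}(X) = (sum_{j=0}^{r} (X/s)^j / j!)^s."""
    Y = X / s
    partial, term = np.eye(len(X)), np.eye(len(X))
    for j in range(1, r + 1):
        term = term @ Y / j
        partial = partial + term
    return np.linalg.matrix_power(partial, s)

X = np.array([[0.0, 1.0], [-2.0, -3.0]])
ref = taylor_exp(X)
print(np.linalg.norm(ref - T(X, r=4, s=8)))   # small, within the stated bound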
Convergence tests
Main article: Convergence tests
There exist many tests that can be used to determine whether particular series converge or diverge.
• n-th term test: If $ \lim _{n\to \infty }a_{n}\neq 0$, then the series diverges; if $ \lim _{n\to \infty }a_{n}=0$, then the test is inconclusive.
• Comparison test 1 (see Direct comparison test): If $ \sum b_{n}$ is an absolutely convergent series such that $\left\vert a_{n}\right\vert \leq C\left\vert b_{n}\right\vert $ for some number $C$ and for sufficiently large $n$, then $ \sum a_{n}$ converges absolutely as well. If $ \sum \left\vert b_{n}\right\vert $ diverges, and $\left\vert a_{n}\right\vert \geq \left\vert b_{n}\right\vert $ for all sufficiently large $n$, then $ \sum a_{n}$ also fails to converge absolutely (though it could still be conditionally convergent, for example, if the $a_{n}$ alternate in sign).
• Comparison test 2 (see Limit comparison test): If $ \sum b_{n}$ is an absolutely convergent series such that $\left\vert {\frac {a_{n+1}}{a_{n}}}\right\vert \leq \left\vert {\frac {b_{n+1}}{b_{n}}}\right\vert $ for sufficiently large $n$, then $ \sum a_{n}$ converges absolutely as well. If $ \sum \left|b_{n}\right|$ diverges, and $\left\vert {\frac {a_{n+1}}{a_{n}}}\right\vert \geq \left\vert {\frac {b_{n+1}}{b_{n}}}\right\vert $ for all sufficiently large $n$, then $ \sum a_{n}$ also fails to converge absolutely (though it could still be conditionally convergent, for example, if the $a_{n}$ alternate in sign).
• Ratio test: If there exists a constant $C<1$ such that $\left\vert {\frac {a_{n+1}}{a_{n}}}\right\vert <C$ for all sufficiently large $n$, then $ \sum a_{n}$ converges absolutely. When the ratio is less than $1$, but not less than a constant less than $1$, convergence is possible but this test does not establish it.
• Root test: If there exists a constant $C<1$ such that $\left\vert a_{n}\right\vert ^{\frac {1}{n}}\leq C$ for all sufficiently large $n$, then $ \sum a_{n}$ converges absolutely.
• Integral test: if $f(x)$ is a positive monotone decreasing function defined on the interval $[1,\infty )$ with $f(n)=a_{n}$ for all $n$, then $ \sum a_{n}$ converges if and only if the integral $ \int _{1}^{\infty }f(x)\,dx$ is finite.
• Cauchy's condensation test: If $a_{n}$ is non-negative and non-increasing, then the two series $ \sum a_{n}$ and $ \sum 2^{k}a_{(2^{k})}$ are of the same nature: both convergent, or both divergent.
• Alternating series test: A series of the form $ \sum (-1)^{n}a_{n}$ (with $a_{n}>0$) is called alternating. Such a series converges if the sequence $a_{n}$ is monotone decreasing and converges to $0$. The converse is in general not true.
• For some specific types of series there are more specialized convergence tests, for instance for Fourier series there is the Dini test.
Series of functions
Main article: Function series
A series of real- or complex-valued functions
$\sum _{n=0}^{\infty }f_{n}(x)$
converges pointwise on a set E, if the series converges for each x in E as an ordinary series of real or complex numbers. Equivalently, the partial sums
$s_{N}(x)=\sum _{n=0}^{N}f_{n}(x)$
converge to ƒ(x) as N → ∞ for each x ∈ E.
A stronger notion of convergence of a series of functions is the uniform convergence. A series converges uniformly if it converges pointwise to the function ƒ(x), and the error in approximating the limit by the Nth partial sum,
$|s_{N}(x)-f(x)|$
can be made arbitrarily small, independently of x, by choosing a sufficiently large N.
Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the ƒn are integrable on a closed and bounded interval I and converge uniformly, then the series is also integrable on I and can be integrated term-by-term. Tests for uniform convergence include the Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.
More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean on a set E to a limit function ƒ provided
$\int _{E}\left|s_{N}(x)-f(x)\right|^{2}\,dx\to 0$
as N → ∞.
Power series
Main article: Power series
A power series is a series of the form
$\sum _{n=0}^{\infty }a_{n}(x-c)^{n}.$
The Taylor series at a point c of a function is a power series that, in many cases, converges to the function in a neighborhood of c. For example, the series
$\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}$
is the Taylor series of $e^{x}$ at the origin and converges to it for every x.
Unless it converges only at x=c, such a series converges on a certain open disc of convergence centered at the point c in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients an. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets.
Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required.
Formal power series
Main article: Formal power series
While many uses of power series refer to their sums, it is also possible to treat power series as formal sums, meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.
Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring.[12] If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.
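A truncated formal power series can be represented simply as its list of coefficients, with addition performed term-by-term and multiplication given by the Cauchy product. A minimal Python sketch (our own, keeping only the first few coefficients):

def add(a, b):
    """Term-by-term sum of two coefficient lists of equal length."""
    return [x + y for x, y in zip(a, b)]

def cauchy_product(a, b):
    """Coefficients of the product series, truncated to the shorter length."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

geom = [1] * 6                     # 1 + x + x^2 + ... , truncated
print(cauchy_product(geom, geom))  # [1, 2, 3, 4, 5, 6]: coefficients of 1/(1-x)^2
print(add(geom, geom))             # [2, 2, 2, 2, 2, 2]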
Laurent series
Main article: Laurent series
Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form
$\sum _{n=-\infty }^{\infty }a_{n}x^{n}.$
If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.
Dirichlet series
Main article: Dirichlet series
A Dirichlet series is one of the form
$\sum _{n=1}^{\infty }{a_{n} \over n^{s}},$
where s is a complex number. For example, if all an are equal to 1, then the Dirichlet series is the Riemann zeta function
$\zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}.$
Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of s is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re(s) > 1, but the zeta function can be extended to a holomorphic function defined on $\mathbb {C} \setminus \{1\}$ with a simple pole at 1.
This series can be directly generalized to general Dirichlet series.
Trigonometric series
Main article: Trigonometric series
A series of functions in which the terms are trigonometric functions is called a trigonometric series:
${\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }\left(A_{n}\cos nx+B_{n}\sin nx\right).$
The most important example of a trigonometric series is the Fourier series of a function.
History of the theory of infinite series
Development of infinite series
Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.[13][14]
Mathematicians from Kerala, India studied infinite series around 1350 CE.[15]
In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century, developed the theory of hypergeometric series and q-series.
Convergence criteria
The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series
$1+{\frac {\alpha \beta }{1\cdot \gamma }}x+{\frac {\alpha (\alpha +1)\beta (\beta +1)}{1\cdot 2\cdot \gamma (\gamma +1)}}x^{2}+\cdots $
on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence.
Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form.
Abel (1826) in his memoir on the binomial series
$1+{\frac {m}{1!}}x+{\frac {m(m-1)}{2!}}x^{2}+\cdots $
corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of $m$ and $x$. He showed the necessity of considering the subject of continuity in questions of convergence.
Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject, of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration); Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853).
General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory.
Uniform convergence
The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Seidel and Stokes (1847–48). Cauchy took up the problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. Thomae used the doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions.
Semi-convergence
A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent.
Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol.I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function
$F(x)=1^{n}+2^{n}+\cdots +(x-1)^{n}.$
Genocchi (1852) has further contributed to the theory.
Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into prominence.
Fourier series
Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion of sines and cosines, of multiple arcs in powers of the sine and cosine of the arc had been treated by Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701) and still earlier by Vieta. Euler and Lagrange simplified the subject, as did Poinsot, Schröter, Glaisher, and Kummer.
Fourier (1807) set for himself a different problem, to expand a given function of x in terms of the sines or cosines of multiples of x, a problem which he embodied in his Théorie analytique de la chaleur (1822). Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment (Crelle, 1829), of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Lipschitz, Schläfli, and du Bois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series were Dini, Hermite, Halphen, Krause, Byerly and Appell.
Generalizations
Asymptotic series
Asymptotic series, otherwise asymptotic expansions, are infinite series whose partial sums become good approximations in the limit of some point of the domain. In general they do not converge, but they are useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. The difference is that an asymptotic series cannot be made to produce an answer as exact as desired, the way that convergent series can. In fact, after a certain number of terms, a typical asymptotic series reaches its best approximation; if more terms are included, most such series will produce worse answers.
Divergent series
Main article: Divergent series
Under many circumstances, it is desirable to assign a limit to a series which fails to converge in the usual sense. A summability method is such an assignment of a limit to a subset of the set of divergent series which properly extends the classical notion of convergence. Summability methods include Cesàro summation, (C,k) summation, Abel summation, and Borel summation, in increasing order of generality (and hence applicable to increasingly divergent series).
A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes matrix summability methods, which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general method for summing a divergent series is non-constructive, and concerns Banach limits.
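As a concrete illustration, Cesàro summation averages the partial sums; for Grandi's series 1 − 1 + 1 − 1 + ⋯ the averages tend to 1/2. A minimal Python sketch (our own code):

def cesaro_means(terms):
    """Averages of the first k partial sums, for k = 1, 2, ..."""
    partial, total, means = 0.0, 0.0, []
    for k, t in enumerate(terms, start=1):
        partial += t
        total += partial
        means.append(total / k)
    return means

grandi = [(-1)**n for n in range(1000)]   # 1, -1, 1, -1, ...
print(cesaro_means(grandi)[-1])           # approximately 0.5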
Summations over arbitrary index sets
Definitions may be given for sums over an arbitrary index set $I.$[16] There are two main differences with the usual notion of series: first, there is no specific order given on the set $I$; second, this set $I$ may be uncountable. The notion of convergence needs to be strengthened, because the concept of conditional convergence depends on the ordering of the index set.
If $a:I\mapsto G$ is a function from an index set $I$ to a set $G,$ then the "series" associated to $a$ is the formal sum of the elements $a(x)\in G$ over the index elements $x\in I,$ denoted by
$\sum _{x\in I}a(x).$
When the index set is the natural numbers $I=\mathbb {N} ,$ the function $a:\mathbb {N} \mapsto G$ is a sequence denoted by $a(n)=a_{n}.$ A series indexed on the natural numbers is an ordered formal sum and so we rewrite $ \sum _{n\in \mathbb {N} }$ as $ \sum _{n=0}^{\infty }$ in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers
$\sum _{n=0}^{\infty }a_{n}=a_{0}+a_{1}+a_{2}+\cdots .$
Families of non-negative numbers
When summing a family $\left\{a_{i}:i\in I\right\}$ of non-negative real numbers, define
$\sum _{i\in I}a_{i}=\sup \left\{\sum _{i\in A}a_{i}\,:A\subseteq I,A{\text{ finite}}\right\}\in [0,+\infty ].$
When the supremum is finite then the set of $i\in I$ such that $a_{i}>0$ is countable. Indeed, for every $n\geq 1,$ the cardinality $\left|A_{n}\right|$ of the set $A_{n}=\left\{i\in I:a_{i}>1/n\right\}$ is finite because
${\frac {1}{n}}\,\left|A_{n}\right|=\sum _{i\in A_{n}}{\frac {1}{n}}\leq \sum _{i\in A_{n}}a_{i}\leq \sum _{i\in I}a_{i}<\infty .$
If $I$ is countably infinite and enumerated as $I=\left\{i_{0},i_{1},\ldots \right\}$ then the above defined sum satisfies
$\sum _{i\in I}a_{i}=\sum _{k=0}^{+\infty }a_{i_{k}},$
provided the value $\infty $ is allowed for the sum of the series.
Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.
Abelian topological groups
Let $a:I\to X$ be a map, also denoted by $\left(a_{i}\right)_{i\in I},$ from some non-empty set $I$ into a Hausdorff abelian topological group $X.$ Let $\operatorname {Finite} (I)$ be the collection of all finite subsets of $I,$ with $\operatorname {Finite} (I)$ viewed as a directed set, ordered under inclusion $\,\subseteq \,$ with union as join. The family $\left(a_{i}\right)_{i\in I},$ is said to be unconditionally summable if the following limit, which is denoted by $\sum _{i\in I}a_{i}$ and is called the sum of $\left(a_{i}\right)_{i\in I},$ exists in $X:$
$\sum _{i\in I}a_{i}:=\lim _{A\in \operatorname {Finite} (I)}\ \sum _{i\in A}a_{i}=\lim \left\{\sum _{i\in A}a_{i}\,:A\subseteq I,A{\text{ finite }}\right\}$
Saying that the sum $S:=\sum _{i\in I}a_{i}$ is the limit of finite partial sums means that for every neighborhood $V$ of the origin in $X,$ there exists a finite subset $A_{0}$ of $I$ such that
$S-\sum _{i\in A}a_{i}\in V\qquad {\text{ for every finite superset}}\;A\supseteq A_{0}.$
Because $\operatorname {Finite} (I)$ is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net.[17][18]
For every neighborhood $W$ of the origin in $X,$ there is a smaller neighborhood $V$ such that $V-V\subseteq W.$ It follows that the finite partial sums of an unconditionally summable family $\left(a_{i}\right)_{i\in I},$ form a Cauchy net, that is, for every neighborhood $W$ of the origin in $X,$ there exists a finite subset $A_{0}$ of $I$ such that
$\sum _{i\in A_{1}}a_{i}-\sum _{i\in A_{2}}a_{i}\in W\qquad {\text{ for all finite supersets }}\;A_{1},A_{2}\supseteq A_{0},$
which implies that $a_{i}\in W$ for every $i\in I\setminus A_{0}$ (by taking $A_{1}:=A_{0}\cup \{i\}$ and $A_{2}:=A_{0}$).
When $X$ is complete, a family $\left(a_{i}\right)_{i\in I}$ is unconditionally summable in $X$ if and only if the finite sums satisfy the latter Cauchy net condition. When $X$ is complete and $\left(a_{i}\right)_{i\in I},$ is unconditionally summable in $X,$ then for every subset $J\subseteq I,$ the corresponding subfamily $\left(a_{j}\right)_{j\in J},$ is also unconditionally summable in $X.$
When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group $X=\mathbb {R} .$
If a family $\left(a_{i}\right)_{i\in I}$ in $X$ is unconditionally summable then for every neighborhood $W$ of the origin in $X,$ there is a finite subset $A_{0}\subseteq I$ such that $a_{i}\in W$ for every index $i$ not in $A_{0}.$ If $X$ is a first-countable space then it follows that the set of $i\in I$ such that $a_{i}\neq 0$ is countable. This need not be true in a general abelian topological group (see examples below).
Unconditionally convergent series
Suppose that $I=\mathbb {N} .$ If a family $a_{n},n\in \mathbb {N} ,$ is unconditionally summable in a Hausdorff abelian topological group $X,$ then the series in the usual sense converges and has the same sum,
$\sum _{n=0}^{\infty }a_{n}=\sum _{n\in \mathbb {N} }a_{n}.$
By nature, the definition of unconditional summability is insensitive to the order of the summation. When $\sum a_{n}$ is unconditionally summable, then the series remains convergent after any permutation $\sigma :\mathbb {N} \to \mathbb {N} $ of the set $\mathbb {N} $ of indices, with the same sum,
$\sum _{n=0}^{\infty }a_{\sigma (n)}=\sum _{n=0}^{\infty }a_{n}.$
Conversely, if every permutation of a series $\sum a_{n}$ converges, then the series is unconditionally convergent. When $X$ is complete then unconditional convergence is also equivalent to the fact that all subseries are convergent; if $X$ is a Banach space, this is equivalent to say that for every sequence of signs $\varepsilon _{n}=\pm 1$, the series
$\sum _{n=0}^{\infty }\varepsilon _{n}a_{n}$
converges in $X.$
Series in topological vector spaces
If $X$ is a topological vector space (TVS) and $\left(x_{i}\right)_{i\in I}$ is a (possibly uncountable) family in $X$ then this family is summable[19] if the limit $\lim _{A\in \operatorname {Finite} (I)}x_{A}$ of the net $\left(x_{A}\right)_{A\in \operatorname {Finite} (I)}$ exists in $X,$ where $\operatorname {Finite} (I)$ is the directed set of all finite subsets of $I$ directed by inclusion $\,\subseteq \,$ and $ x_{A}:=\sum _{i\in A}x_{i}.$
It is called absolutely summable if in addition, for every continuous seminorm $p$ on $X,$ the family $\left(p\left(x_{i}\right)\right)_{i\in I}$ is summable. If $X$ is a normable space and if $\left(x_{i}\right)_{i\in I}$ is an absolutely summable family in $X,$ then necessarily all but countably many of the $x_{i}$ are zero. Hence, in normed spaces, it is usually only necessary to consider series with countably many terms.
Summable families play an important role in the theory of nuclear spaces.
Series in Banach and seminormed spaces
The notion of series can be easily extended to the case of a seminormed space. If $x_{n}$ is a sequence of elements of a normed space $X$ and if $x\in X$ then the series $\sum x_{n}$ converges to $x$ in $X$ if the sequence of partial sums of the series $ \left(\sum _{n=0}^{N}x_{n}\right)_{N=1}^{\infty }$ converges to $x$ in $X$; to wit,
$\left\|x-\sum _{n=0}^{N}x_{n}\right\|\to 0\quad {\text{ as }}N\to \infty .$
More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, $\sum x_{n}$ converges to $x$ if the sequence of partial sums converges to $x.$
If $(X,|\cdot |)$ is a seminormed space, then the notion of absolute convergence becomes: A series $ \sum _{i\in I}x_{i}$ of vectors in $X$ converges absolutely if
$\sum _{i\in I}\left|x_{i}\right|<+\infty $
in which case all but at most countably many of the values $\left|x_{i}\right|$ are necessarily zero.
If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (theorem of Dvoretzky & Rogers (1950)).
Well-ordered sums
Conditionally convergent series can be considered if $I$ is a well-ordered set, for example, an ordinal number $\alpha _{0}.$ In this case, define by transfinite recursion:
$\sum _{\beta <\alpha +1}a_{\beta }=a_{\alpha }+\sum _{\beta <\alpha }a_{\beta }$
and for a limit ordinal $\alpha ,$
$\sum _{\beta <\alpha }a_{\beta }=\lim _{\gamma \to \alpha }\sum _{\beta <\gamma }a_{\beta }$
if this limit exists. If all limits exist up to $\alpha _{0},$ then the series converges.
Examples
1. Given a function $f:X\to Y$ into an abelian topological group $Y,$ define for every $a\in X,$
$f_{a}(x)={\begin{cases}0&x\neq a,\\f(a)&x=a,\\\end{cases}}$
a function whose support is a singleton $\{a\}.$ Then
$f=\sum _{a\in X}f_{a}$
in the topology of pointwise convergence (that is, the sum is taken in the infinite product group $Y^{X}$).
2. In the definition of partitions of unity, one constructs sums of functions over arbitrary index set $I,$
$\sum _{i\in I}\varphi _{i}(x)=1.$
While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given $x,$ only finitely many nonzero terms in the sum, so issues regarding convergence of such sums do not arise. Actually, one usually assumes more: the family of functions is locally finite, that is, for every $x$ there is a neighborhood of $x$ in which all but a finite number of functions vanish. Any regularity property of the $\varphi _{i},$ such as continuity or differentiability, that is preserved under finite sums will be preserved for the sum of any subcollection of this family of functions.
3. On the first uncountable ordinal $\omega _{1}$ viewed as a topological space in the order topology, the constant function $f:\left[0,\omega _{1}\right)\to \left[0,\omega _{1}\right]$ given by $f(\alpha )=1$ satisfies
$\sum _{\alpha \in [0,\omega _{1})}f(\alpha )=\omega _{1}$
(in other words, $\omega _{1}$ copies of 1 is $\omega _{1}$) only if one takes a limit over all countable partial sums, rather than finite partial sums. This space is not separable.
See also
• Continued fraction
• Convergence tests
• Convergent series
• Divergent series
• Infinite compositions of analytic functions
• Infinite expression
• Infinite product
• Iterated binary operation
• List of mathematical series
• Prefix sum
• Sequence transformation
• Series expansion
References
1. Thompson, Silvanus; Gardner, Martin (1998). Calculus Made Easy. ISBN 978-0-312-18548-0.
2. Weisstein, Eric W. "Series". mathworld.wolfram.com. Retrieved 2020-08-30.
3. Swokowski 1983, p. 501
4. Michael Spivak, Calculus
5. "Infinite Series". www.mathsisfun.com. Retrieved 2020-08-30.
6. Gasper, G., Rahman, M. (2004). Basic hypergeometric series. Cambridge University Press.
7. Positive and Negative Terms: Alternating Series
8. Johansson, F. (2016). Computing hypergeometric functions rigorously. arXiv preprint arXiv:1606.06977.
9. Higham, N. J. (2008). Functions of matrices: theory and computation. Society for Industrial and Applied Mathematics.
10. Higham, N. J. (2009). The scaling and squaring method for the matrix exponential revisited. SIAM review, 51(4), 747-764.
11. How and How Not to Compute the Exponential of a Matrix
12. Nicolas Bourbaki (1989), Algebra, Springer: §III.2.11.
13. O'Connor, J.J. & Robertson, E.F. (February 1996). "A history of calculus". University of St Andrews. Retrieved 2007-08-07.
14. Bidwell, James K. (30 November 1993). "Archimedes and Pi-Revisited". School Science and Mathematics. 94 (3).
15. "Indians predated Newton 'discovery' by 250 years". manchester.ac.uk.
16. Jean Dieudonné, Foundations of mathematical analysis, Academic Press
17. Bourbaki, Nicolas (1998). General Topology: Chapters 1–4. Springer. pp. 261–270. ISBN 978-3-540-64241-1.
18. Choquet, Gustave (1966). Topology. Academic Press. pp. 216–231. ISBN 978-0-12-173450-3.
19. Schaefer & Wolff 1999, pp. 179–180.
Bibliography
• Bromwich, T. J. An Introduction to the Theory of Infinite Series MacMillan & Co. 1908, revised 1926, reprinted 1939, 1942, 1949, 1955, 1959, 1965.
• Dvoretzky, Aryeh; Rogers, C. Ambrose (1950). "Absolute and unconditional convergence in normed linear spaces". Proc. Natl. Acad. Sci. U.S.A. 36 (3): 192–197. Bibcode:1950PNAS...36..192D. doi:10.1073/pnas.36.3.192. PMC 1063182. PMID 16588972.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Swokowski, Earl W. (1983), Calculus with analytic geometry (Alternate ed.), Boston: Prindle, Weber & Schmidt, ISBN 978-0-87150-341-1
• Walter Rudin, Principles of Mathematical Analysis (McGraw-Hill: New York, 1964).
• Pietsch, Albrecht (1972). Nuclear locally convex spaces. Berlin,New York: Springer-Verlag. ISBN 0-387-05644-0. OCLC 539541.
• Robertson, A. P. (1973). Topological vector spaces. Cambridge England: University Press. ISBN 0-521-29882-2. OCLC 589250.
• Ryan, Raymond (2002). Introduction to tensor products of Banach spaces. London New York: Springer. ISBN 1-85233-437-1. OCLC 48092184.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
• Wong (1979). Schwartz spaces, nuclear spaces, and tensor products. Berlin New York: Springer-Verlag. ISBN 3-540-09513-6. OCLC 5126158.
External links
Wikimedia Commons has media related to Series (mathematics).
• "Series", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Infinite Series Tutorial
• "Series-TheBasics". Paul's Online Math Notes.
• "Show-Me Collection of Series" (PDF). Leslie Green.
| Wikipedia |
Sum of squares
In mathematics, statistics and elsewhere, sums of squares occur in a number of contexts:
Statistics
• For partitioning of variance, see Partition of sums of squares
• For the "sum of squared deviations", see Least squares
• For the "sum of squared differences", see Mean squared error
• For the "sum of squared error", see Residual sum of squares
• For the "sum of squares due to lack of fit", see Lack-of-fit sum of squares
• For sums of squares relating to model predictions, see Explained sum of squares
• For sums of squares relating to observations, see Total sum of squares
• For sums of squared deviations, see Squared deviations from the mean
• For modelling involving sums of squares, see Analysis of variance
• For modelling involving the multivariate generalisation of sums of squares, see Multivariate analysis of variance
Number theory
• For the sum of squares of consecutive integers, see Square pyramidal number
• For representing an integer as a sum of squares of 4 integers, see Lagrange's four-square theorem
• Legendre's three-square theorem states which numbers can be expressed as the sum of three squares
• Jacobi's four-square theorem gives the number of ways that a number can be represented as the sum of four squares.
• For the number of representations of a positive integer as a sum of squares of k integers, see Sum of squares function.
• Fermat's theorem on sums of two squares says which primes are sums of two squares.
• The sum of two squares theorem generalizes Fermat's theorem to specify which composite numbers are the sums of two squares.
• Pythagorean triples are sets of three integers such that the sum of the squares of the first two equals the square of the third.
• A Pythagorean prime is a prime that is the sum of two squares; Fermat's theorem on sums of two squares states which primes are Pythagorean primes.
• In a Pythagorean triangle with an integer altitude to the hypotenuse, the sum of the squares of the reciprocals of the two integer legs equals the square of the reciprocal of that integer altitude.
• Pythagorean quadruples are sets of four integers such that the sum of the squares of the first three equals the square of the fourth.
• The Basel problem, solved by Euler in terms of $\pi $, asked for an exact expression for the sum of the squares of the reciprocals of all positive integers.
• Rational trigonometry's triple-quad rule and triple-spread rule contain sums of squares, similar to Heron's formula.
• Squaring the square is a combinatorial problem of dividing a two-dimensional square with integer side length into smaller such squares.
Algebra and algebraic geometry
• For representing a polynomial as the sum of squares of polynomials, see Polynomial SOS.
• For computational optimization, see Sum-of-squares optimization.
• For representing a multivariate polynomial that takes only non-negative values over the reals as a sum of squares of rational functions, see Hilbert's seventeenth problem.
• The Brahmagupta–Fibonacci identity says the set of all sums of two squares is closed under multiplication.
• The sum of the squared dimensions of a finite group's pairwise nonequivalent irreducible complex representations is equal to the cardinality (order) of that group.
Euclidean geometry and other inner-product spaces
• The Pythagorean theorem says that the square on the hypotenuse of a right triangle is equal in area to the sum of the squares on the legs. Unlike a difference of two squares, a sum of two squares does not factor over the real numbers.
• The Squared Euclidean distance (SED) is defined as the sum of squares of the differences between coordinates.
• Heron's formula for the area of a triangle can be rewritten in terms of the sums of squares of the triangle's sides (and the sums of the squares of those squares)
• The British flag theorem for rectangles equates two sums of two squares
• The parallelogram law equates the sum of the squares of the four sides to the sum of the squares of the diagonals
• Descartes' theorem for four kissing circles involves sums of squares
• The sum of the squares of the edges of a rectangular cuboid equals the square of any space diagonal
See also
• Sums of powers
• Sum of reciprocals
• Quadratic form (statistics)
• Reduced chi-squared statistic
| Wikipedia |
Sum of squares function
In number theory, the sum of squares function is an arithmetic function that gives the number of representations for a given positive integer n as the sum of k squares, where representations that differ only in the order of the summands or in the signs of the numbers being squared are counted as different, and is denoted by rk(n).
Definition
The function is defined as
$r_{k}(n)=|\{(a_{1},a_{2},\ldots ,a_{k})\in \mathbb {Z} ^{k}\ :\ n=a_{1}^{2}+a_{2}^{2}+\cdots +a_{k}^{2}\}|$
where $|\,\ |$ denotes the cardinality of a set. In other words, rk(n) is the number of ways n can be written as a sum of k squares.
For example, $r_{2}(1)=4$ since $1=0^{2}+(\pm 1)^{2}=(\pm 1)^{2}+0^{2}$ where each sum has two sign combinations, and also $r_{2}(2)=4$ since $2=(\pm 1)^{2}+(\pm 1)^{2}$ with four sign combinations. On the other hand, $r_{2}(3)=0$ because there is no way to represent 3 as a sum of two squares.
Formulae
k = 2
Main article: Sum of two squares theorem
The number of ways to write a natural number as sum of two squares is given by r2(n). It is given explicitly by
$r_{2}(n)=4(d_{1}(n)-d_{3}(n))$
where d1(n) is the number of divisors of n which are congruent to 1 modulo 4 and d3(n) is the number of divisors of n which are congruent to 3 modulo 4. Using sums, the expression can be written as:
$r_{2}(n)=4\sum _{d\mid n \atop d\,\equiv \,1,3{\pmod {4}}}(-1)^{(d-1)/2}$
The prime factorization $n=2^{g}p_{1}^{f_{1}}p_{2}^{f_{2}}\cdots q_{1}^{h_{1}}q_{2}^{h_{2}}\cdots $, where $p_{i}$ are the prime factors of the form $p_{i}\equiv 1{\pmod {4}},$ and $q_{i}$ are the prime factors of the form $q_{i}\equiv 3{\pmod {4}}$ gives another formula
$r_{2}(n)=4(f_{1}+1)(f_{2}+1)\cdots $, if all exponents $h_{1},h_{2},\cdots $ are even. If one or more $h_{i}$ are odd, then $r_{2}(n)=0$.
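For illustration, the divisor formula can be checked numerically against a direct count of lattice points on the circle of radius $\sqrt{n}$; the following Python sketch (function names are illustrative, and the naive divisor search is used only for clarity) does this for small n.

```python
from itertools import product

def r2_bruteforce(n):
    # Count ordered pairs (a, b) of integers with a^2 + b^2 = n.
    bound = int(n**0.5) + 1
    return sum(1 for a, b in product(range(-bound, bound + 1), repeat=2)
               if a*a + b*b == n)

def r2_divisors(n):
    # r2(n) = 4*(d1(n) - d3(n)), counting divisors congruent to 1 resp. 3 mod 4.
    d1 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 3)
    return 4 * (d1 - d3)

assert all(r2_bruteforce(n) == r2_divisors(n) for n in range(1, 31))
print("divisor formula matches the brute-force count for n = 1, ..., 30")
```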
k = 3
See also: Legendre's three-square theorem
Gauss proved that for a squarefree number n > 4,
$r_{3}(n)={\begin{cases}24h(-n),&{\text{if }}n\equiv 3{\pmod {8}},\\0&{\text{if }}n\equiv 7{\pmod {8}},\\12h(-4n)&{\text{otherwise}},\end{cases}}$
where h(m) denotes the class number of an integer m.
There exist extensions of Gauss' formula to arbitrary integer n.[1][2]
k = 4
Main article: Jacobi's four-square theorem
The number of ways to represent n as the sum of four squares was given by Carl Gustav Jakob Jacobi: it is eight times the sum of all the divisors of n which are not divisible by 4, i.e.
$r_{4}(n)=8\sum _{d\,\mid \,n,\ 4\,\nmid \,d}d.$
Representing n = 2km, where m is an odd integer, one can express $r_{4}(n)$ in terms of the divisor function as follows:
$r_{4}(n)=8\sigma (2^{\min\{k,1\}}m).$
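Jacobi's formula can likewise be verified for small n by direct enumeration; the Python sketch below (with ad hoc function names, intended only as a check) compares the two counts.

```python
from itertools import product

def r4_bruteforce(n):
    # Count ordered quadruples of integers whose squares sum to n.
    bound = int(n**0.5) + 1
    return sum(1 for t in product(range(-bound, bound + 1), repeat=4)
               if sum(x*x for x in t) == n)

def r4_jacobi(n):
    # r4(n) = 8 * (sum of the divisors of n that are not divisible by 4).
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

assert all(r4_bruteforce(n) == r4_jacobi(n) for n in range(1, 21))
print("Jacobi's four-square formula verified for n = 1, ..., 20")
```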
k = 6
The number of ways to represent n as the sum of six squares is given by
$r_{6}(n)=4\sum _{d\mid n}d^{2}{\big (}4\left({\tfrac {-4}{n/d}}\right)-\left({\tfrac {-4}{d}}\right){\big )},$
where $\left({\tfrac {\cdot }{\cdot }}\right)$ is the Kronecker symbol.[3]
k = 8
Jacobi also found an explicit formula for the case k = 8:[3]
$r_{8}(n)=16\sum _{d\,\mid \,n}(-1)^{n+d}d^{3}.$
Generating function
The generating function of the sequence $r_{k}(n)$ for fixed k can be expressed in terms of the Jacobi theta function:[4]
$\vartheta (0;q)^{k}=\vartheta _{3}^{k}(q)=\sum _{n=0}^{\infty }r_{k}(n)q^{n},$
where
$\vartheta (0;q)=\sum _{n=-\infty }^{\infty }q^{n^{2}}=1+2q+2q^{4}+2q^{9}+2q^{16}+\cdots .$
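Since the generating function identity says that $r_{k}(n)$ is the coefficient of $q^{n}$ in $\vartheta _{3}^{k}(q)$, the values can be computed by multiplying truncated power series; the following Python sketch implements this with a naive convolution, purely as an illustration.

```python
def rk_via_theta(k, N):
    # Coefficients of theta(q)^k up to q^N, where theta(q) = sum over m in Z of q^(m^2).
    theta = [0] * (N + 1)
    m = 0
    while m * m <= N:
        theta[m * m] += 1 if m == 0 else 2   # m and -m contribute the same square
        m += 1
    coeffs = [1] + [0] * N                   # start from the constant series 1
    for _ in range(k):                       # multiply by theta, k times
        new = [0] * (N + 1)
        for i, c in enumerate(coeffs):
            if c:
                for j in range(N + 1 - i):
                    new[i + j] += c * theta[j]
        coeffs = new
    return coeffs                            # coeffs[n] equals r_k(n)

print(rk_via_theta(2, 10))   # [1, 4, 4, 0, 4, 8, 0, 0, 4, 4, 8]
```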
Numerical values
The first 30 values for $r_{k}(n),\;k=1,\dots ,8$ are listed in the table below:
n            r1(n)  r2(n)  r3(n)  r4(n)  r5(n)  r6(n)   r7(n)   r8(n)
0            1      1      1      1      1      1       1       1
1            2      4      6      8      10     12      14      16
2            0      4      12     24     40     60      84      112
3            0      0      8      32     80     160     280     448
4 = 2²       2      4      6      24     90     252     574     1136
5            0      8      24     48     112    312     840     2016
6 = 2×3      0      0      24     96     240    544     1288    3136
7            0      0      0      64     320    960     2368    5504
8 = 2³       0      4      12     24     200    1020    3444    9328
9 = 3²       2      4      30     104    250    876     3542    12112
10 = 2×5     0      8      24     144    560    1560    4424    14112
11           0      0      24     96     560    2400    7560    21312
12 = 2²×3    0      0      8      96     400    2080    9240    31808
13           0      8      24     112    560    2040    8456    35168
14 = 2×7     0      0      48     192    800    3264    11088   38528
15 = 3×5     0      0      0      192    960    4160    16576   56448
16 = 2⁴      2      4      6      24     730    4092    18494   74864
17           0      8      48     144    480    3480    17808   78624
18 = 2×3²    0      4      36     312    1240   4380    19740   84784
19           0      0      24     160    1520   7200    27720   109760
20 = 2²×5    0      8      24     144    752    6552    34440   143136
21 = 3×7     0      0      48     256    1120   4608    29456   154112
22 = 2×11    0      0      24     288    1840   8160    31304   149184
23           0      0      0      192    1600   10560   49728   194688
24 = 2³×3    0      0      24     96     1200   8224    52808   261184
25 = 5²      2      12     30     248    1210   7812    43414   252016
26 = 2×13    0      8      72     336    2000   10200   52248   246176
27 = 3³      0      0      32     320    2240   13120   68320   327040
28 = 2²×7    0      0      0      192    1600   12480   74048   390784
29           0      8      72     240    1680   10104   68376   390240
30 = 2×3×5   0      0      48     576    2720   14144   71120   395136
See also
• Jacobi's four-square theorem
• Gauss circle problem
References
1. P. T. Bateman (1951). "On the Representation of a Number as the Sum of Three Squares" (PDF). Trans. Amer. Math. Soc. 71: 70–101. doi:10.1090/S0002-9947-1951-0042438-4.
2. S. Bhargava; Chandrashekar Adiga; D. D. Somashekara (1993). "Three-Square Theorem as an Application of Andrews' Identity" (PDF). Fibonacci Quart. 31 (2): 129–133.
3. Cohen, H. (2007). "5.4 Consequences of the Hasse–Minkowski Theorem". Number Theory Volume I: Tools and Diophantine Equations. Springer. ISBN 978-0-387-49922-2.
4. Milne, Stephen C. (2002). "Introduction". Infinite Families of Exact Sums of Squares Formulas, Jacobi Elliptic Functions, Continued Fractions, and Schur Functions. Springer Science & Business Media. p. 9. ISBN 1402004915.
External links
• Weisstein, Eric W. "Sum of Squares Function". MathWorld.
• Sloane, N. J. A. (ed.). "Sequence A122141 (number of ways of writing n as a sum of d squares)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
• Sloane, N. J. A. (ed.). "Sequence A004018 (Theta series of square lattice, r_2(n))". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
| Wikipedia |
Sum of two squares theorem
In number theory, the sum of two squares theorem relates the prime decomposition of any integer n > 1 to whether it can be written as a sum of two squares, such that n = a2 + b2 for some integers a, b.
An integer greater than one can be written as a sum of two squares if and only if its prime decomposition contains no factor pk, where prime $p\equiv 3{\pmod {4}}$ and k is odd.[1]
In writing a number as a sum of two squares, it is allowed for one of the squares to be zero, or for both of them to be equal to each other, so all squares and all doubles of squares are included in the numbers that can be represented in this way. This theorem supplements Fermat's theorem on sums of two squares which says when a prime number can be written as a sum of two squares, in that it also covers the case for composite numbers.
A number may have multiple representations as a sum of two squares, counted by the sum of squares function; for instance, every Pythagorean triple $a^{2}+b^{2}=c^{2}$ gives a second representation for $c^{2}$ beyond the trivial representation $c^{2}+0^{2}$.
Examples
The prime decomposition of the number 2450 is given by 2450 = 2 · 52 · 72. Of the primes occurring in this decomposition, 2, 5, and 7, only 7 is congruent to 3 modulo 4. Its exponent in the decomposition, 2, is even. Therefore, the theorem states that it is expressible as the sum of two squares. Indeed, 2450 = 72 + 492.
The prime decomposition of the number 3430 is 2 · 5 · 73. This time, the exponent of 7 in the decomposition is 3, an odd number. So 3430 cannot be written as the sum of two squares.
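The criterion in the theorem is easy to test by trial division; the Python sketch below (written only as an illustration, with ad hoc function names) applies it to the two examples above and cross-checks it against a brute-force search for small n.

```python
def representable(n):
    # n > 1 is a sum of two squares iff every prime p = 3 (mod 4)
    # occurs to an even power in its factorization.
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            k = 0
            while m % p == 0:
                m //= p
                k += 1
            if p % 4 == 3 and k % 2 == 1:
                return False
        p += 1
    return not (m > 1 and m % 4 == 3)   # leftover prime factor would have exponent 1

def representable_bruteforce(n):
    return any(a*a + b*b == n
               for a in range(int(n**0.5) + 1)
               for b in range(a, int(n**0.5) + 1))

assert representable(2450) and not representable(3430)
assert all(representable(n) == representable_bruteforce(n) for n in range(2, 2000))
print("criterion agrees with brute force for n = 2, ..., 1999")
```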
Representable numbers
The numbers that can be represented as the sums of two squares form the integer sequence[2]
0, 1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20, 25, 26, 29, 32, ...
They form the set of all norms of Gaussian integers;[2] their square roots form the set of all lengths of line segments between pairs of points in the two-dimensional integer lattice.
The number of representable numbers in the range from 0 to any number $n$ is proportional to ${\frac {n}{\sqrt {\log n}}}$, with a limiting constant of proportionality given by the Landau–Ramanujan constant, approximately 0.764.[3]
The product of any two representable numbers is another representable number. Its representation can be derived from representations of its two factors, using the Brahmagupta–Fibonacci identity.
See also
• Legendre's three-square theorem
• Lagrange's four-square theorem
• Sum of squares function
References
1. Dudley, Underwood (1969). "Sums of Two Squares". Elementary Number Theory. W.H. Freeman and Company. pp. 135–139.
2. Sloane, N. J. A. (ed.). "Sequence A001481 (Numbers that are the sum of 2 squares)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
3. Rebák, Örs (2020). "Generalization of a Ramanujan identity". The American Mathematical Monthly. 127 (1): 80–83. arXiv:1612.08307. doi:10.1080/00029890.2020.1668716. MR 4043992.
| Wikipedia |
Sum rule in quantum mechanics
In quantum mechanics, a sum rule is a formula for transitions between energy levels, in which the sum of the transition strengths is expressed in a simple form. Sum rules are used to describe the properties of many physical systems, including solids, atoms, atomic nuclei, and nuclear constituents such as protons and neutrons.
The sum rules are derived from general principles, and are useful in situations where the behavior of individual energy levels is too complex to be described by a precise quantum-mechanical theory. In general, sum rules are derived by using Heisenberg's quantum-mechanical algebra to construct operator equalities, which are then applied to the particles or energy levels of a system.
Derivation of sum rules[1]
Assume that the Hamiltonian ${\hat {H}}$ has a complete set of eigenfunctions $|n\rangle $ with eigenvalues $E_{n}$:
${\hat {H}}|n\rangle =E_{n}|n\rangle .$
For the Hermitian operator ${\hat {A}}$ we define the repeated commutator ${\hat {C}}^{(k)}$ iteratively by:
${\begin{aligned}{\hat {C}}^{(0)}&\equiv {\hat {A}}\\{\hat {C}}^{(1)}&\equiv [{\hat {H}},{\hat {A}}]={\hat {H}}{\hat {A}}-{\hat {A}}{\hat {H}}\\{\hat {C}}^{(k)}&\equiv [{\hat {H}},{\hat {C}}^{(k-1)}],\qquad k=1,2,\ldots \end{aligned}}$
The operator ${\hat {C}}^{(0)}$ is Hermitian since ${\hat {A}}$ is defined to be Hermitian. The operator ${\hat {C}}^{(1)}$ is anti-Hermitian:
$\left({\hat {C}}^{(1)}\right)^{\dagger }=({\hat {H}}{\hat {A}})^{\dagger }-({\hat {A}}{\hat {H}})^{\dagger }={\hat {A}}{\hat {H}}-{\hat {H}}{\hat {A}}=-{\hat {C}}^{(1)}.$
By induction one finds:
$\left({\hat {C}}^{(k)}\right)^{\dagger }=(-1)^{k}{\hat {C}}^{(k)}$
and also
$\langle m|{\hat {C}}^{(k)}|n\rangle =(E_{m}-E_{n})^{k}\langle m|{\hat {A}}|n\rangle .$
For a Hermitian operator we have
$|\langle m|{\hat {A}}|n\rangle |^{2}=\langle m|{\hat {A}}|n\rangle \langle m|{\hat {A}}|n\rangle ^{\ast }=\langle m|{\hat {A}}|n\rangle \langle n|{\hat {A}}|m\rangle .$
Using this relation we derive:
${\begin{aligned}\langle m|[{\hat {A}},{\hat {C}}^{(k)}]|m\rangle &=\langle m|{\hat {A}}{\hat {C}}^{(k)}|m\rangle -\langle m|{\hat {C}}^{(k)}{\hat {A}}|m\rangle \\&=\sum _{n}\langle m|{\hat {A}}|n\rangle \langle n|{\hat {C}}^{(k)}|m\rangle -\langle m|{\hat {C}}^{(k)}|n\rangle \langle n|{\hat {A}}|m\rangle \\&=\sum _{n}\langle m|{\hat {A}}|n\rangle \langle n|{\hat {A}}|m\rangle (E_{n}-E_{m})^{k}-(E_{m}-E_{n})^{k}\langle m|{\hat {A}}|n\rangle \langle n|{\hat {A}}|m\rangle \\&=\sum _{n}(1-(-1)^{k})(E_{n}-E_{m})^{k}|\langle m|{\hat {A}}|n\rangle |^{2}.\end{aligned}}$
The result can be written as
$\langle m|[{\hat {A}},{\hat {C}}^{(k)}]|m\rangle ={\begin{cases}0,&{\mbox{if }}k{\mbox{ is even}}\\2\sum _{n}(E_{n}-E_{m})^{k}|\langle m|{\hat {A}}|n\rangle |^{2},&{\mbox{if }}k{\mbox{ is odd}}.\end{cases}}$
For $k=1$ this gives:
$\langle m|[{\hat {A}},[{\hat {H}},{\hat {A}}]]|m\rangle =2\sum _{n}(E_{n}-E_{m})|\langle m|{\hat {A}}|n\rangle |^{2}.$
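The identity for $k=1$ can be checked numerically on a finite-dimensional model: take random Hermitian matrices in place of ${\hat {H}}$ and ${\hat {A}}$, diagonalize ${\hat {H}}$, and compare both sides in the eigenbasis. The following Python/NumPy sketch does this; the matrix size, random seed, and choice of state $m$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

H = random_hermitian(N)                  # finite-dimensional stand-in for the Hamiltonian
A = random_hermitian(N)                  # a Hermitian operator

E, U = np.linalg.eigh(H)                 # eigenvalues E_n and eigenvectors |n> (columns of U)
A_eig = U.conj().T @ A @ U               # matrix elements <m|A|n> in the eigenbasis
H_eig = np.diag(E)

C1 = H_eig @ A_eig - A_eig @ H_eig       # [H, A] in the eigenbasis
m = 0
lhs = (A_eig @ C1 - C1 @ A_eig)[m, m]    # <m|[A, [H, A]]|m>
rhs = 2 * np.sum((E - E[m]) * np.abs(A_eig[m, :])**2)

print(np.isclose(lhs, rhs))              # True, up to floating-point error
```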
See also
• Oscillator strength
• Sum rules (quantum field theory)
• QCD sum rules
References
1. Wang, Sanwu (1999-07-01). "Generalization of the Thomas-Reiche-Kuhn and the Bethe sum rules". Physical Review A. American Physical Society (APS). 60 (1): 262–266. Bibcode:1999PhRvA..60..262W. doi:10.1103/physreva.60.262. ISSN 1050-2947.
| Wikipedia |
Summation
In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.
This article is about sums of several elements. For more elementary aspects, see Addition. For infinite sums, see Series (mathematics). For other uses, see Summation (disambiguation).
Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.
The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.
Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100. Otherwise, summation is denoted by using Σ notation, where $ \sum $ is an enlarged capital Greek letter sigma. For example, the sum of the first n natural numbers can be denoted as $ \sum _{i=1}^{n}i.$
For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,[lower-alpha 1]
$\sum _{i=1}^{n}i={\frac {n(n+1)}{2}}.$
Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article.
Notation
Capital-sigma notation
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, $ \sum $, an enlarged form of the upright capital Greek letter sigma. This is defined as
$\sum _{i\mathop {=} m}^{n}a_{i}=a_{m}+a_{m+1}+a_{m+2}+\cdots +a_{n-1}+a_{n}$
where i is the index of summation; ai is an indexed variable representing each term of the sum; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by one for each successive term, stopping when i = n.[lower-alpha 2]
This is read as "sum of ai, from i = m to n".
Here is an example showing the summation of squares:
$\sum _{i=3}^{6}i^{2}=3^{2}+4^{2}+5^{2}+6^{2}=86.$
In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as $i$,[lower-alpha 3] $j$, $k$, and $n$; the latter is also often used for the upper bound of a summation.
Alternatively, index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to n.[1] For example, one might write that:
$\sum a_{i}^{2}=\sum _{i=1}^{n}a_{i}^{2}.$
Generalizations of this notation are often used, in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:
$\sum _{0\leq k<100}f(k)$
is an alternative notation for $ \sum _{k=0}^{99}f(k),$ the sum of $f(k)$ over all (integers) $k$ in the specified range. Similarly,
$\sum _{x\mathop {\in } S}f(x)$
is the sum of $f(x)$ over all elements $x$ in the set $S$, and
$\sum _{d\,|\,n}\;\mu (d)$
is the sum of $\mu (d)$ over all positive integers $d$ dividing $n$.[lower-alpha 4]
There are also ways to generalize the use of many sigma signs. For example,
$\sum _{i,j}$
is the same as
$\sum _{i}\sum _{j}.$
A similar notation is used for the product of a sequence, where $ \prod $, an enlarged form of the Greek capital letter pi, is used instead of $ \sum .$
Special cases
It is possible to sum fewer than 2 numbers:
• If the summation has one summand $x$, then the evaluated sum is $x$.
• If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if $n=m$ in the definition above, then there is only one term in the sum; if $n=m-1$, then there is none.
Formal definition
Summation may be defined recursively as follows:
$\sum _{i=a}^{b}g(i)=0$, for $b<a$;
$\sum _{i=a}^{b}g(i)=g(b)+\sum _{i=a}^{b-1}g(i)$, for $b\geqslant a$.
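The recursive definition translates directly into code; the following Python sketch mirrors it, including the empty-sum convention for $b<a$ (the function name is illustrative).

```python
def summation(g, a, b):
    # Recursive definition: the empty sum (b < a) is 0.
    if b < a:
        return 0
    return g(b) + summation(g, a, b - 1)

print(summation(lambda i: i, 1, 100))    # 5050 = 100*101/2
print(summation(lambda i: i**2, 3, 6))   # 86, as in the example above
```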
Measure theory notation
In the notation of measure and integration theory, a sum can be expressed as a definite integral,
$\sum _{k\mathop {=} a}^{b}f(k)=\int _{[a,b]}f\,d\mu $
where $[a,b]$ is the subset of the integers from $a$ to $b$, and where $\mu $ is the counting measure.
Calculus of finite differences
Given a function f that is defined over the integers in the interval [m, n], the following equation holds:
$f(n)-f(m)=\sum _{i=m}^{n-1}(f(i+1)-f(i)).$
This is known as a telescoping series and is the analogue of the fundamental theorem of calculus in calculus of finite differences, which states that:
$f(n)-f(m)=\int _{m}^{n}f'(x)\,dx,$
where
$f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}$
is the derivative of f.
An example of application of the above equation is the following:
$n^{k}=\sum _{i=0}^{n-1}\left((i+1)^{k}-i^{k}\right).$
Using binomial theorem, this may be rewritten as:
$n^{k}=\sum _{i=0}^{n-1}\left(\sum _{j=0}^{k-1}{\binom {k}{j}}i^{j}\right).$
The above formula is more commonly used for inverting of the difference operator $\Delta $, defined by:
$\Delta (f)(n)=f(n+1)-f(n),$
where f is a function defined on the nonnegative integers. Thus, given such a function f, the problem is to compute the antidifference of f, a function $F=\Delta ^{-1}f$ such that $\Delta F=f$. That is, $F(n+1)-F(n)=f(n).$ This function is defined up to the addition of a constant, and may be chosen as[2]
$F(n)=\sum _{i=0}^{n-1}f(i).$
There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where $f(n)=n^{k}$ and, by linearity, for every polynomial function of n.
Approximation by definite integrals
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function f:
$\int _{s=a-1}^{b}f(s)\ ds\leq \sum _{i=a}^{b}f(i)\leq \int _{s=a}^{b+1}f(s)\ ds.$
and for any decreasing function f:
$\int _{s=a}^{b+1}f(s)\ ds\leq \sum _{i=a}^{b}f(i)\leq \int _{s=a-1}^{b}f(s)\ ds.$
For more general approximations, see the Euler–Maclaurin formula.
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance
${\frac {b-a}{n}}\sum _{i=0}^{n-1}f\left(a+i{\frac {b-a}{n}}\right)\approx \int _{a}^{b}f(x)\ dx,$
since the right-hand side is by definition the limit for $n\to \infty $ of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
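As a concrete check of the first pair of bounds (for an increasing function), take $f(x)=x^{2}$, whose antiderivative is $x^{3}/3$, on the range $a=1$ to $b=100$; the Python sketch below evaluates both integrals in closed form and compares them with the sum. The choice of function and range is only illustrative.

```python
# Bounds for increasing f:  integral_{a-1}^{b} f  <=  sum_{i=a}^{b} f(i)  <=  integral_{a}^{b+1} f
a, b = 1, 100
F = lambda x: x**3 / 3          # antiderivative of f(x) = x**2

lower = F(b) - F(a - 1)
s = sum(i**2 for i in range(a, b + 1))
upper = F(b + 1) - F(a)

print(lower <= s <= upper)      # True: 333333.33... <= 338350 <= 343433.33...
```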
Identities
The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series.
General identities
$\sum _{n=s}^{t}C\cdot f(n)=C\cdot \sum _{n=s}^{t}f(n)\quad $ (distributivity)[3]
$\sum _{n=s}^{t}f(n)\pm \sum _{n=s}^{t}g(n)=\sum _{n=s}^{t}\left(f(n)\pm g(n)\right)\quad $ (commutativity and associativity)[3]
$\sum _{n=s}^{t}f(n)=\sum _{n=s+p}^{t+p}f(n-p)\quad $ (index shift)
$\sum _{n\in B}f(n)=\sum _{m\in A}f(\sigma (m)),\quad $ for a bijection σ from a finite set A onto a set B (index change); this generalizes the preceding formula.
$\sum _{n=s}^{t}f(n)=\sum _{n=s}^{j}f(n)+\sum _{n=j+1}^{t}f(n)\quad $ (splitting a sum, using associativity)
$\sum _{n=a}^{b}f(n)=\sum _{n=0}^{b}f(n)-\sum _{n=0}^{a-1}f(n)\quad $ (a variant of the preceding formula)
$\sum _{n=s}^{t}f(n)=\sum _{n=0}^{t-s}f(t-n)\quad $ (the sum from the first term up to the last is equal to the sum from the last down to the first)
$\sum _{n=0}^{t}f(n)=\sum _{n=0}^{t}f(t-n)\quad $ (a particular case of the formula above)
$\sum _{i=k_{0}}^{k_{1}}\sum _{j=l_{0}}^{l_{1}}a_{i,j}=\sum _{j=l_{0}}^{l_{1}}\sum _{i=k_{0}}^{k_{1}}a_{i,j}\quad $ (commutativity and associativity, again)
$\sum _{k\leq j\leq i\leq n}a_{i,j}=\sum _{i=k}^{n}\sum _{j=k}^{i}a_{i,j}=\sum _{j=k}^{n}\sum _{i=j}^{n}a_{i,j}=\sum _{j=0}^{n-k}\sum _{i=k}^{n-j}a_{i+j,i}\quad $ (another application of commutativity and associativity)
$\sum _{n=2s}^{2t+1}f(n)=\sum _{n=s}^{t}f(2n)+\sum _{n=s}^{t}f(2n+1)\quad $ (splitting a sum into its odd and even parts, for even indexes)
$\sum _{n=2s+1}^{2t}f(n)=\sum _{n=s+1}^{t}f(2n)+\sum _{n=s+1}^{t}f(2n-1)\quad $ (splitting a sum into its odd and even parts, for odd indexes)
$\left(\sum _{i=0}^{n}a_{i}\right)\left(\sum _{j=0}^{n}b_{j}\right)=\sum _{i=0}^{n}\sum _{j=0}^{n}a_{i}b_{j}\quad $ (distributivity)
$\sum _{i=s}^{m}\sum _{j=t}^{n}{a_{i}}{c_{j}}=\left(\sum _{i=s}^{m}a_{i}\right)\left(\sum _{j=t}^{n}c_{j}\right)\quad $ (distributivity allows factorization)
$\sum _{n=s}^{t}\log _{b}f(n)=\log _{b}\prod _{n=s}^{t}f(n)\quad $ (the logarithm of a product is the sum of the logarithms of the factors)
$C^{\sum \limits _{n=s}^{t}f(n)}=\prod _{n=s}^{t}C^{f(n)}\quad $ (the exponential of a sum is the product of the exponential of the summands)
Powers and logarithm of arithmetic progressions
$\sum _{i=1}^{n}c=nc\quad $ for every c that does not depend on i
$\sum _{i=0}^{n}i=\sum _{i=1}^{n}i={\frac {n(n+1)}{2}}\qquad $ (Sum of the simplest arithmetic progression, consisting of the first n natural numbers.)[2]: 52
$\sum _{i=1}^{n}(2i-1)=n^{2}\qquad $ (Sum of first odd natural numbers)
$\sum _{i=0}^{n}2i=n(n+1)\qquad $ (Sum of first even natural numbers)
$\sum _{i=1}^{n}\log i=\log n!\qquad $ (A sum of logarithms is the logarithm of the product)
$\sum _{i=0}^{n}i^{2}=\sum _{i=1}^{n}i^{2}={\frac {n(n+1)(2n+1)}{6}}={\frac {n^{3}}{3}}+{\frac {n^{2}}{2}}+{\frac {n}{6}}\qquad $ (Sum of the first squares, see square pyramidal number.) [2]: 52
$\sum _{i=0}^{n}i^{3}=\left(\sum _{i=0}^{n}i\right)^{2}=\left({\frac {n(n+1)}{2}}\right)^{2}={\frac {n^{4}}{4}}+{\frac {n^{3}}{2}}+{\frac {n^{2}}{4}}\qquad $ (Nicomachus's theorem) [2]: 52
More generally, one has Faulhaber's formula for $p>1$
$\sum _{k=1}^{n}k^{p}={\frac {n^{p+1}}{p+1}}+{\frac {1}{2}}n^{p}+\sum _{k=2}^{p}{\binom {p}{k}}{\frac {B_{k}}{p-k+1}}\,n^{p-k+1},$
where $B_{k}$ denotes a Bernoulli number, and ${\binom {p}{k}}$ is a binomial coefficient.
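Faulhaber's formula can be verified with exact rational arithmetic by generating the Bernoulli numbers from their standard recurrence; the Python sketch below is a naive implementation, intended only as a check against direct summation for small exponents.

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    # B_0, ..., B_m from the recurrence sum_{j=0}^{k} C(k+1, j) B_j = 0 for k >= 1.
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for k in range(1, m + 1):
        B[k] = -Fraction(1, k + 1) * sum(comb(k + 1, j) * B[j] for j in range(k))
    return B

def faulhaber(n, p):
    # sum_{k=1}^{n} k^p via the formula quoted above (only B_k with k >= 2 are needed).
    B = bernoulli(p)
    total = Fraction(n**(p + 1), p + 1) + Fraction(n**p, 2)
    for k in range(2, p + 1):
        total += comb(p, k) * B[k] * Fraction(n**(p - k + 1), p - k + 1)
    return total

assert all(faulhaber(20, p) == sum(k**p for k in range(1, 21)) for p in range(1, 8))
print("Faulhaber's formula agrees with direct summation for p = 1, ..., 7")
```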
Summation index in exponents
In the following summations, a is assumed to be different from 1.
$\sum _{i=0}^{n-1}a^{i}={\frac {1-a^{n}}{1-a}}$ (sum of a geometric progression)
$\sum _{i=0}^{n-1}{\frac {1}{2^{i}}}=2-{\frac {1}{2^{n-1}}}$ (special case for a = 1/2)
$\sum _{i=0}^{n-1}ia^{i}={\frac {a-na^{n}+(n-1)a^{n+1}}{(1-a)^{2}}}$ (a times the derivative with respect to a of the geometric progression)
${\begin{aligned}\sum _{i=0}^{n-1}\left(b+id\right)a^{i}&=b\sum _{i=0}^{n-1}a^{i}+d\sum _{i=0}^{n-1}ia^{i}\\&=b\left({\frac {1-a^{n}}{1-a}}\right)+d\left({\frac {a-na^{n}+(n-1)a^{n+1}}{(1-a)^{2}}}\right)\\&={\frac {b(1-a^{n})-(n-1)da^{n}}{1-a}}+{\frac {da(1-a^{n-1})}{(1-a)^{2}}}\end{aligned}}$
(sum of an arithmetico–geometric sequence)
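A quick numerical check of these identities, with illustrative values of $a$, $b$, $d$ and $n$ (integer arithmetic keeps the comparison exact):

```python
a, n = 3, 10                    # ratio a != 1 and number of terms
b, d = 5, 2                     # arithmetic part b + i*d

geom = sum(a**i for i in range(n))
assert geom == (1 - a**n) // (1 - a)                                   # geometric progression

weighted = sum(i * a**i for i in range(n))
assert weighted == (a - n*a**n + (n - 1)*a**(n + 1)) // (1 - a)**2     # sum of i * a**i

arith_geom = sum((b + i*d) * a**i for i in range(n))
assert arith_geom == b * geom + d * weighted                           # arithmetico-geometric sum

print(geom, weighted, arith_geom)
```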
Binomial coefficients and factorials
Main article: Binomial coefficient § Sums of the binomial coefficients
There exist very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.
Involving the binomial theorem
$\sum _{i=0}^{n}{n \choose i}a^{n-i}b^{i}=(a+b)^{n},$ the binomial theorem
$\sum _{i=0}^{n}{n \choose i}=2^{n},$ the special case where a = b = 1
$\sum _{i=0}^{n}{n \choose i}p^{i}(1-p)^{n-i}=1$, the special case where p = a = 1 − b, which, for $0\leq p\leq 1,$ expresses the sum of the binomial distribution
$\sum _{i=0}^{n}i{n \choose i}=n(2^{n-1}),$ the value at a = b = 1 of the derivative with respect to a of the binomial theorem
$\sum _{i=0}^{n}{\frac {n \choose i}{i+1}}={\frac {2^{n+1}-1}{n+1}},$ the value at a = b = 1 of the antiderivative with respect to a of the binomial theorem
Involving permutation numbers
In the following summations, ${}_{n}P_{k}$ is the number of k-permutations of n.
$\sum _{i=0}^{n}{}_{i}P_{k}{n \choose i}={}_{n}P_{k}(2^{n-k})$
$\sum _{i=1}^{n}{}_{i+k}P_{k+1}=\sum _{i=1}^{n}\prod _{j=0}^{k}(i+j)={\frac {(n+k+1)!}{(n-1)!(k+2)}}$
$\sum _{i=0}^{n}i!\cdot {n \choose i}=\sum _{i=0}^{n}{}_{n}P_{i}=\lfloor n!\cdot e\rfloor ,\quad n\in \mathbb {Z} ^{+}$, where $e$ denotes Euler's number and $\lfloor x\rfloor $ denotes the floor function.
Others
$\sum _{k=0}^{m}{\binom {n+k}{n}}={\binom {n+m+1}{n+1}}$
$\sum _{i=k}^{n}{i \choose k}={n+1 \choose k+1}$
$\sum _{i=0}^{n}i\cdot i!=(n+1)!-1$
$\sum _{i=0}^{n}{m+i-1 \choose i}={m+n \choose n}$
$\sum _{i=0}^{n}{n \choose i}^{2}={2n \choose n}$
$\sum _{i=0}^{n}{\frac {1}{i!}}={\frac {\lfloor n!\;e\rfloor }{n!}}$
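Several of the identities above can be spot-checked with Python's standard-library combinatorics functions; the values of $n$ and $k$ below are arbitrary choices for the demonstration.

```python
from math import comb, factorial, floor, e, perm

n, k = 10, 4

# Hockey-stick identity: sum_{i=k}^{n} C(i, k) = C(n+1, k+1)
assert sum(comb(i, k) for i in range(k, n + 1)) == comb(n + 1, k + 1)

# sum_{i=0}^{n} i * i! = (n+1)! - 1
assert sum(i * factorial(i) for i in range(n + 1)) == factorial(n + 1) - 1

# sum_{i=0}^{n} C(n, i)^2 = C(2n, n)
assert sum(comb(n, i)**2 for i in range(n + 1)) == comb(2 * n, n)

# sum_{i=0}^{n} nPi = floor(n! * e)
assert sum(perm(n, i) for i in range(n + 1)) == floor(factorial(n) * e)

print("identities verified for n =", n)
```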
Harmonic numbers
$\sum _{i=1}^{n}{\frac {1}{i}}=H_{n}\quad $ (the nth harmonic number)
$\sum _{i=1}^{n}{\frac {1}{i^{k}}}=H_{n}^{k}\quad $ (a generalized harmonic number)
Growth rates
The following are useful approximations (using theta notation):
$\sum _{i=1}^{n}i^{c}\in \Theta (n^{c+1})$ for real c greater than −1
$\sum _{i=1}^{n}{\frac {1}{i}}\in \Theta (\log _{e}n)$ (See Harmonic number)
$\sum _{i=1}^{n}c^{i}\in \Theta (c^{n})$ for real c greater than 1
$\sum _{i=1}^{n}\log(i)^{c}\in \Theta (n\cdot \log(n)^{c})$ for non-negative real c
$\sum _{i=1}^{n}\log(i)^{c}\cdot i^{d}\in \Theta (n^{d+1}\cdot \log(n)^{c})$ for non-negative real c, d
$\sum _{i=1}^{n}\log(i)^{c}\cdot i^{d}\cdot b^{i}\in \Theta (n^{d}\cdot \log(n)^{c}\cdot b^{n})$ for non-negative real b > 1, c, d
History
• In 1675, Gottfried Wilhelm Leibniz, in a letter to Henry Oldenburg, suggests the symbol ∫ to mark the sum of differentials (Latin: calculus summatorius), hence the S-shape.[4][5][6] The renaming of this symbol to integral arose later in exchanges with Johann Bernoulli.[6]
• In 1755, the summation symbol Σ is attested in Leonhard Euler's Institutiones calculi differentialis.[7][8] Euler uses the symbol in expressions like:
$\Sigma \ (2wx+w^{2})=x^{2}$
• In 1772, usage of Σ and Σn is attested by Lagrange.[7][9]
• In 1823, the capital letter S is attested as a summation symbol for series. This usage was apparently widespread.[7]
• In 1829, the summation symbol Σ is attested by Fourier and C. G. J. Jacobi.[7] Fourier's use includes lower and upper bounds, for example:[10][11]
$\sum _{i=1}^{\infty }e^{-i^{2}t}\ldots $
See also
• Capital-pi notation
• Einstein notation
• Iverson bracket
• Iterated binary operation
• Kahan summation algorithm
• Product (mathematics)
• Summation by parts
• ∑ the summation single glyph (U+2211 N-ARY SUMMATION)
• ⎲ the paired glyph's beginning (U+23B2 SUMMATION TOP)
• ⎳ the paired glyph's end (U+23B3 SUMMATION BOTTOM)
Notes
1. For details, see Triangular number.
2. For a detailed exposition on summation notation, and arithmetic with sums, see Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). "Chapter 2: Sums". Concrete Mathematics: A Foundation for Computer Science (PDF) (2nd ed.). Addison-Wesley Professional. ISBN 978-0201558029.
3. in contexts where there is no possibility of confusion with the imaginary unit $i$
4. Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet ($i$ through $q$) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see $x$ instead of $k$ in the above formulae involving $k$.
References
1. "Summation Notation". www.columbia.edu. Retrieved 2020-08-16.
2. Handbook of Discrete and Combinatorial Mathematics, Kenneth H. Rosen, John G. Michaels, CRC Press, 1999, ISBN 0-8493-0149-1.
3. "Calculus I - Summation Notation". tutorial.math.lamar.edu. Retrieved 2020-08-16.
4. Burton, David M. (2011). The History of Mathematics: An Introduction (7th ed.). McGraw-Hill. p. 414. ISBN 978-0-07-338315-6.
5. Leibniz, Gottfried Wilhelm (1899). Gerhardt, Karl Immanuel (ed.). Der Briefwechsel von Gottfried Wilhelm Leibniz mit Mathematikern. Erster Band. Berlin: Mayer & Müller. p. 154.
6. Cajori (1929), pp. 181-182.
7. Cajori (1929), p. 61.
8. Euler, Leonhard (1755). Institutiones Calculi differentialis (in Latin). Petropolis. p. 27.
9. Lagrange, Joseph-Louis (1867–1892). Oeuvres de Lagrange. Tome 3 (in French). Paris. p. 451.
10. Mémoires de l'Académie royale des sciences de l'Institut de France pour l'année 1825, tome VIII (in French). Paris: Didot. 1829. pp. 581-622.
11. Fourier, Jean-Baptiste Joseph (1888–1890). Oeuvres de Fourier. Tome 2 (in French). Paris: Gauthier-Villars. p. 149.
Bibliography
• Cajori, Florian (1929). A History Of Mathematical Notations Volume II. Open Court Publishing. ISBN 978-0-486-67766-8.
External links
• Media related to Summation at Wikimedia Commons
| Wikipedia |
Sumario Compendioso
The Sumario Compendioso was the first mathematics book published in the New World. The book was published in Mexico City in 1556 by a clergyman Juan Diez.
Availability
The book has been digitized and is available on the Internet.
Before digitization, the only four known surviving copies were preserved at the Huntington Library in San Marino, California; the British Library in London; Duke University Library; and the University of Salamanca in Spain.[1]
Excerpts
In his book The Math Book, Clifford A. Pickover provided the following information about Sumario Compendioso:
The Sumario Compendioso, published in Mexico City in 1556, is the first work on mathematics printed in the Americas. The publication of Sumario Compendioso in the New World preceded by many decades the emigration of the Puritans to North America and the settlement in Jamestown, Virginia. The author, Brother Juan Diez, was a companion of Hernando Cortes, the Spanish conquistador, during Cortes's conquests of the Aztec Empire.[2]
References
1. old.nationalcurvebank.org
2. Clifford A. Pickover (2009). The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics. Sterling Publishing Company, Inc. p. 120. ISBN 978-1-4027-5796-9. Retrieved 29 July 2012.
External links
• Open Library
• HathiTrust
• JSTOR
• Archive.org
| Wikipedia |
Sumihiro's theorem
In algebraic geometry, Sumihiro's theorem, introduced by (Sumihiro 1974), states that a normal algebraic variety with an action of a torus can be covered by torus-invariant affine open subsets.
The "normality" in the hypothesis cannot be relaxed.[1] The hypothesis that the group acting on the variety is a torus can also not be relaxed.[2]
Notes
1. Cox, David A.; Little, John B.; Schenck, Henry K. (2011). Toric Varieties. American Mathematical Soc. ISBN 978-0-8218-4819-7.
2. "Bialynicki-Birula decomposition of a non-singular quasi-projective scheme". MathOverflow. Retrieved 2023-03-10.
References
• Sumihiro, Hideyasu (1974), "Equivariant completion", J. Math. Kyoto Univ., 14: 1–28, doi:10.1215/kjm/1250523277.
External links
• Alper, Jarod; Hall, Jack; Rydh, David (2015). "A Luna étale slice theorem for algebraic stacks". arXiv:1504.06467 [math.AG].
| Wikipedia |
Summa de arithmetica
Summa de arithmetica, geometria, proportioni et proportionalita (Summary of arithmetic, geometry, proportions and proportionality) is a book on mathematics written by Luca Pacioli and first published in 1494. It contains a comprehensive summary of Renaissance mathematics, including practical arithmetic, basic algebra, basic geometry and accounting, written for use as a textbook and reference work.
Summa de arithmetica, geometria, proportioni et proportionalita
Title page of the second (1523) edition
Author: Luca Pacioli
Country: Republic of Venice
Language: Italian
Subjects: Mathematics, Accounting
Publisher: Paganini (Venice)
Publication date: 1494
Pages: 615 pp (first edition)
Written in vernacular Italian, the Summa is the first printed work on algebra, and it contains the first published description of the double-entry bookkeeping system. It set a new standard for writing and argumentation about algebra, and its impact upon the subsequent development and standardization of professional accounting methods was so great that Pacioli is sometimes referred to as the "father of accounting".
Contents
The Summa de arithmetica as originally printed consists of ten chapters on a series of mathematical topics, collectively covering essentially all of Renaissance mathematics. The first seven chapters form a summary of arithmetic in 222 pages. The eighth chapter explains contemporary algebra in 78 pages. The ninth chapter discusses various topics relevant to business and trade, including barter, bills of exchange, weights and measures and bookkeeping, in 150 pages. The tenth and final chapter describes practical geometry (including basic trigonometry) in 151 pages.[1]
The book's mathematical content draws heavily on the traditions of the abacus schools of contemporary northern Italy, where the children of merchants and the middle class studied arithmetic on the model established by Fibonacci's Liber Abaci. The emphasis of this tradition was on facility with computation, using the Hindu–Arabic numeral system, developed through exposure to numerous example problems and case studies drawn principally from business and trade.[2] Pacioli's work likewise teaches through examples, but it also develops arguments for the validity of its solutions through reference to general principles, axioms and logical proof. In this way the Summa begins to reintegrate the logical methods of classical Greek geometry into the medieval discipline of algebra.[3]
Bookkeeping and finance
Within the chapter on business, a section entitled Particularis de computis et scripturis (Details of calculation and recording) describes the accounting methods then in use among northern-Italian merchants, including double-entry bookkeeping, trial balances, balance sheets and various other tools still employed by professional accountants.[4] The business chapter also introduces the rule of 72 for predicting an investment's future value, anticipating the development of the logarithm by more than century.[5] These techniques did not originate with Pacioli, who merely recorded and explained the established best practices of contemporary businesspeople in his region.[1]
Plagiarism controversy
Pacioli explicitly states in the Summa that he contributed no original mathematical content to the work, but he also does not specifically attribute any of the material to other sources.[1] Subsequent scholarship has found that much of the work's coverage of geometry is taken almost exactly from Piero della Francesca’s Trattato d’abaco, one of the algebra sections is based on the Trattato di Fioretti of Antonio de Mazzinghi, and a portion of the business chapter is copied from a manuscript by Giorgio Chiarini. This sort of appropriation has led some historians (notably including sixteenth-century biographer Giorgio Vasari) to accuse Pacioli of plagiarism in the Summa (and other works). Many of the problems and techniques included in the book are quite directly taken from these earlier works, but the Summa generally adds original logical arguments to justify the validity of the methods.[3]
History
Summa de arithmetica was composed over a period of decades through Pacioli's work as a professor of mathematics, and was probably intended as a textbook and reference work for students of mathematics and business, especially among the mercantile middle class of northern Italy.[1] It was written in vernacular Italian (rather than Latin), reflecting its target audience and its purpose as a teaching text. The work was dedicated to Guidobaldo da Montefeltro, Duke of Urbino, a patron of the arts whom Pacioli had met in Rome some years earlier.[4]
It was originally published in Venice in 1494 by Paganino Paganini,[6] with an identical second edition printed in 1523 in Toscolano. About a thousand copies were originally printed, of which roughly 120 are still extant.[7] In June 2019 an intact first edition sold at auction for US$1,215,000.[4]
Impact and legacy
While the Summa contained little or no original mathematical work by Pacioli, it was the most comprehensive mathematical text ever published at the time.[8] Its thoroughness and clarity (and the lack of any other similar work available in print) generated strong and steady sales to the European merchants who were the text's intended audience.[1] The reputation the Summa earned Pacioli as a mathematician and intellectual inspired Ludovico Sforza, Duke of Milan, to invite him to serve as a mathematical lecturer in the ducal court, where Pacioli befriended and collaborated with Leonardo da Vinci.[4]
The Summa represents the first published description of many accounting techniques, including double-entry bookkeeping.[8] Some of the same methods were described in other manuscripts predating the Summa (such as the 1458 Della mercatura e del mercante perfetto by Benedetto Cotrugli), but none was published before Pacioli's work, and none achieved the same wide influence. The work's role in standardizing and disseminating professional bookkeeping methods has earned Pacioli a reputation as the "father of accounting".[9]
The book also marks the beginning of a movement in sixteenth-century algebra toward the use of logical argumentation and theorems in the study of algebra, following the model of classical Greek geometry established by Euclid.[3] It is thought to be the first printed work on algebra,[4] and it includes the first printed example of a set of plus and minus signs that were to become standard in Italian Renaissance mathematics: 'p' with a tilde above (p̄) for "plus" and 'm' with a tilde (m̄) for minus.[1] Pacioli's (incorrect) assertion in the Summa that there was no general solution to cubic equations helped to popularize the problem among contemporary mathematicians, contributing to its subsequent solution by Niccolò Tartaglia.[4]
Commemoration
In 1994 Italy issued a 750-lira postage stamp honoring the 500th anniversary of the Summa's publication, depicting Pacioli surrounded by mathematical and geometric implements. The image on the stamp was inspired by the Portrait of Luca Pacioli and contains many of the same elements.[10]
See also
• De divina proportione, another influential mathematical work by Pacioli
• List of most expensive books and manuscripts
References
1. Sangster, Alan; Stoner, Gregory N; McCarthy, Patricia (June 2008). "The Market for Luca Pacioli's Summa Arithmetica" (PDF). Accounting Historians Journal. 35 (1): 111–134. doi:10.2308/0148-4184.35.1.111. Retrieved 15 January 2020.
2. Napolitani, Pier Daniele (2013). "Pacioli, Luca". Il Contributo italiano alla storia del Pensiero – Scienze (in Italian). Treccani. Retrieved 15 January 2020.
3. Heeffer, Albrecht. From Problem Solving to Argumentation: Pacioli's Appropriation of Abbacus Algebra (PDF) (Thesis). Ghent University. Retrieved 15 January 2020.
4. "Somma di arithmetica, geometria, proporzioni e proporzionalità". Christie's. 12 June 2019. Retrieved 15 January 2020.
5. O'Connor, John J; Robertson, Edmund F (March 2006). "A Napierian logarithm before Napier". MacTutor. University of St Andrews. Retrieved 15 January 2020.
6. Nuovo, Angela (2014). "PAGANINI, Paganino". Dizionario Biografico degli Italiani (in Italian). Vol. 80. Treccani. Retrieved 15 January 2020.
7. "A revolutionary treatise goes on the block". The Economist. 6 June 2019. Retrieved 15 January 2020.
8. Swetz, Frank J; Katz, Victor J (January 2011). "Mathematical Treasures—Pacioli's Summa". Loci. Mathematical Association of America. Retrieved 15 January 2020.
9. Smith, Murphy (2018). "Luca Pacioli: The Father of Accounting". SSRN 2320658.
10. John F. Ptak (11 March 2008). "The Sovietesque Disappearance of Pacioli's Rhombicuboctahedron". JF Ptak Science Books. Retrieved 15 January 2020.
External links
Wikimedia Commons has media related to Summa de arithmetica.
• Full text (1523 edition) on Google Books
• Digitised edition of Particularis de computis et scripturis (First (1494) edition)
• English translation of Particularis de computis et scripturis (1994)
| Wikipedia |
Einstein notation
In mathematics, especially the usage of linear algebra in mathematical physics, Einstein notation (also known as the Einstein summation convention or Einstein summation notation) is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in physics applications that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916.[1]
Introduction
Statement of convention
According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see Free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set {1, 2, 3},
$y=\sum _{i=1}^{3}c_{i}x^{i}=c_{1}x^{1}+c_{2}x^{2}+c_{3}x^{3}$
is simplified by the convention to:
$y=c_{i}x^{i}$
The upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. That is, in this context x2 should be understood as the second component of x rather than the square of x (this can occasionally lead to ambiguity). The upper index position in xi is because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see § Application below). Typically, (x1 x2 x3) would be equivalent to the traditional (x y z).
In general relativity, a common convention is that
• the Greek alphabet is used for space and time components, where indices take on values 0, 1, 2, or 3 (frequently used letters are μ, ν, ...),
• the Latin alphabet is used for spatial components only, where indices take on values 1, 2, or 3 (frequently used letters are i, j, ...),
In general, indices can range over any indexing set, including an infinite set. This should not be confused with a typographically similar convention used to distinguish between tensor index notation and the closely related but distinct basis-independent abstract index notation.
An index that is summed over is a summation index, in this case "i ". It is also called a dummy index since any symbol can replace "i " without changing the meaning of the expression (provided that it does not collide with other index symbols in the same term).
An index that is not summed over is a free index and should appear only once per term. If such an index does appear, it usually also appears in every other term in an equation. An example of a free index is the "i " in the equation $v_{i}=a_{i}b_{j}x^{j}$, which is equivalent to the equation $ v_{i}=\sum _{j}(a_{i}b_{j}x^{j})$.
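As an illustration of the convention (not part of the original article), the following minimal Python sketch, assuming NumPy is available, evaluates $y=c_{i}x^{i}$ and the free-index example $v_{i}=a_{i}b_{j}x^{j}$ both by explicit summation over the dummy index and with numpy.einsum, whose subscript strings mirror the index notation; all array values are arbitrary.

import numpy as np

c = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

# y = c_i x^i : the repeated (dummy) index i is summed over.
y_explicit = sum(c[i] * x[i] for i in range(3))
y_einsum = np.einsum('i,i->', c, x)      # no output index, so i is contracted
assert np.isclose(y_explicit, y_einsum)

# v_i = a_i b_j x^j : j is a dummy index, i is a free index and survives.
a = np.array([1.0, 0.0, 2.0])
b = np.array([3.0, 1.0, 1.0])
v_explicit = np.array([a[i] * sum(b[j] * x[j] for j in range(3)) for i in range(3)])
v_einsum = np.einsum('i,j,j->i', a, b, x)
assert np.allclose(v_explicit, v_einsum)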
Application
Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term.[2] When dealing with covariant and contravariant vectors, where the position of an index also indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts; see § Superscripts and subscripts versus only subscripts below.
Vector representations
Superscripts and subscripts versus only subscripts
In terms of covariance and contravariance of vectors,
• upper indices represent components of contravariant vectors (vectors),
• lower indices represent components of covariant vectors (covectors).
They transform contravariantly or covariantly, respectively, with respect to change of basis.
In recognition of this fact, the following notation uses the same symbol both for a vector or covector and its components, as in:
${\begin{aligned}v=v^{i}e_{i}={\begin{bmatrix}e_{1}&e_{2}&\cdots &e_{n}\end{bmatrix}}{\begin{bmatrix}v^{1}\\v^{2}\\\vdots \\v^{n}\end{bmatrix}}\\w=w_{i}e^{i}={\begin{bmatrix}w_{1}&w_{2}&\cdots &w_{n}\end{bmatrix}}{\begin{bmatrix}e^{1}\\e^{2}\\\vdots \\e^{n}\end{bmatrix}}\end{aligned}}$
where v is the vector and v i are its components (not the ith covector v), w is the covector and wi are its components. The basis vector elements $e_{i}$ are each column vectors, and the covector basis elements $e^{i}$ are each row covectors. (See also § Abstract description; duality, below and the examples)
In the presence of a non-degenerate form (an isomorphism V → V∗, for instance a Riemannian metric or Minkowski metric), one can raise and lower indices.
A basis gives such a form (via the dual basis), hence when working on Rn with a Euclidean metric and a fixed orthonormal basis, one has the option to work with only subscripts.
However, if one changes coordinates, the way that coefficients change depends on the variance of the object, and one cannot ignore the distinction; see Covariance and contravariance of vectors.
Mnemonics
In the above example, vectors are represented as n × 1 matrices (column vectors), while covectors are represented as 1 × n matrices (row covectors).
When using the column vector convention:
• "Upper indices go up to down; lower indices go left to right."
• "Covariant tensors are row vectors that have indices that are below (co-row-below)."
• Covectors are row vectors:
${\begin{bmatrix}w_{1}&\cdots &w_{k}\end{bmatrix}}.$
Hence the lower index indicates which column you are in.
• Contravariant vectors are column vectors:
${\begin{bmatrix}v^{1}\\\vdots \\v^{k}\end{bmatrix}}$
Hence the upper index indicates which row you are in.
Abstract description
The virtue of Einstein notation is that it represents the invariant quantities with a simple notation.
In physics, a scalar is invariant under transformations of basis. In particular, a Lorentz scalar is invariant under a Lorentz transformation. The individual terms in the sum are not. When the basis is changed, the components of a vector change by a linear transformation described by a matrix. This led Einstein to propose the convention that repeated indices imply the summation is to be done.
As for covectors, they change by the inverse matrix. This is designed to guarantee that the linear function associated with the covector, the sum above, is the same no matter what the basis is.
The value of the Einstein convention is that it applies to other vector spaces built from V using the tensor product and duality. For example, V ⊗ V, the tensor product of V with itself, has a basis consisting of tensors of the form eij = ei ⊗ ej. Any tensor T in V ⊗ V can be written as:
$\mathbf {T} =T^{ij}\mathbf {e} _{ij}.$
V *, the dual of V, has a basis e1, e2, ..., en which obeys the rule
$\mathbf {e} ^{i}(\mathbf {e} _{j})=\delta _{j}^{i}.$
where δ is the Kronecker delta. As
$\operatorname {Hom} (V,W)=V^{*}\otimes W$
the row/column coordinates on a matrix correspond to the upper/lower indices on the tensor product.
Common operations in this notation
In Einstein notation, the usual element reference $A_{mn}$ for the $m$-th row and $n$-th column of matrix $A$ becomes ${A^{m}}_{n}$. We can then write the following operations in Einstein notation as follows.
Inner product (hence also vector dot product)
Using an orthogonal basis, the inner product is the sum of corresponding components multiplied together:
$\mathbf {u} \cdot \mathbf {v} =u_{j}v^{j}$
This can also be calculated by multiplying the covector on the vector.
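A minimal sketch of the same contraction (illustrative only, with arbitrary values; NumPy assumed), both as an einsum over the repeated index and as a row covector acting on a column vector:

import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# u . v = u_j v^j : sum over the repeated index j.
dot_einsum = np.einsum('j,j->', u, v)

# Equivalently, multiply the covector (row) on the vector (column).
dot_row_col = (u.reshape(1, 3) @ v.reshape(3, 1)).item()

assert np.isclose(dot_einsum, dot_row_col) and np.isclose(dot_einsum, np.dot(u, v))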
Vector cross product
Again using an orthogonal basis (in 3 dimensions) the cross product intrinsically involves summations over permutations of components:
$\mathbf {u} \times \mathbf {v} ={\varepsilon ^{i}}_{jk}u^{j}v^{k}\mathbf {e} _{i}$
where
${\varepsilon ^{i}}_{jk}=\delta ^{il}\varepsilon _{ljk}$
$\varepsilon _{ijk}$ is the Levi-Civita symbol, and $\delta ^{il}$ is the generalized Kronecker delta. Based on this definition of $\varepsilon $, there is no difference between ${\varepsilon ^{i}}_{jk}$ and $\varepsilon _{ijk}$ but the position of indices.
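The following sketch (an illustration only; NumPy assumed) builds the three-dimensional Levi-Civita symbol explicitly and reproduces the cross product by contracting over j and k:

import numpy as np

# Levi-Civita symbol in three dimensions: +1 on even permutations of (0, 1, 2),
# -1 on odd permutations, 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# (u x v)^i = eps^i_{jk} u^j v^k : j and k are summed, i remains free.
cross = np.einsum('ijk,j,k->i', eps, u, v)
assert np.allclose(cross, np.cross(u, v))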
Matrix-vector multiplication
The product of a matrix Aij with a column vector vj is:
$\mathbf {u} _{i}=(\mathbf {A} \mathbf {v} )_{i}=\sum _{j=1}^{N}A_{ij}v_{j}$
equivalent to
$u^{i}={A^{i}}_{j}v^{j}$
This is a special case of matrix multiplication.
Matrix multiplication
The matrix product of two matrices Aij and Bjk is:
$\mathbf {C} _{ik}=(\mathbf {A} \mathbf {B} )_{ik}=\sum _{j=1}^{N}A_{ij}B_{jk}$
equivalent to
${C^{i}}_{k}={A^{i}}_{j}{B^{j}}_{k}$
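Both products can be written with einsum subscripts that mirror the index placement; a minimal sketch with arbitrary values (NumPy assumed, not part of the original text):

import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
v = np.array([1.0, -1.0, 2.0])

# u^i = A^i_j v^j : only the repeated index j is summed.
u = np.einsum('ij,j->i', A, v)
assert np.allclose(u, A @ v)

# C^i_k = A^i_j B^j_k : j is contracted, i and k remain free.
C = np.einsum('ij,jk->ik', A, B)
assert np.allclose(C, A @ B)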
Trace
For a square matrix Aij, the trace is the sum of the diagonal elements, hence the sum over a common index Aii.
Outer product
The outer product of the column vector ui by the row vector vj yields an m × n matrix A:
${A^{i}}_{j}=u^{i}v_{j}={(uv)^{i}}_{j}$
Since i and j represent two different indices, there is no summation and the indices are not eliminated by the multiplication.
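The trace and the outer product above correspond to einsum strings with, respectively, one repeated index and no repeated index at all; a brief illustrative sketch (NumPy assumed):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

# Trace: A^i_i, summing over the single repeated index i.
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# Outer product: A^i_j = u^i v_j; no index repeats, so nothing is summed.
outer = np.einsum('i,j->ij', u, v)
assert np.allclose(outer, np.outer(u, v))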
Raising and lowering indices
Given a tensor, one can raise an index or lower an index by contracting the tensor with the metric tensor, gμν. For example, taking the tensor Tαβ, one can lower an index:
$g_{\mu \sigma }{T^{\sigma }}_{\beta }=T_{\mu \beta }$
Or one can raise an index:
$g^{\mu \sigma }{T_{\sigma }}^{\alpha }=T^{\mu \alpha }$
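A short sketch of raising and lowering an index numerically (illustrative only; NumPy assumed, and the Minkowski metric with signature (+, −, −, −) is chosen purely for the example):

import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g_{mu nu} (example choice)
g_inv = np.linalg.inv(g)               # inverse metric g^{mu nu}

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))        # components T^sigma_beta

# Lower the first index: T_{mu beta} = g_{mu sigma} T^sigma_beta
T_lower = np.einsum('ms,sb->mb', g, T)

# Raise it again: g^{mu sigma} T_{sigma beta} recovers T^mu_beta
T_back = np.einsum('ms,sb->mb', g_inv, T_lower)
assert np.allclose(T_back, T)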
See also
• Tensor
• Abstract index notation
• Bra–ket notation
• Penrose graphical notation
• Levi-Civita symbol
• DeWitt notation
Notes
1. This applies only for numerical indices. The situation is the opposite for abstract indices. Then, vectors themselves carry upper abstract indices and covectors carry lower abstract indices, as per the example in the introduction of this article. Elements of a basis of vectors may carry a lower numerical index and an upper abstract index.
References
1. Einstein, Albert (1916). "The Foundation of the General Theory of Relativity". Annalen der Physik. 354 (7): 769. Bibcode:1916AnP...354..769E. doi:10.1002/andp.19163540702. Archived from the original (PDF) on 2006-08-29. Retrieved 2006-09-03.
2. "Einstein Summation". Wolfram Mathworld. Retrieved 13 April 2011.
Bibliography
• Kuptsov, L. P. (2001) [1994], "Einstein rule", Encyclopedia of Mathematics, EMS Press.
External links
The Wikibook General Relativity has a page on the topic of: Einstein Summation Notation
• Rawlings, Steve (2007-02-01). "Lecture 10 – Einstein Summation Convention and Vector Identities". Oxford University. Archived from the original on 2017-01-06. Retrieved 2008-07-02.
• "Understanding NumPy's einsum". Stack Overflow.
Tensors
Glossary of tensor theory
Scope
Mathematics
• Coordinate system
• Differential geometry
• Dyadic algebra
• Euclidean geometry
• Exterior calculus
• Multilinear algebra
• Tensor algebra
• Tensor calculus
• Physics
• Engineering
• Computer vision
• Continuum mechanics
• Electromagnetism
• General relativity
• Transport phenomena
Notation
• Abstract index notation
• Einstein notation
• Index notation
• Multi-index notation
• Penrose graphical notation
• Ricci calculus
• Tetrad (index notation)
• Van der Waerden notation
• Voigt notation
Tensor
definitions
• Tensor (intrinsic definition)
• Tensor field
• Tensor density
• Tensors in curvilinear coordinates
• Mixed tensor
• Antisymmetric tensor
• Symmetric tensor
• Tensor operator
• Tensor bundle
• Two-point tensor
Operations
• Covariant derivative
• Exterior covariant derivative
• Exterior derivative
• Exterior product
• Hodge star operator
• Lie derivative
• Raising and lowering indices
• Symmetrization
• Tensor contraction
• Tensor product
• Transpose (2nd-order tensors)
Related
abstractions
• Affine connection
• Basis
• Cartan formalism (physics)
• Connection form
• Covariance and contravariance of vectors
• Differential form
• Dimension
• Exterior form
• Fiber bundle
• Geodesic
• Levi-Civita connection
• Linear map
• Manifold
• Matrix
• Multivector
• Pseudotensor
• Spinor
• Vector
• Vector space
Notable tensors
Mathematics
• Kronecker delta
• Levi-Civita symbol
• Metric tensor
• Nonmetricity tensor
• Ricci curvature
• Riemann curvature tensor
• Torsion tensor
• Weyl tensor
Physics
• Moment of inertia
• Angular momentum tensor
• Spin tensor
• Cauchy stress tensor
• stress–energy tensor
• Einstein tensor
• EM tensor
• Gluon field strength tensor
• Metric tensor (GR)
Mathematicians
• Élie Cartan
• Augustin-Louis Cauchy
• Elwin Bruno Christoffel
• Albert Einstein
• Leonhard Euler
• Carl Friedrich Gauss
• Hermann Grassmann
• Tullio Levi-Civita
• Gregorio Ricci-Curbastro
• Bernhard Riemann
• Jan Arnoldus Schouten
• Woldemar Voigt
• Hermann Weyl
| Wikipedia |
Summation equation
In mathematics, a summation equation or discrete integral equation is an equation in which an unknown function appears under a summation sign. The theories of summation equations and integral equations can be unified as integral equations on time scales[1] using time scale calculus. A summation equation compares to a difference equation as an integral equation compares to a differential equation.
The Volterra summation equation is:
$x(t)=f(t)+\sum _{s=m}^{n}k(t,s,x(s))$
where x is the unknown function, s, m, n, and t are integers, and f and k are known functions.
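When the upper limit of the sum is $t-1$ (a common convention for discrete Volterra equations of the second kind), the equation can be solved by forward recursion, since each $x(t)$ depends only on earlier values. The following minimal sketch is an illustration under that assumption; the particular f and k used are arbitrary and not taken from the article.

def solve_volterra(f, k, m, n):
    """Return {t: x(t)} for x(t) = f(t) + sum_{s=m}^{t-1} k(t, s, x(s))."""
    x = {}
    for t in range(m, n + 1):
        # Every x(s) with s < t is already known, so x(t) is determined directly.
        x[t] = f(t) + sum(k(t, s, x[s]) for s in range(m, t))
    return x

f = lambda t: 1.0                        # forcing term (arbitrary example)
k = lambda t, s, xs: 0.5 * xs / (t - s)  # kernel depending on t, s and x(s)

print(solve_volterra(f, k, m=0, n=5))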
References
1. Volterra integral equations on time scales: Basic qualitative and quantitative results with applications to initial value problems on unbounded domains, Tomasia Kulik, Christopher C. Tisdell, September 3, 2007
• Summation equations or discrete integral equations
| Wikipedia |
Summation of Grandi's series
Main article: Grandi's series
General considerations
Stability and linearity
The formal manipulations that lead to 1 − 1 + 1 − 1 + · · · being assigned a value of 1⁄2 include:
• Adding or subtracting two series term-by-term,
• Multiplying through by a scalar term-by-term,
• "Shifting" the series with no change in the sum, and
• Increasing the sum by adding a new term to the series' head.
These are all legal manipulations for sums of convergent series, but 1 − 1 + 1 − 1 + · · · is not a convergent series.
Nonetheless, there are many summation methods that respect these manipulations and that do assign a "sum" to Grandi's series. Two of the simplest methods are Cesàro summation and Abel summation.[1]
Cesàro sum
The first rigorous method for summing divergent series was published by Ernesto Cesàro in 1890. The basic idea is similar to Leibniz's probabilistic approach: essentially, the Cesàro sum of a series is the average of all of its partial sums. Formally one computes, for each n, the average σn of the first n partial sums, and takes the limit of these Cesàro means as n goes to infinity.
For Grandi's series, the sequence of arithmetic means is
1, 1⁄2, 2⁄3, 2⁄4, 3⁄5, 3⁄6, 4⁄7, 4⁄8, …
or, more suggestively,
(1⁄2+1⁄2), 1⁄2, (1⁄2+1⁄6), 1⁄2, (1⁄2+1⁄10), 1⁄2, (1⁄2+1⁄14), 1⁄2, …
where
$\sigma _{n}={\frac {1}{2}}$ for even n and $\sigma _{n}={\frac {1}{2}}+{\frac {1}{2n}}$ for odd n.
This sequence of arithmetic means converges to 1⁄2, so the Cesàro sum of Σak is 1⁄2. Equivalently, one says that the Cesàro limit of the sequence 1, 0, 1, 0, … is 1⁄2.[2]
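A short numerical sketch (illustrative only, not part of the original text) reproduces this sequence of means and its convergence to 1⁄2:

from fractions import Fraction

# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... are 1, 0, 1, 0, ...
partial, partial_sums = 0, []
for n in range(1000):
    partial += (-1) ** n
    partial_sums.append(partial)

# Cesaro mean sigma_n = average of the first n partial sums.
cesaro = [Fraction(sum(partial_sums[:n + 1]), n + 1) for n in range(len(partial_sums))]
print(cesaro[:8])          # 1, 1/2, 2/3, 1/2, 3/5, 1/2, 4/7, 1/2 (fractions reduced)
print(float(cesaro[-1]))   # approximately 0.5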
The Cesàro sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3. So the Cesàro sum of a series can be altered by inserting infinitely many 0s as well as infinitely many brackets.[3]
The series can also be summed by the more general fractional (C, a) methods.[4]
Abel sum
Abel summation is similar to Euler's attempted definition of sums of divergent series, but it avoids Callet's and N. Bernoulli's objections by precisely constructing the function to use. In fact, Euler likely meant to limit his definition to power series,[5] and in practice he used it almost exclusively[6] in a form now known as Abel's method.
Given a series a0 + a1 + a2 + · · ·, one forms a new series a0 + a1x + a2x2 + · · ·. If the latter series converges for 0 < x < 1 to a function with a limit as x tends to 1, then this limit is called the Abel sum of the original series, after Abel's theorem which guarantees that the procedure is consistent with ordinary summation. For Grandi's series one has
$A\sum _{n=0}^{\infty }(-1)^{n}=\lim _{x\rightarrow 1}\sum _{n=0}^{\infty }(-x)^{n}=\lim _{x\rightarrow 1}{\frac {1}{1+x}}={\frac {1}{2}}.$ [7]
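The limit can be checked numerically by truncating the power series for values of x just below 1; a minimal sketch (illustrative only):

# Abel sum of Grandi's series: sum (-x)^n = 1/(1+x) for |x| < 1, then let x -> 1.
for x in (0.9, 0.99, 0.999, 0.9999):
    truncated = sum((-x) ** n for n in range(200000))  # tail is negligible here
    print(x, truncated, 1 / (1 + x))
# Both columns approach 1/2 as x approaches 1.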
Related series
The corresponding calculation that the Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 involves the function (1 + x)/(1 + x + x2).
Whenever a series is Cesàro summable, it is also Abel summable and has the same sum. On the other hand, taking the Cauchy product of Grandi's series with itself yields a series which is Abel summable but not Cesàro summable:
1 − 2 + 3 − 4 + · · ·
has Abel sum 1⁄4.[8]
Dilution
Alternating spacing
That the ordinary Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 can also be phrased as the (A, λ) sum of the original series 1 − 1 + 1 − 1 + · · · where (λn) = (0, 2, 3, 5, 6, …). Likewise the (A, λ) sum of 1 − 1 + 1 − 1 + · · · where (λn) = (0, 1, 3, 4, 6, …) is 1⁄3.[9]
Exponential spacing
The summability of 1 − 1 + 1 − 1 + · · · can be frustrated by separating its terms with exponentially longer and longer groups of zeros. The simplest example to describe is the series where $(-1)^{n}$ appears in the rank $2^{n}$:
0 + 1 − 1 + 0 + 1 + 0 + 0 + 0 − 1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 1 + 0 + · · ·.
This series is not Cesàro summable. After each nonzero term, the partial sums spend enough time lingering at either 0 or 1 to bring the average partial sum halfway to that point from its previous value. Over the interval $2^{2m-1}\leq n\leq 2^{2m}-1$ following a (−1) term, the nth arithmetic means vary over the range
${\frac {2}{3}}\left({\frac {2^{2m}-1}{2^{2m}+2}}\right)\;\mathrm {to} \;{\frac {1}{3}}(1-2^{-2m}),$
or about 2⁄3 to 1⁄3.[10]
In fact, the exponentially spaced series is not Abel summable either. Its Abel sum is the limit as x approaches 1 of the function
$F(x)=0+x-x^{2}+0+x^{4}+0+0+0-x^{8}+0+0+0+0+0+0+0+x^{16}+0+\cdots .$
This function satisfies a functional equation:
${\begin{array}{rcl}F(x)&=&\displaystyle x-x^{2}+x^{4}-x^{8}+\cdots \\[1em]&=&\displaystyle x-\left[(x^{2})-(x^{2})^{2}+(x^{2})^{4}-\cdots \right]\\[1em]&=&\displaystyle x-F(x^{2}).\end{array}}$
This functional equation implies that F(x) roughly oscillates around 1⁄2 as x approaches 1. To prove that the amplitude of oscillation is nonzero, it helps to separate F into an exactly periodic and an aperiodic part:
$F(x)=\Psi (x)+\Phi (x)$
where
$\Phi (x)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!(1+2^{n})}}\left(\log {\frac {1}{x}}\right)^{n}$
satisfies the same functional equation as F. This now implies that $\Psi (x)=-\Psi (x^{2})=\Psi (x^{4})$, so Ψ is a periodic function of log log(1/x). (Hardy (p. 77) speaks of "another solution" and "plainly not constant", although technically he does not prove that F and Φ are different.) Since the Φ part has a limit of 1⁄2, F oscillates as well.
Separation of scales
Given any function φ(x) such that φ(0) = 1, and the derivative of φ is integrable over (0, +∞), then the generalized φ-sum of Grandi's series exists and is equal to 1⁄2:
$S_{\varphi }=\lim _{\delta \downarrow 0}\sum _{k=0}^{\infty }(-1)^{k}\varphi (\delta k)={\frac {1}{2}}.$
The Cesaro or Abel sum is recovered by letting φ be a triangular or exponential function, respectively. If φ is additionally assumed to be continuously differentiable, then the claim can be proved by applying the mean value theorem and converting the sum into an integral. Briefly:
${\begin{array}{rcl}S_{\varphi }&=&\displaystyle \lim _{\delta \downarrow 0}\sum _{k=0}^{\infty }\left[\varphi (2k\delta )-\varphi (2k\delta +\delta )\right]\\[1em]&=&\displaystyle \lim _{\delta \downarrow 0}\sum _{k=0}^{\infty }\varphi '(2k\delta +c_{k})(-\delta )\\[1em]&=&\displaystyle -{\frac {1}{2}}\int _{0}^{\infty }\varphi '(x)\,dx=-{\frac {1}{2}}\varphi (x)|_{0}^{\infty }={\frac {1}{2}}.\end{array}}$[11]
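A numerical check of this claim for two admissible cutoff functions (a sketch only; the specific choices of φ are examples, with φ(x) = e^(−x) reproducing Abel summation):

import math

def phi_sum(phi, delta, terms=10**5):
    # Truncated version of sum_k (-1)^k phi(delta * k); the tail is tiny once
    # phi(delta * k) has decayed.
    return sum((-1) ** k * phi(delta * k) for k in range(terms))

for phi in (lambda x: math.exp(-x), lambda x: 1.0 / (1.0 + x * x)):
    print([round(phi_sum(phi, d), 4) for d in (0.1, 0.01, 0.001)])
# Each list approaches 1/2 as delta decreases to 0.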
Euler transform and analytic continuation
Borel sum
The Borel sum of Grandi's series is again 1⁄2, since
$1-x+{\frac {x^{2}}{2!}}-{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}-\cdots =e^{-x}$
and
$\int _{0}^{\infty }e^{-x}e^{-x}\,dx=\int _{0}^{\infty }e^{-2x}\,dx={\frac {1}{2}}.$[12]
The series can also be summed by generalized (B, r) methods.[13]
Spectral asymmetry
The entries in Grandi's series can be paired to the eigenvalues of an infinite-dimensional operator on Hilbert space. Giving the series this interpretation gives rise to the idea of spectral asymmetry, which occurs widely in physics. The value that the series sums to depends on the asymptotic behaviour of the eigenvalues of the operator. Thus, for example, let $\{\omega _{n}\}$ be a sequence of both positive and negative eigenvalues. Grandi's series corresponds to the formal sum
$\sum _{n}\operatorname {sgn}(\omega _{n})\;$
where $\operatorname {sgn}(\omega _{n})=\pm 1$ is the sign of the eigenvalue. The series can be given concrete values by considering various limits. For example, the heat kernel regulator leads to the sum
$\lim _{t\to 0}\sum _{n}\operatorname {sgn}(\omega _{n})e^{-t|\omega _{n}|}$
which, for many interesting cases, is finite for non-zero t, and converges to a finite value in the limit.
Methods that fail
The integral function method with $p_{n}=\exp(-cn^{2})$ and c > 0.[14]
The moment constant method with
$d\chi =e^{-k(\log x)^{2}}x^{-1}dx$
and k > 0.[15]
Geometric series
The geometric series in $(x-1)$,
${\frac {1}{x}}=1-(x-1)+(x-1)^{2}-(x-1)^{3}+(x-1)^{4}-...$
is convergent for $|x-1|<1$. Formally substituting $x=2$ would give
${\frac {1}{2}}=1-1+1-1+1-...$
However, $x=2$ is outside the radius of convergence, $|x-1|<1$, so this conclusion cannot be made.
Notes
1. Davis pp.152, 153, 157
2. Davis pp.153, 163
3. Davis pp.162-163, ex.1-5
4. Smail p.131
5. Kline 1983 p.313
6. Bromwich p.322
7. Davis p.159
8. Davis p.165
9. Hardy p.73
10. Hardy p.60
11. Saichev pp.260-262
12. Weidlich p.20
13. Smail p.128
14. Hardy pp.79-81, 85
15. Hardy pp.81-86
References
• Bromwich, T.J. (1926) [1908]. An Introduction to the Theory of Infinite Series (2e ed.).
• Davis, Harry F. (May 1989). Fourier Series and Orthogonal Functions. Dover. ISBN 978-0-486-65973-2.
• Hardy, G.H. (1949). Divergent Series. Clarendon Press. LCC QA295 .H29 1967.
• Kline, Morris (November 1983). "Euler and Infinite Series". Mathematics Magazine. 56 (5): 307–314. CiteSeerX 10.1.1.639.6923. doi:10.2307/2690371. JSTOR 2690371.
• Saichev, A.I. & W.A. Woyczyński (1996). Distributions in the physical and engineering sciences, Volume 1. Birkhaüser. ISBN 978-0-8176-3924-2. LCC QA324.W69 1996.
• Smail, Lloyd (1925). History and Synopsis of the Theory of Summable Infinite Processes. University of Oregon Press. LCC QA295 .S64.
• Weidlich, John E. (June 1950). Summability methods for divergent series. Stanford M.S. theses.
Grandi's series
• History
• Education
• Summation
• Occurrences
People
• Luigi Guido Grandi
Related
• Thomson's lamp
• Category
| Wikipedia |
Sum of four cubes problem
The sum of four cubes problem[1] asks whether every integer is the sum of four cubes of integers. It is conjectured the answer is affirmative, but this conjecture has been neither proved nor disproved.[2] Some of the cubes may be negative numbers, in contrast to Waring's problem on sums of cubes, where they are required to be positive.
Unsolved problem in mathematics:
Is every integer the sum of four perfect cubes?
(more unsolved problems in mathematics)
The substitutions $X=T$, $Y=T$, and $Z=-T+1$ in the identity
$(X+Y+Z)^{3}-X^{3}-Y^{3}-Z^{3}=3(X+Y)(X+Z)(Y+Z)$
lead to the identity
$(T+1)^{3}+(-T)^{3}+(-T)^{3}+(T-1)^{3}=6T,$
which shows that every integer multiple of 6 is the sum of four cubes. (More generally, the same proof shows that every multiple of 6 in every ring is the sum of four cubes.)
Since every integer n is congruent to its own cube modulo 6, the difference $n-n^{3}$ is a multiple of 6, and it follows that every rational integer is the sum of five cubes of integers.
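Concretely, writing $n-n^{3}=6T$ and applying the identity above to $6T$ gives the explicit five-cube representation $n=n^{3}+(T+1)^{3}+(-T)^{3}+(-T)^{3}+(T-1)^{3}$. A minimal sketch verifying this on a small range (illustrative only, not from the article):

def five_cubes(n):
    """Return five integers whose cubes sum to n, via n = n^3 + 6T."""
    T, r = divmod(n - n ** 3, 6)
    assert r == 0          # n - n^3 = -(n - 1) n (n + 1) is always divisible by 6
    return [n, T + 1, -T, -T, T - 1]

for n in range(-50, 51):
    assert sum(c ** 3 for c in five_cubes(n)) == n
print(five_cubes(13))      # one five-cube representation of 13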
In 1966, V. A. Demjanenko proved that any integer that is congruent neither to 4 nor to −4 modulo 9 is the sum of four cubes of integers. For this, he used the following identities:
$6x=(x+1)^{3}+(x-1)^{3}-x^{3}-x^{3},$
$6x+3=x^{3}+(-x+4)^{3}+(2x-5)^{3}+(-2x+4)^{3},$
$18x+1=(2x+14)^{3}+(-2x-23)^{3}+(-3x-26)^{3}+(3x+30)^{3},$
$18x+7=(x+2)^{3}+(6x-1)^{3}+(8x-2)^{3}+(-9x+2)^{3},$
and
$18x+8=(x-5)^{3}+(-x+14)^{3}+(-3x+29)^{3}+(3x-30)^{3}.$
These identities (and those derived from them by passing to opposites) immediately show that any integer which is congruent neither to 4 nor to −4 modulo 9 and is congruent neither to 2 nor to −2 modulo 18 is a sum of four cubes of rational integers. Using more subtle reasoning, Demjanenko proved that integers congruent to 2 or to −2 modulo 18 are also sums of four cubes of integers.[3]
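These identities are routine to confirm by expanding the cubes; the sketch below (illustrative only) checks them numerically over a range of x. Since each identity equates two polynomials of degree at most three in x, agreement at more than three values of x already establishes it.

def sum_of_cubes(*vals):
    return sum(v ** 3 for v in vals)

for x in range(-100, 101):
    assert sum_of_cubes(x + 1, x - 1, -x, -x) == 6 * x
    assert sum_of_cubes(x, -x + 4, 2 * x - 5, -2 * x + 4) == 6 * x + 3
    assert sum_of_cubes(2 * x + 14, -2 * x - 23, -3 * x - 26, 3 * x + 30) == 18 * x + 1
    assert sum_of_cubes(x + 2, 6 * x - 1, 8 * x - 2, -9 * x + 2) == 18 * x + 7
    assert sum_of_cubes(x - 5, -x + 14, -3 * x + 29, 3 * x - 30) == 18 * x + 8
print("all five identities hold on the sampled range")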
The problem therefore only arises for integers congruent to 4 or to −4 modulo 9. One example is
$13=10^{3}+7^{3}+1^{3}+(-11)^{3},$
but it is not known if every such integer can be written as a sum of four cubes.
See also
• Sums of three cubes
Notes and references
1. Referred to as the "four cube problem" in H. Davenport, The Higher Arithmetic: An Introduction to the Theory of Numbers, Cambridge University Press, 7th edition, 1999, p. 173, 177.
2. At least in 1982. See Philippe Revoy, “Sur les sommes de quatre cubes”, L’Enseignement Mathématique, t. 29, 1983, p. 209-220, online here or here, p. 209 on the point in question.
3. V.A. Demjanenko, "On sums of four cubes", Izvestiya Vysshikh Uchebnykh Zavedenii. Matematika, vol. 54, no. 5, 1966, p. 63-69, available online at the site Math-Net.Ru. For a demonstration in French, see Philippe Revoy, “Sur les sommes de quatre cubes”, L’Enseignement Mathématique, t. 29, 1983, p. 209-220, online here or here.
| Wikipedia |
Sums of three cubes
Unsolved problem in mathematics:
Is there a number that is not 4 or 5 modulo 9 and that cannot be expressed as a sum of three cubes?
(more unsolved problems in mathematics)
In the mathematics of sums of powers, it is an open problem to characterize the numbers that can be expressed as a sum of three cubes of integers, allowing both positive and negative cubes in the sum. A necessary condition for an integer $n$ to equal such a sum is that $n$ cannot equal 4 or 5 modulo 9, because the cubes modulo 9 are 0, 1, and −1, and no three of these numbers can sum to 4 or 5 modulo 9.[1] It is unknown whether this necessary condition is sufficient.
Variations of the problem include sums of non-negative cubes and sums of rational cubes. All integers have a representation as a sum of rational cubes, but it is unknown whether the sums of non-negative cubes form a set with non-zero natural density.
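A naive search over a small box illustrates the congruence obstruction: only values of n that avoid 4 and 5 modulo 9 turn up. The sketch below is purely illustrative and far too weak for the difficult cases described in the sections below.

# Brute-force search for x^3 + y^3 + z^3 = n with |x|, |y|, |z| <= 60 and |n| <= 100.
BOUND, LIMIT = 60, 100
found = {}
for x in range(-BOUND, BOUND + 1):
    for y in range(x, BOUND + 1):
        for z in range(y, BOUND + 1):
            n = x ** 3 + y ** 3 + z ** 3
            if abs(n) <= LIMIT and n not in found:
                found[n] = (x, y, z)

# No represented n is congruent to 4 or 5 modulo 9.
assert all(n % 9 not in (4, 5) for n in found)
print(len(found), found.get(29), found.get(33))  # 33 needs a far larger search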
Small cases
A nontrivial representation of 0 as a sum of three cubes would give a counterexample to Fermat's Last Theorem for the exponent three, as one of the three cubes would have the opposite sign as the other two and its negation would equal the sum of the other two. Therefore, by Leonhard Euler's proof of that case of Fermat's last theorem,[2] there are only the trivial solutions
$a^{3}+(-a)^{3}+0^{3}=0.$
For representations of 1 and 2, there are infinite families of solutions
$(9b^{4})^{3}+(3b-9b^{4})^{3}+(1-9b^{3})^{3}=1$ (discovered[3] by K. Mahler in 1936)
and
$(1+6c^{3})^{3}+(1-6c^{3})^{3}+(-6c^{2})^{3}=2$ (discovered[4] by A.S. Verebrusov in 1908, quoted by L.J. Mordell[5]).
These can be scaled to obtain representations for any cube or any number that is twice a cube.[5] For 1, there exist other representations and other parameterized families of representations.[6] For 2, the other known representations are[6][7]
$1\ 214\ 928^{3}+3\ 480\ 205^{3}+(-3\ 528\ 875)^{3}=2,$
$37\ 404\ 275\ 617^{3}+(-25\ 282\ 289\ 375)^{3}+(-33\ 071\ 554\ 596)^{3}=2,$
$3\ 737\ 830\ 626\ 090^{3}+1\ 490\ 220\ 318\ 001^{3}+(-3\ 815\ 176\ 160\ 999)^{3}=2.$
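The two parameterized families above can be spot-checked by direct computation, as in the brief sketch below (illustrative only); since the left-hand sides are polynomial in the parameter, agreement on more values than the degree already confirms the identities.

# Mahler's family for 1 and Verebrusov's family for 2.
for b in range(-30, 31):
    assert (9*b**4)**3 + (3*b - 9*b**4)**3 + (1 - 9*b**3)**3 == 1
for c in range(-30, 31):
    assert (1 + 6*c**3)**3 + (1 - 6*c**3)**3 + (-6*c**2)**3 == 2
print("both families verified on the sampled range")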
However, 1 and 2 are the only numbers with representations that can be parameterized by quartic polynomials as above.[5] Even in the case of representations of 3, Louis J. Mordell wrote in 1953 "I do not know anything" more than its small solutions
$1^{3}+1^{3}+1^{3}=4^{3}+4^{3}+(-5)^{3}=3$
and the fact that each of the three cubed numbers must be equal modulo 9.[8][9]
Computational results
Since 1955, and starting with the instigation of Mordell, many authors have implemented computational searches for these representations.[10][11][7][12][13][14][15][16][17][18] Elsenhans & Jahnel (2009) used a method of Noam Elkies (2000) involving lattice reduction to search for all solutions to the Diophantine equation
$x^{3}+y^{3}+z^{3}=n$
for positive $n$ at most 1000 and for $\max(|x|,|y|,|z|)<10^{14}$,[17] leaving only 33, 42, 74, 114, 165, 390, 579, 627, 633, 732, 795, 906, 921, and 975 as open problems in 2009 for $n\leq 1000$, and 192, 375, and 600 remain with no primitive solutions (i.e. $\gcd(x,y,z)=1$). After Timothy Browning covered the problem on Numberphile in 2016, Huisman (2016) extended these searches to $\max(|x|,|y|,|z|)<10^{15}$ solving the case of 74, with solution
$74=(-284\ 650\ 292\ 555\ 885)^{3}+66\ 229\ 832\ 190\ 556^{3}+283\ 450\ 105\ 697\ 727^{3}.$
Through these searches, it was discovered that all $n<100$ that are unequal to 4 or 5 modulo 9 have a solution, with at most two exceptions, 33 and 42.[18]
However, in 2019, Andrew Booker settled the case $n=33$ by discovering that
$33=8\ 866\ 128\ 975\ 287\ 528^{3}+(-8\ 778\ 405\ 442\ 862\ 239)^{3}+(-2\ 736\ 111\ 468\ 807\ 040)^{3}.$
In order to achieve this, Booker exploited an alternative search strategy with running time proportional to $\min(|x|,|y|,|z|)$ rather than to their maximum,[19] an approach originally suggested by Heath-Brown et al.[20] He also found that
$795=(-14\ 219\ 049\ 725\ 358\ 227)^{3}+14\ 197\ 965\ 759\ 741\ 571^{3}+2\ 337\ 348\ 783\ 323\ 923^{3},$
and established that there are no solutions for $n=42$ or any of the other unresolved $n\leq 1000$ with $|z|\leq 10^{16}$.
Shortly thereafter, in September 2019, Booker and Andrew Sutherland finally settled the $n=42$ case, using 1.3 million hours of computing on the Charity Engine global grid to discover that
$42=(-80\ 538\ 738\ 812\ 075\ 974)^{3}+80\ 435\ 758\ 145\ 817\ 515^{3}+12\ 602\ 123\ 297\ 335\ 631^{3},$
as well as solutions for several other previously unknown cases including $n=165$ and $579$ for $n\leq 1000$.[21]
Booker and Sutherland also found a third representation of 3 using a further 4 million compute-hours on Charity Engine:
$3=569\ 936\ 821\ 221\ 962\ 380\ 720^{3}+(-569\ 936\ 821\ 113\ 563\ 493\ 509)^{3}+(-472\ 715\ 493\ 453\ 327\ 032)^{3}.$[21][22]
This discovery settled a 65-year-old question of Louis J. Mordell that has stimulated much of the research on this problem.[8]
While presenting the third representation of 3 during his appearance in a video on the Youtube channel Numberphile, Booker also presented a representation for 906:
$906=(-74\ 924\ 259\ 395\ 610\ 397)^{3}+72\ 054\ 821\ 089\ 679\ 353\ 378^{3}+35\ 961\ 979\ 615\ 356\ 503^{3}.$[23]
The only remaining unsolved cases up to 1,000 are the seven numbers 114, 390, 627, 633, 732, 921, and 975, and there are no known primitive solutions (i.e. $\gcd(x,y,z)=1$) for 192, 375, and 600.[21][24]
Primitive solutions for n from 1 to 78
n x y z n x y z
1 910−12 39 117367134476−159380
2 12149283480205−3528875 42 1260212329733563180435758145817515−80538738812075974
3 111 43 223
6 −1−12 44 −5−78
7 0−12 45 2−34
8 915−16 46 −233
9 012 47 67−8
10 112 48 −23−2631
11 −2−23 51 602659−796
12 710−11 52 2396129245460702901317−61922712865
15 −122 53 −133
16 −511−16091626 54 −7−1112
17 122 55 133
18 −1−23 56 −11−2122
19 0−23 57 1−24
20 1−23 60 −1−45
21 −11−1416 61 0−45
24 −2901096694−1555055555515584139827 62 233
25 −1−13 63 0−14
26 0−13 64 −3−56
27 −4−56 65 014
28 013 66 114
29 113 69 2−45
30 −283059965−22188885172220422932 70 1120−21
33 −2736111468807040−87784054428622398866128975287528 71 −124
34 −123 72 79−10
35 023 73 124
36 123 74 66229832190556283450105697727−284650292555885
37 0−34 75 4381159435203083−435203231
38 1−34 78 2653−55
Popular interest
The sums of three cubes problem has been popularized in recent years by Brady Haran, creator of the YouTube channel Numberphile, beginning with the 2015 video "The Uncracked Problem with 33" featuring an interview with Timothy Browning.[25] This was followed six months later by the video "74 is Cracked" with Browning, discussing Huisman's 2016 discovery of a solution for 74.[26] In 2019, Numberphile published three related videos, "42 is the new 33", "The mystery of 42 is solved", and "3 as the sum of 3 cubes", to commemorate the discovery of solutions for 33, 42, and the new solution for 3.[27][28][23]
Booker's solution for 33 was featured in articles appearing in Quanta Magazine[29] and New Scientist[30], as well as an article in Newsweek in which Booker's collaboration with Sutherland was announced: "...the mathematician is now working with Andrew Sutherland of MIT in an attempt to find the solution for the final unsolved number below a hundred: 42".[31] The number 42 has additional popular interest due to its appearance in the 1979 Douglas Adams science fiction novel The Hitchhiker's Guide to the Galaxy as the answer to The Ultimate Question of Life, the Universe, and Everything.
Booker and Sutherland's announcements[32][33] of a solution for 42 received international press coverage, including articles in New Scientist,[34] Scientific American,[35] Popular Mechanics,[36] The Register,[37] Die Zeit,[38] Der Tagesspiegel,[39] Helsingin Sanomat,[40] Der Spiegel,[41] New Zealand Herald,[42] Indian Express,[43] Der Standard,[44] Las Provincias,[45] Nettavisen,[46] Digi24,[47] and BBC World Service.[48] Popular Mechanics named the solution for 42 as one of the "10 Biggest Math Breakthroughs of 2019".[49]
The resolution of Mordell's question by Booker and Sutherland a few weeks later sparked another round of news coverage.[22][50][51][52][53][54][55]
In Booker's invited talk at the fourteenth Algorithmic Number Theory Symposium he discusses some of the popular interest in this problem and the public reaction to the announcement of solutions for 33 and 42.[56]
Solvability and decidability
In 1992, Roger Heath-Brown conjectured that every $n$ unequal to 4 or 5 modulo 9 has infinitely many representations as sums of three cubes.[57] The case $n=33$ of this problem was used by Bjorn Poonen as the opening example in a survey on undecidable problems in number theory, of which Hilbert's tenth problem is the most famous example.[58] Although this particular case has since been resolved, it is unknown whether representing numbers as sums of cubes is decidable. That is, it is not known whether an algorithm can, for every input, test in finite time whether a given number has such a representation. If Heath-Brown's conjecture is true, the problem is decidable. In this case, an algorithm could correctly solve the problem by computing $n$ modulo 9, returning false when this is 4 or 5, and otherwise returning true. Heath-Brown's research also includes more precise conjectures on how far an algorithm would have to search to find an explicit representation rather than merely determining whether one exists.[57]
Variations
A variant of this problem related to Waring's problem asks for representations as sums of three cubes of non-negative integers. In the 19th century, Carl Gustav Jacob Jacobi and collaborators compiled tables of solutions to this problem.[59] It is conjectured that the representable numbers have positive natural density.[60][61] This remains unknown, but Trevor Wooley has shown that $\Omega (n^{0.917})$ of the numbers from $1$ to $n$ have such representations.[62][63][64] The density is at most $\Gamma (4/3)^{3}/6\approx 0.119$.[1]
Every integer can be represented as a sum of three cubes of rational numbers (rather than as a sum of cubes of integers).[65][66]
See also
• Sum of four cubes problem, whether every integer is a sum of four cubes
• Euler's sum of powers conjecture § k = 3, relating to cubes that can be written as a sum of three positive cubes
• Plato's number, an ancient text possibly discussing the equation 33 + 43 + 53 = 63
• Taxicab number, the smallest integer that can be expressed as a sum of two positive integer cubes in n distinct ways
References
1. Davenport, H. (1939), "On Waring's problem for cubes", Acta Mathematica, 71: 123–143, doi:10.1007/BF02547752, MR 0000026
2. Machis, Yu. Yu. (2007), "On Euler's hypothetical proof", Mathematical Notes, 82 (3): 352–356, doi:10.1134/S0001434607090088, MR 2364600, S2CID 121798358
3. Mahler, Kurt (1936), "Note on Hypothesis K of Hardy and Littlewood", Journal of the London Mathematical Society, 11 (2): 136–138, doi:10.1112/jlms/s1-11.2.136, MR 1574761
4. Verebrusov, A. S. (1908), "Объ уравненiи x3 + y3 + z3 = 2u3" [On the equation $x^{3}+y^{3}+z^{3}=2u^{3}$], Matematicheskii Sbornik (in Russian), 26 (4): 622–624, JFM 39.0259.02
5. Mordell, L.J. (1942), "On sums of three cubes", Journal of the London Mathematical Society, Second Series, 17 (3): 139–144, doi:10.1112/jlms/s1-17.3.139, MR 0007761
6. Avagyan, Armen; Dallakyan, Gurgen (2018), "A new method in the problem of three cubes", Universal Journal of Computational Mathematics, 5 (3): 45–56, arXiv:1802.06776, doi:10.13189/ujcmj.2017.050301, S2CID 36818799
7. Heath-Brown, D. R.; Lioen, W. M.; te Riele, H. J. J. (1993), "On solving the Diophantine equation $x^{3}+y^{3}+z^{3}=k$ on a vector computer", Mathematics of Computation, 61 (203): 235–244, Bibcode:1993MaCom..61..235H, doi:10.2307/2152950, JSTOR 2152950, MR 1202610
8. Mordell, L.J. (1953), "On the integer solutions of the equation $x^{2}+y^{2}+z^{2}+2xyz=n$", Journal of the London Mathematical Society, Second Series, 28: 500–510, doi:10.1112/jlms/s1-28.4.500, MR 0056619
9. The equality mod 9 of numbers whose cubes sum to 3 was credited to J. W. S. Cassels by Mordell (1953), but its proof was not published until Cassels, J. W. S. (1985), "A note on the Diophantine equation $x^{3}+y^{3}+z^{3}=3$", Mathematics of Computation, 44 (169): 265–266, doi:10.2307/2007811, JSTOR 2007811, MR 0771049, S2CID 121727002.
10. Miller, J. C. P.; Woollett, M. F. C. (1955), "Solutions of the Diophantine equation $x^{3}+y^{3}+z^{3}=k$", Journal of the London Mathematical Society, Second Series, 30: 101–110, doi:10.1112/jlms/s1-30.1.101, MR 0067916
11. Gardiner, V. L.; Lazarus, R. B.; Stein, P. R. (1964), "Solutions of the diophantine equation $x^{3}+y^{3}=z^{3}-d$", Mathematics of Computation, 18 (87): 408–413, doi:10.2307/2003763, JSTOR 2003763, MR 0175843
12. Conn, W.; Vaserstein, L. N. (1994), "On sums of three integral cubes", The Rademacher legacy to mathematics (University Park, PA, 1992), Contemporary Mathematics, vol. 166, Providence, Rhode Island: American Mathematical Society, pp. 285–294, doi:10.1090/conm/166/01628, MR 1284068
13. Bremner, Andrew (1995), "On sums of three cubes", Number theory (Halifax, NS, 1994), CMS Conference Proceedings, vol. 15, Providence, Rhode Island: American Mathematical Society, pp. 87–91, MR 1353923
14. Koyama, Kenji; Tsuruoka, Yukio; Sekigawa, Hiroshi (1997), "On searching for solutions of the Diophantine equation $x^{3}+y^{3}+z^{3}=n$", Mathematics of Computation, 66 (218): 841–851, doi:10.1090/S0025-5718-97-00830-2, MR 1401942
15. Elkies, Noam D. (2000), "Rational points near curves and small nonzero $|x^{3}-y^{2}|$ via lattice reduction", Algorithmic number theory (Leiden, 2000), Lecture Notes in Computer Science, vol. 1838, Springer, Berlin, pp. 33–63, arXiv:math/0005139, doi:10.1007/10722028_2, MR 1850598, S2CID 40620586
16. Beck, Michael; Pine, Eric; Tarrant, Wayne; Yarbrough Jensen, Kim (2007), "New integer representations as the sum of three cubes", Mathematics of Computation, 76 (259): 1683–1690, doi:10.1090/S0025-5718-07-01947-3, MR 2299795
17. Elsenhans, Andreas-Stephan; Jahnel, Jörg (2009), "New sums of three cubes", Mathematics of Computation, 78 (266): 1227–1230, doi:10.1090/S0025-5718-08-02168-6, MR 2476583
18. Huisman, Sander G. (2016), Newer sums of three cubes, arXiv:1604.07746
19. Booker, Andrew R. (2019), "Cracking the problem with 33", Research in Number Theory, 5 (26), doi:10.1007/s40993-019-0162-1, MR 3983550
20. Heath-Brown, D. R.; Lioen, W.M.; te Riele, H.J.J (1993), "On solving the Diophantine equation $x^{3}+y^{3}+z^{3}=k$ on a vector computer", Mathematics of Computation, 61 (203): 235–244, Bibcode:1993MaCom..61..235H, doi:10.2307/2152950, JSTOR 2152950, MR 1202610
21. Booker, Andrew R.; Sutherland, Andrew V. (2020), On a question of Mordell, arXiv:2007.01209
22. Lu, Donna (September 18, 2019), "Mathematicians find a completely new way to write the number 3", New Scientist
23. Haran, Brady (September 24, 2019), 3 as the sum of 3 cubes, Numberphile
24. Houston, Robin (September 6, 2019), "42 is the answer to the question 'what is (-80538738812075974)3 + 804357581458175153 + 126021232973356313?'", The Aperiodical
25. Haran, Brady (November 6, 2015), The uncracked problem with 33, Numberphile
26. Haran, Brady (May 31, 2016), 74 is cracked, Numberphile
27. Haran, Brady (March 12, 2019), 42 is the new 33, Numberphile
28. Haran, Brady (September 6, 2019), The mystery of 42 is solved, Numberphile
29. Pavlus, John (March 10, 2019), "Sum-of-Three-Cubes Problem Solved for 'Stubborn' Number 33", Quanta Magazine
30. Lu, Donna (March 14, 2019), "Mathematician cracks centuries-old problem about the number 33", New Scientist
31. Georgiou, Aristos (April 3, 2019), "The uncracked problem with 33: Mathematician solves 64-year-old 'Diophantine puzzle'", Newsweek
32. Sum of three cubes for 42 finally solved – using real life planetary computer, University of Bristol, September 6, 2019
33. Miller, Sandi (September 10, 2019), "The answer to life, the universe, and everything: Mathematics researcher Drew Sutherland helps solve decades-old sum-of-three-cubes puzzle, with help from "The Hitchhiker's Guide to the Galaxy."", MIT News, Massachusetts Institute of Technology
34. Lu, Donna (September 6, 2019), "Mathematicians crack elusive puzzle involving the number 42", New Scientist
35. Delahaye, Jean-Paul (September 20, 2020), "For Math Fans: A Hitchhiker's Guide to the Number 42", Scientific American
36. Grossman, David (September 6, 2019), "After 65 Years, Supercomputers Finally Solve This Unsolvable Math Problem", Popular Mechanics
37. Quach, Katyanna (September 7, 2019), "Finally! A solution to 42 – the Answer to the Ultimate Question of Life, The Universe, and Everything", The Register
38. "Matheproblem um die Zahl 42 geknackt", Die Zeit, September 16, 2019
39. "Das Matheproblem um die Zahl 42 ist geknackt", Der Tagesspiegel, September 16, 2019
40. Kivimäki, Antti (September 18, 2019), "Matemaatikkojen vaikea laskelma tuotti vihdoin kaivatun luvun 42", Helsingin Sanomat
41. "Matheproblem um die 42 geknackt", Der Spiegel, September 16, 2019
42. "Why the number 42 is the answer to life, the universe and everything", New Zealand Herald, September 9, 2019
43. Firaque, Kabir (September 20, 2019), "Explained: How a 65-year-old maths problem was solved", Indian Express
44. Taschwer, Klaus (September 15, 2019), "Endlich: Das Rätsel um die Zahl 42 ist gelöst", Der Standard
45. "Matemáticos resuelven el enigma del número 42 planteado hace 65 años", Las Provincias, September 18, 2019
46. Wærstad, Lars (October 10, 2019), "Supermaskin har løst over 60 år gammel tallgåte", Nettavisen
47. "A fost rezolvată problema care le-a dat bătăi de cap matematicienilor timp de 6 decenii. A fost nevoie de 1 milion de ore de procesare", Digi24, September 16, 2019
48. Paul, Fernanda (September 12, 2019), "Enigma de la suma de 3 cubos: matemáticos encuentran la solución final después de 65 años", BBC News Mundo
49. Linkletter, Dave (December 27, 2019), "The 10 Biggest Math Breakthroughs of 2019", Popular Mechanics
50. Mandelbaum, Ryan F. (September 18, 2019), "Mathematicians No Longer Stumped by the Number 3", Gizmodo
51. "42:n ongelman ratkaisijat löysivät ratkaisun myös 3:lle", Tiede, September 23, 2019
52. Kivimäki, Antti (September 22, 2019), "Numeron 42 ratkaisseet matemaatikot yllättivät: Löysivät myös luvulle 3 kauan odotetun ratkaisun", Helsingin Sanomat
53. Jesus Poblacion, Alfonso (October 3, 2019), "Matemáticos encuentran una nueva forma de llegar al número 3", El Diario Vasco
54. Honner, Patrick (November 5, 2019), "Why the Sum of Three Cubes Is a Hard Math Problem", Quanta Magazine
55. D'Souza, Dilip (November 28, 2019), "Waste not, there's a third way to make cubes", LiveMint
56. Booker, Andrew R. (July 4, 2020), 33 and all that, Algorithmic Number Theory Symposium
57. Heath-Brown, D. R. (1992), "The density of zeros of forms for which weak approximation fails", Mathematics of Computation, 59 (200): 613–623, doi:10.1090/s0025-5718-1992-1146835-5, JSTOR 2153078, MR 1146835
58. Poonen, Bjorn (2008), "Undecidability in number theory" (PDF), Notices of the American Mathematical Society, 55 (3): 344–350, MR 2382821
59. Dickson, Leonard Eugene (1920), History of the Theory of Numbers, Vol. II: Diophantine Analysis, Carnegie Institution of Washington, p. 717
60. Balog, Antal; Brüdern, Jörg (1995), "Sums of three cubes in three linked three-progressions", Journal für die Reine und Angewandte Mathematik, 1995 (466): 45–85, doi:10.1515/crll.1995.466.45, MR 1353314, S2CID 118818354
61. Deshouillers, Jean-Marc; Hennecart, François; Landreau, Bernard (2006), "On the density of sums of three cubes", in Hess, Florian; Pauli, Sebastian; Pohst, Michael (eds.), Algorithmic Number Theory: 7th International Symposium, ANTS-VII, Berlin, Germany, July 23-28, 2006, Proceedings, Lecture Notes in Computer Science, vol. 4076, Berlin: Springer, pp. 141–155, doi:10.1007/11792086_11, MR 2282921
62. Wooley, Trevor D. (1995), "Breaking classical convexity in Waring's problem: sums of cubes and quasi-diagonal behaviour" (PDF), Inventiones Mathematicae, 122 (3): 421–451, doi:10.1007/BF01231451, hdl:2027.42/46588, MR 1359599
63. Wooley, Trevor D. (2000), "Sums of three cubes", Mathematika, 47 (1–2): 53–61 (2002), doi:10.1112/S0025579300015710, hdl:2027.42/152941, MR 1924487
64. Wooley, Trevor D. (2015), "Sums of three cubes, II", Acta Arithmetica, 170 (1): 73–100, arXiv:1502.01944, doi:10.4064/aa170-1-6, MR 3373831, S2CID 119155786
65. Richmond, H. W. (1923), "On analogues of Waring's problem for rational numbers", Proceedings of the London Mathematical Society, Second Series, 21: 401–409, doi:10.1112/plms/s2-21.1.401, MR 1575369
66. Davenport, H.; Landau, E. (1969), "On the representation of positive integers as sums of three cubes of positive rational numbers", Number Theory and Analysis (Papers in Honor of Edmund Landau), New York: Plenum, pp. 49–53, MR 0262198
External links
• Solutions of n = x3 + y3 + z3 for 0 ≤ n ≤ 99, Hisanori Mishima
• threecubes, Daniel J. Bernstein
• Sums of three cubes, Mathpages
| Wikipedia |
Sumset
In additive combinatorics, the sumset (also called the Minkowski sum) of two subsets $A$ and $B$ of an abelian group $G$ (written additively) is defined to be the set of all sums of an element from $A$ with an element from $B$. That is,
$A+B=\{a+b:a\in A,b\in B\}.$
The $n$-fold iterated sumset of $A$ is
$nA=A+\cdots +A,$
where there are $n$ summands.
Many of the questions and results of additive combinatorics and additive number theory can be phrased in terms of sumsets. For example, Lagrange's four-square theorem can be written succinctly in the form
$4\,\Box =\mathbb {N} ,$
where $\Box $ is the set of square numbers. A subject that has received a fair amount of study is that of sets with small doubling, where the size of the set $A+A$ is small (compared to the size of $A$); see for example Freiman's theorem.
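A minimal sketch of these definitions for finite sets of integers (illustrative only; the helper name sumset is not standard library code):

from itertools import product

def sumset(*sets):
    """Return {a_1 + ... + a_k : a_i in sets[i]}; repeating one set gives the
    iterated sumset."""
    return {sum(choice) for choice in product(*sets)}

A = {0, 1, 3}
print(sorted(sumset(A, A)))        # A + A = [0, 1, 2, 3, 4, 6]
print(sorted(sumset(A, A, A)))     # the 3-fold iterated sumset 3A

# Lagrange's four-square theorem checked for 0 <= n < 100.
squares = {k * k for k in range(10)}
assert set(range(100)) <= sumset(squares, squares, squares, squares)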
See also
• Restricted sumset
• Sidon set
• Sum-free set
• Schnirelmann density
• Shapley–Folkman lemma
• X + Y sorting
References
• Henry Mann (1976). Addition Theorems: The Addition Theorems of Group Theory and Number Theory (Corrected reprint of 1965 Wiley ed.). Huntington, New York: Robert E. Krieger Publishing Company. ISBN 0-88275-418-1.
• Nathanson, Melvyn B. (1990). "Best possible results on the density of sumsets". In Berndt, Bruce C.; Diamond, Harold G.; Halberstam, Heini; et al. (eds.). Analytic number theory. Proceedings of a conference in honor of Paul T. Bateman, held on April 25-27, 1989, at the University of Illinois, Urbana, IL (USA). Progress in Mathematics. Vol. 85. Boston: Birkhäuser. pp. 395–403. ISBN 0-8176-3481-9. Zbl 0722.11007.
• Nathanson, Melvyn B. (1996). Additive Number Theory: Inverse Problems and the Geometry of Sumsets. Graduate Texts in Mathematics. Vol. 165. Springer-Verlag. ISBN 0-387-94655-1. Zbl 0859.11003.
• Terence Tao and Van Vu, Additive Combinatorics, Cambridge University Press 2006.
External links
• Sloman, Leila (2022-12-06). "From Systems in Motion, Infinite Patterns Appear". Quanta Magazine.
| Wikipedia |
Sun's curious identity
In combinatorics, Sun's curious identity is the following identity involving binomial coefficients, first established by Zhi-Wei Sun in 2002:
$(x+m+1)\sum _{i=0}^{m}(-1)^{i}{\dbinom {x+y+i}{m-i}}{\dbinom {y+2i}{i}}-\sum _{i=0}^{m}{\dbinom {x+i}{m-i}}(-4)^{i}=(x-m){\dbinom {x}{m}}.$
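Since both sides are polynomials of low degree in x and y for each fixed m, the identity can be spot-checked numerically with a generalized binomial coefficient; the sketch below (illustrative only, not taken from Sun's paper) verifies it on a small grid of integer values.

from fractions import Fraction
from math import factorial

def binom(x, k):
    """Generalized binomial coefficient x(x-1)...(x-k+1)/k! for integer k >= 0."""
    num = 1
    for i in range(k):
        num *= x - i
    return Fraction(num, factorial(k))

def check(x, y, m):
    s1 = sum((-1) ** i * binom(x + y + i, m - i) * binom(y + 2 * i, i) for i in range(m + 1))
    s2 = sum(binom(x + i, m - i) * (-4) ** i for i in range(m + 1))
    return (x + m + 1) * s1 - s2 == (x - m) * binom(x, m)

assert all(check(x, y, m) for m in range(6) for x in range(-6, 7) for y in range(-6, 7))
print("identity verified on the sampled grid")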
Proofs
After Sun's publication of this identity in 2002, five other proofs were obtained by various mathematicians:
• Panholzer and Prodinger's proof via generating functions;
• Merlini and Sprugnoli's proof using Riordan arrays;
• Ekhad and Mohammed's proof by the WZ method;
• Chu and Claudio's proof with the help of Jensen's formula;
• Callan's combinatorial proof involving dominos and colorings.
References
• Callan, D. (2004), "A combinatorial proof of Sun's 'curious' identity" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 4: A05, arXiv:math.CO/0401216, Bibcode:2004math......1216C.
• Chu, W.; Claudio, L.V.D. (2003), "Jensen proof of a curious binomial identity" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 3: A20.
• Ekhad, S. B.; Mohammed, M. (2003), "A WZ proof of a 'curious' identity" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 3: A06.
• Merlini, D.; Sprugnoli, R. (2002), "A Riordan array proof of a curious identity" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 2: A08.
• Panholzer, A.; Prodinger, H. (2002), "A generating functions proof of a curious identity" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 2: A06.
• Sun, Zhi-Wei (2002), "A curious identity involving binomial coefficients" (PDF), INTEGERS: The Electronic Journal of Combinatorial Number Theory, 2: A04.
• Sun, Zhi-Wei (2008), "On sums of binomial coefficients and their applications", Discrete Mathematics, 308 (18): 4231–4245, arXiv:math.NT/0404385, doi:10.1016/j.disc.2007.08.046, S2CID 14089498.
| Wikipedia |
Sunflower (mathematics)
In the mathematical fields of set theory and extremal combinatorics, a sunflower or $\Delta $-system[1] is a collection of sets whose pairwise intersection is constant. This constant intersection is called the kernel of the sunflower.
Unsolved problem in mathematics:
For any sunflower size, does every set of uniformly sized sets which is of cardinality greater than some exponential in the set size contain a sunflower?
(more unsolved problems in mathematics)
The main research question arising in relation to sunflowers is: under what conditions does there exist a large sunflower (a sunflower with many sets) in a given collection of sets? The $\Delta $-lemma, sunflower lemma, and the Erdős-Rado sunflower conjecture give successively weaker conditions which would imply the existence of a large sunflower in a given collection, with the latter being one of the most famous open problems of extremal combinatorics.[2]
Formal definition
Suppose $W$ is a set system over $U$, that is, a collection of subsets of a set $U$. The collection $W$ is a sunflower (or $\Delta $-system) if there is a subset $S$ of $U$ such that for each distinct $A$ and $B$ in $W$, we have $A\cap B=S$. In other words, a set system or collection of sets $W$ is a sunflower if the pairwise intersection of each set in $W$ is identical. Note that this intersection, $S$, may be empty; a collection of pairwise disjoint subsets is also a sunflower. Similarly, a collection of sets each containing the same elements is also trivially a sunflower.
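A direct translation of the definition into a brute-force test (a sketch for small families only; the helper name has_sunflower is illustrative):

from itertools import combinations

def has_sunflower(family, r):
    """Return True if some r sets in the family have pairwise intersections
    all equal to a common kernel. Exponential in the family size."""
    sets = [frozenset(s) for s in family]
    for combo in combinations(sets, r):
        kernel = combo[0].intersection(*combo[1:])
        if all(a & b == kernel for a, b in combinations(combo, 2)):
            return True
    return False

family = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}]
print(has_sunflower(family, 3))   # True: kernel {1, 2} with three petals
print(has_sunflower(family, 4))   # False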
Sunflower lemma and conjecture
The study of sunflowers generally focuses on when set systems contain sunflowers, in particular, when a set system is sufficiently large to necessarily contain a sunflower.
Specifically, researchers analyze the function $f(k,r)$ for nonnegative integers $k,r$, which is defined to be the smallest nonnegative integer $n$ such that, for any set system $W$ such that every set $S\in W$ has cardinality at most $k$, if $W$ has more than $n$ sets, then $W$ contains a sunflower of $r$ sets. Though it is not clear that such an $n$ must exist, a basic and simple result of Erdős and Rado, the Delta System Theorem, indicates that it does.
Erdős-Rado Delta System Theorem:
For each $k>0$ and $r>0$ there is an integer $f(k,r)$ such that if a set system $F$ of $k$-sets is of cardinality greater than $f(k,r)$, then $F$ contains a sunflower of size $r$.
In the literature, $W$ is often assumed to be a set rather than a collection, so any set can appear in $W$ at most once. By adding dummy elements, it suffices to only consider set systems $W$ such that every set in $W$ has cardinality $k$, so often the sunflower lemma is equivalently phrased as holding for "$k$-uniform" set systems.[3]
Sunflower lemma
Erdős & Rado (1960, p. 86) proved the sunflower lemma, which states that[4]
$f(k,r)\leq k!(r-1)^{k}.$
That is, if $k$ and $r$ are positive integers, then a set system $W$ of more than $k!(r-1)^{k}$ sets of cardinality $k$ contains a sunflower with at least $r$ sets.
The Erdős-Rado sunflower lemma can be proved directly through induction. First, $f(1,r)\leq r-1$, since the set system $W$ must be a collection of distinct sets of size one, and so $r$ of these sets make a sunflower. In the general case, suppose $W$ has no sunflower with $r$ sets. Then consider $A_{1},A_{2},\ldots ,A_{t}\in W$ to be a maximal collection of pairwise disjoint sets (that is, $A_{i}\cap A_{j}$ is the empty set unless $i=j$, and every set in $W$ intersects with some $A_{i}$). Because we assumed that $W$ had no sunflower of size $r$, and a collection of pairwise disjoint sets is a sunflower, $t<r$.
Let $A=A_{1}\cup A_{2}\cup \cdots \cup A_{t}$. Since each $A_{i}$ has cardinality $k$, the cardinality of $A$ is bounded by $kt\leq k(r-1)$. Define $W_{a}$ for some $a\in A$ to be
$W_{a}=\{S\setminus \{a\}\mid a\in S,\,S\in W\}.$
Then $W_{a}$ is a set system, like $W$, except that every element of $W_{a}$ has $k-1$ elements. Furthermore, every sunflower of $W_{a}$ corresponds to a sunflower of $W$, simply by adding back $a$ to every set. This means that, by our assumption that $W$ has no sunflower of size $r$, the size of $W_{a}$ must be bounded by $f(k-1,r)$.
Since every set $S\in W$ intersects with one of the $A_{i}$'s, it intersects with $A$, and so it corresponds to at least one of the sets in a $W_{a}$:
$|W|\leq \sum _{a\in A}|W_{a}|\leq |A|f(k-1,r)\leq k(r-1)f(k-1,r).$
Hence, if $|W|>k(r-1)f(k-1,r)$, then $W$ contains a sunflower of $r$ sets, each of cardinality $k$. Therefore $f(k,r)\leq k(r-1)f(k-1,r)$, and together with $f(1,r)\leq r-1$ this gives the bound of the theorem.[2]
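The argument above is constructive and translates into a recursive search. The following Python sketch (our own illustration, not an optimized algorithm) either collects $r$ pairwise disjoint sets or recurses on the sets through a popular element of their union, mirroring the induction:

```python
def find_sunflower(sets, r):
    """Recursive search following the proof of the sunflower lemma: either
    greedily collect r pairwise disjoint sets (a sunflower with empty kernel),
    or recurse on the sets containing a well-chosen common element.
    Returns a list of r sets forming a sunflower, or None; the counting in the
    proof guarantees success whenever the family of k-sets has more than
    k! * (r-1)**k members."""
    family = [frozenset(s) for s in set(map(frozenset, sets))]
    if not family:
        return None
    # Maximal collection of pairwise disjoint sets, built greedily.
    disjoint = []
    for s in family:
        if all(s.isdisjoint(t) for t in disjoint):
            disjoint.append(s)
    if len(disjoint) >= r:
        return disjoint[:r]
    # Every set meets A = union of the disjoint sets; pick the most popular
    # element of A and recurse on the sets containing it, with it removed.
    union_a = frozenset().union(*disjoint)
    if not union_a:
        return None
    x = max(union_a, key=lambda e: sum(e in s for s in family))
    core = find_sunflower([s - {x} for s in family if x in s], r)
    if core is None:
        return None
    return [s | {x} for s in core]

# Nine 2-element subsets of {1,...,6}: more than 2! * (3-1)**2 = 8,
# so a sunflower of 3 sets must exist.
example = [{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 5}, {3, 6}, {4, 5}, {4, 6}, {5, 6}]
print(find_sunflower(example, 3))  # e.g. three sets with kernel {1}, or three disjoint pairs
```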
Erdős-Rado sunflower conjecture
The sunflower conjecture is one of several variations of the conjecture of Erdős & Rado (1960, p. 86) that for each $r>2$, $f(k,r)\leq C^{k}$ for some constant $C>0$ depending only on $r$. The conjecture remains wide open even for small fixed values of $r$: for example, for $r=3$ it is not known whether $f(k,3)\leq C^{k}$ for some $C>0$.[5] A 2021 paper by Alweiss, Lovett, Wu, and Zhang gives the best progress towards the conjecture, proving that $f(k,r)\leq C^{k}$ for $C=O(r^{3}\log(k)\log \log(k))$.[6][7] A month after the release of the first version of their paper, Rao sharpened the bound to $C=O(r\log(rk))$;[8] the current best-known bound is $C=O(r\log k)$.[9]
Sunflower lower bounds
Erdős and Rado proved the following lower bound on $f(k,r)$, which shows that the dependence on $r$ in the sunflower lemma is essentially optimal.
Theorem. $(r-1)^{k}\leq f(k,r).$
Proof. For $k=1$, a family of $r-1$ distinct singletons contains no $r$ sets at all, and hence no sunflower of $r$ sets. For the inductive step, let $h(k-1,r)$ denote the size of the largest family of $(k-1)$-sets with no sunflower of $r$ sets, and let $H$ be such a family. Take $r-1$ new elements $x_{1},\ldots ,x_{r-1}$ and $r-1$ copies of $H$ on pairwise disjoint ground sets, and add the element $x_{i}$ to every set in the $i$-th copy. Let $H^{*}$ be the union of the $r-1$ modified copies, so that $(r-1)|H|=|H^{*}|$ and the modified copies partition $H^{*}$ into $r-1$ parts. Then $H^{*}$ is sunflower-free: if $r$ of its sets all lie in a single part, then removing the common element $x_{i}$ from each would give a sunflower of $r$ sets in a copy of $H$, contradicting the choice of $H$; and if the $r$ sets meet more than one part, then since there are only $r-1$ parts, two of them lie in a common part and share its element $x_{i}$, while some further set lies in a different part and does not contain $x_{i}$, so the pairwise intersections are not all equal and the sets do not form a sunflower. Hence sunflower-free families of $k$-sets of size $(r-1)h(k-1,r)$ exist, and induction from the case $k=1$ gives $(r-1)^{k}\leq f(k,r)$.
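Unrolling this induction gives an explicit construction: take $k$ disjoint blocks of $r-1$ fresh elements and form all sets choosing exactly one element from each block. A short Python sketch of this product construction (illustrative only):

```python
from itertools import product

def sunflower_free_family(k, r):
    """Build (r-1)**k sets of size k with no sunflower of r sets, by choosing
    one element from each of k disjoint blocks of r-1 fresh elements
    (the product form of the inductive construction above)."""
    blocks = [[(level, i) for i in range(r - 1)] for level in range(k)]
    return [frozenset(choice) for choice in product(*blocks)]

family = sunflower_free_family(3, 3)
print(len(family))  # (3 - 1) ** 3 == 8 sets, each of size 3
```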
A stronger result is the following theorem:
Theorem. $f(a+b,r)\geq (f(a,r)-1)(f(b,r)-1)$
Proof. Let $F$ be a sunflower-free family of $a$-sets and $F^{*}$ a sunflower-free family of $b$-sets on a disjoint ground set. For each set $A$ in $F$, append every set in $F^{*}$ to $A$ (taking unions) to produce $|F^{*}|$ sets, and denote this family of $(a+b)$-sets by $F_{A}$. Taking the union of $F_{A}$ over all $A$ in $F$ produces a family of $|F^{*}||F|$ sets which is sunflower-free.
It is also known that $10^{k/2}\leq f(k,3)$.
Applications of the sunflower lemma
The sunflower lemma has numerous applications in theoretical computer science. For example, in 1986, Razborov used the sunflower lemma to prove that the Clique language required $n^{\log(n)}$ (superpolynomial) size monotone circuits, a breakthrough result in circuit complexity theory at the time. Håstad, Jukna, and Pudlák used it to prove lower bounds on depth-$3$ $AC_{0}$ circuits. It has also been applied in the parameterized complexity of the hitting set problem, to design fixed-parameter tractable algorithms for finding small sets of elements that contain at least one element from a given family of sets.[10]
Analogue for infinite collections of sets
A version of the $\Delta $-lemma which is essentially equivalent to the Erdős-Rado $\Delta $-system theorem states that a countable collection of k-sets contains a countably infinite sunflower or $\Delta $-system.
The $\Delta $-lemma states that every uncountable collection of finite sets contains an uncountable $\Delta $-system.
The $\Delta $-lemma is a combinatorial set-theoretic tool used in proofs to impose an upper bound on the size of a collection of pairwise incompatible elements in a forcing poset. It may for example be used as one of the ingredients in a proof showing that it is consistent with Zermelo–Fraenkel set theory that the continuum hypothesis does not hold. It was introduced by Shanin (1946).
If $W$ is an $\omega _{2}$-sized collection of countable subsets of $\omega _{2}$, and if the continuum hypothesis holds, then there is an $\omega _{2}$-sized $\Delta $-subsystem. Let $\langle A_{\alpha }:\alpha <\omega _{2}\rangle $ enumerate $W$. For $\operatorname {cf} (\alpha )=\omega _{1}$, let $f(\alpha )=\sup(A_{\alpha }\cap \alpha )$. By Fodor's lemma, fix $S$ stationary in $\omega _{2}$ such that $f$ is constantly equal to $\beta $ on $S$. Build $S'\subseteq S$ of cardinality $\omega _{2}$ such that whenever $i<j$ are in $S'$ then $A_{i}\subseteq j$. Using the continuum hypothesis, there are only $\omega _{1}$-many countable subsets of $\beta $, so by further thinning we may stabilize the kernel.
See also
• Cap set
References
• Alweiss, Ryan; Lovett, Shachar; Wu, Kewen; Zhang, Jiapeng (June 2020), "Improved bounds for the sunflower lemma", Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, Association for Computing Machinery, pp. 624–630, arXiv:1908.08483, doi:10.1145/3357713.3384234, ISBN 978-1-4503-6979-4, S2CID 201314765
• Bell, Tolson; Chueluecha, Suchakree; Warnke, Lutz (2021), "Note on Sunflowers", Discrete Mathematics, 344 (7): 112367, arXiv:2009.09327, doi:10.1016/j.disc.2021.112367, MR 4240687, S2CID 221818818
• Deza, M.; Frankl, P. (1981), "Every large set of equidistant (0,+1,–1)-vectors forms a sunflower", Combinatorica, 1 (3): 225–231, doi:10.1007/BF02579328, ISSN 0209-9683, MR 0637827, S2CID 14043028
• Erdős, Paul; Rado, R. (1960), "Intersection theorems for systems of sets", Journal of the London Mathematical Society, Second Series, 35 (1): 85–90, doi:10.1112/jlms/s1-35.1.85, ISSN 0024-6107, MR 0111692
• Flum, Jörg; Grohe, Martin (2006), "A Kernelization of Hitting Set", Parameterized Complexity Theory, EATCS Ser. Texts in Theoretical Computer Science, Springer, pp. 210–212, doi:10.1007/3-540-29953-X, ISBN 978-3-540-29952-3, MR 2238686
• Jech, Thomas (2003), Set Theory, Springer
• Kunen, Kenneth (1980), Set Theory: An Introduction to Independence Proofs, North-Holland, ISBN 978-0-444-85401-8
• Rao, Anup (2020-02-25), "Coding for Sunflowers", Discrete Analysis, 2020 (2): 11887, doi:10.19086/da.11887, S2CID 202558957
• Rao, Anup (2023), "Sunflowers: from soil to oil" (PDF), Bull. Amer. Math. Soc., 60 (1): 29–38, doi:10.1090/bull/1777
• Shanin, N. A. (1946), "A theorem from the general theory of sets", C. R. (Doklady) Acad. Sci. URSS, New Series, 53: 399–400
• Tao, Terence (2020), The sunflower lemma via Shannon entropy, What's new (personal blog)
External links
• Thiemann, René. The Sunflower Lemma of Erdős and Rado (Formal proof development in Isabelle/HOL, Archive of Formal Proofs)
Notes
1. The original term for this concept was "$\Delta $-system". More recently the term "sunflower", possibly introduced by Deza & Frankl (1981), has been gradually replacing it.
2. "Extremal Combinatorics III: Some Basic Theorems". Combinatorics and more. 28 September 2008. Retrieved 2021-12-10.
3. Alweiss et al. (2020), p. 3.
4. Kostochka, Alexandr V. (2000), Althöfer, Ingo; Cai, Ning; Dueck, Gunter; Khachatrian, Levon (eds.), "Extremal Problems on Δ-Systems", Numbers, Information and Complexity, Boston, MA: Springer US, pp. 143–150, doi:10.1007/978-1-4757-6048-4_14, ISBN 978-1-4757-6048-4, retrieved 2022-05-02
5. Abbott, H.L; Hanson, D.; Sauer, N. (1972). "Intersection theorems for systems of sets". Journal of Combinatorial Theory, Series A. 12 (3): 381–389. doi:10.1016/0097-3165(72)90103-3. Retrieved 2021-12-10.
6. Alweiss et al. (2020).
7. "Quanta Magazine - Illuminating Science". Quanta Magazine. Retrieved 2019-11-10.
8. Rao (2020).
9. Bell, Chueluecha & Warnke (2021).
10. Flum & Grohe (2006).
| Wikipedia |
Sunčica Čanić
Sunčica Čanić is a Croatian-American mathematician, the Hugh Roy and Lillie Cranz Cullen Distinguished Professor of Mathematics and Director of the Center for Mathematical Biosciences at the University of Houston,[1][2] and Professor of Mathematics at the University of California, Berkeley. She is known for her work in mathematically modeling the human cardiovascular system and medical devices for it.[3]
Sunčica Čanić
Occupation: Professor
Academic background
Education: University of Zagreb
Alma mater: Stony Brook University
Doctoral advisors: Bradley J. Plohr and James Glimm
Academic work
Discipline: Mathematician
Sub-discipline: Applied mathematics
Institutions: University of Houston; University of California, Berkeley
Main interests: Mathematical modeling of the human cardiovascular system
Education and career
Čanić earned bachelor's and master's degrees in mathematics in 1984 and 1986 from the University of Zagreb. She completed her Ph.D. in 1992 in applied mathematics from Stony Brook University, under the joint supervision of Bradley J. Plohr and James Glimm. She became an assistant professor at Iowa State University in 1992, and moved to the University of Houston in 1998. She became the Cullen Distinguished Professor in 2008,[2] and Professor of Mathematics at U.C. Berkeley in 2018. She is also a member of the board of governors of the Institute for Mathematics and its Applications.[4]
Contributions
Čanić's research has involved the computational simulation of the stents used to treat arterial clogging. By finding ways of simplifying computer models of stents from hundreds of thousands of nodes to only 400 nodes, she was able to make these simulations much more efficient, and used them to design improved stents that reduce clotting and scar formation.[5] She has also led the development of a procedure for heart valve replacement surgery that is less traumatic than open-heart surgery.[3]
Recognition
In 2014 she was elected as a fellow of the Society for Industrial and Applied Mathematics "for contributions to the modeling and analysis of partial differential equations motivated by applications in the life sciences."[6] She was elected as a Fellow of the American Mathematical Society in the 2020 Class, for "contributions to partial differential equations, and for mathematical modeling of fluid-structure interactions that has influenced the design of medical devices".[7]
References
1. "Newly Named Cullen Professor Uses Mathematics to Benefit Heart Research". University of Houston. 10 September 2008. Retrieved 19 January 2022.
2. Curriculum vitae: Sunčica Čanić (PDF), retrieved 13 October 2015
3. Tan, Anna (26 August 2014), UH mathematician's work to become basis for open-heart surgery alternative, BioNews Texas, retrieved 13 October 2015.
4. "Suncica Canic", Board of Governors, Institute for Mathematics and its Applications, retrieved 13 October 2015.
5. Dutchen, Stephanie (26 August 2010), "Scientists use math to build better stents", Behind the Scenes, National Science Foundation.
6. SIAM Fellows: Class of 2014, Society for Industrial and Applied Mathematics, retrieved 13 October 2015.
7. 2020 Class of the Fellows of the AMS, American Mathematical Society, retrieved 3 November 2019
External links
• Official website
| Wikipedia |
Super envy-freeness
A super-envy-free division is a kind of a fair division. It is a division of resources among n partners, in which each partner values his/her share at strictly more than his/her due share of 1/n of the total value, and simultaneously, values the share of every other partner at strictly less than 1/n. Formally, in a super-envy-free division of a resource C among n partners, each partner i, with value measure Vi, receives a share Xi such that:
$V_{i}(X_{i})>V_{i}(C)/n~~{\text{ and }}~~\forall j\neq i:V_{i}(X_{j})<V_{i}(C)/n$.
This is a strong fairness requirement: it is stronger than both envy-freeness and super-proportionality.
Existence
Super envy-freeness was introduced by Julius Barbanel in 1996.[1] He proved that a super-envy-free cake-cutting exists if-and-only-if the value measures of the n partners are linearly independent. "Linearly independent" means that there are no real numbers $c_{1},\ldots ,c_{n}\in \mathbb {R} $, not all zero, for which $c_{1}\cdot V_{1}+\cdots +c_{n}\cdot V_{n}=0$.
Computation
In 1999,[2] William Webb presented an algorithm that finds a super-envy-free allocation in this case. His algorithm is based on a witness to the fact that the measures are independent. A witness is an n-by-n matrix, in which element (i,j) is the value assigned by agent i to some piece j (where the pieces 1,...,n can be any partition of the cake, for example, partition to equal-length intervals). The matrix should be invertible - this is a witness to the linear independence of the measures.
Using such a matrix, the algorithm partitions each of the n pieces in a near-exact division. It can be shown that, if the matrix is invertible and the approximation factor is sufficiently small (w.r.t. the values in the inverse of the matrix), then the resulting allocation is indeed super-envy-free.
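To make the role of the witness matrix concrete, the following Python/NumPy sketch computes, for each agent, target fractions of every piece by inverting the matrix directly. This is an idealized illustration rather than Webb's algorithm: it assumes each piece can be cut in exact fractions (Webb's algorithm only relies on near-exact division), and all names in the code are ours:

```python
import numpy as np

def super_envy_free_fractions(values, eps=0.01):
    """Idealized sketch (not Webb's algorithm): given an invertible n-by-n matrix
    'values', where values[i][j] is agent i's value for piece j and each row sums
    to 1, return X with X[k][j] = fraction of piece j given to agent k, so that
    every agent values its own bundle at 1/n + eps and every other bundle at
    1/n - eps/(n-1).  Assumes pieces can be cut in exact fractions."""
    M = np.asarray(values, dtype=float)
    n = M.shape[0]
    # Target matrix T: T[k][i] is the value agent i should assign to bundle k.
    T = np.full((n, n), 1.0 / n - eps / (n - 1))
    np.fill_diagonal(T, 1.0 / n + eps)
    # We need X @ M.T == T; invertibility of M is the "witness" that the
    # value measures are linearly independent.
    X = T @ np.linalg.inv(M.T)
    if (X < 0).any() or (X > 1).any():
        raise ValueError("eps too large (or matrix too close to singular)")
    return X

M = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
X = super_envy_free_fractions(M)
print(np.round(X @ np.asarray(M).T, 3))  # diagonal > 1/3, off-diagonal < 1/3
```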
The run-time of the algorithm depends on the properties of the matrix. However, if the value measures are drawn uniformly at random from the unit simplex, with high probability, the runtime is polynomial in n.[3]
References
1. Barbanel, Julius B. (1996-01-01). "Super Envy-Free Cake Division and Independence of Measures". Journal of Mathematical Analysis and Applications. 197 (1): 54–60. doi:10.1006/S0022-247X(96)90006-2. ISSN 0022-247X.
2. Webb, William A. (1999-11-01). "An Algorithm For Super Envy-Free Cake Division". Journal of Mathematical Analysis and Applications. 239 (1): 175–179. doi:10.1006/jmaa.1999.6581. ISSN 0022-247X.
3. Chèze, Guillaume (2020-05-05). "Envy-free cake cutting: A polynomial number of queries with high probability". arXiv:2005.01982 [cs.CC].
| Wikipedia |
Tetration
In mathematics, tetration (or hyper-4) is an operation based on iterated, or repeated, exponentiation. There is no standard notation for tetration, though $\uparrow \uparrow $ and the left-exponent ${}^{x}b$ are common.
For repeated tetration, see Pentation.
Under the definition as repeated exponentiation, ${^{n}a}$ means ${a^{a^{\cdot ^{\cdot ^{a}}}}}$, where n copies of a are iterated via exponentiation, right-to-left, i.e. the application of exponentiation $n-1$ times. n is called the "height" of the function, while a is called the "base," analogous to exponentiation. It would be read as "the nth tetration of a".
It is the next hyperoperation after exponentiation, but before pentation. The word was coined by Reuben Louis Goodstein from tetra- (four) and iteration.
Tetration is also defined recursively as
${a\uparrow \uparrow n}:={\begin{cases}1&{\text{if }}n=0,\\a^{a\uparrow \uparrow (n-1)}&{\text{if }}n>0,\end{cases}}$
allowing for attempts to extend tetration to non-natural numbers such as real and complex numbers.
The two inverses of tetration are called super-root and super-logarithm, analogous to the nth root and the logarithmic functions. None of the three functions are elementary.
Tetration is used for the notation of very large numbers.
Introduction
The first four hyperoperations are shown here, with tetration being considered the fourth in the series. The unary operation succession, defined as $a'=a+1$, is considered to be the zeroth operation.
1. Addition
$a+n=a+\underbrace {1+1+\cdots +1} _{n}$
n copies of 1 added to a combined by succession.
2. Multiplication
$a\times n=\underbrace {a+a+\cdots +a} _{n}$
n copies of a combined by addition.
3. Exponentiation
$a^{n}=\underbrace {a\times a\times \cdots \times a} _{n}$
n copies of a combined by multiplication.
4. Tetration
${^{n}a}=\underbrace {a^{a^{\cdot ^{\cdot ^{a}}}}} _{n}$
n copies of a combined by exponentiation, right-to-left.
Note that nested exponents are conventionally interpreted from the top down: $3^{5^{7}}$ means $3^{\left(5^{7}\right)}$ and not $\left(3^{5}\right)^{7}.$
Succession ($a_{n+1}=a_{n}+1$) is the most basic operation; while addition (a + n) is a primary operation, for addition of natural numbers it can be thought of as a chained succession of n successors of a; multiplication (a × n) is also a primary operation, though for natural numbers it can analogously be thought of as a chained addition involving n copies of a. Exponentiation can be thought of as a chained multiplication involving n copies of a, and tetration ($^{n}a\!$) as a chained power involving n copies of a. Each of the operations above is defined by iterating the previous one;[1] however, unlike the operations before it, tetration is not an elementary function.
The parameter a is referred to as the base, while the parameter n may be referred to as the height. In the original definition of tetration, the height parameter must be a natural number; for instance, it would be illogical to say "three raised to itself negative five times" or "four raised to itself one half of a time." However, just as addition, multiplication, and exponentiation can be defined in ways that allow for extensions to real and complex numbers, several attempts have been made to generalize tetration to negative numbers, real numbers, and complex numbers. One such way for doing so is using a recursive definition for tetration; for any positive real $a>0$ and non-negative integer $n\geq 0$, we can define $\,\!{^{n}a}$ recursively as:[1]
${^{n}a}:={\begin{cases}1&{\text{if }}n=0\\a^{\left(^{(n-1)}a\right)}&{\text{if }}n>0\end{cases}}$
The recursive definition is equivalent to repeated exponentiation for natural heights; however, this definition allows for extensions to the other heights such as $^{0}a$, $^{-1}a$, and $^{i}a$ as well – many of these extensions are areas of active research.
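For natural-number heights the recursive definition can be evaluated directly; a minimal Python sketch (exact integer arithmetic, so practical only for very small heights):

```python
def tetration(a, n):
    """n-th tetration of a for a non-negative integer height n,
    following the recursive definition: ^0 a = 1 and ^n a = a ** (^(n-1) a)."""
    if n == 0:
        return 1
    return a ** tetration(a, n - 1)

print(tetration(2, 4))  # 65536
print(tetration(3, 3))  # 7625597484987
```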
Terminology
There are many terms for tetration, each of which has some logic behind it, but some have not become commonly used for one reason or another. Here is a comparison of each term with its rationale and counter-rationale.
• The term tetration, introduced by Goodstein in his 1947 paper Transfinite Ordinals in Recursive Number Theory[2] (generalizing the recursive base-representation used in Goodstein's theorem to use higher operations), has gained dominance. It was also popularized in Rudy Rucker's Infinity and the Mind.
• The term superexponentiation was published by Bromer in his paper Superexponentiation in 1987.[3] It was used earlier by Ed Nelson in his book Predicative Arithmetic, Princeton University Press, 1986.
• The term hyperpower[4] is a natural combination of hyper and power, which aptly describes tetration. The problem lies in the meaning of hyper with respect to the hyperoperation sequence. When considering hyperoperations, the term hyper refers to all ranks, and the term super refers to rank 4, or tetration. So under these considerations hyperpower is misleading, since it is only referring to tetration.
• The term power tower[5] is occasionally used, in the form "the power tower of order n" for ${\ \atop {\ }}{{\underbrace {a^{a^{\cdot ^{\cdot ^{a}}}}} } \atop n}$. Exponentiation is easily misconstrued here: raising to a power is right-associative (see below), so a power tower is evaluated starting at the top right of the expression, with an instance a^a (call this value c). Working leftward, the next a serves as the new base b, and the new value b^c is evaluated; one then descends the tower step by step, with the larger value just obtained playing the role of c at the next level down.
Owing in part to some shared terminology and similar notational symbolism, tetration is often confused with closely related functions and expressions. Here are a few related terms:
Terms related to tetration
Terminology Form
Tetration $a^{a^{\cdot ^{\cdot ^{a^{a}}}}}$
Iterated exponentials $a^{a^{\cdot ^{\cdot ^{a^{x}}}}}$
Nested exponentials (also towers) $a_{1}^{a_{2}^{\cdot ^{\cdot ^{a_{n}}}}}$
Infinite exponentials (also towers) $a_{1}^{a_{2}^{a_{3}^{\cdot ^{\cdot ^{\cdot }}}}}$
In the first two expressions a is the base, and the number of times a appears is the height (add one for x). In the third expression, n is the height, but each of the bases is different.
Care must be taken when referring to iterated exponentials, as it is common to call expressions of this form iterated exponentiation, which is ambiguous, as this can either mean iterated powers or iterated exponentials.
Notation
There are many different notation styles that can be used to express tetration. Some notations can also be used to describe other hyperoperations, while some are limited to tetration and have no immediate extension.
Notation styles for tetration
Name Form Description
Rudy Rucker notation $\,{}^{n}a$ Used by Maurer [1901] and Goodstein [1947]; Rudy Rucker's book Infinity and the Mind popularized the notation.[nb 1]
Knuth's up-arrow notation $a{\uparrow \uparrow }n$ Allows extension by putting more arrows, or, even more powerfully, an indexed arrow.
Conway chained arrow notation $a\rightarrow n\rightarrow 2$ Allows extension by increasing the number 2 (equivalent with the extensions above), but also, even more powerfully, by extending the chain
Ackermann function ${}^{n}2=\operatorname {A} (4,n-3)+3$ Allows the special case $a=2$ to be written in terms of the Ackermann function.
Iterated exponential notation $\exp _{a}^{n}(1)$ Allows simple extension to iterated exponentials from initial values other than 1.
Hooshmand notations[6] ${\begin{aligned}&\operatorname {uxp} _{a}n\\[2pt]&a^{\frac {n}{}}\end{aligned}}$ Used by M. H. Hooshmand [2006].
Hyperoperation notations ${\begin{aligned}&a[4]n\\[2pt]&H_{4}(a,n)\end{aligned}}$ Allows extension by increasing the number 4; this gives the family of hyperoperations.
Double caret notation a^^n Since the up-arrow is used identically to the caret (^), tetration may be written as (^^); convenient for ASCII.
One notation above uses iterated exponential notation; this is defined in general as follows:
$\exp _{a}^{n}(x)=a^{a^{\cdot ^{\cdot ^{a^{x}}}}}$ with n copies of a.
There are not as many notations for iterated exponentials, but here are a few:
Notation styles for iterated exponentials
Name Form Description
Standard notation $\exp _{a}^{n}(x)$ Euler coined the notation $\exp _{a}(x)=a^{x}$, and iteration notation $f^{n}(x)$ has been around about as long.
Knuth's up-arrow notation $(a{\uparrow })^{n}(x)$ Allows for super-powers and super-exponential function by increasing the number of arrows; used in the article on large numbers.
Text notation exp_a^n(x) Based on standard notation; convenient for ASCII.
J Notation x^^:(n-1)x Repeats the exponentiation. See J (programming language)[7]
Infinity barrier notation $a\uparrow \uparrow n|x$ Jonathan Bowers coined this,[8] and it can be extended to higher hyper-operations
Examples
Because of the extremely fast growth of tetration, most values in the following table are too large to write in scientific notation. In these cases, iterated exponential notation is used to express them in base 10. The values containing a decimal point are approximate.
Examples of tetration
$x$ ${}^{2}x$ ${}^{3}x$ ${}^{4}x$ ${}^{5}x$ ${}^{6}x$
1 1 1 1 1 1
2 4 ($2^{2}$) 16 ($2^{4}$) 65,536 ($2^{16}$) $2.00353\times 10^{19{,}728}$ $\exp _{10}^{3}(4.29508)$ ($6.03123\times 10^{19{,}727}$ digits)
3 27 ($3^{3}$) 7,625,597,484,987 ($3^{27}$) $\exp _{10}^{3}(1.09902)$ (3,638,334,640,025 digits) $\exp _{10}^{4}(1.09902)$ $\exp _{10}^{5}(1.09902)$
4 256 ($4^{4}$) $1.34078\times 10^{154}$ ($4^{256}$) $\exp _{10}^{3}(2.18726)$ ($8.0723\times 10^{153}$ digits) $\exp _{10}^{4}(2.18726)$ $\exp _{10}^{5}(2.18726)$
5 3,125 ($5^{5}$) $1.91101\times 10^{2{,}184}$ ($5^{3{,}125}$) $\exp _{10}^{3}(3.33928)$ ($1.33574\times 10^{2{,}184}$ digits) $\exp _{10}^{4}(3.33928)$ $\exp _{10}^{5}(3.33928)$
6 46,656 ($6^{6}$) $2.65912\times 10^{36{,}305}$ ($6^{46{,}656}$) $\exp _{10}^{3}(4.55997)$ ($2.0692\times 10^{36{,}305}$ digits) $\exp _{10}^{4}(4.55997)$ $\exp _{10}^{5}(4.55997)$
7 823,543 ($7^{7}$) $3.75982\times 10^{695{,}974}$ $\exp _{10}^{3}(5.84259)$ ($3.17742\times 10^{695{,}974}$ digits) $\exp _{10}^{4}(5.84259)$ $\exp _{10}^{5}(5.84259)$
8 16,777,216 ($8^{8}$) $6.01452\times 10^{15{,}151{,}335}$ $\exp _{10}^{3}(7.18045)$ ($5.43165\times 10^{15{,}151{,}335}$ digits) $\exp _{10}^{4}(7.18045)$ $\exp _{10}^{5}(7.18045)$
9 387,420,489 ($9^{9}$) $4.28125\times 10^{369{,}693{,}099}$ $\exp _{10}^{3}(8.56784)$ ($4.08535\times 10^{369{,}693{,}099}$ digits) $\exp _{10}^{4}(8.56784)$ $\exp _{10}^{5}(8.56784)$
10 10,000,000,000 ($10^{10}$) $10^{10{,}000{,}000{,}000}$ $\exp _{10}^{3}(10)$ ($10^{10{,}000{,}000{,}000}+1$ digits) $\exp _{10}^{4}(10)$ $\exp _{10}^{5}(10)$
Remark: If x does not differ from 10 by orders of magnitude, then for all $k\geq 3,~^{m}x=\exp _{10}^{k}z,~z>1~\Rightarrow ~^{m+1}x=\exp _{10}^{k+1}z'{\text{ with }}z'\approx z$. For example, $z-z'\approx 2\cdot 10^{-15}{\text{ for }}x=3=k,~m=4$ in the above table, and the difference is even smaller for the following rows.
Properties
Tetration has several properties that are similar to exponentiation, as well as properties that are specific to the operation and are lost or gained from exponentiation. Because exponentiation does not commute, the product and power rules do not have an analogue with tetration; the statements $ {}^{a}\left({}^{b}x\right)=\left({}^{ab}x\right)$ and $ {}^{a}\left(xy\right)={}^{a}x{}^{a}y$ are not true for most cases.[9]
However, tetration does follow a different property, in which $ {}^{a}x=x^{\left({}^{a-1}x\right)}$. This fact is most clearly shown using the recursive definition. From this property, a proof follows that $\left({}^{b}a\right)^{\left({}^{c}a\right)}=\left({}^{c+1}a\right)^{\left({}^{b-1}a\right)}$, which allows for switching b and c in certain equations. The proof goes as follows:
${\begin{aligned}\left({}^{b}a\right)^{\left({}^{c}a\right)}={}&\left(a^{{}^{b-1}a}\right)^{\left({}^{c}a\right)}\\={}&a^{\left({}^{b-1}a\right)\left({}^{c}a\right)}\\={}&a^{\left({}^{c}a\right)\left({}^{b-1}a\right)}\\={}&\left({}^{c+1}a\right)^{\left({}^{b-1}a\right)}\end{aligned}}$
When a number x and 10 are coprime, it is possible to compute the last m decimal digits of $\,\!\ ^{a}x$ using Euler's theorem, for any integer m. This is also true in other bases: for example, the last m octal digits of $\,\!\ ^{a}x$ can be computed when x and 8 are coprime.
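This observation can be turned into a computation by applying Euler's theorem at every level of the tower. The following Python sketch (our own illustration, assuming $x$ coprime to 10 as stated above) returns the last $m$ decimal digits of ${}^{a}x$:

```python
def euler_phi(m):
    """Euler's totient function, by trial division (adequate for small moduli)."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tetration_last_digits(x, a, m):
    """Last m decimal digits of ^a x for x coprime to 10: every modulus in the
    chain 10**m, phi(10**m), phi(phi(10**m)), ... is a product of 2s and 5s,
    so Euler's theorem applies at each level of the tower."""
    def tet_mod(height, mod):
        if height == 0:
            return 1 % mod
        return pow(x, tet_mod(height - 1, euler_phi(mod)), mod)
    return tet_mod(a, 10 ** m)

print(tetration_last_digits(3, 3, 8))  # 97484987 (indeed ^3 3 = 7625597484987)
print(tetration_last_digits(7, 3, 6))  # last six digits of ^3 7
```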
Direction of evaluation
When evaluating tetration expressed as an "exponentiation tower", the serial exponentiation is done at the deepest level first (in the notation, at the apex). For example:
$^{4}2=2^{2^{2^{2}}}=2^{\left(2^{\left(2^{2}\right)}\right)}=2^{\left(2^{4}\right)}=2^{16}=65,\!536$
This order is important because exponentiation is not associative, and evaluating the expression in the opposite order will lead to a different answer:
$2^{2^{2^{2}}}\neq \left({\left(2^{2}\right)}^{2}\right)^{2}=4^{2\cdot 2}=256$
Evaluating the expression from left to right is considered less interesting; evaluating left to right, any expression $^{n}a\!$ simplifies to $a^{\left(a^{n-1}\right)}\!\!$.[10] Because of this, the towers must be evaluated from right to left (or top to bottom). Computer programmers refer to this choice as right-associative.
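Most programming languages with a built-in exponentiation operator follow the same right-associative convention; for example, in Python:

```python
# Python's ** operator is right-associative, matching top-down evaluation:
print(2 ** 2 ** 2 ** 2)      # 65536, i.e. 2 ** (2 ** (2 ** 2))
print(((2 ** 2) ** 2) ** 2)  # 256, the different left-to-right value
```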
Extensions
Tetration can be extended in two different ways; in the equation $^{n}a\!$, both the base a and the height n can be generalized using the definition and properties of tetration. Although the base and the height can be extended beyond the non-negative integers to different domains, including ${^{n}0}$, complex functions such as ${}^{n}i$, and heights of infinite n, the more limited properties of tetration reduce the ability to extend tetration.
Base zero
The exponential $0^{0}$ is not consistently defined. Thus, the tetrations $\,{^{n}0}$ are not clearly defined by the formula given earlier. However, $\lim _{x\rightarrow 0}{}^{n}x$ is well defined, and exists:[11]
$\lim _{x\rightarrow 0}{}^{n}x={\begin{cases}1,&n{\text{ even}}\\0,&n{\text{ odd}}\end{cases}}$
Thus we could consistently define ${}^{n}0=\lim _{x\rightarrow 0}{}^{n}x$. This is analogous to defining $0^{0}=1$.
Under this extension, ${}^{0}0=1$, so the rule ${^{0}a}=1$ from the original definition still holds.
Complex bases
Since complex numbers can be raised to powers, tetration can be applied to bases of the form z = a + bi (where a and b are real). For example, in ${}^{n}z$ with z = i, tetration is achieved by using the principal branch of the natural logarithm; using Euler's formula we get the relation:
$i^{a+bi}=e^{{\frac {1}{2}}{\pi i}(a+bi)}=e^{-{\frac {1}{2}}{\pi b}}\left(\cos {\frac {\pi a}{2}}+i\sin {\frac {\pi a}{2}}\right)$
This suggests a recursive definition for ${}^{n+1}i=a'+b'i$ given any ${}^{n}i=a+bi$:
${\begin{aligned}a'&=e^{-{\frac {1}{2}}{\pi b}}\cos {\frac {\pi a}{2}}\\[2pt]b'&=e^{-{\frac {1}{2}}{\pi b}}\sin {\frac {\pi a}{2}}\end{aligned}}$
The following approximate values can be derived:
Values of tetration of complex bases
$ {}^{n}i$ Approximate value
$ {}^{1}i=i$ i
$ {}^{2}i=i^{\left({}^{1}i\right)}$ 0.2079
$ {}^{3}i=i^{\left({}^{2}i\right)}$ 0.9472 + 0.3208i
$ {}^{4}i=i^{\left({}^{3}i\right)}$ 0.0501 + 0.6021i
$ {}^{5}i=i^{\left({}^{4}i\right)}$ 0.3872 + 0.0305i
$ {}^{6}i=i^{\left({}^{5}i\right)}$ 0.7823 + 0.5446i
$ {}^{7}i=i^{\left({}^{6}i\right)}$ 0.1426 + 0.4005i
$ {}^{8}i=i^{\left({}^{7}i\right)}$ 0.5198 + 0.1184i
$ {}^{9}i=i^{\left({}^{8}i\right)}$ 0.5686 + 0.6051i
Solving the inverse relation, as in the previous section, yields the expected ${}^{0}i=1$ and ${}^{-1}i=0$, with negative values of n giving infinite results on the imaginary axis. Plotted in the complex plane, the entire sequence spirals to the limit 0.4383 + 0.3606i, which could be interpreted as the value where n is infinite.
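The recursion above can be checked numerically with the principal branch of the complex logarithm; a short Python sketch using the standard cmath module reproduces the table and the spiral toward 0.4383 + 0.3606i:

```python
import cmath

def tetrate_i(n):
    """Iterate z -> i**z with the principal branch of log, giving ^n i."""
    z = 1 + 0j                 # ^0 i = 1
    log_i = cmath.log(1j)      # principal value: i*pi/2
    for _ in range(n):
        z = cmath.exp(z * log_i)
    return z

for n in range(1, 10):
    z = tetrate_i(n)
    print(n, round(z.real, 4), round(z.imag, 4))
# the values match the table above and spiral toward about 0.4383 + 0.3606i
```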
Such tetration sequences have been studied since the time of Euler, but are poorly understood due to their chaotic behavior. Most published research historically has focused on the convergence of the infinitely iterated exponential function. Current research has greatly benefited by the advent of powerful computers with fractal and symbolic mathematics software. Much of what is known about tetration comes from general knowledge of complex dynamics and specific research of the exponential map.
Infinite heights
Tetration can be extended to infinite heights; i.e., for certain a and n values in ${}^{n}a$, there exists a well defined result for an infinite n. This is because for bases within a certain interval, tetration converges to a finite value as the height tends to infinity. For example, ${\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\cdot ^{\cdot ^{\cdot }}}}}$ converges to 2, and can therefore be said to be equal to 2. The trend towards 2 can be seen by evaluating a small finite tower:
${\begin{aligned}{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{1.414}}}}}&\approx {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{1.63}}}}\\&\approx {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{1.76}}}\\&\approx {\sqrt {2}}^{{\sqrt {2}}^{1.84}}\\&\approx {\sqrt {2}}^{1.89}\\&\approx 1.93\end{aligned}}$
In general, the infinitely iterated exponential $x^{x^{\cdot ^{\cdot ^{\cdot }}}}\!\!$, defined as the limit of ${}^{n}x$ as n goes to infinity, converges for $e^{-e}\leq x\leq e^{1/e}$, roughly the interval from 0.066 to 1.44, a result shown by Leonhard Euler.[12] The limit, should it exist, is a positive real solution of the equation $y=x^{y}$. Thus, $x=y^{1/y}$. The limit defining the infinite exponential of x does not exist when $x>e^{1/e}$ because the maximum of $y^{1/y}$ is $e^{1/e}$. The limit also fails to exist when $0<x<e^{-e}$.
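Within Euler's interval of convergence the infinite tower can be approximated simply by iterating $y\mapsto x^{y}$; a minimal Python sketch (illustrative only):

```python
def infinite_power_tower(x, iterations=200):
    """Approximate x^x^x^... by iterating y -> x**y from y = 1;
    converges for e**-e <= x <= e**(1/e)."""
    y = 1.0
    for _ in range(iterations):
        y = x ** y
    return y

print(infinite_power_tower(2 ** 0.5))  # ~2.0
print(infinite_power_tower(0.5))       # ~0.6411, the solution of y = 0.5**y
```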
This may be extended to complex numbers z with the definition:
${}^{\infty }z=z^{z^{\cdot ^{\cdot ^{\cdot }}}}={\frac {\mathrm {W} (-\ln {z})}{-\ln {z}}}~,$
where W represents Lambert's W function.
As the limit $y={}^{\infty }x$ (if existent on the positive real line, i.e. for $e^{-e}\leq x\leq e^{1/e}$) must satisfy $x^{y}=y$ we see that $x\mapsto y={}^{\infty }x$ is (the lower branch of) the inverse function of $y\mapsto x=y^{1/y}$.
Negative heights
We can use the recursive rule for tetration,
${^{k+1}a}=a^{\left({^{k}a}\right)},$
to find the value of ${}^{-1}a$, by rewriting it as
$^{k}a=\log _{a}\left(^{k+1}a\right);$
Substituting −1 for k gives
${}^{-1}a=\log _{a}\left({}^{0}a\right)=\log _{a}1=0$.[10]
Smaller negative values cannot be well defined in this way. Substituting −2 for k in the same equation gives
${}^{-2}a=\log _{a}\left({}^{-1}a\right)=\log _{a}0=-\infty $
which is not well defined. They can, however, sometimes be considered sets.[10]
For $a=1$, any definition of $\,\!{^{-1}1}$ is consistent with the rule because
${^{0}1}=1=1^{n}$ for any $\,\!n={^{-1}1}$.
Real heights
At this time there is no commonly accepted solution to the general problem of extending tetration to the real or complex values of n. There have, however, been multiple approaches towards the issue, and different approaches are outlined below.
In general, the problem is finding — for any real a > 0 — a super-exponential function $\,f(x)={}^{x}a$ over real x > −2 that satisfies
• $\,{}^{-1}a=0$
• $\,{}^{0}a=1$
• $\,{}^{x}a=a^{\left({}^{x-1}a\right)}$for all real $x>-1.$[13]
To find a more natural extension, one or more extra requirements are usually required. This is usually some collection of the following:
• A continuity requirement (usually just that ${}^{x}a$ is continuous in both variables for $x>0$).
• A differentiability requirement (can be once, twice, k times, or infinitely differentiable in x).
• A regularity requirement (implying twice differentiable in x) that:
$\left({\frac {d^{2}}{dx^{2}}}f(x)>0\right)$ for all $x>0$
The fourth requirement differs from author to author, and between approaches. There are two main approaches to extending tetration to real heights; one is based on the regularity requirement, and one is based on the differentiability requirement. These two approaches seem to be so different that they may not be reconciled, as they produce results inconsistent with each other.
When $\,{}^{x}a$ is defined for an interval of length one, the whole function easily follows for all x > −2.
Linear approximation for real heights
A linear approximation (solution to the continuity requirement, approximation to the differentiability requirement) is given by:
${}^{x}a\approx {\begin{cases}\log _{a}\left(^{x+1}a\right)&x\leq -1\\1+x&-1<x\leq 0\\a^{\left(^{x-1}a\right)}&0<x\end{cases}}$
hence:
Linear approximation values
Approximation Domain
$ {}^{x}a\approx x+1$ for −1 < x < 0
$ {}^{x}a\approx a^{x}$ for 0 < x < 1
$ {}^{x}a\approx a^{a^{(x-1)}}$ for 1 < x < 2
and so on. However, it is only piecewise differentiable; at integer values of x the derivative is multiplied by $\ln {a}$. It is continuously differentiable for $x>-2$ if and only if $a=e$. For example, using these methods ${}^{\frac {\pi }{2}}e\approx 5.868...$ and ${}^{-4.3}0.5\approx 4.03335...$
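The piecewise formula translates directly into code; the following Python sketch (our own illustration) reproduces the two sample values just quoted:

```python
import math

def tet_linear(a, x):
    """Linear approximation to ^x a for real x > -2 (piecewise formula above)."""
    if x <= -1:
        return math.log(tet_linear(a, x + 1), a)
    if x <= 0:
        return 1 + x
    return a ** tet_linear(a, x - 1)

print(tet_linear(math.e, math.pi / 2))  # ~5.868
print(tet_linear(0.5, -4.3))            # ~4.033
```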
A main theorem in Hooshmand's paper[6] states: Let $0<a\neq 1$. If $f:(-2,+\infty )\rightarrow \mathbb {R} $ is continuous and satisfies the conditions:
• $f(x)=a^{f(x-1)}\;\;{\text{for all}}\;\;x>-1,\;f(0)=1,$
• $f$ is differentiable on (−1, 0),
• $f^{\prime }$ is a nondecreasing or nonincreasing function on (−1, 0),
• $f^{\prime }\left(0^{+}\right)=(\ln a)f^{\prime }\left(0^{-}\right){\text{ or }}f^{\prime }\left(-1^{+}\right)=f^{\prime }\left(0^{-}\right).$
then $f$ is uniquely determined through the equation
$f(x)=\exp _{a}^{[x]}\left(a^{(x)}\right)=\exp _{a}^{[x+1]}((x))\quad {\text{for all}}\;\;x>-2,$
where $(x)=x-[x]$ denotes the fractional part of x and $\exp _{a}^{[x]}$ is the $[x]$-iterated function of the function $\exp _{a}$.
The proof is that the second through fourth conditions trivially imply that f is a linear function on [−1, 0].
The linear approximation to natural tetration function ${}^{x}e$ is continuously differentiable, but its second derivative does not exist at integer values of its argument. Hooshmand derived another uniqueness theorem for it which states:
If $f:(-2,+\infty )\rightarrow \mathbb {R} $ is a continuous function that satisfies:
• $f(x)=e^{f(x-1)}\;\;{\text{for all}}\;\;x>-1,\;f(0)=1,$
• $f$ is convex on (−1, 0),
• $f^{\prime }\left(0^{-}\right)\leq f^{\prime }\left(0^{+}\right).$
then $f={\text{uxp}}$. [Here $f={\text{uxp}}$ is Hooshmand's name for the linear approximation to the natural tetration function.]
The proof is much the same as before; the recursion equation ensures that $f^{\prime }(-1^{+})=f^{\prime }(0^{+}),$ and then the convexity condition implies that $f$ is linear on (−1, 0).
Therefore, the linear approximation to natural tetration is the only solution of the equation $f(x)=e^{f(x-1)}\;\;(x>-1)$ and $f(0)=1$ which is convex on (−1, +∞). All other sufficiently-differentiable solutions must have an inflection point on the interval (−1, 0).
Higher order approximations for real heights
Beyond linear approximations, a quadratic approximation (to the differentiability requirement) is given by:
${}^{x}a\approx {\begin{cases}\log _{a}\left({}^{x+1}a\right)&x\leq -1\\1+{\frac {2\ln(a)}{1\;+\;\ln(a)}}x-{\frac {1\;-\;\ln(a)}{1\;+\;\ln(a)}}x^{2}&-1<x\leq 0\\a^{\left({}^{x-1}a\right)}&x>0\end{cases}}$
which is differentiable for all $x>0$, but not twice differentiable. For example, ${}^{\frac {1}{2}}2\approx 1.45933...$ If $a=e$ this is the same as the linear approximation.[1]
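A corresponding sketch of the quadratic approximation (again our own illustration), reproducing ${}^{1/2}2\approx 1.45933$:

```python
import math

def tet_quadratic(a, x):
    """Quadratic approximation to ^x a (differentiable, not twice differentiable)."""
    if x <= -1:
        return math.log(tet_quadratic(a, x + 1), a)
    if x <= 0:
        la = math.log(a)
        return 1 + 2 * la / (1 + la) * x - (1 - la) / (1 + la) * x * x
    return a ** tet_quadratic(a, x - 1)

print(tet_quadratic(2, 0.5))  # ~1.45933
```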
Because of the way it is calculated, this function does not "cancel out", contrary to exponents, where $\left(a^{\frac {1}{n}}\right)^{n}=a$. Namely,
${}^{n}\left({}^{\frac {1}{n}}a\right)=\underbrace {\left({}^{\frac {1}{n}}a\right)^{\left({}^{\frac {1}{n}}a\right)^{\cdot ^{\cdot ^{\cdot ^{\cdot ^{\left({}^{\frac {1}{n}}a\right)}}}}}}} _{n}\neq a$.
Just as there is a quadratic approximation, cubic approximations and methods for generalizing to approximations of degree n also exist, although they are much more unwieldy.[1][14]
Complex heights
It has now been proven[15] that there exists a unique function F which is a solution of the equation F(z + 1) = exp(F(z)) and satisfies the additional conditions that F(0) = 1 and F(z) approaches the fixed points of the logarithm (roughly 0.318 ± 1.337i) as z approaches ±i∞ and that F is holomorphic in the whole complex z-plane, except the part of the real axis at z ≤ −2. This proof confirms a previous conjecture.[16] The construction of such a function was originally demonstrated by Kneser in 1950.[17] The complex map of this function is shown in the figure at right. The proof also works for other bases besides e, as long as the base is bigger than $e^{\frac {1}{e}}\approx 1.445$. Subsequent work extended the construction to all complex bases.[18]
The requirement of the tetration being holomorphic is important for its uniqueness. Many functions S can be constructed as
$S(z)=F\!\left(~z~+\sum _{n=1}^{\infty }\sin(2\pi nz)~\alpha _{n}+\sum _{n=1}^{\infty }{\Big (}1-\cos(2\pi nz){\Big )}~\beta _{n}\right)$
where α and β are real sequences which decay fast enough to provide the convergence of the series, at least at moderate values of Im z.
The function S satisfies the tetration equations S(z + 1) = exp(S(z)), S(0) = 1, and if αn and βn approach 0 fast enough it will be analytic on a neighborhood of the positive real axis. However, if some elements of {α} or {β} are not zero, then function S has multitudes of additional singularities and cutlines in the complex plane, due to the exponential growth of sin and cos along the imaginary axis; the smaller the coefficients {α} and {β} are, the further away these singularities are from the real axis.
The extension of tetration into the complex plane is thus essential for the uniqueness; the real-analytic tetration is not unique.
Non-elementary recursiveness
Tetration (restricted to $\mathbb {N} ^{2}$) is not an elementary recursive function. One can prove by induction that for every elementary recursive function f, there is a constant c such that
$f(x)\leq \underbrace {2^{2^{\cdot ^{\cdot ^{x}}}}} _{c}.$
We denote the right hand side by $g(c,x)$. Suppose on the contrary that tetration is elementary recursive. $g(x,x)+1$ is also elementary recursive. By the above inequality, there is a constant c such that $g(x,x)+1\leq g(c,x)$. By letting $x=c$, we have that $g(c,c)+1\leq g(c,c)$, a contradiction.
Inverse operations
Exponentiation has two inverse operations; roots and logarithms. Analogously, the inverses of tetration are often called the super-root, and the super-logarithm (In fact, all hyperoperations greater than or equal to 3 have analogous inverses); e.g., in the function ${^{3}}y=x$, the two inverses are the cube super-root of y and the super logarithm base y of x.
Super-root
The super-root is the inverse operation of tetration with respect to the base: if $^{n}y=x$, then y is an nth super root of x (${\sqrt[{n}]{x}}_{s}$ or ${\sqrt[{n}]{x}}_{4}$).
For example,
$^{4}2=2^{2^{2^{2}}}=65{,}536$
so 2 is the 4th super-root of 65,536.
Square super-root
The 2nd-order super-root, square super-root, or super square root has two equivalent notations, $\mathrm {ssrt} (x)$ and ${\sqrt {x}}_{s}$. It is the inverse of $^{2}x=x^{x}$ and can be represented with the Lambert W function:[19]
$\mathrm {ssrt} (x)=e^{W(\ln x)}={\frac {\ln x}{W(\ln x)}}$
The function also illustrates the reflective nature of the root and logarithm functions as the equation below only holds true when $y=\mathrm {ssrt} (x)$:
${\sqrt[{y}]{x}}=\log _{y}x$
Like square roots, the square super-root of x may not have a single solution. Unlike square roots, determining the number of square super-roots of x may be difficult. In general, if $e^{-1/e}<x<1$, then x has two positive square super-roots between 0 and 1; and if $x>1$, then x has one positive square super-root greater than 1. If x is positive and less than $e^{-1/e}$ it does not have any real square super-roots, but the formula given above yields countably infinitely many complex ones for any finite x not equal to 1.[19] The function has been used to determine the size of data clusters.[20]
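The Lambert W formula can be evaluated numerically; the sketch below assumes SciPy's scipy.special.lambertw (principal branch) is available:

```python
import math
from scipy.special import lambertw  # principal branch by default

def ssrt(x):
    """Square super-root: the y with y**y == x, via ln(x) / W(ln(x))."""
    lnx = math.log(x)
    return lnx / lambertw(lnx).real

print(ssrt(27))                # 3.0, since 3**3 == 27
print(ssrt(4))                 # 2.0, since 2**2 == 4
print(ssrt(2.0) ** ssrt(2.0))  # ~2.0
```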
At $x=1$:
$\mathrm {ssqrt} (x)=1+(x-1)-(x-1)^{2}+{\frac {3}{2}}(x-1)^{3}-{\frac {17}{6}}(x-1)^{4}+{\frac {37}{6}}(x-1)^{5}-{\frac {1759}{120}}(x-1)^{6}+{\frac {13279}{360}}(x-1)^{7}+{\mathcal {O}}{\left((x-1)^{8}\right)}$
Other super-roots
For each integer n > 2, the function ${}^{n}x$ is defined and increasing for x ≥ 1, and ${}^{n}1=1$, so that the nth super-root of x, ${\sqrt[{n}]{x}}_{s}$, exists for x ≥ 1.
One of the simpler and faster formulas for a third-degree super-root is the recursive formula: if $x^{x^{x}}=a$, then $x_{n+1}=\exp \left(\mathrm {W} {\big (}\mathrm {W} (x_{n}\ln a){\big )}\right)$, starting from, for example, $x_{0}=1$.
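The recursion can likewise be checked numerically (again assuming SciPy's lambertw); starting from $x_{0}=1$ it converges to 3 for $a=3^{27}$:

```python
import math
from scipy.special import lambertw

def cube_super_root(a, steps=60):
    """Iterate x -> exp(W(W(x * ln a))) to approach the x with x**(x**x) == a."""
    x, ln_a = 1.0, math.log(a)
    for _ in range(steps):
        x = math.exp(lambertw(lambertw(x * ln_a).real).real)
    return x

print(cube_super_root(7625597484987))  # ~3.0, since 3**(3**3) == 3**27
```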
However, if the linear approximation above is used, then $^{y}x=y+1$ if −1 < y ≤ 0, so $^{y}{\sqrt {y+1}}_{s}$ cannot exist.
In the same way as the square super-root, terminology for other super roots can be based on the normal roots: "cube super-roots" can be expressed as ${\sqrt[{3}]{x}}_{s}$; the "4th super-root" can be expressed as ${\sqrt[{4}]{x}}_{s}$; and the "nth super-root" is ${\sqrt[{n}]{x}}_{s}$. Note that ${\sqrt[{n}]{x}}_{s}$ may not be uniquely defined, because there may be more than one nth root. For example, x has a single (real) super-root if n is odd, and up to two if n is even.
Just as with the extension of tetration to infinite heights, the super-root can be extended to n = ∞, being well-defined if 1/e ≤ x ≤ e. Note that $x={^{\infty }y}=y^{\left[^{\infty }y\right]}=y^{x},$ and thus that $y=x^{1/x}$. Therefore, when it is well defined, ${\sqrt[{\infty }]{x}}_{s}=x^{1/x}$ and, unlike normal tetration, is an elementary function. For example, ${\sqrt[{\infty }]{2}}_{s}=2^{1/2}={\sqrt {2}}$.
It follows from the Gelfond–Schneider theorem that super-root ${\sqrt {n}}_{s}$ for any positive integer n is either integer or transcendental, and ${\sqrt[{3}]{n}}_{s}$ is either integer or irrational.[21] It is still an open question whether irrational super-roots are transcendental in the latter case.
Super-logarithm
Main article: Super-logarithm
Once a continuous increasing (in x) definition of tetration, ${}^{x}a$, is selected, the corresponding super-logarithm $\operatorname {slog} _{a}x$ or $\log _{a}^{4}x$ is defined for all real numbers x, and a > 1.
The function sloga x satisfies:
${\begin{aligned}\operatorname {slog} _{a}{^{x}a}&=x\\\operatorname {slog} _{a}a^{x}&=1+\operatorname {slog} _{a}x\\\operatorname {slog} _{a}x&=1+\operatorname {slog} _{a}\log _{a}x\\\operatorname {slog} _{a}x&\geq -2\end{aligned}}$
Open questions
Other than the problems with the extensions of tetration, there are several open questions concerning tetration, particularly when concerning the relations between number systems such as integers and irrational numbers:
• It is not known whether there is a positive integer n for which ${}^{n}\pi $ or ${}^{n}e$ is an integer. In particular, it is not known whether either of ${}^{4}\pi $ or ${}^{5}e$ is an integer.
• It is not known whether ${}^{n}q$ is rational for any positive integer n and positive non-integer rational q.[21] For example, it is not known whether the positive root of the equation ${}^{4}x=2$ is a rational number.
• It is not known whether ${}^{e}\pi $ or ${}^{\pi }e$ is rational, nor even what their exact values are.
See also
Wikimedia Commons has media related to tetration.
• Ackermann function
• Big O notation
• Double exponential function
• Hyperoperation
• Iterated logarithm
• Symmetric level-index arithmetic
Notes
1. Rudolf von Bitter Rucker's (1982) notation nx, as introduced by Hans Maurer (1901) and Reuben Louis Goodstein (1947) for tetration, must not be confused with Alfred Pringsheim's and Jules Molk's (1907) notation nf(x) to denote iterated function compositions, nor with David Patterson Ellerman's (1995) nx pre-superscript notation for roots.
References
1. Neyrinck, Mark. An Investigation of Arithmetic Operations. Retrieved 9 January 2019.
2. R. L. Goodstein (1947). "Transfinite ordinals in recursive number theory". Journal of Symbolic Logic. 12 (4): 123–129. doi:10.2307/2266486. JSTOR 2266486. S2CID 1318943.
3. N. Bromer (1987). "Superexponentiation". Mathematics Magazine. 60 (3): 169–174. doi:10.1080/0025570X.1987.11977296. JSTOR 2689566.
4. J. F. MacDonnell (1989). "Some critical points of the hyperpower function $x^{x^{\dots }}$". International Journal of Mathematical Education. 20 (2): 297–305. doi:10.1080/0020739890200210. MR 0994348.
5. Weisstein, Eric W. "Power Tower". MathWorld.
6. Hooshmand, M. H. (2006). "Ultra power and ultra exponential functions". Integral Transforms and Special Functions. 17 (8): 549–558. doi:10.1080/10652460500422247. S2CID 120431576.
7. "Power Verb". J Vocabulary. J Software. Retrieved 2011-10-28.
8. "Spaces". Retrieved 2022-02-17.
9. Meiburg, Alexander (2014). "Analytic Extension of Tetration Through the Product Power-Tower" (PDF). Retrieved 2018-11-29.
10. Müller, M. "Reihenalgebra: What comes beyond exponentiation?" (PDF). Retrieved 2018-12-12.
11. "Climbing the ladder of hyper operators: tetration". math.blogoverflow.com. Stack Exchange Mathematics Blog. Retrieved 2019-07-25.
12. Euler, L. "De serie Lambertina Plurimisque eius insignibus proprietatibus." Acta Acad. Scient. Petropol. 2, 29–51, 1783. Reprinted in Euler, L. Opera Omnia, Series Prima, Vol. 6: Commentationes Algebraicae. Leipzig, Germany: Teubner, pp. 350–369, 1921. (facsimile)
13. Trappmann, Henryk; Kouznetsov, Dmitrii (2010-06-28). "5+ methods for real analytic tetration". Retrieved 2018-12-05.
14. Andrew Robbins. Solving for the Analytic Piecewise Extension of Tetration and the Super-logarithm. The extensions are found in part two of the paper, "Beginning of Results".
15. Paulsen, W.; Cowgill, S. (March 2017). "Solving $F(z+1)=b^{F(z)}$ in the complex plane" (PDF). Advances in Computational Mathematics. 43: 1–22. doi:10.1007/s10444-017-9524-1. S2CID 9402035.
16. Kouznetsov, D. (July 2009). "Solution of $F(z+1)=\exp(F(z))$ in complex $z$-plane" (PDF). Mathematics of Computation. 78 (267): 1647–1670. doi:10.1090/S0025-5718-09-02188-7.
17. Kneser, H. (1950). "Reelle analytische Lösungen der Gleichung $\varphi {\Big (}\varphi (x){\Big )}={\rm {e}}^{x}$ und verwandter Funktionalgleichungen". Journal für die reine und angewandte Mathematik (in German). 187: 56–67.
18. Paulsen, W. (June 2018). "Tetration for complex bases". Advances in Computational Mathematics. 45: 243–267. doi:10.1007/s10444-018-9615-7. S2CID 67866004.
19. Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J.; Knuth, D. E. (1996). "On the Lambert W function" (PostScript). Advances in Computational Mathematics. 5: 333. arXiv:1809.07369. doi:10.1007/BF02124750. S2CID 29028411.
20. Krishnam, R. (2004), "Efficient Self-Organization Of Large Wireless Sensor Networks" – Dissertation, BOSTON UNIVERSITY, COLLEGE OF ENGINEERING. pp. 37–40
21. Marshall, Ash J., and Tan, Yiren, "A rational number of the form aa with a irrational", Mathematical Gazette 96, March 2012, pp. 106–109.
• Daniel Geisler, Tetration
• Ioannis Galidakis, On extending hyper4 to nonintegers (undated, 2006 or earlier) (A simpler, easier to read review of the next reference)
• Ioannis Galidakis, On Extending hyper4 and Knuth's Up-arrow Notation to the Reals (undated, 2006 or earlier).
• Robert Munafo, Extension of the hyper4 function to reals (An informal discussion about extending tetration to the real numbers.)
• Lode Vandevenne, Tetration of the Square Root of Two. (2004). (Attempt to extend tetration to real numbers.)
• Ioannis Galidakis, Mathematics, (Definitive list of references to tetration research. Much information on the Lambert W function, Riemann surfaces, and analytic continuation.)
• Joseph MacDonell, Some Critical Points of the Hyperpower Function.
• Dave L. Renfro, Web pages for infinitely iterated exponentials
• Knobel, R. (1981). "Exponentials Reiterated". American Mathematical Monthly. 88 (4): 235–252. doi:10.1080/00029890.1981.11995239.
• Hans Maurer, "Über die Funktion $y=x^{[x^{[x(\cdots )]}]}$ für ganzzahliges Argument (Abundanzen)." Mittheilungen der Mathematische Gesellschaft in Hamburg 4, (1901), p. 33–50. (Reference to usage of $\ {^{n}a}$ from Knobel's paper.)
• The Fourth Operation
• Luca Moroni, The strange properties of the infinite power tower (https://arxiv.org/abs/1908.05559)
Further reading
• Galidakis, Ioannis; Weisstein, Eric Wolfgang. "Power Tower". MathWorld. Retrieved 2019-07-05.
| Wikipedia |
Super-logarithm
In mathematics, the super-logarithm is one of the two inverse functions of tetration. Just as exponentiation has two inverse functions, roots and logarithms, tetration has two inverse functions, super-roots and super-logarithms. There are several ways of interpreting super-logarithms:
• As the Abel function of exponential functions,
• As the inverse function of tetration with respect to the height,
• As a generalization of Robert Munafo's large number class system,
For positive integer values, the super-logarithm with base-e is equivalent to the number of times a logarithm must be iterated to get to 1 (the Iterated logarithm). However, this is not true for negative values and so cannot be considered a full definition. The precise definition of the super-logarithm depends on a precise definition of non-integer tetration (that is, ${^{y}x}$ for y not an integer). There is no clear consensus on the definition of non-integer tetration and so there is likewise no clear consensus on the super-logarithm for non-integer inputs.
Definitions
The super-logarithm, written $\operatorname {slog} _{b}(z),$ is defined implicitly by
$\operatorname {slog} _{b}(b^{z})=\operatorname {slog} _{b}(z)+1$ and
$\operatorname {slog} _{b}(1)=0.$
This definition implies that the super-logarithm can only have integer outputs, and that it is only defined for inputs of the form $b,b^{b},b^{b^{b}},$ and so on. In order to extend the domain of the super-logarithm from this sparse set to the real numbers, several approaches have been pursued. These usually include a third requirement in addition to those listed above, which vary from author to author. These approaches are as follows:
• The linear approximation approach by Rubstov and Romerio,
• The quadratic approximation approach by Andrew Robbins,
• The regular Abel function approach by George Szekeres,
• The iterative functional approach by Peter Walker, and
• The natural matrix approach by Peter Walker, and later generalized by Andrew Robbins.
Approximations
Usually, special functions are defined not only for real values of the argument(s) but also on the complex plane, with differential and/or integral representations, as well as expansions in convergent and asymptotic series. No such representations are yet available for the slog function. Nevertheless, the simple approximations below have been suggested.
Linear approximation
The linear approximation to the super-logarithm is:
$\operatorname {slog} _{b}(z)\approx {\begin{cases}\operatorname {slog} _{b}(b^{z})-1&{\text{if }}z\leq 0\\-1+z&{\text{if }}0<z\leq 1\\\operatorname {slog} _{b}(\log _{b}(z))+1&{\text{if }}1<z\\\end{cases}}$
which is a piecewise-defined function with a linear "critical piece". This function has the property that it is continuous for all real z ($C^{0}$ continuous). The first authors to recognize this approximation were Rubstov and Romerio; although it does not appear in their paper, it can be found in the algorithm used in their software prototype. The linear approximation to tetration, on the other hand, had been known before, for example by Ioannis Galidakis. This is a natural inverse of the linear approximation to tetration.
Authors like Holmes recognize that the super-logarithm would be of great use in the next evolution of computer floating-point arithmetic, but for this purpose the function need not be infinitely differentiable. Thus, for the purpose of representing large numbers, the linear approximation approach provides enough continuity ($C^{0}$ continuity) to ensure that all real numbers can be represented on a super-logarithmic scale.
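The piecewise formula above is straightforward to evaluate; a minimal Python sketch (our own illustration):

```python
import math

def slog_linear(b, z):
    """Linear approximation to the base-b super-logarithm (piecewise formula above)."""
    if z <= 0:
        return slog_linear(b, b ** z) - 1
    if z <= 1:
        return z - 1
    return slog_linear(b, math.log(z, b)) + 1

print(slog_linear(math.e, math.e))            # 1.0
print(slog_linear(math.e, math.e ** math.e))  # 2.0
print(slog_linear(10, 3))                     # ~0.477 (3 lies between ^0 10 = 1 and ^1 10 = 10)
```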
Quadratic approximation
The quadratic approximation to the super-logarithm is:
$\operatorname {slog} _{b}(z)\approx {\begin{cases}\operatorname {slog} _{b}(b^{z})-1&{\text{if }}z\leq 0\\-1+{\frac {2\log(b)}{1+\log(b)}}z+{\frac {1-\log(b)}{1+\log(b)}}z^{2}&{\text{if }}0<z\leq 1\\\operatorname {slog} _{b}(\log _{b}(z))+1&{\text{if }}1<z\end{cases}}$
which is a piecewise-defined function with a quadratic "critical piece". This function has the property that it is continuous and differentiable for all real z ($C^{1}$ continuous). The first author to publish this approximation was Andrew Robbins, in the paper listed in the references below.
This version of the super-logarithm allows for basic calculus operations to be performed on the super-logarithm, without requiring a large amount of solving beforehand. Using this method, basic investigation of the properties of the super-logarithm and tetration can be performed with a small amount of computational overhead.
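A corresponding sketch of the quadratic approximation, under the same assumptions as the linear version above (hypothetical function name, base $b>e^{1/e}$):

import math

def slog_quadratic(z, b=math.e):
    # Quadratic (C^1) approximation to the base-b super-logarithm,
    # using the piecewise formula above on the critical interval (0, 1].
    lnb = math.log(b)
    if z <= 0:
        return slog_quadratic(b ** z, b) - 1
    elif z <= 1:
        return -1 + (2 * lnb / (1 + lnb)) * z + ((1 - lnb) / (1 + lnb)) * z ** 2
    else:
        return slog_quadratic(math.log(z, b), b) + 1

Note that for $b=e$ the quadratic coefficient vanishes (since $\log(b)=1$) and the formula reduces to the linear approximation on the critical interval; for other bases the two approximations differ there.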
Approaches to the Abel function
Main article: Abel function
The Abel function is any function that satisfies Abel's functional equation:
$A_{f}(f(x))=A_{f}(x)+1$
Given an Abel function $A_{f}(x)$, another solution can be obtained by adding any constant: $A'_{f}(x)=A_{f}(x)+c$. Thus, given the normalization $\operatorname {slog} _{b}(1)=0$ together with a third special property (which differs between approaches), the Abel function of the exponential function can be uniquely determined.
Properties
Other equations that the super-logarithm satisfies are:
$\operatorname {slog} _{b}(z)=\operatorname {slog} _{b}(\log _{b}(z))+1$
$\operatorname {slog} _{b}(z)\geq -2$ for all real z
Probably the first example of a mathematical problem where the solution is expressed in terms of super-logarithms, is the following:
Consider oriented graphs with N nodes such that an oriented path from node i to node j exists if and only if $i>j.$ If the length of all such paths is at most k edges, then the minimum possible total number of edges is:
$\Theta (N^{2})$ for $k=1$
$\Theta (N\log N)$ for $k=2$
$\Theta (N\log \log N)$ for $k=3$
$\Theta (N\operatorname {slog} N)$ for $k=4$ and $k=5$
(M. I. Grinchuk, 1986;[1] cases $k>5$ require super-super-logarithms, super-super-super-logarithms etc.)
Super-logarithm as inverse of tetration
As tetration (or super-exponential) ${\rm {sexp}}_{b}(z):={{^{z}}b}$ is suspected to be an analytic function,[2] at least for some values of $~b~$, the inverse function ${\rm {slog}}_{b}={\rm {sexp}}_{b}^{-1}$ may also be analytic. The behavior of $~{\rm {slog}}_{b}(z)~$, defined in this way, in the complex $~z~$ plane is sketched in Figure 1 for the case $~b=e~$. Levels of integer values of the real part and integer values of the imaginary part of the slog function are shown with thick lines. If the existence and uniqueness of the analytic extension of tetration is provided by the condition of its asymptotic approach to the fixed points $L\approx 0.318+1.337{\!~{\rm {i}}}$ and $L^{*}\approx 0.318-1.337{\!~{\rm {i}}}$ of $L=\ln(L)$[3] in the upper and lower parts of the complex plane, then the inverse function should also be unique. Such a function is real at the real axis. It has two branch points at $~z=L~$ and $~z=L^{*}$. It approaches its limiting value $-2$ in the vicinity of the negative part of the real axis (all of the strip between the cuts shown with pink lines in the figure), and grows slowly along the positive direction of the real axis. As the derivative at the real axis is positive, the imaginary part of slog remains positive just above the real axis and negative just below it. The existence, uniqueness and generalizations are under discussion.[4]
See also
• Iterated logarithm
• Tetration
References
1. М. И. Гринчук, О сложности реализации последовательности треугольных булевых матриц вентильными схемами различной глубины, in: Методы дискретного анализа в синтезе управляющих систем, 44 (1986), pp. 3—23.
2. Peter Walker (1991). "Infinitely Differentiable Generalized Logarithmic and Exponential Functions". Mathematics of Computation. American Mathematical Society. 57 (196): 723–733. doi:10.2307/2938713. JSTOR 2938713.
3. H.Kneser (1950). "Reelle analytische Losungen der Gleichung $\varphi {\Big (}\varphi (x){\Big )}={\rm {e}}^{x}$ und verwandter Funktionalgleichungen". Journal für die reine und angewandte Mathematik. 187: 56–67. doi:10.1515/crll.1950.187.56. S2CID 118114436.
4. Tetration forum, http://math.eretrandre.org/tetrationforum/index.php
• Ioannis Galidakis, Mathematics, published online (accessed Nov 2007).
• W. Neville Holmes, Composite Arithmetic: Proposal for a New Standard, IEEE Computer Society Press, vol. 30, no. 3, pp. 65–73, 1997.
• Robert Munafo, Large Numbers at MROB, published online (accessed Nov 2007).
• C. A. Rubtsov and G. F. Romerio, Ackermann's Function and New Arithmetical Operation, published online (accessed Nov 2007).
• Andrew Robbins, Solving for the Analytic Piecewise Extension of Tetration and the Super-logarithm, published online (accessed Nov 2007).
• George Szekeres, Abel's equation and regular growth: variations on a theme by Abel, Experiment. Math. Volume 7, Issue 2 (1998), 85–100.
• Peter Walker, Infinitely Differentiable Generalized Logarithmic and Exponential Functions, Mathematics of Computation, Vol. 57, No. 196 (Oct., 1991), pp. 723–733.
External links
• Rubstov and Romerio, Hyper-operations Thread 1
• Rubstov and Romerio, Hyper-operations Thread 2
Hyperoperations
Primary
• Successor (0)
• Addition (1)
• Multiplication (2)
• Exponentiation (3)
• Tetration (4)
• Pentation (5)
Inverse for left argument
• Predecessor (0)
• Subtraction (1)
• Division (2)
• Root extraction (3)
• Super-root (4)
Inverse for right argument
• Predecessor (0)
• Subtraction (1)
• Division (2)
• Logarithm (3)
• Super-logarithm (4)
Related articles
• Ackermann function
• Conway chained arrow notation
• Grzegorczyk hierarchy
• Knuth's up-arrow notation
• Steinhaus–Moser notation
| Wikipedia |
Super-prime
Super-prime numbers, also known as higher-order primes or prime-indexed primes (PIPs), are the subsequence of prime numbers that occupy prime-numbered positions within the sequence of all prime numbers.
The subsequence begins
3, 5, 11, 17, 31, 41, 59, 67, 83, 109, 127, 157, 179, 191, 211, 241, 277, 283, 331, 353, 367, 401, 431, 461, 509, 547, 563, 587, 599, 617, 709, 739, 773, 797, 859, 877, 919, 967, 991, ... (sequence A006450 in the OEIS).
That is, if p(n) denotes the nth prime number, the numbers in this sequence are those of the form p(p(n)).
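For illustration, a short Python sketch (assuming the SymPy library, whose prime(n) returns the nth prime with 1-based indexing) that generates the sequence by composing p with itself:

from sympy import prime   # prime(n) is the nth prime, 1-indexed

def super_primes(count):
    # Prime-indexed primes p(p(n)) for n = 1, 2, ..., count.
    return [prime(prime(n)) for n in range(1, count + 1)]

print(super_primes(10))   # [3, 5, 11, 17, 31, 41, 59, 67, 83, 109]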
Dressler & Parker (1975) used a computer-aided proof (based on calculations involving the subset sum problem) to show that every integer greater than 96 may be represented as a sum of distinct super-prime numbers. Their proof relies on a result resembling Bertrand's postulate, stating that (after the larger gap between super-primes 5 and 11) each super-prime number is less than twice its predecessor in the sequence.
Broughan & Barnett (2009) show that there are
${\frac {x}{(\log x)^{2}}}+O\left({\frac {x\log \log x}{(\log x)^{3}}}\right)$
super-primes up to x. This can be used to show that the set of all super-primes is small.
One can also define "higher-order" primeness in much the same way and obtain analogous sequences of primes (Fernandez 1999).
A variation on this theme is the sequence of prime numbers with palindromic prime indices, beginning with
3, 5, 11, 17, 31, 547, 739, 877, 1087, 1153, 2081, 2381, ... (sequence A124173 in the OEIS).
References
• Bayless, Jonathan; Klyve, Dominic; Oliveira e Silva, Tomás (2013), "New bounds and computations on prime-indexed primes", Integers, 13: A43:1–A43:21, MR 3097157
• Broughan, Kevin A.; Barnett, A. Ross (2009), "On the subsequence of primes having prime subscripts", Journal of Integer Sequences, 12, article 09.2.3.
• Dressler, Robert E.; Parker, S. Thomas (1975), "Primes with a prime subscript", Journal of the ACM, 22 (3): 380–381, doi:10.1145/321892.321900, MR 0376599.
• Fernandez, Neil (1999), An order of primeness, F(p).
External links
• A Russian programming contest problem related to the work of Dressler and Parker
Prime number classes
By formula
• Fermat (22n + 1)
• Mersenne (2p − 1)
• Double Mersenne (22p−1 − 1)
• Wagstaff (2p + 1)/3
• Proth (k·2n + 1)
• Factorial (n! ± 1)
• Primorial (pn# ± 1)
• Euclid (pn# + 1)
• Pythagorean (4n + 1)
• Pierpont (2m·3n + 1)
• Quartan (x4 + y4)
• Solinas (2m ± 2n ± 1)
• Cullen (n·2n + 1)
• Woodall (n·2n − 1)
• Cuban (x3 − y3)/(x − y)
• Leyland (xy + yx)
• Thabit (3·2n − 1)
• Williams ((b−1)·bn − 1)
• Mills (⌊A3n⌋)
By integer sequence
• Fibonacci
• Lucas
• Pell
• Newman–Shanks–Williams
• Perrin
• Partitions
• Bell
• Motzkin
By property
• Wieferich (pair)
• Wall–Sun–Sun
• Wolstenholme
• Wilson
• Lucky
• Fortunate
• Ramanujan
• Pillai
• Regular
• Strong
• Stern
• Supersingular (elliptic curve)
• Supersingular (moonshine theory)
• Good
• Super
• Higgs
• Highly cototient
• Unique
Base-dependent
• Palindromic
• Emirp
• Repunit (10n − 1)/9
• Permutable
• Circular
• Truncatable
• Minimal
• Delicate
• Primeval
• Full reptend
• Unique
• Happy
• Self
• Smarandache–Wellin
• Strobogrammatic
• Dihedral
• Tetradic
Patterns
• Twin (p, p + 2)
• Bi-twin chain (n ± 1, 2n ± 1, 4n ± 1, …)
• Triplet (p, p + 2 or p + 4, p + 6)
• Quadruplet (p, p + 2, p + 6, p + 8)
• k-tuple
• Cousin (p, p + 4)
• Sexy (p, p + 6)
• Chen
• Sophie Germain/Safe (p, 2p + 1)
• Cunningham (p, 2p ± 1, 4p ± 3, 8p ± 7, ...)
• Arithmetic progression (p + a·n, n = 0, 1, 2, 3, ...)
• Balanced (consecutive p − n, p, p + n)
By size
• Mega (1,000,000+ digits)
• Largest known
• list
Complex numbers
• Eisenstein prime
• Gaussian prime
Composite numbers
• Pseudoprime
• Catalan
• Elliptic
• Euler
• Euler–Jacobi
• Fermat
• Frobenius
• Lucas
• Somer–Lucas
• Strong
• Carmichael number
• Almost prime
• Semiprime
• Sphenic number
• Interprime
• Pernicious
Related topics
• Probable prime
• Industrial-grade prime
• Illegal prime
• Formula for primes
• Prime gap
First 60 primes
• 2
• 3
• 5
• 7
• 11
• 13
• 17
• 19
• 23
• 29
• 31
• 37
• 41
• 43
• 47
• 53
• 59
• 61
• 67
• 71
• 73
• 79
• 83
• 89
• 97
• 101
• 103
• 107
• 109
• 113
• 127
• 131
• 137
• 139
• 149
• 151
• 157
• 163
• 167
• 173
• 179
• 181
• 191
• 193
• 197
• 199
• 211
• 223
• 227
• 229
• 233
• 239
• 241
• 251
• 257
• 263
• 269
• 271
• 277
• 281
List of prime numbers
| Wikipedia |
Lie superalgebra
In mathematics, a Lie superalgebra is a generalisation of a Lie algebra to include a Z2‑grading. Lie superalgebras are important in theoretical physics where they are used to describe the mathematics of supersymmetry. In most of these theories, the even elements of the superalgebra correspond to bosons and odd elements to fermions (but this is not always true; for example, the BRST supersymmetry is the other way around).
Definition
Formally, a Lie superalgebra is a nonassociative Z2-graded algebra, or superalgebra, over a commutative ring (typically R or C) whose product [·, ·], called the Lie superbracket or supercommutator, satisfies the two conditions (analogs of the usual Lie algebra axioms, with grading):
Super skew-symmetry:
$[x,y]=-(-1)^{|x||y|}[y,x].\ $
The super Jacobi identity:[1]
$(-1)^{|x||z|}[x,[y,z]]+(-1)^{|y||x|}[y,[z,x]]+(-1)^{|z||y|}[z,[x,y]]=0,$
where x, y, and z are pure in the Z2-grading. Here, |x| denotes the degree of x (either 0 or 1). The degree of [x,y] is the sum of degree of x and y modulo 2.
One also sometimes adds the axioms $[x,x]=0$ for |x| = 0 (if 2 is invertible this follows automatically) and $[[x,x],x]=0$ for |x| = 1 (if 3 is invertible this follows automatically). When the ground ring is the integers or the Lie superalgebra is a free module, these conditions are equivalent to the condition that the Poincaré–Birkhoff–Witt theorem holds (and, in general, they are necessary conditions for the theorem to hold).
Just as for Lie algebras, the universal enveloping algebra of the Lie superalgebra can be given a Hopf algebra structure.
A graded Lie algebra (say, graded by Z or N) that is anticommutative and Jacobi in the graded sense also has a $Z_{2}$ grading (which is called "rolling up" the algebra into odd and even parts), but is not referred to as "super". See note at graded Lie algebra for discussion.
Properties
Let ${\mathfrak {g}}={\mathfrak {g}}_{0}\oplus {\mathfrak {g}}_{1}$ be a Lie superalgebra. By inspecting the Jacobi identity, one sees that there are eight cases depending on whether arguments are even or odd. These fall into four classes, indexed by the number of odd elements:[2]
1. No odd elements. The statement is just that ${\mathfrak {g}}_{0}$ is an ordinary Lie algebra.
2. One odd element. Then ${\mathfrak {g}}_{1}$ is a ${\mathfrak {g}}_{0}$-module for the action $\mathrm {ad} _{a}:b\rightarrow [a,b],\quad a\in {\mathfrak {g}}_{0},\quad b,[a,b]\in {\mathfrak {g}}_{1}$.
3. Two odd elements. The Jacobi identity says that the bracket ${\mathfrak {g}}_{1}\otimes {\mathfrak {g}}_{1}\rightarrow {\mathfrak {g}}_{0}$ is a symmetric ${\mathfrak {g}}_{0}$-equivariant map.
4. Three odd elements. For all $b\in {\mathfrak {g}}_{1}$, $[b,[b,b]]=0$.
Thus the even subalgebra ${\mathfrak {g}}_{0}$ of a Lie superalgebra forms a (normal) Lie algebra as all the signs disappear, and the superbracket becomes a normal Lie bracket, while ${\mathfrak {g}}_{1}$ is a linear representation of ${\mathfrak {g}}_{0}$, and there exists a symmetric ${\mathfrak {g}}_{0}$-equivariant linear map $\{\cdot ,\cdot \}:{\mathfrak {g}}_{1}\otimes {\mathfrak {g}}_{1}\rightarrow {\mathfrak {g}}_{0}$ such that,
$[\left\{x,y\right\},z]+[\left\{y,z\right\},x]+[\left\{z,x\right\},y]=0,\quad x,y,z\in {\mathfrak {g}}_{1}.$
Conditions (1)–(3) are linear and can all be understood in terms of ordinary Lie algebras. Condition (4) is nonlinear, and is the most difficult one to verify when constructing a Lie superalgebra starting from an ordinary Lie algebra (${\mathfrak {g}}_{0}$) and a representation (${\mathfrak {g}}_{1}$).
Involution
A ∗ Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map from itself to itself which respects the Z2 grading and satisfies [x,y]* = [y*,x*] for all x and y in the Lie superalgebra. (Some authors prefer the convention [x,y]* = (−1)|x||y|[y*,x*]; changing * to −* switches between the two conventions.) Its universal enveloping algebra would be an ordinary *-algebra.
Examples
Given any associative superalgebra $A$ one can define the supercommutator on homogeneous elements by
$[x,y]=xy-(-1)^{|x||y|}yx\ $
and then extending by linearity to all elements. The algebra $A$ together with the supercommutator then becomes a Lie superalgebra. The simplest example of this procedure is perhaps when $A$ is the space $\mathbf {End} (V)$ of all linear maps from a super vector space $V$ to itself. When $V=\mathbb {K} ^{p|q}$, this space is denoted by $M^{p|q}$ or $M(p|q)$.[3] With the Lie bracket per above, the space is denoted ${\mathfrak {gl}}(p|q)$.[4]
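A small numerical sketch of this construction for ${\mathfrak {gl}}(p|q)$, with block matrices over the reals; the helper names are ours, and homogeneity is read off from the block structure (diagonal blocks even, off-diagonal blocks odd):

import numpy as np

def degree(m, p, q):
    # Z_2-degree of a homogeneous element of gl(p|q): 0 if only the diagonal
    # p x p and q x q blocks are nonzero, 1 if only the off-diagonal blocks are.
    diag = np.zeros_like(m)
    diag[:p, :p], diag[p:, p:] = m[:p, :p], m[p:, p:]
    off = m - diag
    if not off.any():
        return 0
    if not diag.any():
        return 1
    raise ValueError("matrix is not homogeneous")

def supercommutator(x, y, p, q):
    # [x, y] = x y - (-1)^{|x||y|} y x on homogeneous elements.
    sign = (-1) ** (degree(x, p, q) * degree(y, p, q))
    return x @ y - sign * (y @ x)

# Two odd elements of gl(1|1): their supercommutator is even (an anticommutator).
x = np.array([[0.0, 1.0], [0.0, 0.0]])
y = np.array([[0.0, 0.0], [1.0, 0.0]])
print(supercommutator(x, y, 1, 1))   # the 2x2 identity matrix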
The Whitehead product on homotopy groups gives many examples of Lie superalgebras over the integers.
The super-Poincaré algebra generates the isometries of flat superspace.
Classification
The simple complex finite-dimensional Lie superalgebras were classified by Victor Kac.
They are (excluding the Lie algebras):[5]
The special linear Lie superalgebra ${\mathfrak {sl}}(m|n)$.
The Lie superalgebra ${\mathfrak {sl}}(m|n)$ is the subalgebra of ${\mathfrak {gl}}(m|n)$ consisting of matrices with supertrace zero. It is simple when $m\not =n$. If $m=n$, then the identity matrix $I_{2m}$ generates an ideal. Quotienting out this ideal leads to ${\mathfrak {sl}}(m|m)/\langle I_{2m}\rangle $, which is simple for $m\geq 2$.
The orthosymplectic Lie superalgebra ${\mathfrak {osp}}(m|2n)$.
Consider an even, non-degenerate, supersymmetric bilinear form $\langle \cdot ,\cdot \rangle $ on $\mathbb {C} ^{m|2n}$. Then the orthosymplectic Lie superalgebra is the subalgebra of ${\mathfrak {gl}}(m|2n)$ consisting of matrices that leave this form invariant:
${\mathfrak {osp}}(m|2n)=\{X\in {\mathfrak {gl}}(m|2n)\mid \langle Xu,v\rangle +(-1)^{|X||u|}\langle u,Xv\rangle =0{\text{ for all }}u,v\in \mathbb {C} ^{m|2n}\}.$
Its even part is given by ${\mathfrak {so}}(m)\oplus {\mathfrak {sp}}(2n)$.
The exceptional Lie superalgebra $D(2,1;\alpha )$.
There is a family of (9∣8)-dimensional Lie superalgebras depending on a parameter $\alpha $. These are deformations of $D(2,1)={\mathfrak {osp}}(4|2)$. If $\alpha \not =0$ and $\alpha \not =-1$, then $D(2,1;\alpha )$ is simple. Moreover, $D(2,1;\alpha )\cong D(2,1;\beta )$ if $\alpha $ and $\beta $ are in the same orbit under the maps $\alpha \mapsto \alpha ^{-1}$ and $\alpha \mapsto -1-\alpha $.
The exceptional Lie superalgebra $F(4)$.
It has dimension (24|16). Its even part is given by ${\mathfrak {sl}}(2)\oplus {\mathfrak {so}}(7)$.
The exceptional Lie superalgebra $G(3)$.
It has dimension (17|14). Its even part is given by ${\mathfrak {sl}}(2)\oplus G_{2}$.
There are also two so-called strange series called ${\mathfrak {pe}}(n)$ and ${\mathfrak {q}}(n)$.
The Cartan types. They can be divided in four families: $W(n)$, $S(n)$, ${\widetilde {S}}(2n)$ and $H(n)$. For the Cartan type of simple Lie superalgebras, the odd part is no longer completely reducible under the action of the even part.
Classification of infinite-dimensional simple linearly compact Lie superalgebras
The classification consists of the 10 series W(m, n), S(m, n) ((m, n) ≠ (1, 1)), H(2m, n), K(2m + 1, n), HO(m, m) (m ≥ 2), SHO(m, m) (m ≥ 3), KO(m, m + 1), SKO(m, m + 1; β) (m ≥ 2), SHO ∼ (2m, 2m), SKO ∼ (2m + 1, 2m + 3) and the five exceptional algebras:
E(1, 6), E(5, 10), E(4, 4), E(3, 6), E(3, 8)
The last two are particularly interesting (according to Kac) because they have the standard model gauge group SU(3)×SU(2)×U(1) as their zero level algebra. Infinite-dimensional (affine) Lie superalgebras are important symmetries in superstring theory. Specifically, the Virasoro algebras with ${\mathcal {N}}$ supersymmetries are $K(1,{\mathcal {N}})$ which only have central extensions up to ${\mathcal {N}}=4$.[6]
Category-theoretic definition
In category theory, a Lie superalgebra can be defined as a nonassociative superalgebra whose product satisfies
• $[\cdot ,\cdot ]\circ ({\operatorname {id} }+\tau _{A,A})=0$
• $[\cdot ,\cdot ]\circ ([\cdot ,\cdot ]\otimes {\operatorname {id} })\circ ({\operatorname {id} }+\sigma +\sigma ^{2})=0$
where σ is the cyclic permutation braiding $({\operatorname {id} }\otimes \tau _{A,A})\circ (\tau _{A,A}\otimes {\operatorname {id} })$. In diagrammatic form:
See also
• Gerstenhaber algebra
• Anyonic Lie algebra
• Grassmann algebra
• Representation of a Lie superalgebra
• Superspace
• Supergroup
• Universal enveloping algebra
Notes
1. Freund 1983, p. 8
2. Varadarajan 2004, p. 89
3. Varadarajan 2004, p. 87
4. Varadarajan 2004, p. 90
5. Cheng S.-J. ;Wang W. (2012). Dualities and representations of Lie superalgebras. Providence, Rhode Island. p. 12. ISBN 978-0-8218-9118-6. OCLC 809925982.{{cite book}}: CS1 maint: location missing publisher (link) CS1 maint: multiple names: authors list (link)
6. Kac 2010
References
• Cheng, S.-J.; Wang, W. (2012). Dualities and Representations of Lie Superalgebras. Graduate Studies in Mathematics. Vol. 144. pp. 302pp. ISBN 978-0-8218-9118-6.
• Freund, P. G. O. (1983). Introduction to supersymmetry. Cambridge Monographs on Mathematical Physics. Cambridge University Press. doi:10.1017/CBO9780511564017. ISBN 978-0521-356-756.
• Grozman, P.; Leites, D.; Shchepochkina, I. (2005). "Lie Superalgebras of String Theories". Acta Mathematica Vietnamica. 26 (2005): 27–63. arXiv:hep-th/9702120. Bibcode:1997hep.th....2120G.
• Kac, V. G. (1977). "Lie superalgebras". Advances in Mathematics. 26 (1): 8–96. doi:10.1016/0001-8708(77)90017-2.
• Kac, V. G. (2010). "Classification of Infinite-Dimensional Simple Groups of Supersymmetries and Quantum Field Theory". Visions in Mathematics: 162–183. arXiv:math/9912235. doi:10.1007/978-3-0346-0422-2_6. ISBN 978-3-0346-0421-5. S2CID 15597378.
• Manin, Y. I. (1997). Gauge Field Theory and Complex Geometry ((2nd ed.) ed.). Berlin: Springer. ISBN 978-3-540-61378-7.
• Musson, I. M. (2012). Lie Superalgebras and Enveloping Algebras. Graduate Studies in Mathematics. Vol. 131. pp. 488 pp. ISBN 978-0-8218-6867-6.
• Varadarajan, V. S. (2004). Supersymmetry for Mathematicians: An Introduction. Courant Lecture Notes in Mathematics. Vol. 11. American Mathematical Society. ISBN 978-0-8218-3574-6.
Historical
• Frölicher, A.; Nijenhuis, A. (1956). "Theory of vector valued differential forms. Part I". Indagationes Mathematicae. 59: 338–350. doi:10.1016/S1385-7258(56)50046-7..
• Gerstenhaber, M. (1963). "The cohomology structure of an associative ring". Annals of Mathematics. 78 (2): 267–288. doi:10.2307/1970343. JSTOR 1970343.
• Gerstenhaber, M. (1964). "On the Deformation of Rings and Algebras". Annals of Mathematics. 79 (1): 59–103. doi:10.2307/1970484. JSTOR 1970484.
• Milnor, J. W.; Moore, J. C. (1965). "On the structure of Hopf algebras". Annals of Mathematics. 81 (2): 211–264. doi:10.2307/1970615. JSTOR 1970615.
External links
• Irving Kaplansky + Lie Superalgebras
Supersymmetry
General topics
• Supersymmetry
• Supersymmetric gauge theory
• Supersymmetric quantum mechanics
• Supergravity
• Superstring theory
• Super vector space
• Supergeometry
Supermathematics
• Superalgebra
• Lie superalgebra
• Super-Poincaré algebra
• Superconformal algebra
• Supersymmetry algebra
• Supergroup
• Superspace
• Harmonic superspace
• Super Minkowski space
• Supermanifold
Concepts
• Supercharge
• R-symmetry
• Supermultiplet
• Short supermultiplet
• BPS state
• Superpotential
• D-term
• FI D-term
• F-term
• Moduli space
• Supersymmetry breaking
• Konishi anomaly
• Seiberg duality
• Seiberg–Witten theory
• Witten index
• Wess–Zumino gauge
• Localization
• Mu problem
• Little hierarchy problem
• Electric–magnetic duality
Theorems
• Coleman–Mandula
• Haag–Łopuszański–Sohnius
• Nonrenormalization
Field theories
• Wess–Zumino
• N = 1 super Yang–Mills
• N = 4 super Yang–Mills
• Super QCD
• MSSM
• NMSSM
• 6D (2,0) superconformal
• ABJM superconformal
Supergravity
• Pure 4D N = 1 supergravity
• N = 8 supergravity
• Higher dimensional
• Gauged supergravity
Superpartners
• Axino
• Chargino
• Gaugino
• Goldstino
• Graviphoton
• Graviscalar
• Higgsino
• LSP
• Neutralino
• R-hadron
• Sfermion
• Sgoldstino
• Stop squark
• Superghost
Researchers
• Affleck
• Bagger
• Batchelor
• Berezin
• Dine
• Fayet
• Gates
• Golfand
• Iliopoulos
• Montonen
• Olive
• Salam
• Seiberg
• Siegel
• Roček
• Rogers
• Wess
• Witten
• Zumino
String theory
Background
• Strings
• Cosmic strings
• History of string theory
• First superstring revolution
• Second superstring revolution
• String theory landscape
Theory
• Nambu–Goto action
• Polyakov action
• Bosonic string theory
• Superstring theory
• Type I string
• Type II string
• Type IIA string
• Type IIB string
• Heterotic string
• N=2 superstring
• F-theory
• String field theory
• Matrix string theory
• Non-critical string theory
• Non-linear sigma model
• Tachyon condensation
• RNS formalism
• GS formalism
String duality
• T-duality
• S-duality
• U-duality
• Montonen–Olive duality
Particles and fields
• Graviton
• Dilaton
• Tachyon
• Ramond–Ramond field
• Kalb–Ramond field
• Magnetic monopole
• Dual graviton
• Dual photon
Branes
• D-brane
• NS5-brane
• M2-brane
• M5-brane
• S-brane
• Black brane
• Black holes
• Black string
• Brane cosmology
• Quiver diagram
• Hanany–Witten transition
Conformal field theory
• Virasoro algebra
• Mirror symmetry
• Conformal anomaly
• Conformal algebra
• Superconformal algebra
• Vertex operator algebra
• Loop algebra
• Kac–Moody algebra
• Wess–Zumino–Witten model
Gauge theory
• Anomalies
• Instantons
• Chern–Simons form
• Bogomol'nyi–Prasad–Sommerfield bound
• Exceptional Lie groups (G2, F4, E6, E7, E8)
• ADE classification
• Dirac string
• p-form electrodynamics
Geometry
• Worldsheet
• Kaluza–Klein theory
• Compactification
• Why 10 dimensions?
• Kähler manifold
• Ricci-flat manifold
• Calabi–Yau manifold
• Hyperkähler manifold
• K3 surface
• G2 manifold
• Spin(7)-manifold
• Generalized complex manifold
• Orbifold
• Conifold
• Orientifold
• Moduli space
• Hořava–Witten theory
• K-theory (physics)
• Twisted K-theory
Supersymmetry
• Supergravity
• Superspace
• Lie superalgebra
• Lie supergroup
Holography
• Holographic principle
• AdS/CFT correspondence
M-theory
• Matrix theory
• Introduction to M-theory
String theorists
• Aganagić
• Arkani-Hamed
• Atiyah
• Banks
• Berenstein
• Bousso
• Cleaver
• Curtright
• Dijkgraaf
• Distler
• Douglas
• Duff
• Dvali
• Ferrara
• Fischler
• Friedan
• Gates
• Gliozzi
• Gopakumar
• Green
• Greene
• Gross
• Gubser
• Gukov
• Guth
• Hanson
• Harvey
• 't Hooft
• Hořava
• Gibbons
• Kachru
• Kaku
• Kallosh
• Kaluza
• Kapustin
• Klebanov
• Knizhnik
• Kontsevich
• Klein
• Linde
• Maldacena
• Mandelstam
• Marolf
• Martinec
• Minwalla
• Moore
• Motl
• Mukhi
• Myers
• Nanopoulos
• Năstase
• Nekrasov
• Neveu
• Nielsen
• van Nieuwenhuizen
• Novikov
• Olive
• Ooguri
• Ovrut
• Polchinski
• Polyakov
• Rajaraman
• Ramond
• Randall
• Randjbar-Daemi
• Roček
• Rohm
• Sagnotti
• Scherk
• Schwarz
• Seiberg
• Sen
• Shenker
• Siegel
• Silverstein
• Sơn
• Staudacher
• Steinhardt
• Strominger
• Sundrum
• Susskind
• Townsend
• Trivedi
• Turok
• Vafa
• Veneziano
• Verlinde
• Verlinde
• Wess
• Witten
• Yau
• Yoneya
• Zamolodchikov
• Zamolodchikov
• Zaslow
• Zumino
• Zwiebach
Authority control
International
• FAST
National
• France
• BnF data
• Germany
• Israel
• United States
Other
• IdRef
| Wikipedia |
Super PI
Super PI is a computer program that calculates pi to a specified number of digits after the decimal point—up to a maximum of 32 million. It uses the Gauss–Legendre algorithm and is a Windows port of the program used by Yasumasa Kanada in 1995 to compute pi to $2^{32}$ digits.
Super PI
Operating system: Windows
Type: Benchmark
Website: http://www.superpi.net/
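For reference, here is a brief Python sketch of the Gauss–Legendre (arithmetic–geometric mean) iteration mentioned above, using the standard library's decimal module; this is only an illustration of the algorithm, not the Super PI implementation:

from decimal import Decimal, getcontext

def gauss_legendre_pi(digits):
    # Each iteration of the arithmetic-geometric mean roughly doubles
    # the number of correct digits.
    getcontext().prec = digits + 10              # working precision with guard digits
    a, b = Decimal(1), Decimal(1) / Decimal(2).sqrt()
    t, p = Decimal("0.25"), Decimal(1)
    for _ in range(digits.bit_length() + 1):     # about log2(digits) iterations suffice
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2 * p
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(50))   # 3.14159265358979323846... (at least 50 correct digits)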
Significance
Super PI is popular in the overclocking community, both as a benchmark to test the performance of these systems[1][2] and as a stress test to check that they are still functioning correctly.[3]
Credibility concerns
The competitive nature of achieving the best Super PI calculation times led to fraudulent Super PI results, reporting calculation times faster than normal. Attempts to counter the fraudulent results resulted in a modified version of Super PI, with a checksum to validate the results. However, other methods exist of producing inaccurate or fake time results, raising questions about the program's future as an overclocking benchmark.
Super PI utilizes x87 floating-point instructions, which are supported on all x86 and x86-64 processors; current versions of these processors also support the lower-precision Streaming SIMD Extensions vector instructions.
The future
Super PI is single-threaded, so its relevance as a measure of performance in the current era of multi-core processors is diminishing quickly. wPrime was therefore developed to run multiple threaded calculations at the same time, so that stability can be tested on multi-core machines. Other multithreaded programs include Hyper PI, IntelBurnTest, Prime95, Montecarlo superPI, OCCT and y-cruncher. Finally, while Super PI is unable to calculate more than 32 million digits, Alexander J. Yee and Shigeru Kondo were able to set a record of 10 trillion and 50 digits of pi using y-cruncher on a computer with two Intel Xeon X5680 processors at 3.33 GHz (12 physical cores, 24 hyperthreaded) on October 16, 2011.[4] Super PI is also much slower than these other programs and uses inferior algorithms.
References
1. Maekinen, Sami (2006), CPU & GPU Overclocking Guide (PDF), ATI Technologies Inc.
2. Martinović, G.; Balen, J.; Rimac-Drlje, S. (2010), "Impact of the host operating systems on virtual machine performance", 2010 Proceedings of the 33rd International Convention MIPRO, IEEE, pp. 613–618.
3. Sanchez, Ernesto; Squillero, Giovanni; Tonda, Alberto (2011), "Evolutionary Failing-test Generation for Modern Microprocessors" (PDF), Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '11), New York, NY, USA: ACM, pp. 225–226, doi:10.1145/2001858.2001985, ISBN 978-1-4503-0690-4, S2CID 17401803.
4. Round 2... 10 Trillion Digits of Pi, numberworld.org
External links
• "Kanada Laboratory". Archived from the original on Mar 24, 2019. Formerly with link to ftp site with version of the program for different operating systems.
• "Single-threaded Computer Benchmark | SuperPI". Super Pi. Retrieved 2019-12-22. wPrime Systems version.
| Wikipedia |
Super-Poincaré algebra
In theoretical physics, a super-Poincaré algebra is an extension of the Poincaré algebra to incorporate supersymmetry, a relation between bosons and fermions. They are examples of supersymmetry algebras (without central charges or internal symmetries), and are Lie superalgebras. Thus a super-Poincaré algebra is a Z2-graded vector space with a graded Lie bracket such that the even part is a Lie algebra containing the Poincaré algebra, and the odd part is built from spinors on which there is an anticommutation relation with values in the even part.
Informal sketch
The Poincaré algebra describes the isometries of Minkowski spacetime. From the representation theory of the Lorentz group, it is known that the Lorentz group admits two inequivalent complex spinor representations, dubbed $2$ and ${\overline {2}}$.[nb 1] Taking their tensor product, one obtains $2\otimes {\overline {2}}=3\oplus 1$; such decompositions of tensor products of representations into direct sums is given by the Littlewood–Richardson rule.
Normally, one treats such a decomposition as relating to specific particles: so, for example, the pion, which is a chiral vector particle, is composed of a quark-anti-quark pair. However, one could also identify $3\oplus 1$ with Minkowski spacetime itself. This leads to a natural question: if Minkowski space-time belongs to the adjoint representation, then can Poincaré symmetry be extended to the fundamental representation? Well, it can: this is exactly the super-Poincaré algebra. There is a corresponding experimental question: if we live in the adjoint representation, then where is the fundamental representation hiding? This is the program of supersymmetry, which has not been found experimentally.
History
The super-Poincaré algebra was first proposed in the context of the Haag–Łopuszański–Sohnius theorem, as a means of avoiding the conclusions of the Coleman–Mandula theorem. That is, the Coleman–Mandula theorem is a no-go theorem that states that the Poincaré algebra cannot be extended with additional symmetries that might describe the internal symmetries of the observed physical particle spectrum. However, the Coleman–Mandula theorem assumed that the algebra extension would be by means of a commutator; this assumption, and thus the theorem, can be avoided by considering the anti-commutator, that is, by employing anti-commuting Grassmann numbers. The proposal was to consider a supersymmetry algebra, defined as the semidirect product of a central extension of the super-Poincaré algebra by a compact Lie algebra of internal symmetries.
Definition
The simplest supersymmetric extension of the Poincaré algebra contains two Weyl spinors with the following anti-commutation relation:
$\{Q_{\alpha },{\bar {Q}}_{\dot {\beta }}\}=2{\sigma ^{\mu }}_{\alpha {\dot {\beta }}}P_{\mu }$
and all other anti-commutation relations between the Qs and Ps vanish.[1] The operators $Q_{\alpha },{\bar {Q}}_{\dot {\alpha }}$ are known as supercharges. In the above expression $P_{\mu }$ are the generators of translation and $\sigma ^{\mu }$ are the Pauli matrices. The index $\alpha $ runs over the values $\alpha =1,2.$ A dot is used over the index ${\dot {\beta }}$ to remind that this index transforms according to the inequivalent conjugate spinor representation; one must never accidentally contract these two types of indexes. The Pauli matrices can be considered to be a direct manifestation of the Littlewood–Richardson rule mentioned before: they indicate how the tensor product $2\otimes {\overline {2}}$ of the two spinors can be re-expressed as a vector. The index $\mu $ of course ranges over the space-time dimensions $\mu =0,1,2,3.$
It is convenient to work with Dirac spinors instead of Weyl spinors; a Dirac spinor can be thought of as an element of $2\oplus {\overline {2}}$; it has four components. The Dirac matrices are thus also four-dimensional, and can be expressed as direct sums of the Pauli matrices. The tensor product then gives an algebraic relation to the Minkowski metric $g^{\mu \nu }$ which is expressed as:
$\{\gamma ^{\mu },\gamma ^{\nu }\}=2g^{\mu \nu }$
and
$\sigma ^{\mu \nu }={\frac {i}{2}}\left[\gamma ^{\mu },\gamma ^{\nu }\right]$
This then gives the full algebra[2]
${\begin{aligned}\left[M^{\mu \nu },Q_{\alpha }\right]&={\frac {1}{2}}(\sigma ^{\mu \nu })_{\alpha }^{\;\;\beta }Q_{\beta }\\\left[Q_{\alpha },P^{\mu }\right]&=0\\\{Q_{\alpha },{\bar {Q}}_{\dot {\beta }}\}&=2(\sigma ^{\mu })_{\alpha {\dot {\beta }}}P_{\mu }\\\end{aligned}}$
which are to be combined with the normal Poincaré algebra. It is a closed algebra, since all Jacobi identities are satisfied, and it admits explicit matrix representations. Following this line of reasoning will lead to supergravity.
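The Clifford-algebra relation above can be checked numerically. The following sketch builds the Dirac matrices from Pauli matrices in the Weyl (chiral) representation, one common convention, and assumes the mostly-minus metric signature diag(+1, −1, −1, −1):

import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_3
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac matrices in the Weyl representation, assembled from 2x2 blocks.
gamma = [np.block([[Z2, I2], [I2, Z2]])] + \
        [np.block([[Z2, si], [-si, Z2]]) for si in s]
g = np.diag([1.0, -1.0, -1.0, -1.0])                 # Minkowski metric g^{mu nu}

# Verify {gamma^mu, gamma^nu} = 2 g^{mu nu} * identity.
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * g[mu, nu] * np.eye(4))

# Lorentz generators sigma^{mu nu} = (i/2) [gamma^mu, gamma^nu], e.g. sigma^{12}:
sigma12 = 0.5j * (gamma[1] @ gamma[2] - gamma[2] @ gamma[1])
print(np.round(sigma12, 3))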
Extended supersymmetry
It is possible to add more supercharges. That is, we fix a number which by convention is labelled ${\mathcal {N}}$, and define supercharges $Q_{\alpha }^{I},{\bar {Q}}_{\dot {\alpha }}^{I}$ with $I=1,\cdots ,{\mathcal {N}}.$
These can be thought of as many copies of the original supercharges, and hence satisfy
$[M^{\mu \nu },Q_{\alpha }^{I}]=(\sigma ^{\mu \nu })_{\alpha }{}^{\beta }Q_{\beta }^{I}$
$[P^{\mu },Q_{\alpha }^{I}]=0$
and
$\{Q_{\alpha }^{I},{\bar {Q}}_{\dot {\alpha }}^{J}\}=2\sigma _{\alpha {\dot {\alpha }}}^{\mu }P_{\mu }\delta ^{IJ}$
but can also satisfy
$\{Q_{\alpha }^{I},Q_{\beta }^{J}\}=\epsilon _{\alpha \beta }Z^{IJ}$
and
$\{{\bar {Q}}_{\dot {\alpha }}^{I},{\bar {Q}}_{\dot {\beta }}^{J}\}=\epsilon _{{\dot {\alpha }}{\dot {\beta }}}Z^{\dagger IJ}$
where $Z^{IJ}=-Z^{JI}$ is the central charge.
Super-Poincaré group and superspace
Just as the Poincaré algebra generates the Poincaré group of isometries of Minkowski space, the super-Poincaré algebra, an example of a Lie super-algebra, generates what is known as a supergroup. This can be used to define superspace with ${\mathcal {N}}$ supercharges: these are the right cosets of the Lorentz group within the ${\mathcal {N}}$ super-Poincaré group.
Just as $P_{\mu }$ has the interpretation as being the generator of spacetime translations, the charges $Q_{\alpha }^{I},{\bar {Q}}_{\dot {\alpha }}^{I}$, with $I=1,\cdots ,{\mathcal {N}}$, have the interpretation as generators of superspace translations in the 'spin coordinates' of superspace. That is, we can view superspace as the direct sum of Minkowski space with 'spin dimensions' labelled by coordinates $\theta _{\alpha }^{I},{\bar {\theta }}^{I{\dot {\alpha }}}$. The supercharge $Q_{\alpha }^{I}$ generates translations in the direction labelled by the coordinate $\theta _{\alpha }^{I}.$ By counting, there are $4{\mathcal {N}}$ spin dimensions.
Notation for superspace
The superspace consisting of Minkowski space with ${\mathcal {N}}$ supercharges is therefore labelled $\mathbb {R} ^{1,3|4{\mathcal {N}}}$ or sometimes simply $\mathbb {R} ^{4|4{\mathcal {N}}}$.
SUSY in 3 + 1 Minkowski spacetime
In (3 + 1) Minkowski spacetime, the Haag–Łopuszański–Sohnius theorem states that the SUSY algebra with N spinor generators is as follows.
The even part of the star Lie superalgebra is the direct sum of the Poincaré algebra and a reductive Lie algebra B (such that its self-adjoint part is the tangent space of a real compact Lie group). The odd part of the algebra would be
$\left({\frac {1}{2}},0\right)\otimes V\oplus \left(0,{\frac {1}{2}}\right)\otimes V^{*}$
where $(1/2,0)$ and $(0,1/2)$ are specific representations of the Poincaré algebra. (Compared to the notation used earlier in the article, these correspond ${\overline {2}}\oplus 1$ and $1\oplus 2$, respectively, also see the footnote where the previous notation was introduced). Both components are conjugate to each other under the * conjugation. V is an N-dimensional complex representation of B and V* is its dual representation. The Lie bracket for the odd part is given by a symmetric equivariant pairing {.,.} on the odd part with values in the even part. In particular, its reduced intertwiner from $\left[\left({\frac {1}{2}},0\right)\otimes V\right]\otimes \left[\left(0,{\frac {1}{2}}\right)\otimes V^{*}\right]$ to the ideal of the Poincaré algebra generated by translations is given as the product of a nonzero intertwiner from $\left({\frac {1}{2}},0\right)\otimes \left(0,{\frac {1}{2}}\right)$ to (1/2,1/2) by the "contraction intertwiner" from $V\otimes V^{*}$ to the trivial representation. On the other hand, its reduced intertwiner from $\left[\left({\frac {1}{2}},0\right)\otimes V\right]\otimes \left[\left({\frac {1}{2}},0\right)\otimes V\right]$ is the product of a (antisymmetric) intertwiner from $\left({\frac {1}{2}},0\right)\otimes \left({\frac {1}{2}},0\right)$ to (0,0) and an antisymmetric intertwiner A from $N^{2}$ to B. Conjugate it to get the corresponding case for the other half.
N = 1
B is now ${\mathfrak {u}}(1)$ (called R-symmetry) and V is the 1D representation of ${\mathfrak {u}}(1)$ with charge 1. A (the intertwiner defined above) would have to be zero since it is antisymmetric.
Actually, there are two versions of N=1 SUSY, one without the ${\mathfrak {u}}(1)$ (i.e. B is zero-dimensional) and the other with ${\mathfrak {u}}(1)$.
N = 2
B is now ${\mathfrak {su}}(2)\oplus {\mathfrak {u}}(1)$ and V is the 2D doublet representation of ${\mathfrak {su}}(2)$ with a zero ${\mathfrak {u}}(1)$ charge. Now, A is a nonzero intertwiner to the ${\mathfrak {u}}(1)$ part of B.
Alternatively, V could be a 2D doublet with a nonzero ${\mathfrak {u}}(1)$ charge. In this case, A would have to be zero.
Yet another possibility would be to let B be ${\mathfrak {u}}(1)_{A}\oplus {\mathfrak {u}}(1)_{B}\oplus {\mathfrak {u}}(1)_{C}$. V is invariant under ${\mathfrak {u}}(1)_{B}$ and ${\mathfrak {u}}(1)_{C}$ and decomposes into a 1D rep with ${\mathfrak {u}}(1)_{A}$ charge 1 and another 1D rep with charge -1. The intertwiner A would be complex with the real part mapping to ${\mathfrak {u}}(1)_{B}$ and the imaginary part mapping to ${\mathfrak {u}}(1)_{C}$.
Or we could have B being ${\mathfrak {su}}(2)\oplus {\mathfrak {u}}(1)_{A}\oplus {\mathfrak {u}}(1)_{B}$ with V being the doublet rep of ${\mathfrak {su}}(2)$ with zero ${\mathfrak {u}}(1)$ charges and A being a complex intertwiner with the real part mapping to ${\mathfrak {u}}(1)_{A}$ and the imaginary part to ${\mathfrak {u}}(1)_{B}$.
This doesn't even exhaust all the possibilities. We see that there is more than one N = 2 supersymmetry; likewise, the SUSYs for N > 2 are also not unique (in fact, it only gets worse).
N = 3
It is theoretically allowed, but the multiplet structure automatically becomes the same as that of an N = 4 supersymmetric theory. So it is less often discussed compared to the N = 1, 2, 4 versions.
N = 4
This is the maximal number of supersymmetries in a theory without gravity.
N = 8
This is the maximal number of supersymmetries in any supersymmetric theory. Beyond ${\mathcal {N}}=8$, any massless supermultiplet contains a sector with helicity $\lambda $ such that $|\lambda |>2$. Such theories on Minkowski space must be free (non-interacting).
SUSY in various dimensions
In 0 + 1, 2 + 1, 3 + 1, 4 + 1, 6 + 1, 7 + 1, 8 + 1, and 10 + 1 dimensions, a SUSY algebra is classified by a positive integer N.
In 1 + 1, 5 + 1 and 9 + 1 dimensions, a SUSY algebra is classified by two nonnegative integers (M, N), at least one of which is nonzero. M represents the number of left-handed SUSYs and N represents the number of right-handed SUSYs.
The reason of this has to do with the reality conditions of the spinors.
Hereafter d = 9 means d = 8 + 1 in Minkowski signature, etc. The structure of supersymmetry algebra is mainly determined by the number of the fermionic generators, that is the number N times the real dimension of the spinor in d dimensions. It is because one can obtain a supersymmetry algebra of lower dimension easily from that of higher dimensionality by the use of dimensional reduction.
Upper bound on dimension of supersymmetric theories
The maximum allowed dimension of theories with supersymmetry is $d=11=10+1$, which admits a unique theory called 11-dimensional supergravity which is the low-energy limit of M-theory. This incorporates supergravity: without supergravity, the maximum allowed dimension is $d=10=9+1$.[3]
d = 11
The only example is the N = 1 supersymmetry with 32 supercharges.
d = 10
From d = 11, N = 1 SUSY, one obtains N = (1, 1) nonchiral SUSY algebra, which is also called the type IIA supersymmetry. There is also N = (2, 0) SUSY algebra, which is called the type IIB supersymmetry. Both of them have 32 supercharges.
N = (1, 0) SUSY algebra with 16 supercharges is the minimal susy algebra in 10 dimensions. It is also called the type I supersymmetry. Type IIA / IIB / I superstring theory has the SUSY algebra of the corresponding name. The supersymmetry algebra for the heterotic superstrings is that of type I.
Remarks
1. The barred representations are conjugate linear while the unbarred ones are complex linear. The numeral refers to the dimension of the representation space. Another more common notation is to write (1⁄2, 0) and (0, 1⁄2) respectively for these representations. The general irreducible representation is then (m, n), where m, n are half-integral and correspond physically to the spin content of the representation, which ranges from |m + n| to |m − n| in integer steps, each spin occurring exactly once.
Notes
1. Aitchison 2005
2. van Nieuwenhuizen 1981, p. 274
3. Tong, David. "Supersymmetry". www.damtp.cam.ac.uk. Retrieved 3 April 2023.
References
• Aitchison, Ian J R (2005). "Supersymmetry and the MSSM: An Elementary Introduction". arXiv:hep-ph/0505105.
• Gol'fand, Y. A.; Likhtman, E. P. (1971). "Extension of the algebra of the Poincare group generators and violation of P invariance". JETP Lett. 13: 323–326. Bibcode:1971JETPL..13..323G.
• van Nieuwenhuizen, P. (1981). "Supergravity". Phys. Rep. 68 (4): 189–398. Bibcode:1981PhR....68..189V. doi:10.1016/0370-1573(81)90157-5.
• Volkov, D. V.; Akulov, V. P. (1972). "Possible Universal Neutrino Interaction". JETP Lett. 16 (11): 621 pp.
• Volkov, D. V.; Akulov, V. P. (1973). "Is the neutrino a goldstone particle". Phys. Lett. B. 46 (1): 109–110. Bibcode:1973PhLB...46..109V. doi:10.1016/0370-2693(73)90490-5.
• Weinberg, Steven (2000). Supersymmetry. The Quantum Theory of Fields. Vol. 3 (1st ed.). Cambridge: Cambridge University Press. ISBN 978-0521670555.
• Wess, J.; Zumino, B. (1974). "Supergauge transformations in four dimensions". Nuclear Physics B. 70 (1): 39–50. Bibcode:1974NuPhB..70...39W. doi:10.1016/0550-3213(74)90355-1.
Supersymmetry
General topics
• Supersymmetry
• Supersymmetric gauge theory
• Supersymmetric quantum mechanics
• Supergravity
• Superstring theory
• Super vector space
• Supergeometry
Supermathematics
• Superalgebra
• Lie superalgebra
• Super-Poincaré algebra
• Superconformal algebra
• Supersymmetry algebra
• Supergroup
• Superspace
• Harmonic superspace
• Super Minkowski space
• Supermanifold
Concepts
• Supercharge
• R-symmetry
• Supermultiplet
• Short supermultiplet
• BPS state
• Superpotential
• D-term
• FI D-term
• F-term
• Moduli space
• Supersymmetry breaking
• Konishi anomaly
• Seiberg duality
• Seiberg–Witten theory
• Witten index
• Wess–Zumino gauge
• Localization
• Mu problem
• Little hierarchy problem
• Electric–magnetic duality
Theorems
• Coleman–Mandula
• Haag–Łopuszański–Sohnius
• Nonrenormalization
Field theories
• Wess–Zumino
• N = 1 super Yang–Mills
• N = 4 super Yang–Mills
• Super QCD
• MSSM
• NMSSM
• 6D (2,0) superconformal
• ABJM superconformal
Supergravity
• Pure 4D N = 1 supergravity
• N = 8 supergravity
• Higher dimensional
• Gauged supergravity
Superpartners
• Axino
• Chargino
• Gaugino
• Goldstino
• Graviphoton
• Graviscalar
• Higgsino
• LSP
• Neutralino
• R-hadron
• Sfermion
• Sgoldstino
• Stop squark
• Superghost
Researchers
• Affleck
• Bagger
• Batchelor
• Berezin
• Dine
• Fayet
• Gates
• Golfand
• Iliopoulos
• Montonen
• Olive
• Salam
• Seiberg
• Siegel
• Roček
• Rogers
• Wess
• Witten
• Zumino
| Wikipedia |
Super column
A super column is a tuple (a pair) with a binary super column name and a value that maps it to many columns.[1] It consists of key–value pairs, where the values are columns. Theoretically speaking, a super column is a (sorted) associative array of columns.[2] Similar to a regular column family, where a row is a sorted map of column names and column values, a row in a super column family is a sorted map of super column names that maps to column names and column values.
A super column is part of a keyspace together with other super columns and column families, and columns.
Code example
Written in the JSON-like syntax, a super column definition can be like this:
{
"databases": {
"Cassandra": {
"age": 20,
"name": {
"firstName": "Cassandra",
"lastName": "apache"
}
},
"HBase": {
"age": 20,
"address": {
"city": "Seoul",
"postcode": "1234"
}
}
}
}
Where:
"databases" are keyspace;
"Cassandra" and "HBase" are rowKeys;
"name" and "address" are super column names;
"firstName", "city", "age", etc. are column names.
See also
• Column (data store)
• Keyspace (distributed data store)
• Super column family
References
1. Sarkissian, Arin (September 1, 2009). "WTF is a SuperColumn". arin.me. Retrieved October 28, 2017. A SuperColumn is a tuple w/ a binary name & a value which is a map containing an unbounded number of Columns - keyed by the Column's name.
2. Ellis, Jonathan (August 15, 2016). "Data Model". Apache Cassandra Wiki. Retrieved October 28, 2017.
External links
• The Apache Cassandra data model
| Wikipedia |
Superstrong approximation
Superstrong approximation is a generalisation of strong approximation in algebraic groups G, to provide spectral gap results. The spectrum in question is that of the Laplacian matrix associated to a family of quotients of a discrete group Γ; and the gap is that between the first and second eigenvalues (normalisation so that the first eigenvalue corresponds to constant functions as eigenvectors). Here Γ is a subgroup of the rational points of G, but need not be a lattice: it may be a so-called thin group. The "gap" in question is a lower bound (absolute constant) for the difference of those eigenvalues.
A consequence and equivalent of this property, potentially holding for Zariski dense subgroups Γ of the special linear group over the integers, and in more general classes of algebraic groups G, is that the sequence of Cayley graphs for reductions Γp modulo prime numbers p, with respect to any fixed set S in Γ that is a symmetric set and generating set, is an expander family.[1]
In this context "strong approximation" is the statement that S when reduced generates the full group of points of G over the prime fields with p elements, when p is large enough. It is equivalent to the Cayley graphs being connected (when p is large enough), or that the locally constant functions on these graphs are constant, so that the eigenspace for the first eigenvalue is one-dimensional. Superstrong approximation therefore is a concrete quantitative improvement on these statements.
Background
Property (τ) is an analogue in discrete group theory of Kazhdan's property (T), and was introduced by Alexander Lubotzky.[2] For a given family of normal subgroups N of finite index in Γ, one equivalent formulation is that the Cayley graphs of the groups Γ/N, all with respect to a fixed symmetric set of generators S, form an expander family.[3] Therefore superstrong approximation is a formulation of property (τ), where the subgroups N are the kernels of reduction modulo large enough primes p.
The Lubotzky–Weiss conjecture states (for special linear groups and reduction modulo primes) that an expansion result of this kind holds independent of the choice of S. For applications, it is also relevant to have results where the modulus is not restricted to being a prime.[4]
Proofs of superstrong approximation
Results on superstrong approximation have been found using techniques on approximate subgroups, and growth rate in finite simple groups.[5]
Notes
1. (Breuillard & Oh 2014, pages x, 343)
2. http://www.ams.org/notices/200506/what-is.pdf
3. Alexander Lubotzky (1 January 1994). Discrete Groups, Expanding Graphs and Invariant Measures. Springer. p. 49. ISBN 978-3-7643-5075-8.
4. (Breuillard & Oh 2014, pages 3-4)
5. (Breuillard & Oh 2014, page xi)
References
• Breuillard, Emmanuel; Oh, Hee, eds. (2014), Thin Groups and Superstrong Approximation, Cambridge University Press, ISBN 978-1-107-03685-7
• Matthews, C. R.; Vaserstein, L. N.; Weisfeiler, B. (1984), "Congruence properties of Zariski-dense subgroups. I.", Proc. London Math. Soc., Series 3, 48 (3): 514–532, doi:10.1112/plms/s3-48.3.514, MR 0735226
| Wikipedia |
Hypercomputation
Hypercomputation or super-Turing computation is a set of models of computation that can provide outputs that are not Turing-computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.
The Church–Turing thesis states that any "computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense.
Technically, the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of deterministic, rather than random, uncomputable functions.
History
A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals.[1] This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are mathematical abstractions, and are not physically realizable.[2]
State space
In a sense, most functions are uncomputable: there are $\aleph _{0}$ computable functions, but there are an uncountable number ($2^{\aleph _{0}}$) of possible super-Turing functions.[3]
Models
Hypercomputer models range from useful but probably unrealizable (such as Turing's original oracle machines), to less-useful random-function generators that are more plausibly "realizable" (such as a random Turing machine).
Uncomputable inputs or black-box components
A system granted knowledge of the uncomputable, oracular Chaitin's constant (a number with an infinite sequence of digits that encode the solution to the halting problem) as an input can solve a large number of useful undecidable problems; a system granted an uncomputable random-number generator as an input can create random uncomputable functions, but is generally not believed to be able to meaningfully solve "useful" uncomputable functions such as the halting problem. There are an unlimited number of different types of conceivable hypercomputers, including:
• Turing's original oracle machines, defined by Turing in 1939.
• A real computer (a sort of idealized analog computer) can perform hypercomputation[4] if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for useful (rather than random) computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would require the ability to measure the real-valued physical value to arbitrary precision, though standard physics makes such arbitrary-precision measurements theoretically infeasible.[5]
• Similarly, a neural net that somehow had Chaitin's constant exactly embedded in its weight function would be able to solve the halting problem,[6] but is subject to the same physical difficulties as other models of hypercomputation based on real computation.
• Certain fuzzy logic-based "fuzzy Turing machines" can, by definition, accidentally solve the halting problem, but only because their ability to solve the halting problem is indirectly assumed in the specification of the machine; this tends to be viewed as a "bug" in the original specification of the machines.[7][8]
• Similarly, a proposed model known as fair nondeterminism can accidentally allow the oracular computation of noncomputable functions, because some such systems, by definition, have the oracular ability to identify reject inputs that would "unfairly" cause a subsystem to run forever.[9][10]
• Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic. These models require an uncomputable input, such as a physical event-generating process where the interval between events grows at an uncomputably large rate.[11]
• Similarly, one unorthodox interpretation of a model of unbounded nondeterminism posits, by definition, that the length of time required for an "Actor" to settle is fundamentally unknowable, and therefore it cannot be proven, within the model, that it does not take an uncomputably long period of time.[12]
"Infinite computational steps" models
In order to work correctly, certain computations by the machines below literally require infinite, rather than merely unlimited but finite, physical space and resources; in contrast, with a Turing machine, any given computation that halts will require only finite physical space and resources.
One such model is a Turing machine that can complete infinitely many steps in finite time, a feat known as a supertask. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1 + ½ + ¼ + ... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes. According to Shagrir, Zeno machines introduce physical paradoxes, and their state is logically undefined outside of the one-sided open period [0, 2), and thus undefined exactly 2 minutes after the beginning of the computation.
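A two-line numerical check of this timing (illustration only):

# Partial sums of the Zeno machine's step times 1 + 1/2 + 1/4 + ...
total, step = 0.0, 1.0
for _ in range(30):
    total += step
    step /= 2
print(total)   # about 1.999999998 minutes, approaching the limit of 2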
It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) makes hypercomputation possible by itself. However, this is not so since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation.[14] According to a 1992 paper,[15] a computer operating in a Malament–Hogarth spacetime or in orbit around a rotating black hole[16] could theoretically perform non-Turing computations for an observer inside the black hole.[17][18] Access to a CTC may allow the rapid solution to PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.[19][20]
Quantum models
Some scholars conjecture that a quantum mechanical system which somehow uses an infinite superposition of states could compute a non-computable function.[21] This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).[22]
"Eventually correct" systems
Some physically realizable systems will always eventually converge to the correct answer, but have the defect that they will often output an incorrect answer and stick with the incorrect answer for an uncomputably large period of time before eventually going back and correcting the mistake.
In mid 1960s, E Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals"[23] and "trial-and-error predicates",[24] respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify it as correct if it is recursive; otherwise, the correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)"[24] L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem"[25] studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
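The flavour of these limiting ("trial and error") computations can be illustrated with a toy sketch in Python, where "programs" are represented as generators and a step-bounded simulator produces a sequence of guesses about halting that converges to the truth without ever certifying it (all names here are ours and purely illustrative):

def runs_to_completion(make_program, steps):
    # Step-bounded simulation: advance the generator at most `steps` times;
    # return True if it finished (halted) within the budget.
    prog = make_program()
    try:
        for _ in range(steps):
            next(prog)
    except StopIteration:
        return True
    return False

def limiting_halting_guesses(make_program, stages):
    # Stage n outputs the current guess for "does this program halt?".
    # The guesses converge to the correct answer, but no stage certifies finality.
    return [runs_to_completion(make_program, n) for n in range(1, stages + 1)]

halting_program = lambda: iter(range(7))   # a toy "program" that halts after 7 steps
print(limiting_halting_guesses(halting_program, 10))
# [False, False, False, False, False, False, False, True, True, True]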
A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. The 'Monotone Turing machines' traditionally used in description size theory cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program; otherwise the halting problem could be solved. Schmidhuber ([26][27]) uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can eventually converge to a correct solution of the halting problem by evaluating a Specker sequence.
Analysis of capabilities
Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing $\Sigma _{1}^{0}$ or $\Pi _{1}^{0}$. Limiting-recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be $\Delta _{2}^{0}$. Gold further showed that limiting partial recursion would allow the computation of precisely the $\Sigma _{2}^{0}$ predicates.
The following list summarizes the computational power of several models; each entry gives the class of computable predicates, with notes and references:

• supertasking: $\operatorname {tt} \left(\Sigma _{1}^{0},\Pi _{1}^{0}\right)$ (dependent on outside observer) [28]
• limiting/trial-and-error: $\Delta _{2}^{0}$ [23]
• iterated limiting (k times): $\Delta _{k+1}^{0}$ [25]
• Blum–Shub–Smale machine: incomparable with traditional computable real functions [29]
• Malament–Hogarth spacetime: HYP (dependent on spacetime structure) [30]
• analog recurrent neural network: $\Delta _{1}^{0}[f]$, where f is an advice function giving connection weights; size is bounded by runtime [31][32]
• infinite time Turing machine: $AQI$ (arithmetical quasi-inductive sets) [33]
• classical fuzzy Turing machine: $\Sigma _{1}^{0}\cup \Pi _{1}^{0}$, for any computable t-norm [8]
• increasing function oracle: $\Delta _{1}^{1}$ for the one-sequence model; $\Pi _{1}^{1}$ are r.e. [11]
Criticism
Martin Davis, in his writings on hypercomputation,[34][35] refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claims that this is a new field founded in the 1990s. This point of view relies on the history of computability theory (degrees of unsolvability, computability over functions, real numbers and ordinals), as also mentioned above. In his argument, he remarks that all of hypercomputation amounts to little more than: "if non-computable inputs are permitted, then non-computable outputs are attainable."[36]
See also
• Digital physics
• Limits of computation
References
1. Turing, A. M. (1939). "Systems of Logic Based on Ordinals†". Proceedings of the London Mathematical Society. 45: 161–228. doi:10.1112/plms/s2-45.1.161. hdl:21.11116/0000-0001-91CE-3.
2. "Let us suppose that we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine" (Undecidable p. 167, a reprint of Turing's paper Systems of Logic Based On Ordinals)
3. J. Cabessa; H.T. Siegelmann (Apr 2012). "The Computational Power of Interactive Recurrent Neural Networks" (PDF). Neural Computation. 24 (4): 996–1019. CiteSeerX 10.1.1.411.7540. doi:10.1162/neco_a_00263. PMID 22295978. S2CID 5826757.
4. Arnold Schönhage, "On the power of random access machines", in Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 520–529, 1979. Source of citation: Scott Aaronson, "NP-complete Problems and Physical Reality" p. 12
5. Andrew Hodges. "The Professors and the Brainstorms". The Alan Turing Home Page. Retrieved 23 September 2011.
6. H.T. Siegelmann; E.D. Sontag (1994). "Analog Computation via Neural Networks". Theoretical Computer Science. 131 (2): 331–360. doi:10.1016/0304-3975(94)90178-3.
7. Biacino, L.; Gerla, G. (2002). "Fuzzy logic, continuity and effectiveness". Archive for Mathematical Logic. 41 (7): 643–667. CiteSeerX 10.1.1.2.8029. doi:10.1007/s001530100128. ISSN 0933-5846. S2CID 12513452.
8. Wiedermann, Jiří (2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theoretical Computer Science. 317 (1–3): 61–69. doi:10.1016/j.tcs.2003.12.004. Their (ability to solve the halting problem) is due to their acceptance criterion in which the ability to solve the halting problem is indirectly assumed.
9. Edith Spaan; Leen Torenvliet; Peter van Emde Boas (1989). "Nondeterminism, Fairness and a Fundamental Analogy". EATCS Bulletin. 37: 186–193.
10. Ord, Toby (2006). "The many forms of hypercomputation". Applied Mathematics and Computation. 178: 143–153. doi:10.1016/j.amc.2005.09.076.
11. Dmytro Taranovsky (July 17, 2005). "Finitism and Hypercomputation". Retrieved Apr 26, 2011.
12. Hewitt, Carl. "What Is Commitment." Physical, Organizational, and Social (Revised), Coordination, Organizations, Institutions, and Norms in Agent Systems II: AAMAS (2006).
13. These models have been independently developed by many different authors, including Hermann Weyl (1927). Philosophie der Mathematik und Naturwissenschaft.; the model is discussed in Shagrir, O. (June 2004). "Super-tasks, accelerating Turing machines and uncomputability". Theoretical Computer Science. 317 (1–3): 105–114. doi:10.1016/j.tcs.2003.12.007., Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science. 358 (1): 23–33. arXiv:cs/0412022. doi:10.1016/j.tcs.2005.11.040. S2CID 6749770. and Vincent C. Müller (2011). "On the possibilities of hypercomputing supertasks". Minds and Machines. 21 (1): 83–96. CiteSeerX 10.1.1.225.3696. doi:10.1007/s11023-011-9222-6. S2CID 253434.
14. Andréka, Hajnal; Németi, István; Székely, Gergely (2012). "Closed Timelike Curves in Relativistic Computation". Parallel Processing Letters. 22 (3). arXiv:1105.0047. doi:10.1142/S0129626412400105. S2CID 16816151.
15. Hogarth, Mark L. (1992). "Does general relativity allow an observer to view an eternity in a finite time?". Foundations of Physics Letters. 5 (2): 173–181. Bibcode:1992FoPhL...5..173H. doi:10.1007/BF00682813. S2CID 120917288.
16. István Neméti; Hajnal Andréka (2006). "Can General Relativistic Computers Break the Turing Barrier?". Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30-July 5, 2006. Proceedings. Lecture Notes in Computer Science. Vol. 3988. Springer. doi:10.1007/11780342. ISBN 978-3-540-35466-6.
17. Etesi, Gabor; Nemeti, Istvan (2002). "Non-Turing computations via Malament-Hogarth space-times". International Journal of Theoretical Physics. 41 (2): 341–370. arXiv:gr-qc/0104023. doi:10.1023/A:1014019225365. S2CID 17081866.
18. Earman, John; Norton, John D. (1993). "Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes". Philosophy of Science. 60: 22–42. doi:10.1086/289716. S2CID 122764068.
19. Brun, Todd A. (2003). "Computers with closed timelike curves can solve hard problems". Found. Phys. Lett. 16 (3): 245–253. arXiv:gr-qc/0209061. doi:10.1023/A:1025967225931. S2CID 16136314.
20. S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent
21. There have been some claims to this effect; see Tien Kieu (2003). "Quantum Algorithm for the Hilbert's Tenth Problem". Int. J. Theor. Phys. 42 (7): 1461–1478. arXiv:quant-ph/0110136. doi:10.1023/A:1025780028846. S2CID 6634980. or M. Ziegler (2005). "Computational Power of Infinite Quantum Parallelism". International Journal of Theoretical Physics. 44 (11): 2059–2071. arXiv:quant-ph/0410141. Bibcode:2005IJTP...44.2059Z. doi:10.1007/s10773-005-8984-0. S2CID 9879859. and the ensuing literature. For a retort see Warren D. Smith (2006). "Three counterexamples refuting Kieu's plan for "quantum adiabatic hypercomputation"; and some uncomputable quantum mechanical tasks". Applied Mathematics and Computation. 178 (1): 184–193. doi:10.1016/j.amc.2005.09.078..
22. Bernstein, Ethan; Vazirani, Umesh (1997). "Quantum Complexity Theory". SIAM Journal on Computing. 26 (5): 1411–1473. doi:10.1137/S0097539796300921.
23. E. M. Gold (1965). "Limiting Recursion". Journal of Symbolic Logic. 30 (1): 28–48. doi:10.2307/2270580. JSTOR 2270580. S2CID 33811657., E. Mark Gold (1967). "Language identification in the limit". Information and Control. 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
24. Hilary Putnam (1965). "Trial and Error Predicates and the Solution to a Problem of Mostowksi". Journal of Symbolic Logic. 30 (1): 49–57. doi:10.2307/2270581. JSTOR 2270581. S2CID 44655062.
25. L. K. Schubert (July 1974). "Iterated Limiting Recursion and the Program Minimization Problem". Journal of the ACM. 21 (3): 436–445. doi:10.1145/321832.321841. S2CID 2071951.
26. Schmidhuber, Juergen (2000). "Algorithmic Theories of Everything". arXiv:quant-ph/0011122.
27. J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit". International Journal of Foundations of Computer Science. 13 (4): 587–612. arXiv:quant-ph/0011122. Bibcode:2000quant.ph.11122S. doi:10.1142/S0129054102001291.
28. Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science. 358 (1): 23–33. arXiv:cs/0412022. doi:10.1016/j.tcs.2005.11.040. S2CID 6749770.
29. Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale (1998). Complexity and Real Computation. Springer. ISBN 978-0-387-98281-6.{{cite book}}: CS1 maint: multiple names: authors list (link)
30. P.D. Welch (2008). "The extent of computation in Malament-Hogarth spacetimes". British Journal for the Philosophy of Science. 59 (4): 659–674. arXiv:gr-qc/0609035. doi:10.1093/bjps/axn031.
31. H.T. Siegelmann (Apr 1995). "Computation Beyond the Turing Limit" (PDF). Science. 268 (5210): 545–548. Bibcode:1995Sci...268..545S. doi:10.1126/science.268.5210.545. PMID 17756722. S2CID 17495161.
32. Hava Siegelmann; Eduardo Sontag (1994). "Analog Computation via Neural Networks". Theoretical Computer Science. 131 (2): 331–360. doi:10.1016/0304-3975(94)90178-3.
33. P.D. Welch (2009). "Characteristics of discrete transfinite time Turing machine models: Halting times, stabilization times, and Normal Form theorems". Theoretical Computer Science. 410 (4–5): 426–442. doi:10.1016/j.tcs.2008.09.050.
34. Davis, Martin (2006). "Why there is no such discipline as hypercomputation". Applied Mathematics and Computation. 178 (1): 4–7. doi:10.1016/j.amc.2005.09.066.
35. Davis, Martin (2004). "The Myth of Hypercomputation". Alan Turing: Life and Legacy of a Great Thinker. Springer.
36. Martin Davis (Jan 2003). "The Myth of Hypercomputation". In Alexandra Shlapentokh (ed.). Miniworkshop: Hilbert's Tenth Problem, Mazur's Conjecture and Divisibility Sequences (PDF). MFO Report. Vol. 3. Mathematisches Forschungsinstitut Oberwolfach. p. 2.
Further reading
• Aoun, Mario Antoine (2016). "Advances in Three Hypercomputation Models" (PDF). Electronic Journal of Theoretical Physics. 13 (36): 169–182.
• Burgin, M. S. (1983). "Inductive Turing Machines". Notices of the Academy of Sciences of the USSR. 270 (6): 1289–1293.
• Burgin, Mark (2005). Super-recursive algorithms. Monographs in computer science. Springer. ISBN 0-387-95569-0.
• Cockshott, P.; Michaelson, G. (2007). "Are there new Models of Computation? Reply to Wegner and Eberbach". The Computer Journal. doi:10.1093/comjnl/bxl062.
• Cooper, S. B.; Odifreddi, P. (2003). "Incomputability in Nature" (PDF). In Cooper, S. B.; Goncharov, S. S. (eds.). Computability and Models: Perspectives East and West. New York, Boston, Dordrecht, London, Moscow: Plenum Publishers. pp. 137–160. Archived from the original (PDF) on 2011-07-24. Retrieved 2011-06-16.
• Cooper, S. B. (2006). "Definability as hypercomputational effect". Applied Mathematics and Computation. 178: 72–82. CiteSeerX 10.1.1.65.4088. doi:10.1016/j.amc.2005.09.072. S2CID 1487739.
• Copeland, J. (2002). "Hypercomputation" (PDF). Minds and Machines. 12 (4): 461–502. doi:10.1023/A:1021105915386. S2CID 218585685. Archived from the original (PDF) on 2016-03-14.
• Hagar, A.; Korolev, A. (2007). "Quantum Hypercomputation—Hype or Computation?*" (PDF). Philosophy of Science. 74 (3): 347–363. doi:10.1086/521969. S2CID 9857468.
• Ord, Toby (2002). "Hypercomputation: Computing more than the Turing machine can compute: A survey article on various forms of hypercomputation". arXiv:math/0209332.
• Piccinini, Gualtiero (June 16, 2021). "Computation in Physical Systems". Stanford Encyclopedia of Philosophy. Retrieved 2023-07-31.
• Sharma, Ashish (2022). "Nature Inspired Algorithms with Randomized Hypercomputational Perspective". Information Sciences. 608: 670–695. doi:10.1016/j.ins.2022.05.020. S2CID 248881264.
• Stannett, Mike (1990). "X-machines and the halting problem: Building a super-Turing machine". Formal Aspects of Computing. 2 (1): 331–341. doi:10.1007/BF01888233. S2CID 7406983.
• Stannett, Mike (2006). "The case for hypercomputation" (PDF). Applied Mathematics and Computation. 178 (1): 8–24. doi:10.1016/j.amc.2005.09.067. Archived from the original (PDF) on 2016-03-04.
• Syropoulos, Apostolos (2008). Hypercomputation: Computing Beyond the Church–Turing Barrier. Springer. ISBN 978-0-387-30886-9.
| Wikipedia |
Super vector space
In mathematics, a super vector space is a $\mathbb {Z} _{2}$-graded vector space, that is, a vector space over a field $\mathbb {K} $ with a given decomposition into subspaces of grade $0$ and grade $1$. The study of super vector spaces and their generalizations is sometimes called super linear algebra. These objects find their principal application in theoretical physics, where they are used to describe the various algebraic aspects of supersymmetry.
Definitions
A super vector space is a $\mathbb {Z} _{2}$-graded vector space with decomposition[1]
$V=V_{0}\oplus V_{1},\quad 0,1\in \mathbb {Z} _{2}=\mathbb {Z} /2\mathbb {Z} .$
Vectors that are elements of either $V_{0}$ or $V_{1}$ are said to be homogeneous. The parity of a nonzero homogeneous element, denoted by $|x|$, is $0$ or $1$ according to whether it is in $V_{0}$ or $V_{1}$,
$|x|={\begin{cases}0&x\in V_{0}\\1&x\in V_{1}\end{cases}}$
Vectors of parity $0$ are called even and those of parity $1$ are called odd. In theoretical physics, the even elements are sometimes called Bose elements or bosonic, and the odd elements Fermi elements or fermionic. Definitions for super vector spaces are often given only in terms of homogeneous elements and then extended to nonhomogeneous elements by linearity.
If $V$ is finite-dimensional and the dimensions of $V_{0}$ and $V_{1}$ are $p$ and $q$ respectively, then $V$ is said to have dimension $p|q$. The standard super coordinate space, denoted $\mathbb {K} ^{p|q}$, is the ordinary coordinate space $\mathbb {K} ^{p+q}$ where the even subspace is spanned by the first $p$ coordinate basis vectors and the odd space is spanned by the last $q$.
A homogeneous subspace of a super vector space is a linear subspace that is spanned by homogeneous elements. Homogeneous subspaces are super vector spaces in their own right (with the obvious grading).
For any super vector space $V$, one can define the parity reversed space $\Pi V$ to be the super vector space with the even and odd subspaces interchanged. That is,
${\begin{aligned}(\Pi V)_{0}&=V_{1},\\(\Pi V)_{1}&=V_{0}.\end{aligned}}$
Linear transformations
A homomorphism, a morphism in the category of super vector spaces, from one super vector space to another is a grade-preserving linear transformation. A linear transformation $f:V\rightarrow W$ between super vector spaces is grade preserving if
$f(V_{i})\subset W_{i},\quad i=0,1.$
That is, it maps the even elements of $V$ to even elements of $W$ and odd elements of $V$ to odd elements of $W$. An isomorphism of super vector spaces is a bijective homomorphism. The set of all homomorphisms $V\rightarrow W$ is denoted $\mathrm {Hom} (V,W)$.[2]
Every linear transformation, not necessarily grade-preserving, from one super vector space to another can be written uniquely as the sum of a grade-preserving transformation and a grade-reversing one—that is, a transformation $f:V\rightarrow W$ such that
$f(V_{i})\subset W_{1-i},\quad i=0,1.$
Declaring the grade-preserving transformations to be even and the grade-reversing ones to be odd gives the space of all linear transformations from $V$ to $W$, denoted $\mathbf {Hom} (V,W)$ and called internal $\mathrm {Hom} $, the structure of a super vector space. In particular,[3]
$\left(\mathbf {Hom} (V,W)\right)_{0}=\mathrm {Hom} (V,W).$
A grade-reversing transformation from $V$ to $W$ can be regarded as a homomorphism from $V$ to the parity reversed space $\Pi W$, so that
$\mathbf {Hom} (V,W)=\mathrm {Hom} (V,W)\oplus \mathrm {Hom} (V,\Pi W)=\mathrm {Hom} (V,W)\oplus \mathrm {Hom} (\Pi V,W).$
Operations on super vector spaces
The usual algebraic constructions for ordinary vector spaces have their counterpart in the super vector space setting.
Dual space
The dual space $V^{*}$ of a super vector space $V$ can be regarded as a super vector space by taking the even functionals to be those that vanish on $V_{1}$ and the odd functionals to be those that vanish on $V_{0}$.[4] Equivalently, one can define $V^{*}$ to be the space of linear maps from $V$ to $\mathbb {K} ^{1|0}$ (the base field $\mathbb {K} $ thought of as a purely even super vector space) with the gradation given in the previous section.
Direct sum
Direct sums of super vector spaces are constructed as in the ungraded case with the grading given by
$(V\oplus W)_{0}=V_{0}\oplus W_{0},$
$(V\oplus W)_{1}=V_{1}\oplus W_{1}.$
Tensor product
One can also construct tensor products of super vector spaces. Here the additive structure of $\mathbb {Z} _{2}$ comes into play. The underlying space is as in the ungraded case with the grading given by
$(V\otimes W)_{i}=\bigoplus _{j+k=i}V_{j}\otimes W_{k},$
where the indices are in $\mathbb {Z} _{2}$. Specifically, one has
$(V\otimes W)_{0}=(V_{0}\otimes W_{0})\oplus (V_{1}\otimes W_{1}),$
$(V\otimes W)_{1}=(V_{0}\otimes W_{1})\oplus (V_{1}\otimes W_{0}).$
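On (graded) dimensions, these formulas say that spaces of dimension $p|q$ and $r|s$ have direct sum of dimension $(p+r)|(q+s)$ and tensor product of dimension $(pr+qs)|(ps+qr)$. The following minimal sketch (the encoding of a dimension $p|q$ as a pair is just a convenience for this example) computes the graded dimensions and checks that the total dimension is multiplicative under the tensor product.

```python
# Graded dimensions of super vector spaces, encoded as pairs (p, q)
# meaning dimension p|q.  The tuple encoding is only a convenience for this sketch.

def direct_sum(dim_v, dim_w):
    """Dimension of V ⊕ W: even and odd parts add separately."""
    (p, q), (r, s) = dim_v, dim_w
    return (p + r, q + s)

def tensor(dim_v, dim_w):
    """Dimension of V ⊗ W: (V⊗W)_0 = V0⊗W0 ⊕ V1⊗W1 and
    (V⊗W)_1 = V0⊗W1 ⊕ V1⊗W0."""
    (p, q), (r, s) = dim_v, dim_w
    return (p * r + q * s, p * s + q * r)

V = (2, 1)   # K^{2|1}
W = (1, 3)   # K^{1|3}

print(direct_sum(V, W))   # (3, 4), i.e. dimension 3|4
print(tensor(V, W))       # (2*1 + 1*3, 2*3 + 1*1) = (5, 7), i.e. dimension 5|7

# Total dimension is multiplicative: (p+q)(r+s) = 3*4 = 12 = 5 + 7.
assert sum(tensor(V, W)) == sum(V) * sum(W)
```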
Supermodules
Just as one may generalize vector spaces over a field to modules over a commutative ring, one may generalize super vector spaces over a field to supermodules over a supercommutative algebra (or ring).
A common construction when working with super vector spaces is to enlarge the field of scalars to a supercommutative Grassmann algebra. Given a field $\mathbb {K} $ let
$R=\mathbb {K} [\theta _{1},\cdots ,\theta _{N}]$
denote the Grassmann algebra generated by $N$ anticommuting odd elements $\theta _{i}$. Any super vector space $V$ over $\mathbb {K} $ can be embedded in a module over $R$ by considering the (graded) tensor product
$\mathbb {K} [\theta _{1},\cdots ,\theta _{N}]\otimes V.$
The category of super vector spaces
The category of super vector spaces, denoted by $\mathbb {K} -\mathrm {SVect} $, is the category whose objects are super vector spaces (over a fixed field $\mathbb {K} $) and whose morphisms are even linear transformations (i.e. the grade preserving ones).
The categorical approach to super linear algebra is to first formulate definitions and theorems regarding ordinary (ungraded) algebraic objects in the language of category theory and then transfer these directly to the category of super vector spaces. This leads to a treatment of "superobjects" such as superalgebras, Lie superalgebras, supergroups, etc. that is completely analogous to their ungraded counterparts.
The category $\mathbb {K} -\mathrm {SVect} $ is a monoidal category with the super tensor product as the monoidal product and the purely even super vector space $\mathbb {K} ^{1|0}$ as the unit object. The involutive braiding operator
$\tau _{V,W}:V\otimes W\rightarrow W\otimes V,$
given by
$\tau _{V,W}(x\otimes y)=(-1)^{|x||y|}y\otimes x$
on homogeneous elements, turns $\mathbb {K} -\mathrm {SVect} $ into a symmetric monoidal category. This commutativity isomorphism encodes the "rule of signs" that is essential to super linear algebra. It effectively says that a minus sign is picked up whenever two odd elements are interchanged. One need not worry about signs in the categorical setting as long as the above operator is used wherever appropriate.
$\mathbb {K} -\mathrm {SVect} $ is also a closed monoidal category with the internal Hom object, $\mathbf {Hom} (V,W)$, given by the super vector space of all linear maps from $V$ to $W$. The ordinary $\mathrm {Hom} $ set $\mathrm {Hom} (V,W)$ is the even subspace therein:
$\mathrm {Hom} (V,W)=\mathbf {Hom} (V,W)_{0}.$
The fact that $\mathbb {K} -\mathrm {SVect} $ is closed means that the functor $-\otimes V$ is left adjoint to the functor $\mathbf {Hom} (V,-)$, giving a natural bijection
$\mathrm {Hom} (U\otimes V,W)\cong \mathrm {Hom} (U,\mathbf {Hom} (V,W)).$
Superalgebra
Main article: superalgebra
A superalgebra over $\mathbb {K} $ can be described as a super vector space ${\mathcal {A}}$ with a multiplication map
$\mu :{\mathcal {A}}\otimes {\mathcal {A}}\to {\mathcal {A}},$
that is a super vector space homomorphism. This is equivalent to demanding[5]
$|ab|=|a|+|b|,\quad a,b\in {\mathcal {A}}$
Associativity and the existence of an identity can be expressed with the usual commutative diagrams, so that a unital associative superalgebra over $\mathbb {K} $ is a monoid in the category $\mathbb {K} -\mathrm {SVect} $.
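A concrete example of such a superalgebra is the Grassmann algebra $\mathbb {K} [\theta _{1},\theta _{2}]$ with relations $\theta _{i}\theta _{j}=-\theta _{j}\theta _{i}$: monomials of even degree are even, those of odd degree are odd, multiplication of homogeneous elements satisfies $|ab|=|a|+|b|$, and the rule of signs $ab=(-1)^{|a||b|}ba$ holds. The sketch below uses a hand-rolled monomial representation, chosen only for illustration, to check both facts on the generators.

```python
from itertools import combinations

# Monomials in the Grassmann algebra K[θ1, θ2, ...]: a monomial is a tuple of
# strictly increasing generator indices; its parity is the number of factors mod 2.
# This bare-bones representation is chosen only for the illustration.

def multiply(mono_a, mono_b):
    """Return (coefficient, monomial) for the product; coefficient is 0, 1 or -1."""
    if set(mono_a) & set(mono_b):
        return 0, ()                      # θ_i θ_i = 0
    combined = list(mono_a) + list(mono_b)
    # Each transposition needed to sort the factors contributes a factor -1.
    swaps = sum(1 for i, j in combinations(range(len(combined)), 2)
                if combined[i] > combined[j])
    return (-1) ** swaps, tuple(sorted(combined))

def parity(mono):
    return len(mono) % 2

a, b = (1,), (2,)                          # θ1 and θ2, both odd
coeff_ab, ab = multiply(a, b)
coeff_ba, ba = multiply(b, a)

# |ab| = |a| + |b| (mod 2): the product of two odd elements is even.
assert parity(ab) == (parity(a) + parity(b)) % 2

# Rule of signs for homogeneous elements: ab = (-1)^{|a||b|} ba.
assert ab == ba and coeff_ab == (-1) ** (parity(a) * parity(b)) * coeff_ba

print(coeff_ab, ab, coeff_ba, ba)          # 1 (1, 2) -1 (1, 2)
```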
Notes
1. Varadarajan 2004, p. 83
2. Varadarajan 2004, p. 83
3. Varadarajan 2004, p. 83
4. Varadarajan 2004, p. 84
5. Varadarajan 2004, p. 87
References
• Deligne, P.; Morgan, J. W. (1999). "Notes on Supersymmetry (following Joseph Bernstein)". Quantum Fields and Strings: A Course for Mathematicians. Vol. 1. American Mathematical Society. pp. 41–97. ISBN 0-8218-2012-5 – via IAS.
• Varadarajan, V. S. (2004). Supersymmetry for Mathematicians: An Introduction. Courant Lecture Notes in Mathematics. Vol. 11. American Mathematical Society. ISBN 978-0-8218-3574-6.
| Wikipedia |
Riemann–Roch theorem for surfaces
In mathematics, the Riemann–Roch theorem for surfaces describes the dimension of linear systems on an algebraic surface. The classical form of it was first given by Castelnuovo (1896, 1897), after preliminary versions of it were found by Max Noether (1886) and Enriques (1894). The sheaf-theoretic version is due to Hirzebruch.
Riemann–Roch theorem for surfaces
Field: Algebraic geometry
First proof by: Guido Castelnuovo, Max Noether, Federigo Enriques
First proof in: 1886, 1894, 1896, 1897
Generalizations: Atiyah–Singer index theorem, Grothendieck–Riemann–Roch theorem, Hirzebruch–Riemann–Roch theorem
Consequences: Riemann–Roch theorem
Statement
One form of the Riemann–Roch theorem states that if D is a divisor on a non-singular projective surface then
$\chi (D)=\chi (0)+{\tfrac {1}{2}}D.(D-K)\,$
where χ is the holomorphic Euler characteristic, the dot . is the intersection number, and K is the canonical divisor. The constant χ(0) is the holomorphic Euler characteristic of the trivial bundle, and is equal to 1 + pa, where pa is the arithmetic genus of the surface. For comparison, the Riemann–Roch theorem for a curve states that χ(D) = χ(0) + deg(D).
Noether's formula
Noether's formula states that
$\chi ={\frac {c_{1}^{2}+c_{2}}{12}}={\frac {(K.K)+e}{12}}$
where χ=χ(0) is the holomorphic Euler characteristic, c12 = (K.K) is a Chern number and the self-intersection number of the canonical class K, and e = c2 is the topological Euler characteristic. It can be used to replace the term χ(0) in the Riemann–Roch theorem with topological terms; this gives the Hirzebruch–Riemann–Roch theorem for surfaces.
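A standard sanity check of both formulas is the projective plane P2: with K = −3H for a line H one has (K.K) = 9 and e = 3, so Noether's formula gives χ(0) = (9 + 3)/12 = 1, and Riemann–Roch applied to D = dH gives χ(O(d)) = 1 + (d2 + 3d)/2 = ${\tbinom {d+2}{2}}$, which for d ≥ 0 equals the number of degree-d monomials in three variables (the higher cohomology of O(d) vanishes there). The short script below only re-runs this arithmetic as a numerical illustration; it is not a proof.

```python
from math import comb

# Invariants of the projective plane P^2 (divisors D = d*H, H a line):
K_dot_K = 9          # (K.K) with K = -3H
euler_char = 3       # topological Euler characteristic e = c2
chi_0 = (K_dot_K + euler_char) // 12            # Noether's formula
assert chi_0 == 1

def chi_RR(d):
    """Riemann-Roch on P^2: chi(D) = chi(0) + D.(D - K)/2 with D = dH."""
    D_dot_D = d * d
    D_dot_K = -3 * d
    return chi_0 + (D_dot_D - D_dot_K) // 2        # d(d+3) is always even

for d in range(6):
    # For d >= 0 the higher cohomology of O(d) vanishes, so chi(O(d)) equals
    # h^0(O(d)) = number of degree-d monomials in 3 variables.
    assert chi_RR(d) == comb(d + 2, 2)
    print(d, chi_RR(d))
```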
Relation to the Hirzebruch–Riemann–Roch theorem
For surfaces, the Hirzebruch–Riemann–Roch theorem is essentially the Riemann–Roch theorem for surfaces combined with the Noether formula. To see this, recall that for each divisor D on a surface there is an invertible sheaf L = O(D) such that the linear system of D is more or less the space of sections of L. For surfaces the Todd class is $1+c_{1}(X)/2+(c_{1}(X)^{2}+c_{2}(X))/12$, and the Chern character of the sheaf L is just $1+c_{1}(L)+c_{1}(L)^{2}/2$, so the Hirzebruch–Riemann–Roch theorem states that
${\begin{aligned}\chi (D)&=h^{0}(L)-h^{1}(L)+h^{2}(L)\\&={\frac {1}{2}}c_{1}(L)^{2}+{\frac {1}{2}}c_{1}(L)\,c_{1}(X)+{\frac {1}{12}}\left(c_{1}(X)^{2}+c_{2}(X)\right)\end{aligned}}$
Fortunately this can be written in a clearer form as follows. First putting D = 0 shows that
$\chi (0)={\frac {1}{12}}\left(c_{1}(X)^{2}+c_{2}(X)\right)$ (Noether's formula)
For invertible sheaves (line bundles) the second Chern class vanishes. The products of second cohomology classes can be identified with intersection numbers in the Picard group, and we get a more classical version of the Riemann–Roch theorem for surfaces:
$\chi (D)=\chi (0)+{\frac {1}{2}}(D.D-D.K)$
If we want, we can use Serre duality to express h2(O(D)) as h0(O(K − D)), but unlike the case of curves there is in general no easy way to write the h1(O(D)) term in a form not involving sheaf cohomology (although in practice it often vanishes).
Early versions
The earliest forms of the Riemann–Roch theorem for surfaces were often stated as an inequality rather than an equality, because there was no direct geometric description of first cohomology groups. A typical example is given by Zariski (1995, p. 78), which states that
$r\geq n-\pi +p_{a}+1-i$
where
• r is the dimension of the complete linear system |D| of a divisor D (so r = h0(O(D)) −1)
• n is the virtual degree of D, given by the self-intersection number (D.D)
• π is the virtual genus of D, equal to 1 + (D.D + K.D)/2
• pa is the arithmetic genus χ(OF) − 1 of the surface
• i is the index of speciality of D, equal to dim H0(O(K − D)) (which by Serre duality is the same as dim H2(O(D))).
The difference between the two sides of this inequality was called the superabundance s of the divisor D. Comparing this inequality with the sheaf-theoretic version of the Riemann–Roch theorem shows that the superabundance of D is given by s = dim H1(O(D)). The divisor D was called regular if i = s = 0 (or in other words if all higher cohomology groups of O(D) vanish) and superabundant if s > 0.
References
• Topological Methods in Algebraic Geometry by Friedrich Hirzebruch ISBN 3-540-58663-6
• Zariski, Oscar (1995), Algebraic surfaces, Classics in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-58658-6, MR 1336146
• Smith, Roy. "On Classical Riemann Roch and Hirzebruch's generalization" (PDF). Department of Mathematics Boyd Research and Education Center University of Georgia.
| Wikipedia |
Superabundant number
In mathematics, a superabundant number (sometimes abbreviated as SA) is a certain kind of natural number. A natural number n is called superabundant precisely when, for all m < n
${\frac {\sigma (m)}{m}}<{\frac {\sigma (n)}{n}}$
where σ denotes the sum-of-divisors function (i.e., the sum of all positive divisors of n, including n itself). The first few superabundant numbers are 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, ... (sequence A004394 in the OEIS). For example, the number 5 is not superabundant: for n = 1, 2, 3, 4, 5 the values of σ(n) are 1, 3, 4, 7, 6 respectively, and σ(4)/4 = 7/4 > 6/5 = σ(5)/5.
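The definition translates directly into a brute-force search: scan n = 1, 2, 3, ..., keep the running maximum of σ(n)/n, and record every n that strictly exceeds it. The sketch below does exactly that; it is fine for small bounds but is not an efficient way to enumerate large superabundant numbers, and the cutoff 200 is an arbitrary illustrative choice.

```python
from fractions import Fraction

def sigma(n):
    """Sum of all positive divisors of n (naive O(sqrt n) scan)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def superabundant_up_to(limit):
    """All superabundant n <= limit: sigma(n)/n strictly beats every smaller m."""
    best = Fraction(0)
    found = []
    for n in range(1, limit + 1):
        ratio = Fraction(sigma(n), n)
        if ratio > best:
            best = ratio
            found.append(n)
    return found

print(superabundant_up_to(200))   # [1, 2, 4, 6, 12, 24, 36, 48, 60, 120, 180]
```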
Superabundant numbers were defined by Leonidas Alaoglu and Paul Erdős (1944). Unknown to Alaoglu and Erdős, about 30 pages of Ramanujan's 1915 paper "Highly Composite Numbers" were suppressed. Those pages were finally published in The Ramanujan Journal 1 (1997), 119–153. In section 59 of that paper, Ramanujan defines generalized highly composite numbers, which include the superabundant numbers.
Properties
Leonidas Alaoglu and Paul Erdős (1944) proved that if n is superabundant, then there exist a k and a1, a2, ..., ak such that
$n=\prod _{i=1}^{k}(p_{i})^{a_{i}}$
where pi is the i-th prime number, and
$a_{1}\geq a_{2}\geq \dotsb \geq a_{k}\geq 1.$
That is, they proved that if n is superabundant, the prime decomposition of n has non-increasing exponents (the exponent of a larger prime is never more than that of a smaller prime) and that all primes up to $p_{k}$ are factors of n. Then in particular any superabundant number is an even integer, and it is a multiple of the k-th primorial $p_{k}\#.$
In fact, the last exponent ak is equal to 1 except when n is 4 or 36.
Superabundant numbers are closely related to highly composite numbers. Not all superabundant numbers are highly composite numbers. In fact, only 449 superabundant and highly composite numbers are the same (sequence A166981 in the OEIS). For instance, 7560 is highly composite but not superabundant. Conversely, 1163962800 is superabundant but not highly composite.
Alaoglu and Erdős observed that all superabundant numbers are highly abundant.
Not all superabundant numbers are Harshad numbers. The first exception is the 105th SA number, 149602080797769600. The digit sum is 81, but 81 does not divide evenly into this SA number.
Superabundant numbers are also of interest in connection with the Riemann hypothesis, and with Robin's theorem that the Riemann hypothesis is equivalent to the statement that
${\frac {\sigma (n)}{e^{\gamma }n\log \log n}}<1$
for all n greater than the largest known exception, the superabundant number 5040. If this inequality has a larger counterexample, proving the Riemann hypothesis to be false, the smallest such counterexample must be a superabundant number (Akbary & Friggstad 2009).
Not all superabundant numbers are colossally abundant.
Extension
The generalized $k$-super abundant numbers are those such that ${\frac {\sigma _{k}(m)}{m^{k}}}<{\frac {\sigma _{k}(n)}{n^{k}}}$ for all $m<n$, where $\sigma _{k}(n)$ is the sum of the $k$-th powers of the divisors of $n$.
1-super abundant numbers are superabundant numbers. 0-super abundant numbers are highly composite numbers.
For example, generalized 2-super abundant numbers are 1, 2, 4, 6, 12, 24, 48, 60, 120, 240, ... (sequence A208767 in the OEIS)
References
• Briggs, Keith (2006), "Abundant numbers and the Riemann hypothesis", Experimental Mathematics, 15 (2): 251–256, doi:10.1080/10586458.2006.10128957, S2CID 46047029.
• Akbary, Amir; Friggstad, Zachary (2009), "Superabundant numbers and the Riemann hypothesis", American Mathematical Monthly, 116 (3): 273–275, doi:10.4169/193009709X470128.
• Alaoglu, Leonidas; Erdős, Paul (1944), "On highly composite and similar numbers", Transactions of the American Mathematical Society, American Mathematical Society, 56 (3): 448–469, doi:10.2307/1990319, JSTOR 1990319.
External links
• MathWorld: Superabundant number
| Wikipedia |
Superadditivity
In mathematics, a function $f$ is superadditive if
$f(x+y)\geq f(x)+f(y)$
for all $x$ and $y$ in the domain of $f.$
Similarly, a sequence $a_{1},a_{2},\ldots $ is called superadditive if it satisfies the inequality
$a_{n+m}\geq a_{n}+a_{m}$
for all $m$ and $n.$
The term "superadditive" is also applied to functions from a boolean algebra to the real numbers where $P(X\lor Y)\geq P(X)+P(Y),$ such as lower probabilities.
Examples of superadditive functions
• The map $f(x)=x^{2}$ is a superadditive function for nonnegative real numbers because the square of $x+y$ is always greater than or equal to the square of $x$ plus the square of $y,$ for nonnegative real numbers $x$ and $y$: $f(x+y)=(x+y)^{2}=x^{2}+y^{2}+2xy=f(x)+f(y)+2xy.$
• The determinant is superadditive for nonnegative Hermitian matrices; that is, if $A,B\in {\text{Mat}}_{n}(\mathbb {C} )$ are nonnegative Hermitian then $\det(A+B)\geq \det(A)+\det(B).$ This follows from the Minkowski determinant theorem, which more generally states that $\det(\cdot )^{1/n}$ is superadditive (equivalently, concave)[1] for nonnegative Hermitian matrices of size $n$: if $A,B\in {\text{Mat}}_{n}(\mathbb {C} )$ are nonnegative Hermitian then $\det(A+B)^{1/n}\geq \det(A)^{1/n}+\det(B)^{1/n}.$ (A numerical spot-check of this inequality appears after this list.)
• Horst Alzer proved[2] that Hadamard's gamma function $H(x)$ is superadditive for all real numbers $x,y$ with $x,y\geq 1.5031.$
• Mutual information
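The following numerical spot-check of the Minkowski determinant inequality quoted above samples random positive-semidefinite Hermitian matrices $A=GG^{*}$ and verifies $\det(A+B)^{1/n}\geq \det(A)^{1/n}+\det(B)^{1/n}$ on each sample. The matrix size, the number of trials and the random seed are arbitrary choices, and a finite check of random instances is of course no substitute for the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    """Random positive-semidefinite Hermitian matrix M = G G*."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return g @ g.conj().T

n = 4
for _ in range(1000):
    a, b = random_psd(n), random_psd(n)
    lhs = np.linalg.det(a + b).real ** (1 / n)
    rhs = np.linalg.det(a).real ** (1 / n) + np.linalg.det(b).real ** (1 / n)
    # Minkowski's determinant inequality: det(A+B)^{1/n} >= det(A)^{1/n} + det(B)^{1/n}.
    assert lhs >= rhs - 1e-9
print("Minkowski determinant inequality held on all sampled instances")
```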
Properties
If $f$ is a superadditive function whose domain contains $0,$ then $f(0)\leq 0.$ To see this, take the inequality at the top: $f(x)\leq f(x+y)-f(y).$ Hence $f(0)\leq f(0+y)-f(y)=0.$
The negative of a superadditive function is subadditive.
Fekete's lemma
The major reason for the use of superadditive sequences is the following lemma due to Michael Fekete.[3]
Lemma: (Fekete) For every superadditive sequence $a_{1},a_{2},\ldots ,$ the limit $\lim a_{n}/n$ is equal to the supremum $\sup a_{n}/n.$ (The limit may be positive infinity, as is the case with the sequence $a_{n}=\log n!$ for example.)
The analogue of Fekete's lemma holds for subadditive functions as well. There are extensions of Fekete's lemma that do not require the definition of superadditivity above to hold for all $m$ and $n.$ There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present. A good exposition of this topic may be found in Steele (1997).[4][5]
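Fekete's lemma can also be watched numerically on concrete superadditive sequences. The sketch below uses $a_{n}=n-1/n$ (superadditive because $1/n+1/m\geq 1/(n+m)$), for which $a_{n}/n=1-1/n^{2}$ increases to the supremum $1$, and $b_{n}=\log n!$ (superadditive because $(n+m)!\geq n!\,m!$), for which $b_{n}/n$ grows without bound, matching the parenthetical remark in the lemma. The particular sequences and sample points are illustrative choices only.

```python
import math

# Two superadditive sequences:
#   a_n = n - 1/n   (superadditive since 1/n + 1/m >= 1/(n+m))
#   b_n = log(n!)   (superadditive since (n+m)! >= n! * m!)
def a(n): return n - 1 / n
def b(n): return math.lgamma(n + 1)     # log(n!)

# Spot-check superadditivity on a grid of (n, m).
for n in range(1, 50):
    for m in range(1, 50):
        assert a(n + m) >= a(n) + a(m) - 1e-12
        assert b(n + m) >= b(n) + b(m) - 1e-12

# Fekete: lim a_n/n = sup a_n/n.  Here a_n/n = 1 - 1/n^2 -> 1 (a finite supremum),
# while b_n/n = log(n!)/n grows without bound (the supremum, hence the limit, is infinite).
for n in (10, 100, 10_000):
    print(n, a(n) / n, b(n) / n)
```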
See also
• Choquet integral
• Inner measure
• Subadditivity – the complementary property, in which $f(x+y)\leq f(x)+f(y)$ for all $x$ and $y$ in the domain of $f$
• Sublinear function
References
1. M. Marcus, H. Minc (1992). A survey in matrix theory and matrix inequalities. Dover. Theorem 4.1.8, page 115.
2. Horst Alzer (2009). "A superadditive property of Hadamard's gamma function". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. Springer. 79: 11–23. doi:10.1007/s12188-008-0009-5. S2CID 123691692.
3. Fekete, M. (1923). "Über die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten". Mathematische Zeitschrift. 17 (1): 228–249. doi:10.1007/BF01504345. S2CID 186223729.
4. Michael J. Steele (1997). Probability theory and combinatorial optimization. SIAM, Philadelphia. ISBN 0-89871-380-3.
5. Michael J. Steele (2011). CBMS Lectures on Probability Theory and Combinatorial Optimization. University of Cambridge.
Notes
• György Polya and Gábor Szegö. (1976). Problems and theorems in analysis, volume 1. Springer-Verlag, New York. ISBN 0-387-05672-6.
This article incorporates material from Superadditivity on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
| Wikipedia |
Supercompact cardinal
In set theory, a supercompact cardinal is a type of large cardinal independently introduced by Solovay and Reinhardt.[1] They display a variety of reflection properties.
Formal definition
If $\lambda $ is any ordinal, $\kappa $ is $\lambda $-supercompact means that there exists an elementary embedding $j$ from the universe $V$ into a transitive inner model $M$ with critical point $\kappa $, $j(\kappa )>\lambda $ and
${}^{\lambda }M\subseteq M\,.$
That is, $M$ contains all of its $\lambda $-sequences. Then $\kappa $ is supercompact means that it is $\lambda $-supercompact for all ordinals $\lambda $.
Alternatively, an uncountable cardinal $\kappa $ is supercompact if for every $A$ such that $\vert A\vert \geq \kappa $ there exists a normal measure over $[A]^{<\kappa }$, in the following sense.
$[A]^{<\kappa }$ is defined as follows:
$[A]^{<\kappa }:=\{x\subseteq A\mid \vert x\vert <\kappa \}$.
An ultrafilter $U$ over $[A]^{<\kappa }$ is fine if it is $\kappa $-complete and $\{x\in [A]^{<\kappa }\mid a\in x\}\in U$, for every $a\in A$. A normal measure over $[A]^{<\kappa }$ is a fine ultrafilter $U$ over $[A]^{<\kappa }$ with the additional property that every function $f:[A]^{<\kappa }\to A$ such that $\{x\in [A]^{<\kappa }|f(x)\in x\}\in U$ is constant on a set in $U$. Here "constant on a set in $U$" means that there is $a\in A$ such that $\{x\in [A]^{<\kappa }|f(x)=a\}\in U$.
Properties
Supercompact cardinals have reflection properties. If a cardinal with some property (say a 3-huge cardinal) that is witnessed by a structure of limited rank exists above a supercompact cardinal $\kappa $, then a cardinal with that property exists below $\kappa $. For example, if $\kappa $ is supercompact and the generalized continuum hypothesis (GCH) holds below $\kappa $, then it holds everywhere: a bijection between the powerset of $\nu $ and a cardinal at least $\nu ^{++}$ would be a witness of limited rank for the failure of GCH at $\nu $, so such a witness would also have to exist below $\kappa $.
Finding a canonical inner model for supercompact cardinals is one of the major problems of inner model theory.
The least supercompact cardinal is the least $\kappa $ such that for every structure $(M,R_{1},\ldots ,R_{n})$ with cardinality of the domain $\vert M\vert \geq \kappa $, and for every $\Pi _{1}^{1}$ sentence $\phi $ such that $(M,R_{1},\ldots ,R_{n})\vDash \phi $, there exists a substructure $(M',R_{1}\vert M',\ldots ,R_{n}\vert M')$ with smaller domain (i.e. $\vert M'\vert <\vert M\vert $) that satisfies $\phi $.[2]
Supercompactness has a combinatorial characterization similar to the property of being ineffable. Let $P_{\kappa }(A)$ be the set of all nonempty subsets of $A$ which have cardinality $<\kappa $. A cardinal $\kappa $ is supercompact iff for every set $A$ (equivalently every cardinal $\alpha $), for every function $f:P_{\kappa }(A)\to P_{\kappa }(A)$, if $f(X)\subseteq X$ for all $X\in P_{\kappa }(A)$, then there is some $B\subseteq A$ such that $\{X\mid f(X)=B\cap X\}$ is stationary.[3]
See also
• Indestructibility
• Strongly compact cardinal
• List of large cardinal properties
References
• Drake, F. R. (1974). Set Theory: An Introduction to Large Cardinals (Studies in Logic and the Foundations of Mathematics ; V. 76). Elsevier Science Ltd. ISBN 0-444-10535-2.
• Jech, Thomas (2002). Set theory, third millennium edition (revised and expanded). Springer. ISBN 3-540-44085-2.
• Kanamori, Akihiro (2003). The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.). Springer. ISBN 3-540-00384-3.
Citations
1. A. Kanamori, "Kunen and set theory", pp.2450--2451. Topology and its Applications, vol. 158 (2011).
2. Magidor, M. (1971). "On the Role of Supercompact and Extendible Cardinals in Logic". Israel Journal of Mathematics. 10 (2): 147–157. doi:10.1007/BF02771565.
3. M. Magidor, Combinatorial Characterization of Supercompact Cardinals, pp.281--282. Proceedings of the American Mathematical Society, vol. 42 no. 1, 1974.
| Wikipedia |
Supercompact space
In mathematics, in the field of topology, a topological space is called supercompact if there is a subbasis such that every open cover of the topological space from elements of the subbasis has a subcover with at most two subbasis elements. Supercompactness and the related notion of superextension was introduced by J. de Groot in 1967.[1]
Examples
By the Alexander subbase theorem, every supercompact space is compact. Conversely, many (but not all) compact spaces are supercompact. The following are examples of supercompact spaces:
• Compact linearly ordered spaces with the order topology and all continuous images of such spaces[2]
• Compact metrizable spaces (due originally to Strok & Szymański (1975), see also Mills (1979))
• A product of supercompact spaces is supercompact (like a similar statement about compactness, Tychonoff's theorem, it is equivalent to the axiom of choice.)[3]
Properties
Some compact Hausdorff spaces are not supercompact; such an example is given by the Stone–Čech compactification of the natural numbers (with the discrete topology).[4]
A continuous image of a supercompact space need not be supercompact.[5]
In a supercompact space (or any continuous image of one), every cluster point of a countable subset is the limit of a nontrivial convergent sequence.[6]
Notes
1. de Groot (1969).
2. Bula et al. (1992).
3. Banaschewski (1993).
4. Bell (1978).
5. Verbeek (1972); Mills & van Mill (1979).
6. Yang (1994).
References
• Banaschewski, B. (1993), "Supercompactness, products and the axiom of choice", Kyungpook Math Journal, 33 (1): 111–114
• Bell, Murray G. (1978), "Not all compact Hausdorff spaces are supercompact", General Topology and Its Applications, 8 (2): 151–155, doi:10.1016/0016-660X(78)90046-6
• Bula, W.; Nikiel, J.; Tuncali, H. M.; Tymchatyn, E. D. (1992), "Continuous images of ordered compacta are regular supercompact", Topology and Its Applications, 45 (3): 203–221, doi:10.1016/0166-8641(92)90005-K
• de Groot, J. (1969), "Supercompactness and superextensions", in Flachsmeyer, J.; Poppe, H.; Terpe, F. (eds.), Contributions to extension theory of topological structures. Proceedings of the Symposium held in Berlin, August 14—19, 1967, Berlin: VEB Deutscher Verlag der Wissenschaften
• Engelking, R (1977), General topology, Taylor & Francis, ISBN 978-0-8002-0209-5
• Malykhin, VI; Ponomarev, VI (1977), "General topology (set-theoretic trend)", Journal of Mathematical Sciences, New York: Springer, 7 (4): 587–629, doi:10.1007/BF01084982, S2CID 120365836
• Mills, Charles F. (1979), "A simpler proof that compact metric spaces are supercompact", Proceedings of the American Mathematical Society, American Mathematical Society, Vol. 73, No. 3, 73 (3): 388–390, doi:10.2307/2042369, JSTOR 2042369, MR 0518526
• Mills, Charles F.; van Mill, Jan (1979), "A nonsupercompact continuous image of a supercompact space", Houston Journal of Mathematics, 5 (2): 241–247
• Mysior, Adam (1992), "Universal compact T1-spaces", Canadian Mathematical Bulletin, Canadian Mathematical Society, 35 (2): 261–266, doi:10.4153/CMB-1992-037-1
• Strok, M.; Szymański, A. (1975), "Compact metric spaces have binary bases" (PDF), Fundamenta Mathematicae, 89 (1): 81–91, doi:10.4064/fm-89-1-81-91
• van Mill, J. (1977), Supercompactness and Wallman spaces (Mathematical Centre Tracts, No. 85.), Amsterdam: Mathematisch Centrum, ISBN 90-6196-151-3
• Verbeek, A. (1972), Superextensions of topological spaces (Mathematical Centre tracts, No. 41), Amsterdam: Mathematisch Centrum
• Yang, Zhong Qiang (1994), "All cluster points of countable sets in supercompact spaces are the limits of nontrivial sequences", Proceedings of the American Mathematical Society, American Mathematical Society, Vol. 122, No. 2, 122 (2): 591–595, doi:10.2307/2161053, JSTOR 2161053
| Wikipedia |
Berezinian
In mathematics and theoretical physics, the Berezinian or superdeterminant is a generalization of the determinant to the case of supermatrices. It is named after Felix Berezin. The Berezinian plays a role analogous to that of the determinant when considering coordinate changes for integration on a supermanifold.
Definition
The Berezinian is uniquely determined by two defining properties:
• $\operatorname {Ber} (XY)=\operatorname {Ber} (X)\operatorname {Ber} (Y)$
• $\operatorname {Ber} (e^{X})=e^{\operatorname {str(X)} }\,$
where str(X) denotes the supertrace of X. Unlike the classical determinant, the Berezinian is defined only for invertible supermatrices.
The simplest case to consider is the Berezinian of a supermatrix with entries in a field K. Such supermatrices represent linear transformations of a super vector space over K. A particular even supermatrix is a block matrix of the form
$X={\begin{bmatrix}A&0\\0&D\end{bmatrix}}$
Such a matrix is invertible if and only if both A and D are invertible matrices over K. The Berezinian of X is given by
$\operatorname {Ber} (X)=\det(A)\det(D)^{-1}$
For a motivation of the negative exponent see the substitution formula in the odd case.
More generally, consider matrices with entries in a supercommutative algebra R. An even supermatrix is then of the form
$X={\begin{bmatrix}A&B\\C&D\end{bmatrix}}$
where A and D have even entries and B and C have odd entries. Such a matrix is invertible if and only if both A and D are invertible in the commutative ring R0 (the even subalgebra of R). In this case the Berezinian is given by
$\operatorname {Ber} (X)=\det(A-BD^{-1}C)\det(D)^{-1}$
or, equivalently, by
$\operatorname {Ber} (X)=\det(A)\det(D-CA^{-1}B)^{-1}.$
These formulas are well-defined since we are only taking determinants of matrices whose entries are in the commutative ring R0. The matrix
$D-CA^{-1}B\,$
is known as the Schur complement of A relative to ${\begin{bmatrix}A&B\\C&D\end{bmatrix}}.$
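Over a purely even ring of scalars such as a field, the odd blocks B and C must vanish, so an invertible even supermatrix is block diagonal and the Berezinian reduces to $\det(A)\det(D)^{-1}$. The sketch below treats only that special case numerically, checking the multiplicativity $\operatorname {Ber} (XY)=\operatorname {Ber} (X)\operatorname {Ber} (Y)$ on random block-diagonal supermatrices; genuinely odd entries would require Grassmann-valued arithmetic, which this sketch does not attempt, and the block sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def berezinian_block_diag(a, d):
    """Ber of the even supermatrix diag(A, D) with numeric (purely even) entries."""
    return np.linalg.det(a) / np.linalg.det(d)

p, q = 3, 2                      # dimension p|q
for _ in range(100):
    a1, d1 = rng.normal(size=(p, p)), rng.normal(size=(q, q))
    a2, d2 = rng.normal(size=(p, p)), rng.normal(size=(q, q))
    # The product of block-diagonal supermatrices is block diagonal: diag(A1 A2, D1 D2).
    lhs = berezinian_block_diag(a1 @ a2, d1 @ d2)
    rhs = berezinian_block_diag(a1, d1) * berezinian_block_diag(a2, d2)
    assert np.isclose(lhs, rhs)
print("Ber(XY) = Ber(X) Ber(Y) verified on block-diagonal samples")
```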
An odd matrix X can only be invertible if the number of even dimensions equals the number of odd dimensions. In this case, invertibility of X is equivalent to the invertibility of JX, where
$J={\begin{bmatrix}0&I\\-I&0\end{bmatrix}}.$
Then the Berezinian of X is defined as
$\operatorname {Ber} (X)=\operatorname {Ber} (JX)=\det(C-DB^{-1}A)\det(-B)^{-1}.$
Properties
• The Berezinian of $X$ is always a unit in the ring R0.
• $\operatorname {Ber} (X^{-1})=\operatorname {Ber} (X)^{-1}$
• $\operatorname {Ber} (X^{st})=\operatorname {Ber} (X)$ where $X^{st}$ denotes the supertranspose of $X$.
• $\operatorname {Ber} (X\oplus Y)=\operatorname {Ber} (X)\mathrm {Ber} (Y)$
Berezinian module
The determinant of an endomorphism of a free module M can be defined as the induced action on the 1-dimensional highest exterior power of M. In the supersymmetric case there is no highest exterior power, but there is still a similar definition of the Berezinian, as follows.
Suppose that M is a free module of dimension (p,q) over R. Let A be the (super)symmetric algebra S*(M*) of the dual M* of M. Then an automorphism of M acts on the ext module
$Ext_{A}^{p}(R,A)$
(which has dimension (1,0) if q is even and dimension (0,1) if q is odd) as multiplication by the Berezinian.
See also
• Berezin integration
References
• Berezin, Feliks Aleksandrovich (1966) [1965], The method of second quantization, Pure and Applied Physics, vol. 24, Boston, MA: Academic Press, ISBN 978-0-12-089450-5, MR 0208930
• Deligne, Pierre; Morgan, John W. (1999), "Notes on supersymmetry (following Joseph Bernstein)", in Deligne, Pierre; Etingof, Pavel; Freed, Daniel S.; Jeffrey, Lisa C.; Kazhdan, David; Morgan, John W.; Morrison, David R.; Witten., Edward (eds.), Quantum fields and strings: a course for mathematicians, Vol. 1, Providence, R.I.: American Mathematical Society, pp. 41–97, ISBN 978-0-8218-1198-6, MR 1701597
• Manin, Yuri Ivanovich (1997), Gauge Field Theory and Complex Geometry (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-61378-7
| Wikipedia |
Superegg
In geometry, a superegg is a solid of revolution obtained by rotating an elongated superellipse with exponent greater than 2 around its longest axis. It is a special case of superellipsoid.
Unlike an elongated ellipsoid, an elongated superegg can stand upright on a flat surface, or on top of another superegg.[1] This is due to its curvature being zero at the tips. The shape was popularized by Danish poet and scientist Piet Hein (1905–1996). Supereggs of various materials, including brass, were sold as novelties or "executive toys" in the 1960s.
Mathematical description
The superegg is a superellipsoid whose horizontal cross-sections are circles. It is defined by the inequality
$\left|{\frac {\sqrt {x^{2}+y^{2}}}{R}}\right|^{p}+\left|{\frac {z}{h}}\right|^{p}\leq 1$
where R is the horizontal radius at the "equator" (the widest part), and h is one half of the height. The exponent p determines the degree of flattening at the tips and equator. Hein's choice was p = 2.5 (the same one he used for the Sergels Torg roundabout), and R/h = 3/4.[2]
The definition can be changed to use an equality rather than an inequality; this makes the superegg a surface of revolution rather than a solid.[3]
Volume
The volume of a superegg can be derived via squigonometry, a generalization of trigonometry to squircles; the resulting formula involves the gamma function.
$V={\frac {4\pi hR^{2}}{3p}}{\frac {\Gamma ({\frac {1}{p}})\Gamma ({\frac {2}{p}})}{\Gamma ({\frac {3}{p}})}}$[4]
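The closed form can be compared with a direct numerical integration of the solid of revolution: the cross-section at height $z$ is a disc of radius $R\left(1-|z/h|^{p}\right)^{1/p}$, so the volume is $2\pi R^{2}h\int _{0}^{1}(1-u^{p})^{2/p}\,du$. The sketch below runs both computations for Hein's exponent $p=2.5$ and the proportion $R/h=3/4$; the absolute scale ($R=3$, $h=4$) is an arbitrary illustrative choice.

```python
from math import gamma, pi
from scipy.integrate import quad

p, R, h = 2.5, 3.0, 4.0          # Hein's exponent and R/h = 3/4 (illustrative units)

# Closed form: V = (4*pi*h*R^2)/(3p) * Gamma(1/p)Gamma(2/p)/Gamma(3/p).
closed = (4 * pi * h * R**2) / (3 * p) * gamma(1 / p) * gamma(2 / p) / gamma(3 / p)

# Solid of revolution: r(z) = R*(1 - |z/h|^p)^{1/p}, so
# V = pi * integral_{-h}^{h} r(z)^2 dz = 2*pi*R^2*h * integral_0^1 (1 - u^p)^{2/p} du.
numeric = 2 * pi * R**2 * h * quad(lambda u: (1 - u**p) ** (2 / p), 0, 1)[0]

print(closed, numeric)           # the two values agree to quadrature accuracy
```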
See also
Wikimedia Commons has media related to Superegg.
• Egg of Columbus
References
1. Gardner, Martin (1977). "Piet Hein's Superellipse". Mathematical Carnival. A New Round-Up of Tantalizers and Puzzles from Scientific American. New York: Vintage Press. pp. 240–254. ISBN 978-0-394-72349-5.
2. Piet Heins Superellipse (in Danish)
3. Weisstein, Eric W. "Superegg." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Superegg.html
4. Robert D. Poodiack (April 2016). "Squigonometry, Hyperellipses, and Supereggs". Mathematics Magazine Vol. 89, No. 2: 100–101.
| Wikipedia |
Superelement
A superelement is a finite element method technique which consists of defining a new type of finite element by grouping and processing a set of finite elements. A superelement describes a part of a problem, and can be locally solved, before being implemented in the global problem. Substructuring a problem by means of superelements may facilitate the division of labor and overcome computer memory limitations.
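For static problems, a standard way to build a superelement is static condensation: the grouped elements' interior degrees of freedom are eliminated with a Schur complement, leaving a reduced stiffness matrix and load vector on the boundary degrees of freedom through which the superelement connects to the rest of the model. The sketch below illustrates that reduction on a generic symmetric positive-definite system; the matrix, load vector and interior/boundary split are placeholders rather than a specific element formulation.

```python
import numpy as np

def condense(K, f, interior, boundary):
    """Static condensation of a symmetric stiffness system K u = f:
    eliminate the interior DOFs, returning the superelement's reduced
    stiffness K_bb - K_bi K_ii^{-1} K_ib and reduced load vector."""
    K_ii = K[np.ix_(interior, interior)]
    K_ib = K[np.ix_(interior, boundary)]
    K_bi = K[np.ix_(boundary, interior)]
    K_bb = K[np.ix_(boundary, boundary)]
    K_red = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)
    f_red = f[boundary] - K_bi @ np.linalg.solve(K_ii, f[interior])
    return K_red, f_red

# Generic placeholder system (not a specific element formulation).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
K = A @ A.T + 6 * np.eye(6)                    # symmetric positive definite
f = rng.normal(size=6)
interior, boundary = [0, 1, 2], [3, 4, 5]

K_red, f_red = condense(K, f, interior, boundary)

# Solving the reduced (superelement) system reproduces the boundary part
# of the full solution exactly.
u_full = np.linalg.solve(K, f)
u_boundary = np.linalg.solve(K_red, f_red)
assert np.allclose(u_boundary, u_full[boundary])
```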
History
Superelements were invented in the aerospace industry, where the complexity and size of problems exceeded the solving capabilities of the computational hardware. The development of superelements made it possible to solve larger problems by breaking down complex systems such as complete airplanes.
References
• Zu-Qing Qu (2004). Model Order Reduction Techniques: with Applications in Finite Element Analysis. Springer. p. 257. ISBN 1852338075.
• Robert D. Cook; et al. (2002). Concepts and Applications of Finite Element Analysis. John Wiley & Sons. Inc. p. 359. ISBN 0471356050.
| Wikipedia |
Superellipse
A superellipse, also known as a Lamé curve after Gabriel Lamé, is a closed curve resembling the ellipse, retaining the geometric features of semi-major axis and semi-minor axis, and symmetry about them, but a different overall shape.
In the Cartesian coordinate system, the set of all points $(x,y)$ on the curve satisfy the equation
$\left|{\frac {x}{a}}\right|^{n}\!\!+\left|{\frac {y}{b}}\right|^{n}\!=1,$
where $n,a$ and $b$ are positive numbers, and the vertical bars around a number indicate the absolute value of the number. The 3-dimensional generalization is called a superellipsoid (some literature also refers to it as a superquadric).[1][2]
In the polar coordinate system, the superellipse is the set of all points $(r,\theta )$ that satisfy the equation:
$r=\left(\left|{\frac {\cos(\theta )}{a}}\right|^{n}\!\!+\left|{\frac {\sin(\theta )}{b}}\right|^{n}\!\right)^{-1/n}\!.$
Specific cases
This formula defines a closed curve contained in the rectangle −a ≤ x ≤ +a and −b ≤ y ≤ +b. The parameters a and b are called the semi-diameters of the curve. The overall shape of the curve is determined by the value of the exponent n, as follows:
• $0<n<1$: The superellipse looks like a four-armed star with concave (inwards-curved) sides. For n = 1/2, in particular, each of the four arcs is a segment of a parabola. An astroid is the special case a = b, n = 2/3.
• $n=1$: The curve is a rhombus with corners (±a, 0) and (0, ±b).
• $1<n<2$: The curve looks like a rhombus with the same corners but with convex (outwards-curved) sides. The curvature increases without limit as one approaches its extreme points.
• $n=2$: The curve is an ordinary ellipse (in particular, a circle if a = b).
• $n>2$: The curve looks superficially like a rectangle with rounded corners. The curvature is zero at the points (±a, 0) and (0, ±b).
If n < 2, the figure is also called a hypoellipse; if n > 2, a hyperellipse.
When n ≥ 1 and a = b, the superellipse is the boundary of a ball of R2 in the n-norm.
The extreme points of the superellipse are (±a, 0) and (0, ±b), and its four "corners" are (±sa, ±sb), where $s=2^{-1/n}$ (sometimes called the "superness"[3]).
Mathematical properties
When n is a positive rational number p/q (in lowest terms), then each quadrant of the superellipse is a plane algebraic curve of order pq.[4] In particular, when a = b = 1 and n is an even integer, then it is a Fermat curve of degree n. In that case it is non-singular, but in general it will be singular. If the numerator is not even, then the curve is pieced together from portions of the same algebraic curve in different orientations.
The curve is given by the parametric equations (with parameter $t$ having no elementary geometric interpretation)
$\left.{\begin{aligned}x\left(t\right)&=\pm a\cos ^{\frac {2}{n}}t\\y\left(t\right)&=\pm b\sin ^{\frac {2}{n}}t\end{aligned}}\right\}\qquad 0\leq t\leq {\frac {\pi }{2}}$
where each ± can be chosen separately so that each value of $t$ gives four points on the curve. Equivalently, letting $t$ range over $0\leq t<2\pi ,$
${\begin{aligned}x\left(t\right)&={\left|\cos t\right|}^{\frac {2}{n}}\cdot a\operatorname {sgn}(\cos t)\\y\left(t\right)&={\left|\sin t\right|}^{\frac {2}{n}}\cdot b\operatorname {sgn}(\sin t)\end{aligned}}$
where the sign function is
$\operatorname {sgn}(w)={\begin{cases}-1,&w<0\\0,&w=0\\+1,&w>0.\end{cases}}$
Here $t$ is not the angle between the positive horizontal axis and the ray from the origin to the point, since the tangent of this angle equals y/x while in the parametric expressions $ {\frac {y}{x}}={\frac {b}{a}}(\tan t)^{2/n}\neq \tan t.$
The area inside the superellipse can be expressed in terms of the gamma function as
$\mathrm {Area} =4ab{\frac {\left(\Gamma \left(1+{\tfrac {1}{n}}\right)\right)^{2}}{\Gamma \left(1+{\tfrac {2}{n}}\right)}},$
or in terms of the beta function as
$\mathrm {Area} ={\frac {4ab}{n}}\mathrm {B} \!\left({\frac {1}{n}},{\frac {1}{n}}+1\right).$
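For illustration, the signed parametric form and the area formula above translate directly into code; a minimal Python sketch (function names are illustrative):
import numpy as np
from math import gamma

def superellipse_points(a, b, n, num=400):
    # Signed parametric form: x = |cos t|^(2/n) * a * sgn(cos t), and similarly for y.
    t = np.linspace(0, 2 * np.pi, num, endpoint=False)
    x = np.abs(np.cos(t)) ** (2 / n) * a * np.sign(np.cos(t))
    y = np.abs(np.sin(t)) ** (2 / n) * b * np.sign(np.sin(t))
    return x, y

def superellipse_area(a, b, n):
    # Area = 4ab * Gamma(1 + 1/n)^2 / Gamma(1 + 2/n)
    return 4 * a * b * gamma(1 + 1 / n) ** 2 / gamma(1 + 2 / n)

print(superellipse_area(1, 1, 2))      # n = 2, a = b = 1: the unit circle, area pi
print(superellipse_area(6, 5, 2.5))    # the Sergels Torg proportions (n = 2.5, a/b = 6/5)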
The pedal curve is relatively straightforward to compute. Specifically, the pedal of
$\left|{\frac {x}{a}}\right|^{n}\!+\left|{\frac {y}{b}}\right|^{n}\!=1,$
is given in polar coordinates by[5]
$(a\cos \theta )^{\tfrac {n}{n-1}}+(b\sin \theta )^{\tfrac {n}{n-1}}=r^{\tfrac {n}{n-1}}.$
Generalizations
The superellipse is further generalized as:
$\left|{\frac {x}{a}}\right|^{m}\!\!+\left|{\frac {y}{b}}\right|^{n}\!=1;\qquad m,n>0.$
or
${\begin{aligned}x\left(t\right)&={\left|\cos t\right|}^{\frac {2}{m}}\cdot a\operatorname {sgn}(\cos t)\\y\left(t\right)&={\left|\sin t\right|}^{\frac {2}{n}}\cdot b\operatorname {sgn}(\sin t).\end{aligned}}$
Note that $t$ is a parameter which is not linked to the physical angle through elementary functions.
History
The general Cartesian notation of the form comes from the French mathematician Gabriel Lamé (1795–1870), who generalized the equation for the ellipse.
Hermann Zapf's typeface Melior, published in 1952, uses superellipses for letters such as o. Thirty years later Donald Knuth would build the ability to choose between true ellipses and superellipses (both approximated by cubic splines) into his Computer Modern type family.
The superellipse was named by the Danish poet and scientist Piet Hein (1905–1996) though he did not discover it as it is sometimes claimed. In 1959, city planners in Stockholm, Sweden announced a design challenge for a roundabout in their city square Sergels Torg. Piet Hein's winning proposal was based on a superellipse with n = 2.5 and a/b = 6/5.[6] As he explained it:
Man is the animal that draws lines which he himself then stumbles over. In the whole pattern of civilization there have been two tendencies, one toward straight lines and rectangular patterns and one toward circular lines. There are reasons, mechanical and psychological, for both tendencies. Things made with straight lines fit well together and save space. And we can move easily — physically or mentally — around things made with round lines. But we are in a straitjacket, having to accept one or the other, when often some intermediate form would be better. To draw something freehand — such as the patchwork traffic circle they tried in Stockholm — will not do. It isn't fixed, isn't definite like a circle or square. You don't know what it is. It isn't esthetically satisfying. The super-ellipse solved the problem. It is neither round nor rectangular, but in between. Yet it is fixed, it is definite — it has a unity.
Sergels Torg was completed in 1967. Meanwhile, Piet Hein went on to use the superellipse in other artifacts, such as beds, dishes, tables, etc.[7] By rotating a superellipse around the longest axis, he created the superegg, a solid egg-like shape that could stand upright on a flat surface, and was marketed as a novelty toy.
In 1968, when negotiators in Paris for the Vietnam War could not agree on the shape of the negotiating table, Balinski, Kieron Underwood and Holt suggested a superelliptical table in a letter to the New York Times.[6] The superellipse was used for the shape of the 1968 Azteca Olympic Stadium, in Mexico City.
Waldo R. Tobler developed a map projection, the Tobler hyperelliptical projection, published in 1973,[8] in which the meridians are arcs of superellipses.
The logo for news company The Local consists of a tilted superellipse matching the proportions of Sergels Torg. Three connected superellipses are used in the logo of the Pittsburgh Steelers.
In computing, mobile operating system iOS uses a superellipse curve for app icons, replacing the rounded corners style used up to version 6.[9]
See also
• Astroid, the superellipse with n = 2⁄3 and a = b, is a hypocycloid with four cusps.
• Deltoid curve, the hypocycloid of three cusps.
• Squircle, the superellipse with n = 4 and a = b, looks like "The Four-Cornered Wheel."
• Reuleaux triangle, "The Three-Cornered Wheel."
• Superformula, a generalization of the superellipse.
• Superquadrics: superellipsoids and supertoroids, the three-dimensional "relatives" of superellipses.
• Superelliptic curve, equation of the form Yn = f(X).
• Lp spaces
References
1. Barr (1981). "Superquadrics and Angle-Preserving Transformations". IEEE Computer Graphics and Applications. 1 (1): 11–23. doi:10.1109/MCG.1981.1673799. ISSN 1558-1756. S2CID 9389947.
2. Liu, Weixiao; Wu, Yuwei; Ruan, Sipu; Chirikjian, Gregory S. (2022). "Robust and Accurate Superquadric Recovery: A Probabilistic Approach". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2666–2675. arXiv:2111.14517. doi:10.1109/CVPR52688.2022.00270. ISBN 978-1-6654-6946-3. S2CID 244715106.
3. Donald Knuth: The METAFONTbook, p. 126
4. "Astroid" (PDF). Xah Code. Retrieved 14 March 2023.
5. J. Edwards (1892). Differential Calculus. London: MacMillan and Co. pp. 164.
6. Gardner, Martin (1977), "Piet Hein's Superellipse", Mathematical Carnival. A New Round-Up of Tantalizers and Puzzles from Scientific American, New York: Vintage Press, pp. 240–254, ISBN 978-0-394-72349-5
7. The Superellipse, in The Guide to Life, The Universe and Everything by BBC (27 June 2003)
8. Tobler, Waldo (1973), "The hyperelliptical and other new pseudocylindrical equal area map projections", Journal of Geophysical Research, 78 (11): 1753–1759, Bibcode:1973JGR....78.1753T, CiteSeerX 10.1.1.495.6424, doi:10.1029/JB078i011p01753.
9. Mynttinen, Ivo. "The iOS Design Guidelines".
External links
Wikimedia Commons has media related to Superellipse.
• Sokolov, D.D. (2001) [1994], "Lamé curve", Encyclopedia of Mathematics, EMS Press
• "Lamé Curve" at MathCurve.
• Weisstein, Eric W. "Superellipse". MathWorld.
• O'Connor, John J.; Robertson, Edmund F., "Lame Curves", MacTutor History of Mathematics Archive, University of St Andrews
• "Super Ellipse" on 2dcurves.com
• Superellipse Calculator & Template Generator
• Superellipse fitting toolbox in MATLAB
• C code for fitting superellipses
| Wikipedia |
Superellipsoid
In mathematics, a superellipsoid (or super-ellipsoid) is a solid whose horizontal sections are superellipses (Lamé curves) with the same squareness parameter $\epsilon _{2}$, and whose vertical sections through the center are superellipses with the squareness parameter $\epsilon _{1}$. It is a generalization of an ellipsoid, which is a special case when $\epsilon _{1}=\epsilon _{2}=1$.[2]
Superellipsoids as computer graphics primitives were popularized by Alan H. Barr (who used the name "superquadrics" to refer to both superellipsoids and supertoroids).[2][3] In the modern computer vision and robotics literature, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely used shapes among the superquadrics.[4][5]
Superellipsoids have a rich shape vocabulary, including cuboids, cylinders, ellipsoids, octahedra and their intermediates.[6] They have become an important geometric primitive widely used in computer vision,[6][5][7] robotics,[4] and physical simulation.[8] The main advantage of describing objects and environments with superellipsoids is their conciseness and expressiveness in shape.[6] Furthermore, a closed-form expression of the Minkowski sum between two superellipsoids is available.[9] This makes them a desirable geometric primitive for robot grasping, collision detection, and motion planning.[4] Useful tools and algorithms for superquadric visualization, sampling, and recovery have been released as open source.
Special cases
A handful of notable mathematical figures can arise as special cases of superellipsoids given the correct set of values, which are depicted in the above graphic:
• Cylinder
• Sphere
• Steinmetz solid
• Bicone
• Regular octahedron
• Cube, as a limiting case where the exponents tend to infinity
Piet Hein's supereggs are also special cases of superellipsoids.
Formulas
Basic (normalized) superellipsoid
The basic superellipsoid is defined by the implicit function
$f(x,y,z)=\left(x^{\frac {2}{\epsilon _{2}}}+y^{\frac {2}{\epsilon _{2}}}\right)^{\epsilon _{2}/\epsilon _{1}}+z^{\frac {2}{\epsilon _{1}}}$
The parameters $\epsilon _{1}$ and $\epsilon _{2}$ are positive real numbers that control the squareness of the shape.
The surface of the superellipsoid is defined by the equation:
$f(x,y,z)=1$
For any given point $(x,y,z)\in \mathbb {R} ^{3}$, the point lies inside the superellipsoid if $f(x,y,z)<1$, and outside if $f(x,y,z)>1$.
Any "parallel of latitude" of the superellipsoid (a horizontal section at any constant z between -1 and +1) is a Lamé curve with exponent $2/\epsilon _{2}$, scaled by $a=(1-z^{\frac {2}{\epsilon _{1}}})^{\frac {\epsilon _{1}}{2}}$, which is
$\left({\frac {x}{a}}\right)^{\frac {2}{\epsilon _{2}}}+\left({\frac {y}{a}}\right)^{\frac {2}{\epsilon _{2}}}=1.$
Any "meridian of longitude" (a section by any vertical plane through the origin) is a Lamé curve with exponent $2/\epsilon _{1}$, stretched horizontally by a factor w that depends on the sectioning plane. Namely, if $x=u\cos \theta $ and $y=u\sin \theta $, for a given $\theta $, then the section is
$\left({\frac {u}{w}}\right)^{\frac {2}{\epsilon _{1}}}+z^{\frac {2}{\epsilon _{1}}}=1,$
where
$w=(\cos ^{\frac {2}{\epsilon _{2}}}\theta +\sin ^{\frac {2}{\epsilon _{2}}}\theta )^{-{\frac {\epsilon _{2}}{2}}}.$
In particular, if $\epsilon _{2}$ is 1, the horizontal cross-sections are circles, and the horizontal stretching $w$ of the vertical sections is 1 for all planes. In that case, the superellipsoid is a solid of revolution, obtained by rotating the Lamé curve with exponent $2/\epsilon _{1}$ around the vertical axis.
Superellipsoid
The basic shape above extends from −1 to +1 along each coordinate axis. The general superellipsoid is obtained by scaling the basic shape along each axis by factors $a_{x}$, $a_{y}$, $a_{z}$, the semi-diameters of the resulting solid. The implicit function is [2]
$F(x,y,z)=\left(\left({\frac {x}{a_{x}}}\right)^{\frac {2}{\epsilon _{2}}}+\left({\frac {y}{a_{y}}}\right)^{\frac {2}{\epsilon _{2}}}\right)^{\frac {\epsilon _{2}}{\epsilon _{1}}}+\left({\frac {z}{a_{z}}}\right)^{\frac {2}{\epsilon _{1}}}$.
Similarly, the surface of the superellipsoid is defined by the equation
$F(x,y,z)=1$
For any given point $(x,y,z)\in \mathbb {R} ^{3}$, the point lies inside the superellipsoid if $F(x,y,z)<1$, and outside if $F(x,y,z)>1$.
Therefore, the implicit function is also called the inside-outside function of the superellipsoid.[2]
The superellipsoid has a parametric representation in terms of surface parameters $\eta \in [-\pi /2,\pi /2)$, $\omega \in [-\pi ,\pi )$.[3]
$x(\eta ,\omega )=a_{x}\cos ^{\epsilon _{1}}\eta \cos ^{\epsilon _{2}}\omega $
$y(\eta ,\omega )=a_{y}\cos ^{\epsilon _{1}}\eta \sin ^{\epsilon _{2}}\omega $
$z(\eta ,\omega )=a_{z}\sin ^{\epsilon _{1}}\eta $
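Here the powers of the cosine and sine are usually read as signed powers, $c^{\epsilon }=\operatorname {sgn}(c)\,|c|^{\epsilon }$, so that the surface covers all octants. A minimal Python sketch of this parametrization (function names are illustrative):
import numpy as np

def spow(c, e):
    # Signed power: sgn(c) * |c|^e, the usual convention in the parametrization above.
    return np.sign(c) * np.abs(c) ** e

def superellipsoid_surface(ax, ay, az, eps1, eps2, n_eta=60, n_omega=120):
    # Sample the surface on a grid of (eta, omega) parameter values.
    eta, omega = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, n_eta),
                             np.linspace(-np.pi, np.pi, n_omega))
    x = ax * spow(np.cos(eta), eps1) * spow(np.cos(omega), eps2)
    y = ay * spow(np.cos(eta), eps1) * spow(np.sin(omega), eps2)
    z = az * spow(np.sin(eta), eps1)
    return x, y, z

# x, y, z = superellipsoid_surface(1, 1, 2, 0.5, 1.0)  # eps2 = 1 gives circular horizontal sections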
General posed superellipsoid
In computer vision and robotic applications, a superellipsoid with a general pose in the 3D Euclidean space is usually of more interest.[6][5]
For a given Euclidean transformation of the superellipsoid frame $g=[\mathbf {R} \in SO(3),\mathbf {t} \in \mathbb {R} ^{3}]\in SE(3)$ relative to the world frame, the implicit function of a generally posed superellipsoid surface defined in the world frame is[6]
$F\left(g^{-1}\circ (x,y,z)\right)=1$
where $\circ $ is the transformation operation that maps the point $(x,y,z)\in \mathbb {R} ^{3}$ in the world frame into the canonical superellipsoid frame.
Volume of superellipsoid
The volume encompassed by the superellipsoid surface can be expressed in terms of the beta function $\beta (\cdot ,\cdot )$,[10]
$V(\epsilon _{1},\epsilon _{2},a_{x},a_{y},a_{z})=2a_{x}a_{y}a_{z}\epsilon _{1}\epsilon _{2}\beta ({\frac {\epsilon _{1}}{2}},\epsilon _{1}+1)\beta ({\frac {\epsilon _{2}}{2}},{\frac {\epsilon _{2}+2}{2}})$
or equivalently with the Gamma function $\Gamma (\cdot )$, since
$\beta (m,n)={\frac {\Gamma (m)\Gamma (n)}{\Gamma (m+n)}}$
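The volume formula is simple to evaluate; a minimal Python sketch (function names are illustrative):
from math import gamma, pi

def beta(m, n):
    return gamma(m) * gamma(n) / gamma(m + n)

def superellipsoid_volume(ax, ay, az, eps1, eps2):
    # V = 2 * ax*ay*az * eps1*eps2 * B(eps1/2, eps1 + 1) * B(eps2/2, (eps2 + 2)/2)
    return (2 * ax * ay * az * eps1 * eps2
            * beta(eps1 / 2, eps1 + 1) * beta(eps2 / 2, (eps2 + 2) / 2))

print(superellipsoid_volume(1, 1, 1, 1, 1))   # the unit sphere: 4*pi/3 ~ 4.18879
print(4 * pi / 3)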
Recovery from data
Recovering the superellipsoid (or superquadric) representation from raw data (e.g., point clouds, meshes, images, and voxels) is an important task in computer vision,[11][7][6][5] robotics,[4] and physical simulation.[8]
Traditional computational methods model the problem as a least-squares problem.[11] The goal is to find the optimal set of superellipsoid parameters $\theta \doteq [\epsilon _{1},\epsilon _{2},a_{x},a_{y},a_{z},g]$ that minimizes an objective function. In addition to the shape parameters, $g\in SE(3)$ is the pose of the superellipsoid frame with respect to the world coordinate frame.
There are two commonly used objective functions.[12] The first one is constructed directly based on the implicit function[11]
$G_{1}(\theta )=a_{x}a_{y}a_{z}\sum _{i=1}^{N}\left(F^{\epsilon _{1}}\left(g^{-1}\circ (x_{i},y_{i},z_{i})\right)-1\right)^{2}$
Minimizing the objective function yields a recovered superellipsoid as close as possible to all the input points $\{(x_{i},y_{i},z_{i})\in \mathbb {R} ^{3},i=1,2,...,N\}$. At the same time, the factor $a_{x}a_{y}a_{z}$ is proportional to the volume of the superellipsoid, and thus has the effect of minimizing the volume as well.
The other objective function minimizes the radial distance between the points and the superellipsoid. That is,[13][12]
$G_{2}(\theta )=\sum _{i=1}^{N}\left(\left|r_{i}\right|\left|1-F^{-{\frac {\epsilon _{1}}{2}}}\left(g^{-1}\circ (x_{i},y_{i},z_{i})\right)\right|\right)^{2}$, where $r_{i}=\|(x_{i},y_{i},z_{i})\|_{2}$
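As a simplified illustration of the first (implicit-function) objective, the following Python sketch fits an axis-aligned superellipsoid (the rotation part of the pose $g$ is omitted, leaving only a translation) to an N×3 point cloud with SciPy's nonlinear least squares; the function names, bounds and initial guess are illustrative choices, and this is not the EMS method described below:
import numpy as np
from scipy.optimize import least_squares

def inside_outside(points, scales, eps1, eps2):
    # Implicit (inside-outside) function F for an axis-aligned superellipsoid at the origin.
    x, y, z = np.abs(points / scales).T
    return (x ** (2 / eps2) + y ** (2 / eps2)) ** (eps2 / eps1) + z ** (2 / eps1)

def residuals(theta, points):
    # theta = (eps1, eps2, ax, ay, az, tx, ty, tz); the sum of squared residuals equals G1.
    eps1, eps2, ax, ay, az, tx, ty, tz = theta
    F = inside_outside(points - [tx, ty, tz], np.array([ax, ay, az]), eps1, eps2)
    return np.sqrt(ax * ay * az) * (F ** eps1 - 1.0)

def fit_superellipsoid(points, theta0):
    # points: (N, 3) array; theta0: initial parameter guess.
    lo = [0.1, 0.1, 1e-3, 1e-3, 1e-3, -np.inf, -np.inf, -np.inf]
    hi = [2.0, 2.0, np.inf, np.inf, np.inf, np.inf, np.inf, np.inf]
    return least_squares(residuals, theta0, args=(points,), bounds=(lo, hi)).x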
A probabilistic method called EMS is designed to deal with noise and outliers.[6] In this method, the superellipsoid recovery is reformulated as a maximum likelihood estimation problem, and an optimization method is proposed to avoid local minima utilizing geometric similarities of the superellipsoids.
The method has been further extended by modeling with nonparametric Bayesian techniques to recover multiple superellipsoids simultaneously.[14]
References
1. "POV-Ray: Documentation: 2.4.1.11 Superquadric Ellipsoid".
2. Barr (1981). "Superquadrics and Angle-Preserving Transformations". IEEE Computer Graphics and Applications. 1 (1): 11–23. doi:10.1109/MCG.1981.1673799. ISSN 1558-1756. S2CID 9389947.
3. Barr, A.H. (1992), Rigid Physically Based Superquadrics. Chapter III.8 of Graphics Gems III, edited by D. Kirk, pp. 137–159
4. Ruan, Sipu; Wang, Xiaoli; Chirikjian, Gregory S. (2022). "Collision Detection for Unions of Convex Bodies With Smooth Boundaries Using Closed-Form Contact Space Parameterization". IEEE Robotics and Automation Letters. 7 (4): 9485–9492. doi:10.1109/LRA.2022.3190629. ISSN 2377-3766. S2CID 250543506.
5. Paschalidou, Despoina; Van Gool, Luc; Geiger, Andreas (2020). "Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image". 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1057–1067. doi:10.1109/CVPR42600.2020.00114. ISBN 978-1-7281-7168-5. S2CID 214634317.
6. Liu, Weixiao; Wu, Yuwei; Ruan, Sipu; Chirikjian, Gregory S. (2022). "Robust and Accurate Superquadric Recovery: A Probabilistic Approach". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2666–2675. arXiv:2111.14517. doi:10.1109/CVPR52688.2022.00270. ISBN 978-1-6654-6946-3. S2CID 244715106.
7. Paschalidou, Despoina; Ulusoy, Ali Osman; Geiger, Andreas (2019). "Superquadrics Revisited: Learning 3D Shape Parsing Beyond Cuboids". 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10336–10345. arXiv:1904.09970. doi:10.1109/CVPR.2019.01059. ISBN 978-1-7281-3293-8. S2CID 128265641.
8. Lu, G.; Third, J. R.; Müller, C. R. (2012-08-20). "Critical assessment of two approaches for evaluating contacts between super-quadric shaped particles in DEM simulations". Chemical Engineering Science. 78: 226–235. Bibcode:2012ChEnS..78..226L. doi:10.1016/j.ces.2012.05.041. ISSN 0009-2509.
9. Ruan, Sipu; Chirikjian, Gregory S. (2022-02-01). "Closed-form Minkowski sums of convex bodies with smooth positively curved boundaries". Computer-Aided Design. 143: 103133. arXiv:2012.15461. doi:10.1016/j.cad.2021.103133. ISSN 0010-4485. S2CID 229923980.
10. "SUPERQUADRICS AND THEIR GEOMETRIC PROPERTIES" (PDF).
11. Bajcsy, R.; Solina, F. (1987). "Three dimensional object representation revisited". Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV): 231–240.
12. Zhang, Yan (2003-10-01). "Experimental comparison of superquadric fitting objective functions". Pattern Recognition Letters. 24 (14): 2185–2193. Bibcode:2003PaReL..24.2185Z. doi:10.1016/S0167-8655(02)00400-2. ISSN 0167-8655.
13. Gross, A.D.; Boult, T.E. (1988). "Error of Fit Measures for Recovering Parametric Solids". [1988 Proceedings] Second International Conference on Computer Vision. pp. 690–694. doi:10.1109/CCV.1988.590052. ISBN 0-8186-0883-8. S2CID 43541446.
14. Wu, Yuwei; Liu, Weixiao; Ruan, Sipu; Chirikjian, Gregory S. (2022). Avidan, Shai; Brostow, Gabriel; Cissé, Moustapha; Farinella, Giovanni Maria; Hassner, Tal (eds.). "Primitive-Based Shape Abstraction via Nonparametric Bayesian Inference". Computer Vision – ECCV 2022. Lecture Notes in Computer Science. Cham: Springer Nature Switzerland. 13687: 479–495. arXiv:2203.14714. doi:10.1007/978-3-031-19812-0_28. ISBN 978-3-031-19812-0.
Bibliography
• Barr, "Superquadrics and Angle-Preserving Transformations," in IEEE Computer Graphics and Applications, vol. 1, no. 1, pp. 11–23, Jan. 1981, doi: 10.1109/MCG.1981.1673799.
• Aleš Jaklič, Aleš Leonardis, Franc Solina, Segmentation and Recovery of Superquadrics. Kluwer Academic Publishers, Dordrecht, 2000.
• Aleš Jaklič, Franc Solina (2003) Moments of Superellipsoids and their Application to Range Image Registration. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, 33 (4). pp. 648–657
• W. Liu, Y. Wu, S. Ruan and G. S. Chirikjian, "Robust and Accurate Superquadric Recovery: a Probabilistic Approach," 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 2666–2675, doi: 10.1109/CVPR52688.2022.00270.
External links
• Bibliography: SuperQuadric Representations
• Superquadric Tensor Glyphs
• SuperQuadric Ellipsoids and Toroids, OpenGL Lighting, and Timing
• Superquadratics by Robert Kragler, The Wolfram Demonstrations Project.
• Superquadrics Recovery Algorithm in Python and MATLAB
| Wikipedia |
Superelliptic curve
In mathematics, a superelliptic curve is an algebraic curve defined by an equation of the form
$y^{m}=f(x),$
where $m\geq 2$ is an integer and f is a polynomial of degree $d\geq 3$ with coefficients in a field $k$; more precisely, it is the smooth projective curve whose function field is defined by this equation. The case $m=2$ and $d=3$ is an elliptic curve, the case $m=2$ and $d\geq 5$ is a hyperelliptic curve, and the case $m=3$ and $d\geq 4$ is an example of a trigonal curve.
Some authors impose additional restrictions, for example, that the integer $m$ should not be divisible by the characteristic of $k$, that the polynomial $f$ should be square free, that the integers m and d should be coprime, or some combination of these.[1]
The Diophantine problem of finding integer points on a superelliptic curve can be solved by a method similar to one used for the resolution of hyperelliptic equations: a Siegel identity is used to reduce to a Thue equation.
Definition
More generally, a superelliptic curve is a cyclic branched covering
$C\to \mathbb {P} ^{1}$
of the projective line of degree $m\geq 2$ coprime to the characteristic of the field of definition. The degree $m$ of the covering map is also referred to as the degree of the curve. By cyclic covering we mean that the Galois group of the covering (i.e., the corresponding function field extension) is cyclic.
The fundamental theorem of Kummer theory implies that a superelliptic curve of degree $m$ defined over a field $k$ has an affine model given by an equation
$y^{m}=f(x)$
for some polynomial $f\in k[x]$ of degree $m$ with each root having order $<m$, provided that $C$ has a point defined over $k$, that is, if the set $C(k)$ of $k$-rational points of $C$ is not empty. For example, this is always the case when $k$ is algebraically closed. In particular, function field extension $k(C)/k(x)$ is a Kummer extension.
Ramification
Let $C:y^{m}=f(x)$ be a superelliptic curve defined over an algebraically closed field $k$, and $B'\subset k$ denote the set of roots of $f$ in $k$. Define set
$B={\begin{cases}B'&{\text{ if }}m{\text{ divides }}\deg(f),\\B'\cup \{\infty \}&{\text{ otherwise.}}\end{cases}}$
Then $B\subset \mathbb {P} ^{1}(k)$ is the set of branch points of the covering map $C\to \mathbb {P} ^{1}$ given by $x$.
For an affine branch point $\alpha \in B$, let $r_{\alpha }$ denote the order of $\alpha $ as a root of $f$. As before, we assume that $1\leq r_{\alpha }<m$. Then
$e_{\alpha }={\frac {m}{(m,r_{\alpha })}}$
is the ramification index $e(P_{\alpha ,i})$ at each of the $(m,r_{\alpha })$ ramification points $P_{\alpha ,i}$ of the curve lying over $\alpha \in \mathbb {A} ^{1}(k)\subset \mathbb {P} ^{1}(k)$ (that is actually true for any $\alpha \in k$).
For the point at infinity, define integer $0\leq r_{\infty }<m$ as follows. If
$s=\min\{t\in \mathbb {Z} \mid mt\geq \deg(f)\},$
then $r_{\infty }=ms-\deg(f)$. Note that $(m,r_{\infty })=(m,\deg(f))$. Then analogously to the other ramification points,
$e_{\infty }={\frac {m}{(m,r_{\infty })}}$
is the ramification index $e(P_{\infty ,i})$ at the $(m,r_{\infty })$ points $P_{\infty ,i}$ that lie over $\infty $. In particular, the curve is unramified over infinity if and only if its degree $m$ divides $\deg(f)$.
The curve $C$ defined as above is connected precisely when $m$ and the $r_{\alpha }$ are relatively prime (not necessarily pairwise), which is assumed to be the case.
Genus
By the Riemann-Hurwitz formula, the genus of a superelliptic curve is given by
$g={\frac {1}{2}}\left(m(|B|-2)-\sum _{\alpha \in B}(m,r_{\alpha })\right)+1.$
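For concreteness, the genus can be computed directly from $m$ and the multiplicities $r_{\alpha }$ of the distinct roots of $f$; a minimal Python sketch (the function name is illustrative):
from math import gcd, ceil

def superelliptic_genus(m, root_orders):
    # root_orders lists the multiplicity r_alpha (1 <= r_alpha < m) of each distinct
    # root of f; their sum is deg(f).  Infinity is a branch point iff m does not
    # divide deg(f), with r_infinity = m*ceil(deg(f)/m) - deg(f).
    d = sum(root_orders)
    branch = list(root_orders)
    if d % m != 0:
        branch.append(m * ceil(d / m) - d)
    return (m * (len(branch) - 2) - sum(gcd(m, r) for r in branch)) // 2 + 1

print(superelliptic_genus(2, [1, 1, 1]))     # y^2 = cubic with distinct roots: genus 1
print(superelliptic_genus(2, [1] * 6))       # hyperelliptic, deg f = 6: genus 2
print(superelliptic_genus(3, [1] * 4))       # trigonal y^3 = quartic: genus 3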
See also
• Hyperelliptic curve
• Branched covering
• Artin-Schreier curve
• Kummer theory
• Superellipse
References
1. Galbraith, S.D.; Paulhus, S.M.; Smart, N.P. (2002). "Arithmetic on superelliptic curves". Mathematics of Computation. 71: 394–405. doi:10.1090/S0025-5718-00-01297-7. MR 1863009.
• Hindry, Marc; Silverman, Joseph H. (2000). Diophantine Geometry: An Introduction. Graduate Texts in Mathematics. Vol. 201. Springer-Verlag. p. 361. ISBN 0-387-98981-1. Zbl 0948.11023.
• Koo, Ja Kyung (1991). "On holomorphic differentials of some algebraic function field of one variable over $\mathbb {C} $". Bull. Austral. Math. Soc. 43 (3): 399–405. doi:10.1017/S0004972700029245.
• Lang, Serge (1978). Elliptic Curves: Diophantine Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 231. Springer-Verlag. ISBN 0-387-08489-4.
• Shorey, T.N.; Tijdeman, R. (1986). Exponential Diophantine equations. Cambridge Tracts in Mathematics. Vol. 87. Cambridge University Press. ISBN 0-521-26826-5. Zbl 0606.10011.
• Smart, N. P. (1998). The Algorithmic Resolution of Diophantine Equations. London Mathematical Society Student Texts. Vol. 41. Cambridge University Press. ISBN 0-521-64633-2.
| Wikipedia |
Superfactorial
In mathematics, and more specifically number theory, the superfactorial of a positive integer $n$ is the product of the first $n$ factorials. They are a special case of the Jordan–Pólya numbers, which are products of arbitrary collections of factorials.
Definition
The $n$th superfactorial ${\mathit {sf}}(n)$ may be defined as:[1]
${\begin{aligned}{\mathit {sf}}(n)&=1!\cdot 2!\cdot \cdots n!=\prod _{i=1}^{n}i!=n!\cdot {\mathit {sf}}(n-1)\\&=1^{n}\cdot 2^{n-1}\cdot \cdots n=\prod _{i=1}^{n}i^{n+1-i}.\\\end{aligned}}$
Following the usual convention for the empty product, the superfactorial of 0 is 1. The sequence of superfactorials, beginning with ${\mathit {sf}}(0)=1$, is:[1]
1, 1, 2, 12, 288, 34560, 24883200, 125411328000, 5056584744960000, ... (sequence A000178 in the OEIS)
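The definition translates directly into code; a minimal Python sketch (the function name is illustrative):
from math import factorial, prod

def superfactorial(n):
    # sf(n) = 1! * 2! * ... * n!; the empty product gives sf(0) = 1.
    return prod(factorial(i) for i in range(1, n + 1))

print([superfactorial(n) for n in range(9)])
# [1, 1, 2, 12, 288, 34560, 24883200, 125411328000, 5056584744960000]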
Properties
Just as the factorials can be continuously interpolated by the gamma function, the superfactorials can be continuously interpolated by the Barnes G-function.[2]
According to an analogue of Wilson's theorem on the behavior of factorials modulo prime numbers, when $p$ is an odd prime number
${\mathit {sf}}(p-1)\equiv (p-1)!!{\pmod {p}},$
where $!!$ is the notation for the double factorial.[3]
For every integer $k$, the number ${\mathit {sf}}(4k)/(2k)!$ is a square number. This may be expressed as stating that, in the formula for ${\mathit {sf}}(4k)$ as a product of factorials, omitting one of the factorials (the middle one, $(2k)!$) results in a square product.[4] Additionally, if any $n+1$ integers are given, the product of their pairwise differences is always a multiple of ${\mathit {sf}}(n)$, and equals the superfactorial when the given numbers are consecutive.[1]
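These properties are easy to check numerically; a minimal Python sketch (reusing the superfactorial function above, with arbitrarily chosen integers for the last property):
from math import factorial, isqrt, prod
from itertools import combinations

def superfactorial(n):
    return prod(factorial(i) for i in range(1, n + 1))

# sf(4k) / (2k)! is a perfect square
for k in range(1, 5):
    q = superfactorial(4 * k) // factorial(2 * k)
    assert isqrt(q) ** 2 == q

# the product of pairwise differences of any n + 1 integers is a multiple of sf(n)
nums = [3, 7, 10, 18, 25]                    # n + 1 = 5 integers, so n = 4
diffs = prod(b - a for a, b in combinations(nums, 2))
assert diffs % superfactorial(4) == 0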
References
1. Sloane, N. J. A. (ed.), "Sequence A000178 (Superfactorials: product of first n factorials)", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation
2. Barnes, E. W. (1900), "The theory of the G-function", The Quarterly Journal of Pure and Applied Mathematics, 31: 264–314, JFM 30.0389.02
3. Aebi, Christian; Cairns, Grant (2015), "Generalizations of Wilson's theorem for double-, hyper-, sub- and superfactorials", The American Mathematical Monthly, 122 (5): 433–443, doi:10.4169/amer.math.monthly.122.5.433, JSTOR 10.4169/amer.math.monthly.122.5.433, MR 3352802, S2CID 207521192
4. White, D.; Anderson, M. (October 2020), "Using a superfactorial problem to provide extended problem-solving experiences", PRIMUS, 31 (10): 1038–1051, doi:10.1080/10511970.2020.1809039, S2CID 225372700
External links
• Weisstein, Eric W., "Superfactorial", MathWorld
| Wikipedia |
Superformula
The superformula is a generalization of the superellipse and was proposed by Johan Gielis around 2000.[1] Gielis suggested that the formula can be used to describe many complex shapes and curves that are found in nature. Gielis has filed a patent application related to the synthesis of patterns generated by the superformula, which expired effective 2020-05-10.[2]
In polar coordinates, with $r$ the radius and $\varphi $ the angle, the superformula is:
$r\left(\varphi \right)=\left(\left|{\frac {\cos \left({\frac {m\varphi }{4}}\right)}{a}}\right|^{n_{2}}+\left|{\frac {\sin \left({\frac {m\varphi }{4}}\right)}{b}}\right|^{n_{3}}\right)^{-{\frac {1}{n_{1}}}}.$
By choosing different values for the parameters $a,b,m,n_{1},n_{2},$ and $n_{3},$ different shapes can be generated.
The formula was obtained by generalizing the superellipse, named and popularized by Piet Hein, a Danish mathematician.
2D plots
In the following examples the values shown above each figure should be m, n1, n2 and n3.
A GNU Octave program for generating these figures:
function sf2d(n, a)
  u = [0:.001:2 * pi];
  raux = abs(1 / a(1) .* abs(cos(n(1) * u / 4))) .^ n(3) + abs(1 / a(2) .* abs(sin(n(1) * u / 4))) .^ n(4);
  r = abs(raux) .^ (- 1 / n(2));
  x = r .* cos(u);
  y = r .* sin(u);
  plot(x, y);
end
Extension to higher dimensions
It is possible to extend the formula to 3, 4, or n dimensions, by means of the spherical product of superformulas. For example, the 3D parametric surface is obtained by multiplying two superformulas r1 and r2. The coordinates are defined by the relations:
$x=r_{1}(\theta )\cos \theta \cdot r_{2}(\phi )\cos \phi ,$
$y=r_{1}(\theta )\sin \theta \cdot r_{2}(\phi )\cos \phi ,$
$z=r_{2}(\phi )\sin \phi ,$
where $\phi $ (latitude) varies between −π/2 and π/2 and θ (longitude) between −π and π.
3D plots
3D superformula: a = b = 1; m, n1, n2 and n3 are shown in the pictures.
A GNU Octave program for generating these figures:
function sf3d(n, a)
  u = [- pi:.05:pi];
  v = [- pi / 2:.05:pi / 2];
  nu = length(u);
  nv = length(v);
  for i = 1:nu
    for j = 1:nv
      raux1 = abs(1 / a(1) * abs(cos(n(1) .* u(i) / 4))) .^ n(3) + abs(1 / a(2) * abs(sin(n(1) * u(i) / 4))) .^ n(4);
      r1 = abs(raux1) .^ (- 1 / n(2));
      raux2 = abs(1 / a(1) * abs(cos(n(1) * v(j) / 4))) .^ n(3) + abs(1 / a(2) * abs(sin(n(1) * v(j) / 4))) .^ n(4);
      r2 = abs(raux2) .^ (- 1 / n(2));
      x(i, j) = r1 * cos(u(i)) * r2 * cos(v(j));
      y(i, j) = r1 * sin(u(i)) * r2 * cos(v(j));
      z(i, j) = r2 * sin(v(j));
    endfor;
  endfor;
  mesh(x, y, z);
endfunction;
Generalization
The superformula can be generalized by allowing distinct m parameters in the two terms of the superformula. By replacing the first parameter $m$ with y and second parameter $m$ with z:[3]
$r\left(\varphi \right)=\left(\left|{\frac {\cos \left({\frac {y\varphi }{4}}\right)}{a}}\right|^{n_{2}}+\left|{\frac {\sin \left({\frac {z\varphi }{4}}\right)}{b}}\right|^{n_{3}}\right)^{-{\frac {1}{n_{1}}}}$
This allows the creation of rotationally asymmetric and nested structures. In the following examples a, b, ${n_{2}}$ and ${n_{3}}$ are 1:
References
1. Gielis, Johan (2003), "A generic geometric transformation that unifies a wide range of natural and abstract shapes", American Journal of Botany, 90 (3): 333–338, doi:10.3732/ajb.90.3.333, ISSN 0002-9122, PMID 21659124
2. EP patent 1177529, Gielis, Johan, "Method and apparatus for synthesizing patterns", issued 2005-02-02
• Stöhr, Uwe (2004), SuperformulaU (PDF), archived from the original (PDF) on December 8, 2017
External links
Wikimedia Commons has media related to Superformula.
• Some Experiments on Fitting of Gielis Curves by Simulated Annealing and Particle Swarm Methods of Global Optimization
• Least Squares Fitting of Chacón-Gielis Curves By the Particle Swarm Method of Optimization
• Superformula 2D Plotter & SVG Generator
• Interactive example using JSXGraph
• SuperShaper: An OpenSource, OpenCL accelerated, interactive 3D SuperShape generator with shader based visualisation (OpenGL3)
• Simpel, WebGL based SuperShape implementation
| Wikipedia |
Superfunction
In mathematics, superfunction is a nonstandard name for an iterated function with a complexified, continuous iteration index. Roughly, for some function f and for some variable x, the superfunction could be defined by the expression
$S(z;x)=\underbrace {f{\Big (}f{\big (}\dots f(x)\dots {\big )}{\Big )}} _{z{\text{ evaluations of the function}}\,f}.$
Main article: Iterated function
Main article: Infinite compositions of analytic functions
Then, S(z; x) can be interpreted as the superfunction of the function f(x). Such a definition is valid only for a positive integer index z. The variable x is often omitted. Much study and many applications of superfunctions employ various extensions of these superfunctions to complex and continuous indices; and the analysis of existence, uniqueness and their evaluation. The Ackermann functions and tetration can be interpreted in terms of superfunctions.
History
Analysis of superfunctions arose from applications of the evaluation of fractional iterations of functions. Superfunctions and their inverses allow evaluation of not only the first negative power of a function (the inverse function), but also of any real and even complex iterate of that function. Historically, an early function of this kind to be considered was ${\sqrt {\exp }}$; the function ${\sqrt {\,!\;}}$ was then used as the logo of the physics department of Moscow State University.[1]
At that time, these investigators did not have computational access for the evaluation of such functions, but the function ${\sqrt {\exp }}$ was luckier than ${\sqrt {\,!\;}}$: at the very least, the existence of the holomorphic function $\varphi $ such that $\varphi (\varphi (u))=\exp(u)$ had been demonstrated in 1950 by Hellmuth Kneser.[2]
Relying on the elegant functional conjugacy theory of Schröder's equation,[3] for his proof, Kneser had constructed the "superfunction" of the exponential map through the corresponding Abel function ${\mathcal {X}}$, satisfying the related Abel equation
${\mathcal {X}}(\exp(u))={\mathcal {X}}(u)+1.\ $
so that ${\mathcal {X}}(S(z;u))={\mathcal {X}}(u)+z\ $. The inverse function Kneser found,
$S(z;u)={\mathcal {X}}^{-1}(z+{\mathcal {X}}(u))$
is an entire super-exponential, although it is not real on the real axis; it cannot be interpreted as tetrational, because the condition $S(0;x)=x$ cannot be realized for the entire super-exponential. The real ${\sqrt {\exp }}$ can be constructed with the tetrational (which is also a superexponential); while the real ${\sqrt {\,!\;}}$ can be constructed with the superfactorial.
There is a book dedicated to superfunctions.[4]
Extensions
The recurrence formula of the above preamble can be written as
$S(z+1;x)=f(S(z;x))~~~~~~~~\forall z\in \mathbb {N} :z>0$
$S(1)=f(x).$
Instead of the last equation, one could write the identity function,
$S(0)=x~,$
and extend the range of definition of the superfunction S to the non-negative integers. Then, one may posit
$S(-1)=f^{-1}(x),$
and extend the range of validity to the integer values larger than −2.
The following extension, for example,
$S(-2)=f^{-2}(x)$
is not trivial, because the inverse function may happen to be not defined for some values of $x$. In particular, tetration can be interpreted as superfunction of exponentiation for some real base $b$; in this case,
$f=\exp _{b}.$
Then, at x = 1,
$S(-1)=\log _{b}1=0,$
but
$S(-2)=\log _{b}0$
is not defined.
For extension to non-integer values of the argument, the superfunction should be defined in a different way.
For complex numbers $a$ and $b$ such that $a$ belongs to some connected domain $D\subseteq \mathbb {C} $, the superfunction (from $a$ to $b$) of a holomorphic function f on the domain $D$ is a function $S$, holomorphic on domain $D$, such that
$S(z\!+\!1)=f(S(z))~\forall z\in D:z\!+\!1\in D\ $
$S(a)=b.\ $
Uniqueness
In general, the superfunction is not unique. For a given base function $f$ and a given $(a\mapsto b)$ superfunction $S$, another $(a\mapsto b)$ superfunction $G$ can be constructed as
$G(z)=S(z+\mu (z))\ $
where $\mu $ is any 1-periodic function, holomorphic at least in some vicinity of the real axis, such that $\mu (a)=0$.
The modified superfunction may have a narrower range of holomorphy. The variety of possible superfunctions is especially large in the limiting case, when the width of the range of holomorphy becomes zero; in this case, one deals with real-analytic superfunctions.[5]
If the range of holomorphy required is large enough, then the superfunction is expected to be unique, at least for some specific base functions. In particular, the $(C,0\mapsto 1)$ superfunction of $\exp _{b}$, for $b>1$, is called tetration and is believed to be unique, at least for $C=\{z\in \mathbb {C} ~:~\Re (z)>-2\}$ in the case $b>\exp(1/\mathrm {e} )$;[6] up to 2009, however, the uniqueness was a conjecture and not a theorem with a formal mathematical proof.
Examples
This short collection of elementary superfunctions is illustrated in [7]. Some superfunctions can be expressed through elementary functions; they are used without mention that they are superfunctions. For example, for the transfer function "++", which means unit increment, the superfunction is just addition of a constant.
Addition
Choose a complex number $c$ and define the function $\mathrm {add} _{c}$ by $\mathrm {add} _{c}(x)=c+x$ for all $x\in \mathbb {C} $. Further define the function $\mathrm {mul} _{c}$ by $\mathrm {mul} _{c}(x)=c\cdot x$ for all $x\in \mathbb {C} $.
Then, the function $S(z;x)=x+\mathrm {mul_{c}} (z)$ is the superfunction (0 to c) of the function $\mathrm {add_{c}} $ on C.
Multiplication
Exponentiation $\exp _{c}$ is the superfunction (from 1 to $c$) of the function $\mathrm {mul} _{c}$.
Quadratic polynomials
The examples except the last one, below, are essentially from Schröder's pioneering 1870 paper.[3]
Let $f(x)=2x^{2}-1$. Then,
$S(z;x)=\cos(2^{z}\arccos(x))$
is a $(\mathbb {C} ,~0\!\rightarrow \!1)$ superfunction (iteration orbit) of f.
Indeed,
$S(z+1;x)=\cos(2\cdot 2^{z}\arccos(x))=2\cos(2^{z}\arccos(x))^{2}-1=f(S(z;x))\ $
and $S(0;x)=x.$
In this case, the superfunction $S$ is periodic, with period $T={\frac {2\pi }{\ln(2)}}i\approx 9.0647202836543876194\!~i$; and the superfunction approaches unity in the negative direction on the real axis:
$\lim _{z\rightarrow -\infty }S(z)=1.\ $
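The recurrence $S(z+1;x)=f(S(z;x))$ can be checked numerically for real $x\in [-1,1]$, including non-integer $z$; a minimal Python sketch:
import numpy as np

def f(x):
    return 2 * x ** 2 - 1

def S(z, x):
    # Superfunction (iteration orbit) of f, with S(0, x) = x.
    return np.cos(2.0 ** z * np.arccos(x))

x = 0.3
for z in (0.0, 0.5, 1.0, 1.7):
    print(S(z + 1, x), f(S(z, x)))   # the two printed values agree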
Algebraic function
Similarly,
$f(x)=2x{\sqrt {1-x^{2}}}$
has an iteration orbit
$S(z;x)=\sin(2^{z}\arcsin(x)).$
Rational function
In general, the transfer (step) function f(x) need not be an entire function. An example involving a meromorphic function f reads,
$f(x)={\frac {2x}{1-x^{2}}}~~~~~\forall x\in D$; $~D=\mathbb {C} \setminus \{-1,1\}.$
Its iteration orbit (superfunction) is
$S(z;x)=\tan(2^{z}\arctan(x))$
on C, the set of complex numbers except for the singularities of the function S. To see this, recall the double angle trigonometric formula
$\tan(2\alpha )={\frac {2\tan(\alpha )}{1-\tan(\alpha )^{2}}}~~\forall \alpha \in \mathbb {C} \setminus \{\alpha \in \mathbb {C} :\cos(\alpha )=0{\text{ or }}\sin(\alpha )=\pm \cos(\alpha )\}.$
Exponentiation
Let $b>1$, $f(u)=\exp _{b}(u)$, and $C=\{z\in \mathbb {C} :\Re (z)>-2\}$. The tetration $\mathrm {tet} _{b}$ is then a $(C,~0\!\rightarrow \!1)$ superfunction of $\exp _{b}$.
Abel function
Main article: Abel equation
The inverse of a superfunction for a suitable argument x can be interpreted as the Abel function, the solution of the Abel equation,
${\mathcal {X}}(\exp(u))={\mathcal {X}}(u)+1.\ $
and hence
${\mathcal {X}}(S(z;u))={\mathcal {X}}(u)+z.\ $
The inverse function when defined, is
$S(z;u)={\mathcal {X}}^{-1}(z+{\mathcal {X}}(u)),$
for suitable domains and ranges, when they exist. The recursive property of S is then self-evident.
The figure at left shows an example of transition from $\exp ^{1}\!=\!\exp $ to $\exp ^{\!-1}\!=\!\ln $. The iterated function $\exp ^{z}$ versus real argument is plotted for $z=2,1,0.9,0.5,0.1,-0.1,-0.5,-0.9,-1,-2$. The tetrational and ArcTetrational were used as superfunction $S$ and Abel function $A$ of the exponential. The figure at right shows these functions in the complex plane. At non-negative integer number of iteration, the iterated exponential is an entire function; at non-integer values, it has two branch points, which correspond to the fixed point $L$ and $L^{*}$ of natural logarithm. At $z\!\geq \!0$, function $\exp ^{z}(x)$ remains holomorphic at least in the strip $|\Im (z)|<\Im (L)\approx 1.3$ along the real axis.
Applications of superfunctions and Abel functions
Superfunctions, usually superexponentials, have been proposed as fast-growing functions for an upgrade of the floating-point representation of numbers in computers. Such an upgrade would greatly extend the range of huge numbers which are still distinguishable from infinity.
Other applications include the calculation of fractional iterates (or fractional powers) of a function. Any holomorphic function can be identified to a transfer function, and then its superfunctions and corresponding Abel functions can be considered.
Nonlinear optics
In the investigation of the nonlinear response of optical materials, the sample is supposed to be optically thin, so that the intensity of the light does not change much as it goes through. Then one can consider, for example, the absorption as a function of the intensity. However, for small variations of the intensity in the sample, the precision of the measurement of absorption as a function of intensity is poor. Reconstructing the superfunction from the transfer function allows one to work with relatively thick samples, improving the precision of measurements. In particular, the transfer function of a similar sample of half the thickness can be interpreted as the square root (i.e. half-iteration) of the transfer function of the initial sample.
Similar example is suggested for a nonlinear optical fiber.[6]
Nonlinear acoustics
It may make sense to characterize the nonlinearities in the attenuation of shock waves in a homogeneous tube. This could find an application in an advanced muffler that uses nonlinear acoustic effects to withdraw the energy of the sound waves without disturbing the flux of the gas. Again, the analysis of the nonlinear response, i.e. the transfer function, may be aided by the superfunction.
Evaporation and condensation
In analysis of condensation, the growth (or vaporization) of a small drop of liquid can be considered, as it diffuses down through a tube with some uniform concentration of vapor. In the first approximation, at fixed concentration of the vapor, the mass of the drop at the output end can be interpreted as the transfer function of the input mass. The square root of this transfer function will characterize the tube of half length.
Snow avalanche
The mass of a snowball that rolls down a hill can be considered as a function of the path it has already passed. At fixed length of this path (that can be determined by the altitude of the hill) this mass can be considered also as a transfer function of the input mass. The mass of the snowball could be measured at the top of the hill and at the bottom, giving the transfer function; then, the mass of the snowball, as a function of the length it passed, is a superfunction.
Operational element
If one needs to build up an operational element with some given transfer function $H$, and wants to realize it as a sequential connection of a couple of identical operational elements, then each of these two elements should have transfer function $h={\sqrt {H}}$. Such a function can be evaluated through the superfunction and the Abel function of the transfer function $H$.
The operational element may have any origin: it can be realized as an electronic microchip, or a mechanical couple of curvilinear grains, or some asymmetric U-tube filled with different liquids, and so on.
References
This article incorporates material from the Citizendium article "Superfunction", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
1. Logo of the physics department of Moscow State University (in Russian); V.P.Kandidov, About the time and myself (in Russian); 250th anniversary of Moscow State University (in Russian): ПЕРВОМУ УНИВЕРСИТЕТУ СТРАНЫ - 250!
2. H.Kneser (1950). "Reelle analytische Lösungen der Gleichung $\varphi (\varphi (x))=e^{x}$ und verwandter Funktionalgleichungen". Journal für die reine und angewandte Mathematik. 187: 56–67.
3. Schröder, Ernst (1870). "Ueber iterirte Functionen". Mathematische Annalen. 3 (2): 296–322. doi:10.1007/BF01443992. S2CID 116998358.
4. Dmitrii Kouznetsov (2020). Superfunctions: Non-integer iterates of holomorphic functions. Tetration and other superfunctions. Formulas,algorithms,tables,graphics. Publisher: Lambert Academic Publishing.
5. P.Walker (1991). "Infinitely differentiable generalized logarithmic and exponential functions". Mathematics of Computation. 57 (196): 723–733. doi:10.1090/S0025-5718-1991-1094963-4. JSTOR 2938713.
6. D.Kouznetsov. (2009). "Solutions of $F(z+1)=\exp(F(z))$ in the complex $z$ plane". Mathematics of Computation. 78: 1647–1670. doi:10.1090/S0025-5718-09-02188-7. preprint: PDF
7. D.Kouznetsov, H.Trappmann. Superfunctions and square root of factorial. Moscow University Physics Bulletin, 2010, v.65, No.1, p.6-12. (Preprint ILS UEC, 2009: )
External links
• Superfunction - TORI - Mizugadro, the research site by Dmitrii Kouznetsov
| Wikipedia |
Superintegrable Hamiltonian system
In mathematics, a superintegrable Hamiltonian system is a Hamiltonian system on a $2n$-dimensional symplectic manifold for which the following conditions hold:
(i) There exist $k>n$ independent integrals $F_{i}$ of motion. Their level surfaces (invariant submanifolds) form a fibered manifold $F:Z\to N=F(Z)$ over a connected open subset $N\subset \mathbb {R} ^{k}$.
(ii) There exist smooth real functions $s_{ij}$ on $N$ such that the Poisson bracket of integrals of motion reads $\{F_{i},F_{j}\}=s_{ij}\circ F$.
(iii) The matrix function $s_{ij}$ is of constant corank $m=2n-k$ on $N$.
If $k=n$, this is the case of a completely integrable Hamiltonian system. The Mishchenko-Fomenko theorem for superintegrable Hamiltonian systems generalizes the Liouville-Arnold theorem on action-angle coordinates of a completely integrable Hamiltonian system as follows.
Let invariant submanifolds of a superintegrable Hamiltonian system be connected compact and mutually diffeomorphic. Then the fibered manifold $F$ is a fiber bundle in tori $T^{m}$. There exists an open neighbourhood $U$ of $F$ which is a trivial fiber bundle provided with the bundle (generalized action-angle) coordinates $(I_{A},p_{i},q^{i},\phi ^{A})$, $A=1,\ldots ,m$, $i=1,\ldots ,n-m$ such that $(\phi ^{A})$ are coordinates on $T^{m}$. These coordinates are the Darboux coordinates on a symplectic manifold $U$. A Hamiltonian of a superintegrable system depends only on the action variables $I_{A}$ which are the Casimir functions of the coinduced Poisson structure on $F(U)$.
The Liouville-Arnold theorem for completely integrable systems and the Mishchenko-Fomenko theorem for the superintegrable ones are generalized to the case of non-compact invariant submanifolds. They are diffeomorphic to a toroidal cylinder $T^{m-r}\times \mathbb {R} ^{r}$.
See also
• Integrable system
• Action-angle coordinates
• Nambu mechanics
• Laplace–Runge–Lenz vector
References
• Mishchenko, A., Fomenko, A., Generalized Liouville method of integration of Hamiltonian systems, Funct. Anal. Appl. 12 (1978) 113. doi:10.1007/BF01076254
• Bolsinov, A., Jovanovic, B., Noncommutative integrability, moment map and geodesic flows, Ann. Global Anal. Geom. 23 (2003) 305; arXiv:math-ph/0109031.
• Fasso, F., Superintegrable Hamiltonian systems: geometry and perturbations, Acta Appl. Math. 87(2005) 93. doi:10.1007/s10440-005-1139-8
• Fiorani, E., Sardanashvily, G., Global action-angle coordinates for completely integrable systems with non-compact invariant manifolds, J. Math. Phys. 48 (2007) 032901; arXiv:math/0610790.
• Miller, W., Jr, Post, S., Winternitz P., Classical and quantum superintegrability with applications, J. Phys. A 46 (2013), no. 42, 423001, doi:10.1088/1751-8113/46/42/423001 arXiv:1309.2694
• Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Methods in Classical and Quantum Mechanics (World Scientific, Singapore, 2010) ISBN 978-981-4313-72-8; arXiv:1303.5363.
| Wikipedia |
Superior highly composite number
In mathematics, a superior highly composite number is a natural number which, in a particular rigorous sense, has many divisors. More precisely, it is defined by a ratio between the number of divisors an integer has and that integer raised to some positive power. For any possible exponent, whichever integer has the highest ratio is a superior highly composite number. It is a stronger restriction than that of a highly composite number, which is defined as having more divisors than any smaller positive integer.
The first 10 superior highly composite numbers and their factorization are listed.
# prime factors | SHCN n | prime factorization | prime exponents | # divisors d(n) | primorial factorization
1 | 2 | 2 | 1 | 2 = 2 | 2
2 | 6 | 2 ⋅ 3 | 1,1 | 2² = 4 | 6
3 | 12 | 2² ⋅ 3 | 2,1 | 3×2 = 6 | 2 ⋅ 6
4 | 60 | 2² ⋅ 3 ⋅ 5 | 2,1,1 | 3×2² = 12 | 2 ⋅ 30
5 | 120 | 2³ ⋅ 3 ⋅ 5 | 3,1,1 | 4×2² = 16 | 2² ⋅ 30
6 | 360 | 2³ ⋅ 3² ⋅ 5 | 3,2,1 | 4×3×2 = 24 | 2 ⋅ 6 ⋅ 30
7 | 2520 | 2³ ⋅ 3² ⋅ 5 ⋅ 7 | 3,2,1,1 | 4×3×2² = 48 | 2 ⋅ 6 ⋅ 210
8 | 5040 | 2⁴ ⋅ 3² ⋅ 5 ⋅ 7 | 4,2,1,1 | 5×3×2² = 60 | 2² ⋅ 6 ⋅ 210
9 | 55440 | 2⁴ ⋅ 3² ⋅ 5 ⋅ 7 ⋅ 11 | 4,2,1,1,1 | 5×3×2³ = 120 | 2² ⋅ 6 ⋅ 2310
10 | 720720 | 2⁴ ⋅ 3² ⋅ 5 ⋅ 7 ⋅ 11 ⋅ 13 | 4,2,1,1,1,1 | 5×3×2⁴ = 240 | 2² ⋅ 6 ⋅ 30030
For a superior highly composite number n there exists a positive real number ε such that for all natural numbers k smaller than n we have
${\frac {d(n)}{n^{\varepsilon }}}\geq {\frac {d(k)}{k^{\varepsilon }}}$
and for all natural numbers k larger than n we have
${\frac {d(n)}{n^{\varepsilon }}}>{\frac {d(k)}{k^{\varepsilon }}}$
where d(n), the divisor function, denotes the number of divisors of n. The term was coined by Ramanujan (1915).[1]
For example, the number with the most divisors per square root of the number itself is 12; this can be demonstrated using some highly composite numbers near 12.
${\frac {2}{2^{.5}}}\approx 1.414,{\frac {3}{4^{.5}}}=1.5,{\frac {4}{6^{.5}}}\approx 1.633,{\frac {6}{12^{.5}}}\approx 1.732,{\frac {8}{24^{.5}}}\approx 1.633,{\frac {12}{60^{.5}}}\approx 1.549$
120 is another superior highly composite number because it has the highest ratio of divisors to itself raised to the .4 power.
${\frac {9}{36^{.4}}}\approx 2.146,{\frac {10}{48^{.4}}}\approx 2.126,{\frac {12}{60^{.4}}}\approx 2.333,{\frac {16}{120^{.4}}}\approx 2.357,{\frac {18}{180^{.4}}}\approx 2.255,{\frac {20}{240^{.4}}}\approx 2.233,{\frac {24}{360^{.4}}}\approx 2.279$
The first 15 superior highly composite numbers, 2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720, 1441440, 4324320, 21621600, 367567200, 6983776800 (sequence A002201 in the OEIS) are also the first 15 colossally abundant numbers, which meet a similar condition based on the sum-of-divisors function rather than the number of divisors. Neither set, however, is a subset of the other.
Properties
All superior highly composite numbers are highly composite. This is easy to prove: if some number k smaller than n had at least as many divisors (i.e. $d(k)\geq d(n)$ but $k<n$), then ${\frac {d(k)}{k^{\varepsilon }}}>{\frac {d(n)}{n^{\varepsilon }}}$ for all positive ε, contradicting the defining property of n; so if a number n is not highly composite, it cannot be superior highly composite.
An effective construction of the set of all superior highly composite numbers is given by the following monotonic mapping from the positive real numbers.[2] Let
$e_{p}(x)=\left\lfloor {\frac {1}{{\sqrt[{x}]{p}}-1}}\right\rfloor $
for any prime number p and positive real x. Then
$s(x)=\prod _{p\in \mathbb {P} }p^{e_{p}(x)}$
is a superior highly composite number.
Note that the product need not be computed indefinitely, because if $p>2^{x}$ then $e_{p}(x)=0$, so the product to calculate $s(x)$ can be terminated once $p\geq 2^{x}$.
Also note that in the definition of $e_{p}(x)$, $1/x$ is analogous to $\varepsilon $ in the implicit definition of a superior highly composite number.
Moreover, for each superior highly composite number $s'$ there exists a half-open interval $I\subset \mathbb {R} ^{+}$ such that $\forall x\in I:s(x)=s'$.
This representation implies that there exists an infinite sequence $\pi _{1},\pi _{2},\ldots \in \mathbb {P} $ such that the n-th superior highly composite number $s_{n}$ satisfies
$s_{n}=\prod _{i=1}^{n}\pi _{i}$
The first $\pi _{i}$ are 2, 3, 2, 5, 2, 3, 7, ... (sequence A000705 in the OEIS). In other words, the quotient of two successive superior highly composite numbers is a prime number.
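The construction is easy to carry out numerically; a minimal Python sketch (the helper names and the sample values of $x$, chosen inside the corresponding half-open intervals, are illustrative):
from math import floor

def primes_upto(n):
    # Simple trial division; adequate for the small bounds used here.
    ps = []
    for k in range(2, n + 1):
        if all(k % p for p in ps):
            ps.append(k)
    return ps

def s(x):
    # s(x) = product over primes p of p^e_p(x); e_p(x) = 0 for every prime p > 2^x.
    n = 1
    for p in primes_upto(int(2 ** x)):
        n *= p ** floor(1 / (p ** (1.0 / x) - 1))
    return n

print([s(x) for x in (1.0, 1.59, 2.0, 2.38, 2.5)])   # [2, 6, 12, 60, 120]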
Superior highly composite radices
The first few superior highly composite numbers have often been used as radices, due to their high divisibility for their size. For example:
• Binary (base 2)
• Senary (base 6)
• Duodecimal (base 12)
• Sexagesimal (base 60)
Bigger SHCNs can be used in other ways. 120 appears as the long hundred, while 360 appears as the number of degrees in a circle.
Notes
1. Weisstein, Eric W. "Superior Highly Composite Number". mathworld.wolfram.com. Retrieved 2021-03-05.
2. Ramanujan (1915); see also URL http://wwwhomes.uni-bielefeld.de/achim/hcn.dvi
References
• Ramanujan, S. (1915). "Highly composite numbers". Proc. London Math. Soc. Series 2. 14: 347–409. doi:10.1112/plms/s2_14.1.347. JFM 45.1248.01. Reprinted in Collected Papers (Ed. G. H. Hardy et al.), New York: Chelsea, pp. 78–129, 1962
• Sándor, József; Mitrinović, Dragoslav S.; Crstici, Borislav, eds. (2006). Handbook of number theory I. Dordrecht: Springer-Verlag. pp. 45–46. ISBN 1-4020-4215-9. Zbl 1151.11300.
External links
• Weisstein, Eric W. "Superior highly composite number". MathWorld.
| Wikipedia |
Superiority and inferiority ranking method
The superiority and inferiority ranking method (or SIR method) is a multi-criteria decision making (MCDM) model which can handle real data and provides six different preference structures for the system user. MCDM is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making, both in daily life and in settings such as business, government and medicine.
Description
The SIR method incorporates outranking rationale to deal with the 'poor' true-criteria preference structure that appears in problems such as selecting proper equipment. The superiority and inferiority scores are produced through the generalized criteria. The SIR method can also analyze different criteria without compiling them into a small scale as GAs.
See also
• Architecture tradeoff analysis method
• Decision-making
• Decision-making software
• Decision-making paradox
• Decisional balance sheet
• Multicriteria classification problems
• Probability distribution
• Rank reversals in decision-making
References
Sources
• Tam, C. M.; Tong, T. K. L.; Wong, Y. W. (2004), "Selection of Concrete Pump Using the Superiority and Inferiority Ranking Method", Journal of Construction Engineering and Management, 130 (6): 827–834 (November/December)
• Free Multi-criteria Decision Aiding (MCDA) Tools for Research Students http://sites.google.com/site/mcdafreeware/
| Wikipedia |
Superiorization
Superiorization is an iterative method for constrained optimization. It is used for improving the efficacy of an iterative method whose convergence is resilient to certain kinds of perturbations. Such perturbations are designed to "force" the perturbed algorithm to produce more useful results for the intended application than the ones that are produced by the original iterative algorithm. The perturbed algorithm is called the superiorized version of the original unperturbed algorithm. If the original algorithm is computationally efficient and useful in terms of the target application and if the perturbations are inexpensive to calculate, the method may be used to steer iterates without additional computation cost.
Areas of application
The superiorization methodology is very general and has been used successfully in many important practical applications, such as iterative reconstruction of images from their projections,[1][2][3] single-photon emission computed tomography,[4] radiation therapy[5][6][7] and nondestructive testing,[8] just to name a few. A special issue of the journal Inverse Problems[9] is devoted to superiorization, both theory[10][11][12] and applications.[3][6][7]
Objective function reduction and relation with constrained optimization
An important case of superiorization is when the original algorithm is "feasibility-seeking" (in the sense that it strives to find some point in a feasible region that is compatible with a family of constraints) and the perturbations that are introduced into the original iterative algorithm aim at reducing (not necessarily minimizing) a given merit function. In this case, superiorization has a unique place in optimization theory and practice.
Many constrained optimization methods are based on methods for unconstrained optimization that are adapted to deal with constraints. Such is, for example, the class of projected gradient methods wherein the unconstrained minimization inner step "leads" the process and a projection onto the whole constraints set (the feasible region) is performed after each minimization step in order to regain feasibility. This projection onto the constraints set is in itself a non-trivial optimization problem and the need to solve it in every iteration hinders projected gradient methods and limits their efficacy to only feasible sets that are "simple to project onto". Barrier methods or penalty methods likewise are based on unconstrained optimization combined with various "add-on"s that guarantee that the constraints are preserved. Regularization methods embed the constraints into a "regularized" objective function and proceed with unconstrained solution methods for the new regularized objective function.
In contrast to these approaches, the superiorization methodology can be viewed as an antipodal way of thinking. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to reduce merit function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. Furthermore, general-purpose approaches have been developed for automatically superiorizing iterative algorithms for large classes of constraints sets and merit functions; these provide algorithms for many application tasks.
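As an illustration of this way of thinking, the following Python sketch superiorizes a simple feasibility-seeking algorithm — cyclic orthogonal projections onto half-spaces — by interleaving bounded, diminishing perturbations that reduce the merit function $\varphi (x)=\lVert x\rVert ^{2}$. This is a toy construction, not any published implementation; the constraint data, merit function, and step-size schedule are all made up for illustration.

    import numpy as np

    # Toy illustration: the basic algorithm is cyclic orthogonal projection onto the
    # half-spaces a_i . x <= b_i (feasibility-seeking). Before every sweep a bounded,
    # summable perturbation is applied that reduces the merit function phi(x) = ||x||^2.

    def project_halfspace(x, a, b):
        # orthogonal projection of x onto {y : a . y <= b}
        violation = a @ x - b
        return x if violation <= 0 else x - (violation / (a @ a)) * a

    def feasibility_sweep(x, A, b):
        # one cyclic pass of the unperturbed feasibility-seeking algorithm
        for a_i, b_i in zip(A, b):
            x = project_halfspace(x, a_i, b_i)
        return x

    def phi(x):
        return x @ x

    def superiorized(x, A, b, sweeps=200, alpha=0.99):
        beta = 1.0
        for _ in range(sweeps):
            g = 2.0 * x                           # gradient of phi; -g is a nonascending direction
            norm = np.linalg.norm(g)
            if norm > 0:
                x = x - beta * g / norm           # bounded perturbation of size beta
            beta *= alpha                         # beta_k -> 0 and sum(beta_k) is finite
            x = feasibility_sweep(x, A, b)        # the original algorithmic step, unchanged
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 3))
    b = rng.uniform(1.0, 2.0, size=5)             # the origin is strictly feasible
    x0 = 10.0 * rng.normal(size=3)

    x_plain = x0.copy()
    for _ in range(200):
        x_plain = feasibility_sweep(x_plain, A, b)
    x_sup = superiorized(x0.copy(), A, b)

    print("violations:", (A @ x_plain - b).max(), (A @ x_sup - b).max())  # both at or below ~0
    print("merit phi :", phi(x_plain), ">=", phi(x_sup))                  # superiorized value is smaller

In this sketch the feasibility-seeking step itself is untouched; only the inexpensive perturbations are added, which is the point of the methodology.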
Further sources
The superiorization methodology and perturbation resilience of algorithms are reviewed in [13][14][15]; see also [16]. Current work on superiorization can be appreciated from a continuously updated Internet page.[17] SNARK14[18] is a software package for the reconstruction of 2D images from their 1D projections that has a built-in capability of superiorizing any iterative algorithm for any merit function.
References
1. G.T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, Springer-Verlag, London, UK, 2nd Edition, 2009. doi:10.1007/978-1-84628-723-7
2. E.S. Helou, M.V.W. Zibetti and E.X. Miqueles, Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction, Inverse Problems, Vol. 33 (2017), 044010. doi:10.1088/1361-6420/33/4/044010
3. Q. Yang, W. Cong and G. Wang, Superiorization-based multi-energy CT image reconstruction, Inverse Problems, Vol. 33 (2017), 044014. doi:10.1088/1361-6420/aa5e0a
4. S. Luo and T. Zhou, Superiorization of EM algorithm and its application in single-photon emission computed tomography (SPECT), Inverse Problems and Imaging, Vol. 8, pp. 223–246, (2014). doi:10.3934/ipi.2014.8.223
5. R. Davidi, Y. Censor, R.W. Schulte, S. Geneser and L. Xing, Feasibility-seeking and superiorization algorithms applied to inverse treatment planning in radiation therapy, Contemporary Mathematics, Vol. 636, pp. 83–92, (2015). doi:10.1090/conm/636/12729
6. E. Bonacker, A. Gibali, K-H. Küfer and P. Süss, Speedup of lexicographic optimization by superiorization and its applications to cancer radiotherapy treatment, Inverse Problems, Vol. 33 (2017), 044012. doi:10.1088/1361-6420/33/4/044012
7. J. Zhu and S. Penfold, Total variation superiorization in dual-energy CT reconstruction for proton therapy treatment planning, Inverse Problems, Vol. 33 (2017), 044013. doi:10.1088/1361-6420/33/4/04401
8. M.J. Schrapp and G.T. Herman, Data fusion in X-ray computed tomography using a superiorization approach, Review of Scientific Instruments, Vol. 85, 053701 (9pp), (2014). doi:10.1063/1.4872378
9. Superiorization: Theory and Applications, Special Issue of the journal Inverse Problems, Volume 33, Number 4, April 2017
10. H. He and H-K. Xu, Perturbation resilience and superiorization methodology of averaged mappings, Inverse Problems, Vol. 33 (2017), 044007. doi:10.1088/1361-6420/33/4/044007
11. H-K. Xu, Bounded perturbation resilience and superiorization techniques for the projected scaled gradient method, Inverse Problems, Vol. 33 (2017), 044008. doi:10.1088/1361-6420/33/4/044008
12. Nikazad, Touraj, and Mokhtar Abbasi. "A unified treatment of some perturbed fixed point iterative methods with an infinite pool of operators." Inverse Problems 33.4 (2017): 044002.doi:10.1088/1361-6420/33/4/044002
13. G.T. Herman, E. Garduño, R. Davidi and Y. Censor, Superiorization: An optimization heuristic for medical physics, Medical Physics, Vol. 39, pp. 5532–5546, (2012). doi:10.1118/1.4745566
14. G.T. Herman, Superiorization for image analysis, in: Combinatorial Image Analysis, Lecture Notes in Computer Science Vol. 8466, Springer, 2014, pp. 1–7. doi:10.1007/978-3-319-07148-0_1
15. Y. Censor, Weak and strong superiorization: Between feasibility-seeking and minimization, Analele Stiintifice ale Universitatii Ovidius Constanta-Seria Matematica, Vol. 23, pp. 41–54, (2015). doi:10.1515/auom-2015-0046
16. Y. Censor, R. Davidi, G.T. Herman, R.W. Schulte and L. Tetruashvili, Projected subgradient minimization versus superiorization, Journal of Optimization Theory and Applications, Vol. 160, pp. 730–747, (2014). doi:10.1007/s10957-013-0408-3
17. "Superiorization". math.haifa.ac.il.
18. "Snark14 – Home". turing.iimas.unam.mx.
| Wikipedia |
Martingale (probability theory)
In probability theory, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values.
History
Originally, martingale referred to a class of betting strategies that was popular in 18th-century France.[1][2] The simplest of these strategies was designed for a game in which the gambler wins their stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double their bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, their probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users due to finite bankrolls. Stopped Brownian motion, which is a martingale process, can be used to model the trajectory of such games.
The concept of martingale in probability theory was introduced by Paul Lévy in 1934, though he did not name it. The term "martingale" was introduced later by Ville (1939), who also extended the definition to continuous martingales. Much of the original development of the theory was done by Joseph Leo Doob among others. Part of the motivation for that work was to show the impossibility of successful betting strategies in games of chance.
Definitions
A basic definition of a discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) X1, X2, X3, ... that satisfies for any time n,
$\mathbf {E} (\vert X_{n}\vert )<\infty $
$\mathbf {E} (X_{n+1}\mid X_{1},\ldots ,X_{n})=X_{n}.$
That is, the conditional expected value of the next observation, given all the past observations, is equal to the most recent observation.
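A small Monte Carlo sketch (illustrative only) of this property for the simple symmetric random walk: conditioning on the current value and averaging the next one recovers the current value. For this walk the increments are independent, so conditioning on $X_{n}$ alone is equivalent to conditioning on the whole history.

    import random
    from collections import defaultdict

    # Estimate E[X_{n+1} | X_n = x] for a simple symmetric random walk and
    # compare it with x: the two agree, which is the martingale property.
    random.seed(1)

    def walk(steps):
        x, path = 0, []
        for _ in range(steps):
            x += random.choice((-1, 1))
            path.append(x)
        return path

    n = 10
    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(200_000):
        path = walk(n + 1)              # path[n - 1] is X_n, path[n] is X_{n+1}
        totals[path[n - 1]] += path[n]
        counts[path[n - 1]] += 1

    for x in sorted(counts):
        if counts[x] > 2000:            # only well-sampled conditioning values
            print(x, round(totals[x] / counts[x], 3))   # each estimate is close to x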
Martingale sequences with respect to another sequence
More generally, a sequence Y1, Y2, Y3 ... is said to be a martingale with respect to another sequence X1, X2, X3 ... if for all n
$\mathbf {E} (\vert Y_{n}\vert )<\infty $
$\mathbf {E} (Y_{n+1}\mid X_{1},\ldots ,X_{n})=Y_{n}.$
Similarly, a continuous-time martingale with respect to the stochastic process Xt is a stochastic process Yt such that for all t
$\mathbf {E} (\vert Y_{t}\vert )<\infty $
$\mathbf {E} (Y_{t}\mid \{X_{\tau },\tau \leq s\})=Y_{s}\quad \forall s\leq t.$
This expresses the property that the conditional expectation of an observation at time t, given all the observations up to time $s$, is equal to the observation at time s (provided that s ≤ t). Note that, in the discrete-time case, the second property implies that $Y_{n}$ is measurable with respect to $X_{1},\dots ,X_{n}$.
General definition
In full generality, a stochastic process $Y:T\times \Omega \to S$ taking values in a Banach space $S$ with norm $\lVert \cdot \rVert _{S}$ is a martingale with respect to a filtration $\Sigma _{*}$ and probability measure $\mathbb {P} $ if
• Σ∗ is a filtration of the underlying probability space (Ω, Σ, $\mathbb {P} $);
• Y is adapted to the filtration Σ∗, i.e., for each t in the index set T, the random variable Yt is a Σt-measurable function;
• for each t, Yt lies in the Lp space L1(Ω, Σt, $\mathbb {P} $; S), i.e.
$\mathbf {E} _{\mathbb {P} }(\lVert Y_{t}\rVert _{S})<+\infty ;$
• for all s and t with s < t and all F ∈ Σs,
$\mathbf {E} _{\mathbb {P} }\left([Y_{t}-Y_{s}]\chi _{F}\right)=0,$
where χF denotes the indicator function of the event F. In Grimmett and Stirzaker's Probability and Random Processes, this last condition is denoted as
$Y_{s}=\mathbf {E} _{\mathbb {P} }(Y_{t}\mid \Sigma _{s}),$
which is a general form of conditional expectation.[3]
It is important to note that the property of being a martingale involves both the filtration and the probability measure (with respect to which the expectations are taken). It is possible that Y could be a martingale with respect to one measure but not another one; the Girsanov theorem offers a way to find a measure with respect to which an Itō process is a martingale.
In the Banach space setting the conditional expectation is also denoted in operator notation as $\mathbf {E} ^{\Sigma _{s}}Y_{t}$.[4]
Examples of martingales
• An unbiased random walk (in any number of dimensions) is an example of a martingale.
• A gambler's fortune (capital) is a martingale if all the betting games which the gambler plays are fair. To be more specific: suppose Xn is a gambler's fortune after n tosses of a fair coin, where the gambler wins $1 if the coin comes up heads and loses $1 if it comes up tails. The gambler's conditional expected fortune after the next trial, given the history, is equal to their present fortune. This sequence is thus a martingale.
• Let Yn = Xn² − n where Xn is the gambler's fortune from the preceding example. Then the sequence { Yn : n = 1, 2, 3, ... } is a martingale. This can be used to show that the gambler's total gain or loss varies roughly between plus or minus the square root of the number of steps.
• (de Moivre's martingale) Now suppose the coin is unfair, i.e., biased, with probability p of coming up heads and probability q = 1 − p of tails. Let
$X_{n+1}=X_{n}\pm 1$
with "+" in case of "heads" and "−" in case of "tails". Let
$Y_{n}=(q/p)^{X_{n}}.$
Then { Yn : n = 1, 2, 3, ... } is a martingale with respect to { Xn : n = 1, 2, 3, ... }. To show this
${\begin{aligned}E[Y_{n+1}\mid X_{1},\dots ,X_{n}]&=p(q/p)^{X_{n}+1}+q(q/p)^{X_{n}-1}\\[6pt]&=p(q/p)(q/p)^{X_{n}}+q(p/q)(q/p)^{X_{n}}\\[6pt]&=q(q/p)^{X_{n}}+p(q/p)^{X_{n}}=(q/p)^{X_{n}}=Y_{n}.\end{aligned}}$
• Pólya's urn contains a number of different-coloured marbles; at each iteration a marble is randomly selected from the urn and replaced with several more of that same colour. For any given colour, the fraction of marbles in the urn with that colour is a martingale. For example, if currently 95% of the marbles are red then, though the next iteration is more likely to add red marbles than another color, this bias is exactly balanced out by the fact that adding more red marbles alters the fraction much less significantly than adding the same number of non-red marbles would.
• (Likelihood-ratio testing in statistics) A random variable X is thought to be distributed according either to probability density f or to a different probability density g. A random sample X1, ..., Xn is taken. Let Yn be the "likelihood ratio"
$Y_{n}=\prod _{i=1}^{n}{\frac {g(X_{i})}{f(X_{i})}}$
If X is actually distributed according to the density f rather than according to g, then { Yn : n = 1, 2, 3, ... } is a martingale with respect to { Xn : n = 1, 2, 3, ... }.
• In an ecological community (a group of species that are in a particular trophic level, competing for similar resources in a local area), the number of individuals of any particular species of fixed size is a function of (discrete) time, and may be viewed as a sequence of random variables. This sequence is a martingale under the unified neutral theory of biodiversity and biogeography.
• If { Nt : t ≥ 0 } is a Poisson process with intensity λ, then the compensated Poisson process { Nt − λt : t ≥ 0 } is a continuous-time martingale with right-continuous/left-limit sample paths
• Wald's martingale
• A $d$-dimensional process $M=(M^{(1)},\dots ,M^{(d)})$ in some space $S^{d}$ is a martingale in $S^{d}$ if each component $T_{i}(M)=M^{(i)}$ is a one-dimensional martingale in $S$.
Submartingales, supermartingales, and relationship to harmonic functions
There are two popular generalizations of a martingale that also include cases when the current observation Xn is not necessarily equal to the future conditional expectation E[Xn+1 | X1,...,Xn] but instead an upper or lower bound on the conditional expectation. These definitions reflect a relationship between martingale theory and potential theory, which is the study of harmonic functions. Just as a continuous-time martingale satisfies E[Xt | {Xτ : τ ≤ s}] − Xs = 0 ∀s ≤ t, a harmonic function f satisfies the partial differential equation Δf = 0 where Δ is the Laplacian operator. Given a Brownian motion process Wt and a harmonic function f, the resulting process f(Wt) is also a martingale.
• A discrete-time submartingale is a sequence $X_{1},X_{2},X_{3},\ldots $ of integrable random variables satisfying
$\operatorname {E} [X_{n+1}\mid X_{1},\ldots ,X_{n}]\geq X_{n}.$
Likewise, a continuous-time submartingale satisfies
$\operatorname {E} [X_{t}\mid \{X_{\tau }:\tau \leq s\}]\geq X_{s}\quad \forall s\leq t.$
In potential theory, a subharmonic function f satisfies Δf ≥ 0. Any subharmonic function that is bounded above by a harmonic function for all points on the boundary of a ball is bounded above by the harmonic function for all points inside the ball. Similarly, if a submartingale and a martingale have equivalent expectations for a given time, the history of the submartingale tends to be bounded above by the history of the martingale. Roughly speaking, the prefix "sub-" is consistent because the current observation Xn is less than (or equal to) the conditional expectation E[Xn+1 | X1,...,Xn]. Consequently, the current observation provides support from below the future conditional expectation, and the process tends to increase in future time.
• Analogously, a discrete-time supermartingale satisfies
$\operatorname {E} [X_{n+1}\mid X_{1},\ldots ,X_{n}]\leq X_{n}.$
Likewise, a continuous-time supermartingale satisfies
$\operatorname {E} [X_{t}\mid \{X_{\tau }:\tau \leq s\}]\leq X_{s}\quad \forall s\leq t.$
In potential theory, a superharmonic function f satisfies Δf ≤ 0. Any superharmonic function that is bounded below by a harmonic function for all points on the boundary of a ball is bounded below by the harmonic function for all points inside the ball. Similarly, if a supermartingale and a martingale have equivalent expectations for a given time, the history of the supermartingale tends to be bounded below by the history of the martingale. Roughly speaking, the prefix "super-" is consistent because the current observation Xn is greater than (or equal to) the conditional expectation E[Xn+1 | X1,...,Xn]. Consequently, the current observation provides support from above the future conditional expectation, and the process tends to decrease in future time.
Examples of submartingales and supermartingales
• Every martingale is also a submartingale and a supermartingale. Conversely, any stochastic process that is both a submartingale and a supermartingale is a martingale.
• Consider again the gambler who wins $1 when a coin comes up heads and loses $1 when the coin comes up tails. Suppose now that the coin may be biased, so that it comes up heads with probability p.
• If p is equal to 1/2, the gambler on average neither wins nor loses money, and the gambler's fortune over time is a martingale.
• If p is less than 1/2, the gambler loses money on average, and the gambler's fortune over time is a supermartingale.
• If p is greater than 1/2, the gambler wins money on average, and the gambler's fortune over time is a submartingale.
• A convex function of a martingale is a submartingale, by Jensen's inequality. For example, the square of the gambler's fortune in the fair coin game is a submartingale (which also follows from the fact that Xn² − n is a martingale). Similarly, a concave function of a martingale is a supermartingale.
Martingales and stopping times
A stopping time with respect to a sequence of random variables X1, X2, X3, ... is a random variable τ with the property that for each t, the occurrence or non-occurrence of the event τ = t depends only on the values of X1, X2, X3, ..., Xt. The intuition behind the definition is that at any particular time t, you can look at the sequence so far and tell if it is time to stop. An example in real life might be the time at which a gambler leaves the gambling table, which might be a function of their previous winnings (for example, they might leave only when they go broke), but they can't choose to go or stay based on the outcome of games that haven't been played yet.
In some contexts the concept of stopping time is defined by requiring only that the occurrence or non-occurrence of the event τ = t is probabilistically independent of Xt + 1, Xt + 2, ... but not that it is completely determined by the history of the process up to time t. That is a weaker condition than the one appearing in the paragraph above, but is strong enough to serve in some of the proofs in which stopping times are used.
One of the basic properties of martingales is that, if $(X_{t})_{t>0}$ is a (sub-/super-) martingale and $\tau $ is a stopping time, then the corresponding stopped process $(X_{t}^{\tau })_{t>0}$ defined by $X_{t}^{\tau }:=X_{\min\{\tau ,t\}}$ is also a (sub-/super-) martingale.
The concept of a stopped martingale leads to a series of important theorems, including, for example, the optional stopping theorem which states that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial value.
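A quick simulation sketch (with arbitrarily chosen thresholds) of the optional stopping theorem for the fair-coin gambler: stopping when the fortune first hits +3 or −5, or at time 100 at the latest, gives a stopped value whose mean is the initial fortune 0.

    import random

    # Fair +-1 bets; stop when the fortune reaches +3 or -5, or after 100 bets.
    # The stopping time is bounded, so the optional stopping theorem applies
    # and E[X_tau] equals the starting fortune, 0.
    random.seed(2)

    def stopped_fortune():
        x, t = 0, 0
        while t < 100 and -5 < x < 3:
            x += random.choice((-1, 1))
            t += 1
        return x

    trials = 200_000
    print(round(sum(stopped_fortune() for _ in range(trials)) / trials, 3))   # approximately 0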
See also
• Azuma's inequality
• Brownian motion
• Doob martingale
• Doob's martingale convergence theorems
• Doob's martingale inequality
• Doob–Meyer decomposition theorem
• Local martingale
• Markov chain
• Markov property
• Martingale (betting system)
• Martingale central limit theorem
• Martingale difference sequence
• Martingale representation theorem
• Normal number
• Semimartingale
Notes
1. Balsara, N. J. (1992). Money Management Strategies for Futures Traders. Wiley Finance. p. 122. ISBN 978-0-471-52215-7. martingale.
2. Mansuy, Roger (June 2009). "The origins of the Word "Martingale"" (PDF). Electronic Journal for History of Probability and Statistics. 5 (1). Archived (PDF) from the original on 2012-01-31. Retrieved 2011-10-22.
3. Grimmett, G.; Stirzaker, D. (2001). Probability and Random Processes (3rd ed.). Oxford University Press. ISBN 978-0-19-857223-7.
4. Bogachev, Vladimir (1998). Gaussian Measures. American Mathematical Society. pp. 372–373. ISBN 978-1470418694.
References
• "Martingale", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• "The Splendors and Miseries of Martingales". Electronic Journal for History of Probability and Statistics. 5 (1). June 2009. Entire issue dedicated to Martingale probability theory (Laurent Mazliak and Glenn Shafer, Editors).
• Baldi, Paolo; Mazliak, Laurent; Priouret, Pierre (1991). Martingales and Markov Chains. Chapman and Hall. ISBN 978-1-584-88329-6.
• Williams, David (1991). Probability with Martingales. Cambridge University Press. ISBN 978-0-521-40605-5.
• Kleinert, Hagen (2004). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets (4th ed.). Singapore: World Scientific. ISBN 981-238-107-4.
• Siminelakis, Paris (2010). "Martingales and Stopping Times: Use of martingales in obtaining bounds and analyzing algorithms" (PDF). University of Athens. Archived from the original (PDF) on 2018-02-19. Retrieved 2010-06-18.
• Ville, Jean (1939). "Étude critique de la notion de collectif". Bulletin of the American Mathematical Society. Monographies des Probabilités (in French). Paris. 3 (11): 824–825. doi:10.1090/S0002-9904-1939-07089-4. Zbl 0021.14601. Review by Doob.
| Wikipedia |
Supermathematics
Supermathematics is the branch of mathematical physics which applies the mathematics of Lie superalgebras to the behaviour of bosons and fermions. The driving force in its formation in the 1960s and 1970s was Felix Berezin.
Objects of study include superalgebras (such as super Minkowski space and super-Poincaré algebra), superschemes, supermetrics/supersymmetry, supermanifolds, supergeometry, and supergravity, namely in the context of superstring theory.
References
• "The importance of Lie algebras"; Professor Isaiah Kantor, Lund University
External links
• Felix Berezin, The Life and Death of the Mastermind of Supermathematics, edited by Mikhail Shifman, World Scientific, Singapore, 2007, ISBN 978-981-270-532-7
| Wikipedia |
Supermodule
In mathematics, a supermodule is a Z2-graded module over a superring or superalgebra. Supermodules arise in super linear algebra, which is a mathematical framework for studying the concept of supersymmetry in theoretical physics.
Supermodules over a commutative superalgebra can be viewed as generalizations of super vector spaces over a (purely even) field K. Supermodules often play a more prominent role in super linear algebra than do super vector spaces. The reason is that it is often necessary or useful to extend the field of scalars to include odd variables. In doing so one moves from fields to commutative superalgebras and from vector spaces to modules.
In this article, all superalgebras are assumed to be associative and unital unless stated otherwise.
Formal definition
Let A be a fixed superalgebra. A right supermodule over A is a right module E over A with a direct sum decomposition (as an abelian group)
$E=E_{0}\oplus E_{1}$
such that multiplication by elements of A satisfies
$E_{i}A_{j}\subseteq E_{i+j}$
for all i and j in Z2. The subgroups Ei are then right A0-modules.
The elements of Ei are said to be homogeneous. The parity of a homogeneous element x, denoted by |x|, is 0 or 1 according to whether it is in E0 or E1. Elements of parity 0 are said to be even and those of parity 1 to be odd. If a is a homogeneous scalar and x is a homogeneous element of E then x·a is homogeneous and |x·a| = |x| + |a|.
Likewise, left supermodules and superbimodules are defined as left modules or bimodules over A whose scalar multiplications respect the gradings in the obvious manner. If A is supercommutative, then every left or right supermodule over A may be regarded as a superbimodule by setting
$a\cdot x=(-1)^{|a||x|}x\cdot a$
for homogeneous elements a ∈ A and x ∈ E, and extending by linearity. If A is purely even this reduces to the ordinary definition.
Homomorphisms
A homomorphism between supermodules is a module homomorphism that preserves the grading. Let E and F be right supermodules over A. A map
$\phi :E\to F\,$
is a supermodule homomorphism if
• $\phi (x+y)=\phi (x)+\phi (y)\,$
• $\phi (x\cdot a)=\phi (x)\cdot a\,$
• $\phi (E_{i})\subseteq F_{i}\,$
for all a∈A and all x,y∈E. The set of all module homomorphisms from E to F is denoted by Hom(E, F).
In many cases, it is necessary or convenient to consider a larger class of morphisms between supermodules. Let A be a supercommutative algebra. Then all supermodules over A may be regarded as superbimodules in a natural fashion. For supermodules E and F, let $\mathbf {Hom} (E,F)$ denote the space of all right A-linear maps (i.e. all module homomorphisms from E to F considered as ungraded right A-modules). There is a natural grading on $\mathbf {Hom} (E,F)$ where the even homomorphisms are those that preserve the grading
$\phi (E_{i})\subseteq F_{i}$
and the odd homomorphisms are those that reverse the grading
$\phi (E_{i})\subseteq F_{1-i}.$
If φ ∈ Hom(E, F) and a ∈ A are homogeneous then
$\phi (x\cdot a)=\phi (x)\cdot a\qquad \phi (a\cdot x)=(-1)^{|a||\phi |}a\cdot \phi (x).$
That is, the even homomorphisms are both right and left linear whereas the odd homomorphism are right linear but left antilinear (with respect to the grading automorphism).
The set $\mathbf {Hom} (E,F)$ can be given the structure of a bimodule over A by setting
${\begin{aligned}(a\cdot \phi )(x)&=a\cdot \phi (x)\\(\phi \cdot a)(x)&=\phi (a\cdot x).\end{aligned}}$
With the above grading $\mathbf {Hom} (E,F)$ becomes a supermodule over A whose even part is the set of all ordinary supermodule homomorphisms
$\mathbf {Hom} _{0}(E,F)=\mathrm {Hom} (E,F).$
In the language of category theory, the class of all supermodules over A forms a category with supermodule homomorphisms as the morphisms. This category is a symmetric monoidal closed category under the super tensor product whose internal Hom functor is given by $\mathbf {Hom} $.
References
• Deligne, Pierre; John W. Morgan (1999). "Notes on Supersymmetry (following Joseph Bernstein)". Quantum Fields and Strings: A Course for Mathematicians. Vol. 1. American Mathematical Society. pp. 41–97. ISBN 0-8218-2012-5.
• Manin, Y. I. (1997). Gauge Field Theory and Complex Geometry ((2nd ed.) ed.). Berlin: Springer. ISBN 3-540-61378-1.
• Varadarajan, V. S. (2004). Supersymmetry for Mathematicians: An Introduction. Courant Lecture Notes in Mathematics 11. American Mathematical Society. ISBN 0-8218-3574-2.
| Wikipedia |
Superoperator
In physics, a superoperator is a linear operator acting on a vector space of linear operators.[1]
Sometimes the term refers more specifically to a completely positive map which also preserves or does not increase the trace of its argument. This specialized meaning is used extensively in the field of quantum computing, especially quantum programming, as such maps characterise mappings between density matrices.
The use of the super- prefix here is in no way related to its other use in mathematical physics. That is to say, superoperators have no connection to supersymmetry and superalgebra, which are extensions of the usual mathematical concepts defined by extending the ring of numbers to include Grassmann numbers. Since superoperators are themselves operators, the prefix is used simply to distinguish them from the operators upon which they act.
Left/Right Multiplication
Defining the left and right multiplication superoperators by ${\mathcal {L}}(A)[\rho ]=A\rho $ and ${\mathcal {R}}(A)[\rho ]=\rho A$ respectively one can express the commutator as
$[A,\rho ]={\mathcal {L}}(A)[\rho ]-{\mathcal {R}}(A)[\rho ].$
Next we vectorize the matrix $\rho $ which is the mapping
$\rho =\sum _{i,j}\rho _{ij}|i\rangle \langle j|\to |\rho \rangle \!\rangle =\sum _{i,j}\rho _{ij}|i\rangle \otimes |j\rangle ,$
where $|\cdot \rangle \!\rangle $ denotes a vector in the Fock-Liouville space. The matrix representation of ${\mathcal {L}}(A)$ is then calculated by using the same mapping
$A\rho =\sum _{i,j}\rho _{ij}A|i\rangle \langle j|\to \sum _{i,j}\rho _{ij}(A|i\rangle )\otimes |j\rangle =\sum _{i,j}\rho _{ij}(A\otimes I)(|i\rangle \otimes |j\rangle )=(A\otimes I)|\rho \rangle \!\rangle ={\mathcal {L}}(A)[\rho ],$
indicating that ${\mathcal {L}}(A)=A\otimes I$. Similarly one can show that ${\mathcal {R}}(A)=(I\otimes A^{T})$. These representations allow us to calculate things like eigenvalues associated with superoperators. These eigenvalues are particularly useful in the field of open quantum systems, where the real parts of the Lindblad superoperator's eigenvalues will indicate whether a quantum system will relax or not.
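These identifications are easy to verify numerically. The following NumPy sketch (assuming the row-major vectorization $|\rho \rangle \!\rangle =\sum _{i,j}\rho _{ij}|i\rangle \otimes |j\rangle $, which corresponds to flattening $\rho $ by rows) checks both representations on random matrices:

    import numpy as np

    # Check L(A) = A (x) I and R(A) = I (x) A^T for the vectorization that stacks
    # the rows of rho, i.e. |rho>> = rho.reshape(-1).
    rng = np.random.default_rng(0)
    d = 4
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

    vec = lambda m: m.reshape(-1)
    I = np.eye(d)
    L = np.kron(A, I)        # left-multiplication superoperator
    R = np.kron(I, A.T)      # right-multiplication superoperator

    print(np.allclose(L @ vec(rho), vec(A @ rho)))                    # True
    print(np.allclose(R @ vec(rho), vec(rho @ A)))                    # True
    print(np.allclose((L - R) @ vec(rho), vec(A @ rho - rho @ A)))    # the commutator, True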
Example von Neumann Equation
In quantum mechanics the Schrödinger equation, $i\hbar {\frac {\partial }{\partial t}}\Psi ={\hat {H}}\Psi $, expresses the time evolution of the state vector $\Psi $ by the action of the Hamiltonian ${\hat {H}}$, which is an operator mapping state vectors to state vectors.
In the more general formulation of John von Neumann, statistical states and ensembles are expressed by density operators rather than state vectors. In this context the time evolution of the density operator is expressed via the von Neumann equation in which density operator is acted upon by a superoperator ${\mathcal {H}}$ mapping operators to operators. It is defined by taking the commutator with respect to the Hamiltonian operator:
$i\hbar {\frac {\partial }{\partial t}}\rho ={\mathcal {H}}[\rho ]$
where
${\mathcal {H}}[\rho ]=[{\hat {H}},\rho ]\equiv {\hat {H}}\rho -\rho {\hat {H}}$
As commutator brackets are used extensively in QM this explicit superoperator presentation of the Hamiltonian's action is typically omitted.
Example Derivatives of Functions on the Space of Operators
When considering an operator valued function of operators ${\hat {H}}={\hat {H}}({\hat {P}})$ as for example when we define the quantum mechanical Hamiltonian of a particle as a function of the position and momentum operators, we may (for whatever reason) define an “Operator Derivative” ${\frac {\Delta {\hat {H}}}{\Delta {\hat {P}}}}$ as a superoperator mapping an operator to an operator.
For example, if $H(P)=P^{3}=PPP$ then its operator derivative is the superoperator defined by:
${\frac {\Delta H}{\Delta P}}[X]=XP^{2}+PXP+P^{2}X$
This "operator derivative" is simply the Jacobian matrix of the function (of operators) where one treats the operator input and output as vectors and expands the space of operators in some basis. The Jacobian matrix is then an operator (at one higher level of abstraction) acting on that vector space (of operators).
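A quick numerical sanity check of this formula (a sketch with random matrices; the step size and the matrix dimension are arbitrary choices): the central finite difference of $H(P+tX)$ at $t=0$ agrees with $XP^{2}+PXP+P^{2}X$.

    import numpy as np

    # Compare d/dt (P + tX)^3 at t = 0 with the superoperator action X P^2 + P X P + P^2 X.
    rng = np.random.default_rng(1)
    n = 5
    P = rng.normal(size=(n, n))
    X = rng.normal(size=(n, n))

    def H(M):
        return M @ M @ M

    t = 1e-5
    finite_diff = (H(P + t * X) - H(P - t * X)) / (2 * t)
    superop_action = X @ P @ P + P @ X @ P + P @ P @ X

    print(np.max(np.abs(finite_diff - superop_action)))   # small (~1e-8): the two agree up to finite-difference error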
See also
• Lindblad superoperator
References
1. John Preskill, Lecture notes for Quantum Computation course at Caltech, Ch. 3,
| Wikipedia |
Superparabola
A superparabola is a geometric curve defined in the Cartesian coordinate system as a set of points (x, y) with
${\frac {y}{b}}=\lbrack 1-\left({\frac {x}{a}}\right)^{2}\rbrack ^{p},$
where p, a, and b are positive real numbers. The equation defines an open curve in the rectangle −a ≤ x ≤ a, 0 ≤ y ≤ b.
The superparabola can vary in shape from a rectangular function (p = 0), to a semi-ellipse (p = 1/2), to a parabola (p = 1), to a pulse function (p > 1).
Mathematical properties
Without loss of generality we can consider the canonical form of the superparabola (a = b = 1)
$f(x;p)=\left(1-x^{2}\right)^{p}$
When p > 0, the function describes a continuous differentiable curve on the plane. The curve can be described parametrically on the complex plane as
$z=\sin(u)+i\cos ^{2p}(u);\quad -{\tfrac {\pi }{2}}\leq u\leq {\tfrac {\pi }{2}}$
Derivatives of the superparabola are given by
$f'(x;p)=-2px(1-x^{2})^{p-1}$
${\frac {\partial f}{\partial p}}=(1-x^{2})^{p}\ln(1-x^{2})=f(x)\ln \lbrack f(x;1)\rbrack $
The area under the curve is given by
${\text{Area}}=\int _{-1}^{1}\int _{0}^{f(x)}dydx=\int _{-1}^{1}(1-x^{2})^{p}dx=\psi (p)$
where ψ is a global function valid for all p > −1,
$\psi (p)={\frac {{\sqrt {\pi }}\,\Gamma (p+1)}{\Gamma (p+{\frac {3}{2}})}}$
The area under a portion of the curve requires the indefinite integral
$\int (1-x^{2})^{p}\,dx=x\,{}_{2}F_{1}(1/2,-p;3/2;x^{2})$
where $ _{2}F_{1}$ is the Gaussian hypergeometric function. An interesting property is that any superparabola raised to a power $n$ is just another superparabola; thus
$\int _{-1}^{1}f^{n}(x)\,dx=\psi (np)$
The centroid of the area under the curve is given by
$C={\frac {\mathbf {i} }{A}}\int _{-1}^{1}x\int _{0}^{f(x)}dydx+{\frac {\mathbf {j} }{A}}\ \int _{-1}^{1}\int _{0}^{f(x)}ydydx$
$={\frac {\mathbf {j} }{2A}}\int _{-1}^{1}f^{2}(x)dx=\mathbf {j} {\frac {\psi (2p)}{2\psi (p)}}$
where the $x$-component is zero by virtue of symmetry. Thus, the centroid can be expressed as one-half the ratio of the area of the square of the curve to the area of the curve.
The nth (mathematical) moment is given by
$\mu _{n}=\int _{-1}^{1}x^{n}f(x)\,dx={\begin{cases}M(p,n)&{\text{if }}n{\text{ is even}}\\0&{\text{if }}n{\text{ is odd}}\end{cases}}$
$M(p,n)={\frac {2}{n+1}}{\frac {\Gamma ((n+3)/2)\Gamma (p+1)}{\Gamma (p+(n+3)/2)}}$
The arc length of the curve is given by
${\text{Length}}=\int _{-1}^{1}{\sqrt {1+[f'(x)]^{2}}}dx.$
In general, integrals containing ${\sqrt {1+[f'(x)]^{2}}}$ cannot be found in terms of standard mathematical functions. Even numerical solutions can be problematic for the improper integrals that arise when $f'(x)$ is singular at $x=\pm 1$ . Two instances of exact solutions have been found. For the semicircle $(p=1/2)$, $L=\pi $ and the parabola $(p=1)$, $L=\left({\sqrt {5}}+\sinh ^{-1}(2)/2\right)\ \approx {2.9579}$.
The arc length is $L=4$ for both $p=0$ and $p=\infty $ and has a minimum value of $L\approx 2.914$ at $p\approx 1.595$. The area under the curve decreases monotonically with increasing $p$.
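These closed forms are easy to check numerically. The sketch below (assuming SciPy is available for the quadrature; the chosen values of p are arbitrary) verifies the area formula $\psi (p)$, the centroid height $\psi (2p)/(2\psi (p))$, and the exact arc length quoted for the parabola $p=1$:

    import numpy as np
    from math import gamma, pi, sqrt, asinh
    from scipy.integrate import quad

    def f(x, p):
        # canonical superparabola
        return (1.0 - x * x) ** p

    def psi(p):
        # closed-form area under the canonical superparabola
        return sqrt(pi) * gamma(p + 1) / gamma(p + 1.5)

    # area: numerical integral versus psi(p)
    for p in (0.5, 1.0, 2.0, 3.5):
        area, _ = quad(f, -1, 1, args=(p,))
        print(p, round(area, 8), round(psi(p), 8))            # the two columns agree

    # centroid height: integral of f^2 over twice the area, versus psi(2p)/(2 psi(p))
    p = 2.0
    num, _ = quad(lambda x: f(x, p) ** 2, -1, 1)
    print(round(num / (2 * psi(p)), 8), round(psi(2 * p) / (2 * psi(p)), 8))

    # arc length of the parabola (p = 1): sqrt(5) + asinh(2)/2 ~ 2.9579
    L, _ = quad(lambda x: np.hypot(1.0, 2.0 * x), -1, 1)
    print(round(L, 4), round(sqrt(5) + asinh(2) / 2, 4))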
Generalization
A natural generalization for the superparabola is to relax the constraint on the power of x. For example,
$f(x)=\left(1-\left\vert x\right\vert ^{q}\right)^{p}$
where the absolute value was added to assure symmetry with respect to the y-axis. The curve can be described parametrically on the complex plane as well,
$z=\left\vert {\text{sin}}^{2/q}(u)\right\vert {\text{sgn}}(u)+i{\text{cos}}^{2p}(u);\qquad -\pi /2\leq {u}\leq \pi /2$
Now, it is apparent that the generalized superparabola contains within it the superellipse, i.e., $p=1/q$ , and its generalization.[1] Conversely, the generalization of the superellipse clearly contains the superparabola. Here, however, we have the analytic solution for the area under the curve.
The indefinite and definite integrals are given by
$\int f(x)dx=x\cdot _{2}F_{1}(-p,1/q;1+1/q;x^{2})$
${\text{Area}}=\int _{-1}^{1}f(x)dx=\Psi (p,q)$
where $\Psi $ is a universal function valid for all $q$ and $p>-1$.
$\Psi (p,q)={\frac {2\Gamma ({\frac {q+1}{q}})\Gamma (p+1)}{\Gamma (p+{\frac {q+1}{q}})}}$
These results can be readily applied to the centroid and moments of the curve as demonstrated above by substitution of $\Psi (p,q)$ for $\psi (p)$.
History
The superellipse has been identified since 1818 as a Lamé curve. It appears that the superparabola was first identified by Löffelmann and Gröller[1] in their paper on superquadrics in conjunction with computer graphics. Waldman and Gray[2] used the superparabola in their analyses of the Archimedean hoof.[2][3][4] The "cylinder hoof", "hoof" or "ungula" was first formulated in a letter from Archimedes to Eratosthenes in the 3rd century BC and led to the classic Propositions 13 and 14 of The Method.[5] This letter, now transcribed in Dijksterhuis, is one of the most famous exchanges of ideas in the history of mathematics.
Applications
The superparabola and its generalization have been applied to the Archimedean hoof. Briefly, the Archimedean hoof consists of a right cylinder with a footprint y = f(x) and height h that is cut by the plane z = hy. In the first image, the portion on the right is called the hoof; it is taken from the remaining half-cylinder, leaving the complement. The base area, volume, and center of mass of both the hoof and the complement can be described solely in terms of the universal function Ψ and the height.[2][3][4]
Figures: 3-D printed models of the hoof and of the half-cylinder.
See also
• Superellipse
• Superquadrics
• Superformula
References
Specific
1. H. Löffelmann and E. Gröller, Parameterizing Superquadrics, Proc. (WSCG '95), 1995 (Winter School of Computer Graphics).
2. C. H. Waldman and S. B. Gray, Superparabola and Superellipse in the Method of Archimedes.
3. S. B. Gray, D. Yang, G. Gordillo, S. Landsberger and C. Waldman, The Method of Archimedes: Propositions 13 and 14, Notices of the American Mathematical Society, 62(9), October, 2015, pp. 1036–1040. Photos courtesy of D. Yang
4. S. B. Gray and C. H. Waldman, Archimedes Reimagined: Derivatives from The Method., submitted for publication August, 2015 (Preprint available on request).
5. E. J. Dijksterhuis, Archimedes (with a new bibliographic essay by Wilbur R. Knorr), Princeton University Press, 1987, p. 313.
External links
• Archimedean Hoof
• Animation of Archimedean Hoof
• Superparabola
• More on the Parabola
• http://www.cs.drexel.edu/~crorres/Archimedes/contents.html More on Archimedes
• Palimpsest of Archimedes
• Restoring The Archimedes Palimpsest
• Curves
• More Curves
| Wikipedia |
Superpattern
In the mathematical study of permutations and permutation patterns, a superpattern or universal permutation is a permutation that contains all of the patterns of a given length. More specifically, a k-superpattern contains all possible patterns of length k.[1]
Definitions and example
If π is a permutation of length n, represented as a sequence of the numbers from 1 to n in some order, and s = s1, s2, ..., sk is a subsequence of π of length k, then s corresponds to a unique pattern, a permutation of length k whose elements are in the same order as s. That is, for each pair i and j of indexes, the ith element of the pattern for s should be less than the jth element if and only if the ith element of s is less than the jth element. Equivalently, the pattern is order-isomorphic to the subsequence. For instance, if π is the permutation 25314, then it has ten subsequences of length three, forming the following patterns:
Subsequence  Pattern
253          132
251          231
254          132
231          231
234          123
214          213
531          321
534          312
514          312
314          213
A permutation π is called a k-superpattern if its patterns of length k include all of the length-k permutations. For instance, the length-3 patterns of 25314 include all six of the length-3 permutations, so 25314 is a 3-superpattern. No 3-superpattern can be shorter, because any two subsequences that form the two patterns 123 and 321 can only intersect in a single position, so five symbols are required just to cover these two patterns.
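A brute-force check of these definitions takes only a few lines (an illustrative sketch; it enumerates all ${\tbinom {n}{k}}$ subsequences, so it is only practical for small cases). It recovers the six distinct length-3 patterns of 25314, confirms that 25314 is a 3-superpattern, and confirms that no permutation of length 4 is one:

    from itertools import combinations, permutations

    def pattern(seq):
        # order-isomorphic pattern of a sequence of distinct values, as a tuple of ranks
        ranks = {v: i + 1 for i, v in enumerate(sorted(seq))}
        return tuple(ranks[v] for v in seq)

    def is_superpattern(perm, k):
        found = {pattern(sub) for sub in combinations(perm, k)}
        return len(found) == len(set(permutations(range(1, k + 1))))

    pi = (2, 5, 3, 1, 4)                                        # the permutation 25314
    print(sorted({pattern(s) for s in combinations(pi, 3)}))    # all six length-3 patterns
    print(is_superpattern(pi, 3))                               # True
    print(any(is_superpattern(q, 3) for q in permutations(range(1, 5))))   # False: length 4 is too short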
Length bounds
Arratia (1999) introduced the problem of determining the length of the shortest possible k-superpattern.[2] He observed that there exists a superpattern of length k² (given by the lexicographic ordering on the coordinate vectors of points in a square grid) and also observed that, for a superpattern of length n, it must be the case that it has at least as many subsequences as there are patterns. That is, it must be true that ${\tbinom {n}{k}}\geq k!$, from which it follows by Stirling's approximation that n ≥ k²/e², where e ≈ 2.71828 is Euler's number. This lower bound was later improved very slightly by Chroman, Kwan, and Singhal (2021), who increased it to 1.000076k²/e²,[3] disproving Arratia's conjecture that the k²/e² lower bound was tight.[2]
The upper bound of k² on superpattern length proven by Arratia is not tight. After intermediate improvements,[4] Miller (2009) proved that there is a k-superpattern of length at most k(k + 1)/2 for every k.[5] This bound was later improved by Engen and Vatter (2021), who lowered it to ⌈(k² + 1)/2⌉.[6]
Eriksson et al. conjectured that the true length of the shortest k-superpattern is asymptotic to k²/2.[4] However, this is in contradiction with a conjecture of Alon on random superpatterns described below.
Random superpatterns
Researchers have also studied the length needed for a sequence generated by a random process to become a superpattern.[7] Arratia (1999) observes that, because the longest increasing subsequence of a random permutation has length (with high probability) approximately 2√n, it follows that a random permutation must have length at least k²/4 to have high probability of being a k-superpattern: permutations shorter than this will likely not contain the identity pattern.[2] He attributes to Alon the conjecture that, for any ε > 0, with high probability, random permutations of length k²/(4 − ε) will be k-superpatterns.
See also
• Superpermutation
References
1. Bóna, Miklós (2012), Combinatorics of Permutations, Discrete Mathematics and Its Applications, vol. 72 (2nd ed.), CRC Press, p. 227, ISBN 9781439850510.
2. Arratia, Richard (1999), "On the Stanley-Wilf conjecture for the number of permutations avoiding a given pattern", Electronic Journal of Combinatorics, 6: N1, doi:10.37236/1477, MR 1710623
3. Chroman, Zachary; Kwan, Matthew; Singhal, Mihir (2021), "Lower bounds for superpatterns and universal sequences", Journal of Combinatorial Theory, Series A, 182, Paper No. 105467 (15 pp), arXiv:2004.02375, doi:10.1016/j.jcta.2021.105467, MR 4253319
4. Eriksson, Henrik; Eriksson, Kimmo; Linusson, Svante; Wästlund, Johan (2007), "Dense packing of patterns in a permutation", Annals of Combinatorics, 11 (3–4): 459–470, doi:10.1007/s00026-007-0329-7, MR 2376116, S2CID 2021533
5. Miller, Alison (2009), "Asymptotic bounds for permutations containing many different patterns", Journal of Combinatorial Theory, Series A, 116 (1): 92–108, doi:10.1016/j.jcta.2008.04.007
6. Engen, Michael; Vatter, Vincent (2021), "Containing all permutations", American Mathematical Monthly, 128 (1): 4–24, arXiv:1810.08252, doi:10.1080/00029890.2021.1835384
7. Godbole, Anant P.; Liendo, Martha (2016), "Waiting time distribution for the emergence of superpatterns", Methodology and Computing in Applied Probability, 18 (2): 517–528, arXiv:1302.4668, doi:10.1007/s11009-015-9439-6, MR 3488590
| Wikipedia |
Superperfect group
In mathematics, in the realm of group theory, a group is said to be superperfect when its first two homology groups are trivial: H1(G, Z) = H2(G, Z) = 0. This is stronger than a perfect group, which is one whose first homology group vanishes. In more classical terms, a superperfect group is one whose abelianization and Schur multiplier both vanish; abelianization equals the first homology, while the Schur multiplier equals the second homology.
Definition
The first homology group of a group is the abelianization of the group itself, since the homology of a group G is the homology of any Eilenberg–MacLane space of type K(G, 1); the fundamental group of a K(G, 1) is G, and the first homology of K(G, 1) is then the abelianization of its fundamental group. Thus, if a group is superperfect, then it is perfect.
A finite perfect group is superperfect if and only if it is its own universal central extension (UCE), as the second homology group of a perfect group parametrizes central extensions.
Examples
For example, if G is the fundamental group of a homology sphere, then G is superperfect. The smallest finite, non-trivial superperfect group is the binary icosahedral group (the fundamental group of the Poincaré homology sphere).
The alternating group A5 is perfect but not superperfect: it has a non-trivial central extension, the binary icosahedral group (which is in fact its UCE), and that extension is superperfect. More generally, the projective special linear groups PSL(n, q) are simple (hence perfect) except for PSL(2, 2) and PSL(2, 3), but not superperfect, with the special linear groups SL(n,q) as central extensions. This family includes the binary icosahedral group (thought of as SL(2, 5)) as UCE of A5 (thought of as PSL(2, 5)).
Every acyclic group is superperfect, but the converse is not true: the binary icosahedral group is superperfect, but not acyclic.
References
• A. Jon Berrick and Jonathan A. Hillman, "Perfect and acyclic subgroups of finitely presentable groups", Journal of the London Mathematical Society (2) 68 (2003), no. 3, 683--698. MR2009444
| Wikipedia |
Superpermutation
In combinatorial mathematics, a superpermutation on n symbols is a string that contains each permutation of n symbols as a substring. While trivial superpermutations can simply be made up of every permutation concatenated together, superpermutations can also be shorter (except for the trivial case of n = 1) because overlap is allowed. For instance, in the case of n = 2, the superpermutation 1221 contains all possible permutations (12 and 21), but the shorter string 121 also contains both permutations.
It has been shown that for 1 ≤ n ≤ 5, the smallest superpermutation on n symbols has length 1! + 2! + … + n! (sequence A180632 in the OEIS). The first four smallest superpermutations have respective lengths 1, 3, 9, and 33, forming the strings 1, 121, 123121321, and 123412314231243121342132413214321. However, for n = 5, there are several smallest superpermutations having the length 153. One such superpermutation is shown below, while another of the same length can be obtained by switching all of the fours and fives in the second half of the string (after the bold 2):[1]
123451234152341253412354123145231425314235142315423124531243512431524312543121345213425134215342135421324513241532413524132541321453214352143251432154321
For n > 5, the length of a smallest superpermutation has not yet been determined, nor is a pattern for constructing them known, but lower and upper bounds for it have been found.
Finding superpermutations
One of the most common algorithms for creating a superpermutation of order $n$ is a recursive algorithm. First, the superpermutation of order $n-1$ is split into its individual permutations in the order in which they appear in that superpermutation. Each of those permutations is then placed next to a copy of itself with an nth symbol added in between the two copies. Finally, the resulting structures are placed next to one another and all adjacent identical symbols are merged.[2]
For example, a superpermutation of order 3 can be created from one with 2 symbols; starting with the superpermutation 121 and splitting it up into the permutations 12 and 21, the permutations are copied and placed as 12312 and 21321. They are placed together to create 1231221321, and the identical adjacent 2s in the middle are merged to create 123121321, which is indeed a superpermutation of order 3. This algorithm results in the shortest possible superpermutation for all n less than or equal to 5, but becomes increasingly longer than the shortest possible as n increases beyond that.[2]
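The following Python sketch (not taken from the cited sources) implements this construction, reading the final merging step as overlapping consecutive pieces as much as possible; for the order-3 example this is exactly the merging of the two adjacent 2s. Symbols are written as the digits 1 to 9, so the sketch only handles n ≤ 9.

```python
from itertools import permutations
from math import factorial

def recursive_superpermutation(n):
    """Recursive construction described above (digits as symbols, so n <= 9)."""
    if n == 1:
        return "1"
    prev = recursive_superpermutation(n - 1)
    k = n - 1
    # Split the (n-1)-superpermutation into its permutations, in order of appearance.
    perms, seen = [], set()
    for i in range(len(prev) - k + 1):
        chunk = prev[i:i + k]
        if len(set(chunk)) == k and chunk not in seen:
            seen.add(chunk)
            perms.append(chunk)
    # Replace each permutation P by P + "n" + P, then chain the pieces,
    # overlapping the end of each piece with the start of the next as far as possible.
    result = ""
    for piece in (p + str(n) + p for p in perms):
        overlap = 0
        for ov in range(min(len(result), len(piece)), 0, -1):
            if result.endswith(piece[:ov]):
                overlap = ov
                break
        result += piece[overlap:]
    return result

def is_superpermutation(s, n):
    symbols = "123456789"[:n]
    return all("".join(p) in s for p in permutations(symbols))

for n in range(1, 6):
    s = recursive_superpermutation(n)
    assert is_superpermutation(s, n)
    assert len(s) == sum(factorial(k) for k in range(1, n + 1))  # 1, 3, 9, 33, 153
```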
Another way of finding superpermutations lies in creating a graph where each permutation is a vertex and every pair of permutations is connected by an edge. Each edge has a weight associated with it; the weight is the number of characters that must be appended to the end of one permutation (dropping the same number of characters from the start) to result in the other permutation.[2] For instance, the edge from 123 to 312 has weight 2 because appending 12 to 123 gives 12312, which ends in 312. Any Hamiltonian path through the created graph is a superpermutation, and the problem of finding the path with the smallest weight becomes a form of the traveling salesman problem. The first instance of a superpermutation smaller than length $1!+2!+\ldots +n!$ was found using a computer search based on this method by Robin Houston.
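A minimal sketch of the edge-weight computation used in this graph formulation (the function name and the brute-force search over overlaps are illustrative choices, not taken from a particular library):

```python
def weight(p, q):
    """Characters that must be appended to permutation p so that the string ends in q."""
    n = len(p)
    for k in range(n):
        if p[k:] == q[:n - k]:   # longest suffix of p that is also a prefix of q
            return k
    return n

assert weight("123", "312") == 2   # 123 -> 12312, which ends in 312
assert weight("123", "231") == 1   # 123 -> 1231
assert weight("123", "213") == 3   # no overlap at all
```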
Lower bounds, or the Haruhi problem
In September 2011, an anonymous poster on the Science & Math ("/sci/") board of 4chan proved that the smallest superpermutation on n symbols (n ≥ 2) has length at least n! + (n−1)! + (n−2)! + n − 3.[3] In reference to the Japanese anime series The Melancholy of Haruhi Suzumiya, the problem was presented on the imageboard as "The Haruhi Problem":[4] if you wanted to watch the 14 episodes of the first season of the series in every possible order, what would be the shortest string of episodes you would need to watch?[5] The proof of this lower bound came to general public attention in October 2018, after mathematician and computer scientist Robin Houston tweeted about it.[3] On 25 October 2018, Robin Houston, Jay Pantone, and Vince Vatter posted a refined version of this proof in the On-Line Encyclopedia of Integer Sequences (OEIS).[5][6] A published version of this proof, credited to "Anonymous 4chan poster", appears in Engen and Vatter (2019).[7] For "The Haruhi Problem" specifically (the case of 14 symbols), the current lower and upper bounds are 93,884,313,611 and 93,924,230,411, respectively.[3] This means that watching the series in every possible order would require about 4 million years.
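A quick check of the quoted figures (the roughly 24-minute episode length used in the last line is an assumption for illustration, not a value taken from the article):

```python
from math import factorial

def lower_bound(n):
    # n! + (n-1)! + (n-2)! + n - 3, the lower bound proved by the anonymous poster
    return factorial(n) + factorial(n - 1) + factorial(n - 2) + n - 3

assert lower_bound(7) == 5884
assert lower_bound(14) == 93_884_313_611

minutes = lower_bound(14) * 24                  # assumed ~24 minutes per episode
print(minutes / (60 * 24 * 365.25))             # roughly 4.3 million years of viewing
```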
Upper bounds
On 20 October 2018, by adapting a construction by Aaron Williams for constructing Hamiltonian paths through the Cayley graph of the symmetric group,[8] science fiction author and mathematician Greg Egan devised an algorithm to produce superpermutations of length n! + (n−1)! + (n−2)! + (n−3)! + n − 3.[2] Up to 2018, these were the smallest superpermutations known for n ≥ 7. However, on 1 February 2019, Bogdan Coanda announced that he had found a superpermutation for n=7 of length 5907, or (n! + (n−1)! + (n−2)! + (n−3)! + n − 3) − 1, which was a new record.[2] On 27 February 2019, using ideas developed by Robin Houston, Egan produced a superpermutation for n = 7 of length 5906.[2] Whether similar shorter superpermutations also exist for values of n > 7 remains an open question. The current best lower bound (see section above) for n = 7 is still 5884.
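For comparison, a sketch evaluating the length formula of Egan's construction stated above; the values for n = 7 and n = 14 agree with the figures quoted in this article:

```python
from math import factorial

def egan_length(n):
    # n! + (n-1)! + (n-2)! + (n-3)! + n - 3
    return sum(factorial(n - k) for k in range(4)) + n - 3

assert egan_length(7) == 5908              # later improved to 5907 and then 5906
assert egan_length(14) == 93_924_230_411   # the upper bound quoted for the Haruhi problem
```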
See also
• Superpattern, a permutation that contains each permutation of n symbols as a permutation pattern
• De Bruijn sequence, a similar problem with cyclic sequences
Further reading
• Ashlock, Daniel A.; Tillotson, Jenett (1993), "Construction of small superpermutations and minimal injective superstrings", Congressus Numerantium, 93: 91–98, Zbl 0801.05004
• Anonymous 4chan Poster; Houston, Robin; Pantone, Jay; Vatter, Vince (October 25, 2018). "A lower bound on the length of the shortest superpattern" (PDF). On-Line Encyclopedia of Integer Sequences.
References
1. Johnston, Nathaniel (July 28, 2013). "Non-uniqueness of minimal superpermutations". Discrete Mathematics. 313 (14): 1553–1557. arXiv:1303.4150. Bibcode:2013arXiv1303.4150J. doi:10.1016/j.disc.2013.03.024. S2CID 12018639. Zbl 1368.05004. Retrieved March 16, 2014.
2. Egan, Greg (20 October 2018). "Superpermutations". gregegan.net. Retrieved 15 January 2020.
3. Griggs, Mary Beth. "An anonymous 4chan post could help solve a 25-year-old math mystery". The Verge.
4. Anon, - San (September 17, 2011). "Permutations Thread III ニア愛". Warosu.
5. Klarreich, Erica (November 5, 2018). "Sci-Fi Writer Greg Egan and Anonymous Math Whiz Advance Permutation Problem". Quanta Magazine. Retrieved June 21, 2020.
6. Anonymous 4chan poster; Houston, Robin; Pantone, Jay; Vatter, Vince (October 25, 2018). "A lower bound on the length of the shortest superpattern" (PDF). OEIS. Retrieved 27 October 2018.
7. Engen, Michael; Vatter, Vincent (2021), "Containing all permutations", American Mathematical Monthly, 128 (1): 4–24, arXiv:1810.08252, doi:10.1080/00029890.2021.1835384
8. Aaron, Williams (2013). "Hamiltonicity of the Cayley Digraph on the Symmetric Group Generated by σ = (1 2 ... n) and τ = (1 2)". arXiv:1307.2549v3 [math.CO].
External links
• The Minimal Superpermutation Problem - Nathaniel Johnston's blog
• Grime, James. "Superpermutations - Numberphile" (video). YouTube. Brady Haran. Retrieved 1 February 2018.
• The 4chan post on /sci/, archived on warosu.org
• Tweet by Robin Houston, which brought attention to the 4chan post
• Article about the problem of finding short supermpermutations in Quanta Magazine
| Wikipedia |
Superposed epoch analysis
Superposed epoch analysis (SPE or SEA), also called Chree analysis after a paper by Charles Chree [1] that employed the technique, is a statistical tool used in data analysis either to detect periodicities within a time sequence or to reveal a correlation (usually in time) between two data sequences (usually two time series).[2]
When comparing two time series, the essence of the method is to: (1) define each occurrence of an event in one data sequence (series #1) as a key time; (2) extract subsets of data from the other sequence (series #2) within some time range near each key time; (3) superpose all extracted subsets from series #2 (with key times for all subsets synchronized) by adding them. (To effectively superpose data from series #2 that are recorded at different or even irregular times, data binning is often used.) This approach can be used to detect a signal (i.e., related variations in both series) in the presence of noise (i.e., unrelated variations in both series) whenever the noise sums incoherently while the signal is reinforced by the superposition.
To search for periodicities in a single time series, the data sequence can be broken into separate subsets of equal duration, and then all subsets can be superposed. Some hypothesis for the length of the period is required to set the subsets' duration.
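A minimal NumPy sketch of the two-series procedure described above (the function and argument names are illustrative; averaging the binned, superposed subsets rather than only summing them is one common convention):

```python
import numpy as np

def superposed_epoch_analysis(times, values, key_times, window, n_bins):
    """Superpose segments of series #2 (times, values) around each key time of series #1."""
    times, values = np.asarray(times), np.asarray(values)
    bin_edges = np.linspace(-window, window, n_bins + 1)
    sums = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for t0 in key_times:
        lag = times - t0                                   # time relative to this epoch
        mask = (lag >= -window) & (lag <= window)          # extract the subset near t0
        idx = np.clip(np.digitize(lag[mask], bin_edges) - 1, 0, n_bins - 1)
        np.add.at(sums, idx, values[mask])                 # superpose by adding
        np.add.at(counts, idx, 1)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    with np.errstate(invalid="ignore", divide="ignore"):
        return centers, sums / counts                      # mean superposed epoch curve
```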
The approach has been used in signal analysis in several fields, including geophysics (where it has been referred to as compositing)[3][4] and solar physics.[5]
References
1. Chree, C. (July 1913). "Some phenomena of sunspots and of terrestrial magnetism at Kew observatory".
2. Singh, Y. P.; Badruddin (April 2006). "Statistical considerations in superposed epoch analysis and its applications in space research".
3. Adams, J. B.; Mann, M. E.; Ammann, C. M. (November 2003). "Proxy evidence for an El Niño-like response to volcanic forcing".
4. Comrie, A. C. (November 2021). "No Consistent Link Between Dust Storms and Valley Fever (Coccidioidomycosis)".
5. Mason, J. P.; Hoeksema, J. T. (November 2010). "Testing Automated Solar Flare Forecasting with 13 Years of Michelson Doppler Imager Magnetograms".
| Wikipedia |
Superposition calculus
The superposition calculus is a calculus for reasoning in equational logic. It was developed in the early 1990s and combines concepts from first-order resolution with ordering-based equality handling as developed in the context of (unfailing) Knuth–Bendix completion. It can be seen as a generalization of either resolution (to equational logic) or unfailing completion (to full clausal logic). Like most first-order calculi, superposition tries to show the unsatisfiability of a set of first-order clauses, i.e. it performs proofs by refutation. Superposition is refutation complete—given unlimited resources and a fair derivation strategy, from any unsatisfiable clause set a contradiction will eventually be derived.
As of 2007, most state-of-the-art theorem provers for first-order logic are based on superposition (e.g. the E equational theorem prover), although only a few implement the pure calculus.
Implementations
• E
• SPASS
• Vampire
• Waldmeister (official web page)
References
• Rewrite-Based Equational Theorem Proving with Selection and Simplification, Leo Bachmair and Harald Ganzinger, Journal of Logic and Computation 3(4), 1994.
• Paramodulation-Based Theorem Proving, Robert Nieuwenhuis and Alberto Rubio, Handbook of Automated Reasoning I(7), Elsevier Science and MIT Press, 2001.
| Wikipedia |
Superprocess
In probability theory, a $(\xi ,d,\beta )$-superprocess, $X(t,dx)$, is a stochastic process on $\mathbb {R} \times \mathbb {R} ^{d}$ that is usually constructed as a special limit of near-critical branching diffusions.
Informally, it can be seen as a branching process in which each particle splits and dies at an infinite rate and moves according to a diffusion, and one follows the rescaled population of particles, viewed as a measure on $\mathbb {R} $.
Scaling limit of a discrete branching process
Simplest setting
For any integer $N\geq 1$, consider a branching Brownian process $Y^{N}(t,dx)$ defined as follows:
• Start at $t=0$ with $N$ independent particles distributed according to a probability distribution $\mu $.
• Each particle independently move according to a Brownian motion.
• Each particle independently dies with rate $N$.
• When a particle dies, with probability $1/2$ it gives birth to two offspring in the same location.
The notation $Y^{N}(t,dx)$ should be interpreted as follows: at each time $t$, the number of particles in a set $A\subset \mathbb {R} $ is $Y^{N}(t,A)$. In other words, $Y$ is a measure-valued random process.[1]
Now, define a renormalized process:
$X^{N}(t,dx):={\frac {1}{N}}Y^{N}(t,dx)$
Then the finite-dimensional distributions of $X^{N}$ converge as $N\to +\infty $ to those of a measure-valued random process $X(t,dx)$, which is called a $(\xi ,\phi )$-superprocess,[1] with initial value $X(0)=\mu $, where $\phi (z):={\frac {z^{2}}{2}}$ and where $\xi $ is a Brownian motion (specifically, $\xi =(\Omega ,{\mathcal {F}},{\mathcal {F}}_{t},\xi _{t},{\textbf {P}}_{x})$ where $(\Omega ,{\mathcal {F}})$ is a measurable space, $({\mathcal {F}}_{t})_{t\geq 0}$ is a filtration, and $\xi _{t}$ under ${\textbf {P}}_{x}$ has the law of a Brownian motion started at $x$).
As will be clarified in the next section, $\phi $ encodes an underlying branching mechanism, and $\xi $ encodes the motion of the particles. Here, since $\xi $ is a Brownian motion, the resulting object is known as a Super-brownian motion.[1]
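A crude discretized illustration of this particle system (not taken from the cited references; the time step, the starting measure $\mu =\delta _{0}$ and the parameter values are arbitrary choices made for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_XN(N, t=1.0):
    """Particles start at 0, move as Brownian motions, die at rate N and leave
    0 or 2 offspring with probability 1/2 each; each point carries mass 1/N."""
    dt = 0.1 / N                          # keep the per-step death probability N*dt small
    x = np.zeros(N)
    for _ in range(int(t / dt)):
        if x.size == 0:                   # the population may die out
            break
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)    # Brownian displacement
        dies = rng.random(x.size) < N * dt                    # death at rate N
        parents = x[dies]
        twins = parents[rng.random(parents.size) < 0.5]       # prob 1/2: two offspring
        x = np.concatenate([x[~dies], np.repeat(twins, 2)])
    return x

# X^N(t, A) is approximated by (number of returned points in A) / N; for large N
# these empirical measures approximate a super-Brownian motion started from delta_0.
positions = simulate_XN(N=200)
print(positions.size / 200)               # total mass X^N(1, R)
```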
Generalization to (ξ, ϕ)-superprocesses
Our discrete branching system $Y^{N}(t,dx)$ can be much more sophisticated, leading to a variety of superprocesses:
• Instead of $\mathbb {R} $, the state space can now be any Lusin space $E$.
• The underlying motion of the particles can now be given by $\xi =(\Omega ,{\mathcal {F}},{\mathcal {F}}_{t},\xi _{t},{\textbf {P}}_{x})$, where $\xi _{t}$ is a càdlàg Markov process (see,[1] Chapter 4, for details).
• A particle dies at rate $\gamma _{N}$
• When a particle dies at time $t$, located in $\xi _{t}$, it gives birth to a random number of offspring $n_{t,\xi _{t}}$. These offspring start to move from $\xi _{t}$. We require that the law of $n_{t,x}$ depends solely on $x$, and that all $(n_{t,x})_{t,x}$ are independent. Set $p_{k}(x)=\mathbb {P} [n_{t,x}=k]$ and define $g$ as the associated probability-generating function: $g(x,z):=\sum \limits _{k=0}^{\infty }p_{k}(x)z^{k}$
Add the following requirement that the expected number of offspring is bounded:
$\sup \limits _{x\in E}\mathbb {E} [n_{t,x}]<+\infty $
Define $X^{N}(t,dx):={\frac {1}{N}}Y^{N}(t,dx)$ as above, and define the following crucial function:
$\phi _{N}(x,z):=N\gamma _{N}\left[g_{N}{\Big (}x,1-{\frac {z}{N}}{\Big )}\,-\,{\Big (}1-{\frac {z}{N}}{\Big )}\right]$
Add the requirement, for all $a\geq 0$, that $\phi _{N}(x,z)$ is Lipschitz continuous with respect to $z$ uniformly on $E\times [0,a]$, and that $\phi _{N}$ converges to some function $\phi $ as $N\to +\infty $ uniformly on $E\times [0,a]$.
Provided all of these conditions, the finite-dimensional distributions of $X^{N}(t)$ converge to those of a measure-valued random process $X(t,dx)$ which is called a $(\xi ,\phi )$-superprocess,[1] with initial value $X(0)=\mu $.
Commentary on ϕ
Provided $\lim _{N\to +\infty }\gamma _{N}=+\infty $, that is, the number of branching events becomes infinite, the requirement that $\phi _{N}$ converges implies that, taking a Taylor expansion of $g_{N}$, the expected number of offspring is close to 1, and therefore that the process is near-critical.
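As a consistency check (not spelled out in the cited reference), the simplest setting above corresponds to $\gamma _{N}=N$ and $g(x,z)={\tfrac {1}{2}}+{\tfrac {1}{2}}z^{2}$, so that

$\phi _{N}(x,z)=N^{2}\left[{\tfrac {1}{2}}+{\tfrac {1}{2}}{\Big (}1-{\tfrac {z}{N}}{\Big )}^{2}-{\Big (}1-{\tfrac {z}{N}}{\Big )}\right]=N^{2}\cdot {\frac {z^{2}}{2N^{2}}}={\frac {z^{2}}{2}},$

independently of $N$; this recovers the branching mechanism $\phi (z)=z^{2}/2$ of the super-Brownian motion from the first subsection.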
Generalization to Dawson-Watanabe superprocesses
The branching particle system $Y^{N}(t,dx)$ can be further generalized as follows:
• The probability of death in the time interval $[r,t)$ of a particle following trajectory $(\xi _{t})_{t\geq 0}$ is $\exp \left\{-\int _{r}^{t}\alpha _{N}(\xi _{s})K(ds)\right\}$ where $\alpha _{N}$ is a positive measurable function and $K$ is a continuous functional of $\xi $ (see,[1] chapter 2, for details).
• When a particle following trajectory $\xi $ dies at time $t$, it gives birth to offspring according to a measure-valued probability kernel $F_{N}(\xi _{t-},d\nu )$. In other words, the offspring are not necessarily born on their parent's location. The number of offspring is given by $\nu (1)$. Assume that $\sup \limits _{x\in E}\int \nu (1)F_{N}(x,d\nu )<+\infty $.
Then, under suitable hypotheses, the finite-dimensional distributions of $X^{N}(t)$ converge to those of a measure-valued random process $X(t,dx)$ which is called a Dawson-Watanabe superprocess,[1] with initial value $X(0)=\mu $.
Properties
A superprocess has a number of properties. It is a Markov process, and its Markov kernel $Q_{t}(\mu ,d\nu )$ verifies the branching property:
$Q_{t}(\mu +\mu ',\cdot )=Q_{t}(\mu ,\cdot )*Q_{t}(\mu ',\cdot )$
where $*$ is the convolution. A special class of superprocesses is the class of $(\alpha ,d,\beta )$-superprocesses,[2] with $\alpha \in (0,2],d\in \mathbb {N} ,\beta \in (0,1]$. An $(\alpha ,d,\beta )$-superprocess is defined on $\mathbb {R} ^{d}$. Its branching mechanism is defined by its factorial moment generating function (the definition of a branching mechanism varies slightly among authors, some[1] use the definition of $\phi $ in the previous section, others[2] use the factorial moment generating function):
$\Phi (s)={\frac {1}{1+\beta }}(1-s)^{1+\beta }+s$
and the spatial motion of individual particles (noted $\xi $ in the previous section) is given by the $\alpha $-symmetric stable process with infinitesimal generator $\Delta _{\alpha }$.
The $\alpha =2$ case means $\xi $ is a standard Brownian motion and the $(2,d,1)$-superprocess is called the super-Brownian motion.
One of the most important properties of superprocesses is that they are intimately connected with certain nonlinear partial differential equations. The simplest such equation is $\Delta u-u^{2}=0$ on $\mathbb {R} ^{d}$. When the spatial motion (migration) is a diffusion process, one talks about a superdiffusion. The connection between superdiffusions and nonlinear PDEs is similar to the one between diffusions and linear PDEs.
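The link is usually expressed through the Laplace functional of the process: schematically, for suitable nonnegative test functions $f$ one has $\mathbb {E} _{\mu }\left[e^{-\langle X_{t},f\rangle }\right]=e^{-\langle \mu ,u_{t}\rangle }$, where $u_{t}$ solves the semilinear evolution equation $\partial _{t}u=Au-\phi (u)$ with $u_{0}=f$ and $A$ the generator of the spatial motion $\xi $ (the precise normalization varies between authors; see the references below).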
Further resources
• Eugene B. Dynkin (2004). Superdiffusions and positive solutions of nonlinear partial differential equations. Appendix A by J.-F. Le Gall and Appendix B by I. E. Verbitsky. University Lecture Series, 34. American Mathematical Society. ISBN 9780821836828.
References
1. Li, Zenghu (2011), Li, Zenghu (ed.), "Measure-Valued Branching Processes", Measure-Valued Branching Markov Processes, Berlin, Heidelberg: Springer, pp. 29–56, doi:10.1007/978-3-642-15004-3_2, ISBN 978-3-642-15004-3, retrieved 2022-12-20
2. Etheridge, Alison (2000). An introduction to superprocesses. Providence, RI: American Mathematical Society. ISBN 0-8218-2706-5. OCLC 44270365.
Stochastic processes
Discrete time
• Bernoulli process
• Branching process
• Chinese restaurant process
• Galton–Watson process
• Independent and identically distributed random variables
• Markov chain
• Moran process
• Random walk
• Loop-erased
• Self-avoiding
• Biased
• Maximal entropy
Continuous time
• Additive process
• Bessel process
• Birth–death process
• pure birth
• Brownian motion
• Bridge
• Excursion
• Fractional
• Geometric
• Meander
• Cauchy process
• Contact process
• Continuous-time random walk
• Cox process
• Diffusion process
• Empirical process
• Feller process
• Fleming–Viot process
• Gamma process
• Geometric process
• Hawkes process
• Hunt process
• Interacting particle systems
• Itô diffusion
• Itô process
• Jump diffusion
• Jump process
• Lévy process
• Local time
• Markov additive process
• McKean–Vlasov process
• Ornstein–Uhlenbeck process
• Poisson process
• Compound
• Non-homogeneous
• Schramm–Loewner evolution
• Semimartingale
• Sigma-martingale
• Stable process
• Superprocess
• Telegraph process
• Variance gamma process
• Wiener process
• Wiener sausage
Both
• Branching process
• Galves–Löcherbach model
• Gaussian process
• Hidden Markov model (HMM)
• Markov process
• Martingale
• Differences
• Local
• Sub-
• Super-
• Random dynamical system
• Regenerative process
• Renewal process
• Stochastic chains with memory of variable length
• White noise
Fields and other
• Dirichlet process
• Gaussian random field
• Gibbs measure
• Hopfield model
• Ising model
• Potts model
• Boolean network
• Markov random field
• Percolation
• Pitman–Yor process
• Point process
• Cox
• Poisson
• Random field
• Random graph
Time series models
• Autoregressive conditional heteroskedasticity (ARCH) model
• Autoregressive integrated moving average (ARIMA) model
• Autoregressive (AR) model
• Autoregressive–moving-average (ARMA) model
• Generalized autoregressive conditional heteroskedasticity (GARCH) model
• Moving-average (MA) model
Financial models
• Binomial options pricing model
• Black–Derman–Toy
• Black–Karasinski
• Black–Scholes
• Chan–Karolyi–Longstaff–Sanders (CKLS)
• Chen
• Constant elasticity of variance (CEV)
• Cox–Ingersoll–Ross (CIR)
• Garman–Kohlhagen
• Heath–Jarrow–Morton (HJM)
• Heston
• Ho–Lee
• Hull–White
• LIBOR market
• Rendleman–Bartter
• SABR volatility
• Vašíček
• Wilkie
Actuarial models
• Bühlmann
• Cramér–Lundberg
• Risk process
• Sparre–Anderson
Queueing models
• Bulk
• Fluid
• Generalized queueing network
• M/G/1
• M/M/1
• M/M/c
Properties
• Càdlàg paths
• Continuous
• Continuous paths
• Ergodic
• Exchangeable
• Feller-continuous
• Gauss–Markov
• Markov
• Mixing
• Piecewise-deterministic
• Predictable
• Progressively measurable
• Self-similar
• Stationary
• Time-reversible
Limit theorems
• Central limit theorem
• Donsker's theorem
• Doob's martingale convergence theorems
• Ergodic theorem
• Fisher–Tippett–Gnedenko theorem
• Large deviation principle
• Law of large numbers (weak/strong)
• Law of the iterated logarithm
• Maximal ergodic theorem
• Sanov's theorem
• Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy)
Inequalities
• Burkholder–Davis–Gundy
• Doob's martingale
• Doob's upcrossing
• Kunita–Watanabe
• Marcinkiewicz–Zygmund
Tools
• Cameron–Martin formula
• Convergence of random variables
• Doléans-Dade exponential
• Doob decomposition theorem
• Doob–Meyer decomposition theorem
• Doob's optional stopping theorem
• Dynkin's formula
• Feynman–Kac formula
• Filtration
• Girsanov theorem
• Infinitesimal generator
• Itô integral
• Itô's lemma
• Karhunen–Loève theorem
• Kolmogorov continuity theorem
• Kolmogorov extension theorem
• Lévy–Prokhorov metric
• Malliavin calculus
• Martingale representation theorem
• Optional stopping theorem
• Prokhorov's theorem
• Quadratic variation
• Reflection principle
• Skorokhod integral
• Skorokhod's representation theorem
• Skorokhod space
• Snell envelope
• Stochastic differential equation
• Tanaka
• Stopping time
• Stratonovich integral
• Uniform integrability
• Usual hypotheses
• Wiener space
• Classical
• Abstract
Disciplines
• Actuarial mathematics
• Control theory
• Econometrics
• Ergodic theory
• Extreme value theory (EVT)
• Large deviations theory
• Mathematical finance
• Mathematical statistics
• Probability theory
• Queueing theory
• Renewal theory
• Ruin theory
• Signal processing
• Statistics
• Stochastic analysis
• Time series analysis
• Machine learning
• List of topics
• Category
| Wikipedia |
Superreal number
In abstract algebra, the superreal numbers are a class of extensions of the real numbers, introduced by H. Garth Dales and W. Hugh Woodin as a generalization of the hyperreal numbers and primarily of interest in non-standard analysis, model theory, and the study of Banach algebras. The field of superreals is itself a subfield of the surreal numbers.
Dales and Woodin's superreals are distinct from the super-real numbers of David O. Tall, which are lexicographically ordered fractions of formal power series over the reals.[1]
Formal definition
Suppose X is a Tychonoff space and C(X) is the algebra of continuous real-valued functions on X. Suppose P is a prime ideal in C(X). Then the factor algebra A = C(X)/P is by definition an integral domain that is a real algebra and that can be seen to be totally ordered. The field of fractions F of A is a superreal field if F strictly contains the real numbers $\mathbb {R} $, so that F is not order isomorphic to $\mathbb {R} $.
If the prime ideal P is a maximal ideal, then F is a field of hyperreal numbers (Robinson's hyperreals being a very special case).
References
1. Tall, David (March 1980), "Looking at graphs through infinitesimal microscopes, windows and telescopes" (PDF), Mathematical Gazette, 64 (427): 22–49, CiteSeerX 10.1.1.377.4224, doi:10.2307/3615886, JSTOR 3615886, S2CID 115821551
Bibliography
• Dales, H. Garth; Woodin, W. Hugh (1996), Super-real fields, London Mathematical Society Monographs. New Series, vol. 14, The Clarendon Press Oxford University Press, ISBN 978-0-19-853991-9, MR 1420859
• Gillman, L.; Jerison, M. (1960), Rings of Continuous Functions, Van Nostrand, ISBN 978-0442026912
Number systems
Sets of definable numbers
• Natural numbers ($\mathbb {N} $)
• Integers ($\mathbb {Z} $)
• Rational numbers ($\mathbb {Q} $)
• Constructible numbers
• Algebraic numbers ($\mathbb {A} $)
• Closed-form numbers
• Periods
• Computable numbers
• Arithmetical numbers
• Set-theoretically definable numbers
• Gaussian integers
Composition algebras
• Division algebras: Real numbers ($\mathbb {R} $)
• Complex numbers ($\mathbb {C} $)
• Quaternions ($\mathbb {H} $)
• Octonions ($\mathbb {O} $)
Split
types
• Over $\mathbb {R} $:
• Split-complex numbers
• Split-quaternions
• Split-octonions
Over $\mathbb {C} $:
• Bicomplex numbers
• Biquaternions
• Bioctonions
Other hypercomplex
• Dual numbers
• Dual quaternions
• Dual-complex numbers
• Hyperbolic quaternions
• Sedenions ($\mathbb {S} $)
• Split-biquaternions
• Multicomplex numbers
• Geometric algebra/Clifford algebra
• Algebra of physical space
• Spacetime algebra
Other types
• Cardinal numbers
• Extended natural numbers
• Irrational numbers
• Fuzzy numbers
• Hyperreal numbers
• Levi-Civita field
• Surreal numbers
• Transcendental numbers
• Ordinal numbers
• p-adic numbers (p-adic solenoids)
• Supernatural numbers
• Profinite integers
• Superreal numbers
• Normal numbers
• Classification
• List
| Wikipedia |
Super-recursive algorithm
In computability theory, super-recursive algorithms are a generalization of ordinary algorithms that are more powerful, that is, compute more than Turing machines. The term was introduced by Mark Burgin, whose book "Super-recursive algorithms" develops their theory and presents several mathematical models. Turing machines and other mathematical models of conventional algorithms allow researchers to find properties of recursive algorithms and their computations. In a similar way, mathematical models of super-recursive algorithms, such as inductive Turing machines, allow researchers to find properties of super-recursive algorithms and their computations.
Burgin, as well as other researchers (including Selim Akl, Eugene Eberbach, Peter Kugel, Jan van Leeuwen, Hava Siegelmann, Peter Wegner, and Jiří Wiedermann) who studied different kinds of super-recursive algorithms and contributed to the theory of super-recursive algorithms, have argued that super-recursive algorithms can be used to disprove the Church-Turing thesis, but this point of view has been criticized within the mathematical community and is not widely accepted.
Definition
Burgin (2005: 13) uses the term recursive algorithms for algorithms that can be implemented on Turing machines, and uses the word algorithm in a more general sense. Then a super-recursive class of algorithms is "a class of algorithms in which it is possible to compute functions not computable by any Turing machine" (Burgin 2005: 107).
Super-recursive algorithms are closely related to hypercomputation in a way similar to the relationship between ordinary computation and ordinary algorithms. Computation is a process, while an algorithm is a finite constructive description of such a process. Thus a super-recursive algorithm defines a "computational process (including processes of input and output) that cannot be realized by recursive algorithms." (Burgin 2005: 108). A more restricted definition demands that hypercomputation solves a supertask (see Copeland 2002; Hagar and Korolev 2007).
Super-recursive algorithms are also related to algorithmic schemes, which are more general than super-recursive algorithms. Burgin argues (2005: 115) that it is necessary to make a clear distinction between super-recursive algorithms and those algorithmic schemes that are not algorithms. Under this distinction, some types of hypercomputation are obtained by super-recursive algorithms, e.g., inductive Turing machines, while other types of hypercomputation are directed by algorithmic schemas, e.g., infinite time Turing machines. This explains how works on super-recursive algorithms are related to hypercomputation and vice versa. According to this argument, super-recursive algorithms are just one way of defining a hypercomputational process.
Examples
Examples of super-recursive algorithms include (Burgin 2005: 132):
• limiting recursive functions and limiting partial recursive functions (E.M. Gold 1965)
• trial and error predicates (Hilary Putnam 1965)
• inductive inference machines (Carl Smith)
• inductive Turing machines, which perform computations similar to computations of Turing machines and produce their results after a finite number of steps (Mark Burgin)
• limit Turing machines, which perform computations similar to computations of Turing machines but their final results are limits of their intermediate results (Mark Burgin)
• trial-and-error machines (Ja. Hintikka and A. Mutanen 1998)
• general Turing machines (J. Schmidhuber)
• Internet machines (van Leeuwen, J. and Wiedermann, J.)
• evolutionary computers, which use DNA to produce the value of a function (Darko Roglic)
• fuzzy computation (Jirí Wiedermann 2004)
• evolutionary Turing machines (Eugene Eberbach 2005)
Examples of algorithmic schemes include:
• Turing machines with arbitrary oracles (Alan Turing)
• Transrecursive operators (Borodyanskii and Burgin)
• machines that compute with real numbers (L. Blum, F. Cucker, M. Shub, and S. Smale 1998)
• neural networks based on real numbers (Hava Siegelmann 1999)
For examples of practical super-recursive algorithms, see the book of Burgin.
Inductive Turing machines
Inductive Turing machines implement an important class of super-recursive algorithms. An inductive Turing machine is a definite list of well-defined instructions for completing a task which, when given an initial state, will proceed through a well-defined series of successive states, eventually giving the final result. The difference between an inductive Turing machine and an ordinary Turing machine is that an ordinary Turing machine must stop when it has obtained its result, while in some cases an inductive Turing machine can continue to compute after obtaining the result, without stopping. Kleene used the name calculation procedure or algorithm for procedures that could run forever without stopping (Kleene 1952:137). Kleene also demanded that such an algorithm must eventually exhibit "some object" (Kleene 1952:137). Burgin argues that this condition is satisfied by inductive Turing machines, as their results are exhibited after a finite number of steps. The reason that inductive Turing machines cannot be instructed to halt when their final output is produced is that in some cases inductive Turing machines may not be able to tell at which step the result has been obtained.
Simple inductive Turing machines are equivalent to other models of computation such as general Turing machines of Schmidhuber, trial and error predicates of Hilary Putnam, limiting partial recursive functions of Gold, and trial-and-error machines of Hintikka and Mutanen (1998). More advanced inductive Turing machines are much more powerful. There are hierarchies of inductive Turing machines that can decide membership in arbitrary sets of the arithmetical hierarchy (Burgin 2005). In comparison with other equivalent models of computation, simple inductive Turing machines and general Turing machines give direct constructions of computing automata that are thoroughly grounded in physical machines. In contrast, trial-and-error predicates, limiting recursive functions, and limiting partial recursive functions present only syntactic systems of symbols with formal rules for their manipulation. Simple inductive Turing machines and general Turing machines are related to limiting partial recursive functions and trial-and-error predicates as Turing machines are related to partial recursive functions and lambda calculus.
The non-halting computations of inductive Turing machines should not be confused with infinite-time computations (see, for example, Potgieter 2006). First, some computations of inductive Turing machines do halt. As in the case of conventional Turing machines, some halting computations give the result, while others do not. Even if it does not halt, an inductive Turing machine produces output from time to time. If this output stops changing, it is then considered the result of the computation.
There are two main distinctions between ordinary Turing machines and simple inductive Turing machines. The first distinction is that even simple inductive Turing machines can do much more than conventional Turing machines. The second distinction is that a conventional Turing machine will always determine (by coming to a final state) when the result is obtained, while a simple inductive Turing machine, in some cases (such as when "computing" something that cannot be computed by an ordinary Turing machine), will not be able to make this determination.
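As an illustration of this behavior (an example in the style of the trial-and-error predicates mentioned above, not a construction taken from Burgin's book), the halting problem can be "decided in the limit": the machine keeps announcing a guess, the sequence of guesses eventually stabilizes on the correct answer, but no stage of the computation certifies that the current guess is final.

```python
def limit_decide_halting(program_steps, max_stages):
    """Trial-and-error ("limiting") computation of the halting predicate.

    program_steps: an iterator that yields once per simulated step of a program
                   and is exhausted exactly when the program halts.
    The printed guesses converge to the right answer, but no stage announces
    that the guess is final.
    """
    guess = False                              # initial output: "does not halt"
    print("stage 0: guess =", guess)
    for stage in range(1, max_stages + 1):
        try:
            next(program_steps)                # simulate one more step
        except StopIteration:
            guess = True                       # the program halted: revise the output
            print(f"stage {stage}: guess =", guess)
            break
        print(f"stage {stage}: guess =", guess)
    return guess

# A stand-in "program" that halts after 7 steps:
limit_decide_halting(iter(range(7)), max_stages=10)
```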
Schmidhuber's generalized Turing machines
A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π but still excludes most of the real numbers, because most cannot be described by a finite program. Traditional Turing machines with a write-only output tape cannot edit their previous outputs; generalized Turing machines, according to Jürgen Schmidhuber, can edit their output tape as well as their work tape. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges, that is, it does not change any more after some finite initial time interval. Schmidhuber (2000, 2002) uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines and simple inductive Turing machines are two classes of super-recursive algorithms that are the closest to recursive algorithms (Schmidhuber 2000).
Relation to the Church–Turing thesis
The Church–Turing thesis in recursion theory relies on a particular definition of the term algorithm. Based on definitions that are more general than the one commonly used in recursion theory, Burgin argues that super-recursive algorithms, such as inductive Turing machines disprove the Church–Turing thesis. He proves furthermore that super-recursive algorithms could theoretically provide even greater efficiency gains than using quantum algorithms.
Burgin's interpretation of super-recursive algorithms has encountered opposition in the mathematical community. One critic is logician Martin Davis, who argues that Burgin's claims have been well understood "for decades". Davis states,
"The present criticism is not about the mathematical discussion of these matters but only about the misleading claims regarding physical systems of the present and future."(Davis 2006: 128)
Davis disputes Burgin's claims that sets at level $\Delta _{2}^{0}$ of the arithmetical hierarchy can be called computable, saying
"It is generally understood that for a computational result to be useful one must be able to at least recognize that it is indeed the result sought." (Davis 2006: 128)
See also
• Interactive computation
References
• Blum, L., F. Cucker, M. Shub, and S. Smale, Complexity and Real Computation, Springer Publishing 1998
• Burgin, Mark (2005), Super-recursive algorithms, Monographs in computer science, Springer. ISBN 0-387-95569-0
• José Félix Costa, MR2246430 Review in MathSciNet.
• Harvey Cohn (2005), CR131542 (0606-0574) Review in Computing Reviews
• Martin Davis (2007),Review in Bulletin of Symbolic Logic, v. 13 n. 2.
• Marc L. Smith (2006), Review in The Computer Journal, Vol. 49 No. 6
• Review, Vilmar Trevisan (2005), Zentralblatt MATH, Vol. 1070. Review 1070.68038
• Copeland, J. (2002) Hypercomputation, Minds and Machines, v. 12, pp. 461–502
• Davis, Martin (2006), "The Church–Turing Thesis: Consensus and opposition". Proceedings, Computability in Europe 2006. Lecture notes in computer science, 3988 pp. 125–132
• Eberbach, E. (2005) "Toward a theory of evolutionary computation", BioSystems 82, 1-19
• Gold, E.M. Limiting recursion. J. Symb. Log. 10 (1965), 28-48.
• Gold, E. Mark (1967), Language Identification in the Limit (PDF), vol. 10, Information and Control, pp. 447–474
• Hagar, A. and Korolev, A. (2007) "Quantum Hypercomputation – Hype or Computation?"
• Hintikka, Ja. and Mutanen, A. An Alternative Concept of Computability, in “Language, Truth, and Logic in Mathematics”, Dordrecht, pp. 174–188, 1998
• Kleene, Stephen C. (1952), Introduction to Metamathematics (First ed.), Amsterdam: North-Holland Publishing Company.
• Peter Kugel, "It's time to think outside the computational box", Communications of the ACM, Volume 48, Issue 11, November 2005
• Petrus H. Potgieter, "Zeno machines and hypercomputation", Theoretical Computer Science, Volume 358, Issue 1 (July 2006) pp. 23 – 33
• Hilary Putnam, "Trial and Error Predicates and the Solution to a Problem of Mostowski". Journal of Symbolic Logic, Volume 30, Issue 1 (1965), 49-57
• Darko Roglic, "The universal evolutionary computer based on super-recursive algorithms of evolvability"
• Hava Siegelmann, Neural Networks and Analog Computation: Beyond the Turing Limit, Birkhäuser, 1999, ISBN 0817639497
• Turing, A. (1939) Systems of Logic Based on Ordinals, Proc. Lond. Math. Soc., Ser.2, v. 45: 161-228
• van Leeuwen, J. and Wiedermann, J. (2000a) Breaking the Turing Barrier: The case of the Internet, Techn. Report, Inst. of Computer Science, Academy of Sciences of the Czech Republic, Prague
• Jiří Wiedermann, Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines, Theoretical Computer Science, Volume 317, Issue 1-3, June 2004
• Jiří Wiedermann and Jan van Leeuwen, "The emergent computational potential of evolving artificial living systems", AI Communications, v. 15, No. 4, 2002
Further reading
• Akl, S.G., Three counterexamples to dispel the myth of the universal computer, Parallel Processing Letters, Vol. 16, No. 3, September 2006, pp. 381 – 403.
• Akl, S.G., The myth of universal computation, in: Parallel Numerics, Trobec, R., Zinterhof, P., Vajtersic, M., and Uhl, A., Eds., Part 2, Systems and Simulation, University of Salzburg, Salzburg, Austria and Jozef Stefan Institute, Ljubljana, Slovenia, 2005, pp. 211 – 236
• Angluin, D., and Smith, C. H. (1983) Inductive Inference: Theory and Methods, Comput. Surveys, v. 15, no. 3, pp. 237–269
• Apsïtis, K, Arikawa, S, Freivalds, R., Hirowatari, E., and Smith, C. H. (1999) On the inductive inference of recursive real-valued functions, Theoretical Computer Science, 219(1-2): 3—17
• Boddy, M, Dean, T. 1989. "Solving Time-Dependent Planning Problems". Technical Report: CS-89-03, Brown University
• Burgin, M. "Algorithmic Complexity of Recursive and Inductive Algorithms", Theoretical Computer Science, v. 317, No. 1/3, 2004, pp. 31–60
• Burgin, M. and Klinger, A. Experience, Generations, and Limits in Machine Learning, Theoretical Computer Science, v. 317, No. 1/3, 2004, pp. 71–91
• Eberbach, E., and Wegner, P., "Beyond Turing Machines", Bulletin of the European Association for Theoretical Computer Science (EATCS Bulletin), 81, Oct. 2003, 279-304
• S. Zilberstein, Using Anytime Algorithms in Intelligent Systems, "AI Magazine", 17(3):73-83, 1996
External links
• A New Paradigm for Computation. Los Angeles ACM Chapter Meeting, December 1, 1999.
• Anytime algorithm from FOLDOC
| Wikipedia |
Superrigidity
In mathematics, in the theory of discrete groups, superrigidity is a concept designed to show how a linear representation ρ of a discrete group Γ inside an algebraic group G can, under some circumstances, be as good as a representation of G itself. That this phenomenon happens for certain broadly defined classes of lattices inside semisimple groups was the discovery of Grigory Margulis, who proved some fundamental results in this direction.
There is more than one result that goes by the name of Margulis superrigidity.[1] One simplified statement is this: take G to be a simply connected semisimple real algebraic group in GLn, such that the Lie group of its real points has real rank at least 2 and no compact factors. Suppose Γ is an irreducible lattice in G. For a local field F and a linear representation ρ of the lattice Γ into GLn(F), assume that the image ρ(Γ) is not relatively compact (in the topology arising from F) and that its closure in the Zariski topology is connected. Then F is the real numbers or the complex numbers, and there is a rational representation of G giving rise to ρ by restriction.
See also
• Mostow rigidity theorem
• Local rigidity
Notes
1. Margulis 1991, p. 2 Theorem 2.
References
• "Discrete subgroup", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Gromov, M.; Pansu, P. Rigidity of lattices: an introduction. Geometric topology: recent developments (Montecatini Terme, 1990), 39–137, Lecture Notes in Math., 1504, Springer, Berlin, 1991. doi:10.1007/BFb0094289
• Gromov, Mikhail; Schoen, Richard. Harmonic maps into singular spaces and p-adic superrigidity for lattices in groups of rank one. Inst. Hautes Études Sci. Publ. Math. No. 76 (1992), 165–246.
• Ji, Lizhen. A summary of the work of Gregory Margulis. Pure Appl. Math. Q. 4 (2008), no. 1, Special Issue: In honor of Grigory Margulis. Part 2, 1–69. [Pages 17-19]
• Jost, Jürgen; Yau, Shing-Tung. Applications of quasilinear PDE to algebraic geometry and arithmetic lattices. Algebraic geometry and related topics (Inchon, 1992), 169–193, Conf. Proc. Lecture Notes Algebraic Geom., I, Int. Press, Cambridge, MA, 1993.
• Margulis, G.A. (1991). Discrete subgroups of semisimple lie groups. Ergebnisse der Mathematik und ihrer Grenzgebiete (3), 17. Springer-Verlag. ISBN 3-540-12179-X. MR 1090825. OCLC 471802846.
• Tits, Jacques. Travaux de Margulis sur les sous-groupes discrets de groupes de Lie. Séminaire Bourbaki, 28ème année (1975/76), Exp. No. 482, pp. 174–190. Lecture Notes in Math., Vol. 567, Springer, Berlin, 1977.
| Wikipedia |
Superalgebra
In mathematics and theoretical physics, a superalgebra is a Z2-graded algebra.[1] That is, it is an algebra over a commutative ring or field with a decomposition into "even" and "odd" pieces and a multiplication operator that respects the grading.
The prefix super- comes from the theory of supersymmetry in theoretical physics. Superalgebras and their representations, supermodules, provide an algebraic framework for formulating supersymmetry. The study of such objects is sometimes called super linear algebra. Superalgebras also play an important role in the related field of supergeometry, where they enter into the definitions of graded manifolds, supermanifolds and superschemes.
Formal definition
Let K be a commutative ring. In most applications, K is a field of characteristic 0, such as R or C.
A superalgebra over K is a K-module A with a direct sum decomposition
$A=A_{0}\oplus A_{1}$
together with a bilinear multiplication A × A → A such that
$A_{i}A_{j}\subseteq A_{i+j}$
where the subscripts are read modulo 2, i.e. they are thought of as elements of Z2.
A superring, or Z2-graded ring, is a superalgebra over the ring of integers Z.
The elements of each of the Ai are said to be homogeneous. The parity of a homogeneous element x, denoted by |x|, is 0 or 1 according to whether it is in A0 or A1. Elements of parity 0 are said to be even and those of parity 1 to be odd. If x and y are both homogeneous then so is the product xy and $|xy|=|x|+|y|$.
An associative superalgebra is one whose multiplication is associative and a unital superalgebra is one with a multiplicative identity element. The identity element in a unital superalgebra is necessarily even. Unless otherwise specified, all superalgebras in this article are assumed to be associative and unital.
A commutative superalgebra (or supercommutative algebra) is one which satisfies a graded version of commutativity. Specifically, A is commutative if
$yx=(-1)^{|x||y|}xy\,$
for all homogeneous elements x and y of A. There are superalgebras that are commutative in the ordinary sense, but not in the superalgebra sense. For this reason, commutative superalgebras are often called supercommutative in order to avoid confusion.[2]
Examples
• Any algebra over a commutative ring K may be regarded as a purely even superalgebra over K; that is, by taking A1 to be trivial.
• Any Z- or N-graded algebra may be regarded as superalgebra by reading the grading modulo 2. This includes examples such as tensor algebras and polynomial rings over K.
• In particular, any exterior algebra over K is a superalgebra. The exterior algebra is the standard example of a supercommutative algebra.
• The symmetric polynomials and alternating polynomials together form a superalgebra, being the even and odd parts, respectively. Note that this is a different grading from the grading by degree.
• Clifford algebras are superalgebras. They are generally noncommutative.
• The set of all endomorphisms (denoted $\mathbf {End} (V)\equiv \mathbf {Hom} (V,V)$, where the boldface $\mathrm {Hom} $ is referred to as internal $\mathrm {Hom} $, composed of all linear maps) of a super vector space forms a superalgebra under composition.
• The set of all square supermatrices with entries in K forms a superalgebra denoted by Mp|q(K). This algebra may be identified with the algebra of endomorphisms of a free supermodule over K of rank p|q and is the internal Hom of above for this space.
• Lie superalgebras are a graded analog of Lie algebras. Lie superalgebras are nonunital and nonassociative; however, one may construct the analog of a universal enveloping algebra of a Lie superalgebra which is a unital, associative superalgebra.
Further definitions and constructions
Even subalgebra
Let A be a superalgebra over a commutative ring K. The submodule A0, consisting of all even elements, is closed under multiplication and contains the identity of A and therefore forms a subalgebra of A, naturally called the even subalgebra. It forms an ordinary algebra over K.
The set of all odd elements A1 is an A0-bimodule whose scalar multiplication is just multiplication in A. The product in A equips A1 with a bilinear form
$\mu :A_{1}\otimes _{A_{0}}A_{1}\to A_{0}$
such that
$\mu (x\otimes y)\cdot z=x\cdot \mu (y\otimes z)$
for all x, y, and z in A1. This follows from the associativity of the product in A.
Grade involution
There is a canonical involutive automorphism on any superalgebra called the grade involution. It is given on homogeneous elements by
${\hat {x}}=(-1)^{|x|}x$
and on arbitrary elements by
${\hat {x}}=x_{0}-x_{1}$
where xi are the homogeneous parts of x. If A has no 2-torsion (in particular, if 2 is invertible) then the grade involution can be used to distinguish the even and odd parts of A:
$A_{i}=\{x\in A:{\hat {x}}=(-1)^{i}x\}.$
Supercommutativity
The supercommutator on A is the binary operator given by
$[x,y]=xy-(-1)^{|x||y|}yx$
on homogeneous elements, extended to all of A by linearity. Elements x and y of A are said to supercommute if [x, y] = 0.
The supercenter of A is the set of all elements of A which supercommute with all elements of A:
$\mathrm {Z} (A)=\{a\in A:[a,x]=0{\text{ for all }}x\in A\}.$
The supercenter of A is, in general, different than the center of A as an ungraded algebra. A commutative superalgebra is one whose supercenter is all of A.
Super tensor product
The graded tensor product of two superalgebras A and B may be regarded as a superalgebra A ⊗ B with a multiplication rule determined by:
$(a_{1}\otimes b_{1})(a_{2}\otimes b_{2})=(-1)^{|b_{1}||a_{2}|}(a_{1}a_{2}\otimes b_{1}b_{2}).$
If either A or B is purely even, this is equivalent to the ordinary ungraded tensor product (except that the result is graded). However, in general, the super tensor product is distinct from the tensor product of A and B regarded as ordinary, ungraded algebras.
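For instance (a direct illustration of the sign rule, not material from the cited sources), take $A=B=\Lambda (\theta )$, the exterior algebra on a single odd generator $\theta $. Then $(\theta \otimes 1)(1\otimes \theta )=\theta \otimes \theta $, while $(1\otimes \theta )(\theta \otimes 1)=(-1)^{|\theta ||\theta |}\,\theta \otimes \theta =-\,\theta \otimes \theta $, so the two factors anticommute in the super tensor product, whereas they would commute in the ungraded tensor product.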
Generalizations and categorical definition
One can easily generalize the definition of superalgebras to include superalgebras over a commutative superring. The definition given above is then a specialization to the case where the base ring is purely even.
Let R be a commutative superring. A superalgebra over R is a R-supermodule A with a R-bilinear multiplication A × A → A that respects the grading. Bilinearity here means that
$r\cdot (xy)=(r\cdot x)y=(-1)^{|r||x|}x(r\cdot y)$
for all homogeneous elements r ∈ R and x, y ∈ A.
Equivalently, one may define a superalgebra over R as a superring A together with a superring homomorphism R → A whose image lies in the supercenter of A.
One may also define superalgebras categorically. The category of all R-supermodules forms a monoidal category under the super tensor product with R serving as the unit object. An associative, unital superalgebra over R can then be defined as a monoid in the category of R-supermodules. That is, a superalgebra is an R-supermodule A with two (even) morphisms
${\begin{aligned}\mu &:A\otimes A\to A\\\eta &:R\to A\end{aligned}}$
for which the usual diagrams commute.
Notes
1. Kac, Martinez & Zelmanov 2001, p. 3
2. Varadarajan 2004, p. 87
References
• Deligne, P.; Morgan, J. W. (1999). "Notes on Supersymmetry (following Joseph Bernstein)". Quantum Fields and Strings: A Course for Mathematicians. Vol. 1. American Mathematical Society. pp. 41–97. ISBN 0-8218-2012-5.
• Kac, V. G.; Martinez, C.; Zelmanov, E. (2001). Graded simple Jordan superalgebras of growth one. Memoirs of the AMS Series. Vol. 711. AMS Bookstore. ISBN 978-0-8218-2645-4.
• Manin, Y. I. (1997). Gauge Field Theory and Complex Geometry ((2nd ed.) ed.). Berlin: Springer. ISBN 3-540-61378-1.
• Varadarajan, V. S. (2004). Supersymmetry for Mathematicians: An Introduction. Courant Lecture Notes in Mathematics. Vol. 11. American Mathematical Society. ISBN 978-0-8218-3574-6.
Industrial and applied mathematics
Computational
• Algorithms
• design
• analysis
• Automata theory
• Coding theory
• Computational geometry
• Constraint programming
• Computational logic
• Cryptography
• Information theory
Discrete
• Computer algebra
• Computational number theory
• Combinatorics
• Graph theory
• Discrete geometry
Analysis
• Approximation theory
• Clifford analysis
• Clifford algebra
• Differential equations
• Ordinary differential equations
• Partial differential equations
• Stochastic differential equations
• Differential geometry
• Differential forms
• Gauge theory
• Geometric analysis
• Dynamical systems
• Chaos theory
• Control theory
• Functional analysis
• Operator algebra
• Operator theory
• Harmonic analysis
• Fourier analysis
• Multilinear algebra
• Exterior
• Geometric
• Tensor
• Vector
• Multivariable calculus
• Exterior
• Geometric
• Tensor
• Vector
• Numerical analysis
• Numerical linear algebra
• Numerical methods for ordinary differential equations
• Numerical methods for partial differential equations
• Validated numerics
• Variational calculus
Probability theory
• Distributions (random variables)
• Stochastic processes / analysis
• Path integral
• Stochastic variational calculus
Mathematical
physics
• Analytical mechanics
• Lagrangian
• Hamiltonian
• Field theory
• Classical
• Conformal
• Effective
• Gauge
• Quantum
• Statistical
• Topological
• Perturbation theory
• in quantum mechanics
• Potential theory
• String theory
• Bosonic
• Topological
• Supersymmetry
• Supersymmetric quantum mechanics
• Supersymmetric theory of stochastic dynamics
Algebraic structures
• Algebra of physical space
• Feynman integral
• Poisson algebra
• Quantum group
• Renormalization group
• Representation theory
• Spacetime algebra
• Superalgebra
• Supersymmetry algebra
Decision sciences
• Game theory
• Operations research
• Optimization
• Social choice theory
• Statistics
• Mathematical economics
• Mathematical finance
Other applications
• Biology
• Chemistry
• Psychology
• Sociology
• "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
Related
• Mathematics
• Mathematical software
Organizations
• Society for Industrial and Applied Mathematics
• Japan Society for Industrial and Applied Mathematics
• Société de Mathématiques Appliquées et Industrielles
• International Council for Industrial and Applied Mathematics
• European Community on Computational Methods in Applied Sciences
• Category
• Mathematics portal / outline / topics list
Supersymmetry
General topics
• Supersymmetry
• Supersymmetric gauge theory
• Supersymmetric quantum mechanics
• Supergravity
• Superstring theory
• Super vector space
• Supergeometry
Supermathematics
• Superalgebra
• Lie superalgebra
• Super-Poincaré algebra
• Superconformal algebra
• Supersymmetry algebra
• Supergroup
• Superspace
• Harmonic superspace
• Super Minkowski space
• Supermanifold
Concepts
• Supercharge
• R-symmetry
• Supermultiplet
• Short supermultiplet
• BPS state
• Superpotential
• D-term
• FI D-term
• F-term
• Moduli space
• Supersymmetry breaking
• Konishi anomaly
• Seiberg duality
• Seiberg–Witten theory
• Witten index
• Wess–Zumino gauge
• Localization
• Mu problem
• Little hierarchy problem
• Electric–magnetic duality
Theorems
• Coleman–Mandula
• Haag–Łopuszański–Sohnius
• Nonrenormalization
Field theories
• Wess–Zumino
• N = 1 super Yang–Mills
• N = 4 super Yang–Mills
• Super QCD
• MSSM
• NMSSM
• 6D (2,0) superconformal
• ABJM superconformal
Supergravity
• Pure 4D N = 1 supergravity
• N = 8 supergravity
• Higher dimensional
• Gauged supergravity
Superpartners
• Axino
• Chargino
• Gaugino
• Goldstino
• Graviphoton
• Graviscalar
• Higgsino
• LSP
• Neutralino
• R-hadron
• Sfermion
• Sgoldstino
• Stop squark
• Superghost
Researchers
• Affleck
• Bagger
• Batchelor
• Berezin
• Dine
• Fayet
• Gates
• Golfand
• Iliopoulos
• Montonen
• Olive
• Salam
• Seiberg
• Siegel
• Roček
• Rogers
• Wess
• Witten
• Zumino
| Wikipedia |
Supersingular K3 surface
In algebraic geometry, a supersingular K3 surface is a K3 surface over a field k of characteristic p > 0 such that the slopes of Frobenius on the crystalline cohomology H2(X,W(k)) are all equal to 1.[1] These have also been called Artin supersingular K3 surfaces. Supersingular K3 surfaces can be considered the most special and interesting of all K3 surfaces.
Definitions and main results
More generally, a smooth projective variety X over a field of characteristic p > 0 is called supersingular if all slopes of Frobenius on the crystalline cohomology Ha(X,W(k)) are equal to a/2, for all a. In particular, this gives the standard notion of a supersingular abelian variety. For a variety X over a finite field Fq, it is equivalent to say that the eigenvalues of Frobenius on the l-adic cohomology Ha(X,Ql) are equal to qa/2 times roots of unity. It follows that any variety in positive characteristic whose l-adic cohomology is generated by algebraic cycles is supersingular.
A K3 surface whose l-adic cohomology is generated by algebraic cycles is sometimes called a Shioda supersingular K3 surface. Since the second Betti number of a K3 surface is always 22, this property means that the surface has 22 independent elements in its Picard group (ρ = 22). From what we have said, a K3 surface with Picard number 22 must be supersingular.
Conversely, the Tate conjecture would imply that every supersingular K3 surface over an algebraically closed field has Picard number 22. This is now known in every characteristic p except 2, since the Tate conjecture was proved for all K3 surfaces in characteristic p at least 3 by Nygaard-Ogus (1985), Maulik (2014), Charles (2013), and Madapusi Pera (2013).
To see that K3 surfaces with Picard number 22 exist only in positive characteristic, one can use Hodge theory to prove that the Picard number of a K3 surface in characteristic zero is at most 20. In fact the Hodge diamond for any complex K3 surface is the same (see classification), and the middle row reads 1, 20, 1. In other words, h2,0 and h0,2 both take the value 1, with h1,1 = 20. Therefore, the dimension of the space spanned by the algebraic cycles is at most 20 in characteristic zero; surfaces with this maximum value are sometimes called singular K3 surfaces.
Another phenomenon which can only occur in positive characteristic is that a K3 surface may be unirational. Michael Artin observed that every unirational K3 surface over an algebraically closed field must have Picard number 22. (In particular, a unirational K3 surface must be supersingular.) Conversely, Artin conjectured that every K3 surface with Picard number 22 must be unirational.[2] Artin's conjecture was proved in characteristic 2 by Rudakov & Shafarevich (1978). Proofs in every characteristic p at least 5 were claimed by Liedtke (2013) and Lieblich (2014), but later refuted by Bragg & Lieblich (2022).
History
The first example of a K3 surface with Picard number 22 was given by Tate (1965), who observed that the Fermat quartic
w^4 + x^4 + y^4 + z^4 = 0
has Picard number 22 over algebraically closed fields of characteristic congruent to 3 mod 4. Then Shioda showed that the elliptic modular surface of level 4 (the universal generalized elliptic curve E(4) → X(4)) in characteristic congruent to 3 mod 4 is a K3 surface with Picard number 22, as is the Kummer surface of the product of two supersingular elliptic curves in odd characteristic. Shimada (2004, 2004b) showed that all K3 surfaces with Picard number 22 are double covers of the projective plane. In the case of characteristic 2 the double cover may need to be an inseparable covering.
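The point counts behind such statements can be explored directly for small primes. The sketch below is a plain-Python brute force, offered only as an illustration: the choice of primes, the loop bounds, and the Lefschetz-style bookkeeping are assumptions of this sketch rather than part of the cited results.

```python
# Brute-force count of the projective F_p-points of the Fermat quartic
#   w^4 + x^4 + y^4 + z^4 = 0  in P^3, for a few small primes p = 3 (mod 4).

def fermat_quartic_count(p):
    """Number of projective F_p-points of w^4 + x^4 + y^4 + z^4 = 0."""
    total = 0
    for w in range(p):
        for x in range(p):
            for y in range(p):
                for z in range(p):
                    if (w, x, y, z) == (0, 0, 0, 0):
                        continue
                    if (w**4 + x**4 + y**4 + z**4) % p == 0:
                        total += 1
    # Each projective point has exactly p - 1 nonzero affine representatives.
    return total // (p - 1)

for p in (3, 7, 11):
    n = fermat_quartic_count(p)
    trace = n - 1 - p**2   # Lefschetz: #X(F_p) = 1 + Tr(Frob | H^2) + p^2 for a K3
    # Each H^2-eigenvalue has absolute value p, so |trace| is at most 22p.
    print(f"p = {p}: #X(F_p) = {n}, H^2 trace = {trace}, bound 22p = {22 * p}")
```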
The discriminant of the intersection form on the Picard group of a K3 surface with Picard number 22 is an even power
p^(2e)
of the characteristic p, as was shown by Artin and Milne. Here e is called the Artin invariant of the K3 surface. Artin showed that
1 ≤ e ≤ 10.
There is a corresponding Artin stratification of the moduli spaces of supersingular K3 surfaces, which have dimension 9. The subspace of supersingular K3 surfaces with Artin invariant e has dimension e − 1.
Examples
In characteristic 2,
z^2 = f(x, y),
for a sufficiently general polynomial f(x, y) of degree 6, defines a surface with 21 isolated singularities. The smooth projective minimal model of such a surface is a unirational K3 surface, and hence a K3 surface with Picard number 22. The largest Artin invariant here is 10.
Similarly, in characteristic 3,
z^3 = g(x, y),
for a sufficiently general polynomial g(x, y) of degree 4, defines a surface with 9 isolated singularities. The smooth projective minimal model of such a surface is again a unirational K3 surface, and hence a K3 surface with Picard number 22. The highest Artin invariant in this family is 6.
Dolgachev & Kondō (2003) described the supersingular K3 surface in characteristic 2 with Artin number 1 in detail.
Kummer surfaces
If the characteristic p is greater than 2, Ogus (1979) showed that every K3 surface S with Picard number 22 and Artin invariant at most 2 is a Kummer surface, meaning the minimal resolution of the quotient of an abelian surface A by the mapping x ↦ − x. More precisely, A is a supersingular abelian surface, isogenous to the product of two supersingular elliptic curves.
See also
• K3 surface
• Tate conjecture
Notes
1. M. Artin and B. Mazur. Ann. Sci. École Normale Supérieure 10 (1977), 87-131. P. 90.
2. M. Artin. Ann. Sci. École Normale Supérieure 7 (1974), 543-567. P. 552.
References
• Artin, Michael (1974), "Supersingular K3 surfaces", Annales Scientifiques de l'École Normale Supérieure, Série 4, 7: 543–567, MR 0371899
• Bragg, Daniel; Lieblich, Max (2022), "Perfect points on curves of genus 1 and consequences for supersingular K3 surfaces", Compositio Mathematica, 158: 1052–1083, arXiv:1904.04803, doi:10.1112/S0010437X22007382
• Charles, F. (2013), "The Tate conjecture for K3 surfaces over finite fields", Inventiones Mathematicae, 194: 119–145, arXiv:1206.4002, Bibcode:2013InMat.194..119C, doi:10.1007/s00222-012-0443-y, MR 3103257
• Dolgachev, I.; Kondō, S. (2003), "A supersingular K3 surface in characteristic 2 and the Leech lattice", Int. Math. Res. Not. (1): 1–23, arXiv:math/0112283, Bibcode:2001math.....12283D, MR 1935564
• Lieblich, M. (2014), On the unirationality of supersingular K3 surfaces, arXiv:1403.3073, Bibcode:2014arXiv1403.3073L
• Liedtke, C. (2013), "Supersingular K3 surfaces are unirational", Inventiones Mathematicae, 200: 979–1014, arXiv:1304.5623, Bibcode:2015InMat.200..979L, doi:10.1007/s00222-014-0547-7
• Liedtke, Christian (2016), "Lectures on Supersingular K3 Surfaces and the Crystalline Torelli Theorem", K3 Surfaces and Their Moduli, Progress in Mathematics, vol. 315, Birkhauser, pp. 171–235, arXiv:1403.2538, Bibcode:2014arXiv1403.2538L
• Madapusi Pera, K. (2013), "The Tate conjecture for K3 surfaces in odd characteristic", Inventiones Mathematicae, 201: 625–668, arXiv:1301.6326, Bibcode:2013arXiv1301.6326M, doi:10.1007/s00222-014-0557-5
• Maulik, D. (2014), "Supersingular K3 surfaces for large primes", Duke Mathematical Journal, 163: 2357–2425, arXiv:1203.2889, Bibcode:2012arXiv1203.2889M, doi:10.1215/00127094-2804783, MR 3265555
• Nygaard, N.; Ogus, A. (1985), "Tate's conjecture for K3 surfaces of finite height", Annals of Mathematics, 122: 461–507, doi:10.2307/1971327, JSTOR 1971327, MR 0819555
• Ogus, Arthur (1979), "Supersingular K3 crystals", Journées de Géométrie Algébrique de Rennes (Rennes, 1978), Vol. II, Astérisque, vol. 64, Paris: Société Mathématique de France, pp. 3–86, MR 0563467
• Rudakov, A. N.; Shafarevich, Igor R. (1978), "Supersingular K3 surfaces over fields of characteristic 2", Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya, 42 (4): 848–869, Bibcode:1979IzMat..13..147R, doi:10.1070/IM1979v013n01ABEH002016, MR 0508830
• Shimada, Ichiro (2004), "Supersingular K3 surfaces in characteristic 2 as double covers of a projective plane" (PDF), The Asian Journal of Mathematics, 8 (3): 531–586, arXiv:math/0311073, Bibcode:2003math.....11073S, doi:10.4310/ajm.2004.v8.n3.a8, MR 2129248, archived from the original (PDF) on 2006-07-20
• Shimada, Ichiro (2004b), "Supersingular K3 surfaces in odd characteristic and sextic double planes", Mathematische Annalen, 328 (3): 451–468, arXiv:math/0309451, doi:10.1007/s00208-003-0494-x, MR 2036331
• Shioda, Tetsuji (1979), "Supersingular K3 surfaces", Algebraic geometry (Proc. Summer Meeting, Univ. Copenhagen, Copenhagen, 1978), Lecture Notes in Math., vol. 732, Berlin, New York: Springer-Verlag, pp. 564–591, doi:10.1007/BFb0066664, MR 0555718
• Tate, John T. (1965), "Algebraic cycles and poles of zeta functions", Arithmetical Algebraic Geometry (Proc. Conf. Purdue Univ., 1963), New York: Harper & Row, pp. 93–110, MR 0225778
| Wikipedia |
Supersingular prime (algebraic number theory)
In algebraic number theory, a supersingular prime for a given elliptic curve is a prime number with a certain relationship to that curve. If the curve E is defined over the rational numbers, then a prime p is supersingular for E if the reduction of E modulo p is a supersingular elliptic curve over the residue field Fp.
Noam Elkies showed that every elliptic curve over the rational numbers has infinitely many supersingular primes. However, the set of supersingular primes has asymptotic density zero (if E does not have complex multiplication). Lang & Trotter (1976) conjectured that the number of supersingular primes less than a bound X is within a constant multiple of ${\frac {\sqrt {X}}{\ln X}}$, using heuristics involving the distribution of eigenvalues of the Frobenius endomorphism. As of 2019, this conjecture is open.
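As an illustration of the definition, supersingular primes of a fixed curve can be located numerically: for a prime p ≥ 5 of good reduction, the reduction of E mod p is supersingular exactly when the trace of Frobenius a_p = p + 1 − #E(F_p) vanishes. The sketch below applies this to the arbitrarily chosen curve y^2 = x^3 + x + 1; the curve, the bound, and the brute-force point count are illustrative assumptions, not part of the cited results.

```python
# Locating supersingular primes of E : y^2 = x^3 + x + 1 numerically.
# For p >= 5 of good reduction, E is supersingular at p exactly when
# a_p = p + 1 - #E(F_p) = 0.

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def a_p(p, A=1, B=1):
    """Trace of Frobenius of y^2 = x^3 + A x + B over F_p (odd p, good reduction)."""
    return -sum(legendre(x**3 + A * x + B, p) for x in range(p))

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

bound = 2000
disc = -16 * (4 * 1**3 + 27 * 1**2)        # discriminant of y^2 = x^3 + x + 1
supersingular = [p for p in primes_up_to(bound)
                 if p >= 5 and disc % p != 0 and a_p(p) == 0]
print("supersingular primes up to", bound, ":", supersingular)
print(len(supersingular), "found; Lang-Trotter predicts on the order of sqrt(X)/log X")
```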
More generally, if K is any global field—i.e., a finite extension either of Q or of Fp(t)—and A is an abelian variety defined over K, then a supersingular prime ${\mathfrak {p}}$ for A is a finite place of K such that the reduction of A modulo ${\mathfrak {p}}$ is a supersingular abelian variety.
References
• Elkies, Noam D. (1987). "The existence of infinitely many supersingular primes for every elliptic curve over Q". Invent. Math. 89 (3): 561–567. Bibcode:1987InMat..89..561E. doi:10.1007/BF01388985. MR 0903384. S2CID 123646933.
• Lang, Serge; Trotter, Hale F. (1976). Frobenius distributions in GL2-extensions. Lecture Notes in Mathematics. Vol. 504. New York: Springer-Verlag. ISBN 0-387-07550-X. Zbl 0329.12015.
• Ogg, A. P. (1980). "Modular Functions". In Cooperstein, Bruce; Mason, Geoffrey (eds.). The Santa Cruz Conference on Finite Groups. Held at the University of California, Santa Cruz, Calif., June 25–July 20, 1979. Proc. Symp. Pure Math. Vol. 37. Providence, RI: American Mathematical Society. pp. 521–532. ISBN 0-8218-1440-0. Zbl 0448.10021.
• Silverman, Joseph H. (1986). The Arithmetic of Elliptic Curves. Graduate Texts in Mathematics. Vol. 106. New York: Springer-Verlag. ISBN 0-387-96203-4. Zbl 0585.14026.
Prime number classes
By formula
• Fermat (2^(2^n) + 1)
• Mersenne (2^p − 1)
• Double Mersenne (2^(2^p − 1) − 1)
• Wagstaff ((2^p + 1)/3)
• Proth (k·2^n + 1)
• Factorial (n! ± 1)
• Primorial (p_n# ± 1)
• Euclid (p_n# + 1)
• Pythagorean (4n + 1)
• Pierpont (2^m·3^n + 1)
• Quartan (x^4 + y^4)
• Solinas (2^m ± 2^n ± 1)
• Cullen (n·2^n + 1)
• Woodall (n·2^n − 1)
• Cuban ((x^3 − y^3)/(x − y))
• Leyland (x^y + y^x)
• Thabit (3·2^n − 1)
• Williams ((b−1)·b^n − 1)
• Mills (⌊A^(3^n)⌋)
By integer sequence
• Fibonacci
• Lucas
• Pell
• Newman–Shanks–Williams
• Perrin
• Partitions
• Bell
• Motzkin
By property
• Wieferich (pair)
• Wall–Sun–Sun
• Wolstenholme
• Wilson
• Lucky
• Fortunate
• Ramanujan
• Pillai
• Regular
• Strong
• Stern
• Supersingular (elliptic curve)
• Supersingular (moonshine theory)
• Good
• Super
• Higgs
• Highly cototient
• Unique
Base-dependent
• Palindromic
• Emirp
• Repunit ((10^n − 1)/9)
• Permutable
• Circular
• Truncatable
• Minimal
• Delicate
• Primeval
• Full reptend
• Unique
• Happy
• Self
• Smarandache–Wellin
• Strobogrammatic
• Dihedral
• Tetradic
Patterns
• Twin (p, p + 2)
• Bi-twin chain (n ± 1, 2n ± 1, 4n ± 1, …)
• Triplet (p, p + 2 or p + 4, p + 6)
• Quadruplet (p, p + 2, p + 6, p + 8)
• k-tuple
• Cousin (p, p + 4)
• Sexy (p, p + 6)
• Chen
• Sophie Germain/Safe (p, 2p + 1)
• Cunningham (p, 2p ± 1, 4p ± 3, 8p ± 7, ...)
• Arithmetic progression (p + a·n, n = 0, 1, 2, 3, ...)
• Balanced (consecutive p − n, p, p + n)
By size
• Mega (1,000,000+ digits)
• Largest known
• list
Complex numbers
• Eisenstein prime
• Gaussian prime
Composite numbers
• Pseudoprime
• Catalan
• Elliptic
• Euler
• Euler–Jacobi
• Fermat
• Frobenius
• Lucas
• Somer–Lucas
• Strong
• Carmichael number
• Almost prime
• Semiprime
• Sphenic number
• Interprime
• Pernicious
Related topics
• Probable prime
• Industrial-grade prime
• Illegal prime
• Formula for primes
• Prime gap
First 60 primes
• 2
• 3
• 5
• 7
• 11
• 13
• 17
• 19
• 23
• 29
• 31
• 37
• 41
• 43
• 47
• 53
• 59
• 61
• 67
• 71
• 73
• 79
• 83
• 89
• 97
• 101
• 103
• 107
• 109
• 113
• 127
• 131
• 137
• 139
• 149
• 151
• 157
• 163
• 167
• 173
• 179
• 181
• 191
• 193
• 197
• 199
• 211
• 223
• 227
• 229
• 233
• 239
• 241
• 251
• 257
• 263
• 269
• 271
• 277
• 281
List of prime numbers
| Wikipedia |
Supersingular prime (moonshine theory)
In the mathematical branch of moonshine theory, a supersingular prime is a prime number that divides the order of the Monster group M, which is the largest sporadic simple group. There are precisely fifteen supersingular prime numbers: the first eleven primes (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, and 31), as well as 41, 47, 59, and 71. (sequence A002267 in the OEIS)
The non-supersingular primes are 37, 43, 53, 61, 67, and any prime number greater than or equal to 73.
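Because the supersingular primes are exactly the prime divisors of the order of the Monster, they can be read off from that order directly. The sketch below takes the commonly quoted factorization 2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71 as an input assumption and recovers the fifteen primes by trial division.

```python
# Supersingular primes read off from the order of the Monster group.
# The factorization below is the commonly quoted one, taken here as given.

MONSTER_FACTORIZATION = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                         17: 1, 19: 1, 23: 1, 29: 1, 31: 1,
                         41: 1, 47: 1, 59: 1, 71: 1}

monster_order = 1
for p, e in MONSTER_FACTORIZATION.items():
    monster_order *= p**e

def primes_below(n):
    return [p for p in range(2, n) if all(p % d for d in range(2, int(p**0.5) + 1))]

supersingular = [p for p in primes_below(100) if monster_order % p == 0]
non_supersingular = [p for p in primes_below(100) if monster_order % p != 0]

print("order of the Monster:", monster_order)
print("supersingular primes:", supersingular)            # ends ... 41, 47, 59, 71
print("non-supersingular primes below 100:", non_supersingular)
```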
Supersingular primes are related to the notion of supersingular elliptic curves as follows. For a prime number p, the following are equivalent:
1. The modular curve X_0^+(p) = X_0(p) / w_p, where w_p is the Fricke involution of X_0(p), has genus zero.
2. Every supersingular elliptic curve in characteristic p can be defined over the prime subfield Fp.
3. The order of the Monster group is divisible by p.
The equivalence is due to Andrew Ogg. More precisely, in 1975 Ogg showed that the primes satisfying the first condition are exactly the 15 supersingular primes listed above and shortly thereafter learned of the (then conjectural) existence of a sporadic simple group having exactly these primes as prime divisors. This strange coincidence was the beginning of the theory of monstrous moonshine.
All supersingular primes are Chen primes, but 37, 53, and 67 are also Chen primes, and there are infinitely many Chen primes greater than 73.
References
• Weisstein, Eric W. "Supersingular Prime". MathWorld.
• Weisstein, Eric W. "Sporadic group". MathWorld.
• Ogg, A. P. (1980). "Modular Functions". In Cooperstein, Bruce; Mason, Geoffrey (eds.). The Santa Cruz Conference on Finite Groups. Held at the University of California, Santa Cruz, Calif., June 25–July 20, 1979. Providence, RI: Amer. Math. Soc. pp. 521–532. ISBN 0-8218-1440-0.
Prime number classes
By formula
• Fermat (2^(2^n) + 1)
• Mersenne (2^p − 1)
• Double Mersenne (2^(2^p − 1) − 1)
• Wagstaff ((2^p + 1)/3)
• Proth (k·2^n + 1)
• Factorial (n! ± 1)
• Primorial (p_n# ± 1)
• Euclid (p_n# + 1)
• Pythagorean (4n + 1)
• Pierpont (2^m·3^n + 1)
• Quartan (x^4 + y^4)
• Solinas (2^m ± 2^n ± 1)
• Cullen (n·2^n + 1)
• Woodall (n·2^n − 1)
• Cuban ((x^3 − y^3)/(x − y))
• Leyland (x^y + y^x)
• Thabit (3·2^n − 1)
• Williams ((b−1)·b^n − 1)
• Mills (⌊A^(3^n)⌋)
By integer sequence
• Fibonacci
• Lucas
• Pell
• Newman–Shanks–Williams
• Perrin
• Partitions
• Bell
• Motzkin
By property
• Wieferich (pair)
• Wall–Sun–Sun
• Wolstenholme
• Wilson
• Lucky
• Fortunate
• Ramanujan
• Pillai
• Regular
• Strong
• Stern
• Supersingular (elliptic curve)
• Supersingular (moonshine theory)
• Good
• Super
• Higgs
• Highly cototient
• Unique
Base-dependent
• Palindromic
• Emirp
• Repunit ((10^n − 1)/9)
• Permutable
• Circular
• Truncatable
• Minimal
• Delicate
• Primeval
• Full reptend
• Unique
• Happy
• Self
• Smarandache–Wellin
• Strobogrammatic
• Dihedral
• Tetradic
Patterns
• Twin (p, p + 2)
• Bi-twin chain (n ± 1, 2n ± 1, 4n ± 1, …)
• Triplet (p, p + 2 or p + 4, p + 6)
• Quadruplet (p, p + 2, p + 6, p + 8)
• k-tuple
• Cousin (p, p + 4)
• Sexy (p, p + 6)
• Chen
• Sophie Germain/Safe (p, 2p + 1)
• Cunningham (p, 2p ± 1, 4p ± 3, 8p ± 7, ...)
• Arithmetic progression (p + a·n, n = 0, 1, 2, 3, ...)
• Balanced (consecutive p − n, p, p + n)
By size
• Mega (1,000,000+ digits)
• Largest known
• list
Complex numbers
• Eisenstein prime
• Gaussian prime
Composite numbers
• Pseudoprime
• Catalan
• Elliptic
• Euler
• Euler–Jacobi
• Fermat
• Frobenius
• Lucas
• Somer–Lucas
• Strong
• Carmichael number
• Almost prime
• Semiprime
• Sphenic number
• Interprime
• Pernicious
Related topics
• Probable prime
• Industrial-grade prime
• Illegal prime
• Formula for primes
• Prime gap
First 60 primes
• 2
• 3
• 5
• 7
• 11
• 13
• 17
• 19
• 23
• 29
• 31
• 37
• 41
• 43
• 47
• 53
• 59
• 61
• 67
• 71
• 73
• 79
• 83
• 89
• 97
• 101
• 103
• 107
• 109
• 113
• 127
• 131
• 137
• 139
• 149
• 151
• 157
• 163
• 167
• 173
• 179
• 181
• 191
• 193
• 197
• 199
• 211
• 223
• 227
• 229
• 233
• 239
• 241
• 251
• 257
• 263
• 269
• 271
• 277
• 281
List of prime numbers
| Wikipedia |
Supersolvable group
In mathematics, a group is supersolvable (or supersoluble) if it has an invariant normal series where all the factors are cyclic groups. Supersolvability is stronger than the notion of solvability.
Definition
Let G be a group. G is supersolvable if there exists a normal series
$\{1\}=H_{0}\triangleleft H_{1}\triangleleft \cdots \triangleleft H_{s-1}\triangleleft H_{s}=G$
such that each quotient group $H_{i+1}/H_{i}\;$ is cyclic and each $H_{i}$ is normal in $G$.
By contrast, for a solvable group the definition requires each quotient to be abelian. In another direction, a polycyclic group must have a subnormal series with each quotient cyclic, but there is no requirement that each $H_{i}$ be normal in $G$. As every finite solvable group is polycyclic, this can be seen as one of the key differences between the definitions. For a concrete example, the alternating group on four points, $A_{4}$, is solvable but not supersolvable.
Basic properties
Some facts about supersolvable groups:
• Supersolvable groups are always polycyclic, and hence solvable.
• Every finitely generated nilpotent group is supersolvable.
• Every metacyclic group is supersolvable.
• The commutator subgroup of a supersolvable group is nilpotent.
• Subgroups and quotient groups of supersolvable groups are supersolvable.
• A finite supersolvable group has an invariant normal series with each factor cyclic of prime order.
• In fact, the primes can be chosen in a nice order: For every prime p, and for π the set of primes greater than p, a finite supersolvable group has a unique Hall π-subgroup. Such groups are sometimes called ordered Sylow tower groups.
• Every group of square-free order, and every group with cyclic Sylow subgroups (a Z-group), is supersolvable.
• Every irreducible complex representation of a finite supersolvable group is monomial, that is, induced from a linear character of a subgroup. In other words, every finite supersolvable group is a monomial group.
• Every maximal subgroup in a supersolvable group has prime index.
• A finite group is supersolvable if and only if every maximal subgroup has prime index (a small computational check of this criterion for A4 appears after this list).
• A finite group is supersolvable if and only if every maximal chain of subgroups has the same length. This is important to those interested in the lattice of subgroups of a group, and is sometimes called the Jordan–Dedekind chain condition.
• By Baum's theorem, every supersolvable finite group has a DFT algorithm running in time O(n log n).
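As a concrete check of the prime-index criterion mentioned above, the following sketch enumerates the subgroups of the alternating group A4 by brute force and inspects the indices of its maximal subgroups. The tuple encoding of permutations and the two-generator search are implementation choices made only for this illustration.

```python
from itertools import permutations

def compose(p, q):
    """(p*q)(i) = p(q(i)) for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

A4 = [p for p in permutations(range(4)) if sign(p) == 1]   # the 12 even permutations

def generated(gens):
    """Subgroup generated by gens: closure under composition (finite, hence a subgroup)."""
    H = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return frozenset(H)
        H |= new

# Every subgroup of A4 is generated by at most two elements, so pairs suffice here.
subgroups = {generated((a, b)) for a in A4 for b in A4}
proper = [H for H in subgroups if len(H) < len(A4)]
maximal = [H for H in proper if not any(H < K for K in proper)]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for H in sorted(maximal, key=len):
    index = len(A4) // len(H)
    print(f"maximal subgroup of order {len(H)}: index {index}, prime? {is_prime(index)}")
# The Sylow 3-subgroups are maximal of index 4, which is not prime, so A4 is not
# supersolvable; the Klein four-group is maximal of prime index 3.
```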
References
• Schenkman, Eugene. Group Theory. Krieger, 1975.
• Schmidt, Roland. Subgroup Lattices of Groups. de Gruyter, 1994.
• Keith Conrad, SUBGROUP SERIES II, Section 4 , http://www.math.uconn.edu/~kconrad/blurbs/grouptheory/subgpseries2.pdf
| Wikipedia |
Superstrong cardinal
In mathematics, a cardinal number κ is called superstrong if and only if there exists an elementary embedding j : V → M from V into a transitive inner model M with critical point κ and $V_{j(\kappa )}$ ⊆ M.
Similarly, a cardinal κ is n-superstrong if and only if there exists an elementary embedding j : V → M from V into a transitive inner model M with critical point κ and $V_{j^{n}(\kappa )}$ ⊆ M. Akihiro Kanamori has shown that the consistency strength of an n+1-superstrong cardinal exceeds that of an n-huge cardinal for each n > 0.
References
• Kanamori, Akihiro (2003). The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.). Springer. ISBN 3-540-00384-3.
| Wikipedia |
Universe (mathematics)
In mathematics, and particularly in set theory, category theory, type theory, and the foundations of mathematics, a universe is a collection that contains all the entities one wishes to consider in a given situation.
In set theory, universes are often classes that contain (as elements) all sets for which one hopes to prove a particular theorem. These classes can serve as inner models for various axiomatic systems such as ZFC or Morse–Kelley set theory. Universes are of critical importance to formalizing concepts in category theory inside set-theoretical foundations. For instance, the canonical motivating example of a category is Set, the category of all sets, which cannot be formalized in a set theory without some notion of a universe.
In type theory, a universe is a type whose elements are types.
In a specific context
Main article: Domain of discourse
Perhaps the simplest version is that any set can be a universe, so long as the object of study is confined to that particular set. If the object of study is formed by the real numbers, then the real line R, which is the real number set, could be the universe under consideration. Implicitly, this is the universe that Georg Cantor was using when he first developed modern naive set theory and cardinality in the 1870s and 1880s in applications to real analysis. The only sets that Cantor was originally interested in were subsets of R.
This concept of a universe is reflected in the use of Venn diagrams. In a Venn diagram, the action traditionally takes place inside a large rectangle that represents the universe U. One generally says that sets are represented by circles; but these sets can only be subsets of U. The complement of a set A is then given by that portion of the rectangle outside of A's circle. Strictly speaking, this is the relative complement U \ A of A relative to U; but in a context where U is the universe, it can be regarded as the absolute complement AC of A. Similarly, there is a notion of the nullary intersection, that is the intersection of zero sets (meaning no sets, not null sets).
Without a universe, the nullary intersection would be the set of absolutely everything, which is generally regarded as impossible; but with the universe in mind, the nullary intersection can be treated as the set of everything under consideration, which is simply U. These conventions are quite useful in the algebraic approach to basic set theory, based on Boolean lattices. Except in some non-standard forms of axiomatic set theory (such as New Foundations), the class of all sets is not a Boolean lattice (it is only a relatively complemented lattice).
In contrast, the class of all subsets of U, called the power set of U, is a Boolean lattice. The absolute complement described above is the complement operation in the Boolean lattice; and U, as the nullary intersection, serves as the top element (or nullary meet) in the Boolean lattice. Then De Morgan's laws, which deal with complements of meets and joins (which are unions in set theory) apply, and apply even to the nullary meet and the nullary join (which is the empty set).
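These lattice identities are easy to test concretely. A minimal sketch, assuming a small finite universe U chosen only for illustration:

```python
# De Morgan's laws in the Boolean lattice P(U) of a small universe U.
U = frozenset(range(6))
A = frozenset({0, 1, 2})
B = frozenset({2, 3})

def complement(S):
    """Absolute complement relative to the universe U."""
    return U - S

print(complement(A | B) == complement(A) & complement(B))   # True
print(complement(A & B) == complement(A) | complement(B))   # True
# With U fixed as the universe, the nullary intersection is taken to be U itself,
# the top element of the lattice; the nullary union is the empty set, the bottom.
```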
In ordinary mathematics
However, once subsets of a given set X (in Cantor's case, X = R) are considered, the universe may need to be a set of subsets of X. (For example, a topology on X is a set of subsets of X.) The various sets of subsets of X will not themselves be subsets of X but will instead be subsets of PX, the power set of X. This may be continued; the object of study may next consist of such sets of subsets of X, and so on, in which case the universe will be P(PX). In another direction, the binary relations on X (subsets of the Cartesian product X × X) may be considered, or functions from X to itself, requiring universes like P(X × X) or XX.
Thus, even if the primary interest is X, the universe may need to be considerably larger than X. Following the above ideas, one may want the superstructure over X as the universe. This can be defined by structural recursion as follows:
• Let S0X be X itself.
• Let S1X be the union of X and PX.
• Let S2X be the union of S1X and P(S1X).
• In general, let Sn+1X be the union of SnX and P(SnX).
Then the superstructure over X, written SX, is the union of S0X, S1X, S2X, and so on; or
$\mathbf {S} X:=\bigcup _{i=0}^{\infty }\mathbf {S} _{i}X{\mbox{.}}\!$
No matter what set X is the starting point, the empty set {} will belong to S1X. The empty set is the von Neumann ordinal [0]. Then {[0]}, the set whose only element is the empty set, will belong to S2X; this is the von Neumann ordinal [1]. Similarly, {[1]} will belong to S3X, and thus so will {[0],[1]}, as the union of {[0]} and {[1]}; this is the von Neumann ordinal [2]. Continuing this process, every natural number is represented in the superstructure by its von Neumann ordinal. Next, if x and y belong to the superstructure, then so does {{x},{x,y}}, which represents the ordered pair (x,y). Thus the superstructure will contain the various desired Cartesian products. Then the superstructure also contains functions and relations, since these may be represented as subsets of Cartesian products. The process also gives ordered n-tuples, represented as functions whose domain is the von Neumann ordinal [n], and so on.
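A minimal sketch of the first few stages of this construction, taking X = {} and modelling sets as Python frozensets; the cut-off after S4 is chosen only to keep the output small, since the stages grow as iterated power sets.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, each returned as a frozenset."""
    items = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

def next_stage(stage):
    """S_{n+1} = S_n union P(S_n)."""
    return frozenset(stage) | powerset(stage)

stage = frozenset()                      # start from X = {}
for i in range(5):
    print(f"S_{i} over {{}} has {len(stage)} elements")   # sizes 0, 1, 2, 4, 16, ...
    stage = next_stage(stage)

# The von Neumann ordinals appear along the way, e.g. [2] = {[0], [1]}:
zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})
S3 = frozenset()
for _ in range(3):
    S3 = next_stage(S3)
print(two in S3)                         # True: the ordinal [2] already lies in S_3
```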
So if the starting point is just X = {}, a great deal of the sets needed for mathematics appear as elements of the superstructure over {}. But each of the elements of S{} will be a finite set. Each of the natural numbers belongs to it, but the set N of all natural numbers does not (although it is a subset of S{}). In fact, the superstructure over {} consists of all of the hereditarily finite sets. As such, it can be considered the universe of finitist mathematics. Speaking anachronistically, one could suggest that the 19th-century finitist Leopold Kronecker was working in this universe; he believed that each natural number existed but that the set N (a "completed infinity") did not.
However, S{} is unsatisfactory for ordinary mathematicians (who are not finitists), because even though N may be available as a subset of S{}, still the power set of N is not. In particular, arbitrary sets of real numbers are not available. So it may be necessary to start the process all over again and form S(S{}). However, to keep things simple, one can take the set N of natural numbers as given and form SN, the superstructure over N. This is often considered the universe of ordinary mathematics. The idea is that all of the mathematics that is ordinarily studied refers to elements of this universe. For example, any of the usual constructions of the real numbers (say by Dedekind cuts) belongs to SN. Even non-standard analysis can be done in the superstructure over a non-standard model of the natural numbers.
There is a slight shift in philosophy from the previous section, where the universe was any set U of interest. There, the sets being studied were subsets of the universe; now, they are members of the universe. Thus although P(SX) is a Boolean lattice, what is relevant is that SX itself is not. Consequently, it is rare to apply the notions of Boolean lattices and Venn diagrams directly to the superstructure universe as they were to the power-set universes of the previous section. Instead, one can work with the individual Boolean lattices PA, where A is any relevant set belonging to SX; then PA is a subset of SX (and in fact belongs to SX). In Cantor's case X = R in particular, arbitrary sets of real numbers are not available, so there it may indeed be necessary to start the process all over again.
In set theory
It is possible to give a precise meaning to the claim that SN is the universe of ordinary mathematics; it is a model of Zermelo set theory, the axiomatic set theory originally developed by Ernst Zermelo in 1908. Zermelo set theory was successful precisely because it was capable of axiomatising "ordinary" mathematics, fulfilling the programme begun by Cantor over 30 years earlier. But Zermelo set theory proved insufficient for the further development of axiomatic set theory and other work in the foundations of mathematics, especially model theory.
For a dramatic example, the description of the superstructure process above cannot itself be carried out in Zermelo set theory. The final step, forming S as an infinitary union, requires the axiom of replacement, which was added to Zermelo set theory in 1922 to form Zermelo–Fraenkel set theory, the set of axioms most widely accepted today. So while ordinary mathematics may be done in SN, discussion of SN goes beyond the "ordinary", into metamathematics.
But if high-powered set theory is brought in, the superstructure process above reveals itself to be merely the beginning of a transfinite recursion. Going back to X = {}, the empty set, and introducing the (standard) notation Vi for Si{}, V0 = {}, V1 = P{}, and so on as before. But what used to be called "superstructure" is now just the next item on the list: Vω, where ω is the first infinite ordinal number. This can be extended to arbitrary ordinal numbers:
$V_{i}:=\bigcup _{j<i}\mathbf {P} V_{j}\!$
defines Vi for any ordinal number i. The union of all of the Vi is the von Neumann universe V:
$V:=\bigcup _{i}V_{i}\!$.
Every individual Vi is a set, but their union V is a proper class. The axiom of foundation, which was added to ZF set theory at around the same time as the axiom of replacement, says that every set belongs to V.
Related constructions include Kurt Gödel's constructible universe L, the inner model singled out by the axiom of constructibility, and the universes obtained from inaccessible cardinals: inaccessible cardinals yield models of ZF and sometimes additional axioms, and their existence is equivalent to the existence of a Grothendieck universe set.
In predicate calculus
In an interpretation of first-order logic, the universe (or domain of discourse) is the set of individuals (individual constants) over which the quantifiers range. A proposition such as ∀x (x2 ≠ 2) is ambiguous, if no domain of discourse has been identified. In one interpretation, the domain of discourse could be the set of real numbers; in another interpretation, it could be the set of natural numbers. If the domain of discourse is the set of real numbers, the proposition is false, with x = √2 as counterexample; if the domain is the set of naturals, the proposition is true, since 2 is not the square of any natural number.
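The dependence on the domain of discourse can be mimicked computationally. In the sketch below, a finite range of naturals stands in for N and floating-point arithmetic stands in for R; both are simplifying assumptions made only to illustrate how the truth value changes with the domain.

```python
# The sentence "for all x, x^2 != 2" evaluated over two different domains.
import math

naturals = range(0, 10_000)                     # a finite stand-in for N
print(all(x * x != 2 for x in naturals))        # True: no natural number squares to 2

x = math.sqrt(2)                                # over the reals there is a counterexample
print(math.isclose(x * x, 2.0))                 # True, so the universal claim fails on R
```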
In category theory
Main article: Grothendieck universe
There is another approach to universes which is historically connected with category theory. This is the idea of a Grothendieck universe. Roughly speaking, a Grothendieck universe is a set inside which all the usual operations of set theory can be performed. This version of a universe is defined to be any set for which the following axioms hold:[1]
1. $x\in u\in U$ implies $x\in U$
2. $u\in U$ and $v\in U$ imply {u,v}, (u,v), and $u\times v\in U$.
3. $x\in U$ implies ${\mathcal {P}}x\in U$ and $\cup x\in U$
4. $\omega \in U$ (here $\omega =\{0,1,2,...\}$ is the set of all finite ordinals.)
5. if $f:a\to b$ is a surjective function with $a\in U$ and $b\subset U$, then $b\in U$.
The advantage of a Grothendieck universe is that it is actually a set, and never a proper class. The disadvantage is that if one tries hard enough, one can leave a Grothendieck universe.
The most common use of a Grothendieck universe U is to take U as a replacement for the category of all sets. One says that a set S is U-small if S ∈U, and U-large otherwise. The category U-Set of all U-small sets has as objects all U-small sets and as morphisms all functions between these sets. Both the object set and the morphism set are sets, so it becomes possible to discuss the category of "all" sets without invoking proper classes. Then it becomes possible to define other categories in terms of this new category. For example, the category of all U-small categories is the category of all categories whose object set and whose morphism set are in U. Then the usual arguments of set theory are applicable to the category of all categories, and one does not have to worry about accidentally talking about proper classes. Because Grothendieck universes are extremely large, this suffices in almost all applications.
Often when working with Grothendieck universes, mathematicians assume the Axiom of Universes: "For any set x, there exists a universe U such that x ∈U." The point of this axiom is that any set one encounters is then U-small for some U, so any argument done in a general Grothendieck universe can be applied.[2] This axiom is closely related to the existence of strongly inaccessible cardinals.
In type theory
In some type theories, especially in systems with dependent types, types themselves can be regarded as terms. There is a type called the universe (often denoted ${\mathcal {U}}$) which has types as its elements. To avoid paradoxes such as Girard's paradox (an analogue of Russell's paradox for type theory), type theories are often equipped with a countably infinite hierarchy of such universes, with each universe being a term of the next one.
There are at least two kinds of universes that one can consider in type theory: Russell-style universes (named after Bertrand Russell) and Tarski-style universes (named after Alfred Tarski).[3][4][5] A Russell-style universe is a type whose terms are types.[3] A Tarski-style universe is a type together with an interpretation operation allowing us to regard its terms as types.[3]
For example:[6]
The openendedness of Martin-Löf type theory is particularly manifest in the introduction of so-called universes. Type universes encapsulate the informal notion of reflection whose role may be explained as follows. During the course of developing a particular formalization of type theory, the type theorist may look back over the rules for types, say C, which have been introduced hitherto and perform the step of recognizing that they are valid according to Martin-Löf’s informal semantics of meaning explanation. This act of ‘introspection’ is an attempt to become aware of the conceptions which have governed our constructions in the past. It gives rise to a “reflection principle which roughly speaking says whatever we are used to doing with types can be done inside a universe” (Martin-Löf 1975, 83). On the formal level, this leads to an extension of the existing formalization of type theory in that the type forming capacities of C become enshrined in a type universe UC mirroring C.
See also
• Domain of discourse
• Grothendieck universe
• Herbrand universe
• Free object
• Open formula
• Space (mathematics)
Notes
1. Mac Lane 1998, p. 22
2. Low, Zhen Lin (2013-04-18). "Universes for category theory". arXiv:1304.5227v2 [math.CT].
3. "Universe in Homotopy Type Theory" in nLab
4. Zhaohui Luo, "Notes on Universes in Type Theory", 2012.
5. Per Martin-Löf, Intuitionistic Type Theory, Bibliopolis, 1984, pp. 88 and 91.
6. Rathjen, Michael (October 2005). "The Constructive Hilbert Program and the Limits of Martin-Löf Type Theory". Synthese. 147: 81–120. doi:10.1007/s11229-004-6208-4. S2CID 143295. Retrieved September 21, 2022.
References
• Mac Lane, Saunders (1998). Categories for the Working Mathematician. Springer-Verlag New York, Inc.
External links
• "Universe", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Weisstein, Eric W. "Universal Set". MathWorld.
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
| Wikipedia |
Superstring theory
Superstring theory is an attempt to explain all of the particles and fundamental forces of nature in one theory by modeling them as vibrations of tiny supersymmetric strings.
String theory
Fundamental objects
• String
• Cosmic string
• Brane
• D-brane
Perturbative theory
• Bosonic
• Superstring (Type I, Type II, Heterotic)
Non-perturbative results
• S-duality
• T-duality
• U-duality
• M-theory
• F-theory
• AdS/CFT correspondence
Phenomenology
• Phenomenology
• Cosmology
• Landscape
Mathematics
• Geometric Langlands correspondence
• Mirror symmetry
• Monstrous moonshine
• Vertex algebra
Related concepts
• Theory of everything
• Conformal field theory
• Quantum gravity
• Supersymmetry
• Supergravity
• Twistor string theory
• N = 4 supersymmetric Yang–Mills theory
• Kaluza–Klein theory
• Multiverse
• Holographic principle
Theorists
• Aganagić
• Arkani-Hamed
• Atiyah
• Banks
• Berenstein
• Bousso
• Cleaver
• Curtright
• Dijkgraaf
• Distler
• Douglas
• Duff
• Dvali
• Ferrara
• Fischler
• Friedan
• Gates
• Gliozzi
• Gopakumar
• Green
• Greene
• Gross
• Gubser
• Gukov
• Guth
• Hanson
• Harvey
• Hořava
• Horowitz
• Gibbons
• Kachru
• Kaku
• Kallosh
• Kaluza
• Kapustin
• Klebanov
• Knizhnik
• Kontsevich
• Klein
• Linde
• Maldacena
• Mandelstam
• Marolf
• Martinec
• Minwalla
• Moore
• Motl
• Mukhi
• Myers
• Nanopoulos
• Năstase
• Nekrasov
• Neveu
• Nielsen
• van Nieuwenhuizen
• Novikov
• Olive
• Ooguri
• Ovrut
• Polchinski
• Polyakov
• Rajaraman
• Ramond
• Randall
• Randjbar-Daemi
• Roček
• Rohm
• Sagnotti
• Scherk
• Schwarz
• Seiberg
• Sen
• Shenker
• Siegel
• Silverstein
• Sơn
• Staudacher
• Steinhardt
• Strominger
• Sundrum
• Susskind
• 't Hooft
• Townsend
• Trivedi
• Turok
• Vafa
• Veneziano
• Verlinde
• Verlinde
• Wess
• Witten
• Yau
• Yoneya
• Zamolodchikov
• Zamolodchikov
• Zaslow
• Zumino
• Zwiebach
• History
• Glossary
'Superstring theory' is a shorthand for supersymmetric string theory because unlike bosonic string theory, it is the version of string theory that accounts for both fermions and bosons and incorporates supersymmetry to model gravity.
Since the second superstring revolution, the five superstring theories (Type I, Type IIA, Type IIB, HO and HE) are regarded as different limits of a single theory tentatively called M-theory.
Background
One of the deepest open problems in theoretical physics is formulating a theory of quantum gravity. Such a theory incorporates both the theory of general relativity, which describes gravitation and applies to large-scale structures, and quantum mechanics or more specifically quantum field theory, which describes the other three fundamental forces that act on the atomic scale.
Quantum field theory, in particular the Standard model, is currently the most successful theory to describe fundamental forces, but while computing physical quantities of interest, naïvely one obtains infinite values. Physicists developed the technique of renormalization to 'eliminate these infinities' to obtain finite values which can be experimentally tested. This technique works for three of the four fundamental forces: Electromagnetism, the strong force and the weak force, but does not work for gravity, which is non-renormalizable. Development of a quantum theory of gravity therefore requires different means than those used for the other forces.[1]
According to superstring theory, or more generally string theory, the fundamental constituents of reality are strings with radius on the order of the Planck length (about 10−33 cm). An appealing feature of string theory is that fundamental particles can be viewed as excitations of the string. The tension in a string is on the order of the Planck force (1044 newtons). The graviton (the proposed messenger particle of the gravitational force) is predicted by the theory to be a string with wave amplitude zero.
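The two scales quoted here follow from the standard combinations l_P = sqrt(ħG/c^3) and F_P = c^4/G. The quick numerical check below uses rounded constants as stand-ins for precise CODATA values; it is only an order-of-magnitude illustration.

```python
# Order-of-magnitude check of the Planck length and Planck force quoted above.
import math

hbar = 1.054571817e-34      # J*s
G = 6.67430e-11             # m^3 kg^-1 s^-2
c = 2.99792458e8            # m/s

planck_length_m = math.sqrt(hbar * G / c**3)
planck_force_N = c**4 / G

print(f"Planck length ~ {planck_length_m:.2e} m = {planck_length_m * 100:.2e} cm")
print(f"Planck force  ~ {planck_force_N:.2e} N")
# ~1.6e-35 m (1.6e-33 cm) and ~1.2e+44 N, matching the orders of magnitude in the text.
```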
History
Investigating how a string theory may include fermions in its spectrum led to the invention of supersymmetry (in the West)[2] in 1971,[3] a mathematical transformation between bosons and fermions. String theories that include fermionic vibrations are now known as "superstring theories".
Since its beginnings in the seventies and through the combined efforts of many different researchers, superstring theory has developed into a broad and varied subject with connections to quantum gravity, particle and condensed matter physics, cosmology, and pure mathematics.
Absence of physical evidence
Superstring theory is based on supersymmetry. No supersymmetric particles have been discovered, and initial investigations, carried out in 2011 at the Large Hadron Collider (LHC)[4] and in 2006 at the Tevatron, have excluded some of the possible mass ranges.[5][6][7][8] For instance, the mass limit on squarks of the Minimal Supersymmetric Standard Model has been pushed up to 1.1 TeV, and on gluinos up to 500 GeV.[9] No results suggesting large extra dimensions have been reported from the LHC. So far, no principle has been found to limit the number of vacua in the conjectured landscape of vacua.[10]
Some particle physicists became disappointed by the lack of experimental verification of supersymmetry, and some have already discarded it.[11] Jon Butterworth at University College London said that there had been no sign of supersymmetry even in the higher-energy regions, with the superpartners of the top quark excluded up to a few TeV. Ben Allanach at the University of Cambridge states that if no new particles are discovered in the next run of the LHC, then it is unlikely that supersymmetry will be discovered at CERN in the foreseeable future.[11]
Extra dimensions
See also: String theory § Extra dimensions
Our physical space is observed to have three large spatial dimensions and, along with time, is a boundless 4-dimensional continuum known as spacetime. However, nothing prevents a theory from including more than 4 dimensions. In the case of string theory, consistency requires spacetime to have 10 dimensions (3D regular space + 1 time + 6D hyperspace).[12] The fact that we see only 3 dimensions of space can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world may live on a 3-dimensional submanifold corresponding to a brane, on which all known particles besides gravity would be restricted.
If the extra dimensions are compactified, then the extra 6 dimensions must be in the form of a Calabi–Yau manifold. Within the more complete framework of M-theory, they would have to take form of a G2 manifold. A particular exact symmetry of string/M-theory called T-duality (which exchanges momentum modes for winding number and sends compact dimensions of radius R to radius 1/R),[13] has led to the discovery of equivalences between different Calabi–Yau manifolds called mirror symmetry.
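T-duality can be made concrete at the level of the closed-string zero modes. The sketch below keeps only the momentum and winding contribution to the mass formula, M^2 = (n/R)^2 + (wR/α')^2, omitting oscillator and constant terms as a simplification, and checks numerically that the spectrum is unchanged under R → α'/R with n and w exchanged; the radius and mode ranges are arbitrary illustrative choices.

```python
# T-duality of the closed-string zero-mode spectrum:
#   M^2 = (n/R)^2 + (w*R/alpha')^2
# is invariant under R -> alpha'/R together with n <-> w.
alpha_prime = 1.0
R = 2.7                                    # an arbitrary compactification radius

def mass_squared(n, w, R):
    return (n / R) ** 2 + (w * R / alpha_prime) ** 2

spectrum = sorted(mass_squared(n, w, R)
                  for n in range(-3, 4) for w in range(-3, 4))
dual_spectrum = sorted(mass_squared(w, n, alpha_prime / R)    # n <-> w, R -> alpha'/R
                       for n in range(-3, 4) for w in range(-3, 4))

print(all(abs(a - b) < 1e-12 for a, b in zip(spectrum, dual_spectrum)))   # True
```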
Superstring theory is not the first theory to propose extra spatial dimensions. It can be seen as building upon the Kaluza–Klein theory, which proposed a 4+1 dimensional (5D) theory of gravity. When compactified on a circle, the gravity in the extra dimension precisely describes electromagnetism from the perspective of the 3 remaining large space dimensions. Thus the original Kaluza–Klein theory is a prototype for the unification of gauge and gravity interactions, at least at the classical level, however it is known to be insufficient to describe nature for a variety of reasons (missing weak and strong forces, lack of parity violation, etc.) A more complex compact geometry is needed to reproduce the known gauge forces. Also, to obtain a consistent, fundamental, quantum theory requires the upgrade to string theory, not just the extra dimensions.
Number of superstring theories
Theoretical physicists were troubled by the existence of five separate superstring theories. A possible solution for this dilemma was suggested at the beginning of what is called the second superstring revolution in the 1990s, which suggests that the five string theories might be different limits of a single underlying theory, called M-theory. This remains a conjecture.[14]
String theories
Type | Spacetime dimensions | SUSY generators | chiral | open strings | heterotic compactification | gauge group | tachyon
Bosonic (closed) | 26 | N = 0 | no | no | no | none | yes
Bosonic (open) | 26 | N = 0 | no | yes | no | U(1) | yes
I | 10 | N = (1,0) | yes | yes | no | SO(32) | no
IIA | 10 | N = (1,1) | no | no | no | U(1) | no
IIB | 10 | N = (2,0) | yes | no | no | none | no
HO | 10 | N = (1,0) | yes | no | yes | SO(32) | no
HE | 10 | N = (1,0) | yes | no | yes | E8 × E8 | no
M-theory | 11 | N = 1 | no | no | no | none | no
The five consistent superstring theories are:
• The type I string has one supersymmetry in the ten-dimensional sense (16 supercharges). This theory is special in the sense that it is based on unoriented open and closed strings, while the rest are based on oriented closed strings.
• The type II string theories have two supersymmetries in the ten-dimensional sense (32 supercharges). There are actually two kinds of type II strings called type IIA and type IIB. They differ mainly in the fact that the IIA theory is non-chiral (parity conserving) while the IIB theory is chiral (parity violating).
• The heterotic string theories are based on a peculiar hybrid of a type I superstring and a bosonic string. There are two kinds of heterotic strings differing in their ten-dimensional gauge groups: the heterotic E8×E8 string and the heterotic SO(32) string. (The name heterotic SO(32) is slightly inaccurate since among the SO(32) Lie groups, string theory singles out a quotient Spin(32)/Z2 that is not equivalent to SO(32).)
Chiral gauge theories can be inconsistent due to anomalies. This happens when certain one-loop Feynman diagrams cause a quantum mechanical breakdown of the gauge symmetry. The anomalies were canceled out via the Green–Schwarz mechanism.
Even though there are only five superstring theories, making detailed predictions for real experiments requires information about exactly what physical configuration the theory is in. This considerably complicates efforts to test string theory because there is an astronomically high number—10^500 or more—of configurations that meet some of the basic requirements to be consistent with our world. Along with the extreme remoteness of the Planck scale, this is the other major reason it is hard to test superstring theory.
Another approach to the number of superstring theories refers to the mathematical structure called composition algebra. In the findings of abstract algebra there are just seven composition algebras over the field of real numbers. In 1990 physicists R. Foot and G.C. Joshi in Australia stated that "the seven classical superstring theories are in one-to-one correspondence to the seven composition algebras".[15]
Integrating general relativity and quantum mechanics
General relativity typically deals with situations involving large mass objects in fairly large regions of spacetime, whereas quantum mechanics is generally reserved for scenarios at the atomic scale (small spacetime regions). The two are very rarely used together, and the most common case that combines them is the study of black holes. Because black holes pack the maximum amount of matter possible into a very small region of space, the two theories must be used in concert to predict conditions in such places; yet, when used together, the equations break down, yielding impossible answers such as imaginary distances and fewer than one dimension.
The major problem with this incompatibility is that, at lengths near the Planck scale (a fundamentally small unit of length), general relativity predicts a smooth, flowing surface, while quantum mechanics predicts a random, warped surface, and the two pictures are nowhere near compatible. Superstring theory resolves this issue by replacing the classical idea of point particles with strings. These strings have an average diameter of the Planck length, with extremely small variances, which sidesteps the quantum mechanical prediction of Planck-scale dimensional warping. Also, these surfaces can be mapped as branes, and branes can be viewed as objects with morphisms between them; in this case, the morphism is the state of a string that stretches between brane A and brane B.
Singularities are avoided because the observed consequences of "Big Crunches" never reach zero size. In fact, should the universe begin a "big crunch" sort of process, string theory dictates that the universe could never be smaller than the size of one string, at which point it would actually begin expanding.
Mathematics
D-branes
D-branes are membrane-like objects in 10D string theory. They can be thought of as occurring as a result of a Kaluza–Klein compactification of 11D M-theory that contains membranes. Because compactification of a geometric theory produces extra vector fields the D-branes can be included in the action by adding an extra U(1) vector field to the string action.
$\partial _{z}\rightarrow \partial _{z}+iA_{z}(z,{\overline {z}})$
In type I open string theory, the ends of open strings are always attached to D-brane surfaces. A string theory with more gauge fields such as SU(2) gauge fields would then correspond to the compactification of some higher-dimensional theory above 11 dimensions, which is not thought to be possible to date. Furthermore, the tachyons attached to the D-branes show the instability of those D-branes with respect to the annihilation. The tachyon total energy is (or reflects) the total energy of the D-branes.
Why five superstring theories?
For a 10 dimensional supersymmetric theory we are allowed a 32-component Majorana spinor. This can be decomposed into a pair of 16-component Majorana-Weyl (chiral) spinors. There are then various ways to construct an invariant depending on whether these two spinors have the same or opposite chiralities:
Superstring model | Invariant
Heterotic | $\partial _{z}X^{\mu }-i{\overline {\theta _{L}}}\Gamma ^{\mu }\partial _{z}\theta _{L}$
IIA | $\partial _{z}X^{\mu }-i{\overline {\theta _{L}}}\Gamma ^{\mu }\partial _{z}\theta _{L}-i{\overline {\theta _{R}}}\Gamma ^{\mu }\partial _{z}\theta _{R}$
IIB | $\partial _{z}X^{\mu }-i{\overline {\theta _{L}^{1}}}\Gamma ^{\mu }\partial _{z}\theta _{L}^{1}-i{\overline {\theta _{L}^{2}}}\Gamma ^{\mu }\partial _{z}\theta _{L}^{2}$
The heterotic superstrings come in two types SO(32) and E8×E8 as indicated above and the type I superstrings include open strings.
Beyond superstring theory
It is conceivable that the five superstring theories are approximations to a theory in higher dimensions, possibly involving membranes. Because the action for this involves quartic and higher-order terms and so is not Gaussian, the functional integrals are very difficult to solve, and this has confounded the top theoretical physicists. Edward Witten has popularised the concept of a theory in 11 dimensions, called M-theory, involving membranes interpolating from the known symmetries of superstring theory. It may turn out that there exist membrane models or other non-membrane models in higher dimensions, which may become acceptable when we find new unknown symmetries of nature, such as noncommutative geometry. It is thought, however, that 16 is probably the maximum since SO(16) is a maximal subgroup of E8, the largest exceptional Lie group, and also is more than large enough to contain the Standard Model. Quartic integrals of the non-functional kind are easier to solve, so there is hope for the future. This is the series solution, which is always convergent when a is non-zero and negative:
$\int _{-\infty }^{\infty }\exp({ax^{4}+bx^{3}+cx^{2}+dx+f})\,dx=e^{f}\sum _{n,m,p=0}^{\infty }{\frac {b^{4n}}{(4n)!}}{\frac {c^{2m}}{(2m)!}}{\frac {d^{4p}}{(4p)!}}{\frac {\Gamma (3n+m+p+{\frac {1}{4}})}{a^{3n+m+p+{\frac {1}{4}}}}}$
In the case of membranes the series would correspond to sums of various membrane interactions that are not seen in string theory.
Compactification
Investigating theories of higher dimensions often involves looking at the 10 dimensional superstring theory and interpreting some of the more obscure results in terms of compactified dimensions. For example, D-branes are seen as compactified membranes from 11D M-theory. Theories of higher dimensions such as 12D F-theory and beyond produce other effects, such as gauge terms higher than U(1). The components of the extra vector fields (A) in the D-brane actions can be thought of as extra coordinates (X) in disguise. However, the known symmetries including supersymmetry currently restrict the spinors to 32-components—which limits the number of dimensions to 11 (or 12 if you include two time dimensions.) Some physicists (e.g., John Baez et al.) have speculated that the exceptional Lie groups E6, E7 and E8 having maximum orthogonal subgroups SO(10), SO(12) and SO(16) may be related to theories in 10, 12 and 16 dimensions; 10 dimensions corresponding to string theory and the 12 and 16 dimensional theories being yet undiscovered but would be theories based on 3-branes and 7-branes respectively. However, this is a minority view within the string community. Since E7 is in some sense F4 quaternified and E8 is F4 octonified, the 12 and 16 dimensional theories, if they did exist, may involve the noncommutative geometry based on the quaternions and octonions respectively. From the above discussion, it can be seen that physicists have many ideas for extending superstring theory beyond the current 10 dimensional theory, but so far all have been unsuccessful.
Kac–Moody algebras
Since strings can have an infinite number of modes, the symmetry used to describe string theory is based on infinite dimensional Lie algebras. Some Kac–Moody algebras that have been considered as symmetries for M-theory have been E10 and E11 and their supersymmetric extensions.
See also
• AdS/CFT correspondence
• dS/CFT correspondence
• Grand unification theory
• List of string theory topics
• String field theory
References
1. Polchinski, Joseph. String Theory: Volume I. Cambridge University Press, p. 4.
2. Rickles, Dean (2014). A Brief History of String Theory: From Dual Models to M-Theory. Springer, p. 104. ISBN 978-3-642-45128-7
3. J. L. Gervais and B. Sakita worked on the two-dimensional case in which they use the concept of "supergauge," taken from Ramond, Neveu, and Schwarz's work on dual models: Gervais, J.-L.; Sakita, B. (1971). "Field theory interpretation of supergauges in dual models". Nuclear Physics B. 34 (2): 632–639. Bibcode:1971NuPhB..34..632G. doi:10.1016/0550-3213(71)90351-8.
4. Buchmueller, O.; Cavanaugh, R.; Colling, D.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Olive, K.; Rogerson, S.; Ronga, F.; Weiglein, G. (May 2011). "Implications of initial LHC searches for supersymmetry". The European Physical Journal C. 71 (5): 1634. arXiv:1102.4585. Bibcode:2011EPJC...71.1634B. doi:10.1140/epjc/s10052-011-1634-1. S2CID 52026092.
5. Woit, Peter (February 22, 2011). "Implications of Initial LHC Searches for Supersymmetry".
6. Cassel, S.; Ghilencea, D. M.; Kraml, S.; Lessa, A.; Ross, G. G. (2011). "Fine-tuning implications for complementary dark matter and LHC SUSY searches". Journal of High Energy Physics. 2011 (5): 120. arXiv:1101.4664. Bibcode:2011JHEP...05..120C. doi:10.1007/JHEP05(2011)120. S2CID 53467362.
7. Falkowski, Adam (Jester) (February 16, 2011). "What LHC tells about SUSY". resonaances.blogspot.com. Archived from the original on March 22, 2014. Retrieved March 22, 2014.
8. Tapper, Alex (24 March 2010). "Early SUSY searches at the LHC" (PDF). Imperial College London.
9. CMS Collaboration (2011). "Search for Supersymmetry at the LHC in Events with Jets and Missing Transverse Energy". Physical Review Letters. 107 (22): 221804. arXiv:1109.2352. Bibcode:2011PhRvL.107v1804C. doi:10.1103/PhysRevLett.107.221804. PMID 22182023. S2CID 22498269.
10. Shifman, M. (2012). "Frontiers Beyond the Standard Model: Reflections and Impressionistic Portrait of the Conference". Modern Physics Letters A. 27 (40): 1230043. Bibcode:2012MPLA...2730043S. doi:10.1142/S0217732312300431.
11. Jha, Alok (August 6, 2013). "One year on from the Higgs boson find, has physics hit the buffers?". The Guardian. photograph: Harold Cunningham/Getty Images. London: GMG. ISSN 0261-3077. OCLC 60623878. Archived from the original on March 22, 2014. Retrieved March 22, 2014.
12. The D = 10 critical dimension was originally discovered by John H. Schwarz in Schwarz, J. H. (1972). "Physical states and pomeron poles in the dual pion model". Nuclear Physics, B46(1), 61–74.
13. Polchinski, Joseph. String Theory: Volume I. Cambridge University Press, p. 247.
14. Polchinski, Joseph. String Theory: Volume II. Cambridge University Press, p. 198.
15. Foot, R.; Joshi, G. C. (1990). "Nonstandard signature of spacetime, superstrings, and the split composition algebras". Letters in Mathematical Physics. 19 (1): 65–71. Bibcode:1990LMaPh..19...65F. doi:10.1007/BF00402262. S2CID 120143992.
Cited sources
• Polchinski, Joseph (1998). String Theory Vol. 1: An Introduction to the Bosonic String. Cambridge University Press. ISBN 978-0-521-63303-1.
• Polchinski, Joseph (1998). String Theory Vol. 2: Superstring Theory and Beyond. Cambridge University Press. ISBN 978-0-521-63304-8.
Supersymmetry
General topics
• Supersymmetry
• Supersymmetric gauge theory
• Supersymmetric quantum mechanics
• Supergravity
• Superstring theory
• Super vector space
• Supergeometry
Supermathematics
• Superalgebra
• Lie superalgebra
• Super-Poincaré algebra
• Superconformal algebra
• Supersymmetry algebra
• Supergroup
• Superspace
• Harmonic superspace
• Super Minkowski space
• Supermanifold
Concepts
• Supercharge
• R-symmetry
• Supermultiplet
• Short supermultiplet
• BPS state
• Superpotential
• D-term
• FI D-term
• F-term
• Moduli space
• Supersymmetry breaking
• Konishi anomaly
• Seiberg duality
• Seiberg–Witten theory
• Witten index
• Wess–Zumino gauge
• Localization
• Mu problem
• Little hierarchy problem
• Electric–magnetic duality
Theorems
• Coleman–Mandula
• Haag–Łopuszański–Sohnius
• Nonrenormalization
Field theories
• Wess–Zumino
• N = 1 super Yang–Mills
• N = 4 super Yang–Mills
• Super QCD
• MSSM
• NMSSM
• 6D (2,0) superconformal
• ABJM superconformal
Supergravity
• Pure 4D N = 1 supergravity
• N = 8 supergravity
• Higher dimensional
• Gauged supergravity
Superpartners
• Axino
• Chargino
• Gaugino
• Goldstino
• Graviphoton
• Graviscalar
• Higgsino
• LSP
• Neutralino
• R-hadron
• Sfermion
• Sgoldstino
• Stop squark
• Superghost
Researchers
• Affleck
• Bagger
• Batchelor
• Berezin
• Dine
• Fayet
• Gates
• Golfand
• Iliopoulos
• Montonen
• Olive
• Salam
• Seiberg
• Siegel
• Roček
• Rogers
• Wess
• Witten
• Zumino
String theory
Background
• Strings
• Cosmic strings
• History of string theory
• First superstring revolution
• Second superstring revolution
• String theory landscape
Theory
• Nambu–Goto action
• Polyakov action
• Bosonic string theory
• Superstring theory
• Type I string
• Type II string
• Type IIA string
• Type IIB string
• Heterotic string
• N=2 superstring
• F-theory
• String field theory
• Matrix string theory
• Non-critical string theory
• Non-linear sigma model
• Tachyon condensation
• RNS formalism
• GS formalism
String duality
• T-duality
• S-duality
• U-duality
• Montonen–Olive duality
Particles and fields
• Graviton
• Dilaton
• Tachyon
• Ramond–Ramond field
• Kalb–Ramond field
• Magnetic monopole
• Dual graviton
• Dual photon
Branes
• D-brane
• NS5-brane
• M2-brane
• M5-brane
• S-brane
• Black brane
• Black holes
• Black string
• Brane cosmology
• Quiver diagram
• Hanany–Witten transition
Conformal field theory
• Virasoro algebra
• Mirror symmetry
• Conformal anomaly
• Conformal algebra
• Superconformal algebra
• Vertex operator algebra
• Loop algebra
• Kac–Moody algebra
• Wess–Zumino–Witten model
Gauge theory
• Anomalies
• Instantons
• Chern–Simons form
• Bogomol'nyi–Prasad–Sommerfield bound
• Exceptional Lie groups (G2, F4, E6, E7, E8)
• ADE classification
• Dirac string
• p-form electrodynamics
Geometry
• Worldsheet
• Kaluza–Klein theory
• Compactification
• Why 10 dimensions?
• Kähler manifold
• Ricci-flat manifold
• Calabi–Yau manifold
• Hyperkähler manifold
• K3 surface
• G2 manifold
• Spin(7)-manifold
• Generalized complex manifold
• Orbifold
• Conifold
• Orientifold
• Moduli space
• Hořava–Witten theory
• K-theory (physics)
• Twisted K-theory
Supersymmetry
• Supergravity
• Superspace
• Lie superalgebra
• Lie supergroup
Holography
• Holographic principle
• AdS/CFT correspondence
M-theory
• Matrix theory
• Introduction to M-theory
String theorists
• Aganagić
• Arkani-Hamed
• Atiyah
• Banks
• Berenstein
• Bousso
• Cleaver
• Curtright
• Dijkgraaf
• Distler
• Douglas
• Duff
• Dvali
• Ferrara
• Fischler
• Friedan
• Gates
• Gliozzi
• Gopakumar
• Green
• Greene
• Gross
• Gubser
• Gukov
• Guth
• Hanson
• Harvey
• 't Hooft
• Hořava
• Gibbons
• Kachru
• Kaku
• Kallosh
• Kaluza
• Kapustin
• Klebanov
• Knizhnik
• Kontsevich
• Klein
• Linde
• Maldacena
• Mandelstam
• Marolf
• Martinec
• Minwalla
• Moore
• Motl
• Mukhi
• Myers
• Nanopoulos
• Năstase
• Nekrasov
• Neveu
• Nielsen
• van Nieuwenhuizen
• Novikov
• Olive
• Ooguri
• Ovrut
• Polchinski
• Polyakov
• Rajaraman
• Ramond
• Randall
• Randjbar-Daemi
• Roček
• Rohm
• Sagnotti
• Scherk
• Schwarz
• Seiberg
• Sen
• Shenker
• Siegel
• Silverstein
• Sơn
• Staudacher
• Steinhardt
• Strominger
• Sundrum
• Susskind
• Townsend
• Trivedi
• Turok
• Vafa
• Veneziano
• Verlinde
• Verlinde
• Wess
• Witten
• Yau
• Yoneya
• Zamolodchikov
• Zamolodchikov
• Zaslow
• Zumino
• Zwiebach
Standard Model
Background
• Particle physics
• Fermions
• Gauge boson
• Higgs boson
• Quantum field theory
• Gauge theory
• Strong interaction
• Color charge
• Quantum chromodynamics
• Quark model
• Electroweak interaction
• Weak interaction
• Quantum electrodynamics
• Fermi's interaction
• Weak hypercharge
• Weak isospin
Constituents
• CKM matrix
• Spontaneous symmetry breaking
• Higgs mechanism
• Mathematical formulation of the Standard Model
Beyond the Standard Model
Evidence
• Hierarchy problem
• Dark matter
• Cosmological constant problem
• Strong CP problem
• Neutrino oscillation
Theories
• Technicolor
• Kaluza–Klein theory
• Grand Unified Theory
• Theory of everything
Supersymmetry
• MSSM
• NMSSM
• Split supersymmetry
• Supergravity
Quantum gravity
• String theory
• Superstring theory
• Loop quantum gravity
• Causal dynamical triangulation
• Canonical quantum gravity
• Superfluid vacuum theory
• Twistor theory
Experiments
• Gran Sasso
• INO
• LHC
• SNO
• Super-K
• Tevatron
| Wikipedia |
Supertrace
In the theory of superalgebras, if A is a commutative superalgebra, V is a free right A-supermodule and T is an endomorphism from V to itself, then the supertrace of T, str(T), can be defined by a trace diagram.
More concretely, if we write out T in block matrix form after the decomposition into even and odd subspaces as follows,
$T={\begin{pmatrix}T_{00}&T_{01}\\T_{10}&T_{11}\end{pmatrix}}$
then the supertrace
$\operatorname {str} (T)=\operatorname {tr} (T_{00})-\operatorname {tr} (T_{11}),$ the ordinary trace of $T_{00}$ minus the ordinary trace of $T_{11}$.
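A minimal numerical sketch of this block formula follows (Python is not part of the original article; the matrix and the even/odd block size p = q = 2 are hypothetical choices for illustration):

```python
import numpy as np

def supertrace(T, p):
    """Supertrace of a square block matrix T whose first p rows/columns span the
    even subspace: str(T) = tr(T00) - tr(T11)."""
    T = np.asarray(T, dtype=float)
    return np.trace(T[:p, :p]) - np.trace(T[p:, p:])

# Hypothetical 4x4 supermatrix with even/odd block sizes p = q = 2
T = np.arange(16.0).reshape(4, 4)
print(supertrace(T, p=2))   # tr(T00) = 0 + 5, tr(T11) = 10 + 15, so str(T) = -20.0
```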
Let us show that the supertrace does not depend on a basis. Suppose e1, ..., ep are the even basis vectors and ep+1, ..., ep+q are the odd basis vectors. Then, the components of T, which are elements of A, are defined as
$T(\mathbf {e} _{j})=\mathbf {e} _{i}T_{j}^{i}.\,$
The grading of $T_{j}^{i}$ is the sum of the gradings of $T$, $\mathbf {e} _{i}$, $\mathbf {e} _{j}$ mod 2.
A change of basis to e1', ..., ep', e(p+1)', ..., e(p+q)' is given by the supermatrix
$\mathbf {e} _{i'}=\mathbf {e} _{i}A_{i'}^{i}$
and the inverse supermatrix
$\mathbf {e} _{i}=\mathbf {e} _{i'}(A^{-1})_{i}^{i'},\,$
where of course, AA−1 = A−1A = 1 (the identity).
We can now check explicitly that the supertrace is basis independent. In the case where T is even, we have
$\operatorname {str} (A^{-1}TA)=(-1)^{|i'|}(A^{-1})_{j}^{i'}T_{k}^{j}A_{i'}^{k}=(-1)^{|i'|}(-1)^{(|i'|+|j|)(|i'|+|j|)}T_{k}^{j}A_{i'}^{k}(A^{-1})_{j}^{i'}=(-1)^{|j|}T_{j}^{j}=\operatorname {str} (T).$
In the case where T is odd, we have
$\operatorname {str} (A^{-1}TA)=(-1)^{|i'|}(A^{-1})_{j}^{i'}T_{k}^{j}A_{i'}^{k}=(-1)^{|i'|}(-1)^{(1+|j|+|k|)(|i'|+|j|)}T_{k}^{j}(A^{-1})_{j}^{i'}A_{i'}^{k}=(-1)^{|j|}T_{j}^{j}=\operatorname {str} (T).$
The ordinary trace is not basis independent, so the appropriate trace to use in the Z2-graded setting is the supertrace.
The supertrace satisfies the property
$\operatorname {str} (T_{1}T_{2})=(-1)^{|T_{1}||T_{2}|}\operatorname {str} (T_{2}T_{1})$
for all T1, T2 in End(V). In particular, the supertrace of a supercommutator is zero.
In fact, one can define a supertrace more generally for any associative superalgebra E over a commutative superalgebra A as a linear map tr: E → A which vanishes on supercommutators.[1] Such a supertrace is not uniquely defined; it can always at least be modified by multiplication by an element of A.
Physics applications
In supersymmetric quantum field theories, in which the action integral is invariant under a set of symmetry transformations (known as supersymmetry transformations) whose algebras are superalgebras, the supertrace has a variety of applications. In such a context, the supertrace of the mass matrix for the theory can be written as a sum over spins of the traces of the mass matrices for particles of different spin:[2]
$\operatorname {str} [M^{2}]=\sum _{s}(-1)^{2s}(2s+1)\operatorname {tr} [m_{s}^{2}].$
In anomaly-free theories where only renormalizable terms appear in the superpotential, the above supertrace can be shown to vanish, even when supersymmetry is spontaneously broken.
The contribution to the effective potential arising at one loop (sometimes referred to as the Coleman-Weinberg potential[3]) can also be written in terms of a supertrace. If $M$ is the mass matrix for a given theory, the one-loop potential can be written as
$V_{eff}^{1-loop}={\dfrac {1}{64\pi ^{2}}}\operatorname {str} {\bigg [}M^{4}\ln {\Big (}{\dfrac {M^{2}}{\Lambda ^{2}}}{\Big )}{\bigg ]}={\dfrac {1}{64\pi ^{2}}}\operatorname {tr} {\bigg [}m_{B}^{4}\ln {\Big (}{\dfrac {m_{B}^{2}}{\Lambda ^{2}}}{\Big )}-m_{F}^{4}\ln {\Big (}{\dfrac {m_{F}^{2}}{\Lambda ^{2}}}{\Big )}{\bigg ]}$
where $m_{B}$ and $m_{F}$ are the respective tree-level mass matrices for the separate bosonic and fermionic degrees of freedom in the theory and $\Lambda $ is a cutoff scale.
See also
• Berezinian
References
1. N. Berline, E. Getzler, M. Vergne, Heat Kernels and Dirac Operators, Springer-Verlag, 1992, ISBN 0-387-53340-0, p. 39.
2. Martin, Stephen P. (1998). "A Supersymmetry Primer". Perspectives on Supersymmetry. World Scientific. pp. 1–98. arXiv:hep-ph/9709356. doi:10.1142/9789812839657_0001. ISBN 978-981-02-3553-6. ISSN 1793-1339.
3. Coleman, Sidney; Weinberg, Erick (1973-03-15). "Radiative Corrections as the Origin of Spontaneous Symmetry Breaking". Physical Review D. American Physical Society (APS). 7 (6): 1888–1910. arXiv:hep-th/0507214. doi:10.1103/physrevd.7.1888. ISSN 0556-2821.
| Wikipedia |
Support function
In mathematics, the support function hA of a non-empty closed convex set A in $\mathbb {R} ^{n}$ describes the (signed) distances of supporting hyperplanes of A from the origin. The support function is a convex function on $\mathbb {R} ^{n}$. Any non-empty closed convex set A is uniquely determined by hA. Furthermore, the support function, as a function of the set A, is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition. Due to these properties, the support function is one of the most central basic concepts in convex geometry.
Not to be confused with Support curve.
Definition
The support function $h_{A}\colon \mathbb {R} ^{n}\to \mathbb {R} $ of a non-empty closed convex set A in $\mathbb {R} ^{n}$ is given by
$h_{A}(x)=\sup\{x\cdot a:a\in A\},$
for $x\in \mathbb {R} ^{n}$.[1][2][3] Its interpretation is most intuitive when x is a unit vector: by definition, A is contained in the closed half space
$\{y\in \mathbb {R} ^{n}:y\cdot x\leqslant h_{A}(x)\}$
and there is at least one point of A in the boundary
$H(x)=\{y\in \mathbb {R} ^{n}:y\cdot x=h_{A}(x)\}$
of this half space. The hyperplane H(x) is therefore called a supporting hyperplane with exterior (or outer) unit normal vector x. The word exterior is important here, as the orientation of x plays a role: the set H(x) is in general different from H(-x). Now $h_{A}(x)$ is the (signed) distance of H(x) from the origin.
Examples
The support function of a singleton A={a} is $h_{A}(x)=x\cdot a$.
The support function of the Euclidean unit ball $B=\{y\in \mathbb {R} ^{n}\,:\,\|y\|_{2}\leq 1\}$ is $h_{B}(x)=\|x\|_{2}$ where $\|\cdot \|_{2}$ is the 2-norm.
If A is a line segment through the origin with endpoints -a and a then $h_{A}(x)=|x\cdot a|$.
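For a finite point set (or, equivalently, its convex hull), the supremum in the definition is a maximum of dot products, so the examples above can be checked directly. The following is a minimal sketch; the vectors used are hypothetical:

```python
import numpy as np

def support_function(A, x):
    """h_A(x) = sup { x . a : a in A } for a finite point set A given as rows of an
    array; for a finite set this is also the support function of its convex hull."""
    A = np.asarray(A, dtype=float)
    return float(np.max(A @ np.asarray(x, dtype=float)))

x = np.array([0.6, 0.8])                 # a unit vector in R^2
a = np.array([2.0, 1.0])
print(support_function([a], x))          # singleton {a}:   h(x) = x . a   = 2.0
print(support_function([-a, a], x))      # segment [-a, a]: h(x) = |x . a| = 2.0
print(support_function([-a, a], -x))     # same value in the opposite direction
```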
Properties
As a function of x
The support function of a compact nonempty convex set is real valued and continuous, but if the set is closed and unbounded, its support function is extended real valued (it takes the value $\infty $). As any nonempty closed convex set is the intersection of its supporting half spaces, the function hA determines A uniquely. This can be used to describe certain geometric properties of convex sets analytically. For instance, a set A is point symmetric with respect to the origin if and only if hA is an even function.
In general, the support function is not differentiable. However, directional derivatives exist and yield support functions of support sets. If A is compact and convex, and hA'(u;x) denotes the directional derivative of hA at u ≠ 0 in direction x, we have
$h_{A}'(u;x)=h_{A\cap H(u)}(x)\qquad x\in \mathbb {R} ^{n}.$
Here H(u) is the supporting hyperplane of A with exterior normal vector u, defined above. If A ∩ H(u) is a singleton {y}, say, it follows that the support function is differentiable at u and its gradient coincides with y. Conversely, if hA is differentiable at u, then A ∩ H(u) is a singleton. Hence hA is differentiable at all points u ≠ 0 if and only if A is strictly convex (the boundary of A does not contain any line segments).
More generally, when $A$ is convex and closed then for any $u\in \mathbb {R} ^{n}\setminus \{0\}$,
$\partial h_{A}(u)=H(u)\cap A\,,$
where $\partial h_{A}(u)$ denotes the set of subgradients of $h_{A}$ at $u$.
It follows directly from its definition that the support function is positive homogeneous:
$h_{A}(\alpha x)=\alpha h_{A}(x),\qquad \alpha \geq 0,x\in \mathbb {R} ^{n},$
and subadditive:
$h_{A}(x+y)\leq h_{A}(x)+h_{A}(y),\qquad x,y\in \mathbb {R} ^{n}.$
It follows that hA is a convex function. It is crucial in convex geometry that these properties characterize support functions: Any positive homogeneous, convex, real valued function on $\mathbb {R} ^{n}$ is the support function of a nonempty compact convex set. Several proofs are known;[3] one uses the fact that the Legendre transform of a positive homogeneous, convex, real valued function is the (convex) indicator function of a compact convex set.
Many authors restrict the support function to the Euclidean unit sphere and consider it as a function on Sn-1. The homogeneity property shows that this restriction determines the support function on $\mathbb {R} ^{n}$, as defined above.
As a function of A
The support functions of a dilated or translated set are closely related to the original set A:
$h_{\alpha A}(x)=\alpha h_{A}(x),\qquad \alpha \geq 0,x\in \mathbb {R} ^{n}$
and
$h_{A+b}(x)=h_{A}(x)+x\cdot b,\qquad x,b\in \mathbb {R} ^{n}.$
The latter generalises to
$h_{A+B}(x)=h_{A}(x)+h_{B}(x),\qquad x\in \mathbb {R} ^{n},$
where A + B denotes the Minkowski sum:
$A+B:=\{\,a+b\in \mathbb {R} ^{n}\mid a\in A,\ b\in B\,\}.$
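For finite point sets this Minkowski-sum identity is easy to verify numerically, since the maximum of $x\cdot (a+b)$ over the sumset splits into the two separate maxima. A minimal, self-contained sketch with hypothetical sets A and B:

```python
import numpy as np
from itertools import product

def support_function(A, x):
    """h_A(x) = max over the rows a of A of x . a (finite point set)."""
    return float(np.max(np.asarray(A, float) @ np.asarray(x, float)))

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])      # hypothetical triangle vertices
B = np.array([[2.0, 1.0], [-1.0, 2.0]])                 # hypothetical segment endpoints
A_plus_B = np.array([a + b for a, b in product(A, B)])  # Minkowski sum of the finite sets

rng = np.random.default_rng(0)
for x in rng.normal(size=(5, 2)):                       # a few random directions
    assert np.isclose(support_function(A_plus_B, x),
                      support_function(A, x) + support_function(B, x))
print("h_{A+B} = h_A + h_B verified on sample directions")
```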
The Hausdorff distance $d_{\mathrm {H} }(A,B)$ of two nonempty compact convex sets A and B can be expressed in terms of support functions,
$d_{\mathrm {H} }(A,B)=\|h_{A}-h_{B}\|_{\infty }$
where, on the right hand side, the uniform norm on the unit sphere is used.
The properties of the support function as a function of the set A are sometimes summarized in saying that $\tau :A\mapsto h_{A}$ maps the family of non-empty compact convex sets to the cone of all real-valued continuous functions on the sphere whose positive homogeneous extension is convex. Abusing terminology slightly, $\tau $ is sometimes called linear, as it respects Minkowski addition, although it is not defined on a linear space, but rather on an (abstract) convex cone of nonempty compact convex sets. The mapping $\tau $ is an isometry between this cone, endowed with the Hausdorff metric, and a subcone of the family of continuous functions on Sn-1 with the uniform norm.
Variants
In contrast to the above, support functions are sometimes defined on the boundary of A rather than on Sn-1, under the assumption that there exists a unique exterior unit normal at each boundary point. Convexity is not needed for the definition. For an oriented regular surface, M, with a unit normal vector, N, defined everywhere on its surface, the support function is then defined by
${x}\mapsto {x}\cdot N({x})$.
In other words, for any ${x}\in M$, this support function gives the signed distance of the unique hyperplane that touches M in x.
See also
• Barrier cone
• Supporting functional
References
1. T. Bonnesen, W. Fenchel, Theorie der konvexen Körper, Julius Springer, Berlin, 1934. English translation: Theory of convex bodies, BCS Associates, Moscow, ID, 1987.
2. R. J. Gardner, Geometric tomography, Cambridge University Press, New York, 1995. Second edition: 2006.
3. R. Schneider, Convex bodies: the Brunn-Minkowski theory, Cambridge University Press, Cambridge, 1993.
| Wikipedia |
Supporting line
In geometry, a supporting line L of a curve C in the plane is a line that contains a point of C, but does not separate any two points of C.[1] In other words, C lies completely in one of the two closed half-planes defined by L and has at least one point on L.
Properties
There can be many supporting lines for a curve at a given point. If a tangent exists at a given point and does not separate the curve, then it is the unique supporting line at that point.
Generalizations
The notion of supporting line is also discussed for planar shapes. In this case a supporting line may be defined as a line which has common points with the boundary of the shape, but not with its interior.[2]
The notion of a supporting line to a planar curve or convex shape can be generalized to n dimensions as a supporting hyperplane.
Critical support lines
If two bounded connected planar shapes have disjoint convex hulls that are separated by a positive distance, then they necessarily have exactly four common lines of support, the bitangents of the two convex hulls. Two of these lines of support separate the two shapes, and are called critical support lines.[2] Without the assumption of convexity, there may be more or fewer than four lines of support, even if the shapes themselves are disjoint. For instance, if one shape is an annulus that contains the other, then there are no common lines of support, while if each of two shapes consists of a pair of small disks at opposite corners of a square then there may be as many as 16 common lines of support.
References
1. "The geometry of geodesics", Herbert Busemann, p. 158
2. "Encyclopedia of Distances", by Michel M. Deza, Elena Deza, p. 179
| Wikipedia |
Support (mathematics)
In mathematics, the support of a real-valued function $f$ is the subset of the function domain containing the elements which are not mapped to zero. If the domain of $f$ is a topological space, then the support of $f$ is instead defined as the smallest closed set containing all points not mapped to zero. This concept is used very widely in mathematical analysis.
Formulation
Suppose that $f:X\to \mathbb {R} $ is a real-valued function whose domain is an arbitrary set $X.$ The set-theoretic support of $f,$ written $\operatorname {supp} (f),$ is the set of points in $X$ where $f$ is non-zero:
$\operatorname {supp} (f)=\{x\in X\,:\,f(x)\neq 0\}.$
The support of $f$ is the smallest subset of $X$ with the property that $f$ is zero on the subset's complement. If $f(x)=0$ for all but a finite number of points $x\in X,$ then $f$ is said to have finite support.
If the set $X$ has an additional structure (for example, a topology), then the support of $f$ is defined in an analogous way as the smallest subset of $X$ of an appropriate type such that $f$ vanishes in an appropriate sense on its complement. The notion of support also extends in a natural way to functions taking values in more general sets than $\mathbb {R} $ and to other objects, such as measures or distributions.
Closed support
The most common situation occurs when $X$ is a topological space (such as the real line or $n$-dimensional Euclidean space) and $f:X\to \mathbb {R} $ is a continuous real (or complex)-valued function. In this case, the support of $f$, $\operatorname {supp} (f)$, or the closed support of $f$, is defined topologically as the closure (taken in $X$) of the subset of $X$ where $f$ is non-zero,[1][2][3] that is,
$\operatorname {supp} (f):=\operatorname {cl} _{X}\left(\{x\in X\,:\,f(x)\neq 0\}\right)={\overline {f^{-1}\left(\{0\}^{\mathrm {c} }\right)}}.$
Since the intersection of closed sets is closed, $\operatorname {supp} (f)$ is the intersection of all closed sets that contain the set-theoretic support of $f.$
For example, if $f:\mathbb {R} \to \mathbb {R} $ is the function defined by
$f(x)={\begin{cases}1-x^{2}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1\end{cases}}$
then $\operatorname {supp} (f)$, the support of $f$, or the closed support of $f$, is the closed interval $[-1,1],$ since $f$ is non-zero on the open interval $(-1,1)$ and the closure of this set is $[-1,1].$
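A quick way to see this example numerically is to sample the set-theoretic support on a grid; its closure is then approximated by the extreme sampled points. A minimal sketch (the grid resolution is an arbitrary choice):

```python
import numpy as np

def f(x):
    return np.where(np.abs(x) < 1, 1 - x**2, 0.0)

xs = np.linspace(-3.0, 3.0, 60001)        # a fine grid (resolution is arbitrary)
nonzero = xs[f(xs) != 0.0]                # sampled set-theoretic support, a subset of (-1, 1)
print(nonzero.min(), nonzero.max())       # approximately -1 and 1; the closed support is [-1, 1]
```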
The notion of closed support is usually applied to continuous functions, but the definition makes sense for arbitrary real or complex-valued functions on a topological space, and some authors do not require that $f:X\to \mathbb {R} $ (or $f:X\to \mathbb {C} $) be continuous.[4]
Compact support
Functions with compact support on a topological space $X$ are those whose closed support is a compact subset of $X.$ If $X$ is the real line, or $n$-dimensional Euclidean space, then a function has compact support if and only if it has bounded support, since a subset of $\mathbb {R} ^{n}$ is compact if and only if it is closed and bounded.
For example, the function $f:\mathbb {R} \to \mathbb {R} $ defined above is a continuous function with compact support $[-1,1].$ If $f:\mathbb {R} ^{n}\to \mathbb {R} $ is a smooth function then because $f$ is identically $0$ on the open subset $\mathbb {R} ^{n}\smallsetminus \operatorname {supp} (f),$ all of $f$'s partial derivatives of all orders are also identically $0$ on $\mathbb {R} ^{n}\smallsetminus \operatorname {supp} (f).$
The condition of compact support is stronger than the condition of vanishing at infinity. For example, the function $f:\mathbb {R} \to \mathbb {R} $ defined by
$f(x)={\frac {1}{1+x^{2}}}$
vanishes at infinity, since $f(x)\to 0$ as $|x|\to \infty ,$ but its support $\mathbb {R} $ is not compact.
Real-valued compactly supported smooth functions on a Euclidean space are called bump functions. Mollifiers are an important special case of bump functions as they can be used in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution.
In good cases, functions with compact support are dense in the space of functions that vanish at infinity, but this property requires some technical work to justify in a given example. As an intuition for more complex examples, and in the language of limits, for any $\varepsilon >0,$ any function $f$ on the real line $\mathbb {R} $ that vanishes at infinity can be approximated by choosing an appropriate compact subset $C$ of $\mathbb {R} $ such that
$\left|f(x)-I_{C}(x)f(x)\right|<\varepsilon $
for all $x\in X,$ where $I_{C}$ is the indicator function of $C.$ Every continuous function on a compact topological space has compact support since every closed subset of a compact space is indeed compact.
Essential support
If $X$ is a topological measure space with a Borel measure $\mu $ (such as $\mathbb {R} ^{n},$ or a Lebesgue measurable subset of $\mathbb {R} ^{n},$ equipped with Lebesgue measure), then one typically identifies functions that are equal $\mu $-almost everywhere. In that case, the essential support of a measurable function $f:X\to \mathbb {R} ,$ written $\operatorname {ess\,supp} (f),$ is defined to be the smallest closed subset $F$ of $X$ such that $f=0$ $\mu $-almost everywhere outside $F.$ Equivalently, $\operatorname {ess\,supp} (f)$ is the complement of the largest open set on which $f=0$ $\mu $-almost everywhere:[5]
$\operatorname {ess\,supp} (f):=X\setminus \bigcup \left\{\Omega \subseteq X:\Omega {\text{ is open and }}f=0\,\mu {\text{-almost everywhere in }}\Omega \right\}.$
The essential support of a function $f$ depends on the measure $\mu $ as well as on $f,$ and it may be strictly smaller than the closed support. For example, if $f:[0,1]\to \mathbb {R} $ is the Dirichlet function that is $0$ on irrational numbers and $1$ on rational numbers, and $[0,1]$ is equipped with Lebesgue measure, then the support of $f$ is the entire interval $[0,1],$ but the essential support of $f$ is empty, since $f$ is equal almost everywhere to the zero function.
In analysis one nearly always wants to use the essential support of a function, rather than its closed support, when the two sets are different, so $\operatorname {ess\,supp} (f)$ is often written simply as $\operatorname {supp} (f)$ and referred to as the support.[5][6]
Generalization
If $M$ is an arbitrary set containing zero, the concept of support is immediately generalizable to functions $f:X\to M.$ Support may also be defined for any algebraic structure with identity (such as a group, monoid, or composition algebra), in which the identity element assumes the role of zero. For instance, the family $\mathbb {Z} ^{\mathbb {N} }$ of functions from the natural numbers to the integers is the uncountable set of integer sequences. The subfamily $\left\{f\in \mathbb {Z} ^{\mathbb {N} }:f{\text{ has finite support }}\right\}$ is the countable set of all integer sequences that have only finitely many nonzero entries.
Functions of finite support are used in defining algebraic structures such as group rings and free abelian groups.[7]
In probability and measure theory
Further information: support (measure theory)
In probability theory, the support of a probability distribution can be loosely thought of as the closure of the set of possible values of a random variable having that distribution. There are, however, some subtleties to consider when dealing with general distributions defined on a sigma algebra, rather than on a topological space.
More formally, if $X:\Omega \to \mathbb {R} $ is a random variable on $(\Omega ,{\mathcal {F}},P)$ then the support of $X$ is the smallest closed set $R_{X}\subseteq \mathbb {R} $ such that $P\left(X\in R_{X}\right)=1.$
In practice however, the support of a discrete random variable $X$ is often defined as the set $R_{X}=\{x\in \mathbb {R} :P(X=x)>0\}$ and the support of a continuous random variable $X$ is defined as the set $R_{X}=\{x\in \mathbb {R} :f_{X}(x)>0\}$ where $f_{X}(x)$ is a probability density function of $X$ (the set-theoretic support).[8]
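For a discrete random variable this practical definition amounts to collecting the outcomes of positive probability. A minimal sketch with a hypothetical probability mass function:

```python
from fractions import Fraction

# Hypothetical probability mass function of a discrete random variable X
pmf = {-1: Fraction(1, 4), 0: Fraction(0), 2: Fraction(1, 2), 5: Fraction(1, 4)}

support = sorted(x for x, p in pmf.items() if p > 0)
print(support)   # [-1, 2, 5] -- the outcome 0 is dropped because P(X = 0) = 0
```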
Note that the word support can refer to the logarithm of the likelihood of a probability density function.[9]
Support of a distribution
It is possible also to talk about the support of a distribution, such as the Dirac delta function $\delta (x)$ on the real line. In that example, we can consider test functions $F,$ which are smooth functions with support not including the point $0.$ Since $\delta (F)$ (the distribution $\delta $ applied as linear functional to $F$) is $0$ for such functions, we can say that the support of $\delta $ is $\{0\}$ only. Since measures (including probability measures) on the real line are special cases of distributions, we can also speak of the support of a measure in the same way.
Suppose that $f$ is a distribution, and that $U$ is an open set in Euclidean space such that, for all test functions $\phi $ such that the support of $\phi $ is contained in $U,$ $f(\phi )=0.$ Then $f$ is said to vanish on $U.$ Now, if $f$ vanishes on an arbitrary family $U_{\alpha }$ of open sets, then for any test function $\phi $ supported in $\bigcup U_{\alpha },$ a simple argument based on the compactness of the support of $\phi $ and a partition of unity shows that $f(\phi )=0$ as well. Hence we can define the support of $f$ as the complement of the largest open set on which $f$ vanishes. For example, the support of the Dirac delta is $\{0\}.$
Singular support
In Fourier analysis in particular, it is interesting to study the singular support of a distribution. This has the intuitive interpretation as the set of points at which a distribution fails to be a smooth function.
For example, the Fourier transform of the Heaviside step function can, up to constant factors, be considered to be $1/x$ (a function) except at $x=0.$ While $x=0$ is clearly a special point, it is more precise to say that the transform of the distribution has singular support $\{0\}$: it cannot accurately be expressed as a function in relation to test functions with support including $0.$ It can be expressed as an application of a Cauchy principal value improper integral.
For distributions in several variables, singular supports allow one to define wave front sets and understand Huygens' principle in terms of mathematical analysis. Singular supports may also be used to understand phenomena special to distribution theory, such as attempts to 'multiply' distributions (squaring the Dirac delta function fails – essentially because the singular supports of the distributions to be multiplied should be disjoint).
Family of supports
An abstract notion of family of supports on a topological space $X,$ suitable for sheaf theory, was defined by Henri Cartan. In extending Poincaré duality to manifolds that are not compact, the 'compact support' idea enters naturally on one side of the duality; see for example Alexander–Spanier cohomology.
Bredon, Sheaf Theory (2nd edition, 1997) gives these definitions. A family $\Phi $ of closed subsets of $X$ is a family of supports if it is down-closed and closed under finite union. Its extent is the union over $\Phi .$ A paracompactifying family of supports is one that, in addition, satisfies: every $Y$ in $\Phi $ is, with the subspace topology, a paracompact space, and has some $Z$ in $\Phi $ which is a neighbourhood. If $X$ is a locally compact space, assumed Hausdorff, the family of all compact subsets satisfies these further conditions, making it paracompactifying.
See also
• Bounded function – A mathematical function the set of whose values are bounded
• Bump function – Smooth and compactly supported function
• Support of a module
• Titchmarsh convolution theorem
Citations
1. Folland, Gerald B. (1999). Real Analysis, 2nd ed. New York: John Wiley. p. 132.
2. Hörmander, Lars (1990). Linear Partial Differential Equations I, 2nd ed. Berlin: Springer-Verlag. p. 14.
3. Pascucci, Andrea (2011). PDE and Martingale Methods in Option Pricing. Bocconi & Springer Series. Berlin: Springer-Verlag. p. 678. doi:10.1007/978-88-470-1781-8. ISBN 978-88-470-1780-1.
4. Rudin, Walter (1987). Real and Complex Analysis, 3rd ed. New York: McGraw-Hill. p. 38.
5. Lieb, Elliott; Loss, Michael (2001). Analysis. Graduate Studies in Mathematics. Vol. 14 (2nd ed.). American Mathematical Society. p. 13. ISBN 978-0821827833.
6. In a similar way, one uses the essential supremum of a measurable function instead of its supremum.
7. Tomasz, Kaczynski (2004). Computational homology. Mischaikow, Konstantin Michael,, Mrozek, Marian. New York: Springer. p. 445. ISBN 9780387215976. OCLC 55897585.
8. Taboga, Marco. "Support of a random variable". statlect.com. Retrieved 29 November 2017.
9. Edwards, A. W. F. (1992). Likelihood (Expanded ed.). Baltimore: Johns Hopkins University Press. pp. 31–34. ISBN 0-8018-4443-6.
References
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
| Wikipedia |
Support (measure theory)
In mathematics, the support (sometimes topological support or spectrum) of a measure $\mu $ on a measurable topological space $(X,\operatorname {Borel} (X))$ is a precise notion of where in the space $X$ the measure "lives". It is defined to be the largest (closed) subset of $X$ for which every open neighbourhood of every point of the set has positive measure.
Motivation
A (non-negative) measure $\mu $ on a measurable space $(X,\Sigma )$ is really a function $\mu :\Sigma \to [0,+\infty ].$ Therefore, in terms of the usual definition of support, the support of $\mu $ is a subset of the σ-algebra $\Sigma :$
$\operatorname {supp} (\mu ):={\overline {\{A\in \Sigma \,\vert \,\mu (A)\neq 0\}}},$
where the overbar denotes set closure. However, this definition is somewhat unsatisfactory: we use the notion of closure, but we do not even have a topology on $\Sigma .$ What we really want to know is where in the space $X$ the measure $\mu $ is non-zero. Consider two examples:
1. Lebesgue measure $\lambda $ on the real line $\mathbb {R} .$ It seems clear that $\lambda $ "lives on" the whole of the real line.
2. A Dirac measure $\delta _{p}$ at some point $p\in \mathbb {R} .$ Again, intuition suggests that the measure $\delta _{p}$ "lives at" the point $p,$ and nowhere else.
In light of these two examples, we can reject the following candidate definitions in favour of the one in the next section:
1. We could remove the points where $\mu $ is zero, and take the support to be the remainder $X\setminus \{x\in X\mid \mu (\{x\})=0\}.$ This might work for the Dirac measure $\delta _{p},$ but it would definitely not work for $\lambda :$ since the Lebesgue measure of any singleton is zero, this definition would give $\lambda $ empty support.
2. By comparison with the notion of strict positivity of measures, we could take the support to be the set of all points with a neighbourhood of positive measure:
$\{x\in X\mid \exists N_{x}{\text{ open}}{\text{ such that }}(x\in N_{x}{\text{ and }}\mu (N_{x})>0)\}$
(or the closure of this). It is also too simplistic: by taking $N_{x}=X$ for all points $x\in X,$ this would make the support of every measure except the zero measure the whole of $X.$
However, the idea of "local strict positivity" is not too far from a workable definition.
Definition
Let $(X,T)$ be a topological space; let $B(T)$ denote the Borel σ-algebra on $X,$ i.e. the smallest sigma algebra on $X$ that contains all open sets $U\in T.$ Let $\mu $ be a measure on $(X,B(T)).$ Then the support (or spectrum) of $\mu $ is defined as the set of all points $x$ in $X$ for which every open neighbourhood $N_{x}$ of $x$ has positive measure:
$\operatorname {supp} (\mu ):=\{x\in X\mid \forall N_{x}\in T\colon (x\in N_{x}\Rightarrow \mu (N_{x})>0)\}.$
Some authors prefer to take the closure of the above set. However, this is not necessary: see "Properties" below.
An equivalent definition of support is as the largest $C\in B(T)$ (with respect to inclusion) such that every open set which has non-empty intersection with $C$ has positive measure, i.e. the largest $C$ such that:
$(\forall U\in T)(U\cap C\neq \varnothing \implies \mu (U\cap C)>0).$
Signed and complex measures
This definition can be extended to signed and complex measures. Suppose that $\mu :\Sigma \to [-\infty ,+\infty ]$ is a signed measure. Use the Hahn decomposition theorem to write
$\mu =\mu ^{+}-\mu ^{-},$
where $\mu ^{\pm }$ are both non-negative measures. Then the support of $\mu $ is defined to be
$\operatorname {supp} (\mu ):=\operatorname {supp} (\mu ^{+})\cup \operatorname {supp} (\mu ^{-}).$
Similarly, if $\mu :\Sigma \to \mathbb {C} $ is a complex measure, the support of $\mu $ is defined to be the union of the supports of its real and imaginary parts.
Properties
$\operatorname {supp} (\mu _{1}+\mu _{2})=\operatorname {supp} (\mu _{1})\cup \operatorname {supp} (\mu _{2})$ holds.
A measure $\mu $ on $X$ is strictly positive if and only if it has support $\operatorname {supp} (\mu )=X.$ If $\mu $ is strictly positive and $x\in X$ is arbitrary, then any open neighbourhood of $x,$ since it is an open set, has positive measure; hence, $x\in \operatorname {supp} (\mu ),$ so $\operatorname {supp} (\mu )=X.$ Conversely, if $\operatorname {supp} (\mu )=X,$ then every non-empty open set (being an open neighbourhood of some point in its interior, which is also a point of the support) has positive measure; hence, $\mu $ is strictly positive. The support of a measure is closed in $X,$ as its complement is the union of the open sets of measure $0.$
In general the support of a nonzero measure may be empty: see the examples below. However, if $X$ is a Hausdorff topological space and $\mu $ is a Radon measure, a Borel set $A$ outside the support has measure zero:
$A\subseteq X\setminus \operatorname {supp} (\mu )\implies \mu (A)=0.$
The converse is true if $A$ is open, but it is not true in general: it fails if there exists a point $x\in \operatorname {supp} (\mu )$ such that $\mu (\{x\})=0$ (e.g. Lebesgue measure). Thus, one does not need to "integrate outside the support": for any measurable function $f:X\to \mathbb {R} $ or $\mathbb {C} ,$
$\int _{X}f(x)\,\mathrm {d} \mu (x)=\int _{\operatorname {supp} (\mu )}f(x)\,\mathrm {d} \mu (x).$
The concept of support of a measure and that of spectrum of a self-adjoint linear operator on a Hilbert space are closely related. Indeed, if $\mu $ is a regular Borel measure on the line $\mathbb {R} ,$ then the multiplication operator $(Af)(x)=xf(x)$ is self-adjoint on its natural domain
$D(A)=\{f\in L^{2}(\mathbb {R} ,d\mu )\mid xf(x)\in L^{2}(\mathbb {R} ,d\mu )\}$
and its spectrum coincides with the essential range of the identity function $x\mapsto x,$ which is precisely the support of $\mu .$[1]
Examples
Lebesgue measure
In the case of Lebesgue measure $\lambda $ on the real line $\mathbb {R} ,$ consider an arbitrary point $x\in \mathbb {R} .$ Then any open neighbourhood $N_{x}$ of $x$ must contain some open interval $(x-\epsilon ,x+\epsilon )$ for some $\epsilon >0.$ This interval has Lebesgue measure $2\epsilon >0,$ so $\lambda (N_{x})\geq 2\epsilon >0.$ Since $x\in \mathbb {R} $ was arbitrary, $\operatorname {supp} (\lambda )=\mathbb {R} .$
Dirac measure
In the case of Dirac measure $\delta _{p},$ let $x\in \mathbb {R} $ and consider two cases:
1. if $x=p,$ then every open neighbourhood $N_{x}$ of $x$ contains $p,$ so $\delta _{p}(N_{x})=1>0.$
2. on the other hand, if $x\neq p,$ then there exists a sufficiently small open ball $B$ around $x$ that does not contain $p,$ so $\delta _{p}(B)=0.$
We conclude that $\operatorname {supp} (\delta _{p})$ is the closure of the singleton set $\{p\},$ which is $\{p\}$ itself.
In fact, a measure $\mu $ on the real line is a Dirac measure $\delta _{p}$ for some point $p$ if and only if the support of $\mu $ is the singleton set $\{p\}.$ Consequently, Dirac measure on the real line is the unique measure with zero variance (provided that the measure has variance at all).
A uniform distribution
Consider the measure $\mu $ on the real line $\mathbb {R} $ defined by
$\mu (A):=\lambda (A\cap (0,1))$
i.e. a uniform measure on the open interval $(0,1).$ A similar argument to the Dirac measure example shows that $\operatorname {supp} (\mu )=[0,1].$ Note that the boundary points 0 and 1 lie in the support: any open set containing 0 (or 1) contains an open interval about 0 (or 1), which must intersect $(0,1),$ and so must have positive $\mu $-measure.
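Because $\mu $ of an interval here is just the length of its intersection with $(0,1)$, the defining condition can be checked directly at sample points, including the boundary points 0 and 1. A minimal sketch (the sample points and the radius eps are arbitrary choices):

```python
def mu(a, b):
    """mu of the open interval (a, b): the Lebesgue length of (a, b) ∩ (0, 1)."""
    return max(min(b, 1.0) - max(a, 0.0), 0.0)

eps = 1e-3
for x in [-0.5, 0.0, 0.5, 1.0, 1.5]:
    print(x, mu(x - eps, x + eps) > 0)   # True exactly for the sample points in [0, 1]
```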
A nontrivial measure whose support is empty
The space of all countable ordinals with the topology generated by "open intervals" is a locally compact Hausdorff space. The measure ("Dieudonné measure") that assigns measure 1 to Borel sets containing an unbounded closed subset and assigns 0 to other Borel sets is a Borel probability measure whose support is empty.
A nontrivial measure whose support has measure zero
On a compact Hausdorff space the support of a non-zero measure is always non-empty, but may have measure $0.$ An example of this is given by adding the first uncountable ordinal $\Omega $ to the previous example: the support of the measure is the single point $\Omega ,$ which has measure $0.$
References
1. Mathematical methods in Quantum Mechanics with applications to Schrödinger Operators
• Ambrosio, L.; Gigli, N.; Savaré, G. (2005). Gradient Flows in Metric Spaces and in the Space of Probability Measures. ETH Zürich, Birkhäuser Verlag, Basel. ISBN 3-7643-2428-7.
• Parthasarathy, K. R. (2005). Probability measures on metric spaces. AMS Chelsea Publishing, Providence, RI. p. xii+276. ISBN 0-8218-3889-X. MR2169627 (See chapter 2, section 2.)
• Teschl, Gerald (2009). Mathematical methods in Quantum Mechanics with applications to Schrödinger Operators. AMS.(See chapter 3, section 2)
Measure theory
Basic concepts
• Absolute continuity of measures
• Lebesgue integration
• Lp spaces
• Measure
• Measure space
• Probability space
• Measurable space/function
Sets
• Almost everywhere
• Atom
• Baire set
• Borel set
• equivalence relation
• Borel space
• Carathéodory's criterion
• Cylindrical σ-algebra
• Cylinder set
• 𝜆-system
• Essential range
• infimum/supremum
• Locally measurable
• π-system
• σ-algebra
• Non-measurable set
• Vitali set
• Null set
• Support
• Transverse measure
• Universally measurable
Types of Measures
• Atomic
• Baire
• Banach
• Besov
• Borel
• Brown
• Complex
• Complete
• Content
• (Logarithmically) Convex
• Decomposable
• Discrete
• Equivalent
• Finite
• Inner
• (Quasi-) Invariant
• Locally finite
• Maximising
• Metric outer
• Outer
• Perfect
• Pre-measure
• (Sub-) Probability
• Projection-valued
• Radon
• Random
• Regular
• Borel regular
• Inner regular
• Outer regular
• Saturated
• Set function
• σ-finite
• s-finite
• Signed
• Singular
• Spectral
• Strictly positive
• Tight
• Vector
Particular measures
• Counting
• Dirac
• Euler
• Gaussian
• Haar
• Harmonic
• Hausdorff
• Intensity
• Lebesgue
• Infinite-dimensional
• Logarithmic
• Product
• Projections
• Pushforward
• Spherical measure
• Tangent
• Trivial
• Young
Maps
• Measurable function
• Bochner
• Strongly
• Weakly
• Convergence: almost everywhere
• of measures
• in measure
• of random variables
• in distribution
• in probability
• Cylinder set measure
• Random: compact set
• element
• measure
• process
• variable
• vector
• Projection-valued measure
Main results
• Carathéodory's extension theorem
• Convergence theorems
• Dominated
• Monotone
• Vitali
• Decomposition theorems
• Hahn
• Jordan
• Maharam's
• Egorov's
• Fatou's lemma
• Fubini's
• Fubini–Tonelli
• Hölder's inequality
• Minkowski inequality
• Radon–Nikodym
• Riesz–Markov–Kakutani representation theorem
Other results
• Disintegration theorem
• Lifting theory
• Lebesgue's density theorem
• Lebesgue differentiation theorem
• Sard's theorem
For Lebesgue measure
• Isoperimetric inequality
• Brunn–Minkowski theorem
• Milman's reverse
• Minkowski–Steiner formula
• Prékopa–Leindler inequality
• Vitale's random Brunn–Minkowski inequality
Applications & related
• Convex analysis
• Descriptive set theory
• Probability theory
• Real analysis
• Spectral theory
| Wikipedia |
Support of a module
In commutative algebra, the support of a module M over a commutative ring A is the set of all prime ideals ${\mathfrak {p}}$ of A such that $M_{\mathfrak {p}}\neq 0$ (that is, the localization of M at ${\mathfrak {p}}$ is not equal to zero).[1] It is denoted by $\operatorname {Supp} M$. The support is, by definition, a subset of the spectrum of A.
Properties
• $M=0$ if and only if its support is empty.
• Let $0\to M'\to M\to M''\to 0$ be a short exact sequence of A-modules. Then
$\operatorname {Supp} M=\operatorname {Supp} M'\cup \operatorname {Supp} M''.$
Note that this union may not be a disjoint union.
• If $M$ is a sum of submodules $M_{\lambda }$, then $\operatorname {Supp} M=\bigcup _{\lambda }\operatorname {Supp} M_{\lambda }.$
• If $M$ is a finitely generated A-module, then $\operatorname {Supp} M$ is the set of all prime ideals containing the annihilator of M. In particular, it is closed in the Zariski topology on Spec A.
• If $M,N$ are finitely generated A-modules, then
$\operatorname {Supp} (M\otimes _{A}N)=\operatorname {Supp} M\cap \operatorname {Supp} N.$
• If $M$ is a finitely generated A-module and I is an ideal of A, then $\operatorname {Supp} (M/IM)$ is the set of all prime ideals containing $I+\operatorname {Ann} M.$ This is $V(I)\cap \operatorname {Supp} M$.
Support of a quasicoherent sheaf
If F is a quasicoherent sheaf on a scheme X, the support of F is the set of all points x in X such that the stalk Fx is nonzero. This definition is similar to the definition of the support of a function on a space X, and this is the motivation for using the word "support". Most properties of the support generalize from modules to quasicoherent sheaves word for word. For example, the support of a coherent sheaf (or more generally, a finite type sheaf) is a closed subspace of X.[2]
If M is a module over a ring A, then the support of M as a module coincides with the support of the associated quasicoherent sheaf ${\tilde {M}}$ on the affine scheme Spec A. Moreover, if $\{U_{\alpha }=\operatorname {Spec} (A_{\alpha })\}$ is an affine cover of a scheme X, then the support of a quasicoherent sheaf F is equal to the union of supports of the associated modules Mα over each Aα.[3]
Examples
As noted above, a prime ideal ${\mathfrak {p}}$ is in the support if and only if it contains the annihilator of $M$.[4] For example, over $R=\mathbb {C} [x,y,z,w]$, the annihilator of the module
$M=R/I={\frac {\mathbb {C} [x,y,z,w]}{(x^{4}+y^{4}+z^{4}+w^{4})}}$
is the ideal $I=(f)=(x^{4}+y^{4}+z^{4}+w^{4})$. This implies that $\operatorname {Supp} M\cong \operatorname {Spec} (R/I)$, the vanishing locus of the polynomial f. Looking at the short exact sequence
$0\to I\to R\to R/I\to 0$
we might mistakenly conjecture that the support of I = (f) is Spec(R(f)), which is the complement of the vanishing locus of the polynomial f. In fact, since R is an integral domain, the ideal I = (f) = Rf is isomorphic to R as a module, so its support is the entire space: Supp(I) = Spec(R).
The support of a finite module over a Noetherian ring is always closed under specialization.
Now, if we take two polynomials $f_{1},f_{2}\in R$ in an integral domain which form a complete intersection ideal $(f_{1},f_{2})$, the tensor property shows us that
$\operatorname {Supp} \left(R/(f_{1})\otimes _{R}R/(f_{2})\right)=\,\operatorname {Supp} \left(R/(f_{1})\right)\cap \,\operatorname {Supp} \left(R/(f_{2})\right)\cong \,\operatorname {Spec} (R/(f_{1},f_{2})).$
See also
• Annihilator (ring theory)
• Associated prime
• Support (mathematics)
References
1. EGA 0I, 1.7.1.
2. The Stacks Project authors (2017). Stacks Project, Tag 01B4.
3. The Stacks Project authors (2017). Stacks Project, Tag 01AS.
4. Eisenbud, David. Commutative Algebra with a View Towards Algebraic Geometry. Corollary 2.7, p. 67.
• Grothendieck, Alexandre; Dieudonné, Jean (1960). "Éléments de géométrie algébrique: I. Le langage des schémas". Publications Mathématiques de l'IHÉS. 4. doi:10.1007/bf02684778. MR 0217083.
• Atiyah, M. F., and I. G. Macdonald, Introduction to Commutative Algebra, Perseus Books, 1969, ISBN 0-201-00361-9 MR242802
| Wikipedia |
Support polygon
For a rigid object in contact with a fixed environment and acted upon by gravity in the vertical direction, its support polygon is a horizontal region over which the center of mass must lie to achieve static stability.[1] For example, for an object resting on a horizontal surface (e.g. a table), the support polygon is the convex hull of its "footprint" on the table.
The support polygon succinctly represents the conditions necessary for an object to be at equilibrium under gravity. That is, if the object's center of mass lies over the support polygon, then there exist a set of forces over the region of contact that exactly counteracts the forces of gravity. Note that this is a necessary condition for stability, but not a sufficient one.
Derivation[2]
Let the object be in contact at a finite number of points $C_{1},\ldots ,C_{N}$. At each point $C_{k}$, let $FC_{k}$ be the set of forces that can be applied on the object at that point. Here, $FC_{k}$ is known as the friction cone, and for the Coulomb model of friction, is actually a cone with apex at the origin, extending to infinity in the normal direction of the contact.
Let $f_{1},\ldots ,f_{N}$ be the (unspecified) forces at the contact points. To balance the object in static equilibrium, the following Newton-Euler equations must be satisfied by $f_{1},\ldots ,f_{N}$:
• $\sum _{k=1}^{N}f_{k}+G=0$
• $\sum _{k=1}^{N}f_{k}\times C_{k}+G\times CM=0$
• $f_{k}\in FC_{k}$ for all $k$
where $G$ is the force of gravity on the object, and $CM$ is its center of mass. The first two equations are the Newton-Euler equations, and the third requires all forces to be valid. If there is no set of forces $f_{1},\ldots ,f_{N}$ that meet all these conditions, the object will not be in equilibrium.
The second equation has no dependence on the vertical component of the center of mass, and thus if a solution exists for one $CM$, the same solution works for all $CM+\alpha G$. Therefore, the set of all $CM$ that have solutions to the above conditions is a set that extends infinitely in the up and down directions. The support polygon is simply the projection of this set on the horizontal plane.
These results can easily be extended to different friction models and an infinite number of contact points (i.e. a region of contact).
Properties
Even though the word "polygon" is used to describe this region, in general it can be any convex shape with curved edges. The support polygon is invariant under translations and rotations about the gravity vector (that is, if the contact points and friction cones were translated and rotated about the gravity vector, the support polygon is simply translated and rotated).
If the friction cones are convex cones (as they typically are), the support polygon is always a convex region. It is also invariant to the mass of the object (provided it is nonzero).
If all contacts lie on a (not necessarily horizontal) plane, and the friction cones at all contacts contain the negative gravity vector $-G$, then the support polygon is the convex hull of the contact points projected onto the horizontal plane.
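Under the conditions of the last property, computing the support polygon and testing the necessary condition for equilibrium reduces to a convex hull of the projected contacts followed by a point-in-polygon test for the projected center of mass. A minimal sketch with hypothetical contact points and center of mass, using scipy's ConvexHull:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical contact points (x, y, z) and center of mass; gravity acts along -z
contacts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.1], [1.0, 1.0, 0.0], [0.0, 1.0, 0.1]])
com = np.array([0.4, 0.5, 0.8])

footprint = contacts[:, :2]          # project the contacts onto the horizontal plane
hull = ConvexHull(footprint)         # the support polygon is their convex hull

# Necessary condition for static equilibrium: the projected CoM satisfies every
# facet inequality n . p + d <= 0 of the hull.
p = com[:2]
inside = bool(np.all(hull.equations[:, :-1] @ p + hull.equations[:, -1] <= 1e-9))
print(inside)                        # True for this center of mass
```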
References
1. McGhee, R. B.; Frank, A. A. (1968-08-01). "On the stability properties of quadruped creeping gaits". Mathematical Biosciences. 3: 331–351. doi:10.1016/0025-5564(68)90090-4. ISSN 0025-5564.
2. Bretl, T.; Lall, S. (August 2008). "Testing Static Equilibrium for Legged Robots". IEEE Transactions on Robotics. 24 (4): 794–807. doi:10.1109/TRO.2008.2001360. ISSN 1552-3098. S2CID 15864841.
| Wikipedia |
Supporting functional
In convex analysis and mathematical optimization, the supporting functional is a generalization of the supporting hyperplane of a set.
Mathematical definition
Let X be a locally convex topological space, and $C\subset X$ be a convex set, then the continuous linear functional $\phi :X\to \mathbb {R} $ is a supporting functional of C at the point $x_{0}$ if $\phi \not =0$ and $\phi (x)\leq \phi (x_{0})$ for every $x\in C$.[1]
Relation to support function
If $h_{C}:X^{*}\to \mathbb {R} $ (where $X^{*}$ is the dual space of $X$) is a support function of the set C, then if $h_{C}\left(x^{*}\right)=x^{*}\left(x_{0}\right)$, it follows that $h_{C}$ defines a supporting functional $\phi :X\to \mathbb {R} $ of C at the point $x_{0}$ such that $\phi (x)=x^{*}(x)$ for any $x\in X$.
Relation to supporting hyperplane
If $\phi $ is a supporting functional of the convex set C at the point $x_{0}\in C$ such that
$\phi \left(x_{0}\right)=\sigma =\sup _{x\in C}\phi (x)>\inf _{x\in C}\phi (x)$
then $H=\phi ^{-1}(\sigma )$ defines a supporting hyperplane to C at $x_{0}$.[2]
References
1. Pallaschke, Diethard; Rolewicz, Stefan (1997). Foundations of mathematical optimization: convex analysis without linearity. Springer. p. 323. ISBN 978-0-7923-4424-7.
2. Borwein, Jonathan; Lewis, Adrian (2006). Convex Analysis and Nonlinear Optimization: Theory and Examples (2 ed.). Springer. p. 240. ISBN 978-0-387-29570-1.
| Wikipedia |
Supporting hyperplane
In geometry, a supporting hyperplane of a set $S$ in Euclidean space $\mathbb {R} ^{n}$ is a hyperplane that has both of the following two properties:[1]
• $S$ is entirely contained in one of the two closed half-spaces bounded by the hyperplane,
• $S$ has at least one boundary-point on the hyperplane.
Here, a closed half-space is the half-space that includes the points within the hyperplane.
Supporting hyperplane theorem
This theorem states that if $S$ is a convex set in the topological vector space $X=\mathbb {R} ^{n},$ and $x_{0}$ is a point on the boundary of $S,$ then there exists a supporting hyperplane containing $x_{0}.$ If $x^{*}\in X^{*}\backslash \{0\}$ ($X^{*}$ is the dual space of $X$, $x^{*}$ is a nonzero linear functional) such that $x^{*}\left(x_{0}\right)\geq x^{*}(x)$ for all $x\in S$, then
$H=\{x\in X:x^{*}(x)=x^{*}\left(x_{0}\right)\}$
defines a supporting hyperplane.[2]
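As a concrete check of this statement, take S to be the closed unit disk and $x_{0}$ a point of its boundary; the functional $x^{*}(x)=x_{0}\cdot x$ then satisfies $x^{*}(x_{0})\geq x^{*}(x)$ on S, so $H=\{x:x_{0}\cdot x=1\}$ is a supporting hyperplane (a supporting line in the plane) at $x_{0}$. A minimal numerical sketch (the sample size and the chosen $x_{0}$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = np.array([0.6, 0.8])                    # boundary point of the unit disk, |x0| = 1

# Random sample of points of the disk S = {x : |x| <= 1}
pts = rng.uniform(-1.0, 1.0, size=(20000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]

# x*(x) = x0 . x attains its maximum over S at x0 (with value 1), so the line
# H = {x : x0 . x = 1} supports S at x0.
print(np.all(pts @ x0 <= x0 @ x0 + 1e-12))   # True
```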
Conversely, if $S$ is a closed set with nonempty interior such that every point on the boundary has a supporting hyperplane, then $S$ is a convex set, and is the intersection of all its supporting closed half-spaces.[2]
The hyperplane in the theorem may not be unique, as noticed in the second picture on the right. If the closed set $S$ is not convex, the statement of the theorem is not true at all points on the boundary of $S,$ as illustrated in the third picture on the right.
The supporting hyperplanes of convex sets are also called tac-planes or tac-hyperplanes.[3]
The forward direction can be proved as a special case of the separating hyperplane theorem (see the page for the proof). For the converse direction,
Proof
Define $T$ to be the intersection of all its supporting closed half-spaces. Clearly $S\subset T$. Now let $y\not \in S$; we show that $y\not \in T$.
Let $x\in \mathrm {int} (S)$, and consider the line segment $[x,y]$. Let $t$ be the largest number such that $[x,t(y-x)+x]$ is contained in $S$. Then $t\in (0,1)$.
Let $b=t(y-x)+x$, then $b\in \partial S$. Draw a supporting hyperplane across $b$. Let it be represented as a nonzero linear functional $f:\mathbb {R} ^{n}\to \mathbb {R} $ such that $\forall a\in S,f(a)\geq f(b)$. Then since $x\in \mathrm {int} (S)$, we have $f(x)>f(b)$. Thus by ${\frac {f(y)-f(b)}{1-t}}={\frac {f(b)-f(x)}{t}}$, we have $f(y)<f(b)$, so $y\not \in T$.
See also
• Support function
• Supporting line (supporting hyperplanes in $\mathbb {R} ^{2}$)
Notes
1. Luenberger, David G. (1969). Optimization by Vector Space Methods. New York: John Wiley & Sons. p. 133. ISBN 978-0-471-18117-0.
2. Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (pdf). Cambridge University Press. pp. 50–51. ISBN 978-0-521-83378-3. Retrieved October 15, 2011.
3. Cassels, John W. S. (1997), An Introduction to the Geometry of Numbers, Springer Classics in Mathematics (reprint of 1959[3] and 1971 Springer-Verlag ed.), Springer-Verlag.
References & further reading
• Ostaszewski, Adam (1990). Advanced mathematical methods. Cambridge; New York: Cambridge University Press. p. 129. ISBN 0-521-28964-5.
• Giaquinta, Mariano; Hildebrandt, Stefan (1996). Calculus of variations. Berlin; New York: Springer. p. 57. ISBN 3-540-50625-X.
• Goh, C. J.; Yang, X.Q. (2002). Duality in optimization and variational inequalities. London; New York: Taylor & Francis. p. 13. ISBN 0-415-27479-6.
• Soltan, V. (2021). Support and separation properties of convex sets in finite dimension. Extracta Math. Vol. 36, no. 2, 241-278.
| Wikipedia |
Superconvergence
In numerical analysis, a superconvergent or supraconvergent method is one which converges faster than generally expected (superconvergence or supraconvergence). For example, in the Finite Element Method approximation to Poisson's equation in two dimensions, using piecewise linear elements, the average error in the gradient is first order. However, under certain conditions it is possible to recover the gradient at certain locations within each element to second order.
References
• Barbeiro, S.; Ferreira, J. A.; Grigorieff, R. D. (2005), "Supraconvergence of a finite difference scheme for solutions in Hs(0, L)", IMA J Numer Anal, 25 (4): 797–811, CiteSeerX 10.1.1.108.7189, doi:10.1093/imanum/dri018
• Ferreira, J. A.; Grigorieff, R. D. (1998), "On the supraconvergence of elliptic finite difference methods" (PDF), Applied Numerical Mathematics, 28: 275–292, doi:10.1016/S0168-9274(98)00048-8, hdl:10316/4663
• Levine, N. D. (1985), "Superconvergent Recovery of the Gradient from Piecewise Linear Finite-element Approximations" (PDF), IMA J Numer Anal, 5 (4): 407–427, doi:10.1093/imanum/5.4.407
| Wikipedia |
Stochastic resonance
Stochastic resonance (SR) is a phenomenon in which a signal that is normally too weak to be detected by a sensor can be boosted by adding white noise, which contains a wide spectrum of frequencies, to the signal. The frequencies in the white noise corresponding to the original signal's frequencies will resonate with each other, amplifying the original signal while not amplifying the rest of the white noise, thereby increasing the signal-to-noise ratio and making the original signal more prominent. Further, the added white noise can be enough to be detectable by the sensor, which can then filter it out to effectively detect the original, previously undetectable signal.
This phenomenon of boosting undetectable signals by resonating with added white noise extends to many other systems – whether electromagnetic, physical or biological – and is an active area of research.[1]
Stochastic resonance was first proposed by the Italian physicists Roberto Benzi, Alfonso Sutera and Angelo Vulpiani in 1981,[2] and the first application they proposed (together with Giorgio Parisi) was in the context of climate dynamics.[3][4]
Technical description
Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity. It often occurs in bistable systems or in systems with a sensory threshold and when the input signal to the system is "sub-threshold." For lower noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity contains a peak.
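The following Python sketch (NumPy assumed; the threshold, amplitude, noise levels and drive frequency are arbitrary illustrative values, not taken from the references) simulates such a threshold device driven by a subthreshold sinusoid plus Gaussian noise, and measures the output power at the drive frequency, which peaks at an intermediate noise intensity.

import numpy as np

# Illustrative sketch (arbitrary parameter values): a subthreshold sine passed
# through a hard threshold, with the output power at the drive frequency
# measured as a function of the added Gaussian noise strength.
rng = np.random.default_rng(0)
n, f0, amp, threshold = 20000, 0.01, 0.5, 1.0   # samples, drive frequency (cycles/sample), amplitude, threshold
t = np.arange(n)
signal = amp * np.sin(2 * np.pi * f0 * t)        # subthreshold: amp < threshold

def output_power_at_drive(sigma, trials=20):
    # Average squared Fourier amplitude of the thresholded output at f0.
    total = 0.0
    for _ in range(trials):
        noisy = signal + rng.normal(0.0, sigma, n)
        out = (noisy > threshold).astype(float)  # 1 whenever the threshold is crossed
        coeff = np.abs(np.sum(out * np.exp(-2j * np.pi * f0 * t))) / n
        total += coeff ** 2
    return total / trials

for sigma in [0.05, 0.2, 0.4, 0.8, 1.6, 3.2]:
    print(f"noise sigma = {sigma:4.2f}  power at drive frequency = {output_power_at_drive(sigma):.5f}")
# The power is small for very weak and very strong noise and peaks at an
# intermediate noise level, which is the signature of stochastic resonance.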
Strictly speaking, stochastic resonance occurs in bistable systems, when a small periodic (sinusoidal) force is applied together with a large wide-band stochastic force (noise). The system response is driven by the combination of the two forces, which compete and cooperate to make the system switch between the two stable states. The degree of order is related to the amount of periodicity in the system response. When the periodic force is too small to make the system switch on its own, the presence of a non-negligible noise is required for switching to happen. When the noise is small, very few switches occur, mainly at random with no significant periodicity in the system response. When the noise is very strong, a large number of switches occur for each period of the sinusoid, and the system response does not show remarkable periodicity. Between these two conditions, there exists an optimal value of the noise that cooperates with the periodic forcing in order to make almost exactly one switch per period (a maximum in the signal-to-noise ratio).
Such a favorable condition is quantitatively determined by the matching of two timescales: the period of the sinusoid (the deterministic time scale) and the Kramers rate[5] (i.e., the average switch rate induced by the sole noise: the inverse of the stochastic time scale[6][7]). Thus the term "stochastic resonance."
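As a hedged numerical illustration, the sketch below applies one common form of this matching condition (mean Kramers switching time equal to half the forcing period) to the standard overdamped quartic double well $V(x)=-x^{2}/2+x^{4}/4$; the well, the rate prefactor and the forcing period are assumptions chosen for illustration, not values from the references.

import numpy as np

# Hedged illustration (arbitrary parameters): time-scale matching for the
# overdamped quartic double well V(x) = -x^2/2 + x^4/4, barrier height dV = 1/4.
# Standard overdamped Kramers rate: r_K(D) = sqrt(V''(x_min)*|V''(x_max)|)/(2*pi) * exp(-dV/D).
dV = 0.25
prefactor = np.sqrt(2.0 * 1.0) / (2.0 * np.pi)   # V''(+-1) = 2 and |V''(0)| = 1

def kramers_rate(D):
    return prefactor * np.exp(-dV / D)

T_drive = 1000.0              # period of the weak periodic forcing (arbitrary)
target_rate = 2.0 / T_drive   # matching convention used here: 1/r_K = T_drive/2

D_opt = dV / np.log(prefactor / target_rate)      # solve r_K(D) = target_rate for D
print(f"optimal noise intensity D = {D_opt:.4f}")
print(f"check: 1/r_K(D_opt) = {1.0 / kramers_rate(D_opt):.1f}  vs  T_drive/2 = {T_drive / 2:.1f}")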
Stochastic resonance was discovered and proposed for the first time in 1981 to explain the periodic recurrence of ice ages.[8] Since then, the same principle has been applied in a wide variety of systems. Nowadays stochastic resonance is commonly invoked when noise and nonlinearity concur to determine an increase of order in the system response.
Suprathreshold
Suprathreshold stochastic resonance is a particular form of stochastic resonance, in which random fluctuations, or noise, provide a signal processing benefit in a nonlinear system. Unlike most of the nonlinear systems in which stochastic resonance occurs, suprathreshold stochastic resonance occurs when the strength of the fluctuations is small relative to that of an input signal, or even small for random noise. It is not restricted to a subthreshold signal, hence the qualifier.
Neuroscience, psychology and biology
Stochastic resonance has been observed in the neural tissue of the sensory systems of several organisms.[9] Computationally, neurons exhibit SR because of non-linearities in their processing. SR has yet to be fully explained in biological systems, but neural synchrony in the brain (specifically in the gamma wave frequency[10]) has been suggested as a possible neural mechanism for SR by researchers who have investigated the perception of "subconscious" visual sensation.[11] Single neurons in vitro including cerebellar Purkinje cells[12] and squid giant axon[13] could also demonstrate the inverse stochastic resonance, when spiking is inhibited by synaptic noise of a particular variance.
Medicine
SR-based techniques have been used to create a novel class of medical devices for enhancing sensory and motor functions such as vibrating insoles especially for the elderly, or patients with diabetic neuropathy or stroke.[14]
See the Review of Modern Physics[15] article for a comprehensive overview of stochastic resonance.
Stochastic resonance has found noteworthy application in the field of image processing.
Signal analysis
A related phenomenon is dithering applied to analog signals before analog-to-digital conversion.[16] Stochastic resonance can be used to measure transmittance amplitudes below an instrument's detection limit. If Gaussian noise is added to a subthreshold (i.e., immeasurable) signal, then it can be brought into a detectable region. After detection, the noise is removed. A fourfold improvement in the detection limit can be obtained.[17]
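The following Python sketch (NumPy and SciPy assumed; the threshold, level and noise values are arbitrary) illustrates the idea for a constant subthreshold level: added Gaussian noise makes the threshold-crossing rate depend on the level, and inverting the Gaussian crossing probability recovers an estimate of the level.

import numpy as np
from scipy.stats import norm

# Illustration with arbitrary values: recover a constant level lying below a
# detector threshold by adding Gaussian noise, counting threshold crossings,
# and inverting the Gaussian crossing probability.
rng = np.random.default_rng(1)
threshold = 1.0        # the detector only reports whether its input exceeds this
true_level = 0.8       # subthreshold: cannot be measured directly
sigma = 0.5            # standard deviation of the added Gaussian noise
trials = 200_000

crossings = (true_level + rng.normal(0.0, sigma, trials)) > threshold
p_hat = crossings.mean()                     # estimated crossing probability
# P(level + noise > threshold) = Phi((level - threshold) / sigma), so invert:
estimated_level = threshold + sigma * norm.ppf(p_hat)
print(f"crossing fraction = {p_hat:.4f}, estimated level = {estimated_level:.3f}")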
See also
• Mutual coherence (linear algebra)
• Signal detection theory
• Stochastic resonance (sensory neurobiology)
References
1. Moss F, Ward LM, Sannita WG (February 2004). "Stochastic resonance and sensory information processing: a tutorial and review of application". Clinical Neurophysiology. 115 (2): 267–81. doi:10.1016/j.clinph.2003.09.014. PMID 14744566. S2CID 4141064.
2. Benzi, R; Sutera, A; Vulpiani, A (1 November 1981). "The mechanism of stochastic resonance". Journal of Physics A: Mathematical and General. 14 (11): L453–L457. Bibcode:1981JPhA...14L.453B. doi:10.1088/0305-4470/14/11/006. ISSN 0305-4470. S2CID 123005407.
3. BENZI, ROBERTO; PARISI, GIORGIO; SUTERA, ALFONSO; VULPIANI, ANGELO (February 1982). "Stochastic resonance in climatic change". Tellus. 34 (1): 10–16. doi:10.1111/j.2153-3490.1982.tb01787.x. ISSN 0040-2826.
4. Benzi, Roberto; Parisi, Giorgio; Sutera, Alfonso; Vulpiani, Angelo (June 1983). "A Theory of Stochastic Resonance in Climatic Change". SIAM Journal on Applied Mathematics. 43 (3): 565–578. doi:10.1137/0143037. ISSN 0036-1399.
5. Kramers, H.A.: Brownian motion in a field of force and the diffusion model of chemical reactions. Physica (Utrecht) 7, 284–304 (1940)}
6. Peter Hänggi; Peter Talkner; Michal Borkovec (1990). "Reaction-rate theory: fifty years after Kramers". Reviews of Modern Physics. 62 (2): 251–341. Bibcode:1990RvMP...62..251H. doi:10.1103/RevModPhys.62.251. S2CID 122573991.
7. Hannes Risken The Fokker-Planck Equation, 2nd edition, Springer, 1989
8. Benzi R, Parisi G, Sutera A, Vulpiani A (1982). "Stochastic resonance in climatic change". Tellus. 34 (1): 10–6. Bibcode:1982Tell...34...10B. doi:10.1111/j.2153-3490.1982.tb01787.x.
9. Kosko, Bart (2006). Noise. New York, N.Y: Viking. ISBN 978-0-670-03495-6.
10. Ward LM, Doesburg SM, Kitajo K, MacLean SE, Roggeveen AB (December 2006). "Neural synchrony in stochastic resonance, attention, and consciousness". Can J Exp Psychol. 60 (4): 319–26. doi:10.1037/cjep2006029. PMID 17285879.
11. Melloni L, Molina C, Pena M, Torres D, Singer W, Rodriguez E (March 2007). "Synchronization of neural activity across cortical areas correlates with conscious perception". J. Neurosci. 27 (11): 2858–65. doi:10.1523/JNEUROSCI.4623-06.2007. PMC 6672558. PMID 17360907. Final proof of role of neural coherence in consciousness?
12. Buchin, Anatoly; Rieubland, Sarah; Häusser, Michael; Gutkin, Boris S.; Roth, Arnd (19 August 2016). "Inverse Stochastic Resonance in Cerebellar Purkinje Cells". PLOS Computational Biology. 12 (8): e1005000. Bibcode:2016PLSCB..12E5000B. doi:10.1371/journal.pcbi.1005000. PMC 4991839. PMID 27541958.
13. Paydarfar, D.; Forger, D. B.; Clay, J. R. (9 August 2006). "Noisy Inputs and the Induction of On-Off Switching Behavior in a Neuronal Pacemaker". Journal of Neurophysiology. 96 (6): 3338–3348. doi:10.1152/jn.00486.2006. PMID 16956993. S2CID 10035457.
14. E. Sejdić, L. A. Lipsitz, "Necessity of noise in physiology and medicine," Computer Methods and Programs in Biomedicine, vol. 111, no. 2, pp. 459–470, Aug. 2013.
15. Gammaitoni L, Hänggi P, Jung P, Marchesoni F (1998). "Stochastic resonance" (PDF). Reviews of Modern Physics. 70 (1): 223–87. Bibcode:1998RvMP...70..223G. doi:10.1103/RevModPhys.70.223.
16. Gammaitoni L (1995). "Stochastic resonance and the dithering effect in threshold physical systems" (PDF). Phys. Rev. E. 52 (5): 4691–8. Bibcode:1995PhRvE..52.4691G. doi:10.1103/PhysRevE.52.4691. PMID 9963964.
17. Palonpon A, Amistoso J, Holdsworth J, Garcia W, Saloma C (1998). "Measurement of weak transmittances by stochastic resonance". Optics Letters. 23 (18): 1480–2. Bibcode:1998OptL...23.1480P. doi:10.1364/OL.23.001480. PMID 18091823.
Bibliography
• McDonnell MD, and Abbott D (2009). "What is Stochastic Resonance? Definitions, misconceptions, debates, and its relevance to biology". PLOS Computational Biology. 5 (5): e1000348. Bibcode:2009PLSCB...5E0348M. doi:10.1371/journal.pcbi.1000348. PMC 2660436. PMID 19562010.
• Gammaitoni L, Hänggi P, Jung P, Marchesoni F (2009). "Stochastic Resonance: A remarkable idea that changed our perception of noise" (PDF). European Physical Journal B. 69 (1): 1–3. Bibcode:2009EPJB...69....1G. doi:10.1140/epjb/e2009-00163-x. S2CID 123073615.{{cite journal}}: CS1 maint: multiple names: authors list (link)
• Hänggi P (March 2002). "Stochastic resonance in biology. How noise can enhance detection of weak signals and help improve biological information processing" (PDF). ChemPhysChem. 3 (3): 285–90. doi:10.1002/1439-7641(20020315)3:3<285::AID-CPHC285>3.0.CO;2-A. PMID 12503175.
• F. Chapeau-Blondeau; D. Rousseau (2009). "Raising the noise to improve performance in optimal processing". Journal of Statistical Mechanics: Theory and Experiment. 2009 (1): P01003. Bibcode:2009JSMTE..01..003C. doi:10.1088/1742-5468/2009/01/P01003. S2CID 7778013.
• J.C. Comte; et al. (2003). "Stochastic resonance: another way to retrieve subthreshold digital data". Physics Letters A. 309 (1): 39–43. Bibcode:2003PhLA..309...39C. doi:10.1016/S0375-9601(03)00166-X.
• Moss F, Ward LM, Sannita WG (February 2004). "Stochastic resonance and sensory information processing: a tutorial and review of application". Clin Neurophysiol. 115 (2): 267–81. doi:10.1016/j.clinph.2003.09.014. PMID 14744566. S2CID 4141064.
• Wiesenfeld K, Moss F (January 1995). "Stochastic resonance and the benefits of noise: from ice ages to crayfish and SQUIDs". Nature. 373 (6509): 33–6. Bibcode:1995Natur.373...33W. doi:10.1038/373033a0. PMID 7800036. S2CID 4287929.
• Bulsara A, Gammaitoni L (1996). "Tuning in to noise" (PDF). Physics Today. 49 (3): 39–45. Bibcode:1996PhT....49c..39B. doi:10.1063/1.881491.
• F. Chapeau-Blondeau; D. Rousseau (2002). "Noise improvements in stochastic resonance: From signal amplification to optimal detection". Fluctuation and Noise Letters. 2 (3): L221–L233. doi:10.1142/S0219477502000798. S2CID 47951856.
• Priplata AA, Patritti BL, Niemi JB, et al. (January 2006). "Noise-enhanced balance control in patients with diabetes and patients with stroke". Ann. Neurol. 59 (1): 4–12. doi:10.1002/ana.20670. PMID 16287079. S2CID 3140340.
• Peter Hänggi; Peter Talkner; Michal Borkovec (1990). "Reaction-rate theory: fifty years after Kramers". Reviews of Modern Physics. 62 (2): 251–341. Bibcode:1990RvMP...62..251H. doi:10.1103/RevModPhys.62.251. S2CID 122573991.
• Hannes Risken The Fokker-Planck Equation, 2nd edition, Springer, 1989
Bibliography for suprathreshold stochastic resonance
• N. G. Stocks, "Suprathreshold stochastic resonance in multilevel threshold systems," Physical Review Letters, 84, pp. 2310–2313, 2000.
• M. D. McDonnell, D. Abbott, and C. E. M. Pearce, "An analysis of noise enhanced information transmission in an array of comparators," Microelectronics Journal 33, pp. 1079–1089, 2002.
• M. D. McDonnell and N. G. Stocks, "Suprathreshold stochastic resonance," Scholarpedia 4, Article No. 6508, 2009.
• M. D. McDonnell, N. G. Stocks, C. E. M. Pearce, D. Abbott, Stochastic Resonance: From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization, Cambridge University Press, 2008.
• F. Chapeau-Blondeau; D. Rousseau (2004). "Enhancement by noise in parallel arrays of sensors with power-law characteristics". Physical Review E. 70 (6): 060101. Bibcode:2004PhRvE..70f0101C. doi:10.1103/PhysRevE.70.060101. PMID 15697330. S2CID 30684643.
External links
• "Stochastic resonance". Scholarpedia.
• Scholar Google profile on stochastic resonance
• Harry JD, Niemi JB, Priplata AA, Collins JJ (April 2005). "Balancing Act". IEEE Spectrum. 42 (4): 36–41. doi:10.1109/MSPEC.2005.1413729. S2CID 18576276.
• Newsweek Being messy, both at home and in foreign policy, may have its own advantages Retrieved 3 Jan 2011
• Stochastic Resonance Conference 1998–2008 ten years of continuous growth. 17-21 Aug. 2008, Perugia (Italy)
• Stochastic Resonance - From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization (book)
• Review of Suprathreshold Stochastic Resonance
• A.S. Samardak, A. Nogaret, N. B. Janson, A. G. Balanov, I. Farrer and D. A. Ritchie. "Noise-Controlled Signal Transmission in a Multithread Semiconductor Neuron" // Phys. Rev. Lett. 102 (2009) 226802,
| Wikipedia |
Suren Arakelov
Suren Yurievich Arakelov (Russian: Суре́н Ю́рьевич Араке́лов, Armenian: Սուրեն Յուրիի Առաքելով) (born October 16, 1947 in Kharkiv) is a Soviet mathematician of Armenian descent known for developing Arakelov theory.
Biography
From 1965 onwards Arakelov attended the Mathematics department of Moscow State University, where he graduated in 1971.
In 1974, Arakelov received his candidate of sciences degree from the Steklov Institute in Moscow, under the supervision of Igor Shafarevich. He then worked as a junior researcher at the Gubkin Russian State University of Oil and Gas in Moscow until 1979. He protested against the arrest of Alexander Solzhenitsyn, and was himself arrested and committed to a mental hospital.[1] Afterwards he stopped his research activity to pursue other life goals. As of 2014 he lives in Moscow with his wife and children.
Arakelov theory
Main article: Arakelov theory
Arakelov theory was exploited by Paul Vojta to give a new proof of the Mordell conjecture and by Gerd Faltings in his proof of Lang's generalization of the Mordell conjecture.
Publications
• S. J. Arakelov (1971). "Families of algebraic curves with fixed degeneracies". Mathematics of the USSR-Izvestiya. 5 (6): 1277–1302. doi:10.1070/IM1971v005n06ABEH001235.
• S. J. Arakelov (1974). "Intersection theory of divisors on an arithmetic surface". Mathematics of the USSR-Izvestiya. 8 (6): 1167–1180. doi:10.1070/IM1974v008n06ABEH002141.
• Arakelov, S. J. (1975). "Theory of intersections on an arithmetic surface". Proc. Internat. Congr. Mathematicians. Vancouver: Amer. Math. Soc. 1: 405–408.
References
1. "What happened to Suren Arakelov?". Mathoverflow.
External links
• Serge Lang (1988). Introduction to Arakelov Theory. Springer. ISBN 0387967931.
| Wikipedia |
Surface
A surface, as the term is most generally used, is the outermost or uppermost layer of a physical object or space.[1][2] It is the portion or region of the object that can first be perceived by an observer using the senses of sight and touch, and is the portion with which other materials first interact. The surface of an object is more than "a mere geometric solid", but is "filled with, spread over by, or suffused with perceivable qualities such as color and warmth".[3]
The concept of surface has been abstracted and formalized in mathematics, specifically in geometry. Depending on the properties emphasized, there are several non-equivalent formalizations, all called surfaces, sometimes with a qualifier such as algebraic surface, smooth surface or fractal surface.
The concept of surface and its mathematical abstraction are both widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface. The concept also raises certain philosophical questions—for example, how thick is the layer of atoms or molecules that can be considered part of the surface of an object (i.e., where does the "surface" end and the "interior" begin),[2][4] and do objects really have a surface at all if, at the subatomic level, they never actually come in contact with other objects.[5]
Perception of surfaces
The surface of an object is the part of the object that is primarily perceived. Humans equate seeing the surface of an object with seeing an object. For example, in looking at an automobile, it is normally not possible to see the engine, electronics, and other internal structures, but the object is still recognized as an automobile because the surface identifies it as one.[6] Conceptually, the "surface" of an object can be defined as the topmost layer of atoms.[7] Many objects and organisms have a surface that is in some way distinct from their interior. For example, the peel of an apple has very different qualities from the interior of the apple,[8] and the exterior surface of a radio may have very different components from the interior. Peeling the apple constitutes removal of the surface, ultimately leaving a different surface with a different texture and appearance, identifiable as a peeled apple. Removing the exterior surface of an electronic device may render its purpose unrecognizable. By contrast, removing the outermost layer of a rock or the topmost layer of liquid contained in a glass would leave a substance or material with the same composition, only slightly reduced in volume.[9]
In mathematics
This section is an excerpt from Surface (mathematics).
In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line.
There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not.
A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
In the physical sciences
See also: Euclidean planes in three-dimensional space § Occurrence in nature
Many surfaces considered in physics and chemistry (physical sciences in general) are interfaces. For example, a surface may be the idealized limit between two fluids, liquid and gas (the surface of the sea in air) or the idealized boundary of a solid (the surface of a ball). In fluid dynamics, the shape of a free surface may be defined by surface tension. However, they are surfaces only at macroscopic scale. At microscopic scale, they may have some thickness. At atomic scale, they do not look like a surface at all, because of holes formed by spaces between atoms or molecules.
Other surfaces considered in physics are wavefronts. One of these, discovered by Fresnel, is called wave surface by mathematicians.
The surface of the reflector of a telescope is a paraboloid of revolution.
Other occurrences:
• Soap bubbles, which are physical examples of minimal surfaces
• Equipotential surface in, e.g., gravity fields
• Earth's surface
• Surface science, the study of physical and chemical phenomena that occur at the interface of two phases
• Surface metrology
• Surface wave, a mechanical wave
• Atmospheric boundaries (tropopause, edge of space, plasmapause, etc.)
In computer graphics
One of the main challenges in computer graphics is creating realistic simulations of surfaces. In technical applications of 3D computer graphics (CAx) such as computer-aided design and computer-aided manufacturing, surfaces are one way of representing objects. The other ways are wireframe (lines and curves) and solids. Point clouds are also sometimes used as temporary ways to represent an object, with the goal of using the points to create one or more of the three permanent representations.
One technique used for enhancing surface realism in computer graphics is the use of physically-based rendering (PBR) algorithms which simulate the interaction of light with surfaces based on their physical properties, such as reflectance, roughness, and transparency. By incorporating mathematical models and algorithms, PBR can generate highly realistic renderings that resemble the behavior of real-world materials. PBR has found practical applications beyond entertainment, extending its impact to architectural design, product prototyping, and scientific simulations.
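As a minimal hedged sketch of the idea (a single Lambertian diffuse term only; actual physically-based renderers combine several reflectance terms such as specular and Fresnel components), the brightness of a surface point can be computed from its albedo and the angle between the surface normal and the light direction:

import numpy as np

# Minimal diffuse-only shading sketch (Lambert's cosine law); physically based
# renderers add specular, Fresnel and roughness terms on top of such a model.
def lambert_shade(normal, light_dir, albedo, light_intensity=1.0):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_intensity * max(0.0, float(np.dot(n, l)))

# Example: a light direction 45 degrees away from the surface normal.
print(lambert_shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]), albedo=0.8))
# about 0.8 * cos(45 degrees) = 0.566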
References
1. Sparke, Penny & Fisher, Fiona (2016). The Routledge Companion to Design Studies. New York: Routledge. p. 124. ISBN 9781317203285. OCLC 952155029.
2. Sorensen, Roy (2011). Seeing Dark Things: The Philosophy of Shadows. Oxford: Oxford University Press. p. 45. ISBN 9780199797134. OCLC 955163137.
3. Butchvarov, Panayot (1970). The Concept of Knowledge. Evanston: Northwestern University Press. p. 249. ISBN 9780810103191. OCLC 925168650.
4. Stroll, Avrum (1988). Surfaces. Minneapolis: University of Minnesota Press. p. 205. ISBN 9780816616947. OCLC 925290683.
5. Plesha, Michael; Gray, Gary & Costanzo, Francesco (2012). Engineering Mechanics: Statics and Dynamics (2nd ed.). New York: McGraw-Hill Higher Education. p. 8. ISBN 9780073380315. OCLC 801035627.
6. Butchvarov (1970), p. 253.
7. Stroll (1988), p. 54.
8. Stroll (1988), p. 81.
9. Gibson, James J. (1950). "The Perception of Visual Surfaces". The American Journal of Psychology. 63 (3): 367–384. doi:10.2307/1418003. ISSN 0002-9556.
| Wikipedia |
Algebraic surface
In mathematics, an algebraic surface is an algebraic variety of dimension two. In the case of geometry over the field of complex numbers, an algebraic surface has complex dimension two (as a complex manifold, when it is non-singular) and so of dimension four as a smooth manifold.
The theory of algebraic surfaces is much more complicated than that of algebraic curves (including the compact Riemann surfaces, which are genuine surfaces of (real) dimension two). Many results were obtained, however, in the Italian school of algebraic geometry, and are up to 100 years old.
Classification by the Kodaira dimension
Main article: Enriques–Kodaira classification
In the case of dimension one, varieties are classified by only the topological genus, but, in dimension two, one needs to distinguish the arithmetic genus $p_{a}$ and the geometric genus $p_{g}$ because the topological genus alone does not distinguish surfaces up to birational equivalence. Irregularity is then introduced for the classification of varieties. A summary of the results follows (for details on each kind of surface, see the corresponding article):
Examples of algebraic surfaces include (κ is the Kodaira dimension):
• κ = −∞: the projective plane, quadrics in P3, cubic surfaces, Veronese surface, del Pezzo surfaces, ruled surfaces
• κ = 0 : K3 surfaces, abelian surfaces, Enriques surfaces, hyperelliptic surfaces
• κ = 1: elliptic surfaces
• κ = 2: surfaces of general type.
For more examples see the list of algebraic surfaces.
The first five examples are in fact birationally equivalent. That is, for example, a cubic surface has a function field isomorphic to that of the projective plane, being the rational functions in two indeterminates. The Cartesian product of two curves also provides examples.
Birational geometry of surfaces
The birational geometry of algebraic surfaces is rich, because of blowing up (also known as a monoidal transformation), under which a point is replaced by the curve of all limiting tangent directions coming into it (a projective line). Certain curves may also be blown down, but there is a restriction (self-intersection number must be −1).
Castelnuovo's Theorem
One of the fundamental theorems for the birational geometry of surfaces is Castelnuovo's theorem. This states that any birational map between algebraic surfaces is given by a finite sequence of blowups and blowdowns.
Properties
The Nakai criterion says that:
A divisor D on a surface S is ample if and only if D2 > 0 and D•C > 0 for every irreducible curve C on S.
An ample divisor has the useful property that it is the pullback of a hyperplane bundle of some projective space, whose properties are very well known. Let ${\mathcal {D}}(S)$ be the abelian group consisting of all the divisors on S. Then, due to the intersection theorem,
${\mathcal {D}}(S)\times {\mathcal {D}}(S)\rightarrow \mathbb {Z} :(X,Y)\mapsto X\cdot Y$
is viewed as a quadratic form. Let
${\mathcal {D}}_{0}(S):=\{D\in {\mathcal {D}}(S)|D\cdot X=0,{\text{for all }}X\in {\mathcal {D}}(S)\}$
then ${\mathcal {D}}(S)/{\mathcal {D}}_{0}(S)=:Num(S)$ is the group of numerical equivalence classes of divisors on S, and
$Num(S)\times Num(S)\rightarrow \mathbb {Z} :({\bar {D}},{\bar {E}})\mapsto D\cdot E$
also becomes a quadratic form on $Num(S)$, where ${\bar {D}}$ is the image of a divisor D on S. (In what follows the image ${\bar {D}}$ is abbreviated as D.)
For an ample line bundle H on S, the definition
$\{H\}^{\perp }:=\{D\in Num(S)|D\cdot H=0\}.$
is used in the surface version of the Hodge index theorem:
for every nonzero $D\in \{H\}^{\perp }$, $D\cdot D<0$, i.e. the restriction of the intersection form to $\{H\}^{\perp }$ is a negative definite quadratic form.
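For example (a standard illustration), let $S=\mathbb {P} ^{1}\times \mathbb {P} ^{1}$. Then $Num(S)\cong \mathbb {Z} ^{2}$, generated by the classes $F_{1},F_{2}$ of the two rulings, with $F_{1}\cdot F_{1}=F_{2}\cdot F_{2}=0$ and $F_{1}\cdot F_{2}=1$. The divisor $H=F_{1}+F_{2}$ is ample and satisfies $H\cdot H=2>0$, while $\{H\}^{\perp }$ is generated by $D=F_{1}-F_{2}$, which satisfies $D\cdot D=-2<0$, in accordance with the theorem.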
This theorem is proven using the Nakai criterion and the Riemann-Roch theorem for surfaces. The Hodge index theorem is used in Deligne's proof of the Weil conjecture.
Basic results on algebraic surfaces include the Hodge index theorem, and the division into five groups of birational equivalence classes called the classification of algebraic surfaces. The general type class, of Kodaira dimension 2, is very large (for example, every non-singular surface of degree 5 or larger in P3 lies in it).
There are essentially three Hodge number invariants of a surface. Of those, h1,0 was classically called the irregularity and denoted by q; and h2,0 was called the geometric genus pg. The third, h1,1, is not a birational invariant, because blowing up can add whole curves, with classes in H1,1. It is known that Hodge cycles are algebraic, and that algebraic equivalence coincides with homological equivalence, so that h1,1 is an upper bound for ρ, the rank of the Néron-Severi group. The arithmetic genus pa is the difference
geometric genus − irregularity.
In fact this explains why the irregularity got its name, as a kind of 'error term'.
Riemann-Roch theorem for surfaces
Main article: Riemann-Roch theorem for surfaces
The Riemann-Roch theorem for surfaces was first formulated by Max Noether. The families of curves on surfaces can be classified, in a sense, and give rise to much of their interesting geometry.
References
• Dolgachev, I.V. (2001) [1994], "Algebraic surface", Encyclopedia of Mathematics, EMS Press
• Zariski, Oscar (1995), Algebraic surfaces, Classics in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-58658-6, MR 1336146
External links
• Free program SURFER to visualize algebraic surfaces in real-time, including a user gallery.
• SingSurf an interactive 3D viewer for algebraic surfaces.
• Page on Algebraic Surfaces started in 2008
• Overview and thoughts on designing Algebraic surfaces
| Wikipedia |
Surface (mathematics)
In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line.
There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not.
A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
Definitions
Often, a surface is defined by equations that are satisfied by the coordinates of its points. This is the case of the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface.[1] If the defining three-variate function is a polynomial, the surface is an algebraic surface. For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation
$x^{2}+y^{2}+z^{2}-1=0.$
A surface may also be defined as the image, in some space of dimension at least 3, of a continuous function of two variables (some further conditions are required to ensure that the image is not a curve). In this case, one says that one has a parametric surface, which is parametrized by these two variables, called parameters. For example, the unit sphere may be parametrized by the Euler angles, also called longitude u and latitude v by
${\begin{aligned}x&=\cos(u)\cos(v)\\y&=\sin(u)\cos(v)\\z&=\sin(v)\,.\end{aligned}}$
Parametric equations of surfaces are often irregular at some points. For example, all but two points of the unit sphere, are the image, by the above parametrization, of exactly one pair of Euler angles (modulo 2π). For the remaining two points (the north and south poles), one has cos v = 0, and the longitude u may take any values. Also, there are surfaces for which there cannot exist a single parametrization that covers the whole surface. Therefore, one often considers surfaces which are parametrized by several parametric equations, whose images cover the surface. This is formalized by the concept of manifold: in the context of manifolds, typically in topology and differential geometry, a surface is a manifold of dimension two; this means that a surface is a topological space such that every point has a neighborhood which is homeomorphic to an open subset of the Euclidean plane (see Surface (topology) and Surface (differential geometry)). This allows defining surfaces in spaces of dimension higher than three, and even abstract surfaces, which are not contained in any other space. On the other hand, this excludes surfaces that have singularities, such as the vertex of a conical surface or points where a surface crosses itself.
In classical geometry, a surface is generally defined as a locus of a point or a line. For example, a sphere is the locus of a point which is at a given distance of a fixed point, called the center; a conical surface is the locus of a line passing through a fixed point and crossing a curve; a surface of revolution is the locus of a curve rotating around a line. A ruled surface is the locus of a moving line satisfying some constraints; in modern terminology, a ruled surface is a surface, which is a union of lines.
Terminology
There are several kinds of surfaces that are considered in mathematics. An unambiguous terminology is thus necessary to distinguish them when needed. A topological surface is a surface that is a manifold of dimension two (see § Topological surface). A differentiable surface is a surface that is a differentiable manifold (see § Differentiable surface). Every differentiable surface is a topological surface, but the converse is false.
A "surface" is often implicitly supposed to be contained in a Euclidean space of dimension 3, typically R3. A surface that is contained in a projective space is called a projective surface (see § Projective surface). A surface that is not supposed to be included in another space is called an abstract surface.
Examples
• The graph of a continuous function of two variables, defined over a connected open subset of R2 is a topological surface. If the function is differentiable, the graph is a differentiable surface.
• A plane is both an algebraic surface and a differentiable surface. It is also a ruled surface and a surface of revolution.
• A circular cylinder (that is, the locus of a line crossing a circle and parallel to a given direction) is an algebraic surface and a differentiable surface.
• A circular cone (locus of a line crossing a circle, and passing through a fixed point, the apex, which is outside the plane of the circle) is an algebraic surface which is not a differentiable surface. If one removes the apex, the remainder of the cone is the union of two differentiable surfaces.
• The surface of a polyhedron is a topological surface, which is neither a differentiable surface nor an algebraic surface.
• A hyperbolic paraboloid (the graph of the function z = xy) is a differentiable surface and an algebraic surface. It is also a ruled surface, and, for this reason, is often used in architecture.
• A two-sheet hyperboloid is an algebraic surface and the union of two non-intersecting differentiable surfaces.
Parametric surface
A parametric surface is the image of an open subset of the Euclidean plane (typically $\mathbb {R} ^{2}$) by a continuous function, in a topological space, generally a Euclidean space of dimension at least three. Usually the function is supposed to be continuously differentiable, and this will be always the case in this article.
Specifically, a parametric surface in $\mathbb {R} ^{3}$ is given by three functions of two variables u and v, called parameters
${\begin{aligned}x&=f_{1}(u,v)\\y&=f_{2}(u,v)\\z&=f_{3}(u,v)\,.\end{aligned}}$
As the image of such a function may be a curve (for example, if the three functions are constant with respect to v), a further condition is required, generally that, for almost all values of the parameters, the Jacobian matrix
${\begin{bmatrix}{\dfrac {\partial f_{1}}{\partial u}}&{\dfrac {\partial f_{1}}{\partial v}}\\{\dfrac {\partial f_{2}}{\partial u}}&{\dfrac {\partial f_{2}}{\partial v}}\\{\dfrac {\partial f_{3}}{\partial u}}&{\dfrac {\partial f_{3}}{\partial v}}\\\end{bmatrix}}$
has rank two. Here "almost all" means that the values of the parameters where the rank is two contain a dense open subset of the range of the parametrization. For surfaces in a space of higher dimension, the condition is the same, except for the number of columns of the Jacobian matrix.
Tangent plane and normal vector
A point p where the above Jacobian matrix has rank two is called regular, or, more properly, the parametrization is called regular at p.
The tangent plane at a regular point p is the unique plane passing through p and having a direction parallel to the two row vectors of the Jacobian matrix. The tangent plane is an affine concept, because its definition is independent of the choice of a metric. In other words, any affine transformation maps the tangent plane to the surface at a point to the tangent plane to the image of the surface at the image of the point.
The normal line at a point of a surface is the unique line passing through the point and perpendicular to the tangent plane; the normal vector is a vector which is parallel to the normal.
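As a computational sketch (Python with SymPy assumed; the surface is the Euler-angle parametrization of the unit sphere given earlier), the Jacobian matrix, its rank at a point, and a normal vector at a regular point can be obtained symbolically:

import sympy as sp

# Sketch: Jacobian rank and normal vector for the Euler-angle parametrization
# of the unit sphere given above (u = longitude, v = latitude).
u, v = sp.symbols('u v', real=True)
r = sp.Matrix([sp.cos(u) * sp.cos(v), sp.sin(u) * sp.cos(v), sp.sin(v)])

J = r.jacobian([u, v])                       # 3x2 Jacobian matrix
print(J.subs({u: 0, v: 0}).rank())           # 2: the point (1, 0, 0) is regular
print(J.subs({u: 0, v: sp.pi / 2}).rank())   # 1: the north pole is irregular for this parametrization

# At a regular point, the cross product of the two columns spans the normal direction.
normal = J.col(0).cross(J.col(1))
print(normal.subs({u: 0, v: 0}).T)           # proportional to (1, 0, 0), the radial direction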
For other differential invariants of surfaces, in the neighborhood of a point, see Differential geometry of surfaces.
Irregular point and singular point
A point of a parametric surface which is not regular is irregular. There are several kinds of irregular points.
It may occur that an irregular point becomes regular, if one changes the parametrization. This is the case of the poles in the parametrization of the unit sphere by Euler angles: it suffices to permute the role of the different coordinate axes for changing the poles.
On the other hand, consider the circular cone of parametric equation
${\begin{aligned}x&=t\cos(u)\\y&=t\sin(u)\\z&=t\,.\end{aligned}}$
The apex of the cone is the origin (0, 0, 0), and is obtained for t = 0. It is an irregular point that remains irregular, whichever parametrization is chosen (otherwise, there would exist a unique tangent plane). Such an irregular point, where the tangent plane is undefined, is said to be singular.
There is another kind of singular point: the self-crossing points, that is, the points where the surface crosses itself. In other words, these are the points which are obtained for (at least) two different values of the parameters.
Graph of a bivariate function
Let z = f(x, y) be a function of two real variables. This is a parametric surface, parametrized as
${\begin{aligned}x&=t\\y&=u\\z&=f(t,u)\,.\end{aligned}}$
Every point of this surface is regular, as the two first columns of the Jacobian matrix form the identity matrix of rank two.
Rational surface
Main article: Rational surface
A rational surface is a surface that may be parametrized by rational functions of two variables. That is, if fi(t, u) are, for i = 0, 1, 2, 3, polynomials in two indeterminates, then the parametric surface, defined by
${\begin{aligned}x&={\frac {f_{1}(t,u)}{f_{0}(t,u)}}\\y&={\frac {f_{2}(t,u)}{f_{0}(t,u)}}\\z&={\frac {f_{3}(t,u)}{f_{0}(t,u)}}\,,\end{aligned}}$
is a rational surface.
A rational surface is an algebraic surface, but most algebraic surfaces are not rational.
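For example (a classical construction), the unit sphere is a rational surface: inverse stereographic projection from the north pole provides the parametrization
${\begin{aligned}x&={\frac {2t}{1+t^{2}+u^{2}}}\\y&={\frac {2u}{1+t^{2}+u^{2}}}\\z&={\frac {t^{2}+u^{2}-1}{1+t^{2}+u^{2}}}\,,\end{aligned}}$
and a direct computation shows that $x^{2}+y^{2}+z^{2}=1$ identically.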
Implicit surface
Main article: Implicit surface
An implicit surface in a Euclidean space (or, more generally, in an affine space) of dimension 3 is the set of the common zeros of a differentiable function of three variables
$f(x,y,z)=0.$
Implicit means that the equation defines implicitly one of the variables as a function of the other variables. This is made more exact by the implicit function theorem: if f(x0, y0, z0) = 0, and the partial derivative in z of f is not zero at (x0, y0, z0), then there exists a differentiable function φ(x, y) such that
$f(x,y,\varphi (x,y))=0$
in a neighbourhood of (x0, y0, z0). In other words, the implicit surface is the graph of a function near a point of the surface where the partial derivative in z is nonzero. An implicit surface has thus, locally, a parametric representation, except at the points of the surface where the three partial derivatives are zero.
Regular points and tangent plane
A point of the surface where at least one partial derivative of f is nonzero is called regular. At such a point $(x_{0},y_{0},z_{0})$, the tangent plane and the direction of the normal are well defined, and may be deduced, with the implicit function theorem from the definition given above, in § Tangent plane and normal vector. The direction of the normal is the gradient, that is the vector
$\left[{\frac {\partial f}{\partial x}}(x_{0},y_{0},z_{0}),{\frac {\partial f}{\partial y}}(x_{0},y_{0},z_{0}),{\frac {\partial f}{\partial z}}(x_{0},y_{0},z_{0})\right].$
The tangent plane is defined by its implicit equation
${\frac {\partial f}{\partial x}}(x_{0},y_{0},z_{0})(x-x_{0})+{\frac {\partial f}{\partial y}}(x_{0},y_{0},z_{0})(y-y_{0})+{\frac {\partial f}{\partial z}}(x_{0},y_{0},z_{0})(z-z_{0})=0.$
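A short symbolic sketch of these formulas (Python with SymPy assumed), applied to the unit sphere at the point (0, 0, 1):

import sympy as sp

# Sketch: normal direction (gradient) and tangent plane of an implicit surface,
# here the unit sphere f = 0 at the point (0, 0, 1).
x, y, z = sp.symbols('x y z', real=True)
f = x**2 + y**2 + z**2 - 1
p = {x: 0, y: 0, z: 1}

grad = sp.Matrix([f.diff(x), f.diff(y), f.diff(z)])
n = grad.subs(p)
print(n.T)                                    # (0, 0, 2): the normal is along the z-axis

# Tangent plane: grad(p) . ((x, y, z) - p) = 0
tangent_plane = sp.Eq(n.dot(sp.Matrix([x, y, z - 1])), 0)
print(tangent_plane)                          # 2*z - 2 = 0, i.e. the plane z = 1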
Singular point
A singular point of an implicit surface (in $\mathbb {R} ^{3}$) is a point of the surface where the implicit equation holds and the three partial derivatives of its defining function are all zero. Therefore, the singular points are the solutions of a system of four equations in three indeterminates. As most such systems have no solution, many surfaces do not have any singular point. A surface with no singular point is called regular or non-singular.
The study of surfaces near their singular points and the classification of the singular points is singularity theory. A singular point is isolated if there is no other singular point in a neighborhood of it. Otherwise, the singular points may form a curve. This is in particular the case for self-crossing surfaces.
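As a small symbolic illustration (Python with SymPy assumed; the Whitney umbrella below is a standard example not discussed above), singular points can be found by solving the system formed by the defining equation and its three partial derivatives:

import sympy as sp

# Sketch: singular points are the common zeros of f and its three partial derivatives.
x, y, z = sp.symbols('x y z', real=True)

def singular_locus(f):
    return sp.solve([f, f.diff(x), f.diff(y), f.diff(z)], [x, y, z], dict=True)

print(singular_locus(x**2 + y**2 - z**2))   # the cone: one isolated singular point, the apex (0, 0, 0)
print(singular_locus(x**2 - y**2 * z))      # Whitney umbrella: x = 0, y = 0 with z free, a curve of singular points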
Algebraic surface
Main article: Algebraic surface
Originally, an algebraic surface was a surface which may be defined by an implicit equation
$f(x,y,z)=0,$
where f is a polynomial in three indeterminates, with real coefficients.
The concept has been extended in several directions, by defining surfaces over arbitrary fields, and by considering surfaces in spaces of arbitrary dimension or in projective spaces. Abstract algebraic surfaces, which are not explicitly embedded in another space, are also considered.
Surfaces over arbitrary fields
Polynomials with coefficients in any field are accepted for defining an algebraic surface. However, the field of coefficients of a polynomial is not well defined, as, for example, a polynomial with rational coefficients may also be considered as a polynomial with real or complex coefficients. Therefore, the concept of point of the surface has been generalized in the following way.[2]
Given a polynomial f(x, y, z), let k be the smallest field containing the coefficients, and K be an algebraically closed extension of k, of infinite transcendence degree.[3] Then a point of the surface is an element of K3 which is a solution of the equation
$f(x,y,z)=0.$
If the polynomial has real coefficients, the field K is the complex field, and a point of the surface that belongs to $\mathbb {R} ^{3}$ (a usual point) is called a real point. A point that belongs to k3 is called rational over k, or simply a rational point, if k is the field of rational numbers.
Projective surface
A projective surface in a projective space of dimension three is the set of points whose homogeneous coordinates are zeros of a single homogeneous polynomial in four variables. More generally, a projective surface is a subset of a projective space, which is a projective variety of dimension two.
Projective surfaces are strongly related to affine surfaces (that is, ordinary algebraic surfaces). One passes from a projective surface to the corresponding affine surface by setting to one some coordinate or indeterminate of the defining polynomials (usually the last one). Conversely, one passes from an affine surface to its associated projective surface (called projective completion) by homogenizing the defining polynomial (in case of surfaces in a space of dimension three), or by homogenizing all polynomials of the defining ideal (for surfaces in a space of higher dimension).
In higher dimensional spaces
One cannot define the concept of an algebraic surface in a space of dimension higher than three without a general definition of an algebraic variety and of the dimension of an algebraic variety. In fact, an algebraic surface is an algebraic variety of dimension two.
More precisely, an algebraic surface in a space of dimension n is the set of the common zeros of at least n – 2 polynomials, but these polynomials must satisfy further conditions that may not be immediate to verify. Firstly, the polynomials must not define a variety or an algebraic set of higher dimension, which is typically the case if one of the polynomials is in the ideal generated by the others. Generally, n – 2 polynomials define an algebraic set of dimension two or higher. If the dimension is two, the algebraic set may have several irreducible components. If there is only one component, the n – 2 polynomials define a surface, which is a complete intersection. If there are several components, then one needs further polynomials for selecting a specific component.
Most authors consider as an algebraic surface only algebraic varieties of dimension two, but some also consider as surfaces all algebraic sets whose irreducible components have the dimension two.
In the case of surfaces in a space of dimension three, every surface is a complete intersection, and a surface is defined by a single polynomial, which is irreducible or not, depending on whether non-irreducible algebraic sets of dimension two are considered as surfaces or not.
Topological surface
Main article: Surface (topology)
In topology, a surface is generally defined as a manifold of dimension two. This means that a topological surface is a topological space such that every point has a neighborhood that is homeomorphic to an open subset of a Euclidean plane.
Every topological surface is homeomorphic to a polyhedral surface such that all facets are triangles. The combinatorial study of such arrangements of triangles (or, more generally, of higher-dimensional simplexes) is the starting object of algebraic topology. This allows the characterization of the properties of surfaces in terms of purely algebraic invariants, such as the genus and homology groups.
The homeomorphism classes of surfaces have been completely described (see Surface (topology)).
Differentiable surface
This section is an excerpt from Differential geometry of surfaces.
In mathematics, the differential geometry of surfaces deals with the differential geometry of smooth surfaces with various additional structures, most often, a Riemannian metric. Surfaces have been extensively studied from various perspectives: extrinsically, relating to their embedding in Euclidean space and intrinsically, reflecting their properties determined solely by the distance within the surface as measured along curves on the surface. One of the fundamental concepts investigated is the Gaussian curvature, first studied in depth by Carl Friedrich Gauss,[4] who showed that curvature was an intrinsic property of a surface, independent of its isometric embedding in Euclidean space.
Surfaces naturally arise as graphs of functions of a pair of variables, and sometimes appear in parametric form or as loci associated to space curves. An important role in their study has been played by Lie groups (in the spirit of the Erlangen program), namely the symmetry groups of the Euclidean plane, the sphere and the hyperbolic plane. These Lie groups can be used to describe surfaces of constant Gaussian curvature; they also provide an essential ingredient in the modern approach to intrinsic differential geometry through connections. On the other hand, extrinsic properties relying on an embedding of a surface in Euclidean space have also been extensively studied. This is well illustrated by the non-linear Euler–Lagrange equations in the calculus of variations: although Euler developed the one variable equations to understand geodesics, defined independently of an embedding, one of Lagrange's main applications of the two variable equations was to minimal surfaces, a concept that can only be defined in terms of an embedding.
Fractal surface
This section is an excerpt from Fractal landscape.
A fractal landscape or fractal surface is generated using a stochastic algorithm designed to produce fractal behavior that mimics the appearance of natural terrain. In other words, the surface resulting from the procedure is not a deterministic, but rather a random surface that exhibits fractal behavior.[5]
Many natural phenomena exhibit some form of statistical self-similarity that can be modeled by fractal surfaces.[6] Moreover, variations in surface texture provide important visual cues to the orientation and slopes of surfaces, and the use of almost self-similar fractal patterns can help create natural looking visual effects.[7] The modeling of the Earth's rough surfaces via fractional Brownian motion was first proposed by Benoit Mandelbrot.[8]
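As an illustrative sketch of one such stochastic construction (one-dimensional midpoint displacement in Python with NumPy; the roughness parameter and number of levels are arbitrary, and real terrain generators typically use two-dimensional variants such as the diamond-square algorithm):

import numpy as np

# 1D midpoint-displacement sketch producing a fractional-Brownian-like profile.
# H controls how quickly the random displacements shrink at finer scales.
def midpoint_displacement(levels=10, H=0.8, seed=0):
    rng = np.random.default_rng(seed)
    heights = np.array([0.0, 0.0])
    scale = 1.0
    for _ in range(levels):
        mids = (heights[:-1] + heights[1:]) / 2 + rng.normal(0.0, scale, len(heights) - 1)
        out = np.empty(2 * len(heights) - 1)
        out[0::2], out[1::2] = heights, mids
        heights = out
        scale *= 2.0 ** (-H)   # smaller displacements at each finer level
    return heights

profile = midpoint_displacement()
print(len(profile), profile.min(), profile.max())   # 1025 samples of a rough, terrain-like profile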
Because the intended result of the process is to produce a landscape, rather than a mathematical function, processes are frequently applied to such landscapes that may affect the stationarity and even the overall fractal behavior of such a surface, in the interests of producing a more convincing landscape.
According to R. R. Shearer, the generation of natural looking surfaces and landscapes was a major turning point in art history, where the distinction between geometric, computer generated images and natural, man made art became blurred.[9] The first use of a fractal-generated landscape in a film was in 1982 for the movie Star Trek II: The Wrath of Khan. Loren Carpenter refined the techniques of Mandelbrot to create an alien landscape.[10]
In computer graphics
This section is an excerpt from Computer representation of surfaces.
In technical applications of 3D computer graphics (CAx) such as computer-aided design and computer-aided manufacturing, surfaces are one way of representing objects. The other ways are wireframe (lines and curves) and solids. Point clouds are also sometimes used as temporary ways to represent an object, with the goal of using the points to create one or more of the three permanent representations.
See also
• Area element, the area of a differential element of a surface
• Coordinate surfaces
• Hypersurface
• Perimeter, a two-dimensional equivalent
• Polyhedral surface
• Shape
• Signed distance function
• Solid figure
• Surface area
• Surface patch
• Surface integral
Notes
1. Here "implicit" does not refer to a property of the surface, which may be defined by other means, but instead to how it is defined. Thus this term is an abbreviation of "surface defined by an implicit equation".
2. Weil, André (1946), Foundations of Algebraic Geometry, American Mathematical Society Colloquium Publications, vol. 29, Providence, R.I.: American Mathematical Society, pp. 1–363, ISBN 9780821874622, MR 0023093
3. The infinite degree of transcendence is a technical condition, which allows an accurate definition of the concept of generic point.
4. Gauss 1902.
5. "The Fractal Geometry of Nature".
6. Advances in multimedia modeling: 13th International Multimedia Modeling by Tat-Jen Cham 2007 ISBN 3-540-69428-5 page
7. Human symmetry perception and its computational analysis by Christopher W. Tyler 2002 ISBN 0-8058-4395-7 pages 173–177
8. Dynamics of Fractal Surfaces by Fereydoon Family and Tamas Vicsek 1991 ISBN 981-02-0720-4 page 45
9. Rhonda Roland Shearer "Rethinking Images and Metaphors" in The languages of the brain by Albert M. Galaburda 2002 ISBN 0-674-00772-7 pages 351–359
10. Briggs, John (1992). Fractals: The Patterns of Chaos : a New Aesthetic of Art, Science, and Nature. Simon and Schuster. p. 84. ISBN 978-0671742171. Retrieved 15 June 2014.
| Wikipedia |
Cube
In geometry, a cube[1] is a three-dimensional solid object bounded by six square faces, facets or sides, with three meeting at each vertex. Viewed from a corner, its outline is a hexagon, and its net is usually depicted as a cross.[2]
This article is about the 3-dimensional shape. For cubes in any dimension, see Hypercube. For other uses, see Cube (disambiguation).
Regular hexahedron
• Type: Platonic solid
• Elements: F = 6, E = 12, V = 8 (χ = 2)
• Faces by sides: 6{4}
• Conway notation: C
• Schläfli symbols: {4,3}; t{2,4} or {4}×{}; tr{2,2}; {}×{}×{} = {}3
• Face configuration: V3.3.3.3
• Wythoff symbol: 3 | 2 4
• Symmetry: Oh, B3, [4,3], (*432)
• Rotation group: O, [4,3]+, (432)
• References: U06, C18, W3
• Properties: regular, convex, zonohedron, Hanner polytope
• Dihedral angle: 90°
• Vertex figure: 4.4.4
• Dual polyhedron: Octahedron
The cube is the only regular hexahedron and is one of the five Platonic solids. It has 6 faces, 12 edges, and 8 vertices.
The cube is also a square parallelepiped, an equilateral cuboid and a right rhombohedron, a 3-zonohedron. It is a regular square prism in three orientations, and a trigonal trapezohedron in four orientations.
The cube is dual to the octahedron. It has cubical or octahedral symmetry.
The cube is the only convex polyhedron whose faces are all squares.
Orthogonal projections
The cube has four special orthogonal projections, centered on a vertex, edges, face and normal to its vertex figure. The first and third correspond to the A2 and B2 Coxeter planes.
(Projection images: centered by face (B2 Coxeter plane, projective symmetry [4]) and by vertex (A2 Coxeter plane, projective symmetry [6]), plus tilted views.)
Spherical tiling
"Spherical cube" redirects here. Not to be confused with Squircle.
The cube can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
(Images: the spherical cube in orthographic projection and in stereographic projection.)
Cartesian coordinates
For a cube centered at the origin, with edges parallel to the axes and with an edge length of 2, the Cartesian coordinates of the vertices are
(±1, ±1, ±1)
while the interior consists of all points (x0, x1, x2) with −1 < xi < 1 for all i.
Equation in three dimensional space
In analytic geometry, a cube's surface with center (x0, y0, z0) and edge length of 2a is the locus of all points (x, y, z) such that
$\max\{|x-x_{0}|,|y-y_{0}|,|z-z_{0}|\}=a.$
A cube can also be considered the limiting case of a 3D superellipsoid as all three exponents approach infinity.
Formulas
For a cube of edge length $a$:
• surface area: $6a^{2}$
• volume: $a^{3}$
• face diagonal: ${\sqrt {2}}a$
• space diagonal: ${\sqrt {3}}a$
• radius of circumscribed sphere: ${\frac {\sqrt {3}}{2}}a$
• radius of sphere tangent to edges: ${\frac {a}{\sqrt {2}}}$
• radius of inscribed sphere: ${\frac {a}{2}}$
• angle between faces (in radians): ${\frac {\pi }{2}}$
As the volume of a cube is the third power of its sides $a\times a\times a$, third powers are called cubes, by analogy with squares and second powers.
A cube has the largest volume among cuboids (rectangular boxes) with a given surface area. Also, a cube has the largest volume among cuboids with the same total linear size (length+width+height).
Point in space
For a cube whose circumscribing sphere has radius R, and for a given point in its 3-dimensional space with distances di from the cube's eight vertices, we have:[3]
${\frac {\sum _{i=1}^{8}d_{i}^{4}}{8}}+{\frac {16R^{4}}{9}}=\left({\frac {\sum _{i=1}^{8}d_{i}^{2}}{8}}+{\frac {2R^{2}}{3}}\right)^{2}.$
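A quick numerical check of this identity (a Python sketch with NumPy; the test point is arbitrary and the cube is the one with vertices (±1, ±1, ±1)):

import numpy as np
from itertools import product

# Numerical check of the identity for the cube with vertices (+-1, +-1, +-1),
# whose circumscribing sphere has radius R = sqrt(3); the test point is arbitrary.
vertices = np.array(list(product([-1.0, 1.0], repeat=3)))
R = np.sqrt(3.0)
p = np.array([0.3, -1.2, 2.5])

d2 = np.sum((vertices - p) ** 2, axis=1)     # squared distances d_i^2 to the eight vertices
lhs = np.mean(d2 ** 2) + 16 * R**4 / 9
rhs = (np.mean(d2) + 2 * R**2 / 3) ** 2
print(lhs, rhs, np.isclose(lhs, rhs))        # both sides agree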
Doubling the cube
Doubling the cube, or the Delian problem, was the problem posed by ancient Greek mathematicians of using only a compass and straightedge to start with the length of the edge of a given cube and to construct the length of the edge of a cube with twice the volume of the original cube. They were unable to solve this problem, which Pierre Wantzel proved in 1837 to be impossible because the cube root of 2 is not a constructible number.
Uniform colorings and symmetry
The cube has three uniform colorings, named by the unique colors of the square faces around each vertex: 111, 112, 123.
The cube has four classes of symmetry, which can be represented by vertex-transitive colorings of the faces. The highest octahedral symmetry Oh has all the faces the same color. The dihedral symmetry D4h comes from the cube being a square prism, with the four side faces the same color. The prismatic subgroup D2d has the same coloring as the previous one, and D2h has alternating colors for its side faces, for a total of three colors, paired by opposite sides. Each symmetry form has a different Wythoff symbol.
The six forms are listed below with their Schläfli and Wythoff symbols, symmetry groups, symmetry orders and uniform face colorings (Coxeter diagrams omitted):
• Regular hexahedron: Schläfli symbol {4,3}; Wythoff symbol 3 | 4 2; symmetry Oh, [4,3], (*432), order 24; coloring (111)
• Square prism: Schläfli symbol {4}×{ } = rr{4,2}; Wythoff symbol 4 2 | 2; symmetry D4h, [4,2], (*422), order 16; coloring (112)
• Rectangular trapezoprism: Schläfli symbol s2{2,4}; symmetry D2d, [4,2+], (2*2), order 8; coloring (112)
• Rectangular cuboid: Schläfli symbol { }³ = tr{2,2}; Wythoff symbol 2 2 2 |; symmetry D2h, [2,2], (*222), order 8; coloring (123)
• Rhombic prism: Schläfli symbol { }×2{ }; symmetry D2h; coloring (112)
• Trigonal trapezohedron: symmetry D3d, [6,2+], (2*3), order 12; colorings (111), (112)
Geometric relations
A cube has eleven nets: that is, there are eleven ways to flatten a hollow cube by cutting seven edges.[4] To color the cube so that no two adjacent faces have the same color, one would need at least three colors.
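The three-color claim can be verified by brute force over the face-adjacency graph of the cube. A small sketch (Python; the face labels and the helper chromatic_number are just illustrative):

```python
from itertools import product

faces = ["+x", "-x", "+y", "-y", "+z", "-z"]
opposite = {"+x": "-x", "-x": "+x", "+y": "-y", "-y": "+y", "+z": "-z", "-z": "+z"}
# two faces of a cube are adjacent exactly when they are not opposite
adjacent = [(f, g) for i, f in enumerate(faces) for g in faces[i + 1:] if g != opposite[f]]

def chromatic_number():
    for k in (1, 2, 3):
        for colouring in product(range(k), repeat=len(faces)):
            c = dict(zip(faces, colouring))
            if all(c[f] != c[g] for f, g in adjacent):
                return k

print(chromatic_number())   # -> 3
```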
The cube is the cell of the only regular tiling of three-dimensional Euclidean space. It is also unique among the Platonic solids in having faces with an even number of sides and, consequently, it is the only member of that group that is a zonohedron (every face has point symmetry).
The cube can be cut into six identical square pyramids. If these square pyramids are then attached to the faces of a second cube, a rhombic dodecahedron is obtained (with pairs of coplanar triangles combined into rhombic faces).
In theology
Cubes appear in Abrahamic religions. The Kaaba (Arabic for 'cube') in Mecca is one example. Cubes also appear in Judaism as tefillin, and the New Jerusalem is described in the New Testament as a cube.[5]
Other dimensions
The analogue of a cube in four-dimensional Euclidean space has a special name—a tesseract or hypercube. More properly, a hypercube (or n-dimensional cube or simply n-cube) is the analogue of the cube in n-dimensional Euclidean space and a tesseract is the order-4 hypercube. A hypercube is also called a measure polytope.
There are analogues of the cube in lower dimensions too: a point in dimension 0, a line segment in one dimension and a square in two dimensions.
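The lower- and higher-dimensional analogues can all be generated uniformly from ±1 coordinates. A sketch (Python, standard library only; n_cube is an illustrative name):

```python
from itertools import product

def n_cube(n):
    """Vertex and edge counts of the n-dimensional hypercube with vertices (±1, ..., ±1)."""
    verts = list(product((-1, 1), repeat=n))
    # two vertices are joined by an edge when they differ in exactly one coordinate
    edges = sum(1 for i, u in enumerate(verts) for v in verts[i + 1:]
                if sum(a != b for a, b in zip(u, v)) == 1)
    return len(verts), edges

for n in range(5):
    print(n, n_cube(n))   # point (1,0), segment (2,1), square (4,4), cube (8,12), tesseract (16,32)
```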
Related polyhedra
The quotient of the cube by the antipodal map yields a projective polyhedron, the hemicube.
If the original cube has edge length 1, its dual polyhedron (an octahedron) has edge length $\scriptstyle {\sqrt {2}}/2$.
The cube is a special case in various classes of general polyhedra:
Name                              Equal edge lengths?  Equal angles?  Right angles?
Cube                              Yes                  Yes            Yes
Rhombohedron                      Yes                  Yes            No
Cuboid                            No                   Yes            Yes
Parallelepiped                    No                   Yes            No
Quadrilaterally faced hexahedron  No                   No             No
The vertices of a cube can be grouped into two groups of four, each forming a regular tetrahedron; more generally this is referred to as a demicube. These two together form a regular compound, the stella octangula. The intersection of the two forms a regular octahedron. The symmetries of a regular tetrahedron correspond to those of a cube which map each tetrahedron to itself; the other symmetries of the cube map the two to each other.
One such regular tetrahedron has a volume of 1/3 of that of the cube. The remaining space consists of four equal irregular tetrahedra with a volume of 1/6 of that of the cube, each.
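The 1/3 volume ratio is easy to confirm from coordinates: take alternate vertices of the cube [−1, 1]³ and use the determinant formula for the volume of a tetrahedron. A sketch (assuming Python with NumPy):

```python
import numpy as np

# a regular tetrahedron formed by alternate vertices of the cube [-1, 1]^3
T = np.array([(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)], dtype=float)

# volume of a tetrahedron from its vertices: |det(v1 - v0, v2 - v0, v3 - v0)| / 6
vol_tet = abs(np.linalg.det(T[1:] - T[0])) / 6
vol_cube = 2.0 ** 3

print(vol_tet / vol_cube)   # 0.3333..., i.e. one third of the cube
```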
The rectified cube is the cuboctahedron. If smaller corners are cut off we get a polyhedron with six octagonal faces and eight triangular ones. In particular we can get regular octagons (truncated cube). The rhombicuboctahedron is obtained by cutting off both corners and edges to the correct amount.
A cube can be inscribed in a dodecahedron so that each vertex of the cube is a vertex of the dodecahedron and each edge is a diagonal of one of the dodecahedron's faces; taking all such cubes gives rise to the regular compound of five cubes.
If two opposite corners of a cube are truncated at the depth of the three vertices directly connected to them, an irregular octahedron is obtained. Eight of these irregular octahedra can be attached to the triangular faces of a regular octahedron to obtain the cuboctahedron.
The cube is topologically related to a series of spherical polyhedra and tilings with order-3 vertex figures.
*n32 symmetry mutation of regular tilings: {n,3}
Spherical Euclidean Compact hyperb. Paraco. Noncompact hyperbolic
{2,3} {3,3} {4,3} {5,3} {6,3} {7,3} {8,3} {∞,3} {12i,3} {9i,3} {6i,3} {3i,3}
The cuboctahedron is one of a family of uniform polyhedra related to the cube and regular octahedron.
Uniform octahedral polyhedra
Symmetry: [4,3], (*432), with subsymmetries [4,3]+ (432), [1+,4,3] = [3,3] (*332) and [3+,4] (3*2).
Members (Schläfli symbols, with alternative lower-symmetry constructions in parentheses): {4,3}, t{4,3}, r{4,3} (r{31,1}), t{3,4} (t{31,1}), {3,4} ({31,1}), rr{4,3} (s2{3,4}), tr{4,3}, sr{4,3}, h{4,3} ({3,3}), h2{4,3} (t{3,3}), s{3,4} (s{31,1}).
Duals to uniform polyhedra: V43, V3.82, V(3.4)2, V4.62, V34, V3.43, V4.6.8, V34.4, V33, V3.62, V35.
The cube is topologically related as part of a sequence of regular tilings, extending into the hyperbolic plane: {4,p}, p = 3, 4, 5, ...
*n42 symmetry mutation of regular tilings: {4,n}
Spherical: {4,3}; Euclidean: {4,4}; compact hyperbolic: {4,5}, {4,6}, {4,7}, {4,8}, ...; paracompact: {4,∞}.
With dihedral symmetry, Dih4, the cube is topologically related in a series of uniform polyhedra and tilings 4.2n.2n, extending into the hyperbolic plane:
*n42 symmetry mutation of truncated tilings: 4.2n.2n
Symmetry *n42, [n,4]: *242 [2,4] and *342 [3,4] (spherical), *442 [4,4] (Euclidean), *542 [5,4] through *842 [8,4]... (compact hyperbolic), *∞42 [∞,4] (paracompact).
Truncated figures, vertex configuration: 4.4.4, 4.6.6, 4.8.8, 4.10.10, 4.12.12, 4.14.14, 4.16.16, 4.∞.∞.
n-kis figures, face configuration: V4.4.4, V4.6.6, V4.8.8, V4.10.10, V4.12.12, V4.14.14, V4.16.16, V4.∞.∞.
All these figures have octahedral symmetry.
The cube is a part of a sequence of rhombic polyhedra and tilings with [n,3] Coxeter group symmetry. The cube can be seen as a rhombic hexahedron where the rhombi are squares.
Symmetry mutations of dual quasiregular tilings: V(3.n)2
*n32 symmetry: *332, *432, *532 (spherical), *632 (Euclidean), *732, *832..., *∞32 (hyperbolic).
Face configuration: V(3.3)2, V(3.4)2, V(3.5)2, V(3.6)2, V(3.7)2, V(3.8)2, V(3.∞)2.
The cube is a square prism:
Family of uniform n-gonal prisms
Prism name: digonal prism, (trigonal) triangular prism, (tetragonal) square prism, pentagonal prism, hexagonal prism, heptagonal prism, octagonal prism, enneagonal prism, decagonal prism, hendecagonal prism, dodecagonal prism, ..., apeirogonal prism
Vertex configuration: 2.4.4, 3.4.4, 4.4.4, 5.4.4, 6.4.4, 7.4.4, 8.4.4, 9.4.4, 10.4.4, 11.4.4, 12.4.4, ..., ∞.4.4
As a trigonal trapezohedron, the cube is related to the hexagonal dihedral symmetry family.
Uniform hexagonal dihedral spherical polyhedra
Symmetry: [6,2], (*622) [6,2]+, (622) [6,2+], (2*3)
{6,2} t{6,2} r{6,2} t{2,6} {2,6} rr{6,2} tr{6,2} sr{6,2} s{2,6}
Duals to uniforms
V62 V122 V62 V4.4.6 V26 V4.4.6 V4.4.12 V3.3.3.6 V3.3.3.3
Regular and uniform compounds of cubes
Compound of three cubes
Compound of five cubes
In uniform honeycombs and polychora
It is an element of 9 of 28 convex uniform honeycombs:
Cubic honeycomb
Truncated square prismatic honeycomb
Snub square prismatic honeycomb
Elongated triangular prismatic honeycomb Gyroelongated triangular prismatic honeycomb
Cantellated cubic honeycomb
Cantitruncated cubic honeycomb
Runcitruncated cubic honeycomb
Runcinated alternated cubic honeycomb
It is also an element of five four-dimensional uniform polychora:
Tesseract
Cantellated 16-cell
Runcinated tesseract
Cantitruncated 16-cell
Runcitruncated 16-cell
Cubical graph
Cubical graph
Named after: Q3
Vertices: 8
Edges: 12
Radius: 3
Diameter: 3
Girth: 4
Automorphisms: 48
Chromatic number: 2
Properties: Hamiltonian, regular, symmetric, distance-regular, distance-transitive, 3-vertex-connected, bipartite, planar graph
The skeleton of the cube (the vertices and edges) forms a graph with 8 vertices and 12 edges, called the cube graph. It is a special case of the hypercube graph.[6] It is one of 5 Platonic graphs, each a skeleton of its Platonic solid.
An extension is the three-dimensional k-ary Hamming graph, which for k = 2 is the cube graph. Graphs of this sort occur in the theory of parallel processing in computers.
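The parameters listed above can be checked directly by building Q3 from 3-bit strings. A sketch (Python, standard library only):

```python
from itertools import product
from collections import deque

# vertices of Q3 are binary 3-tuples; edges join tuples differing in exactly one bit
verts = list(product((0, 1), repeat=3))
edges = [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]
assert len(verts) == 8 and len(edges) == 12

# 3-regular, and bipartite (colour by parity of the bit sum, so chromatic number 2)
deg = {v: 0 for v in verts}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
assert all(d == 3 for d in deg.values())
assert all(sum(u) % 2 != sum(v) % 2 for u, v in edges)

# diameter via breadth-first search from one corner: the antipodal vertex is 3 steps away
dist = {verts[0]: 0}
queue = deque([verts[0]])
while queue:
    u = queue.popleft()
    for a, b in edges:
        w = b if a == u else a if b == u else None
        if w is not None and w not in dist:
            dist[w] = dist[u] + 1
            queue.append(w)
print(max(dist.values()))   # 3
```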
See also
• Pyramid
• Tesseract
• Trapezohedron
References
1. English cube from Old French < Latin cubus < Greek κύβος (kubos) meaning "a cube, a die, vertebra". In turn from PIE *keu(b)-, "to bend, turn".
2. "Nets of a Solids | Geometry |Nets of a Cube |Nets of a Cone & Cylinder".
3. Park, Poo-Sung. "Regular polytope distances", Forum Geometricorum 16, 2016, 227-232. http://forumgeom.fau.edu/FG2016volume16/FG201627.pdf Archived 2016-10-10 at the Wayback Machine
4. Uehara, Ryuhei (2020). "Figure 1.1". Introduction to Computational Origami: The World of New Computational Geometry. Singapore: Springer. p. 4. doi:10.1007/978-981-15-4470-5. ISBN 978-981-15-4469-9. MR 4215620. S2CID 220150682.
5. "Symbolism of the Cube • Eve Out of the Garden". 30 October 2020.
6. Harary, Frank; Hayes, John P.; Wu, Horng-Jyh (1988). "A survey of the theory of hypercube graphs" (PDF). Computers & Mathematics with Applications. 15 (4): 277–289. doi:10.1016/0898-1221(88)90213-1. hdl:2027.42/27522. MR 0949280.
External links
• Weisstein, Eric W. "Cube". MathWorld.
• Cube: Interactive Polyhedron Model
• Volume of a cube, with interactive animation
• Cube (Robert Webb's site)
Surface area
The surface area (symbol A) of a solid object is a measure of the total area that the surface of the object occupies.[1] The mathematical definition of surface area in the presence of curved surfaces is considerably more involved than the definition of arc length of one-dimensional curves, or of the surface area for polyhedra (i.e., objects with flat polygonal faces), for which the surface area is the sum of the areas of its faces. Smooth surfaces, such as a sphere, are assigned surface area using their representation as parametric surfaces. This definition of surface area is based on methods of infinitesimal calculus and involves partial derivatives and double integration.
A general definition of surface area was sought by Henri Lebesgue and Hermann Minkowski at the turn of the twentieth century. Their work led to the development of geometric measure theory, which studies various notions of surface area for irregular objects of any dimension. An important example is the Minkowski content of a surface.
Definition
While the areas of many simple surfaces have been known since antiquity, a rigorous mathematical definition of area requires a great deal of care. This should provide a function
$S\mapsto A(S)$
which assigns a positive real number to a certain class of surfaces that satisfies several natural requirements. The most fundamental property of the surface area is its additivity: the area of the whole is the sum of the areas of the parts. More rigorously, if a surface S is a union of finitely many pieces S1, …, Sr which do not overlap except at their boundaries, then
$A(S)=A(S_{1})+\cdots +A(S_{r}).$
Surface areas of flat polygonal shapes must agree with their geometrically defined area. Since surface area is a geometric notion, areas of congruent surfaces must be the same and the area must depend only on the shape of the surface, but not on its position and orientation in space. This means that surface area is invariant under the group of Euclidean motions. These properties uniquely characterize surface area for a wide class of geometric surfaces called piecewise smooth. Such surfaces consist of finitely many pieces that can be represented in the parametric form
$S_{D}:{\vec {r}}={\vec {r}}(u,v),\quad (u,v)\in D$
with a continuously differentiable function ${\vec {r}}.$ The area of an individual piece is defined by the formula
$A(S_{D})=\iint _{D}\left|{\vec {r}}_{u}\times {\vec {r}}_{v}\right|\,du\,dv.$
Thus the area of SD is obtained by integrating the length of the normal vector ${\vec {r}}_{u}\times {\vec {r}}_{v}$ to the surface over the appropriate region D in the parametric uv plane. The area of the whole surface is then obtained by adding together the areas of the pieces, using additivity of surface area. The main formula can be specialized to different classes of surfaces, giving, in particular, formulas for areas of graphs z = f(x,y) and surfaces of revolution.
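The definition lends itself to a simple numerical check. The sketch below (assuming Python with NumPy; parametric_area is an illustrative helper, and the partial derivatives are taken by finite differences rather than symbolically) recovers 4πR² for the standard sphere parametrization.

```python
import numpy as np

def parametric_area(r_func, u_range, v_range, n=400, h=1e-5):
    """Midpoint-rule estimate of the double integral of |r_u x r_v| du dv.
    r_func maps parameter arrays (u, v) to points with shape (..., 3)."""
    du = (u_range[1] - u_range[0]) / n
    dv = (v_range[1] - v_range[0]) / n
    u = u_range[0] + (np.arange(n) + 0.5) * du
    v = v_range[0] + (np.arange(n) + 0.5) * dv
    U, V = np.meshgrid(u, v, indexing="ij")
    r_u = (r_func(U + h, V) - r_func(U - h, V)) / (2 * h)   # numerical partial derivatives
    r_v = (r_func(U, V + h) - r_func(U, V - h)) / (2 * h)
    return np.sum(np.linalg.norm(np.cross(r_u, r_v), axis=-1)) * du * dv

def sphere(u, v, R=1.0):   # u = polar angle, v = azimuth
    return np.stack([R * np.sin(u) * np.cos(v),
                     R * np.sin(u) * np.sin(v),
                     R * np.cos(u)], axis=-1)

print(parametric_area(sphere, (0.0, np.pi), (0.0, 2 * np.pi)))   # approx. 4*pi = 12.566
```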
One of the subtleties of surface area, as compared to arc length of curves, is that surface area cannot be defined simply as the limit of areas of polyhedral shapes approximating a given smooth surface. It was demonstrated by Hermann Schwarz that already for the cylinder, different choices of approximating flat surfaces can lead to different limiting values of the area; this example is known as the Schwarz lantern.[2][3]
Various approaches to a general definition of surface area were developed in the late nineteenth and the early twentieth century by Henri Lebesgue and Hermann Minkowski. While for piecewise smooth surfaces there is a unique natural notion of surface area, if a surface is very irregular, or rough, then it may not be possible to assign an area to it at all. A typical example is given by a surface with spikes spread throughout in a dense fashion. Many surfaces of this type occur in the study of fractals. Extensions of the notion of area which partially fulfill its function and may be defined even for very badly irregular surfaces are studied in geometric measure theory. A specific example of such an extension is the Minkowski content of the surface.
Common formulas
Surface areas of common solids
Shape Equation Variables
Cube $6a^{2}$ a = side length
Cuboid $2\left(lb+lh+bh\right)$ l = length, b = breadth, h = height
Triangular prism $bh+l\left(p+q+r\right)$ b = base length of triangle, h = height of triangle, l = distance between triangular bases, p, q, r = sides of triangle
All prisms $2B+Ph$ B = the area of one base, P = the perimeter of one base, h = height
Sphere $4\pi r^{2}=\pi d^{2}$ r = radius of sphere, d = diameter
Hemisphere $3\pi r^{2}$ r = radius of the hemisphere
Hemispherical shell $\pi \left(3R^{2}+r^{2}\right)$ R = external radius of hemisphere, r = internal radius of hemisphere
Spherical lune $2r^{2}\theta $ r = radius of sphere, θ = dihedral angle
Torus $\left(2\pi r\right)\left(2\pi R\right)=4\pi ^{2}Rr$ r = minor radius (radius of the tube), R = major radius (distance from center of tube to center of torus)
Closed cylinder $2\pi r^{2}+2\pi rh=2\pi r\left(r+h\right)$ r = radius of the circular base, h = height of the cylinder
Cylindrical annulus $2\pi Rh+2\pi rh+2(\pi R^{2}-\pi r^{2})=2\pi (R+r)(R-r+h)$ R = External radius
r = Internal radius, h = height
Capsule $2\pi r(2r+h)$ r = radius of the hemispheres and cylinder, h = height of the cylinder
Curved surface area of a cone $\pi r{\sqrt {r^{2}+h^{2}}}=\pi rs$ $s={\sqrt {r^{2}+h^{2}}}$
s = slant height of the cone, r = radius of the circular base, h = height of the cone
Full surface area of a cone $\pi r\left(r+{\sqrt {r^{2}+h^{2}}}\right)=\pi r\left(r+s\right)$ s = slant height of the cone, r = radius of the circular base, h = height of the cone
Regular Pyramid $B+{\frac {Ps}{2}}$ B = area of base, P = perimeter of base, s = slant height
Square pyramid $b^{2}+2bs=b^{2}+2b{\sqrt {\left({\frac {b}{2}}\right)^{2}+h^{2}}}$ b = base length, s = slant height, h = vertical height
Rectangular pyramid $lb+l{\sqrt {\left({\frac {b}{2}}\right)^{2}+h^{2}}}+b{\sqrt {\left({\frac {l}{2}}\right)^{2}+h^{2}}}$ l = length, b = breadth, h = height
Tetrahedron ${\sqrt {3}}a^{2}$ a = side length
Surface of revolution $2\pi \int _{a}^{b}{f(x){\sqrt {1+(f'(x))^{2}}}dx}$
Parametric surface $\iint _{D}\left\vert {\vec {r}}_{u}\times {\vec {r}}_{v}\right\vert dA$ ${\vec {r}}$ = parametric vector equation of surface,
${\vec {r}}_{u}$ = partial derivative of ${\vec {r}}$ with respect to $u$,
${\vec {r}}_{v}$ = partial derivative of ${\vec {r}}$ with respect to $v$,
$D$ = shadow region
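Two rows of the table can be used to cross-check each other: the curved surface of a cone is the surface of revolution of f(x) = rx/h on [0, h], so the integral formula should reproduce πrs. A sketch (assuming Python with NumPy; the midpoint sample size is arbitrary):

```python
import numpy as np

r, h = 1.0, 2.0
s = np.hypot(r, h)                           # slant height, sqrt(r^2 + h^2)

n = 200_000
x = (np.arange(n) + 0.5) * (h / n)           # midpoint sample of [0, h]
f = r * x / h                                # generating line of the cone
fprime = r / h
lateral = 2 * np.pi * np.sum(f * np.sqrt(1 + fprime ** 2)) * (h / n)

print(lateral, np.pi * r * s)                # both approx. 7.0248
```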
Ratio of surface areas of a sphere and cylinder of the same radius and height
The formulas below can be used to show that the surface area of a sphere and cylinder of the same radius and height are in the ratio 2 : 3, as follows.
Let the radius be r and the height be h (which is 2r for the sphere).
${\begin{array}{rlll}{\text{Sphere surface area}}&=4\pi r^{2}&&=(2\pi r^{2})\times 2\\{\text{Cylinder surface area}}&=2\pi r(h+r)&=2\pi r(2r+r)&=(2\pi r^{2})\times 3\end{array}}$
The discovery of this ratio is credited to Archimedes.[4]
In chemistry
Surface area is important in chemical kinetics. Increasing the surface area of a substance generally increases the rate of a chemical reaction. For example, iron in a fine powder will combust, while in solid blocks it is stable enough to use in structures. For different applications a minimal or maximal surface area may be desired.
In biology
The surface area of an organism is important in several considerations, such as regulation of body temperature and digestion. Animals use their teeth to grind food down into smaller particles, increasing the surface area available for digestion. The epithelial tissue lining the digestive tract contains microvilli, greatly increasing the area available for absorption. Elephants have large ears, allowing them to regulate their own body temperature. In other instances, animals will need to minimize surface area; for example, people will fold their arms over their chest when cold to minimize heat loss.
The surface area to volume ratio (SA:V) of a cell imposes upper limits on size, as the volume increases much faster than does the surface area, thus limiting the rate at which substances diffuse from the interior across the cell membrane to interstitial spaces or to other cells. Indeed, representing a cell as an idealized sphere of radius r, the volume and surface area are, respectively, V = (4/3)πr³ and SA = 4πr². The resulting surface area to volume ratio is therefore 3/r. Thus, if a cell has a radius of 1 μm, the SA:V ratio is 3; whereas if the radius of the cell is instead 10 μm, then the SA:V ratio becomes 0.3. With a cell radius of 100 μm, the SA:V ratio is 0.03. Thus, the surface area to volume ratio falls off steeply with increasing radius.
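A two-line computation reproduces the numbers quoted above. A sketch (Python, standard library only; radii in micrometres):

```python
from math import pi

def sa_to_v(r):
    """Surface-area-to-volume ratio of a sphere of radius r (equals 3/r)."""
    return (4 * pi * r ** 2) / (4 / 3 * pi * r ** 3)

for r in (1, 10, 100):
    print(r, sa_to_v(r))   # 3.0, 0.3, 0.03
```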
See also
• Perimeter length
• Projected area
• BET theory, technique for the measurement of the specific surface area of materials
• Spherical area
• Surface integral
References
1. Weisstein, Eric W. "Surface Area". MathWorld.
2. "Schwarz's Paradox" (PDF). Archived (PDF) from the original on 4 March 2016. Retrieved 21 March 2017.
3. "Archived copy" (PDF). Archived from the original (PDF) on 15 December 2011. Retrieved 24 July 2012.{{cite web}}: CS1 maint: archived copy as title (link)
4. Rorres, Chris. "Tomb of Archimedes: Sources". Courant Institute of Mathematical Sciences. Archived from the original on 9 December 2006. Retrieved 2 January 2007.
• Yu.D. Burago; V.A. Zalgaller; L.D. Kudryavtsev (2001) [1994], "Area", Encyclopedia of Mathematics, EMS Press
External links
• Surface Area Video at Thinkwell
Cylinder
A cylinder (from Ancient Greek κύλινδρος (kúlindros) 'roller, tumbler')[1] has traditionally been a three-dimensional solid, one of the most basic of curvilinear geometric shapes. In elementary geometry, it is considered a prism with a circle as its base.
A circular right cylinder of height h and diameter d = 2r
Type: smooth surface, algebraic surface
Euler characteristic: 2
Symmetry group: O(2)×O(1)
Surface area: 2πr(r + h)
Volume: πr²h
A cylinder may also be defined as an infinite curvilinear surface in various modern branches of geometry and topology. The shift in the basic meaning—solid versus surface (as in ball and sphere)—has created some ambiguity with terminology. The two concepts may be distinguished by referring to solid cylinders and cylindrical surfaces. In the literature the unadorned term cylinder could refer to either of these or to an even more specialized object, the right circular cylinder.
Types
The definitions and results in this section are taken from the 1913 text Plane and Solid Geometry by George Wentworth and David Eugene Smith (Wentworth & Smith 1913).
A cylindrical surface is a surface consisting of all the points on all the lines which are parallel to a given line and which pass through a fixed plane curve in a plane not parallel to the given line. Any line in this family of parallel lines is called an element of the cylindrical surface. From a kinematics point of view, given a plane curve, called the directrix, a cylindrical surface is that surface traced out by a line, called the generatrix, not in the plane of the directrix, moving parallel to itself and always passing through the directrix. Any particular position of the generatrix is an element of the cylindrical surface.
A solid bounded by a cylindrical surface and two parallel planes is called a (solid) cylinder. The line segments determined by an element of the cylindrical surface between the two parallel planes is called an element of the cylinder. All the elements of a cylinder have equal lengths. The region bounded by the cylindrical surface in either of the parallel planes is called a base of the cylinder. The two bases of a cylinder are congruent figures. If the elements of the cylinder are perpendicular to the planes containing the bases, the cylinder is a right cylinder, otherwise it is called an oblique cylinder. If the bases are disks (regions whose boundary is a circle) the cylinder is called a circular cylinder. In some elementary treatments, a cylinder always means a circular cylinder.[2]
The height (or altitude) of a cylinder is the perpendicular distance between its bases.
The cylinder obtained by rotating a line segment about a fixed line that it is parallel to is a cylinder of revolution. A cylinder of revolution is a right circular cylinder. The height of a cylinder of revolution is the length of the generating line segment. The line that the segment is revolved about is called the axis of the cylinder and it passes through the centers of the two bases.
Right circular cylinders
The bare term cylinder often refers to a solid cylinder with circular ends perpendicular to the axis, that is, a right circular cylinder, as shown in the figure. The cylindrical surface without the ends is called an open cylinder. The formulae for the surface area and the volume of a right circular cylinder have been known from early antiquity.
A right circular cylinder can also be thought of as the solid of revolution generated by rotating a rectangle about one of its sides. These cylinders are used in an integration technique (the "disk method") for obtaining volumes of solids of revolution.[3]
A tall and thin needle cylinder has a height much greater than its diameter, whereas a short and wide disk cylinder has a diameter much greater than its height.
Properties
Cylindric sections
A cylindric section is the intersection of a cylinder's surface with a plane. They are, in general, curves and are special types of plane sections. The cylindric section by a plane that contains two elements of a cylinder is a parallelogram.[4] Such a cylindric section of a right cylinder is a rectangle.[4]
A cylindric section in which the intersecting plane intersects and is perpendicular to all the elements of the cylinder is called a right section.[5] If a right section of a cylinder is a circle then the cylinder is a circular cylinder. In more generality, if a right section of a cylinder is a conic section (parabola, ellipse, hyperbola) then the solid cylinder is said to be parabolic, elliptic and hyperbolic, respectively.
For a right circular cylinder, there are several ways in which planes can meet a cylinder. First, planes that intersect a base in at most one point. A plane is tangent to the cylinder if it meets the cylinder in a single element. The right sections are circles and all other planes intersect the cylindrical surface in an ellipse.[6] If a plane intersects a base of the cylinder in exactly two points then the line segment joining these points is part of the cylindric section. If such a plane contains two elements, it has a rectangle as a cylindric section, otherwise the sides of the cylindric section are portions of an ellipse. Finally, if a plane contains more than two points of a base, it contains the entire base and the cylindric section is a circle.
In the case of a right circular cylinder with a cylindric section that is an ellipse, the eccentricity e of the cylindric section and semi-major axis a of the cylindric section depend on the radius of the cylinder r and the angle α between the secant plane and cylinder axis, in the following way:
$e=\cos \alpha ,$
$a={\frac {r}{\sin \alpha }}.$
Volume
If the base of a circular cylinder has a radius r and the cylinder has height h, then its volume is given by
V = πr²h.
This formula holds whether or not the cylinder is a right cylinder.[7]
This formula may be established by using Cavalieri's principle.
In more generality, by the same principle, the volume of any cylinder is the product of the area of a base and the height. For example, an elliptic cylinder with a base having semi-major axis a, semi-minor axis b and height h has a volume V = Ah, where A is the area of the base ellipse (= πab). This result for right elliptic cylinders can also be obtained by integration, where the axis of the cylinder is taken as the positive x-axis and A(x) = A the area of each elliptic cross-section, thus:
$V=\int _{0}^{h}A(x)dx=\int _{0}^{h}\pi abdx=\pi ab\int _{0}^{h}dx=\pi abh.$
Using cylindrical coordinates, the volume of a right circular cylinder can be calculated by integration:
$V=\int _{0}^{h}\int _{0}^{2\pi }\int _{0}^{r}s\,\,ds\,d\phi \,dz=\pi \,r^{2}\,h.$
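Since the integrand s does not depend on φ or z, those two integrals contribute only the factors 2π and h; the remaining one-dimensional integral can be checked numerically. A sketch (assuming Python with NumPy):

```python
import numpy as np

def cylinder_volume_numeric(r, h, n=1000):
    """Approximate the triple integral of s ds dphi dz by a midpoint sum over s,
    with the phi and z integrals contributing the exact factors 2*pi and h."""
    s = (np.arange(n) + 0.5) * (r / n)
    return np.sum(s) * (r / n) * 2 * np.pi * h

r, h = 1.3, 2.7
print(cylinder_volume_numeric(r, h), np.pi * r ** 2 * h)   # both approx. 14.335
```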
Surface area
Having radius r and altitude (height) h, the surface area of a right circular cylinder, oriented so that its axis is vertical, consists of three parts:
• the area of the top base: πr²
• the area of the bottom base: πr²
• the area of the side: 2πrh
The area of the top and bottom bases is the same, and is called the base area, B. The area of the side is known as the lateral area, L.
An open cylinder does not include either top or bottom elements, and therefore has surface area (lateral area)
L = 2πrh.
The surface area of the solid right circular cylinder is made up the sum of all three components: top, bottom and side. Its surface area is therefore,
A = L + 2B = 2πrh + 2πr² = 2πr(h + r) = πd(r + h),
where d = 2r is the diameter of the circular top or bottom.
For a given volume, the right circular cylinder with the smallest surface area has h = 2r. Equivalently, for a given surface area, the right circular cylinder with the largest volume has h = 2r, that is, the cylinder fits snugly in a cube of side length = altitude ( = diameter of base circle).[8]
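The h = 2r claim can be confirmed with a coarse search over radii at fixed volume. A sketch (assuming Python with NumPy; the search grid is arbitrary):

```python
import numpy as np

V = 1.0                                   # fixed volume
r = np.linspace(0.05, 2.0, 20000)         # candidate radii
h = V / (np.pi * r ** 2)                  # height forced by the volume constraint
area = 2 * np.pi * r * (r + h)            # closed-cylinder surface area

best = np.argmin(area)
print(h[best] / r[best])                  # approx. 2: the minimal-area cylinder has h = 2r
```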
The lateral area, L, of a circular cylinder, which need not be a right cylinder, is more generally given by:
L = e × p,
where e is the length of an element and p is the perimeter of a right section of the cylinder.[9] This produces the previous formula for lateral area when the cylinder is a right circular cylinder.
Right circular hollow cylinder (cylindrical shell)
A right circular hollow cylinder (or cylindrical shell) is a three-dimensional region bounded by two right circular cylinders having the same axis and two parallel annular bases perpendicular to the cylinders' common axis, as in the diagram.
Let the height be h, internal radius r, and external radius R. The volume is given by
$V=\pi (R^{2}-r^{2})h=2\pi \left({\frac {R+r}{2}}\right)h(R-r).$
Thus, the volume of a cylindrical shell equals 2π(average radius)(altitude)(thickness).[10]
The surface area, including the top and bottom, is given by
$A=2\pi (R+r)h+2\pi (R^{2}-r^{2}).$
Cylindrical shells are used in a common integration technique for finding volumes of solids of revolution.[11]
On the Sphere and Cylinder
Main article: On the Sphere and Cylinder
In the treatise by this name, written c. 225 BCE, Archimedes obtained the result of which he was most proud, namely obtaining the formulas for the volume and surface area of a sphere by exploiting the relationship between a sphere and its circumscribed right circular cylinder of the same height and diameter. The sphere has a volume two-thirds that of the circumscribed cylinder and a surface area two-thirds that of the cylinder (including the bases). Since the values for the cylinder were already known, he obtained, for the first time, the corresponding values for the sphere. The volume of a sphere of radius r is 4/3πr³ = 2/3 (2πr³). The surface area of this sphere is 4πr² = 2/3 (6πr²). A sculpted sphere and cylinder were placed on the tomb of Archimedes at his request.
Cylindrical surfaces
In some areas of geometry and topology the term cylinder refers to what has been called a cylindrical surface. A cylinder is defined as a surface consisting of all the points on all the lines which are parallel to a given line and which pass through a fixed plane curve in a plane not parallel to the given line.[12] Such cylinders have, at times, been referred to as generalized cylinders. Through each point of a generalized cylinder there passes a unique line that is contained in the cylinder.[13] Thus, this definition may be rephrased to say that a cylinder is any ruled surface spanned by a one-parameter family of parallel lines.
A cylinder having a right section that is an ellipse, parabola, or hyperbola is called an elliptic cylinder, parabolic cylinder and hyperbolic cylinder, respectively. These are degenerate quadric surfaces.[14]
When the principal axes of a quadric are aligned with the reference frame (always possible for a quadric), a general equation of the quadric in three dimensions is given by
$f(x,y,z)=Ax^{2}+By^{2}+Cz^{2}+Dx+Ey+Gz+H=0,$
with the coefficients being real numbers and not all of A, B and C being 0. If at least one variable does not appear in the equation, then the quadric is degenerate. If one variable is missing, we may assume by an appropriate rotation of axes that the variable z does not appear and the general equation of this type of degenerate quadric can be written as[15]
$A\left(x+{\frac {D}{2A}}\right)^{2}+B\left(y+{\frac {E}{2B}}\right)^{2}=\rho ,$
where
$\rho =-H+{\frac {D^{2}}{4A}}+{\frac {E^{2}}{4B}}.$
Elliptic cylinder
If AB > 0 this is the equation of an elliptic cylinder.[15] Further simplification can be obtained by translation of axes and scalar multiplication. If $\rho $ has the same sign as the coefficients A and B, then the equation of an elliptic cylinder may be rewritten in Cartesian coordinates as:
$\left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{b}}\right)^{2}=1.$
This equation of an elliptic cylinder is a generalization of the equation of the ordinary, circular cylinder (a = b). Elliptic cylinders are also known as cylindroids, but that name is ambiguous, as it can also refer to the Plücker conoid.
If $\rho $ has a different sign than the coefficients, we obtain the imaginary elliptic cylinders:
$\left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{b}}\right)^{2}=-1,$
which have no real points on them. ($\rho =0$ gives a single real point.)
Hyperbolic cylinder
If A and B have different signs and $\rho \neq 0$, we obtain the hyperbolic cylinders, whose equations may be rewritten as:
$\left({\frac {x}{a}}\right)^{2}-\left({\frac {y}{b}}\right)^{2}=1.$
Parabolic cylinder
Finally, if AB = 0 assume, without loss of generality, that B = 0 and A = 1 to obtain the parabolic cylinders with equations that can be written as:[16]
${x}^{2}+2a{y}=0.$
Projective geometry
In projective geometry, a cylinder is simply a cone whose apex (vertex) lies on the plane at infinity. If the cone is a quadratic cone, the plane at infinity (which passes through the vertex) can intersect the cone at two real lines, a single real line (actually a coincident pair of lines), or only at the vertex. These cases give rise to the hyperbolic, parabolic or elliptic cylinders respectively.[17]
This concept is useful when considering degenerate conics, which may include the cylindrical conics.
Prisms
A solid circular cylinder can be seen as the limiting case of a n-gonal prism where n approaches infinity. The connection is very strong and many older texts treat prisms and cylinders simultaneously. Formulas for surface area and volume are derived from the corresponding formulas for prisms by using inscribed and circumscribed prisms and then letting the number of sides of the prism increase without bound.[18] One reason for the early emphasis (and sometimes exclusive treatment) on circular cylinders is that a circular base is the only type of geometric figure for which this technique works with the use of only elementary considerations (no appeal to calculus or more advanced mathematics). Terminology about prisms and cylinders is identical. Thus, for example, since a truncated prism is a prism whose bases do not lie in parallel planes, a solid cylinder whose bases do not lie in parallel planes would be called a truncated cylinder.
From a polyhedral viewpoint, a cylinder can also be seen as a dual of a bicone as an infinite-sided bipyramid.
Family of uniform n-gonal prisms
Prism name: digonal prism, (trigonal) triangular prism, (tetragonal) square prism, pentagonal prism, hexagonal prism, heptagonal prism, octagonal prism, enneagonal prism, decagonal prism, hendecagonal prism, dodecagonal prism, ..., apeirogonal prism
Vertex configuration: 2.4.4, 3.4.4, 4.4.4, 5.4.4, 6.4.4, 7.4.4, 8.4.4, 9.4.4, 10.4.4, 11.4.4, 12.4.4, ..., ∞.4.4
See also
• List of shapes
• Steinmetz solid, the intersection of two or three perpendicular cylinders
Notes
1. κύλινδρος Archived 2013-07-30 at the Wayback Machine, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus
2. Jacobs, Harold R. (1974), Geometry, W. H. Freeman and Co., p. 607, ISBN 0-7167-0456-0
3. Swokowski 1983, p. 283
4. Wentworth & Smith 1913, p. 354
5. Wentworth & Smith 1913, p. 357
6. "MathWorld: Cylindric section". Archived from the original on 2008-04-23.
7. Wentworth & Smith 1913, p. 359
8. Lax, Peter D.; Terrell, Maria Shea (2013), Calculus With Applications, Undergraduate Texts in Mathematics, Springer, p. 178, ISBN 9781461479468, archived from the original on 2018-02-06.
9. Wentworth & Smith 1913, p. 358
10. Swokowski 1983, p. 292
11. Swokowski 1983, p. 291
12. Albert 2016, p. 43
13. Albert 2016, p. 49
14. Brannan, David A.; Esplen, Matthew F.; Gray, Jeremy J. (1999), Geometry, Cambridge University Press, p. 34, ISBN 978-0-521-59787-6
15. Albert 2016, p. 74
16. Albert 2016, p. 75
17. Pedoe, Dan (1988) [1970], Geometry a Comprehensive Course, Dover, p. 398, ISBN 0-486-65812-0
18. Slaught, H.E.; Lennes, N.J. (1919), Solid Geometry with Problems and Applications (PDF) (Revised ed.), Allyn and Bacon, pp. 79–81, archived (PDF) from the original on 2013-03-06
References
• Albert, Abraham Adrian (2016) [1949], Solid Analytic Geometry, Dover, ISBN 978-0-486-81026-3
• Swokowski, Earl W. (1983), Calculus with Analytic Geometry (Alternate ed.), Prindle, Weber & Schmidt, ISBN 0-87150-341-7
• Wentworth, George; Smith, David Eugene (1913), Plane and Solid Geometry, Ginn and Co.
External links
Wikimedia Commons has media related to Cylinder (geometry).
Wikisource has the text of the 1911 Encyclopædia Britannica article "Cylinder".
Look up cylinder in Wiktionary, the free dictionary.
• Weisstein, Eric W. "Cylinder". MathWorld.
• Surface area of a cylinder at MATHguide
• Volume of a cylinder at MATHguide
Prism (geometry)
In geometry, a prism is a polyhedron comprising an n-sided polygon base, a second base which is a translated copy (rigidly moved without rotation) of the first, and n other faces, necessarily all parallelograms, joining corresponding sides of the two bases. All cross-sections parallel to the bases are translations of the bases. Prisms are named after their bases, e.g. a prism with a pentagonal base is called a pentagonal prism. Prisms are a subclass of prismatoids.
Set of uniform n-gonal prisms
Example: uniform hexagonal prism (n = 6)
Type: uniform in the sense of semiregular polyhedron
Faces: two n-sided regular polygons, n squares
Edges: 3n
Vertices: 2n
Vertex configuration: 4.4.n
Schläfli symbol: {n}×{ },[1] t{2,n}
Conway notation: Pn
Symmetry group: Dnh, [n,2], (*n22), order 4n
Rotation group: Dn, [n,2]+, (n22), order 2n
Dual polyhedron: convex dual-uniform n-gonal bipyramid
Properties: convex, regular polygon faces, isogonal, translated bases, sides ⊥ bases
Net: example net of a uniform enneagonal prism (n = 9)
Like many basic geometric terms, the word prism (from Greek πρίσμα (prisma) 'something sawed') was first used in Euclid's Elements. Euclid defined the term in Book XI as “a solid figure contained by two opposite, equal and parallel planes, while the rest are parallelograms”. However, this definition has been criticized for not being specific enough in relation to the nature of the bases, which caused confusion among later geometry writers.[2][3]
Oblique vs right
An oblique prism is a prism in which the joining edges and faces are not perpendicular to the base faces.
Example: a parallelepiped is an oblique prism whose base is a parallelogram, or equivalently a polyhedron with six parallelogram faces.
A right prism is a prism in which the joining edges and faces are perpendicular to the base faces.[4] This applies if and only if all the joining faces are rectangular.
The dual of a right n-prism is a right n-bipyramid.
A right prism (with rectangular sides) with regular n-gon bases has Schläfli symbol { }×{n}. It approaches a cylinder as n approaches infinity.
Special cases
• A right rectangular prism (with a rectangular base) is also called a cuboid, or informally a rectangular box. A right rectangular prism has Schläfli symbol { }×{ }×{ }.
• A right square prism (with a square base) is also called a square cuboid, or informally a square box.
Note: some texts may apply the term rectangular prism or square prism to both a right rectangular-based prism and a right square-based prism.
Regular prism
A regular prism is a prism with regular bases.
Uniform prism
A uniform prism or semiregular prism is a right prism with regular bases and all edges of the same length.
Thus all the side faces of a uniform prism are squares, and all of its faces are regular polygons. Such prisms are isogonal; thus they are uniform polyhedra. They form one of the two infinite series of semiregular polyhedra, the other series being formed by the antiprisms.
A uniform n-gonal prism has Schläfli symbol t{2,n}.
Family of uniform n-gonal prisms
Prism name: digonal prism, (trigonal) triangular prism, (tetragonal) square prism, pentagonal prism, hexagonal prism, heptagonal prism, octagonal prism, enneagonal prism, decagonal prism, hendecagonal prism, dodecagonal prism, ..., apeirogonal prism
Vertex configuration: 2.4.4, 3.4.4, 4.4.4, 5.4.4, 6.4.4, 7.4.4, 8.4.4, 9.4.4, 10.4.4, 11.4.4, 12.4.4, ..., ∞.4.4
Volume
The volume of a prism is the product of the area of the base by the height, i.e. the distance between the two base faces (in the case of a non-right prism, note that this means the perpendicular distance).
The volume is therefore:
$V=Bh,$
where B is the base area and h is the height.
The volume of a prism whose base is an n-sided regular polygon with side length s is therefore:
$V={\frac {n}{4}}hs^{2}\cot \left({\frac {\pi }{n}}\right).$
Surface area
The surface area of a right prism is:
$2B+Ph,$
where B is the area of the base, h the height, and P the base perimeter.
The surface area of a right prism whose base is a regular n-sided polygon with side length s, and with height h, is therefore:
$A={\frac {n}{2}}s^{2}\cot \left({\frac {\pi }{n}}\right)+nsh.$
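Both formulas are straightforward to implement; taking n = 4 with s = h recovers the cube as a sanity check. A sketch (Python, standard library only; regular_prism is an illustrative name):

```python
import math

def regular_prism(n, s, h):
    """Volume and total surface area of a right prism over a regular n-gon
    with side length s and height h."""
    base_area = n / 4 * s ** 2 / math.tan(math.pi / n)   # (n/4) s^2 cot(pi/n)
    volume = base_area * h
    surface = 2 * base_area + n * s * h                  # 2B + Ph with perimeter P = n*s
    return volume, surface

print(regular_prism(4, 2.0, 2.0))   # (8.0, 24.0): the cube of edge 2
```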
Schlegel diagrams
P3, P4, P5, P6, P7, P8
Symmetry
The symmetry group of a right n-sided prism with regular base is Dnh of order 4n, except in the case of a cube, which has the larger symmetry group Oh of order 48, which has three versions of D4h as subgroups. The rotation group is Dn of order 2n, except in the case of a cube, which has the larger symmetry group O of order 24, which has three versions of D4 as subgroups.
The symmetry group Dnh contains inversion iff n is even.
The hosohedra and dihedra also possess dihedral symmetry, and an n-gonal prism can be constructed via the geometrical truncation of an n-gonal hosohedron, as well as through the cantellation or expansion of an n-gonal dihedron.
Truncated prism
A truncated prism is formed when a prism is sliced by a plane that is not parallel to its bases. A truncated prism's bases are not congruent, and its sides are not parallelograms.[5]
Twisted prism
A twisted prism is a nonconvex polyhedron constructed from a uniform n-prism with each side face bisected on the square diagonal, by twisting the top, usually by π/n radians (180/n degrees) in the same direction, causing sides to be concave.[6][7]
A twisted prism cannot be dissected into tetrahedra without adding new vertices. The smallest case, the triangular form, is called a Schönhardt polyhedron.
An n-gonal twisted prism is topologically identical to the n-gonal uniform antiprism, but has half the symmetry group: Dn, [n,2]+, order 2n. It can be seen as a nonconvex antiprism, with tetrahedra removed between pairs of triangles.
3-gonal 4-gonal 12-gonal
Schönhardt polyhedron
Twisted square prism
Square antiprism
Twisted dodecagonal antiprism
Frustum
A frustum is a similar construction to a prism, with trapezoid lateral faces and differently sized top and bottom polygons.
Star prism
Further information: Prismatic uniform polyhedron
A star prism is a nonconvex polyhedron constructed by two identical star polygon faces on the top and bottom, being parallel and offset by a distance and connected by rectangular faces. A uniform star prism will have Schläfli symbol {p/q} × { }, with p rectangle and 2 {p/q} faces. It is topologically identical to a p-gonal prism.
Examples
{ }×{ }180×{ } ta{3}×{ } {5/2}×{ } {7/2}×{ } {7/3}×{ } {8/3}×{ }
D2h, order 8 D3h, order 12 D5h, order 20 D7h, order 28 D8h, order 32
Crossed prism
A crossed prism is a nonconvex polyhedron constructed from a prism, where the vertices of one base are inverted around the center of this base (or rotated by 180°). This transforms the side rectangular faces into crossed rectangles. For a regular polygon base, the appearance is an n-gonal hour glass. All oblique edges pass through a single body center. Note: no vertex is at this body centre. A crossed prism is topologically identical to an n-gonal prism.
Examples
{ }×{ }180×{ }180 ta{3}×{ }180 {3}×{ }180 {4}×{ }180 {5}×{ }180 {5/2}×{ }180 {6}×{ }180
D2h, order 8 D3d, order 12 D4h, order 16 D5d, order 20 D6d, order 24
Toroidal prism
A toroidal prism is a nonconvex polyhedron like a crossed prism, but without bottom and top base faces, and with simple rectangular side faces closing the polyhedron. This can only be done for even-sided base polygons. These are topological tori, with Euler characteristic of zero. The topological polyhedral net can be cut from two rows of a square tiling (with vertex configuration 4.4.4.4): a band of n squares, each attached to a crossed rectangle. An n-gonal toroidal prism has 2n vertices, 2n faces: n squares and n crossed rectangles, and 4n edges. It is topologically self-dual.
Examples
D4h, order 16 D6h, order 24
v=8, e=16, f=8 v=12, e=24, f=12
Prismatic polytope
A prismatic polytope is a higher-dimensional generalization of a prism. An n-dimensional prismatic polytope is constructed from two (n − 1)-dimensional polytopes, translated into the next dimension.
The prismatic n-polytope elements are doubled from the (n − 1)-polytope elements and then creating new elements from the next lower element.
Take an n-polytope with fi i-face elements (i = 0, ..., n). Its (n + 1)-polytope prism will have 2fi + fi−1 i-face elements. (With f−1 = 0, fn = 1.)
By dimension:
• Take a polygon with n vertices, n edges. Its prism has 2n vertices, 3n edges, and 2 + n faces.
• Take a polyhedron with v vertices, e edges, and f faces. Its prism has 2v vertices, 2e + v edges, 2f + e faces, and 2 + f cells.
• Take a polychoron with v vertices, e edges, f faces, and c cells. Its prism has 2v vertices, 2e + v edges, 2f + e faces, 2c + f cells, and 2 + c hypercells.
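The doubling rule can be coded once and applied to the examples in the list above. A sketch (Python, standard library only); the face vector is given without the trailing 1 for the polytope itself:

```python
def prism_elements(f):
    """Face vector of the prism over a polytope whose face vector is f = (f0, ..., f_{n-1}),
    listed without the trailing 1 for the polytope itself.
    Uses the rule (prism)_i = 2*f_i + f_{i-1}, with f_{-1} = 0 and f_n = 1."""
    f = list(f) + [1]                     # append the polytope itself as its single n-face
    return [2 * fi + (f[i - 1] if i > 0 else 0) for i, fi in enumerate(f)]

print(prism_elements([4, 4]))        # square -> cube:      [8, 12, 6]
print(prism_elements([8, 12, 6]))    # cube   -> tesseract: [16, 32, 24, 8]
```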
Uniform prismatic polytope
See also: Uniform 4-polytope § Prismatic_uniform 4-polytopes
See also: Uniform 5-polytope § Uniform_prismatic forms
A regular n-polytope represented by Schläfli symbol {p,q,...,t} can form a uniform prismatic (n + 1)-polytope represented by a Cartesian product of two Schläfli symbols: {p,q,...,t}×{ }.
By dimension:
• A 0-polytopic prism is a line segment, represented by an empty Schläfli symbol { }.
• A 1-polytopic prism is a rectangle, made from 2 translated line segments. It is represented as the product Schläfli symbol { }×{ }. If it is square, symmetry can be reduced: { }×{ } = {4}.
Example: Square, { }×{ }, two parallel line segments, connected by two line segment sides.
• A polygonal prism is a 3-dimensional prism made from two translated polygons connected by rectangles. A regular polygon {p} can construct a uniform n-gonal prism represented by the product {p}×{ }. If p = 4, with square sides symmetry it becomes a cube: {4}×{ } = {4,3}.
Example: Pentagonal prism, {5}×{ }, two parallel pentagons connected by 5 rectangular sides.
• A polyhedral prism is a 4-dimensional prism made from two translated polyhedra connected by 3-dimensional prism cells. A regular polyhedron {p,q} can construct the uniform polychoric prism, represented by the product {p,q}×{ }. If the polyhedron and the sides are cubes, it becomes a tesseract: {4,3}×{ } = {4,3,3}.
Example: Dodecahedral prism, {5,3}×{ }, two parallel dodecahedra connected by 12 pentagonal prism sides.
• ...
Higher order prismatic polytopes also exist as cartesian products of any two or more polytopes. The dimension of a product polytope is the sum of the dimensions of its elements. The first examples of these exist in 4-dimensional space; they are called duoprisms as the product of two polygons in 4-dimensions.
Regular duoprisms are represented as {p}×{q}, with pq vertices, 2pq edges, pq square faces, p q-gon faces, q p-gon faces, and bounded by p q-gonal prisms and q p-gonal prisms.
For example, {4}×{4}, a 4-4 duoprism is a lower symmetry form of a tesseract, as is {4,3}×{ }, a cubic prism. {4}×{4}×{ } (4-4 duoprism prism), {4,3}×{4} (cube-4 duoprism) and {4,3,3}×{ } (tesseractic prism) are lower symmetry forms of a 5-cube.
See also
• Apeirogonal prism
• Rectified prism
• Prismanes
• List of shapes
References
1. N.W. Johnson: Geometries and Transformations, 2018, ISBN 978-1-107-10340-5, Chapter 11: Finite symmetry groups, 11.3 Pyramids, Prisms, and Antiprisms, Figure 11.3b
2. Thomas Malton (1774). A Royal Road to Geometry: Or, an Easy and Familiar Introduction to the Mathematics. ... By Thomas Malton. ... author, and sold. pp. 360–.
3. James Elliot (1845). Key to the Complete Treatise on Practical Geometry and Mensuration: Containing Full Demonstrations of the Rules ... Longman, Brown, Green, and Longmans. pp. 3–.
4. William F. Kern, James R. Bland, Solid Mensuration with proofs, 1938, p. 28.
5. William F. Kern, James R. Bland, Solid Mensuration with proofs, 1938, p.81
6. The facts on file: Geometry handbook, Catherine A. Gorini, 2003, ISBN 0-8160-4875-4, p.172
7. "Pictures of Twisted Prisms".
• Anthony Pugh (1976). Polyhedra: A visual approach. California: University of California Press Berkeley. ISBN 0-520-03056-7. Chapter 2: Archimedean polyhedra, prisma and antiprisms
External links
Wikisource has the text of the 1911 Encyclopædia Britannica article "Prism".
• Weisstein, Eric W. "Prism". MathWorld.
• Paper models of prisms and antiprisms Free nets of prisms and antiprisms
• Paper models of prisms and antiprisms Using nets generated by Stella
Toric code
The toric code is a topological quantum error correcting code, and an example of a stabilizer code, defined on a two-dimensional spin lattice.[1] It is the simplest and most well studied of the quantum double models.[2] It is also the simplest example of topological order—Z2 topological order (first studied in the context of Z2 spin liquid in 1991).[3][4] The toric code can also be considered to be a Z2 lattice gauge theory in a particular limit.[5] It was introduced by Alexei Kitaev.
The toric code gets its name from its periodic boundary conditions, giving it the shape of a torus. These conditions give the model translational invariance, which is useful for analytic study. However, some experimental realizations require open boundary conditions, allowing the system to be embedded on a 2D surface. The resulting code is typically known as the planar code. This has identical behaviour to the toric code in most, but not all, cases.
Error correction and computation
The toric code is defined on a two-dimensional lattice, usually chosen to be the square lattice, with a spin-½ degree of freedom located on each edge. The boundary conditions are chosen to be periodic. Stabilizer operators are defined on the spins around each vertex $v$ and plaquette (or face, i.e. a vertex of the dual lattice) $p$ of the lattice as follows,
$A_{v}=\prod _{i\in v}\sigma _{i}^{x},\,\,B_{p}=\prod _{i\in p}\sigma _{i}^{z}.$
Here $i\in v$ denotes the edges touching the vertex $v$, and $i\in p$ denotes the edges surrounding the plaquette $p$. The stabilizer space of the code is that for which all stabilizers act trivially; hence for any state $|\psi \rangle $ in this space it holds that
$A_{v}|\psi \rangle =|\psi \rangle ,\,\,\forall v,\,\,B_{p}|\psi \rangle =|\psi \rangle ,\,\,\forall p.$
For the toric code, this space is four-dimensional, and so can be used to store two qubits of quantum information. This can be proven by counting the number of independent stabilizer operators. The occurrence of errors will move the state out of the stabilizer space, resulting in vertices and plaquettes for which the above condition does not hold. The positions of these violations constitute the syndrome of the code, which can be used for error correction.
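The counting argument can be made concrete on a small periodic lattice: X-type and Z-type Pauli operators commute exactly when their supports overlap on an even number of edges, each edge lies in two stars and two plaquettes, and the two resulting product relations leave 2L² − 2 independent stabilizers on 2L² qubits, hence two logical qubits. A sketch (Python, standard library only; the edge-indexing convention is one arbitrary choice):

```python
L = 4                                    # linear size of the periodic lattice

def edge(i, j, d):
    """Edge attached to vertex (i, j): d = 0 is the horizontal edge to its right,
    d = 1 is the vertical edge below it; indices wrap around (periodic boundaries)."""
    return 2 * ((i % L) * L + (j % L)) + d

def star(i, j):                          # support of the X-type stabilizer A_v
    return {edge(i, j, 0), edge(i, j - 1, 0), edge(i, j, 1), edge(i - 1, j, 1)}

def plaquette(i, j):                     # support of the Z-type stabilizer B_p
    return {edge(i, j, 0), edge(i + 1, j, 0), edge(i, j, 1), edge(i, j + 1, 1)}

stars = [star(i, j) for i in range(L) for j in range(L)]
plaqs = [plaquette(i, j) for i in range(L) for j in range(L)]

# X- and Z-type operators commute iff their supports share an even number of edges
assert all(len(a & b) % 2 == 0 for a in stars for b in plaqs)

# each edge appears in exactly two stars and two plaquettes, so the product of all A_v
# (and of all B_p) is the identity: one relation per type
n_qubits = 2 * L * L
independent = (len(stars) - 1) + (len(plaqs) - 1)
print("logical qubits:", n_qubits - independent)   # 2
```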
The unique nature of the topological codes, such as the toric code, is that stabilizer violations can be interpreted as quasiparticles. Specifically, if the code is in a state $|\phi \rangle $ such that,
$A_{v}|\phi \rangle =-|\phi \rangle $,
a quasiparticle known as an $e$ anyon can be said to exist on the vertex $v$. Similarly violations of the $B_{p}$ are associated with so called $m$ anyons on the plaquettes. The stabilizer space therefore corresponds to the anyonic vacuum. Single spin errors cause pairs of anyons to be created and transported around the lattice.
When errors create an anyon pair and move the anyons, one can imagine a path connecting the two composed of all links acted upon. If the anyons then meet and are annihilated, this path describes a loop. If the loop is topologically trivial, it has no effect on the stored information. The annihilation of the anyons, in this case, corrects all of the errors involved in their creation and transport. However, if the loop is topologically non-trivial, though re-annihilation of the anyons returns the state to the stabilizer space, it also implements a logical operation on the stored information. The errors, in this case, are therefore not corrected but consolidated.
Consider the noise model for which bit and phase errors occur independently on each spin, both with probability p. When p is low, this will create sparsely distributed pairs of anyons which have not moved far from their point of creation. Correction can be achieved by identifying the pairs that the anyons were created in (up to an equivalence class), and then re-annihilating them to remove the errors. As p increases, however, it becomes more ambiguous as to how the anyons may be paired without risking the formation of topologically non-trivial loops. This gives a threshold probability, under which the error correction will almost certainly succeed. Through a mapping to the random-bond Ising model, this critical probability has been found to be around 11%.[6]
Other error models may also be considered, and thresholds found. In all cases studied so far, the code has been found to saturate the Hashing bound. For some error models, such as biased errors where bit errors occur more often than phase errors or vice versa, lattices other than the square lattice must be used to achieve the optimal thresholds.[7][8]
These thresholds are upper limits and are useless unless efficient algorithms are found to achieve them. The most well-used algorithm is minimum weight perfect matching.[9] When applied to the noise model with independent bit and flip errors, a threshold of around 10.5% is achieved. This falls only a little short of the 11% maximum. However, matching does not work so well when there are correlations between the bit and phase errors, such as with depolarizing noise.
The means to perform quantum computation on logical information stored within the toric code has been considered, with the properties of the code providing fault-tolerance. It has been shown that extending the stabilizer space using 'holes', vertices or plaquettes on which stabilizers are not enforced, allows many qubits to be encoded into the code. However, a universal set of unitary gates cannot be fault-tolerantly implemented by unitary operations and so additional techniques are required to achieve quantum computing. For example, universal quantum computing can be achieved by preparing magic states, which are consumed to teleport in the required additional gates. Furthermore, the preparation of magic states must be fault tolerant, which can be achieved by magic state distillation on noisy magic states. A measurement-based scheme for quantum computation based upon this principle has been found, whose error threshold is the highest known for a two-dimensional architecture.[10][11]
Hamiltonian and self-correction
Since the stabilizer operators of the toric code are quasilocal, acting only on spins located near each other on a two-dimensional lattice, it is not unrealistic to define the following Hamiltonian,
$H_{\rm {TC}}=-J\sum _{v}A_{v}-J\sum _{p}B_{p},\,\,\,J>0.$
The ground state space of this Hamiltonian is the stabilizer space of the code. Excited states correspond to configurations of anyons, with energy proportional to their number. Local errors are therefore energetically suppressed by the gap, which has been shown to be stable against local perturbations.[12] However, the dynamic effects of such perturbations can still cause problems for the code.[13][14]
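The fourfold ground-state degeneracy of this Hamiltonian on the torus (two encoded qubits) can be checked by counting the independent stabilizer generators over GF(2). The sketch below does this for a small lattice; it is a minimal illustration, and the edge-indexing convention and helper names are assumptions rather than anything fixed by the model.

import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def toric_stabilizer_matrix(L):
    """Binary (X part | Z part) check matrix of the L x L toric code."""
    n = 2 * L * L                                   # edge qubits
    h = lambda i, j: (i % L) * L + (j % L)          # horizontal edge index
    v = lambda i, j: L * L + (i % L) * L + (j % L)  # vertical edge index
    rows = []
    for i in range(L):
        for j in range(L):
            a = np.zeros(2 * n, dtype=np.uint8)     # vertex operator A_v (X type)
            for q in (h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)):
                a[q] = 1
            b = np.zeros(2 * n, dtype=np.uint8)     # plaquette operator B_p (Z type)
            for q in (h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)):
                b[n + q] = 1
            rows += [a, b]
    return np.array(rows)

L = 3
rank = gf2_rank(toric_stabilizer_matrix(L))
print("independent stabilizers:", rank)                      # 2*L*L - 2
print("ground state degeneracy:", 2 ** (2 * L * L - rank))   # 4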
The gap also gives the code a certain resilience against thermal errors, allowing errors to be corrected almost surely for a certain critical time. This time increases with $J$, but since arbitrary increases of this coupling are unrealistic, the protection given by the Hamiltonian still has its limits.
The means to make the toric code, or the planar code, into a fully self-correcting quantum memory are often considered. Self-correction means that the Hamiltonian will naturally suppress errors indefinitely, leading to a lifetime that diverges in the thermodynamic limit. It has been found that this is possible in the toric code only if long-range interactions are present between the anyons.[15][16] Proposals have been made for realizing these in the lab.[17] Another approach is the generalization of the model to higher dimensions, with self-correction possible in 4D with only quasi-local interactions.[18]
Anyon model
As mentioned above, so-called $e$ and $m$ quasiparticles are associated with the vertices and plaquettes of the model, respectively. These quasiparticles can be described as anyons, due to the non-trivial effect of their braiding. Specifically, although both species of anyons are bosonic with respect to themselves, with the braiding of two $e$'s or two $m$'s having no effect, a full monodromy of an $e$ and an $m$ yields a phase of $-1$. Such a result is consistent with neither bosonic nor fermionic statistics, and hence is anyonic.
The anyonic mutual statistics of the quasiparticles demonstrate the logical operations performed by topologically non-trivial loops. Consider the creation of a pair of $e$ anyons, followed by the transport of one around a topologically non-trivial loop, such as that shown on the torus in blue in the figure above, before the pair is re-annihilated. The state is returned to the stabilizer space, but the loop implements a logical operation on one of the stored qubits. If $m$ anyons are similarly moved through the red loop above, a logical operation will also result. The phase of $-1$ arising when the anyons are braided shows that these operations do not commute, but rather anticommute. They may therefore be interpreted as logical $Z$ and $X$ Pauli operators on one of the stored qubits. The corresponding logical Pauli operators on the other qubit correspond to an $m$ anyon following the blue loop and an $e$ anyon following the red. No braiding occurs when $e$ and $m$ pass through parallel paths; the phase of $-1$ therefore does not arise, and the corresponding logical operations commute. This is as should be expected, since these operations act on different qubits.
Since both $e$ and $m$ anyons can be created in pairs, it is clear that both of these quasiparticles are their own antiparticles. A composite particle composed of two $e$ anyons is therefore equivalent to the vacuum, since the vacuum can yield such a pair and such a pair will annihilate to the vacuum. Accordingly, these composites have bosonic statistics, since their braiding is always completely trivial. A composite of two $m$ anyons is similarly equivalent to the vacuum. The creation of such composites is known as the fusion of anyons, and the results can be written in terms of fusion rules. In this case, these take the form,
$e\times e=1,\,\,\,m\times m=1.$
where $1$ denotes the vacuum. A composite of an $e$ and an $m$ is not trivial, and therefore constitutes another quasiparticle in the model, sometimes denoted $\psi $, with fusion rule,
$e\times m=\psi .$
From the braiding statistics of the anyons we see that, since any single exchange of two $\psi $'s involves a full monodromy of a constituent $e$ and $m$, a phase of $-1$ results. This implies fermionic self-statistics for the $\psi $'s.
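The anyon content described above — the fusion rules, the mutual statistics and the fermionic character of $\psi $ — can be summarised by labelling each particle with its pair of $e$ and $m$ charges. The short Python sketch below encodes this bookkeeping; the function names and the charge-label convention are choices made for this illustration only.

# Label each anyon by its (e, m) charge; fusion adds charges mod 2.
ANYONS = {"1": (0, 0), "e": (1, 0), "m": (0, 1), "psi": (1, 1)}
NAMES = {charge: name for name, charge in ANYONS.items()}

def fuse(a, b):
    (ea, ma), (eb, mb) = ANYONS[a], ANYONS[b]
    return NAMES[((ea + eb) % 2, (ma + mb) % 2)]

def monodromy_phase(a, b):
    """Phase from a full braid of a around b: each e/m crossing gives -1."""
    (ea, ma), (eb, mb) = ANYONS[a], ANYONS[b]
    return (-1) ** (ea * mb + ma * eb)

def exchange_phase(a):
    """Self-exchange phase: +1 for 1, e and m, -1 for the fermion psi."""
    ea, ma = ANYONS[a]
    return (-1) ** (ea * ma)

print(fuse("e", "e"), fuse("m", "m"), fuse("e", "m"))   # 1 1 psi
print(monodromy_phase("e", "m"))                        # -1
print([exchange_phase(a) for a in ANYONS])              # [1, 1, 1, -1]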
Generalizations
The use of a torus is not required to form an error correcting code. Other surfaces may also be used, with their topological properties determining the degeneracy of the stabilizer space. In general, quantum error correcting codes defined on two-dimensional spin lattices according to the principles above are known as surface codes.[19]
It is also possible to define similar codes using higher-dimensional spins. These are the quantum double models[20] and string-net models,[21] which allow a greater richness in the behaviour of anyons, and so may be used for more advanced quantum computation and error correction proposals.[22] These not only include models with Abelian anyons, but also those with non-Abelian statistics.[23][24][25]
Experimental progress
The most explicit demonstration of the properties of the toric code has been in state-based approaches. Rather than attempting to realize the Hamiltonian, these simply prepare the code in the stabilizer space. Using this technique, experiments have been able to demonstrate the creation, transport and statistics of the anyons,[26][27][28] as well as the measurement of the topological entanglement entropy.[28] More recent experiments have also demonstrated the error-correction properties of the code.[29][28]
For realizations of the toric code and its generalizations with a Hamiltonian, much progress has been made using Josephson junctions. The theory of how the Hamiltonians may be implemented has been developed for a wide class of topological codes.[30] An experiment has also been performed, realizing the toric code Hamiltonian for a small lattice, and demonstrating the quantum memory provided by its degenerate ground state.[31]
Other theoretical and experimental works towards realizations are based on cold atoms. A toolkit of methods that may be used to realize topological codes with optical lattices has been explored,[32] as have experiments concerning minimal instances of topological order.[33] Such minimal instances of the toric code have been realized experimentally within isolated square plaquettes.[34] Progress is also being made on simulations of the toric code with Rydberg atoms, in which the Hamiltonian and the effects of dissipative noise can be demonstrated.[35][36] Experiments with Rydberg atom arrays have also realized the toric code with periodic boundary conditions in two dimensions by coherently transporting arrays of entangled atoms.[37]
References
1. A. Y. Kitaev, Proceedings of the 3rd International Conference of Quantum Communication and Measurement, Ed. O. Hirota, A. S. Holevo, and C. M. Caves (New York, Plenum, 1997)
2. Kitaev, Alexei (2006). "Anyons in an exactly solved model and beyond". Annals of Physics. 321 (1): 2–111. arXiv:cond-mat/0506438. Bibcode:2006AnPhy.321....2K. doi:10.1016/j.aop.2005.10.005. ISSN 0003-4916. S2CID 118948929.
3. Read, N.; Sachdev, Subir (1 March 1991). "Large-N expansion for frustrated quantum antiferromagnets". Physical Review Letters. 66 (13): 1773–1776. Bibcode:1991PhRvL..66.1773R. doi:10.1103/physrevlett.66.1773. ISSN 0031-9007. PMID 10043303.
4. Wen, X. G. (1 July 1991). "Mean-field theory of spin-liquid states with finite energy gap and topological orders". Physical Review B. 44 (6): 2664–2672. Bibcode:1991PhRvB..44.2664W. doi:10.1103/physrevb.44.2664. ISSN 0163-1829. PMID 9999836.
5. Fradkin, Eduardo; Shenker, Stephen H. (15 June 1979). "Phase diagrams of lattice gauge theories with Higgs fields". Physical Review D. 19 (12): 3682–3697. Bibcode:1979PhRvD..19.3682F. doi:10.1103/physrevd.19.3682. ISSN 0556-2821.
6. Dennis, Eric; Kitaev, Alexei; Landahl, Andrew; Preskill, John (2002). "Topological quantum memory". Journal of Mathematical Physics. 43 (9): 4452–4505. arXiv:quant-ph/0110143. Bibcode:2002JMP....43.4452D. doi:10.1063/1.1499754. ISSN 0022-2488. S2CID 36673677.
7. Röthlisberger, Beat; Wootton, James R.; Heath, Robert M.; Pachos, Jiannis K.; Loss, Daniel (13 February 2012). "Incoherent dynamics in the toric code subject to disorder". Physical Review A. 85 (2): 022313. arXiv:1112.1613. Bibcode:2012PhRvA..85b2313R. doi:10.1103/physreva.85.022313. ISSN 1050-2947. S2CID 118585279.
8. Bombin, H.; Andrist, Ruben S.; Ohzeki, Masayuki; Katzgraber, Helmut G.; Martin-Delgado, M. A. (30 April 2012). "Strong Resilience of Topological Codes to Depolarization". Physical Review X. 2 (2): 021004. arXiv:1202.1852. Bibcode:2012PhRvX...2b1004B. doi:10.1103/physrevx.2.021004. ISSN 2160-3308.
9. Edmonds, Jack (1965). "Paths, Trees, and Flowers". Canadian Journal of Mathematics. 17: 449–467. doi:10.4153/cjm-1965-045-4. ISSN 0008-414X. S2CID 247198603.
10. Raussendorf, Robert; Harrington, Jim (11 May 2007). "Fault-Tolerant Quantum Computation with High Threshold in Two Dimensions". Physical Review Letters. 98 (19): 190504. arXiv:quant-ph/0610082. Bibcode:2007PhRvL..98s0504R. doi:10.1103/physrevlett.98.190504. ISSN 0031-9007. PMID 17677613. S2CID 39504821.
11. Raussendorf, R; Harrington, J; Goyal, K (29 June 2007). "Topological fault-tolerance in cluster state quantum computation". New Journal of Physics. 9 (6): 199. arXiv:quant-ph/0703143. Bibcode:2007NJPh....9..199R. doi:10.1088/1367-2630/9/6/199. ISSN 1367-2630.
12. Bravyi, Sergey; Hastings, Matthew B.; Michalakis, Spyridon (2010). "Topological quantum order: Stability under local perturbations". Journal of Mathematical Physics. 51 (9): 093512. arXiv:1001.0344. Bibcode:2010JMP....51i3512B. doi:10.1063/1.3490195. ISSN 0022-2488. S2CID 115166306.
13. F. Pastawski; A. Kay; N. Schuch; J. I. Cirac (2010). "Limitations of passive protection of quantum information". Quantum Information and Computation. 10 (7&8): 580. arXiv:0911.3843. doi:10.26421/qic10.7-8. ISSN 1533-7146. S2CID 3076085.
14. Freeman, C. Daniel; Herdman, C. M.; Gorman, D. J.; Whaley, K. B. (7 October 2014). "Relaxation dynamics of the toric code in contact with a thermal reservoir: Finite-size scaling in a low-temperature regime". Physical Review B. 90 (13): 134302. arXiv:1405.2315. Bibcode:2014PhRvB..90m4302F. doi:10.1103/physrevb.90.134302. ISSN 1098-0121. S2CID 118724410.
15. Hamma, Alioscia; Castelnovo, Claudio; Chamon, Claudio (18 June 2009). "Toric-boson model: Toward a topological quantum memory at finite temperature". Physical Review B. 79 (24): 245122. arXiv:0812.4622. Bibcode:2009PhRvB..79x5122H. doi:10.1103/physrevb.79.245122. hdl:1721.1/51820. ISSN 1098-0121. S2CID 5202832.
16. Chesi, Stefano; Röthlisberger, Beat; Loss, Daniel (6 August 2010). "Self-correcting quantum memory in a thermal environment". Physical Review A. 82 (2): 022305. arXiv:0908.4264. Bibcode:2010PhRvA..82b2305C. doi:10.1103/physreva.82.022305. ISSN 1050-2947. S2CID 118400202.
17. Pedrocchi, Fabio L.; Chesi, Stefano; Loss, Daniel (10 March 2011). "Quantum memory coupled to cavity modes". Physical Review B. 83 (11): 115415. arXiv:1011.3762. Bibcode:2011PhRvB..83k5415P. doi:10.1103/physrevb.83.115415. ISSN 1098-0121. S2CID 118595257.
18. Alicki, R.; Horodecki, M.; Horodecki, P.; Horodecki, R. (2010). "On Thermal Stability of Topological Qubit in Kitaev's 4D Model". Open Systems & Information Dynamics. 17 (1): 1–20. arXiv:0811.0033. doi:10.1142/s1230161210000023. ISSN 1230-1612. S2CID 26719502.
19. Ghosh, Joydip; Fowler, Austin G.; Geller, Michael R. (19 December 2012). "Surface code with decoherence: An analysis of three superconducting architectures". Physical Review A. 86 (6): 062318. arXiv:1210.5799. Bibcode:2012PhRvA..86f2318G. doi:10.1103/physreva.86.062318. ISSN 1050-2947. S2CID 10196488.
20. Bullock, Stephen S; Brennen, Gavin K (14 March 2007). "Qudit surface codes and gauge theory with finite cyclic groups". Journal of Physics A: Mathematical and Theoretical. 40 (13): 3481–3505. arXiv:quant-ph/0609070. Bibcode:2007JPhA...40.3481B. doi:10.1088/1751-8113/40/13/013. ISSN 1751-8113. S2CID 15630224.
21. Levin, Michael A.; Wen, Xiao-Gang (12 January 2005). "String-net condensation: A physical mechanism for topological phases". Physical Review B. 71 (4): 045110. arXiv:cond-mat/0404617. Bibcode:2005PhRvB..71d5110L. doi:10.1103/PhysRevB.71.045110. S2CID 51962817.
22. Wootton, James R.; Lahtinen, Ville; Doucot, Benoit; Pachos, Jiannis K. (2011). "Engineering complex topological memories from simple Abelian models". Annals of Physics. 326 (9): 2307–2314. arXiv:0908.0708. Bibcode:2011AnPhy.326.2307W. doi:10.1016/j.aop.2011.05.008. ISSN 0003-4916. S2CID 119288871.
23. Aguado, M.; Brennen, G. K.; Verstraete, F.; Cirac, J. I. (22 December 2008). "Creation, Manipulation, and Detection of Abelian and Non-Abelian Anyons in Optical Lattices". Physical Review Letters. 101 (26): 260501. arXiv:0802.3163. Bibcode:2008PhRvL.101z0501A. doi:10.1103/physrevlett.101.260501. hdl:1854/LU-8589252. ISSN 0031-9007. PMID 19113760. S2CID 11619038.
24. Brennen, G K; Aguado, M; Cirac, J I (22 May 2009). "Simulations of quantum double models". New Journal of Physics. 11 (5): 053009. arXiv:0901.1345. Bibcode:2009NJPh...11e3009B. doi:10.1088/1367-2630/11/5/053009. ISSN 1367-2630.
25. Liu, Yu-Jie; Shtengel, Kirill; Smith, Adam; Pollmann, Frank (2022-11-07). "Methods for Simulating String-Net States and Anyons on a Digital Quantum Computer". PRX Quantum. 3 (4): 040315. doi:10.1103/PRXQuantum.3.040315.
26. Pachos, J K; Wieczorek, W; Schmid, C; Kiesel, N; Pohlner, R; Weinfurter, H (12 August 2009). "Revealing anyonic features in a toric code quantum simulation". New Journal of Physics. 11 (8): 083010. Bibcode:2009NJPh...11h3010P. doi:10.1088/1367-2630/11/8/083010. ISSN 1367-2630.
27. C.-Y. Lu, et al., Phys. Rev. Lett. 102, 030502 (2009).
28. Satzinger, K. J.; Liu, Y.; Smith, A.; Knapp, C.; Newman, M.; Jones, C.; Chen, Z.; Quintana, C.; Mi, X.; Dunsworth, A.; Gidney, C. (2021-04-02). "Realizing topologically ordered states on a quantum processor". Science. 374 (6572): 1237–1241. arXiv:2104.01180. Bibcode:2021Sci...374.1237S. doi:10.1126/science.abi8378. PMID 34855491. S2CID 233025160.
29. Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G.; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei (22 February 2012). "Experimental demonstration of topological error correction". Nature. 482 (7386): 489–494. arXiv:0905.1542. Bibcode:2012Natur.482..489Y. doi:10.1038/nature10770. ISSN 0028-0836. PMID 22358838. S2CID 4307662.
30. Douçot, Benoit; Ioffe, Lev B.; Vidal, Julien (3 June 2004). "Discrete non-Abelian gauge theories in Josephson-junction arrays and quantum computation". Physical Review B. 69 (21): 214501. arXiv:cond-mat/0302104. Bibcode:2004PhRvB..69u4501D. doi:10.1103/physrevb.69.214501. ISSN 1098-0121. S2CID 119407144.
31. Gladchenko, Sergey; Olaya, David; Dupont-Ferrier, Eva; Douçot, Benoit; Ioffe, Lev B.; Gershenson, Michael E. (2009). "Superconducting nanocircuits for topologically protected qubits". Nature Physics. 5 (1): 48–53. arXiv:0802.2295. Bibcode:2009NatPh...5...48G. doi:10.1038/nphys1151. ISSN 1745-2473. S2CID 118359424.
32. Micheli, A.; Brennen, G. K.; Zoller, P. (30 April 2006). "A toolbox for lattice-spin models with polar molecules". Nature Physics. 2 (5): 341–347. arXiv:quant-ph/0512222. Bibcode:2006NatPh...2..341M. doi:10.1038/nphys287. ISSN 1745-2473. S2CID 108289844.
33. Paredes, Belén; Bloch, Immanuel (1 January 2008). "Minimum instances of topological matter in an optical plaquette". Physical Review A. 77 (2): 023603. arXiv:0711.3796. Bibcode:2008PhRvA..77b3603P. doi:10.1103/physreva.77.023603. ISSN 1050-2947. S2CID 46143303.
34. Dai, Hanning; Yang, Bing; Reingruber, Andreas; Sun, Hui; Xu, Xiao-Fan; Chen, Yu-Ao; Yuan, Zhen-Sheng; Pan, Jian-Wei (28 August 2017). "Four-body ring-exchange interactions and anyonic statistics within a minimal toric-code Hamiltonian". Nature Physics. 13 (2): 1195–1200. arXiv:1602.05709. Bibcode:2017NatPh..13.1195D. doi:10.1038/NPHYS4243. ISSN 1745-2473. S2CID 118604118.
35. Weimer, Hendrik; Müller, Markus; Lesanovsky, Igor; Zoller, Peter; Büchler, Hans Peter (14 March 2010). "A Rydberg quantum simulator". Nature Physics. 6 (5): 382–388. arXiv:0907.1657. Bibcode:2010NatPh...6..382W. doi:10.1038/nphys1614. ISSN 1745-2473. S2CID 54710282.
36. Semeghini, Giulia; Levine, Harry; Keesling, Alexander; Ebadi, Sepehr; Wang, Tout T.; Bluvstein, Dolev; Verresen, Ruben; Pichler, Hannes; Kalinowski, Marcin; Samajdar, Rhine; Omran, Ahmed (2021). "Probing Topological Spin Liquids on a Programmable Quantum Simulator". Science. 374 (6572): 1242–1247. arXiv:2104.04119. Bibcode:2021Sci...374.1242S. doi:10.1126/science.abi8794. PMID 34855494. S2CID 233204440.
37. Bluvstein, Dolev; Levine, Harry; Semeghini, Giulia; Wang, Tout; Ebadi, Sepehr; Kalinowski, Marcin; Maskara, Nishad; Pichler, Hannes; Greiner, Marcus; Vuletic, Vladan; Lukin, Misha (April 20, 2022). "A quantum processor based on coherent transport of entangled atom arrays". Nature. 604 (7906): 451–456. doi:10.1038/s41586-022-04592-6. S2CID 244954259. Retrieved 28 August 2022.
External links
• https://skepsisfera.blogspot.com/2010/04/kitaevs-toric-code.html
| Wikipedia |
Developable surface
In mathematics, a developable surface (or torse: archaic) is a smooth surface with zero Gaussian curvature. That is, it is a surface that can be flattened onto a plane without distortion (i.e. it can be bent without stretching or compression). Conversely, it is a surface which can be made by transforming a plane (i.e. "folding", "bending", "rolling", "cutting" and/or "gluing"). In three dimensions all developable surfaces are ruled surfaces (but not vice versa). There are developable surfaces in four-dimensional space $\mathbb {R} ^{4}$ which are not ruled.[1]
The envelope of a single-parameter family of planes is called a developable surface.
Particulars
The developable surfaces which can be realized in three-dimensional space include:
• Cylinders and, more generally, the "generalized" cylinder; its cross-section may be any smooth curve
• Cones and, more generally, conical surfaces; away from the apex
• The oloid and the sphericon are members of a special family of solids that develop their entire surface when rolling on a flat plane.
• Planes (trivially); which may be viewed as a cylinder whose cross-section is a line
• Tangent developable surfaces; which are constructed by extending the tangent lines of a spatial curve.
• The torus has a metric under which it is developable, which can be embedded into three-dimensional space by the Nash embedding theorem[2] and has a simple representation in four dimensions as the Cartesian product of two circles: see Clifford torus.
Formally, in mathematics, a developable surface is a surface with zero Gaussian curvature. One consequence of this is that all "developable" surfaces embedded in 3D-space are ruled surfaces (though hyperboloids are examples of ruled surfaces which are not developable). Because of this, many developable surfaces can be visualised as the surface formed by moving a straight line in space. For example, a cone is formed by keeping one end-point of a line fixed whilst moving the other end-point in a circle.
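As a concrete check of these statements, the Gaussian curvature of a parametrised surface can be computed from its first and second fundamental forms. The SymPy sketch below is a minimal illustration; the particular parametrisations (a unit cone, a unit sphere and a helicoid of pitch 1) and the helper name gaussian_curvature are choices made here, not standard library functions. It confirms that the cone has zero curvature everywhere away from its apex, whereas the sphere and the helicoid, discussed below, do not.

import sympy as sp

def gaussian_curvature(r, u, v):
    """Gaussian curvature of the surface r(u, v), using the unnormalised normal.

    With c = r_u x r_v, each second-fundamental-form coefficient picks up a
    factor |c|, and |c|^2 = EG - F^2, so K = (LN - M^2) / (EG - F^2)^2.
    """
    ru, rv = r.diff(u), r.diff(v)
    E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)
    c = ru.cross(rv)
    L_, M_, N_ = r.diff(u, 2).dot(c), r.diff(u, v).dot(c), r.diff(v, 2).dot(c)
    return sp.simplify((L_ * N_ - M_**2) / (E * G - F**2)**2)

u, v = sp.symbols('u v', positive=True)

cone     = sp.Matrix([u * sp.cos(v), u * sp.sin(v), u])    # away from the apex u = 0
sphere   = sp.Matrix([sp.cos(u) * sp.cos(v), sp.cos(u) * sp.sin(v), sp.sin(u)])
helicoid = sp.Matrix([u * sp.cos(v), u * sp.sin(v), v])

print(gaussian_curvature(cone, u, v))       # 0                    -> developable
print(gaussian_curvature(sphere, u, v))     # 1                    -> not developable
print(gaussian_curvature(helicoid, u, v))   # -1/(u**2 + 1)**2     -> not developable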
Application
Developable surfaces have several practical applications.
Developable mechanisms are mechanisms that conform to a developable surface and can exhibit motion (deploy) off that surface.[3][4]
Many cartographic projections involve projecting the Earth to a developable surface and then "unrolling" the surface into a region on the plane.
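For instance, in the central cylindrical projection the sphere is projected from its centre onto a cylinder tangent at the equator, which is then unrolled into a plane. The brief sketch below illustrates that map; the function name and the use of the mean Earth radius are illustrative choices rather than anything prescribed.

import math

def central_cylindrical(lat_deg, lon_deg, R=6371.0):
    """Project (latitude, longitude) in degrees onto an unrolled tangent cylinder.

    Returns planar (x, y) in the same units as R (kilometres here); the map is
    undefined at the poles, where tan(latitude) diverges.
    """
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    return R * lam, R * math.tan(phi)

print(central_cylindrical(48.9, 2.35))   # roughly Paris, as an arbitrary test point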
Since developable surfaces may be constructed by bending a flat sheet, they are also important in manufacturing objects from sheet metal, cardboard, and plywood. An industry which uses developed surfaces extensively is shipbuilding.[5]
Non-developable surface
Most smooth surfaces (and most surfaces in general) are not developable surfaces. Non-developable surfaces are variously referred to as having "double curvature", "doubly curved", "compound curvature", "non-zero Gaussian curvature", etc.
Some of the most often-used non-developable surfaces are:
• Spheres are not developable surfaces under any metric; unlike the torus, the sphere admits no metric of zero Gaussian curvature, so it cannot be unrolled onto a plane.
• The helicoid is a ruled surface – but unlike the ruled surfaces mentioned above, it is not a developable surface.
• The hyperbolic paraboloid and the hyperboloid are slightly different doubly ruled surfaces – but unlike the ruled surfaces mentioned above, neither one is a developable surface.
Applications of non-developable surfaces
Many gridshells, tensile structures and similar constructions gain strength from their doubly curved form.
See also
• Development (differential geometry)
• Developable roller
References
1. Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, pp. 341–342, ISBN 978-0-8284-1087-8
2. Borrelli, V.; Jabrane, S.; Lazarus, F.; Thibert, B. (April 2012), "Flat tori in three-dimensional space and convex integration", Proceedings of the National Academy of Sciences, 109 (19): 7218–7223, doi:10.1073/pnas.1118478109, PMC 3358891, PMID 22523238.
3. "Developable Mechanisms | About Developable Mechanisms". compliantmechanisms. Retrieved 2019-02-14.
4. Howell, Larry L.; Lang, Robert J.; Magleby, Spencer P.; Zimmerman, Trent K.; Nelson, Todd G. (2019-02-13). "Developable mechanisms on developable surfaces". Science Robotics. 4 (27): eaau5171. doi:10.1126/scirobotics.aau5171. ISSN 2470-9476. PMID 33137737.
5. Nolan, T. J. (1970), Computer-Aided Design of Developable Hull Surfaces, Ann Arbor: University Microfilms International
External links
Wikimedia Commons has media related to Developable surfaces.
• Weisstein, Eric W. "Developable Surface". MathWorld.
• Examples of developable surfaces on the Rhino3DE website
| Wikipedia |