Symmetric closure
In mathematics, the symmetric closure of a binary relation $R$ on a set $X$ is the smallest symmetric relation on $X$ that contains $R.$
For example, if $X$ is a set of airports and $xRy$ means "there is a direct flight from airport $x$ to airport $y$", then the symmetric closure of $R$ is the relation "there is a direct flight either from $x$ to $y$ or from $y$ to $x$". Or, if $X$ is the set of humans and $R$ is the relation 'parent of', then the symmetric closure of $R$ is the relation "$x$ is a parent or a child of $y$".
Definition
The symmetric closure $S$ of a relation $R$ on a set $X$ is given by
$S=R\cup \{(y,x):(x,y)\in R\}.$
In other words, the symmetric closure of $R$ is the union of $R$ with its converse relation, $R^{\operatorname {T} }.$
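A minimal sketch of this definition in Python, representing a relation on a finite set as a set of ordered pairs (the example relation is illustrative):

```python
def symmetric_closure(R):
    # union of R with its converse: the smallest symmetric relation containing R
    return R | {(y, x) for (x, y) in R}

# "parent of" on a toy family: the closure adds the "child of" pairs
R = {("alice", "bob"), ("alice", "carol")}
print(sorted(symmetric_closure(R)))
# [('alice', 'bob'), ('alice', 'carol'), ('bob', 'alice'), ('carol', 'alice')]
```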
See also
• Transitive closure – Smallest transitive relation containing a given binary relation
• Reflexive closure – operation on binary relations
References
• Franz Baader and Tobias Nipkow, Term Rewriting and All That, Cambridge University Press, 1998, p. 8
| Wikipedia |
Symmetric convolution
In mathematics, symmetric convolution is a special subset of convolution operations in which the convolution kernel is symmetric across its zero point. Many common convolution-based processes such as Gaussian blur and taking the derivative of a signal in frequency-space are symmetric and this property can be exploited to make these convolutions easier to evaluate.
Convolution theorem
The convolution theorem states that a convolution in the real domain can be represented as a pointwise multiplication across the frequency domain of a Fourier transform. Since sine and cosine transforms are related transforms, a modified version of the convolution theorem can be applied, in which the concept of circular convolution is replaced with symmetric convolution. Using these transforms to compute discrete symmetric convolutions is non-trivial, since discrete sine transforms (DSTs) and discrete cosine transforms (DCTs) can be counter-intuitively incompatible for computing symmetric convolution: symmetric convolution can only be computed between a fixed set of compatible transforms.
Mutually compatible transforms
In order to compute symmetric convolution effectively, one must know which particular frequency domains (those reachable by transforming real data through DSTs or DCTs) the inputs and outputs of the convolution can occupy, and then tailor the symmetries of the transforms to the required symmetries of the convolution.
The following table documents which combinations of the domains from the main eight commonly used DST I-IV and DCT I-IV satisfy $f*g=h$ where $*$ represents the symmetric convolution operator. Convolution is a commutative operator, and so $f$ and $g$ are interchangeable.
f | g | h
---|---|---
DCT-I | DCT-I | DCT-I
DCT-I | DST-I | DST-I
DST-I | DST-I | -DCT-I
DCT-II | DCT-I | DCT-II
DCT-II | DST-I | DST-II
DST-II | DCT-I | DST-II
DST-II | DST-I | -DCT-II
DCT-II | DCT-II | DCT-I
DCT-II | DST-II | DST-I
DST-II | DST-II | -DCT-I

f | g | h
---|---|---
DCT-III | DCT-III | DCT-III
DCT-III | DST-III | DST-III
DST-III | DST-III | -DCT-III
DCT-IV | DCT-III | DCT-IV
DCT-IV | DST-III | DST-IV
DST-IV | DCT-III | DST-IV
DST-IV | DST-III | -DCT-IV
DCT-IV | DCT-IV | DCT-III
DCT-IV | DST-IV | DST-III
DST-IV | DST-IV | -DCT-III
Forward transforms of $f$, $g$ and $h$, through the transforms specified should allow the symmetric convolution to be computed as a pointwise multiplication, with any excess undefined frequency amplitudes set to zero. Possibilities for symmetric convolutions involving DSTs and DCTs V-VIII derived from the discrete Fourier transforms (DFTs) of odd logical order can be determined by adding four to each type in the above tables.
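As an illustration of the first row of the table (DCT-I with DCT-I yields DCT-I), the following Python sketch sidesteps DCT normalization details by working with explicit whole-sample even extensions and the FFT, whose spectrum for such signals is real and cosine-only; the signal lengths are arbitrary choices:

```python
import numpy as np

def even_extend(x):
    # [a, b, c, d, e] -> [a, b, c, d, e, d, c, b]: whole-sample even symmetry,
    # the symmetry underlying the DCT-I
    return np.concatenate([x, x[-2:0:-1]])

rng = np.random.default_rng(0)
f_half, g_half = rng.standard_normal(5), rng.standard_normal(5)

f, g = even_extend(f_half), even_extend(g_half)
h = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real   # circular convolution

# The spectra of even-symmetric signals are real (cosine-only), so the result
# is again even-symmetric: the first half of h determines all of it.
assert np.allclose(h, even_extend(h[:5]))
```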
Advantages of symmetric convolutions
There are a number of advantages to computing symmetric convolutions in DSTs and DCTs in comparison with the more common circular convolution with the Fourier transform.
Most notably, the implicit symmetry of the transforms involved means that only the data that cannot be inferred through symmetry are required. For instance, using a DCT-II, a symmetric signal need only have its positive half DCT-II transformed, since the frequency domain will implicitly construct the mirrored data comprising the other half. This enables larger convolution kernels to be used at the same cost as smaller kernels circularly convolved on the DFT. Also, the boundary conditions implicit in DSTs and DCTs create edge effects that are often more in keeping with neighbouring data than the periodic effects introduced by using the Fourier transform.
References
• Martucci, S. A. (1994). "Symmetric convolution and the discrete sine and cosine transforms". IEEE Trans. Signal Process. SP-42 (5): 1038–1051. Bibcode:1994ITSP...42.1038M. doi:10.1109/78.295213.
| Wikipedia |
Symmetric derivative
In mathematics, the symmetric derivative is an operation generalizing the ordinary derivative. It is defined as[1][2]
$\lim _{h\to 0}{\frac {f(x+h)-f(x-h)}{2h}}.$
The expression under the limit is sometimes called the symmetric difference quotient.[3][4] A function is said to be symmetrically differentiable at a point x if its symmetric derivative exists at that point.
If a function is differentiable (in the usual sense) at a point, then it is also symmetrically differentiable, but the converse is not true. A well-known counterexample is the absolute value function f(x) = |x|, which is not differentiable at x = 0, but is symmetrically differentiable there, with symmetric derivative 0. For differentiable functions, the symmetric difference quotient does provide a better numerical approximation of the derivative than the usual difference quotient.[3]
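A quick numerical comparison (the test function, point, and step size are arbitrary):

```python
import math

def forward_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

def symmetric_quotient(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-4
exact = math.cos(x)                                       # derivative of sin at 1
print(abs(forward_quotient(math.sin, x, h) - exact))      # O(h) error, ~4e-5
print(abs(symmetric_quotient(math.sin, x, h) - exact))    # O(h^2) error, ~1e-9

# |x| has no ordinary derivative at 0, yet the symmetric quotient is 0 for every h
print(symmetric_quotient(abs, 0.0, h))                    # 0.0
```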
The symmetric derivative at a given point equals the arithmetic mean of the left and right derivatives at that point, if the latter two both exist.[1][2]: 6
Neither Rolle's theorem nor the mean-value theorem hold for the symmetric derivative; some similar but weaker statements have been proved.
Examples
The absolute value function
For the absolute value function $f(x)=|x|$, using the notation $f_{s}(x)$ for the symmetric derivative, we have at $x=0$ that
${\begin{aligned}f_{s}(0)&=\lim _{h\to 0}{\frac {f(0+h)-f(0-h)}{2h}}=\lim _{h\to 0}{\frac {f(h)-f(-h)}{2h}}\\&=\lim _{h\to 0}{\frac {|h|-|{-h}|}{2h}}\\&=\lim _{h\to 0}{\frac {|h|-|h|}{2h}}\\&=\lim _{h\to 0}{\frac {0}{2h}}=0.\\\end{aligned}}$
Hence the symmetric derivative of the absolute value function exists at $x=0$ and is equal to zero, even though its ordinary derivative does not exist at that point (due to a "sharp" turn in the curve at $x=0$).
Note that in this example both the left and right derivatives at 0 exist, but they are unequal (one is −1, while the other is +1); their average is 0, as expected.
The function x−2
For the function $f(x)=1/x^{2}$, at $x=0$ we have
${\begin{aligned}f_{s}(0)&=\lim _{h\to 0}{\frac {f(0+h)-f(0-h)}{2h}}=\lim _{h\to 0}{\frac {f(h)-f(-h)}{2h}}\\[1ex]&=\lim _{h\to 0}{\frac {1/h^{2}-1/(-h)^{2}}{2h}}=\lim _{h\to 0}{\frac {1/h^{2}-1/h^{2}}{2h}}=\lim _{h\to 0}{\frac {0}{2h}}=0.\end{aligned}}$
Again, for this function the symmetric derivative exists at $x=0$, while its ordinary derivative does not exist at $x=0$ due to discontinuity in the curve there. Furthermore, neither the left nor the right derivative is finite at 0, i.e. this is an essential discontinuity.
The Dirichlet function
The Dirichlet function, defined as
$f(x)={\begin{cases}1,&{\text{if }}x{\text{ is rational}}\\0,&{\text{if }}x{\text{ is irrational}}\end{cases}}$
has a symmetric derivative at every $x\in \mathbb {Q} $, but is not symmetrically differentiable at any $x\in \mathbb {R} \setminus \mathbb {Q} $; i.e. the symmetric derivative exists at rational numbers but not at irrational numbers.
Quasi-mean-value theorem
The symmetric derivative does not obey the usual mean-value theorem (of Lagrange). As a counterexample, the symmetric derivative of f(x) = |x| has the image {−1, 0, 1}, but secants for f can have a wider range of slopes; for instance, on the interval [−1, 2], the mean-value theorem would mandate that there exist a point where the (symmetric) derivative takes the value ${\frac {|2|-|-1|}{2-(-1)}}={\frac {1}{3}}$.[5]
A theorem somewhat analogous to Rolle's theorem but for the symmetric derivative was established in 1967 by C. E. Aull, who named it the quasi-Rolle theorem. If f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), and f(a) = f(b) = 0, then there exist two points x, y in (a, b) such that fs(x) ≥ 0 and fs(y) ≤ 0. A lemma, also established by Aull as a stepping stone to this theorem, states that if f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), and additionally f(b) > f(a), then there exists a point z in (a, b) where the symmetric derivative is non-negative, or with the notation used above, fs(z) ≥ 0. Analogously, if f(b) < f(a), then there exists a point z in (a, b) where fs(z) ≤ 0.[5]
The quasi-mean-value theorem for a symmetrically differentiable function states that if f is continuous on the closed interval [a, b] and symmetrically differentiable on the open interval (a, b), then there exist x, y in (a, b) such that[5][2]: 7
$f_{s}(x)\leq {\frac {f(b)-f(a)}{b-a}}\leq f_{s}(y).$
As an application, the quasi-mean-value theorem for f(x) = |x| on an interval containing 0 predicts that the slope of any secant of f is between −1 and 1.
If the symmetric derivative of f has the Darboux property, then the (form of the) regular mean-value theorem (of Lagrange) holds, i.e. there exists z in (a, b) such that[5]
$f_{s}(z)={\frac {f(b)-f(a)}{b-a}}.$
As a consequence, if a function is continuous and its symmetric derivative is also continuous (thus has the Darboux property), then the function is differentiable in the usual sense.[5]
Generalizations
The notion generalizes to higher-order symmetric derivatives and also to n-dimensional Euclidean spaces.
The second symmetric derivative
The second symmetric derivative is defined as[6][2]: 1
$\lim _{h\to 0}{\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}.$
If the (usual) second derivative exists, then the second symmetric derivative exists and is equal to it.[6] The second symmetric derivative may exist, however, even when the (ordinary) second derivative does not. As an example, consider the sign function $\operatorname {sgn}(x)$, which is defined by
$\operatorname {sgn}(x)={\begin{cases}-1&{\text{if }}x<0,\\0&{\text{if }}x=0,\\1&{\text{if }}x>0.\end{cases}}$
The sign function is not continuous at zero, and therefore the second derivative for $x=0$ does not exist. But the second symmetric derivative exists for $x=0$:
$\lim _{h\to 0}{\frac {\operatorname {sgn}(0+h)-2\operatorname {sgn}(0)+\operatorname {sgn}(0-h)}{h^{2}}}=\lim _{h\to 0}{\frac {\operatorname {sgn}(h)-2\cdot 0+(-\operatorname {sgn}(h))}{h^{2}}}=\lim _{h\to 0}{\frac {0}{h^{2}}}=0.$
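A numeric restatement of this computation (the step sizes are arbitrary):

```python
def second_symmetric_quotient(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

sgn = lambda x: (x > 0) - (x < 0)

# sgn(h) + sgn(-h) = 0 for every h, so the quotient vanishes identically at 0
print([second_symmetric_quotient(sgn, 0.0, h) for h in (1.0, 0.1, 0.01)])
# [0.0, 0.0, 0.0]
```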
See also
• Central differencing scheme
• Density point
• Generalizations of the derivative
• Symmetrically continuous function
References
1. Peter R. Mercer (2014). More Calculus of a Single Variable. Springer. p. 173. ISBN 978-1-4939-1926-0.
2. Thomson, Brian S. (1994). Symmetric Properties of Real Functions. Marcel Dekker. ISBN 0-8247-9230-0.
3. Peter D. Lax; Maria Shea Terrell (2013). Calculus With Applications. Springer. p. 213. ISBN 978-1-4614-7946-8.
4. Shirley O. Hockett; David Bock (2005). Barron's how to Prepare for the AP Calculus. Barron's Educational Series. pp. 53. ISBN 978-0-7641-2382-5.
5. Sahoo, Prasanna; Riedel, Thomas (1998). Mean Value Theorems and Functional Equations. World Scientific. pp. 188–192. ISBN 978-981-02-3544-4.
6. A. Zygmund (2002). Trigonometric Series. Cambridge University Press. pp. 22–23. ISBN 978-0-521-89053-3.
• A. B. Kharazishvili (2005). Strange Functions in Real Analysis (2nd ed.). CRC Press. p. 34. ISBN 978-1-4200-3484-4.
• Aull, C. E. (1967). "The first symmetric derivative". Am. Math. Mon. 74 (6): 708–711. doi:10.1080/00029890.1967.12000020.
External links
• "Symmetric derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Approximating the Derivative by the Symmetric Difference Quotient (Wolfram Demonstrations Project)
| Wikipedia |
Symmetric fair cake-cutting
Symmetric fair cake-cutting is a variant of the fair cake-cutting problem, in which fairness is applied not only to the final outcome, but also to the assignment of roles in the division procedure.
As an example, consider a birthday cake that has to be divided between two children with different tastes, such that each child feels that his/her share is "fair", i.e., worth at least 1/2 of the entire cake. They can use the classic divide and choose procedure: Alice cuts the cake into two pieces worth exactly 1/2 in her eyes, and George chooses the piece that he considers more valuable. The outcome is always fair. However, the procedure is not symmetric: while Alice always gets a value of exactly 1/2 of her value, George may get much more than 1/2 of his value. Thus, while Alice does not envy George's share, she does envy George's role in the procedure.
In contrast, consider the alternative procedure in which Alice and George both make half-marks on the cake, i.e., each of them marks the location in which the cake should be cut such that the two pieces are equal in his/her eyes. Then, the cake is cut exactly between these cuts—if Alice's cut is a and George's cut is g, then the cake is cut at (a+g)/2. If a<g, Alice gets the leftmost piece and George the rightmost piece; otherwise Alice gets the rightmost piece and George the leftmost piece. The final outcome is still fair. And here, the roles are symmetric: the only case in which the roles make a difference in the final outcome is when a=g, but in this case, both parts have a value of exactly 1/2 to both children, so the roles do not make a difference in the final value. Hence, the alternative procedure is both fair and symmetric.
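A minimal sketch of this symmetric procedure, assuming each agent's valuation is given by a density on the interval [0, 1]; the helper names and example valuations are illustrative:

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def half_mark(density):
    # the point m at which [0, m] is worth exactly half of the agent's total
    total = quad(density, 0, 1)[0]
    return brentq(lambda m: quad(density, 0, m)[0] - total / 2, 0, 1)

def symmetric_divide(density_alice, density_george):
    a, g = half_mark(density_alice), half_mark(density_george)
    cut = (a + g) / 2
    # the agent whose half-mark lies left of the cut takes the left piece
    if a < g:
        return {"Alice": (0, cut), "George": (cut, 1)}
    return {"Alice": (cut, 1), "George": (0, cut)}

# uniform Alice, right-leaning George: the cut lands between their half-marks
print(symmetric_divide(lambda x: 1.0, lambda x: 2 * x))
# {'Alice': (0, 0.603...), 'George': (0.603..., 1)}
```

Each agent receives a piece worth at least half of their own total, and exchanging the two valuations exchanges only the roles, not the values received.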
The idea was first presented by Manabe and Okamoto,[1] who termed it meta-envy-free.
Several variants of symmetric fair cake-cutting have been proposed:
• Anonymous fair cake-cutting requires that not only the values be equal, but also the pieces themselves be equal.[2] This implies symmetric fairness, but it is stronger. For example, it is not satisfied by the symmetric divide-and-choose procedure above, since in the case that a=g, the first agent always gets the leftmost piece and the second agent always gets the rightmost piece.
• Aristotelian fair cake-cutting requires only that agents with identical value measures receive the same value.[3] This is implied by symmetric fairness, but it is weaker. For example, it is satisfied by the asymmetric version of divide-and-choose: if the agents' valuations are identical, then both of them receive a value of exactly 1/2.
Definitions
There is a cake C, usually assumed to be a 1-dimensional interval. There are n people. Each person i has a value function Vi, which maps subsets of C to non-negative numbers.
A division procedure is a function F that maps n value functions to a partition of C. The piece allocated by F to agent i is denoted by F(V1,...,Vn; i).
Symmetric procedure
A division procedure F is called symmetric if, for any permutation p of (1,...,n), and for every i:
Vi(F(V1,...,Vn; i)) = Vi(F(Vp(1),...,Vp(n); p−1(i)))
In particular, when n=2, a procedure is symmetric if:
V1(F(V1,V2; 1)) = V1(F(V2,V1; 2)) and V2(F(V1,V2; 2)) = V2(F(V2,V1; 1))
This means that agent 1 gets the same value whether he plays first or second, and the same is true for agent 2. As another example, when n=3, the symmetry requirement implies (among others):
V1(F(V1,V2,V3; 1)) = V1(F(V2,V3,V1; 3)) = V1(F(V3,V1,V2; 2)).
Anonymous procedure
A division procedure F is called anonymous if, for any permutation p of (1,...,n), and for every i:
F(V1,...,Vn; i) = F(Vp(1),...,Vp(n); p−1(i))
Any anonymous procedure is symmetric, since if the pieces are equal, their values are surely equal.
But the opposite is not true: it is possible that a permutation gives an agent different pieces with equal values.
Aristotelian procedure
A division procedure F is called aristotelian if, whenever Vi=Vk:
Vi(F(V1,...,Vn; i)) = Vk(F(V1,...,Vn; k))
The criterion is named after Aristotle, who wrote in his book on ethics: "... it is when equals possess or are allotted unequal shares, or persons not equal equal shares, that quarrels and complaints arise". Every symmetric procedure is aristotelian. Let p be the permutation that exchanges i and k. Symmetry implies that:
Vi(F(V1,....Vi,...,Vk,...,Vn; i)) = Vi(F(V1,....Vk,...,Vi,...,Vn; k))
But since Vi=Vk, the two sequences of value-measures are identical, so this implies the definition of aristotelian. Moreover, every envy-free cake-cutting procedure is aristotelian: envy-freeness implies that:
Vi(F(V1,...,Vn; i)) ≥ Vi(F(V1,...,Vn; k))
Vk(F(V1,...,Vn; k)) ≥ Vk(F(V1,...,Vn; i))
But since Vi=Vk, the two inequalities imply that both values are equal.
However, a procedure that satisfies the weaker condition of Proportional cake-cutting is not necessarily aristotelian. Cheze[3] shows an example with 4 agents in which the Even-Paz procedure for proportional cake-cutting may give different values to agents with identical value-measures.
The following chart summarizes the relations between the criteria:
• Anonymous → Symmetric → Aristotelian
• Envy-free → Aristotelian
• Envy-free → Proportional
Procedures
Every procedure can be made "symmetric ex-ante" by randomization. For example, in the asymmetric divide-and-choose, the divider can be selected by tossing a coin. However, such a procedure is not symmetric ex-post. Therefore, the research regarding symmetric fair cake-cutting focuses on deterministic algorithms.
Manabe and Okamoto[1] presented symmetric and envy-free ("meta-envy-free") deterministic procedures for two and three agents.
Nicolo and Yu[2] presented an anonymous, envy-free and Pareto-efficient division protocol for two agents. The protocol implements the allocation in subgame perfect equilibrium, assuming each agent has complete information on the valuation of the other agent.
The symmetric cut and choose procedure for two agents was studied empirically in a lab experiment.[4] Alternative symmetric fair cake-cutting procedures for two agents are rightmost mark[5] and leftmost leaves.[6]
Cheze[3] presented several procedures:
• A general scheme for converting any envy-free procedure into a symmetric deterministic procedure: run the original procedure n! times, once for each permutation of the agents, and choose one of the outcomes according to some topological criterion (e.g. minimizing the number of cuts). This procedure is not practical when n is large.
• An aristotelian proportional procedure for n agents, which requires O(n3) queries and a polynomial number of arithmetic operations by the referee.
• A symmetric proportional procedure for n agents, which requires O(n3) queries, but may require an exponential number of arithmetic operations by the referee.
Aristotelian proportional procedure
The aristotelian procedure of Cheze[3] for proportional cake-cutting extends the lone divider procedure. For convenience, we normalize the valuations such that the value of the entire cake is n for all agents. The goal is to give each agent a piece with a value of at least 1.
1. One player chosen arbitrarily, called the divider, cuts the cake into n pieces whose value in his/her eyes is exactly 1.
2. Construct a bipartite graph G = (X+Y, E) in which each vertex in X is an agent, each vertex in Y is a piece, and there is an edge between an agent x and a piece y iff x values y at least 1.
3. Find a maximum-cardinality envy-free matching in G (a matching in which no unmatched agent is adjacent to a matched piece). Note that the divider is adjacent to all n pieces, so |NG(X)| = n ≥ |X| (where NG(X) is the set of neighbors of X in Y). Hence, a non-empty envy-free matching exists[7] (a brute-force sketch of this step is given after the list below). Suppose w.l.o.g. that the EFM matches agents 1,...,k to pieces X1,...,Xk, and leaves unmatched the agents and pieces from k+1 to n.
4. For each i in 1,...,k for which Vi(Xi) = 1 - give Xi to agent i. Now, the divider and all agents whose value function is identical to the divider's are assigned a piece and have the same value.
5. Consider now the agents i in 1,...,k for which Vi(Xi) > 1. Partition them into subsets with identical value-vector for the pieces X1,...,Xk. For each subset, recursively divide their pieces among them (for example, if agents 1, 3, 4 agree on the values of all the pieces 1,...,k, then divide pieces X1,X3,X4 recursively among them). Now, all agents whose value-function is identical are assigned to the same subset, and they divide a subcake whose value for them is greater than their number, so the precondition for recursion is satisfied.
6. Recursively divide the unmatched pieces Xk+1, ..., Xn among the unmatched agents. Note that, by envy-freeness of the matching, each unmatched agent values each matched piece at less than 1, so he values the remaining pieces at more than the number of agents, so the precondition for recursion is satisfied.
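A brute-force sketch of steps 2-3 for small n, using a hypothetical valuation matrix (row i is agent i's value for each of the divider's pieces, normalized so the whole cake is worth n); it enumerates partial matchings directly rather than using the efficient matching algorithms the procedure assumes:

```python
from itertools import combinations, permutations

def is_envy_free(matching, values):
    # no unmatched agent may value any matched piece at 1 or more
    n = len(values)
    matched_pieces = set(matching.values())
    return all(values[i][j] < 1
               for i in range(n) if i not in matching
               for j in matched_pieces)

def max_envy_free_matching(values):
    n = len(values)
    for size in range(n, 0, -1):                 # try the largest matchings first
        for agents in combinations(range(n), size):
            for pieces in permutations(range(n), size):
                m = dict(zip(agents, pieces))
                if (all(values[i][m[i]] >= 1 for i in m)
                        and is_envy_free(m, values)):
                    return m
    return {}

values = [[1.0, 1.0, 1.0],   # the divider values every piece at exactly 1
          [2.4, 0.3, 0.3],   # this agent accepts only piece 0
          [2.2, 0.5, 0.3]]   # so does this one
print(max_envy_free_matching(values))   # {0: 1}: only the divider is matched
```

Here only the divider can be matched in an envy-free way; agents 1 and 2 then divide the remaining pieces (worth more than 2 to each of them) recursively, as in step 6.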
References
1. Manabe, Yoshifumi; Okamoto, Tatsuaki (2010). "Meta-envy-free Cake-cutting Protocols". Proceedings of the 35th International Conference on Mathematical Foundations of Computer Science. MFCS'10. Berlin, Heidelberg: Springer-Verlag. 6281: 501–512. Bibcode:2010LNCS.6281..501M. doi:10.1007/978-3-642-15155-2_44. ISBN 9783642151545.
2. Nicolò, Antonio; Yu, Yan (2008-09-01). "Strategic divide and choose" (PDF). Games and Economic Behavior. 64 (1): 268–289. doi:10.1016/j.geb.2008.01.006. ISSN 0899-8256.
3. Chèze, Guillaume (2018-04-11). "Don't cry to be the first! Symmetric fair division algorithms exist". arXiv:1804.03833 [cs.GT].
4. Kyropoulou, Maria; Ortega, Josué; Segal-Halevi, Erel (2019). "Fair Cake-Cutting in Practice". Proceedings of the 2019 ACM Conference on Economics and Computation. EC '19. New York, NY, USA: ACM: 547–548. arXiv:1810.08243. doi:10.1145/3328526.3329592. ISBN 9781450367929. S2CID 53041563.
5. Segal-Halevi, Erel; Sziklai, Balázs R. (2018-09-01). "Resource-monotonicity and population-monotonicity in connected cake-cutting". Mathematical Social Sciences. 95: 19–30. arXiv:1703.08928. doi:10.1016/j.mathsocsci.2018.07.001. ISSN 0165-4896. S2CID 16282641.
6. Ortega, Josue (2019-08-08). "Obvious Manipulations in Cake-Cutting". arXiv:1908.02988 [cs.GT].
7. Segal-Halevi, Erel; Aigner-Horev, Elad (2022). "Envy-free matchings in bipartite graphs and their applications to fair division". Information Sciences. 587: 164–187. arXiv:1901.09527. doi:10.1016/j.ins.2021.11.059. S2CID 170079201.
| Wikipedia |
Symmetric function
In mathematics, a function of $n$ variables is symmetric if its value is the same no matter the order of its arguments. For example, a function $f\left(x_{1},x_{2}\right)$ of two arguments is a symmetric function if and only if $f\left(x_{1},x_{2}\right)=f\left(x_{2},x_{1}\right)$ for all $x_{1}$ and $x_{2}$ such that $\left(x_{1},x_{2}\right)$ and $\left(x_{2},x_{1}\right)$ are in the domain of $f.$ The most commonly encountered symmetric functions are polynomial functions, which are given by the symmetric polynomials.
This article is about functions that are invariant under all permutations of their variables. For the generalization of symmetric polynomials to infinitely many variables (in algebraic combinatorics), see ring of symmetric functions. For symmetric functions on elements of a vector space, see symmetric tensor.
A related notion is alternating polynomials, which change sign under an interchange of variables. Aside from polynomial functions, tensors that act as functions of several vectors can be symmetric, and in fact the space of symmetric $k$-tensors on a vector space $V$ is isomorphic to the space of homogeneous polynomials of degree $k$ on $V.$ Symmetric functions should not be confused with even and odd functions, which have a different sort of symmetry.
Symmetrization
Main article: Symmetrization
Given any function $f$ in $n$ variables with values in an abelian group, a symmetric function can be constructed by summing values of $f$ over all permutations of the arguments. Similarly, an anti-symmetric function can be constructed by summing over even permutations and subtracting the sum over odd permutations. These operations are of course not invertible, and could well result in a function that is identically zero for nontrivial functions $f.$ The only general case where $f$ can be recovered if both its symmetrization and antisymmetrization are known is when $n=2$ and the abelian group admits a division by 2 (inverse of doubling); then $f$ is equal to half the sum of its symmetrization and its antisymmetrization.
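A small Python illustration, using real-valued functions (so division is available) and including the conventional 1/n! normalization, under which a function of two variables is exactly the sum of its symmetric and antisymmetric parts, as noted above:

```python
from itertools import permutations
from math import factorial

def symmetrize(f, n):
    return lambda *xs: sum(f(*(xs[i] for i in p))
                           for p in permutations(range(n))) / factorial(n)

def antisymmetrize(f, n):
    def sign(p):  # parity of a permutation via counting inversions
        return (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
    return lambda *xs: sum(sign(p) * f(*(xs[i] for i in p))
                           for p in permutations(range(n))) / factorial(n)

f = lambda x, y: x * y * y                  # neither symmetric nor antisymmetric
g, h = symmetrize(f, 2), antisymmetrize(f, 2)
print(g(2, 3), h(2, 3))                     # 15.0 3.0
print(g(2, 3) + h(2, 3) == f(2, 3))         # True: f is recovered when n = 2
```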
Examples
• Consider the real function
$f(x_{1},x_{2},x_{3})=(x-x_{1})(x-x_{2})(x-x_{3}).$
By definition, a symmetric function with $n$ variables has the property that
$f(x_{1},x_{2},\ldots ,x_{n})=f(x_{2},x_{1},\ldots ,x_{n})=f(x_{3},x_{1},\ldots ,x_{n},x_{n-1}),\quad {\text{ etc.}}$
In general, the function remains the same for every permutation of its variables. This means that, in this case,
$(x-x_{1})(x-x_{2})(x-x_{3})=(x-x_{2})(x-x_{1})(x-x_{3})=(x-x_{3})(x-x_{1})(x-x_{2})$
and so on, for all permutations of $x_{1},x_{2},x_{3}.$
• Consider the function
$f(x,y)=x^{2}+y^{2}-r^{2}.$
If $x$ and $y$ are interchanged the function becomes
$f(y,x)=y^{2}+x^{2}-r^{2},$
which yields exactly the same results as the original $f(x,y).$
• Consider now the function
$f(x,y)=ax^{2}+by^{2}-r^{2}.$
If $x$ and $y$ are interchanged, the function becomes
$f(y,x)=ay^{2}+bx^{2}-r^{2}.$
This function is not the same as the original if $a\neq b,$ which makes it non-symmetric.
Applications
U-statistics
In statistics, an $n$-sample statistic (a function in $n$ variables) that is obtained by bootstrapping symmetrization of a $k$-sample statistic, yielding a symmetric function in $n$ variables, is called a U-statistic. Examples include the sample mean and sample variance.
See also
• Alternating polynomial
• Elementary symmetric polynomial – homogeneous symmetric polynomial in which each possible monomial occurs exactly once with coefficient 1
• Even and odd functions – Mathematical functions with specific symmetries
• Quasisymmetric function
• Ring of symmetric functions
• Symmetrization – process that converts any function in n variables to a symmetric function in n variables
• Vandermonde polynomial – determinant of the Vandermonde matrix
References
• F. N. David, M. G. Kendall & D. E. Barton (1966) Symmetric Function and Allied Tables, Cambridge University Press.
• Joseph P. S. Kung, Gian-Carlo Rota, & Catherine H. Yan (2009) Combinatorics: The Rota Way, §5.1 Symmetric functions, pp 222–5, Cambridge University Press, ISBN 978-0-521-73794-4.
| Wikipedia |
Symmetric hypergraph theorem
The Symmetric hypergraph theorem is a theorem in combinatorics that puts an upper bound on the chromatic number of a graph (or hypergraph in general). The original reference for this result is unknown; it has been called folklore.[1]
Statement
A group $G$ acting on a set $S$ is called transitive if given any two elements $x$ and $y$ in $S$, there exists an element $f$ of $G$ such that $f(x)=y$. A graph (or hypergraph) is called symmetric if its automorphism group is transitive.
Theorem. Let $H=(S,E)$ be a symmetric hypergraph. Let $m=|S|$, and let $\chi (H)$ denote the chromatic number of $H$, and let $\alpha (H)$ denote the independence number of $H$. Then
$\chi (H)\leq 1+{\frac {\ln {m}}{-\ln {(1-\alpha (H)/m)}}}$
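A quick evaluation of the bound (the example graph is an arbitrary choice): the 5-cycle is vertex-transitive with m = 5 and independence number 2, and the bound of roughly 4.15 is consistent with its chromatic number 3.

```python
from math import log

def chromatic_upper_bound(m, alpha):
    # the bound from the theorem above for a symmetric (hyper)graph
    return 1 + log(m) / -log(1 - alpha / m)

print(chromatic_upper_bound(5, 2))   # 4.151...
```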
Applications
This theorem has applications to Ramsey theory, specifically graph Ramsey theory. Using this theorem, a relationship between the graph Ramsey numbers and the extremal numbers can be shown (see Graham-Rothschild-Spencer for the details).
See also
• Ramsey theory
Notes
1. R. Graham, B. Rothschild, J. Spencer. Ramsey Theory. 2nd ed., Wiley, New-York, 1990.
| Wikipedia |
Symmetric inverse semigroup
In abstract algebra, the set of all partial bijections on a set X (a.k.a. one-to-one partial transformations) forms an inverse semigroup, called the symmetric inverse semigroup[1] (actually a monoid) on X. The conventional notation for the symmetric inverse semigroup on a set X is ${\mathcal {I}}_{X}$[2] or ${\mathcal {IS}}_{X}$.[3] In general ${\mathcal {I}}_{X}$ is not commutative.
Details about the origin of the symmetric inverse semigroup are available in the discussion on the origins of the inverse semigroup.
Finite symmetric inverse semigroups
When X is a finite set {1, ..., n}, the inverse semigroup of one-to-one partial transformations is denoted by Cn and its elements are called charts or partial symmetries.[4] The notion of chart generalizes the notion of permutation. A famous example of sets of charts is the family of hypomorphic mapping sets from the reconstruction conjecture in graph theory.[5]
The cycle notation of classical, group-based permutations generalizes to symmetric inverse semigroups by the addition of a notion called a path, which (unlike a cycle) ends when it reaches the "undefined" element; the notation thus extended is called path notation.[5]
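A small sketch of these ideas in Python, modelling a partial bijection on {1, 2, 3} as a dict; the example charts are arbitrary:

```python
def compose(f, g):
    # (f o g)(x) = f(g(x)), defined only where the whole chain is defined
    return {x: f[g[x]] for x in g if g[x] in f}

def inverse(f):
    return {v: k for k, v in f.items()}

f = {1: 2, 2: 3}   # a chart: a one-to-one partial transformation of {1, 2, 3}
g = {2: 1, 3: 3}

print(compose(f, g), compose(g, f))              # {2: 2} {1: 1, 2: 3}: not commutative
print(compose(compose(f, inverse(f)), f) == f)   # True: the inverse-semigroup law
```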
See also
• Symmetric group
Notes
1. Grillet, Pierre A. (1995). Semigroups: An Introduction to the Structure Theory. CRC Press. p. 228. ISBN 978-0-8247-9662-4.
2. Hollings 2014, p. 252
3. Ganyushkin & Mazorchuk 2008, p. v
4. Lipscomb 1997, p. 1
5. Lipscomb 1997, p. xiii
References
• Lipscomb, S. (1997). Symmetric Inverse Semigroups. AMS Mathematical Surveys and Monographs. American Mathematical Society. ISBN 0-8218-0627-0.
• Ganyushkin, Olexandr; Mazorchuk, Volodymyr (2008). Classical Finite Transformation Semigroups: An Introduction. Springer. doi:10.1007/978-1-84800-281-4. ISBN 978-1-84800-281-4.
• Hollings, Christopher (2014). Mathematics across the Iron Curtain: A History of the Algebraic Theory of Semigroups. American Mathematical Society. ISBN 978-1-4704-1493-1.
| Wikipedia |
Permutation model
In mathematical set theory, a permutation model is a model of set theory with atoms (ZFA) constructed using a group of permutations of the atoms. A symmetric model is similar except that it is a model of ZF (without atoms) and is constructed using a group of permutations of a forcing poset. One application is to show the independence of the axiom of choice from the other axioms of ZFA or ZF. Permutation models were introduced by Fraenkel (1922) and developed further by Mostowski (1938). Symmetric models were introduced by Paul Cohen.
Construction of permutation models
Suppose that A is a set of atoms, and G is a group of permutations of A. A normal filter of G is a collection F of subgroups of G such that
• G is in F
• The intersection of two elements of F is in F
• Any subgroup containing an element of F is in F
• Any conjugate of an element of F is in F
• The subgroup fixing any element of A is in F.
If V is a model of ZFA with A the set of atoms, then an element of V is called symmetric if the subgroup fixing it is in F, and is called hereditarily symmetric if it and all elements of its transitive closure are symmetric. The permutation model consists of all hereditarily symmetric elements, and is a model of ZFA.
Construction of filters on a group
A filter on a group can be constructed from an invariant ideal of the Boolean algebra of subsets of A containing all elements of A. Here an ideal is a collection I of subsets of A closed under taking unions and subsets, and it is called invariant if it is invariant under the action of the group G. For each element S of the ideal one can take the subgroup of G consisting of all elements fixing every element of S. These subgroups generate a normal filter of G.
References
• Fraenkel, A. (1922), "Der Begriff "definit" und die Unabhängigkeit des Auswahlaxioms", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften: 253–257, JFM 48.0199.02
• Mostowski, Andrzej (1938), "Über den Begriff einer Endlichen Menge", Comptes Rendus des Séances de la Société des Sciences et des Lettres de Varsovie, Classe III, 31 (8): 13–20
| Wikipedia |
Symmetric monoidal category
In category theory, a branch of mathematics, a symmetric monoidal category is a monoidal category (i.e. a category in which a "tensor product" $\otimes $ is defined) such that the tensor product is symmetric (i.e. $A\otimes B$ is, in a certain strict sense, naturally isomorphic to $B\otimes A$ for all objects $A$ and $B$ of the category). One of the prototypical examples of a symmetric monoidal category is the category of vector spaces over some fixed field k, using the ordinary tensor product of vector spaces.
Definition
A symmetric monoidal category is a monoidal category (C, ⊗, I) such that, for every pair A, B of objects in C, there is an isomorphism $s_{AB}:A\otimes B\to B\otimes A$ called the swap map[1] that is natural in both A and B and such that the following diagrams commute:
• The unit coherence: $l_{A}\circ s_{A,I}=r_{A}$
• The associativity coherence (the "hexagon identity"): $a_{B,C,A}\circ s_{A,B\otimes C}\circ a_{A,B,C}=(\operatorname {id} _{B}\otimes s_{A,C})\circ a_{B,A,C}\circ (s_{A,B}\otimes \operatorname {id} _{C})$
• The inverse law: $s_{B,A}\circ s_{A,B}=\operatorname {id} _{A\otimes B}$
In these identities, which express the commuting diagrams of the usual presentation, a, l, and r are the associativity isomorphism, the left unit isomorphism, and the right unit isomorphism respectively.
Examples
Some examples and non-examples of symmetric monoidal categories:
• The category of sets. The tensor product is the set-theoretic cartesian product, and any singleton can be fixed as the unit object; a pointwise sketch of the swap map for this example follows the list.
• The category of groups. Like before, the tensor product is just the cartesian product of groups, and the trivial group is the unit object.
• More generally, any category with finite products, that is, a cartesian monoidal category, is symmetric monoidal. The tensor product is the direct product of objects, and any terminal object (empty product) is the unit object.
• The category of bimodules over a ring R is monoidal (using the ordinary tensor product of modules), but not necessarily symmetric. If R is commutative, the category of left R-modules is symmetric monoidal. The latter example class includes the category of all vector spaces over a given field.
• Given a field k and a group (or a Lie algebra over k), the category of all k-linear representations of the group (or of the Lie algebra) is a symmetric monoidal category. Here the standard tensor product of representations is used.
• The categories (Ste,$\circledast $) and (Ste,$\odot $) of stereotype spaces over ${\mathbb {C} }$ are symmetric monoidal, and moreover, (Ste,$\circledast $) is a closed symmetric monoidal category with the internal hom-functor $\oslash $.
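For the first example in the list, the swap map is concrete enough to check pointwise in Python (the sets chosen are arbitrary):

```python
def swap(pair):
    a, b = pair
    return (b, a)

A, B = {1, 2}, {"x", "y"}
product = {(a, b) for a in A for b in B}   # the tensor product A x B in Set

# the inverse law: s_BA composed with s_AB is the identity on A x B
assert all(swap(swap(p)) == p for p in product)
```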
Properties
The classifying space (geometric realization of the nerve) of a symmetric monoidal category is an $E_{\infty }$ space, so its group completion is an infinite loop space.[2]
Specializations
A dagger symmetric monoidal category is a symmetric monoidal category with a compatible dagger structure.
A cosmos is a complete cocomplete closed symmetric monoidal category.
Generalizations
In a symmetric monoidal category, the natural isomorphisms $s_{AB}:A\otimes B\to B\otimes A$ are their own inverses in the sense that $s_{BA}\circ s_{AB}=1_{A\otimes B}$. If we abandon this requirement (but still require that $A\otimes B$ be naturally isomorphic to $B\otimes A$), we obtain the more general notion of a braided monoidal category.
References
1. Fong, Brendan; Spivak, David I. (2018-10-12). "Seven Sketches in Compositionality: An Invitation to Applied Category Theory". arXiv:1803.05316 [math.CT].
2. Thomason, R.W. (1995). "Symmetric Monoidal Categories Model all Connective Spectra" (PDF). Theory and Applications of Categories. 1 (5): 78–118. CiteSeerX 10.1.1.501.2534.
• Symmetric monoidal category at the nLab
• This article incorporates material from Symmetric monoidal category on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
| Wikipedia |
Monoidal natural transformation
Suppose that $({\mathcal {C}},\otimes ,I)$ and $({\mathcal {D}},\bullet ,J)$ are two monoidal categories and
$(F,m):({\mathcal {C}},\otimes ,I)\to ({\mathcal {D}},\bullet ,J)$ and $(G,n):({\mathcal {C}},\otimes ,I)\to ({\mathcal {D}},\bullet ,J)$
are two lax monoidal functors between those categories.
A monoidal natural transformation
$\theta :(F,m)\to (G,n)$
between those functors is a natural transformation $\theta :F\to G$ between the underlying functors that is compatible with the lax monoidal structures, i.e. such that
$\theta _{A\otimes B}\circ m_{A,B}=n_{A,B}\circ (\theta _{A}\bullet \theta _{B})$
for every pair of objects $A$ and $B$ of ${\mathcal {C}}$, and $\theta _{I}\circ \varepsilon ^{F}=\varepsilon ^{G}$, where $\varepsilon ^{F}:J\to F(I)$ and $\varepsilon ^{G}:J\to G(I)$ are the unit morphisms of the two lax monoidal functors; these are the two commuting diagrams of Definition 11 in [1].
A symmetric monoidal natural transformation is a monoidal natural transformation between symmetric monoidal functors.
References
1. Baez, John C. "Some Definitions Everyone Should Know" (PDF). Retrieved 2 December 2014.
| Wikipedia |
Perfect obstruction theory
In algebraic geometry, given a Deligne–Mumford stack X, a perfect obstruction theory for X consists of:
1. a perfect two-term complex $E=[E^{-1}\to E^{0}]$ in the derived category $D({\text{Qcoh}}(X)_{et})$ of quasi-coherent étale sheaves on X, and
2. a morphism $\varphi \colon E\to {\textbf {L}}_{X}$, where ${\textbf {L}}_{X}$ is the cotangent complex of X, that induces an isomorphism on $h^{0}$ and an epimorphism on $h^{-1}$.
The notion was introduced by Kai Behrend and Barbara Fantechi (1997) for an application to the intersection theory on moduli stacks; in particular, to define a virtual fundamental class.
Examples
Schemes
Consider a regular embedding $i\colon Y\to W$ fitting into a cartesian square
${\begin{matrix}X&{\xrightarrow {j}}&V\\g\downarrow &&\downarrow f\\Y&{\xrightarrow {i}}&W\end{matrix}}$
where $V,W$ are smooth. Then, the complex
$E^{\bullet }=[g^{*}N_{Y/W}^{\vee }\to j^{*}\Omega _{V}]$ (in degrees $-1,0$)
forms a perfect obstruction theory for X.[1] The map comes from the composition
$g^{*}N_{Y/W}^{\vee }\to g^{*}i^{*}\Omega _{W}=j^{*}f^{*}\Omega _{W}\to j^{*}\Omega _{V}$
This is a perfect obstruction theory because the complex comes equipped with a map to $\mathbf {L} _{X}^{\bullet }$ coming from the maps $g^{*}\mathbf {L} _{Y}^{\bullet }\to \mathbf {L} _{X}^{\bullet }$ and $j^{*}\mathbf {L} _{V}^{\bullet }\to \mathbf {L} _{X}^{\bullet }$. Note that the associated virtual fundamental class is $[X,E^{\bullet }]=i^{!}[V]$.
Example 1
Consider a smooth projective variety $X\subset \mathbb {P} ^{n}$, taking $Y=X$ and $V=W=\mathbb {P} ^{n}$ in the square above. Then the perfect obstruction theory in $D^{[-1,0]}(X)$ is
$[N_{X/\mathbb {P} ^{n}}^{\vee }\to \Omega _{\mathbb {P} ^{n}}]$
and the associated virtual fundamental class is
$[X,E^{\bullet }]=i^{!}[\mathbb {P} ^{n}]$
In particular, if $X$ is a smooth local complete intersection then the perfect obstruction theory is the cotangent complex (which is the same as the truncated cotangent complex).
Deligne–Mumford stacks
The previous construction works too with Deligne–Mumford stacks.
Symmetric obstruction theory
By definition, a symmetric obstruction theory is a perfect obstruction theory together with nondegenerate symmetric bilinear form.
Example: Let f be a regular function on a smooth variety (or stack). Then the set of critical points of f carries a symmetric obstruction theory in a canonical way.
Example: Let M be a complex symplectic manifold. Then the (scheme-theoretic) intersection of Lagrangian submanifolds of M carries a canonical symmetric obstruction theory.
Notes
1. Behrend & Fantechi 1997, § 6
References
• Behrend, Kai (2005). "Donaldson–Thomas invariants via microlocal geometry". arXiv:math/0507523v2.
• Behrend, Kai; Fantechi, Barbara (1997-03-01). "The intrinsic normal cone". Inventiones Mathematicae. 128 (1): 45–88. arXiv:alg-geom/9601010. Bibcode:1997InMat.128...45B. doi:10.1007/s002220050136. ISSN 0020-9910. S2CID 18533009.
• Oesinghaus, Jakob (2015-07-20). "Understanding the obstruction cone of a symmetric obstruction theory". MathOverflow. Retrieved 2017-07-19.
See also
• Behrend function
• Gromov–Witten invariant
| Wikipedia |
Symmetric tensor
In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments:
$T(v_{1},v_{2},\ldots ,v_{r})=T(v_{\sigma 1},v_{\sigma 2},\ldots ,v_{\sigma r})$
for every permutation σ of the symbols {1, 2, ..., r}. Alternatively, a symmetric tensor of order r represented in coordinates as a quantity with r indices satisfies
$T_{i_{1}i_{2}\cdots i_{r}}=T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma r}}.$
The space of symmetric tensors of order r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
Definition
Let V be a vector space and
$T\in V^{\otimes k}$
a tensor of order k. Then T is a symmetric tensor if
$\tau _{\sigma }T=T\,$
for the braiding maps associated to every permutation σ on the symbols {1,2,...,k} (or equivalently for every transposition on these symbols).
Given a basis {ei} of V, any symmetric tensor T of order k can be written as
$T=\sum _{i_{1},\ldots ,i_{k}=1}^{N}T_{i_{1}i_{2}\cdots i_{k}}e^{i_{1}}\otimes e^{i_{2}}\otimes \cdots \otimes e^{i_{k}}$
for some unique list of coefficients $T_{i_{1}i_{2}\cdots i_{k}}$ (the components of the tensor in the basis) that are symmetric on the indices. That is to say
$T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma k}}=T_{i_{1}i_{2}\cdots i_{k}}$
for every permutation σ.
The space of all symmetric tensors of order k defined on V is often denoted by Sk(V) or Symk(V). It is itself a vector space, and if V has dimension N then the dimension of Symk(V) is the binomial coefficient
$\dim \operatorname {Sym} ^{k}(V)={N+k-1 \choose k}.$
We then construct Sym(V) as the direct sum of Symk(V) for k = 0,1,2,...
$\operatorname {Sym} (V)=\bigoplus _{k=0}^{\infty }\operatorname {Sym} ^{k}(V).$
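The dimension formula can be checked for small cases by counting multisets of indices, which is what the symmetry of the components amounts to (a quick sketch):

```python
from itertools import combinations_with_replacement
from math import comb

N, k = 3, 2
multisets = sum(1 for _ in combinations_with_replacement(range(N), k))
assert comb(N + k - 1, k) == multisets
print(multisets)   # 6: a symmetric 3x3 matrix has 6 independent entries
```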
Examples
There are many examples of symmetric tensors. Some include the metric tensor, $g_{\mu \nu }$, the Einstein tensor, $G_{\mu \nu }$, and the Ricci tensor, $R_{\mu \nu }$.
Many material properties and fields used in physics and engineering can be represented as symmetric tensor fields; for example: stress, strain, and anisotropic conductivity. Also, in diffusion MRI one often uses symmetric tensors to describe diffusion in the brain or other parts of the body.
Ellipsoids are examples of algebraic varieties; and so, for general rank, symmetric tensors, in the guise of homogeneous polynomials, are used to define projective varieties, and are often studied as such.
Given a Riemannian manifold $(M,g)$ equipped with its Levi-Civita connection $\nabla $, the covariant curvature tensor is a symmetric order 2 tensor over the vector space $ V=\Omega ^{2}(M)=\bigwedge ^{2}T^{*}M$ of differential 2-forms. This corresponds to the fact that, viewing $R_{ijk\ell }\in (T^{*}M)^{\otimes 4}$, we have the symmetry $R_{ij\,k\ell }=R_{k\ell \,ij}$ between the first and second pairs of arguments in addition to antisymmetry within each pair: $R_{jik\ell }=-R_{ijk\ell }=R_{ij\ell k}$.[1]
Symmetric part of a tensor
Suppose $V$ is a vector space over a field of characteristic 0. If T ∈ V⊗k is a tensor of order $k$, then the symmetric part of $T$ is the symmetric tensor defined by
$\operatorname {Sym} \,T={\frac {1}{k!}}\sum _{\sigma \in {\mathfrak {S}}_{k}}\tau _{\sigma }T,$
the summation extending over the symmetric group on k symbols. In terms of a basis, and employing the Einstein summation convention, if
$T=T_{i_{1}i_{2}\cdots i_{k}}e^{i_{1}}\otimes e^{i_{2}}\otimes \cdots \otimes e^{i_{k}},$
then
$\operatorname {Sym} \,T={\frac {1}{k!}}\sum _{\sigma \in {\mathfrak {S}}_{k}}T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma k}}e^{i_{1}}\otimes e^{i_{2}}\otimes \cdots \otimes e^{i_{k}}.$
The components of the tensor appearing on the right are often denoted by
$T_{(i_{1}i_{2}\cdots i_{k})}={\frac {1}{k!}}\sum _{\sigma \in {\mathfrak {S}}_{k}}T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma k}}$
with parentheses () around the indices being symmetrized. Square brackets [] are used to indicate anti-symmetrization.
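A numpy sketch of the symmetrization above, averaging over all permutations of the index axes; the tensor is random and the order is an arbitrary choice:

```python
import numpy as np
from itertools import permutations
from math import factorial

def symmetrize(T):
    # Sym T = (1/k!) * sum over all permutations of the k index axes
    k = T.ndim
    return sum(np.transpose(T, p) for p in permutations(range(k))) / factorial(k)

T = np.random.default_rng(1).standard_normal((3, 3, 3))
S = symmetrize(T)

# S is symmetric under every transposition of indices, and Sym is idempotent
assert np.allclose(S, np.transpose(S, (1, 0, 2)))
assert np.allclose(S, np.transpose(S, (0, 2, 1)))
assert np.allclose(S, symmetrize(S))
```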
Symmetric product
If T is a simple tensor, given as a pure tensor product
$T=v_{1}\otimes v_{2}\otimes \cdots \otimes v_{r}$
then the symmetric part of T is the symmetric product of the factors:
$v_{1}\odot v_{2}\odot \cdots \odot v_{r}:={\frac {1}{r!}}\sum _{\sigma \in {\mathfrak {S}}_{r}}v_{\sigma 1}\otimes v_{\sigma 2}\otimes \cdots \otimes v_{\sigma r}.$
In general we can turn Sym(V) into an algebra by defining the commutative and associative product ⊙.[2] Given two tensors T1 ∈ Symk1(V) and T2 ∈ Symk2(V), we use the symmetrization operator to define:
$T_{1}\odot T_{2}=\operatorname {Sym} (T_{1}\otimes T_{2})\quad \left(\in \operatorname {Sym} ^{k_{1}+k_{2}}(V)\right).$
It can be verified (as is done by Kostrikin and Manin[2]) that the resulting product is in fact commutative and associative. In some cases the operator is omitted: T1T2 = T1 ⊙ T2.
In some cases an exponential notation is used:
$v^{\odot k}=\underbrace {v\odot v\odot \cdots \odot v} _{k{\text{ times}}}=\underbrace {v\otimes v\otimes \cdots \otimes v} _{k{\text{ times}}}=v^{\otimes k}.$
where $v$ is a vector. Again, in some cases the ⊙ is left out:
$v^{k}=\underbrace {v\,v\,\cdots \,v} _{k{\text{ times}}}=\underbrace {v\odot v\odot \cdots \odot v} _{k{\text{ times}}}.$
Decomposition
In analogy with the theory of symmetric matrices, a (real) symmetric tensor of order 2 can be "diagonalized". More precisely, for any tensor T ∈ Sym2(V), there is an integer r, non-zero unit vectors v1,...,vr ∈ V and weights λ1,...,λr such that
$T=\sum _{i=1}^{r}\lambda _{i}\,v_{i}\otimes v_{i}.$
The minimum number r for which such a decomposition is possible is the (symmetric) rank of T. The vectors appearing in this minimal expression are the principal axes of the tensor, and generally have an important physical meaning. For example, the principal axes of the inertia tensor define the Poinsot's ellipsoid representing the moment of inertia. Also see Sylvester's law of inertia.
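For order 2 this decomposition is the spectral theorem, so it can be computed directly with an eigensolver (a minimal numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
T = (A + A.T) / 2                    # a real symmetric order-2 tensor

lam, V = np.linalg.eigh(T)           # weights and unit principal axes
reconstructed = sum(l * np.outer(v, v) for l, v in zip(lam, V.T))
assert np.allclose(T, reconstructed)
```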
For symmetric tensors of arbitrary order k, decompositions
$T=\sum _{i=1}^{r}\lambda _{i}\,v_{i}^{\otimes k}$
are also possible. The minimum number r for which such a decomposition is possible is the symmetric rank of T.[3] This minimal decomposition is called a Waring decomposition; it is a symmetric form of the tensor rank decomposition. For second-order tensors this corresponds to the rank of the matrix representing the tensor in any basis, and it is well known that the maximum rank is equal to the dimension of the underlying vector space. However, for higher orders this need not hold: the rank can be higher than the number of dimensions in the underlying vector space. Moreover, the rank and symmetric rank of a symmetric tensor may differ.[4]
See also
• Antisymmetric tensor
• Ricci calculus
• Schur polynomial
• Symmetric polynomial
• Transpose
• Young symmetrizer
Notes
1. Carmo, Manfredo Perdigão do (1992). Riemannian geometry. Francis J. Flaherty. Boston: Birkhäuser. ISBN 0-8176-3490-8. OCLC 24667701.
2. Kostrikin, Alexei I.; Manin, Iurii Ivanovich (1997). Linear algebra and geometry. Algebra, Logic and Applications. Vol. 1. Gordon and Breach. pp. 276–279. ISBN 9056990497.
3. Comon, P.; Golub, G.; Lim, L. H.; Mourrain, B. (2008). "Symmetric Tensors and Symmetric Tensor Rank". SIAM Journal on Matrix Analysis and Applications. 30 (3): 1254. arXiv:0802.1681. doi:10.1137/060661569. S2CID 5676548.
4. Shitov, Yaroslav (2018). "A Counterexample to Comon's Conjecture". SIAM Journal on Applied Algebra and Geometry. 2 (3): 428–443. arXiv:1705.08740. doi:10.1137/17m1131970. ISSN 2470-6566. S2CID 119717133.
References
• Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9.
• Bourbaki, Nicolas (1990), Elements of mathematics, Algebra II, Springer-Verlag, ISBN 3-540-19375-8.
• Greub, Werner Hildbert (1967), Multilinear algebra, Die Grundlehren der Mathematischen Wissenschaften, Band 136, Springer-Verlag New York, Inc., New York, MR 0224623.
• Sternberg, Shlomo (1983), Lectures on differential geometry, New York: Chelsea, ISBN 978-0-8284-0316-0.
External links
• Cesar O. Aguilar, The Dimension of Symmetric k-tensors
| Wikipedia |
Symmetric power
In mathematics, the n-th symmetric power of an object X is the quotient of the n-fold product $X^{n}:=X\times \cdots \times X$ by the permutation action of the symmetric group ${\mathfrak {S}}_{n}$.
More precisely, the notion exists at least in the following three areas:
• In linear algebra, the n-th symmetric power of a vector space V is the vector subspace of the symmetric algebra of V consisting of degree-n elements (here the product is a tensor product).
• In algebraic topology, the n-th symmetric power of a topological space X is the quotient space $X^{n}/{\mathfrak {S}}_{n}$, as in the beginning of this article.
• In algebraic geometry, a symmetric power is defined in a way similar to that in algebraic topology. For example, if $X=\operatorname {Spec} (A)$ is an affine variety, then the GIT quotient $\operatorname {Spec} ((A\otimes _{k}\dots \otimes _{k}A)^{{\mathfrak {S}}_{n}})$ is the n-th symmetric power of X.
References
• Eisenbud, David; Harris, Joe, 3264 and All That: A Second Course in Algebraic Geometry, Cambridge University Press, ISBN 978-1-107-01708-5
External links
• Hopkins, Michael J. (March 2018). "Symmetric powers of the sphere" (PDF).
| Wikipedia |
Symmetric product (topology)
In algebraic topology, the nth symmetric product of a topological space consists of the unordered n-tuples of its elements. If one fixes a basepoint, there is a canonical way of embedding the lower-dimensional symmetric products into the higher-dimensional ones. That way, one can consider the colimit over the symmetric products, the infinite symmetric product. This construction can easily be extended to give a homotopy functor.
From an algebraic point of view, the infinite symmetric product is the free commutative monoid generated by the space minus the basepoint, the basepoint yielding the identity element. That way, one can view it as the abelian version of the James reduced product.
One of its essential applications is the Dold-Thom theorem, stating that the homotopy groups of the infinite symmetric product of a connected CW complex are the same as the reduced homology groups of that complex. That way, one can give a homotopical definition of homology.
Definition
Let X be a topological space and n ≥ 1 a natural number. Define the nth symmetric product of X or the n-fold symmetric product of X as the space
$\operatorname {SP} ^{n}(X)=X^{n}/S_{n}.$
Here, the symmetric group Sn acts on Xn by permuting the factors. Hence, the elements of SPn(X) are the unordered n-tuples of elements of X. Write [x1, ..., xn] for the point in SPn(X) defined by (x1, ..., xn) ∈ Xn.
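For a finite discrete space X, the points of SPn(X) are exactly the n-element multisets of points of X, which makes the definition easy to experiment with. A small Python sketch (the helper name is illustrative, not from any library):

```python
from itertools import combinations_with_replacement
from math import comb

def sp(X, n):
    """Points of SP^n(X) for a finite discrete space X, one sorted
    tuple per S_n-orbit of X^n (i.e. per n-element multiset)."""
    return set(combinations_with_replacement(sorted(X), n))

X = {0, 1, 2}
for n in range(1, 5):
    # Number of n-multisets of an m-point set: C(m + n - 1, n).
    assert len(sp(X, n)) == comb(len(X) + n - 1, n)
```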
Note that one can define the nth symmetric product in any category where products and colimits exist. Namely, one then has canonical isomorphisms φ : X × Y → Y × X for any objects X and Y and can define the action of the transposition $(k\ k+1)\in S_{n}$ on Xn as $\operatorname {Id} ^{k-1}\times \phi \times \operatorname {Id} ^{n-k-1}$, thereby inducing an action of the whole Sn on Xn. This means that one can consider symmetric products of objects like simplicial sets as well. Moreover, if the category is cartesian closed, the distributive law X × (Y ∐ Z) ≅ X × Y ∐ X × Z holds and therefore one gets
$\operatorname {SP} ^{n}(X\amalg Y)=\coprod _{k=0}^{n}\operatorname {SP} ^{k}(X)\times \operatorname {SP} ^{n-k}(Y).$
If (X, e) is a based space, it is common to set SP0(X) = {e}. Further, Xn can then be embedded into Xn+1 by sending (x1, ..., xn) to (x1, ..., xn, e). This clearly induces an embedding of SPn(X) into SPn+1(X). Therefore, the infinite symmetric product can be defined as
$\operatorname {SP} (X)=\operatorname {colim} \operatorname {SP} ^{n}(X).$
A definition avoiding category theoretic notions can be given by taking SP(X) to be the union of the increasing sequence of spaces SPn(X) equipped with the direct limit topology. This means that a subset of SP(X) is open if and only if all its intersections with the SPn(X) are open. We define the basepoint of SP(X) as [e]. That way, SP(X) becomes a based space as well.
One can generalise this definition as well to pointed categories where products and colimits exist. Namely, in this case one has a canonical map Xn → Xn+1, induced by the identity Xn → Xn and the zero map Xn → X. So this results in a direct system of the symmetric products, too, and one can therefore define its colimit as the infinite symmetric product.
Examples
• SPn(I) is the same as the n-dimensional standard simplex Δn, where I denotes the unit interval.
• SPn(S1) can be identified with the space of conjugacy classes of unitary n × n-matrices, where S1 is supposed to be the circle. This is because such a class is uniquely determined by the eigenvalues of an element of the class, all lying in S1. At first, one can easily see that this space is homotopy-equivalent to S1: As SPn is a homotopy functor (see Properties), the space in question is homotopy-equivalent to SPn(C − {0}). Consider the map SPn(C − {0}) → Pn into the space Pn of polynomials over C of degree at most n, mapping [w1, ..., wn] to (z - w1) ⋅⋅⋅ (z - wn). This way, one can identify SPn(C − {0}) with the space of monic polynomials of degree n having constant term different from zero, i.e. Cn − 1 × (C − {0}), which is homotopy-equivalent to S1. This implies that the infinite symmetric product SP(S1) is homotopy-equivalent to S1 as well. However, one knows considerably more about the space SPn(S1). Namely, that the map
${\begin{aligned}\qquad \operatorname {SP} ^{n}(S^{1})&\to S^{1},\\{\color {white}.}[w_{1},\dots ,w_{n}]&\mapsto w_{1}\cdots w_{n}\end{aligned}}$
is a fibre bundle with fibre being homeomorphic to the (n − 1)-dimensional standard simplex ∆n−1. It is orientable if and only if n is odd.[1][2]
• SP(S2) is homeomorphic to the infinite-dimensional complex projective space CP∞ as follows: The space CPn can be identified with the space of nonzero polynomials of degree at most n over C up to scalar multiplication by sending a0 + ... + anzn to the line passing through (a0, ..., an). Interpreting S2 as the Riemann sphere C ∪ {∞} yields a map
${\begin{aligned}\qquad f\colon (S^{2})^{n}&\to \mathbf {CP} ^{n},\\(a_{1},\dots ,a_{n})&\mapsto (z+a_{1})\cdots (z+a_{n}),\end{aligned}}$
where the possible factors z + ∞ are omitted. One can check that this map indeed is continuous.[3] As f(a1, ..., an) remains unchanged under permutation of the ai's, f induces a continuous bijection SPn(S2) → CPn. But as both are compact Hausdorff spaces, this map is a homeomorphism. Letting n go to infinity shows that the assertion holds.
Although calculating SP(Sn) for n ≥ 3 turns out to be quite difficult, one can still describe SP2(Sn) quite well as the mapping cone of a map ΣnRPn-1 → Sn, where Σn stands for applying the reduced suspension n times and RPn−1 is the (n − 1)-dimensional real projective space: One can view SP2(Sn) as a certain quotient of Dn × Dn by identifying Sn with Dn/∂Dn. Interpreting Dn × Dn as the cone on its boundary Dn × ∂Dn ∪ ∂Dn × Dn, the identifications for SP2 respect the concentric copies of the boundary. Hence, it suffices to only consider these. The identifications on the boundary ∂Dn × Dn ∪ Dn × ∂Dn of Dn × Dn itself yield Sn. This is clear as this is a quotient of Dn × ∂Dn and as ∂Dn is collapsed to one point in Sn. The identifications on the other concentric copies of the boundary yield the quotient space Z of Dn × ∂Dn, obtained by identifying (x, y) with (y, x) whenever both coordinates lie in ∂Dn. Define a map f: Dn × RPn−1 → Z by sending a pair (x, L) to (w, z). Here, z ∈ ∂Dn and w ∈ Dn are chosen on the line through x parallel to L such that x is their midpoint. If x is the midpoint of the segment zz′, there is no way to distinguish between z and w, but this is not a problem since f takes values in the quotient space Z. Therefore, f is well-defined. As f(x, L) = f(x, L′) holds for every x ∈ ∂Dn, f factors through ΣnRPn−1 and is easily seen to be a homeomorphism on this domain.
Properties
H-space structure
As SP(X) is the free commutative monoid generated by X − {e} with identity element e, it can be thought of as a commutative analogue of the James reduced product J(X). This means that SP(X) is the quotient of J(X) obtained by identifying points that differ only by a permutation of coordinates. Therefore, the H-space structure on J(X) induces one on SP(X) if X is a CW complex, making it a commutative and associative H-space with strict identity. As such, the Dold-Thom theorem implies that all its k-invariants vanish, meaning that it has the weak homotopy type of a generalised Eilenberg-MacLane space if X is path-connected.[4] However, if X is an arbitrary space, the multiplication on SP(X) may fail to be continuous.[5]
Functoriality
SPn is a homotopy functor: A map f: X → Y clearly induces a map SPn(f) : SPn(X) → SPn(Y) given by SPn(f)[x1, ..., xn] = [f(x1), ..., f(xn)]. A homotopy between two maps f, g: X → Y yields one between SPn(f) and SPn(g). Also, one can easily see that the maps SPn(f) commute with the inclusions SPn(X) → SPn+1(X) and SPn(Y) → SPn+1(Y), meaning that SP is a functor as well. Similarly, SP is even a homotopy functor on the category of pointed spaces and basepoint-preserving homotopy classes of maps. In particular, X ≃ Y implies SPn(X) ≃ SPn(Y), but in general not SP(X) ≃ SP(Y) as homotopy equivalence may be affected by requiring maps and homotopies to be basepoint-preserving. However, this is not the case if one requires X and Y to be connected CW complexes.[6]
Simplicial and CW structure
SP(X) inherits certain structures of X: For a simplicial complex X, one can also install a simplicial structure on Xn such that each n-permutation is either the identity on a simplex or a homeomorphism from one simplex to another. This means that one gets a simplicial structure on SPn(X). Furthermore, SPn(X) is also a subsimplex of SPn+1(X) if the basepoint e ∈ X is a vertex, meaning that SP(X) inherits a simplicial structure in this case as well.[7] However, one should note that Xn and SPn(X) do not need to have the weak topology if X has uncountably many simplices.[8] An analogous statement can be made if X is a CW complex. Nevertheless, it is still possible to equip SP(X) with the structure of a CW complex such that both topologies have the same compact sets if X is an arbitrary simplicial complex.[9] So the distinction between the two topologies will not cause any differences for purposes of homotopy theory.
Homotopy
One of the main uses of infinite symmetric products is the Dold-Thom theorem. It states that the reduced homology groups coincide with the homotopy groups of the infinite symmetric product of a connected CW complex. This allows one to reformulate homology only using homotopy which can be very helpful in algebraic geometry. It also means that the functor SP maps Moore spaces M(G, n) to Eilenberg-MacLane spaces K(G, n). Therefore, it yields a natural way to construct the latter spaces given the proper Moore spaces.
It has also been studied how other constructions combined with the infinite symmetric product affect the homotopy groups. For example, it has been shown that the map
$\rho \colon \operatorname {SP} (X)\to \Omega \operatorname {SP} (\Sigma X),\quad \rho [x_{1},\dots ,x_{n}](t)=[(x_{1},t),\dots ,(x_{n},t)]$
is a weak homotopy equivalence, where ΣX = X ∧ S1 denotes the reduced suspension and ΩY stands for the loop space of the pointed space Y.[10]
Homology
Unsurprisingly, the homology groups of the symmetric product cannot be described as easily as the homotopy groups. Nevertheless, it is known that the homology groups of the symmetric product of a CW complex are determined by the homology groups of the complex. More precisely, if X and Y are CW complexes and R is a principal ideal domain such that Hi(X, R) ≅ Hi(Y, R) for all i ≤ k, then Hi(SPn(X), R) ≅ Hi(SPn(Y), R) holds as well for all i ≤ k. This can be generalised to Γ-products, defined in the next section.[11]
For a simplicial set K, one has furthermore
$H_{*}(\operatorname {SP} ^{n+1}(K))\cong H_{*}(\operatorname {SP} ^{n+1}(K),\operatorname {SP} ^{n}(K))\oplus H_{*}(\operatorname {SP} ^{n}(K)).$
Passing to geometric realisations, one sees that this statement holds for connected CW complexes as well.[12] Induction yields furthermore
$H_{*}(\operatorname {SP} (K))\cong \bigoplus _{n=1}^{\infty }H_{*}(\operatorname {SP} ^{n}(K),\operatorname {SP} ^{n-1}(K)).$[13]
Related constructions and generalisations
S. Liao introduced a slightly more general version of symmetric products, called Γ-products for a subgroup Γ of the symmetric group Sn.[14] The operation was the same and hence he defined XΓ = Xn/Γ as the Γ-product of X. That allowed him to study cyclic products, the special case for Γ being the cyclic group, as well.
When establishing the Dold-Thom theorem, Dold and Thom also considered the "quotient group" Z[X] of SP(X). This is the free abelian group over X with the basepoint as the zero element. If X is a CW complex, it is even a topological group. In order to equip this group with a topology, they initially introduced it as the following quotient over the infinite symmetric product of the wedge sum of X with a copy of itself: Let τ : X ∨ X → X ∨ X be interchanging the summands. Furthermore, let ~ be the equivalence relation on SP(X ∨ X) generated by
$x\sim x+y+\operatorname {SP} (\tau )(y)$
for x, y ∈ SP(X ∨ X). Then one can define Z[X] as
$\mathbb {Z} [X]=\operatorname {SP} (X\vee X)/\sim .$
Since ~ is compatible with the addition in SP(X ∨ X), one gets an associative and commutative addition on Z[X]. One also has the topological inclusions X ⊂ SP(X) ⊂ Z[X][15] and it can easily be seen that this construction has properties similar to the ones of SP, like being a functor.
McCord gave a construction generalising both SP(X) and Z[X]: Let G be a monoid with identity element 1 and let (X, e) be a pointed set. Define
$B(G,X)=\{u\colon X\to G:u(e)=1{\text{ and }}u(x)=1{\text{ for all but finitely many }}x\in X\}.$
Then B(G, X) is again a monoid under pointwise multiplication which will be denoted by ⋅. Let gx denote the element of B(G, X) taking the value g at x and being 1 elsewhere for g ∈ G, x ∈ X − {e}. Moreover, ge shall denote the function being 1 everywhere, the unit of B(G, X).
In order to install a topology on B(G, X), one needs to demand that X be compactly generated and that G be an abelian topological monoid. Define Bn(G, X) to be the subset of B(G, X) consisting of all maps that differ from the constant function 1 at no more than n points. Bn(G, X) gets equipped with the final topology of the map
${\begin{aligned}\mu _{n}\colon (G\times X)^{n}&\to B_{n}(G,X),\\((g_{1},x_{1}),\dots ,(g_{n},x_{n}))&\mapsto g_{1}x_{1}\cdots g_{n}x_{n}.\end{aligned}}$
Now, Bn(G, X) is a closed subset of Bn+1(G, X).[16] Then B(G, X) can be equipped with the direct limit topology, making it again a compactly generated space. One can then identify SP(X) and Z[X] with B(N, X) and B(Z, X), respectively.
Moreover, B(⋅,⋅) is functorial in the sense that B: C × D → C is a bifunctor for C being the category of abelian topological monoids and D being the category of pointed CW complexes.[17] Here, the map B(φ, f) : B(G, X) → B(H, Y) for a morphism φ: G → H of abelian topological monoids and a continuous map f: X → Y is defined as
$B(\varphi ,f)(g_{1}x_{1}\cdots g_{n}x_{n})=(\varphi g_{1})(fx_{1})\cdots (\varphi g_{n})(fx_{n})$
for all gi ∈ G and xi ∈ X. As in the preceding cases, one sees that a based homotopy ft : X → Y induces a homotopy B(Id, ft) : B(G, X) → B(G, Y) for an abelian topological monoid G.
Using this construction, the Dold-Thom theorem can be generalised. Namely, for a discrete module M over a commutative ring with unit one has
$[X,B(M,Y)]\cong \prod _{n=0}^{\infty }{\tilde {H}}^{n}(X,{\tilde {H}}_{n}(Y,M))$
for based spaces X and Y having the homotopy type of a CW complex.[18] Here, H̃n denotes reduced homology and [X, Z] stands for the set of all based homotopy classes of basepoint-preserving maps X → Z. As M is a module, [X, B(M, Y)] has an obvious group structure. Inserting X = Sn and M = Z yields the Dold-Thom theorem for Z[X].
It is noteworthy as well that B(G, S1) is a classifying space for G if G is a topological group such that the inclusion {1} → G is a cofibration.[19]
Notes
1. Morton, Hugh R. (1967). "Symmetric Products of the Circle". Mathematical Proceedings of the Cambridge Philosophical Society. Vol. 63. Cambridge University Press. pp. 349–352.
2. Symmetric Product of Circles on nLab
3. Hatcher (2002), Example 4K.4
4. Dold and Thom (1958), Satz 7.1
5. Spanier (1959), Footnote 2
6. Hatcher (2002), p.481
7. Aguilar, Gitler and Prieto (2008), Note 5.2.2
8. Dold and Thom (1958), 3.3
9. Hatcher (2002), pp.482-483
10. Spanier (1959), Theorem 10.1
11. Dold (1958), Theorem 7.2
12. Milgram, R. James (1969), "The Homology of Symmetric Products", Transactions of the American Mathematical Society, 138: 251–265
13. Spanier (1959), Theorem 7.2
14. Liao (1954)
15. Dold and Thom (1958), 4.7
16. McCord (1969), Lemma 6.2
17. McCord (1969), Corollary 6.9
18. McCord (1969), Theorem 11.5
19. McCord (1969), Theorem 9.17
References
• Aguilar, Marcelo; Gitler, Samuel; Prieto, Carlos (2008). Algebraic Topology from a Homotopical Viewpoint. Springer Science & Business Media. ISBN 978-0-387-22489-3.
• Dold, Albrecht (1958), "Homology of Symmetric Products and Other Functors of Complexes", Annals of Mathematics: 54–80
• Dold, Albrecht; Thom, René (1958), "Quasifaserungen und unendliche symmetrische Produkte", Annals of Mathematics, Second Series, 67 (2): 239–281, doi:10.2307/1970005, ISSN 0003-486X, JSTOR 1970005, MR 0097062
• Hatcher, Allen (2002). Algebraic Topology. Cambridge University Press. ISBN 978-0-521-79540-1.
• Liao, S.D. (1954), "On the Topology of Cyclic Products of Spheres", Transactions of the American Mathematical Society, 77 (3): 520–551
• McCord, Michael C. (1969), "Classifying Spaces and Infinite Symmetric Products", Transactions of the American Mathematical Society, 146: 273–298
• Piccinini, Renzo A. (1992). Lectures on Homotopy Theory. Elsevier. ISBN 9780080872827.
• Spanier, Edwin (1959), "Infinite Symmetric Products, Function Spaces and Duality", Annals of Mathematics: 142–198
External links
• Symmetric product in arbitrary categories? on MathOverflow
| Wikipedia |
Symmetric product of an algebraic curve
In mathematics, the n-fold symmetric product of an algebraic curve C is the quotient space of the n-fold cartesian product
C × C × ... × C
or Cn by the group action of the symmetric group Sn on n letters permuting the factors. It exists as a smooth algebraic variety denoted by ΣnC. If C is a compact Riemann surface, ΣnC is therefore a complex manifold. Its interest in relation to the classical geometry of curves is that its points correspond to effective divisors on C of degree n, that is, formal sums of points with non-negative integer coefficients.
For C the projective line (say the Riemann sphere $\mathbb {C} $ ∪ {∞} ≈ S2), its nth symmetric product ΣnC can be identified with complex projective space $\mathbb {CP} ^{n}$ of dimension n.
If G has genus g ≥ 1 then the ΣnC are closely related to the Jacobian variety J of C. More accurately for n taking values up to g they form a sequence of approximations to J from below: their images in J under addition on J (see theta-divisor) have dimension n and fill up J, with some identifications caused by special divisors.
For g = n we have ΣgC actually birationally equivalent to J; the Jacobian is a blowing down of the symmetric product. That means that at the level of function fields it is possible to construct J by taking linearly disjoint copies of the function field of C, and within their compositum taking the fixed subfield of the symmetric group. This is the source of André Weil's technique of constructing J as an abstract variety from 'birational data'. Other ways of constructing J, for example as a Picard variety, are preferred now[1] but this does mean that for any rational function F on C
F(x1) + ... + F(xg)
makes sense as a rational function on J, for the xi staying away from the poles of F.
For n > g the mapping from ΣnC to J by addition fibers it over J; when n is large enough (around twice g) this becomes a projective space bundle (the Picard bundle). It has been studied in detail, for example by Kempf and Mukai.
Betti numbers and the Euler characteristic of the symmetric product
Let C be a smooth projective curve of genus g over the complex numbers C. The Betti numbers bi(ΣnC) of the symmetric products ΣnC for all n = 0, 1, 2, ... are given by the generating function
$\sum _{n=0}^{\infty }\sum _{i=0}^{2n}b_{i}(\Sigma ^{n}C)y^{n}u^{i-n}={\frac {(1+y)^{2g}}{(1-uy)(1-u^{-1}y)}}$
and their Euler characteristics e(ΣnC) are given by the generating function
$\sum _{n=0}^{\infty }e(\Sigma ^{n}C)p^{n}=(1-p)^{2g-2}.$
Here we have set u = -1 and y = -p in the previous formula.
Notes
1. Anderson (2002) provided an elementary construction as lines of matrices.
References
• Macdonald, I. G. (1962), "Symmetric products of an algebraic curve", Topology, 1 (4): 319–343, doi:10.1016/0040-9383(62)90019-8, MR 0151460
• Anderson, Greg W. (2002), "Abeliants and their application to an elementary construction of Jacobians", Advances in Mathematics, 172 (2): 169–205, arXiv:math/0112321, doi:10.1016/S0001-8708(02)00024-5, MR 1942403
| Wikipedia |
Symmetric rank-one
The Symmetric Rank 1 (SR1) method is a quasi-Newton method to update the second derivative (Hessian) based on the derivatives (gradients) calculated at two points. It is a generalization of the secant method to multidimensional problems. This update maintains the symmetry of the matrix but does not guarantee that the update be positive definite.
The sequence of Hessian approximations generated by the SR1 method converges to the true Hessian under mild conditions, in theory; in practice, the approximate Hessians generated by the SR1 method show faster progress towards the true Hessian than do popular alternatives (BFGS or DFP), in preliminary numerical experiments.[1][2] The SR1 method has computational advantages for sparse or partially separable problems.[3]
A twice continuously differentiable function $x\mapsto f(x)$ has a gradient $\nabla f$ and Hessian matrix $B$. The function $f$ has an expansion as a Taylor series at $x_{0}$, which can be truncated
$f(x_{0}+\Delta x)\approx f(x_{0})+\nabla f(x_{0})^{T}\Delta x+{\frac {1}{2}}\Delta x^{T}{B}\Delta x$;
its gradient has a Taylor-series approximation also
$\nabla f(x_{0}+\Delta x)\approx \nabla f(x_{0})+B\Delta x$,
which is used to update $B$: requiring this approximation to hold exactly at the new point gives the secant equation $y_{k}=B\,\Delta x_{k}$, where $y_{k}$ is the gradient difference defined below. The secant equation need not have a unique solution $B$. The SR1 formula computes (via an update of rank 1) the symmetric solution that is closest to the current approximate value $B_{k}$:
$B_{k+1}=B_{k}+{\frac {(y_{k}-B_{k}\Delta x_{k})(y_{k}-B_{k}\Delta x_{k})^{T}}{(y_{k}-B_{k}\Delta x_{k})^{T}\Delta x_{k}}}$,
where
$y_{k}=\nabla f(x_{k}+\Delta x_{k})-\nabla f(x_{k})$.
The corresponding update to the approximate inverse-Hessian $H_{k}=B_{k}^{-1}$ is
$H_{k+1}=H_{k}+{\frac {(\Delta x_{k}-H_{k}y_{k})(\Delta x_{k}-H_{k}y_{k})^{T}}{(\Delta x_{k}-H_{k}y_{k})^{T}y_{k}}}$.
One might wonder why positive-definiteness is not preserved — after all, a rank-1 update of the form $B_{k+1}=B_{k}+vv^{T}$ is positive-definite if $B_{k}$ is. The explanation is that the update might be of the form $B_{k+1}=B_{k}-vv^{T}$ instead because the denominator can be negative, and in that case there are no guarantees about positive-definiteness.
The SR1 formula has been rediscovered a number of times. A drawback is that the denominator can vanish. Some authors have suggested that the update be applied only if
$|\Delta x_{k}^{T}(y_{k}-B_{k}\Delta x_{k})|\geq r\|\Delta x_{k}\|\cdot \|y_{k}-B_{k}\Delta x_{k}\|$,
where $r\in (0,1)$ is a small number, e.g. $10^{-8}$.[4]
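The update formula and the safeguard rule above translate directly into code. The following NumPy sketch is illustrative (the function name and the quadratic test problem are assumptions, not from any standard library):

```python
import numpy as np

def sr1_update(B, dx, y, r=1e-8):
    """Apply one SR1 update to the Hessian approximation B, where
    dx = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k)."""
    v = y - B @ dx                        # residual of the secant equation
    denom = v @ dx
    # Safeguard: skip the update when the denominator is too small.
    if abs(denom) <= r * np.linalg.norm(dx) * np.linalg.norm(v):
        return B
    return B + np.outer(v, v) / denom    # symmetric rank-1 correction

# On a quadratic f(x) = x^T A x / 2 the true Hessian is A, and the
# gradient difference along a step dx is exactly y = A dx, so the
# approximations should reach A after enough independent steps.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
rng = np.random.default_rng(0)
for _ in range(5):
    dx = rng.standard_normal(2)
    B = sr1_update(B, dx, A @ dx)
assert np.allclose(B, A)
```

Note that the sketch deliberately does not enforce positive definiteness, matching the discussion above; in practice SR1 is therefore often paired with a trust-region strategy.[3]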
See also
• Quasi-Newton method
• Broyden's method
• Newton's method in optimization
• Broyden-Fletcher-Goldfarb-Shanno (BFGS) method
• L-BFGS method
References
1. Conn, A. R.; Gould, N. I. M.; Toint, Ph. L. (March 1991). "Convergence of quasi-Newton matrices generated by the symmetric rank one update". Mathematical Programming. Springer Berlin/ Heidelberg. 50 (1): 177–195. doi:10.1007/BF01594934. ISSN 0025-5610. S2CID 28028770.
2. Khalfan, H. Fayez; et al. (1993). "A Theoretical and Experimental Study of the Symmetric Rank-One Update". SIAM Journal on Optimization. 3 (1): 1–24. doi:10.1137/0803001.
3. Byrd, Richard H.; et al. (1996). "Analysis of a Symmetric Rank-One Trust Region Method". SIAM Journal on Optimization. 6 (4): 1025–1039. doi:10.1137/S1052623493252985.
4. Nocedal, Jorge; Wright, Stephen J. (1999). Numerical Optimization. Springer. ISBN 0-387-98793-2.
Optimization: Algorithms, methods, and heuristics
Unconstrained nonlinear
Functions
• Golden-section search
• Interpolation methods
• Line search
• Nelder–Mead method
• Successive parabolic interpolation
Gradients
Convergence
• Trust region
• Wolfe conditions
Quasi–Newton
• Berndt–Hall–Hall–Hausman
• Broyden–Fletcher–Goldfarb–Shanno and L-BFGS
• Davidon–Fletcher–Powell
• Symmetric rank-one (SR1)
Other methods
• Conjugate gradient
• Gauss–Newton
• Gradient
• Mirror
• Levenberg–Marquardt
• Powell's dog leg method
• Truncated Newton
Hessians
• Newton's method
Constrained nonlinear
General
• Barrier methods
• Penalty methods
Differentiable
• Augmented Lagrangian methods
• Sequential quadratic programming
• Successive linear programming
Convex optimization
Convex
minimization
• Cutting-plane method
• Reduced gradient (Frank–Wolfe)
• Subgradient method
Linear and
quadratic
Interior point
• Affine scaling
• Ellipsoid algorithm of Khachiyan
• Projective algorithm of Karmarkar
Basis-exchange
• Simplex algorithm of Dantzig
• Revised simplex algorithm
• Criss-cross algorithm
• Principal pivoting algorithm of Lemke
Combinatorial
Paradigms
• Approximation algorithm
• Dynamic programming
• Greedy algorithm
• Integer programming
• Branch and bound/cut
Graph
algorithms
Minimum
spanning tree
• Borůvka
• Prim
• Kruskal
Shortest path
• Bellman–Ford
• SPFA
• Dijkstra
• Floyd–Warshall
Network flows
• Dinic
• Edmonds–Karp
• Ford–Fulkerson
• Push–relabel maximum flow
Metaheuristics
• Evolutionary algorithm
• Hill climbing
• Local search
• Parallel metaheuristics
• Simulated annealing
• Spiral optimization algorithm
• Tabu search
• Software
| Wikipedia |
Symmetric decreasing rearrangement
In mathematics, the symmetric decreasing rearrangement of a function is a function which is symmetric and decreasing, and whose level sets are of the same size as those of the original function.[1]
Definition for sets
Given a measurable set, $A,$ in $\mathbb {R} ^{n},$ one defines the symmetric rearrangement of $A,$ called $A^{*},$ as the ball centered at the origin, whose volume (Lebesgue measure) is the same as that of the set $A.$
An equivalent definition is
$A^{*}=\left\{x\in \mathbb {R} ^{n}:\,\omega _{n}\cdot |x|^{n}<|A|\right\},$
where $\omega _{n}$ is the volume of the unit ball and where $|A|$ is the volume of $A.$
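For example, in $\mathbb {R} $ (where $\omega _{1}=2$) the set $A=(0,1)\cup (2,3)$ has $|A|=2$, so its symmetric rearrangement is $A^{*}=\{x:2|x|<2\}=(-1,1)$.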
Definition for functions
The rearrangement of a non-negative, measurable real-valued function $f$ whose level sets $f^{-1}(y)$ (for $y\in \mathbb {R} _{\geq 0}$) have finite measure is
$f^{*}(x)=\int _{0}^{\infty }\mathbb {I} _{\{y:f(y)>t\}^{*}}(x)\,dt,$
where $\mathbb {I} _{A}$ denotes the indicator function of the set $A.$ In words, the value of $f^{*}(x)$ gives the height $t$ for which the radius of the symmetric rearrangement of $\{y:f(y)>t\}$ is equal to $x.$ We have the following motivation for this definition. Because the identity
$g(x)=\int _{0}^{\infty }\mathbb {I} _{\{y:g(y)>t\}}(x)\,dt,$
holds for any non-negative function $g,$ the above definition is the unique definition that forces the identity $\mathbb {I} _{A}^{*}=\mathbb {I} _{A^{*}}$ to hold.
Properties
The function $f^{*}$ is a symmetric and decreasing function whose level sets have the same measure as the level sets of $f,$ that is,
$|\{x:f^{*}(x)>t\}|=|\{x:f(x)>t\}|.$
If $f$ is a function in $L^{p},$ then
$\|f\|_{L^{p}}=\|f^{*}\|_{L^{p}}.$
The Hardy–Littlewood inequality holds, that is,
$\int fg\leq \int f^{*}g^{*}.$
Further, the Pólya–Szegő inequality holds. This says that if $1\leq p<\infty $ and if $f\in W^{1,p}$ then
$\|\nabla f^{*}\|_{p}\leq \|\nabla f\|_{p}.$
The symmetric decreasing rearrangement is order preserving and decreases $L^{p}$ distance, that is,
$f\leq g{\text{ implies }}f^{*}\leq g^{*}$
and
$\|f-g\|_{L^{p}}\geq \|f^{*}-g^{*}\|_{L^{p}}.$
Applications
The Pólya–Szegő inequality yields, in the limit case, with $p=1,$ the isoperimetric inequality. Also, one can use some relations with harmonic functions to prove the Rayleigh–Faber–Krahn inequality.
Nonsymmetric decreasing rearrangement
We can also define $f^{*}$ as a function on the nonnegative real numbers rather than on all of $\mathbb {R} ^{n}.$[2] Let $(E,\mu )$ be a σ-finite measure space, and let $f:E\to [-\infty ,\infty ]$ be a measurable function that takes only finite (that is, real) values μ-a.e. (where "$\mu $-a.e." means except possibly on a set of $\mu $-measure zero). We define the distribution function $\mu _{f}:[0,\infty ]\to [0,\infty ]$ by the rule
$\mu _{f}(s)=\mu \{x\in E:\vert f(x)\vert >s\}.$
We can now define the decreasing rearrangement (or, sometimes, nonincreasing rearrangement) of $f$ as the function $f^{*}:[0,\infty )\to [0,\infty ]$ by the rule
$f^{*}(t)=\inf\{s\in [0,\infty ]:\mu _{f}(s)\leq t\}.$
Note that this version of the decreasing rearrangement is not symmetric, as it is only defined on the nonnegative real numbers. However, it inherits many of the same properties listed above as the symmetric version, namely:
• $f$ and $f^{*}$ are equimeasurable, that is, they have the same distribution function.
• The Hardy-Littlewood inequality holds, that is, $\int _{E}|fg|\;d\mu \leq \int _{0}^{\infty }f^{*}(t)g^{*}(t)\;dt.$
• $\vert f\vert \leq \vert g\vert $ $\mu $-a.e. implies $f^{*}\leq g^{*}.$
• $(af)^{*}=|a|f^{*}$ for all real numbers $a.$
• $(f+g)^{*}(t_{1}+t_{2})\leq f^{*}(t_{1})+g^{*}(t_{2})$ for all $t_{1},t_{2}\in [0,\infty ).$
• $|f_{n}|\uparrow |f|$ $\mu $-a.e. implies $f_{n}^{*}\uparrow f^{*}.$
• $\left(\vert f\vert ^{p}\right)^{*}=(f^{*})^{p}$ for all positive real numbers $p.$
• $\|f\|_{L_{p}(E)}=\|f^{*}\|_{L_{p}[0,\infty )}$ for all positive real numbers $p.$
• $\|f\|_{L_{\infty }(E)}=f^{*}(0).$
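For intuition, take $E=\{0,1,2,3,4\}$ with counting measure, so that $f^{*}$ is just $|f|$ sorted in nonincreasing order. A minimal numerical sketch of the properties above:

```python
import numpy as np

# E = {0, 1, 2, 3, 4} with counting measure; f given by its values.
f = np.array([0.5, -2.0, 1.0, 0.0, 1.0])

# Decreasing rearrangement: |f| sorted in nonincreasing order.
f_star = np.sort(np.abs(f))[::-1]

# Equimeasurability: same distribution function mu_f(s) = #{x : |f(x)| > s}.
for s in (0.0, 0.4, 0.9, 1.5):
    assert np.sum(np.abs(f) > s) == np.sum(f_star > s)

# Norm preservation, and f*(0) equals the sup-norm of f.
assert np.isclose(np.linalg.norm(f), np.linalg.norm(f_star))
assert f_star[0] == np.max(np.abs(f))
```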
The (nonsymmetric) decreasing rearrangement function arises often in the theory of rearrangement-invariant Banach function spaces. Especially important is the following:
Luxemburg Representation Theorem. Let $\rho $ be a rearrangement-invariant Banach function norm over a resonant measure space $(E,\mu ).$ Then there exists a (possibly not unique) rearrangement-invariant function norm ${\overline {\rho }}$ on $[0,\infty )$ such that $\rho (f)={\overline {\rho }}(f^{*})$ for all nonnegative measurable functions $f:E\to [0,\infty ]$ which are finite-valued $\mu $-a.e.
Note that the definitions of all the terminology in the above theorem (that is, Banach function norms, rearrangement-invariant Banach function spaces, and resonant measure spaces) can be found in sections 1 and 2 of Bennett and Sharpley's book (cf. the references below).
See also
• Isoperimetric inequality – Geometric inequality which sets a lower bound on the surface area of a set given its volume
• Layer cake representation
• Rayleigh–Faber–Krahn inequality
• Riesz rearrangement inequality
• Sobolev space – Vector space of functions in mathematics
• Szegő inequality
References
1. Lieb, Elliott; Loss, Michael (2001). Analysis. Graduate Studies in Mathematics. Vol. 14 (2nd ed.). American Mathematical Society. ISBN 978-0821827833.
2. Bennett, Colin; Sharpley, Robert (1988). Interpolation of Operators. ISBN 978-0-120-88730-9.
Measure theory
Basic concepts
• Absolute continuity of measures
• Lebesgue integration
• Lp spaces
• Measure
• Measure space
• Probability space
• Measurable space/function
Sets
• Almost everywhere
• Atom
• Baire set
• Borel set
• equivalence relation
• Borel space
• Carathéodory's criterion
• Cylindrical σ-algebra
• Cylinder set
• 𝜆-system
• Essential range
• infimum/supremum
• Locally measurable
• π-system
• σ-algebra
• Non-measurable set
• Vitali set
• Null set
• Support
• Transverse measure
• Universally measurable
Types of Measures
• Atomic
• Baire
• Banach
• Besov
• Borel
• Brown
• Complex
• Complete
• Content
• (Logarithmically) Convex
• Decomposable
• Discrete
• Equivalent
• Finite
• Inner
• (Quasi-) Invariant
• Locally finite
• Maximising
• Metric outer
• Outer
• Perfect
• Pre-measure
• (Sub-) Probability
• Projection-valued
• Radon
• Random
• Regular
• Borel regular
• Inner regular
• Outer regular
• Saturated
• Set function
• σ-finite
• s-finite
• Signed
• Singular
• Spectral
• Strictly positive
• Tight
• Vector
Particular measures
• Counting
• Dirac
• Euler
• Gaussian
• Haar
• Harmonic
• Hausdorff
• Intensity
• Lebesgue
• Infinite-dimensional
• Logarithmic
• Product
• Projections
• Pushforward
• Spherical measure
• Tangent
• Trivial
• Young
Maps
• Measurable function
• Bochner
• Strongly
• Weakly
• Convergence: almost everywhere
• of measures
• in measure
• of random variables
• in distribution
• in probability
• Cylinder set measure
• Random: compact set
• element
• measure
• process
• variable
• vector
• Projection-valued measure
Main results
• Carathéodory's extension theorem
• Convergence theorems
• Dominated
• Monotone
• Vitali
• Decomposition theorems
• Hahn
• Jordan
• Maharam's
• Egorov's
• Fatou's lemma
• Fubini's
• Fubini–Tonelli
• Hölder's inequality
• Minkowski inequality
• Radon–Nikodym
• Riesz–Markov–Kakutani representation theorem
Other results
• Disintegration theorem
• Lifting theory
• Lebesgue's density theorem
• Lebesgue differentiation theorem
• Sard's theorem
For Lebesgue measure
• Isoperimetric inequality
• Brunn–Minkowski theorem
• Milman's reverse
• Minkowski–Steiner formula
• Prékopa–Leindler inequality
• Vitale's random Brunn–Minkowski inequality
Applications & related
• Convex analysis
• Descriptive set theory
• Probability theory
• Real analysis
• Spectral theory
Lp spaces
Basic concepts
• Banach & Hilbert spaces
• Lp spaces
• Measure
• Lebesgue
• Measure space
• Measurable space/function
• Minkowski distance
• Sequence spaces
L1 spaces
• Integrable function
• Lebesgue integration
• Taxicab geometry
L2 spaces
• Bessel's
• Cauchy–Schwarz
• Euclidean distance
• Hilbert space
• Parseval's identity
• Polarization identity
• Pythagorean theorem
• Square-integrable function
$L^{\infty }$ spaces
• Bounded function
• Chebyshev distance
• Infimum and supremum
• Essential
• Uniform norm
Maps
• Almost everywhere
• Convergence almost everywhere
• Convergence in measure
• Function space
• Integral transform
• Locally integrable function
• Measurable function
• Symmetric decreasing rearrangement
Inequalities
• Babenko–Beckner
• Chebyshev's
• Clarkson's
• Hanner's
• Hausdorff–Young
• Hölder's
• Markov's
• Minkowski
• Young's convolution
Results
• Marcinkiewicz interpolation theorem
• Plancherel theorem
• Riemann–Lebesgue
• Riesz–Fischer theorem
• Riesz–Thorin theorem
For Lebesgue measure
• Isoperimetric inequality
• Brunn–Minkowski theorem
• Milman's reverse
• Minkowski–Steiner formula
• Prékopa–Leindler inequality
• Vitale's random Brunn–Minkowski inequality
Applications & related
• Bochner space
• Fourier analysis
• Lorentz space
• Probability theory
• Quasinorm
• Real analysis
• Sobolev space
• *-algebra
• C*-algebra
• Von Neumann
| Wikipedia |
Symmetric relation
A symmetric relation is a type of binary relation. An example is the relation "is equal to", because if a = b is true then b = a is also true. Formally, a binary relation R over a set X is symmetric if:
$\forall a,b\in X(aRb\Leftrightarrow bRa),$[1]
where the notation $aRb$ means that $(a,b)\in R$.

Transitive binary relations

| | Symmetric | Antisymmetric | Connected | Well-founded | Has joins | Has meets | Reflexive | Irreflexive | Asymmetric |
|---|---|---|---|---|---|---|---|---|---|
| Equivalence relation | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Preorder (quasiorder) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Total preorder | ✗ | ✗ | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Total order | ✗ | Y | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Prewellordering | ✗ | ✗ | Y | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Well-quasi-ordering | ✗ | ✗ | ✗ | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Well-ordering | ✗ | Y | Y | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Lattice | ✗ | Y | ✗ | ✗ | Y | Y | Y | ✗ | ✗ |
| Join-semilattice | ✗ | Y | ✗ | ✗ | Y | ✗ | Y | ✗ | ✗ |
| Meet-semilattice | ✗ | Y | ✗ | ✗ | ✗ | Y | Y | ✗ | ✗ |
| Strict partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y |
| Strict weak order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y |
| Strict total order | ✗ | Y | Y | ✗ | ✗ | ✗ | ✗ | Y | Y |

Definitions, for all $a,b$ and nonempty $S$:
• Symmetric: $aRb$ implies $bRa$
• Antisymmetric: $aRb$ and $bRa$ imply $a=b$
• Connected (also called total or semiconnex): $a\neq b$ implies $aRb$ or $bRa$
• Well-founded: $\min S$ exists
• Has joins: $a\vee b$ exists
• Has meets: $a\wedge b$ exists
• Reflexive: $aRa$
• Irreflexive (also called anti-reflexive): not $aRa$
• Asymmetric: $aRb$ implies not $bRa$

Y indicates that the column's property is always true of the row's term (at the very left), while ✗ indicates that the property is not guaranteed in general (it might, or might not, hold). For example, that every equivalence relation is symmetric, but not necessarily antisymmetric, is indicated by Y in the "Symmetric" column and ✗ in the "Antisymmetric" column, respectively. All definitions tacitly require the homogeneous relation $R$ to be transitive: for all $a,b,c,$ if $aRb$ and $bRc$ then $aRc.$ A term's definition may require additional properties that are not listed in this table.
If RT represents the converse of R, then R is symmetric if and only if R = RT.
Symmetry, along with reflexivity and transitivity, are the three defining properties of an equivalence relation.[1]
Examples
In mathematics
• "is equal to" (equality) (whereas "is less than" is not symmetric)
• "is comparable to", for elements of a partially ordered set
• "... and ... are odd":
Outside mathematics
• "is married to" (in most legal systems)
• "is a fully biological sibling of"
• "is a homophone of"
• "is co-worker of"
• "is teammate of"
Relationship to asymmetric and antisymmetric relations
By definition, a nonempty relation cannot be both symmetric and asymmetric (where if a is related to b, then b cannot be related to a (in the same way)). However, a relation can be neither symmetric nor asymmetric, which is the case for "is less than or equal to" and "preys on".
Symmetric and antisymmetric (where the only way a can be related to b and b be related to a is if a = b) are actually independent of each other, as these examples show.
Mathematical examples
| | Symmetric | Not symmetric |
|---|---|---|
| Antisymmetric | equality | divides, less than or equal to |
| Not antisymmetric | congruence in modular arithmetic | // (integer division), most nontrivial permutations |
Non-mathematical examples
| | Symmetric | Not symmetric |
|---|---|---|
| Antisymmetric | is the same person as, and is married | is the plural of |
| Not antisymmetric | is a full biological sibling of | preys on |
Properties
• A symmetric and transitive relation is always quasireflexive.[2]
• A symmetric, transitive, and reflexive relation is called an equivalence relation.[1]
• One way to count the symmetric relations on n elements is to note that, in the binary-matrix representation of a relation, symmetry means the matrix is fully determined by its entries on and above the diagonal, and these can be chosen arbitrarily; thus there are as many symmetric relations as n×n binary upper-triangular matrices, $2^{n(n+1)/2}.$[3] (A brute-force check follows the table below.)
Number of n-element binary relations of different types

| Elements | Any | Transitive | Reflexive | Symmetric | Preorder | Partial order | Total preorder | Total order | Equivalence relation |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 1 | 1 |
| 2 | 16 | 13 | 4 | 8 | 4 | 3 | 3 | 2 | 2 |
| 3 | 512 | 171 | 64 | 64 | 29 | 19 | 13 | 6 | 5 |
| 4 | 65,536 | 3,994 | 4,096 | 1,024 | 355 | 219 | 75 | 24 | 15 |
| n | $2^{n^{2}}$ | | $2^{n^{2}-n}$ | $2^{n(n+1)/2}$ | | | $\sum _{k=0}^{n}k!S(n,k)$ | $n!$ | $\sum _{k=0}^{n}S(n,k)$ |
| OEIS | A002416 | A006905 | A053763 | A006125 | A000798 | A001035 | A000670 | A000142 | A000110 |
Note that S(n, k) refers to Stirling numbers of the second kind.
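The counting argument above is easy to confirm by brute force for small n; the following illustrative script enumerates all relations on a 3-element set:

```python
from itertools import product

def is_symmetric(rel):
    """rel is a set of ordered pairs over a finite set of elements."""
    return all((b, a) in rel for (a, b) in rel)

n = 3
pairs = [(a, b) for a in range(n) for b in range(n)]

# Enumerate all 2^(n^2) binary relations and keep the symmetric ones.
count = sum(
    is_symmetric({p for p, keep in zip(pairs, bits) if keep})
    for bits in product((False, True), repeat=len(pairs))
)
assert count == 2 ** (n * (n + 1) // 2)  # 64 symmetric relations for n = 3
```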
References
1. Biggs, Norman L. (2002). Discrete Mathematics. Oxford University Press. p. 57. ISBN 978-0-19-871369-2.
2. If xRy, then yRx by symmetry, hence xRx by transitivity. The proof of xRy ⇒ yRy is similar.
3. Sloane, N. J. A. (ed.). "Sequence A006125". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
See also
• Commutative property – Property of some mathematical operations
• Symmetry in mathematics
• Symmetry – Mathematical invariance under transformations
| Wikipedia |
Symmetric set
In mathematics, a nonempty subset S of a group G is said to be symmetric if it contains the inverses of all of its elements.
Definition
In set notation a subset $S$ of a group $G$ is called symmetric if whenever $s\in S$ then the inverse of $s$ also belongs to $S.$ So if $G$ is written multiplicatively then $S$ is symmetric if and only if $S=S^{-1}$ where $S^{-1}:=\left\{s^{-1}:s\in S\right\}.$ If $G$ is written additively then $S$ is symmetric if and only if $S=-S$ where $-S:=\{-s:s\in S\}.$
If $S$ is a subset of a vector space then $S$ is said to be a symmetric set if it is symmetric with respect to the additive group structure of the vector space; that is, if $S=-S,$ which happens if and only if $-S\subseteq S.$ The symmetric hull of a subset $S$ is the smallest symmetric set containing $S,$ and it is equal to $S\cup -S.$ The largest symmetric set contained in $S$ is $S\cap -S.$
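For a finite subset of the additive group $\mathbb {Z} $, the symmetric hull and the largest symmetric subset are easy to compute directly; a small illustrative sketch:

```python
S = {-3, 1, 2}
neg_S = {-s for s in S}   # the set -S

hull = S | neg_S          # symmetric hull: smallest symmetric set containing S
core = S & neg_S          # largest symmetric set contained in S (empty here)

assert hull == {-x for x in hull}   # the hull is indeed symmetric
assert core <= S <= hull
```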
Sufficient conditions
Arbitrary unions and intersections of symmetric sets are symmetric.
Any vector subspace in a vector space is a symmetric set.
Examples
In $\mathbb {R} ,$ examples of symmetric sets are intervals of the type $(-k,k)$ with $k>0,$ and the sets $\mathbb {Z} $ and $\{-1,0,1\}.$
If $S$ is any subset of a group, then $S\cup S^{-1}$ and $S\cap S^{-1}$ are symmetric sets.
Any balanced subset of a real or complex vector space is symmetric.
See also
• Absolutely convex set – convex and balanced set
• Absorbing set – Set that can be "inflated" to reach any point
• Balanced set – Construct in functional analysis
• Bounded set (topological vector space) – Generalization of boundedness
• Convex set – In geometry, set whose intersection with every line is a single line segment
• Minkowski functional – Function made from a set
• Star domain – Property of point sets in Euclidean spaces
This article incorporates material from symmetric set on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Linear algebra
• Outline
• Glossary
Basic concepts
• Scalar
• Vector
• Vector space
• Scalar multiplication
• Vector projection
• Linear span
• Linear map
• Linear projection
• Linear independence
• Linear combination
• Basis
• Change of basis
• Row and column vectors
• Row and column spaces
• Kernel
• Eigenvalues and eigenvectors
• Transpose
• Linear equations
Matrices
• Block
• Decomposition
• Invertible
• Minor
• Multiplication
• Rank
• Transformation
• Cramer's rule
• Gaussian elimination
Bilinear
• Orthogonality
• Dot product
• Hadamard product
• Inner product space
• Outer product
• Kronecker product
• Gram–Schmidt process
Multilinear algebra
• Determinant
• Cross product
• Triple product
• Seven-dimensional cross product
• Geometric algebra
• Exterior algebra
• Bivector
• Multivector
• Tensor
• Outermorphism
Vector space constructions
• Dual
• Direct sum
• Function space
• Quotient
• Subspace
• Tensor product
Numerical
• Floating-point
• Numerical stability
• Basic Linear Algebra Subprograms
• Sparse matrix
• Comparison of linear algebra libraries
• Category
• Mathematics portal
• Commons
• Wikibooks
• Wikiversity
| Wikipedia |
Symmetric difference
In mathematics, the symmetric difference of two sets, also known as the disjunctive union, is the set of elements which are in either of the sets, but not in their intersection. For example, the symmetric difference of the sets $\{1,2,3\}$ and $\{3,4\}$ is $\{1,2,4\}$.
Symmetric difference
Type: Set operation. Field: Set theory.
Statement: The symmetric difference is the set of elements that are in either set, but not in the intersection.
Symbolic statement: $A\,\vartriangle \,B=\left(A\setminus B\right)\cup \left(B\setminus A\right)$
[Figure: Venn diagram of $A\vartriangle B$; the symmetric difference is the union without the intersection.]
The symmetric difference of the sets A and B is commonly denoted by $A\ominus B$ or $A\operatorname {\vartriangle } B.$[1][2]
The power set of any set becomes an abelian group under the operation of symmetric difference, with the empty set as the neutral element of the group and every element in this group being its own inverse. The power set of any set becomes a Boolean ring, with symmetric difference as the addition of the ring and intersection as the multiplication of the ring.
Properties
The symmetric difference is equivalent to the union of both relative complements, that is:[1]
$A\,\vartriangle \,B=\left(A\setminus B\right)\cup \left(B\setminus A\right).$
The symmetric difference can also be expressed using the XOR operation ⊕ on the predicates describing the two sets in set-builder notation:
$A\mathbin {\vartriangle } B=\{x:(x\in A)\oplus (x\in B)\}.$
The same fact can be stated as the indicator function (denoted here by $\chi $) of the symmetric difference, being the XOR (or addition mod 2) of the indicator functions of its two arguments: $\chi _{(A\,\vartriangle \,B)}=\chi _{A}\oplus \chi _{B}$ or using the Iverson bracket notation $[x\in A\,\vartriangle \,B]=[x\in A]\oplus [x\in B]$.
The symmetric difference can also be expressed as the union of the two sets, minus their intersection:
$A\,\vartriangle \,B=(A\cup B)\setminus (A\cap B).$[1]
In particular, $A\mathbin {\vartriangle } B\subseteq A\cup B$; the equality in this non-strict inclusion occurs if and only if $A$ and $B$ are disjoint sets. Furthermore, if we denote $D=A\mathbin {\vartriangle } B$ and $I=A\cap B$, then $D$ and $I$ are always disjoint, so $D$ and $I$ partition $A\cup B$. Consequently, assuming intersection and symmetric difference as primitive operations, the union of two sets can be well defined in terms of symmetric difference by the right-hand side of the equality
$A\,\cup \,B=(A\,\vartriangle \,B)\,\vartriangle \,(A\cap B)$.
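Python's built-in set type provides the symmetric difference as the ^ operator (equivalently, set.symmetric_difference), so the identities above can be checked directly:

```python
A, B = {1, 2, 3}, {3, 4}

assert A ^ B == {1, 2, 4}                  # the example from the lead
assert A ^ B == (A - B) | (B - A)          # union of relative complements
assert A ^ B == (A | B) - (A & B)          # union minus intersection
assert (A ^ B) ^ (A & B) == A | B          # recovering the union
assert A ^ A == set() and A ^ set() == A   # each set is its own inverse
```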
The symmetric difference is commutative and associative:
${\begin{aligned}A\,\vartriangle \,B&=B\,\vartriangle \,A,\\(A\,\vartriangle \,B)\,\vartriangle \,C&=A\,\vartriangle \,(B\,\vartriangle \,C).\end{aligned}}$
The empty set is neutral, and every set is its own inverse:
${\begin{aligned}A\,\vartriangle \,\varnothing &=A,\\A\,\vartriangle \,A&=\varnothing .\end{aligned}}$
Thus, the power set of any set X becomes an abelian group under the symmetric difference operation. (More generally, any field of sets forms a group with the symmetric difference as operation.) A group in which every element is its own inverse (or, equivalently, in which every element has order 2) is sometimes called a Boolean group;[3][4] the symmetric difference provides a prototypical example of such groups. Sometimes the Boolean group is actually defined as the symmetric difference operation on a set.[5] In the case where X has only two elements, the group thus obtained is the Klein four-group.
Equivalently, a Boolean group is an elementary abelian 2-group. Consequently, the group induced by the symmetric difference is in fact a vector space over the field with 2 elements Z2. If X is finite, then the singletons form a basis of this vector space, and its dimension is therefore equal to the number of elements of X. This construction is used in graph theory, to define the cycle space of a graph.
From the property of the inverses in a Boolean group, it follows that the symmetric difference of two repeated symmetric differences is equivalent to the repeated symmetric difference of the join of the two multisets, where for each double set both can be removed. In particular:
$(A\,\vartriangle \,B)\,\vartriangle \,(B\,\vartriangle \,C)=A\,\vartriangle \,C.$
This implies the triangle inequality:[6] the symmetric difference of A and C is contained in the union of the symmetric difference of A and B and that of B and C.
Intersection distributes over symmetric difference:
$A\cap (B\,\vartriangle \,C)=(A\cap B)\,\vartriangle \,(A\cap C),$
and this shows that the power set of X becomes a ring, with symmetric difference as addition and intersection as multiplication. This is the prototypical example of a Boolean ring.
Further properties of the symmetric difference include:
• $A\mathbin {\vartriangle } B=\emptyset $ if and only if $A=B$.
• $A\mathbin {\vartriangle } B=A^{c}\mathbin {\vartriangle } B^{c}$, where $A^{c}$ and $B^{c}$ are the complements of $A$ and $B$, respectively, relative to any (fixed) set that contains both.
• $\left(\bigcup _{\alpha \in {\mathcal {I}}}A_{\alpha }\right)\vartriangle \left(\bigcup _{\alpha \in {\mathcal {I}}}B_{\alpha }\right)\subseteq \bigcup _{\alpha \in {\mathcal {I}}}\left(A_{\alpha }\mathbin {\vartriangle } B_{\alpha }\right)$, where ${\mathcal {I}}$ is an arbitrary non-empty index set.
• If $f:S\rightarrow T$ is any function and $A,B\subseteq T$ are any sets in $f$'s codomain, then $f^{-1}\left(A\mathbin {\vartriangle } B\right)=f^{-1}\left(A\right)\mathbin {\vartriangle } f^{-1}\left(B\right).$
The symmetric difference can be defined in any Boolean algebra, by writing
$x\,\vartriangle \,y=(x\lor y)\land \lnot (x\land y)=(x\land \lnot y)\lor (y\land \lnot x)=x\oplus y.$
This operation has the same properties as the symmetric difference of sets.
n-ary symmetric difference
The repeated symmetric difference is in a sense equivalent to an operation on a multiset of sets giving the set of elements which are in an odd number of sets.
As above, the symmetric difference of a collection of sets contains just elements which are in an odd number of the sets in the collection:
$\vartriangle M=\left\{a\in \bigcup M:\left|\{A\in M:a\in A\}\right|{\text{ is odd}}\right\}.$
Evidently, this is well-defined only when each element of the union $ \bigcup M$ is contributed by a finite number of elements of $M$.
Suppose $M=\left\{M_{1},M_{2},\ldots ,M_{n}\right\}$ is a multiset and $n\geq 2$. Then there is a formula for $|\vartriangle M|$, the number of elements in $\vartriangle M$, given solely in terms of intersections of elements of $M$:
$|\vartriangle M|=\sum _{l=1}^{n}(-2)^{l-1}\sum _{1\leq i_{1}<i_{2}<\ldots <i_{l}\leq n}\left|M_{i_{1}}\cap M_{i_{2}}\cap \ldots \cap M_{i_{l}}\right|.$
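Both the odd-membership description and the counting formula can be verified for a small collection of sets; an illustrative sketch:

```python
from functools import reduce
from itertools import combinations

M = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
n = len(M)

# n-ary symmetric difference: elements lying in an odd number of the sets.
delta_M = reduce(lambda a, b: a ^ b, M)
assert delta_M == {x for x in set().union(*M)
                   if sum(x in A for A in M) % 2 == 1}  # {1, 3, 5}

# |symmetric difference of M| from intersections alone, per the formula above.
size = sum((-2) ** (l - 1)
           * sum(len(set.intersection(*c)) for c in combinations(M, l))
           for l in range(1, n + 1))
assert size == len(delta_M)
```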
Symmetric difference on measure spaces
As long as there is a notion of "how big" a set is, the symmetric difference between two sets can be considered a measure of how "far apart" they are.
First consider a finite set S and the counting measure on subsets given by their size. Now consider two subsets of S and set their distance apart as the size of their symmetric difference. This distance is in fact a metric, which makes the power set on S a metric space. If S has n elements, then the distance from the empty set to S is n, and this is the maximum distance for any pair of subsets.[7]
Using the ideas of measure theory, the separation of measurable sets can be defined to be the measure of their symmetric difference. If μ is a σ-finite measure defined on a σ-algebra Σ, the function
$d_{\mu }(X,Y)=\mu (X\,\vartriangle \,Y)$
is a pseudometric on Σ. dμ becomes a metric if Σ is considered modulo the equivalence relation X ~ Y defined by $\mu (X\,\vartriangle \,Y)=0$. It is sometimes called the Fréchet–Nikodym metric. The resulting metric space is separable if and only if L2(μ) is separable.
If $\mu (X),\mu (Y)<\infty $, we have: $|\mu (X)-\mu (Y)|\leq \mu (X\,\vartriangle \,Y)$. Indeed,
${\begin{aligned}|\mu (X)-\mu (Y)|&=\left|\left(\mu \left(X\setminus Y\right)+\mu \left(X\cap Y\right)\right)-\left(\mu \left(X\cap Y\right)+\mu \left(Y\setminus X\right)\right)\right|\\&=\left|\mu \left(X\setminus Y\right)-\mu \left(Y\setminus X\right)\right|\\&\leq \left|\mu \left(X\setminus Y\right)\right|+\left|\mu \left(Y\setminus X\right)\right|\\&=\mu \left(X\setminus Y\right)+\mu \left(Y\setminus X\right)\\&=\mu \left(\left(X\setminus Y\right)\cup \left(Y\setminus X\right)\right)\\&=\mu \left(X\,\vartriangle \,Y\right)\end{aligned}}$
If $S=\left(\Omega ,{\mathcal {A}},\mu \right)$ is a measure space and $F,G\in {\mathcal {A}}$ are measurable sets, then their symmetric difference is also measurable: $F\vartriangle G\in {\mathcal {A}}$. One may define an equivalence relation on measurable sets by letting $F$ and $G$ be related if $\mu \left(F\vartriangle G\right)=0$. This relation is denoted $F=G\left[{\mathcal {A}},\mu \right]$.
Given ${\mathcal {D}},{\mathcal {E}}\subseteq {\mathcal {A}}$, one writes ${\mathcal {D}}\subseteq {\mathcal {E}}\left[{\mathcal {A}},\mu \right]$ if to each $D\in {\mathcal {D}}$ there's some $E\in {\mathcal {E}}$ such that $D=E\left[{\mathcal {A}},\mu \right]$. The relation "$\subseteq \left[{\mathcal {A}},\mu \right]$" is a partial order on the family of subsets of ${\mathcal {A}}$.
We write ${\mathcal {D}}={\mathcal {E}}\left[{\mathcal {A}},\mu \right]$ if ${\mathcal {D}}\subseteq {\mathcal {E}}\left[{\mathcal {A}},\mu \right]$ and ${\mathcal {E}}\subseteq {\mathcal {D}}\left[{\mathcal {A}},\mu \right]$. The relation "$=\left[{\mathcal {A}},\mu \right]$" is an equivalence relationship between the subsets of ${\mathcal {A}}$.
The symmetric closure of ${\mathcal {D}}$ is the collection of all ${\mathcal {A}}$-measurable sets that are $=\left[{\mathcal {A}},\mu \right]$ to some $D\in {\mathcal {D}}$. The symmetric closure of ${\mathcal {D}}$ contains ${\mathcal {D}}$. If ${\mathcal {D}}$ is a sub-$\sigma $-algebra of ${\mathcal {A}}$, so is the symmetric closure of ${\mathcal {D}}$.
$F=G\left[{\mathcal {A}},\mu \right]$ if and only if $\left|\mathbf {1} _{F}-\mathbf {1} _{G}\right|=0$ $\mu $-almost everywhere.
Hausdorff distance vs. symmetric difference
The Hausdorff distance and the (area of the) symmetric difference are both pseudo-metrics on the set of measurable geometric shapes. However, they behave quite differently. For example, one can form two sequences of shapes, "Red" and "Red ∪ Green", in which the Hausdorff distance between corresponding shapes becomes smaller while the area of their symmetric difference becomes larger, and vice versa. By continuing such sequences in both directions, it is possible to get two sequences such that the Hausdorff distance between them converges to 0 and the symmetric distance between them diverges, or vice versa.
See also
• Algebra of sets
• Boolean function
• Complement (set theory)
• Difference (set theory)
• Exclusive or
• Fuzzy set
• Intersection (set theory)
• Jaccard index
• List of set identities and relations
• Logical graph
• Separable sigma algebras
• Set theory
• Symmetry
• Union (set theory)
• Inclusion–exclusion principle
References
1. Taylor, Courtney (March 31, 2019). "What Is Symmetric Difference in Math?". ThoughtCo. Retrieved 2020-09-05.
2. Weisstein, Eric W. "Symmetric Difference". mathworld.wolfram.com. Retrieved 2020-09-05.
3. Givant, Steven; Halmos, Paul (2009). Introduction to Boolean Algebras. Springer Science & Business Media. p. 6. ISBN 978-0-387-40293-2.
4. Humberstone, Lloyd (2011). The Connectives. MIT Press. p. 782. ISBN 978-0-262-01654-4.
5. Rotman, Joseph J. (2010). Advanced Modern Algebra. American Mathematical Soc. p. 19. ISBN 978-0-8218-4741-1.
6. Rudin, Walter (January 1, 1976). Principles of Mathematical Analysis (3rd ed.). McGraw-Hill Education. p. 306. ISBN 978-0070542358.
7. Claude Flament (1963) Applications of Graph Theory to Group Structure, page 16, Prentice-Hall MR0157785
Symmetric spectrum
In algebraic topology, a symmetric spectrum X is a spectrum of pointed simplicial sets that comes with an action of the symmetric group $\Sigma _{n}$ on $X_{n}$ such that the composition of structure maps
$S^{1}\wedge \dots \wedge S^{1}\wedge X_{n}\to S^{1}\wedge \dots \wedge S^{1}\wedge X_{n+1}\to \dots \to S^{1}\wedge X_{n+p-1}\to X_{n+p}$
is equivariant with respect to $\Sigma _{p}\times \Sigma _{n}$. A morphism between symmetric spectra is a morphism of spectra that is equivariant with respect to the actions of symmetric groups.
The technical advantage of the category ${\mathcal {S}}p^{\Sigma }$ of symmetric spectra is that it has a closed symmetric monoidal structure (with respect to smash product). It is also a simplicial model category. A symmetric ring spectrum is a monoid in ${\mathcal {S}}p^{\Sigma }$; if the monoid is commutative, it is a commutative ring spectrum. The possibility of this definition of "ring spectrum" was one of the motivations behind the category.
A similar technical goal is also achieved by May's theory of S-modules, a competing theory.
References
• Introduction to symmetric spectra I
• M. Hovey, B. Shipley, and J. Smith, "Symmetric spectra", Journal of the AMS 13 (1999), no. 1, 149–208.
Symmetric algebra
In mathematics, the symmetric algebra S(V) (also denoted Sym(V)) on a vector space V over a field K is a commutative algebra over K that contains V, and is, in some sense, minimal for this property. Here, "minimal" means that S(V) satisfies the following universal property: for every linear map f from V to a commutative algebra A, there is a unique algebra homomorphism g : S(V) → A such that f = g ∘ i, where i is the inclusion map of V in S(V).
Not to be confused with Symmetric Frobenius algebra.
If B is a basis of V, the symmetric algebra S(V) can be identified, through a canonical isomorphism, with the polynomial ring K[B], where the elements of B are considered as indeterminates. Therefore, the symmetric algebra over V can be viewed as a "coordinate free" polynomial ring over V.
The symmetric algebra S(V) can be built as the quotient of the tensor algebra T(V) by the two-sided ideal generated by the elements of the form x ⊗ y − y ⊗ x.
All these definitions and properties extend naturally to the case where V is a module (not necessarily a free one) over a commutative ring.
Construction
From tensor algebra
It is possible to use the tensor algebra T(V) to describe the symmetric algebra S(V). In fact, S(V) can be defined as the quotient algebra of T(V) by the two-sided ideal generated by the commutators $v\otimes w-w\otimes v.$
It is straightforward to verify that the resulting algebra satisfies the universal property stated in the introduction. Because of the universal property of the tensor algebra, a linear map f from V to a commutative algebra A extends to an algebra homomorphism $T(V)\rightarrow A$, which factors through S(V) because A is commutative. The extension of f to an algebra homomorphism $S(V)\rightarrow A$ is unique because V generates S(V) as a K-algebra.
This results also directly from a general result of category theory, which asserts that the composition of two left adjoint functors is also a left adjoint functor. Here, the forgetful functor from commutative algebras to vector spaces or modules (forgetting the multiplication) is the composition of the forgetful functors from commutative algebras to associative algebras (forgetting commutativity), and from associative algebras to vector spaces or modules (forgetting the multiplication). As the tensor algebra and the quotient by commutators are left adjoint to these forgetful functors, their composition is left adjoint to the forgetful functor from commutative algebras to vector spaces or modules, and this proves the desired universal property.
From polynomial ring
The symmetric algebra S(V) can also be built from polynomial rings.
If V is a K-vector space or a free K-module, with a basis B, let K[B] be the polynomial ring that has the elements of B as indeterminates. The homogeneous polynomials of degree one form a vector space or a free module that can be identified with V. It is straightforward to verify that this makes K[B] a solution to the universal problem stated in the introduction. This implies that K[B] and S(V) are canonically isomorphic, and can therefore be identified. This results also immediately from general considerations of category theory, since free modules and polynomial rings are free objects of their respective categories.
If V is a module that is not free, it can be written $V=L/M,$ where L is a free module, and M is a submodule of L. In this case, one has
$S(V)=S(L/M)=S(L)/\langle M\rangle ,$
where $\langle M\rangle $ is the ideal generated by M. (Here, equals signs mean equality up to a canonical isomorphism.) Again this can be proved by showing that one has a solution of the universal property, and this can be done either by a straightforward but tedious computation, or by using category theory, and more specifically, the fact that a quotient is the solution of the universal problem for morphisms that map to zero a given subset. (Depending on the case, the kernel is a normal subgroup, a submodule or an ideal, and the usual definition of quotients can be viewed as a proof of the existence of a solution of the universal problem.)
Grading
The symmetric algebra is a graded algebra. That is, it is a direct sum
$S(V)=\bigoplus _{n=0}^{\infty }S^{n}(V),$
where $S^{n}(V),$ called the nth symmetric power of V, is the vector subspace or submodule generated by the products of n elements of V. (The second symmetric power $S^{2}(V)$ is sometimes called the symmetric square of V).
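For example, if $V$ is a vector space with basis $\{x,y\}$, then under the identification $S(V)\cong K[x,y]$ one has
$S^{0}(V)=K,\quad S^{1}(V)=Kx\oplus Ky,\quad S^{2}(V)=Kx^{2}\oplus Kxy\oplus Ky^{2},$
and, in general, $S^{n}(V)$ is spanned by the monomials of total degree $n$.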
This can be proved by various means. One follows from the tensor-algebra construction: since the tensor algebra is graded, and the symmetric algebra is its quotient by a homogeneous ideal, namely the ideal generated by all $x\otimes y-y\otimes x,$ where x and y are in V (hence homogeneous of degree one), the grading passes to the quotient.
In the case of a vector space or a free module, the gradation is the gradation of the polynomials by the total degree. A non-free module can be written as L / M, where L is a free module of base B; its symmetric algebra is the quotient of the (graded) symmetric algebra of L (a polynomial ring) by the homogeneous ideal generated by the elements of M, which are homogeneous of degree one.
One can also define $S^{n}(V)$ as the solution of the universal problem for n-linear symmetric functions from V into a vector space or a module, and then verify that the direct sum of all $S^{n}(V)$ satisfies the universal problem for the symmetric algebra.
Relationship with symmetric tensors
As the symmetric algebra of a vector space is a quotient of the tensor algebra, an element of the symmetric algebra is not a tensor, and, in particular, is not a symmetric tensor. However, symmetric tensors are strongly related to the symmetric algebra.
A symmetric tensor of degree n is an element of Tn(V) that is invariant under the action of the symmetric group ${\mathcal {S}}_{n}.$ More precisely, given $\sigma \in {\mathcal {S}}_{n},$ the transformation $v_{1}\otimes \cdots \otimes v_{n}\mapsto v_{\sigma (1)}\otimes \cdots \otimes v_{\sigma (n)}$ defines a linear endomorphism of Tn(V). A symmetric tensor is a tensor that is invariant under all these endomorphisms. The symmetric tensors of degree n form a vector subspace (or module) Symn(V) ⊂ Tn(V). The symmetric tensors are the elements of the direct sum $\textstyle \bigoplus _{n=0}^{\infty }\operatorname {Sym} ^{n}(V),$ which is a graded vector space (or a graded module). It is not an algebra, as the tensor product of two symmetric tensors is not symmetric in general.
Let $\pi _{n}$ be the restriction to Symn(V) of the canonical surjection $T^{n}(V)\to S^{n}(V).$ If n! is invertible in the ground field (or ring), then $\pi _{n}$ is an isomorphism. This is always the case with a ground field of characteristic zero. The inverse isomorphism is the linear map defined (on products of n vectors) by the symmetrization
$v_{1}\cdots v_{n}\mapsto {\frac {1}{n!}}\sum _{\sigma \in S_{n}}v_{\sigma (1)}\otimes \cdots \otimes v_{\sigma (n)}.$
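For instance, in degree two (assuming $2$ is invertible in the ground ring), the symmetrization sends
$vw\mapsto {\tfrac {1}{2}}(v\otimes w+w\otimes v),$
and applying the canonical surjection $\pi _{2}$ to this symmetric tensor gives back ${\tfrac {1}{2}}(vw+wv)=vw.$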
The map $\pi _{n}$ is not injective if the characteristic is positive and at most n; for example, $\pi _{2}(x\otimes y+y\otimes x)=2xy$ is zero in characteristic two. Over a ring of characteristic zero, $\pi _{n}$ can fail to be surjective; for example, over the integers, if x and y are two linearly independent elements of V = S1(V) that are not in 2V, then $xy\not \in \pi _{2}(\operatorname {Sym} ^{2}(V)),$ since ${\frac {1}{2}}(x\otimes y+y\otimes x)\not \in \operatorname {Sym} ^{2}(V).$
In summary, over a field of characteristic zero, the symmetric tensors and the symmetric algebra form two isomorphic graded vector spaces. They can thus be identified as far as only the vector space structure is concerned, but they cannot be identified as soon as products are involved. Moreover, this isomorphism does not extend to the cases of fields of positive characteristic and rings that do not contain the rational numbers.
Categorical properties
Given a module V over a commutative ring K, the symmetric algebra S(V) can be defined by the following universal property:
For every K-linear map f from V to a commutative K-algebra A, there is a unique K-algebra homomorphism $g:S(V)\to A$ such that $f=g\circ i,$ where i is the inclusion of V in S(V).
As for every universal property, as soon as a solution exists, this defines uniquely the symmetric algebra, up to a canonical isomorphism. It follows that all properties of the symmetric algebra can be deduced from the universal property. This section is devoted to the main properties that belong to category theory.
The symmetric algebra is a functor from the category of K-modules to the category of commutative K-algebras, since the universal property implies that every module homomorphism $f:V\to W$ can be uniquely extended to an algebra homomorphism $S(f):S(V)\to S(W).$
The universal property can be reformulated by saying that the symmetric algebra is a left adjoint to the forgetful functor that sends a commutative algebra to its underlying module.
Symmetric algebra of an affine space
One can analogously construct the symmetric algebra on an affine space. The key difference is that the symmetric algebra of an affine space is not a graded algebra, but a filtered algebra: one can determine the degree of a polynomial on an affine space, but not its homogeneous parts.
For instance, given a linear polynomial on a vector space, one can determine its constant part by evaluating at 0. On an affine space, there is no distinguished point, so one cannot do this (choosing a point turns an affine space into a vector space).
Analogy with exterior algebra
The Sk are functors comparable to the exterior powers; here, though, the dimension grows with k; it is given by
$\operatorname {dim} (S^{k}(V))={\binom {n+k-1}{k}}$
where n is the dimension of V. This binomial coefficient is the number of n-variable monomials of degree k. In fact, the symmetric algebra and the exterior algebra appear as the isotypical components of the trivial and sign representations, respectively, of the action of $S_{n}$ on the tensor product $V^{\otimes n}$ (for example, over the complex field).
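For example, with $n=3$ and $k=2$ the formula gives $\dim(S^{2}(V))={\binom {4}{2}}=6$, matching the six monomials of degree two in three variables:
$x^{2},\;y^{2},\;z^{2},\;xy,\;xz,\;yz.$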
As a Hopf algebra
The symmetric algebra can be given the structure of a Hopf algebra. See Tensor algebra for details.
As a universal enveloping algebra
The symmetric algebra S(V) is the universal enveloping algebra of an abelian Lie algebra, i.e. one in which the Lie bracket is identically 0.
See also
• Exterior algebra, the alternating algebra analog
• Graded-symmetric algebra, a common generalization of a symmetric algebra and an exterior algebra
• Weyl algebra, a quantum deformation of the symmetric algebra by a symplectic form
• Clifford algebra, a quantum deformation of the exterior algebra by a quadratic form
References
• Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9
Symmetric variety
In algebraic geometry, a symmetric variety is an algebraic analogue of a symmetric space in differential geometry, given by a quotient G/H of a reductive algebraic group G by the subgroup H fixed by some involution of G.
See also
• Wonderful compactification
• Homogeneous variety
• Spherical variety
References
• Ash, A.; Mumford, David; Rapoport, M.; Tai, Y. (1975), Smooth compactification of locally symmetric varieties (PDF), Brookline, Mass.: Math. Sci. Press, ISBN 978-0-521-73955-9, MR 0457437
Symmetrically continuous function
In mathematics, a function $f:\mathbb {R} \to \mathbb {R} $ is symmetrically continuous at a point x if
$\lim _{h\to 0}{\bigl (}f(x+h)-f(x-h){\bigr )}=0.$
The usual definition of continuity implies symmetric continuity, but the converse is not true. For example, the function $x^{-2}$ is symmetrically continuous at $x=0$, but not continuous.
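To verify this, define $f(x)=x^{-2}$ for $x\neq 0$ and assign $f(0)$ any value, say $f(0)=0$; then for every $h\neq 0$,
$f(0+h)-f(0-h)={\frac {1}{h^{2}}}-{\frac {1}{(-h)^{2}}}=0,$
so the limit in the definition is $0$, while $f$ is clearly not continuous at $0$.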
Also, symmetric differentiability implies symmetric continuity, but the converse is not true, just as ordinary continuity does not imply differentiability.
The set of symmetrically continuous functions, with the usual pointwise addition and scalar multiplication, can easily be shown to form a vector space over $\mathbb {R} $, in which the usual continuous functions form a linear subspace.
References
• Thomson, Brian S. (1994). Symmetric Properties of Real Functions. Marcel Dekker. ISBN 0-8247-9230-0.
Linked list
In computer science, a linked list is a linear collection of data elements whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence. In its most basic form, each node contains data and a reference (in other words, a link) to the next node in the sequence. This structure allows for efficient insertion or removal of elements from any position in the sequence during iteration. More complex variants add additional links, allowing more efficient insertion or removal of nodes at arbitrary positions. A drawback of linked lists is that access time is linear in the number of nodes: because the nodes are serially linked, reaching a node requires first visiting its predecessor, which also makes the structure difficult to pipeline. Faster access, such as random access, is not feasible, and arrays have better cache locality than linked lists.
Linked lists are among the simplest and most common data structures. They can be used to implement several other common abstract data types, including lists, stacks, queues, associative arrays, and S-expressions, though it is not uncommon to implement those data structures directly without using a linked list as the basis.
The principal benefit of a linked list over a conventional array is that the list elements can be easily inserted or removed without reallocation or reorganization of the entire structure because the data items do not need to be stored contiguously in memory or on disk, while restructuring an array at run-time is a much more expensive operation. Linked lists allow insertion and removal of nodes at any point in the list, and allow doing so with a constant number of operations by keeping the link previous to the link being added or removed in memory during list traversal.
On the other hand, since simple linked lists by themselves do not allow random access to the data or any form of efficient indexing, many basic operations—such as obtaining the last node of the list, finding a node that contains a given datum, or locating the place where a new node should be inserted—may require iterating through most or all of the list elements.
History
Linked lists were developed in 1955–1956, by Allen Newell, Cliff Shaw and Herbert A. Simon at RAND Corporation and Carnegie Mellon University as the primary data structure for their Information Processing Language (IPL). IPL was used by the authors to develop several early artificial intelligence programs, including the Logic Theory Machine, the General Problem Solver, and a computer chess program. Reports on their work appeared in IRE Transactions on Information Theory in 1956, and several conference proceedings from 1957 to 1959, including Proceedings of the Western Joint Computer Conference in 1957 and 1958, and Information Processing (Proceedings of the first UNESCO International Conference on Information Processing) in 1959. The now-classic diagram consisting of blocks representing list nodes with arrows pointing to successive list nodes appears in "Programming the Logic Theory Machine" by Newell and Shaw in Proc. WJCC, February 1957. Newell and Simon were recognized with the ACM Turing Award in 1975 for having "made basic contributions to artificial intelligence, the psychology of human cognition, and list processing". The problem of machine translation for natural language processing led Victor Yngve at Massachusetts Institute of Technology (MIT) to use linked lists as data structures in his COMIT programming language for computer research in the field of linguistics. A report on this language entitled "A programming language for mechanical translation" appeared in Mechanical Translation in 1958.
Another early appearance of linked lists was by Hans Peter Luhn who wrote an internal IBM memorandum in January 1953 that suggested the use of linked lists in chained hash tables.[1]
LISP, standing for list processor, was created by John McCarthy in 1958 while he was at MIT and in 1960 he published its design in a paper in the Communications of the ACM, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". One of LISP's major data structures is the linked list.
By the early 1960s, the utility of both linked lists and languages which use these structures as their primary data representation was well established. Bert Green of the MIT Lincoln Laboratory published a review article entitled "Computer languages for symbol manipulation" in IRE Transactions on Human Factors in Electronics in March 1961 which summarized the advantages of the linked list approach. A later review article, "A Comparison of list-processing computer languages" by Bobrow and Raphael, appeared in Communications of the ACM in April 1964.
Several operating systems developed by Technical Systems Consultants (originally of West Lafayette, Indiana, and later of Chapel Hill, North Carolina) used singly linked lists as file structures. A directory entry pointed to the first sector of a file, and succeeding portions of the file were located by traversing pointers. Systems using this technique included Flex (for the Motorola 6800 CPU), mini-Flex (same CPU), and Flex9 (for the Motorola 6809 CPU). A variant developed by TSC for and marketed by Smoke Signal Broadcasting in California used doubly linked lists in the same manner.
The TSS/360 operating system, developed by IBM for the System 360/370 machines, used a double linked list for their file system catalog. The directory structure was similar to Unix, where a directory could contain files and other directories and extend to any depth.
Basic concepts and nomenclature
Each record of a linked list is often called an 'element' or 'node'.
The field of each node that contains the address of the next node is usually called the 'next link' or 'next pointer'. The remaining fields are known as the 'data', 'information', 'value', 'cargo', or 'payload' fields.
The 'head' of a list is its first node. The 'tail' of a list may refer either to the rest of the list after the head, or to the last node in the list. In Lisp and some derived languages, the next node may be called the 'cdr' (pronounced /'kʊd.əɹ/) of the list, while the payload of the head node may be called the 'car'.
Singly linked list
Singly linked lists contain nodes which have a 'value' field as well as a 'next' field, which points to the next node in the sequence of nodes. Operations that can be performed on singly linked lists include insertion, deletion, and traversal.
The following C language code demonstrates how to add a new node holding a given value to the end of a singly linked list:
// Each node in a linked list is a structure; the head node is the first node in the list.
typedef struct Node {
    int value;          // the data stored in this node
    struct Node* next;  // address of the next node; NULL for the last node
} Node;

// Allocate a new node with value 0 and no successor (requires <stdlib.h>).
Node* createNode(void) {
    Node* node = malloc(sizeof *node);
    node->value = 0;
    node->next = NULL;
    return node;
}

Node* addNode(Node* head, int value) {
    Node* temp = createNode();         // the new node to append
    temp->value = value;
    if (head == NULL) {
        head = temp;                   // empty list: the new node becomes the head
    } else {
        Node* p = head;
        while (p->next != NULL) {      // traverse until p is the last node
            p = p->next;
        }
        p->next = temp;                // make the previously last node point to the new node
    }
    return head;
}
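A minimal usage sketch (an illustrative main of our own, assuming the definitions above appear in the same file; deallocation is omitted for brevity):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    Node* head = NULL;                 // start with an empty list
    for (int i = 1; i <= 3; i++) {
        head = addNode(head, i * 10);  // builds the list 10 -> 20 -> 30
    }
    for (Node* p = head; p != NULL; p = p->next) {
        printf("%d\n", p->value);      // prints 10, 20, 30 on separate lines
    }
    return 0;
}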
Doubly linked list
In a 'doubly linked list', each node contains, besides the next-node link, a second link field pointing to the 'previous' node in the sequence. The two links may be called 'forward' and 'backward', or 'next' and 'prev' ('previous').
A technique known as XOR-linking allows a doubly linked list to be implemented using a single link field in each node. However, this technique requires the ability to do bit operations on addresses, and therefore may not be available in some high-level languages.
Many modern operating systems use doubly linked lists to maintain references to active processes, threads, and other dynamic objects.[2] A common strategy for rootkits to evade detection is to unlink themselves from these lists.[3]
Multiply linked list
In a 'multiply linked list', each node contains two or more link fields, each field being used to connect the same set of data arranged in a different order (e.g., by name, by department, by date of birth, etc.). While a doubly linked list can be seen as a special case of multiply linked list, the fact that the two and more orders are opposite to each other leads to simpler and more efficient algorithms, so they are usually treated as a separate case.
Circular linked list
In the last node of a linked list, the link field often contains a null reference, a special value used to indicate the lack of further nodes. A less common convention is to make it point to the first node of the list; in that case, the list is said to be 'circular' or 'circularly linked'; otherwise, it is said to be 'open' or 'linear'. A circular list, then, is one in which the last node's "next" pointer holds the memory address of the first node.
In the case of a circular doubly linked list, the first node also points to the last node of the list.
Sentinel nodes
In some implementations an extra 'sentinel' or 'dummy' node may be added before the first data record or after the last one. This convention simplifies and accelerates some list-handling algorithms, by ensuring that all links can be safely dereferenced and that every list (even one that contains no data elements) always has a "first" and "last" node.
Empty lists
An empty list is a list that contains no data records. This is usually the same as saying that it has zero nodes. If sentinel nodes are being used, the list is usually said to be empty when it has only sentinel nodes.
Hash linking
The link fields need not be physically part of the nodes. If the data records are stored in an array and referenced by their indices, the link field may be stored in a separate array with the same indices as the data records.
List handles
Since a reference to the first node gives access to the whole list, that reference is often called the 'address', 'pointer', or 'handle' of the list. Algorithms that manipulate linked lists usually get such handles to the input lists and return the handles to the resulting lists. In fact, in the context of such algorithms, the word "list" often means "list handle". In some situations, however, it may be convenient to refer to a list by a handle that consists of two links, pointing to its first and last nodes.
Combining alternatives
The alternatives listed above may be arbitrarily combined in almost every way, so one may have circular doubly linked lists without sentinels, circular singly linked lists with sentinels, etc.
Tradeoffs
As with most choices in computer programming and design, no method is well suited to all circumstances. A linked list data structure might work well in one case, but cause problems in another. This is a list of some of the common tradeoffs involving linked list structures.
Linked lists vs. dynamic arrays
Comparison of list data structures

                     Peek          Mutate (insert or delete) at …                                    Excess space,
                     (index)       Beginning    End                          Middle                  average
Linked list          Θ(n)          Θ(1)         Θ(1), known end element;     Peek time + Θ(1)[4][5]  Θ(n)
                                                Θ(n), unknown end element
Array                Θ(1)          —            —                            —                       0
Dynamic array        Θ(1)          Θ(n)         Θ(1) amortized               Θ(n)                    Θ(n)[6]
Balanced tree        Θ(log n)      Θ(log n)     Θ(log n)                     Θ(log n)                Θ(n)
Random-access list   Θ(log n)[7]   Θ(1)         —[7]                         —[7]                    Θ(n)
Hashed array tree    Θ(1)          Θ(n)         Θ(1) amortized               Θ(n)                    Θ(√n)
A dynamic array is a data structure that allocates all elements contiguously in memory, and keeps a count of the current number of elements. If the space reserved for the dynamic array is exceeded, it is reallocated and (possibly) copied, which is an expensive operation.
Linked lists have several advantages over dynamic arrays. Insertion or deletion of an element at a specific point of a list, assuming that we already hold a pointer to the appropriate node (the one before the element to be removed, or before the insertion point), is a constant-time operation (otherwise, without this reference, it is O(n)), whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.
Moreover, arbitrarily many elements may be inserted into a linked list, limited only by the total memory available; while a dynamic array will eventually fill up its underlying array data structure and will have to reallocate—an expensive operation, one that may not even be possible if memory is fragmented, although the cost of reallocation can be averaged over insertions, and the cost of an insertion due to reallocation would still be amortized O(1). This helps with appending elements at the array's end, but inserting into (or removing from) middle positions still carries prohibitive costs due to data moving to maintain contiguity. An array from which many elements are removed may also have to be resized in order to avoid wasting too much space.
On the other hand, dynamic arrays (as well as fixed-size array data structures) allow constant-time random access, while linked lists allow only sequential access to elements. Singly linked lists, in fact, can be easily traversed in only one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays and dynamic arrays is also faster than on linked lists on many machines, because they have optimal locality of reference and thus make good use of data caching.
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or boolean values, because the storage overhead for the links may exceed the size of the data by a factor of two or more. In contrast, a dynamic array requires only the space for the data itself (and a very small amount of control data).[note 1] It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools.
Some hybrid solutions try to combine the advantages of the two representations. Unrolled linked lists store several elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR coding does both these as well, by replacing references with the actual data referenced, which extends off the end of the referencing record.
A good example that highlights the pros and cons of using dynamic arrays vs. linked lists is by implementing a program that resolves the Josephus problem. The Josephus problem is an election method that works by having a group of people stand in a circle. Starting at a predetermined person, one may count around the circle n times. Once the nth person is reached, one should remove them from the circle and have the members close the circle. The process is repeated until only one person is left. That person wins the election. This shows the strengths and weaknesses of a linked list vs. a dynamic array, because if the people are viewed as connected nodes in a circular linked list, then it shows how easily the linked list is able to delete nodes (as it only has to rearrange the links to the different nodes). However, the linked list will be poor at finding the next person to remove and will need to search through the list until it finds that person. A dynamic array, on the other hand, will be poor at deleting nodes (or elements) as it cannot remove one node without individually shifting all the elements up the list by one. However, it is exceptionally easy to find the nth person in the circle by directly referencing them by their position in the array.
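For illustration, a minimal C sketch of the circular-list approach (the function name josephus and the counting convention are our own choices; error handling is omitted, and count ≥ 1, n ≥ 1 is assumed):
#include <stdlib.h>

typedef struct JNode {
    int id;
    struct JNode* next;
} JNode;

// Return the id of the survivor when every nth person is removed
// from a circle of count people.
int josephus(int count, int n) {
    JNode* first = NULL;
    JNode* prev = NULL;
    for (int i = 1; i <= count; i++) {           // build the circle 1..count
        JNode* node = malloc(sizeof *node);
        node->id = i;
        node->next = NULL;
        if (prev != NULL) prev->next = node; else first = node;
        prev = node;
    }
    prev->next = first;                          // close the circle
    JNode* p = prev;                             // p trails the node to remove
    while (p->next != p) {                       // stop when one node remains
        for (int i = 1; i < n; i++) p = p->next; // advance n-1 nodes
        JNode* doomed = p->next;                 // the nth person
        p->next = doomed->next;                  // unlink in O(1)
        free(doomed);
    }
    int survivor = p->id;
    free(p);
    return survivor;
}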
The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the subject of much research.
A balanced tree has similar memory access patterns and space overhead to a linked list while permitting much more efficient indexing, taking O(log n) time instead of O(n) for a random access. However, insertion and deletion operations are more expensive due to the overhead of tree manipulations to maintain balance. Schemes exist for trees to automatically maintain themselves in a balanced state: AVL trees or red–black trees.
Singly linked linear lists vs. other lists
While doubly linked and circular lists have advantages over singly linked linear lists, linear lists offer some advantages that make them preferable in some situations.
A singly linked linear list is a recursive data structure, because it contains a pointer to a smaller object of the same type. For that reason, many operations on singly linked linear lists (such as merging two lists, or enumerating the elements in reverse order) often have very simple recursive algorithms, much simpler than any solution using iterative commands. While those recursive solutions can be adapted for doubly linked and circularly linked lists, the procedures generally need extra arguments and more complicated base cases.
Linear singly linked lists also allow tail-sharing, the use of a common final portion of sub-list as the terminal portion of two different lists. In particular, if a new node is added at the beginning of a list, the former list remains available as the tail of the new one—a simple example of a persistent data structure. Again, this is not true with the other variants: a node may never belong to two different circular or doubly linked lists.
In particular, end-sentinel nodes can be shared among singly linked non-circular lists. The same end-sentinel node may be used for every such list. In Lisp, for example, every proper list ends with a link to a special node, denoted by nil or (), whose CAR and CDR links point to itself. Thus a Lisp procedure can safely take the CAR or CDR of any list.
The advantages of the fancy variants are often limited to the complexity of the algorithms, not their efficiency. A circular list, in particular, can usually be emulated by a linear list together with two variables that point to the first and last nodes, at no extra cost.
Doubly linked vs. singly linked
Double-linked lists require more space per node (unless one uses XOR-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow fast and easy sequential access to the list in both directions. In a doubly linked list, one can insert or delete a node in a constant number of operations given only that node's address. To do the same in a singly linked list, one must have the address of the pointer to that node, which is either the handle for the whole list (in case of the first node) or the link field in the previous node. Some algorithms require access in both directions. On the other hand, doubly linked lists do not allow tail-sharing and cannot be used as persistent data structures.
Circularly linked vs. linearly linked
A circularly linked list may be a natural option to represent arrays that are naturally circular, e.g. the corners of a polygon, a pool of buffers that are used and released in FIFO ("first in, first out") order, or a set of processes that should be time-shared in round-robin order. In these applications, a pointer to any node serves as a handle to the whole list.
With a circular list, a pointer to the last node gives easy access also to the first node, by following one link. Thus, in applications that require access to both ends of the list (e.g., in the implementation of a queue), a circular structure allows one to handle the structure by a single pointer, instead of two.
A circular list can be split into two circular lists, in constant time, by giving the addresses of the last node of each piece. The operation consists in swapping the contents of the link fields of those two nodes. Applying the same operation to any two nodes in two distinct lists joins the two list into one. This property greatly simplifies some algorithms and data structures, such as the quad-edge and face-edge.
The simplest representation for an empty circular list (when such a thing makes sense) is a null pointer, indicating that the list has no nodes. Without this choice, many algorithms have to test for this special case, and handle it separately. By contrast, the use of null to denote an empty linear list is more natural and often creates fewer special cases.
For some applications, it can be useful to use singly linked lists that can vary between being circular and being linear, or even circular with a linear initial segment. Algorithms for searching or otherwise operating on these have to take precautions to avoid accidentally entering an endless loop. One well-known method is to have a second pointer walking the list at half or double the speed, and if both pointers meet at the same node, you know you found a cycle.
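A sketch of this two-pointer technique in C, reusing the Node type from the earlier example (the function name hasCycle is illustrative):
// Return 1 if the list starting at head contains a cycle, 0 otherwise.
int hasCycle(const Node* head) {
    const Node* slow = head;            // advances one node per step
    const Node* fast = head;            // advances two nodes per step
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) return 1;     // the pointers can only meet inside a cycle
    }
    return 0;                           // fast reached an end: the list is linear
}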
Using sentinel nodes
Sentinel nodes may simplify certain list operations, by ensuring that the next or previous nodes exist for every element, and that even empty lists have at least one node. One may also use a sentinel node at the end of the list, with an appropriate data field, to eliminate some end-of-list tests. For example, when scanning the list looking for a node with a given value x, setting the sentinel's data field to x makes it unnecessary to test for end-of-list inside the loop. Another example is merging two sorted lists: if their sentinels have data fields set to +∞, the choice of the next output node does not need special handling for empty lists.
However, sentinel nodes use up extra space (especially in applications that use many short lists), and they may complicate other operations (such as the creation of a new empty list).
However, if the circular list is used merely to simulate a linear list, one may avoid some of this complexity by adding a single sentinel node to every list, between the last and the first data nodes. With this convention, an empty list consists of the sentinel node alone, pointing to itself via the next-node link. The list handle should then be a pointer to the last data node, before the sentinel, if the list is not empty; or to the sentinel itself, if the list is empty.
The same trick can be used to simplify the handling of a doubly linked linear list, by turning it into a circular doubly linked list with a single sentinel node. However, in this case, the handle should be a single pointer to the dummy node itself.[8]
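A minimal C sketch of this convention (illustrative names; requires <stdlib.h>; the sentinel is allocated once and never carries data):
typedef struct DNode {
    int value;
    struct DNode* next;
    struct DNode* prev;
} DNode;

// Create an empty list: a lone sentinel linked to itself in both directions.
DNode* makeList(void) {
    DNode* sentinel = malloc(sizeof *sentinel);
    sentinel->next = sentinel;
    sentinel->prev = sentinel;
    return sentinel;
}

// Insert a new node holding v immediately after node
// (works unchanged when node is the sentinel, i.e. at the "front").
void insertAfterD(DNode* node, int v) {
    DNode* n = malloc(sizeof *n);
    n->value = v;
    n->next = node->next;
    n->prev = node;
    node->next->prev = n;
    node->next = n;
}

// Unlink and free a data node; no special cases for the first node,
// the last node, or a single-element list.
void removeD(DNode* node) {
    node->prev->next = node->next;
    node->next->prev = node->prev;
    free(node);
}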
Linked list operations
When manipulating linked lists in-place, care must be taken not to use values that have been invalidated by previous assignments. This makes algorithms for inserting or deleting linked list nodes somewhat subtle. This section gives pseudocode for adding or removing nodes from singly, doubly, and circularly linked lists in-place. Throughout, we will use null to refer to an end-of-list marker or sentinel, which may be implemented in a number of ways.
Singly linked lists
Our node data structure will have two fields. We also keep a variable firstNode which always points to the first node in the list, or is null for an empty list.
record Node
{
data; // The data being stored in the node
Node next // A reference[2] to the next node, null for last node
}
record List
{
Node firstNode // points to first node of list; null for empty list
}
Traversal of a singly linked list is simple, beginning at the first node and following each next link until we come to the end:
node := list.firstNode
while node not null
(do something with node.data)
node := node.next
The following code inserts a node after an existing node in a singly linked list. The diagram shows how it works. Inserting a node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a node after it.
function insertAfter(Node node, Node newNode) // insert newNode after node
newNode.next := node.next
node.next := newNode
Inserting at the beginning of the list requires a separate function. This requires updating firstNode.
function insertBeginning(List list, Node newNode) // insert node before current first node
newNode.next := list.firstNode
list.firstNode := newNode
Similarly, we have functions for removing the node after a given node, and for removing a node from the beginning of the list. The diagram demonstrates the former. To find and remove a particular node, one must again keep track of the previous element.
function removeAfter(Node node) // remove node past this one
obsoleteNode := node.next
node.next := node.next.next
destroy obsoleteNode
function removeBeginning(List list) // remove first node
obsoleteNode := list.firstNode
list.firstNode := list.firstNode.next // point past deleted node
destroy obsoleteNode
Notice that removeBeginning() sets list.firstNode to null when removing the last node in the list.
Since we cannot iterate backwards, efficient insertBefore or removeBefore operations are not possible. Inserting into a list before a specific node requires traversing the list, which has a worst-case running time of O(n).
Appending one linked list to another can be inefficient unless a reference to the tail is kept as part of the List structure, because we must traverse the entire first list in order to find the tail, and then append the second list to this. Thus, if two linearly linked lists are each of length $n$, list appending has asymptotic time complexity of $O(n)$. In the Lisp family of languages, list appending is provided by the append procedure.
Many of the special cases of linked list operations can be eliminated by including a dummy element at the front of the list. This ensures that there are no special cases for the beginning of the list and renders both insertBeginning() and removeBeginning() unnecessary, i.e., every element or node is next to another node (even the first node is next to the dummy node). In this case, the first useful data in the list will be found at list.firstNode.next.
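For illustration, a sorted-insertion sketch in C that relies on such a dummy node, reusing Node and createNode from the earlier example (the function name insertSorted is our own); note that inserting before the first data node needs no special case:
// Insert value in ascending order into a list whose first node is a dummy.
void insertSorted(Node* dummy, int value) {
    Node* p = dummy;       // starting at the dummy removes the "insert at front" special case
    while (p->next != NULL && p->next->value < value) {
        p = p->next;       // stop with p just before the insertion point
    }
    Node* n = createNode();
    n->value = value;
    n->next = p->next;
    p->next = n;
}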
Circularly linked list
In a circularly linked list, all nodes are linked in a continuous circle, without using null. For lists with a front and a back (such as a queue), one stores a reference to the last node in the list. The next node after the last node is the first node. Elements can be added to the back of the list and removed from the front in constant time.
Circularly linked lists can be either singly or doubly linked.
Both types of circularly linked lists benefit from the ability to traverse the full list beginning at any given node. This often allows us to avoid storing firstNode and lastNode, although if the list may be empty we need a special representation for the empty list, such as a lastNode variable which points to some node in the list or is null if it is empty; we use such a lastNode here. This representation significantly simplifies adding and removing nodes for a non-empty list, but empty lists are then a special case.
Algorithms
Assuming that someNode is some node in a non-empty circular singly linked list, this code iterates through that list starting with someNode:
function iterate(someNode)
if someNode ≠ null
node := someNode
do
do something with node.value
node := node.next
while node ≠ someNode
Notice that the test "while node ≠ someNode" must be at the end of the loop. If the test were moved to the beginning of the loop, the procedure would fail whenever the list had only one node.
This function inserts a node "newNode" into a circular linked list after a given node "node". If "node" is null, it assumes that the list is empty.
function insertAfter(Node node, Node newNode)
if node = null // assume list is empty
newNode.next := newNode
else
newNode.next := node.next
node.next := newNode
update lastNode variable if necessary
Suppose that "L" is a variable pointing to the last node of a circular linked list (or null if the list is empty). To append "newNode" to the end of the list, one may do
insertAfter(L, newNode)
L := newNode
To insert "newNode" at the beginning of the list, one may do
insertAfter(L, newNode)
if L = null
L := newNode
This function inserts a value "newVal" before a given node "node" in O(1) time. We create a new node between "node" and the next node, then put the value of "node" into the new node, and put "newVal" into "node". Thus, a singly linked circular list with only a firstNode variable can insert at both the front and the back in O(1) time.
function insertBefore(Node node, newVal)
if node = null // assume list is empty
newNode := new Node(data:=newVal, next:=newNode)
else
newNode := new Node(data:=node.data, next:=node.next)
node.data := newVal
node.next := newNode
update firstNode variable if necessary
This function removes a non-null node from a list of size greater than 1 in O(1) time. It copies data from the next node into the node, and then sets the node's next pointer to skip over the next node.
function remove(Node node)
if node ≠ null and size of list > 1
removedData := node.data
node.data := node.next.data
node.next := node.next.next
return removedData
Linked lists using arrays of nodes
Languages that do not support any type of reference can still create links by replacing pointers with array indices. The approach is to keep an array of records, where each record has integer fields indicating the index of the next (and possibly previous) node in the array. Not all nodes in the array need be used. If records are also not supported, parallel arrays can often be used instead.
As an example, consider the following linked list record that uses arrays instead of pointers:
record Entry {
integer next; // index of next entry in array
integer prev; // previous entry (if double-linked)
string name;
real balance;
}
A linked list can be built by creating an array of these structures, and an integer variable to store the index of the first element.
integer listHead
Entry Records[1000]
Links between elements are formed by placing the array index of the next (or previous) cell into the Next or Prev field within a given element. For example:
Index          Next   Prev   Name               Balance
0              1      4      Jones, John        123.45
1              −1     0      Smith, Joseph      234.56
2 (listHead)   4      −1     Adams, Adam        0.00
3                            Ignore, Ignatius   999.99
4              0      2      Another, Anita     876.54
5
6
7
The following code would traverse the list and display names and account balance:
i := listHead
while i ≥ 0 // loop through the list
print i, Records[i].name, Records[i].balance // print entry
i := Records[i].next
When faced with a choice, the advantages of this approach include:
• The linked list is relocatable, meaning it can be moved about in memory at will, and it can also be quickly and directly serialized for storage on disk or transfer over a network.
• Especially for a small list, array indexes can occupy significantly less space than a full pointer on many architectures.
• Locality of reference can be improved by keeping the nodes together in memory and by periodically rearranging them, although this can also be done in a general store.
• Naïve dynamic memory allocators can produce an excessive amount of overhead storage for each node allocated; almost no allocation overhead is incurred per node in this approach.
• Seizing an entry from a pre-allocated array is faster than using dynamic memory allocation for each node, since dynamic memory allocation typically requires a search for a free memory block of the desired size.
This approach has one main disadvantage, however: it creates and manages a private memory space for its nodes. This leads to the following issues:
• It increases complexity of the implementation.
• Growing a large array when it is full may be difficult or impossible, whereas finding space for a new linked list node in a large, general memory pool may be easier.
• Adding elements to a dynamic array will occasionally (when it is full) unexpectedly take linear (O(n)) instead of constant time (although it is still amortized constant time).
• Using a general memory pool leaves more memory for other data if the list is smaller than expected or if many nodes are freed.
For these reasons, this approach is mainly used for languages that do not support dynamic memory allocation. These disadvantages are also mitigated if the maximum size of the list is known at the time the array is created.
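A C sketch of this scheme, including the free list described above (field and variable names follow the example; the helpers initFreeList and insertFront are illustrative):
#include <stdio.h>

#define CAPACITY 1000

struct Entry {
    int next;                 // index of the next entry in a list; -1 marks the end
    char name[32];
    double balance;
};

struct Entry records[CAPACITY];
int listHead = -1;            // the data list starts out empty
int listFree = 0;             // head of the free list

// Chain every cell into the free list once at startup.
void initFreeList(void) {
    for (int i = 0; i < CAPACITY; i++)
        records[i].next = (i + 1 < CAPACITY) ? i + 1 : -1;
}

// Move a cell from the free list to the front of the data list.
// Returns the cell's index, or -1 if all cells are in use.
int insertFront(const char* name, double balance) {
    if (listFree < 0) return -1;
    int i = listFree;
    listFree = records[i].next;       // pop the cell off the free list
    snprintf(records[i].name, sizeof records[i].name, "%s", name);
    records[i].balance = balance;
    records[i].next = listHead;       // push the cell onto the data list
    listHead = i;
    return i;
}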
Language support
Many programming languages such as Lisp and Scheme have singly linked lists built in. In many functional languages, these lists are constructed from nodes, each called a cons or cons cell. The cons has two fields: the car, a reference to the data for that node, and the cdr, a reference to the next node. Although cons cells can be used to build other data structures, this is their primary purpose.
In languages that support abstract data types or templates, linked list ADTs or templates are available for building linked lists. In other languages, linked lists are typically built using references together with records.
Internal and external storage
When constructing a linked list, one is faced with the choice of whether to store the data of the list directly in the linked list nodes, called internal storage, or merely to store a reference to the data, called external storage. Internal storage has the advantage of making access to the data more efficient, requiring less storage overall, having better locality of reference, and simplifying memory management for the list (its data is allocated and deallocated at the same time as the list nodes).
External storage, on the other hand, has the advantage of being more generic, in that the same data structure and machine code can be used for a linked list no matter what the size of the data is. It also makes it easy to place the same data in multiple linked lists. Although with internal storage the same data can be placed in multiple lists by including multiple next references in the node data structure, it would then be necessary to create separate routines to add or delete cells based on each field. It is possible to create additional linked lists of elements that use internal storage by using external storage, and having the cells of the additional linked lists store references to the nodes of the linked list containing the data.
In general, if a set of data structures needs to be included in linked lists, external storage is the best approach. If a set of data structures need to be included in only one linked list, then internal storage is slightly better, unless a generic linked list package using external storage is available. Likewise, if different sets of data that can be stored in the same data structure are to be included in a single linked list, then internal storage would be fine.
Another approach that can be used with some languages involves having different data structures, but all have the initial fields, including the next (and prev if double linked list) references in the same location. After defining separate structures for each type of data, a generic structure can be defined that contains the minimum amount of data shared by all the other structures and contained at the top (beginning) of the structures. Then generic routines can be created that use the minimal structure to perform linked list type operations, but separate routines can then handle the specific data. This approach is often used in message parsing routines, where several types of messages are received, but all start with the same set of fields, usually including a field for message type. The generic routines are used to add new messages to a queue when they are received, and remove them from the queue in order to process the message. The message type field is then used to call the correct routine to process the specific type of message.
Example of internal and external storage
Suppose you wanted to create a linked list of families and their members. Using internal storage, the structure might look like the following:
record member { // member of a family
member next;
string firstName;
integer age;
}
record family { // the family itself
family next;
string lastName;
string address;
member members // head of list of members of this family
}
To print a complete list of families and their members using internal storage, we could write:
aFamily := Families // start at head of families list
while aFamily ≠ null // loop through list of families
print information about family
aMember := aFamily.members // get head of list of this family's members
while aMember ≠ null // loop through list of members
print information about member
aMember := aMember.next
aFamily := aFamily.next
Using external storage, we would create the following structures:
record node { // generic link structure
node next;
pointer data // generic pointer for data at node
}
record member { // structure for family member
string firstName;
integer age
}
record family { // structure for family
string lastName;
string address;
node members // head of list of members of this family
}
To print a complete list of families and their members using external storage, we could write:
famNode := Families // start at head of families list
while famNode ≠ null // loop through list of families
aFamily := (family) famNode.data // extract family from node
print information about family
memNode := aFamily.members // get list of family members
while memNode ≠ null // loop through list of members
aMember := (member)memNode.data // extract member from node
print information about member
memNode := memNode.next
famNode := famNode.next
Notice that when using external storage, an extra step is needed to extract the record from the node and cast it into the proper data type. This is because both the list of families and the list of members within the family are stored in two linked lists using the same data structure (node), and this language does not have parametric types.
As long as the number of families that a member can belong to is known at compile time, internal storage works fine. If, however, a member needed to be included in an arbitrary number of families, with the specific number known only at run time, external storage would be necessary.
Speeding up search
Finding a specific element in a linked list, even if it is sorted, normally requires O(n) time (linear search). This is one of the primary disadvantages of linked lists over other data structures. In addition to the variants discussed above, below are two simple ways to improve search time.
In an unordered list, one simple heuristic for decreasing average search time is the move-to-front heuristic, which simply moves an element to the beginning of the list once it is found. This scheme, handy for creating simple caches, ensures that the most recently used items are also the quickest to find again.
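A sketch of this heuristic in C, reusing the singly linked Node type from the earlier example (the function name findMoveToFront is our own):
// Search for value; if found anywhere but the front, splice that node to the head.
// Returns the new head (unchanged if value is absent or already first).
Node* findMoveToFront(Node* head, int value) {
    if (head == NULL || head->value == value) return head;
    Node* prev = head;
    while (prev->next != NULL && prev->next->value != value) {
        prev = prev->next;
    }
    if (prev->next == NULL) return head;  // not found: list unchanged
    Node* found = prev->next;
    prev->next = found->next;             // unlink the found node
    found->next = head;                   // relink it at the front
    return found;                         // the found node is the new head
}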
Another common approach is to "index" a linked list using a more efficient external data structure. For example, one can build a red–black tree or hash table whose elements are references to the linked list nodes. Multiple such indexes can be built on a single list. The disadvantage is that these indexes may need to be updated each time a node is added or removed (or at least, before that index is used again).
Random-access lists
A random-access list is a list with support for fast random access to read or modify any element in the list.[9] One possible implementation is a skew binary random-access list using the skew binary number system, which involves a list of trees with special properties; this allows worst-case constant time head/cons operations, and worst-case logarithmic time random access to an element by index.[9] Random-access lists can be implemented as persistent data structures.[9]
Random-access lists can be viewed as immutable linked lists in that they likewise support the same O(1) head and tail operations.[9]
A simple extension to random-access lists is the min-list, which provides an additional operation that yields the minimum element in the entire list in constant time (without mutation complexities).[9]
Related data structures
Both stacks and queues are often implemented using linked lists, and simply restrict the type of operations which are supported.
The skip list is a linked list augmented with layers of pointers for quickly jumping over large numbers of elements, and then descending to the next layer. This process continues down to the bottom layer, which is the actual list.
A binary tree can be seen as a type of linked list where the elements are themselves linked lists of the same nature. The result is that each node may include a reference to the first node of one or two other linked lists, which, together with their contents, form the subtrees below that node.
An unrolled linked list is a linked list in which each node contains an array of data values. This leads to improved cache performance, since more list elements are contiguous in memory, and reduced memory overhead, because less metadata needs to be stored for each element of the list.
A hash table may use linked lists to store the chains of items that hash to the same position in the hash table.
A heap shares some of the ordering properties of a linked list, but is almost always implemented using an array. Instead of references from node to node, the next and previous data indexes are calculated using the current data's index.
A self-organizing list rearranges its nodes based on some heuristic which reduces search times for data retrieval by keeping commonly accessed nodes at the head of the list.
Notes
1. The amount of control data required for a dynamic array is usually of the form $K+B*n$, where $K$ is a per-array constant, $B$ is a per-dimension constant, and $n$ is the number of dimensions. $K$ and $B$ are typically on the order of 10 bytes.
References
1. Knuth, Donald (1998). The Art of Computer Programming. Vol. 3: Sorting and Searching (2nd ed.). Addison-Wesley. p. 547. ISBN 978-0-201-89685-5.
2. "The NT Insider:Kernel-Mode Basics: Windows Linked Lists". Archived from the original on 2015-09-23. Retrieved 2015-07-31.
3. "Archived copy" (PDF). Archived from the original (PDF) on 2016-10-01. Retrieved 2021-08-31.{{cite web}}: CS1 maint: archived copy as title (link)
4. Day 1 Keynote - Bjarne Stroustrup: C++11 Style at GoingNative 2012 on channel9.msdn.com from minute 45 or foil 44
5. Number crunching: Why you should never, ever, EVER use linked-list in your code again at kjellkod.wordpress.com
6. Brodnik, Andrej; Carlsson, Svante; Sedgewick, Robert; Munro, JI; Demaine, ED (1999), Resizable Arrays in Optimal Time and Space (Technical Report CS-99-09) (PDF), Department of Computer Science, University of Waterloo
7. Chris Okasaki (1995). "Purely Functional Random-Access Lists". Proceedings of the Seventh International Conference on Functional Programming Languages and Computer Architecture: 86–95. doi:10.1145/224164.224187.
8. Ford, William; Topp, William (2002). Data Structures with C++ using STL (Second ed.). Prentice-Hall. pp. 466–467. ISBN 0-13-085850-1.
9. Okasaki, Chris (1995). Purely Functional Random-Access Lists (PS). pp. 86–95. Retrieved May 7, 2015.
Further reading
• Juan, Angel (2006). "Ch20 – Data Structures; ID06 - PROGRAMMING with JAVA (slide part of the book 'Big Java', by Cay S. Horstmann)" (PDF). p. 3. Archived from the original (PDF) on 2012-01-06. Retrieved 2011-07-10.
• Black, Paul E. (2004-08-16). Pieterse, Vreda; Black, Paul E. (eds.). "linked list". Dictionary of Algorithms and Data Structures. National Institute of Standards and Technology. Retrieved 2004-12-14.
• Antonakos, James L.; Mansfield, Kenneth C. Jr. (1999). Practical Data Structures Using C/C++. Prentice-Hall. pp. 165–190. ISBN 0-13-280843-9.
• Collins, William J. (2005) [2002]. Data Structures and the Java Collections Framework. New York: McGraw Hill. pp. 239–303. ISBN 0-07-282379-8.
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2003). Introduction to Algorithms. MIT Press. pp. 205–213, 501–505. ISBN 0-262-03293-7.
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "10.2: Linked lists". Introduction to Algorithms (2nd ed.). MIT Press. pp. 204–209. ISBN 0-262-03293-7.
• Green, Bert F. Jr. (1961). "Computer Languages for Symbol Manipulation". IRE Transactions on Human Factors in Electronics (2): 3–8. doi:10.1109/THFE2.1961.4503292.
• McCarthy, John (1960). "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". Communications of the ACM. 3 (4): 184. doi:10.1145/367177.367199. S2CID 1489409.
• Knuth, Donald (1997). "2.2.3-2.2.5". Fundamental Algorithms (3rd ed.). Addison-Wesley. pp. 254–298. ISBN 0-201-89683-4.
• Newell, Allen; Shaw, F. C. (1957). "Programming the Logic Theory Machine". Proceedings of the Western Joint Computer Conference: 230–240.
• Parlante, Nick (2001). "Linked list basics" (PDF). Stanford University. Retrieved 2009-09-21.
• Sedgewick, Robert (1998). Algorithms in C. Addison Wesley. pp. 90–109. ISBN 0-201-31452-5.
• Shaffer, Clifford A. (1998). A Practical Introduction to Data Structures and Algorithm Analysis. New Jersey: Prentice Hall. pp. 77–102. ISBN 0-13-660911-2.
• Wilkes, Maurice Vincent (1964). "An Experiment with a Self-compiling Compiler for a Simple List-Processing Language". Annual Review in Automatic Programming. Pergamon Press. 4 (1): 1. doi:10.1016/0066-4138(64)90013-8.
• Wilkes, Maurice Vincent (1964). "Lists and Why They are Useful". Proceeds of the ACM National Conference, Philadelphia 1964. ACM (P–64): F1–1.
• Shanmugasundaram, Kulesh (2005-04-04). "Linux Kernel Linked List Explained". Archived from the original on 2009-09-25. Retrieved 2009-09-21.
External links
• Description from the Dictionary of Algorithms and Data Structures
• Introduction to Linked Lists, Stanford University Computer Science Library
• Linked List Problems, Stanford University Computer Science Library
• Open Data Structures - Chapter 3 - Linked Lists, Pat Morin
• Patent for the idea of having nodes which are in several linked lists simultaneously (note that this technique was widely used for many decades before the patent was granted)
• Implementation of a singly linked list in C
• Implementation of a singly linked list in C++
• Implementation of a doubly linked list in C
• Implementation of a doubly linked list in C++
Well-known data structures
Types
• Collection
• Container
Abstract
• Associative array
• Multimap
• Retrieval Data Structure
• List
• Stack
• Queue
• Double-ended queue
• Priority queue
• Double-ended priority queue
• Set
• Multiset
• Disjoint-set
Arrays
• Bit array
• Circular buffer
• Dynamic array
• Hash table
• Hashed array tree
• Sparse matrix
Linked
• Association list
• Linked list
• Skip list
• Unrolled linked list
• XOR linked list
Trees
• B-tree
• Binary search tree
• AA tree
• AVL tree
• Red–black tree
• Self-balancing tree
• Splay tree
• Heap
• Binary heap
• Binomial heap
• Fibonacci heap
• R-tree
• R* tree
• R+ tree
• Hilbert R-tree
• Trie
• Hash tree
Graphs
• Binary decision diagram
• Directed acyclic graph
• Directed acyclic word graph
• List of data structures
Authority control: National
• Germany
| Wikipedia |
Symmetrization
In mathematics, symmetrization is a process that converts any function in $n$ variables to a symmetric function in $n$ variables. Similarly, antisymmetrization converts any function in $n$ variables into an antisymmetric function.
Two variables
Let $S$ be a set and $A$ be an additive abelian group. A map $\alpha :S\times S\to A$ is called a symmetric map if
$\alpha (s,t)=\alpha (t,s)\quad {\text{ for all }}s,t\in S.$
It is called an antisymmetric map if instead
$\alpha (s,t)=-\alpha (t,s)\quad {\text{ for all }}s,t\in S.$
The symmetrization of a map $\alpha :S\times S\to A$ is the map $(x,y)\mapsto \alpha (x,y)+\alpha (y,x).$ Similarly, the antisymmetrization or skew-symmetrization of a map $\alpha :S\times S\to A$ is the map $(x,y)\mapsto \alpha (x,y)-\alpha (y,x).$
The sum of the symmetrization and the antisymmetrization of a map $\alpha $ is $2\alpha .$ Thus, away from 2, meaning if 2 is invertible, such as for the real numbers, one can divide by 2 and express every function as a sum of a symmetric function and an anti-symmetric function.
The symmetrization of a symmetric map is its double, while the symmetrization of an alternating map is zero; similarly, the antisymmetrization of a symmetric map is zero, while the antisymmetrization of an anti-symmetric map is its double.
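As a concrete check, the following Python sketch (illustrative; the sample map f is arbitrary) forms both maps and verifies the facts just stated for integer inputs:

def symmetrize(f):
    # (x, y) -> f(x, y) + f(y, x)
    return lambda x, y: f(x, y) + f(y, x)

def antisymmetrize(f):
    # (x, y) -> f(x, y) - f(y, x)
    return lambda x, y: f(x, y) - f(y, x)

f = lambda x, y: x * y**2          # arbitrary: neither symmetric nor antisymmetric
s, a = symmetrize(f), antisymmetrize(f)
assert s(2, 3) == s(3, 2)          # the symmetrization is symmetric
assert a(2, 3) == -a(3, 2)         # the antisymmetrization is antisymmetric
assert s(2, 3) + a(2, 3) == 2 * f(2, 3)   # their sum is twice the original map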
Bilinear forms
The symmetrization and antisymmetrization of a bilinear map are bilinear; thus away from 2, every bilinear form is a sum of a symmetric form and a skew-symmetric form, and there is no difference between a symmetric form and a quadratic form.
At 2, not every form can be decomposed into a symmetric form and a skew-symmetric form. For instance, over the integers, the associated symmetric form (over the rationals) may take half-integer values, while over $\mathbb {Z} /2\mathbb {Z} ,$ a function is skew-symmetric if and only if it is symmetric (as $1=-1$).
This leads to the notion of ε-quadratic forms and ε-symmetric forms.
Representation theory
In terms of representation theory:
• exchanging variables gives a representation of the symmetric group on the space of functions in two variables,
• the symmetric and antisymmetric functions are the subrepresentations corresponding to the trivial representation and the sign representation, and
• symmetrization and antisymmetrization map a function into these subrepresentations – if one divides by 2, these yield projection maps.
As the symmetric group of order two equals the cyclic group of order two ($\mathrm {S} _{2}=\mathrm {C} _{2}$), this corresponds to the discrete Fourier transform of order two.
n variables
More generally, given a function in $n$ variables, one can symmetrize by taking the sum over all $n!$ permutations of the variables,[1] or antisymmetrize by taking the sum over all $n!/2$ even permutations and subtracting the sum over all $n!/2$ odd permutations (except that when $n\leq 1,$ the only permutation is even).
Here symmetrizing a symmetric function multiplies by $n!$ – thus if $n!$ is invertible, such as when working over a field of characteristic $0$ or $p>n,$ then these yield projections when divided by $n!.$
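A small Python sketch of the n-variable case (illustrative; the helper parity computes the sign of a permutation by counting inversions):

from itertools import permutations

def parity(perm):
    # sign of a permutation of 0..n-1, computed by counting inversions
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def symmetrize(f, n):
    # sum of f over all n! permutations of its arguments
    return lambda *xs: sum(f(*(xs[p] for p in perm))
                           for perm in permutations(range(n)))

def antisymmetrize(f, n):
    # signed sum: even permutations added, odd permutations subtracted
    return lambda *xs: sum(parity(perm) * f(*(xs[p] for p in perm))
                           for perm in permutations(range(n)))

f = lambda x, y, z: x * y**2 * z**3
s, a = symmetrize(f, 3), antisymmetrize(f, 3)
assert s(1, 2, 3) == s(3, 1, 2)    # symmetric under every permutation
assert a(1, 2, 3) == -a(2, 1, 3)   # swapping two arguments flips the sign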
In terms of representation theory, these only yield the subrepresentations corresponding to the trivial and sign representation, but for $n>2$ there are others – see representation theory of the symmetric group and symmetric polynomials.
Bootstrapping
Given a function in $k$ variables, one can obtain a symmetric function in $n$ variables by taking the sum over $k$-element subsets of the variables. In statistics, this is referred to as bootstrapping, and the associated statistics are called U-statistics.
See also
• Alternating multilinear map – Multilinear map that is 0 whenever arguments are linearly dependent
• Antisymmetric tensor – Tensor equal to the negative of any of its transpositions
Notes
1. Hazewinkel (1990), p. 344
References
• Hazewinkel, Michiel (1990). Encyclopaedia of mathematics: an updated and annotated translation of the Soviet "Mathematical encyclopaedia". Encyclopaedia of Mathematics. Vol. 6. Springer. ISBN 978-1-55608-005-0.
Tensors
Glossary of tensor theory
Scope
Mathematics
• Coordinate system
• Differential geometry
• Dyadic algebra
• Euclidean geometry
• Exterior calculus
• Multilinear algebra
• Tensor algebra
• Tensor calculus
• Physics
• Engineering
• Computer vision
• Continuum mechanics
• Electromagnetism
• General relativity
• Transport phenomena
Notation
• Abstract index notation
• Einstein notation
• Index notation
• Multi-index notation
• Penrose graphical notation
• Ricci calculus
• Tetrad (index notation)
• Van der Waerden notation
• Voigt notation
Tensor
definitions
• Tensor (intrinsic definition)
• Tensor field
• Tensor density
• Tensors in curvilinear coordinates
• Mixed tensor
• Antisymmetric tensor
• Symmetric tensor
• Tensor operator
• Tensor bundle
• Two-point tensor
Operations
• Covariant derivative
• Exterior covariant derivative
• Exterior derivative
• Exterior product
• Hodge star operator
• Lie derivative
• Raising and lowering indices
• Symmetrization
• Tensor contraction
• Tensor product
• Transpose (2nd-order tensors)
Related
abstractions
• Affine connection
• Basis
• Cartan formalism (physics)
• Connection form
• Covariance and contravariance of vectors
• Differential form
• Dimension
• Exterior form
• Fiber bundle
• Geodesic
• Levi-Civita connection
• Linear map
• Manifold
• Matrix
• Multivector
• Pseudotensor
• Spinor
• Vector
• Vector space
Notable tensors
Mathematics
• Kronecker delta
• Levi-Civita symbol
• Metric tensor
• Nonmetricity tensor
• Ricci curvature
• Riemann curvature tensor
• Torsion tensor
• Weyl tensor
Physics
• Moment of inertia
• Angular momentum tensor
• Spin tensor
• Cauchy stress tensor
• stress–energy tensor
• Einstein tensor
• EM tensor
• Gluon field strength tensor
• Metric tensor (GR)
Mathematicians
• Élie Cartan
• Augustin-Louis Cauchy
• Elwin Bruno Christoffel
• Albert Einstein
• Leonhard Euler
• Carl Friedrich Gauss
• Hermann Grassmann
• Tullio Levi-Civita
• Gregorio Ricci-Curbastro
• Bernhard Riemann
• Jan Arnoldus Schouten
• Woldemar Voigt
• Hermann Weyl
| Wikipedia |
Symmetrizable compact operator
In mathematics, a symmetrizable compact operator is a compact operator on a Hilbert space that can be composed with a positive operator with trivial kernel to produce a self-adjoint operator. Such operators arose naturally in the work on integral operators of Hilbert, Korn, Lichtenstein and Marty required to solve elliptic boundary value problems on bounded domains in Euclidean space. Between the late 1940s and early 1960s the techniques, previously developed as part of classical potential theory, were abstracted within operator theory by various mathematicians, including M. G. Krein, William T. Reid, Peter Lax and Jean Dieudonné. Fredholm theory already implies that any element of the spectrum is an eigenvalue. The main results assert that the spectral theory of these operators is similar to that of compact self-adjoint operators: any spectral value is real; they form a sequence tending to zero; any generalized eigenvector is an eigenvector; and the eigenvectors span a dense subspace of the Hilbert space.
Discussion
Let H be a Hilbert space. A compact operator K on H is symmetrizable if there is a bounded self-adjoint operator S on H such that S is positive with trivial kernel, i.e. (Sx,x) > 0 for all non-zero x, and SK is self-adjoint:
$\displaystyle {SK=K^{*}S.}$
In many applications S is also compact. The operator S defines a new inner product on H
$\displaystyle {(x,y)_{S}=(Sx,y)}.$
Let HS be the Hilbert space completion of H with respect to this inner product.
The operator K defines a formally self-adjoint operator on the dense subspace H of HS. As Krein (1947) and Reid (1951) noted, the operator has the same operator norm as K. In fact[1] the self-adjointness condition implies
$\displaystyle {SK^{n}=(K^{*})^{n}S.}$
It follows by induction that, if (x,x)S = 1, then
$\displaystyle {\|Kx\|_{S}^{2^{n}}\leq \|K^{2^{n}}x\|_{S}.}$
Hence
$\displaystyle {\|Kx\|_{S}\leq \limsup _{n\rightarrow \infty }\|K\|(\|S\|\|x\|^{2})^{1/2^{n}}=\|K\|.}$
If K is only compact, Krein gave an argument, invoking Fredholm theory, to show that K defines a compact operator on HS. A shorter argument is available if K belongs to a Schatten class.
When K is a Hilbert–Schmidt operator, the argument proceeds as follows. Let R be the unique positive square root of S and for ε > 0 define[2]
$\displaystyle {A_{\varepsilon }=(R+\varepsilon I)^{-1}SK(R+\varepsilon I)^{-1}.}$
These are self-adjoint Hilbert–Schmidt operators on H which are uniformly bounded in the Hilbert–Schmidt norm:
$\displaystyle {\|A_{\varepsilon }\|_{2}^{2}=(R^{2}(R+\varepsilon I)^{-2}K,KR^{2}(R+\varepsilon I)^{-2})_{2}\leq \|K\|_{2}^{2}.}$
Since the Hilbert–Schmidt operators form a Hilbert space, there is a subsequence converging weakly to a self-adjoint Hilbert–Schmidt operator A. Since AεR tends to RK in Hilbert–Schmidt norm, it follows that
$\displaystyle {RK=AR.}$
Thus if U is the unitary induced by R between HS and H, then the operator KS induced by the restriction of K corresponds to A on H:
$\displaystyle {UK_{S}U^{*}=A.}$
The operators K − λI and K* − λI are Fredholm operators of index 0 for λ ≠ 0, so any spectral value of K or K* is an eigenvalue and the corresponding eigenspaces are finite-dimensional. On the other hand, by the spectral theorem for compact operators, H is the orthogonal direct sum of the eigenspaces of A, all finite-dimensional except possibly for the 0 eigenspace. Since RA = K* R, the image under R of the λ eigenspace of A lies in the λ eigenspace of K*. Similarly R carries the λ eigenspace of K into the λ eigenspace of A. It follows that the eigenvalues of K and K* are all real. Since R is injective and has dense range it induces isomorphisms between the λ eigenspaces of A, K and K*. The same is true for generalized eigenvalues since powers of K − λI and K* − λI are also Fredholm of index 0. Since any generalized λ eigenvector of A is already an eigenvector, the same is true for K and K*. For λ = 0, this argument shows that Kmx = 0 implies Kx = 0.
Finally the eigenspaces of K* span a dense subspace of H, since it contains the image under R of the corresponding space for A. The above arguments also imply that the eigenvectors for non-zero eigenvalues of KS in HS all lie in the subspace H.
Hilbert–Schmidt operators K with non-zero real eigenvalues λn satisfy the following identities proved by Carleman (1921):
$\displaystyle {\mathrm {tr} \,K^{2}=\sum \lambda _{n}^{2},\,\,\,\det(I-zK^{2})=\prod _{n=1}^{\infty }(1-z\lambda _{n}^{2}).}$
Here tr is the trace on trace-class operators and det is the Fredholm determinant. For symmetrizable Hilbert–Schmidt operators the result states that the trace or determinant for K or K* is equal to the trace or determinant for A. For symmetrizable operators, the identities for K* can be proved by taking H0 to be the kernel of K* and Hm the finite-dimensional eigenspaces for the non-zero eigenvalues λm. Let PN be the orthogonal projection onto the direct sum of Hm with 0 ≤ m ≤ N. This subspace is left invariant by K*. Although the sum is not orthogonal, the restriction PNK*PN of K* is similar, via a bounded operator with bounded inverse, to the diagonal operator on the orthogonal direct sum with the same eigenvalues. Thus
$\displaystyle {\mathrm {tr} \,(P_{N}K^{*}P_{N})^{2}=\sum _{m=1}^{N}\lambda _{m}^{2}\cdot \mathrm {dim} \,H_{m},\,\,\,\mathrm {det} \,[P_{N}-z(P_{N}K^{*}P_{N})^{2}]=\prod _{m=1}^{N}(1-z\lambda _{m}^{2})^{\mathrm {dim} \,H_{m}}.}$
Since PNK*PN tends to K* in Hilbert–Schmidt norm, the identities for K* follow by passing to the limit as N tends to infinity.
Notes
1. Halmos 1974
2. Khavinson, Putinar & Shapiro 2007, p. 156
References
• Carleman, T. (1921), "Zur Theorie der linearen Integralgleichungen", Math. Z., 9 (3–4): 196–217, doi:10.1007/bf01279029, S2CID 122412155
• Dieudonné, J. (1969), Foundations of modern analysis, Pure and Applied Mathematics, Academic Press
• Halmos, P.R. (1974), A Hilbert space problem book, Graduate Texts in Mathematics, vol. 19, Springer-Verlag, ISBN 978-0-387-90090-2, Problem 82
• Kellogg, Oliver Dimon (1929), Foundations of potential theory, Die Grundlehren der Mathematischen Wissenschaften, vol. 31, Springer-Verlag
• Khavinson, D.; Putinar, M.; Shapiro, H. S. (2007), "Poincaré's variational problem in potential theory", Arch. Ration. Mech. Anal., 185 (1): 143–184, Bibcode:2007ArRMA.185..143K, CiteSeerX 10.1.1.569.7145, doi:10.1007/s00205-006-0045-1, S2CID 855706
• Krein, M. G. (1998), "Compact linear operators on functional spaces with two norms (translated from 1947 Ukrainian article)", Integral Equations Operator Theory, 30 (2): 140–162, doi:10.1007/bf01238216, S2CID 120822340
• Landkof, N. S. (1972), Foundations of modern potential theory, Die Grundlehren der mathematischen Wissenschaften, vol. 180, Springer-Verlag
• Lax, Peter D. (1954), "Symmetrizable linear transformations", Comm. Pure Appl. Math., 7 (4): 633–647, doi:10.1002/cpa.3160070403
• Reid, William T. (1951), "Symmetrizable completely continuous linear transformations in Hilbert space", Duke Math. J., 18: 41–56, doi:10.1215/s0012-7094-51-01805-4
• Zaanen, Adriaan Cornelis (1953), Linear analysis; Measure and integral, Banach and Hilbert space, linear integral equations, Interscience
| Wikipedia |
Symmetrization methods
In mathematics, the symmetrization methods are algorithms for transforming a set $A\subset \mathbb {R} ^{n}$ to a ball $B\subset \mathbb {R} ^{n}$ with equal volume $\operatorname {vol} (B)=\operatorname {vol} (A)$ and centered at the origin. B is called the symmetrized version of A, usually denoted $A^{*}$. These algorithms show up in solving the classical isoperimetric inequality problem, which asks: given all two-dimensional shapes of a given area, which of them has the minimal perimeter (for details see Isoperimetric inequality)? The conjectured answer was the disk, and Steiner in 1838 showed this to be true using the Steiner symmetrization method (described below). From this sprang many other isoperimetric problems and other symmetrization algorithms. For example, Rayleigh's conjecture is that the first eigenvalue of the Dirichlet problem is minimized for the ball (see Rayleigh–Faber–Krahn inequality for details). Another problem is that the Newtonian capacity of a set A is minimized by $A^{*}$, and this was proved by Pólya and Szegő (1951) using circular symmetrization (described below).
Symmetrization
If $\Omega \subset \mathbb {R} ^{n}$ is measurable, then we denote by $\Omega ^{*}$ the symmetrized version of $\Omega $, i.e. a ball $\Omega ^{*}:=B_{r}(0)\subset \mathbb {R} ^{n}$ such that $\operatorname {vol} (\Omega ^{*})=\operatorname {vol} (\Omega )$. We denote by $f^{*}$ the symmetric decreasing rearrangement of a nonnegative measurable function f and define it as $f^{*}(x):=\int _{0}^{\infty }1_{\{y:f(y)>t\}^{*}}(x)\,dt$, where $\{y:f(y)>t\}^{*}$ is the symmetrized version of the preimage set $\{y:f(y)>t\}$. The methods described below have been proved to transform $\Omega $ to $\Omega ^{*}$, i.e. given a sequence of symmetrization transformations $\{T_{k}\}$ we have $\lim \limits _{k\to \infty }d_{Ha}(\Omega ^{*},T_{k}(\Omega ))=0$, where $d_{Ha}$ is the Hausdorff distance (for discussion and proofs see Burchard (2009)).
Steiner symmetrization
Steiner symmetrization was introduced by Steiner (1838) to solve the isoperimetric problem stated above. Let $H\subset \mathbb {R} ^{n}$ be a hyperplane through the origin. Rotate space so that $H$ is the hyperplane $x_{n}=0$ ($x_{n}$ is the nth coordinate in $\mathbb {R} ^{n}$). For each $x\in H$ let the perpendicular line through $x$ be $L_{x}=\{x+ye_{n}:y\in \mathbb {R} \}$. Then, by replacing each $\Omega \cap L_{x}$ by a segment centered on H with length $|\Omega \cap L_{x}|$, we obtain the Steiner symmetrized version.
$\operatorname {St} (\Omega ):=\{x+ye_{n}:x+ze_{n}\in \Omega {\text{ for some }}z{\text{ and }}|y|\leq {\frac {1}{2}}|\Omega \cap L_{x}|\}.$
We denote by $\operatorname {St} (f)$ the Steiner symmetrization with respect to the hyperplane $x_{n}=0$ of a nonnegative measurable function $f:\mathbb {R} ^{d}\to \mathbb {R} $, and for fixed $x_{1},\ldots ,x_{n-1}$ define it as
$St:f(x_{1},\ldots ,x_{n-1},\cdot )\mapsto (f(x_{1},\ldots ,x_{n-1},\cdot ))^{*}.$
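A discrete sketch of Steiner symmetrization in Python with NumPy may make the definition concrete: a set is modelled as a boolean grid, and each one-dimensional slice is replaced by a centered run of equal length. This is only an illustration; centering is done within the grid rather than about a coordinate hyperplane through the origin, and the sample data is made up:

import numpy as np

def steiner_symmetrize(mask):
    # Replace the occupied cells of each row by a centered run of equal length.
    out = np.zeros_like(mask)
    n = mask.shape[1]
    for i, row in enumerate(mask):
        length = int(row.sum())
        start = (n - length) // 2
        out[i, start:start + length] = True
    return out

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0]], dtype=bool)
B = steiner_symmetrize(A)
assert A.sum() == B.sum()          # the measure (cell count) is preserved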
Properties
• It preserves convexity: if $\Omega $ is convex, then $St(\Omega )$ is also convex.
• It is linear: $St(x+\lambda \Omega )=St(x)+\lambda St(\Omega )$.
• Super-additive: $St(K)+St(U)\subset St(K+U)$.
Circular symmetrization
A popular method for symmetrization in the plane is Polya's circular symmetrization; its generalization to higher dimensions is described afterwards. Let $\Omega \subset \mathbb {C} $ be a domain; then its circular symmetrization $\operatorname {Circ} (\Omega )$ with regard to the positive real axis is defined as follows: Let
$\Omega _{t}:=\{\theta \in [0,2\pi ]:te^{i\theta }\in \Omega \}$
i.e. the set of angles of the arcs of radius t contained in $\Omega $. The symmetrization is then defined as follows:
• If $\Omega _{t}$ is the full circle, then $\operatorname {Circ} (\Omega )\cap \{|z|=t\}:=\{|z|=t\}$.
• If the length is $m(\Omega _{t})=\alpha $, then $\operatorname {Circ} (\Omega )\cap \{|z|=t\}:=\{te^{i\theta }:|\theta |<{\frac {\alpha }{2}}\}$.
• $0,\infty \in \operatorname {Circ} (\Omega )$ iff $0,\infty \in \Omega $.
In higher dimensions $\Omega \subset \mathbb {R} ^{n}$, its spherical symmetrization $Sp^{n}(\Omega )$ with respect to the positive $x_{1}$-axis is defined as follows: Let $\Omega _{r}:=\{x\in \mathbb {S} ^{n-1}:rx\in \Omega \}$, i.e. the set of directions of the caps of radius r contained in $\Omega $. Also, for the first coordinate let $\operatorname {angle} (x_{1}):=\theta $ if $x_{1}=r\cos \theta $. So as above
• If $\Omega _{r}$ is the full cap, then $Sp^{n}(\Omega )\cap \{|z|=r\}:=\{|z|=r\}$.
• If the surface area is $m_{s}(\Omega _{t})=\alpha $, then $Sp^{n}(\Omega )\cap \{|z|=r\}:=\{x:|x|=r$ and $0\leq \operatorname {angle} (x_{1})\leq \theta _{\alpha }\}=:C(\theta _{\alpha })$ where $\theta _{\alpha }$ is picked so that its surface area is $m_{s}(C(\theta _{\alpha }))=\alpha $. In words, $C(\theta _{\alpha })$ is a cap symmetric around the positive axis $x_{1}$ with the same area as the intersection $\Omega \cap \{|z|=r\}$.
• $0,\infty \in Sp^{n}(\Omega )$ iff $0,\infty \in \Omega $.
Polarization
Let $\Omega \subset \mathbb {R} ^{n}$ be a domain and $H^{n-1}\subset \mathbb {R} ^{n}$ be a hyperplane through the origin. Denote the reflection across that plane to the positive halfspace $\mathbb {H} ^{+}$ as $\sigma _{H}$ or just $\sigma $ when it is clear from the context. Also, the reflection of $\Omega $ across the hyperplane H is denoted $\sigma \Omega $. Then, the polarized $\Omega $ is denoted $\Omega ^{\sigma }$ and defined as follows:
• If $x\in \Omega \cap \mathbb {H} ^{+}$, then $x\in \Omega ^{\sigma }$.
• If $x\in \Omega \cap \sigma (\Omega )\cap \mathbb {H} ^{-}$, then $x\in \Omega ^{\sigma }$.
• If $x\in (\Omega \setminus \sigma (\Omega ))\cap \mathbb {H} ^{-}$, then $\sigma x\in \Omega ^{\sigma }$.
In words, $(\Omega \setminus \sigma (\Omega ))\cap \mathbb {H} ^{-}$ is simply reflected to the halfspace $\mathbb {H} ^{+}$. It turns out that this transformation can approximate the above ones in the Hausdorff distance (see Brock & Solynin (2000)).
References
• Burchard, Almut (2009). "A Short Course on Rearrangement Inequalities" (PDF). Retrieved 1 November 2015.
• Brock, Friedemann; Solynin, Alexander (2000), "An approach to symmetrization via polarization.", Transactions of the American Mathematical Society, 352 (4): 1759–1796, doi:10.1090/S0002-9947-99-02558-1, MR 1695019
• Kojar, Tomas (2015). "Brownian Motion and Symmetrization". arXiv:1505.01868 [math.PR].
• Morgan, Frank (2009). "Symmetrization". Retrieved 1 November 2015.
| Wikipedia |
Symmetrohedron
In geometry, a symmetrohedron is a high-symmetry polyhedron containing convex regular polygons on symmetry axes with gaps on the convex hull filled by irregular polygons. The name was coined by Craig S. Kaplan and George W. Hart.[1]
The trivial cases are the Platonic and Archimedean solids, in which all faces are regular polygons. A first class, called bowtie, contains pairs of trapezoidal faces. A second class has kite faces. Another class are called LCM symmetrohedra.
Symbolic notation
Each symmetrohedron is described by a symbolic expression G(l; m; n; α). G represents the symmetry group (T, O, I). The values l, m and n are the multipliers; a multiplier of m will cause a regular km-gon to be placed at every k-fold axis of G. In the notation, the axis degrees are assumed to be sorted in descending order: 5,3,2 for I, 4,3,2 for O, and 3,3,2 for T. We also allow two special values for the multipliers: *, indicating that no polygons should be placed on the given axes, and 0, indicating that the final solid must have a vertex (a zero-sided polygon) on the axes. We require that one or two of l, m, and n be positive integers. The final parameter, α, controls the relative sizes of the non-degenerate axis-gons.
Conway polyhedron notation is another way to describe these polyhedra, starting with a regular form, and applying prefix operators. The notation doesn't imply which faces should be made regular beyond the uniform solutions of the Archimedean solids.
Duals
I(*;2;3;e) Pyritohedral
1-generator point
These symmetrohedra are produced by a single generator point within a fundamental domain, with reflective symmetry across domain boundaries. Edges exist perpendicular to each triangle boundary, and regular faces exist centered on each of the 3 triangle corners.
The symmetrohedra can be extended to euclidean tilings, using the symmetry of the regular square tiling and of the dual pair of triangular and hexagonal tilings. For tilings, Q denotes square symmetry p4m and H hexagonal symmetry p6m.
Coxeter-Dynkin diagrams exist for these uniform polyhedron solutions, representing the position of the generator point within the fundamental domain. Each node represents one of 3 mirrors on the edge of the triangle. A mirror node is ringed if the generator point is active, off the mirror, and creates new edges between the point and its mirror image.
Domain Edges Tetrahedral (3 3 2) Octahedral (4 3 2) Icosahedral (5 3 2) Triangular (6 3 2) Square (4 4 2)
Symbol Image Symbol Image Symbol Image Symbol Image Dual Symbol Image Dual
1 T(1;*;*;e)
T,
C, O(1;*;*;e)
I(1;*;*;e)
D,
H(1;*;*;e)
H,
Q(1;*;*;e)
Q,
1 T(*;1;*;e)
dT,
O(*;1;*;e)
O,
I(*;1;*;e)
I,
H(*;1;*;e)
dH,
Q(*;1;*;e)
dQ,
2 T(1;1;*;e)
aT,
O(1;1;*;e)
aC,
I(1;1;*;e)
aD,
H(1;1;*;e)
aH,
Q(1;1;*;e)
aQ,
3 T(2;1;*;e)
tT,
O(2;1;*;e)
tC,
I(2;1;*;e)
tD,
H(2;1;*;e)
tH,
Q(2;1;*;e)
tQ,
3 T(1;2;*;e)
dtT,
O(1;2;*;e)
tO,
I(1;2;*;e)
tI,
H(1;2;*;e)
dtH,
Q(1;2;*;e)
dtQ,
4 T(1;1;*;1)
eT,
O(1;1;*;1)
eC,
I(1;1;*;1)
eD,
H(1;1;*;1)
eH,
Q(1;1;*;1)
eQ,
6 T(2;2;*;e)
bT,
O(2;2;*;e)
bC,
I(2;2;*;e)
bD,
H(2;2;*;e)
bH,
Q(2;2;*;e)
bQ,
2-generator points
Domain Edges Tetrahedral (3 3 2) Octahedral (4 3 2) Icosahedral (5 3 2) Triangular (6 3 2) Square (4 4 2)
Symbol Image Symbol Image Symbol Image Symbol Image Dual Symbol Image Dual
6 T(1;2;*;[2])
atT
O(1;2;*;[2])
atO
I(1;2;*;[2])
atI
H(1;2;*;[2])
atΔ
Q(1;2;*;[2])
Q(2;1;*;[2])
atQ
6 O(2;1;*;[2])
atC
I(2;1;*;[2])
atD
H(2;1;*;[2])
atH
7 T(3;*;*;[2])
T(*;3;*;[2])
dKdT
O(3;*;*;[2])
dKdC
I(3;*;*;[2])
dKdD
H(3;*;*;[2])
dKdH
Q(3;*;*;[2])
Q(*;3;*;[2])
dKQ
7 O(*;3;*;[2])
dKdO
I(*;3;*;[2])
dKdI
H(*;3;*;[2])
dKdΔ
8 T(2;3;*;α)
T(3;2;*;α)
dM0T
O(2;3;*;α)
dM0dO
I(2;3;*;α)
dM0dI
H(2;3;*;α)
dM0dΔ
Q(2;3;*;α)
Q(3;2;*;α)
dM0Q
8 O(3;2;*;α)
dM0dC
I(3;2;*;α)
dM0dD
H(3;2;*;α)
dM0dH
9 T(2;4;*;e)
T(4;2;*;e)
ttT
O(2;4;*;e)
ttO
I(2;4;*;e)
ttI
H(2;4;*;e)
ttΔ
Q(4;2;*;e)
Q(2;4;*;e)
ttQ
9 O(4;2;*;e)
ttC
I(4;2;*;e)
ttD
H(4;2;*;e)
ttH
7 T(2;1;*;1)
T(1;2;*;1)
dM3T
O(1;2;*;1)
dM3O
I(1;2;*;1)
dM3I
H(1;2;*;1)
dM3Δ
Q(2;1;*;1)
Q(1;2;*;1)
dM3dQ
7 O(2;1;*;1)
dM3C
I(2;1;*;1)
dM3D
H(2;1;*;1)
dM3H
9 T(2;3;*;e)
T(3;2;*;e)
dm3T
O(2;3;*;e)
dm3C
I(2;3;*;e)
dm3D
H(2;3;*;e)
dm3H
Q(2;3;*;e)
Q(3;2;*;e)
dm3Q
9 O(3;2;*;e)
dm3O
I(3;2;*;e)
dm3I
H(3;2;*;e)
dm3Δ
10 T(2;*;3;e)
T(*;2;3;e)
dXdT
3.4.6.6
O(*;2;3;e)
dXdO
I(*;2;3;e)
dXdI
H(*;2;3;e)
dXdΔ
Q(2;*;3;e)
Q(*;2;3;e)
dXdQ
10 O(2;*;3;e)
dXdC
3.4.6.8
I(2;*;3;e)
dXdD
3.4.6.10
H(2;*;3;e)
dXdH
3.4.6.12
3-generator points
Domain Edges Tetrahedral (3 3 2) Octahedral (4 3 2) Icosahedral (5 3 2) Triangular (6 3 2) Square (4 4 2)
Symbol Image Symbol Image Symbol Image Symbol Image Dual Symbol Image Dual
6 T(2;0;*;[1]) O(0;2;*;[1])
dL0dO
I(0;2;*;[1])
dL0dI
H(0;2;*;[1])
dL0H
Q(2;0;*;[1])
Q(0;2;*;[1])
dL0dQ
6 O(2;0;*;[1])
dL0dC
I(2;0;*;[1])
dL0dD
H(2;0;*;[1])
dL0Δ
7 T(3;0;*;[2]) O(0;3;*;[2])
dLdO
I(0;3;*;[2])
dLdI
H(0;3;*;[2])
dLH
Q(2;0;*;[1])
Q(0;2;*;[2])
dLQ
7 O(3;0;*;[2])
dLdC
I(3;0;*;[2])
dLdD
H(3;0;*;[2])
dLΔ
12 T(2;2;*;a)
amT
O(2;2;*;a)
amC
I(2;2;*;a)
amD
H(2;2;*;a)
amH
Q(2;2;*;a)
amQ
See also
• Near-miss Johnson solid
• Conway polyhedron notation
References
1. Symmetrohedra: Polyhedra from Symmetric Placement of Regular Polygons
External links
• Symmetrohedra on RobertLovesPi.net.
• Antiprism Free software that includes Symmetro for generating and viewing these polyhedra with Kaplan-Hart notation.
| Wikipedia |
Symmetry in mathematics
Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations.[1]
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry).
In general, every kind of structure in mathematics will have its own kind of symmetry; many of these are covered in the sections below.
Symmetry in geometry
Main article: Symmetry (geometry)
The types of symmetry considered in basic geometry include reflectional symmetry, rotation symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article Symmetry (geometry).
Symmetry in calculus
Even and odd functions
Main article: Even and odd functions
Even functions
Let f(x) be a real-valued function of a real variable, then f is even if the following equation holds for all x and -x in the domain of f:
$f(x)=f(-x)$
Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include |x|, x2, x4, cos(x), and cosh(x).
Odd functions
Again, let f be a real-valued function of a real variable, then f is odd if the following equation holds for all x and -x in the domain of f:
$-f(x)=f(-x)$
That is,
$f(x)+f(-x)=0\,.$
Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are x, x3, sin(x), sinh(x), and erf(x).
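Every real-valued function on a domain symmetric about 0 splits as the sum of an even and an odd part, f(x) = (f(x) + f(−x))/2 + (f(x) − f(−x))/2. A short Python check of this decomposition (illustrative) for the exponential function, whose even and odd parts are cosh and sinh:

import math

def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

for x in (0.0, 0.5, -1.3):
    assert abs(even_part(math.exp)(x) - math.cosh(x)) < 1e-12
    assert abs(odd_part(math.exp)(x) - math.sinh(x)) < 1e-12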
Integrating
The integral of an odd function from −A to +A is zero, provided that A is finite and that the function is integrable (e.g., has no vertical asymptotes between −A and A).[3]
The integral of an even function from −A to +A is twice the integral from 0 to +A, provided that A is finite and the function is integrable (e.g., has no vertical asymptotes between −A and A).[3] This also holds true when A is infinite, but only if the integral converges.
Series
• The Maclaurin series of an even function includes only even powers.
• The Maclaurin series of an odd function includes only odd powers.
• The Fourier series of a periodic even function includes only cosine terms.
• The Fourier series of a periodic odd function includes only sine terms.
Symmetry in linear algebra
Symmetry in matrices
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix A is symmetric if
$A=A^{T}.$
By the definition of matrix equality, which requires that the entries in all corresponding positions be equal, equal matrices must have the same dimensions (as matrices of different sizes or shapes cannot be equal). Consequently, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as A = (aij), then aij = aji, for all indices i and j.
For example, the following 3×3 matrix is symmetric:
${\begin{bmatrix}1&7&3\\7&4&-5\\3&-5&6\end{bmatrix}}$
Every square diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
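A short NumPy check of these statements (illustrative), using the 3×3 example above and splitting an arbitrary square matrix into its symmetric part (M + MT)/2 and skew-symmetric part (M − MT)/2:

import numpy as np

A = np.array([[1, 7, 3],
              [7, 4, -5],
              [3, -5, 6]])
assert (A == A.T).all()            # the example matrix is symmetric

M = np.array([[1.0, 2.0],
              [0.0, 3.0]])         # an arbitrary square matrix
S = (M + M.T) / 2                  # symmetric part
K = (M - M.T) / 2                  # skew-symmetric part
assert np.allclose(S, S.T) and np.allclose(K, -K.T)
assert np.allclose(S + K, M)       # the two parts recover M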
Symmetry in abstract algebra
Symmetric groups
Main article: Symmetric group
The symmetric group Sn (on a finite set of n symbols) is the group whose elements are all the permutations of the n symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself.[4] Since there are n! (n factorial) possible permutations of a set of n symbols, it follows that the order (i.e., the number of elements) of the symmetric group Sn is n!.
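A minimal Python sketch of Sn as permutations under composition (illustrative; a permutation is encoded as a tuple p with p[i] the image of i):

from itertools import permutations
from math import factorial

n = 4
elements = list(permutations(range(n)))   # all bijections of {0, ..., n-1}
assert len(elements) == factorial(n)      # the order of S_n is n!

def compose(p, q):
    # composition of permutations as functions: (p after q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

identity = tuple(range(n))
for p in elements:
    assert compose(p, identity) == p and compose(identity, p) == p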
Symmetric polynomials
Main article: Symmetric polynomial
A symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n, one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn).
Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view, the elementary symmetric polynomials are the most fundamental symmetric polynomials. A theorem states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.
Examples
In two variables X1 and X2, one has symmetric polynomials such as:
• $X_{1}^{3}+X_{2}^{3}-7$
• $4X_{1}^{2}X_{2}^{2}+X_{1}^{3}X_{2}+X_{1}X_{2}^{3}+(X_{1}+X_{2})^{4}$
and in three variables X1, X2 and X3, one has as a symmetric polynomial:
• $X_{1}X_{2}X_{3}-2X_{1}X_{2}-2X_{1}X_{3}-2X_{2}X_{3}\,$
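One can test the definition mechanically. The following sketch, assuming the SymPy library is available, swaps the variables simultaneously in the second two-variable example above and checks that the polynomial is unchanged:

from sympy import symbols, expand

x1, x2 = symbols('x1 x2')
P = 4*x1**2*x2**2 + x1**3*x2 + x1*x2**3 + (x1 + x2)**4

# Swap the variables simultaneously and compare expanded forms.
P_swapped = P.subs({x1: x2, x2: x1}, simultaneous=True)
assert expand(P - P_swapped) == 0         # P is a symmetric polynomial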
Symmetric tensors
Main article: Symmetric tensor
In mathematics, a symmetric tensor is tensor that is invariant under a permutation of its vector arguments:
$T(v_{1},v_{2},\dots ,v_{r})=T(v_{\sigma 1},v_{\sigma 2},\dots ,v_{\sigma r})$
for every permutation σ of the symbols {1,2,...,r}. Alternatively, an rth order symmetric tensor represented in coordinates as a quantity with r indices satisfies
$T_{i_{1}i_{2}\dots i_{r}}=T_{i_{\sigma 1}i_{\sigma 2}\dots i_{\sigma r}}.$
The space of symmetric tensors of rank r on a finite-dimensional vector space is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
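A NumPy sketch of symmetrizing a tensor represented as a multidimensional array, by averaging over all permutations of its axes (illustrative; the random tensor is made up, and the division by the number of permutations assumes characteristic zero as noted above):

import numpy as np
from itertools import permutations

def symmetric_part(T):
    # Average T over all permutations of its axes.
    perms = list(permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

T = np.random.rand(3, 3, 3)               # an arbitrary order-3 tensor
S = symmetric_part(T)
assert np.allclose(S, np.transpose(S, (1, 0, 2)))   # invariant under axis swaps
assert np.allclose(S, np.transpose(S, (2, 1, 0)))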
Galois theory
Main article: Galois theory
Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, that A2 + 5B3 = 7. The central idea of Galois theory is to consider those permutations (or rearrangements) of the roots having the property that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. Thus, Galois theory studies the symmetries inherent in algebraic equations.
Automorphisms of algebraic objects
Main article: Automorphism
In abstract algebra, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Examples
• In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
• In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
• A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group.[5]
• In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V).
• A field automorphism is a bijective ring homomorphism from a field to itself. In the cases of the rational numbers (Q) and the real numbers (R) there are no nontrivial field automorphisms. Some subfields of R have nontrivial field automorphisms, which however do not extend to all of R (because they cannot preserve the property of a number having a square root in R). In the case of the complex numbers, C, there is a unique nontrivial automorphism that sends R into R: complex conjugation, but there are infinitely (uncountably) many "wild" automorphisms (assuming the axiom of choice).[6] Field automorphisms are important to the theory of field extensions, in particular Galois extensions. In the case of a Galois extension L/K the subgroup of all automorphisms of L fixing K pointwise is called the Galois group of the extension.
Symmetry in representation theory
Symmetry in quantum mechanics: bosons and fermions
In quantum mechanics, bosons have representatives that are symmetric under permutation operators, and fermions have antisymmetric representatives.
This implies the Pauli exclusion principle for fermions. In fact, the Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state $\scriptstyle |x\rangle $ and the other in state $\scriptstyle |y\rangle $:
$|\psi \rangle =\sum _{x,y}A(x,y)|x,y\rangle $
and antisymmetry under exchange means that A(x,y) = −A(y,x). This implies that A(x,x) = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity A(x,y) is not a matrix but an antisymmetric rank-two tensor.
Conversely, if the diagonal quantities A(x,x) are zero in every basis, then the wavefunction component:
$A(x,y)=\langle \psi |x,y\rangle =\langle \psi |(|x\rangle \otimes |y\rangle )$
is necessarily antisymmetric. To prove it, consider the matrix element:
$\langle \psi |((|x\rangle +|y\rangle )\otimes (|x\rangle +|y\rangle ))\,$
This is zero, because the two particles have zero probability to both be in the superposition state $\scriptstyle |x\rangle +|y\rangle $. But this is equal to
$\langle \psi |x,x\rangle +\langle \psi |x,y\rangle +\langle \psi |y,x\rangle +\langle \psi |y,y\rangle \,$
The first and last terms on the right hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey:
$\langle \psi |x,y\rangle +\langle \psi |y,x\rangle =0\,$.
or
$A(x,y)=-A(y,x)\,$
Symmetry in set theory
Symmetric relation
Main article: Symmetric relation
We call a relation symmetric if every time the relation holds from A to B, it also holds from B to A. Note that symmetry is not the exact opposite of antisymmetry.
Symmetry in metric spaces
Isometries of a space
Main article: Isometry
An isometry is a distance-preserving map between metric spaces. Given a metric space, or a set and scheme for assigning distances between elements of the set, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional space, two geometric figures are congruent if they are related by an isometry: related by either a rigid motion, or a composition of a rigid motion and a reflection. Up to a relation by a rigid motion, they are equal if related by a direct isometry.
Isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc.[7]
Symmetries of differential equations
A symmetry of a differential equation is a transformation that leaves the differential equation invariant. Knowledge of such symmetries may help solve the differential equation.
A Lie symmetry of a system of differential equations is a continuous symmetry of the system of differential equations. Knowledge of a Lie symmetry can be used to simplify an ordinary differential equation through reduction of order.[8]
For ordinary differential equations, knowledge of an appropriate set of Lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration.
Symmetries may be found by solving a related set of ordinary differential equations.[8] Solving these equations is often much simpler than solving the original differential equations.
Symmetry in probability
In the case of a finite number of possible outcomes, symmetry with respect to permutations (relabelings) implies a discrete uniform distribution.
In the case of a real interval of possible outcomes, symmetry with respect to interchanging sub-intervals of equal length corresponds to a continuous uniform distribution.
In other cases, such as "taking a random integer" or "taking a random real number", there are no probability distributions at all symmetric with respect to relabellings or to exchange of equally long subintervals. Other reasonable symmetries do not single out one particular distribution, or in other words, there is not a unique probability distribution providing maximum symmetry.
There is one type of isometry in one dimension that may leave the probability distribution unchanged, that is reflection in a point, for example zero.
A possible symmetry for randomness with positive outcomes is that the preceding reflection symmetry applies to the logarithm, i.e., the outcome and its reciprocal have the same distribution. However, this symmetry does not single out any particular distribution uniquely.
For a "random point" in a plane or in space, one can choose an origin, and consider a probability distribution with circular or spherical symmetry, respectively.
See also
• Use of symmetry in integration
• Invariance (mathematics)
References
1. Weisstein, Eric W. "Invariant". mathworld.wolfram.com. Retrieved 2019-12-06.
2. "Maths in a minute: Symmetry". plus.maths.org. 2016-06-23. Retrieved 2019-12-06.
3. Weisstein, Eric W. "Odd Function". mathworld.wolfram.com. Retrieved 2019-12-06.
4. Jacobson (2009), p. 31.
5. PJ Pahl, R Damrath (2001). "§7.5.5 Automorphisms". Mathematical foundations of computational engineering (Felix Pahl translation ed.). Springer. p. 376. ISBN 3-540-67995-2.
6. Yale, Paul B. (May 1966). "Automorphisms of the Complex Numbers" (PDF). Mathematics Magazine. 39 (3): 135–141. doi:10.2307/2689301. JSTOR 2689301.
7. Petitjean, Michel (2007). "A definition of symmetry". Symmetry: Culture and Science. 18 (2–3): 99–119. Zbl 1274.58003.
8. Olver, Peter J. (1986). Applications of Lie Groups to Differential Equations. New York: Springer Verlag. ISBN 978-0-387-95000-6.
Bibliography
• Weyl, Hermann (1989) [1952]. Symmetry. Princeton Science Library. Princeton University Press. ISBN 0-691-02374-3.
• Ronan, Mark (2006). Symmetry and the Monster. Oxford University Press. ISBN 978-0-19-280723-6. (Concise introduction for lay reader)
• du Sautoy, Marcus (2012). Finding Moonshine: A Mathematician's Journey Through Symmetry. Harper Collins. ISBN 978-0-00-738087-9.
Authority control: National
• Israel
• United States
| Wikipedia |
One-dimensional symmetry group
A one-dimensional symmetry group is a mathematical group that describes symmetries in one dimension (1D).
A pattern in 1D can be represented as a function f(x) for, say, the color at position x.
The only nontrivial point group in 1D is a simple reflection. It can be represented by the simplest Coxeter group, A1, [ ], or Coxeter-Dynkin diagram .
Affine symmetry groups represent translation. Isometries which leave the function unchanged are translations x + a with a such that f(x + a) = f(x) and reflections a − x with a such that f(a − x) = f(x). The reflections can be represented by the affine Coxeter group [∞], or Coxeter-Dynkin diagram representing two reflections, and the translational symmetry as [∞]+, or Coxeter-Dynkin diagram as the composite of two reflections.
Point group
For a pattern without translational symmetry there are the following possibilities (1D point groups):
• the symmetry group is the trivial group (no symmetry)
• the symmetry group is one of the groups each consisting of the identity and reflection in a point (isomorphic to Z2)
Group Coxeter Description
C1 [ ]+ Identity, Trivial group Z1
D1 [ ] Reflection. Abstract groups Z2 or Dih1.
Discrete symmetry groups
These affine symmetries can be considered limiting cases of the 2D dihedral and cyclic groups:
Group Coxeter Description
C∞ [∞]+ Cyclic: ∞-fold rotations become translations. Abstract group Z∞, the infinite cyclic group.
D∞ [∞] Dihedral: ∞-fold reflections. Abstract group Dih∞, the infinite dihedral group.
Translational symmetry
Consider all patterns in 1D which have translational symmetry, i.e., functions f(x) such that for some a > 0, f(x + a) = f(x) for all x. For these patterns, the values of a for which this property holds form a group.
We first consider patterns for which the group is discrete, i.e., for which the positive values in the group have a minimum. By rescaling we make this minimum value 1.
Such patterns fall in two categories, the two 1D space groups or line groups.
In the simpler case the only isometries of R which map the pattern to itself are translations; this applies, e.g., for the pattern
− −−− − −−− − −−− − −−−
Each isometry can be characterized by an integer, namely plus or minus the translation distance. Therefore the symmetry group is Z.
In the other case, among the isometries of R which map the pattern to itself there are also reflections; this applies, e.g., for the pattern
− −−− − − −−− − − −−− −
We choose the origin for x at one of the points of reflection. Now all reflections which map the pattern to itself are of the form a−x where the constant "a" is an integer (the increments of a are 1 again, because we can combine a reflection and a translation to get another reflection, and we can combine two reflections to get a translation). Therefore all isometries can be characterized by an integer and a code, say 0 or 1, for translation or reflection.
Thus:
• $(a,0):x\mapsto x+a$
• $(a,1):x\mapsto a-x$
The latter is a reflection with respect to the point a/2 (an integer or an integer plus 1/2).
Group operations (function composition, the one on the right first) are, for integers a and b:
• $(a,0)\circ (b,0)=(a+b,0)$
• $(a,0)\circ (b,1)=(a+b,1)$
• $(a,1)\circ (b,0)=(a-b,1)$
• $(a,1)\circ (b,1)=(a-b,0)$
E.g., in the third case: translation by an amount b changes x into x + b, reflection with respect to 0 gives −x − b, and a translation a gives a − b − x.
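These four rules can be checked mechanically. A small Python sketch (illustrative encoding: an element is a pair (a, c) with c = 0 for the translation x -> x + a and c = 1 for the reflection x -> a − x):

def apply(g, x):
    # apply an isometry encoded as (a, c): translation if c == 0, reflection if c == 1
    a, c = g
    return x + a if c == 0 else a - x

def compose(g, h):
    # composition g after h, "the one on the right first", matching the rules above
    a, c = g
    b, d = h
    if c == 0:
        return (a + b, d)
    return (a - b, 1 - d)

for g in [(3, 0), (3, 1)]:
    for h in [(5, 0), (5, 1)]:
        for x in range(-4, 5):
            assert apply(compose(g, h), x) == apply(g, apply(h, x))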
This group is called the generalized dihedral group of Z, Dih(Z), and also D∞. It is a semidirect product of Z and C2. It has a normal subgroup of index 2 isomorphic to Z: the translations. Also it contains an element f of order 2 such that, for all n in Z, n f = f n−1: the reflection with respect to the reference point, (0,1).
The two groups are called lattice groups. The lattice is Z. As translation cell we can take the interval 0 ≤ x < 1. In the first case the fundamental domain can be taken the same; topologically it is a circle (1-torus); in the second case we can take 0 ≤ x ≤ 0.5.
The actual discrete symmetry group of a translationally symmetric pattern can be:
• of group 1 type, for any positive value of the smallest translation distance
• of group 2 type, for any positive value of the smallest translation distance, and any positioning of the lattice of points of reflection (which is twice as dense as the translation lattice)
The set of translationally symmetric patterns can thus be classified by actual symmetry group, while actual symmetry groups, in turn, can be classified as type 1 or type 2.
These space group types are the symmetry groups “up to conjugacy with respect to affine transformations”: the affine transformation changes the translation distance to the standard one (above: 1), and the position of one of the points of reflection, if applicable, to the origin. Thus the actual symmetry group contains elements of the form gag−1 = b, which is a conjugate of a.
Non-discrete symmetry groups
For a homogeneous “pattern” the symmetry group contains all translations, and reflection in all points. The symmetry group is isomorphic to Dih(R).
There are also less trivial patterns/functions with translational symmetry for arbitrarily small translations, e.g. the group of translations by rational distances. Even apart from scaling and shifting, there are infinitely many cases, e.g. by considering rational numbers of which the denominators are powers of a given prime number.
The translations form a group of isometries. However, there is no pattern with this group as symmetry group.
1D-symmetry of a function vs. 2D-symmetry of its graph
Symmetries of a function (in the sense of this article) imply corresponding symmetries of its graph. However, 2-fold rotational symmetry of the graph does not imply any symmetry (in the sense of this article) of the function: function values (in a pattern representing colors, grey shades, etc.) are nominal data, i.e. grey is not between black and white, the three colors are simply all different.
Even with nominal colors there can be a special kind of symmetry, as in:
−−−−−−− -- − −−− − − −
(reflection gives the negative image). This is also not included in the classification.
Group action
Group actions of the symmetry group that can be considered in this connection are:
• on R
• on the set of real functions of a real variable (each representing a pattern)
This section illustrates group action concepts for these cases.
The action of G on X is called
• transitive if for any two x, y in X there exists a g in G such that g · x = y; for neither of the two group actions this is the case for any discrete symmetry group
• faithful (or effective) if for any two different g, h in G there exists an x in X such that g · x ≠ h · x; for both group actions this is the case for any discrete symmetry group (because, except for the identity, symmetry groups do not contain elements that “do nothing”)
• free if for any two different g, h in G and all x in X we have g · x ≠ h · x; this is the case if there are no reflections
• regular (or simply transitive) if it is both transitive and free; this is equivalent to saying that for any two x, y in X there exists precisely one g in G such that g · x = y.
Orbits and stabilizers
Consider a group G acting on a set X. The orbit of a point x in X is the set of elements of X to which x can be moved by the elements of G. The orbit of x is denoted by Gx:
$Gx=\left\{g\cdot x\mid g\in G\right\}.$
Case that the group action is on R:
• For the trivial group, all orbits contain only one element; for a group of translations, an orbit is e.g. {..,−9,1,11,21,..}, for a reflection e.g. {2,4}, and for the symmetry group with translations and reflections, e.g., {−8,−6,2,4,12,14,22,24,..} (translation distance is 10, points of reflection are ..,−7,−2,3,8,13,18,23,..). The points within an orbit are “equivalent”. If a symmetry group applies for a pattern, then within each orbit the color is the same.
Case that the group action is on patterns:
• The orbits are sets of patterns, containing translated and/or reflected versions, “equivalent patterns”. A translation of a pattern is only equivalent if the translation distance is one of those included in the symmetry group considered, and similarly for a mirror image.
The set of all orbits of X under the action of G is written as X/G.
If Y is a subset of X, we write GY for the set {g · y : y $\in $ Y and g $\in $ G}. We call the subset Y invariant under G if GY = Y (which is equivalent to GY ⊆ Y). In that case, G also operates on Y. The subset Y is called fixed under G if g · y = yfor all g in G and all y in Y. In the example of the orbit {−8,−6,2,4,12,14,22,24,..}, {−9,−8,−6,−5,1,2,4,5,11,12,14,15,21,22,24,25,..} is invariant under G, but not fixed.
For every x in X, we define the stabilizer subgroup of x (also called the isotropy group or little group) as the set of all elements in G that fix x:
$G_{x}=\{g\in G\mid g\cdot x=x\}.$
If x is a reflection point, its stabilizer is the group of order two containing the identity and the reflection inx. In other cases the stabilizer is the trivial group.
For a fixed x in X, consider the map from G to X given by $g\mid \rightarrow g\cdot x$. The image of this map is the orbit of x and the coimage is the set of all left cosets of Gx. The standard quotient theorem of set theory then gives a natural bijection between $G/G_{x}$ and $Gx$. Specifically, the bijection is given by $hG_{x}\mid \rightarrow h\cdot x$. This result is known as the orbit-stabilizer theorem. If, in the example, we take $x=3$, the orbit is {−7,3,13,23,..}, and the two groups are isomorphic with Z.
If two elements $x$ and $y$ belong to the same orbit, then their stabilizer subgroups, $G_{x}$ and $G_{y}$, are isomorphic. More precisely: if $y=g\cdot x$, then $G_{y}=gG_{x}g^{-1}$. In the example this applies e.g. for 3 and 23, both reflection points. Reflection about 23 corresponds to a translation of −20, reflection about 3, and translation of 20.
See also
• Line group
• Frieze group
• Space group
• Wallpaper group
| Wikipedia |
Symmetry in Mechanics
Symmetry in Mechanics: A Gentle, Modern Introduction is an undergraduate textbook on mathematics and mathematical physics, centered on the use of symplectic geometry to solve the Kepler problem. It was written by Stephanie Singer, and published by Birkhäuser in 2001.
Topics
The Kepler problem in classical mechanics is a special case of the two-body problem in which two point masses interact by Newton's law of universal gravitation (or by any central force obeying an inverse-square law). The book starts and ends with this problem, the first time in an ad hoc manner that represents the problem using a system of twelve variables for the positions and momentum vectors of the two bodies, uses the conservation laws of physics to set up a system of differential equations obeyed by these variables, and solves these equations. The second time through, it describes the positions and variables of the two bodies as a single point in a 12-dimensional phase space, describes the behavior of the bodies as a Hamiltonian system, and uses symplectic reductions to shrink the phase space to two dimensions before solving it to produce Kepler's laws of planetary motion in a more direct and principled way.[1]
The middle portion of the book sets up the machinery of symplectic geometry needed to complete this tour. Topics covered in this part include manifolds, vector fields and differential forms, pushforwards and pullbacks, symplectic manifolds, Hamiltonian energy functions, the representation of finite and infinitesimal physical symmetries using Lie groups and Lie algebras, and the use of the moment map to relate symmetries to conserved quantities.[1][2][3] In these topics, as well, concrete examples are central to the presentation.[4]
Audience and reception
The book is written as a textbook for undergraduate mathematics and physics students, with many exercises, and it assumes that the students are already familiar with multivariable calculus and linear algebra,[1] a significantly lower level of background material than other books on symplectic geometry in mechanics.[5] It is not comprehensive in its coverage of symplectic geometry and mechanics, but could be used as auxiliary reading in a class that covers that material from other sources,[6] such as Abraham and Marsden's Foundations of Mechanics or Arnold's Mathematical Methods of Classical Mechanics. Alternatively, on its own, it can provide a more accessible first course in this material, before presenting it more comprehensively in another course.[1][2][4]
Reviewer William Satzer writes that this book "makes serious efforts to address real students and their potential difficulties" and shifts comfortably between mathematical and physical views of its problem.[1] Similarly, reviewer J. R. Dorfman writes that it "removes some of the language barriers that divide the worlds of mathematics and physics",[3] and reviewer Jiří Vanžura calls it "remarkable" in its dual ability to motivate mathematical methods for physics students and provide applications in physics for mathematics students, adding that "The book is perfectly written and serves very well its purpose."[7] Reviewer Ivailo Mladenov notes with approval the book's attention to example-first exposition, and despite pointing to a minor inaccuracy regarding the nationality of Sophus Lie, recommends it to both undergraduate and graduate students.[6] Reviewer Richard Montgomory writes that the book does "an excellent job of leading the reader from the Kepler problem to a view of the growing field of symplectic geometry".[5]
References
1. Satzer, William J. (December 2005), "Review of Symmetry in Mechanics", MAA Reviews, Mathematical Association of America
2. Jamiołkowski, A.; Mrugała, R. (February 2002), "Review of Symmetry in Mechanics", Reports on Mathematical Physics, 49 (1): 123–124, Bibcode:2002RpMP...49..123J, doi:10.1016/s0034-4877(02)80009-x
3. Dorfman, J. R. (January 2002), "Review of Symmetry in Mechanics", Physics Today, 55 (1): 57–57, doi:10.1063/1.1457270
4. Abbott, Steve (November 2001), "Review of Symmetry in Mechanics", The Mathematical Gazette, 85 (504): 571, doi:10.2307/3621823, JSTOR 3621823
5. Montgomery, Richard (April 2003), "Review of Symmetry in Mechanics" (PDF), American Mathematical Monthly, 110 (4): 348–353, doi:10.2307/3647898, JSTOR 3647898
6. Mladenov, Ivailo, "Review of Symmetry in Mechanics", zbMATH, Zbl 0970.70003; see also Mladenov's review in MR1816059
7. Vanžura, Jiří (2003), "Review of Symmetry in Mechanics", Mathematica Bohemica, 128 (1): 112
| Wikipedia |
Symmetry number
The symmetry number or symmetry order of an object is the number of different but indistinguishable (or equivalent) arrangements (or views) of the object, that is, it is the order of its symmetry group. The object can be a molecule, crystal lattice, lattice, tiling, or in general any kind of mathematical object that admits symmetries.[1]
In statistical thermodynamics, the symmetry number corrects for any overcounting of equivalent molecular conformations in the partition function. In this sense, the symmetry number depends upon how the partition function is formulated. For example, if one writes the partition function of ethane so that the integral includes full rotation of a methyl, then the 3-fold rotational symmetry of the methyl group contributes a factor of 3 to the symmetry number; but if one writes the partition function so that the integral includes only one rotational energy well of the methyl, then the methyl rotation does not contribute to the symmetry number. [2]
See also
• Group theory, a branch of mathematics which discusses symmetry, symmetry groups, symmetry spaces, symmetry operations
• Point groups in three dimensions
• Space group in 3 dimensions
• Molecular symmetry
• List of the 230 crystallographic 3D space groups
• Fixed points of isometry groups in Euclidean space
• Symmetric group, mathematics
• Symmetry group, mathematics
References
1. IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "symmetry number, s". doi:10.1351/goldbook.S06214
2. Symmetry Numbers for Rigid, Flexible and Fluxional Molecules: Theory and Applications. M.K. Gilson and K. K. Irikura. J. Phys. Chem. B 114:16304-16317, 2010.
| Wikipedia |
Semi-implicit Euler method
In mathematics, the semi-implicit Euler method, also called symplectic Euler, semi-explicit Euler, Euler–Cromer, and Newton–Størmer–Verlet (NSV), is a modification of the Euler method for solving Hamilton's equations, a system of ordinary differential equations that arises in classical mechanics. It is a symplectic integrator and hence it yields better results than the standard Euler method.
Setting
The semi-implicit Euler method can be applied to a pair of differential equations of the form
${\begin{aligned}{dx \over dt}&=f(t,v)\\{dv \over dt}&=g(t,x),\end{aligned}}$
where f and g are given functions. Here, x and v may be either scalars or vectors. The equations of motion in Hamiltonian mechanics take this form if the Hamiltonian is of the form
$H=T(t,v)+V(t,x).\,$
The differential equations are to be solved with the initial condition
$x(t_{0})=x_{0},\qquad v(t_{0})=v_{0}.$
The method
The semi-implicit Euler method produces an approximate discrete solution by iterating
${\begin{aligned}v_{n+1}&=v_{n}+g(t_{n},x_{n})\,\Delta t\\[0.3em]x_{n+1}&=x_{n}+f(t_{n},v_{n+1})\,\Delta t\end{aligned}}$
where Δt is the time step and tn = t0 + nΔt is the time after n steps.
The difference with the standard Euler method is that the semi-implicit Euler method uses vn+1 in the equation for xn+1, while the Euler method uses vn.
Applying the method with negative time step to the computation of $(x_{n},v_{n})$ from $(x_{n+1},v_{n+1})$ and rearranging leads to the second variant of the semi-implicit Euler method
${\begin{aligned}x_{n+1}&=x_{n}+f(t_{n},v_{n})\,\Delta t\\[0.3ex]v_{n+1}&=v_{n}+g(t_{n},x_{n+1})\,\Delta t\end{aligned}}$
which has similar properties.
The semi-implicit Euler is a first-order integrator, just as the standard Euler method. This means that it commits a global error of the order of Δt. However, the semi-implicit Euler method is a symplectic integrator, unlike the standard method. As a consequence, the semi-implicit Euler method almost conserves the energy (when the Hamiltonian is time-independent). Often, the energy increases steadily when the standard Euler method is applied, making it far less accurate.
Alternating between the two variants of the semi-implicit Euler method leads in one simplification to the Störmer-Verlet integration and in a slightly different simplification to the leapfrog integration, increasing both the order of the error and the order of preservation of energy.[1]
The stability region of the semi-implicit method was presented by Niiranen[2] although the semi-implicit Euler was misleadingly called symmetric Euler in his paper. The semi-implicit method models the simulated system correctly if the complex roots of the characteristic equation are within the circle shown below. For real roots the stability region extends outside the circle for which the criterion is $s>-2/\Delta t$
As can be seen, the semi-implicit method can simulate correctly both stable systems that have their roots in the left half plane and unstable systems that have their roots in the right half plane. This is clear advantage over forward (standard) Euler and backward Euler. Forward Euler tends to have less damping than the real system when the negative real parts of the roots get near the imaginary axis and backward Euler may show the system be stable even when the roots are in the right half plane.
Example
The motion of a spring satisfying Hooke's law is given by
${\begin{aligned}{\frac {dx}{dt}}&=v(t)\\[0.2em]{\frac {dv}{dt}}&=-{\frac {k}{m}}\,x=-\omega ^{2}\,x.\end{aligned}}$
The semi-implicit Euler for this equation is
${\begin{aligned}v_{n+1}&=v_{n}-\omega ^{2}\,x_{n}\,\Delta t\\[0.2em]x_{n+1}&=x_{n}+v_{n+1}\,\Delta t.\end{aligned}}$
Substituting $v_{n+1}$ in the second equation with the expression given by the first equation, the iteration can be expressed in the following matrix form
${\begin{bmatrix}x_{n+1}\\v_{n+1}\end{bmatrix}}={\begin{bmatrix}1-\omega ^{2}\Delta t^{2}&\Delta t\\-\omega ^{2}\Delta t&1\end{bmatrix}}{\begin{bmatrix}x_{n}\\v_{n}\end{bmatrix}},$
and since the determinant of the matrix is 1 the transformation is area-preserving.
The iteration preserves the modified energy functional $E_{h}(x,v)={\tfrac {1}{2}}\left(v^{2}+\omega ^{2}\,x^{2}-\omega ^{2}\Delta t\,vx\right)$ exactly, leading to stable periodic orbits (for sufficiently small step size) that deviate by $O(\Delta t)$ from the exact orbits. The exact circular frequency $\omega $ increases in the numerical approximation by a factor of $1+{\tfrac {1}{24}}\omega ^{2}\Delta t^{2}+O(\Delta t^{4})$.
References
1. Hairer, Ernst; Lubich, Christian; Wanner, Gerhard (2003). "Geometric numerical integration illustrated by the Störmer/Verlet method". Acta Numerica. 12: 399–450. Bibcode:2003AcNum..12..399H. CiteSeerX 10.1.1.7.7106. doi:10.1017/S0962492902000144. S2CID 122016794.
2. Niiranen, Jouko: Fast and accurate symmetric Euler algorithm for electromechanical simulations Proceedings of the Electrimacs'99, Sept. 14-16, 1999 Lisboa, Portugal, Vol. 1, pages 71 - 78.
• Nikolic, Branislav K. "Euler-Cromer method". University of Delaware. Retrieved 2021-09-29.{{cite web}}: CS1 maint: url-status (link)
• Vesely, Franz J. (2001). Computational Physics: An Introduction (2nd ed.). Springer. pp. 117. ISBN 978-0-306-46631-1.
• Giordano, Nicholas J.; Hisao Nakanishi (July 2005). Computational Physics (2nd ed.). Benjamin Cummings. ISBN 0-13-146990-8.
Numerical methods for integration
First-order methods
• Euler method
• Backward Euler
• Semi-implicit Euler
• Exponential Euler
Second-order methods
• Verlet integration
• Velocity Verlet
• Trapezoidal rule
• Beeman's algorithm
• Midpoint method
• Heun's method
• Newmark-beta method
• Leapfrog integration
Higher-order methods
• Exponential integrator
• Runge–Kutta methods
• List of Runge–Kutta methods
• Linear multistep method
• General linear methods
• Backward differentiation formula
• Yoshida
• Gauss–Legendre method
Theory
• Symplectic integrator
| Wikipedia |
Symplectic basis
In linear algebra, a standard symplectic basis is a basis ${\mathbf {e} }_{i},{\mathbf {f} }_{i}$ of a symplectic vector space, which is a vector space with a nondegenerate alternating bilinear form $\omega $, such that $\omega ({\mathbf {e} }_{i},{\mathbf {e} }_{j})=0=\omega ({\mathbf {f} }_{i},{\mathbf {f} }_{j}),\omega ({\mathbf {e} }_{i},{\mathbf {f} }_{j})=\delta _{ij}$. A symplectic basis of a symplectic vector space always exists; it can be constructed by a procedure similar to the Gram–Schmidt process.[1] The existence of the basis implies in particular that the dimension of a symplectic vector space is even if it is finite.
See also
• Darboux theorem
• Symplectic frame bundle
• Symplectic spinor bundle
• Symplectic vector space
Notes
1. Maurice de Gosson: Symplectic Geometry and Quantum Mechanics (2006), p.7 and pp. 12–13
References
• da Silva, A.C., Lectures on Symplectic Geometry, Springer (2001). ISBN 3-540-42195-5.
• Maurice de Gosson: Symplectic Geometry and Quantum Mechanics (2006) Birkhäuser Verlag, Basel ISBN 978-3-7643-7574-4.
| Wikipedia |
Symplectic category
In mathematics, Weinstein's symplectic category is (roughly) a category whose objects are symplectic manifolds and whose morphisms are canonical relations, inclusions of Lagrangian submanifolds L into $M\times N^{-}$, where the superscript minus means minus the given symplectic form (for example, the graph of a symplectomorphism; hence, minus). The notion was introduced by Alan Weinstein, according to whom "Quantization problems[1] suggest that the category of symplectic manifolds and symplectomorphisms be augmented by the inclusion of canonical relations as morphisms." The composition of canonical relations is given by a fiber product.
Strictly speaking, the symplectic category is not a well-defined category (since the composition may not be well-defined) without some transversality conditions.
References
Notes
1. He means geometric quantization.
Sources
• Weinstein, Alan (2009). "Symplectic Categories". arXiv:0911.4133.
Further reading
• Victor Guillemin and Shlomo Sternberg, Some problems in integral geometry and some related problems in microlocal analysis, American Journal of Mathematics 101 (1979), 915–955.
See also
• Fourier integral operator
| Wikipedia |
Symplectic filling
In mathematics, a filling of a manifold X is a cobordism W between X and the empty set. More to the point, the n-dimensional topological manifold X is the boundary of an (n + 1)-dimensional manifold W. Perhaps the most active area of current research is when n = 3, where one may consider certain types of fillings.
There are many types of fillings, and a few examples of these types (within a probably limited perspective) follow.
• An oriented filling of any orientable manifold X is another manifold W such that the orientation of X is given by the boundary orientation of W, which is the one where the first basis vector of the tangent space at each point of the boundary is the one pointing directly out of W, with respect to a chosen Riemannian metric. Mathematicians call this orientation the outward normal first convention.
All the following cobordisms are oriented, with the orientation on W given by a symplectic structure. Let ξ denote the kernel of the contact form α.
• A weak symplectic filling of a contact manifold (X,ξ) is a symplectic manifold (W,ω) with $\partial W=X$ such that $\omega |_{\xi }>0$.
• A strong symplectic filling of a contact manifold (X,ξ) is a symplectic manifold (W,ω) with $\partial W=X$ such that ω is exact near the boundary (which is X) and α is a primitive for ω. That is, ω = dα in a neighborhood of the boundary $\partial W=X$.
• A Stein filling of a contact manifold (X,ξ) is a Stein manifold W which has X as its strictly pseudoconvex boundary and ξ is the set of complex tangencies to X – that is, those tangent planes to X that are complex with respect to the complex structure on W. The canonical example of this is the 3-sphere
$\{x\in \mathbb {C} ^{2}:|x|=1\}$
where the complex structure on $\mathbb {C} ^{2}$ is multiplication by ${\sqrt {-1}}$ in each coordinate and W is the ball {|x| < 1} bounded by that sphere.
It is known that this list is strictly increasing in difficulty in the sense that there are examples of contact 3-manifolds with weak but no strong filling, and others that have strong but no Stein filling. Further, it can be shown that each type of filling is an example of the one preceding it, so that a Stein filling is a strong symplectic filling, for example. It used to be that one spoke of semi-fillings in this context, which means that X is one of possibly many boundary components of W, but it has been shown that any semi-filling can be modified to be a filling of the same type, of the same 3-manifold, in the symplectic world (Stein manifolds always have one boundary component).
References
• Y. Eliashberg, A Few Remarks about Symplectic Filling, Geometry and Topology 8, 2004, p. 277–293 arXiv:math/0311459
• J. Etnyre, On Symplectic Fillings Algebr. Geom. Topol. 4 (2004), p. 73–80 online
• H. Geiges, An Introduction to Contact Topology, Cambridge University Press, 2008
| Wikipedia |
Symplectic frame bundle
In symplectic geometry, the symplectic frame bundle[1] of a given symplectic manifold $(M,\omega )\,$ is the canonical principal ${\mathrm {Sp} }(n,{\mathbb {R} })$-subbundle $\pi _{\mathbf {R} }\colon {\mathbf {R} }\to M\,$ of the tangent frame bundle $\mathrm {F} M\,$ consisting of linear frames which are symplectic with respect to $\omega \,$. In other words, an element of the symplectic frame bundle is a linear frame $u\in \mathrm {F} _{p}(M)\,$ at point $p\in M\,,$ i.e. an ordered basis $({\mathbf {e} }_{1},\dots ,{\mathbf {e} }_{n},{\mathbf {f} }_{1},\dots ,{\mathbf {f} }_{n})\,$ of tangent vectors at $p\,$ of the tangent vector space $T_{p}(M)\,$, satisfying
$\omega _{p}({\mathbf {e} }_{j},{\mathbf {e} }_{k})=\omega _{p}({\mathbf {f} }_{j},{\mathbf {f} }_{k})=0\,$ and $\omega _{p}({\mathbf {e} }_{j},{\mathbf {f} }_{k})=\delta _{jk}\,$
for $j,k=1,\dots ,n\,$. For $p\in M\,$, each fiber ${\mathbf {R} }_{p}\,$ of the principal ${\mathrm {Sp} }(n,{\mathbb {R} })$-bundle $\pi _{\mathbf {R} }\colon {\mathbf {R} }\to M\,$ is the set of all symplectic bases of $T_{p}(M)\,$.
The symplectic frame bundle $\pi _{\mathbf {R} }\colon {\mathbf {R} }\to M\,$, a subbundle of the tangent frame bundle $\mathrm {F} M\,$, is an example of reductive G-structure on the manifold $M\,$.
See also
• Metaplectic group
• Metaplectic structure
• Symplectic basis
• Symplectic structure
• Symplectic geometry
• Symplectic group
• Symplectic spinor bundle
Notes
1. Habermann, Katharina; Habermann, Lutz (2006), Introduction to Symplectic Dirac Operators, Springer-Verlag, p. 23, ISBN 978-3-540-33420-0
Books
• Habermann, Katharina; Habermann, Lutz (2006), Introduction to Symplectic Dirac Operators, Springer-Verlag, ISBN 978-3-540-33420-0
• da Silva, A.C., Lectures on Symplectic Geometry, Springer (2001). ISBN 3-540-42195-5.
• Maurice de Gosson: Symplectic Geometry and Quantum Mechanics (2006) Birkhäuser Verlag, Basel ISBN 3-7643-7574-4.
| Wikipedia |
Symplectic integrator
In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations. They are widely used in nonlinear dynamics, molecular dynamics, discrete element methods, accelerator physics, plasma physics, quantum physics, and celestial mechanics.
Introduction
Symplectic integrators are designed for the numerical solution of Hamilton's equations, which read
${\dot {p}}=-{\frac {\partial H}{\partial q}}\quad {\mbox{and}}\quad {\dot {q}}={\frac {\partial H}{\partial p}},$
where $q$ denotes the position coordinates, $p$ the momentum coordinates, and $H$ is the Hamiltonian. The set of position and momentum coordinates $(q,p)$ are called canonical coordinates. (See Hamiltonian mechanics for more background.)
The time evolution of Hamilton's equations is a symplectomorphism, meaning that it conserves the symplectic 2-form $dp\wedge dq$. A numerical scheme is a symplectic integrator if it also conserves this 2-form.
Symplectic integrators also might possess, as a conserved quantity, a Hamiltonian which is slightly perturbed from the original one (only true for a small class of simple cases). By virtue of these advantages, the SI scheme has been widely applied to the calculations of long-term evolution of chaotic Hamiltonian systems ranging from the Kepler problem to the classical and semi-classical simulations in molecular dynamics.
Most of the usual numerical methods, like the primitive Euler scheme and the classical Runge–Kutta scheme, are not symplectic integrators.
Methods for constructing symplectic algorithms
Splitting methods for separable Hamiltonians
A widely used class of symplectic integrators is formed by the splitting methods.
Assume that the Hamiltonian is separable, meaning that it can be written in the form
$H(p,q)=T(p)+V(q).$
(1)
This happens frequently in Hamiltonian mechanics, with T being the kinetic energy and V the potential energy.
For the notational simplicity, let us introduce the symbol $z=(q,p)$ to denote the canonical coordinates including both the position and momentum coordinates. Then, the set of the Hamilton's equations given in the introduction can be expressed in a single expression as
${\dot {z}}=\{z,H(z)\},$
(2)
where $\{\cdot ,\cdot \}$ is a Poisson bracket. Furthermore, by introducing an operator $D_{H}\cdot =\{\cdot ,H\}$, which returns a Poisson bracket of the operand with the Hamiltonian, the expression of the Hamilton's equation can be further simplified to
${\dot {z}}=D_{H}z.$
The formal solution of this set of equations is given as a matrix exponential:
$z(\tau )=\exp(\tau D_{H})z(0).$
(3)
Note the positivity of $\tau D_{H}$ in the matrix exponential.
When the Hamiltonian has the form of equation (1), the solution (3) is equivalent to
$z(\tau )=\exp[\tau (D_{T}+D_{V})]z(0).$
(4)
The SI scheme approximates the time-evolution operator $\exp[\tau (D_{T}+D_{V})]$ in the formal solution (4) by a product of operators as
${\begin{aligned}\exp[\tau (D_{T}+D_{V})]&=\prod _{i=1}^{k}\exp(c_{i}\tau D_{T})\exp(d_{i}\tau D_{V})+O(\tau ^{k+1})\\&=\exp(c_{1}\tau D_{T})\exp(d_{1}\tau D_{V})\dots \exp(c_{k}\tau D_{T})\exp(d_{k}\tau D_{V})+O(\tau ^{k+1}),\end{aligned}}$
(5)
where $c_{i}$ and $d_{i}$ are real numbers, $k$ is an integer, which is called the order of the integrator, and where $ \sum _{i=1}^{k}c_{i}=\sum _{i=1}^{k}d_{i}=1$. Note that each of the operators $\exp(c_{i}\tau D_{T})$ and $\exp(d_{i}\tau D_{V})$ provides a symplectic map, so their product appearing in the right-hand side of (5) also constitutes a symplectic map.
Since $D_{T}^{2}z=\{\{z,T\},T\}=\{({\dot {q}},0),T\}=(0,0)$ for all $z$, we can conclude that
$D_{T}^{2}=0.$
(6)
By using a Taylor series, $\exp(aD_{T})$ can be expressed as
$\exp(aD_{T})=\sum _{n=0}^{\infty }{\frac {(aD_{T})^{n}}{n!}},$
(7)
where $a$ is an arbitrary real number. Combining (6) and (7), and by using the same reasoning for $D_{V}$ as we have used for $D_{T}$, we get
${\begin{cases}\exp(aD_{T})&=1+aD_{T},\\\exp(aD_{V})&=1+aD_{V}.\end{cases}}$
(8)
In concrete terms, $\exp(c_{i}\tau D_{T})$ gives the mapping
${\begin{pmatrix}q\\p\end{pmatrix}}\mapsto {\begin{pmatrix}q+\tau c_{i}{\frac {\partial T}{\partial p}}(p)\\p\end{pmatrix}},$
and $\exp(d_{i}\tau D_{V})$ gives
${\begin{pmatrix}q\\p\end{pmatrix}}\mapsto {\begin{pmatrix}q\\p-\tau d_{i}{\frac {\partial V}{\partial q}}(q)\\\end{pmatrix}}.$
Note that both of these maps are practically computable.
Examples
The simplified form of the equations (in executed order) are:
$q_{i+1}=q_{i}+c_{i}{\frac {p_{i+1}}{m}}t$
$p_{i+1}=p_{i}+d_{i}F(q_{i})t$
After converting into Lagrangian coordinates:
$x_{i+1}=x_{i}+c_{i}v_{i+1}t$
$v_{i+1}=v_{i}+d_{i}a(x_{i})t$
Where $F(x)$ is the force vector at $x$, $a(x)$ is the acceleration vector at $x$, and $m$ is the scalar quantity of mass.
Several symplectic integrators are given below. An illustrative way to use them is to consider a particle with position $q$ and momentum $p$.
To apply a timestep with values $c_{1,2,3},d_{1,2,3}$ to the particle, carry out the following steps:
Iteratively:
• Update the position $i$ of the particle by adding to it its (previously updated) velocity $i$ multiplied by $c_{i}$
• Update the velocity $i$ of the particle by adding to it its acceleration (at updated position) multiplied by $d_{i}$
A first-order example
The symplectic Euler method is the first-order integrator with $k=1$ and coefficients
$c_{1}=d_{1}=1.$
Note that the algorithm above does not work if time-reversibility is needed. The algorithm has to be implemented in two parts, one for positive time steps, one for negative time steps.
A second-order example
The Verlet method is the second-order integrator with $k=2$ and coefficients
$c_{1}=0,\qquad c_{2}=1,\qquad d_{1}=d_{2}={\tfrac {1}{2}}.$
Since $c_{1}=0$, the algorithm above is symmetric in time. There are 3 steps to the algorithm, and step 1 and 3 are exactly the same, so the positive time version can be used for negative time.
A third-order example
A third-order symplectic integrator (with $k=3$) was discovered by Ronald Ruth in 1983.[1] One of the many solutions is given by
${\begin{aligned}c_{1}&=1,&c_{2}&=-{\tfrac {2}{3}},&c_{3}&={\tfrac {2}{3}},\\d_{1}&=-{\tfrac {1}{24}},&d_{2}&={\tfrac {3}{4}},&d_{3}&={\tfrac {7}{24}}.\end{aligned}}$
A fourth-order example
A fourth-order integrator (with $k=4$) was also discovered by Ruth in 1983 and distributed privately to the particle-accelerator community at that time. This was described in a lively review article by Forest.[2] This fourth-order integrator was published in 1990 by Forest and Ruth and also independently discovered by two other groups around that same time.[3][4][5]
${\begin{aligned}c_{1}&=c_{4}={\frac {1}{2(2-2^{1/3})}},&c_{2}&=c_{3}={\frac {1-2^{1/3}}{2(2-2^{1/3})}},\\d_{1}&=d_{3}={\frac {1}{2-2^{1/3}}},&d_{2}&=-{\frac {2^{1/3}}{2-2^{1/3}}},\quad d_{4}=0.\end{aligned}}$
To determine these coefficients, the Baker–Campbell–Hausdorff formula can be used. Yoshida, in particular, gives an elegant derivation of coefficients for higher-order integrators. Later on, Blanes and Moan[6] further developed partitioned Runge–Kutta methods for the integration of systems with separable Hamiltonians with very small error constants.
Splitting methods for general nonseparable Hamiltonians
General nonseparable Hamiltonians can also be explicitly and symplectically integrated.
To do so, Tao introduced a restraint that binds two copies of phase space together to enable an explicit splitting of such systems.[7] The idea is, instead of $H(Q,P)$, one simulates ${\bar {H}}(q,p,x,y)=H(q,y)+H(x,p)+\omega \left(\left\|q-x\right\|_{2}^{2}/2+\left\|p-y\right\|_{2}^{2}/2\right)$, whose solution agrees with that of $H(Q,P)$ in the sense that $q(t)=x(t)=Q(t),p(t)=y(t)=P(t)$.
The new Hamiltonian is advantageous for explicit symplectic integration, because it can be split into the sum of three sub-Hamiltonians, $H_{A}=H(q,y)$, $H_{B}=H(x,p)$, and $H_{C}=\omega \left(\left\|q-x\right\|_{2}^{2}/2+\left\|p-y\right\|_{2}^{2}/2\right)$. Exact solutions of all three sub-Hamiltonians can be explicitly obtained: both $H_{A},H_{B}$ solutions correspond to shifts of mismatched position and momentum, and $H_{C}$ corresponds to a linear transformation. To symplectically simulate the system, one simply composes these solution maps.
Applications
In plasma physics
In recent decades symplectic integrator in plasma physics has become an active research topic,[8] because straightforward applications of the standard symplectic methods do not suit the need of large-scale plasma simulations enabled by the peta- to exa-scale computing hardware. Special symplectic algorithms need to be customarily designed, tapping into the special structures of physics problem under investigation. One such example is the charged particle dynamics in an electromagnetic field. With the canonical symplectic structure, the Hamiltonian of the dynamics is
$H({\boldsymbol {p}},{\boldsymbol {x}})={\frac {1}{2}}\left({\boldsymbol {p}}-{\boldsymbol {A}}\right)^{2}+\phi ,$
whose $ {\boldsymbol {p}}$-dependence and $ {\boldsymbol {x}}$-dependence are not separable, and standard explicit symplectic methods do not apply. For large-scale simulations on massively parallel clusters, however, explicit methods are preferred.
To overcome this difficulty, we can explore the specific way that the $ {\boldsymbol {p}}$-dependence and $ {\boldsymbol {x}}$-dependence are entangled in this Hamiltonian, and try to design a symplectic algorithm just for this or this type of problem. First, we note that the $ {\boldsymbol {p}}$-dependence is quadratic, therefore the first order symplectic Euler method implicit in $ {\boldsymbol {p}}$ is actually explicit. This is what is used in the canonical symplectic particle-in-cell (PIC) algorithm.[9] To build high order explicit methods, we further note that the $ {\boldsymbol {p}}$-dependence and $ {\boldsymbol {x}}$-dependence in this $ H({\boldsymbol {p}},{\boldsymbol {x}})$ are product-separable, 2nd and 3rd order explicit symplectic algorithms can be constructed using generating functions,[10] and arbitrarily high-order explicit symplectic integrators for time-dependent electromagnetic fields can also be constructed using Runge-Kutta techniques.[11]
A more elegant and versatile alternative is to look at the following non-canonical symplectic structure of the problem,
$i_{({\dot {\boldsymbol {x}}},{\dot {\boldsymbol {v}}})}\Omega =-dH,\ \ \ \Omega =d({\boldsymbol {v}}+{\boldsymbol {A}})\wedge d{\boldsymbol {x}},\ \ \ H={\frac {1}{2}}{\boldsymbol {v}}^{2}+\phi .$
Here $ \Omega $ is a non-constant non-canonical symplectic form. General symplectic integrator for non-constant non-canonical symplectic structure, explicit or implicit, is not known to exist. However, for this specific problem, a family of high-order explicit non-canonical symplectic integrators can be constructed using the He splitting method.[12] Splitting $ H$ into 4 parts,
${\begin{aligned}H&=H_{x}+H_{y}+H_{z}+H_{\phi },\\H_{x}&={\frac {1}{2}}v_{x}^{2},\ \ H_{y}={\frac {1}{2}}v_{y}^{2},\ \ H_{z}={\frac {1}{2}}v_{z}^{2},\ \ H_{\phi }=\phi ,\end{aligned}}$
we find serendipitously that for each subsystem, e.g.,
$i_{({\dot {\boldsymbol {x}}},{\dot {\boldsymbol {v}}})}\Omega =-dH_{x}$
and
$i_{({\dot {\boldsymbol {x}}},{\dot {\boldsymbol {v}}})}\Omega =-dH_{\phi },$
the solution map can be written down explicitly and calculated exactly. Then explicit high-order non-canonical symplectic algorithms can be constructed using different compositions. Let $ \Theta _{x},\Theta _{y},\Theta _{z}$ and $ \Theta _{\phi }$ denote the exact solution maps for the 4 subsystems. A 1st-order symplectic scheme is
${\begin{aligned}\Theta _{1}\left(\Delta \tau \right)=\Theta _{x}\left(\Delta \tau \right)\Theta _{y}\left(\Delta \tau \right)\Theta _{z}\left(\Delta \tau \right)\Theta _{\phi }\left(\Delta \tau \right)~.\end{aligned}}$
A symmetric 2nd-order symplectic scheme is,
${\begin{aligned}\Theta _{2}\left(\Delta \tau \right)&=\Theta _{x}\left(\Delta \tau /2\right)\Theta _{y}\left(\Delta \tau /2\right)\Theta _{z}\left(\Delta \tau /2\right)\Theta _{\phi }\left(\Delta \tau \right)\\&\Theta _{z}\left(\Delta t/2\right)\Theta _{y}\left(\Delta t/2\right)\Theta _{x}\left(\Delta t/2\right)\!,\end{aligned}}$
which is a customarily modified Strang splitting. A $ 2(l+1)$-th order scheme can be constructed from a $ 2l$-th order scheme using the method of triple jump,
${\begin{aligned}\Theta _{2(l+1)}(\Delta \tau )&=\Theta _{2l}(\alpha _{l}\Delta \tau )\Theta _{2l}(\beta _{l}\Delta \tau )\Theta _{2l}(\alpha _{l}\Delta \tau )~,\\\alpha _{l}&=1/(2-2^{1/(2l+1)})~,\\\beta _{l}&=1-2\alpha _{l}~.\end{aligned}}$
The He splitting method is one of key techniques used in the structure-preserving geometric particle-in-cell (PIC) algorithms.[13][14][15][16]
See also
• Energy drift
• Multisymplectic integrator
• Variational integrator
• Verlet integration
References
1. Ruth, Ronald D. (August 1983). "A Canonical Integration Technique". IEEE Transactions on Nuclear Science. NS-30 (4): 2669–2671. Bibcode:1983ITNS...30.2669R. doi:10.1109/TNS.1983.4332919. S2CID 5911358.
2. Forest, Etienne (2006). "Geometric Integration for Particle Accelerators". J. Phys. A: Math. Gen. 39 (19): 5321–5377. Bibcode:2006JPhA...39.5321F. doi:10.1088/0305-4470/39/19/S03.
3. Forest, E.; Ruth, Ronald D. (1990). "Fourth-order symplectic integration" (PDF). Physica D. 43: 105–117. Bibcode:1990PhyD...43..105F. doi:10.1016/0167-2789(90)90019-L.
4. Yoshida, H. (1990). "Construction of higher order symplectic integrators". Phys. Lett. A. 150 (5–7): 262–268. Bibcode:1990PhLA..150..262Y. doi:10.1016/0375-9601(90)90092-3.
5. Candy, J.; Rozmus, W (1991). "A Symplectic Integration Algorithm for Separable Hamiltonian Functions". J. Comput. Phys. 92 (1): 230–256. Bibcode:1991JCoPh..92..230C. doi:10.1016/0021-9991(91)90299-Z.
6. Blanes, S.; Moan, P. C. (May 2002). "Practical symplectic partitioned Runge–Kutta and Runge–Kutta–Nyström methods". Journal of Computational and Applied Mathematics. 142 (2): 313–330. Bibcode:2002JCoAM.142..313B. doi:10.1016/S0377-0427(01)00492-7.
7. Tao, Molei (2016). "Explicit symplectic approximation of nonseparable Hamiltonians: Algorithm and long time performance". Phys. Rev. E. 94 (4): 043303. arXiv:1609.02212. Bibcode:2016PhRvE..94d3303T. doi:10.1103/PhysRevE.94.043303. PMID 27841574. S2CID 41468935.
8. Qin, H.; Guan,X. (2008). "A Variational Symplectic Integrator for the Guiding Center Motion of Charged Particles for Long Time Simulations in General Magnetic Fields" (PDF). Physical Review Letters. 100 (3): 035006. doi:10.1103/PhysRevLett.100.035006. PMID 18232993.
9. Qin, H.; Liu, J.; Xiao,J. (2016). "Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov–Maxwell equations". Nuclear Fusion. 56 (1): 014001. arXiv:1503.08334. Bibcode:2016NucFu..56a4001Q. doi:10.1088/0029-5515/56/1/014001. S2CID 29190330.
10. Zhang, R.; Qin, H.; Tang, Y. (2016). "Explicit symplectic algorithms based on generating functions for charged particle dynamics". Physical Review E. 94 (1): 013205. arXiv:1604.02787. Bibcode:2016PhRvE..94a3205Z. doi:10.1103/PhysRevE.94.013205. PMID 27575228. S2CID 2166879.
11. Tao, M. (2016). "Explicit high-order symplectic integrators for charged particles in general electromagnetic fields". Journal of Computational Physics. 327: 245. arXiv:1605.01458. Bibcode:2016JCoPh.327..245T. doi:10.1016/j.jcp.2016.09.047. S2CID 31262651.
12. He, Y.; Qin, H.; Sun, Y. (2015). "Hamiltonian integration methods for Vlasov-Maxwell equations". Physics of Plasmas. 22: 124503. arXiv:1505.06076. doi:10.1063/1.4938034. S2CID 118560512.
13. Xiao, J.; Qin, H.; Liu, J. (2015). "Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems". Physics of Plasmas. 22 (11): 112504. arXiv:1510.06972. Bibcode:2015PhPl...22k2504X. doi:10.1063/1.4935904. S2CID 12893515.
14. Kraus, M; Kormann, K; Morrison, P.; Sonnendrucker, E (2017). "GEMPIC: geometric electromagnetic particle-in-cell methods". Journal of Plasma Physics. 83 (4): 905830401. arXiv:1609.03053. Bibcode:2017JPlPh..83d9001K. doi:10.1017/S002237781700040X. S2CID 8207132.
15. Xiao, J.; Qin, H.; Liu, J. (2018). "Structure-preserving geometric particle-in-cell methods for Vlasov-Maxwell systems". Plasma Science and Technology. 20 (11): 110501. arXiv:1804.08823. Bibcode:2018PlST...20k0501X. doi:10.1088/2058-6272/aac3d1. S2CID 250801157.
16. Glasser, A.; Qin, H. (2022). "A gauge-compatible Hamiltonian splitting algorithm for particle-in-cell simulations using finite element exterior calculus". Journal of Plasma Physics. 88 (2): 835880202. arXiv:2110.10346. Bibcode:2022JPlPh..88b8302G. doi:10.1017/S0022377822000290. S2CID 239049433.
• Leimkuhler, Ben; Reich, Sebastian (2005). Simulating Hamiltonian Dynamics. Cambridge University Press. ISBN 0-521-77290-7.
• Hairer, Ernst; Lubich, Christian; Wanner, Gerhard (2006). Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations (2 ed.). Springer. ISBN 978-3-540-30663-4.
• Kang, Feng; Qin, Mengzhao (2010). Symplectic geometric algorithms for Hamiltonian systems. Springer.
Numerical methods for integration
First-order methods
• Euler method
• Backward Euler
• Semi-implicit Euler
• Exponential Euler
Second-order methods
• Verlet integration
• Velocity Verlet
• Trapezoidal rule
• Beeman's algorithm
• Midpoint method
• Heun's method
• Newmark-beta method
• Leapfrog integration
Higher-order methods
• Exponential integrator
• Runge–Kutta methods
• List of Runge–Kutta methods
• Linear multistep method
• General linear methods
• Backward differentiation formula
• Yoshida
• Gauss–Legendre method
Theory
• Symplectic integrator
| Wikipedia |
Tautological one-form
In mathematics, the tautological one-form is a special 1-form defined on the cotangent bundle $T^{*}Q$ of a manifold $Q.$ In physics, it is used to create a correspondence between the velocity of a point in a mechanical system and its momentum, thus providing a bridge between Lagrangian mechanics and Hamiltonian mechanics (on the manifold $Q$).
Not to be confused with Symplectic manifold § Definition, or Symplectic vector space.
The exterior derivative of this form defines a symplectic form giving $T^{*}Q$ the structure of a symplectic manifold. The tautological one-form plays an important role in relating the formalism of Hamiltonian mechanics and Lagrangian mechanics. The tautological one-form is sometimes also called the Liouville one-form, the Poincaré one-form, the canonical one-form, or the symplectic potential. A similar object is the canonical vector field on the tangent bundle.
To define the tautological one-form, select a coordinate chart $U$ on $T^{*}Q$ and a canonical coordinate system on $U.$ Pick an arbitrary point $m\in T^{*}Q.$ By definition of cotangent bundle, $m=(q,p),$ where $q\in Q$ and $p\in T_{q}^{*}Q.$ The tautological one-form $\theta _{m}:T_{m}T^{*}Q\to \mathbb {R} $ is given by
$\theta _{m}=\sum _{i=1}^{n}p_{i}dq^{i},$
with $n=\mathop {\text{dim}} Q$ and $(p_{1},\ldots ,p_{n})\in U\subseteq \mathbb {R} ^{n}$ being the coordinate representation of $p.$
Any coordinates on $T^{*}Q$ that preserve this definition, up to a total differential (exact form), may be called canonical coordinates; transformations between different canonical coordinate systems are known as canonical transformations.
The canonical symplectic form, also known as the Poincaré two-form, is given by
$\omega =-d\theta =\sum _{i}dq^{i}\wedge dp_{i}$
The extension of this concept to general fibre bundles is known as the solder form. By convention, one uses the phrase "canonical form" whenever the form has a unique, canonical definition, and one uses the term "solder form", whenever an arbitrary choice has to be made. In algebraic geometry and complex geometry the term "canonical" is discouraged, due to confusion with the canonical class, and the term "tautological" is preferred, as in tautological bundle.
Coordinate-free definition
The tautological 1-form can also be defined rather abstractly as a form on phase space. Let $Q$ be a manifold and $M=T^{*}Q$ be the cotangent bundle or phase space. Let
$\pi :M\to Q$
be the canonical fiber bundle projection, and let
$\mathrm {d} \pi :TM\to TQ$
be the induced tangent map. Let $m$ be a point on $M.$ Since $M$ is the cotangent bundle, we can understand $m$ to be a map of the tangent space at $q=\pi (m)$:
$m:T_{q}Q\to \mathbb {R} .$
That is, we have that $m$ is in the fiber of $q.$ The tautological one-form $\theta _{m}$ at point $m$ is then defined to be
$\theta _{m}=m\circ \mathrm {d} \pi _{m}.$
It is a linear map
$\theta _{m}:T_{m}M\to \mathbb {R} $
and so
$\theta :M\to T^{*}M.$
Symplectic potential
The symplectic potential is generally defined a bit more freely, and also only defined locally: it is any one-form $\phi $ such that $\omega =-d\phi $; in effect, symplectic potentials differ from the canonical 1-form by a closed form.
Properties
The tautological one-form is the unique one-form that "cancels" pullback. That is, let $\beta $ be a 1-form on $Q.$ $\beta $ is a section $\beta :Q\to T^{*}Q.$ For an arbitrary 1-form $\sigma $ on $T^{*}Q,$ the pullback of $\sigma $ by $\beta $ is, by definition, $\beta ^{*}\sigma :=\sigma \circ \beta _{*}.$ :=\sigma \circ \beta _{*}.} Here, $\beta _{*}:TQ\to TT^{*}Q$ is the pushforward of $\beta .$ Like $\beta ,$ $\beta ^{*}\sigma $ is a 1-form on $Q.$ The tautological one-form $\theta $ is the only form with the property that $\beta ^{*}\theta =\beta ,$ for every 1-form $\beta $ on $Q.$
Proof.
For a chart $(\{q^{i}\}_{i=1}^{n},U)$ on $Q$ (where $U\subseteq \mathbb {R} ^{n}),$ let $\{p_{i},q^{i}\}_{i=1}^{n}$ be the coordinates on $T^{*}Q,$ where the fiber coordinates $\{p_{i}\}_{i=1}^{n}$ are associated with the linear basis $\{dq^{i}\}_{i=1}^{n}.$ By assumption, for every ${\mathbf {q} }=(q^{1},\ldots ,q^{n})\in U,$
$\beta ({\mathbf {q} })=\sum _{i=1}^{n}\beta _{i}(\mathbf {q} )\,dq^{i},$
or
$\mathbf {q} =(q^{1},\ldots ,q^{n})\ {\stackrel {\beta }{\to }}\ (\underbrace {q^{1},\ldots ,q^{n}} _{\mathbf {q} },\underbrace {\beta _{1}(\mathbf {q} ),\ldots ,\beta _{n}(\mathbf {q} } _{\mathbf {p} })).$
It follows that
$\beta _{*}\left({\frac {\partial }{\partial q^{i}}}{\Biggl |}_{\mathbf {q} }\right)={\frac {\partial }{\partial q^{i}}}{\Biggl |}_{\beta (\mathbf {q} )}+\sum _{j=1}^{n}{\frac {\partial \beta _{j}}{\partial q^{i}}}{\Biggl |}_{\mathbf {q} }\cdot {\frac {\partial }{\partial p_{j}}}{\Biggl |}_{\beta (\mathbf {q} )}$
which implies that
$(\beta ^{*}\,dq^{i})\left({\partial /\partial q^{j}}\right)_{\mathbf {q} }=dq^{i}\left[\beta _{*}\left({\partial /\partial q^{j}}\right)_{\mathbf {q} }\right]=\delta _{ij}.$
Step 1. We have
${\begin{aligned}(\beta ^{*}\theta )\left(\partial /\partial q^{i}\right)_{\mathbf {q} }&=\theta \left(\beta _{*}\left(\partial /\partial q^{i}\right)_{\mathbf {q} }\right)=\left(\sum _{j=1}^{n}p_{j}dq^{j}\right)\left(\beta _{*}\left(\partial /\partial q^{i}\right)_{\mathbf {q} }\right)\\&=\beta _{i}(\mathbf {q} )=\beta \left(\partial /\partial q^{i}\right)_{\mathbf {q} }.\end{aligned}}$
Step 1'. For completeness, we now give a coordinate-free proof that $\beta ^{*}\theta =\beta ,$ for any 1-form $\beta .$
Observe that, intuitively speaking, for every $q\in Q$ and $p\in T_{q}^{*}Q,$ the linear map $d\pi _{(q,p)}$ in the definition of $\theta $ projects the tangent space $T_{(q,p)}T^{*}Q$ onto its subspace $T_{q}Q.$ As a consequence, for every $q\in Q$ and $v\in T_{q}Q,$
$d\pi _{\beta (q)}(\beta _{*q}v)=v,$
where $\beta _{*q}$ is the instance of $\beta _{*}$ at the point $q\in Q,$ that is,
$\beta _{*q}:T_{q}Q\to T_{\beta (q)}T^{*}Q.$
Applying the coordinate-free definition of $\theta $ to $\theta _{\beta (q)},$ obtain
$(\beta ^{*}\theta )_{q}v=\theta _{\beta (q)}(\beta _{*q}v)=\beta (q)(d\pi _{\beta (q)}(\beta _{*q}v))=\beta (q)v.$
Step 2. It is enough to show that $\alpha =0$ if $\beta ^{*}\alpha =0,$ for every one-form $\beta .$ Let
$\alpha =\sum _{i=1}^{n}\alpha _{q^{i}}(\mathbf {p} ,\mathbf {q} )\,dq^{i}+\sum _{i=1}^{n}\alpha _{p_{i}}(\mathbf {p} ,\mathbf {q} )\,dp_{i},$
where $\alpha _{p^{i}},\alpha _{q^{i}}\in C^{\infty }(\mathbb {R} ^{n}\times U,\mathbb {R} ).$
Substituting $v=\left(\partial /\partial q_{i}\right)_{\mathbf {q} }$ into the identity $\alpha (\beta _{*}v)=0$ obtain
$\alpha (\partial /\partial q^{i})_{\beta (\mathbf {q} )}+\sum _{j=1}^{n}(\partial \beta _{j}/\partial q^{i})_{\mathbf {q} }\cdot \alpha (\partial /\partial p_{j})_{\beta (\mathbf {q} )}=0,$
or equivalently, for any choice of $n$ functions $p_{i}=\beta _{i}(\mathbf {q} ),$
$\alpha _{q^{i}}(\mathbf {p} ,\mathbf {q} )+\sum _{j=1}^{n}\partial p_{j}/\partial q^{i}\cdot \alpha _{p_{j}}(\mathbf {p} ,\mathbf {q} )=0.$
Let $\beta =\sum _{j=1}^{n}c_{j}dq^{j},$ where $c_{j}={\text{const}}.$ In this case, $\beta _{j}=c_{j}.$ For every $\mathbf {q} \in U$ and $c_{j}\in \mathbb {R} ,$
$\alpha _{q^{i}}(\mathbf {p} ,\mathbf {q} ){\bigl |}_{j=1\ldots n}^{p_{j}=c_{j}}=0.$
This shows that $\alpha _{q^{i}}(\mathbf {p} ,\mathbf {q} )=0$ on $\mathbb {R} ^{n}\times U,$ and the identity
$\sum _{j=1}^{n}\partial p_{j}/\partial q^{i}\cdot \alpha _{p_{j}}(\mathbf {p} ,\mathbf {q} )=0$
must hold for an arbitrary choice of functions $p_{i}=\beta _{i}(\mathbf {q} ).$ If $\beta =\sum _{j=1}^{n}c_{j}q^{j}dq^{j}$ (with ${}^{j}$ indicating superscript) then $\beta _{j}=c_{j}q^{j},$ and the identity becomes
$\alpha _{p_{i}}(\mathbf {p} ,\mathbf {q} ){\bigl |}_{j=1\ldots n}^{p_{j}=c_{j}q^{j}}=0,$
for every $\mathbf {q} \in U$ and $c_{j}\in \mathbb {R} .$ Since $c_{j}=p^{j}/q^{j},$ we see that $\alpha _{p_{i}}(\mathbf {p} ,\mathbf {q} )=0,$ as long as $q^{j}\neq 0$ for all $j.$ On the other hand, the function $\alpha _{p_{i}}$ is continuous, and hence $\alpha _{p_{i}}(\mathbf {p} ,\mathbf {q} )=0$ on $\mathbb {R} ^{n}\times U.$
So, by the commutation between the pull-back and the exterior derivative,
$\beta ^{*}\omega =-\beta ^{*}d\theta =-d(\beta ^{*}\theta )=-d\beta .$
Action
If $H$ is a Hamiltonian on the cotangent bundle and $X_{H}$ is its Hamiltonian vector field, then the corresponding action $S$ is given by
$S=\theta (X_{H}).$
In more prosaic terms, the Hamiltonian flow represents the classical trajectory of a mechanical system obeying the Hamilton-Jacobi equations of motion. The Hamiltonian flow is the integral of the Hamiltonian vector field, and so one writes, using traditional notation for action-angle variables:
$S(E)=\sum _{i}\oint p_{i}\,dq^{i}$
with the integral understood to be taken over the manifold defined by holding the energy $E$ constant: $H=E={\text{const}}.$
On Riemannian and Pseudo-Riemannian Manifolds
If the manifold $Q$ has a Riemannian or pseudo-Riemannian metric $g,$ then corresponding definitions can be made in terms of generalized coordinates. Specifically, if we take the metric to be a map
$g:TQ\to T^{*}Q,$
then define
$\Theta =g^{*}\theta $
and
$\Omega =-d\Theta =g^{*}\omega $
In generalized coordinates $(q^{1},\ldots ,q^{n},{\dot {q}}^{1},\ldots ,{\dot {q}}^{n})$ on $TQ,$ one has
$\Theta =\sum _{ij}g_{ij}{\dot {q}}^{i}dq^{j}$
and
$\Omega =\sum _{ij}g_{ij}\;dq^{i}\wedge d{\dot {q}}^{j}+\sum _{ijk}{\frac {\partial g_{ij}}{\partial q^{k}}}\;{\dot {q}}^{i}\,dq^{j}\wedge dq^{k}$
The metric allows one to define a unit-radius sphere in $T^{*}Q.$ The canonical one-form restricted to this sphere forms a contact structure; the contact structure may be used to generate the geodesic flow for this metric.
References
• Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, (1978) Benjamin-Cummings, London ISBN 0-8053-0102-X See section 3.2.
Manifolds (Glossary)
Basic concepts
• Topological manifold
• Atlas
• Differentiable/Smooth manifold
• Differential structure
• Smooth atlas
• Submanifold
• Riemannian manifold
• Smooth map
• Submersion
• Pushforward
• Tangent space
• Differential form
• Vector field
Main results (list)
• Atiyah–Singer index
• Darboux's
• De Rham's
• Frobenius
• Generalized Stokes
• Hopf–Rinow
• Noether's
• Sard's
• Whitney embedding
Maps
• Curve
• Diffeomorphism
• Local
• Geodesic
• Exponential map
• in Lie theory
• Foliation
• Immersion
• Integral curve
• Lie derivative
• Section
• Submersion
Types of
manifolds
• Closed
• (Almost) Complex
• (Almost) Contact
• Fibered
• Finsler
• Flat
• G-structure
• Hadamard
• Hermitian
• Hyperbolic
• Kähler
• Kenmotsu
• Lie group
• Lie algebra
• Manifold with boundary
• Oriented
• Parallelizable
• Poisson
• Prime
• Quaternionic
• Hypercomplex
• (Pseudo−, Sub−) Riemannian
• Rizza
• (Almost) Symplectic
• Tame
Tensors
Vectors
• Distribution
• Lie bracket
• Pushforward
• Tangent space
• bundle
• Torsion
• Vector field
• Vector flow
Covectors
• Closed/Exact
• Covariant derivative
• Cotangent space
• bundle
• De Rham cohomology
• Differential form
• Vector-valued
• Exterior derivative
• Interior product
• Pullback
• Ricci curvature
• flow
• Riemann curvature tensor
• Tensor field
• density
• Volume form
• Wedge product
Bundles
• Adjoint
• Affine
• Associated
• Cotangent
• Dual
• Fiber
• (Co) Fibration
• Jet
• Lie algebra
• (Stable) Normal
• Principal
• Spinor
• Subbundle
• Tangent
• Tensor
• Vector
Connections
• Affine
• Cartan
• Ehresmann
• Form
• Generalized
• Koszul
• Levi-Civita
• Principal
• Vector
• Parallel transport
Related
• Classification of manifolds
• Gauge theory
• History
• Morse theory
• Moving frame
• Singularity theory
Generalizations
• Banach manifold
• Diffeology
• Diffiety
• Fréchet manifold
• K-theory
• Orbifold
• Secondary calculus
• over commutative algebras
• Sheaf
• Stratifold
• Supermanifold
• Stratified space
| Wikipedia |
Symplectic vector space
In mathematics, a symplectic vector space is a vector space V over a field F (for example the real numbers R) equipped with a symplectic bilinear form.
A symplectic bilinear form is a mapping ω : V × V → F that is
Bilinear
Linear in each argument separately;
Alternating
ω(v, v) = 0 holds for all v ∈ V; and
Non-degenerate
ω(u, v) = 0 for all v ∈ V implies that u = 0.
If the underlying field has characteristic not 2, alternation is equivalent to skew-symmetry. If the characteristic is 2, the skew-symmetry is implied by, but does not imply alternation. In this case every symplectic form is a symmetric form, but not vice versa.
Working in a fixed basis, ω can be represented by a matrix. The conditions above are equivalent to this matrix being skew-symmetric, nonsingular, and hollow (all diagonal entries are zero). This should not be confused with a symplectic matrix, which represents a symplectic transformation of the space. If V is finite-dimensional, then its dimension must necessarily be even since every skew-symmetric, hollow matrix of odd size has determinant zero. Notice that the condition that the matrix be hollow is not redundant if the characteristic of the field is 2. A symplectic form behaves quite differently from a symmetric form, for example, the scalar product on Euclidean vector spaces.
Standard symplectic space
Further information: Symplectic matrix § Symplectic transformations
The standard symplectic space is R2n with the symplectic form given by a nonsingular, skew-symmetric matrix. Typically ω is chosen to be the block matrix
$\omega ={\begin{bmatrix}0&I_{n}\\-I_{n}&0\end{bmatrix}}$
where In is the n × n identity matrix. In terms of basis vectors (x1, ..., xn, y1, ..., yn):
${\begin{aligned}\omega (x_{i},y_{j})=-\omega (y_{j},x_{i})&=\delta _{ij},\\\omega (x_{i},x_{j})=\omega (y_{i},y_{j})&=0.\end{aligned}}$
A modified version of the Gram–Schmidt process shows that any finite-dimensional symplectic vector space has a basis such that ω takes this form, often called a Darboux basis or symplectic basis.
Sketch of process:
Start with an arbitrary basis $v_{1},...,v_{n}$, and represent the dual of each basis vector by the dual basis: $\omega (v_{i},\cdot )=\sum _{j}\omega (v_{i},v_{j})v_{j}^{*}$. This gives us a $n\times n$ matrix with entries $\omega (v_{i},v_{j})$. Solve for its null space. Now for any $(\lambda _{1},...,\lambda _{n})$ in the null space, we have $\sum _{i}\omega (v_{i},\cdot )=0$, so the null space gives us the degenerate subspace $V_{0}$.
Now arbitrarily pick a complementary $W$ such that $V=V_{0}\oplus W$, and let $w_{1},...,w_{m}$ be a basis of $W$. Since $\omega (w_{1},\cdot )\neq 0$, and $\omega (w_{1},w_{1})=0$, WLOG $\omega (w_{1},w_{2})\neq 0$. Now scale $w_{2}$ so that $\omega (w_{1},w_{2})=1$. Then define $w'=w-\omega (w,w_{2})w_{1}+\omega (w,w_{1})w_{2}$ for each of $w=w_{3},w_{4},...,w_{m}$. Iterate.
Notice that this method applies for symplectic vector space over any field, not just the field of real numbers.
Case of real or complex field:
When the space is over the field of real numbers, then we can modify the modified Gram-Schmidt process as follows: Start the same way. Let $w_{1},...,w_{m}$ be an orthonormal basis (with respect to the usual inner product on $\mathbb {R} ^{n}$) of $W$. Since $\omega (w_{1},\cdot )\neq 0$, and $\omega (w_{1},w_{1})=0$, WLOG $\omega (w_{1},w_{2})\neq 0$. Now multiply $w_{2}$ by a sign, so that $\omega (w_{1},w_{2})\geq 0$. Then define $w'=w-\omega (w,w_{2})w_{1}+\omega (w,w_{1})w_{2}$ for each of $w=w_{3},w_{4},...,w_{m}$, then scale each $w'$ so that it has norm one. Iterate.
Similarly, for the field of complex numbers, we may choose a unitary basis. This This proves the spectral theory of antisymmetric matrices.
Lagrangian form
There is another way to interpret this standard symplectic form. Since the model space R2n used above carries much canonical structure which might easily lead to misinterpretation, we will use "anonymous" vector spaces instead. Let V be a real vector space of dimension n and V∗ its dual space. Now consider the direct sum W = V ⊕ V∗ of these spaces equipped with the following form:
$\omega (x\oplus \eta ,y\oplus \xi )=\xi (x)-\eta (y).$
Now choose any basis (v1, ..., vn) of V and consider its dual basis
$\left(v_{1}^{*},\ldots ,v_{n}^{*}\right).$
We can interpret the basis vectors as lying in W if we write xi = (vi, 0) and yi = (0, vi∗). Taken together, these form a complete basis of W,
$(x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n}).$
The form ω defined here can be shown to have the same properties as in the beginning of this section. On the other hand, every symplectic structure is isomorphic to one of the form V ⊕ V∗. The subspace V is not unique, and a choice of subspace V is called a polarization. The subspaces that give such an isomorphism are called Lagrangian subspaces or simply Lagrangians.
Explicitly, given a Lagrangian subspace (as defined below), a choice of basis (x1, ..., xn) defines a dual basis for a complement, by ω(xi, yj) = δij.
Analogy with complex structures
Just as every symplectic structure is isomorphic to one of the form V ⊕ V∗, every complex structure on a vector space is isomorphic to one of the form V ⊕ V. Using these structures, the tangent bundle of an n-manifold, considered as a 2n-manifold, has an almost complex structure, and the cotangent bundle of an n-manifold, considered as a 2n-manifold, has a symplectic structure: T(T∗M)p = Tp(M) ⊕ (Tp(M))∗.
The complex analog to a Lagrangian subspace is a real subspace, a subspace whose complexification is the whole space: W = V ⊕ J V. As can be seen from the standard symplectic form above, every symplectic form on R2n is isomorphic to the imaginary part of the standard complex (Hermitian) inner product on Cn (with the convention of the first argument being anti-linear).
Volume form
Let ω be an alternating bilinear form on an n-dimensional real vector space V, ω ∈ Λ2(V). Then ω is non-degenerate if and only if n is even and ωn/2 = ω ∧ ... ∧ ω is a volume form. A volume form on an n-dimensional vector space V is a non-zero multiple of the n-form e1∗ ∧ ... ∧ en∗ where e1, e2, ..., en is a basis of V.
For the standard basis defined in the previous section, we have
$\omega ^{n}=(-1)^{\frac {n}{2}}x_{1}^{*}\wedge \dotsb \wedge x_{n}^{*}\wedge y_{1}^{*}\wedge \dotsb \wedge y_{n}^{*}.$
By reordering, one can write
$\omega ^{n}=x_{1}^{*}\wedge y_{1}^{*}\wedge \dotsb \wedge x_{n}^{*}\wedge y_{n}^{*}.$
Authors variously define ωn or (−1)n/2ωn as the standard volume form. An occasional factor of n! may also appear, depending on whether the definition of the alternating product contains a factor of n! or not. The volume form defines an orientation on the symplectic vector space (V, ω).
Symplectic map
Suppose that (V, ω) and (W, ρ) are symplectic vector spaces. Then a linear map f : V → W is called a symplectic map if the pullback preserves the symplectic form, i.e. f∗ρ = ω, where the pullback form is defined by (f∗ρ)(u, v) = ρ(f(u), f(v)). Symplectic maps are volume- and orientation-preserving.
Symplectic group
If V = W, then a symplectic map is called a linear symplectic transformation of V. In particular, in this case one has that ω(f(u), f(v)) = ω(u, v), and so the linear transformation f preserves the symplectic form. The set of all symplectic transformations forms a group and in particular a Lie group, called the symplectic group and denoted by Sp(V) or sometimes Sp(V, ω). In matrix form symplectic transformations are given by symplectic matrices.
Subspaces
Let W be a linear subspace of V. Define the symplectic complement of W to be the subspace
$W^{\perp }=\{v\in V\mid \omega (v,w)=0{\mbox{ for all }}w\in W\}.$
The symplectic complement satisfies:
${\begin{aligned}\left(W^{\perp }\right)^{\perp }&=W\\\dim W+\dim W^{\perp }&=\dim V.\end{aligned}}$
However, unlike orthogonal complements, W⊥ ∩ W need not be 0. We distinguish four cases:
• W is symplectic if W⊥ ∩ W = {0}. This is true if and only if ω restricts to a nondegenerate form on W. A symplectic subspace with the restricted form is a symplectic vector space in its own right.
• W is isotropic if W ⊆ W⊥. This is true if and only if ω restricts to 0 on W. Any one-dimensional subspace is isotropic.
• W is coisotropic if W⊥ ⊆ W. W is coisotropic if and only if ω descends to a nondegenerate form on the quotient space W/W⊥. Equivalently W is coisotropic if and only if W⊥ is isotropic. Any codimension-one subspace is coisotropic.
• W is Lagrangian if W = W⊥. A subspace is Lagrangian if and only if it is both isotropic and coisotropic. In a finite-dimensional vector space, a Lagrangian subspace is an isotropic one whose dimension is half that of V. Every isotropic subspace can be extended to a Lagrangian one.
Referring to the canonical vector space R2n above,
• the subspace spanned by {x1, y1} is symplectic
• the subspace spanned by {x1, x2} is isotropic
• the subspace spanned by {x1, x2, ..., xn, y1} is coisotropic
• the subspace spanned by {x1, x2, ..., xn} is Lagrangian.
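These four cases can be checked numerically. A minimal sketch (assuming NumPy and SciPy; the helper names are hypothetical) computes $W^{\perp }$ as the null space of $W^{\text{T}}\Omega $ and the dimension of the intersection $W\cap W^{\perp }$, here with n = 2:

```python
import numpy as np
from scipy.linalg import null_space

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
e = np.eye(2 * n)                      # columns: x1, x2, y1, y2

def symplectic_complement(W):
    """Basis (columns) of W-perp: all v with omega(v, w) = v^T Omega w = 0."""
    return null_space(W.T @ Omega)

def classify(W):
    Wp = symplectic_complement(W)
    inter = W.shape[1] + Wp.shape[1] - np.linalg.matrix_rank(np.column_stack([W, Wp]))
    return Wp.shape[1], inter          # (dim of W-perp, dim of the intersection)

print(classify(e[:, [0, 2]]))      # span{x1, y1}:     (2, 0) -> symplectic
print(classify(e[:, [0]]))         # span{x1}:         (3, 1) -> isotropic
print(classify(e[:, [0, 1, 2]]))   # span{x1, x2, y1}: (1, 1) -> coisotropic
print(classify(e[:, [0, 1]]))      # span{x1, x2}:     (2, 2) -> Lagrangian
```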
Heisenberg group
Main article: Heisenberg group
A Heisenberg group can be defined for any symplectic vector space, and this is the typical way that Heisenberg groups arise.
A vector space can be thought of as a commutative Lie group (under addition), or equivalently as a commutative Lie algebra, meaning with trivial Lie bracket. The Heisenberg group is a central extension of such a commutative Lie group/algebra: the symplectic form defines the commutation, analogously to the canonical commutation relations (CCR), and a Darboux basis corresponds to canonical coordinates – in physics terms, to momentum operators and position operators.
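Concretely, in one standard presentation (the hatted notation is chosen here only for illustration), the Heisenberg algebra is ${\mathfrak {h}}=V\oplus \mathbb {R} $ with Lie bracket $[(v,s),(w,t)]=(0,\omega (v,w))$. In a Darboux basis, writing ${\hat {x}}_{i}=(x_{i},0)$, ${\hat {y}}_{j}=(y_{j},0)$ and $z=(0,1)$, the only nonvanishing brackets are $[{\hat {x}}_{i},{\hat {y}}_{j}]=\delta _{ij}z$ with $z$ central, which is exactly the shape of the canonical commutation relations.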
Indeed, by the Stone–von Neumann theorem, every representation satisfying the CCR (every representation of the Heisenberg group) is of this form, or more properly unitarily conjugate to the standard one.
Further, the group algebra of (the dual to) a vector space is the symmetric algebra, and the group algebra of the Heisenberg group (of the dual) is the Weyl algebra: one can think of the central extension as corresponding to quantization or deformation.
Formally, the symmetric algebra of a vector space V over a field F is the group algebra of the dual, Sym(V) := F[V∗], and the Weyl algebra is the group algebra of the (dual) Heisenberg group W(V) = F[H(V∗)]. Since passing to group algebras is a contravariant functor, the central extension map H(V) → V becomes an inclusion Sym(V) → W(V).
See also
• A symplectic manifold is a smooth manifold with a smoothly-varying closed symplectic form on each tangent space.
• Maslov index
• A symplectic representation is a group representation where each group element acts as a symplectic transformation.
References
• Claude Godbillon (1969) "Géométrie différentielle et mécanique analytique", Hermann
• Abraham, Ralph; Marsden, Jerrold E. (1978). "Hamiltonian and Lagrangian Systems". Foundations of Mechanics (2nd ed.). London: Benjamin-Cummings. pp. 161–252. ISBN 0-8053-0102-X. PDF
• Paulette Libermann and Charles-Michel Marle (1987) "Symplectic Geometry and Analytical Mechanics", D. Reidel
• Jean-Marie Souriau (1997) "Structure of Dynamical Systems, A Symplectic View of Physics", Springer
| Wikipedia |
Symplectic representation
In the mathematical field of representation theory, a symplectic representation is a representation of a group or a Lie algebra on a symplectic vector space (V, ω) which preserves the symplectic form ω. Here ω is a nondegenerate skew-symmetric bilinear form
$\omega \colon V\times V\to \mathbb {F} $
where F is the field of scalars. A representation of a group G preserves ω if
$\omega (g\cdot v,g\cdot w)=\omega (v,w)$
for all g in G and v, w in V, whereas a representation of a Lie algebra g preserves ω if
$\omega (\xi \cdot v,w)+\omega (v,\xi \cdot w)=0$
for all ξ in g and v, w in V. Thus a representation of G or g is equivalently a group or Lie algebra homomorphism from G or g to the symplectic group Sp(V,ω) or its Lie algebra sp(V,ω).
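In matrix form, with a fixed basis and the standard $\Omega $, the Lie algebra condition reads $\xi ^{\text{T}}\Omega +\Omega \xi =0$. A quick numerical check (an illustration assuming NumPy and SciPy, not taken from the cited reference):

```python
import numpy as np
from scipy.linalg import expm

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

rng = np.random.default_rng(1)
S = rng.standard_normal((2 * n, 2 * n))
S = (S + S.T) / 2                 # symmetric, so xi = Omega @ S lies in sp
xi = Omega @ S

# infinitesimal invariance omega(xi v, w) + omega(v, xi w) = 0,
# i.e. xi^T Omega + Omega xi = 0
print(np.allclose(xi.T @ Omega + Omega @ xi, 0))
g = expm(xi)                      # exponentiate into the group
print(np.allclose(g.T @ Omega @ g, Omega))
```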
If G is a compact group (for example, a finite group), and F is the field of complex numbers, then by introducing a compatible unitary structure (which exists by an averaging argument), one can show that any complex symplectic representation is a quaternionic representation. Quaternionic representations of finite or compact groups are often called symplectic representations, and may be identified using the Frobenius–Schur indicator.
References
• Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
| Wikipedia |
Symplectic spinor bundle
In differential geometry, given a metaplectic structure $\pi _{\mathbf {P} }\colon {\mathbf {P} }\to M\,$ on a $2n$-dimensional symplectic manifold $(M,\omega ),\,$ the symplectic spinor bundle is the Hilbert space bundle $\pi _{\mathbf {Q} }\colon {\mathbf {Q} }\to M\,$ associated to the metaplectic structure via the metaplectic representation. The metaplectic representation of the metaplectic group — the two-fold covering of the symplectic group — gives rise to an infinite rank vector bundle; this is the symplectic spinor construction due to Bertram Kostant.[1]
A section of the symplectic spinor bundle ${\mathbf {Q} }\,$ is called a symplectic spinor field.
Formal definition
Let $({\mathbf {P} },F_{\mathbf {P} })$ be a metaplectic structure on a symplectic manifold $(M,\omega ),\,$ that is, an equivariant lift of the symplectic frame bundle $\pi _{\mathbf {R} }\colon {\mathbf {R} }\to M\,$ with respect to the double covering $\rho \colon {\mathrm {Mp} }(n,{\mathbb {R} })\to {\mathrm {Sp} }(n,{\mathbb {R} }).\,$
The symplectic spinor bundle ${\mathbf {Q} }\,$ is defined [2] to be the Hilbert space bundle
${\mathbf {Q} }={\mathbf {P} }\times _{\mathfrak {m}}L^{2}({\mathbb {R} }^{n})\,$
associated to the metaplectic structure ${\mathbf {P} }$ via the metaplectic representation ${\mathfrak {m}}\colon {\mathrm {Mp} }(n,{\mathbb {R} })\to {\mathrm {U} }(L^{2}({\mathbb {R} }^{n})),\,$ also called the Segal–Shale–Weil [3][4][5] representation of ${\mathrm {Mp} }(n,{\mathbb {R} }).\,$ Here, the notation ${\mathrm {U} }({\mathbf {W} })\,$ denotes the group of unitary operators acting on a Hilbert space ${\mathbf {W} }.\,$
The Segal–Shale–Weil representation [6] is an infinite-dimensional unitary representation of the metaplectic group ${\mathrm {Mp} }(n,{\mathbb {R} })$ on the space $L^{2}({\mathbb {R} }^{n})\,$ of all complex-valued, square Lebesgue-integrable functions. Because of the infinite dimension, the Segal–Shale–Weil representation is not so easy to handle.
Notes
1. Kostant, B. (1974). "Symplectic Spinors". Symposia Mathematica. Academic Press. XIV: 139–152.
2. Habermann, Katharina; Habermann, Lutz (2006), Introduction to Symplectic Dirac Operators, Springer-Verlag, ISBN 978-3-540-33420-0, p. 37.
3. Segal, I. E. (1962), Lectures at the 1960 Boulder Summer Seminar, AMS, Providence, RI
4. Shale, D. (1962). "Linear symmetries of free boson fields". Trans. Amer. Math. Soc. 103: 149–167. doi:10.1090/s0002-9947-1962-0137504-6.
5. Weil, A. (1964). "Sur certains groupes d'opérateurs unitaires". Acta Math. 111: 143–211. doi:10.1007/BF02391012.
6. Kashiwara, M; Vergne, M. (1978). "On the Segal–Shale–Weil representation and harmonic polynomials". Inventiones Mathematicae. 44: 1–47. doi:10.1007/BF01389900.
Further reading
• Habermann, Katharina; Habermann, Lutz (2006), Introduction to Symplectic Dirac Operators, Springer-Verlag, ISBN 978-3-540-33420-0
| Wikipedia |
Symplectic sum
In mathematics, specifically in symplectic geometry, the symplectic sum is a geometric modification on symplectic manifolds, which glues two given manifolds into a single new one. It is a symplectic version of connected summation along a submanifold, often called a fiber sum.
The symplectic sum is the inverse of the symplectic cut, which decomposes a given manifold into two pieces. Together the symplectic sum and cut may be viewed as a deformation of symplectic manifolds, analogous for example to deformation to the normal cone in algebraic geometry.
The symplectic sum has been used to construct previously unknown families of symplectic manifolds, and to derive relationships among the Gromov–Witten invariants of symplectic manifolds.
Definition
Let $M_{1}$ and $M_{2}$ be two symplectic $2n$-manifolds and $V$ a symplectic $(2n-2)$-manifold, embedded as a submanifold into both $M_{1}$ and $M_{2}$ via
$j_{i}:V\hookrightarrow M_{i},$
such that the Euler classes of the normal bundles are opposite:
$e(N_{M_{1}}V)=-e(N_{M_{2}}V).$
In the 1995 paper that defined the symplectic sum, Robert Gompf proved that for any orientation-reversing isomorphism
$\psi :N_{M_{1}}V\to N_{M_{2}}V$
there is a canonical isotopy class of symplectic structures on the connected sum
$(M_{1},V)\#(M_{2},V)$
meeting several conditions of compatibility with the summands $M_{i}$. In other words, the theorem defines a symplectic sum operation whose result is a symplectic manifold, unique up to isotopy.
To produce a well-defined symplectic structure, the connected sum must be performed with special attention paid to the choices of various identifications. Loosely speaking, the isomorphism $\psi $ is composed with an orientation-reversing symplectic involution of the normal bundles of $V$ (or rather their corresponding punctured unit disk bundles); then this composition is used to glue $M_{1}$ to $M_{2}$ along the two copies of $V$.
Generalizations
In greater generality, the symplectic sum can be performed on a single symplectic manifold $M$ containing two disjoint copies of $V$, gluing the manifold to itself along the two copies. The preceding description of the sum of two manifolds then corresponds to the special case where $M$ consists of two connected components, each containing a copy of $V$.
Additionally, the sum can be performed simultaneously on submanifolds $X_{i}\subseteq M_{i}$ of equal dimension and meeting $V$ transversally.
Other generalizations also exist. However, it is not possible to remove the requirement that $V$ be of codimension two in the $M_{i}$, as the following argument shows.
A symplectic sum along a submanifold of codimension $2k$ requires a symplectic involution of a $2k$-dimensional annulus. If this involution exists, it can be used to patch two $2k$-dimensional balls together to form a symplectic $2k$-dimensional sphere. Because the sphere is a compact manifold, a symplectic form $\omega $ on it induces a nonzero cohomology class
$[\omega ]\in H^{2}(\mathbb {S} ^{2k},\mathbb {R} ).$
But this second cohomology group is zero unless $2k=2$. So the symplectic sum is possible only along a submanifold of codimension two.
Identity element
Given $M$ with codimension-two symplectic submanifold $V$, one may projectively complete the normal bundle of $V$ in $M$ to the $\mathbb {CP} ^{1}$-bundle
$P:=\mathbb {P} (N_{M}V\oplus \mathbb {C} ).$
This $P$ contains two canonical copies of $V$: the zero-section $V_{0}$, which has normal bundle equal to that of $V$ in $M$, and the infinity-section $V_{\infty }$, which has opposite normal bundle. Therefore, one may symplectically sum $(M,V)$ with $(P,V_{\infty })$; the result is again $M$, with $V_{0}$ now playing the role of $V$:
$(M,V)=((M,V)\#(P,V_{\infty }),V_{0}).$
So for any particular pair $(M,V)$ there exists an identity element $P$ for the symplectic sum. Such identity elements have been used both in establishing theory and in computations; see below.
Symplectic sum and cut as deformation
It is sometimes profitable to view the symplectic sum as a family of manifolds. In this framework, the given data $M_{1}$, $M_{2}$, $V$, $j_{1}$, $j_{2}$, $\psi $ determine a unique smooth $(2n+2)$-dimensional symplectic manifold $Z$ and a fibration
$Z\to D\subseteq \mathbb {C} $
in which the central fiber is the singular space
$Z_{0}=M_{1}\cup _{V}M_{2}$
obtained by joining the summands $M_{i}$ along $V$, and the generic fiber $Z_{\epsilon }$ is a symplectic sum of the $M_{i}$. (That is, the generic fibers are all members of the unique isotopy class of the symplectic sum.)
Loosely speaking, one constructs this family as follows. Choose a nonvanishing holomorphic section $\eta $ of the trivial complex line bundle
$N_{M_{1}}V\otimes _{\mathbb {C} }N_{M_{2}}V.$
Then, in the direct sum
$N_{M_{1}}V\oplus N_{M_{2}}V,$
with $v_{i}$ representing a normal vector to $V$ in $M_{i}$, consider the locus of the quadratic equation
$v_{1}\otimes v_{2}=\epsilon \eta $
for a chosen small $\epsilon \in \mathbb {C} $. One can glue both $M_{i}\setminus V$ (the summands with $V$ deleted) onto this locus; the result is the symplectic sum $Z_{\epsilon }$.
As $\epsilon $ varies, the sums $Z_{\epsilon }$ naturally form the family $Z\to D$ described above. The central fiber $Z_{0}$ is the symplectic cut of the generic fiber. So the symplectic sum and cut can be viewed together as a quadratic deformation of symplectic manifolds.
An important example occurs when one of the summands is an identity element $P$. For then the generic fiber is a symplectic manifold $M$ and the central fiber is $M$ with the normal bundle of $V$ "pinched off at infinity" to form the $\mathbb {CP} ^{1}$-bundle $P$. This is analogous to deformation to the normal cone along a smooth divisor $V$ in algebraic geometry. In fact, symplectic treatments of Gromov–Witten theory often use the symplectic sum/cut for "rescaling the target" arguments, while algebro-geometric treatments use deformation to the normal cone for these same arguments.
However, the symplectic sum is not a complex operation in general. The sum of two Kähler manifolds need not be Kähler.
History and applications
The symplectic sum was first clearly defined in 1995 by Robert Gompf. He used it to demonstrate that any finitely presented group appears as the fundamental group of a symplectic four-manifold. Thus the category of symplectic manifolds was shown to be much larger than the category of Kähler manifolds.
Around the same time, Eugene Lerman proposed the symplectic cut as a generalization of symplectic blow up and used it to study the symplectic quotient and other operations on symplectic manifolds.
A number of researchers have subsequently investigated the behavior of pseudoholomorphic curves under symplectic sums, proving various versions of a symplectic sum formula for Gromov–Witten invariants. Such a formula aids computation by allowing one to decompose a given manifold into simpler pieces, whose Gromov–Witten invariants should be easier to compute. Another approach is to use an identity element $P$ to write the manifold $M$ as a symplectic sum
$(M,V)=(M,V)\#(P,V_{\infty }).$
A formula for the Gromov–Witten invariants of a symplectic sum then yields a recursive formula for the Gromov–Witten invariants of $M$.
References
• Robert Gompf: A new construction of symplectic manifolds, Annals of Mathematics 142 (1995), 527-595
• Dusa McDuff and Dietmar Salamon: Introduction to Symplectic Topology (1998) Oxford Mathematical Monographs, ISBN 0-19-850451-9
• Dusa McDuff and Dietmar Salamon: J-Holomorphic Curves and Symplectic Topology (2004) American Mathematical Society Colloquium Publications, ISBN 0-8218-3485-1
| Wikipedia |
Symplectic geometry
Symplectic geometry is a branch of differential geometry and differential topology that studies symplectic manifolds; that is, differentiable manifolds equipped with a closed, nondegenerate 2-form. Symplectic geometry has its origins in the Hamiltonian formulation of classical mechanics where the phase space of certain classical systems takes on the structure of a symplectic manifold.[1]
The term "symplectic", introduced by Weyl,[2] is a calque of "complex"; previously, the "symplectic group" had been called the "line complex group". "Complex" comes from the Latin com-plexus, meaning "braided together" (co- + plexus), while symplectic comes from the corresponding Greek sym-plektikos (συμπλεκτικός); in both cases the stem comes from the Indo-European root *pleḱ- The name reflects the deep connections between complex and symplectic structures.
By Darboux's theorem, symplectic manifolds are locally isomorphic to the standard symplectic vector space, hence have only global (topological) invariants. "Symplectic topology", which studies global properties of symplectic manifolds, is often used interchangeably with "symplectic geometry".
The name "complex group" formerly advocated by me in allusion to line complexes, as these are defined by the vanishing of antisymmetric bilinear forms, has become more and more embarrassing through collision with the word "complex" in the connotation of complex number. I therefore propose to replace it by the corresponding Greek adjective "symplectic." Dickson called the group the "Abelian linear group" in homage to Abel who first studied it.
Weyl (1939, p. 165)
Introduction
A symplectic geometry is defined on a smooth even-dimensional space that is a differentiable manifold. On this space is defined a geometric object, the symplectic 2-form, that allows for the measurement of sizes of two-dimensional objects in the space. The symplectic form in symplectic geometry plays a role analogous to that of the metric tensor in Riemannian geometry. Where the metric tensor measures lengths and angles, the symplectic form measures oriented areas.[3]
Symplectic geometry arose from the study of classical mechanics and an example of a symplectic structure is the motion of an object in one dimension. To specify the trajectory of the object, one requires both the position q and the momentum p, which form a point (p,q) in the Euclidean plane ℝ2. In this case, the symplectic form is
$\omega =dp\wedge dq$
and is an area form that measures the area A of a region S in the plane through integration:
$A=\int _{S}\omega .$
The area is important because as conservative dynamical systems evolve in time, this area is invariant.[3]
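This invariance is easy to observe numerically. The sketch below (an illustration, not from the cited sources) evolves a small disc of initial conditions for the harmonic oscillator $H=(p^{2}+q^{2})/2$ using the symplectic Euler method, whose update map has Jacobian determinant 1, and checks that the enclosed phase-space area is unchanged:

```python
import numpy as np

def step(q, p, h):
    """One symplectic Euler step for H = (p**2 + q**2) / 2."""
    p = p - h * q       # kick
    q = q + h * p       # drift, using the updated momentum
    return q, p

def area(q, p):
    """Shoelace formula for the oriented area enclosed by the phase points."""
    return 0.5 * np.sum(q * np.roll(p, -1) - p * np.roll(q, -1))

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
q, p = 1.0 + 0.1 * np.cos(theta), 0.1 * np.sin(theta)   # small disc of states
print(area(q, p))                     # ~ pi * 0.1**2
for _ in range(1000):
    q, p = step(q, p, h=0.05)
print(area(q, p))                     # same, up to round-off
```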
Higher dimensional symplectic geometries are defined analogously. A 2n-dimensional symplectic geometry is formed of pairs of directions
$((x_{1},x_{2}),(x_{3},x_{4}),\ldots ,(x_{2n-1},x_{2n}))$
in a 2n-dimensional manifold along with a symplectic form
$\omega =dx_{1}\wedge dx_{2}+dx_{3}\wedge dx_{4}+\cdots +dx_{2n-1}\wedge dx_{2n}.$
This symplectic form yields the size of a 2n-dimensional region V in the space as the sum of the areas of the projections of V onto each of the planes formed by the pairs of directions[3]
$A=\int _{V}\omega =\int _{V}dx_{1}\wedge dx_{2}+\int _{V}dx_{3}\wedge dx_{4}+\cdots +\int _{V}dx_{2n-1}\wedge dx_{2n}.$
Comparison with Riemannian geometry
Symplectic geometry has a number of similarities with and differences from Riemannian geometry, which is the study of differentiable manifolds equipped with nondegenerate, symmetric 2-tensors (called metric tensors). Unlike in the Riemannian case, symplectic manifolds have no local invariants such as curvature. This is a consequence of Darboux's theorem which states that a neighborhood of any point of a 2n-dimensional symplectic manifold is isomorphic to the standard symplectic structure on an open set of ℝ2n. Another difference with Riemannian geometry is that not every differentiable manifold need admit a symplectic form; there are certain topological restrictions. For example, every symplectic manifold is even-dimensional and orientable. Additionally, if M is a closed symplectic manifold, then the 2nd de Rham cohomology group H2(M) is nontrivial; this implies, for example, that the only n-sphere that admits a symplectic form is the 2-sphere. A parallel that one can draw between the two subjects is the analogy between geodesics in Riemannian geometry and pseudoholomorphic curves in symplectic geometry: Geodesics are curves of shortest length (locally), while pseudoholomorphic curves are surfaces of minimal area. Both concepts play a fundamental role in their respective disciplines.
Examples and structures
Every Kähler manifold is also a symplectic manifold. Well into the 1970s, symplectic experts were unsure whether any compact non-Kähler symplectic manifolds existed, but since then many examples have been constructed (the first was due to William Thurston); in particular, Robert Gompf has shown that every finitely presented group occurs as the fundamental group of some symplectic 4-manifold, in marked contrast with the Kähler case.
Most symplectic manifolds, one can say, are not Kähler, and so do not have an integrable complex structure compatible with the symplectic form. Mikhail Gromov, however, made the important observation that symplectic manifolds do admit an abundance of compatible almost complex structures, so that they satisfy all the axioms for a Kähler manifold except the requirement that the transition maps be holomorphic.
Gromov used the existence of almost complex structures on symplectic manifolds to develop a theory of pseudoholomorphic curves,[4] which has led to a number of advancements in symplectic topology, including a class of symplectic invariants now known as Gromov–Witten invariants. Later, using the pseudoholomorphic curve technique Andreas Floer invented another important tool in symplectic geometry known as the Floer homology.[5]
See also
• Contact geometry
• Geometric mechanics
• Moment map
• Poisson geometry
• Symplectic integration
• Symplectic vector space
Notes
1. Hartnett, Kevin (February 9, 2017). "A Fight to Fix Geometry's Foundations". Quanta Magazine.
2. Weyl, Hermann (1939). The Classical Groups. Their Invariants and Representations. Reprinted by Princeton University Press (1997). ISBN 0-691-05756-7. MR0000255
3. McDuff, Dusa (2010), "What is Symplectic Geometry?", in Hobbs, Catherine; Paycha, Sylvie (eds.), European Women in Mathematics – Proceedings of the 13th General Meeting, World Scientific, pp. 33–51, CiteSeerX 10.1.1.433.1953, ISBN 9789814277686
4. Gromov, Mikhael. "Pseudo holomorphic curves in symplectic manifolds." Inventiones mathematicae 82.2 (1985): 307–347.
5. Floer, Andreas. "Morse theory for Lagrangian intersections." Journal of differential geometry 28.3 (1988): 513–547.
References
• Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin-Cummings. ISBN 978-0-8053-0102-1.
• Arnol'd, V. I. (1986). "Первые шаги симплектической топологии" [First steps in symplectic topology]. Успехи математических наук (in Russian). 41 (6(252)): 3–18. doi:10.1070/RM1986v041n06ABEH004221. ISSN 0036-0279. S2CID 250908036 – via Russian Mathematical Surveys, 1986, 41:6, 1–21.
• McDuff, Dusa; Salamon, D. (1998). Introduction to Symplectic Topology. Oxford University Press. ISBN 978-0-19-850451-1.
• Fomenko, A. T. (1995). Symplectic Geometry (2nd ed.). Gordon and Breach. ISBN 978-2-88124-901-3. (An undergraduate level introduction.)
• de Gosson, Maurice A. (2006). Symplectic Geometry and Quantum Mechanics. Basel: Birkhäuser Verlag. ISBN 978-3-7643-7574-4.
• Weinstein, Alan (1981). "Symplectic Geometry" (PDF). Bulletin of the American Mathematical Society. 5 (1): 1–13. doi:10.1090/s0273-0979-1981-14911-9.
• Weyl, Hermann (1939). The Classical Groups. Their Invariants and Representations. Reprinted by Princeton University Press (1997). ISBN 0-691-05756-7. MR0000255.
External links
• Media related to Symplectic geometry at Wikimedia Commons
• "Symplectic structure", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
| Wikipedia |
Symplectic matrix
In mathematics, a symplectic matrix is a $2n\times 2n$ matrix $M$ with real entries that satisfies the condition
$M^{\text{T}}\Omega M=\Omega ,$
(1)
where $M^{\text{T}}$ denotes the transpose of $M$ and $\Omega $ is a fixed $2n\times 2n$ nonsingular, skew-symmetric matrix. This definition can be extended to $2n\times 2n$ matrices with entries in other fields, such as the complex numbers, finite fields, p-adic numbers, and function fields.
Typically $\Omega $ is chosen to be the block matrix
$\Omega ={\begin{bmatrix}0&I_{n}\\-I_{n}&0\\\end{bmatrix}},$
where $I_{n}$ is the $n\times n$ identity matrix. The matrix $\Omega $ has determinant $+1$ and its inverse is $\Omega ^{-1}=\Omega ^{\text{T}}=-\Omega $.
Properties
Generators for symplectic matrices
Every symplectic matrix has determinant $+1$, and the $2n\times 2n$ symplectic matrices with real entries form a subgroup of the general linear group $\mathrm {GL} (2n;\mathbb {R} )$ under matrix multiplication since being symplectic is a property stable under matrix multiplication. Topologically, this symplectic group is a connected noncompact real Lie group of real dimension $n(2n+1)$, and is denoted $\mathrm {Sp} (2n;\mathbb {R} )$. The symplectic group can be defined as the set of linear transformations that preserve the symplectic form of a real symplectic vector space.
This symplectic group has a distinguished set of generators, which can be used to find all possible symplectic matrices. This includes the following sets
${\begin{aligned}D(n)=&\left\{{\begin{pmatrix}A&0\\0&(A^{T})^{-1}\end{pmatrix}}:A\in {\text{GL}}(n;\mathbb {R} )\right\}\\N(n)=&\left\{{\begin{pmatrix}I_{n}&B\\0&I_{n}\end{pmatrix}}:B\in {\text{Sym}}(n;\mathbb {R} )\right\}\end{aligned}}$
where ${\text{Sym}}(n;\mathbb {R} )$ is the set of $n\times n$ symmetric matrices. Then, $\mathrm {Sp} (2n;\mathbb {R} )$ is generated by the set[1]p. 2
$\{\Omega \}\cup D(n)\cup N(n)$
of matrices. In other words, any symplectic matrix can be constructed by multiplying matrices in $D(n)$ and $N(n)$ together, along with some power of $\Omega $.
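A small numerical sketch (assuming NumPy; the helper names gen_D and gen_N are hypothetical) builds a random word in these generators and verifies the defining relation $M^{\text{T}}\Omega M=\Omega $ together with $\det M=1$:

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
Omega = np.block([[Z, I], [-I, Z]])
rng = np.random.default_rng(7)

def gen_D():
    """A generator from D(n): block diag(A, (A^T)^{-1}) for invertible A."""
    A = rng.standard_normal((n, n)) + 3 * I      # comfortably invertible
    return np.block([[A, Z], [Z, np.linalg.inv(A).T]])

def gen_N():
    """A generator from N(n): upper unitriangular with symmetric block B."""
    B = rng.standard_normal((n, n))
    B = B + B.T
    return np.block([[I, B], [Z, I]])

gens = [gen_D, gen_N, lambda: Omega]
M = np.eye(2 * n)
for _ in range(6):                               # a random word in the generators
    M = M @ gens[rng.integers(3)]()

print(np.allclose(M.T @ Omega @ M, Omega))       # True: M is symplectic
print(np.isclose(np.linalg.det(M), 1.0))         # and det M = +1
```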
Inverse matrix
Every symplectic matrix is invertible with the inverse matrix given by
$M^{-1}=\Omega ^{-1}M^{\text{T}}\Omega .$
Furthermore, the product of two symplectic matrices is, again, a symplectic matrix. This gives the set of all symplectic matrices the structure of a group. There exists a natural manifold structure on this group which makes it into a (real or complex) Lie group called the symplectic group.
Determinantal properties
It follows easily from the definition that the determinant of any symplectic matrix is ±1. Actually, it turns out that the determinant is always +1 for any field. One way to see this is through the use of the Pfaffian and the identity
${\mbox{Pf}}(M^{\text{T}}\Omega M)=\det(M){\mbox{Pf}}(\Omega ).$
Since $M^{\text{T}}\Omega M=\Omega $ and ${\mbox{Pf}}(\Omega )\neq 0$ we have that $\det(M)=1$.
When the underlying field is real or complex, one can also show this by factoring the inequality $\det(M^{\text{T}}M+I)\geq 1$.[2]
Block form of symplectic matrices
Suppose Ω is given in the standard form and let $M$ be a $2n\times 2n$ block matrix given by
$M={\begin{pmatrix}A&B\\C&D\end{pmatrix}}$
where $A,B,C,D$ are $n\times n$ matrices. The condition for $M$ to be symplectic is equivalent to either of the two following conditions[3]
$A^{\text{T}}C,B^{\text{T}}D$ symmetric, and $A^{\text{T}}D-C^{\text{T}}B=I$
$AB^{\text{T}},CD^{\text{T}}$ symmetric, and $AD^{\text{T}}-BC^{\text{T}}=I$
When $n=1$ these conditions reduce to the single condition $\det(M)=1$. Thus a $2\times 2$ matrix is symplectic iff it has unit determinant.
Inverse matrix of block matrix
With $\Omega $ in standard form, the inverse of $M$ is given by
$M^{-1}=\Omega ^{-1}M^{\text{T}}\Omega ={\begin{pmatrix}D^{\text{T}}&-B^{\text{T}}\\-C^{\text{T}}&A^{\text{T}}\end{pmatrix}}.$
The group has dimension $n(2n+1)$. This can be seen by noting that $(M^{\text{T}}\Omega M)^{\text{T}}=-M^{\text{T}}\Omega M$ is anti-symmetric. Since the space of anti-symmetric matrices has dimension ${\binom {2n}{2}},$ the identity $M^{\text{T}}\Omega M=\Omega $ imposes ${\binom {2n}{2}}$ constraints on the $(2n)^{2}$ coefficients of $M$ and leaves $M$ with $n(2n+1)$ independent coefficients.
Symplectic transformations
In the abstract formulation of linear algebra, matrices are replaced with linear transformations of finite-dimensional vector spaces. The abstract analog of a symplectic matrix is a symplectic transformation of a symplectic vector space. Briefly, a symplectic vector space $(V,\omega )$ is a $2n$-dimensional vector space $V$ equipped with a nondegenerate, skew-symmetric bilinear form $\omega $ called the symplectic form.
A symplectic transformation is then a linear transformation $L:V\to V$ which preserves $\omega $, i.e.
$\omega (Lu,Lv)=\omega (u,v).$
Fixing a basis for $V$, $\omega $ can be written as a matrix $\Omega $ and $L$ as a matrix $M$. The condition that $L$ be a symplectic transformation is precisely the condition that M be a symplectic matrix:
$M^{\text{T}}\Omega M=\Omega .$
Under a change of basis, represented by a matrix A, we have
$\Omega \mapsto A^{\text{T}}\Omega A$
$M\mapsto A^{-1}MA.$
One can always bring $\Omega $ to either the standard form given in the introduction or the block diagonal form described below by a suitable choice of A.
The matrix Ω
Symplectic matrices are defined relative to a fixed nonsingular, skew-symmetric matrix $\Omega $. As explained in the previous section, $\Omega $ can be thought of as the coordinate representation of a nondegenerate skew-symmetric bilinear form. It is a basic result in linear algebra that any two such matrices differ from each other by a change of basis.
The most common alternative to the standard $\Omega $ given above is the block diagonal form
$\Omega ={\begin{bmatrix}{\begin{matrix}0&1\\-1&0\end{matrix}}&&0\\&\ddots &\\0&&{\begin{matrix}0&1\\-1&0\end{matrix}}\end{bmatrix}}.$
This choice differs from the previous one by a permutation of basis vectors.
Sometimes the notation $J$ is used instead of $\Omega $ for the skew-symmetric matrix. This is a particularly unfortunate choice as it leads to confusion with the notion of a complex structure, which often has the same coordinate expression as $\Omega $ but represents a very different structure. A complex structure $J$ is the coordinate representation of a linear transformation that squares to $-I_{n}$, whereas $\Omega $ is the coordinate representation of a nondegenerate skew-symmetric bilinear form. One could easily choose bases in which $J$ is not skew-symmetric or $\Omega $ does not square to $-I_{n}$.
Given a hermitian structure on a vector space, $J$ and $\Omega $ are related via
$\Omega _{ab}=-g_{ac}{J^{c}}_{b}$
where $g_{ac}$ is the metric. That $J$ and $\Omega $ usually have the same coordinate expression (up to an overall sign) is simply a consequence of the fact that the metric g is usually the identity matrix.
Diagonalization and decomposition
• For any positive definite symmetric real symplectic matrix S there exists U in $\mathrm {U} (2n,\mathbb {R} )=\mathrm {O} (2n)$ such that
$S=U^{\text{T}}DU\quad {\text{for}}\quad D=\operatorname {diag} (\lambda _{1},\ldots ,\lambda _{n},\lambda _{1}^{-1},\ldots ,\lambda _{n}^{-1}),$
where the diagonal elements of D are the eigenvalues of S.[4]
• Any real symplectic matrix S has a polar decomposition of the following form (see the numerical sketch after this list):[4]
$S=UR\quad $ for $\quad U\in \operatorname {Sp} (2n,\mathbb {R} )\cap \operatorname {U} (2n,\mathbb {R} )$ and $R\in \operatorname {Sp} (2n,\mathbb {R} )\cap \operatorname {Sym} _{+}(2n,\mathbb {R} ).$
• Any real symplectic matrix can be decomposed as a product of three matrices:
$S=O{\begin{pmatrix}D&0\\0&D^{-1}\end{pmatrix}}O',$
(2)
such that O and O' are both symplectic and orthogonal and D is positive-definite and diagonal.[5] This decomposition is closely related to the singular value decomposition of a matrix and is known as an 'Euler' or 'Bloch-Messiah' decomposition.
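For the polar decomposition in particular, both factors can be verified to be symplectic numerically. The sketch below (an illustration assuming NumPy and SciPy, where scipy.linalg.polar computes $S=UR$) first builds a symplectic $S$ as the exponential of a Hamiltonian matrix $\Omega A$ with $A$ symmetric:

```python
import numpy as np
from scipy.linalg import expm, polar

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

rng = np.random.default_rng(3)
A = rng.standard_normal((2 * n, 2 * n))
A = (A + A.T) / 2                    # symmetric, so Omega @ A is Hamiltonian
S = expm(Omega @ A)                  # hence S is symplectic

U, R = polar(S)                      # S = U R: U orthogonal, R positive-definite
for F in (U, R):
    print(np.allclose(F.T @ Omega @ F, Omega))   # both factors are symplectic
```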
Complex matrices
If instead M is a 2n × 2n matrix with complex entries, the definition is not standard throughout the literature. Many authors [6] adjust the definition above to
$M^{*}\Omega M=\Omega \,.$
(3)
where M* denotes the conjugate transpose of M. In this case, the determinant may not be 1, but will have absolute value 1. In the 2×2 case (n=1), M will be the product of a real symplectic matrix and a complex number of absolute value 1.
Other authors [7] retain the definition (1) for complex matrices and call matrices satisfying (3) conjugate symplectic.
Applications
Transformations described by symplectic matrices play an important role in quantum optics and in continuous-variable quantum information theory. For instance, symplectic matrices can be used to describe Gaussian (Bogoliubov) transformations of a quantum state of light.[8] In turn, the Bloch-Messiah decomposition (2) means that such an arbitrary Gaussian transformation can be represented as a set of two passive linear-optical interferometers (corresponding to orthogonal matrices O and O' ) intermitted by a layer of active non-linear squeezing transformations (given in terms of the matrix D).[9] In fact, one can circumvent the need for such in-line active squeezing transformations if two-mode squeezed vacuum states are available as a prior resource only.[10]
See also
• Symplectic vector space
• Symplectic group
• Symplectic representation
• Orthogonal matrix
• Unitary matrix
• Hamiltonian mechanics
• Linear complex structure
References
1. Habermann, Katharina (2006). Introduction to Symplectic Dirac Operators. Springer. ISBN 978-3-540-33421-7. OCLC 262692314.
2. Rim, Donsub (2017). "An elementary proof that symplectic matrices have determinant one". Adv. Dyn. Syst. Appl. 12 (1): 15–20. arXiv:1505.04240. doi:10.37622/ADSA/12.1.2017.15-20. S2CID 119595767.
3. de Gosson, Maurice. "Introduction to Symplectic Mechanics: Lectures I-II-III" (PDF).
4. de Gosson, Maurice A. (2011). Symplectic Methods in Harmonic Analysis and in Mathematical Physics - Springer. doi:10.1007/978-3-7643-9992-4. ISBN 978-3-7643-9991-7.
5. Ferraro, Alessandro; Olivares, Stefano; Paris, Matteo G. A. (31 March 2005). "Gaussian states in continuous variable quantum information". Sec. 1.3, p. 4. arXiv:quant-ph/0503237.
6. Xu, H. G. (July 15, 2003). "An SVD-like matrix decomposition and its applications". Linear Algebra and Its Applications. 368: 1–24. doi:10.1016/S0024-3795(03)00370-7. hdl:1808/374.
7. Mackey, D. S.; Mackey, N. (2003). "On the Determinant of Symplectic Matrices". Numerical Analysis Report 422. Manchester, England: Manchester Centre for Computational Mathematics.
8. Weedbrook, Christian; Pirandola, Stefano; García-Patrón, Raúl; Cerf, Nicolas J.; Ralph, Timothy C.; Shapiro, Jeffrey H.; Lloyd, Seth (2012). "Gaussian quantum information". Reviews of Modern Physics. 84 (2): 621–669. arXiv:1110.3234. Bibcode:2012RvMP...84..621W. doi:10.1103/RevModPhys.84.621. S2CID 119250535.
9. Braunstein, Samuel L. (2005). "Squeezing as an irreducible resource". Physical Review A. 71 (5): 055801. arXiv:quant-ph/9904002. Bibcode:2005PhRvA..71e5801B. doi:10.1103/PhysRevA.71.055801. S2CID 16714223.
10. Chakhmakhchyan, Levon; Cerf, Nicolas (2018). "Simulating arbitrary Gaussian circuits with linear optics". Physical Review A. 98 (6): 062314. arXiv:1803.11534. Bibcode:2018PhRvA..98f2314C. doi:10.1103/PhysRevA.98.062314. S2CID 119227039.
Matrix classes
Explicitly constrained entries
• Alternant
• Anti-diagonal
• Anti-Hermitian
• Anti-symmetric
• Arrowhead
• Band
• Bidiagonal
• Bisymmetric
• Block-diagonal
• Block
• Block tridiagonal
• Boolean
• Cauchy
• Centrosymmetric
• Conference
• Complex Hadamard
• Copositive
• Diagonally dominant
• Diagonal
• Discrete Fourier Transform
• Elementary
• Equivalent
• Frobenius
• Generalized permutation
• Hadamard
• Hankel
• Hermitian
• Hessenberg
• Hollow
• Integer
• Logical
• Matrix unit
• Metzler
• Moore
• Nonnegative
• Pentadiagonal
• Permutation
• Persymmetric
• Polynomial
• Quaternionic
• Signature
• Skew-Hermitian
• Skew-symmetric
• Skyline
• Sparse
• Sylvester
• Symmetric
• Toeplitz
• Triangular
• Tridiagonal
• Vandermonde
• Walsh
• Z
Constant
• Exchange
• Hilbert
• Identity
• Lehmer
• Of ones
• Pascal
• Pauli
• Redheffer
• Shift
• Zero
Conditions on eigenvalues or eigenvectors
• Companion
• Convergent
• Defective
• Definite
• Diagonalizable
• Hurwitz
• Positive-definite
• Stieltjes
Satisfying conditions on products or inverses
• Congruent
• Idempotent or Projection
• Invertible
• Involutory
• Nilpotent
• Normal
• Orthogonal
• Unimodular
• Unipotent
• Unitary
• Totally unimodular
• Weighing
With specific applications
• Adjugate
• Alternating sign
• Augmented
• Bézout
• Carleman
• Cartan
• Circulant
• Cofactor
• Commutation
• Confusion
• Coxeter
• Distance
• Duplication and elimination
• Euclidean distance
• Fundamental (linear differential equation)
• Generator
• Gram
• Hessian
• Householder
• Jacobian
• Moment
• Payoff
• Pick
• Random
• Rotation
• Seifert
• Shear
• Similarity
• Symplectic
• Totally positive
• Transformation
Used in statistics
• Centering
• Correlation
• Covariance
• Design
• Doubly stochastic
• Fisher information
• Hat
• Precision
• Stochastic
• Transition
Used in graph theory
• Adjacency
• Biadjacency
• Degree
• Edmonds
• Incidence
• Laplacian
• Seidel adjacency
• Tutte
Used in science and engineering
• Cabibbo–Kobayashi–Maskawa
• Density
• Fundamental (computer vision)
• Fuzzy associative
• Gamma
• Gell-Mann
• Hamiltonian
• Irregular
• Overlap
• S
• State transition
• Substitution
• Z (chemistry)
Related terms
• Jordan normal form
• Linear independence
• Matrix exponential
• Matrix representation of conic sections
• Perfect matrix
• Pseudoinverse
• Row echelon form
• Wronskian
• Mathematics portal
• List of matrices
• Category:Matrices
| Wikipedia |
Symplectic vector field
In physics and mathematics, a symplectic vector field is one whose flow preserves a symplectic form. That is, if $(M,\omega )$ is a symplectic manifold with smooth manifold $M$ and symplectic form $\omega $, then a vector field $X\in {\mathfrak {X}}(M)$ in the Lie algebra ${\mathfrak {X}}(M)$ is symplectic if its flow preserves the symplectic structure. In other words, the Lie derivative of the vector field must vanish:
${\mathcal {L}}_{X}\omega =0$.[1]
An alternative definition is that a vector field is symplectic if its interior product with the symplectic form is closed.[1] (The interior product gives a map from vector fields to 1-forms, which is an isomorphism due to the nondegeneracy of a symplectic 2-form.) The equivalence of the definitions follows from the closedness of the symplectic form and Cartan's magic formula for the Lie derivative in terms of the exterior derivative.
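Spelling the equivalence out (a standard one-line computation): since a symplectic form is closed, $d\omega =0$, Cartan's formula gives ${\mathcal {L}}_{X}\omega =\iota _{X}(d\omega )+d(\iota _{X}\omega )=d(\iota _{X}\omega )$, so ${\mathcal {L}}_{X}\omega =0$ holds exactly when the 1-form $\iota _{X}\omega $ is closed.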
If the interior product of a vector field with the symplectic form is an exact form (and in particular, a closed form), then it is called a Hamiltonian vector field. If the first De Rham cohomology group $H^{1}(M)$ of the manifold is trivial, all closed forms are exact, so all symplectic vector fields are Hamiltonian. That is, the obstruction to a symplectic vector field being Hamiltonian lives in $H^{1}(M)$. In particular, symplectic vector fields on simply connected manifolds are Hamiltonian.
The Lie bracket of two symplectic vector fields is Hamiltonian, and thus the collection of symplectic vector fields and the collection of Hamiltonian vector fields both form Lie algebras.
References
1. Cannas da Silva, Ana (2001), Lectures on Symplectic Geometry, Lecture Notes in Mathematics, vol. 1764, Springer-Verlag, p. 106, ISBN 978-3-540-42195-5.
This article incorporates material from Symplectic vector field on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
| Wikipedia |
Symplectization
In mathematics, the symplectization of a contact manifold is a symplectic manifold which naturally corresponds to it.
Definition
Let $(V,\xi )$ be a contact manifold, and let $x\in V$. Consider the set
$S_{x}V=\{\beta \in T_{x}^{*}V-\{0\}\mid \ker \beta =\xi _{x}\}\subset T_{x}^{*}V$
of all nonzero 1-forms at $x$, which have the contact plane $\xi _{x}$ as their kernel. The union
$SV=\bigcup _{x\in V}S_{x}V\subset T^{*}V$
is a symplectic submanifold of the cotangent bundle of $V$, and thus possesses a natural symplectic structure.
The projection $\pi :SV\to V$ supplies the symplectization with the structure of a principal bundle over $V$ with structure group $\mathbb {R} ^{*}\equiv \mathbb {R} -\{0\}$.
The coorientable case
When the contact structure $\xi $ is cooriented by means of a contact form $\alpha $, there is another version of symplectization, in which only forms giving the same coorientation to $\xi $ as $\alpha $ are considered:
$S_{x}^{+}V=\{\beta \in T_{x}^{*}V-\{0\}\,|\,\beta =\lambda \alpha ,\,\lambda >0\}\subset T_{x}^{*}V,$
$S^{+}V=\bigcup _{x\in V}S_{x}^{+}V\subset T^{*}V.$
Note that $\xi $ is coorientable if and only if the bundle $\pi :SV\to V$ is trivial. Any section of this bundle is a coorienting form for the contact structure.
| Wikipedia |
Synchrotron function
In mathematics the synchrotron functions are defined as follows (for x ≥ 0):[1]
• First synchrotron function
$F(x)=x\int _{x}^{\infty }K_{\frac {5}{3}}(t)\,dt$
• Second synchrotron function
$G(x)=xK_{\frac {2}{3}}(x)$
where Kj is the modified Bessel function of the second kind.
Use in astrophysics
In astrophysics, x is usually a ratio of frequencies, that is, the frequency over a critical frequency (critical frequency is the frequency at which most synchrotron radiation is radiated). This is needed when calculating the spectra for different types of synchrotron emission. It takes a spectrum of electrons (or any charged particle) generated by a separate process (such as a power law distribution of electrons and positrons from a constant injection spectrum) and converts this to the spectrum of photons generated by the input electrons/positrons.
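Both functions are straightforward to evaluate numerically. The sketch below (an illustration assuming SciPy, whose scipy.special.kv is the modified Bessel function of the second kind) also locates the well-known peak of F near x ≈ 0.29:

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def F(x):
    """First synchrotron function: x * integral from x to infinity of K_{5/3}."""
    integral, _ = quad(lambda t: kv(5.0 / 3.0, t), x, np.inf)
    return x * integral

def G(x):
    """Second synchrotron function: x * K_{2/3}(x)."""
    return x * kv(2.0 / 3.0, x)

xs = np.linspace(0.01, 2.0, 400)
vals = [F(x) for x in xs]
i = int(np.argmax(vals))
print(xs[i], vals[i])     # peak near x ~ 0.29: emission is strongest just
                          # below the critical frequency
```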
References
1. Fouka, M.; Ouichaoui, S. (2013-01-29). "Analytical Fits to the Synchrotron Functions". Research in Astronomy and Astrophysics. 13 (6): 680–686. arXiv:1301.6908. doi:10.1088/1674-4527/13/6/007. S2CID 118480582.
Further reading
• Longair, Malcolm S. (2011). High energy astrophysics (3rd ed.). Cambridge: Cambridge University Press. ISBN 978-0-511-93059-1. OCLC 702125055.
• Rybicki, George B.; Lightman, Alan P. (2004). Radiative Processes in Astrophysics (PDF). Weinheim: Wiley-VCH. p. 191. ISBN 978-3-527-61817-0. OCLC 212140606.
| Wikipedia |
Decoding methods
In coding theory, decoding is the process of translating received messages into codewords of a given code. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.
Notation
Throughout, $C\subset \mathbb {F} _{2}^{n}$ is a binary code of length $n$; $x,y$ denote elements of $\mathbb {F} _{2}^{n}$; and $d(x,y)$ is the Hamming distance between those elements.
Ideal observer decoding
Given a received message $x\in \mathbb {F} _{2}^{n}$, ideal observer decoding picks a codeword $y\in C$ to maximize
$\mathbb {P} (y{\mbox{ sent}}\mid x{\mbox{ received}}),$
that is, the codeword that is most likely to have been sent given that the message $x$ was received.
Decoding conventions
A decoded codeword need not be unique: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include:
1. Request that the codeword be resent – automatic repeat-request.
2. Choose any codeword at random from the set of most likely codewords.
3. If another code follows, mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them.
Maximum likelihood decoding
Further information: Maximum likelihood
Given a received vector $x\in \mathbb {F} _{2}^{n}$ maximum likelihood decoding picks a codeword $y\in C$ that maximizes
$\mathbb {P} (x{\mbox{ received}}\mid y{\mbox{ sent}})$,
that is, the codeword $y$ that maximizes the probability that $x$ was received, given that $y$ was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding. In fact, by Bayes Theorem,
${\begin{aligned}\mathbb {P} (x{\mbox{ received}}\mid y{\mbox{ sent}})&{}={\frac {\mathbb {P} (x{\mbox{ received}},y{\mbox{ sent}})}{\mathbb {P} (y{\mbox{ sent}})}}\\&{}=\mathbb {P} (y{\mbox{ sent}}\mid x{\mbox{ received}})\cdot {\frac {\mathbb {P} (x{\mbox{ received}})}{\mathbb {P} (y{\mbox{ sent}})}}.\end{aligned}}$
Here $\mathbb {P} (x{\mbox{ received}})$ is fixed, because $x$ is the word actually received, and $\mathbb {P} (y{\mbox{ sent}})$ is constant because all codewords are equally likely to be sent. Therefore, $\mathbb {P} (x{\mbox{ received}}\mid y{\mbox{ sent}})$ is maximised as a function of the variable $y$ precisely when $\mathbb {P} (y{\mbox{ sent}}\mid x{\mbox{ received}})$ is maximised, and the claim follows.
As with ideal observer decoding, a convention must be agreed to for non-unique decoding.
The maximum likelihood decoding problem can also be modeled as an integer programming problem.[1]
The maximum likelihood decoding algorithm is an instance of the "marginalize a product function" problem which is solved by applying the generalized distributive law.[2]
Minimum distance decoding
Given a received codeword $x\in \mathbb {F} _{2}^{n}$, minimum distance decoding picks a codeword $y\in C$ to minimise the Hamming distance:
$d(x,y)=\#\{i:x_{i}\not =y_{i}\}$
i.e. choose the codeword $y$ that is as close as possible to $x$.
Note that if the probability of error on a discrete memoryless channel $p$ is strictly less than one half, then minimum distance decoding is equivalent to maximum likelihood decoding, since if
$d(x,y)=d,\,$
then:
${\begin{aligned}\mathbb {P} (y{\mbox{ received}}\mid x{\mbox{ sent}})&{}=(1-p)^{n-d}\cdot p^{d}\\&{}=(1-p)^{n}\cdot \left({\frac {p}{1-p}}\right)^{d}\\\end{aligned}}$
which (since p is less than one half) is maximised by minimising d.
Minimum distance decoding is also known as nearest neighbour decoding. It can be assisted or automated by using a standard array. Minimum distance decoding is a reasonable decoding method when the following conditions are met:
1. The probability $p$ that an error occurs is independent of the position of the symbol.
2. Errors are independent events – an error at one position in the message does not affect other positions.
These assumptions may be reasonable for transmissions over a binary symmetric channel. They may be unreasonable for other media, such as a DVD, where a single scratch on the disk can cause an error in many neighbouring symbols or codewords.
As with other decoding methods, a convention must be agreed to for non-unique decoding.
Syndrome decoding
Syndrome decoding is a highly efficient method of decoding a linear code over a noisy channel, i.e. one on which errors are made. In essence, syndrome decoding is minimum distance decoding using a reduced lookup table. This is allowed by the linearity of the code.[3]
Suppose that $C\subset \mathbb {F} _{2}^{n}$ is a linear code of length $n$ and minimum distance $d$ with parity-check matrix $H$. Then clearly $C$ is capable of correcting up to
$t=\left\lfloor {\frac {d-1}{2}}\right\rfloor $
errors made by the channel (since if no more than $t$ errors are made then minimum distance decoding will still correctly decode the incorrectly transmitted codeword).
Now suppose that a codeword $x\in \mathbb {F} _{2}^{n}$ is sent over the channel and the error pattern $e\in \mathbb {F} _{2}^{n}$ occurs. Then $z=x+e$ is received. Ordinary minimum distance decoding would look up the vector $z$ in a table of size $|C|$ for the nearest match - i.e. an element (not necessarily unique) $c\in C$ with
$d(c,z)\leq d(y,z)$
for all $y\in C$. Syndrome decoding takes advantage of the property of the parity-check matrix that:
$Hx=0$
for all $x\in C$. The syndrome of the received $z=x+e$ is defined to be:
$Hz=H(x+e)=Hx+He=0+He=He$
To perform ML decoding in a binary symmetric channel, one has to look up a precomputed table of size $2^{n-k}$, mapping $He$ to $e$.
Note that this is already of significantly less complexity than that of a standard array decoding.
However, under the assumption that no more than $t$ errors were made during transmission, the receiver can look up the value $He$ in a further reduced table of size
${\begin{matrix}\sum _{i=0}^{t}{\binom {n}{i}}\\\end{matrix}}$
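As a concrete sketch (an illustration assuming NumPy, not from the cited references), the following builds the reduced syndrome table for the [7, 4, 3] Hamming code, for which $t=1$ and the table has $\sum _{i=0}^{1}{\binom {7}{i}}=8=2^{n-k}$ entries, and uses it to correct a single bit flip:

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the [7, 4, 3] Hamming code; its columns are the
# binary representations of 1..7, and it corrects t = 1 error.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n, t = 7, 1

# reduced table: syndrome He -> lowest-weight error pattern e
table = {}
for w in range(t + 1):
    for support in combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(support)] = 1
        table.setdefault(tuple(H @ e % 2), e)

def decode(z):
    s = tuple(H @ z % 2)
    return (z + table[s]) % 2 if s in table else None   # None: > t errors detected

x = np.array([0, 0, 1, 0, 1, 1, 0])     # a codeword: H @ x % 2 == 0
z = x.copy()
z[0] ^= 1                               # the channel flips one bit
print(np.array_equal(decode(z), x))     # True
```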
List decoding
Main article: List decoding
Information set decoding
This is a family of Las Vegas-probabilistic methods, all based on the observation that it is easier to guess enough error-free positions than it is to guess all the error positions.
The simplest form is due to Prange: Let $G$ be the $k\times n$ generator matrix of $C$ used for encoding. Select $k$ columns of $G$ at random, and denote by $G'$ the corresponding submatrix of $G$. With reasonable probability $G'$ will have full rank, which means that if we let $c'$ be the sub-vector for the corresponding positions of any codeword $c=mG$ of $C$ for a message $m$, we can recover $m$ as $m=c'G'^{-1}$. Hence, if we are lucky and these $k$ positions of the received word $y$ contain no errors (and hence equal the corresponding positions of the sent codeword), then we may decode.
If $t$ errors occurred, the probability of such a fortunate selection of columns is given by $\textstyle {\binom {n-t}{k}}/{\binom {n}{k}}$.
This method has been improved in various ways, e.g. by Stern[4] and Canteaut and Sendrier.[5]
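A minimal sketch of Prange's form above (an illustration assuming NumPy; gf2_solve is a hypothetical helper performing Gaussian elimination over GF(2)):

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A m = b over GF(2) by Gaussian elimination; None if A is singular."""
    A = A.copy() % 2
    b = b.copy() % 2
    k = A.shape[0]
    for col in range(k):
        piv = next((r for r in range(col, k) if A[r, col]), None)
        if piv is None:
            return None
        A[[col, piv]] = A[[piv, col]]
        b[[col, piv]] = b[[piv, col]]
        for r in range(k):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b

def prange(G, y, t, rng, tries=2000):
    """Guess k error-free positions of y, recover m from them, and accept
    if the re-encoded word is within distance t of y."""
    k, n = G.shape
    for _ in range(tries):
        cols = rng.choice(n, size=k, replace=False)
        m = gf2_solve(G[:, cols].T, y[cols])   # solve m @ G' = y' over GF(2)
        if m is None:                          # G' not of full rank; retry
            continue
        c = m @ G % 2
        if np.sum(c != y) <= t:
            return c
    return None

G = np.array([[1, 0, 0, 0, 0, 1, 1],           # generator of the [7, 4, 3]
              [0, 1, 0, 0, 1, 0, 1],           # Hamming code (t = 1)
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
rng = np.random.default_rng(5)
c = np.array([1, 0, 1, 1]) @ G % 2
y = c.copy()
y[2] ^= 1                                      # one channel error
print(np.array_equal(prange(G, y, t=1, rng=rng), c))   # True
```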
Partial response maximum likelihood
Partial response maximum likelihood (PRML) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal.
Viterbi decoder
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using forward error correction based on a convolutional code. The Hamming distance is used as a metric for hard decision Viterbi decoders. The squared Euclidean distance is used as a metric for soft decision decoders.
Optimal decision decoding algorithm (ODDA)
The optimal decision decoding algorithm (ODDA) is a decoding scheme for an asymmetric two-way relay channel (TWRC) system.[6]
See also
• Don't care alarm
• Error detection and correction
• Forbidden input
References
1. Feldman, Jon; Wainwright, Martin J.; Karger, David R. (March 2005). "Using Linear Programming to Decode Binary Linear Codes". IEEE Transactions on Information Theory. 51 (3): 954–972. doi:10.1109/TIT.2004.842696. S2CID 3120399.
2. Aji, Srinivas M.; McEliece, Robert J. (March 2000). "The Generalized Distributive Law" (PDF). IEEE Transactions on Information Theory. 46 (2): 325–343. doi:10.1109/18.825794.
3. Beutelspacher, Albrecht; Rosenbaum, Ute (1998). Projective Geometry. Cambridge University Press. p. 190. ISBN 0-521-48277-1.
4. Stern, Jacques (1989). "A method for finding codewords of small weight". Coding Theory and Applications. Lecture Notes in Computer Science. Vol. 388. Springer-Verlag. pp. 106–113. doi:10.1007/BFb0019850. ISBN 978-3-540-51643-9.
5. Ohta, Kazuo; Pei, Dingyi, eds. (1998). Advances in Cryptology — ASIACRYPT'98. Lecture Notes in Computer Science. Vol. 1514. pp. 187–199. doi:10.1007/3-540-49649-1. ISBN 978-3-540-65109-3. S2CID 37257901.
6. Siamack Ghadimi (2020), Optimal decision decoding algorithm (ODDA) for an asymmetric TWRC system;, Universal Journal of Electrical and Electronic Engineering
Further reading
• Hill, Raymond (1986). A first course in coding theory. Oxford Applied Mathematics and Computing Science Series. Oxford University Press. ISBN 978-0-19-853803-5.
• Pless, Vera (1982). Introduction to the theory of error-correcting codes. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons. ISBN 978-0-471-08684-0.
• van Lint, Jacobus H. (1992). Introduction to Coding Theory. Graduate Texts in Mathematics (GTM). Vol. 86 (2 ed.). Springer-Verlag. ISBN 978-3-540-54894-2.
| Wikipedia |
Synge's theorem
In mathematics, specifically Riemannian geometry, Synge's theorem is a classical result relating the curvature of a Riemannian manifold to its topology. It is named for John Lighton Synge, who proved it in 1936.
Theorem and sketch of proof
Let M be a closed Riemannian manifold with positive sectional curvature. The theorem asserts:
• If M is even-dimensional and orientable, then M is simply connected.
• If M is odd-dimensional, then it is orientable.
In particular, a closed manifold of even dimension can support a positively curved Riemannian metric only if its fundamental group has one or two elements.
The proof of Synge's theorem can be summarized as follows.[1] Given a geodesic S1 → M with an orthogonal and parallel vector field along the geodesic (i.e. a parallel section of the normal bundle to the geodesic), then Synge's earlier computation of the second variation formula for arclength shows immediately that the geodesic may be deformed so as to shorten its length. The only tool used at this stage is the assumption on sectional curvature.
The construction of a parallel vector field along any path is automatic via parallel transport; the nontriviality in the case of a loop is whether the values at the endpoints coincide. This reduces to a problem of pure linear algebra: let V be a finite-dimensional real inner product space with T: V → V an orthogonal linear map with an eigenvector v with eigenvalue one. If the determinant of T is positive and the dimension of V is even, or alternatively if the determinant of T is negative and the dimension of V is odd, then there is an eigenvector w of T with eigenvalue one which is orthogonal to v. In context, V is the tangent space to M at a point of a geodesic loop, T is the parallel transport map defined by the loop, and v is the tangent vector to the geodesic.
Given any noncontractible loop in a complete Riemannian manifold, there is a representative of its (free) homotopy class which has minimal possible arclength, and it is a geodesic.[2] According to Synge's computation, this implies that there cannot be a parallel and orthogonal vector field along this geodesic. However:
• Orientability implies that the parallel transport map along every loop has positive determinant. Even-dimensionality then implies the existence of a parallel vector field, orthogonal to the geodesic.
• Non-orientability implies the non-contractible loop can be chosen so that the parallel transport map has negative determinant. Odd-dimensionality then implies the existence of a parallel vector field, orthogonal to the geodesic.
This contradiction establishes the non-existence of noncontractible loops in the first case, and the impossibility of non-orientability in the latter case.
Alan Weinstein later rephrased the proof so as to establish fixed points of isometries, rather than topological properties of the underlying manifold.[3]
References
1. do Carmo 1992, Section 9.3; Jost 2017, Theorem 6.1.2; Petersen 2016, Section 6.3.2.
2. Jost 2017, Theorem 1.5.1.
3. do Carmo 1992, Theorem 9.3.7.
Sources
• do Carmo, Manfredo Perdigão (1992). Riemannian geometry. Mathematics: Theory & Applications (Translated from the second Portuguese edition of 1979 original ed.). Boston, MA: Birkhäuser Boston. ISBN 978-0-8176-3490-2. MR 1138207. Zbl 0752.53001.
• Jost, Jürgen (2017). Riemannian geometry and geometric analysis. Universitext (Seventh edition of 1995 original ed.). Springer, Cham. doi:10.1007/978-3-319-61860-9. ISBN 978-3-319-61859-3. MR 3726907. Zbl 1380.53001.
• Petersen, Peter (2016). Riemannian geometry. Graduate Texts in Mathematics. Vol. 171 (Third edition of 1998 original ed.). Springer, Cham. doi:10.1007/978-3-319-26654-1. ISBN 978-3-319-26652-7. MR 3469435. Zbl 1417.53001.
• Synge, John Lighton (1936). "On the connectivity of spaces of positive curvature". Quarterly Journal of Mathematics. Oxford Series. 7 (1): 316–320. doi:10.1093/qmath/os-7.1.316. JFM 62.0861.04. Zbl 0015.41601.
| Wikipedia |
Synopsis of Pure Mathematics
Synopsis of Pure Mathematics[1] is a book by G. S. Carr, written in 1886.[2] The book attempted to summarize the state of most of the basic mathematics known at the time.
The book is noteworthy because it was a major source of information for the legendary and self-taught mathematician Srinivasa Ramanujan, who managed to obtain a loaned library copy from a friend in 1903.[3] Ramanujan reportedly studied the contents of the book in detail.[4] The book is generally acknowledged as a key element in awakening the genius of Ramanujan.[4]
Carr acknowledged the main sources of his book in its preface:
... In the Algebra, Theory of Equations, and Trigonometry sections, I am largely indebted to Todhunter's well-known treatises ...
In the section entitled Elementary Geometry, I have added to simpler propositions a selection of theorems from Townsend's Modern Geometry and Salmon's Conic Sections.
In Geometric Conics, the line of demonstration followed agrees, in the main, with that adopted in Drew's treatise on the subject. ...
The account of the C. G. S. system given in the preliminary section, has been compiled from a valuable contribution on the subject by Professor Everett, of Belfast, published by the Physical Society of London. ...
In addition to the authors already named, the following treatises have been consulted—Algebras, by Wood, Bourdon, and Lefebvre de Fourey; Snowball's Trigonometry; Salmon's Higher Algebra; the geometrical exercises in Pott's Euclid; and Geometrical Conics by Taylor, Jackson, and Renshaw.[5]
Bibliography
• Carr, George Shoobridge (1886), A synopsis of elementary results in pure mathematics containing propositions, formulae, and methods of analysis, with abridged demonstrations., Reprinted by Chelsea, 1970, London. Fr. Hodgson. Cambridge. Macmillan and Bowes, ISBN 978-0-8284-0239-2
References
1. The full title is, "A synopsis of elementary results in pure mathematics: containing propositions, formulæ, and methods of analysis, with abridged demonstrations. Supplemented by an index to the papers on pure mathematics which are to be found in the principal journals and transactions of learned societies, both English and foreign, of the present century"
2. "Review of A Synopsis of Elementary Results in Pure Mathematics by G. S. Carr". Science. XI (277): 251. 25 May 1888.
3. A to Z of mathematicians by Tucker McElroy 2005 ISBN 0-8160-5338-3 page 221
4. Collected papers of Srinivasa Ramanujan Srinivasa Ramanujan Aiyangar, Godfrey Harold Hardy, P. Veṅkatesvara Seshu Aiyar 2000 ISBN 0-8218-2076-1 page xii
5. Carr, G. S. (1886). A synopsis of elementary results in pure mathematics. London: F. Hodgson. pp. vi–ix.
External links
• Carr, George Shoobridge (1886), A synopsis of elementary results in pure mathematics containing propositions, formulae, and methods of analysis, with abridged demonstrations., London. Fr. Hodgson. Cambridge. Macmillan and Bowes - archive.org
• Carr, George Shoobridge (1886), A synopsis of elementary results in pure mathematics containing propositions, formulae, and methods of analysis, with abridged demonstrations., London. Fr. Hodgson. Cambridge. Macmillan and Bowes - archive.org
• Carr, George Shoobridge (1886), A synopsis of elementary results in pure mathematics containing propositions, formulae, and methods of analysis, with abridged demonstrations. (PDF), London. Fr. Hodgson. Cambridge. Macmillan and Bowes - rarebooksocietyofindia.org
| Wikipedia |
Syntactic monoid
In mathematics and computer science, the syntactic monoid $M(L)$ of a formal language $L$ is the smallest monoid that recognizes the language $L$.
Syntactic quotient
The free monoid on a given set is the monoid whose elements are all the strings of zero or more elements from that set, with string concatenation as the monoid operation and the empty string as the identity element. Given a subset $S$ of a free monoid $M$, one may define sets that consist of formal left or right inverses of elements in $S$. These are called quotients, and one may define right or left quotients, depending on which side one is concatenating. Thus, the right quotient of $S$ by an element $m$ from $M$ is the set
$S\ /\ m=\{u\in M\;\vert \;um\in S\}.$
Similarly, the left quotient is
$m\setminus S=\{u\in M\;\vert \;mu\in S\}.$
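When the language is given by a decidable membership test, both quotients are themselves decidable. The following is a minimal sketch in Python; the sample language (words of even length over {a, b}) and the function names are illustrative assumptions:

def in_L(w: str) -> bool:
    # Membership test for the sample language L: words of even length.
    return len(w) % 2 == 0

def right_quotient(m: str):
    # Membership test for L / m = {u : u·m ∈ L}.
    return lambda u: in_L(u + m)

def left_quotient(m: str):
    # Membership test for m \ L = {u : m·u ∈ L}.
    return lambda u: in_L(m + u)

q = right_quotient("ab")
print(q(""), q("a"))   # True False: ε ∈ L / "ab" but "a" ∉ L / "ab"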
Syntactic equivalence
The syntactic quotient induces an equivalence relation on $M$, called the syntactic relation, or syntactic equivalence (induced by $S$).
The right syntactic equivalence is the equivalence relation
$s\sim _{S}t\ \Leftrightarrow \ S\,/\,s\;=\;S\,/\,t\ \Leftrightarrow \ (\forall x\in M\colon \ xs\in S\Leftrightarrow xt\in S)$.
Similarly, the left syntactic equivalence is
$s\;{}_{S}{\sim }\;t\ \Leftrightarrow \ s\setminus S\;=\;t\setminus S\ \Leftrightarrow \ (\forall y\in M\colon \ sy\in S\Leftrightarrow ty\in S)$.
Observe that the right syntactic equivalence is a left congruence with respect to string concatenation and vice versa; i.e., $s\sim _{S}t\ \Rightarrow \ xs\sim _{S}xt\ $ for all $x\in M$.
The syntactic congruence or Myhill congruence[1] is defined as[2]
$s\equiv _{S}t\ \Leftrightarrow \ (\forall x,y\in M\colon \ xsy\in S\Leftrightarrow xty\in S)$.
The definition extends to a congruence defined by a subset $S$ of a general monoid $M$. A disjunctive set is a subset $S$ such that the syntactic congruence defined by $S$ is the equality relation.[3]
Let us call $[s]_{S}$ the equivalence class of $s$ for the syntactic congruence. The syntactic congruence is compatible with concatenation in the monoid, in that one has
$[s]_{S}[t]_{S}=[st]_{S}$
for all $s,t\in M$. Thus, the map sending each element $s$ to its class $[s]_{S}$ is a monoid morphism, and induces a quotient monoid
$M(S)=M\ /\ {\equiv _{S}}$.
This monoid $M(S)$ is called the syntactic monoid of $S$. It can be shown that it is the smallest monoid that recognizes $S$; that is, $M(S)$ recognizes $S$, and for every monoid $N$ recognizing $S$, $M(S)$ is a quotient of a submonoid of $N$. The syntactic monoid of $S$ is also the transition monoid of the minimal automaton of $S$.[1][2][4]
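Because the syntactic monoid of a regular language coincides with the transition monoid of its minimal automaton, it can be computed by closing the letter actions under composition. A minimal sketch in Python, assuming the two-state minimal automaton for the even-length words over {a, b} (the first example treated below):

states = (0, 1)                      # parity of the length read so far
delta = {(0, "a"): 1, (0, "b"): 1,   # each letter flips the parity
         (1, "a"): 0, (1, "b"): 0}

def action(word):
    # The state transformation induced by a word, written as a tuple.
    def run(q):
        for c in word:
            q = delta[(q, c)]
        return q
    return tuple(run(q) for q in states)

# Close the set of word actions under appending letters.
monoid, frontier = {action("")}, [""]
while frontier:
    w = frontier.pop()
    for c in "ab":
        t = action(w + c)
        if t not in monoid:
            monoid.add(t)
            frontier.append(w + c)

print(sorted(monoid))   # [(0, 1), (1, 0)]: the identity and the swap

The two transformations compose as the group of order 2, in agreement with the first example in the Examples section below.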
A group language is one for which the syntactic monoid is a group.[5]
Myhill–Nerode theorem
The Myhill–Nerode theorem states: a language $L$ is regular if and only if the family of quotients $\{m\setminus L\,\vert \;m\in M\}$ is finite, or equivalently, the left syntactic equivalence ${}_{S}{\sim }$ has finite index (meaning it partitions $M$ into finitely many equivalence classes).[6]
This theorem was first proved by Anil Nerode (Nerode 1958) and the relation ${}_{S}{\sim }$ is thus referred to as Nerode congruence by some authors.[7][8]
Proof
The proof of the "only if" part is as follows. Assume that a finite automaton recognizing $L$ reads input $x$, which leads to state $p$. If $y$ is another string read by the machine, also terminating in the same state $p$, then clearly one has $x\setminus L\,=y\setminus L$. Thus, the number of elements in $\{m\setminus L\,\vert \;m\in M\}$ is at most equal to the number of states of the automaton and $\{m\setminus L\,\vert \;m\in L\}$ is at most the number of final states.
For a proof of the "if" part, assume that the number of elements in $\{m\setminus L\,\vert \;m\in M\}$ is finite. One can then construct an automaton where $Q=\{m\setminus L\,\vert \;m\in M\}$ is the set of states, $F=\{m\setminus L\,\vert \;m\in L\}$ is the set of final states, the language $L$ is the initial state, and the transition function is given by $\delta _{y}\colon x\setminus L\to y\setminus (x\setminus L)=(xy)\setminus L$. Clearly, this automaton recognizes $L$.
Thus, a language $L$ is recognizable if and only if the set $\{m\setminus L\,\vert \;m\in M\}$ is finite. Note that this proof also builds the minimal automaton.
Examples
• Let $L$ be the language over $A=\{a,b\}$ of words of even length. The syntactic congruence has two classes, $L$ itself and $L_{1}$, the words of odd length. The syntactic monoid is the group of order 2 on $\{L,L_{1}\}$.[9]
• For the language $(ab+ba)^{*}$, the minimal automaton has 4 states and the syntactic monoid has 15 elements.[10]
• The bicyclic monoid is the syntactic monoid of the Dyck language (the language of balanced sets of parentheses).
• The free monoid on $A$ (where $\left|A\right|>1$) is the syntactic monoid of the language $\{ww^{R}\mid w\in A^{*}\}$, where $w^{R}$ is the reversal of the word $w$. (For $\left|A\right|=1$, one can use the language of square powers of the letter.)
• Every non-trivial finite monoid is homomorphic to the syntactic monoid of some non-trivial language,[11] but not every finite monoid is isomorphic to a syntactic monoid.[12]
• Every finite group is isomorphic to the syntactic monoid of some regular language.[11]
• The language over $\{a,b\}$ in which the number of occurrences of $a$ and $b$ are congruent modulo $2^{n}$ is a group language with syntactic monoid $\mathbb {Z} /2^{n}\mathbb {Z} $.[5]
• Trace monoids are examples of syntactic monoids.
• Marcel-Paul Schützenberger[13] characterized star-free languages as those with finite aperiodic syntactic monoids.[14]
References
1. Holcombe (1982) p.160
2. Lawson (2004) p.210
3. Lawson (2004) p.232
4. Straubing (1994) p.55
5. Sakarovitch (2009) p.342
6. Nerode, Anil (1958), "Linear Automaton Transformations", Proceedings of the American Mathematical Society, 9 (4): 541–544, doi:10.1090/S0002-9939-1958-0135681-9, JSTOR 2033204
7. Brzozowski, Janusz; Szykuła, Marek; Ye, Yuli (2018), "Syntactic Complexity of Regular Ideals", Theory of Computing Systems, 62 (5): 1175–1202, doi:10.1007/s00224-017-9803-8, hdl:10012/12499, S2CID 2238325
8. Crochemore, Maxime; et al. (2009), "From Nerode's congruence to suffix automata with mismatches", Theoretical Computer Science, 410 (37): 3471–3480, doi:10.1016/j.tcs.2009.03.011, S2CID 14277204
9. Straubing (1994) p.54
10. Lawson (2004) pp.211-212
11. McNaughton, Robert; Papert, Seymour (1971). Counter-free Automata. Research Monograph. Vol. 65. With an appendix by William Henneman. MIT Press. p. 48. ISBN 0-262-13076-9. Zbl 0232.94024.
12. Lawson (2004) p.233
13. Marcel-Paul Schützenberger (1965). "On finite monoids having only trivial subgroups" (PDF). Information and Computation. 8 (2): 190–194. doi:10.1016/s0019-9958(65)90108-7.
14. Straubing (1994) p.60
• Anderson, James A. (2006). Automata theory with modern applications. With contributions by Tom Head. Cambridge: Cambridge University Press. ISBN 0-521-61324-8. Zbl 1127.68049.
• Holcombe, W.M.L. (1982). Algebraic automata theory. Cambridge Studies in Advanced Mathematics. Vol. 1. Cambridge University Press. ISBN 0-521-60492-3. Zbl 0489.68046.
• Lawson, Mark V. (2004). Finite automata. Chapman and Hall/CRC. ISBN 1-58488-255-7. Zbl 1086.68074.
• Pin, Jean-Éric (1997). "10. Syntactic semigroups". In Rozenberg, G.; Salomaa, A. (eds.). Handbook of Formal Language Theory (PDF). Vol. 1. Springer-Verlag. pp. 679–746. Zbl 0866.68057.
• Sakarovitch, Jacques (2009). Elements of automata theory. Translated from the French by Reuben Thomas. Cambridge University Press. ISBN 978-0-521-84425-3. Zbl 1188.68177.
• Straubing, Howard (1994). Finite automata, formal logic, and circuit complexity. Progress in Theoretical Computer Science. Basel: Birkhäuser. ISBN 3-7643-3719-2. Zbl 0816.68086.
| Wikipedia |
Syntactic predicate
A syntactic predicate specifies the syntactic validity of applying a production in a formal grammar and is analogous to a semantic predicate that specifies the semantic validity of applying a production. It is a simple and effective means of dramatically improving the recognition strength of an LL parser by providing arbitrary lookahead. In their original implementation, syntactic predicates had the form “( α )?” and could only appear on the left edge of a production. The required syntactic condition α could be any valid context-free grammar fragment.
More formally, a syntactic predicate is a form of production intersection, used in parser specifications or in formal grammars. In this sense, the term predicate has the meaning of a mathematical indicator function. If p1 and p2, are production rules, the language generated by both p1 and p2 is their set intersection.
As typically defined or implemented, syntactic predicates implicitly order the productions so that predicated productions specified earlier have higher precedence than predicated productions specified later within the same decision. This conveys an ability to disambiguate ambiguous productions because the programmer can simply specify which production should match.
Parsing expression grammars (PEGs), invented by Bryan Ford, extend these simple predicates by allowing "not predicates" and permitting a predicate to appear anywhere within a production. Moreover, Ford invented packrat parsing to handle these grammars in linear time by employing memoization, at the cost of heap space.
It is possible to support linear-time parsing of predicates as general as those allowed by PEGs, but reduce the memory cost associated with memoization by avoiding backtracking where some more efficient implementation of lookahead suffices. This approach is implemented by ANTLR version 3, which uses deterministic finite automata for lookahead; this may require testing a predicate in order to choose between transitions of the DFA (called "pred-LL(*)" parsing).[1]
Overview
Terminology
The term syntactic predicate was coined by Parr & Quong and differentiates this form of predicate from semantic predicates (also discussed).[2]
Syntactic predicates have been called multi-step matching, parse constraints, and simply predicates in various literature. (See References section below.) This article uses the term syntactic predicate throughout for consistency and to distinguish them from semantic predicates.
Formal closure properties
Bar-Hillel et al.[3] show that the intersection of two regular languages is also a regular language, which is to say that the regular languages are closed under intersection.
The intersection of a regular language and a context-free language is also closed, and it has been known at least since Hartmanis[4] that the intersection of two context-free languages is not necessarily a context-free language (and is thus not closed). This can be demonstrated easily using the canonical Type 1 language, $L=\{a^{n}b^{n}c^{n}:n\geq 1\}$:
Let $L_{1}=\{a^{m}b^{n}c^{n}:m,n\geq 1\}$ (Type 2)
Let $L_{2}=\{a^{n}b^{n}c^{m}:m,n\geq 1\}$ (Type 2)
Let $L_{3}=L_{1}\cap L_{2}$
Given the strings abcc, aabbc, and aaabbbccc, it is clear that the only string that belongs to both L1 and L2 (that is, the only one that produces a non-empty intersection) is aaabbbccc.
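This can be checked mechanically. A minimal sketch in Python (the run-length decomposition is an illustrative implementation choice, not part of the formal argument):

import re

def runs(s):
    # Split a word of the shape a...ab...bc...c into its three run lengths.
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return (len(m[1]), len(m[2]), len(m[3])) if m else None

def in_L1(s):   # a^m b^n c^n
    t = runs(s)
    return t is not None and t[1] == t[2]

def in_L2(s):   # a^n b^n c^m
    t = runs(s)
    return t is not None and t[0] == t[1]

for w in ["abcc", "aabbc", "aaabbbccc"]:
    print(w, in_L1(w) and in_L2(w))   # only aaabbbccc lies in L1 ∩ L2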
Other considerations
In most formalisms that use syntactic predicates, the syntax of the predicate is noncommutative, which is to say that the operation of predication is ordered. For instance, using the above example, consider the following pseudo-grammar, where X ::= Y PRED Z is understood to mean: "Y produces X if and only if Y also satisfies predicate Z":
S ::= a X
X ::= Y PRED Z
Y ::= a+ BNCN
Z ::= ANBN c+
BNCN ::= b [BNCN] c
ANBN ::= a [ANBN] b
Given the string aaaabbbccc, in the case where Y must be satisfied first (and assuming a greedy implementation), S will generate aX and X in turn will generate aaabbbccc, thereby generating aaaabbbccc. In the case where Z must be satisfied first, ANBN will fail to generate aaaabbb, and thus aaaabbbccc is not generated by the grammar. Moreover, if either Y or Z (or both) specify any action to be taken upon reduction (as would be the case in many parsers), the order that these productions match determines the order in which those side-effects occur. Formalisms that vary over time (such as adaptive grammars) may rely on these side effects.
ANTLR
Parr & Quong[5] give this example of a syntactic predicate:
stat: (declaration)? declaration
| expression
;
which is intended to satisfy the following informally stated[6] constraints of C++:
1. If it looks like a declaration, it is; otherwise
2. if it looks like an expression, it is; otherwise
3. it is a syntax error.
In the first production of rule stat, the syntactic predicate (declaration)? indicates that declaration is the syntactic context that must be present for the rest of that production to succeed. We can interpret the use of (declaration)? as "I am not sure if declaration will match; let me try it out and, if it does not match, I shall try the next alternative." Thus, when encountering a valid declaration, the rule declaration will be recognized twice—once as syntactic predicate and once during the actual parse to execute semantic actions.
Of note in the above example is the fact that any code triggered by the acceptance of the declaration production will only occur if the predicate is satisfied.
Canonical examples
The language $L=\{a^{n}b^{n}c^{n}|n\geq 1\}$ can be represented in various grammars and formalisms as follows:
Parsing Expression Grammars
S ← &(A !b) a+ B !c
A ← a A? b
B ← b B? c
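The PEG above translates directly into a recursive-descent recognizer in which the predicates inspect the input without consuming it. A minimal sketch in Python; the function names mirror the nonterminals, and the matches helper is an illustrative assumption:

def A(s, i):
    # A ← a A? b   (A? commits to A when it succeeds, as in PEG semantics)
    if i < len(s) and s[i] == "a":
        j = A(s, i + 1)
        k = j if j is not None else i + 1
        if k < len(s) and s[k] == "b":
            return k + 1
    return None

def B(s, i):
    # B ← b B? c
    if i < len(s) and s[i] == "b":
        j = B(s, i + 1)
        k = j if j is not None else i + 1
        if k < len(s) and s[k] == "c":
            return k + 1
    return None

def S(s, i=0):
    # S ← &(A !b) a+ B !c
    j = A(s, i)
    if j is None or (j < len(s) and s[j] == "b"):
        return None                    # the predicate &(A !b) failed
    k = i                              # the predicate consumed no input
    while k < len(s) and s[k] == "a":  # a+
        k += 1
    if k == i:
        return None
    j = B(s, k)
    if j is None or (j < len(s) and s[j] == "c"):
        return None                    # B or !c failed
    return j

def matches(s):
    return S(s) == len(s)

print(matches("aabbcc"), matches("abc"), matches("aabbbccc"))
# True True False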
§-Calculus
Using a bound predicate:
S → {A}B
A → X 'c+'
X → 'a' [X] 'b'
B → 'a+' Y
Y → 'b' [Y] 'c'
Using two free predicates:
A → <'a+'>a <'b+'>b Ψ(a b)X <'c+'>c Ψ(b c)Y
X → 'a' [X] 'b'
Y → 'b' [Y] 'c'
Conjunctive Grammars
(Note: the following example actually generates $L=\{a^{n}b^{n}c^{n}|n\geq 0\}$, but is included here because it is the example given by the inventor of conjunctive grammars.[7]):
S → AB&DC
A → aA | ε
B → bBc | ε
C → cC | ε
D → aDb | ε
Perl 6 rules
rule S { <before <A> <!before b>> a+ <B> <!before c> }
rule A { a <A>? b }
rule B { b <B>? c }
Parsers/formalisms using some form of syntactic predicate
Although by no means an exhaustive list, the following parsers and grammar formalisms employ syntactic predicates:
ANTLR (Parr & Quong)
As originally implemented,[2] syntactic predicates sit on the leftmost edge of a production such that the production to the right of the predicate is attempted if and only if the syntactic predicate first accepts the next portion of the input stream. Although ordered, the predicates are checked first, with parsing of a clause continuing if and only if the predicate is satisfied, and semantic actions only occurring in non-predicates.[5]
Augmented Pattern Matcher (Balmas)
Balmas refers to syntactic predicates as "multi-step matching" in her paper on APM.[8] As an APM parser parses, it can bind substrings to a variable, and later check this variable against other rules, continuing to parse if and only if that substring is acceptable to further rules.
Parsing expression grammars (Ford)
Ford's PEGs have syntactic predicates expressed as the and-predicate and the not-predicate.[9]
§-Calculus (Jackson)
In the §-Calculus, syntactic predicates are originally called simply predicates, but are later divided into bound and free forms, each with different input properties.[10]
Raku rules
Raku introduces a generalized tool for describing a grammar called rules, which are an extension of Perl 5's regular expression syntax.[11] Predicates are introduced via a lookahead mechanism called before, either with "<before ...>" or "<!before ...>" (that is: "not before"). Perl 5 also has such lookahead, but it can only encapsulate Perl 5's more limited regexp features.
ProGrammar (NorKen Technologies)
ProGrammar's GDL (Grammar Definition Language) makes use of syntactic predicates in a form called parse constraints.[12]
Conjunctive and Boolean Grammars (Okhotin)
Conjunctive grammars, first introduced by Okhotin,[13] introduce the explicit notion of conjunction-as-predication. Later treatment of conjunctive and boolean grammars[14] is the most thorough treatment of this formalism to date.
References
1. Parr, Terence (2007). The Definitive ANTLR Reference: Building Domain-Specific Languages. The Pragmatic Programmers. p. 328. ISBN 978-3-540-63293-1.
2. Parr, Terence J.; Quong, Russell (October 1993). "Adding Semantic and Syntactic Predicates to LL(k) parsing: pred-LL(k)". Army High Performance Computing Research Center Preprint No. 93-096. pp. 263–277.
3. Bar-Hillel, Y.; Perles, M.; Shamir, E. (1961). "On formal properties of simple phrase structure grammars". Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung. 14 (2): 143–172..
4. Hartmanis, Juris (1967). "Context-Free Languages and Turing Machine Computations". Proceedings of Symposia in Applied Mathematics. Mathematical Aspects of Computer Science. AMS. 19: 42–51. doi:10.1090/psapm/019/0235938. ISBN 9780821867280.
5. Parr, Terence; Quong, Russell (July 1995). "ANTLR: A Predicated-LL(k) Parser Generator" (PDF). Software: Practice and Experience. 25 (7): 789–810. doi:10.1002/spe.4380250705. S2CID 13453016.
6. Stroustrup, Bjarne; Ellis, Margaret A. (1990). The Annotated C++ Reference Manual. Addison-Wesley. ISBN 9780201514599.
7. Okhotin, Alexander (2001). "Conjunctive grammars" (PDF). Journal of Automata, Languages and Combinatorics. 6 (4): 519–535. doi:10.25596/jalc-2001-519. S2CID 18009960. Archived from the original (PDF) on 26 June 2019.
8. Balmas, Françoise (20–23 September 1994). "An Augmented Pattern Matcher as a Tool to Synthesize Conceptual Descriptions of Programs". Proceedings KBSE '94. Ninth Knowledge-Based Software Engineering Conference. Proceedings of the Ninth Knowledged-Based Software Engineering Conference. Monterey, California. pp. 150–157. doi:10.1109/KBSE.1994.342667. ISBN 0-8186-6380-4.
9. Ford, Bryan (September 2002). Packrat Parsing: a Practical Linear-Time Algorithm with Backtracking (Master’s thesis). Massachusetts Institute of Technology.
10. Jackson, Quinn Tyler (March 2006). Adapting to Babel: Adaptivity & Context-Sensitivity in Parsing. Plymouth, Massachusetts: Ibis Publishing. CiteSeerX 10.1.1.403.8977.
11. Wall, Larry (2002–2006). "Synopsis 5: Regexes and Rules".
12. "Grammar Definition Language". NorKen Technologies.
13. Okhotin, Alexander (2000). "On Augmenting the Formalism of Context-Free Grammars with an Intersection Operation". Proceedings of the Fourth International Conference "Discrete Models in the Theory of Control Systems" (in Russian): 106–109.
14. Okhotin, Alexander (August 2004). Boolean Grammars: Expressive Power and Algorithms (Doctoral thesis). Kingston, Ontario: School of Computing, Queens University.
External links
• ANTLR site
• Alexander Okhotin's Conjunctive Grammars Page
• Alexander Okhotin's Boolean Grammars Page
• The Packrat Parsing and Parsing Expression Grammars Page
| Wikipedia |
Optical heterodyne detection
Optical heterodyne detection is a method of extracting information encoded as modulation of the phase, frequency or both of electromagnetic radiation in the wavelength band of visible or infrared light. The light signal is compared with standard or reference light from a "local oscillator" (LO) that would have a fixed offset in frequency and phase from the signal if the latter carried null information. "Heterodyne" signifies more than one frequency, in contrast to the single frequency employed in homodyne detection.[1]
The comparison of the two light signals is typically accomplished by combining them in a photodiode detector, which has a response that is linear in energy, and hence quadratic in amplitude of electromagnetic field. Typically, the two light frequencies are similar enough that their difference or beat frequency produced by the detector is in the radio or microwave band that can be conveniently processed by electronic means.
This technique became widely applicable to topographical and velocity-sensitive imaging with the invention in the 1990s of synthetic array heterodyne detection.[2] The light reflected from a target scene is focussed on a relatively inexpensive photodetector consisting of a single large physical pixel, while a different LO frequency is also tightly focussed on each virtual pixel of this detector, resulting in an electrical signal from the detector carrying a mixture of beat frequencies that can be electronically isolated and distributed spatially to present an image of the scene.[2]
History
Optical heterodyne detection began to be studied at least as early as 1962, within two years of the construction of the first laser.[3] However, laser illumination is not the only way to produce spatially coherent light. In 1995, Guerra[4] published results in which he used a "form of optical heterodyning" to detect and image a grating with frequency many times smaller than the illuminating wavelength, and therefore smaller than the resolution, or passband, of the microscope, by beating it against a local oscillator in the form of a similar but transparent grating. A form of super-resolution microscopy, this work continues to spawn a family of microscopes of particular use in the life sciences, known as "structured illumination microscopy". Polaroid Corp. patented Guerra's invention in 1997.[5]
Contrast to conventional radio frequency (RF) heterodyne detection
It is instructive to contrast the practical aspects of optical band detection to radio frequency (RF) band heterodyne detection.
Energy versus electric field detection
Unlike RF band detection, optical frequencies oscillate too rapidly to directly measure and process the electric field electronically. Instead optical photons are (usually) detected by absorbing the photon's energy, thus only revealing the magnitude, and not by following the electric field phase. Hence the primary purpose of heterodyne mixing is to down shift the signal from the optical band to an electronically tractable frequency range.
In RF band detection, typically, the electromagnetic field drives oscillatory motion of electrons in an antenna; the captured EMF is subsequently electronically mixed with a local oscillator (LO) by any convenient non-linear circuit element with a quadratic term (most commonly a rectifier). In optical detection, the desired non-linearity is inherent in the photon absorption process itself. Conventional light detectors, so-called "square-law detectors", respond to the photon energy by freeing bound electrons, and since the energy flux scales as the square of the electric field, so does the rate at which electrons are freed. A difference frequency only appears in the detector output current when both the LO and signal illuminate the detector at the same time, causing the square of their combined fields to have a cross term or "difference" frequency modulating the average rate at which free electrons are generated.
Wideband local oscillators for coherent detection
Another point of contrast is the expected bandwidth of the signal and local oscillator. Typically, an RF local oscillator is a pure frequency; pragmatically, "purity" means that a local oscillator's frequency bandwidth is much much less than the difference frequency. With optical signals, even with a laser, it is not simple to produce a reference frequency sufficiently pure to have either an instantaneous bandwidth or long term temporal stability that is less than a typical megahertz or kilohertz scale difference frequency. For this reason, the same source is often used to produce the LO and the signal so that their difference frequency can be kept constant even if the center frequency wanders.
As a result, the mathematics of squaring the sum of two pure tones, normally invoked to explain RF heterodyne detection, is an oversimplified model of optical heterodyne detection. Nevertheless, the intuitive pure-frequency heterodyne concept still holds perfectly for the wideband case provided that the signal and LO are mutually coherent. Crucially, one can obtain narrow-band interference from coherent broadband sources: this is the basis for white light interferometry and optical coherence tomography. Mutual coherence permits the rainbow in Newton's rings, and supernumerary rainbows.
Consequently, optical heterodyne detection is usually performed as interferometry where the LO and signal share a common origin, rather than, as in radio, a transmitter sending to a remote receiver. The remote receiver geometry is uncommon because generating a local oscillator signal that is coherent with a signal of independent origin is technologically difficult at optical frequencies. However, lasers of sufficiently narrow linewidth to allow the signal and LO to originate from different lasers do exist.[6]
Photon counting
After optical heterodyne became an established technique, consideration was given to the conceptual basis for operation at such low signal light levels that "only a few, or even fractions of, photons enter the receiver in a characteristic time interval".[7] It was concluded that even when photons of different energies are absorbed at a countable rate by a detector at different (random) times, the detector can still produce a difference frequency. Hence light seems to have wave-like properties not only as it propagates through space, but also when it interacts with matter.[8] Progress with photon counting was such that by 2008 it was proposed that, even with larger signal strengths available, it could be advantageous to employ local oscillator power low enough to allow detection of the beat signal by photon counting. This was understood to have a main advantage of imaging with available and rapidly developing large-format multi-pixel counting photodetectors.[9]
Photon counting was applied with frequency-modulated continuous wave (FMCW) lasers. Numerical algorithms were developed to optimize the statistical performance of the analysis of the data from photon counting.[10][11][12]
Key benefits
Gain in the detection
The amplitude of the down-mixed difference frequency can be larger than the amplitude of the original signal itself. The difference frequency signal is proportional to the product of the amplitudes of the LO and signal electric fields. Thus the larger the LO amplitude, the larger the difference-frequency amplitude. Hence there is gain in the photon conversion process itself.
$I\propto \left[E_{\mathrm {sig} }\cos(\omega _{\mathrm {sig} }t+\varphi )+E_{\mathrm {LO} }\cos(\omega _{\mathrm {LO} }t)\right]^{2}\propto {\frac {1}{2}}E_{\mathrm {sig} }^{2}+{\frac {1}{2}}E_{\mathrm {LO} }^{2}+2E_{\mathrm {LO} }E_{\mathrm {sig} }\cos(\omega _{\mathrm {sig} }t+\varphi )\cos(\omega _{\mathrm {LO} }t)$
The first two terms are proportional to the average (DC) energy flux absorbed (or, equivalently, the average current in the case of photon counting). The third term is time varying and creates the sum and difference frequencies. In the optical regime the sum frequency will be too high to pass through the subsequent electronics. In many applications the signal is weaker than the LO, thus it can be seen that gain occurs because the energy flux in the difference frequency $E_{\mathrm {LO} }E_{\mathrm {sig} }$ is greater than the DC energy flux of the signal by itself $E_{\mathrm {sig} }^{2}$.
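The down-mixing and the gain can be illustrated numerically. A minimal sketch in Python with NumPy, using illustrative radio-scale frequencies (a 1.00 MHz signal against a 1.05 MHz LO) rather than optical ones so the simulation stays small; all numerical values are assumptions chosen for the demonstration:

import numpy as np

fs = 20e6                              # sample rate, Hz
t = np.arange(4000) / fs               # 200 microseconds of samples
E_sig, E_LO = 0.1, 1.0                 # weak signal, strong local oscillator
field = (E_sig * np.cos(2 * np.pi * 1.00e6 * t + 0.3)
         + E_LO * np.cos(2 * np.pi * 1.05e6 * t))

current = field ** 2                   # square-law (energy) detection

spectrum = np.abs(np.fft.rfft(current))
freqs = np.fft.rfftfreq(len(current), 1 / fs)
band = (freqs > 1e3) & (freqs < 5e5)   # electronic filter around the beat
print(freqs[band][np.argmax(spectrum[band])])   # 50000.0: the 50 kHz beat

The recovered beat component has amplitude $E_{\mathrm {LO} }E_{\mathrm {sig} }$, larger than the signal's own energy flux term ${\tfrac {1}{2}}E_{\mathrm {sig} }^{2}$, which is the mixing gain described above; in a physical detector the sum-frequency and optical-harmonic terms never appear at all, since they lie far beyond the detector's electrical bandwidth.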
Preservation of optical phase
By itself, the signal beam's energy flux, $E_{\mathrm {sig} }^{2}$, is DC and thus erases the phase associated with its optical frequency; Heterodyne detection allows this phase to be detected. If the optical phase of the signal beam shifts by an angle phi, then the phase of the electronic difference frequency shifts by exactly the same angle phi. More properly, to discuss an optical phase shift one needs to have a common time base reference. Typically the signal beam is derived from the same laser as the LO but shifted by some modulator in frequency. In other cases, the frequency shift may arise from reflection from a moving object. As long as the modulation source maintains a constant offset phase between the LO and signal source, any added optical phase shifts over time arising from external modification of the return signal are added to the phase of the difference frequency and thus are measurable.
Mapping optical frequencies to electronic frequencies allows sensitive measurements
As noted above, the difference frequency linewidth can be much smaller than the optical linewidth of the signal and LO signal, provided the two are mutually coherent. Thus small shifts in optical signal center-frequency can be measured: For example, Doppler lidar systems can discriminate wind velocities with a resolution better than 1 meter per second, which is less than a part in a billion Doppler shift in the optical frequency. Likewise small coherent phase shifts can be measured even for nominally incoherent broadband light, allowing optical coherence tomography to image micrometer-sized features. Because of this, an electronic filter can define an effective optical frequency bandpass that is narrower than any realizable wavelength filter operating on the light itself, and thereby enable background light rejection and hence the detection of weak signals.
Noise reduction to shot noise limit
As with any small signal amplification, it is most desirable to get gain as close as possible to the initial point of the signal interception: moving the gain ahead of any signal processing reduces the additive contributions of effects like resistor Johnson–Nyquist noise, or electrical noises in active circuits. In optical heterodyne detection, the mixing-gain happens directly in the physics of the initial photon absorption event, making this ideal. Additionally, to a first approximation, absorption is perfectly quadratic, in contrast to RF detection by a diode non-linearity.
One of the virtues of heterodyne detection is that the difference frequency is generally far removed spectrally from the potential noises radiated during the process of generating either the signal or the LO signal, thus the spectral region near the difference frequency may be relatively quiet. Hence, narrow electronic filtering near the difference frequency is highly effective at removing the remaining, generally broadband, noise sources.
The primary remaining source of noise is photon shot noise from the nominally constant DC level, which is typically dominated by the Local Oscillator (LO). Since the shot noise scales as the amplitude of the LO electric field level, and the heterodyne gain also scales the same way, the ratio of the shot noise to the mixed signal is constant no matter how large the LO.
Thus in practice one increases the LO level until the gain on the signal raises it above all other additive noise sources, leaving only the shot noise. In this limit, the signal to noise ratio is affected by the shot noise of the signal only (i.e. there is no noise contribution from the powerful LO because it divided out of the ratio). At that point there is no change in the signal to noise as the gain is raised further. (Of course, this is a highly idealized description; practical limits on the LO intensity matter in real detectors, and an impure LO might carry some noise at the difference frequency.)
Key problems and their solutions
Array detection and imaging
Array detection of light, i.e. detecting light in a large number of independent detector pixels, is common in digital camera image sensors. However, it tends to be quite difficult in heterodyne detection, since the signal of interest is oscillating (also called AC by analogy to circuits), often at millions of cycles per second or more. At the typical frame rates for image sensors, which are much slower, each pixel would integrate the total light received over many oscillation cycles, and this time-integration would destroy the signal of interest. Thus a heterodyne array must usually have parallel direct connections from every sensor pixel to separate electrical amplifiers, filters, and processing systems. This makes large, general purpose, heterodyne imaging systems prohibitively expensive. For example, simply attaching 1 million leads to a megapixel coherent array is a daunting challenge.
To solve this problem, synthetic array heterodyne detection (SAHD) was developed.[2] In SAHD, large imaging arrays can be multiplexed into virtual pixels on a single element detector with single readout lead, single electrical filter, and single recording system.[13] The time domain conjugate of this approach is Fourier transform heterodyne detection,[14] which also has the multiplex advantage and also allows a single element detector to act like an imaging array. SAHD has been implemented as Rainbow heterodyne detection[15][16] in which instead of a single frequency LO, many narrowly spaced frequencies are spread out across the detector element surface like a rainbow. The physical position where each photon arrived is encoded in the resulting difference frequency itself, making a virtual 1D array on a single element detector. If the frequency comb is evenly spaced then, conveniently, the Fourier transform of the output waveform is the image itself. Arrays in 2D can be created as well, and since the arrays are virtual, the number of pixels, their size, and their individual gains can be adapted dynamically. The multiplex disadvantage is that the shot noise from all the pixels combine since they are not physically separated.
Speckle and diversity reception
As discussed, the LO and signal must be temporally coherent. They also need to be spatially coherent across the face of the detector or they will destructively interfere. In many usage scenarios the signal is reflected from optically rough surfaces or passes through optically turbulent media leading to wavefronts that are spatially incoherent. In laser scattering this is known as speckle.[17]
In RF detection the antenna is rarely larger than the wavelength so all excited electrons move coherently within the antenna, whereas in optics the detector is usually much larger than the wavelength and thus can intercept a distorted phase front, resulting in destructive interference by out-of-phase photo-generated electrons within the detector.
While destructive interference dramatically reduces the signal level, the summed amplitude of a spatially incoherent mixture does not approach zero but rather the mean amplitude of a single speckle.[17] However, since the standard deviation of the coherent sum of the speckles is exactly equal to the mean speckle intensity, optical heterodyne detection of scrambled phase fronts can never measure the absolute light level with an error bar less than the size of the signal itself. This upper bound signal-to-noise ratio of unity is only for absolute magnitude measurement: it can have signal-to-noise ratio better than unity for phase, frequency or time-varying relative-amplitude measurements in a stationary speckle field.
In RF detection, "diversity reception" is often used to mitigate low signals when the primary antenna is inadvertently located at an interference null point: by having more than one antenna one can adaptively switch to whichever antenna has the strongest signal or even incoherently add all of the antenna signals. Simply adding the antennae coherently can produce destructive interference just as happens in the optical realm.
The analogous diversity reception for optical heterodyne has been demonstrated with arrays of photon-counting detectors.[9] For incoherent addition of the multiple element detectors in a random speckle field, the ratio of the mean to the standard deviation will scale as the square root of the number of independently measured speckles. This improved signal-to-noise ratio makes absolute amplitude measurements feasible in heterodyne detection.
However, as noted above, scaling physical arrays to large element counts is challenging for heterodyne detection due to the oscillating or even multi-frequency nature of the output signal. Instead, a single-element optical detector can also act like diversity receiver via synthetic array heterodyne detection or Fourier transform heterodyne detection. With a virtual array one can then either adaptively select just one of the LO frequencies, track a slowly moving bright speckle, or add them all in post-processing by the electronics.
Coherent temporal summation
One can incoherently add the magnitudes of a time series of N independent pulses to obtain a √N improvement in the signal to noise on the amplitude, but at the expense of losing the phase information. Instead coherent addition (adding the complex magnitude and phase) of multiple pulse waveforms would improve the signal to noise by a factor of N, not its square root, and preserve the phase information. The practical limitation is adjacent pulses from typical lasers have a minute frequency drift that translates to a large random phase shift in any long distance return signal, and thus just like the case for spatially scrambled-phase pixels, destructively interfere when added coherently. However, coherent addition of multiple pulses is possible with advanced laser systems that narrow the frequency drift far below the difference frequency (intermediate frequency). This technique has been demonstrated in multi-pulse coherent Doppler LIDAR.[18]
See also
• Rainbow heterodyne detection
• Interferometry
• Heterodyne
• Superheterodyne
• Homodyne
• Optical coherence tomography
References
1. "Optical detection techniques: homodyne versus heterodyne". Renishaw plc (UK). 2002. Archived from the original on 26 July 2017. Retrieved 15 February 2017.
2. Strauss, Charlie E. M. (1994). "Synthetic-array heterodyne detection: a single-element detector acts as an array". Optics Letters. 19 (20): 1609–11. Bibcode:1994OptL...19.1609S. doi:10.1364/OL.19.001609. PMID 19855597.
3. Jacobs, Stephen (30 November 1962). Technical Note on Heterodyne Detection in Optical Communications (PDF) (Report). Syosset, New York: Technical Research Group, Inc. Archived from the original (PDF) on February 10, 2017. Retrieved 15 February 2017.
4. Guerra, John M. (1995-06-26). "Super‐resolution through illumination by diffraction‐born evanescent waves". Applied Physics Letters. 66 (26): 3555–3557. doi:10.1063/1.113814. ISSN 0003-6951.
5. U.S. Pat. No. 5,666,197; "Apparatus and methods employing phase control and analysis of evanescent illumination for imaging and metrology of subwavelength lateral surface topography"; John M. Guerra, inventor; Assigned to Polaroid Corp.; Sept. 1997.
6. Hinkley, E.; Freed, Charles (1969). "Direct Observation of the Lorentzian Line Shape as Limited by Quantum Phase Noise in a Laser above Threshold". Physical Review Letters. 23 (6): 277. Bibcode:1969PhRvL..23..277H. doi:10.1103/PhysRevLett.23.277.
7. Winzer, Peter J.; Leeb, Walter R. (1998). "Coherent lidar at low signal powers: Basic considerations on optical heterodyning". Journal of Modern Optics. 45 (8): 1549–1555. Bibcode:1998JMOp...45.1549W. doi:10.1080/09500349808230651. ISSN 0950-0340.
8. Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (2005) [1970]. The Feynman Lectures on Physics: The Definitive and Extended Edition. Vol. 2 (2nd ed.). Addison Wesley. p. 111. ISBN 978-0-8053-9045-2.
9. Jiang, Leaf A.; Luu, Jane X. (2008). "Heterodyne detection with a weak local oscillator". Applied Optics. 47 (10): 1486–503. Bibcode:2008ApOpt..47.1486J. doi:10.1364/AO.47.001486. ISSN 0003-6935. PMID 18382577.
10. Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason (2013). "Maximum-likelihood estimation for frequency-modulated continuous-wave laser ranging using photon-counting detectors". Applied Optics. 52 (10): 2008–18. Bibcode:2013ApOpt..52.2008E. doi:10.1364/AO.52.002008. ISSN 0003-6935. PMID 23545955.
11. Erkmen, Baris; Dahl, Jason R.; Barber, Zeb W. (2013). "Performance Analysis for FMCW Ranging Using Photon-Counting Detectors". Cleo: 2013. pp. CTu1H.7. doi:10.1364/CLEO_SI.2013.CTu1H.7. ISBN 978-1-55752-972-5. S2CID 44697963.
12. Liu, Lisheng; Zhang, Heyong; Guo, Jin; Zhao, Shuai; Wang, Tingfeng (2012). "Photon time-interval statistics applied to the analysis of laser heterodyne signal with photon counter". Optics Communications. 285 (18): 3820–3826. Bibcode:2012OptCo.285.3820L. doi:10.1016/j.optcom.2012.05.019. ISSN 0030-4018.
13. Strauss, Charlie E. M. (1995). "Synthetic Array Heterodyne Detection: Developments within the Caliope CO2 DIAL Program". Optical Society of America, Proceedings of the 1995 Coherent Laser Radar Topical Meeting. 96: 13278. Bibcode:1995STIN...9613278R.
14. Cooke, Bradly J.; Galbraith, Amy E.; Laubscher, Bryan E.; Strauss, Charlie E. M.; Olivas, Nicholas L.; Grubler, Andrew C. (1999). "Laser field imaging through Fourier transform heterodyne". In Kamerman, Gary W; Werner, Christian (eds.). Laser Radar Technology and Applications IV. pp. 390–408. doi:10.1117/12.351361. ISSN 0277-786X. S2CID 58918536.
15. Strauss, C.E.M. and Rehse, S.J. "Rainbow heterodyne detection" Lasers and Electro-Optics, 1996. CLEO Pub Date: 2–7 June 1996 (200) ISBN 1-55752-443-2 (See DOE archive)
16. "Multi-Pixel Synthetic Array Heterodyne Detection Report", 1995, Strauss, C.E.M. and Rehse, S.J.
17. Dainty C (Ed), Laser Speckle and Related Phenomena, 1984, Springer Verlag, ISBN 0-387-13169-8
18. Gabriel Lombardi, Jerry Butman, Torrey Lyons, David Terry, and Garrett Piech, "Multiple-pulse coherent laser radar waveform"
External links
• Rüdiger Paschotta (2011-04-29). "Optical Heterodyne Detection". Encyclopedia of Laser Physics and Technology. RP Photonics.
• US Patent 5689335 — Synthetic Array Heterodyne Detection invention
• LANL Report LA-UR-99-1055 (1999) — Field Imaging in Lidar via Fourier Transform Heterodyne
• Daher, Carlos; Torres, Jeremie; Iniguez-de-la-Torre, Ignacio; Nouvel, Philippe; Varani, Luca; Sangare, Paul; Ducournau, Guillaume; Gaquiere, Christophe; Mateos, Javier; Gonzalez, Tomas (2016). "Room Temperature Direct and Heterodyne Detection of 0.28–0.69-THz Waves Based on GaN 2-DEG Unipolar Nanochannels" (PDF). IEEE Transactions on Electron Devices. 63 (1): 353–359. Bibcode:2016ITED...63..353D. doi:10.1109/TED.2015.2503987. hdl:10366/130697. ISSN 0018-9383. S2CID 33231377.
| Wikipedia |
Synthetic differential geometry
In mathematics, synthetic differential geometry is a formalization of the theory of differential geometry in the language of topos theory. There are several insights that allow for such a reformulation. The first is that most of the analytic data for describing the class of smooth manifolds can be encoded into certain fibre bundles on manifolds: namely bundles of jets (see also jet bundle). The second insight is that the operation of assigning a bundle of jets to a smooth manifold is functorial in nature. The third insight is that over a certain category, these are representable functors. Furthermore, their representatives are related to the algebras of dual numbers, so that smooth infinitesimal analysis may be used.
Synthetic differential geometry can serve as a platform for formulating certain otherwise obscure or confusing notions from differential geometry. For example, what it means to be natural (or invariant) has a particularly simple expression in this setting, even though the formulation in classical differential geometry may be quite difficult.
Further reading
• John Lane Bell, Two Approaches to Modelling the Universe: Synthetic Differential Geometry and Frame-Valued Sets (PDF file)
• F.W. Lawvere, Outline of synthetic differential geometry (PDF file)
• Anders Kock, Synthetic Differential Geometry (PDF file), Cambridge University Press, 2nd Edition, 2006.
• R. Lavendhomme, Basic Concepts of Synthetic Differential Geometry, Springer-Verlag, 1996.
• Michael Shulman, Synthetic Differential Geometry
• Ryszard Paweł Kostecki, Differential Geometry in Toposes
Infinitesimals
History
• Adequality
• Leibniz's notation
• Integral symbol
• Criticism of nonstandard analysis
• The Analyst
• The Method of Mechanical Theorems
• Cavalieri's principle
Related branches
• Nonstandard analysis
• Nonstandard calculus
• Internal set theory
• Synthetic differential geometry
• Smooth infinitesimal analysis
• Constructive nonstandard analysis
• Infinitesimal strain theory (physics)
Formalizations
• Differentials
• Hyperreal numbers
• Dual numbers
• Surreal numbers
Individual concepts
• Standard part function
• Transfer principle
• Hyperinteger
• Increment theorem
• Monad
• Internal set
• Levi-Civita field
• Hyperfinite set
• Law of continuity
• Overspill
• Microcontinuity
• Transcendental law of homogeneity
Mathematicians
• Gottfried Wilhelm Leibniz
• Abraham Robinson
• Pierre de Fermat
• Augustin-Louis Cauchy
• Leonhard Euler
Textbooks
• Analyse des Infiniment Petits
• Elementary Calculus
• Cours d'Analyse
| Wikipedia |
Synthetic division
In algebra, synthetic division is a method for manually performing Euclidean division of polynomials, with less writing and fewer calculations than long division.
It is mostly taught for division by linear monic polynomials (known as Ruffini's rule), but the method can be generalized to division by any polynomial.
The advantages of synthetic division are that it allows one to calculate without writing variables, it uses few calculations, and it takes significantly less space on paper than long division. Also, the subtractions in long division are converted to additions by switching the signs at the very beginning, helping to prevent sign errors.
Regular synthetic division
The first example is synthetic division with only a monic linear denominator $x-a$.
${\frac {x^{3}-12x^{2}-42}{x-3}}$
The numerator can be written as $p(x)=x^{3}-12x^{2}+0x-42$.
The zero of the denominator $g(x)$ is $3$.
The coefficients of $p(x)$ are arranged as follows, with the zero of $g(x)$ on the left:
${\begin{array}{cc}{\begin{array}{r}\\3\\\end{array}}&{\begin{array}{|rrrr}\ 1&-12&0&-42\\&&&\\\hline \end{array}}\end{array}}$
The first coefficient after the bar is "dropped" to the last row.
${\begin{array}{cc}{\begin{array}{r}\\3\\\\\end{array}}&{\begin{array}{|rrrr}\color {blue}1&-12&0&-42\\&&&\\\hline \color {blue}1&&&\\\end{array}}\end{array}}$
The dropped number is multiplied by the number before the bar, and placed in the next column.
${\begin{array}{cc}{\begin{array}{r}\\\color {grey}3\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&\color {brown}3&&\\\hline \color {blue}1&&&\\\end{array}}\end{array}}$
An addition is performed in the next column.
${\begin{array}{cc}{\begin{array}{c}\\3\\\\\end{array}}&{\begin{array}{|rrrr}1&\color {green}-12&0&-42\\&\color {green}3&&\\\hline 1&\color {green}-9&&\\\end{array}}\end{array}}$
The previous two steps are repeated and the following is obtained:
${\begin{array}{cc}{\begin{array}{c}\\3\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&3&-27&-81\\\hline 1&-9&-27&-123\end{array}}\end{array}}$
Here, the last term (-123) is the remainder while the rest correspond to the coefficients of the quotient.
The terms are written with increasing degree from right to left beginning with degree zero for the remainder and the result.
${\begin{array}{rrr|r}1x^{2}&-9x&-27&-123\end{array}}$
Hence the quotient and remainder are:
$q(x)=x^{2}-9x-27$
$r(x)=-123$
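The tableau translates into a short routine: drop the leading coefficient, then repeatedly multiply by the zero of the divisor and add down the next column. A minimal sketch in Python (the function name and the coefficient-list convention, highest degree first, are illustrative assumptions):

def synthetic_division(coeffs, a):
    # Divide p(x) by the monic linear divisor (x - a).
    # Returns (quotient coefficients, remainder).
    out = [coeffs[0]]                  # "drop" the leading coefficient
    for c in coeffs[1:]:
        out.append(c + a * out[-1])    # multiply by a, then add the column
    return out[:-1], out[-1]

# (x^3 - 12x^2 - 42) / (x - 3):
print(synthetic_division([1, -12, 0, -42], 3))   # ([1, -9, -27], -123)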
Evaluating polynomials by the remainder theorem
The above form of synthetic division is useful in the context of the polynomial remainder theorem for evaluating univariate polynomials. To summarize, the value of $p(x)$ at $a$ is equal to the remainder of the division of $p(x)$ by $x-a.$
The advantage of calculating the value this way is that it requires just over half as many multiplication steps as naive evaluation. The arithmetic performed is the same as that of Horner's method.
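For instance, reusing the synthetic_division sketch above, $p(3)$ for $p(x)=x^{3}-12x^{2}-42$ is read off as the remainder:

_, r = synthetic_division([1, -12, 0, -42], 3)
print(r, 3 ** 3 - 12 * 3 ** 2 - 42)   # -123 -123: remainder equals p(3)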
Expanded synthetic division
This method generalizes to division by any monic polynomial with only a slight modification with changes in bold. Using the same steps as before, perform the following division:
${\frac {x^{3}-12x^{2}-42}{x^{2}+x-3}}$
We concern ourselves only with the coefficients. Write the coefficients of the polynomial to be divided at the top.
${\begin{array}{|rrrr}\ 1&-12&0&-42\end{array}}$
Negate the coefficients of the divisor.
${\begin{array}{rrr}-1x^{2}&-1x&+3\end{array}}$
Write in every coefficient but the first one on the left in an upward right diagonal (see next diagram).
${\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\end{array}}&{\begin{array}{|rrrr}\ 1&-12&0&-42\\&&&\\&&&\\\hline \end{array}}\end{array}}$
Note the change of sign from 1 to −1 and from −3 to 3. "Drop" the first coefficient after the bar to the last row.
${\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&&\\&&&\\\hline 1&&&\\\end{array}}\end{array}}$
Multiply the dropped number by the diagonal before the bar, and place the resulting entries diagonally to the right from the dropped entry.
${\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&\\&-1&&\\\hline 1&&&\\\end{array}}\end{array}}$
Perform an addition in the next column.
${\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&\\&-1&&\\\hline 1&-13&&\\\end{array}}\end{array}}$
Repeat the previous two steps until you would go past the entries at the top with the next diagonal.
${\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&-39\\&-1&13&\\\hline 1&-13&16&\\\end{array}}\end{array}}$
Then simply add up any remaining columns.
${\begin{array}{cc}{\begin{array}{rr}\\&3\\-1&\\\\\end{array}}&{\begin{array}{|rrrr}1&-12&0&-42\\&&3&-39\\&-1&13&\\\hline 1&-13&16&-81\\\end{array}}\end{array}}$
Count the terms to the left of the bar. Since there are two, the remainder has degree one and this is the two right-most terms under the bar. Mark the separation with a vertical bar.
${\begin{array}{rr|rr}1&-13&16&-81\end{array}}$
The terms are written with increasing degree from right to left beginning with degree zero for both the remainder and the result.
${\begin{array}{rr|rr}1x&-13&16x&-81\end{array}}$
The result of our division is:
${\frac {x^{3}-12x^{2}-42}{x^{2}+x-3}}=x-13+{\frac {16x-81}{x^{2}+x-3}}$
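The expanded tableau admits the same mechanical translation: each dropped value is multiplied by the negated non-leading divisor coefficients and spread diagonally to the right. A minimal sketch in Python, assuming a monic divisor and the same highest-degree-first convention:

def expanded_synthetic_division(dividend, divisor):
    # Divide by a monic divisor; both inputs list coefficients from the
    # highest degree down. Returns (quotient, remainder) coefficient lists.
    out = list(dividend)
    deg_r = len(divisor) - 1           # the remainder keeps this many terms
    for i in range(len(dividend) - deg_r):
        coef = out[i]                  # the "dropped" value of this column
        for j in range(1, len(divisor)):
            out[i + j] -= divisor[j] * coef   # add the negated coefficients
    return out[:-deg_r], out[-deg_r:]

# (x^3 - 12x^2 - 42) / (x^2 + x - 3):
print(expanded_synthetic_division([1, -12, 0, -42], [1, 1, -3]))
# ([1, -13], [16, -81]): quotient x - 13, remainder 16x - 81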
For non-monic divisors
With a little prodding, the expanded technique may be generalized even further to work for any polynomial, not just monic ones. The usual way of doing this would be to divide the divisor $g(x)$ by its leading coefficient (call it a):
$h(x)={\frac {g(x)}{a}}$
then using synthetic division with $h(x)$ as the divisor, and then dividing the quotient by a to get the quotient of the original division (the remainder stays the same). But this often produces unsightly fractions which get removed later, and is thus more prone to error. It is possible to do it without first reducing the coefficients of $g(x)$.
As can be observed by first performing long division with such a non-monic divisor, the coefficients of $f(x)$ are divided by the leading coefficient of $g(x)$ after "dropping", and before multiplying.
Let's illustrate by performing the following division:
${\frac {6x^{3}+5x^{2}-7}{3x^{2}-2x-1}}$
A slightly modified table is used:
${\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&&\\&&&\\\hline &&&\\&&&\\\end{array}}\end{array}}$
Note the extra row at the bottom. This is used to write values found by dividing the "dropped" values by the leading coefficient of $g(x)$ (in this case, indicated by the /3; note that, unlike the rest of the coefficients of $g(x)$, the sign of this number is not changed).
Next, the first coefficient of $f(x)$ is dropped as usual:
${\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&&\\&&&\\\hline 6&&&\\&&&\\\end{array}}\end{array}}$
and then the dropped value is divided by 3 and placed in the row below:
${\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&&\\&&&\\\hline 6&&&\\2&&&\\\end{array}}\end{array}}$
Next, the new (divided) value is used to fill the top rows with multiples of 2 and 1, as in the expanded technique:
${\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&\\&4&&\\\hline 6&&&\\2&&&\\\end{array}}\end{array}}$
The 5 is dropped next, with the obligatory adding of the 4 below it, and the answer is divided again:
${\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&\\&4&&\\\hline 6&9&&\\2&3&&\\\end{array}}\end{array}}$
Then the 3 is used to fill the top rows:
${\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&3\\&4&6&\\\hline 6&9&&\\2&3&&\\\end{array}}\end{array}}$
At this point, if, after getting the third sum, we were to try and use it to fill the top rows, we would "fall off" the right side, thus the third sum is the first coefficient of the remainder, as in regular synthetic division. But the values of the remainder are not divided by the leading coefficient of the divisor:
${\begin{array}{cc}{\begin{array}{rrr}\\&1&\\2&&\\\\&&/3\\\end{array}}{\begin{array}{|rrrr}6&5&0&-7\\&&2&3\\&4&6&\\\hline 6&9&8&-4\\2&3&&\\\end{array}}\end{array}}$
Now we can read off the coefficients of the answer. As in expanded synthetic division, the last two values (2 is the degree of the divisor) are the coefficients of the remainder, and the remaining values are the coefficients of the quotient:
${\begin{array}{rr|rr}2x&+3&8x&-4\end{array}}$
and the result is
${\frac {6x^{3}+5x^{2}-7}{3x^{2}-2x-1}}=2x+3+{\frac {8x-4}{3x^{2}-2x-1}}$
Compact Expanded Synthetic Division
However, the diagonal format above becomes less space-efficient when the degree of the divisor exceeds half of the degree of the dividend. Consider the following division:
${\dfrac {a_{7}x^{7}+a_{6}x^{6}+a_{5}x^{5}+a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}}{b_{4}x^{4}-b_{3}x^{3}-b_{2}x^{2}-b_{1}x-b_{0}}}$
It is easy to see that we have complete freedom to write each product in any row as long as it is in the correct column, so the algorithm can be compactified by a greedy strategy, as illustrated in the division below:
${\begin{array}{cc}{\begin{array}{rrrr}\\\\\\\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&&&&q_{0}b_{3}&&&\\&&&q_{1}b_{3}&q_{1}b_{2}&q_{0}b_{2}&&\\&&q_{2}b_{3}&q_{2}b_{2}&q_{2}b_{1}&q_{1}b_{1}&q_{0}b_{1}&\\&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&q_{2}b_{0}&q_{1}b_{0}&q_{0}b_{0}\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&q_{2}'&q_{1}'&q_{0}'&r_{3}&r_{2}&r_{1}&r_{0}\\q_{3}&q_{2}&q_{1}&q_{0}&&&&\\\end{array}}\end{array}}$
The following describes how to perform the algorithm; this algorithm includes the steps needed for dividing by non-monic divisors:
1. Write the coefficients of the dividend above a bar.
${\begin{array}{cc}{\begin{array}{|rrrrrrrr}\ a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline \end{array}}\end{array}}$
2. Ignoring the first (leading) coefficient of the divisor, negate each coefficient and place them on the left-hand side of the bar.
${\begin{array}{cc}{\begin{array}{rrrr}b_{3}&b_{2}&b_{1}&b_{0}\\\end{array}}&{\begin{array}{|rrrrrrrr}\ a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline \end{array}}\end{array}}$
3. Count the coefficients placed on the left-hand side of the bar. Starting from the rightmost column, count off that many dividend coefficients above the bar, and place a vertical bar to the left of that column, extending through the row below. This vertical bar marks the separation between the quotient and the remainder.
${\begin{array}{cc}{\begin{array}{rrrr}b_{3}&b_{2}&b_{1}&b_{0}\\\\\end{array}}&{\begin{array}{|rrrr|rrrr}a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline &&&&&&&\\\end{array}}\end{array}}$
4. Drop the first coefficient of the dividend below the bar.
${\begin{array}{cc}{\begin{array}{rrrr}b_{3}&b_{2}&b_{1}&b_{0}\\\\\end{array}}&{\begin{array}{|rrrr|rrrr}a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&&&&&&&\\\end{array}}\end{array}}$
• Divide the previously dropped/summed number by the leading coefficient of the divisor and place it on the row below (this doesn't need to be done if the leading coefficient is 1).
In this case $q_{3}={\dfrac {a_{7}}{b_{4}}}$, where the index $3=7-4$ has been chosen by subtracting the degree of the divisor from the degree of the dividend.
• Multiply the previously dropped/summed number (or the divided dropped/summed number) by each negated divisor coefficient on the left (starting with the leftmost); skip this step if the dropped/summed number is zero. Place each product on top of the subsequent columns.
${\begin{array}{cc}{\begin{array}{rrrr}\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&&&\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&&&&&&&\\q_{3}&&&&&&&\\\end{array}}\end{array}}$
5. Perform a column-wise addition on the next column. In this case, $q_{2}'=q_{3}b_{3}+a_{6}$.
${\begin{array}{cc}{\begin{array}{rrrr}\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&&&\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&q_{2}'&&&&&&\\q_{3}&&&&&&&\\\end{array}}\end{array}}$
6. Repeat the previous two steps. Stop when you have performed the previous two steps on the number just before the vertical bar.
1. Let $q_{2}={\dfrac {q_{2}'}{b_{4}}}$.
${\begin{array}{cc}{\begin{array}{rrrr}\\\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&&q_{2}b_{3}&q_{2}b_{2}&q_{2}b_{1}&&&\\&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&q_{2}b_{0}&&\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&q_{2}'&q_{1}'&&&&&\\q_{3}&q_{2}&&&&&&\\\end{array}}\end{array}}$
2. Let $q_{1}={\dfrac {q_{1}'}{b_{4}}}$.
${\begin{array}{cc}{\begin{array}{rrrr}\\\\\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&&&q_{1}b_{3}&q_{1}b_{2}&&&\\&&q_{2}b_{3}&q_{2}b_{2}&q_{2}b_{1}&q_{1}b_{1}&&\\&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&q_{2}b_{0}&q_{1}b_{0}&\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&q_{2}'&q_{1}'&q_{0}'&&&&\\q_{3}&q_{2}&q_{1}&&&&&\\\end{array}}\end{array}}$
3. Let $q_{0}={\dfrac {q_{0}'}{b_{4}}}$.
${\begin{array}{cc}{\begin{array}{rrrr}\\\\\\\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&&&&q_{0}b_{3}&&&\\&&&q_{1}b_{3}&q_{1}b_{2}&q_{0}b_{2}&&\\&&q_{2}b_{3}&q_{2}b_{2}&q_{2}b_{1}&q_{1}b_{1}&q_{0}b_{1}&\\&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&q_{2}b_{0}&q_{1}b_{0}&q_{0}b_{0}\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&q_{2}'&q_{1}'&q_{0}'&r_{3}&&&\\q_{3}&q_{2}&q_{1}&q_{0}&&&&\\\end{array}}\end{array}}$
7. Perform the remaining column-wise additions on the subsequent columns (calculating the remainder).
${\begin{array}{cc}{\begin{array}{rrrr}\\\\\\\\b_{3}&b_{2}&b_{1}&b_{0}\\\\&&&&/b_{4}\\\end{array}}{\begin{array}{|rrrr|rrrr}&&&&q_{0}b_{3}&&&\\&&&q_{1}b_{3}&q_{1}b_{2}&q_{0}b_{2}&&\\&&q_{2}b_{3}&q_{2}b_{2}&q_{2}b_{1}&q_{1}b_{1}&q_{0}b_{1}&\\&q_{3}b_{3}&q_{3}b_{2}&q_{3}b_{1}&q_{3}b_{0}&q_{2}b_{0}&q_{1}b_{0}&q_{0}b_{0}\\a_{7}&a_{6}&a_{5}&a_{4}&a_{3}&a_{2}&a_{1}&a_{0}\\\hline a_{7}&q_{2}'&q_{1}'&q_{0}'&r_{3}&r_{2}&r_{1}&r_{0}\\q_{3}&q_{2}&q_{1}&q_{0}&&&&\\\end{array}}\end{array}}$
8. The bottommost results below the horizontal bar are coefficients of the polynomials (the quotient and the remainder), where the coefficients of the quotient are to the left of the vertical bar separation and the coefficients of the remainder are to the right. These coefficients are interpreted as having increasing degree from right to left, beginning with degree zero for both the quotient and the remainder.
We interpret the results to get:
${\dfrac {a_{7}x^{7}+a_{6}x^{6}+a_{5}x^{5}+a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}}{b_{4}x^{4}-b_{3}x^{3}-b_{2}x^{2}-b_{1}x-b_{0}}}=q_{3}x^{3}+q_{2}x^{2}+q_{1}x+q_{0}+{\dfrac {r_{3}x^{3}+r_{2}x^{2}+r_{1}x+r_{0}}{b_{4}x^{4}-b_{3}x^{3}-b_{2}x^{2}-b_{1}x-b_{0}}}$
Python implementation
The following snippet implements Expanded Synthetic Division in Python for arbitrary univariate polynomials:
def expanded_synthetic_division(dividend, divisor):
    """Fast polynomial division by using Expanded Synthetic Division.
    Also works with non-monic polynomials.

    Dividend and divisor are both polynomials, which are here simply lists of coefficients.
    E.g.: x**2 + 3*x + 5 will be represented as [1, 3, 5]
    """
    out = list(dividend)  # Copy the dividend
    normalizer = divisor[0]
    for i in range(len(dividend) - len(divisor) + 1):
        # For general polynomial division (when polynomials are non-monic),
        # we need to normalize by dividing the coefficient by the divisor's first coefficient
        out[i] /= normalizer
        coef = out[i]
        if coef != 0:  # Useless to multiply if coef is 0
            # In synthetic division, we always skip the first coefficient of the divisor,
            # because it is only used to normalize the dividend coefficients
            for j in range(1, len(divisor)):
                out[i + j] += -divisor[j] * coef
    # The resulting out contains both the quotient and the remainder; the
    # remainder occupies the last len(divisor) - 1 entries (its degree is
    # necessarily smaller than that of the divisor, since it is what we
    # couldn't divide out of the dividend), so we compute the index where
    # this separation is, and return the quotient and remainder.
    separator = 1 - len(divisor)
    return out[:separator], out[separator:]  # Return quotient, remainder.
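For example, the two divisions worked through above can be reproduced with this function:

print(expanded_synthetic_division([1, -12, 0, -42], [1, 1, -3]))
# -> ([1.0, -13.0], [16.0, -81.0]), i.e. quotient x - 13, remainder 16x - 81
print(expanded_synthetic_division([6, 5, 0, -7], [3, -2, -1]))
# -> ([2.0, 3.0], [8.0, -4.0]), i.e. quotient 2x + 3, remainder 8x - 4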
See also
• Euclidean domain
• Greatest common divisor of two polynomials
• Gröbner basis
• Horner scheme
• Polynomial remainder theorem
• Ruffini's rule
References
• Lianghuo Fan (2003). "A Generalization of Synthetic Division and A General Theorem of Division of Polynomials" (PDF). Mathematical Medley. 30 (1): 30–37.
• Li Zhou (2009). "Short Division of Polynomials". College Mathematics Journal. 40 (1): 44–46. doi:10.4169/193113409x469721.
External links
• Goodman, Len; Stover, Christopher & Weisstein, Eric W. "Synthetic Division". MathWorld.
• Stover, Christopher. "Ruffini's Rule". MathWorld.
Syntomic topology
In algebraic geometry, the syntomic topology is a Grothendieck topology introduced by Fontaine & Messing (1987).
Mazur defined a morphism to be syntomic if it is flat and locally a complete intersection. The syntomic topology is generated by surjective syntomic morphisms of affine schemes.
References
• Fontaine, Jean-Marc; Messing, William (1987), "p-adic periods and p-adic étale cohomology", Current trends in arithmetical algebraic geometry (Arcata, Calif., 1985), Contemp. Math., vol. 67, Providence, R.I.: American Mathematical Society, pp. 179–207, MR 0902593
External links
• Explanation of the word "syntomic" by Barry Mazur.
• Syntomic cohomology at the nLab
Syntractrix
A syntractrix is a curve of the form
$x+{\sqrt {b^{2}-y^{2}}}=a\ln {\frac {b+{\sqrt {b^{2}-y^{2}}}}{y}}.$[1]
It is the locus of a point on the tangent of a tractrix at a constant distance from the point of tangency, as the point of tangency is moved along the curve.[2]
References
1. George Salmon (1879). A Treatise on the Higher Plane Curves: Intended as a Sequel to A Treatise on Conic Sections. Published by Hodges, Foster, and Figgis. Page 290.
2. Dionysius Lardner, A system of algebraic geometry 1823, p. 261–263
System L
System L is a natural deductive logic developed by E.J. Lemmon.[1] Derived from Suppes' method,[2] it represents natural deduction proofs as sequences of justified steps. Both methods are derived from Gentzen's 1934/1935 natural deduction system,[3] in which proofs were presented in tree-diagram form rather than in the tabular form of Suppes and Lemmon. Although the tree-diagram layout has advantages for philosophical and educational purposes, the tabular layout is much more convenient for practical applications.
A similar tabular layout is presented by Kleene.[4] The main difference is that Kleene does not abbreviate the left-hand sides of assertions to line numbers, preferring instead to either give full lists of precedent propositions or alternatively indicate the left-hand sides by bars running down the left of the table to indicate dependencies. However, Kleene's version has the advantage that it is presented, although only very sketchily, within a rigorous framework of metamathematical theory, whereas the books by Suppes[2] and Lemmon[1] are applications of the tabular layout for teaching introductory logic.
Description of the deductive system
System L is a predicate calculus with equality, so its description can be separated into two parts: the general proof syntax and the context specific rules.
General Proof Syntax
A proof is a table with 4 columns and unlimited ordered rows. From left to right the columns hold:
1. A set of positive integers, possibly empty
2. A positive integer
3. A well-formed formula (or wff)
4. A set of numbers, possibly empty; a rule; and possibly a reference to another proof
The following is an example:
p → q, ¬q ⊢ ¬p [Modus Tollendo Tollens (MTT)]
Assumption number Line number Formula (wff) Lines in-use and Justification
1 (1) p → q A
2 (2) ¬q A
3 (3) p A (for RAA)
1, 3 (4) q 1, 3, MPP
1, 2, 3 (5) q ∧ ¬q 2, 4, ∧I
1, 2 (6) ¬p 3, 5, RAA
Q.E.D
The second column holds line numbers. The third holds a wff, which is justified by the rule held in the fourth along with auxiliary information about other wffs, possibly in other proofs. The first column represents the line numbers of the assumptions the wff rests on, determined by the application of the cited rule in context. Any line of any valid proof can be converted into a sequent by listing the wffs at the cited lines as the premises and the wff at the line as the conclusion. Analogously, they can be converted into conditionals where the antecedent is a conjunction. These sequents are often listed above the proof, as Modus Tollens is above.
Rules of Predicate Calculus with Equality
The above proof is a valid one, but proofs don't need to be valid to conform to the general syntax of the proof system. To guarantee a sequent's validity, however, we must conform to carefully specified rules. The rules can be divided into four groups: the propositional rules (1-10), the predicate rules (11-14), the rules of equality (15-16), and the rule of substitution (17). Adding these groups in order allows one to build a propositional calculus, then a predicate calculus, then a predicate calculus with equality, then a predicate calculus with equality allowing for the derivation of new rules. Some of the propositional calculus rules, like MTT, are superfluous and can be derived as rules from other rules.
1. The Rule of Assumption (A): "A" justifies any wff. The only assumption is its own line number.
2. Modus Ponendo Ponens (MPP): If there are lines a and b previously in the proof containing P→Q and P respectively, "a,b MPP" justifies Q. The assumptions are the collective pool of lines a and b.
3. The Rule of Conditional Proof (CP): If a line with proposition P has an assumption line b with proposition Q, "b,a CP" justifies Q→P. All of a's assumptions aside from b are kept.
4. The Rule of Double Negation (DN): "a DN" justifies adding or subtracting two negation symbols from the wff at a line a previously in the proof, making this rule a biconditional. The assumption pool is the one of the line cited.
5. The Rule of ∧-introduction (∧I): If propositions P and Q are at lines a and b, "a,b ∧I" justifies P∧Q. The assumptions are the collective pool of the conjoined propositions.
6. The Rule of ∧-elimination (∧E): If line a is a conjunction P∧Q, one can conclude either P or Q using "a ∧E". The assumptions are line a's. ∧I and ∧E allow for monotonicity of entailment, as when a proposition P is joined with Q with ∧I and separated with ∧E, it retains Q's assumptions.
7. The Rule of ∨-introduction (∨I): For a line a with proposition P one can introduce P∨Q citing "a ∨I". The assumptions are a's.
8. The Rule of ∨-elimination (∨E): For a disjunction P∨Q, if one assumes P and Q and separately comes to the conclusion R from each, then one can conclude R. The rule is cited as "a,b,c,d,e ∨E", where line a has the initial disjunction P∨Q, lines b and d assume P and Q respectively, and lines c and e are R with P and Q in their respective assumption pools. The assumptions are the collective pools of the two lines concluding R minus the lines of P and Q, b and d.
9. Reductio Ad Absurdum (RAA): For a proposition P∧¬P on line a citing an assumption Q on line b, one can cite "b,a RAA" and derive ¬Q from the assumptions of line a aside from b.
10. Modus Tollens (MTT): For propositions P→Q and ¬Q on lines a and b one can cite "a,b MTT" to derive ¬P. The assumptions are those of lines a and b. This is proven from other rules above.
11. Universal Introduction (UI): For a predicate $Ra$ on line a, one can cite "a UI" to justify a universal quantification, $(\forall x)Rx$, provided none of the assumptions on line a contain the term $a$ anywhere. The assumptions are those of line a.
12. Universal Elimination (UE): For a universally quantified predicate $(\forall x)Rx$ on line a, one can cite "a UE" to justify $Ra$. The assumptions are those of line a. UE is a duality with UI in that one can switch between quantified and free variables using these rules.
13. Existential Introduction (EI): For a predicate $Ra$ on line a one can cite "a EI" to justify an existential quantification, $(\exists x)Rx$. The assumptions are those of line a.
14. Existential Elimination (EE): For an existentially quantified predicate $(\exists x)Rx$ on line a, if we assume $Ra$ to be true on line b and derive P with it on line c, we can cite "a,b,c EE" to justify P. The term $a$ cannot appear in the conclusion P, any of its assumptions aside from line b, or on line a. For this reason EE and EI are in duality, as one can assume $Ra$ and use EI to reach a conclusion from $(\exists x)Rx$, as the EI will rid the conclusion of the term $a$. The assumptions are the assumptions on line a and any on line c aside from b.
15. Equality Introduction (=I): At any point one can introduce $a=a$ citing "=I" with no assumptions.
16. Equality Elimination (=E): For propositions $a=b$ and P on lines a and b, one can cite "a,b =E" to justify changing any $a$ terms in P to $b$. The assumptions are the pool of a and b.
17. Substitution Instance (SI(S)): For a sequent $P,Q\vdash R$ proved in proof X and substitution instances of $P$ and $Q$ on lines a and b, one can cite "a,b SI(S) X" to justify introducing a substitution instance of $R$. The assumptions are those of lines a and b. A derived rule with no assumptions is a theorem, and can be introduced at any time with no assumptions. Some cite that as "TI(S)", for "theorem" instead of "sequent". Additionally, some cite only "SI" or "TI" in either case when a substitution instance isn't needed, as their propositions match the ones of the referenced proof exactly.
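The assumption-pool bookkeeping above is mechanical enough to check by machine. The following Python sketch is purely illustrative (the string encoding of formulas and the tuple layout are assumptions of the sketch, not part of System L) and verifies the pools and citations for just three rules: A, MPP, and ∧I (written "&I" in ASCII):

def check(lines):
    """lines: list of (pool, wff, rule, cited) tuples; line numbers are 1-based."""
    for n, (pool, wff, rule, cited) in enumerate(lines, start=1):
        if rule == "A":  # an assumption rests only on its own line number
            ok = pool == {n}
        elif rule == "MPP":  # from (P -> Q) at line a and P at line b, infer Q
            a, b = cited
            (pa, fa), (pb, fb) = lines[a - 1][:2], lines[b - 1][:2]
            ok = fa == f"({fb} -> {wff})" and pool == pa | pb
        elif rule == "&I":  # from P at line a and Q at line b, infer (P & Q)
            a, b = cited
            (pa, fa), (pb, fb) = lines[a - 1][:2], lines[b - 1][:2]
            ok = wff == f"({fa} & {fb})" and pool == pa | pb
        else:
            ok = False  # the remaining rules are omitted from this sketch
        if not ok:
            raise ValueError(f"line {n} fails its justification")

# The first four lines of the MTT proof above, in this encoding:
check([
    ({1}, "(p -> q)", "A", ()),
    ({2}, "~q", "A", ()),
    ({3}, "p", "A", ()),
    ({1, 3}, "q", "MPP", (1, 3)),
])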
Examples
An example of the proof of a sequent (a theorem in this case):
⊢p ∨ ¬p
Assumption number Line number Formula (wff) Lines in-use and Justification
1 (1) ¬(p ∨ ¬p) A (for RAA)
2 (2) p A (for RAA)
2 (3) (p ∨ ¬p) 2, ∨I
1, 2 (4) (p ∨ ¬p) ∧ ¬(p ∨ ¬p) 3, 1, ∧I
1 (5) ¬p 2, 4, RAA
1 (6) (p ∨ ¬p) 5, ∨I
1 (7) (p ∨ ¬p) ∧ ¬(p ∨ ¬p) 1, 6, ∧I
(8) ¬¬(p ∨ ¬p) 1, 7, RAA
(9) (p ∨ ¬p) 8, DN
Q.E.D
A proof of the principle of explosion using monotonicity of entailment. Some have called the following technique, demonstrated in lines 3-6, the Rule of (Finite) Augmentation of Premises:[5]
p, ¬p ⊢ q
Assumption number Line number Formula (wff) Lines in-use and Justification
1 (1) p A (for RAA)
2 (2) ¬p A (for RAA)
1, 2 (3) p ∧ ¬p 1, 2, ∧I
4 (4) ¬q A (for DN)
1, 2, 4 (5) (p ∧ ¬p) ∧ ¬q 3, 4, ∧I
1, 2, 4 (6) p ∧ ¬p 5, ∧E
1, 2 (7) ¬¬q 4, 6, RAA
1, 2 (8) q 7, DN
Q.E.D
An example of substitution and ∨E:
(p ∧ ¬p) ∨ (q ∧ ¬q) ⊢ r
Assumption number Line number Formula (wff) Lines in-use and Justification
1 (1) (p ∧ ¬p) ∨ (q ∧ ¬q) A
2 (2) p ∧ ¬p A (for ∨E)
2 (3) p 2 ∧E
2 (4) ¬p 2 ∧E
2 (5) r 3, 4 SI(S) see above proof
6 (6) q ∧ ¬q A (for ∨E)
6 (7) q 6 ∧E
6 (8) ¬q 6 ∧E
6 (9) r 7, 8 SI(S) see above proof
1 (10) r 1, 2, 5, 6, 9, ∨E
Q.E.D
History of tabular natural deduction systems
The historical development of tabular-layout natural deduction systems, which are rule-based, and which indicate antecedent propositions by line numbers (and related methods such as vertical bars or asterisks) includes the following publications.
• 1940: In a textbook, Quine[6] indicated antecedent dependencies by line numbers in square brackets, anticipating Suppes' 1957 line-number notation.
• 1950: In a textbook, Quine (1982, pp. 241–255) demonstrated a method of using one or more asterisks to the left of each line of proof to indicate dependencies. This is equivalent to Kleene's vertical bars. (It is not totally clear if Quine's asterisk notation appeared in the original 1950 edition or was added in a later edition.)
• 1957: An introduction to practical logic theorem proving in a textbook by Suppes (1999, pp. 25–150). This indicated dependencies (i.e. antecedent propositions) by line numbers at the left of each line.
• 1963: Stoll (1979, pp. 183–190, 215–219) uses sets of line numbers to indicate antecedent dependencies of the lines of sequential logical arguments based on natural deduction inference rules.
• 1965: The entire textbook by Lemmon (1965) is an introduction to logic proofs using a method based on that of Suppes.
• 1967: In a textbook, Kleene (2002, pp. 50–58, 128–130) briefly demonstrated two kinds of practical logic proofs, one system using explicit quotations of antecedent propositions on the left of each line, the other system using vertical bar-lines on the left to indicate dependencies.[7]
See also
• Natural deduction
• Sequent calculus
• Deductive systems
Notes
1. See Lemmon 1965 for an introductory presentation of Lemmon's natural deduction system.
2. See Suppes 1999, pp. 25–150, for an introductory presentation of Suppes' natural deduction system.
3. Gentzen 1934, Gentzen 1935.
4. Kleene 2002, pp. 50–56, 128–130.
5. Coburn, Barry; Miller, David (October 1977). "Two comments on Lemmon's Beginning logic". Notre Dame Journal of Formal Logic. 18 (4): 607–610. doi:10.1305/ndjfl/1093888128. ISSN 0029-4527.
6. Quine (1981). See particularly pages 91–93 for Quine's line-number notation for antecedent dependencies.
7. A particular advantage of Kleene's tabular natural deduction systems is that he proves the validity of the inference rules for both propositional calculus and predicate calculus. See Kleene 2002, pp. 44–45, 118–119.
References
• Gentzen, Gerhard Karl Erich (1934). "Untersuchungen über das logische Schließen. I". Mathematische Zeitschrift. 39 (2): 176–210. doi:10.1007/BF01201353. (English translation Investigations into Logical Deduction in Szabo.)
• Gentzen, Gerhard Karl Erich (1935). "Untersuchungen über das logische Schließen. II". Mathematische Zeitschrift. 39 (3): 405–431. doi:10.1007/bf01201363.
• Kleene, Stephen Cole (2002) [1967]. Mathematical logic. Mineola, New York: Dover Publications. ISBN 978-0-486-42533-7.
• Lemmon, Edward John (1965). Beginning logic. Thomas Nelson. ISBN 0-17-712040-1.
• Quine, Willard Van Orman (1981) [1940]. Mathematical logic (Revised ed.). Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-55451-1.
• Quine, Willard Van Orman (1982) [1950]. Methods of logic (Fourth ed.). Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-57176-1.
• Stoll, Robert Roth (1979) [1963]. Set Theory and Logic. Mineola, New York: Dover Publications. ISBN 978-0-486-63829-4.
• Suppes, Patrick Colonel (1999) [1957]. Introduction to logic. Mineola, New York: Dover Publications. ISBN 978-0-486-40687-9.
• Szabo, M.E. (1969). The collected papers of Gerhard Gentzen. Amsterdam: North-Holland.
External links
• Pelletier, Jeff, "A History of Natural Deduction and Elementary Logic Textbooks."
System U
In mathematical logic, System U and System U− are pure type systems, i.e. special forms of a typed lambda calculus with an arbitrary number of sorts, axioms and rules (or dependencies between the sorts). They were both proved inconsistent by Jean-Yves Girard in 1972.[1] This result led to the realization that Martin-Löf's original 1971 type theory was inconsistent as it allowed the same "Type in Type" behaviour that Girard's paradox exploits.
Formal definition
System U is defined[2]: 352 as a pure type system with
• three sorts $\{\ast ,\square ,\triangle \}$;
• two axioms $\{\ast :\square ,\square :\triangle \}$; and
• five rules $\{(\ast ,\ast ),(\square ,\ast ),(\square ,\square ),(\triangle ,\ast ),(\triangle ,\square )\}$.
System U− is defined the same with the exception of the $(\triangle ,\ast )$ rule.
The sorts $\ast $ and $\square $ are conventionally called “Type” and “Kind”, respectively; the sort $\triangle $ doesn't have a specific name. The two axioms describe the containment of types in kinds ($\ast :\square $) and kinds in $\triangle $ ($\square :\triangle $). Intuitively, the sorts describe a hierarchy in the nature of the terms.
1. All values have a type, such as a base type (e.g. $b:\mathrm {Bool} $ is read as “b is a boolean”) or a (dependent) function type (e.g. $f:\mathrm {Nat} \to \mathrm {Bool} $ is read as “f is a function from natural numbers to booleans”).
2. $\ast $ is the sort of all such types ($t:\ast $ is read as “t is a type”). From $\ast $ we can build more terms, such as $\ast \to \ast $ which is the kind of unary type-level operators (e.g. $\mathrm {List} :\ast \to \ast $ is read as “List is a function from types to types”, that is, a polymorphic type). The rules restrict how we can form new kinds.
3. $\square $ is the sort of all such kinds ($k:\square $ is read as “k is a kind”). Similarly we can build related terms, according to what the rules allow.
4. $\triangle $ is the sort of all such terms.
The rules govern the dependencies between the sorts: $(\ast ,\ast )$ says that values may depend on values (functions), $(\square ,\ast )$ allows values to depend on types (polymorphism), $(\square ,\square )$ allows types to depend on types (type operators), and so on.
Girard's paradox
The definitions of System U and U− allow the assignment of polymorphic kinds to generic constructors in analogy to polymorphic types of terms in classical polymorphic lambda calculi, such as System F. An example of such a generic constructor might be[2]: 353 (where k denotes a kind variable)
$\lambda k^{\square }\lambda \alpha ^{k\to k}\lambda \beta ^{k}\!.\alpha (\alpha \beta )\;:\;\Pi k:\square ((k\to k)\to k\to k)$.
This mechanism is sufficient to construct a term with the type $(\forall p:\ast ,p)$ (equivalent to the type $\bot $), which implies that every type is inhabited. By the Curry–Howard correspondence, this is equivalent to all logical propositions being provable, which makes the system inconsistent.
Girard's paradox is the type-theoretic analogue of Russell's paradox in set theory.
References
1. Girard, Jean-Yves (1972). "Interprétation fonctionnelle et Élimination des coupures de l'arithmétique d'ordre supérieur" (PDF).
2. Sørensen, Morten Heine; Urzyczyn, Paweł (2006). "Pure type systems and the lambda cube". Lectures on the Curry–Howard isomorphism. Elsevier. doi:10.1016/S0049-237X(06)80015-7. ISBN 0-444-52077-5.
Further reading
• Barendregt, Henk (1992). "Lambda calculi with types". In S. Abramsky; D. Gabbay; T. Maibaum (eds.). Handbook of Logic in Computer Science. Oxford Science Publications. pp. 117–309.
• Coquand, Thierry (1986). "An analysis of Girard's paradox". Logic in Computer Science. IEEE Computer Society Press. pp. 227–236.
System of bilinear equations
In mathematics, a system of bilinear equations is a special sort of system of polynomial equations, where each equation equates a bilinear form with a constant (possibly zero). More precisely, given two sets of variables represented as coordinate vectors x and y, then each equation of the system can be written
$y^{T}A_{i}x=g_{i},$
where i is an integer whose value ranges from 1 to the number of equations, each $A_{i}$ is a matrix, and each $g_{i}$ is a real number. Systems of bilinear equations arise in many subjects including engineering, biology, and statistics.
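For concreteness, a short NumPy sketch (the function name and the data are illustrative) that evaluates the residuals $y^{T}A_{i}x-g_{i}$ of such a system at a candidate pair (x, y):

import numpy as np

def bilinear_residuals(As, g, x, y):
    """As: list of m matrices A_i; g: length-m vector; returns y^T A_i x - g_i."""
    return np.array([y @ A @ x for A in As]) - np.asarray(g, dtype=float)

A1 = np.eye(2)                           # y^T A1 x = 3*1 + 1*2 = 5
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])  # y^T A2 x = 3*2 + 1*1 = 7
x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(bilinear_residuals([A1, A2], [5.0, 7.0], x, y))  # [0. 0.], so (x, y) solves it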
See also
• Systems of linear equations
References
• Charles R. Johnson, Joshua A. Link 'Solution theory for complete bilinear systems of equations' - http://onlinelibrary.wiley.com/doi/10.1002/nla.676/abstract
• Vinh, Le Anh 'On the solvability of systems of bilinear equations in finite fields' - https://arxiv.org/abs/0903.1156
• Yang Dian 'Solution theory for system of bilinear equations' - https://digitalarchive.wm.edu/handle/10288/13726
• Scott Cohen and Carlo Tomasi. 'Systems of bilinear equations'. Technical report, Stanford, CA, USA, 1997.- ftp://reports.stanford.edu/public_html/cstr/reports/cs/tr/97/1588/CS-TR-97-1588.pdf
Linear inequality
In mathematics, a linear inequality is an inequality which involves a linear function. A linear inequality contains one of the symbols of inequality:[1]
• < less than
• > greater than
• ≤ less than or equal to
• ≥ greater than or equal to
• ≠ not equal to
A linear inequality looks exactly like a linear equation, with the inequality sign replacing the equality sign.
Linear inequalities of real numbers
Two-dimensional linear inequalities
Two-dimensional linear inequalities are expressions in two variables of the form:
$ax+by<c{\text{ and }}ax+by\geq c,$
where the inequalities may either be strict or not. The solution set of such an inequality can be graphically represented by a half-plane (all the points on one "side" of a fixed line) in the Euclidean plane.[2] The line that determines the half-planes (ax + by = c) is not included in the solution set when the inequality is strict. A simple procedure to determine which half-plane is in the solution set is to calculate the value of ax + by at a point (x0, y0) which is not on the line and observe whether or not the inequality is satisfied.
For example,[3] to draw the solution set of x + 3y < 9, one first draws the line with equation x + 3y = 9 as a dotted line, to indicate that the line is not included in the solution set since the inequality is strict. Then, pick a convenient point not on the line, such as (0,0). Since 0 + 3(0) = 0 < 9, this point is in the solution set, so the half-plane containing this point (the half-plane "below" the line) is the solution set of this linear inequality.
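The half-plane test just described is straightforward to code. A minimal Python sketch (the function name and argument layout are illustrative):

def in_half_plane(a, b, c, point, strict=True):
    """Test whether point = (x, y) satisfies a*x + b*y < c (or <= c)."""
    x, y = point
    value = a * x + b * y
    return value < c if strict else value <= c

print(in_half_plane(1, 3, 9, (0, 0)))  # True: (0, 0) satisfies x + 3y < 9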
Linear inequalities in general dimensions
In Rn linear inequalities are the expressions that may be written in the form
$f({\bar {x}})<b$ or $f({\bar {x}})\leq b,$
where f is a linear form (also called a linear functional), ${\bar {x}}=(x_{1},x_{2},\ldots ,x_{n})$ and b a constant real number.
More concretely, this may be written out as
$a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}<b$
or
$a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}\leq b.$
Here $x_{1},x_{2},...,x_{n}$ are called the unknowns, and $a_{1},a_{2},...,a_{n}$ are called the coefficients.
Alternatively, these may be written as
$g(x)<0\,$ or $g(x)\leq 0,$
where g is an affine function.[4]
That is
$a_{0}+a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}<0$
or
$a_{0}+a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}\leq 0.$
Note that any inequality containing a "greater than" or a "greater than or equal" sign can be rewritten with a "less than" or "less than or equal" sign, so there is no need to define linear inequalities using those signs.
Systems of linear inequalities
A system of linear inequalities is a set of linear inequalities in the same variables:
${\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;\leq \;&&&b_{1}\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;\leq \;&&&b_{2}\\\vdots \;\;\;&&&&\vdots \;\;\;&&&&\vdots \;\;\;&&&&&\;\vdots \\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;\leq \;&&&b_{m}\\\end{alignedat}}$
Here $x_{1},\ x_{2},...,x_{n}$ are the unknowns, $a_{11},\ a_{12},...,\ a_{mn}$ are the coefficients of the system, and $b_{1},\ b_{2},...,b_{m}$ are the constant terms.
This can be concisely written as the matrix inequality
$Ax\leq b,$
where A is an m×n matrix, x is an n×1 column vector of variables, and b is an m×1 column vector of constants.[5]
In the above systems both strict and non-strict inequalities may be used.
Not all systems of linear inequalities have solutions.
Variables can be eliminated from systems of linear inequalities using Fourier–Motzkin elimination.[6]
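As a sketch of how one Fourier–Motzkin elimination step can look in code (the representation of a constraint as a coefficient list with coeffs · x ≤ c is an assumption of this sketch):

def fm_eliminate(constraints, k):
    """One elimination step: constraints are (coeffs, c) pairs meaning
    coeffs . x <= c; returns an equivalent system not involving x_k."""
    pos, neg, rest = [], [], []
    for a, c in constraints:
        (pos if a[k] > 0 else neg if a[k] < 0 else rest).append((a, c))
    out = list(rest)
    for ap, cp in pos:        # constraints giving an upper bound on x_k
        for an, cn in neg:    # constraints giving a lower bound on x_k
            wp, wn = -an[k], ap[k]  # positive multipliers cancelling x_k
            out.append(([wp * u + wn * v for u, v in zip(ap, an)],
                        wp * cp + wn * cn))
    return out

# Eliminating x from {x + 3y <= 9, -x <= 0, -y <= 0} leaves {-y <= 0, 3y <= 9}:
print(fm_eliminate([([1, 3], 9), ([-1, 0], 0), ([0, -1], 0)], k=0))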
Polyhedra
The set of solutions of a real linear inequality constitutes a half-space of the n-dimensional real space, one of the two defined by the corresponding linear equation.
The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by individual inequalities. It is a convex set, since the half-spaces are convex sets, and the intersection of a set of convex sets is also convex. In the non-degenerate cases this convex set is a convex polyhedron (possibly unbounded, e.g., a half-space, a slab between two parallel half-spaces or a polyhedral cone). It may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n-dimensional space Rn.
Linear programming
Main article: Linear programming
A linear programming problem seeks to optimize (find a maximum or minimum value) a function (called the objective function) subject to a number of constraints on the variables which, in general, are linear inequalities.[7] The list of constraints is a system of linear inequalities.
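Such problems can be handed to an off-the-shelf solver. A small sketch using SciPy's linprog, reusing the x + 3y ≤ 9 example above (linprog minimizes, so the objective is negated in order to maximize x + 2y):

from scipy.optimize import linprog

res = linprog(c=[-1, -2], A_ub=[[1, 3]], b_ub=[9], bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at (9, 0) with maximal value 9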
Generalization
The above definition requires well-defined operations of addition, multiplication and comparison; therefore, the notion of a linear inequality may be extended to ordered rings, and in particular to ordered fields.
References
1. Miller & Heeren 1986, p. 355
2. Technically, for this statement to be correct both a and b can not simultaneously be zero. In that situation, the solution set is either empty or the entire plane.
3. Angel & Porter 1989, p. 310
4. In the 2-dimensional case, both linear forms and affine functions are historically called linear functions because their graphs are lines. In other dimensions, neither type of function has a graph which is a line, so the generalization of linear function in two dimensions to higher dimensions is done by means of algebraic properties and this causes the split into two types of functions. However, the difference between affine functions and linear forms is just the addition of a constant.
5. "Linear Regression and Multiple Linear Regression | IBM Skills Network - KeepNotes". keepnotes.com. Retrieved 2023-08-20.
6. Gärtner, Bernd; Matoušek, Jiří (2006). Understanding and Using Linear Programming. Berlin: Springer. ISBN 3-540-30697-8.
7. Angel & Porter 1989, p. 373
Sources
• Angel, Allen R.; Porter, Stuart R. (1989), A Survey of Mathematics with Applications (3rd ed.), Addison-Wesley, ISBN 0-201-13696-1
• Miller, Charles D.; Heeren, Vern E. (1986), Mathematical Ideas (5th ed.), Scott, Foresman, ISBN 0-673-18276-2
External links
• Khan Academy: Linear inequalities, free online micro lectures
System of parameters
In mathematics, a system of parameters for a local Noetherian ring of Krull dimension d with maximal ideal m is a set of elements x1, ..., xd that satisfies any of the following equivalent conditions:
1. m is a minimal prime over (x1, ..., xd).
2. The radical of (x1, ..., xd) is m.
3. Some power of m is contained in (x1, ..., xd).
4. (x1, ..., xd) is m-primary.
Every local Noetherian ring admits a system of parameters.[1]
It is not possible for fewer than d elements to generate an ideal whose radical is m because then the dimension of R would be less than d.
If M is a k-dimensional module over a local ring, then x1, ..., xk is a system of parameters for M if the length of M / (x1, ..., xk) M is finite.
General references
• Atiyah, Michael Francis; Macdonald, I. G. (1969), Introduction to commutative algebra, Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., MR 0242802
References
1. "Math 711: Lecture of September 5, 2007" (PDF). University of Michigan. September 5, 2007. Retrieved May 31, 2022.
System of differential equations
In mathematics, a system of differential equations is a finite set of differential equations. Such a system can be either linear or non-linear. Also, such a system can be either a system of ordinary differential equations or a system of partial differential equations.
Linear system of differential equations
Main article: Linear differential equation
Like any system of equations, a system of linear differential equations is said to be overdetermined if there are more equations than unknowns.
For an overdetermined system to have a solution, it needs to satisfy the compatibility conditions.[1] For example, consider the system:
${\frac {\partial u}{\partial x_{i}}}=f_{i},1\leq i\leq m.$
Then the necessary conditions for the system to have a solution are:
${\frac {\partial f_{i}}{\partial x_{k}}}-{\frac {\partial f_{k}}{\partial x_{i}}}=0,1\leq i,k\leq m.$
See also: Cauchy problem and Ehrenpreis's fundamental principle.
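These conditions can be checked symbolically. A small SymPy sketch (the right-hand sides f1 and f2 below are chosen purely as an example):

import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f1, f2 = 2 * x1 * x2, x1**2  # candidate right-hand sides f_1, f_2
# Compatibility requires d f1 / d x2 - d f2 / d x1 = 0:
print(sp.simplify(sp.diff(f1, x2) - sp.diff(f2, x1)))  # 0; here u = x1**2 * x2 works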
Non-linear system of differential equations
Main article: Nonlinear differential equation
Perhaps the most famous example of a non-linear system of differential equations is the Navier–Stokes equations. Unlike the linear case, the existence of a solution of a non-linear system is a difficult problem (cf. Navier–Stokes existence and smoothness.)
See also: h-principle.
Differential system
A differential system is a means of studying a system of partial differential equations using geometric ideas such as differential forms and vector fields.
For example, the compatibility conditions of an overdetermined system of differential equations can be succinctly stated in terms of differential forms (i.e., for a form to be exact, it needs to be closed). See integrability conditions for differential systems for more.
See also: Category:differential systems.
Notes
1. "Overdetermined system - Encyclopedia of Mathematics".
See also
• Integral geometry
• Cartan–Kuranishi prolongation theorem
References
• L. Ehrenpreis, The Universality of the Radon Transform, Oxford Univ. Press, 2003.
• Gromov, M. (1986), Partial differential relations, Springer, ISBN 3-540-12177-3
• M. Kuranishi, "Lectures on involutive systems of partial differential equations", Publ. Soc. Mat. São Paulo (1967)
• Pierre Schapira, Microdifferential systems in the complex domain, Grundlehren der Math- ematischen Wissenschaften, vol. 269, Springer-Verlag, 1985.
Further reading
• https://mathoverflow.net/questions/273235/a-very-basic-question-about-projections-in-formal-pde-theory
• https://www.encyclopediaofmath.org/index.php/Involutional_system
• https://www.encyclopediaofmath.org/index.php/Complete_system
• https://www.encyclopediaofmath.org/index.php/Partial_differential_equations_on_a_manifold
Realization (systems)
In systems theory, a realization of a state space model is an implementation of a given input-output behavior. That is, given an input-output relationship, a realization is a quadruple of (time-varying) matrices $[A(t),B(t),C(t),D(t)]$ such that
${\dot {\mathbf {x} }}(t)=A(t)\mathbf {x} (t)+B(t)\mathbf {u} (t)$
$\mathbf {y} (t)=C(t)\mathbf {x} (t)+D(t)\mathbf {u} (t)$
with $(u(t),y(t))$ describing the input and output of the system at time $t$.
LTI System
For a linear time-invariant system specified by a transfer matrix, $H(s)$, a realization is any quadruple of matrices $(A,B,C,D)$ such that $H(s)=C(sI-A)^{-1}B+D$.
Canonical realizations
Any given transfer function which is strictly proper can easily be transferred into state-space by the following approach (this example is for a 4-dimensional, single-input, single-output system):
Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
$H(s)={\frac {n_{3}s^{3}+n_{2}s^{2}+n_{1}s+n_{0}}{s^{4}+d_{3}s^{3}+d_{2}s^{2}+d_{1}s+d_{0}}}$.
The coefficients can now be inserted directly into the state-space model by the following approach:
${\dot {\textbf {x}}}(t)={\begin{bmatrix}-d_{3}&-d_{2}&-d_{1}&-d_{0}\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix}}{\textbf {x}}(t)+{\begin{bmatrix}1\\0\\0\\0\\\end{bmatrix}}{\textbf {u}}(t)$
${\textbf {y}}(t)={\begin{bmatrix}n_{3}&n_{2}&n_{1}&n_{0}\end{bmatrix}}{\textbf {x}}(t)$.
This state-space realization is called controllable canonical form (also known as phase variable canonical form) because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).
The transfer function coefficients can also be used to construct another type of canonical form
${\dot {\textbf {x}}}(t)={\begin{bmatrix}-d_{3}&1&0&0\\-d_{2}&0&1&0\\-d_{1}&0&0&1\\-d_{0}&0&0&0\end{bmatrix}}{\textbf {x}}(t)+{\begin{bmatrix}n_{3}\\n_{2}\\n_{1}\\n_{0}\end{bmatrix}}{\textbf {u}}(t)$
${\textbf {y}}(t)={\begin{bmatrix}1&0&0&0\end{bmatrix}}{\textbf {x}}(t)$.
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
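As a brief NumPy sketch (illustrative, not from any particular reference), the controllable canonical form above can be assembled directly from the transfer-function coefficients:

import numpy as np

def controllable_canonical(num, den):
    """num = [n3, n2, n1, n0], den = [d3, d2, d1, d0] for the strictly
    proper H(s) above; returns (A, B, C) of the controllable form."""
    k = len(den)
    A = np.zeros((k, k))
    A[0, :] = -np.asarray(den, dtype=float)  # first row: negated denominator
    A[1:, :-1] = np.eye(k - 1)               # chain of integrators below it
    B = np.zeros((k, 1))
    B[0, 0] = 1.0
    C = np.asarray(num, dtype=float).reshape(1, k)
    return A, B, C

A, B, C = controllable_canonical([0.0, 1.0, 0.0, 2.0], [4.0, 6.0, 4.0, 1.0])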
General System
D = 0
If we have an input $u(t)$, an output $y(t)$, and a weighting pattern $T(t,\sigma )$ then a realization is any triple of matrices $[A(t),B(t),C(t)]$ such that $T(t,\sigma )=C(t)\phi (t,\sigma )B(\sigma )$ where $\phi $ is the state-transition matrix associated with the realization.[1]
System identification
System identification techniques take the experimental data from a system and output a realization. Such techniques can utilize both input and output data (e.g. eigensystem realization algorithm) or can only include the output data (e.g. frequency domain decomposition). Typically an input-output technique would be more accurate, but the input data is not always available.
See also
• Grey box model
• Statistical Model
• System identification
References
1. Brockett, Roger W. (1970). Finite Dimensional Linear Systems. John Wiley & Sons. ISBN 978-0-471-10585-5.
Systems of Logic Based on Ordinals
Systems of Logic Based on Ordinals was the PhD dissertation of the mathematician Alan Turing.[1][2]
Turing's thesis is not about a new type of formal logic, nor was he interested in so-called ‘ranked logic’ systems derived from ordinal or relative numbering, in which comparisons can be made between truth-states on the basis of relative veracity. Instead, Turing investigated the possibility of resolving the Gödelian incompleteness condition using Cantor's method of infinites. This condition can be stated thus: in all systems with finite sets of axioms, an exclusive-or condition applies to expressive power and provability; i.e. one can have power and no proof, or proof and no power, but not both.
The thesis is an exploration of formal mathematical systems after Gödel's theorem. Gödel showed that for any formal system S powerful enough to represent arithmetic, there is a theorem G which is true but the system is unable to prove. G could be added as an additional axiom to the system in place of a proof. However this would create a new system S' with its own unprovable true theorem G', and so on. Turing's thesis looks at what happens if you simply iterate this process repeatedly, generating an infinite set of new axioms to add to the original theory, and even goes one step further in using transfinite recursion to go "past infinity," yielding a set of new theories Gn, one for each ordinal number n.
The thesis was completed at Princeton under Alonzo Church and was a classic work in mathematics which introduced the concept of ordinal logic.[3]
Martin Davis states that although Turing's use of a computing oracle is not a major focus of the dissertation, it has proven to be highly influential in theoretical computer science, e.g. in the polynomial time hierarchy.[4]
References
1. Turing, Alan (1938). Systems of Logic Based on Ordinals (PhD thesis). Princeton University. doi:10.1112/plms/s2-45.1.161. hdl:21.11116/0000-0001-91CE-3. ProQuest 301792588.
2. Turing, A. M. (1939). "Systems of Logic Based on Ordinals". Proceedings of the London Mathematical Society: 161–228. doi:10.1112/plms/s2-45.1.161. hdl:21.11116/0000-0001-91CE-3.
3. Solomon Feferman, Turing in the Land of O(z) in "The universal Turing machine: a half-century survey" by Rolf Herken 1995 ISBN 3-211-82637-8 page 111
4. Martin Davis "Computability, Computation and the Real World", in Imagination and Rigor edited by Settimo Termini 2006 ISBN 88-470-0320-2 pages 63-66
External links
• https://rauterberg.employee.id.tue.nl/lecturenotes/DDM110%20CAS/Turing/Turing-1939%20Sysyems%20of%20logic%20based%20on%20ordinals.pdf
• https://www.dcc.fc.up.pt/~acm/turing-phd.pdf
• https://web.archive.org/web/20121023103503/https://webspace.princeton.edu/users/jedwards/Turing%20Centennial%202012/Mudd%20Archive%20files/12285_AC100_Turing_1938.pdf
• "Turing's Princeton Dissertation". Princeton University Press. Retrieved January 10, 2012.
• Solomon Feferman (November 2006), "Turing's Thesis" (PDF), Notices of the AMS, 53 (10)
Formal system
A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules. These rules, which are used for carrying out the inference of theorems from axioms, are the logical calculus of the formal system. A formal system is essentially an "axiomatic system".[1]
In 1921, David Hilbert proposed to use such a system as the foundation for the knowledge in mathematics.[2] A formal system may represent a well-defined system of abstract thought.
The term formalism is sometimes a rough synonym for formal system, but it also refers to a given style of notation, for example, Paul Dirac's bra–ket notation.
Background
Each formal system is described by primitive symbols (which collectively form an alphabet) to finitely construct a formal language from a set of axioms through inferential rules of formation.
The system thus consists of valid formulas built up through finite combinations of the primitive symbols—combinations that are formed from the axioms in accordance with the stated rules.[3]
More formally, this can be expressed as the following:
1. A finite set of symbols, known as the alphabet, which are concatenated into finite strings called formulas.
2. A grammar consisting of rules to form formulas from simpler formulas. A formula is said to be well-formed if it can be formed using the rules of the formal grammar. It is often required that there be a decision procedure for deciding whether a formula is well-formed.
3. A set of axioms, or axiom schemata, consisting of well-formed formulas.
4. A set of inference rules. A well-formed formula that can be inferred from the axioms is known as a theorem of the formal system.
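As a concrete toy illustration of these four ingredients, the following Python sketch uses the alphabet {M, I, U}, the single axiom MI, and two of the four rewrite rules of Hofstadter's MIU system (see Further reading); the length bound and all names are choices of the sketch:

from collections import deque

AXIOMS = {"MI"}  # the single axiom of the toy system

def successors(s):
    # Rule 1: a string ending in I may have U appended (xI -> xIU).
    if s.endswith("I"):
        yield s + "U"
    # Rule 2: the part after the leading M may be doubled (Mx -> Mxx).
    if s.startswith("M"):
        yield "M" + s[1:] * 2

def theorems(max_len=6):
    """Breadth-first search for every theorem of length at most max_len."""
    seen, queue = set(AXIOMS), deque(AXIOMS)
    while queue:
        s = queue.popleft()
        for t in successors(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(sorted(theorems()))
# ['MI', 'MII', 'MIIII', 'MIIIIU', 'MIIU', 'MIU', 'MIUIU']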
Recursive
A formal system is said to be recursive (i.e. effective) or recursively enumerable if the set of axioms and the set of inference rules are decidable sets or semidecidable sets, respectively.
Inference and entailment
The entailment of the system by its logical foundation is what distinguishes a formal system from others which may have some basis in an abstract model. Often the formal system will be the basis for or even identified with a larger theory or field (e.g. Euclidean geometry) consistent with the usage in modern mathematics such as model theory.
Formal language
Main articles: Formal language and Formal grammar
A formal language is a language that is defined by a formal system. Like languages in linguistics, formal languages generally have two aspects:
• the syntax of a language is what the language looks like (more formally: the set of possible expressions that are valid utterances in the language) studied in formal language theory
• the semantics of a language are what the utterances of the language mean (which is formalized in various ways, depending on the type of language in question)
In computer science and linguistics usually only the syntax of a formal language is considered via the notion of a formal grammar. A formal grammar is a precise description of the syntax of a formal language: a set of strings. The two main categories of formal grammar are that of generative grammars, which are sets of rules for how strings in a language can be generated, and that of analytic grammars (or reductive grammars[4][5]), which are sets of rules for how a string can be analyzed to determine whether it is a member of the language. In short, an analytic grammar describes how to recognize when strings are members in the set, whereas a generative grammar describes how to write only those strings in the set.
In mathematics, a formal language is usually not described by a formal grammar but by (a) natural language, such as English. Logical systems are defined by both a deductive system and natural language. Deductive systems in turn are only defined by natural language (see below).
Deductive system
A deductive system, also called a deductive apparatus or a logic, consists of the axioms (or axiom schemata) and rules of inference that can be used to derive theorems of the system.[6]
Such deductive systems preserve deductive qualities in the formulas that are expressed in the system. Usually the quality we are concerned with is truth as opposed to falsehood. However, other modalities, such as justification or belief may be preserved instead.
In order to sustain its deductive integrity, a deductive apparatus must be definable without reference to any intended interpretation of the language. The aim is to ensure that each line of a derivation is merely a syntactic consequence of the lines that precede it. There should be no element of any interpretation of the language that gets involved with the deductive nature of the system.
An example of deductive system is first order predicate logic.
Logical system
A logical system or language (not to be confused with the kind of "formal language" discussed above, which is described by a formal grammar) is a deductive system (see section above; most commonly first order predicate logic) together with additional (non-logical) axioms. According to model theory, a logical system may be given one or more semantics or interpretations which describe whether a well-formed formula is satisfied by a given structure. A structure that satisfies all the axioms of the formal system is known as a model of the logical system. A logical system is sound if each well-formed formula that can be inferred from the axioms is satisfied by every model of the logical system. Conversely, a logic system is (semantically) complete if each well-formed formula that is satisfied by every model of the logical system can be inferred from the axioms.
An example of a logical system is Peano arithmetic. The standard model of arithmetic sets the domain of discourse to be the nonnegative integers and gives the symbols their usual meaning.[7] There are also non-standard models of arithmetic.
History
Early logic systems include the Indian logic of Pāṇini, the syllogistic logic of Aristotle, the propositional logic of Stoicism, and the Chinese logic of Gongsun Long (c. 325–250 BCE). In more recent times, contributors include George Boole, Augustus De Morgan, and Gottlob Frege. Mathematical logic was developed in 19th century Europe.
Formalism
Main article: Formal logical systems
Hilbert's program
Main article: Hilbert's program
David Hilbert instigated a formalist movement that was eventually tempered by Gödel's incompleteness theorems.
QED manifesto
Main article: QED manifesto
The QED manifesto represented a subsequent, as yet unsuccessful, effort at formalization of known mathematics.
Examples
Main article: List of formal systems
Examples of formal systems include:
• Lambda calculus
• Predicate calculus
• Propositional calculus
Variants
The following systems are variations of formal systems.
Proof system
Main articles: Proof system and Formal proof
Formal proofs are sequences of well-formed formulas (or wff for short). For a wff to qualify as part of a proof, it might either be an axiom or be the product of applying an inference rule on previous wffs in the proof sequence. The last wff in the sequence is recognized as a theorem.
The point of view that generating formal proofs is all there is to mathematics is often called formalism. David Hilbert founded metamathematics as a discipline for discussing formal systems. Any language that one uses to talk about a formal system is called a metalanguage. The metalanguage may be a natural language, or it may be partially formalized itself, but it is generally less completely formalized than the formal language component of the formal system under examination, which is then called the object language, that is, the object of the discussion in question.
Once a formal system is given, one can define the set of theorems which can be proved inside the formal system. This set consists of all wffs for which there is a proof. Thus all axioms are considered theorems. Unlike the grammar for wffs, there is no guarantee that there will be a decision procedure for deciding whether a given wff is a theorem or not. The notion of theorem just defined should not be confused with theorems about the formal system, which, in order to avoid confusion, are usually called metatheorems.
See also
• Formal method
• Formal science
• Rewriting system
• Substitution instance
• Theory (mathematical logic)
References
1. "Formal system, ENCYCLOPÆDIA BRITANNICA".
2. Zach, Richard (31 July 2003). "Hilbert's Program". Hilbert's Program, Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
3. Encyclopædia Britannica, Formal system definition, 2007.
4. Reductive grammar: (computer science) A set of syntactic rules for the analysis of strings to determine whether the strings exist in a language. McGraw-Hill Dictionary of Scientific and Technical Terms (6th ed.). McGraw-Hill.
5. "There are two classes of formal-language definition compiler-writing schemes. The productive grammar approach is the most common. A productive grammar consists primarrly of a set of rules that describe a method of generating all possible strings of the language. The reductive or analytical grammar technique states a set of rules that describe a method of analyzing any string of characters and deciding whether that string is in the language." "The TREE-META Compiler-Compiler System: A Meta Compiler System for the Univac 1108 and General Electric 645, University of Utah Technical Report RADC-TR-69-83. C. Stephen Carr, David A. Luther, Sherian Erdmann" (PDF). Retrieved 5 January 2015.
6. Hunter, Geoffrey, Metalogic: An Introduction to the Metatheory of Standard First-Order Logic, University of California Press, 1971
7. Kaye, Richard (1991). "1. The Standard Model". Models of Peano arithmetic. Oxford: Clarendon Press. p. 10. ISBN 9780198532132.
Further reading
• Raymond M. Smullyan, 1961. Theory of Formal Systems: Annals of Mathematics Studies, Princeton University Press (April 1, 1961) 156 pages ISBN 0-691-08047-X
• Stephen Cole Kleene, 1967. Mathematical Logic Reprinted by Dover, 2002. ISBN 0-486-42533-9
• Douglas Hofstadter, 1979. Gödel, Escher, Bach: An Eternal Golden Braid ISBN 978-0-465-02656-2. 777 pages.
External links
• Media related to Formal systems at Wikimedia Commons
• Encyclopædia Britannica, Formal system definition, 2007.
• What is a Formal System?: Some quotes from John Haugeland's `Artificial Intelligence: The Very Idea' (1985), pp. 48–64.
• Peter Suber, Formal Systems and Machines: An Isomorphism Archived 2011-05-24 at the Wayback Machine, 1997.
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Systems science
System
types
• Art
• Biological
• Coupled human–environment
• Ecological
• Economic
• Multi-agent
• Nervous
• Social
Concepts
• Doubling time
• Leverage points
• Limiting factor
• Negative feedback
• Positive feedback
Theoretical
fields
• Control theory
• Cybernetics
• Earth system science
• Living systems
• Sociotechnical system
• Systemics
• Urban metabolism
• World-systems theory
• Analysis
• Biology
• Dynamics
• Ecology
• Engineering
• Neuroscience
• Pharmacology
• Philosophy
• Psychology
• Theory (Systems thinking)
Scientists
• Alexander Bogdanov
• Russell L. Ackoff
• William Ross Ashby
• Ruzena Bajcsy
• Béla H. Bánáthy
• Gregory Bateson
• Anthony Stafford Beer
• Richard E. Bellman
• Ludwig von Bertalanffy
• Margaret Boden
• Kenneth E. Boulding
• Murray Bowen
• Kathleen Carley
• Mary Cartwright
• C. West Churchman
• Manfred Clynes
• George Dantzig
• Edsger W. Dijkstra
• Fred Emery
• Heinz von Foerster
• Stephanie Forrest
• Jay Wright Forrester
• Barbara Grosz
• Charles A. S. Hall
• Mike Jackson
• Lydia Kavraki
• James J. Kay
• Faina M. Kirillova
• George Klir
• Allenna Leonard
• Edward Norton Lorenz
• Niklas Luhmann
• Humberto Maturana
• Margaret Mead
• Donella Meadows
• Mihajlo D. Mesarovic
• James Grier Miller
• Radhika Nagpal
• Howard T. Odum
• Talcott Parsons
• Ilya Prigogine
• Qian Xuesen
• Anatol Rapoport
• John Seddon
• Peter Senge
• Claude Shannon
• Katia Sycara
• Eric Trist
• Francisco Varela
• Manuela M. Veloso
• Kevin Warwick
• Norbert Wiener
• Jennifer Wilby
• Anthony Wilden
Applications
• Systems theory in anthropology
• Systems theory in archaeology
• Systems theory in political science
Organizations
• List
• Principia Cybernetica
• Category
• Portal
• Commons
Major topics in Foundations of Mathematics
Mathematical logic
• Peano axioms
• Mathematical induction
• Formal system
• Axiomatic system
• Hilbert system
• Natural deduction
• Mathematical proof
• Model theory
• Mathematical constructivism
• Modal logic
• List of mathematical logic topics
Set theory
• Set
• Naive set theory
• Axiomatic set theory
• Zermelo set theory
• Zermelo–Fraenkel set theory
• Constructive set theory
• Descriptive set theory
• Determinacy
• Russell's paradox
• List of set theory topics
Type theory
• Axiom of reducibility
• Simple type theory
• Dependent type theory
• Intuitionistic type theory
• Homotopy type theory
• Univalent foundations
• Girard's paradox
Category theory
• Category
• Topos theory
• Category of sets
• Higher category theory
• ∞-groupoid
• ∞-topos theory
• Mathematical structuralism
• Glossary of category theory
• List of category theory topics
| Wikipedia |
Systoles of surfaces
In mathematics, systolic inequalities for curves on surfaces were first studied by Charles Loewner in 1949 (unpublished; see remark at end of P. M. Pu's paper in '52). Given a closed surface, its systole, denoted sys, is defined to be the least length of a loop that cannot be contracted to a point on the surface. The systolic area of a metric is defined to be the ratio area/sys². The systolic ratio SR is the reciprocal quantity sys²/area. See also Introduction to systolic geometry.
Torus
In 1949 Loewner proved his inequality for metrics on the torus T², namely that the systolic ratio SR(T²) is bounded above by $2/{\sqrt {3}}$, with equality in the flat (constant curvature) case of the equilateral torus (see hexagonal lattice).
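As a check of the equality case (a standard computation), take the flat torus $\mathbb {R} ^{2}/\Lambda $, where $\Lambda $ is the hexagonal lattice spanned by $1$ and $e^{i\pi /3}$. The shortest noncontractible loops have length $\mathrm {sys} =1$ (the length of a shortest nonzero lattice vector), while the area of a fundamental domain is $\sin(\pi /3)={\sqrt {3}}/2$, so that
$\mathrm {SR} ={\frac {\mathrm {sys} ^{2}}{\mathrm {area} }}={\frac {1}{{\sqrt {3}}/2}}={\frac {2}{\sqrt {3}}},$
attaining Loewner's bound.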
Real projective plane
A similar result is given by Pu's inequality for the real projective plane from 1952, due to Pao Ming Pu, with an upper bound of π/2 for the systolic ratio SR(RP²), also attained in the constant curvature case.
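For the round metric this is again an equality (a quick check): the real projective plane of constant curvature $+1$ has area $2\pi $ (half the area of the unit sphere), and its shortest noncontractible loops are the projections of half great circles, of length $\pi $. Hence
$\mathrm {SR} (\mathbb {RP} ^{2})={\frac {\pi ^{2}}{2\pi }}={\frac {\pi }{2}}.$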
Klein bottle
For the Klein bottle K, Bavard (1986) obtained an optimal upper bound of $\pi /{\sqrt {8}}$ for the systolic ratio:
$\mathrm {SR} (K)\leq {\frac {\pi }{\sqrt {8}}},$
based on work by Blatter from the 1960s.
Genus 2
An orientable surface of genus 2 satisfies Loewner's bound $\mathrm {SR} (2)\leq {\tfrac {2}{\sqrt {3}}}$, see (Katz–Sabourau '06). It is unknown whether every surface of positive genus satisfies Loewner's bound, but it is conjectured that they all do; the answer is affirmative for genus 20 and above by (Katz–Sabourau '05).
Arbitrary genus
For a closed surface of genus g, Hebda and Burago (1980) showed that the systolic ratio SR(g) is bounded above by the constant 2. Three years later, Mikhail Gromov found an upper bound for SR(g) given by a constant times
${\frac {(\log g)^{2}}{g}}.$
A similar lower bound (with a smaller constant) was obtained by Buser and Sarnak. Namely, they exhibited arithmetic hyperbolic Riemann surfaces with systole behaving as a constant times $\log(g)$. Note that the area is 4π(g − 1) by the Gauss–Bonnet theorem, so that SR(g) behaves asymptotically as a constant times ${\tfrac {(\log g)^{2}}{g}}$.
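Spelling out the arithmetic: if $\mathrm {sys} (\Sigma _{g})\sim C\log g$ while $\mathrm {area} =4\pi (g-1)$, then
$\mathrm {SR} (g)={\frac {\mathrm {sys} ^{2}}{\mathrm {area} }}\sim {\frac {C^{2}(\log g)^{2}}{4\pi g}},$
a constant times $(\log g)^{2}/g$, matching the shape of Gromov's upper bound.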
The study of the asymptotic behavior for large genus $g$ of the systole of hyperbolic surfaces reveals some interesting constants. Thus, Hurwitz surfaces $\Sigma _{g}$ defined by a tower of principal congruence subgroups of the (2,3,7) hyperbolic triangle group satisfy the bound
$\mathrm {sys} (\Sigma _{g})\geq {\frac {4}{3}}\log g,$
resulting from an analysis of the Hurwitz quaternion order. A similar bound holds for more general arithmetic Fuchsian groups. This 2007 result by Mikhail Katz, Mary Schaps, and Uzi Vishne improves an inequality due to Peter Buser and Peter Sarnak in the case of arithmetic groups defined over $\mathbb {Q} $, from 1994, which contained a nonzero additive constant. For the Hurwitz surfaces of principal congruence type, the systolic ratio SR(g) is asymptotic to
${\frac {4}{9\pi }}{\frac {(\log g)^{2}}{g}}.$
Using Katok's entropy inequality, the following asymptotic upper bound for SR(g) was found in (Katz-Sabourau 2005):
${\frac {(\log g)^{2}}{\pi g}},$
see also (Katz 2007), p. 85. Combining the two estimates, one obtains tight bounds for the asymptotic behavior of the systolic ratio of surfaces.
Sphere
There is also a version of the inequality for metrics on the sphere, for the invariant L defined as the least length of a closed geodesic of the metric. In '80, Gromov conjectured a lower bound of $1/(2{\sqrt {3}})$ for the ratio area/L². A lower bound of 1/961 obtained by Croke in '88 has recently been improved by Nabutovsky, Rotman, and Sabourau.
See also
• Differential geometry of surfaces
References
• Bavard, C. (1986). "Inégalité isosystolique pour la bouteille de Klein". Mathematische Annalen. 274 (3): 439–441. doi:10.1007/BF01457227.
• Buser, P.; Sarnak, P. (1994). "On the period matrix of a Riemann surface of large genus (With an appendix by J. H. Conway and N. J. A. Sloane)". Inventiones Mathematicae. 117 (1): 27–56. Bibcode:1994InMat.117...27B. doi:10.1007/BF01232233.
• Gromov, Mikhael (1983). "Filling Riemannian manifolds". Journal of Differential Geometry. 18 (1): 1–147. doi:10.4310/jdg/1214509283. MR 0697984.
• Hebda, James J. (1982). "Some lower bounds for the area of surfaces". Inventiones Mathematicae. 65 (3): 485–490. Bibcode:1982InMat..65..485H. doi:10.1007/BF01396632.
• Katz, Mikhail G. (2007). Systolic geometry and topology. Mathematical Surveys and Monographs. Vol. 137. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-4177-8.
• Katz, Mikhail G.; Sabourau, Stéphane (2005). "Entropy of systolically extremal surfaces and asymptotic bounds". Ergodic Theory and Dynamical Systems. 25 (4): 1209–1220. arXiv:math/0410312. doi:10.1017/S0143385704001014.
• Katz, Mikhail G.; Sabourau, Stéphane (2006). "Hyperelliptic surfaces are Loewner". Proceedings of the American Mathematical Society. 134 (4): 1189–1195. arXiv:math.DG/0407009. doi:10.1090/S0002-9939-05-08057-3.
• Katz, Mikhail G.; Schaps, Mary; Vishne, Uzi (2007). "Logarithmic growth of systole of arithmetic Riemann surfaces along congruence subgroups". Journal of Differential Geometry. 76 (3): 399–422. arXiv:math.DG/0505007. doi:10.4310/jdg/1180135693.
• Pu, P. M. (1952). "Some inequalities in certain nonorientable Riemannian manifolds". Pacific Journal of Mathematics. 2: 55–71. doi:10.2140/pjm.1952.2.55. MR 0048886.
Systolic geometry
1-systoles of surfaces
• Loewner's torus inequality
• Pu's inequality
• Filling area conjecture
• Bolza surface
• Systoles of surfaces
• Eisenstein integers
1-systoles of manifolds
• Gromov's systolic inequality for essential manifolds
• Essential manifold
• Filling radius
• Hermite constant
Higher systoles
• Gromov's inequality for complex projective space
• Systolic freedom
• Systolic category
| Wikipedia |
Systolic freedom
In differential geometry, systolic freedom refers to the fact that closed Riemannian manifolds may have arbitrarily small volume regardless of their systolic invariants. That is, systolic invariants or products of systolic invariants do not in general provide universal (i.e. curvature-free) lower bounds for the total volume of a closed Riemannian manifold.
Systolic freedom was first detected by Mikhail Gromov in an I.H.É.S. preprint in 1992 (which eventually appeared as Gromov 1996), and was further developed by Mikhail Katz, Michael Freedman and others. Gromov's observation was elaborated on by Marcel Berger (1993). One of the first publications to study systolic freedom in detail is by Katz (1995).
Systolic freedom has applications in quantum error correction. Croke & Katz (2003) survey the main results on systolic freedom.
Example
The complex projective plane admits Riemannian metrics of arbitrarily small volume, such that every essential surface is of area at least 1. Here a surface is called "essential" if it cannot be contracted to a point in the ambient 4-manifold.
Systolic constraint
The opposite of systolic freedom is systolic constraint, characterized by the presence of systolic inequalities such as Gromov's systolic inequality for essential manifolds.
References
• Berger, Marcel (1993), "Systoles et applications selon Gromov", Séminaire Bourbaki (in French), 1992/93. Astérisque 216, Exp. No. 771, 5, 279–310.
• Croke, Christopher B.; Katz, Mikhail (2003), "Universal volume bounds in Riemannian manifolds", Surveys in differential geometry, VIII (Boston, MA, 2002), Somerville, MA: Int. Press, pp. 109–137.
• Freedman, Michael H. (1999), "Z2-systolic-freedom", Proceedings of the Kirbyfest (Berkeley, CA, 1998), Geom. Topol. Monogr., vol. 2, Coventry: Geom. Topol. Publ., pp. 113–123.
• Freedman, Michael H.; Meyer, David A.; Luo, Feng (2002), "Z2-systolic freedom and quantum codes", Mathematics of quantum computation, Comput. Math. Ser., Boca Raton, FL: Chapman & Hall/CRC, pp. 287–320.
• Freedman, Michael H.; Meyer, David A. (2001), "Projective plane and planar quantum codes", Found. Comput. Math., vol. 1, pp. 325–332.
• Gromov, Mikhail (1996), "Systoles and intersystolic inequalities", Actes de la Table Ronde de Géométrie Différentielle (Luminy, 1992), Sémin. Congr., vol. 1, Paris: Soc. Math. France, pp. 291–362.
• Katz, Mikhail (1995), "Counterexamples to isosystolic inequalities", Geom. Dedicata, 57 (2): 195–206, doi:10.1007/bf01264937, S2CID 11211702.
| Wikipedia |
Systolic geometry
In mathematics, systolic geometry is the study of systolic invariants of manifolds and polyhedra, as initially conceived by Charles Loewner and developed by Mikhail Gromov, Michael Freedman, Peter Sarnak, Mikhail Katz, Larry Guth, and others, in its arithmetical, ergodic, and topological manifestations. See also a slower-paced Introduction to systolic geometry.
The notion of systole
The systole of a compact metric space X is a metric invariant of X, defined to be the least length of a noncontractible loop in X (i.e. a loop that cannot be contracted to a point in the ambient space X). In more technical language, we minimize length over free loops representing nontrivial conjugacy classes in the fundamental group of X. When X is a graph, the invariant is usually referred to as the girth, ever since the 1947 article on girth by W. T. Tutte.[1] Possibly inspired by Tutte's article, Loewner started thinking about systolic questions on surfaces in the late 1940s, resulting in a 1950 thesis by his student Pao Ming Pu. The actual term "systole" itself was not coined until a quarter century later, by Marcel Berger.
This line of research was, apparently, given further impetus by a remark of René Thom, in a conversation with Berger in the library of Strasbourg University during the 1961–62 academic year, shortly after the publication of the papers of R. Accola and C. Blatter. Referring to these systolic inequalities, Thom reportedly exclaimed: Mais c'est fondamental! [These results are of fundamental importance!]
Subsequently, Berger popularized the subject in a series of articles and books, most recently in the March 2008 issue of the Notices of the American Mathematical Society (see reference below). A bibliography at the Website for systolic geometry and topology currently contains over 160 articles. Systolic geometry is a rapidly developing field, featuring a number of recent publications in leading journals. Recently (see the 2006 paper by Katz and Rudyak below), the link with the Lusternik–Schnirelmann category has emerged. The existence of such a link can be thought of as a theorem in systolic topology.
Property of a centrally symmetric polyhedron in 3-space
Every convex centrally symmetric polyhedron P in R3 admits a pair of opposite (antipodal) points and a path of length L joining them and lying on the boundary ∂P of P, satisfying
$L^{2}\leq {\frac {\pi }{4}}\mathrm {area} (\partial P).$
An alternative formulation is as follows. Any centrally symmetric convex body of surface area A can be squeezed through a noose of length ${\sqrt {\pi A}}$, with the tightest fit achieved by a sphere. This property is equivalent to a special case of Pu's inequality (see below), one of the earliest systolic inequalities.
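For the sphere the two formulations agree by a direct computation: a round sphere of surface area $A$ has radius $r={\sqrt {A/4\pi }}$, and the tightest noose is a great circle, of length
$2\pi r=2\pi {\sqrt {\frac {A}{4\pi }}}={\sqrt {\pi A}}.$
Equivalently, taking $L$ to be half a great circle in the displayed inequality gives $L^{2}=\pi ^{2}r^{2}={\frac {\pi }{4}}(4\pi r^{2})$, the case of equality.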
Concepts
To give a preliminary idea of the flavor of the field, one could make the following observations. The main thrust of Thom's remark to Berger quoted above appears to be the following. Whenever one encounters an inequality relating geometric invariants, such a phenomenon in itself is interesting; all the more so when the inequality is sharp (i.e., optimal). The classical isoperimetric inequality is a good example.
In systolic questions about surfaces, integral-geometric identities play a particularly important role. Roughly speaking, there is an integral identity relating area on the one hand, and an average of energies of a suitable family of loops on the other. By the Cauchy–Schwarz inequality, energy is an upper bound for length squared; hence one obtains an inequality between area and the square of the systole. Such an approach works both for the Loewner inequality
$\mathrm {sys} ^{2}\leq {\frac {2}{\sqrt {3}}}\cdot \mathrm {area} $
for the torus, where the case of equality is attained by the flat torus whose deck transformations form the lattice of Eisenstein integers,
and for Pu's inequality for the real projective plane P2(R):
$\mathrm {sys} ^{2}\leq {\frac {\pi }{2}}\cdot \mathrm {area} $,
with equality characterizing a metric of constant Gaussian curvature.
An application of the computational formula for the variance in fact yields the following version of Loewner's torus inequality with isosystolic defect:
$\mathrm {area} -{\frac {\sqrt {3}}{2}}\mathrm {sys} ^{2}\geq \mathrm {var} (f),$
where f is the conformal factor of the metric with respect to a unit area flat metric in its conformal class. This inequality can be thought of as analogous to Bonnesen's inequality with isoperimetric defect, a strengthening of the isoperimetric inequality.
A number of new inequalities of this type have recently been discovered, including universal volume lower bounds. More details appear at systoles of surfaces.
Gromov's systolic inequality
The deepest result in the field is Gromov's inequality for the homotopy 1-systole of an essential n-manifold M:
$\operatorname {sys\pi } _{1}{}^{n}\leq C_{n}\operatorname {vol} (M),$
where Cn is a universal constant only depending on the dimension of M. Here the homotopy systole sysπ1 is by definition the least length of a noncontractible loop in M. A manifold is called essential if its fundamental class [M] represents a nontrivial class in the homology of its fundamental group. The proof involves a new invariant called the filling radius, introduced by Gromov, defined as follows.
Denote by A the coefficient ring Z or Z2, depending on whether or not M is orientable. Then the fundamental class, denoted [M], of a compact n-dimensional manifold M is a generator of $H_{n}(M;A)=A$. Given an imbedding of M in Euclidean space E, we set
$\mathrm {FillRad} (M\subset E)=\inf \left\{\epsilon >0\left|\;\iota _{\epsilon }([M])=0\in H_{n}(U_{\epsilon }M)\right.\right\},$
where ιε is the inclusion homomorphism induced by the inclusion of M in its ε-neighborhood Uε M in E.
To define an absolute filling radius in a situation where M is equipped with a Riemannian metric g, Gromov proceeds as follows. One exploits an imbedding due to C. Kuratowski. One imbeds M in the Banach space L∞(M) of bounded Borel functions on M, equipped with the sup norm $\|\;\|$. Namely, we map a point x ∈ M to the function fx ∈ L∞(M) defined by the formula fx(y) = d(x,y) for all y ∈ M, where d is the distance function defined by the metric. By the triangle inequality we have $d(x,y)=\|f_{x}-f_{y}\|,$ and therefore the imbedding is strongly isometric, in the precise sense that internal distance and ambient distance coincide. Such a strongly isometric imbedding is impossible if the ambient space is a Hilbert space, even when M is the Riemannian circle (the distance between opposite points must be π, not 2!). We then set E = L∞(M) in the formula above, and define
$\mathrm {FillRad} (M)=\mathrm {FillRad} \left(M\subset L^{\infty }(M)\right).$
Namely, Gromov proved a sharp inequality relating the systole and the filling radius,
$\mathrm {sys\pi } _{1}\leq 6\;\mathrm {FillRad} (M),$
valid for all essential manifolds M; as well as an inequality
$\mathrm {FillRad} \leq C_{n}\mathrm {vol} _{n}{}^{1/n}(M),$
valid for all closed manifolds M.
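Combining the two displayed inequalities yields the systolic inequality stated at the start of this section: for an essential manifold,
$\mathrm {sys\pi } _{1}\leq 6\,\mathrm {FillRad} (M)\leq 6C_{n}\,\mathrm {vol} _{n}{}^{1/n}(M),$
and raising to the $n$th power gives $\mathrm {sys\pi } _{1}{}^{n}\leq (6C_{n})^{n}\,\mathrm {vol} (M)$, i.e. Gromov's inequality with the universal constant $(6C_{n})^{n}$.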
A summary of a proof, based on recent results in geometric measure theory by S. Wenger, building upon earlier work by L. Ambrosio and B. Kirchheim, appears in Section 12.2 of the book "Systolic geometry and topology" referenced below. A completely different approach to the proof of Gromov's inequality was recently proposed by Larry Guth.[2]
Gromov's stable inequality
A significant difference between 1-systolic invariants (defined in terms of lengths of loops) and the higher, k-systolic invariants (defined in terms of areas of cycles, etc.) should be kept in mind. While a number of optimal systolic inequalities, involving the 1-systoles, have by now been obtained, just about the only optimal inequality involving purely the higher k-systoles is Gromov's optimal stable 2-systolic inequality
$\mathrm {stsys} _{2}{}^{n}\leq n!\;\mathrm {vol} _{2n}(\mathbb {CP} ^{n})$
for complex projective space, where the optimal bound is attained by the symmetric Fubini–Study metric, pointing to the link to quantum mechanics. Here the stable 2-systole of a Riemannian manifold M is defined by setting
$\mathrm {stsys} _{2}=\lambda _{1}\left(H_{2}(M,\mathbb {Z} )_{\mathbb {R} },\|\;\|\right),$
where $\|\;\|$ is the stable norm, while λ1 is the least norm of a nonzero element of the lattice. Just how exceptional Gromov's stable inequality is, only became clear recently. Namely, it was discovered that, contrary to expectation, the symmetric metric on the quaternionic projective plane is not its systolically optimal metric, in contrast with the 2-systole in the complex case. While the quaternionic projective plane with its symmetric metric has a middle-dimensional stable systolic ratio of 10/3, the analogous ratio for the symmetric metric of the complex projective 4-space gives the value 6, while the best available upper bound for such a ratio of an arbitrary metric on both of these spaces is 14. This upper bound is related to properties of the Lie algebra E7. If there exists an 8-manifold with exceptional Spin(7) holonomy and 4-th Betti number 1, then the value 14 is in fact optimal. Manifolds with Spin(7) holonomy have been studied intensively by Dominic Joyce.
Lower bounds for 2-systoles
Similarly, just about the only nontrivial lower bound for a k-systole with k = 2, results from recent work in gauge theory and J-holomorphic curves. The study of lower bounds for the conformal 2-systole of 4-manifolds has led to a simplified proof of the density of the image of the period map, by Jake Solomon.
Schottky problem
Perhaps one of the most striking applications of systoles is in the context of the Schottky problem, by P. Buser and P. Sarnak, who distinguished the Jacobians of Riemann surfaces among principally polarized abelian varieties, laying the foundation for systolic arithmetic.
Lusternik–Schnirelmann category
Asking systolic questions often stimulates questions in related fields. Thus, a notion of systolic category of a manifold has been defined and investigated, exhibiting a connection to the Lusternik–Schnirelmann category (LS category). Note that the systolic category (as well as the LS category) is, by definition, an integer. The two categories have been shown to coincide for both surfaces and 3-manifolds. Moreover, for orientable 4-manifolds, systolic category is a lower bound for LS category. Once the connection is established, the influence is mutual: known results about LS category stimulate systolic questions, and vice versa.
The new invariant was introduced by Katz and Rudyak (see below). Since the invariant turns out to be closely related to the Lusternik–Schnirelmann category (LS category), it was called systolic category.
Systolic category of a manifold M is defined in terms of the various k-systoles of M. Roughly speaking, the idea is as follows. Given a manifold M, one looks for the longest product of systoles which give a "curvature-free" lower bound for the total volume of M (with a constant independent of the metric). It is natural to include systolic invariants of the covers of M in the definition, as well. The number of factors in such a "longest product" is by definition the systolic category of M.
For example, Gromov showed that an essential n-manifold admits a volume lower bound in terms of the n'th power of the homotopy 1-systole (see section above). It follows that the systolic category of an essential n-manifold is precisely n. In fact, for closed n-manifolds, the maximal value of both the LS category and the systolic category is attained simultaneously.
Another hint at the existence of an intriguing relation between the two categories is the relation to the invariant called the cuplength. Thus, the real cuplength turns out to be a lower bound for both categories.
Systolic category coincides with the LS category in a number of cases, including the case of manifolds of dimensions 2 and 3. In dimension 4, it was recently shown that the systolic category is a lower bound for the LS category.
Systolic hyperbolic geometry
The study of the asymptotic behavior for large genus g of the systole of hyperbolic surfaces reveals some interesting constants. Thus, Hurwitz surfaces Σg defined by a tower of principal congruence subgroups of the (2,3,7) hyperbolic triangle group satisfy the bound
$\mathrm {sys} \pi _{1}(\Sigma _{g})\geq {\frac {4}{3}}\log g,$
and a similar bound holds for more general arithmetic Fuchsian groups. This 2007 result by Katz, Schaps, and Vishne[3] generalizes the results of Peter Buser and Peter Sarnak in the case of arithmetic groups defined over Q, from their seminal 1994 paper.[4]
A bibliography for systoles in hyperbolic geometry currently numbers forty articles. Interesting examples are provided by the Bolza surface, Klein quartic, Macbeath surface, First Hurwitz triplet.
Relation to Abel–Jacobi maps
A family of optimal systolic inequalities is obtained as an application of the techniques of Burago and Ivanov, exploiting suitable Abel–Jacobi maps, defined as follows.
Let M be a manifold, π = π1(M) its fundamental group, and f: π → πab the abelianisation map. Let tor be the torsion subgroup of πab, and let g: πab → πab/tor be the quotient by torsion. Clearly, πab/tor = Zb, where b = b1(M). Let φ = g ∘ f: π → Zb be the composed homomorphism.
Definition: The cover ${\bar {M}}$ of the manifold M corresponding to the subgroup Ker(φ) ⊂ π is called the universal (or maximal) free abelian cover.
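For example (an immediate check of the definitions), if $M=T^{2}$ is the torus, then $\pi =\mathbb {Z} ^{2}$ is already free abelian, so tor is trivial, $\varphi $ is an isomorphism, and $\operatorname {Ker} (\varphi )$ is trivial; the universal free abelian cover is then the universal cover $\mathbb {R} ^{2}$.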
Now assume M has a Riemannian metric. Let E be the space of harmonic 1-forms on M, with dual E* canonically identified with H1(M,R). By integrating an integral harmonic 1-form along paths from a basepoint x0 ∈ M, we obtain a map to the circle R/Z = S1.
Similarly, in order to define a map M → H1(M,R)/H1(M,Z)R without choosing a basis for cohomology, we argue as follows. Let x be a point in the universal cover ${\tilde {M}}$ of M. Thus x is represented by a point of M together with a path c from x0 to it. By integrating along the path c, we obtain a linear form, $h\to \int _{c}h$, on E. We thus obtain a map ${\tilde {M}}\to E^{*}=H_{1}(M,\mathbf {R} )$, which, furthermore, descends to a map
${\overline {A}}_{M}:{\overline {M}}\to E^{*},\;\;c\mapsto \left(h\mapsto \int _{c}h\right),$
where ${\overline {M}}$ is the universal free abelian cover.
Definition: The Jacobi variety (Jacobi torus) of M is the torus J1(M) = H1(M,R)/H1(M,Z)R.
Definition: The Abel–Jacobi map $A_{M}:M\to J_{1}(M),$ is obtained from the map above by passing to quotients. The Abel–Jacobi map is unique up to translations of the Jacobi torus.
As an example one can cite the following inequality, due to D. Burago, S. Ivanov and M. Gromov.
Let M be an n-dimensional Riemannian manifold with first Betti number n, such that the map from M to its Jacobi torus has nonzero degree. Then M satisfies the optimal stable systolic inequality
$\mathrm {stsys} _{1}{}^{n}\leq \gamma _{n}\mathrm {vol} _{n}(M),$
where $\gamma _{n}$ is the classical Hermite constant.
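For $n=2$ one has $\gamma _{2}=2/{\sqrt {3}}$, the Hermite constant of the hexagonal lattice, and the displayed inequality reads $\mathrm {stsys} _{1}{}^{2}\leq {\tfrac {2}{\sqrt {3}}}\,\mathrm {area} (M)$, recovering Loewner's torus inequality (in its stable form) together with its equality case.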
Related fields, volume entropy
Asymptotic phenomena for the systole of surfaces of large genus have been shown to be related to interesting ergodic phenomena, and to properties of congruence subgroups of arithmetic groups.
Gromov's 1983 inequality for the homotopy systole implies, in particular, a uniform lower bound for the area of an aspherical surface in terms of its systole. Such a bound generalizes the inequalities of Loewner and Pu, albeit in a non-optimal fashion.
Gromov's seminal 1983 paper also contains asymptotic bounds relating the systole and the area, which improve the uniform bound (valid in all dimensions).
It was discovered recently (see paper by Katz and Sabourau below) that the volume entropy h, together with A. Katok's optimal inequality for h, is the "right" intermediary in a transparent proof of M. Gromov's asymptotic bound for the systolic ratio of surfaces of large genus.
The classical result of A. Katok states that every metric on a closed surface M with negative Euler characteristic satisfies an optimal inequality relating the entropy and the area.
It turns out that the minimal entropy of a closed surface can be related to its optimal systolic ratio. Namely, there is an upper bound for the entropy of a systolically extremal surface, in terms of its systole. By combining this upper bound with Katok's optimal lower bound in terms of the volume, one obtains a simpler alternative proof of Gromov's asymptotic estimate for the optimal systolic ratio of surfaces of large genus. Furthermore, such an approach yields an improved multiplicative constant in Gromov's theorem.
As an application, this method implies that every metric on a surface of genus at least 20 satisfies Loewner's torus inequality. This improves the best earlier estimate of 50 which followed from an estimate of Gromov's.
Filling area conjecture
Main article: filling area conjecture
Gromov's filling area conjecture has been proved in a hyperelliptic setting (see reference by Bangert et al. below).
The filling area conjecture asserts that among all possible fillings of the Riemannian circle of length 2π by a surface with the strongly isometric property, the round hemisphere has the least area. Here the Riemannian circle refers to the unique closed 1-dimensional Riemannian manifold of total 1-volume 2π and Riemannian diameter π.
To explain the conjecture, we start with the observation that the equatorial circle of the unit 2-sphere, S2 ⊂ R3, is a Riemannian circle S1 of length 2π and diameter π.
More precisely, the Riemannian distance function of S1 is the restriction of the ambient Riemannian distance on the sphere. This property is not satisfied by the standard imbedding of the unit circle in the Euclidean plane, where a pair of opposite points are at distance 2, not π.
We consider all fillings of S1 by a surface, such that the restricted metric defined by the inclusion of the circle as the boundary of the surface is the Riemannian metric of a circle of length 2π. The inclusion of the circle as the boundary is then called a strongly isometric imbedding of the circle.
In 1983 Gromov conjectured that the round hemisphere gives the "best" way of filling the circle among all filling surfaces.
The case of simply-connected fillings is equivalent to Pu's inequality. Recently the case of genus-1 fillings was settled affirmatively, as well (see reference by Bangert et al. below). Namely, it turns out that one can exploit a half-century-old formula by J. Hersch from integral geometry: consider the family of figure-8 loops on a football, with the self-intersection point at the equator. Hersch's formula expresses the area of a metric in the conformal class of the football as an average of the energies of the figure-8 loops from the family. An application of Hersch's formula to the hyperelliptic quotient of the Riemann surface proves the filling area conjecture in this case.
Other systolic ramifications of hyperellipticity have been identified in genus 2.
Surveys
The surveys in the field include M. Berger's survey (1993), Gromov's survey (1996), Gromov's book (1999), Berger's panoramic book (2003), as well as Katz's book (2007). These references may help a beginner enter the field. They also contain open problems to work on.
See also
• Filling area conjecture
• First Hurwitz triplet
• Girth (functional analysis)
• Gromov's inequality for complex projective space
• Gromov's systolic inequality for essential manifolds
• List of differential geometry topics
• Loewner's torus inequality
• Pu's inequality
• Systoles of surfaces
• Systolic freedom
Notes
1. Tutte, William T. (1947). "A family of cubical graphs". Proc. Cambridge Philos. Soc. 43 (4): 459–474. Bibcode:1947PCPS...43..459T. doi:10.1017/S0305004100023720. MR 0021678. S2CID 123505185.
2. Guth, Larry (2011). "Volumes of balls in large Riemannian manifolds". Annals of Mathematics. 173 (1): 51–76. arXiv:math/0610212. doi:10.4007/annals.2011.173.1.2. MR 2753599. S2CID 1392012.
3. Katz, Mikhail G.; Schaps, Mary; Vishne, Uzi (2007). "Logarithmic growth of systole of arithmetic Riemann surfaces along congruence subgroups". Journal of Differential Geometry. 76 (3): 399–422. arXiv:math.DG/0505007. doi:10.4310/jdg/1180135693.
4. Buser, P.; Sarnak, P. (1994). "On the period matrix of a Riemann surface of large genus (with an Appendix by J.H. Conway and N.J.A. Sloane)". Inventiones Mathematicae. 117 (1): 27–56. doi:10.1007/BF01232233. ISSN 0020-9910. S2CID 116904696.
References
• Bangert, V.; Croke, C.; Ivanov, S.; Katz, M. (2005). "Filling area conjecture and ovalless real hyperelliptic surfaces". Geometric and Functional Analysis. 15 (3): 577–597. arXiv:math/0405583. CiteSeerX 10.1.1.240.2242. doi:10.1007/s00039-005-0517-8. S2CID 17100812.
• Berger, Marcel (1992–1993). "Systoles et applications selon Gromov" (PDF). Séminaire Bourbaki. 35: 279–310.
• Berger, M. (2003). A panoramic view of Riemannian geometry. Springer. ISBN 978-3-642-18245-7.
• Berger, M. (2008). "What is... a Systole?" (PDF). Notices of the AMS. 55 (3): 374–6.
• Gromov, M. (1983). "Filling Riemannian manifolds". J. Diff. Geom. 18: 1–147. CiteSeerX 10.1.1.400.9154. doi:10.4310/jdg/1214509283.
• Gromov, M. (1996). "Systoles and intersystolic inequalities". Actes de la Table Ronde de Géométrie Différentielle (Luminy, 1992). Sémin. Congr. Vol. 1. Soc. Math. France. pp. 291–362. CiteSeerX 10.1.1.539.1365.
• Katz, M.; Semmes, S.; Gromov, M. (2007) [2001]. Metric Structures for Riemannian and Non-Riemannian Spaces. Progress in Mathematics. Vol. 152. Birkhäuser. ISBN 978-0-8176-4583-0.
• Katz, M. (1983). "The filling radius of two-point homogeneous spaces". Journal of Differential Geometry. 18 (3): 505–511. doi:10.4310/jdg/1214437785.
• Katz, M. (2007). Systolic geometry and topology. Mathematical Surveys and Monographs. Vol. 137. American Mathematical Society. ISBN 978-0-8218-4177-8.
• Katz, M.; Rudyak, Y. (2006). "Systolic category and Lusternik–Schnirelman category of low-dimensional manifolds". Communications on Pure and Applied Mathematics. 59: 1433–56. arXiv:math/0410456. CiteSeerX 10.1.1.236.3757. doi:10.1002/cpa.20146. S2CID 15470409.
• Katz, M.; Sabourau, S. (2005). "Entropy of systolically extremal surfaces and asymptotic bounds". Ergo. Th. Dynam. Sys. 25 (4): 1209–20. arXiv:math/0410312. CiteSeerX 10.1.1.236.5949. doi:10.1017/S0143385704001014. S2CID 11631690.
• Pu, P.M. (1952). "Some inequalities in certain nonorientable Riemannian manifolds" (PDF). Pacific J. Math. 2: 55–71. doi:10.2140/pjm.1952.2.55.
External links
• AMS webpage for Mikhail Katz's book.
• Website for systolic geometry and topology
| Wikipedia |
Introduction to systolic geometry
Systolic geometry is a branch of differential geometry, a field within mathematics, studying problems such as the relationship between the area inside a closed curve C, and the length or perimeter of C. Since the area A may be small while the length l is large, when C looks elongated, the relationship can only take the form of an inequality. What is more, such an inequality would be an upper bound for A: there is no interesting lower bound just in terms of the length.
This article is a non-technical introduction to the subject. For the main encyclopedia article, see Systolic geometry.
Mikhail Gromov once voiced the opinion that the isoperimetric inequality was known already to the Ancient Greeks. The mythological tale of Dido, Queen of Carthage shows that problems about making a maximum area for a given perimeter were posed in a natural way, in past eras.
The relation between length and area is closely related to the physical phenomenon known as surface tension, which gives a visible form to the comparable relation between surface area and volume. The familiar shapes of drops of water express minima of surface area.
The purpose of this article is to explain another such relation between length and area. A space is called simply connected if every loop in the space can be contracted to a point in a continuous fashion. For example, a room with a pillar in the middle, connecting floor to ceiling, is not simply connected. In geometry, a systole is a distance which is characteristic of a compact metric space which is not simply connected. It is the length of a shortest loop in the space that cannot be contracted to a point in the space. In the room example, absent other features, the systole would be the circumference of the pillar. Systolic geometry gives lower bounds for various attributes of the space in terms of its systole.
It is known that the Fubini–Study metric is the natural metric for the geometrisation of quantum mechanics. In an intriguing connection to global geometric phenomena, it turns out that the Fubini–Study metric can be characterized as the boundary case of equality in Gromov's inequality for complex projective space, involving an area quantity called the 2-systole, pointing to a possible connection to quantum mechanical phenomena.
In the following, these systolic inequalities will be compared to the classical isoperimetric inequalities, which can in turn be motivated by physical phenomena observed in the behavior of a water drop.
Surface tension and shape of a water drop
Perhaps the most familiar physical manifestation of the 3-dimensional isoperimetric inequality is the shape of a drop of water. Namely, a drop will typically assume a symmetric round shape. Since the amount of water in a drop is fixed, surface tension forces the drop into a shape which minimizes the surface area of the drop, namely a round sphere. Thus the round shape of the drop is a consequence of the phenomenon of surface tension. Mathematically, this phenomenon is expressed by the isoperimetric inequality.
Isoperimetric inequality in the plane
The solution to the isoperimetric problem in the plane is usually expressed in the form of an inequality that relates the length $L$ of a closed curve and the area $A$ of the planar region that it encloses. The isoperimetric inequality states that
$4\pi A\leq L^{2},\,$
and that the equality holds if and only if the curve is a round circle. The inequality is an upper bound for area in terms of length.
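For a round circle of radius $r$ one has $L=2\pi r$ and $A=\pi r^{2}$, so that
$4\pi A=4\pi ^{2}r^{2}=L^{2},$
the case of equality; for any other curve of the same length the enclosed area is strictly smaller.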
Central symmetry
Recall the notion of central symmetry: a Euclidean polyhedron is called centrally symmetric if it is invariant under the antipodal map
$x\mapsto -x.\,$
Thus, in the plane central symmetry is the rotation by 180 degrees. For example, an ellipse is centrally symmetric, as is any ellipsoid in 3-space.
Property of a centrally symmetric polyhedron in 3-space
There is a geometric inequality that is in a sense dual to the isoperimetric inequality in the following sense. Both involve a length and an area. The isoperimetric inequality is an upper bound for area in terms of length. There is a geometric inequality which provides an upper bound for a certain length in terms of area. More precisely it can be described as follows.
Any centrally symmetric convex body of surface area $A$ can be squeezed through a noose of length ${\sqrt {\pi A}}$, with the tightest fit achieved by a sphere. This property is equivalent to a special case of Pu's inequality, one of the earliest systolic inequalities.
For example, an ellipsoid is an example of a convex centrally symmetric body in 3-space. It may be helpful to the reader to develop an intuition for the property mentioned above in the context of thinking about ellipsoidal examples.
An alternative formulation is as follows. Every convex centrally symmetric body $P$ in ${\mathbb {R} }^{3}$ admits a pair of opposite (antipodal) points and a path of length $L$ joining them and lying on the boundary $\partial P$ of $P$, satisfying
$L^{2}\leq {\frac {\pi }{4}}\mathrm {area} (\partial P).$
Notion of systole
The systole of a compact metric space $X$ is a metric invariant of $X$, defined to be the least length of a noncontractible loop in $X$. We will denote it as follows:
$\mathrm {sys} (X).\,$
Note that a loop minimizing length is necessarily a closed geodesic. When $X$ is a graph, the invariant is usually referred to as the girth, ever since the 1947 article by William Tutte. Possibly inspired by Tutte's article, Charles Loewner started thinking about systolic questions on surfaces in the late 1940s, resulting in a 1950 thesis by his student P. M. Pu. The actual term systole itself was not coined until a quarter century later, by Marcel Berger.
This line of research was, apparently, given further impetus by a remark of René Thom, in a conversation with Berger in the library of Strasbourg University during the 1961–62 academic year, shortly after the publication of the papers of R. Accola and C. Blatter. Referring to these systolic inequalities, Thom reportedly exclaimed: Mais c'est fondamental! [These results are of fundamental importance!]
Subsequently, Berger popularized the subject in a series of articles and books, most recently in the March 2008 issue of the Notices of the American Mathematical Society. A bibliography at the Website for systolic geometry and topology currently contains over 170 articles. Systolic geometry is a rapidly developing field, featuring a number of recent publications in leading journals. Recently, an intriguing link has emerged with the Lusternik–Schnirelmann category. The existence of such a link can be thought of as a theorem in systolic topology.
The real projective plane
In projective geometry, the real projective plane $\mathbb {RP} ^{2}$ is defined as the collection of lines through the origin in $\mathbb {R} ^{3}$. The distance function on $\mathbb {RP} ^{2}$ is most readily understood from this point of view. Namely, the distance between two lines through the origin is by definition the angle between them (measured in radians), or more precisely the lesser of the two angles. This distance function corresponds to the metric of constant Gaussian curvature +1.
Alternatively, $\mathbb {RP} ^{2}$ can be defined as the surface obtained by identifying each pair of antipodal points on the 2-sphere.
Other metrics on $\mathbb {RP} ^{2}$ can be obtained by quotienting metrics on $S^{2}$ imbedded in 3-space in a centrally symmetric way.
Topologically, $\mathbb {RP} ^{2}$ can be obtained from the Möbius strip by attaching a disk along the boundary.
Among closed surfaces, the real projective plane is the simplest non-orientable such surface.
Pu's inequality
Pu's inequality for the real projective plane applies to general Riemannian metrics on $\mathbb {RP} ^{2}$.
A student of Charles Loewner's, Pao Ming Pu proved in a 1950 thesis (published in 1952) that every metric $g$ on the real projective plane $\mathbb {RP} ^{2}$ satisfies the optimal inequality
$\mathrm {sys} (g)^{2}\leq {\frac {\pi }{2}}\mathrm {area} (g),$
where $\mathrm {sys} $ is the systole. The boundary case of equality is attained precisely when the metric is of constant Gaussian curvature. Alternatively, the inequality can be presented as follows:
$\mathrm {area} (g)-{\frac {2}{\pi }}\mathrm {sys} (g)^{2}\geq 0.$
There is a vast generalisation of Pu's inequality, due to Mikhail Gromov, called Gromov's systolic inequality for essential manifolds. To state his result, one requires a topological notion of an essential manifold.
Loewner's torus inequality
Similarly to Pu's inequality, Loewner's torus inequality relates the total area, to the systole, i.e. least length of a noncontractible loop on the torus $(T^{2},g)$:
$\mathrm {area} (g)-{\tfrac {\sqrt {3}}{2}}\mathrm {sys} (g)^{2}\geq 0.$
The boundary case of equality is attained if and only if the metric is homothetic to the flat metric obtained as the quotient of ${\mathbb {R} }^{2}$ by the lattice formed by the Eisenstein integers.
Bonnesen's inequality
The classical Bonnesen's inequality is the strengthened isoperimetric inequality
$L^{2}-4\pi A\geq \pi ^{2}(R-r)^{2}.\,$
Here $A$ is the area of the region bounded by a closed Jordan curve of length (perimeter) $L$ in the plane, $R$ is the circumradius of the bounded region, and $r$ is its inradius. The error term $\pi ^{2}(R-r)^{2}$ on the right hand side is traditionally called the isoperimetric defect. There exists a similar strengthening of Loewner's inequality.
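As an illustration (a standard exercise), take an ellipse with semi-axes $a\geq b$, so that $A=\pi ab$, $R=a$ and $r=b$. Bonnesen's inequality then gives
$L^{2}\geq 4\pi ^{2}ab+\pi ^{2}(a-b)^{2}=\pi ^{2}(a+b)^{2},$
that is, $L\geq \pi (a+b)$, a well-known lower bound for the perimeter of an ellipse, with equality only in the circular case $a=b$.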
Loewner's inequality with a defect term
The explanation of the strengthened version of Loewner's inequality is somewhat more technical than the rest of this article. It seems worth including it here for the sake of completeness. The strengthened version is the inequality
$\mathrm {area} (g)-{\tfrac {\sqrt {3}}{2}}\mathrm {sys} (g)^{2}\geq \mathrm {Var} (f),$
where Var is the probabilistic variance while f is the conformal factor expressing the metric g in terms of the flat metric of unit area in the conformal class of g. The proof results from a combination of the computational formula for the variance and Fubini's theorem (see Horowitz et al., 2009).
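Unpacking this (with the convention, assumed here, that $g=f^{2}g_{0}$, where $g_{0}$ is the flat metric of unit area): writing $\mathrm {E} $ for the integral against the unit-area flat measure, one has $\mathrm {area} (g)=\mathrm {E} (f^{2})$ and $\mathrm {Var} (f)=\mathrm {E} (f^{2})-\mathrm {E} (f)^{2}$, so the strengthened inequality is equivalent to
$\mathrm {sys} (g)^{2}\leq {\tfrac {2}{\sqrt {3}}}\,\mathrm {E} (f)^{2},$
a bound for the systole in terms of the mean of the conformal factor.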
See also
• Systoles of surfaces
• Sub-Riemannian geometry
References
• Bangert, V.; Croke, C.; Ivanov, S.; Katz, M. (2005). "Filling area conjecture and ovalless real hyperelliptic surfaces". Geometric and Functional Analysis. 15 (3): 577–597. arXiv:math/0405583. CiteSeerX 10.1.1.240.2242. doi:10.1007/s00039-005-0517-8. S2CID 17100812.
• Berger, Marcel (1992–1993). "Systoles et applications selon Gromov" (PDF). Séminaire Bourbaki. 35: 279–310.
• Berger, M. (2003). A panoramic view of Riemannian geometry. Springer. ISBN 978-3-642-18245-7.
• Berger, M. (2008). "What is... a Systole?" (PDF). Notices of the AMS. 55 (3): 374–6.
• Buser, P.; Sarnak, P. (1994). "On the period matrix of a Riemann surface of large genus. With an appendix by J.H. Conway and N.J.A. Sloane". Invent. Math. 117 (1): 27–56. Bibcode:1994InMat.117...27B. doi:10.1007/BF01232233. S2CID 116904696.
• Gromov, M. (1983). "Filling Riemannian manifolds". J. Diff. Geom. 18: 1–147. CiteSeerX 10.1.1.400.9154. doi:10.4310/jdg/1214509283.
• Gromov, M. (1996). "Systoles and intersystolic inequalities". Actes de la Table Ronde de Géométrie Différentielle (Luminy, 1992). Sémin. Congr. Vol. 1. Soc. Math. France. pp. 291–362. CiteSeerX 10.1.1.539.1365.
• Horowitz, Charles; Katz, Karin Usadi; Katz, Mikhail G. (2009). "Loewner's torus inequality with isosystolic defect". Journal of Geometric Analysis. 19 (4): 796–808. arXiv:0803.0690. CiteSeerX 10.1.1.314.5106. doi:10.1007/s12220-009-9090-y. S2CID 18444111.
• Katz, M.; Semmes, S.; Gromov, M. (2007) [2001]. Metric Structures for Riemannian and Non-Riemannian Spaces. Progress in Mathematics. Vol. 152. Birkhäuser. ISBN 978-0-8176-4583-0.
• Katz, M. (1983). "The filling radius of two-point homogeneous spaces". Journal of Differential Geometry. 18 (3): 505–511. doi:10.4310/jdg/1214437785.
• Katz, M. (2007). Systolic geometry and topology. Mathematical Surveys and Monographs. Vol. 137. American Mathematical Society. ISBN 978-0-8218-4177-8.
• Katz, M.; Rudyak, Y. (2006). "Systolic category and Lusternik–Schnirelman category of low-dimensional manifolds". Communications on Pure and Applied Mathematics. 59: 1433–56. arXiv:math/0410456. CiteSeerX 10.1.1.236.3757. doi:10.1002/cpa.20146. S2CID 15470409.
• Katz, M.; Sabourau, S. (2005). "Entropy of systolically extremal surfaces and asymptotic bounds". Ergo. Th. Dynam. Sys. 25 (4): 1209–20. arXiv:math/0410312. CiteSeerX 10.1.1.236.5949. doi:10.1017/S0143385704001014. S2CID 11631690.
• Katz, M.; Schaps, M.; Vishne, U. (2007). "Logarithmic growth of systole of arithmetic Riemann surfaces along congruence subgroups". J. Differential Geom. 76 (3): 399–422. arXiv:math.DG/0505007. CiteSeerX 10.1.1.240.5600. doi:10.4310/jdg/1180135693. S2CID 18152345.
• Pu, P.M. (1952). "Some inequalities in certain nonorientable Riemannian manifolds" (PDF). Pacific J. Math. 2: 55–71. doi:10.2140/pjm.1952.2.55.
External links
• Waner, Stefan (May 2014). "Introduction to Differential Geometry & General Relativity" (PDF). Lecture Notes. Departments of Mathematics and Physics, Hofstra University.
Introductory science articles
• Introduction to angular momentum
• Introduction to electromagnetism
• Introduction to entropy
• Introduction to evolution
• Introduction to gauge theory
• Introduction to general relativity
• Introduction to genetics
• Introduction to M-theory
• Introduction to the mathematics of general relativity
• Introduction to quantum mechanics
• Introduction to systolic geometry
• Introduction to the heaviest elements
• Introduction to viruses
| Wikipedia |
Linear relation
In linear algebra, a linear relation, or simply relation, between elements of a vector space or a module is a linear equation that has these elements as a solution.
More precisely, if $e_{1},\dots ,e_{n}$ are elements of a (left) module M over a ring R (the case of a vector space over a field is a special case), a relation between $e_{1},\dots ,e_{n}$ is a sequence $(f_{1},\dots ,f_{n})$ of elements of R such that
$f_{1}e_{1}+\dots +f_{n}e_{n}=0.$
The relations between $e_{1},\dots ,e_{n}$ form a module. One is generally interested in the case where $e_{1},\dots ,e_{n}$ is a generating set of a finitely generated module M, in which case the module of the relations is often called a syzygy module of M. The syzygy module depends on the choice of a generating set, but it is unique up to the direct sum with a free module. That is, if $S_{1}$ and $S_{2}$ are syzygy modules corresponding to two generating sets of the same module, then they are stably isomorphic, which means that there exist two free modules $L_{1}$ and $L_{2}$ such that $S_{1}\oplus L_{1}$ and $S_{2}\oplus L_{2}$ are isomorphic.
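A basic example (standard, and used again below): let $R=K[x,y]$ and let $M=(x,y)$ be the ideal generated by $e_{1}=x$ and $e_{2}=y$. A relation $(f_{1},f_{2})$ means $f_{1}x+f_{2}y=0$; since $R$ is a unique factorization domain, this forces $f_{1}=gy$ and $f_{2}=-gx$ for some $g\in R$. The syzygy module is therefore generated by the single relation $(y,-x)$, and is free of rank one.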
Higher order syzygy modules are defined recursively: a first syzygy module of a module M is simply its syzygy module. For k > 1, a kth syzygy module of M is a syzygy module of a (k – 1)-th syzygy module. Hilbert's syzygy theorem states that, if $R=K[x_{1},\dots ,x_{n}]$ is a polynomial ring in n indeterminates over a field, then every nth syzygy module is free. The case n = 0 is the fact that every finite dimensional vector space has a basis, and the case n = 1 is the fact that K[x] is a principal ideal domain and that every submodule of a finitely generated free K[x] module is also free.
The construction of higher order syzygy modules is generalized as the definition of free resolutions, which allows restating Hilbert's syzygy theorem as a polynomial ring in n indeterminates over a field has global homological dimension n.
If a and b are two elements of the commutative ring R, then (b, –a) is a relation between a and b that is said to be trivial. The module of trivial relations of an ideal is the submodule of the first syzygy module of the ideal that is generated by the trivial relations between the elements of a generating set of an ideal. The concept of trivial relations can be generalized to higher order syzygy modules, and this leads to the concept of the Koszul complex of an ideal, which provides information on the non-trivial relations between the generators of an ideal.
Basic definitions
Let R be a ring, and M be a left R-module. A linear relation, or simply a relation between k elements $x_{1},\dots ,x_{k}$ of M is a sequence $(a_{1},\dots ,a_{k})$ of elements of R such that
$a_{1}x_{1}+\dots +a_{k}x_{k}=0.$
If $x_{1},\dots ,x_{k}$ is a generating set of M, the relation is often called a syzygy of M. It makes sense to speak of a syzygy of $M$ without reference to the particular $x_{1},\dots ,x_{k}$ because, although the syzygy module depends on the chosen generating set, most of its properties are independent of that choice; see § Stable properties, below.
If the ring R is Noetherian, or, at least coherent, and if M is finitely generated, then the syzygy module is also finitely generated. A syzygy module of this syzygy module is a second syzygy module of M. Continuing this way one can define a kth syzygy module for every positive integer k.
Hilbert's syzygy theorem asserts that, if M is a finitely generated module over a polynomial ring $K[x_{1},\dots ,x_{n}]$ over a field, then any nth syzygy module is a free module.
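Continuing the example above (a sketch of the standard Koszul computation): over $R=K[x,y]$, take $M=R/(x,y)$, generated by the class of $1$. Its first syzygy module is the ideal $(x,y)$, whose own syzygy module was seen to be free of rank one, generated by $(y,-x)$. Thus the second syzygy module of $M$ is free, as predicted by Hilbert's syzygy theorem for $n=2$, and one obtains the free resolution
$0\longrightarrow R\longrightarrow R^{2}\longrightarrow R\longrightarrow M\longrightarrow 0,$
where the map $R\to R^{2}$ sends $g$ to $(gy,-gx)$ and the map $R^{2}\to R$ sends $(f_{1},f_{2})$ to $f_{1}x+f_{2}y$.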
Stable properties
In this section, all modules are supposed to be finitely generated. That is, the ring R is supposed to be Noetherian, or, at least, coherent.
Generally speaking, in the language of K-theory, a property is stable if it becomes true by making a direct sum with a sufficiently large free module. A fundamental property of syzygy modules is that they are "stably independent" of the choices of generating sets for the involved modules. The following result is the basis of these stable properties.
Proposition — Let $\{x_{1},\dots ,x_{m}\}$ be a generating set of an R-module M, and $y_{1},\dots ,y_{n}$ be other elements of M. The module of the relations between $x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}$ is the direct sum of the module of the relations between $x_{1},\dots ,x_{m},$ and a free module of rank n.
Proof. As $\{x_{1},\dots ,x_{m}\}$ is a generating set, each $y_{i}$ can be written $\textstyle y_{i}=\sum \alpha _{i,j}x_{j}.$ This provides a relation $r_{i}$ between $x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}.$ Now, if $r=(a_{1},\dots ,a_{m},b_{1},\dots ,b_{n})$ is any relation, then $\textstyle r-\sum b_{i}r_{i}$ is a relation between the $x_{1},\dots ,x_{m}$ only. In other words, every relation between $x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}$ is a sum of a relation between $x_{1},\dots ,x_{m},$ and a linear combination of the $r_{i}$s. It is straightforward to prove that this decomposition is unique, and this proves the result. $\blacksquare $
This proves that the first syzygy module is "stably unique". More precisely, given two generating sets $S_{1}$ and $S_{2}$ of a module M, if $S_{1}$ and $S_{2}$ are the corresponding modules of relations, then there exist two free modules $L_{1}$ and $L_{2}$ such that $S_{1}\oplus L_{1}$ and $S_{2}\oplus L_{2}$ are isomorphic. For proving this, it suffices to apply the preceding proposition twice, which gives two decompositions of the module of the relations between the union of the two generating sets.
For obtaining a similar result for higher syzygy modules, it remains to prove that, if M is any module, and L is a free module, then M and M ⊕ L have isomorphic syzygy modules. It suffices to consider a generating set of M ⊕ L that consists of a generating set of M and a basis of L. For every relation between the elements of this generating set, the coefficients of the basis elements of L are all zero, and the syzygies of M ⊕ L are exactly the syzygies of M extended with zero coefficients. This completes the proof of the following theorem.
Theorem — For every positive integer k, the kth syzygy module of a given module depends on choices of generating sets, but is unique up to the direct sum with a free module. More precisely, if $S_{1}$ and $S_{2}$ are kth syzygy modules that are obtained by different choices of generating sets, then there are free modules $L_{1}$ and $L_{2}$ such that $S_{1}\oplus L_{1}$ and $S_{2}\oplus L_{2}$ are isomorphic.
Relationship with free resolutions
Given a generating set $g_{1},\dots ,g_{n}$ of an R-module, one can consider a free module L with basis $G_{1},\dots ,G_{n},$ where $G_{1},\dots ,G_{n}$ are new indeterminates. This defines an exact sequence
$L\longrightarrow M\longrightarrow 0,$
where the left arrow is the linear map that maps each $G_{i}$ to the corresponding $g_{i}.$ The kernel of this left arrow is a first syzygy module of M.
One can repeat this construction with this kernel in place of M. Repeating this construction again and again, one gets a long exact sequence
$\cdots \longrightarrow L_{k}\longrightarrow L_{k-1}\longrightarrow \cdots \longrightarrow L_{0}\longrightarrow M\longrightarrow 0,$
where all $L_{i}$ are free modules. By definition, such a long exact sequence is a free resolution of M.
For every k ≥ 1, the kernel $S_{k}$ of the arrow starting from $L_{k-1}$ is a kth syzygy module of M. It follows that the study of free resolutions is the same as the study of syzygy modules.
A free resolution is finite of length ≤ n if $S_{n}$ is free. In this case, one can take $L_{n}=S_{n},$ and $L_{k}=0$ (the zero module) for every k > n.
This allows restating Hilbert's syzygy theorem: If $R=K[x_{1},\dots ,x_{n}]$ is a polynomial ring in n indeterminates over a field K, then every free resolution is finite of length at most n.
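For example, for $R=K[x]$ and the module $M=K=R/\langle x\rangle ,$ multiplication by x yields the free resolution
$0\longrightarrow R\longrightarrow R\longrightarrow K\longrightarrow 0,$
which is finite of length 1, as the theorem asserts for n = 1.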
The global dimension of a commutative Noetherian ring is either infinite, or the minimal n such that every free resolution is finite of length at most n. A commutative Noetherian ring is regular if its global dimension is finite. In this case, the global dimension equals its Krull dimension. So, Hilbert's syzygy theorem may be restated in a very short sentence that hides much mathematics: A polynomial ring over a field is a regular ring.
Trivial relations
In a commutative ring R, one always has ab − ba = 0. This implies trivially that (b, –a) is a linear relation between a and b. Therefore, given a generating set $g_{1},\dots ,g_{k}$ of an ideal I, one calls trivial relation or trivial syzygy every element of the submodule of the syzygy module that is generated by these trivial relations between two generating elements. More precisely, the module of trivial syzygies is generated by the relations
$r_{i,j}=(x_{1},\dots ,x_{k})$
such that $x_{i}=g_{j},$ $x_{j}=-g_{i},$ and $x_{h}=0$ otherwise.
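For example, if $I=\langle g_{1},g_{2},g_{3}\rangle ,$ the module of trivial syzygies is generated by the three relations
$r_{1,2}=(g_{2},-g_{1},0),\qquad r_{1,3}=(g_{3},0,-g_{1}),\qquad r_{2,3}=(0,g_{3},-g_{2}).$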
History
The word syzygy came into mathematics with the work of Arthur Cayley.[1] In that paper, Cayley used it in the theory of resultants and discriminants.[2] As the word syzygy was used in astronomy to denote a linear relation between planets, Cayley used it to denote linear relations between minors of a matrix, such as, in the case of a 2×3 matrix:
$a\,{\begin{vmatrix}b&c\\e&f\end{vmatrix}}-b\,{\begin{vmatrix}a&c\\d&f\end{vmatrix}}+c\,{\begin{vmatrix}a&b\\d&e\end{vmatrix}}=0.$
Then, the word syzygy was popularized (among mathematicians) by David Hilbert in his 1890 article, which contains three fundamental theorems on polynomials, Hilbert's syzygy theorem, Hilbert's basis theorem and Hilbert's Nullstellensatz.
In his article, Cayley makes use, in a special case, of what was later[3] called the Koszul complex, after a similar construction in differential geometry by the mathematician Jean-Louis Koszul.
Notes
1. [Cayley 1847] A. Cayley, “On the theory of involution in geometry”, Cambridge Math. J. 11 (1847), 52–61. See also Collected Papers, Vol. 1 (1889), 80–94, Cambridge Univ. Press, Cambridge.
2. [Gel’fand et al. 1994] I. M. Gel’fand, M. M. Kapranov, and A. V. Zelevinsky, Discriminants, resultants, and multidimensional determinants, Mathematics: Theory & Applications, Birkhäuser, Boston, 1994.
3. Serre, Jean-Pierre Algèbre locale. Multiplicités. (French) Cours au Collège de France, 1957–1958, rédigé par Pierre Gabriel. Seconde édition, 1965. Lecture Notes in Mathematics, 11 Springer-Verlag, Berlin-New York 1965 vii+188 pp.; this is the published form of mimeographed notes from Serre's lectures at the College de France in 1958.
References
• Cox, David; Little, John; O’Shea, Donal (2007). "Ideals, Varieties, and Algorithms". Undergraduate Texts in Mathematics. New York, NY: Springer New York. doi:10.1007/978-0-387-35651-8. ISBN 978-0-387-35650-1. ISSN 0172-6056.
• Cox, David; Little, John; O’Shea, Donal (2005). "Using Algebraic Geometry". Graduate Texts in Mathematics. New York: Springer-Verlag. doi:10.1007/b138611. ISBN 0-387-20706-6.
• Eisenbud, David (1995). Commutative Algebra with a View Toward Algebraic Geometry. Graduate Texts in Mathematics. Vol. 150. Springer-Verlag. doi:10.1007/978-1-4612-5350-1. ISBN 0-387-94268-8.
• David Eisenbud, The Geometry of Syzygies, Graduate Texts in Mathematics, vol. 229, Springer, 2005.
| Wikipedia |
Hilbert's syzygy theorem
In mathematics, Hilbert's syzygy theorem is one of the three fundamental theorems about polynomial rings over fields, first proved by David Hilbert in 1890, which were introduced for solving important open questions in invariant theory, and are at the basis of modern algebraic geometry. The two other theorems are Hilbert's basis theorem that asserts that all ideals of polynomial rings over a field are finitely generated, and Hilbert's Nullstellensatz, which establishes a bijective correspondence between affine algebraic varieties and prime ideals of polynomial rings.
Hilbert's syzygy theorem concerns the relations, or syzygies in Hilbert's terminology, between the generators of an ideal, or, more generally, a module. As the relations form a module, one may consider the relations between the relations; the theorem asserts that, if one continues in this way, starting with a module over a polynomial ring in n indeterminates over a field, one eventually finds a zero module of relations, after at most n steps.
Hilbert's syzygy theorem is now considered to be an early result of homological algebra. It is the starting point of the use of homological methods in commutative algebra and algebraic geometry.
History
The syzygy theorem first appeared in Hilbert's seminal paper "Über die Theorie der algebraischen Formen" (1890).[1] The paper is split into five parts: part I proves Hilbert's basis theorem over a field, while part II proves it over the integers. Part III contains the syzygy theorem (Theorem III), which is used in part IV to discuss the Hilbert polynomial. The last part, part V, proves finite generation of certain rings of invariants. Incidentally part III also contains a special case of the Hilbert–Burch theorem.
Syzygies (relations)
Main article: syzygy (mathematics)
Originally, Hilbert defined syzygies for ideals in polynomial rings, but the concept generalizes trivially to (left) modules over any ring.
Given a generating set $g_{1},\ldots ,g_{k}$ of a module M over a ring R, a relation or first syzygy between the generators is a k-tuple $(a_{1},\ldots ,a_{k})$ of elements of R such that[2]
$a_{1}g_{1}+\cdots +a_{k}g_{k}=0.$
Let $L_{0}$ be a free module with basis $(G_{1},\ldots ,G_{k}).$ The k-tuple $(a_{1},\ldots ,a_{k})$ may be identified with the element
$a_{1}G_{1}+\cdots +a_{k}G_{k},$
and the relations form the kernel $R_{1}$ of the linear map $L_{0}\to M$ defined by $G_{i}\mapsto g_{i}.$ In other words, one has an exact sequence
$0\to R_{1}\to L_{0}\to M\to 0.$
This first syzygy module $R_{1}$ depends on the choice of a generating set, but, if $S_{1}$ is the module which is obtained with another generating set, there exist two free modules $F_{1}$ and $F_{2}$ such that
$R_{1}\oplus F_{1}\cong S_{1}\oplus F_{2}$
where $\oplus $ denote the direct sum of modules.
The second syzygy module is the module of the relations between generators of the first syzygy module. By continuing in this way, one may define the kth syzygy module for every positive integer k.
If the kth syzygy module is free for some k, then by taking a basis as a generating set, the next syzygy module (and every subsequent one) is the zero module. If one does not take a basis as a generating set, then all subsequent syzygy modules are free.
Let n be the smallest integer, if any, such that the nth syzygy module of a module M is free or projective. The above property of invariance, up to the direct sum with free modules, implies that n does not depend on the choice of generating sets. The projective dimension of M is this integer, if it exists, or ∞ if not. This is equivalent to the existence of an exact sequence
$0\longrightarrow R_{n}\longrightarrow L_{n-1}\longrightarrow \cdots \longrightarrow L_{0}\longrightarrow M\longrightarrow 0,$
where the modules $L_{i}$ are free and $R_{n}$ is projective. It can be shown that one may always choose the generating sets for $R_{n}$ being free, that is for the above exact sequence to be a free resolution.
Statement
Hilbert's syzygy theorem states that, if M is a finitely generated module over a polynomial ring $k[x_{1},\ldots ,x_{n}]$ in n indeterminates over a field k, then the nth syzygy module of M is always a free module.
In modern language, this implies that the projective dimension of M is at most n, and thus that there exists a free resolution
$0\longrightarrow L_{k}\longrightarrow L_{k-1}\longrightarrow \cdots \longrightarrow L_{0}\longrightarrow M\longrightarrow 0$
of length k ≤ n.
This upper bound on the projective dimension is sharp, that is, there are modules of projective dimension exactly n. The standard example is the field k, which may be considered as a $k[x_{1},\ldots ,x_{n}]$-module by setting $x_{i}c=0$ for every i and every c ∈ k. For this module, the nth syzygy module is free, but not the (n − 1)th one (for a proof, see § Koszul complex, below).
The theorem is also true for modules that are not finitely generated. As the global dimension of a ring is the supremum of the projective dimensions of all modules, Hilbert's syzygy theorem may be restated as: the global dimension of $k[x_{1},\ldots ,x_{n}]$ is n.
Low dimension
In the case of zero indeterminates, Hilbert's syzygy theorem is simply the fact that every vector space has a basis.
In the case of a single indeterminate, Hilbert's syzygy theorem is an instance of the theorem asserting that over a principal ideal domain, every submodule of a free module is itself free.
Koszul complex
The Koszul complex, also called "complex of exterior algebra", allows, in some cases, an explicit description of all syzygy modules.
Let $g_{1},\ldots ,g_{k}$ be a generating system of an ideal I in a polynomial ring $R=k[x_{1},\ldots ,x_{n}]$, and let $L_{1}$ be a free module of basis $G_{1},\ldots ,G_{k}.$ The exterior algebra of $L_{1}$ is the direct sum
$\Lambda (L_{1})=\bigoplus _{t=0}^{k}L_{t},$
where $L_{t}$ is the free module, which has, as a basis, the exterior products
$G_{i_{1}}\wedge \cdots \wedge G_{i_{t}},$
such that $i_{1}<i_{2}<\cdots <i_{t}.$ In particular, one has $L_{0}=R$ (because of the definition of the empty product), the two definitions of $L_{1}$ coincide, and $L_{t}=0$ for t > k. For every positive t, one may define a linear map $L_{t}\to L_{t-1}$ by
$G_{i_{1}}\wedge \cdots \wedge G_{i_{t}}\mapsto \sum _{j=1}^{t}(-1)^{j+1}g_{i_{j}}G_{i_{1}}\wedge \cdots \wedge {\widehat {G}}_{i_{j}}\wedge \cdots \wedge G_{i_{t}},$
where the hat means that the factor is omitted. A straightforward computation shows that the composition of two consecutive such maps is zero, and thus that one has a complex
$0\to L_{k}\to L_{k-1}\to \cdots \to L_{1}\to L_{0}\to R/I\to 0.$
This is the Koszul complex. In general the Koszul complex is not an exact sequence, but it is an exact sequence if one works with a polynomial ring $R=k[x_{1},\ldots ,x_{n}]$ and an ideal generated by a regular sequence of homogeneous polynomials.
In particular, the sequence $x_{1},\ldots ,x_{n}$ is regular, and the Koszul complex is thus a projective resolution of $k=R/\langle x_{1},\ldots ,x_{n}\rangle .$ In this case, the nth syzygy module is free of dimension one (generated by the product of all $G_{i}$); the (n − 1)th syzygy module is thus the quotient of a free module of dimension n by the submodule generated by $(x_{1},-x_{2},\ldots ,\pm x_{n}).$ This quotient is not a projective module, as otherwise, there would exist polynomials $p_{i}$ such that $p_{1}x_{1}+\cdots +p_{n}x_{n}=1,$ which is impossible (substituting 0 for the $x_{i}$ in the latter equality gives 1 = 0). This proves that the projective dimension of $k=R/\langle x_{1},\ldots ,x_{n}\rangle $ is exactly n.
The same proof applies for proving that the projective dimension of $k[x_{1},\ldots ,x_{n}]/\langle g_{1},\ldots ,g_{t}\rangle $ is exactly t if the $g_{i}$ form a regular sequence of homogeneous polynomials.
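For example, for $R=k[x,y]$ and the regular sequence $x,y$, the Koszul complex is the free resolution
$0\longrightarrow R\longrightarrow R^{2}\longrightarrow R\longrightarrow k\longrightarrow 0,$
in which the first map sends 1 to $(-y,x)$ and the second sends $(a,b)$ to $ax+by$; the first syzygy module of $\langle x,y\rangle $ is free of rank one, while $k$ itself has projective dimension exactly 2.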
Computation
At Hilbert's time, there was no method available for computing syzygies. It was only known that an algorithm may be deduced from any upper bound of the degree of the generators of the module of syzygies. In fact, the coefficients of the syzygies are unknown polynomials. If the degree of these polynomials is bounded, the number of their monomials is also bounded. Expressing that one has a syzygy provides a system of linear equations whose unknowns are the coefficients of these monomials. Therefore, any algorithm for linear systems implies an algorithm for syzygies, as soon as a bound of the degrees is known.
The first bound for syzygies (as well as for the ideal membership problem) was given in 1926 by Grete Hermann:[3] Let M be a submodule of a free module L of dimension t over $k[x_{1},\ldots ,x_{n}];$ if the coefficients over a basis of L of a generating system of M have a total degree at most d, then there is a constant c such that the degrees occurring in a generating system of the first syzygy module are at most $(td)^{2^{cn}}.$ The same bound applies for testing the membership to M of an element of L.[4]
On the other hand, there are examples where a double exponential degree necessarily occurs. However, such examples are extremely rare, and this raises the question of an algorithm that is efficient when the output is not too large. At the present time, the best algorithms for computing syzygies are Gröbner basis algorithms. They allow the computation of the first syzygy module, and also, with almost no extra cost, all syzygy modules.
Syzygies and regularity
One might wonder which ring-theoretic property of $A=k[x_{1},\ldots ,x_{n}]$ causes the Hilbert syzygy theorem to hold. It turns out that this is regularity, which is an algebraic formulation of the fact that affine n-space is a variety without singularities. In fact the following generalization holds: Let $A$ be a Noetherian ring. Then $A$ has finite global dimension if and only if $A$ is regular and the Krull dimension of $A$ is finite; in that case the global dimension of $A$ is equal to the Krull dimension. This result may be proven using Serre's theorem on regular local rings.
See also
• Quillen–Suslin theorem
• Hilbert series and Hilbert polynomial
References
1. D. Hilbert, Über die Theorie der algebraischen Formen, Mathematische Annalen 36, 473–530.
2. The theory is presented for finitely generated modules, but extends easily to arbitrary modules.
3. Grete Hermann: Die Frage der endlich vielen Schritte in der Theorie der Polynomideale. Unter Benutzung nachgelassener Sätze von K. Hentzelt, Mathematische Annalen, Volume 95, Number 1, 736-788, doi:10.1007/BF01206635 (abstract in German language) — The question of finitely many steps in polynomial ideal theory (review and English-language translation)
4. G. Hermann claimed c = 1, but did not prove this.
• David Eisenbud, Commutative algebra. With a view toward algebraic geometry. Graduate Texts in Mathematics, 150. Springer-Verlag, New York, 1995. xvi+785 pp. ISBN 0-387-94268-8; ISBN 0-387-94269-6 MR1322960
• "Hilbert theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
| Wikipedia |
Sz.-Nagy's dilation theorem
The Sz.-Nagy dilation theorem (proved by Béla Szőkefalvi-Nagy) states that every contraction T on a Hilbert space H has a unitary dilation U to a Hilbert space K, containing H, with
$T^{n}=P_{H}U^{n}\vert _{H},\quad n\geq 0.$
Moreover, such a dilation is unique (up to unitary equivalence) when one assumes K is minimal, in the sense that the linear span of $\bigcup \nolimits _{n\in \mathbb {N} }\,U^{n}H$ is dense in K. When this minimality condition holds, U is called the minimal unitary dilation of T.
Proof
For a contraction T (i.e., $\|T\|\leq 1$), its defect operator $D_{T}$ is defined to be the (unique) positive square root $D_{T}=(I-T^{*}T)^{1/2}$. In the special case that S is an isometry, $D_{S^{*}}$ is a projector and $D_{S}=0$, hence the following is an Sz.-Nagy unitary dilation of S with the required polynomial functional calculus property:
$U={\begin{bmatrix}S&D_{S^{*}}\\D_{S}&-S^{*}\end{bmatrix}}.$
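One may check directly that this U is unitary: for an isometry S one has $S^{*}S=I$, $D_{S^{*}}^{2}=I-SS^{*}$, $D_{S^{*}}S=0$ (because $\|D_{S^{*}}Sx\|^{2}=\langle (I-SS^{*})Sx,Sx\rangle =\|Sx\|^{2}-\|x\|^{2}=0$) and $S^{*}D_{S^{*}}=(D_{S^{*}}S)^{*}=0$, whence
$U^{*}U={\begin{bmatrix}S^{*}S&S^{*}D_{S^{*}}\\D_{S^{*}}S&D_{S^{*}}^{2}+SS^{*}\end{bmatrix}}={\begin{bmatrix}I&0\\0&I\end{bmatrix}},$
and similarly $UU^{*}=I$.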
Returning to the general case of a contraction T, every contraction T on a Hilbert space H has an isometric dilation, again with the calculus property, on
$\oplus _{n\geq 0}H$
given by
$S={\begin{bmatrix}T&0&0&\cdots &\\D_{T}&0&0&&\\0&I&0&\ddots \\0&0&I&\ddots \\\vdots &&\ddots &\ddots \end{bmatrix}}.$
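This S is indeed an isometry, since for every $h=(h_{0},h_{1},h_{2},\dots )$ one has $\|Sh\|^{2}=\|Th_{0}\|^{2}+\|D_{T}h_{0}\|^{2}+\sum _{i\geq 1}\|h_{i}\|^{2}=\|h\|^{2}$, because $T^{*}T+D_{T}^{2}=I$.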
Substituting the S thus constructed into the previous Sz.-Nagy unitary dilation for an isometry S, one obtains a unitary dilation for a contraction T:
$T^{n}=P_{H}S^{n}\vert _{H}=P_{H}(Q_{H'}U\vert _{H'})^{n}\vert _{H}=P_{H}U^{n}\vert _{H}.$
Schaffer form
The Schaffer form of a unitary Sz.-Nagy dilation can be viewed as a starting point for the characterization of all unitary dilations with the required property for a given contraction.
Remarks
A generalisation of this theorem, by Berger, Foias and Lebow, shows that if X is a spectral set for T, and
${\mathcal {R}}(X)$
is a Dirichlet algebra, then T has a minimal normal ∂X dilation, of the form above. A consequence of this is that any operator with a simply connected spectral set X has a minimal normal ∂X dilation.
To see that this generalises Sz.-Nagy's theorem, note that contraction operators have the unit disc D as a spectral set, and that normal operators with spectrum in the unit circle ∂D are unitary.
References
• Paulsen, V. (2003). Completely Bounded Maps and Operator Algebras. Cambridge University Press.
• Schaffer, J. J. (1955). "On unitary dilations of contractions". Proceedings of the American Mathematical Society. 6 (2): 322. doi:10.2307/2032368. JSTOR 2032368.
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
| Wikipedia |
Szegő limit theorems
In mathematical analysis, the Szegő limit theorems describe the asymptotic behaviour of the determinants of large Toeplitz matrices.[1][2][3] They were first proved by Gábor Szegő.
Notation
Let $\phi :\mathbb {T} \to \mathbb {C} $ be a complex function ("symbol") on the unit circle. Consider the $n\times n$ Toeplitz matrices $T_{n}(\phi )$, defined by
$T_{n}(\phi )_{k,l}={\widehat {\phi }}(k-l),\quad 0\leq k,l\leq n-1,$
where
${\widehat {\phi }}(k)={\frac {1}{2\pi }}\int _{0}^{2\pi }\phi (e^{i\theta })e^{-ik\theta }\,d\theta $
are the Fourier coefficients of $\phi $.
First Szegő theorem
The first Szegő theorem[1][4] states that, if $\phi >0$ and $\phi \in L_{1}(\mathbb {T} )$, then
$\lim _{n\to \infty }{\frac {\det T_{n}(\phi )}{\det T_{n-1}(\phi )}}=\exp \left\{{\frac {1}{2\pi }}\int _{0}^{2\pi }\log \phi (e^{i\theta })\,d\theta \right\}.$
(1)
The right-hand side of (1) is the geometric mean of $\phi $ (well-defined by the arithmetic-geometric mean inequality).
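For example, if $\phi $ is a positive constant c, then $T_{n}(\phi )=cI_{n}$ and $\det T_{n}(\phi )=c^{n}$, so the left-hand side of (1) equals c, which is indeed the geometric mean $\exp\{\log c\}$ of the constant symbol.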
Second Szegő theorem
Denote the right-hand side of (1) by $G$. The second (or strong) Szegő theorem[1][5] asserts that if, in addition, the derivative of $\phi $ is Hölder continuous of order $\alpha >0$, then
$\lim _{n\to \infty }{\frac {\det T_{n}(\phi )}{G^{n}(\phi )}}=\exp \left\{\sum _{k=1}^{\infty }k\left|{\widehat {(\log \phi )}}(k)\right|^{2}\right\}.$
References
1. Böttcher, Albrecht; Silbermann, Bernd (1990). "Toeplitz determinants". Analysis of Toeplitz operators. Berlin: Springer-Verlag. p. 525. ISBN 3-540-52147-X. MR 1071374.
2. Ehrhardt, T.; Silbermann, B. (2001) [1994], "Szegö limit theorems", Encyclopedia of Mathematics, EMS Press
3. Simon, Barry (2011). Szegő's Theorem and Its Descendants: Spectral Theory for L2 Perturbations of Orthogonal Polynomials. Princeton: Princeton University Press. ISBN 978-0-691-14704-8.
4. Szegő, G. (1915). "Ein Grenzwertsatz über die Toeplitzschen Determinanten einer reellen positiven Funktion" (PDF). Math. Ann. 76 (4): 490–503. doi:10.1007/BF01458220.
5. Szegő, G. (1952). "On certain Hermitian forms associated with the Fourier series of a positive function". Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.]: 228–238. MR 0051961.
| Wikipedia |
Szegő kernel
In the mathematical study of several complex variables, the Szegő kernel is an integral kernel that gives rise to a reproducing kernel on a natural Hilbert space of holomorphic functions. It is named for its discoverer, the Hungarian mathematician Gábor Szegő.
Let Ω be a bounded domain in $\mathbb {C} ^{n}$ with $C^{2}$ boundary, and let A(Ω) denote the space of all holomorphic functions in Ω that are continuous on ${\overline {\Omega }}$. Define the Hardy space $H^{2}(\partial \Omega )$ to be the closure in $L^{2}(\partial \Omega )$ of the restrictions of elements of A(Ω) to the boundary. The Poisson integral implies that each element ƒ of $H^{2}(\partial \Omega )$ extends to a holomorphic function Pƒ in Ω. Furthermore, for each z ∈ Ω, the map
$f\mapsto Pf(z)$
defines a continuous linear functional on $H^{2}(\partial \Omega )$. By the Riesz representation theorem, this linear functional is represented by a kernel $k_{z}$, which is to say
$Pf(z)=\int _{\partial \Omega }f(\zeta ){\overline {k_{z}(\zeta )}}\,d\sigma (\zeta ).$
The Szegő kernel is defined by
$S(z,\zeta )={\overline {k_{z}(\zeta )}},\quad z\in \Omega ,\zeta \in \partial \Omega .$
Like its close cousin, the Bergman kernel, the Szegő kernel is holomorphic in z. In fact, if $\phi _{i}$ is an orthonormal basis of $H^{2}(\partial \Omega )$ consisting entirely of the restrictions of functions in A(Ω), then a Riesz–Fischer theorem argument shows that
$S(z,\zeta )=\sum _{i=1}^{\infty }\phi _{i}(z){\overline {\phi _{i}(\zeta )}}.$
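For example, when Ω is the unit disc in $\mathbb {C} $ and the boundary circle carries arc-length measure, the functions $\phi _{k}(\zeta )=\zeta ^{k}/{\sqrt {2\pi }}$ for $k\geq 0$ form such an orthonormal basis, and summing the resulting geometric series gives the classical Szegő kernel of the disc,
$S(z,\zeta )=\sum _{k=0}^{\infty }{\frac {z^{k}\,{\overline {\zeta }}^{k}}{2\pi }}={\frac {1}{2\pi \,(1-z{\overline {\zeta }})}}.$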
References
• Krantz, Steven G. (2002), Function Theory of Several Complex Variables, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2724-6
| Wikipedia |
Pólya–Szegő inequality
In mathematical analysis, the Pólya–Szegő inequality (or Szegő inequality) states that the Sobolev energy of a function in a Sobolev space does not increase under symmetric decreasing rearrangement.[1] The inequality is named after the mathematicians George Pólya and Gábor Szegő.
Mathematical setting and statement
Given a Lebesgue measurable function $u:\mathbb {R} ^{n}\to \mathbb {R} ^{+}$, the symmetric decreasing rearrangement $u^{*}:\mathbb {R} ^{n}\to \mathbb {R} ^{+}$ is the unique function such that for every $t\in \mathbb {R} ,$ the superlevel set $u^{*}{}^{-1}((t,+\infty ))$ is an open ball centred at the origin $0\in \mathbb {R} ^{n}$ that has the same Lebesgue measure as $u^{-1}((t,+\infty )).$
Equivalently, $u^{*}$ is the unique radial and radially nonincreasing function whose strict superlevel sets are open and have the same measure as those of the function $u$.
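For example, the symmetric decreasing rearrangement of the indicator function of a measurable set $E\subset \mathbb {R} ^{n}$ of finite measure is the indicator function of the open ball centred at the origin that has the same measure as E.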
The Pólya–Szegő inequality states that if moreover $u\in W^{1,p}(\mathbb {R} ^{n}),$ then $u^{*}\in W^{1,p}(\mathbb {R} ^{n})$ and
$\int _{\mathbb {R} ^{n}}|\nabla u^{*}|^{p}\leq \int _{\mathbb {R} ^{n}}|\nabla u|^{p}.$
Applications of the inequality
The Pólya–Szegő inequality is used to prove the Rayleigh–Faber–Krahn inequality, which states that among all the domains of a given fixed volume, the ball has the smallest first eigenvalue for the Laplacian with Dirichlet boundary conditions. The proof goes by restating the problem as a minimization of the Rayleigh quotient.[1]
The isoperimetric inequality can be deduced from the Pólya–Szegő inequality with $p=1$.
The optimal constant in the Sobolev inequality can be obtained by combining the Pólya–Szegő inequality with some integral inequalities.[2][3]
Equality cases
Since the Sobolev energy is invariant under translations, any translation of a radial function achieves equality in the Pólya–Szegő inequality. There are however other functions that can achieve equality, obtained for example by taking a radial nonincreasing function that achieves its maximum on a ball of positive radius and adding to this function another function which is radial with respect to a different point and whose support is contained in the maximum set of the first function. In order to avoid this obstruction, an additional condition is thus needed.
It has been proved that if the function $u$ achieves equality in the Pólya–Szegő inequality and if the set $\{x\in \mathbb {R} ^{n}:u(x)>0{\text{ and }}\nabla u(x)=0\}$ is a null set for Lebesgue's measure, then the function $u$ is radial and radially nonincreasing with respect to some point $a\in \mathbb {R} ^{n}$.[4]
Generalizations
The Pólya–Szegő inequality is still valid for symmetrizations on the sphere or the hyperbolic space.[5]
The inequality also holds for partial symmetrizations defined by foliating the space into planes (Steiner symmetrization)[6][7] and into spheres (cap symmetrization).[8][9]
There are also Pólya−Szegő inequalities for rearrangements with respect to non-Euclidean norms and using the dual norm of the gradient.[10][11][12]
Proofs of the inequality
Original proof by a cylindrical isoperimetric inequality
The original proof by Pólya and Szegő for $p=2$ was based on an isoperimetric inequality comparing sets with cylinders and an asymptotic expansion of the area of the graph of a function.[1] The inequality is proved for a smooth function $u$ that vanishes outside a compact subset of the Euclidean space $\mathbb {R} ^{n}.$ For every $\varepsilon >0$, they define the sets
${\begin{aligned}C_{\varepsilon }&=\{(x,t)\in \mathbb {R} ^{n}\times \mathbb {R} \,:\,0<t<\varepsilon u(x)\}\\C_{\varepsilon }^{*}&=\{(x,t)\in \mathbb {R} ^{n}\times \mathbb {R} \,:\,0<t<\varepsilon u^{*}(x)\}\end{aligned}}$
These sets are the sets of points that lie between the domain of the functions $\varepsilon u$ and $\varepsilon u^{*}$ and their respective graphs. They then use the geometrical fact that, since the horizontal slices of both sets have the same measure and those of the second are balls, the area of the boundary of the cylindrical set $C_{\varepsilon }^{*}$ cannot exceed that of $C_{\varepsilon }$. These areas can be computed by the area formula, yielding the inequality
$\int _{u^{*}{}^{-1}((0,+\infty ))}1+{\sqrt {1+\varepsilon ^{2}|\nabla u^{*}|^{2}}}\leq \int _{u^{-1}((0,+\infty ))}1+{\sqrt {1+\varepsilon ^{2}|\nabla u|^{2}}}.$
Since the sets $u^{-1}((0,+\infty ))$ and $u{}^{*}{}^{-1}((0,+\infty ))$ have the same measure, this is equivalent to
${\frac {1}{\varepsilon }}\int _{u^{*}{}^{-1}((0,+\infty ))}{\sqrt {1+\varepsilon ^{2}|\nabla u^{*}|^{2}}}-1\leq {\frac {1}{\varepsilon }}\int _{u^{-1}((0,+\infty ))}{\sqrt {1+\varepsilon ^{2}|\nabla u|^{2}}}-1.$
The conclusion then follows from the fact that
$\lim _{\varepsilon \to 0}{\frac {1}{\varepsilon }}\int _{u^{-1}((0,+\infty ))}{\sqrt {1+\varepsilon ^{2}|\nabla u|^{2}}}-1={\frac {1}{2}}\int _{\mathbb {R} ^{n}}|\nabla u|^{2}.$
Coarea formula and isoperimetric inequality
The Pólya–Szegő inequality can be proved by combining the coarea formula, Hölder’s inequality and the classical isoperimetric inequality.[2]
If the function $u$ is smooth enough, the coarea formula can be used to write
$\int _{\mathbb {R} ^{n}}|\nabla u|^{p}=\int _{0}^{+\infty }\int _{u^{-1}({t})}|\nabla u|^{p-1}\,d{\mathcal {H}}^{n-1}\,dt,$
where ${\mathcal {H}}^{n-1}$ denotes the $(n-1)$–dimensional Hausdorff measure on the Euclidean space $\mathbb {R} ^{n}$. For almost every $t\in (0,+\infty )$, we have by Hölder's inequality,
${\mathcal {H}}^{n-1}\left(u^{-1}(\{t\})\right)\leq \left(\int _{u^{-1}(\{t\})}|\nabla u|^{p-1}\right)^{\frac {1}{p}}\left(\int _{u^{-1}(\{t\})}{\frac {1}{|\nabla u|}}\right)^{1-{\frac {1}{p}}}.$
Therefore, we have
$\int _{u^{-1}(\{t\})}|\nabla u|^{p-1}\geq {\frac {{\mathcal {H}}^{n-1}\left(u^{-1}(\{t\})\right)^{p}}{\left(\int _{u^{-1}(\{t\})}{\frac {1}{|\nabla u|}}\right)^{p-1}}}.$
Since the set $u^{*}{}^{-1}((t,+\infty ))$ is a ball that has the same measure as the set $u^{-1}((t,+\infty ))$, by the classical isoperimetric inequality, we have
${\mathcal {H}}^{n-1}\left(u^{*}{}^{-1}(\{t\})\right)\leq {\mathcal {H}}^{n-1}\left(u^{-1}(\{t\})\right).$
Moreover, recalling that the sublevel sets of the functions $u$ and $u^{*}$ have the same measure,
$\int _{u^{*}{}^{-1}(\{t\})}{\frac {1}{|\nabla u^{*}|}}=\int _{u^{-1}(\{t\})}{\frac {1}{|\nabla u|}},$
and therefore,
$\int _{\mathbb {R} ^{n}}|\nabla u|^{p}\geq \int _{0}^{+\infty }{\frac {{\mathcal {H}}^{n-1}\left(u^{*}{}^{-1}(\{t\})\right)^{p}}{\left(\int _{u^{*}{}^{-1}(\{t\})}{\frac {1}{|\nabla u^{*}|}}\right)^{p-1}}}\,dt.$
Since the function $u^{*}$ is radial, one has
${\frac {{\mathcal {H}}^{n-1}\left(u^{*}{}^{-1}(\{t\})\right)^{p}}{\left(\int _{u^{*}{}^{-1}(\{t\})}{\frac {1}{|\nabla u^{*}|}}\right)^{p-1}}}=\int _{u^{*}{}^{-1}(\{t\})}|\nabla u^{*}|^{p-1},$
and the conclusion follows by applying the coarea formula again.
Rearrangement inequalities for convolution
When $p=2$, the Pólya–Szegő inequality can be proved by representing the Sobolev energy by the heat kernel.[13] One begins by observing that
$\int _{\mathbb {R} ^{n}}|\nabla u|^{2}=\lim _{t\to 0}{\frac {1}{t}}\left(\int _{\mathbb {R} ^{n}}|u|^{2}-\int _{\mathbb {R} ^{n}}\int _{\mathbb {R} ^{n}}K_{t}(x-y)u(x)u(y)\,dx\,dy\right),$
where for $t\in (0,+\infty )$, the function $K_{t}:\mathbb {R} ^{n}\to \mathbb {R} $ is the heat kernel, defined for every $z\in \mathbb {R} ^{n}$ by
$K_{t}(z)={\frac {1}{(4\pi t)^{\frac {n}{2}}}}e^{-{\frac {|z|^{2}}{4t}}}.$
Since for every $t\in (0,+\infty )$ the function $K_{t}$ is radial and radially decreasing, we have by the Riesz rearrangement inequality
$\int _{\mathbb {R} ^{n}}\int _{\mathbb {R} ^{n}}K_{t}(x-y)\,u(x)\,u(y)\,dx\,dy\leq \int _{\mathbb {R} ^{n}}\int _{\mathbb {R} ^{n}}K_{t}(x-y)\,u^{*}(x)\,u^{*}(y)\,dx\,dy$
Hence, we deduce that
${\begin{aligned}\int _{\mathbb {R} ^{n}}|\nabla u|^{2}&=\lim _{t\to 0}{\frac {1}{t}}\left(\int _{\mathbb {R} ^{n}}|u|^{2}-\int _{\mathbb {R} ^{n}}\int _{\mathbb {R} ^{n}}K_{t}(x-y)u(x)u(y)\,dx\,dy\right)\\[6pt]&\geq \lim _{t\to 0}{\frac {1}{t}}\left(\int _{\mathbb {R} ^{n}}|u|^{2}-\int _{\mathbb {R} ^{n}}\int _{\mathbb {R} ^{n}}K_{t}(x-y)u^{*}(x)u^{*}(y)\,dx\,dy\right)\\[6pt]&=\int _{\mathbb {R} ^{n}}|\nabla u^{*}|^{2}.\end{aligned}}$
References
1. Pólya, George; Szegő, Gábor (1951). Isoperimetric Inequalities in Mathematical Physics. Annals of Mathematics Studies. Princeton, N.J.: Princeton University Press. ISBN 9780691079882. ISSN 0066-2313.
2. Talenti, Giorgio (1976). "Best constant in Sobolev inequality". Annali di Matematica Pura ed Applicata. 110 (1): 353–372. CiteSeerX 10.1.1.615.4193. doi:10.1007/BF02418013. ISSN 0373-3114. S2CID 16923822.
3. Aubin, Thierry (1976-01-01). "Problèmes isopérimétriques et espaces de Sobolev". Journal of Differential Geometry (in French). 11 (4): 573–598. doi:10.4310/jdg/1214433725. ISSN 0022-040X.
4. Brothers, John E.; Ziemer, William P. (1988). "Minimal rearrangements of Sobolev functions". Journal für die Reine und Angewandte Mathematik. 384: 153–179. ISSN 0075-4102.
5. Baernstein II, Albert (1994). "A unified approach to symmetrization". In Alvino, Angelo; Fabes, Eugenes; Talenti, Giorgio (eds.). Partial Differential Equations of Elliptic Type. Symposia Mathematica. Cambridge University Press. pp. 47–92. ISBN 9780521460484.
6. Kawohl, Bernhard (1985). Rearrangements and Convexity of Level Sets in PDE. Lecture Notes in Mathematics. Vol. 1150. Berlin Heidelberg: Springer. doi:10.1007/bfb0075060. ISBN 978-3-540-15693-2. ISSN 0075-8434.
7. Brock, Friedemann; Solynin, Alexander (2000). "An approach to symmetrization via polarization". Transactions of the American Mathematical Society. 352 (4): 1759–1796. doi:10.1090/S0002-9947-99-02558-1. ISSN 0002-9947.
8. Sarvas, Jukka (1972). Symmetrization of Condensers in N-space. Suomalainen Tiedeakatemia. ISBN 9789514100635.
9. Smets, Didier; Willem, Michel (2003). "Partial symmetry and asymptotic behavior for some elliptic variational problems". Calculus of Variations and Partial Differential Equations. 18 (1): 57–75. doi:10.1007/s00526-002-0180-y. ISSN 0944-2669. S2CID 119466691.
10. Angelo, Alvino; Vincenzo, Ferone; Guido, Trombetti; Pierre-Louis, Lions (1997). "Convex symmetrization and applications". Annales de l'Institut Henri Poincaré C (in French). 14 (2): 275. Bibcode:1997AIHPC..14..275A. doi:10.1016/S0294-1449(97)80147-3.
11. Van Schaftingen, Jean (2006). "Anisotropic symmetrization". Annales de l'Institut Henri Poincaré C. 23 (4): 539–565. Bibcode:2006AIHPC..23..539V. doi:10.1016/j.anihpc.2005.06.001.
12. Cianchi, Andrea (2007). "Symmetrization in Anisotropic Elliptic Problems". Communications in Partial Differential Equations. 32 (5): 693–717. doi:10.1080/03605300600634973. ISSN 0360-5302. S2CID 121383998.
13. Lieb, Elliott H.; Loss, Michael (2001-01-01). Analysis (2 ed.). American mathematical Society. ISBN 9780821827833. OCLC 468606724.
| Wikipedia |
Szegő polynomial
In mathematics, a Szegő polynomial is one of a family of orthogonal polynomials for the Hermitian inner product
$\langle f|g\rangle =\int _{-\pi }^{\pi }f(e^{i\theta }){\overline {g(e^{i\theta })}}\,d\mu $
where dμ is a given positive measure on [−π, π]. Writing $\phi _{n}(z)$ for the polynomials, they obey a recurrence relation
$\phi _{n+1}(z)=z\phi _{n}(z)+\rho _{n+1}\phi _{n}^{*}(z)$
where $\rho _{n+1}$ is a parameter, called the reflection coefficient or the Szegő parameter, and $\phi _{n}^{*}(z)=z^{n}{\overline {\phi _{n}(1/{\bar {z}})}}$ denotes the reversed polynomial.
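For example, starting from $\phi _{0}(z)=1$, the recurrence gives $\phi _{1}(z)=z+\rho _{1}$, hence $\phi _{1}^{*}(z)=1+{\overline {\rho }}_{1}z$, and then
$\phi _{2}(z)=z\phi _{1}(z)+\rho _{2}\phi _{1}^{*}(z)=z^{2}+(\rho _{1}+\rho _{2}{\overline {\rho }}_{1})z+\rho _{2}.$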
See also
• Cayley transform
• Schur class
• Favard's theorem
References
• Bultheel, A. (2001) [1994], "Szegö polynomial", Encyclopedia of Mathematics, EMS Press
• G. Szegő, "Orthogonal polynomials", Colloq. Publ., 33, Amer. Math. Soc. (1967)
| Wikipedia |
Degeneracy (graph theory)
In graph theory, a k-degenerate graph is an undirected graph in which every subgraph has a vertex of degree at most k: that is, some vertex in the subgraph touches k or fewer of the subgraph's edges. The degeneracy of a graph is the smallest value of k for which it is k-degenerate. The degeneracy of a graph is a measure of how sparse it is, and is within a constant factor of other sparsity measures such as the arboricity of a graph.
Degeneracy is also known as the k-core number,[1] width,[2] and linkage,[3] and is essentially the same as the coloring number[4] or Szekeres–Wilf number (named after Szekeres and Wilf (1968)). k-degenerate graphs have also been called k-inductive graphs.[5] The degeneracy of a graph may be computed in linear time by an algorithm that repeatedly removes minimum-degree vertices.[6] The connected components that are left after all vertices of degree less than k have been (repeatedly) removed are called the k-cores of the graph and the degeneracy of a graph is the largest value k such that it has a k-core.
Examples
Every finite forest has either an isolated vertex (incident to no edges) or a leaf vertex (incident to exactly one edge); therefore, trees and forests are 1-degenerate graphs. Every 1-degenerate graph is a forest.
Every finite planar graph has a vertex of degree five or less; therefore, every planar graph is 5-degenerate, and the degeneracy of any planar graph is at most five. Similarly, every outerplanar graph has degeneracy at most two,[7] and the Apollonian networks have degeneracy three.
The Barabási–Albert model for generating random scale-free networks[8] is parameterized by a number m such that each vertex that is added to the graph has m previously-added vertices. It follows that any subgraph of a network formed in this way has a vertex of degree at most m (the last vertex in the subgraph to have been added to the graph) and Barabási–Albert networks are automatically m-degenerate.
Every k-regular graph has degeneracy exactly k. More strongly, the degeneracy of a graph equals its maximum vertex degree if and only if at least one of the connected components of the graph is regular of maximum degree. For all other graphs, the degeneracy is strictly less than the maximum degree.[9]
Definitions and equivalences
The coloring number of a graph G was defined by Erdős & Hajnal (1966) to be the least κ for which there exists an ordering of the vertices of G in which each vertex has fewer than κ neighbors that are earlier in the ordering. It should be distinguished from the chromatic number of G, the minimum number of colors needed to color the vertices so that no two adjacent vertices have the same color; the ordering which determines the coloring number provides an order to color the vertices of G with the coloring number, but in general the chromatic number may be smaller.
The degeneracy of a graph G was defined by Lick & White (1970) as the least k such that every induced subgraph of G contains a vertex with k or fewer neighbors. The definition would be the same if arbitrary subgraphs are allowed in place of induced subgraphs, as a non-induced subgraph can only have vertex degrees that are smaller than or equal to the vertex degrees in the subgraph induced by the same vertex set.
The two concepts of coloring number and degeneracy are equivalent: in any finite graph the degeneracy is just one less than the coloring number.[10] For, if a graph has an ordering with coloring number κ then in each subgraph H the vertex that belongs to H and is last in the ordering has at most κ − 1 neighbors in H. In the other direction, if G is k-degenerate, then an ordering with coloring number k + 1 can be obtained by repeatedly finding a vertex v with at most k neighbors, removing v from the graph, ordering the remaining vertices, and adding v to the end of the order.
A third, equivalent formulation is that G is k-degenerate (or has coloring number at most k + 1) if and only if the edges of G can be oriented to form a directed acyclic graph with outdegree at most k.[11] Such an orientation can be formed by orienting each edge towards the earlier of its two endpoints in a coloring number ordering. In the other direction, if an orientation with outdegree k is given, an ordering with coloring number k + 1 can be obtained as a topological ordering of the resulting directed acyclic graph.
k-Cores
A k-core of a graph G is a maximal connected subgraph of G in which all vertices have degree at least k. Equivalently, it is one of the connected components of the subgraph of G formed by repeatedly deleting all vertices of degree less than k. If a non-empty k-core exists, then, clearly, G has degeneracy at least k, and the degeneracy of G is the largest k for which G has a k-core.
A vertex $u$ has coreness $c$ if it belongs to a $c$-core but not to any $(c+1)$-core.
The concept of a k-core was introduced to study the clustering structure of social networks[12] and to describe the evolution of random graphs.[13] It has also been applied in bioinformatics,[14] network visualization,[15] and resilience of networks in ecology.[16] A survey of the topic, covering the main concepts, important algorithmic techniques as well as some application domains, may be found in Malliaros et al. (2019).
Bootstrap percolation is a random process studied as an epidemic model[17] and as a model for fault tolerance for distributed computing.[18] It consists of selecting a random subset of active cells from a lattice or other space, and then considering the k-core of the induced subgraph of this subset.[19]
Algorithms
Matula & Beck (1983) outline an algorithm to derive the degeneracy ordering of a graph $G=(V,E)$ with vertex set V and edge set E in ${\mathcal {O}}(\vert V\vert +\vert E\vert )$ time and ${\mathcal {O}}(\vert V\vert )$ words of space, by storing vertices in a degree-indexed bucket queue and repeatedly removing the vertex with the smallest degree. The degeneracy k is given by the highest degree of any vertex at the time of its removal.
In more detail, the algorithm proceeds as follows:
• Initialize an output list L.
• Compute a number dv for each vertex v in G, the number of neighbors of v that are not already in L. Initially, these numbers are just the degrees of the vertices.
• Initialize an array D such that D[i] contains a list of the vertices v that are not already in L for which dv = i.
• Initialize k to 0.
• Repeat n times:
• Scan the array cells D[0], D[1], ... until finding an i for which D[i] is nonempty.
• Set k to max(k,i)
• Select a vertex v from D[i]. Add v to the beginning of L and remove it from D[i].
• For each neighbor w of v not already in L, subtract one from dw and move w to the cell of D corresponding to the new value of dw.
At the end of the algorithm, any vertex $L[i]$ will have at most k edges to the vertices $L[1,\ldots ,i-1]$. The l-cores of G are the subgraphs $H_{l}\subset G$ that are induced by the vertices $L[1,\ldots ,i]$, where i is the index of the first vertex that has degree $\geq l$ at the time it is added to L.
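The following is a minimal Python sketch of this procedure (the representation is an assumption: the graph is given as a dictionary mapping each vertex to the set of its neighbors, and the names are illustrative):

from collections import defaultdict

def degeneracy_ordering(graph):
    # graph: dictionary mapping each vertex to the set of its neighbors.
    # Returns (L, k): an ordering L in which every vertex has at most k
    # neighbors that appear earlier in L, together with the degeneracy k.
    degree = {v: len(nbrs) for v, nbrs in graph.items()}
    D = defaultdict(set)  # D[i] holds the remaining vertices of degree i
    for v, d in degree.items():
        D[d].add(v)
    removed = set()
    L = []  # recorded in removal order and reversed at the end
    k = 0
    for _ in range(len(graph)):
        i = 0
        while not D[i]:  # scan D[0], D[1], ... for a nonempty cell
            i += 1
        k = max(k, i)
        v = D[i].pop()
        removed.add(v)
        L.append(v)
        for w in graph[v]:  # update the degrees of the remaining neighbors
            if w not in removed:
                D[degree[w]].discard(w)
                degree[w] -= 1
                D[degree[w]].add(w)
    L.reverse()  # the article prepends to L; appending and reversing is equivalent
    return L, k

# Example: the 4-cycle is 2-regular, so its degeneracy is 2.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
order, k = degeneracy_ordering(c4)  # k == 2

For brevity, the scan restarts at D[0] in every round; the linear time bound of Matula & Beck relies on the observation that the minimum degree can decrease by at most one per removal, so the scan position never needs to move back by more than one step.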
Relation to other graph parameters
If a graph G is oriented acyclically with outdegree k, then its edges may be partitioned into k forests by choosing one forest for each outgoing edge of each node. Thus, the arboricity of G is at most equal to its degeneracy. In the other direction, an n-vertex graph that can be partitioned into k forests has at most k(n − 1) edges and therefore has a vertex of degree at most 2k − 1 – thus, the degeneracy is less than twice the arboricity. One may also compute in polynomial time an orientation of a graph that minimizes the outdegree but is not required to be acyclic. The edges of a graph with such an orientation may be partitioned in the same way into k pseudoforests, and conversely any partition of a graph's edges into k pseudoforests leads to an outdegree-k orientation (by choosing an outdegree-1 orientation for each pseudoforest), so the minimum outdegree of such an orientation is the pseudoarboricity, which again is at most equal to the degeneracy.[20] The thickness is also within a constant factor of the arboricity, and therefore also of the degeneracy.[21]
A k-degenerate graph has chromatic number at most k + 1; this is proved by a simple induction on the number of vertices, which is exactly like the proof of the six-color theorem for planar graphs. Since chromatic number is an upper bound on the order of the maximum clique, the latter invariant is also at most degeneracy plus one. By using a greedy coloring algorithm on an ordering with optimal coloring number, one can color a k-degenerate graph using at most k + 1 colors.[22]
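The following is a short Python sketch of this greedy strategy (the function name and graph representation are illustrative, matching the sketch in § Algorithms):

def greedy_coloring(graph, order):
    # Assign to each vertex, in the given order, the smallest color
    # not already used by one of its earlier neighbors.
    color = {}
    for v in order:
        used = {color[w] for w in graph[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

When the vertices are supplied in a degeneracy ordering, each vertex has at most k earlier neighbors, so no vertex is ever forced beyond the (k + 1)st color.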
A k-vertex-connected graph is a graph that cannot be partitioned into more than one component by the removal of fewer than k vertices, or equivalently a graph in which each pair of vertices can be connected by k vertex-disjoint paths. Since these paths must leave the two vertices of the pair via disjoint edges, a k-vertex-connected graph must have degeneracy at least k. Concepts related to k-cores but based on vertex connectivity have been studied in social network theory under the name of structural cohesion.[23]
If a graph has treewidth or pathwidth at most k, then it is a subgraph of a chordal graph which has a perfect elimination ordering in which each vertex has at most k earlier neighbors. Therefore, the degeneracy is at most equal to the treewidth and at most equal to the pathwidth. However, there exist graphs with bounded degeneracy and unbounded treewidth, such as the grid graphs.[24]
The Burr–Erdős conjecture relates the degeneracy of a graph G to the Ramsey number of G, the least n such that any two-edge-coloring of an n-vertex complete graph must contain a monochromatic copy of G. Specifically, the conjecture is that for any fixed value of k, the Ramsey number of k-degenerate graphs grows linearly in the number of vertices of the graphs.[25] The conjecture was proven by Lee (2017).
Infinite graphs
Although concepts of degeneracy and coloring number are frequently considered in the context of finite graphs, the original motivation for Erdős & Hajnal (1966) was the theory of infinite graphs. For an infinite graph G, one may define the coloring number analogously to the definition for finite graphs, as the smallest cardinal number α such that there exists a well-ordering of the vertices of G in which each vertex has fewer than α neighbors that are earlier in the ordering. The inequality between coloring and chromatic numbers holds also in this infinite setting; Erdős & Hajnal (1966) state that, at the time of publication of their paper, it was already well known.
The degeneracy of random subsets of infinite lattices has been studied under the name of bootstrap percolation.
See also
• Graph theory
• Network science
• Percolation Theory
• Core–periphery structure
• Cereceda's conjecture
Notes
1. Bader & Hogue (2003).
2. Freuder (1982).
3. Kirousis & Thilikos (1996).
4. Erdős & Hajnal (1966).
5. Irani (1994).
6. Matula & Beck (1983).
7. Lick & White (1970).
8. Barabási & Albert (1999).
9. Jensen & Toft (2011), p. 78: "It is easy to see that col(G) = Δ(G) + 1 if and only if G has a Δ(G)-regular component." In the notation used by Jensen and Toft, col(G) is the degeneracy plus one, and Δ(G) is the maximum vertex degree.
10. Matula (1968); Lick & White (1970), Proposition 1, page 1084.
11. Chrobak & Eppstein (1991).
12. Seidman (1983).
13. Bollobás (1984); Łuczak (1991);Dorogovtsev, Goltsev & Mendes (2006).
14. Bader & Hogue (2003); Altaf-Ul-Amin et al. (2003); Wuchty & Almaas (2005).
15. Gaertler & Patrignani (2004); Alvarez-Hamelin et al. (2006).
16. Garcia-Algarra et al. (2017).
17. Balogh et al. (2012).
18. Kirkpatrick et al. (2002).
19. Adler (1991).
20. Chrobak & Eppstein (1991); Gabow & Westermann (1992); Venkateswaran (2004); Asahiro et al. (2006); Kowalik (2006).
21. Dean, Hutchinson & Scheinerman (1991).
22. Erdős & Hajnal (1966); Szekeres & Wilf (1968).
23. Moody & White (2003).
24. Robertson & Seymour (1984).
25. Burr & Erdős (1975).
References
• Adler, Joan (1991), "Bootstrap percolation", Physica A: Statistical Mechanics and its Applications, 171 (3): 453–470, Bibcode:1991PhyA..171..453A, doi:10.1016/0378-4371(91)90295-n
• Altaf-Ul-Amin, M.; Nishikata, K.; Koma, T.; Miyasato, T.; Shinbo, Y.; Arifuzzaman, M.; Wada, C.; Maeda, M.; Oshima, T. (2003), "Prediction of protein functions based on k-cores of protein-protein interaction networks and amino acid sequences" (PDF), Genome Informatics, 14: 498–499, archived from the original (PDF) on 2007-09-27
• Alvarez-Hamelin, José Ignacio; Dall'Asta, Luca; Barrat, Alain; Vespignani, Alessandro (2006), "k-core decomposition: a tool for the visualization of large scale networks", in Weiss, Yair; Schölkopf, Bernhard; Platt, John (eds.), Advances in Neural Information Processing Systems 18: Proceedings of the 2005 Conference, vol. 18, The MIT Press, p. 41, arXiv:cs/0504107, Bibcode:2005cs........4107A, ISBN 0262232537
• Asahiro, Yuichi; Miyano, Eiji; Ono, Hirotaka; Zenmyo, Kouhei (2006), "Graph orientation algorithms to minimize the maximum outdegree", CATS '06: Proceedings of the 12th Computing: The Australasian Theory Symposium, Darlinghurst, Australia, Australia: Australian Computer Society, Inc., pp. 11–20, ISBN 1-920682-33-3
• Bader, Gary D.; Hogue, Christopher W. V. (2003), "An automated method for finding molecular complexes in large protein interaction networks", BMC Bioinformatics, 4 (1): 2, doi:10.1186/1471-2105-4-2, PMC 149346, PMID 12525261
• Balogh, József; Bollobás, Béla; Duminil-Copin, Hugo; Morris, Robert (2012), "The sharp threshold for bootstrap percolation in all dimensions", Transactions of the American Mathematical Society, 364 (5): 2667–2701, arXiv:1010.3326, doi:10.1090/S0002-9947-2011-05552-2, MR 2888224, S2CID 2708046
• Barabási, Albert-László; Albert, Réka (1999), "Emergence of scaling in random networks" (PDF), Science, 286 (5439): 509–512, arXiv:cond-mat/9910332, Bibcode:1999Sci...286..509B, doi:10.1126/science.286.5439.509, PMID 10521342, S2CID 524106, archived from the original (PDF) on 2006-11-11
• Bollobás, Béla (1984), "The evolution of sparse graphs", Graph Theory and Combinatorics, Proc. Cambridge Combinatorial Conf. in honor of Paul Erdős, Academic Press, pp. 35–57
• Burr, Stefan A.; Erdős, Paul (1975), "On the magnitude of generalized Ramsey numbers for graphs", Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. 1 (PDF), Colloq. Math. Soc. János Bolyai, vol. 10, Amsterdam: North-Holland, pp. 214–240, MR 0371701
• Chrobak, Marek; Eppstein, David (1991), "Planar orientations with low out-degree and compaction of adjacency matrices" (PDF), Theoretical Computer Science, 86 (2): 243–266, doi:10.1016/0304-3975(91)90020-3
• Dean, Alice M.; Hutchinson, Joan P.; Scheinerman, Edward R. (1991), "On the thickness and arboricity of a graph", Journal of Combinatorial Theory, Series B, 52 (1): 147–151, doi:10.1016/0095-8956(91)90100-X, MR 1109429
• Dorogovtsev, S. N.; Goltsev, A. V.; Mendes, J. F. F. (2006), "k-core organization of complex networks", Physical Review Letters, 96 (4): 040601, arXiv:cond-mat/0509102, Bibcode:2006PhRvL..96d0601D, doi:10.1103/PhysRevLett.96.040601, PMID 16486798, S2CID 2035
| Wikipedia |
Szekeres snark
In the mathematical field of graph theory, the Szekeres snark is a snark with 50 vertices and 75 edges.[1] It was the fifth known snark, discovered by George Szekeres in 1973.[2]
Szekeres snark
The Szekeres snark
Named after: George Szekeres
Vertices: 50
Edges: 75
Radius: 6
Diameter: 7
Girth: 5
Automorphisms: 20
Chromatic number: 3
Chromatic index: 4
Book thickness: 3
Queue number: 2
Properties: Snark, Hypohamiltonian
As a snark, the Szekeres graph is a connected, bridgeless cubic graph with chromatic index equal to 4. The Szekeres snark is non-planar and non-hamiltonian but is hypohamiltonian.[3] It has book thickness 3 and queue number 2.[4]
Another well-known snark on 50 vertices is the Watkins snark, discovered by John J. Watkins in 1989.[5]
Gallery
• The chromatic number of the Szekeres snark is 3.
• The chromatic index of the Szekeres snark is 4.
• Alternative drawing of the Szekeres snark.
References
1. Weisstein, Eric W. "Szekeres Snark". MathWorld.
2. Szekeres, G. (1973). "Polyhedral decompositions of cubic graphs". Bull. Austral. Math. Soc. 8 (3): 367–387. doi:10.1017/S0004972700042660.
3. Weisstein, Eric W. "Hypohamiltonian Graph". MathWorld.
4. Wolz, Jessica. Engineering Linear Layouts with SAT. Master's thesis, University of Tübingen, 2018.
5. Watkins, J. J. "Snarks." Ann. New York Acad. Sci. 576, 606-622, 1989.
| Wikipedia |
Szemerédi's theorem
In arithmetic combinatorics, Szemerédi's theorem is a result concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured[1] that every set of integers A with positive natural density contains a k-term arithmetic progression for every k. Endre Szemerédi proved the conjecture in 1975.
Statement
A subset A of the natural numbers is said to have positive upper density if
$\limsup _{n\to \infty }{\frac {|A\cap \{1,2,3,\dotsc ,n\}|}{n}}>0$.
Szemerédi's theorem asserts that a subset of the natural numbers with positive upper density contains infinitely many arithmetic progressions of length k for all positive integers k.
An often-used equivalent finitary version of the theorem states that for every positive integer k and real number $\delta \in (0,1]$, there exists a positive integer
$N=N(k,\delta )$
such that every subset of {1, 2, ..., N} of size at least δN contains an arithmetic progression of length k.
Another formulation uses the function rk(N), the size of the largest subset of {1, 2, ..., N} without an arithmetic progression of length k. Szemerédi's theorem is equivalent to the asymptotic bound
$r_{k}(N)=o(N)$.
That is, rk(N) grows less than linearly with N.
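The finitary formulation is easy to explore computationally for very small N. The following Python sketch (the helper names has_ap and r_k are ours, and the exhaustive search is exponential, so it is only feasible for tiny N) computes r_k(N) directly from the definition:

```python
from itertools import combinations

def has_ap(s, k):
    """True if the integer set s contains a k-term arithmetic progression."""
    s = set(s)
    top = max(s)
    for a in s:
        d = 1
        while a + (k - 1) * d <= top:
            if all(a + i * d in s for i in range(1, k)):
                return True
            d += 1
    return False

def r_k(N, k):
    """Exact r_k(N) by exhaustive search -- feasible only for very small N."""
    for size in range(N, 0, -1):
        if any(not has_ap(sub, k) for sub in combinations(range(1, N + 1), size)):
            return size
    return 0
```

For instance, r_k(9, 3) evaluates to 5, witnessed by the progression-free set {1, 2, 4, 8, 9}.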
History
Van der Waerden's theorem, a precursor of Szemerédi's theorem, was proven in 1927.
The cases k = 1 and k = 2 of Szemerédi's theorem are trivial. The case k = 3, known as Roth's theorem, was established in 1953 by Klaus Roth[2] via an adaptation of the Hardy–Littlewood circle method. Endre Szemerédi[3] proved the case k = 4 through combinatorics. Using an approach similar to the one he used for the case k = 3, Roth[4] gave a second proof for this in 1972.
The general case was settled in 1975, also by Szemerédi,[5] who developed an ingenious and complicated extension of his previous combinatorial argument for k = 4 (called "a masterpiece of combinatorial reasoning" by Erdős[6]). Several other proofs are now known, the most important being those by Hillel Furstenberg[7][8] in 1977, using ergodic theory, and by Timothy Gowers[9] in 2001, using both Fourier analysis and combinatorics. Terence Tao has called the various proofs of Szemerédi's theorem a "Rosetta stone" for connecting disparate fields of mathematics.[10]
Quantitative bounds
It is an open problem to determine the exact growth rate of rk(N). The best known general bounds are
$CN\exp \left(-n2^{(n-1)/2}{\sqrt[{n}]{\log N}}+{\frac {1}{2n}}\log \log N\right)\leq r_{k}(N)\leq {\frac {N}{(\log \log N)^{2^{-2^{k+9}}}}},$
where $n=\lceil \log k\rceil $. The lower bound is due to O'Bryant[11] building on the work of Behrend,[12] Rankin,[13] and Elkin.[14][15] The upper bound is due to Gowers.[9]
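The lower bounds above come from explicit constructions of large progression-free sets. The following Python sketch (the function name and parameters are ours) implements the heart of Behrend's construction; scanning radius_sq over its possible values and keeping the largest output recovers, by the pigeonhole principle, a 3-AP-free set of the size Behrend's bound guarantees:

```python
from itertools import product

def behrend_set(n_digits, base, radius_sq):
    """Integers whose base-`base` digit vectors lie on a sphere of squared
    radius radius_sq, with every digit below base//2 so that adding two such
    integers never carries.  If x + z = 2y held for three distinct members,
    their digit vectors would be three distinct collinear points on a sphere,
    which is impossible -- so the set has no 3-term arithmetic progression."""
    return sorted(
        sum(d * base**i for i, d in enumerate(v))
        for v in product(range(base // 2), repeat=n_digits)
        if sum(d * d for d in v) == radius_sq
    )
```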
For small k, there are tighter bounds than the general case. When k = 3, Bourgain,[16][17] Heath-Brown,[18] Szemerédi,[19] Sanders,[20] and Bloom[21] established progressively smaller upper bounds, and Bloom and Sisask then proved the first bound that broke the so-called "logarithmic barrier".[22] The current best bounds are
$N2^{-{\sqrt {8\log N}}}\leq r_{3}(N)\leq Ne^{-c(\log N)^{1/11}}$, for some constant $c>0$,
due to O'Bryant,[11] and Kelley and Meka[23] respectively.
For k = 4, Green and Tao[24][25] proved that
$r_{4}(N)\leq C{\frac {N}{(\log N)^{c}}}$
for some c > 0.
Extensions and generalizations
A multidimensional generalization of Szemerédi's theorem was first proven by Hillel Furstenberg and Yitzhak Katznelson using ergodic theory.[26] Timothy Gowers,[27] Vojtěch Rödl and Jozef Skokan[28][29] with Brendan Nagle, Rödl, and Mathias Schacht,[30] and Terence Tao[31] provided combinatorial proofs.
Alexander Leibman and Vitaly Bergelson[32] generalized Szemerédi's theorem to polynomial progressions: If $A\subset \mathbb {N} $ is a set with positive upper density and $p_{1}(n),p_{2}(n),\dotsc ,p_{k}(n)$ are integer-valued polynomials such that $p_{i}(0)=0$, then there are infinitely many $u,n\in \mathbb {Z} $ such that $u+p_{i}(n)\in A$ for all $1\leq i\leq k$. Leibman and Bergelson's result also holds in a multidimensional setting.
The finitary version of Szemerédi's theorem can be generalized to finite additive groups including vector spaces over finite fields.[33] The finite field analog can be used as a model for understanding the theorem in the natural numbers.[34] The problem of obtaining bounds in the k=3 case of Szemerédi's theorem in the vector space $\mathbb {F} _{3}^{n}$ is known as the cap set problem.
The Green–Tao theorem asserts that the prime numbers contain arbitrarily long arithmetic progressions. It is not implied by Szemerédi's theorem because the primes have density 0 in the natural numbers. As part of their proof, Ben Green and Tao introduced a "relative" Szemerédi theorem which applies to subsets of the integers (even those with 0 density) satisfying certain pseudorandomness conditions. A more general relative Szemerédi theorem has since been given by David Conlon, Jacob Fox, and Yufei Zhao.[35][36]
The Erdős conjecture on arithmetic progressions would imply both Szemerédi's theorem and the Green–Tao theorem.
See also
• Problems involving arithmetic progressions
• Ergodic Ramsey theory
• Arithmetic combinatorics
• Szemerédi regularity lemma
Notes
1. Erdős, Paul; Turán, Paul (1936). "On some sequences of integers" (PDF). Journal of the London Mathematical Society. 11 (4): 261–264. doi:10.1112/jlms/s1-11.4.261. MR 1574918.
2. Roth, Klaus Friedrich (1953). "On certain sets of integers". Journal of the London Mathematical Society. 28 (1): 104–109. doi:10.1112/jlms/s1-28.1.104. MR 0051853. Zbl 0050.04002.
3. Szemerédi, Endre (1969). "On sets of integers containing no four elements in arithmetic progression". Acta Mathematica Academiae Scientiarum Hungaricae. 20 (1–2): 89–104. doi:10.1007/BF01894569. MR 0245555. Zbl 0175.04301.
4. Roth, Klaus Friedrich (1972). "Irregularities of sequences relative to arithmetic progressions, IV". Periodica Math. Hungar. 2 (1–4): 301–326. doi:10.1007/BF02018670. MR 0369311. S2CID 126176571.
5. Szemerédi, Endre (1975). "On sets of integers containing no k elements in arithmetic progression" (PDF). Acta Arithmetica. 27: 199–245. doi:10.4064/aa-27-1-199-245. MR 0369312. Zbl 0303.10056.
6. Erdős, Paul (2013). "Some of My Favorite Problems and Results". In Graham, Ronald L.; Nešetřil, Jaroslav; Butler, Steve (eds.). The Mathematics of Paul Erdős I (Second ed.). New York: Springer. pp. 51–70. doi:10.1007/978-1-4614-7258-2_3. ISBN 978-1-4614-7257-5. MR 1425174.
7. Furstenberg, Hillel (1977). "Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions". Journal d'Analyse Mathématique. 31: 204–256. doi:10.1007/BF02813304. MR 0498471. S2CID 120917478..
8. Furstenberg, Hillel; Katznelson, Yitzhak; Ornstein, Donald Samuel (1982). "The ergodic theoretical proof of Szemerédi's theorem". Bull. Amer. Math. Soc. 7 (3): 527–552. doi:10.1090/S0273-0979-1982-15052-2. MR 0670131.
9. Gowers, Timothy (2001). "A new proof of Szemerédi's theorem". Geom. Funct. Anal. 11 (3): 465–588. doi:10.1007/s00039-001-0332-9. MR 1844079. S2CID 124324198.
10. Tao, Terence (2007). "The dichotomy between structure and randomness, arithmetic progressions, and the primes". In Sanz-Solé, Marta; Soria, Javier; Varona, Juan Luis; Verdera, Joan (eds.). Proceedings of the International Congress of Mathematicians Madrid, August 22–30, 2006. International Congress of Mathematicians. Vol. 1. Zürich: European Mathematical Society. pp. 581–608. arXiv:math/0512114. doi:10.4171/022-1/22. ISBN 978-3-03719-022-7. MR 2334204.
11. O'Bryant, Kevin (2011). "Sets of integers that do not contain long arithmetic progressions". Electronic Journal of Combinatorics. 18 (1). doi:10.37236/546. MR 2788676.
12. Behrend, Felix A. (1946). "On the sets of integers which contain no three terms in arithmetic progression". Proceedings of the National Academy of Sciences. 32 (12): 331–332. Bibcode:1946PNAS...32..331B. doi:10.1073/pnas.32.12.331. MR 0018694. PMC 1078964. PMID 16578230. Zbl 0060.10302.
13. Rankin, Robert A. (1962). "Sets of integers containing not more than a given number of terms in arithmetical progression". Proc. R. Soc. Edinburgh Sect. A. 65: 332–344. MR 0142526. Zbl 0104.03705.
14. Elkin, Michael (2011). "An improved construction of progression-free sets". Israel Journal of Mathematics. 184 (1): 93–128. arXiv:0801.4310. doi:10.1007/s11856-011-0061-1. MR 2823971.
15. Green, Ben; Wolf, Julia (2010). "A note on Elkin's improvement of Behrend's construction". In Chudnovsky, David; Chudnovsky, Gregory (eds.). Additive Number Theory. Additive number theory. Festschrift in honor of the sixtieth birthday of Melvyn B. Nathanson. New York: Springer. pp. 141–144. arXiv:0810.0732. doi:10.1007/978-0-387-68361-4_9. ISBN 978-0-387-37029-3. MR 2744752. S2CID 10475217.
16. Bourgain, Jean (1999). "On triples in arithmetic progression". Geom. Funct. Anal. 9 (5): 968–984. doi:10.1007/s000390050105. MR 1726234. S2CID 392820.
17. Bourgain, Jean (2008). "Roth's theorem on progressions revisited". Journal d'Analyse Mathématique. 104 (1): 155–192. doi:10.1007/s11854-008-0020-x. MR 2403433. S2CID 16985451.
18. Heath-Brown, Roger (1987). "Integer sets containing no arithmetic progressions". Journal of the London Mathematical Society. 35 (3): 385–394. doi:10.1112/jlms/s2-35.3.385. MR 0889362.
19. Szemerédi, Endre (1990). "Integer sets containing no arithmetic progressions". Acta Mathematica Hungarica. 56 (1–2): 155–158. doi:10.1007/BF01903717. MR 1100788.
20. Sanders, Tom (2011). "On Roth's theorem on progressions". Annals of Mathematics. 174 (1): 619–636. arXiv:1011.0104. doi:10.4007/annals.2011.174.1.20. MR 2811612. S2CID 53331882.
21. Bloom, Thomas F. (2016). "A quantitative improvement for Roth's theorem on arithmetic progressions". Journal of the London Mathematical Society. Second Series. 93 (3): 643–663. arXiv:1405.5800. doi:10.1112/jlms/jdw010. MR 3509957. S2CID 27536138.
22. Bloom, Thomas; Sisask, Olof (2020). "Breaking the logarithmic barrier in Roth's theorem on arithmetic progressions". arXiv:2007.03528.
23. Kelley, Zander; Meka, Raghu (2023). "Strong bounds for 3-progressions". arXiv:2302.05537.
24. Green, Ben; Tao, Terence (2009). "New bounds for Szemeredi's theorem. II. A new bound for r4(N)". In Chen, William W. L.; Gowers, Timothy; Halberstam, Heini; Schmidt, Wolfgang; Vaughan, Robert Charles (eds.). Analytic number theory. Essays in honour of Klaus Roth on the occasion of his 80th birthday. Cambridge: Cambridge University Press. pp. 180–204. arXiv:math/0610604. Bibcode:2006math.....10604G. ISBN 978-0-521-51538-2. MR 2508645. Zbl 1158.11007.
25. Green, Ben; Tao, Terence (2017). "New bounds for Szemerédi's theorem, III: A polylogarithmic bound for r4(N)". Mathematika. 63 (3): 944–1040. arXiv:1705.01703. doi:10.1112/S0025579317000316. MR 3731312. S2CID 119145424.
26. Furstenberg, Hillel; Katznelson, Yitzhak (1978). "An ergodic Szemerédi theorem for commuting transformations". Journal d'Analyse Mathématique. 38 (1): 275–291. doi:10.1007/BF02790016. MR 0531279. S2CID 123386017.
27. Gowers, Timothy (2007). "Hypergraph regularity and the multidimensional Szemerédi theorem". Annals of Mathematics. 166 (3): 897–946. arXiv:0710.3032. doi:10.4007/annals.2007.166.897. MR 2373376. S2CID 56118006.
28. Rödl, Vojtěch; Skokan, Jozef (2004). "Regularity lemma for k-uniform hypergraphs". Random Structures Algorithms. 25 (1): 1–42. doi:10.1002/rsa.20017. MR 2069663. S2CID 7458739.
29. Rödl, Vojtěch; Skokan, Jozef (2006). "Applications of the regularity lemma for uniform hypergraphs" (PDF). Random Structures Algorithms. 28 (2): 180–194. doi:10.1002/rsa.20108. MR 2198496. S2CID 18203198.
30. Nagle, Brendan; Rödl, Vojtěch; Schacht, Mathias (2006). "The counting lemma for regular k-uniform hypergraphs". Random Structures Algorithms. 28 (2): 113–179. doi:10.1002/rsa.20117. MR 2198495. S2CID 14126774.
31. Tao, Terence (2006). "A variant of the hypergraph removal lemma". Journal of Combinatorial Theory. Series A. 113 (7): 1257–1280. arXiv:math/0503572. doi:10.1016/j.jcta.2005.11.006. MR 2259060.
32. Bergelson, Vitaly; Leibman, Alexander (1996). "Polynomial extensions of van der Waerden's and Szemerédi's theorems". Journal of the American Mathematical Society. 9 (3): 725–753. doi:10.1090/S0894-0347-96-00194-4. MR 1325795.
33. Furstenberg, Hillel; Katznelson, Yitzhak (1991). "A density version of the Hales–Jewett theorem". Journal d'Analyse Mathématique. 57 (1): 64–119. doi:10.1007/BF03041066. MR 1191743. S2CID 123036744.
34. Wolf, Julia (2015). "Finite field models in arithmetic combinatorics—ten years on". Finite Fields and Their Applications. 32: 233–274. doi:10.1016/j.ffa.2014.11.003. MR 3293412.
35. Conlon, David; Fox, Jacob; Zhao, Yufei (2015). "A relative Szemerédi theorem". Geometric and Functional Analysis. 25 (3): 733–762. arXiv:1305.5440. doi:10.1007/s00039-015-0324-9. MR 3361771. S2CID 14398869.
36. Zhao, Yufei (2014). "An arithmetic transference proof of a relative Szemerédi theorem". Mathematical Proceedings of the Cambridge Philosophical Society. 156 (2): 255–261. arXiv:1307.4959. Bibcode:2014MPCPS.156..255Z. doi:10.1017/S0305004113000662. MR 3177868. S2CID 119673319.
Further reading
• Tao, Terence (2007). "The ergodic and combinatorial approaches to Szemerédi's theorem". In Granville, Andrew; Nathanson, Melvyn B.; Solymosi, József (eds.). Additive Combinatorics. CRM Proceedings & Lecture Notes. Vol. 43. Providence, RI: American Mathematical Society. pp. 145–193. arXiv:math/0604456. Bibcode:2006math......4456T. ISBN 978-0-8218-4351-2. MR 2359471. Zbl 1159.11005.
External links
• PlanetMath source for initial version of this page
• Announcement by Ben Green and Terence Tao – the preprint is available at math.NT/0404188
• Discussion of Szemerédi's theorem (part 1 of 5)
• Ben Green and Terence Tao: Szemerédi's theorem on Scholarpedia
• Weisstein, Eric W. "SzemeredisTheorem". MathWorld.
• Grime, James; Hodge, David (2012). "6,000,000: Endre Szemerédi wins the Abel Prize". Numberphile. Brady Haran.
| Wikipedia |
Szemerédi–Trotter theorem
The Szemerédi–Trotter theorem is a mathematical result in the field of discrete geometry. It asserts that given n points and m lines in the Euclidean plane, the number of incidences (i.e., the number of point–line pairs such that the point lies on the line) is
$O\left(n^{2/3}m^{2/3}+n+m\right).$
This bound cannot be improved, except in terms of the implicit constants.
As for the implicit constants, it was shown by János Pach, Radoš Radoičić, Gábor Tardos, and Géza Tóth[1] that the upper bound $2.5n^{2/3}m^{2/3}+n+m$ holds. Since then better constants are known due to better crossing lemma constants; the current best is 2.44.[2] On the other hand, Pach and Tóth showed that the statement does not hold true if one replaces the coefficient 2.5 with 0.42.[3]
An equivalent formulation of the theorem is the following. Given n points and an integer k ≥ 2, the number of lines which pass through at least k of the points is
$O\left({\frac {n^{2}}{k^{3}}}+{\frac {n}{k}}\right).$
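As a concrete check of this formulation, k-rich lines can be counted by brute force. A minimal Python sketch, assuming integer point coordinates (the function name is illustrative):

```python
from collections import defaultdict
from fractions import Fraction

def rich_lines(points, k):
    """Number of distinct lines through at least k of the given points
    (integer coordinates assumed; exact arithmetic via Fraction)."""
    pts = list(set(points))
    lines = defaultdict(set)
    for i, (x1, y1) in enumerate(pts):
        for x2, y2 in pts[i + 1:]:
            if x1 == x2:
                key = ("vertical", Fraction(x1))
            else:
                slope = Fraction(y2 - y1, x2 - x1)
                key = (slope, Fraction(y1) - slope * x1)
            lines[key].update([(x1, y1), (x2, y2)])
    return sum(len(members) >= k for members in lines.values())
```

On a 5 × 5 integer grid, for example, rich_lines returns 12 for k = 5: the five rows, the five columns, and the two main diagonals.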
The original proof of Endre Szemerédi and William T. Trotter was somewhat complicated, using a combinatorial technique known as cell decomposition.[4][5] Later, László Székely discovered a much simpler proof using the crossing number inequality for graphs.[6] (See below.)
The Szemerédi–Trotter theorem has a number of consequences, including Beck's theorem in incidence geometry and the Erdős-Szemerédi sum-product problem in additive combinatorics.
Proof of the first formulation
We may discard the lines which contain two or fewer of the points, as they can contribute at most 2m incidences to the total number. Thus we may assume that every line contains at least three of the points.
If a line contains k points, then it will contain k − 1 line segments which connect two consecutive points along the line. Because k ≥ 3 after discarding the two-point lines, it follows that k − 1 ≥ k/2, so the number of these line segments on each line is at least half the number of incidences on that line. Summing over all of the lines, the number of these line segments is again at least half the total number of incidences. Thus if e denotes the number of such line segments, it will suffice to show that
$e=O\left(n^{2/3}m^{2/3}+n+m\right).$
Now consider the graph formed by using the n points as vertices, and the e line segments as edges. Since each line segment lies on one of m lines, and any two lines intersect in at most one point, the crossing number of this graph is at most the number of points where two lines intersect, which is at most m(m − 1)/2. The crossing number inequality implies that either e ≤ 7.5n, or that $m(m-1)/2\geq e^{3}/33.75n^{2}$. In either case $e\leq 3.24(nm)^{2/3}+7.5n$, giving the desired bound
$e=O\left(n^{2/3}m^{2/3}+n+m\right).$
Proof of the second formulation
Since every pair of points can be connected by at most one line, there can be at most n(n − 1)/2 lines which connect k or more points, since k ≥ 2. This bound will prove the theorem when k is small (e.g. if k ≤ C for some absolute constant C). Thus, we need only consider the case when k is large, say k ≥ C.
Suppose that there are m lines that each contain at least k points. These lines generate at least mk incidences, and so by the first formulation of the Szemerédi–Trotter theorem, we have
$mk=O\left(n^{2/3}m^{2/3}+n+m\right),$
and so at least one of the statements $mk=O(n^{2/3}m^{2/3}),mk=O(n)$, or $mk=O(m)$ is true. The third possibility is ruled out since k was assumed to be large, so we are left with the first two. But in either of these two cases, some elementary algebra will give the bound $m=O(n^{2}/k^{3}+n/k)$ as desired.
Optimality
Except for its constant, the Szemerédi–Trotter incidence bound cannot be improved. To see this, consider for any positive integer $N\in \mathbb {N} $ a set of points on the integer lattice
$P=\left\{(a,b)\in \mathbb {Z} ^{2}\ :\ 1\leq a\leq N;1\leq b\leq 2N^{2}\right\},$
and a set of lines
$L=\left\{(x,mx+b)\ :\ m,b\in \mathbb {Z} ;1\leq m\leq N;1\leq b\leq N^{2}\right\}.$
Clearly, $|P|=2N^{3}$ and $|L|=N^{3}$. Since each line is incident to N points (i.e., once for each $x\in \{1,\cdots ,N\}$), the number of incidences is $N^{4}$ which matches the upper bound.[7]
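This construction is easy to verify computationally for small N; the following Python sketch builds the point set and line set above and confirms that the incidence count is exactly N^4:

```python
N = 6  # a small instance of the construction above
P = {(a, b) for a in range(1, N + 1) for b in range(1, 2 * N**2 + 1)}
L = [(m, b) for m in range(1, N + 1) for b in range(1, N**2 + 1)]

# Each line y = m*x + b meets P at every x in 1..N, since then
# 2 <= m*x + b <= N*N + N**2 = 2*N**2, keeping the point inside the grid.
incidences = sum((x, m * x + b) in P for (m, b) in L for x in range(1, N + 1))
assert incidences == N**4  # |P| = 2N^3, |L| = N^3, incidences = N^4
print(incidences)
```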
Generalization to $\mathbb {R} ^{d}$
One generalization of this result to arbitrary dimension, $\mathbb {R} ^{d}$, was found by Agarwal and Aronov.[8] Given a set of n points, S, and the set of m hyperplanes, H, which are each spanned by S, the number of incidences between S and H is bounded above by
$O\left(m^{2/3}n^{d/3}+n^{d-1}\right).$
Equivalently, the number of hyperplanes in H containing k or more points is bounded above by
$O\left({\frac {n^{d}}{k^{3}}}+{\frac {n^{d-1}}{k}}\right).$
A construction due to Edelsbrunner shows this bound to be asymptotically optimal.[9]
József Solymosi and Terence Tao obtained near sharp upper bounds for the number of incidences between points and algebraic varieties in higher dimensions, when the points and varieties satisfy "certain pseudo-line type axioms". Their proof uses the Polynomial Ham Sandwich Theorem.[10]
In $\mathbb {C} ^{2}$
Many proofs of the Szemerédi–Trotter theorem over $\mathbb {R} $ rely in a crucial way on the topology of Euclidean space, so they do not extend easily to other fields. For example, the original proof of Szemerédi and Trotter, the polynomial partitioning proof, and the crossing number proof do not extend to the complex plane.
Tóth successfully generalized the original proof of Szemerédi and Trotter to the complex plane $\mathbb {C} ^{2}$ by introducing additional ideas.[11] This result was also obtained independently and through a different method by Zahl.[12] The implicit constant in the bound is not the same in the complex numbers: in Tóth's proof the constant can be taken to be $10^{60}$; the constant is not explicit in Zahl's proof.
When the point set is a Cartesian product, Solymosi and Tardos show that the Szemerédi-Trotter bound holds using a much simpler argument.[13]
In finite fields
Let $\mathbb {F} $ be a field.
A Szemerédi-Trotter bound is impossible in general due to the following example, stated here in $\mathbb {F} _{p}$: let ${\mathcal {P}}=\mathbb {F} _{p}\times \mathbb {F} _{p}$ be the set of all $p^{2}$ points and let ${\mathcal {L}}$ be the set of all $p^{2}$ lines in the plane. Since each line contains $p$ points, there are $p^{3}$ incidences. On the other hand, a Szemerédi-Trotter bound would give $O((p^{2})^{2/3}(p^{2})^{2/3}+p^{2})=O(p^{8/3})$ incidences. This example shows that the trivial, combinatorial incidence bound is tight.
Bourgain, Katz and Tao[14] show that if this example is excluded, then an incidence bound that is an improvement on the trivial bound can be attained.
Incidence bounds over finite fields are of two types: (i) when at least one of the set of points or the set of lines is 'large' in terms of the characteristic of the field; (ii) when both the set of points and the set of lines are 'small' in terms of the characteristic.
Large set incidence bounds
Let $q$ be an odd prime power. Then Vinh[15] showed that the number of incidences between $n$ points and $m$ lines in $\mathbb {F} _{q}^{2}$ is at most
${\frac {nm}{q}}+{\sqrt {qnm}}.$
Note that there is no implicit constant in this bound.
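Vinh's bound can be checked numerically for small primes. The sketch below (names are illustrative; it is restricted to the p^2 non-vertical lines y = mx + b, as in the example above) compares a brute-force incidence count with the bound:

```python
import math
import random

def count_incidences(points, lines, p):
    """Incidences between points (x, y) in F_p^2 and non-vertical lines
    y = m*x + b mod p (vertical lines are omitted in this sketch)."""
    pts = set(points)
    return sum((x, (m * x + b) % p) in pts for (m, b) in lines for x in range(p))

p = 11  # an odd prime, as the bound requires
points = random.sample([(x, y) for x in range(p) for y in range(p)], 40)
lines = random.sample([(m, b) for m in range(p) for b in range(p)], 60)
n, m = len(points), len(lines)
bound = n * m / p + math.sqrt(p * n * m)
print(count_incidences(points, lines, p), "<=", bound)
```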
Small set incidence bounds
Let $\mathbb {F} $ be a field of characteristic $p\neq 2$. Stevens and de Zeeuw[16] show that the number of incidences between $n$ points and $m$ lines in $\mathbb {F} ^{2}$ is
$O\left(m^{11/15}n^{11/15}\right)$
under the condition $m^{-2}n^{13}\leq p^{15}$ in positive characteristic. (In a field of characteristic zero, this condition is not necessary.) This bound is better than the trivial incidence estimate when $m^{7/8}<n<m^{8/7}$.
If the point set is a Cartesian Product, then they show an improved incidence bound: let ${\mathcal {P}}=A\times B\subseteq \mathbb {F} ^{2}$ be a finite set of points with $|A|\leq |B|$ and let ${\mathcal {L}}$ be a set of lines in the plane. Suppose that $|A||B|^{2}\leq |{\mathcal {L}}|^{3}$ and in positive characteristic that $|A||{\mathcal {L}}|\leq p^{2}$. Then the number of incidences between ${\mathcal {P}}$ and ${\mathcal {L}}$ is
$O\left(|A|^{3/4}|B|^{1/2}|{\mathcal {L}}|^{3/4}+|{\mathcal {L}}|\right).$
This bound is optimal. Note that by point-line duality in the plane, this incidence bound can be rephrased for an arbitrary point set and a set of lines having a Cartesian product structure.
In both the reals and arbitrary fields, Rudnev and Shkredov[17] show an incidence bound for when both the point set and the line set has a Cartesian product structure. This is sometimes better than the above bounds.
References
1. Pach, János; Radoičić, Radoš; Tardos, Gábor; Tóth, Géza (2006). "Improving the Crossing Lemma by Finding More Crossings in Sparse Graphs". Discrete & Computational Geometry. 36 (4): 527–552. doi:10.1007/s00454-006-1264-9.
2. Ackerman, Eyal (December 2019). "On topological graphs with at most four crossings per edge". Computational Geometry. 85: 101574. arXiv:1509.01932. doi:10.1016/j.comgeo.2019.101574. ISSN 0925-7721. S2CID 16847443.
3. Pach, János; Tóth, Géza (1997). "Graphs drawn with few crossings per edge". Combinatorica. 17 (3): 427–439. CiteSeerX 10.1.1.47.4690. doi:10.1007/BF01215922. S2CID 20480170.
4. Szemerédi, Endre; Trotter, William T. (1983). "Extremal problems in discrete geometry". Combinatorica. 3 (3–4): 381–392. doi:10.1007/BF02579194. MR 0729791. S2CID 1750834.
5. Szemerédi, Endre; Trotter, William T. (1983). "A Combinatorial Distinction Between the Euclidean and Projective Planes" (PDF). European Journal of Combinatorics. 4 (4): 385–394. doi:10.1016/S0195-6698(83)80036-5.
6. Székely, László A. (1997). "Crossing numbers and hard Erdős problems in discrete geometry". Combinatorics, Probability and Computing. 6 (3): 353–358. CiteSeerX 10.1.1.125.1484. doi:10.1017/S0963548397002976. MR 1464571. S2CID 36602807.
7. Terence Tao (March 17, 2011). "An incidence theorem in higher dimensions". Retrieved August 26, 2012.
8. Agarwal, Pankaj; Aronov, Boris (1992). "Counting facets and incidences". Discrete & Computational Geometry. 7 (1): 359–369. doi:10.1007/BF02187848.
9. Edelsbrunner, Herbert (1987). "6.5 Lower bounds for many cells". Algorithms in Combinatorial Geometry. Springer-Verlag. ISBN 978-3-540-13722-1.
10. Solymosi, József; Tao, Terence (September 2012). "An incidence theorem in higher dimensions". Discrete & Computational Geometry. 48 (2): 255–280. arXiv:1103.2926. doi:10.1007/s00454-012-9420-x. MR 2946447. S2CID 17830766.
11. Tóth, Csaba D. (2015). "The Szemerédi-Trotter Theorem in the Complex Plane". Combinatorica. 35 (1): 95–126. arXiv:math/0305283. doi:10.1007/s00493-014-2686-2. S2CID 13237229.
12. Zahl, Joshua (2015). "A Szemerédi-Trotter Type Theorem in ℝ4". Discrete & Computational Geometry. 54 (3): 513–572. arXiv:1203.4600. doi:10.1007/s00454-015-9717-7. S2CID 16610999.
13. Solymosi, Jozsef; Tardos, Gabor (2007). "On the number of k-rich transformations". Proceedings of the twenty-third annual symposium on Computational geometry - SCG '07. SCG '07. New York, New York, USA: ACM Press. pp. 227–231. doi:10.1145/1247069.1247111. ISBN 978-1-59593-705-6. S2CID 15928844.
14. Bourgain, Jean; Katz, Nets; Tao, Terence (2004-02-01). "A sum-product estimate in finite fields, and applications". Geometric and Functional Analysis. 14 (1): 27–57. arXiv:math/0301343. doi:10.1007/s00039-004-0451-1. ISSN 1016-443X. S2CID 14097626.
15. Vinh, Le Anh (November 2011). "The Szemerédi–Trotter type theorem and the sum-product estimate in finite fields". European Journal of Combinatorics. 32 (8): 1177–1181. arXiv:0711.4427. doi:10.1016/j.ejc.2011.06.008. ISSN 0195-6698. S2CID 1956316.
16. Stevens, Sophie; de Zeeuw, Frank (2017-08-03). "An improved point-line incidence bound over arbitrary fields". Bulletin of the London Mathematical Society. 49 (5): 842–858. arXiv:1609.06284. doi:10.1112/blms.12077. ISSN 0024-6093. S2CID 119635655.
17. Rudnev, Misha; Shkredov, Ilya D. (July 2022). "On the growth rate in SL_2(F_p), the affine group and sum-product type implications". Mathematika. 68 (3): 738–783. arXiv:1812.01671. doi:10.1112/mtk.12120. S2CID 248710290.
Incidence structures
Representation
• Incidence matrix
• Incidence graph
Fields
• Combinatorics
• Block design
• Steiner system
• Geometry
• Incidence
• Projective plane
• Graph theory
• Hypergraph
• Statistics
• Blocking
Configurations
• Complete quadrangle
• Fano plane
• Möbius–Kantor configuration
• Pappus configuration
• Hesse configuration
• Desargues configuration
• Reye configuration
• Schläfli double six
• Cremona–Richmond configuration
• Kummer configuration
• Grünbaum–Rigby configuration
• Klein configuration
• Dual
Theorems
• Sylvester–Gallai theorem
• De Bruijn–Erdős theorem
• Szemerédi–Trotter theorem
• Beck's theorem
• Bruck–Ryser–Chowla theorem
Applications
• Design of experiments
• Kirkman's schoolgirl problem
| Wikipedia |
Zsigmondy's theorem
In number theory, Zsigmondy's theorem, named after Karl Zsigmondy, states that if $a>b>0$ are coprime integers, then for any integer $n\geq 1$, there is a prime number p (called a primitive prime divisor) that divides $a^{n}-b^{n}$ and does not divide $a^{k}-b^{k}$ for any positive integer $k<n$, with the following exceptions:
• $n=1$, $a-b=1$; then $a^{n}-b^{n}=1$ which has no prime divisors
• $n=2$, $a+b$ a power of two; then any odd prime factors of $a^{2}-b^{2}=(a+b)(a^{1}-b^{1})$ must be contained in $a^{1}-b^{1}$, which is also even
• $n=6$, $a=2$, $b=1$; then $a^{6}-b^{6}=63=3^{2}\times 7=(a^{2}-b^{2})^{2}(a^{3}-b^{3})$
This generalizes Bang's theorem,[1] which states that if $n>1$ and $n$ is not equal to 6, then $2^{n}-1$ has a prime divisor not dividing any $2^{k}-1$ with $k<n$.
Similarly, $a^{n}+b^{n}$ has at least one primitive prime divisor with the exception $2^{3}+1^{3}=9$.
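Primitive prime divisors are straightforward to compute directly from the definition. A minimal Python sketch, assuming the SymPy library is available for factoring (the function name is ours):

```python
from sympy import primefactors  # assumes the SymPy library is installed

def primitive_prime_divisors(a, b, n):
    """Primes dividing a**n - b**n but not a**k - b**k for any 0 < k < n."""
    earlier = set()
    for k in range(1, n):
        earlier.update(primefactors(a**k - b**k))
    return [p for p in primefactors(a**n - b**n) if p not in earlier]
```

For example, primitive_prime_divisors(2, 1, 6) returns the empty list, reflecting the exceptional case above, while primitive_prime_divisors(2, 1, 4) returns [5], since 2^4 − 1 = 15 = 3 × 5 and 3 already divides 2^2 − 1.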
Zsigmondy's theorem is often useful, especially in group theory, where it is used to prove that various groups have distinct orders except when they are known to be the same.[2][3]
History
The theorem was discovered by Zsigmondy working in Vienna from 1894 until 1925.
Generalizations
Let $(a_{n})_{n\geq 1}$ be a sequence of nonzero integers. The Zsigmondy set associated to the sequence is the set
${\mathcal {Z}}(a_{n})=\{n\geq 1:a_{n}{\text{ has no primitive prime divisors}}\}.$
i.e., the set of indices $n$ such that every prime dividing $a_{n}$ also divides $a_{m}$ for some $m<n$. Thus Zsigmondy's theorem implies that ${\mathcal {Z}}(a^{n}-b^{n})\subset \{1,2,6\}$, and Carmichael's theorem says that the Zsigmondy set of the Fibonacci sequence is $\{1,2,6,12\}$, and that of the Pell sequence is $\{1\}$. In 2001 Bilu, Hanrot, and Voutier[4] proved that in general, if $(a_{n})_{n\geq 1}$ is a Lucas sequence or a Lehmer sequence, then ${\mathcal {Z}}(a_{n})\subseteq \{1\leq n\leq 30\}$ (see OEIS: A285314, there are only 13 such $n$s, namely 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 13, 18, 30). Lucas and Lehmer sequences are examples of divisibility sequences.
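The Zsigmondy set of any concrete integer sequence can be computed the same way, by tracking which primes have already appeared among earlier terms. A sketch, again assuming SymPy, that reproduces Carmichael's theorem for the first Fibonacci numbers:

```python
from sympy import primefactors  # assumes the SymPy library is installed

def zsigmondy_set(terms):
    """Indices n (1-based) at which terms[n-1] has no primitive prime divisor."""
    seen, z = set(), []
    for n, a in enumerate(terms, start=1):
        primes = set(primefactors(abs(a)))
        if not primes - seen:   # no prime factor here is new
            z.append(n)
        seen |= primes
    return z

fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])
print(zsigmondy_set(fib))  # [1, 2, 6, 12], as Carmichael's theorem predicts
```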
It is also known that if $(W_{n})_{n\geq 1}$ is an elliptic divisibility sequence, then its Zsigmondy set ${\mathcal {Z}}(W_{n})$ is finite.[5] However, the result is ineffective in the sense that the proof does not give an explicit upper bound for the largest element in ${\mathcal {Z}}(W_{n})$, although it is possible to give an effective upper bound for the number of elements in ${\mathcal {Z}}(W_{n})$.[6]
See also
• Carmichael's theorem
References
1. A. S. Bang (1886). "Taltheoretiske Undersøgelser". Tidsskrift for Mathematik. 5. Mathematica Scandinavica. 4: 70–80. JSTOR 24539988. And Bang, A. S. (1886). "Taltheoretiske Undersøgelser (continued, see p. 80)". Tidsskrift for Mathematik. 4: 130–137. JSTOR 24540006.
2. Montgomery, H. "Divisibility of Mersenne Numbers." 17 Sep 2001.
3. Artin, Emil (August 1955). "The Orders of the Linear Groups". Comm. Pure Appl. Math. 8 (3): 355–365. doi:10.1002/cpa.3160080302.
4. Y. Bilu, G. Hanrot, P.M. Voutier, Existence of primitive divisors of Lucas and Lehmer numbers, J. Reine Angew. Math. 539 (2001), 75-122
5. J.H. Silverman, Wieferich's criterion and the abc-conjecture, J. Number Theory 30 (1988), 226-237
6. P. Ingram, J.H. Silverman, Uniform estimates for primitive divisors in elliptic divisibility sequences, Number theory, Analysis and Geometry, Springer-Verlag, 2010, 233-263.
• K. Zsigmondy (1892). "Zur Theorie der Potenzreste". Journal Monatshefte für Mathematik. 3 (1): 265–284. doi:10.1007/BF01692444. hdl:10338.dmlcz/120560.
• Th. Schmid (1927). "Karl Zsigmondy". Jahresbericht der Deutschen Mathematiker-Vereinigung. 36: 167–168.
• Moshe Roitman (1997). "On Zsigmondy Primes". Proceedings of the American Mathematical Society. 125 (7): 1913–1919. doi:10.1090/S0002-9939-97-03981-6. JSTOR 2162291.
• Walter Feit (1988). "On Large Zsigmondy Primes". Proceedings of the American Mathematical Society. American Mathematical Society. 102 (1): 29–36. doi:10.2307/2046025. JSTOR 2046025.
• Everest, Graham; van der Poorten, Alf; Shparlinski, Igor; Ward, Thomas (2003). Recurrence sequences. Mathematical Surveys and Monographs. Vol. 104. Providence, RI: American Mathematical Society. pp. 103–104. ISBN 0-8218-3387-1. Zbl 1033.11006.
External links
• Weisstein, Eric W. "Zsigmondy Theorem". MathWorld.
| Wikipedia |
Entropy in thermodynamics and information theory
The mathematical expressions for thermodynamic entropy in the statistical thermodynamics formulation established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s are similar to the information entropy by Claude Shannon and Ralph Hartley, developed in the 1940s.
Equivalence of form of the defining expressions
The defining expression for entropy in the theory of statistical mechanics established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s, is of the form:
$S=-k_{\text{B}}\sum _{i}p_{i}\ln p_{i},$
where $p_{i}$ is the probability of the microstate i taken from an equilibrium ensemble, and $k_{\text{B}}$ is the Boltzmann constant.
The defining expression for entropy in the theory of information established by Claude E. Shannon in 1948 is of the form:
$H=-\sum _{i}p_{i}\log _{b}p_{i},$
where $p_{i}$ is the probability of the message $m_{i}$ taken from the message space M, and b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the unit of entropy is shannon (or bit) for b = 2, nat for b = e, and hartley for b = 10.[1]
Mathematically H may also be seen as an average information, taken over the message space, because when a certain message occurs with probability pi, the information quantity −log(pi) (called information content or self-information) will be obtained.
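This average is simple to compute directly. A minimal Python sketch (the function name is illustrative) showing how the unit depends on the logarithm base b:

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum_i p_i log_b(p_i) for a discrete distribution; base 2 gives
    shannons (bits), base e gives nats, base 10 gives hartleys."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(shannon_entropy([1 / 8] * 8))          # 3.0 bits: uniform over 8 messages
print(shannon_entropy([1 / 8] * 8, math.e))  # ~2.079 nats = 3 ln 2
```

For a uniform distribution the function recovers the Hartley entropy log_b |M| discussed below.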
If all the microstates are equiprobable (a microcanonical ensemble), the statistical thermodynamic entropy reduces to the form, as given by Boltzmann,
$S=k_{\text{B}}\ln W,$
where W is the number of microstates that corresponds to the macroscopic thermodynamic state. W, and hence S, is therefore a function of the macroscopic state variables (such as the energy and volume) that specify the macrostate.
If all the messages are equiprobable, the information entropy reduces to the Hartley entropy
$H=\log _{b}|M|\ ,$
where $|M|$ is the cardinality of the message space M.
The logarithm in the thermodynamic definition is the natural logarithm. It can be shown that the Gibbs entropy formula, with the natural logarithm, reproduces all of the properties of the macroscopic classical thermodynamics of Rudolf Clausius. (See article: Entropy (statistical views)).
The logarithm can also be taken to the natural base in the case of information entropy. This is equivalent to choosing to measure information in nats instead of the usual bits (or more formally, shannons). In practice, information entropy is almost always calculated using base-2 logarithms, but this distinction amounts to nothing other than a change in units. One nat is about 1.44 shannons.
For a simple compressible system that can only perform volume work, the first law of thermodynamics becomes
$dE=-pdV+TdS.$
But one can equally well write this equation in terms of what physicists and chemists sometimes call the 'reduced' or dimensionless entropy, σ = S/k, so that
$dE=-pdV+k_{\text{B}}Td\sigma .$
Just as S is conjugate to T, so σ is conjugate to kBT (the energy that is characteristic of T on a molecular scale).
Thus the definitions of entropy in statistical mechanics (the Gibbs entropy formula $S=-k_{\mathrm {B} }\sum _{i}p_{i}\log p_{i}$) and in classical thermodynamics ($dS={\frac {\delta Q_{\text{rev}}}{T}}$, together with the fundamental thermodynamic relation) are equivalent for the microcanonical ensemble and for statistical ensembles describing a thermodynamic system in equilibrium with a reservoir, such as the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. This equivalence is commonly shown in textbooks. However, the equivalence between the thermodynamic definition of entropy and the Gibbs entropy is not general, but is instead an exclusive property of the generalized Boltzmann distribution.[2]
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamics entropy under the following postulates:[3]
1. The probability density function is proportional to some function of the ensemble parameters and random variables.
2. Thermodynamic state functions are described by ensemble averages of random variables.
3. At infinite temperature, all the microstates have the same probability.
Theoretical relationship
Despite the foregoing, there is a difference between the two quantities. The information entropy Η can be calculated for any probability distribution (if the "message" is taken to be that the event i which had probability pi occurred, out of the space of the events possible), while the thermodynamic entropy S refers to thermodynamic probabilities pi specifically. The difference is more theoretical than actual, however, because any probability distribution can be approximated arbitrarily closely by some thermodynamic system.
Moreover, a direct connection can be made between the two. If the probabilities in question are the thermodynamic probabilities pi: the (reduced) Gibbs entropy σ can then be seen as simply the amount of Shannon information needed to define the detailed microscopic state of the system, given its macroscopic description. Or, in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more". To be more concrete, in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the average of the minimum number of yes–no questions needed to be answered in order to fully specify the microstate, given that we know the macrostate.
Furthermore, the prescription to find the equilibrium distributions of statistical mechanics—such as the Boltzmann distribution—by maximising the Gibbs entropy subject to appropriate constraints (the Gibbs algorithm) can be seen as something not unique to thermodynamics, but as a principle of general relevance in statistical inference, if it is desired to find a maximally uninformative probability distribution, subject to certain constraints on its averages. (These perspectives are explored further in the article Maximum entropy thermodynamics.)
The Shannon entropy in information theory is sometimes expressed in units of bits per symbol. The physical entropy may be on a "per quantity" basis (h), which is called "intensive" entropy, as opposed to the usual total entropy, which is called "extensive" entropy. The "shannons" of a message (Η) are its total "extensive" information entropy, equal to h times the number of symbols in the message.
A direct and physically real relationship between h and S can be found by assigning a symbol to each microstate that occurs per mole, kilogram, volume, or particle of a homogeneous substance, then calculating the 'h' of these symbols. By theory or by observation, the symbols (microstates) will occur with different probabilities and this will determine h. If there are N moles, kilograms, volumes, or particles of the unit substance, the relationship between h (in bits per unit substance) and physical extensive entropy in nats is:
$S=k_{\mathrm {B} }\ln(2)Nh$
where ln(2) is the conversion factor from base 2 of Shannon entropy to the natural base e of physical entropy. Nh is the amount of information in bits needed to describe the state of a physical system with entropy S. Landauer's principle demonstrates the physical reality of this relationship: the minimum energy E required (and therefore heat Q generated) by an ideally efficient memory change or logic operation that irreversibly erases or merges Nh bits of information is S times the temperature,
$E=Q=Tk_{\mathrm {B} }\ln(2)Nh,$
where h is in informational bits and E and Q are in physical Joules. This has been experimentally confirmed.[4]
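The Landauer bound is a one-line computation. A minimal Python sketch, using the exact SI value of the Boltzmann constant (the function name is ours):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in SI since 2019)

def landauer_heat(bits, temperature):
    """Minimum heat in joules for irreversibly erasing `bits` bits at the
    given temperature in kelvin: Q = k_B * T * ln(2) * bits."""
    return K_B * temperature * math.log(2) * bits

print(landauer_heat(1, 300))  # ~2.87e-21 J to erase one bit at 300 K
```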
Temperature is a measure of the average kinetic energy per particle in an ideal gas (kelvins = 2/3 × joules/kB), so the J/K units of kB are fundamentally dimensionless (joule/joule). kB is the conversion factor from energy in units of 3/2 kelvins to joules for an ideal gas. If kinetic energy measurements per particle of an ideal gas were expressed as joules instead of kelvins, kB in the above equations would be replaced by 3/2. This shows that S is a true statistical measure of microstates that has no fundamental physical unit other than the units of information, in this case nats, which is just a statement of which logarithm base was chosen by convention.
Information is physical
Szilard's engine
A physical thought experiment demonstrating how just the possession of information might in principle have thermodynamic consequences was established in 1929 by Leó Szilárd, in a refinement of the famous Maxwell's demon scenario.[5]
Consider Maxwell's set-up, but with only a single gas particle in a box. If the supernatural demon knows which half of the box the particle is in (equivalent to a single bit of information), it can close a shutter between the two halves of the box, slide a piston unopposed into the empty half, and then extract $k_{\text{B}}T\ln 2$ joules of useful work once the shutter is opened again and the particle is left to expand isothermally back to its original equilibrium volume. In just the right circumstances, therefore, the possession of a single bit of Shannon information (a single bit of negentropy in Brillouin's term) really does correspond to a reduction in the entropy of the physical system. The global entropy is not decreased, but conversion of information to free energy is possible.
The principle has actually been demonstrated using a phase-contrast microscope equipped with a high-speed camera connected to a computer, acting as the demon.[6] In this experiment, information-to-energy conversion is performed on a Brownian particle by means of feedback control; that is, by synchronizing the work given to the particle with the information obtained on its position. Computing energy balances for different feedback protocols has confirmed that the Jarzynski equality requires a generalization that accounts for the amount of information involved in the feedback.
Landauer's principle
In fact one can generalise: any information that has a physical representation must somehow be embedded in the statistical mechanical degrees of freedom of a physical system.
Thus, Rolf Landauer argued in 1961, if one were to imagine starting with those degrees of freedom in a thermalised state, there would be a real reduction in thermodynamic entropy if they were then re-set to a known state. This can only be achieved under information-preserving microscopically deterministic dynamics if the uncertainty is somehow dumped somewhere else – i.e. if the entropy of the environment (or the non information-bearing degrees of freedom) is increased by at least an equivalent amount, as required by the Second Law, by gaining an appropriate quantity of heat: specifically kT ln 2 of heat for every 1 bit of randomness erased.
On the other hand, Landauer argued, there is no thermodynamic objection to a logically reversible operation potentially being achieved in a physically reversible way in the system. It is only logically irreversible operations – for example, the erasing of a bit to a known state, or the merging of two computation paths – which must be accompanied by a corresponding entropy increase. When information is physical, all processing of its representations, i.e. generation, encoding, transmission, decoding and interpretation, are natural processes where entropy increases by consumption of free energy.[7]
Applied to the Maxwell's demon/Szilard engine scenario, this suggests that it might be possible to "read" the state of the particle into a computing apparatus with no entropy cost; but only if the apparatus has already been SET into a known state, rather than being in a thermalised state of uncertainty. To SET (or RESET) the apparatus into this state will cost all the entropy that can be saved by knowing the state of Szilard's particle.
Negentropy
Shannon entropy has been related by physicist Léon Brillouin to a concept sometimes called negentropy. In 1953, Brillouin derived a general equation[8] stating that changing an information bit value requires at least kT ln(2) of energy. This is the same energy as the work Leo Szilard's engine produces in the idealistic case, which in turn equals the quantity found by Landauer. In his book,[9] he further explored this problem, concluding that any cause of a bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount, kT ln(2), of energy. Consequently, acquiring information about a system's microstates is associated with an entropy production, while erasure yields entropy production only when the bit value is changing. Setting up a bit of information in a sub-system originally in thermal equilibrium results in a local entropy reduction. However, there is no violation of the second law of thermodynamics, according to Brillouin, since a reduction in any local system's thermodynamic entropy results in an increase in thermodynamic entropy elsewhere. In this way, Brillouin clarified the meaning of negentropy, which had been considered controversial because its earlier understanding could yield a Carnot efficiency higher than one.

Additionally, the relationship between energy and information formulated by Brillouin has been proposed as a connection between the number of bits that the brain processes and the energy it consumes: Collell and Fauquet[10] argued that De Castro[11] analytically found the Landauer limit as the thermodynamic lower bound for brain computations. However, even though evolution is supposed to have "selected" the most energetically efficient processes, the physical lower bounds are not realistic quantities in the brain: firstly, because the minimum processing unit considered in physics is the atom/molecule, which is distant from the actual way that the brain operates; and, secondly, because neural networks incorporate important redundancy and noise factors that greatly reduce their efficiency.[12]

Laughlin et al.[13] were the first to provide explicit quantities for the energetic cost of processing sensory information. Their findings in blowflies revealed that for visual sensory data, the cost of transmitting one bit of information is around 5 × 10⁻¹⁴ joules, or equivalently 10⁴ ATP molecules. Thus, neural processing efficiency is still far from Landauer's limit of kT ln(2) J, but as a curious fact, it is still much more efficient than modern computers.
In 2009, Mahulikar & Herwig redefined thermodynamic negentropy as the specific entropy deficit of the dynamically ordered sub-system relative to its surroundings.[14] This definition enabled the formulation of the Negentropy Principle, which is mathematically shown to follow from the 2nd Law of Thermodynamics, during order existence.
Quantum theory
Hirschman showed,[15] cf. Hirschman uncertainty, that Heisenberg's uncertainty principle can be expressed as a particular lower bound on the sum of the classical distribution entropies of the quantum observable probability distributions of a quantum mechanical state, the square of the wave-function, in coordinate, and also momentum space, when expressed in Planck units. The resulting inequalities provide a tighter bound on the uncertainty relations of Heisenberg.
Assigning a "joint entropy" of this kind is meaningful even though positions and momenta are quantum conjugate variables and are therefore not jointly observable; mathematically, the two distributions have to be treated as a joint distribution. Note that this joint entropy is not equivalent to the Von Neumann entropy, −Tr ρ lnρ = −⟨lnρ⟩. Hirschman's entropy is said to account for the full information content of a mixture of quantum states.[16]
(Dissatisfaction with the Von Neumann entropy from quantum information points of view has been expressed by Stotland, Pomeransky, Bachmat and Cohen, who have introduced a yet different definition of entropy that reflects the inherent uncertainty of quantum mechanical states. This definition allows distinction between the minimum uncertainty entropy of pure states, and the excess statistical entropy of mixtures.[17])
See also
• Thermodynamic entropy
• Information entropy
• Thermodynamics
• Statistical mechanics
• Information theory
• Quantum entanglement
• Quantum decoherence
• Fluctuation theorem
• Black hole entropy
• Black hole information paradox
• Entropy (information theory)
• Entropy (statistical thermodynamics)
• Entropy (order and disorder)
• Orders of magnitude (entropy)
References
1. Schneider, T.D, Information theory primer with an appendix on logarithms, National Cancer Institute, 14 April 2007.
2. Gao, Xiang; Gallicchio, Emilio; Roitberg, Adrian (2019). "The generalized Boltzmann distribution is the only distribution in which the Gibbs-Shannon entropy equals the thermodynamic entropy". The Journal of Chemical Physics. 151 (3): 034113. arXiv:1903.02121. Bibcode:2019JChPh.151c4113G. doi:10.1063/1.5111333. PMID 31325924. S2CID 118981017.
3. Gao, Xiang (March 2022). "The Mathematics of the Ensemble Theory". Results in Physics. 34: 105230. Bibcode:2022ResPh..3405230G. doi:10.1016/j.rinp.2022.105230. S2CID 221978379.
4. Antoine Bérut; Artak Arakelyan; Artyom Petrosyan; Sergio Ciliberto; Raoul Dillenschneider; Eric Lutz (8 March 2012), "Experimental verification of Landauer's principle linking information and thermodynamics" (PDF), Nature, 483 (7388): 187–190, Bibcode:2012Natur.483..187B, doi:10.1038/nature10872, PMID 22398556, S2CID 9415026
5. Szilard, Leo (1929). "Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen". Zeitschrift für Physik (in German). 53 (11–12): 840–856. Bibcode:1929ZPhy...53..840S. doi:10.1007/BF01341281. ISSN 0044-3328. S2CID 122038206. Available on-line in English at Aurellen.org.
6. Shoichi Toyabe; Takahiro Sagawa; Masahito Ueda; Eiro Muneyuki; Masaki Sano (2010-09-29). "Information heat engine: converting information to energy by feedback control". Nature Physics. 6 (12): 988–992. arXiv:1009.5287. Bibcode:2010NatPh...6..988T. doi:10.1038/nphys1821. S2CID 118444713. We demonstrated that free energy is obtained by a feedback control using the information about the system; information is converted to free energy, as the first realization of Szilard-type Maxwell's demon.
7. Karnani, M.; Pääkkönen, K.; Annila, A. (2009). "The physical character of information". Proc. R. Soc. A. 465 (2107): 2155–75. Bibcode:2009RSPSA.465.2155K. doi:10.1098/rspa.2009.0063.
8. Brillouin, Leon (1953). "The negentropy principle of information". Journal of Applied Physics. 24 (9): 1152–1163. Bibcode:1953JAP....24.1152B. doi:10.1063/1.1721463.
9. Leon Brillouin, Science and Information theory, Dover, 1956
10. Collell, G; Fauquet, J. (June 2015). "Brain activity and cognition: a connection from thermodynamics and information theory". Frontiers in Psychology. 6 (4): 818. doi:10.3389/fpsyg.2015.00818. PMC 4468356. PMID 26136709.
11. De Castro, A. (November 2013). "The Thermodynamic Cost of Fast Thought". Minds and Machines. 23 (4): 473–487. arXiv:1201.5841. doi:10.1007/s11023-013-9302-x. S2CID 11180644.
12. Narayanan, N. S. at al. (2005). "Redundancy and synergy of neuronal ensembles in motor cortex". J. Neurosci. 25 (17): 4207–4216. doi:10.1523/JNEUROSCI.4697-04.2005. PMC 6725112. PMID 15858046.
13. Laughlin, S. B at al. (November 2013). "The metabolic cost of neural information". Nat. Neurosci. 1 (1): 36–41. doi:10.1038/236. PMID 10195106. S2CID 204995437.
14. Mahulikar, S.P.; Herwig, H. (August 2009). "Exact thermodynamic principles for dynamic order existence and evolution in chaos". Chaos, Solitons & Fractals. 41 (4): 1939–48. Bibcode:2009CSF....41.1939M. doi:10.1016/j.chaos.2008.07.051.
15. Hirschman, I.I. Jr. (January 1957). "A note on entropy". American Journal of Mathematics. 79 (1): 152–6. doi:10.2307/2372390. JSTOR 2372390.
16. Zachos, C. K. (2007). "A classical bound on quantum entropy". Journal of Physics A: Mathematical and Theoretical. 40 (21): F407–F412. arXiv:hep-th/0609148. Bibcode:2007JPhA...40..407Z. doi:10.1088/1751-8113/40/21/F02. S2CID 1619604.
17. Alexander Stotland; Pomeransky; Eitan Bachmat; Doron Cohen (2004). "The information entropy of quantum mechanical states". Europhysics Letters. 67 (5): 700–6. arXiv:quant-ph/0401021. Bibcode:2004EL.....67..700S. CiteSeerX 10.1.1.252.8715. doi:10.1209/epl/i2004-10110-1. S2CID 51730529.
Further reading
• Bennett, C.H. (1973). "Logical reversibility of computation". IBM J. Res. Dev. 17 (6): 525–532. doi:10.1147/rd.176.0525.
• Brillouin, Léon (2004), Science And Information Theory (second ed.), Dover, ISBN 978-0-486-43918-1. [Republication of 1962 original.]
• Frank, Michael P. (May–June 2002). "Physical Limits of Computing". Computing in Science and Engineering. 4 (3): 16–25. Bibcode:2002CSE.....4c..16F. CiteSeerX 10.1.1.429.1618. doi:10.1109/5992.998637. OSTI 1373456. S2CID 499628.
• Greven, Andreas; Keller, Gerhard; Warnecke, Gerald, eds. (2003). Entropy. Princeton University Press. ISBN 978-0-691-11338-8. (A highly technical collection of writings giving an overview of the concept of entropy as it appears in various disciplines.)
• Kalinin, M.I.; Kononogov, S.A. (2005), "Boltzmann's constant, the energy meaning of temperature, and thermodynamic irreversibility", Measurement Techniques, 48 (7): 632–636, doi:10.1007/s11018-005-0195-9, S2CID 118726162.
• Koutsoyiannis, D. (2011), "Hurst–Kolmogorov dynamics as a result of extremal entropy production", Physica A, 390 (8): 1424–1432, Bibcode:2011PhyA..390.1424K, doi:10.1016/j.physa.2010.12.035.
• Landauer, R. (1993). "Information is Physical". Proc. Workshop on Physics and Computation PhysComp'92. Los Alamitos: IEEE Comp. Sci.Press. pp. 1–4. doi:10.1109/PHYCMP.1992.615478. ISBN 978-0-8186-3420-8. S2CID 60640035.
• Landauer, R. (1961). "Irreversibility and Heat Generation in the Computing Process". IBM J. Res. Dev. 5 (3): 183–191. doi:10.1147/rd.53.0183.
• Leff, H.S.; Rex, A.F., eds. (1990). Maxwell's Demon: Entropy, Information, Computing. Princeton NJ: Princeton University Press. ISBN 978-0-691-08727-6.
• Middleton, D. (1960). An Introduction to Statistical Communication Theory. McGraw-Hill.
• Shannon, Claude E. (July–October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. hdl:10338.dmlcz/101429. (as PDF)
External links
• Information Processing and Thermodynamic Entropy Stanford Encyclopedia of Philosophy.
• An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science — a wikibook on the interpretation of the concept of entropy.
| Wikipedia |
Szilassi polyhedron
In geometry, the Szilassi polyhedron is a nonconvex polyhedron, topologically a torus, with seven hexagonal faces.
Szilassi polyhedron
Type: Toroidal polyhedron
Faces: 7 hexagons
Edges: 21
Vertices: 14
Euler characteristic: 0 (genus 1)
Vertex configuration: 6.6.6
Symmetry group: C1, [ ]+, (11)
Dual polyhedron: Császár polyhedron
Properties: Non-convex
Coloring and symmetry
The 14 vertices and 21 edges of the Szilassi polyhedron form an embedding of the Heawood graph onto the surface of a torus.[1] Each face of this polyhedron shares an edge with each other face. As a result, colouring the faces so that adjacent faces differ requires seven colours. This example shows that, on surfaces topologically equivalent to a torus, some subdivisions require seven colours, providing the lower bound for the seven colour theorem. The other half of the theorem states that all toroidal subdivisions can be coloured with seven or fewer colours.
The Szilassi polyhedron has an axis of 180-degree symmetry. This symmetry swaps three pairs of congruent faces, leaving one unpaired hexagon that has the same rotational symmetry as the polyhedron.
Complete face adjacency
The tetrahedron and the Szilassi polyhedron are the only two known polyhedra in which each face shares an edge with each other face.
If a polyhedron with f faces is embedded onto a surface with h holes, in such a way that each pair of faces shares exactly one edge and three faces meet at every vertex, then it has f(f − 1)/2 edges and f(f − 1)/3 vertices, and substituting these counts into the Euler characteristic V − E + F = 2 − 2h gives
$h={\frac {(f-4)(f-3)}{12}}.$
This equation is satisfied for the tetrahedron with h = 0 and f = 4, and for the Szilassi polyhedron with h = 1 and f = 7.
The next possible solution, h = 6 and f = 12, would correspond to a polyhedron with 44 vertices and 66 edges. However, it is not known whether such a polyhedron can be realized geometrically without self-crossings (rather than as an abstract polytope). More generally, this equation can be satisfied precisely when f is congruent to 0, 3, 4, or 7 modulo 12.[2][3]
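The integrality of h can be checked directly. The following Python sketch (an illustration added here, not drawn from the cited sources) tabulates h = (f − 4)(f − 3)/12 and recovers both the first solutions and the residues of f modulo 12 for which h is an integer:

```python
# Tabulate h = (f - 4)(f - 3) / 12 and keep the integer solutions.
solutions = []
for f in range(4, 40):
    num = (f - 4) * (f - 3)
    if num % 12 == 0:
        solutions.append((f, num // 12))

print(solutions[:4])  # [(4, 0), (7, 1), (12, 6), (15, 11)]
# Residues modulo 12 of the admissible numbers of faces:
print(sorted({f % 12 for f, h in solutions}))  # [0, 3, 4, 7]
```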
Unsolved problem in mathematics:
Is there a non-convex polyhedron without self-intersections with more than seven faces, all of which share an edge with each other?
History
The Szilassi polyhedron is named after Hungarian mathematician Lajos Szilassi, who discovered it in 1977.[4][1] The dual to the Szilassi polyhedron, the Császár polyhedron, was discovered earlier by Ákos Császár (1949); it has seven vertices, 21 edges connecting every pair of vertices, and 14 triangular faces. Like the Szilassi polyhedron, the Császár polyhedron has the topology of a torus.[5]
References
1. Szilassi, Lajos (1986), "Regular toroids" (PDF), Structural Topology, 13: 69–80
2. Jungerman, M.; Ringel, Gerhard (1980), "Minimal triangulations on orientable surfaces", Acta Mathematica, 145 (1–2): 121–154, doi:10.1007/BF02414187
3. Grünbaum, Branko; Szilassi, Lajos (2009), "Geometric realizations of special toroidal complexes", Contributions to Discrete Mathematics, 4 (1): 21–39, doi:10.11575/cdm.v4i1.61986, MR 2541986
4. Gardner, Martin (1978), "In which a mathematical aesthetic is applied to modern minimal art", Mathematical Games, Scientific American, 239 (5): 22–32, doi:10.1038/scientificamerican1178-22, JSTOR 24955839
5. Császár, Ákos (1949), "A polyhedron without diagonals", Acta Sci. Math. Szeged, 13: 140–142
External links
• Ace, Tom, The Szilassi polyhedron.
• Peterson, Ivars (2007), "A polyhedron with a hole", MathTrek, Mathematical Association of America.
• Weisstein, Eric W., "Szilassi Polyhedron", MathWorld
• Szilassi Polyhedron – Papercraft model at CutOutFoldUp.com
| Wikipedia |
Szolem Mandelbrojt
Szolem Mandelbrojt (10 January 1899 – 23 September 1983) was a Polish-French mathematician who specialized in mathematical analysis. He was a professor at the Collège de France from 1938 to 1972, where he held the Chair of Analytical Mechanics and Celestial Mechanics.
Szolem Mandelbrojt
Born: 10 January 1899, Warsaw, Congress Poland
Died: 23 September 1983 (aged 84), Paris, France
Nationality: Polish and French
Alma mater: University of Paris; University of Kharkiv
Scientific career
Fields: Mathematics
Institutions: Collège de France; Rice University; University of Clermont-Ferrand; University of Lille
Doctoral advisor: Jacques Hadamard
Doctoral students: Paul Malliavin, Shmuel Agmon, John Gergen, Jean-Pierre Kahane, Yitzhak Katznelson, George Piranian, Hans Jakob Reiter
Notes
He was the uncle of Benoit Mandelbrot.
Biography
Szolem Mandelbrojt was born on 10 January 1899 in Warsaw, Poland into a Jewish family of Lithuanian descent. He was initially educated in Warsaw, then in 1919 he moved to Kharkov, Ukraine (then USSR) and spent a year as a student of the Russian mathematician Sergei Bernstein. A year later, he emigrated to France and settled in Paris. In subsequent years, he attended the seminars of Jacques Hadamard, Henri Lebesgue, Émile Picard, and others. In 1923, he received a doctorate from the University of Paris on the analytic continuation of the Taylor series. Hadamard was his Ph.D. advisor.
In 1924 Mandelbrojt was awarded a Rockefeller Fellowship in the United States. In May 1926 he married Gladys Manuelle Grunwald (born 28 June 1904 in Paris). From 1926 to 1927, he spent a year as an assistant professor at the Rice Institute (now Rice University) in Houston, Texas.
In 1928 he returned to France, having received French citizenship in 1927, and was appointed an assistant professor at the University of Lille. The following year he became a full professor at the University of Clermont-Ferrand. In December 1934 Mandelbrojt co-founded the Nicolas Bourbaki group of mathematicians, of which he was a member until World War II. He succeeded Hadamard at the Collège de France in 1938 and took up the Chair of Analytical Mechanics and Celestial Mechanics.[1]
Mandelbrojt helped several members of his family emigrate from Poland to France in 1936. One of them, his nephew Benoit Mandelbrot, was to discover the Mandelbrot set and coin the word fractal in the 1970s.
In 1939 he fought for France when the country was invaded by the Nazis. In 1940, along with many scientists helped by Louis Rapkine and the Rockefeller Foundation, Mandelbrojt relocated to the United States, taking up a position at the Rice Institute. In 1944 he joined the scientific committee of the Free French Forces in London, England.
In 1945 Mandelbrojt moved back to France and resumed his professional activities at Collège de France, where he remained until his retirement in 1972. In his retirement year he was elected a member of the French Academy of Sciences.
Szolem Mandelbrojt died at the age of 84 in Paris, France, on 23 September 1983.
Research
Even though Mandelbrojt was an early member of the Bourbaki group, and he did take part in a number of Bourbaki gatherings until the breakout of the war, his main research interests were actually quite remote from abstract algebra. As evidenced by his publications (see next), he focused on complex analysis and harmonic analysis, with an emphasis on Dirichlet series, lacunary series, and entire functions.
Rather than a Bourbakist, he is perhaps more accurately described as a follower of G. H. Hardy. Together with Norbert Wiener and Torsten Carleman, he can be viewed as a moderate modernizer of classical Fourier analysis.
Shmuel Agmon, Jean-Pierre Kahane, Yitzhak Katznelson, Paul Malliavin and George Piranian are among his students.
Selected works
Books
• Hadamard, Jacques; Mandelbrojt, Szolem (1926). La série de Taylor et son prolongement analytique (2nd ed.). Gauthier-Villars.[2]
• Mandelbrojt, Szolem (1951). "General theorems of closure". Rice Institute Pamphlet (Monograph published as a special issue). XIV (4): 225–352.
• Mandelbrojt, Szolem (1958). "Composition Theorems". Rice Institute Pamphlet (Monograph published as a special issue). 45 (3).
• Mandelbrojt, Szolem (1967). Fonctions entières et transformées de Fourier. Applications. Mathematical Society of Japan.
• Mandelbrojt, Szolem (1972) [1969]. Dirichlet series. Principles and methods [Séries de Dirichlet. Principes et méthodes]. Reidel.
Lecture notes
• Mandelbrojt, Szolem (1927). "Modern researches on the singularities of functions defined by Taylor's series; lectures delivered at the Rice Institute during the academic year 1926-27". Rice Institute Pamphlet (12 articles). 14 (4).[3]
• Mandelbrojt, Szolem (1935). Séries de Fourier et classes quasi-analytiques de fonctions. Leçons professées à l'Institut Henri Poincaré et à la Faculté des sciences de Clermont-Ferrand. Gauthier-Villars.
• Mandelbrojt, Szolem (1942). "Analytic Functions and Classes of Infinitely Differentiable Functions. A series of lectures delivered at the Rice Institute during the academic year 1940-41". Rice Institute Pamphlet (17 articles). 29 (1).
• Mandelbrojt, Szolem (1944). "Dirichlet series. Lectures delivered at the Rice Institute during the academic year 1942-43". Rice Institute Pamphlet (10 articles). 31 (4).
• Mandelbrojt, Szolem (1952). Séries adhérentes. Régularisation des suites. Applications. Leçons professées au Collège de France et au Rice Institute. Gauthier-Villars.[4]
Articles
• Mandelbrojt, M. S. (1932). Les singularités des fonctions analytiques représentées par une série de Taylor (PDF). Mémorial des Sciences Mathématiques. Vol. 54. Gauthier-Villars.
• Mandelbrojt, Szolem (1936). "Séries lacunaires". Actualités Scientifiques et Industrielles. Hermann. 305 (Exposés sur la théorie des fonctions, no. 2).[5]
• Mandelbrojt, Szolem (1938). "La régularisation des fonctions". Actualités Scientifiques et Industrielles. Hermann. 733 (Exposés sur la théorie des fonctions, no. 13).
• Mandelbrojt, Szolem; Ulrich, Floyd Edward (1942). "On a generalization of the problem of quasi-analyticity". Trans. Amer. Math. Soc. 52 (2): 265–282. doi:10.1090/S0002-9947-1942-0007015-4. MR 0007015.
• Mandelbrojt, Szolem (1944). "Quasi-analyticity and analytic continuation—a general principle". Trans. Amer. Math. Soc. 55: 96–131. doi:10.1090/S0002-9947-1944-0009635-1. MR 0009635.
• Mandelbrojt, Szolem; MacLane, Gerald R. (1947). "On functions holomorphic in a strip region, and an extension of Watson's problem". Trans. Amer. Math. Soc. 61 (3): 454–467. doi:10.1090/S0002-9947-1947-0020142-5. MR 0020142.
• Mandelbrojt, Szolem (1948). "Analytic continuation and infinitely differentiable functions". Bull. Amer. Math. Soc. 54 (3): 239–248. doi:10.1090/S0002-9904-1948-08963-7. MR 0023877.
• Chandrasekharan, K.; Mandelbrojt, Szolem (1959). "On solutions of Riemann's functional equation". Bull. Amer. Math. Soc. 65 (6): 358–362. doi:10.1090/S0002-9904-1959-10372-4. MR 0111727.
• Mandelbrojt, Szolem (1966). "Les taubériens généraux de Norbert Wiener". Bull. Amer. Math. Soc. 72 (1): 48–51. doi:10.1090/S0002-9904-1966-11461-1. MR 0184008.
• Mandelbrojt, Szolem (1967). "Exponentielle associée à un ensemble; transformées de Fourier généralisées". Annales de l'Institut Fourier. 17 (1): 325–351. doi:10.5802/aif.259. MR 0257360.
Thesis
• Mandelbrojt, Szolem (1923). Sur les séries de Taylor qui présentent des lacunes (Ph.D. thesis). Paris-Sorbonne University.
Notes
1. Professeurs disparus at Collège de France.
2. Smail, L. L. (1928). " 'La Série de Taylor et son Prolongement Analytique' by J. Hadamard and S. Mandelbrojt". Bull. Amer. Math. Soc. (Review). 34 (1): 119–120. doi:10.1090/S0002-9904-1928-04539-1.
3. Widder, D. V. (1929). " 'Modern Researches on the Singularities of Functions Defined by Taylor's Series' by S. Mandelbrojt". Bull. Amer. Math. Soc. (Review). 35 (3): 409–410. doi:10.1090/S0002-9904-1929-04762-1.
4. Fuchs, W. H. J. (1953). " 'Séries adhérentes. Régularisation des suites. Applications' by S. Mandelbrojt". Bull. Amer. Math. Soc. (Review). 59 (4): 413–414. doi:10.1090/S0002-9904-1953-09743-9.
5. Moore, C. N. (1938). " 'La Réduction des Séries Alternées Divergentes et ses Applications' by J. Ser and 'Séries Lacunaires' by S. Mandelbrojt". Bull. Amer. Math. Soc. (Review). 44 (5): 313. doi:10.1090/S0002-9904-1938-06721-3.
References
• Comité national français de mathématiciens (1981). Szolem Mandelbrojt. Selecta. Gauthier-Villars.
External links
• O'Connor, John J.; Robertson, Edmund F., "Szolem Mandelbrojt", MacTutor History of Mathematics Archive, University of St Andrews
• Szolem Mandelbrojt at the Mathematics Genealogy Project.
• Szolem Mandelbrojt at Collège de France.
• Szolem Mandelbrojt at Hathi Trust Digital Library.
Authority control
International
• FAST
• ISNI
• VIAF
National
• France
• BnF data
• Germany
• Israel
• United States
• Sweden
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• SNAC
• IdRef
| Wikipedia |
Szpilrajn extension theorem
In order theory, the Szpilrajn extension theorem (also called the order-extension principle), proved by Edward Szpilrajn in 1930,[1] states that every partial order is contained in a total order. Intuitively, the theorem says that any method of comparing elements that leaves some pairs incomparable can be extended in such a way that every pair becomes comparable. The theorem is one of many examples of the use of the axiom of choice in the form of Zorn's lemma to find a maximal set with certain properties.
Definitions and statement
A binary relation $R$ on a set $X$ is formally defined as a set of ordered pairs $(x,y)$ of elements of $X,$ and $(x,y)\in R$ is often abbreviated as $xRy.$
A relation is reflexive if $xRx$ holds for every element $x\in X;$ it is transitive if $xRy{\text{ and }}yRz$ imply $xRz$ for all $x,y,z\in X;$ it is antisymmetric if $xRy{\text{ and }}yRx$ imply $x=y$ for all $x,y\in X;$ and it is a connex relation if $xRy{\text{ or }}yRx$ holds for all $x,y\in X.$ A partial order is, by definition, a reflexive, transitive and antisymmetric relation. A total order is a partial order that is connex.
A relation $R$ is contained in another relation $S$ when all ordered pairs in $R$ also appear in $S;$ that is,$xRy$ implies $xSy$ for all $x,y\in X.$ The extension theorem states that every relation $R$ that is reflexive, transitive and antisymmetric (that is, a partial order) is contained in another relation $S$ which is reflexive, transitive, antisymmetric and connex (that is, a total order).
Proof
The theorem is proved in two steps. First, if a partial order does not compare $x$ and $y,$ it can be extended by adding the pair $(x,y)$ and then taking the transitive closure. Second, since this operation produces an order that strictly contains the original one and can be applied to any pair of incomparable elements, a maximality argument shows that there exists a relation in which all pairs of elements have been made comparable.
The first step is proved as a preliminary lemma, in which a partial order where a pair of elements $x$ and $y$ are incomparable is changed to make them comparable. This is done by first adding the pair $xRy$ to the relation, which may result in a non-transitive relation, and then restoring transitivity by adding all pairs $qRp$ such that $qRx{\text{ and }}yRp.$ This is done on a single pair of incomparable elements $x$ and $y,$ and produces a relation that is still reflexive, antisymmetric and transitive and that strictly contains the original one.
Next it is shown that the poset of partial orders containing $R,$ ordered by inclusion, has a maximal element. The existence of such a maximal element is proved by applying Zorn's lemma to this poset. A chain in this poset is a set of relations containing $R$ such that given any two of these relations, one is contained in the other.
To apply Zorn's lemma, it must be shown that every chain has an upper bound in the poset. Let ${\mathcal {C}}$ be such a chain; it remains to show that the union of its elements, $\bigcup {\mathcal {C}},$ is an upper bound for ${\mathcal {C}}$ which is in the poset. $\bigcup {\mathcal {C}}$ contains the original relation $R$ since every element of ${\mathcal {C}}$ is a partial order containing $R.$ Next, it is shown that $\bigcup {\mathcal {C}}$ is a transitive relation. Suppose that $(x,y)$ and $(y,z)$ are in $\bigcup {\mathcal {C}},$ so that there exist $S,T\in {\mathcal {C}}$ such that $(x,y)\in S{\text{ and }}(y,z)\in T.$ Since ${\mathcal {C}}$ is a chain, either $S\subseteq T{\text{ or }}T\subseteq S.$ Suppose $S\subseteq T;$ the argument for when $T\subseteq S$ is similar. Then $(x,y)\in T.$ Since $T$ is a partial order and hence transitive, $(x,z)$ is in $T$ and therefore also in $\bigcup {\mathcal {C}}.$ Similarly, it can be shown that $\bigcup {\mathcal {C}}$ is antisymmetric.
Therefore by Zorn's lemma the set of partial orders containing $R$ has a maximal element $Q,$ and it remains only to show that $Q$ is total. Indeed, if $Q$ had a pair of incomparable elements, then the process of the first step could be applied to it, leading to a partial order that contains $R$ and strictly contains $Q,$ contradicting the maximality of $Q.$ $Q$ is therefore a total order containing $R,$ completing the proof.
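For a finite set the proof is effective and needs no appeal to Zorn's lemma: one repeatedly picks an incomparable pair, adds it, and takes the transitive closure until no incomparable pairs remain. A minimal Python sketch of this procedure (the function names are chosen here for illustration):

```python
def transitive_closure(rel):
    """Close a set of ordered pairs under transitivity."""
    rel = set(rel)
    while True:
        new = {(x, w) for (x, y) in rel for (z, w) in rel if y == z}
        if new <= rel:
            return rel
        rel |= new

def szpilrajn(elements, partial):
    """Extend a finite partial order (a set of pairs, assumed reflexive,
    transitive and antisymmetric) to a total order containing it."""
    order = set(partial)
    while True:
        incomparable = [(x, y) for x in elements for y in elements
                        if (x, y) not in order and (y, x) not in order]
        if not incomparable:
            return order
        # Add one incomparable pair and restore transitivity; by the
        # lemma in the first step, antisymmetry is preserved.
        order = transitive_closure(order | {incomparable[0]})

# Divisibility on {1, 2, 3, 6}, extended to a total order:
elems = [1, 2, 3, 6]
div = {(a, b) for a in elems for b in elems if b % a == 0}
total = szpilrajn(elems, div)
print(all((a, b) in total or (b, a) in total
          for a in elems for b in elems))  # True
```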
Other extension theorems
Arrow[2] stated that every preorder (reflexive and transitive relation) can be extended to a total preorder (transitive and connex relation). This claim was later proved by Hansson (Lemma 3).[3][4]
Suzumura proved that a binary relation can be extended to a total preorder if and only if it is Suzumura-consistent, which means that there is no cycle of elements such that $xRy$ for every pair of consecutive elements $(x,y),$ and there is some pair of consecutive elements $(x,y)$ in the cycle for which $yRx$ does not hold.[4]
See also
• Linear extension – Mathematical ordering of a partial order
References
1. Szpilrajn, Edward (1930), "Sur l'extension de l'ordre partiel" (PDF), Fundamenta Mathematicae (in French), 16: 386–389, doi:10.4064/fm-16-1-386-389.
2. Arrow, Kenneth J. (2012-06-26). Social Choice and Individual Values: Third Edition. Yale University Press. ISBN 978-0-300-18698-7.
3. Hansson, Bengt (1968). "Choice Structures and Preference Relations". Synthese. 18 (4): 443–458. doi:10.1007/BF00484979. ISSN 0039-7857. JSTOR 20114617. S2CID 46966243.
4. Cato, Susumu (2012-05-01). "SZPILRAJN, ARROW AND SUZUMURA: CONCISE PROOFS OF EXTENSION THEOREMS AND AN EXTENSION: Extension Theorems". Metroeconomica. 63 (2): 235–249. doi:10.1111/j.1467-999X.2011.04130.x. S2CID 153381284.
Order theory
• Topics
• Glossary
• Category
Key concepts
• Binary relation
• Boolean algebra
• Cyclic order
• Lattice
• Partial order
• Preorder
• Total order
• Weak ordering
Results
• Boolean prime ideal theorem
• Cantor–Bernstein theorem
• Cantor's isomorphism theorem
• Dilworth's theorem
• Dushnik–Miller theorem
• Hausdorff maximal principle
• Knaster–Tarski theorem
• Kruskal's tree theorem
• Laver's theorem
• Mirsky's theorem
• Szpilrajn extension theorem
• Zorn's lemma
Properties & Types (list)
• Antisymmetric
• Asymmetric
• Boolean algebra
• topics
• Completeness
• Connected
• Covering
• Dense
• Directed
• (Partial) Equivalence
• Foundational
• Heyting algebra
• Homogeneous
• Idempotent
• Lattice
• Bounded
• Complemented
• Complete
• Distributive
• Join and meet
• Reflexive
• Partial order
• Chain-complete
• Graded
• Eulerian
• Strict
• Prefix order
• Preorder
• Total
• Semilattice
• Semiorder
• Symmetric
• Total
• Tolerance
• Transitive
• Well-founded
• Well-quasi-ordering (Better)
• (Pre) Well-order
Constructions
• Composition
• Converse/Transpose
• Lexicographic order
• Linear extension
• Product order
• Reflexive closure
• Series-parallel partial order
• Star product
• Symmetric closure
• Transitive closure
Topology & Orders
• Alexandrov topology & Specialization preorder
• Ordered topological vector space
• Normal cone
• Order topology
• Order topology
• Topological vector lattice
• Banach
• Fréchet
• Locally convex
• Normed
Related
• Antichain
• Cofinal
• Cofinality
• Comparability
• Graph
• Duality
• Filter
• Hasse diagram
• Ideal
• Net
• Subnet
• Order morphism
• Embedding
• Isomorphism
• Order type
• Ordered field
• Ordered vector space
• Partially ordered
• Positive cone
• Riesz space
• Upper set
• Young's lattice
| Wikipedia |
Szpiro's conjecture
In number theory, Szpiro's conjecture relates the conductor and the discriminant of an elliptic curve. In a slightly modified form, it is equivalent to the well-known abc conjecture. It is named for Lucien Szpiro, who formulated it in the 1980s. Szpiro's conjecture and its equivalent forms have been described as "the most important unsolved problem in Diophantine analysis" by Dorian Goldfeld,[1] in part because of its large number of consequences in number theory, including Roth's theorem, the Mordell conjecture, the Fermat–Catalan conjecture, and Brocard's problem.[2][3][4][5]
Modified Szpiro conjecture
Field: Number theory
Conjectured by: Lucien Szpiro
Conjectured in: 1981
Equivalent to: abc conjecture
Consequences
• Beal conjecture
• Faltings's theorem
• Fermat's Last Theorem
• Fermat–Catalan conjecture
• Roth's theorem
• Tijdeman's theorem
Original statement
The conjecture states that, given ε > 0, there exists a constant C(ε) such that for any elliptic curve E defined over Q with minimal discriminant Δ and conductor f, we have
$\vert \Delta \vert \leq C(\varepsilon )\cdot f^{6+\varepsilon }.$
Modified Szpiro conjecture
The modified Szpiro conjecture states that, given ε > 0, there exists a constant C(ε) such that for any elliptic curve E defined over Q with invariants c4, c6 and conductor f (using notation from Tate's algorithm), we have
$\max\{\vert c_{4}\vert ^{3},\vert c_{6}\vert ^{2}\}\leq C(\varepsilon )\cdot f^{6+\varepsilon }.$
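As a numerical illustration (added here, not part of the conjecture's sources), the quantities involved can be computed for a specific curve from the standard formulas expressing c4, c6 and the discriminant in terms of the Weierstrass coefficients. The sketch below uses the curve y^2 + y = x^3 − x^2 − 10x − 20, commonly labelled 11a1, whose minimal discriminant is −11^5 and whose conductor is 11:

```python
def invariants(a1, a2, a3, a4, a6):
    """c4, c6 and discriminant of
    y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6,
    computed by the standard formulas."""
    b2 = a1 * a1 + 4 * a2
    b4 = 2 * a4 + a1 * a3
    b6 = a3 * a3 + 4 * a6
    c4 = b2 * b2 - 24 * b4
    c6 = -b2 ** 3 + 36 * b2 * b4 - 216 * b6
    delta = (c4 ** 3 - c6 ** 2) // 1728   # always an exact division
    return c4, c6, delta

# Curve 11a1: y^2 + y = x^3 - x^2 - 10x - 20, with conductor f = 11.
c4, c6, delta = invariants(0, -1, 1, -10, -20)
f = 11
print(c4, c6, delta)                # 496 20008 -161051  (= -11^5)
print(abs(delta), f ** 6)           # 161051 vs 1771561: |Δ| < f^6 here
print(max(abs(c4) ** 3, c6 ** 2))   # 400320064, left side of the modified form
```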
abc conjecture
The abc conjecture originated as the outcome of attempts by Joseph Oesterlé and David Masser to understand Szpiro's conjecture,[6] and was then shown to be equivalent to the modified Szpiro's conjecture.[7]
Claimed proofs
Main article: Abc conjecture § Claimed proofs
In August 2012, Shinichi Mochizuki claimed a proof of Szpiro's conjecture by developing a new theory called inter-universal Teichmüller theory (IUTT).[8] However, the papers have not been accepted by the mathematical community as providing a proof of the conjecture,[9][10][11] with Peter Scholze and Jakob Stix concluding in March 2018 that the gap was "so severe that … small modifications will not rescue the proof strategy".[12][13][14]
See also
• Arakelov theory
References
1. Goldfeld, Dorian (1996). "Beyond the last theorem". Math Horizons. 4 (September): 26–34. doi:10.1080/10724117.1996.11974985. JSTOR 25678079.
2. Bombieri, Enrico (1994). "Roth's theorem and the abc-conjecture". Preprint. ETH Zürich.
3. Elkies, N. D. (1991). "ABC implies Mordell". International Mathematics Research Notices. 1991 (7): 99–109. doi:10.1155/S1073792891000144.
4. Pomerance, Carl (2008). "Computational Number Theory". The Princeton Companion to Mathematics. Princeton University Press. pp. 361–362.
5. Dąbrowski, Andrzej (1996). "On the diophantine equation x! + A = y2". Nieuw Archief voor Wiskunde, IV. 14: 321–324. Zbl 0876.11015.
6. Fesenko, Ivan (2015), "Arithmetic deformation theory via arithmetic fundamental groups and nonarchimedean theta functions, notes on the work of Shinichi Mochizuki" (PDF), European Journal of Mathematics, 1 (3): 405–440, doi:10.1007/s40879-015-0066-0.
7. Oesterlé, Joseph (1988), "Nouvelles approches du "théorème" de Fermat", Astérisque, Séminaire Bourbaki exp 694 (161): 165–186, ISSN 0303-1179, MR 0992208
8. Ball, Peter (10 September 2012). "Proof claimed for deep connection between primes". Nature. doi:10.1038/nature.2012.11378. Retrieved 19 April 2020.
9. Revell, Timothy (September 7, 2017). "Baffling ABC maths proof now has impenetrable 300-page 'summary'". New Scientist.
10. Conrad, Brian (December 15, 2015). "Notes on the Oxford IUT workshop by Brian Conrad". Retrieved March 18, 2018.
11. Castelvecchi, Davide (8 October 2015). "The biggest mystery in mathematics: Shinichi Mochizuki and the impenetrable proof". Nature. 526 (7572): 178–181. Bibcode:2015Natur.526..178C. doi:10.1038/526178a. PMID 26450038.
12. Scholze, Peter; Stix, Jakob. "Why abc is still a conjecture" (PDF). Archived from the original on February 8, 2020. (Updated version of their May report.)
13. Klarreich, Erica (September 20, 2018). "Titans of Mathematics Clash Over Epic Proof of ABC Conjecture". Quanta Magazine.
14. "March 2018 Discussions on IUTeich". Retrieved October 2, 2018. Web-page by Mochizuki describing discussions and linking consequent publications and supplementary material
Bibliography
• Lang, S. (1997), Survey of Diophantine geometry, Berlin: Springer-Verlag, p. 51, ISBN 3-540-61223-8, Zbl 0869.11051
• Szpiro, L. (1981). "Propriétés numériques du faisceau dualisant relatif". Seminaire sur les pinceaux des courbes de genre au moins deux (PDF). pp. 44–78. Zbl 0517.14006.
• Szpiro, L. (1987), "Présentation de la théorie d'Arakelov", Contemp. Math., Contemporary Mathematics, 67: 279–293, doi:10.1090/conm/067/902599, ISBN 9780821850749, Zbl 0634.14012
| Wikipedia |
George Szpiro
George Geza Szpiro (born 18 February 1950 in Vienna)[1] is an Israeli–Swiss author, journalist, and mathematician. He has written articles and books on popular mathematics and related topics.
Life and career
Szpiro was born in Vienna in 1950 and moved to Zug, Switzerland, in 1961.[2] He obtained a master's degree in mathematics and physics from ETH Zurich, and an MBA from Stanford University in 1975.[2] Afterward, he worked as a management consultant at McKinsey & Company.[3][4] In 1984, he obtained a Ph.D. in mathematical economics from Hebrew University.[2][4]
Szpiro was an assistant professor at the Wharton School of the University of Pennsylvania, during 1984–1986. He was a lecturer in mathematical economics at Hebrew University, during 1986–1992. He also taught at the University of Zurich.[3][4] He has published research papers related to mathematics, finance, and statistics.[5]
Since 1986, Szpiro has worked as a journalist at Neue Zürcher Zeitung.[2] At NZZ, he has been the Israel correspondent and mathematics columnist.[2][6] For his mathematics columns, Szpiro was awarded the Prix Média by the Swiss Academy of Natural Sciences in 2003.[7][8] He was also awarded the Media Prize by the German Mathematical Society in 2006.[4] Besides writing for NZZ, he has also written non-research mathematics columns for journals such as Nature and Notices of the American Mathematical Society.[5][6]
Szpiro married in 1979. He and his wife, Fortuna, have three children.
Books
• Kepler's Conjecture: How Some of the Greatest Minds in History Helped Solve One of the Oldest Math Problems in the World (John Wiley & Sons, 2003)
• The Secret Life of Numbers: 50 Easy Pieces on How Mathematicians Work and Think (Joseph Henry Press, 2006)
• Poincaré's Prize: The Hundred-Year Quest to Solve One of Math's Greatest Puzzles (Dutton, 2007)
• Numbers Rule: The Vexing Mathematics of Democracy, from Plato to the Present (Princeton University Press, 2010)
• A Mathematical Medley: Fifty Easy Pieces on Mathematics (American Mathematical Society, 2010)
• Pricing the Future: Finance, Physics, and the 300-year Journey to the Black-Scholes Equation (Basic Books, 2011)
Notes
1. Who's Who in Finance and Industry 1998–1999, p. 781
2. George G. Szpiro - NZZ, archived from the original on 2015-04-12
3. "Deutsche Mathematiker-Vereinigung verleiht Medienpreise", Press Release, Technical University of Berlin (2006-11-06)
4. Medienpreise der DMV 2006, archived from the original on 2009-01-23
5. "George G. Szpiro - Publications"—list of Szpiro's publications at ResearchGate
6. Szpiro, G. (2007), "Interview with Stephen Smale" (PDF), Notices of the American Mathematical Society, 54: 995–997
7. "Preisträger Prix Média", Swiss Academy of Natural Sciences
8. Descartes Communication Prize, Directorate-General for Research and Innovation (European Commission) (2005)
External links
• George Szpiro at the Mathematics Genealogy Project
• George Szpiro, archived from the original on 2016-01-13 —website of George Szpiro
• George Szpiro on LinkedIn
• "The Truth, the Whole Truth, And Nothing but the Truth" —YouTube video of Szpiro discussing the difficulties a journalist faces when writing about mathematics for a general audience (the discussion was part of a seminar at the University of Edinburgh: dated 9 April 2015)
Authority control
International
• ISNI
• VIAF
National
• France
• BnF data
• Catalonia
• Germany
• Israel
• United States
• Sweden
• Czech Republic
• Korea
• Netherlands
• Portugal
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
| Wikipedia |
Szymanski's conjecture
In mathematics, Szymanski's conjecture, named after Ted H. Szymanski (1989), states that every permutation on the n-dimensional doubly directed hypercube graph can be routed with edge-disjoint paths. That is, if the permutation σ matches each vertex v to a vertex σ(v), then for each v there exists a path in the hypercube graph from v to σ(v) such that no two paths for two different vertices u and v use the same edge in the same direction.
Through computer experiments it has been verified that the conjecture is true for n ≤ 4 (Baudon, Fertin & Havel 2001). Although the conjecture remains open for n ≥ 5, in this case there exist permutations that require the use of paths that are not shortest paths in order to be routed (Lubiw 1990).
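For small n the conjecture can be checked by brute force. The following Python sketch (illustrative code; the function names are chosen here) models the doubly directed n-cube on the vertex set {0, …, 2^n − 1} and searches by backtracking for a system of pairwise edge-disjoint directed paths realizing a given permutation; for n = 2 it confirms the conjecture over all 24 permutations:

```python
from itertools import permutations

def simple_paths(n, s, t, banned, path=None):
    """Yield simple directed paths from s to t in the doubly directed
    n-cube, avoiding the directed edges in `banned`."""
    if path is None:
        path = [s]
    u = path[-1]
    if u == t:
        yield list(path)
        return
    for i in range(n):
        w = u ^ (1 << i)              # flip bit i: a hypercube neighbour
        if w in path or (u, w) in banned:
            continue
        path.append(w)
        yield from simple_paths(n, s, t, banned, path)
        path.pop()

def routable(n, sigma):
    """Backtracking search for pairwise edge-disjoint directed paths
    v -> sigma[v], one for each vertex v."""
    def place(v, used):
        if v == 2 ** n:
            return True
        for p in simple_paths(n, v, sigma[v], used):
            if place(v + 1, used | set(zip(p, p[1:]))):
                return True
        return False
    return place(0, frozenset())

# Exhaustive check for n = 2 (all 24 permutations of 4 vertices):
print(all(routable(2, sigma) for sigma in permutations(range(4))))  # True
```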
References
• Baudon, Olivier; Fertin, Guillaume; Havel, Ivan (2001), "Routing permutations and 2-1 routing requests in the hypercube", Discrete Applied Mathematics, 113 (1): 43–58, doi:10.1016/S0166-218X(00)00386-3.
• Lubiw, Anna (1990), "Counterexample to a conjecture of Szymanski on hypercube routing", Information Processing Letters, 35 (2): 57–61, doi:10.1016/0020-0190(90)90106-8.
• Szymanski, Ted H. (1989), "On the Permutation Capability of a Circuit-Switched Hypercube", Proc. Internat. Conf. on Parallel Processing, vol. 1, Silver Spring, MD: IEEE Computer Society Press, pp. 103–110.
| Wikipedia |
Szász–Mirakjan–Kantorovich operator
In functional analysis, a discipline within mathematics, the Szász–Mirakjan–Kantorovich operators are defined by
$[{\mathcal {T}}_{n}(f)](x)=ne^{-nx}\sum _{k=0}^{\infty }{{\frac {(nx)^{k}}{k!}}\int _{k/n}^{(k+1)/n}f(t)\,dt}$
where $x\in [0,\infty )\subset \mathbb {R} $ and $n\in \mathbb {N} $.[1]
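The operator can be evaluated numerically by truncating the series once the Poisson weights e^{−nx}(nx)^k/k! become negligible. A Python sketch (the truncation bound is an ad hoc choice made here):

```python
import math
from scipy.integrate import quad

def szasz_mirakjan_kantorovich(f, n, x, kmax=None):
    """Numerically evaluate [T_n(f)](x), truncating the series."""
    if kmax is None:
        kmax = int(n * x + 10 * math.sqrt(n * x + 1) + 20)
    total, weight = 0.0, math.exp(-n * x)   # weight = e^{-nx} (nx)^k / k!
    for k in range(kmax + 1):
        integral, _ = quad(f, k / n, (k + 1) / n)
        total += weight * integral
        weight *= (n * x) / (k + 1)         # next Poisson weight
    return n * total

# The operators reproduce f in the limit: for f(t) = t^2 one computes
# [T_n(f)](x) = x^2 + 2x/n + 1/(3n^2), so at n = 200, x = 1.5:
print(szasz_mirakjan_kantorovich(lambda t: t * t, 200, 1.5))  # ≈ 2.265008
```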
See also
• Szász–Mirakyan operator
Notes
1. Walczak, Zbigniew (2002). "On approximation by modified Szasz–Mirakyan operators". Glasnik Matematički. 37 (57): 303–319.
References
• Totik, V. (June 1983). "Approximation by Szász–Mirakjan–Kantorovich operators in L^p (p > 1)". Analysis Mathematica (in Russian). 9 (2): 147–167. doi:10.1007/BF01982010. MR 0720083. S2CID 123797519. Zbl 0513.41012.
| Wikipedia |
Li Ye (mathematician)
Li Ye (Chinese: 李冶; Wade–Giles: Li Yeh; 1192–1279), born Li Zhi (Chinese: 李治), courtesy name Li Jingzhai (Chinese: 李敬斋),[1][2] was a Chinese scientist and writer who published and improved the tian yuan shu method for solving polynomial equations of one variable.[1][3][4][5][6][7] Along with the 4th-century Chinese astronomer Yu Xi, Li Ye proposed the idea of a spherical Earth instead of a flat one before the advances of European science in the 17th century.
Name
Li Ye was born Li Zhi, but later changed his name to Li Ye to avoid confusion with the third Tang emperor who was also named Li Zhi, removing one stroke from his original name to change the character. His name is also sometimes written as Li Chih or Li Yeh. His literary name was Renqing (Chinese: 仁卿; Wade–Giles: Jen-ch’ing) and his appellation was Jingzhai (Chinese: 敬斋; Wade–Giles: Ching-chai).[1][2]
Life
Li Ye was born in Daxing (now Beijing). His father was a secretary to an officer in the Jurchen army. Li passed the civil service examination in 1230 at the age of 38, and was administrative prefect of Jun prefecture in Henan province until the Mongol invasion in 1233. He then lived in poverty in the mountainous Shanxi province. In 1248 he finished his most known work Ceyuan haijing (測圓海鏡, Sea mirror of circle measurements).[1][8] Li then returned to Hebei.
In 1257 Kublai Khan, grandson of Genghis Khan, ordered Li to give advice on science. In 1259 Li completed Yigu yanduan (益古演段, New steps in computation), also a mathematics text. After becoming Khan, Kublai twice offered Li government positions, but Li was too old and had ill health. In 1264 Li finally accepted a position at the Hanlin Academy, writing official histories. However, he had a political fallout and resigned after a few months, again citing ill health.[1] He spent his final years teaching at his home near Feng Lung mountain in Yuan, Hebei. Li told his son to burn all of his books except for Sea mirror of circle measurements.[1]
Mathematics
Ceyuan haijing
Main article: Ceyuan haijing
Ceyuan haijing (Sea mirror of circle measurements) is a collection of 170 problems, all related to the same example of a circular city wall inscribed in a right triangle and a square.[1][9] They often involve two people who walk on straight lines until they can see each other, meet or reach a tree in a certain spot. The purpose of the book was to study intricate geometrical relations with algebra and provide solutions to equations.[10]
Many of the problems are solved by polynomial equations, which are represented using a method called tian yuan shu, "coefficient array method" or literally "method of the celestial unknown".[1][11] The method was known before him in some form. It is a positional system of rod numerals to represent polynomial equations.
For example, 2x² + 18x − 316 = 0 is represented as a vertical array of rod numerals, one line per coefficient, equivalent to the Arabic numerals 2, 18, and −316.
The 元 (yuan) denotes the unknown x, so the numerals on that line mean 18x. The line below is the constant term (−316) and the line above is the coefficient of the quadratic (x²) term. The system accommodates arbitrarily high exponents of the unknown by adding more lines on top and negative exponents by adding lines below the constant term. Decimals can also be represented. Later, the line order was reversed so that the first line is the lowest exponent.
Li does not explain how to solve equations in general, but shows it with the example problems. Most of the equations can be reduced to the second or sometimes third order. It is often assumed that he used methods similar to Ruffini's rule and Horner scheme.
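As a modern rendering of how such a coefficient array can be processed (an illustration, not a reconstruction of Li Ye's actual procedure), the sketch below evaluates the example polynomial by Horner's scheme and isolates its positive root by bisection:

```python
def horner(coeffs, x):
    """Evaluate a polynomial given, like a tian yuan shu array, as a list
    of coefficients from the highest power down: [2, 18, -316] stands
    for 2x^2 + 18x - 316."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# Solve 2x^2 + 18x - 316 = 0 for its positive root by bisection.
coeffs = [2, 18, -316]
lo, hi = 0.0, 100.0        # horner(coeffs, 0) < 0 < horner(coeffs, 100)
for _ in range(60):
    mid = (lo + hi) / 2
    if horner(coeffs, mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)                   # ≈ 8.851, the positive root
```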
Yigu yanduan
Main article: Yigu yanduan
Yigu yanduan (New steps in computation) is a work of more basic mathematics written soon after Li Ye completed Ceyuan haijing, and was probably written to help students who could not understand Sea mirror of circle measurements. Yigu yanduan consists of three volumes dedicated to solving geometrical problems on two tracks, through Tian yuan shu and geometry. It also contained algebraic problems, but with slightly different notations.[11]
Astronomy and shape of the earth
The huntian (渾天) theory of the celestial sphere stipulated that the earth was flat and square, while the heavens were spherical in shape, along with celestial bodies such as the sun and moon (described by 1st-century AD polymathic scientist and statesman Zhang Heng like a crossbow bullet and ball, respectively).[12] However, the idea of a flat earth was criticized by the Jin dynasty astronomer Yu Xi (fl. 307-345 AD), who suggested a rounded shape as an alternative.[13] In his Jingzhai gu zhin zhu (敬齋古今注),[14] Li Ye echoed Yu's idea that the Earth was spherical, similar in shape to the heavens but smaller in size, arguing that it could not be square since that would hinder the movement of the heavens and celestial bodies.[15]
However, the idea of a spherical Earth was not accepted in mainstream Chinese science and cartography until the 17th century, during the late Ming and early Qing periods, with the advent of evidence of European circumnavigation of the globe.[16] The flat-Earth model in Chinese science was finally overturned at that time by Jesuits in China, who introduced the spherical Earth model advanced by ancient Greeks such as Philolaus and Eratosthenes[17] and presented in world maps such as Matteo Ricci's Kunyu Wanguo Quantu, published in Ming-dynasty China in 1602.[18]
See also
• Chinese astronomy
• Chinese mathematics
• Qin Jiushao
• Zhu Shijie
References
1. Breard, Andrea. (Jan 01, 2021). "Li Ye: Chinese mathematician". Encyclopaedia Britannica. Accessed 7 February 2021.
2. "Li, Ye (1192-1279) 李, 冶 (1192-1279)" IdRef: Identifiants et Référentials pour l'enseignement supérieur et la recherche (French). Accessed 19 February 2018.
3. O'Connor, John J.; Robertson, Edmund F. (December 2003). "Li Zhi Biography". MacTutor History of Mathematics archive. University of St Andrews in Scotland. Retrieved 21 December 2009.
4. Ho, Peng Yoke (2000). Li, Qi and Shu: An Introduction to Science and Civilization in China (unabridged ed.). Courier Dover Publications. pp. 89–96. ISBN 0-486-41445-0.
5. Ho, Peng Yoke (2008). "Li Chih, also called Li Yeh". Complete Dictionary of Scientific Biography. Charles Scribner's Sons. Retrieved 2009-12-21. Via encyclopedia.com.
6. Lam Lay-Yong; Ang Tian-Se (September 1984). "Li Ye and his Yi Gu Yan Duan (old mathematics in expanded sections)". Archive for History of Exact Sciences. Berlin / Heidelberg: Springer. 29 (3): 237–266. doi:10.1007/BF00348622. S2CID 120593520.
7. Swetz, Frank (1996). "Enigmas of Chinese Mathematics". In Ronald Calinger (ed.). Vita mathematica: historical research and integration with teaching. MAA Notes. Vol. 40. Cambridge University Press. pp. 89–90. ISBN 0-88385-097-4.
8. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, p. 40.
9. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, pp. 44, 129.
10. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, pp. 44-45.
11. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, p. 45.
12. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, pp. 216-218, 227.
13. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, pp. 220, 498.
14. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, p. 498; footnote i.
15. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, p. 498.
16. Needham, Joseph; Wang, Ling. (1995) [1959]. Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, reprint edition. Cambridge: Cambridge University Press. ISBN 0-521-05801-5, pp. 498-499.
17. Cullen, Christopher. (1993). "Appendix A: A Chinese Eratosthenes of the Flat Earth: a Study of a Fragment of Cosmology in Huainanzi", in Major, John. S. (ed), Heaven and Earth in Early Han Thought: Chapters Three, Four, and Five of the Huananzi. Albany: State University of New York Press. ISBN 0-7914-1585-6, p. 269-270.
18. Baran, Madeleine (December 16, 2009). "Historic map coming to Minnesota". St. Paul, Minn.: Minnesota Public Radio. Retrieved 19 February 2018.
Further reading
• Chan, Hok-Lam. 1997. “A Recipe to Qubilai Qa'an on Governance: The Case of Chang Te-hui and Li Chih”. Journal of the Royal Asiatic Society 7 (2). Cambridge University Press: 257–83. https://www.jstor.org/stable/25183352.
Authority control
International
• FAST
• ISNI
• VIAF
• WorldCat
National
• France
• BnF data
• Germany
• Israel
• United States
• Australia
• Netherlands
Academics
• zbMATH
People
• Trove
Other
• SNAC
• IdRef
| Wikipedia |
T-norm
In mathematics, a t-norm (also T-norm or, unabbreviated, triangular norm) is a kind of binary operation used in the framework of probabilistic metric spaces and in multi-valued logic, specifically in fuzzy logic. A t-norm generalizes intersection in a lattice and conjunction in logic. The name triangular norm refers to the fact that in the framework of probabilistic metric spaces t-norms are used to generalize the triangle inequality of ordinary metric spaces.
Definition
A t-norm is a function T: [0, 1] × [0, 1] → [0, 1] that satisfies the following properties:
• Commutativity: T(a, b) = T(b, a)
• Monotonicity: T(a, b) ≤ T(c, d) if a ≤ c and b ≤ d
• Associativity: T(a, T(b, c)) = T(T(a, b), c)
• The number 1 acts as identity element: T(a, 1) = a
Since a t-norm is a binary algebraic operation on the interval [0, 1], infix algebraic notation is also common, with the t-norm usually denoted by $*$.
The defining conditions of the t-norm are exactly those of a partially ordered abelian monoid on the real unit interval [0, 1]. (Cf. ordered group.) The monoidal operation of any partially ordered abelian monoid L is therefore by some authors called a triangular norm on L.
Classification of t-norms
A t-norm is called continuous if it is continuous as a function, in the usual interval topology on [0, 1]². (Similarly for left- and right-continuity.)
A t-norm is called strict if it is continuous and strictly monotone.
A t-norm is called nilpotent if it is continuous and each x in the open interval (0, 1) is nilpotent, that is, there is a natural number n such that x $*$ ... $*$ x (n times) equals 0.
A t-norm $*$ is called Archimedean if it has the Archimedean property, that is, if for each x, y in the open interval (0, 1) there is a natural number n such that x $*$ ... $*$ x (n times) is less than or equal to y.
The usual partial ordering of t-norms is pointwise, that is,
T1 ≤ T2 if T1(a, b) ≤ T2(a, b) for all a, b in [0, 1].
As functions, pointwise larger t-norms are sometimes called stronger than those pointwise smaller. In the semantics of fuzzy logic, however, the larger the t-norm, the weaker the conjunction it represents in terms of logical strength.
Prominent examples
• Minimum t-norm $\top _{\mathrm {min} }(a,b)=\min\{a,b\},$ also called the Gödel t-norm, as it is the standard semantics for conjunction in Gödel fuzzy logic. Besides that, it occurs in most t-norm based fuzzy logics as the standard semantics for weak conjunction. It is the pointwise largest t-norm (see the properties of t-norms below).
• Product t-norm $\top _{\mathrm {prod} }(a,b)=a\cdot b$ (the ordinary product of real numbers). Besides other uses, the product t-norm is the standard semantics for strong conjunction in product fuzzy logic. It is a strict Archimedean t-norm.
• Łukasiewicz t-norm $\top _{\mathrm {Luk} }(a,b)=\max\{0,a+b-1\}.$ The name comes from the fact that the t-norm is the standard semantics for strong conjunction in Łukasiewicz fuzzy logic. It is a nilpotent Archimedean t-norm, pointwise smaller than the product t-norm.
• Drastic t-norm
$\top _{\mathrm {D} }(a,b)={\begin{cases}b&{\mbox{if }}a=1\\a&{\mbox{if }}b=1\\0&{\mbox{otherwise.}}\end{cases}}$
The name reflects the fact that the drastic t-norm is the pointwise smallest t-norm (see the properties of t-norms below). It is a right-continuous Archimedean t-norm.
• Nilpotent minimum
$\top _{\mathrm {nM} }(a,b)={\begin{cases}\min(a,b)&{\mbox{if }}a+b>1\\0&{\mbox{otherwise}}\end{cases}}$
is a standard example of a t-norm that is left-continuous, but not continuous. Despite its name, the nilpotent minimum is not a nilpotent t-norm.
• Hamacher product
$\top _{\mathrm {H} _{0}}(a,b)={\begin{cases}0&{\mbox{if }}a=b=0\\{\frac {ab}{a+b-ab}}&{\mbox{otherwise}}\end{cases}}$
is a strict Archimedean t-norm, and an important representative of the parametric classes of Hamacher t-norms and Schweizer–Sklar t-norms.
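The prominent t-norms above translate directly into code. The following Python sketch (the function names are chosen here) implements them and spot-checks, on a grid, the pointwise bounds stated in the next section:

```python
def t_min(a, b): return min(a, b)                      # minimum (Gödel)
def t_prod(a, b): return a * b                         # product
def t_luk(a, b): return max(0.0, a + b - 1.0)          # Łukasiewicz
def t_drastic(a, b): return b if a == 1 else (a if b == 1 else 0.0)
def t_nilp_min(a, b): return min(a, b) if a + b > 1 else 0.0
def t_hamacher(a, b): return 0.0 if a == b == 0 else a * b / (a + b - a * b)

grid = [i / 20 for i in range(21)]
for T in (t_min, t_prod, t_luk, t_drastic, t_nilp_min, t_hamacher):
    # Every t-norm lies between the drastic t-norm and the minimum.
    assert all(t_drastic(a, b) <= T(a, b) <= t_min(a, b)
               for a in grid for b in grid)
```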
Properties of t-norms
The drastic t-norm is the pointwise smallest t-norm and the minimum is the pointwise largest t-norm:
$\top _{\mathrm {D} }(a,b)\leq \top (a,b)\leq \mathrm {\top _{min}} (a,b),$ for any t-norm $\top $ and all a, b in [0, 1].
For every t-norm T, the number 0 acts as null element: T(a, 0) = 0 for all a in [0, 1].
A t-norm T has zero divisors if and only if it has nilpotent elements; each nilpotent element of T is also a zero divisor of T. The set of all nilpotent elements is an interval [0, a] or [0, a), for some a in [0, 1].
Properties of continuous t-norms
Although real functions of two variables can be continuous in each variable without being continuous on [0, 1]², this is not the case with t-norms: a t-norm T is continuous if and only if it is continuous in one variable, i.e., if and only if the functions fy(x) = T(x, y) are continuous for each y in [0, 1]. Analogous theorems hold for left- and right-continuity of a t-norm.
A continuous t-norm is Archimedean if and only if 0 and 1 are its only idempotents.
A continuous Archimedean t-norm is strict if 0 is its only nilpotent element; otherwise it is nilpotent. By definition, moreover, a continuous Archimedean t-norm T is nilpotent if and only if each x < 1 is a nilpotent element of T. Thus with a continuous Archimedean t-norm T, either all or none of the elements of (0, 1) are nilpotent. If it is the case that all elements in (0, 1) are nilpotent, then the t-norm is isomorphic to the Łukasiewicz t-norm; i.e., there is a strictly increasing function f such that
$\top (x,y)=f^{-1}(\top _{\mathrm {Luk} }(f(x),f(y))).$
If on the other hand it is the case that there are no nilpotent elements of T, the t-norm is isomorphic to the product t-norm. In other words, all nilpotent t-norms are isomorphic, the Łukasiewicz t-norm being their prototypical representative; and all strict t-norms are isomorphic, with the product t-norm as their prototypical example. The Łukasiewicz t-norm is itself isomorphic to the product t-norm undercut at 0.25, i.e., to the function p(x, y) = max(0.25, x · y) on [0.25, 1]².
For each continuous t-norm, the set of its idempotents is a closed subset of [0, 1]. Its complement—the set of all elements that are not idempotent—is therefore a union of countably many non-overlapping open intervals. The restriction of the t-norm to any of these intervals (including its endpoints) is Archimedean, and thus isomorphic either to the Łukasiewicz t-norm or the product t-norm. For such x, y that do not fall into the same open interval of non-idempotents, the t-norm evaluates to the minimum of x and y. These conditions actually give a characterization of continuous t-norms, called the Mostert–Shields theorem, since every continuous t-norm can in this way be decomposed, and the described construction always yields a continuous t-norm. The theorem can also be formulated as follows:
A t-norm is continuous if and only if it is isomorphic to an ordinal sum of the minimum, Łukasiewicz, and product t-norm.
A similar characterization theorem for non-continuous t-norms is not known (not even for left-continuous ones), only some non-exhaustive methods for the construction of t-norms have been found.
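One such construction is the ordinal sum used in the Mostert–Shields decomposition: an Archimedean t-norm is rescaled onto a subinterval and the minimum is used elsewhere. A minimal Python sketch, assuming a single summand:

```python
def ordinal_sum(T, a, b):
    """A t-norm acting as a rescaled copy of T on [a, b] and as the
    minimum everywhere else; every point of [0, a] becomes idempotent."""
    def S(x, y):
        if a <= x <= b and a <= y <= b:
            u, v = (x - a) / (b - a), (y - a) / (b - a)
            return a + (b - a) * T(u, v)
        return min(x, y)
    return S

# The product t-norm rescaled onto [0.25, 1]:
S = ordinal_sum(lambda u, v: u * v, 0.25, 1.0)
print(S(0.5, 0.5))   # 0.25 + 0.75 * (1/3) * (1/3) = 0.3333...
print(S(0.1, 0.2))   # 0.1: below 0.25 the minimum is used
```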
Residuum
For any left-continuous t-norm $\top $, there is a unique binary operation $\Rightarrow $ on [0, 1] such that
$\top (z,x)\leq y$ if and only if $z\leq (x\Rightarrow y)$
for all x, y, z in [0, 1]. This operation is called the residuum of the t-norm. In prefix notation, the residuum of a t-norm $\top $ is often denoted by ${\vec {\top }}$ or by the letter R.
The interval [0, 1] equipped with a t-norm and its residuum forms a residuated lattice. The relation between a t-norm T and its residuum R is an instance of adjunction (specifically, a Galois connection): the residuum forms a right adjoint R(x, –) to the functor T(–, x) for each x in the lattice [0, 1] taken as a poset category.
In the standard semantics of t-norm based fuzzy logics, where conjunction is interpreted by a t-norm, the residuum plays the role of implication (often called R-implication).
Basic properties of residua
If $\Rightarrow $ is the residuum of a left-continuous t-norm $\top $, then
$(x\Rightarrow y)=\sup\{z\mid \top (z,x)\leq y\}.$
Consequently, for all x, y in the unit interval,
$(x\Rightarrow y)=1$ if and only if $x\leq y$
and
$(1\Rightarrow y)=y.$
If $*$ is a left-continuous t-norm and $\Rightarrow $ its residuum, then
${\begin{array}{rcl}\min(x,y)&\geq &x*(x\Rightarrow y)\\\max(x,y)&=&\min((x\Rightarrow y)\Rightarrow y,(y\Rightarrow x)\Rightarrow x).\end{array}}$
If $*$ is continuous, then equality holds in the former.
Residua of prominent left-continuous t-norms
If x ≤ y, then R(x, y) = 1 for any residuum R. The values of prominent residua for x > y are therefore:
• Minimum t-norm: standard Gödel implication, R(x, y) = y
• Product t-norm: Goguen implication, R(x, y) = y / x
• Łukasiewicz t-norm: standard Łukasiewicz implication, R(x, y) = 1 − x + y
• Nilpotent minimum: Kleene-Dienes implication, R(x, y) = max(1 − x, y)
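The closed forms above can be checked against the defining supremum. A Python sketch (grid-based, hence approximate; the names are chosen here):

```python
def residuum(T, x, y, steps=200):
    """R(x, y) = sup { z : T(z, x) <= y }, approximated on a grid."""
    eps = 1e-9
    return max(z / steps for z in range(steps + 1)
               if T(z / steps, x) <= y + eps)

def r_godel(x, y): return 1.0 if x <= y else y
def r_goguen(x, y): return 1.0 if x <= y else y / x
def r_luk(x, y): return min(1.0, 1.0 - x + y)

for x, y in [(0.8, 0.3), (0.5, 0.5), (0.9, 0.6)]:
    assert abs(residuum(min, x, y) - r_godel(x, y)) < 1e-2
    assert abs(residuum(lambda a, b: a * b, x, y) - r_goguen(x, y)) < 1e-2
    assert abs(residuum(lambda a, b: max(0.0, a + b - 1.0), x, y)
               - r_luk(x, y)) < 1e-2
```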
T-conorms
T-conorms (also called S-norms) are dual to t-norms under the order-reversing operation that assigns 1 – x to x on [0, 1]. Given a t-norm $\top $, the complementary conorm is defined by
$\bot (a,b)=1-\top (1-a,1-b).$
This generalizes De Morgan's laws.
It follows that a t-conorm satisfies the following conditions, which can be used for an equivalent axiomatic definition of t-conorms independently of t-norms:
• Commutativity: ⊥(a, b) = ⊥(b, a)
• Monotonicity: ⊥(a, b) ≤ ⊥(c, d) if a ≤ c and b ≤ d
• Associativity: ⊥(a, ⊥(b, c)) = ⊥(⊥(a, b), c)
• Identity element: ⊥(a, 0) = a
T-conorms are used to represent logical disjunction in fuzzy logic and union in fuzzy set theory.
Examples of t-conorms
Important t-conorms are those dual to prominent t-norms:
• Maximum t-conorm $\bot _{\mathrm {max} }(a,b)=\max\{a,b\}$, dual to the minimum t-norm, is the smallest t-conorm (see the properties of t-conorms below). It is the standard semantics for disjunction in Gödel fuzzy logic and for weak disjunction in all t-norm based fuzzy logics.
• Probabilistic sum $\bot _{\mathrm {sum} }(a,b)=a+b-a\cdot b=1-(1-a)\cdot (1-b)$ is dual to the product t-norm. In probability theory it expresses the probability of the union of independent events. It is also the standard semantics for strong disjunction in such extensions of product fuzzy logic in which it is definable (e.g., those containing involutive negation).
• Bounded sum $\bot _{\mathrm {Luk} }(a,b)=\min\{a+b,1\}$ is dual to the Łukasiewicz t-norm. It is the standard semantics for strong disjunction in Łukasiewicz fuzzy logic.
• Drastic t-conorm
$\bot _{\mathrm {D} }(a,b)={\begin{cases}b&{\mbox{if }}a=0\\a&{\mbox{if }}b=0\\1&{\mbox{otherwise,}}\end{cases}}$
dual to the drastic t-norm, is the largest t-conorm (see the properties of t-conorms below).
• Nilpotent maximum, dual to the nilpotent minimum:
$\bot _{\mathrm {nM} }(a,b)={\begin{cases}\max(a,b)&{\mbox{if }}a+b<1\\1&{\mbox{otherwise.}}\end{cases}}$
• Einstein sum (compare the velocity-addition formula under special relativity)
$\bot _{\mathrm {H} _{2}}(a,b)={\frac {a+b}{1+ab}}$
is a dual to one of the Hamacher t-norms.
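Because of this duality, the list of t-conorms can be generated mechanically from the corresponding t-norms. A short Python sketch (the names are chosen here):

```python
def dual(T):
    """The t-conorm complementary to the t-norm T under the standard negator."""
    return lambda a, b: 1 - T(1 - a, 1 - b)

s_max = dual(min)                                  # maximum
s_sum = dual(lambda a, b: a * b)                   # probabilistic sum
s_luk = dual(lambda a, b: max(0.0, a + b - 1.0))   # bounded sum

a, b = 0.3, 0.4
print(s_max(a, b), s_sum(a, b), s_luk(a, b))       # ≈ 0.4, 0.58, 0.7
assert abs(s_sum(a, b) - (a + b - a * b)) < 1e-12  # matches the closed form
```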
Properties of t-conorms
Many properties of t-conorms can be obtained by dualizing the properties of t-norms, for example:
• For any t-conorm ⊥, the number 1 is an annihilating element: ⊥(a, 1) = 1, for any a in [0, 1].
• Dually to t-norms, all t-conorms are bounded by the maximum and the drastic t-conorm:
$\mathrm {\bot _{max}} (a,b)\leq \bot (a,b)\leq \bot _{\mathrm {D} }(a,b)$, for any t-conorm $\bot $ and all a, b in [0, 1].
Further properties result from the relationships between t-norms and t-conorms or their interplay with other operators, e.g.:
• A t-norm T distributes over a t-conorm ⊥, i.e.,
T(x, ⊥(y, z)) = ⊥(T(x, y), T(x, z)) for all x, y, z in [0, 1],
if and only if ⊥ is the maximum t-conorm. Dually, any t-conorm distributes over the minimum, but not over any other t-norm.
Non-standard negators
A negator $n\colon [0,1]\to [0,1]$ is a monotonically decreasing mapping such that $n(0)=1$ and $n(1)=0$. A negator n is called
• strict in case of strict monotonicity, and
• strong if it is strict and involutive, that is, $n(n(x))=x$ for all $x$ in [0, 1].
The standard (canonical) negator is $n(x)=1-x,\ x\in [0,1]$, which is both strict and strong. As the standard negator is used in the above definition of a t-norm/t-conorm pair, this can be generalized as follows:
A De Morgan triplet is a triple (T,⊥,n) such that[1]
1. T is a t-norm
2. ⊥ is a t-conorm according to the axiomatic definition of t-conorms as mentioned above
3. n is a strong negator
4. $\forall a,b\in [0,1]\colon \ n({\perp }(a,b))=\top (n(a),n(b))$.
See also
• Construction of t-norms
• T-norm fuzzy logics
References
1. Ismat Beg, Samina Ashraf, "Similarity measures for fuzzy sets", Applied and Computational Mathematics, March 2009 (available on ResearchGate since November 23, 2016)
• Klement, Erich Peter; Mesiar, Radko; and Pap, Endre (2000), Triangular Norms. Dordrecht: Kluwer. ISBN 0-7923-6416-3.
• Hájek, Petr (1998), Metamathematics of Fuzzy Logic. Dordrecht: Kluwer. ISBN 0-7923-5238-6
• Cignoli, Roberto L.O.; D'Ottaviano, Itala M.L.; and Mundici, Daniele (2000), Algebraic Foundations of Many-valued Reasoning. Dordrecht: Kluwer. ISBN 0-7923-6009-5
• Fodor, János (2004), "Left-continuous t-norms in fuzzy logic: An overview". Acta Polytechnica Hungarica 1(2), ISSN 1785-8860
| Wikipedia |
T-coloring
In graph theory, a T-coloring of a graph $G=(V,E)$, given the set T of nonnegative integers containing 0, is a function $c:V(G)\to \mathbb {N} $ that maps each vertex to a positive integer (color) such that if u and w are adjacent then $|c(u)-c(w)|\notin T$.[1] In simple words, the absolute value of the difference between the colors of two adjacent vertices must not belong to the fixed set T. The concept was introduced by William K. Hale.[2] If T = {0} it reduces to common vertex coloring.
The T-chromatic number, $\chi _{T}(G),$ is the minimum number of colors that can be used in a T-coloring of G.
The complementary coloring of T-coloring c, denoted ${\overline {c}}$ is defined for each vertex v of G by
${\overline {c}}(v)=s+1-c(v)$
where s is the largest color assigned to a vertex of G by the c function.[1]
Relation to Chromatic Number
Proposition. $\chi _{T}(G)=\chi (G)$.[3]
Proof. Every T-coloring of G is also a vertex coloring of G, so $\chi _{T}(G)\geq \chi (G).$ Suppose that $\chi (G)=k$ and $r=\max(T),$ and let $c:V(G)\to \mathbb {N} $ be a vertex k-coloring using the colors $\{1,\ldots ,k\}.$ Define $d:V(G)\to \mathbb {N} $ as
$d(v)=(r+1)c(v)$
For every two adjacent vertices u and w of G,
$|d(u)-d(w)|=|(r+1)c(u)-(r+1)c(w)|=(r+1)|c(u)-c(w)|\geq r+1$
so $|d(u)-d(w)|\notin T.$ Therefore d is a T-coloring of G. Since d uses k colors, $\chi _{T}(G)\leq k=\chi (G).$ Consequently, $\chi _{T}(G)=\chi (G).$
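The scaling construction in the proof is directly implementable. The Python sketch below (illustrative; the graph is given as an edge list) turns an ordinary coloring into a T-coloring and verifies the defining condition:

```python
def t_coloring_from_coloring(c, T):
    """Scale an ordinary vertex coloring c by r + 1, where r = max(T),
    as in the proof above; the result is a T-coloring."""
    r = max(T)
    return {v: (r + 1) * k for v, k in c.items()}

# The 5-cycle C5 (chromatic number 3) with T = {0, 1, 4}:
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
c = {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}     # a proper 3-coloring of C5
T = {0, 1, 4}
d = t_coloring_from_coloring(c, T)
print(d)                                # {0: 5, 1: 10, 2: 5, 3: 10, 4: 15}
assert all(abs(d[u] - d[w]) not in T for u, w in edges)
```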
T-span
The span of a T-coloring c of G is defined as
$sp_{T}(c)=\max _{u,w\in V(G)}|c(u)-c(w)|.$
The T-span is defined as:
$sp_{T}(G)=\min _{c}sp_{T}(c).$[4]
Some bounds of the T-span are given below:
• For every k-chromatic graph G with clique of size $\omega $ and every finite set T of nonnegative integers containing 0, $sp_{T}(K_{\omega })\leq sp_{T}(G)\leq sp_{T}(K_{k}).$
• For every graph G and every finite set T of nonnegative integers containing 0 whose largest element is r, $sp_{T}(G)\leq (\chi (G)-1)(r+1).$[5]
• For every graph G and every finite set T of nonnegative integers containing 0 whose cardinality is t, $sp_{T}(G)\leq (\chi (G)-1)t.$ [5]
See also
• Graph coloring
References
1. Chartrand, Gary; Zhang, Ping (2009). "14. Colorings, Distance, and Domination". Chromatic Graph Theory. CRC Press. pp. 397–402.
2. W. K. Hale, Frequency assignment: Theory and applications. Proc. IEEE 68 (1980) 1497–1514.
3. M. B. Cozzens and F. S. Roberts, T -colorings of graphs and the Channel Assignment Problem. Congr. Numer. 35 (1982) 191–208.
4. Chartrand, Gary; Zhang, Ping (2009). "14. Colorings, Distance, and Domination". Chromatic Graph Theory. CRC Press. p. 399.
5. M. B. Cozzens and F. S. Roberts, T -colorings of graphs and the Channel Assignment Problem. Congr. Numer. 35 (1982) 191–208.
| Wikipedia |
T-group (mathematics)
In mathematics, in the field of group theory, a T-group is a group in which the property of normality is transitive, that is, every subnormal subgroup is normal. Here are some facts about T-groups:
• Every simple group is a T-group.
• Every quasisimple group is a T-group.
• Every abelian group is a T-group.
• Every Hamiltonian group is a T-group.
• Every nilpotent T-group is either abelian or Hamiltonian, because in a nilpotent group, every subgroup is subnormal.
• Every normal subgroup of a T-group is a T-group.
• Every homomorphic image of a T-group is a T-group.
• Every solvable T-group is metabelian.
The solvable T-groups were characterized by Wolfgang Gaschütz as being exactly the solvable groups G with an abelian normal Hall subgroup H of odd order such that the quotient group G/H is a Dedekind group and H is acted upon by conjugation as a group of power automorphisms by G.
A PT-group is a group in which permutability is transitive. A finite T-group is a PT-group.
References
• Robinson, Derek J.S. (1996), A Course in the Theory of Groups, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94461-6
• Ballester-Bolinches, Adolfo; Esteban-Romero, Ramon; Asaad, Mohamed (2010), Products of Finite Groups, Walter de Gruyter, ISBN 978-3-11-022061-2
T-norm fuzzy logics
T-norm fuzzy logics are a family of non-classical logics, informally delimited by having a semantics that takes the real unit interval [0, 1] for the system of truth values and functions called t-norms for permissible interpretations of conjunction. They are mainly used in applied fuzzy logic and fuzzy set theory as a theoretical basis for approximate reasoning.
T-norm fuzzy logics belong in broader classes of fuzzy logics and many-valued logics. In order to generate a well-behaved implication, the t-norms are usually required to be left-continuous; logics of left-continuous t-norms further belong in the class of substructural logics, among which they are marked with the validity of the law of prelinearity, (A → B) ∨ (B → A). Both propositional and first-order (or higher-order) t-norm fuzzy logics, as well as their expansions by modal and other operators, are studied. Logics that restrict the t-norm semantics to a subset of the real unit interval (for example, finitely valued Łukasiewicz logics) are usually included in the class as well.
Important examples of t-norm fuzzy logics are monoidal t-norm logic (MTL) of all left-continuous t-norms, basic logic (BL) of all continuous t-norms, product fuzzy logic of the product t-norm, or the nilpotent minimum logic of the nilpotent minimum t-norm. Some independently motivated logics belong among t-norm fuzzy logics, too, for example Łukasiewicz logic (which is the logic of the Łukasiewicz t-norm) or Gödel–Dummett logic (which is the logic of the minimum t-norm).
Motivation
As members of the family of fuzzy logics, t-norm fuzzy logics primarily aim at generalizing classical two-valued logic by admitting intermediary truth values between 1 (truth) and 0 (falsity) representing degrees of truth of propositions. The degrees are assumed to be real numbers from the unit interval [0, 1]. In propositional t-norm fuzzy logics, propositional connectives are stipulated to be truth-functional, that is, the truth value of a complex proposition formed by a propositional connective from some constituent propositions is a function (called the truth function of the connective) of the truth values of the constituent propositions. The truth functions operate on the set of truth degrees (in the standard semantics, on the [0, 1] interval); thus the truth function of an n-ary propositional connective c is a function Fc: [0, 1]n → [0, 1]. Truth functions generalize truth tables of propositional connectives known from classical logic to operate on the larger system of truth values.
T-norm fuzzy logics impose certain natural constraints on the truth function of conjunction. The truth function $*\colon [0,1]^{2}\to [0,1]$ of conjunction is assumed to satisfy the following conditions:
• Commutativity, that is, $x*y=y*x$ for all x and y in [0, 1]. This expresses the assumption that the order of fuzzy propositions is immaterial in conjunction, even if intermediary truth degrees are admitted.
• Associativity, that is, $(x*y)*z=x*(y*z)$ for all x, y, and z in [0, 1]. This expresses the assumption that the order of performing conjunction is immaterial, even if intermediary truth degrees are admitted.
• Monotonicity, that is, if $x\leq y$ then $x*z\leq y*z$ for all x, y, and z in [0, 1]. This expresses the assumption that increasing the truth degree of a conjunct should not decrease the truth degree of the conjunction.
• Neutrality of 1, that is, $1*x=x$ for all x in [0, 1]. This assumption corresponds to regarding the truth degree 1 as full truth, conjunction with which does not decrease the truth value of the other conjunct. Together with the previous conditions this condition ensures that also $0*x=0$ for all x in [0, 1], which corresponds to regarding the truth degree 0 as full falsity, conjunction with which is always fully false.
• Continuity of the function $*$ (the previous conditions reduce this requirement to the continuity in either argument). Informally this expresses the assumption that microscopic changes of the truth degrees of conjuncts should not result in a macroscopic change of the truth degree of their conjunction. This condition, among other things, ensures a good behavior of (residual) implication derived from conjunction; to ensure the good behavior, however, left-continuity (in either argument) of the function $*$ is sufficient.[1] In general t-norm fuzzy logics, therefore, only left-continuity of $*$ is required, which expresses the assumption that a microscopic decrease of the truth degree of a conjunct should not macroscopically decrease the truth degree of conjunction.
These assumptions make the truth function of conjunction a left-continuous t-norm, which explains the name of the family of fuzzy logics (t-norm based). Particular logics of the family can make further assumptions about the behavior of conjunction (for example, Gödel–Dummett logic requires its idempotence) or other connectives (for example, the logic IMTL (involutive monoidal t-norm logic) requires the involutiveness of negation).
All left-continuous t-norms $*$ have a unique residuum, that is, a binary function $\Rightarrow $ such that for all x, y, and z in [0, 1],
$x*y\leq z$ if and only if $x\leq y\Rightarrow z.$
The residuum of a left-continuous t-norm can explicitly be defined as
$(x\Rightarrow y)=\sup\{z\mid z*x\leq y\}.$
This ensures that the residuum is the pointwise largest function such that for all x and y,
$x*(x\Rightarrow y)\leq y.$
The latter can be interpreted as a fuzzy version of the modus ponens rule of inference. The residuum of a left-continuous t-norm thus can be characterized as the weakest function that makes the fuzzy modus ponens valid, which makes it a suitable truth function for implication in fuzzy logic. Left-continuity of the t-norm is the necessary and sufficient condition for this relationship between a t-norm conjunction and its residual implication to hold.
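As a numerical illustration (a sketch, not part of the cited development; the Łukasiewicz t-norm and its standard closed-form residuum $\min(1,1-x+y)$ serve as the example), the residuum can be approximated directly from the supremum definition, and the residuation equivalence can be spot-checked on a grid:

```python
import numpy as np

def luk(x, y):
    """Lukasiewicz t-norm max(x + y - 1, 0)."""
    return max(x + y - 1.0, 0.0)

def luk_res(x, y):
    """Its standard closed-form residuum min(1, 1 - x + y)."""
    return min(1.0, 1.0 - x + y)

def residuum(tnorm, x, y, steps=10001):
    """Approximate sup { z : z * x <= y } on a finite grid of z-values."""
    zs = np.linspace(0.0, 1.0, steps)
    return max(z for z in zs if tnorm(z, x) <= y)

x, y = 0.7, 0.4
print(residuum(luk, x, y), luk_res(x, y))          # both ~ 0.7

# Residuation: x * y <= z  iff  x <= (y => z), spot-checked on a grid.
pts = np.linspace(0.0, 1.0, 11)
for a in pts:
    for b in pts:
        for c in pts:
            assert (luk(a, b) <= c + 1e-9) == (a <= luk_res(b, c) + 1e-9)
print("residuation law verified on the grid")
```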
Truth functions of further propositional connectives can be defined by means of the t-norm and its residuum, for instance the residual negation $\neg x=(x\Rightarrow 0)$ or bi-residual equivalence $x\Leftrightarrow y=(x\Rightarrow y)*(y\Rightarrow x).$ Truth functions of propositional connectives may also be introduced by additional definitions: the most usual ones are the minimum (which plays a role of another conjunctive connective), the maximum (which plays a role of a disjunctive connective), or the Baaz Delta operator, defined in [0, 1] as $\Delta x=1$ if $x=1$ and $\Delta x=0$ otherwise. In this way, a left-continuous t-norm, its residuum, and the truth functions of additional propositional connectives determine the truth values of complex propositional formulae in [0, 1].
Formulae that always evaluate to 1 are called tautologies with respect to the given left-continuous t-norm $*,$ or $*{\mbox{-}}$tautologies. The set of all $*{\mbox{-}}$tautologies is called the logic of the t-norm $*,$ as these formulae represent the laws of fuzzy logic (determined by the t-norm) that hold (to degree 1) regardless of the truth degrees of atomic formulae. Some formulae are tautologies with respect to a larger class of left-continuous t-norms; the set of such formulae is called the logic of the class. Important t-norm logics are the logics of particular t-norms or classes of t-norms, for example:
• Łukasiewicz logic is the logic of the Łukasiewicz t-norm $x*y=\max(x+y-1,0)$
• Gödel–Dummett logic is the logic of the minimum t-norm $x*y=\min(x,y)$
• Product fuzzy logic is the logic of the product t-norm $x*y=x\cdot y$
• Monoidal t-norm logic MTL is the logic of (the class of) all left-continuous t-norms
• Basic fuzzy logic BL is the logic of (the class of) all continuous t-norms
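The basic t-norms just listed can be explored numerically. Here is a small Python sketch (an illustration; the residua are the standard closed forms for these three t-norms, assumed rather than derived here) checking that the prelinearity law $(A\rightarrow B)\vee (B\rightarrow A)$ evaluates to 1 under each of them, while idempotence of strong conjunction singles out the minimum t-norm:

```python
import numpy as np

# The three basic continuous t-norms with their (standard) residua.
TNORMS = {
    "Lukasiewicz": (lambda x, y: max(x + y - 1.0, 0.0),
                    lambda x, y: min(1.0, 1.0 - x + y)),
    "Godel":       (lambda x, y: min(x, y),
                    lambda x, y: 1.0 if x <= y else y),
    "product":     (lambda x, y: x * y,
                    lambda x, y: 1.0 if x <= y else y / x),
}

grid = np.linspace(0.0, 1.0, 101)
for name, (conj, imp) in TNORMS.items():
    # Prelinearity (A -> B) v (B -> A); weak disjunction is max.
    prelin = all(max(imp(a, b), imp(b, a)) >= 1.0 - 1e-12
                 for a in grid for b in grid)
    # Idempotence of strong conjunction, x * x = x, holds only for min.
    idem = all(abs(conj(a, a) - a) <= 1e-12 for a in grid)
    print(f"{name}: prelinearity = {prelin}, idempotent = {idem}")
```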
It turns out that many logics of particular t-norms and classes of t-norms are axiomatizable. The completeness theorem of the axiomatic system with respect to the corresponding t-norm semantics on [0, 1] is then called the standard completeness of the logic. Besides the standard real-valued semantics on [0, 1], the logics are sound and complete with respect to general algebraic semantics, formed by suitable classes of prelinear commutative bounded integral residuated lattices.
History
Some particular t-norm fuzzy logics have been introduced and investigated long before the family was recognized (even before the notions of fuzzy logic or t-norm emerged):
• Łukasiewicz logic (the logic of the Łukasiewicz t-norm) was originally defined by Jan Łukasiewicz (1920) as a three-valued logic;[2] it was later generalized to n-valued (for all finite n) as well as infinitely-many-valued variants, both propositional and first-order.[3]
• Gödel–Dummett logic (the logic of the minimum t-norm) was implicit in Gödel's 1932 proof of infinite-valuedness of intuitionistic logic.[4] Later (1959) it was explicitly studied by Dummett who proved a completeness theorem for the logic.[5]
A systematic study of particular t-norm fuzzy logics and their classes began with Hájek's (1998) monograph Metamathematics of Fuzzy Logic, which presented the notion of the logic of a continuous t-norm, the logics of the three basic continuous t-norms (Łukasiewicz, Gödel, and product), and the 'basic' fuzzy logic BL of all continuous t-norms (all of them both propositional and first-order). The book also started the investigation of fuzzy logics as non-classical logics with Hilbert-style calculi, algebraic semantics, and metamathematical properties known from other logics (completeness theorems, deduction theorems, complexity, etc.).
Since then, a plethora of t-norm fuzzy logics have been introduced and their metamathematical properties have been investigated. Some of the most important t-norm fuzzy logics were introduced in 2001, by Esteva and Godo (MTL, IMTL, SMTL, NM, WNM),[1] Esteva, Godo, and Montagna (propositional ŁΠ),[6] and Cintula (first-order ŁΠ).[7]
Logical language
The logical vocabulary of propositional t-norm fuzzy logics standardly comprises the following connectives:
• Implication $\rightarrow $ (binary). In the context of other than t-norm-based fuzzy logics, the t-norm-based implication is sometimes called residual implication or R-implication, as its standard semantics is the residuum of the t-norm that realizes strong conjunction.
• Strong conjunction $\And $ (binary). In the context of substructural logics, the sign $\otimes $ and the names group, intensional, multiplicative, or parallel conjunction are often used for strong conjunction.
• Weak conjunction $\wedge $ (binary), also called lattice conjunction (as it is always realized by the lattice operation of meet in algebraic semantics). In the context of substructural logics, the names additive, extensional, or comparative conjunction are sometimes used for lattice conjunction. In the logic BL and its extensions (though not in t-norm logics in general), weak conjunction is definable in terms of implication and strong conjunction, by
$A\wedge B\equiv A{\mathbin {\And }}(A\rightarrow B).$
The presence of two conjunction connectives is a common feature of contraction-free substructural logics.
• Bottom $\bot $ (nullary); $0$ or ${\overline {0}}$ are common alternative signs and zero a common alternative name for the propositional constant (as the constants bottom and zero of substructural logics coincide in t-norm fuzzy logics). The proposition $\bot $ represents the falsity or absurdum and corresponds to the classical truth value false.
• Negation $\neg $ (unary), sometimes called residual negation if other negation connectives are considered, as it is defined from the residual implication by the reductio ad absurdum:
$\neg A\equiv A\rightarrow \bot $
• Equivalence $\leftrightarrow $ (binary), defined as
$A\leftrightarrow B\equiv (A\rightarrow B)\wedge (B\rightarrow A)$
In t-norm logics, the definition is equivalent to $(A\rightarrow B){\mathbin {\And }}(B\rightarrow A).$
• (Weak) disjunction $\vee $ (binary), also called lattice disjunction (as it is always realized by the lattice operation of join in algebraic semantics). In t-norm logics it is definable in terms of other connectives as
$A\vee B\equiv ((A\rightarrow B)\rightarrow B)\wedge ((B\rightarrow A)\rightarrow A)$
• Top $\top $ (nullary), also called one and denoted by $1$ or ${\overline {1}}$ (as the constants top and one of substructural logics coincide in t-norm fuzzy logics). The proposition $\top $ corresponds to the classical truth value true and can in t-norm logics be defined as
$\top \equiv \bot \rightarrow \bot .$
Some propositional t-norm logics add further propositional connectives to the above language, most often the following ones:
• The Delta connective $\triangle $ is a unary connective that asserts classical truth of a proposition, as the formulae of the form $\triangle A$ behave as in classical logic. Also called the Baaz Delta, as it was first used by Matthias Baaz for Gödel–Dummett logic.[8] The expansion of a t-norm logic $L$ by the Delta connective is usually denoted by $L_{\triangle }.$
• Truth constants are nullary connectives representing particular truth values between 0 and 1 in the standard real-valued semantics. For the real number $r$, the corresponding truth constant is usually denoted by ${\overline {r}}.$ Most often, the truth constants for all rational numbers are added. The system of all truth constants in the language is supposed to satisfy the bookkeeping axioms:[9]
${\overline {r{\mathbin {\And }}s}}\leftrightarrow ({\overline {r}}{\mathbin {\And }}{\overline {s}}),$
${\overline {r\rightarrow s}}\leftrightarrow ({\overline {r}}{\mathbin {\rightarrow }}{\overline {s}}),$
etc. for all propositional connectives and all truth constants definable in the language.
• Involutive negation $\sim $ (unary) can be added as an additional negation to t-norm logics whose residual negation is not itself involutive, that is, if it does not obey the law of double negation $\neg \neg A\leftrightarrow A$. A t-norm logic $L$ expanded with involutive negation is usually denoted by $L_{\sim }$ and called $L$ with involution.
• Strong disjunction $\oplus $ (binary). In the context of substructural logics it is also called group, intensional, multiplicative, or parallel disjunction. Even though standard in contraction-free substructural logics, in t-norm fuzzy logics it is usually used only in the presence of involutive negation, which makes it definable (and so axiomatizable) by de Morgan's law from strong conjunction:
$A\oplus B\equiv \mathrm {\sim } (\mathrm {\sim } A{\mathbin {\And }}\mathrm {\sim } B).$
• Additional t-norm conjunctions and residual implications. Some expressively strong t-norm logics, for instance the logic ŁΠ, have more than one strong conjunction or residual implication in their language. In the standard real-valued semantics, all such strong conjunctions are realized by different t-norms and the residual implications by their residua.
Well-formed formulae of propositional t-norm logics are defined from propositional variables (usually countably many) by the above logical connectives, as usual in propositional logics. In order to save parentheses, it is common to use the following order of precedence:
• Unary connectives (bind most closely)
• Binary connectives other than implication and equivalence
• Implication and equivalence (bind most loosely)
First-order variants of t-norm logics employ the usual logical language of first-order logic with the above propositional connectives and the following quantifiers:
• General quantifier $\forall $
• Existential quantifier $\exists $
The first-order variant of a propositional t-norm logic $L$ is usually denoted by $L\forall .$
Semantics
Algebraic semantics is predominantly used for propositional t-norm fuzzy logics, with three main classes of algebras with respect to which a t-norm fuzzy logic $L$ is complete:
• General semantics, formed of all $L$-algebras — that is, all algebras for which the logic is sound.
• Linear semantics, formed of all linear $L$-algebras — that is, all $L$-algebras whose lattice order is linear.
• Standard semantics, formed of all standard $L$-algebras — that is, all $L$-algebras whose lattice reduct is the real unit interval [0, 1] with the usual order. In standard $L$-algebras, the interpretation of strong conjunction is a left-continuous t-norm and the interpretation of most propositional connectives is determined by the t-norm (hence the names t-norm-based logics and t-norm $L$-algebras, which is also used for $L$-algebras on the lattice [0, 1]). In t-norm logics with additional connectives, however, the real-valued interpretation of the additional connectives may be restricted by further conditions for the t-norm algebra to be called standard: for example, in standard $L_{\sim }$-algebras of the logic $L$ with involution, the interpretation of the additional involutive negation $\sim $ is required to be the standard involution $f_{\sim }(x)=1-x,$ rather than other involutions that can also interpret $\sim $ over t-norm $L_{\sim }$-algebras.[10] In general, therefore, the definition of standard t-norm algebras has to be explicitly given for t-norm logics with additional connectives.
Bibliography
• Esteva F. & Godo L., 2001, "Monoidal t-norm based logic: Towards a logic of left-continuous t-norms". Fuzzy Sets and Systems 124: 271–288.
• Flaminio T. & Marchioni E., 2006, T-norm based logics with an independent involutive negation. Fuzzy Sets and Systems 157: 3125–3144.
• Gottwald S. & Hájek P., 2005, Triangular norm based mathematical fuzzy logic. In E.P. Klement & R. Mesiar (eds.), Logical, Algebraic, Analytic and Probabilistic Aspects of Triangular Norms, pp. 275–300. Elsevier, Amsterdam 2005.
• Hájek P., 1998, Metamathematics of Fuzzy Logic. Dordrecht: Kluwer. ISBN 0-7923-5238-6.
References
1. Esteva & Godo (2001)
2. Łukasiewicz J., 1920, O logice trojwartosciowej (Polish, On three-valued logic). Ruch filozoficzny 5:170–171.
3. Hay, L.S., 1963, Axiomatization of the infinite-valued predicate calculus. Journal of Symbolic Logic 28:77–86.
4. Gödel K., 1932, Zum intuitionistischen Aussagenkalkül, Anzeiger Akademie der Wissenschaften Wien 69: 65–66.
5. Dummett M., 1959, Propositional calculus with denumerable matrix, Journal of Symbolic Logic 27: 97–106
6. Esteva F., Godo L., & Montagna F., 2001, The ŁΠ and ŁΠ½ logics: Two complete fuzzy systems joining Łukasiewicz and product logics, Archive for Mathematical Logic 40: 39–67.
7. Cintula P., 2001, The ŁΠ and ŁΠ½ propositional and predicate logics, Fuzzy Sets and Systems 124: 289–302.
8. Baaz M., 1996, Infinite-valued Gödel logic with 0-1-projections and relativisations. In P. Hájek (ed.), Gödel'96: Logical Foundations of Mathematics, Computer Science, and Physics, Springer, Lecture Notes in Logic 6: 23–33
9. Hájek (1998)
10. Flaminio & Marchioni (2006)
T-schema
The T-schema ("truth schema", not to be confused with "Convention T") is used to check if an inductive definition of truth is valid, which lies at the heart of any realisation of Alfred Tarski's semantic theory of truth. Some authors refer to it as the "Equivalence Schema", a synonym introduced by Michael Dummett.[1]
The T-schema is often expressed in natural language, but it can be formalized in many-sorted predicate logic or modal logic; such a formalisation is called a "T-theory." T-theories form the basis of much fundamental work in philosophical logic, where they are applied in several important controversies in analytic philosophy.
As expressed in semi-natural language (where 'S' is the name of the sentence abbreviated to S): 'S' is true if and only if S.
Example: 'snow is white' is true if and only if snow is white.
The inductive definition
By using the schema one can give an inductive definition for the truth of compound sentences. Atomic sentences are assigned truth values disquotationally. For example, the sentence "'Snow is white' is true" becomes materially equivalent with the sentence "snow is white", i.e. 'snow is white' is true if and only if snow is white. The truth of more complex sentences is defined in terms of the components of the sentence:
• A sentence of the form "A and B" is true if and only if A is true and B is true
• A sentence of the form "A or B" is true if and only if A is true or B is true
• A sentence of the form "if A then B" is true if and only if A is false or B is true; see material implication.
• A sentence of the form "not A" is true if and only if A is false
• A sentence of the form "for all x, A(x)" is true if and only if, for every possible value of x, A(x) is true.
• A sentence of the form "for some x, A(x)" is true if and only if, for some possible value of x, A(x) is true.
Predicates for truth that meet all of these criteria are called "satisfaction classes", a notion often defined with respect to a fixed language (such as the language of Peano arithmetic); these classes are considered acceptable definitions for the notion of truth.[2]
Natural languages
Joseph Heath points out[3] that "The analysis of the truth predicate provided by Tarski's Schema T is not capable of handling all occurrences of the truth predicate in natural language. In particular, Schema T treats only "freestanding" uses of the predicate—cases when it is applied to complete sentences." He gives as "obvious problem" the sentence:
• Everything that Bill believes is true.
Heath argues that analyzing this sentence using the T-schema leaves the sentence fragment "everything that Bill believes" on the right-hand side of the logical biconditional.
See also
• Principle of bivalence
• Law of excluded middle
References
1. Künne, Wolfgang (2003). Conceptions of truth. Clarendon Press. p. 18. ISBN 978-0-19-928019-3.
2. H. Kotlarski, Full Satisfaction Classes: A Survey (1991, Notre Dame Journal of Formal Logic, p.573). Accessed 9 September 2022.
3. Heath, Joseph (2001). Communicative action and rational choice. MIT Press. p. 186. ISBN 978-0-262-08291-4.
External links
• Zalta, Edward N. (ed.). "Tarski's Truth Definitions". Stanford Encyclopedia of Philosophy.
• Zalta, Edward N. (ed.). "Consequences of the Semantic Paradoxes". Stanford Encyclopedia of Philosophy.
Truth
General
• Statement
• Propositions
• Truth-bearer
• Truth-maker
Theories
• Coherence
• Consensus
• Constructivist
• Correspondence
• Deflationary
• Epistemic
• Pluralist
• Pragmatic
• Redundancy
• Semantic
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Sequential space
In topology and related fields of mathematics, a sequential space is a topological space whose topology can be completely characterized by its convergent/divergent sequences. They can be thought of as spaces that satisfy a very weak axiom of countability, and all first-countable spaces (especially metric spaces) are sequential.
In any topological space $(X,\tau ),$ if a convergent sequence is contained in a closed set $C,$ then the limit of that sequence must be contained in $C$ as well. This property is known as sequential closure. Sequential spaces are precisely those topological spaces for which sequentially closed sets are in fact closed. (These definitions can also be rephrased in terms of sequentially open sets; see below.) Said differently, any topology can be described in terms of nets (also known as Moore–Smith sequences), but those nets may be "too long" (indexed by too large an ordinal) to compress into a sequence. Sequential spaces are those topological spaces for which nets of countable length (i.e., sequences) suffice to describe the topology.
Any topology can be refined (that is, made finer) to a sequential topology, called the sequential coreflection of $X.$
The related concepts of Fréchet–Urysohn spaces, T-sequential spaces, and $N$-sequential spaces are also defined in terms of how a space's topology interacts with sequences, but have subtly different properties.
Sequential spaces and $N$-sequential spaces were introduced by S. P. Franklin.[1]
History
Although spaces satisfying such properties had implicitly been studied for several years, the first formal definition is due to S. P. Franklin in 1965. Franklin wanted to determine "the classes of topological spaces that can be specified completely by the knowledge of their convergent sequences", and began by investigating the first-countable spaces, for which it was already known that sequences sufficed. Franklin then arrived at the modern definition by abstracting the necessary properties of first-countable spaces.
Preliminary definitions
See also: Filters in topology and Net (mathematics)
Let $X$ be a set and let $x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }$ be a sequence in $X$; that is, a family of elements of $X$, indexed by the natural numbers. In this article, $x_{\bullet }\subseteq S$ means that each element in the sequence $x_{\bullet }$ is an element of $S,$ and, if $f:X\to Y$ is a map, then $f\left(x_{\bullet }\right)=\left(f\left(x_{i}\right)\right)_{i=1}^{\infty }.$ For any index $i,$ the tail of $x_{\bullet }$ starting at $i$ is the sequence
$x_{\geq i}=(x_{i},x_{i+1},x_{i+2},\ldots ){\text{.}}$
A sequence $x_{\bullet }$ is eventually in $S$ if some tail of $x_{\bullet }$ satisfies $x_{\geq i}\subseteq S.$
Let $\tau $ be a topology on $X$ and $x_{\bullet }$ a sequence therein. The sequence $x_{\bullet }$ converges to a point $x\in X,$ written $x_{\bullet }{\overset {\tau }{\to }}x$ (when context allows, $x_{\bullet }\to x$), if, for every neighborhood $U\in \tau $ of $x,$ eventually $x_{\bullet }$ is in $U.$ $x$ is then called a limit point of $x_{\bullet }.$
A function $f:X\to Y$ between topological spaces is sequentially continuous if $x_{\bullet }\to x$ implies $f(x_{\bullet })\to f(x).$
Sequential closure/interior
Let $(X,\tau )$ be a topological space and let $S\subseteq X$ be a subset. The topological closure (resp. topological interior) of $S$ in $(X,\tau )$ is denoted by $\operatorname {cl} _{X}S$ (resp. $\operatorname {int} _{X}S$).
The sequential closure of $S$ in $(X,\tau )$ is the set
$\operatorname {scl} (S)=\left\{x:{\text{there exists a sequence }}s_{\bullet }\subseteq S{\text{ such that }}s_{\bullet }\to x\right\}$
which defines a map, the sequential closure operator, on the power set of $X.$ If necessary for clarity, this set may also be written $\operatorname {scl} _{X}(S)$ or $\operatorname {scl} _{(X,\tau )}(S).$ It is always the case that $\operatorname {scl} _{X}S\subseteq \operatorname {cl} _{X}S,$ but the reverse may fail. The sequential interior of $S$ in $(X,\tau )$ is the set
$\operatorname {sint} (S)=\{s:{\text{whenever }}x_{\bullet }\subseteq X{\text{ and }}x_{\bullet }\to s,{\text{ then }}x_{\bullet }{\text{ is eventually in }}S\}$
(the topological space again indicated with a subscript if necessary).
Sequential closure and interior satisfy many of the nice properties of topological closure and interior: for all subsets $R,S\subseteq X,$
• $\operatorname {scl} _{X}(X\setminus S)=X\setminus \operatorname {sint} _{X}(S)$ and $\operatorname {sint} _{X}(X\setminus S)=X\setminus \operatorname {scl} _{X}(S)$;
Proof
Fix $x\in \operatorname {sint} (X\setminus S).$ If $x\in \operatorname {scl} (S),$ then there exists $s_{\bullet }\subseteq S$ with $s_{\bullet }\to x.$ But by the definition of sequential interior, eventually $s_{\bullet }$ is in $X\setminus S,$ contradicting $s_{\bullet }\subseteq S.$
Conversely, suppose $x\notin \operatorname {sint} (X\setminus S)$; then there exists a sequence $s_{\bullet }\subseteq X$ with $s_{\bullet }\to x$ that is not eventually in $X\setminus S.$ By passing to the subsequence of elements not in $X\setminus S,$ we may assume that $s_{\bullet }\subseteq S.$ But then $x\in \operatorname {scl} (S).$
▮
• $\operatorname {scl} (\emptyset )=\emptyset $ and $\operatorname {sint} (\emptyset )=\emptyset $;
• $ \operatorname {sint} (S)\subseteq S\subseteq \operatorname {scl} (S)$;
• $\operatorname {scl} (R\cup S)=\operatorname {scl} (R)\cup \operatorname {scl} (S)$; and
• $ \operatorname {scl} (S)\subseteq \operatorname {scl} (\operatorname {scl} (S)).$
That is, sequential closure is a preclosure operator. Unlike topological closure, sequential closure is not idempotent: the last containment may be strict. Thus sequential closure is not a (Kuratowski) closure operator.
Sequentially closed and open sets
A set $S$ is sequentially closed if $S=\operatorname {scl} (S)$; equivalently, for all $s_{\bullet }\subseteq S$ and $x\in X$ such that $s_{\bullet }{\overset {\tau }{\to }}x,$ we must have $x\in S.$[note 1]
A set $S$ is defined to be sequentially open if its complement is sequentially closed. Equivalent conditions include:
• $S=\operatorname {sint} (S)$ or
• For all $x_{\bullet }\subseteq X$ and $s\in S$ such that $x_{\bullet }{\overset {\tau }{\to }}s,$ eventually $x_{\bullet }$ is in $S$ (that is, there exists some integer $i$ such that the tail $x_{\geq i}\subseteq S$).
A set $S$ is a sequential neighborhood of a point $x\in X$ if it contains $x$ in its sequential interior; sequential neighborhoods need not be sequentially open (see § T- and N-sequential spaces below).
It is possible for a subset of $X$ to be sequentially open but not open. Similarly, it is possible for there to exist a sequentially closed subset that is not closed.
Sequential spaces and coreflection
As discussed above, sequential closure is not in general idempotent, and so not the closure operator of a topology. One can obtain an idempotent sequential closure via transfinite iteration: for a successor ordinal $\alpha +1,$ define (as usual)
$(\operatorname {scl} )^{\alpha +1}(S)=\operatorname {scl} ((\operatorname {scl} )^{\alpha }(S))$
and, for a limit ordinal $\alpha ,$ define
$(\operatorname {scl} )^{\alpha }(S)=\bigcup _{\beta <\alpha }{(\operatorname {scl} )^{\beta }(S)}{\text{.}}$
This process gives an ordinal-indexed increasing sequence of sets; as it turns out, that sequence always stabilizes by index $\omega _{1}$ (the first uncountable ordinal). The sequential order of $X$ is the minimal ordinal at which, for any choice of $S,$ the above sequence stabilizes.[2]
The transfinite sequential closure of $S$ is the terminal set in the above sequence: $(\operatorname {scl} )^{\omega _{1}}(S).$ The operator $(\operatorname {scl} )^{\omega _{1}}$ is idempotent and thus a closure operator. In particular, it defines a topology, the sequential coreflection. In the sequential coreflection, every sequentially-closed set is closed (and every sequentially-open set is open).[3]
Sequential spaces
A topological space $(X,\tau )$ is sequential if it satisfies any of the following equivalent conditions:
• $\tau $ is its own sequential coreflection.[4]
• Every sequentially open subset of $X$ is open.
• Every sequentially closed subset of $X$ is closed.
• For any subset $S\subseteq X$ that is not closed in $X,$ there exists some[note 2] $x\in \operatorname {cl} (S)\setminus S$ and a sequence in $S$ that converges to $x.$[5]
• (Universal Property) For every topological space $Y,$ a map $f:X\to Y$ is continuous if and only if it is sequentially continuous (if $x_{\bullet }\to x$ then $f\left(x_{\bullet }\right)\to f(x)$).[6]
• $X$ is the quotient of a first-countable space.
• $X$ is the quotient of a metric space.
By taking $Y=X$ and $f$ to be the identity map on $X$ in the universal property, it follows that the class of sequential spaces consists precisely of those spaces whose topological structure is determined by convergent sequences. If two topologies agree on convergent sequences, then they necessarily have the same sequential coreflection. Moreover, a function from $Y$ is sequentially continuous if and only if it is continuous on the sequential coreflection (that is, when pre-composed with $f$).
T- and N-sequential spaces
A T-sequential space is a topological space with sequential order 1, which is equivalent to any of the following conditions:[1]
• The sequential closure (or interior) of every subset of $X$ is sequentially closed (resp. open).
• $\operatorname {scl} $ or $\operatorname {sint} $ are idempotent.
• $ \operatorname {scl} (S)=\bigcap _{{\text{sequentially closed }}C\supseteq S}{C}$ or $ \operatorname {sint} (S)=\bigcup _{{\text{sequentially open }}U\subseteq S}{U}$
• Any sequential neighborhood of $x\in X$ can be shrunk to a sequentially-open set that contains $x$; formally, sequentially-open neighborhoods are a neighborhood basis for the sequential neighborhoods.
• For any $x\in X$ and any sequential neighborhood $N$ of $x,$ there exists a sequential neighborhood $M$ of $x$ such that, for every $m\in M,$ the set $N$ is a sequential neighborhood of $m.$
Being a T-sequential space is incomparable with being a sequential space; there are sequential spaces that are not T-sequential and vice versa. However, a topological space $(X,\tau )$ is called $N$-sequential (or neighborhood-sequential) if it is both sequential and T-sequential. An equivalent condition is that every sequential neighborhood contains an open (classical) neighborhood.[1]
Every first-countable space (and thus every metrizable space) is $N$-sequential. There exist topological vector spaces that are sequential but not $N$-sequential (and thus not T-sequential).[1]
Fréchet–Urysohn spaces
Main article: Fréchet–Urysohn space
A topological space $(X,\tau )$ is called Fréchet–Urysohn if it satisfies any of the following equivalent conditions:
• $X$ is hereditarily sequential; that is, every topological subspace is sequential.
• For every subset $S\subseteq X,$ $\operatorname {scl} _{X}S=\operatorname {cl} _{X}S.$
• For any subset $S\subseteq X$ that is not closed in $X$ and every $x\in \left(\operatorname {cl} _{X}S\right)\setminus S,$ there exists a sequence in $S$ that converges to $x.$
Fréchet–Urysohn spaces are also sometimes said to be "Fréchet", but they should not be confused with Fréchet spaces in functional analysis or with the T1 condition.
Examples and sufficient conditions
Every CW-complex is sequential, as it can be considered as a quotient of a metric space.
The prime spectrum of a commutative Noetherian ring with the Zariski topology is sequential.[7]
Take the real line $\mathbb {R} $ and identify the set $\mathbb {Z} $ of integers to a point. As a quotient of a metric space, the result is sequential, but it is not first countable.
Every first-countable space is Fréchet–Urysohn and every Fréchet-Urysohn space is sequential. Thus every metrizable or pseudometrizable space — in particular, every second-countable space, metric space, or discrete space — is sequential.
Let ${\mathcal {F}}$ be a set of maps from Fréchet–Urysohn spaces to $X.$ Then the final topology that ${\mathcal {F}}$ induces on $X$ is sequential.
A Hausdorff topological vector space is sequential if and only if there exists no strictly finer topology with the same convergent sequences.[8][9]
Spaces that are sequential but not Fréchet-Urysohn
Schwartz space ${\mathcal {S}}\left(\mathbb {R} ^{n}\right)$ and the space $C^{\infty }(U)$ of smooth functions, as discussed in the article on distributions, are both widely used sequential spaces, but are not Fréchet–Urysohn. Indeed, the strong dual spaces of both of these spaces are not Fréchet–Urysohn either.[10][11]
More generally, every infinite-dimensional Montel DF-space is sequential but not Fréchet–Urysohn.
Arens' space is sequential, but not Fréchet–Urysohn.[12][13]
Non-examples (spaces that are not sequential)
The simplest space that is not sequential is the cocountable topology on an uncountable set. Every convergent sequence in such a space is eventually constant; hence every set is sequentially open. But the cocountable topology is not discrete. (One could call the topology "sequentially discrete".)[14]
Let $C_{c}^{k}(U)$ denote the space of $k$-smooth test functions with its canonical topology and let ${\mathcal {D}}'(U)$ denote the space of distributions, the strong dual space of $C_{c}^{\infty }(U)$; neither is sequential (nor even an Ascoli space).[10][11] On the other hand, both $C_{c}^{\infty }(U)$ and ${\mathcal {D}}'(U)$ are Montel spaces[15] and, in the dual space of any Montel space, a sequence of continuous linear functionals converges in the strong dual topology if and only if it converges in the weak* topology (that is, converges pointwise).[10][16]
Consequences
Every sequential space has countable tightness and is compactly generated.
If $f:X\to Y$ is a continuous open surjection between two Hausdorff sequential spaces then the set $\{y:{|f^{-1}(y)|=1}\}\subseteq Y$ of points with unique preimage is closed. (By continuity, so is its preimage in $X,$ the set of all points on which $f$ is injective.)
If $f:X\to Y$ is a surjective map (not necessarily continuous) onto a Hausdorff sequential space $Y$ and ${\mathcal {B}}$ is a basis for the topology on $X,$ then $f:X\to Y$ is an open map if and only if, for every $x\in X,$ basic neighborhood $B\in {\mathcal {B}}$ of $x,$ and sequence $y_{\bullet }=\left(y_{i}\right)_{i=1}^{\infty }\to f(x)$ in $Y,$ there is a subsequence of $y_{\bullet }$ that is eventually in $f(B).$
Categorical properties
The full subcategory Seq of all sequential spaces is closed under the following operations in the category Top of topological spaces:
• Quotients
• Continuous closed or open images
• Sums
• Inductive limits
• Open and closed subspaces
The category Seq is not closed under the following operations in Top:
• Continuous images
• Subspaces
• Finite products
Since they are closed under topological sums and quotients, the sequential spaces form a coreflective subcategory of the category of topological spaces. In fact, they are the coreflective hull of metrizable spaces (that is, the smallest class of topological spaces closed under sums and quotients and containing the metrizable spaces).
The subcategory Seq is a Cartesian closed category with respect to its own product (not that of Top). The exponential objects are equipped with the (convergent sequence)-open topology.
P.I. Booth and A. Tillotson have shown that Seq is the smallest Cartesian closed subcategory of Top containing the underlying topological spaces of all metric spaces, CW-complexes, and differentiable manifolds and that is closed under colimits, quotients, and other "certain reasonable identities" that Norman Steenrod described as "convenient".[17]
Every sequential space is compactly generated, and finite products in Seq coincide with those for compactly generated spaces, since products in the category of compactly generated spaces preserve quotients of metric spaces.
See also
• Axiom of countability – property of certain mathematical objects (usually in a category) that asserts the existence of a countable set with certain properties; without such an axiom, such a set might not exist.
• Closed graph property – Graph of a map closed in the product space
• First-countable space – Topological space where each point has a countable neighbourhood basis
• Fréchet–Urysohn space – Topological space
• Sequence covering map
Notes
1. You cannot simultaneously apply this "test" to infinitely many subsets (for example, you cannot use something akin to the axiom of choice). Not all sequential spaces are Fréchet–Urysohn, but only in those spaces can the closure of a set $S$ be determined without it ever being necessary to consider any set other than $S.$
2. A Fréchet–Urysohn space is defined by the analogous condition for all such $x$:
For any subset $S\subseteq X$ that is not closed in $X,$ for any $x\in \operatorname {cl} _{X}(S)\setminus S,$ there exists a sequence in $S$ that converges to $x.$
Citations
1. Snipes, Ray (1972). "T-sequential topological spaces" (PDF). Fundamenta Mathematicae. 77 (2): 95–98. doi:10.4064/fm-77-2-95-98. ISSN 0016-2736.
• Arhangel'skiĭ, A. V.; Franklin, S. P. (1968). "Ordinal invariants for topological spaces". Michigan Math. J. 15 (3): 313–320. doi:10.1307/mmj/1029000034.
2. Baron, S. (October 1968). "The Coreflective Subcategory of Sequential Spaces". Canadian Mathematical Bulletin. 11 (4): 603–604. doi:10.4153/CMB-1968-074-4. ISSN 0008-4395. S2CID 124685527.
3. "Topology of sequentially open sets is sequential?". Mathematics Stack Exchange.
4. Arkhangel'skii, A.V. and Pontryagin L.S., General Topology I, definition 9 p.12
5. Baron, S.; Leader, Solomon (1966). "Solution to Problem #5299". The American Mathematical Monthly. 73 (6): 677–678. doi:10.2307/2314834. ISSN 0002-9890. JSTOR 2314834.
6. "On sequential properties of Noetherian topological spaces" (PDF). 2004. Retrieved 30 Jul 2023.
7. Wilansky 2013, p. 224.
8. Dudley, R. M., On sequential convergence - Transactions of the American Mathematical Society Vol 112, 1964, pp. 483-507
9. Gabrielyan, Saak (25 Feb 2017). "Topological properties of strict $(LF)$-spaces and strong duals of Montel strict $(LF)$-spaces". arXiv:1702.07867v1 [math.FA].
10. T. Shirai, Sur les Topologies des Espaces de L. Schwartz, Proc. Japan Acad. 35 (1959), 31-36.
11. Engelking 1989, Example 1.6.19
12. Ma, Dan (19 August 2010). "A note about the Arens' space". Retrieved 1 August 2013.
13. math; Sleziak, Martin (Dec 6, 2016). "Example of different topologies with same convergent sequences". Mathematics Stack Exchange. StackOverflow. Retrieved 2022-06-27.
14. "Topological vector space". Encyclopedia of Mathematics. Encyclopedia of Mathematics. Retrieved September 6, 2020. It is a Montel space, hence paracompact, and so normal.
15. Trèves 2006, pp. 351–359.
16. Steenrod 1967
References
• Arkhangel'skii, A.V. and Pontryagin, L.S., General Topology I, Springer-Verlag, New York (1990) ISBN 3-540-18178-4.
• Arkhangel'skii, A V (1966). "Mappings and spaces" (PDF). Russian Mathematical Surveys. 21 (4): 115–162. Bibcode:1966RuMaS..21..115A. doi:10.1070/RM1966v021n04ABEH004169. ISSN 0036-0279. S2CID 250900871. Retrieved 10 February 2021.
• Akiz, Hürmet Fulya; Koçak, Lokman (2019). "Sequentially Hausdorff and full sequentially Hausdorff spaces". Communications Faculty of Science University of Ankara Series A1Mathematics and Statistics. 68 (2): 1724–1732. doi:10.31801/cfsuasmas.424418. ISSN 1303-5991. Retrieved 10 February 2021.
• Boone, James (1973). "A note on mesocompact and sequentially mesocompact spaces". Pacific Journal of Mathematics. 44 (1): 69–74. doi:10.2140/pjm.1973.44.69. ISSN 0030-8730.
• Booth, Peter; Tillotson, J. (1980). "Monoidal closed, Cartesian closed and convenient categories of topological spaces". Pacific Journal of Mathematics. 88 (1): 35–53. doi:10.2140/pjm.1980.88.35. ISSN 0030-8730. Retrieved 10 February 2021.
• Engelking, R., General Topology, Heldermann, Berlin (1989). Revised and completed edition.
• Foged, L. (1985). "A characterization of closed images of metric spaces". Proceedings of the American Mathematical Society. 95 (3): 487–490. doi:10.1090/S0002-9939-1985-0806093-3. ISSN 0002-9939.
• Franklin, S. (1965). "Spaces in which sequences suffice" (PDF). Fundamenta Mathematicae. 57 (1): 107–115. doi:10.4064/fm-57-1-107-115. ISSN 0016-2736.
• Franklin, S. (1967). "Spaces in which sequences suffice II" (PDF). Fundamenta Mathematicae. 61 (1): 51–56. doi:10.4064/fm-61-1-51-56. ISSN 0016-2736. Retrieved 10 February 2021.
• Goreham, Anthony, "Sequential Convergence in Topological Spaces", (2016)
• Gruenhage, Gary; Michael, Ernest; Tanaka, Yoshio (1984). "Spaces determined by point-countable covers". Pacific Journal of Mathematics. 113 (2): 303–332. doi:10.2140/pjm.1984.113.303. ISSN 0030-8730.
• Michael, E.A. (1972). "A quintuple quotient quest". General Topology and Its Applications. 2 (2): 91–138. doi:10.1016/0016-660X(72)90040-2. ISSN 0016-660X.
• Shou, Lin; Chuan, Liu; Mumin, Dai (1997). "Images on locally separable metric spaces". Acta Mathematica Sinica. 13 (1): 1–8. doi:10.1007/BF02560519. ISSN 1439-8516. S2CID 122383748.
• Steenrod, N. E. (1967). "A convenient category of topological spaces". The Michigan Mathematical Journal. 14 (2): 133–152. doi:10.1307/mmj/1028999711. Retrieved 10 February 2021.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
• Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Student's t-distribution
In probability and statistics, Student's t-distribution (or simply the t-distribution) $t_{\nu }$ is a continuous probability distribution that generalizes the standard normal distribution. Like the latter, it is symmetric around zero and bell-shaped.
This article is about the mathematics of Student's t-distribution. For its uses in statistics, see Student's t-test.
Student's t
Parameters: $\nu >0$ degrees of freedom (real)
Support: $x\in (-\infty ,\infty )$
PDF: $\textstyle {\frac {\Gamma \left({\frac {\nu +1}{2}}\right)}{{\sqrt {\nu \pi }}\,\Gamma \left({\frac {\nu }{2}}\right)}}\left(1+{\frac {x^{2}}{\nu }}\right)^{-{\frac {\nu +1}{2}}}$
CDF: ${\frac {1}{2}}+x\,\Gamma \left({\frac {\nu +1}{2}}\right)\,{\frac {{}_{2}F_{1}\left({\frac {1}{2}},{\frac {\nu +1}{2}};{\frac {3}{2}};-{\frac {x^{2}}{\nu }}\right)}{{\sqrt {\pi \nu }}\,\Gamma \left({\frac {\nu }{2}}\right)}},$ where ${}_{2}F_{1}$ is the hypergeometric function
Mean: 0 for $\nu >1$, otherwise undefined
Median: 0
Mode: 0
Variance: $\textstyle {\frac {\nu }{\nu -2}}$ for $\nu >2$, ∞ for $1<\nu \leq 2$, otherwise undefined
Skewness: 0 for $\nu >3$, otherwise undefined
Ex. kurtosis: $\textstyle {\frac {6}{\nu -4}}$ for $\nu >4$, ∞ for $2<\nu \leq 4$, otherwise undefined
Entropy: ${\frac {\nu +1}{2}}\left[\psi \left({\frac {1+\nu }{2}}\right)-\psi \left({\frac {\nu }{2}}\right)\right]+\ln \left[{\sqrt {\nu }}\,\mathrm {B} \left({\frac {\nu }{2}},{\frac {1}{2}}\right)\right]$ (nats), where $\psi $ is the digamma function and $\mathrm {B} $ is the beta function
MGF: undefined
CF: $\textstyle {\frac {K_{\nu /2}\left({\sqrt {\nu }}|t|\right)\cdot \left({\sqrt {\nu }}|t|\right)^{\nu /2}}{\Gamma (\nu /2)2^{\nu /2-1}}}$ for $\nu >0$, where $K_{\nu }(x)$ is the modified Bessel function of the second kind[1]
CVaR (ES): $\mu +s\left({\frac {\nu +T^{-1}(1-p)^{2}\tau (T^{-1}(1-p)^{2})}{(\nu -1)(1-p)}}\right),$ where $T^{-1}$ is the inverse standardized Student t CDF and $\tau $ is the standardized Student t PDF[2]
However, $t_{\nu }$ has heavier tails and the amount of probability mass in the tails is controlled by the parameter $\nu $. For $\nu =1$ the Student's t distribution $t_{\nu }$ becomes the standard Cauchy distribution, whereas for $\nu \rightarrow \infty $ it becomes the standard normal distribution $N(0,1)$.
The Student's t-distribution plays a role in a number of widely used statistical analyses, including Student's t-test for assessing the statistical significance of the difference between two sample means, the construction of confidence intervals for the difference between two population means, and in linear regression analysis.
In the form of the location-scale t-distribution $lst(\mu ,\tau ^{2},\nu )$ it generalizes the normal distribution and also arises in the Bayesian analysis of data from a normal family as a compound distribution when marginalizing over the variance parameter.
History and etymology
In statistics, the t-distribution was first derived as a posterior distribution in 1876 by Helmert[3][4][5] and Lüroth.[6][7][8] The t-distribution also appeared in a more general form as Pearson Type IV distribution in Karl Pearson's 1895 paper.[9]
In the English-language literature, the distribution takes its name from William Sealy Gosset's 1908 paper in Biometrika under the pseudonym "Student".[10] One version of the origin of the pseudonym is that Gosset's employer preferred staff to use pen names when publishing scientific papers instead of their real name, so he used the name "Student" to hide his identity. Another version is that Guinness did not want their competitors to know that they were using the t-test to determine the quality of raw material.[11][12]
Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley where sample sizes might be as few as 3. Gosset's paper refers to the distribution as the "frequency distribution of standard deviations of samples drawn from a normal population". It became well known through the work of Ronald Fisher, who called the distribution "Student's distribution" and represented the test value with the letter t.[13][14]
Definition
Probability density function
Student's t-distribution has the probability density function (PDF) given by
$f(t)={\frac {\Gamma ({\frac {\nu +1}{2}})}{{\sqrt {\nu \pi }}\,\Gamma ({\frac {\nu }{2}})}}\left(1+{\frac {t^{2}}{\nu }}\right)^{-(\nu +1)/2},$
where $\nu $ is the number of degrees of freedom and $\Gamma $ is the gamma function. This may also be written as
$f(t)={\frac {1}{{\sqrt {\nu }}\,\mathrm {B} ({\frac {1}{2}},{\frac {\nu }{2}})}}\left(1+{\frac {t^{2}}{\nu }}\right)^{-(\nu +1)/2},$
where B is the Beta function. In particular for integer valued degrees of freedom $\nu $ we have:
For $\nu >1$ even,
${\frac {\Gamma ({\frac {\nu +1}{2}})}{{\sqrt {\nu \pi }}\,\Gamma ({\frac {\nu }{2}})}}={\frac {(\nu -1)(\nu -3)\cdots 5\cdot 3}{2{\sqrt {\nu }}(\nu -2)(\nu -4)\cdots 4\cdot 2\,}}\cdot $
For $\nu >1$ odd,
${\frac {\Gamma ({\frac {\nu +1}{2}})}{{\sqrt {\nu \pi }}\,\Gamma ({\frac {\nu }{2}})}}={\frac {(\nu -1)(\nu -3)\cdots 4\cdot 2}{\pi {\sqrt {\nu }}(\nu -2)(\nu -4)\cdots 5\cdot 3\,}}\cdot \!$
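As a numerical sanity check (a SciPy-based sketch added for illustration), the gamma-function form of the density can be compared with a library implementation:

```python
import math
from scipy.stats import t

def t_pdf(x, nu):
    """Density from the gamma-function formula above."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1.0 + x * x / nu) ** (-(nu + 1) / 2)

for nu in (1, 2, 3, 10):
    for x in (-2.0, 0.0, 0.5, 3.0):
        assert abs(t_pdf(x, nu) - t.pdf(x, df=nu)) < 1e-9

print(t_pdf(0.0, 3))   # 2 / (pi * sqrt(3)) ~ 0.3676, the nu = 3 constant
```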
The probability density function is symmetric, and its overall shape resembles the bell shape of a normally distributed variable with mean 0 and variance 1, except that it is a bit lower and wider. As the number of degrees of freedom grows, the t-distribution approaches the normal distribution with mean 0 and variance 1. For this reason ${\nu }$ is also known as the normality parameter.[15]
The following images show the density of the t-distribution for increasing values of $\nu $. The normal distribution is shown as a blue line for comparison. Note that the t-distribution (red line) becomes closer to the normal distribution as $\nu $ increases.
Density of the t-distribution (red) for 1, 2, 3, 5, 10, and 30 degrees of freedom compared to the standard normal distribution (blue); previous plots are shown in green.
Cumulative distribution function
The cumulative distribution function (CDF) can be written in terms of I, the regularized incomplete beta function. For t > 0,
$F(t)=\int _{-\infty }^{t}f(u)\,du=1-{\tfrac {1}{2}}I_{x(t)}\left({\tfrac {\nu }{2}},{\tfrac {1}{2}}\right),$
where
$x(t)={\frac {\nu }{t^{2}+\nu }}.$
Other values would be obtained by symmetry. An alternative formula, valid for $t^{2}<\nu $, is
$\int _{-\infty }^{t}f(u)\,du={\tfrac {1}{2}}+t{\frac {\Gamma \left({\tfrac {1}{2}}(\nu +1)\right)}{{\sqrt {\pi \nu }}\,\Gamma \left({\tfrac {\nu }{2}}\right)}}\,{}_{2}F_{1}\left({\tfrac {1}{2}},{\tfrac {1}{2}}(\nu +1);{\tfrac {3}{2}};-{\tfrac {t^{2}}{\nu }}\right),$
where 2F1 is a particular case of the hypergeometric function.
For information on its inverse cumulative distribution function, see quantile function § Student's t-distribution.
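The incomplete-beta form of the CDF is straightforward to evaluate numerically. Below is a minimal SciPy sketch (added for illustration) that uses the symmetry $F(-t)=1-F(t)$ for negative arguments:

```python
from scipy.special import betainc
from scipy.stats import t

def t_cdf(x, nu):
    """CDF via the regularized incomplete beta function I (t > 0 branch),
    extended to negative x by the symmetry F(-x) = 1 - F(x)."""
    if x < 0:
        return 1.0 - t_cdf(-x, nu)
    z = nu / (x * x + nu)
    return 1.0 - 0.5 * betainc(nu / 2.0, 0.5, z)

for nu in (1, 2, 5, 30):
    for x in (-1.5, 0.0, 0.7, 4.0):
        assert abs(t_cdf(x, nu) - t.cdf(x, df=nu)) < 1e-9

print(t_cdf(1.0, 1))   # 0.75: the Cauchy case, 1/2 + arctan(1)/pi
```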
Special cases
Certain values of $\nu $ give a simple form for Student's t-distribution.
• $\nu =1$: PDF ${\frac {1}{\pi (1+t^{2})}}$, CDF ${\frac {1}{2}}+{\frac {1}{\pi }}\arctan(t)$ (see Cauchy distribution)
• $\nu =2$: PDF ${\frac {1}{2{\sqrt {2}}\left(1+{\frac {t^{2}}{2}}\right)^{3/2}}}$, CDF ${\frac {1}{2}}+{\frac {t}{2{\sqrt {2}}{\sqrt {1+{\frac {t^{2}}{2}}}}}}$
• $\nu =3$: PDF ${\frac {2}{\pi {\sqrt {3}}\left(1+{\frac {t^{2}}{3}}\right)^{2}}}$, CDF ${\frac {1}{2}}+{\frac {1}{\pi }}{\left[{\frac {1}{\sqrt {3}}}{\frac {t}{1+{\frac {t^{2}}{3}}}}+\arctan \left({\frac {t}{\sqrt {3}}}\right)\right]}$
• $\nu =4$: PDF ${\frac {3}{8\left(1+{\frac {t^{2}}{4}}\right)^{5/2}}}$, CDF ${\frac {1}{2}}+{\frac {3}{8}}{\frac {t}{\sqrt {1+{\frac {t^{2}}{4}}}}}{\left[1-{\frac {1}{12}}{\frac {t^{2}}{1+{\frac {t^{2}}{4}}}}\right]}$
• $\nu =5$: PDF ${\frac {8}{3\pi {\sqrt {5}}\left(1+{\frac {t^{2}}{5}}\right)^{3}}}$, CDF ${\frac {1}{2}}+{\frac {1}{\pi }}{\left[{\frac {t}{{\sqrt {5}}\left(1+{\frac {t^{2}}{5}}\right)}}\left(1+{\frac {2}{3\left(1+{\frac {t^{2}}{5}}\right)}}\right)+\arctan \left({\frac {t}{\sqrt {5}}}\right)\right]}$
• $\nu =\infty $: PDF ${\frac {1}{\sqrt {2\pi }}}e^{-t^{2}/2}$, CDF ${\frac {1}{2}}{\left[1+\operatorname {erf} \left({\frac {t}{\sqrt {2}}}\right)\right]}$ (see normal distribution and error function)
Moments
For $\nu >1$, the raw moments of the t-distribution are
$\operatorname {E} (T^{k})={\begin{cases}0&k{\text{ odd}},\quad 0<k<\nu \\{\frac {1}{{\sqrt {\pi }}\Gamma \left({\frac {\nu }{2}}\right)}}\left[\Gamma \left({\frac {k+1}{2}}\right)\Gamma \left({\frac {\nu -k}{2}}\right)\nu ^{\frac {k}{2}}\right]&k{\text{ even}},\quad 0<k<\nu .\\\end{cases}}$
Moments of order $\nu $ or higher do not exist.[16]
The term for $0<k<\nu $, k even, may be simplified using the properties of the gamma function to
$\operatorname {E} (T^{k})=\nu ^{\frac {k}{2}}\,\prod _{i=1}^{k/2}{\frac {2i-1}{\nu -2i}}\qquad k{\text{ even}},\quad 0<k<\nu .$
For a t-distribution with $\nu $ degrees of freedom, the expected value is 0 if $\nu >1$, and its variance is ${\frac {\nu }{\nu -2}}$ if $\nu >2$. The skewness is 0 if $\nu >3$ and the excess kurtosis is ${\frac {6}{\nu -4}}$ if $\nu >4$.
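As a sanity check on the even-moment product formula, the following sketch (assuming SciPy is available; the helper name is illustrative) compares it with SciPy's numerically computed moments:

```python
from scipy import stats

def t_raw_moment(k, nu):
    """Raw moment E[T^k] for even k with 0 < k < ν, via the product formula above."""
    assert k % 2 == 0 and 0 < k < nu
    m = nu ** (k / 2)
    for i in range(1, k // 2 + 1):
        m *= (2 * i - 1) / (nu - 2 * i)
    return m

nu = 7
print(t_raw_moment(2, nu), stats.t(nu).moment(2))  # variance ν/(ν−2) = 1.4
print(t_raw_moment(4, nu), stats.t(nu).moment(4))  # fourth moment 3ν²/((ν−2)(ν−4)) = 9.8
```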
Location-scale t-distribution
Location-scale transformation
Student's t-distribution generalizes to the three-parameter location-scale t-distribution $lst(\mu ,\tau ^{2},\nu )$ by introducing a location parameter $\mu $ and a scale parameter $\tau $. With
$T\sim t_{\nu }$
and location-scale family transformation
$X=\mu +\tau T$
we get
$X\sim lst(\mu ,\tau ^{2},\nu )$
The resulting distribution is also called the non-standardized Student's t-distribution.
Density and first two moments
The location-scale t-distribution has a density defined by:[17]
$p(x\mid \nu ,\mu ,\tau )={\frac {\Gamma ({\frac {\nu +1}{2}})}{\Gamma ({\frac {\nu }{2}}){\sqrt {\pi \nu }}\tau \,}}\left(1+{\frac {1}{\nu }}\left({\frac {x-\mu }{\tau }}\right)^{2}\right)^{-(\nu +1)/2}$
Equivalently, the density can be written in terms of $\tau ^{2}$:
$p(x\mid \nu ,\mu ,\tau ^{2})={\frac {\Gamma ({\frac {\nu +1}{2}})}{\Gamma ({\frac {\nu }{2}}){\sqrt {\pi \nu \tau ^{2}}}}}\left(1+{\frac {1}{\nu }}{\frac {(x-\mu )^{2}}{{\tau }^{2}}}\right)^{-(\nu +1)/2}$
Other properties of this version of the distribution are:[17]
${\begin{aligned}\operatorname {E} (X)&=\mu &{\text{ for }}\nu >1\\\operatorname {var} (X)&=\tau ^{2}{\frac {\nu }{\nu -2}}&{\text{ for }}\nu >2\\\operatorname {mode} (X)&=\mu \end{aligned}}$
Special cases
• If $X$ follows a location-scale t-distribution $X\sim \mathrm {lst} \left(\mu ,\tau ^{2},\nu \right)$ then as $\nu \rightarrow \infty $, $X$ becomes normally distributed $X\sim \mathrm {N} \left(\mu ,\tau ^{2}\right)$ with mean $\mu $ and variance $\tau ^{2}$.
• The location-scale t-distribution $\mathrm {lst} \left(\mu ,\tau ^{2},\nu =1\right)$ with $\nu =1$ degree of freedom is equivalent to the Cauchy distribution $\mathrm {Cau} \left(\mu ,\tau \right)$.
• The location-scale t-distribution $\mathrm {lst} \left(\mu =0,\tau ^{2}=1,\nu \right)$ with $\mu =0$ and $\tau ^{2}=1$ reduces to the Student's t-distribution $t_{\nu }$.
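The location-scale transformation above is straightforward to exercise numerically. A minimal sketch (assuming NumPy and SciPy; the seed and parameter values are arbitrary) draws standard t variates, applies $X=\mu +\tau T$, and checks the stated mean and variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, tau, nu = 2.0, 0.5, 5

# Location-scale transform X = μ + τT of a standard t variate ...
samples = mu + tau * rng.standard_t(nu, size=100_000)

# ... matches SciPy's t distribution with loc/scale parameters.
print(samples.mean(), mu)                        # ≈ μ for ν > 1
print(samples.var(), tau**2 * nu / (nu - 2))     # ≈ τ²ν/(ν−2) for ν > 2
print(stats.t(df=nu, loc=mu, scale=tau).var())   # same variance, closed form
```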
How the t-distribution arises (characterization)
Sampling distribution of t-statistic
The t-distribution arises as the sampling distribution of the t-statistic. Below, the one-sample t-statistic is discussed; for the corresponding two-sample t-statistic see Student's t-test.
Unbiased variance estimate
Let $x_{1},\ldots ,x_{n}\sim N(\mu ,\sigma ^{2})$ be independent and identically distributed samples from a normal distribution with mean $\mu $ and variance $\sigma ^{2}$. The sample mean and unbiased sample variance are given by:
${\begin{aligned}{\bar {x}}&={\frac {x_{1}+\cdots +x_{n}}{n}},\\[5pt]s^{2}&={\frac {1}{n-1}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}.\end{aligned}}$
The resulting (one sample) t-statistic is given by
$t={\frac {{\bar {x}}-\mu }{\sqrt {s^{2}/n}}},$
which is distributed according to a Student's t-distribution with $n-1$ degrees of freedom: $t\sim t_{n-1}$.
Thus for inference purposes the t-statistic is a useful "pivotal quantity" in the case when the mean and variance $(\mu ,\sigma ^{2})$ are unknown population parameters, in the sense that the t-statistic has then a probability distribution that depends on neither $\mu $ nor $\sigma ^{2}$.
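A short simulation illustrates this sampling distribution (a sketch assuming NumPy and SciPy; the parameter values are arbitrary): the empirical quantiles of the simulated t-statistics should match those of $t_{n-1}$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, n = 3.0, 2.0, 8

# Simulate many one-sample t-statistics from normal data ...
x = rng.normal(mu, sigma, size=(100_000, n))
t_stats = (x.mean(axis=1) - mu) / np.sqrt(x.var(axis=1, ddof=1) / n)

# ... and compare an empirical quantile with the t_{n-1} quantile.
print(np.quantile(t_stats, 0.95))     # empirical 95th percentile
print(stats.t(df=n - 1).ppf(0.95))    # theoretical value ≈ 1.895 (table: ν = 7)
```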
ML variance estimate
Instead of the unbiased estimate $s^{2}$ we may also use the maximum likelihood estimate
$s_{ML}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}$
yielding the statistic
$t_{ML}={\frac {{\bar {x}}-\mu }{\sqrt {s_{ML}^{2}/n}}}={\sqrt {\frac {n}{n-1}}}t.$
This is distributed according to the location-scale t-distribution:
$t_{ML}\sim lst(0,\tau ^{2}=n/(n-1),n-1).$
Compound distribution of normal with inverse gamma distribution
The location-scale t-distribution results from compounding a Gaussian distribution (normal distribution) with mean $\mu $ and unknown variance, with an inverse gamma distribution placed over the variance with parameters $a=\nu /2$ and $b=\nu {\tau }^{2}/2$. In other words, the random variable X is assumed to have a Gaussian distribution with an unknown variance distributed as inverse gamma, and then the variance is marginalized out (integrated out).
Equivalently, this distribution results from compounding a Gaussian distribution with a scaled-inverse-chi-squared distribution with parameters $\nu $ and ${\tau }^{2}$. The scaled-inverse-chi-squared distribution is exactly the same distribution as the inverse gamma distribution, but with a different parameterization, i.e. $\nu =2a,\;{\tau }^{2}={\frac {b}{a}}$.
The reason for the usefulness of this characterization is that in Bayesian statistics the inverse gamma distribution is the conjugate prior distribution of the variance of a Gaussian distribution. As a result, the location-scale t-distribution arises naturally in many Bayesian inference problems.[18]
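This compounding construction can be checked by simulation. The sketch below (assuming SciPy; parameter values are arbitrary) draws a variance from the inverse gamma distribution with $a=\nu /2$ and $b=\nu \tau ^{2}/2$, then a normal variate given that variance, and compares the marginal quantiles with the location-scale t-distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, tau, nu = 0.0, 1.5, 4
n = 200_000

# Draw a variance from InvGamma(a = ν/2, scale b = ντ²/2), then a normal given it.
var = stats.invgamma(a=nu / 2, scale=nu * tau**2 / 2).rvs(n, random_state=rng)
x = rng.normal(mu, np.sqrt(var))

# Marginally, x should follow the location-scale t-distribution lst(μ, τ², ν).
ref = stats.t(df=nu, loc=mu, scale=tau)
print(np.quantile(x, [0.1, 0.5, 0.9]))   # empirical quantiles
print(ref.ppf([0.1, 0.5, 0.9]))          # theoretical quantiles
```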
Maximum entropy distribution
Student's t-distribution is the maximum entropy probability distribution for a random variate X for which $\operatorname {E} (\ln(\nu +X^{2}))$ is fixed.[19]
Further properties
Monte Carlo sampling
There are various approaches to constructing random samples from Student's t-distribution. The matter depends on whether the samples are required on a stand-alone basis, or are to be constructed by application of a quantile function to uniform samples, e.g., in multi-dimensional applications based on copula dependency. In the case of stand-alone sampling, an extension of the Box–Muller method and its polar form is easily deployed.[20] It has the merit that it applies equally well to all real positive degrees of freedom, ν, while many other candidate methods fail if ν is close to zero.[20]
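A minimal sketch of such a polar generator is given below (Python, standard library only; the specific transformation follows the polar construction described in the cited reference and should be read as an assumption rather than a verbatim transcription of it):

```python
import random

def t_polar(nu, rng=random):
    """One Student-t variate via a polar (Box–Muller-style) rejection method."""
    while True:
        u = rng.uniform(-1.0, 1.0)
        v = rng.uniform(-1.0, 1.0)
        w = u * u + v * v
        if 0.0 < w <= 1.0:   # accept points inside the unit disk
            # ν(w^(−2/ν) − 1) → −2 ln w as ν → ∞, recovering the normal case
            return u * ((nu * (w ** (-2.0 / nu) - 1.0)) / w) ** 0.5

random.seed(3)
sample = sorted(t_polar(5) for _ in range(50_000))
print(sample[int(0.95 * len(sample))])   # ≈ 2.015 (table: ν = 5, one-sided 95%)
```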
Integral of Student's probability density function and p-value
The function A(t | ν) is the integral of Student's probability density function, f(t) between −t and t, for t ≥ 0. It thus gives the probability that a value of t less than that calculated from observed data would occur by chance. Therefore, the function A(t | ν) can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of t and the probability of its occurrence if the two sets of data were drawn from the same population. This is used in a variety of situations, particularly in t-tests. For the statistic t, with ν degrees of freedom, A(t | ν) is the probability that t would be less than the observed value if the two means were the same (provided that the smaller mean is subtracted from the larger, so that t ≥ 0). It can be easily calculated from the cumulative distribution function Fν(t) of the t-distribution:
$A(t\mid \nu )=F_{\nu }(t)-F_{\nu }(-t)=1-I_{\frac {\nu }{\nu +t^{2}}}\left({\frac {\nu }{2}},{\frac {1}{2}}\right),$
where $I_{x}(a,b)$ is the regularized incomplete beta function.
For statistical hypothesis testing this function is used to construct the p-value.
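For example, a minimal sketch (assuming SciPy; the helper name is illustrative) computes A(t | ν) from the regularized incomplete beta function and derives a two-sided p-value:

```python
from scipy import special, stats

def A(t, nu):
    """Two-sided probability A(t | ν) = P(−t < T < t) for t ≥ 0."""
    return 1 - special.betainc(nu / 2, 0.5, nu / (nu + t * t))

t_obs, nu = 2.228, 10
print(1 - A(t_obs, nu))            # two-sided p-value ≈ 0.05 (table: ν = 10, 95%)
print(2 * stats.t(nu).sf(t_obs))   # same value via the survival function
```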
Related distributions
• The noncentral t-distribution generalizes the t-distribution to include a noncentrality parameter. Unlike the nonstandardized t-distributions, the noncentral distributions are not symmetric (the median is not the same as the mode).
• The discrete Student's t-distribution is defined by its probability mass function at r being proportional to:[21]
$\prod _{j=1}^{k}{\frac {1}{(r+j+a)^{2}+b^{2}}}\quad \quad r=\ldots ,-1,0,1,\ldots .$
Here a, b, and k are parameters. This distribution arises from the construction of a system of discrete distributions similar to that of the Pearson distributions for continuous distributions.[22]
• One can generate Student-t samples by taking the ratio of a standard normal variable to the square root of an independent χ²-distributed variable divided by its degrees of freedom (a numerical sketch follows this list). If the normal distribution is replaced by, e.g., the Irwin–Hall distribution, one obtains overall a symmetric 4-parameter distribution, which includes the normal, the uniform, the triangular, the Student-t and the Cauchy distributions. This is also more flexible than some other symmetric generalizations of the normal distribution.
• The t-distribution is an instance of a ratio distribution.
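The ratio construction mentioned above is a one-liner in practice. A minimal sketch (assuming NumPy and SciPy; the seed and parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
nu, n = 6, 100_000

# T = Z / sqrt(V/ν) with Z standard normal and V an independent chi-squared(ν).
z = rng.standard_normal(n)
v = rng.chisquare(nu, n)
t = z / np.sqrt(v / nu)

print(np.quantile(t, 0.975))    # ≈ 2.447 (table: ν = 6, one-sided 97.5%)
print(stats.t(nu).ppf(0.975))   # theoretical quantile
```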
Uses
In frequentist statistical inference
Student's t-distribution arises in a variety of statistical estimation problems where the goal is to estimate an unknown parameter, such as a mean value, in a setting where the data are observed with additive errors. If (as in nearly all practical statistical work) the population standard deviation of these errors is unknown and has to be estimated from the data, the t-distribution is often used to account for the extra uncertainty that results from this estimation. In most such problems, if the standard deviation of the errors were known, a normal distribution would be used instead of the t-distribution.
Confidence intervals and hypothesis tests are two statistical procedures in which the quantiles of the sampling distribution of a particular statistic (e.g. the standard score) are required. In any situation where this statistic is a linear function of the data, divided by the usual estimate of the standard deviation, the resulting quantity can be rescaled and centered to follow Student's t-distribution. Statistical analyses involving means, weighted means, and regression coefficients all lead to statistics having this form.
Quite often, textbook problems will treat the population standard deviation as if it were known and thereby avoid the need to use the Student's t-distribution. These problems are generally of two kinds: (1) those in which the sample size is so large that one may treat a data-based estimate of the variance as if it were certain, and (2) those that illustrate mathematical reasoning, in which the problem of estimating the standard deviation is temporarily ignored because that is not the point that the author or instructor is then explaining.
Hypothesis testing
A number of statistics can be shown to have t-distributions for samples of moderate size under null hypotheses that are of interest, so that the t-distribution forms the basis for significance tests. For example, the distribution of Spearman's rank correlation coefficient ρ in the null case (zero correlation) is well approximated by the t-distribution for sample sizes above about 20.
Confidence intervals
Suppose the number A is so chosen that
$\Pr(-A<T<A)=0.9,$
when T has a t-distribution with n − 1 degrees of freedom. By symmetry, this is the same as saying that A satisfies
$\Pr(T<A)=0.95,$
so A is the "95th percentile" of this probability distribution, or $A=t_{(0.05,n-1)}$. Then
$\Pr \left(-A<{\frac {{\overline {X}}_{n}-\mu }{S_{n}/{\sqrt {n}}}}<A\right)=0.9,$
and this is equivalent to
$\Pr \left({\overline {X}}_{n}-A{\frac {S_{n}}{\sqrt {n}}}<\mu <{\overline {X}}_{n}+A{\frac {S_{n}}{\sqrt {n}}}\right)=0.9.$
Therefore, the interval whose endpoints are
${\overline {X}}_{n}\pm A{\frac {S_{n}}{\sqrt {n}}}$
is a 90% confidence interval for μ. Hence, if we find the mean of a set of observations that we can reasonably expect to have a normal distribution, we can use the t-distribution to examine whether the confidence limits on that mean include some theoretically predicted value, such as the value predicted under a null hypothesis.
It is this result that is used in the Student's t-tests: since the difference between the means of samples from two normal distributions is itself distributed normally, the t-distribution can be used to examine whether that difference can reasonably be supposed to be zero.
If the data are normally distributed, the one-sided (1 − α)-upper confidence limit (UCL) of the mean, can be calculated using the following equation:
$\mathrm {UCL} _{1-\alpha }={\overline {X}}_{n}+t_{\alpha ,n-1}{\frac {S_{n}}{\sqrt {n}}}.$
In other words, with ${\overline {X}}_{n}$ denoting the mean of the set of observations, the probability that the true mean of the distribution lies below $\mathrm {UCL} _{1-\alpha }$ equals the confidence level $1-\alpha $.
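For example, a minimal sketch of the one-sided UCL computation (assuming SciPy; the sample summaries are hypothetical):

```python
from math import sqrt
from scipy import stats

n, xbar, s2 = 25, 52.3, 16.0   # hypothetical sample size, mean, and variance
alpha = 0.05

t_crit = stats.t(df=n - 1).ppf(1 - alpha)   # one-sided critical value t_{α, n−1}
ucl = xbar + t_crit * sqrt(s2 / n)
print(ucl)                                  # 95% upper confidence limit for μ
```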
Prediction intervals
The t-distribution can be used to construct a prediction interval for an unobserved sample from a normal distribution with unknown mean and variance.
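A standard two-sided prediction interval for a single future observation is ${\overline {X}}_{n}\pm t_{\alpha /2,n-1}\,S_{n}{\sqrt {1+1/n}}$; the sketch below computes it (assuming SciPy; the sample summaries are hypothetical):

```python
from math import sqrt
from scipy import stats

n, xbar, s = 20, 10.0, 1.5   # hypothetical sample size, mean, standard deviation
alpha = 0.10

# Two-sided prediction interval: X̄ ± t_{α/2, n−1} · s · sqrt(1 + 1/n)
t_crit = stats.t(df=n - 1).ppf(1 - alpha / 2)
half = t_crit * s * sqrt(1 + 1 / n)
print(xbar - half, xbar + half)   # 90% prediction interval for one new observation
```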
In Bayesian statistics
The Student's t-distribution, especially in its three-parameter (location-scale) version, arises frequently in Bayesian statistics as a result of its connection with the normal distribution. Whenever the variance of a normally distributed random variable is unknown and a conjugate prior placed over it follows an inverse gamma distribution, the resulting marginal distribution of the variable will follow a Student's t-distribution. Equivalent constructions, with the same results, involve a conjugate scaled-inverse-chi-squared distribution over the variance, or a conjugate gamma distribution over the precision. If an improper prior proportional to 1/σ² is placed over the variance, the t-distribution also arises. This is the case regardless of whether the mean of the normally distributed variable is known, is unknown and distributed according to a conjugate normally distributed prior, or is unknown and distributed according to an improper constant prior.
Related situations that also produce a t-distribution are:
• The marginal posterior distribution of the unknown mean of a normally distributed variable, with unknown prior mean and variance following the above model.
• The prior predictive distribution and posterior predictive distribution of a new normally distributed data point when a series of independent identically distributed normally distributed data points have been observed, with prior mean and variance as in the above model.
Robust parametric modeling
The t-distribution is often used as an alternative to the normal distribution as a model for data that have heavier tails than the normal distribution allows for; see, e.g., Lange et al.[23] The classical approach was to identify outliers (e.g., using Grubbs's test) and exclude or downweight them in some way. However, it is not always easy to identify outliers (especially in high dimensions), and the t-distribution is a natural choice of model for such data, providing a parametric approach to robust statistics.
A Bayesian account can be found in Gelman et al.[24] The degrees of freedom parameter controls the kurtosis of the distribution and is correlated with the scale parameter. The likelihood can have multiple local maxima and, as such, it is often necessary to fix the degrees of freedom at a fairly low value and estimate the other parameters taking this as given. Some authors report that values between 3 and 9 are often good choices. Venables and Ripley suggest that a value of 5 is often a good choice.
Student's t-process
For practical regression and prediction needs, Student's t-processes were introduced; they are generalisations of the Student t-distribution to functions. A Student's t-process is constructed from Student t-distributions in the same way that a Gaussian process is constructed from Gaussian distributions. For a Gaussian process, every finite set of values has a multivariate Gaussian distribution. Analogously, $X(t)$ is a Student t-process on an interval $I=[a,b]$ if the corresponding values of the process $X(t_{1}),...,X(t_{n})$ ($t_{i}\in I$) have a joint multivariate Student t-distribution.[25] These processes are used for regression, prediction, Bayesian optimization and related problems. For multivariate regression and multi-output prediction, multivariate Student t-processes have been introduced and used.[26]
Table of selected values
The following table lists values for t-distributions with ν degrees of freedom for a range of one-sided or two-sided critical regions. The first column is ν, the percentages along the top are confidence levels, and the numbers in the body of the table are the $t_{\alpha ,n-1}$ factors described in the section on confidence intervals.
The last row with infinite ν gives critical points for a normal distribution since a t-distribution with infinitely many degrees of freedom is a normal distribution. (See Related distributions above).
One-sided 75% 80% 85% 90% 95% 97.5% 99% 99.5% 99.75% 99.9% 99.95%
Two-sided 50% 60% 70% 80% 90% 95% 98% 99% 99.5% 99.8% 99.9%
1 1.000 1.376 1.963 3.078 6.314 12.706 31.821 63.657 127.321 318.309 636.619
2 0.816 1.061 1.386 1.886 2.920 4.303 6.965 9.925 14.089 22.327 31.599
3 0.765 0.978 1.250 1.638 2.353 3.182 4.541 5.841 7.453 10.215 12.924
4 0.741 0.941 1.190 1.533 2.132 2.776 3.747 4.604 5.598 7.173 8.610
5 0.727 0.920 1.156 1.476 2.015 2.571 3.365 4.032 4.773 5.893 6.869
6 0.718 0.906 1.134 1.440 1.943 2.447 3.143 3.707 4.317 5.208 5.959
7 0.711 0.896 1.119 1.415 1.895 2.365 2.998 3.499 4.029 4.785 5.408
8 0.706 0.889 1.108 1.397 1.860 2.306 2.896 3.355 3.833 4.501 5.041
9 0.703 0.883 1.100 1.383 1.833 2.262 2.821 3.250 3.690 4.297 4.781
10 0.700 0.879 1.093 1.372 1.812 2.228 2.764 3.169 3.581 4.144 4.587
11 0.697 0.876 1.088 1.363 1.796 2.201 2.718 3.106 3.497 4.025 4.437
12 0.695 0.873 1.083 1.356 1.782 2.179 2.681 3.055 3.428 3.930 4.318
13 0.694 0.870 1.079 1.350 1.771 2.160 2.650 3.012 3.372 3.852 4.221
14 0.692 0.868 1.076 1.345 1.761 2.145 2.624 2.977 3.326 3.787 4.140
15 0.691 0.866 1.074 1.341 1.753 2.131 2.602 2.947 3.286 3.733 4.073
16 0.690 0.865 1.071 1.337 1.746 2.120 2.583 2.921 3.252 3.686 4.015
17 0.689 0.863 1.069 1.333 1.740 2.110 2.567 2.898 3.222 3.646 3.965
18 0.688 0.862 1.067 1.330 1.734 2.101 2.552 2.878 3.197 3.610 3.922
19 0.688 0.861 1.066 1.328 1.729 2.093 2.539 2.861 3.174 3.579 3.883
20 0.687 0.860 1.064 1.325 1.725 2.086 2.528 2.845 3.153 3.552 3.850
21 0.686 0.859 1.063 1.323 1.721 2.080 2.518 2.831 3.135 3.527 3.819
22 0.686 0.858 1.061 1.321 1.717 2.074 2.508 2.819 3.119 3.505 3.792
23 0.685 0.858 1.060 1.319 1.714 2.069 2.500 2.807 3.104 3.485 3.767
24 0.685 0.857 1.059 1.318 1.711 2.064 2.492 2.797 3.091 3.467 3.745
25 0.684 0.856 1.058 1.316 1.708 2.060 2.485 2.787 3.078 3.450 3.725
26 0.684 0.856 1.058 1.315 1.706 2.056 2.479 2.779 3.067 3.435 3.707
27 0.684 0.855 1.057 1.314 1.703 2.052 2.473 2.771 3.057 3.421 3.690
28 0.683 0.855 1.056 1.313 1.701 2.048 2.467 2.763 3.047 3.408 3.674
29 0.683 0.854 1.055 1.311 1.699 2.045 2.462 2.756 3.038 3.396 3.659
30 0.683 0.854 1.055 1.310 1.697 2.042 2.457 2.750 3.030 3.385 3.646
40 0.681 0.851 1.050 1.303 1.684 2.021 2.423 2.704 2.971 3.307 3.551
50 0.679 0.849 1.047 1.299 1.676 2.009 2.403 2.678 2.937 3.261 3.496
60 0.679 0.848 1.045 1.296 1.671 2.000 2.390 2.660 2.915 3.232 3.460
80 0.678 0.846 1.043 1.292 1.664 1.990 2.374 2.639 2.887 3.195 3.416
100 0.677 0.845 1.042 1.290 1.660 1.984 2.364 2.626 2.871 3.174 3.390
120 0.677 0.845 1.041 1.289 1.658 1.980 2.358 2.617 2.860 3.160 3.373
∞ 0.674 0.842 1.036 1.282 1.645 1.960 2.326 2.576 2.807 3.090 3.291
One-sided 75% 80% 85% 90% 95% 97.5% 99% 99.5% 99.75% 99.9% 99.95%
Two-sided 50% 60% 70% 80% 90% 95% 98% 99% 99.5% 99.8% 99.9%
Calculating the confidence interval
Let's say we have a sample of size 11, sample mean 10, and sample variance 2. For 90% confidence with 10 degrees of freedom, the one-sided t-value from the table is 1.372. Then, with the confidence interval calculated from
${\overline {X}}_{n}\pm t_{\alpha ,\nu }{\frac {S_{n}}{\sqrt {n}}},$
we determine that with 90% confidence we have a true mean lying below
$10+1.372{\frac {\sqrt {2}}{\sqrt {11}}}=10.585.$
In other words, 90% of the times that an upper threshold is calculated by this method from particular samples, this upper threshold exceeds the true mean.
And with 90% confidence we have a true mean lying above
$10-1.372{\frac {\sqrt {2}}{\sqrt {11}}}=9.415.$
In other words, 90% of the times that a lower threshold is calculated by this method from particular samples, this lower threshold lies below the true mean.
Thus at 80% confidence (calculated from 100% − 2 × (1 − 90%) = 80%), we have a true mean lying within the interval
$\left(10-1.372{\frac {\sqrt {2}}{\sqrt {11}}},10+1.372{\frac {\sqrt {2}}{\sqrt {11}}}\right)=(9.415,10.585).$
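The same numbers can be reproduced without tables (a sketch assuming SciPy):

```python
from math import sqrt
from scipy import stats

n, xbar, s2 = 11, 10.0, 2.0
t_crit = stats.t(df=n - 1).ppf(0.90)   # one-sided 90% value, ≈ 1.372
half = t_crit * sqrt(s2) / sqrt(n)

print(t_crit)                          # 1.3722...
print(xbar - half, xbar + half)        # ≈ (9.415, 10.585)
```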
Saying that 80% of the times that upper and lower thresholds are calculated by this method from a given sample, the true mean is both below the upper threshold and above the lower threshold is not the same as saying that there is an 80% probability that the true mean lies between a particular pair of upper and lower thresholds that have been calculated by this method; see confidence interval and prosecutor's fallacy.
Nowadays, statistical software, such as the R programming language, and functions available in many spreadsheet programs compute values of the t-distribution and its inverse without tables.
See also
• F-distribution
• Folded-t and half-t distributions
• Hotelling's T-squared distribution
• Multivariate Student distribution
• Standard normal table (Z-distribution table)
• t-statistic
• Tau distribution, for internally studentized residuals
• Wilks' lambda distribution
• Wishart distribution
• Modified half-normal distribution,[27] whose pdf on $(0,\infty )$ is given as $f(x)={\frac {2\beta ^{\frac {\alpha }{2}}x^{\alpha -1}\exp(-\beta x^{2}+\gamma x)}{\Psi {\left({\frac {\alpha }{2}},{\frac {\gamma }{\sqrt {\beta }}}\right)}}}$, where $\Psi (\alpha ,z)={}_{1}\Psi _{1}\left({\begin{matrix}\left(\alpha ,{\frac {1}{2}}\right)\\(1,0)\end{matrix}};z\right)$ denotes the Fox–Wright Psi function.
Notes
1. Hurst, Simon. "The Characteristic Function of the Student t Distribution". Financial Mathematics Research Report No. FMRR006-95, Statistics Research Report No. SRR044-95. Archived from the original on February 18, 2010.
2. Norton, Matthew; Khokhlov, Valentyn; Uryasev, Stan (2019). "Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation" (PDF). Annals of Operations Research. Springer. 299 (1–2): 1281–1315. doi:10.1007/s10479-019-03373-1. Retrieved 2023-02-27.
3. Helmert FR (1875). "Über die Berechnung des wahrscheinlichen Fehlers aus einer endlichen Anzahl wahrer Beobachtungsfehler". Z. Math. U. Physik. 20: 300–3.
4. Helmert FR (1876). "Über die Wahrscheinlichkeit der Potenzsummen der Beobachtungsfehler und uber einige damit in Zusammenhang stehende Fragen". Z. Math. Phys. 21: 192–218.
5. Helmert FR (1876). "Die Genauigkeit der Formel von Peters zur Berechnung des wahrscheinlichen Beobachtungsfehlers directer Beobachtungen gleicher Genauigkeit" [The accuracy of Peters' formula for calculating the probable observation error of direct observations of the same accuracy]. Astron. Nachr. (in German). 88 (8–9): 113–132. Bibcode:1876AN.....88..113H. doi:10.1002/asna.18760880802.
6. Lüroth J (1876). "Vergleichung von zwei Werten des wahrscheinlichen Fehlers". Astron. Nachr. 87 (14): 209–20. Bibcode:1876AN.....87..209L. doi:10.1002/asna.18760871402.
7. Pfanzagl J, Sheynin O (1996). "Studies in the history of probability and statistics. XLIV. A forerunner of the t-distribution". Biometrika. 83 (4): 891–898. doi:10.1093/biomet/83.4.891. MR 1766040.
8. Sheynin O (1995). "Helmert's work in the theory of errors". Arch. Hist. Exact Sci. 49 (1): 73–104. doi:10.1007/BF00374700. S2CID 121241599.
9. Pearson, K. (1895-01-01). "Contributions to the Mathematical Theory of Evolution. II. Skew Variation in Homogeneous Material". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 186: 343–414 (374). Bibcode:1895RSPTA.186..343P. doi:10.1098/rsta.1895.0010. ISSN 1364-503X.
10. "Student" [William Sealy Gosset] (1908). "The probable error of a mean" (PDF). Biometrika. 6 (1): 1–25. doi:10.1093/biomet/6.1.1. hdl:10338.dmlcz/143545. JSTOR 2331554.
11. Wendl MC (2016). "Pseudonymous fame". Science. 351 (6280): 1406. Bibcode:2016Sci...351.1406W. doi:10.1126/science.351.6280.1406. PMID 27013722.
12. Mortimer RG (2005). Mathematics for physical chemistry (3rd ed.). Burlington, MA: Elsevier. pp. 326. ISBN 9780080492889. OCLC 156200058.
13. Fisher RA (1925). "Applications of 'Student's' distribution" (PDF). Metron. 5: 90–104. Archived from the original (PDF) on 5 March 2016.
14. Walpole RE, Myers R, Myers S, et al. (2006). Probability & Statistics for Engineers & Scientists (7th ed.). New Delhi: Pearson. p. 237. ISBN 9788177584042. OCLC 818811849.
15. Kruschke JK (2015). Doing Bayesian Data Analysis (2nd ed.). Academic Press. ISBN 9780124058880. OCLC 959632184.
16. Casella G, Berger RL (1990). Statistical Inference. Duxbury Resource Center. p. 56. ISBN 9780534119584.
17. Jackman, S. (2009). Bayesian Analysis for the Social Sciences. Wiley Series in Probability and Statistics. Wiley. p. 507. doi:10.1002/9780470686621. ISBN 9780470011546.
18. Gelman AB, Carlin JS, Rubin DB, et al. (1997). Bayesian Data Analysis (2nd ed.). Boca Raton: Chapman & Hall. p. 68. ISBN 9780412039911.
19. Park SY, Bera AK (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". J. Econom. 150 (2): 219–230. doi:10.1016/j.jeconom.2008.12.014.
20. Bailey RW (1994). "Polar Generation of Random Variates with the t-Distribution". Math. Comput. 62 (206): 779–781. Bibcode:1994MaCom..62..779B. doi:10.2307/2153537. JSTOR 2153537.
21. Ord JK (1972). Families of Frequency Distributions. London: Griffin. ISBN 9780852641378. See Table 5.1.
22. Ord JK (1972). "Chapter 5". Families of frequency distributions. London: Griffin. ISBN 9780852641378.
23. Lange KL, Little RJ, Taylor JM (1989). "Robust Statistical Modeling Using the t Distribution" (PDF). J. Am. Stat. Assoc. 84 (408): 881–896. doi:10.1080/01621459.1989.10478852. JSTOR 2290063.
24. Gelman AB, Carlin JB, Stern HS, et al. (2014). "Computationally efficient Markov chain simulation". Bayesian Data Analysis. Boca Raton, Florida: CRC Press. p. 293. ISBN 9781439898208.
25. Shah, Amar; Wilson, Andrew Gordon; Ghahramani, Zoubin (2014). "Student-t processes as alternatives to Gaussian processes" (PDF). JMLR. 33 (Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland): 877–885. arXiv:1402.4306.
26. Chen, Zexun; Wang, Bo; Gorban, Alexander N. (2019). "Multivariate Gaussian and Student-t process regression for multi-output prediction". Neural Computing and Applications. 32 (8): 3005–3028. arXiv:1703.04455. doi:10.1007/s00521-019-04687-8.
27. Sun, Jingchao; Kong, Maiying; Pal, Subhadip (22 June 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics - Theory and Methods. 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. S2CID 237919587.
References
• Senn, S.; Richardson, W. (1994). "The first t-test". Statistics in Medicine. 13 (8): 785–803. doi:10.1002/sim.4780130802. PMID 8047737.
• Hogg RV, Craig AT (1978). Introduction to Mathematical Statistics (4th ed.). New York: Macmillan. ASIN B010WFO0SA.
• Venables, W. N.; Ripley, B. D. (2002). Modern Applied Statistics with S (Fourth ed.). Springer.
• Gelman, Andrew; John B. Carlin; Hal S. Stern; Donald B. Rubin (2003). Bayesian Data Analysis (Second ed.). CRC/Chapman & Hall. ISBN 1-58488-388-X.
External links
• "Student distribution", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Earliest Known Uses of Some of the Words of Mathematics (S) (Remarks on the history of the term "Student's distribution")
• Rouaud, M. (2013), Probability, Statistics and Estimation (PDF) (short ed.) First Students on page 112.
• Student's t-Distribution, Archived 2021-04-10 at the Wayback Machine