Tractor bundle
In conformal geometry, the tractor bundle is a particular vector bundle constructed on a conformal manifold whose fibres form an effective representation of the conformal group (see associated bundle).
The term tractor is a portmanteau of "Tracy Thomas" and "twistor", the bundle having been introduced first by T. Y. Thomas as an alternative formulation of the Cartan conformal connection,[1] and later rediscovered within the formalism of local twistors and generalized to projective connections by Michael Eastwood et al.[2]
References
1. Thomas, T. Y., "On conformal differential geometry", Proc. N.A.S. 12 (1926), 352–359; "Conformal tensors", Proc. N.A.S. 18 (1931), 103–189.
2. Bailey, T. N.; Eastwood, M. G.; Gover, A. R., "Thomas's structure bundle for conformal, projective and related structures", Rocky Mountain J. 24 (1994), 1191–1217.
Tractrix
In geometry, a tractrix (from Latin trahere 'to pull, drag'; plural: tractrices) is the curve along which an object moves, under the influence of friction, when pulled on a horizontal plane by a line segment attached to a pulling point (the tractor) that moves at a right angle to the initial line between the object and the puller at an infinitesimal speed. It is therefore a curve of pursuit. It was first introduced by Claude Perrault in 1670, and later studied by Isaac Newton (1676) and Christiaan Huygens (1693).[1]
Mathematical derivation
Suppose the object is placed at (a, 0), and the puller at the origin, so a is the length of the pulling thread. Then the puller starts to move along the y axis in the positive direction. At every moment, the thread will be tangent to the curve y = y(x) described by the object, so that the curve is completely determined by the movement of the puller. Mathematically, if the coordinates of the object are (x, y), the y-coordinate of the puller is $y+\operatorname {sign} (y){\sqrt {a^{2}-x^{2}}},$ by the Pythagorean theorem. Writing that the slope of the thread equals that of the tangent to the curve leads to the differential equation
${\frac {dy}{dx}}=\pm {\frac {\sqrt {a^{2}-x^{2}}}{x}}$
with the initial condition y(a) = 0. Its solution is
$y=\int _{x}^{a}{\frac {\sqrt {a^{2}-t^{2}}}{t}}\,dt=\pm \!\left(a\ln {\frac {a+{\sqrt {a^{2}-x^{2}}}}{x}}-{\sqrt {a^{2}-x^{2}}}\right),$
where the sign ± depends on the direction (positive or negative) of the movement of the puller.
The first term of this solution can also be written
$a\operatorname {arsech} {\frac {x}{a}},$
where arsech is the inverse hyperbolic secant function.
The sign before the solution depends on whether the puller moves upward or downward. Both branches belong to the tractrix, meeting at the cusp point (a, 0).
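The closed-form solution can be checked numerically. The following sketch (function name `tractrix_y` is ours; a = 2 is an arbitrary choice) verifies by finite differences that the upper branch satisfies the differential equation with the negative sign, using the identity arsech(x/a) = arcosh(a/x):

```python
import numpy as np

a = 2.0  # thread length (any positive value works)

def tractrix_y(x):
    """Upper branch of the tractrix: y = a*arsech(x/a) - sqrt(a^2 - x^2)."""
    return a * np.arccosh(a / x) - np.sqrt(a**2 - x**2)

# Check dy/dx = -sqrt(a^2 - x^2)/x by central finite differences
# (minus sign: on this branch y increases as x decreases toward 0).
x = np.linspace(0.2, 1.9, 200)
h = 1e-6
numeric = (tractrix_y(x + h) - tractrix_y(x - h)) / (2 * h)
analytic = -np.sqrt(a**2 - x**2) / x
assert np.max(np.abs(numeric - analytic)) < 1e-5
print("ODE satisfied; y(a) =", tractrix_y(a))  # initial condition y(a) = 0
```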
Basis of the tractrix
The essential property of the tractrix is constancy of the distance between a point P on the curve and the intersection of the tangent line at P with the asymptote of the curve.
The tractrix might be regarded in a multitude of ways:
1. It is the locus of the center of a hyperbolic spiral rolling (without skidding) on a straight line.
2. It is the involute of the catenary function, which describes a fully flexible, inelastic, homogeneous string attached to two points that is subjected to a gravitational field. The catenary has the equation y(x) = a cosh(x/a).
3. The trajectory determined by the middle of the back axle of a car pulled by a rope at a constant speed and with a constant direction (initially perpendicular to the vehicle).
4. It is a (non-linear) curve which a circle of radius a rolling on a straight line, with its center at the x axis, intersects perpendicularly at all times.
The function admits a horizontal asymptote. The curve is symmetrical with respect to the y-axis. The curvature radius is r = a cot(x/y).
An important consequence of the tractrix was the study of its surface of revolution about its asymptote: the pseudosphere. Studied by Eugenio Beltrami in 1868 as a surface of constant negative Gaussian curvature, the pseudosphere is a local model of hyperbolic geometry. The idea was carried further by Kasner and Newman in their book Mathematics and the Imagination, where they show a toy train dragging a pocket watch to generate the tractrix.[2]
Properties
• The curve can be parameterised by the equation $x=t-\tanh(t),y=1/{\cosh(t)}$.[3]
• Due to the geometrical way it was defined, the tractrix has the property that the segment of its tangent, between the asymptote and the point of tangency, has constant length a.
• The arc length of one branch between x = x1 and x = x2 is a ln(x1/x2).
• The area between the tractrix and its asymptote is π a2/2, which can be found using integration or Mamikon's theorem.
• The envelope of the normals of the tractrix (that is, the evolute of the tractrix) is the catenary (or chain curve) given by y = a cosh(x/a).
• The surface of revolution created by revolving a tractrix about its asymptote is a pseudosphere.
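The constant-tangent-segment property can be verified directly from the parametrization x = t − tanh(t), y = sech(t) given above. This sketch (a = 1; variable names are ours) extends the tangent line from each point of the curve to the asymptote (the x-axis) and measures the segment:

```python
import numpy as np

# Tractrix with a = 1: x = t - tanh t, y = sech t; asymptote is the x-axis.
t = np.linspace(0.5, 5.0, 50)
x, y = t - np.tanh(t), 1.0 / np.cosh(t)
dx, dy = np.tanh(t) ** 2, -np.tanh(t) / np.cosh(t)  # velocity (dx/dt, dy/dt)

s = -y / dy                          # parameter where the tangent meets y = 0
seg = np.abs(s) * np.hypot(dx, dy)   # length of the tangent segment
assert np.allclose(seg, 1.0)         # constant, equal to a = 1, for every t
print(seg[:3])
```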
Practical application
In 1927, P. G. A. H. Voigt patented a horn loudspeaker design based on the assumption that a wave front traveling through the horn is spherical of a constant radius. The idea is to minimize distortion caused by internal reflection of sound within the horn. The resulting shape is the surface of revolution of a tractrix.[4]
An important application is in the forming technology for sheet metal. In particular a tractrix profile is used for the corner of the die on which the sheet metal is bent during deep drawing.[5]
A toothed belt-pulley design provides improved efficiency for mechanical power transmission using a tractrix catenary shape for its teeth.[6] This shape minimizes the friction of the belt teeth engaging the pulley, because the moving teeth engage and disengage with minimal sliding contact. Original timing belt designs used simpler trapezoidal or circular tooth shapes, which cause significant sliding and friction.
Drawing machines
• In October–November 1692, Christiaan Huygens described three tractrix-drawing machines.
• In 1693 Gottfried Wilhelm Leibniz devised a "universal tractional machine" which, in theory, could integrate any first order differential equation.[7] The concept was an analog computing mechanism implementing the tractional principle. The device was impractical to build with the technology of Leibniz's time, and was never realized.
• In 1706 John Perks built a tractional machine in order to realise the hyperbolic quadrature.[8]
• In 1729 Giovanni Poleni built a tractional device that enabled logarithmic functions to be drawn.[9]
A history of all these machines can be seen in an article by H. J. M. Bos.[10]
See also
• Dini's surface
• Hyperbolic functions for tanh, sech, csch, arcosh
• Natural logarithm for ln
• Sign function for sgn
• Trigonometric functions for sin, cos, tan, arccot, csc
Notes
1. Stillwell, John (2010). Mathematics and Its History (revised, 3rd ed.). Springer Science & Business Media. p. 345. ISBN 978-1-4419-6052-8.
2. Kasner, Edward; Newman, James (2013). "Figure 45(a)". Mathematics and the Imagination. Dover Books on Mathematics. Courier Corporation. p. 141. ISBN 9780486320274.
3. O'Connor, John J.; Robertson, Edmund F., "Tractrix", MacTutor History of Mathematics Archive, University of St Andrews
4. Horn loudspeaker design pp. 4–5. (Reprinted from Wireless World, March 1974)
5. Lange, Kurt (1985). Handbook of Metal Forming. McGraw Hill Book Company. p. 20.43.
6. "Gates Powergrip GT3 Drive Design Manual" (PDF). Gates Corporation. 2014. p. 177. Retrieved 17 November 2017. The GT tooth profile is based on the tractix mathematical function. Engineering handbooks describe this function as a "frictionless" system. This early development by Schiele is described as an involute form of a catenary.
7. Milici, Pietro (2014). Lolli, Gabriele (ed.). From Logic to Practice: Italian Studies in the Philosophy of Mathematics. Springer. ... mechanical devices studied ... to solve particular differential equations ... We must recollect Leibniz's 'universal tractional machine'
8. Perks, John (1706). "The construction and properties of a new quadratrix to the hyperbola". Philosophical Transactions. 25: 2253–2262. doi:10.1098/rstl.1706.0017. JSTOR 102681. S2CID 186211499.
9. Poleni, John (1729). Epistolarum mathematicarum fasciculus, letter no. 7.
10. Bos, H. J. M. (1989). "Recognition and Wonder – Huygens, Tractional Motion and Some Thoughts on the History of Mathematics" (PDF). Euclides. 63: 65–76.
References
• Kasner, Edward; Newman, James (1940). Mathematics and the Imagination. Simon & Schuster. p. 141–143.
• Lawrence, J. Dennis (1972). A Catalog of Special Plane Curves. Dover Publications. pp. 5, 199. ISBN 0-486-60288-5.
External links
• O'Connor, John J.; Robertson, Edmund F., "Tractrix", MacTutor History of Mathematics Archive, University of St Andrews
• "Tractrix". PlanetMath.
• "Famous curves". PlanetMath.
• Tractrix on MathWorld
• Module: Leibniz's Pocket Watch ODE at PHASER
Pseudosphere
In geometry, a pseudosphere is a surface with constant negative Gaussian curvature.
A pseudosphere of radius R is a surface in $\mathbb {R} ^{3}$ having curvature −1/R2 at each point. Its name comes from the analogy with the sphere of radius R, which is a surface of curvature 1/R2. The term was introduced by Eugenio Beltrami in his 1868 paper on models of hyperbolic geometry.[1]
Tractroid
The same surface can be also described as the result of revolving a tractrix about its asymptote. For this reason the pseudosphere is also called tractroid. As an example, the (half) pseudosphere (with radius 1) is the surface of revolution of the tractrix parametrized by[2]
$t\mapsto \left(t-\tanh {t},\operatorname {sech} \,{t}\right),\quad \quad 0\leq t<\infty .$
It is a singular space (the equator is a singularity), but away from the singularities, it has constant negative Gaussian curvature and therefore is locally isometric to a hyperbolic plane.
The name "pseudosphere" comes about because it has a two-dimensional surface of constant negative Gaussian curvature, just as a sphere has a surface with constant positive Gaussian curvature. Just as the sphere has at every point a positively curved geometry of a dome the whole pseudosphere has at every point the negatively curved geometry of a saddle.
As early as 1693 Christiaan Huygens found that the volume and the surface area of the pseudosphere are finite,[3] despite the infinite extent of the shape along the axis of rotation. For a given edge radius R, the area is 4πR2 just as it is for the sphere, while the volume is (2/3)πR3 and therefore half that of a sphere of that radius.[4][5]
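Huygens's finite values can be reproduced by direct integration over the tractrix parametrization of the preceding section (a sketch with R = 1, using SciPy's `quad`; the factor of two combines the two mirror-image halves):

```python
import numpy as np
from scipy.integrate import quad

# Half pseudosphere of radius R = 1: revolve x(t) = t - tanh t,
# y(t) = sech t (t >= 0) about the x-axis.
# Arc length element: ds = sqrt(x'^2 + y'^2) dt = tanh t dt.
# Surface area: 2*pi * integral of y ds; volume: pi * integral of y^2 dx.
area_half, _ = quad(lambda t: 2 * np.pi / np.cosh(t) * np.tanh(t), 0, np.inf)
vol_half, _ = quad(lambda t: np.pi / np.cosh(t)**2 * np.tanh(t)**2, 0, np.inf)

# The full pseudosphere is two such halves: area 4*pi*R^2, volume (2/3)*pi*R^3.
assert abs(2 * area_half - 4 * np.pi) < 1e-8
assert abs(2 * vol_half - (2 / 3) * np.pi) < 1e-8
print(2 * area_half, 2 * vol_half)
```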
Universal covering space
The half pseudosphere of curvature −1 is covered by the interior of a horocycle. In the Poincaré half-plane model one convenient choice is the portion of the half-plane with y ≥ 1.[6] Then the covering map is periodic in the x direction of period 2π, and takes the horocycles y = c to the meridians of the pseudosphere and the vertical geodesics x = c to the tractrices that generate the pseudosphere. This mapping is a local isometry, and thus exhibits the portion y ≥ 1 of the upper half-plane as the universal covering space of the pseudosphere. The precise mapping is
$(x,y)\mapsto {\big (}v(\operatorname {arcosh} y)\cos x,v(\operatorname {arcosh} y)\sin x,u(\operatorname {arcosh} y){\big )}$
where
$t\mapsto {\big (}u(t)=t-\operatorname {tanh} t,v(t)=\operatorname {sech} t{\big )}$
is the parametrization of the tractrix above.
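That this covering map is a local isometry can be checked numerically: its Jacobian, computed by finite differences, should pull the Euclidean metric of $\mathbb{R}^3$ back to the hyperbolic metric $(dx^2+dy^2)/y^2$ of the half-plane. A sketch (function name `covering` is ours; the test point is arbitrary):

```python
import numpy as np

def covering(x, y):
    """Covering map from {y >= 1} in the upper half-plane to the pseudosphere."""
    t = np.arccosh(y)
    u, v = t - np.tanh(t), 1.0 / np.cosh(t)
    return np.array([v * np.cos(x), v * np.sin(x), u])

# Pullback of the Euclidean metric should equal the hyperbolic metric,
# i.e. the Jacobian J should satisfy J^T J = diag(1/y^2, 1/y^2).
x0, y0, h = 0.7, 2.5, 1e-6
Jx = (covering(x0 + h, y0) - covering(x0 - h, y0)) / (2 * h)
Jy = (covering(x0, y0 + h) - covering(x0, y0 - h)) / (2 * h)
assert abs(Jx @ Jx - 1 / y0**2) < 1e-8   # g_xx = 1/y^2
assert abs(Jy @ Jy - 1 / y0**2) < 1e-8   # g_yy = 1/y^2
assert abs(Jx @ Jy) < 1e-8               # g_xy = 0
print("local isometry verified at", (x0, y0))
```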
Hyperboloid
In some sources that use the hyperboloid model of the hyperbolic plane, the hyperboloid is referred to as a pseudosphere.[7] This usage of the word is because the hyperboloid can be thought of as a sphere of imaginary radius, embedded in a Minkowski space.
Pseudospherical surfaces
A pseudospherical surface is a generalization of the pseudosphere. A surface which is piecewise smoothly immersed in $\mathbb {R} ^{3}$ with constant negative curvature is a pseudospherical surface. The tractroid is the simplest example. Other examples include Dini's surfaces, breather surfaces, and the Kuen surface.
Relation to solutions to the Sine-Gordon equation
Pseudospherical surfaces can be constructed from solutions to the Sine-Gordon equation.[8] A sketch proof starts with reparametrizing the tractroid with coordinates in which the Gauss–Codazzi equations can be rewritten as the Sine-Gordon equation.
In particular, for the tractroid the Gauss–Codazzi equations are the Sine-Gordon equation applied to the static soliton solution, so the Gauss–Codazzi equations are satisfied. In these coordinates the first and second fundamental forms are written in a way that makes clear the Gaussian curvature is -1 for any solution of the Sine-Gordon equations.
Then any solution to the Sine-Gordon equation can be used to specify a first and second fundamental form which satisfy the Gauss–Codazzi equations. There is then a theorem that any such set of initial data can be used to at least locally specify an immersed surface in $\mathbb {R} ^{3}$.
A few examples of Sine-Gordon solutions and their corresponding surface are given as follows:
• Static 1-soliton: pseudosphere
• Moving 1-soliton: Dini's surface
• Breather solution: Breather surface
• 2-soliton: Kuen surface
See also
• Hilbert's theorem (differential geometry)
• Dini's surface
• Gabriel's Horn
• Hyperboloid
• Hyperboloid structure
• Quasi-sphere
• Sine–Gordon equation
• Sphere
• Surface of revolution
References
1. Beltrami, Eugenio (1868). "Saggio sulla interpretazione della geometria non euclidea" [Treatise on the interpretation of non-Euclidean geometry]. Gior. Mat. (in Italian). 6: 248–312.
(Also Beltrami, Eugenio. Opere Matematiche [Mathematical Works] (in Italian). Vol. 1. pp. 374–405. ISBN 1-4181-8434-9.;
Beltrami, Eugenio (1869). "Essai d'interprétation de la géométrie noneuclidéenne" [Treatise on the interpretation of non-Euclidean geometry]. Annales de l'École Normale Supérieure (in French). 6: 251–288. Archived from the original on 2016-02-02. Retrieved 2010-07-24.)
2. Bonahon, Francis (2009). Low-dimensional geometry: from Euclidean surfaces to hyperbolic knots. AMS Bookstore. p. 108. ISBN 0-8218-4816-X.
3. Stillwell, John (2010). Mathematics and Its History (revised, 3rd ed.). Springer Science & Business Media. p. 345. ISBN 978-1-4419-6052-8.
4. Le Lionnais, F. (2004). Great Currents of Mathematical Thought, Vol. II: Mathematics in the Arts and Sciences (2 ed.). Courier Dover Publications. p. 154. ISBN 0-486-49579-5.
5. Weisstein, Eric W. "Pseudosphere". MathWorld.
6. Thurston, William, Three-dimensional geometry and topology, vol. 1, Princeton University Press, p. 62.
7. Hasanov, Elman (2004), "A new theory of complex rays", IMA J. Appl. Math., 69: 521–537, doi:10.1093/imamat/69.6.521, ISSN 1464-3634, archived from the original on 2013-04-15
8. Wheeler, Nicholas. "From Pseudosphere to Sine-Gordon equation" (PDF). Retrieved 24 November 2022.
• Stillwell, J. (1996). Sources of Hyperbolic Geometry. Amer. Math. Soc & London Math. Soc.
• Henderson, D. W.; Taimina, D. (2006). "Experiencing Geometry: Euclidean and Non-Euclidean with History". Aesthetics and Mathematics (PDF). Springer-Verlag.
• Kasner, Edward; Newman, James (1940). Mathematics and the Imagination. Simon & Schuster. pp. 140, 145, 155.
External links
• Non Euclid
• Crocheting the Hyperbolic Plane: An Interview with David Henderson and Daina Taimina
• Norman Wildberger lecture 16, History of Mathematics, University of New South Wales. YouTube. 2012 May.
• Pseudospherical surfaces at the virtual math museum.
Tracy–Widom distribution
The Tracy–Widom distribution is a probability distribution from random matrix theory introduced by Craig Tracy and Harold Widom (1993, 1994). It is the distribution of the normalized largest eigenvalue of a random Hermitian matrix. The distribution is defined as a Fredholm determinant.
In practical terms, Tracy–Widom is the crossover function between the two phases of weakly versus strongly coupled components in a system.[1] It also appears in the distribution of the length of the longest increasing subsequence of random permutations,[2] as large-scale statistics in the Kardar-Parisi-Zhang equation,[3] in current fluctuations of the asymmetric simple exclusion process (ASEP) with step initial condition,[4] and in simplified mathematical models of the behavior of the longest common subsequence problem on random inputs.[5] See Takeuchi & Sano (2010) and Takeuchi et al. (2011) for experimental testing (and verifying) that the interface fluctuations of a growing droplet (or substrate) are described by the TW distribution $F_{2}$ (or $F_{1}$) as predicted by Prähofer & Spohn (2000).
The distribution $F_{1}$ is of particular interest in multivariate statistics.[6] For a discussion of the universality of $F_{\beta }$, $\beta =1,2,4$, see Deift (2007). For an application of $F_{1}$ to inferring population structure from genetic data see Patterson, Price & Reich (2006). In 2017 it was proved that the distribution F is not infinitely divisible.[7]
Definition as a law of large numbers
Let $F_{\beta }$ denote the cumulative distribution function of the Tracy–Widom distribution with given $\beta $. It can be defined as a law of large numbers, similar to the central limit theorem.
There are typically three Tracy–Widom distributions, $F_{\beta }$, with $\beta \in \{1,2,4\}$. They correspond to the three gaussian ensembles: orthogonal ($\beta =1$), unitary ($\beta =2$), and symplectic ($\beta =4$).
In general, consider a gaussian ensemble with beta value $\beta $, with its diagonal entries having variance 1 and off-diagonal entries having variance $\sigma ^{2}$. Let $F_{N,\beta }(s)$ be the probability that an $N\times N$ matrix sampled from the ensemble has maximal eigenvalue $\leq s$; then define[8]
$F_{\beta }(x)=\lim _{N\to \infty }F_{N,\beta }(\sigma (2N^{1/2}+N^{-1/6}x))=\lim _{N\to \infty }Pr(N^{1/6}(\lambda _{max}/\sigma -2N^{1/2})\leq x)$
where $\lambda _{\max }$ denotes the largest eigenvalue of the random matrix. The shift by $2\sigma N^{1/2}$ centers the distribution, since in the limit the eigenvalue distribution converges to the semicircular distribution with radius $2\sigma N^{1/2}$. The multiplication by $N^{1/6}$ is used because the standard deviation of the distribution scales as $N^{-1/6}$ (first derived in [9]).
For example:[10]
$F_{2}(x)=\lim _{N\to \infty }\operatorname {Prob} \left((\lambda _{\max }-{\sqrt {4N}})N^{1/6}\leq x\right),$
where the matrix is sampled from the gaussian unitary ensemble with off-diagonal variance $1$.
The definition of the Tracy–Widom distributions $F_{\beta }$ may be extended to all $\beta >0$ (Slide 56 in Edelman (2003), Ramírez, Rider & Virág (2006)).
One may naturally ask for the limit distribution of second-largest eigenvalues, third-largest eigenvalues, etc. They are known.[11][8]
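The convergence in the GUE example above can be illustrated by Monte Carlo sampling (a sketch; the matrix size, trial count, and seed are arbitrary choices). Each sampled matrix has diagonal variance 1 and off-diagonal variance 1, matching the scaling $(\lambda_{\max}-\sqrt{4N})N^{1/6}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 200, 200

# Sample GUE matrices and record the centered, rescaled largest eigenvalue.
samples = []
for _ in range(trials):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / 2          # Hermitian; off-diagonal variance 1
    lam_max = np.linalg.eigvalsh(H)[-1]
    samples.append((lam_max - 2 * np.sqrt(N)) * N ** (1 / 6))

# The sample mean should sit near the F2 mean, about -1.77.
print(np.mean(samples))
```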
Functional forms
Fredholm determinant
$F_{2}$ can be given as the Fredholm determinant
$F_{2}(s)=\det(I-A_{s})=1+\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n!}}\int _{(s,\infty )^{n}}\det _{i,j=1,...,n}[A_{s}(x_{i},x_{j})]dx_{1}\cdots dx_{n}$
of the kernel $A_{s}$ ("Airy kernel") on square integrable functions on the half line $(s,\infty )$, given in terms of Airy functions Ai by
$A_{s}(x,y)={\begin{cases}{\frac {\mathrm {Ai} (x)\mathrm {Ai} '(y)-\mathrm {Ai} '(x)\mathrm {Ai} (y)}{x-y}}&{\text{if }}x\neq y\\\mathrm {Ai} '(x)^{2}-x\,\mathrm {Ai} (x)^{2}&{\text{if }}x=y\end{cases}}$
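This determinant can be evaluated numerically by Nyström-type quadrature, truncating the half line where the Airy kernel has decayed. The following is a rough sketch (node count and truncation point are ad hoc choices; accurate production algorithms are discussed under Numerics below):

```python
import numpy as np
from scipy.special import airy

def tracy_widom_F2(s, n=60, cutoff=12.0):
    """Approximate F2(s) = det(I - A_s) by Gauss-Legendre quadrature.

    The Airy kernel decays superexponentially to the right, so the half
    line (s, infinity) is truncated at s + cutoff.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = s + (nodes + 1.0) * cutoff / 2.0     # map [-1, 1] -> (s, s + cutoff)
    w = weights * cutoff / 2.0
    ai, aip, _, _ = airy(x)
    d = np.subtract.outer(x, x)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (np.outer(ai, aip) - np.outer(aip, ai)) / d
    np.fill_diagonal(K, aip**2 - x * ai**2)  # diagonal limit of the kernel
    sw = np.sqrt(w)
    return float(np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :]))

print(tracy_widom_F2(0.0))   # most of the F2 mass lies below 0
```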
Painlevé transcendents
$F_{2}$ can also be given as an integral
$F_{2}(s)=\exp \left(-\int _{s}^{\infty }(x-s)q^{2}(x)\,dx\right)$
in terms of a solution[note 1] of a Painlevé equation of type II
$q^{\prime \prime }(s)=sq(s)+2q(s)^{3}\,$
with boundary condition $ \displaystyle q(s)\sim {\textrm {Ai}}(s),s\to \infty .$ This function $q$ is a Painlevé transcendent.
Other distributions are also expressible in terms of the same $q$:[10]
${\begin{aligned}F_{1}(s)&=\exp \left(-{\frac {1}{2}}\int _{s}^{\infty }q(x)\,dx\right)\,\left(F_{2}(s)\right)^{1/2}\\F_{4}(s/{\sqrt {2}})&=\cosh \left({\frac {1}{2}}\int _{s}^{\infty }q(x)\,dx\right)\,\left(F_{2}(s)\right)^{1/2}.\end{aligned}}$
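The Hastings–McLeod solution can be computed by integrating the Painlevé II equation backward from a large $s$, where $q$ is numerically indistinguishable from $\mathrm{Ai}(s)$, and then evaluating the integral formula for $F_2$. A sketch (endpoints, tolerances, and grid are ad hoc choices):

```python
import numpy as np
from scipy.integrate import solve_ivp, simpson
from scipy.special import airy

# Integrate q'' = s*q + 2*q^3 backward from s = 8 with Airy initial data.
s_hi = 8.0
ai, aip, _, _ = airy(s_hi)
sol = solve_ivp(lambda s, u: [u[1], s * u[0] + 2.0 * u[0] ** 3],
                (s_hi, -4.0), [ai, aip],
                rtol=1e-10, atol=1e-12, dense_output=True)

# F2(s0) = exp(-int_{s0}^inf (x - s0) q(x)^2 dx), tail truncated at s_hi
# (negligible there, since q decays like Ai).
s0 = 0.0
x = np.linspace(s0, s_hi, 2001)
q = sol.sol(x)[0]
F2_at_0 = np.exp(-simpson((x - s0) * q**2, x=x))
print(F2_at_0)   # should agree with the Fredholm-determinant value of F2(0)
```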
Functional equations
Define
${\begin{aligned}F(x)&=\exp \left(-{\frac {1}{2}}\int _{x}^{\infty }(y-x)q(y)^{2}\,dy\right)\\E(x)&=\exp \left(-{\frac {1}{2}}\int _{x}^{\infty }q(y)\,dy\right)\end{aligned}}$
then[8]
$F_{1}(x)=E(x)F(x),\quad F_{2}(x)=F(x)^{2},\quad \quad F_{4}\left({\frac {x}{\sqrt {2}}}\right)={\frac {1}{2}}\left(E(x)+{\frac {1}{E(x)}}\right)F(x)$
Occurrences
Other than in random matrix theory, the Tracy–Widom distributions occur in many other probability problems.[12]
Let $l_{n}$ be the length of the longest increasing subsequence in a random permutation sampled uniformly from $S_{n}$, the permutation group on n elements. Then the cumulative distribution function of ${\frac {l_{n}-2n^{1/2}}{n^{1/6}}}$ converges to $F_{2}$.[13]
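This convergence can be observed empirically: sample random permutations, compute $l_n$ by patience sorting, and normalize as above. A sketch (sizes and seed are arbitrary; convergence in $n$ is slow, so the sample mean only roughly matches the $F_2$ mean):

```python
import bisect
import numpy as np

def lis_length(perm):
    """Longest increasing subsequence length via patience sorting, O(n log n)."""
    piles = []
    for v in perm:
        i = bisect.bisect_left(piles, v)
        if i == len(piles):
            piles.append(v)
        else:
            piles[i] = v
    return len(piles)

rng = np.random.default_rng(1)
n, trials = 4000, 200
stats = [(lis_length(rng.permutation(n)) - 2 * np.sqrt(n)) / n ** (1 / 6)
         for _ in range(trials)]
print(np.mean(stats))   # roughly near the F2 mean of about -1.77
```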
Asymptotics
Probability density function
Let $f_{\beta }(x)=F_{\beta }'(x)$ be the probability density function for the distribution, then[12]
$f_{\beta }(x)\sim {\begin{cases}e^{-{\frac {\beta }{24}}|x|^{3}},\quad x\to -\infty \\e^{-{\frac {2\beta }{3}}|x|^{3/2}},\quad x\to +\infty \end{cases}}$
In particular, we see that it is severely skewed to the right: it is much more likely for $\lambda _{max}$ to be much larger than $2\sigma {\sqrt {N}}$ than to be much smaller. This can be intuited by noting that the limiting eigenvalue distribution is the semicircle law, so there is "repulsion" from the bulk of the distribution, forcing $\lambda _{max}$ to be not much smaller than $2\sigma {\sqrt {N}}$. At the $x\to -\infty $ limit, a more precise expression is (equation 49 [12])
$f_{\beta }(x)\sim \tau _{\beta }|x|^{(\beta ^{2}+4-6\beta )/16\beta }\exp \left[-\beta {\frac {|x|^{3}}{24}}+{\sqrt {2}}{\frac {\beta -2}{6}}|x|^{3/2}\right]$
for some positive number $\tau _{\beta }$ that depends on $\beta $.
Cumulative distribution function
At the $x\to +\infty $ limit,[14]
${\begin{aligned}F(x)&=1-{\frac {e^{-{\frac {4}{3}}x^{3/2}}}{32\pi x^{3/2}}}{\biggl (}1-{\frac {35}{24x^{3/2}}}+{\cal {O}}(x^{-3}){\biggr )},\\E(x)&=1-{\frac {e^{-{\frac {2}{3}}x^{3/2}}}{4{\sqrt {\pi }}x^{3/2}}}{\biggl (}1-{\frac {41}{48x^{3/2}}}+{\cal {O}}(x^{-3}){\biggr )}\end{aligned}}$
and at the $x\to -\infty $ limit,
${\begin{aligned}F(x)&=2^{1/48}e^{{\frac {1}{2}}\zeta ^{\prime }(-1)}{\frac {e^{-{\frac {1}{24}}|x|^{3}}}{|x|^{1/16}}}\left(1+{\frac {3}{2^{7}|x|^{3}}}+O(|x|^{-6})\right)\\E(x)&={\frac {1}{2^{1/4}}}e^{-{\frac {1}{3{\sqrt {2}}}}|x|^{3/2}}{\Biggl (}1-{\frac {1}{24{\sqrt {2}}|x|^{3/2}}}+{\cal {O}}(|x|^{-3}){\Biggr )}.\end{aligned}}$
where $\zeta $ is the Riemann zeta function, and $\zeta '(-1)=-0.1654211437$. This allows derivation of $x\to \pm \infty $ behavior of $F_{\beta }$. For example,
${\begin{aligned}1-F_{2}(x)&={\frac {1}{32\pi x^{3/2}}}e^{-4x^{3/2}/3}(1+O(x^{-3/2})),\\F_{2}(-x)&={\frac {2^{1/24}e^{\zeta ^{\prime }(-1)}}{x^{1/8}}}e^{-x^{3}/12}{\biggl (}1+{\frac {3}{2^{6}x^{3}}}+O(x^{-6}){\biggr )}.\end{aligned}}$
Painlevé transcendent
The Painlevé transcendent has asymptotic expansion at $x\to -\infty $ (equation 4.1 of [15])
$q(x)={\sqrt {-{\frac {x}{2}}}}\left(1+{\frac {1}{8}}x^{-3}-{\frac {73}{128}}x^{-6}+{\frac {10657}{1024}}x^{-9}+O(x^{-12})\right)$
This is necessary for numerical computations, as the $q\sim {\sqrt {-x/2}}$ solution is unstable: any deviation from it tends to drop it to the $q\sim -{\sqrt {-x/2}}$ branch instead.[16]
Numerics
Numerical techniques for obtaining numerical solutions to the Painlevé equations of the types II and V, and numerically evaluating eigenvalue distributions of random matrices in the beta-ensembles were first presented by Edelman & Persson (2005) using MATLAB. These approximation techniques were further analytically justified in Bejan (2005) and used to provide numerical evaluation of Painlevé II and Tracy–Widom distributions (for $\beta =1,2,4$) in S-PLUS. These distributions have been tabulated in Bejan (2005) to four significant digits for values of the argument in increments of 0.01; a statistical table for p-values was also given in this work. Bornemann (2010) gave accurate and fast algorithms for the numerical evaluation of $F_{\beta }$ and the density functions $f_{\beta }(s)=dF_{\beta }/ds$ for $\beta =1,2,4$. These algorithms can be used to compute numerically the mean, variance, skewness and excess kurtosis of the distributions $F_{\beta }$.[17]
$\beta $ Mean Variance Skewness Excess kurtosis
1 −1.2065335745820 1.607781034581 0.29346452408 0.1652429384
2 −1.771086807411 0.8131947928329 0.224084203610 0.0934480876
4 −2.306884893241 0.5177237207726 0.16550949435 0.0491951565
Functions for working with the Tracy–Widom laws are also presented in the R package 'RMTstat' by Johnstone et al. (2009) and MATLAB package 'RMLab' by Dieng (2006).
For a simple approximation based on a shifted gamma distribution see Chiani (2014).
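The moment-matching idea behind such a gamma approximation can be sketched directly from the moments tabulated above (this illustrates the idea only; Chiani's actual fit may differ in details). A gamma distribution with shape $k$ has skewness $2/\sqrt{k}$ and variance $k\theta^2$, so $k$, the scale $\theta$, and a shift are fixed by the first three moments:

```python
from scipy.stats import gamma

# Tracy-Widom beta = 2 moments, from the table above.
mu, var, skew = -1.771086807411, 0.8131947928329, 0.224084203610

k = 4.0 / skew**2            # match skewness: 2/sqrt(k) = skew
theta = (var / k) ** 0.5     # match variance: k*theta^2 = var
shift = mu - k * theta       # match mean: k*theta + shift = mu

F2_approx = lambda x: gamma.cdf(x - shift, a=k, scale=theta)
print(F2_approx(0.0))        # a few-digit approximation to F2(0)
```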
Shen & Serkh (2022) developed a spectral algorithm for the eigendecomposition of the integral operator $A_{s}$, which can be used to rapidly evaluate Tracy–Widom distributions, or, more generally, the distributions of the $k$th largest level at the soft edge scaling limit of Gaussian ensembles, to machine accuracy.
See also
• Wigner semicircle distribution
• Marchenko–Pastur distribution
Footnotes
1. Mysterious Statistical Law May Finally Have an Explanation, wired.com 2014-10-27
2. Baik, Deift & Johansson (1999).
3. Sasamoto & Spohn (2010)
4. Johansson (2000); Tracy & Widom (2009)).
5. Majumdar & Nechaev (2005).
6. Johnstone (2007, 2008, 2009).
7. Domínguez-Molina (2017).
8. Tracy, Craig A.; Widom, Harold (2009b). Sidoravičius, Vladas (ed.). "The Distributions of Random Matrix Theory and their Applications". New Trends in Mathematical Physics. Dordrecht: Springer Netherlands: 753–765. doi:10.1007/978-90-481-2810-5_48. ISBN 978-90-481-2810-5.
9. Forrester, P. J. (1993-08-09). "The spectrum edge of random matrix ensembles". Nuclear Physics B. 402 (3): 709–728. doi:10.1016/0550-3213(93)90126-A. ISSN 0550-3213.
10. Tracy & Widom (1996).
11. Dieng, Momar (2005). "Distribution functions for edge eigenvalues in orthogonal and symplectic ensembles: Painlevé representations". International Mathematics Research Notices. 2005 (37): 2263–2287. doi:10.1155/IMRN.2005.2263. ISSN 1687-0247.
12. Majumdar, Satya N; Schehr, Grégory (2014-01-31). "Top eigenvalue of a random matrix: large deviations and third order phase transition". Journal of Statistical Mechanics: Theory and Experiment. 2014 (1): P01012. arXiv:1311.0580. doi:10.1088/1742-5468/2014/01/p01012. ISSN 1742-5468. S2CID 119122520.
13. Baik, Deift & Johansson 1999
14. Baik, Jinho; Buckingham, Robert; DiFranco, Jeffery (2008-02-26). "Asymptotics of Tracy-Widom Distributions and the Total Integral of a Painlevé II Function". Communications in Mathematical Physics. 280 (2): 463–497. arXiv:0704.3636. doi:10.1007/s00220-008-0433-5. ISSN 0010-3616. S2CID 16324715.
15. Tracy, Craig A.; Widom, Harold (May 1993). "Level-spacing distributions and the Airy kernel". Physics Letters B. 305 (1–2): 115–118. arXiv:hep-th/9210074. Bibcode:1993PhLB..305..115T. doi:10.1016/0370-2693(93)91114-3. ISSN 0370-2693. S2CID 13912236.
16. Bender, Carl M.; Orszag, Steven A. (1999-10-29). Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. Springer Science & Business Media. pp. 163–165. ISBN 978-0-387-98931-0.
17. Su, Zhong-gen; Lei, Yu-huan; Shen, Tian (2021-03-01). "Tracy-Widom distribution, Airy2 process and its sample path properties". Applied Mathematics-A Journal of Chinese Universities. 36 (1): 128–158. doi:10.1007/s11766-021-4251-2. ISSN 1993-0445. S2CID 237903590.
1. called "Hastings–McLeod solution". Published by Hastings, S.P., McLeod, J.B.: A boundary value problem associated with the second Painlevé transcendent and the Korteweg-de Vries equation. Arch. Ration. Mech. Anal. 73, 31–51 (1980)
References
• Baik, J.; Deift, P.; Johansson, K. (1999), "On the distribution of the length of the longest increasing subsequence of random permutations", Journal of the American Mathematical Society, 12 (4): 1119–1178, arXiv:math/9810105, doi:10.1090/S0894-0347-99-00307-0, JSTOR 2646100, MR 1682248.
• Bornemann, F. (2010), "On the numerical evaluation of distributions in random matrix theory: A review with an invitation to experimental mathematics", Markov Processes and Related Fields, 16 (4): 803–866, arXiv:0904.1581, Bibcode:2009arXiv0904.1581B.
• Chiani, M. (2014), "Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy–Widom distribution", Journal of Multivariate Analysis, 129: 69–81, arXiv:1209.3394, doi:10.1016/j.jmva.2014.04.002, S2CID 15889291.
• Sasamoto, Tomohiro; Spohn, Herbert (2010), "One-Dimensional Kardar-Parisi-Zhang Equation: An Exact Solution and its Universality", Physical Review Letters, 104 (23): 230602, arXiv:1002.1883, Bibcode:2010PhRvL.104w0602S, doi:10.1103/PhysRevLett.104.230602, PMID 20867222, S2CID 34945972
• Deift, P. (2007), "Universality for mathematical and physical systems" (PDF), International Congress of Mathematicians (Madrid, 2006), European Mathematical Society, pp. 125–152, arXiv:math-ph/0603038, doi:10.4171/022-1/7, MR 2334189.
• Dieng, Momar (2006), RMLab, a MATLAB package for computing Tracy-Widom distributions and simulating random matrices.
• Domínguez-Molina, J.Armando (2017), "The Tracy-Widom distribution is not infinitely divisible", Statistics & Probability Letters, 213 (1): 56–60, arXiv:1601.02898, doi:10.1016/j.spl.2016.11.029, S2CID 119676736.
Telecommunications forecasting
All telecommunications service providers perform forecasting calculations to assist them in planning their networks.[1] Accurate forecasting helps operators to make key investment decisions relating to product development and introduction, advertising, pricing etc., well in advance of product launch, which helps to ensure that the company will make a profit on a new venture and that capital is invested wisely.[2]
Why is forecasting used?
Forecasting can be conducted for many purposes, so it is important that the reason for performing the calculation is clearly defined and understood. Some common reasons for forecasting include:[2]
• Planning and Budgeting – Using forecast data can help network planners decide how much equipment to purchase and where to place it to ensure optimum management of traffic loads.
• Evaluation – Forecasting can help management decide if decisions that have been made will be to the advantage or detriment of the company.
• Verification – As new forecast data becomes available it is necessary to check whether new forecasts confirm the outcomes predicted by the old forecasts.
Knowing the purpose of the forecast will help to answer additional questions such as the following:[2]
• What is being forecast? – events, trends, variables, technology
• Level of focus – focus on a single product or a whole line, focus on a single company or the entire industry
• How often is forecasting conducted? – daily, weekly, monthly, annually
• Do the methods used reflect the decisions needed to be taken by management?
• What are the resources available to make decisions? – lead-time, staff, relevant data, budget, etc.
• What are the types of errors that could occur and what will they cost the company?
Factors influencing forecasting
When forecasting it is important to understand which factors may influence the calculation, and to what extent. A list of some common factors can be seen below:[2]
• Technology
• subscriber access – fibre, wireless, wired, cellular, TDMA, CDMA, handsets
• application – telephony, PBXs, ISDN, videoconferencing, LANs, teleconferencing, internetworking, WANs
• technology – broadband, narrowband, carriers, fibre to the curb, DSL
• Economics
• Global Economics – Economic climate, predictions, estimates, economic factors, interest rates, prime rate, growth, management's outlook, investors' confidence, politics
• Sectoral Economics – trends in industry, investors’ outlook, telecommunications, emerging technologies growth rate, recessions, and slowdowns
• Macroeconomics – inflation, GDP, exports, monetary exchange rates, imports, government deficit, economic health
• Demographics
• Measurement of number of people in regions – how many were born, are living and died within a time period
• The way people live – health, fertility, marriage rates, ageing rate, conception, mortality
Data preparation
Before forecasting is performed, the data being used must be "prepared". If the data contains errors, then the forecast result will be equally flawed. It is therefore vital that all anomalous data be removed, a procedure known as data "scrubbing".[2] Scrubbing data involves removing data points known as "outliers", that is, data that lie outside the normal pattern. Outliers are usually caused by anomalous and often unique events and so are unlikely to recur. Removing outliers improves data integrity and increases the accuracy of the forecast.
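A minimal sketch of such scrubbing, using a simple k-standard-deviation outlier rule on hypothetical call-volume data (both the rule and the threshold are illustrative; real traffic data usually needs more careful, domain-specific checks):

```python
# Minimal "scrubbing" sketch: drop points more than k sample standard
# deviations from the mean.  (k = 2 here because the sample is tiny;
# the rule and threshold are illustrative, not a recommendation.)
from statistics import mean, stdev

def scrub(series, k=2.0):
    mu, sigma = mean(series), stdev(series)
    return [x for x in series if abs(x - mu) <= k * sigma]

# Hypothetical hourly call counts with one anomalous spike:
calls = [120, 118, 125, 122, 119, 900, 121]
print(scrub(calls))  # -> [120, 118, 125, 122, 119, 121]
```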
Forecasting methods
There are many different methods used to conduct forecasting. They can be divided into different groups based on the theories according to which they were developed:[2]
Judgment-based methods
Judgment-based methods rely on the opinions and knowledge of people who have considerable experience in the area for which the forecast is being conducted. There are two main judgment-based methods:[2]
• Delphi method – The Delphi method involves directing a series of questions to experts. The experts provide their estimates regarding future development. The researcher summarizes the replies and sends the summary back to the experts, asking them if they wish to revise their opinions. The Delphi method is not very reliable and has only worked successfully in very rare cases.
• Extrapolation – Extrapolation is the usual method of forecasting. It is based on the assumption that future events will continue to develop along the same lines as previous events, i.e. the past is a good predictor of the future. The researcher first acquires data about previous events and plots it. He then determines whether a pattern has emerged and, if so, attempts to extend the pattern into the future, thereby generating a forecast of what is likely to happen. To extend patterns, researchers generally use a simple extrapolation rule, such as the S-shaped logistic function, Gompertz curves, or the catastrophic curve, to help them in their extrapolation. It is in deciding which rule to use that the researcher's judgment is required.
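As a sketch of rule-based extrapolation, the S-shaped logistic curve y(t) = K / (1 + e^(a − bt)) can be fitted by linearising log(K/y − 1) = a − bt and applying ordinary least squares; the subscriber counts and the saturation level K below are hypothetical, and K is assumed known (e.g. the size of the addressable market):

```python
# Fit a logistic curve  y(t) = K / (1 + exp(a - b*t))  to past data by
# linearising  log(K/y - 1) = a - b*t, then extend it into the future.
from math import exp, log

def fit_logistic(ts, ys, K):
    zs = [log(K / y - 1) for y in ys]             # linearised data
    n = len(ts)
    tbar, zbar = sum(ts) / n, sum(zs) / n
    b = -sum((t - tbar) * (z - zbar) for t, z in zip(ts, zs)) \
        / sum((t - tbar) ** 2 for t in ts)        # least-squares slope is -b
    a = zbar + b * tbar
    return lambda t: K / (1 + exp(a - b * t))

# Hypothetical yearly subscriber counts (thousands), market ceiling K = 100:
years = [0, 1, 2, 3, 4]
subs = [5.0, 11.0, 22.4, 40.1, 60.0]
forecast = fit_logistic(years, subs, K=100.0)
print(round(forecast(8), 1))   # extrapolated value for year 8
```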
Survey methods
Survey methods are based on the opinions of customers and are thus reasonably accurate if performed correctly. In performing a survey, the survey’s target group needs to be identified.[3] This can be achieved by considering why the forecast is being conducted in the first place. Once the target group has been identified, a sample must be chosen. The sample is a sub-set of the target and must be chosen so that it accurately reflects everyone in the target group.[3] The survey must then pose a series of questions to the sample group and their answers must be recorded.
The recorded answers must then be analyzed using statistical and analytical methods, such as computing the average opinion and the variation about that mean.[3] The results of the analysis should then be checked using alternative forecasting methods, after which the results can be published.[3] It must be kept in mind that this method is only accurate if the sample is a balanced and accurate subset of the target group and if the sample group has answered the questions accurately.[3]
Time series methods
Time series methods are based on measurements taken of events on a periodic basis.[2] These methods use such data to develop models which can then be used to extrapolate into the future, thereby generating the forecast. Each model operates according to a different set of assumptions and is designed for a different purpose. Examples of Time Series Methods are:[2]
• Exponential smoothing – This method is based on a moving average of the data being analyzed, e.g. a moving average of sales figures
• Cyclical and seasonal trends – This method focuses on previous data to help define a pattern or trend that occurs in cyclic or seasonal periods. Researchers can then use current data to adjust the pattern so that it fits this period’s data, and in so doing can forecast what will happen during the remainder of the current season or cycle.
• Statistical models – Statistical models allow the researcher to develop statistical relationships between variables. These models are based on current data, and by means of extrapolation a future model can be created. Extrapolation techniques are based on standard statistical laws, thus improving the accuracy of the prediction. Statistical techniques not only produce forecasts but also quantify their precision and reliability. Examples of this are the Erlang B and C formulae, developed in 1917 by the Danish mathematician Agner Erlang.
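As an illustration of such a statistical model, the Erlang B blocking probability for n circuits offered a erlangs of traffic can be computed with its standard recurrence B(0, a) = 1, B(k, a) = a·B(k−1, a) / (k + a·B(k−1, a)), which avoids large factorials; the traffic values below are hypothetical:

```python
# Erlang B blocking probability via the numerically stable recurrence
#   B(0, a) = 1,   B(k, a) = a*B(k-1, a) / (k + a*B(k-1, a))

def erlang_b(n, a):
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

# Blocking probability for 10 circuits offered 5 erlangs of traffic:
print(round(erlang_b(10, 5.0), 4))  # -> 0.0184
```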
Analogous methods
Analogous Methods involve finding similarities between foreign events and the events that are being studied. The foreign events are usually selected at a time when they are more "mature" than current events. No foreign event will perfectly mirror current events and this must be kept in mind so that any necessary corrections can be made. By examining the foreign, more mature, set of events, the future of current events can be forecast.[2]
Analogous methods can be split up into two groups namely:[2]
• Qualitative (symbolical) models
• Quantitative (numeric) models
Causal models
Causal Models are the most accurate form of forecasting, and the most complex. They involve creating a complex and complete model of the events being forecast. The model must include all possible variables, and must be able to predict every possible outcome.
Causal Models are often so complex that they can only be created on computers. They are developed using data from a set of events. The model is only as accurate as the data used to develop it.[2]
Combination forecasts
Combination forecasts combine the methods discussed above. The advantage is that in most cases accuracy is increased; however, a researcher must be careful that the disadvantages of each of the above methods do not combine to produce compound errors in forecasts. Examples of combination forecasts include "integration of judgment and quantitative forecasts" and "simple and weighted averages".[2]
Determining forecast accuracy
It is difficult to determine the accuracy of any forecast, as it represents an attempt to predict future events. To help improve and test forecast accuracy, researchers use several checking methods. A simple check is to apply several different forecasting methods and compare the results to see whether they are more or less equal. Another is to calculate the errors in the forecasting calculation statistically and express them as a root mean squared error, giving an indication of the overall error of the method. A sensitivity analysis can also be useful, as it determines what will happen if some of the original data upon which the forecast was developed turns out to be wrong. Determining forecast accuracy, like forecasting itself, can never be performed with certainty, so it is advisable to ensure that input data is measured and obtained as accurately as possible, that the most appropriate forecasting methods are selected, and that the forecasting process is conducted as rigorously as possible.[2]
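A minimal sketch of the root-mean-squared-error check described above, with hypothetical forecast and later-observed values:

```python
# Root mean squared error between forecast values and the values actually
# observed later; a lower RMSE indicates a more accurate forecasting method.
from math import sqrt

def rmse(forecast, actual):
    return sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual))
                / len(actual))

predicted = [100, 110, 120, 130]
observed = [98, 112, 119, 135]
print(round(rmse(predicted, observed), 3))  # -> 2.915
```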
References
1. Farr R.E., Telecommunications Traffic, Tariffs and Costs – An Introduction For Managers, Peter Peregrinus, 1988.
2. Kennedy I. G., Forecasting, School of Electrical and Information Engineering, University of the Witwatersrand, 2003.
3. Goodman A., Surveys and Sampling, 7 November 1999 http://deakin.edu.au/~agoodman/sci101/index.html Last accessed 30 January 2005.
Traffic equations
In queueing theory, a discipline within the mathematical theory of probability, traffic equations are equations that describe the mean arrival rate of traffic, allowing the arrival rates at individual nodes to be determined. Mitrani notes "if the network is stable, the traffic equations are valid and can be solved."[1]: 125
Jackson network
In a Jackson network, the mean arrival rate $\lambda _{i}$ at each node i in the network is given by the sum of external arrivals (that is, arrivals from outside the network directly placed onto node i, if any), and internal arrivals from each of the other nodes on the network. If external arrivals at node i have rate $\gamma _{i}$, and the routing matrix[2] is P, the traffic equations are,[3] (for i = 1, 2, ..., m)
$\lambda _{i}=\gamma _{i}+\sum _{j=1}^{m}p_{ji}\lambda _{j}.$
This can be written in matrix form as
$\lambda (I-P)=\gamma \,,$
and there is a unique solution for the unknowns $\lambda _{i}$ to this equation, so the mean arrival rates at each of the nodes can be determined given knowledge of the external arrival rates $\gamma _{i}$ and the matrix P. The matrix I − P is necessarily non-singular, as otherwise in the long run the network would become empty.[1]
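As a sketch of how the traffic equations can be solved in practice, the fixed-point iteration below repeatedly substitutes the current estimate into the right-hand side; it converges when P is strictly substochastic (some probability leaves the network, which is what makes I − P non-singular). The three-node network used here is hypothetical:

```python
# Solve  lambda_i = gamma_i + sum_j p[j][i] * lambda_j  by fixed-point
# iteration, starting from the external arrival rates.

def solve_traffic_equations(gamma, P, tol=1e-12, max_iter=10_000):
    m = len(gamma)
    lam = list(gamma)
    for _ in range(max_iter):
        new = [gamma[i] + sum(P[j][i] * lam[j] for j in range(m))
               for i in range(m)]
        if max(abs(new[i] - lam[i]) for i in range(m)) < tol:
            return new
        lam = new
    raise RuntimeError("iteration did not converge")

# Hypothetical 3-node network: external arrivals at node 0 only; jobs move
# 0 -> 1 -> 2 and then leave with probability 1/2, otherwise return to node 0.
P = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.5, 0.0, 0.0]]
gamma = [1.0, 0.0, 0.0]
print([round(x, 6) for x in solve_traffic_equations(gamma, P)])  # -> [2.0, 2.0, 2.0]
```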
Gordon–Newell network
In a Gordon–Newell network there are no external arrivals, so the traffic equations take the form (for i = 1, 2, ..., m)
$\lambda _{i}=\sum _{j=1}^{m}p_{ji}\lambda _{j}.$
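Since the closed-network equations only determine $\lambda _{i}$ up to a common scale factor (λ is a left eigenvector of P with eigenvalue 1), a solution can be sketched as a normalised power iteration; the three-node routing matrix below is hypothetical:

```python
# Power iteration for  lambda_i = sum_j p[j][i] * lambda_j,  normalising at
# each step to fix the free scale factor.

def relative_throughputs(P, iters=2000):
    m = len(P)
    lam = [1.0 / m] * m
    for _ in range(iters):
        lam = [sum(P[j][i] * lam[j] for j in range(m)) for i in range(m)]
        s = sum(lam)
        lam = [x / s for x in lam]
    return lam

# Hypothetical closed 3-node network (the self-loop at node 1 makes the
# routing chain aperiodic, so the iteration converges):
P = [[0.0, 0.5, 0.5],
     [0.5, 0.5, 0.0],
     [1.0, 0.0, 0.0]]
print([round(x, 6) for x in relative_throughputs(P)])  # -> [0.4, 0.4, 0.2]
```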
Notes
1. Mitrani, I. (1997). "Queueing networks". Probabilistic Modelling. p. 122. doi:10.1017/CBO9781139173087.005. ISBN 9781139173087.
2. As explained in the Jackson network article, jobs travel among the nodes following a fixed routing matrix.
3. Harrison, Peter G.; Patel, Naresh M. (1992). Performance Modelling of Communication Networks and Computer Architectures. Addison-Wesley. ISBN 0-201-54419-9.
Traffic model
A traffic model is a mathematical model of real-world traffic, usually, but not restricted to, road traffic. Traffic modeling draws heavily on theoretical foundations like network theory and certain theories from physics like the kinematic wave model. The interesting quantity being modeled and measured is the traffic flow, i.e. the throughput of mobile units (e.g. vehicles) per time and transportation medium capacity (e.g. road or lane width). Models can teach researchers and engineers how to ensure an optimal flow with a minimum number of traffic jams.
Traffic models often are the basis of a traffic simulation.[1]
Types
Microscopic traffic flow model
Traffic flow is assumed to depend on individual mobile units, i.e. cars, which are explicitly modeled
Macroscopic traffic flow model
Only the mass action or the statistical properties of a large number of units is analyzed
Examples
• Biham–Middleton–Levine traffic model
• Traffic generation model
• History of network traffic models
• Traffic mix
• Intelligent driver model
• Network traffic
• Three-phase traffic theory
• Two-fluid model
See also
• Braess's paradox
• Gridlock
• Mobility model
• Network traffic
• Network traffic simulation
• Traffic bottleneck
• Traffic flow
• Traffic wave
• Queueing theory
• Traffic equations
References
1. Mahmud, Khizir; Town, Graham E. (June 2016). "A review of computer tools for modeling electric vehicle energy requirements and their impact on power distribution networks". Applied Energy. 172: 337–359. doi:10.1016/j.apenergy.2016.03.100.
External links
• http://math.mit.edu/projects/traffic/
• Takashi Nagatani (2002). "The physics of traffic jams". Rep. Prog. Phys. INSTITUTE OF PHYSICS PUBLISHING. 65 (9): 1331–1386. Bibcode:2002RPPh...65.1331N. CiteSeerX 10.1.1.205.6595. doi:10.1088/0034-4885/65/9/203.
• "The Physics of Gridlock". The Atlantic. December 2000.
Trailing zero
In mathematics, trailing zeros are a sequence of 0 in the decimal representation (or more generally, in any positional representation) of a number, after which no other digits follow.
Trailing zeros to the right of a decimal point, as in 12.340, do not affect the value of a number and may be omitted if all that is of interest is its numerical value. This is true even if the zeros recur infinitely. For example, in pharmacy, trailing zeros are omitted from dose values to prevent misreading. However, trailing zeros may be useful for indicating the number of significant figures, for example in a measurement. In such a context, "simplifying" a number by removing trailing zeros would be incorrect.
The number of trailing zeros in a non-zero base-b integer n equals the exponent of the highest power of b that divides n. For example, 14000 has three trailing zeros and is therefore divisible by 1000 = 103, but not by 104. This property is useful when looking for small factors in integer factorization. Some computer architectures have a count trailing zeros operation in their instruction set for efficiently determining the number of trailing zero bits in a machine word.
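The divisibility characterisation above translates directly into a loop that strips factors of b; a minimal sketch (n must be non-zero):

```python
# Exponent of the largest power of b dividing a non-zero integer n,
# i.e. the number of trailing zeros of n written in base b.

def trailing_zeros(n, b=10):
    count = 0
    while n % b == 0:          # n must be non-zero, or this never stops
        n //= b
        count += 1
    return count

print(trailing_zeros(14000))   # -> 3   (14000 is divisible by 10**3, not 10**4)
print(trailing_zeros(88, 2))   # -> 3   (88 = 0b1011000: three trailing zero bits)
```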
Factorial
The number of trailing zeros in the decimal representation of n!, the factorial of a non-negative integer n, is simply the multiplicity of the prime factor 5 in n!. This can be determined with this special case of de Polignac's formula:[1]
$f(n)=\sum _{i=1}^{k}\left\lfloor {\frac {n}{5^{i}}}\right\rfloor =\left\lfloor {\frac {n}{5}}\right\rfloor +\left\lfloor {\frac {n}{5^{2}}}\right\rfloor +\left\lfloor {\frac {n}{5^{3}}}\right\rfloor +\cdots +\left\lfloor {\frac {n}{5^{k}}}\right\rfloor ,\,$
where k must be chosen such that
$5^{k+1}>n,\,$
more precisely
$5^{k}\leq n<5^{k+1},$
$k=\left\lfloor \log _{5}n\right\rfloor ,$
and $\lfloor a\rfloor $ denotes the floor function applied to a. For n = 0, 1, 2, ... this is
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 6, ... (sequence A027868 in the OEIS).
For example, 53 > 32, and therefore 32! = 263130836933693530167218012160000000 ends in
$\left\lfloor {\frac {32}{5}}\right\rfloor +\left\lfloor {\frac {32}{5^{2}}}\right\rfloor =6+1=7\,$
zeros. If n < 5, the inequality is satisfied by k = 0; in that case the sum is empty, giving the answer 0.
The formula actually counts the number of factors 5 in n!, but since there are at least as many factors 2, this is equivalent to the number of factors 10, each of which gives one more trailing zero.
Defining
$q_{i}=\left\lfloor {\frac {n}{5^{i}}}\right\rfloor ,\,$
the following recurrence relation holds:
${\begin{aligned}q_{0}\,\,\,\,\,&=\,\,\,n,\quad \\q_{i+1}&=\left\lfloor {\frac {q_{i}}{5}}\right\rfloor .\,\end{aligned}}$
This can be used to simplify the computation of the terms of the summation, which can be stopped as soon as $q_{i}$ reaches zero. The condition $5^{k+1}>n$ is equivalent to $q_{k+1}=0$.
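The recurrence can be sketched in a few lines, summing the successive quotients until they reach zero:

```python
# Trailing zeros of n! via the recurrence  q_0 = n,  q_{i+1} = floor(q_i / 5).

def factorial_trailing_zeros(n):
    total, q = 0, n
    while q:
        q //= 5
        total += q
    return total

print(factorial_trailing_zeros(32))  # -> 7, as in the worked example for 32!
```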
See also
• Leading zero
• Trailing digit
References
1. Summarized from Factorials and Trailing Zeroes
External links
• Why are trailing fractional zeros important? for some examples of when trailing zeros are significant
• Python program to calculate the number of trailing zeros for any factorial Archived 2017-02-22 at the Wayback Machine
Train track (mathematics)
In the mathematical area of topology, a train track is a family of curves embedded on a surface, meeting the following conditions:
1. The curves meet at a finite set of vertices called switches.
2. Away from the switches, the curves are smooth and do not touch each other.
3. At each switch, three curves meet with the same tangent line, with two curves entering from one direction and one from the other.
The main application of train tracks in mathematics is to study laminations of surfaces, that is, partitions of closed subsets of surfaces into unions of smooth curves. Train tracks have also been used in graph drawing.
Train tracks and laminations
A lamination of a surface is a partition of a closed subset of the surface into smooth curves. The study of train tracks was originally motivated by the following observation: If a generic lamination on a surface is looked at from a distance by a myopic person, it will look like a train track.
A switch in a train track models a point where two families of parallel curves in the lamination merge to become a single family, as shown in the illustration. Although the switch consists of three curves ending in and intersecting at a single point, the curves in the lamination do not have endpoints and do not intersect each other.
For this application of train tracks to laminations, it is often important to constrain the shapes that can be formed by connected components of the surface between the curves of the track. For instance, Penner and Harer require that each such component, when glued to a copy of itself along its boundary to form a smooth surface with cusps, have negative cusped Euler characteristic.
A train track with weights, or weighted train track or measured train track, consists of a train track with a non-negative real number, called a weight, assigned to each branch. The weights can be used to model which of the curves in a parallel family of curves from a lamination are split to which sides of the switch. Weights must satisfy the following switch condition: The weight assigned to the ingoing branch at a switch should equal the sum of the weights assigned to the branches outgoing from that switch. Weights are closely related to the notion of carrying. A train track is said to carry a lamination if there is a train track neighborhood such that every leaf of the lamination is contained in the neighborhood and intersects each vertical fiber transversely. If each vertical fiber has nontrivial intersection with some leaf, then the lamination is fully carried by the train track.
References
• Penner, R. C.; Harer, J. L. (1992). Combinatorics of Train Tracks. Annals of Mathematics Studies. Princeton University Press. ISBN 0-691-02531-2.
Train track map
In the mathematical subject of geometric group theory, a train track map is a continuous map f from a finite connected graph to itself which is a homotopy equivalence and which has particularly nice cancellation properties with respect to iterations. This map sends vertices to vertices and edges to nontrivial edge-paths with the property that for every edge e of the graph and for every positive integer n the path fn(e) is immersed, that is fn(e) is locally injective on e. Train-track maps are a key tool in analyzing the dynamics of automorphisms of finitely generated free groups and in the study of the Culler–Vogtmann Outer space.
History
Train track maps for free group automorphisms were introduced in a 1992 paper of Bestvina and Handel.[1] The notion was motivated by Thurston's train tracks on surfaces, but the free group case is substantially different and more complicated. In their 1992 paper Bestvina and Handel proved that every irreducible automorphism of Fn has a train-track representative. In the same paper they introduced the notion of a relative train track and applied train track methods to solve[1] the Scott conjecture, which says that for every automorphism α of a finitely generated free group Fn the fixed subgroup of α is free of rank at most n. In a subsequent paper[2] Bestvina and Handel applied the train track techniques to obtain an effective proof of Thurston's classification of homeomorphisms of compact surfaces (with or without boundary), which says that every such homeomorphism is, up to isotopy, either reducible, of finite order, or pseudo-Anosov.
Since then train tracks have become a standard tool in the study of algebraic, geometric and dynamical properties of automorphisms of free groups and of subgroups of Out(Fn). Train tracks are particularly useful since they allow one to understand long-term growth (in terms of length) and cancellation behavior for large iterates of an automorphism of Fn applied to a particular conjugacy class in Fn. This information is especially helpful when studying the dynamics of the action of elements of Out(Fn) on the Culler–Vogtmann Outer space and its boundary and when studying actions of Fn on real trees.[3][4][5] Examples of applications of train tracks include: a theorem of Brinkmann[6] proving that for an automorphism α of Fn the mapping torus group of α is word-hyperbolic if and only if α has no periodic conjugacy classes; a theorem of Bridson and Groves[7] that for every automorphism α of Fn the mapping torus group of α satisfies a quadratic isoperimetric inequality; a proof of algorithmic solvability of the conjugacy problem for free-by-cyclic groups;[8] and others.
Train tracks were a key tool in the proof by Bestvina, Feighn and Handel that the group Out(Fn) satisfies the Tits alternative.[9][10]
The machinery of train tracks for injective endomorphisms of free groups was later developed by Dicks and Ventura.[11]
Formal definition
Combinatorial map
For a finite graph Γ (which is thought of here as a 1-dimensional cell complex) a combinatorial map is a continuous map
f : Γ → Γ
such that:
• The map f takes vertices to vertices.
• For every edge e of Γ its image f(e) is a nontrivial edge-path e1...em in Γ where m ≥ 1. Moreover, e can be subdivided into m intervals such that the interior of the i-th interval is mapped by f homeomorphically onto the interior of the edge ei for i = 1,...,m.
Train track map
Let Γ be a finite connected graph. A combinatorial map f : Γ → Γ is called a train track map if for every edge e of Γ and every integer n ≥ 1 the edge-path fn(e) contains no backtracks, that is, it contains no subpaths of the form hh−1 where h is an edge of Γ. In other words, the restriction of fn to e is locally injective (or an immersion) for every edge e and every n ≥ 1.
When applied to the case n = 1, this definition implies, in particular, that the path f(e) has no backtracks.
Topological representative
Let Fk be a free group of finite rank k ≥ 2. Fix a free basis A of Fk and an identification of Fk with the fundamental group of the rose Rk which is a wedge of k circles corresponding to the basis elements of A.
Let φ ∈ Out(Fk) be an outer automorphism of Fk.
A topological representative of φ is a triple (τ, Γ, f) where:
• Γ is a finite connected graph with first Betti number k (so that the fundamental group of Γ is free of rank k).
• τ : Rk → Γ is a homotopy equivalence (which, in this case, means that τ is a continuous map which induces an isomorphism at the level of fundamental groups).
• f : Γ → Γ is a combinatorial map which is also a homotopy equivalence.
• If σ : Γ → Rk is a homotopy inverse of τ then the composition
σfτ : Rk → Rk
induces an automorphism of Fk = π1(Rk) whose outer automorphism class is equal to φ.
The map τ in the above definition is called a marking and is typically suppressed when topological representatives are discussed. Thus, by abuse of notation, one often says that in the above situation f : Γ → Γ is a topological representative of φ.
Train track representative
Let φ ∈ Out(Fk) be an outer automorphism of Fk. A train track map which is a topological representative of φ is called a train track representative of φ.
Legal and illegal turns
Let f : Γ → Γ be a combinatorial map. A turn is an unordered pair e, h of oriented edges of Γ (not necessarily distinct) having a common initial vertex. A turn e, h is degenerate if e = h and nondegenerate otherwise.
A turn e, h is illegal if for some n ≥ 1 the paths fn(e) and fn(h) have a nontrivial common initial segment (that is, they start with the same edge). A turn is legal if it is not illegal.
An edge-path e1,..., em is said to contain turns ei−1, ei+1 for i = 1,...,m−1.
A combinatorial map f : Γ → Γ is a train-track map if and only if for every edge e of Γ the path f(e) contains no illegal turns.
Derivative map
Let f : Γ → Γ be a combinatorial map and let E be the set of oriented edges of Γ. Then f determines its derivative map Df : E → E where for every edge e Df(e) is the initial edge of the path f(e). The map Df naturally extends to the map Df : T → T where T is the set of all turns in Γ. For a turn t given by an edge-pair e, h, its image Df(t) is the turn Df(e), Df(h). A turn t is legal if and only if for every n ≥ 1 the turn (Df)n(t) is nondegenerate. Since the set T of turns is finite, this fact allows one to algorithmically determine if a given turn is legal or not and hence to algorithmically decide, given f, whether or not f is a train-track map.
Examples
Let φ be the automorphism of F(a,b) given by φ(a) = b, φ(b) = ab. Let Γ be the wedge of two loop-edges Ea and Eb corresponding to the free basis elements a and b, wedged at the vertex v. Let f : Γ → Γ be the map which fixes v and sends the edge Ea to Eb and that sends the edge Eb to the edge-path EaEb. Then f is a train track representative of φ.
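The train track condition in this example can be checked mechanically using the derivative map described above. The following Python sketch is illustrative only (the edge encoding and function names are not part of the standard treatment): reversed edges are denoted by swapping case, Df returns the initial edge of f(e), and legality is tested by iterating Df until the finite orbit of a turn repeats.

```python
# Oriented edges of the rose: 'a', 'b' and their reverses 'A', 'B'.
def inv(e):
    return e.swapcase()

# The example map: f sends the edge Ea to Eb and Eb to the edge-path EaEb.
F = {"a": "b", "b": "ab"}
# Extend to reversed edges: f(e^-1) is the reverse of f(e) with edges inverted.
F.update({inv(e): "".join(inv(x) for x in reversed(path)) for e, path in list(F.items())})

def Df(e):
    """Derivative map: the initial edge of the path f(e)."""
    return F[e][0]

def is_legal(turn):
    """A turn is legal iff no iterate of Df makes it degenerate; orbits of
    turns under Df are eventually periodic, so we iterate until a repeat."""
    seen = set()
    while turn not in seen:
        seen.add(turn)
        if turn[0] == turn[1]:  # degenerate turn reached: illegal
            return False
        turn = tuple(sorted((Df(turn[0]), Df(turn[1]))))
    return True

def turns_in(path):
    """Turns crossed by an edge-path e1...em: the pairs {inv(ei), e(i+1)}."""
    return [tuple(sorted((inv(path[i]), path[i + 1]))) for i in range(len(path) - 1)]

# f is a train track map iff no path f(e) crosses an illegal turn.
is_train_track = all(is_legal(t) for e in "abAB" for t in turns_in(F[e]))
print(is_train_track)  # True
```

Since the only turn crossed by the paths f(e) is legal, f is a train track map, in agreement with the example above.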
Main result for irreducible automorphisms
Irreducible automorphisms
An outer automorphism φ of Fk is said to be reducible if there exists a free product decomposition
$F_{k}=H_{1}\ast \dots \ast H_{m}\ast U$
where all Hi are nontrivial, where m ≥ 1 and where φ permutes the conjugacy classes of H1,...,Hm in Fk. An outer automorphism φ of Fk is said to be irreducible if it is not reducible.
It is known[1] that φ ∈ Out(Fk) is irreducible if and only if for every topological representative f : Γ → Γ of φ, where Γ is finite, connected and without degree-one vertices, any proper f-invariant subgraph of Γ is a forest.
Bestvina–Handel theorem for irreducible automorphisms
The following result was obtained by Bestvina and Handel in their 1992 paper[1] where train track maps were originally introduced:
Let φ ∈ Out(Fk) be irreducible. Then there exists a train track representative of φ.
Sketch of the proof
For a topological representative f : Γ → Γ of an automorphism φ of Fk the transition matrix M(f) is an r×r matrix (where r is the number of topological edges of Γ) whose entry mij is the number of times the path f(ej) passes through the edge ei (in either direction). If φ is irreducible, the transition matrix M(f) is irreducible in the sense of the Perron–Frobenius theorem and it has a unique Perron–Frobenius eigenvalue λ(f) ≥ 1 which is equal to the spectral radius of M(f).
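For the example f(Ea) = Eb, f(Eb) = EaEb from the earlier section, the transition matrix is [[0, 1], [1, 1]] and its Perron–Frobenius eigenvalue is the golden ratio. A small self-contained Python sketch, using a naive power iteration purely for illustration:

```python
# Transition matrix of the map f(Ea) = Eb, f(Eb) = EaEb: the entry M[i][j]
# counts how often the path f(ej) passes through the edge ei.
M = [[0, 1],
     [1, 1]]

def spectral_radius(M, iters=200):
    """Naive power iteration; for an irreducible nonnegative matrix it
    converges to the Perron-Frobenius eigenvalue."""
    v = [1.0] * len(M)
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

lam = spectral_radius(M)
print(round(lam, 6))  # 1.618034, the golden ratio
```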
One then defines a number of different moves on topological representatives of φ that are all seen to either decrease or preserve the Perron–Frobenius eigenvalue of the transition matrix. These moves include: subdividing an edge; valence-one homotopy (getting rid of a degree-one vertex); valence-two homotopy (getting rid of a degree-two vertex); collapsing an invariant forest; and folding. Of these moves the valence-one homotopy always reduces the Perron–Frobenius eigenvalue.
Starting with some topological representative f of an irreducible automorphism φ one then algorithmically constructs a sequence of topological representatives
f = f1, f2, f3,...
of φ where fn is obtained from fn−1 by several moves, specifically chosen. In this sequence, if fn is not a train track map, then the moves producing fn+1 from fn necessarily involve a sequence of folds followed by a valence-one homotopy, so that the Perron–Frobenius eigenvalue of fn+1 is strictly smaller than that of fn. The process is arranged in such a way that Perron–Frobenius eigenvalues of the maps fn take values in a discrete subset of $\mathbb {R} $. This guarantees that the process terminates in a finite number of steps and the last term fN of the sequence is a train track representative of φ.
Applications to growth
A consequence (requiring additional arguments) of the above theorem is the following:[1]
• If φ ∈ Out(Fk) is irreducible then the Perron–Frobenius eigenvalue λ(f) does not depend on the choice of a train track representative f of φ but is uniquely determined by φ itself and is denoted by λ(φ). The number λ(φ) is called the growth rate of φ.
• If φ ∈ Out(Fk) is irreducible and of infinite order then λ(φ) > 1. Moreover, in this case for every free basis X of Fk and for most nontrivial values of w ∈ Fk there exists C ≥ 1 such that for all n ≥ 1
${\frac {1}{C}}\lambda ^{n}(\phi )\leq ||\phi ^{n}(w)||_{X}\leq C\lambda ^{n}(\phi ),$
where ||u||X is the cyclically reduced length of an element u of Fk with respect to X. The only exceptions occur when Fk corresponds to the fundamental group of a compact surface with boundary S, and φ corresponds to a pseudo-Anosov homeomorphism of S, and w corresponds to a path going around a component of the boundary of S.
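For the earlier example φ(a) = b, φ(b) = ab, the growth estimate can be observed directly. The sketch below uses an illustrative encoding (uppercase letters denote inverse generators), applies φ with free reduction, and records cyclically reduced lengths; for this positive automorphism no cancellation occurs, the lengths of φn(a) are Fibonacci numbers, and the ratios converge to the growth rate λ(φ), the golden ratio.

```python
# phi(a) = b, phi(b) = ab, extended to inverse letters (uppercase).
PHI = {"a": "b", "b": "ab", "A": "B", "B": "BA"}

def apply(phi, w):
    """Apply phi letter by letter, freely reducing as we go."""
    out = []
    for letter in w:
        for x in phi[letter]:
            if out and out[-1] == x.swapcase():
                out.pop()  # cancel the subword x^-1 x
            else:
                out.append(x)
    return "".join(out)

def cyc_len(w):
    """Cyclically reduced length: strip inverse pairs at the two ends."""
    while len(w) > 1 and w[0] == w[-1].swapcase():
        w = w[1:-1]
    return len(w)

w, lengths = "a", []
for _ in range(12):
    w = apply(PHI, w)
    lengths.append(cyc_len(w))
print(lengths)                    # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
print(lengths[-1] / lengths[-2])  # ~1.618, approaching the growth rate
```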
Unlike for elements of mapping class groups, for an irreducible φ ∈ Out(Fk) it is often the case [12] that
λ(φ) ≠ λ(φ−1).
Applications and generalizations
• The first major application of train tracks was given in the original 1992 paper of Bestvina and Handel[1] where train tracks were introduced. The paper gave a proof of the Scott conjecture which says that for every automorphism α of a finitely generated free group Fn the fixed subgroup of α is free of rank at most n.
• In a subsequent paper[2] Bestvina and Handel applied the train track techniques to obtain an effective proof of Thurston's classification of homeomorphisms of compact surfaces (with or without boundary), which says that every such homeomorphism is, up to isotopy, either reducible, of finite order, or pseudo-Anosov.
• Train tracks are the main tool in Los' algorithm for deciding whether or not two irreducible elements of Out(Fn) are conjugate in Out(Fn).[13]
• A theorem of Brinkmann[6] proving that for an automorphism α of Fn the mapping torus group of α is word-hyperbolic if and only if α has no periodic conjugacy classes.
• A theorem of Levitt and Lustig showing that a fully irreducible automorphism of a Fn has "north-south" dynamics when acting on the Thurston-type compactification of the Culler–Vogtmann Outer space.[4]
• A theorem of Bridson and Groves[7] that for every automorphism α of Fn the mapping torus group of α satisfies a quadratic isoperimetric inequality.
• The proof by Bestvina, Feighn and Handel that the group Out(Fn) satisfies the Tits alternative.[9][10]
• An algorithm that, given an automorphism α of Fn, decides whether or not the fixed subgroup of α is trivial and finds a finite generating set for that fixed subgroup.[14]
• The proof of algorithmic solvability of the conjugacy problem for free-by-cyclic groups by Bogopolski, Martino, Maslakova, and Ventura.[8]
• The machinery of train tracks for injective endomorphisms of free groups, generalizing the case of automorphisms, was developed in a 1996 book of Dicks and Ventura.[11]
See also
• Geometric group theory
• Real tree
• Mapping class group
• Free group
• Out(Fn)
Basic references
• Bestvina, Mladen; Handel, Michael (1992). "Train tracks and automorphisms of free groups". Annals of Mathematics. Second Series. 135 (1): 1–51. doi:10.2307/2946562. JSTOR 2946562. MR 1147956.
• Warren Dicks, and Enric Ventura. The group fixed by a family of injective endomorphisms of a free group. Contemporary Mathematics, 195. American Mathematical Society, Providence, RI, 1996. ISBN 0-8218-0564-9
• Oleg Bogopolski. Introduction to group theory. EMS Textbooks in Mathematics. European Mathematical Society, Zürich, 2008. ISBN 978-3-03719-041-8
Footnotes
1. Mladen Bestvina, and Michael Handel, Train tracks and automorphisms of free groups. Annals of Mathematics (2), vol. 135 (1992), no. 1, pp. 1–51
2. Mladen Bestvina and Michael Handel. Train-tracks for surface homeomorphisms. Topology, vol. 34 (1995), no. 1, pp. 109–140.
3. M. Bestvina, M. Feighn, M. Handel, Laminations, trees, and irreducible automorphisms of free groups. Geometric and Functional Analysis, vol. 7 (1997), no. 2, 215–244
4. Gilbert Levitt and Martin Lustig, Irreducible automorphisms of Fn have north-south dynamics on compactified outer space. Journal of the Institute of Mathematics of Jussieu, vol. 2 (2003), no. 1, 59–72
5. Gilbert Levitt, and Martin Lustig, Automorphisms of free groups have asymptotically periodic dynamics. Crelle's Journal, vol. 619 (2008), pp. 1–36
6. P. Brinkmann, Hyperbolic automorphisms of free groups. Geometric and Functional Analysis, vol. 10 (2000), no. 5, pp. 1071–1089
7. Martin R. Bridson and Daniel Groves. The quadratic isoperimetric inequality for mapping tori of free-group automorphisms. Memoirs of the American Mathematical Society, to appear.
8. O. Bogopolski, A. Martino, O. Maslakova, E. Ventura, The conjugacy problem is solvable in free-by-cyclic groups. Bulletin of the London Mathematical Society, vol. 38 (2006), no. 5, pp. 787–794
9. Mladen Bestvina, Mark Feighn, and Michael Handel. The Tits alternative for Out(Fn). I. Dynamics of exponentially-growing automorphisms. Annals of Mathematics (2), vol. 151 (2000), no. 2, pp. 517–623
10. Mladen Bestvina, Mark Feighn, and Michael Handel. The Tits alternative for Out(Fn). II. A Kolchin type theorem. Annals of Mathematics (2), vol. 161 (2005), no. 1, pp. 1–59
11. Warren Dicks, and Enric Ventura. The group fixed by a family of injective endomorphisms of a free group. Contemporary Mathematics, 195. American Mathematical Society, Providence, RI, 1996. ISBN 0-8218-0564-9
12. Michael Handel, and Lee Mosher, The expansion factors of an outer automorphism and its inverse. Transactions of the American Mathematical Society, vol. 359 (2007), no. 7, pp. 3185–3208
13. Jérôme E. Los, On the conjugacy problem for automorphisms of free groups. Topology, vol. 35 (1996), no. 3, pp. 779–806
14. O. S. Maslakova. The fixed point group of a free group automorphism. (Russian). Algebra Logika, vol. 42 (2003), no. 4, pp. 422–472; translation in Algebra and Logic, vol. 42 (2003), no. 4, pp. 237–265
External links
• Peter Brinkmann's minicourse notes on train tracks
Traité de mécanique céleste
Traité de mécanique céleste (transl. "Treatise of celestial mechanics") is a five-volume treatise on celestial mechanics written by Pierre-Simon Laplace and published from 1798 to 1825 with a second edition in 1829.[1][2][3][4] In 1842, the government of Louis Philippe gave a grant of 40,000 francs for a 7-volume national edition of the Oeuvres de Laplace (1843–1847); the Traité de mécanique céleste with its four supplements occupies the first 5 volumes.[5]
Newton laid the foundations of Celestial Mechanics, at the close of the seventeenth century, by the discovery of the principle of universal gravitation. Even in his own hands, this discovery led to important consequences, but it has required a century and a half, and a regular succession of intellects the most powerful, to fill up the outline sketched by him. Of these, Laplace himself was the last, and, perhaps after Newton, the greatest; and the task commenced in the Principia of the former, is completed in the Mécanique Céleste of the latter. In this last named work, the illustrious author has proposed to himself his object, to unite all the theories scattered throughout the various channels of publication, employed by his predecessors, to reduce them to one common method, and present them all in the same point of view.[6]
If one were asked to name the two most important works in the progress of mathematics and physics, the answer would undoubtedly be, the Principia of Newton and the Mécanique Céleste of Laplace. In their historical and philosophical aspects these works easily outrank all others, and furnish thus the standard by which all others must be measured. The distinguishing feature of the Principia is its clear and exhaustive enunciation of fundamental principles. The Mécanique Céleste, on the other hand, is conspicuous for the development of principles and for the profound generality of its methods. The Principia gives the plans and specifications of the foundations; the Mécanique Céleste affords the key to the vast and complex superstructure.[7]
Traité de mécanique céleste
Author: Pierre-Simon Laplace
Language: French
Published: 1798 to 1825
Tome I. (1798)
Livre I. Des lois générales de l'équilibre et du mouvement
• Chap. I. De l'équilibre et de la composition des forces qui agissent sur un point matériel
• Chap. II. Du mouvement d'un point matériel
• Chap. III. De l'équilibre d'un système de corps
• Chap. IV. De l'équilibre des fluides
• Chap. V. Principes généraux du mouvement d'un système de corps
• Chap. VI. Des lois du mouvement d'un système de corps, dans toutes les relations mathématiquement possibles entre la force et la vitesse
• Chap. VII. Des mouvemens d'un corps solide de figure quelconque
• Chap. VIII. Du mouvement des fluides
Tome II. (1798)
Tome III. (1802)
Tome IV. (1805)
Tome V. (1825)
English translations
During the early nineteenth century at least five English translations of Mécanique Céleste were published. In 1814 the Reverend John Toplis prepared a translation of Book 1 entitled The Mechanics of Laplace. Translated with Notes and Additions.[8] In 1821 Thomas Young anonymously published a further translation into English of the first book; beyond just translating from French to English he claimed in the preface to have translated the style of mathematics:
The translator flatters himself, however, that he has not expressed the author’s meaning in English words alone, but that he has rendered it perfectly intelligible to any person, who is conversant with the English mathematicians of the old school only, and that his book will serve as a connecting link between the geometrical and algebraical modes of representation.[9]
The Reverend Henry Harte, a fellow at Trinity College, Dublin translated the entire first volume of Mécanique Céleste, with Book 1 published in 1822 and Book 2 published separately in 1827.[10] Similarly to Bowditch (see below), Harte felt that Laplace's exposition was too brief, making his work difficult to understand:
... it may be safely asserted, that the chief obstacle to a more general knowledge of the work, arises from the summary manner in which the Author passes over the intermediate steps in several of his most interesting investigations.[11]
Bowditch's translation
The famous American mathematician Nathaniel Bowditch translated the first four volumes of the Traité de mécanique céleste but not the fifth volume;[12] however, Bowditch did make use of relevant portions of the fifth volume in his extensive commentaries for the first four volumes.[13]
The first four volumes of Dr. Bowditch's Translation and Commentary were published successively, in 1828, 1832, 1834, and 1839, at the sacrifice of one quarter of his whole property. The expense was largely increased by the voluminous commentary. This was really of the nature of an original work, and was rendered necessary by the frequent gaps which Laplace had left in his own publication. Mr. N. I. Bowditch says, in his biography of his father, that Dr. Bowditch was accustomed to remark, "Whenever I meet in Laplace with the words, Thus it plainly appears, I am sure that hours, and perhaps days, of hard study will alone enable me to discover how it plainly appears."[14]
Bowditch's translation of the first four volumes of Laplace's Traité de mécanique céleste was completed by 1818 but he would not publish it for many years. Almost certainly the cost of publication caused the delay, but Bowditch did not just put the work on one side after 1818 but continued to improve it over the succeeding years. Bowditch was helped by Benjamin Peirce in this project and his commentaries doubled the length of the book. His purpose was more than just an English translation. He wanted to supply steps omitted in the original text; to incorporate later results into the translation; and to give credits omitted by Laplace.[13]
Somerville's translation
In 1826, it was still felt by Henry Brougham, president of the Society for the Diffusion of Useful Knowledge, that the British reader was lacking a readable translation of Mécanique Céleste. He thus approached Mary Somerville, who began to prepare a translation which would "explain to the unlearned the sort of thing it is - the plan, the vast merit, the wonderful truths unfolded or methodized - and the calculus by which all this is accomplished".[15] In 1830, John Herschel wrote to Somerville and enclosed a copy of Bowditch's 1828 translation of Volume 1 which Herschel had just received. Undeterred, Somerville decided to continue with the preparation of her own work as she felt the two translations differed in their aims; whereas Bowditch's contained an overwhelming number of footnotes to explain each mathematical step, Somerville instead wished to state and demonstrate the results as clearly as possible.[16]
A year later, in 1831, Somerville's translation was published under the title Mechanism of the Heavens.[17] It received great critical acclaim, with complimentary reviews appearing in the Quarterly Review, the Edinburgh Review, and the Monthly Notices of the Royal Astronomical Society.[18]
References
1. Traité de mécanique céleste, 1798–1825.
2. Oeuvres de Laplace. Paris: Imprimerie royale, 1843–1847.
3. Laplace, Pierre Simon, marquis de. Traité de mécanique céleste, 1799–1825. Paris.
4. Laplace, Pierre Simon, marquis de (1829). Traité de mécanique céleste (deuxième ed.).
5. Clerke, Agnes Mary (1911). "Laplace, Pierre Simon" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 16 (11th ed.). Cambridge University Press. pp. 200–203.
6. Walsh, Robert (June 1829). "Review: Traité de Mécanique Céleste par M. Le Marquis de Laplace, Tome V. Paris, Bachelier". The American Quarterly Review. 5: 310–343.
7. Woodward, R. S. (August 1891). "Review of Tisserand's Mecånique Céleste". The Annals of Mathematics. 6 (2): 49–56. doi:10.2307/1967235. JSTOR 1967235.
8. Toplis, John (1814). The Mechanics of Laplace. Translated with Notes and Additions. London: Longmans Brown and Co.
9. Young, Thomas (1821). Elementary Illustrations of the Celestial Mechanics of Laplace, Part the First, Comprehending the First Book. London: John Murray.
10. Grattan-Guinness, Ivor. "Before Bowditch: Henry Harte's translation of books 1 and 2 of Laplace's Mécanique céleste". Schriftenreihe für Geschichte der Naturwissenschaften Technik und Medizin. 24 (2): 53–5.
11. Harte, Henry (1822). A Treatise of Celestial Mechanics, By P. S. Laplace. Dublin: Richard Milliken. pp. v.
12. Gillispie, Charles Coulston; Grattan-Guinness, Ivor (2000). Pierre-Simon Laplace, 1749-1827: a life in exact science. Princeton University Press. p. 283. ISBN 0691050279.
13. O'Connor, John J.; Robertson, Edmund F., "Nathaniel Bowditch", MacTutor History of Mathematics Archive, University of St Andrews
14. Lovering, Joseph (May 1888 – May 1889). "The "Mécanique Céleste" of Laplace, and Its Translation, with a Commentary by Bowditch". Proceedings of the American Academy of Arts and Sciences. 24: 185–201. doi:10.2307/20021561. JSTOR 20021561. (See p. 196 for quote.)
15. Somerville, Mary (1873). Personal Recollections, from Early Life to Old Age, of Mary Somerville. John Murray.
16. Patterson, Elizabeth Chambers (1983). Mary Somerville and the Cultivation of Science, 1815-1840. The Hague: Martinus Nijhoff. pp. 74–5.
17. Somerville, Mary (1831). Mechanism of the Heavens. London: John Murray.
18. Secord, James, ed. (2004). Collected Works of Mary Somerville. Vol. 1. Thoemmes Continuum.
External links
Translation by Nathaniel Bowditch
• Volume I, 1829
• Volume II, 1832
• Volume III, 1834
• Volume IV, 1839 with a memoir of the translator by his son
Trakhtenbrot's theorem
In logic, finite model theory, and computability theory, Trakhtenbrot's theorem (due to Boris Trakhtenbrot) states that the problem of validity in first-order logic on the class of all finite models is undecidable. In fact, the class of valid sentences over finite models is not recursively enumerable (though it is co-recursively enumerable).
Trakhtenbrot's theorem implies that Gödel's completeness theorem (that is fundamental to first-order logic) does not hold in the finite case. Also it seems counter-intuitive that being valid over all structures is 'easier' than over just the finite ones.
The theorem was first published in 1950: "The Impossibility of an Algorithm for the Decidability Problem on Finite Classes".[1]
Mathematical formulation
We follow the formulations as in Ebbinghaus and Flum[2]
Theorem
Satisfiability for finite structures is not decidable in first-order logic.
That is, the set {φ | φ is a sentence of first-order logic that is satisfiable among finite structures} is undecidable.
Corollary
Let σ be a relational vocabulary with at least one binary relation symbol.
The set of σ-sentences valid in all finite structures is not recursively enumerable.
Remarks
1. This implies that Gödel's completeness theorem fails in the finite since completeness implies recursive enumerability.
2. It follows that there is no recursive function f such that: if φ has a finite model, then it has a model of size at most f(φ). In other words, there is no effective analogue to the Löwenheim–Skolem theorem in the finite.
Intuitive proof
This proof is taken from Chapter 10, section 4, 5 of Mathematical Logic by H.-D. Ebbinghaus.
As in the most common proof of Gödel's first incompleteness theorem via the undecidability of the halting problem, for each Turing machine $M$ there is a corresponding arithmetical sentence $\phi _{M}$, effectively derivable from $M$, which is true if and only if $M$ halts on the empty tape. Intuitively, $\phi _{M}$ asserts "there exists a natural number that is the Gödel code for the computation record of $M$ on the empty tape that ends with halting".
If the machine $M$ halts, then its complete computation record is finite, so there is a finite initial segment of the natural numbers on which the arithmetical sentence $\phi _{M}$ is also true. Intuitively, this is because in this case proving $\phi _{M}$ requires the arithmetic properties of only finitely many numbers.
If the machine $M$ does not halt, then $\phi _{M}$ is false in every finite model, since there is no finite computation record of $M$ that ends with halting.
Thus, if $M$ halts, $\phi _{M}$ is true in some finite models. If $M$ does not halt, $\phi _{M}$ is false in all finite models. So, $M$ does not halt if and only if $\neg \phi _{M}$ is true over all finite models.
The set of machines that do not halt is not recursively enumerable, so the set of valid sentences over finite models is not recursively enumerable.
Alternative proof
In this section we exhibit a more rigorous proof from Libkin.[3] Note in the above statement that the corollary also entails the theorem, and this is the direction we prove here.
Theorem
For every relational vocabulary τ with at least one binary relation symbol, it is undecidable whether a sentence φ of vocabulary τ is finitely satisfiable.
Proof
According to the previous lemma, we can in fact use finitely many binary relation symbols. The idea of the proof is similar to the proof of Fagin's theorem, and we encode Turing machines in first-order logic. What we want to prove is that for every Turing machine M we construct a sentence φM of vocabulary τ such that φM is finitely satisfiable if and only if M halts on the empty input, which is equivalent to the halting problem and therefore undecidable.
Let M = ⟨Q, Σ, Δ, δ, q0, Qa, Qr⟩ be a deterministic Turing machine with a single infinite tape.
• Q is the set of states,
• Σ is the input alphabet,
• Δ is the tape alphabet,
• δ is the transition function,
• q0 is the initial state,
• Qa and Qr are the sets of accepting and rejecting states.
Since we are dealing with the problem of halting on an empty input we may assume w.l.o.g. that Δ={0,1} and that 0 represents a blank, while 1 represents some tape symbol. We define τ so that we can represent computations:
τ := {<, min, T0 (⋅,⋅), T1 (⋅,⋅), (Hq(⋅,⋅))(q ∈ Q)}
Where:
• < is a linear order and min is a constant symbol for the minimal element with respect to < (our finite domain will be associated with an initial segment of the natural numbers).
• T0 and T1 are tape predicates. Ti(s,t) indicates that position s at time t contains i, where i ∈ {0,1}.
• Hq's are head predicates. Hq(s,t) indicates that at time t the machine is in state q, and its head is in position s.
The sentence φM states that (i) <, min, the Ti's and the Hq's are interpreted as above and (ii) that the machine eventually halts. The halting condition is equivalent to saying that Hq∗(s, t) holds for some s, t and q∗ ∈ Qa ∪ Qr and that after that point the configuration of the machine does not change. The configurations of a halting computation (a non-halting computation is not finite) can be represented by a finite τ-structure which satisfies the sentence. The sentence φM is: φM ≡ α ∧ β ∧ γ ∧ η ∧ ζ ∧ θ.
We break it down by components:
• α states that < is a linear order and that min is its minimal element
• γ defines the initial configuration of M: it is in state q0, the head is in the first position and the tape contains only zeros: γ ≡ Hq0(min,min) ∧ ∀s T0 (s, min)
• η states that in every configuration of M, each tape cell contains exactly one element of Δ: ∀s∀t(T0(s, t) ↔ ¬ T1(s, t))
• β imposes a basic consistency condition on the predicates Hq's: at any time the machine is in exactly one state:
$\forall t\exists !s(\bigvee _{q\in Q}H_{q}(s,t))\land \neg \exists s\exists t(\bigvee _{q,q'\in Q,q\neq q'}H_{q}(s,t)\land H_{q'}(s,t))$
• ζ states that at some point M is in a halting state:
$\exists s\exists t\bigvee _{q\in Q_{a}\cup Q_{r}}H_{q}(s,t)$
• θ consists of a conjunction of sentences stating that Ti's and Hq's are well behaved with respect to the transitions of M. As an example, let δ(q,0)=(q',1, left) meaning that if M is in state q reading 0, then it writes 1, moves the head one position to the left and goes into the state q'. We represent this condition by the disjunction of θ0 and θ1:
$\theta _{0}\equiv \forall s\forall t{\big (}(s\neq {\underline {min}}\land T_{0}(s,t)\land H_{q}(s,t))\to \theta _{2}{\big )}$
Where θ2 is:
$T_{1}(s,t+1)\land H_{q'}(s-1,t+1)\land \forall s'(s\neq s'\to (\bigwedge _{i=0,1}T_{i}(s',t+1)\leftrightarrow T_{i}(s',t)))$
And:
$\theta _{1}\equiv \forall s\forall t{\big (}(s={\underline {min}}\land T_{0}(s,t)\land H_{q}(s,t))\to \theta _{3}{\big )}$
Where θ3 is:
$T_{1}(s,t+1)\land H_{q'}(s,t+1)\land \forall s'(s\neq s'\to (\bigwedge _{i=0,1}T_{i}(s',t+1)\leftrightarrow T_{i}(s',t)))$
s-1 and t+1 are first-order definable abbreviations for the predecessor and successor according to the ordering <. The sentence θ0 assures that the tape content in position s changes from 0 to 1, the state changes from q to q', the rest of the tape remains the same and that the head moves to s-1 (i. e. one position to the left), assuming s is not the first position in the tape. If it is, then all is handled by θ1: everything is the same, except the head does not move to the left but stays put.
If φM has a finite model, then such a model represents a computation of M that starts with the empty tape (i.e. a tape containing all zeros) and ends in a halting state. If M halts on the empty input, then the set of all configurations of the halting computation of M (coded with <, the Ti's and the Hq's) is a model of φM, which is finite, since the set of all configurations of a halting computation is finite. It follows that M halts on the empty input iff φM has a finite model. Since halting on the empty input is undecidable, the question of whether φM has a finite model ${\mathcal {A}}$ (equivalently, whether φM is finitely satisfiable) is also undecidable (recursively enumerable, but not recursive). This concludes the proof.
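To illustrate the shape of such a finite model, consider a hypothetical one-step machine with δ(q0, 0) = (qa, 1, right), where qa is accepting. The Python sketch below (names and encoding are illustrative, not from the source) builds the finite τ-structure coding its two-configuration computation on the domain {0, 1} and checks only the conjuncts γ, η and ζ, not the full sentence φM:

```python
# Toy machine: delta(q0, 0) = (qa, 1, right); it halts after one step.
# Domain: the initial segment {0, 1} of N, used for both positions and times.
D = range(2)
T1 = {(0, 1)}                              # cell 0 holds 1 at time 1
T0 = {(s, t) for s in D for t in D} - T1   # every other cell holds 0
H = {"q0": {(0, 0)}, "qa": {(1, 1)}}       # head position and state per time

# gamma: at time min = 0 the machine is in q0 at position 0 on an all-zero tape
gamma = (0, 0) in H["q0"] and all((s, 0) in T0 for s in D)
# eta: every cell holds exactly one symbol at every time
eta = all(((s, t) in T0) != ((s, t) in T1) for s in D for t in D)
# zeta: a halting (here accepting) state is eventually reached
zeta = bool(H["qa"])
print(gamma and eta and zeta)  # True
```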
Corollary
The set of finitely satisfiable sentences is recursively enumerable.
Proof
Enumerate all pairs $({\mathcal {A}},\phi )$ where ${\mathcal {A}}$ is finite and ${\mathcal {A}}\models \phi $.
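The enumeration argument can be sketched for a single sentence: fix φ and search finite structures of increasing size for a model. The Python fragment below is a hypothetical mini model-checker for one binary relation, with the sentence hard-coded as a Python predicate; it semi-decides finite satisfiability in this restricted setting.

```python
from itertools import product

def phi(n, R):
    """phi = (forall x: not R(x, x)) and (forall x exists y: R(x, y))."""
    irreflexive = all((x, x) not in R for x in range(n))
    serial = all(any((x, y) in R for y in range(n)) for x in range(n))
    return irreflexive and serial

def find_finite_model(phi, max_size=4):
    """Enumerate all structures (one binary relation R on {0,...,n-1})
    of size up to max_size and return the first model of phi found."""
    for n in range(1, max_size + 1):
        pairs = list(product(range(n), repeat=2))
        for bits in product([0, 1], repeat=len(pairs)):
            R = {p for p, b in zip(pairs, bits) if b}
            if phi(n, R):
                return n, R
    return None  # no model found so far; the search never refutes phi

print(find_finite_model(phi))  # a model of size 2, R = {(0, 1), (1, 0)}
```

A true semi-decision procedure would search without a size bound, halting exactly when φ is finitely satisfiable.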
Corollary
For any vocabulary containing at least one binary relation symbol, the set of all finitely valid sentences is not recursively enumerable.
Proof
From the previous corollary, the set of finitely satisfiable sentences is recursively enumerable. Assume that the set of all finitely valid sentences is recursively enumerable. Since ¬φ is finitely valid iff φ is not finitely satisfiable, we conclude that the set of sentences which are not finitely satisfiable is recursively enumerable. If both a set A and its complement are recursively enumerable, then A is recursive. It follows that the set of finitely satisfiable sentences is recursive, which contradicts Trakhtenbrot's theorem.
References
1. Trakhtenbrot, Boris (1950). "The Impossibility of an Algorithm for the Decidability Problem on Finite Classes". Proceedings of the USSR Academy of Sciences (in Russian). 70 (4): 569–572.
2. Ebbinghaus, Heinz-Dieter; Flum, Jörg (1995). Finite Model Theory. Springer Science+Business Media. ISBN 978-3-540-60149-4.
3. Libkin, Leonid (2010). Elements of Finite Model Theory. Texts in Theoretical Computer Science. ISBN 978-3-642-05948-3.
• Boolos, Burgess, Jeffrey. Computability and Logic, Cambridge University Press, 2002.
• Simpson, S. "Theorems of Church and Trakhtenbrot". 2001.
Transcendental extension
In mathematics, a transcendental extension $L/K$ is a field extension such that there exists an element in the field $L$ that is transcendental over the field $K$; that is, an element that is not a root of any univariate polynomial with coefficients in $K$. In other words, a transcendental extension is a field extension that is not algebraic. For example, $\mathbb {C} ,\mathbb {R} $ are both transcendental extensions of $\mathbb {Q} .$
A transcendence basis of a field extension $L/K$ (or a transcendence basis of $L$ over $K$) is a maximal algebraically independent subset of $L$ over $K.$ Transcendence bases share many properties with bases of vector spaces. In particular, all transcendence bases of a field extension have the same cardinality, called the transcendence degree of the extension. Thus, a field extension is a transcendental extension if and only if its transcendence degree is positive.
Transcendental extensions are widely used in algebraic geometry. For example, the dimension of an algebraic variety is the transcendence degree of its function field. Also, global function fields are transcendental extensions of degree one of a finite field and, in number theory in positive characteristic, play a role very similar to that of algebraic number fields in characteristic zero.
Transcendence basis
Zorn's lemma shows there exists a maximal linearly independent subset of a vector space (i.e., a basis). A similar argument with Zorn's lemma shows that, given a field extension L / K, there exists a maximal algebraically independent subset of L over K.[1] It is then called a transcendence basis. By maximality, an algebraically independent subset S of L over K is a transcendence basis if and only if L is an algebraic extension of K(S), the field obtained by adjoining the elements of S to K.
The exchange lemma (a version for algebraically independent sets[2]) implies that if S, S' are transcendence bases, then S and S' have the same cardinality. Then the common cardinality of transcendence bases is called the transcendence degree of L over K and is denoted as $\operatorname {tr.deg.} _{K}L$ or $\operatorname {tr.deg.} (L/K)$. There is thus an analogy: a transcendence basis and transcendence degree, on the one hand, and a basis and dimension on the other hand. This analogy can be made more formal, by observing that linear independence in vector spaces and algebraic independence in field extensions both form examples of finitary matroids (pregeometries). Any finitary matroid has a basis, and all bases have the same cardinality.[3]
If G is a generating set of L (i.e., L = K(G)), then a transcendence basis for L can be taken as a subset of G. In particular, $\operatorname {tr.deg.} _{K}L\leq $ the minimum cardinality of generating sets of L over K. Also, a finitely generated field extension admits a finite transcendence basis.
If no field K is specified, the transcendence degree of a field L is its degree relative to some fixed base field; for example, the prime field of the same characteristic, or K, if L is an algebraic function field over K.
The field extension L / K is purely transcendental if there is a subset S of L that is algebraically independent over K and such that L = K(S).
A separating transcendence basis of L / K is a transcendence basis S such that L is a separable algebraic extension over K(S). A field extension L / K is said to be separably generated if it admits a separating transcendence basis.[4] If a field extension is finitely generated and it is also separably generated, then each generating set of the field extension contains a separating transcendence basis.[5] Over a perfect field, every finitely generated field extension is separably generated; i.e., it admits a finite separating transcendence basis.[6]
Examples
• An extension is algebraic if and only if its transcendence degree is 0; the empty set serves as a transcendence basis here.
• The field of rational functions in n variables K(x1,...,xn) (i.e. the field of fractions of the polynomial ring K[x1,...,xn]) is a purely transcendental extension with transcendence degree n over K; we can for example take {x1,...,xn} as a transcendence base.
• More generally, the transcendence degree of the function field L of an n-dimensional algebraic variety over a ground field K is n.
• Q(√2, e) has transcendence degree 1 over Q because √2 is algebraic while e is transcendental.
• The transcendence degree of C or R over Q is the cardinality of the continuum. (Since Q is countable, the field Q(S) will have the same cardinality as S for any infinite set S, and any algebraic extension of Q(S) will have the same cardinality again.)
• The transcendence degree of Q(e, π) over Q is either 1 or 2; the precise answer is unknown because it is not known whether e and π are algebraically independent.
• If S is a compact Riemann surface, the field C(S) of meromorphic functions on S has transcendence degree 1 over C.
Facts
If M / L and L / K are field extensions, then
trdeg(M / K) = trdeg(M / L) + trdeg(L / K)
This is proven by showing that a transcendence basis of M / K can be obtained by taking the union of a transcendence basis of M / L and one of L / K.
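As a concrete illustration (a standard example, not taken from the cited references), consider the tower $k\subset k(x)\subset k(x,y)$ of rational function fields:

```latex
\operatorname{trdeg}(k(x,y)/k)
  = \operatorname{trdeg}(k(x,y)/k(x)) + \operatorname{trdeg}(k(x)/k)
  = 1 + 1 = 2,
```

where $\{y\}$ is a transcendence basis of $k(x,y)/k(x)$, $\{x\}$ is one of $k(x)/k$, and their union $\{x,y\}$ is a transcendence basis of $k(x,y)/k$, exactly as in the proof sketch above.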
If the set S is algebraically independent over K, then the field K(S) is isomorphic to the field of rational functions over K in a set of variables of the same cardinality as S. Each such rational function is a fraction of two polynomials in finitely many of those variables, with coefficients in K.
Two algebraically closed fields are isomorphic if and only if they have the same characteristic and the same transcendence degree over their prime field.[7]
The transcendence degree of an integral domain
Let $A\subset B$ be integral domains. If $Q(A)$ and $Q(B)$ denote the fields of fractions of A and B, then the transcendence degree of B over A is defined as the transcendence degree of the field extension $Q(B)/Q(A).$
The Noether normalization lemma implies that if R is an integral domain that is a finitely generated algebra over a field k, then the Krull dimension of R is the transcendence degree of R over k.
This has the following geometric interpretation: if X is an affine algebraic variety over a field k, the Krull dimension of its coordinate ring equals the transcendence degree of its function field, and this defines the dimension of X. It follows that, if X is not an affine variety, its dimension (defined as the transcendence degree of its function field) can also be defined locally as the Krull dimension of the coordinate ring of the restriction of the variety to an open affine subset.
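As an illustrative example (the standard hyperbola, an assumption of this sketch rather than an example from the cited sources):

```latex
% For the hyperbola X = V(xy - 1) over a field k, the coordinate ring is
R = k[x,y]/(xy-1) \cong k[x, x^{-1}], \qquad
\operatorname{Frac}(R) = k(x),
% so the Krull dimension of R and the dimension of X are
\dim X = \dim R = \operatorname{trdeg}_k k(x) = 1.
```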
Relations to differentials
Let $K/k$ be a finitely generated field extension. Then[8]
$\dim _{K}\Omega _{K/k}\geq \operatorname {trdeg} (K/k),$
where $\Omega _{K/k}$ denotes the module of Kähler differentials. Moreover, equality holds if and only if K is separably generated over k (meaning it admits a separating transcendence basis).
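The inequality can be strict when K is not separably generated over k. A standard illustrative example (assuming characteristic $p>0$; this sketch is not taken from the cited sources):

```latex
% Let k = \mathbb{F}_p(t) and K = k(t^{1/p}) \cong k[x]/(x^p - t).
% The extension K/k is algebraic, so \operatorname{trdeg}(K/k) = 0.
% But d(x^p - t) = p\,x^{p-1}\,dx = 0 in characteristic p, so the
% relation imposes no condition on dx, and \Omega_{K/k} = K\,dx.  Hence
\dim_K \Omega_{K/k} = 1 > 0 = \operatorname{trdeg}(K/k).
```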
Applications
Transcendence bases are a useful tool to prove various existence statements about field homomorphisms. Here is an example: Given an algebraically closed field L, a subfield K and a field automorphism f of K, there exists a field automorphism of L which extends f (i.e. whose restriction to K is f). For the proof, one starts with a transcendence basis S of L / K. The elements of K(S) are just quotients of polynomials in elements of S with coefficients in K; therefore the automorphism f can be extended to one of K(S) by sending every element of S to itself. The field L is the algebraic closure of K(S) and algebraic closures are unique up to isomorphism; this means that the automorphism can be further extended from K(S) to L.
As another application, we show that there are (many) proper subfields of the complex number field C which are (as fields) isomorphic to C. For the proof, take a transcendence basis S of C / Q. S is an infinite (even uncountable) set, so there exist (many) maps f: S → S which are injective but not surjective. Any such map can be extended to a field homomorphism Q(S) → Q(S) which is not surjective. Such a field homomorphism can in turn be extended to the algebraic closure C, and the resulting field homomorphisms C → C are not surjective.
The transcendence degree can give an intuitive understanding of the size of a field. For instance, a theorem due to Siegel states that if X is a compact, connected, complex manifold of dimension n and K(X) denotes the field of (globally defined) meromorphic functions on it, then trdegC(K(X)) ≤ n.
See also
• Lüroth’s theorem, a theorem about purely transcendental extensions of degree one
• Regular extension
References
1. Milne, Theorem 9.13.
2. Milne, Lemma 9.6.
3. Joshi, K. D. (1997), Applied Discrete Structures, New Age International, p. 909, ISBN 9788122408263.
4. Hartshorne 1977, Ch I, § 4, just before Theorem 4.7.A
5. Hartshorne 1977, Ch I, Theorem 4.7.A
6. Milne, Theorem 9.27.
7. Milne, Proposition 9.16.
8. Hartshorne 1977, Ch. II, Theorem 8.6. A
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
• Milne, James, Field Theory (PDF)
• § 6.3. of Shimura, Goro (1971), Introduction to the arithmetic theory of automorphic functions, Publications of the Mathematical Society of Japan, vol. 11, Tokyo: Iwanami Shoten, Zbl 0221.10029
| Wikipedia |
Transcendental number theory
Transcendental number theory is a branch of number theory that investigates transcendental numbers (numbers that are not solutions of any polynomial equation with rational coefficients), in both qualitative and quantitative ways.
Transcendence
Main article: Transcendental number
The fundamental theorem of algebra tells us that if we have a non-constant polynomial with rational coefficients (or equivalently, by clearing denominators, with integer coefficients) then that polynomial will have a root in the complex numbers. That is, for any non-constant polynomial $P$ with rational coefficients there will be a complex number $\alpha $ such that $P(\alpha )=0$. Transcendence theory is concerned with the converse question: given a complex number $\alpha $, is there a polynomial $P$ with rational coefficients such that $P(\alpha )=0?$ If no such polynomial exists then the number is called transcendental.
More generally the theory deals with algebraic independence of numbers. A set of numbers {α1, α2, …, αn} is called algebraically independent over a field K if there is no non-zero polynomial P in n variables with coefficients in K such that P(α1, α2, …, αn) = 0. So working out if a given number is transcendental is really a special case of algebraic independence where n = 1 and the field K is the field of rational numbers.
A related notion is whether there is a closed-form expression for a number, including exponentials and logarithms as well as algebraic operations. There are various definitions of "closed-form", and questions about closed-form can often be reduced to questions about transcendence.
History
Approximation by rational numbers: Liouville to Roth
Use of the term transcendental to refer to an object that is not algebraic dates back to the seventeenth century, when Gottfried Leibniz proved that the sine function was not an algebraic function.[1] The question of whether certain classes of numbers could be transcendental dates back to 1748[2] when Euler asserted[3] that the number logab was not algebraic for rational numbers a and b provided b is not of the form b = ac for some rational c.
Euler's assertion was not proved until the twentieth century, but almost a hundred years after his claim Joseph Liouville did manage to prove the existence of numbers that are not algebraic, something that until then had not been known for sure.[4] His original papers on the matter in the 1840s sketched out arguments using continued fractions to construct transcendental numbers. Later, in the 1850s, he gave a necessary condition for a number to be algebraic, and thus a sufficient condition for a number to be transcendental.[5] This transcendence criterion was not strong enough to be necessary too, and indeed it fails to detect that the number e is transcendental. But his work did provide a larger class of transcendental numbers, now known as Liouville numbers in his honour.
Liouville's criterion essentially said that algebraic numbers cannot be very well approximated by rational numbers. So if a number can be very well approximated by rational numbers then it must be transcendental. The exact meaning of "very well approximated" in Liouville's work relates to a certain exponent. He showed that if α is an algebraic number of degree d ≥ 2 and ε is any number greater than zero, then the expression
$\left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{q^{d+\varepsilon }}}$
can be satisfied by only finitely many rational numbers p/q. Using this as a criterion for transcendence is not trivial, as one must check whether there are infinitely many solutions p/q for every d ≥ 2.
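The criterion can be made tangible numerically. The following hypothetical Python sketch (an illustration, not from the article) uses exact rational arithmetic to show that the truncations of Liouville's constant $L=\sum_{k\geq 1}10^{-k!}$ approximate it better than $1/q^{d}$ for any fixed degree $d$, once enough terms are taken:

```python
from fractions import Fraction
from math import factorial

# Hypothetical numeric illustration: truncating Liouville's constant
# L = sum_{k>=1} 10^(-k!) after n terms gives a rational p/q with
# q = 10^(n!), and the tail |L - p/q| is less than twice the first
# omitted term, 2 * 10^(-(n+1)!).  For any fixed degree d this beats
# 1/q^d once n > d, so L admits infinitely many "very good" rational
# approximations and is therefore transcendental by Liouville's criterion.
def truncation(n):
    """p/q with q = 10^(n!): the n-term partial sum of Liouville's constant."""
    return sum(Fraction(1, 10 ** factorial(k)) for k in range(1, n + 1))

def tail_bound(n):
    """|L - truncation(n)| is below twice the first omitted term."""
    return Fraction(2, 10 ** factorial(n + 1))

# With d = 4 and n = 5: q = 10^(5!) = 10^120 and the tail is far below 1/q^4.
n, d = 5, 4
q = 10 ** factorial(n)
assert tail_bound(n) < Fraction(1, q ** d)
```

Exact `Fraction` arithmetic matters here: the quantities involved (around $10^{-720}$) are far below floating-point range.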
In the twentieth century work by Axel Thue,[6] Carl Siegel,[7] and Klaus Roth[8] reduced the exponent in Liouville's work from d + ε to d/2 + 1 + ε, and finally, in 1955, to 2 + ε. This result, known as the Thue–Siegel–Roth theorem, is ostensibly the best possible, since if the exponent 2 + ε is replaced by just 2 then the result is no longer true. However, Serge Lang conjectured an improvement of Roth's result; in particular he conjectured that q2+ε in the denominator of the right-hand side could be reduced to $q^{2}(\log q)^{1+\epsilon }$.
Roth's work effectively ended the work started by Liouville, and his theorem allowed mathematicians to prove the transcendence of many more numbers, such as the Champernowne constant. The theorem is still not strong enough to detect all transcendental numbers, though, and many famous constants including e and π either are not or are not known to be very well approximable in the above sense.[9]
Auxiliary functions: Hermite to Baker
Fortunately other methods were pioneered in the nineteenth century to deal with the algebraic properties of e, and consequently of π through Euler's identity. This work centred on use of the so-called auxiliary function. These are functions which typically have many zeros at the points under consideration. Here "many zeros" may mean many distinct zeros, or as few as one zero but with a high multiplicity, or even many zeros all with high multiplicity. Charles Hermite used auxiliary functions that approximated the functions $e^{kx}$ for each natural number $k$ in order to prove the transcendence of $e$ in 1873.[10] His work was built upon by Ferdinand von Lindemann in the 1880s[11] in order to prove that eα is transcendental for nonzero algebraic numbers α. In particular this proved that π is transcendental since eπi is algebraic, and thus answered in the negative the problem of antiquity as to whether it was possible to square the circle. Karl Weierstrass developed their work yet further and eventually proved the Lindemann–Weierstrass theorem in 1885.[12]
In 1900 David Hilbert posed his famous collection of problems. The seventh of these, and one of the hardest in Hilbert's estimation, asked about the transcendence of numbers of the form ab where a and b are algebraic, a is not zero or one, and b is irrational. In the 1930s Alexander Gelfond[13] and Theodor Schneider[14] proved that all such numbers were indeed transcendental using a non-explicit auxiliary function whose existence was granted by Siegel's lemma. This result, the Gelfond–Schneider theorem, proved the transcendence of numbers such as eπ and the Gelfond–Schneider constant.
The next big result in this field occurred in the 1960s, when Alan Baker made progress on a problem posed by Gelfond on linear forms in logarithms. Gelfond himself had managed to find a non-trivial lower bound for the quantity
$|\beta _{1}\log \alpha _{1}+\beta _{2}\log \alpha _{2}|\,$
where all four unknowns are algebraic, the αs being neither zero nor one and the βs being irrational. Finding similar lower bounds for the sum of three or more logarithms had eluded Gelfond, though. The proof of Baker's theorem contained such bounds, solving Gauss' class number problem for class number one in the process. This work won Baker the Fields medal for its uses in solving Diophantine equations. From a purely transcendental number theoretic viewpoint, Baker had proved that if α1, ..., αn are algebraic numbers, none of them zero or one, and β1, ..., βn are algebraic numbers such that 1, β1, ..., βn are linearly independent over the rational numbers, then the number
$\alpha _{1}^{\beta _{1}}\alpha _{2}^{\beta _{2}}\cdots \alpha _{n}^{\beta _{n}}$
is transcendental.[15]
Other techniques: Cantor and Zilber
In the 1870s, Georg Cantor started to develop set theory and, in 1874, published a paper proving that the algebraic numbers could be put in one-to-one correspondence with the set of natural numbers, and thus that the set of transcendental numbers must be uncountable.[16] Later, in 1891, Cantor used his more familiar diagonal argument to prove the same result.[17] While Cantor's result is often quoted as being purely existential and thus unusable for constructing a single transcendental number,[18][19] the proofs in both the aforementioned papers give methods to construct transcendental numbers.[20]
While Cantor used set theory to prove the plenitude of transcendental numbers, a recent development has been the use of model theory in attempts to prove an unsolved problem in transcendental number theory. The problem is to determine the transcendence degree of the field
$K=\mathbb {Q} (x_{1},\ldots ,x_{n},e^{x_{1}},\ldots ,e^{x_{n}})$
for complex numbers x1, ..., xn that are linearly independent over the rational numbers. Stephen Schanuel conjectured that the answer is at least n, but no proof is known. In 2004, though, Boris Zilber published a paper that used model theoretic techniques to create a structure that behaves very much like the complex numbers equipped with the operations of addition, multiplication, and exponentiation. Moreover, in this abstract structure Schanuel's conjecture does indeed hold.[21] Unfortunately it is not yet known that this structure is in fact the same as the complex numbers with the operations mentioned; there could exist some other abstract structure that behaves very similarly to the complex numbers but where Schanuel's conjecture doesn't hold. Zilber did provide several criteria that would prove the structure in question was C, but could not prove the so-called Strong Exponential Closure axiom. The simplest case of this axiom has since been proved,[22] but a proof that it holds in full generality is required to complete the proof of the conjecture.
Approaches
A typical problem in this area of mathematics is to work out whether a given number is transcendental. Cantor used a cardinality argument to show that there are only countably many algebraic numbers, and hence almost all numbers are transcendental. Transcendental numbers therefore represent the typical case; even so, it may be extremely difficult to prove that a given number is transcendental (or even simply irrational).
For this reason transcendence theory often works towards a more quantitative approach. So given a particular complex number α one can ask how close α is to being an algebraic number. For example, if one supposes that the number α is algebraic then can one show that it must have very high degree or a minimum polynomial with very large coefficients? Ultimately if it is possible to show that no finite degree or size of coefficient is sufficient then the number must be transcendental. Since a number α is transcendental if and only if P(α) ≠ 0 for every non-zero polynomial P with integer coefficients, this problem can be approached by trying to find lower bounds of the form
$|P(a)|>F(A,d)$
where the right hand side is some positive function depending on some measure A of the size of the coefficients of P, and its degree d, and such that these lower bounds apply to all P ≠ 0. Such a bound is called a transcendence measure.
The case of d = 1 is that of "classical" diophantine approximation asking for lower bounds for
$|ax+b|$.
The methods of transcendence theory and diophantine approximation have much in common: they both use the auxiliary function concept.
Major results
The Gelfond–Schneider theorem was the major advance in transcendence theory in the period 1900–1950. In the 1960s the method of Alan Baker on linear forms in logarithms of algebraic numbers reanimated transcendence theory, with applications to numerous classical problems and diophantine equations.
Mahler's classification
Kurt Mahler in 1932 partitioned the transcendental numbers into 3 classes, called S, T, and U.[23] Definition of these classes draws on an extension of the idea of a Liouville number (cited above).
Measure of irrationality of a real number
One way to define a Liouville number is to consider how small a given real number x makes linear polynomials |qx − p| without making them exactly 0. Here p, q are integers with |p|, |q| bounded by a positive integer H.
Let $m(x,1,H)$ be the minimum non-zero absolute value these polynomials take and take:
$\omega (x,1,H)=-{\frac {\log m(x,1,H)}{\log H}}$
$\omega (x,1)=\limsup _{H\to \infty }\,\omega (x,1,H).$
ω(x, 1) is often called the measure of irrationality of a real number x. For rational numbers ω(x, 1) = 0, and it is at least 1 for irrational real numbers. A Liouville number is defined to have infinite measure of irrationality. Roth's theorem says that irrational real algebraic numbers have measure of irrationality exactly 1.
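The quantity ω(x, 1, H) can be estimated by brute force. The following is a hypothetical Python sketch (an illustration, not from the article) that minimizes |qx − p| over integers with |p|, |q| ≤ H; for an irrational algebraic number such as √2, Roth's theorem predicts estimates near 1:

```python
import math

# Empirical sketch: brute-force estimate of omega(x, 1, H) by minimizing
# the nonzero values of |q*x - p| over integers 0 < q <= H, |p| <= H.
# For each q the nearest integer p = round(q*x) minimizes |q*x - p|.
def omega1(x, H):
    best = min(abs(q * x - p)
               for q in range(1, H + 1)
               for p in [round(q * x)]
               if abs(p) <= H and q * x != p)
    return -math.log(best) / math.log(H)

# For sqrt(2) the estimate hovers around 1, its limiting value by Roth's
# theorem; for a rational like 1/2 the estimate is small, tending to 0.
w = omega1(math.sqrt(2), 1000)
assert 0.8 < w < 1.3
```

Floating-point precision suffices here because the minima for moderate H are far above machine epsilon; a careful computation would use exact arithmetic.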
Measure of transcendence of a complex number
Next consider the values of polynomials at a complex number x, when these polynomials have integer coefficients, degree at most n, and height at most H, with n, H being positive integers.
Let $m(x,n,H)$ be the minimum non-zero absolute value such polynomials take at $x$ and take:
$\omega (x,n,H)=-{\frac {\log m(x,n,H)}{n\log H}}$
$\omega (x,n)=\limsup _{H\to \infty }\,\omega (x,n,H).$
Suppose ω(x, n) is infinite for some n, and take the minimal such positive integer n. A complex number x in this case is called a U number of degree n.
Now we can define
$\omega (x)=\limsup _{n\to \infty }\,\omega (x,n).$
ω(x) is often called the measure of transcendence of x. If the ω(x, n) are bounded, then ω(x) is finite, and x is called an S number. If the ω(x, n) are finite but unbounded, x is called a T number. x is algebraic if and only if ω(x) = 0.
Clearly the Liouville numbers are a subset of the U numbers. William LeVeque in 1953 constructed U numbers of any desired degree.[24] The Liouville numbers and hence the U numbers are uncountable sets. They are sets of measure 0.[25]
T numbers also comprise a set of measure 0.[26] It took about 35 years to show their existence. Wolfgang M. Schmidt in 1968 showed that examples exist. However, almost all complex numbers are S numbers.[27] Mahler proved that the exponential function sends all non-zero algebraic numbers to S numbers:[28][29] this shows that e is an S number and gives a proof of the transcendence of π. This number π is known not to be a U number.[30] Many other transcendental numbers remain unclassified.
Two numbers x, y are called algebraically dependent if there is a non-zero polynomial P in two indeterminates with integer coefficients such that P(x, y) = 0. There is a powerful theorem that two complex numbers that are algebraically dependent belong to the same Mahler class.[24][31] This allows construction of new transcendental numbers, such as the sum of a Liouville number with e or π.
The symbol S probably stood for the name of Mahler's teacher Carl Ludwig Siegel, and T and U are just the next two letters.
Koksma's equivalent classification
Jurjen Koksma in 1939 proposed another classification based on approximation by algebraic numbers.[23][32]
Consider the approximation of a complex number x by algebraic numbers of degree ≤ n and height ≤ H. Let α be an algebraic number of this finite set such that |x − α| has the minimum positive value. Define ω*(x, H, n) and ω*(x, n) by:
$|x-\alpha |=H^{-n\omega ^{*}(x,H,n)-1}.$
$\omega ^{*}(x,n)=\limsup _{H\to \infty }\,\omega ^{*}(x,H,n).$
If, for the smallest positive integer n, ω*(x, n) is infinite, x is called a U*-number of degree n.
If the ω*(x, n) are bounded and do not converge to 0, x is called an S*-number.
A number x is called an A*-number if the ω*(x, n) converge to 0.
If the ω*(x, n) are all finite but unbounded, x is called a T*-number.
Koksma's and Mahler's classifications are equivalent in that they divide the transcendental numbers into the same classes.[32] The A*-numbers are the algebraic numbers.[27]
LeVeque's construction
Let
$\lambda ={\tfrac {1}{3}}+\sum _{k=1}^{\infty }10^{-k!}.$
It can be shown that the nth root of λ (a Liouville number) is a U-number of degree n.[33]
This construction can be improved to create an uncountable family of U-numbers of degree n. Let Z be the set consisting of every other power of 10 in the series above for λ. The set of all subsets of Z is uncountable. Deleting any of the subsets of Z from the series for λ creates uncountably many distinct Liouville numbers, whose nth roots are U-numbers of degree n.
Type
The supremum of the sequence {ω(x, n)} is called the type. Almost all real numbers are S numbers of type 1, which is minimal for real S numbers. Almost all complex numbers are S numbers of type 1/2, which is also minimal. These claims about almost all numbers were conjectured by Mahler and proved in 1965 by Vladimir Sprindzhuk.[34]
Open problems
While the Gelfond–Schneider theorem proved that a large class of numbers was transcendental, this class was still countable. Many well-known mathematical constants are still not known to be transcendental, and in some cases it is not even known whether they are rational or irrational.
A major problem in transcendence theory is showing that a particular set of numbers is algebraically independent rather than just showing that individual elements are transcendental. So while we know that e and π are transcendental that doesn't imply that e + π is transcendental, nor other combinations of the two (except eπ, Gelfond's constant, which is known to be transcendental). Another major problem is dealing with numbers that are not related to the exponential function. The main results in transcendence theory tend to revolve around e and the logarithm function, which means that wholly new methods tend to be required to deal with numbers that cannot be expressed in terms of these two objects in an elementary fashion.
Schanuel's conjecture would solve the first of these problems somewhat as it deals with algebraic independence and would indeed confirm that e + π is transcendental. It still revolves around the exponential function, however, and so would not necessarily deal with numbers such as Apéry's constant or the Euler–Mascheroni constant. Another extremely difficult unsolved problem is the so-called constant or identity problem.[35]
Notes
1. N. Bourbaki, Elements of the History of Mathematics Springer (1994).
2. Gelfond 1960, p. 2.
3. Euler, L. (1748). Introductio in analysin infinitorum. Lausanne.
4. The existence proof based on the different cardinalities of the real and the algebraic numbers was not possible before Cantor's first set theory article in 1874.
5. Liouville, J. (1844). "Sur les classes très étendues de quantités dont la valeur n'est ni algébrique ni même réductible à des irrationelles algébriques". Comptes rendus de l'Académie des Sciences de Paris. 18: 883–885, 910–911.; Journal Math. Pures et Appl. 16, (1851), pp.133–142.
6. Thue, A. (1909). "Über Annäherungswerte algebraischer Zahlen". J. Reine Angew. Math. 1909 (135): 284–305. doi:10.1515/crll.1909.135.284. S2CID 125903243.
7. Siegel, C. L. (1921). "Approximation algebraischer Zahlen". Mathematische Zeitschrift. 10 (3–4): 172–213. doi:10.1007/BF01211608.
8. Roth, K. F. (1955). "Rational approximations to algebraic numbers". Mathematika. 2 (1): 1–20. doi:10.1112/S0025579300000644. And "Corrigendum", p. 168, doi:10.1112/S002559300000826.
9. Mahler, K. (1953). "On the approximation of π". Proc. Akad. Wetensch. Ser. A. 56: 30–42.
10. Hermite, C. (1873). "Sur la fonction exponentielle". C. R. Acad. Sci. Paris. 77.
11. Lindemann, F. (1882). "Ueber die Zahl π". Mathematische Annalen. 20 (2): 213–225. doi:10.1007/BF01446522.
12. Weierstrass, K. (1885). "Zu Hrn. Lindemann's Abhandlung: 'Über die Ludolph'sche Zahl'". Sitzungber. Königl. Preuss. Akad. Wissensch. Zu Berlin. 2: 1067–1086.
13. Gelfond, A. O. (1934). "Sur le septième Problème de D. Hilbert". Izv. Akad. Nauk SSSR. 7: 623–630.
14. Schneider, T. (1935). "Transzendenzuntersuchungen periodischer Funktionen. I. Transzendend von Potenzen". Journal für die reine und angewandte Mathematik. 1935 (172): 65–69. doi:10.1515/crll.1935.172.65. S2CID 115310510.
15. A. Baker, Linear forms in the logarithms of algebraic numbers. I, II, III, Mathematika 13 ,(1966), pp.204–216; ibid. 14, (1967), pp.102–107; ibid. 14, (1967), pp.220–228, MR0220680
16. Cantor, G. (1874). "Ueber eine Eigenschaft des Inbegriffes aller reelen algebraischen Zahlen". J. Reine Angew. Math. (in German). 1874 (77): 258–262. doi:10.1515/crll.1874.77.258. S2CID 199545885.
17. Cantor, G. (1891). "Ueber eine elementare Frage der Mannigfaltigkeitslehre". Jahresbericht der Deutschen Mathematiker-Vereinigung (in German). 1: 75–78.
18. Kac, M.; Stanislaw, U. (1968). Mathematics and Logic. Fredering A. Praeger. p. 13.
19. Bell, E. T. (1937). Men of Mathematics. New York: Simon & Schuster. p. 569.
20. Gray, R. (1994). "Georg Cantor and Transcendental Numbers" (PDF). American Mathematical Monthly. 101 (9): 819–832. doi:10.1080/00029890.1994.11997035. JSTOR 2975129.
21. Zilber, B. (2005). "Pseudo-exponentiation on algebraically closed fields of characteristic zero". Annals of Pure and Applied Logic. 132 (1): 67–95. doi:10.1016/j.apal.2004.07.001. MR 2102856.
22. Marker, D. (2006). "A remark on Zilber's pseudoexponentiation". Journal of Symbolic Logic. 71 (3): 791–798. doi:10.2178/jsl/1154698577. JSTOR 27588482. MR 2250821. S2CID 1477361.
23. Bugeaud 2012, p. 250.
24. LeVeque 2002, p. II:172.
25. Burger & Tubbs 2004, p. 170.
26. Burger & Tubbs 2004, p. 172.
27. Bugeaud 2012, p. 251.
28. LeVeque 2002, pp. II:174–186.
29. Burger & Tubbs 2004, p. 182.
30. Baker 1990, p. 86.
31. Burger & Tubbs, p. 163.
32. Baker 1975, p. 87.
33. Baker 1990, p. 90.
34. Baker 1975, p. 86.
35. Richardson, D. (1968). "Some Undecidable Problems Involving Elementary Functions of a Real Variable". Journal of Symbolic Logic. 33 (4): 514–520. doi:10.2307/2271358. JSTOR 2271358. MR 0239976.
References
• Baker, Alan (1975). Transcendental Number Theory. paperback edition 1990. Cambridge University Press. ISBN 0-521-20461-5. Zbl 0297.10013.
• Gelfond, A. O. (1960). Transcendental and Algebraic Numbers. Dover. Zbl 0090.26103.
• Lang, Serge (1966). Introduction to Transcendental Numbers. Addison–Wesley. Zbl 0144.04101.
• Natarajan, Saradha [in French]; Thangadurai, Ravindranathan (2020). Pillars of Transcendental Number Theory. Springer Verlag. ISBN 978-981-15-4154-4.
• Sprindzhuk, Vladimir G. (1969). Mahler's Problem in Metric Number Theory (1967). AMS Translations of Mathematical Monographs. Translated from Russian by B. Volkmann. American Mathematical Society. ISBN 978-1-4704-4442-6.
• Sprindzhuk, Vladimir G. (1979). Metric theory of Diophantine approximations. Scripta Series in Mathematics. Translated from Russian by Richard A. Silverman. Foreword by Donald J. Newman. Wiley. ISBN 0-470-26706-2. Zbl 0482.10047.
Further reading
• Alan Baker and Gisbert Wüstholz, Logarithmic Forms and Diophantine Geometry, New Mathematical Monographs 9, Cambridge University Press, 2007, ISBN 978-0-521-88268-2
Gudermannian function
In mathematics, the Gudermannian function relates a hyperbolic angle measure $ \psi $ to a circular angle measure $ \phi $ called the gudermannian of $ \psi $ and denoted $ \operatorname {gd} \psi $.[1] The Gudermannian function reveals a close relationship between the circular functions and hyperbolic functions. It was introduced in the 1760s by Johann Heinrich Lambert, and later named for Christoph Gudermann who also described the relationship between circular and hyperbolic functions in 1830.[2] The gudermannian is sometimes called the hyperbolic amplitude as a limiting case of the Jacobi elliptic amplitude $ \operatorname {am} (\psi ,m)$ when parameter $ m=1.$
The real Gudermannian function is typically defined for $ -\infty <\psi <\infty $ to be the integral of the hyperbolic secant[3]
$\phi =\operatorname {gd} \psi \equiv \int _{0}^{\psi }\operatorname {sech} t\,\mathrm {d} t=\operatorname {arctan} (\sinh \psi ).$
The real inverse Gudermannian function can be defined for $ -{\tfrac {1}{2}}\pi <\phi <{\tfrac {1}{2}}\pi $ as the integral of the secant
$\psi =\operatorname {gd} ^{-1}\phi =\int _{0}^{\phi }\operatorname {sec} t\,\mathrm {d} t=\operatorname {arsinh} (\tan \phi ).$
The hyperbolic angle measure $\psi =\operatorname {gd} ^{-1}\phi $ is called the anti-gudermannian of $\phi $ or sometimes the lambertian of $\phi $, denoted $\psi =\operatorname {lam} \phi .$[4] In the context of geodesy and navigation for latitude $ \phi $, $k\operatorname {gd} ^{-1}\phi $ (scaled by arbitrary constant $ k$) was historically called the meridional part of $\phi $ (French: latitude croissante). It is the vertical coordinate of the Mercator projection.
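For real arguments, the two definitions can be checked directly; a minimal sketch in Python (the function names `gd` and `gd_inv` are ours, not standard library names):

```python
import math

def gd(psi):
    """Gudermannian: gd(psi) = arctan(sinh(psi))."""
    return math.atan(math.sinh(psi))

def gd_inv(phi):
    """Inverse Gudermannian: gd^-1(phi) = arsinh(tan(phi)), valid for |phi| < pi/2."""
    return math.asinh(math.tan(phi))

# The two functions are mutually inverse on their real domains.
for psi in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(gd_inv(gd(psi)) - psi) < 1e-12

# gd maps the whole real line into (-pi/2, pi/2); at large |psi| it approaches the limit.
assert abs(gd(50.0) - math.pi / 2) < 1e-12
```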
The two angle measures $ \phi $ and $ \psi $ are related by a common stereographic projection
$s=\tan {\tfrac {1}{2}}\phi =\tanh {\tfrac {1}{2}}\psi ,$
and this identity can serve as an alternative definition for $ \operatorname {gd} $ and $ \operatorname {gd} ^{-1}$ valid throughout the complex plane:
${\begin{aligned}\operatorname {gd} \psi &={2\arctan }{\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )},\\[5mu]\operatorname {gd} ^{-1}\phi &={2\operatorname {artanh} }{\bigl (}\tan {\tfrac {1}{2}}\phi \,{\bigr )}.\end{aligned}}$
Circular–hyperbolic identities
We can evaluate the integral of the hyperbolic secant using the stereographic projection (hyperbolic half-tangent) as a change of variables:[5]
${\begin{aligned}\operatorname {gd} \psi &\equiv \int _{0}^{\psi }{\frac {1}{\operatorname {cosh} t}}\mathrm {d} t=\int _{0}^{\tanh {\frac {1}{2}}\psi }{\frac {1-u^{2}}{1+u^{2}}}{\frac {2\,\mathrm {d} u}{1-u^{2}}}\qquad {\bigl (}u=\tanh {\tfrac {1}{2}}t{\bigr )}\\[8mu]&=2\int _{0}^{\tanh {\frac {1}{2}}\psi }{\frac {1}{1+u^{2}}}\mathrm {d} u={2\arctan }{\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )},\\[5mu]\tan {\tfrac {1}{2}}{\operatorname {gd} \psi }&=\tanh {\tfrac {1}{2}}\psi .\end{aligned}}$
Letting $ \phi =\operatorname {gd} \psi $ and $ s=\tan {\tfrac {1}{2}}\phi =\tanh {\tfrac {1}{2}}\psi $ we can derive a number of identities between hyperbolic functions of $ \psi $ and circular functions of $ \phi .$[6]
${\begin{aligned}s&=\tan {\tfrac {1}{2}}\phi =\tanh {\tfrac {1}{2}}\psi ,\\[6mu]{\frac {2s}{1+s^{2}}}&=\sin \phi =\tanh \psi ,\quad &{\frac {1+s^{2}}{2s}}&=\csc \phi =\coth \psi ,\\[10mu]{\frac {1-s^{2}}{1+s^{2}}}&=\cos \phi =\operatorname {sech} \psi ,\quad &{\frac {1+s^{2}}{1-s^{2}}}&=\sec \phi =\cosh \psi ,\\[10mu]{\frac {2s}{1-s^{2}}}&=\tan \phi =\sinh \psi ,\quad &{\frac {1-s^{2}}{2s}}&=\cot \phi =\operatorname {csch} \psi .\\[8mu]\end{aligned}}$
These are commonly used as expressions for $\operatorname {gd} $ and $\operatorname {gd} ^{-1}$ for real values of $\psi $ and $\phi $ with $|\phi |<{\tfrac {1}{2}}\pi .$ For example, the numerically well-behaved formulas
${\begin{aligned}\operatorname {gd} \psi &=\operatorname {arctan} (\sinh \psi ),\\[6mu]\operatorname {gd} ^{-1}\phi &=\operatorname {arsinh} (\tan \phi ).\end{aligned}}$
(Note: for $|\phi |>{\tfrac {1}{2}}\pi $ and for complex arguments, care must be taken in choosing branches of the inverse functions.)[7]
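As a sanity check, the arctan–sinh closed form agrees with the defining integral of the hyperbolic secant; a sketch using composite Simpson quadrature (all names are ours):

```python
import math

def gd_closed(psi):
    # Closed form: gd(psi) = arctan(sinh(psi))
    return math.atan(math.sinh(psi))

def gd_integral(psi, n=10_000):
    # Composite Simpson's rule for the defining integral of sech t over [0, psi]
    h = psi / n
    total = 1.0 / math.cosh(0.0) + 1.0 / math.cosh(psi)
    for k in range(1, n):
        total += (4 if k % 2 else 2) / math.cosh(k * h)
    return total * h / 3

for psi in (0.25, 1.0, 2.5):
    assert abs(gd_integral(psi) - gd_closed(psi)) < 1e-10
```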
We can also express $ \psi $ and $ \phi $ in terms of $ s\colon $
${\begin{aligned}2\arctan s&=\phi =\operatorname {gd} \psi ,\\[6mu]2\operatorname {artanh} s&=\operatorname {gd} ^{-1}\phi =\psi .\\[6mu]\end{aligned}}$
If we expand $ \tan {\tfrac {1}{2}}$ and $ \tanh {\tfrac {1}{2}}$ in terms of the exponential, then we can see that $ s,$ $\exp \phi i,$ and $\exp \psi $ are all Möbius transformations of each other (specifically, rotations of the Riemann sphere):
${\begin{aligned}s&=i{\frac {1-e^{\phi i}}{1+e^{\phi i}}}={\frac {e^{\psi }-1}{e^{\psi }+1}},\\[10mu]i{\frac {s-i}{s+i}}&=\exp \phi i\quad ={\frac {e^{\psi }-i}{e^{\psi }+i}},\\[10mu]{\frac {1+s}{1-s}}&=i{\frac {i+e^{\phi i}}{i-e^{\phi i}}}\,=\exp \psi .\end{aligned}}$
For real values of $ \psi $ and $ \phi $ with $|\phi |<{\tfrac {1}{2}}\pi $, these Möbius transformations can be written in terms of trigonometric functions in several ways,
${\begin{aligned}\exp \psi &=\sec \phi +\tan \phi =\tan {\tfrac {1}{2}}{\bigl (}{\tfrac {1}{2}}\pi +\phi {\bigr )}\\[6mu]&={\frac {1+\tan {\tfrac {1}{2}}\phi }{1-\tan {\tfrac {1}{2}}\phi }}={\sqrt {\frac {1+\sin \phi }{1-\sin \phi }}},\\[12mu]\exp \phi i&=\operatorname {sech} \psi +i\tanh \psi =\tanh {\tfrac {1}{2}}{\bigl (}{-{\tfrac {1}{2}}}\pi i+\psi {\bigr )}\\[6mu]&={\frac {1+i\tanh {\tfrac {1}{2}}\psi }{1-i\tanh {\tfrac {1}{2}}\psi }}={\sqrt {\frac {1+i\sinh \psi }{1-i\sinh \psi }}}.\end{aligned}}$
These give further expressions for $\operatorname {gd} $ and $\operatorname {gd} ^{-1}$ for real arguments with $|\phi |<{\tfrac {1}{2}}\pi .$ For example,[8]
${\begin{aligned}\operatorname {gd} \psi &=2\arctan e^{\psi }-{\tfrac {1}{2}}\pi ,\\[6mu]\operatorname {gd} ^{-1}\phi &=\log(\sec \phi +\tan \phi ).\end{aligned}}$
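These alternative expressions can be verified numerically against the arctan–sinh and arsinh–tan forms; a sketch (function names ours):

```python
import math

def gd(psi):
    return math.atan(math.sinh(psi))

def gd_inv(phi):
    return math.asinh(math.tan(phi))

# gd(psi) = 2*arctan(e^psi) - pi/2
for psi in (-1.5, 0.3, 2.0):
    assert abs((2 * math.atan(math.exp(psi)) - math.pi / 2) - gd(psi)) < 1e-10

# gd^-1(phi) = log(sec(phi) + tan(phi)) for |phi| < pi/2
for phi in (-1.2, 0.1, 1.4):
    assert abs(math.log(1 / math.cos(phi) + math.tan(phi)) - gd_inv(phi)) < 1e-10
```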
Complex values
As a function of a complex variable, $ z\mapsto w=\operatorname {gd} z$ conformally maps the infinite strip $ \left|\operatorname {Im} z\right|\leq {\tfrac {1}{2}}\pi $ to the infinite strip $ \left|\operatorname {Re} w\right|\leq {\tfrac {1}{2}}\pi ,$ while $ w\mapsto z=\operatorname {gd} ^{-1}w$ conformally maps the infinite strip $ \left|\operatorname {Re} w\right|\leq {\tfrac {1}{2}}\pi $ to the infinite strip $ \left|\operatorname {Im} z\right|\leq {\tfrac {1}{2}}\pi .$
Analytically continued by reflections to the whole complex plane, $ z\mapsto w=\operatorname {gd} z$ is a periodic function of period $ 2\pi i$ which sends any infinite strip of "height" $ 2\pi i$ onto the strip $ -\pi <\operatorname {Re} w\leq \pi .$ Likewise, extended to the whole complex plane, $ w\mapsto z=\operatorname {gd} ^{-1}w$ is a periodic function of period $ 2\pi $ which sends any infinite strip of "width" $ 2\pi $ onto the strip $ -\pi <\operatorname {Im} z\leq \pi .$[9] For all points in the complex plane, these functions can be correctly written as:
${\begin{aligned}\operatorname {gd} z&={2\arctan }{\bigl (}\tanh {\tfrac {1}{2}}z\,{\bigr )},\\[5mu]\operatorname {gd} ^{-1}w&={2\operatorname {artanh} }{\bigl (}\tan {\tfrac {1}{2}}w\,{\bigr )}.\end{aligned}}$
For the $ \operatorname {gd} $ and $ \operatorname {gd} ^{-1}$ functions to remain invertible with these extended domains, we might consider each to be a multivalued function (perhaps $ \operatorname {Gd} $ and $ \operatorname {Gd} ^{-1}$, with $ \operatorname {gd} $ and $ \operatorname {gd} ^{-1}$ the principal branch) or consider their domains and codomains as Riemann surfaces.
If $ u+iv=\operatorname {gd} (x+iy),$ then the real and imaginary components $ u$ and $ v$ can be found by:[10]
$\tan u={\frac {\sinh x}{\cos y}},\quad \tanh v={\frac {\sin y}{\cosh x}}.$
(In practical implementation, make sure to use the 2-argument arctangent, $ u=\operatorname {atan2} (\sinh x,\cos y)$.)
Likewise, if $ x+iy=\operatorname {gd} ^{-1}(u+iv),$ then components $ x$ and $ y$ can be found by:[11]
$\tanh x={\frac {\sin u}{\cosh v}},\quad \tan y={\frac {\sinh v}{\cos u}}.$
Multiplying these together reveals the additional identity[8]
$\tanh x\,\tan y=\tan u\,\tanh v.$
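For points in the principal strip, the per-component formulas above can be checked against the whole-plane definition $\operatorname {gd} z={2\arctan }(\tanh {\tfrac {1}{2}}z)$; a sketch with Python's `cmath` (names ours):

```python
import cmath
import math

def gd_complex(z):
    # Whole-plane (principal-branch) definition: gd z = 2*arctan(tanh(z/2))
    return 2 * cmath.atan(cmath.tanh(z / 2))

# If u + iv = gd(x + iy), then u = atan2(sinh x, cos y) and tanh v = sin y / cosh x,
# for |y| < pi/2 so the principal branches agree.
for x, y in [(0.7, 0.4), (-1.2, 1.0), (2.0, -0.3)]:
    w = gd_complex(complex(x, y))
    u = math.atan2(math.sinh(x), math.cos(y))
    v = math.atanh(math.sin(y) / math.cosh(x))
    assert abs(w - complex(u, v)) < 1e-10
```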
Symmetries
The two functions can be thought of as rotations or reflections of each other, with a relationship similar to $ \sinh iz=i\sin z$ between sine and hyperbolic sine:[12]
${\begin{aligned}\operatorname {gd} iz&=i\operatorname {gd} ^{-1}z,\\[5mu]\operatorname {gd} ^{-1}iz&=i\operatorname {gd} z.\end{aligned}}$
The functions are both odd and they commute with complex conjugation. That is, a reflection across the real or imaginary axis in the domain results in the same reflection in the codomain:
${\begin{aligned}\operatorname {gd} (-z)&=-\operatorname {gd} z,&\quad \operatorname {gd} {\bar {z}}&={\overline {\operatorname {gd} z}},&\quad \operatorname {gd} (-{\bar {z}})&=-{\overline {\operatorname {gd} z}},\\[5mu]\operatorname {gd} ^{-1}(-z)&=-\operatorname {gd} ^{-1}z,&\quad \operatorname {gd} ^{-1}{\bar {z}}&={\overline {\operatorname {gd} ^{-1}z}},&\quad \operatorname {gd} ^{-1}(-{\bar {z}})&=-{\overline {\operatorname {gd} ^{-1}z}}.\end{aligned}}$
The functions are periodic, with periods $ 2\pi i$ and $ 2\pi $:
${\begin{aligned}\operatorname {gd} (z+2\pi i)&=\operatorname {gd} z,\\[5mu]\operatorname {gd} ^{-1}(z+2\pi )&=\operatorname {gd} ^{-1}z.\end{aligned}}$
A translation in the domain of $ \operatorname {gd} $ by $ \pm \pi i$ results in a half-turn rotation and translation in the codomain by one of $ \pm \pi ,$ and vice versa for $ \operatorname {gd} ^{-1}\colon $[13]
${\begin{aligned}\operatorname {gd} ({\pm \pi i}+z)&={\begin{cases}\pi -\operatorname {gd} z\quad &{\mbox{if }}\ \ \operatorname {Re} z\geq 0,\\[5mu]-\pi -\operatorname {gd} z\quad &{\mbox{if }}\ \ \operatorname {Re} z<0,\end{cases}}\\[15mu]\operatorname {gd} ^{-1}({\pm \pi }+z)&={\begin{cases}\pi i-\operatorname {gd} ^{-1}z\quad &{\mbox{if }}\ \ \operatorname {Im} z\geq 0,\\[3mu]-\pi i-\operatorname {gd} ^{-1}z\quad &{\mbox{if }}\ \ \operatorname {Im} z<0.\end{cases}}\end{aligned}}$
A reflection in the domain of $ \operatorname {gd} $ across either of the lines $ x\pm {\tfrac {1}{2}}\pi i$ results in a reflection in the codomain across one of the lines $ \pm {\tfrac {1}{2}}\pi +yi,$ and vice versa for $ \operatorname {gd} ^{-1}\colon $
${\begin{aligned}\operatorname {gd} ({\pm \pi i}+{\bar {z}})&={\begin{cases}\pi -{\overline {\operatorname {gd} z}}\quad &{\mbox{if }}\ \ \operatorname {Re} z\geq 0,\\[5mu]-\pi -{\overline {\operatorname {gd} z}}\quad &{\mbox{if }}\ \ \operatorname {Re} z<0,\end{cases}}\\[15mu]\operatorname {gd} ^{-1}({\pm \pi }-{\bar {z}})&={\begin{cases}\pi i+{\overline {\operatorname {gd} ^{-1}z}}\quad &{\mbox{if }}\ \ \operatorname {Im} z\geq 0,\\[3mu]-\pi i+{\overline {\operatorname {gd} ^{-1}z}}\quad &{\mbox{if }}\ \ \operatorname {Im} z<0.\end{cases}}\end{aligned}}$
This is related to the identity
$\tanh {\tfrac {1}{2}}({\pi i}\pm z)=\tan {\tfrac {1}{2}}({\pi }\mp \operatorname {gd} z).$
Specific values
A few specific values (where $ \infty $ indicates the limit at one end of the infinite strip):[14]
${\begin{aligned}\operatorname {gd} (0)&=0,&\quad {\operatorname {gd} }{\bigl (}{\pm {\log }{\bigl (}2+{\sqrt {3}}{\bigr )}}{\bigr )}&=\pm {\tfrac {1}{3}}\pi ,\\[5mu]\operatorname {gd} (\pi i)&=\pi ,&\quad {\operatorname {gd} }{\bigl (}{\pm {\tfrac {1}{3}}}\pi i{\bigr )}&=\pm {\log }{\bigl (}2+{\sqrt {3}}{\bigr )}i,\\[5mu]\operatorname {gd} ({\pm \infty })&=\pm {\tfrac {1}{2}}\pi ,&\quad {\operatorname {gd} }{\bigl (}{\pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}}{\bigr )}&=\pm {\tfrac {1}{4}}\pi ,\\[5mu]{\operatorname {gd} }{\bigl (}{\pm {\tfrac {1}{2}}}\pi i{\bigr )}&=\pm \infty i,&\quad {\operatorname {gd} }{\bigl (}{\pm {\tfrac {1}{4}}}\pi i{\bigr )}&=\pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}i,\\[5mu]&&{\operatorname {gd} }{\bigl (}{\log }{\bigl (}1+{\sqrt {2}}{\bigr )}\pm {\tfrac {1}{2}}\pi i{\bigr )}&={\tfrac {1}{2}}\pi \pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}i,\\[5mu]&&{\operatorname {gd} }{\bigl (}{-\log }{\bigl (}1+{\sqrt {2}}{\bigr )}\pm {\tfrac {1}{2}}\pi i{\bigr )}&=-{\tfrac {1}{2}}\pi \pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}i.\end{aligned}}$
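The real entries in this table are easy to confirm numerically; a sketch (function name ours):

```python
import math

def gd(psi):
    return math.atan(math.sinh(psi))

assert gd(0.0) == 0.0
# sinh(log(2 + sqrt 3)) = sqrt 3, so gd(log(2 + sqrt 3)) = arctan(sqrt 3) = pi/3
assert abs(gd(math.log(2 + math.sqrt(3))) - math.pi / 3) < 1e-12
# sinh(log(1 + sqrt 2)) = 1, so gd(log(1 + sqrt 2)) = arctan(1) = pi/4
assert abs(gd(math.log(1 + math.sqrt(2))) - math.pi / 4) < 1e-12
```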
Derivatives
${\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {gd} z&=\operatorname {sech} z,\\[10mu]{\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {gd} ^{-1}z&=\sec z.\end{aligned}}$
Argument-addition identities
By combining hyperbolic and circular argument-addition identities,
${\begin{aligned}\tanh(z+w)&={\frac {\tanh z+\tanh w}{1+\tanh z\,\tanh w}},\\[10mu]\tan(z+w)&={\frac {\tan z+\tan w}{1-\tan z\,\tan w}},\end{aligned}}$
with the circular–hyperbolic identity,
$\tan {\tfrac {1}{2}}(\operatorname {gd} z)=\tanh {\tfrac {1}{2}}z,$
we have the Gudermannian argument-addition identities:
${\begin{aligned}\operatorname {gd} (z+w)&=2\arctan {\frac {\tan {\tfrac {1}{2}}(\operatorname {gd} z)+\tan {\tfrac {1}{2}}(\operatorname {gd} w)}{1+\tan {\tfrac {1}{2}}(\operatorname {gd} z)\,\tan {\tfrac {1}{2}}(\operatorname {gd} w)}},\\[12mu]\operatorname {gd} ^{-1}(z+w)&=2\operatorname {artanh} {\frac {\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}z)+\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}w)}{1-\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}z)\,\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}w)}}.\end{aligned}}$
Further argument-addition identities can be written in terms of other circular functions,[15] but they require greater care in choosing branches in inverse functions. Notably,
${\begin{aligned}\operatorname {gd} (z+w)&=u+v,\quad {\text{where}}\ \tan u={\frac {\sinh z}{\cosh w}},\ \tan v={\frac {\sinh w}{\cosh z}},\\[10mu]\operatorname {gd} ^{-1}(z+w)&=u+v,\quad {\text{where}}\ \tanh u={\frac {\sin z}{\cos w}},\ \tanh v={\frac {\sin w}{\cos z}},\end{aligned}}$
which can be used to derive the per-component computation for the complex Gudermannian and inverse Gudermannian.[16]
In the specific case $ z=w,$ double-argument identities are
${\begin{aligned}\operatorname {gd} (2z)&=2\arctan(\sin(\operatorname {gd} z)),\\[5mu]\operatorname {gd} ^{-1}(2z)&=2\operatorname {artanh} (\sinh(\operatorname {gd} ^{-1}z)).\end{aligned}}$
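The double-argument identities can be checked numerically on their real domains (for the inverse identity, $|2z|<{\tfrac {1}{2}}\pi $); a sketch (names ours):

```python
import math

def gd(psi):
    return math.atan(math.sinh(psi))

def gd_inv(phi):
    return math.asinh(math.tan(phi))

# gd(2z) = 2*arctan(sin(gd z)) for all real z
for z in (0.2, 0.9, 1.6):
    assert abs(gd(2 * z) - 2 * math.atan(math.sin(gd(z)))) < 1e-12

# gd^-1(2z) = 2*artanh(sinh(gd^-1 z)), keeping 2z inside (-pi/2, pi/2)
for z in (0.1, 0.35, 0.7):
    assert abs(gd_inv(2 * z) - 2 * math.atanh(math.sinh(gd_inv(z)))) < 1e-12
```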
Taylor series
The Taylor series near zero, valid for complex values $ z$ with $ |z|<{\tfrac {1}{2}}\pi ,$ are[17]
${\begin{aligned}\operatorname {gd} z&=\sum _{k=0}^{\infty }{\frac {E_{k}}{(k+1)!}}z^{k+1}=z-{\frac {1}{6}}z^{3}+{\frac {1}{24}}z^{5}-{\frac {61}{5040}}z^{7}+{\frac {277}{72576}}z^{9}-\dots ,\\[10mu]\operatorname {gd} ^{-1}z&=\sum _{k=0}^{\infty }{\frac {|E_{k}|}{(k+1)!}}z^{k+1}=z+{\frac {1}{6}}z^{3}+{\frac {1}{24}}z^{5}+{\frac {61}{5040}}z^{7}+{\frac {277}{72576}}z^{9}+\dots ,\end{aligned}}$
where the numbers $ E_{k}$ are the Euler secant numbers, 1, 0, -1, 0, 5, 0, -61, 0, 1385 ... (sequences A122045, A000364, and A028296 in the OEIS). These series were first computed by James Gregory in 1671.[18]
Because the Gudermannian and inverse Gudermannian functions are the integrals of the hyperbolic secant and secant functions, the numerators $ E_{k}$ and $ |E_{k}|$ are the same as the numerators of the Taylor series for sech and sec, respectively, but shifted by one place.
The reduced unsigned numerators are 1, 1, 1, 61, 277, ... and the reduced denominators are 1, 6, 24, 5040, 72576, ... (sequences A091912 and A136606 in the OEIS).
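The partial sums written out above reproduce the function near zero, with truncation error of order $z^{11}$; a sketch (names ours):

```python
import math

# Odd-order Taylor coefficients of gd about 0, as listed above:
# 1, -1/6, 1/24, -61/5040, 277/72576 (these multiply z, z^3, z^5, z^7, z^9).
coeffs = [1.0, -1 / 6, 1 / 24, -61 / 5040, 277 / 72576]

def gd_series(z):
    # Partial sum z - z^3/6 + z^5/24 - 61 z^7/5040 + 277 z^9/72576
    return sum(c * z ** (2 * k + 1) for k, c in enumerate(coeffs))

z = 0.3
exact = math.atan(math.sinh(z))   # closed form of gd(z)
assert abs(gd_series(z) - exact) < 1e-8   # next omitted term is O(z^11)
```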
History
For broader coverage of this topic, see Mercator projection § History, and Integral of the secant function.
The function and its inverse are related to the Mercator projection. The vertical coordinate in the Mercator projection is called isometric latitude, and is often denoted $ \psi .$ In terms of latitude $ \phi $ on the sphere (expressed in radians) the isometric latitude can be written
$\psi =\operatorname {gd} ^{-1}\phi =\int _{0}^{\phi }\sec t\,\mathrm {d} t.$
The inverse from the isometric latitude to spherical latitude is $ \phi =\operatorname {gd} \psi .$ (Note: on an ellipsoid of revolution, the relation between geodetic latitude and isometric latitude is slightly more complicated.)
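In code, the spherical Mercator ordinate of a latitude is just the inverse Gudermannian, scaled by the map's radius, and the Gudermannian recovers the latitude; a minimal sketch (function names and the sample latitude are ours):

```python
import math

def mercator_y(lat_deg, radius=1.0):
    """Vertical Mercator coordinate (isometric latitude) for a spherical latitude."""
    phi = math.radians(lat_deg)
    return radius * math.asinh(math.tan(phi))   # gd^-1(phi)

def inverse_mercator(y, radius=1.0):
    """Recover the spherical latitude (in degrees) from the Mercator ordinate."""
    return math.degrees(math.atan(math.sinh(y / radius)))   # gd(psi)

y = mercator_y(60.0)
assert abs(inverse_mercator(y) - 60.0) < 1e-9

# The ordinate grows without bound toward the poles, which is why
# Mercator maps stretch high latitudes so severely.
assert mercator_y(89.9) > 5 * mercator_y(60.0)
```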
Gerardus Mercator plotted his celebrated map in 1569, but the precise method of construction was not revealed. In 1599, Edward Wright described a method for constructing a Mercator projection numerically from trigonometric tables, but did not produce a closed formula. The closed formula was published in 1668 by James Gregory.
The Gudermannian function per se was introduced by Johann Heinrich Lambert in the 1760s at the same time as the hyperbolic functions. He called it the "transcendent angle", and it went by various names until 1862 when Arthur Cayley suggested it be given its current name as a tribute to Christoph Gudermann's work in the 1830s on the theory of special functions.[19] Gudermann had published articles in Crelle's Journal that were later collected in a book[20] which expounded $ \sinh $ and $ \cosh $ to a wide audience (although represented by the symbols $ {\mathfrak {Sin}}$ and $ {\mathfrak {Cos}}$).
The notation $ \operatorname {gd} $ was introduced by Cayley who starts by calling $ \phi =\operatorname {gd} u$ the Jacobi elliptic amplitude $ \operatorname {am} u$ in the degenerate case where the elliptic modulus is $ m=1,$ so that $ {\sqrt {1-m\sin ^{2}\phi }}$ reduces to $ \cos \phi .$[21] This is the inverse of the integral of the secant function. Using Cayley's notation,
$u=\int _{0}{\frac {d\phi }{\cos \phi }}={\log \,\tan }{\bigl (}{\tfrac {1}{4}}\pi +{\tfrac {1}{2}}\phi {\bigr )}.$
He then derives "the definition of the transcendent",
$\operatorname {gd} u={{\frac {1}{i}}\log \,\tan }{\bigl (}{\tfrac {1}{4}}\pi +{\tfrac {1}{2}}ui{\bigr )},$
observing that "although exhibited in an imaginary form, [it] is a real function of $ u$".
The Gudermannian and its inverse were used to make trigonometric tables of circular functions also function as tables of hyperbolic functions. Given a hyperbolic angle $ \psi $, hyperbolic functions could be found by first looking up $ \phi =\operatorname {gd} \psi $ in a Gudermannian table and then looking up the appropriate circular function of $ \phi $, or by directly locating $ \psi $ in an auxiliary $\operatorname {gd} ^{-1}$ column of the trigonometric table.[22]
Generalization
The Gudermannian function can be thought of as mapping points on one branch of a hyperbola to points on a semicircle. Points on one sheet of an n-dimensional hyperboloid of two sheets can likewise be mapped onto an n-dimensional hemisphere via stereographic projection. The hemisphere model of hyperbolic space uses such a map to represent hyperbolic space.
Applications
• The angle of parallelism function in hyperbolic geometry is the complement of the gudermannian, ${\mathit {\Pi }}(\psi )={\tfrac {1}{2}}\pi -\operatorname {gd} \psi .$
• On a Mercator projection a line of constant latitude is parallel to the equator (on the projection) and is displaced by an amount proportional to the inverse Gudermannian of the latitude.
• The Gudermannian (with a complex argument) may be used to define the transverse Mercator projection.[23]
• The Gudermannian appears in a non-periodic solution of the inverted pendulum.[24]
• The Gudermannian appears in a moving mirror solution of the dynamical Casimir effect.[25]
• If an infinite number of infinitely long, equidistant, parallel, coplanar, straight wires are kept at equal potentials with alternating signs, the potential-flux distribution in a cross-sectional plane perpendicular to the wires is the complex Gudermannian.[26]
• The Gudermannian function is a sigmoid function, and as such is sometimes used as an activation function in machine learning.
• The (scaled and shifted) Gudermannian is the cumulative distribution function of the hyperbolic secant distribution.
• A function based on the Gudermannian provides a good model for the shape of spiral galaxy arms.[27]
See also
• Tractrix
• Catenary § Catenary of equal strength
Notes
1. The symbols $ \psi $ and $ \phi $ were chosen for this article because they are commonly used in geodesy for the isometric latitude (vertical coordinate of the Mercator projection) and geodetic latitude, respectively, and geodesy/cartography was the original context for the study of the Gudermannian and inverse Gudermannian functions.
2. Gudermann published several papers about the trigonometric and hyperbolic functions in Crelle's Journal in 1830–1831. These were collected in a book, Gudermann (1833).
3. Roy & Olver (2010) §4.23(viii) "Gudermannian Function"; Beyer (1987)
4. Kennelly (1929); Lee (1976)
5. Masson (2021)
6. Gottschalk (2003) pp. 23–27
7. Masson (2021) draws complex-valued plots of several of these, demonstrating that naïve implementations that choose the principal branch of inverse trigonometric functions yield incorrect results.
8. Weisstein, Eric W. "Gudermannian". MathWorld.
9. Kennelly (1929)
10. Kennelly (1929) p. 181; Beyer (1987) p. 269
11. Beyer (1987) p. 269 – note the typo.
12. Legendre (1817) §4.2.8(163) pp. 144–145
13. Kennelly (1929) p. 182
14. Kahlig & Reich (2013)
15. Cayley (1862) p. 21
16. Kennelly (1929) pp. 180–183
17. Legendre (1817) §4.2.7(162) pp. 143–144
18. Turnbull, Herbert Westren, ed. (1939). James Gregory; Tercentenary Memorial Volume. G. Bell & Sons. p. 170.
19. Becker & Van Orstrand (1909)
20. Gudermann (1833)
21. Cayley (1862)
22. For example Hoüel labels the hyperbolic functions across the top in Table XIV of: Hoüel, Guillaume Jules (1885). Recueil de formules et de tables numériques. Gauthier-Villars. p. 36.
23. Osborne (2013) p. 74
24. Robertson (1997)
25. Good, Anderson & Evans (2013)
26. Kennelly (1928)
27. Ringermacher & Mead (2009)
References
• Barnett, Janet Heine (2004). "Enter, Stage Center: The Early Drama of the Hyperbolic Functions" (PDF). Mathematics Magazine. 77 (1): 15–30. doi:10.1080/0025570X.2004.11953223.
• Becker, George Ferdinand; Van Orstrand, Charles Edwin (1909). Hyperbolic Functions. Smithsonian Mathematical Tables. Smithsonian Institution.
• Becker, George Ferdinand (1912). "The gudermannian complement and imaginary geometry" (PDF). The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 24 (142): 600–608. doi:10.1080/14786441008637363.
• Beyer, William H., ed. (1987). CRC Handbook of Mathematical Sciences (6th ed.). CRC Press. pp. 268–286.
• Cayley, Arthur (1862). "On the transcendent $ \operatorname {gd} u={\dfrac {1}{i}}\log \tan {\bigl (}{\tfrac {1}{4}}\pi +{\tfrac {1}{2}}ui{\bigr )}$". Philosophical Magazine. 4th Series. 24 (158): 19–21. doi:10.1080/14786446208643307.
• Good, Michael R.R.; Anderson, Paul R.; Evans, Charles R. (2013). "Time dependence of particle creation from accelerating mirrors". Physical Review D. 88 (2): 025023. arXiv:1303.6756. doi:10.1103/PhysRevD.88.025023.
• Gottschalk, Walter (2003). "Good Things about the Gudermannian" (PDF). Gottschalk's Gestalts.
• Gudermann, Christoph (1833). Theorie der Potenzial- oder cyklisch-hyperbolischen Functionen [Theory of Potential- or Circular-Hyperbolic Functions] (in German). G. Reimer.
• Jennings, George; Ni, David; Pong, Wai Yan; Raianu, Serban (2022). "The Integral of Secant and Stereographic Projections of Conic Sections". arXiv:2204.11187 [math.HO].
• Kahlig, Peter; Reich, Ludwig (2013). Contributions to the theory of the Legendre-Gudermann equation (PDF) (Technical report). Fachbibliothek für Mathematik, Karl-Franzens-Universität Graz.
• Karney, Charles F.F. (2011). "Transverse Mercator with an accuracy of a few nanometers". Journal of Geodesy. 85 (8): 475–485. arXiv:1002.1417. doi:10.1007/s00190-011-0445-3.
• Kennelly, Arthur E. (1928). "Gudermannian Complex Angles". Proceedings of the National Academy of Sciences. 14 (11): 839–844. doi:10.1073/pnas.14.11.839.
• Kennelly, Arthur E. (1929). "Gudermannians and Lambertians with Their Respective Addition Theorems". Proceedings of the American Philosophical Society. 68 (3): 175–184.
• Lambert, Johann Heinrich (1761). "Mémoire sur quelques propriétés remarquables des quantités transcendentes circulaires et logarithmiques" [Memoir on some remarkable properties of the circular and logarithmic transcendental quantities]. Histoire de l'Académie Royale des Sciences et des Belles-Lettres (in French). Berlin (published 1768). 17: 265–322.
• Lee, Laurence Patrick (1976). Conformal Projections Based on Elliptic Functions. Cartographica Monograph. Vol. 16. University of Toronto Press.
• Legendre, Adrien-Marie (1817). Exercices de calcul intégral [Exercises in integral calculus] (in French). Vol. 2. Courcier.
• Majernik, V. (1986). "Representation of relativistic quantities by trigonometric functions". American Journal of Physics. 54 (6): 536–538. doi:10.1119/1.14557.
• McMahon, James (1906). Hyperbolic Functions. Wiley. [First published as McMahon (1896). "IV. Hyperbolic Functions". In Merriman; Woodward (eds.). Higher Mathematics. Wiley. pp. 107–168.]
• Masson, Paul (2021). "The Complex Gudermannian". Analytic Physics.
• Osborne, Peter (2013). "The Mercator projections" (PDF).
• Peters, J. M. H. (1984). "The Gudermannian". The Mathematical Gazette. 68 (445): 192–196. doi:10.2307/3616342. JSTOR 3616342.
• Reynolds, William F. (1993). "Hyperbolic Geometry on a Hyperboloid" (PDF). The American Mathematical Monthly. 100 (5): 442–455. doi:10.1080/00029890.1993.11990430. Archived from the original (PDF) on 2016-05-28.
• Rickey, V. Frederick; Tuchinsky, Philip M. (1980). "An application of geography to mathematics: History of the integral of the secant" (PDF). Mathematics Magazine. 53 (3): 162–166. doi:10.1080/0025570X.1980.11976846.
• Ringermacher, Harry I.; Mead, Lawrence R. (2009). "A new formula describing the scaffold structure of spiral galaxies". Monthly Notices of the Royal Astronomical Society. 397 (1): 164–171. doi:10.1111/j.1365-2966.2009.14950.x.
• Robertson, John S. (1997). "Gudermann and the simple pendulum". The College Mathematics Journal. 28 (4): 271–276. doi:10.2307/2687148. JSTOR 2687148.
• Romakina, Lyudmila N. (2018). "The inverse Gudermannian in the hyperbolic geometry". Integral Transforms and Special Functions. 29 (5): 384–401. doi:10.1080/10652469.2018.1441296.
• Roy, Ranjan; Olver, Frank W. J. (2010), "4. Elementary Functions", in Olver, Frank W. J.; et al. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
• Sala, Kenneth L. (1989). "Transformations of the Jacobian amplitude function and its calculation via the arithmetic-geometric mean" (PDF). SIAM Journal on Mathematical Analysis. 20 (6): 1514–1528. doi:10.1137/0520100.
External links
• Penn, Michael (2020) "the Gudermannian function!" on YouTube.
Transcendental function
In mathematics, a transcendental function is an analytic function that does not satisfy a polynomial equation, in contrast to an algebraic function.[1][2] In other words, a transcendental function "transcends" algebra in that it cannot be expressed algebraically.
Examples of transcendental functions include the exponential function, the logarithm, and the trigonometric functions.
Definition
Formally, an analytic function f (z) of one real or complex variable z is transcendental if it is algebraically independent of that variable.[3] This can be extended to functions of several variables.
History
The transcendental functions sine and cosine were tabulated from physical measurements in antiquity, as evidenced in Greece (Hipparchus) and India (jya and koti-jya). In describing Ptolemy's table of chords, an equivalent to a table of sines, Olaf Pedersen wrote:
The mathematical notion of continuity as an explicit concept is unknown to Ptolemy. That he, in fact, treats these functions as continuous appears from his unspoken presumption that it is possible to determine a value of the dependent variable corresponding to any value of the independent variable by the simple process of linear interpolation.[4]
A revolutionary understanding of these circular functions occurred in the 17th century and was explicated by Leonhard Euler in 1748 in his Introduction to the Analysis of the Infinite. These ancient transcendental functions became known as continuous functions through quadrature of the rectangular hyperbola xy = 1 by Grégoire de Saint-Vincent in 1647, two millennia after Archimedes had produced The Quadrature of the Parabola.
The area under the hyperbola was shown to have the scaling property of constant area for a constant ratio of bounds. The hyperbolic logarithm function so described was of limited service until 1748 when Leonhard Euler related it to functions where a constant is raised to a variable exponent, such as the exponential function where the constant base is e. By introducing these transcendental functions and noting the bijection property that implies an inverse function, some facility was provided for algebraic manipulations of the natural logarithm even if it is not an algebraic function.
The exponential function is written $\exp(x)=e^{x}$. Euler identified it with the infinite series $ \sum _{k=0}^{\infty }x^{k}/k!$, where k! denotes the factorial of k.
The even and odd terms of this series provide sums denoting cosh(x) and sinh(x), so that $e^{x}=\cosh x+\sinh x.$ These transcendental hyperbolic functions can be converted into circular functions sine and cosine by introducing (−1)k into the series, resulting in alternating series. After Euler, mathematicians view the sine and cosine this way to relate the transcendence to logarithm and exponent functions, often through Euler's formula in complex number arithmetic.
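Euler's series splitting can be checked numerically: the even-index terms of the exponential series sum to cosh x and the odd-index terms to sinh x. A minimal sketch (names ours):

```python
import math

def exp_parts(x, n_terms=40):
    # Partial sums of the exponential series sum(x^k / k!), split by parity of k.
    even = sum(x ** k / math.factorial(k) for k in range(0, n_terms, 2))
    odd = sum(x ** k / math.factorial(k) for k in range(1, n_terms, 2))
    return even, odd

even, odd = exp_parts(1.5)
assert abs(even - math.cosh(1.5)) < 1e-12   # even terms give cosh
assert abs(odd - math.sinh(1.5)) < 1e-12    # odd terms give sinh
assert abs((even + odd) - math.exp(1.5)) < 1e-12   # e^x = cosh x + sinh x
```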
Examples
The following functions are transcendental:
${\begin{aligned}f_{1}(x)&=x^{\pi }\\[2pt]f_{2}(x)&=c^{x}\\[2pt]f_{3}(x)&=x^{x}\\f_{4}(x)&=x^{\frac {1}{x}}={\sqrt[{x}]{x}}\\[2pt]f_{5}(x)&=\log _{c}x\\[2pt]f_{6}(x)&=\sin {x}\end{aligned}}$
For the second function $f_{2}(x)$, if we set $c$ equal to $e$, the base of the natural logarithm, then we get that $e^{x}$ is a transcendental function. Similarly, if we set $c$ equal to $e$ in $f_{5}(x)$, then we get that $f_{5}(x)=\log _{e}x=\ln x$ (that is, the natural logarithm) is a transcendental function.
Algebraic and transcendental functions
Further information: Elementary function (differential algebra)
The most familiar transcendental functions are the logarithm, the exponential (with any non-trivial base), the trigonometric, and the hyperbolic functions, and the inverses of all of these. Less familiar are the special functions of analysis, such as the gamma, elliptic, and zeta functions, all of which are transcendental. The generalized hypergeometric and Bessel functions are transcendental in general, but algebraic for some special parameter values.
A function that is not transcendental is algebraic. Simple examples of algebraic functions are the rational functions and the square root function, but in general, algebraic functions cannot be defined as finite formulas of the elementary functions.[5]
The indefinite integral of many algebraic functions is transcendental. For example, the logarithm function arose from the reciprocal function in an effort to find the area of a hyperbolic sector.
Differential algebra examines how integration frequently creates functions that are algebraically independent of some class, such as when one takes polynomials with trigonometric functions as variables.
Transcendentally transcendental functions
Most familiar transcendental functions, including the special functions of mathematical physics, are solutions of algebraic differential equations. Those that are not, such as the gamma and the zeta functions, are called transcendentally transcendental or hypertranscendental functions.[6]
Exceptional set
If f is an algebraic function and $\alpha $ is an algebraic number then f (α) is also an algebraic number. The converse is not true: there are entire transcendental functions f such that f (α) is an algebraic number for any algebraic α.[7] For a given transcendental function the set of algebraic numbers giving algebraic results is called the exceptional set of that function.[8][9] Formally it is defined by:
${\mathcal {E}}(f)=\left\{\alpha \in {\overline {\mathbb {Q} }}\,:\,f(\alpha )\in {\overline {\mathbb {Q} }}\right\}.$
In many instances the exceptional set is fairly small. For example, ${\mathcal {E}}(\exp )=\{0\}$; this was proved by Lindemann in 1882. In particular exp(1) = e is transcendental. Also, since exp(iπ) = −1 is algebraic we know that iπ cannot be algebraic. Since i is algebraic this implies that π is a transcendental number.
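The identity exp(iπ) = −1 used in this argument is easy to confirm numerically, up to floating-point error:

```python
import cmath

# exp(i*pi) = -1 (Euler's identity): an algebraic value of exp at a
# transcendental argument. Combined with Lindemann's E(exp) = {0}, it
# forces i*pi, and hence pi, to be transcendental.
z = cmath.exp(1j * cmath.pi)
assert abs(z - (-1)) < 1e-12

# exp(0) = 1 is the single algebraic point with an algebraic value.
assert cmath.exp(0) == 1
```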
In general, finding the exceptional set of a function is a difficult problem, but if it can be calculated then it can often lead to results in transcendental number theory. Here are some other known exceptional sets:
• Klein's j-invariant
${\mathcal {E}}(j)=\left\{\alpha \in {\mathcal {H}}\,:\,[\mathbb {Q} (\alpha ):\mathbb {Q} ]=2\right\},$
where ${\mathcal {H}}$ is the upper half-plane, and $[\mathbb {Q} (\alpha ):\mathbb {Q} ]$ is the degree of the number field $\mathbb {Q} (\alpha ).$ This result is due to Theodor Schneider.[10]
• Exponential function in base 2:
${\mathcal {E}}(2^{x})=\mathbb {Q} ,$
This result is a corollary of the Gelfond–Schneider theorem, which states that if $\alpha \neq 0,1$ is algebraic, and $\beta $ is algebraic and irrational then $\alpha ^{\beta }$ is transcendental. Thus the function $2^{x}$ could be replaced by $c^{x}$ for any algebraic c not equal to 0 or 1. Indeed, we have:
${\mathcal {E}}(x^{x})={\mathcal {E}}\left(x^{\frac {1}{x}}\right)=\mathbb {Q} \setminus \{0\}.$
• A consequence of Schanuel's conjecture in transcendental number theory would be that ${\mathcal {E}}\left(e^{e^{x}}\right)=\emptyset .$
• A function with empty exceptional set that does not require assuming Schanuel's conjecture is $f(x)=\exp(1+\pi x).$
While calculating the exceptional set for a given function is not easy, it is known that given any subset of the algebraic numbers, say A, there is a transcendental function whose exceptional set is A.[11] The subset does not need to be proper, meaning that A can be the set of algebraic numbers. This directly implies that there exist transcendental functions that produce transcendental numbers only when given transcendental numbers. Alex Wilkie also proved that there exist transcendental functions for which first-order-logic proofs about their transcendence do not exist by providing an exemplary analytic function.[12]
Dimensional analysis
In dimensional analysis, transcendental functions are notable because they make sense only when their argument is dimensionless (possibly after algebraic reduction). Because of this, transcendental functions can be an easy-to-spot source of dimensional errors. For example, log(5 metres) is a nonsensical expression, unlike log(5 metres / 3 metres) or log(3) metres. One could attempt to apply a logarithmic identity to get log(5) + log(metres), which highlights the problem: applying a non-algebraic operation to a dimension creates meaningless results.
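This check can be mechanized. Below is a minimal, hypothetical `Quantity` sketch (not a real units library; the single-unit design is an assumption made for brevity) that refuses to take the logarithm of anything carrying a dimension:

```python
import math

class Quantity:
    """Toy dimensioned value: a magnitude plus an exponent for one base unit."""
    def __init__(self, value, metre_exp=0):
        self.value = value
        self.metre_exp = metre_exp

    def __truediv__(self, other):
        # Dividing quantities subtracts unit exponents; m/m is dimensionless.
        return Quantity(self.value / other.value,
                        self.metre_exp - other.metre_exp)

def log(q):
    # Transcendental functions only make sense on dimensionless arguments.
    if q.metre_exp != 0:
        raise TypeError("log of a dimensioned quantity is meaningless")
    return math.log(q.value)

ratio = Quantity(5, 1) / Quantity(3, 1)   # 5 metres / 3 metres -> dimensionless
print(log(ratio))                          # fine: log(5/3)

try:
    log(Quantity(5, 1))                    # log(5 metres) -> rejected
except TypeError as err:
    print(err)
```

Real unit libraries enforce the same rule by tracking a full vector of unit exponents rather than a single one.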
See also
• Complex function
• Function (mathematics)
• Generalized function
• List of special functions and eponyms
• List of types of functions
• Rational function
• Special functions
References
1. Townsend, E.J. (1915). Functions of a Complex Variable. H. Holt. p. 300. OCLC 608083625.
2. Hazewinkel, Michiel (1993). Encyclopedia of Mathematics. Vol. 9. p. 236.
3. Waldschmidt, M. (2000). Diophantine approximation on linear algebraic groups. Springer. ISBN 978-3-662-11569-5.
4. Pedersen, Olaf (1974). Survey of the Almagest. Odense University Press. p. 84. ISBN 87-7492-087-1.
5. cf. Abel–Ruffini theorem
6. Rubel, Lee A. (November 1989). "A Survey of Transcendentally Transcendental Functions". The American Mathematical Monthly. 96 (9): 777–788. doi:10.1080/00029890.1989.11972282. JSTOR 2324840.
7. van der Poorten, A.J. (1968). "Transcendental entire functions mapping every algebraic number field into itself". J. Austral. Math. Soc. 8 (2): 192–8. doi:10.1017/S144678870000522X. S2CID 121788380.
8. Marques, D.; Lima, F.M.S. (2010). "Some transcendental functions that yield transcendental values for every algebraic entry". arXiv:1004.1668v1 [math.NT].
9. Archinard, N. (2003). "Exceptional sets of hypergeometric series". Journal of Number Theory. 101 (2): 244–269. doi:10.1016/S0022-314X(03)00042-8.
10. Schneider, T. (1937). "Arithmetische Untersuchungen elliptischer Integrale". Math. Annalen. 113: 1–13. doi:10.1007/BF01571618. S2CID 121073687.
11. Waldschmidt, M. (2009). "Auxiliary functions in transcendental number theory". The Ramanujan Journal. 20 (3): 341–373. arXiv:0908.4024. doi:10.1007/s11139-009-9204-y. S2CID 122797406.
12. Wilkie, A.J. (1998). "An algebraically conservative, transcendental function". Paris VII Preprints. 66.
External links
The Wikibook Associative Composition Algebra has a page on the topic of: Transcendental functions
• Definition of "Transcendental function" in the Encyclopedia of Math
Transcendental number
In mathematics, a transcendental number is a real or complex number that is not algebraic – that is, not the root of a non-zero polynomial of finite degree with rational coefficients. The best known transcendental numbers are π and e.[1][2]
Though only a few classes of transcendental numbers are known – partly because it can be extremely difficult to show that a given number is transcendental – transcendental numbers are not rare: indeed, almost all real and complex numbers are transcendental, since the algebraic numbers form a countable set, while the set of real numbers and the set of complex numbers are both uncountable sets, and therefore larger than any countable set. All transcendental real numbers (also known as real transcendental numbers or transcendental irrational numbers) are irrational numbers, since all rational numbers are algebraic.[3][4][5][6] The converse is not true: Not all irrational numbers are transcendental. Hence, the set of real numbers consists of non-overlapping rational, algebraic non-rational and transcendental real numbers.[3] For example, the square root of 2 is an irrational number, but it is not a transcendental number as it is a root of the polynomial equation x2 − 2 = 0. The golden ratio (denoted $\varphi $ or $\phi $) is another irrational number that is not transcendental, as it is a root of the polynomial equation x2 − x − 1 = 0. The quality of a number being transcendental is called transcendence.
History
The name "transcendental" comes from the Latin trānscendere 'to climb over or beyond, surmount',[7] and was first used for the mathematical concept in Leibniz's 1682 paper in which he proved that sin x is not an algebraic function of x .[8] Euler, in the 18th century, was probably the first person to define transcendental numbers in the modern sense.[9]
Johann Heinrich Lambert conjectured that e and π were both transcendental numbers in his 1768 paper proving the number π is irrational, and proposed a tentative sketch of a proof of π's transcendence.[10]
Joseph Liouville first proved the existence of transcendental numbers in 1844,[11] and in 1851 gave the first decimal examples such as the Liouville constant
${\begin{aligned}L_{b}&=\sum _{n=1}^{\infty }10^{-n!}\\&=10^{-1}+10^{-2}+10^{-6}+10^{-24}+10^{-120}+10^{-720}+10^{-5040}+10^{-40320}+\ldots \\&=0.{\textbf {1}}{\textbf {1}}000{\textbf {1}}00000000000000000{\textbf {1}}00000000000000000000000000000000000000000000000000000\ \ldots \\\end{aligned}}$
in which the nth digit after the decimal point is 1 if n is equal to k! (k factorial) for some k and 0 otherwise.[12] In other words, the nth digit of this number is 1 only if n is one of the numbers 1! = 1, 2! = 2, 3! = 6, 4! = 24, etc. Liouville showed that this number belongs to a class of transcendental numbers that can be more closely approximated by rational numbers than can any irrational algebraic number, and this class of numbers are called Liouville numbers, named in his honour. Liouville showed that all Liouville numbers are transcendental.[13]
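The digit rule just described is straightforward to generate programmatically; a short sketch (the digit count is an arbitrary choice):

```python
import math

def liouville_digits(n_digits):
    """First n_digits decimal digits of the Liouville constant:
    digit n (1-indexed) is 1 iff n = k! for some k, else 0."""
    factorials = set()
    k, f = 1, 1
    while f <= n_digits:
        factorials.add(f)
        k += 1
        f = math.factorial(k)
    return ''.join('1' if n in factorials else '0'
                   for n in range(1, n_digits + 1))

# 1s at positions 1! = 1, 2! = 2, 3! = 6, 4! = 24, ...
print('0.' + liouville_digits(30))
```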
The first number to be proven transcendental without having been specifically constructed for the purpose of proving transcendental numbers' existence was e, by Charles Hermite in 1873.
In 1874, Georg Cantor proved that the algebraic numbers are countable and the real numbers are uncountable. He also gave a new method for constructing transcendental numbers.[14] Although this was already implied by his proof of the countability of the algebraic numbers, Cantor also published a construction that proves there are as many transcendental numbers as there are real numbers.[lower-alpha 1] Cantor's work established the ubiquity of transcendental numbers.
In 1882, Ferdinand von Lindemann published the first complete proof of the transcendence of π. He first proved that ea is transcendental if a is a non-zero algebraic number. Then, since ei π = −1 is algebraic (see Euler's identity), i π must be transcendental. But since i is algebraic, π therefore must be transcendental. This approach was generalized by Karl Weierstrass to what is now known as the Lindemann–Weierstrass theorem. The transcendence of π allowed the proof of the impossibility of several ancient geometric constructions involving compass and straightedge, including the most famous one, squaring the circle.
In 1900, David Hilbert posed a question about transcendental numbers, Hilbert's seventh problem: If a is an algebraic number that is not zero or one, and b is an irrational algebraic number, is ab necessarily transcendental? The affirmative answer was provided in 1934 by the Gelfond–Schneider theorem. This work was extended by Alan Baker in the 1960s in his work on lower bounds for linear forms in any number of logarithms (of algebraic numbers).[16]
Properties
A transcendental number is a (possibly complex) number that is not the root of any integer polynomial. Every real transcendental number must also be irrational, since a rational number is the root of an integer polynomial of degree one.[17] The set of transcendental numbers is uncountably infinite. Since the polynomials with rational coefficients are countable, and since each such polynomial has a finite number of zeroes, the algebraic numbers must also be countable. However, Cantor's diagonal argument proves that the real numbers (and therefore also the complex numbers) are uncountable. Since the real numbers are the union of algebraic and transcendental numbers, it is impossible for both subsets to be countable. This makes the transcendental numbers uncountable.
No rational number is transcendental and all real transcendental numbers are irrational. The irrational numbers contain all the real transcendental numbers and a subset of the algebraic numbers, including the quadratic irrationals and other forms of algebraic irrationals.
Applying any non-constant single-variable algebraic function to a transcendental argument yields a transcendental value. For example, from knowing that π is transcendental, it can be immediately deduced that numbers such as $5\pi $, ${\tfrac {\pi -3}{\sqrt {2}}}$, $({\sqrt {\pi }}-{\sqrt {3}})^{8}$, and ${\sqrt[{4}]{\pi ^{5}+7}}$ are transcendental as well.
However, an algebraic function of several variables may yield an algebraic number when applied to transcendental numbers if these numbers are not algebraically independent. For example, π and (1 − π) are both transcendental, but π + (1 − π) = 1 is obviously not. It is unknown whether e + π, for example, is transcendental, though at least one of e + π and eπ must be transcendental. More generally, for any two transcendental numbers a and b, at least one of a + b and ab must be transcendental. To see this, consider the polynomial $(x-a)(x-b)=x^{2}-(a+b)x+ab$. If (a + b) and ab were both algebraic, then this would be a polynomial with algebraic coefficients. Because algebraic numbers form an algebraically closed field, this would imply that the roots of the polynomial, a and b, must be algebraic. But this is a contradiction, and thus it must be the case that at least one of the coefficients is transcendental.
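The π and (1 − π) example can be illustrated numerically: the sum of the pair is algebraic while the product is not, which is exactly the dichotomy in the polynomial argument above. A quick floating-point sketch:

```python
import math

a = math.pi
b = 1 - math.pi
# Both a and b are transcendental, yet the coefficients of (x-a)(x-b)
# need not both be: here a + b = 1 is algebraic, while
# a*b = pi - pi^2 is transcendental.
s = a + b          # elementary symmetric function e1
p = a * b          # elementary symmetric function e2
assert abs(s - 1) < 1e-12
assert abs(p - (math.pi - math.pi ** 2)) < 1e-12
```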
The non-computable numbers are a strict subset of the transcendental numbers.
All Liouville numbers are transcendental, but not vice versa. Any Liouville number must have unbounded partial quotients in its continued fraction expansion. Using a counting argument one can show that there exist transcendental numbers which have bounded partial quotients and hence are not Liouville numbers.
Using the explicit continued fraction expansion of e, one can show that e is not a Liouville number (although the partial quotients in its continued fraction expansion are unbounded). Kurt Mahler showed in 1953 that π is also not a Liouville number. It is conjectured that all infinite continued fractions with bounded terms, that have a "simple" structure, and that are not eventually periodic are transcendental (in other words, algebraic irrational roots of at least third degree polynomials do not have simple continued fraction expansions, since eventually periodic continued fractions correspond to quadratic irrationals, see Hermite's problem).[18]
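The continued fraction of e mentioned here has the well-known regular pattern [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], whose partial quotients are unbounded but grow only linearly. It can be recovered from a high-precision rational approximation; a sketch (40 series terms is an arbitrary precision choice, far more than the 12 quotients extracted require):

```python
from fractions import Fraction
from math import factorial

# High-precision rational approximation of e from its Taylor series.
e_approx = sum(Fraction(1, factorial(k)) for k in range(40))

def continued_fraction(x, n_terms):
    """First n_terms partial quotients of the simple continued fraction of x."""
    quotients = []
    for _ in range(n_terms):
        a = x.numerator // x.denominator   # floor of x (x > 0 here)
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return quotients

# e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...]
print(continued_fraction(e_approx, 12))
```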
Numbers proven to be transcendental
Numbers proven to be transcendental:
• ea if a is algebraic and nonzero (by the Lindemann–Weierstrass theorem).
• π (by the Lindemann–Weierstrass theorem).
• $e^{\pi }$, Gelfond's constant, as well as $e^{-\pi /2}=i^{i}$ (by the Gelfond–Schneider theorem).
• ab where a is algebraic but not 0 or 1, and b is irrational algebraic (by the Gelfond–Schneider theorem), in particular:
$2^{\sqrt {2}}$, the Gelfond–Schneider constant (or Hilbert number)
• sin a, cos a, tan a, csc a, sec a, and cot a, and their hyperbolic counterparts, for any nonzero algebraic number a, expressed in radians (by the Lindemann–Weierstrass theorem).
• The fixed point of the cosine function (also referred to as the Dottie number d) – the unique real solution to the equation cos x = x, where x is in radians (by the Lindemann–Weierstrass theorem).[19]
• ln a if a is algebraic and not equal to 0 or 1, for any branch of the logarithm function (by the Lindemann–Weierstrass theorem), in particular: the universal parabolic constant.
• logb a if a and b are positive integers not both powers of the same integer, and a is not equal to 1 (by the Gelfond–Schneider theorem).
• arcsin a, arccos a, arctan a, arccsc a, arcsec a, arccot a and their hyperbolic counterparts, for any algebraic number a where $a\notin \{0,1\}$ (by the Lindemann–Weierstrass theorem).
• The Bessel function of the first kind Jν(x), its first derivative, and the quotient ${\tfrac {J'_{\nu }(x)}{J_{\nu }(x)}}$ are transcendental when ν is rational and x is algebraic and nonzero,[20] and all nonzero roots of Jν(x) and J'ν(x) are transcendental when ν is rational.[21]
• W(a) if a is algebraic and nonzero, for any branch of the Lambert W Function (by the Lindemann–Weierstrass theorem), in particular: Ω the omega constant
• W(r,a) if both a and the order r are algebraic such that $a\neq 0$, for any branch of the generalized Lambert W function.[22]
• The square super-root ${\sqrt {n}}_{s}$ of any natural number $n$ (that is, the solution of $x^{x}=n$) is either an integer or transcendental (by the Gelfond–Schneider theorem)
• $\operatorname {\Gamma } \left({\tfrac {1}{3}}\right)\ $,[23] $\operatorname {\Gamma } \left({\tfrac {1}{4}}\right)\ $,[24] and $\operatorname {\Gamma } \left({\tfrac {1}{6}}\right)\ $.[24] The numbers $\ \operatorname {\Gamma } \left({\tfrac {2}{3}}\right)\ ,$ $\ \operatorname {\Gamma } \left({\tfrac {3}{4}}\right)\ ,$ and $\ \operatorname {\Gamma } \left({\tfrac {5}{6}}\right)\ $ are also known to be transcendental. The numbers $\ {\tfrac {1}{\pi }}\operatorname {\Gamma } \left({\tfrac {1}{4}}\right)^{4}\ $ and $\ {\tfrac {1}{\pi }}\operatorname {\Gamma } \left({\tfrac {1}{3}}\right)^{2}\ $ are also transcendental.[25]
• The values of Euler beta function $\mathrm {B} (a,b)$ (in which a, b and $a+b$ are non-integer rational numbers).[26]
• 0.64341054629 ... , Cahen's constant.[27]
• $\pi +\ln(2)+{\sqrt {2}}\ln(3)$.[28] In general, all numbers of the form $\pi +\beta _{1}\ln(a_{1})+\cdots +\beta _{n}\ln(a_{n})$ are transcendental, where $\beta _{j}$ are algebraic for all $1\leq j\leq n$ and $a_{j}$ are non-zero algebraic for all $1\leq j\leq n$ (by Baker's theorem).
• The Champernowne constants, the irrational numbers formed by concatenating representations of all positive integers.[29]
• Ω, Chaitin's constant (since it is a non-computable number).[30]
• The supremum limit of the Specker sequences (since they are non-computable numbers).[31]
• The so-called Fredholm constants, such as[11][32][lower-alpha 2]
$\sum _{n=0}^{\infty }10^{-2^{n}}=0.{\textbf {1}}{\textbf {1}}0{\textbf {1}}000{\textbf {1}}0000000{\textbf {1}}\ldots $
which also holds by replacing 10 with any algebraic number b > 1.[34]
• ${\frac {\arctan(x)}{\pi }}$ , for rational number x such that $x\notin \{0,\pm {1}\}$.[28]
• The values of the Rogers-Ramanujan continued fraction $R(q)$ where ${q}\in \mathbb {C} $ is algebraic and $0<|q|<1$.[35] The lemniscatic values of theta function $\sum _{n=-\infty }^{\infty }q^{n^{2}}$ (under the same conditions for ${q}$) are also transcendental.[36]
• j(q) where ${q}\in \mathbb {C} $ is algebraic but not imaginary quadratic (i.e., the exceptional set of this function is the number field whose degree of extension over $\mathbb {Q} $ is 2).
• The values of the infinite series with fast convergence rate as defined by Y. Gao and J. Gao, such as $\sum _{n=1}^{\infty }{\frac {3^{n}}{2^{3^{n}}}}$.[37]
• The real constant in the definition of van der Corput's constant involving the Fresnel integrals.[38]
• The real constant in the definition of Zolotarev-Schur constant involving the complete elliptic integral functions.[39]
• Gauss's constant and the related lemniscate constant.[40]
• Any number of the form $\sum _{n=0}^{\infty }{\frac {E_{n}(\beta ^{r^{n}})}{F_{n}(\beta ^{r^{n}})}}$ (where $E_{n}(z)$, $F_{n}(z)$ are polynomials in variables $n$ and $z$, $\beta $ is algebraic and $\beta \neq 0$, $r$ is any integer greater than 1).[41]
• Artificially constructed non-periodic numbers.[42]
• The Robbins constant in three-dimensional line picking problem.[43]
• The aforementioned Liouville constant for any algebraic b ∈ (0, 1).
• The sum of reciprocals of exponential factorials.[28]
• The Prouhet–Thue–Morse constant[44] and the related rabbit constant.[45]
• The Komornik–Loreti constant.[46]
• Any number for which the digits with respect to some fixed base form a Sturmian word.[47]
• The paperfolding constant (also named as "Gaussian Liouville number").[48]
• Constructed irrational numbers which are not simply normal in any base.[49]
• For β > 1
$\sum _{k=0}^{\infty }10^{-\left\lfloor \beta ^{k}\right\rfloor };$
where $\beta \mapsto \lfloor \beta \rfloor $ is the floor function.[50]
• 3.300330000000000330033... and its reciprocal 0.30300000303..., two numbers with only two different decimal digits whose nonzero digit positions are given by the Moser–de Bruijn sequence and its double.[51]
• The number ${\tfrac {\pi }{2}}{\tfrac {Y_{0}(2)}{J_{0}(2)}}-\gamma $, where Yα(x) and Jα(x) are Bessel functions and γ is the Euler–Mascheroni constant.[52][53]
• Nesterenko proved in 1996 that $\pi ,e^{\pi }$ and $\Gamma (1/4)$ are algebraically independent.[25] This results in the transcendence of the Weierstrass constant[54] and the number $\sum _{n=2}^{\infty }{\frac {1}{n^{4}-1}}$.[55]
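Some of the closed forms in the list above are easy to sanity-check numerically, for instance Gelfond's constant $e^{\pi }=(-1)^{-i}$ and $i^{i}=e^{-\pi /2}$ (Python's complex power uses the principal branch, which is the branch these identities assume):

```python
import cmath
import math

# Gelfond's constant: (-1)^(-i) on the principal branch equals e^pi.
gelfond = (-1) ** -1j
assert abs(gelfond - cmath.exp(math.pi)) < 1e-9

# i^i is real on the principal branch: e^(-pi/2).
assert abs(1j ** 1j - math.exp(-math.pi / 2)) < 1e-12
```

Such checks confirm the identities, of course, not the transcendence itself, which only the theorems cited above establish.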
Possible transcendental numbers
Numbers which have yet to be proven to be either transcendental or algebraic:
• Most sums, products, powers, etc. of the number π and the number e, e.g. $e\pi $, $e+\pi $, $\pi -e$, $\pi /e$, $\pi ^{\pi }$, $e^{e}$, $\pi ^{e}$, $\pi ^{\sqrt {2}}$, $e^{\pi ^{2}}$ are not known to be rational, algebraic, irrational or transcendental. A notable exception is $e^{\pi {\sqrt {n}}}$ (for any positive integer n) which has been proven transcendental.[56] It has been shown that both e + π and π/e do not satisfy any polynomial equation of degree $\leq 8$ and integer coefficients of average size $10^{9}$.[57]
• The Euler–Mascheroni constant γ: In 2010 M. Ram Murty and N. Saradha found an infinite list of numbers containing γ/4 such that all but at most one of them are transcendental.[58][59] In 2012 it was shown that at least one of γ and the Euler–Gompertz constant δ is transcendental.[60]
• Apéry's constant ζ(3) (whose irrationality was proved by Apéry).
• The reciprocal Fibonacci constant and reciprocal Lucas constant[61] (which has been proved to be irrational).
• Catalan's constant, and the values of Dirichlet beta function at other even integers, β(4), β(6), ... (not even proven to be irrational).[62]
• Khinchin's constant, also not proven to be irrational.
• The Riemann zeta function at other odd positive integers, ζ(5), ζ(7), ... (not proven to be irrational).
• The Feigenbaum constants δ and α, also not proven to be irrational.
• Mills' constant and twin prime constant (also not proven to be irrational)
• The cube super-root of any natural number is either an integer or irrational (by the Gelfond–Schneider theorem).[63] However, it is still unclear if the irrational numbers in the latter case are all transcendental.
• The second and later eigenvalues of the Gauss-Kuzmin-Wirsing operator, also not proven to be irrational.
• The Copeland–Erdős constant, formed by concatenating the decimal representations of the prime numbers.
• The relative density of regular prime numbers: in 1964, Siegel conjectured that its value is $e^{-1/2}$.
• $\Gamma (1/5)$ has not been proven to be irrational.[25]
• Various constants whose value is not known with high precision, such as the Landau's constant and the Grothendieck constant.
Related conjectures:
• Schanuel's conjecture,
• Four exponentials conjecture.
Sketch of a proof that e is transcendental
The first proof that the base of the natural logarithms, e, is transcendental dates from 1873. We will now follow the strategy of David Hilbert (1862–1943) who gave a simplification of the original proof of Charles Hermite. The idea is the following:
Assume, for purpose of finding a contradiction, that e is algebraic. Then there exists a finite set of integer coefficients c0, c1, ..., cn satisfying the equation:
$c_{0}+c_{1}e+c_{2}e^{2}+\cdots +c_{n}e^{n}=0,\qquad c_{0},c_{n}\neq 0~.$
Now for a positive integer k, we define the following polynomial:
$f_{k}(x)=x^{k}\left[(x-1)\cdots (x-n)\right]^{k+1},$
and multiply both sides of the above equation by
$\int _{0}^{\infty }f_{k}\ e^{-x}\ \mathrm {d} \ x\ ,$
to arrive at the equation:
$c_{0}\left(\int _{0}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\right)+c_{1}e\left(\int _{0}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\right)+\cdots +c_{n}e^{n}\left(\int _{0}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\right)=0~.$
By splitting respective domains of integration, this equation can be written in the form
$P+Q=0$
where
${\begin{aligned}P&=c_{0}\left(\int _{0}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\right)+c_{1}e\left(\int _{1}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\right)+c_{2}e^{2}\left(\int _{2}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\right)+\cdots +c_{n}e^{n}\left(\int _{n}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\right)\\Q&=c_{1}e\left(\int _{0}^{1}f_{k}e^{-x}\ \mathrm {d} \ x\right)+c_{2}e^{2}\left(\int _{0}^{2}f_{k}e^{-x}\ \mathrm {d} \ x\right)+\cdots +c_{n}e^{n}\left(\int _{0}^{n}f_{k}e^{-x}\ \mathrm {d} \ x\right)\end{aligned}}$
Lemma 1. For an appropriate choice of k, $\ {\tfrac {P}{k!}}\ $ is a non-zero integer.
Proof. Each term in P is an integer times a sum of factorials, which results from the relation
$\ \int _{0}^{\infty }x^{j}e^{-x}\ \mathrm {d} \ x=j!\ $
which is valid for any positive integer j (consider the Gamma function).
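The displayed identity is the Γ-function evaluation $\Gamma (j+1)=j!$, and even a crude trapezoidal quadrature confirms it numerically. A sketch (the cutoff 40 and the step count are arbitrary choices; the tail beyond the cutoff is negligible):

```python
import math

def gamma_integral(j, upper=40.0, steps=100000):
    """Trapezoidal approximation of the integral of x^j * e^(-x)
    over [0, upper], for j >= 1 (the x = 0 endpoint term vanishes)."""
    h = upper / steps
    total = 0.5 * (upper ** j * math.exp(-upper))  # endpoint terms
    for i in range(1, steps):
        x = i * h
        total += x ** j * math.exp(-x)
    return total * h

# The integral of x^j e^(-x) over [0, inf) equals Gamma(j+1) = j!.
for j in range(1, 5):
    assert abs(gamma_integral(j) - math.factorial(j)) < 1e-3
    assert abs(math.gamma(j + 1) - math.factorial(j)) < 1e-9
```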
It is non-zero because for every a satisfying 0 < a ≤ n , the integrand in
$c_{a}e^{a}\int _{a}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x$
is $e^{-x}$ times a sum of terms whose lowest power of x is k + 1 after substituting x + a for x in the integral. Then this becomes a sum of integrals of the form
$\ A_{j-k}\int _{0}^{\infty }x^{j}e^{-x}\ \mathrm {d} \ x\ $
with k + 1 ≤ j , where $A_{j-k}$ is an integer. It is therefore an integer divisible by (k+1)! . After dividing by k!, we get zero modulo k + 1 . However, we can write:
$\ \int _{0}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x=\int _{0}^{\infty }\left(\left[(-1)^{n}(n!)\right]^{k+1}e^{-x}x^{k}+\cdots \right)\ \mathrm {d} \ x\ $
and thus
${\frac {1}{k!}}c_{0}\int _{0}^{\infty }f_{k}e^{-x}\ \mathrm {d} \ x\equiv c_{0}\left[\ (-1)^{n}(n!)\ \right]^{k+1}\ \not \equiv \ 0{\pmod {k+1}}~.$
So when dividing each integral in P by k!, the initial one is not divisible by k + 1 , but all the others are, as long as k + 1 is prime and larger than n and |c0| . It follows that $\ {\frac {\ P\ }{k!}}\ $ itself is not divisible by the prime k + 1 and therefore cannot be zero.
Lemma 2. $\left|{\tfrac {Q}{k!}}\right|<1$ for sufficiently large $k$.
Proof. Note that
${\begin{aligned}f_{k}e^{-x}&=x^{k}\left[(x-1)(x-2)\cdots (x-n)\right]^{k+1}e^{-x}\\&=\left(x(x-1)\cdots (x-n)\right)^{k}\cdot \left((x-1)\cdots (x-n)e^{-x}\right)\\&=u(x)^{k}\cdot v(x)\end{aligned}}$
where $\ u(x)\ $ and $\ v(x)\ $ are continuous functions of $\ x\ $ for all $\ x\ ,$ so are bounded on the interval $\ [0,n]~.$ That is, there are constants $\ G,H>0\ $ such that
$\ \left|f_{k}e^{-x}\right|\leq |u(x)|^{k}\cdot |v(x)|<G^{k}H\quad {\text{ for }}0\leq x\leq n~.$
So each of those integrals composing $\ Q\ $ is bounded, the worst case being
$\ \left|\int _{0}^{n}f_{k}e^{-x}\ \mathrm {d} \ x\right|\leq \int _{0}^{n}\left|f_{k}e^{-x}\right|\ \mathrm {d} \ x\leq \int _{0}^{n}G^{k}H\ \mathrm {d} \ x=nG^{k}H~.$
It is now possible to bound the sum $Q$ as well:
$\ |Q|<G^{k}\cdot nH\left(|c_{1}|e+|c_{2}|e^{2}+\cdots +|c_{n}|e^{n}\right)=G^{k}\cdot M\ ,$
where $\ M\ $ is a constant not depending on $\ k~.$ It follows that
$\ \left|{\frac {Q}{k!}}\right|<M\cdot {\frac {G^{k}}{k!}}\to 0\quad {\text{ as }}k\to \infty \ ,$
finishing the proof of this lemma.
Choosing a value of $\ k\ $ satisfying both lemmas leads to a non-zero integer ($\ {\frac {P}{k!}}\ $) added to a vanishingly small quantity ($\ {\frac {Q}{k!}}\ $) equaling zero, which is impossible. It follows that the original assumption, that e can satisfy a polynomial equation with integer coefficients, is also impossible; that is, e is transcendental.
The transcendence of π
A similar strategy, different from Lindemann's original approach, can be used to show that the number π is transcendental. Besides the gamma-function and some estimates as in the proof for e, facts about symmetric polynomials play a vital role in the proof.
For detailed information concerning the proofs of the transcendence of π and e, see the references and external links.
See also
• Transcendental number theory, the study of questions related to transcendental numbers
• Gelfond–Schneider theorem
• Diophantine approximation
• Periods, a set of numbers (including both transcendental and algebraic numbers) which may be defined by integral equations.
Notes
1. Cantor's construction builds a one-to-one correspondence between the set of transcendental numbers and the set of real numbers. In this article, Cantor only applies his construction to the set of irrational numbers.[15]
2. The name 'Fredholm number' is misplaced: Kempner first proved this number is transcendental, and the note on page 403 states that Fredholm never studied this number.[33]
References
1. Pickover, Cliff. "The 15 most famous transcendental numbers". sprott.physics.wisc.edu. Retrieved 2020-01-23.
2. Shidlovskii, Andrei B. (June 2011). Transcendental Numbers. Walter de Gruyter. p. 1. ISBN 9783110889055.
3. Bunday, B. D.; Mulholland, H. (20 May 2014). Pure Mathematics for Advanced Level. Butterworth-Heinemann. ISBN 978-1-4831-0613-7. Retrieved 21 March 2021.
4. Baker, A. (1964). "On Mahler's classification of transcendental numbers". Acta Mathematica. 111: 97–120. doi:10.1007/bf02391010. S2CID 122023355.
5. Heuer, Nicolaus; Loeh, Clara (1 November 2019). "Transcendental simplicial volumes". arXiv:1911.06386 [math.GT].
6. "Real number". Encyclopædia Britannica. mathematics. Retrieved 2020-08-11.
7. "transcendental". Oxford English Dictionary. s.v.
8. Leibniz, Gerhardt & Pertz 1858, pp. 97–98; Bourbaki 1994, p. 74
9. Erdős & Dudley 1983
10. Lambert 1768
11. Kempner 1916
12. Weisstein, Eric W. "Liouville's Constant". MathWorld.
13. Liouville 1851
14. Cantor 1874; Gray 1994
15. Cantor 1878, p. 254
16. Baker, Alan (1998). J.J. O'Connor and E.F. Robertson. www-history.mcs.st-andrews.ac.uk (biographies). The MacTutor History of Mathematics archive. St. Andrew's, Scotland: University of St. Andrew's.
17. Hardy 1979
18. Adamczewski & Bugeaud 2005
19. Weisstein, Eric W. "Dottie Number". Wolfram MathWorld. Wolfram Research, Inc. Retrieved 23 July 2016.
20. Siegel, Carl L. (2014). "Über einige Anwendungen diophantischer Approximationen". On Some Applications of Diophantine Approximations: a translation of Carl Ludwig Siegel's Über einige Anwendungen diophantischer Approximationen by Clemens Fuchs, with a commentary and the article Integral points on curves: Siegel's theorem after Siegel's proof by Clemens Fuchs and Umberto Zannier (in German). Scuola Normale Superiore. pp. 81–138. doi:10.1007/978-88-7642-520-2_2. ISBN 978-88-7642-520-2.
21. Lorch, Lee; Muldoon, Martin E. (1995). "Transcendentality of zeros of higher derivatives of functions involving Bessel functions". International Journal of Mathematics and Mathematical Sciences. 18 (3): 551–560. doi:10.1155/S0161171295000706.
22. Mező, István; Baricz, Árpád (June 22, 2015). "On the generalization of the Lambert W function" (PDF).
23. le Lionnais 1979, p. 46 via Wolfram Mathworld, Transcendental Number
24. Chudnovsky 1984 via Wolfram Mathworld, Transcendental Number
25. "Mathematical constants". Mathematics (general). Cambridge University Press. Retrieved 2022-09-22.
26. Waldschmidt, Michel (September 7, 2005). "Transcendence of Periods: The State of the Art" (PDF). webusers.imj-prg.fr.
27. Davison & Shallit 1991
28. Weisstein, Eric W. "Transcendental Number". mathworld.wolfram.com. Retrieved 2023-08-09.
29. Mahler 1937; Mahler 1976, p. 12
30. Calude 2002, p. 239
31. Grue Simonsen, Jakob. "Specker Sequences Revisited" (PDF). hjemmesider.diku.dk.{{cite web}}: CS1 maint: url-status (link)
32. Shallit 1996
33. Allouche & Shallit 2003, pp. 385, 403
34. Loxton 1988
35. Duverney, Daniel; Nishioka, Keiji; Nishioka, Kumiko; Shiokawa, Iekata (1997). "Transcendence of Rogers-Ramanujan continued fraction and reciprocal sums of Fibonacci numbers". Proceedings of the Japan Academy, Series A, Mathematical Sciences. 73 (7): 140–142. doi:10.3792/pjaa.73.140. ISSN 0386-2194.
36. Bertrand, Daniel (1997). "Theta functions and transcendence". The Ramanujan Journal. 1 (4): 339–350. doi:10.1023/A:1009749608672.
37. "A140654 - OEIS". oeis.org. Retrieved 2023-08-12.
38. Weisstein, Eric W. "van der Corput's Constant". mathworld.wolfram.com. Retrieved 2023-08-10.
39. Weisstein, Eric W. "Zolotarev-Schur Constant". mathworld.wolfram.com. Retrieved 2023-08-12.
40. Todd, John (1975). "The lemniscate constants". Communications of the ACM. 18: 14–19. doi:10.1145/360569.360580. S2CID 85873.
41. Kurosawa, Takeshi (2007-03-01). "Transcendence of certain series involving binary linear recurrences". Journal of Number Theory. 123 (1): 35–58. doi:10.1016/j.jnt.2006.05.019. ISSN 0022-314X.
42. Yoshinaga, Masahiko (2008-05-03). "Periods and elementary real numbers". arXiv:0805.0349 [math.AG].
43. Steven R. Finch (2003). Mathematical Constants. Cambridge University Press. p. 479. ISBN 978-3-540-67695-9. Schmutz.
44. Mahler 1929; Allouche & Shallit 2003, p. 387
45. Weisstein, Eric W. "Rabbit Constant". mathworld.wolfram.com. Retrieved 2023-08-09.
46. Allouche, Jean-Paul; Cosnard, Michel (2000), "The Komornik–Loreti constant is transcendental", American Mathematical Monthly, 107 (5): 448–449, doi:10.2307/2695302, JSTOR 2695302, MR 1763399
47. Pytheas Fogg 2002
48. "A143347 - OEIS". oeis.org. Retrieved 2023-08-09.
49. Bugeaud 2012, p. 113.
50. Adamczewski, Boris (March 2013). "The Many Faces of the Kempner Number" (PDF). arxiv.org. arXiv:1303.1685.{{cite web}}: CS1 maint: url-status (link)
51. Blanchard & Mendès France 1982
52. Mahler, Kurt; Mordell, Louis Joel (1968-06-04). "Applications of a theorem by A. B. Shidlovski". Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences. 305 (1481): 149–173. Bibcode:1968RSPSA.305..149M. doi:10.1098/rspa.1968.0111. S2CID 123486171.
53. Lagarias, Jeffrey C. (2013-07-19). "Euler's constant: Euler's work and modern developments". Bulletin of the American Mathematical Society. 50 (4): 527–628. arXiv:1303.1856. doi:10.1090/S0273-0979-2013-01423-X. ISSN 0273-0979.
54. Weisstein, Eric W. "Weierstrass Constant". mathworld.wolfram.com. Retrieved 2023-08-12.
55. Elsner, Carsten; Shimomura, Shun; Shiokawa, Iekata (2012-09-01). "Algebraic independence of certain numbers related to modular functions". Functiones et Approximatio Commentarii Mathematici. 47 (1). doi:10.7169/facm/2012.47.1.10. ISSN 0208-6573.
56. Weisstein, Eric W. "Irrational Number". MathWorld.
57. Weisstein, Eric W. "e". mathworld.wolfram.com. Retrieved 2023-08-12.
58. Murty, M. Ram; Saradha, N. (2010-12-01). "Euler–Lehmer constants and a conjecture of Erdös". Journal of Number Theory. 130 (12): 2671–2682. doi:10.1016/j.jnt.2010.07.004. ISSN 0022-314X.
59. Murty, M. Ram; Zaytseva, Anastasia (2013-01-01). "Transcendence of generalized Euler constants". The American Mathematical Monthly. 120 (1): 48–54. doi:10.4169/amer.math.monthly.120.01.048. ISSN 0002-9890. S2CID 20495981.
60. Rivoal, Tanguy (2012). "On the arithmetic nature of the values of the gamma function, Euler's constant, and Gompertz's constant". Michigan Mathematical Journal. 61 (2): 239–254. doi:10.1307/mmj/1339011525. ISSN 0026-2285.
61. "A093540 - OEIS". oeis.org. Retrieved 2023-08-12.
62. Rivoal, T.; Zudilin, W. (2003-08-01). "Diophantine properties of numbers related to Catalan's constant". Mathematische Annalen. 326 (4): 705–721. doi:10.1007/s00208-003-0420-2. hdl:1959.13/803688. ISSN 1432-1807.
63. Marshall, J. Ash; Tan, Yiren (March 2012). "A rational number of the form aa with a irrational" (PDF).{{cite web}}: CS1 maint: url-status (link)
Sources
• Adamczewski, Boris; Bugeaud, Yann (2005). "On the complexity of algebraic numbers, II. Continued fractions". Acta Mathematica. 195 (1): 1–20. arXiv:math/0511677. Bibcode:2005math.....11677A. doi:10.1007/BF02588048. S2CID 15521751.
• Allouche, J.-P. [in French]; Shallit, J. (2003). Automatic Sequences: Theory, applications, generalizations. Cambridge University Press. ISBN 978-0-521-82332-6. Zbl 1086.11015.
• Baker, A. (1990). Transcendental Number Theory (paperback ed.). Cambridge University Press. ISBN 978-0-521-20461-3. Zbl 0297.10013.
• Blanchard, André; Mendès France, Michel (1982). "Symétrie et transcendance". Bulletin des Sciences Mathématiques. 106 (3): 325–335. MR 0680277.
• Bourbaki, N. (1994). Elements of the History of Mathematics. Springer. ISBN 9783540647676 – via Internet Archive.
• Bugeaud, Yann (2012). Distribution modulo one and Diophantine approximation. Cambridge Tracts in Mathematics. Vol. 193. Cambridge University Press. ISBN 978-0-521-11169-0. Zbl 1260.11001.
• Burger, Edward B.; Tubbs, Robert (2004). Making transcendence transparent. An intuitive approach to classical transcendental number theory. Springer. ISBN 978-0-387-21444-3. Zbl 1092.11031.
• Calude, Cristian S. (2002). Information and Randomness: An algorithmic perspective. Texts in Theoretical Computer Science (2nd rev. and ext. ed.). Springer. ISBN 978-3-540-43466-5. Zbl 1055.68058.
• Cantor, G. (1874). "Über eine Eigenschaft des Inbegriffes aller reelen algebraischen Zahlen". J. Reine Angew. Math. 77: 258–262.
• Cantor, G. (1878). "Ein Beitrag zur Mannigfaltigkeitslehre". J. Reine Angew. Math. 84: 242–258.
• Chudnovsky, G.V. (1984). Contributions to the Theory of Transcendental Numbers. American Mathematical Society. ISBN 978-0-8218-1500-7.
• Davison, J. Les; Shallit, J.O. (1991). "Continued fractions for some alternating series". Monatshefte für Mathematik. 111 (2): 119–126. doi:10.1007/BF01332350. S2CID 120003890.
• Erdős, P.; Dudley, U. (1983). "Some Remarks and Problems in Number Theory Related to the Work of Euler" (PDF). Mathematics Magazine. 56 (5): 292–298. CiteSeerX 10.1.1.210.6272. doi:10.2307/2690369. JSTOR 2690369.
• Gelfond, A. (1960) [1956]. Transcendental and Algebraic Numbers (reprint ed.). Dover.
• Gray, Robert (1994). "Georg Cantor and transcendental numbers". Amer. Math. Monthly. 101 (9): 819–832. doi:10.2307/2975129. JSTOR 2975129. Zbl 0827.01004 – via maa.org.
• Hardy, G.H. (1979). An Introduction to the Theory of Numbers (5th ed.). Oxford: Clarendon Press. p. 159. ISBN 0-19-853171-0.
• Higgins, Peter M. (2008). Number Story. Copernicus Books. ISBN 978-1-84800-001-8.
• Hilbert, D. (1893). "Über die Transcendenz der Zahlen e und $\pi $". Mathematische Annalen. 43 (2–3): 216–219. doi:10.1007/BF01443645. S2CID 179177945.
• Kempner, Aubrey J. (1916). "On Transcendental Numbers". Transactions of the American Mathematical Society. 17 (4): 476–482. doi:10.2307/1988833. JSTOR 1988833.
• Lambert, J.H. (1768). "Mémoire sur quelques propriétés remarquables des quantités transcendantes, circulaires et logarithmiques". Mémoires de l'Académie Royale des Sciences de Berlin: 265–322.
• Leibniz, G.W.; Gerhardt, Karl Immanuel; Pertz, Georg Heinrich (1858). Leibnizens mathematische Schriften. Vol. 5. A. Asher & Co. pp. 97–98 – via Internet Archive.
• le Lionnais, F. (1979). Les nombres remarquables. Hermann. ISBN 2-7056-1407-9.
• le Veque, W.J. (2002) [1956]. Topics in Number Theory. Vol. I and II. Dover. ISBN 978-0-486-42539-9 – via Internet Archive.
• Liouville, J. (1851). "Sur des classes très étendues de quantités dont la valeur n'est ni algébrique, ni même réductible à des irrationnelles algébriques" (PDF). J. Math. Pures Appl. 16: 133–142.
• Loxton, J.H. (1988). "13. Automata and transcendence". In Baker, A. (ed.). New Advances in Transcendence Theory. Cambridge University Press. pp. 215–228. ISBN 978-0-521-33545-4. Zbl 0656.10032.
• Mahler, K. (1929). "Arithmetische Eigenschaften der Lösungen einer Klasse von Funktionalgleichungen". Math. Annalen. 101: 342–366. doi:10.1007/bf01454845. JFM 55.0115.01. S2CID 120549929.
• Mahler, K. (1937). "Arithmetische Eigenschaften einer Klasse von Dezimalbrüchen". Proc. Konin. Neder. Akad. Wet. Ser. A (40): 421–428.
• Mahler, K. (1976). Lectures on Transcendental Numbers. Lecture Notes in Mathematics. Vol. 546. Springer. ISBN 978-3-540-07986-6. Zbl 0332.10019.
• Natarajan, Saradha [in French]; Thangadurai, Ravindranathan (2020). Pillars of Transcendental Number Theory. Springer Verlag. ISBN 978-981-15-4154-4.
• Pytheas Fogg, N. (2002). Berthé, V.; Ferenczi, Sébastien; Mauduit, Christian; Siegel, A. (eds.). Substitutions in dynamics, arithmetics and combinatorics. Lecture Notes in Mathematics. Vol. 1794. Springer. ISBN 978-3-540-44141-0. Zbl 1014.11015.
• Shallit, J. (15–26 July 1996). "Number theory and formal languages". In Hejhal, D.A.; Friedman, Joel; Gutzwiller, M.C.; Odlyzko, A.M. (eds.). Emerging Applications of Number Theory. IMA Summer Program. The IMA Volumes in Mathematics and its Applications. Vol. 109. Minneapolis, MN: Springer (published 1999). pp. 547–570. ISBN 978-0-387-98824-5.
External links
Wikisource has original text related to this article:
Über die Transzendenz der Zahlen e und π. (in German)
• Weisstein, Eric W. "Transcendental Number". MathWorld.
• Weisstein, Eric W. "Liouville Number". MathWorld.
• Weisstein, Eric W. "Liouville's Constant". MathWorld.
• "Proof that e is transcendental". planetmath.org.
• "Proof that the Liouville constant is transcendental". deanlm.com.
• Fritsch, R. (29 March 1988). Transzendenz von e im Leistungskurs? [Transcendence of e in advanced courses?] (PDF). Rahmen der 79. Hauptversammlung des Deutschen Vereins zur Förderung des mathematischen und naturwissenschaftlichen Unterrichts [79th Annual, General Meeting of the German Association for the Promotion of Mathematics and Science Education]. Der mathematische und naturwissenschaftliche Unterricht (in German). Vol. 42. Kiel, DE (published 1989). pp. 75–80 (presentation), 375–376 (responses). Archived from the original (PDF) on 2011-07-16 – via University of Munich (mathematik.uni-muenchen.de). — Proof that e is transcendental, in German.
• Fritsch, R. (2003). "Hilberts Beweis der Transzendenz der Ludolphschen Zahl π" (PDF). Дифференциальная геометрия многообразий фигур (in German). 34: 144–148. Archived from the original (PDF) on 2011-07-16 – via University of Munich (mathematik.uni-muenchen.de/~fritsch).
Irrational numbers
• Chaitin's (Ω)
• Liouville
• Prime (ρ)
• Omega
• Cahen
• Logarithm of 2
• Gauss's (G)
• Twelfth root of 2
• Apéry's (ζ(3))
• Plastic (ρ)
• Square root of 2
• Supergolden ratio (ψ)
• Erdős–Borwein (E)
• Golden ratio (φ)
• Square root of 3
• Square root of pi (√π)
• Square root of 5
• Silver ratio (δS)
• Square root of 6
• Square root of 7
• Euler's (e)
• Pi (π)
• Schizophrenic
• Transcendental
• Trigonometric
Number systems
Sets of definable numbers
• Natural numbers ($\mathbb {N} $)
• Integers ($\mathbb {Z} $)
• Rational numbers ($\mathbb {Q} $)
• Constructible numbers
• Algebraic numbers ($\mathbb {A} $)
• Closed-form numbers
• Periods
• Computable numbers
• Arithmetical numbers
• Set-theoretically definable numbers
• Gaussian integers
Composition algebras
• Division algebras: Real numbers ($\mathbb {R} $)
• Complex numbers ($\mathbb {C} $)
• Quaternions ($\mathbb {H} $)
• Octonions ($\mathbb {O} $)
Split
types
• Over $\mathbb {R} $:
• Split-complex numbers
• Split-quaternions
• Split-octonions
Over $\mathbb {C} $:
• Bicomplex numbers
• Biquaternions
• Bioctonions
Other hypercomplex
• Dual numbers
• Dual quaternions
• Dual-complex numbers
• Hyperbolic quaternions
• Sedenions ($\mathbb {S} $)
• Split-biquaternions
• Multicomplex numbers
• Geometric algebra/Clifford algebra
• Algebra of physical space
• Spacetime algebra
Other types
• Cardinal numbers
• Extended natural numbers
• Irrational numbers
• Fuzzy numbers
• Hyperreal numbers
• Levi-Civita field
• Surreal numbers
• Transcendental numbers
• Ordinal numbers
• p-adic numbers (p-adic solenoids)
• Supernatural numbers
• Profinite integers
• Superreal numbers
• Normal numbers
• Classification
• List
Number theory
Fields
• Algebraic number theory (class field theory, non-abelian class field theory, Iwasawa theory, Iwasawa–Tate theory, Kummer theory)
• Analytic number theory (analytic theory of L-functions, probabilistic number theory, sieve theory)
• Geometric number theory
• Computational number theory
• Transcendental number theory
• Diophantine geometry (Arakelov theory, Hodge–Arakelov theory)
• Arithmetic combinatorics (additive number theory)
• Arithmetic geometry (anabelian geometry, P-adic Hodge theory)
• Arithmetic topology
• Arithmetic dynamics
Key concepts
• Numbers
• Natural numbers
• Prime numbers
• Rational numbers
• Irrational numbers
• Algebraic numbers
• Transcendental numbers
• P-adic numbers (P-adic analysis)
• Arithmetic
• Modular arithmetic
• Chinese remainder theorem
• Arithmetic functions
Advanced concepts
• Quadratic forms
• Modular forms
• L-functions
• Diophantine equations
• Diophantine approximation
• Continued fractions
• Category
• List of topics
• List of recreational topics
• Wikibook
• Wikiversity
| Wikipedia |
Algebraic element
In mathematics, if L is a field extension of K, then an element a of L is called an algebraic element over K, or just algebraic over K, if there exists some non-zero polynomial g(x) with coefficients in K such that g(a) = 0. Elements of L which are not algebraic over K are called transcendental over K.
These notions generalize the algebraic numbers and the transcendental numbers (where the field extension is C/Q, C being the field of complex numbers and Q being the field of rational numbers).
Examples
• The square root of 2 is algebraic over Q, since it is the root of the polynomial g(x) = x2 − 2 whose coefficients are rational.
• Pi is transcendental over Q but algebraic over the field of real numbers R: it is the root of g(x) = x − π, whose coefficients (1 and −π) are both real, but not of any polynomial with only rational coefficients. (The definition of the term transcendental number uses C/Q, not C/R.)
Properties
The following conditions are equivalent for an element $a$ of $L$:
• $a$ is algebraic over $K$,
• the field extension $K(a)/K$ is algebraic, i.e. every element of $K(a)$ is algebraic over $K$ (here $K(a)$ denotes the smallest subfield of $L$ containing $K$ and $a$),
• the field extension $K(a)/K$ has finite degree, i.e. the dimension of $K(a)$ as a $K$-vector space is finite,
• $K[a]=K(a)$, where $K[a]$ is the set of all elements of $L$ that can be written in the form $g(a)$ with a polynomial $g$ whose coefficients lie in $K$.
To make this more explicit, consider the polynomial evaluation $\varepsilon _{a}:K[X]\rightarrow K(a),\,P\mapsto P(a)$. This is a homomorphism and its kernel is $\{P\in K[X]\mid P(a)=0\}$. If $a$ is algebraic, this ideal contains non-zero polynomials, but as $K[X]$ is a euclidean domain, it contains a unique polynomial $p$ with minimal degree and leading coefficient $1$, which then also generates the ideal and must be irreducible. The polynomial $p$ is called the minimal polynomial of $a$ and it encodes many important properties of $a$. Hence the ring isomorphism $K[X]/(p)\rightarrow \mathrm {im} (\varepsilon _{a})$ obtained by the homomorphism theorem is an isomorphism of fields, where we can then observe that $\mathrm {im} (\varepsilon _{a})=K(a)$. Otherwise, $\varepsilon _{a}$ is injective and hence we obtain a field isomorphism $K(X)\rightarrow K(a)$, where $K(X)$ is the field of fractions of $K[X]$, i.e. the field of rational functions on $K$, by the universal property of the field of fractions. We can conclude that in any case, we find an isomorphism $K(a)\cong K[X]/(p)$ or $K(a)\cong K(X)$. Investigating this construction yields the desired results.
This characterization can be used to show that the sum, difference, product and quotient of algebraic elements over $K$ are again algebraic over $K$. For if $a$ and $b$ are both algebraic, then the extension $(K(a))(b)/K$ is finite. As it contains the aforementioned combinations of $a$ and $b$, adjoining one of them to $K$ also yields a finite extension, and therefore these elements are algebraic as well. Thus the set of all elements of $L$ which are algebraic over $K$ is a field that sits in between $L$ and $K$.
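As a numerical illustration of this closure property (not a proof), one can check that $\sqrt{2}$ and $\sqrt{3}$, each algebraic over $\mathbb{Q}$, have a sum that is again algebraic: squaring twice shows that $\sqrt{2}+\sqrt{3}$ is a root of $x^{4}-10x^{2}+1$, a polynomial with rational coefficients. A minimal sketch:

```python
import math

a = math.sqrt(2)   # root of x^2 - 2
b = math.sqrt(3)   # root of x^2 - 3
s = a + b

# Squaring twice eliminates the radicals:
# s^2 = 5 + 2*sqrt(6)  =>  (s^2 - 5)^2 = 24  =>  s^4 - 10*s^2 + 1 = 0
value = s**4 - 10 * s**2 + 1

print(abs(value) < 1e-9)  # True: s is (numerically) a root of x^4 - 10x^2 + 1
```

The check is of course only floating-point evidence; the algebraic argument above is what guarantees the result.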
Fields that do not allow any algebraic elements over them (except their own elements) are called algebraically closed. The field of complex numbers is an example. If $L$ is algebraically closed, then the field of algebraic elements of $L$ over $K$ is algebraically closed, which can again be directly shown using the characterisation of simple algebraic extensions above. An example for this is the field of algebraic numbers.
See also
• Algebraic independence
References
• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001
| Wikipedia |
Transcendental equation
In applied mathematics, a transcendental equation is an equation over the real (or complex) numbers that is not algebraic, that is, an equation in which at least one side describes a transcendental function.[1] Examples include:
${\begin{aligned}x&=e^{-x}\\x&=\cos x\\2^{x}&=x^{2}\end{aligned}}$
A transcendental equation need not be an equation between elementary functions, although most published examples are.
In some cases, a transcendental equation can be solved by transforming it into an equivalent algebraic equation. Some such transformations are sketched below; computer algebra systems may provide more elaborated transformations.[2]
In general, however, only approximate solutions can be found.[3]
Transformation into an algebraic equation
Ad hoc methods exist for some classes of transcendental equations in one variable to transform them into algebraic equations which then might be solved.
Exponential equations
If the unknown, say x, occurs only in exponents:
• applying the natural logarithm to both sides may yield an algebraic equation,[4] e.g.
$4^{x}=3^{x^{2}-1}\cdot 2^{5x}$ transforms to $x\ln 4=(x^{2}-1)\ln 3+5x\ln 2$, which simplifies to $x^{2}\ln 3+x(5\ln 2-\ln 4)-\ln 3=0$, which has the solutions $x={\frac {-3\ln 2\pm {\sqrt {9(\ln 2)^{2}-4(\ln 3)^{2}}}}{2\ln 3}}.$
This will not work if addition occurs "at the base line", as in $4^{x}=3^{x^{2}-1}+2^{5x}.$
• if all "base constants" can be written as integer or rational powers of some number q, then substituting y=qx may succeed, e.g.
$2^{x-1}+4^{x-2}-8^{x-2}=0$ transforms, using y=2x, to ${\frac {1}{2}}y+{\frac {1}{16}}y^{2}-{\frac {1}{64}}y^{3}=0$ which has the solutions $y\in \{0,-4,8\}$, hence $x=\log _{2}8=3$ is the only real solution.[5]
This will not work if squares or higher power of x occurs in an exponent, or if the "base constants" do not "share" a common q.
• sometimes, substituting y=xex may obtain an algebraic equation; after the solutions for y are known, those for x can be obtained by applying the Lambert W function, e.g.:
$x^{2}e^{2x}+2=3xe^{x}$ transforms to $y^{2}+2=3y,$ which has the solutions $y\in \{1,2\},$ hence $x\in \{W_{0}(1),W_{0}(2)\}$, where $W_{0}$ denotes the principal real-valued branch of the multivalued Lambert $W$ function; since both values of $y$ are positive, $xe^{x}=y$ has exactly one real solution for each.
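Both worked examples above can be confirmed numerically. To avoid a special-function library, the sketch below inverts $y=xe^{x}$ on the principal branch with a simple bisection; `w0` is a hypothetical helper name, not a standard API.

```python
import math

# Check the q-substitution example: 2^(x-1) + 4^(x-2) - 8^(x-2) = 0 at x = 3.
x = 3
assert 2**(x-1) + 4**(x-2) - 8**(x-2) == 0   # 4 + 4 - 8

# For x^2 e^(2x) + 2 = 3 x e^x, the substitution y = x e^x gives y^2 + 2 = 3y,
# so y = 1 or y = 2.  Invert y = x e^x on the principal branch by bisection.
def w0(y, lo=0.0, hi=10.0, tol=1e-12):
    """Principal branch of x*e^x = y for y >= 0 (bisection sketch)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * math.exp(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for y in (1, 2):
    root = w0(y)
    # each root solves the original transcendental equation:
    assert abs(root**2 * math.exp(2*root) + 2 - 3*root*math.exp(root)) < 1e-9
```

In production code a library Lambert-W routine would replace the bisection, but the substitution logic is the same.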
Logarithmic equations
If the unknown x occurs only in arguments of a logarithm function:
• applying exponentiation to both sides may yield an algebraic equation, e.g.
$2\log _{5}(3x-1)-\log _{5}(12x+1)=0$ transforms, using exponentiation to base $5.$ to ${\frac {(3x-1)^{2}}{12x+1}}=1,$ which has the solutions $x\in \{0,2\}.$ If only real numbers are considered, $x=0$ is not a solution, as it leads to a non-real subexpression $\log _{5}(-1)$ in the given equation.
This requires the original equation to consist of integer-coefficient linear combinations of logarithms w.r.t. a unique base, and the logarithm arguments to be polynomials in x.[6]
• if all "logarithm calls" have a unique base $b$ and a unique argument expression $f(x),$ then substituting $y=\log _{b}(f(x))$ may lead to a simpler equation,[7] e.g.
$5\ln(\sin x^{2})+6=7{\sqrt {\ln(\sin x^{2})+8}}$ transforms, using $y=\ln(\sin x^{2}),$ to $5y+6=7{\sqrt {y+8}},$ which is algebraic and can be solved. After that, applying inverse operations to the substitution equation yields ${\sqrt {\arcsin \exp y}}=x.$
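The first logarithmic example is easy to verify numerically: $x=2$ satisfies the equation, while $x=0$ is rejected because it makes the logarithm argument negative. A quick check (the `log5` helper is just a local convenience, assuming base conversion via natural logarithms):

```python
import math

def log5(t):
    return math.log(t) / math.log(5)

# 2*log5(3x-1) - log5(12x+1) = 0 at x = 2.
x = 2
assert abs(2 * log5(3*x - 1) - log5(12*x + 1)) < 1e-12

# The candidate x = 0 makes the argument 3x - 1 = -1 negative, so the
# real logarithm is undefined there and x = 0 is not a real solution.
assert 3*0 - 1 < 0
```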
Trigonometric equations
If the unknown x occurs only as argument of trigonometric functions:
• applying Pythagorean identities and trigonometric sum and multiple formulas, arguments of the forms $\sin(nx+a),\cos(mx+b),\tan(lx+c),...$ with integer $n,m,l,...$ might all be transformed to arguments of the form, say, $\sin x$. After that, substituting $y=\sin(x)$ yields an algebraic equation,[8] e.g.
$\sin(x+a)=(\cos ^{2}x)-1$ transforms to $(\sin x)(\cos a)+{\sqrt {1-\sin ^{2}x}}(\sin a)=1-(\sin ^{2}x)-1$, and, after substitution, to $y(\cos a)+{\sqrt {1-y^{2}}}(\sin a)=-y^{2}$ which is algebraic[9] and can be solved. After that, applying $x=2k\pi +\arcsin y$ obtains the solutions.
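To make the trigonometric pattern concrete, one can specialize the example to $a=\pi/2$ (an assumption chosen for simplicity, since then $\sin(x+\pi/2)=\cos x$). With $c=\cos x$ the equation becomes the quadratic $c=c^{2}-1$, i.e. $c^{2}-c-1=0$, whose admissible root is $c=(1-{\sqrt {5}})/2$:

```python
import math

# Specialize to a = pi/2, where sin(x + a) = cos(x).
# The equation sin(x + pi/2) = cos(x)^2 - 1 becomes, with c = cos(x),
# the algebraic (quadratic) equation c = c^2 - 1, i.e. c^2 - c - 1 = 0.
c = (1 - math.sqrt(5)) / 2          # the root with |c| <= 1
x = math.acos(c)

lhs = math.sin(x + math.pi / 2)
rhs = math.cos(x)**2 - 1
assert abs(lhs - rhs) < 1e-12
```

The other quadratic root, $(1+{\sqrt {5}})/2>1$, lies outside the range of the cosine and is discarded, illustrating that solutions of the auxiliary algebraic equation must still be checked against the substitution.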
Hyperbolic equations
If the unknown x occurs only in linear expressions inside arguments of hyperbolic functions,
• unfolding them by their defining exponential expressions and substituting $y=\exp(x)$ yields an algebraic equation,[10] e.g.
$3\cosh x=4+\sinh(2x-6)$ unfolds to ${\frac {3}{2}}(e^{x}+{\frac {1}{e^{x}}})=4+{\frac {1}{2}}\left({\frac {(e^{x})^{2}}{e^{6}}}-{\frac {e^{6}}{(e^{x})^{2}}}\right),$ which transforms to the equation ${\frac {3}{2}}(y+{\frac {1}{y}})=4+{\frac {1}{2}}\left({\frac {y^{2}}{e^{6}}}-{\frac {e^{6}}{y^{2}}}\right),$ which is algebraic[11] and can be solved. Applying $x=\ln y$ obtains the solutions of the original equation.
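A numeric check that the algebraic form in $y=e^{x}$ agrees with the original hyperbolic equation: a root is located by bisection (assuming, as a sign-change check confirms, that one lies in $[3,10]$), and the same point is then verified against the unfolded equation.

```python
import math

def f(x):
    return 3 * math.cosh(x) - 4 - math.sinh(2*x - 6)

# Bracket a sign change and bisect (f(3) > 0, f(10) < 0).
lo, hi = 3.0, 10.0
assert f(lo) > 0 > f(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2

# The substitution y = e^x must satisfy the algebraic form as well:
y = math.exp(x)
lhs = 1.5 * (y + 1/y)
rhs = 4 + 0.5 * (y**2 / math.e**6 - math.e**6 / y**2)
assert abs(lhs - rhs) < 1e-6
```

After clearing denominators the equation in $y$ is a quartic, so it could also be solved exactly; the bisection is just the quickest way to exhibit a root.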
Approximate solutions
Approximate numerical solutions to transcendental equations can be found using numerical, analytical approximations, or graphical methods.
Numerical methods for solving arbitrary equations are called root-finding algorithms.
In some cases, the equation can be well approximated using Taylor series near the zero. For example, for $k\approx 1$, the solutions of $\sin x=kx$ are approximately those of $(1-k)x-x^{3}/6=0$, namely $x=0$ and $x=\pm {\sqrt {6}}{\sqrt {1-k}}$.
For a graphical solution, one method is to set each side of a single-variable transcendental equation equal to a dependent variable and plot the two graphs, using their intersecting points to find solutions (see picture).
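The Taylor-series estimate above can be compared against a root-finding result. The sketch below takes $k=0.9$ (an arbitrary illustrative value close to 1) and bisects $g(x)=\sin x-kx$ on a bracket with a sign change:

```python
import math

k = 0.9

# Taylor-based approximation near zero: x ~ sqrt(6 * (1 - k))
x_approx = math.sqrt(6 * (1 - k))

# Bisection on g(x) = sin(x) - k*x over a bracket with a sign change.
def g(x):
    return math.sin(x) - k * x

lo, hi = 0.5, 1.5          # g(0.5) > 0, g(1.5) < 0
assert g(lo) > 0 > g(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
x_root = (lo + hi) / 2

# For k close to 1 the Taylor estimate is within a few percent.
assert abs(x_root - x_approx) / x_root < 0.05
```

For $k=0.9$ the estimate $\sqrt{0.6}\approx 0.775$ sits within about 2% of the bisected root, and the agreement improves as $k\to 1$.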
Other solutions
• Some transcendental systems of high-order equations can be solved by “separation” of the unknowns, reducing them to algebraic equations.[12][13]
• The following can also be used when solving transcendental equations/inequalities: If $x_{0}$ is a solution to the equation $f(x)=g(x)$ and $f(x)\leq c\leq g(x)$, then this solution must satisfy $f(x_{0})=g(x_{0})=c$. For example, we want to solve $\log _{2}\left(3+2x-x^{2}\right)=\tan ^{2}\left({\frac {\pi x}{4}}\right)+\cot ^{2}\left({\frac {\pi x}{4}}\right)$. The given equation is defined for $-1<x<3$. Let $f(x)=\log _{2}\left(3+2x-x^{2}\right)$ and $g(x)=\tan ^{2}\left({\frac {\pi x}{4}}\right)+\cot ^{2}\left({\frac {\pi x}{4}}\right)$. It is easy to show that $f(x)\leq 2$ and $g(x)\geq 2$ so if there is a solution to the equation, it must satisfy $f(x)=g(x)=2$. From $f(x)=2$ we get $x=1\in (-1,3)$. Indeed, $f(1)=g(1)=2$ and so $x=1$ is the only real solution to the equation.
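The bounding argument in the last example can be spot-checked numerically: $f$ is maximized at $x=1$ because $3+2x-x^{2}=4-(x-1)^{2}$, and $g=T+1/T$ with $T=\tan ^{2}(\pi x/4)>0$ satisfies $g\geq 2$ by AM–GM, with equality at $x=1$. A sketch:

```python
import math

def f(x):
    return math.log2(3 + 2*x - x*x)

def g(x):
    t = math.pi * x / 4
    return math.tan(t)**2 + 1 / math.tan(t)**2

# Both bounds are attained at x = 1: f(1) = log2(4) = 2 and g(1) = 1 + 1 = 2.
assert abs(f(1) - 2) < 1e-12
assert abs(g(1) - 2) < 1e-12

# Sample a few other points in (-1, 3) to see f <= 2 <= g there
# (x = 0 is skipped since tan(0) = 0 makes the cotangent undefined).
for x in (0.25, 0.5, 1.5, 2.0, 2.5):
    assert f(x) <= 2 <= g(x)
```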
See also
• Mrs. Miniver's problem – Problem on areas of intersecting circles
References
1. I.N. Bronstein and K.A. Semendjajew and G. Musiol and H. Mühlig (2005). Taschenbuch der Mathematik (in German). Frankfurt/Main: Harri Deutsch. Here: Sect.1.6.4.1, p.45. The domain of equations is left implicit throughout the book.
2. For example, according to the Wolfram Mathematica tutorial page on equation solving, both $2^{x}=x$ and $e^{x}+x+1=0$ can be solved by symbolic expressions, while $x=\cos x$ can only be solved approximatively.
3. Bronstein et al., p.45-46
4. Bronstein et al., Sect.1.6.4.2.a, p.46
5. Bronstein et al., Sect.1.6.4.2.b, p.46
6. Bronstein et al., Sect.1.6.4.3.b, p.46
7. Bronstein et al., Sect.1.6.4.3.a, p.46
8. Bronstein et al., Sect.1.6.4.4, p.46-47
9. over an appropriate field, containing $\sin a$ and $\cos a$
10. Bronstein et al., Sect.1.6.4.5, p.47
11. over an appropriate field, containing $e^{6}$
12. V. A. Varyuhin, S. A. Kas'yanyuk, “On a certain method for solving nonlinear systems of a special type”, Zh. Vychisl. Mat. Mat. Fiz., 6:2 (1966), 347–352; U.S.S.R. Comput. Math. Math. Phys., 6:2 (1966), 214–221
13. V.A. Varyukhin, Fundamental Theory of Multichannel Analysis (VA PVO SV, Kyiv, 1993) [in Russian]
• John P. Boyd (2014). Solving Transcendental Equations: The Chebyshev Polynomial Proxy and Other Numerical Rootfinders, Perturbation Series, and Oracles. Other Titles in Applied Mathematics. Philadelphia: Society for Industrial and Applied Mathematics (SIAM). doi:10.1137/1.9781611973525. ISBN 978-1-61197-351-8.
| Wikipedia |
Transcendental law of homogeneity
In mathematics, the transcendental law of homogeneity (TLH) is a heuristic principle enunciated by Gottfried Wilhelm Leibniz most clearly in a 1710 text entitled Symbolismus memorabilis calculi algebraici et infinitesimalis in comparatione potentiarum et differentiarum, et de lege homogeneorum transcendentali.[1] Henk J. M. Bos describes it as the principle to the effect that in a sum involving infinitesimals of different orders, only the lowest-order term must be retained, and the remainder discarded.[2] Thus, if $a$ is finite and $dx$ is infinitesimal, then one sets
$a+dx=a.$
Similarly,
$u\,dv+v\,du+du\,dv=u\,dv+v\,du,$
where the higher-order term du dv is discarded in accordance with the TLH. A recent study argues that Leibniz's TLH was a precursor of the standard part function over the hyperreals.[3]
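The modern arithmetic of dual numbers, in which a formal element $\varepsilon $ satisfies $\varepsilon ^{2}=0$, mechanizes exactly this discard of higher-order terms. The class below is a minimal illustrative sketch of that idea, not Leibniz's own formalism:

```python
class Dual:
    """Number a + b*eps with eps**2 == 0; the eps**2 term is discarded,
    mirroring the transcendental law of homogeneity."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps)
        #   = a1*a2 + (a1*b2 + a2*b1)*eps + b1*b2*eps^2,
        # and the eps^2 term is dropped:
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

u = Dual(3.0, 1.0)   # u + du, with du treated as infinitesimal
v = Dual(5.0, 1.0)   # v + dv
p = u * v
# d(uv) = u*dv + v*du; the du*dv cross term has vanished.
print(p.a, p.b)      # 15.0 8.0
```

This is the same mechanism that makes dual numbers useful for forward-mode automatic differentiation, where the $\varepsilon $-coefficient carries the derivative.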
See also
• Law of continuity
• Adequality
References
1. Leibniz Mathematische Schriften, (1863), edited by C. I. Gerhardt, volume V, pages 377–382)
2. Bos, Henk J. M. (1974), "Differentials, higher-order differentials and the derivative in the Leibnizian calculus", Archive for History of Exact Sciences, 14: 1–90, doi:10.1007/BF00327456, S2CID 120779114
3. Katz, Mikhail; Sherry, David (2012), "Leibniz's Infinitesimals: Their Fictionality, Their Modern Implementations, and Their Foes from Berkeley to Russell and Beyond", Erkenntnis, 78 (3): 571–625, arXiv:1205.0174, doi:10.1007/s10670-012-9370-y, S2CID 254471766
Gottfried Wilhelm Leibniz
Mathematics and
philosophy
• Alternating series test
• Best of all possible worlds
• Calculus controversy
• Calculus ratiocinator
• Characteristica universalis
• Compossibility
• Difference
• Dynamism
• Identity of indiscernibles
• Individuation
• Law of continuity
• Leibniz wheel
• Leibniz's gap
• Leibniz's notation
• Lingua generalis
• Mathesis universalis
• Pre-established harmony
• Plenitude
• Sufficient reason
• Salva veritate
• Theodicy
• Transcendental law of homogeneity
• Rationalism
• Universal science
• Vis viva
• Well-founded phenomenon
Works
• De Arte Combinatoria (1666)
• Discourse on Metaphysics (1686)
• New Essays on Human Understanding (1704)
• Théodicée (1710)
• Monadology (1714)
• Leibniz–Clarke correspondence (1715–1716)
Category
Infinitesimals
History
• Adequality
• Leibniz's notation
• Integral symbol
• Criticism of nonstandard analysis
• The Analyst
• The Method of Mechanical Theorems
• Cavalieri's principle
Related branches
• Nonstandard analysis
• Nonstandard calculus
• Internal set theory
• Synthetic differential geometry
• Smooth infinitesimal analysis
• Constructive nonstandard analysis
• Infinitesimal strain theory (physics)
Formalizations
• Differentials
• Hyperreal numbers
• Dual numbers
• Surreal numbers
Individual concepts
• Standard part function
• Transfer principle
• Hyperinteger
• Increment theorem
• Monad
• Internal set
• Levi-Civita field
• Hyperfinite set
• Law of continuity
• Overspill
• Microcontinuity
• Transcendental law of homogeneity
Mathematicians
• Gottfried Wilhelm Leibniz
• Abraham Robinson
• Pierre de Fermat
• Augustin-Louis Cauchy
• Leonhard Euler
Textbooks
• Analyse des Infiniment Petits
• Elementary Calculus
• Cours d'Analyse
| Wikipedia |
Transcritical bifurcation
In bifurcation theory, a field within mathematics, a transcritical bifurcation is a particular kind of local bifurcation, meaning that it is characterized by an equilibrium having an eigenvalue whose real part passes through zero.
A transcritical bifurcation is one in which a fixed point exists for all values of a parameter and is never destroyed. However, such a fixed point interchanges its stability with another fixed point as the parameter is varied.[1] In other words, both before and after the bifurcation there is one unstable and one stable fixed point, but their stability is exchanged when they collide: the unstable fixed point becomes stable and vice versa.
The normal form of a transcritical bifurcation is
${\frac {dx}{dt}}=rx-x^{2}.$
This equation is similar to the logistic equation, but in this case we allow $r$ and $x$ to be positive or negative (while in the logistic equation $x$ and $r$ must be non-negative). The two fixed points are at $x=0$ and $x=r$. When the parameter $r$ is negative, the fixed point at $x=0$ is stable and the fixed point $x=r$ is unstable. But for $r>0$, the point at $x=0$ is unstable and the point at $x=r$ is stable. So the bifurcation occurs at $r=0$.
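The stability exchange follows from the sign of the linearization $f'(x)=r-2x$ of the normal form at each fixed point: $f'(0)=r$ and $f'(r)=-r$, so exactly one of the two fixed points is stable on either side of $r=0$. A short check:

```python
def fprime(r, x):
    # d/dx (r*x - x**2) = r - 2*x
    return r - 2 * x

for r in (-1.0, 1.0):
    stable_at_0 = fprime(r, 0.0) < 0     # x = 0 is stable iff r < 0
    stable_at_r = fprime(r, r) < 0       # x = r is stable iff r > 0
    assert stable_at_0 == (r < 0)
    assert stable_at_r == (r > 0)
    # Exactly one of the two fixed points is stable on either side of r = 0:
    assert stable_at_0 != stable_at_r
```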
A typical real-world example is the consumer–producer problem, where consumption is proportional to the quantity of the resource.
For example:
${\frac {dx}{dt}}=rx(1-x)-px,$
where
• $rx(1-x)$ is the logistic equation of resource growth; and
• $px$ is the consumption, proportional to the resource $x$.
References
1. Strogatz, Steven (2001). Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. Boulder: Westview Press. ISBN 0-7382-0453-6.
| Wikipedia |
Transfer function matrix
In control system theory, and various branches of engineering, a transfer function matrix, or just transfer matrix is a generalisation of the transfer functions of single-input single-output (SISO) systems to multiple-input and multiple-output (MIMO) systems.[1] The matrix relates the outputs of the system to its inputs. It is a particularly useful construction for linear time-invariant (LTI) systems because it can be expressed in terms of the s-plane.
In some systems, especially ones consisting entirely of passive components, it can be ambiguous which variables are inputs and which are outputs. In electrical engineering, a common scheme is to gather all the voltage variables on one side and all the current variables on the other regardless of which are inputs or outputs. This results in all the elements of the transfer matrix being in units of impedance. The concept of impedance (and hence impedance matrices) has been borrowed into other energy domains by analogy, especially mechanics and acoustics.
Many control systems span several different energy domains. This requires transfer matrices with elements in mixed units. This is needed both to describe transducers that make connections between domains and to describe the system as a whole. If the matrix is to properly model energy flows in the system, compatible variables must be chosen to allow this.
General
A MIMO system with m outputs and n inputs is represented by an m × n matrix. Each entry in the matrix is in the form of a transfer function relating an output to an input. For example, for a three-input, two-output system, one might write,
${\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}}={\begin{bmatrix}g_{11}&g_{12}&g_{13}\\g_{21}&g_{22}&g_{23}\end{bmatrix}}{\begin{bmatrix}u_{1}\\u_{2}\\u_{3}\end{bmatrix}}$
where the un are the inputs, the ym are the outputs, and the gmn are the transfer functions. This may be written more succinctly in matrix operator notation as,
$\mathbf {Y} =\mathbf {G} \mathbf {U} $
where Y is a column vector of the outputs, G is a matrix of the transfer functions, and U is a column vector of the inputs.
In many cases, the system under consideration is a linear time-invariant (LTI) system. In such cases, it is convenient to express the transfer matrix in terms of the Laplace transform (in the case of continuous time variables) or the z-transform (in the case of discrete time variables) of the variables. This may be indicated by writing, for instance,
$\mathbf {Y} (s)=\mathbf {G} (s)\mathbf {U} (s)$
which indicates that the variables and matrix are in terms of s, the complex frequency variable of the s-plane arising from Laplace transforms, rather than time. The examples in this article are all assumed to be in this form, although that is not explicitly indicated for brevity. For discrete time systems s is replaced by z from the z-transform, but this makes no difference to subsequent analysis. The matrix is particularly useful when it is a proper rational matrix, that is, all its elements are proper rational functions. In this case the state-space representation can be applied.[2]
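As a numerical sketch (the 2×3 system and its first-order elements are assumed for illustration, not taken from the article), a transfer matrix in terms of s can be evaluated at a point on the imaginary axis and applied to a vector of phasor inputs:

```python
import numpy as np

def G(s):
    """A hypothetical 2x3 LTI transfer matrix of simple first-order elements."""
    return np.array([
        [1 / (s + 1), 2 / (s + 3), 0],
        [0,           1 / (s + 2), 5 / (s + 5)],
    ], dtype=complex)

# Evaluate on the imaginary axis (s = j*omega) and apply to phasor inputs
s = 2j                                           # omega = 2 rad/s
U = np.array([1.0, 0.5, -1.0], dtype=complex)    # column vector of inputs
Y = G(s) @ U                                     # Y(s) = G(s) U(s)
```

Each output phasor is the sum of every input filtered through the corresponding row of transfer functions, exactly the matrix product Y = GU above.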
In systems engineering, the overall system transfer matrix G(s) is decomposed into two parts: H(s) representing the system being controlled, and C(s) representing the control system. C(s) takes as its inputs the inputs of G(s) and the outputs of H(s). The outputs of C(s) form the inputs for H(s).[3]
Electrical systems
In electrical systems it is often the case that the distinction between input and output variables is ambiguous. They can be either, depending on circumstance and point of view. In such cases the concept of port (a place where energy is transferred from one system to another) can be more useful than input and output. It is customary to define two variables for each port (p): the voltage across it (Vp) and the current entering it (Ip). For instance, the transfer matrix of a two-port network can be defined as follows,
${\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}={\begin{bmatrix}z_{11}&z_{12}\\z_{21}&z_{22}\\\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{2}\end{bmatrix}}$
where the zmn are called the impedance parameters, or z-parameters. They are so called because they are in units of impedance and relate port currents to a port voltage. The z-parameters are not the only way that transfer matrices are defined for two-port networks. There are six basic matrices that relate voltages and currents each with advantages for particular system network topologies.[4] However, only two of these can be extended beyond two ports to an arbitrary number of ports. These two are the z-parameters and their inverse, the admittance parameters or y-parameters.[5]
To understand the relationship between port voltages and currents and inputs and outputs, consider the simple voltage divider circuit. If we only wish to consider the output voltage (V2) resulting from applying the input voltage (V1) then the transfer function can be expressed as,
${\begin{bmatrix}V_{2}\end{bmatrix}}={\begin{bmatrix}{\dfrac {R_{2}}{R_{1}+R_{2}}}\end{bmatrix}}{\begin{bmatrix}V_{1}\end{bmatrix}}$
which can be considered the trivial case of a 1×1 transfer matrix. The expression correctly predicts the output voltage if there is no current leaving port 2, but is increasingly inaccurate as the load increases. If, however, we attempt to use the circuit in reverse, driving it with a voltage at port 2 and calculate the resulting voltage at port 1 the expression gives completely the wrong result even with no load on port 1. It predicts a greater voltage at port 1 than was applied at port 2, an impossibility with a purely resistive circuit like this one. To correctly predict the behaviour of the circuit, the currents entering or leaving the ports must also be taken into account, which is what the transfer matrix does.[6] The impedance matrix for the voltage divider circuit is,
${\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}={\begin{bmatrix}R_{1}+R_{2}&R_{2}\\R_{2}&R_{2}\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{2}\end{bmatrix}}$
which fully describes its behaviour under all input and output conditions.[7]
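This can be verified numerically. The sketch below (assumed resistor and load values) terminates port 2 of the divider's impedance matrix in a load RL and recovers the familiar loaded-divider output, which is lower than the unloaded ratio R2/(R1+R2) predicted by the 1×1 transfer function:

```python
import numpy as np

R1, R2, RL = 1000.0, 1000.0, 1000.0      # assumed resistor and load values (ohms)

# Impedance matrix of the divider: [V1, V2] = Z @ [I1, I2]
Z = np.array([[R1 + R2, R2],
              [R2,      R2]])

# Terminate port 2 in RL (so V2 = -RL*I2) and drive port 1 with V1 = 1 V.
# Moving the -RL*I2 term to the left-hand side gives a solvable system in I1, I2.
V1 = 1.0
A = Z + np.array([[0.0, 0.0],
                  [0.0, RL]])
I1, I2 = np.linalg.solve(A, np.array([V1, 0.0]))
V2 = -RL * I2

# Loaded output sits below the unloaded divider ratio R2/(R1+R2) = 0.5
```

With equal resistors throughout, the load halves the effective lower leg (R2 ∥ RL = 500 Ω) and the output drops from 0.5 V to 1/3 V, which the impedance matrix predicts correctly.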
At microwave frequencies, none of the transfer matrices based on port voltages and currents are convenient to use in practice. Voltage is difficult to measure directly, current next to impossible, and the open circuits and short circuits required by the measurement technique cannot be achieved with any accuracy. For waveguide implementations, circuit voltage and current are entirely meaningless. Transfer matrices using different sorts of variables are used instead. These are the powers transmitted into, and reflected from a port which are readily measured in the transmission line technology used in distributed-element circuits in the microwave band. The most well known and widely used of these sorts of parameters is the scattering parameters, or s-parameters.[8]
Mechanical and other systems
The concept of impedance can be extended into the mechanical, and other domains through a mechanical-electrical analogy, hence the impedance parameters, and other forms of 2-port network parameters, can be extended to the mechanical domain also. To do this an effort variable and a flow variable are made analogues of voltage and current respectively. For mechanical systems under translation these variables are force and velocity respectively.[9]
Expressing the behaviour of a mechanical component as a two-port or multi-port with a transfer matrix is a useful thing to do because, like electrical circuits, the component can often be operated in reverse and its behaviour is dependent on the loads at the inputs and outputs. For instance, a gear train is often characterised simply by its gear ratio, a SISO transfer function. However, the gearbox output shaft can be driven round to turn the input shaft requiring a MIMO analysis. In this example the effort and flow variables are torque T and angular velocity ω respectively. The transfer matrix in terms of z-parameters will look like,
${\begin{bmatrix}T_{1}\\T_{2}\end{bmatrix}}={\begin{bmatrix}z_{11}&z_{12}\\z_{21}&z_{22}\end{bmatrix}}{\begin{bmatrix}\omega _{1}\\\omega _{2}\end{bmatrix}}$
However, the z-parameters are not necessarily the most convenient for characterising gear trains. A gear train is the analogue of an electrical transformer and the h-parameters (hybrid parameters) better describe transformers because they directly include the turns ratios (the analogue of gear ratios).[10] The gearbox transfer matrix in h-parameter format is,
${\begin{bmatrix}T_{1}\\\omega _{2}\end{bmatrix}}={\begin{bmatrix}h_{11}&h_{12}\\h_{21}&h_{22}\end{bmatrix}}{\begin{bmatrix}\omega _{1}\\T_{2}\end{bmatrix}}$
where
h21 is the velocity ratio of the gear train with no load on the output,
h12 is the reverse direction torque ratio of the gear train with input shaft clamped, equal to the forward velocity ratio for an ideal gearbox,
h11 is the input rotational mechanical impedance with no load on the output shaft, zero for an ideal gearbox, and,
h22 is the output rotational mechanical admittance with the input shaft clamped.
For an ideal gear train with no losses (friction, distortion etc), this simplifies to,
${\begin{bmatrix}T_{1}\\\omega _{2}\end{bmatrix}}={\begin{bmatrix}0&N\\N&0\end{bmatrix}}{\begin{bmatrix}\omega _{1}\\T_{2}\end{bmatrix}}$
where N is the gear ratio.[11]
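A brief numerical sketch (with an assumed gear ratio N = 3 and assumed operating values) shows how the ideal h-parameter matrix couples the ports and transmits power without loss:

```python
import numpy as np

N = 3.0                                  # assumed gear ratio
H = np.array([[0.0, N],
              [N,   0.0]])               # ideal gearbox in h-parameter form

omega1, T2 = 10.0, 2.0                   # input speed (rad/s), output-side torque (N*m)
T1, omega2 = H @ np.array([omega1, T2])  # [T1, omega2] = H @ [omega1, T2]

# With no losses, the power associated with port 1 equals that at port 2
```

The zero diagonal encodes the ideal behaviour described above: no input impedance and no output admittance, only the cross-coupling through N.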
Transducers and actuators
In a system that consists of multiple energy domains, transfer matrices are required that can handle components with ports in different domains. In robotics and mechatronics, actuators are required. These usually consist of a transducer converting, for instance, signals from the control system in the electrical domain into motion in the mechanical domain. The control system also requires sensors that detect the motion and convert it back into the electrical domain through another transducer so that the motion can be properly controlled through a feedback loop. Other sensors in the system may be transducers converting yet other energy domains into electrical signals, such as optical, audio, thermal, fluid flow and chemical. Another application is the field of mechanical filters which require transducers between the electrical and mechanical domains in both directions.
A simple example is an electromagnetic electromechanical actuator driven by an electronic controller. This requires a transducer with an input port in the electrical domain and an output port in the mechanical domain. This might be represented simplistically by a SISO transfer function, but for similar reasons to those already stated, a more accurate representation is achieved with a two-input, two-output MIMO transfer matrix. In the z-parameters, this takes the form,
${\begin{bmatrix}V\\F\end{bmatrix}}={\begin{bmatrix}z_{11}&z_{12}\\z_{21}&z_{22}\end{bmatrix}}{\begin{bmatrix}I\\v\end{bmatrix}}$
where F is the force applied to the actuator and v is the resulting velocity of the actuator. The impedance parameters here are a mixture of units; z11 is an electrical impedance, z22 is a mechanical impedance and the other two are transimpedances in a hybrid mix of units.[12]
Acoustic systems
Acoustic systems are a subset of fluid dynamics, and in both fields the primary input and output variables are pressure, P, and volumetric flow rate, Q, except in the case of sound travelling through solid components. In the latter case, the primary variables of mechanics, force and velocity, are more appropriate. An example of a two-port acoustic component is a filter such as a muffler on an exhaust system. A transfer matrix representation of it may look like,
${\begin{bmatrix}P_{2}\\Q_{2}\end{bmatrix}}={\begin{bmatrix}T_{11}&T_{12}\\T_{21}&T_{22}\end{bmatrix}}{\begin{bmatrix}P_{1}\\Q_{1}\end{bmatrix}}$
Here, the Tmn are the transmission parameters, also known as ABCD-parameters. The component can be just as easily described by the z-parameters, but transmission parameters have a mathematical advantage when dealing with a system of two-ports that are connected in a cascade of the output of one into the input port of another. In such cases the overall transmission parameters are found simply by the matrix multiplication of the transmission parameter matrices of the constituent components.[13]
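The cascading property can be sketched as follows (the series-impedance and shunt-admittance element models are standard lumped idealisations, assumed here for illustration):

```python
import numpy as np

def series_element(Z):
    """ABCD (transmission) matrix of a series impedance Z (assumed lumped model)."""
    return np.array([[1.0, Z],
                     [0.0, 1.0]])

def shunt_element(Y):
    """ABCD matrix of a shunt admittance Y (assumed lumped model)."""
    return np.array([[1.0, 0.0],
                     [Y,   1.0]])

# Two-ports in cascade: the overall matrix is the ordered matrix product
T = series_element(2.0) @ shunt_element(0.5)
```

The single matrix product replaces the re-derivation of port conditions at every junction, which is the mathematical advantage of transmission parameters for cascaded systems; for a reciprocal network the determinant of the product remains 1.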
Compatible variables
When working with mixed variables from different energy domains consideration needs to be given on which variables to consider analogous. The choice depends on what the analysis is intended to achieve. If it is desired to correctly model energy flows throughout the entire system then a pair of variables whose product is power (power conjugate variables) in one energy domain must map to power conjugate variables in other domains. Power conjugate variables are not unique so care needs to be taken to use the same mapping of variables throughout the system.[14]
A common mapping (used in some of the examples in this article) maps the effort variables (ones that initiate an action) from each domain together and maps the flow variables (ones that are a property of an action) from each domain together. Each pair of effort and flow variables is power conjugate. This system is known as the impedance analogy because a ratio of the effort to the flow variable in each domain is analogous to electrical impedance.[15]
There are two other power conjugate systems on the same variables that are in use. The mobility analogy maps mechanical force to electric current instead of voltage. This analogy is widely used by mechanical filter designers and frequently in audio electronics also. The mapping has the advantage of preserving network topologies across domains but does not maintain the mapping of impedances. The Trent analogy classes the power conjugate variables as either across variables, or through variables depending on whether they act across an element of a system or through it. This largely ends up the same as the mobility analogy except in the case of the fluid flow domain (including the acoustics domain). Here pressure is made analogous to voltage (as in the impedance analogy) instead of current (as in the mobility analogy). However, force in the mechanical domain is analogous to current because force acts through an object.[16]
There are some commonly used analogies that do not use power conjugate pairs. For sensors, correctly modelling energy flows may not be so important. Sensors often extract only tiny amounts of energy into the system. Choosing variables that are convenient to measure, particularly ones that the sensor is sensing, may be more useful. For instance, in the thermal resistance analogy, thermal resistance is considered analogous to electrical resistance, resulting in temperature difference and thermal power mapping to voltage and current respectively. The power conjugate of temperature difference is not thermal power, but rather entropy flow rate, something that cannot be directly measured. Another analogy of the same sort occurs in the magnetic domain. This maps magnetic reluctance to electrical resistance, resulting in magnetic flux mapping to current instead of magnetic flux rate of change as required for compatible variables.[17]
History
The matrix representation of linear algebraic equations has been known for some time. Poincaré in 1907 was the first to describe a transducer as a pair of such equations relating electrical variables (voltage and current) to mechanical variables (force and velocity). Wegel, in 1921, was the first to express these equations in terms of mechanical impedance as well as electrical impedance.[18]
The first use of transfer matrices to represent a MIMO control system was by Boksenbom and Hood in 1950, but only for the particular case of the gas turbine engines they were studying for the National Advisory Committee for Aeronautics.[19] Cruickshank provided a firmer basis in 1955 but without complete generality. Kavanagh in 1956 gave the first completely general treatment, establishing the matrix relationship between system and control and providing criteria for realisability of a control system that could deliver a prescribed behaviour of the system under control.[20]
See also
• Transfer-matrix method (optics)
References
1. Chen, p. 1038
• Levine, p. 481
• Chen, pp. 1037–1038
2. Kavanagh, p. 350
• Chen, pp. 54–55
• Iyer, p. 240
• Bakshi & Bakshi, p. 420
3. Choma, p. 197
4. Yang & Lee, pp. 37–38
5. Bessai, pp. 4–5
• Nguyen, p. 271
• Bessai, p. 1
6. Busch-Vishniac, pp. 19–20
7. Olsen, pp. 239–240
• Busch-Vishniac, p. 20
• Koenig & Blackwell, p. 170
8. Pierce, p. 200
9. Munjal, p. 81
10. Busch-Vishniac, p. 18
11. Busch-Vishniac, p. 20
12. Busch-Vishniac, pp. 19–20
13. Busch-Vishniac, pp. 18, 20
14. Pierce, p. 200
• Kavanagh, p. 350
• Boksenbom & Hood, p. 581
15. Kavanagh, pp. 349–350
Bibliography
• Bessai, Horst, MIMO Signals and Systems, Springer, 2006 ISBN 038727457X.
• Bakshi, A.V.; Bakshi, U.A., Network Theory, Technical Publications, 2008 ISBN 8184314027.
• Boksenbom, Aaron S.; Hood, Richard, "General algebraic method applied to control analysis of complex engine types", NACA Report 980, 1950.
• Busch-Vishniac, Ilene J., Electromechanical Sensors and Actuators, Springer, 1999 ISBN 038798495X.
• Chen, Wai Kai, The Electrical Engineering Handbook, Academic Press, 2004 ISBN 0080477488.
• Choma, John, Electrical Networks: Theory and Analysis, Wiley, 1985 ISBN 0471085286.
• Cruickshank, A. J. O., "Matrix formulation of control system equations", The Matrix and Tensor Quarterly, vol. 5, no. 3, p. 76, 1955.
• Iyer, T. S. K. V., Circuit Theory, Tata McGraw-Hill Education, 1985 ISBN 0074516817.
• Kavanagh, R. J., "The application of matrix methods to multi-variable control systems", Journal of the Franklin Institute, vol. 262, iss. 5, pp. 349–367, November 1956.
• Koenig, Herman Edward; Blackwell, William A., Electromechanical System Theory, McGraw-Hill, 1961 OCLC 564134
• Levine, William S., The Control Handbook, CRC Press, 1996 ISBN 0849385709.
• Nguyen, Cam, Radio-Frequency Integrated-Circuit Engineering, Wiley, 2015 ISBN 1118936485.
• Olsen, A., "Characterization of Transformers by h-Parameters", IEEE Transactions on Circuit Theory, vol. 13, iss. 2, pp. 239–240, June 1966.
• Pierce, Allan D. Acoustics: an Introduction to its Physical Principles and Applications, Acoustical Society of America, 1989 ISBN 0883186128.
• Poincaré, H., "Etude du récepteur téléphonique", Eclairage Electrique, vol. 50, pp. 221–372, 1907.
• Wegel, R. L., "Theory of magneto-mechanical systems as applied to telephone receivers and similar structures", Journal of the American Institute of Electrical Engineers, vol. 40, pp. 791–802, 1921.
• Yang, Won Y.; Lee, Seung C., Circuit Systems with MATLAB and PSpice, Wiley 2008, ISBN 0470822406.
Transfer (group theory)
In the mathematical field of group theory, the transfer defines, given a group G and a subgroup H of finite index, a group homomorphism from G to the abelianization of H. It can be used in conjunction with the Sylow theorems to obtain certain numerical results on the existence of finite simple groups.
The transfer was defined by Issai Schur (1902) and rediscovered by Emil Artin (1929).[1]
Construction
The construction of the map proceeds as follows:[2] Let [G:H] = n and select coset representatives, say
$x_{1},\dots ,x_{n},\,$
for H in G, so G can be written as a disjoint union
$G=\bigcup \ x_{i}H.$
Given y in G, each yxi is in some coset xjH and so
$yx_{i}=x_{j}h_{i}$
for some index j and some element hi of H. The value of the transfer for y is defined to be the image of the product
$\textstyle \prod _{i=1}^{n}h_{i}$
in H/H′, where H′ is the commutator subgroup of H. The order of the factors is irrelevant since H/H′ is abelian.
It is straightforward to show that, though the individual hi depends on the choice of coset representatives, the value of the transfer does not. It is also straightforward to show that the mapping defined this way is a homomorphism.
Example
If G is cyclic then the transfer takes any element y of G to $y^{[G:H]}$.
A simple case is that seen in the Gauss lemma on quadratic residues, which in effect computes the transfer for the multiplicative group of non-zero residue classes modulo a prime number p, with respect to the subgroup {1, −1}.[1] One advantage of looking at it that way is the ease with which the correct generalisation can be found, for example for cubic residues in the case that p − 1 is divisible by three.
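The construction can be carried out mechanically for small finite groups. The sketch below (a hypothetical helper written for this example) computes the transfer for G = (Z/7Z)× and H = {1, 6}; since H is abelian, the value lies in H itself, and since G is cyclic of order 6 with [G:H] = 3, the result agrees with the formula y ↦ y^[G:H]:

```python
def transfer(G, H, op, y):
    """Transfer of y in G to the abelian subgroup H of finite index.
    G and H are element lists, op is the group operation (hypothetical helper)."""
    # Choose left-coset representatives x_i, so G is the disjoint union of x_i*H
    reps, seen = [], set()
    for g in G:
        coset = frozenset(op(g, h) for h in H)
        if coset not in seen:
            seen.add(coset)
            reps.append(g)
    # For each x_i find x_j and h_i in H with y*x_i = x_j*h_i; multiply up the h_i
    result = None
    for x in reps:
        yx = op(y, x)
        h_found = None
        for xj in reps:
            for h in H:
                if op(xj, h) == yx:
                    h_found = h
                    break
            if h_found is not None:
                break
        result = h_found if result is None else op(result, h_found)
    return result

# (Z/7Z)* with H = {1, 6}: the transfer lands in H and detects quadratic residues
G = [1, 2, 3, 4, 5, 6]
H = [1, 6]
op = lambda a, b: (a * b) % 7
```

Here transfer(G, H, op, 2) gives 1 (2 is a quadratic residue mod 7) while transfer(G, H, op, 3) gives 6 ≡ −1 (3 is a non-residue), matching the Gauss-lemma computation mentioned above.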
Homological interpretation
This homomorphism may be set in the context of group homology. In general, given any subgroup H of G and any G-module A, there is a corestriction map of homology groups $\mathrm {Cor} :H_{n}(H,A)\to H_{n}(G,A)$ induced by the inclusion map $i:H\to G$, but if we have that H is of finite index in G, there are also restriction maps $\mathrm {Res} :H_{n}(G,A)\to H_{n}(H,A)$. In the case of n = 1 and $A=\mathbb {Z} $ with the trivial G-module structure, we have the map $\mathrm {Res} :H_{1}(G,\mathbb {Z} )\to H_{1}(H,\mathbb {Z} )$. Noting that $H_{1}(G,\mathbb {Z} )$ may be identified with $G/G'$ where $G'$ is the commutator subgroup, this gives the transfer map via $G\xrightarrow {\pi } G/G'\xrightarrow {\mathrm {Res} } H/H'$, with $\pi $ denoting the natural projection.[3] The transfer is also seen in algebraic topology, when it is defined between classifying spaces of groups.
Terminology
The name transfer translates the German Verlagerung, which was coined by Helmut Hasse.
Commutator subgroup
If G is finitely generated, the commutator subgroup G′ of G has finite index in G, and H = G′, then the corresponding transfer map is trivial. In other words, the map sends G to 0 in the abelianization of G′. This is important in proving the principal ideal theorem in class field theory.[1] See the Emil Artin–John Tate class field theory notes.
See also
• Focal subgroup theorem, an important application of transfer
• By Artin's reciprocity law, the Artin transfer describes the principalization of ideal classes in extensions of algebraic number fields.
References
1. Serre (1979) p.122
2. Following Scott 3.5
3. Serre (1979) p.120
• Artin, Emil (1929), "Idealklassen in Oberkörpern und allgemeines Reziprozitätsgesetz", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 7 (1): 46–51, doi:10.1007/BF02941159, S2CID 121475651
• Schur, Issai (1902), "Neuer Beweis eines Satzes über endliche Gruppen", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften: 1013–1019, JFM 33.0146.01
• Scott, W.R. (1987) [1964]. Group Theory. Dover. pp. 60 ff. ISBN 0-486-65377-3. Zbl 0641.20001.
• Serre, Jean-Pierre (1979). Local Fields. Graduate Texts in Mathematics. Vol. 67. Translated by Greenberg, Marvin Jay. Springer-Verlag. pp. 120–122. ISBN 0-387-90424-7. Zbl 0423.12016.
Shriek map
In category theory, a branch of mathematics, certain unusual functors are denoted $f_{!}$ and $f^{!},$ with the exclamation mark used to indicate that they are exceptional in some way. They are thus accordingly sometimes called shriek maps, with "shriek" being slang for an exclamation mark, though other terms are used, depending on context.
"Transfer map" redirects here. For the transfer homomorphism in group theory, see Transfer (group theory).
Usage
Shriek notation is used in two senses:
• To distinguish a functor from a more usual functor $f_{*}$ or $f^{*},$ accordingly as it is covariant or contravariant.
• To indicate a map that goes "the wrong way" – a functor that has the same objects as a more familiar functor, but behaves differently on maps and has the opposite variance. For example, it has a pull-back where one expects a push-forward.
Examples
In algebraic geometry, these arise in image functors for sheaves, particularly Verdier duality, where $f_{!}$ is a "less usual" functor.
In algebraic topology, these arise particularly in fiber bundles, where they yield maps that have the opposite of the usual variance. They are thus called wrong way maps, Gysin maps, as they originated in the Gysin sequence, or transfer maps. A fiber bundle $F\to E\to B,$ with base space B, fiber F, and total space E, has, like any other continuous map of topological spaces, a covariant map on homology $H_{*}(E)\to H_{*}(B)$ and a contravariant map on cohomology $H^{*}(B)\to H^{*}(E).$ However, it also has a covariant map on cohomology, corresponding in de Rham cohomology to "integration along the fiber", and a contravariant map on homology, corresponding in de Rham cohomology to "pointwise product with the fiber". The composition of the "wrong way" map with the usual map gives a map from the homology of the base to itself, analogous to a unit/counit of an adjunction; compare also Galois connection.
These can be used in understanding and proving the product property for the Euler characteristic of a fiber bundle.[1]
Notes
1. Gottlieb, Daniel Henry (1975), "Fibre bundles and the Euler characteristic" (PDF), Journal of Differential Geometry, 10 (1): 39–48, doi:10.4310/jdg/1214432674
Transfer matrix
In applied mathematics, the transfer matrix is a formulation in terms of a block-Toeplitz matrix of the two-scale equation, which characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory.
This article is about the transfer matrix in wavelet theory. For the transfer matrix in control systems, see Transfer function matrix. For the transfer matrix method in statistical physics, see Transfer-matrix method. For the transfer matrix method in optics, see Transfer-matrix method (optics). For the transfer matrix in dynamical systems theory, see Transfer operator. For a single scalar, see Transfer coefficient (disambiguation).
For a mask $h$, a vector with component indexes from $a$ to $b$, the transfer matrix of $h$, denoted $T_{h}$ here, is defined as
$(T_{h})_{j,k}=h_{2\cdot j-k}.$
More verbosely
$T_{h}={\begin{pmatrix}h_{a}&&&&&\\h_{a+2}&h_{a+1}&h_{a}&&&\\h_{a+4}&h_{a+3}&h_{a+2}&h_{a+1}&h_{a}&\\\ddots &\ddots &\ddots &\ddots &\ddots &\ddots \\&h_{b}&h_{b-1}&h_{b-2}&h_{b-3}&h_{b-4}\\&&&h_{b}&h_{b-1}&h_{b-2}\\&&&&&h_{b}\end{pmatrix}}.$
The effect of $T_{h}$ can be expressed in terms of the downsampling operator "$\downarrow $":
$T_{h}\cdot x=(h*x)\downarrow 2.$
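As a sketch (the mask values are assumed for illustration), the matrix can be built directly from the definition and checked against the downsampling identity above:

```python
import numpy as np

def transfer_matrix(h, a):
    """Build (T_h)_{j,k} = h_{2j-k} with row/column indexes j, k running from a to b."""
    b = a + len(h) - 1
    n = b - a + 1
    T = np.zeros((n, n))
    for j in range(a, b + 1):
        for k in range(a, b + 1):
            if a <= 2 * j - k <= b:
                T[j - a, k - a] = h[2 * j - k - a]
    return T

h = np.array([1.0, 3.0, 3.0, 1.0]) / 8.0   # assumed example mask, first index a = 0
T = transfer_matrix(h, a=0)

# Downsampling identity: (T_h x)_j = (h * x)_{2j}
x = np.array([0.5, -1.0, 2.0, 0.25])
lhs = T @ x
rhs = np.convolve(h, x)[::2]
```

Row j of the matrix picks out the even-indexed sample (h∗x)₂ⱼ of the full convolution, which is exactly the combined convolve-then-downsample operation of the two-scale equation.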
Properties
• $T_{h}\cdot x=T_{x}\cdot h$.
• Dropping the first and the last column and moving the odd-indexed columns to the left and the even-indexed columns to the right yields a transposed Sylvester matrix.
• The determinant of a transfer matrix is essentially a resultant.
More precisely:
Let $h_{\mathrm {e} }$ be the even-indexed coefficients of $h$ ($(h_{\mathrm {e} })_{k}=h_{2k}$) and let $h_{\mathrm {o} }$ be the odd-indexed coefficients of $h$ ($(h_{\mathrm {o} })_{k}=h_{2k+1}$).
Then $\det T_{h}=(-1)^{\lfloor {\frac {b-a+1}{4}}\rfloor }\cdot h_{a}\cdot h_{b}\cdot \mathrm {res} (h_{\mathrm {e} },h_{\mathrm {o} })$, where $\mathrm {res} $ is the resultant.
This connection allows for fast computation using the Euclidean algorithm.
• The trace of the transfer matrix of convolved masks satisfies
$\mathrm {tr} ~T_{g*h}=\mathrm {tr} ~T_{g}\cdot \mathrm {tr} ~T_{h}$
• The determinant of the transfer matrix of convolved masks satisfies
$\det T_{g*h}=\det T_{g}\cdot \det T_{h}\cdot \mathrm {res} (g_{-},h)$
where $g_{-}$ denotes the mask with alternating signs, i.e. $(g_{-})_{k}=(-1)^{k}\cdot g_{k}$.
• If $T_{h}\cdot x=0$, then $T_{g*h}\cdot (g_{-}*x)=0$.
This is a concrete instance of the determinant property above: from it one knows that $T_{g*h}$ is singular whenever $T_{h}$ is singular. This property also shows how vectors from the null space of $T_{h}$ can be converted to null space vectors of $T_{g*h}$.
• If $x$ is an eigenvector of $T_{h}$ with respect to the eigenvalue $\lambda $, i.e.
$T_{h}\cdot x=\lambda \cdot x$,
then $x*(1,-1)$ is an eigenvector of $T_{h*(1,1)}$ with respect to the same eigenvalue, i.e.
$T_{h*(1,1)}\cdot (x*(1,-1))=\lambda \cdot (x*(1,-1))$.
• Let $\lambda _{a},\dots ,\lambda _{b}$ be the eigenvalues of $T_{h}$, which implies $\lambda _{a}+\dots +\lambda _{b}=\mathrm {tr} ~T_{h}$ and more generally $\lambda _{a}^{n}+\dots +\lambda _{b}^{n}=\mathrm {tr} (T_{h}^{n})$. This sum is useful for estimating the spectral radius of $T_{h}$. There is an alternative possibility for computing the sum of eigenvalue powers, which is faster for small $n$.
Let $C_{k}h$ be the periodization of $h$ with respect to the period $2^{k}-1$. That is, $C_{k}h$ is a circular filter, which means that the component indexes are residue classes with respect to the modulus $2^{k}-1$. Then with the upsampling operator $\uparrow $ it holds that
$\mathrm {tr} (T_{h}^{n})=\left(C_{k}h*(C_{k}h\uparrow 2)*(C_{k}h\uparrow 2^{2})*\cdots *(C_{k}h\uparrow 2^{n-1})\right)_{[0]_{2^{n}-1}}$
In fact, only $2\cdot \log _{2}n$ convolutions are necessary rather than $n-2$, by applying the strategy of efficient computation of powers. The approach can be sped up even further using the fast Fourier transform.
• From the previous statement one can derive an estimate of the spectral radius $\varrho (T_{h})$. It holds that
$\varrho (T_{h})\geq {\frac {a}{\sqrt {\#h}}}\geq {\frac {1}{\sqrt {3\cdot \#h}}}$
where $\#h$ is the size of the filter and if all eigenvalues are real, it is also true that
$\varrho (T_{h})\leq a$,
where $a=\Vert C_{2}h\Vert _{2}$.
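Several of the properties above can be checked numerically. The sketch below (assumed masks, with a self-contained helper that builds $T_h$ from the definition) verifies the trace identity for convolved masks and the symmetry $T_h\cdot x = T_x\cdot h$:

```python
import numpy as np

def transfer_matrix(h, a=0):
    """(T_h)_{j,k} = h_{2j-k}; helper built directly from the definition."""
    b = a + len(h) - 1
    n = b - a + 1
    T = np.zeros((n, n))
    for j in range(a, b + 1):
        for k in range(a, b + 1):
            if a <= 2 * j - k <= b:
                T[j - a, k - a] = h[2 * j - k - a]
    return T

g = np.array([1.0, 2.0, 1.0, 0.5])     # assumed example masks on the same index range
h = np.array([0.5, -1.0, 2.0, 0.25])

# Trace identity for convolved masks: tr T_{g*h} = tr T_g * tr T_h
tr_gh = np.trace(transfer_matrix(np.convolve(g, h)))
tr_prod = np.trace(transfer_matrix(g)) * np.trace(transfer_matrix(h))

# Symmetry T_h . x = T_x . h for masks on the same index range
sym1 = transfer_matrix(h) @ g
sym2 = transfer_matrix(g) @ h
```

The trace identity is transparent here because the diagonal entries of $T_h$ are just the mask coefficients themselves, so both sides reduce to products of coefficient sums.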
See also
• Hurwitz determinant
References
• Strang, Gilbert (1996). "Eigenvalues of $(\downarrow 2){H}$ and convergence of the cascade algorithm". IEEE Transactions on Signal Processing. 44: 233–238. doi:10.1109/78.485920.
• Thielemann, Henning (2006). Optimally matched wavelets (PhD thesis). (contains proofs of the above properties)
Transfer operator
In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system.
Not to be confused with transfer homomorphism.
The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.
Definition
The iterated function to be studied is a map $f\colon X\rightarrow X$ for an arbitrary set $X$.
The transfer operator is defined as an operator ${\mathcal {L}}$ acting on the space of functions $\{\Phi \colon X\rightarrow \mathbb {C} \}$ as
$({\mathcal {L}}\Phi )(x)=\sum _{y\,\in \,f^{-1}(x)}g(y)\Phi (y)$
where $g\colon X\rightarrow \mathbb {C} $ is an auxiliary valuation function. When $f$ has a Jacobian determinant $|J|$, then $g$ is usually taken to be $g=1/|J|$.
The above definition of the transfer operator can be shown to be the point-set limit of the measure-theoretic pushforward of g: in essence, the transfer operator is the direct image functor in the category of measurable spaces. The left-adjoint of the Frobenius–Perron operator is the Koopman operator or composition operator. The general setting is provided by the Borel functional calculus.
As a general rule, the transfer operator can usually be interpreted as a (left-)shift operator acting on a shift space. The most commonly studied shifts are the subshifts of finite type. The adjoint to the transfer operator can likewise usually be interpreted as a right-shift. Particularly well studied right-shifts include the Jacobi operator and the Hessenberg matrix, both of which generate systems of orthogonal polynomials via a right-shift.
Applications
Whereas the iteration of a function $f$ naturally leads to a study of the orbits of points of X under iteration (the study of point dynamics), the transfer operator defines how (smooth) maps evolve under iteration. Thus, transfer operators typically appear in physics problems, such as quantum chaos and statistical mechanics, where attention is focused on the time evolution of smooth functions. In turn, this has medical applications to rational drug design, through the field of molecular dynamics.
It is often the case that the transfer operator is positive, has discrete positive real-valued eigenvalues, with the largest eigenvalue being equal to one. For this reason, the transfer operator is sometimes called the Frobenius–Perron operator.
The eigenfunctions of the transfer operator are usually fractals. When the logarithm of the transfer operator corresponds to a quantum Hamiltonian, the eigenvalues will typically be very closely spaced, and thus even a very narrow and carefully selected ensemble of quantum states will encompass a large number of very different fractal eigenstates with non-zero support over the entire volume. This can be used to explain many results from classical statistical mechanics, including the irreversibility of time and the increase of entropy.
The transfer operator of the Bernoulli map $b(x)=2x-\lfloor 2x\rfloor $ is exactly solvable and is a classic example of deterministic chaos; the discrete eigenvalues correspond to the Bernoulli polynomials. This operator also has a continuous spectrum consisting of the Hurwitz zeta function.
The transfer operator of the Gauss map $h(x)=1/x-\lfloor 1/x\rfloor $ is called the Gauss–Kuzmin–Wirsing (GKW) operator. The theory of the GKW dates back to a hypothesis by Gauss on continued fractions and is closely related to the Riemann zeta function.
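The Bernoulli-map transfer operator is simple enough to check directly. The following sketch (an illustration written for this text, not part of the source; the sample points and the use of exact rational arithmetic are choices made here) applies $({\mathcal {L}}f)(x)={\tfrac {1}{2}}\left[f(x/2)+f((x+1)/2)\right]$ to the first two Bernoulli polynomials and confirms the eigenvalues $2^{-n}$:

```python
from fractions import Fraction

def L(f, x):
    # Transfer operator of the Bernoulli map b(x) = 2x mod 1:
    # (Lf)(x) = (1/2) * (f(x/2) + f((x+1)/2))
    half = Fraction(1, 2)
    return half * (f(x * half) + f((x + 1) * half))

def B1(x):
    # Bernoulli polynomial B_1(x) = x - 1/2
    return x - Fraction(1, 2)

def B2(x):
    # Bernoulli polynomial B_2(x) = x^2 - x + 1/6
    return x * x - x + Fraction(1, 6)

# Eigenvalue check at a few rational points: L B_n = 2^(-n) B_n exactly.
for x in [Fraction(0), Fraction(1, 3), Fraction(2, 5), Fraction(7, 8)]:
    assert L(B1, x) == Fraction(1, 2) * B1(x)
    assert L(B2, x) == Fraction(1, 4) * B2(x)
```

Because the arithmetic is done with exact rationals, the eigenvalue equations hold as exact equalities rather than up to floating-point error.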
See also
• Bernoulli scheme
• Shift of finite type
• Krein–Rutman theorem
References
• Gaspard, Pierre (1992). "r-adic one dimensional maps and the Euler summation formula". J. Phys. A: Math. Gen. 25 (8): L483–L485. Bibcode:1992JPhA...25L.483G. doi:10.1088/0305-4470/25/8/017.
• Gaspard, Pierre (1998). Chaos, scattering and statistical mechanics. Cambridge University Press. ISBN 0-521-39511-9.
• Mackey, Michael C. (1992). Time's Arrow : The origins of thermodynamic behaviour. Springer-Verlag. ISBN 0-387-94093-6.
• Mayer, Dieter H. (1978). The Ruelle-Araki transfer operator in classical statistical mechanics. Springer-Verlag. ISBN 0-387-09990-5.
• Ruelle, David (1978). Thermodynamic formalism: the mathematical structures of classical equilibrium statistical mechanics. Addison–Wesley, Reading. ISBN 0-201-13504-3.
• Ruelle, David (2002). "Dynamical Zeta Functions and Transfer Operators" (PDF). Notices of the AMS. 49 (8): 887–895. (Provides an introductory survey).
Transfer principle
In model theory, a transfer principle states that all statements of some language that are true for some structure are true for another structure. One of the first examples was the Lefschetz principle, which states that any sentence in the first-order language of fields that is true for the complex numbers is also true for any algebraically closed field of characteristic 0.
History
An incipient form of a transfer principle was described by Leibniz under the name of "the Law of Continuity".[1] Here infinitesimals are expected to have the "same" properties as appreciable numbers. The transfer principle can also be viewed as a rigorous formalization of the principle of permanence. Similar tendencies are found in Cauchy, who used infinitesimals to define both the continuity of functions (in Cours d'Analyse) and a form of the Dirac delta function.[1]: 903
In 1955, Jerzy Łoś proved the transfer principle for any hyperreal number system. Its most common use is in Abraham Robinson's nonstandard analysis of the hyperreal numbers, where the transfer principle states that any sentence expressible in a certain formal language that is true of real numbers is also true of hyperreal numbers.
Transfer principle for the hyperreals
See also: Hyperreal number § The transfer principle
The transfer principle concerns the logical relation between the properties of the real numbers R, and the properties of a larger field denoted *R called the hyperreal numbers. The field *R includes, in particular, infinitesimal ("infinitely small") numbers, providing a rigorous mathematical realisation of a project initiated by Leibniz.
The idea is to express analysis over R in a suitable language of mathematical logic, and then point out that this language applies equally well to *R. This turns out to be possible because at the set-theoretic level, the propositions in such a language are interpreted to apply only to internal sets rather than to all sets. As Robinson put it, the sentences of [the theory] are interpreted in *R in Henkin's sense.[2]
The theorem to the effect that each proposition valid over R, is also valid over *R, is called the transfer principle.
There are several different versions of the transfer principle, depending on what model of nonstandard mathematics is being used. In terms of model theory, the transfer principle states that a map from a standard model to a nonstandard model is an elementary embedding (an embedding preserving the truth values of all statements in a language), or sometimes a bounded elementary embedding (similar, but only for statements with bounded quantifiers).
The transfer principle appears to lead to contradictions if it is not handled correctly. For example, since the hyperreal numbers form a non-Archimedean ordered field and the reals form an Archimedean ordered field, the property of being Archimedean ("every positive real is larger than $1/n$ for some positive integer $n$") seems at first sight not to satisfy the transfer principle. The statement "every positive hyperreal is larger than $1/n$ for some positive integer $n$" is false; however the correct interpretation is "every positive hyperreal is larger than $1/n$ for some positive hyperinteger $n$". In other words, the hyperreals appear to be Archimedean to an internal observer living in the nonstandard universe, but appear to be non-Archimedean to an external observer outside the universe.
An accessible, freshman-level formulation of the transfer principle is given in Keisler's book Elementary Calculus: An Infinitesimal Approach.
Example
Every real $x$ satisfies the inequality
$x\geq \lfloor x\rfloor ,$
where $\lfloor \,\cdot \,\rfloor $ is the integer part function. By a typical application of the transfer principle, every hyperreal $x$ satisfies the inequality
$x\geq {}^{*}\!\lfloor x\rfloor ,$
where ${}^{*}\!\lfloor \,\cdot \,\rfloor $ is the natural extension of the integer part function. If $x$ is infinite, then the hyperinteger ${}^{*}\!\lfloor x\rfloor $ is infinite, as well.
Generalizations of the concept of number
Historically, the concept of number has been repeatedly generalized. The addition of 0 to the natural numbers $\mathbb {N} $ was a major intellectual accomplishment in its time. The addition of negative integers to form $\mathbb {Z} $ already constituted a departure from the realm of immediate experience to the realm of mathematical models. The further extension, the rational numbers $\mathbb {Q} $, is more familiar to a layperson than their completion $\mathbb {R} $, partly because the reals do not correspond to any physical reality (in the sense of measurement and computation) different from that represented by $\mathbb {Q} $. Thus, the notion of an irrational number is meaningless to even the most powerful floating-point computer. The necessity for such an extension stems not from physical observation but rather from the internal requirements of mathematical coherence. The infinitesimals entered mathematical discourse when such a notion was required by contemporary mathematical developments, namely the emergence of what became known as the infinitesimal calculus. As already mentioned above, the mathematical justification for this latest extension was delayed by three centuries. Keisler wrote:
"In discussing the real line we remarked that we have no way of knowing what a line in physical space is really like. It might be like the hyperreal line, the real line, or neither. However, in applications of the calculus, it is helpful to imagine a line in physical space as a hyperreal line."
The self-consistent development of the hyperreals turned out to be possible if every true first-order logic statement that uses basic arithmetic (the natural numbers, plus, times, comparison) and quantifies only over the real numbers was assumed to be true in a reinterpreted form if we presume that it quantifies over hyperreal numbers. For example, we can state that for every real number there is another number greater than it:
$\forall x\in \mathbb {R} \quad \exists y\in \mathbb {R} \quad x<y.$
The same will then also hold for hyperreals:
$\forall x\in {}^{\star }\mathbb {R} \quad \exists y\in {}^{\star }\mathbb {R} \quad x<y.$
Another example is the statement that if you add 1 to a number you get a bigger number:
$\forall x\in \mathbb {R} \quad x<x+1$
which will also hold for hyperreals:
$\forall x\in {}^{\star }\mathbb {R} \quad x<x+1.$
The correct general statement that formulates these equivalences is called the transfer principle. Note that, in many formulas in analysis, quantification is over higher-order objects such as functions and sets, which makes the transfer principle somewhat more subtle than the above examples suggest.
Differences between R and *R
The transfer principle does not, however, mean that R and *R have identical behavior. For instance, in *R there exists an element ω such that
$1<\omega ,\quad 1+1<\omega ,\quad 1+1+1<\omega ,\quad 1+1+1+1<\omega ,\ldots $
but there is no such number in R. This is possible because the nonexistence of this number cannot be expressed as a first-order statement of the above type. A hyperreal number like ω is called infinitely large; the reciprocals of the infinitely large numbers are the infinitesimals.
The hyperreals *R form an ordered field containing the reals R as a subfield. Unlike the reals, the hyperreals do not form a standard metric space, but by virtue of their order they carry an order topology.
Constructions of the hyperreals
The hyperreals can be developed either axiomatically or by more constructively oriented methods. The essence of the axiomatic approach is to assert (1) the existence of at least one infinitesimal number, and (2) the validity of the transfer principle. In the following subsection we give a detailed outline of a more constructive approach. This method allows one to construct the hyperreals if given a set-theoretic object called an ultrafilter, but the ultrafilter itself cannot be explicitly constructed. Vladimir Kanovei and Shelah[3] give a construction of a definable, countably saturated elementary extension of the structure consisting of the reals and all finitary relations on it.
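The flavor of the sequence construction can be conveyed by a toy model. The sketch below (invented for this text; the class name and horizon parameter are assumptions) represents would-be hyperreals as sequences of rationals. A genuine construction quotients by a nonprincipal ultrafilter, which cannot be exhibited explicitly, so this toy only decides comparisons that hold for all sufficiently large indices (the Fréchet filter), checked up to a finite horizon — in particular, transfer fails for it:

```python
from fractions import Fraction

class Seq:
    """A sequence of rationals standing in for a hyperreal.

    The genuine construction quotients such sequences by a nonprincipal
    ultrafilter, which cannot be written down explicitly. This toy only
    decides comparisons that hold for all sufficiently large n (the
    Frechet filter), and only checks them up to a finite horizon.
    """
    def __init__(self, f):
        self.f = f

    def eventually_lt(self, other, start, horizon=5000):
        # True when self(n) < other(n) for all n in [start, horizon).
        return all(self.f(n) < other.f(n) for n in range(start, horizon))

def const(c):
    return Seq(lambda n: Fraction(c))

omega = Seq(lambda n: Fraction(n))      # an infinitely large element
eps = Seq(lambda n: Fraction(1, n))     # its infinitesimal reciprocal

# omega eventually exceeds every standard integer k ...
assert all(const(k).eventually_lt(omega, start=k + 1) for k in range(1, 50))
# ... and eps is eventually below every standard 1/k:
assert all(eps.eventually_lt(const(Fraction(1, k)), start=k + 1)
           for k in range(1, 50))
```

The ultrafilter's role in the real construction is precisely to decide the comparisons this toy leaves undecided (sequences that oscillate), so that the quotient becomes a totally ordered field.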
In its most general form, transfer is a bounded elementary embedding between structures.
Statement
The ordered field *R of nonstandard real numbers properly includes the real field R. Like all ordered fields that properly include R, this field is non-Archimedean. This means that some members x ≠ 0 of *R are infinitesimal, i.e.,
$\underbrace {\left|x\right|+\cdots +\left|x\right|} _{n{\text{ terms}}}<1{\text{ for every finite cardinal number }}n.$
The only infinitesimal in R is 0. Some other members of *R, the reciprocals y of the nonzero infinitesimals, are infinite, i.e.,
$\underbrace {1+\cdots +1} _{n{\text{ terms}}}<\left|y\right|{\text{ for every finite cardinal number }}n.$
The underlying set of the field *R is the image of R under a mapping A ↦ *A from subsets A of R to subsets of *R. In every case
$A\subseteq {^{*}\!A},$
with equality if and only if A is finite. Sets of the form *A for some $\scriptstyle A\,\subseteq \,\mathbb {R} $ are called standard subsets of *R. The standard sets belong to a much larger class of subsets of *R called internal sets. Similarly each function
$f:A\rightarrow \mathbb {R} $
extends to a function
${^{*}\!f}:{^{*}\!A}\rightarrow {^{*}\mathbb {R} };$
these are called standard functions, and belong to the much larger class of internal functions. Sets and functions that are not internal are external.
The importance of these concepts stems from their role in the following proposition and is illustrated by the examples that follow it.
The transfer principle:
• Suppose a proposition about R can be expressed via functions of finitely many variables (e.g. (x, y) ↦ x + y), relations among finitely many variables (e.g. x ≤ y), finitary logical connectives such as and, or, not, if...then..., and the quantifiers
$\forall x\in \mathbb {R} {\text{ and }}\exists x\in \mathbb {R} .$
For example, one such proposition is
$\forall x\in \mathbb {R} \ \exists y\in \mathbb {R} \ x+y=0.$
Such a proposition is true in R if and only if it is true in *R when the quantifier
$\forall x\in {^{*}\!\mathbb {R} }$
replaces
$\forall x\in \mathbb {R} ,$
and similarly for $\exists $.
• Suppose a proposition otherwise expressible as simply as those considered above mentions some particular sets $\scriptstyle A\,\subseteq \,\mathbb {R} $. Such a proposition is true in R if and only if it is true in *R with each such "A" replaced by the corresponding *A. Here are two examples:
• The set
${}^{*}[0,1]={}^{*}\{\,x\in \mathbb {R} :0\leq x\leq 1\,\}$
must be
$\{\,x\in {^{*}\mathbb {R} }:0\leq x\leq 1\,\},$
including not only members of R between 0 and 1 inclusive, but also members of *R between 0 and 1 that differ from those by infinitesimals. To see this, observe that the sentence
$\forall x\in \mathbb {R} \ (x\in [0,1]{\text{ if and only if }}0\leq x\leq 1)$
is true in R, and apply the transfer principle.
• The set *N must have no upper bound in *R (since the sentence expressing the non-existence of an upper bound of N in R is simple enough for the transfer principle to apply to it) and must contain n + 1 if it contains n, but must not contain anything between n and n + 1. Members of
${^{*}\mathbb {N} }\setminus \mathbb {N} $
are "infinite integers".)
• Suppose a proposition otherwise expressible as simply as those considered above contains the quantifier
$\forall A\subseteq \mathbb {R} \dots {\text{ or }}\exists A\subseteq \mathbb {R} \dots \ .$
Such a proposition is true in R if and only if it is true in *R after the changes specified above and the replacement of the quantifiers with
$[\forall {\text{ internal }}A\subseteq {^{*}\mathbb {R} }\dots ]$
and
$[\exists {\text{ internal }}A\subseteq {^{*}\mathbb {R} }\dots ]\ .$
Three examples
The appropriate setting for the hyperreal transfer principle is the world of internal entities. Thus, the well-ordering property of the natural numbers by transfer yields the fact that every nonempty internal subset of ${}^{*}\mathbb {N} $ has a least element. In this section internal sets are discussed in more detail.
• Every nonempty internal subset of *R that has an upper bound in *R has a least upper bound in *R. Consequently the set of all infinitesimals is external.
• The well-ordering principle implies every nonempty internal subset of *N has a smallest member. Consequently the set
${^{*}\mathbb {N} }\setminus \mathbb {N} $
of all infinite integers is external.
• If n is an infinite integer, then the set {1, ..., n} (which is not standard) must be internal. To prove this, first observe that the following is trivially true:
$\forall n\in \mathbb {N} \ \exists A\subseteq \mathbb {N} \ \forall x\in \mathbb {N} \ [x\in A{\text{ iff }}x\leq n].$
Consequently
$\forall n\in {^{*}\mathbb {N} }\ \exists {\text{ internal }}A\subseteq {^{*}\mathbb {N} }\ \forall x\in {^{*}\mathbb {N} }\ [x\in A{\text{ iff }}x\leq n].$
• As with internal sets, so with internal functions: Replace
$\forall f:A\rightarrow \mathbb {R} \dots $
with
$\forall {\text{ internal }}f:{^{*}\!A}\rightarrow {^{*}\mathbb {R} }\dots $
when applying the transfer principle, and similarly with $\exists $ in place of $\forall $.
For example: If n is an infinite integer, then the complement of the image of any internal one-to-one function ƒ from the infinite set {1, ..., n} into {1, ..., n, n + 1, n + 2, n + 3} has exactly three members by the transfer principle. Because of the infiniteness of the domain, the complements of the images of one-to-one functions from the former set to the latter come in many sizes, but most of these functions are external.
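For finite n the counting fact in this example is elementary, and transfer is what extends it to infinite hyperintegers. A brute-force check for a small n (the value n = 4 is an arbitrary choice made for illustration):

```python
from itertools import permutations

n = 4
codomain = range(1, n + 4)               # {1, ..., n+3}

# Every injective f : {1,...,n} -> {1,...,n+3} misses exactly 3 elements
# of the codomain; permutations(codomain, n) enumerates the injections.
for image in permutations(codomain, n):
    assert len(set(codomain) - set(image)) == 3
```

By transfer, the same statement with "internal" inserted holds for every hyperinteger n, even though the analogous fact for external functions between infinite sets fails.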
This last example motivates an important definition: A *-finite (pronounced star-finite) subset of *R is one that can be placed in internal one-to-one correspondence with {1, ..., n} for some n ∈ *N.
See also
• Elementary Calculus: An Infinitesimal Approach
• Principle of Permanence
• Generality of algebra
Notes
1. Keisler, H. Jerome. "Elementary Calculus: An Infinitesimal Approach". p. 902.
2. Robinson, A. The metaphysics of the calculus, in Problems in the Philosophy of Mathematics, ed. Lakatos (Amsterdam: North Holland), pp. 28–46, 1967. Reprinted in the 1979 Collected Works. Page 29.
3. Kanovei, Vladimir; Shelah, Saharon (2004), "A definable nonstandard model of the reals" (PDF), Journal of Symbolic Logic, 69: 159–164, arXiv:math/0311165, doi:10.2178/jsl/1080938834
References
• Chang, Chen Chung; Keisler, H. Jerome (1990) [1973], Model Theory, Studies in Logic and the Foundations of Mathematics (3rd ed.), Elsevier, ISBN 978-0-444-88054-3
• Hardy, Michael: "Scaled Boolean algebras". Adv. in Appl. Math. 29 (2002), no. 2, 243–292.
• Kanovei, Vladimir; Shelah, Saharon (2004), "A definable nonstandard model of the reals", Journal of Symbolic Logic, 69: 159–164, arXiv:math/0311165, doi:10.2178/jsl/1080938834
• Keisler, H. Jerome (2000). "Elementary Calculus: An Infinitesimal Approach".
• Kuhlmann, F.-V. (2001) [1994], "Transfer principle", Encyclopedia of Mathematics, EMS Press
• Łoś, Jerzy (1955) Quelques remarques, théorèmes et problèmes sur les classes définissables d'algèbres. Mathematical interpretation of formal systems, pp. 98–113. North-Holland Publishing Co., Amsterdam.
• Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3, MR 0205854
Conformal radius
In mathematics, the conformal radius is a way to measure the size of a simply connected planar domain D viewed from a point z in it. As opposed to notions using Euclidean distance (say, the radius of the largest inscribed disk with center z), this notion is well-suited to use in complex analysis, in particular in conformal maps and conformal geometry.
A closely related notion is the transfinite diameter or (logarithmic) capacity of a compact simply connected set D, which can be considered as the inverse of the conformal radius of the complement $E=D^{c}$ viewed from infinity.
Definition
Given a simply connected domain D ⊂ C, and a point z ∈ D, by the Riemann mapping theorem there exists a unique conformal map $f:D\to \mathbb {D} $ onto the unit disk $\mathbb {D} $ (usually referred to as the uniformizing map) with $f(z)=0$ and $f'(z)\in \mathbb {R} _{+}$. The conformal radius of D from z is then defined as
$\mathrm {rad} (z,D):={\frac {1}{f'(z)}}\,.$
The simplest example is that the conformal radius of the disk of radius r viewed from its center is also r, shown by the uniformizing map x ↦ x/r. See below for more examples.
One reason for the usefulness of this notion is that it behaves well under conformal maps: if φ : D → D′ is a conformal bijection and z in D, then $\mathrm {rad} (\varphi (z),D')=|\varphi '(z)|\,\mathrm {rad} (z,D)$.
The conformal radius can also be expressed as $\exp(\xi _{x}(x))$ where $\xi _{x}(y)$ is the harmonic extension of $\log(|x-y|)$ from $\partial D$ to $D$.
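As a numerical sanity check on the definition (an illustration written for this text; the Möbius uniformizer and the finite-difference step size are choices made here), the conformal radius of the unit disk seen from z is $1-|z|^{2}$: the disk automorphism $w\mapsto (w-z)/(1-{\bar {z}}w)$ sends z to 0 with derivative $1/(1-|z|^{2})>0$, so no extra rotation is needed:

```python
def uniformizer(z):
    """Disk automorphism sending z to 0, with f'(z) = 1/(1-|z|^2) > 0."""
    zc = z.conjugate()
    return lambda w: (w - z) / (1 - zc * w)

def conformal_radius_disk(z, h=1e-6):
    # rad(z, D) = 1/f'(z); approximate f'(z) by a central difference.
    f = uniformizer(z)
    fprime = (f(z + h) - f(z - h)) / (2 * h)
    return 1 / fprime.real

# Check rad(z, unit disk) = 1 - |z|^2 at a few points.
for z in [0j, 0.3 + 0.4j, -0.5j]:
    assert abs(conformal_radius_disk(z) - (1 - abs(z) ** 2)) < 1e-6
```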
A special case: the upper-half plane
Let K ⊂ H be a subset of the upper half-plane such that D := H\K is connected and simply connected, and let z ∈ D be a point. (This is a usual scenario, say, in the Schramm-Loewner evolution). By the Riemann mapping theorem, there is a conformal bijection g : D → H. Then, for any such map g, a simple computation gives that
$\mathrm {rad} (z,D)={\frac {2\,\mathrm {Im} (g(z))}{|g'(z)|}}\,.$
For example, when K = ∅ and z = i, then g can be the identity map, and we get rad(i, H) = 2. Checking that this agrees with the original definition: the uniformizing map $f:\mathbb {H} \to \mathbb {D} $ is
$f(z)=i{\frac {z-i}{z+i}},$
and then the derivative can be easily calculated.
Relation to inradius
That it is a good measure of radius is shown by the following immediate consequence of the Schwarz lemma and the Koebe 1/4 theorem: for z ∈ D ⊂ C,
${\frac {\mathrm {rad} (z,D)}{4}}\leq \mathrm {dist} (z,\partial D)\leq \mathrm {rad} (z,D),$
where dist(z, ∂D) denotes the Euclidean distance between z and the boundary of D, or in other words, the radius of the largest inscribed disk with center z.
Both inequalities are best possible:
The upper bound is clearly attained by taking $D=\mathbb {D} $ (the unit disk itself) and z = 0.
The lower bound is attained by the following “slit domain”: D = C\R+ and z = −r ∈ R−. The square root map φ takes D onto the upper half-plane H, with $\varphi (-r)=i{\sqrt {r}}$ and derivative $|\varphi '(-r)|={\frac {1}{2{\sqrt {r}}}}$. The above formula for the upper half-plane gives $\mathrm {rad} (i{\sqrt {r}},\mathbb {H} )=2{\sqrt {r}}$, and then the formula for transformation under conformal maps gives rad(−r, D) = 4r, while, of course, dist(−r, ∂D) = r.
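The slit-domain computation can be checked numerically. The sketch below (an illustration written for this text; the branch choice $\varphi (z)=i{\sqrt {-z}}$, which avoids the principal square root's cut on the negative axis, and the finite-difference step are assumptions made here) uses the half-plane formula $\mathrm {rad} (z,D)=2\,\mathrm {Im} (g(z))/|g'(z)|$:

```python
import cmath

def rad_from_H_map(g, z, h=1e-7):
    """rad(z, D) = 2*Im(g(z))/|g'(z)| for a conformal g : D -> H,
    with g'(z) approximated by a central difference."""
    gp = (g(z + h) - g(z - h)) / (2 * h)
    return 2 * g(z).imag / abs(gp)

# Slit domain D = C \ [0, oo); phi(z) = i*sqrt(-z) maps D onto H.
phi = lambda z: 1j * cmath.sqrt(-z)

for r in [0.25, 1.0, 3.0]:
    z = complex(-r, 0)
    rad = rad_from_H_map(phi, z)
    dist = r                       # distance from -r to the slit
    assert abs(rad - 4 * r) < 1e-4         # rad(-r, D) = 4r
    assert rad / 4 <= dist + 1e-6 <= rad   # Koebe bounds, lower one tight
```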
Version from infinity: transfinite diameter and logarithmic capacity
When D ⊂ C is a connected, simply connected compact set, then its complement E = Dc is a connected, simply connected domain in the Riemann sphere that contains ∞, and one can define
$\mathrm {rad} (\infty ,D):={\frac {1}{\mathrm {rad} (\infty ,E)}}:=\lim _{z\to \infty }{\frac {f(z)}{z}},$
where f : C\D → E is the unique bijective conformal map with f(∞) = ∞ and that limit being positive real, i.e., the conformal map of the form
$f(z)=c_{1}z+c_{0}+c_{-1}z^{-1}+\dots ,\qquad c_{1}\in \mathbf {R} _{+}.$
The coefficient c1 = rad(∞, D) equals the transfinite diameter and the (logarithmic) capacity of D; see Chapter 11 of Pommerenke (1975) and Kuz′mina (2002). See also the article on the capacity of a set.
The coefficient c0 is called the conformal center of D. It can be shown to lie in the convex hull of D; moreover,
$D\subseteq \{z:|z-c_{0}|\leq 2c_{1}\}\,,$
where the radius 2c1 is sharp for the straight line segment of length 4c1. See pages 12–13 and Chapter 11 of Pommerenke (1975).
The Fekete, Chebyshev and modified Chebyshev constants
We define three other quantities that are equal to the transfinite diameter even though they are defined from a very different point of view. Let
$d(z_{1},\ldots ,z_{k}):=\prod _{1\leq i<j\leq k}|z_{i}-z_{j}|$
denote the product of pairwise distances of the points $z_{1},\ldots ,z_{k}$ and let us define the following quantity for a compact set D ⊂ C:
$d_{n}(D):=\sup _{z_{1},\ldots ,z_{n}\in D}d(z_{1},\ldots ,z_{n})^{\frac {1}{\binom {n}{2}}}$
In other words, $d_{n}(D)$ is the supremum of the geometric mean of pairwise distances of n points in D. Since D is compact, this supremum is actually attained by a set of points. Any such n-point set is called a Fekete set.
The limit $d(D):=\lim _{n\to \infty }d_{n}(D)$ exists and it is called the Fekete constant.
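For the closed unit disk these quantities can be made explicit: equally spaced points on the circle are Fekete sets, and the product of pairwise distances among the nth roots of unity is $n^{n/2}$, so $d_{n}=n^{1/(n-1)}\to 1$, the transfinite diameter of the disk. A quick numerical check (an illustration written for this text):

```python
import cmath
import math

def pairwise_product(pts):
    # Product of |z_i - z_j| over all pairs i < j.
    return math.prod(abs(pts[i] - pts[j])
                     for i in range(len(pts))
                     for j in range(i + 1, len(pts)))

for n in [2, 5, 20, 50]:
    roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]
    d_n = pairwise_product(roots) ** (1 / math.comb(n, 2))
    # Product over pairs of nth roots of unity is n^(n/2),
    # hence d_n = n^(1/(n-1)), which tends to 1 as n grows.
    assert abs(d_n - n ** (1 / (n - 1))) < 1e-9
```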
Now let ${\mathcal {P}}_{n}$ denote the set of all monic polynomials of degree n in C[x], let ${\mathcal {Q}}_{n}$ denote the set of polynomials in ${\mathcal {P}}_{n}$ with all zeros in D and let us define
$\mu _{n}(D):=\inf _{p\in {\mathcal {P}}_{n}}\sup _{z\in D}|p(z)|$ and ${\tilde {\mu }}_{n}(D):=\inf _{p\in {\mathcal {Q}}_{n}}\sup _{z\in D}|p(z)|$
Then the limits
$\mu (D):=\lim _{n\to \infty }\mu _{n}(D)^{\frac {1}{n}}$ and ${\tilde {\mu }}(D):=\lim _{n\to \infty }{\tilde {\mu }}_{n}(D)^{\frac {1}{n}}$
exist and they are called the Chebyshev constant and modified Chebyshev constant, respectively. Michael Fekete and Gábor Szegő proved that these constants are equal.
Applications
The conformal radius is a very useful tool, e.g., when working with the Schramm-Loewner evolution. A beautiful instance can be found in Lawler, Schramm & Werner (2002).
References
• Ahlfors, Lars V. (1973). Conformal invariants: topics in geometric function theory. Series in Higher Mathematics. McGraw-Hill. MR 0357743. Zbl 0272.30012.
• Horváth, János, ed. (2005). A Panorama of Hungarian Mathematics in the Twentieth Century, I. Bolyai Society Mathematical Studies. Springer. ISBN 3-540-28945-3.
• Kuz′mina, G. V. (2002) [1994], "Conformal radius of a domain", Encyclopedia of Mathematics, EMS Press
• Lawler, Gregory F.; Schramm, Oded; Werner, Wendelin (2002), "One-arm exponent for critical 2D percolation", Electronic Journal of Probability, 7 (2): 13 pp., arXiv:math/0108211, doi:10.1214/ejp.v7-101, ISSN 1083-6489, MR 1887622, Zbl 1015.60091
• Pommerenke, Christian (1975). Univalent functions. Studia Mathematica/Mathematische Lehrbücher. Vol. Band XXV. With a chapter on quadratic differentials by Gerd Jensen. Göttingen: Vandenhoeck & Ruprecht. Zbl 0298.30014.
Further reading
• Rumely, Robert S. (1989), Capacity theory on algebraic curves, Lecture Notes in Mathematics, vol. 1378, Berlin etc.: Springer-Verlag, ISBN 3-540-51410-4, Zbl 0679.14012
External links
• Pooh, Charles, Conformal radius. From MathWorld — A Wolfram Web Resource, created by Eric W. Weisstein.
Transfinite induction
Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers. Its correctness is a theorem of ZFC.[1]
Induction by cases
Let $P(\alpha )$ be a property defined for all ordinals $\alpha $. Suppose that whenever $P(\beta )$ is true for all $\beta <\alpha $, then $P(\alpha )$ is also true.[2] Then transfinite induction tells us that $P$ is true for all ordinals.
Usually the proof is broken down into three cases:
• Zero case: Prove that $P(0)$ is true.
• Successor case: Prove that for any successor ordinal $\alpha +1$, $P(\alpha +1)$ follows from $P(\alpha )$ (and, if necessary, $P(\beta )$ for all $\beta <\alpha $).
• Limit case: Prove that for any limit ordinal $\lambda $, $P(\lambda )$ follows from $P(\beta )$ for all $\beta <\lambda $.
All three cases are identical except for the type of ordinal considered. They do not formally need to be considered separately, but in practice the proofs are typically so different as to require separate presentations. Zero is sometimes considered a limit ordinal and then may sometimes be treated in proofs in the same case as limit ordinals.
Transfinite recursion
Transfinite recursion is similar to transfinite induction; however, instead of proving that something holds for all ordinal numbers, we construct a sequence of objects, one for each ordinal.
As an example, a basis for a (possibly infinite-dimensional) vector space can be created by starting with the empty set and for each ordinal α > 0 choosing a vector that is not in the span of the vectors $\{v_{\beta }\mid \beta <\alpha \}$. This process stops when no vector can be chosen.
More formally, we can state the Transfinite Recursion Theorem as follows:
Transfinite Recursion Theorem (version 1). Given a class function[3] G: V → V (where V is the class of all sets), there exists a unique transfinite sequence F: Ord → V (where Ord is the class of all ordinals) such that
$F(\alpha )=G(F\upharpoonright \alpha )$ for all ordinals α, where $\upharpoonright $ denotes the restriction of F's domain to ordinals < α.
As in the case of induction, we may treat different types of ordinals separately: another formulation of transfinite recursion is the following:
Transfinite Recursion Theorem (version 2). Given a set g1, and class functions G2, G3, there exists a unique function F: Ord → V such that
• F(0) = g1,
• F(α + 1) = G2(F(α)), for all α ∈ Ord,
• $F(\lambda )=G_{3}(F\upharpoonright \lambda )$, for all limit λ ≠ 0.
Note that we require the domains of G2, G3 to be broad enough to make the above properties meaningful. The uniqueness of the sequence satisfying these properties can be proved using transfinite induction.
More generally, one can define objects by transfinite recursion on any well-founded relation R. (R need not even be a set; it can be a proper class, provided it is a set-like relation; i.e. for any x, the collection of all y such that yRx is a set.)
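For a finite, set-like well-founded relation, transfinite recursion reduces to ordinary well-founded recursion. A minimal Python sketch (the relation `pred` is hypothetical example data) computes the ordinal rank, rank(x) = sup { rank(y) + 1 : y R x }:

```python
# A finite well-founded relation R, given by predecessor sets
# pred[x] = {y : y R x}.  (Hypothetical example data.)
pred = {
    "a": set(),          # minimal element: rank 0
    "b": {"a"},
    "c": {"a"},
    "d": {"b", "c"},     # rank(d) = max(rank(b), rank(c)) + 1
}

def rank(x):
    """rank(x) = sup { rank(y) + 1 : y R x }, with sup of the empty set = 0."""
    return max((rank(y) + 1 for y in pred[x]), default=0)

print([rank(x) for x in "abcd"])  # [0, 1, 1, 2]
```

Well-foundedness guarantees the recursion terminates: every chain of predecessors ends at a minimal element.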
Relationship to the axiom of choice
Proofs or constructions using induction and recursion often use the axiom of choice to produce a well-ordered relation that can be treated by transfinite induction. However, if the relation in question is already well-ordered, one can often use transfinite induction without invoking the axiom of choice.[4] For example, many results about Borel sets are proved by transfinite induction on the ordinal rank of the set; these ranks are already well-ordered, so the axiom of choice is not needed to well-order them.
The following construction of the Vitali set shows one way that the axiom of choice can be used in a proof by transfinite induction:
First, well-order the real numbers (this is where the axiom of choice enters via the well-ordering theorem), giving a sequence $\langle r_{\alpha }\mid \alpha <\beta \rangle $, where β is an ordinal with the cardinality of the continuum. Let v0 equal r0. Then let v1 equal rα1, where α1 is least such that rα1 − v0 is not a rational number. Continue; at each step use the least real from the r sequence that does not have a rational difference with any element thus far constructed in the v sequence. Continue until all the reals in the r sequence are exhausted. The final v sequence will enumerate the Vitali set.
The above argument uses the axiom of choice in an essential way at the very beginning, in order to well-order the reals. After that step, the axiom of choice is not used again.
Other uses of the axiom of choice are more subtle. For example, a construction by transfinite recursion frequently will not specify a unique value for Aα+1, given the sequence up to α, but will specify only a condition that Aα+1 must satisfy, and argue that there is at least one set satisfying this condition. If it is not possible to define a unique example of such a set at each stage, then it may be necessary to invoke (some form of) the axiom of choice to select one such at each step. For inductions and recursions of countable length, the weaker axiom of dependent choice is sufficient. Because there are models of Zermelo–Fraenkel set theory of interest to set theorists that satisfy the axiom of dependent choice but not the full axiom of choice, the knowledge that a particular proof only requires dependent choice can be useful.
See also
• Mathematical induction
• ∈-induction
• Transfinite number
• Well-founded induction
• Zorn's lemma
Notes
1. J. Schlöder, Ordinal Arithmetic. Accessed 2022-03-24.
2. It is not necessary here to assume separately that $P(0)$ is true. As there is no $\beta $ less than 0, it is vacuously true that for all $\beta <0$, $P(\beta )$ is true.
3. A class function is a rule (specifically, a logical formula) assigning each element in the lefthand class to an element in the righthand class. It is not a function because its domain and codomain are not sets.
4. In fact, the domain of the relation does not even need to be a set. It can be a proper class, provided that the relation R is set-like: for any x, the collection of all y such that y R x must be a set.
References
• Suppes, Patrick (1972), "Section 7.1", Axiomatic set theory, Dover Publications, ISBN 0-486-61630-4
External links
• Emerson, Jonathan; Lezama, Mark & Weisstein, Eric W. "Transfinite Induction". MathWorld.
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Set theory
Overview
• Set (mathematics)
Axioms
• Adjunction
• Choice
• countable
• dependent
• global
• Constructibility (V=L)
• Determinacy
• Extensionality
• Infinity
• Limitation of size
• Pairing
• Power set
• Regularity
• Union
• Martin's axiom
• Axiom schema
• replacement
• specification
Operations
• Cartesian product
• Complement (i.e. set difference)
• De Morgan's laws
• Disjoint union
• Identities
• Intersection
• Power set
• Symmetric difference
• Union
• Concepts
• Methods
• Almost
• Cardinality
• Cardinal number (large)
• Class
• Constructible universe
• Continuum hypothesis
• Diagonal argument
• Element
• ordered pair
• tuple
• Family
• Forcing
• One-to-one correspondence
• Ordinal number
• Set-builder notation
• Transfinite induction
• Venn diagram
Set types
• Amorphous
• Countable
• Empty
• Finite (hereditarily)
• Filter
• base
• subbase
• Ultrafilter
• Fuzzy
• Infinite (Dedekind-infinite)
• Recursive
• Singleton
• Subset · Superset
• Transitive
• Uncountable
• Universal
Theories
• Alternative
• Axiomatic
• Naive
• Cantor's theorem
• Zermelo
• General
• Principia Mathematica
• New Foundations
• Zermelo–Fraenkel
• von Neumann–Bernays–Gödel
• Morse–Kelley
• Kripke–Platek
• Tarski–Grothendieck
• Paradoxes
• Problems
• Russell's paradox
• Suslin's problem
• Burali-Forti paradox
Set theorists
• Paul Bernays
• Georg Cantor
• Paul Cohen
• Richard Dedekind
• Abraham Fraenkel
• Kurt Gödel
• Thomas Jech
• John von Neumann
• Willard Quine
• Bertrand Russell
• Thoralf Skolem
• Ernst Zermelo
| Wikipedia |
Transfinite interpolation
In numerical analysis, transfinite interpolation is a means to construct functions over a planar domain in such a way that they match a given function on the boundary. This method is applied in geometric modelling and in the finite element method.[1]
The transfinite interpolation method, first introduced by William J. Gordon and Charles A. Hall,[2] receives its name from the fact that a function in this class matches the primitive function at a nondenumerable number of points.[3] In the authors' words:
We use the term ‘transfinite’ to describe the general class of interpolation schemes studied herein since, unlike the classical methods of higher dimensional interpolation which match the primitive function F at a finite number of distinct points, these methods match F at a non-denumerable (transfinite) number of points.
Transfinite interpolation is similar to the Coons patch, invented in 1967.[4]
Formula
With parametrized curves ${\vec {c}}_{1}(u)$, ${\vec {c}}_{3}(u)$ describing one pair of opposite sides of a domain, and ${\vec {c}}_{2}(v)$, ${\vec {c}}_{4}(v)$ describing the other pair, the position of the point (u,v) in the domain is
${\begin{array}{rcl}{\vec {S}}(u,v)&=&(1-v){\vec {c}}_{1}(u)+v{\vec {c}}_{3}(u)+(1-u){\vec {c}}_{2}(v)+u{\vec {c}}_{4}(v)\\&&-\left[(1-u)(1-v){\vec {P}}_{1,2}+uv{\vec {P}}_{3,4}+u(1-v){\vec {P}}_{1,4}+(1-u)v{\vec {P}}_{3,2}\right]\end{array}}$
where, e.g., ${\vec {P}}_{1,2}$ is the point where curves ${\vec {c}}_{1}$ and ${\vec {c}}_{2}$ meet.
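A minimal Python sketch of this formula, with boundary curves given as functions returning coordinate pairs and the corner points read off the curves themselves (assuming the sides meet consistently):

```python
def tfi(c1, c2, c3, c4, u, v):
    """Transfinite interpolation of four boundary curves.
    c1(u), c3(u): one pair of opposite sides; c2(v), c4(v): the other."""
    P12, P14 = c1(0.0), c1(1.0)   # corners where c1 meets c2 and c4
    P32, P34 = c3(0.0), c3(1.0)   # corners where c3 meets c2 and c4
    return tuple(
        (1 - v) * c1(u)[i] + v * c3(u)[i]
        + (1 - u) * c2(v)[i] + u * c4(v)[i]
        - ((1 - u) * (1 - v) * P12[i] + u * v * P34[i]
           + u * (1 - v) * P14[i] + (1 - u) * v * P32[i])
        for i in range(2)
    )

# Unit square with straight sides: the interpolant is the identity map.
c1 = lambda u: (u, 0.0); c3 = lambda u: (u, 1.0)   # bottom, top
c2 = lambda v: (0.0, v); c4 = lambda v: (1.0, v)   # left, right
x, y = tfi(c1, c2, c3, c4, 0.3, 0.7)
print(x, y)  # 0.3 0.7 (up to rounding)
```

With straight boundary curves this reduces to bilinear interpolation, which makes the sketch easy to sanity-check.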
References
1. Dyken, Christopher; Floater, Michael S. (2009). "Transfinite mean value interpolation". Computer Aided Geometric Design. 1 (26): 117–134. CiteSeerX 10.1.1.137.4822. doi:10.1016/j.cagd.2007.12.003.
2. Gordon, William; Hall, Charles (1973). "Construction of curvilinear coordinate systems and application to mesh generation". International Journal for Numerical Methods in Engineering. 7 (4): 461–477. doi:10.1002/nme.1620070405.
3. Gordon, William; Thiel, Linda (1982). "Transfinite mapping and their application to grid generation". Applied Mathematics and Computation. 10–11 (10): 171–233. doi:10.1016/0096-3003(82)90191-6.
4. Steven A. Coons, Surfaces for computer-aided design of space forms, Technical Report MAC-TR-41, Project MAC, MIT, June 1967.
| Wikipedia |
Transform theory
In mathematics, transform theory is the study of transforms, which relate a function in one domain to another function in a second domain. The essence of transform theory is that by a suitable choice of basis for a vector space a problem may be simplified—or diagonalized as in spectral theory.
Spectral theory
In spectral theory, the spectral theorem says that if A is an n×n self-adjoint matrix, there is an orthonormal basis of eigenvectors of A. This implies that A is diagonalizable.
Furthermore, each eigenvalue is real.
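For the 2×2 case the theorem can be verified directly. A small Python sketch builds an orthonormal eigenbasis of a symmetric matrix [[a, b], [b, c]] (the rotation-angle formula below assumes b ≠ 0):

```python
import math

a, b, c = 2.0, 1.0, 2.0          # entries of a symmetric matrix [[a, b], [b, c]]

# Real eigenvalues from the characteristic polynomial.
mean, dev = (a + c) / 2, math.hypot((a - c) / 2, b)
lam1, lam2 = mean - dev, mean + dev            # here: 1.0 and 3.0

# Orthonormal eigenvectors: rotate the standard basis by angle t.
t = 0.5 * math.atan2(2 * b, a - c)
q1 = (-math.sin(t), math.cos(t))               # eigenvector for lam1
q2 = (math.cos(t), math.sin(t))                # eigenvector for lam2

def matvec(v):
    return (a * v[0] + b * v[1], b * v[0] + c * v[1])

# A q_i = lam_i q_i, and the eigenbasis is orthonormal.
assert all(abs(matvec(q1)[i] - lam1 * q1[i]) < 1e-12 for i in range(2))
assert all(abs(matvec(q2)[i] - lam2 * q2[i]) < 1e-12 for i in range(2))
assert abs(q1[0] * q2[0] + q1[1] * q2[1]) < 1e-12
```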
Transforms
• Laplace transform
• Fourier transform
• Hankel transform
• Joukowsky transform
• Mellin transform
• Z-transform
References
• Keener, James P. 2000. Principles of Applied Mathematics: Transformation and Approximation. Cambridge: Westview Press. ISBN 0-7382-0129-4
| Wikipedia |
Geometric transformation
In mathematics, a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning. More specifically, it is a function whose domain and range are sets of points — most often both $\mathbb {R} ^{2}$ or both $\mathbb {R} ^{3}$ — such that the function is bijective so that its inverse exists.[1] The study of geometry may be approached by the study of these transformations.[2]
Not to be confused with Transformation geometry.
For broader coverage of this topic, see Transformation (mathematics).
Classifications
Geometric transformations can be classified by the dimension of their operand sets (thus distinguishing between, say, planar transformations and spatial transformations). They can also be classified according to the properties they preserve:
• Displacements preserve distances and oriented angles (e.g., translations);[3]
• Isometries preserve angles and distances (e.g., Euclidean transformations);[4][5]
• Similarities preserve angles and ratios between distances (e.g., resizing);[6]
• Affine transformations preserve parallelism (e.g., scaling, shear);[5][7]
• Projective transformations preserve collinearity;[8]
Each of these classes contains the previous one.[8]
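These invariants are easy to test numerically. A short Python sketch applies a shear, an affine map that keeps parallel segments parallel while changing lengths, so it is affine but not an isometry:

```python
def shear(p, k=0.5):
    """Affine map (x, y) -> (x + k*y, y): a shear along the x-axis."""
    return (p[0] + k * p[1], p[1])

def direction(p, q):
    return (q[0] - p[0], q[1] - p[1])

def cross(u, v):                  # zero iff u and v are parallel
    return u[0] * v[1] - u[1] * v[0]

a1, a2, b1, b2 = (0, 0), (1, 2), (3, 1), (4, 3)
assert cross(direction(a1, a2), direction(b1, b2)) == 0   # parallel segments

u = direction(shear(a1), shear(a2))
v = direction(shear(b1), shear(b2))
assert cross(u, v) == 0           # still parallel: parallelism is preserved

# but lengths change, so the shear is not an isometry
before = direction(a1, a2)
assert u[0] ** 2 + u[1] ** 2 != before[0] ** 2 + before[1] ** 2
```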
• Möbius transformations using complex coordinates on the plane (as well as circle inversion) preserve the set of all lines and circles, but may interchange lines and circles.
• Original image (based on the map of France)
• Isometry
• Similarity
• Affine transformation
• Projective transformation
• Inversion
• Conformal transformations preserve angles, and are, in the first order, similarities.
• Equiareal transformations preserve areas in the planar case or volumes in the three dimensional case,[9] and are, in the first order, affine transformations of determinant 1.
• Homeomorphisms (bicontinuous transformations) preserve the neighborhoods of points.
• Diffeomorphisms (bidifferentiable transformations) are the transformations that are affine in the first order; they contain the preceding ones as special cases, and can be further refined.
• Conformal transformation
• Equiareal transformation
• Homeomorphism
• Diffeomorphism
Transformations of the same type form groups that may be sub-groups of other transformation groups.
Opposite group actions
Main articles: Group action and Opposite group
Many geometric transformations are expressed with linear algebra. The bijective linear transformations are elements of a general linear group. The linear transformation A is non-singular. For a row vector v, the matrix product vA gives another row vector w = vA.
The transpose of a row vector v is a column vector vT, and the transpose of the above equality is $w^{T}=(vA)^{T}=A^{T}v^{T}.$ Here AT provides a left action on column vectors.
In transformation geometry there are compositions AB. Starting with a row vector v, the right action of the composed transformation is w = vAB. After transposition,
$w^{T}=(vAB)^{T}=(AB)^{T}v^{T}=B^{T}A^{T}v^{T}.$
Thus for AB the associated left group action is $B^{T}A^{T}.$ In the study of opposite groups, the distinction is made between opposite group actions because commutative groups are the only groups for which these opposites are equal.
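The relation between the right action on row vectors and the left action on column vectors can be checked with a small pure-Python sketch:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
v = [[5, 6]]                                  # a row vector

w = matmul(matmul(v, A), B)                   # right action: w = vAB
left = matmul(matmul(transpose(B), transpose(A)), transpose(v))
assert transpose(w) == left                   # w^T = B^T A^T v^T
print(w)  # [[34, 57]]
```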
See also
• Coordinate transformation
• Erlangen program
• Symmetry (geometry)
• Motion
• Reflection
• Rigid transformation
• Rotation
• Topology
• Transformation matrix
References
1. Usiskin, Zalman; Peressini, Anthony L.; Marchisotto, Elena; Stanley, Dick (2003). Mathematics for High School Teachers: An Advanced Perspective. Pearson Education. p. 84. ISBN 0-13-044941-5. OCLC 50004269.
2. Venema, Gerard A. (2006), Foundations of Geometry, Pearson Prentice Hall, p. 285, ISBN 9780131437005
3. "Geometry Translation". www.mathsisfun.com. Retrieved 2020-05-02.
4. "Geometric Transformations — Euclidean Transformations". pages.mtu.edu. Retrieved 2020-05-02.
5. Geometric transformation, p. 131, at Google Books
6. "Transformations". www.mathsisfun.com. Retrieved 2020-05-02.
7. "Geometric Transformations — Affine Transformations". pages.mtu.edu. Retrieved 2020-05-02.
8. Leland Wilkinson, D. Wills, D. Rope, A. Norton, R. Dubbs – Geometric transformation, p. 182, at Google Books
9. Bruce E. Meserve – Fundamental Concepts of Geometry, p. 191, at Google Books.
Further reading
Wikimedia Commons has media related to Transformations (geometry).
• Adler, Irving (2012) [1966], A New Look at Geometry, Dover, ISBN 978-0-486-49851-5
• Dienes, Z. P.; Golding, E. W. (1967). Geometry Through Transformations (3 vols.): Geometry of Distortion, Geometry of Congruence, and Groups and Coordinates. New York: Herder and Herder.
• David Gans – Transformations and geometries.
• Hilbert, David; Cohn-Vossen, Stephan (1952). Geometry and the Imagination (2nd ed.). Chelsea. ISBN 0-8284-1087-9.
• John McCleary (2013) Geometry from a Differentiable Viewpoint, Cambridge University Press ISBN 978-0-521-11607-7
• Modenov, P. S.; Parkhomenko, A. S. (1965). Geometric Transformations (2 vols.): Euclidean and Affine Transformations, and Projective Transformations. New York: Academic Press.
• A. N. Pressley – Elementary Differential Geometry.
• Yaglom, I. M. (1962, 1968, 1973, 2009). Geometric Transformations (4 vols.). Random House (I, II & III), MAA (I, II, III & IV).
| Wikipedia |
Transformation between distributions in time–frequency analysis
In the field of time–frequency analysis, several signal formulations are used to represent the signal in a joint time–frequency domain.[1]
There are several methods and transforms called "time-frequency distributions" (TFDs), whose interconnections were organized by Leon Cohen.[2][3][4][5] The most useful and popular methods form a class referred to as "quadratic" or bilinear time–frequency distributions. A core member of this class is the Wigner–Ville distribution (WVD), as all other TFDs can be written as smoothed or convolved versions of the WVD. Another popular member of this class is the spectrogram, which is the square of the magnitude of the short-time Fourier transform (STFT). The spectrogram has the advantage of being positive and easy to interpret, but it also has disadvantages, such as being irreversible: once the spectrogram of a signal is computed, the original signal cannot be recovered from it. The theory and methodology for defining a TFD that satisfies certain desirable properties is given in the "Theory of Quadratic TFDs".[6]
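A minimal numeric sketch of the spectrogram as the squared magnitude of the STFT, using a naive DFT and a rectangular window (for illustration only, not an efficient implementation):

```python
import cmath, math

def stft(s, win, hop):
    """Short-time Fourier transform via a naive DFT (illustration only)."""
    N = len(win)
    frames = []
    for start in range(0, len(s) - N + 1, hop):
        seg = [s[start + n] * win[n] for n in range(N)]
        frames.append([sum(seg[n] * cmath.exp(-2j * math.pi * k * n / N)
                           for n in range(N)) for k in range(N)])
    return frames

def spectrogram(s, win, hop):
    """Spectrogram = |STFT|^2: always non-negative."""
    return [[abs(X) ** 2 for X in frame] for frame in stft(s, win, hop)]

# A pure tone: 8 cycles over 64 samples -> peak at DFT bin 8*N/64 = 2.
N = 16
s = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
S = spectrogram(s, [1.0] * N, hop=N)
assert all(val >= 0 for row in S for val in row)       # positivity
assert max(range(N), key=S[0].__getitem__) in (2, N - 2)
```

The positivity checked here is exactly the interpretability advantage mentioned above; the phase of the STFT is discarded, which is why the original signal cannot be recovered from the spectrogram alone.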
The scope of this article is to illustrate some elements of the procedure to transform one distribution into another. The method used to transform a distribution is borrowed from the phase space formulation of quantum mechanics, even though the subject matter of this article is "signal processing". Noting that a signal can be recovered from a particular distribution under certain conditions, given a certain TFD ρ1(t,f) representing the signal in a joint time–frequency domain, another, different, TFD ρ2(t,f) of the same signal can be obtained by simple smoothing or filtering; some of these relationships are shown below. A full treatment of the question is given in Cohen's book.
General class
If we use the variable ω = 2πf, then, borrowing the notations used in the field of quantum mechanics, we can show that time–frequency representation, such as Wigner distribution function (WDF) and other bilinear time–frequency distributions, can be expressed as
$C(t,\omega )={\dfrac {1}{4\pi ^{2}}}\iiint s^{*}\left(u-{\dfrac {1}{2}}\tau \right)s\left(u+{\dfrac {1}{2}}\tau \right)\phi (\theta ,\tau )e^{-j\theta t-j\tau \omega +j\theta u}\,du\,d\tau \,d\theta ,$
(1)
where $\phi (\theta ,\tau )$ is a two dimensional function called the kernel, which determines the distribution and its properties (for a signal processing terminology and treatment of this question, the reader is referred to the references already cited in the introduction).
The kernel of the Wigner distribution function (WDF) is one. However, no particular significance should be attached to that, since it is possible to write the general form so that the kernel of any distribution is one, in which case the kernel of the Wigner distribution function (WDF) would be something else.
Characteristic function formulation
The characteristic function is the double Fourier transform of the distribution. By inspection of Eq. (1), we can obtain that
$C(t,\omega )={\dfrac {1}{4\pi ^{2}}}\iint M(\theta ,\tau )e^{-j\theta t-j\tau \omega }\,d\theta \,d\tau $
(2)
where
${\begin{alignedat}{2}M(\theta ,\tau )&=\phi (\theta ,\tau )\int s^{*}\left(u-{\dfrac {1}{2}}\tau \right)s\left(u+{\dfrac {1}{2}}\tau \right)e^{j\theta u}\,du\\&=\phi (\theta ,\tau )A(\theta ,\tau )\\\end{alignedat}}$
(3)
and where $A(\theta ,\tau )$ is the symmetrical ambiguity function. The characteristic function may be appropriately called the generalized ambiguity function.
Transformation between distributions
To obtain that relationship suppose that there are two distributions, $C_{1}$ and $C_{2}$, with corresponding kernels, $\phi _{1}$ and $\phi _{2}$. Their characteristic functions are
$M_{1}(\theta ,\tau )=\phi _{1}(\theta ,\tau )\int s^{*}\left(u-{\tfrac {\tau }{2}}\right)s\left(u+{\tfrac {\tau }{2}}\right)e^{j\theta u}\,du$
(4)
$M_{2}(\theta ,\tau )=\phi _{2}(\theta ,\tau )\int s^{*}\left(u-{\tfrac {\tau }{2}}\right)s\left(u+{\tfrac {\tau }{2}}\right)e^{j\theta u}\,du$
(5)
Divide one equation by the other to obtain
$M_{1}(\theta ,\tau )={\dfrac {\phi _{1}(\theta ,\tau )}{\phi _{2}(\theta ,\tau )}}M_{2}(\theta ,\tau )$
(6)
This is an important relationship because it connects the characteristic functions. For the division to be proper, the kernel cannot be zero in a finite region.
To obtain the relationship between the distributions take the double Fourier transform of both sides and use Eq. (2)
$C_{1}(t,\omega )={\dfrac {1}{4\pi ^{2}}}\iint {\dfrac {\phi _{1}(\theta ,\tau )}{\phi _{2}(\theta ,\tau )}}M_{2}(\theta ,\tau )e^{-j\theta t-j\tau \omega }\,d\theta \,d\tau $
(7)
Now express $M_{2}$ in terms of $C_{2}$ to obtain
$C_{1}(t,\omega )={\dfrac {1}{4\pi ^{2}}}\iiiint {\dfrac {\phi _{1}(\theta ,\tau )}{\phi _{2}(\theta ,\tau )}}C_{2}(t',\omega ')e^{j\theta (t'-t)+j\tau (\omega '-\omega )}\,d\theta \,d\tau \,dt'\,d\omega '$
(8)
This relationship can be written as
$C_{1}(t,\omega )=\iint g_{12}(t'-t,\omega '-\omega )C_{2}(t',\omega ')\,dt'\,d\omega '$
(9)
with
$g_{12}(t,\omega )={\dfrac {1}{4\pi ^{2}}}\iint {\dfrac {\phi _{1}(\theta ,\tau )}{\phi _{2}(\theta ,\tau )}}e^{j\theta t+j\tau \omega }\,d\theta \,d\tau $
(10)
Relation of the spectrogram to other bilinear representations
Now we specialize to the case of transforming from an arbitrary representation to the spectrogram. In Eq. (9), $C_{1}$ is taken to be the spectrogram and $C_{2}$ to be arbitrary. In addition, to simplify notation, the substitutions $\phi _{SP}=\phi _{1}$, $\phi =\phi _{2}$, and $g_{SP}=g_{12}$ are made, so that Eq. (9) becomes
$C_{SP}(t,\omega )=\iint g_{SP}\left(t'-t,\omega '-\omega \right)C\left(t',\omega '\right)\,dt'\,d\omega '$
(11)
The kernel for the spectrogram with window, $h(t)$, is $A_{h}(-\theta ,\tau )$ and therefore
${\begin{aligned}g_{SP}(t,\omega )&={\dfrac {1}{4\pi ^{2}}}\iint {\dfrac {A_{h}(-\theta ,\tau )}{\phi (\theta ,\tau )}}e^{j\theta t+j\tau \omega }\,d\theta \,d\tau \\&={\dfrac {1}{4\pi ^{2}}}\iiint {\dfrac {1}{\phi (\theta ,\tau )}}h^{*}(u-{\tfrac {\tau }{2}})h(u+{\tfrac {\tau }{2}})e^{j\theta t+j\tau \omega -j\theta u}\,du\,d\tau \,d\theta \\&={\dfrac {1}{4\pi ^{2}}}\iiint h^{*}(u-{\tfrac {\tau }{2}})h(u+{\tfrac {\tau }{2}}){\dfrac {\phi (\theta ,\tau )}{\phi (\theta ,\tau )\phi (-\theta ,\tau )}}e^{-j\theta t+j\tau \omega +j\theta u}\,du\,d\tau \,d\theta \\\end{aligned}}$
If we only consider kernels for which $\phi (-\theta ,\tau )\phi (\theta ,\tau )=1$ holds then
$g_{SP}(t,\omega )={\dfrac {1}{4\pi ^{2}}}\iiint h^{*}(u-{\tfrac {\tau }{2}})h(u+{\tfrac {\tau }{2}})\phi (\theta ,\tau )e^{-j\theta t+j\tau \omega +j\theta u}\,du\,d\tau \,d\theta =C_{h}(t,-\omega )$
and therefore
$C_{SP}(t,\omega )=\iint C_{s}(t',\omega ')C_{h}(t'-t,\omega '-\omega )\,dt'\,d\omega '$
This was shown by Janssen.[4] When $\phi (-\theta ,\tau )\phi (\theta ,\tau )$ does not equal one, then
$C_{SP}(t,\omega )=\iiiint G(t'',\omega '')C_{s}(t',\omega ')C_{h}(t''+t'-t,-\omega ''+\omega -\omega ')\,dt'\,dt''\,d\omega '\,d\omega ''$
where
$G(t,\omega )={\dfrac {1}{4\pi ^{2}}}\iint {\dfrac {e^{-j\theta t-j\tau \omega }}{\phi (\theta ,\tau )\phi (-\theta ,\tau )}}\,d\theta \,d\tau $
References
1. L. Cohen, "Time–Frequency Analysis," Prentice-Hall, New York, 1995. ISBN 978-0135945322
2. L. Cohen, "Generalized phase-space distribution functions," J. Math. Phys., 7 (1966) pp. 781–786, doi:10.1063/1.1931206
3. L. Cohen, "Quantization Problem and Variational Principle in the Phase Space Formulation of Quantum Mechanics," J. Math. Phys., 7 pp. 1863–1866, 1976.
4. A. J. E. M. Janssen, "On the locus and spread of pseudo-density functions in the time frequency plane," Philips Journal of Research, vol. 37, pp. 79–110, 1982.
5. E. Sejdić, I. Djurović, J. Jiang, “Time-frequency feature representation using energy concentration: An overview of recent advances,” Digital Signal Processing, vol. 19, no. 1, pp. 153-183, January 2009.
6. B. Boashash, “Theory of Quadratic TFDs”, Chapter 3, pp. 59–82, in B. Boashash, editor, Time-Frequency Signal Analysis & Processing: A Comprehensive Reference, Elsevier, Oxford, 2003; ISBN 0-08-044335-4.
| Wikipedia |
List of common coordinate transformations
This is a list of some of the most commonly used coordinate transformations.
2-dimensional
Let $(x,y)$ be the standard Cartesian coordinates, and $(r,\theta )$ the standard polar coordinates.
From polar coordinates
${\begin{aligned}x&=r\cos \theta \\y&=r\sin \theta \\[5pt]{\frac {\partial (x,y)}{\partial (r,\theta )}}&={\begin{bmatrix}\cos \theta &-r\sin \theta \\\sin \theta &{\phantom {-}}r\cos \theta \end{bmatrix}}\\[5pt]{\text{Jacobian}}=\det {\frac {\partial (x,y)}{\partial (r,\theta )}}&=r\end{aligned}}$
From log-polar coordinates
Main article: log-polar coordinates
${\begin{aligned}x&=e^{\rho }\cos \theta ,\\y&=e^{\rho }\sin \theta .\end{aligned}}$
By using complex numbers, identifying $(x,y)$ with $x+iy$, the transformation can be written as
$x+iy=e^{\rho +i\theta }$
That is, it is given by the complex exponential function.
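This identity can be checked with Python's complex exponential:

```python
import cmath, math

def from_log_polar(rho, theta):
    """Log-polar -> Cartesian via x + iy = e^(rho + i*theta)."""
    z = cmath.exp(complex(rho, theta))
    return z.real, z.imag

# rho = log 2, theta = 90 degrees: the point (0, 2).
x, y = from_log_polar(math.log(2.0), math.pi / 2)
print(round(x, 12), round(y, 12))  # 0.0 2.0
```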
From bipolar coordinates
Main article: bipolar coordinates
${\begin{aligned}x&=a{\frac {\sinh \tau }{\cosh \tau -\cos \sigma }}\\y&=a{\frac {\sin \sigma }{\cosh \tau -\cos \sigma }}\end{aligned}}$
From 2-center bipolar coordinates
Main article: two-center bipolar coordinates
${\begin{aligned}x&={\frac {1}{4c}}\left(r_{1}^{2}-r_{2}^{2}\right)\\y&=\pm {\frac {1}{4c}}{\sqrt {16c^{2}r_{1}^{2}-(r_{1}^{2}-r_{2}^{2}+4c^{2})^{2}}}\end{aligned}}$
From Cesàro equation
Main article: Cesàro equation
${\begin{aligned}x&=\int \cos \left[\int \kappa (s)\,ds\right]ds\\y&=\int \sin \left[\int \kappa (s)\,ds\right]ds\end{aligned}}$
From Cartesian coordinates
${\begin{aligned}r&={\sqrt {x^{2}+y^{2}}}\\\theta '&=\arctan \left|{\frac {y}{x}}\right|\end{aligned}}$
Note: solving for $\theta '$ returns an angle in the first quadrant ($0<\theta '<{\frac {\pi }{2}}$). To find $\theta $, one must refer to the original Cartesian coordinates, determine the quadrant in which $\theta $ lies (for example, (3,−3) [Cartesian] lies in QIV), then use the following to solve for $\theta $:
$\theta ={\begin{cases}\theta '&({\text{for }}\theta {\text{ in QI: }}0<\theta <{\frac {\pi }{2}})\\\pi -\theta '&({\text{for }}\theta {\text{ in QII: }}{\frac {\pi }{2}}<\theta <\pi )\\\pi +\theta '&({\text{for }}\theta {\text{ in QIII: }}\pi <\theta <{\frac {3\pi }{2}})\\2\pi -\theta '&({\text{for }}\theta {\text{ in QIV: }}{\frac {3\pi }{2}}<\theta <2\pi )\end{cases}}$
The value of $\theta $ must be found in this manner because $\arctan $ only returns values in $(-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}})$: since $\tan \theta $ is periodic with period $\pi $, the inverse function can only return values within a single period, so its range covers only half of the full circle.
Note that one can also use
${\begin{aligned}r&={\sqrt {x^{2}+y^{2}}}\\\theta &=2\arctan {\frac {y}{x+r}}\end{aligned}}$
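This quadrant bookkeeping is what the two-argument arctangent performs automatically. A Python sketch folds atan2's output from (−π, π] into [0, 2π) and checks the half-angle form against it:

```python
import math

def to_polar(x, y):
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)    # fold into [0, 2*pi)
    return r, theta

# (3, -3) lies in QIV: theta = 2*pi - pi/4 = 7*pi/4.
r, theta = to_polar(3.0, -3.0)
assert abs(r - math.sqrt(18)) < 1e-12
assert abs(theta - 7 * math.pi / 4) < 1e-12

# The half-angle form 2*arctan(y/(x+r)) agrees after the same fold.
half = (2 * math.atan2(-3.0, 3.0 + r)) % (2 * math.pi)
assert abs(half - theta) < 1e-12
```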
From 2-center bipolar coordinates
${\begin{aligned}r&={\sqrt {\frac {r_{1}^{2}+r_{2}^{2}-2c^{2}}{2}}}\\\theta &=\arctan \left[{\sqrt {{\frac {8c^{2}(r_{1}^{2}+r_{2}^{2}-2c^{2})}{r_{1}^{2}-r_{2}^{2}}}-1}}\right]\end{aligned}}$
where 2c is the distance between the poles.
To log-polar coordinates from Cartesian coordinates
${\begin{aligned}\rho &=\log {\sqrt {x^{2}+y^{2}}},\\\theta &=\arctan {\frac {y}{x}}.\end{aligned}}$
Arc length and curvature
In Cartesian coordinates
${\begin{aligned}\kappa &={\frac {x'y''-y'x''}{({x'}^{2}+{y'}^{2})^{\frac {3}{2}}}}\\s&=\int _{a}^{t}{\sqrt {{x'}^{2}+{y'}^{2}}}\,dt\end{aligned}}$
In polar coordinates
${\begin{aligned}\kappa &={\frac {r^{2}+2{r'}^{2}-rr''}{(r^{2}+{r'}^{2})^{\frac {3}{2}}}}\\s&=\int _{a}^{\varphi }{\sqrt {r^{2}+{r'}^{2}}}\,d\varphi \end{aligned}}$
3-dimensional
Let (x, y, z) be the standard Cartesian coordinates, and (ρ, θ, φ) the spherical coordinates, with θ the angle measured away from the +Z axis (see the conventions in spherical coordinates). As φ has a range of 360°, the same considerations as in polar (2 dimensional) coordinates apply whenever an arctangent of it is taken. θ has a range of 180°, running from 0° to 180°, and does not pose any problem when calculated from an arccosine, but beware of an arctangent.
If, in the alternative definition, θ is chosen to run from −90° to +90°, in the opposite direction of the earlier definition, it can be found uniquely from an arcsine, but beware of an arccotangent. In that case, sine and cosine must be exchanged in all the θ-arguments of the formulas below, and the sign of each θ-derivative must be flipped.
Divisions by zero correspond to the special cases in which the direction lies along one of the main axes; in practice these are most easily resolved by inspection.
From spherical coordinates
Main article: spherical coordinates
${\begin{aligned}x&=\rho \,\sin \theta \,\cos \varphi \\y&=\rho \,\sin \theta \,\sin \varphi \\z&=\rho \,\cos \theta \\{\frac {\partial (x,y,z)}{\partial (\rho ,\theta ,\varphi )}}&={\begin{pmatrix}\sin \theta \cos \varphi &\rho \cos \theta \cos \varphi &-\rho \sin \theta \sin \varphi \\\sin \theta \sin \varphi &\rho \cos \theta \sin \varphi &\rho \sin \theta \cos \varphi \\\cos \theta &-\rho \sin \theta &0\end{pmatrix}}\end{aligned}}$
So for the volume element:
$dx\,dy\,dz=\det {\frac {\partial (x,y,z)}{\partial (\rho ,\theta ,\varphi )}}\,d\rho \,d\theta \,d\varphi =\rho ^{2}\sin \theta \,d\rho \,d\theta \,d\varphi $
From cylindrical coordinates
Main article: cylindrical coordinates
${\begin{aligned}x&=r\,\cos \theta \\y&=r\,\sin \theta \\z&=z\,\\{\frac {\partial (x,y,z)}{\partial (r,\theta ,z)}}&={\begin{pmatrix}\cos \theta &-r\sin \theta &0\\\sin \theta &r\cos \theta &0\\0&0&1\end{pmatrix}}\end{aligned}}$
So for the volume element:
$dV=dx\,dy\,dz=\det {\frac {\partial (x,y,z)}{\partial (r,\theta ,z)}}\,dr\,d\theta \,dz=r\,dr\,d\theta \,dz$
To spherical coordinates
Main article: spherical coordinates
From Cartesian coordinates
${\begin{aligned}\rho &={\sqrt {x^{2}+y^{2}+z^{2}}}\\\theta &=\arctan \left({\frac {\sqrt {x^{2}+y^{2}}}{z}}\right)=\arccos \left({\frac {z}{\sqrt {x^{2}+y^{2}+z^{2}}}}\right)\\\varphi &=\arctan \left({\frac {y}{x}}\right)=\arccos \left({\frac {x}{\sqrt {x^{2}+y^{2}}}}\right)=\arcsin \left({\frac {y}{\sqrt {x^{2}+y^{2}}}}\right)\\{\frac {\partial \left(\rho ,\theta ,\varphi \right)}{\partial \left(x,y,z\right)}}&={\begin{pmatrix}{\frac {x}{\rho }}&{\frac {y}{\rho }}&{\frac {z}{\rho }}\\{\frac {xz}{\rho ^{2}{\sqrt {x^{2}+y^{2}}}}}&{\frac {yz}{\rho ^{2}{\sqrt {x^{2}+y^{2}}}}}&-{\frac {\sqrt {x^{2}+y^{2}}}{\rho ^{2}}}\\{\frac {-y}{x^{2}+y^{2}}}&{\frac {x}{x^{2}+y^{2}}}&0\\\end{pmatrix}}\end{aligned}}$
See also the article on atan2 for how to elegantly handle some edge cases.
So for the element:
$d\rho \,d\theta \,d\varphi =\det {\frac {\partial (\rho ,\theta ,\varphi )}{\partial (x,y,z)}}\,dx\,dy\,dz={\frac {1}{{\sqrt {x^{2}+y^{2}}}{\sqrt {x^{2}+y^{2}+z^{2}}}}}\,dx\,dy\,dz$
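The forward and inverse spherical conversions above can be sketched in Python; using atan2 for the azimuth handles the quadrant issues mentioned earlier. Function names are illustrative:

```python
import math

def cartesian_to_spherical(x, y, z):
    """(x, y, z) -> (rho, theta, phi), with theta measured from the +Z axis."""
    rho = math.sqrt(x*x + y*y + z*z)
    theta = math.acos(z / rho)     # polar angle in [0, pi]; assumes rho != 0
    phi = math.atan2(y, x)         # azimuth in (-pi, pi], all quadrants handled
    return rho, theta, phi

def spherical_to_cartesian(rho, theta, phi):
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))

p = (1.0, -2.0, 2.0)               # rho = 3 for this point
q = spherical_to_cartesian(*cartesian_to_spherical(*p))
assert all(abs(a - b) < 1e-12 for a, b in zip(p, q))
```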
From cylindrical coordinates
Main article: cylindrical coordinates
${\begin{aligned}\rho &={\sqrt {r^{2}+h^{2}}}\\\theta &=\arctan {\frac {r}{h}}\\\varphi &=\varphi \\{\frac {\partial (\rho ,\theta ,\varphi )}{\partial (r,h,\varphi )}}&={\begin{pmatrix}{\frac {r}{\sqrt {r^{2}+h^{2}}}}&{\frac {h}{\sqrt {r^{2}+h^{2}}}}&0\\{\frac {h}{r^{2}+h^{2}}}&{\frac {-r}{r^{2}+h^{2}}}&0\\0&0&1\\\end{pmatrix}}\\\det {\frac {\partial (\rho ,\theta ,\varphi )}{\partial (r,h,\varphi )}}&={\frac {1}{\sqrt {r^{2}+h^{2}}}}\end{aligned}}$
From Cartesian coordinates
${\begin{aligned}r&={\sqrt {x^{2}+y^{2}}}\\\theta &=\arctan {\left({\frac {y}{x}}\right)}\\z&=z\quad \end{aligned}}$
${\frac {\partial (r,\theta ,z)}{\partial (x,y,z)}}={\begin{pmatrix}{\frac {x}{\sqrt {x^{2}+y^{2}}}}&{\frac {y}{\sqrt {x^{2}+y^{2}}}}&0\\{\frac {-y}{x^{2}+y^{2}}}&{\frac {x}{x^{2}+y^{2}}}&0\\0&0&1\end{pmatrix}}$
From spherical coordinates
${\begin{aligned}r&=\rho \sin \varphi \\h&=\rho \cos \varphi \\\theta &=\theta \\{\frac {\partial (r,h,\theta )}{\partial (\rho ,\varphi ,\theta )}}&={\begin{pmatrix}\sin \varphi &\rho \cos \varphi &0\\\cos \varphi &-\rho \sin \varphi &0\\0&0&1\\\end{pmatrix}}\\\det {\frac {\partial (r,h,\theta )}{\partial (\rho ,\varphi ,\theta )}}&=-\rho \end{aligned}}$
Arc-length, curvature and torsion from Cartesian coordinates
${\begin{aligned}s&=\int _{0}^{t}{\sqrt {{x'}^{2}+{y'}^{2}+{z'}^{2}}}\,dt\\[3pt]\kappa &={\frac {\sqrt {(z''y'-y''z')^{2}+(x''z'-z''x')^{2}+(y''x'-x''y')^{2}}}{({x'}^{2}+{y'}^{2}+{z'}^{2})^{\frac {3}{2}}}}\\[3pt]\tau &={\frac {x'''(y'z''-y''z')+y'''(x''z'-x'z'')+z'''(x'y''-x''y')}{{(x'y''-x''y')}^{2}+{(x''z'-x'z'')}^{2}+{(y'z''-y''z')}^{2}}}\end{aligned}}$
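These formulas are the component form of κ = |r′ × r″| / |r′|³ and τ = (r‴ · (r′ × r″)) / |r′ × r″|². A short Python check against the circular helix r(t) = (a cos t, a sin t, bt), whose curvature a/(a²+b²) and torsion b/(a²+b²) are classical:

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def curvature_torsion(d1, d2, d3):
    """kappa = |r' x r''| / |r'|^3,  tau = (r''' . (r' x r'')) / |r' x r''|^2."""
    c = cross(d1, d2)
    c2 = sum(ci*ci for ci in c)            # |r' x r''|^2
    speed2 = sum(di*di for di in d1)       # |r'|^2
    kappa = math.sqrt(c2) / speed2**1.5
    tau = sum(d3i*ci for d3i, ci in zip(d3, c)) / c2
    return kappa, tau

# Derivatives of the helix at t = 0.7 (computed by hand):
a, b, t = 2.0, 1.0, 0.7
d1 = (-a*math.sin(t),  a*math.cos(t), b)    # r'(t)
d2 = (-a*math.cos(t), -a*math.sin(t), 0.0)  # r''(t)
d3 = ( a*math.sin(t), -a*math.cos(t), 0.0)  # r'''(t)
kappa, tau = curvature_torsion(d1, d2, d3)
assert abs(kappa - a/(a*a + b*b)) < 1e-12
assert abs(tau   - b/(a*a + b*b)) < 1e-12
```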
See also
• Geographic coordinate conversion
• Transformation matrix
References
• Arfken, George (2013). Mathematical Methods for Physicists. Academic Press. ISBN 978-0123846549.
| Wikipedia |
Transformation geometry
In mathematics, transformation geometry (or transformational geometry) is the name of a mathematical and pedagogic take on the study of geometry by focusing on groups of geometric transformations, and properties that are invariant under them. It is opposed to the classical synthetic geometry approach of Euclidean geometry, that focuses on proving theorems.
For example, within transformation geometry, the properties of an isosceles triangle are deduced from the fact that it is mapped to itself by a reflection about a certain line. This contrasts with the classical proofs by the criteria for congruence of triangles.[1]
The first systematic effort to use transformations as the foundation of geometry was made by Felix Klein in the 19th century, under the name Erlangen programme. For nearly a century this approach remained confined to mathematics research circles. In the 20th century efforts were made to exploit it for mathematical education. Andrei Kolmogorov included this approach (together with set theory) as part of a proposal for geometry teaching reform in Russia.[2] These efforts culminated in the 1960s with the general reform of mathematics teaching known as the New Math movement.
Pedagogy
An exploration of transformation geometry often begins with a study of reflection symmetry as found in daily life. The first real transformation is reflection in a line or reflection against an axis. The composition of two reflections results in a rotation when the lines intersect, or a translation when they are parallel. Thus through transformations students learn about Euclidean plane isometry. For instance, consider reflection in a vertical line and a line inclined at 45° to the horizontal. One can observe that one composition yields a counter-clockwise quarter-turn (90°) while the reverse composition yields a clockwise quarter-turn. Such results show that transformation geometry includes non-commutative processes.
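The quarter-turn compositions just described can be verified directly with 2×2 reflection matrices. A minimal Python sketch (the two reflection lines are the ones named above):

```python
# Composing two plane reflections as 2x2 matrices: reflection in the vertical
# axis and reflection in the 45-degree line y = x. The two orders of
# composition give opposite quarter-turns, showing non-commutativity.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

refl_vertical = [[-1, 0], [0, 1]]   # (x, y) -> (-x, y)
refl_diag     = [[0, 1], [1, 0]]    # (x, y) -> (y, x), reflection in y = x

rot_ccw = matmul(refl_vertical, refl_diag)  # counter-clockwise quarter-turn
rot_cw  = matmul(refl_diag, refl_vertical)  # clockwise quarter-turn
assert rot_ccw == [[0, -1], [1, 0]]
assert rot_cw  == [[0, 1], [-1, 0]]
assert rot_ccw != rot_cw             # composition of reflections is not commutative
```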
An entertaining application of reflection in a line occurs in a proof of the one-seventh area triangle found in any triangle.
Another transformation introduced to young students is the dilation. However, the reflection in a circle transformation seems inappropriate for lower grades. Thus inversive geometry, a larger study than grade school transformation geometry, is usually reserved for college students.
Experiments with concrete symmetry groups make way for abstract group theory. Other concrete activities use computations with complex numbers, hypercomplex numbers, or matrices to express transformation geometry. Such transformation geometry lessons present an alternate view that contrasts with classical synthetic geometry. When students then encounter analytic geometry, the ideas of coordinate rotations and reflections follow easily. All these concepts prepare for linear algebra where the reflection concept is expanded.
Educators have shown some interest and described projects and experiences with transformation geometry for children from kindergarten to high school. In the case of very young children, in order to avoid introducing new terminology and to make links with students' everyday experience with concrete objects, it was sometimes recommended to use words they are familiar with, like "flips" for line reflections, "slides" for translations, and "turns" for rotations, although these terms are not precise mathematical language. In some proposals, students start by performing with concrete objects before they perform the abstract transformations via their definitions as mappings of each point of the figure.[3][4][5][6]
In an attempt to restructure the courses of geometry in Russia, Kolmogorov suggested presenting it under the point of view of transformations, so the geometry courses were structured based on set theory. This led to the appearance of the term "congruent" in schools, for figures that had previously been called "equal": since a figure was seen as a set of points, it could only be equal to itself, and two triangles that could be overlapped by isometries were said to be congruent.[2]
One author expressed the importance of group theory to transformation geometry as follows:
I have gone to some trouble to develop from first principles all the group theory that I need, with the intention that my book can serve as a first introduction to transformation groups, and the notions of abstract group theory if you have never seen these.[7]
See also
• Chirality (mathematics)
• Geometric transformation
• Euler's rotation theorem
• Motion (geometry)
• Transformation matrix
References
1. Georges Glaeser – The crisis of geometry teaching
2. Alexander Karp & Bruce R. Vogeli – Russian Mathematics Education: Programs and Practices, Volume 5, pgs. 100–102
3. R.S. Millman – Kleinian transformation geometry, Amer. Math. Monthly 84 (1977)
4. UNESCO - New trends in mathematics teaching, v.3, 1972 / pg. 8
5. Barbara Zorin – Geometric Transformations in Middle School Mathematics Textbooks
6. UNESCO - Studies in mathematics education. Teaching of geometry
7. Miles Reid & Balázs Szendröi (2005) Geometry and Topology, pg. xvii, Cambridge University Press, ISBN 0-521-61325-6, MR2194744
Further reading
• Heinrich Guggenheimer (1967) Plane Geometry and Its Groups, Holden-Day.
• Roger Evans Howe & William Barker (2007) Continuous Symmetry: From Euclid to Klein, American Mathematical Society, ISBN 978-0-8218-3900-3 .
• Robin Hartshorne (2011) Review of Continuous Symmetry, American Mathematical Monthly 118:565–8.
• Roger Lyndon (1985) Groups and Geometry, #101 London Mathematical Society Lecture Note Series, Cambridge University Press ISBN 0-521-31694-4 .
• P.S. Modenov and A.S. Parkhomenko (1965) Geometric Transformations, translated by Michael B.P. Slater, Academic Press.
• George E. Martin (1982) Transformation Geometry: An Introduction to Symmetry, Springer Verlag.
• Isaak Yaglom (1962) Geometric Transformations, Random House (translated from the Russian).
• Max Jeger (1966) Transformation Geometry (translated from the German).
• Transformations teaching notes from Gatsby Charitable Foundation
• Kristin A. Camenga (NCTM's 2011 Annual Meeting & Exposition) - Transforming Geometric Proof with Reflections, Rotations and Translations.
• Nathalie Sinclair (2008) The History of the Geometry Curriculum in the United States, pps. 63–66.
• Zalman P. Usiskin and Arthur F. Coxford. A Transformation Approach to Tenth Grade Geometry, The Mathematics Teacher, Vol. 65, No. 1 (January 1972), pp. 21-30.
• Zalman P. Usiskin. The Effects of Teaching Euclidean Geometry via Transformations on Student Achievement and Attitudes in Tenth-Grade Geometry, Journal for Research in Mathematics Education, Vol. 3, No. 4 (Nov., 1972), pp. 249-259.
• A. N. Kolmogorov. Геометрические преобразования в школьном курсе геометрии, Математика в школе, 1965, Nº 2, pp. 24–29. (Geometric transformations in a school geometry course) (in Russian)
• Alton Thorpe Olson (1970). High School Plane Geometry Through Transformations: An Exploratory Study, Vol. I. University of Wisconsin--Madison.
• Alton Thorpe Olson (1970). High School Plane Geometry Through Transformations: An Exploratory Study, Vol II. University of Wisconsin--Madison.
| Wikipedia |
Automorphism group
In mathematics, the automorphism group of an object X is the group consisting of automorphisms of X under composition of morphisms. For example, if X is a finite-dimensional vector space, then the automorphism group of X is the group of invertible linear transformations from X to itself (the general linear group of X). If instead X is a group, then its automorphism group $\operatorname {Aut} (X)$ is the group consisting of all group automorphisms of X.
Especially in geometric contexts, an automorphism group is also called a symmetry group. A subgroup of an automorphism group is sometimes called a transformation group.
Automorphism groups are studied in a general way in the field of category theory.
Examples
If X is a set with no additional structure, then any bijection from X to itself is an automorphism, and hence the automorphism group of X in this case is precisely the symmetric group of X. If the set X has additional structure, then it may be the case that not all bijections on the set preserve this structure, in which case the automorphism group will be a subgroup of the symmetric group on X. Some examples of this include the following:
• The automorphism group of a field extension $L/K$ is the group consisting of field automorphisms of L that fix K. If the field extension is Galois, the automorphism group is called the Galois group of the field extension.
• The automorphism group of the projective n-space over a field k is the projective linear group $\operatorname {PGL} _{n}(k).$[1]
• The automorphism group $G$ of a finite cyclic group of order n is isomorphic to $(\mathbb {Z} /n\mathbb {Z} )^{\times }$, the multiplicative group of integers modulo n, with the isomorphism given by ${\overline {a}}\mapsto \sigma _{a}\in G,\,\sigma _{a}(x)=x^{a}$.[2] In particular, $G$ is an abelian group.
• The automorphism group of a finite-dimensional real Lie algebra ${\mathfrak {g}}$ has the structure of a (real) Lie group (in fact, it is even a linear algebraic group: see below). If G is a Lie group with Lie algebra ${\mathfrak {g}}$, then the automorphism group of G has a structure of a Lie group induced from that on the automorphism group of ${\mathfrak {g}}$.[3][4][lower-alpha 1]
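The description of the automorphism group of a finite cyclic group can be made concrete in a few lines of Python: each unit a modulo n yields the automorphism x ↦ ax, and there are φ(n) of them. A small sketch (the function name is illustrative):

```python
from math import gcd

def automorphisms_of_cyclic(n):
    """Each automorphism of Z/nZ is x -> a*x mod n for a unit a modulo n."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    return {a: [(a * x) % n for x in range(n)] for a in units}

auts = automorphisms_of_cyclic(8)
assert sorted(auts) == [1, 3, 5, 7]            # |(Z/8Z)^x| = phi(8) = 4
# The group is abelian; for n = 8 every nontrivial element even has order 2:
assert all((a * a) % 8 == 1 for a in auts)
```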
If G is a group acting on a set X, the action amounts to a group homomorphism from G to the automorphism group of X and conversely. Indeed, each left G-action on a set X determines $G\to \operatorname {Aut} (X),\,g\mapsto \sigma _{g},\,\sigma _{g}(x)=g\cdot x$, and, conversely, each homomorphism $\varphi :G\to \operatorname {Aut} (X)$ defines an action by $g\cdot x=\varphi (g)x$. This extends to the case when the set X has more structure than just a set. For example, if X is a vector space, then a group action of G on X is a group representation of the group G, representing G as a group of linear transformations (automorphisms) of X; these representations are the main object of study in the field of representation theory.
Here are some other facts about automorphism groups:
• Let $A,B$ be two finite sets of the same cardinality and $\operatorname {Iso} (A,B)$ the set of all bijections $A\mathrel {\overset {\sim }{\to }} B$. Then $\operatorname {Aut} (B)$, which is a symmetric group (see above), acts on $\operatorname {Iso} (A,B)$ from the left freely and transitively; that is to say, $\operatorname {Iso} (A,B)$ is a torsor for $\operatorname {Aut} (B)$ (cf. #In category theory).
• Let P be a finitely generated projective module over a ring R. Then there is an embedding $\operatorname {Aut} (P)\hookrightarrow \operatorname {GL} _{n}(R)$, unique up to inner automorphisms.[5]
In category theory
Automorphism groups appear very naturally in category theory.
If X is an object in a category, then the automorphism group of X is the group consisting of all the invertible morphisms from X to itself. It is the unit group of the endomorphism monoid of X. (For some examples, see PROP.)
If $A,B$ are objects in some category, then the set $\operatorname {Iso} (A,B)$ of all $A\mathrel {\overset {\sim }{\to }} B$ is a left $\operatorname {Aut} (B)$-torsor. In practical terms, this says that a different choice of a base point of $\operatorname {Iso} (A,B)$ differs unambiguously by an element of $\operatorname {Aut} (B)$, or that each choice of a base point is precisely a choice of a trivialization of the torsor.
If $X_{1}$ and $X_{2}$ are objects in categories $C_{1}$ and $C_{2}$, and if $F:C_{1}\to C_{2}$ is a functor mapping $X_{1}$ to $X_{2}$, then $F$ induces a group homomorphism $\operatorname {Aut} (X_{1})\to \operatorname {Aut} (X_{2})$, as it maps invertible morphisms to invertible morphisms.
In particular, if G is a group viewed as a category with a single object * or, more generally, if G is a groupoid, then each functor $F:G\to C$, C a category, is called an action or a representation of G on the object $F(*)$, or the objects $F(\operatorname {Obj} (G))$. Those objects are then said to be $G$-objects (as they are acted by $G$); cf. $\mathbb {S} $-object. If $C$ is a module category like the category of finite-dimensional vector spaces, then $G$-objects are also called $G$-modules.
Automorphism group functor
Let $M$ be a finite-dimensional vector space over a field k that is equipped with some algebraic structure (that is, M is a finite-dimensional algebra over k). It can be, for example, an associative algebra or a Lie algebra.
Now, consider k-linear maps $M\to M$ that preserve the algebraic structure: they form a vector subspace $\operatorname {End} _{\text{alg}}(M)$ of $\operatorname {End} (M)$. The unit group of $\operatorname {End} _{\text{alg}}(M)$ is the automorphism group $\operatorname {Aut} (M)$. When a basis on M is chosen, $\operatorname {End} (M)$ is the space of square matrices and $\operatorname {End} _{\text{alg}}(M)$ is the zero set of some polynomial equations, and the invertibility is again described by polynomials. Hence, $\operatorname {Aut} (M)$ is a linear algebraic group over k.
Now base extension applied to the above discussion determines a functor:[6] namely, for each commutative ring R over k, consider the R-linear maps $M\otimes R\to M\otimes R$ preserving the algebraic structure: denote it by $\operatorname {End} _{\text{alg}}(M\otimes R)$. Then the unit group of the matrix ring $\operatorname {End} _{\text{alg}}(M\otimes R)$ over R is the automorphism group $\operatorname {Aut} (M\otimes R)$ and $R\mapsto \operatorname {Aut} (M\otimes R)$ is a group functor: a functor from the category of commutative rings over k to the category of groups. Even better, it is represented by a scheme (since the automorphism groups are defined by polynomials): this scheme is called the automorphism group scheme and is denoted by $\operatorname {Aut} (M)$.
In general, however, an automorphism group functor may not be represented by a scheme.
See also
• Outer automorphism group
• Level structure, a technique to remove an automorphism group
• Holonomy group
Notes
1. First, if G is simply connected, the automorphism group of G is that of ${\mathfrak {g}}$. Second, every connected Lie group is of the form ${\widetilde {G}}/C$ where ${\widetilde {G}}$ is a simply connected Lie group and C is a central subgroup, and the automorphism group of G is the automorphism group of ${\widetilde {G}}$ that preserves C. Third, by convention, a Lie group is second countable and has at most countably many connected components; thus, the general case reduces to the connected case.
Citations
1. Hartshorne 1977, Ch. II, Example 7.1.1.
2. Dummit & Foote 2004, § 2.3. Exercise 26.
3. Hochschild, G. (1952). "The Automorphism Group of a Lie Group". Transactions of the American Mathematical Society. 72 (2): 209–216. JSTOR 1990752.
4. Fulton & Harris 1991, Exercise 8.28.
5. Milnor 1971, Lemma 3.2.
6. Waterhouse 2012, § 7.6.
References
• Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). Wiley. ISBN 978-0-471-43334-7.
• Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
• Milnor, John Willard (1971). Introduction to algebraic K-theory. Annals of Mathematics Studies. Vol. 72. Princeton, NJ: Princeton University Press. ISBN 9780691081014. MR 0349811. Zbl 0237.18005.
• Waterhouse, William C. (2012) [1979]. Introduction to Affine Group Schemes. Graduate Texts in Mathematics. Vol. 66. Springer Verlag. ISBN 9781461262176.
External links
• https://mathoverflow.net/questions/55042/automorphism-group-of-a-scheme
| Wikipedia |
Transgression map
In algebraic topology, a transgression map is a way to transfer cohomology classes. It occurs, for example in the inflation-restriction exact sequence in group cohomology, and in integration in fibers. It also naturally arises in many spectral sequences; see spectral sequence#Edge maps and transgressions.
Inflation-restriction exact sequence
Main article: Inflation-restriction exact sequence
The transgression map appears in the inflation-restriction exact sequence, an exact sequence occurring in group cohomology. Let G be a group, N a normal subgroup, and A an abelian group which is equipped with an action of G, i.e., a homomorphism from G to the automorphism group of A. The quotient group $G/N$ acts on
$A^{N}=\{a\in A:na=a{\text{ for all }}n\in N\}.$
Then the inflation-restriction exact sequence is:
$0\to H^{1}(G/N,A^{N})\to H^{1}(G,A)\to H^{1}(N,A)^{G/N}\to H^{2}(G/N,A^{N})\to H^{2}(G,A).$
The transgression map is the map $H^{1}(N,A)^{G/N}\to H^{2}(G/N,A^{N})$.
Transgression is defined for general $n\in \mathbb {N} $,
$H^{n}(N,A)^{G/N}\to H^{n+1}(G/N,A^{N})$,
only if $H^{i}(N,A)^{G/N}=0$ for $i\leq n-1$.[1]
References
1. Gille & Szamuely (2006) p.67
• Gille, Philippe; Szamuely, Tamás (2006). Central simple algebras and Galois cohomology. Cambridge Studies in Advanced Mathematics. Vol. 101. Cambridge: Cambridge University Press. ISBN 0-521-86103-9. Zbl 1137.12001.
• Hazewinkel, Michiel (1995). Handbook of Algebra, Volume 1. Elsevier. p. 282. ISBN 0444822127.
• Koch, Helmut (1997). Algebraic Number Theory. Encycl. Math. Sci. Vol. 62 (2nd printing of 1st ed.). Springer-Verlag. ISBN 3-540-63003-1. Zbl 0819.11044.
• Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2008). Cohomology of Number Fields. Grundlehren der Mathematischen Wissenschaften. Vol. 323 (2nd ed.). Springer-Verlag. pp. 112–113. ISBN 3-540-37888-X. Zbl 1136.11001.
• Schmid, Peter (2007). The Solution of The K(GV) Problem. Advanced Texts in Mathematics. Vol. 4. Imperial College Press. p. 214. ISBN 1860949703.
• Serre, Jean-Pierre (1979). Local Fields. Graduate Texts in Mathematics. Vol. 67. Translated by Greenberg, Marvin Jay. Springer-Verlag. pp. 117–118. ISBN 0-387-90424-7. Zbl 0423.12016.
External links
• transgression at the nLab
| Wikipedia |
Numerical solution of the convection–diffusion equation
The convection–diffusion equation describes the flow of heat, particles, or other physical quantities in situations where there is both diffusion and convection or advection. For information about the equation, its derivation, and its conceptual importance and consequences, see the main article convection–diffusion equation. This article describes how to use a computer to calculate an approximate numerical solution of the discretized equation, in a time-dependent situation.
In order to be concrete, this article focuses on heat flow, an important example where the convection–diffusion equation applies. However, the same mathematical analysis applies equally well to other situations, such as particle flow.
A general discontinuous finite element formulation is needed.[1] The unsteady convection–diffusion problem is considered: first, the known temperature T is expanded into a Taylor series with respect to time, taking into account its three components. Next, using the convection–diffusion equation, an equation is obtained by differentiating this expansion.
Equation
General
The following convection diffusion equation is considered here[2]
$c\rho \left[{\frac {\partial T(x,t)}{\partial t}}+\epsilon u{\frac {\partial T(x,t)}{\partial x}}\right]=\lambda {\frac {\partial ^{2}T(x,t)}{\partial x^{2}}}+Q(x,t)$
In the above equation, the four terms represent transience, convection, diffusion and a source term, respectively, where
• T is the temperature in particular case of heat transfer otherwise it is the variable of interest
• t is time
• c is the specific heat
• u is velocity
• ε is porosity that is the ratio of liquid volume to the total volume
• ρ is mass density
• λ is thermal conductivity
• Q(x,t) is source term representing the capacity of internal sources
The equation above can be written in the form
${\frac {\partial T}{\partial t}}=a{\frac {\partial ^{2}T}{\partial x^{2}}}-\epsilon u{\frac {\partial T}{\partial x}}+{\frac {Q}{c\rho }}$
where a = λ/cρ is the diffusion coefficient.
Solving the convection–diffusion equation using the finite difference method
A solution of the transient convection–diffusion equation can be approximated through a finite difference approach, known as the finite difference method (FDM).
Explicit scheme
An explicit scheme of FDM has been considered and stability criteria are formulated. In this scheme, the new temperature depends entirely on the old temperature (the values at the previous time level) and on θ, a weighting parameter between 0 and 1. Substituting θ = 0 gives the explicit discretization of the unsteady conductive heat transfer equation.
${\frac {T_{i}^{f}-T_{i}^{f-1}}{\Delta t}}=a{\frac {T_{i-1}^{f-1}-2T_{i}^{f-1}+T_{i+1}^{f-1}}{h^{2}}}-\epsilon u{\frac {T_{i+1}^{f-1}-T_{i-1}^{f-1}}{2h}}+{\frac {Q_{i}^{f-1}}{c\rho }}$
where
• Δt = tf − tf − 1
• h is the uniform grid spacing (mesh step)
$T_{i}^{f}=\left(1-{\frac {2a\Delta t}{h^{2}}}\right)T_{i}^{f-1}+\left({\frac {a\Delta t}{h^{2}}}+{\frac {\epsilon u\Delta t}{2h}}\right)T_{i-1}^{f-1}+\left({\frac {a\Delta t}{h^{2}}}-{\frac {\epsilon u\Delta t}{2h}}\right)T_{i+1}^{f-1}+{\frac {Q_{i}^{f-1}}{c\rho }}\Delta t$
Stability criteria
${\begin{aligned}h&<{\frac {2a}{\epsilon u}},&\Delta t&<{\frac {h^{2}}{2a}}\end{aligned}}$
These inequalities set a stringent maximum limit on the time step size and represent a serious limitation of the explicit scheme. This method is not recommended for general transient problems because the maximum possible time step has to be reduced as the square of h.
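The explicit update and its two stability criteria can be sketched as follows; all parameter values here are illustrative, and the source term Q is taken to be zero:

```python
# Explicit FDM update for dT/dt = a*T_xx - (eps*u)*T_x on a 1-D rod with
# fixed-temperature ends; parameter values are illustrative and Q = 0.
a_diff = 1.0e-4          # diffusion coefficient a = lambda/(c*rho)
eps_u  = 0.005           # epsilon * u, the effective convective velocity
n, L   = 51, 1.0
h      = L / (n - 1)                      # h = 0.02
assert h < 2 * a_diff / eps_u             # spatial criterion: h < 2a/(eps*u)
dt = 0.4 * h * h / (2 * a_diff)           # safely below the h^2/(2a) limit
T = [0.0] * n
T[0] = 1.0                                # hot left end, cold right end
for _ in range(200):
    Told = T[:]
    for i in range(1, n - 1):
        diff = a_diff * (Told[i-1] - 2*Told[i] + Told[i+1]) / (h*h)
        conv = eps_u * (Told[i+1] - Told[i-1]) / (2*h)
        T[i] = Told[i] + dt * (diff - conv)
# With both criteria satisfied every update coefficient is positive, so the
# profile stays bounded between the boundary values:
assert all(0.0 <= Ti <= 1.0 for Ti in T)
```

Violating either criterion makes some update coefficient negative, which is exactly the mechanism behind the unbounded oscillations of an unstable explicit run.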
Implicit scheme
In the implicit scheme, the temperature depends on values at the new time level t + Δt. With the implicit scheme, all coefficients turn out to be positive, which makes the scheme unconditionally stable for any size of time step. This scheme is preferred for general purpose transient calculations because of its robustness and unconditional stability.[3] The disadvantage of this method is that more computation is involved per step and, for larger Δt, the truncation error is also larger.
Crank–Nicolson scheme
In the Crank–Nicolson method, the temperature is equally dependent on t and t + Δt. It is a second-order method in time and this method is generally used in diffusion problems.
Stability criteria
$\Delta t<{\frac {h^{2}}{a}}$
This time step limitation is less restricted than the explicit method. The Crank–Nicolson method is based on the central differencing and hence it is second-order accurate in time.[4]
Finite element solution to convection–diffusion problem
Unlike the conduction equation, for which a finite element solution suffices, a numerical solution for the convection–diffusion equation has to deal with the convection part of the governing equation in addition to diffusion. When the Péclet number (Pe) exceeds a critical value, spurious oscillations result in space; this problem is not unique to finite elements, as all other discretization techniques share the same difficulty. In a finite difference formulation, the spatial oscillations are reduced by a family of discretization schemes such as the upwind scheme.[5] In this method, the basic shape function is modified to obtain the upwinding effect. This method is an extension of the Runge–Kutta discontinuous Galerkin approach to a convection–diffusion equation. For time-dependent equations, a different kind of approach is followed. The finite difference scheme has an equivalent in the finite element method (Galerkin method). Another similar method is the characteristic Galerkin method (which uses an implicit algorithm). For scalar variables, the above two methods are identical.
See also
• Advanced Simulation Library
• Convection–diffusion equation
• Double diffusive convection
• An Album of Fluid Motion
• Lagrangian and Eulerian specification of the flow field
• Fluid simulation
• Finite volume method for unsteady flow
References
1. “Discontinuous Finite in Fluid Dynamics and Heat transfer” by Ben Q. Li, 2006.
2. "The Finite Difference Method For Transient Convection Diffusion", Ewa Majchrzak & Łukasz Turchan, 2012.
3. H.Versteeg & W. Malalasekra, "an Introduction to Computational Fluid Dynamics" 2009, pages 262–263.
4. H.Versteeg & W. Malalasekra, "an Introduction to Computational Fluid Dynamics" 2009, page no. 262.
5. Ronald W. Lewis, Perumal Nithiarasu & Kankanhally N. Seetharamu, "Fundamentals for the finite element method for heat and fluid flow".
| Wikipedia |
Semiautomaton
In mathematics and theoretical computer science, a semiautomaton is a deterministic finite automaton having inputs but no output. It consists of a set Q of states, a set Σ called the input alphabet, and a function T: Q × Σ → Q called the transition function.
Associated with any semiautomaton is a monoid called the characteristic monoid, input monoid, transition monoid or transition system of the semiautomaton, which acts on the set of states Q. This may be viewed either as an action of the free monoid of strings in the input alphabet Σ, or as the induced transformation semigroup of Q.
In older books like Clifford and Preston (1967) semigroup actions are called "operands".
In category theory, semiautomata essentially are functors.
Transformation semigroups and monoid acts
Main article: semigroup action
A transformation semigroup or transformation monoid is a pair $(M,Q)$ consisting of a set Q (often called the "set of states") and a semigroup or monoid M of functions, or "transformations", mapping Q to itself. They are functions in the sense that every element m of M is a map $m\colon Q\to Q$. If s and t are two functions of the transformation semigroup, their semigroup product is defined as their function composition $(st)(q)=(s\circ t)(q)=s(t(q))$.
Some authors regard "semigroup" and "monoid" as synonyms. Here a semigroup need not have an identity element; a monoid is a semigroup with an identity element (also called "unit"). Since the notion of functions acting on a set always includes the notion of an identity function, which when applied to the set does nothing, a transformation semigroup can be made into a monoid by adding the identity function.
M-acts
Let M be a monoid and Q be a non-empty set. If there exists a multiplicative operation
$\mu \colon Q\times M\to Q$
$(q,m)\mapsto qm=\mu (q,m)$
which satisfies the properties
$q1=q$
for 1 the unit of the monoid, and
$q(st)=(qs)t$
for all $q\in Q$ and $s,t\in M$, then the triple $(Q,M,\mu )$ is called a right M-act or simply a right act. In long-hand, $\mu $ is the right multiplication of elements of Q by elements of M. The right act is often written as $Q_{M}$.
A left act is defined similarly, with
$\mu \colon M\times Q\to Q$
$(m,q)\mapsto mq=\mu (m,q)$
and is often denoted as $\,_{M}Q$.
An M-act is closely related to a transformation monoid. However the elements of M need not be functions per se, they are just elements of some monoid. Therefore, one must demand that the action of $\mu $ be consistent with multiplication in the monoid (i.e. $\mu (q,st)=\mu (\mu (q,s),t)$), as, in general, this might not hold for some arbitrary $\mu $, in the way that it does for function composition.
Once one makes this demand, it is completely safe to drop all parenthesis, as the monoid product and the action of the monoid on the set are completely associative. In particular, this allows elements of the monoid to be represented as strings of letters, in the computer-science sense of the word "string". This abstraction then allows one to talk about string operations in general, and eventually leads to the concept of formal languages as being composed of strings of letters.
Another difference between an M-act and a transformation monoid is that for an M-act Q, two distinct elements of the monoid may determine the same transformation of Q. If we demand that this does not happen, then an M-act is essentially the same as a transformation monoid.
M-homomorphism
For two M-acts $Q_{M}$ and $B_{M}$ sharing the same monoid $M$, an M-homomorphism $f\colon Q_{M}\to B_{M}$ is a map $f\colon Q\to B$ such that
$f(qm)=f(q)m$
for all $q\in Q_{M}$ and $m\in M$. The set of all M-homomorphisms is commonly written as $\mathrm {Hom} (Q_{M},B_{M})$ or $\mathrm {Hom} _{M}(Q,B)$.
The M-acts and M-homomorphisms together form a category called M-Act.
Semiautomata
A semiautomaton is a triple $(Q,\Sigma ,T)$ where $\Sigma $ is a non-empty set, called the input alphabet, Q is a non-empty set, called the set of states, and T is the transition function
$T\colon Q\times \Sigma \to Q.$
When the set of states Q is finite (though it need not be), a semiautomaton may be thought of as a deterministic finite automaton $(Q,\Sigma ,T,q_{0},A)$, but without the initial state $q_{0}$ or the set A of accept states. Alternately, it is a finite-state machine that has no output, only an input.
Any semiautomaton induces an act of a monoid in the following way.
Let $\Sigma ^{*}$ be the free monoid generated by the alphabet $\Sigma $ (so that the superscript * is understood to be the Kleene star); it is the set of all finite-length strings composed of the letters in $\Sigma $.
For every word w in $\Sigma ^{*}$, let $T_{w}\colon Q\to Q$ be the function, defined recursively, as follows, for all q in Q:
• If $w=\varepsilon $, then $T_{\varepsilon }(q)=q$, so that the empty word $\varepsilon $ does not change the state.
• If $w=\sigma $ is a letter in $\Sigma $, then $T_{\sigma }(q)=T(q,\sigma )$.
• If $w=\sigma v$ for $\sigma \in \Sigma $ and $v\in \Sigma ^{*}$, then $T_{w}(q)=T_{v}(T_{\sigma }(q))$.
Let $M(Q,\Sigma ,T)$ be the set
$M(Q,\Sigma ,T)=\{T_{w}\vert w\in \Sigma ^{*}\}.$
The set $M(Q,\Sigma ,T)$ is closed under function composition; that is, for all $v,w\in \Sigma ^{*}$, one has $T_{w}\circ T_{v}=T_{vw}$. It also contains $T_{\varepsilon }$, which is the identity function on Q. Since function composition is associative, the set $M(Q,\Sigma ,T)$ is a monoid: it is called the input monoid, characteristic monoid, characteristic semigroup or transition monoid of the semiautomaton $(Q,\Sigma ,T)$.
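For a concrete finite case, the transition monoid can be enumerated by closing the letter maps under composition. The two-state semiautomaton below, with a "swap" letter and a "reset" letter, is a hypothetical example chosen for illustration.

```python
# Sketch: the transition monoid of a small semiautomaton (Q, Σ, T).
Q = [0, 1]
T = {('a', 0): 1, ('a', 1): 0,   # 'a' swaps the two states
     ('b', 0): 0, ('b', 1): 0}   # 'b' resets everything to state 0

def T_w(word, q):
    # T_ε(q) = q; T_{σv}(q) = T_v(T_σ(q)) — apply the letters left to right.
    for sigma in word:
        q = T[(sigma, q)]
    return q

def transition_monoid():
    # Represent T_w as the tuple (T_w(0), T_w(1)); extend words only when
    # they yield a new map. Since Q is finite, this terminates.
    seen = {tuple(Q)}            # T_ε is the identity map
    words = ['']
    for w in words:
        for sigma in 'ab':
            m = tuple(T_w(w + sigma, q) for q in Q)
            if m not in seen:
                seen.add(m)
                words.append(w + sigma)
    return seen

M = transition_monoid()
print(sorted(M))   # the four maps (0,0), (0,1), (1,0), (1,1)
```

Here the transition monoid has four elements: the identity, the swap, and the two constant maps.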
Properties
If the set of states Q is finite, then the transition functions are commonly represented as state transition tables. The structure of all possible transitions driven by strings in the free monoid has a graphical depiction as a de Bruijn graph.
The set of states Q need not be finite, or even countable. As an example, semiautomata underpin the concept of quantum finite automata. There, the set of states Q are given by the complex projective space $\mathbb {C} P^{n}$, and individual states are referred to as n-state qubits. State transitions are given by unitary n×n matrices. The input alphabet $\Sigma $ remains finite, and other typical concerns of automata theory remain in play. Thus, the quantum semiautomaton may be simply defined as the triple $(\mathbb {C} P^{n},\Sigma ,\{U_{\sigma _{1}},U_{\sigma _{2}},\dotsc ,U_{\sigma _{p}}\})$ when the alphabet $\Sigma $ has p letters, so that there is one unitary matrix $U_{\sigma }$ for each letter $\sigma \in \Sigma $. Stated in this way, the quantum semiautomaton has many geometrical generalizations. Thus, for example, one may take a Riemannian symmetric space in place of $\mathbb {C} P^{n}$, and selections from its group of isometries as transition functions.
The syntactic monoid of a regular language is isomorphic to the transition monoid of the minimal automaton accepting the language.
References
• A. H. Clifford and G. B. Preston, The Algebraic Theory of Semigroups. American Mathematical Society, volume 2 (1967), ISBN 978-0-8218-0272-4.
• F. Gecseg and I. Peak, Algebraic Theory of Automata (1972), Akademiai Kiado, Budapest.
• W. M. L. Holcombe, Algebraic Automata Theory (1982), Cambridge University Press
• J. M. Howie, Automata and Languages, (1991), Clarendon Press, ISBN 0-19-853442-6.
• Mati Kilp, Ulrich Knauer, Alexander V. Mikhalov, Monoids, Acts and Categories (2000), Walter de Gruyter, Berlin, ISBN 3-11-015248-7.
• Rudolf Lidl and Günter Pilz, Applied Abstract Algebra (1998), Springer, ISBN 978-0-387-98290-8
Mostowski collapse lemma
In mathematical logic, the Mostowski collapse lemma, also known as the Shepherdson–Mostowski collapse, is a theorem of set theory introduced by Andrzej Mostowski (1949, theorem 3) and John Shepherdson (1953).
Statement
Suppose that R is a binary relation on a class X such that
• R is set-like: R−1[x] = {y : y R x} is a set for every x,
• R is well-founded: every nonempty subset S of X contains an R-minimal element (i.e. an element x ∈ S such that R−1[x] ∩ S is empty),
• R is extensional: R−1[x] ≠ R−1[y] for all distinct elements x and y of X.
The Mostowski collapse lemma states that for every such R there exists a unique transitive class (possibly proper) whose structure under the membership relation is isomorphic to (X, R), and the isomorphism is unique. The isomorphism maps each element x of X to the set of images of elements y of X such that y R x (Jech 2003:69).
Generalizations
Every well-founded set-like relation can be embedded into a well-founded set-like extensional relation. This implies the following variant of the Mostowski collapse lemma: every well-founded set-like relation is isomorphic to set-membership on a (non-unique, and not necessarily transitive) class.
A mapping F such that F(x) = {F(y) : y R x} for all x in X can be defined for any well-founded set-like relation R on X by well-founded recursion. It provides a homomorphism of R onto a (non-unique, in general) transitive class. The homomorphism F is an isomorphism if and only if R is extensional.
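On a finite relation the recursion F(x) = {F(y) : y R x} can be carried out directly. The three-element relation below is a hand-chosen illustrative example, with hereditarily finite sets modelled as Python frozensets.

```python
# Sketch: the collapsing map F(x) = {F(y) : y R x}, computed by
# well-founded recursion. R is given by its predecessor sets
# R_inv[x] = {y : y R x}.
R_inv = {'a': set(), 'b': {'a'}, 'c': {'a', 'b'}}

def F(x):
    # frozenset stands in for a hereditarily finite set
    return frozenset(F(y) for y in R_inv[x])

empty = frozenset()
print(F('a') == empty)                # F(a) = ∅
print(F('b') == frozenset({empty}))   # F(b) = {∅}
print(F('c'))                         # F(c) = {∅, {∅}}
```

Since this R is extensional and well-founded, F is injective and its image {∅, {∅}, {∅, {∅}}} is a transitive set, as the lemma asserts.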
The well-foundedness assumption of the Mostowski lemma can be alleviated or dropped in non-well-founded set theories. In Boffa's set theory, every set-like extensional relation is isomorphic to set-membership on a (non-unique) transitive class. In set theory with Aczel's anti-foundation axiom, every set-like relation is bisimilar to set-membership on a unique transitive class, hence every bisimulation-minimal set-like relation is isomorphic to a unique transitive class.
Application
Every set model of ZF is set-like and extensional. If the model is well-founded, then by the Mostowski collapse lemma it is isomorphic to a transitive model of ZF and such a transitive model is unique.
Saying that the membership relation of some model of ZF is well-founded is stronger than saying that the axiom of regularity is true in the model. There exists a model M (assuming the consistency of ZF) whose domain has a subset A with no R-minimal element, where R is the membership relation of the model, but this set A is not a "set in the model" (A is not in the domain of the model, even though all of its members are). More precisely, there is no x in M such that A = R−1[x]. So M satisfies the axiom of regularity (it is "internally" well-founded), but it is not well-founded, and the collapse lemma does not apply to it.
References
• Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (third millennium ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-44085-7
• Mostowski, Andrzej (1949), "An undecidable arithmetical statement" (PDF), Fundamenta Mathematicae, Institute of Mathematics Polish Academy of Sciences, 36 (1): 143–164, doi:10.4064/fm-36-1-143-164
• Shepherdson, John (1953), "Inner models for set theory, Part III", Journal of Symbolic Logic, Association for Symbolic Logic, 18: 145–167, doi:10.2307/2268947
Transitive model
In mathematical set theory, a transitive model is a model of set theory that is standard and transitive. Standard means that the membership relation is the usual one, and transitive means that the model is a transitive set or class.
Examples
• An inner model is a transitive model containing all ordinals.
• A countable transitive model (CTM) is, as the name suggests, a transitive model with a countable number of elements.
Properties
If M is a transitive model, then ω^M is the standard ω. This implies that the natural numbers, integers, and rational numbers of the model are also the same as their standard counterparts. Each real number in a transitive model is a standard real number, although not all standard reals need be included in a particular transitive model.
References
• Jech, Thomas (2003). Set Theory. Springer Monographs in Mathematics (Third Millennium ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-540-44085-7. Zbl 1007.03002.
Transitive reduction
In the mathematical field of graph theory, a transitive reduction of a directed graph D is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v, w a (directed) path from v to w in D exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, Garey & Ullman (1972), who provided tight bounds on the computational complexity of constructing them.
More technically, the reduction is a directed graph that has the same reachability relation as D. Equivalently, D and its transitive reduction should have the same transitive closure as each other, and the transitive reduction of D should have as few edges as possible among all graphs with that property.
The transitive reduction of a finite directed acyclic graph (a directed graph without directed cycles) is unique and is a subgraph of the given graph. However, uniqueness fails for graphs with (directed) cycles, and for infinite graphs not even existence is guaranteed.
The closely related concept of a minimum equivalent graph is a subgraph of D that has the same reachability relation and as few edges as possible.[1] The difference is that a transitive reduction does not have to be a subgraph of D. For finite directed acyclic graphs, the minimum equivalent graph is the same as the transitive reduction. However, for graphs that may contain cycles, minimum equivalent graphs are NP-hard to construct, while transitive reductions can be constructed in polynomial time.
Transitive reduction can be defined for an abstract binary relation on a set, by interpreting the pairs of the relation as arcs in a directed graph.
In directed acyclic graphs
The transitive reduction of a finite directed graph G is a graph with the fewest possible edges that has the same reachability relation as the original graph. That is, if there is a path from a vertex x to a vertex y in graph G, there must also be a path from x to y in the transitive reduction of G, and vice versa. In particular, if G contains a path from x to y and a path from y to z, then the reachability of z from x is already implied by these two paths, just as transitivity of an order relation means that x < y and y < z imply x < z. A direct edge from x to z is therefore redundant, and the transitive reduction omits every edge whose endpoints are also connected through an intermediate vertex. The following image displays drawings of graphs corresponding to a non-transitive binary relation (on the left) and its transitive reduction (on the right).
The transitive reduction of a finite directed acyclic graph G is unique, and consists of the edges of G that form the only path between their endpoints. In particular, it is always a spanning subgraph of the given graph. For this reason, the transitive reduction coincides with the minimum equivalent graph in this case.
In the mathematical theory of binary relations, any relation R on a set X may be thought of as a directed graph that has the set X as its vertex set and that has an arc xy for every ordered pair of elements that are related in R. In particular, this method lets partially ordered sets be reinterpreted as directed acyclic graphs, in which there is an arc xy in the graph whenever there is an order relation x < y between the given pair of elements of the partial order. When the transitive reduction operation is applied to a directed acyclic graph that has been constructed in this way, it generates the covering relation of the partial order, which is frequently given visual expression by means of a Hasse diagram.
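For instance, the covering relation of a divisibility order can be read off as the transitive reduction of the corresponding DAG by discarding every arc that factors through an intermediate element. The element set below is an arbitrary illustrative choice.

```python
# Sketch: the covering relation (= transitive reduction) of the
# divisibility order on {1, 2, 3, 4, 6, 12}. An arc x -> z is dropped
# whenever some intermediate y gives arcs x -> y and y -> z.
elems = [1, 2, 3, 4, 6, 12]
arcs = {(x, z) for x in elems for z in elems if x != z and z % x == 0}

covers = {(x, z) for (x, z) in arcs
          if not any((x, y) in arcs and (y, z) in arcs for y in elems)}

print(sorted(covers))   # the Hasse-diagram edges of the divisibility order
```

The surviving pairs, such as (2, 4) and (6, 12), are exactly the edges one would draw in a Hasse diagram; implied pairs such as (1, 12) are discarded.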
Transitive reduction has been used on networks which can be represented as directed acyclic graphs (e.g. citation graphs or citation networks) to reveal structural differences between networks.[2]
In graphs with cycles
In a finite graph that has cycles, the transitive reduction may not be unique: there may be more than one graph on the same vertex set that has a minimum number of edges and has the same reachability relation as the given graph. Additionally, it may be the case that none of these minimum graphs is a subgraph of the given graph. Nevertheless, it is straightforward to characterize the minimum graphs with the same reachability relation as the given graph G.[3] If G is an arbitrary directed graph, and H is a graph with the minimum possible number of edges having the same reachability relation as G, then H consists of
• A directed cycle for each strongly connected component of G, connecting together the vertices in this component
• An edge xy for each edge XY of the transitive reduction of the condensation of G, where X and Y are two strongly connected components of G that are connected by an edge in the condensation, x is any vertex in component X, and y is any vertex in component Y. The condensation of G is a directed acyclic graph that has a vertex for every strongly connected component of G and an edge for every two components that are connected by an edge in G. In particular, because it is acyclic, its transitive reduction can be defined as in the previous section.
The total number of edges in this type of transitive reduction is then equal to the number of edges in the transitive reduction of the condensation, plus the number of vertices in nontrivial strongly connected components (components with more than one vertex).
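The characterization above can be sketched in code: find the strongly connected components (here with a Kosaraju-style two-pass search), transitively reduce the acyclic condensation, and count one cycle edge per vertex of each nontrivial component. The graph below is a made-up example with one 3-cycle and an acyclic tail.

```python
# Sketch of the minimum-edge count: edges in the transitive reduction of
# the condensation, plus one cycle edge per vertex of each nontrivial SCC.
from collections import defaultdict

edges = [('a', 'b'), ('b', 'c'), ('c', 'a'),   # a 3-cycle: one SCC
         ('a', 'd'), ('d', 'e'), ('a', 'e')]   # an acyclic tail

nodes = sorted({v for e in edges for v in e})
succ, pred = defaultdict(list), defaultdict(list)
for u, v in edges:
    succ[u].append(v)
    pred[v].append(u)

# Kosaraju's algorithm: finish-time order, then sweep the reversed graph.
order, seen = [], set()
def dfs1(u):
    seen.add(u)
    for v in succ[u]:
        if v not in seen:
            dfs1(v)
    order.append(u)
for u in nodes:
    if u not in seen:
        dfs1(u)

comp = {}
def dfs2(u, c):
    comp[u] = c
    for v in pred[u]:
        if v not in comp:
            dfs2(v, c)
for u in reversed(order):
    if u not in comp:
        dfs2(u, u)

# Condensation and its reachability (the condensation is acyclic).
cond = {(comp[u], comp[v]) for u, v in edges if comp[u] != comp[v]}
cnodes = set(comp.values())
reach = {x: set() for x in cnodes}
for x, y in cond:
    reach[x].add(y)
changed = True
while changed:
    changed = False
    for x in cnodes:
        for y in set(reach[x]):
            if reach[y] - reach[x]:
                reach[x] |= reach[y]
                changed = True

# An arc (x, z) of the condensation is redundant iff a longer path
# x -> y -> ... -> z also exists.
cond_red = {(x, z) for (x, z) in cond
            if not any(z in reach[y] for y in cnodes if (x, y) in cond)}

sizes = defaultdict(int)
for u in nodes:
    sizes[comp[u]] += 1
cycle_edges = sum(s for s in sizes.values() if s > 1)
print(len(cond_red) + cycle_edges)   # 2 condensation arcs + 3 cycle edges = 5
```

Here the reduction has 5 edges: the 3-cycle on {a, b, c}, plus the two condensation arcs that survive (the arc into e directly is redundant).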
The edges of the transitive reduction that correspond to condensation edges can always be chosen to be a subgraph of the given graph G. However, the cycle within each strongly connected component can only be chosen to be a subgraph of G if that component has a Hamiltonian cycle, something that is not always true and is difficult to check. Because of this difficulty, it is NP-hard to find the smallest subgraph of a given graph G with the same reachability (its minimum equivalent graph).[3]
Computational complexity
As Aho et al. show,[3] when the time complexity of graph algorithms is measured only as a function of the number n of vertices in the graph, and not as a function of the number of edges, transitive closure and transitive reduction of directed acyclic graphs have the same complexity. It had already been shown that transitive closure and multiplication of Boolean matrices of size n × n had the same complexity as each other,[4] so this result put transitive reduction into the same class. The best exact algorithms for matrix multiplication, as of 2015, take time O(n^2.3729),[5] and this gives the fastest known worst-case time bound for transitive reduction in dense graphs.
Computing the reduction using the closure
To prove that transitive reduction is as easy as transitive closure, Aho et al. rely on the already-known equivalence with Boolean matrix multiplication. They let A be the adjacency matrix of the given directed acyclic graph, and B be the adjacency matrix of its transitive closure (computed using any standard transitive closure algorithm). Then an edge uv belongs to the transitive reduction if and only if there is a nonzero entry in row u and column v of matrix A, and there is a zero entry in the same position of the matrix product AB. In this construction, the nonzero elements of the matrix AB represent pairs of vertices connected by paths of length two or more.[3]
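This matrix test can be sketched as follows, using Warshall's algorithm for the closure and a Boolean matrix product; the 4-vertex DAG is an arbitrary example.

```python
# Sketch of the Aho–Garey–Ullman observation: for a DAG with adjacency
# matrix A and transitive-closure matrix B, edge (u, v) survives the
# reduction iff A[u][v] = 1 and (A·B)[u][v] = 0.
n = 4
A = [[0, 1, 0, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]   # edges: 0->1, 0->3, 1->2, 1->3, 2->3

# Transitive closure by Warshall's algorithm.
B = [row[:] for row in A]
for k in range(n):
    for i in range(n):
        for j in range(n):
            B[i][j] = B[i][j] or (B[i][k] and B[k][j])

# Boolean product A·B marks pairs connected by paths of length >= 2.
AB = [[int(any(A[i][k] and B[k][j] for k in range(n)))
       for j in range(n)] for i in range(n)]

reduction = {(i, j) for i in range(n) for j in range(n)
             if A[i][j] and not AB[i][j]}
print(sorted(reduction))   # redundant edges 0->3 and 1->3 are gone
```

Any standard transitive-closure algorithm could replace the Warshall step; it is used here only to keep the sketch self-contained.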
Computing the closure using the reduction
To prove that transitive reduction is as hard as transitive closure, Aho et al. construct from a given directed acyclic graph G another graph H, in which each vertex of G is replaced by a path of three vertices, and each edge of G corresponds to an edge in H connecting the corresponding middle vertices of these paths. In addition, in the graph H, Aho et al. add an edge from every path start to every path end. In the transitive reduction of H, there is an edge from the path start for u to the path end for v, if and only if edge uv does not belong to the transitive closure of G. Therefore, if the transitive reduction of H can be computed efficiently, the transitive closure of G can be read off directly from it.[3]
Computing the reduction in sparse graphs
When measured both in terms of the number n of vertices and the number m of edges in a directed acyclic graph, transitive reductions can also be found in time O(nm), a bound that may be faster than the matrix multiplication methods for sparse graphs. To do so, apply a linear time longest path algorithm in the given directed acyclic graph, for each possible choice of starting vertex. From the computed longest paths, keep only those of length one (single edge); in other words, keep those edges (u,v) for which there exists no other path from u to v. This O(nm) time bound matches the complexity of constructing transitive closures by using depth-first search or breadth first search to find the vertices reachable from every choice of starting vertex, so again with these assumptions transitive closures and transitive reductions can be found in the same amount of time.
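A reachability-based variant of this idea (keep an edge (u, v) only when v is not reachable from u by any path of length two or more) can be sketched as follows; the function and graph are illustrative, not the article's exact algorithm.

```python
# Sketch: transitive reduction of a DAG by discarding every edge whose
# target is also reachable through one of the source's other successors.
from collections import defaultdict

def transitive_reduction(edges):
    succ = defaultdict(set)
    for u, v in edges:
        succ[u].add(v)

    def descend(v, seen):
        # Collect everything reachable from v by paths of length >= 1.
        for w in succ[v]:
            if w not in seen:
                seen.add(w)
                descend(w, seen)

    kept = set()
    for u in list(succ):
        two_plus = set()            # targets of u-paths of length >= 2
        for v in succ[u]:
            descend(v, two_plus)
        kept |= {(u, v) for v in succ[u] if v not in two_plus}
    return kept

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (0, 3)]
print(sorted(transitive_reduction(edges)))   # (0, 2) and (0, 3) drop out
```

Each outer iteration is one graph search, giving the O(nm) behavior described above on sparse inputs.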
Notes
1. Moyles & Thompson (1969).
2. Clough et al. (2015).
3. Aho, Garey & Ullman (1972)
4. Aho et al. credit this result to an unpublished 1971 manuscript of Ian Munro, and to a 1970 Russian-language paper by M. E. Furman.
5. Le Gall (2014).
References
• Aho, A. V.; Garey, M. R.; Ullman, J. D. (1972), "The transitive reduction of a directed graph", SIAM Journal on Computing, 1 (2): 131–137, doi:10.1137/0201008, MR 0306032.
• Clough, J. R.; Gollings, J.; Loach, T. V.; Evans, T. S. (2015), "Transitive reduction of citation networks", Journal of Complex Networks, 3 (2): 189–203, arXiv:1310.8224, doi:10.1093/comnet/cnu039.
• Moyles, Dennis M.; Thompson, Gerald L. (1969), "An Algorithm for Finding a Minimum Equivalent Graph of a Digraph", Journal of the ACM, 16 (3): 455–460, doi:10.1145/321526.321534.
• Le Gall, François (2014), "Powers of Tensors and Fast Matrix Multiplication", Proc. 39th International Symposium on Symbolic and Algebraic Computation (ISSAC '14), pp. 296–303, doi:10.1145/2608628.2608664.
External links
• Weisstein, Eric W. "Transitive Reduction". MathWorld.
Transitive relation
In mathematics, a relation R on a set X is transitive if, for all elements a, b, c in X, whenever R relates a to b and b to c, then R also relates a to c. Each partial order as well as each equivalence relation needs to be transitive.
Transitive relation
Type: Binary relation
Field: Elementary algebra
Statement: A relation $R$ on a set $X$ is transitive if, for all elements $a$, $b$, $c$ in $X$, whenever $R$ relates $a$ to $b$ and $b$ to $c$, then $R$ also relates $a$ to $c$.
Symbolic statement: $\forall a,b,c\in X:(aRb\wedge bRc)\Rightarrow aRc$
Definition
Transitive binary relations
Symmetric Antisymmetric Connected Well-founded Has joins Has meets Reflexive Irreflexive Asymmetric
Total, Semiconnex Anti-reflexive
Equivalence relation Y ✗ ✗ ✗ ✗ ✗ Y ✗ ✗
Preorder (Quasiorder) ✗ ✗ ✗ ✗ ✗ ✗ Y ✗ ✗
Partial order ✗ Y ✗ ✗ ✗ ✗ Y ✗ ✗
Total preorder ✗ ✗ Y ✗ ✗ ✗ Y ✗ ✗
Total order ✗ Y Y ✗ ✗ ✗ Y ✗ ✗
Prewellordering ✗ ✗ Y Y ✗ ✗ Y ✗ ✗
Well-quasi-ordering ✗ ✗ ✗ Y ✗ ✗ Y ✗ ✗
Well-ordering ✗ Y Y Y ✗ ✗ Y ✗ ✗
Lattice ✗ Y ✗ ✗ Y Y Y ✗ ✗
Join-semilattice ✗ Y ✗ ✗ Y ✗ Y ✗ ✗
Meet-semilattice ✗ Y ✗ ✗ ✗ Y Y ✗ ✗
Strict partial order ✗ Y ✗ ✗ ✗ ✗ ✗ Y Y
Strict weak order ✗ Y ✗ ✗ ✗ ✗ ✗ Y Y
Strict total order ✗ Y Y ✗ ✗ ✗ ✗ Y Y
Symmetric Antisymmetric Connected Well-founded Has joins Has meets Reflexive Irreflexive Asymmetric
Definitions, for all $a,b$ and $S\neq \varnothing :$ ${\begin{aligned}&aRb\\\Rightarrow {}&bRa\end{aligned}}$ ${\begin{aligned}aRb{\text{ and }}&bRa\\\Rightarrow a={}&b\end{aligned}}$ ${\begin{aligned}a\neq {}&b\Rightarrow \\aRb{\text{ or }}&bRa\end{aligned}}$ ${\begin{aligned}\min S\\{\text{exists}}\end{aligned}}$ ${\begin{aligned}a\vee b\\{\text{exists}}\end{aligned}}$ ${\begin{aligned}a\wedge b\\{\text{exists}}\end{aligned}}$ $aRa$ ${\text{not }}aRa$ ${\begin{aligned}aRb\Rightarrow \\{\text{not }}bRa\end{aligned}}$
Y indicates that the column's property is always true of the row's term (at the very left), while ✗ indicates that the property is not guaranteed in general (it might, or might not, hold). For example, that every equivalence relation is symmetric, but not necessarily antisymmetric, is indicated by Y in the "Symmetric" column and ✗ in the "Antisymmetric" column, respectively.
All definitions tacitly require the homogeneous relation $R$ to be transitive: for all $a,b,c,$ if $aRb$ and $bRc$ then $aRc.$
A term's definition may require additional properties that are not listed in this table.
A homogeneous relation R on the set X is a transitive relation if,[1]
for all a, b, c ∈ X, if a R b and b R c, then a R c.
Or in terms of first-order logic:
$\forall a,b,c\in X:(aRb\wedge bRc)\Rightarrow aRc$,
where a R b is the infix notation for (a, b) ∈ R.
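On a finite relation given as a set of ordered pairs, this first-order condition can be checked by brute force; a minimal sketch with two illustrative relations:

```python
# Sketch: check ∀a,b,c: aRb ∧ bRc ⇒ aRc directly on a set of pairs.
def is_transitive(R):
    return all((a, d) in R
               for (a, b) in R for (c, d) in R if b == c)

divides = {(a, b) for a in range(1, 9) for b in range(1, 9) if b % a == 0}
successor = {(n, n + 1) for n in range(8)}
print(is_transitive(divides))     # True: divisibility is transitive
print(is_transitive(successor))   # False: 1 R 2 and 2 R 3 but not 1 R 3
```

The check is cubic in the size of the relation at worst, which is fine for small examples like these.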
Examples
As a non-mathematical example, the relation "is an ancestor of" is transitive. For example, if Amy is an ancestor of Becky, and Becky is an ancestor of Carrie, then Amy, too, is an ancestor of Carrie.
On the other hand, "is the birth parent of" is not a transitive relation, because if Alice is the birth parent of Brenda, and Brenda is the birth parent of Claire, then this does not imply that Alice is the birth parent of Claire. What is more, it is antitransitive: Alice can never be the birth parent of Claire.
Non-transitive, non-antitransitive relations include sports fixtures (playoff schedules), 'knows' and 'talks to'.
"Is greater than", "is at least as great as", and "is equal to" (equality) are transitive relations on various sets, for instance, the set of real numbers or the set of natural numbers:
whenever x > y and y > z, then also x > z
whenever x ≥ y and y ≥ z, then also x ≥ z
whenever x = y and y = z, then also x = z.
More examples of transitive relations:
• "is a subset of" (set inclusion, a relation on sets)
• "divides" (divisibility, a relation on natural numbers)
• "implies" (implication, symbolized by "⇒", a relation on propositions)
Examples of non-transitive relations:
• "is the successor of" (a relation on natural numbers)
• "is a member of the set" (symbolized as "∈")[2]
• "is perpendicular to" (a relation on lines in Euclidean geometry)
The empty relation on any set $X$ is transitive[3][4] because there are no elements $a,b,c\in X$ such that $aRb$ and $bRc$, and hence the transitivity condition is vacuously true. A relation R containing only one ordered pair is also transitive: if the ordered pair is of the form $(x,x)$ for some $x\in X$, then the only such elements $a,b,c\in X$ are $a=b=c=x$, and in this case indeed $aRc$; if the ordered pair is not of the form $(x,x)$, then there are no such elements $a,b,c\in X$, and hence $R$ is vacuously transitive.
Properties
Closure properties
• The converse (inverse) of a transitive relation is always transitive. For instance, knowing that "is a subset of" is transitive and "is a superset of" is its converse, one can conclude that the latter is transitive as well.
• The intersection of two transitive relations is always transitive.[5] For instance, knowing that "was born before" and "has the same first name as" are transitive, one can conclude that "was born before and also has the same first name as" is also transitive.
• The union of two transitive relations need not be transitive. For instance, "was born before or has the same first name as" is not a transitive relation, since e.g. Herbert Hoover is related to Franklin D. Roosevelt, who is in turn related to Franklin Pierce, while Hoover is not related to Franklin Pierce.
• The complement of a transitive relation need not be transitive.[6] For instance, while "equal to" is transitive, "not equal to" is only transitive on sets with at most one element.
Other properties
A transitive relation is asymmetric if and only if it is irreflexive.[7]
A transitive relation need not be reflexive. When it is, it is called a preorder. For example, on set X = {1,2,3}:
• R = { (1,1), (2,2), (3,3), (1,3), (3,2) } is reflexive, but not transitive, as the pair (1,2) is absent,
• R = { (1,1), (2,2), (3,3), (1,3) } is reflexive as well as transitive, so it is a preorder,
• R = { (1,1), (2,2), (3,3) } is reflexive as well as transitive, another preorder.
Transitive extensions and transitive closure
Main article: Transitive closure
Let R be a binary relation on set X. The transitive extension of R, denoted R1, is the smallest binary relation on X such that R1 contains R, and if (a, b) ∈ R and (b, c) ∈ R then (a, c) ∈ R1.[8] For example, suppose X is a set of towns, some of which are connected by roads. Let R be the relation on towns where (A, B) ∈ R if there is a road directly linking town A and town B. This relation need not be transitive. The transitive extension of this relation can be defined by (A, C) ∈ R1 if you can travel between towns A and C by using at most two roads.
If a relation is transitive then its transitive extension is itself, that is, if R is a transitive relation then R1 = R.
The transitive extension of R1 would be denoted by R2, and continuing in this way, in general, the transitive extension of Ri would be Ri + 1. The transitive closure of R, denoted by R* or R∞ is the set union of R, R1, R2, ... .[9]
The transitive closure of a relation is a transitive relation.[9]
The relation "is the birth parent of" on a set of people is not a transitive relation. However, in biology the need often arises to consider birth parenthood over an arbitrary number of generations: the relation "is a birth ancestor of" is a transitive relation and it is the transitive closure of the relation "is the birth parent of".
For the example of towns and roads above, (A, C) ∈ R* provided you can travel between towns A and C using any number of roads.
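The towns-and-roads construction can be sketched in Python: one extension step adds all two-road trips, and iterating until nothing new appears yields the transitive closure (names are illustrative; on a finite set the iteration stabilises):

```python
def extension(rel):
    """One transitive-extension step: R -> R1."""
    return rel | {(a, c) for (a, b) in rel for (w, c) in rel if w == b}

def closure(rel):
    """Iterate extensions until nothing new appears; on a finite set this is R*."""
    while True:
        bigger = extension(rel)
        if bigger == rel:
            return rel
        rel = bigger

roads = {("A", "B"), ("B", "C"), ("C", "D")}
r1 = extension(roads)
assert ("A", "C") in r1 and ("A", "D") not in r1  # reachable with at most two roads
assert ("A", "D") in closure(roads)               # reachable with any number of roads
assert extension(r1) == closure(roads)            # here R2 is already the closure
```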
Relation types that require transitivity
• Preorder – a reflexive and transitive relation
• Partial order – an antisymmetric preorder
• Total preorder – a connected (formerly called total) preorder
• Equivalence relation – a symmetric preorder
• Strict weak ordering – a strict partial order in which incomparability is an equivalence relation
• Total ordering – a connected (total), antisymmetric, and transitive relation
Counting transitive relations
No general formula that counts the number of transitive relations on a finite set (sequence A006905 in the OEIS) is known.[10] However, there is a formula for finding the number of relations that are simultaneously reflexive, symmetric, and transitive – in other words, equivalence relations – (sequence A000110 in the OEIS), those that are symmetric and transitive, those that are symmetric, transitive, and antisymmetric, and those that are total, transitive, and antisymmetric. Pfeiffer[11] has made some progress in this direction, expressing relations with combinations of these properties in terms of each other, but still calculating any one is difficult. See also Brinkmann and McKay (2005).[12] Mala showed that no polynomial with integer coefficients can represent a formula for the number of transitive relations on a set,[13] and found certain recursive relations that provide lower bounds for that number. He also showed that that number is a polynomial of degree two if the set contains exactly two ordered pairs.[14]
Number of n-element binary relations of different types
| Elements | Any | Transitive | Reflexive | Symmetric | Preorder | Partial order | Total preorder | Total order | Equivalence relation |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 1 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 1 | 1 |
| 2 | 16 | 13 | 4 | 8 | 4 | 3 | 3 | 2 | 2 |
| 3 | 512 | 171 | 64 | 64 | 29 | 19 | 13 | 6 | 5 |
| 4 | 65,536 | 3,994 | 4,096 | 1,024 | 355 | 219 | 75 | 24 | 15 |
| n | $2^{n^{2}}$ | | $2^{n^{2}-n}$ | $2^{n(n+1)/2}$ | | | $\sum _{k=0}^{n}k!S(n,k)$ | $n!$ | $\sum _{k=0}^{n}S(n,k)$ |
| OEIS | A002416 | A006905 | A053763 | A006125 | A000798 | A001035 | A000670 | A000142 | A000110 |
Note that S(n, k) refers to Stirling numbers of the second kind.
Related properties
A relation R is called intransitive if it is not transitive, that is, if xRy and yRz, but not xRz, for some x, y, z. In contrast, a relation R is called antitransitive if xRy and yRz always implies that xRz does not hold. For example, the relation defined by xRy if xy is an even number is intransitive,[15] but not antitransitive.[16] The relation defined by xRy if x is even and y is odd is both transitive and antitransitive.[17] The relation defined by xRy if x is the successor number of y is both intransitive[18] and antitransitive.[19] Unexpected examples of intransitivity arise in situations such as political questions or group preferences.[20]
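The distinction between intransitive and antitransitive can be tested on the examples from the footnotes; a small Python sketch (helper names are illustrative):

```python
def is_transitive(rel):
    return all((x, z) in rel for (x, y) in rel for (w, z) in rel if w == y)

def is_antitransitive(rel):
    """No chain (x, y), (y, z) may ever be closed by (x, z)."""
    return all((x, z) not in rel for (x, y) in rel for (w, z) in rel if w == y)

nums = range(1, 8)
prod_even = {(x, y) for x in nums for y in nums if (x * y) % 2 == 0}
succ = {(y + 1, y) for y in range(1, 7)}  # x is the successor of y

assert not is_transitive(prod_even)      # 3R4 and 4R5, but not 3R5
assert not is_antitransitive(prod_even)  # 2R3 and 3R4, and also 2R4
assert not is_transitive(succ)           # 3R2 and 2R1, but not 3R1
assert is_antitransitive(succ)
```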
Generalized to stochastic versions (stochastic transitivity), the study of transitivity finds applications in decision theory, psychometrics and utility models.[21]
A quasitransitive relation is another generalization;[6] it is required to be transitive only on its non-symmetric part. Such relations are used in social choice theory or microeconomics.[22]
Proposition: If R is univalent, then $R;R^{T}$ is transitive.
Proof: Suppose $xR;R^{T}yR;R^{T}z.$ Then there are a and b such that $xRaR^{T}yRbR^{T}z.$ Since R is univalent, yRb and $aR^{T}y$ imply a=b. Therefore $xRaR^{T}z$, hence $xR;R^{T}z$ and $R;R^{T}$ is transitive.
Corollary: If R is univalent, then $R;R^{T}$ is an equivalence relation on the domain of R.
Proof: $R;R^{T}$ is symmetric and reflexive on its domain. With univalence of R, the transitive requirement for equivalence is fulfilled.
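The proposition can be illustrated concretely: for a univalent relation (a partial function), $R;R^{T}$ relates two elements exactly when they share an image, which is an equivalence on the domain. A Python sketch (names are illustrative):

```python
def compose(R, S):
    """Relational composition R;S: (a, c) whenever a R b and b S c."""
    return {(a, c) for (a, b) in R for (w, c) in S if w == b}

def is_transitive(R):
    return all((x, z) in R for (x, y) in R for (w, z) in R if w == y)

# A univalent relation: each left element has at most one image.
R = {(1, "a"), (2, "a"), (3, "b")}
Rt = {(y, x) for (x, y) in R}  # converse R^T
E = compose(R, Rt)             # x (R;R^T) z  iff  x and z share an image

assert E == {(1, 1), (1, 2), (2, 1), (2, 2), (3, 3)}
assert is_transitive(E)
assert E == {(b, a) for (a, b) in E}  # symmetric; reflexive on dom(R) = {1, 2, 3}
```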
See also
• Transitive reduction
• Intransitive dice
• Rational choice theory
• Hypothetical syllogism — transitivity of the material conditional
Notes
1. Smith, Eggen & St. Andre 2006, p. 145
2. However, the class of von Neumann ordinals is constructed in a way such that ∈ is transitive when restricted to that class.
3. Smith, Eggen & St. Andre 2006, p. 146
4. https://courses.engr.illinois.edu/cs173/sp2011/Lectures/relations.pdf Archived 2023-02-04 at the Wayback Machine
5. Bianchi, Mariagrazia; Mauri, Anna Gillio Berta; Herzog, Marcel; Verardi, Libero (2000-01-12). "On finite solvable groups in which normality is a transitive relation". Journal of Group Theory. 3 (2). doi:10.1515/jgth.2000.012. ISSN 1433-5883. Archived from the original on 2023-02-04. Retrieved 2022-12-29.
6. Robinson, Derek J. S. (January 1964). "Groups in which normality is a transitive relation". Mathematical Proceedings of the Cambridge Philosophical Society. 60 (1): 21–38. doi:10.1017/S0305004100037403. ISSN 0305-0041. S2CID 119707269. Archived from the original on 2023-02-04. Retrieved 2022-12-29.
7. Flaška, V.; Ježek, J.; Kepka, T.; Kortelainen, J. (2007). Transitive Closures of Binary Relations I (PDF). Prague: School of Mathematics - Physics Charles University. p. 1. Archived from the original (PDF) on 2013-11-02. Lemma 1.1 (iv). Note that this source refers to asymmetric relations as "strictly antisymmetric".
8. Liu 1985, p. 111
9. Liu 1985, p. 112
10. Steven R. Finch, "Transitive relations, topologies and partial orders" Archived 2016-03-04 at the Wayback Machine, 2003.
11. Götz Pfeiffer, "Counting Transitive Relations Archived 2023-02-04 at the Wayback Machine", Journal of Integer Sequences, Vol. 7 (2004), Article 04.3.2.
12. Gunnar Brinkmann and Brendan D. McKay,"Counting unlabelled topologies and transitive relations Archived 2005-07-20 at the Wayback Machine"
13. Mala, Firdous Ahmad (2021-06-14). "On the number of transitive relations on a set". Indian Journal of Pure and Applied Mathematics. 53: 228–232. doi:10.1007/s13226-021-00100-0. ISSN 0975-7465. S2CID 256065947. Archived from the original on 2023-02-04. Retrieved 2021-12-06.
14. Mala, Firdous Ahmad (2021-10-13). "Counting Transitive Relations with Two Ordered Pairs". Journal of Applied Mathematics and Computation. 5 (4): 247–251. doi:10.26855/jamc.2021.12.002. ISSN 2576-0645.
15. since e.g. 3R4 and 4R5, but not 3R5
16. since e.g. 2R3 and 3R4 and 2R4
17. since xRy and yRz can never happen
18. since e.g. 3R2 and 2R1, but not 3R1
19. since, more generally, xRy and yRz implies x=y+1=z+2≠z+1, i.e. not xRz, for all x, y, z
20. Drum, Kevin (November 2018). "Preferences are not transitive". Mother Jones. Archived from the original on 2018-11-29. Retrieved 2018-11-29.
21. Oliveira, I.F.D.; Zehavi, S.; Davidov, O. (August 2018). "Stochastic transitivity: Axioms and models". Journal of Mathematical Psychology. 85: 25–35. doi:10.1016/j.jmp.2018.06.002. ISSN 0022-2496.
22. Sen, A. (1969). "Quasi-transitivity, rational choice and collective decisions". Rev. Econ. Stud. 36 (3): 381–393. doi:10.2307/2296434. JSTOR 2296434. Zbl 0181.47302.
References
• Grimaldi, Ralph P. (1994), Discrete and Combinatorial Mathematics (3rd ed.), Addison-Wesley, ISBN 0-201-19912-2
• Liu, C.L. (1985), Elements of Discrete Mathematics, McGraw-Hill, ISBN 0-07-038133-X
• Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press, ISBN 978-0-521-76268-7.
• Smith, Douglas; Eggen, Maurice; St. Andre, Richard (2006), A Transition to Advanced Mathematics (6th ed.), Brooks/Cole, ISBN 978-0-534-39900-9
• Pfeiffer, G. (2004). Counting transitive relations. Journal of Integer Sequences, 7(2), 3.
External links
• "Transitivity", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Transitivity in Action at cut-the-knot
| Wikipedia |
Transitive set
In set theory, a branch of mathematics, a set $A$ is called transitive if either of the following equivalent conditions hold:
• whenever $x\in A$, and $y\in x$, then $y\in A$.
• whenever $x\in A$, and $x$ is not an urelement, then $x$ is a subset of $A$.
Similarly, a class $M$ is transitive if every element of $M$ is a subset of $M$.
Examples
Using the definition of ordinal numbers suggested by John von Neumann, ordinal numbers are defined as hereditarily transitive sets: an ordinal number is a transitive set whose members are also transitive (and thus ordinals). The class of all ordinals is a transitive class.
Any of the stages $V_{\alpha }$ and $L_{\alpha }$ leading to the construction of the von Neumann universe $V$ and Gödel's constructible universe $L$ are transitive sets. The universes $V$ and $L$ themselves are transitive classes.
This is a complete list of all finite transitive sets with up to 20 brackets:[1]
• $\{\},$
• $\{\{\}\},$
• $\{\{\},\{\{\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\},\{\{\}\}\},\{\{\{\}\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\},\{\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\{\}\},\{\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\},\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\},\{\{\{\}\}\}\}\},\{\{\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\{\}\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\{\}\}\}\},\{\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\{\{\}\}\},\{\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\},\{\{\}\},\{\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\{\{\}\}\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\}\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\{\}\}\}\},\{\{\},\{\{\},\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\{\}\}\}\},\{\{\},\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\{\},\{\{\}\}\}\}\},\{\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\},\{\{\{\}\}\},\{\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\},\{\{\{\}\}\}\}\},\{\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\{\}\}\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\},\{\{\{\}\}\}\},\{\{\{\}\},\{\{\},\{\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\}\},\{\{\{\}\}\}\},\{\{\},\{\{\}\},\{\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\{\},\{\{\}\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\}\},\{\{\{\}\},\{\{\},\{\{\}\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\{\{\{\}\}\}\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\{\},\{\{\}\}\}\},\{\{\},\{\{\}\}\}\},$
• $\{\{\},\{\{\}\},\{\{\{\}\}\},\{\{\{\{\}\}\}\},\{\{\},\{\{\}\}\},\{\{\},\{\{\{\}\}\}\}\}.$
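Transitivity of such hereditarily finite sets can be checked mechanically. The following Python sketch (function names are illustrative) models pure sets as nested frozensets and verifies that the first few von Neumann ordinals are transitive while {{{}}} is not:

```python
def is_transitive_set(A):
    """A is transitive iff every element (here, every element is itself
    a frozenset) is also a subset of A."""
    return all(x <= A for x in A)

def successor(a):
    """von Neumann successor a ∪ {a}; iterating from ∅ yields the finite ordinals."""
    return a | frozenset({a})

empty = frozenset()     # {}
one = successor(empty)  # {{}}
two = successor(one)    # {{},{{}}}
three = successor(two)

assert all(is_transitive_set(n) for n in (empty, one, two, three))
assert not is_transitive_set(frozenset({one}))  # {{{}}}: its element {{}} is not a subset
```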
Properties
A set $X$ is transitive if and only if $ \bigcup X\subseteq X$, where $ \bigcup X$ is the union of all elements of $X$ that are sets, $ \bigcup X=\{y\mid \exists x\in X:y\in x\}$.
If $X$ is transitive, then $ \bigcup X$ is transitive.
If $X$ and $Y$ are transitive, then $X\cup Y$ and $X\cup Y\cup \{X,Y\}$ are transitive. In general, if $Z$ is a class all of whose elements are transitive sets, then $ \bigcup Z$ and $ Z\cup \bigcup Z$ are transitive. (The first sentence in this paragraph is the case of $Z=\{X,Y\}$.)
A set $X$ that does not contain urelements is transitive if and only if it is a subset of its own power set, $ X\subseteq {\mathcal {P}}(X).$ The power set of a transitive set without urelements is transitive.
Transitive closure
The transitive closure of a set $X$ is the smallest (with respect to inclusion) transitive set that includes $X$ (i.e. $ X\subseteq \operatorname {TC} (X)$).[2] Suppose one is given a set $X$, then the transitive closure of $X$ is
$\operatorname {TC} (X)=\bigcup \left\{X,\;\bigcup X,\;\bigcup \bigcup X,\;\bigcup \bigcup \bigcup X,\;\bigcup \bigcup \bigcup \bigcup X,\ldots \right\}.$
Proof. Denote $ X_{0}=X$ and $ X_{n+1}=\bigcup X_{n}$. Then we claim that the set
$T=\operatorname {TC} (X)=\bigcup _{n=0}^{\infty }X_{n}$
is transitive, and whenever $ T_{1}$ is a transitive set including $ X$ then $ T\subseteq T_{1}$.
Assume $ y\in x\in T$. Then $ x\in X_{n}$ for some $ n$ and so $ y\in \bigcup X_{n}=X_{n+1}$. Since $ X_{n+1}\subseteq T$, $ y\in T$. Thus $ T$ is transitive.
Now let $ T_{1}$ be as above. We prove by induction that $ X_{n}\subseteq T_{1}$ for all $n$, thus proving that $ T\subseteq T_{1}$: The base case holds since $ X_{0}=X\subseteq T_{1}$. Now assume $ X_{n}\subseteq T_{1}$. Then $ X_{n+1}=\bigcup X_{n}\subseteq \bigcup T_{1}$. But $ T_{1}$ is transitive so $ \bigcup T_{1}\subseteq T_{1}$, hence $ X_{n+1}\subseteq T_{1}$. This completes the proof.
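The layered construction in the proof can be run directly for hereditarily finite sets, where the layers $X_{n}$ eventually reach $\varnothing $. A Python sketch (names are illustrative), again modelling pure sets as nested frozensets:

```python
def union_of(X):
    """⋃X for a family of frozensets."""
    out = frozenset()
    for x in X:
        out |= x
    return out

def tc(X):
    """TC(X) = X ∪ ⋃X ∪ ⋃⋃X ∪ ...; for hereditarily finite sets the
    layers reach ∅ and the loop stops."""
    result = frozenset()
    layer = X
    while not layer <= result:
        result |= layer
        layer = union_of(layer)
    return result

empty = frozenset()
one = frozenset({empty})           # {{}}
x = frozenset({frozenset({one})})  # {{{{}}}}
t = tc(x)

assert t == frozenset({frozenset({one}), one, empty})
assert all(y <= t for y in t)  # TC(X) is transitive
assert x <= t                  # and includes X
```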
Note that this is the set of all of the objects related to $X$ by the transitive closure of the membership relation, since the union of a set can be expressed in terms of the relative product of the membership relation with itself.
The transitive closure of a set can be expressed by a first-order formula: $x$ is the transitive closure of $y$ iff $x$ is the intersection of all transitive supersets of $y$ (that is, every transitive superset of $y$ contains $x$ as a subset).
Transitive models of set theory
Transitive classes are often used for construction of interpretations of set theory in itself, usually called inner models. The reason is that properties defined by bounded formulas are absolute for transitive classes.
A transitive set (or class) that is a model of a formal system of set theory is called a transitive model of the system (provided that the element relation of the model is the restriction of the true element relation to the universe of the model). Transitivity is an important factor in determining the absoluteness of formulas.
In the superstructure approach to non-standard analysis, the non-standard universes satisfy strong transitivity.[3]
See also
• End extension
• Transitive relation
• Supertransitive class
References
1. "Number of rooted identity trees with n nodes (rooted trees whose automorphism group is the identity group)". OEIS.
2. Ciesielski, Krzysztof (1997). Set theory for the working mathematician. Cambridge: Cambridge University Press. p. 164. ISBN 978-1-139-17313-1. OCLC 817922080.
3. Goldblatt (1998) p.161
• Ciesielski, Krzysztof (1997), Set theory for the working mathematician, London Mathematical Society Student Texts, vol. 39, Cambridge: Cambridge University Press, ISBN 0-521-59441-3, Zbl 0938.03067
• Goldblatt, Robert (1998), Lectures on the hyperreals. An introduction to nonstandard analysis, Graduate Texts in Mathematics, vol. 188, New York, NY: Springer-Verlag, ISBN 0-387-98464-X, Zbl 0911.03032
• Jech, Thomas (2008) [originally published in 1973], The Axiom of Choice, Dover Publications, ISBN 0-486-46624-8, Zbl 0259.02051
Transitive closure
In mathematics, the transitive closure R+ of a homogeneous binary relation R on a set X is the smallest relation on X that contains R and is transitive. For finite sets, "smallest" can be taken in its usual sense, of having the fewest related pairs; for infinite sets R+ is the unique minimal transitive superset of R.
Transitive binary relations
| | Symmetric | Antisymmetric | Connected | Well-founded | Has joins | Has meets | Reflexive | Irreflexive | Asymmetric |
|---|---|---|---|---|---|---|---|---|---|
| Equivalence relation | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Preorder (Quasiorder) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Total preorder | ✗ | ✗ | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Total order | ✗ | Y | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗ |
| Prewellordering | ✗ | ✗ | Y | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Well-quasi-ordering | ✗ | ✗ | ✗ | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Well-ordering | ✗ | Y | Y | Y | ✗ | ✗ | Y | ✗ | ✗ |
| Lattice | ✗ | Y | ✗ | ✗ | Y | Y | Y | ✗ | ✗ |
| Join-semilattice | ✗ | Y | ✗ | ✗ | Y | ✗ | Y | ✗ | ✗ |
| Meet-semilattice | ✗ | Y | ✗ | ✗ | ✗ | Y | Y | ✗ | ✗ |
| Strict partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y |
| Strict weak order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y |
| Strict total order | ✗ | Y | Y | ✗ | ✗ | ✗ | ✗ | Y | Y |
Definitions, for all $a,b$ and $S\neq \varnothing $:
• Symmetric: $aRb\Rightarrow bRa$
• Antisymmetric: $aRb{\text{ and }}bRa\Rightarrow a=b$
• Connected (also called total or semiconnex): $a\neq b\Rightarrow aRb{\text{ or }}bRa$
• Well-founded: $\min S$ exists
• Has joins: $a\vee b$ exists
• Has meets: $a\wedge b$ exists
• Reflexive: $aRa$
• Irreflexive (also called anti-reflexive): ${\text{not }}aRa$
• Asymmetric: $aRb\Rightarrow {\text{not }}bRa$
Y indicates that the column's property is always true of the row's term (at the very left), while ✗ indicates that the property is not guaranteed in general (it might, or might not, hold). For example, that every equivalence relation is symmetric, but not necessarily antisymmetric, is indicated by Y in the "Symmetric" column and ✗ in the "Antisymmetric" column, respectively.
All definitions tacitly require the homogeneous relation $R$ to be transitive: for all $a,b,c,$ if $aRb$ and $bRc$ then $aRc.$
A term's definition may require additional properties that are not listed in this table.
This article is about the transitive closure of a binary relation. For the transitive closure of a set, see transitive set § Transitive closure.
For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights".
More formally, the transitive closure of a binary relation R on a set X is the smallest (w.r.t. ⊆) transitive relation R+ on X such that R ⊆ R+; see Lidl & Pilz (1998, p. 337). We have R+ = R if, and only if, R itself is transitive.
Conversely, transitive reduction adduces a minimal relation S from a given relation R such that they have the same closure, that is, S+ = R+; however, many different S with this property may exist.
Both transitive closure and transitive reduction are also used in the closely related area of graph theory.
Transitive relations and examples
A relation R on a set X is transitive if, for all x, y, z in X, whenever x R y and y R z then x R z. Examples of transitive relations include the equality relation on any set, the "less than or equal" relation on any linearly ordered set, and the relation "x was born before y" on the set of all people. Symbolically, this can be denoted as: if x < y and y < z then x < z.
One example of a non-transitive relation is "city x can be reached via a direct flight from city y" on the set of all cities. Simply because there is a direct flight from one city to a second city, and a direct flight from the second city to the third, does not imply there is a direct flight from the first city to the third. The transitive closure of this relation is a different relation, namely "there is a sequence of direct flights that begins at city x and ends at city y". Every relation can be extended in a similar way to a transitive relation.
An example of a non-transitive relation with a less meaningful transitive closure is "x is the day of the week after y". The transitive closure of this relation is "some day x comes after a day y on the calendar", which is trivially true for all days of the week x and y (and thus equivalent to the Cartesian square, which is "x and y are both days of the week").
Existence and description
For any relation R, the transitive closure of R always exists. To see this, note that the intersection of any family of transitive relations is again transitive. Furthermore, there exists at least one transitive relation containing R, namely the trivial one: X × X. The transitive closure of R is then given by the intersection of all transitive relations containing R.
For finite sets, we can construct the transitive closure step by step, starting from R and adding transitive edges. This gives the intuition for a general construction. For any set X, we can prove that transitive closure is given by the following expression
$R^{+}=\bigcup _{i=1}^{\infty }R^{i},$
where $R^{i}$ is the i-th power of R, defined inductively by
$R^{1}=R$
and, for $i>0$,
$R^{i+1}=R\circ R^{i}$
where $\circ $ denotes composition of relations.
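This union-of-powers description translates directly into code. The following Python sketch (names are illustrative) accumulates the powers $R^{i}$ until they contribute nothing new, which on a finite set must happen:

```python
def compose(R, S):
    """Relational composition: (a, c) whenever a R b and b S c."""
    return {(a, c) for (a, b) in R for (w, c) in S if w == b}

def transitive_closure(R):
    """R+ = R ∪ R^2 ∪ R^3 ∪ ...; on a finite set the powers eventually
    contribute nothing new, so the loop terminates."""
    closure = set(R)
    power = set(R)
    while True:
        power = compose(R, power)
        if power <= closure:
            return closure
        closure |= power

flights = {("a", "b"), ("b", "c"), ("c", "d")}
plus = transitive_closure(flights)
assert plus == flights | {("a", "c"), ("b", "d"), ("a", "d")}
assert ("d", "a") not in plus  # closure adds no reversed pairs
```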
To show that the above definition of R+ is the least transitive relation containing R, we show that it contains R, that it is transitive, and that it is the smallest set with both of those characteristics.
• $R\subseteq R^{+}$: $R^{+}$ contains all of the $R^{i}$, so in particular $R^{+}$ contains $R$.
• $R^{+}$ is transitive: If $(s_{1},s_{2}),(s_{2},s_{3})\in R^{+}$, then $(s_{1},s_{2})\in R^{j}$ and $(s_{2},s_{3})\in R^{k}$ for some $j,k$ by definition of $R^{+}$. Since composition is associative, $R^{j+k}=R^{j}\circ R^{k}$; hence $(s_{1},s_{3})\in R^{j+k}\subseteq R^{+}$ by definition of $\circ $ and $R^{+}$.
• $R^{+}$ is minimal, that is, if $T$ is any transitive relation containing $R$, then $R^{+}\subseteq T$: Given any such $T$, induction on $i$ can be used to show $R^{i}\subseteq T$ for all $i$ as follows: Base: $R^{1}=R\subseteq T$ by assumption. Step: If $R^{i}\subseteq T$ holds, and $(s_{1},s_{3})\in R^{i+1}=R\circ R^{i}$, then $(s_{1},s_{2})\in R$ and $(s_{2},s_{3})\in R^{i}$ for some $s_{2}$, by definition of $\circ $. Hence, $(s_{1},s_{2}),(s_{2},s_{3})\in T$ by assumption and by induction hypothesis. Hence $(s_{1},s_{3})\in T$ by transitivity of $T$; this completes the induction. Finally, $R^{i}\subseteq T$ for all $i$ implies $R^{+}\subseteq T$ by definition of $R^{+}$.
Properties
The intersection of two transitive relations is transitive.
The union of two transitive relations need not be transitive. To preserve transitivity, one must take the transitive closure. This occurs, for example, when taking the union of two equivalence relations or two preorders. To obtain a new equivalence relation or preorder one must take the transitive closure (reflexivity and symmetry—in the case of equivalence relations—are automatic).
In graph theory
In computer science, the concept of transitive closure can be thought of as constructing a data structure that makes it possible to answer reachability questions. That is, can one get from node a to node d in one or more hops? A binary relation tells you only that node a is connected to node b, and that node b is connected to node c, etc. After the transitive closure is constructed, as depicted in the following figure, in an O(1) operation one may determine that node d is reachable from node a. The data structure is typically stored as a Boolean matrix, so if matrix[1][4] = true, then it is the case that node 1 can reach node 4 through one or more hops.
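One simple way to build such a reachability matrix is a breadth-first search from each node; afterwards every query is a single array lookup. A Python sketch (0-based indices here, unlike the 1-based matrix[1][4] in the text; names are illustrative):

```python
from collections import deque

def reachability_matrix(n, edges):
    """Fill a Boolean matrix with one BFS per source node, so that
    'can one get from i to j in one or more hops?' is an O(1) lookup."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    reach = [[False] * n for _ in range(n)]
    for s in range(n):
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not reach[s][v]:
                    reach[s][v] = True
                    queue.append(v)
    return reach

# Chain a -> b -> c -> d encoded as 0 -> 1 -> 2 -> 3.
m = reachability_matrix(4, [(0, 1), (1, 2), (2, 3)])
assert m[0][3] and m[0][1] and not m[3][0]
```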
The transitive closure of the adjacency relation of a directed acyclic graph (DAG) is the reachability relation of the DAG and a strict partial order.
The transitive closure of an undirected graph produces a cluster graph, a disjoint union of cliques. Constructing the transitive closure is an equivalent formulation of the problem of finding the components of the graph.[1]
In logic and computational complexity
The transitive closure of a binary relation cannot, in general, be expressed in first-order logic (FO). This means that one cannot write a formula using predicate symbols R and T that will be satisfied in any model if and only if T is the transitive closure of R. In finite model theory, first-order logic (FO) extended with a transitive closure operator is usually called transitive closure logic, and abbreviated FO(TC) or just TC. TC is a sub-type of fixpoint logics. The fact that FO(TC) is strictly more expressive than FO was discovered by Ronald Fagin in 1974; the result was then rediscovered by Alfred Aho and Jeffrey Ullman in 1979, who proposed to use fixpoint logic as a database query language.[2] With more recent concepts of finite model theory, proof that FO(TC) is strictly more expressive than FO follows immediately from the fact that FO(TC) is not Gaifman-local.[3]
In computational complexity theory, the complexity class NL corresponds precisely to the set of logical sentences expressible in TC. This is because the transitive closure property has a close relationship with the NL-complete problem STCON for finding directed paths in a graph. Similarly, the class L is first-order logic with the commutative, transitive closure. When transitive closure is added to second-order logic instead, we obtain PSPACE.
In database query languages
Since the 1980s Oracle Database has implemented a proprietary SQL extension CONNECT BY... START WITH that allows the computation of a transitive closure as part of a declarative query. The SQL 3 (1999) standard added a more general WITH RECURSIVE construct also allowing transitive closures to be computed inside the query processor; as of 2011 the latter is implemented in IBM Db2, Microsoft SQL Server, Oracle, PostgreSQL, and MySQL (v8.0+). SQLite released support for this in 2014.
Datalog also implements transitive closure computations.[4]
MariaDB implements Recursive Common Table Expressions, which can be used to compute transitive closures. This feature was introduced in release 10.2.2 of April 2016.[5]
Algorithms
Efficient algorithms for computing the transitive closure of the adjacency relation of a graph can be found in Nuutila (1995). Reducing the problem to multiplications of adjacency matrices achieves the least time complexity, viz. that of matrix multiplication (Munro 1971, Fischer & Meyer 1971), which is $O(n^{2.3728596})$ as of December 2020. However, this approach is not practical since both the constant factors and the memory consumption for sparse graphs are high (Nuutila 1995, pp. 22–23, sect.2.3.3). The problem can also be solved by the Floyd–Warshall algorithm in $O(n^{3})$, or by repeated breadth-first search or depth-first search starting from each node of the graph.
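The $O(n^{3})$ Floyd–Warshall approach mentioned above can be sketched in a few lines (function and variable names are illustrative):

```python
from itertools import product

def transitive_closure(n, edges):
    """Boolean Floyd-Warshall: reach[i][j] is True iff vertex j is
    reachable from vertex i by a path of one or more edges."""
    reach = [[False] * n for _ in range(n)]
    for i, j in edges:
        reach[i][j] = True
    # product iterates with k as the slowest (outermost) index, as required
    for k, i, j in product(range(n), repeat=3):
        if reach[i][k] and reach[k][j]:
            reach[i][j] = True
    return reach

# Path 0 -> 1 -> 2: the closure gains the edge 0 -> 2.
reach = transitive_closure(3, [(0, 1), (1, 2)])
```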
For directed graphs, Purdom's algorithm solves the problem by first computing its condensation DAG and its transitive closure, then lifting it to the original graph. Its runtime is $O(m+\mu n)$, where $\mu $ is the number of edges between its strongly connected components.[6][7][8][9]
More recent research has explored efficient ways of computing transitive closure on distributed systems based on the MapReduce paradigm.[10]
See also
• Ancestral relation
• Deductive closure
• Reflexive closure
• Symmetric closure
• Transitive reduction (a smallest relation having the transitive closure of R as its transitive closure)
References
1. McColl, W. F.; Noshita, K. (1986), "On the number of edges in the transitive closure of a graph", Discrete Applied Mathematics, 15 (1): 67–73, doi:10.1016/0166-218X(86)90020-X, MR 0856101
2. (Libkin 2004:vii)
3. (Libkin 2004:49)
4. (Silberschatz et al. 2010:C.3.6)
5. "Recursive Common Table Expressions Overview". mariadb.com.
6. Purdom Jr., Paul (Mar 1970). "A transitive closure algorithm". BIT Numerical Mathematics. 10 (1): 76–94. doi:10.1007/BF01940892.
7. Paul W. Purdom Jr. (Jul 1968). A transitive closure algorithm (Computer Sciences Technical Report). Vol. 33. University of Wisconsin-Madison.
8. ""Purdom's algorithm" on AlgoWiki".
9. ""Transitive closure of a directed graph" on AlgoWiki".
10. (Afrati et al. 2011)
• Foto N. Afrati, Vinayak Borkar, Michael Carey, Neoklis Polyzotis, Jeffrey D. Ullman, Map-Reduce Extensions and Recursive Queries, EDBT 2011, March 22–24, 2011, Uppsala, Sweden, ISBN 978-1-4503-0528-0
• Aho, A. V.; Ullman, J. D. (1979). "Universality of data retrieval languages". Proceedings of the 6th ACM SIGACT-SIGPLAN Symposium on Principles of programming languages - POPL '79. pp. 110–119. doi:10.1145/567752.567763.
• Benedikt, M.; Senellart, P. (2011). "Databases". In Blum, Edward K.; Aho, Alfred V. (eds.). Computer Science. The Hardware, Software and Heart of It. pp. 169–229. doi:10.1007/978-1-4614-1168-0_10. ISBN 978-1-4614-1167-3.
• Heinz-Dieter Ebbinghaus; Jörg Flum (1999). Finite Model Theory (2nd ed.). Springer. pp. 123–124, 151–161, 220–235. ISBN 978-3-540-28787-2.
• Fischer, M.J.; Meyer, A.R. (Oct 1971). "Boolean matrix multiplication and transitive closure" (PDF). In Raymond E. Miller and John E. Hopcroft (ed.). Proc. 12th Ann. Symp. on Switching and Automata Theory (SWAT). IEEE Computer Society. pp. 129–131. doi:10.1109/SWAT.1971.4.
• Erich Grädel; Phokion G. Kolaitis; Leonid Libkin; Maarten Marx; Joel Spencer; Moshe Y. Vardi; Yde Venema; Scott Weinstein (2007). Finite Model Theory and Its Applications. Springer. pp. 151–152. ISBN 978-3-540-68804-4.
• Keller, U., 2004, Some Remarks on the Definability of Transitive Closure in First-order Logic and Datalog (unpublished manuscript)
• Libkin, Leonid (2004), Elements of Finite Model Theory, Springer, ISBN 978-3-540-21202-7
• Lidl, R.; Pilz, G. (1998), Applied abstract algebra, Undergraduate Texts in Mathematics (2nd ed.), Springer, ISBN 0-387-98290-6
• Munro, Ian (Jan 1971). "Efficient determination of the transitive closure of a directed graph". Information Processing Letters. 1 (2): 56–58. doi:10.1016/0020-0190(71)90006-8.
• Nuutila, Esko (1995). Efficient transitive closure computation in large digraphs. Finnish Academy of Technology. ISBN 951-666-451-2. OCLC 912471702.
• Abraham Silberschatz; Henry Korth; S. Sudarshan (2010). Database System Concepts (6th ed.). McGraw-Hill. ISBN 978-0-07-352332-3. Appendix C (online only)
External links
• "Transitive closure and reduction", The Stony Brook Algorithm Repository, Steven Skiena.
Transitively normal subgroup
In mathematics, in the field of group theory, a subgroup of a group is said to be transitively normal in the group if every normal subgroup of the subgroup is also normal in the whole group. In symbols, $H$ is a transitively normal subgroup of $G$ if for every $K$ normal in $H$, we have that $K$ is normal in $G$.[1]
An alternate way to characterize these subgroups is: every normal subgroup preserving automorphism of the whole group must restrict to a normal subgroup preserving automorphism of the subgroup.
Here are some facts about transitively normal subgroups:
• Every normal subgroup of a transitively normal subgroup is normal.
• Every direct factor, or more generally, every central factor is transitively normal. Thus, every central subgroup is transitively normal.
• A transitively normal subgroup of a transitively normal subgroup is transitively normal.
• A transitively normal subgroup is normal.
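These facts can be probed by brute force in a small group. The classical witness that normality is not transitive — the Klein four-group V4 inside the alternating group A4 — equally shows that a normal subgroup need not be transitively normal: the order-2 subgroup K below is normal in V4, and V4 is normal in A4, yet K is not normal in A4. The encoding of permutations as tuples is purely illustrative:

```python
from itertools import permutations

def compose(p, q):                     # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def conj(g, h):                        # g h g^(-1)
    g_inv = tuple(sorted(range(len(g)), key=lambda i: g[i]))
    return compose(compose(g, h), g_inv)

def is_normal(K, G):                   # K normal in G iff g K g^(-1) = K for all g
    return all(conj(g, k) in K for g in G for k in K)

def sign(p):                           # parity via inversion count
    return (-1) ** sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))

A4 = {p for p in permutations(range(4)) if sign(p) == 1}
H = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}  # Klein four-group V4
K = {(0, 1, 2, 3), (1, 0, 3, 2)}                              # subgroup of order 2

# K is normal in H (H is abelian) and H is normal in A4,
# but K is not normal in A4 -- so H is not transitively normal in A4.
```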
References
1. "On the influence of transitively normal subgroups on the structure of some infinite groups". Project Euclid. Retrieved 30 June 2022.
See also
• Normal subgroup
Thompson transitivity theorem
In mathematical finite group theory, the Thompson transitivity theorem gives conditions under which the centralizer of an abelian subgroup A acts transitively on certain subgroups normalized by A. It originated in the proof of the odd order theorem by Feit and Thompson (1963), where it was used to prove the Thompson uniqueness theorem.
Statement
Suppose that G is a finite group and p a prime such that all p-local subgroups are p-constrained. If A is a self-centralizing normal abelian subgroup of a p-Sylow subgroup such that A has rank at least 3, then the centralizer CG(A) acts transitively on the maximal A-invariant q-subgroups of G for any prime q ≠ p.
References
• Bender, Helmut; Glauberman, George (1994), Local analysis for the odd order theorem, London Mathematical Society Lecture Note Series, vol. 188, Cambridge University Press, ISBN 978-0-521-45716-3, MR 1311244
• Feit, Walter; Thompson, John G. (1963), "Solvability of groups of odd order", Pacific Journal of Mathematics, 13: 775–1029, doi:10.2140/pjm.1963.13.775, ISSN 0030-8730, MR 0166261
• Gorenstein, D. (1980), Finite groups (2nd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0301-6, MR 0569209
Translation of axes
In mathematics, a translation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x'y'-Cartesian coordinate system in which the x' axis is parallel to the x axis and k units away, and the y' axis is parallel to the y axis and h units away. This means that the origin O' of the new coordinate system has coordinates (h, k) in the original system. The positive x' and y' directions are taken to be the same as the positive x and y directions. A point P has coordinates (x, y) with respect to the original system and coordinates (x', y') with respect to the new system, where
$x=x'+h$ and $y=y'+k$
(1)
or equivalently
$x'=x-h$ and $y'=y-k.$[1][2]
(2)
In the new coordinate system, the point P will appear to have been translated in the opposite direction. For example, if the xy-system is translated a distance h to the right and a distance k upward, then P will appear to have been translated a distance h to the left and a distance k downward in the x'y'-system. A translation of axes in more than two dimensions is defined similarly.[3] A translation of axes is a rigid transformation, but not a linear map. (See Affine transformation.)
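Equations (1) and (2) amount to a pair of mutually inverse coordinate maps; a minimal sketch (function names illustrative):

```python
def to_new(point, new_origin):
    """Equations (2): coordinates of a point in the x'y'-system whose
    origin O' has coordinates (h, k) in the original system."""
    (x, y), (h, k) = point, new_origin
    return (x - h, y - k)

def to_old(point, new_origin):
    """Equations (1): the inverse map back to the xy-system."""
    (xp, yp), (h, k) = point, new_origin
    return (xp + h, yp + k)

# With O' = (3, 5), the point (4, 7) has new coordinates (1, 2).
```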
Motivation
Coordinate systems are essential for studying the equations of curves using the methods of analytic geometry. To use the method of coordinate geometry, the axes are placed at a convenient position with respect to the curve under consideration. For example, to study the equations of ellipses and hyperbolas, the foci are usually located on one of the axes and are situated symmetrically with respect to the origin. If the curve (hyperbola, parabola, ellipse, etc.) is not situated conveniently with respect to the axes, the coordinate system should be changed to place the curve at a convenient and familiar location and orientation. The process of making this change is called a transformation of coordinates.[4]
The solutions to many problems can be simplified by translating the coordinate axes to obtain new axes parallel to the original ones.[5]
Translation of conic sections
Main article: Conic section
Through a change of coordinates, the equation of a conic section can be put into a standard form, which is usually easier to work with. For the most general equation of the second degree, which takes the form
$Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0$ ($A$, $B$ and $C$ not all zero);
(3)
it is always possible to perform a rotation of axes in such a way that in the new system the equation takes the form
$Ax^{2}+Cy^{2}+Dx+Ey+F=0$ ($A$ and $C$ not both zero);
(4)
that is, eliminating the xy term.[6] Next, a translation of axes can reduce an equation of the form (3) to an equation of the same form but with new variables (x', y') as coordinates, and with D and E both equal to zero (with certain exceptions—for example, parabolas). The principal tool in this process is "completing the square."[7] In the examples that follow, it is assumed that a rotation of axes has already been performed.
Example 1
Given the equation
$9x^{2}+25y^{2}+18x-100y-116=0,$
by using a translation of axes, determine whether the locus of the equation is a parabola, ellipse, or hyperbola. Determine foci (or focus), vertices (or vertex), and eccentricity.
Solution: To complete the square in x and y, write the equation in the form
$9(x^{2}+2x\qquad )+25(y^{2}-4y\qquad )=116.$
Complete the squares and obtain
$9(x^{2}+2x+1)+25(y^{2}-4y+4)=116+9+100$
$\Leftrightarrow 9(x+1)^{2}+25(y-2)^{2}=225.$
Define
$x'=x+1$ and $y'=y-2.$
That is, the translation in equations (2) is made with $h=-1,k=2.$ The equation in the new coordinate system is
$9x'^{2}+25y'^{2}=225.$
(5)
Divide equation (5) by 225 to obtain
${\frac {x'^{2}}{25}}+{\frac {y'^{2}}{9}}=1,$
which is recognizable as an ellipse with $a=5,b=3,c^{2}=a^{2}-b^{2}=16,c=4,e={\tfrac {4}{5}}.$ In the x'y'-system, we have: center $(0,0)$; vertices $(\pm 5,0)$; foci $(\pm 4,0).$
In the xy-system, use the relations $x=x'-1,y=y'+2$ to obtain: center $(-1,2)$; vertices $(4,2),(-6,2)$; foci $(3,2),(-5,2)$; eccentricity ${\tfrac {4}{5}}.$[8]
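The result of Example 1 can be checked numerically: substituting $x=x'-1$, $y=y'+2$ into the original equation must annihilate every point of the ellipse $x'^{2}/25+y'^{2}/9=1$. A short sketch:

```python
from math import cos, sin, pi, isclose

def original(x, y):                    # left side of the given equation
    return 9*x**2 + 25*y**2 + 18*x - 100*y - 116

# Parametrize the translated ellipse x'^2/25 + y'^2/9 = 1 and map back
# with x = x' - 1, y = y' + 2 (i.e. h = -1, k = 2 in equations (2)).
for t in (i * pi / 6 for i in range(12)):
    xp, yp = 5 * cos(t), 3 * sin(t)
    assert isclose(original(xp - 1, yp + 2), 0, abs_tol=1e-9)
```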
Generalization to several dimensions
For an xyz-Cartesian coordinate system in three dimensions, suppose that a second Cartesian coordinate system is introduced, with axes x', y' and z' so located that the x' axis is parallel to the x axis and h units from it, the y' axis is parallel to the y axis and k units from it, and the z' axis is parallel to the z axis and l units from it. A point P in space will have coordinates in both systems. If its coordinates are (x, y, z) in the original system and (x', y', z') in the second system, the equations
$x'=x-h,\qquad y'=y-k,\qquad z'=z-l$
(6)
hold.[9] Equations (6) define a translation of axes in three dimensions where (h, k, l) are the xyz-coordinates of the new origin.[10] A translation of axes in any finite number of dimensions is defined similarly.
Translation of quadric surfaces
Main article: Quadric surface
In three-space, the most general equation of the second degree in x, y and z has the form
$Ax^{2}+By^{2}+Cz^{2}+Dxy+Exz+Fyz+Gx+Hy+Iz+J=0,$
(7)
where the quantities $A,B,C,\ldots ,J$ are positive or negative numbers or zero. The points in space satisfying such an equation all lie on a surface. Any second-degree equation which does not reduce to a cylinder, plane, line, or point corresponds to a surface which is called a quadric.[11]
As in the case of plane analytic geometry, the method of translation of axes may be used to simplify second-degree equations, thereby making evident the nature of certain quadric surfaces. The principal tool in this process is "completing the square."[12]
Example 2
Use a translation of coordinates to identify the quadric surface
$x^{2}+4y^{2}+3z^{2}+2x-8y+9z=10.$
Solution: Write the equation in the form
$x^{2}+2x\qquad +4(y^{2}-2y\qquad )+3(z^{2}+3z\qquad )=10.$
Complete the square to obtain
$(x+1)^{2}+4(y-1)^{2}+3(z+{\tfrac {3}{2}})^{2}=10+1+4+{\tfrac {27}{4}}.$
Introduce the translation of coordinates
$x'=x+1,\qquad y'=y-1,\qquad z'=z+{\tfrac {3}{2}}.$
The equation of the surface takes the form
$x'^{2}+4y'^{2}+3z'^{2}={\tfrac {87}{4}},$
which is recognizable as the equation of an ellipsoid.[13]
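The completion of the square in Example 2 can be verified by checking that the original and translated expressions agree identically; a sketch at sample points:

```python
def quadric(x, y, z):                  # left side minus right side
    return x**2 + 4*y**2 + 3*z**2 + 2*x - 8*y + 9*z - 10

def translated(xp, yp, zp):            # ellipsoid form minus 87/4
    return xp**2 + 4*yp**2 + 3*zp**2 - 87/4

# Under x' = x + 1, y' = y - 1, z' = z + 3/2 the two expressions
# agree identically, confirming the surface is the stated ellipsoid.
for x, y, z in [(0.5, 2.0, -1.0), (1.0, 0.0, 0.0), (-2.0, 3.0, 1.5)]:
    assert abs(quadric(x, y, z) - translated(x + 1, y - 1, z + 1.5)) < 1e-9
```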
See also
• Translation (geometry)
Notes
1. Anton (1987, p. 107)
2. Protter & Morrey (1970, p. 315)
3. Protter & Morrey (1970, pp. 585–588)
4. Protter & Morrey (1970, pp. 314–315)
5. Anton (1987, p. 107)
6. Protter & Morrey (1970, p. 322)
7. Protter & Morrey (1970, p. 316)
8. Protter & Morrey (1970, pp. 316–317)
9. Protter & Morrey (1970, pp. 585–586)
10. Anton (1987, p. 107)
11. Protter & Morrey (1970, p. 579)
12. Protter & Morrey (1970, p. 586)
13. Protter & Morrey (1970, p. 586)
References
• Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
• Protter, Murray H.; Morrey, Charles B., Jr. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, LCCN 76087042{{citation}}: CS1 maint: multiple names: authors list (link)
Translation plane
In mathematics, a translation plane is a projective plane which admits a certain group of symmetries (described below). Along with the Hughes planes and the Figueroa planes, translation planes are among the most well-studied of the known non-Desarguesian planes, and the vast majority of known non-Desarguesian planes are either translation planes, or can be obtained from a translation plane via successive iterations of dualization and/or derivation.[1]
In a projective plane, let P represent a point, and l represent a line. A central collineation with center P and axis l is a collineation fixing every point on l and every line through P. It is called an elation if P is on l, otherwise it is called a homology. The central collineations with center P and axis l form a group.[2] A line l in a projective plane Π is a translation line if the group of all elations with axis l acts transitively on the points of the affine plane obtained by removing l from the plane Π, Πl (the affine derivative of Π). A projective plane with a translation line is called a translation plane.
The affine plane obtained by removing the translation line is called an affine translation plane. While it is often easier to work with projective planes, in this context several authors use the term translation plane to mean affine translation plane.[3][4]
Algebraic construction with coordinates
Every projective plane can be coordinatized by at least one planar ternary ring.[5] For translation planes, it is always possible to coordinatize with a quasifield.[6] However, some quasifields satisfy additional algebraic properties, and the corresponding planar ternary rings coordinatize translation planes which admit additional symmetries. Some of these special classes are:
• Nearfield planes - coordinatized by nearfields.
• Semifield planes - coordinatized by semifields, semifield planes have the property that their dual is also a translation plane.
• Moufang planes - coordinatized by alternative division rings, Moufang planes are exactly those translation planes that have at least two translation lines. Every finite Moufang plane is Desarguesian and every Desarguesian plane is a Moufang plane, but there are infinite Moufang planes that are not Desarguesian (such as the Cayley plane).
Given a quasifield with operations + (addition) and $\cdot $ (multiplication), one can define a planar ternary ring to create coordinates for a translation plane. However, it is more typical to create an affine plane directly from the quasifield by defining the points as pairs $(a,b)$ where $a$ and $b$ are elements of the quasifield, and the lines are the sets of points $(x,y)$ satisfying an equation of the form $y=m\cdot x+b$ , as $m$ and $b$ vary over the elements of the quasifield, together with the sets of points $(x,y)$ satisfying an equation of the form $x=a$ , as $a$ varies over the elements of the quasifield.[7]
Geometric construction with spreads (Bruck/Bose)
Translation planes are related to spreads of odd-dimensional projective spaces by the Bruck-Bose construction.[8] A spread of PG(2n+1, K), where $n\geq 1$ is an integer and K a division ring, is a partition of the space into pairwise disjoint n-dimensional subspaces. In the finite case, a spread of PG(2n+1, q) is a set of qn+1 + 1 n-dimensional subspaces, with no two intersecting.
Given a spread S of PG(2n +1, K), the Bruck-Bose construction produces a translation plane as follows: Embed PG(2n+1, K) as a hyperplane $\Sigma $ of PG(2n+2, K). Define an incidence structure A(S) with "points," the points of PG(2n+2, K) not on $\Sigma $ and "lines" the (n+1)-dimensional subspaces of PG(2n+2, K) meeting $\Sigma $ in an element of S. Then A(S) is an affine translation plane. In the finite case, this procedure produces a translation plane of order qn+1.
The converse of this statement is almost always true.[9] Any translation plane which is coordinatized by a quasifield that is finite-dimensional over its kernel K (K is necessarily a division ring) can be generated from a spread of PG(2n+1, K) using the Bruck-Bose construction, where (n+1) is the dimension of the quasifield, considered as a module over its kernel. An immediate corollary of this result is that every finite translation plane can be obtained from this construction.
Algebraic construction with spreads (André)
André[10] gave an earlier algebraic representation of (affine) translation planes that is fundamentally the same as Bruck/Bose. Let V be a 2n-dimensional vector space over a field F. A spread of V is a set S of n-dimensional subspaces of V that partition the non-zero vectors of V. The members of S are called the components of the spread and if Vi and Vj are distinct components then Vi ⊕ Vj = V. Let A be the incidence structure whose points are the vectors of V and whose lines are the cosets of components, that is, sets of the form v + U where v is a vector of V and U is a component of the spread S. Then:[11]
A is an affine plane and the group of translations x → x + w for w in V is an automorphism group acting regularly on the points of this plane.
The finite case
Let F = GF(q) = Fq, the finite field of order q and V the 2n-dimensional vector space over F represented as:
$V=\{(x,y)\colon x,y\in F^{n}\}.$
Let M0, M1, ..., Mqn - 1 be n × n matrices over F with the property that Mi – Mj is nonsingular whenever i ≠ j. For i = 0, 1, ...,qn – 1 define,
$V_{i}=\{(x,xM_{i})\colon x\in F^{n}\},$
usually referred to as the subspaces "y = xMi". Also define:
$V_{q^{n}}=\{(0,y)\colon y\in F^{n}\},$
the subspace "x = 0".
The set {V0, V1, ..., Vqn} is a spread of V.
The set of matrices Mi used in this construction is called a spread set, and this set of matrices can be used directly in the projective space $PG(2n-1,q)$ to create a spread in the geometric sense.
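For n = 1 and F = GF(3) the spread-set matrices are just the scalars 0, 1, 2 (any two differ by a nonzero, hence invertible, scalar), and the construction recovers the lines through the origin of the Desarguesian affine plane AG(2, 3). A minimal sketch; all names are illustrative:

```python
q = 3                                        # F = GF(3), n = 1, V = F^2
F = range(q)

# Spread set {M_0, M_1, M_2} of 1x1 matrices: M_i - M_j is nonzero mod 3,
# hence nonsingular, whenever i != j.
components = [{(x, (x * m) % q) for x in F} for m in F]   # "y = x M_i"
components.append({(0, y) for y in F})                    # "x = 0"

# The q + 1 components partition the nonzero vectors of V.
nonzero = {(x, y) for x in F for y in F if (x, y) != (0, 0)}
covered = set().union(*components) - {(0, 0)}
```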
Reguli and regular spreads
Main article: Spreads
Let $\Sigma $ be the projective space PG(2n+1, K) for $n\geq 1$ an integer, and K a division ring. A regulus[12] R in $\Sigma $ is a collection of pairwise disjoint n-dimensional subspaces with the following properties:
1. R contains at least 3 elements
2. Every line meeting three elements of R, called a transversal, meets every element of R
3. Every point of a transversal to R lies on some element of R
Any three pairwise disjoint n-dimensional subspaces in $\Sigma $ lie in a unique regulus.[13] A spread S of $\Sigma $ is regular if for any three distinct n-dimensional subspaces of S, all the members of the unique regulus determined by them are contained in S. For any division ring K with more than 2 elements, if a spread S of PG(2n+1, K) is regular, then the translation plane created by that spread via the André/Bruck-Bose construction is a Moufang plane. A slightly weaker converse holds: if a translation plane is Pappian, then it can be generated via the André/Bruck-Bose construction from a regular spread.[14]
In the finite case, K must be a field of order $q>2$, and the classes of Moufang, Desarguesian and Pappian planes are all identical, so this theorem can be refined to state that a spread S of PG(2n+1, q) is regular if and only if the translation plane created by that spread via the André/Bruck-Bose construction is Desarguesian.
In the case where K is the field $GF(2)$, all spreads of PG(2n+1, 2) are trivially regular, since a regulus only contains three elements. While the only translation plane of order 8 is Desarguesian, there are known to be non-Desarguesian translation planes of order 2e for every integer $e\geq 4$.[15]
Families of non-Desarguesian translation planes
• Hall planes - constructed via Bruck/Bose from a regular spread of $PG(3,q)$ where one regulus has been replaced by the set of transversal lines to that regulus (called the opposite regulus).
• Subregular planes - constructed via Bruck/Bose from a regular spread of $PG(3,q)$ where a set of pairwise disjoint reguli have been replaced by their opposite reguli.
• André planes
• Nearfield planes
• Semifield planes
Finite translation planes of small order
It is well known that the only projective planes of order 8 or less are Desarguesian, and there are no known non-Desarguesian planes of prime order.[16] Finite translation planes must have prime power order. There are four projective planes of order 9, of which two are translation planes: the Desarguesian plane, and the Hall plane. The following table details the current state of knowledge:
Order | Number of non-Desarguesian translation planes
9 | 1
16 | 7[17][18]
25 | 20[19][20][21]
27 | 6[22][23]
32 | ≥8[24]
49 | 1346[25][26]
64 | ≥2833[27]
Notes
1. Eric Moorhouse has performed extensive computer searches to find projective planes. For order 25, Moorhouse has found 193 projective planes, 180 of which can be obtained from a translation plane by iterated derivation and/or dualization. For order 49, the known 1349 translation planes give rise to more than 309,000 planes obtainable from this procedure.
2. Geometry Translation Plane Retrieved on June 13, 2007
3. Hughes & Piper 1973, p. 100
4. Johnson, Jha & Biliotti 2007, p. 5
5. Hall 1943
6. There are many ways to coordinatize a translation plane which do not yield a quasifield, since the planar ternary ring depends on the quadrangle on which one chooses to base the coordinates. However, for translation planes there is always some coordinatization which yields a quasifield.
7. Dembowski 1968, p. 128. Note that quasifields are technically either left or right quasifields, depending on whether multiplication distributes from the left or from the right (semifields satisfy both distributive laws). The definition of a quasifield in Wikipedia is a left quasifield, while Dembowski uses right quasifields. Generally this distinction is elided, since using a chirally "wrong" quasifield simply produces the dual of the translation plane.
8. Bruck & Bose 1964
9. Bruck & Bose 1964, p. 97
10. André 1954
11. Moorhouse 2007, p. 13
12. This notion generalizes that of a classical regulus, which is one of the two families of ruling lines on a hyperboloid of one sheet in 3-dimensional space
13. Bruck & Bose 1966, p. 163
14. Bruck & Bose 1966, p. 164, Theorem 12.1
15. Knuth 1965, p. 541
16. "Projective Planes of Small Order". ericmoorhouse.org. Retrieved 2020-11-08.
17. "Projective Planes of Order 16". ericmoorhouse.org. Retrieved 2020-11-08.
18. Reifart 1984
19. "Projective Planes of Order 25". ericmoorhouse.org. Retrieved 2020-11-08.
20. Dover 2019
21. Czerwinski & Oakden 1992
22. "Projective Planes of Order 27". ericmoorhouse.org. Retrieved 2020-11-08.
23. Dempwolff 1994
24. "Projective Planes of Order 32". ericmoorhouse.org. Retrieved 2020-11-08.
25. Mathon & Royle 1995
26. "Projective Planes of Order 49". ericmoorhouse.org. Retrieved 2020-11-08.
27. McKay & Royle 2014. This is a complete count of the 2-dimensional non-Desarguesian translation planes; many higher-dimensional planes are known to exist.
References
• André, Johannes (1954), "Über nicht-Desarguessche Ebenen mit transitiver Translationsgruppe", Mathematische Zeitschrift, 60: 156–186, doi:10.1007/BF01187370, ISSN 0025-5874, MR 0063056, S2CID 123661471
• Ball, Simeon; John Bamberg; Michel Lavrauw; Tim Penttila (2003-09-15), Symplectic Spreads (PDF), Polytechnic University of Catalonia, retrieved 2008-10-08
• Bruck, R.H. (1969), R.C.Bose and T.A. Dowling (ed.), "Construction Problems of finite projective planes", Combinatorial Mathematics and Its Applications, Univ. of North Carolina Press, pp. 426–514
• Bruck, R. H.; Bose, R. C. (1966), "Linear Representations of Projective Planes in Projective Spaces" (PDF), Journal of Algebra, 4: 117–172, doi:10.1016/0021-8693(66)90054-8
• Bruck, R. H.; Bose, R. C. (1964), "The Construction of Translation Planes from Projective Spaces" (PDF), Journal of Algebra, 1: 85–102, doi:10.1016/0021-8693(64)90010-9
• Czerwinski, Terry; Oakden, David (1992). "The translation planes of order twenty-five". Journal of Combinatorial Theory, Series A. 59 (2): 193–217. doi:10.1016/0097-3165(92)90065-3.
• Dembowski, Peter (1968), Finite geometries, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 44, Berlin, New York: Springer-Verlag, ISBN 3-540-61786-8, MR 0233275
• Dempwolff, U. (1994). "Translation planes of order 27". Designs, Codes and Cryptography. 4 (2): 105–121. doi:10.1007/BF01578865. ISSN 0925-1022. S2CID 12524473.
• Dover, Jeremy M. (2019-02-27). "A genealogy of the translation planes of order 25". arXiv:1902.07838 [math.CO].
• Hall, Marshall (1943), "Projective planes" (PDF), Trans. Amer. Math. Soc., 54 (2): 229–277, doi:10.2307/1990331, JSTOR 1990331
• Hughes, Daniel R.; Piper, Fred C. (1973), Projective Planes, Springer-Verlag, ISBN 0-387-90044-6
• Johnson, Norman L.; Jha, Vikram; Biliotti, Mauro (2007), Handbook of Finite Translation Planes, Chapman&Hall/CRC, ISBN 978-1-58488-605-1
• Knuth, Donald E. (1965), "A Class of Projective Planes" (PDF), Transactions of the American Mathematical Society, 115: 541–549, doi:10.2307/1994285, JSTOR 1994285
• Lüneburg, Heinz (1980), Translation Planes, Berlin: Springer Verlag, ISBN 0-387-09614-0
• Mathon, Rudolf; Royle, Gordon F. (1995). "The translation planes of order 49". Designs, Codes and Cryptography. 5 (1): 57–72. doi:10.1007/BF01388504. ISSN 0925-1022. S2CID 1925628.
• McKay, Brendan D.; Royle, Gordon F. (2014). "There are 2834 spreads of lines in PG(3,8)". arXiv:1404.1643 [math.CO].
• Moorhouse, Eric (2007), Incidence Geometry (PDF), archived from the original (PDF) on 2013-10-29
• Reifart, Arthur (1984). "The classification of the translation planes of order 16, II". Geometriae Dedicata. 17 (1). doi:10.1007/BF00181513. ISSN 0046-5755. S2CID 121935740.
• Sherk, F. A.; Pabst, Günther (1977), "Indicator sets, reguli, and a new class of spreads" (PDF), Canadian Journal of Mathematics, 29 (1): 132–54, doi:10.4153/CJM-1977-013-6, S2CID 124215765
Further reading
• Mauro Biliotti, Vikram Jha, Norman L. Johnson (2001) Foundations of Translation Planes, Marcel Dekker ISBN 0-8247-0609-9 .
External links
• Lecture Notes on Projective Geometry
• Publications of Keith Mellinger
Translation surface (differential geometry)
In differential geometry a translation surface is a surface that is generated by translations:
• For two space curves $c_{1},c_{2}$ with a common point $P$, the curve $c_{1}$ is shifted such that point $P$ is moving on $c_{2}$. By this procedure curve $c_{1}$ generates a surface: the translation surface.
If both curves are contained in a common plane, the translation surface is planar (part of a plane). This case is generally ignored.
Simple examples:
1. Right circular cylinder: $c_{1}$ is a circle (or another cross section) and $c_{2}$ is a line.
2. The elliptic paraboloid $\;z=x^{2}+y^{2}\;$ can be generated by $\ c_{1}:\;(x,0,x^{2})\ $ and $\ c_{2}:\;(0,y,y^{2})\ $ (both curves are parabolas).
3. The hyperbolic paraboloid $z=x^{2}-y^{2}$ can be generated by $c_{1}:(x,0,x^{2})$ (parabola) and $c_{2}:(0,y,-y^{2})$ (downwards open parabola).
Translation surfaces are popular in descriptive geometry[1][2] and architecture,[3] because they can be modelled easily.
In differential geometry minimal surfaces are represented by translation surfaces or as midchord surfaces (s. below).[4]
The translation surfaces as defined here should not be confused with the translation surfaces in complex geometry.
Parametric representation
For two space curves $\ c_{1}:\;{\vec {x}}=\gamma _{1}(u)\ $ and $\ c_{2}:\;{\vec {x}}=\gamma _{2}(v)\ $ with $\gamma _{1}(0)=\gamma _{2}(0)={\vec {0}}$ the translation surface $\Phi $ can be represented by:[5]
(TS) $\quad {\vec {x}}=\gamma _{1}(u)+\gamma _{2}(v)\;$
and contains the origin. Obviously this definition is symmetric with respect to the curves $c_{1}$ and $c_{2}$. Therefore, both curves are called generatrices (singular: generatrix). Any point $X$ of the surface is contained in a shifted copy of $c_{1}$ and of $c_{2}$. The tangent plane at $X$ is spanned by the tangent vectors of the generatrices at this point, provided these vectors are linearly independent.
If the precondition $\gamma _{1}(0)=\gamma _{2}(0)={\vec {0}}$ is not fulfilled, the surface defined by (TS) may not contain the origin and the curves $c_{1},c_{2}$. But in any case the surface contains shifted copies of any of the curves $c_{1},c_{2}$ as parametric curves ${\vec {x}}(u_{0},v)$ and ${\vec {x}}(u,v_{0})$ respectively.
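The parametric representation (TS) is straightforward to realize computationally. The sketch below reproduces simple example 2 above, generating the elliptic paraboloid $z=x^{2}+y^{2}$ from two parabolas:

```python
def gamma1(u):                 # generatrix c1: parabola in the xz-plane
    return (u, 0.0, u * u)

def gamma2(v):                 # generatrix c2: parabola in the yz-plane
    return (0.0, v, v * v)

def surface(u, v):             # translation surface, representation (TS)
    return tuple(a + b for a, b in zip(gamma1(u), gamma2(v)))

# Every point of the resulting surface satisfies z = x^2 + y^2.
x, y, z = surface(1.5, -2.0)
assert abs(z - (x * x + y * y)) < 1e-12
```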
The two curves $c_{1},c_{2}$ can be used to generate the so-called corresponding midchord surface. Its parametric representation is
(MCS) $\quad {\vec {x}}={\frac {1}{2}}(\gamma _{1}(u)+\gamma _{2}(v))\;.$
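The representations (TS) and (MCS) are straightforward to evaluate numerically. The sketch below (plain NumPy; the function names are our own) builds both surfaces from the paraboloid generatrices of example 2 above and checks that every point of the translation surface satisfies $z=x^{2}+y^{2}$:

```python
import numpy as np

# Generatrices of example 2 above: two parabolas generating z = x² + y².
gamma1 = lambda u: np.array([u, 0.0, u**2])     # c1: (x, 0, x²)
gamma2 = lambda v: np.array([0.0, v, v**2])     # c2: (0, y, y²)

def translation_surface(u, v):                  # (TS)
    return gamma1(u) + gamma2(v)

def midchord_surface(u, v):                     # (MCS)
    return 0.5 * (gamma1(u) + gamma2(v))

# Every point of this translation surface lies on the paraboloid:
p = translation_surface(1.5, -0.7)
assert np.isclose(p[2], p[0]**2 + p[1]**2)
```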
Helicoid as translation surface and midchord surface
A helicoid is a special case of a generalized helicoid and a ruled surface. It is an example of a minimal surface and can be represented as a translation surface.
The helicoid with the parametric representation
${\vec {x}}(u,v)=(u\cos v,u\sin v,kv)$
has the pitch (German: Ganghöhe) $2\pi k$, the vertical shift of one full turn. Introducing new parameters $\alpha ,\varphi $[6] such that
$u=2a\cos \left({\frac {\alpha -\varphi }{2}}\right)\ ,\ \ v={\frac {\alpha +\varphi }{2}}$
with $a$ a positive real number, one gets a new parametric representation
• ${\vec {X}}(\alpha ,\varphi )=\left(a\cos \alpha +a\cos \varphi \;,\;a\sin \alpha +a\sin \varphi \;,\;{\frac {k\alpha }{2}}+{\frac {k\varphi }{2}}\right)$
$=(a\cos \alpha ,a\sin \alpha ,{\frac {k\alpha }{2}})\ +\ (a\cos \varphi ,a\sin \varphi ,{\frac {k\varphi }{2}})\ ,$
which is the parametric representation of a translation surface with the two identical (!) generatrices
$c_{1}:\;\gamma _{1}={\vec {X}}(\alpha ,0)=\left(a+a\cos \alpha ,a\sin \alpha ,{\frac {k\alpha }{2}}\right)\quad $ and
$c_{2}:\;\gamma _{2}={\vec {X}}(0,\varphi )=\left(a+a\cos \varphi ,a\sin \varphi ,{\frac {k\varphi }{2}}\right)\ .$
The common point used for the diagram is $P={\vec {X}}(0,0)=(2a,0,0)$. The (identical) generatrices are helices with the pitch $k\pi \;,$ which lie on the cylinder with the equation $(x-a)^{2}+y^{2}=a^{2}$. Any parametric curve is a shifted copy of the generatrix $c_{1}$ (in the diagram: purple) and is contained in a right circular cylinder with radius $a$ that contains the z-axis.
The new parametric representation represents only such points of the helicoid that are within the cylinder with the equation $x^{2}+y^{2}=4a^{2}$.
From the new parametric representation one recognizes that the helicoid is a midchord surface, too:
${\begin{aligned}{\vec {X}}(\alpha ,\varphi )&=\left(a\cos \alpha ,a\sin \alpha ,{\frac {k\alpha }{2}}\right)\ +\ \left(a\cos \varphi ,a\sin \varphi ,{\frac {k\varphi }{2}}\right)\\[5pt]&={\frac {1}{2}}(\delta _{1}(\alpha )+\delta _{2}(\varphi ))\ ,\quad \end{aligned}}$
where
$d_{1}:\ {\vec {x}}=\delta _{1}(\alpha )=(2a\cos \alpha ,2a\sin \alpha ,k\alpha )\ ,\quad $ and
$d_{2}:\ {\vec {x}}=\delta _{2}(\varphi )=(2a\cos \varphi ,2a\sin \varphi ,k\varphi )\ ,\quad $
are two identical generatrices.
In the diagram: $P_{1}:\delta _{1}(\alpha _{0})$ lies on the helix $d_{1}$, and $P_{2}:\delta _{2}(\varphi _{0})$ on the (identical) helix $d_{2}$. The midpoint of the chord is $\ M:{\frac {1}{2}}(\delta _{1}(\alpha _{0})+\delta _{2}(\varphi _{0}))={\vec {X}}(\alpha _{0},\varphi _{0})\ $.
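The change of parameters can be verified numerically. The sketch below (NumPy; the function names and the values of $a,k$ are our own choices) checks that ${\vec {X}}(\alpha ,\varphi )$ is indeed the helicoid point with $u=2a\cos((\alpha -\varphi )/2)$ and $v=(\alpha +\varphi )/2$:

```python
import numpy as np

a, k = 1.0, 0.5                                  # radius and pitch parameter (arbitrary)

def helicoid(u, v):                              # standard parametrization
    return np.array([u * np.cos(v), u * np.sin(v), k * v])

def X(alpha, phi):                               # sum of the two helical generatrices
    g1 = np.array([a * np.cos(alpha), a * np.sin(alpha), k * alpha / 2])
    g2 = np.array([a * np.cos(phi),   a * np.sin(phi),   k * phi / 2])
    return g1 + g2

# X(α, φ) must be the helicoid point with u = 2a cos((α−φ)/2), v = (α+φ)/2:
alpha, phi = 0.8, 0.3
u = 2 * a * np.cos((alpha - phi) / 2)
v = (alpha + phi) / 2
assert np.allclose(X(alpha, phi), helicoid(u, v))
```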
Advantages of a translation surface
Architecture
A surface (for example a roof) can be manufactured using a jig for curve $c_{2}$ and several identical jigs of curve $c_{1}$. The jigs can be designed without any knowledge of mathematics; only the rules of a translation surface have to be respected when positioning them.
Descriptive geometry
To establish a parallel projection of a translation surface one 1) produces the projections of the two generatrices, 2) makes a jig of curve $c_{1}$, and 3) draws, with the help of this jig, copies of the curve respecting the rules of a translation surface. The contour of the surface is the envelope of the curves drawn with the jig. This procedure works for orthogonal and oblique projections, but not for central projections.
Differential geometry
For a translation surface with parametric representation ${\vec {x}}(u,v)=\gamma _{1}(u)+\gamma _{2}(v)\;$ the partial derivatives of ${\vec {x}}(u,v)$ are simple derivatives of the curves. Hence the mixed derivatives are always $0$, and the coefficient $M$ of the second fundamental form is $0$, too. This is an essential simplification when showing that (for example) a helicoid is a minimal surface.
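The vanishing of the mixed derivative $\vec{x}_{uv}$ can be illustrated with a central finite difference (a sketch; step size and curves are chosen for illustration, using the paraboloid generatrices from above):

```python
import numpy as np

# For x(u,v) = γ1(u) + γ2(v) the mixed partial x_uv is identically zero;
# a central finite difference confirms this for the paraboloid generatrices.
gamma1 = lambda u: np.array([u, 0.0, u**2])
gamma2 = lambda v: np.array([0.0, v, v**2])
x = lambda u, v: gamma1(u) + gamma2(v)

h, u0, v0 = 1e-4, 0.4, -0.9                     # step size and sample point
x_uv = (x(u0 + h, v0 + h) - x(u0 + h, v0 - h)
        - x(u0 - h, v0 + h) + x(u0 - h, v0 - h)) / (4 * h * h)
assert np.allclose(x_uv, 0.0, atol=1e-6)
```

Since the mixed finite difference of a sum $\gamma_1(u)+\gamma_2(v)$ cancels term by term, the result is zero up to floating-point noise.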
References
1. H. Brauner: Lehrbuch der Konstruktiven Geometrie, Springer-Verlag, 2013, ISBN 3709187788, 9783709187784, p. 236
2. Fritz Hohenberg: Konstruktive Geometrie in der Technik, Springer-Verlag, 2013, ISBN 3709181488, 9783709181485, p. 208
3. Hans Schober: Transparente Schalen: Form, Topologie, Tragwerk, John Wiley & Sons, 2015, ISBN 343360598X, 9783433605981, p. 74
4. Wilhelm Blaschke, Kurt Reidemeister: Vorlesungen über Differentialgeometrie und geometrische Grundlagen von Einsteins Relativitätstheorie II: Affine Differentialgeometrie, Springer-Verlag, 2013, ISBN 364247392X, 9783642473920, p. 94
5. Erwin Kruppa: Analytische und konstruktive Differentialgeometrie, Springer-Verlag, 2013, ISBN 3709178673, 9783709178676, p. 45
6. J.C.C. Nitsche: Vorlesungen über Minimalflächen, Springer-Verlag, 2013, ISBN 3642656196, 9783642656194, p. 59
• G. Darboux: Leçons sur la théorie générale des surfaces et ses applications géométriques du calcul infinitésimal, 1–4, Chelsea, reprint, 1972, Sects. 81–84, 218
• Georg Glaeser: Geometrie und ihre Anwendungen in Kunst, Natur und Technik, Springer-Verlag, 2014, ISBN 364241852X, p. 259
• W. Haack: Elementare Differentialgeometrie, Springer-Verlag, 2013, ISBN 3034869509, p. 140
• C. Leopold: Geometrische Grundlagen der Architekturdarstellung. Kohlhammer Verlag, Stuttgart 2005, ISBN 3-17-018489-X, p. 122
• D.J. Struik: Lectures on classical differential geometry, Dover, reprint, 1988, pp. 103, 109, 184
External links
• Encyclopedia of Mathematics
Kinetic energy
In physics, the kinetic energy of an object is the form of energy that it possesses due to its motion.[1] It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes. The same amount of work is done by the body when decelerating from its current speed to a state of rest. Formally, a kinetic energy is any term in a system's Lagrangian which includes a derivative with respect to time; it also appears as the second term in a Taylor expansion of a particle's relativistic energy.[2][3]
Kinetic energy
The cars of a roller coaster reach their maximum kinetic energy when at the bottom of the path. When they start rising, the kinetic energy begins to be converted to gravitational potential energy. The sum of kinetic and potential energy in the system remains constant, ignoring losses to friction.
Common symbols: KE, Ek, K or T
SI unit: joule (J)
Derivations from other quantities:
Ek = ½mv²
Ek = Et + Er
In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is $ {\frac {1}{2}}mv^{2}$. In relativistic mechanics, this is a good approximation only when v is much less than the speed of light.
The standard unit of kinetic energy is the joule, while the English unit of kinetic energy is the foot-pound.
History and etymology
The adjective kinetic has its roots in the Greek word κίνησις kinesis, meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality.[4]
The principle in classical mechanics that E ∝ mv2 was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force, vis viva. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship in 1722. By dropping weights from different heights into a block of clay, Willem 's Gravesande determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation.[5]
The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–1851.[6][7] Rankine, who had introduced the term "potential energy" in 1853, and the phrase "actual energy" to complement it,[8] later cites William Thomson and Peter Tait as substituting the word "kinetic" for "actual".[9]
Overview
Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object. Kinetic energy can be transferred between objects and transformed into other kinds of energy.[10]
Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and produces heat within the cyclist.
The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling. The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent. The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat.
Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant.
Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. Disregarding loss or gain however, the sum of the kinetic and potential energy remains constant.
Kinetic energy can be passed from one object to another. In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down dramatically, and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound and binding energy (breaking bound structures).
Flywheels have been developed as a method of energy storage. This illustrates that kinetic energy is also stored in rotational motion.
Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula ½mv² given by Newtonian (classical) mechanics is suitable. However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula is used. If the object is on the atomic or sub-atomic scale, quantum mechanical effects are significant, and a quantum mechanical model must be employed.
Newtonian kinetic energy
Kinetic energy of rigid bodies
In classical mechanics, the kinetic energy of a point object (an object so small that its mass can be assumed to exist at one point), or a non-rotating rigid body depends on the mass of the body as well as its speed. The kinetic energy is equal to 1/2 the product of the mass and the square of the speed. In formula form:
$E_{\text{k}}={\frac {1}{2}}mv^{2}$
where $m$ is the mass and $v$ is the speed (magnitude of the velocity) of the body. In SI units, mass is measured in kilograms, speed in metres per second, and the resulting kinetic energy is in joules.
For example, one would calculate the kinetic energy of an 80 kg mass (about 180 lbs) traveling at 18 metres per second (about 40 mph, or 65 km/h) as
$E_{\text{k}}={\frac {1}{2}}\cdot 80\,{\text{kg}}\cdot \left(18\,{\text{m/s}}\right)^{2}=12,960\,{\text{J}}=12.96\,{\text{kJ}}$
When a person throws a ball, the person does work on it to give it speed as it leaves the hand. The moving ball can then hit something and push it, doing work on what it hits. The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy, i.e.,
$Fs={\frac {1}{2}}mv^{2}$
Since the kinetic energy increases with the square of the speed, an object doubling its speed has four times as much kinetic energy. For example, a car traveling twice as fast as another requires four times as much distance to stop, assuming a constant braking force. As a consequence of this quadrupling, it takes four times the work to double the speed.
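The worked example and the quadrupling rule above can be checked in a few lines of code (a sketch; the function name is our own):

```python
def kinetic_energy(m, v):
    """Newtonian kinetic energy ½ m v² in joules (SI units)."""
    return 0.5 * m * v**2

# The worked example above: 80 kg at 18 m/s gives 12.96 kJ.
assert kinetic_energy(80, 18) == 12960.0

# Doubling the speed quadruples the kinetic energy:
assert kinetic_energy(80, 36) == 4 * kinetic_energy(80, 18)
```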
The kinetic energy of an object is related to its momentum by the equation:
$E_{\text{k}}={\frac {p^{2}}{2m}}$
where:
• $p$ is momentum
• $m$ is mass of the body
The translational kinetic energy, that is, the kinetic energy associated with rectilinear motion, of a rigid body with constant mass $m$ whose center of mass is moving in a straight line with speed $v$ is, as seen above, equal to
$E_{\text{t}}={\frac {1}{2}}mv^{2}$
where:
• $m$ is the mass of the body
• $v$ is the speed of the center of mass of the body.
The kinetic energy of any entity depends on the reference frame in which it is measured. However, the total energy of an isolated system, i.e. one in which energy can neither enter nor leave, does not change over time in the reference frame in which it is measured. Thus, the chemical energy converted to kinetic energy by a rocket engine is divided differently between the rocket ship and its exhaust stream depending upon the chosen reference frame. This is called the Oberth effect. But the total energy of the system, including kinetic energy, fuel chemical energy, heat, etc., is conserved over time, regardless of the choice of reference frame. Different observers moving with different reference frames would however disagree on the value of this conserved energy.
The kinetic energy of such systems depends on the choice of reference frame: the reference frame that gives the minimum value of that energy is the center of momentum frame, i.e. the reference frame in which the total momentum of the system is zero. This minimum kinetic energy contributes to the invariant mass of the system as a whole.
Without vectors and calculus
The work W done by a force F on an object over a distance s parallel to F equals
$W=F\cdot s$.
Using Newton's Second Law
$F=ma$
with m the mass and a the acceleration of the object and
$s={\frac {at^{2}}{2}}$
the distance traveled by the accelerated object in time t, we find with $v=at$ for the velocity v of the object
$W=ma{\frac {at^{2}}{2}}={\frac {m(at)^{2}}{2}}={\frac {mv^{2}}{2}}.$
With vectors and calculus
The work done in accelerating a particle with mass m during the infinitesimal time interval dt is given by the dot product of force F and the infinitesimal displacement dx
$\mathbf {F} \cdot d\mathbf {x} =\mathbf {F} \cdot \mathbf {v} dt={\frac {d\mathbf {p} }{dt}}\cdot \mathbf {v} dt=\mathbf {v} \cdot d\mathbf {p} =\mathbf {v} \cdot d(m\mathbf {v} )\,,$
where we have assumed the relationship p = m v and the validity of Newton's Second Law. (However, also see the special relativistic derivation below.)
Applying the product rule we see that:
$d(\mathbf {v} \cdot \mathbf {v} )=(d\mathbf {v} )\cdot \mathbf {v} +\mathbf {v} \cdot (d\mathbf {v} )=2(\mathbf {v} \cdot d\mathbf {v} ).$
Therefore, (assuming constant mass so that dm = 0), we have,
$\mathbf {v} \cdot d(m\mathbf {v} )={\frac {m}{2}}d(\mathbf {v} \cdot \mathbf {v} )={\frac {m}{2}}dv^{2}=d\left({\frac {mv^{2}}{2}}\right).$
Since this is a total differential (that is, it only depends on the final state, not how the particle got there), we can integrate it and call the result kinetic energy. Assuming the object was at rest at time 0, we integrate from time 0 to time t because the work done by the force to bring the object from rest to velocity v is equal to the work necessary to do the reverse:
$E_{\text{k}}=\int _{0}^{t}\mathbf {F} \cdot d\mathbf {x} =\int _{0}^{t}\mathbf {v} \cdot d(m\mathbf {v} )=\int _{0}^{t}d\left({\frac {mv^{2}}{2}}\right)={\frac {mv^{2}}{2}}.$
This equation states that the kinetic energy (Ek) is equal to the integral of the dot product of the velocity (v) of a body and the infinitesimal change of the body's momentum (p). It is assumed that the body starts with no kinetic energy when it is at rest (motionless).
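The work–energy statement can be reproduced numerically by integrating the instantaneous power $\mathbf{F}\cdot\mathbf{v}$ over time for a constant force (a sketch; the values are arbitrary):

```python
import numpy as np

# Work done by a constant force on a body accelerating from rest:
# W = ∫ F v dt must equal m v(T)² / 2.
m, F, T = 2.0, 3.0, 5.0            # mass, force, duration (arbitrary)
a = F / m                          # constant acceleration
t = np.linspace(0.0, T, 100_001)
v = a * t                          # velocity at each instant
power = F * v                      # instantaneous power F·v
W = np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t))   # trapezoid rule
assert np.isclose(W, 0.5 * m * (a * T)**2)
```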
Rotating bodies
If a rigid body Q is rotating about any line through the center of mass then it has rotational kinetic energy ($E_{\text{r}}\,$) which is simply the sum of the kinetic energies of its moving parts, and is thus given by:
$E_{\text{r}}=\int _{Q}{\frac {v^{2}dm}{2}}=\int _{Q}{\frac {(r\omega )^{2}dm}{2}}={\frac {\omega ^{2}}{2}}\int _{Q}{r^{2}}dm={\frac {\omega ^{2}}{2}}I={\frac {1}{2}}I\omega ^{2}$
where:
• ω is the body's angular velocity
• r is the distance of any mass dm from that line
• $I$ is the body's moment of inertia, equal to $ \int _{Q}{r^{2}}dm$.
(In this equation the moment of inertia must be taken about an axis through the center of mass and the rotation measured by ω must be around that axis; more general equations exist for systems where the object is subject to wobble due to its eccentric shape).
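The sum-over-mass-elements definition of $E_{\text{r}}$ can be checked directly for a thin ring, whose moment of inertia about its axis is $I=MR^{2}$ (a sketch; the values are arbitrary):

```python
import numpy as np

# E_r as the sum of ½ v² dm over mass elements of a thin ring of mass M
# and radius R rotating about its axis (I = M R² for a ring).
M, R, omega = 2.0, 0.5, 10.0       # mass, radius, angular velocity (arbitrary)
n = 1000
dm = np.full(n, M / n)             # n equal mass elements on the ring
v = R * omega                      # every element moves with speed rω
E_r = np.sum(0.5 * v**2 * dm)
I = M * R**2
assert np.isclose(E_r, 0.5 * I * omega**2)
```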
Kinetic energy of systems
A system of bodies may have internal kinetic energy due to the relative motion of the bodies in the system. For example, in the Solar System the planets and planetoids are orbiting the Sun. In a tank of gas, the molecules are moving in all directions. The kinetic energy of the system is the sum of the kinetic energies of the bodies it contains.
A macroscopic body that is stationary (i.e. a reference frame has been chosen to correspond to the body's center of momentum) may have various kinds of internal energy at the molecular or atomic level, which may be regarded as kinetic energy, due to molecular translation, rotation, and vibration, electron translation and spin, and nuclear spin. These all contribute to the body's mass, as provided by the special theory of relativity. When discussing movements of a macroscopic body, the kinetic energy referred to is usually that of the macroscopic movement only. However, all internal energies of all types contribute to a body's mass, inertia, and total energy.
Fluid dynamics
In fluid dynamics, the kinetic energy per unit volume at each point in an incompressible fluid flow field is called the dynamic pressure at that point.[11]
$E_{\text{k}}={\frac {1}{2}}mv^{2}$
Dividing by V, the unit of volume:
${\begin{aligned}{\frac {E_{\text{k}}}{V}}&={\frac {1}{2}}{\frac {m}{V}}v^{2}\\q&={\frac {1}{2}}\rho v^{2}\end{aligned}}$
where $q$ is the dynamic pressure, and ρ is the density of the incompressible fluid.
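The relation is simple enough to wrap in a one-line helper (a sketch; the function name is our own):

```python
def dynamic_pressure(rho, v):
    """q = ½ ρ v², the kinetic energy per unit volume, in pascals."""
    return 0.5 * rho * v**2

# Water (ρ = 1000 kg/m³) flowing at 2 m/s:
assert dynamic_pressure(1000.0, 2.0) == 2000.0   # Pa
```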
Frame of reference
The speed, and thus the kinetic energy of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy.[12] By contrast, the total kinetic energy of a system of objects cannot be reduced to zero by a suitable choice of the inertial reference frame, unless all the objects have the same velocity. In any other case, the total kinetic energy has a non-zero minimum, as no inertial reference frame can be chosen in which all the objects are stationary. This minimum kinetic energy contributes to the system's invariant mass, which is independent of the reference frame.
The total kinetic energy of a system depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center of momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass.
This may be simply shown: let $\textstyle \mathbf {V} $ be the relative velocity of the center of mass frame i in the frame k. Since
$v^{2}=\left(v_{i}+V\right)^{2}=\left(\mathbf {v} _{i}+\mathbf {V} \right)\cdot \left(\mathbf {v} _{i}+\mathbf {V} \right)=\mathbf {v} _{i}\cdot \mathbf {v} _{i}+2\mathbf {v} _{i}\cdot \mathbf {V} +\mathbf {V} \cdot \mathbf {V} =v_{i}^{2}+2\mathbf {v} _{i}\cdot \mathbf {V} +V^{2},$
Then,
$E_{\text{k}}=\int {\frac {v^{2}}{2}}dm=\int {\frac {v_{i}^{2}}{2}}dm+\mathbf {V} \cdot \int \mathbf {v} _{i}dm+{\frac {V^{2}}{2}}\int dm.$
Let $ \int {\frac {v_{i}^{2}}{2}}dm=E_{i}$ be the kinetic energy in the center of mass frame; $ \int \mathbf {v} _{i}dm$ is simply the total momentum, which is by definition zero in the center of mass frame; and let the total mass be $ \int dm=M$. Substituting, we get:[13]
$E_{\text{k}}=E_{i}+{\frac {MV^{2}}{2}}.$
Thus the kinetic energy of a system is lowest in center of momentum reference frames, i.e., frames of reference in which the center of mass is stationary (either the center of mass frame or any other center of momentum frame). In any different frame of reference, there is additional kinetic energy corresponding to the total mass moving at the speed of the center of mass. The kinetic energy of the system in the center of momentum frame is a quantity that is invariant (all observers see it to be the same).
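The decomposition $E_{\text{k}}=E_{i}+MV^{2}/2$ can be verified for a small system of point masses (a NumPy sketch with arbitrary data):

```python
import numpy as np

# Decomposition E_k = E_i + M V²/2 for a two-particle system.
m = np.array([1.0, 3.0])                        # masses (arbitrary)
v = np.array([[2.0, 0.0], [-1.0, 1.0]])         # lab-frame velocities

E_k = 0.5 * np.sum(m * np.sum(v**2, axis=1))    # total kinetic energy
M = m.sum()
V = (m[:, None] * v).sum(axis=0) / M            # center-of-mass velocity
v_i = v - V                                     # velocities in the c.o.m. frame
E_i = 0.5 * np.sum(m * np.sum(v_i**2, axis=1))  # internal kinetic energy

assert np.isclose(E_k, E_i + 0.5 * M * (V @ V))
```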
Rotation in systems
It sometimes is convenient to split the total kinetic energy of a body into the sum of the body's center-of-mass translational kinetic energy and the energy of rotation around the center of mass (rotational energy):
$E_{\text{k}}=E_{\text{t}}+E_{\text{r}}$
where:
• Ek is the total kinetic energy
• Et is the translational kinetic energy
• Er is the rotational energy or angular kinetic energy in the rest frame
Thus the kinetic energy of a tennis ball in flight is the kinetic energy due to its rotation, plus the kinetic energy due to its translation.
Relativistic kinetic energy
If a body's speed is a significant fraction of the speed of light, it is necessary to use relativistic mechanics to calculate its kinetic energy. In relativity, the total energy is given by the energy-momentum relation:
$E^{2}=(p{\textrm {c}})^{2}+\left(m_{0}{\textrm {c}}^{2}\right)^{2}\,$
Here we use the relativistic expression for linear momentum $p=m\gamma v$, where $ \gamma =1/{\sqrt {1-v^{2}/c^{2}}}$, $m$ is an object's (rest) mass, $v$ its speed, and c the speed of light in vacuum. Then kinetic energy is the total relativistic energy minus the rest energy:
$E_{K}=E-m_{0}c^{2}={\sqrt {(p{\textrm {c}})^{2}+\left(m_{0}{\textrm {c}}^{2}\right)^{2}}}-m_{0}c^{2}$
At low speeds, the square root can be expanded and the rest energy drops out, giving the Newtonian kinetic energy.
Derivation
Start with the expression for linear momentum $\mathbf {p} =m\gamma \mathbf {v} $, where $ \gamma =1/{\sqrt {1-v^{2}/c^{2}}}$. Integrating by parts yields
$E_{\text{k}}=\int \mathbf {v} \cdot d\mathbf {p} =\int \mathbf {v} \cdot d(m\gamma \mathbf {v} )=m\gamma \mathbf {v} \cdot \mathbf {v} -\int m\gamma \mathbf {v} \cdot d\mathbf {v} =m\gamma v^{2}-{\frac {m}{2}}\int \gamma d\left(v^{2}\right)$
Since $\gamma =\left(1-v^{2}/c^{2}\right)^{-{\frac {1}{2}}}$,
${\begin{aligned}E_{\text{k}}&=m\gamma v^{2}-{\frac {-mc^{2}}{2}}\int \gamma d\left(1-{\frac {v^{2}}{c^{2}}}\right)\\&=m\gamma v^{2}+mc^{2}\left(1-{\frac {v^{2}}{c^{2}}}\right)^{\frac {1}{2}}-E_{0}\end{aligned}}$
$E_{0}$ is a constant of integration for the indefinite integral.
Simplifying the expression we obtain
${\begin{aligned}E_{\text{k}}&=m\gamma \left(v^{2}+c^{2}\left(1-{\frac {v^{2}}{c^{2}}}\right)\right)-E_{0}\\&=m\gamma \left(v^{2}+c^{2}-v^{2}\right)-E_{0}\\&=m\gamma c^{2}-E_{0}\end{aligned}}$
$E_{0}$ is found by observing that when $\mathbf {v} =0,\ \gamma =1$ and $E_{\text{k}}=0$, giving
$E_{0}=mc^{2}$
resulting in the formula
$E_{\text{k}}=m\gamma c^{2}-mc^{2}={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-mc^{2}=(\gamma -1)mc^{2}$
This formula shows that the work expended accelerating an object from rest approaches infinity as the velocity approaches the speed of light. Thus it is impossible to accelerate an object across this boundary.
The mathematical by-product of this calculation is the mass–energy equivalence formula—the body at rest must have energy content
$E_{\text{rest}}=E_{0}=mc^{2}$
At a low speed (v ≪ c), the relativistic kinetic energy is approximated well by the classical kinetic energy. This is done by binomial approximation or by taking the first two terms of the Taylor expansion for the reciprocal square root:
$E_{\text{k}}\approx mc^{2}\left(1+{\frac {1}{2}}{\frac {v^{2}}{c^{2}}}\right)-mc^{2}={\frac {1}{2}}mv^{2}$
So, the total energy $E$ can be partitioned into the rest mass energy plus the Newtonian kinetic energy at low speeds.
When objects move at a speed much slower than light (e.g. in everyday phenomena on Earth), the first two terms of the series predominate. The next term in the Taylor series approximation
$E_{\text{k}}\approx mc^{2}\left(1+{\frac {1}{2}}{\frac {v^{2}}{c^{2}}}+{\frac {3}{8}}{\frac {v^{4}}{c^{4}}}\right)-mc^{2}={\frac {1}{2}}mv^{2}+{\frac {3}{8}}m{\frac {v^{4}}{c^{2}}}$
is small for low speeds. For example, for a speed of 10 km/s (22,000 mph) the correction to the Newtonian kinetic energy is 0.0417 J/kg (on a Newtonian kinetic energy of 50 MJ/kg) and for a speed of 100 km/s it is 417 J/kg (on a Newtonian kinetic energy of 5 GJ/kg).
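The quoted corrections can be reproduced directly (a sketch; c is the defined SI value):

```python
# Relativistic correction to the Newtonian kinetic energy per unit mass,
# (3/8) v⁴ / c², for the two speeds quoted above.
c = 299_792_458.0                  # speed of light in m/s

def correction_per_kg(v):
    return (3.0 / 8.0) * v**4 / c**2

assert abs(correction_per_kg(1.0e4) - 0.0417) < 1e-3   # 10 km/s → ≈ 0.0417 J/kg
assert abs(correction_per_kg(1.0e5) - 417.0) < 1.0     # 100 km/s → ≈ 417 J/kg
assert 0.5 * 1.0e4**2 == 50.0e6                        # Newtonian 50 MJ/kg
```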
The relativistic relation between kinetic energy and momentum is given by
$E_{\text{k}}={\sqrt {p^{2}c^{2}+m^{2}c^{4}}}-mc^{2}$
This can also be expanded as a Taylor series, the first term of which is the simple expression from Newtonian mechanics:[14]
$E_{\text{k}}\approx {\frac {p^{2}}{2m}}-{\frac {p^{4}}{8m^{3}c^{2}}}.$
This suggests that the formulae for energy and momentum are not special and axiomatic, but concepts emerging from the equivalence of mass and energy and the principles of relativity.
General relativity
See also: Schwarzschild geodesics
Using the convention that
$g_{\alpha \beta }\,u^{\alpha }\,u^{\beta }\,=\,-c^{2}$
where the four-velocity of a particle is
$u^{\alpha }\,=\,{\frac {dx^{\alpha }}{d\tau }}$
and $\tau $ is the proper time of the particle, there is also an expression for the kinetic energy of the particle in general relativity.
If the particle has momentum
$p_{\beta }\,=\,m\,g_{\beta \alpha }\,u^{\alpha }$
as it passes by an observer with four-velocity uobs, then the expression for total energy of the particle as observed (measured in a local inertial frame) is
$E\,=\,-\,p_{\beta }\,u_{\text{obs}}^{\beta }$
and the kinetic energy can be expressed as the total energy minus the rest energy:
$E_{k}\,=\,-\,p_{\beta }\,u_{\text{obs}}^{\beta }\,-\,m\,c^{2}\,.$
Consider the case of a metric that is diagonal and spatially isotropic (gtt, gss, gss, gss). Since
$u^{\alpha }={\frac {dx^{\alpha }}{dt}}{\frac {dt}{d\tau }}=v^{\alpha }u^{t}$
where vα is the ordinary velocity measured w.r.t. the coordinate system, we get
$-c^{2}=g_{\alpha \beta }u^{\alpha }u^{\beta }=g_{tt}\left(u^{t}\right)^{2}+g_{ss}v^{2}\left(u^{t}\right)^{2}\,.$
Solving for ut gives
$u^{t}=c{\sqrt {\frac {-1}{g_{tt}+g_{ss}v^{2}}}}\,.$
Thus for a stationary observer (v = 0)
$u_{\text{obs}}^{t}=c{\sqrt {\frac {-1}{g_{tt}}}}$
and thus the kinetic energy takes the form
$E_{\text{k}}=-mg_{tt}u^{t}u_{\text{obs}}^{t}-mc^{2}=mc^{2}{\sqrt {\frac {g_{tt}}{g_{tt}+g_{ss}v^{2}}}}-mc^{2}\,.$
Factoring out the rest energy gives:
$E_{\text{k}}=mc^{2}\left({\sqrt {\frac {g_{tt}}{g_{tt}+g_{ss}v^{2}}}}-1\right)\,.$
This expression reduces to the special relativistic case for the flat-space metric where
${\begin{aligned}g_{tt}&=-c^{2}\\g_{ss}&=1\,.\end{aligned}}$
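Substituting the flat-space metric indeed recovers the special-relativistic $(\gamma -1)mc^{2}$ (a numerical sketch with an arbitrary test mass and speed):

```python
import numpy as np

# The general-relativistic kinetic energy with the flat-space metric
# g_tt = -c², g_ss = 1 must reduce to the special-relativistic (γ−1)mc².
c = 299_792_458.0
m, v = 1.0, 0.6 * c                # test mass and speed (arbitrary)

g_tt, g_ss = -c**2, 1.0
E_gr = m * c**2 * (np.sqrt(g_tt / (g_tt + g_ss * v**2)) - 1.0)

gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)
assert np.isclose(E_gr, (gamma - 1.0) * m * c**2)
```

At $v=0.6c$ the Lorentz factor is $\gamma =1.25$, so both expressions give $0.25\,mc^{2}$.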
In the Newtonian approximation to general relativity
${\begin{aligned}g_{tt}&=-\left(c^{2}+2\Phi \right)\\g_{ss}&=1-{\frac {2\Phi }{c^{2}}}\end{aligned}}$
where Φ is the Newtonian gravitational potential. This means clocks run slower and measuring rods are shorter near massive bodies.
Kinetic energy in quantum mechanics
In quantum mechanics, observables like kinetic energy are represented as operators. For one particle of mass m, the kinetic energy operator appears as a term in the Hamiltonian and is defined in terms of the more fundamental momentum operator ${\hat {p}}$. The kinetic energy operator in the non-relativistic case can be written as
${\hat {T}}={\frac {{\hat {p}}^{2}}{2m}}.$
Notice that this can be obtained by replacing $p$ by ${\hat {p}}$ in the classical expression for kinetic energy in terms of momentum,
$E_{\text{k}}={\frac {p^{2}}{2m}}.$
In the Schrödinger picture, ${\hat {p}}$ takes the form $-i\hbar \nabla $ where the derivative is taken with respect to position coordinates and hence
${\hat {T}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}.$
The expectation value of the electron kinetic energy, $\left\langle {\hat {T}}\right\rangle $, for a system of N electrons described by the wavefunction $\vert \psi \rangle $ is a sum of 1-electron operator expectation values:
$\left\langle {\hat {T}}\right\rangle =\left\langle \psi \left\vert \sum _{i=1}^{N}{\frac {-\hbar ^{2}}{2m_{\text{e}}}}\nabla _{i}^{2}\right\vert \psi \right\rangle =-{\frac {\hbar ^{2}}{2m_{\text{e}}}}\sum _{i=1}^{N}\left\langle \psi \left\vert \nabla _{i}^{2}\right\vert \psi \right\rangle $
where $m_{\text{e}}$ is the mass of the electron and $\nabla _{i}^{2}$ is the Laplacian operator acting upon the coordinates of the ith electron and the summation runs over all electrons.
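For a single particle in one dimension, this expectation value can be checked numerically. The sketch below works in atomic units ($\hbar = m_{\text{e}} = 1$); the Gaussian trial wavefunction and grid are illustrative assumptions. It discretizes the Laplacian with central differences and compares against the analytic value $\hbar^{2}/(8m\sigma^{2})$ for a normalized Gaussian of width $\sigma$:

```python
import numpy as np

hbar = m_e = 1.0                 # atomic units (an assumption of this sketch)
sigma = 1.3                      # width of the illustrative Gaussian
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Normalized Gaussian: psi(x) = (2 pi sigma^2)^(-1/4) exp(-x^2 / (4 sigma^2))
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

# Central-difference Laplacian; psi vanishes at the grid edges, so wrap-around is harmless
lap = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx**2

# <T> = -hbar^2/(2m) * integral of psi * psi'' dx
T_num = -hbar**2 / (2.0 * m_e) * np.sum(psi * lap) * dx

T_exact = hbar**2 / (8.0 * m_e * sigma**2)   # analytic value for this Gaussian
print(T_num, T_exact)
```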
The density functional formalism of quantum mechanics requires knowledge of the electron density only, i.e., it formally does not require knowledge of the wavefunction. Given an electron density $\rho (\mathbf {r} )$, the exact N-electron kinetic energy functional is unknown; however, for the specific case of a 1-electron system, the kinetic energy can be written as
$T[\rho ]={\frac {1}{8}}\int {\frac {\nabla \rho (\mathbf {r} )\cdot \nabla \rho (\mathbf {r} )}{\rho (\mathbf {r} )}}d^{3}r$
where $T[\rho ]$ is known as the von Weizsäcker kinetic energy functional.
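For one electron the von Weizsäcker functional is exact. A radial-quadrature sketch (the hydrogen 1s test density $\rho(\mathbf{r}) = e^{-2r}/\pi$ in atomic units is an illustration, not from the text above) recovers the known 1s kinetic energy of 0.5 hartree:

```python
import numpy as np

# Hydrogen 1s density in atomic units: rho(r) = exp(-2 r) / pi
r = np.linspace(1e-6, 30.0, 200_001)
dr = r[1] - r[0]
rho = np.exp(-2.0 * r) / np.pi
drho = np.gradient(rho, r)               # radial derivative d(rho)/dr

# Spherical symmetry reduces T[rho] to a radial integral:
# T[rho] = (1/8) * integral of (d rho/dr)^2 / rho * 4 pi r^2 dr
T_vw = 0.125 * np.sum(drho**2 / rho * 4.0 * np.pi * r**2) * dr
print(T_vw)   # close to 0.5 hartree: for one electron the functional is exact
```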
See also
• Escape velocity
• Foot-pound
• Joule
• Kinetic energy penetrator
• Kinetic energy per unit mass of projectiles
• Kinetic projectile
• Parallel axis theorem
• Potential energy
• Recoil
Notes
1. Jain, Mahesh C. (2009). Textbook of Engineering Physics (Part I). PHI Learning Pvt. Chapter 1, p. 9. ISBN 978-81-203-3862-3. Archived from the original on 2020-08-04. Retrieved 2018-06-21.
2. Landau, Lev; Lifshitz, Evgeny (15 January 1976). Mechanics (Third ed.). Butterworth-Heinemann. p. 15. ISBN 0-7506-2896-0.
3. Goldstein, Herbert (2002). Classical Mechanics (Third ed.). pp. 62–63. ISBN 978-0201657029.
4. Brenner, Joseph (2008). Logic in Reality (illustrated ed.). Springer Science & Business Media. p. 93. ISBN 978-1-4020-8375-4. Archived from the original on 2020-01-25. Retrieved 2016-02-01.
5. Judith P. Zinsser (2007). Emilie du Chatelet: Daring Genius of the Enlightenment. Penguin. ISBN 978-0-14-311268-6.
6. Crosbie Smith, M. Norton Wise (1989-10-26). Energy and Empire: A Biographical Study of Lord Kelvin. Cambridge University Press. p. 866. ISBN 0-521-26173-2.
7. John Theodore Merz (1912). A History of European Thought in the Nineteenth Century. Blackwood. p. 139. ISBN 0-8446-2579-5.
8. William John Macquorn Rankine (1853). "On the general law of the transformation of energy". Proceedings of the Philosophical Society of Glasgow. 3 (5).
9. "... what remained to be done, was to qualify the noun 'energy' by appropriate adjectives, so as to distinguish between energy of activity and energy of configuration. The well-known pair of antithetical adjectives, 'actual' and 'potential,' seemed exactly suited for that purpose. ... Sir William Thomson and Professor Tait have lately substituted the word 'kinetic' for 'actual.'" William John Macquorn Rankine (1867). "On the Phrase "Potential Energy," and on the Definitions of Physical Quantities". Proceedings of the Philosophical Society of Glasgow. VI (III).
10. Goel, V. K. (2007). Fundamentals Of Physics Xi (illustrated ed.). Tata McGraw-Hill Education. p. 12.30. ISBN 978-0-07-062060-5. Archived from the original on 2020-08-03. Retrieved 2020-07-07.
11. A.M. Kuethe and J.D. Schetzer (1959) Foundations of Aerodynamics, 2nd edition, p.53. John Wiley & Sons ISBN 0-471-50952-3
12. Sears, Francis Weston; Brehme, Robert W. (1968). Introduction to the theory of relativity. Addison-Wesley. p. 127.
13. Physics notes - Kinetic energy in the CM frame Archived 2007-06-11 at the Wayback Machine. Duke.edu. Accessed 2007-11-24.
14. Fitzpatrick, Richard (20 July 2010). "Fine Structure of Hydrogen". Quantum Mechanics. Archived from the original on 25 August 2016. Retrieved 20 August 2016.
References
• Physics Classroom (2000). "Kinetic Energy". Retrieved 2015-07-19.
• School of Mathematics and Statistics, University of St Andrews (2000). "Biography of Gaspard-Gustave de Coriolis (1792-1843)". Retrieved 2006-03-03.
• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
• Tipler, Paul; Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0.
External links
• Media related to Kinetic energy at Wikimedia Commons
Translation (geometry)
In Euclidean geometry, a translation is a geometric transformation that moves every point of a figure, shape or space by the same distance in a given direction. A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system. In a Euclidean space, any translation is an isometry.
As a function
See also: Displacement (geometry)
If $\mathbf {v} $ is a fixed vector, known as the translation vector, and $\mathbf {p} $ is the initial position of some object, then the translation function $T_{\mathbf {v} }$ will work as $T_{\mathbf {v} }(\mathbf {p} )=\mathbf {p} +\mathbf {v} $.
If $T$ is a translation, then the image of a subset $A$ under the function $T$ is the translate of $A$ by $T$. The translate of $A$ by $T_{\mathbf {v} }$ is often written $A+\mathbf {v} $.
Horizontal and vertical translations
In geometry, a vertical translation (also known as vertical shift) is a translation of a geometric object in a direction parallel to the vertical axis of the Cartesian coordinate system.[1][2][3]
Often, vertical translations are considered for the graph of a function. If f is any function of x, then the graph of the function f(x) + c (whose values are given by adding a constant c to the values of f) may be obtained by a vertical translation of the graph of f(x) by distance c. For this reason the function f(x) + c is sometimes called a vertical translate of f(x).[4] For instance, the antiderivatives of a function all differ from each other by a constant of integration and are therefore vertical translates of each other.[5]
In function graphing, a horizontal translation is a transformation which results in a graph that is equivalent to shifting the base graph left or right in the direction of the x-axis. A graph is translated k units horizontally by moving each point on the graph k units horizontally.
For the base function f(x) and a constant k, the graph of the function given by g(x) = f(x − k) can be sketched as the graph of f(x) shifted k units horizontally (to the right when k is positive).
Describing function transformations in terms of geometric transformations makes it clearer why functions translate horizontally the way they do. When addressing translations on the Cartesian plane it is natural to introduce translations in this type of notation:
$(x,y)\rightarrow (x+a,y+b)$
or
$T(x,y)=(x+a,y+b)$
where $a$ and $b$ are horizontal and vertical changes respectively.
Example
Taking the parabola y = x2 , a horizontal translation 5 units to the right would be represented by T(x, y) = (x + 5, y). Now we must connect this transformation notation to an algebraic notation. Consider the point (a, b) on the original parabola that moves to point (c, d) on the translated parabola. According to our translation, c = a + 5 and d = b. The point on the original parabola was b = a2. Our new point can be described by relating d and c in the same equation. b = d and a = c − 5. So d = b = a2 = (c − 5)2. Since this is true for all the points on our new parabola, the new equation is y = (x − 5)2.
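The example can be checked directly in code (a minimal sketch; the sample points are arbitrary):

```python
def T(x, y):
    """Horizontal translation 5 units to the right: T(x, y) = (x + 5, y)."""
    return x + 5, y

# Every point (a, a^2) of y = x^2 is carried to a point on y = (x - 5)^2
for a in [-3, -1, 0, 2, 7]:
    c, d = T(a, a**2)
    assert d == (c - 5) ** 2
print("translated parabola satisfies y = (x - 5)^2")
```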
Application in classical physics
In classical physics, translational motion is movement that changes the position of an object, as opposed to rotation. For example, according to Whittaker:[6]
If a body is moved from one position to another, and if the lines joining the initial and final points of each of the points of the body are a set of parallel straight lines of length ℓ, so that the orientation of the body in space is unaltered, the displacement is called a translation parallel to the direction of the lines, through a distance ℓ.
— E. T. Whittaker, A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, p. 1
A translation is the operation changing the positions of all points $(x,y,z)$ of an object according to the formula
$(x,y,z)\to (x+\Delta x,y+\Delta y,z+\Delta z)$
where $(\Delta x,\ \Delta y,\ \Delta z)$ is the same vector for each point of the object. The translation vector $(\Delta x,\ \Delta y,\ \Delta z)$ common to all points of the object describes a particular type of displacement of the object, usually called a linear displacement to distinguish it from displacements involving rotation, called angular displacements.
When considering spacetime, a change of time coordinate is considered to be a translation.
As an operator
Main article: Shift operator
The translation operator turns a function of the original position, $f(\mathbf {v} )$, into a function of the final position, $f(\mathbf {v} +\mathbf {\delta } )$. In other words, $T_{\mathbf {\delta } }$ is defined such that $T_{\mathbf {\delta } }f(\mathbf {v} )=f(\mathbf {v} +\mathbf {\delta } ).$ This operator is more abstract than a function, since $T_{\mathbf {\delta } }$ defines a relationship between two functions, rather than the underlying vectors themselves. The translation operator can act on many kinds of functions, such as when the translation operator acts on a wavefunction, which is studied in the field of quantum mechanics.
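The defining relation $T_{\mathbf {\delta } }f(\mathbf {v} )=f(\mathbf {v} +\mathbf {\delta } )$ can be illustrated in one dimension (a minimal sketch; the function f is an arbitrary example):

```python
def translation_operator(delta):
    """Return the operator T_delta with (T_delta f)(v) = f(v + delta)."""
    return lambda f: (lambda v: f(v + delta))

f = lambda v: v ** 2            # an arbitrary function of one variable
g = translation_operator(3.0)(f)
print(g(1.0))                   # f(1 + 3) = 16.0
```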
As a group
The set of all translations forms the translation group $\mathbb {T} $, which is isomorphic to the space itself, and a normal subgroup of Euclidean group $E(n)$. The quotient group of $E(n)$ by $\mathbb {T} $ is isomorphic to the orthogonal group $O(n)$:
$E(n)/\mathbb {T} \cong O(n)$
Because translation is commutative, the translation group is abelian. There are an infinite number of possible translations, so the translation group is an infinite group.
In the theory of relativity, due to the treatment of space and time as a single spacetime, translations can also refer to changes in the time coordinate. For example, the Galilean group and the Poincaré group include translations with respect to time.
Lattice groups
Main article: Lattice (group)
The lattice groups are one kind of subgroup of the three-dimensional translation group; they are infinite groups but, unlike the translation groups, are finitely generated. That is, a finite generating set generates the entire group.
Matrix representation
A translation is an affine transformation with no fixed points. Matrix multiplications always have the origin as a fixed point. Nevertheless, there is a common workaround using homogeneous coordinates to represent a translation of a vector space with matrix multiplication: Write the 3-dimensional vector $\mathbf {v} =(v_{x},v_{y},v_{z})$ using 4 homogeneous coordinates as $\mathbf {v} =(v_{x},v_{y},v_{z},1)$.[7]
To translate an object by a vector $\mathbf {v} $, each homogeneous vector $\mathbf {p} $ (written in homogeneous coordinates) can be multiplied by this translation matrix:
$T_{\mathbf {v} }={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}$
As shown below, the multiplication will give the expected result:
$T_{\mathbf {v} }\mathbf {p} ={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\\1\end{bmatrix}}={\begin{bmatrix}p_{x}+v_{x}\\p_{y}+v_{y}\\p_{z}+v_{z}\\1\end{bmatrix}}=\mathbf {p} +\mathbf {v} $
The inverse of a translation matrix can be obtained by reversing the direction of the vector:
$T_{\mathbf {v} }^{-1}=T_{-\mathbf {v} }.\!$
Similarly, the product of translation matrices is given by adding the vectors:
$T_{\mathbf {v} }T_{\mathbf {w} }=T_{\mathbf {v} +\mathbf {w} }.\!$
Because addition of vectors is commutative, multiplication of translation matrices is therefore also commutative (unlike multiplication of arbitrary matrices).
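The matrix identities above can be verified numerically (a sketch using NumPy; the vectors and the point are arbitrary examples):

```python
import numpy as np

def translation_matrix(v):
    """4x4 homogeneous-coordinate translation matrix T_v."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

v = np.array([1.0, -2.0, 3.0])
w = np.array([0.5, 4.0, -1.0])
p = np.array([2.0, 0.0, 7.0, 1.0])     # point in homogeneous coordinates

Tv, Tw = translation_matrix(v), translation_matrix(w)
print(Tv @ p)                          # [ 3. -2. 10.  1.] = p translated by v

assert np.allclose(np.linalg.inv(Tv), translation_matrix(-v))  # T_v^{-1} = T_{-v}
assert np.allclose(Tv @ Tw, translation_matrix(v + w))         # T_v T_w = T_{v+w}
assert np.allclose(Tv @ Tw, Tw @ Tv)                           # commutativity
```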
Translation of axes
Main article: Translation of axes
While geometric translation is often viewed as an active process that changes the position of a geometric object, a similar result can be achieved by a passive transformation that moves the coordinate system itself but leaves the object fixed. The passive version of an active geometric translation is known as a translation of axes.
Translational symmetry
An object that looks the same before and after translation is said to have translational symmetry. A common example is a periodic function, which is an eigenfunction of a translation operator.
Applications
Vehicle dynamics
For describing vehicle dynamics (or movement of any rigid body), including ship dynamics and aircraft dynamics, it is common to use a mechanical model consisting of six degrees of freedom, which includes translations along three reference axes, as well as rotations about those three axes.
These translations are often called:
• Surge, translation along the longitudinal axis (forward or backwards)
• Sway, translation along the transverse axis (from side to side)
• Heave, translation along the vertical axis (to move up or down).
The corresponding rotations are often called:
• roll, about the longitudinal axis
• pitch, about the transverse axis
• yaw, about the vertical axis.
See also
• 2D computer graphics#Translation
• Advection
• Parallel transport
• Rotation matrix
• Scaling (geometry)
• Transformation matrix
• Translational symmetry
References
1. De Berg, Mark; Cheong, Otfried; Van Kreveld, Marc; Overmars, Mark (2008), Computational Geometry Algorithms and Applications, Berlin: Springer, p. 91, doi:10.1007/978-3-540-77974-2, ISBN 978-3-540-77973-5.
2. Smith, James T. (2011), Methods of Geometry, John Wiley & Sons, p. 356, ISBN 9781118031032.
3. Faulkner, John R. (2014), The Role of Nonassociative Algebra in Projective Geometry, Graduate Studies in Mathematics, vol. 159, American Mathematical Society, p. 13, ISBN 9781470418496.
4. Dougherty, Edward R.; Astol, Jaakko (1999), Nonlinear Filters for Image Processing, SPIE/IEEE series on imaging science & engineering, vol. 59, SPIE Press, p. 169, ISBN 9780819430335.
5. Zill, Dennis; Wright, Warren S. (2009), Single Variable Calculus: Early Transcendentals, Jones & Bartlett Learning, p. 269, ISBN 9780763749651.
6. Edmund Taylor Whittaker (1988). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies (Reprint of fourth edition of 1936 with foreword by William McCrea ed.). Cambridge University Press. p. 1. ISBN 0-521-35883-3.
7. Richard Paul, 1981, Robot manipulators: mathematics, programming, and control : the computer control of robot manipulators, MIT Press, Cambridge, MA
Further reading
• Zazkis, R., Liljedahl, P., & Gadowsky, K. Conceptions of function translation: obstacles, intuitions, and rerouting. Journal of Mathematical Behavior, 22, 437-450. Retrieved April 29, 2014, from www.elsevier.com/locate/jmathb
• Transformations of Graphs: Horizontal Translations. (2006, January 1). BioMath: Transformation of Graphs. Retrieved April 29, 2014
External links
Wikimedia Commons has media related to Translation (geometry).
• Translation Transform at cut-the-knot
• Geometric Translation (Interactive Animation) at Math Is Fun
• Understanding 2D Translation and Understanding 3D Translation by Roger Germundsson, The Wolfram Demonstrations Project.
Bi-directional delay line
In mathematics, a bi-directional delay line is a numerical analysis technique used in computer simulation for solving ordinary differential equations by converting them to hyperbolic equations. In this way an explicit solution scheme is obtained with highly robust numerical properties. It was introduced by Auslander in 1968.
It originates from the simulation of hydraulic pipelines, where wave propagation was studied. It was then found that it could also serve as an efficient numerical technique for numerically insulating different parts of a simulation model in each time step. It is used in the HOPSAN simulation package (Krus et al. 1990).
It is also known as Transmission Line Modelling (TLM), from an independent development by Johns and O'Brien in 1980. It has also been extended to partial differential equations.
References
• D.M. Auslander, "Distributed System Simulation with Bilateral Delay Line Models", Journal of Basic Engineering, Trans. ASME, pp. 195–200, June 1968.
• P. B. Johns and M.O'Brien. "Use of the transmission line modelling (t.l.m) method to solve nonlinear lumped networks", The Radio Electron and Engineer. 1980.
• P Krus, A Jansson, J-O Palmberg, K Weddfeldt. "Distributed Simulation of Hydromechanical Systems". Presented at Third Bath International Fluid Power Workshop, Bath, UK 1990.
Transport-of-intensity equation
The transport-of-intensity equation (TIE) is a computational approach to reconstruct the phase of a complex wave in optical and electron microscopy.[1] It describes the internal relationship between the intensity and phase distribution of a wave.[2]
The TIE was first proposed in 1983 by Michael Reed Teague.[3] Teague suggested using the law of conservation of energy to write a differential equation for the transport of energy by an optical field. This equation, he stated, could be used as an approach to phase recovery.[4]
Teague approximated the amplitude of the wave propagating nominally in the z-direction by a parabolic equation and then expressed it in terms of irradiance and phase:
${\frac {2\pi }{\lambda }}{\frac {\partial }{\partial z}}I(x,y,z)=-\nabla _{x,y}\cdot [I(x,y,z)\nabla _{x,y}\Phi ],$
where $\lambda $ is the wavelength, $I(x,y,z)$ is the irradiance at point $(x,y,z)$, and $\Phi $ is the phase of the wave. If the intensity distribution of the wave and its spatial derivative can be measured experimentally, the equation becomes a linear equation that can be solved to obtain the phase distribution $\Phi $.[5]
For a phase sample with a constant intensity, the TIE simplifies to
${\frac {d}{dz}}I(z)=-{\frac {\lambda }{2\pi }}I(z)\nabla _{x,y}^{2}\Phi .$
It allows measuring the phase distribution of the sample by acquiring a defocused image, i.e. $I(x,y,z+\Delta z)$.
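A sketch of TIE phase retrieval for the constant-intensity case on a periodic grid (the wavelength, grid size, and test phase below are illustrative assumptions, not from any cited experiment): a known phase generates the axial intensity derivative through the simplified TIE, and the resulting Poisson equation $\nabla _{x,y}^{2}\Phi =-(2\pi /\lambda I)\,\partial I/\partial z$ is inverted spectrally, recovering the phase up to an additive constant:

```python
import numpy as np

lam = 0.5e-6                     # wavelength in metres (assumed)
n, L = 256, 1e-3                 # grid points and field of view (assumed)
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# A known smooth, periodic test phase
phi_true = 1.5 * np.sin(2 * np.pi * X / L) * np.cos(4 * np.pi * Y / L)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2

# Forward model at constant intensity I0: dI/dz = -(lam / 2 pi) I0 lap(phi)
I0 = 1.0
lap_phi = np.fft.ifft2(-k2 * np.fft.fft2(phi_true)).real
dIdz = -(lam / (2 * np.pi)) * I0 * lap_phi

# TIE inversion: lap(phi) = -(2 pi / (lam I0)) dI/dz, solved spectrally
rhs_hat = np.fft.fft2(-(2 * np.pi / (lam * I0)) * dIdz)
with np.errstate(divide="ignore", invalid="ignore"):
    phi_hat = np.where(k2 > 0, -rhs_hat / k2, 0.0)
phi_rec = np.fft.ifft2(phi_hat).real

err = np.max(np.abs(phi_rec - (phi_true - phi_true.mean())))
print(err)    # phase recovered up to an additive constant
```

Because the same spectral Laplacian is used forward and backward, this is a self-consistency check rather than a simulation of a real defocus measurement, which would estimate $\partial I/\partial z$ from two defocused images.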
TIE-based approaches are applied in biomedical and technical applications, such as quantitative monitoring of cell growth in culture,[6] investigation of cellular dynamics and characterization of optical elements.[7] The TIE method is also applied for phase retrieval in transmission electron microscopy.[8]
References
1. Bostan, E. (2014). "Phase retrieval by using transport-of-intensity equation and differential interference contrast microscopy" (PDF). 2014 IEEE International Conference on Image Processing (ICIP). pp. 3939–3943. doi:10.1109/ICIP.2014.7025800. ISBN 978-1-4799-5751-4. S2CID 10310598.
2. Cheng, H. (2009). "Phase Retrieval Using the Transport-of-Intensity Equation". 2009 Fifth International Conference on Image and Graphics. pp. 417–421. doi:10.1109/ICIG.2009.32. ISBN 978-1-4244-5237-8. S2CID 15772496.
3. Teague, Michael R. (1983). "Deterministic phase retrieval: a Green's function solution". Journal of the Optical Society of America. 73 (11): 1434–1441. doi:10.1364/JOSA.73.001434.
4. Nugent, Keith (2010). "Coherent methods in the X-ray sciences". Advances in Physics. 59 (1): 1–99. arXiv:0908.3064. Bibcode:2010AdPhy..59....1N. doi:10.1080/00018730903270926. S2CID 118519311.
5. Gureyev, T. E.; Roberts, A.; Nugent, K. A. (1995). "Partially coherent fields, the transport-of-intensity equation, and phase uniqueness". JOSA A. 12 (9): 1942–1946. Bibcode:1995JOSAA..12.1942G. doi:10.1364/JOSAA.12.001942.
6. Curl, C.L. (2004). "Quantitative phase microscopy: a new tool for measurement of cell culture growth and confluency in situ". Pflügers Archiv: European Journal of Physiology. 448 (4): 462–468. doi:10.1007/s00424-004-1248-7. PMID 14985984. S2CID 7640406.
7. Dorrer, C. (2007). "Optical testing using the transport-of-intensity equation". Opt. Express. 15 (12): 7165–7175. Bibcode:2007OExpr..15.7165D. doi:10.1364/oe.15.007165. PMID 19547035.
8. Belaggia, M. (2004). "On the transport of intensity technique for phase retrieval". Ultramicroscopy. 102 (1): 37–49. doi:10.1016/j.ultramic.2004.08.004. PMID 15556699.
Transport theorem
The transport theorem (or transport equation, rate of change transport theorem or basic kinematic equation) is a vector equation that relates the time derivative of a Euclidean vector as evaluated in a non-rotating coordinate system to its time derivative in a rotating reference frame. It has important applications in classical mechanics and analytical dynamics and diverse fields of engineering. A Euclidean vector represents a certain magnitude and direction in space that is independent of the coordinate system in which it is measured. However, when taking a time derivative of such a vector one actually takes the difference between two vectors measured at two different times t and t+dt. In a rotating coordinate system, the coordinate axes can have different directions at these two times, such that even a constant vector can have a non-zero time derivative. As a consequence, the time derivative of a vector measured in a rotating coordinate system can be different from the time derivative of the same vector in a non-rotating reference system. For example, the velocity vector of an airplane as evaluated using a coordinate system that is fixed to the earth (a rotating reference system) is different from its velocity as evaluated using a coordinate system that is fixed in space. The transport theorem provides a way to relate time derivatives of vectors between a rotating and a non-rotating coordinate system; it is derived and explained in more detail in rotating reference frame and can be written as:[1][2][3]
${\frac {\mathrm {d} }{\mathrm {d} t}}{\boldsymbol {f}}=\left[\left({\frac {\mathrm {d} }{\mathrm {d} t}}\right)_{\mathrm {r} }+{\boldsymbol {\Omega }}\times \right]{\boldsymbol {f}}\ .$
Here f is the vector of which the time derivative is evaluated in both the non-rotating, and rotating coordinate system. The subscript r designates its time derivative in the rotating coordinate system and the vector Ω is the angular velocity of the rotating coordinate system.
The Transport Theorem is particularly useful for relating velocities and acceleration vectors between rotating and non-rotating coordinate systems.[4]
Reference[2] states: "Despite of its importance in classical mechanics and its ubiquitous application in engineering, there is no universally-accepted name for the Euler derivative transformation formula [...] Several terminology are used: kinematic theorem, transport theorem, and transport equation. These terms, although terminologically correct, are more prevalent in the subject of fluid mechanics to refer to entirely different physics concepts." An example of such a different physics concept is Reynolds transport theorem.
Derivation
Let ${\boldsymbol {b}}_{i}:=T_{B}^{E}{\boldsymbol {e}}_{i}$ be the basis vectors of $B$, as seen from the reference frame $E$, and denote the components of a vector ${\boldsymbol {f}}$ in $B$ by just $f_{i}$. Let
$G:=T'\cdot T^{-1}$
so that this coordinate transformation is generated, in time, according to $T'=G\cdot T$. Such a generator differential equation is important for trajectories in Lie group theory. Applying the product rule with the implicit summation convention,
${\boldsymbol {f}}'=(f_{i}{\boldsymbol {b}}_{i})'=(f_{i}T)'{\boldsymbol {e}}_{i}=(f_{i}'T+f_{i}G\cdot T)\,{\boldsymbol {e}}_{i}=(f_{i}'+f_{i}G)\,{\boldsymbol {b}}_{i}=\left(\left({\tfrac {\mathrm {d} }{\mathrm {d} t}}\right)_{B}+G\right){\boldsymbol {f}}$
For the rotation groups ${\mathrm {SO} }(n)$, one has $T_{E}^{B}:=(T_{B}^{E})^{-1}=(T_{B}^{E})^{T}$. In three dimensions, $n=3$, the generator $G$ then equals the cross product operation from the left, a skew-symmetric linear map $[{\boldsymbol {\Omega }}_{E}]_{\times }{\boldsymbol {g}}:={\boldsymbol {\Omega }}_{E}\times {\boldsymbol {g}}$ for any vector ${\boldsymbol {g}}$. As a matrix, it is also related to the vector as seen from $B$ via
$[{\boldsymbol {\Omega }}_{E}]_{\times }=[T_{B}^{E}{\boldsymbol {\Omega }}_{B}]_{\times }=T_{B}^{E}\cdot [{\boldsymbol {\Omega }}_{B}]_{\times }\cdot T_{E}^{B}$
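Both the transport theorem and the generator relation $G=T'\cdot T^{-1}=[{\boldsymbol {\Omega }}]_{\times }$ can be verified numerically for a rotation about the z-axis (an illustrative sketch; the angular rate, vector, and evaluation time are arbitrary):

```python
import numpy as np

omega = 0.7                      # angular rate about the z-axis (arbitrary)
Omega = np.array([0.0, 0.0, omega])

def T(t):
    """Rotation matrix taking rotating-frame components to the fixed frame E."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

f_B = np.array([1.0, 2.0, 3.0])  # constant components in the rotating frame

t, h = 0.4, 1e-6
# Time derivative of f as seen in E, by central differences
f_dot = (T(t + h) @ f_B - T(t - h) @ f_B) / (2 * h)

# Transport theorem with (d/dt)_r f = 0: f_dot = Omega x f
assert np.allclose(f_dot, np.cross(Omega, T(t) @ f_B), atol=1e-6)

# Generator G = T' T^{-1} equals the cross-product matrix [Omega]_x
G = (T(t + h) - T(t - h)) / (2 * h) @ T(t).T
Omega_x = np.array([[0.0, -omega, 0.0], [omega, 0.0, 0.0], [0.0, 0.0, 0.0]])
assert np.allclose(G, Omega_x, atol=1e-6)
print("transport theorem verified numerically")
```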
References
1. Rao, Anil Vithala (2006). Dynamics of particles and rigid bodies: a systematic approach. New York: Cambridge University Press. pp. 47, eq. (2–128). ISBN 978-0-511-34840-2.
2. Harithuddin, A.S.M. (2014). Derivative Kinematics in Relatively Rotating Coordinate Frames: Investigation on the Razi Acceleration. RMIT University. p. 22.
3. Baruh, H. (1999). Analytical Dynamics. McGraw Hill.
4. "Course Notes MIT" (PDF).
Transpose of a linear map
In linear algebra, the transpose of a linear map between two vector spaces, defined over the same field, is an induced map between the dual spaces of the two vector spaces. The transpose or algebraic adjoint of a linear map is often used to study the original linear map. This concept is generalised by adjoint functors.
See also: Transpose, Dual system § Transposes, and Transpose § Transposes of linear maps and bilinear forms
Definition
See also: Dual system § Transposes, and Transpose § Transposes of linear maps and bilinear forms
Let $X^{\#}$ denote the algebraic dual space of a vector space $X.$ Let $X$ and $Y$ be vector spaces over the same field ${\mathcal {K}}.$ If $u:X\to Y$ is a linear map, then its algebraic adjoint or dual,[1] is the map ${}^{\#}u:Y^{\#}\to X^{\#}$ defined by $f\mapsto f\circ u.$ The resulting functional ${}^{\#}u(f):=f\circ u$ is called the pullback of $f$ by $u.$
The continuous dual space of a topological vector space (TVS) $X$ is denoted by $X^{\prime }.$ If $X$ and $Y$ are TVSs then a linear map $u:X\to Y$ is weakly continuous if and only if ${}^{\#}u\left(Y^{\prime }\right)\subseteq X^{\prime },$ in which case we let ${}^{t}u:Y^{\prime }\to X^{\prime }$ denote the restriction of ${}^{\#}u$ to $Y^{\prime }.$ The map ${}^{t}u$ is called the transpose[2] or algebraic adjoint of $u.$ The following identity characterizes the transpose of $u$[3]
$\left\langle {}^{t}u(f),x\right\rangle =\left\langle f,u(x)\right\rangle \quad {\text{ for all }}f\in Y^{\prime }{\text{ and }}x\in X$
where $\left\langle \cdot ,\cdot \right\rangle $ is the natural pairing defined by $\left\langle z,h\right\rangle :=z(h).$
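In finite dimensions this identity reduces to the familiar matrix transpose: if $u$ is represented by a matrix $A$ and a functional $f$ by its coordinate vector, then ${}^{t}u(f)=f\circ u$ is represented by $A^{\mathsf {T}}f$. A minimal numerical sketch (random matrices as illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))     # u : R^4 -> R^3 as a matrix
x = rng.standard_normal(4)          # a vector in X = R^4
f = rng.standard_normal(3)          # a functional on Y = R^3, in coordinates

# Transpose of u: (t u)(f) = f o u, which in coordinates is A^T @ f
lhs = (A.T @ f) @ x                 # <t u(f), x>
rhs = f @ (A @ x)                   # <f, u(x)>
assert np.isclose(lhs, rhs)
print("<t u(f), x> = <f, u(x)> holds")
```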
Properties
The assignment $u\mapsto {}^{t}u$ produces an injective linear map between the space of linear operators from $X$ to $Y$ and the space of linear operators from $Y^{\#}$ to $X^{\#}.$ If $X=Y$ then the space of linear maps is an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that ${}^{t}(uv)={}^{t}v{}^{t}u.$ In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over ${\mathcal {K}}$ to itself. One can identify ${}^{t}\left({}^{t}u\right)$ with $u$ using the natural injection into the double dual.
• If $u:X\to Y$ and $v:Y\to Z$ are linear maps then ${}^{t}(v\circ u)={}^{t}u\circ {}^{t}v$[4]
• If $u:X\to Y$ is a (surjective) vector space isomorphism then so is the transpose ${}^{t}u:Y^{\prime }\to X^{\prime }.$
• If $X$ and $Y$ are normed spaces then
$\|x\|=\sup _{\|x^{\prime }\|\leq 1}\left|x^{\prime }(x)\right|\quad {\text{ for each }}x\in X$
and if the linear operator $u:X\to Y$ is bounded then the operator norm of ${}^{t}u$ is equal to the norm of $u$; that is[5][6]
$\|u\|=\left\|{}^{t}u\right\|$
and moreover
$\|u\|=\sup \left\{\left|y^{\prime }(ux)\right|:\|x\|\leq 1,\left\|y^{\prime }\right\|\leq 1{\text{ where }}x\in X,y^{\prime }\in Y^{\prime }\right\}.$
Polars
Suppose now that $u:X\to Y$ is a weakly continuous linear operator between topological vector spaces $X$ and $Y$ with continuous dual spaces $X^{\prime }$ and $Y^{\prime },$ respectively. Let $\langle \cdot ,\cdot \rangle :X\times X^{\prime }\to \mathbb {C} $ denote the canonical dual system, defined by $\left\langle x,x^{\prime }\right\rangle =x^{\prime }x$ where $x$ and $x^{\prime }$ are said to be orthogonal if $\left\langle x,x^{\prime }\right\rangle =x^{\prime }x=0.$ For any subsets $A\subseteq X$ and $S^{\prime }\subseteq X^{\prime },$ let
$A^{\circ }=\left\{x^{\prime }\in X^{\prime }:\sup _{a\in A}\left|x^{\prime }(a)\right|\leq 1\right\}\qquad {\text{ and }}\qquad S^{\circ }=\left\{x\in X:\sup _{s^{\prime }\in S^{\prime }}\left|s^{\prime }(x)\right|\leq 1\right\}$
denote the (absolute) polar of $A$ in $X^{\prime }$ (resp. of $S^{\prime }$ in $X$).
• If $A\subseteq X$ and $B\subseteq Y$ are convex, weakly closed sets containing the origin then ${}^{t}u\left(B^{\circ }\right)\subseteq A^{\circ }$ implies $u(A)\subseteq B.$[7]
• If $A\subseteq X$ and $B\subseteq Y$ then[4]
$[u(A)]^{\circ }=\left({}^{t}u\right)^{-1}\left(A^{\circ }\right)$
and
$u(A)\subseteq B\quad {\text{ implies }}\quad {}^{t}u\left(B^{\circ }\right)\subseteq A^{\circ }.$
• If $X$ and $Y$ are locally convex then[5]
$\operatorname {ker} {}^{t}u=\left(\operatorname {Im} u\right)^{\circ }.$
Annihilators
Suppose $X$ and $Y$ are topological vector spaces and $u:X\to Y$ is a weakly continuous linear operator (so $\left({}^{t}u\right)\left(Y^{\prime }\right)\subseteq X^{\prime }$). Given subsets $M\subseteq X$ and $N\subseteq X^{\prime },$ define their annihilators (with respect to the canonical dual system) by[6]
${\begin{alignedat}{4}M^{\bot }:&=\left\{x^{\prime }\in X^{\prime }:\left\langle m,x^{\prime }\right\rangle =0{\text{ for all }}m\in M\right\}\\&=\left\{x^{\prime }\in X^{\prime }:x^{\prime }(M)=\{0\}\right\}\qquad {\text{ where }}x^{\prime }(M):=\left\{x^{\prime }(m):m\in M\right\}\end{alignedat}}$
and
${\begin{alignedat}{4}{}^{\bot }N:&=\left\{x\in X:\left\langle x,n^{\prime }\right\rangle =0{\text{ for all }}n^{\prime }\in N\right\}\\&=\left\{x\in X:N(x)=\{0\}\right\}\qquad {\text{ where }}N(x):=\left\{n^{\prime }(x):n^{\prime }\in N\right\}\\\end{alignedat}}$
• The kernel of ${}^{t}u$ is the subspace of $Y^{\prime }$ orthogonal to the image of $u$:[7]
$\ker {}^{t}u=(\operatorname {Im} u)^{\bot }$
• The transpose ${}^{t}u$ is injective if and only if the image of $u$ is a weakly dense subset of $Y$ (that is, the image of $u$ is dense in $Y$ when $Y$ is given the weak topology $\sigma \left(Y,Y^{\prime }\right)$).[7]
• The transpose ${}^{t}u:Y^{\prime }\to X^{\prime }$ is continuous when both $X^{\prime }$ and $Y^{\prime }$ are endowed with the weak-* topology (resp. both endowed with the strong dual topology, both endowed with the topology of uniform convergence on compact convex subsets, both endowed with the topology of uniform convergence on compact subsets).[8]
• (Surjection of Fréchet spaces): If $X$ and $Y$ are Fréchet spaces then the continuous linear operator $u:X\to Y$ is surjective if and only if (1) the transpose ${}^{t}u:Y^{\prime }\to X^{\prime }$ is injective, and (2) the image of the transpose of $u$ is a weakly closed (i.e. weak-* closed) subset of $X^{\prime }.$[9]
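In finite dimensions the identity $\ker {}^{t}u=(\operatorname {Im} u)^{\bot }$ can be checked concretely. The sketch below is an illustration (not from the cited sources): it takes $X=\mathbb {R} ^{2}$, $Y=\mathbb {R} ^{3}$ with standard bases, identifies functionals with coordinate vectors, and represents ${}^{t}u$ by the matrix transpose.

```python
# Finite-dimensional sketch (an assumption: X = R^2, Y = R^3 with standard
# bases, so functionals are identified with coordinate vectors).
# We check the identity ker(t_u) = (Im u)^perp for a concrete matrix.

def mat_vec(A, x):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# u : R^2 -> R^3 with matrix A; its image is span{(1,0,0), (0,1,0)}.
A = [[1, 0],
     [0, 1],
     [0, 0]]
At = transpose(A)  # t_u : (R^3)' -> (R^2)', acting on coordinate vectors

# y' = (0, 0, 1) annihilates Im u, and indeed lies in ker t_u:
y_ann = [0, 0, 1]
assert mat_vec(At, y_ann) == [0, 0]

# y' = (1, 0, 0) does not annihilate Im u, and is not in ker t_u:
y_not = [1, 0, 0]
assert mat_vec(At, y_not) != [0, 0]
```

Here the annihilator of $\operatorname {Im} u$ is exactly the kernel of the transposed matrix, matching the bullet above.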
Duals of quotient spaces
Let $M$ be a closed vector subspace of a Hausdorff locally convex space $X$ and denote the canonical quotient map by
$\pi :X\to X/M\quad {\text{ where }}\quad \pi (x):=x+M.$
Assume $X/M$ is endowed with the quotient topology induced by the quotient map $\pi :X\to X/M.$ Then the transpose of the quotient map is valued in $M^{\bot }$ and
${}^{t}\pi :(X/M)^{\prime }\to M^{\bot }\subseteq X^{\prime }$
is a TVS-isomorphism onto $M^{\bot }.$ If $X$ is a Banach space then ${}^{t}\pi :(X/M)^{\prime }\to M^{\bot }$ is also an isometry.[6] Using this transpose, every continuous linear functional on the quotient space $X/M$ is canonically identified with a continuous linear functional in the annihilator $M^{\bot }$ of $M.$
Duals of vector subspaces
Let $M$ be a closed vector subspace of a Hausdorff locally convex space $X.$ If $m^{\prime }\in M^{\prime }$ and if $x^{\prime }\in X^{\prime }$ is a continuous linear extension of $m^{\prime }$ to $X$ then the assignment $m^{\prime }\mapsto x^{\prime }+M^{\bot }$ induces a vector space isomorphism
$M^{\prime }\to X^{\prime }/\left(M^{\bot }\right),$
which is an isometry if $X$ is a Banach space.[6]
Denote the inclusion map by
$\operatorname {In} :M\to X\quad {\text{ where }}\quad \operatorname {In} (m):=m\quad {\text{ for all }}m\in M.$
The transpose of the inclusion map is
${}^{t}\operatorname {In} :X^{\prime }\to M^{\prime }$
whose kernel is the annihilator $M^{\bot }=\left\{x^{\prime }\in X^{\prime }:\left\langle m,x^{\prime }\right\rangle =0{\text{ for all }}m\in M\right\}$ and which is surjective by the Hahn–Banach theorem. This map induces an isomorphism of vector spaces
$X^{\prime }/\left(M^{\bot }\right)\to M^{\prime }.$
Representation as a matrix
If the linear map $u$ is represented by the matrix $A$ with respect to two bases of $X$ and $Y,$ then ${}^{t}u$ is represented by the transpose matrix $A^{T}$ with respect to the dual bases of $Y^{\prime }$ and $X^{\prime },$ hence the name. Alternatively, as $u$ is represented by $A$ acting to the right on column vectors, ${}^{t}u$ is represented by the same matrix acting to the left on row vectors. These points of view are related by the canonical inner product on $\mathbb {R} ^{n},$ which identifies the space of column vectors with the dual space of row vectors.
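For illustration, a quick numerical check (in Python, with made-up sample data; a sketch only) that the transpose matrix does represent ${}^{t}u$, i.e. that $[f,u(x)]=[{}^{t}u(f),x]$ holds in coordinates:

```python
# Illustrative check that <f, u(x)> = <t_u(f), x> when functionals are
# represented as coordinate (row) vectors and t_u by the transpose matrix.

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def mat_vec(A, x):
    return [dot(row, x) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3],
     [4, 5, 6]]          # u : R^3 -> R^2
x = [1, -1, 2]           # a vector in X = R^3
f = [7, -3]              # a functional on Y = R^2 (row vector)

lhs = dot(f, mat_vec(A, x))             # f(u(x))
rhs = dot(mat_vec(transpose(A), f), x)  # (t_u f)(x)
assert lhs == rhs
```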
Relation to the Hermitian adjoint
Main article: Hermitian adjoint
See also: Riesz representation theorem
The identity that characterizes the transpose, that is, $\left[{}^{t}u(f),x\right]=[f,u(x)],$ is formally similar to the definition of the Hermitian adjoint; however, the transpose and the Hermitian adjoint are not the same map. The transpose is a map $Y^{\prime }\to X^{\prime }$ and is defined for linear maps between any vector spaces $X$ and $Y,$ without requiring any additional structure. The Hermitian adjoint maps $Y\to X$ and is only defined for linear maps between Hilbert spaces, as it is defined in terms of the inner product on the Hilbert space. The Hermitian adjoint therefore requires more mathematical structure than the transpose.
However, the transpose is often used in contexts where the vector spaces are both equipped with a nondegenerate bilinear form such as the Euclidean dot product or another real inner product. In this case, the nondegenerate bilinear form is often used implicitly to map between the vector spaces and their duals, to express the transposed map as a map $Y\to X.$ For a complex Hilbert space, the inner product is sesquilinear and not bilinear, and these conversions change the transpose into the adjoint map.
More precisely: if $X$ and $Y$ are Hilbert spaces and $u:X\to Y$ is a linear map then the transpose of $u$ and the Hermitian adjoint of $u,$ which we will denote respectively by ${}^{t}u$ and $u^{*},$ are related. Denote by $I:X\to X^{*}$ and $J:Y\to Y^{*}$ the canonical antilinear isometries of the Hilbert spaces $X$ and $Y$ onto their duals. Then $u^{*}$ is the following composition of maps:[10]
$Y{\overset {J}{\longrightarrow }}Y^{*}{\overset {{}^{\text{t}}u}{\longrightarrow }}X^{*}{\overset {I^{-1}}{\longrightarrow }}X$
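In $\mathbb {C} ^{n}$ this composition reduces to the familiar conjugate transpose. The sketch below (an illustration with arbitrary sample data, not from the source) verifies the defining identity of the adjoint, $\langle u(x),y\rangle =\langle x,u^{*}(y)\rangle ,$ for the standard sesquilinear inner product:

```python
# Illustrative check in C^2: the Hermitian adjoint is the conjugate
# transpose, and it satisfies <u x, y> = <x, u* y> for the standard
# sesquilinear inner product (conjugate-linear in the second slot here).

def inner(a, b):
    """Hermitian inner product on C^n, conjugate-linear in b."""
    return sum(p * q.conjugate() for p, q in zip(a, b))

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def conj_transpose(A):
    return [[A[i][j].conjugate() for i in range(len(A))]
            for j in range(len(A[0]))]

A = [[1 + 2j, 3j],
     [0, 4 - 1j]]
x = [1j, 2]
y = [3, 1 - 1j]

lhs = inner(mat_vec(A, x), y)                  # <u x, y>
rhs = inner(x, mat_vec(conj_transpose(A), y))  # <x, u* y>
assert abs(lhs - rhs) < 1e-12
```

Dropping the conjugation (the plain transpose) would break this identity precisely because the inner product is sesquilinear rather than bilinear.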
Applications to functional analysis
Suppose that $X$ and $Y$ are topological vector spaces and that $u:X\to Y$ is a linear map. Then many of $u$'s properties are reflected in ${}^{t}u.$
• If $A\subseteq X$ and $B\subseteq Y$ are weakly closed, convex sets containing the origin, then ${}^{t}u\left(B^{\circ }\right)\subseteq A^{\circ }$ implies $u(A)\subseteq B.$[4]
• The null space of ${}^{t}u$ is the subspace of $Y^{\prime }$ orthogonal to the range $u(X)$ of $u.$[4]
• ${}^{t}u$ is injective if and only if the range $u(X)$ of $u$ is weakly dense in $Y.$[4]
See also
• Adjoint functors – Relationship between two functors abstracting many common constructions
• Composition operator – Linear operator in mathematics
• Hermitian adjoint – Conjugate transpose of an operator in infinite dimensions
• Riesz representation theorem – Theorem about the dual of a Hilbert space
• Dual space § Transpose of a linear map
• Transpose § Transpose of a linear map
References
1. Schaefer & Wolff 1999, p. 128.
2. Trèves 2006, p. 240.
3. Halmos (1974, §44)
4. Schaefer & Wolff 1999, pp. 129–130
5. Trèves 2006, pp. 240–252.
6. Rudin 1991, pp. 92–115.
7. Schaefer & Wolff 1999, pp. 128–130.
8. Trèves 2006, pp. 199–200.
9. Trèves 2006, pp. 382–383.
10. Trèves 2006, p. 488.
Bibliography
• Halmos, Paul (1974), Finite-dimensional Vector Spaces, Springer, ISBN 0-387-90093-4
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Transpositions matrix
A transpositions matrix ($Tr$ matrix) is a square $n\times n$ matrix, $n=2^{m}$, $m\in \mathbb {N} $, whose elements are obtained from the elements of a given n-dimensional vector $X=(x_{i})_{i=1,\dots ,n}$ as follows: $Tr_{i,j}=x_{(i-1)\oplus (j-1)+1}$, where $\oplus $ denotes the bitwise "exclusive or" (XOR) operation. The rows and columns of a transpositions matrix are permutations of the elements of the vector X, and there are n/2 transpositions between every two rows or columns of the matrix.
Example
The figure below shows the transpositions matrix $Tr(X)$ of order 8, created from an arbitrary vector $X={\begin{pmatrix}x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7},x_{8}\end{pmatrix}}$
$Tr(X)=\left[{\begin{array}{cccc|ccccc}x_{1}&x_{2}&x_{3}&x_{4}&x_{5}&x_{6}&x_{7}&x_{8}\\x_{2}&x_{1}&x_{4}&x_{3}&x_{6}&x_{5}&x_{8}&x_{7}\\x_{3}&x_{4}&x_{1}&x_{2}&x_{7}&x_{8}&x_{5}&x_{6}\\x_{4}&x_{3}&x_{2}&x_{1}&x_{8}&x_{7}&x_{6}&x_{5}\\\hline x_{5}&x_{6}&x_{7}&x_{8}&x_{1}&x_{2}&x_{3}&x_{4}\\x_{6}&x_{5}&x_{8}&x_{7}&x_{2}&x_{1}&x_{4}&x_{3}\\x_{7}&x_{8}&x_{5}&x_{6}&x_{3}&x_{4}&x_{1}&x_{2}\\x_{8}&x_{7}&x_{6}&x_{5}&x_{4}&x_{3}&x_{2}&x_{1}\end{array}}\right]$
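The defining rule lends itself to a direct implementation. The sketch below is illustrative Python (using 0-based indices, so that $Tr_{i,j}=x_{(i-1)\oplus (j-1)+1}$ becomes `T[i][j] = x[i ^ j]`); it builds the matrix and checks the symmetry and permutation properties:

```python
# Minimal sketch of the defining rule Tr[i][j] = x[(i-1) XOR (j-1)],
# written with 0-based indices as T[i][j] = x[i ^ j].

def tr_matrix(x):
    n = len(x)
    assert n & (n - 1) == 0, "n must be a power of two"
    return [[x[i ^ j] for j in range(n)] for i in range(n)]

x = [1, 2, 3, 4, 5, 6, 7, 8]
T = tr_matrix(x)
n = len(x)

# symmetric: T[i][j] == T[j][i]
assert all(T[i][j] == T[j][i] for i in range(n) for j in range(n))
# persymmetric: symmetric about the northeast-to-southwest diagonal
assert all(T[i][j] == T[n - 1 - j][n - 1 - i]
           for i in range(n) for j in range(n))
# every row is a permutation of x
assert all(sorted(row) == sorted(x) for row in T)
```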
Properties
• A $Tr$ matrix is a symmetric matrix.
• A $Tr$ matrix is a persymmetric matrix, i.e. it is symmetric with respect to the northeast-to-southwest diagonal too.
• Every row and every column of a $Tr$ matrix contains all n elements of the given vector $X$ without repetition.
• Every two rows of a $Tr$ matrix contain $n/2$ quadruples ("fours") of elements in which the diagonally opposite elements have the same values. For example, if $Tr_{p,q}$ and $Tr_{u,q}$ are two arbitrarily selected elements from the same column q of a $Tr$ matrix, then the $Tr$ matrix contains one quadruple of elements $(Tr_{p,q},Tr_{u,q},Tr_{p,v},Tr_{u,v})$ for which the equations $Tr_{p,q}=Tr_{u,v}$ and $Tr_{u,q}=Tr_{p,v}$ are satisfied. This property, named the "Tr-property", is specific to $Tr$ matrices.
The figure on the right shows some of these quadruples of elements in a $Tr$ matrix.
Transpositions matrix with mutually orthogonal rows (Trs matrix)
The property of quadruples of $Tr$ matrices makes it possible to create a matrix with mutually orthogonal rows and columns (a $Trs$ matrix) by changing the sign of an odd number of elements in every quadruple $(Tr_{p,q},Tr_{u,q},Tr_{p,v},Tr_{u,v})$, $p,q,u,v\in [1,n]$. Reference [5] offers an algorithm for creating a $Trs$ matrix using the Hadamard product (denoted by $\circ $) of a Tr matrix and an n-dimensional Hadamard matrix whose rows (except the first one) are rearranged relative to the rows of the Sylvester–Hadamard matrix in an order $R=[1,r_{2},\dots ,r_{n}]^{T},r_{2},\dots ,r_{n}\in [2,n]$, for which the rows of the resulting Trs matrix are mutually orthogonal.
$Trs(X)=Tr(X)\circ H(R)$
$Trs\cdot {Trs}^{T}=\parallel X\parallel ^{2}\cdot I_{n}$
where:
• "$\circ $" denotes operation Hadamard product
• $I_{n}$ is n-dimensional Identity matrix.
• $H(R)$ is n-dimensional Hadamard matrix, which rows are interchanged against the Sylvester-Hadamard[4] matrix in given order $R=[1,r_{2},\dots ,r_{n}]^{T},r_{2},\dots ,r_{n}\in [2,n]$ for which the rows of the resulting $Trs$ matrix are mutually orthogonal.
• $X$ is the vector from which the elements of $Tr$ matrix are derived.
Orderings R of the Hadamard matrix's rows were obtained experimentally for $Trs$ matrices of sizes 2, 4 and 8. It is important to note that the ordering R of the Hadamard matrix's rows (relative to the Sylvester–Hadamard matrix) does not depend on the vector $X$. It has been proven[5] that, if $X$ is a unit vector (i.e. $\parallel X\parallel =1$), then the $Trs$ matrix (obtained as described above) is a reflection matrix.
Example of obtaining Trs matrix
A transpositions matrix with mutually orthogonal rows ($Trs$ matrix) of order 4 for the vector $X={\begin{pmatrix}x_{1},x_{2},x_{3},x_{4}\end{pmatrix}}^{T}$ is obtained as:
$Trs(X)=H(R)\circ Tr(X)={\begin{pmatrix}1&1&1&1\\1&-1&1&-1\\1&-1&-1&1\\1&1&-1&-1\\\end{pmatrix}}\circ {\begin{pmatrix}x_{1}&x_{2}&x_{3}&x_{4}\\x_{2}&x_{1}&x_{4}&x_{3}\\x_{3}&x_{4}&x_{1}&x_{2}\\x_{4}&x_{3}&x_{2}&x_{1}\\\end{pmatrix}}={\begin{pmatrix}x_{1}&x_{2}&x_{3}&x_{4}\\x_{2}&-x_{1}&x_{4}&-x_{3}\\x_{3}&-x_{4}&-x_{1}&x_{2}\\x_{4}&x_{3}&-x_{2}&-x_{1}\\\end{pmatrix}}$
where $Tr(X)$ is the $Tr$ matrix obtained from the vector $X$, "$\circ $" denotes the Hadamard product, and $H(R)$ is a Hadamard matrix whose rows are interchanged in a given order $R$ for which the rows of the resulting $Trs$ matrix are mutually orthogonal. As can be seen from the equation above, the first row of the resulting $Trs$ matrix contains the elements of the vector $X$ without transpositions or sign changes. Taking into consideration that the rows of the $Trs$ matrix are mutually orthogonal, we get
$Trs(X).X=\left\|X\right\|^{2}{\begin{bmatrix}1\\0\\0\\0\end{bmatrix}}$
which means that the $Trs$ matrix rotates the vector $X$, from which it is derived, in the direction of the coordinate axis $x_{1}$.
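The order-4 construction can be verified numerically. The following sketch is illustrative Python with an arbitrary sample vector; `H_R` is the reordered Hadamard matrix written out in the example above. It checks both the orthogonality relation $Trs\cdot Trs^{T}=\parallel X\parallel ^{2}I_{n}$ and the rotation property:

```python
# Sketch of the order-4 construction: elementwise (Hadamard) product of
# Tr(X) with the row-reordered Hadamard matrix H(R), then verification
# that the rows are mutually orthogonal and that Trs(X) X = ||X||^2 e_1.

def tr_matrix(x):
    return [[x[i ^ j] for j in range(len(x))] for i in range(len(x))]

X = [2.0, -1.0, 3.0, 0.5]

H_R = [[1,  1,  1,  1],   # Sylvester-Hadamard rows reordered as in the text
       [1, -1,  1, -1],
       [1, -1, -1,  1],
       [1,  1, -1, -1]]

Trs = [[h * t for h, t in zip(hrow, trow)]
       for hrow, trow in zip(H_R, tr_matrix(X))]

norm2 = sum(v * v for v in X)

# Rows are mutually orthogonal: Trs . Trs^T = ||X||^2 I
for i in range(4):
    for j in range(4):
        d = sum(Trs[i][k] * Trs[j][k] for k in range(4))
        assert abs(d - (norm2 if i == j else 0.0)) < 1e-12

# Trs rotates X onto the first coordinate axis: Trs . X = ||X||^2 e_1
image = [sum(Trs[i][k] * X[k] for k in range(4)) for i in range(4)]
assert abs(image[0] - norm2) < 1e-12
assert all(abs(v) < 1e-12 for v in image[1:])
```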
Reference [5] gives, as examples, Matlab code for functions that create $Tr$ and $Trs$ matrices for a vector $X$ of size n = 2, 4, or 8. It remains an open question whether it is possible to create $Trs$ matrices of size greater than 8.
See also
• Symmetric matrix
• Persymmetric matrix
• Orthogonal matrix
References
1. Harville, D. A. (1997). Matrix Algebra From a Statistician's Perspective. Springer.
2. Horn, Roger A.; Johnson, Charles R. (2013), Matrix analysis (2nd ed.), Cambridge University Press, ISBN 978-0-521-54823-6
3. Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-66434-7
4. Baumert, L. D.; Hall, Marshall (1965). "Hadamard matrices of the Williamson type". Math. Comp. 19 (91): 442–447. doi:10.1090/S0025-5718-1965-0179093-2. MR 0179093.
5. Zhelezov, O. I. (2021). Determination of a Special Case of Symmetric Matrices and Their Applications. Current Topics on Mathematics and Computer Science Vol. 6, 29–45. ISBN 978-93-91473-89-1.
External links
• http://article.sapub.org/10.5923.j.ajcam.20190904.03.html
Transseries
In mathematics, the field $\mathbb {T} ^{LE}$ of logarithmic-exponential transseries is a non-Archimedean ordered differential field which extends comparability of asymptotic growth rates of elementary nontrigonometric functions to a much broader class of objects. Each log-exp transseries represents a formal asymptotic behavior; it can be manipulated formally and, when it converges (or in every case if using special semantics such as through infinite surreal numbers), corresponds to actual behavior. Transseries can also be convenient for representing functions. Through their inclusion of exponentiation and logarithms, transseries are a strong generalization of the power series at infinity ($ \sum _{n=0}^{\infty }{\frac {a_{n}}{x^{n}}}$) and other similar asymptotic expansions.
The field $\mathbb {T} ^{LE}$ was introduced independently by Dahn–Göring[1] and Ecalle[2], in the respective contexts of the model theory of exponential fields and of the study of analytic singularities, including Ecalle's proof of the Dulac conjecture. It constitutes a formal object, extending the field of exp-log functions of Hardy and the field of accelero-summable series of Ecalle.
The field $\mathbb {T} ^{LE}$ enjoys a rich structure: an ordered field with a notion of generalized series and sums, with a compatible derivation with distinguished antiderivation, compatible exponential and logarithm functions and a notion of formal composition of series.
Examples and counter-examples
Informally speaking, exp-log transseries are well-based (i.e. reverse well-ordered) formal Hahn series of real powers of the positive infinite indeterminate $x$, exponentials, logarithms and their compositions, with real coefficients. Two important additional conditions are that the exponential depth and the logarithmic depth of an exp-log transseries $f$ (that is, the maximal numbers of iterations of exp and of log occurring in $f$) must both be finite.
The following formal series are log-exp transseries:
$\sum _{n=1}^{\infty }{\frac {e^{x^{\frac {1}{n}}}}{n!}}+x^{3}+\log x+\log \log x+\sum _{n=0}^{\infty }x^{-n}+\sum _{i=1}^{\infty }e^{-\sum _{j=1}^{\infty }e^{ix^{2}-jx}}.$
$\sum _{m,n\in \mathbb {N} }x^{\frac {1}{m+1}}e^{-(\log x)^{n}}.$
The following formal series are not log-exp transseries:
$\sum _{n\in \mathbb {N} }x^{n}$ — this series is not well-based.
$\log x+\log \log x+\log \log \log x+\cdots $ — the logarithmic depth of this series is infinite
${\frac {1}{2}}x+e^{{\frac {1}{2}}\log x}+e^{e^{{\frac {1}{2}}\log \log x}}+\cdots $ — the exponential and logarithmic depths of this series are infinite
It is possible to define differential fields of transseries containing the two last series; they belong respectively to $\mathbb {T} ^{EL}$ and $\mathbb {R} \langle \langle \omega \rangle \rangle $ (see the paragraph Using surreal numbers below).
Introduction
A remarkable fact is that asymptotic growth rates of elementary nontrigonometric functions and even all functions definable in the model theoretic structure $(\mathbb {R} ,+,\times ,<,\exp )$ of the ordered exponential field of real numbers are all comparable: For all such $f$ and $g$, we have $f\leq _{\infty }g$ or $g\leq _{\infty }f$, where $f\leq _{\infty }g$ means $\exists x.\forall y>x.f(y)\leq g(y)$. The equivalence class of $f$ under the relation $f\leq _{\infty }g\wedge g\leq _{\infty }f$ is the asymptotic behavior of $f$, also called the germ of $f$ (or the germ of $f$ at infinity).
The field of transseries can be intuitively viewed as a formal generalization of these growth rates: In addition to the elementary operations, transseries are closed under "limits" for appropriate sequences with bounded exponential and logarithmic depth. However, a complication is that growth rates are non-Archimedean and hence do not have the least upper bound property. We can address this by associating a sequence with the least upper bound of minimal complexity, analogously to construction of surreal numbers. For example, $ (\sum _{k=0}^{n}x^{-k})_{n\in \mathbb {N} }$ is associated with $ \sum _{k=0}^{\infty }x^{-k}$ rather than $ \sum _{k=0}^{\infty }x^{-k}-e^{-x}$ because $e^{-x}$ decays too quickly, and if we identify fast decay with complexity, it has greater complexity than necessary (also, because we care only about asymptotic behavior, pointwise convergence is not dispositive).
Because of the comparability, transseries do not include oscillatory growth rates (such as $\sin x$). On the other hand, there are transseries such as $ \sum _{k\in \mathbb {N} }k!e^{x^{-{\frac {k}{k+1}}}}$ that do not directly correspond to convergent series or real valued functions. Another limitation of transseries is that each of them is bounded by a tower of exponentials, i.e. a finite iteration $e^{e^{.^{.^{.^{e^{x}}}}}}$ of $e^{x}$, thereby excluding tetration and other transexponential functions, i.e. functions which grow faster than any tower of exponentials. There are ways to construct fields of generalized transseries including formal transexponential terms, for instance formal solutions $e_{\omega }$ of the Abel equation $e^{e_{\omega }(x)}=e_{\omega }(x+1)$.[3]
Formal construction
Transseries can be defined as formal (potentially infinite) expressions, with rules defining which expressions are valid, comparison of transseries, arithmetic operations, and even differentiation. Appropriate transseries can then be assigned to corresponding functions or germs, but there are subtleties involving convergence. Even transseries that diverge can often be meaningfully (and uniquely) assigned actual growth rates (that agree with the formal operations on transseries) using accelero-summation, which is a generalization of Borel summation.
Transseries can be formalized in several equivalent ways; we use one of the simplest ones here.
A transseries is a well-based sum,
$\sum a_{i}m_{i},$
with finite exponential depth, where each $a_{i}$ is a nonzero real number and $m_{i}$ is a monic transmonomial ($a_{i}m_{i}$ is a transmonomial but is not monic unless the coefficient $a_{i}=1$; each $m_{i}$ is different; the order of the summands is irrelevant).
The sum might be infinite or transfinite; it is usually written in the order of decreasing $m_{i}$.
Here, well-based means that there is no infinite ascending sequence $m_{i_{1}}<m_{i_{2}}<m_{i_{3}}<\cdots $ (see well-ordering).
A monic transmonomial is one of $1,x,\log x,\log \log x,\ldots ,e^{\text{purely large transseries}}$.
Note: Because $x^{n}=e^{n\log x}$, we do not include it as a primitive, but many authors do; log-free transseries do not include $\log $ but $x^{n}e^{\cdots }$ is permitted. Also, circularity in the definition is avoided because the purely_large_transseries (above) will have lower exponential depth; the definition works by recursion on the exponential depth. See "Log-exp transseries as iterated Hahn series" (below) for a construction that uses $x^{a}e^{\cdots }$ and explicitly separates different stages.
A purely large transseries is a nonempty transseries $ \sum a_{i}m_{i}$ with every $m_{i}>1$.
Transseries have finite exponential depth, where each level of nesting of e or log increases depth by 1 (so we cannot have x + log x + log log x + ...).
Addition of transseries is termwise: $ \sum a_{i}m_{i}+\sum b_{i}m_{i}=\sum (a_{i}+b_{i})m_{i}$ (absence of a term is equated with a zero coefficient).
Comparison:
The most significant term of $ \sum a_{i}m_{i}$ is $a_{i}m_{i}$ for the largest $m_{i}$ (because the sum is well-based, this exists for nonzero transseries). $ \sum a_{i}m_{i}$ is positive iff the coefficient of the most significant term is positive (this is why we used 'purely large' above). X > Y iff X − Y is positive.
Comparison of monic transmonomials:
$x=e^{\log x},\log x=e^{\log \log x},\ldots $ – these are the only equalities in our construction.
$x>\log x>\log \log x>\cdots >1>0.$
$e^{a}<e^{b}$ iff $a<b$ (also $e^{0}=1$).
Multiplication:
$e^{a}e^{b}=e^{a+b}$
$\left(\sum a_{i}x_{i}\right)\left(\sum b_{j}y_{j}\right)=\sum _{k}\left(\sum _{i,j\,:\,z_{k}=x_{i}y_{j}}a_{i}b_{j}\right)z_{k}.$
This essentially applies the distributive law to the product; because the series is well-based, the inner sum is always finite.
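For series with finite support, this product rule is easy to realize concretely. The sketch below is illustrative Python (an assumption: monomials $x^{e}$ are represented simply by their real exponents, which covers only the most basic transmonomials) implementing the distributive-law product:

```python
# Hedged sketch: for *finitely supported* series sum a_i x^{e_i}, with a
# monomial x^e represented by its exponent e, the distributive-law product
# described in the text reduces to a finite Cauchy product over dicts.

def series_product(f, g):
    """f, g: dicts exponent -> coefficient; returns their formal product."""
    h = {}
    for e1, a in f.items():
        for e2, b in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + a * b
    # drop monomials whose coefficients cancelled
    return {e: c for e, c in h.items() if c != 0}

# (x + 1) * (x - 1) = x^2 - 1
f = {1: 1, 0: 1}
g = {1: 1, 0: -1}
assert series_product(f, g) == {2: 1, 0: -1}
```

For genuinely infinite well-based supports the inner sum is still finite, as the text notes, but a faithful implementation would need a lazy representation of the support.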
Differentiation:
$\left(\sum a_{i}x_{i}\right)'=\sum a_{i}x_{i}'$
$1'=0,x'=1$
$(e^{y})'=y'e^{y}$
$(\log y)'=y'/y$ (division is defined using multiplication).
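As a worked illustration of these rules (a routine computation, not taken from the source), the derivative of $xe^{-x}+\log x$ is obtained termwise:

```latex
\left( x e^{-x} + \log x \right)'
  = x' e^{-x} + x \left( e^{-x} \right)' + \frac{x'}{x}
  = e^{-x} - x e^{-x} + x^{-1}.
```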
With these definitions, transseries is an ordered differential field. Transseries is also a valued field, with the valuation $\nu $ given by the leading monic transmonomial, and the corresponding asymptotic relation defined for $0\neq f,g\in \mathbb {T} ^{LE}$ by $f\prec g$ if $\forall 0<r\in \mathbb {R} ,|f|<r|g|$ (where $|f|=\max(f,-f)$ is the absolute value).
Other constructions
Log-free transseries
We first define the subfield $\mathbb {T} ^{E}$ of $\mathbb {T} ^{LE}$ of so-called log-free transseries. Those are transseries which exclude any logarithmic term.
Inductive definition:
For $n\in \mathbb {N} ,$ we will define a linearly ordered multiplicative group of monomials ${\mathfrak {M}}_{n}$. We then let $\mathbb {T} _{n}^{E}$ denote the field of well-based series $\mathbb {R} [[{\mathfrak {M}}_{n}]]$. This is the set of maps $\mathbb {R} \to {\mathfrak {M}}_{n}$ with well-based (i.e. reverse well-ordered) support, equipped with pointwise sum and Cauchy product (see Hahn series). In $\mathbb {T} _{n}^{E}$, we distinguish the (non-unital) subring $\mathbb {T} _{n,\succ }^{E}$ of purely large transseries, which are series whose support contains only monomials lying strictly above $1$.
We start with ${\mathfrak {M}}_{0}=x^{\mathbb {R} }$ equipped with the product $x^{a}x^{b}:=x^{a+b}$ and the order $x^{a}\prec x^{b}\leftrightarrow a<b$.
If $n\in \mathbb {N} $ is such that ${\mathfrak {M}}_{n}$, and thus $\mathbb {T} _{n}^{E}$ and $\mathbb {T} _{n,\succ }^{E}$ are defined, we let ${\mathfrak {M}}_{n+1}$ denote the set of formal expressions $x^{a}e^{\theta }$ where $a\in \mathbb {R} $ and $\theta \in \mathbb {T} _{n,\succ }^{E}$. This forms a linearly ordered commutative group under the product $(x^{a}e^{\theta })(x^{a'}e^{\theta '})=(x^{a+a'})e^{\theta +\theta '}$ and the lexicographic order $x^{a}e^{\theta }\prec x^{a'}e^{\theta '}$ if and only if $\theta <\theta '$ or ($\theta =\theta '$ and $a<a'$).
The natural inclusion of ${\mathfrak {M}}_{0}$ into ${\mathfrak {M}}_{1}$ given by identifying $x^{a}$ and $x^{a}e^{0}$ inductively provides a natural embedding of ${\mathfrak {M}}_{n}$ into ${\mathfrak {M}}_{n+1}$, and thus a natural embedding of $\mathbb {T} _{n}^{E}$ into $\mathbb {T} _{n+1}^{E}$. We may then define the linearly ordered commutative group $ {\mathfrak {M}}=\bigcup _{n\in \mathbb {N} }{\mathfrak {M}}_{n}$ and the ordered field $ \mathbb {T} ^{E}=\bigcup _{n\in \mathbb {N} }\mathbb {T} _{n}^{E}$ which is the field of log-free transseries.
The field $\mathbb {T} ^{E}$ is a proper subfield of the field $\mathbb {R} [[{\mathfrak {M}}]]$ of well-based series with real coefficients and monomials in ${\mathfrak {M}}$. Indeed, every series $f$ in $\mathbb {T} ^{E}$ has a bounded exponential depth, i.e. the least positive integer $n$ such that $f\in \mathbb {T} _{n}^{E}$, whereas the series
$e^{-x}+e^{-e^{x}}+e^{-e^{e^{x}}}+\cdots \in \mathbb {R} [[{\mathfrak {M}}]]$
has no such bound.
Exponentiation on $\mathbb {T} ^{E}$:
The field of log-free transseries is equipped with an exponential function which is a specific morphism $\exp :(\mathbb {T} ^{E},+)\to (\mathbb {T} ^{E,>},\times )$. Let $f$ be a log-free transseries and let $n\in \mathbb {N} $ be the exponential depth of $f$, so $f\in \mathbb {T} _{n}^{E}$. Write $f$ as the sum $f=\theta +r+\varepsilon $ in $\mathbb {T} _{n}^{E},$ where $\theta \in \mathbb {T} _{n,\succ }^{E}$, $r$ is a real number and $\varepsilon $ is infinitesimal (any of them could be zero). Then the formal Hahn sum
$E(\varepsilon ):=\sum _{k\in \mathbb {N} }{\frac {\varepsilon ^{k}}{k!}}$
converges in $\mathbb {T} _{n}^{E}$, and we define $\exp(f)=e^{\theta }\exp(r)E(\varepsilon )\in \mathbb {T} _{n+1}^{E}$ where $\exp(r)$ is the value of the real exponential function at $r$.
Right-composition with $e^{x}$:
A right composition $\circ _{e^{x}}$ with the series $e^{x}$ can be defined by induction on the exponential depth by
$\left(\sum f_{\mathfrak {m}}{\mathfrak {m}}\right)\circ e^{x}:=\sum f_{\mathfrak {m}}({\mathfrak {m}}\circ e^{x}),$
with $x^{r}\circ e^{x}:=e^{rx}$. It follows inductively that monomials are preserved by $\circ _{e^{x}},$ so at each inductive step the sums are well-based and thus well defined.
Log-exp transseries
Definition:
The function $\exp $ defined above is not onto $\mathbb {T} ^{E,>}$ so the logarithm is only partially defined on $\mathbb {T} ^{E}$: for instance the series $x$ has no logarithm. Moreover, every positive infinite log-free transseries is greater than some positive power of $x$. In order to move from $\mathbb {T} ^{E}$ to $\mathbb {T} ^{LE}$, one can simply "plug" into the variable $x$ of series formal iterated logarithms $\ell _{n},n\in \mathbb {N} $ which will behave like the formal reciprocal of the $n$-fold iterated exponential term denoted $e_{n}$.
For $m,n\in \mathbb {N} ,$ let ${\mathfrak {M}}_{m,n}$ denote the set of formal expressions ${\mathfrak {u}}\circ \ell _{n}$ where ${\mathfrak {u}}\in {\mathfrak {M}}_{m}$. We turn this into an ordered group by defining $({\mathfrak {u}}\circ \ell _{n})({\mathfrak {v}}\circ \ell _{n}):=({\mathfrak {u}}{\mathfrak {v}})\circ \ell _{n}$, and defining ${\mathfrak {u}}\circ \ell _{n}\prec {\mathfrak {v}}\circ \ell _{n}$ when ${\mathfrak {u}}\prec {\mathfrak {v}}$. We define $\mathbb {T} _{m,n}^{LE}:=\mathbb {R} [[{\mathfrak {M}}_{m,n}]]$. If $n'>n$ and $m'\geq m+(n'-n),$ we embed ${\mathfrak {M}}_{m,n}$ into ${\mathfrak {M}}_{m',n'}$ by identifying an element ${\mathfrak {u}}\circ \ell _{n}$ with the term
$\left({\mathfrak {u}}\circ \overbrace {e^{x}\circ \cdots \circ e^{x}} ^{n'-n}\right)\circ \ell _{n'}.$
We then obtain $\mathbb {T} ^{LE}$ as the directed union
$\mathbb {T} ^{LE}=\bigcup _{m,n\in \mathbb {N} }\mathbb {T} _{m,n}^{LE}.$
On $\mathbb {T} ^{LE},$ the right-composition $\circ _{\ell }$ with $\ell $ is naturally defined by
$\mathbb {T} _{m,n}^{LE}\ni \left(\sum f_{{\mathfrak {m}}\circ \ell _{n}}{\mathfrak {m}}\circ \ell _{n}\right)\circ \ell :=\sum f_{{\mathfrak {m}}\circ \ell _{n}}{\mathfrak {m}}\circ \ell _{n+1}\in \mathbb {T} _{m,n+1}^{LE}.$
Exponential and logarithm:
Exponentiation can be defined on $\mathbb {T} ^{LE}$ in a similar way as for log-free transseries, but here $\exp $ also has an inverse $\log $ on $\mathbb {T} ^{LE,>}$. Indeed, for a strictly positive series $f\in \mathbb {T} _{m,n}^{LE,>}$, write $f={\mathfrak {m}}r(1+\varepsilon )$ where ${\mathfrak {m}}$ is the dominant monomial of $f$ (the largest element of its support), $r$ is the corresponding positive real coefficient, and $\varepsilon :={\frac {f}{{\mathfrak {m}}r}}-1$ is infinitesimal. The formal Hahn sum
$L(1+\varepsilon ):=\sum _{k\in \mathbb {N} }{\frac {(-\varepsilon )^{k}}{k+1}}$
converges in $\mathbb {T} _{m,n}^{LE}$. Write ${\mathfrak {m}}={\mathfrak {u}}\circ \ell _{n}$ where ${\mathfrak {u}}\in {\mathfrak {M}}_{m}$ itself has the form ${\mathfrak {u}}=x^{a}e^{\theta }$ where $\theta \in \mathbb {T} _{m,\succ }^{E}$ and $a\in \mathbb {R} $. We define $\ell ({\mathfrak {m}}):=a\ell _{n+1}+\theta \circ \ell _{n}$. We finally set
$\log(f):=\ell ({\mathfrak {m}})+\log(r)+L(1+\varepsilon )\in \mathbb {T} _{m,n+1}^{LE}.$
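For illustration (the input series is chosen arbitrarily, and is not part of the construction above), consider $f=2x^{2}e^{x}$, a log-free transseries viewed in $\mathbb {T} ^{LE}$ with $n=0$. Here ${\mathfrak {m}}=x^{2}e^{x}$, the coefficient is $2$, and $\varepsilon =0$, so with $a=2$ and $\theta =x$ the definition gives

$\log(2x^{2}e^{x})=\ell ({\mathfrak {m}})+\log 2=2\ell _{1}+x+\log 2,$

which matches the classical identity $\log(2x^{2}e^{x})=2\log x+x+\log 2$ when $\ell _{1}$ is read as a formal logarithm of $x$.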
Direct construction of log-exp transseries
One may also define the field of log-exp transseries as a subfield of the ordered field $\mathbf {No} $ of surreal numbers.[4] The field $\mathbf {No} $ is equipped with Gonshor-Kruskal's exponential and logarithm functions[5] and with its natural structure of field of well-based series under Conway normal form.[6]
Define $F_{0}^{LE}=\mathbb {R} (\omega )$, the subfield of $\mathbf {No} $ generated by $\mathbb {R} $ and the simplest positive infinite surreal number $\omega $ (which corresponds naturally to the ordinal $\omega $, and as a transseries to the series $x$). Then, for $n\in \mathbb {N} $, define $F_{n+1}^{LE}$ as the field generated by $F_{n}^{LE}$, exponentials of elements of $F_{n}^{LE}$ and logarithms of strictly positive elements of $F_{n}^{LE}$, as well as (Hahn) sums of summable families in $F_{n}^{LE}$. The union $ F_{\omega }^{LE}=\bigcup _{n\in \mathbb {N} }F_{n}^{LE}$ is naturally isomorphic to $\mathbb {T} ^{LE}$. In fact, there is a unique such isomorphism which sends $\omega $ to $x$ and commutes with exponentiation and with sums of summable families lying in $F_{\omega }^{LE}$.
Other fields of transseries
• Continuing this process by transfinite induction on $\mathbf {Ord} $ beyond $F_{\omega }^{LE}$, taking unions at limit ordinals, one obtains a proper class-sized field $\mathbb {R} \langle \langle \omega \rangle \rangle $ canonically equipped with a derivation and a composition extending that of $\mathbb {T} ^{LE}$ (see Operations on transseries below).
• If instead of $F_{0}^{LE}$ one starts with the subfield $F_{0}^{EL}:=\mathbb {R} (\omega ,\log \omega ,\log \log \omega ,\ldots )$ generated by $\mathbb {R} $ and all finite iterates of $\log $ at $\omega $, and for $n\in \mathbb {N} ,F_{n+1}^{EL}$ is the subfield generated by $F_{n}^{EL}$, exponentials of elements of $F_{n}^{EL}$ and sums of summable families in $F_{n}^{EL}$, then one obtains an isomorphic copy of the field $\mathbb {T} ^{EL}$ of exponential-logarithmic transseries, which is a proper extension of $\mathbb {T} ^{LE}$ equipped with a total exponential function.[7]
The Berarducci-Mantova derivation[8] on $\mathbf {No} $ coincides on $\mathbb {T} ^{LE}$ with its natural derivation, and is the unique derivation satisfying natural compatibility relations with the exponential ordered field structure and the generalized series field structure of $\mathbb {T} ^{EL}$ and $\mathbb {R} \langle \langle \omega \rangle \rangle .$
Unlike the derivation on $\mathbb {T} ^{LE},$ the derivations on $\mathbb {T} ^{EL}$ and $\mathbb {R} \langle \langle \omega \rangle \rangle $ are not surjective: for instance the series
${\frac {1}{\omega \log \omega \log \log \omega \cdots }}:=\exp(-(\log \omega +\log \log \omega +\log \log \log \omega +\cdots ))\in \mathbb {T} ^{EL}$
doesn't have an antiderivative in $\mathbb {T} ^{EL}$ or $\mathbb {R} \langle \langle \omega \rangle \rangle $ (this is linked to the fact that those fields contain no transexponential function).
Additional properties
Operations on the differential exponential ordered field
Transseries have very strong closure properties, and many operations can be defined on transseries:
• Log-exp transseries form an exponentially closed ordered field: the exponential and logarithmic functions are total. For example:
$\exp(x^{-1})=\sum _{n=0}^{\infty }{\frac {1}{n!}}x^{-n}\quad {\text{and}}\quad \log(x+\ell )=\ell +\sum _{n=0}^{\infty }{\frac {(x^{-1}\ell )^{n}}{n+1}}.$
• Logarithm is defined for positive arguments.
• Log-exp transseries are real-closed.
• Integration: every log-exp transseries $f$ has a unique antiderivative $F\in \mathbb {T} ^{LE}$ with zero constant term, that is, $F'=f$ and $F_{1}=0$.
• Logarithmic antiderivative: for $f\in \mathbb {T} ^{LE}$, there is $h\in \mathbb {T} ^{LE}$ with $f'=fh'$.
Note 1. The last two properties mean that $\mathbb {T} ^{LE}$ is Liouville closed.
Note 2. Just like an elementary nontrigonometric function, each positive infinite transseries $f$ has integral exponentiality, even in this strong sense:
$\exists k,n\in \mathbb {N} :\quad \ell _{n-k}-1\leq \ell _{n}\circ f\leq \ell _{n-k}+1.$
The number $k$ is unique; it is called the exponentiality of $f$.
Composition of transseries
An original property of $\mathbb {T} ^{LE}$ is that it admits a composition $\circ :\mathbb {T} ^{LE}\times \mathbb {T} ^{LE,>,\succ }\to \mathbb {T} ^{LE}$ (where $\mathbb {T} ^{LE,>,\succ }$ is the set of positive infinite log-exp transseries) which enables us to see each log-exp transseries $f$ as a function on $\mathbb {T} ^{LE,>,\succ }$. Informally speaking, for $g\in \mathbb {T} ^{LE,>,\succ }$ and $f\in \mathbb {T} ^{LE}$, the series $f\circ g$ is obtained by replacing each occurrence of the variable $x$ in $f$ by $g$.
Properties
• Associativity: for $f\in \mathbb {T} ^{LE}$ and $g,h\in \mathbb {T} ^{LE,>,\succ }$, we have $g\circ h\in \mathbb {T} ^{LE,>,\succ }$ and $f\circ (g\circ h)=(f\circ g)\circ h$.
• Compatibility of right-compositions: For $g\in \mathbb {T} ^{LE,>,\succ }$, the function $\circ _{g}:f\mapsto f\circ g$ is a field automorphism of $\mathbb {T} ^{LE}$ which commutes with formal sums, sends $x$ onto $g$, $e^{x}$ onto $\exp(g)$ and $\ell $ onto $\log(g)$. We also have $\circ _{x}=\operatorname {id} _{\mathbb {T} ^{LE}}$.
• Uniqueness: the composition is the unique operation satisfying the two previous properties.
• Monotonicity: for $f\in \mathbb {T} ^{LE}$, the function $g\mapsto f\circ g$ is constant or strictly monotone on $\mathbb {T} ^{LE,>,\succ }$. Whether it is increasing or decreasing depends on the sign of $f'$.
• Chain rule: for $f\in \mathbb {T} ^{LE}$ and $g\in \mathbb {T} ^{LE,>,\succ }$, we have $(f\circ g)'=g'\cdot (f'\circ g)$.
• Functional inverse: for $g\in \mathbb {T} ^{LE,>,\succ }$, there is a unique series $h\in \mathbb {T} ^{LE,>,\succ }$ with $g\circ h=h\circ g=x$.
• Taylor expansions: each log-exp transseries $f$ has a Taylor expansion around every point in the sense that for every $g\in \mathbb {T} ^{LE,>,\succ }$ and for sufficiently small $\varepsilon \in \mathbb {T} ^{LE}$, we have
$f\circ (g+\varepsilon )=\sum _{k\in \mathbb {N} }{\frac {f^{(k)}\circ g}{k!}}\varepsilon ^{k}$
where the sum is a formal Hahn sum of a summable family.
• Fractional iteration: for $f\in \mathbb {T} ^{LE,>,\succ }$ with exponentiality $0$ and any real number $a$, the fractional iterate $f^{a}$ of $f$ is defined.[9]
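As a direct check of the Taylor expansion property above (this example is not from the source, but follows immediately from the formula), take $f=e^{x}$: every derivative $f^{(k)}$ is again $e^{x}$, so the expansion reduces to the familiar identity

$e^{x}\circ (g+\varepsilon )=e^{g}e^{\varepsilon }=\sum _{k\in \mathbb {N} }{\frac {e^{g}}{k!}}\varepsilon ^{k}.$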
Theory of differential ordered valued differential field
The $\left\langle +,\times ,\partial ,<,\prec \right\rangle $ theory of $\mathbb {T} ^{LE}$ is decidable and can be axiomatized as follows (this is Theorem 2.2 of Aschenbrenner et al.):
• $\mathbb {T} ^{LE}$ is an ordered valued differential field.
• $f>0\wedge f\succ 1\Longrightarrow f'>0$
• $f\prec 1\Longrightarrow f'\prec 1$
• $\forall f\exists g:\quad g'=f$
• $\forall f\exists h:\quad h'=fh$
• Intermediate value property (IVP):
$P(f)<0\wedge P(g)>0\Longrightarrow \exists h:\quad P(h)=0,$
where P is a differential polynomial, i.e. a polynomial in $f,f',f'',\ldots ,f^{(k)}.$
In this theory, exponentiation is essentially defined for functions (using differentiation) but not constants; in fact, every definable subset of $\mathbb {R} ^{n}$ is semialgebraic.
Theory of ordered exponential field
The $\langle +,\times ,\exp ,<\rangle $ theory of $\mathbb {T} ^{LE}$ is that of the real ordered exponential field $(\mathbb {R} ,+,\times ,\exp ,<)$, which is model complete by Wilkie's theorem.
Hardy fields
$\mathbb {T} _{\mathrm {as} }$ is the field of accelero-summable transseries, and using accelero-summation, we have the corresponding Hardy field, which is conjectured to be the maximal Hardy field corresponding to a subfield of $\mathbb {T} $. (This conjecture is informal since we have not defined which isomorphisms of Hardy fields into differential subfields of $\mathbb {T} $ are permitted.) $\mathbb {T} _{\mathrm {as} }$ is conjectured to satisfy the above axioms of $\mathbb {T} $. Without defining accelero-summation, we note that when operations on convergent transseries produce a divergent one while the same operations on the corresponding germs produce a valid germ, we can then associate the divergent transseries with that germ.
A Hardy field is said to be maximal if it is properly contained in no other Hardy field. By an application of Zorn's lemma, every Hardy field is contained in a maximal Hardy field. It is conjectured that all maximal Hardy fields are elementarily equivalent as differential fields, and indeed have the same first-order theory as $\mathbb {T} ^{LE}$.[10] Log-exp transseries do not themselves correspond to a maximal Hardy field, since not every transseries corresponds to a real function, and maximal Hardy fields always contain transexponential functions.[11]
See also
• Formal power series
• Hahn series
• Exponentially closed field
• Hardy field
References
1. Dahn, Bernd and Göring, Peter, Notes on exponential-logarithmic terms, Fundamenta Mathematicae, 1987
2. Ecalle, Jean, Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac, Actualités mathématiques (Paris), Hermann, 1992
3. Schmeling, Michael, Corps de transséries, PhD thesis, 2001
4. Berarducci, Alessandro and Mantova, Vincenzo, Transseries as germs of surreal functions, Transactions of the American Mathematical Society, 2017
5. Gonshor, Harry, An Introduction to the Theory of Surreal Numbers, Cambridge University Press, 1986
6. Conway, John Horton, On numbers and games, Academic Press, London, 1976
7. Kuhlmann, Salma and Tressl, Marcus, Comparison of exponential-logarithmic and logarithmic-exponential series, Mathematical Logic Quarterly, 2012
8. Berarducci, Alessandro and Mantova, Vincenzo, Surreal numbers, derivations and transseries, European Mathematical Society, 2015
9. Edgar, G. A. (2010), Fractional Iteration of Series and Transseries, arXiv:1002.2378, Bibcode:2010arXiv1002.2378E
10. Aschenbrenner, Matthias, and van den Dries, Lou and van der Hoeven, Joris, On Numbers, Germs, and Transseries, In Proc. Int. Cong. of Math., vol. 1, pp. 1–24, 2018
11. Boshernitzan, Michael, Hardy fields and existence of transexponential functions, In aequationes mathematicae, vol. 30, issue 1, pp. 258–280, 1986.
• Edgar, G. A. (2010), "Transseries for beginners", Real Analysis Exchange, 35 (2): 253–310, arXiv:0801.4877, doi:10.14321/realanalexch.35.2.0253, S2CID 14290638.
• Aschenbrenner, Matthias; Dries, Lou van den; Hoeven, Joris van der (2017), On Numbers, Germs, and Transseries, arXiv:1711.06936, Bibcode:2017arXiv171106936A.
Transshipment problem
Transshipment problems form a subgroup of transportation problems, where transshipment is allowed. In transshipment, transportation may or must go through intermediate nodes, possibly changing modes of transport.
The transshipment problem has its origins in medieval times, when trading started to become a mass phenomenon. Obtaining the minimum-cost route was the main priority; however, technological development slowly shifted the focus to minimum-duration transportation problems.
Overview
Transshipment or Transhipment is the shipment of goods or containers to an intermediate destination, and then from there to yet another destination. One possible reason is to change the means of transport during the journey (for example from ship transport to road transport), known as transloading. Another reason is to combine small shipments into a large shipment (consolidation), dividing the large shipment at the other end (deconsolidation). Transshipment usually takes place in transport hubs. Much international transshipment also takes place in designated customs areas, thus avoiding the need for customs checks or duties, otherwise a major hindrance for efficient transport.
Formulation of the problem
A few initial assumptions are required in order to formulate the transshipment problem completely:
• The system consists of m origins and n destinations, with the following indexing respectively: $i=1,\ldots ,m$, $j=1,\ldots ,n$
• One uniform good exists which needs to be shipped
• The required amount of good at the destinations equals the produced quantity available at the origins
• Transportation simultaneously starts at the origins and is possible from any node to any other (also to an origin and from a destination)
• Transportation costs are independent of the shipped amount
• The transshipment problem is a unique linear programming problem (LPP) in that it considers the assumption that all sources and sinks can both receive and distribute shipments at the same time (i.e., function in both directions)[1]
Notations
• $t_{r,s}$: time of transportation from node r to node s
• $a_{i}$: goods available at node i
• $b_{m+j}$: demand for the good at node (m+j)
• $x_{r,s}$: actual amount transported from node r to node s
Mathematical formulation of the problem
The goal is to minimize $\sum \limits _{i=1}^{m}\sum \limits _{j=1}^{n}t_{i,j}x_{i,j}$ subject to:
• $x_{r,s}\geq 0$; $\forall r=1\ldots m$, $s=1\ldots n$
• $\sum _{s=1}^{m+n}{x_{i,s}}-\sum _{r=1}^{m+n}{x_{r,i}}=a_{i}$; $\forall i=1\ldots m$
• $\sum _{r=1}^{m+n}{x_{r,m+j}}-\sum _{s=1}^{m+n}{x_{m+j,s}}=b_{m+j}$; $\forall j=1\ldots n$
• $\sum _{i=1}^{m}{a_{i}}=\sum _{j=1}^{n}{b_{m+j}}$
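The linear program above can be handed to any LP solver. Below is a minimal sketch using SciPy's `linprog`; all network data (arcs, times, supplies, demands) are invented example values, not taken from the text.

```python
# Minimal sketch of the transshipment LP above using SciPy's HiGHS solver.
# All network data are invented example values.
import numpy as np
from scipy.optimize import linprog

# nodes 0, 1 are origins (m = 2); nodes 2, 3 are destinations (n = 2)
supply = {0: 5, 1: 5}                      # a_i
demand = {2: 4, 3: 6}                      # b_{m+j}; total supply = total demand
t = {(0, 1): 1, (0, 2): 4, (0, 3): 5,      # t_{r,s}: time on each allowed arc
     (1, 2): 6, (1, 3): 2, (2, 3): 1, (3, 2): 1}

arcs = list(t)
c = np.array([t[a] for a in arcs], dtype=float)

# Flow balance per node: (outflow - inflow) = a_i at origins, -b_{m+j} at destinations
A_eq = np.zeros((4, len(arcs)))
for k, (r, s) in enumerate(arcs):
    A_eq[r, k] += 1.0                      # arc (r, s) leaves node r
    A_eq[s, k] -= 1.0                      # arc (r, s) enters node s
b_eq = np.zeros(4)
for i, a in supply.items():
    b_eq[i] = a
for j, b in demand.items():
    b_eq[j] = -b

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.fun)  # -> 29.0
```

In this example the cheapest way to serve destination 3 from origin 0 is through origin 1 ($0\to 1\to 3$ costs $1+2=3$ versus $5$ directly), so every optimal plan (total time 29) routes some flow through an intermediate node — the defining feature of transshipment.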
Solution
Since in most cases an explicit expression for the objective function does not exist, an alternative method was suggested by Rajeev and Satya. The method uses two consecutive phases to reveal the minimal-duration routes from the origins to the destinations. The first phase solves $n\cdot m$ time-minimizing problems, in each case using the remaining $n+m-2$ intermediate nodes as transshipment points. This yields the minimal-duration transportation between all sources and destinations. During the second phase, a standard time-minimizing problem is solved. The solution of the time-minimizing transshipment problem is the joint outcome of these two phases.
Phase 1
Since costs are independent of the shipped amount, in each individual problem one can normalize the shipped quantity to 1. The problem is now simplified to an assignment problem from i to m+j. Let $x'_{r,s}$ be 1 if the edge between nodes r and s is used during the optimization, and 0 otherwise. Now the goal is to determine all $x'_{r,s}$ which minimize the objective function:
$T_{i,m+j}=\sum _{r=1}^{m+n}\sum _{s=1}^{m+n}{t_{r,s}\cdot x'_{r,s}}$,
such that
• $\sum _{s=1}^{m+n}{x'_{r,s}}=1$
• $\sum _{r=1}^{m+n}{x'_{r,s}}=1$
• $x'_{m+j,i}=1$
• $x'_{r,s}\in \{0,1\}$.
Corollary
• The self-loops $x'_{r,r}=1$ need to be excluded from the model; on the other hand, without the $x'_{m+j,i}=1$ constraint the optimal solution would consist only of $x'_{r,r}$-type loops, which obviously cannot form a feasible route.
• Instead of imposing $x'_{m+j,i}=1$, one can set $t_{m+j,i}=-M$, where M is an arbitrarily large positive number. With that modification the formulation above is reduced to the form of a standard assignment problem, which can be solved with the Hungarian method.
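Once the subproblem is in standard assignment form, any off-the-shelf Hungarian-method implementation applies. A small sketch with SciPy's solver; the 3×3 duration matrix is invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# invented duration matrix for one assignment subproblem (rows assigned to columns)
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

# linear_sum_assignment solves the assignment problem (Hungarian-style algorithm)
row_ind, col_ind = linear_sum_assignment(cost)
total = cost[row_ind, col_ind].sum()
print(list(col_ind), total)  # optimal column per row, and the minimal total duration
```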
Phase 2
During the second phase, a time minimization problem is solved with m origins and n destinations without transshipment. This phase differs in two main aspects from the original setup:
• Transportation is only possible from an origin to a destination
• Transportation time from i to m+j is the sum of durations along the optimal route calculated in Phase 1. It is denoted by $t'_{i,m+j}$ to distinguish it from the times introduced during the first stage.
In mathematical form
The goal is to find $x_{i,m+j}\geq 0$ which minimize
$z=\max \left\{t'_{i,m+j}:x_{i,m+j}>0\;\;(i=1\ldots m,\;j=1\ldots n)\right\}$,
such that
• $\sum _{j=1}^{n}{x_{i,m+j}}=a_{i}$
• $\sum _{i=1}^{m}{x_{i,m+j}}=b_{m+j}$
• $\sum _{i=1}^{m}{a_{i}}=\sum _{j=1}^{n}{b_{m+j}}$
This problem can be solved with the method developed by Prakash. The set $\left\{t'_{i,m+j},i=1\ldots m,\;j=1\ldots n\right\}$ needs to be partitioned into subsets $L_{k},k=1\ldots q$, where each $L_{k}$ contains the $t'_{i,m+j}$-s with the same value. The sequence $(L_{k})$ is organized so that $L_{1}$ contains the largest-valued $t'_{i,m+j}$-s, $L_{2}$ the second largest, and so on. Furthermore, positive priority factors $M_{k}$ are assigned to the subsets $\sum _{L_{k}}{x_{i,m+j}}$, with the following rule:
$\alpha M_{k}-\beta M_{k+1}=\left\{{\begin{array}{cc}{\text{negative}},&{\text{if }}\alpha <0\\{\text{positive}},&{\text{if }}\alpha >0\end{array}}\right.$
for all $\beta $. With this notation the goal is to find all $x_{i,m+j}$ which minimize the goal function
$z_{1}=\sum _{k=1}^{q}{M_{k}}\sum _{L_{k}}{x_{i,m+j}}$
such that
• $\sum _{j=1}^{n}{x_{i,m+j}}=a_{i}$
• $\sum _{i=1}^{m}{x_{i,m+j}}=b_{m+j}$
• $\sum _{i=1}^{m}{a_{i}}=\sum _{j=1}^{n}{b_{m+j}}$
• $\alpha M_{k}-\beta M_{k+1}=\left\{{\begin{array}{cc}{\text{negative}},&{\text{if }}\alpha <0\\{\text{positive}},&{\text{if }}\alpha >0\end{array}}\right.$
Extension
Some authors, such as Das et al. (1999) and Malakooti (2013), have considered the multi-objective transshipment problem.
References
1. "Transshipment Problem and Its Variants: A Review". ResearchGate. Retrieved 2020-11-02.
• R.J Aguilar, Systems Analysis and Design. Prentice Hall, Inc. Englewood Cliffs, New Jersey (1973) pp. 209–220
• H. L. Bhatia, K. Swarup, M. C. Puri, Indian J. pure appl. Math. 8 (1977) 920-929
• R. S. Gartinkel, M. R. Rao, Nav. Res. Log. Quart. 18 (1971) 465-472
• G. Hadley, Linear Programming, Addison-Wesley Publishing Company, (1962) pp. 368–373
• P. L. Hammer, Nav. Res. Log. Quart. 16 (1969) 345-357
• P. L. Hammer, Nav. Res. Log. Quart. 18 (1971) 487-490
• A.J.Hughes, D.E.Grawog, Linear Programming: An Emphasis On Decision Making, Addison-Wesley Publishing Company, pp. 300–312
• H.W.Kuhn, Nav. Res. Log. Quart. 2 (1955) 83-97
• A.Orden, Management Sci, 2 (1956) 276-285
• S.Parkash, Proc. Indian Acad. Sci. (Math. Sci.) 91 (1982) 53-57
• C.S. Ramakrishnan, OPSEARCH 14 (1977) 207-209
• C.R.Seshan, V.G.Tikekar, Proc. Indian Acad. Sci. (Math. Sci.) 89 (1980) 101-102
• J.K.Sharma, K.Swarup, Proc. Indian Acad. Sci. (Math. Sci.) 86 (1977) 513-518
• W.Szwarc, Nav. Res. Log. Quart. 18 (1971) 473-485
• Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.
• Das, S. K., A. Goswami, and S. S. Alam. “Multiobjective Transportation Problem with Interval Cost, Source and Destination Parameters.” European Journal of Operational Research, Vol. 117, No. 1, 1999, pp. 100–112
Transvectant
In mathematical invariant theory, a transvectant is an invariant formed from n invariants in n variables using Cayley's Ω process.
Definition
If Q1,...,Qn are functions of n variables x = (x1,...,xn) and r ≥ 0 is an integer then the rth transvectant of these functions is a function of n variables given by
$\operatorname {tr} \,\Omega ^{r}(Q_{1}\otimes \cdots \otimes Q_{n})$
where Ω is Cayley's Ω process, the tensor product means take a product of functions with different variables x1,..., xn, and tr means set all the vectors xk equal.
Examples
The zeroth transvectant is the product of the n functions.
The first transvectant is the Jacobian determinant of the n functions.
The second transvectant is a constant times the completely polarized form of the Hessian of the n functions.
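The case n = 2, r = 1 can be checked symbolically: applying the Ω operator to $Q_{1}\otimes Q_{2}$ on two copies of the variables and then identifying the copies reproduces the Jacobian determinant. A sketch with SymPy; the two polynomials are arbitrary test inputs.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# two invented polynomial inputs Q1, Q2 in the variables (x1, x2)
Q1 = x1**2 + x2
Q2 = x1 * x2

# tensor product: Q1 on the first copy of the variables, Q2 on the second
F = Q1 * Q2.subs({x1: y1, x2: y2})

# Cayley's Omega process for n = 2: d/dx1 d/dy2 - d/dx2 d/dy1
omega_F = sp.diff(F, x1, y2) - sp.diff(F, x2, y1)

# "tr": set both copies of the variables equal again
first_transvectant = omega_F.subs({y1: x1, y2: x2})

# the first transvectant is the Jacobian determinant of (Q1, Q2)
jacobian = sp.Matrix([Q1, Q2]).jacobian([x1, x2]).det()
print(sp.simplify(first_transvectant - jacobian))  # -> 0
```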
References
• Olver, Peter J. (1999), Classical invariant theory, Cambridge University Press, ISBN 978-0-521-55821-1
• Olver, Peter J.; Sanders, Jan A. (2000), "Transvectants, modular forms, and the Heisenberg algebra", Advances in Applied Mathematics, 25 (3): 252–283, CiteSeerX 10.1.1.46.803, doi:10.1006/aama.2000.0700, ISSN 0196-8858, MR 1783553
Shear mapping
In plane geometry, a shear mapping is a linear map that displaces each point in a fixed direction, by an amount proportional to its signed distance from the line that is parallel to that direction and goes through the origin.[1] This type of mapping is also called shear transformation, transvection, or just shearing.
An example is the mapping that takes any point with coordinates $(x,y)$ to the point $(x+2y,y)$. In this case, the displacement is horizontal by a factor of 2 where the fixed line is the x-axis, and the signed distance is the y-coordinate. Note that points on opposite sides of the reference line are displaced in opposite directions.
Shear mappings must not be confused with rotations. Applying a shear map to a set of points of the plane will change all angles between them (except straight angles), and the length of any line segment that is not parallel to the direction of displacement. Therefore, it will usually distort the shape of a geometric figure, for example turning squares into parallelograms, and circles into ellipses. However a shearing does preserve the area of geometric figures and the alignment and relative distances of collinear points. A shear mapping is the main difference between the upright and slanted (or italic) styles of letters.
The same definition is used in three-dimensional geometry, except that the distance is measured from a fixed plane. A three-dimensional shearing transformation preserves the volume of solid figures, but changes areas of plane figures (except those that are parallel to the displacement). This transformation is used to describe laminar flow of a fluid between plates, one moving in a plane above and parallel to the first.
In the general n-dimensional Cartesian space $\mathbb {R} ^{n},$ the distance is measured from a fixed hyperplane parallel to the direction of displacement. This geometric transformation is a linear transformation of $\mathbb {R} ^{n}$ that preserves the n-dimensional measure (hypervolume) of any set.
Definition
Horizontal and vertical shear of the plane
In the plane $\mathbb {R} ^{2}=\mathbb {R} \times \mathbb {R} $, a horizontal shear (or shear parallel to the x-axis) is a function that takes a generic point with coordinates $(x,y)$ to the point $(x+my,y)$; where m is a fixed parameter, called the shear factor.
The effect of this mapping is to displace every point horizontally by an amount proportionally to its y-coordinate. Any point above the x-axis is displaced to the right (increasing x) if m > 0, and to the left if m < 0. Points below the x-axis move in the opposite direction, while points on the axis stay fixed.
Straight lines parallel to the x-axis remain where they are, while all other lines are turned (by various angles) about the point where they cross the x-axis. Vertical lines, in particular, become oblique lines with slope ${\tfrac {1}{m}}.$ Therefore, the shear factor m is the cotangent of the shear angle $\varphi $ between the former verticals and the x-axis. (In the example on the right the square is tilted by 30°, so the shear angle is 60°.)
If the coordinates of a point are written as a column vector (a 2×1 matrix), the shear mapping can be written as multiplication by a 2×2 matrix:
${\begin{pmatrix}x^{\prime }\\y^{\prime }\end{pmatrix}}={\begin{pmatrix}x+my\\y\end{pmatrix}}={\begin{pmatrix}1&m\\0&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.$
A vertical shear (or shear parallel to the y-axis) of lines is similar, except that the roles of x and y are swapped. It corresponds to multiplying the coordinate vector by the transposed matrix:
${\begin{pmatrix}x^{\prime }\\y^{\prime }\end{pmatrix}}={\begin{pmatrix}x\\mx+y\end{pmatrix}}={\begin{pmatrix}1&0\\m&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.$
The vertical shear displaces points to the right of the y-axis up or down, depending on the sign of m. It leaves vertical lines invariant, but tilts all other lines about the point where they meet the y-axis. Horizontal lines, in particular, get tilted by the shear angle $\varphi $ to become lines with slope m.
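The two matrix formulas can be checked numerically. The sketch below (shear factor and test points chosen arbitrarily) applies a horizontal shear to the unit square, showing that the x-axis is fixed, points above it move right, and the area is preserved since the determinant is 1.

```python
import numpy as np

m = 2.0                                  # shear factor (arbitrary example value)
H = np.array([[1.0, m],
              [0.0, 1.0]])               # horizontal shear matrix

# unit square vertices as columns; the shear turns it into a parallelogram
square = np.array([[0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
sheared = H @ square
print(sheared.T.tolist())                # vertex (1, 1) maps to (3, 1); the x-axis is fixed

# shear mappings preserve area: the determinant is 1
print(round(np.linalg.det(H), 12))       # -> 1.0
```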
General shear mappings
For a vector space V and subspace W, a shear fixing W translates all vectors in a direction parallel to W.
To be more precise, if V is the direct sum of W and W′, and we write vectors as
$v=w+w'$
correspondingly, the typical shear L fixing W is
$L(v)=(Lw+Lw')=(w+Mw')+w',$
where M is a linear mapping from W′ into W. Therefore in block matrix terms L can be represented as
${\begin{pmatrix}I&M\\0&I\end{pmatrix}}.$
Applications
The following applications of shear mapping were noted by William Kingdon Clifford:
"A succession of shears will enable us to reduce any figure bounded by straight lines to a triangle of equal area."
"... we may shear any triangle into a right-angled triangle, and this will not alter its area. Thus the area of any triangle is half the area of the rectangle on the same base and with height equal to the perpendicular on the base from the opposite angle."[2]
The area-preserving property of a shear mapping can be used for results involving area. For instance, the Pythagorean theorem has been illustrated with shear mapping[3] as well as the related geometric mean theorem.
An algorithm due to Alan W. Paeth uses a sequence of three shear mappings (horizontal, vertical, then horizontal again) to rotate a digital image by an arbitrary angle. The algorithm is very simple to implement, and very efficient, since each step processes only one column or one row of pixels at a time.[4]
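Paeth's decomposition can be verified directly: with shear factors $-\tan(\theta /2)$ (horizontal) and $\sin \theta $ (vertical), the product of the three shear matrices equals the rotation matrix. A numerical sketch (the angle is chosen arbitrarily):

```python
import numpy as np

theta = np.deg2rad(30.0)                 # arbitrary test angle
a = -np.tan(theta / 2.0)                 # horizontal shear factor
b = np.sin(theta)                        # vertical shear factor

Hx = np.array([[1.0, a], [0.0, 1.0]])    # horizontal shear
Hy = np.array([[1.0, 0.0], [b, 1.0]])    # vertical shear

R = Hx @ Hy @ Hx                         # shear - shear - shear
R_direct = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, R_direct))          # -> True
```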
In typography, normal text transformed by a shear mapping results in oblique type.
In pre-Einsteinian Galilean relativity, transformations between frames of reference are shear mappings called Galilean transformations. These are also sometimes seen when describing moving reference frames relative to a "preferred" frame, sometimes referred to as absolute time and space.
See also
• Shear matrix
• Transformation matrix
References
Wikimedia Commons has media related to Shear (geometry).
The Wikibook Abstract Algebra has a page on the topic of: Shear mapping
1. Definition according to Weisstein, Eric W. Shear From MathWorld − A Wolfram Web Resource
2. William Kingdon Clifford (1885) Common Sense and the Exact Sciences, page 113
3. Hohenwarter, M Pythagorean theorem by shear mapping; made using GeoGebra. Drag the sliders to observe the shears
4. Alan Paeth (1986), A Fast Algorithm for General Raster Rotation. Proceedings of Graphics Interface '86, pages 77–81.
Transversal plane
In geometry, a transversal plane is a plane that intersects (but does not contain) two or more lines or planes. A transversal plane may also form dihedral angles.
Theorems
Transversal plane theorem for lines: Lines that intersect a transversal plane are parallel if and only if their alternate interior angles formed by the points of intersection are congruent.
Transversal plane theorem for planes: Planes intersected by a transversal plane are parallel if and only if their alternate interior dihedral angles are congruent.
Transversal line containment theorem: If a transversal line is contained in any plane other than the plane containing all the lines, then the plane is a transversal plane.
Transversality (mathematics)
In mathematics, transversality is a notion that describes how spaces can intersect; transversality can be seen as the "opposite" of tangency, and plays a role in general position. It formalizes the idea of a generic intersection in differential topology. It is defined by considering the linearizations of the intersecting spaces at the points of intersection.
Definition
Two submanifolds of a given finite-dimensional smooth manifold are said to intersect transversally if at every point of intersection, their separate tangent spaces at that point together generate the tangent space of the ambient manifold at that point.[1] Manifolds that do not intersect are vacuously transverse. If the manifolds are of complementary dimension (i.e., their dimensions add up to the dimension of the ambient space), the condition means that the tangent space to the ambient manifold is the direct sum of the two smaller tangent spaces. If an intersection is transverse, then the intersection will be a submanifold whose codimension is equal to the sums of the codimensions of the two manifolds. In the absence of the transversality condition the intersection may fail to be a submanifold, having some sort of singular point.
In particular, this means that transverse submanifolds of complementary dimension intersect in isolated points (i.e., a 0-manifold). If both submanifolds and the ambient manifold are oriented, their intersection is oriented. When the intersection is zero-dimensional, the orientation is simply a plus or minus for each point.
One notation for the transverse intersection of two submanifolds $L_{1}$ and $L_{2}$ of a given manifold $M$ is $L_{1}\pitchfork L_{2}$. This notation can be read in two ways: either as “$L_{1}$ and $L_{2}$ intersect transversally” or as an alternative notation for the set-theoretic intersection $L_{1}\cap L_{2}$ of $L_{1}$ and $L_{2}$ when that intersection is transverse. In this notation, the definition of transversality reads
$L_{1}\pitchfork L_{2}\iff \forall p\in L_{1}\cap L_{2},T_{p}M=T_{p}L_{1}+T_{p}L_{2}.$
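For curves in the plane, the defining condition $T_{p}M=T_{p}L_{1}+T_{p}L_{2}$ reduces to linear independence of the two tangent vectors at each intersection point, which can be tested with a 2×2 determinant. A small numerical sketch (the curves are example data, not from the text):

```python
import numpy as np

def transverse_at(t1, t2):
    """Two curves in R^2 meet transversally at a point iff their tangent
    vectors there are linearly independent, i.e. the 2x2 determinant is nonzero."""
    return not np.isclose(np.linalg.det(np.column_stack([t1, t2])), 0.0)

# parabola y = x^2 (tangent (1, 2x)) against the line y = x (tangent (1, 1)):
# they meet at (0, 0) and (1, 1), transversally at both points
print(transverse_at([1.0, 0.0], [1.0, 1.0]))   # at (0, 0) -> True
print(transverse_at([1.0, 2.0], [1.0, 1.0]))   # at (1, 1) -> True

# against the line y = 0, the parabola is tangent at the origin: not transverse
print(transverse_at([1.0, 0.0], [1.0, 0.0]))   # -> False
```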
Transversality of maps
The notion of transversality of a pair of submanifolds is easily extended to transversality of a submanifold and a map to the ambient manifold, or to a pair of maps to the ambient manifold, by asking whether the pushforwards of the tangent spaces along the preimage of points of intersection of the images generate the entire tangent space of the ambient manifold.[2] If the maps are embeddings, this is equivalent to transversality of submanifolds.
Meaning of transversality for different dimensions
Suppose we have transverse maps $f_{1}:L_{1}\to M$ and $f_{2}:L_{2}\to M$ where $L_{1},L_{2}$ and $M$ are manifolds with dimensions $\ell _{1},\ell _{2}$ and $m$ respectively.
The meaning of transversality differs a lot depending on the relative dimensions of $M,L_{1}$ and $L_{2}$. The relationship between transversality and tangency is clearest when $\ell _{1}+\ell _{2}=m$.
We can consider three separate cases:
1. When $\ell _{1}+\ell _{2}<m$, it is impossible for the image of $L_{1}$ and $L_{2}$'s tangent spaces to span $M$'s tangent space at any point. Thus any intersection between $f_{1}$ and $f_{2}$ cannot be transverse. However, non-intersecting manifolds vacuously satisfy the condition, so can be said to intersect transversely.
2. When $\ell _{1}+\ell _{2}=m$, the image of $L_{1}$ and $L_{2}$'s tangent spaces must sum directly to $M$'s tangent space at any point of intersection. Their intersection thus consists of isolated signed points, i.e. a zero-dimensional manifold.
3. When $\ell _{1}+\ell _{2}>m$ this sum needn't be direct. In fact it cannot be direct if $f_{1}$ and $f_{2}$ are immersions at their point of intersection, as happens in the case of embedded submanifolds. If the maps are immersions, the intersection of their images will be a manifold of dimension $\ell _{1}+\ell _{2}-m.$
Intersection product
Given any two smooth submanifolds, it is possible to perturb either of them by an arbitrarily small amount such that the resulting submanifold intersects transversally with the fixed submanifold. Such perturbations do not affect the homology class of the manifolds or of their intersections. For example, if manifolds of complementary dimension intersect transversally, the signed sum of the number of their intersection points does not change even if we isotope the manifolds to another transverse intersection. (The intersection points can be counted modulo 2, ignoring the signs, to obtain a coarser invariant.) This descends to a bilinear intersection product on homology classes of any dimension, which is Poincaré dual to the cup product on cohomology. Like the cup product, the intersection product is graded-commutative.
Examples of transverse intersections
The simplest non-trivial example of transversality is of arcs in a surface. An intersection point between two arcs is transverse if and only if it is not a tangency, i.e., their tangent lines inside the tangent plane to the surface are distinct.
In a three-dimensional space, transverse curves do not intersect. Curves transverse to surfaces intersect in points, and surfaces transverse to each other intersect in curves. Curves that are tangent to a surface at a point (for instance, curves lying on a surface) do not intersect the surface transversally.
Here is a more specialised example: suppose that $G$ is a simple Lie group and ${\mathfrak {g}}$ is its Lie algebra. By the Jacobson–Morozov theorem every nilpotent element $e\in {\mathfrak {g}}$ can be included into an ${\mathfrak {sl_{2}}}$-triple $(e,h,f)$. The representation theory of ${\mathfrak {sl_{2}}}$ tells us that ${\mathfrak {g}}=[{\mathfrak {g}},e]\oplus {\mathfrak {g}}_{f}$. The space $[{\mathfrak {g}},e]$ is the tangent space at $e$ to the adjoint orbit ${\rm {{Ad}(G)e}}$ and so the affine space $e+{\mathfrak {g}}_{f}$ intersects the orbit of $e$ transversally. The space $e+{\mathfrak {g}}_{f}$ is known as the "Slodowy slice" after Peter Slodowy.
Applications
Optimal control
In fields utilizing the calculus of variations or the related Pontryagin maximum principle, the transversality condition is frequently used to control the types of solutions found in optimization problems. For example, it is a necessary condition for solution curves to problems of the form:
Minimize $\int {F(x,y,y^{\prime })}dx$ where one or both of the endpoints of the curve are not fixed.
In many of these problems, the solution satisfies the condition that the solution curve should cross transversally the nullcline or some other curve describing terminal conditions.
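A minimal numerical illustration (a hypothetical example, not from the text): minimize arc length from the origin to the vertical line x = 1 with the endpoint on that line left free. For this functional the transversality (natural boundary) condition $F_{y'}=y'/{\sqrt {1+y'^{2}}}=0$ at the free end forces the extremal to meet the terminal line at a right angle.

```python
import numpy as np

# Extremals of arc length are straight lines y = k*x; the transversality
# condition at the free endpoint on x = 1 predicts k = 0.
def arc_length(k):
    return np.sqrt(1.0 + k * k)  # length of y = k*x over [0, 1]

ks = np.linspace(-2.0, 2.0, 401)
best_k = ks[np.argmin([arc_length(k) for k in ks])]
print(best_k)  # ~ 0, as the transversality condition predicts
```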
Smoothness of solution spaces
Using Sard's theorem, whose hypothesis is a special case of the transversality of maps, it can be shown that transverse intersections between submanifolds of a space of complementary dimensions or between submanifolds and maps to a space are themselves smooth submanifolds. For instance, if a smooth section of an oriented manifold's tangent bundle—i.e. a vector field—is viewed as a map from the base to the total space, and intersects the zero-section (viewed either as a map or as a submanifold) transversely, then the zero set of the section—i.e. the singularities of the vector field—forms a smooth 0-dimensional submanifold of the base, i.e. a set of signed points. The signs agree with the indices of the vector field, and thus the sum of the signs—i.e. the fundamental class of the zero set—is equal to the Euler characteristic of the manifold. More generally, for a vector bundle over an oriented smooth closed finite-dimensional manifold, the zero set of a section transverse to the zero section will be a submanifold of the base of codimension equal to the rank of the vector bundle, and its homology class will be Poincaré dual to the Euler class of the bundle.
An extremely special case of this is the following: if a differentiable function from the reals to the reals has a nonzero derivative at a zero of the function, then the zero is simple, i.e., the graph is transverse to the x-axis at that zero; a zero derivative would mean a horizontal tangent to the curve, which would agree with the tangent space to the x-axis.
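This dichotomy can be checked mechanically; the following sketch (an illustrative helper, not from the text) classifies a zero as simple or tangential from the derivative.

```python
def is_simple_zero(f, df, x0, tol=1e-9):
    """A zero x0 of f is simple (the graph crosses the x-axis
    transversally) iff f(x0) = 0 and f'(x0) != 0."""
    return abs(f(x0)) < tol and abs(df(x0)) > tol

f, df = lambda x: x**2 - 1, lambda x: 2 * x
print(is_simple_zero(f, df, 1.0))   # True: transverse crossing at x = 1

g, dg = lambda x: x**2, lambda x: 2 * x
print(is_simple_zero(g, dg, 0.0))   # False: the graph is tangent to the x-axis
```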
For an infinite-dimensional example, the d-bar operator is a section of a certain Banach space bundle over the space of maps from a Riemann surface into an almost-complex manifold. The zero set of this section consists of holomorphic maps. If the d-bar operator can be shown to be transverse to the zero-section, this moduli space will be a smooth manifold. These considerations play a fundamental role in the theory of pseudoholomorphic curves and Gromov–Witten theory. (Note that for this example, the definition of transversality has to be refined in order to deal with Banach spaces!)
Grammar
"Transversal" is a noun; the adjective is "transverse."
quote from J.H.C. Whitehead, 1959[3]
See also
• Transversality theorem
Notes
1. Guillemin and Pollack 1974, p.30.
2. Guillemin and Pollack 1974, p.28.
3. Hirsch (1976), p.66
References
• Thom, René (1954). "Quelques propriétés globales des variétés differentiables". Comment. Math. Helv. 28 (1): 17–86. doi:10.1007/BF02566923. S2CID 120243638.
• Guillemin, Victor; Pollack, Alan (1974). Differential Topology. Prentice-Hall. ISBN 0-13-212605-2.
• Hirsch, Morris (1976). Differential Topology. Springer-Verlag. ISBN 0-387-90148-5.
Transversality condition
In optimal control theory, a transversality condition is a boundary condition for the terminal values of the costate variables. Transversality conditions are among the necessary conditions for optimality in infinite-horizon optimal control problems without an endpoint constraint on the state variables.
See also
• Pontryagin's maximum principle
Further reading
• Beavis, Brian; Dobbs, Ian (1990). "Variable Endpoints and Transversality Conditions". Optimisation and Stability Theory for Economic Analysis. New York: Cambridge University Press. pp. 252–259. ISBN 0-521-33605-8.
• Léonard, Daniel; Long, Ngo Van (1992). "Endpoint Constraints and Transversality Conditions". Optimal Control Theory : Static Optimization in Economics. New York: Cambridge University Press. pp. 221–262. ISBN 0-521-33746-1.
Transversality theorem
In differential topology, the transversality theorem, also known as the Thom transversality theorem after French mathematician René Thom, is a major result that describes the transverse intersection properties of a smooth family of smooth maps. It says that transversality is a generic property: any smooth map $f\colon X\rightarrow Y$, may be deformed by an arbitrary small amount into a map that is transverse to a given submanifold $Z\subseteq Y$. Together with the Pontryagin–Thom construction, it is the technical heart of cobordism theory, and the starting point for surgery theory. The finite-dimensional version of the transversality theorem is also a very useful tool for establishing the genericity of a property which is dependent on a finite number of real parameters and which is expressible using a system of nonlinear equations. This can be extended to an infinite-dimensional parametrization using the infinite-dimensional version of the transversality theorem.
Finite-dimensional version
Previous definitions
Let $f\colon X\rightarrow Y$ be a smooth map between smooth manifolds, and let $Z$ be a submanifold of $Y$. We say that $f$ is transverse to $Z$, denoted as $f\pitchfork Z$, if and only if for every $x\in f^{-1}\left(Z\right)$ we have that
$\operatorname {im} \left(df_{x}\right)+T_{f\left(x\right)}Z=T_{f\left(x\right)}Y$.
An important result about transversality states that if a smooth map $f$ is transverse to $Z$, then $f^{-1}\left(Z\right)$ is a regular submanifold of $X$.
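As a hedged numerical sketch of this condition (example chosen for illustration): take $f:\mathbb {R} ^{3}\to \mathbb {R} $, $f(x)=|x|^{2}$, and the point $Z=\{1\}$. Since $T_{1}Z=\{0\}$, transversality asks that $df_{x}$ be surjective at every preimage point, which a Jacobian rank check decides; it makes $f^{-1}(Z)$ the unit sphere, a regular codimension-1 submanifold.

```python
import numpy as np

def df(x):
    """Jacobian of f(x) = |x|^2, a 1x3 row vector."""
    return 2.0 * x.reshape(1, 3)

x = np.array([0.6, 0.8, 0.0])     # a point with f(x) = 1
print(np.linalg.matrix_rank(df(x)) == 1)      # True: transverse to {1}

x_bad = np.zeros(3)               # the critical point, where f = 0
print(np.linalg.matrix_rank(df(x_bad)) == 1)  # False: 0 is not a regular value
```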
If $X$ is a manifold with boundary, then we can define the restriction of the map $f$ to the boundary, as $\partial f\colon \partial X\rightarrow Y$. The map $\partial f$ is smooth, and it allows us to state an extension of the previous result: if both $f\pitchfork Z$ and $\partial f\pitchfork Z$, then $f^{-1}\left(Z\right)$ is a regular submanifold of $X$ with boundary, and
$\partial f^{-1}\left(Z\right)=f^{-1}\left(Z\right)\cap \partial X$.
Parametric transversality theorem
Consider the map $F\colon X\times S\rightarrow Y$ and define $f_{s}\left(x\right)=F\left(x,s\right)$. This generates a family of mappings $f_{s}\colon X\rightarrow Y$. We require that the family vary smoothly by assuming $S$ to be a (smooth) manifold and $F$ to be smooth.
The statement of the parametric transversality theorem is:
Suppose that $F\colon X\times S\rightarrow Y$ is a smooth map of manifolds, where only $X$ has boundary, and let $Z$ be any submanifold of $Y$ without boundary. If both $F$ and $\partial F$ are transverse to $Z$, then for almost every $s\in S$, both $f_{s}$ and $\partial f_{s}$ are transverse to $Z$.
More general transversality theorems
The parametric transversality theorem above is sufficient for many elementary applications (see the book by Guillemin and Pollack).
There are more powerful statements (collectively known as transversality theorems) that imply the parametric transversality theorem and are needed for more advanced applications.
Informally, the "transversality theorem" states that the set of mappings that are transverse to a given submanifold is a dense open (or, in some cases, only a dense $G_{\delta }$) subset of the set of mappings. To make such a statement precise, it is necessary to define the space of mappings under consideration, and what is the topology in it. There are several possibilities; see the book by Hirsch.
What is usually understood by Thom's transversality theorem is a more powerful statement about jet transversality. See the books by Hirsch and by Golubitsky and Guillemin. The original reference is Thom, Bol. Soc. Mat. Mexicana (2) 1 (1956), pp. 59–71.
John Mather proved in the 1970s an even more general result called the multijet transversality theorem. See the book by Golubitsky and Guillemin.
Infinite-dimensional version
The infinite-dimensional version of the transversality theorem takes into account that the manifolds may be modeled in Banach spaces.
Formal statement
Suppose $F:X\times S\to Y$ is a $C^{k}$ map of $C^{\infty }$-Banach manifolds. Assume:
(i) $X,S$ and $Y$ are non-empty, metrizable $C^{\infty }$-Banach manifolds with chart spaces over a field $\mathbb {K} .$
(ii) The $C^{k}$-map $F:X\times S\to Y$ with $k\geq 1$ has $y$ as a regular value.
(iii) For each parameter $s\in S$, the map $f_{s}(x)=F(x,s)$ is a Fredholm map, where $\operatorname {ind} Df_{s}(x)<k$ for every $x\in f_{s}^{-1}(\{y\}).$
(iv) The convergence $s_{n}\to s$ on $S$ as $n\to \infty $ and $F(x_{n},s_{n})=y$ for all $n$ implies the existence of a convergent subsequence $x_{n}\to x$ as $n\to \infty $ with $x\in X.$
If (i)-(iv) hold, then there exists an open, dense subset $S_{0}\subset S$ such that $y$ is a regular value of $f_{s}$ for each parameter $s\in S_{0}.$
Now, fix an element $s\in S_{0}.$ If there exists a number $n\geq 0$ with $\operatorname {ind} Df_{s}(x)=n$ for all solutions $x\in X$ of $f_{s}(x)=y$, then the solution set $f_{s}^{-1}(\{y\})$ consists of an $n$-dimensional $C^{k}$-Banach manifold or the solution set is empty.
Note that if $\operatorname {ind} Df_{s}(x)=0$ for all the solutions of $f_{s}(x)=y,$ then there exists an open dense subset $S_{0}$ of $S$ such that there are at most finitely many solutions for each fixed parameter $s\in S_{0}.$ In addition, all these solutions are regular.
References
• Arnold, Vladimir I. (1988). Geometrical Methods in the Theory of Ordinary Differential Equations. Springer. ISBN 0-387-96649-8.
• Golubitsky, Martin; Guillemin, Victor (1974). Stable Mappings and Their Singularities. Springer-Verlag. ISBN 0-387-90073-X.
• Guillemin, Victor; Pollack, Alan (1974). Differential Topology. Prentice-Hall. ISBN 0-13-212605-2.
• Hirsch, Morris W. (1976). Differential Topology. Springer. ISBN 0-387-90148-5.
• Thom, René (1954). "Quelques propriétés globales des variétés differentiables". Commentarii Mathematici Helvetici. 28 (1): 17–86. doi:10.1007/BF02566923.
• Thom, René (1956). "Un lemme sur les applications différentiables". Bol. Soc. Mat. Mexicana. 2 (1): 59–71.
• Zeidler, Eberhard (1997). Nonlinear Functional Analysis and Its Applications: Part 4: Applications to Mathematical Physics. Springer. ISBN 0-387-96499-1.
Transverse knot
In mathematics, a transverse knot is a smooth embedding of a circle into a three-dimensional contact manifold such that the tangent vector at every point of the knot is transverse to the contact plane at that point.
Any Legendrian knot can be C0-perturbed in a direction transverse to the contact planes to obtain a transverse knot. This yields a bijection between the set of isomorphism classes of transverse knots and the set of isomorphism classes of Legendrian knots modulo negative Legendrian stabilization.
References
• Geiges, Hansjörg (2008). An introduction to contact topology; Volume 109 of Cambridge studies in advanced mathematics. Cambridge University Press. p. 94. ISBN 978-0-521-86585-2.
• J. Epstein, D. Fuchs, and M. Meyer, Chekanov–Eliashberg invariants and transverse approximations of Legendrian knots, Pacific J. Math. 201 (2001), no. 1, 89–106.
Transylvania lottery
In mathematical combinatorics, the Transylvania lottery is a lottery in which players select three numbers from 1 to 14 for each ticket, and then three numbers are drawn at random. A ticket wins if at least two of its numbers match the drawn ones. The problem asks how many tickets the player must buy in order to be certain of winning. (Javier Martínez, Gloria Gutiérrez & Pablo Cordero et al. 2008, p.85)(Mazur 2010, p.280 problem 15)
An upper bound can be given using the Fano plane with a collection of 14 tickets in two sets of seven. Each set of seven uses every line of a Fano plane, labelled with the numbers 1 to 7, and 8 to 14.
Low set: 1-2-5, 1-3-6, 1-4-7, 2-3-7, 2-4-6, 3-4-5, 5-6-7
High set: 8-9-12, 8-10-13, 8-11-14, 9-10-14, 9-11-13, 10-11-12, 12-13-14
At least two of the three randomly chosen numbers must be in one Fano plane set, and any two points on a Fano plane are on a line, so there will be a ticket in the collection containing those two numbers. There is a (6/13)*(5/12)=5/26 chance that all three randomly chosen numbers are in the same Fano plane set. In this case, there is a 1/5 chance that they are on a line, and hence all three numbers are on one ticket, otherwise each of the three pairs are on three different tickets.
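The covering claim can be verified by brute force; this sketch checks every possible draw against the 14 tickets (ticket sets transcribed from the tables above, the high set being the low set shifted by 7).

```python
from itertools import combinations

# The seven lines of a Fano plane on {1..7}, and the same lines on {8..14}.
low = [{1, 2, 5}, {1, 3, 6}, {1, 4, 7}, {2, 3, 7},
       {2, 4, 6}, {3, 4, 5}, {5, 6, 7}]
high = [{n + 7 for n in line} for line in low]
tickets = low + high

# Every 3-number draw from 1..14 shares at least two numbers with a ticket.
draws = [set(d) for d in combinations(range(1, 15), 3)]
covered = all(any(len(d & t) >= 2 for t in tickets) for d in draws)
print(covered)  # True: the 14 tickets guarantee a win
```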
See also
• Combinatorial design
• Lottery Wheeling
References
• Martínez, Javier; Gutiérrez, Gloria; Cordero, Pablo; Rodríguez, Francisco J.; Merino, Salvador (2008), "Algebraic topics on discrete mathematics", in Moore, Kenneth B. (ed.), Discrete mathematics research progress, Hauppauge, NY: Nova Sci. Publ., pp. 41–90, ISBN 978-1-60456-123-4, MR 2446219
• Mazur, David R. (2010), Combinatorics, MAA Textbooks, Mathematical Association of America, ISBN 978-0-88385-762-5, MR 2572113
Trapezohedron
In geometry, an n-gonal trapezohedron, n-trapezohedron, n-antidipyramid, n-antibipyramid, or n-deltohedron is the dual polyhedron of an n-gonal antiprism. The 2n faces of an n-trapezohedron are congruent and symmetrically staggered; they are called twisted kites. With a higher symmetry, its 2n faces are kites (also called deltoids).[3]
"Deltohedron" redirects here. Not to be confused with Deltahedron.
Set of dual-uniform n-gonal trapezohedra
Example: dual-uniform pentagonal trapezohedron (n = 5)
Type: dual-uniform in the sense of dual-semiregular polyhedron
Faces: 2n congruent kites
Edges: 4n
Vertices: 2n + 2
Vertex configuration: V3.3.3.n
Schläfli symbol: { } ⨁ {n}[1]
Conway notation: dAn
Coxeter diagram
Symmetry group: Dnd, [2+,2n], (2*n), order 4n
Rotation group: Dn, [2,n]+, (22n), order 2n
Dual polyhedron: (convex) uniform n-gonal antiprism
Properties: convex, face-transitive, regular vertices[2]
The "n-gonal" part of the name does not refer to faces here, but to two arrangements of each n vertices around an axis of n-fold symmetry. The dual n-gonal antiprism has two actual n-gon faces.
An n-gonal trapezohedron can be dissected into two equal n-gonal pyramids and an n-gonal antiprism.
Terminology
These figures, sometimes called deltohedra, must not be confused with deltahedra, whose faces are equilateral triangles.
Twisted trigonal, tetragonal, and hexagonal trapezohedra (with six, eight, and twelve twisted congruent kite faces) exist as crystals; in crystallography (describing the crystal habits of minerals), they are just called trigonal, tetragonal, and hexagonal trapezohedra. They have no plane of symmetry, and no center of inversion symmetry;[4],[5] but they have a center of symmetry: the intersection point of their symmetry axes. The trigonal trapezohedron has one 3-fold symmetry axis, perpendicular to three 2-fold symmetry axes.[4] The tetragonal trapezohedron has one 4-fold symmetry axis, perpendicular to four 2-fold symmetry axes of two kinds. The hexagonal trapezohedron has one 6-fold symmetry axis, perpendicular to six 2-fold symmetry axes of two kinds.[6]
Crystal arrangements of atoms can repeat in space with trigonal and hexagonal trapezohedron cells.[7]
Also in crystallography, the word trapezohedron is often used for the polyhedron with 24 congruent non-twisted kite faces properly known as a deltoidal icositetrahedron,[8] which has eighteen order-4 vertices and eight order-3 vertices. This is not to be confused with the dodecagonal trapezohedron, which also has 24 congruent kite faces, but two order-12 apices (i.e. poles) and two rings of twelve order-3 vertices each.
Still in crystallography, the deltoid dodecahedron[9] has 12 congruent non-twisted kite faces, six order-4 vertices and eight order-3 vertices (the rhombic dodecahedron is a special case). This is not to be confused with the hexagonal trapezohedron, which also has 12 congruent kite faces,[6] but two order-6 apices (i.e. poles) and two rings of six order-3 vertices each.
Forms
An n-trapezohedron is defined by a regular zig-zag skew 2n-gon base, two symmetric apices with no degree of freedom right above and right below the base, and quadrilateral faces connecting each pair of adjacent basal edges to one apex.
An n-trapezohedron has two apical vertices on its polar axis, and 2n basal vertices in two regular n-gonal rings. It has 2n congruent kite faces, and it is isohedral.
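A coordinate sketch of this vertex arrangement follows (the radii and heights below are arbitrary illustrative choices; a true kite-faced trapezohedron additionally constrains them so that each quadrilateral face is planar).

```python
import numpy as np

def trapezohedron_vertices(n, ring_radius=1.0, ring_height=0.3, apex_height=1.0):
    """Vertex layout of an n-gonal trapezohedron: two apices on the polar
    axis plus 2n basal vertices in two staggered regular n-gonal rings."""
    verts = [(0.0, 0.0, apex_height), (0.0, 0.0, -apex_height)]
    for k in range(n):
        a = 2 * np.pi * k / n
        verts.append((ring_radius * np.cos(a), ring_radius * np.sin(a), ring_height))
        b = a + np.pi / n  # lower ring staggered by half a step
        verts.append((ring_radius * np.cos(b), ring_radius * np.sin(b), -ring_height))
    return np.array(verts)

v = trapezohedron_vertices(5)
print(len(v))  # 2n + 2 = 12 vertices for the pentagonal trapezohedron
```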
Family of n-gonal trapezohedra
• Digonal trapezohedron (tetrahedron): face configuration V2.3.3.3
• Trigonal trapezohedron: V3.3.3.3
• Tetragonal trapezohedron: V4.3.3.3
• Pentagonal trapezohedron: V5.3.3.3
• Hexagonal trapezohedron: V6.3.3.3
• Heptagonal trapezohedron: V7.3.3.3
• Octagonal trapezohedron: V8.3.3.3
• Decagonal trapezohedron: V10.3.3.3
• Dodecagonal trapezohedron: V12.3.3.3
• ... Apeirogonal trapezohedron: V∞.3.3.3
Special cases:
• n = 2. A degenerate form of trapezohedron: a geometric tetrahedron with 6 vertices, 8 edges, and 4 degenerate kite faces that are degenerated into triangles. Its dual is a degenerate form of antiprism: also a tetrahedron.
• n = 3. The dual of a triangular antiprism: the kites are rhombi (or squares); hence these trapezohedra are also zonohedra. They are called rhombohedra. They are cubes scaled in the direction of a body diagonal. They are also the parallelepipeds with congruent rhombic faces.
• A special case of a rhombohedron is one in which the rhombi forming the faces have angles of 60° and 120°. It can be decomposed into two equal regular tetrahedra and a regular octahedron. Since parallelepipeds can fill space, so can a combination of regular tetrahedra and regular octahedra.
• n = 5. The pentagonal trapezohedron is the only polyhedron other than the Platonic solids commonly used as a die in roleplaying games such as Dungeons & Dragons. Being convex and face-transitive, it makes fair dice. Having 10 sides, it can be used in repetition to generate any decimal-based uniform probability desired. Typically, two dice of different colors are used for the two digits to represent numbers from 00 to 99.
Symmetry
The symmetry group of an n-gonal trapezohedron is Dnd = Dnv, of order 4n, except in the case of n = 3: a cube has the larger symmetry group Od of order 48 = 4×(4×3), which has four versions of D3d as subgroups.
The rotation group of an n-trapezohedron is Dn, of order 2n, except in the case of n = 3: a cube has the larger rotation group O of order 24 = 4×(2×3), which has four versions of D3 as subgroups.
Note: Every n-trapezohedron with a regular zig-zag skew 2n-gon base and 2n congruent non-twisted kite faces has the same (dihedral) symmetry group as the dual-uniform n-trapezohedron, for n ≥ 4.
One degree of freedom within symmetry from Dnd (order 4n) to Dn (order 2n) changes the congruent kites into congruent quadrilaterals with three edge lengths, called twisted kites, and the n-trapezohedron is called a twisted trapezohedron. (In the limit, one edge of each quadrilateral goes to zero length, and the n-trapezohedron becomes an n-bipyramid.)
If the kites surrounding the two peaks are not twisted but are of two different shapes, the n-trapezohedron can only have Cnv (cyclic with vertical mirrors) symmetry, order 2n, and is called an unequal or asymmetric trapezohedron. Its dual is an unequal n-antiprism, with the top and bottom n-gons of different radii.
If the kites are twisted and are of two different shapes, the n-trapezohedron can only have Cn (cyclic) symmetry, order n, and is called an unequal twisted trapezohedron.
Example: variations with hexagonal trapezohedra (n = 6)
• Twisted trapezohedron: symmetry group D6, (662), [6,2]+
• Unequal trapezohedron: symmetry group C6v, (*66), [6]
• Unequal twisted trapezohedron: symmetry group C6, (66), [6]+
Star trapezohedron
A star p/q-trapezohedron (where 2 ≤ q < p/2) is defined by a regular zig-zag skew star 2p/q-gon base, two symmetric apices with no degree of freedom right above and right below the base, and quadrilateral faces connecting each pair of adjacent basal edges to one apex.
A star p/q-trapezohedron has two apical vertices on its polar axis, and 2p basal vertices in two regular p-gonal rings. It has 2p congruent kite faces, and it is isohedral.
Such a star p/q-trapezohedron is a self-intersecting, crossed, or non-convex form. It exists for any regular zig-zag skew star 2p/q-gon base (where 2 ≤ q < p/2).
But if p/q < 3/2, then (p − q)·360°/p < (q/2)·360°/p, so the dual star antiprism (of the star trapezohedron) cannot be uniform (i.e. cannot have equal edge lengths); and if p/q = 3/2, then (p − q)·360°/p = (q/2)·360°/p, so the dual star antiprism must be flat, thus degenerate, to be uniform.
A dual-uniform star p/q-trapezohedron has Coxeter-Dynkin diagram .
Dual-uniform star p/q-trapezohedra up to p = 12
5/2, 5/3, 7/2, 7/3, 7/4, 8/3, 8/5, 9/2, 9/4, 9/5, 10/3, 11/2, 11/3, 11/4, 11/5, 11/6, 11/7, 12/5, 12/7
See also
Wikimedia Commons has media related to Trapezohedra.
• Diminished trapezohedron
• Rhombic dodecahedron
• Rhombic triacontahedron
• Bipyramid
• Truncated trapezohedron
• Conway polyhedron notation
• The Haunter of the Dark, a short story by H.P. Lovecraft in which a fictional ancient artifact known as The Shining Trapezohedron plays a crucial role.
References
1. N.W. Johnson: Geometries and Transformations, (2018) ISBN 978-1-107-10340-5 Chapter 11: Finite symmetry groups, 11.3 Pyramids, Prisms, and Antiprisms, Figure 11.3c
2. "duality". maths.ac-noumea.nc. Retrieved 2020-10-19.
3. Spencer 1911, p. 575, or p. 597 on Wikisource, CRYSTALLOGRAPHY, 1. CUBIC SYSTEM, TETRAHEDRAL CLASS, footnote: « [Deltoid]: From the Greek letter δ, Δ; in general, a triangular-shaped object; also an alternative name for a trapezoid ». Remark: a twisted kite can look like and even be a trapezoid.
4. Spencer 1911, p. 581, or p. 603 on Wikisource, CRYSTALLOGRAPHY, 6. HEXAGONAL SYSTEM, Rhombohedral Division, TRAPEZOHEDRAL CLASS, FIG. 74.
5. Spencer 1911, p. 577, or p. 599 on Wikisource, CRYSTALLOGRAPHY, 2. TETRAGONAL SYSTEM, TRAPEZOHEDRAL CLASS.
6. Spencer 1911, p. 582, or p. 604 on Wikisource, CRYSTALLOGRAPHY, 6. HEXAGONAL SYSTEM, Hexagonal Division, TRAPEZOHEDRAL CLASS.
7. Trigonal-trapezohedric Class, 3 2 and Hexagonal-trapezohedric Class, 6 2 2
8. Spencer 1911, p. 574, or p. 596 on Wikisource, CRYSTALLOGRAPHY, 1. CUBIC SYSTEM, HOLOSYMMETRIC CLASS, FIG. 17.
9. Spencer 1911, p. 575, or p. 597 on Wikisource, CRYSTALLOGRAPHY, 1. CUBIC SYSTEM, TETRAHEDRAL CLASS, FIG. 27.
• Anthony Pugh (1976). Polyhedra: A visual approach. California: University of California Press Berkeley. ISBN 0-520-03056-7. Chapter 4: Duals of the Archimedean polyhedra, prisma and antiprisms
• Spencer, Leonard James (1911). "Crystallography" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 07 (11th ed.). Cambridge University Press. pp. 569–591.
External links
• Weisstein, Eric W. "Trapezohedron". MathWorld.
• Weisstein, Eric W. "Isohedron". MathWorld.
• Virtual Reality Polyhedra The Encyclopedia of Polyhedra
• VRML models (George Hart) <3> <4> <5> <6> <7> <8> <9> <10>
• Conway Notation for Polyhedra Try: "dAn", where n=3,4,5... Example: "dA5" is a pentagonal trapezohedron.
• Paper model tetragonal (square) trapezohedron
Convex polyhedra
Platonic solids (regular)
• tetrahedron
• cube
• octahedron
• dodecahedron
• icosahedron
Archimedean solids
(semiregular or uniform)
• truncated tetrahedron
• cuboctahedron
• truncated cube
• truncated octahedron
• rhombicuboctahedron
• truncated cuboctahedron
• snub cube
• icosidodecahedron
• truncated dodecahedron
• truncated icosahedron
• rhombicosidodecahedron
• truncated icosidodecahedron
• snub dodecahedron
Catalan solids
(duals of Archimedean)
• triakis tetrahedron
• rhombic dodecahedron
• triakis octahedron
• tetrakis hexahedron
• deltoidal icositetrahedron
• disdyakis dodecahedron
• pentagonal icositetrahedron
• rhombic triacontahedron
• triakis icosahedron
• pentakis dodecahedron
• deltoidal hexecontahedron
• disdyakis triacontahedron
• pentagonal hexecontahedron
Dihedral regular
• dihedron
• hosohedron
Dihedral uniform
• prisms
• antiprisms
duals:
• bipyramids
• trapezohedra
Dihedral others
• pyramids
• truncated trapezohedra
• gyroelongated bipyramid
• cupola
• bicupola
• frustum
• bifrustum
• rotunda
• birotunda
• prismatoid
• scutoid
Degenerate polyhedra are in italics.
Trapezoid graph
In graph theory, trapezoid graphs are intersection graphs of trapezoids between two horizontal lines. They are a class of co-comparability graphs that contain interval graphs and permutation graphs as subclasses. A graph is a trapezoid graph if there exists a set of trapezoids corresponding to the vertices of the graph such that two vertices are joined by an edge if and only if the corresponding trapezoids intersect. Trapezoid graphs were introduced by Dagan, Golumbic, and Pinter in 1988. There exist $O(n\log n)$ algorithms for chromatic number, weighted independent set, clique cover, and maximum weighted clique.
Definitions and characterizations
Given a channel, a pair of two horizontal lines, a trapezoid between these lines is defined by two points on the top and two points on the bottom line. A graph is a trapezoid graph if there exists a set of trapezoids corresponding to the vertices of the graph such that two vertices are joined by an edge if and only if the corresponding trapezoids intersect. The interval order dimension of a partially ordered set, $P=(X,<)$, is the minimum number d of interval orders P1 … Pd such that P = P1∩…∩Pd. The incomparability graph of a partially ordered set $P=(X,<)$ is the undirected graph $G=(X,E)$ where x is adjacent to y in G if and only if x and y are incomparable in P. An undirected graph is a trapezoid graph if and only if it is the incomparability graph of a partial order having interval order dimension at most 2.[1]
Applications
The problems of finding maximum cliques and of coloring trapezoid graphs are connected to channel routing problems in VLSI design. Given some labeled terminals on the upper and lower side of a two-sided channel, terminals with the same label will be connected in a common net. This net can be represented by a trapezoid containing the rightmost terminals and leftmost terminals with the same label. Nets may be routed without intersection if and only if the corresponding trapezoids do not intersect. Therefore, the number of layers needed to route the nets without intersection is equal to the graph’s chromatic number.
Equivalent representations
Trapezoid representation
Trapezoids can be used to represent a trapezoid graph by using the definition of trapezoid graph. A trapezoid graph's trapezoid representation can be seen in Figure 1.
Box representation
Dominating rectangles, or box representation, maps the points on the lower of the two lines of the trapezoid representation as lying on the x-axis and that of the upper line as lying on the y-axis of the Euclidean plane. Each trapezoid then corresponds to an axis-parallel box in the plane. Using the notion of a dominance order (In RK, x is said to be dominated by y, denoted x < y, if xi is less than yi for i = 1, …, k), we say that a box b dominates a box b’ if the lower corner of b dominates the upper corner of b’. Furthermore, if one of two boxes dominates the other we say that they are comparable. Otherwise, they are incomparable. Thus, two trapezoids are disjoint exactly if their corresponding boxes are comparable. The box representation is useful because the associated dominance order allows sweep line algorithms to be used.[2]
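A small sketch of the dominance test (variable names are illustrative, not from the literature): trapezoid i spans [a_i, b_i] on the lower line and [c_i, d_i] on the upper line, so its box has lower corner (a_i, c_i) and upper corner (b_i, d_i), and two trapezoids are disjoint exactly when one box dominates the other.

```python
def dominates(box1, box2):
    """box1 dominates box2 if box1's lower corner dominates box2's upper corner."""
    (a1, c1), _ = box1
    _, (b2, d2) = box2
    return a1 > b2 and c1 > d2

def trapezoids_intersect(box1, box2):
    """Trapezoids intersect iff their boxes are incomparable under dominance."""
    return not (dominates(box1, box2) or dominates(box2, box1))

t1 = ((0, 0), (2, 3))   # ((a, c), (b, d))
t2 = ((5, 6), (7, 8))   # entirely to the right of t1 on both lines
t3 = ((1, 2), (6, 7))   # overlaps t1 on both lines
print(trapezoids_intersect(t1, t2))  # False: t2's box dominates t1's
print(trapezoids_intersect(t1, t3))  # True
```

Because comparability is a total preorder along the sweep direction, this is the test that the sweep line algorithms mentioned below exploit.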
Bitolerance graphs
Bitolerance graphs are incomparability graphs of a bitolerance order. An order is a bitolerance order if and only if there are intervals Ix and real numbers t1(x) and tr(x) assigned to each vertex x in such a way that x < y if and only if the overlap of Ix and Iy is less than both tr(x) and t1(y) and the center of Ix is less than the center of Iy.[3] In 1993, Langley showed that the bounded bitolerance graphs are equivalent to the class of trapezoid graphs.[4]
Relation to other families of graphs
The class of trapezoid graphs properly contains the union of interval and permutation graphs and is equivalent to the incomparability graphs of partially ordered sets having interval order dimension at most two. Permutation graphs can be seen as the special case of trapezoid graphs when every trapezoid has zero area. This occurs when both of the trapezoid’s points on the upper channel are in the same position and both points on the lower channel are in the same position.
Like all incomparability graphs, trapezoid graphs are perfect.
Circle trapezoid graphs
Circle trapezoid graphs are a class of graphs proposed by Felsner et al. in 1993. They are a superclass of the trapezoid graph class, and also contain circle graphs and circular-arc graphs. A circle trapezoid is the region in a circle that lies between two non-crossing chords and a circle trapezoid graph is the intersection graph of families of circle trapezoids on a common circle. There is an $O(n^{2})$ algorithm for maximum weighted independent set problem and an ${O}(n^{2}\log n)$ algorithm for the maximum weighted clique problem.
k-Trapezoid graphs
k-Trapezoid graphs are an extension of trapezoid graphs to higher dimension orders. They were first proposed by Felsner, and they rely on the definition of dominating boxes carrying over to higher dimensions in which a point x is represented by a vector $(x_{1},\ldots ,x_{k})$. Using (k − 1)-dimensional range trees to store and query coordinates, Felsner’s algorithms for chromatic number, maximum clique, and maximum independent set can be applied to k-trapezoid graphs in ${O}(n\log ^{k-1}n)$ time.
Algorithms
Algorithms for trapezoid graphs should be compared with algorithms for general co-comparability graphs. For this larger class of graphs, the maximum independent set and the minimum clique cover problem can be solved in ${O}(n^{2}\log n)$ time.[5] Dagan et al. first proposed an ${O}(nk)$ algorithm for coloring trapezoid graphs, where n is the number of nodes and k is the chromatic number of the graph. Later, using the box representation of trapezoid graphs, Felsner published ${O}(n\log n)$ algorithms for chromatic number, weighted independent set, clique cover, and maximum weighted clique. These algorithms all require ${O}(n)$ space. These algorithms rely on the associated dominance in the box representation that allows sweeping line algorithms to be used. Felsner proposes using balanced trees that can do insert, delete, and query operations in ${O}(\log n)$ time, which results in ${O}(n\log n)$ algorithms.
Recognition
To determine if a graph ${G}$ is a trapezoid graph, search for a transitive orientation ${F}$ on the complement of ${G}$. Since trapezoid graphs are a subset of co-comparability graphs, if ${G}$ is a trapezoid graph, its complement ${G'}$ must be a comparability graph. If a transitive orientation ${F}$ of the complement ${G'}$ does not exist, ${G}$ is not a trapezoid graph. If ${F}$ does exist, test to see if the order given by ${F}$ is a trapezoid order. The fastest algorithm for trapezoid order recognition was proposed by McConnell and Spinrad in 1994, with a running time of $O(n^{2})$. The process reduces the interval dimension 2 question to a problem of covering an associated bipartite graph by chain graphs (graphs with no induced 2K2).[6] Using vertex splitting, the recognition problem for trapezoid graphs was shown by Mertzios and Corneil to succeed in $O(n(n+m))$ time, where $m$ denotes the number of edges. This process involves augmenting a given graph ${G}$, and then transforming the augmented graph by replacing each of the original graph’s vertices by a pair of new vertices. This “split graph” is a permutation graph with special properties if and only if ${G}$ is a trapezoid graph.[7]
Notes
1. Ido Dagan, Martin Charles Golumbic, and Ron Yair Pinter. Trapezoid graphs and their coloring. Discrete Appl. Math., 35–46, 1988.
2. Stefan Felsner, Rudolf Muller, and Lorenz Wernisch. Trapezoid graphs and generalizations, geometry and algorithms. In Algorithm theory—SWAT ’94 (Aarhus, 1994), volume 824 of Lecture Notes in Comput. Sci., pages 143–154. Springer, Berlin, 1994.
3. Kenneth P. Bogart, Garth Isaak. Proper and unit bitolerance orders and graphs. Discrete Mathematics 181(1–3): 37–51 (1998).
4. Martin Charles Golumbic and Irith B.-A. Hartman, eds., Graph Theory, Combinatorics and Algorithms: Interdisciplinary Applications, Springer-Verlag, New York, 2005
5. R. McConnell and J. Spinrad, Linear-time modular decomposition and efficient transitive orientation of undirected graphs, Proc. 5. Ann. Symp. on Discr. Alg. (1994).
6. Golumbic, Martin Charles, and Ann N. Trenk. Tolerance Graphs. Cambridge: Cambridge University Press, 2004.
7. G. B. Mertzios and D. G. Corneil. Vertex splitting and the recognition of trapezoid graphs. Discrete Applied Mathematics, 159(11), pages 1131-1147, 2011.
References
• Golumbic, Martin Charles (1980). Algorithmic Graph Theory and Perfect Graphs. Academic Press. ISBN 0-444-51530-5. Second edition, Annals of Discrete Mathematics 57, Elsevier, 2004.
| Wikipedia |
Trapezoidal distribution
In probability theory and statistics, the trapezoidal distribution is a continuous probability distribution whose probability density function graph resembles a trapezoid. Likewise, trapezoidal distributions also roughly resemble mesas or plateaus.
Trapezoidal
Probability density function
Cumulative distribution function
Parameters
• $a\;(a<d)$ - lower bound
• $b\;(a\leq b<c)$ - level start
• $c\;(b<c\leq d)$ - level end
• $d\;(c\leq d)$ - upper bound
Support $x\in [a,d]$
PDF ${\begin{cases}{\frac {2}{d+c-a-b}}{\frac {x-a}{b-a}}&{\text{for }}a\leq x<b\\{\frac {2}{d+c-a-b}}&{\text{for }}b\leq x<c\\{\frac {2}{d+c-a-b}}{\frac {d-x}{d-c}}&{\text{for }}c\leq x\leq d\end{cases}}$
CDF ${\begin{cases}{\frac {1}{d+c-a-b}}{\frac {1}{b-a}}(x-a)^{2}&{\text{for }}a\leq x<b\\{\frac {1}{d+c-a-b}}(2x-a-b)&{\text{for }}b\leq x<c\\1-{\frac {1}{d+c-a-b}}{\frac {1}{d-c}}(d-x)^{2}&{\text{for }}c\leq x\leq d\end{cases}}$
Mean ${\frac {1}{3(d+c-b-a)}}\left({\frac {d^{3}-c^{3}}{d-c}}-{\frac {b^{3}-a^{3}}{b-a}}\right)$
Variance ${\frac {1}{6(d+c-b-a)}}\left({\frac {d^{4}-c^{4}}{d-c}}-{\frac {b^{4}-a^{4}}{b-a}}\right)-\mu ^{2}$
Entropy ${\frac {d-c+b-a}{2(d+c-b-a)}}+\ln \left({\frac {d+c-b-a}{2}}\right)$
MGF ${\frac {2}{d+c-b-a}}{\frac {1}{t^{2}}}\left({\frac {e^{dt}-e^{ct}}{d-c}}-{\frac {e^{bt}-e^{at}}{b-a}}\right)$
Each trapezoidal distribution has a lower bound a and an upper bound d, where a < d, beyond which no values or events on the distribution can occur (i.e. beyond which the probability is always zero). In addition, there are two sharp bending points (points where the density is continuous but not differentiable) within the probability distribution, which we will call b and c, which occur between a and d, such that a ≤ b ≤ c ≤ d.
The image to the right shows a perfectly linear trapezoidal distribution. However, not all trapezoidal distributions are so precisely shaped. In the standard case, where the middle part of the trapezoid is completely flat, and the side ramps are perfectly linear, all of the values between b and c will occur with equal frequency, and therefore all such points will be modes (local frequency maxima) of the distribution. On the other hand, if the middle part of the trapezoid is not completely flat, or if one or both of the side ramps are not perfectly linear, then the trapezoidal distribution in question is a generalized trapezoidal distribution,[1][2] and more complicated and context-dependent rules may apply. The side ramps of a trapezoidal distribution are not required to be symmetric in the general case, just as the sides of trapezoids in geometry are not required to be symmetric.
The non-central moments of the trapezoidal distribution[3] are
$E[X^{k}]={\frac {2}{d+c-b-a}}{\frac {1}{(k+1)(k+2)}}\left({\frac {d^{k+2}-c^{k+2}}{d-c}}-{\frac {b^{k+2}-a^{k+2}}{b-a}}\right)$
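These formulas can be checked numerically. The sketch below (function and parameter names are illustrative, not standard) integrates the piecewise PDF from the table above with a simple midpoint rule and compares the result with the closed-form first moment:

```python
def trapezoidal_pdf(x, a, b, c, d):
    """Piecewise-linear PDF of the trapezoidal distribution (a <= b < c <= d)."""
    h = 2.0 / (d + c - a - b)          # height of the flat middle section
    if a <= x < b:
        return h * (x - a) / (b - a)   # rising ramp
    if b <= x < c:
        return h                       # plateau
    if c <= x <= d:
        return h * (d - x) / (d - c)   # falling ramp
    return 0.0

def raw_moment(k, a, b, c, d):
    """k-th non-central moment E[X^k], from the closed form above."""
    return (2.0 / (d + c - b - a)) / ((k + 1) * (k + 2)) * (
        (d ** (k + 2) - c ** (k + 2)) / (d - c)
        - (b ** (k + 2) - a ** (k + 2)) / (b - a))

# Illustrative parameters; midpoint-rule quadrature checks the formulas.
a, b, c, d = 0.0, 1.0, 3.0, 4.0
n = 40000
step = (d - a) / n
xs = [a + (i + 0.5) * step for i in range(n)]
area = sum(trapezoidal_pdf(x, a, b, c, d) for x in xs) * step   # should be 1
mean = sum(x * trapezoidal_pdf(x, a, b, c, d) for x in xs) * step
```

For these symmetric parameters the mean is 2, matching $E[X^{1}]$; setting a = b and c = d recovers the uniform distribution, and b = c the triangular distribution.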
Special cases of the trapezoidal distribution include the uniform distribution (with a = b and c = d) and the triangular distribution (with b = c). Trapezoidal probability distributions seem to not be discussed very often in the literature. The uniform, triangular, Irwin-Hall, Bates, Poisson, normal, bimodal, and multimodal distributions are all more frequently discussed in the literature. This may be because these other (non-trapezoidal) distributions seem to occur more frequently in nature than the trapezoidal distribution does. The normal distribution in particular is especially common in nature, just as one would expect from the central limit theorem.
See also
• Trapezoid
• Probability distribution
• Central limit theorem
• Uniform distribution (continuous)
• Triangular distribution
• Irwin–Hall distribution
• Bates distribution
• Normal distribution
• Multimodal distribution
• Poisson distribution
Probability distributions (list)
Discrete
univariate
with finite
support
• Benford
• Bernoulli
• beta-binomial
• binomial
• categorical
• hypergeometric
• negative
• Poisson binomial
• Rademacher
• soliton
• discrete uniform
• Zipf
• Zipf–Mandelbrot
with infinite
support
• beta negative binomial
• Borel
• Conway–Maxwell–Poisson
• discrete phase-type
• Delaporte
• extended negative binomial
• Flory–Schulz
• Gauss–Kuzmin
• geometric
• logarithmic
• mixed Poisson
• negative binomial
• Panjer
• parabolic fractal
• Poisson
• Skellam
• Yule–Simon
• zeta
Continuous
univariate
supported on a
bounded interval
• arcsine
• ARGUS
• Balding–Nichols
• Bates
• beta
• beta rectangular
• continuous Bernoulli
• Irwin–Hall
• Kumaraswamy
• logit-normal
• noncentral beta
• PERT
• raised cosine
• reciprocal
• triangular
• U-quadratic
• uniform
• Wigner semicircle
supported on a
semi-infinite
interval
• Benini
• Benktander 1st kind
• Benktander 2nd kind
• beta prime
• Burr
• chi
• chi-squared
• noncentral
• inverse
• scaled
• Dagum
• Davis
• Erlang
• hyper
• exponential
• hyperexponential
• hypoexponential
• logarithmic
• F
• noncentral
• folded normal
• Fréchet
• gamma
• generalized
• inverse
• gamma/Gompertz
• Gompertz
• shifted
• half-logistic
• half-normal
• Hotelling's T-squared
• inverse Gaussian
• generalized
• Kolmogorov
• Lévy
• log-Cauchy
• log-Laplace
• log-logistic
• log-normal
• log-t
• Lomax
• matrix-exponential
• Maxwell–Boltzmann
• Maxwell–Jüttner
• Mittag-Leffler
• Nakagami
• Pareto
• phase-type
• Poly-Weibull
• Rayleigh
• relativistic Breit–Wigner
• Rice
• truncated normal
• type-2 Gumbel
• Weibull
• discrete
• Wilks's lambda
supported
on the whole
real line
• Cauchy
• exponential power
• Fisher's z
• Kaniadakis κ-Gaussian
• Gaussian q
• generalized normal
• generalized hyperbolic
• geometric stable
• Gumbel
• Holtsmark
• hyperbolic secant
• Johnson's SU
• Landau
• Laplace
• asymmetric
• logistic
• noncentral t
• normal (Gaussian)
• normal-inverse Gaussian
• skew normal
• slash
• stable
• Student's t
• Tracy–Widom
• variance-gamma
• Voigt
with support
whose type varies
• generalized chi-squared
• generalized extreme value
• generalized Pareto
• Marchenko–Pastur
• Kaniadakis κ-exponential
• Kaniadakis κ-Gamma
• Kaniadakis κ-Weibull
• Kaniadakis κ-Logistic
• Kaniadakis κ-Erlang
• q-exponential
• q-Gaussian
• q-Weibull
• shifted log-logistic
• Tukey lambda
Mixed
univariate
continuous-
discrete
• Rectified Gaussian
Multivariate
(joint)
• Discrete:
• Ewens
• multinomial
• Dirichlet
• negative
• Continuous:
• Dirichlet
• generalized
• multivariate Laplace
• multivariate normal
• multivariate stable
• multivariate t
• normal-gamma
• inverse
• Matrix-valued:
• LKJ
• matrix normal
• matrix t
• matrix gamma
• inverse
• Wishart
• normal
• inverse
• normal-inverse
• complex
Directional
Univariate (circular) directional
Circular uniform
univariate von Mises
wrapped normal
wrapped Cauchy
wrapped exponential
wrapped asymmetric Laplace
wrapped Lévy
Bivariate (spherical)
Kent
Bivariate (toroidal)
bivariate von Mises
Multivariate
von Mises–Fisher
Bingham
Degenerate
and singular
Degenerate
Dirac delta function
Singular
Cantor
Families
• Circular
• compound Poisson
• elliptical
• exponential
• natural exponential
• location–scale
• maximum entropy
• mixture
• Pearson
• Tweedie
• wrapped
• Category
• Commons
References
1. "Generalized Trapezoidal Distributions" (PDF). Semantic Scholar. March 2003.
2. van Dorp, J. René; Kotz, Samuel (2003-08-01). "Generalized trapezoidal distributions". Metrika. 58 (1): 85–97. doi:10.1007/s001840200230. ISSN 0026-1335.
3. Kacker, R. N.; Lawrence, J. F. (2007-02-26). "Trapezoidal and triangular distributions for Type B evaluation of standard uncertainty". Metrologia. 44 (2): 117–127. doi:10.1088/0026-1394/44/2/003. ISSN 0026-1394.
| Wikipedia |
Trapezoidal rule (differential equations)
In numerical analysis and scientific computing, the trapezoidal rule is a numerical method to solve ordinary differential equations derived from the trapezoidal rule for computing integrals. The trapezoidal rule is an implicit second-order method, which can be considered as both a Runge–Kutta method and a linear multistep method.
Method
Suppose that we want to solve the differential equation
$y'=f(t,y).$
The trapezoidal rule is given by the formula
$y_{n+1}=y_{n}+{\tfrac {1}{2}}h{\Big (}f(t_{n},y_{n})+f(t_{n+1},y_{n+1}){\Big )},$
where $h=t_{n+1}-t_{n}$ is the step size.[1]
This is an implicit method: the value $y_{n+1}$ appears on both sides of the equation, and to actually calculate it, we have to solve an equation which will usually be nonlinear. One possible method for solving this equation is Newton's method. We can use the Euler method to get a fairly good estimate for the solution, which can be used as the initial guess of Newton's method.[2] Stopping after a single corrector step from the Euler guess is equivalent to performing Heun's method.
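A minimal sketch of a single trapezoidal step, with Newton's method applied to the implicit equation and the Euler step as the initial guess (the function names are illustrative):

```python
import math

def trapezoidal_step(f, dfdy, t, y, h, tol=1e-12, max_iter=50):
    """One step of the implicit trapezoidal rule for y' = f(t, y).

    The nonlinear equation for y_{n+1} is solved with Newton's method,
    starting from the explicit Euler predictor."""
    y_new = y + h * f(t, y)                        # Euler initial guess
    for _ in range(max_iter):
        # Residual of y_new - y_n - h/2 * (f_n + f_{n+1}) = 0
        g = y_new - y - 0.5 * h * (f(t, y) + f(t + h, y_new))
        dg = 1.0 - 0.5 * h * dfdy(t + h, y_new)    # derivative w.r.t. y_new
        delta = g / dg
        y_new -= delta
        if abs(delta) < tol:
            break
    return y_new

# Solve y' = -y, y(0) = 1 on [0, 1]; the exact solution is exp(-t).
f = lambda t, y: -y
dfdy = lambda t, y: -1.0
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = trapezoidal_step(f, dfdy, t, y, h)
    t += h
```

For this linear test problem the Newton iteration converges in one step, and ten steps of size 0.1 reproduce $e^{-1}$ to about $3\times 10^{-4}$.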
Motivation
Integrating the differential equation from $t_{n}$ to $t_{n+1}$, we find that
$y(t_{n+1})-y(t_{n})=\int _{t_{n}}^{t_{n+1}}f(t,y(t))\,\mathrm {d} t.$
The trapezoidal rule states that the integral on the right-hand side can be approximated as
$\int _{t_{n}}^{t_{n+1}}f(t,y(t))\,\mathrm {d} t\approx {\tfrac {1}{2}}h{\Big (}f(t_{n},y(t_{n}))+f(t_{n+1},y(t_{n+1})){\Big )}.$
Now combine both formulas and use that $y_{n}\approx y(t_{n})$ and $y_{n+1}\approx y(t_{n+1})$ to get the trapezoidal rule for solving ordinary differential equations.[3]
Error analysis
It follows from the error analysis of the trapezoidal rule for quadrature that the local truncation error $\tau _{n}$ of the trapezoidal rule for solving differential equations can be bounded as:
$|\tau _{n}|\leq {\tfrac {1}{12}}h^{3}\max _{t}|y'''(t)|.$
Thus, the trapezoidal rule is a second-order method. This result can be used to show that the global error is $O(h^{2})$ as the step size $h$ tends to zero (see big O notation for the meaning of this).[4]
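The second-order behaviour can be verified empirically: halving the step size should reduce the global error by roughly a factor of four. The sketch below (illustrative, using simple fixed-point iteration for the implicit equation) solves y′ = −y², y(0) = 1, whose exact solution is y(t) = 1/(1 + t):

```python
def trapezoid_solve(n):
    """Integrate y' = -y^2, y(0) = 1 on [0, 1] with n trapezoidal steps,
    solving the implicit equation by fixed-point iteration."""
    f = lambda t, y: -y * y
    h = 1.0 / n
    t, y = 0.0, 1.0
    for _ in range(n):
        y_new = y + h * f(t, y)                    # Euler initial guess
        for _ in range(100):
            y_next = y + 0.5 * h * (f(t, y) + f(t + h, y_new))
            if abs(y_next - y_new) < 1e-14:
                y_new = y_next
                break
            y_new = y_next
        t, y = t + h, y_new
    return y

exact = 0.5                                        # y(1) = 1/(1 + 1)
e1 = abs(trapezoid_solve(50) - exact)
e2 = abs(trapezoid_solve(100) - exact)
ratio = e1 / e2                                    # close to 4 for order 2
```

The observed ratio is close to 4, consistent with a global error of $O(h^{2})$.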
Stability
The region of absolute stability for the trapezoidal rule is
$\{z\in \mathbb {C} \mid \operatorname {Re} (z)<0\}.$
This includes the left-half plane, so the trapezoidal rule is A-stable. The second Dahlquist barrier states that the trapezoidal rule is the most accurate amongst the A-stable linear multistep methods. More precisely, a linear multistep method that is A-stable has at most order two, and the error constant of a second-order A-stable linear multistep method cannot be better than the error constant of the trapezoidal rule.[5]
In fact, the region of absolute stability for the trapezoidal rule is precisely the left-half plane. This means that if the trapezoidal rule is applied to the linear test equation y' = λy, the numerical solution decays to zero if and only if the exact solution does.
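This follows from the stability function of the method: applying the rule to y′ = λy and writing z = hλ gives $y_{n+1}=R(z)\,y_{n}$ with $R(z)=(1+z/2)/(1-z/2)$, and |R(z)| < 1 exactly when Re(z) < 0, with |R(z)| = 1 on the imaginary axis. A short numerical check:

```python
def stability_function(z):
    """R(z) of the trapezoidal rule applied to y' = lam*y, with z = h*lam."""
    return (1 + z / 2) / (1 - z / 2)

damped = abs(stability_function(-1 + 2j))    # Re(z) < 0: modes are damped
neutral = abs(stability_function(3j))        # imaginary axis: |R| = 1
growing = abs(stability_function(0.5))       # Re(z) > 0: modes grow
```

Because |R| = 1 on the imaginary axis, purely oscillatory modes are neither damped nor amplified, a property shared with the Crank–Nicolson method.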
Notes
1. Iserles 1996, p. 8; Süli & Mayers 2003, p. 324
2. Süli & Mayers 2003, p. 324
3. Iserles 1996, p. 8; Süli & Mayers 2003, p. 324
4. Iserles 1996, p. 9; Süli & Mayers 2003, p. 325
5. Süli & Mayers 2003, p. 324
References
• Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.
• Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0521007941.
See also
• Crank–Nicolson method
Numerical methods for integration
First-order methods
• Euler method
• Backward Euler
• Semi-implicit Euler
• Exponential Euler
Second-order methods
• Verlet integration
• Velocity Verlet
• Trapezoidal rule
• Beeman's algorithm
• Midpoint method
• Heun's method
• Newmark-beta method
• Leapfrog integration
Higher-order methods
• Exponential integrator
• Runge–Kutta methods
• List of Runge–Kutta methods
• Linear multistep method
• General linear methods
• Backward differentiation formula
• Yoshida
• Gauss–Legendre method
Theory
• Symplectic integrator
| Wikipedia |
Trapped surface
Closed trapped surfaces are a concept used in black hole solutions of general relativity[1] which describe the inner region of an event horizon. Roger Penrose defined the notion of closed trapped surfaces in 1965.[2] A trapped surface is one where light is not moving away from the black hole. The boundary of the union of all trapped surfaces around a black hole is called an apparent horizon.
A related term trapped null surface is often used interchangeably. However, when discussing causal horizons, trapped null surfaces are defined as only null vector fields giving rise to null surfaces. But marginally trapped surfaces may be spacelike, timelike or null.[3]
Definition
They are spacelike surfaces (topological spheres, tubes, etc.) with restricted bounds, their area tending to decrease locally along any possible future direction and with a dual definition with respect to the past. The trapped surface is a spacelike surface of co-dimension 2, in a Lorentzian spacetime. It follows[4] that any normal vector can be expressed as a linear combination of two future directed null vectors, normalised by:
$k_{+}\cdot k_{-}=-2$
The k+ vector is directed “outwards” and k− “inwards”. The set of all such vectors engenders one outgoing and one ingoing null congruence. The surface is designated trapped if the cross sections of both congruences decrease in area as they exit the surface; and this is apparent in the mean curvature vector, which is:
$H^{\alpha }=-\theta _{+}k_{-}^{\alpha }-\theta _{-}k_{+}^{\alpha }$
The surface is trapped if both the null expansions θ± are negative, signifying that the mean curvature vector is timelike and future directed. The surface is marginally trapped if the outer expansion θ+ = 0 and the inner expansion θ− ≤ 0.
Trapped null surface
A trapped null surface is a set of points defined in the context of general relativity as a closed surface on which outward-pointing light rays are actually converging (moving inwards).
Trapped null surfaces are used in the definition of the apparent horizon which typically surrounds a black hole.
Definition
We take a (compact, orientable, spacelike) surface, and find its outward pointing normal vectors. The basic picture to think of here is a ball with pins sticking out of it; the pins are the normal vectors.
Now we look at light rays that are directed outward, along these normal vectors. The rays will either be diverging (the usual case one would expect) or converging. Intuitively, if the light rays are converging, this means that the light is moving backwards inside of the ball. If all the rays around the entire surface are converging, we say that there is a trapped null surface.
More formally, if every null congruence orthogonal to a spacelike two-surface has negative expansion, then such surface is said to be trapped.
See also
• Null hypersurface
• Raychaudhuri equation
References
1. Senovilla, Jose M. M. (September 15, 2011). "Trapped Surfaces". International Journal of Modern Physics D. 20 (11): 2139–2168. arXiv:1107.1344. Bibcode:2011IJMPD..20.2139S. doi:10.1142/S0218271811020354. S2CID 119249809.
2. Penrose, Roger (January 1965). "Gravitational collapse and space-time singularities". Phys. Rev. Lett. 14 (3): 57–59. Bibcode:1965PhRvL..14...57P. doi:10.1103/PhysRevLett.14.57.
3. Nielsen, Alex B. (February 10, 2014). "Revisiting Vaidya Horizons". Galaxies. 2 (1): 62–71. Bibcode:2014Galax...2...62N. doi:10.3390/galaxies2010062.
4. Bengtsson, Ingemar (December 22, 2011). "Some Examples of Trapped Surfaces". arXiv:1112.5318 [gr-qc].
• S. W. Hawking & G. F. R. Ellis (1975). The large scale structure of space-time. Cambridge University Press. This is the gold standard reference on black holes because of its place in history. It is also quite thorough.
• Robert M. Wald (1984). General Relativity. University of Chicago Press. ISBN 9780226870335. This book is somewhat more up-to-date.
Roger Penrose
Books
• The Emperor's New Mind (1989)
• Shadows of the Mind (1994)
• The Road to Reality (2004)
• Cycles of Time (2010)
• Fashion, Faith, and Fantasy in the New Physics of the Universe (2016)
Coauthored books
• The Nature of Space and Time (with Stephen Hawking) (1996)
• The Large, the Small and the Human Mind (with Abner Shimony, Nancy Cartwright and Stephen Hawking) (1997)
• White Mars or, The Mind Set Free (with Brian W. Aldiss) (1999)
Academic works
• Techniques of Differential Topology in Relativity (1972)
• Spinors and Space-Time: Volume 1, Two-Spinor Calculus and Relativistic Fields (with Wolfgang Rindler) (1987)
• Spinors and Space-Time: Volume 2, Spinor and Twistor Methods in Space-Time Geometry (with Wolfgang Rindler) (1988)
Concepts
• Twistor theory
• Spin network
• Abstract index notation
• Black hole bomb
• Geometry of spacetime
• Cosmic censorship
• Weyl curvature hypothesis
• Penrose inequalities
• Penrose interpretation of quantum mechanics
• Moore–Penrose inverse
• Newman–Penrose formalism
• Penrose diagram
• Penrose–Hawking singularity theorems
• Penrose inequality
• Penrose process
• Penrose tiling
• Penrose triangle
• Penrose stairs
• Penrose graphical notation
• Penrose transform
• Penrose–Terrell effect
• Orchestrated objective reduction/Penrose–Lucas argument
• FELIX experiment
• Trapped surface
• Andromeda paradox
• Conformal cyclic cosmology
Related
• Lionel Penrose (father)
• Oliver Penrose (brother)
• Jonathan Penrose (brother)
• Shirley Hodgson (sister)
• John Beresford Leathes (grandfather)
• Illumination problem
• Quantum mind
| Wikipedia |
Trapping region
In applied mathematics, a trapping region of a dynamical system is a region such that every trajectory that starts within the trapping region will move to the region's interior and remain there as the system evolves.
More precisely, given a dynamical system with flow $\phi _{t}$ defined on the phase space $D$, a subset of the phase space $N$ is a trapping region if it is compact and $\phi _{t}(N)\subset \mathrm {int} (N)$ for all $t>0$.[1]
References
1. Meiss, J. D., Differential dynamical systems, Philadelphia: Society for Industrial and Applied Mathematics, 2007.
Systems science
System
types
• Art
• Biological
• Coupled human–environment
• Ecological
• Economic
• Multi-agent
• Nervous
• Social
Concepts
• Doubling time
• Leverage points
• Limiting factor
• Negative feedback
• Positive feedback
Theoretical
fields
• Control theory
• Cybernetics
• Earth system science
• Living systems
• Sociotechnical system
• Systemics
• Urban metabolism
• World-systems theory
• Analysis
• Biology
• Dynamics
• Ecology
• Engineering
• Neuroscience
• Pharmacology
• Philosophy
• Psychology
• Theory (Systems thinking)
Scientists
• Alexander Bogdanov
• Russell L. Ackoff
• William Ross Ashby
• Ruzena Bajcsy
• Béla H. Bánáthy
• Gregory Bateson
• Anthony Stafford Beer
• Richard E. Bellman
• Ludwig von Bertalanffy
• Margaret Boden
• Kenneth E. Boulding
• Murray Bowen
• Kathleen Carley
• Mary Cartwright
• C. West Churchman
• Manfred Clynes
• George Dantzig
• Edsger W. Dijkstra
• Fred Emery
• Heinz von Foerster
• Stephanie Forrest
• Jay Wright Forrester
• Barbara Grosz
• Charles A. S. Hall
• Mike Jackson
• Lydia Kavraki
• James J. Kay
• Faina M. Kirillova
• George Klir
• Allenna Leonard
• Edward Norton Lorenz
• Niklas Luhmann
• Humberto Maturana
• Margaret Mead
• Donella Meadows
• Mihajlo D. Mesarovic
• James Grier Miller
• Radhika Nagpal
• Howard T. Odum
• Talcott Parsons
• Ilya Prigogine
• Qian Xuesen
• Anatol Rapoport
• John Seddon
• Peter Senge
• Claude Shannon
• Katia Sycara
• Eric Trist
• Francisco Varela
• Manuela M. Veloso
• Kevin Warwick
• Norbert Wiener
• Jennifer Wilby
• Anthony Wilden
Applications
• Systems theory in anthropology
• Systems theory in archaeology
• Systems theory in political science
Organizations
• List
• Principia Cybernetica
• Category
• Portal
• Commons
| Wikipedia |
Traveling plane wave
In mathematics and physics, a traveling plane wave is a special case of plane wave, namely a field whose evolution in time can be described as simple translation of its values at a constant wave speed $c$, along a fixed direction of propagation ${\vec {n}}$.
Such a field can be written as
$F({\vec {x}},t)=G\left({\vec {x}}\cdot {\vec {n}}-ct\right)\,$
where $G(u)$ is a function of a single real parameter $u=d-ct$. The function $G$ describes the profile of the wave, namely the value of the field at time $t=0$, for each displacement $d={\vec {x}}\cdot {\vec {n}}$. For each displacement $d$, the moving plane perpendicular to ${\vec {n}}$ at distance $d+ct$ from the origin is called a wavefront. This plane too travels along the direction of propagation ${\vec {n}}$ with velocity $c$; and the value of the field is then the same, and constant in time, at every one of its points.
The wave $F$ may be a scalar or vector field; its values are the values of $G$.
A sinusoidal plane wave is a special case, when $G(u)$ is a sinusoidal function of $u$.
Properties
A traveling plane wave can be studied by ignoring the dimensions of space perpendicular to the vector ${\vec {n}}$; that is, by considering the wave $F(z{\vec {n}},t)=G(z-ct)$ on a one-dimensional medium, with a single position coordinate $z$.
For a scalar traveling plane wave in two or three dimensions, the gradient of the field is always collinear with the direction ${\vec {n}}$; specifically, $\nabla F({\vec {x}},t)={\vec {n}}G'({\vec {x}}\cdot {\vec {n}}-ct)$, where $G'$ is the derivative of $G$. Moreover, a traveling plane wave $F$ of any shape satisfies the partial differential equation
$\nabla F=-{\frac {\vec {n}}{c}}{\frac {\partial F}{\partial t}}$
Traveling plane waves are also special solutions of the wave equation in a homogeneous medium.
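These properties can be illustrated with a short sketch (all names hypothetical), using a Gaussian profile in two dimensions; both the translation of wavefronts and the gradient identity above are checked numerically:

```python
import math

def make_wave(G, n, c):
    """F(x, y, t) = G(x*n_x + y*n_y - c*t) for a unit direction n = (n_x, n_y)."""
    def F(x, y, t):
        return G(x * n[0] + y * n[1] - c * t)
    return F

G = lambda u: math.exp(-u * u)                 # wave profile
n = (0.6, 0.8)                                 # unit vector: 0.36 + 0.64 = 1
c = 2.0
F = make_wave(G, n, c)

# Translation: the field at time t is the t = 0 field shifted by c*t along n.
shifted = F(1.0 + c * 1.0 * n[0], 0.5 + c * 1.0 * n[1], 1.0)
initial = F(1.0, 0.5, 0.0)

# Gradient identity: dF/dx = -(n_x / c) * dF/dt, via central differences.
eps = 1e-6
dFdx = (F(1.0 + eps, 0.5, 0.3) - F(1.0 - eps, 0.5, 0.3)) / (2 * eps)
dFdt = (F(1.0, 0.5, 0.3 + eps) - F(1.0, 0.5, 0.3 - eps)) / (2 * eps)
```

Both checks reduce to the chain rule: $\nabla F={\vec {n}}\,G'$ and $\partial F/\partial t=-c\,G'$.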
See also
• Spherical wave
• Spherical sinusoidal wave
• Standing wave
References
| Wikipedia |
Traveling tournament problem
The traveling tournament problem (TTP) is a mathematical optimization problem. The question involves scheduling a series of teams such that:
1. Each team plays every other team twice, once at home and once in the other's stadium.
2. No team plays the same opponent in two consecutive weeks.
3. No team plays more than three games in a row at home, or three games in a row on the road.
A matrix of the travel distances between the teams' home cities is provided. All teams start and end at their own home city, and the goal is to minimize the total travel distance for every team over the course of the whole season.[1]
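The objective and the streak constraint can be made concrete with a short sketch (helper names are illustrative). Each team's season is represented as the ordered list of cities it plays in (its own index for a home game, the opponent's index for a road game):

```python
def team_travel_distance(dist, team, venues):
    """Total distance traveled by `team` over a season.

    `dist` is a symmetric matrix of inter-city distances; `venues` is the
    ordered list of cities the team plays in.  The team starts the season
    at home and returns home after the last game."""
    total, at = 0, team
    for city in venues:
        total += dist[at][city]
        at = city
    return total + dist[at][team]               # travel home at season's end

def valid_streaks(venues, team, limit=3):
    """No more than `limit` consecutive home games or consecutive road games."""
    streak, last_home = 0, None
    for city in venues:
        home = (city == team)
        streak = streak + 1 if home == last_home else 1
        if streak > limit:
            return False
        last_home = home
    return True

dist = [[0, 10, 20], [10, 0, 15], [20, 15, 0]]  # three illustrative cities
season = [0, 1, 2, 0]                           # team 0: home, away, away, home
total = team_travel_distance(dist, 0, season)   # 10 + 15 + 20 = 45
ok = valid_streaks(season, 0)
```

A full TTP instance additionally requires a double round-robin (each pair meeting once in each city) and no repeated opponent in consecutive rounds; the sketch covers only the travel objective and the streak rule.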
There have been many papers published on the subject, and a contest exists to find the best solutions for certain specific schedules.[2]
References
1. "Solving the Traveling Tournament Problem" (PDF).
2. "Challenge Traveling Tournament Problems". mat.gsia.cmu.edu. Retrieved 2018-06-18.
| Wikipedia |
Treatise on Analysis
Treatise on Analysis is a translation by Ian G. Macdonald of the nine-volume work Éléments d'analyse on mathematical analysis by Jean Dieudonné, and is an expansion of his textbook Foundations of Modern Analysis. It is a successor to the various Cours d'Analyse by Augustin-Louis Cauchy, Camille Jordan, and Édouard Goursat.
Treatise on Analysis
AuthorJean Dieudonné
Original titleÉlements d'analyse
LanguageFrench
SubjectMathematical analysis
Contents and publication history
Volume I
The first volume was originally a stand-alone graduate textbook with a different title. It was first written in English and later translated into French, unlike the other volumes which were first written in French. It has been republished several times and is much more common than the later volumes of the series.
The contents include
• Chapter I: sets
• Chapter II Real numbers
• Chapter III Metric spaces
• Chapter IV The real line
• Chapter V Normed spaces
• Chapter VI Hilbert spaces
• Chapter VII Spaces of continuous functions
• Chapter VIII Differential calculus (This uses the Cauchy integral rather than the more common Riemann integral of functions.)
• Chapter IX Analytic functions (of a complex variable)
• Chapter X Existence theorems (for ordinary differential equations)
• Chapter XI Elementary spectral theory
• Dieudonné, J. (1960), Foundations of modern analysis, Pure and Applied Mathematics, vol. X, New York-London: Academic Press, MR 0120319
• Dieudonné, J. (1963), Éléments d'analyse. Tome I: Fondements de l'analyse moderne, Cahiers Scientifiques, vol. XXVIII, Paris: Gauthier-Villars, MR 0161945
• Dieudonné, J. (1968), Éléments d'analyse. Tome I: Fondements de l'analyse moderne, Cahiers Scientifiques, vol. XXVIII (2nd ed.), Paris: Gauthier-Villars, MR 0235945
• Dieudonné, J. (1969), Foundations of modern analysis., Pure and Applied Mathematics, vol. 10-I (2nd ed.), New York-London: Academic Press, ISBN 978-0122155505, MR 0349288
Volume II
The second volume includes
• Chapter XII Topology and topological algebra
• Chapter XIII Integration
• Chapter XIV Integration in locally compact groups
• Chapter XV Normed algebras and spectral theory
• Dieudonné, J. (1968), Éléments d'analyse. Tome II: Chapitres XII à XV, Cahiers Scientifiques, vol. XXXI, Paris: Gauthier-Villars, MR 0235946
• Dieudonné, J. (1970), Treatise on analysis. Vol. II, Pure and Applied Mathematics, vol. 10-II, New York-London: Academic Press, MR 0258551
• Dieudonné, J. (1976), Treatise on analysis. Vol. II, Pure and Applied Mathematics, vol. 10-II (2nd ed.), New York-London: Academic Press, ISBN 0-12-215502-5, MR 0530406
Volume III
The third volume includes chapter XVI on differential manifolds and chapter XVII on distributions and differential operators.
Volume IV
The fourth volume includes
• Chapter XVIII Differential systems
• Chapter XIX Lie groups
• Chapter XX Riemannian geometry
Volume V
Volume V consists of chapter XXI on compact Lie groups.
Volume VI
Volume VI consists of chapter XXII on harmonic analysis (mostly on locally compact groups)
Volume VII
Volume VII consists of the first part of chapter XXIII on linear functional equations. This chapter is considerably more advanced than most of the other chapters.
Volume VIII
Volume VIII consists of the second part of chapter XXIII on linear functional equations.
Volume IX
Volume IX contains chapter XXIV on elementary differential topology. Unlike the earlier volumes there is no English translation of it.
• Dieudonné, J. (1982), Éléments d'analyse. Tome IX. Chapitre XXIV, Cahiers Scientifiques, vol. XL11, Paris: Gauthier-Villars, ISBN 2-04-011499-8, MR 0658305
Volume X
Dieudonne planned a final volume containing chapter XXV on nonlinear problems, but this was never published.
References
• Nachbin, Leopoldo (1961), "Review: J. Dieudonné, Foundations of Modern Analysis", Bull. Amer. Math. Soc., 67 (3): 246–250, doi:10.1090/s0002-9904-1961-10566-1
• Frank, Peter (1960), "Book reviews: Foundations of Modern Analysis. J. Dieudonné. Academic Press, New York, 1960", Science, 132 (3441): 1759, doi:10.1126/science.132.3441.1759-a
• Marsden, Jerrold E. (1980), "Review: Jean Dieudonné, Treatise on analysis", Bull. Amer. Math. Soc. (N.S.), 3 (1): 719–724, doi:10.1090/s0273-0979-1980-14804-1
| Wikipedia |
Tree-graded space
A geodesic metric space $X$ is called a tree-graded space with respect to a collection of connected proper subsets called pieces, if any two distinct pieces intersect in at most one point, and every non-trivial simple geodesic triangle of $X$ is contained in one of the pieces.
If the pieces have bounded diameter, tree-graded spaces behave like real trees in their coarse geometry (in the sense of Gromov), while allowing non-tree-like behavior within the pieces.
Tree-graded spaces were introduced by Cornelia Druţu and Mark Sapir (2005) in their study of the asymptotic cones of hyperbolic groups.
References
• Druţu, Cornelia; Sapir, Mark (2005), "Tree-graded spaces and asymptotic cones of groups", Topology, 44 (5): 959–1058, arXiv:math/0405030, doi:10.1016/j.top.2005.03.003, MR 2153979.
| Wikipedia |
Tree (data structure)
In computer science, a tree is a widely used abstract data type that represents a hierarchical tree structure with a set of connected nodes. Each node in the tree can be connected to many children (depending on the type of tree), but must be connected to exactly one parent,[1] except for the root node, which has no parent (the root node is the top-most node in the tree hierarchy). These constraints mean there are no cycles or "loops" (no node can be its own ancestor), and also that each child can be treated like the root node of its own subtree, making recursion a useful technique for tree traversal. In contrast to linear data structures, many trees cannot be represented by relationships between neighboring nodes (the parent and child nodes of the node under consideration, if they exist) in a single straight line (called an edge or link between two adjacent nodes).
Binary trees are a commonly used type, which constrain the number of children for each parent to at most two. When the order of the children is specified, this data structure corresponds to an ordered tree in graph theory. A value or pointer to other data may be associated with every node in the tree, or sometimes only with the leaf nodes, which have no children nodes.
The Abstract Data Type (ADT) can be represented in a number of ways, including a list of parents with pointers to children, a list of children with pointers to parents, or a list of nodes and a separate list of parent-child relations (a specific type of adjacency list). Representations might also be more complicated, for example using indexes or ancestor lists for performance.
Trees as used in computing are similar to but can be different from mathematical constructs of trees in graph theory, trees in set theory, and trees in descriptive set theory.
Applications
Trees are commonly used to represent or manipulate hierarchical data in applications such as:
• File systems for:
• Directory structure used to organize subdirectories and files (symbolic links create non-tree graphs, as do multiple hard links to the same file or directory)
• The mechanism used to allocate and link blocks of data on the storage device
• Class hierarchy or "inheritance tree" showing the relationships among classes in object-oriented programming; multiple inheritance produces non-tree graphs
• Abstract syntax trees for computer languages
• Natural language processing:
• Parse trees
• Modeling utterances in a generative grammar
• Dialogue tree for generating conversations
• Document Object Models ("DOM tree") of XML and HTML documents
• Search trees store data in a way that makes an efficient search algorithm possible via tree traversal
• A binary search tree is a type of binary tree
• Representing sorted lists of data
• Computer-generated imagery:
• Space partitioning, including binary space partitioning
• Digital compositing
• Storing Barnes–Hut trees used to simulate galaxies
• Implementing heaps
• Nested set collections
• Hierarchical taxonomies such as the Dewey Decimal Classification with sections of increasing specificity.
• Hierarchical temporal memory
• Genetic programming
• Hierarchical clustering
Trees can be used to represent and manipulate various mathematical structures, such as:
• Paths through an arbitrary node-and-edge graph (including multigraphs), by making multiple nodes in the tree for each graph node used in multiple paths
• Any mathematical hierarchy
Tree structures are often used for mapping the relationships between things, such as:
• Components and subcomponents which can be visualized in an exploded-view drawing
• Subroutine calls used to identify which subroutines in a program call other subroutines non-recursively
• Inheritance of DNA among species by evolution, of source code by software projects (e.g. Linux distribution timeline), of designs in various types of cars, etc.
• The contents of hierarchical namespaces
JSON and YAML documents can be thought of as trees, but are typically represented by nested lists and dictionaries.
Terminology
A node is a structure which may contain data and connections to other nodes, sometimes called edges or links. Each node in a tree has zero or more child nodes, which are below it in the tree (by convention, trees are drawn with descendants going downwards). A node that has a child is called the child's parent node (or superior). All nodes have exactly one parent, except the topmost root node, which has none. A node might have many ancestor nodes, such as the parent's parent. Child nodes with the same parent are sibling nodes. Typically siblings have an order, with the first one conventionally drawn on the left. Some definitions allow a tree to have no nodes at all, in which case it is called empty.
An internal node (also known as an inner node, inode for short, or branch node) is any node of a tree that has child nodes. Similarly, an external node (also known as an outer node, leaf node, or terminal node) is any node that does not have child nodes.
The height of a node is the length of the longest downward path to a leaf from that node. The height of the root is the height of the tree. The depth of a node is the length of the path to its root (i.e., its root path). Thus the root node has depth zero, leaf nodes have height zero, and a tree with only a single node (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (tree with no nodes, if such are allowed) has height −1.
Each non-root node can be treated as the root node of its own subtree, which includes that node and all its descendants.[lower-alpha 1][2]
Other terms used with trees:
Neighbor
Parent or child.
Ancestor
A node reachable by repeated proceeding from child to parent.
Descendant
A node reachable by repeated proceeding from parent to child. Also known as subchild.
Degree
For a given node, its number of children. A leaf necessarily has degree zero.
Degree of tree
The degree of a tree is the maximum degree of a node in the tree.
Distance
The number of edges along the shortest path between two nodes.
Level
The level of a node is the number of edges along the unique path between it and the root node.[3] This is the same as depth.
Width
The number of nodes in a level.
Breadth
The number of leaves.
Forest
A set of one or more disjoint trees.
Ordered tree
A rooted tree in which an ordering is specified for the children of each vertex. The book The Art of Computer Programming uses the term oriented tree.[4]
Size of a tree
Number of nodes in the tree.
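Much of the terminology above can be made concrete with a small sketch in Python (the `Node` class and its method names are illustrative, not from any particular library):

```python
class Node:
    """A tree node holding a value, links to its children, and its parent."""
    def __init__(self, value, children=()):
        self.value = value
        self.parent = None          # the root keeps parent == None
        self.children = list(children)
        for c in self.children:
            c.parent = self

    def degree(self):
        """Number of children; a leaf necessarily has degree zero."""
        return len(self.children)

    def height(self):
        """Length of the longest downward path from this node to a leaf."""
        return 0 if not self.children else 1 + max(c.height() for c in self.children)

    def depth(self):
        """Length of the path from this node up to the root."""
        return 0 if self.parent is None else 1 + self.parent.depth()

# a root with two children, one of which has a single leaf child
root = Node('r', [Node('a', [Node('x')]), Node('b')])
```

Here `root.height()` is 2, while the leaf `x` has depth 2 and height 0, matching the conventions above.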
Examples of trees and non-trees
Not a tree: two non-connected parts, A→B and C→D→E. There is more than one root.
Not a tree: undirected cycle 1-2-4-3. 4 has more than one parent (inbound edge).
Not a tree: cycle B→C→E→D→B. B has more than one parent (inbound edge).
Not a tree: cycle A→A. A is the root but it also has a parent.
Each linear list is trivially a tree
Common operations
• Enumerating all the items
• Enumerating a section of a tree
• Searching for an item
• Adding a new item at a certain position on the tree
• Deleting an item
• Pruning: Removing a whole section of a tree
• Grafting: Adding a whole section to a tree
• Finding the root for any node
• Finding the lowest common ancestor of two nodes
Traversal and search methods
Stepping through the items of a tree, by means of the connections between parents and children, is called walking the tree, and the action is a walk of the tree. Often, an operation might be performed when a pointer arrives at a particular node. A walk in which each parent node is traversed before its children is called a pre-order walk; a walk in which the children are traversed before their respective parents are traversed is called a post-order walk; a walk in which a node's left subtree, then the node itself, and finally its right subtree are traversed is called an in-order traversal. (This last scenario, referring to exactly two subtrees, a left subtree and a right subtree, assumes specifically a binary tree.) A level-order walk effectively performs a breadth-first search over the entirety of a tree; nodes are traversed level by level, where the root node is visited first, followed by its direct child nodes and their siblings, followed by its grandchild nodes and their siblings, etc., until all nodes in the tree have been traversed.
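The four walks described above can be sketched for a binary tree as follows (a minimal Python illustration; the `Node` class and function names are assumptions, not a standard API):

```python
from collections import deque

class Node:
    """A binary tree node; left and right may be None."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def pre_order(n):   # parent before children
    return [] if n is None else [n.value] + pre_order(n.left) + pre_order(n.right)

def in_order(n):    # left subtree, node, right subtree
    return [] if n is None else in_order(n.left) + [n.value] + in_order(n.right)

def post_order(n):  # children before parent
    return [] if n is None else post_order(n.left) + post_order(n.right) + [n.value]

def level_order(root):
    """Breadth-first: visit nodes level by level, left to right."""
    out, q = [], deque([root] if root is not None else [])
    while q:
        n = q.popleft()
        out.append(n.value)
        if n.left is not None:
            q.append(n.left)
        if n.right is not None:
            q.append(n.right)
    return out

#          F
#        /   \
#       B     G
#      / \     \
#     A   D     I
#        / \   /
#       C   E H
t = Node('F',
         Node('B', Node('A'), Node('D', Node('C'), Node('E'))),
         Node('G', None, Node('I', Node('H'))))
```

For this tree, `in_order(t)` returns the values in the order A, B, C, D, E, F, G, H, I, while `level_order(t)` returns F, B, G, A, D, I, C, E, H.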
Representations
There are many different ways to represent trees. In working memory, nodes are typically dynamically allocated records with pointers to their children, their parents, or both, as well as any associated data. If of a fixed size, the nodes might be stored in a list. Nodes and relationships between nodes might be stored in a separate special type of adjacency list. In relational databases, nodes are typically represented as table rows, with indexed row IDs facilitating pointers between parents and children.
Nodes can also be stored as items in an array, with relationships between them determined by their positions in the array (as in a binary heap).
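The index arithmetic behind such an array representation can be sketched as follows (a minimal Python illustration with 0-based indexing; the helper names are not from any library):

```python
# In an array-backed binary tree (as in a binary heap, 0-based indexing),
# the children of the node at index i sit at 2*i + 1 and 2*i + 2,
# and the parent of the node at index i > 0 sits at (i - 1) // 2.
heap = [2, 5, 8, 9, 6]   # a small min-heap stored as a flat array

def child_indices(i, a):
    """Indices of the (at most two) children of node i that exist in array a."""
    return [j for j in (2 * i + 1, 2 * i + 2) if j < len(a)]

def parent_index(i):
    """Index of the parent of node i (undefined for the root, i == 0)."""
    return (i - 1) // 2
```

With the array above, the root `heap[0]` has children at indices 1 and 2, and `heap[4]` has its parent at index 1.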
A binary tree can be implemented as a list of lists: the head of a list (the value of the first term) is the left child (subtree), while the tail (the list of second and subsequent terms) is the right child (subtree). This can be modified to allow values as well, as in Lisp S-expressions, where the head (value of first term) is the value of the node, the head of the tail (value of second term) is the left child, and the tail of the tail (list of third and subsequent terms) is the right child.
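A simplified variant of this encoding, in which a node is a list `[value, left, right]` and a leaf is the one-element list `[value]`, can be sketched in Python (the expression tree and `evaluate` helper are illustrative assumptions, not part of any standard):

```python
# a node is [value, left, right]; a leaf is just [value]
tree = ['+', ['*', [2], [3]], [4]]   # encodes the expression (2 * 3) + 4

def evaluate(t):
    head, *rest = t
    if not rest:                      # leaf: the head is the value itself
        return head
    left, right = rest
    op = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}[head]
    return op(evaluate(left), evaluate(right))
```

Here `evaluate(tree)` walks the nested lists recursively and returns 10.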
Ordered trees can be naturally encoded by finite sequences, for example with natural numbers.[5]
Type theory
As an abstract data type, the abstract tree type T with values of some type E is defined, using the abstract forest type F (list of trees), by the functions:
value: T → E
children: T → F
nil: () → F
node: E × F → T
with the axioms:
value(node(e, f)) = e
children(node(e, f)) = f
In terms of type theory, a tree is an inductive type defined by the constructors nil (empty forest) and node (tree with root node with given value and children).
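These functions and axioms can be modeled in Python, using a frozen dataclass for the tree type T and ordinary lists for the forest type F (one illustrative modeling among many, not a canonical one):

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Tree:
    value: object
    children: List['Tree']

def nil():          # nil: () -> F, the empty forest
    return []

def node(e, f):     # node: E x F -> T
    return Tree(e, f)

def value(t):       # value: T -> E
    return t.value

def children(t):    # children: T -> F
    return t.children

# the two axioms hold by construction:
t = node('a', [node('b', nil()), node('c', nil())])
assert value(t) == 'a'
assert children(t) == [node('b', nil()), node('c', nil())]
```

The `assert` statements at the end correspond directly to the axioms value(node(e, f)) = e and children(node(e, f)) = f.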
Mathematical terminology
Viewed as a whole, a tree data structure is an ordered tree, generally with values attached to each node. Concretely, it is (if required to be non-empty):
• A rooted tree with the "away from root" direction (a more narrow term is an "arborescence"), meaning:
• A directed graph,
• whose underlying undirected graph is a tree (any two vertices are connected by exactly one simple path),
• with a distinguished root (one vertex is designated as the root),
• which determines the direction on the edges (arrows point away from the root; given an edge, the node that the edge points from is called the parent and the node that the edge points to is called the child), together with:
• an ordering on the child nodes of a given node, and
• a value (of some data type) at each node.
Often trees have a fixed (more properly, bounded) branching factor (outdegree), particularly always having two child nodes (possibly empty, hence at most two non-empty child nodes), hence a "binary tree".
Allowing empty trees makes some definitions simpler, some more complicated: a rooted tree must be non-empty, hence if empty trees are allowed the above definition instead becomes "an empty tree or a rooted tree such that ...". On the other hand, empty trees simplify defining fixed branching factor: with empty trees allowed, a binary tree is a tree such that every node has exactly two children, each of which is a tree (possibly empty). A complete set of operations on such trees must then include a fork operation.
See also
• Tree structure (general)
• Category:Trees (data structures) (catalogs types of computational trees)
Notes
1. This is different from the formal definition of subtree used in graph theory, which is a subgraph that forms a tree – it need not include all descendants. For example, the root node by itself is a subtree in the graph theory sense, but not in the data structure sense (unless there are no descendants).
References
1. Subero, Armstrong (2020). "3. Tree Data Structure". Codeless Data Structures and Algorithms. Berkeley, CA: Apress. doi:10.1007/978-1-4842-5725-8. ISBN 978-1-4842-5724-1. A parent can have multiple child nodes. ... However, a child node cannot have multiple parents. If a child node has multiple parents, then it is what we call a graph.
2. Weisstein, Eric W. "Subtree". MathWorld.
3. Susanna S. Epp (Aug 2010). Discrete Mathematics with Applications. Pacific Grove, CA: Brooks/Cole Publishing Co. p. 694. ISBN 978-0-495-39132-6.
4. Donald Knuth (1997). "Section 2.3.4.2: Oriented trees". The Art of Computer Programming. Vol. 1: Fundamental Algorithms (Third ed.). Addison-Wesley. p. 373.
5. L. Afanasiev; P. Blackburn; I. Dimitriou; B. Gaiffe; E. Goris; M. Marx; M. de Rijke (2005). "PDL for ordered trees" (PDF). Journal of Applied Non-Classical Logics. 15 (2): 115–135. doi:10.3166/jancl.15.115-135. S2CID 1979330.
Further reading
• Donald Knuth. The Art of Computer Programming: Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89683-4 . Section 2.3: Trees, pp. 308–423.
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 10.4: Representing rooted trees, pp. 214–217. Chapters 12–14 (Binary Search Trees, Red–Black Trees, Augmenting Data Structures), pp. 253–320.
External links
Wikimedia Commons has media related to Tree structures.
• Description from the Dictionary of Algorithms and Data Structures
Tree data structures
Search trees
(dynamic sets/associative arrays)
• 2–3
• 2–3–4
• AA
• (a,b)
• AVL
• B
• B+
• B*
• Bx
• (Optimal) Binary search
• Dancing
• HTree
• Interval
• Order statistic
• (Left-leaning) Red–black
• Scapegoat
• Splay
• T
• Treap
• UB
• Weight-balanced
Heaps
• Binary
• Binomial
• Brodal
• Fibonacci
• Leftist
• Pairing
• Skew
• van Emde Boas
• Weak
Tries
• Ctrie
• C-trie (compressed ADT)
• Hash
• Radix
• Suffix
• Ternary search
• X-fast
• Y-fast
Spatial data partitioning trees
• Ball
• BK
• BSP
• Cartesian
• Hilbert R
• k-d (implicit k-d)
• M
• Metric
• MVP
• Octree
• PH
• Priority R
• Quad
• R
• R+
• R*
• Segment
• VP
• X
Other trees
• Cover
• Exponential
• Fenwick
• Finger
• Fractal tree index
• Fusion
• Hash calendar
• iDistance
• K-ary
• Left-child right-sibling
• Link/cut
• Log-structured merge
• Merkle
• PQ
• Range
• SPQR
• Top
Graph and tree traversal algorithms
• α–β pruning
• A*
• IDA*
• LPA*
• SMA*
• Best-first search
• Beam search
• Bidirectional search
• Breadth-first search
• Lexicographic
• Parallel
• B*
• Depth-first search
• Iterative Deepening
• D*
• Fringe search
• Jump point search
• Monte Carlo tree search
• SSS*
Shortest path
• Bellman–Ford
• Dijkstra's
• Floyd–Warshall
• Johnson's
• Shortest path faster
• Yen's
Minimum spanning tree
• Borůvka's
• Kruskal's
• Prim's
• Reverse-delete
List of graph search algorithms
Well-known data structures
Types
• Collection
• Container
Abstract
• Associative array
• Multimap
• Retrieval Data Structure
• List
• Stack
• Queue
• Double-ended queue
• Priority queue
• Double-ended priority queue
• Set
• Multiset
• Disjoint-set
Arrays
• Bit array
• Circular buffer
• Dynamic array
• Hash table
• Hashed array tree
• Sparse matrix
Linked
• Association list
• Linked list
• Skip list
• Unrolled linked list
• XOR linked list
Trees
• B-tree
• Binary search tree
• AA tree
• AVL tree
• Red–black tree
• Self-balancing tree
• Splay tree
• Heap
• Binary heap
• Binomial heap
• Fibonacci heap
• R-tree
• R* tree
• R+ tree
• Hilbert R-tree
• Trie
• Hash tree
Graphs
• Binary decision diagram
• Directed acyclic graph
• Directed acyclic word graph
• List of data structures
Tree (descriptive set theory)
In descriptive set theory, a tree on a set $X$ is a collection of finite sequences of elements of $X$ such that every prefix of a sequence in the collection also belongs to the collection.
This article is about mathematical trees described by prefixes of finite sequences. For trees described by partially ordered sets, see Tree (set theory).
Definitions
Trees
The collection of all finite sequences of elements of a set $X$ is denoted $X^{<\omega }$. With this notation, a tree is a nonempty subset $T$ of $X^{<\omega }$, such that if $\langle x_{0},x_{1},\ldots ,x_{n-1}\rangle $ is a sequence of length $n$ in $T$, and if $0\leq m<n$, then the shortened sequence $\langle x_{0},x_{1},\ldots ,x_{m-1}\rangle $ also belongs to $T$. In particular, choosing $m=0$ shows that the empty sequence belongs to every tree.
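The prefix-closure condition is straightforward to check when finite sequences are modeled as tuples; a minimal Python sketch (the function name `is_tree` is an illustrative assumption):

```python
def is_tree(T):
    """Check that every prefix of every sequence in the set T is also in T."""
    return all(s[:m] in T for s in T for m in range(len(s)))

# a tree on X = {0, 1}; note that the empty sequence () must be present
assert is_tree({(), (0,), (1,), (0, 0), (0, 1)})
assert not is_tree({(0, 1)})   # its prefixes () and (0,) are missing
```

The second example fails precisely because prefix-closure forces the empty sequence (the root) and the shorter prefix (0,) to belong to any tree containing (0, 1).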
Branches and bodies
A branch through a tree $T$ is an infinite sequence of elements of $X$, each of whose finite prefixes belongs to $T$. The set of all branches through $T$ is denoted $[T]$ and called the body of the tree $T$.
A tree that has no branches is called wellfounded; a tree with at least one branch is illfounded. By Kőnig's lemma, a tree on a finite set with an infinite number of sequences must necessarily be illfounded.
Terminal nodes
A finite sequence that belongs to a tree $T$ is called a terminal node if it is not a prefix of a longer sequence in $T$. Equivalently, $\langle x_{0},x_{1},\ldots ,x_{n-1}\rangle \in T$ is terminal if there is no element $x$ of $X$ such that $\langle x_{0},x_{1},\ldots ,x_{n-1},x\rangle \in T$. A tree that does not have any terminal nodes is called pruned.
Relation to other types of trees
In graph theory, a rooted tree is a directed graph in which every vertex except for a special root vertex has exactly one outgoing edge, and in which the path formed by following these edges from any vertex eventually leads to the root vertex. If $T$ is a tree in the descriptive set theory sense, then it corresponds to a graph with one vertex for each sequence in $T$, and an outgoing edge from each nonempty sequence that connects it to the shorter sequence formed by removing its last element. This graph is a tree in the graph-theoretic sense. The root of the tree is the empty sequence.
In order theory, a different notion of a tree is used: an order-theoretic tree is a partially ordered set with one minimal element in which each element has a well-ordered set of predecessors. Every tree in descriptive set theory is also an order-theoretic tree, using a partial ordering in which two sequences $T$ and $U$ are ordered by $T<U$ if and only if $T$ is a proper prefix of $U$. The empty sequence is the unique minimal element, and each element has a finite and well-ordered set of predecessors (the set of all of its prefixes). An order-theoretic tree may be represented by an isomorphic tree of sequences if and only if each of its elements has finite height (that is, a finite set of predecessors).
Topology
The set of infinite sequences over $X$ (denoted as $X^{\omega }$) may be given the product topology, treating X as a discrete space. In this topology, every closed subset $C$ of $X^{\omega }$ is of the form $[T]$ for some pruned tree $T$. Namely, let $T$ consist of the set of finite prefixes of the infinite sequences in $C$. Conversely, the body $[T]$ of every tree $T$ forms a closed set in this topology.
Frequently trees on Cartesian products $X\times Y$ are considered. In this case, by convention, we consider only the subset $T$ of the product space, $(X\times Y)^{<\omega }$, containing only sequences whose even elements come from $X$ and odd elements come from $Y$ (e.g., $\langle x_{0},y_{1},x_{2},y_{3}\ldots ,x_{2m},y_{2m+1}\rangle $). Elements in this subspace are identified in the natural way with a subset of the product of two spaces of sequences, $X^{<\omega }\times Y^{<\omega }$ (the subset for which the length of the first sequence is equal to or 1 more than the length of the second sequence). In this way we may identify $[X^{<\omega }]\times [Y^{<\omega }]$ with $[T]$ for a tree $T$ over the product space. We may then form the projection of $[T]$,
$p[T]=\{{\vec {x}}\in X^{\omega }|(\exists {\vec {y}}\in Y^{\omega })\langle {\vec {x}},{\vec {y}}\rangle \in [T]\}$.
See also
• Laver tree, a type of tree used in set theory as part of a notion of forcing
References
• Kechris, Alexander S. (1995). Classical Descriptive Set Theory. Graduate Texts in Mathematics 156. Springer. ISBN 0-387-94374-9, ISBN 3-540-94374-9.
Tree (set theory)
In set theory, a tree is a partially ordered set (T, <) such that for each t ∈ T, the set {s ∈ T : s < t} is well-ordered by the relation <. Frequently trees are assumed to have only one root (i.e. minimal element), as the typical questions investigated in this field are easily reduced to questions about single-rooted trees.
For other notions of tree in set theory, see Tree (descriptive set theory) and Tree (disambiguation).
Definition
A tree is a partially ordered set (poset) (T, <) such that for each t ∈ T, the set {s ∈ T : s < t} is well-ordered by the relation <. In particular, each well-ordered set (T, <) is a tree. For each t ∈ T, the order type of {s ∈ T : s < t} is called the height of t, denoted ht(t, T). The height of T itself is the least ordinal greater than the height of each element of T. A root of a tree T is an element of height 0. Frequently trees are assumed to have only one root. Trees in set theory are often defined to grow downward making the root the greatest node.
Trees with a single root may be viewed as rooted trees in the sense of graph theory in one of two ways: either as a tree (graph theory) or as a trivially perfect graph. In the first case, the graph is the undirected Hasse diagram of the partially ordered set, and in the second case, the graph is simply the underlying (undirected) graph of the partially ordered set. However, if T is a tree of height > ω, then the Hasse diagram definition does not work. For example, the partially ordered set $\omega +1=\left\{0,1,2,\dots ,\omega \right\}$ does not have a Hasse diagram, as there is no predecessor to ω. Hence a height of at most ω is required in this case.
A branch of a tree is a maximal chain in the tree (that is, any two elements of the branch are comparable, and any element of the tree not in the branch is incomparable with at least one element of the branch). The length of a branch is the ordinal that is order isomorphic to the branch. For each ordinal α, the α-th level of T is the set of all elements of T of height α. A tree is a κ-tree, for an ordinal number κ, if and only if it has height κ and every level has cardinality less than the cardinality of κ. The width of a tree is the supremum of the cardinalities of its levels.
Any single-rooted tree of height $\leq \omega $ forms a meet-semilattice, where the meet (common ancestor) of two elements is the maximal element of the intersection of their sets of ancestors; this exists because the set of common ancestors is non-empty, finite, and well-ordered, and hence has a maximal element. Without a single root, the intersection of ancestors can be empty (two elements need not have common ancestors), for example $\left\{a,b\right\}$ where the elements are not comparable; while if there are an infinite number of ancestors there need not be a maximal element – for example, $\left\{0,1,2,\dots ,\omega _{0},\omega _{0}'\right\}$ where $\omega _{0},\omega _{0}'$ are not comparable.
A subtree of a tree $(T,<)$ is a tree $(T',<)$ where $T'\subseteq T$ and $T'$ is downward closed under $<$, i.e., if $s,t\in T$ and $s<t$ then $t\in T'\implies s\in T'$.
Set-theoretic properties
There are some fairly simply stated yet hard problems in infinite tree theory. Examples of this are the Kurepa conjecture and the Suslin conjecture. Both of these problems are known to be independent of Zermelo–Fraenkel set theory. By Kőnig's lemma, every ω-tree has an infinite branch. On the other hand, it is a theorem of ZFC that there are uncountable trees with no uncountable branches and no uncountable levels; such trees are known as Aronszajn trees. Given a cardinal number κ, a κ-Suslin tree is a tree of height κ which has no chains or antichains of size κ. In particular, if κ is singular then there exists a κ-Aronszajn tree and a κ-Suslin tree. In fact, for any infinite cardinal κ, every κ-Suslin tree is a κ-Aronszajn tree (the converse does not hold).
The Suslin conjecture was originally stated as a question about certain total orderings but it is equivalent to the statement: Every tree of height ω1 has an antichain of cardinality ω1 or a branch of length ω1.
See also
• Cantor tree
• Kurepa tree
• Laver tree
• Tree (descriptive set theory)
• Continuous graph
• Prefix order
References
• Jech, Thomas (2002). Set Theory. Springer-Verlag. ISBN 3-540-44085-2.
• Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. North-Holland. ISBN 0-444-85401-0. Chapter 2, Section 5.
• Monk, J. Donald (1976). Mathematical Logic. New York: Springer-Verlag. p. 517. ISBN 0-387-90170-1.
• Hajnal, András; Hamburger, Peter (1999). Set Theory. Cambridge: Cambridge University Press. ISBN 9780521596671.
• Kechris, Alexander S. (1995). Classical Descriptive Set Theory. Graduate Texts in Mathematics 156. Springer. ISBN 0-387-94374-9, ISBN 3-540-94374-9.
External links
• Sets, Models and Proofs by Ieke Moerdijk and Jaap van Oosten, see Definition 3.1 and Exercise 56 on pp. 68–69.
• tree (set theoretic) by Henry on PlanetMath
• branch by Henry on PlanetMath
• example of tree (set theoretic) by uzeromay on PlanetMath
Graph edit distance
In mathematics and computer science, graph edit distance (GED) is a measure of similarity (or dissimilarity) between two graphs. The concept of graph edit distance was first formalized mathematically by Alberto Sanfeliu and King-Sun Fu in 1983.[1] A major application of graph edit distance is in inexact graph matching, such as error-tolerant pattern recognition in machine learning.[2]
The graph edit distance between two graphs is related to the string edit distance between strings. With the interpretation of strings as connected, directed acyclic graphs of maximum degree one, classical definitions of edit distance such as Levenshtein distance,[3][4] Hamming distance[5] and Jaro–Winkler distance may be interpreted as graph edit distances between suitably constrained graphs. Likewise, graph edit distance is also a generalization of tree edit distance between rooted trees.[6][7][8][9]
Formal definitions and properties
The mathematical definition of graph edit distance is dependent upon the definitions of the graphs over which it is defined, i.e. whether and how the vertices and edges of the graph are labeled and whether the edges are directed. Generally, given a set of graph edit operations (also known as elementary graph operations), the graph edit distance between two graphs $g_{1}$ and $g_{2}$, written as $GED(g_{1},g_{2})$ can be defined as
$GED(g_{1},g_{2})=\min _{(e_{1},...,e_{k})\in {\mathcal {P}}(g_{1},g_{2})}\sum _{i=1}^{k}c(e_{i})$
where ${\mathcal {P}}(g_{1},g_{2})$ denotes the set of edit paths transforming $g_{1}$ into (a graph isomorphic to) $g_{2}$ and $c(e)\geq 0$ is the cost of each graph edit operation $e$.
The set of elementary graph edit operators typically includes:
vertex insertion to introduce a single new labeled vertex to a graph.
vertex deletion to remove a single (often disconnected) vertex from a graph.
vertex substitution to change the label (or color) of a given vertex.
edge insertion to introduce a new colored edge between a pair of vertices.
edge deletion to remove a single edge between a pair of vertices.
edge substitution to change the label (or color) of a given edge.
Additional, but less common operators, include operations such as edge splitting that introduces a new vertex into an edge (also creating a new edge), and edge contraction that eliminates vertices of degree two between edges (of the same color). Although such complex edit operators can be defined in terms of more elementary transformations, their use allows finer parameterization of the cost function $c$ when the operator is cheaper than the sum of its constituents.
The elementary graph edit operators have been analyzed in depth in several works.[10][11][12]
Methods have also been presented to automatically deduce these elementary graph edit operators,[13][14][15][16][17] and some algorithms learn their costs online.[18]
Applications
Graph edit distance finds applications in handwriting recognition,[19] fingerprint recognition[20] and cheminformatics.[21]
Algorithms and complexity
Exact algorithms for computing the graph edit distance between a pair of graphs typically transform the problem into one of finding the minimum cost edit path between the two graphs. The computation of the optimal edit path is cast as a pathfinding search or shortest path problem, often implemented as an A* search algorithm.
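For very small graphs, the minimization over edit paths can also be carried out by brute force rather than A* search: pad the smaller vertex set with dummy vertices, try every vertex assignment, and charge unit costs for unmatched vertices and edges. The following Python sketch assumes unlabeled, undirected graphs with unit costs; it runs in O(n!) time and is for illustration only, not a practical algorithm:

```python
from itertools import permutations

class _Pad:
    """Fresh dummy vertex used to pad the smaller vertex set."""

def graph_edit_distance(V1, E1, V2, E2):
    """Unit-cost GED between small unlabeled undirected graphs.

    V1, V2 are vertex lists; E1, E2 are sets of frozenset({u, v}) edges.
    Brute force over all vertex assignments: exponential, illustration only.
    """
    n = max(len(V1), len(V2))
    A = list(V1) + [_Pad() for _ in range(n - len(V1))]
    B = list(V2) + [_Pad() for _ in range(n - len(V2))]
    best = float('inf')
    for perm in permutations(range(n)):
        m = {A[i]: B[perm[i]] for i in range(n)}
        # a real vertex matched to a dummy is a deletion; dummy-to-real, an insertion
        cost = sum(1 for a in A if isinstance(a, _Pad) != isinstance(m[a], _Pad))
        # map the edges of G1 into G2: unmatched edges on either side cost 1 each
        image = {frozenset(m[u] for u in e) for e in E1}
        cost += len(image ^ set(E2))
        best = min(best, cost)
    return best

# path a-b-c versus a triangle: a single edge insertion suffices, so GED is 1
fs = frozenset
d = graph_edit_distance(['a', 'b', 'c'], {fs('ab'), fs('bc')},
                        [1, 2, 3], {fs((1, 2), ), fs((2, 3)), fs((1, 3))})
```

Padding both vertex sets to a common size lets a single bijection encode substitutions, insertions, and deletions at once, which is the same device used by the bipartite approximation algorithms cited below.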
In addition to exact algorithms, a number of efficient approximation algorithms are also known. Most of them run in cubic time.[22][23][24][25][26]
Moreover, there is an algorithm that computes an approximation of the GED in linear time.[27]
Despite the above algorithms sometimes working well in practice, in general the problem of computing graph edit distance is NP-hard (for a proof that's available online, see Section 2 of Zeng et al.), and is even hard to approximate (formally, it is APX-hard[28]).
References
1. Sanfeliu, Alberto; Fu, King-Sun (1983). "A distance measure between attributed relational graphs for pattern recognition". IEEE Transactions on Systems, Man, and Cybernetics. 13 (3): 353–363. doi:10.1109/TSMC.1983.6313167.
2. Gao, Xinbo; Xiao, Bing; Tao, Dacheng; Li, Xuelong (2010). "A survey of graph edit distance". Pattern Analysis and Applications. 13: 113–129. doi:10.1007/s10044-008-0141-y.
3. Влади́мир И. Левенштейн (1965). Двоичные коды с исправлением выпадений, вставок и замещений символов [Binary codes capable of correcting deletions, insertions, and reversals]. Доклады Академий Наук СССР (in Russian). 163 (4): 845–848.
4. Levenshtein, Vladimir I. (February 1966). "Binary codes capable of correcting deletions, insertions, and reversals". Soviet Physics Doklady. 10 (8): 707–710.
5. Hamming, Richard W. (1950). "Error detecting and error correcting codes" (PDF). Bell System Technical Journal. 29 (2): 147–160. doi:10.1002/j.1538-7305.1950.tb00463.x. hdl:10945/46756. MR 0035935. Archived from the original on 2006-05-25.{{cite journal}}: CS1 maint: bot: original URL status unknown (link)
6. Shasha, D; Zhang, K (1989). "Simple fast algorithms for the editing distance between trees and related problems". SIAM J. Comput. 18 (6): 1245–1262. CiteSeerX 10.1.1.460.5601. doi:10.1137/0218082.
7. Zhang, K (1996). "A constrained edit distance between unordered labeled trees". Algorithmica. 15 (3): 205–222. doi:10.1007/BF01975866.
8. Bille, P (2005). "A survey on tree edit distance and related problems". Theor. Comput. Sci. 337 (1–3): 22–34. doi:10.1016/j.tcs.2004.12.030.
9. Demaine, Erik D.; Mozes, Shay; Rossman, Benjamin; Weimann, Oren (2010). "An optimal decomposition algorithm for tree edit distance". ACM Transactions on Algorithms. 6 (1): A2. arXiv:cs/0604037. CiteSeerX 10.1.1.163.6937. doi:10.1145/1644015.1644017. MR 2654906.
10. Serratosa, Francesc (2021). Redefining the Graph Edit Distance. S. N. Computer Science, pp: 2-438.
11. Serratosa, Francesc (2019). Graph edit distance: Restrictions to be a metric. Pattern Recognition, 90, pp: 250-256.
12. Serratosa, Francesc; Cortés, Xavier (2015). Graph Edit Distance: moving from global to local structure to solve the graph-matching problem. Pattern Recognition Letters, 65, pp: 204-210.
13. Santacruz, Pep; Serratosa, Francesc (2020). Learning the graph edit costs based on a learning model applied to sub-optimal graph matching. Neural Processing Letters, 51, pp: 881–904.
14. Algabli, Shaima; Serratosa, Francesc (2018). Embedding the node-to-node mappings to learn the Graph edit distance parameters. Pattern Recognition Letters, 112, pp: 353-360.
15. Xavier, Cortés; Serratosa, Francesc (2016). Learning Graph Matching Substitution Weights based on the Ground Truth Node Correspondence. International Journal of Pattern Recognition and Artificial Intelligence, 30(2), pp: 1650005 [22 pages].
16. Xavier, Cortés; Serratosa, Francesc (2015). Learning Graph-Matching Edit-Costs based on the Optimality of the Oracle's Node Correspondences. Pattern Recognition Letters, 56, pp: 22 - 29.
17. Conte, Donatello; Serratosa, Francesc (2020). Interactive Online Learning for Graph Matching using Active Strategies. Knowledge Based Systems, 105, pp: 106275.
18. Rica, Elena; Álvarez, Susana; Serratosa, Francesc (2021). On-line learning the graph edit distance costs. Pattern Recognition Letters, 146, pp: 52-62.
19. Fischer, Andreas; Suen, Ching Y.; Frinken, Volkmar; Riesen, Kaspar; Bunke, Horst (2013), "A Fast Matching Algorithm for Graph-Based Handwriting Recognition", Graph-Based Representations in Pattern Recognition, Lecture Notes in Computer Science, vol. 7877, pp. 194–203, doi:10.1007/978-3-642-38221-5_21, ISBN 978-3-642-38220-8
20. Neuhaus, Michel; Bunke, Horst (2005), "A Graph Matching Based Approach to Fingerprint Classification using Directional Variance", Audio- and Video-Based Biometric Person Authentication, Lecture Notes in Computer Science, vol. 3546, pp. 191–200, doi:10.1007/11527923_20, ISBN 978-3-540-27887-0
21. Birchall, Kristian; Gillet, Valerie J.; Harper, Gavin; Pickett, Stephen D. (Jan 2006). "Training Similarity Measures for Specific Activities: Application to Reduced Graphs". Journal of Chemical Information and Modeling. 46 (2): 557–586. doi:10.1021/ci050465e. PMID 16562986.
22. Neuhaus, Michel; Bunke, Horst (Nov 2007). Bridging the Gap between Graph Edit Distance and Kernel Machines. Machine Perception and Artificial Intelligence. Vol. 68. World Scientific. ISBN 978-9812708175.
23. Riesen, Kaspar (Feb 2016). Structural Pattern Recognition with Graph Edit Distance: Approximation Algorithms and Applications. Advances in Computer Vision and Pattern Recognition. Springer. ISBN 978-3319272511.
24. Serratosa, Francesc (2014). Fast Computation of Bipartite Graph Matching. Pattern Recognition Letters, 45, pp: 244 - 250.
25. Serratosa, Francesc (2015). Speeding up Fast Bipartite Graph Matching through a new cost matrix. International Journal of Pattern Recognition and Artificial Intelligence, 29 (2), 1550010, [17 pages].
26. Serratosa, Francesc (2015). Computation of Graph Edit Distance: Reasoning about Optimality and Speed-up. Image and Vision Computing, 40, pp: 38-48.
27. Santacruz, Pep; Serratosa, Francesc (2018). Error-tolerant graph matching in linear computational cost using an initial small partial matching. Pattern Recognition Letters.
28. Lin, Chih-Long (1994-08-25). "Hardness of approximating graph transformation problem". In Du, Ding-Zhu; Zhang, Xiang-Sun (eds.). Algorithms and Computation. Lecture Notes in Computer Science. Vol. 834. Springer Berlin Heidelberg. pp. 74–82. doi:10.1007/3-540-58325-4_168. ISBN 9783540583257.
Tree diagram (probability theory)
In probability theory, a tree diagram may be used to represent a probability space.
Tree diagrams may represent a series of independent events (such as a set of coin flips) or conditional probabilities (such as drawing cards from a deck, without replacing the cards).[1] Each node on the diagram represents an event and is associated with the probability of that event. The root node represents the certain event and therefore has probability 1. Each set of sibling nodes represents an exclusive and exhaustive partition of the parent event.
The probability associated with a node is the chance of that event occurring, given that the parent event has occurred. The probability that the series of events leading to a particular node will occur is equal to the product of the probabilities of that node and all of its ancestors.
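The product rule above can be sketched in code; the two-coin-flip tree and its labels here are illustrative assumptions, not from the article:

```python
# A probability tree as nested dicts: each node maps an outcome label to
# (probability of that branch given the parent event, subtree of children).
# Illustrative example: two flips of a fair coin.
tree = {
    "H": (0.5, {"HH": (0.5, {}), "HT": (0.5, {})}),
    "T": (0.5, {"TH": (0.5, {}), "TT": (0.5, {})}),
}

def path_probabilities(tree, acc=1.0):
    """Return {leaf_label: probability} by multiplying branch
    probabilities from the root (which has probability 1) down to
    each leaf."""
    out = {}
    for label, (p, children) in tree.items():
        prob = acc * p
        if children:
            out.update(path_probabilities(children, prob))
        else:
            out[label] = prob
    return out

probs = path_probabilities(tree)
```

Because each set of siblings is an exclusive and exhaustive partition of its parent event, the leaf probabilities sum to 1.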
See also
• Decision tree
• Markov chain
Notes
1. "Tree Diagrams". BBC GCSE Bitesize. BBC. p. 1,3. Retrieved 25 October 2013.
References
• Charles Henry Brase, Corrinne Pellillo Brase: Understanding Basic Statistics. Cengage Learning, 2012, ISBN 9781133713890, pp. 205–208 (online copy at Google)
External links
• tree diagrams - examples and applications
| Wikipedia |
Tree spanner
A tree k-spanner (or simply k-spanner) of a graph $G$ is a spanning subtree $T$ of $G$ in which the distance between every pair of vertices is at most $k$ times their distance in $G$.
Known Results
There are several papers on the subject of tree spanners. One of these, entitled Tree Spanners,[1] by the mathematicians Leizhen Cai and Derek Corneil, explored theoretical and algorithmic problems associated with tree spanners. Some of the conclusions from that paper are listed below. Throughout, $n$ is the number of vertices of the graph and $m$ is its number of edges.
1. A tree 1-spanner, if it exists, is a minimum spanning tree and can be found in ${\mathcal {O}}(m\log \beta (m,n))$ time for a weighted graph, where $\beta (m,n)=\min \left\{i\mid \log ^{i}n\leq m/n\right\}$. Furthermore, every weighted graph that admits a tree 1-spanner contains a unique minimum spanning tree.
2. A tree 2-spanner can be constructed in ${\mathcal {O}}(m+n)$ time, and the tree $t$-spanner problem is NP-complete for any fixed integer $t>3$.
3. The complexity of finding a minimum tree spanner in a digraph is ${\mathcal {O}}((m+n)\cdot \alpha (m+n,n))$, where $\alpha (m+n,n)$ is a functional inverse of the Ackermann function.
4. The minimum 1-spanner of a weighted graph can be found in ${\mathcal {O}}(mn+n^{2}\log(n))$ time.
5. For any fixed rational number $t>1$, it is NP-complete to determine whether a weighted graph contains a tree t-spanner, even if all edge weights are positive integers.
6. A tree spanner (or a minimum tree spanner) of a digraph can be found in linear time.
7. A digraph contains at most one tree spanner.
8. The quasi-tree spanner of a weighted digraph can be found in ${\mathcal {O}}(m\log \beta (m,n))$ time.
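As an illustration of the definition itself (not of any of the algorithms above), the smallest $k$ for which a given spanning tree is a tree $k$-spanner can be found by brute force on a small unweighted graph, comparing tree distances with graph distances for every vertex pair:

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src in an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def stretch(graph_adj, tree_adj):
    """Smallest k such that the spanning tree is a tree k-spanner:
    the maximum over vertex pairs of d_T(u, v) / d_G(u, v)."""
    worst = 1.0
    for u, v in combinations(graph_adj, 2):
        dg = bfs_dist(graph_adj, u)[v]
        dt = bfs_dist(tree_adj, u)[v]
        worst = max(worst, dt / dg)
    return worst

# Cycle C4 with a path as spanning tree: dropping edge (0, 3) stretches
# that pair's distance from 1 to 3, so this tree is a tree 3-spanner.
G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
T = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```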
See also
• Graph spanner
• Geometric spanner
References
1. Cai, Leizhen; Corneil, Derek G. (1995). "Tree Spanners". SIAM Journal on Discrete Mathematics. 8 (3): 359–387. doi:10.1137/S0895480192237403.
• Handke, Dagmar; Kortsarz, Guy (2000), "Tree spanners for subgraphs and related tree covering problems", Graph-Theoretic Concepts in Computer Science: 26th International Workshop, WG 2000 Konstanz, Germany, June 15–17, 2000, Proceedings, Lecture Notes in Computer Science, vol. 1928, pp. 206–217, doi:10.1007/3-540-40064-8_20, ISBN 978-3-540-41183-3.
| Wikipedia |
Treewidth
In graph theory, the treewidth of an undirected graph is an integer that specifies, informally, how far the graph is from being a tree. The smallest treewidth is 1; the graphs with treewidth 1 are exactly the trees and the forests. The graphs with treewidth at most 2 are the series–parallel graphs. The maximal graphs with treewidth exactly k are called k-trees, and the graphs with treewidth at most k are called partial k-trees. Many other well-studied graph families also have bounded treewidth.
Treewidth may be formally defined in several equivalent ways: in terms of the size of the largest vertex set in a tree decomposition of the graph, in terms of the size of the largest clique in a chordal completion of the graph, in terms of the maximum order of a haven describing a strategy for a pursuit–evasion game on the graph, or in terms of the maximum order of a bramble, a collection of connected subgraphs that all touch each other.
Treewidth is commonly used as a parameter in the parameterized complexity analysis of graph algorithms. Many algorithms that are NP-hard for general graphs, become easier when the treewidth is bounded by a constant.
The concept of treewidth was originally introduced by Umberto Bertelè and Francesco Brioschi (1972) under the name of dimension. It was later rediscovered by Rudolf Halin (1976), based on properties that it shares with a different graph parameter, the Hadwiger number. Later it was again rediscovered by Neil Robertson and Paul Seymour (1984) and has since been studied by many other authors.[1]
Definition
A tree decomposition of a graph G = (V, E) is a tree T with nodes X1, …, Xn, where each Xi is a subset of V, satisfying the following properties[2] (the term node is used to refer to a vertex of T to avoid confusion with vertices of G):
1. The union of all sets Xi equals V. That is, each graph vertex is contained in at least one tree node.
2. If Xi and Xj both contain a vertex v, then all nodes Xk of T in the (unique) path between Xi and Xj contain v as well. Equivalently, the tree nodes containing vertex v form a connected subtree of T.
3. For every edge (v, w) in the graph, there is a subset Xi that contains both v and w. That is, vertices are adjacent in the graph only when the corresponding subtrees have a node in common.
The width of a tree decomposition is the size of its largest set Xi minus one. The treewidth tw(G) of a graph G is the minimum width among all possible tree decompositions of G. In this definition, the size of the largest set is diminished by one in order to make the treewidth of a tree equal to one.
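The three defining properties and the width can be checked directly. The following sketch assumes small graphs with vertices 0..n−1 and a decomposition given as bags plus the edges of the decomposition tree:

```python
def is_tree_decomposition(n, edges, bags, tree_edges):
    """Check the three defining properties for a graph on vertices
    0..n-1. `bags` maps tree node id -> set of graph vertices and
    `tree_edges` are the edges of the decomposition tree."""
    # Property 1: every graph vertex appears in at least one bag.
    if set().union(*bags.values()) != set(range(n)):
        return False
    # Property 3: every graph edge is contained in some bag.
    if not all(any({u, v} <= b for b in bags.values()) for u, v in edges):
        return False
    # Property 2: for each vertex, the tree nodes whose bags contain it
    # must induce a connected subtree of the decomposition tree.
    for v in range(n):
        nodes = {i for i, b in bags.items() if v in b}
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            i = stack.pop()
            for a, b in tree_edges:
                for x, y in ((a, b), (b, a)):
                    if x == i and y in nodes and y not in seen:
                        seen.add(y)
                        stack.append(y)
        if seen != nodes:
            return False
    return True

def width(bags):
    """Width of a decomposition: largest bag size minus one."""
    return max(len(b) for b in bags.values()) - 1

# Cycle 0-1-2-3-0: bags {0,1,2} and {0,2,3}, joined by one tree edge,
# form a tree decomposition of width 2.
bags = {0: {0, 1, 2}, 1: {0, 2, 3}}
```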
Equivalently, the treewidth of G is one less than the size of the largest clique in the chordal graph containing G with the smallest clique number. A chordal graph with this clique size may be obtained by adding to G an edge between every two vertices that both belong to at least one of the sets Xi.
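The chordal-completion characterization yields an exact, though hopelessly exponential, algorithm for tiny graphs: eliminating a vertex and turning its remaining neighbors into a clique (the fill-in) builds a chordal completion, and the minimum over all elimination orderings of the largest eliminated neighborhood equals the treewidth. A sketch:

```python
from itertools import permutations

def treewidth_exact(vertices, edges):
    """Exact treewidth by brute force over elimination orderings.
    Eliminating v turns its remaining neighbors into a clique; the
    bags of the corresponding chordal completion are v together with
    those neighbors, so the width of an ordering is the size of the
    largest eliminated neighborhood. Exponential: tiny graphs only."""
    vertices = list(vertices)
    best = len(vertices) - 1  # trivial upper bound (complete graph)
    for order in permutations(vertices):
        adj = {v: set() for v in vertices}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        w = 0
        for v in order:
            nbrs = adj.pop(v)
            w = max(w, len(nbrs))
            for u in nbrs:
                adj[u].discard(v)
                adj[u] |= nbrs - {u}  # fill-in edges
        best = min(best, w)
    return best
```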
Treewidth may also be characterized in terms of havens, functions describing an evasion strategy for a certain pursuit–evasion game defined on a graph. A graph G has treewidth k if and only if it has a haven of order k + 1 but of no higher order, where a haven of order k + 1 is a function β that maps each set X of at most k vertices in G into one of the connected components of G \ X and that obeys the monotonicity property that β(Y) ⊆ β(X) whenever X ⊆ Y.
A similar characterization can also be made using brambles, families of connected subgraphs that all touch each other (meaning either that they share a vertex or are connected by an edge).[3] The order of a bramble is the smallest hitting set for the family of subgraphs, and the treewidth of a graph is one less than the maximum order of a bramble.
Examples
Every complete graph Kn has treewidth n – 1. This is most easily seen using the definition of treewidth in terms of chordal graphs: the complete graph is already chordal, and adding more edges cannot reduce the size of its largest clique.
A connected graph with at least two vertices has treewidth 1 if and only if it is a tree. A tree has treewidth one by the same reasoning as for complete graphs (namely, it is chordal, and has maximum clique size two). Conversely, if a graph has a cycle, then every chordal completion of the graph includes at least one triangle consisting of three consecutive vertices of the cycle, from which it follows that its treewidth is at least two.
Bounded treewidth
Graph families with bounded treewidth
For any fixed constant k, the graphs of treewidth at most k are called the partial k-trees. Other families of graphs with bounded treewidth include the cactus graphs, pseudoforests, series–parallel graphs, outerplanar graphs, Halin graphs, and Apollonian networks.[4] The control-flow graphs arising in the compilation of structured programs also have bounded treewidth, which allows certain tasks such as register allocation to be performed efficiently on them.[5]
The planar graphs do not have bounded treewidth, because the n × n grid graph is a planar graph with treewidth exactly n. Therefore, if F is a minor-closed graph family with bounded treewidth, it cannot include all planar graphs. Conversely, if some planar graph cannot occur as a minor for graphs in family F, then there is a constant k such that all graphs in F have treewidth at most k. That is, the following three conditions are equivalent to each other:[6]
1. F is a minor-closed family of bounded-treewidth graphs;
2. One of the finitely many forbidden minors characterizing F is planar;
3. F is a minor-closed graph family that does not include all planar graphs.
Forbidden minors
For every finite value of k, the graphs of treewidth at most k may be characterized by a finite set of forbidden minors. (That is, any graph of treewidth > k includes one of the graphs in the set as a minor.) Each of these sets of forbidden minors includes at least one planar graph.
• For k = 1, the unique forbidden minor is a 3-vertex cycle graph.[7]
• For k = 2, the unique forbidden minor is the 4-vertex complete graph K4.[7]
• For k = 3, there are four forbidden minors: K5, the graph of the octahedron, the pentagonal prism graph, and the Wagner graph. Of these, the two polyhedral graphs are planar.[8]
For larger values of k, the number of forbidden minors grows at least as quickly as the exponential of the square root of k.[9] However, known upper bounds on the size and number of forbidden minors are much higher than this lower bound.[10]
Algorithms
Computing the treewidth
It is NP-complete to determine whether a given graph G has treewidth at most a given value k.[11] However, when k is any fixed constant, the graphs with treewidth k can be recognized, and a width k tree decomposition constructed for them, in linear time.[12] The time dependence of this algorithm on k is exponential.
Because treewidth plays a role in a great number of fields, a variety of practical and theoretical algorithms for computing the treewidth of a graph have been developed. Depending on the application at hand, one can prefer a better approximation ratio, or a better dependence of the running time on the size of the input or on the treewidth. The table below provides an overview of some of the treewidth algorithms. Here k is the treewidth and n is the number of vertices of an input graph G. Each of the algorithms outputs in time f(k) ⋅ g(n) a decomposition of width given in the Approximation column. For example, the algorithm of Bodlaender (1996) in time $2^{O(k^{3})}\cdot n$ either constructs a tree decomposition of the input graph G of width at most k or reports that the treewidth of G is more than k. Similarly, the algorithm of Bodlaender et al. (2016) in time $2^{O(k)}\cdot n$ either constructs a tree decomposition of the input graph G of width at most 5k + 4 or reports that the treewidth of G is more than k. Korhonen (2021) improved this to width 2k + 1 in the same running time.
Approximation | f(k) | g(n) | Reference
exact | $O(1)$ | $O(n^{k+2})$ | Arnborg, Corneil & Proskurowski (1987)
4k + 3 | $O(3^{3k})$ | $O(n^{2})$ | Robertson & Seymour (1995)
8k + 7 | $2^{O(k\log k)}$ | $n\log ^{2}n$ | Lagergren (1996)
5k + 4 (or 7k + 6) | $2^{O(k\log k)}$ | $n\log n$ | Reed (1992)
exact | $2^{O(k^{3})}$ | $O(n)$ | Bodlaender (1996)
$O\left(k\cdot {\sqrt {\log k}}\right)$ | $O(1)$ | $n^{O(1)}$ | Feige, Hajiaghayi & Lee (2008)
4.5k + 4 | $2^{3k}$ | $n^{2}$ | Amir (2010)
(11/3)k + 4 | $2^{3.6982k}$ | $n^{3}\log ^{4}n$ | Amir (2010)
exact | $O(1)$ | $O(1.7347^{n})$ | Fomin, Todinca & Villanger (2015)
3k + 2 | $2^{O(k)}$ | $O(n\log n)$ | Bodlaender et al. (2016)
5k + 4 | $2^{O(k)}$ | $O(n)$ | Bodlaender et al. (2016)
k | $2^{O(k^{7})}$ | $O(n\log n)$ | Fomin et al. (2018)
5k + 4 | $2^{8.765k}$ | $O(n\log n)$ | Belbasi & Fürer (2021a)
2k + 1 | $2^{O(k)}$ | $O(n)$ | Korhonen (2021)
5k + 4 | $2^{6.755k}$ | $O(n\log n)$ | Belbasi & Fürer (2021b)
exact | $2^{O(k^{2})}$ | $n^{4}$ | Korhonen & Lokshtanov (2022)
$(1+\varepsilon )k$ | $k^{O(k/\varepsilon )}$ | $n^{4}$ | Korhonen & Lokshtanov (2022)
It is not known whether determining the treewidth of planar graphs is NP-complete, or whether their treewidth can be computed in polynomial time.[13]
In practice, an algorithm of Shoikhet & Geiger (1997) can determine the treewidth of graphs with up to 100 vertices and treewidth up to 11, finding a chordal completion of these graphs with the optimal treewidth.
For larger graphs, one can use search-based techniques such as branch and bound search (BnB) and best-first search to compute the treewidth. These algorithms are anytime in that when stopped early, they will output an upper bound on the treewidth.
The first BnB algorithm for computing treewidth, called the QuickBB algorithm,[14] was proposed by Gogate and Dechter.[15] Since the quality of any BnB algorithm is highly dependent on the quality of the lower bound used, Gogate and Dechter[15] also proposed a novel algorithm for computing a lower bound on treewidth, called minor-min-width.[15] At a high level, the minor-min-width algorithm combines two facts to yield a lower bound on treewidth: the treewidth of a graph is never smaller than its minimum degree, and never smaller than the treewidth of any of its minors. The minor-min-width algorithm repeatedly constructs a graph minor by contracting an edge between a minimum degree vertex and one of its neighbors, until just one vertex remains. The maximum of the minimum degree over these constructed minors is guaranteed to be a lower bound on the treewidth of the graph.
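The minor-min-width procedure just described can be sketched as follows. Contracting into a minimum-degree neighbor is one common tie-breaking choice; that choice is an assumption of this sketch rather than part of the algorithm's definition:

```python
def minor_min_width(adj):
    """Lower bound on treewidth: the treewidth of G is at least the
    minimum degree of any minor of G, so track the largest minimum
    degree seen while repeatedly contracting edges. `adj` is a
    {vertex: set_of_neighbors} dict and is consumed."""
    lb = 0
    while len(adj) > 1:
        # A minimum-degree vertex; its degree bounds treewidth from below.
        v = min(adj, key=lambda x: len(adj[x]))
        lb = max(lb, len(adj[v]))
        if not adj[v]:          # isolated vertex: nothing to contract
            del adj[v]
            continue
        # Contract v into a minimum-degree neighbor u (heuristic choice).
        u = min(adj[v], key=lambda x: len(adj[x]))
        adj[u] |= adj[v] - {u}
        for w in adj[v] - {u}:
            adj[w].discard(v)
            adj[w].add(u)
        adj[u].discard(v)
        del adj[v]
    return lb
```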
Dow and Korf[16] improved the QuickBB algorithm using best-first search. On certain graphs, this best-first search algorithm is an order of magnitude faster than QuickBB.
Solving other problems on graphs of small treewidth
At the beginning of the 1970s, it was observed that a large class of combinatorial optimization problems defined on graphs could be efficiently solved by non-serial dynamic programming as long as the graph had a bounded dimension,[17] a parameter later shown to be equivalent to treewidth by Bodlaender (1998). Several authors then independently observed, at the end of the 1980s,[18] that many algorithmic problems that are NP-complete for arbitrary graphs may be solved efficiently by dynamic programming for graphs of bounded treewidth, using the tree decompositions of these graphs.
As an example, the problem of coloring a graph of treewidth k may be solved by using a dynamic programming algorithm on a tree decomposition of the graph. For each set Xi of the tree decomposition, and each partition of the vertices of Xi into color classes, the algorithm determines whether that coloring is valid and can be extended to all descendant nodes in the tree decomposition, by combining information of a similar type computed and stored at those nodes. The resulting algorithm finds an optimal coloring of an n-vertex graph in time $O(k^{k+O(1)}n)$, a time bound that makes this problem fixed-parameter tractable.
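The dynamic programming scheme just described can be sketched for the decision version (k-colorability). This is a sketch under stated assumptions: a valid rooted tree decomposition is supplied as bags plus a children mapping, and the cost is roughly $k^{w+1}$ colorings per decomposition node, where w is the width:

```python
from itertools import product

def colorable(k, edges, bags, children, root=0):
    """Decide k-colorability by dynamic programming over a given
    tree decomposition, rooted at `root`. For each decomposition
    node, keep every coloring of its bag that is proper and can be
    extended to the whole subtree below that node."""
    def solve(i):
        child_tables = [solve(c) for c in children.get(i, [])]
        bag = sorted(bags[i])
        table = []
        for assignment in product(range(k), repeat=len(bag)):
            phi = dict(zip(bag, assignment))
            # The coloring must be proper on every graph edge inside the bag.
            if any(u in phi and v in phi and phi[u] == phi[v]
                   for u, v in edges):
                continue
            # Every child must have a stored coloring that agrees with phi
            # on the shared vertices; because each vertex's bags form a
            # connected subtree, this local check is sufficient.
            if all(any(all(phi[v] == psi[v] for v in phi.keys() & psi.keys())
                       for psi in tab)
                   for tab in child_tables):
                table.append(phi)
        return table
    return bool(solve(root))
```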
Courcelle's theorem
Main article: Courcelle's theorem
For a large class of problems, there is a linear time algorithm to solve a problem from the class if a tree-decomposition with constant bounded treewidth is provided. Specifically, Courcelle's theorem[19] states that if a graph problem can be expressed in the logic of graphs using monadic second order logic, then it can be solved in linear time on graphs with bounded treewidth. Monadic second order logic is a language to describe graph properties that uses the following constructions:
• Logic operations, such as $\wedge ,\vee ,\neg ,\Rightarrow $
• Membership tests, such as e ∈ E, v ∈ V
• Quantifications over vertices, edges, sets of vertices, and/or sets of edges, such as ∀v ∈ V, ∃e ∈ E, ∃I ⊆ V, ∀F ⊆ E
• Adjacency tests (u is an endpoint of e), and some extensions that allow for things such as optimization.
Consider for example the 3-coloring problem for graphs. For a graph G = (V, E), this problem asks if it is possible to assign each vertex v ∈ V one of the 3 colors such that no two adjacent vertices are assigned the same color. This problem can be expressed in monadic second order logic as follows:
$\exists W_{1}\subseteq V:\exists W_{2}\subseteq V:\exists W_{3}\subseteq V:\forall v\in V:(v\in W_{1}\vee v\in W_{2}\vee v\in W_{3})\wedge $
$\forall v\in V:\forall w\in V:(v,w)\in E\Rightarrow (\neg (v\in W_{1}\wedge w\in W_{1})\wedge \neg (v\in W_{2}\wedge w\in W_{2})\wedge \neg (v\in W_{3}\wedge w\in W_{3}))$,
where W1, W2, W3 represent the subsets of vertices having each of the 3 colors. Therefore, by Courcelle's results, the 3-coloring problem can be solved in linear time for a graph given a tree-decomposition of bounded constant treewidth.
Related parameters
Pathwidth
The pathwidth of a graph has a very similar definition to treewidth via tree decompositions, but is restricted to tree decompositions in which the underlying tree of the decomposition is a path graph. Alternatively, the pathwidth may be defined from interval graphs analogously to the definition of treewidth from chordal graphs. As a consequence, the pathwidth of a graph is always at least as large as its treewidth, but it can only be larger by a logarithmic factor.[4] Another parameter, the graph bandwidth, has an analogous definition from proper interval graphs, and is at least as large as the pathwidth. Other related parameters include the tree-depth, a number that is bounded for a minor-closed graph family if and only if the family excludes a path, and the degeneracy, a measure of the sparsity of a graph that is at most equal to its treewidth.
Grid minor size
Because the treewidth of an n × n grid graph is n, the treewidth of a graph G is always greater than or equal to the size of the largest square grid minor of G. In the other direction, the grid minor theorem by Robertson and Seymour shows that there exists an unbounded function f such that the largest square grid minor has size at least f(r) where r is the treewidth.[20] The best bounds known on f are that f must be at least Ω(rd) for some fixed constant d > 0, and at most[21]
$O\left({\sqrt {r/\log r}}\right).$
For the Ω notation in the lower bound, see big O notation. Tighter bounds are known for restricted graph families, leading to efficient algorithms for many graph optimization problems on those families through the theory of bidimensionality.[22] Halin's grid theorem provides an analogue of the relation between treewidth and grid minor size for infinite graphs.[23]
Diameter and local treewidth
A family F of graphs closed under taking subgraphs is said to have bounded local treewidth, or the diameter-treewidth property, if the treewidth of the graphs in the family is upper bounded by a function of their diameter. If the class is also assumed to be closed under taking minors, then F has bounded local treewidth if and only if one of the forbidden minors for F is an apex graph.[24] The original proofs of this result showed that treewidth in an apex-minor-free graph family grows at most doubly exponentially as a function of diameter;[25] later this was reduced to singly exponential[22] and finally to a linear bound.[26] Bounded local treewidth is closely related to the algorithmic theory of bidimensionality,[27] and every graph property definable in first order logic can be decided for an apex-minor-free graph family in an amount of time that is only slightly superlinear.[28]
It is also possible for a class of graphs that is not closed under minors to have bounded local treewidth. In particular this is trivially true for a class of bounded degree graphs, as bounded diameter subgraphs have bounded size. Another example is given by 1-planar graphs, graphs that can be drawn in the plane with one crossing per edge, and more generally for the graphs that can be drawn on a surface of bounded genus with a bounded number of crossings per edge. As with minor-closed graph families of bounded local treewidth, this property has pointed the way to efficient approximation algorithms for these graphs.[29]
Hadwiger number and S-functions
Halin (1976) defines a class of graph parameters that he calls S-functions, which include the treewidth. These functions from graphs to integers are required to be zero on graphs with no edges, to be minor-monotone (a function f is referred to as "minor-monotone" if, whenever H is a minor of G, one has f(H) ≤ f(G)), to increase by one when a new vertex is added that is adjacent to all previous vertices, and to take the larger value from the two subgraphs on either side of a clique separator. The set of all such functions forms a complete lattice under the operations of elementwise minimization and maximization. The top element in this lattice is the treewidth, and the bottom element is the Hadwiger number, the size of the largest complete minor in the given graph.
Notes
1. Diestel (2005) pp.354–355
2. Diestel (2005) section 12.3
3. Seymour & Thomas (1993).
4. Bodlaender (1998).
5. Thorup (1998).
6. Robertson & Seymour (1986).
7. Bodlaender (1988).
8. Arnborg, Proskurowski & Corneil (1990); Satyanarayana & Tung (1990).
9. Ramachandramurthi (1997).
10. Lagergren (1993).
11. Arnborg, Corneil & Proskurowski (1987).
12. Bodlaender (1996).
13. Kao (2008).
14. "Vibhav Gogate". personal.utdallas.edu. Retrieved 2022-11-27.
15. Gogate, Vibhav; Dechter, Rina (2012-07-11). "A Complete Anytime Algorithm for Treewidth". arXiv:1207.4109 [cs.DS].
16. "Best-First Search for Treewidth". www.aaai.org. Retrieved 2022-11-27.
17. Bertelè & Brioschi (1972).
18. Arnborg & Proskurowski (1989); Bern, Lawler & Wong (1987); Bodlaender (1988).
19. Courcelle (1990); Courcelle (1992)
20. Robertson & Seymour (1986).
21. Chekuri & Chuzhoy (2016)
22. Demaine & Hajiaghayi (2008).
23. Diestel (2004).
24. Eppstein (2000).
25. Eppstein (2000); Demaine & Hajiaghayi (2004a).
26. Demaine & Hajiaghayi (2004b).
27. Demaine et al. (2004); Demaine & Hajiaghayi (2008).
28. Frick & Grohe (2001).
29. Grigoriev & Bodlaender (2007).
References
• Amir, Eyal (2010), "Approximation algorithms for treewidth", Algorithmica, 56 (4): 448–479, doi:10.1007/s00453-008-9180-4, MR 2581059, S2CID 5874913.
• Arnborg, S.; Corneil, D.; Proskurowski, A. (1987), "Complexity of finding embeddings in a $k$-tree", SIAM Journal on Matrix Analysis and Applications, 8 (2): 277–284, doi:10.1137/0608024.
• Arnborg, Stefan; Proskurowski, Andrzej; Corneil, Derek G. (1990), "Forbidden minors characterization of partial 3-trees", Discrete Mathematics, 80 (1): 1–19, doi:10.1016/0012-365X(90)90292-P, MR 1045920.
• Arnborg, S.; Proskurowski, A. (1989), "Linear time algorithms for NP-hard problems restricted to partial $k$-trees", Discrete Applied Mathematics, 23 (1): 11–24, doi:10.1016/0166-218X(89)90031-0.
• Belbasi, Mahdi; Fürer, Martin (2021a), "An improvement of Reed's treewidth approximation", in Uehara, Ryuhei; Hong, Seok-Hee; Nandy, Subhas C. (eds.), WALCOM: Algorithms and Computation – 15th International Conference and Workshops, WALCOM 2021, Yangon, Myanmar, February 28 - March 2, 2021, Proceedings, Lecture Notes in Computer Science, vol. 12635, Springer, pp. 166–181, arXiv:2010.03105, doi:10.1007/978-3-030-68211-8_14, MR 4239527, S2CID 222177100.
• Belbasi, Mahdi; Fürer, Martin (2021b), "Finding all leftmost separators of size $\leq k$", in Du, Ding-Zhu; Du, Donglei; Wu, Chenchen; Xu, Dachuan (eds.), Combinatorial Optimization and Applications - 15th International Conference, COCOA 2021, Tianjin, China, December 17-19, 2021, Proceedings, Lecture Notes in Computer Science, vol. 13135, Springer, pp. 273–287, arXiv:2111.02614, doi:10.1007/978-3-030-92681-6_23, S2CID 242758210
• Bern, M. W.; Lawler, E. L.; Wong, A. L. (1987), "Linear-time computation of optimal subgraphs of decomposable graphs", Journal of Algorithms, 8 (2): 216–235, doi:10.1016/0196-6774(87)90039-3.
• Bertelè, Umberto; Brioschi, Francesco (1972), Nonserial Dynamic Programming, Academic Press, pp. 37–38, ISBN 978-0-12-093450-8.
• Bodlaender, Hans L. (1988), "Dynamic programming on graphs with bounded treewidth", Proc. 15th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science, vol. 317, Springer-Verlag, pp. 105–118, CiteSeerX 10.1.1.18.8503, doi:10.1007/3-540-19488-6_110, ISBN 978-3-540-19488-0.
• Bodlaender, Hans L. (1996), "A linear time algorithm for finding tree-decompositions of small treewidth", SIAM Journal on Computing, 25 (6): 1305–1317, CiteSeerX 10.1.1.19.7484, doi:10.1137/S0097539793251219.
• Bodlaender, Hans L. (1998), "A partial k-arboretum of graphs with bounded treewidth", Theoretical Computer Science, 209 (1–2): 1–45, doi:10.1016/S0304-3975(97)00228-4.
• Bodlaender, Hans L.; Drange, Pal G.; Dregi, Markus S.; Fomin, Fedor V.; Lokshtanov, Daniel; Pilipczuk, Michal (2016), "A $c^{k}n$ 5-approximation algorithm for treewidth", SIAM Journal on Computing, 45 (2): 317–378, arXiv:1304.6321, doi:10.1137/130947374.
• Chekuri, Chandra; Chuzhoy, Julia (2016), "Polynomial bounds for the grid-minor theorem", Journal of the ACM, 63 (5): A40:1–65, arXiv:1305.6577, doi:10.1145/2820609, MR 3593966, S2CID 209860422.
• Courcelle, B. (1990), "The monadic second-order logic of graphs I: Recognizable sets of finite graphs", Information and Computation, 85: 12–75, CiteSeerX 10.1.1.158.5595, doi:10.1016/0890-5401(90)90043-h.
• Courcelle, B. (1992), "The monadic second-order logic of graphs III: Treewidth, forbidden minors and complexity issues.", Informatique Théorique (26): 257–286.
• Demaine, Erik D.; Fomin, Fedor V.; Hajiaghayi, MohammadTaghi; Thilikos, Dimitrios M. (2004), "Bidimensional parameters and local treewidth", SIAM Journal on Discrete Mathematics, 18 (3): 501–511, CiteSeerX 10.1.1.107.6195, doi:10.1137/S0895480103433410, MR 2134412, S2CID 7803025.
• Demaine, Erik D.; Hajiaghayi, MohammadTaghi (2004a), "Diameter and treewidth in minor-closed graph families, revisited", Algorithmica, 40 (3): 211–215, doi:10.1007/s00453-004-1106-1, MR 2080518, S2CID 390856.
• Demaine, Erik D.; Hajiaghayi, MohammadTaghi (2004b), "Equivalence of local treewidth and linear local treewidth and its algorithmic applications", Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, New York: ACM, pp. 840–849, MR 2290974.
• Demaine, Erik D.; Hajiaghayi, MohammadTaghi (2008), "Linearity of grid minors in treewidth with applications through bidimensionality" (PDF), Combinatorica, 28 (1): 19–36, doi:10.1007/s00493-008-2140-4, S2CID 16520181.
• Diestel, Reinhard (2004), "A short proof of Halin's grid theorem", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 74: 237–242, doi:10.1007/BF02941538, MR 2112834, S2CID 124603912.
• Diestel, Reinhard (2005), Graph Theory (3rd ed.), Springer, ISBN 978-3-540-26182-7.
• Eppstein, D. (2000), "Diameter and treewidth in minor-closed graph families", Algorithmica, 27 (3–4): 275–291, arXiv:math/9907126, doi:10.1007/s004530010020, MR 1759751, S2CID 3172160.
• Feige, Uriel; Hajiaghayi, MohammadTaghi; Lee, James R. (2008), "Improved approximation algorithms for minimum weight vertex separators", SIAM Journal on Computing, 38 (2): 629–657, CiteSeerX 10.1.1.597.5634, doi:10.1137/05064299X.
• Fomin, Fedor V.; Todinca, Ioan; Villanger, Yngve (2015), "Large induced subgraphs via triangulations and CMSO", SIAM Journal on Computing, 44 (1): 54–87, arXiv:1309.1559, doi:10.1137/140964801, S2CID 15880453.
• Frick, Markus; Grohe, Martin (2001), "Deciding first-order properties of locally tree-decomposable structures", Journal of the ACM, 48 (6): 1184–1206, arXiv:cs/0004007, doi:10.1145/504794.504798, MR 2143836, S2CID 999472.
• Fomin, Fedor V.; Lokshtanov, Daniel; Saurabh, Saket; Pilipczuk, Michal; Wrochna, Marcin (2018), "Fully polynomial-time parameterized computations for graphs and matrices of low treewidth", ACM Transactions on Algorithms, 14 (3): 34:1–34:45, arXiv:1511.01379, doi:10.1145/3186898, S2CID 2144798.
• Grigoriev, Alexander; Bodlaender, Hans L. (2007), "Algorithms for graphs embeddable with few crossings per edge", Algorithmica, 49 (1): 1–11, CiteSeerX 10.1.1.65.5071, doi:10.1007/s00453-007-0010-x, MR 2344391, S2CID 8174422.
• Halin, Rudolf (1976), "S-functions for graphs", Journal of Geometry, 8 (1–2): 171–186, doi:10.1007/BF01917434, S2CID 120256194.
• Kao, Ming-Yang, ed. (2008), "Treewidth of graphs", Encyclopedia of Algorithms, Springer, p. 969, ISBN 9780387307701, Another long-standing open problem is whether there is a polynomial-time algorithm to compute the treewidth of planar graphs.
• Korhonen, Tuukka (2021), "A Single-Exponential Time 2-Approximation Algorithm for Treewidth", Proceedings of the 62nd IEEE Annual Symposium on Foundations of Computer Science, IEEE, pp. 184–192, arXiv:2104.07463, doi:10.1109/FOCS52979.2021.00026, S2CID 233240958.
• Lagergren, Jens (1993), "An upper bound on the size of an obstruction", Graph structure theory (Seattle, WA, 1991), Contemporary Mathematics, vol. 147, Providence, RI: American Mathematical Society, pp. 601–621, doi:10.1090/conm/147/01202, ISBN 9780821851609, MR 1224734.
• Lagergren, Jens (1996), "Efficient parallel algorithms for graphs of bounded tree-width", Journal of Algorithms, 20 (1): 20–44, doi:10.1006/jagm.1996.0002, MR 1368716.
| Wikipedia |
Trefftz method
In mathematics, the Trefftz method is a method for the numerical solution of partial differential equations named after the German mathematician Erich Trefftz (1888–1937). It falls within the class of finite element methods.
Introduction
The hybrid Trefftz finite-element method has been considerably advanced since its introduction about 30 years ago.[1] The conventional method of finite element analysis involves converting the differential equation that governs the problem into a variational functional from which element nodal properties – known as field variables – can be found. This can be solved by substituting in approximate solutions to the differential equation and generating the finite element stiffness matrix which is combined with all the elements in the continuum to obtain the global stiffness matrix.[2] Application of the relevant boundary conditions to this global matrix, and the subsequent solution of the field variables rounds off the mathematical process, following which numerical computations can be used to solve real life engineering problems.[1][3]
An important aspect of solving the functional is finding solutions that satisfy the given boundary conditions and maintain inter-element continuity, since the properties over each element domain are defined independently.[1]
The hybrid Trefftz method differs from the conventional finite element method in the assumed displacement fields and the formulation of the variational functional. In contrast to the conventional method (based on the Rayleigh-Ritz mathematical technique) the Trefftz method (based on the Trefftz mathematical technique) assumes the displacement field is composed of two independent components; the intra-element displacement field which satisfies the governing differential equation and is used to approximate the variation of potential within the element domain, and the conforming frame field which specifically satisfies the inter-element continuity condition, defined on the boundary of the element. The frame field here is the same as that used in the conventional finite element method but defined strictly on the boundary of the element – hence the use of the term "hybrid" in the method's nomenclature. The variational functional must thus include additional terms to account for boundary conditions, since the assumed solution field only satisfies the governing differential equation.[1][3]
Advantages over conventional finite element method
The main advantages of the hybrid Trefftz method over the conventional method are:
1. the formulation calls for integration along the element boundaries only which allows for curve-sided or polynomial shapes to be used for the element boundary,
2. presents expansion bases for elements that do not satisfy inter-element continuity through the variational functional, and
3. this method allows for the development of crack singular or perforated elements through the use of localized solution functions as the trial functions.[1][3]
Applications
Since its mainstream introduction some 30 years ago, this modified finite element method has become increasingly popular for applications such as elasticity, Kirchhoff plates, thick plates, general three-dimensional solid mechanics, axisymmetric solid mechanics, potential problems, shells, elastodynamic problems, geometrically nonlinear plate bending, and transient heat conduction analysis, among various others.[1][3] It is currently being applied to steady, non-turbulent, incompressible, Newtonian fluid flow through ongoing research at the Faculty of Engineering and Information Technology (FEIT) at the Australian National University (ANU) in Canberra, Australia. The hybrid Trefftz method is also being applied to other fields, e.g. the computational modeling of hydrated soft tissues or water-saturated porous media, through an ongoing research project at the Technical University of Lisbon, Instituto Superior Técnico, in Portugal.
Notes
1. Qin (2000)
2. Connor & Brebbia (1976)
3. Qin (2004)
References
• Qin, Q.H. (2000), The Trefftz Finite and Boundary Element Method, Southampton, England: WIT Press, pp. 1–55
• Connor, J.J.; Brebbia, C.A. (1976), Finite Element Techniques for Fluid Flow (3rd ed.), Bristol, England: Newnes-Butterworths
• Qin, Q.H. (2004), "Formulation of hybrid Trefftz finite element method for elastoplasticity", Applied Mathematical Modelling, 29 (2): 235–252, doi:10.1016/j.apm.2004.09.004
External links
• "Trefftz method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Hybrid-Trefftz research project at Instituto Superior Técnico in Lisbon, Portugal
| Wikipedia |
Treks into Intuitive Geometry
Treks into Intuitive Geometry: The World of Polygons and Polyhedra is a book on geometry, written as a discussion between a teacher and a student in the style of a Socratic dialogue. It was written by Japanese mathematician Jin Akiyama and science writer Kiyoko Matsunaga, and published by Springer-Verlag in 2015 (ISBN 978-4-431-55841-5).[1]
Topics
The term "intuitive geometry" of the title was used by László Fejes Tóth to refer to results in geometry that are accessible to the general public, and the book concerns topics of this type.[1][2]
The book has 16 self-contained chapters,[1] each beginning with an illustrative puzzle or real-world application.[3] It includes material on tessellations, polyhedra, and honeycombs, unfoldings of polyhedra and tessellations of unfoldings, cross sections of polyhedra, measuring boxes, gift wrapping, packing problems, wallpaper groups, pentagonal tilings, the Conway criterion for prototiles and Escher-like tilings of the plane by animal-shaped figures, aperiodic tilings including the Penrose tiling, the art gallery theorem, the Euler characteristic, dissection problems and the Dehn invariant, and the Steiner tree problem.[1][2]
The book is heavily illustrated, and although its results are demonstrated in an accessible way, it provides sequences of deductions leading to each major claim; more-complete proofs and references are provided in an appendix.[3]
Audience and reception
Although it was initially developed from course material offered to undergraduates at the Tokyo University of Science,[2] the book is aimed at a broad audience, and assumes only a high-school level knowledge of geometry.[1][2] It could be used to encourage children in mathematics as well as to provide material for teachers and public lecturers.[1] There is enough depth of material to also retain the interest of readers with a more advanced mathematical background.[1][2]
Reviewer Matthieu Jacquemet writes that the ordering of topics is unintuitive and the dialogue-based format "artificial", but reviewer Tricia Muldoon Brown instead suggests that this format allows the work to flow very smoothly, "more like a novel or a play than a textbook ... with the ease of reading purely for pleasure".[3] Jacquemet assesses the book as "well illustrated and entertaining",[1] and Brown writes that it "is a delightful read".[3]
Reviewer Michael Fox disagrees, finding the dialogue irritating and the book overall "rather disappointing". He cites as problematic the book's cursory treatment of some of its topics, and in particular its treatment of tiling patterns as purely monochromatic, its omission of the frieze groups, and its use of demonstrations by special examples that do not have all the features of the general case. He also complains about idiosyncratic terminology, the use of decimal approximations instead of exact formulas for angles, the small scale of some figures, and an uneven level of difficulty of material. Nevertheless, he writes that "this is an interesting work, with much that cannot be found elsewhere".[2]
References
1. Jacquemet, Matthieu, "Review of Treks into Intuitive Geometry", zbMATH, Zbl 1339.52001
2. Fox, Michael (October 2017), "Review of Treks into Intuitive Geometry", The Mathematical Gazette, 101 (552): 565–568, doi:10.1017/mag.2017.164
3. Brown, Tricia Muldoon (April 2016), "Review of Treks into Intuitive Geometry", MAA Reviews, Mathematical Association of America
| Wikipedia |
Trémaux tree
In graph theory, a Trémaux tree of an undirected graph $G$ is a type of spanning tree, generalizing depth-first search trees. They are defined by the property that every edge of $G$ connects an ancestor–descendant pair in the tree. Trémaux trees are named after Charles Pierre Trémaux, a 19th-century French author who used a form of depth-first search as a strategy for solving mazes.[1][2] They have also been called normal spanning trees, especially in the context of infinite graphs.[3][4]
All depth-first search trees and all Hamiltonian paths are Trémaux trees. In finite graphs, every Trémaux tree is a depth-first search tree, but although depth-first search itself is inherently sequential, Trémaux trees can be constructed by a randomized parallel algorithm in the complexity class RNC. They can be used to define the tree-depth of a graph, and as part of the left-right planarity test for testing whether a graph is a planar graph. A characterization of Trémaux trees in the monadic second-order logic of graphs allows graph properties involving orientations to be recognized efficiently for graphs of bounded treewidth using Courcelle's theorem.
Not every infinite connected graph has a Trémaux tree, and not every infinite Trémaux tree is a depth-first search tree. The graphs that have Trémaux trees can be characterized by forbidden minors. An infinite Trémaux tree must have exactly one infinite path for each end of the graph, and the existence of a Trémaux tree characterizes the graphs whose topological completions, formed by adding a point at infinity for each end, are metric spaces.
Definition and examples
A Trémaux tree, for a graph $G$, is a spanning tree $T$ with the property that, for every edge $uv$ in $G$, one of the two endpoints $u$ and $v$ is an ancestor of the other. To be a spanning tree, it must only use edges of $G$, and include every vertex, with a unique finite path between every pair of vertices. Additionally, to define the ancestor–descendant relation in this tree, one of its vertices must be designated as its root.
If a finite graph has a Hamiltonian path, then rooting that path at one of its two endpoints produces a Trémaux tree. For such a path, every pair of vertices is an ancestor–descendant pair.
In the four-vertex graph with edges 1–2, 1–3, 2–3, and 3–4, the tree with edges 1–3, 2–3, and 3–4 is a Trémaux tree when it is rooted at vertex 1 or vertex 2: every edge of the graph belongs to the tree except for the edge 1–2, which (for these choices of root) connects an ancestor-descendant pair.
However, rooting the same tree at vertex 3 or vertex 4 produces a rooted tree that is not a Trémaux tree, because with this root 1 and 2 are no longer an ancestor and descendant of each other.
In finite graphs
Existence
Every finite connected undirected graph has at least one Trémaux tree.[4] One can construct such a tree by performing a depth-first search and connecting each vertex (other than the starting vertex of the search) to the earlier vertex from which it was discovered. The tree constructed in this way is known as a depth-first search tree. If $uv$ is an arbitrary edge in the graph, and $u$ is the earlier of the two vertices to be reached by the search, then $v$ must belong to the subtree descending from $u$ in the depth-first search tree, because the search will necessarily discover $v$ while it is exploring this subtree, either from one of the other vertices in the subtree or, failing that, from $u$ directly. Every finite Trémaux tree can be generated as a depth-first search tree: If $T$ is a Trémaux tree of a finite graph, and a depth-first search explores the children in $T$ of each vertex prior to exploring any other vertices, it will necessarily generate $T$ as its depth-first search tree.
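The construction just described can be sketched in a few lines of Python. The graph representation (an adjacency dictionary) and the helper names `dfs_tree` and `is_tremaux` are illustrative choices, not standard library functions; the example graph is the four-vertex graph discussed earlier in this article.

```python
# Sketch: build a depth-first search tree of a finite connected graph and
# verify the Trémaux property that every graph edge connects an
# ancestor-descendant pair. Names and the example graph are illustrative.

def dfs_tree(adj, root):
    """Return parent pointers of a depth-first search tree rooted at `root`."""
    parent = {root: None}
    def visit(u):
        for v in adj[u]:
            if v not in parent:
                parent[v] = u   # v was first discovered from u
                visit(v)
    visit(root)
    return parent

def is_tremaux(adj, parent):
    """True if every edge of the graph joins an ancestor-descendant pair."""
    def ancestors(v):
        out = set()
        while v is not None:
            out.add(v)
            v = parent[v]
        return out
    return all(v in ancestors(u) or u in ancestors(v)
               for u in adj for v in adj[u])

# Four-vertex example: edges 1-2, 1-3, 2-3, 3-4.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(is_tremaux(adj, dfs_tree(adj, root=1)))        # True
# The star rooted at vertex 3 is a spanning tree but not a Trémaux tree,
# because vertices 1 and 2 are no longer an ancestor-descendant pair:
print(is_tremaux(adj, {3: None, 1: 3, 2: 3, 4: 3}))  # False
```

As the article notes, any tree produced by `dfs_tree` passes the check, while other rootings of a spanning tree may fail it.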
Parallel construction
Unsolved problem in computer science:
Is there a deterministic parallel NC algorithm for constructing Trémaux trees?
It is P-complete to find the Trémaux tree that would be found by a sequential depth-first search algorithm, in which the neighbors of each vertex are searched in order by their identities.[5] Nevertheless, it is possible to find a different Trémaux tree by a randomized parallel algorithm, showing that the construction of Trémaux trees belongs to the complexity class RNC. The algorithm is based on another randomized parallel algorithm, for finding minimum-weight perfect matchings in 0-1-weighted graphs.[6] As of 1997, it remained unknown whether Trémaux tree construction could be performed by a deterministic parallel algorithm, in the complexity class NC.[7] If matchings can be found in NC, then so can Trémaux trees.[6]
Logical expression
It is possible to express the property that a set $T$ of edges with a choice of root vertex $r$ forms a Trémaux tree, in the monadic second-order logic of graphs, and more specifically in the form of this logic called MSO2, which allows quantification over both vertex and edge sets. This property can be expressed as the conjunction of the following properties:
• The graph is connected by the edges in $T$. This can be expressed logically as the statement that, for every non-empty proper subset of the graph's vertices, there exists an edge in $T$ with exactly one endpoint in the given subset.
• $T$ is acyclic. This can be expressed logically as the statement that there does not exist a nonempty subset $C$ of $T$ for which each vertex is incident to either zero or two edges of $C$.
• Every edge $e$ not in $T$ connects an ancestor-descendant pair of vertices in $T$. This is true when both endpoints of $e$ belong to a path in $T$. It can be expressed logically as the statement that, for all edges $e$, there exists a subset $P$ of $T$ such that exactly two vertices, one of them $r$, are incident to a single edge of $P$, and such that both endpoints of $e$ are incident to at least one edge of $P$.
Once a Trémaux tree has been identified in this way, one can describe an orientation of the given graph, also in monadic second-order logic, by specifying the set of edges whose orientation is from the ancestral endpoint to the descendant endpoint. The remaining edges outside this set must be oriented in the other direction. This technique allows graph properties involving orientations to be specified in monadic second order logic, allowing these properties to be tested efficiently on graphs of bounded treewidth using Courcelle's theorem.[8]
Related properties
If a graph has a Hamiltonian path, then that path (rooted at one of its endpoints) is also a Trémaux tree. The undirected graphs for which every Trémaux tree has this form are the cycle graphs, complete graphs, and balanced complete bipartite graphs.[9]
Trémaux trees are closely related to the concept of tree-depth. The tree-depth of a graph $G$ can be defined as the smallest number $d$ for which there exist a graph $H$, with a Trémaux tree $T$ of depth $d$, such that $G$ is a subgraph of $H$. Bounded tree-depth, in a family of graphs, is equivalent to the existence of a path that cannot occur as a graph minor of the graphs in the family. Many hard computational problems on graphs have algorithms that are fixed-parameter tractable when parameterized by the tree-depth of their inputs.[10]
Trémaux trees also play a key role in the Fraysseix–Rosenstiehl planarity criterion for testing whether a given graph is planar. According to this criterion, a graph $G$ is planar if, for a given Trémaux tree $T$ of $G$, the remaining edges can be placed in a consistent way to the left or the right of the tree, subject to constraints that prevent edges with the same placement from crossing each other.[11]
In infinite graphs
Existence
Not every infinite graph has a normal spanning tree. For instance, a complete graph on an uncountable set of vertices does not have one: a normal spanning tree in a complete graph can only be a path, but a path has only a countable number of vertices. However, every graph on a countable set of vertices does have a normal spanning tree.[3][4]
Even in countable graphs, a depth-first search might not succeed in eventually exploring the entire graph,[3] and not every normal spanning tree can be generated by a depth-first search: to be a depth-first search tree, a countable normal spanning tree must have only one infinite path or one node with infinitely many children (and not both).
Minors
If an infinite graph $G$ has a normal spanning tree, so does every connected graph minor of $G$. It follows from this that the graphs that have normal spanning trees have a characterization by forbidden minors. One of the two classes of forbidden minors consists of bipartite graphs in which one side of the bipartition is countable, the other side is uncountable, and every vertex has infinite degree. The other class of forbidden minors consists of certain graphs derived from Aronszajn trees.[12]
The details of this characterization depend on the choice of set-theoretic axiomatization used to formalize mathematics. In particular, in models of set theory for which Martin's axiom is true and the continuum hypothesis is false, the class of bipartite graphs in this characterization can be replaced by a single forbidden minor. However, for models in which the continuum hypothesis is true, this class contains graphs which are incomparable with each other in the minor ordering.[13]
Ends and metrizability
Normal spanning trees are also closely related to the ends of an infinite graph, equivalence classes of infinite paths that, intuitively, go to infinity in the same direction. If a graph has a normal spanning tree, this tree must have exactly one infinite path for each of the graph's ends.[14]
An infinite graph can be used to form a topological space by viewing the graph itself as a simplicial complex and adding a point at infinity for each end of the graph. With this topology, a graph has a normal spanning tree if and only if its set of vertices can be decomposed into a countable union of closed sets. Additionally, this topological space can be represented by a metric space if and only if the graph has a normal spanning tree.[14]
References
1. Even, Shimon (2011), Graph Algorithms (2nd ed.), Cambridge University Press, pp. 46–48, ISBN 978-0-521-73653-4.
2. Sedgewick, Robert (2002), Algorithms in C++: Graph Algorithms (3rd ed.), Pearson Education, pp. 149–157, ISBN 978-0-201-36118-6.
3. Soukup, Lajos (2008), "Infinite combinatorics: from finite to infinite", Horizons of combinatorics, Bolyai Soc. Math. Stud., vol. 17, Berlin: Springer, pp. 189–213, doi:10.1007/978-3-540-77200-2_10, ISBN 978-3-540-77199-9, MR 2432534. See in particular Theorem 3, p. 193.
4. Diestel, Reinhard (2017), Graph Theory, Graduate Texts in Mathematics, vol. 173 (5th ed.), Berlin: Springer, pp. 34–36, 220–221, 247, 251–252, doi:10.1007/978-3-662-53622-3, ISBN 978-3-662-53621-6, MR 3644391
5. Reif, John H. (1985), "Depth-first search is inherently sequential", Information Processing Letters, 20 (5): 229–234, doi:10.1016/0020-0190(85)90024-9, MR 0801987.
6. Aggarwal, A.; Anderson, R. J. (1988), "A random NC algorithm for depth first search", Combinatorica, 8 (1): 1–12, doi:10.1007/BF02122548, MR 0951989.
7. Karger, David R.; Motwani, Rajeev (1997), "An NC algorithm for minimum cuts", SIAM Journal on Computing, 26 (1): 255–272, doi:10.1137/S0097539794273083, MR 1431256.
8. Courcelle, Bruno (1996), "On the expression of graph properties in some fragments of monadic second-order logic" (PDF), in Immerman, Neil; Kolaitis, Phokion G. (eds.), Proc. Descr. Complex. Finite Models, DIMACS, vol. 31, Amer. Math. Soc., pp. 33–62, MR 1451381.
9. Chartrand, Gary; Kronk, Hudson V. (1968), "Randomly traceable graphs", SIAM Journal on Applied Mathematics, 16 (4): 696–700, doi:10.1137/0116056, MR 0234852.
10. Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), "Chapter 6. Bounded height trees and tree-depth", Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28, Heidelberg: Springer, pp. 115–144, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058.
11. de Fraysseix, Hubert; Rosenstiehl, Pierre (1982), "A depth-first-search characterization of planarity", Graph theory (Cambridge, 1981), Ann. Discrete Math., vol. 13, Amsterdam: North-Holland, pp. 75–80, MR 0671906; de Fraysseix, Hubert; Ossona de Mendez, Patrice; Rosenstiehl, Pierre (2006), "Trémaux trees and planarity", International Journal of Foundations of Computer Science, 17 (5): 1017–1029, arXiv:math/0610935, doi:10.1142/S0129054106004248, MR 2270949.
12. Diestel, Reinhard; Leader, Imre (2001), "Normal spanning trees, Aronszajn trees and excluded minors" (PDF), Journal of the London Mathematical Society, Second Series, 63 (1): 16–32, doi:10.1112/S0024610700001708, MR 1801714.
13. Bowler, Nathan; Geschke, Stefan; Pitz, Max (2016), Minimal obstructions for normal spanning trees, arXiv:1609.01042, Bibcode:2016arXiv160901042B
14. Diestel, Reinhard (2006), "End spaces and spanning trees", Journal of Combinatorial Theory, Series B, 96 (6): 846–854, doi:10.1016/j.jctb.2006.02.010, MR 2274079.
| Wikipedia |
Trena Wilkerson
Trena L. Wilkerson (born 1954) is an American mathematician and mathematics educator. She is a Professor of Mathematics Education in the Department of Curriculum & Instruction at Baylor University,[1] and the president of the National Council of Teachers of Mathematics for the 2020–2022 term.[2][3]
Education and career
Wilkerson majored in mathematics at Mississippi College, earned a master's degree in mathematics education from Southeastern Louisiana University, and worked as a high school teacher in Louisiana for 18 years, from 1976 to 1994.[4]
Returning to graduate study, she earned a Ph.D. in curriculum and instruction in 1994 at the University of Southern Mississippi, specializing in mathematics education,[1] and became an assistant research professor at Louisiana State University from 1994 to 1999, when she moved to her present position at Baylor.[4]
References
1. "Trena L. Wilkerson, Ph.D.", School of Education, Baylor School of Education, retrieved 2020-12-30
2. History of the NCTM Board, National Council of Teachers of Mathematics, retrieved 2020-12-30
3. "Dr. Trena Wilkerson Installed as President of National Council of Teachers in Mathematics", Media & Public Relations, Baylor University, April 9, 2020, retrieved 2020-12-30
4. "Trena L. Wilkerson, Candidate for President-Elect", 2018 Candidates, National Council of Teachers of Mathematics, 2018, retrieved 2020-12-30
External links
• Trena Wilkerson publications indexed by Google Scholar
| Wikipedia |
Trend surface analysis
Trend surface analysis is a mathematical technique used in environmental sciences (archeology, geology, soil science, etc.). Trend surface analysis (also called trend surface mapping) is a method based on low-order polynomials of spatial coordinates for estimating a regular grid of points from scattered observations, for example from archeological finds or from soil surveys.
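As a concrete illustration of the technique (a minimal sketch, not code from any particular package), a low-order polynomial surface can be fitted to scattered observations by ordinary least squares and then evaluated on a regular grid. The function names and the synthetic planar data below are hypothetical.

```python
# Fit a low-order polynomial trend surface z = f(x, y) to scattered points
# by least squares, then evaluate it on a regular grid.
import numpy as np

def fit_trend_surface(x, y, z, order=1):
    """Least-squares coefficients of a polynomial trend surface of given order."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

def evaluate(terms, coeffs, gx, gy):
    """Evaluate the fitted surface at grid coordinates gx, gy."""
    return sum(c * gx**i * gy**j for (i, j), c in zip(terms, coeffs))

# Synthetic scattered observations lying on the plane z = 2 + 3x - y.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 20), rng.uniform(0, 10, 20)
z = 2 + 3 * x - y
terms, coeffs = fit_trend_surface(x, y, z, order=1)      # recovers [2, -1, 3]
gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
grid = evaluate(terms, coeffs, gx, gy)                   # regular grid of estimates
```

In practice the observations contain noise and the fitted surface captures only the broad regional trend; raising `order` to 2 or 3 gives the quadratic and cubic surfaces commonly used in soil survey and archeology.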
| Wikipedia |
Treviso Arithmetic
The Treviso Arithmetic, or Arte dell'Abbaco, is an anonymous textbook in commercial arithmetic written in vernacular Venetian and published in Treviso, Italy, in 1478.
Treviso Arithmetic
Country: Treviso, Italy
Language: vernacular Venetian
Subject: Arithmetic
Published: 1478
The author explains the motivation for writing this textbook:[1]
I have often been asked by certain youths in whom I have much interest, and who look forward to mercantile pursuits, to put into writing the fundamental principles of arithmetic, commonly called abacus.
The Treviso Arithmetic is the earliest known printed mathematics book in the West, and one of the first printed European textbooks dealing with a science.
The Arithmetic as an early printed book
There appears to have been only one edition of the work. David Eugene Smith translated parts of the Treviso Arithmetic for educational purposes in 1907. Frank J. Swetz translated the complete work, using Smith's notes, in 1987 in his Capitalism & Arithmetic: The New Math of the 15th Century. Swetz used a copy of the Treviso housed in the Manuscript Library at Columbia University. The volume found its way to this collection via a curious route. Maffeo Pinelli (1785), an Italian bibliophile, is the first known owner. After his death his library was purchased by a London book-dealer and sold at auction on February 6, 1790. The book was obtained for three shillings by Mr. Wodhull.[2] About 100 years later the Arithmetic appeared in the library of Brayton Ives, a New York lawyer. When Ives sold the collection of books at auction, George Arthur Plimpton, a New York publisher, acquired the Treviso and added it to his extensive collection of early scientific texts. Plimpton donated his library to Columbia University in 1936.[3] Original copies of the Treviso Arithmetic are extremely rare.
There are 123 pages of text with 32 lines of print to a page. The pages are unnumbered, untrimmed and have wide margins. Some of the margins contain written notes. The size of the book is 14.5 cm by 20.6 cm.
The book included information taken from the 1202 Liber Abaci, such as lattice multiplication. George G. Joseph in Crest of the Peacock suggests that John Napier read this book to create Napier's bones (or rods).
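Lattice (or "gelosia") multiplication, mentioned above, reduces a product to single-digit products written into the cells of a grid, after which the answer is read off by summing the diagonals with carries. A short modern sketch of the procedure follows; the function name and the example factors are illustrative, not taken from the book.

```python
# Sketch of lattice multiplication: each digit product fills one grid cell
# (split into a tens half and a units half), and the diagonals are summed
# with carries, exactly as in the written method.

def lattice_multiply(a, b):
    da = [int(d) for d in str(a)][::-1]  # least significant digit first
    db = [int(d) for d in str(b)][::-1]
    diag = [0] * (len(da) + len(db))     # one running sum per diagonal
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            p = x * y
            diag[i + j] += p % 10        # units half of the lattice cell
            diag[i + j + 1] += p // 10   # tens half lies one diagonal over
    # sweep the diagonals from the right, propagating carries
    carry, digits = 0, []
    for d in diag:
        carry, r = divmod(d + carry, 10)
        digits.append(r)
    return int("".join(map(str, digits[::-1])))

print(lattice_multiply(934, 314))  # 293276
```

The diagonal sums implement the same digit-by-digit bookkeeping a fifteenth-century reader would have performed in the drawn lattice.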
Reasons for publication
The Treviso Arithmetic is a practical book intended for self study and for use in Venetian trade. It is written in vernacular Venetian and communicated knowledge to a large population.
It helped to end the monopoly on mathematical knowledge and gave important information to the middle class. It was not written for a large audience, but was intended to teach mathematics of everyday currency.
The Treviso became one of the first mathematics books written for the expansion of human knowledge. It provided an opportunity for the common person, rather than only a privileged few, to learn the art of computation. The Treviso Arithmetic also provided an early example of computational algorithms based on the Hindu–Arabic numeral system.[4]
See also
• Ars Magna (Gerolamo Cardano) (1510)
• Trigonometria (1595)
Notes
1. David Eugene Smith "The First Printed Arithmetic (Treviso, 1478)," Isis, 6 (1924): 311–331, at p. 314
2. Swetz, Frank, J. 1987. Capitalism and Arithmetic. La Salle: Open Court.
3. Swetz, 34
4. Swetz, 26
References
• Boyer, Carl. 1991. A History of Mathematics. New York City: Wiley.
• Buck-Morss, Susan (1 January 1995). "Envisioning Capital: Political Economy on Display". Critical Inquiry. 21 (2): 434–467. JSTOR 1343930.
• Carter, Baker. 2006. The Role of the History of Mathematics in Middle School. Presentation at East Tennessee University, August 28.
• Gazale, Midhat, J. 2000. Number. Princeton: Princeton University Press.
• Newman, J, R. 1956. The World of Mathematics. New York City: Simon & Schuster.
• Peterson, Ivars. 1996. Old and New Arithmetic. Mathematical Association of America. http://www.maa.org/mathland/mathland_8_5.html (accessed October 11, 2006).
• Swetz, Frank, J. 1987. Capitalism and Arithmetic. La Salle: Open Court.
External links
• Full text of the Treviso Arithmetic
• Treviso Arithmetic at Columbia University
Trevor Wooley
Trevor Dion Wooley FRS (born 17 September 1964) is a British mathematician and currently Professor of Mathematics at Purdue University. His fields of interest include analytic number theory, Diophantine equations and Diophantine problems, harmonic analysis, the Hardy–Littlewood circle method, and the theory and applications of exponential sums. He has made significant breakthroughs on Waring's problem, for which he was awarded the Salem Prize in 1998.
Trevor D. Wooley
Born: 17 September 1964, United Kingdom
Nationality: British
Alma mater: Imperial College London; University of Cambridge
Known for: Analytic number theory; Diophantine equations; Hardy–Littlewood circle method
Awards: Fellow of the Royal Society; Salem Prize; Berwick Prize (1993)
Scientific career
Fields: Mathematics
Institutions: Purdue University
Doctoral advisor: Robert Charles Vaughan
He received his bachelor's degree in 1987 from the University of Cambridge and his PhD, supervised by Robert Charles Vaughan, in 1990 from the University of London.[1] In 2007, he was elected Fellow of the Royal Society.
Awards and honours
• Alfred P. Sloan Research Fellow, 1993–1995
• Salem Prize, 1998
• Invited speaker, International Congress of Mathematicians, Beijing 2002
• Elected Fellow of the Royal Society, 2007.
• Fröhlich Prize, 2012.
• Fellow of the American Mathematical Society, 2012.[2]
• Invited speaker, International Congress of Mathematicians, Seoul 2014
Selected publications
• Wooley, Trevor D. (1992). "Large Improvements in Waring's Problem". The Annals of Mathematics. JSTOR. 135 (1): 131–164. doi:10.2307/2946566. ISSN 0003-486X. JSTOR 2946566.
• Wooley, Trevor D. (1994). "Quasi-diagonal behaviour in certain mean value theorems of additive number theory". Journal of the American Mathematical Society. American Mathematical Society (AMS). 7 (1): 221–245. doi:10.1090/s0894-0347-1994-1224595-9. ISSN 0894-0347.
• Wooley, Trevor D. (1995). "Breaking classical convexity in Waring's problem: Sums of cubes and quasi-diagonal behaviour". Inventiones Mathematicae. Springer Science and Business Media LLC. 122 (1): 421–451. Bibcode:1995InMat.122..421W. doi:10.1007/bf01231451. hdl:2027.42/46588. ISSN 0020-9910.
• Wooley, Trevor (1 May 2012). "Vinogradov's mean value theorem via efficient congruencing". Annals of Mathematics. Annals of Mathematics. 175 (3): 1575–1627. arXiv:1101.0574. doi:10.4007/annals.2012.175.3.12. ISSN 0003-486X. S2CID 13286053.
References
1. Trevor Wooley at the Mathematics Genealogy Project
2. List of Fellows of the American Mathematical Society, retrieved 1 September 2013.
External links
• Official website
| Wikipedia |
Treynor–Black model
In finance, the Treynor–Black model is a mathematical model for security selection published by Fischer Black and Jack Treynor in 1973. The model assumes an investor who considers most securities to be priced efficiently, but who believes they have information that can be used to predict the abnormal performance (alpha) of a few of them; the model finds the optimal portfolio to hold under such conditions.
In essence, the optimal portfolio consists of two parts: a passively invested index fund containing all securities in proportion to their market value, and an 'active portfolio' containing the securities for which the investor has made a prediction about alpha. In the active portfolio, the weight of each stock is proportional to its alpha value divided by the variance of its residual risk.
The Model
Assume that the risk free rate is RF and the expected market return is RM with standard deviation $\sigma _{M}$. There are N securities that have been analyzed and are thought to be mispriced, with expected returns given by:
$r_{i}=R_{F}+\beta _{i}(R_{M}-R_{F})+\alpha _{i}+\epsilon _{i}$
where the random terms $\epsilon _{i}$ are normally distributed with mean 0, standard deviation $\sigma _{i}$, and are mutually uncorrelated. (This is the so-called Diagonal Model of Stock Returns, or Single-index model due to William F. Sharpe).
Treynor and Black showed[1] that the active portfolio A is constructed using the weights
$w_{i}={\frac {\alpha _{i}/\sigma _{i}^{2}}{\sum _{j=1}^{N}\alpha _{j}/\sigma _{j}^{2}}}$
(Note that if an alpha is negative, the corresponding portfolio weight will also be negative; i.e., the active portfolio is in general a long–short portfolio.)
The alpha, beta and residual risk of the constructed active portfolio are found using the previously computed weights wi:
$\alpha _{A}=\sum w_{i}\alpha _{i}$
$\beta _{A}=\sum w_{i}\beta _{i}$
$\sigma _{A}^{2}=\sum w_{i}^{2}\sigma _{i}^{2}$
The overall risky portfolio for the investor consists of a fraction wA invested in the active portfolio and the remainder invested in the market portfolio. This active fraction is found as follows:
$w_{0}={\frac {\alpha _{A}/\sigma _{A}^{2}}{(R_{M}-R_{F})/\sigma _{M}^{2}}}$
And corrected for the beta exposure of the active portfolio:
$w_{A}={\frac {w_{0}}{1+(1-\beta _{A})w_{0}}}$
$w_{M}=1-w_{A}$
The model does not impose the bounds 0 ≤ wA ≤ 1 and 0 ≤ wM ≤ 1; i.e., short positions in the market portfolio or the active portfolio could be initiated to leverage a position in the other portfolio. This is often regarded as the major flaw of the model, as it often yields an unrealistically large weight for the active portfolio. Imposing lower and upper bounds on wA is one measure to counter this.
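The portfolio construction above can be sketched numerically. The following plain-Python example uses hypothetical analyst forecasts (the alphas, residual risks, betas and market parameters are illustrative assumptions, not data from the original paper):

```python
# Hypothetical analyst forecasts for three mispriced securities
# (illustrative numbers only, not from Treynor and Black's paper).
alpha = [0.03, -0.02, 0.015]   # predicted abnormal returns
sigma = [0.20, 0.25, 0.15]     # residual (non-systematic) risks
beta = [1.1, 0.9, 1.0]         # market betas
R_M, R_F, sigma_M = 0.10, 0.03, 0.18  # market return, risk-free rate, market risk

# Active-portfolio weights: proportional to alpha_i / sigma_i^2.
raw = [a / s**2 for a, s in zip(alpha, sigma)]
w = [x / sum(raw) for x in raw]

# Alpha, beta and residual variance of the active portfolio.
alpha_A = sum(wi * ai for wi, ai in zip(w, alpha))
beta_A = sum(wi * bi for wi, bi in zip(w, beta))
var_A = sum(wi**2 * si**2 for wi, si in zip(w, sigma))

# Fraction invested in the active portfolio, corrected for beta exposure.
w0 = (alpha_A / var_A) / ((R_M - R_F) / sigma_M**2)
w_A = w0 / (1 + (1 - beta_A) * w0)
w_M = 1 - w_A

print([round(x, 4) for x in w], round(w_A, 4), round(w_M, 4))
```

Note that the security with negative alpha receives a negative weight, i.e. a short position, exactly as the text describes.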
References
1. Kane et al.
• Treynor, J. L. and F. Black, 1973, How to Use Security Analysis to Improve Portfolio Selection, Journal of Business, January, pages 66–88. JSTOR 2351280
• Kane, Kim and White: Active Portfolio Management - The power of the Treynor–Black model, December 2003.
| Wikipedia |
Triacontagon
In geometry, a triacontagon or 30-gon is a thirty-sided polygon. The sum of any triacontagon's interior angles is 5040 degrees.
Regular triacontagon
A regular triacontagon
TypeRegular polygon
Edges and vertices30
Schläfli symbol{30}, t{15}
Coxeter–Dynkin diagrams
Symmetry groupDihedral (D30), order 2×30
Internal angle (degrees)168°
PropertiesConvex, cyclic, equilateral, isogonal, isotoxal
Dual polygonSelf
Regular triacontagon
The regular triacontagon is a constructible polygon, by an edge-bisection of a regular pentadecagon, and can also be constructed as a truncated pentadecagon, t{15}. A truncated triacontagon, t{30}, is a hexacontagon, {60}.
One interior angle in a regular triacontagon is 168 degrees, meaning that one exterior angle would be 12°. The triacontagon is the largest regular polygon whose interior angle is the sum of the interior angles of smaller polygons: 168° is the sum of the interior angles of the equilateral triangle (60°) and the regular pentagon (108°).
The area of a regular triacontagon is (with t = edge length)[1]
$A={\frac {15}{2}}t^{2}\cot {\frac {\pi }{30}}={\frac {15}{4}}t^{2}\left({\sqrt {15}}+3{\sqrt {3}}+{\sqrt {2}}{\sqrt {25+11{\sqrt {5}}}}\right)$
The inradius of a regular triacontagon is
$r={\frac {1}{2}}t\cot {\frac {\pi }{30}}={\frac {1}{4}}t\left({\sqrt {15}}+3{\sqrt {3}}+{\sqrt {2}}{\sqrt {25+11{\sqrt {5}}}}\right)$
The circumradius of a regular triacontagon is
$R={\frac {1}{2}}t\csc {\frac {\pi }{30}}={\frac {1}{2}}t\left(2+{\sqrt {5}}+{\sqrt {15+6{\sqrt {5}}}}\right)$
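The radical closed forms above can be checked against the elementary trigonometric expressions. A short sketch (plain Python, unit edge length assumed):

```python
import math

t = 1.0  # edge length

# Elementary trigonometric forms.
A = 7.5 * t**2 / math.tan(math.pi / 30)   # area
r = 0.5 * t / math.tan(math.pi / 30)      # inradius
R = 0.5 * t / math.sin(math.pi / 30)      # circumradius

# Radical closed forms quoted in the text; A and r share the same radical.
radical = (math.sqrt(15) + 3 * math.sqrt(3)
           + math.sqrt(2) * math.sqrt(25 + 11 * math.sqrt(5)))
A_closed = 3.75 * t**2 * radical
r_closed = 0.25 * t * radical
R_closed = 0.5 * t * (2 + math.sqrt(5) + math.sqrt(15 + 6 * math.sqrt(5)))

for x, y in ((A, A_closed), (r, r_closed), (R, R_closed)):
    assert abs(x - y) < 1e-9
print(round(A, 4), round(r, 4), round(R, 4))  # 71.3577 4.7572 4.7834
```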
Construction
As 30 = 2 × 3 × 5, a regular triacontagon is constructible using a compass and straightedge.[2]
Symmetry
The regular triacontagon has Dih30 dihedral symmetry, order 60, represented by 30 lines of reflection. Dih30 has 7 dihedral subgroups: Dih15, (Dih10, Dih5), (Dih6, Dih3), and (Dih2, Dih1). It also has eight more cyclic symmetries as subgroups: (Z30, Z15), (Z10, Z5), (Z6, Z3), and (Z2, Z1), with Zn representing π/n radian rotational symmetry.
John Conway labels these lower symmetries with a letter and order of the symmetry follows the letter.[3] He gives d (diagonal) with mirror lines through vertices, p with mirror lines through edges (perpendicular), i with mirror lines through both vertices and edges, and g for rotational symmetry. a1 labels no symmetry.
These lower symmetries allow degrees of freedom in defining irregular triacontagons. Only the g30 subgroup has no degrees of freedom, but it can be seen as having directed edges.
Dissection
Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m − 1)/2 parallelograms.[4] In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular triacontagon, m = 15, and it can be divided into 105 rhombi: 7 sets of 15. This decomposition is based on a Petrie polygon projection of a 15-cube.
Examples
Triacontagram
A triacontagram is a 30-sided star polygon. There are 3 regular forms given by Schläfli symbols {30/7}, {30/11}, and {30/13}, and 11 compound star figures with the same vertex configuration.
Compounds and stars
Form Compounds Star polygon Compound
Picture
{30/2}=2{15}
{30/3}=3{10}
{30/4}=2{15/2}
{30/5}=5{6}
{30/6}=6{5}
{30/7}
{30/8}=2{15/4}
Interior angle 156° 144° 132° 120° 108° 96° 84°
Form Compounds Star polygon Compound Star polygon Compounds
Picture
{30/9}=3{10/3}
{30/10}=10{3}
{30/11}
{30/12}=6{5/2}
{30/13}
{30/14}=2{15/7}
{30/15}=15{2}
Interior angle 72° 60° 48° 36° 24° 12° 0°
There are also isogonal triacontagrams constructed as deeper truncations of the regular pentadecagon {15} and pentadecagram {15/7}, and inverted pentadecagrams {15/11} and {15/13}. Other truncations form double coverings: t{15/14}={30/14}=2{15/7}, t{15/8}={30/8}=2{15/4}, t{15/4}={30/4}=2{15/2}, and t{15/2}={30/2}=2{15}.[5]
Compounds and stars
Quasiregular Isogonal Quasiregular
Double coverings
t{15} = {30}
t{15/14}=2{15/7}
t{15/7}={30/7}
t{15/8}=2{15/4}
t{15/11}={30/11}
t{15/4}=2{15/2}
t{15/13}={30/13}
t{15/2}=2{15}
Petrie polygons
The regular triacontagon is the Petrie polygon for three 8-dimensional polytopes with E8 symmetry, shown in orthogonal projections in the E8 Coxeter plane. It is also the Petrie polygon for two 4-dimensional polytopes, shown in the H4 Coxeter plane.
E8 H4
421
241
142
120-cell
600-cell
The regular triacontagram {30/7} is also the Petrie polygon for the great grand stellated 120-cell and grand 600-cell.
References
1. Weisstein, Eric W. "Triacontagon". MathWorld.
2. Constructible Polygon
3. The Symmetries of Things, Chapter 20
4. Coxeter, Mathematical recreations and Essays, Thirteenth edition, p.141
5. The Lighter Side of Mathematics: Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and its History, (1994), Metamorphoses of polygons, Branko Grünbaum
• Naming Polygons and Polyhedra
• triacontagon
Polygons (List)
Triangles
• Acute
• Equilateral
• Ideal
• Isosceles
• Kepler
• Obtuse
• Right
Quadrilaterals
• Antiparallelogram
• Bicentric
• Crossed
• Cyclic
• Equidiagonal
• Ex-tangential
• Harmonic
• Isosceles trapezoid
• Kite
• Orthodiagonal
• Parallelogram
• Rectangle
• Right kite
• Right trapezoid
• Rhombus
• Square
• Tangential
• Tangential trapezoid
• Trapezoid
By number
of sides
1–10 sides
• Monogon (1)
• Digon (2)
• Triangle (3)
• Quadrilateral (4)
• Pentagon (5)
• Hexagon (6)
• Heptagon (7)
• Octagon (8)
• Nonagon (Enneagon, 9)
• Decagon (10)
11–20 sides
• Hendecagon (11)
• Dodecagon (12)
• Tridecagon (13)
• Tetradecagon (14)
• Pentadecagon (15)
• Hexadecagon (16)
• Heptadecagon (17)
• Octadecagon (18)
• Icosagon (20)
>20 sides
• Icositrigon (23)
• Icositetragon (24)
• Triacontagon (30)
• 257-gon
• Chiliagon (1000)
• Myriagon (10,000)
• 65537-gon
• Megagon (1,000,000)
• Apeirogon (∞)
Star polygons
• Pentagram
• Hexagram
• Heptagram
• Octagram
• Enneagram
• Decagram
• Hendecagram
• Dodecagram
Classes
• Concave
• Convex
• Cyclic
• Equiangular
• Equilateral
• Infinite skew
• Isogonal
• Isotoxal
• Magic
• Pseudotriangle
• Rectilinear
• Regular
• Reinhardt
• Simple
• Skew
• Star-shaped
• Tangential
• Weakly simple
| Wikipedia |
Rhombic triacontahedron
In geometry, the rhombic triacontahedron, sometimes simply called the triacontahedron as it is the most common thirty-faced polyhedron, is a convex polyhedron with 30 rhombic faces. It has 60 edges and 32 vertices of two types. It is a Catalan solid, and the dual polyhedron of the icosidodecahedron. It is a zonohedron.
A face of the rhombic triacontahedron. The lengths
of the diagonals are in the golden ratio.
Rhombic triacontahedron
(Click here for rotating model)
TypeCatalan solid
Coxeter diagram
Conway notationjD
Face typeV3.5.3.5
rhombus
Faces30
Edges60
Vertices32
Vertices by type20{3}+12{5}
Symmetry groupIh, H3, [5,3], (*532)
Rotation groupI, [5,3]+, (532)
Dihedral angle144°
Propertiesconvex, face-transitive isohedral, isotoxal, zonohedron
Icosidodecahedron
(dual polyhedron)
Net
The ratio of the long diagonal to the short diagonal of each face is exactly equal to the golden ratio, φ, so that the acute angles on each face measure 2 tan−1(1/φ) = tan−1(2), or approximately 63.43°. A rhombus so obtained is called a golden rhombus.
Being the dual of an Archimedean solid, the rhombic triacontahedron is face-transitive, meaning the symmetry group of the solid acts transitively on the set of faces. This means that for any two faces, A and B, there is a rotation or reflection of the solid that leaves it occupying the same region of space while moving face A to face B.
The rhombic triacontahedron is somewhat special in being one of the nine edge-transitive convex polyhedra, the others being the five Platonic solids, the cuboctahedron, the icosidodecahedron, and the rhombic dodecahedron.
The rhombic triacontahedron is also interesting in that its vertices include the arrangement of four Platonic solids. It contains ten tetrahedra, five cubes, an icosahedron and a dodecahedron. The centers of the faces contain five octahedra.
It can be made from a truncated octahedron by dividing the hexagonal faces into 3 rhombi:
Cartesian coordinates
Let $\phi $ be the golden ratio. The 12 points given by $(0,\pm 1,\pm \phi )$ and cyclic permutations of these coordinates are the vertices of a regular icosahedron. Its dual regular dodecahedron, whose edges intersect those of the icosahedron at right angles, has as vertices the 8 points $(\pm 1,\pm 1,\pm 1)$ together with the 12 points $(0,\pm \phi ,\pm 1/\phi )$ and cyclic permutations of these coordinates. All 32 points together are the vertices of a rhombic triacontahedron centered at the origin. The length of its edges is ${\sqrt {3-\phi }}\approx 1.175\,570\,504\,58$. Its faces have diagonals with lengths $2$ and $2/\phi $.
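These 32 points and the stated edge length can be verified with a short sketch (the `cyclic` helper, which produces the cyclic coordinate permutations, is local to this example):

```python
import math
from itertools import product

phi = (1 + math.sqrt(5)) / 2

def cyclic(p):
    """Cyclic permutations of a coordinate triple."""
    x, y, z = p
    return [(x, y, z), (y, z, x), (z, x, y)]

verts = set()
for s1, s2 in product((1, -1), repeat=2):
    verts.update(cyclic((0, s1, s2 * phi)))        # 12 icosahedron vertices
for s in product((1, -1), repeat=3):
    verts.add(s)                                   # 8 cube-type dodecahedron vertices
for s1, s2 in product((1, -1), repeat=2):
    verts.update(cyclic((0, s1 * phi, s2 / phi)))  # 12 remaining dodecahedron vertices

verts = sorted(verts)
# The shortest distance between distinct vertices is the edge length.
edge = min(math.dist(a, b) for i, a in enumerate(verts) for b in verts[i + 1:])
print(len(verts), abs(edge - math.sqrt(3 - phi)) < 1e-9)
```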
Dimensions
If the edge length of a rhombic triacontahedron is a, then its surface area, volume, inradius (the radius of a sphere tangent to each of its faces), and midradius (the radius of a sphere touching the midpoint of each edge) are:[1]
${\begin{aligned}S&=12{\sqrt {5}}\,a^{2}&&\approx 26.8328a^{2}\\V&=4{\sqrt {5+2{\sqrt {5}}}}\,a^{3}&&\approx 12.3107a^{3}\\r_{\mathrm {i} }&={\frac {\varphi ^{2}}{\sqrt {1+\varphi ^{2}}}}\,a={\sqrt {1+{\frac {2}{\sqrt {5}}}}}\,a&&\approx 1.37638a\\r_{\mathrm {m} }&=\left(1+{\frac {1}{\sqrt {5}}}\right)\,a&&\approx 1.44721a\end{aligned}}$
where φ is the golden ratio.
The insphere is tangent to the faces at their face centroids. Short diagonals belong only to the edges of the inscribed regular dodecahedron, while long diagonals are included only in edges of the inscribed icosahedron.
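For a unit edge these formulas evaluate as follows; the sketch also checks that the two equivalent inradius expressions given above agree:

```python
import math

phi = (1 + math.sqrt(5)) / 2
a = 1.0  # edge length

S = 12 * math.sqrt(5) * a**2                    # surface area ≈ 26.8328
V = 4 * math.sqrt(5 + 2 * math.sqrt(5)) * a**3  # volume ≈ 12.3107
r_i = phi**2 / math.sqrt(1 + phi**2) * a        # inradius ≈ 1.37638
r_m = (1 + 1 / math.sqrt(5)) * a                # midradius ≈ 1.44721

# The two inradius expressions in the text are equal.
assert abs(r_i - math.sqrt(1 + 2 / math.sqrt(5)) * a) < 1e-12
print(round(S, 4), round(V, 4), round(r_i, 5), round(r_m, 5))
```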
Dissection
The rhombic triacontahedron can be dissected into 20 golden rhombohedra: 10 acute ones and 10 obtuse ones.[2][3]
Acute form
Obtuse form
Orthogonal projections
The rhombic triacontahedron has four symmetry positions, two centered on vertices, one mid-face, and one mid-edge. Embedded in projection "10" are the "fat" rhombus and "skinny" rhombus which tile together to produce the non-periodic tessellation often referred to as Penrose tiling.
Orthogonal projections
Projective
symmetry
[2] [2] [6] [10]
Image
Dual
image
Stellations
Further information: Rhombic hexecontahedron
The rhombic triacontahedron has 227 fully supported stellations.[4][5] Another stellation of the rhombic triacontahedron is the compound of five cubes. The total number of stellations of the rhombic triacontahedron is 358,833,097.
Related polyhedra
Family of uniform icosahedral polyhedra
Symmetry: [5,3], (*532) [5,3]+, (532)
{5,3} t{5,3} r{5,3} t{3,5} {3,5} rr{5,3} tr{5,3} sr{5,3}
Duals to uniform polyhedra
V5.5.5 V3.10.10 V3.5.3.5 V5.6.6 V3.3.3.3.3 V3.4.5.4 V4.6.10 V3.3.3.3.5
This polyhedron is a part of a sequence of rhombic polyhedra and tilings with [n,3] Coxeter group symmetry. The cube can be seen as a rhombic hexahedron where the rhombi are also rectangles.
Symmetry mutations of dual quasiregular tilings: V(3.n)2
*n32 Spherical Euclidean Hyperbolic
*332 *432 *532 *632 *732 *832... *∞32
Tiling
Conf. V(3.3)2 V(3.4)2 V(3.5)2 V(3.6)2 V(3.7)2 V(3.8)2 V(3.∞)2
• Spherical rhombic triacontahedron
• A rhombic triacontahedron with an inscribed tetrahedron (red) and cube (yellow).
(Click here for rotating model)
• A rhombic triacontahedron with an inscribed dodecahedron (blue) and icosahedron (purple).
(Click here for rotating model)
• Fully truncated rhombic triacontahedron
6-cube
The rhombic triacontahedron forms a 32-vertex convex hull of one projection of a 6-cube to three dimensions.
The 3D basis vectors [u,v,w] are:
u = (1, φ, 0, -1, φ, 0)
v = (φ, 0, 1, φ, 0, -1)
w = (0, 1, φ, 0, -1, φ)
Shown with inner edges hidden
20 of 32 interior vertices form a dodecahedron, and the remaining 12 form an icosahedron.
Uses
Danish designer Holger Strøm used the rhombic triacontahedron as a basis for the design of his buildable lamp IQ-light (IQ for "Interlocking Quadrilaterals").
Woodworker Jane Kostick builds boxes in the shape of a rhombic triacontahedron.[6] The simple construction is based on the less than obvious relationship between the rhombic triacontahedron and the cube.
Roger von Oech's "Ball of Whacks" comes in the shape of a rhombic triacontahedron.
The rhombic triacontahedron is used as the "d30" thirty-sided die, sometimes useful in some roleplaying games or other places.
Christopher Bird, co-author of The Secret Life of Plants wrote an article for New Age Journal in May, 1975, popularizing the dual icosahedron and dodecahedron as "the crystalline structure of the Earth," a model of the "Earth (telluric) energy Grid." The EarthStar Globe by Bill Becker and Bethe A. Hagens purports to show "the natural geometry of the Earth, and the geometric relationship between sacred places such as the Great Pyramid, the Bermuda Triangle, and Easter Island." It is printed as a rhombic triacontahedron, on 30 diamonds, and folds up into a globe.[7]
See also
• Golden rhombus
• Rhombille tiling
• Truncated rhombic triacontahedron
References
1. Stephen Wolfram, "" from Wolfram Alpha. Retrieved 7 January 2013.
2. "How to make golden rhombohedra out of paper".
3. Dissection of the rhombic triacontahedron
4. Pawley, G. S. (1975). "The 227 triacontahedra". Geometriae Dedicata. Kluwer Academic Publishers. 4 (2–4): 221–232. doi:10.1007/BF00148756. ISSN 1572-9168. S2CID 123506315.
5. Messer, P. W. (1995). "Stellations of the rhombic triacontahedron and Beyond". Structural Topology. 21: 25–46.
6. triacontahedron box - KO Sticks LLC
7. "History of the World Grid Theory".
• Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. ISBN 0-486-23729-X. (Section 3-9)
• Wenninger, Magnus (1983), Dual Models, Cambridge University Press, doi:10.1017/CBO9780511569371, ISBN 978-0-521-54325-5, MR 0730208 (The thirteen semiregular convex polyhedra and their duals, p. 22, Rhombic triacontahedron)
• The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, ISBN 978-1-56881-220-5 (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, p. 285, Rhombic triacontahedron )
External links
• Eric W. Weisstein, Rhombic triacontahedron (Catalan solid) at MathWorld.
• Rhombic Triacontrahedron – Interactive Polyhedron Model
• Virtual Reality Polyhedra – The Encyclopedia of Polyhedra
• Stellations of Rhombic Triacontahedron
• EarthStar globe – Rhombic Triacontahedral map projection
• IQ-light—Danish designer Holger Strøm's lamp
• Make your own Archived 17 July 2007 at the Wayback Machine
• a wooden construction of a rhombic triacontahedron box – by woodworker Jane Kostick
• 120 Rhombic Triacontahedra, 30+12 Rhombic Triacontahedra, and 12 Rhombic Triacontahedra by Sándor Kabai, The Wolfram Demonstrations Project
• A viper drawn on a rhombic triacontahedron.
Polyhedra
Listed by number of faces and type
1–10 faces
• Monohedron
• Dihedron
• Trihedron
• Tetrahedron
• Pentahedron
• Hexahedron
• Heptahedron
• Octahedron
• Enneahedron
• Decahedron
11–20 faces
• Hendecahedron
• Dodecahedron
• Tridecahedron
• Tetradecahedron
• Pentadecahedron
• Hexadecahedron
• Heptadecahedron
• Octadecahedron
• Enneadecahedron
• Icosahedron
>20 faces
• Icositetrahedron (24)
• Triacontahedron (30)
• Hexecontahedron (60)
• Enneacontahedron (90)
• Hectotriadiohedron (132)
• Apeirohedron (∞)
elemental things
• face
• edge
• vertex
• uniform polyhedron (two infinite groups and 75)
• regular polyhedron (9)
• quasiregular polyhedron (7)
• semiregular polyhedron (two infinite groups and 59)
convex polyhedron
• Platonic solid (5)
• Archimedean solid (13)
• Catalan solid (13)
• Johnson solid (92)
non-convex polyhedron
• Kepler–Poinsot polyhedron (4)
• Star polyhedron (infinite)
• Uniform star polyhedron (57)
prismatoids
• prism
• antiprism
• frustum
• cupola
• wedge
• pyramid
• parallelepiped
Catalan solids
Tetrahedron
(Dual)
Tetrahedron
(Seed)
Octahedron
(Dual)
Cube
(Seed)
Icosahedron
(Dual)
Dodecahedron
(Seed)
Triakis tetrahedron
(Needle)
Triakis tetrahedron
(Kis)
Triakis octahedron
(Needle)
Tetrakis hexahedron
(Kis)
Triakis icosahedron
(Needle)
Pentakis dodecahedron
(Kis)
Rhombic hexahedron
(Join)
Rhombic dodecahedron
(Join)
Rhombic triacontahedron
(Join)
Deltoidal dodecahedron
(Ortho)
Disdyakis hexahedron
(Meta)
Deltoidal icositetrahedron
(Ortho)
Disdyakis dodecahedron
(Meta)
Deltoidal hexecontahedron
(Ortho)
Disdyakis triacontahedron
(Meta)
Pentagonal dodecahedron
(Gyro)
Pentagonal icositetrahedron
(Gyro)
Pentagonal hexecontahedron
(Gyro)
Archimedean duals
Tetrahedron
(Seed)
Tetrahedron
(Dual)
Cube
(Seed)
Octahedron
(Dual)
Dodecahedron
(Seed)
Icosahedron
(Dual)
Truncated tetrahedron
(Truncate)
Truncated tetrahedron
(Zip)
Truncated cube
(Truncate)
Truncated octahedron
(Zip)
Truncated dodecahedron
(Truncate)
Truncated icosahedron
(Zip)
Tetratetrahedron
(Ambo)
Cuboctahedron
(Ambo)
Icosidodecahedron
(Ambo)
Rhombitetratetrahedron
(Expand)
Truncated tetratetrahedron
(Bevel)
Rhombicuboctahedron
(Expand)
Truncated cuboctahedron
(Bevel)
Rhombicosidodecahedron
(Expand)
Truncated icosidodecahedron
(Bevel)
Snub tetrahedron
(Snub)
Snub cube
(Snub)
Snub dodecahedron
(Snub)
Convex polyhedra
Platonic solids (regular)
• tetrahedron
• cube
• octahedron
• dodecahedron
• icosahedron
Archimedean solids
(semiregular or uniform)
• truncated tetrahedron
• cuboctahedron
• truncated cube
• truncated octahedron
• rhombicuboctahedron
• truncated cuboctahedron
• snub cube
• icosidodecahedron
• truncated dodecahedron
• truncated icosahedron
• rhombicosidodecahedron
• truncated icosidodecahedron
• snub dodecahedron
Catalan solids
(duals of Archimedean)
• triakis tetrahedron
• rhombic dodecahedron
• triakis octahedron
• tetrakis hexahedron
• deltoidal icositetrahedron
• disdyakis dodecahedron
• pentagonal icositetrahedron
• rhombic triacontahedron
• triakis icosahedron
• pentakis dodecahedron
• deltoidal hexecontahedron
• disdyakis triacontahedron
• pentagonal hexecontahedron
Dihedral regular
• dihedron
• hosohedron
Dihedral uniform
• prisms
• antiprisms
duals:
• bipyramids
• trapezohedra
Dihedral others
• pyramids
• truncated trapezohedra
• gyroelongated bipyramid
• cupola
• bicupola
• frustum
• bifrustum
• rotunda
• birotunda
• prismatoid
• scutoid
Degenerate polyhedra are in italics.
| Wikipedia |
Triakis icosahedron
In geometry, the triakis icosahedron is an Archimedean dual solid, or a Catalan solid, with 60 isosceles triangle faces. Its dual is the truncated dodecahedron. It has also been called the kisicosahedron.[1] It was first depicted, in a non-convex form with equilateral triangle faces, by Leonardo da Vinci in Luca Pacioli's Divina proportione, where it was named the icosahedron elevatum.[2] The capsid of the Hepatitis A virus has the shape of a triakis icosahedron.[3]
Triakis icosahedron
(Click here for rotating model)
TypeCatalan solid
Coxeter diagram
Conway notationkI
Face typeV3.10.10
isosceles triangle
Faces60
Edges90
Vertices32
Vertices by type20{3}+12{10}
Symmetry groupIh, H3, [5,3], (*532)
Rotation groupI, [5,3]+, (532)
Dihedral angle160°36′45″
arccos(−(24 + 15√5)/61)
Propertiesconvex, face-transitive
Truncated dodecahedron
(dual polyhedron)
Net
As a Kleetope
The triakis icosahedron can be formed by gluing triangular pyramids to each face of a regular icosahedron. Depending on the height of these pyramids relative to their base, the result can be either convex or non-convex. This construction, of gluing pyramids to each face, is an instance of a general construction called the Kleetope; the triakis icosahedron is the Kleetope of the icosahedron.[2] This interpretation is also expressed in the name, triakis, which is used for the Kleetopes of polyhedra with triangular faces.[1]
Non-convex triakis icosahedron drawn by Leonardo da Vinci in Luca Pacioli's Divina proportione
The visible parts of a small triambic icosahedron have the same shape as a non-convex triakis icosahedron
The great stellated dodecahedron, with 12 pentagram faces, has a triakis icosahedron as its outer shell
When depicted in Leonardo's form, with equilateral triangle faces, it is an example of a non-convex deltahedron, one of the few known deltahedra that are isohedral (meaning that all faces are symmetric to each other).[4] In another of the non-convex forms of the triakis icosahedron, the three triangles adjacent to each pyramid are coplanar, and can be thought of as instead forming the visible parts of a convex hexagon, in a self-intersecting polyhedron with 20 hexagonal faces that has been called the small triambic icosahedron.[5] Alternatively, for the same form of the triakis icosahedron, the triples of coplanar isosceles triangles form the faces of the first stellation of the icosahedron.[6] Yet another non-convex form, with golden isosceles triangle faces, forms the outer shell of the great stellated dodecahedron, a Kepler–Poinsot polyhedron with twelve pentagram faces.[7]
Each edge of the triakis icosahedron has endpoints of total degree at least 13. By Kotzig's theorem, this is the most possible for any polyhedron. The same total degree is obtained from the Kleetope of any polyhedron with minimum degree five, but the triakis icosahedron is the simplest example of this construction.[8] Although this Kleetope has isosceles triangle faces, iterating the Kleetope construction on it produces convex polyhedra with triangular faces that cannot all be isosceles.[9]
As a Catalan solid
The triakis icosahedron is a Catalan solid, the dual polyhedron of the truncated dodecahedron. The truncated dodecahedron is an Archimedean solid, with faces that are regular decagons and equilateral triangles, and with all edges having unit length; its vertices lie on a common sphere, the circumsphere of the truncated dodecahedron. The polar reciprocation of this solid through this sphere is a convex form of the triakis icosahedron, with all faces tangent to the same sphere, now an inscribed sphere, with coordinates and dimensions that can be calculated as follows.
Let $\varphi $ denote the golden ratio. The short edges of this form of the triakis icosahedron have length
${\frac {5\varphi +15}{11}}\approx 2.099$,
and the long edges have length
$\varphi +2\approx 3.618$.[10]
Its faces are isosceles triangles with one obtuse angle of
$\cos ^{-1}{\frac {-3\varphi }{10}}\approx 119^{\circ }$
and two acute angles of
$\cos ^{-1}{\frac {\varphi +7}{10}}\approx 30.5^{\circ }$.[11]
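These angle expressions are consistent with the edge lengths above, as a short check via the law of cosines on one isosceles face shows:

```python
import math

phi = (1 + math.sqrt(5)) / 2

obtuse = math.degrees(math.acos(-3 * phi / 10))  # ≈ 119.04 degrees
acute = math.degrees(math.acos((phi + 7) / 10))  # ≈ 30.48 degrees
assert abs(obtuse + 2 * acute - 180) < 1e-9      # triangle angles sum to 180

# Law of cosines: the long edge (phi + 2) lies opposite the obtuse angle,
# between two short edges of length (5*phi + 15)/11.
short = (5 * phi + 15) / 11
long_ = phi + 2
cos_obtuse = (2 * short**2 - long_**2) / (2 * short**2)
assert abs(cos_obtuse - (-3 * phi / 10)) < 1e-9
print(round(obtuse, 2), round(acute, 2))
```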
As a Catalan solid, its dihedral angles are all equal, $\cos ^{-1}\left(-{\frac {24+15{\sqrt {5}}}{61}}\right)\approx $ 160°36′45″. One possible set of 32 Cartesian coordinates for the vertices of the triakis icosahedron centered at the origin (scaled differently than the one above) can be generated by combining the vertices of two appropriately scaled Platonic solids, the regular icosahedron and a regular dodecahedron:[12]
• Twelve vertices of a regular icosahedron, scaled to have a unit circumradius, with the coordinates
${\frac {(0,\pm 1,\pm \varphi )}{\sqrt {\varphi ^{2}+1}}},{\frac {(\pm 1,\pm \varphi ,0)}{\sqrt {\varphi ^{2}+1}}},{\frac {(\pm \varphi ,0,\pm 1)}{\sqrt {\varphi ^{2}+1}}}.$
• Twenty vertices of a regular dodecahedron, scaled to have circumradius
${\frac {2+\varphi }{3+2\varphi }}{\sqrt {\frac {3}{2-1/\varphi }}}={\frac {1}{11}}{\sqrt {75+6{\sqrt {5}}}}\approx 0.8548,$
with the coordinates
$(\pm 1,\pm 1,\pm 1){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}}$
and
$(0,\pm \varphi ,\pm {\frac {1}{\varphi }}){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}},(\pm {\frac {1}{\varphi }},0,\pm \varphi ){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}},(\pm \varphi ,\pm {\frac {1}{\varphi }},0){\frac {\sqrt {75+6{\sqrt {5}}}}{11{\sqrt {3}}}}.$
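The sketch below builds these 32 points and verifies the two circumradii: 1 for the icosahedral vertices and ≈ 0.8548 for the dodecahedral ones (the `cyclic` helper is local to this example):

```python
import math
from itertools import product

phi = (1 + math.sqrt(5)) / 2

def cyclic(p):
    """Cyclic permutations of a coordinate triple."""
    x, y, z = p
    return [(x, y, z), (y, z, x), (z, x, y)]

# 12 icosahedron vertices scaled to unit circumradius.
n = math.sqrt(phi**2 + 1)
ico = []
for s1, s2 in product((1, -1), repeat=2):
    ico += cyclic((0, s1 / n, s2 * phi / n))

# 20 dodecahedron vertices scaled to circumradius sqrt(75 + 6*sqrt(5))/11.
c = math.sqrt(75 + 6 * math.sqrt(5)) / (11 * math.sqrt(3))
dod = [(sx * c, sy * c, sz * c) for sx, sy, sz in product((1, -1), repeat=3)]
for s1, s2 in product((1, -1), repeat=2):
    dod += cyclic((0, s1 * phi * c, s2 * c / phi))

assert len(ico) == 12 and len(dod) == 20
assert all(abs(math.hypot(*v) - 1) < 1e-9 for v in ico)
r_dod = math.sqrt(75 + 6 * math.sqrt(5)) / 11
assert all(abs(math.hypot(*v) - r_dod) < 1e-9 for v in dod)
print(round(r_dod, 4))  # 0.8548
```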
Symmetry
In any of its standard convex or non-convex forms, the triakis icosahedron has the same symmetries as a regular icosahedron.[4] The three types of symmetry axes of the icosahedron, through two opposite vertices, edge midpoints, and face centroids, become respectively axes through opposite pairs of degree-ten vertices of the triakis icosahedron, through opposite midpoints of edges between degree-ten vertices, and through opposite pairs of degree-three vertices.
See also
• Triakis triangular tiling for other "triakis" polyhedral forms.
• Great triakis icosahedron
References
1. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). The Symmetries of Things. AK Peters. p. 284. ISBN 978-1-56881-220-5.
2. Brigaglia, Aldo; Palladino, Nicla; Vaccaro, Maria Alessandra (2018). "Historical notes on star geometry in mathematics, art and nature". In Emmer, Michele; Abate, Marco (eds.). Imagine Math 6: Between Culture and Mathematics. Springer International Publishing. pp. 197–211. doi:10.1007/978-3-319-93949-0_17.
3. Zhu, Ling; Zhang, Xiaoxue (October 2014). "Hepatitis A virus exhibits a structure unique among picornaviruses". Protein & Cell. 6 (2): 79–80. doi:10.1007/s13238-014-0103-7. PMC 4312766.
4. Shephard, G. C. (1999). "Isohedral deltahedra". Periodica Mathematica Hungarica. 39 (1–3): 83–106. doi:10.1023/A:1004838806529.
5. Grünbaum, Branko (2008). "Can every face of a polyhedron have many sides?". Geometry, games, graphs and education. The Joe Malkevitch Festschrift. Papers from Joe Fest 2008, York College–The City University of New York (CUNY), Jamaica, NY, USA, November 8, 2008. Bedford, MA: Comap, Inc. pp. 9–26. hdl:1773/4593. ISBN 978-1-933223-17-9. Zbl 1185.52009.
6. Cromwell, Peter R. (1997). Polyhedra. Cambridge University Press. p. 270. ISBN 0-521-66405-5.
7. Wenninger, Magnus (1974). "22: The great stellated dodecahedron". Polyhedron Models. Cambridge University Press. pp. 40–42. ISBN 0-521-09859-9.
8. Zaks, Joseph (1983). "Extending Kotzig's theorem". Israel Journal of Mathematics. 45 (4): 281–296. doi:10.1007/BF02804013. hdl:10338.dmlcz/127504. MR 0720304.
9. Eppstein, David (2021). "On polyhedral realization with isosceles triangles". Graphs and Combinatorics. 37 (4): 1247–1269. arXiv:2009.00116. doi:10.1007/s00373-021-02314-9.
10. Weisstein, Eric W. "Triakis icosahedron". MathWorld.
11. Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. p. 89. ISBN 0-486-23729-X.
12. Koca, Mehmet; Ozdes Koca, Nazife; Koc, Ramazon (2010). "Catalan Solids Derived From 3D-Root Systems and Quaternions". Journal of Mathematical Physics. 51 (4). arXiv:0908.3272. doi:10.1063/1.3356985.
| Wikipedia |
Triakis truncated tetrahedral honeycomb
The triakis truncated tetrahedral honeycomb is a space-filling tessellation (or honeycomb) in Euclidean 3-space made up of triakis truncated tetrahedra. It was discovered in 1914.[1][2]
Triakis truncated tetrahedral honeycomb
Cell typeTriakis truncated tetrahedron
Face typeshexagon
isosceles triangle
Coxeter groupÃ3×2, [[3[4]]] (double)
Space groupFd3m (227)
PropertiesCell-transitive
Voronoi tessellation
It is the Voronoi tessellation of the carbon atoms in diamond,[3][4] which lie in the diamond cubic crystal structure.
Being composed entirely of triakis truncated tetrahedra, it is cell-transitive.
Relation to quarter cubic honeycomb
It can be seen as the uniform quarter cubic honeycomb with each tetrahedral cell subdivided by its center point into 4 shorter tetrahedra, each of which is adjoined to an adjacent truncated tetrahedral cell.
See also
• Disphenoid tetrahedral honeycomb
References
1. Föppl, L. (1914). "Der Fundamentalbereich des Diamantgitters". Phys. Z. 15: 191–193.
2. Grünbaum, B.; Shephard, G. C. (1980). "Tilings with Congruent Tiles". Bull. Amer. Math. Soc. 3 (3): 951–973. doi:10.1090/s0273-0979-1980-14827-2.
3. Conway, John. "Voronoi Polyhedron". geometry.puzzles. Retrieved 20 September 2012.
4. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). The Symmetries of Things. p. 332. ISBN 978-1568812205.
| Wikipedia |
Triakis truncated tetrahedron
In geometry, the triakis truncated tetrahedron is a convex polyhedron made from 4 hexagons and 12 isosceles triangles. It can be used to tessellate three-dimensional space, making the triakis truncated tetrahedral honeycomb.[1][2]
Not to be confused with truncated triakis tetrahedron.
Triakis truncated tetrahedron
Type: Plesiohedron
Faces: 4 hexagons, 12 isosceles triangles
Edges: 30
Vertices: 16
Conway notation: k3tT
Dual polyhedron: Order-3 truncated triakis tetrahedron
Properties: convex
The triakis truncated tetrahedron is the shape of the Voronoi cell of the carbon atoms in diamond, which lie on the diamond cubic crystal structure.[3][4] As the Voronoi cell of a symmetric space pattern, it is a plesiohedron.[5]
Construction
For space-filling, the triakis truncated tetrahedron can be constructed as follows:
1. Truncate a regular tetrahedron such that the big faces are regular hexagons.
2. Add an extra vertex at the center of each of the four smaller tetrahedra that were removed.
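The two steps above can be sketched numerically in plain Python. The coordinate choice (a regular tetrahedron on alternate corners of a cube) and the helper names are illustrative assumptions, not a standard construction API:

```python
# Regular tetrahedron on alternate corners of a cube.
V = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def lerp(a, b, t):
    """Point a fraction t of the way from a to b."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

# Step 1: cut each corner one third of the way along each edge, so the
# four large faces become regular hexagons (a truncated tetrahedron).
hex_vertices = {lerp(V[i], V[j], 1/3)
                for i in range(4) for j in range(4) if i != j}

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

# Step 2: add the center (centroid) of each removed corner tetrahedron,
# i.e. of the corner vertex together with its three cut points.
apexes = [centroid([V[i]] + [lerp(V[i], V[j], 1/3) for j in range(4) if j != i])
          for i in range(4)]

vertices = list(hex_vertices) + apexes
print(len(vertices))  # 12 truncation points + 4 centers = 16 vertices
```

The 16 vertices match the count in the infobox above.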
See also
• Quarter cubic honeycomb
• Truncated tetrahedron
• Triakis tetrahedron
References
1. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). The Symmetries of Things. p. 332. ISBN 978-1568812205.
2. Grünbaum, B; Shephard, G. C. (1980). "Tilings with Congruent Tiles". Bull. Amer. Math. Soc. 3 (3): 951–973. doi:10.1090/s0273-0979-1980-14827-2.
3. Föppl, L. (1914). "Der Fundamentalbereich des Diamantgitters". Phys. Z. 15: 191–193.
4. Conway, John. "Voronoi Polyhedron". geometry.puzzles. Retrieved 20 September 2012.
5. Grünbaum, Branko; Shephard, G. C. (1980), "Tilings with congruent tiles", Bulletin of the American Mathematical Society, New Series, 3 (3): 951–973, doi:10.1090/S0273-0979-1980-14827-2, MR 0585178.
| Wikipedia |
Triakis tetrahedron
In geometry, a triakis tetrahedron (or kistetrahedron[1]) is a Catalan solid with 12 faces. Each Catalan solid is the dual of an Archimedean solid. The dual of the triakis tetrahedron is the truncated tetrahedron.
Triakis tetrahedron
Type: Catalan solid
Conway notation: kT
Face type: V3.6.6 (isosceles triangle)
Faces: 12
Edges: 18
Vertices: 8
Vertices by type: 4{3} + 4{6}
Symmetry group: Td, A3, [3,3], (*332)
Rotation group: T, [3,3]+, (332)
Dihedral angle: 129°31′16″ = arccos(−7/11)
Properties: convex, face-transitive
Dual polyhedron: truncated tetrahedron
The triakis tetrahedron can be seen as a tetrahedron with a triangular pyramid added to each face; that is, it is the Kleetope of the tetrahedron, and this interpretation is expressed in the name. It is very similar to the net for the 5-cell: just as the net for a tetrahedron is a triangle with further triangles added to each edge, the net for the 5-cell is a tetrahedron with pyramids attached to each face.
The length of the shorter edges is 3/5 that of the longer edges.[2] If the triakis tetrahedron has shorter edge length 1, it has area (5/3)√11 and volume (25/36)√2.
Cartesian coordinates
Cartesian coordinates for the 8 vertices of a triakis tetrahedron centered at the origin, are the points (±5/3, ±5/3, ±5/3) with an even number of minus signs, along with the points (±1, ±1, ±1) with an odd number of minus signs:
• (5/3, 5/3, 5/3), (5/3, −5/3, −5/3), (−5/3, 5/3, −5/3), (−5/3, −5/3, 5/3)
• (−1, 1, 1), (1, −1, 1), (1, 1, −1), (−1, −1, −1)
The length of the shorter edges of this triakis tetrahedron equals 2√2. The faces are isosceles triangles with one obtuse and two acute angles. The obtuse angle equals arccos(–7/18) ≈ 112.88538047616° and the acute ones equal arccos(5/6) ≈ 33.55730976192°.
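These values can be checked directly from the coordinates; the sketch below (plain Python, no external libraries) recomputes the two edge lengths, their 3 : 5 ratio, and the obtuse face angle:

```python
from math import acos, degrees, dist, isclose, sqrt

# Vertices of the inner tetrahedron (even number of minus signs)...
A = [(5/3, 5/3, 5/3), (5/3, -5/3, -5/3), (-5/3, 5/3, -5/3), (-5/3, -5/3, 5/3)]
# ...and the four added pyramid apexes (odd number of minus signs).
B = [(-1, 1, 1), (1, -1, 1), (1, 1, -1), (-1, -1, -1)]

long_edge = dist(A[0], A[1])   # edge of the inner tetrahedron
short_edge = dist(B[0], A[0])  # apex to a nearest tetrahedron vertex

print(isclose(short_edge, 2 * sqrt(2)))      # True
print(isclose(short_edge / long_edge, 3/5))  # True: the 3/5 ratio

# Obtuse face angle, between the two short edges of an isosceles face:
s, l = short_edge, long_edge
obtuse = degrees(acos((2 * s**2 - l**2) / (2 * s**2)))
print(round(obtuse, 5))  # 112.88538, i.e. arccos(-7/18)
```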
Tetartoid symmetry
The triakis tetrahedron can be made as a degenerate limit of a tetartoid.
Orthogonal projections
The triakis tetrahedron and its dual truncated tetrahedron have orthogonal projections (as graphs and as solids, including the dual compound) centered on a short edge, a face, a vertex, and a long edge; these projections have projective symmetries [1], [2], [3], and [4].
Variations
A triakis tetrahedron with equilateral triangle faces represents a net of the four-dimensional regular polytope known as the 5-cell.
If the triangles are right-angled isosceles, the faces will be coplanar and together form a cube. This can be seen by adding the six edges of a tetrahedron inside a cube.
Stellations
This chiral figure is one of thirteen stellations allowed by Miller's rules.
Related polyhedra
The triakis tetrahedron is a part of a sequence of polyhedra and tilings, extending into the hyperbolic plane. These face-transitive figures have (*n32) reflectional symmetry.
*n32 symmetry mutation of truncated tilings, t{n,3}. The symmetry *n32, [n,3] is spherical for n = 2, 3, 4, 5, Euclidean for n = 6, compact hyperbolic for n = 7, 8, ..., paracompact for n = ∞, and noncompact hyperbolic for n = 12i, 9i, 6i:
Symmetry [n,3]: [2,3], [3,3], [4,3], [5,3], [6,3], [7,3], [8,3], ..., [∞,3], [12i,3], [9i,3], [6i,3]
Truncated figures: t{2,3}, t{3,3}, t{4,3}, t{5,3}, t{6,3}, t{7,3}, t{8,3}, ..., t{∞,3}, t{12i,3}, t{9i,3}, t{6i,3}
Triakis figures (face configurations): V3.4.4, V3.6.6, V3.8.8, V3.10.10, V3.12.12, V3.14.14, V3.16.16, ..., V3.∞.∞
Family of uniform tetrahedral polyhedra, symmetry [3,3], (*332); the snub has symmetry [3,3]+, (332):
Uniform polyhedra: {3,3}, t{3,3}, r{3,3}, t{3,3}, {3,3}, rr{3,3}, tr{3,3}, sr{3,3}
Duals to uniform polyhedra: V3.3.3, V3.6.6, V3.3.3.3, V3.6.6, V3.3.3, V3.4.3.4, V4.6.6, V3.3.3.3.3
See also
• Truncated triakis tetrahedron
References
1. Conway, Symmetries of things, p.284
2. "Triakis Tetrahedron - Geometry Calculator".
• Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. ISBN 0-486-23729-X. (Section 3-9)
• Wenninger, Magnus (1983), Dual Models, Cambridge University Press, doi:10.1017/CBO9780511569371, ISBN 978-0-521-54325-5, MR 0730208 (The thirteen semiregular convex polyhedra and their duals, Page 14, Triakistetrahedron)
• The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, ISBN 978-1-56881-220-5 (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, page 284, Triakis tetrahedron )
External links
• Eric W. Weisstein, Triakis tetrahedron (Catalan solid) at MathWorld.
Catalan solids
Tetrahedron
(Dual)
Tetrahedron
(Seed)
Octahedron
(Dual)
Cube
(Seed)
Icosahedron
(Dual)
Dodecahedron
(Seed)
Triakis tetrahedron
(Needle)
Triakis tetrahedron
(Kis)
Triakis octahedron
(Needle)
Tetrakis hexahedron
(Kis)
Triakis icosahedron
(Needle)
Pentakis dodecahedron
(Kis)
Rhombic hexahedron
(Join)
Rhombic dodecahedron
(Join)
Rhombic triacontahedron
(Join)
Deltoidal dodecahedron
(Ortho)
Disdyakis hexahedron
(Meta)
Deltoidal icositetrahedron
(Ortho)
Disdyakis dodecahedron
(Meta)
Deltoidal hexecontahedron
(Ortho)
Disdyakis triacontahedron
(Meta)
Pentagonal dodecahedron
(Gyro)
Pentagonal icositetrahedron
(Gyro)
Pentagonal hexecontahedron
(Gyro)
Archimedean duals
Tetrahedron
(Seed)
Tetrahedron
(Dual)
Cube
(Seed)
Octahedron
(Dual)
Dodecahedron
(Seed)
Icosahedron
(Dual)
Truncated tetrahedron
(Truncate)
Truncated tetrahedron
(Zip)
Truncated cube
(Truncate)
Truncated octahedron
(Zip)
Truncated dodecahedron
(Truncate)
Truncated icosahedron
(Zip)
Tetratetrahedron
(Ambo)
Cuboctahedron
(Ambo)
Icosidodecahedron
(Ambo)
Rhombitetratetrahedron
(Expand)
Truncated tetratetrahedron
(Bevel)
Rhombicuboctahedron
(Expand)
Truncated cuboctahedron
(Bevel)
Rhombicosidodecahedron
(Expand)
Truncated icosidodecahedron
(Bevel)
Snub tetrahedron
(Snub)
Snub cube
(Snub)
Snub dodecahedron
(Snub)
Convex polyhedra
Platonic solids (regular)
• tetrahedron
• cube
• octahedron
• dodecahedron
• icosahedron
Archimedean solids
(semiregular or uniform)
• truncated tetrahedron
• cuboctahedron
• truncated cube
• truncated octahedron
• rhombicuboctahedron
• truncated cuboctahedron
• snub cube
• icosidodecahedron
• truncated dodecahedron
• truncated icosahedron
• rhombicosidodecahedron
• truncated icosidodecahedron
• snub dodecahedron
Catalan solids
(duals of Archimedean)
• triakis tetrahedron
• rhombic dodecahedron
• triakis octahedron
• tetrakis hexahedron
• deltoidal icositetrahedron
• disdyakis dodecahedron
• pentagonal icositetrahedron
• rhombic triacontahedron
• triakis icosahedron
• pentakis dodecahedron
• deltoidal hexecontahedron
• disdyakis triacontahedron
• pentagonal hexecontahedron
Dihedral regular
• dihedron
• hosohedron
Dihedral uniform
• prisms
• antiprisms
duals:
• bipyramids
• trapezohedra
Dihedral others
• pyramids
• truncated trapezohedra
• gyroelongated bipyramid
• cupola
• bicupola
• frustum
• bifrustum
• rotunda
• birotunda
• prismatoid
• scutoid
Degenerate polyhedra are in italics.
| Wikipedia |
Trial division
Trial division is the most laborious but easiest to understand of the integer factorization algorithms. The essential idea behind trial division is to test whether an integer n, the integer to be factored, is divisible by each number in turn that is less than n. For example, for the integer n = 12, the only numbers that divide it are 1, 2, 3, 4, 6, 12. Selecting only the largest powers of primes in this list gives 12 = 3 × 4 = 3 × 2².
Trial division was first described by Fibonacci in his book Liber Abaci (1202).[1]
Method
Given an integer n (n refers to "the integer to be factored"), the trial division consists of systematically testing whether n is divisible by any smaller number. Clearly, it is only worthwhile to test candidate factors less than n, and in order from two upwards because an arbitrary n is more likely to be divisible by two than by three, and so on. With this ordering, there is no point in testing for divisibility by four if the number has already been determined not divisible by two, and so on for three and any multiple of three, etc. Therefore, the effort can be reduced by selecting only prime numbers as candidate factors. Furthermore, the trial factors need go no further than $\scriptstyle {\sqrt {n}}$ because, if n is divisible by some number p, then n = p × q and if q were smaller than p, n would have been detected earlier as being divisible by q or by a prime factor of q.
A definite bound on the prime factors is possible. Suppose $P_{i}$ is the i-th prime, so that $P_{1}$ = 2, $P_{2}$ = 3, $P_{3}$ = 5, etc. Then the last prime number worth testing as a possible factor of n is $P_{i}$ where $P_{i+1}^{2}>n$; equality here would mean that $P_{i+1}$ is a factor. Thus, testing with 2, 3, and 5 suffices up to n = 48, not just 25, because the square of the next prime is 49, and below n = 25 just 2 and 3 are sufficient. Should the square root of n be an integer, then it is a factor and n is a perfect square.
An example of the trial division algorithm, using successive integers as trial factors, is as follows (in Python):
def trial_division(n: int) -> list[int]:
"""Return a list of the prime factors for a natural number."""
a = [] # Prepare an empty list.
f = 2 # The first possible factor.
while n > 1: # While n still has remaining factors...
if n % f == 0: # The remainder of n divided by f might be zero.
a.append(f) # If so, it divides n. Add f to the list.
n //= f # Divide that factor out of n.
else: # But if f is not a factor of n,
f += 1 # Add one to f and try again.
return a # Prime factors may be repeated: 12 factors to 2,2,3.
Or about twice as efficient, handling 2 separately and then testing only odd factors:
def trial_division(n: int) -> list[int]:
a = []
while n % 2 == 0:
a.append(2)
n //= 2
f = 3
while f * f <= n:
if n % f == 0:
a.append(f)
n //= f
else:
f += 2
    if n != 1:
        a.append(n)  # Any remaining n > 1 is itself prime (and odd, since the 2s were divided out)
return a
These versions of trial division are guaranteed to find a factor of n if there is one since they check all possible factors of n — and if n is a prime number, this means trial factors all the way up to n. Thus, if the algorithm finds one factor only, n, it is proof that n is a prime. If more than one factor is found, then n is a composite integer. A more computationally advantageous way of saying this is, if any prime whose square does not exceed n divides it without a remainder, then n is not prime.
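The same reasoning gives a trial-division primality test; a minimal sketch (the function name is illustrative):

```python
def is_prime(n: int) -> bool:
    """Primality by trial division: n > 1 is prime iff no prime whose
    square does not exceed n divides it."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:  # test only odd f with f*f <= n
        if n % f == 0:
            return False
        f += 2
    return True

print([p for p in range(30) if is_prime(p)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```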
Below is a version in C++ (without squaring f)
#include <vector>
using std::vector;

template <class T, class U>
vector<T> TrialDivision(U n)
{
vector<T> v; T f;
f = 2;
while (n % 2 == 0) { v.push_back(f); n /= 2; }
f = 3;
while (n % 3 == 0) { v.push_back(f); n /= 3; }
f = 5;
T ac = 9, temp = 16; // Invariant at the top of the loop: ac + temp == f*f
do {
ac += temp; // Assume addition does not cause overflow with U type
if (ac > n) break;
if (n % f == 0) {
v.push_back(f);
n /= f;
ac -= temp;
}
else {
f += 2;
temp += 8;
}
} while (1);
if (n != 1) v.push_back(n);
return v;
}
Speed
In the worst case, trial division is a laborious algorithm. For an n-digit number a in base 2, if it starts from two and works up only to the square root of a, the algorithm requires
$\pi (2^{n/2})\approx {2^{n/2} \over \left({\frac {n}{2}}\right)\ln 2}$
trial divisions, where $\scriptstyle \pi (x)$ denotes the prime-counting function, the number of primes less than x. This does not take into account the overhead of primality testing to obtain the prime numbers as candidate factors. A useful table need not be large: P(3512) = 32749, the last prime that fits into a sixteen-bit signed integer, and P(6542) = 65521 for unsigned sixteen-bit integers. That would suffice to test primality for numbers up to 65537² = 4,295,098,369. Preparing such a table (usually via the Sieve of Eratosthenes) would only be worthwhile if many numbers were to be tested. If instead a variant is used without primality testing, but simply dividing by every odd number less than the square root of the n-digit base-2 number a, prime or not, it can take up to about:
$2^{n/2}$
In both cases, the required time grows exponentially with the digits of the number.
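The sixteen-bit figures quoted above can be reproduced with a short Sieve of Eratosthenes (a sketch in plain Python; the function name is arbitrary):

```python
from math import isqrt

def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(65536)  # the unsigned sixteen-bit range
print(len(primes))            # 6542 primes fit in sixteen unsigned bits
print(primes[-1])             # 65521, the largest of them
print(primes[3511])           # 32749, the largest signed sixteen-bit prime
```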
Even so, this is a quite satisfactory method, considering that even the best-known algorithms have exponential time growth. For a chosen uniformly at random from integers of a given length, there is a 50% chance that 2 is a factor of a and a 33% chance that 3 is a factor of a, and so on. It can be shown that 88% of all positive integers have a factor under 100 and that 92% have a factor under 1000. Thus, when confronted by an arbitrarily large a, it is worthwhile to check for divisibility by the small primes, since for a = 1000, n is only 10 in base 2.
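The 88% and 92% figures are asymptotic densities, equal to 1 − ∏(1 − 1/p) over the primes p below the bound; a sketch computing them (the helper name is arbitrary):

```python
def small_factor_density(bound: int) -> float:
    """Asymptotic share of integers having a prime factor below `bound`:
    1 - product(1 - 1/p) over primes p < bound."""
    sieve = bytearray([1]) * bound
    sieve[0:2] = b"\x00\x00"
    product = 1.0
    for p in range(2, bound):
        if sieve[p]:
            product *= 1 - 1 / p
            sieve[p * p :: p] = bytearray(len(range(p * p, bound, p)))
    return 1 - product

print(round(small_factor_density(100), 2))   # 0.88
print(round(small_factor_density(1000), 2))  # 0.92
```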
However, many-digit numbers that do not have factors in the small primes can require days or months to factor with trial division. In such cases other methods are used, such as the quadratic sieve and the general number field sieve (GNFS). Because these methods also have superpolynomial time growth, a practical limit of n digits is reached very quickly. For this reason, in public key cryptography, values for a are chosen to have large prime factors of similar size so that they cannot be factored by any publicly known method in a useful time period on any available computer system or computer cluster such as supercomputers and computer grids. The largest cryptography-grade number that has been factored is RSA-250, a 250-digit number, using the GNFS and the resources of several supercomputers. The running time was 2700 core-years.
References
1. Mollin, Richard A. (2002). "A brief history of factoring and primality testing B. C. (before computers)". Mathematics Magazine. 75 (1): 18–29. doi:10.2307/3219180. JSTOR 3219180. MR 2107288.
• Childs, Lindsay N. (2009). A concrete introduction to higher algebra. Undergraduate Texts in Mathematics (3rd ed.). New York, NY: Springer-Verlag. ISBN 978-0-387-74527-5. Zbl 1165.00002.
• Crandall, Richard; Pomerance, Carl (2005). Prime numbers. A computational perspective (2nd ed.). New York, NY: Springer-Verlag. ISBN 0-387-25282-7. Zbl 1088.11001.
External links
• Wikiversity offers a lesson on prime factorization using trial division with Python.
• Fast JavaScript Prime Factor Calculator using trial division. Can handle numbers up to about 253
• Trial Division in Java, C and JavaScript (in Portuguese)
Number-theoretic algorithms
Primality tests
• AKS
• APR
• Baillie–PSW
• Elliptic curve
• Pocklington
• Fermat
• Lucas
• Lucas–Lehmer
• Lucas–Lehmer–Riesel
• Proth's theorem
• Pépin's
• Quadratic Frobenius
• Solovay–Strassen
• Miller–Rabin
Prime-generating
• Sieve of Atkin
• Sieve of Eratosthenes
• Sieve of Pritchard
• Sieve of Sundaram
• Wheel factorization
Integer factorization
• Continued fraction (CFRAC)
• Dixon's
• Lenstra elliptic curve (ECM)
• Euler's
• Pollard's rho
• p − 1
• p + 1
• Quadratic sieve (QS)
• General number field sieve (GNFS)
• Special number field sieve (SNFS)
• Rational sieve
• Fermat's
• Shanks's square forms
• Trial division
• Shor's
Multiplication
• Ancient Egyptian
• Long
• Karatsuba
• Toom–Cook
• Schönhage–Strassen
• Fürer's
Euclidean division
• Binary
• Chunking
• Fourier
• Goldschmidt
• Newton-Raphson
• Long
• Short
• SRT
Discrete logarithm
• Baby-step giant-step
• Pollard rho
• Pollard kangaroo
• Pohlig–Hellman
• Index calculus
• Function field sieve
Greatest common divisor
• Binary
• Euclidean
• Extended Euclidean
• Lehmer's
Modular square root
• Cipolla
• Pocklington's
• Tonelli–Shanks
• Berlekamp
• Kunerth
Other algorithms
• Chakravala
• Cornacchia
• Exponentiation by squaring
• Integer square root
• Integer relation (LLL; KZ)
• Modular exponentiation
• Montgomery reduction
• Schoof
• Trachtenberg system
• Italics indicate that algorithm is for numbers of special forms
| Wikipedia |
Triality
In mathematics, triality is a relationship among three vector spaces, analogous to the duality relation between dual vector spaces. Most commonly, it describes those special features of the Dynkin diagram D4 and the associated Lie group Spin(8), the double cover of 8-dimensional rotation group SO(8), arising because the group has an outer automorphism of order three. There is a geometrical version of triality, analogous to duality in projective geometry.
Of all simple Lie groups, Spin(8) has the most symmetrical Dynkin diagram, D4. The diagram has four nodes with one node located at the center, and the other three attached symmetrically. The symmetry group of the diagram is the symmetric group S3 which acts by permuting the three legs. This gives rise to an S3 group of outer automorphisms of Spin(8). This automorphism group permutes the three 8-dimensional irreducible representations of Spin(8); these being the vector representation and two chiral spin representations. These automorphisms do not project to automorphisms of SO(8). The vector representation—the natural action of SO(8) (hence Spin(8)) on F8—consists over the real numbers of Euclidean 8-vectors and is generally known as the "defining module", while the chiral spin representations are also known as "half-spin representations", and all three of these are fundamental representations.
No other connected Dynkin diagram has an automorphism group of order greater than 2; for other Dn (corresponding to other even Spin groups, Spin(2n)), there is still the automorphism corresponding to switching the two half-spin representations, but these are not isomorphic to the vector representation.
Roughly speaking, symmetries of the Dynkin diagram lead to automorphisms of the Tits building associated with the group. For special linear groups, one obtains projective duality. For Spin(8), one finds a curious phenomenon involving 1-, 2-, and 4-dimensional subspaces of 8-dimensional space, historically known as "geometric triality".
The exceptional 3-fold symmetry of the D4 diagram also gives rise to the Steinberg group 3D4.
General formulation
A duality between two vector spaces over a field F is a non-degenerate bilinear form
$V_{1}\times V_{2}\to F,$
i.e., for each non-zero vector v in one of the two vector spaces, the pairing with v is a non-zero linear functional on the other.
Similarly, a triality between three vector spaces over a field F is a non-degenerate trilinear form
$V_{1}\times V_{2}\times V_{3}\to F,$
i.e., each non-zero vector in one of the three vector spaces induces a duality between the other two.
By choosing vectors ei in each Vi on which the trilinear form evaluates to 1, we find that the three vector spaces are all isomorphic to each other, and to their duals. Denoting this common vector space by V, the triality may be re-expressed as a bilinear multiplication
$V\times V\to V$
where each ei corresponds to the identity element in V. The non-degeneracy condition now implies that V is a composition algebra. It follows that V has dimension 1, 2, 4 or 8. If further F = R and the form used to identify V with its dual is positive definite, then V is a Euclidean Hurwitz algebra, and is therefore isomorphic to R, C, H or O.
Conversely, composition algebras immediately give rise to trialities by taking each Vi equal to the algebra, and contracting the multiplication with the inner product on the algebra to make a trilinear form.
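As a concrete sketch in the quaternion case (plain Python; the helper names are ad hoc), contracting the multiplication with the inner product gives the trilinear form t(a, b, c) = ⟨ab, c⟩, and the composition property |ab|² = |a|²|b|² is what makes it non-degenerate:

```python
from math import isclose

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def inner(x, y):
    """Euclidean inner product on R^4."""
    return sum(xi * yi for xi, yi in zip(x, y))

def t(a, b, c):
    """Trilinear form of the triality: <ab, c>."""
    return inner(qmul(a, b), c)

a, b = (1.0, 2.0, -1.0, 0.5), (0.5, -1.0, 3.0, 2.0)

# Composition property |ab|^2 = |a|^2 |b|^2: for nonzero a the map
# b -> ab is injective, so c -> t(a, b, c) is a nonzero functional
# whenever b != 0, which is exactly non-degeneracy.
print(isclose(inner(qmul(a, b), qmul(a, b)), inner(a, a) * inner(b, b)))  # True
```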
An alternative construction of trialities uses spinors in dimensions 1, 2, 4 and 8. The eight-dimensional case corresponds to the triality property of Spin(8).
See also
• Triple product, may be related to the 4-dimensional triality (on quaternions)
References
• John Frank Adams (1981), Spin(8), Triality, F4 and all that, in "Superspace and supergravity", edited by Stephen Hawking and Martin Roček, Cambridge University Press, pages 435–445.
• John Frank Adams (1996), Lectures on Exceptional Lie Groups (Chicago Lectures in Mathematics), edited by Zafer Mahmud and Mamora Mimura, University of Chicago Press, ISBN 0-226-00527-5.
Further reading
• Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998). The book of involutions. Colloquium Publications. Vol. 44. With a preface by J. Tits. Providence, RI: American Mathematical Society. ISBN 0-8218-0904-0. Zbl 0955.16001.
• Wilson, Robert (2009). The Finite Simple Groups. Graduate Texts in Mathematics. Vol. 251. Springer-Verlag. ISBN 1-84800-987-9. Zbl 1203.20012.
External links
• Spinors and Trialities by John Baez
• Triality with Zometool by David Richter
| Wikipedia |