Rectangular function The rectangular function (also known as the rectangle function, rect function, Pi function, Heaviside Pi function,[1] gate function, unit pulse, or the normalized boxcar function) is defined as[2] $\operatorname {rect} \left({\frac {t}{a}}\right)=\Pi \left({\frac {t}{a}}\right)=\left\{{\begin{array}{rl}0,&{\text{if }}|t|>{\frac {a}{2}}\\{\frac {1}{2}},&{\text{if }}|t|={\frac {a}{2}}\\1,&{\text{if }}|t|<{\frac {a}{2}}.\end{array}}\right.$ Alternative definitions of the function define $ \operatorname {rect} \left(\pm {\frac {1}{2}}\right)$ to be 0,[3] 1,[4][5] or undefined. Its periodic version is called a rectangular wave. History The rect function was introduced by Woodward[6][7] as an ideal cutout operator, together with the sinc function[8][9] as an ideal interpolation operator, and their counter operations, which are sampling (comb operator) and replicating (rep operator), respectively. Relation to the boxcar function The rectangular function is a special case of the more general boxcar function: $\operatorname {rect} \left({\frac {t-X}{Y}}\right)=H(t-(X-Y/2))-H(t-(X+Y/2))=H(t-X+Y/2)-H(t-X-Y/2)$ where $H(x)$ is the Heaviside step function; the function is centered at $X$ and has duration $Y$, from $X-Y/2$ to $X+Y/2.$ Fourier transform of the rectangular function The unitary Fourier transforms of the rectangular function are[2] $\int _{-\infty }^{\infty }\operatorname {rect} (t)\cdot e^{-i2\pi ft}\,dt={\frac {\sin(\pi f)}{\pi f}}=\operatorname {sinc} (f),$ using ordinary frequency f, where $\mathrm {sinc} $ is the normalized form of the sinc function and ${\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }\operatorname {rect} (t)\cdot e^{-i\omega t}\,dt={\frac {1}{\sqrt {2\pi }}}\cdot {\frac {\sin \left(\omega /2\right)}{\omega /2}}={\frac {1}{\sqrt {2\pi }}}\operatorname {sinc} \left(\omega /2\right),$ using angular frequency $\omega $, where $\mathrm {sinc} $ is the unnormalized form of the sinc function. For $\mathrm {rect} (t/a)$, its Fourier transform is $\int _{-\infty }^{\infty }\operatorname {rect} \left({\frac {t}{a}}\right)\cdot e^{-i2\pi ft}\,dt=a{\frac {\sin(\pi af)}{\pi af}}=a\ \mathrm {sinc} {(af)}.$ As long as the definition of the pulse function is motivated only by its behavior in the time domain, there is no reason to expect the oscillatory interpretation (i.e. the Fourier transform) to be intuitive. Some aspects of the result can nevertheless be understood directly: finite extent in the time domain corresponds to a frequency response of infinite extent, and, vice versa, a Fourier transform of finite extent corresponds to an infinite time-domain response. Relation to the triangular function We can define the triangular function as the convolution of two rectangular functions: $\mathrm {tri} =\mathrm {rect} *\mathrm {rect} .\,$ Use in probability Main article: Uniform distribution (continuous) Viewing the rectangular function as a probability density function, it is a special case of the continuous uniform distribution with $a=-1/2,b=1/2.$ The characteristic function is $\varphi (k)={\frac {\sin(k/2)}{k/2}},$ and its moment-generating function is $M(k)={\frac {\sinh(k/2)}{k/2}},$ where $\sinh(t)$ is the hyperbolic sine function.
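As a quick numerical sanity check of the transform pair derived above, the Fourier integral of rect(t/a) can be evaluated on a grid and compared with a·sinc(af). This is only a sketch; the grid resolution and the width a = 2.0 are arbitrary choices.

```python
import numpy as np

def rect(t):
    # Normalized rectangular function, with the convention rect(+-1/2) = 1/2.
    return np.where(np.abs(t) < 0.5, 1.0, np.where(np.abs(t) == 0.5, 0.5, 0.0))

a = 2.0                                   # arbitrary pulse width
t = np.linspace(-4.0, 4.0, 200001)        # grid comfortably covering [-a/2, a/2]
dt = t[1] - t[0]
for f in (0.1, 0.25, 0.7):
    # Direct Riemann-sum evaluation of the Fourier integral of rect(t/a).
    numeric = np.sum(rect(t / a) * np.exp(-2j * np.pi * f * t)) * dt
    exact = a * np.sinc(a * f)            # np.sinc is the normalized sinc
    print(f, abs(numeric - exact))        # differences at discretization-error level
```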
Rational approximation The pulse function may also be expressed as a limit of a rational function: $\Pi (t)=\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}.$ Demonstration of validity First, we consider the case where $ |t|<{\frac {1}{2}}.$ Notice that the term $ (2t)^{2n}$ is always positive for integer $n.$ However, $|2t|<1$ and hence $ (2t)^{2n}$ approaches zero for large $n.$ It follows that: $\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}={\frac {1}{0+1}}=1,|t|<{\tfrac {1}{2}}.$ Second, we consider the case where $ |t|>{\frac {1}{2}}.$ Notice that the term $ (2t)^{2n}$ is always positive for integer $n.$ However, $|2t|>1$ and hence $ (2t)^{2n}$ grows very large for large $n.$ It follows that: $\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}={\frac {1}{+\infty +1}}=0,|t|>{\tfrac {1}{2}}.$ Third, we consider the case where $ |t|={\frac {1}{2}}.$ We may simply substitute in our equation: $\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}=\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{1^{2n}+1}}={\frac {1}{1+1}}={\tfrac {1}{2}}.$ We see that the limit satisfies the definition of the pulse function. Therefore, $\mathrm {rect} (t)=\Pi (t)=\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}={\begin{cases}0&{\mbox{if }}|t|>{\frac {1}{2}}\\{\frac {1}{2}}&{\mbox{if }}|t|={\frac {1}{2}}\\1&{\mbox{if }}|t|<{\frac {1}{2}}.\\\end{cases}}$ Dirac delta function The rectangle function can be used to represent the Dirac delta function $\delta (x)$.[10] Specifically, $\delta (x)=\lim _{a\to 0}{\frac {1}{a}}\mathrm {rect} \left({\frac {x}{a}}\right).$ For a function $g(x)$, its average over the width $a$ around 0 in the function domain is calculated as, $g_{avg}(0)={\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx\ g(x)\mathrm {rect} \left({\frac {x}{a}}\right).$ To obtain $g(0)$, the following limit is applied, $g(0)=\lim _{a\to 0}{\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx\ g(x)\mathrm {rect} \left({\frac {x}{a}}\right)$ and this can be written in terms of the Dirac delta function as, $g(0)=\int \limits _{-\infty }^{\infty }dx\ g(x)\delta (x).$ The Fourier transform of the Dirac delta function $\delta (t)$ is $\delta (f)=\int _{-\infty }^{\infty }\delta (t)\cdot e^{-i2\pi ft}\,dt=\lim _{a\to 0}{\frac {1}{a}}\int _{-\infty }^{\infty }\mathrm {rect} \left({\frac {t}{a}}\right)\cdot e^{-i2\pi ft}\,dt=\lim _{a\to 0}\mathrm {sinc} {(af)},$ where the sinc function here is the normalized sinc function. Because the first zero of the sinc function is at $f=1/a$ and $a$ goes to zero, this first zero recedes to infinity, so the Fourier transform of $\delta (t)$ is $\delta (f)=1,$ which means that the frequency spectrum of the Dirac delta function is infinitely broad. As a pulse is shortened in time, its spectrum becomes broader.
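The three regimes of this rational approximation can be observed directly. A minimal sketch; the sample points and exponents are arbitrary:

```python
def pulse_approx(t, n):
    # Rational approximation 1/((2t)^(2n) + 1) of the pulse function.
    return 1.0 / ((2.0 * t) ** (2 * n) + 1.0)

for t in (0.2, 0.5, 0.8):                 # |t| < 1/2, |t| = 1/2, |t| > 1/2
    print(t, [round(pulse_approx(t, n), 6) for n in (1, 5, 50)])
# t = 0.2 tends to 1, t = 0.5 is exactly 1/2 for every n, t = 0.8 tends to 0.
```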
"The Theory and Design of Chirp Radars". Bell System Technical Journal. 39 (4): 745–808. doi:10.1002/j.1538-7305.1960.tb03942.x. 7. Woodward, Philipp M (1953). Probability and Information Theory, with Applications to Radar. Pergamon Press. p. 29. 8. Higgins, John Rowland (1996). Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford University Press Inc. p. 4. ISBN 0198596995. 9. Zayed, Ahmed I (1996). Handbook of Function and Generalized Function Transformations. CRC Press. p. 507. ISBN 9780849380761. 10. Khare, Kedar; Butola, Mansi; Rajora, Sunaina (2023). "Chapter 2.4 Sampling by Averaging, Distributions and Delta Function". Fourier Optics and Computational Imaging (2nd ed.). Springer. pp. 15–16. doi:10.1007/978-3-031-18353-9. ISBN 978-3-031-18353-9.
Hyperbola In mathematics, a hyperbola (/haɪˈpɜːrbələ/; pl. hyperbolas or hyperbolae /-liː/; adj. hyperbolic /ˌhaɪpərˈbɒlɪk/) is a type of smooth curve lying in a plane, defined by its geometric properties or by equations for which it is the solution set. A hyperbola has two pieces, called connected components or branches, that are mirror images of each other and resemble two infinite bows. The hyperbola is one of the three kinds of conic section, formed by the intersection of a plane and a double cone. (The other conic sections are the parabola and the ellipse. A circle is a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola. Besides being a conic section, a hyperbola can arise as the locus of points whose difference of distances to two fixed foci is constant, as a curve for each point of which the rays to two fixed foci are reflections across the tangent line at that point, or as the solution of certain bivariate quadratic equations such as the reciprocal relationship $xy=1.$[1] In practical applications, a hyperbola can arise as the path followed by the shadow of the tip of a sundial's gnomon, the shape of an open orbit such as that of a celestial object exceeding the escape velocity of the nearest gravitational body, or the scattering trajectory of a subatomic particle, among others. Each branch of the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola. Diagonally opposite arms, one from each branch, tend in the limit to a common line, called the asymptote of those two arms. So there are two asymptotes, whose intersection is at the center of symmetry of the hyperbola, which can be thought of as the mirror point about which each branch reflects to form the other branch. In the case of the curve $y(x)=1/x$ the asymptotes are the two coordinate axes.[2] Hyperbolas share many of the ellipses' analytical properties such as eccentricity, focus, and directrix. Typically the correspondence can be made with nothing more than a change of sign in some term. Many other mathematical objects have their origin in the hyperbola, such as hyperbolic paraboloids (saddle surfaces), hyperboloids ("wastebaskets"), hyperbolic geometry (Lobachevsky's celebrated non-Euclidean geometry), hyperbolic functions (sinh, cosh, tanh, etc.), and gyrovector spaces (a geometry proposed for use in both relativity and quantum mechanics which is not Euclidean). Etymology and history The word "hyperbola" derives from the Greek ὑπερβολή, meaning "over-thrown" or "excessive", from which the English term hyperbole also derives. Hyperbolae were discovered by Menaechmus in his investigations of the problem of doubling the cube, but were then called sections of obtuse cones.[3] The term hyperbola is believed to have been coined by Apollonius of Perga (c. 262–c. 190 BC) in his definitive work on the conic sections, the Conics.[4] The names of the other two general conic sections, the ellipse and the parabola, derive from the corresponding Greek words for "deficient" and "applied"; all three names are borrowed from earlier Pythagorean terminology which referred to a comparison of the side of rectangles of fixed area with a given line segment.
The rectangle could be "applied" to the segment (meaning, have an equal length), be shorter than the segment or exceed the segment.[5] Definitions As locus of points A hyperbola can be defined geometrically as a set of points (locus of points) in the Euclidean plane: A hyperbola is a set of points, such that for any point $P$ of the set, the absolute difference of the distances $|PF_{1}|,\,|PF_{2}|$ to two fixed points $F_{1},F_{2}$ (the foci) is constant, usually denoted by $2a,\,a>0$:[6] $H=\left\{P:\left|\left|PF_{2}\right|-\left|PF_{1}\right|\right|=2a\right\}.$ The midpoint $M$ of the line segment joining the foci is called the center of the hyperbola.[7] The line through the foci is called the major axis. It contains the vertices $V_{1},V_{2}$, which have distance $a$ to the center. The distance $c$ of the foci to the center is called the focal distance or linear eccentricity. The quotient ${\tfrac {c}{a}}$ is the eccentricity $e$. The equation $\left|\left|PF_{2}\right|-\left|PF_{1}\right|\right|=2a$ can be viewed in a different way (see diagram): If $c_{2}$ is the circle with midpoint $F_{2}$ and radius $2a$, then the distance of a point $P$ of the right branch to the circle $c_{2}$ equals the distance to the focus $F_{1}$: $|PF_{1}|=|Pc_{2}|.$ $c_{2}$ is called the circular directrix (related to focus $F_{2}$) of the hyperbola.[8][9] In order to get the left branch of the hyperbola, one has to use the circular directrix related to $F_{1}$. This property should not be confused with the definition of a hyperbola with help of a directrix (line) below. Hyperbola with equation y = A/x If the xy-coordinate system is rotated about the origin by the angle $+45^{\circ }$ and new coordinates $\xi ,\eta $ are assigned, then $x={\tfrac {\xi +\eta }{\sqrt {2}}},\;y={\tfrac {-\xi +\eta }{\sqrt {2}}}$. The rectangular hyperbola ${\tfrac {x^{2}-y^{2}}{a^{2}}}=1$ (whose semi-axes are equal) has the new equation ${\tfrac {2\xi \eta }{a^{2}}}=1$. Solving for $\eta $ yields $\eta ={\tfrac {a^{2}/2}{\xi }}\ .$ Thus, in an xy-coordinate system the graph of a function $f:x\mapsto {\tfrac {A}{x}},\;A>0\;,$ with equation $y={\frac {A}{x}}\;,A>0\;,$ is a rectangular hyperbola entirely in the first and third quadrants with • the coordinate axes as asymptotes, • the line $y=x$ as major axis , • the center $(0,0)$ and the semi-axis $a=b={\sqrt {2A}}\;,$ • the vertices $\left({\sqrt {A}},{\sqrt {A}}\right),\left(-{\sqrt {A}},-{\sqrt {A}}\right)\;,$ • the semi-latus rectum and radius of curvature at the vertices $p=a={\sqrt {2A}}\;,$ • the linear eccentricity $c=2{\sqrt {A}}$ and the eccentricity $e={\sqrt {2}}\;,$ • the tangent $y=-{\tfrac {A}{x_{0}^{2}}}x+2{\tfrac {A}{x_{0}}}$ at point $(x_{0},A/x_{0})\;.$ A rotation of the original hyperbola by $-45^{\circ }$ results in a rectangular hyperbola entirely in the second and fourth quadrants, with the same asymptotes, center, semi-latus rectum, radius of curvature at the vertices, linear eccentricity, and eccentricity as for the case of $+45^{\circ }$ rotation, with equation $y=-{\frac {A}{x}}\;,~~A>0\;,$ • the semi-axes $a=b={\sqrt {2A}}\;,$ • the line $y=-x$ as major axis, • the vertices $\left(-{\sqrt {A}},{\sqrt {A}}\right),\left({\sqrt {A}},-{\sqrt {A}}\right)\;.$ Shifting the hyperbola with equation $y={\frac {A}{x}},\ A\neq 0\ ,$ so that the new center is $(c_{0},d_{0})$, yields the new equation $y={\frac {A}{x-c_{0}}}+d_{0}\;,$ and the new asymptotes are $x=c_{0}$ and $y=d_{0}$. The shape parameters $a,b,p,c,e$ remain unchanged. 
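The locus definition and the circular-directrix property above are easy to verify numerically for the canonical hyperbola $x^{2}/a^{2}-y^{2}/b^{2}=1$ with foci $(\pm c,0)$. A small sketch with arbitrary semi-axes:

```python
import math

a, b = 3.0, 2.0                          # arbitrary semi-axes
c = math.hypot(a, b)                     # linear eccentricity c = sqrt(a^2 + b^2)
F1, F2 = (c, 0.0), (-c, 0.0)

for t in (-2.0, -0.3, 0.0, 1.5):         # points on the right branch
    P = (a * math.cosh(t), b * math.sinh(t))
    d1, d2 = math.dist(P, F1), math.dist(P, F2)
    print(abs(d2 - d1))                  # always 2a = 6.0 (locus definition)
    # circular directrix: the distance to the circle of radius 2a around F2
    # equals the distance to the focus F1
    print((d2 - 2 * a) - d1)             # ~0
```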
By the directrix property The two lines at distance $ d={\frac {a^{2}}{c}}$ from the center and parallel to the minor axis are called directrices of the hyperbola (see diagram). For an arbitrary point $P$ of the hyperbola the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity: ${\frac {|PF_{1}|}{|Pl_{1}|}}={\frac {|PF_{2}|}{|Pl_{2}|}}=e={\frac {c}{a}}\,.$ The proof for the pair $F_{1},l_{1}$ follows from the fact that $|PF_{1}|^{2}=(x-c)^{2}+y^{2},\ |Pl_{1}|^{2}=\left(x-{\tfrac {a^{2}}{c}}\right)^{2}$ and $y^{2}={\tfrac {b^{2}}{a^{2}}}x^{2}-b^{2}$ satisfy the equation $|PF_{1}|^{2}-{\frac {c^{2}}{a^{2}}}|Pl_{1}|^{2}=0\ .$ The second case is proven analogously. The inverse statement is also true and can be used to define a hyperbola (in a manner similar to the definition of a parabola): For any point $F$ (focus), any line $l$ (directrix) not through $F$ and any real number $e$ with $e>1$ the set of points (locus of points), for which the quotient of the distances to the point and to the line is $e$ $H=\left\{P\,{\Biggr |}\,{\frac {|PF|}{|Pl|}}=e\right\}$ is a hyperbola. (The choice $e=1$ yields a parabola and if $e<1$ an ellipse.) Proof Let $F=(f,0),\ e>0$ and assume $(0,0)$ is a point on the curve. The directrix $l$ has equation $x=-{\tfrac {f}{e}}$. With $P=(x,y)$, the relation $|PF|^{2}=e^{2}|Pl|^{2}$ produces the equations $(x-f)^{2}+y^{2}=e^{2}\left(x+{\tfrac {f}{e}}\right)^{2}=(ex+f)^{2}$ and $x^{2}(e^{2}-1)+2xf(1+e)-y^{2}=0.$ The substitution $p=f(1+e)$ yields $x^{2}(e^{2}-1)+2px-y^{2}=0.$ This is the equation of an ellipse ($e<1$) or a parabola ($e=1$) or a hyperbola ($e>1$). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram). If $e>1$, introduce new parameters $a,b$ so that $e^{2}-1={\tfrac {b^{2}}{a^{2}}},{\text{ and }}\ p={\tfrac {b^{2}}{a}}$, and then the equation above becomes ${\frac {(x+a)^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1\,,$ which is the equation of a hyperbola with center $(-a,0)$, the x-axis as major axis and the major/minor semi axis $a,b$. Construction of a directrix Because of $c\cdot {\tfrac {a^{2}}{c}}=a^{2}$ point $L_{1}$ of directrix $l_{1}$ (see diagram) and focus $F_{1}$ are inverse with respect to the circle inversion at circle $x^{2}+y^{2}=a^{2}$ (in diagram green). Hence point $E_{1}$ can be constructed using the theorem of Thales (not shown in the diagram). The directrix $l_{1}$ is the perpendicular to line ${\overline {F_{1}F_{2}}}$ through point $E_{1}$. Alternative construction of $E_{1}$: Calculation shows, that point $E_{1}$ is the intersection of the asymptote with its perpendicular through $F_{1}$ (see diagram). As plane section of a cone The intersection of an upright double cone by a plane not through the vertex with slope greater than the slope of the lines on the cone is a hyperbola (see diagram: red curve). In order to prove the defining property of a hyperbola (see above) one uses two Dandelin spheres $d_{1},d_{2}$, which are spheres that touch the cone along circles $c_{1}$, $c_{2}$ and the intersecting (hyperbola) plane at points $F_{1}$ and $F_{2}$. It turns out: $F_{1},F_{2}$ are the foci of the hyperbola. 1. Let $P$ be an arbitrary point of the intersection curve . 2. The generatrix of the cone containing $P$ intersects circle $c_{1}$ at point $A$ and circle $c_{2}$ at a point $B$. 3. The line segments ${\overline {PF_{1}}}$ and ${\overline {PA}}$ are tangential to the sphere $d_{1}$ and, hence, are of equal length. 4. 
The line segments ${\overline {PF_{2}}}$ and ${\overline {PB}}$ are tangential to the sphere $d_{2}$ and, hence, are of equal length. 5. The result is: $|PF_{1}|-|PF_{2}|=|PA|-|PB|=|AB|$ is independent of the hyperbola point $P$, because no matter where point $P$ is, $A,B$ have to be on circles $c_{1}$, $c_{2}$, and line segment $AB$ has to cross the apex. Therefore, as point $P$ moves along the red curve (hyperbola), line segment ${\overline {AB}}$ simply rotates about the apex without changing its length. Pin and string construction The definition of a hyperbola by its foci and its circular directrices (see above) can be used for drawing an arc of it with help of pins, a string and a ruler:[10] 1. Choose the foci $F_{1},F_{2}$, the vertices $V_{1},V_{2}$ and one of the circular directrices, for example $c_{2}$ (circle with radius $2a$) 2. A ruler is fixed at point $F_{2}$, free to rotate around $F_{2}$. Point $B$ is marked at distance $2a$. 3. A string with length $|AB|$ is prepared. 4. One end of the string is pinned at point $A$ on the ruler, the other end is pinned to point $F_{1}$. 5. Take a pen and hold the string tight to the edge of the ruler. 6. Rotating the ruler around $F_{2}$ prompts the pen to draw an arc of the right branch of the hyperbola, because of $|PF_{1}|=|PB|$ (see the definition of a hyperbola by circular directrices). Steiner generation of a hyperbola The following method to construct single points of a hyperbola relies on the Steiner generation of a non-degenerate conic section: Given two pencils $B(U),B(V)$ of lines at two points $U,V$ (all lines containing $U$ and $V$, respectively) and a projective but not perspective mapping $\pi $ of $B(U)$ onto $B(V)$, the intersection points of corresponding lines form a non-degenerate projective conic section. For the generation of points of the hyperbola ${\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1$ one uses the pencils at the vertices $V_{1},V_{2}$. Let $P=(x_{0},y_{0})$ be a point of the hyperbola and $A=(a,y_{0}),B=(x_{0},0)$. The line segment ${\overline {BP}}$ is divided into n equally-spaced segments and this division is projected parallel with the diagonal $AB$ as direction onto the line segment ${\overline {AP}}$ (see diagram). The parallel projection is part of the needed projective mapping between the pencils at $V_{1}$ and $V_{2}$. The intersection points of any two related lines $V_{1}A_{i}$ and $V_{2}B_{i}$ are points of the uniquely defined hyperbola. Remarks: • The subdivision could be extended beyond the points $A$ and $B$ in order to get more points, but the determination of the intersection points would become more inaccurate. A better idea is extending the points already constructed by symmetry (see animation). • The Steiner generation exists for ellipses and parabolas, too. • The Steiner generation is sometimes called a parallelogram method because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle. Inscribed angles for hyperbolas y = a/(x − b) + c and the 3-point-form A hyperbola with equation $y={\tfrac {a}{x-b}}+c,\ a\neq 0$ is uniquely determined by three points $(x_{1},y_{1}),\;(x_{2},y_{2}),\;(x_{3},y_{3})$ with different x- and y-coordinates.
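Since $y=a/(x-b)+c$ is equivalent to $xy=cx+by+(a-bc)$, which is linear in the unknowns $c$, $b$ and $a-bc$, three such points determine the parameters by one linear solve. A sketch with made-up sample points:

```python
import numpy as np

pts = [(1.0, 3.0), (2.0, 2.0), (4.0, 1.5)]      # arbitrary sample points
A = np.array([[x, y, 1.0] for x, y in pts])
rhs = np.array([x * y for x, y in pts])
c, b, k = np.linalg.solve(A, rhs)                # k = a - b*c
a = k + b * c
print(a, b, c)                                    # here: a = 2, b = 0, c = 1
for x, y in pts:
    print(y, a / (x - b) + c)                     # each point satisfies the equation
```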
A simple way to determine the shape parameters $a,b,c$ uses the inscribed angle theorem for hyperbolas: In order to measure an angle between two lines with equations $y=m_{1}x+d_{1},\ y=m_{2}x+d_{2}\ ,m_{1},m_{2}\neq 0$ in this context one uses the quotient ${\frac {m_{1}}{m_{2}}}\ .$ Analogous to the inscribed angle theorem for circles one gets the Inscribed angle theorem for hyperbolas[11][12] — For four points $P_{i}=(x_{i},y_{i}),\ i=1,2,3,4,\ x_{i}\neq x_{k},y_{i}\neq y_{k},i\neq k$ (see diagram) the following statement is true: The four points are on a hyperbola with equation $y={\tfrac {a}{x-b}}+c$ if and only if the angles at $P_{3}$ and $P_{4}$ are equal in the sense of the measurement above, that is, if ${\frac {(y_{4}-y_{1})}{(x_{4}-x_{1})}}{\frac {(x_{4}-x_{2})}{(y_{4}-y_{2})}}={\frac {(y_{3}-y_{1})}{(x_{3}-x_{1})}}{\frac {(x_{3}-x_{2})}{(y_{3}-y_{2})}}.$ The proof can be derived by straightforward calculation. If the points are on a hyperbola, one can assume the hyperbola's equation is $y=a/x$. A consequence of the inscribed angle theorem for hyperbolas is the 3-point-form of a hyperbola's equation — The equation of the hyperbola determined by 3 points $P_{i}=(x_{i},y_{i}),\ i=1,2,3,\ x_{i}\neq x_{k},y_{i}\neq y_{k},i\neq k$ is the solution of the equation ${\frac {({\color {red}y}-y_{1})}{({\color {green}x}-x_{1})}}{\frac {({\color {green}x}-x_{2})}{({\color {red}y}-y_{2})}}={\frac {(y_{3}-y_{1})}{(x_{3}-x_{1})}}{\frac {(x_{3}-x_{2})}{(y_{3}-y_{2})}}$ for ${\color {red}y}$. As an affine image of the unit hyperbola x2 − y2 = 1 Another definition of a hyperbola uses affine transformations: Any hyperbola is the affine image of the unit hyperbola with equation $x^{2}-y^{2}=1$. Parametric representation An affine transformation of the Euclidean plane has the form ${\vec {x}}\to {\vec {f}}_{0}+A{\vec {x}}$, where $A$ is a regular matrix (its determinant is not 0) and ${\vec {f}}_{0}$ is an arbitrary vector. If ${\vec {f}}_{1},{\vec {f}}_{2}$ are the column vectors of the matrix $A$, the unit hyperbola $(\pm \cosh(t),\sinh(t)),t\in \mathbb {R} ,$ is mapped onto the hyperbola ${\vec {x}}={\vec {p}}(t)={\vec {f}}_{0}\pm {\vec {f}}_{1}\cosh t+{\vec {f}}_{2}\sinh t\ .$ ${\vec {f}}_{0}$ is the center, ${\vec {f}}_{0}+{\vec {f}}_{1}$ a point of the hyperbola and ${\vec {f}}_{2}$ a tangent vector at this point. Vertices In general the vectors ${\vec {f}}_{1},{\vec {f}}_{2}$ are not perpendicular. That means, in general ${\vec {f}}_{0}\pm {\vec {f}}_{1}$ are not the vertices of the hyperbola. But ${\vec {f}}_{1}\pm {\vec {f}}_{2}$ point into the directions of the asymptotes. The tangent vector at point ${\vec {p}}(t)$ is ${\vec {p}}'(t)={\vec {f}}_{1}\sinh t+{\vec {f}}_{2}\cosh t\ .$ Because at a vertex the tangent is perpendicular to the major axis of the hyperbola, one gets the parameter $t_{0}$ of a vertex from the equation ${\vec {p}}'(t)\cdot \left({\vec {p}}(t)-{\vec {f}}_{0}\right)=\left({\vec {f}}_{1}\sinh t+{\vec {f}}_{2}\cosh t\right)\cdot \left({\vec {f}}_{1}\cosh t+{\vec {f}}_{2}\sinh t\right)=0$ and hence from $\coth(2t_{0})=-{\tfrac {{\vec {f}}_{1}^{\,2}+{\vec {f}}_{2}^{\,2}}{2{\vec {f}}_{1}\cdot {\vec {f}}_{2}}}\ ,$ which yields $t_{0}={\tfrac {1}{4}}\ln {\tfrac {\left({\vec {f}}_{1}-{\vec {f}}_{2}\right)^{2}}{\left({\vec {f}}_{1}+{\vec {f}}_{2}\right)^{2}}}.$ The formulae $\cosh ^{2}x+\sinh ^{2}x=\cosh 2x$, $2\sinh x\cosh x=\sinh 2x$, and $\operatorname {arcoth} x={\tfrac {1}{2}}\ln {\tfrac {x+1}{x-1}}$ were used.
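The vertex parameter just derived can be checked numerically: for arbitrary, non-parallel vectors ${\vec {f}}_{1},{\vec {f}}_{2}$ the tangent at $t_{0}$ is perpendicular to the line joining the point to the center. A sketch:

```python
import numpy as np

f1 = np.array([2.0, 0.5])                # arbitrary non-parallel vectors
f2 = np.array([0.3, 1.5])

t0 = 0.25 * np.log(((f1 - f2) @ (f1 - f2)) / ((f1 + f2) @ (f1 + f2)))
r  = f1 * np.cosh(t0) + f2 * np.sinh(t0)     # p(t0) - f0
dr = f1 * np.sinh(t0) + f2 * np.cosh(t0)     # tangent vector p'(t0)
print(r @ dr)                                 # ~0: tangent perpendicular at the vertex
```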
The two vertices of the hyperbola are ${\vec {f}}_{0}\pm \left({\vec {f}}_{1}\cosh t_{0}+{\vec {f}}_{2}\sinh t_{0}\right).$ Implicit representation Solving the parametric representation for $\cosh t,\sinh t$ by Cramer's rule and using $\;\cosh ^{2}t-\sinh ^{2}t-1=0\;$, one gets the implicit representation $\det \left({\vec {x}}\!-\!{\vec {f}}\!_{0},{\vec {f}}\!_{2}\right)^{2}-\det \left({\vec {f}}\!_{1},{\vec {x}}\!-\!{\vec {f}}\!_{0}\right)^{2}-\det \left({\vec {f}}\!_{1},{\vec {f}}\!_{2}\right)^{2}=0.$ Hyperbola in space The definition of a hyperbola in this section gives a parametric representation of an arbitrary hyperbola, even in space, if one allows ${\vec {f}}\!_{0},{\vec {f}}\!_{1},{\vec {f}}\!_{2}$ to be vectors in space. As an affine image of the hyperbola y = 1/x Because the unit hyperbola $x^{2}-y^{2}=1$ is affinely equivalent to the hyperbola $y=1/x$, an arbitrary hyperbola can be considered as the affine image (see previous section) of the hyperbola $y=1/x\,$: ${\vec {x}}={\vec {p}}(t)={\vec {f}}_{0}+{\vec {f}}_{1}t+{\vec {f}}_{2}{\tfrac {1}{t}},\quad t\neq 0\,.$ $M:{\vec {f}}_{0}$ is the center of the hyperbola, the vectors ${\vec {f}}_{1},{\vec {f}}_{2}$ have the directions of the asymptotes and ${\vec {f}}_{1}+{\vec {f}}_{2}$ is a point of the hyperbola. The tangent vector is ${\vec {p}}'(t)={\vec {f}}_{1}-{\vec {f}}_{2}{\tfrac {1}{t^{2}}}.$ At a vertex the tangent is perpendicular to the major axis. Hence ${\vec {p}}'(t)\cdot \left({\vec {p}}(t)-{\vec {f}}_{0}\right)=\left({\vec {f}}_{1}-{\vec {f}}_{2}{\tfrac {1}{t^{2}}}\right)\cdot \left({\vec {f}}_{1}t+{\vec {f}}_{2}{\tfrac {1}{t}}\right)={\vec {f}}_{1}^{2}t-{\vec {f}}_{2}^{2}{\tfrac {1}{t^{3}}}=0$ and the parameter of a vertex is $t_{0}=\pm {\sqrt[{4}]{\frac {{\vec {f}}_{2}^{2}}{{\vec {f}}_{1}^{2}}}}.$ $\left|{\vec {f}}\!_{1}\right|=\left|{\vec {f}}\!_{2}\right|$ is equivalent to $t_{0}=\pm 1$ and ${\vec {f}}_{0}\pm ({\vec {f}}_{1}+{\vec {f}}_{2})$ are the vertices of the hyperbola. The following properties of a hyperbola are easily proven using the representation of a hyperbola introduced in this section. Tangent construction The tangent vector can be rewritten by factorization: ${\vec {p}}'(t)={\tfrac {1}{t}}\left({\vec {f}}_{1}t-{\vec {f}}_{2}{\tfrac {1}{t}}\right)\ .$ This means that the diagonal $AB$ of the parallelogram $M:\ {\vec {f}}_{0},\ A={\vec {f}}_{0}+{\vec {f}}_{1}t,\ B:\ {\vec {f}}_{0}+{\vec {f}}_{2}{\tfrac {1}{t}},\ P:\ {\vec {f}}_{0}+{\vec {f}}_{1}t+{\vec {f}}_{2}{\tfrac {1}{t}}$ is parallel to the tangent at the hyperbola point $P$ (see diagram). This property provides a way to construct the tangent at a point on the hyperbola. This property of a hyperbola is an affine version of the 3-point-degeneration of Pascal's theorem.[13] Area of the grey parallelogram The area of the grey parallelogram $MAPB$ in the above diagram is ${\text{Area}}=\left|\det \left(t{\vec {f}}_{1},{\tfrac {1}{t}}{\vec {f}}_{2}\right)\right|=\left|\det \left({\vec {f}}_{1},{\vec {f}}_{2}\right)\right|=\cdots ={\frac {a^{2}+b^{2}}{4}}$ and hence independent of point $P$. 
The last equation follows from a calculation for the case where $P$ is a vertex and the hyperbola is in its canonical form ${\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1\,.$ Point construction For a hyperbola with parametric representation ${\vec {x}}={\vec {p}}(t)={\vec {f}}_{1}t+{\vec {f}}_{2}{\tfrac {1}{t}}$ (for simplicity the center is the origin) the following is true: For any two points $P_{1}:\ {\vec {f}}_{1}t_{1}+{\vec {f}}_{2}{\tfrac {1}{t_{1}}},\ P_{2}:\ {\vec {f}}_{1}t_{2}+{\vec {f}}_{2}{\tfrac {1}{t_{2}}}$ the points $A:\ {\vec {a}}={\vec {f}}_{1}t_{1}+{\vec {f}}_{2}{\tfrac {1}{t_{2}}},\ B:\ {\vec {b}}={\vec {f}}_{1}t_{2}+{\vec {f}}_{2}{\tfrac {1}{t_{1}}}$ are collinear with the center of the hyperbola (see diagram). The simple proof is a consequence of the equation ${\tfrac {1}{t_{1}}}{\vec {a}}={\tfrac {1}{t_{2}}}{\vec {b}}$. This property makes it possible to construct points of a hyperbola if the asymptotes and one point are given. This property of a hyperbola is an affine version of the 4-point-degeneration of Pascal's theorem.[14] Tangent–asymptotes triangle For simplicity the center of the hyperbola may be the origin and the vectors ${\vec {f}}_{1},{\vec {f}}_{2}$ have equal length. If the last assumption is not fulfilled one can first apply a parameter transformation (see above) in order to make the assumption true. Hence $\pm ({\vec {f}}_{1}+{\vec {f}}_{2})$ are the vertices, $\pm ({\vec {f}}_{1}-{\vec {f}}_{2})$ span the minor axis and one gets $|{\vec {f}}_{1}+{\vec {f}}_{2}|=a$ and $|{\vec {f}}_{1}-{\vec {f}}_{2}|=b$. For the intersection points of the tangent at point ${\vec {p}}(t_{0})={\vec {f}}_{1}t_{0}+{\vec {f}}_{2}{\tfrac {1}{t_{0}}}$ with the asymptotes one gets the points $C=2t_{0}{\vec {f}}_{1},\ D={\tfrac {2}{t_{0}}}{\vec {f}}_{2}.$ The area of the triangle $M,C,D$ can be calculated by a 2 × 2 determinant: $A={\tfrac {1}{2}}{\Big |}\det \left(2t_{0}{\vec {f}}_{1},{\tfrac {2}{t_{0}}}{\vec {f}}_{2}\right){\Big |}=2{\Big |}\det \left({\vec {f}}_{1},{\vec {f}}_{2}\right){\Big |}$ (see rules for determinants). $\left|\det({\vec {f}}_{1},{\vec {f}}_{2})\right|$ is the area of the rhombus generated by ${\vec {f}}_{1},{\vec {f}}_{2}$. The area of a rhombus is equal to one half of the product of its diagonals. The diagonals are the semi-axes $a,b$ of the hyperbola. Hence: The area of the triangle $MCD$ is independent of the point of the hyperbola: $A=ab.$
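The constancy of this triangle area can be confirmed numerically; the vectors below are arbitrary but of equal length, as assumed in the derivation, and the center is the origin:

```python
import numpy as np

f1 = np.array([2.0, 1.0])
f2 = np.array([1.0, 2.0])                # same length as f1
a = np.linalg.norm(f1 + f2)              # semi-major axis
b = np.linalg.norm(f1 - f2)              # semi-minor axis

for t0 in (0.5, 1.0, 3.0):
    C = 2.0 * t0 * f1                    # tangent meets the asymptotes at C and D
    D = (2.0 / t0) * f2
    area = 0.5 * abs(C[0] * D[1] - C[1] * D[0])   # triangle M, C, D; M is the origin
    print(area, a * b)                   # equal for every t0
```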
Reciprocation of a circle The reciprocation of a circle B in a circle C always yields a conic section such as a hyperbola. The process of "reciprocation in a circle C" consists of replacing every line and point in a geometrical figure with their corresponding pole and polar, respectively. The pole of a line is the inversion of its closest point to the circle C, whereas the polar of a point is the converse, namely, a line whose closest point to C is the inversion of the point. The eccentricity of the conic section obtained by reciprocation is the ratio of the distance between the two circles' centers to the radius r of the reciprocation circle C. If B and C represent the points at the centers of the corresponding circles, then $e={\frac {\overline {BC}}{r}}.$ Since the eccentricity of a hyperbola is always greater than one, the center B must lie outside of the reciprocating circle C. This definition implies that the hyperbola is both the locus of the poles of the tangent lines to the circle B, as well as the envelope of the polar lines of the points on B. Conversely, the circle B is the envelope of polars of points on the hyperbola, and the locus of poles of tangent lines to the hyperbola. Two tangent lines to B have no (finite) poles because they pass through the center C of the reciprocation circle; the polars of the corresponding tangent points on B are the asymptotes of the hyperbola. The two branches of the hyperbola correspond to the two parts of the circle B that are separated by these tangent points. Quadratic equation A hyperbola can also be defined as a second-degree equation in the Cartesian coordinates $(x,y)$ in the plane, $A_{xx}x^{2}+2A_{xy}xy+A_{yy}y^{2}+2B_{x}x+2B_{y}y+C=0,$ provided that the constants $A_{xx},$ $A_{xy},$ $A_{yy},$ $B_{x},$ $B_{y},$ and $C$ satisfy the determinant condition $D:={\begin{vmatrix}A_{xx}&A_{xy}\\A_{xy}&A_{yy}\end{vmatrix}}<0.$ This determinant is conventionally called the discriminant of the conic section.[15] A special case of a hyperbola—the degenerate hyperbola consisting of two intersecting lines—occurs when another determinant is zero: $\Delta :={\begin{vmatrix}A_{xx}&A_{xy}&B_{x}\\A_{xy}&A_{yy}&B_{y}\\B_{x}&B_{y}&C\end{vmatrix}}=0.$ This determinant $\Delta $ is sometimes called the discriminant of the conic section.[16] The general equation's coefficients can be obtained from known semi-major axis $a,$ semi-minor axis $b,$ center coordinates $(x_{\circ },y_{\circ })$, and rotation angle $\theta $ (the angle from the positive horizontal axis to the hyperbola's major axis) using the formulae: ${\begin{aligned}A_{xx}&=-a^{2}\sin ^{2}\theta +b^{2}\cos ^{2}\theta ,&B_{x}&=-A_{xx}x_{\circ }-A_{xy}y_{\circ },\\[1ex]A_{yy}&=-a^{2}\cos ^{2}\theta +b^{2}\sin ^{2}\theta ,&B_{y}&=-A_{xy}x_{\circ }-A_{yy}y_{\circ },\\[1ex]A_{xy}&=\left(a^{2}+b^{2}\right)\sin \theta \cos \theta ,&C&=A_{xx}x_{\circ }^{2}+2A_{xy}x_{\circ }y_{\circ }+A_{yy}y_{\circ }^{2}-a^{2}b^{2}.\end{aligned}}$ These expressions can be derived from the canonical equation ${\frac {X^{2}}{a^{2}}}-{\frac {Y^{2}}{b^{2}}}=1$ by a translation and rotation of the coordinates $(x,y)$: ${\begin{alignedat}{2}X&={\phantom {+}}\left(x-x_{\circ }\right)\cos \theta &&+\left(y-y_{\circ }\right)\sin \theta ,\\Y&=-\left(x-x_{\circ }\right)\sin \theta &&+\left(y-y_{\circ }\right)\cos \theta .\end{alignedat}}$ Given the above general parametrization of the hyperbola in Cartesian coordinates, the eccentricity can be found using the formula in Conic section#Eccentricity in terms of coefficients.
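The coefficient formulae can be exercised directly: build the general quadratic from arbitrary $a$, $b$, center and rotation angle, confirm $D<0$, and confirm that a point of the rotated and shifted hyperbola satisfies the equation. A sketch:

```python
import math

a, b = 3.0, 2.0                           # arbitrary semi-axes
x0, y0, th = 1.0, -2.0, math.radians(30)  # arbitrary center and rotation angle
s, co = math.sin(th), math.cos(th)

Axx = -a*a*s*s + b*b*co*co
Ayy = -a*a*co*co + b*b*s*s
Axy = (a*a + b*b) * s * co
Bx  = -Axx*x0 - Axy*y0
By  = -Axy*x0 - Ayy*y0
C   = Axx*x0*x0 + 2*Axy*x0*y0 + Ayy*y0*y0 - a*a*b*b

D = Axx*Ayy - Axy*Axy
print(D < 0)                              # True: the conic is a hyperbola

t = 0.7                                   # check one point of the transformed hyperbola
X, Y = a*math.cosh(t), b*math.sinh(t)     # point in canonical coordinates
x = x0 + X*co - Y*s                       # rotate by +theta, then translate
y = y0 + X*s + Y*co
print(Axx*x*x + 2*Axy*x*y + Ayy*y*y + 2*Bx*x + 2*By*y + C)   # ~0
```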
The center $(x_{c},y_{c})$ of the hyperbola may be determined from the formulae ${\begin{aligned}x_{c}&=-{\frac {1}{D}}\,{\begin{vmatrix}B_{x}&A_{xy}\\B_{y}&A_{yy}\end{vmatrix}}\,,\\[1ex]y_{c}&=-{\frac {1}{D}}\,{\begin{vmatrix}A_{xx}&B_{x}\\A_{xy}&B_{y}\end{vmatrix}}\,.\end{aligned}}$ In terms of new coordinates, $\xi =x-x_{c}$ and $\eta =y-y_{c},$ the defining equation of the hyperbola can be written $A_{xx}\xi ^{2}+2A_{xy}\xi \eta +A_{yy}\eta ^{2}+{\frac {\Delta }{D}}=0.$ The principal axes of the hyperbola make an angle $\varphi $ with the positive $x$-axis that is given by $\tan(2\varphi )={\frac {2A_{xy}}{A_{xx}-A_{yy}}}.$ Rotating the coordinate axes so that the $x$-axis is aligned with the transverse axis brings the equation into its canonical form ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1.$ The major and minor semiaxes $a$ and $b$ are defined by the equations ${\begin{aligned}a^{2}&=-{\frac {\Delta }{\lambda _{1}D}}=-{\frac {\Delta }{\lambda _{1}^{2}\lambda _{2}}},\\[1ex]b^{2}&=-{\frac {\Delta }{\lambda _{2}D}}=-{\frac {\Delta }{\lambda _{1}\lambda _{2}^{2}}},\end{aligned}}$ where $\lambda _{1}$ and $\lambda _{2}$ are the roots of the quadratic equation $\lambda ^{2}-\left(A_{xx}+A_{yy}\right)\lambda +D=0.$ For comparison, the corresponding equation for a degenerate hyperbola (consisting of two intersecting lines) is ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=0.$ The tangent line to a given point $(x_{0},y_{0})$ on the hyperbola is defined by the equation $Ex+Fy+G=0$ where $E,$ $F,$ and $G$ are defined by ${\begin{aligned}E&=A_{xx}x_{0}+A_{xy}y_{0}+B_{x},\\[1ex]F&=A_{xy}x_{0}+A_{yy}y_{0}+B_{y},\\[1ex]G&=B_{x}x_{0}+B_{y}y_{0}+C.\end{aligned}}$ The normal line to the hyperbola at the same point is given by the equation $F(x-x_{0})-E(y-y_{0})=0.$ The normal line is perpendicular to the tangent line, and both pass through the same point $(x_{0},y_{0}).$ From the equation ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1,\qquad 0<b\leq a,$ the left focus is $(-ae,0)$ and the right focus is $(ae,0),$ where $e$ is the eccentricity. Denote the distances from a point $(x,y)$ to the left and right foci as $r_{1}$ and $r_{2}.$ For a point on the right branch, $r_{1}-r_{2}=2a,$ and for a point on the left branch, $r_{2}-r_{1}=2a.$ This can be proved as follows: If $(x,y)$ is a point on the hyperbola the distance to the left focal point is $r_{1}^{2}=(x+ae)^{2}+y^{2}=x^{2}+2xae+a^{2}e^{2}+\left(x^{2}-a^{2}\right)\left(e^{2}-1\right)=(ex+a)^{2}.$ To the right focal point the distance is $r_{2}^{2}=(x-ae)^{2}+y^{2}=x^{2}-2xae+a^{2}e^{2}+\left(x^{2}-a^{2}\right)\left(e^{2}-1\right)=(ex-a)^{2}.$ If $(x,y)$ is a point on the right branch of the hyperbola then $ex>a$ and ${\begin{aligned}r_{1}&=ex+a,\\r_{2}&=ex-a.\end{aligned}}$ Subtracting these equations one gets $r_{1}-r_{2}=2a.$ If $(x,y)$ is a point on the left branch of the hyperbola then $ex<-a$ and ${\begin{aligned}r_{1}&=-ex-a,\\r_{2}&=-ex+a.\end{aligned}}$ Subtracting these equations one gets $r_{2}-r_{1}=2a.$ In Cartesian coordinates Equation If Cartesian coordinates are introduced such that the origin is the center of the hyperbola and the x-axis is the major axis, then the hyperbola is called east-west-opening and the foci are the points $F_{1}=(c,0),\ F_{2}=(-c,0)$,[17] the vertices are $V_{1}=(a,0),\ V_{2}=(-a,0)$.[18] For an arbitrary point $(x,y)$ the distance to the focus $(c,0)$ is $ {\sqrt {(x-c)^{2}+y^{2}}}$ and to the second focus $ {\sqrt {(x+c)^{2}+y^{2}}}$. 
Hence the point $(x,y)$ is on the hyperbola if the following condition is fulfilled: ${\sqrt {(x-c)^{2}+y^{2}}}-{\sqrt {(x+c)^{2}+y^{2}}}=\pm 2a\ .$ Remove the square roots by suitable squarings and use the relation $b^{2}=c^{2}-a^{2}$ to obtain the equation of the hyperbola: ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1\ .$ This equation is called the canonical form of a hyperbola, because any hyperbola, regardless of its orientation relative to the Cartesian axes and regardless of the location of its center, can be transformed to this form by a change of variables, giving a hyperbola that is congruent to the original (see below). The axes of symmetry or principal axes are the transverse axis (containing the segment of length 2a with endpoints at the vertices) and the conjugate axis (containing the segment of length 2b perpendicular to the transverse axis and with midpoint at the hyperbola's center).[19] As opposed to an ellipse, a hyperbola has only two vertices: $(a,0),\;(-a,0)$. The two points $(0,b),\;(0,-b)$ on the conjugate axis are not on the hyperbola. It follows from the equation that the hyperbola is symmetric with respect to both of the coordinate axes and hence symmetric with respect to the origin. Eccentricity For a hyperbola in the above canonical form, the eccentricity is given by $e={\sqrt {1+{\frac {b^{2}}{a^{2}}}}}.$ Two hyperbolas are geometrically similar to each other – meaning that they have the same shape, so that one can be transformed into the other by translations, rotation, taking a mirror image, and scaling (magnification) – if and only if they have the same eccentricity. Asymptotes Solving the equation (above) of the hyperbola for $y$ yields $y=\pm {\frac {b}{a}}{\sqrt {x^{2}-a^{2}}}.$ It follows from this that the hyperbola approaches the two lines $y=\pm {\frac {b}{a}}x$ for large values of $|x|$. These two lines intersect at the center (origin) and are called asymptotes of the hyperbola ${\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1\ .$[20] With the help of the second figure one can see that ${\color {blue}{(1)}}$ The perpendicular distance from a focus to either asymptote is $b$ (the semi-minor axis). From the Hesse normal form ${\tfrac {bx\pm ay}{\sqrt {a^{2}+b^{2}}}}=0$ of the asymptotes and the equation of the hyperbola one gets:[21] ${\color {magenta}{(2)}}$ The product of the distances from a point on the hyperbola to both the asymptotes is the constant ${\tfrac {a^{2}b^{2}}{a^{2}+b^{2}}}\ ,$ which can also be written in terms of the eccentricity e as $\left({\tfrac {b}{e}}\right)^{2}.$ From the equation $y=\pm {\frac {b}{a}}{\sqrt {x^{2}-a^{2}}}$ of the hyperbola (above) one can derive: ${\color {green}{(3)}}$ The product of the slopes of lines from a point P to the two vertices is the constant $b^{2}/a^{2}\ .$ In addition, from (2) above it can be shown that[21] ${\color {red}{(4)}}$ The product of the distances from a point on the hyperbola to the asymptotes along lines parallel to the asymptotes is the constant ${\tfrac {a^{2}+b^{2}}{4}}.$ Semi-latus rectum The length of the chord through one of the foci, perpendicular to the major axis of the hyperbola, is called the latus rectum. One half of it is the semi-latus rectum $p$. A calculation shows $p={\frac {b^{2}}{a}}.$ The semi-latus rectum $p$ may also be viewed as the radius of curvature at the vertices.
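Two of the facts above lend themselves to a direct numerical check: the focal-distance formulas $r_{1}=ex+a$, $r_{2}=ex-a$ on the right branch, and property (2), the constant product of distances to the asymptotes. A sketch with arbitrary semi-axes:

```python
import math

a, b = 3.0, 2.0
c = math.hypot(a, b)                     # linear eccentricity
e = c / a                                # eccentricity

for t in (0.2, 1.0, 2.5):                # points on the right branch
    x, y = a * math.cosh(t), b * math.sinh(t)
    r1 = math.dist((x, y), (-c, 0.0))    # distance to the left focus
    r2 = math.dist((x, y), (c, 0.0))     # distance to the right focus
    print(r1 - (e * x + a), r2 - (e * x - a))   # both ~0, so r1 - r2 = 2a
    d1 = abs(b * x - a * y) / c          # Hesse-normal-form distances to the asymptotes
    d2 = abs(b * x + a * y) / c
    print(d1 * d2, (a * b / c) ** 2)     # constant a^2 b^2 / (a^2 + b^2)
```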
Tangent The simplest way to determine the equation of the tangent at a point $(x_{0},y_{0})$ is to implicitly differentiate the equation ${\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1$ of the hyperbola. Denoting dy/dx as y′, this produces ${\frac {2x}{a^{2}}}-{\frac {2yy'}{b^{2}}}=0\ \Rightarrow \ y'={\frac {x}{y}}{\frac {b^{2}}{a^{2}}}\ \Rightarrow \ y={\frac {x_{0}}{y_{0}}}{\frac {b^{2}}{a^{2}}}(x-x_{0})+y_{0}.$ Using ${\tfrac {x_{0}^{2}}{a^{2}}}-{\tfrac {y_{0}^{2}}{b^{2}}}=1$, the equation of the tangent at point $(x_{0},y_{0})$ is ${\frac {x_{0}}{a^{2}}}x-{\frac {y_{0}}{b^{2}}}y=1.$ A particular tangent line distinguishes the hyperbola from the other conic sections.[22] Let f be the distance from the vertex V (on both the hyperbola and its axis through the two foci) to the nearer focus. Then the distance, along a line perpendicular to that axis, from that focus to a point P on the hyperbola is greater than 2f. The tangent to the hyperbola at P intersects that axis at point Q at an angle ∠PQV greater than 45°. Rectangular hyperbola In the case $a=b$ the hyperbola is called rectangular (or equilateral), because its asymptotes intersect at right angles. For this case, the linear eccentricity is $c={\sqrt {2}}a$, the eccentricity $e={\sqrt {2}}$ and the semi-latus rectum $p=a$. The graph of the equation $y=1/x$ is a rectangular hyperbola. Parametric representation with hyperbolic sine/cosine Using the hyperbolic sine and cosine functions $\cosh ,\sinh $, a parametric representation of the hyperbola ${\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1$ can be obtained, which is similar to the parametric representation of an ellipse: $(\pm a\cosh t,b\sinh t),\,t\in \mathbb {R} \ ,$ which satisfies the Cartesian equation because $\cosh ^{2}t-\sinh ^{2}t=1.$ Further parametric representations are given in the section Parametric equations below. Conjugate hyperbola Exchange ${\frac {x^{2}}{a^{2}}}$ and ${\frac {y^{2}}{b^{2}}}$ to obtain the equation of the conjugate hyperbola (see diagram): ${\frac {y^{2}}{b^{2}}}-{\frac {x^{2}}{a^{2}}}=1\ ,$ also written as ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=-1\ .$ A hyperbola and its conjugate may have diameters which are conjugate. In the theory of special relativity, such diameters may represent axes of time and space, where one hyperbola represents events at a given spatial distance from the center, and the other represents events at a corresponding temporal distance from the center. In polar coordinates Origin at the focus The polar coordinates used most commonly for the hyperbola are defined relative to the Cartesian coordinate system that has its origin in a focus and its x-axis pointing towards the origin of the "canonical coordinate system" as illustrated in the first diagram. In this case the angle $\varphi $ is called true anomaly. Relative to this coordinate system one has that $r={\frac {p}{1\mp e\cos \varphi }},\quad p={\frac {b^{2}}{a}}$ and $-\arccos \left(-{\frac {1}{e}}\right)<\varphi <\arccos \left(-{\frac {1}{e}}\right).$ Origin at the center With polar coordinates relative to the "canonical coordinate system" (see second diagram) one has that $r={\frac {b}{\sqrt {e^{2}\cos ^{2}\varphi -1}}}.\,$ For the right branch of the hyperbola the range of $\varphi $ is $-\arccos \left({\frac {1}{e}}\right)<\varphi <\arccos \left({\frac {1}{e}}\right).$ Parametric equations A hyperbola with equation ${\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1$ can be described by several parametric equations: 1.
Through hyperbolic trigonometric functions ${\begin{cases}x=\pm a\cosh t,\\y=b\sinh t,\end{cases}}\qquad t\in \mathbb {R} .$ 2. As a rational representation ${\begin{cases}x=\pm a{\dfrac {t^{2}+1}{2t}},\\[1ex]y=b{\dfrac {t^{2}-1}{2t}},\end{cases}}\qquad t>0$ 3. Through circular trigonometric functions ${\begin{cases}x={\frac {a}{\cos t}}=a\sec t,\\y=\pm b\tan t,\end{cases}}\qquad 0\leq t<2\pi ,\ t\neq {\frac {\pi }{2}},\ t\neq {\frac {3}{2}}\pi .$ 4. With the tangent slope as parameter: A parametric representation, which uses the slope $m$ of the tangent at a point of the hyperbola can be obtained analogously to the ellipse case: Replace in the ellipse case $b^{2}$ by $-b^{2}$ and use formulae for the hyperbolic functions. One gets ${\vec {c}}_{\pm }(m)=\left(-{\frac {ma^{2}}{\pm {\sqrt {m^{2}a^{2}-b^{2}}}}},{\frac {-b^{2}}{\pm {\sqrt {m^{2}a^{2}-b^{2}}}}}\right),\quad |m|>b/a.$ Here, ${\vec {c}}_{-}$ is the upper, and ${\vec {c}}_{+}$ the lower half of the hyperbola. The points with vertical tangents (vertices $(\pm a,0)$) are not covered by the representation. The equation of the tangent at point ${\vec {c}}_{\pm }(m)$ is $y=mx\pm {\sqrt {m^{2}a^{2}-b^{2}}}.$ This description of the tangents of a hyperbola is an essential tool for the determination of the orthoptic of a hyperbola. Hyperbolic functions Main article: Hyperbolic functions Just as the trigonometric functions are defined in terms of the unit circle, so also the hyperbolic functions are defined in terms of the unit hyperbola, as shown in this diagram. In a unit circle, the angle (in radians) is equal to twice the area of the circular sector which that angle subtends. The analogous hyperbolic angle is likewise defined as twice the area of a hyperbolic sector. Let $a$ be twice the area between the $x$ axis and a ray through the origin intersecting the unit hyperbola, and define $ (x,y)=(\cosh a,\sinh a)=(x,{\sqrt {x^{2}-1}})$ as the coordinates of the intersection point. Then the area of the hyperbolic sector is the area of the triangle minus the curved region past the vertex at $(1,0)$: ${\begin{aligned}{\frac {a}{2}}&={\frac {xy}{2}}-\int _{1}^{x}{\sqrt {t^{2}-1}}\,dt\\[1ex]&={\frac {1}{2}}\left(x{\sqrt {x^{2}-1}}\right)-{\frac {1}{2}}\left(x{\sqrt {x^{2}-1}}-\ln \left(x+{\sqrt {x^{2}-1}}\right)\right),\end{aligned}}$ which simplifies to the area hyperbolic cosine $a=\operatorname {arcosh} x=\ln \left(x+{\sqrt {x^{2}-1}}\right).$ Solving for $x$ yields the exponential form of the hyperbolic cosine: $x=\cosh a={\frac {e^{a}+e^{-a}}{2}}.$ From $x^{2}-y^{2}=1$ one gets $y=\sinh a={\sqrt {\cosh ^{2}a-1}}={\frac {e^{a}-e^{-a}}{2}},$ and its inverse the area hyperbolic sine: $a=\operatorname {arsinh} y=\ln \left(y+{\sqrt {y^{2}+1}}\right).$ Other hyperbolic functions are defined according to the hyperbolic cosine and hyperbolic sine, so for example $\operatorname {tanh} a={\frac {\sinh a}{\cosh a}}={\frac {e^{2a}-1}{e^{2a}+1}}.$ Properties Reflection property The tangent at a point $P$ bisects the angle between the lines ${\overline {PF_{1}}},{\overline {PF_{2}}}.$ This is called the optical property or reflection property of a hyperbola.[23] Proof Let $L$ be the point on the line ${\overline {PF_{2}}}$ with the distance $2a$ to the focus $F_{2}$ (see diagram, $a$ is the semi major axis of the hyperbola). Line $w$ is the bisector of the angle between the lines ${\overline {PF_{1}}},{\overline {PF_{2}}}$. 
In order to prove that $w$ is the tangent line at point $P$, one checks that any point $Q$ on line $w$ which is different from $P$ cannot be on the hyperbola. Hence $w$ has only point $P$ in common with the hyperbola and is, therefore, the tangent at point $P$. From the diagram and the triangle inequality one recognizes that $|QF_{2}|<|LF_{2}|+|QL|=2a+|QF_{1}|$ holds, which means: $|QF_{2}|-|QF_{1}|<2a$. But if $Q$ is a point of the hyperbola, the difference should be $2a$. Midpoints of parallel chords The midpoints of parallel chords of a hyperbola lie on a line through the center (see diagram). The points of any chord may lie on different branches of the hyperbola. The proof of the property on midpoints is best done for the hyperbola $y=1/x$. Because any hyperbola is an affine image of the hyperbola $y=1/x$ (see section below) and an affine transformation preserves parallelism and midpoints of line segments, the property is true for all hyperbolas: For two points $P=\left(x_{1},{\tfrac {1}{x_{1}}}\right),\ Q=\left(x_{2},{\tfrac {1}{x_{2}}}\right)$ of the hyperbola $y=1/x$ the midpoint of the chord is $M=\left({\tfrac {x_{1}+x_{2}}{2}},\cdots \right)=\cdots ={\tfrac {x_{1}+x_{2}}{2}}\;\left(1,{\tfrac {1}{x_{1}x_{2}}}\right)\ ;$ the slope of the chord is ${\frac {{\tfrac {1}{x_{2}}}-{\tfrac {1}{x_{1}}}}{x_{2}-x_{1}}}=\cdots =-{\tfrac {1}{x_{1}x_{2}}}\ .$ For parallel chords the slope is constant and the midpoints of the parallel chords lie on the line $y={\tfrac {1}{x_{1}x_{2}}}\;x\ .$ Consequence: for any pair of points $P,Q$ of a chord there exists a skew reflection with an axis (set of fixed points) passing through the center of the hyperbola, which exchanges the points $P,Q$ and leaves the hyperbola (as a whole) fixed. A skew reflection is a generalization of an ordinary reflection across a line $m$, where all point-image pairs are on a line perpendicular to $m$. Because a skew reflection leaves the hyperbola fixed, the pair of asymptotes is fixed, too. Hence the midpoint $M$ of a chord $PQ$ divides the related line segment ${\overline {P}}\,{\overline {Q}}$ between the asymptotes into halves, too. This means that $|P{\overline {P}}|=|Q{\overline {Q}}|$. This property can be used for the construction of further points $Q$ of the hyperbola if a point $P$ and the asymptotes are given. If the chord degenerates into a tangent, then the touching point divides the line segment between the asymptotes in two halves. Orthogonal tangents – orthoptic Main article: Orthoptic (geometry) For a hyperbola $ {\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1,\,a>b$ the intersection points of orthogonal tangents lie on the circle $x^{2}+y^{2}=a^{2}-b^{2}$. This circle is called the orthoptic of the given hyperbola. The tangents may belong to points on different branches of the hyperbola. In case of $a\leq b$ there are no pairs of orthogonal tangents. Pole-polar relation for a hyperbola Any hyperbola can be described in a suitable coordinate system by an equation ${\tfrac {x^{2}}{a^{2}}}-{\tfrac {y^{2}}{b^{2}}}=1$. The equation of the tangent at a point $P_{0}=(x_{0},y_{0})$ of the hyperbola is ${\tfrac {x_{0}x}{a^{2}}}-{\tfrac {y_{0}y}{b^{2}}}=1.$ If one allows point $P_{0}=(x_{0},y_{0})$ to be an arbitrary point different from the origin, then point $P_{0}=(x_{0},y_{0})\neq (0,0)$ is mapped onto the line ${\frac {x_{0}x}{a^{2}}}-{\frac {y_{0}y}{b^{2}}}=1$, which does not pass through the center of the hyperbola. This relation between points and lines is a bijection.
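A sketch of this relation in coordinates: substituting the polar line into the hyperbola equation gives a quadratic in $x$ whose roots show tangency (a double root) when the pole lies on the curve, and no intersection (complex roots) when the pole lies within it. Sample values are arbitrary, and the substitution assumes $y_{0}\neq 0$:

```python
import numpy as np

a, b = 3.0, 2.0

def polar_roots(x0, y0):
    # Substitute y = (x0*x/a^2 - 1)*b^2/y0 (the polar of (x0, y0))
    # into x^2/a^2 - y^2/b^2 = 1 and return the roots in x.
    A = 1.0/a**2 - (x0*b/(a*a*y0))**2
    B = 2.0*x0*b*b/(a*a*y0**2)
    C = -b*b/y0**2 - 1.0
    return np.roots([A, B, C])

t = 0.9
x0, y0 = a*np.cosh(t), b*np.sinh(t)      # pole on the hyperbola
print(polar_roots(x0, y0))               # double root at x0: the polar is the tangent
print(polar_roots(4.0, 0.5))             # pole within a branch: complex roots, no intersection
```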
The inverse function maps line $y=mx+d,\ d\neq 0$ onto the point $\left(-{\frac {ma^{2}}{d}},-{\frac {b^{2}}{d}}\right)$ and line $x=c,\ c\neq 0$ onto the point $\left({\frac {a^{2}}{c}},0\right)\ .$ Such a relation between points and lines generated by a conic is called pole-polar relation or just polarity. The pole is the point, the polar the line. See Pole and polar. By calculation one checks the following properties of the pole-polar relation of the hyperbola: • For a point (pole) on the hyperbola the polar is the tangent at this point (see diagram: $P_{1},\ p_{1}$). • For a pole $P$ outside the hyperbola the intersection points of its polar with the hyperbola are the tangency points of the two tangents passing $P$ (see diagram: $P_{2},\ p_{2},\ P_{3},\ p_{3}$). • For a point within the hyperbola the polar has no point with the hyperbola in common. (see diagram: $P_{4},\ p_{4}$). Remarks: 1. The intersection point of two polars (for example: $p_{2},p_{3}$) is the pole of the line through their poles (here: $P_{2},P_{3}$). 2. The foci $(c,0),$ and $(-c,0)$ respectively and the directrices $x={\tfrac {a^{2}}{c}}$ and $x=-{\tfrac {a^{2}}{c}}$ respectively belong to pairs of pole and polar. Pole-polar relations exist for ellipses and parabolas, too. Other properties • The following are concurrent: (1) a circle passing through the hyperbola's foci and centered at the hyperbola's center; (2) either of the lines that are tangent to the hyperbola at the vertices; and (3) either of the asymptotes of the hyperbola.[24][25] • The following are also concurrent: (1) the circle that is centered at the hyperbola's center and that passes through the hyperbola's vertices; (2) either directrix; and (3) either of the asymptotes.[25] Arc length The arc length of a hyperbola does not have an elementary expression. The upper half of a hyperbola can be parameterized as $y=b{\sqrt {{\frac {x^{2}}{a^{2}}}-1}}.$ Then the integral giving the arc length $s$ from $x_{1}$ to $x_{2}$ can be computed as: $s=b\int _{\operatorname {arcosh} {\frac {x_{1}}{a}}}^{\operatorname {arcosh} {\frac {x_{2}}{a}}}{\sqrt {1+\left(1+{\frac {a^{2}}{b^{2}}}\right)\sinh ^{2}v}}\,\mathrm {d} v.$ After using the substitution $z=iv$, this can also be represented using the incomplete elliptic integral of the second kind $E$ with parameter $m=k^{2}$: $s=ib{\Biggr [}E\left(iv\,{\Biggr |}\,1+{\frac {a^{2}}{b^{2}}}\right){\Biggr ]}_{\operatorname {arcosh} {\frac {x_{2}}{a}}}^{\operatorname {arcosh} {\frac {x_{1}}{a}}}.$ Using only real numbers, this becomes[26] $s=b\left[F\left(\operatorname {gd} v\,{\Biggr |}-{\frac {a^{2}}{b^{2}}}\right)-E\left(\operatorname {gd} v\,{\Biggr |}-{\frac {a^{2}}{b^{2}}}\right)+{\sqrt {1+{\frac {a^{2}}{b^{2}}}\tanh ^{2}v}}\,\sinh v\right]_{\operatorname {arcosh} {\tfrac {x_{1}}{a}}}^{\operatorname {arcosh} {\tfrac {x_{2}}{a}}}$ where $F$ is the incomplete elliptic integral of the first kind with parameter $m=k^{2}$ and $\operatorname {gd} v=\arctan \sinh v$ is the Gudermannian function. Derived curves Several other curves can be derived from the hyperbola by inversion, the so-called inverse curves of the hyperbola. If the center of inversion is chosen as the hyperbola's own center, the inverse curve is the lemniscate of Bernoulli; the lemniscate is also the envelope of circles centered on a rectangular hyperbola and passing through the origin. If the center of inversion is chosen at a focus or a vertex of the hyperbola, the resulting inverse curves are a limaçon or a strophoid, respectively. 
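The arc-length integral above is easy to evaluate numerically; the sketch below, assuming SciPy is available, integrates the sinh-substitution form and cross-checks it against direct integration of $\sqrt{1+(dy/dx)^{2}}$:

```python
import math
from scipy.integrate import quad

a, b = 3.0, 2.0                           # arbitrary semi-axes
x1, x2 = 3.5, 7.0                         # arc on the upper right branch (x1 > a)

v1, v2 = math.acosh(x1 / a), math.acosh(x2 / a)
s1, _ = quad(lambda v: math.sqrt(1.0 + (1.0 + a*a/(b*b)) * math.sinh(v)**2), v1, v2)
s1 *= b                                   # the factor b in front of the integral

def dydx(x):
    # slope of y = b*sqrt(x^2/a^2 - 1)
    return b * x / (a * a * math.sqrt(x*x/(a*a) - 1.0))

s2, _ = quad(lambda x: math.sqrt(1.0 + dydx(x)**2), x1, x2)
print(s1, s2)                             # the two values agree
```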
Elliptic coordinates A family of confocal hyperbolas is the basis of the system of elliptic coordinates in two dimensions. These hyperbolas are described by the equation $\left({\frac {x}{c\cos \theta }}\right)^{2}-\left({\frac {y}{c\sin \theta }}\right)^{2}=1$ where the foci are located at a distance c from the origin on the x-axis, and where θ is the angle of the asymptotes with the x-axis. Every hyperbola in this family is orthogonal to every ellipse that shares the same foci. This orthogonality may be shown by a conformal map of the Cartesian coordinate system w = z + 1/z, where z = x + iy are the original Cartesian coordinates, and w = u + iv are those after the transformation. Other orthogonal two-dimensional coordinate systems involving hyperbolas may be obtained by other conformal mappings. For example, the mapping w = z2 transforms the Cartesian coordinate system into two families of orthogonal hyperbolas. Conic section analysis of the hyperbolic appearance of circles Besides providing a uniform description of circles, ellipses, parabolas, and hyperbolas, conic sections can also be understood as a natural model of the geometry of perspective in the case where the scene being viewed consists of circles or, more generally, ellipses. The viewer is typically a camera or the human eye and the image of the scene a central projection onto an image plane, that is, all projection rays pass a fixed point O, the center. The lens plane is a plane parallel to the image plane at the lens O. The image of a circle c is 1. a circle, if circle c is in a special position, for example parallel to the image plane and others (see stereographic projection), 2. an ellipse, if c has no point with the lens plane in common, 3. a parabola, if c has one point with the lens plane in common and 4. a hyperbola, if c has two points with the lens plane in common. (Special positions where the circle plane contains point O are omitted.) These results can be understood if one recognizes that the projection process can be seen in two steps: 1) circle c and point O generate a cone which is 2) cut by the image plane, in order to generate the image. One sees a hyperbola whenever catching sight of a portion of a circle cut by one's lens plane. The inability to see very much of the arms of the visible branch, combined with the complete absence of the second branch, makes it virtually impossible for the human visual system to recognize the connection with hyperbolas. Applications Sundials Hyperbolas may be seen in many sundials. On any given day, the sun revolves in a circle on the celestial sphere, and its rays striking the tip of a sundial's gnomon trace out a cone of light. The intersection of this cone with the horizontal plane of the ground forms a conic section. At most populated latitudes and at most times of the year, this conic section is a hyperbola. In practical terms, the shadow of the tip of a pole traces out a hyperbola on the ground over the course of a day (this path is called the declination line). The shape of this hyperbola varies with the geographical latitude and with the time of the year, since those factors affect the cone of the sun's rays relative to the horizon. The collection of such hyperbolas for a whole year at a given location was called a pelekinon by the Greeks, since it resembles a double-bladed axe.
Multilateration
A hyperbola is the basis for solving multilateration problems, the task of locating a point from the differences in its distances to given points — or, equivalently, the difference in arrival times of synchronized signals between the point and the given points. Such problems are important in navigation, particularly on water; a ship can locate its position from the difference in arrival times of signals from LORAN or GPS transmitters. Conversely, a homing beacon or any transmitter can be located by comparing the arrival times of its signals at two separate receiving stations; such techniques may be used to track objects and people. In particular, the set of possible positions of a point that has a distance difference of 2a from two given points is a hyperbola of vertex separation 2a whose foci are the two given points.

Path followed by a particle
The path followed by any particle in the classical Kepler problem is a conic section. In particular, if the total energy E of the particle is greater than zero (that is, if the particle is unbound), the path of such a particle is a hyperbola. This property is useful in studying atomic and sub-atomic forces by scattering high-energy particles; for example, the Rutherford experiment demonstrated the existence of an atomic nucleus by examining the scattering of alpha particles from gold atoms. If the short-range nuclear interactions are ignored, the atomic nucleus and the alpha particle interact only by a repulsive Coulomb force, which satisfies the inverse square law requirement for a Kepler problem.

Korteweg–de Vries equation
The hyperbolic function $\operatorname {sech} \,x$ appears as one solution to the Korteweg–de Vries equation, which describes the motion of a soliton wave in a canal.

Angle trisection
As shown first by Apollonius of Perga, a hyperbola can be used to trisect any angle, a well-studied problem of geometry. Given an angle, first draw a circle centered at its vertex O, which intersects the sides of the angle at points A and B. Next draw the line segment with endpoints A and B and its perpendicular bisector $\ell $. Construct a hyperbola of eccentricity e=2 with $\ell $ as directrix and B as a focus. Let P be the upper intersection point of the hyperbola with the circle. Angle POB trisects angle AOB.
To prove this, reflect the line segment OP about the line $\ell $, obtaining the point P' as the image of P. Segment AP' has the same length as segment BP due to the reflection, while segment PP' has the same length as segment BP due to the eccentricity of the hyperbola. As OA, OP', OP and OB are all radii of the same circle (and so have the same length), the triangles OAP', OPP' and OPB are all congruent. Therefore, the angle has been trisected, since 3×POB = AOB.[27]

Efficient portfolio frontier
In portfolio theory, the locus of mean-variance efficient portfolios (called the efficient frontier) is the upper half of the east-opening branch of a hyperbola drawn with the portfolio return's standard deviation plotted horizontally and its expected value plotted vertically; according to this theory, all rational investors would choose a portfolio characterized by some point on this locus.

Biochemistry
In biochemistry and pharmacology, the Hill equation and Hill–Langmuir equation respectively describe biological responses and the formation of protein–ligand complexes as functions of ligand concentration. They are both rectangular hyperbolae.
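Returning to the multilateration application above: each measured difference of distances (equivalently, an arrival-time difference times the signal speed) confines the emitter to one branch of a hyperbola whose foci are a pair of stations, and intersecting two such hyperbolas fixes the position. A minimal sketch (assuming NumPy and SciPy; the station and emitter coordinates are hypothetical):

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical receiver positions (km) and an emitter to be located.
stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 30.0]])
emitter = np.array([17.0, 11.0])

dist = lambda p, q: np.hypot(*(p - q))

# "Measured" range differences d(emitter, S0) - d(emitter, Si); each one
# places the emitter on one branch of a hyperbola with foci S0 and Si.
deltas = [dist(emitter, stations[0]) - dist(emitter, s) for s in stations[1:]]

def residuals(p):
    p = np.asarray(p)
    return [dist(p, stations[0]) - dist(p, s) - d
            for s, d in zip(stations[1:], deltas)]

print(fsolve(residuals, x0=[10.0, 10.0]))   # approximately [17. 11.]
```

With noisy time differences one would use more stations and a least-squares fit instead of an exact intersection, but the geometry is the same.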
Hyperbolas as plane sections of quadrics
Hyperbolas appear as plane sections of the following quadrics:
• Elliptic cone
• Hyperbolic cylinder
• Hyperbolic paraboloid
• Hyperboloid of one sheet
• Hyperboloid of two sheets

See also
Other conic sections
• Circle
• Ellipse
• Parabola
• Degenerate conic
Other related topics
• Elliptic coordinates, an orthogonal coordinate system based on families of ellipses and hyperbolas.
• Hyperbolic growth
• Hyperbolic partial differential equation
• Hyperbolic sector
• Hyperboloid structure
• Hyperbolic trajectory
• Hyperboloid
• Multilateration
• Rotation of axes
• Translation of axes
• Unit hyperbola

Notes
1. Oakley (1944, p. 17)
2. Oakley (1944, p. 17)
3. Heath, Sir Thomas Little (1896), "Chapter I. The discovery of conic sections. Menaechmus", Apollonius of Perga: Treatise on Conic Sections with Introductions Including an Essay on Earlier History on the Subject, Cambridge University Press, pp. xvii–xxx.
4. Boyer, Carl B.; Merzbach, Uta C. (2011), A History of Mathematics, Wiley, p. 73, ISBN 9780470630563, "It was Apollonius (possibly following up a suggestion of Archimedes) who introduced the names 'ellipse' and 'hyperbola' in connection with these curves."
5. Eves, Howard (1963), A Survey of Geometry (Vol. One), Allyn and Bacon, pp. 30–31
6. Protter & Morrey (1970, pp. 308–310)
7. Protter & Morrey (1970, p. 310)
8. Apostol, Tom M.; Mnatsakanian, Mamikon A. (2012), New Horizons in Geometry, The Dolciani Mathematical Expositions #47, The Mathematical Association of America, p. 251, ISBN 978-0-88385-354-2
9. The German term for this circle is Leitkreis, which can be translated as "director circle", but that term has a different meaning in the English literature (see Director circle).
10. Frans van Schooten: Mathematische Oeffeningen, Leyden, 1659, p. 327
11. E. Hartmann: Lecture Note 'Planar Circle Geometries', an Introduction to Möbius-, Laguerre- and Minkowski Planes, p. 93
12. W. Benz: Vorlesungen über Geometrie der Algebren, Springer (1973)
13. Lecture Note Planar Circle Geometries, an Introduction to Moebius-, Laguerre- and Minkowski Planes, p. 33 (PDF; 757 kB)
14. Lecture Note Planar Circle Geometries, an Introduction to Moebius-, Laguerre- and Minkowski Planes, p. 32 (PDF; 757 kB)
15. Fanchi, John R. (2006). Math Refresher for Scientists and Engineers. John Wiley and Sons. Section 3.2, pages 44–45. ISBN 0-471-75715-2.
16. Korn, Granino A.; Korn, Theresa M. (2000). Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review (second ed.). Dover Publ. p. 40.
17. Protter & Morrey (1970, p. 310)
18. Protter & Morrey (1970, p. 310)
19. Protter & Morrey (1970, p. 310)
20. Protter & Morrey (1970, pp. APP-29–APP-30)
21. Mitchell, Douglas W., "A property of hyperbolas and their asymptotes", Mathematical Gazette 96, July 2012, 299–301.
22. J. W. Downs, Practical Conic Sections, Dover Publ., 2003 (orig. 1993): p. 26.
23. Coffman, R. T.; Ogilvy, C. S. (1963), "The 'Reflection Property' of the Conics", Mathematics Magazine, 36 (1): 11–12, doi:10.2307/2688124; Flanders, Harley (1968), "The Optical Property of the Conics", American Mathematical Monthly, 75 (4): 399, doi:10.2307/2313439; Brozinsky, Michael K. (1984), "Reflection Property of the Ellipse and the Hyperbola", College Mathematics Journal, 15 (2): 140–42, doi:10.2307/2686519
24. "Hyperbola". Mathafou.free.fr.
Archived from the original on 4 March 2016. Retrieved 26 August 2018.
25. "Properties of a Hyperbola". Archived from the original on 2017-02-02. Retrieved 2011-06-22.
26. Carlson, B. C. (2010), "Elliptic Integrals", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
27. This construction is due to Pappus of Alexandria (circa 300 A.D.) and the proof comes from Kazarinoff (1970, p. 62).

References
• Kazarinoff, Nicholas D. (1970), Ruler and the Round, Boston: Prindle, Weber & Schmidt, ISBN 0-87150-113-9
• Oakley, C. O. (1944), An Outline of the Calculus, New York: Barnes & Noble
• Protter, Murray H.; Morrey, Charles B. Jr. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, LCCN 76087042

External links
• "Hyperbola", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Apollonius' Derivation of the Hyperbola at Convergence
• Frans van Schooten: Mathematische Oeffeningen, 1659
• Weisstein, Eric W. "Hyperbola". MathWorld.
Rectangular lattice
The rectangular lattice and rhombic lattice (or centered rectangular lattice) constitute two of the five two-dimensional Bravais lattice types.[1] The symmetry categories of these lattices are the wallpaper groups pmm and cmm respectively. The conventional translation vectors of the rectangular lattices form an angle of 90° and are of unequal lengths.

Bravais lattices
There are two rectangular Bravais lattices: primitive rectangular and centered rectangular (also rhombic).

Bravais lattice | Rectangular | Centered rectangular
Wallpaper group | pmm | cmm
Pearson symbol | op | oc

The primitive rectangular lattice can also be described by a centered rhombic unit cell, while the centered rectangular lattice can also be described by a primitive rhombic unit cell. Note that the edge length $a$ of the rhombic cell is not the same as the $a$ of the conventional rectangular cell: for the primitive rectangular lattice, the rhombic cell's edge equals ${\sqrt {a^{2}+b^{2}}}$ of the rectangular cell, and for the centered rectangular lattice it equals ${\frac {1}{2}}{\sqrt {a^{2}+b^{2}}}$.

Crystal classes
The rectangular lattice class names, Schönflies notation, Hermann–Mauguin notation, orbifold notation, Coxeter notation, and wallpaper groups are listed in the table below.

Schön. | Intl | Orb. | Cox. | Arithmetic class | Wallpaper groups
D1 | m | (*) | [ ] | Along | pm (**), pg (××)
D1 | m | (*) | [ ] | Between | cm (*×)
D2 | 2mm | (*22) | [2] | Along | pmm (*2222), pmg (22*)
D2 | 2mm | (*22) | [2] | Between | cmm (2*22), pgg (22×)

References
1. Rana, Farhan. "Lattices in 1D, 2D, and 3D" (PDF). Cornell University. Archived (PDF) from the original on 2020-12-18.
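The relations above between the conventional rectangular cell $a$, $b$ and the rhombic cell edge are one-liners to compute. A minimal sketch (plain Python; the values are illustrative):

```python
import math

def rhombic_cell_edge(a, b, centered=False):
    """Edge length of the rhombic description of a rectangular lattice
    with conventional cell a x b, per the relations above."""
    d = math.hypot(a, b)            # sqrt(a^2 + b^2)
    return d / 2 if centered else d

print(rhombic_cell_edge(3.0, 4.0))                 # 5.0  (primitive rectangular)
print(rhombic_cell_edge(3.0, 4.0, centered=True))  # 2.5  (centered rectangular)
```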
Pronic number
A pronic number is a number that is the product of two consecutive integers, that is, a number of the form $n(n+1)$.[1] The study of these numbers dates back to Aristotle. They are also called oblong numbers, heteromecic numbers,[2] or rectangular numbers;[3] however, the term "rectangular number" has also been applied to the composite numbers.[4][5]
The first few pronic numbers are:
0, 2, 6, 12, 20, 30, 42, 56, 72, 90, 110, 132, 156, 182, 210, 240, 272, 306, 342, 380, 420, 462 … (sequence A002378 in the OEIS).
Letting $P_{n}$ denote the pronic number $n(n+1)$, we have $P_{-n}=P_{n-1}$. Therefore, in discussing pronic numbers, we may assume that $n\geq 0$ without loss of generality, a convention that is adopted in the following sections.

As figurate numbers
The pronic numbers were studied as figurate numbers alongside the triangular numbers and square numbers in Aristotle's Metaphysics,[2] and their discovery has been attributed much earlier to the Pythagoreans.[3] As a kind of figurate number, the pronic numbers are sometimes called oblong[2] because they are analogous to polygonal numbers in this way:[1] they count the dots in rectangular arrays of 1 × 2, 2 × 3, 3 × 4, 4 × 5, and so on.
The nth pronic number is the sum of the first n even integers, and as such is twice the nth triangular number[1][2] and n more than the nth square number, as given by the alternative formula n² + n for pronic numbers. The nth pronic number is also the difference between the odd square (2n + 1)² and the (n+1)st centered hexagonal number. Since the number of off-diagonal entries in a square matrix is twice a triangular number, it is a pronic number.[6]

Sum of pronic numbers
The partial sum of the first n positive pronic numbers is twice the value of the nth tetrahedral number:
$\sum _{k=1}^{n}k(k+1)={\frac {n(n+1)(n+2)}{3}}=2T_{n}.$
The sum of the reciprocals of the positive pronic numbers (excluding 0) is a telescoping series that sums to 1:[7]
$\sum _{i=1}^{\infty }{\frac {1}{i(i+1)}}={\frac {1}{2}}+{\frac {1}{6}}+{\frac {1}{12}}+\cdots =1.$
The partial sum of the first n terms in this series is[7]
$\sum _{i=1}^{n}{\frac {1}{i(i+1)}}={\frac {n}{n+1}}.$

Additional properties
Pronic numbers are even, and 2 is the only prime pronic number. It is also the only pronic number in the Fibonacci sequence and the only pronic Lucas number.[8][9]
The arithmetic mean of two consecutive pronic numbers is a square number:
${\frac {n(n+1)+(n+1)(n+2)}{2}}=(n+1)^{2}$
So there is a square between any two consecutive pronic numbers. It is unique, since
$n^{2}\leq n(n+1)<(n+1)^{2}<(n+1)(n+2)<(n+2)^{2}.$
Another consequence of this chain of inequalities is the following property: if m is a pronic number, then
$\lfloor {\sqrt {m}}\rfloor \cdot \lceil {\sqrt {m}}\rceil =m.$
The fact that consecutive integers are coprime and that a pronic number is the product of two consecutive integers leads to a number of properties. Each distinct prime factor of a pronic number is present in only one of the factors n or n + 1. Thus a pronic number is squarefree if and only if n and n + 1 are also squarefree. The number of distinct prime factors of a pronic number is the sum of the numbers of distinct prime factors of n and n + 1.
If 25 is appended to the decimal representation of any pronic number, the result is a square number, the square of a number ending in 5; for example, 625 = 25² and 1225 = 35². This is so because
$100n(n+1)+25=100n^{2}+100n+25=(10n+5)^{2}.$
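The identities above are easy to verify by brute force. A minimal sketch (plain Python; the bounds are arbitrary):

```python
import math

def pronic(n):
    return n * (n + 1)

for n in range(1, 1000):
    m = pronic(n)
    assert m == 2 * (n * (n + 1) // 2)           # twice the nth triangular number
    r = math.isqrt(m)                            # floor(sqrt(m)); ceil is r + 1
    assert r * (r + 1) == m                      # floor(sqrt(m)) * ceil(sqrt(m)) = m
    assert 100 * m + 25 == (10 * n + 5) ** 2     # appending 25 yields (10n+5)^2

# Telescoping sum of reciprocals: the first n terms total n/(n+1).
assert math.isclose(sum(1 / pronic(k) for k in range(1, 101)), 100 / 101)
```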
References
1. Conway, J. H.; Guy, R. K. (1996), The Book of Numbers, New York: Copernicus, Figure 2.15, p. 34.
2. Knorr, Wilbur Richard (1975), The Evolution of the Euclidean Elements, Dordrecht-Boston, Mass.: D. Reidel Publishing Co., pp. 144–150, ISBN 90-277-0509-7, MR 0472300.
3. Ben-Menahem, Ari (2009), Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1, Springer reference, Springer-Verlag, p. 161, ISBN 9783540688310.
4. "Plutarch, De Iside et Osiride, section 42", www.perseus.tufts.edu, retrieved 16 April 2018.
5. Higgins, Peter Michael (2008), Number Story: From Counting to Cryptography, Copernicus Books, p. 9, ISBN 9781848000018.
6. Rummel, Rudolf J. (1988), Applied Factor Analysis, Northwestern University Press, p. 319, ISBN 9780810108240.
7. Frantz, Marc (2010), "The telescoping series in perspective", in Diefenderfer, Caren L.; Nelsen, Roger B. (eds.), The Calculus Collection: A Resource for AP and Beyond, Classroom Resource Materials, Mathematical Association of America, pp. 467–468, ISBN 9780883857618.
8. McDaniel, Wayne L. (1998), "Pronic Lucas numbers" (PDF), Fibonacci Quarterly, 36 (1): 60–62, MR 1605345, archived from the original (PDF) on 2017-07-05, retrieved 2011-05-21.
9. McDaniel, Wayne L. (1998), "Pronic Fibonacci numbers" (PDF), Fibonacci Quarterly, 36 (1): 56–59, MR 1605341.
Rectangle
In Euclidean plane geometry, a rectangle is a quadrilateral with four right angles. It can also be defined as: an equiangular quadrilateral, since equiangular means that all of its angles are equal (360°/4 = 90°); or a parallelogram containing a right angle. A rectangle with four sides of equal length is a square. The term "oblong" is occasionally used to refer to a non-square rectangle.[1][2][3] A rectangle with vertices ABCD would be denoted as ▭ABCD.

Rectangle
• Type: quadrilateral, trapezium, parallelogram, orthotope
• Edges and vertices: 4
• Schläfli symbol: { } × { }
• Symmetry group: Dihedral (D2), [2], (*22), order 4
• Properties: convex, isogonal, cyclic; opposite angles and sides are congruent
• Dual polygon: rhombus

The word rectangle comes from the Latin rectangulus, which is a combination of rectus (as an adjective, right, proper) and angulus (angle).
A crossed rectangle is a crossed (self-intersecting) quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals[4] (therefore only two sides are parallel). It is a special case of an antiparallelogram, and its angles are not right angles and not all equal, though opposite angles are equal. Other geometries, such as spherical, elliptic, and hyperbolic, have so-called rectangles with opposite sides equal in length and equal angles that are not right angles.
Rectangles are involved in many tiling problems, such as tiling the plane by rectangles or tiling a rectangle by polygons.

Characterizations
A convex quadrilateral is a rectangle if and only if it is any one of the following:[5][6]
• a parallelogram with at least one right angle
• a parallelogram with diagonals of equal length
• a parallelogram ABCD where triangles ABD and DCA are congruent
• an equiangular quadrilateral
• a quadrilateral with four right angles
• a quadrilateral where the two diagonals are equal in length and bisect each other[7]
• a convex quadrilateral with successive sides a, b, c, d whose area is ${\tfrac {1}{4}}(a+c)(b+d)$.[8]: fn.1
• a convex quadrilateral with successive sides a, b, c, d whose area is ${\tfrac {1}{2}}{\sqrt {(a^{2}+c^{2})(b^{2}+d^{2})}}.$[8]

Classification

Traditional hierarchy
A rectangle is a special case of a parallelogram in which each pair of adjacent sides is perpendicular. A parallelogram is a special case of a trapezium (known as a trapezoid in North America) in which both pairs of opposite sides are parallel and equal in length. A trapezium is a convex quadrilateral which has at least one pair of parallel opposite sides. A convex quadrilateral is
• Simple: The boundary does not cross itself.
• Star-shaped: The whole interior is visible from a single point, without crossing any edge.

Alternative hierarchy
De Villiers defines a rectangle more generally as any quadrilateral with axes of symmetry through each pair of opposite sides.[9] This definition includes both right-angled rectangles and crossed rectangles. Each has an axis of symmetry parallel to and equidistant from a pair of opposite sides, and another which is the perpendicular bisector of those sides, but, in the case of the crossed rectangle, the first axis is not an axis of symmetry for either side that it bisects.
Quadrilaterals with two axes of symmetry, each through a pair of opposite sides, belong to the larger class of quadrilaterals with at least one axis of symmetry through a pair of opposite sides.
These quadrilaterals comprise isosceles trapezia and crossed isosceles trapezia (crossed quadrilaterals with the same vertex arrangement as isosceles trapezia).

Properties

Symmetry
A rectangle is cyclic: all corners lie on a single circle. It is equiangular: all its corner angles are equal (each of 90 degrees). It is isogonal or vertex-transitive: all corners lie within the same symmetry orbit. It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).

Rectangle-rhombus duality
The dual polygon of a rectangle is a rhombus, as shown in the table below.[10]

Rectangle | Rhombus
All angles are equal. | All sides are equal.
Alternate sides are equal. | Alternate angles are equal.
Its centre is equidistant from its vertices, hence it has a circumcircle. | Its centre is equidistant from its sides, hence it has an incircle.
Two axes of symmetry bisect opposite sides. | Two axes of symmetry bisect opposite angles.
Diagonals are equal in length. | Diagonals intersect at equal angles.

• The figure formed by joining, in order, the midpoints of the sides of a rectangle is a rhombus, and vice versa.

Miscellaneous
A rectangle is a rectilinear polygon: its sides meet at right angles. A rectangle in the plane can be defined by five independent degrees of freedom consisting, for example, of three for position (comprising two of translation and one of rotation), one for shape (aspect ratio), and one for overall size (area). Two rectangles, neither of which will fit inside the other, are said to be incomparable.

Formulae
If a rectangle has length $\ell $ and width $w$
• it has area $A=\ell w$,
• it has perimeter $P=2\ell +2w=2(\ell +w)$,
• each diagonal has length $d={\sqrt {\ell ^{2}+w^{2}}}$,
• and when $\ell =w$, the rectangle is a square.

Theorems
The isoperimetric theorem for rectangles states that among all rectangles of a given perimeter, the square has the largest area.
The midpoints of the sides of any quadrilateral with perpendicular diagonals form a rectangle.
A parallelogram with equal diagonals is a rectangle.
The Japanese theorem for cyclic quadrilaterals[11] states that the incentres of the four triangles determined by the vertices of a cyclic quadrilateral taken three at a time form a rectangle.
The British flag theorem states that with vertices denoted A, B, C, and D, for any point P on the same plane of a rectangle:[12]
$(AP)^{2}+(CP)^{2}=(BP)^{2}+(DP)^{2}.$
For every convex body C in the plane, we can inscribe a rectangle r in C such that a homothetic copy R of r is circumscribed about C, the positive homothety ratio is at most 2, and ${\tfrac {1}{2}}\,{\text{Area}}(R)\leq {\text{Area}}(C)\leq 2\,{\text{Area}}(r)$.[13]

Crossed rectangles
A crossed quadrilateral (self-intersecting) consists of two opposite sides of a non-self-intersecting quadrilateral along with the two diagonals. Similarly, a crossed rectangle is a crossed quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals. It has the same vertex arrangement as the rectangle. It appears as two identical triangles with a common vertex, but the geometric intersection is not considered a vertex. A crossed quadrilateral is sometimes likened to a bow tie or butterfly, and is sometimes called an "angular eight". A three-dimensional rectangular wire frame that is twisted can take the shape of a bow tie. The interior of a crossed rectangle can have a polygon density of ±1 in each triangle, dependent upon the winding orientation as clockwise or counterclockwise.
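The formulae and the British flag theorem above can be checked numerically. A minimal sketch (plain Python; the rectangle dimensions and test point are arbitrary):

```python
import math
import random

def rect_metrics(l, w):
    """Area, perimeter, and diagonal length of an l x w rectangle."""
    return l * w, 2 * (l + w), math.hypot(l, w)

# British flag theorem: for any point P in the plane of rectangle ABCD,
# AP^2 + CP^2 == BP^2 + DP^2.
l, w = 3.0, 4.0
A, B, C, D = (0, 0), (l, 0), (l, w), (0, w)
P = (random.uniform(-10, 10), random.uniform(-10, 10))
sq = lambda U, V: (U[0] - V[0]) ** 2 + (U[1] - V[1]) ** 2
assert abs(sq(A, P) + sq(C, P) - sq(B, P) - sq(D, P)) < 1e-9

print(rect_metrics(l, w))   # (12.0, 14.0, 5.0)
```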
A crossed rectangle may be considered equiangular if right and left turns are allowed. As with any crossed quadrilateral, the sum of its interior angles is 720°, allowing for internal angles to appear on the outside and exceed 180°.[14]
A rectangle and a crossed rectangle are quadrilaterals with the following properties in common:
• Opposite sides are equal in length.
• The two diagonals are equal in length.
• It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).

Other rectangles
In spherical geometry, a spherical rectangle is a figure whose four edges are great circle arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length. The surface of a sphere in Euclidean solid geometry is a non-Euclidean surface in the sense of elliptic geometry. Spherical geometry is the simplest form of elliptic geometry.
In elliptic geometry, an elliptic rectangle is a figure in the elliptic plane whose four edges are elliptic arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length.
In hyperbolic geometry, a hyperbolic rectangle is a figure in the hyperbolic plane whose four edges are hyperbolic arcs which meet at equal angles less than 90°. Opposite arcs are equal in length.

Tessellations
The rectangle is used in many periodic tessellation patterns, in brickwork, for example, these tilings: stacked bond, running bond, basket weave, and herringbone pattern.

Squared, perfect, and other tiled rectangles
A rectangle tiled by squares, rectangles, or triangles is said to be a "squared", "rectangled", or "triangulated" (or "triangled") rectangle respectively. The tiled rectangle is perfect[15][16] if the tiles are similar and finite in number and no two tiles are the same size. If two such tiles are the same size, the tiling is imperfect. In a perfect (or imperfect) triangled rectangle the triangles must be right triangles. A database of all known perfect rectangles, perfect squares and related shapes can be found at squaring.net. The lowest number of squares needed for a perfect tiling of a rectangle is 9,[17] and the lowest number needed for a perfect tiling of a square is 21, found in 1978 by computer search.[18]
A rectangle has commensurable sides if and only if it is tileable by a finite number of unequal squares.[15][19] The same is true if the tiles are unequal isosceles right triangles.
The tilings of rectangles by other tiles which have attracted the most attention are those by congruent non-rectangular polyominoes, allowing all rotations and reflections. There are also tilings by congruent polyaboloes.

Unicode
U+25AC ▬ BLACK RECTANGLE
U+25AD ▭ WHITE RECTANGLE
U+25AE ▮ BLACK VERTICAL RECTANGLE
U+25AF ▯ WHITE VERTICAL RECTANGLE

See also
• Cuboid
• Golden rectangle
• Hyperrectangle
• Superellipse (includes a rectangle with rounded corners)

References
1. "Archived copy" (PDF). Archived from the original (PDF) on 2014-05-14. Retrieved 2013-06-20.
2. Definition of Oblong. Mathsisfun.com. Retrieved 2011-11-13.
3. Oblong – Geometry – Math Dictionary. Icoachmath.com. Retrieved 2011-11-13.
4. Coxeter, Harold Scott MacDonald; Longuet-Higgins, M.S.; Miller, J.C.P. (1954). "Uniform polyhedra". Philosophical Transactions of the Royal Society of London. Series A. Mathematical and Physical Sciences. The Royal Society. 246 (916): 401–450. Bibcode:1954RSPTA.246..401C. doi:10.1098/rsta.1954.0003. ISSN 0080-4614. JSTOR 91532. MR 0062446. S2CID 202575183.
5.
Zalman Usiskin and Jennifer Griffin, "The Classification of Quadrilaterals. A Study of Definition", Information Age Publishing, 2008, pp. 34–36, ISBN 1-59311-695-0.
6. Owen Byer; Felix Lazebnik; Deirdre L. Smeltzer (19 August 2010). Methods for Euclidean Geometry. MAA. pp. 53–. ISBN 978-0-88385-763-2. Retrieved 2011-11-13.
7. Gerard Venema, "Exploring Advanced Euclidean Geometry with GeoGebra", MAA, 2013, p. 56.
8. Josefsson, Martin (2013). "Five Proofs of an Area Characterization of Rectangles" (PDF). Forum Geometricorum. 13: 17–21.
9. An Extended Classification of Quadrilaterals, archived 2019-12-30 at the Wayback Machine (an excerpt from De Villiers, M. 1996. Some Adventures in Euclidean Geometry. University of Durban-Westville.)
10. de Villiers, Michael, "Generalizing Van Aubel Using Duality", Mathematics Magazine 73 (4), Oct. 2000, pp. 303–307.
11. Cyclic Quadrilateral Incentre-Rectangle, with interactive animation illustrating a rectangle that becomes a 'crossed rectangle', making a good case for regarding a 'crossed rectangle' as a type of rectangle.
12. Hall, Leon M. & Robert P. Roe (1998). "An Unexpected Maximum in a Family of Rectangles" (PDF). Mathematics Magazine. 71 (4): 285–291. doi:10.1080/0025570X.1998.11996653. JSTOR 2690700.
13. Lassak, M. (1993). "Approximation of convex bodies by rectangles". Geometriae Dedicata. 47: 111–117. doi:10.1007/BF01263495. S2CID 119508642.
14. Stars: A Second Look (PDF). Retrieved 2011-11-13.
15. R.L. Brooks; C.A.B. Smith; A.H. Stone & W.T. Tutte (1940). "The dissection of rectangles into squares". Duke Math. J. 7 (1): 312–340. doi:10.1215/S0012-7094-40-00718-9.
16. J.D. Skinner II; C.A.B. Smith & W.T. Tutte (November 2000). "On the Dissection of Rectangles into Right-Angled Isosceles Triangles". Journal of Combinatorial Theory, Series B. 80 (2): 277–319. doi:10.1006/jctb.2000.1987.
17. Sloane, N. J. A. (ed.). "Sequence A219766 (Number of nonsquare simple perfect squared rectangles of order n up to symmetry)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
18. "Squared Squares; Perfect Simples, Perfect Compounds and Imperfect Simples". www.squaring.net. Retrieved 2021-09-26.
19. R. Sprague (1940). "Über die Zerlegung von Rechtecken in lauter verschiedene Quadrate". Journal für die reine und angewandte Mathematik. 1940 (182): 60–64. doi:10.1515/crll.1940.182.60. S2CID 118088887.

External links
• Weisstein, Eric W. "Rectangle". MathWorld.
• Definition and properties of a rectangle, with interactive animation.
• Area of a rectangle, with interactive animation.
Rectified 10-cubes
In ten-dimensional geometry, a rectified 10-cube is a convex uniform 10-polytope, being a rectification of the regular 10-cube.
(Orthogonal projections in the BC10 Coxeter plane: 10-orthoplex, rectified 10-orthoplex, birectified 10-orthoplex, trirectified 10-orthoplex, quadrirectified 10-orthoplex, quadrirectified 10-cube, trirectified 10-cube, birectified 10-cube, rectified 10-cube, 10-cube.)
There are 10 rectifications of the 10-cube, with the zeroth being the 10-cube itself. Vertices of the rectified 10-cube are located at the edge-centers of the 10-cube. Vertices of the birectified 10-cube are located in the square face centers of the 10-cube. Vertices of the trirectified 10-cube are located in the cubic cell centers of the 10-cube. The others are more simply constructed relative to the 10-cube's dual polytope, the 10-orthoplex. These polytopes are part of a family of 1023 uniform 10-polytopes with BC10 symmetry.

Rectified 10-cube
• Type: uniform 10-polytope
• Schläfli symbol: t1{3^8,4}
• Edges: 46080
• Vertices: 5120
• Vertex figure: 8-simplex prism
• Coxeter groups: C10, [4,3^8]; D10, [3^{7,1,1}]
• Properties: convex
Alternate names
• Rectified dekeract (acronym rade) (Jonathan Bowers)[1]
Cartesian coordinates
Cartesian coordinates for the vertices of a rectified 10-cube, centered at the origin, edge length ${\sqrt {2}}$, are all permutations of:
(±1,±1,±1,±1,±1,±1,±1,±1,±1,0)
Images
Orthographic projections exist in the B10, B9, B8, B7, B6, B5, B4, B3, B2 Coxeter planes (dihedral symmetry [20], [18], [16], [14], [12], [10], [8], [6], [4]) and in the A9, A7, A5, A3 planes ([10], [8], [6], [4]).

Birectified 10-cube
• Type: uniform 10-polytope
• Coxeter symbol: 0711
• Schläfli symbol: t2{3^8,4}
• Edges: 184320
• Vertices: 11520
• Vertex figure: {4}×{3^6}
• Coxeter groups: C10, [4,3^8]; D10, [3^{7,1,1}]
• Properties: convex
Alternate names
• Birectified dekeract (acronym brade) (Jonathan Bowers)[2]
Cartesian coordinates
Cartesian coordinates for the vertices of a birectified 10-cube, centered at the origin, edge length ${\sqrt {2}}$, are all permutations of:
(±1,±1,±1,±1,±1,±1,±1,±1,0,0)
Images
Orthographic projections as above.

Trirectified 10-cube
• Type: uniform 10-polytope
• Schläfli symbol: t3{3^8,4}
• Edges: 322560
• Vertices: 15360
• Vertex figure: {4,3}×{3^5}
• Coxeter groups: C10, [4,3^8]; D10, [3^{7,1,1}]
• Properties: convex
Alternate names
• Trirectified dekeract (acronym trade) (Jonathan Bowers)[3]
Cartesian coordinates
Cartesian coordinates for the vertices of a trirectified 10-cube, centered at the origin, edge length ${\sqrt {2}}$, are all permutations of:
(±1,±1,±1,±1,±1,±1,±1,0,0,0)
Images
Orthographic projections as above.

Quadrirectified 10-cube
• Type: uniform 10-polytope
• Schläfli symbol: t4{3^8,4}
• Edges: 322560
• Vertices: 13440
• Vertex figure: {4,3,3}×{3^4}
• Coxeter groups: C10, [4,3^8]; D10, [3^{7,1,1}]
• Properties: convex
Alternate names
• Quadrirectified dekeract
• Quadrirectified decacross (acronym terade) (Jonathan Bowers)[4]
Cartesian coordinates
Cartesian coordinates for the vertices of a quadrirectified 10-cube, centered at the origin, edge length ${\sqrt {2}}$, are all permutations of:
(±1,±1,±1,±1,±1,±1,0,0,0,0)
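The coordinate descriptions above determine the vertex counts directly: choosing which k coordinates are zero and a sign for each of the remaining 10 − k gives C(10,k) * 2^(10-k) vertices for the k-rectified 10-cube. A minimal sketch verifying the counts quoted above (plain Python):

```python
from itertools import combinations, product
from math import comb

def vertices(k, n=10):
    """Vertex set of the k-rectified n-cube in the coordinates above:
    all permutations of (+/-1, ..., +/-1, 0, ..., 0) with k zeros."""
    out = set()
    for zpos in combinations(range(n), k):
        for signs in product((-1, 1), repeat=n - k):
            it = iter(signs)
            out.add(tuple(0 if i in zpos else next(it) for i in range(n)))
    return out

# Vertex counts quoted above: 5120, 11520, 15360, 13440.
for k, expected in [(1, 5120), (2, 11520), (3, 15360), (4, 13440)]:
    assert len(vertices(k)) == expected == comb(10, k) * 2 ** (10 - k)
```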
Images
Orthographic projections as above.

Notes
1. Klitzing, (o3o3o3o3o3o3o3o3x4o - rade)
2. Klitzing, (o3o3o3o3o3o3o3x3o4o - brade)
3. Klitzing, (o3o3o3o3o3o3x3o3o4o - trade)
4. Klitzing, (o3o3o3o3o3x3o3o3o4o - terade)

References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559–591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3–45]
• Norman Johnson, Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966)
• Klitzing, Richard. "10D uniform polytopes (polyxenna)". x3o3o3o3o3o3o3o3o4o - ka, o3x3o3o3o3o3o3o3o4o - rake, o3o3x3o3o3o3o3o3o4o - brake, o3o3o3x3o3o3o3o3o4o - trake, o3o3o3o3x3o3o3o3o4o - terake, o3o3o3o3o3x3o3o3o4o - terade, o3o3o3o3o3o3x3o3o4o - trade, o3o3o3o3o3o3o3x3o4o - brade, o3o3o3o3o3o3o3o3x4o - rade, o3o3o3o3o3o3o3o3o4x - deker

External links
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
24-cell
In geometry, the 24-cell is the convex regular 4-polytope[1] (four-dimensional analogue of a Platonic solid) with Schläfli symbol {3,4,3}. It is also called C24, or the icositetrachoron,[2] octaplex (short for "octahedral complex"), icosatetrahedroid,[3] octacube, hyper-diamond or polyoctahedron, being constructed of octahedral cells.

24-cell (Schlegel diagram: vertices and edges)
• Type: Convex regular 4-polytope
• Schläfli symbol: {3,4,3}; also r{3,3,4} and {3^{1,1,1}}
• Cells: 24 {3,4}
• Faces: 96 {3}
• Edges: 96
• Vertices: 24
• Vertex figure: Cube
• Petrie polygon: dodecagon
• Coxeter groups: F4, [3,4,3], order 1152; B4, [4,3,3], order 384; D4, [3^{1,1,1}], order 192
• Dual: Self-dual
• Properties: convex, isogonal, isotoxal, isohedral
• Uniform index: 22

The boundary of the 24-cell is composed of 24 octahedral cells, with six meeting at each vertex and three at each edge. Together they have 96 triangular faces, 96 edges, and 24 vertices. The vertex figure is a cube. The 24-cell is self-dual.[lower-alpha 1] The 24-cell and the tesseract are the only convex regular 4-polytopes in which the edge length equals the radius.[lower-alpha 2]
The 24-cell does not have a regular analogue in 3 dimensions. It is the only one of the six convex regular 4-polytopes which is not the four-dimensional analogue of one of the five regular Platonic solids. It is the unique regular polytope, in any number of dimensions, which has no regular analogue in the adjacent dimension, either below or above.[4] However, it can be seen as the analogue of a pair of irregular solids: the cuboctahedron and its dual the rhombic dodecahedron.[5]
Translated copies of the 24-cell can tile four-dimensional space face-to-face, forming the 24-cell honeycomb. As a polytope that can tile by translation, the 24-cell is an example of a parallelotope, the simplest one that is not also a zonotope.[6]

Geometry
The 24-cell incorporates the geometries of every convex regular polytope in the first four dimensions, except the 5-cell, those with a 5 in their Schläfli symbol,[lower-alpha 3] and the polygons {7} and above. It is especially useful to explore the 24-cell, because one can see the geometric relationships among all of these regular polytopes in a single 24-cell or its honeycomb.
The 24-cell is the fourth in the sequence of 6 convex regular 4-polytopes (in order of size and complexity).[lower-alpha 4] It can be deconstructed into 3 overlapping instances of its predecessor the tesseract (8-cell), as the 8-cell can be deconstructed into 2 overlapping instances of its predecessor the 16-cell.[8] The reverse procedure to construct each of these from an instance of its predecessor preserves the radius of the predecessor, but generally produces a successor with a smaller edge length.[lower-alpha 5]

Squares
The 24-cell is the convex hull of its vertices, which can be described as the 24 coordinate permutations of:
$(\pm 1,\pm 1,0,0)\in \mathbb {R} ^{4}.$
Those coordinates[9] can be constructed by rectifying the 16-cell whose 8 vertices are the permutations of (±2,0,0,0). The vertex figure of a 16-cell is the octahedron; thus, cutting the vertices of the 16-cell at the midpoint of its incident edges produces 8 octahedral cells. This process[10] also rectifies the tetrahedral cells of the 16-cell, which become 16 octahedra, giving the 24-cell 24 octahedral cells.
In this frame of reference the 24-cell has edges of length √2 and is inscribed in a 3-sphere of radius √2.
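A quick numerical check of this frame of reference, as a minimal sketch in plain Python: the 24 permutations of (±1, ±1, 0, 0) all lie at radius √2, and the 96 nearest-neighbor chords also have length √2, so the edge length equals the circumradius.

```python
from itertools import combinations, permutations
import math

# All distinct permutations of (+/-1, +/-1, 0, 0).
verts = {p for s1 in (-1, 1) for s2 in (-1, 1)
         for p in permutations((s1, s2, 0, 0))}
assert len(verts) == 24

origin = (0, 0, 0, 0)
assert all(math.isclose(math.dist(v, origin), math.sqrt(2)) for v in verts)

# Edges: nearest-neighbor pairs, at distance sqrt(2) = the circumradius.
edges = [(u, v) for u, v in combinations(verts, 2)
         if math.isclose(math.dist(u, v), math.sqrt(2))]
assert len(edges) == 96
```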
Remarkably, the edge length equals the circumradius, as in the hexagon or the cuboctahedron. Such polytopes are radially equilateral.[lower-alpha 2]

Regular convex 4-polytopes of radius √2:

Symmetry group | A4 | B4 | B4 | F4 | H4 | H4
Name | 5-cell (hyper-tetrahedron, 5-point) | 16-cell (hyper-octahedron, 8-point) | 8-cell (hyper-cube, 16-point) | 24-cell (24-point) | 600-cell (hyper-icosahedron, 120-point) | 120-cell (hyper-dodecahedron, 600-point)
Schläfli symbol | {3, 3, 3} | {3, 3, 4} | {4, 3, 3} | {3, 4, 3} | {3, 3, 5} | {5, 3, 3}
Mirror dihedrals | π/3 π/3 π/3 π/2 π/2 π/2 | π/3 π/3 π/4 π/2 π/2 π/2 | π/4 π/3 π/3 π/2 π/2 π/2 | π/3 π/4 π/3 π/2 π/2 π/2 | π/3 π/3 π/5 π/2 π/2 π/2 | π/5 π/3 π/3 π/2 π/2 π/2
Vertices | 5 tetrahedral | 8 octahedral | 16 tetrahedral | 24 cubical | 120 icosahedral | 600 tetrahedral
Edges | 10 triangular | 24 square | 32 triangular | 96 triangular | 720 pentagonal | 1200 triangular
Faces | 10 triangles | 32 triangles | 24 squares | 96 triangles | 1200 triangles | 720 pentagons
Cells | 5 tetrahedra | 16 tetrahedra | 8 cubes | 24 octahedra | 600 tetrahedra | 120 dodecahedra
Tori | 1 5-tetrahedron | 2 8-tetrahedron | 2 4-cube | 4 6-octahedron | 20 30-tetrahedron | 12 10-dodecahedron
Inscribed | 120 in 120-cell | 675 in 120-cell | 2 16-cells | 3 8-cells | 25 24-cells | 10 600-cells
Great polygons | — | 2 squares x 3 | 4 rectangles x 4 | 4 hexagons x 4 | 12 decagons x 6 | 100 irregular hexagons x 4
Petrie polygons | 1 pentagon | 1 octagon | 2 octagons | 2 dodecagons | 4 30-gons | 20 30-gons
Long radius | ${\sqrt {2}}$ | ${\sqrt {2}}$ | ${\sqrt {2}}$ | ${\sqrt {2}}$ | ${\sqrt {2}}$ | ${\sqrt {2}}$
Edge length | ${\sqrt {5}}\approx 2.236$ | $2$ | ${\sqrt {2}}\approx 1.414$ | ${\sqrt {2}}\approx 1.414$ | ${\tfrac {\sqrt {2}}{\phi }}\approx 0.874$ | $2-\phi \approx 0.382$
Short radius | ${\tfrac {\sqrt {2}}{4}}\approx 0.354$ | ${\tfrac {\sqrt {2}}{2}}\approx 0.707$ | ${\tfrac {\sqrt {2}}{2}}\approx 0.707$ | $1$ | ${\sqrt {\tfrac {\phi ^{4}}{4}}}\approx 1.309$ | ${\sqrt {\tfrac {\phi ^{4}}{4}}}\approx 1.309$
Area | $10\left({\tfrac {5{\sqrt {3}}}{4}}\right)\approx 21.651$ | $32\left({\sqrt {3}}\right)\approx 55.425$ | $48$ | $96\left({\sqrt {\tfrac {3}{4}}}\right)\approx 83.138$ | $1200\left({\tfrac {2{\sqrt {3}}}{4\phi ^{2}}}\right)\approx 396.95$ | $720\left({\tfrac {\sqrt {25+10{\sqrt {5}}}}{4\phi ^{4}}}\right)\approx 180.73$
Volume | $5\left({\tfrac {5{\sqrt {10}}}{12}}\right)\approx 6.588$ | $16\left({\tfrac {2{\sqrt {2}}}{3}}\right)\approx 15.085$ | $8{\sqrt {8}}\approx 22.627$ | $24\left({\tfrac {4}{3}}\right)=32$ | $600\left({\tfrac {4}{12\phi ^{3}}}\right)\approx 47.214$ | $120\left({\tfrac {15+7{\sqrt {5}}}{4\phi ^{6}}}\right)\approx 51.246$
4-Content | ${\tfrac {\sqrt {5}}{24}}\left({\sqrt {5}}\right)^{4}\approx 2.329$ | ${\tfrac {8}{3}}\approx 2.666$ | $4$ | $8$ | ${\tfrac {{\text{Short}}\times {\text{Vol}}}{4}}\approx 15.451$ | ${\tfrac {{\text{Short}}\times {\text{Vol}}}{4}}\approx 16.770$

The 24 vertices form 18 great squares[lower-alpha 6] (3 sets of 6 orthogonal[lower-alpha 8] central squares), 3 of which intersect at each vertex. By viewing just one square at each vertex, the 24-cell can be seen as the vertices of 3 pairs of completely orthogonal[lower-alpha 7] great squares which intersect[lower-alpha 11] at no vertices.[lower-alpha 12]

Hexagons
The 24-cell is self-dual, having the same number of vertices (24) as cells and the same number of edges (96) as faces.
If the dual of the above 24-cell of edge length √2 is taken by reciprocating it about its inscribed sphere, another 24-cell is found which has edge length and circumradius 1, and its coordinates reveal more structure.
In this frame of reference the 24-cell lies vertex-up, and its vertices can be given as follows: 8 vertices obtained by permuting the integer coordinates:
$\left(\pm 1,0,0,0\right)$
and 16 vertices with half-integer coordinates of the form:
$\left(\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}}\right)$
all 24 of which lie at distance 1 from the origin.
Viewed as quaternions,[lower-alpha 13] these are the unit Hurwitz quaternions.
The 24-cell has unit radius and unit edge length[lower-alpha 2] in this coordinate system. We refer to the system as unit radius coordinates to distinguish it from others, such as the √2 radius coordinates used above.[lower-alpha 14]

Regular convex 4-polytopes of radius 1 (the combinatorial rows of the radius-√2 table above are unchanged; only the metrical values differ):

Name | 5-cell | 16-cell | 8-cell | 24-cell | 600-cell | 120-cell
Long radius | $1$ | $1$ | $1$ | $1$ | $1$ | $1$
Edge length | ${\sqrt {\tfrac {5}{2}}}\approx 1.581$ | ${\sqrt {2}}\approx 1.414$ | $1$ | $1$ | ${\tfrac {1}{\phi }}\approx 0.618$ | ${\tfrac {1}{\phi ^{2}{\sqrt {2}}}}\approx 0.270$
Short radius | ${\tfrac {1}{4}}$ | ${\tfrac {1}{2}}$ | ${\tfrac {1}{2}}$ | ${\sqrt {\tfrac {1}{2}}}\approx 0.707$ | ${\sqrt {\tfrac {\phi ^{4}}{8}}}\approx 0.926$ | ${\sqrt {\tfrac {\phi ^{4}}{8}}}\approx 0.926$
Area | $10\left({\tfrac {5{\sqrt {3}}}{8}}\right)\approx 10.825$ | $32\left({\sqrt {\tfrac {3}{4}}}\right)\approx 27.713$ | $24$ | $96\left({\sqrt {\tfrac {3}{16}}}\right)\approx 41.569$ | $1200\left({\tfrac {\sqrt {3}}{4\phi ^{2}}}\right)\approx 198.48$ | $720\left({\tfrac {\sqrt {25+10{\sqrt {5}}}}{8\phi ^{4}}}\right)\approx 90.366$
Volume | $5\left({\tfrac {5{\sqrt {5}}}{24}}\right)\approx 2.329$ | $16\left({\tfrac {1}{3}}\right)\approx 5.333$ | $8$ | $24\left({\tfrac {\sqrt {2}}{3}}\right)\approx 11.314$ | $600\left({\tfrac {\sqrt {2}}{12\phi ^{3}}}\right)\approx 16.693$ | $120\left({\tfrac {15+7{\sqrt {5}}}{4\phi ^{6}{\sqrt {8}}}}\right)\approx 18.118$
4-Content | ${\tfrac {\sqrt {5}}{24}}\left({\tfrac {\sqrt {5}}{2}}\right)^{4}\approx 0.146$ | ${\tfrac {2}{3}}\approx 0.667$ | $1$ | $2$ | ${\tfrac {{\text{Short}}\times {\text{Vol}}}{4}}\approx 3.863$ | ${\tfrac {{\text{Short}}\times {\text{Vol}}}{4}}\approx 4.193$

The 24 vertices and 96 edges form 16 non-orthogonal great hexagons,[lower-alpha 17] four of which intersect[lower-alpha 11] at each vertex.[lower-alpha 19] By viewing just one hexagon at each vertex, the 24-cell can be seen as the 24 vertices of 4 non-intersecting hexagonal great circles which are Clifford
parallel to each other.[lower-alpha 20] The 12 axes and 16 hexagons of the 24-cell constitute a Reye configuration, which in the language of configurations is written as 12₄16₃ to indicate that each axis belongs to 4 hexagons, and each hexagon contains 3 axes.[11]

Triangles
The 24 vertices form 32 equilateral great triangles, of edge length √3 in the unit-radius 24-cell,[lower-alpha 23] inscribed in the 16 great hexagons.[lower-alpha 24] Each great triangle is a ring linking three completely disjoint[lower-alpha 25] great squares.[lower-alpha 28]

Hypercubic chords
The 24 vertices of the 24-cell are distributed[12] at four different chord lengths from each other: √1, √2, √3 and √4. Each vertex is joined to 8 others[lower-alpha 29] by an edge of length 1, spanning 60° = π/3 of arc. Next nearest are 6 vertices[lower-alpha 30] located 90° = π/2 away, along an interior chord of length √2. Another 8 vertices lie 120° = 2π/3 away, along an interior chord of length √3.[lower-alpha 31] The opposite vertex is 180° = π away along a diameter of length 2. Finally, as the 24-cell is radially equilateral, its center can be treated[lower-alpha 32] as a 25th canonical apex vertex,[lower-alpha 33] which is 1 edge length away from all the others.
To visualize how the interior polytopes of the 24-cell fit together (as described below), keep in mind that the four chord lengths (√1, √2, √3, √4) are the long diameters of the hypercubes of dimensions 1 through 4: the long diameter of the square is √2; the long diameter of the cube is √3; and the long diameter of the tesseract is √4.[lower-alpha 34] Moreover, the long diameter of the octahedron is √2 like the square; and the long diameter of the 24-cell itself is √4 like the tesseract. In the 24-cell, the √2 chords are the edges of central squares, and the √4 chords are the diagonals of central squares.

Geodesics
The vertex chords of the 24-cell are arranged in geodesic great circle polygons.[lower-alpha 36] The geodesic distance between two 24-cell vertices along a path of √1 edges is always 1, 2, or 3, and it is 3 only for opposite vertices.[lower-alpha 37]
The √1 edges occur in 16 hexagonal great circles (in planes inclined at 60 degrees to each other), 4 of which cross[lower-alpha 19] at each vertex.[lower-alpha 18] The 96 distinct √1 edges divide the surface into 96 triangular faces and 24 octahedral cells: a 24-cell. The 16 hexagonal great circles can be divided into 4 sets of 4 non-intersecting Clifford parallel geodesics, such that only one hexagonal great circle in each set passes through each vertex, and the 4 hexagons in each set reach all 24 vertices.[lower-alpha 20]
(Orthogonal projections of the 24-cell: F4 Coxeter plane, dihedral symmetry [12]; B3/A2 planes, [6]; B4 plane, [8]; B2/A3 plane, [4].)
The √2 chords occur in 18 square great circles (3 sets of 6 orthogonal planes[lower-alpha 10]), 3 of which cross at each vertex.[lower-alpha 40] The 72 distinct √2 chords do not run in the same planes as the hexagonal great circles; they do not follow the 24-cell's edges, they pass through its octahedral cell centers.[lower-alpha 41] The 72 √2 chords are the 3 orthogonal axes of the 24 octahedral cells, joining vertices which are 2 √1 edges apart.
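The chord structure just described is easy to tabulate from the unit-radius coordinates. A minimal sketch (plain Python) counting, from each vertex, the 8 + 6 + 8 + 1 other vertices at squared distances 1, 2, 3, 4, and the squared-chord total 576 = 24² quoted below:

```python
from itertools import combinations, permutations, product
from collections import Counter
import math

# Unit-radius 24-cell: 8 integer vertices plus 16 half-integer vertices.
ints = {p for s in (-1, 1) for p in permutations((s, 0, 0, 0))}
halves = set(product((-0.5, 0.5), repeat=4))
verts = ints | halves
assert len(verts) == 24
assert all(math.isclose(math.dist(v, (0,) * 4), 1.0) for v in verts)

# Squared chord lengths between distinct vertices: 1, 2, 3, or 4.
sq = Counter(round(math.dist(u, v) ** 2) for u, v in combinations(verts, 2))
assert sq == Counter({1: 96, 2: 72, 3: 96, 4: 12})   # 8, 6, 8, 1 per vertex
assert sum(k * n for k, n in sq.items()) == 576       # = 24^2
```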
The 18 square great circles can be divided into 3 sets of 6 non-intersecting Clifford parallel geodesics,[lower-alpha 35] such that only one square great circle in each set passes through each vertex, and the 6 squares in each set reach all 24 vertices.[lower-alpha 12]
The √3 chords occur in 32 triangular great circles in 16 planes, 4 of which cross at each vertex.[lower-alpha 31] The 96 distinct √3 chords[lower-alpha 23] run vertex-to-every-other-vertex in the same planes as the hexagonal great circles.[lower-alpha 24] They are the 3 edges of the 32 great triangles inscribed in the 16 great hexagons, joining vertices which are 2 √1 edges apart on a great circle.[lower-alpha 22]
The √4 chords occur as 12 vertex-to-vertex diameters (3 sets of 4 orthogonal axes), the 24 radii around the 25th central vertex.[lower-alpha 33]
The sum of the squared lengths[lower-alpha 45] of all these distinct chords of the 24-cell is 576 = 24².[lower-alpha 46]
These are all the central polygons through vertices, but in 4-space there are geodesics on the 3-sphere which do not lie in central planes at all. There are geodesic shortest paths between two 24-cell vertices that are helical rather than simply circular; they correspond to diagonal isoclinic rotations rather than simple rotations.[lower-alpha 47]
The √1 edges occur in 48 parallel pairs, √3 apart. The √2 chords occur in 36 parallel pairs, √2 apart. The √3 chords occur in 48 parallel pairs, √1 apart.[lower-alpha 48]
The central planes of the 24-cell can be divided into 4 central hyperplanes (3-spaces) each forming a cuboctahedron. The great hexagons are 60 degrees apart; the great squares are 90 degrees or 60 degrees apart; a great square and a great hexagon are 90 degrees and 60 degrees apart.[lower-alpha 50] Each set of similar central polygons (squares or hexagons) can be divided into 4 sets of non-intersecting Clifford parallel polygons (of 6 squares or 4 hexagons).[lower-alpha 51] Each set of Clifford parallel great circles is a parallel fiber bundle which visits all 24 vertices just once.
Each great circle intersects[lower-alpha 11] with the other great circles to which it is not Clifford parallel at one √4 diameter of the 24-cell.[lower-alpha 52] Great circles which are completely orthogonal[lower-alpha 7] or otherwise Clifford parallel[lower-alpha 35] do not intersect at all: they pass through disjoint sets of vertices.[lower-alpha 53]

Constructions
Triangles and squares come together uniquely in the 24-cell to generate, as interior features,[lower-alpha 32] all of the triangle-faced and square-faced regular convex polytopes in the first four dimensions (with caveats for the 5-cell and the 600-cell).[lower-alpha 54] Consequently, there are numerous ways to construct or deconstruct the 24-cell.

Reciprocal constructions from 8-cell and 16-cell
The 8 integer vertices (±1, 0, 0, 0) are the vertices of a regular 16-cell, and the 16 half-integer vertices (±1/2, ±1/2, ±1/2, ±1/2) are the vertices of its dual, the tesseract (8-cell).[21] The tesseract gives Gosset's construction[22] of the 24-cell, equivalent to cutting a tesseract into 8 cubic pyramids, and then attaching them to the facets of a second tesseract. The analogous construction in 3-space gives the rhombic dodecahedron which, however, is not regular.[lower-alpha 55] The 16-cell gives the reciprocal construction of the 24-cell, Cesaro's construction,[23] equivalent to rectifying a 16-cell (truncating its corners at the mid-edges, as described above).
The analogous construction in 3-space gives the cuboctahedron (dual of the rhombic dodecahedron) which, however, is not regular. The tesseract and the 16-cell are the only regular 4-polytopes in the 24-cell.[24]

We can further divide the 16 half-integer vertices into two groups: those whose coordinates contain an even number of minus (−) signs and those with an odd number. Each of these groups of 8 vertices also defines a regular 16-cell. This shows that the vertices of the 24-cell can be grouped into three disjoint sets of eight, with each set defining a regular 16-cell, and with the complement defining the dual tesseract.[25] This also shows that the symmetries of the 16-cell form a subgroup of index 3 of the symmetry group of the 24-cell.[lower-alpha 27]

Diminishings

We can facet the 24-cell by cutting[lower-alpha 56] through interior cells bounded by vertex chords to remove vertices, exposing the facets of interior 4-polytopes inscribed in the 24-cell. One can cut a 24-cell through any planar hexagon of 6 vertices, any planar rectangle of 4 vertices, or any triangle of 3 vertices. The great circle central planes (above) are only some of those planes. Here we shall expose some of the others: the face planes[lower-alpha 57] of interior polytopes.[lower-alpha 58]

8-cell

Starting with a complete 24-cell, remove 8 orthogonal vertices (4 opposite pairs on 4 perpendicular axes), and the 8 edges which radiate from each, by cutting through 8 cubic cells bounded by √1 edges to remove 8 cubic pyramids whose apexes are the vertices to be removed. This removes 4 edges from each hexagonal great circle (retaining just one opposite pair of edges), so no continuous hexagonal great circles remain. Now 3 perpendicular edges meet and form the corner of a cube at each of the 16 remaining vertices,[lower-alpha 59] and the 32 remaining edges divide the surface into 24 square faces and 8 cubic cells: a tesseract. There are three ways you can do this (choose a set of 8 orthogonal vertices out of 24), so there are three such tesseracts inscribed in the 24-cell.[lower-alpha 22] They overlap with each other, but most of their element sets are disjoint: they share some vertex count, but no edge length, face area, or cell volume.[lower-alpha 60] They do share 4-content, their common core.[lower-alpha 61]

16-cell

Starting with a complete 24-cell, remove the 16 vertices of a tesseract (retaining the 8 vertices you removed above), by cutting through 16 tetrahedral cells bounded by √2 chords to remove 16 tetrahedral pyramids whose apexes are the vertices to be removed. This removes 12 great squares (retaining just one orthogonal set) and all the √1 edges, exposing √2 chords as the new edges. Now the remaining 6 great squares cross perpendicularly, 3 at each of 8 remaining vertices,[lower-alpha 62] and their 24 edges divide the surface into 32 triangular faces and 16 tetrahedral cells: a 16-cell. There are three ways you can do this (remove 1 of 3 sets of tesseract vertices), so there are three such 16-cells inscribed in the 24-cell.[lower-alpha 26] They overlap with each other, but all of their element sets are disjoint:[lower-alpha 25] they do not share any vertex count, edge length,[lower-alpha 63] or face area, but they do share cell volume.
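The grouping into three disjoint 16-cells can be checked directly (again an illustrative sketch, not from the source): split the half-integer vertices by the parity of their minus signs, and confirm that each of the three groups of 8 vertices is a cross-polytope, i.e. every pair of its vertices is either orthogonal (√2 apart) or antipodal (√4 apart).

```python
from itertools import combinations, product
from fractions import Fraction

half = Fraction(1, 2)
integer = [tuple(s if i == a else 0 for i in range(4))
           for a in range(4) for s in (1, -1)]
halves = list(product((half, -half), repeat=4))
even = [v for v in halves if sum(x < 0 for x in v) % 2 == 0]
odd  = [v for v in halves if sum(x < 0 for x in v) % 2 == 1]

for cell16 in (integer, even, odd):
    d2s = {int(sum((x - y) ** 2 for x, y in zip(u, v)))
           for u, v in combinations(cell16, 2)}
    print(sorted(d2s))   # [2, 4] each time: a regular 16-cell
```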
These three 16-cells also share 4-content, their common core.[lower-alpha 61]

Tetrahedral constructions

The 24-cell can be constructed radially from 96 equilateral triangles of edge length √1 which meet at the center of the polytope, each contributing two radii and an edge.[lower-alpha 2] They form 96 √1 tetrahedra (each contributing one 24-cell face), all sharing the 25th central apex vertex. These form 24 octahedral pyramids (half-16-cells) with their apexes at the center.

The 24-cell can be constructed from 96 equilateral triangles of edge length √2, where the three vertices of each triangle are located 90° = π/2 away from each other on the 3-sphere. They form 48 √2 tetrahedra (the cells of the three 16-cells), centered at the 24 mid-edge-radii of the 24-cell.[lower-alpha 63]

The 24-cell can be constructed directly from its characteristic simplex, the irregular 5-cell which is the fundamental region of its symmetry group F4, by reflection of that 4-orthoscheme in its own cells (which are 3-orthoschemes).[lower-alpha 64]

Relationships among interior polytopes

The 24-cell, three tesseracts, and three 16-cells are deeply entwined around their common center, and intersect in a common core.[lower-alpha 61] The tesseracts and the 16-cells are rotated 60° isoclinically[lower-alpha 15] with respect to each other. This means that the corresponding vertices of two tesseracts or two 16-cells are √3 (120°) apart.[lower-alpha 22]

The tesseracts are inscribed in the 24-cell[lower-alpha 65] such that their vertices and edges are exterior elements of the 24-cell, but their square faces and cubical cells lie inside the 24-cell (they are not elements of the 24-cell). The 16-cells are inscribed in the 24-cell[lower-alpha 66] such that only their vertices are exterior elements of the 24-cell: their edges, triangular faces, and tetrahedral cells lie inside the 24-cell. The interior[lower-alpha 67] 16-cell edges have length √2.[lower-alpha 28]

The 16-cells are also inscribed in the tesseracts: their √2 edges are the face diagonals of the tesseract, and their 8 vertices occupy every other vertex of the tesseract. Each tesseract has two 16-cells inscribed in it (occupying the opposite vertices and face diagonals), so each 16-cell is inscribed in two of the three 8-cells.[29][lower-alpha 27] This is reminiscent of the way, in 3 dimensions, two opposing regular tetrahedra can be inscribed in a cube, as discovered by Kepler.[28] In fact it is the exact dimensional analogy (the demihypercubes), and the 48 tetrahedral cells are inscribed in the 24 cubical cells in just that way.[30][lower-alpha 63]

The 24-cell encloses the three tesseracts within its envelope of octahedral facets, leaving 4-dimensional space in some places between its envelope and each tesseract's envelope of cubes. Each tesseract encloses two of the three 16-cells, leaving 4-dimensional space in some places between its envelope and each 16-cell's envelope of tetrahedra. Thus there are measurable[7] 4-dimensional interstices[lower-alpha 68] between the 24-cell, 8-cell and 16-cell envelopes. The shapes filling these gaps are 4-pyramids, alluded to above.[lower-alpha 69]

Boundary cells

Despite the 4-dimensional interstices between 24-cell, 8-cell and 16-cell envelopes, their 3-dimensional volumes overlap. The different envelopes are separated in some places, and in contact in other places (where no 4-pyramid lies between them).
Where they are in contact, they merge and share cell volume: they are the same 3-membrane in those places, not two separate but adjacent 3-dimensional layers.[lower-alpha 71] Because there are a total of 7 envelopes, there are places where several envelopes come together and merge volume, and also places where envelopes interpenetrate (cross from inside to outside each other). Some interior features lie within the 3-space of the (outer) boundary envelope of the 24-cell itself: each octahedral cell is bisected by three perpendicular squares (one from each of the tesseracts), and the diagonals of those squares (which cross each other perpendicularly at the center of the octahedron) are 16-cell edges (one from each 16-cell). Each square bisects an octahedron into two square pyramids, and also bonds two adjacent cubic cells of a tesseract together as their common face.[lower-alpha 70]

As we saw above, 16-cell √2 tetrahedral cells are inscribed in tesseract √1 cubic cells, sharing the same volume. 24-cell √1 octahedral cells overlap their volume with √1 cubic cells: they are bisected by a square face into two square pyramids,[32] the apexes of which also lie at a vertex of a cube.[lower-alpha 72] The octahedra share volume not only with the cubes, but with the tetrahedra inscribed in them; thus the 24-cell, tesseracts, and 16-cells all share some boundary volume.[lower-alpha 71]

As a configuration

This configuration matrix[33] represents the 24-cell. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole 24-cell. The non-diagonal numbers say how many of the column's element occur in or at the row's element. (The counts are mutually consistent: the diagonal count of one element type times its incidence with a second equals the diagonal count of the second times its incidence with the first; for example, 24 vertices × 8 edges per vertex = 96 edges × 2 vertices per edge.)

${\begin{bmatrix}{\begin{matrix}24&8&12&6\\2&96&3&3\\3&3&96&2\\6&12&8&24\end{matrix}}\end{bmatrix}}$

Since the 24-cell is self-dual, its matrix is identical to its 180 degree rotation.

Symmetries, root systems, and tessellations

The 24 root vectors of the D4 root system of the simple Lie group SO(8) form the vertices of a 24-cell. The vertices can be seen in 3 hyperplanes,[lower-alpha 49] with the 6 vertices of an octahedron cell on each of the outer hyperplanes and 12 vertices of a cuboctahedron on a central hyperplane. These vertices, combined with the 8 vertices of the 16-cell, represent the 32 root vectors of the B4 and C4 simple Lie groups.

The 48 vertices (or strictly speaking their radius vectors) of the union of the 24-cell and its dual form the root system of type F4.[35] The 24 vertices of the original 24-cell form a root system of type D4; its size has the ratio √2:1. This is likewise true for the 24 vertices of its dual. The full symmetry group of the 24-cell is the Weyl group of F4, which is generated by reflections through the hyperplanes orthogonal to the F4 roots. This is a solvable group of order 1152. The rotational symmetry group of the 24-cell is of order 576.

Quaternionic interpretation

When interpreted as the quaternions,[lower-alpha 13] the F4 root lattice (which is the integral span of the vertices of the 24-cell) is closed under multiplication and is therefore a ring. This is the ring of Hurwitz integral quaternions. The vertices of the 24-cell form the group of units (i.e. the group of invertible elements) in the Hurwitz quaternion ring (this group is also known as the binary tetrahedral group). The vertices of the 24-cell are precisely the 24 Hurwitz quaternions with norm squared 1, and the vertices of the dual 24-cell are those with norm squared 2.
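This group structure is easy to exhibit concretely. The sketch below (an illustration added here, not from the source) treats each unit-radius vertex (w, x, y, z) as the quaternion w + xi + yj + zk and verifies that the 24 vertices are closed under quaternion multiplication, as the group of Hurwitz units must be:

```python
from itertools import product
from fractions import Fraction

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

half = Fraction(1, 2)
units = {tuple(Fraction(s) if i == a else Fraction(0) for i in range(4))
         for a in range(4) for s in (1, -1)}       # ±1, ±i, ±j, ±k
units |= set(product((half, -half), repeat=4))      # (±1/2, ±1/2, ±1/2, ±1/2)

assert len(units) == 24
assert all(qmul(p, q) in units for p in units for q in units)
print("the 24 vertices form a group under quaternion multiplication")
```

The vertices of the dual 24-cell (norm squared 2, such as (±1, ±1, 0, 0)) are not units of the ring, which is why they sit at a different scale in the lattice.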
The D4 root lattice is the dual of the F4 lattice, and is given by the subring of Hurwitz quaternions with even norm squared.[37] Viewed as the 24 unit Hurwitz quaternions, the unit radius coordinates of the 24-cell represent (in antipodal pairs) the 12 rotations of a regular tetrahedron.[38] Vertices of other convex regular 4-polytopes also form multiplicative groups of quaternions, but few of them generate a root lattice.

Voronoi cells

The Voronoi cells of the D4 root lattice are regular 24-cells. The corresponding Voronoi tessellation gives the tessellation of 4-dimensional Euclidean space by regular 24-cells, the 24-cell honeycomb. The 24-cells are centered at the D4 lattice points (Hurwitz quaternions with even norm squared) while the vertices are at the F4 lattice points with odd norm squared. Each 24-cell of this tessellation has 24 neighbors. With each of these it shares an octahedron. It also has 24 other neighbors with which it shares only a single vertex. Eight 24-cells meet at any given vertex in this tessellation. The Schläfli symbol for this tessellation is {3,4,3,3}. It is one of only three regular tessellations of $\mathbb{R}^{4}$. The unit balls inscribed in the 24-cells of this tessellation give rise to the densest known lattice packing of hyperspheres in 4 dimensions. The vertex configuration of the 24-cell has also been shown to give the highest possible kissing number in 4 dimensions.

Radially equilateral honeycomb

The dual tessellation of the 24-cell honeycomb {3,4,3,3} is the 16-cell honeycomb {3,3,4,3}. The third regular tessellation of four-dimensional space is the tesseractic honeycomb {4,3,3,4}, whose vertices can be described by 4-integer Cartesian coordinates.[lower-alpha 13] The congruent relationships among these three tessellations can be helpful in visualizing the 24-cell, in particular the radial equilateral symmetry which it shares with the tesseract.[lower-alpha 2]

A honeycomb of unit edge length 24-cells may be overlaid on a honeycomb of unit edge length tesseracts such that every vertex of a tesseract (every 4-integer coordinate) is also the vertex of a 24-cell (and tesseract edges are also 24-cell edges), and every center of a 24-cell is also the center of a tesseract.[39] The 24-cells are twice as large as the tesseracts by 4-dimensional content (hypervolume), so overall there are two tesseracts for every 24-cell, only half of which are inscribed in a 24-cell. If those tesseracts are colored black, and their adjacent tesseracts (with which they share a cubical facet) are colored red, a 4-dimensional checkerboard results.[40] Of the 24 center-to-vertex radii[lower-alpha 73] of each 24-cell, 16 are also the radii of a black tesseract inscribed in the 24-cell. The other 8 radii extend outside the black tesseract (through the centers of its cubical facets) to the centers of the 8 adjacent red tesseracts. Thus the 24-cell honeycomb and the tesseractic honeycomb coincide in a special way: 8 of the 24 vertices of each 24-cell do not occur at a vertex of a tesseract (they occur at the center of a tesseract instead). Each black tesseract is cut from a 24-cell by truncating it at these 8 vertices, slicing off 8 cubic pyramids (as in reversing Gosset's construction,[22] but instead of being removed the pyramids are simply colored red and left in place). Eight 24-cells meet at the center of each red tesseract: each one meets its opposite at that shared vertex, and the six others at a shared octahedral cell.
The red tesseracts are filled cells (they contain a central vertex and radii); the black tesseracts are empty cells. The vertex set of this union of two honeycombs includes the vertices of all the 24-cells and tesseracts, plus the centers of the red tesseracts. Adding the 24-cell centers (which are also the black tesseract centers) to this honeycomb yields a 16-cell honeycomb, the vertex set of which includes all the vertices and centers of all the 24-cells and tesseracts. The formerly empty centers of adjacent 24-cells become the opposite vertices of a unit edge length 16-cell. 24 half-16-cells (octahedral pyramids) meet at each formerly empty center to fill each 24-cell, and their octahedral bases are the 6-vertex octahedral facets of the 24-cell (shared with an adjacent 24-cell).[lower-alpha 74]

Notice the complete absence of pentagons anywhere in this union of three honeycombs. Like the 24-cell, 4-dimensional Euclidean space itself is entirely filled by a complex of all the polytopes that can be built out of regular triangles and squares (except the 5-cell), but that complex does not require (or permit) any of the pentagonal polytopes.[lower-alpha 3]

Rotations

The regular convex 4-polytopes are an expression of their underlying symmetry which is known as SO(4), the group of rotations[41] about a fixed point in 4-dimensional Euclidean space.[lower-alpha 77]

The 3 Cartesian bases of the 24-cell

There are three distinct orientations of the tesseractic honeycomb which could be made to coincide with the 24-cell honeycomb, depending on which of the 24-cell's three disjoint sets of 8 orthogonal vertices (which set of 4 perpendicular axes, or equivalently, which inscribed basis 16-cell)[lower-alpha 16] was chosen to align it, just as three tesseracts can be inscribed in the 24-cell, rotated with respect to each other.[lower-alpha 22] The distance from one of these orientations to another is an isoclinic rotation through 60 degrees (a double rotation of 60 degrees in each plane of a pair of completely orthogonal invariant planes, around a single fixed point).[lower-alpha 78] This rotation can be seen most clearly in the hexagonal central planes, where the hexagon rotates to change which of its three diameters is aligned with a coordinate system axis.[lower-alpha 17]

Planes of rotation

Rotations in 4-dimensional Euclidean space can be seen as the composition of two 2-dimensional rotations in completely orthogonal planes.[43] Thus the general rotation in 4-space is a double rotation.[44] There are two important special cases, called a simple rotation and an isoclinic rotation.[lower-alpha 82]

Simple rotations

In 3 dimensions a spinning polyhedron has a single invariant central plane of rotation. The plane is called invariant because each point in the plane moves in a circle but stays within the plane. Only one of a polyhedron's central planes can be invariant during a particular rotation; the choice of invariant central plane, and the angular distance and direction it is rotated, completely specifies the rotation. Points outside the invariant plane also move in circles (unless they are on the fixed axis of rotation perpendicular to the invariant plane), but the circles do not lie within a central plane. When a 4-polytope is rotating with only one invariant central plane, the same kind of simple rotation is happening that occurs in 3 dimensions. One difference is that instead of a fixed axis of rotation, there is an entire fixed central plane in which the points do not move.
The fixed plane is the one central plane that is completely orthogonal[lower-alpha 7] to the invariant plane of rotation. In the 24-cell, there is a simple rotation which will take any vertex directly to any other vertex, also moving most of the other vertices but leaving at least 2 and at most 6 other vertices fixed (the vertices that the fixed central plane intersects). The vertex moves along a great circle in the invariant plane of rotation between adjacent vertices of a great hexagon, a great square or a great digon, and the completely orthogonal fixed plane is a digon, a square or a hexagon, respectively.[lower-alpha 53]

Double rotations

The points in the completely orthogonal central plane are not constrained to be fixed. It is also possible for them to be rotating in circles, as a second invariant plane, at a rate independent of the first invariant plane's rotation: a double rotation in two perpendicular non-intersecting planes[lower-alpha 9] of rotation at once.[lower-alpha 81] In a double rotation there is no fixed plane or axis: every point moves except the center point. The angular distance rotated may be different in the two completely orthogonal central planes, but they are always both invariant: their circularly moving points remain within the plane as the whole plane tilts sideways in the completely orthogonal rotation. A rotation in 4-space always has (at least) two completely orthogonal invariant planes of rotation, although in a simple rotation the angle of rotation in one of them is 0.

Double rotations come in two chiral forms: left and right rotations.[lower-alpha 83] In a double rotation each vertex moves in a spiral along two completely orthogonal great circles at once.[lower-alpha 79] Either the path is right-hand threaded (like most screws and bolts), moving along the circles in the "same" directions, or it is left-hand threaded (like a reverse-threaded bolt), moving along the circles in what we conventionally say are "opposite" directions (according to the right hand rule by which we conventionally say which way is "up" on each of the 4 coordinate axes).[46]

In double rotations of the 24-cell that take vertices to vertices, one invariant plane of rotation contains either a great hexagon, a great square, or only an axis (two vertices, a great digon). The completely orthogonal invariant plane of rotation will necessarily contain a great digon, a great square, or a great hexagon, respectively. The selection of an invariant plane of rotation, a rotational direction and angle through which to rotate it, and a rotational direction and angle through which to rotate its completely orthogonal plane, completely determines the nature of the rotational displacement.
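In matrix terms (an illustrative sketch added here, not from the source; the choice of the wx- and yz-planes as the pair of completely orthogonal invariant planes is an assumption made for concreteness), a double rotation is block-diagonal, with one ordinary plane rotation per invariant plane:

```python
import math

def double_rotation(a, b):
    """Rotation by angle a in the wx-plane and b in the yz-plane."""
    ca, sa, cb, sb = math.cos(a), math.sin(a), math.cos(b), math.sin(b)
    return [[ca, -sa, 0, 0],
            [sa,  ca, 0, 0],
            [0, 0, cb, -sb],
            [0, 0, sb,  cb]]

def apply(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

simple = double_rotation(math.pi / 3, 0.0)             # one angle is 0
double = double_rotation(math.pi / 3, math.pi / 3)     # equal angles: isoclinic
print(apply(simple, (0, 0, 1, 0)))  # (0.0, 0.0, 1.0, 0.0): fixed pointwise
print(apply(double, (0, 0, 1, 0)))  # moved, but stays within the yz-plane
```

Setting one angle to 0 recovers a simple rotation, whose completely orthogonal plane is fixed pointwise; setting the two angles equal gives the isoclinic case discussed next.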
In the 24-cell there are several noteworthy kinds of double rotation permitted by these parameters.[47]

Isoclinic rotations

When the angles of rotation in the two invariant planes are exactly the same, a remarkably symmetric transformation occurs:[48] all the great circle planes Clifford parallel[lower-alpha 35] to the invariant planes become invariant planes of rotation themselves, through that same angle, and the 4-polytope rotates isoclinically in many directions at once.[49] Each vertex moves an equal distance in four orthogonal directions at the same time.[lower-alpha 15] In the 24-cell any isoclinic rotation through 60 degrees in a hexagonal plane takes each vertex to a vertex two edge lengths away, rotates all 16 hexagons by 60 degrees, and takes every great circle polygon (square,[lower-alpha 42] hexagon or triangle) to a Clifford parallel great circle polygon of the same kind 120 degrees away. An isoclinic rotation is also called a Clifford displacement, after its discoverer.[lower-alpha 78]

The 24-cell in the double rotation animation appears to turn itself inside out.[lower-alpha 86] It appears to, because it actually does, reversing the chirality of the whole 4-polytope just the way your bathroom mirror reverses the chirality of your image by a 180 degree reflection. Each 360 degree isoclinic rotation is as if the 24-cell surface had been stripped off like a glove and turned inside out, making a right-hand glove into a left-hand glove (or vice versa).[50]

In a simple rotation of the 24-cell in a hexagonal plane, each vertex in the plane rotates first along an edge to an adjacent vertex 60 degrees away. But in an isoclinic rotation in two completely orthogonal planes one of which is a great hexagon,[lower-alpha 53] each vertex rotates first to a vertex two edge lengths away (√3 and 120° distant). The double 60-degree rotation's helical geodesics pass through every other vertex, missing the vertices in between.[lower-alpha 21] Each √3 chord of the helical geodesic[lower-alpha 92] crosses between two Clifford parallel hexagon central planes, and lies in another hexagon central plane that intersects them both.[lower-alpha 97] The √3 chords meet at a 60° angle, but since they lie in different planes they form a helix, not a triangle. Three √3 chords and 360° of rotation take the vertex to an adjacent vertex, not back to itself. The helix of √3 chords closes into a loop only after six √3 chords: a 720° rotation twice around the 24-cell[lower-alpha 80] on a skew hexagram with √3 edges.[lower-alpha 96] Even though all 24 vertices and all the hexagons rotate at once, a 360 degree isoclinic rotation hits only half the vertices in the 24-cell.[lower-alpha 91] After 360 degrees each helix has departed from 3 vertices and reached a fourth vertex adjacent to the original vertex, but has not arrived back exactly at the vertex it departed from. Each central plane (every hexagon or square in the 24-cell) has rotated 360 degrees and been tilted sideways all the way around 360 degrees back to its original position (like a coin flipping twice), but the 24-cell's orientation in the 4-space in which it is embedded is now different.
Because the 24-cell is now inside-out, if the isoclinic rotation is continued in the same direction through another 360 degrees, the 24 moving vertices will pass through the other half of the vertices that were missed on the first revolution (the 12 antipodal vertices of the 12 that were hit the first time around), and each isoclinic geodesic will arrive back at the vertex it departed from, forming a closed six-chord helical loop. It takes a 720 degree isoclinic rotation for each hexagram₂ geodesic to complete a circuit through every second vertex of its six vertices by winding around the 24-cell twice, returning the 24-cell to its original chiral orientation.[lower-alpha 102]

The hexagonal winding path that each vertex takes as it loops twice around the 24-cell forms a double helix bent into a Möbius ring, so that the two strands of the double helix form a continuous single strand in a closed loop.[lower-alpha 99] In the first revolution the vertex traverses one 3-chord strand of the double helix; in the second revolution it traverses the second 3-chord strand, moving in the same rotational direction with the same handedness (bending either left or right) throughout. Although this isoclinic Möbius ring is a closed spiral, not a 2-dimensional circle, like a great circle it is a geodesic because it is the shortest path from vertex to vertex.[lower-alpha 47]

Clifford parallel polytopes

Two planes are also called isoclinic if an isoclinic rotation will bring them together.[lower-alpha 50] The isoclinic planes are precisely those central planes with Clifford parallel geodesic great circles.[52] Clifford parallel great circles do not intersect,[lower-alpha 35] so isoclinic great circle polygons have disjoint vertices. In the 24-cell every hexagonal central plane is isoclinic to three others, and every square central plane is isoclinic to five others. We can pick out 4 mutually isoclinic (Clifford parallel) great hexagons (four different ways) covering all 24 vertices of the 24-cell just once (a hexagonal fibration).[lower-alpha 20] We can pick out 6 mutually isoclinic (Clifford parallel) great squares[lower-alpha 88] (three different ways) covering all 24 vertices of the 24-cell just once (a square fibration).[lower-alpha 12] Every isoclinic rotation taking vertices to vertices corresponds to a discrete fibration.[lower-alpha 107]

Two-dimensional great circle polygons are not the only polytopes in the 24-cell which are parallel in the Clifford sense.[54] Congruent polytopes of 2, 3 or 4 dimensions can be said to be Clifford parallel in 4 dimensions if their corresponding vertices are all the same distance apart. The three 16-cells inscribed in the 24-cell are Clifford parallels. Clifford parallel polytopes are completely disjoint polytopes.[lower-alpha 25] A 60 degree isoclinic rotation in hexagonal planes takes each 16-cell to a disjoint 16-cell. Like all double rotations, isoclinic rotations come in two chiral forms: there is a disjoint 16-cell to the left of each 16-cell, and another to its right.[lower-alpha 26]

All Clifford parallel 4-polytopes are related by an isoclinic rotation,[lower-alpha 78] but not all isoclinic polytopes are Clifford parallels (completely disjoint).[lower-alpha 108] The three 8-cells in the 24-cell are isoclinic but not Clifford parallel. Like the 16-cells, they are rotated 60 degrees isoclinically with respect to each other, but their vertices are not all disjoint (and therefore not all equidistant).
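The claim that an isoclinic rotation carries each 16-cell to a disjoint 16-cell can be made concrete using the quaternion model from the earlier section (an illustrative sketch, not from the source; left-multiplication by a unit quaternion is one standard model of a left isoclinic rotation): left-multiplying by a half-integer unit Hurwitz quaternion permutes the 24 vertices and carries the integer 16-cell {±1, ±i, ±j, ±k} around all three disjoint 16-cells in turn.

```python
from fractions import Fraction

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

h = Fraction(1, 2)
q = (h, h, h, h)                       # a half-integer unit Hurwitz quaternion
cell16 = {tuple(Fraction(s) if i == a else Fraction(0) for i in range(4))
          for a in range(4) for s in (1, -1)}      # the integer 16-cell

orbit, step = set(), cell16
for _ in range(3):
    orbit |= step
    step = {qmul(q, v) for v in step}  # rotate the whole 16-cell at once
assert step == cell16                  # after 3 steps the 16-cell returns
print(len(orbit))                      # 24: the three disjoint 16-cells
```

Three applications return the 16-cell to itself, and the union of the three images is the full vertex set, illustrating how an isoclinic rotation of a single 16-cell can generate the whole 24-cell, as discussed below.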
Each vertex of the 24-cell occurs in two of the three 8-cells (as each 16-cell occurs in two of the three 8-cells).[lower-alpha 22]

Isoclinic rotations relate the convex regular 4-polytopes to each other. An isoclinic rotation of a single 16-cell will generate[lower-alpha 109] a 24-cell. A simple rotation of a single 16-cell will not, because its vertices will not reach either of the other two 16-cells' vertices in the course of the rotation. An isoclinic rotation of the 24-cell will generate the 600-cell, and an isoclinic rotation of the 600-cell will generate the 120-cell. (Or they can all be generated directly by an isoclinic rotation of the 16-cell, generating isoclinic copies of itself.) The convex regular 4-polytopes nest inside each other, and hide next to each other in the Clifford parallel spaces that make up the 3-sphere.[55] For an object of more than one dimension, the only way to reach these parallel subspaces directly is by isoclinic rotation.[lower-alpha 110]

Rings

In the 24-cell there are rings of six different kinds, described separately in detail in other sections of this article. This section describes how the different kinds of rings are intertwined. The 24-cell contains four kinds of geodesic fibers (polygonal rings running through vertices): great circle squares and their isoclinic helix octagrams,[lower-alpha 12] and great circle hexagons and their isoclinic helix hexagrams.[lower-alpha 20] It also contains two kinds of cell rings (chains of octahedra bent into a ring in the fourth dimension): four octahedra connected vertex-to-vertex and bent into a square, and six octahedra connected face-to-face and bent into a hexagon.

4-cell rings

Four unit-edge-length octahedra can be connected vertex-to-vertex along a common axis of length 4√2. The axis can then be bent into a square of edge length √2. Although it is possible to do this in a space of only three dimensions, that is not how it occurs in the 24-cell. Although the √2 axes of the four octahedra occupy the same plane, forming one of the 18 √2 great squares of the 24-cell, each octahedron occupies a different 3-dimensional hyperplane,[lower-alpha 111] and all four dimensions are utilized. The 24-cell can be partitioned into 6 such 4-cell rings (three different ways), mutually interlinked like adjacent links in a chain (but these links all have a common center). An isoclinic rotation in the great square plane by a multiple of 90° takes each octahedron in the ring to an octahedron in the ring.

6-cell rings

Six regular octahedra can be connected face-to-face along a common axis that passes through their centers of volume, forming a stack or column with only triangular faces. In a space of four dimensions, the axis can then be bent 60° in the fourth dimension at each of the six octahedron centers, in a plane orthogonal to all three orthogonal central planes of each octahedron, such that the top and bottom triangular faces of the column become coincident. The column becomes a ring around a hexagonal axis. The 24-cell can be partitioned into 4 such rings (four different ways), mutually interlinked. Because the hexagonal axis joins cell centers (not vertices), it is not a great hexagon of the 24-cell.[lower-alpha 114] However, six great hexagons can be found in the ring of six octahedra, running along the edges of the octahedra.
In the column of six octahedra (before it is bent into a ring) there are six spiral paths along edges running up the column: three parallel helices spiraling clockwise, and three parallel helices spiraling counterclockwise. Each clockwise helix intersects each counterclockwise helix at two vertices three edge lengths apart. Bending the column into a ring changes these helices into great circle hexagons.[lower-alpha 112] The ring has two sets of three great hexagons, each on three Clifford parallel great circles.[lower-alpha 116] The great hexagons in each parallel set of three do not intersect, but each intersects the other three great hexagons (to which it is not Clifford parallel) at two antipodal vertices. A simple rotation in any of the great hexagon planes by a multiple of 60° rotates only that hexagon invariantly, taking each vertex in that hexagon to a vertex in the same hexagon. An isoclinic rotation by 60° in any of the six great hexagon planes rotates all three Clifford parallel great hexagons invariantly, and takes each octahedron in the ring to a non-adjacent octahedron in the ring.[lower-alpha 118] Each isoclinically displaced octahedron is also rotated itself. After a 360° isoclinic rotation each octahedron is back in the same position, but in a different orientation; after a 720° isoclinic rotation, it is returned to its original orientation.

Four great hexagons comprise a discrete fiber bundle covering all 24 vertices in a Hopf fibration. Four 6-cell rings comprise the same discrete fibration. The 24-cell has four such discrete hexagonal fibrations, and each is the domain (container) of a unique left-right pair of isoclinic rotations (left and right Hopf fiber bundles). Each great hexagon belongs to just one fibration,[57] but each 6-cell ring belongs to three fibrations. The 24-cell contains 16 great hexagons, divided among four fibrations, each of which is a set of four 6-cell rings, but the 24-cell has only four distinct 6-cell rings. Each 6-cell ring contains 3 of the great hexagons in each of three fibrations: only 3 of the 4 Clifford parallel hexagons of each of the three fibrations, and only 18 of the 24 vertices.[lower-alpha 107]

Helical hexagrams and their isoclines

Another kind of geodesic fiber, the helical hexagram isoclines, can be found within a 6-cell ring of octahedra. Each of these geodesics runs through every second vertex of a skew hexagram₂, which in the unit-radius, unit-edge-length 24-cell has six √3 edges. The hexagram does not lie in a single central plane, but is composed of six linked √3 chords from the six different hexagon great circles in the 6-cell ring. The isocline geodesic fiber is the path of an isoclinic rotation,[lower-alpha 47] a helical rather than simply circular path around the 24-cell which links vertices two edge lengths apart and consequently must wrap twice around the 24-cell before completing its six-vertex loop.[lower-alpha 85] Rather than a flat hexagon, it forms a skew hexagram out of two three-sided 360 degree half-loops: open triangles joined end-to-end to each other in a six-sided Möbius loop.[lower-alpha 99] Each 6-cell ring contains six such hexagram isoclines, three black and three white, that connect even and odd vertices respectively.[lower-alpha 115] Each of the three black-white pairs of isoclines belongs to one of the three fibrations in which the 6-cell ring occurs.
Each fibration's right (or left) rotation traverses two black isoclines and two white isoclines in parallel, rotating all 24 vertices.[lower-alpha 21]

Beginning at any vertex at one end of the column of six octahedra, we can follow the √3 chords of an isocline from octahedron to octahedron. In the 24-cell the √1 edges are great hexagon edges (and octahedron edges); in the column of six octahedra we see six great hexagons running along the octahedra's edges. The √3 chords are great hexagon diagonals, joining great hexagon vertices two √1 edges apart. We find them in the ring of six octahedra running from a vertex in one octahedron to a vertex in the next octahedron, passing through the face shared by the two octahedra (but not touching any of the face's 3 vertices). Each √3 chord is a chord of just one great hexagon (an edge of a great triangle inscribed in that great hexagon), but successive √3 chords belong to different great hexagons.[lower-alpha 97] At each vertex the isoclinic path of √3 chords bends 60 degrees in two completely orthogonal central planes[lower-alpha 119] at once: 60 degrees around the great hexagon that the chord before the vertex belongs to, and 60 degrees into the plane of a different great hexagon entirely, that the chord after the vertex belongs to.[lower-alpha 122] Thus the path follows one great hexagon from each octahedron to the next, but switches to another of the six great hexagons in the next link of the hexagram₂ path. Followed along the column of six octahedra (and "around the end" where the column is bent into a ring) the path may at first appear to be zig-zagging between three adjacent parallel hexagonal central planes (like a Petrie polygon), but it is not: any isoclinic path we can pick out always zig-zags between two sets of three adjacent parallel hexagonal central planes, intersecting only every even (or odd) vertex and never changing its inherent even/odd parity, as it visits all six of the great hexagons in the 6-cell ring in rotation.[lower-alpha 84] When it has traversed one chord from each of the six great hexagons, after 720 degrees of isoclinic rotation (either left or right), it closes its skew hexagram and begins to repeat itself, circling again through the black (or white) vertices and cells.

At each vertex, there are four great hexagons[lower-alpha 124] and four hexagram isoclines (all black or all white) that cross at the vertex.[lower-alpha 125] Four hexagram isoclines (two black and two white) comprise a unique (left or right) fiber bundle of isoclines covering all 24 vertices in each distinct (left or right) isoclinic rotation. Each fibration has a unique left and right isoclinic rotation, and corresponding unique left and right fiber bundles of isoclines.[lower-alpha 126] There are 16 distinct hexagram isoclines in the 24-cell (8 black and 8 white).[lower-alpha 127] Each isocline is a skew Clifford polygon of no inherent chirality, but acts as a left (or right) isocline when traversed by a left (or right) rotation in different fibrations.[lower-alpha 85]

Helical octagrams and their isoclines

The 24-cell contains 18 helical octagram isoclines, 9 left-handed and 9 right-handed. Three left-right pairs of octagram edge-helices are found in each of the three inscribed 16-cells, described elsewhere as the helical construction of the 16-cell. In summary, each 16-cell can be decomposed (three different ways) into a left-right pair of 8-cell rings of √2-edged tetrahedral cells.
Each 8-cell ring twists either left or right around an axial octagram helix of eight chords. In each 16-cell there are exactly 6 distinct helices, identical octagrams which each circle through all eight vertices. Each acts as either a left helix or a right helix or a Petrie polygon in each of the six distinct isoclinic rotations (three left and three right), and has no inherent chirality except in respect to a particular rotation. The chords of these isoclines connect opposite vertices of face-bonded tetrahedral cells, which are also opposite vertices (antipodal vertices) of the 16-cell, so they are √4 chords.

In the 24-cell, these 18 helical octagram isoclines can be found within the six orthogonal 4-cell rings of octahedra. Each 4-cell ring has cells bonded vertex-to-vertex around a great square axis, and we find antipodal vertices at opposite vertices of the great square. A √4 chord (the diagonal of the great square) connects them; this is a chord of each distinct square isoclinic rotation. The section on boundary cells above describes how the √2 axes of the 24-cell's octahedral cells are the edges of the 16-cell's tetrahedral cells, each tetrahedron is inscribed in a (tesseract) cube, and each octahedron is inscribed in a pair of cubes (from different tesseracts), bridging them.[lower-alpha 70] The vertex-bonded octahedra of the 4-cell ring also lie in different tesseracts.[lower-alpha 60] In the 24-cell, the 16-cells' isoclines' chords describe an octagram 4{2} with √4 edges that run from the vertex of one cube and octahedron and tetrahedron, to the vertex of another cube and octahedron and tetrahedron (in a different tesseract), straight through the center of the 24-cell on one of the twelve √4 axes.

The octahedra in the 4-cell rings are vertex-bonded to more than two other octahedra, because three 4-cell rings (and their three axial great squares, which belong to different 16-cells) cross at 90° at each bonding vertex. At that vertex the octagram makes two right-angled turns at once: 90° around the great square, and 90° completely orthogonally into a different 4-cell ring entirely. The 180° arc of each √4 chord of the octagram runs through the volumes and opposite vertices of two face-bonded √2 tetrahedra (in the same 16-cell), which are also the opposite vertices of two vertex-bonded octahedra in different 4-cell rings (and different tesseracts). The arc does not hit any vertices of those two octahedra except the chord endpoints; in particular, it misses the vertex near the chord midpoint where the two octahedra are vertex-bonded. The 720° octagram isocline runs through one vertex of one octahedron in six different 4-cell rings (of the 18 4-cell rings in the 24-cell), and through the volumes of 16 tetrahedra.
At each vertex, there are three great squares and six octagram isoclines (three left-right pairs) that cross at the vertex.[lower-alpha 88]

Characteristic orthoscheme

Characteristics of the 24-cell[61]

         edge[62]           arc        dihedral[63]
𝒍        1                  60° (π/3)  120° (2π/3)
𝟀        √(1/3) ≈ 0.577     45° (π/4)  45° (π/4)
𝝓        √(1/4) = 0.5       30° (π/6)  60° (π/3)
𝟁        √(1/12) ≈ 0.289    30° (π/6)  60° (π/3)
₀R³/𝒍    √(1/2) ≈ 0.707     45° (π/4)  90° (π/2)
₁R³/𝒍    √(1/4) = 0.5       30° (π/6)  90° (π/2)
₂R³/𝒍    √(1/6) ≈ 0.408     30° (π/6)  90° (π/2)
₀R⁴/𝒍    1
₁R⁴/𝒍    √(3/4) ≈ 0.866
₂R⁴/𝒍    √(2/3) ≈ 0.816
₃R⁴/𝒍    √(1/2) ≈ 0.707

Every regular 4-polytope has its characteristic 4-orthoscheme, an irregular 5-cell.[lower-alpha 64] The characteristic 5-cell of the regular 24-cell is represented by its Coxeter-Dynkin diagram, which can be read as a list of the dihedral angles between its mirror facets.[lower-alpha 128] It is an irregular tetrahedral pyramid based on the characteristic tetrahedron of the regular octahedron. The regular 24-cell is subdivided by its symmetry hyperplanes into 1152 instances of its characteristic 5-cell that all meet at its center.[65]

The characteristic 5-cell (4-orthoscheme) has four more edges than its base characteristic tetrahedron (3-orthoscheme), joining the four vertices of the base to its apex (the fifth vertex of the 4-orthoscheme, at the center of the regular 24-cell[lower-alpha 33]).[lower-alpha 129] If the regular 24-cell has radius and edge length 𝒍 = 1, its characteristic 5-cell's ten edges have lengths √(1/3), √(1/4), √(1/12) (the exterior right triangle face, the characteristic triangle 𝟀, 𝝓, 𝟁), plus √(1/2), √(1/4), √(1/6) (the other three edges of the exterior 3-orthoscheme facet, the characteristic tetrahedron, which are the characteristic radii of the octahedron), plus 1, √(3/4), √(2/3), √(1/2) (edges which are the characteristic radii of the 24-cell). The 4-edge path along orthogonal edges of the orthoscheme is √(1/4), √(1/12), √(1/6), √(1/2), first from a 24-cell vertex to a 24-cell edge center, then turning 90° to a 24-cell face center, then turning 90° to a 24-cell octahedral cell center, then turning 90° to the 24-cell center. Because these four edges are mutually orthogonal, their squared lengths sum to 1/4 + 1/12 + 1/6 + 1/2 = 1: the path spans exactly the 24-cell's unit long radius.

Reflections

The 24-cell can be constructed by the reflections of its characteristic 5-cell in its own facets (its tetrahedral mirror walls).[lower-alpha 130] Reflections and rotations are related: a reflection in an even number of intersecting mirrors is a rotation.[66] Consequently, regular polytopes can be generated by reflections or by rotations. For example, any 720° isoclinic rotation of the 24-cell in a hexagonal invariant plane takes each of the 24 vertices to and through 5 other vertices and back to itself, on a skew hexagram₂ geodesic isocline that winds twice around the 3-sphere on every second vertex of the hexagram.
Any set of four orthogonal pairs of antipodal vertices (the 8 vertices of one of the three inscribed 16-cells) performing half such an orbit visits 3 × 8 = 24 distinct vertices and generates the 24-cell sequentially in 3 steps of a single 360° isoclinic rotation, just as any single characteristic 5-cell reflecting itself in its own mirror walls generates the 24 vertices simultaneously by reflection.

Tracing the orbit of one such 16-cell vertex during the 360° isoclinic rotation reveals more about the relationship between reflections and rotations as generative operations.[lower-alpha 131] The vertex follows an isocline (a doubly curved geodesic circle) rather than any one of the singly curved geodesic circles that are the great circle segments over each √3 chord of the rotation.[lower-alpha 97] The isocline connects vertices two edge lengths apart, but curves away from the great circle path over the two edges connecting those vertices, missing the vertex in between.[lower-alpha 92] Although the isocline does not follow any one great circle, it is contained within a ring of another kind: in the 24-cell it stays within a 6-cell ring of spherical[68] octahedral cells, intersecting one vertex in each cell, and passing through the volume of two adjacent cells near the missed vertex.

Visualization

Cell rings

The 24-cell is bounded by 24 octahedral cells. For visualization purposes, it is convenient that the octahedron has opposing parallel faces (a trait it shares with the cells of the tesseract and the 120-cell). One can stack octahedra face to face in a straight line bent in the 4th direction into a great circle with a circumference of 6 cells.[69][70] The cell locations lend themselves to a hyperspherical description. Pick an arbitrary cell and label it the "North Pole". Eight great circle meridians (two cells long) radiate out in 3 dimensions, converging at the antipodal "South Pole" cell, the third cell along each meridian. This skeleton accounts for 18 of the 24 cells (2 + 8×2). See the table below.

There is another related great circle in the 24-cell, the dual of the one above. A path that traverses 6 vertices solely along edges resides in the dual of this polytope, which is itself, since the 24-cell is self-dual. These are the hexagonal geodesics described above.[lower-alpha 20] One can easily follow this path in a rendering of the equatorial cuboctahedron cross-section.

Starting at the North Pole, we can build up the 24-cell in 5 latitudinal layers. With the exception of the poles, each layer represents a separate 2-sphere, with the equator being a great 2-sphere.[lower-alpha 44] The cells labeled equatorial in the following table are interstitial to the meridian great circle cells. The interstitial "equatorial" cells touch the meridian cells at their faces. They touch each other, and the pole cells, at their vertices. This latter subset of eight non-meridian and pole cells has the same relative position to each other as the cells in a tesseract (8-cell), although they touch at their vertices instead of their faces.
Layer  Number of cells  Description                     Colatitude  Region
1      1 cell           North Pole                      0°          Northern Hemisphere
2      8 cells          First layer of meridian cells   60°         Northern Hemisphere
3      6 cells          Non-meridian / interstitial     90°         Equator
4      8 cells          Second layer of meridian cells  120°        Southern Hemisphere
5      1 cell           South Pole                      180°        Southern Hemisphere
Total  24 cells

The 24-cell can be partitioned into cell-disjoint sets of four of these 6-cell great circle rings, forming a discrete Hopf fibration of four interlocking rings.[lower-alpha 107] One ring is "vertical", encompassing the pole cells and four meridian cells. The other three rings each encompass two equatorial cells and four meridian cells, two from the northern hemisphere and two from the southern.[71]

Note that this hexagon great circle path implies that the interior/dihedral angle between adjacent cells is 180° − 360°/6 = 120°. This suggests you can adjacently stack exactly three 24-cells in a plane and form a 4-D honeycomb of 24-cells as described previously.

One can also follow a great circle route, through the octahedra's opposing vertices, that is four cells long. These are the square geodesics along four √2 chords described above. This path corresponds to traversing diagonally through the squares in the cuboctahedron cross-section. The 24-cell is the only regular polytope in more than two dimensions where you can traverse a great circle purely through opposing vertices (and the interior) of each cell. This great circle is self-dual. This path was touched on above regarding the set of 8 non-meridian (equatorial) and pole cells. The 24-cell can be equipartitioned into three 8-cell subsets, each having the organization of a tesseract. Each of these subsets can be further equipartitioned into two interlocking great circle chains, four cells long. Collectively these three subsets now produce another, six-ring, discrete Hopf fibration.

Parallel projections

The vertex-first parallel projection of the 24-cell into 3-dimensional space has a rhombic dodecahedral envelope. Twelve of the 24 octahedral cells project in pairs onto six square dipyramids that meet at the center of the rhombic dodecahedron. The remaining 12 octahedral cells project onto the 12 rhombic faces of the rhombic dodecahedron.

The cell-first parallel projection of the 24-cell into 3-dimensional space has a cuboctahedral envelope. Two of the octahedral cells, the nearest and farthest from the viewer along the w-axis, project onto an octahedron whose vertices lie at the center of the cuboctahedron's square faces. Surrounding this central octahedron lie the projections of 16 other cells: 8 pairs that each project to one of the 8 volumes lying between a triangular face of the central octahedron and the closest triangular face of the cuboctahedron. The remaining 6 cells project onto the square faces of the cuboctahedron. This corresponds with the decomposition of the cuboctahedron into a regular octahedron and 8 irregular but equal octahedra, each of which is in the shape of the convex hull of a cube with two opposite vertices removed.

The edge-first parallel projection has an elongated hexagonal dipyramidal envelope, and the face-first parallel projection has a nonuniform hexagonal bi-antiprismic envelope.

Perspective projections

The vertex-first perspective projection of the 24-cell into 3-dimensional space has a tetrakis hexahedral envelope. The layout of cells in this image is similar to the image under parallel projection.
The following sequence of images shows the structure of the cell-first perspective projection of the 24-cell into 3 dimensions. The 4D viewpoint is placed at a distance of five times the vertex-center radius of the 24-cell.

Cell-first perspective projection

In the first image, the nearest cell is rendered in red, and the remaining cells are in edge-outline. For clarity, cells facing away from the 4D viewpoint have been culled. In the second image, four of the 8 cells surrounding the nearest cell are shown in green; the fourth cell is behind the central cell in this viewpoint (slightly discernible since the red cell is semi-transparent). Finally, all 8 cells surrounding the nearest cell are shown, with the last four rendered in magenta.

Note that these images do not include cells which are facing away from the 4D viewpoint. Hence, only 9 cells are shown here. On the far side of the 24-cell are another 9 cells in an identical arrangement. The remaining 6 cells lie on the "equator" of the 24-cell, and bridge the two sets of cells.

[Images: animated cross-section of the 24-cell; a stereoscopic 3D projection of an icositetrachoron (24-cell); isometric orthogonal projection of 8-cell (tesseract) + 16-cell = 24-cell.]

Related polytopes

Three Coxeter group constructions

There are two lower-symmetry forms of the 24-cell. Derived as a rectified 16-cell, with B4 or [3,3,4] symmetry, it can be drawn bicolored with 8 and 16 octahedral cells. Constructed from D4 or [3^{1,1,1}] symmetry, it can be drawn tricolored with three sets of 8 octahedra.

Three nets of the 24-cell, with cells colored by D4, B4, and F4 symmetry:
• Rectified demitesseract: D4, [3^{1,1,1}], order 192; three sets of 8 rectified tetrahedral cells.
• Rectified 16-cell: B4, [3,3,4], order 384; one set of 16 rectified tetrahedral cells and one set of 8 octahedral cells.
• Regular 24-cell: F4, [3,4,3], order 1152; one set of 24 octahedral cells.
In the vertex figure, each edge corresponds to one triangular face, colored by symmetry arrangement.

Related complex polygons

The regular complex polygon 4{3}4 contains the 24 vertices of the 24-cell, and 24 4-edges that correspond to central squares of 24 of 48 octahedral cells. Its symmetry is 4[3]4, order 96.[72]

The regular complex polytope 3{4}3, in $\mathbb {C} ^{2}$, has a real representation as a 24-cell in 4-dimensional space. 3{4}3 has 24 vertices, and 24 3-edges. Its symmetry is 3[4]3, order 72.

Related figures in orthogonal projections:
• {3,4,3}: symmetry [3,4,3], order 1152; 24 vertices and 96 2-edges. Shown in the F4 Coxeter plane, with 24 vertices in two rings of 12, and 96 edges.
• 4{3}4: symmetry 4[3]4, order 96; 24 vertices and 24 4-edges. Shown with red, green, blue, and yellow square 4-edges.
• 3{4}3: symmetry 3[4]3, order 72; 24 vertices and 24 3-edges. Shown with 8 red, 8 green, and 8 blue square 3-edges, with blue edges filled.

Related 4-polytopes

Several uniform 4-polytopes can be derived from the 24-cell via truncation:

• truncating at 1/3 of the edge length yields the truncated 24-cell;

• truncating at 1/2 of the edge length yields the rectified 24-cell;

• and truncating at half the depth to the dual 24-cell yields the bitruncated 24-cell, which is cell-transitive.

The 96 edges of the 24-cell can be partitioned into the golden ratio to produce the 96 vertices of the snub 24-cell.
This is done by first placing vectors along the 24-cell's edges such that each two-dimensional face is bounded by a cycle, then similarly partitioning each edge into the golden ratio along the direction of its vector. An analogous modification to an octahedron produces an icosahedron, or "snub octahedron."

The 24-cell is the unique convex self-dual regular Euclidean polytope that is neither a polygon nor a simplex. Relaxing the condition of convexity admits two further figures: the great 120-cell and grand stellated 120-cell. With itself, it can form a polytope compound: the compound of two 24-cells.

Related uniform polytopes

The 24-cell heads a family of uniform 4-polytopes derived from it by truncation: the 24-cell {3,4,3}, truncated 24-cell t{3,4,3}, snub 24-cell s{3,4,3}, rectified 24-cell r{3,4,3}, cantellated 24-cell rr{3,4,3}, bitruncated 24-cell 2t{3,4,3}, cantitruncated 24-cell tr{3,4,3}, runcinated 24-cell t0,3{3,4,3}, runcitruncated 24-cell t0,1,3{3,4,3}, and omnitruncated 24-cell t0,1,2,3{3,4,3}. The 24-cell can also be derived as a rectified 16-cell, r{3,3,4}, relating it to the B4 family of uniform polytopes derived from the tesseract {4,3,3} and the 16-cell {3,3,4}, and to the D4 family of the demitesseract.

The 24-cell {3,4,3} also belongs to the sequence of regular polytopes and honeycombs {3,p,3}, with cells {3,p} and vertex figures {p,3}: the finite {3,3,3} (5-cell) and {3,4,3} in S³; the compact hyperbolic {3,5,3}; the paracompact {3,6,3}; and the noncompact {3,7,3}, {3,8,3}, ... {3,∞,3}.

See also

• Octacube (sculpture)

• Uniform 4-polytope § The F4 family

Notes

1. The 24-cell is one of only three self-dual regular Euclidean polytopes which are neither a polygon nor a simplex. The other two are also 4-polytopes, but not convex: the grand stellated 120-cell and the great 120-cell. The 24-cell is nearly unique among self-dual regular convex polytopes in that it and the even polygons are the only such polytopes where a face is not opposite an edge.

2. The long radius (center to vertex) of the 24-cell is equal to its edge length; thus its long diameter (vertex to opposite vertex) is 2 edge lengths. Only a few uniform polytopes have this property, including the four-dimensional 24-cell and tesseract, the three-dimensional cuboctahedron, and the two-dimensional hexagon. (The cuboctahedron is the equatorial cross section of the 24-cell, and the hexagon is the equatorial cross section of the cuboctahedron.)
Radially equilateral polytopes are those which can be constructed, with their long radii, from equilateral triangles which meet at the center of the polytope, each contributing two radii and an edge.

3. The convex regular polytopes in the first four dimensions with a 5 in their Schläfli symbol are the pentagon {5}, the icosahedron {3, 5}, the dodecahedron {5, 3}, the 600-cell {3,3,5} and the 120-cell {5,3,3}. In other words, the 24-cell possesses all of the triangular and square features that exist in four dimensions except the regular 5-cell, but none of the pentagonal features. (The 5-cell {3, 3, 3} is also pentagonal in the sense that its Petrie polygon is the pentagon.)

4. The convex regular 4-polytopes can be ordered by size as a measure of 4-dimensional content (hypervolume) for the same radius. Each greater polytope in the sequence is rounder than its predecessor, enclosing more content[7] within the same radius. The 4-simplex (5-cell) is the smallest case, and the 120-cell is the largest. Complexity (as measured by comparing configuration matrices or simply the number of vertices) follows the same ordering. This provides an alternative numerical naming scheme for regular polytopes in which the 24-cell is the 24-point 4-polytope: fourth in the ascending sequence that runs from 5-point 4-polytope to 600-point 4-polytope.

5. The edge length will always be different unless predecessor and successor are both radially equilateral, i.e. their edge length is the same as their radius (so both are preserved). Since radially equilateral polytopes[lower-alpha 2] are rare, it seems that the only such construction (in any dimension) is from the 8-cell to the 24-cell, making the 24-cell the unique regular polytope (in any dimension) which has the same edge length as its predecessor of the same radius.

6. The edges of six of the squares are aligned with the grid lines of the √2 radius coordinate system. For example,

(0, −1, 1, 0)   (0, 1, 1, 0)
(0, −1, −1, 0)   (0, 1, −1, 0)

is the square in the xy plane. The edges of the squares are not 24-cell edges, they are interior chords joining two vertices 90° distant from each other; so the squares are merely invisible configurations of four of the 24-cell's vertices, not visible 24-cell features.

7. Two flat planes A and B of a Euclidean space of four dimensions are called completely orthogonal if and only if every line in A is orthogonal to every line in B. In that case the planes A and B intersect at a single point O, so that if a line in A intersects with a line in B, they intersect at O.[lower-alpha 10]

8. Up to 6 planes can be mutually orthogonal in 4 dimensions. 3-dimensional space accommodates only 3 perpendicular axes and 3 perpendicular planes through a single point. In 4-dimensional space we may have 4 perpendicular axes and 6 perpendicular planes through a point (for the same reason that the tetrahedron has 6 edges, not 4): there are 6 ways to take 4 dimensions 2 at a time. Three such perpendicular planes (pairs of axes) meet at each vertex of the 24-cell (for the same reason that three edges meet at each vertex of the tetrahedron). Each of the 6 planes is completely orthogonal[lower-alpha 7] to just one of the other planes: the only one with which it does not share a line (for the same reason that each edge of the tetrahedron is orthogonal to just one of the other edges: the only one with which it does not share a point).
9. To visualize how two planes can intersect in a single point in a four dimensional space, consider the Euclidean space (w, x, y, z) and imagine that the w dimension represents time rather than a spatial dimension. The xy central plane (where w=0, z=0) shares no axis with the wz central plane (where x=0, y=0). The xy plane exists at only a single instant in time (w=0); the wz plane (and in particular the w axis) exists all the time. Thus their only moment and place of intersection is at the origin point (0,0,0,0).

10. In 4 dimensional space we can construct 4 perpendicular axes and 6 perpendicular planes through a point. Without loss of generality, we may take these to be the axes and orthogonal central planes of a (w, x, y, z) Cartesian coordinate system. In 4 dimensions we have the same 3 orthogonal planes (xy, xz, yz) that we have in 3 dimensions, and also 3 others (wx, wy, wz). Each of the 6 orthogonal planes shares an axis with 4 of the others, and is completely orthogonal to just one of the others: the only one with which it does not share an axis. Thus there are 3 pairs of completely orthogonal planes: xy and wz intersect only at the origin; xz and wy intersect only at the origin; yz and wx intersect only at the origin.

11. Two planes in 4-dimensional space can have four possible reciprocal positions: (1) they can coincide (be exactly the same plane); (2) they can be parallel (the only way they can fail to intersect at all); (3) they can intersect in a single line, as two non-parallel planes do in 3-dimensional space; or (4) they can intersect in a single point[lower-alpha 9] (and they must, if they are completely orthogonal).[lower-alpha 7]

12. The 24-cell has three sets of 6 non-intersecting Clifford parallel great circles each passing through 4 vertices (a great square), with only one great square in each set passing through each vertex, and the 6 squares in each set reaching all 24 vertices. Each set constitutes a discrete Hopf fibration of interlocking great circles. The 24-cell can also be divided (six different ways) into 3 disjoint subsets of 8 vertices (octagrams) that do not lie in a square central plane; each such subset comprises a 16-cell and lies on a skew octagram3, forming an isoclinic geodesic or isocline that is the rotational circle traversed by those 8 vertices in one particular left or right isoclinic rotation as they rotate positions within the 16-cell. Each of these isoclines belongs to one of the three discrete Hopf fibrations of square great circles.

13. In four-dimensional Euclidean geometry, a quaternion is simply a (w, x, y, z) Cartesian coordinate. Hamilton did not see them as such when he discovered the quaternions. Schläfli would be the first to consider four-dimensional Euclidean space, publishing his discovery of the regular polyschemes in 1852, but Hamilton would never be influenced by that work, which remained obscure into the 20th century. Hamilton found the quaternions when he realized that a fourth dimension, in some sense, would be necessary in order to model rotations in three-dimensional space.[36] Although he described a quaternion as an ordered quadruple of real numbers, the quaternions were for him an extension of the complex numbers, not a Euclidean space of four dimensions.

14. The edges of the orthogonal great squares are not aligned with the grid lines of the unit radius coordinate system.
Six of the squares do lie in the 6 orthogonal planes of this coordinate system, but their edges are the √2 diagonals of unit edge length squares of the coordinate lattice. For example:
                 (  0,  0,  1,  0)
     (  0,–1,  0,  0)   (  0,  1,  0,  0)
                 (  0,  0,–1,  0)
is the square in the xy plane. Notice that the 8 integer coordinates comprise the vertices of the 6 orthogonal squares.

15. In an isoclinic rotation, each point anywhere in the 4-polytope moves an equal distance in four orthogonal directions at once, on a 4-dimensional diagonal. The point is displaced a total Pythagorean distance equal to the square root of four times the square of that distance. All vertices are displaced to a vertex at least two edge lengths away.[lower-alpha 21] For example, when the unit-radius 24-cell rotates isoclinically 60° in a hexagon invariant plane and 60° in its completely orthogonal invariant plane,[lower-alpha 53] each vertex is displaced to another vertex √3 (120°) away, moving √3/2 ≈ 0.866 in each of four orthogonal directions. Notice that this distance is half the length of the chord, in all four dimensional isoclinic rotations.

16. Each great hexagon of the 24-cell contains one axis (one pair of antipodal vertices) belonging to each of the three inscribed 16-cells. The 24-cell contains three disjoint inscribed 16-cells, rotated 60° isoclinically[lower-alpha 15] with respect to each other (so their corresponding vertices are 120° = √3 apart). A 16-cell is an orthonormal basis for a 4-dimensional coordinate system, because its 8 vertices define the four orthogonal axes. In any choice of a vertex-up coordinate system (such as the unit radius coordinates used in this article), one of the three inscribed 16-cells is the basis for the coordinate system, and each hexagon has only one axis which is a coordinate system axis.

17. The hexagons are inclined (tilted) at 60 degrees with respect to the unit radius coordinate system's orthogonal planes. Each hexagonal plane contains only one of the 4 coordinate system axes.[lower-alpha 16] The hexagon consists of 3 pairs of opposite vertices (three 24-cell diameters): one opposite pair of integer coordinate vertices (one of the four coordinate axes), and two opposite pairs of half-integer coordinate vertices (not coordinate axes). For example:
                 (  0,  0,  1,  0)
     ( 1/2,–1/2,  1/2,–1/2)   ( 1/2,  1/2,  1/2,  1/2)
     (–1/2,–1/2,–1/2,–1/2)   (–1/2,  1/2,–1/2,  1/2)
                 (  0,  0,–1,  0)
is a hexagon on the y axis. Unlike the √2 squares, the hexagons are actually made of 24-cell edges, so they are visible features of the 24-cell.

18. Eight √1 edges converge in curved 3-dimensional space from the corners of the 24-cell's cubical vertex figure[lower-alpha 38] and meet at its center (the vertex), where they form 4 straight lines which cross there. The 8 vertices of the cube are the eight nearest other vertices of the 24-cell. The straight lines are geodesics: two √1-length segments of an apparently straight line (in the 3-space of the 24-cell's curved surface) that is bent in the 4th dimension into a great circle hexagon (in 4-space). Imagined from inside this curved 3-space, the bends in the hexagons are invisible. From outside (if we could view the 24-cell in 4-space), the straight lines would be seen to bend in the 4th dimension at the cube centers, because the center is displaced outward in the 4th dimension, out of the hyperplane defined by the cube's vertices. Thus the vertex cube is actually a cubic pyramid. Unlike a cube, it seems to be radially equilateral (like the tesseract and the 24-cell itself): its "radius" equals its edge length.[lower-alpha 39]
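A concrete check in the unit radius coordinates: the 8 nearest neighbors of the vertex (1, 0, 0, 0) are the half-integer vertices $({\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}})$, each at distance ${\sqrt {({\tfrac {1}{2}})^{2}+3({\tfrac {1}{2}})^{2}}}={\sqrt {1}}=1$ from it; and two such neighbors differing in a single sign are also √1 apart. The 8 nearest neighbors therefore form a cube of edge length 1 whose eight "radii" from the central vertex all equal its edge length, as described above.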
19. It is not difficult to visualize four hexagonal planes intersecting at 60 degrees to each other, even in three dimensions. Four hexagonal central planes intersect at 60 degrees in the cuboctahedron. Four of the 24-cell's 16 hexagonal central planes (lying in the same 3-dimensional hyperplane) intersect at each of the 24-cell's vertices exactly the way they do at the center of a cuboctahedron. But the edges around the vertex do not meet as the radii do at the center of a cuboctahedron; the 24-cell has 8 edges around each vertex, not 12, so its vertex figure is the cube, not the cuboctahedron. The 8 edges meet exactly the way 8 edges do at the apex of a canonical cubic pyramid.[lower-alpha 18]

20. The 24-cell has four sets of 4 non-intersecting Clifford parallel[lower-alpha 35] great circles each passing through 6 vertices (a great hexagon), with only one great hexagon in each set passing through each vertex, and the 4 hexagons in each set reaching all 24 vertices. Each set constitutes a discrete Hopf fibration of interlocking great circles. The 24-cell can also be divided (eight different ways) into 4 disjoint subsets of 6 vertices (hexagrams) that do not lie in a hexagonal central plane, each skew hexagram forming an isoclinic geodesic or isocline that is the rotational circle traversed by those 6 vertices in one particular left or right isoclinic rotation. Each of these sets of four Clifford parallel isoclines belongs to one of the four discrete Hopf fibrations of hexagonal great circles.[lower-alpha 106]

21. In an isoclinic rotation vertices move diagonally, like the bishops in chess. Vertices in an isoclinic rotation cannot reach any of their nearest neighbor vertices because they do not rotate directly toward them;[lower-alpha 29] they move diagonally between them, missing them, to a vertex farther away in a larger-radius surrounding shell of vertices,[lower-alpha 31] the way bishops are confined to the white or black squares of the chessboard and cannot reach squares of the opposite color, even those immediately adjacent.[lower-alpha 91] Things moving diagonally move farther than 1 unit of distance in each movement step (√2 on the chessboard, √3 in the 24-cell), but at the cost of missing half the destinations.[lower-alpha 80] However in an isoclinic rotation of a rigid body all the vertices rotate at once, so every destination will be reached by some vertex.

22. The 24-cell contains 3 distinct 8-cells (tesseracts), rotated 60° isoclinically with respect to each other. The corresponding vertices of two 8-cells are √3 (120°) apart. Each 8-cell contains 8 cubical cells, and each cube contains four √3 chords (its long diameters). The 8-cells are not completely disjoint (they share vertices),[lower-alpha 25] but each cube and each √3 chord belongs to just one 8-cell. The √3 chords joining the corresponding vertices of two 8-cells belong to the third 8-cell.[lower-alpha 31]

23. These triangles' edges of length √3 are the diagonals[lower-alpha 21] of cubical cells of unit edge length found within the 24-cell, but those cubical (tesseract)[lower-alpha 22] cells are not cells of the unit radius coordinate lattice.

24. These triangles lie in the same planes containing the hexagons;[lower-alpha 17] two triangles of edge length √3 are inscribed in each hexagon.
For example, in unit radius coordinates:
                 (  0,  0,  1,  0)
     ( 1/2,–1/2,  1/2,–1/2)   ( 1/2,  1/2,  1/2,  1/2)
     (–1/2,–1/2,–1/2,–1/2)   (–1/2,  1/2,–1/2,  1/2)
                 (  0,  0,–1,  0)
are two opposing central triangles on the y axis, with each triangle formed by the vertices in alternating rows. Unlike the hexagons, the √3 triangles are not made of actual 24-cell edges, so they are invisible features of the 24-cell, like the √2 squares.

25. Polytopes are completely disjoint if all their element sets are disjoint: they do not share any vertices, edges, faces or cells. They may still overlap in space, sharing 4-content, volume, area, or length.

26. Visualize the three 16-cells inscribed in the 24-cell (left, right, and middle), and the rotation which takes them to each other. The vertices of the middle 16-cell lie on the (w, x, y, z) coordinate axes;[lower-alpha 10] the other two are rotated 60° isoclinically to its left and its right. The 24-vertex 24-cell is a compound of three 16-cells, whose three sets of 8 vertices are distributed around the 24-cell symmetrically; each vertex is surrounded by 8 others (in the 3-dimensional space of the 4-dimensional 24-cell's surface), the way the vertices of a cube surround its center.[lower-alpha 18] The 8 surrounding vertices (the cube corners) lie in other 16-cells: 4 in the other 16-cell to the left, and 4 in the other 16-cell to the right. They are the vertices of two tetrahedra inscribed in the cube, one belonging (as a cell) to each 16-cell. If the 16-cell edges are √2, each vertex of the compound of three 16-cells is √1 away from its 8 surrounding vertices in other 16-cells. Now visualize those √1 distances as the edges of the 24-cell (while continuing to visualize the disjoint 16-cells). The √1 edges form great hexagons of 6 vertices which run around the 24-cell in a central plane. Four hexagons cross at each vertex (and its antipodal vertex), inclined at 60° to each other.[lower-alpha 19] The hexagons are not perpendicular to each other, or to the 16-cells' perpendicular square central planes.[lower-alpha 17] The left and right 16-cells form a tesseract.[lower-alpha 27] Two 16-cells have vertex-pairs which are one √1 edge (one hexagon edge) apart. But a simple rotation of 60° will not take one whole 16-cell to another 16-cell, because their vertices are 60° apart in different directions, and a simple rotation has only one hexagonal plane of rotation. One 16-cell can be taken to another 16-cell by a 60° isoclinic rotation, because an isoclinic rotation is 3-sphere symmetric: four Clifford parallel hexagonal planes rotate together, but in four different rotational directions,[lower-alpha 78] taking each 16-cell to another 16-cell. But since an isoclinic 60° rotation is a diagonal rotation by 60° in two completely orthogonal directions at once,[lower-alpha 47] the corresponding vertices of the 16-cell and the 16-cell it is taken to are 120° apart: two √1 hexagon edges (or one √3 hexagon chord) apart, not one √1 edge (60°) apart.[lower-alpha 15] By the chiral diagonal nature of isoclinic rotations, the 16-cell cannot reach the adjacent 16-cell (whose vertices are one √1 edge away) by rotating toward it;[lower-alpha 21] it can only reach the 16-cell beyond it (120° away). But of course, the 16-cell beyond the 16-cell to its right is the 16-cell to its left.
So a 60° isoclinic rotation will take every 16-cell to another 16-cell: a 60° right isoclinic rotation will take the middle 16-cell to the 16-cell we may have originally visualized as the left 16-cell, and a 60° left isoclinic rotation will take the middle 16-cell to the 16-cell we visualized as the right 16-cell. (If so, that was our error in visualization; the 16-cell to the "left" is in fact the one reached by the left isoclinic rotation, as that is the only sense in which the two 16-cells are left or right of each other.)[lower-alpha 83]

27. Each pair of the three 16-cells inscribed in the 24-cell forms a 4-dimensional hypercube (a tesseract or 8-cell), in dimensional analogy to the way two tetrahedra form a cube: the two 8-vertex 16-cells are inscribed in the 16-vertex tesseract, occupying its alternate vertices. The third 16-cell does not lie within the tesseract; its 8 vertices protrude from the sides of the tesseract, forming a cubic pyramid on each of the tesseract's cubic cells (as in Gosset's construction of the 24-cell). The three pairs of 16-cells form three tesseracts.[lower-alpha 22] The tesseracts share vertices, but the 16-cells are completely disjoint.[lower-alpha 25]

28. The 18 great squares of the 24-cell occur as three sets of 6 orthogonal great squares,[lower-alpha 10] each forming a 16-cell.[lower-alpha 26] The three 16-cells are completely disjoint (and Clifford parallel): each has its own 8 vertices (on 4 orthogonal axes) and its own 24 edges (of length √2). The 18 square great circles are crossed by 16 hexagonal great circles; each hexagon has one axis (2 vertices) in each 16-cell.[lower-alpha 17] The two great triangles inscribed in each great hexagon (occupying its alternate vertices, and with edges that are its √3 chords) have one vertex in each 16-cell. Thus each great triangle is a ring linking the three completely disjoint 16-cells. There are four different ways (four different fibrations of the 24-cell) in which the vertices of the three 16-cells correspond as triangles of vertices √3 apart; in all there are 32 distinct linking triangles (two inscribed in each of the 16 great hexagons). Each pair of 16-cells forms a tesseract (8-cell).[lower-alpha 27] Each great triangle has one √3 edge in each tesseract, so it is also a ring linking the three tesseracts.

29. The 8 nearest neighbor vertices surround the vertex (in the curved 3-dimensional space of the 24-cell's boundary surface) the way a cube's 8 corners surround its center. (The vertex figure of the 24-cell is a cube.)

30. The 6 second-nearest neighbor vertices surround the vertex in curved 3-dimensional space the way an octahedron's 6 corners surround its center.

31. Eight √3 chords converge from the corners of the 24-cell's cubical vertex figure[lower-alpha 38] and meet at its center (the vertex), where they form 4 straight lines which cross there. Each of the eight √3 chords runs from this cube's center to the center of a diagonally adjacent (vertex-bonded) cube,[lower-alpha 21] which is another vertex of the 24-cell: one located 120° away in a third concentric shell of eight √3-distant vertices surrounding the second shell of six √2-distant vertices that surrounds the first shell of eight √1-distant vertices.

32. Interior features are not considered elements of the polytope. For example, the center of a 24-cell is a noteworthy feature (as are its long radii), but these interior features do not count as elements in its configuration matrix, which counts only elementary features (which are not interior to any other feature including the polytope itself).
Interior features are not rendered in most of the diagrams and illustrations in this article (they are normally invisible). In illustrations showing interior features, we always draw interior edges as dashed lines, to distinguish them from elementary edges.

33. The center of the regular 24-cell is a canonical apex of the 24-cell because it is one edge length equidistant from the 24 ordinary vertices in the 4th dimension, as the apex of a canonical pyramid is one edge length equidistant from its other vertices.

34. Thus (√1, √2, √3, √4) are the vertex chord lengths of the tesseract as well as of the 24-cell. They are also the diameters of the tesseract (from short to long), though not of the 24-cell.

35. Clifford parallels are non-intersecting curved lines that are parallel in the sense that the perpendicular (shortest) distance between them is the same at each point.[14] A double helix is an example of Clifford parallelism in ordinary 3-dimensional Euclidean space. In 4-space Clifford parallels occur as geodesic great circles on the 3-sphere.[15] Whereas in 3-dimensional space, any two geodesic great circles on the 2-sphere will always intersect at two antipodal points, in 4-dimensional space not all great circles intersect; various sets of Clifford parallel non-intersecting geodesic great circles can be found on the 3-sphere. Perhaps the simplest example is that six mutually orthogonal great circles can be drawn on the 3-sphere, as three pairs of completely orthogonal great circles.[lower-alpha 10] Each completely orthogonal pair is Clifford parallel. The two circles cannot intersect at all, because they lie in planes which intersect at only one point: the center of the 3-sphere.[lower-alpha 43] Because they are perpendicular and share a common center,[lower-alpha 44] the two circles are obviously not parallel and separate in the usual way of parallel circles in 3 dimensions; rather they are connected like adjacent links in a chain, each passing through the other without intersecting at any points, forming a Hopf link.

36. A geodesic great circle lies in a 2-dimensional plane which passes through the center of the polytope. Notice that in 4 dimensions this central plane does not bisect the polytope into two equal-sized parts, as it would in 3 dimensions, just as a diameter (a central line) bisects a circle but does not bisect a sphere. Another difference is that in 4 dimensions not all pairs of great circles intersect at two points, as they do in 3 dimensions; some pairs do, but some pairs of great circles are non-intersecting Clifford parallels.[lower-alpha 35]

37. If the Pythagorean distance between any two vertices is √1, their geodesic distance is 1; they may be two adjacent vertices (in the curved 3-space of the surface), or a vertex and the center (in 4-space). If their Pythagorean distance is √2, their geodesic distance is 2 (whether via 3-space or 4-space, because the path along the edges is the same straight line with one 90° bend in it as the path through the center). If their Pythagorean distance is √3, their geodesic distance is still 2 (whether on a hexagonal great circle past one 60° bend, or as a straight line with one 60° bend in it through the center). Finally, if their Pythagorean distance is √4, their geodesic distance is still 2 in 4-space (straight through the center), but it reaches 3 in 3-space (by going halfway around a hexagonal great circle).
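These distances can be read directly from the unit radius coordinates. From the vertex (1, 0, 0, 0): the 8 vertices $({\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}})$ lie at ${\sqrt {{\tfrac {1}{4}}+{\tfrac {3}{4}}}}={\sqrt {1}}$; the 6 vertices (0, ±1, 0, 0), (0, 0, ±1, 0), (0, 0, 0, ±1) lie at ${\sqrt {1+1}}={\sqrt {2}}$; the 8 vertices $(-{\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}})$ lie at ${\sqrt {{\tfrac {9}{4}}+{\tfrac {3}{4}}}}={\sqrt {3}}$; and the antipodal vertex (−1, 0, 0, 0) lies at ${\sqrt {4}}$. The counts 8 + 6 + 8 + 1 account for all 23 other vertices, matching the successive shells of surrounding vertices described in the notes above.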
38. The vertex figure is the facet which is made by truncating a vertex; canonically, at the mid-edges incident to the vertex. But one can make similar vertex figures of different radii by truncating at any point along those edges, up to and including truncating at the adjacent vertices to make a full size vertex figure. Stillwell defines the vertex figure as "the convex hull of the neighbouring vertices of a given vertex".[13] That is what serves the illustrative purpose here.

39. The cube is not radially equilateral in Euclidean 3-space, but a cubic pyramid is radially equilateral in the curved 3-space of the 24-cell's surface (the 3-sphere). In 4-space the 8 edges radiating from its apex are not actually its radii: the apex of the cubic pyramid is not actually its center, just one of its vertices. But in curved 3-space the edges radiating symmetrically from the apex are radii, so the cube is radially equilateral in that curved 3-space. In Euclidean 4-space 24 edges radiating symmetrically from a central point make the radially equilateral 24-cell,[lower-alpha 2] and a symmetrical subset of 16 of those edges make the radially equilateral tesseract.

40. Six √2 chords converge in 3-space from the face centers of the 24-cell's cubical vertex figure[lower-alpha 38] and meet at its center (the vertex), where they form 3 straight lines which cross there perpendicularly. The 8 vertices of the cube are the eight nearest other vertices of the 24-cell, and eight √1 edges converge from there, but let us ignore them now, since 7 straight lines crossing at the center is confusing to visualize all at once. Each of the six √2 chords runs from this cube's center (the vertex) through a face center to the center of an adjacent (face-bonded) cube, which is another vertex of the 24-cell: not a nearest vertex (at the cube corners), but one located 90° away in a second concentric shell of six √2-distant vertices that surrounds the first shell of eight √1-distant vertices. The face-center through which the √2 chord passes is the mid-point of the √2 chord, so it lies inside the 24-cell.

41. One can cut the 24-cell through 6 vertices (in any hexagonal great circle plane), or through 4 vertices (in any square great circle plane). One can see this in the cuboctahedron (the central hyperplane of the 24-cell), where there are four hexagonal great circles (along the edges) and six square great circles (across the square faces diagonally).

42. In the 16-cell the 6 orthogonal great squares form 3 pairs of completely orthogonal great circles; each pair is Clifford parallel. In the 24-cell, the 3 inscribed 16-cells lie rotated 60 degrees isoclinically[lower-alpha 15] with respect to each other; consequently their corresponding vertices are 120 degrees apart on a hexagonal great circle. Pairing their vertices which are 90 degrees apart reveals corresponding square great circles which are Clifford parallel. Each of the 18 square great circles is Clifford parallel not only to one other square great circle in the same 16-cell (the completely orthogonal one), but also to two square great circles (which are completely orthogonal to each other) in each of the other two 16-cells. (Completely orthogonal great circles are Clifford parallel, but not all Clifford parallels are orthogonal.[lower-alpha 43]) A 60 degree isoclinic rotation of the 24-cell in hexagonal invariant planes takes each square great circle to a Clifford parallel (but non-orthogonal) square great circle in a different 16-cell.
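In the unit radius coordinates the three completely disjoint 16-cells can be exhibited explicitly: one consists of the 8 integer vertices (±1, 0, 0, 0) and permutations; the other two consist of the half-integer vertices $(\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}})$ with an even number of minus signs and with an odd number, 8 vertices each. Within each subset every pair of non-antipodal vertices is √2 apart; for example $({\tfrac {1}{2}},{\tfrac {1}{2}},{\tfrac {1}{2}},{\tfrac {1}{2}})$ and $({\tfrac {1}{2}},{\tfrac {1}{2}},-{\tfrac {1}{2}},-{\tfrac {1}{2}})$ differ by 1 in exactly two coordinates, giving distance ${\sqrt {1+1}}={\sqrt {2}}$, the 16-cell edge length.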
43. Each square plane is isoclinic (Clifford parallel) to five other square planes but completely orthogonal[lower-alpha 7] to only one of them.[lower-alpha 42] Every pair of completely orthogonal planes has Clifford parallel great circles, but not all Clifford parallel great circles are orthogonal (e.g., none of the hexagonal geodesics in the 24-cell are mutually orthogonal).

44. In 4-space, two great circles can be perpendicular and share a common center which is their only point of intersection, because there is more than one great 2-sphere on the 3-sphere. The dimensionally analogous structure to a great circle (a great 1-sphere) is a great 2-sphere,[16] which is an ordinary sphere that constitutes an equator boundary dividing the 3-sphere into two equal halves, just as a great circle divides the 2-sphere. Although two Clifford parallel great circles[lower-alpha 35] occupy the same 3-sphere, they lie on different great 2-spheres. The great 2-spheres are Clifford parallel 3-dimensional objects, displaced relative to each other by a fixed distance d in the fourth dimension. Their corresponding points (on their two surfaces) are d apart. The 2-spheres (by which we mean their surfaces) do not intersect at all, although they have a common center point in 4-space. The displacement d between a pair of their corresponding points is the chord of a great circle which intersects both 2-spheres, so d can be represented equivalently as a linear chordal distance, or as an angular distance.

45. The sum of 1·96 + 2·72 + 3·96 + 4·12 is 576.

46. The sum of the squared lengths of all the distinct chords of any regular convex n-polytope of unit radius is the square of the number of vertices.[17] (The arithmetic for the 24-cell is checked below.)

47. A point under isoclinic rotation traverses the diagonal[lower-alpha 15] straight line of a single isoclinic geodesic, reaching its destination directly, instead of the bent line of two successive simple geodesics.[lower-alpha 81] A geodesic is the shortest path through a space (intuitively, a string pulled taut between two points). Simple geodesics are great circles lying in a central plane (the only kind of geodesics that occur in 3-space on the 2-sphere). Isoclinic geodesics are different: they do not lie in a single plane; they are 4-dimensional spirals rather than simple 2-dimensional circles.[lower-alpha 79] But they are not like 3-dimensional screw threads either, because they form a closed loop like any circle.[lower-alpha 99] Isoclinic geodesics are 4-dimensional great circles, and they are just as circular as 2-dimensional circles: in fact, twice as circular, because they curve in a circle in two completely orthogonal directions at once.[lower-alpha 100] They are true circles,[lower-alpha 80] and even form fibrations like ordinary 2-dimensional great circles.[lower-alpha 20][lower-alpha 12] These isoclines are geodesic 1-dimensional lines embedded in a 4-dimensional space. On the 3-sphere[lower-alpha 105] they always occur in pairs[lower-alpha 101] as Villarceau circles on the Clifford torus, the geodesic paths traversed by vertices in an isoclinic rotation. They are helices bent into a Möbius loop in the fourth dimension, taking a diagonal winding route around the 3-sphere through the non-adjacent vertices[lower-alpha 21] of a 4-polytope's skew Clifford polygon.[lower-alpha 85]
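The arithmetic check promised above: at unit radius the squared chord lengths 1, 2, 3, 4 occur 96, 72, 96 and 12 times respectively, so the sum is $1\cdot 96+2\cdot 72+3\cdot 96+4\cdot 12=96+144+288+48=576=24^{2}$, the square of the number of vertices.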
48. Each pair of parallel √1 edges joins a pair of parallel √3 chords to form one of 48 rectangles (inscribed in the 16 central hexagons), and each pair of parallel √2 chords joins another pair of parallel √2 chords to form one of the 18 central squares.

49. One way to visualize the n-dimensional hyperplanes is as the n-spaces which can be defined by n + 1 points. A point is the 0-space which is defined by 1 point. A line is the 1-space which is defined by 2 points which are not coincident. A plane is the 2-space which is defined by 3 points which are not collinear (any triangle). In 4-space, a 3-dimensional hyperplane is the 3-space which is defined by 4 points which are not coplanar (any tetrahedron). In 5-space, a 4-dimensional hyperplane is the 4-space which is defined by 5 points which are not cocellular (any 5-cell). These simplex figures divide the hyperplane into two parts (inside and outside the figure), but in addition they divide the universe (the enclosing space) into two parts (above and below the hyperplane). The n points bound a finite simplex figure (from the outside), and they define an infinite hyperplane (from the inside).[34] These two divisions are orthogonal, so the defining simplex divides space into six regions: inside the simplex and in the hyperplane, inside the simplex but above or below the hyperplane, outside the simplex but in the hyperplane, and outside the simplex above or below the hyperplane.

50. Two angles are required to fix the relative positions of two planes in 4-space.[18] Since all planes in the same hyperplane[lower-alpha 49] are 0 degrees apart in one of the two angles, only one angle is required in 3-space. Great hexagons in different hyperplanes are 60 degrees apart in both angles. Great squares in different hyperplanes are 90 degrees apart in both angles (completely orthogonal)[lower-alpha 7] or 60 degrees apart in both angles.[lower-alpha 42] Planes which are separated by two equal angles are called isoclinic. Planes which are isoclinic have Clifford parallel great circles.[lower-alpha 35] A great square and a great hexagon in different hyperplanes are neither isoclinic nor Clifford parallel; they are separated by a 90 degree angle and a 60 degree angle.

51. Each pair of Clifford parallel polygons lies in two different hyperplanes (cuboctahedrons). The 4 Clifford parallel hexagons lie in 4 different cuboctahedrons.

52. Two intersecting great squares or great hexagons share two opposing vertices, but squares and hexagons on Clifford parallel great circles share no vertices. Two intersecting great triangles share only one vertex, since they lack opposing vertices.

53. In the 24-cell each great square plane is completely orthogonal[lower-alpha 7] to another great square plane, and each great hexagon plane is completely orthogonal to a plane which intersects only two antipodal vertices: a great digon plane.

54. The 600-cell is larger than the 24-cell, and contains the 24-cell as an interior feature.[19] The regular 5-cell is not found in the interior of any convex regular 4-polytope except the 120-cell,[20] though every convex 4-polytope can be deconstructed into irregular 5-cells.

55. This animation shows the construction of a rhombic dodecahedron from a cube, by inverting the center-to-face pyramids of a cube. Gosset's construction of a 24-cell from a tesseract is the 4-dimensional analogue of this process, inverting the center-to-cell pyramids of an 8-cell (tesseract).[22]
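The vertex bookkeeping of Gosset's construction can be sketched in unit radius coordinates: the tesseract contributes its 16 vertices $(\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}},\pm {\tfrac {1}{2}})$, and inverting the 8 center-to-cell pyramids adds one apex per cubic cell at the 8 points (±1, 0, 0, 0) and permutations, for $16+8=24$ vertices in all: the vertices of the 24-cell.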
56. We can cut a vertex off a polygon with a 0-dimensional cutting instrument (like the point of a knife, or the head of a zipper) by sweeping it along a 1-dimensional line, exposing a new edge. We can cut a vertex off a polyhedron with a 1-dimensional cutting edge (like a knife) by sweeping it through a 2-dimensional face plane, exposing a new face. We can cut a vertex off a polychoron (a 4-polytope) with a 2-dimensional cutting plane (like a snowplow), by sweeping it through a 3-dimensional cell volume, exposing a new cell. Notice that, just as with the new edge length of the polygon or the new face area of the polyhedron, every point within the new cell volume is now exposed on the surface of the polychoron.

57. Each cell face plane intersects the other face planes of its kind (those to which it is not completely orthogonal or parallel) at their characteristic vertex chord edge. Adjacent face planes of orthogonally-faced cells (such as cubes) intersect at an edge since they are not completely orthogonal.[lower-alpha 11] Although their dihedral angle is 90 degrees in the boundary 3-space, they lie in the same hyperplane[lower-alpha 49] (they are coincident rather than perpendicular in the fourth dimension); thus they intersect in a line, as non-parallel planes do in any 3-space.

58. The only planes through exactly 6 vertices of the 24-cell (not counting the central vertex) are the 16 hexagonal great circles. There are no planes through exactly 5 vertices. There are several kinds of planes through exactly 4 vertices: the 18 √2 square great circles, the 72 √1 square (tesseract) faces, and 144 √1 by √2 rectangles. The planes through exactly 3 vertices are the 96 √2 equilateral triangle (16-cell) faces, and the 96 √1 equilateral triangle (24-cell) faces. There are an infinite number of central planes through exactly two vertices (great circle digons); 16 are distinguished, as each is completely orthogonal[lower-alpha 7] to one of the 16 hexagonal great circles. Only the polygons composed of 24-cell √1 edges are visible in the projections and rotating animations illustrating this article; the others contain invisible interior chords.[lower-alpha 32]

59. The 24-cell's cubical vertex figure[lower-alpha 38] has been truncated to a tetrahedral vertex figure (see Kepler's drawing). The vertex cube has vanished, and now there are only 4 corners of the vertex figure where before there were 8. Four tesseract edges converge from the tetrahedron vertices and meet at its center, where they do not cross (since the tetrahedron does not have opposing vertices).

60. Two tesseracts share only vertices, not any edges, faces, cubes (with inscribed tetrahedra), or octahedra (whose central square planes are square faces of cubes). An octahedron that touches another octahedron at a vertex (but not at an edge or a face) is touching an octahedron in another tesseract, and a pair of adjacent cubes in the other tesseract whose common square face the octahedron spans, and a tetrahedron inscribed in each of those cubes.

61. The common core of the 24-cell and its inscribed 8-cells and 16-cells is the unit-radius 24-cell's insphere-inscribed dual 24-cell of edge length and radius 1/2.[26] Rectifying any of the three 16-cells reveals this smaller 24-cell, which has a 4-content of only 1/8 (1/16 that of the unit-radius 24-cell). Its vertices lie at the centers of the 24-cell's octahedral cells, which are also the centers of the tesseracts' square faces, and are also the centers of the 16-cells' edges.[27]
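A consistency check on the content figure above: 4-dimensional content scales as the fourth power of the edge length, and the unit edge length 24-cell has content 2 (see the content note below), so the edge length 1/2 dual 24-cell has content $2\cdot ({\tfrac {1}{2}})^{4}={\tfrac {1}{8}}$. The unit radius 24-cell, being radially equilateral, has edge length 1 and content 2, so the ratio is $({\tfrac {1}{8}})/2={\tfrac {1}{16}}$, as stated.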
62. The 24-cell's cubical vertex figure[lower-alpha 38] has been truncated to an octahedral vertex figure. The vertex cube has vanished, and now there are only 6 corners of the vertex figure where before there were 8. The 6 √2 chords which formerly converged from cube face centers now converge from octahedron vertices; but just as before, they meet at the center where 3 straight lines cross perpendicularly. The octahedron vertices are located 90° away outside the vanished cube, at the new nearest vertices; before truncation those were 24-cell vertices in the second shell of surrounding vertices.

63. Each of the 72 √2 chords in the 24-cell is a face diagonal in two distinct cubical cells (of different 8-cells) and an edge of four tetrahedral cells (in just one 16-cell).

64. An orthoscheme is a chiral irregular simplex with right triangle faces that is characteristic of some polytope if it will exactly fill that polytope with the reflections of itself in its own facets (its mirror walls). Every regular polytope can be dissected radially into instances of its characteristic orthoscheme surrounding its center. The characteristic orthoscheme has the shape described by the same Coxeter-Dynkin diagram as the regular polytope without the generating point ring.

65. The 24 vertices of the 24-cell, each used twice, are the vertices of three 16-vertex tesseracts.

66. The 24 vertices of the 24-cell, each used once, are the vertices of three 8-vertex 16-cells.[lower-alpha 16]

67. The edges of the 16-cells are not shown in any of the renderings in this article; if we wanted to show interior edges, they could be drawn as dashed lines. The edges of the inscribed tesseracts are always visible, because they are also edges of the 24-cell.

68. The 4-dimensional content of the unit edge length tesseract is 1 (by definition). The content of the unit edge length 24-cell is 2, so half its content is inside each tesseract, and half is between their envelopes. Each 16-cell (edge length √2) encloses a content of 2/3, leaving 1/3 of an enclosing tesseract between their envelopes.

69. Between the 24-cell envelope and the 8-cell envelope, we have the 8 cubic pyramids of Gosset's construction. Between the 8-cell envelope and the 16-cell envelope, we have 16 right tetrahedral pyramids, with their apexes filling the corners of the tesseract.

70. Consider the three perpendicular √2 long diameters of the octahedral cell.[31] Each of them is an edge of a different 16-cell. Two of them are the face diagonals of the square face between two cubes; each is a √2 chord that connects two vertices of those 8-cell cubes across a square face, connects two vertices of two 16-cell tetrahedra (inscribed in the cubes), and connects two opposite vertices of a 24-cell octahedron (diagonally across two of the three orthogonal square central sections).[lower-alpha 63] The third perpendicular long diameter of the octahedron does exactly the same (by symmetry); so it also connects two vertices of a pair of cubes across their common square face: but a different pair of cubes, from one of the other tesseracts in the 24-cell.[lower-alpha 60]

71. Because there are three overlapping tesseracts inscribed in the 24-cell,[lower-alpha 22] each octahedral cell lies on a cubic cell of one tesseract (in the cubic pyramid based on the cube, but not in the cube's volume), and in two cubic cells of each of the other two tesseracts (cubic cells which it spans, sharing their volume).[lower-alpha 70]
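The content bookkeeping in the notes above can be verified with the pyramid formula (a sketch; the content of a 4-dimensional pyramid is its base volume times its height, divided by 4). In Gosset's construction from a unit edge tesseract, each cubic pyramid has base volume 1 and height 1/2 (the apex at (±1, 0, 0, 0) lies 1/2 beyond the cell hyperplane $w=\pm {\tfrac {1}{2}}$), so the 8 pyramids add $8\cdot {\tfrac {1\cdot 1/2}{4}}=1$ to the tesseract's content of 1, giving the 24-cell content 2. Likewise, by the standard cross-polytope content formula, a 16-cell of edge length √2 has content $({\sqrt {2}})^{4}/6={\tfrac {2}{3}}$, leaving 1/3 of its enclosing tesseract between the envelopes, as stated above.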
72. This might appear at first to be angularly impossible, and indeed it would be in a flat space of only three dimensions. If two cubes rest face-to-face in an ordinary 3-dimensional space (e.g. on the surface of a table in an ordinary 3-dimensional room), an octahedron will fit inside them such that four of its six vertices are at the four corners of the square face between the two cubes; but then the other two octahedral vertices will not lie at a cube corner (they will fall within the volume of the two cubes, but not at a cube vertex). In four dimensions, this is no less true! The other two octahedral vertices do not lie at a corner of the adjacent face-bonded cube in the same tesseract. However, in the 24-cell there is not just one inscribed tesseract (of 8 cubes), there are three overlapping tesseracts (of 8 cubes each). The other two octahedral vertices do lie at the corner of a cube: but a cube in another (overlapping) tesseract.[lower-alpha 71]

73. It is important to visualize the radii only as invisible interior features of the 24-cell (dashed lines), since they are not edges of the honeycomb. Similarly, the center of the 24-cell is empty (not a vertex of the honeycomb).

74. Unlike the 24-cell and the tesseract, the 16-cell is not radially equilateral; therefore 16-cells of two different sizes (unit edge length versus unit radius) occur in the unit edge length honeycomb. The twenty-four 16-cells that meet at the center of each 24-cell have unit edge length, and radius √2/2. The three 16-cells inscribed in each 24-cell have edge length √2, and unit radius.

75. Three dimensional rotations occur around an axis line. Four dimensional rotations may occur around a plane. So in three dimensions we may fold planes around a common line (as when folding a flat net of 6 squares up into a cube), and in four dimensions we may fold cells around a common plane (as when folding a flat net of 8 cubes up into a tesseract). Folding around a square face is just folding around two of its orthogonal edges at the same time; there is not enough space in three dimensions to do this, just as there is not enough space in two dimensions to fold around a line (only enough to fold around a point).

76. There are (at least) two kinds of correct dimensional analogies: the usual kind between dimension n and dimension n + 1, and the much rarer and less obvious kind between dimension n and dimension n + 2. An example of the latter is that rotations in 4-space may take place around a single point, as do rotations in 2-space. Another is the n-sphere rule that the surface area of the sphere embedded in n + 2 dimensions is exactly 2πr times the volume enclosed by the sphere embedded in n dimensions, the most well-known examples being that the circumference of a circle is 2πr times 1, and the surface area of the ordinary sphere is 2πr times 2r. Coxeter cites[42] this as an instance in which dimensional analogy can fail us as a method, but it is really our failure to recognize whether a one- or two-dimensional analogy is the appropriate method.

77. Rotations in 4-dimensional Euclidean space may occur around a plane, as when adjacent cells are folded around their plane of intersection (by analogy to the way adjacent faces are folded around their line of intersection).[lower-alpha 75] But in four dimensions there is yet another way in which rotations can occur, called a double rotation.
Double rotations are an emergent phenomenon in the fourth dimension and have no analogy in three dimensions: folding up square faces and folding up cubical cells are both examples of simple rotations, the only kind that occur in fewer than four dimensions. In 3-dimensional rotations, the points in a line remain fixed during the rotation, while every other point moves. In 4-dimensional simple rotations, the points in a plane remain fixed during the rotation, while every other point moves. In 4-dimensional double rotations, a point remains fixed during rotation, and every other point moves (as in a 2-dimensional rotation!).[lower-alpha 76]

78. In a Clifford displacement, also known as an isoclinic rotation, all the Clifford parallel[lower-alpha 35] invariant planes are displaced in four orthogonal directions (two completely orthogonal planes) at once: they are rotated by the same angle, and at the same time they are tilted sideways by that same angle.[lower-alpha 80] A Clifford displacement is 4-dimensionally diagonal.[lower-alpha 15] Every plane that is Clifford parallel to one of the completely orthogonal planes (including in this case an entire Clifford parallel bundle of 4 hexagons, but not all 16 hexagons) is invariant under the isoclinic rotation: all the points in the plane rotate in circles but remain in the plane, even as the whole plane tilts sideways.[lower-alpha 85] All 16 hexagons rotate by the same angle (though only 4 of them do so invariantly). All 16 hexagons are rotated by 60 degrees, and also displaced sideways by 60 degrees to a Clifford parallel hexagon. All of the other central polygons (e.g. squares) are also displaced to a Clifford parallel polygon 60 degrees away.

79. In a double rotation each vertex can be said to move along two completely orthogonal great circles at the same time, but it does not stay within the central plane of either of those original great circles; rather, it moves along a helical geodesic that traverses diagonally between great circles. The two completely orthogonal planes of rotation are said to be invariant because the points in each stay in their places in the plane as the plane moves, rotating and tilting sideways by the angle that the other plane rotates.

80. An isoclinic rotation by 60° is two simple rotations by 60° at the same time.[lower-alpha 98] It moves all the vertices 120° at the same time, in various different directions. Six successive diagonal rotational increments, of 60°×60° each, move each vertex through 720° on a Möbius double loop called an isocline, twice around the 24-cell and back to its point of origin, in the same time (six rotational units) that it would take a simple rotation to take the vertex once around the 24-cell on an ordinary great circle.[lower-alpha 99] The helical double loop 4π isocline is just another kind of single full circle, of the same time interval and period (6 chords) as the simple great circle. The isocline is one true circle,[lower-alpha 100] as perfectly round and geodesic as the simple great circle, even though its chords (√3) are longer, its circumference is 4π instead of 2π,[lower-alpha 101] it circles through four dimensions instead of two, and it acts in two chiral forms (left and right) even though all such circles of the same circumference are directly congruent.[lower-alpha 85] Nevertheless, to avoid confusion we always refer to it as an isocline and reserve the term great circle for a geodesic ordinary circle in the plane.
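The 4π circumference is consistent with the chord count: a √3 chord of the unit radius 3-sphere subtends a 120° arc (since $2\sin(120^{\circ }/2)={\sqrt {3}}$), and six successive chords give $6\times 120^{\circ }=720^{\circ }$, an arc length of $4\pi$ at unit radius, twice the $2\pi$ of an ordinary great circle.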
81. Any double rotation (including an isoclinic rotation) can be seen as the composition of two simple rotations a and b: the left double rotation as a then b, and the right double rotation as b then a. Simple rotations are not commutative; left and right rotations (in general) reach different destinations. The difference between a double rotation and its two composing simple rotations is that the double rotation is 4-dimensionally diagonal: each moving vertex reaches its destination directly without passing through the intermediate point touched by a then b, or the other intermediate point touched by b then a, by rotating on a single helical geodesic (so it is the shortest path).[lower-alpha 79] Conversely, any simple rotation can be seen as the composition of two equal-angled double rotations (a left isoclinic rotation and a right isoclinic rotation),[lower-alpha 80] as discovered by Cayley; perhaps surprisingly, this composition is commutative, and is possible for any double rotation as well.[45]

82. A rotation in 4-space is completely characterized by choosing an invariant plane and an angle and direction (left or right) through which it rotates, and another angle and direction through which its one completely orthogonal invariant plane rotates. Two rotational displacements are identical if they have the same pair of invariant planes of rotation, through the same angles in the same directions (and hence also the same chiral pairing of directions). Thus the general rotation in 4-space is a double rotation, characterized by two angles. A simple rotation is a special case in which one rotational angle is 0.[lower-alpha 81] An isoclinic rotation is a different special case, similar but not identical to two simple rotations through the same angle.[lower-alpha 78]

83. The adjectives left and right are commonly used in two different senses, to distinguish two distinct kinds of pairing. They can refer to alternate directions: the hand on the left side of the body, versus the hand on the right side. Or they can refer to a chiral pair of enantiomorphous objects: a left hand is the mirror image of a right hand (like an inside-out glove). In the case of hands the sense intended is rarely ambiguous, because of course the hand on your left side is the mirror image of the hand on your right side: a hand is either left or right in both senses. But in the case of double-rotating 4-dimensional objects, only one sense of left versus right properly applies: the enantiomorphous sense, in which the left and right rotation are inside-out mirror images of each other. There are two directions, which we may call positive and negative, in which moving vertices may be circling on their isoclines, but it would be ambiguous to label those circular directions "right" and "left", since a rotation's direction and its chirality are independent properties: a right (or left) rotation may be circling in either the positive or negative direction. The left rotation is not rotating "to the left", the right rotation is not rotating "to the right", and unlike your left and right hands, double rotations do not lie on the left or right side of the 4-polytope. If double rotations must be analogized to left and right hands, they are better thought of as a pair of clasped hands, centered on the body, because of course they have a common center.
84. The 24-cell's Petrie polygon is a skew dodecagon {12} and also (orthogonally) a skew dodecagram {12/5} which zig-zags 90° left and right like the edges dividing the black and white squares on the chessboard.[60] In contrast, the skew hexagram2 isocline does not zig-zag, and stays on one side or the other of the dividing line between black and white, like the bishops' paths along the diagonals of either the black or white squares of the chessboard.[lower-alpha 21] The Petrie dodecagon is a circular helix of √1 edges that zig-zag 90° left and right along 12 edges of 6 different octahedra (with 3 consecutive edges in each octahedron) in a 360° rotation. In contrast, the isoclinic hexagram2 has √3 edges which all bend either left or right at every second vertex along a geodesic spiral of both chiralities (left and right)[lower-alpha 85] but only one color (black or white),[lower-alpha 91] visiting one vertex of each of those same 6 octahedra in a 720° rotation.

85. The chord-path of an isocline (the geodesic along which a vertex moves under isoclinic rotation) may be called the 4-polytope's Clifford polygon, as it is the skew polygonal shape of the rotational circles traversed by the 4-polytope's vertices in its characteristic Clifford displacement.[59] The isocline is a helical Möbius double loop which reverses its chirality twice in the course of a full double circuit. The two loops are both entirely contained within the same cell ring, where they both follow chords connecting even (odd) vertices: typically opposite vertices of adjacent cells, two edge lengths apart.[lower-alpha 91] Both "halves" of the double loop pass through each cell in the cell ring, but intersect only two even (odd) vertices in each even (odd) cell. Each pair of intersected vertices in an even (odd) cell lie opposite each other on the Möbius strip, exactly one edge length apart. Thus each cell has two helices passing through it, which are Clifford parallels[lower-alpha 35] of opposite chirality at each pair of parallel points. Globally these two helices are a single connected circle of both chiralities, with no net torsion. An isocline acts as a left (or right) isocline when traversed by a left (or right) rotation (of different fibrations).[lower-alpha 80]

86. That a double rotation can turn a 4-polytope inside out is even more noticeable in the tesseract double rotation.

87. Since it is difficult to color points and lines white, we sometimes use black and red instead of black and white. In particular, isocline chords are sometimes shown as black or red dashed lines.[lower-alpha 32]

88. Each great square plane is isoclinic (Clifford parallel) to five other square planes but completely orthogonal[lower-alpha 7] to only one of them.[lower-alpha 42] Every pair of completely orthogonal planes has Clifford parallel great circles, but not all Clifford parallel great circles are orthogonal (e.g., none of the hexagonal geodesics in the 24-cell are mutually orthogonal). There is also another way in which completely orthogonal planes are in a distinguished category of Clifford parallel planes: they are not chiral, or strictly speaking they possess both chiralities. A pair of isoclinic (Clifford parallel) planes is either a left pair or a right pair, unless they are separated by two angles of 90° (completely orthogonal planes) or 0° (coincident planes).[53] Most isoclinic planes are brought together only by a left isoclinic rotation or a right isoclinic rotation, respectively.
Completely orthogonal planes are special: the pair of planes is both a left and a right pair, so either a left or a right isoclinic rotation will bring them together. This occurs because isoclinic planes are 180° apart at all vertex pairs: not just Clifford parallel but completely orthogonal. The isoclines (chiral vertex paths)[lower-alpha 47] of 90° isoclinic rotations are special for the same reason. Left and right isoclines loop through the same set of antipodal vertices (hitting both ends of each 16-cell axis), instead of looping through disjoint left and right subsets of black or white antipodal vertices (hitting just one end of each axis), as the left and right isoclines of all other fibrations do.

89. Chirality and even/odd parity are distinct flavors. Things which have even/odd coordinate parity are black or white: the squares of the chessboard,[lower-alpha 87] cells, vertices and the isoclines which connect them by isoclinic rotation.[lower-alpha 47] Everything else is black and white: e.g. adjacent face-bonded cell pairs, or edges and chords which are black at one end and white at the other. Things which have chirality come in right or left enantiomorphous forms: isoclinic rotations and chiral objects which include characteristic orthoschemes, pairs of Clifford parallel great polygon planes,[lower-alpha 88] fiber bundles of Clifford parallel circles (whether or not the circles themselves are chiral), and the chiral cell rings found in the 16-cell and 600-cell. Things which have neither an even/odd parity nor a chirality include all edges and faces (shared by black and white cells), great circle polygons and their fibrations, and non-chiral cell rings such as the 24-cell's cell rings of octahedra. Some things have both an even/odd parity and a chirality: isoclines are black or white because they connect vertices which are all of the same color, and they act as left or right chiral objects when they are vertex paths in a left or right rotation, although they have no inherent chirality themselves. Each left (or right) rotation traverses an equal number of black and white isoclines.[lower-alpha 85]

90. Left and right isoclinic rotations partition the 24 cells (and 24 vertices) into black and white in the same way.[40] The rotations of all fibrations of the same kind of great polygon use the same chessboard, which is a convention of the coordinate system based on even and odd coordinates. Left and right are not colors: in either a left (or right) rotation half the moving vertices are black, running along black isoclines through black vertices, and the other half are white vertices rotating among themselves.[lower-alpha 89]

91. Isoclinic rotations[lower-alpha 47] partition the 24 cells (and the 24 vertices) of the 24-cell into two disjoint subsets of 12 cells (and 12 vertices), even and odd (or black and white), which shift places among themselves, in a manner dimensionally analogous to the way the bishops' diagonal moves[lower-alpha 21] restrict them to the black or white squares of the chessboard.[lower-alpha 90]

92. Although adjacent vertices on the isoclinic geodesic are a √3 chord apart, a point on a rigid body under rotation does not travel along a chord: it moves along an arc between the two endpoints of the chord (a longer distance). In a simple rotation between two vertices √3 apart, the vertex moves along the arc of a hexagonal great circle to a vertex two great hexagon edges away, and passes through the intervening hexagon vertex midway.
But in an isoclinic rotation between two vertices √3 apart the vertex moves along a helical arc called an isocline (not a planar great circle),[lower-alpha 47] which does not pass through an intervening vertex: it misses the vertex nearest to its midpoint.[lower-alpha 21]

93. P0 and P1 lie in the same hyperplane (the same central cuboctahedron) so their other angle of separation is 0.[lower-alpha 50]

94. V0 and V2 are two √3 chords apart on the geodesic path of this rotational isocline, but that is not the shortest geodesic path between them. In the 24-cell, it is impossible for two vertices to be more distant than one √3 chord, unless they are antipodal vertices √4 apart.[lower-alpha 37] V0 and V2 are one √3 chord apart on some other isocline, and just √1 apart on some great hexagon. Between V0 and V2, the isoclinic rotation has gone the long way around the 24-cell over two √3 chords to reach a vertex that was only √1 away. More generally, isoclines are geodesics because the distance between their adjacent vertices is the shortest distance between those two vertices in some rotation connecting them, but on the 3-sphere there may be another rotation which is shorter. A path between two vertices along a geodesic is not always the shortest distance between them (even on ordinary great circle geodesics).

95. P0 and P2 are 60° apart in both angles of separation.[lower-alpha 50] Clifford parallel planes are isoclinic (which means they are separated by two equal angles), and their corresponding vertices are all the same distance apart. Although V0 and V2 are two √3 chords apart,[lower-alpha 94] P0 and P2 are just one √1 edge apart (at every pair of nearest vertices).

96. Each half of a skew hexagram is an open triangle of three √3 chords, the two open ends of which are one √1 edge length apart. The two halves, like the whole isocline, have no inherent chirality but the same parity-color (black or white). The halves are the two opposite "edges" of a Möbius strip that is √1 wide; it actually has only one edge, which is a single continuous circle with 6 chords.

97. Departing from any vertex V0 in the original great hexagon plane of isoclinic rotation P0, the first vertex reached V1 is 120 degrees away along a √3 chord lying in a different hexagonal plane P1. P1 is inclined to P0 at a 60° angle.[lower-alpha 93] The second vertex reached V2 is 120 degrees beyond V1 along a second √3 chord lying in another hexagonal plane P2 that is Clifford parallel to P0.[lower-alpha 95] (Notice that V1 lies in both intersecting planes P1 and P2, as V0 lies in both P0 and P1. But P0 and P2 have no vertices in common; they do not intersect.) The third vertex reached V3 is 120 degrees beyond V2 along a third √3 chord lying in another hexagonal plane P3 that is Clifford parallel to P1. V0 and V3 are adjacent vertices, √1 apart.[lower-alpha 96] The three √3 chords lie in different 8-cells.[lower-alpha 22] V0 to V3 is a 360° isoclinic rotation, and one half of the 24-cell's double-loop hexagram2 Clifford polygon.[lower-alpha 85]

98. The composition of two simple 60° rotations in a pair of completely orthogonal invariant planes is a 60° isoclinic rotation in four pairs of completely orthogonal invariant planes.[lower-alpha 81] Thus the isoclinic rotation is the compound of four simple rotations, and all 24 vertices rotate in invariant hexagon planes, versus just 6 vertices in a simple rotation.
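The distances described in the preceding notes can be illustrated with explicit vertices (a distance check only; these three vertices need not be consecutive vertices of one particular isocline): take $V_{0}=(0,0,1,0)$, $V_{1}=(-{\tfrac {1}{2}},-{\tfrac {1}{2}},-{\tfrac {1}{2}},{\tfrac {1}{2}})$ and $V_{2}=({\tfrac {1}{2}},{\tfrac {1}{2}},{\tfrac {1}{2}},{\tfrac {1}{2}})$. Then $|V_{0}V_{1}|^{2}={\tfrac {1}{4}}+{\tfrac {1}{4}}+{\tfrac {9}{4}}+{\tfrac {1}{4}}=3$ and $|V_{1}V_{2}|^{2}=1+1+1+0=3$, two √3 chords, while $|V_{0}V_{2}|^{2}=4\cdot {\tfrac {1}{4}}=1$: a path of two √3 chords can end just one √1 edge from where it began.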
99. Because the 24-cell's helical hexagram2 geodesic is bent into a twisted ring in the fourth dimension like a Möbius strip, its screw thread doubles back across itself in each revolution, reversing its chirality[lower-alpha 85] but without ever changing its even/odd parity of rotation (black or white).[lower-alpha 91] The 6-vertex isoclinic path forms a Möbius double loop, like a 3-dimensional double helix with the ends of its two parallel 3-vertex helices cross-connected to each other. This 60° isocline[lower-alpha 104] is a skewed instance of the regular compound polygon denoted {6/2}=2{3} or hexagram2.[lower-alpha 96] Successive √3 chords belong to different 8-cells, as the 720° isoclinic rotation takes each hexagon through all six hexagons in the 6-cell ring, and each 8-cell through all three 8-cells twice.[lower-alpha 22] 100. Isoclinic geodesics or isoclines are 4-dimensional great circles in the sense that they are 1-dimensional geodesic lines that curve in 4-space in two completely orthogonal planes at once.[lower-alpha 105] They should not be confused with great 2-spheres,[16] which are the 4-dimensional analogues of great circles (great 1-spheres).[lower-alpha 44] Discrete isoclines are polygons;[lower-alpha 85] discrete great 2-spheres are polyhedra. 101. Isoclines on the 3-sphere occur in non-intersecting pairs of even/odd coordinate parity.[lower-alpha 91] A single black or white isocline forms a Möbius loop called the {1,1} torus knot or Villarceau circle[51] in which each of two "circles" linked in a Möbius "figure eight" loop traverses all four dimensions.[lower-alpha 85] The double loop is a true circle in four dimensions.[lower-alpha 80] Even and odd isoclines are also linked, not in a Möbius loop but as a Hopf link of two non-intersecting circles,[lower-alpha 35] as are all the Clifford parallel isoclines of a Hopf fiber bundle. 102. In a 720° isoclinic rotation of a rigid 24-cell the 24 vertices rotate along four separate Clifford parallel hexagram2 geodesic loops (six vertices circling in each loop) and return to their original positions.[lower-alpha 101] 103. The length of a strip can be measured at its centerline, or by cutting the resulting Möbius strip perpendicularly to its boundary so that it forms a rectangle. 104. A strip of paper can form a flattened Möbius strip in the plane by folding it at $60^{\circ }$ angles so that its center line lies along an equilateral triangle, and attaching the ends. The shortest strip for which this is possible consists of three equilateral paper triangles, folded at the edges where two triangles meet. Since the loop traverses both sides of each paper triangle, it is a hexagonal loop over six equilateral triangles. Its aspect ratio – the ratio of the strip's length[lower-alpha 103] to its width – is ${\sqrt {3}}\approx 1.73$. 105. All isoclines are geodesics, and isoclines on the 3-sphere are circles (curving equally in all four dimensions), but not all isoclines on 3-manifolds in 4-space are circles. 106. Each set of Clifford parallel great circle polygons is a different bundle of fibers from the corresponding set of Clifford parallel isocline[lower-alpha 47] polygrams, but the two fiber bundles together constitute the same discrete Hopf fibration, because they enumerate the 24 vertices together by their intersection in the same distinct (left or right) isoclinic rotation. They are the warp and woof of the same woven fabric that is the fibration. 107.
The choice of a partitioning of a regular 4-polytope into cell rings (a fibration) is arbitrary, because all of its cells are identical. No particular fibration is distinguished, unless the 4-polytope is rotating. Each fibration corresponds to a left-right pair of isoclinic rotations in a particular set of Clifford parallel invariant central planes of rotation. In the 24-cell, distinguishing a hexagonal fibration[lower-alpha 20] means choosing a cell-disjoint set of four 6-cell rings that is the unique container of a left-right pair of isoclinic rotations in four Clifford parallel hexagonal invariant planes. The left and right rotations take place in chiral subspaces of that container,[58] but the fibration and the octahedral cell rings themselves are not chiral objects.[lower-alpha 115] 108. All isoclinic planes are Clifford parallels (completely disjoint).[lower-alpha 25] Three- and four-dimensional concentric objects may intersect (sharing elements) but still be related by an isoclinic rotation. Polyhedra and 4-polytopes may be isoclinic and not disjoint, if all of their corresponding planes are either Clifford parallel, or cocellular (in the same hyperplane) or coincident (the same plane). 109. By generate we mean simply that some vertex of the first polytope will visit each vertex of the generated polytope in the course of the rotation. 110. Like a key operating a four-dimensional lock, an object must twist in two completely perpendicular tumbler cylinders in order to move the short distance between Clifford parallel subspaces. 111. Just as each face of a polyhedron occupies a different (2-dimensional) face plane, each cell of a polychoron occupies a different (3-dimensional) cell hyperplane.[lower-alpha 49] 112. There is a choice of planes in which to fold the column into a ring, but they are equivalent in that they produce congruent rings. Whichever folding planes are chosen, each of the six helices joins its own two ends and forms a simple great circle hexagon. These hexagons are not helices: they lie on ordinary flat great circles. Three of them are Clifford parallel[lower-alpha 35] and belong to one hexagonal fibration. They intersect the other three, which belong to another hexagonal fibration. The three parallel great circles of each fibration spiral around each other in the sense that they form a Hopf link of three ordinary circles, but they are not twisted: the 6-cell ring has no torsion, either clockwise or counterclockwise.[lower-alpha 115] 113. When unit-edge octahedra are placed face-to-face the distance between their centers of volume is √(2/3) ≈ 0.816.[56] When 24 face-bonded octahedra are bent into a 24-cell lying on the 3-sphere, the centers of the octahedra are closer together in 4-space. Within the curved 3-dimensional surface space filled by the 24 cells, the cell centers are still √(2/3) apart along the curved geodesics that join them. But on the straight chords that join them, which dip inside the 3-sphere, they are only 1/√2 ≈ 0.707 edge lengths apart. 114. The axial hexagon of the 6-octahedron ring does not intersect any vertices or edges of the 24-cell, but it does hit faces. In a unit-edge-length 24-cell, it has edges of length 1/√2.[lower-alpha 113] Because it joins six cell centers, the axial hexagon is a great hexagon of the smaller dual 24-cell that is formed by joining the 24 cell centers.[lower-alpha 61] 115.
Only one kind of 6-cell ring exists, not two different chiral kinds (right-handed and left-handed), because octahedra have opposing faces and form untwisted cell rings. In addition to two sets of three Clifford parallel[lower-alpha 35] great hexagons, three black and three white isoclinic hexagram geodesics run through the 6-cell ring.[lower-alpha 20] Each of these chiral skew hexagrams lies on a different kind of circle called an isocline,[lower-alpha 105] a helical circle winding through all four dimensions instead of lying in a single plane.[lower-alpha 47] These helical great circles occur in Clifford parallel fiber bundles just as ordinary planar great circles do. In the 6-cell ring, black and white hexagrams pass through even and odd vertices respectively, and miss the vertices in between, so the isoclines are disjoint.[lower-alpha 91] 116. The three great hexagons are Clifford parallel, which is different from ordinary parallelism.[lower-alpha 35] Clifford parallel great hexagons pass through each other like adjacent links of a chain, forming a Hopf link. Unlike links in a 3-dimensional chain, they share the same center point. In the 24-cell, Clifford parallel great hexagons occur in sets of four, not three. The fourth parallel hexagon lies completely outside the 6-cell ring; its 6 vertices are completely disjoint from the ring's 18 vertices. 117. In the column of 6 octahedral cells, we number the cells 0-5 going up the column. We also label each vertex with an integer 0-5 based on how many edge lengths it is up the column. 118. An isoclinic rotation by a multiple of 60° takes even-numbered octahedra in the ring to even-numbered octahedra, and odd-numbered octahedra to odd-numbered octahedra.[lower-alpha 117] It is impossible for an even-numbered octahedron to reach an odd-numbered octahedron, or vice versa, by a left or a right isoclinic rotation alone.[lower-alpha 91] 119. Two central planes in which the path bends 60° at the vertex are (a) the great hexagon plane that the chord before the vertex belongs to, and (b) its completely orthogonal great digon plane.[lower-alpha 53] Plane (b) contains only the vertex and its antipodal vertex: one axis of the 24-cell, which is also an axis of both great hexagon planes (a) and (c), where (c) is the plane that the chord after the vertex belongs to. The 60° angle of rotation in (b) is the angle between the great hexagon planes (a) and (c). In this 60° interval of the isoclinic rotation, great hexagon plane (a) rotates 60° itself and tilts 60° on the common axis of all three planes, to become great hexagon plane (c). The two great hexagon planes (a) and (c) are not mutually orthogonal (they are two central hexagons of the same cuboctahedron inclined at 60° to each other),[lower-alpha 19] but each is completely orthogonal to (b). 120. Each vertex of the 6-cell ring is intersected by two skew hexagrams of the same parity (black or white) belonging to different fibrations.[lower-alpha 115] 121. Each vertex of a 6-cell ring is missed by the two halves of the same Möbius double loop hexagram[lower-alpha 122], which curve past it on either side. 122. At each vertex there is only one adjacent great hexagon plane that the isocline can bend 60 degrees into: the isoclinic path is deterministic in the sense that it is linear, not branching, because each vertex in the cell ring is a place where just two of the six great hexagons contained in the cell ring cross.
If each great hexagon is given edges and chords of a particular color (as in the 6-cell ring illustration), we can name each great hexagon by its color, and each kind of vertex by a hyphenated two-color name. The cell ring contains 18 vertices named by the 9 unique two-color combinations; each vertex and its antipodal vertex have the same two colors in their name, since when two great hexagons intersect they do so at antipodal vertices. Each isoclinic skew hexagram[lower-alpha 96] contains one √3 chord of each color, and visits 6 of the 9 different color-pairs of vertex.[lower-alpha 120] Each 6-cell ring contains six such isoclinic skew hexagrams, three black and three white.[lower-alpha 121] 123. The √3 chord passes through the midpoint of one of the 24-cell's √1 radii. Since the 24-cell can be constructed, with its long radii, from √1 triangles which meet at its center,[lower-alpha 2] this is a mid-edge of one of the six √1 triangles in a great hexagon, as seen in the chord diagram. 124. Each pair of adjacent edges of a great hexagon has just one isocline curving alongside it,[lower-alpha 121] missing the vertex between the two edges (but not the way the √3 edge of the great triangle inscribed in the great hexagon misses the vertex,[lower-alpha 123] because the isocline is an arc on the surface, not a chord). If we number the vertices around the hexagon 0-5, the hexagon has three pairs of adjacent edges connecting even vertices (one inscribed great triangle), and three pairs connecting odd vertices (the other inscribed great triangle). Even and odd pairs of edges have the arc of a black and a white isocline respectively curving alongside.[lower-alpha 91] The three black and three white isoclines belong to the same 6-cell ring of the same fibration.[lower-alpha 122] 125. Each hexagram isocline hits only one end of an axis, unlike a great circle which hits both ends. Clifford parallel pairs of black and white isoclines from the same left-right pair of isoclinic rotations (the same fibration) do not intersect, but they hit opposite (antipodal) vertices of one of the 24-cell's 12 axes. 126. The isoclines themselves are not left or right, only the bundles are. Each isocline is left and right.[lower-alpha 85] 127. The 12 black-white pairs of hexagram isoclines in each fibration[lower-alpha 125] and the 16 distinct hexagram isoclines in the 24-cell form a Reye configuration 12₄16₃, just the way the 24-cell's 12 axes and 16 hexagons do. Each of the 12 black-white pairs occurs in one cell ring of each fibration of 4 hexagram isoclines, and each cell ring contains 3 black-white pairs of the 16 hexagram isoclines.
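The 12₄16₃ incidence counts just cited (12 axes and 16 great hexagons, each axis in 4 hexagons, each hexagon containing 3 axes) can be verified by brute force. In this sketch — ours, not from the cited sources — each of the 96 edges is completed to the unique great hexagon containing it:

```python
# Verify the Reye configuration 12_4 16_3 of the 24-cell's axes and
# great hexagons, using unit-radius coordinates.
from itertools import product, combinations

verts = [tuple(s if i == k else 0 for i in range(4))
         for k in range(4) for s in (1, -1)]
verts += [tuple(s / 2 for s in signs) for signs in product((1, -1), repeat=4)]
vset = set(verts)

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

edges = [(u, v) for u, v in combinations(verts, 2) if dist2(u, v) == 1]
assert len(edges) == 96

def neg(v): return tuple(-x for x in v)
def sub(u, v): return tuple(a - b for a, b in zip(u, v))

hexagons = set()
for u, v in edges:
    w = sub(v, u)        # u, v, v-u, -u, -v, u-v all lie on one central plane
    assert w in vset
    hexagons.add(frozenset([u, v, w, neg(u), neg(v), neg(w)]))

axes = {frozenset([v, neg(v)]) for v in verts}
assert len(hexagons) == 16 and len(axes) == 12
for h in hexagons:       # each great hexagon consists of 3 axes
    assert sum(a <= h for a in axes) == 3
for a in axes:           # each axis lies in 4 of the 16 great hexagons
    assert sum(a <= h for h in hexagons) == 4
print("Reye configuration 12_4 16_3 confirmed")
```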
128. For a regular k-polytope, the Coxeter-Dynkin diagram of the characteristic k-orthoscheme is the k-polytope's diagram without the generating point ring. The regular k-polytope is subdivided by its symmetry (k-1)-elements into g instances of its characteristic k-orthoscheme that surround its center, where g is the order of the k-polytope's symmetry group.[64] 129. The four edges of each 4-orthoscheme which meet at the center of the regular 4-polytope are of unequal length, because they are the four characteristic radii of the regular 4-polytope: a vertex radius, an edge center radius, a face center radius, and a cell center radius. The five vertices of the 4-orthoscheme always include one regular 4-polytope vertex, one regular 4-polytope edge center, one regular 4-polytope face center, one regular 4-polytope cell center, and the regular 4-polytope center. Those five vertices (in that order) comprise a path along four mutually perpendicular edges (that makes three right angle turns), the characteristic feature of a 4-orthoscheme. The 4-orthoscheme has five dissimilar 3-orthoscheme facets. 130. The reflecting surface of a (3-dimensional) polyhedron consists of 2-dimensional faces; the reflecting surface of a (4-dimensional) polychoron consists of 3-dimensional cells. 131. Let Q denote a rotation, R a reflection, T a translation, and let $Q^{q}R^{r}T$ denote a product of several such transformations, all commutative with one another. Then RT is a glide-reflection (in two or three dimensions), QR is a rotary-reflection, QT is a screw-displacement, and $Q^{2}$ is a double rotation (in four dimensions). Every orthogonal transformation is expressible as $Q^{q}R^{r}$, where $2q+r\leq n$, the number of dimensions.[67] Citations 1. Coxeter 1973, p. 118, Chapter VII: Ordinary Polytopes in Higher Space. 2. Johnson 2018, p. 249, 11.5. 3. Ghyka 1977, p. 68. 4. Coxeter 1973, p. 289, Epilogue; "Another peculiarity of four-dimensional space is the occurrence of the 24-cell {3,4,3}, which stands quite alone, having no analogue above or below." 5. Coxeter 1995, p. 25, (Paper 3) Two aspects of the regular 24-cell in four dimensions. 6. Coxeter 1968, p. 70, §4.12 The Classification of Zonohedra. 7. Coxeter 1973, pp. 292–293, Table I(ii): The sixteen regular polytopes {p,q,r} in four dimensions; An invaluable table providing all 20 metrics of each 4-polytope in edge length units. They must be algebraically converted to compare polytopes of unit radius. 8. Coxeter 1973, p. 302, Table VI (ii): Π = {3,4,3}: see Result column 9. Coxeter 1973, p. 156, §8.7. Cartesian Coordinates. 10. Coxeter 1973, pp. 145–146, §8.1 The simple truncations of the general regular polytope. 11. Waegell & Aravind 2009, pp. 4–5, §3.4 The 24-cell: points, lines and Reye's configuration; In the 24-cell Reye's "points" and "lines" are axes and hexagons, respectively. 12. Coxeter 1973, p. 298, Table V: The Distribution of Vertices of Four-Dimensional Polytopes in Parallel Solid Sections (§13.1); (i) Sections of {3,4,3} (edge 2) beginning with a vertex; see column a. 13. Stillwell 2001, p. 17. 14. Tyrrell & Semple 1971, pp. 5–6, §3. Clifford's original definition of parallelism. 15. Kim & Rote 2016, pp. 8–10, Relations to Clifford Parallelism. 16. Stillwell 2001, p. 24. 17. Copher 2019, p. 6, §3.2 Theorem 3.4. 18. Kim & Rote 2016, p. 7, §6 Angles between two Planes in 4-Space; "In four (and higher) dimensions, we need two angles to fix the relative position between two planes. (More generally, k angles are defined between k-dimensional subspaces.)". 19. Coxeter 1973, p. 153, 8.5. Gosset's construction for {3,3,5}: "In fact, the vertices of {3,3,5}, each taken 5 times, are the vertices of 25 {3,4,3}'s." 20. Coxeter 1973, p. 304, Table VI(iv) Π = {5,3,3}: Faceting {5,3,3}[120α₄]{3,3,5} of the 120-cell reveals 120 regular 5-cells. 21. Egan 2021, animation of a rotating 24-cell: red half-integer vertices (tesseract), yellow and black integer vertices (16-cell). 22. Coxeter 1973, p. 150, Gosset. 23. Coxeter 1973, p. 148, §8.2. Cesaro's construction for {3,4,3}. 24. Coxeter 1973, p. 302, Table VI(ii) Π = {3,4,3}, Result column. 25. Coxeter 1973, pp. 149–150, §8.22. see illustrations Fig. 8.2A and Fig 8.2B 26. Coxeter 1995, p.
29, (Paper 3) Two aspects of the regular 24-cell in four dimensions; "The common content of the 4-cube and the 16-cell is a smaller {3,4,3} whose vertices are the permutations of [(±1/2, ±1/2, 0, 0)]". 27. Coxeter 1973, p. 147, §8.1 The simple truncations of the general regular polytope; "At a point of contact, [elements of a regular polytope and elements of its dual in which it is inscribed in some manner] lie in completely orthogonal subspaces[lower-alpha 7] of the tangent hyperplane to the sphere [of reciprocation], so their only common point is the point of contact itself[lower-alpha 11].... In fact, the [various] radii 0R, 1R, 2R, ... determine the polytopes ... whose vertices are the centers of elements Π₀, Π₁, Π₂, ... of the original polytope." 28. Kepler 1619, p. 181. 29. van Ittersum 2020, pp. 73–79, §4.2. 30. Coxeter 1973, p. 269, §14.32. "For instance, in the case of $\gamma _{4}[2\beta _{4}]$...." 31. van Ittersum 2020, p. 79. 32. Coxeter 1973, p. 150: "Thus the 24 cells of the {3, 4, 3} are dipyramids based on the 24 squares of the $\gamma _{4}$. (Their centres are the mid-points of the 24 edges of the $\beta _{4}$.)" 33. Coxeter 1973, p. 12, §1.8. Configurations. 34. Coxeter 1973, p. 120, §7.2.: "... any n+1 points which do not lie in an (n-1)-space are the vertices of an n-dimensional simplex.... Thus the general simplex may alternatively be defined as a finite region of n-space enclosed by n+1 hyperplanes or (n-1)-spaces." 35. van Ittersum 2020, p. 78, §4.2.5. 36. Stillwell 2001, pp. 18–21. 37. Egan 2021; quaternions, the binary tetrahedral group and the binary octahedral group, with rotating illustrations. 38. Stillwell 2001, p. 22. 39. Coxeter 1973, p. 163: Coxeter notes that Thorold Gosset was apparently the first to see that the cells of the 24-cell honeycomb {3,4,3,3} are concentric with alternate cells of the tesseractic honeycomb {4,3,3,4}, and that this observation enabled Gosset's method of construction of the complete set of regular polytopes and honeycombs. 40. Coxeter 1973, p. 156: "...the chess-board has an n-dimensional analogue." 41. Mamone, Pileio & Levitt 2010, pp. 1438–1439, §4.5 Regular Convex 4-Polytopes; the 24-cell has 1152 symmetry operations (rotations and reflections) as enumerated in Table 2, symmetry group F4. 42. Coxeter 1973, p. 119, §7.1. Dimensional Analogy: "For instance, seeing that the circumference of a circle is 2πr, while the surface of a sphere is 4πr², ... it is unlikely that the use of analogy, unaided by computation, would ever lead us to the correct expression [for the hyper-surface of a hyper-sphere], 2π²r³." 43. Kim & Rote 2016, p. 6, §5. Four-Dimensional Rotations. 44. Perez-Gracia & Thomas 2017, §7. Conclusions; "Rotations in three dimensions are determined by a rotation axis and the rotation angle about it, where the rotation axis is perpendicular to the plane in which points are being rotated. The situation in four dimensions is more complicated. In this case, rotations are determined by two orthogonal planes and two angles, one for each plane. Cayley proved that a general 4D rotation can always be decomposed into two 4D rotations, each of them being determined by two equal rotation angles up to a sign change." 45. Perez-Gracia & Thomas 2017. 46. Perez-Gracia & Thomas 2017, pp. 12−13, §5. A useful mapping. 47. Coxeter 1995, pp. 30–32, (Paper 3) Two aspects of the regular 24-cell in four dimensions; §3.
The Dodecagonal Aspect;[lower-alpha 84] Coxeter considers the 150°/30° double rotation of period 12 which locates 12 of the 225 distinct 24-cells inscribed in the 120-cell, a regular 4-polytope with 120 dodecahedral cells that is the convex hull of the compound of 25 disjoint 24-cells. 48. Perez-Gracia & Thomas 2017, pp. 2−3, §2. Isoclinic rotations. 49. Kim & Rote 2016, pp. 7–10, §6. Angles between two Planes in 4-Space. 50. Coxeter 1973, p. 141, §7.x. Historical remarks; "Möbius realized, as early as 1827, that a four-dimensional rotation would be required to bring two enantiomorphous solids into coincidence. This idea was neatly deployed by H. G. Wells in The Plattner Story." 51. Dorst 2019, p. 44, §1. Villarceau Circles; "In mathematics, the path that the (1, 1) knot on the torus traces is also known as a Villarceau circle. Villarceau circles are usually introduced as two intersecting circles that are the cross-section of a torus by a well-chosen plane cutting it. Picking one such circle and rotating it around the torus axis, the resulting family of circles can be used to rule the torus. By nesting tori smartly, the collection of all such circles then form a Hopf fibration.... we prefer to consider the Villarceau circle as the (1, 1) torus knot rather than as a planar cut." 52. Kim & Rote 2016, pp. 8–9, Relations to Clifford parallelism. 53. Kim & Rote 2016, p. 8, Left and Right Pairs of Isoclinic Planes. 54. Tyrrell & Semple 1971, pp. 1–9, §1. Introduction. 55. Tyrrell & Semple 1971, pp. 20–33, Clifford Parallel Spaces and Clifford Reguli. 56. Coxeter 1973, pp. 292–293, Table I(i): Octahedron. 57. Kim & Rote 2016, pp. 14–16, §8.3 Properties of the Hopf Fibration; Corollary 9. Every great circle belongs to a unique right [(and left)] Hopf bundle. 58. Kim & Rote 2016, p. 12, §8 The Construction of Hopf Fibrations; 3. 59. Tyrrell & Semple 1971, pp. 34–57, Linear Systems of Clifford Parallels. 60. Coxeter 1973, pp. 292–293, Table I(ii); 24-cell h1 is {12}, h2 is {12/5}. 61. Coxeter 1973, pp. 292–293, Table I(ii); "24-cell". 62. Coxeter 1973, p. 139, §7.9 The characteristic simplex. 63. Coxeter 1973, p. 290, Table I(ii); "dihedral angles". 64. Coxeter 1973, pp. 130–133, §7.6 The symmetry group of the general regular polytope. 65. Kim & Rote 2016, pp. 17–20, §10 The Coxeter Classification of Four-Dimensional Point Groups. 66. Coxeter 1973, pp. 33–38, §3.1 Congruent transformations. 67. Coxeter 1973, p. 217, §12.2 Congruent transformations. 68. Coxeter 1973, p. 138; "We allow the Schläfli symbol {p,..., v} to have three different meanings: a Euclidean polytope, a spherical polytope, and a spherical honeycomb. This need not cause any confusion, so long as the situation is frankly recognized. The differences are clearly seen in the concept of dihedral angle." 69. Coxeter 1970, p. 18, §8. The simplex, cube, cross-polytope and 24-cell; Coxeter studied cell rings in the general case of their geometry and group theory, identifying each cell ring as a polytope in its own right which fills a three-dimensional manifold (such as the 3-sphere) with its corresponding honeycomb. He found that cell rings follow Petrie polygons[lower-alpha 84] and some (but not all) cell rings and their honeycombs are twisted, occurring in left- and right-handed chiral forms. 
Specifically, he found that since the 24-cell's octahedral cells have opposing faces, the cell rings in the 24-cell are of the non-chiral (directly congruent) kind.[lower-alpha 115] Therefore each of the 24-cell's cell rings has its corresponding honeycomb in Euclidean (rather than hyperbolic) space, so the 24-cell tiles 4-dimensional Euclidean space by translation to form the 24-cell honeycomb. 70. Banchoff 2013 studied the decomposition of regular 4-polytopes into honeycombs of tori tiling the Clifford torus, showed how the honeycombs correspond to Hopf fibrations, and made a particular study of the 24-cell's 4 rings of 6 octahedral cells with illustrations. 71. Banchoff 2013, pp. 265–266. 72. Coxeter 1991. References • Kepler, Johannes (1619). Harmonices Mundi (The Harmony of the World). Johann Planck. • Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). New York: Dover. • Coxeter, H.S.M. (1991), Regular Complex Polytopes (2nd ed.), Cambridge: Cambridge University Press • Coxeter, H.S.M. (1995), Sherk, F. Arthur; McMullen, Peter; Thompson, Anthony C.; Weiss, Asia Ivic (eds.), Kaleidoscopes: Selected Writings of H.S.M. Coxeter (2nd ed.), Wiley-Interscience Publication, ISBN 978-0-471-01003-6 • (Paper 3) H.S.M. Coxeter, Two aspects of the regular 24-cell in four dimensions • (Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Coxeter, H.S.M. (1968). The Beauty of Geometry: Twelve Essays (2nd ed.). New York: Dover. • Coxeter, H.S.M. (1989). "Trisecting an Orthoscheme". Computers Math. Applic. 17 (1–3): 59–71. doi:10.1016/0898-1221(89)90148-X. • Coxeter, H.S.M. (1970), "Twisted Honeycombs", Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics, Providence, Rhode Island: American Mathematical Society, 4 • Stillwell, John (January 2001). "The Story of the 120-Cell" (PDF). Notices of the AMS. 48 (1): 17–25. • Johnson, Norman (2018), Geometries and Transformations, Cambridge: Cambridge University Press, ISBN 978-1-107-10340-5 • Johnson, Norman (1991), Uniform Polytopes (Manuscript ed.) • Johnson, Norman (1966), The Theory of Uniform Polytopes and Honeycombs (Ph.D. ed.) • Weisstein, Eric W. "24-Cell". MathWorld. (also under Icositetrachoron) • Klitzing, Richard. "4D uniform polytopes (polychora) x3o4o3o - ico". • Ghyka, Matila (1977). The Geometry of Art and Life. New York: Dover Publications. ISBN 978-0-486-23542-4. • Banchoff, Thomas F. (2013). "Torus Decompositions of Regular Polytopes in 4-space". In Senechal, Marjorie (ed.). Shaping Space. Springer New York. pp. 257–266. doi:10.1007/978-0-387-92714-5_20. ISBN 978-0-387-92713-8. • Copher, Jessica (2019). "Sums and Products of Regular Polytopes' Squared Chord Lengths". arXiv:1903.06971 [math.MG]. • van Ittersum, Clara (2020). Symmetry groups of regular polytopes in three and four dimensions (Thesis). Delft University of Technology. • Kim, Heuna; Rote, G. (2016). "Congruence Testing of Point Sets in 4 Dimensions". arXiv:1603.07269 [cs.CG]. • Perez-Gracia, Alba; Thomas, Federico (2017). "On Cayley's Factorization of 4D Rotations and Applications" (PDF). Adv. Appl. Clifford Algebras. 27: 523–538. doi:10.1007/s00006-016-0683-9. hdl:2117/113067. S2CID 12350382. • Waegell, Mordecai; Aravind, P. K. (2009-11-12).
"Critical noncolorings of the 600-cell proving the Bell-Kochen-Specker theorem". Journal of Physics A: Mathematical and Theoretical. 43 (10): 105304. arXiv:0911.2289. doi:10.1088/1751-8113/43/10/105304. S2CID 118501180. • Tyrrell, J. A.; Semple, J.G. (1971). Generalized Clifford parallelism. Cambridge University Press. ISBN 0-521-08042-8. • Egan, Greg (23 December 2021). "Symmetries and the 24-cell". gregegan.net. Retrieved 10 October 2022. • Mamone, Salvatore; Pileio, Giuseppe; Levitt, Malcolm H. (2010). "Orientational Sampling Schemes Based on Four Dimensional Polytopes". Symmetry. 2 (3): 1423–1449. Bibcode:2010Symm....2.1423M. doi:10.3390/sym2031423. • Dorst, Leo (2019). "Conformal Villarceau Rotors". Advances in Applied Clifford Algebras. 29 (44). doi:10.1007/s00006-019-0960-5. S2CID 253592159. External links • 24-cell animations • 24-cell in stereographic projections • 24-cell description and diagrams Archived 2007-07-15 at the Wayback Machine • Petrie dodecagons in the 24-cell: mathematics and animation software Regular 4-polytopes Convex 5-cell8-cell16-cell24-cell120-cell600-cell • {3,3,3} • pentachoron • 4-simplex • {4,3,3} • tesseract • 4-cube • {3,3,4} • hexadecachoron • 4-orthoplex • {3,4,3} • icositetrachoron • octaplex • {5,3,3} • hecatonicosachoron • dodecaplex • {3,3,5} • hexacosichoron • tetraplex Star icosahedral 120-cell small stellated 120-cell great 120-cell grand 120-cell great stellated 120-cell grand stellated 120-cell great grand 120-cell great icosahedral 120-cell grand 600-cell great grand stellated 120-cell • {3,5,5/2} • icosaplex • {5/2,5,3} • stellated dodecaplex • {5,5/2,5} • great dodecaplex • {5,3,5/2} • grand dodecaplex • {5/2,3,5} • great stellated dodecaplex • {5/2,5,5/2} • grand stellated dodecaplex • {5,5/2,3} • great grand dodecaplex • {3,5/2,5} • great icosaplex • {3,3,5/2} • grand tetraplex • {5/2,3,3} • great grand stellated dodecaplex Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
24-cell honeycomb

In four-dimensional Euclidean geometry, the 24-cell honeycomb, or icositetrachoric honeycomb, is a regular space-filling tessellation (or honeycomb) of 4-dimensional Euclidean space by regular 24-cells. It can be represented by Schläfli symbol {3,4,3,3}.

24-cell honeycomb [Image: A 24-cell and first layer of its adjacent 4-faces.] • Type: Regular 4-honeycomb, Uniform 4-honeycomb • Schläfli symbols: {3,4,3,3}, r{3,3,4,3}, 2r{4,3,3,4}, 2r{4,3,3^{1,1}}, {3^{1,1,1,1}} • 4-face type: {3,4,3} • Cell type: {3,4} • Face type: {3} • Edge figure: {3,3} • Vertex figure: {4,3,3} • Dual: {3,3,4,3} • Coxeter groups: ${\tilde {F}}_{4}$, [3,4,3,3]; ${\tilde {C}}_{4}$, [4,3,3,4]; ${\tilde {B}}_{4}$, [4,3,3^{1,1}]; ${\tilde {D}}_{4}$, [3^{1,1,1,1}] • Properties: regular

Its dual tessellation, the regular 16-cell honeycomb, has Schläfli symbol {3,3,4,3}. Together with the tesseractic honeycomb (or 4-cubic honeycomb) these are the only regular tessellations of Euclidean 4-space.

Coordinates

The 24-cell honeycomb can be constructed as the Voronoi tessellation of the D4 or F4 root lattice. Each 24-cell is then centered at a D4 lattice point, i.e. one of $\left\{(x_{i})\in \mathbb {Z} ^{4}:\textstyle \sum _{i}x_{i}\equiv 0\;({\text{mod }}2)\right\}.$ These points can also be described as Hurwitz quaternions with even square norm. The vertices of the honeycomb lie at the deep holes of the D4 lattice. These are the Hurwitz quaternions with odd square norm. It can be constructed as a birectified tesseractic honeycomb, by taking a tesseractic honeycomb and placing vertices at the centers of all the square faces. The 24-cell facets exist between these vertices as rectified 16-cells. If the coordinates of the tesseractic honeycomb are integers (i,j,k,l), the birectified tesseractic honeycomb vertices can be placed at all permutations of half-unit shifts in two of the four dimensions, thus: (i+½,j+½,k,l), (i+½,j,k+½,l), (i+½,j,k,l+½), (i,j+½,k+½,l), (i,j+½,k,l+½), (i,j,k+½,l+½).

Configuration

Each 24-cell in the 24-cell honeycomb has 24 neighboring 24-cells. With each neighbor it shares exactly one octahedral cell. It has 24 more neighbors such that with each of these it shares a single vertex. It has no neighbors with which it shares only an edge or only a face. (These two shells of 24 neighbors are checked numerically in the sketch below.) The vertex figure of the 24-cell honeycomb is a tesseract (4-dimensional cube), so there are 16 edges, 32 triangles, 24 octahedra, and 8 24-cells meeting at every vertex. The edge figure is a tetrahedron, so there are 4 triangles, 6 octahedra, and 4 24-cells surrounding every edge. Finally, the face figure is a triangle, so there are 3 octahedra and 3 24-cells meeting at every face.

Cross-sections

One way to visualize a 4-dimensional figure is to consider various 3-dimensional cross-sections, that is, the intersection of various hyperplanes with the figure in question. Applying this technique to the 24-cell honeycomb gives rise to various 3-dimensional honeycombs with varying degrees of regularity. [Figures: vertex-first sections — rhombic dodecahedral honeycomb and cubic honeycomb; cell-first sections — rectified cubic honeycomb and bitruncated cubic honeycomb.] A vertex-first cross-section uses some hyperplane orthogonal to a line joining opposite vertices of one of the 24-cells. For instance, one could take any of the coordinate hyperplanes in the coordinate system given above (i.e. the planes determined by xi = 0). The cross-section of {3,4,3,3} by one of these hyperplanes gives a rhombic dodecahedral honeycomb.
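The neighbor counts stated in the Configuration section follow directly from the integer-coordinate model given under Coordinates. A minimal Python check (our sketch, not part of the source article) counts D4 lattice points by squared distance from a 24-cell center at the origin:

```python
# Around each D4 lattice point (24-cell center) there are 24 lattice
# points at squared distance 2 (cell-sharing neighbors) and 24 more at
# squared distance 4 (vertex-sharing neighbors). A small coordinate box
# suffices to enumerate these two shells.
from itertools import product
from collections import Counter

shells = Counter(
    sum(x * x for x in p)
    for p in product(range(-2, 3), repeat=4)
    if sum(p) % 2 == 0 and any(p)     # nonzero points with even coordinate sum
)
assert shells[2] == 24   # permutations of (+/-1, +/-1, 0, 0)
assert shells[4] == 24   # (+/-2, 0, 0, 0) type (8) plus (+/-1, +/-1, +/-1, +/-1) (16)
print("cell-sharing neighbors:", shells[2], "| vertex-sharing neighbors:", shells[4])
```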
Each of the rhombic dodecahedra corresponds to a maximal cross-section of one of the 24-cells intersecting the hyperplane (the center of each such (4-dimensional) 24-cell lies in the hyperplane). Accordingly, the rhombic dodecahedral honeycomb is the Voronoi tessellation of the D3 root lattice (a face-centered cubic lattice). Shifting this hyperplane halfway to one of the vertices (e.g. xi = ½) gives rise to a regular cubic honeycomb. In this case the center of each 24-cell lies off the hyperplane. Shifting again, so the hyperplane passes through a vertex, gives another rhombic dodecahedral honeycomb but with new 24-cells (the former ones having shrunk to points). In general, for any integer n, the cross-section through xi = n is a rhombic dodecahedral honeycomb, and the cross-section through xi = n + ½ is a cubic honeycomb. As the hyperplane moves through 4-space, the cross-section morphs between the two periodically. A cell-first cross-section uses some hyperplane parallel to one of the octahedral cells of a 24-cell. Consider, for instance, some hyperplane orthogonal to the vector (1,1,0,0). The cross-section of {3,4,3,3} by this hyperplane is a rectified cubic honeycomb. Each cuboctahedron in this honeycomb is a maximal cross-section of a 24-cell whose center lies in the plane. Meanwhile, each octahedron is a boundary cell of a (4-dimensional) 24-cell whose center lies off the plane. Shifting this hyperplane until it lies halfway between the center of a 24-cell and the boundary, one obtains a bitruncated cubic honeycomb. The cuboctahedra have shrunk, and the octahedra have grown until they are both truncated octahedra. Shifting again, so the hyperplane intersects the boundary of the central 24-cell, gives a rectified cubic honeycomb again, the cuboctahedra and octahedra having swapped positions. As the hyperplane sweeps through 4-space, the cross-section morphs between these two honeycombs periodically.

Kissing number

If a 3-sphere is inscribed in each hypercell of this tessellation, the resulting arrangement is the densest known[note 1] regular sphere packing in four dimensions, with kissing number 24. The packing density of this arrangement is ${\frac {\pi ^{2}}{16}}\cong 0.61685.$ Each inscribed 3-sphere kisses 24 others at the centers of the octahedral facets of its 24-cell, since each such octahedral cell is shared with an adjacent 24-cell. In a unit-edge-length tessellation, the diameter of the spheres (the distance between the centers of kissing spheres) is √2. Just outside this surrounding shell of 24 kissing 3-spheres is another less dense shell of 24 3-spheres which do not kiss each other or the central 3-sphere; they are inscribed in 24-cells with which the central 24-cell shares only a single vertex (rather than an octahedral cell). The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is 2. Alternatively, the same sphere packing arrangement with kissing number 24 can be carried out with smaller 3-spheres of edge-length diameter, by locating them at the centers and the vertices of the 24-cells. (This is equivalent to locating them at the vertices of a 16-cell honeycomb of unit edge length.) In this case the central 3-sphere kisses 24 others at the centers of the cubical facets of the three tesseracts inscribed in the 24-cell. (This is the unique body-centered cubic packing of edge-length spheres of the tesseractic honeycomb.)
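The packing density quoted above is straightforward to reproduce. The sketch below is ours and assumes two standard facts: the D4 lattice of 24-cell centers has one point per volume 2 in R⁴, and a 4-ball of radius r has volume (π²/2)r⁴.

```python
# Verify the packing density pi^2/16 of the 24-cell honeycomb packing.
import math

r = math.sqrt(2) / 2                  # sphere radius: kissing centers are sqrt(2) apart
ball = (math.pi ** 2 / 2) * r ** 4    # volume of one inscribed 3-sphere (a 4-ball)
density = ball / 2                    # one sphere per fundamental domain of volume 2
assert math.isclose(density, math.pi ** 2 / 16)
print(f"packing density = {density:.5f}")   # 0.61685...
```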
Just outside this shell of kissing 3-spheres of diameter 1 is another less dense shell of 24 non-kissing 3-spheres of diameter 1; they are centered in the adjacent 24-cells with which the central 24-cell shares an octahedral facet. The center-to-center distance between one of these spheres and any of its shell neighbors or the central sphere is √2.

Symmetry constructions

There are five different Wythoff constructions of this tessellation as a uniform polytope. They are geometrically identical to the regular form, but the symmetry differences can be represented by colored 24-cell facets. In all cases, eight 24-cells meet at each vertex, but the vertex figures have different symmetry generators:
• ${\tilde {F}}_{4}$ = [3,4,3,3], {3,4,3,3}: facets in one class of 8; vertex figure symmetry order 384
• [3,3,4,3], r{3,3,4,3}: facets in classes of 6 and 2; vertex figure symmetry order 96
• ${\tilde {C}}_{4}$ = [4,3,3,4], 2r{4,3,3,4}: facets in classes of 4 and 4; vertex figure symmetry order 64
• ${\tilde {B}}_{4}$ = [4,3,3^{1,1}], 2r{4,3,3^{1,1}}: facets in classes of 2, 2 and 4; vertex figure symmetry order 32
• ${\tilde {D}}_{4}$ = [3^{1,1,1,1}], {3^{1,1,1,1}}: facets in four classes of 2; vertex figure symmetry order 16

See also

Other uniform honeycombs in 4-space: • Truncated 5-cell honeycomb • Omnitruncated 5-cell honeycomb • Truncated 24-cell honeycomb • Rectified 24-cell honeycomb • Snub 24-cell honeycomb

Notes 1. The sphere packing problem and the kissing number problem are remarkably difficult and optimal solutions are only known in 1, 2, 3, 8, and 24 dimensions (plus dimension 4 for the kissing number problem).
Rectified 24-cell

In geometry, the rectified 24-cell or rectified icositetrachoron is a uniform 4-dimensional polytope (or uniform 4-polytope), which is bounded by 48 cells: 24 cubes and 24 cuboctahedra. It can be obtained by rectification of the 24-cell, reducing its octahedral cells to cubes and cuboctahedra.[1]

Rectified 24-cell [Image: Schlegel diagram, 8 of 24 cuboctahedral cells shown.] • Type: Uniform 4-polytope • Schläfli symbols: r{3,4,3}, rr{3,3,4}, r{3^{1,1,1}} • Cells: 48 (24 cuboctahedra 3.4.3.4 and 24 cubes 4.4.4) • Faces: 240 (96 {3}, 144 {4}) • Edges: 288 • Vertices: 96 • Vertex figure: triangular prism • Symmetry groups: F4 [3,4,3], order 1152; B4 [3,3,4], order 384; D4 [3^{1,1,1}], order 192 • Properties: convex, edge-transitive • Uniform indices: 22, 23, 24

E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as tC24. It can also be considered a cantellated 16-cell with the lower symmetry B4 = [3,3,4]; B4 leads to a bicoloring of the cuboctahedral cells into sets of 8 and 16. It is also called a runcicantellated demitesseract in D4 symmetry, giving 3 colors of cells, 8 for each.

Construction

The rectified 24-cell can be derived from the 24-cell by the process of rectification: the 24-cell is truncated at the midpoints. The vertices become cubes, while the octahedra become cuboctahedra.

Cartesian coordinates

A rectified 24-cell having an edge length of √2 has vertices given by all permutations and sign permutations of the following Cartesian coordinates: (0,1,1,2) [4!/2! × 2³ = 96 vertices]. The dual configuration with edge length 2 has all coordinate and sign permutations of: (0,2,2,2) [4 × 2³ = 32 vertices] and (1,1,1,3) [4 × 2⁴ = 64 vertices]. (The vertex and edge counts implied by these coordinates are verified in the sketch below.)

Images

[Figures: orthographic projections in the F4 [12], B3/A2 [6] (a and b), B4 [8], and B2/A3 [4] Coxeter planes; stereographic projection with the 96 triangular faces in blue.]

Symmetry constructions

There are three different symmetry constructions of this polytope. The lowest ${D}_{4}$ construction can be doubled into ${C}_{4}$ by adding a mirror that maps the bifurcating nodes onto each other. ${D}_{4}$ can be mapped up to ${F}_{4}$ symmetry by adding two mirrors that map all three end nodes together. The vertex figure is a triangular prism, containing two cubes and three cuboctahedra. The three symmetries can be seen with 3 colored cuboctahedra in the lowest ${D}_{4}$ construction, two colors (1:2 ratio) in ${C}_{4}$, and all identical cuboctahedra in ${F}_{4}$:
• ${F}_{4}$ = [3,4,3], order 1152; full symmetry group [3,4,3]
• ${C}_{4}$ = [4,3,3], order 384; full symmetry group [4,3,3]
• ${D}_{4}$ = [3,3^{1,1}], order 192; full symmetry group <[3,3^{1,1}]> = [4,3,3], [3[3^{1,1,1}]] = [3,4,3]

Alternate names • Rectified 24-cell, Cantellated 16-cell (Norman Johnson) • Rectified icositetrachoron (Acronym rico) (George Olshevsky, Jonathan Bowers) • Cantellated hexadecachoron • Disicositetrachoron • Amboicositetrachoron (Neil Sloane & John Horton Conway)

Related polytopes

The convex hull of the rectified 24-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 192 cells (48 cubes and 144 square antiprisms), with 192 vertices. Its vertex figure is a triangular bifrustum.
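Returning to the Cartesian coordinates given above: the following sketch (ours, not from the sources) enumerates the 96 vertices and counts the 288 edges of length √2 — six per vertex, matching the triangular-prism vertex figure.

```python
# Verify the rectified 24-cell's vertex and edge counts from the
# coordinates (0,1,1,2): 96 vertices, 288 edges of squared length 2.
from itertools import permutations, product, combinations

verts = {tuple(s * x for s, x in zip(signs, perm))
         for perm in permutations((0, 1, 1, 2))
         for signs in product((1, -1), repeat=4)}
assert len(verts) == 96

edges = [(u, v) for u, v in combinations(sorted(verts), 2)
         if sum((a - b) ** 2 for a, b in zip(u, v)) == 2]
assert len(edges) == 288
degree = 2 * len(edges) / len(verts)    # 6 edges at each vertex
print(f"{len(verts)} vertices, {len(edges)} edges, degree {degree:g}")
```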
Related uniform polytopes

[Tables: D4 uniform polychora, relating the {3,3^{1,1}} family to the tesseract and 24-cell families; 24-cell family polytopes — 24-cell {3,4,3}, truncated t{3,4,3}, snub s{3,4,3}, rectified r{3,4,3}, cantellated rr{3,4,3}, bitruncated 2t{3,4,3}, cantitruncated tr{3,4,3}, runcinated t0,3{3,4,3}, runcitruncated t0,1,3{3,4,3}, and omnitruncated t0,1,2,3{3,4,3}.]

The rectified 24-cell can also be derived as a cantellated 16-cell:

[Tables: B4 symmetry polytopes — the tesseract {4,3,3} and 16-cell {3,3,4} families and their truncations.]

Citations 1. Coxeter 1973, p. 154, §8.4. References • T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900 • Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). New York: Dover. • John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 (Chapter 26. pp. 409: Hemicubes: 1n1) • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966) • 2. Convex uniform polychora based on the tesseract (8-cell) and hexadecachoron (16-cell) - Model 23, George Olshevsky. • 3. Convex uniform polychora based on the icositetrachoron (24-cell) - Model 23, George Olshevsky. • 7. Uniform polychora derived from glomeric tetrahedron B4 - Model 23, George Olshevsky. • Klitzing, Richard. "4D uniform polytopes (polychora) o3x4o3o - rico".
Rectified 24-cell honeycomb

In four-dimensional Euclidean geometry, the rectified 24-cell honeycomb is a uniform space-filling honeycomb. It is constructed by a rectification of the regular 24-cell honeycomb, containing tesseract and rectified 24-cell cells.

Rectified 24-cell honeycomb (no image) • Type: Uniform 4-honeycomb • Schläfli symbols: r{3,4,3,3}, rr{3,3,4,3}, r2r{4,3,3,4}, r2r{4,3,3^{1,1}} • 4-face types: tesseract, rectified 24-cell • Cell types: cube, cuboctahedron • Face types: square, triangle • Vertex figure: tetrahedral prism • Coxeter groups: ${\tilde {F}}_{4}$, [3,4,3,3]; ${\tilde {C}}_{4}$, [4,3,3,4]; ${\tilde {B}}_{4}$, [4,3,3^{1,1}]; ${\tilde {D}}_{4}$, [3^{1,1,1,1}] • Properties: vertex-transitive

Alternate names • Rectified icositetrachoric tetracomb • Rectified icositetrachoric honeycomb • Cantellated 16-cell honeycomb • Bicantellated tesseractic honeycomb

Symmetry constructions

There are five different symmetry constructions of this tessellation. Each symmetry can be represented by different arrangements of colored rectified 24-cell and tesseract facets. The tetrahedral prism vertex figure contains 4 rectified 24-cells capped by two opposite tesseracts:
• ${\tilde {F}}_{4}$ = [3,4,3,3]: facets in classes of 4 and 1; vertex figure symmetry [3,3,2], order 48
• [3,3,4,3]: facets in classes of 3, 1 and 1; vertex figure symmetry [3,2], order 12
• ${\tilde {C}}_{4}$ = [4,3,3,4]: facets in classes of 2, 2 and 1; vertex figure symmetry [2,2], order 8
• ${\tilde {B}}_{4}$ = [3^{1,1},3,4]: facets in classes of 1, 1, 2 and 1; vertex figure symmetry [2], order 4
• ${\tilde {D}}_{4}$ = [3^{1,1,1,1}]: facets in classes of 1, 1, 1, 1 and 1; vertex figure symmetry [ ], order 2

See also

Regular and uniform honeycombs in 4-space: • Tesseractic honeycomb • 16-cell honeycomb • 24-cell honeycomb • Truncated 24-cell honeycomb • Snub 24-cell honeycomb • 5-cell honeycomb • Truncated 5-cell honeycomb • Omnitruncated 5-cell honeycomb
2 21 polytope

In 6-dimensional geometry, the 221 polytope is a uniform 6-polytope, constructed within the symmetry of the E6 group. It was discovered by Thorold Gosset, published in his 1900 paper. He called it a 6-ic semi-regular figure.[1] It is also called the Schläfli polytope.

[Figures: the 221, rectified 221 (= 122), and birectified 221 (= rectified 122) in orthogonal projection to the E6 Coxeter plane.]

Its Coxeter symbol is 221, describing its bifurcating Coxeter-Dynkin diagram, with a single ring on the end of one of the 2-node sequences. Coxeter also studied[2] its connection with the 27 lines on the cubic surface, which are naturally in correspondence with the vertices of 221.

The rectified 221 is constructed by points at the mid-edges of the 221. The birectified 221 is constructed by points at the triangle face centers of the 221, and is the same as the rectified 122.

These polytopes are part of a family of 39 convex uniform polytopes in 6 dimensions, made of uniform 5-polytope facets and vertex figures, defined by all permutations of rings in its Coxeter-Dynkin diagram.

221 polytope • Type: Uniform 6-polytope • Family: k21 polytope • Schläfli symbol: {3,3,3^{2,1}} • Coxeter symbol: 221 • 5-faces: 99 total (27 211, 72 {3^4}) • 4-faces: 648 (432 + 216 {3^3}) • Cells: 1080 {3,3} • Faces: 720 {3} • Edges: 216 • Vertices: 27 • Vertex figure: 121 (5-demicube) • Petrie polygon: dodecagon • Coxeter group: E6, [3^{2,2,1}], order 51840 • Properties: convex

The 221 has 27 vertices, and 99 facets: 27 5-orthoplexes and 72 5-simplices. Its vertex figure is a 5-demicube. For visualization this 6-dimensional polytope is often displayed in a special skewed orthographic projection direction that fits its 27 vertices within a 12-gonal regular polygon (called a Petrie polygon). Its 216 edges are drawn between 2 rings of 12 vertices, and 3 vertices projected into the center. Higher elements (faces, cells, etc.) can also be extracted and drawn on this projection. The Schläfli graph is the 1-skeleton of this polytope.

Alternate names • E. L. Elte named it V27 (for its 27 vertices) in his 1912 listing of semiregular polytopes.[3] • Icosihepta-heptacontidi-peton - 27-72 facetted polypeton (acronym jak) (Jonathan Bowers)[4]

Coordinates

The 27 vertices can be expressed in 8-space as an edge-figure of the 421 polytope (the vertex and edge counts implied by these coordinates are verified in the sketch below): (-2, 0, 0, 0,-2, 0, 0, 0), ( 0,-2, 0, 0,-2, 0, 0, 0), ( 0, 0,-2, 0,-2, 0, 0, 0), ( 0, 0, 0,-2,-2, 0, 0, 0), ( 0, 0, 0, 0,-2, 0, 0,-2), ( 0, 0, 0, 0, 0,-2,-2, 0) ( 2, 0, 0, 0,-2, 0, 0, 0), ( 0, 2, 0, 0,-2, 0, 0, 0), ( 0, 0, 2, 0,-2, 0, 0, 0), ( 0, 0, 0, 2,-2, 0, 0, 0), ( 0, 0, 0, 0,-2, 0, 0, 2) (-1,-1,-1,-1,-1,-1,-1,-1), (-1,-1,-1, 1,-1,-1,-1, 1), (-1,-1, 1,-1,-1,-1,-1, 1), (-1,-1, 1, 1,-1,-1,-1,-1), (-1, 1,-1,-1,-1,-1,-1, 1), (-1, 1,-1, 1,-1,-1,-1,-1), (-1, 1, 1,-1,-1,-1,-1,-1), ( 1,-1,-1,-1,-1,-1,-1, 1), ( 1,-1, 1,-1,-1,-1,-1,-1), ( 1,-1,-1, 1,-1,-1,-1,-1), ( 1, 1,-1,-1,-1,-1,-1,-1), (-1, 1, 1, 1,-1,-1,-1, 1), ( 1,-1, 1, 1,-1,-1,-1, 1), ( 1, 1,-1, 1,-1,-1,-1, 1), ( 1, 1, 1,-1,-1,-1,-1, 1), ( 1, 1, 1, 1,-1,-1,-1,-1)

Construction

Its construction is based on the E6 group. The facet information can be extracted from its Coxeter-Dynkin diagram. Removing the node on the short branch leaves the 5-simplex. Removing the node on the end of the 2-length branch leaves the 5-orthoplex in its alternated form, (211). Every simplex facet touches a 5-orthoplex facet, while alternate facets of the orthoplex touch either a simplex or another orthoplex.
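These coordinates make the first row of the configuration matrix below easy to verify. The following Python sketch (ours, not from the cited sources) generates the 27 vertices listed above and checks that each has 16 nearest neighbors, hence 216 edges in all:

```python
# Verify the 2_21 counts from its 8-space coordinates: 27 vertices, each
# with 16 neighbors at squared distance 8 (216 edges) and 10 more
# distant partners at squared distance 16.
from itertools import combinations, product
from collections import Counter

verts = []
for i in range(4):                         # (+/-2)e_i - 2e_5  (8 vertices)
    for s in (2, -2):
        v = [0] * 8; v[i] = s; v[4] = -2
        verts.append(tuple(v))
for s in (2, -2):                          # -2e_5 +/- 2e_8  (2 vertices)
    v = [0] * 8; v[4] = -2; v[7] = s
    verts.append(tuple(v))
verts.append((0, 0, 0, 0, 0, -2, -2, 0))   # -2e_6 - 2e_7  (1 vertex)
verts += [(a, b, c, d, -1, -1, -1, e)      # 16 vertices: five free signs
          for a, b, c, d, e in product((1, -1), repeat=5)
          if a * b * c * d * e == -1]      # with product -1
assert len(verts) == 27

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

for v in verts:
    assert Counter(dist2(v, w) for w in verts if w != v) == Counter({8: 16, 16: 10})
edges = sum(dist2(u, v) == 8 for u, v in combinations(verts, 2))
assert edges == 216
print("27 vertices, 216 edges, 16 edges per vertex")
```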
The vertex figure is determined by removing the ringed node and ringing the neighboring node. This makes the 5-demicube (121 polytope). The edge-figure is the vertex figure of the vertex figure: a rectified 5-cell (021 polytope).

Seen in a configuration matrix, the element counts can be derived from the Coxeter group orders:[5]
• f0: 27 vertices; each lies in 16 edges, 80 triangles, 160 tetrahedra, 80 + 40 4-faces, and 16 + 10 5-faces; vertex figure h{4,3,3,3}; E6/D5 = 51840/1920 = 27
• f1: 216 edges, with 2 vertices each; each lies in 10 triangles, 30 tetrahedra, 20 + 10 4-faces, and 5 + 5 5-faces; edge figure r{3,3,3}; E6/A4A1 = 51840/120/2 = 216
• f2: 720 triangles {3}, with 3 vertices and 3 edges each; each lies in 6 tetrahedra, 6 + 3 4-faces, and 2 + 3 5-faces; figure {3}×{ }; E6/A2A2A1 = 51840/6/6/2 = 720
• f3: 1080 tetrahedra {3,3}, with 4 vertices, 6 edges, and 4 faces each; each lies in 2 + 1 4-faces and 1 + 2 5-faces; figure { }∨( ); E6/A3A1 = 51840/24/2 = 1080
• f4: 432 + 216 5-cells {3,3,3}, with 5 vertices, 10 edges, 10 faces, and 5 cells each; each of the 432 lies in 1 + 1 5-faces (E6/A4 = 51840/120 = 432); each of the 216 lies in 0 + 2 5-faces (E6/A4A1 = 51840/120/2 = 216)
• f5: 72 5-simplexes {3,3,3,3}, with 6 vertices, 15 edges, 20 faces, 15 cells, and 6 + 0 4-faces each (E6/A5 = 51840/720 = 72); and 27 5-orthoplexes {3,3,3,4}, with 10 vertices, 40 edges, 80 faces, 80 cells, and 16 + 16 4-faces each (E6/D5 = 51840/1920 = 27)

Images

[Figures: Coxeter plane orthographic projections in E6 [12], D5 [8], D4/A2 [6], B6 [12/2], A5 [6], A4 [5], and A3/D3 [4]; vertices are colored by their multiplicity in each projection, in progressive order red, orange, yellow, with the number of vertices of each color given in parentheses.]

Geometric folding

The 221 is related to the 24-cell by a geometric folding of the E6/F4 Coxeter-Dynkin diagrams. This can be seen in the Coxeter plane projections: the 24 vertices of the 24-cell are projected in the same two rings as seen in the 221. This polytope can tessellate Euclidean 6-space, forming the 222 honeycomb.

Related complex polyhedra

The regular complex polyhedron 3{3}3{3}3, in $\mathbb {C} ^{3}$, has a real representation as the 221 polytope in 6-dimensional space. It is called a Hessian polyhedron after Edmund Hess. It has 27 vertices, 72 3-edges, and 27 3{3}3 faces. Its complex reflection group is 3[3]3[3]3, order 648.

Related polytopes

The 221 is fourth in a dimensional series of semiregular polytopes, the k21 figures; each successive uniform polytope in the series has the previous one as its vertex figure. Thorold Gosset identified this series in 1900 as containing all regular polytope facets: all simplexes and orthoplexes. The series comprises −121 (E3 = A2A1, order 12), 021 (E4 = A4, order 120), 121 (E5 = D5, order 1,920), 221 (E6, order 51,840), 321 (E7, order 2,903,040), 421 (E8, order 696,729,600), and the Euclidean and hyperbolic honeycombs 521 (E9 = Ẽ8 = E8+) and 621 (E10 = E8++).

The 221 polytope is also fourth in the dimensional series 2k1: 2−1,1 (E3, order 12), 201 (E4, order 120), 211 (E5, order 384), 221 (E6, order 51,840), 231 (E7), 241 (E8), and the honeycombs 251 (E9) and 261 (E10). The 221 polytope is second in the dimensional series 22k.
22k figures of n dimensions Space Finite Euclidean Hyperbolic n 4 5 6 7 8 Coxeter group A2A2 A5 E6 ${\tilde {E}}_{6}$=E6+ E6++ Coxeter diagram Graph ∞ ∞ Name 22,-1 220 221 222 223

Rectified 2_21 polytope

Rectified 221 polytope TypeUniform 6-polytope Schläfli symbolt1{3,3,32,1} Coxeter symbolt1(221) Coxeter-Dynkin diagram or 5-faces126 total: 72 t1{34} 27 t1{33,4} 27 t1{3,32,1} 4-faces1350 Cells4320 Faces5040 Edges2160 Vertices216 Vertex figurerectified 5-cell prism Coxeter groupE6, [32,2,1], order 51840 Propertiesconvex

The rectified 221 has 216 vertices, and 126 facets: 72 rectified 5-simplices, 27 rectified 5-orthoplexes, and 27 5-demicubes. Its vertex figure is a rectified 5-cell prism.

Alternate names • Rectified icosihepta-heptacontidi-peton as a rectified 27-72 facetted polypeton (acronym rojak) (Jonathan Bowers)[6]

Construction

Its construction is based on the E6 group and information can be extracted from the ringed Coxeter-Dynkin diagram representing this polytope: .

Removing the ring on the short branch leaves the rectified 5-simplex, .

Removing the ring on the end of the other 2-length branch leaves the rectified 5-orthoplex in its alternated form: t1(211), .

Removing the ring on the end of the same 2-length branch leaves the 5-demicube: (121), .

The vertex figure is determined by removing the ringed node and ringing the neighboring node. This makes a rectified 5-cell prism, t1{3,3,3}x{}, .

Images

Vertices are colored by their multiplicity in this projection, in progressive order: red, orange, yellow.

Coxeter plane orthographic projections E6 [12] D5 [8] D4 / A2 [6] B6 [12/2] A5 [6] A4 [5] A3 / D3 [4]

Truncated 2_21 polytope

Truncated 221 polytope TypeUniform 6-polytope Schläfli symbolt{3,3,32,1} Coxeter symbolt(221) Coxeter-Dynkin diagram or 5-faces72+27+27 4-faces432+216+432+270 Cells1080+2160+1080 Faces720+4320 Edges216+2160 Vertices432 Vertex figure( ) v r{3,3,3} Coxeter groupE6, [32,2,1], order 51840 Propertiesconvex

The truncated 221 has 432 vertices, 2376 edges, 5040 faces, 4320 cells, 1350 4-faces, and 126 5-faces. Its vertex figure is a rectified 5-cell pyramid.

Images

Vertices are colored by their multiplicity in this projection, in progressive order: red, orange, yellow, green, cyan, blue, purple.

Coxeter plane orthographic projections E6 [12] D5 [8] D4 / A2 [6] B6 [12/2] A5 [6] A4 [5] A3 / D3 [4]

See also • List of E6 polytopes

Notes 1. Gosset, 1900 2. Coxeter, H.S.M. (1940). "The Polytope 221 Whose Twenty-Seven Vertices Correspond to the Lines on the General Cubic Surface". Amer. J. Math. 62 (1): 457–486. doi:10.2307/2371466. JSTOR 2371466. 3. Elte, 1912 4. Klitzing, (x3o3o3o3o *c3o - jak) 5. Coxeter, Regular Polytopes, 11.8 Gossett figures in six, seven, and eight dimensions, p. 202-203 6. Klitzing, (o3x3o3o3o *c3o - rojak)

References • T. Gosset: On the Regular and Semi-Regular Figures in Space of n Dimensions, Messenger of Mathematics, Macmillan, 1900 • Elte, E. L. (1912), The Semiregular Polytopes of the Hyperspaces, Groningen: University of Groningen • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 17) Coxeter, The Evolution of Coxeter-Dynkin diagrams, [Nieuw Archief voor Wiskunde 9 (1991) 233-248] See figure 1: (p. 232) (Node-edge graph of polytope) • Klitzing, Richard. "6D uniform polytopes (polypeta)".
x3o3o3o3o *c3o - jak, o3x3o3o3o *c3o - rojak
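The index computations quoted in the configuration matrix above can be checked mechanically. The following Python sketch is an editorial illustration, not part of the source article; it reproduces the element counts of the 221 as ratios of the E6 order, 51840, to the orders of the stabilizer subgroups named in the notes column.

    from math import factorial

    # |A_n| = (n+1)!,  |D_n| = 2^(n-1) * n!
    def A(n): return factorial(n + 1)
    def D(n): return 2 ** (n - 1) * factorial(n)

    order_E6 = 51840
    for name, stab in [
        ("vertices      (E6/D5)",     D(5)),
        ("edges         (E6/A4A1)",   A(4) * A(1)),
        ("faces         (E6/A2A2A1)", A(2) * A(2) * A(1)),
        ("cells         (E6/A3A1)",   A(3) * A(1)),
        ("4-faces       (E6/A4)",     A(4)),
        ("4-faces       (E6/A4A1)",   A(4) * A(1)),
        ("5-simplices   (E6/A5)",     A(5)),
        ("5-orthoplexes (E6/D5)",     D(5)),
    ]:
        print(name, order_E6 // stab)
    # prints 27, 216, 720, 1080, 432, 216, 72, 27: the counts tabulated above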
2 41 polytope

In 8-dimensional geometry, the 241 is a uniform 8-polytope, constructed within the symmetry of the E8 group. 421 142 241 Rectified 421 Rectified 142 Rectified 241 Birectified 421 Trirectified 421 Orthogonal projections in E6 Coxeter plane

Its Coxeter symbol is 241, describing its bifurcating Coxeter-Dynkin diagram, with a single ring on the end of the 2-node sequences.

The rectified 241 is constructed by points at the mid-edges of the 241. The birectified 241 is constructed by points at the triangle face centers of the 241, and is the same as the rectified 142.

These polytopes are part of a family of 255 (2^8 − 1) convex uniform polytopes in 8 dimensions, made of uniform polytope facets, defined by all permutations of rings in this Coxeter-Dynkin diagram: .

241 polytope

241 polytope TypeUniform 8-polytope Family2k1 polytope Schläfli symbol{3,3,34,1} Coxeter symbol241 Coxeter diagram 7-faces17520: 240 231 17280 {36} 6-faces144960: 6720 221 138240 {35} 5-faces544320: 60480 211 483840 {34} 4-faces1209600: 241920 {33} 967680 {33} Cells1209600 {32} Faces483840 {3} Edges69120 Vertices2160 Vertex figure141 Petrie polygon30-gon Coxeter groupE8, [34,2,1] Propertiesconvex

The 241 is composed of 17,520 facets (240 231 polytopes and 17,280 7-simplices), 144,960 6-faces (6,720 221 polytopes and 138,240 6-simplices), 544,320 5-faces (60,480 211 and 483,840 5-simplices), 1,209,600 4-faces (4-simplices), 1,209,600 cells (tetrahedra), 483,840 faces (triangles), 69,120 edges, and 2160 vertices. Its vertex figure is a 7-demicube.

This polytope is a facet in the uniform tessellation, 251 with Coxeter-Dynkin diagram:

Alternate names • E. L. Elte named it V2160 (for its 2160 vertices) in his 1912 listing of semiregular polytopes.[1] • It is named 241 by Coxeter for its bifurcating Coxeter-Dynkin diagram, with a single ring on the end of the 2-node sequence. • Diacositetracont-myriaheptachiliadiacosioctaconta-zetton (Acronym Bay) - 240-17280 facetted polyzetton (Jonathan Bowers)[2]

Coordinates

The 2160 vertices can be defined as follows: 16 permutations of (±4,0,0,0,0,0,0,0) of (8-orthoplex) 1120 permutations of (±2,±2,±2,±2,0,0,0,0) of (trirectified 8-orthoplex) 1024 permutations of (±3,±1,±1,±1,±1,±1,±1,±1) with an odd number of minus-signs

Construction

It is created by a Wythoff construction upon a set of 8 hyperplane mirrors in 8-dimensional space.

The facet information can be extracted from its Coxeter-Dynkin diagram: .

Removing the node on the short branch leaves the 7-simplex: . There are 17280 of these facets.

Removing the node on the end of the 4-length branch leaves the 231, . There are 240 of these facets. They are centered at the positions of the 240 vertices in the 421 polytope.

The vertex figure is determined by removing the ringed node and ringing the neighboring node. This makes the 7-demicube, 141, .

Seen in a configuration matrix, the element counts can be derived by mirror removal and ratios of Coxeter group orders.[3] In each row below, the bracketed diagonal entry is the number of elements of that type; entries to its left count sub-elements per element, entries to its right count incidences (split into two columns where two element types exist), and the notes column derives the diagonal entry as a ratio of group orders, with |E8| = 192*10!:

D7, ( ), f0: [2160]; 64, 672, 2240, (560,2240), (280,1344), (84,448), (14,64); k-figure h{4,3,3,3,3,3}; E8/D7 = 192*10!/64/7! = 2160
A6A1, { }, f1: 2, [69120]; 21, 105, (35,140), (35,105), (21,42), (7,7); k-figure r{3,3,3,3,3}; E8/A6A1 = 192*10!/7!/2 = 69120
A4A2A1, {3}, f2: 3, 3, [483840]; 10, (5,20), (10,20), (10,10), (5,2); k-figure {}x{3,3,3}; E8/A4A2A1 = 192*10!/5!/3!/2 = 483840
A3A3, {3,3}, f3: 4, 6, 4, [1209600]; (1,4), (4,6), (6,4), (4,1); k-figure {3,3}v( ); E8/A3A3 = 192*10!/4!/4! = 1209600
A4A3, {3,3,3}, f4: 5, 10, 10, 5, ([241920],*); (4,0), (6,0), (4,0); k-figure {3,3}; E8/A4A3 = 192*10!/5!/4! = 241920
A4A2, {3,3,3}, f4: 5, 10, 10, 5, (*,[967680]); (1,3), (3,3), (3,1); k-figure {3}v( ); E8/A4A2 = 192*10!/5!/3! = 967680
D5A2, {3,3,31,1}, f5: 10, 40, 80, 80, (16,16), ([60480],*); (3,0), (3,0); k-figure {3}; E8/D5A2 = 192*10!/16/5!/3! = 60480
A5A1, {3,3,3,3}, f5: 6, 15, 20, 15, (0,6), (*,[483840]); (1,2), (2,1); k-figure { }v( ); E8/A5A1 = 192*10!/6!/2 = 483840
E6A1, {3,3,32,1}, f6: 27, 216, 720, 1080, (216,432), (27,72), ([6720],*); (2,0); k-figure { }; E8/E6A1 = 192*10!/72/6!/2 = 6720
A6, {3,3,3,3,3}, f6: 7, 21, 35, 35, (0,21), (0,7), (*,[138240]); (1,1); k-figure { }; E8/A6 = 192*10!/7! = 138240
E7, {3,3,33,1}, f7: 126, 2016, 10080, 20160, (4032,12096), (756,4032), (56,576), ([240],*); k-figure ( ); E8/E7 = 192*10!/72/8! = 240
A7, {3,3,3,3,3,3}, f7: 8, 28, 56, 70, (0,56), (0,28), (0,8), (*,[17280]); k-figure ( ); E8/A7 = 192*10!/8! = 17280

Visualizations

E8 [30] [20] [24] (1) E7 [18] E6 [12] [6] (1,8,24,32)

Petrie polygon projections are 12, 18, or 30-sided based on the E6, E7, and E8 symmetries (respectively). The 2160 vertices are all displayed, but lower symmetry forms have projected positions overlapping, shown as different colored vertices. For comparison, a B6 Coxeter group is also shown.

D3 / B2 / A3 [4] D4 / B3 / A2 [6] D5 / B4 [8] D6 / B5 / A4 [10] D7 / B6 [12] D8 / B7 / A6 [14] (1,3,9,12,18,21,36) B8 [16/2] A5 [6] A7 [8]

Related polytopes and honeycombs

2k1 figures in n dimensions Space Finite Euclidean Hyperbolic n 3 4 5 6 7 8 9 10 Coxeter group E3=A2A1 E4=A4 E5=D5 E6 E7 E8 E9 = ${\tilde {E}}_{8}$ = E8+ E10 = ${\bar {T}}_{8}$ = E8++ Coxeter diagram Symmetry [3−1,2,1] [30,2,1] [[31,2,1]] [32,2,1] [33,2,1] [34,2,1] [35,2,1] [36,2,1] Order 12 120 384 51,840 2,903,040 696,729,600 ∞ Graph - - Name 2−1,1 201 211 221 231 241 251 261

Rectified 2_41 polytope

Rectified 241 polytope TypeUniform 8-polytope Schläfli symbolt1{3,3,34,1} Coxeter symbolt1(241) Coxeter diagram 7-faces19680 total: 240 t1(221) 17280 t1{36} 2160 141 6-faces313440 5-faces1693440 4-faces4717440 Cells7257600 Faces5322240 Edges1451520 Vertices69120 Vertex figurerectified 6-simplex prism Petrie polygon30-gon Coxeter groupE8, [34,2,1] Propertiesconvex

The rectified 241 is a rectification of the 241 polytope, with vertices positioned at the mid-edges of the 241.

Alternate names • Rectified Diacositetracont-myriaheptachiliadiacosioctaconta-zetton for rectified 240-17280 facetted polyzetton (known as robay for short)[4][5]

Construction

It is created by a Wythoff construction upon a set of 8 hyperplane mirrors in 8-dimensional space, defined by root vectors of the E8 Coxeter group.

The facet information can be extracted from its Coxeter-Dynkin diagram: .

Removing the node on the short branch leaves the rectified 7-simplex: .

Removing the node on the end of the 4-length branch leaves the rectified 231, .

Removing the node on the end of the 2-length branch leaves the 7-demicube, 141.

The vertex figure is determined by removing the ringed node and ringing the neighboring node. This makes the rectified 6-simplex prism, .

Visualizations

Petrie polygon projections are 12, 18, or 30-sided based on the E6, E7, and E8 symmetries (respectively). The 69120 vertices are all displayed, but lower symmetry forms have projected positions overlapping, shown as different colored vertices. For comparison, a B6 Coxeter group is also shown.

E8 [30] [20] [24] (1) E7 [18] E6 [12] [6] (1,8,24,32) D3 / B2 / A3 [4] D4 / B3 / A2 [6] D5 / B4 [8] D6 / B5 / A4 [10] D7 / B6 [12] D8 / B7 / A6 [14] (1,3,9,12,18,21,36) B8 [16/2] A5 [6] A7 [8]

See also • List of E8 polytopes

Notes 1. Elte, 1912 2. Klitzing, (x3o3o3o *c3o3o3o3o - bay) 3. Coxeter, Regular Polytopes, 11.8 Gossett figures in six, seven, and eight dimensions, p. 202-203 4. Jonathan Bowers 5. Klitzing, (o3x3o3o *c3o3o3o3o - robay)

References • Elte, E. L. (1912), The Semiregular Polytopes of the Hyperspaces, Groningen: University of Groningen • H. S. M.
Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Klitzing, Richard. "8D Uniform polyzetta". x3o3o3o *c3o3o3o3o - bay, o3x3o3o *c3o3o3o3o - robay
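The three coordinate families quoted in the Coordinates section above can be enumerated directly. This is a small editorial Python sketch, not part of the source article; it confirms the 2160-vertex count and that all three families lie on a common sphere.

    from itertools import permutations, product

    verts = set()
    # 16 permutations of (±4, 0, 0, 0, 0, 0, 0, 0)
    for s in (4, -4):
        verts.update(permutations((s, 0, 0, 0, 0, 0, 0, 0)))
    # 1120 permutations of (±2, ±2, ±2, ±2, 0, 0, 0, 0)
    for signs in product((2, -2), repeat=4):
        verts.update(permutations(signs + (0, 0, 0, 0)))
    # 1024 points (±3, ±1, ..., ±1) with an odd number of minus signs
    for i in range(8):
        for signs in product((1, -1), repeat=8):
            if sum(s < 0 for s in signs) % 2 == 1:
                verts.add(tuple((3 if j == i else 1) * signs[j] for j in range(8)))

    print(len(verts))                              # 2160
    print({sum(x * x for x in v) for v in verts})  # {16}: all at radius 4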
Rectified 5-orthoplexes

In five-dimensional geometry, a rectified 5-orthoplex is a convex uniform 5-polytope, being a rectification of the regular 5-orthoplex. 5-cube Rectified 5-cube Birectified 5-cube Birectified 5-orthoplex 5-orthoplex Rectified 5-orthoplex Orthogonal projections in A5 Coxeter plane

There are 5 degrees of rectification for any 5-polytope, the zeroth here being the 5-orthoplex itself, and the 4th and last being the 5-cube. Vertices of the rectified 5-orthoplex are located at the edge-centers of the 5-orthoplex. Vertices of the birectified 5-orthoplex are located in the triangular face centers of the 5-orthoplex.

Rectified 5-orthoplex Rectified pentacross Typeuniform 5-polytope Schläfli symbolt1{3,3,3,4} Coxeter-Dynkin diagrams Hypercells42 total: 10 {3,3,4} 32 t1{3,3,3} Cells240 total: 80 {3,4} 160 {3,3} Faces400 total: 80+320 {3} Edges240 Vertices40 Vertex figure Octahedral prism Petrie polygonDecagon Coxeter groupsBC5, [3,3,3,4] D5, [32,1,1] Propertiesconvex

Its 40 vertices represent the root vectors of the simple Lie group D5. The vertices can be seen in 3 hyperplanes, with the 10 vertices of rectified 5-cells on opposite sides, and 20 vertices of a runcinated 5-cell passing through the center. When combined with the 10 vertices of the 5-orthoplex, these vertices represent the 50 root vectors of the B5 and C5 simple Lie groups.

E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as Cr51, a first rectification of a 5-dimensional cross polytope.

Alternate names • rectified pentacross • rectified triacontiditeron (32-faceted 5-polytope)

Construction

There are two Coxeter groups associated with the rectified pentacross, one with the C5 or [4,3,3,3] Coxeter group, and a lower symmetry with two copies of 16-cell facets, alternating, with the D5 or [32,1,1] Coxeter group.

Cartesian coordinates

Cartesian coordinates for the vertices of a rectified pentacross, centered at the origin, edge length ${\sqrt {2}}\ $ are all permutations of: (±1,±1,0,0,0)

Images

orthographic projections Coxeter plane B5 B4 / D5 B3 / D4 / A2 Graph Dihedral symmetry [10] [8] [6] Coxeter plane B2 A3 Graph Dihedral symmetry [4] [4]

Related polytopes

The rectified 5-orthoplex is the vertex figure for the 5-demicube honeycomb. This polytope is one of 31 uniform 5-polytopes generated from the regular 5-cube or 5-orthoplex.

B5 polytopes β5 t1β5 t2γ5 t1γ5 γ5 t0,1β5 t0,2β5 t1,2β5 t0,3β5 t1,3γ5 t1,2γ5 t0,4γ5 t0,3γ5 t0,2γ5 t0,1γ5 t0,1,2β5 t0,1,3β5 t0,2,3β5 t1,2,3γ5 t0,1,4β5 t0,2,4γ5 t0,2,3γ5 t0,1,4γ5 t0,1,3γ5 t0,1,2γ5 t0,1,2,3β5 t0,1,2,4β5 t0,1,3,4γ5 t0,1,2,4γ5 t0,1,2,3γ5 t0,1,2,3,4γ5

Notes

References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "5D uniform polytopes (polytera)".
o3x3o3o4o - rat External links • Polytopes of Various Dimensions • Multi-dimensional Glossary
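As a quick numerical check (an editorial addition, not from the article), the permutation coordinates above generate exactly the 40 vertices, and the pairs at the minimum distance √2 reproduce the 240 edges listed in the table:

    from itertools import combinations, permutations, product

    verts = {p for a, b in product((1, -1), repeat=2)
               for p in permutations((a, b, 0, 0, 0))}
    print(len(verts))  # 40, the D5 root vectors

    edges = [uv for uv in combinations(verts, 2)
             if sum((x - y) ** 2 for x, y in zip(*uv)) == 2]
    print(len(edges))  # 240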
Rectified 6-cubes

In six-dimensional geometry, a rectified 6-cube is a convex uniform 6-polytope, being a rectification of the regular 6-cube. 6-cube Rectified 6-cube Birectified 6-cube Birectified 6-orthoplex Rectified 6-orthoplex 6-orthoplex Orthogonal projections in A6 Coxeter plane

There are 6 unique degrees of rectification, the zeroth being the 6-cube, and the last being the 6-orthoplex. Vertices of the rectified 6-cube are located at the edge-centers of the 6-cube. Vertices of the birectified 6-cube are located in the square face centers of the 6-cube.

Rectified 6-cube

Rectified 6-cube Typeuniform 6-polytope Schläfli symbolt1{4,34} or r{4,34} $\left\{{\begin{array}{l}4\\3,3,3,3\end{array}}\right\}$ Coxeter-Dynkin diagrams = 5-faces76 4-faces444 Cells1120 Faces1520 Edges960 Vertices192 Vertex figure5-cell prism Petrie polygonDodecagon Coxeter groupsB6, [3,3,3,3,4] D6, [33,1,1] Propertiesconvex

Alternate names • Rectified hexeract (acronym: rax) (Jonathan Bowers)

Construction

The rectified 6-cube may be constructed from the 6-cube by truncating its vertices at the midpoints of its edges.

Coordinates

The Cartesian coordinates of the vertices of the rectified 6-cube with edge length √2 are all permutations of: $(0,\ \pm 1,\ \pm 1,\ \pm 1,\ \pm 1,\ \pm 1)$

Images

orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4]

Birectified 6-cube

Birectified 6-cube Typeuniform 6-polytope Coxeter symbol0311 Schläfli symbolt2{4,34} or 2r{4,34} $\left\{{\begin{array}{l}3,4\\3,3,3\end{array}}\right\}$ Coxeter-Dynkin diagrams = = 5-faces76 4-faces636 Cells2080 Faces3200 Edges1920 Vertices240 Vertex figure{4}x{3,3} duoprism Coxeter groupsB6, [3,3,3,3,4] D6, [33,1,1] Propertiesconvex

Alternate names • Birectified hexeract (acronym: brox) (Jonathan Bowers) • Rectified 6-demicube

Construction

The birectified 6-cube may be constructed from the 6-cube by placing vertices at the centers of its square faces.

Coordinates

The Cartesian coordinates of the vertices of the birectified 6-cube with edge length √2 are all permutations of: $(0,\ 0,\ \pm 1,\ \pm 1,\ \pm 1,\ \pm 1)$

Images

orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4]

Related polytopes

These polytopes are part of a set of 63 uniform 6-polytopes generated from the B6 Coxeter group, including the regular 6-cube or 6-orthoplex.

B6 polytopes β6 t1β6 t2β6 t2γ6 t1γ6 γ6 t0,1β6 t0,2β6 t1,2β6 t0,3β6 t1,3β6 t2,3γ6 t0,4β6 t1,4γ6 t1,3γ6 t1,2γ6 t0,5γ6 t0,4γ6 t0,3γ6 t0,2γ6 t0,1γ6 t0,1,2β6 t0,1,3β6 t0,2,3β6 t1,2,3β6 t0,1,4β6 t0,2,4β6 t1,2,4β6 t0,3,4β6 t1,2,4γ6 t1,2,3γ6 t0,1,5β6 t0,2,5β6 t0,3,4γ6 t0,2,5γ6 t0,2,4γ6 t0,2,3γ6 t0,1,5γ6 t0,1,4γ6 t0,1,3γ6 t0,1,2γ6 t0,1,2,3β6 t0,1,2,4β6 t0,1,3,4β6 t0,2,3,4β6 t1,2,3,4γ6 t0,1,2,5β6 t0,1,3,5β6 t0,2,3,5γ6 t0,2,3,4γ6 t0,1,4,5γ6 t0,1,3,5γ6 t0,1,3,4γ6 t0,1,2,5γ6 t0,1,2,4γ6 t0,1,2,3γ6 t0,1,2,3,4β6 t0,1,2,3,5β6 t0,1,2,4,5β6 t0,1,2,4,5γ6 t0,1,2,3,5γ6 t0,1,2,3,4γ6 t0,1,2,3,4,5γ6

Notes

References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit.
46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "6D uniform polytopes (polypeta)". o3x3o3o3o4o - rax, o3o3x3o3o4o - brox External links • Weisstein, Eric W. "Hypercube". MathWorld. • Polytopes of Various Dimensions • Multi-dimensional Glossary
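Counting the coordinate permutations given above is a one-liner; the sketch below (an editorial addition, not from the article) confirms the vertex counts of 192 and 240 for the rectified and birectified 6-cube:

    from itertools import permutations, product

    def vertex_count(zeros, dim=6):
        return len({p for s in product((1, -1), repeat=dim - zeros)
                      for p in permutations((0,) * zeros + s)})

    print(vertex_count(1))  # 192 = 6 * 2**5, rectified 6-cube
    print(vertex_count(2))  # 240 = 15 * 2**4, birectified 6-cube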
Rectified 6-orthoplexes

In six-dimensional geometry, a rectified 6-orthoplex is a convex uniform 6-polytope, being a rectification of the regular 6-orthoplex. 6-orthoplex Rectified 6-orthoplex Birectified 6-orthoplex Birectified 6-cube Rectified 6-cube 6-cube Orthogonal projections in B6 Coxeter plane

There are 6 unique degrees of rectification, the zeroth being the 6-orthoplex, and the last being the 6-cube. Vertices of the rectified 6-orthoplex are located at the edge-centers of the 6-orthoplex. Vertices of the birectified 6-orthoplex are located in the triangular face centers of the 6-orthoplex.

Rectified 6-orthoplex

Rectified hexacross Typeuniform 6-polytope Schläfli symbolst1{34,4} or r{34,4} $\left\{{\begin{array}{l}3,3,3,4\\3\end{array}}\right\}$ r{3,3,3,31,1} Coxeter-Dynkin diagrams = = 5-faces76 total: 64 rectified 5-simplex 12 5-orthoplex 4-faces576 total: 192 rectified 5-cell 384 5-cell Cells1200 total: 240 octahedron 960 tetrahedron Faces1120 total: 160 and 960 triangles Edges480 Vertices60 Vertex figure16-cell prism Petrie polygonDodecagon Coxeter groupsB6, [3,3,3,3,4] D6, [33,1,1] Propertiesconvex

The rectified 6-orthoplex is the vertex figure for the demihexeractic honeycomb.

Alternate names • rectified hexacross • rectified hexacontitetrapeton (acronym: rag) (Jonathan Bowers)

Construction

There are two Coxeter groups associated with the rectified hexacross, one with the C6 or [4,3,3,3,3] Coxeter group, and a lower symmetry with two copies of pentacross facets, alternating, with the D6 or [33,1,1] Coxeter group.

Cartesian coordinates

Cartesian coordinates for the vertices of a rectified hexacross, centered at the origin, edge length ${\sqrt {2}}\ $ are all permutations of: (±1,±1,0,0,0,0)

Images

orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4]

Root vectors

The 60 vertices represent the root vectors of the simple Lie group D6. The vertices can be seen in 3 hyperplanes, with the 15 vertices of rectified 5-simplices on opposite sides, and 30 vertices of an expanded 5-simplex passing through the center. When combined with the 12 vertices of the 6-orthoplex, these vertices represent the 72 root vectors of the B6 and C6 simple Lie groups.

The 60 roots of D6 can be geometrically folded into H3 (icosahedral symmetry), creating 2 copies of 30-vertex icosidodecahedra, with the golden ratio between their radii:[1]

Rectified 6-orthoplex 2 icosidodecahedra 3D (H3 projection) A4/B5/D6 Coxeter plane H2 Coxeter plane

Birectified 6-orthoplex

Birectified 6-orthoplex Typeuniform 6-polytope Schläfli symbolst2{34,4} or 2r{34,4} $\left\{{\begin{array}{l}3,3,4\\3,3\end{array}}\right\}$ t2{3,3,3,31,1} Coxeter-Dynkin diagrams = = 5-faces76 4-faces636 Cells2160 Faces2880 Edges1440 Vertices160 Vertex figure{3}×{3,4} duoprism Petrie polygonDodecagon Coxeter groupsB6, [3,3,3,3,4] D6, [33,1,1] Propertiesconvex

The birectified 6-orthoplex can tessellate space in the trirectified 6-cubic honeycomb.
Alternate names • birectified hexacross • birectified hexacontitetrapeton (acronym: brag) (Jonathan Bowers)

Cartesian coordinates

Cartesian coordinates for the vertices of a birectified hexacross, centered at the origin, edge length ${\sqrt {2}}\ $ are all permutations of: (±1,±1,±1,0,0,0)

Images

orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4]

It can also be projected into 3 dimensions, with a dodecahedron envelope.

Related polytopes

These polytopes are a part of a family of 63 uniform 6-polytopes generated from the B6 Coxeter group, including the regular 6-cube or 6-orthoplex.

B6 polytopes β6 t1β6 t2β6 t2γ6 t1γ6 γ6 t0,1β6 t0,2β6 t1,2β6 t0,3β6 t1,3β6 t2,3γ6 t0,4β6 t1,4γ6 t1,3γ6 t1,2γ6 t0,5γ6 t0,4γ6 t0,3γ6 t0,2γ6 t0,1γ6 t0,1,2β6 t0,1,3β6 t0,2,3β6 t1,2,3β6 t0,1,4β6 t0,2,4β6 t1,2,4β6 t0,3,4β6 t1,2,4γ6 t1,2,3γ6 t0,1,5β6 t0,2,5β6 t0,3,4γ6 t0,2,5γ6 t0,2,4γ6 t0,2,3γ6 t0,1,5γ6 t0,1,4γ6 t0,1,3γ6 t0,1,2γ6 t0,1,2,3β6 t0,1,2,4β6 t0,1,3,4β6 t0,2,3,4β6 t1,2,3,4γ6 t0,1,2,5β6 t0,1,3,5β6 t0,2,3,5γ6 t0,2,3,4γ6 t0,1,4,5γ6 t0,1,3,5γ6 t0,1,3,4γ6 t0,1,2,5γ6 t0,1,2,4γ6 t0,1,2,3γ6 t0,1,2,3,4β6 t0,1,2,3,5β6 t0,1,2,4,5β6 t0,1,2,4,5γ6 t0,1,2,3,5γ6 t0,1,2,3,4γ6 t0,1,2,3,4,5γ6

Notes 1. Icosidodecahedron from D6 John Baez, January 1, 2015

References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "6D uniform polytopes (polypeta)". o3x3o3o3o4o - rag, o3o3x3o3o4o - brag

External links • Polytopes of Various Dimensions • Multi-dimensional Glossary
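The permutation coordinates quoted above for the rectified and birectified hexacross can be checked the same way; this short Python sketch (an editorial addition, not from the article) reproduces the vertex counts of 60 and 160:

    from itertools import permutations, product

    def vertex_count(ones, dim=6):
        return len({p for s in product((1, -1), repeat=ones)
                      for p in permutations(s + (0,) * (dim - ones))})

    print(vertex_count(2))  # 60 = C(6,2) * 4, the D6 root vectors
    print(vertex_count(3))  # 160 = C(6,3) * 8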
Rectified 6-simplexes In six-dimensional geometry, a rectified 6-simplex is a convex uniform 6-polytope, being a rectification of the regular 6-simplex. 6-simplex Rectified 6-simplex Birectified 6-simplex Orthogonal projections in A6 Coxeter plane There are three unique degrees of rectifications, including the zeroth, the 6-simplex itself. Vertices of the rectified 6-simplex are located at the edge-centers of the 6-simplex. Vertices of the birectified 6-simplex are located in the triangular face centers of the 6-simplex. Rectified 6-simplex Rectified 6-simplex Typeuniform polypeton Schläfli symbolt1{35} r{35} = {34,1} or $\left\{{\begin{array}{l}3,3,3,3\\3\end{array}}\right\}$ Coxeter diagrams Elements f5 = 14, f4 = 63, C = 140, F = 175, E = 105, V = 21 (χ=0) Coxeter groupA6, [35], order 5040 Bowers name and (acronym) Rectified heptapeton (ril) Vertex figure5-cell prism Circumradius0.845154 Propertiesconvex, isogonal E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as S1 6 . It is also called 04,1 for its branching Coxeter-Dynkin diagram, shown as . Alternate names • Rectified heptapeton (Acronym: ril) (Jonathan Bowers) Coordinates The vertices of the rectified 6-simplex can be most simply positioned in 7-space as permutations of (0,0,0,0,0,1,1). This construction is based on facets of the rectified 7-orthoplex. Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Birectified 6-simplex Birectified 6-simplex Typeuniform 6-polytope ClassA6 polytope Schläfli symbolt2{3,3,3,3,3} 2r{35} = {33,2} or $\left\{{\begin{array}{l}3,3,3\\3,3\end{array}}\right\}$ Coxeter symbol032 Coxeter diagrams 5-faces14 total: 7 t1{3,3,3,3} 7 t2{3,3,3,3} 4-faces84 Cells245 Faces350 Edges210 Vertices35 Vertex figure{3}x{3,3} Petrie polygonHeptagon Coxeter groupsA6, [3,3,3,3,3] Propertiesconvex E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as S2 6 . It is also called 03,2 for its branching Coxeter-Dynkin diagram, shown as . Alternate names • Birectified heptapeton (Acronym: bril) (Jonathan Bowers) Coordinates The vertices of the birectified 6-simplex can be most simply positioned in 7-space as permutations of (0,0,0,0,1,1,1). This construction is based on facets of the birectified 7-orthoplex. Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Related uniform 6-polytopes The rectified 6-simplex polytope is the vertex figure of the 7-demicube, and the edge figure of the uniform 241 polytope. These polytopes are a part of 35 uniform 6-polytopes based on the [3,3,3,3,3] Coxeter group, all shown here in A6 Coxeter plane orthographic projections. A6 polytopes t0 t1 t2 t0,1 t0,2 t1,2 t0,3 t1,3 t2,3 t0,4 t1,4 t0,5 t0,1,2 t0,1,3 t0,2,3 t1,2,3 t0,1,4 t0,2,4 t1,2,4 t0,3,4 t0,1,5 t0,2,5 t0,1,2,3 t0,1,2,4 t0,1,3,4 t0,2,3,4 t1,2,3,4 t0,1,2,5 t0,1,3,5 t0,2,3,5 t0,1,4,5 t0,1,2,3,4 t0,1,2,3,5 t0,1,2,4,5 t0,1,2,3,4,5 Notes References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. 
Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "6D uniform polytopes (polypeta)". o3x3o3o3o3o - ril, o3o3x3o3o3o - bril External links • Polytopes of Various Dimensions • Multi-dimensional Glossary
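The 7-space coordinates quoted above make the vertex counts immediate; a minimal editorial Python sketch (not from the article):

    from itertools import permutations

    ril = set(permutations((0, 0, 0, 0, 0, 1, 1)))   # rectified 6-simplex
    bril = set(permutations((0, 0, 0, 0, 1, 1, 1)))  # birectified 6-simplex
    print(len(ril), len(bril))  # 21 35, i.e. C(7,2) and C(7,3)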
Rectified 8-cubes

In eight-dimensional geometry, a rectified 8-cube is a convex uniform 8-polytope, being a rectification of the regular 8-cube. 8-cube Rectified 8-cube Birectified 8-cube Trirectified 8-cube Trirectified 8-orthoplex Birectified 8-orthoplex Rectified 8-orthoplex 8-orthoplex Orthogonal projections in B8 Coxeter plane

There are 8 unique degrees of rectification, the zeroth being the 8-cube, and the 7th and last being the 8-orthoplex. Vertices of the rectified 8-cube are located at the edge-centers of the 8-cube. Vertices of the birectified 8-cube are located in the square face centers of the 8-cube. Vertices of the trirectified 8-cube are located in the cubic cell centers of the 8-cube.

Rectified 8-cube

Rectified 8-cube Typeuniform 8-polytope Schläfli symbolt1{4,3,3,3,3,3,3} Coxeter-Dynkin diagrams 7-faces256 + 16 6-faces2048 + 112 5-faces7168 + 448 4-faces14336 + 1120 Cells17920 + 1792 Faces4336 + 1792 Edges7168 Vertices1024 Vertex figure6-simplex prism {3,3,3,3,3}×{} Coxeter groupsB8, [36,4] D8, [35,1,1] Propertiesconvex

Alternate names • rectified octeract

Images

orthographic projections B8 B7 [16] [14] B6 B5 [12] [10] B4 B3 B2 [8] [6] [4] A7 A5 A3 [8] [6] [4]

Birectified 8-cube

Birectified 8-cube Typeuniform 8-polytope Coxeter symbol0511 Schläfli symbolt2{4,3,3,3,3,3,3} Coxeter-Dynkin diagrams 7-faces256 + 16 6-faces1024 + 2048 + 112 5-faces7168 + 7168 + 448 4-faces21504 + 14336 + 1120 Cells35840 + 17920 + 1792 Faces35840 + 14336 Edges21504 Vertices1792 Vertex figure{3,3,3,3}x{4} Coxeter groupsB8, [36,4] D8, [35,1,1] Propertiesconvex

Alternate names • Birectified octeract • Rectified 8-demicube

Images

orthographic projections B8 B7 [16] [14] B6 B5 [12] [10] B4 B3 B2 [8] [6] [4] A7 A5 A3 [8] [6] [4]

Trirectified 8-cube

Trirectified 8-cube Typeuniform 8-polytope Schläfli symbolt3{4,3,3,3,3,3,3} Coxeter diagrams 7-faces16+256 6-faces1024 + 2048 + 112 5-faces1792 + 7168 + 7168 + 448 4-faces1792 + 10752 + 21504 + 14336 Cells8960 + 26880 + 35840 Faces17920+35840 Edges17920 Vertices1152 Vertex figure{3,3,3}x{3,4} Coxeter groupsB8, [36,4] D8, [35,1,1] Propertiesconvex

Alternate names • trirectified octeract

Images

orthographic projections B8 B7 [16] [14] B6 B5 [12] [10] B4 B3 B2 [8] [6] [4] A7 A5 A3 [8] [6] [4]

Notes

References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "8D uniform polytopes (polyzetta)".
o3o3o3o3o3o3x4o, o3o3o3o3o3x3o4o, o3o3o3o3x3o3o4o External links • Polytopes of Various Dimensions • Multi-dimensional Glossary
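The article gives no explicit coordinates for this family, but by analogy with the rectified and birectified 6-cube above, the natural choices are permutations of (0, ±1, ..., ±1) and (0, 0, ±1, ..., ±1); under that assumption, this editorial sketch reproduces the vertex counts in the tables:

    from itertools import permutations, product

    def vertex_count(zeros, dim=8):
        # brute-force enumeration; takes a few seconds for dim = 8
        return len({p for s in product((1, -1), repeat=dim - zeros)
                      for p in permutations((0,) * zeros + s)})

    print(vertex_count(1))  # 1024 = 8 * 2**7, rectified 8-cube
    print(vertex_count(2))  # 1792 = C(8,2) * 2**6, birectified 8-cube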
Icosidodecahedron

In geometry, an icosidodecahedron is a polyhedron with twenty (icosi) triangular faces and twelve (dodeca) pentagonal faces. An icosidodecahedron has 30 identical vertices, with two triangles and two pentagons meeting at each, and 60 identical edges, each separating a triangle from a pentagon. As such it is one of the Archimedean solids and more particularly, a quasiregular polyhedron.

Icosidodecahedron TypeArchimedean solid Uniform polyhedron ElementsF = 32, E = 60, V = 30 (χ = 2) Faces by sides20{3}+12{5} Conway notationaD Schläfli symbolsr{5,3} t1{5,3} Wythoff symbol2 | 3 5 Coxeter diagram Symmetry groupIh, H3, [5,3], (*532), order 120 Rotation groupI, [5,3]+, (532), order 60 Dihedral angle142.62° $\cos ^{-1}\left(-{\sqrt {{\frac {1}{15}}\left(5+2{\sqrt {5}}\right)}}\right)$ ReferencesU24, C28, W12 PropertiesSemiregular convex quasiregular Colored faces 3.5.3.5 (Vertex figure) Rhombic triacontahedron (dual polyhedron) Net

Geometry

An icosidodecahedron has icosahedral symmetry, and its first stellation is the compound of a dodecahedron and its dual icosahedron, with the vertices of the icosidodecahedron located at the midpoints of the edges of either. Its dual polyhedron is the rhombic triacontahedron. An icosidodecahedron can be split along any of six planes to form a pair of pentagonal rotundae, which belong among the Johnson solids.

The icosidodecahedron can be considered a pentagonal gyrobirotunda, as a combination of two rotundae (compare pentagonal orthobirotunda, one of the Johnson solids). In this form its symmetry is D5d, [10,2+], (2*5), order 20.

The wire-frame figure of the icosidodecahedron consists of six flat regular decagons, meeting in pairs at each of the 30 vertices.

The icosidodecahedron has 6 central decagons. Projected into a sphere, they define 6 great circles. Buckminster Fuller used these 6 great circles, along with 15 and 10 others in two other polyhedra to define his 31 great circles of the spherical icosahedron.

Cartesian coordinates

Convenient Cartesian coordinates for the vertices of an icosidodecahedron with unit edges are given by the even permutations of:[1] • (0, 0, ±φ) • (±1/2, ±φ/2, ±φ²/2) where φ is the golden ratio, (1 + √5)/2.

The long radius (center to vertex) of the icosidodecahedron is in the golden ratio to its edge length; thus its radius is φ if its edge length is 1, and its edge length is 1/φ if its radius is 1. Only a few uniform polytopes have this property, including the four-dimensional 600-cell, the three-dimensional icosidodecahedron, and the two-dimensional decagon. (The icosidodecahedron is the equatorial cross section of the 600-cell, and the decagon is the equatorial cross section of the icosidodecahedron.) These radially golden polytopes can be constructed, with their radii, from golden triangles which meet at the center, each contributing two radii and an edge.

Orthogonal projections

The icosidodecahedron has four special orthogonal projections, centered on a vertex, an edge, a triangular face, and a pentagonal face. The last two correspond to the A2 and H2 Coxeter planes.
Orthogonal projections Centered by Vertex Edge Face Triangle Face Pentagon Solid Wireframe Projective symmetry [2] [2] [6] [10] Dual

Surface area and volume

The surface area A and the volume V of the icosidodecahedron of edge length a are: ${\begin{aligned}A&=\left(5{\sqrt {3}}+3{\sqrt {5}}{\sqrt {3+4\varphi }}\right)a^{2}&&=\left(5{\sqrt {3}}+3{\sqrt {25+10{\sqrt {5}}}}\right)a^{2}&&\approx 29.3059828a^{2}\\V&={\frac {14+17\varphi }{3}}a^{3}&&={\frac {45+17{\sqrt {5}}}{6}}a^{3}&&\approx 13.8355259a^{3}.\end{aligned}}$

Spherical tiling

The 60 edges form 6 decagons corresponding to great circles in the spherical tiling. The icosidodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.

Pentagon-centered Triangle-centered Orthographic projection Stereographic projections Orthographic projections 2-fold, 3-fold and 5-fold symmetry axes

Related polytopes

The icosidodecahedron is a rectified dodecahedron and also a rectified icosahedron, existing as the full-edge truncation between these regular solids. The icosidodecahedron contains 12 pentagons of the dodecahedron and 20 triangles of the icosahedron:

Family of uniform icosahedral polyhedra Symmetry: [5,3], (*532) [5,3]+, (532) {5,3} t{5,3} r{5,3} t{3,5} {3,5} rr{5,3} tr{5,3} sr{5,3} Duals to uniform polyhedra V5.5.5 V3.10.10 V3.5.3.5 V5.6.6 V3.3.3.3.3 V3.4.5.4 V4.6.10 V3.3.3.3.5

The icosidodecahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3.n)2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *n32, all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right angle corner of the domain.[2][3]

*n32 orbifold symmetries of quasiregular tilings: (3.n)2 Construction Spherical Euclidean Hyperbolic *332 *432 *532 *632 *732 *832... *∞32 Quasiregular figures Vertex (3.3)2 (3.4)2 (3.5)2 (3.6)2 (3.7)2 (3.8)2 (3.∞)2

*5n2 symmetry mutations of quasiregular tilings: (5.n)2 Symmetry *5n2 [n,5] Spherical Hyperbolic Paracompact Noncompact *352 [3,5] *452 [4,5] *552 [5,5] *652 [6,5] *752 [7,5] *852 [8,5]... *∞52 [∞,5]   [ni,5] Figures Config. (5.3)2 (5.4)2 (5.5)2 (5.6)2 (5.7)2 (5.8)2 (5.∞)2 (5.ni)2 Rhombic figures Config. V(5.3)2 V(5.4)2 V(5.5)2 V(5.6)2 V(5.7)2 V(5.8)2 V(5.∞)2 V(5.∞)2

Dissection

The icosidodecahedron is related to the Johnson solid called a pentagonal orthobirotunda, created by two pentagonal rotundae connected as mirror images. The icosidodecahedron can therefore be called a pentagonal gyrobirotunda with the gyration between top and bottom halves. (Dissection) Icosidodecahedron (pentagonal gyrobirotunda) Pentagonal orthobirotunda Pentagonal rotunda

Related polyhedra

The truncated cube can be turned into an icosidodecahedron by dividing the octagons into two pentagons and two triangles. It has pyritohedral symmetry.

Eight uniform star polyhedra share the same vertex arrangement. Of these, two also share the same edge arrangement: the small icosihemidodecahedron (having the triangular faces in common), and the small dodecahemidodecahedron (having the pentagonal faces in common). The vertex arrangement is also shared with the compounds of five octahedra and of five tetrahemihexahedra.
Icosidodecahedron Small icosihemidodecahedron Small dodecahemidodecahedron Great icosidodecahedron Great dodecahemidodecahedron Great icosihemidodecahedron Dodecadodecahedron Small dodecahemicosahedron Great dodecahemicosahedron Compound of five octahedra Compound of five tetrahemihexahedra

Related polychora

In four-dimensional geometry the icosidodecahedron appears in the regular 600-cell as the equatorial slice that belongs to the vertex-first passage of the 600-cell through 3D space. In other words: the 30 vertices of the 600-cell which lie at arc distances of 90 degrees on its circumscribed hypersphere from a pair of opposite vertices, are the vertices of an icosidodecahedron.

The wire frame figure of the 600-cell consists of 72 flat regular decagons. Six of these are the equatorial decagons to a pair of opposite vertices. They are precisely the six decagons which form the wire frame figure of the icosidodecahedron.

If a 600-cell is stereographically projected to 3-space about any vertex and all points are normalised, the geodesics upon which edges fall comprise the icosidodecahedron's barycentric subdivision.

Icosidodecahedral graph

Icosidodecahedral graph 5-fold symmetry Schlegel diagram Vertices30 Edges60 Automorphisms120 PropertiesQuartic graph, Hamiltonian, regular Table of graphs and parameters

In the mathematical field of graph theory, an icosidodecahedral graph is the graph of vertices and edges of the icosidodecahedron, one of the Archimedean solids. It has 30 vertices and 60 edges, and is a quartic Archimedean graph.[4]

Icosidodecahedra in nature

The Hoberman sphere is an icosidodecahedron. Icosidodecahedra can be found in all eukaryotic cells, including human cells, as Sec13/31 COPII coat-protein formations.[5]

Trivia

In the Star Trek universe, the Vulcan logic game Kal-Toh has the goal of creating a holographic icosidodecahedron.

See also • Cuboctahedron • Great truncated icosidodecahedron • Icosahedron • Rhombicosidodecahedron • Truncated icosidodecahedron

Notes 1. Weisstein, Eric W. "Icosahedral group". MathWorld. 2. Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (Chapter V: The Kaleidoscope, Section: 5.7 Wythoff's construction) 3. Two Dimensional symmetry Mutations by Daniel Huson 4. Read, R. C.; Wilson, R. J. (1998), An Atlas of Graphs, Oxford University Press, p. 269 5. Russell, Christopher; Stagg, Scott (11 February 2010). "New Insights into the Structural Mechanisms of the COPII Coat". Traffic. 11 (3): 303–310. doi:10.1111/j.1600-0854.2009.01026.x. PMID 20070605.

References • Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. ISBN 0-486-23729-X. (Section 3-9) • Cromwell, P. (1997). Polyhedra. United Kingdom: Cambridge. pp. 79–86 Archimedean solids. ISBN 0-521-55432-2.

External links • Eric W. Weisstein, Icosidodecahedron (Archimedean solid) at MathWorld. • Klitzing, Richard. "3D convex uniform polyhedra o3x5o - id".
• Editable printable net of an icosidodecahedron with interactive 3D view • The Uniform Polyhedra • Virtual Reality Polyhedra The Encyclopedia of Polyhedra
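The coordinates and the radius property quoted above are easy to verify numerically. In this editorial Python sketch (not part of the article; the cyclic helper is ad hoc), even permutations of a 3-vector are its cyclic rotations:

    from math import sqrt

    phi = (1 + sqrt(5)) / 2

    def cyclic(v):  # the three even permutations of a 3-vector
        return [v, (v[1], v[2], v[0]), (v[2], v[0], v[1])]

    verts = set()
    for s in (phi, -phi):
        verts.update(cyclic((0.0, 0.0, s)))
    for a in (0.5, -0.5):
        for b in (phi / 2, -phi / 2):
            for c in (phi * phi / 2, -phi * phi / 2):
                verts.update(cyclic((a, b, c)))

    print(len(verts))  # 30 vertices
    edge = min(sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
               for u in verts for v in verts if u != v)
    radius = sqrt(sum(x * x for x in next(iter(verts))))
    print(radius / edge)  # 1.618..., the golden ratio, as stated above

    # closed-form surface area and volume for edge length a = 1, from above:
    print(5 * sqrt(3) + 3 * sqrt(25 + 10 * sqrt(5)))  # 29.3059828...
    print((45 + 17 * sqrt(5)) / 6)                    # 13.8355259...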
Rectified 120-cell

In geometry, a rectified 120-cell is a uniform 4-polytope formed as the rectification of the regular 120-cell. Four rectifications 120-cell Rectified 120-cell 600-cell Rectified 600-cell Orthogonal projections in H3 Coxeter plane

E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as tC120. There are four rectifications of the 120-cell, including the zeroth, the 120-cell itself. The birectified 120-cell is more easily seen as a rectified 600-cell, and the trirectified 120-cell is the same as the dual 600-cell.

Rectified 120-cell

Rectified 120-cell Schlegel diagram, centered on icosidodecahedron, tetrahedral cells visible TypeUniform 4-polytope Uniform index33 Coxeter diagram Schläfli symbolt1{5,3,3} or r{5,3,3} Cells720 total: 120 (3.5.3.5) 600 (3.3.3) Faces3120 total: 2400 {3}, 720 {5} Edges3600 Vertices1200 Vertex figure triangular prism Symmetry groupH4 or [3,3,5] Propertiesconvex, vertex-transitive, edge-transitive

In geometry, the rectified 120-cell or rectified hecatonicosachoron is a convex uniform 4-polytope composed of 600 regular tetrahedra and 120 icosidodecahedra. Its vertex figure is a triangular prism, with three icosidodecahedra and two tetrahedra meeting at each vertex.

Alternative names: • Rectified 120-cell (Norman Johnson) • Rectified hecatonicosichoron / rectified dodecacontachoron / rectified polydodecahedron • Icosidodecahedral hexacosihecatonicosachoron • Rahi (Jonathan Bowers: for rectified hecatonicosachoron) • Ambohecatonicosachoron (Neil Sloane & John Horton Conway)

Projections

3D parallel projection Parallel projection of the rectified 120-cell into 3D, centered on an icosidodecahedral cell. Nearest cell to 4D viewpoint shown in orange, and tetrahedral cells shown in yellow. Remaining cells culled so that the structure of the projection is visible.

Orthographic projections by Coxeter planes H4 - F4 [30] [20] [12] H3 A2 / B3 / D4 A3 / B2 [10] [6] [4]

Related polytopes

H4 family polytopes 120-cell rectified 120-cell truncated 120-cell cantellated 120-cell runcinated 120-cell cantitruncated 120-cell runcitruncated 120-cell omnitruncated 120-cell {5,3,3} r{5,3,3} t{5,3,3} rr{5,3,3} t0,3{5,3,3} tr{5,3,3} t0,1,3{5,3,3} t0,1,2,3{5,3,3} 600-cell rectified 600-cell truncated 600-cell cantellated 600-cell bitruncated 600-cell cantitruncated 600-cell runcitruncated 600-cell omnitruncated 600-cell {3,3,5} r{3,3,5} t{3,3,5} rr{3,3,5} 2t{3,3,5} tr{3,3,5} t0,1,3{3,3,5} t0,1,2,3{3,3,5}

Notes

References • Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • J.H. Conway and M.J.T. Guy: Four-Dimensional Archimedean Polytopes, Proceedings of the Colloquium on Convexity at Copenhagen, page 38 and 39, 1965 • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966

External links • Convex uniform polychora based on the hecatonicosachoron (120-cell) and hexacosichoron (600-cell) - Model 33, George Olshevsky. • rectified 120-cell Marco Möller's Archimedean polytopes in R4 (German) • Klitzing, Richard.
"4D uniform polytopes (polychora) o3o3x5o - rahi". • (in German) Four-dimensional Archimedean Polytopes, Marco Möller, 2004 PhD dissertation • H4 uniform polytopes with coordinates: r{5,3,3} Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Rectified 600-cell

In geometry, the rectified 600-cell or rectified hexacosichoron is a convex uniform 4-polytope composed of 600 regular octahedra and 120 icosahedra. Each edge has two octahedra and one icosahedron. Each vertex has five octahedra and two icosahedra. In total it has 3600 triangle faces, 3600 edges, and 720 vertices.

Rectified 600-cell Schlegel diagram, shown as Birectified 120-cell, with 119 icosahedral cells colored TypeUniform 4-polytope Uniform index34 Schläfli symbolt1{3,3,5} or r{3,3,5} Coxeter-Dynkin diagram Cells600 (3.3.3.3) 120 {3,5} Faces1200+2400 {3} Edges3600 Vertices720 Vertex figure pentagonal prism Symmetry groupH4, [3,3,5], order 14400 Propertiesconvex, vertex-transitive, edge-transitive

Containing the cell realms of both the regular 120-cell and the regular 600-cell, it can be considered analogous to the polyhedron icosidodecahedron, which is a rectified icosahedron and rectified dodecahedron.

The vertex figure of the rectified 600-cell is a uniform pentagonal prism.

Semiregular polytope

It is one of three semiregular 4-polytopes made of two or more cells which are Platonic solids, discovered by Thorold Gosset in his 1900 paper. He called it an octicosahedric for being made of octahedron and icosahedron cells. E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as tC600.

Alternate names • octicosahedric (Thorold Gosset) • Icosahedral hexacosihecatonicosachoron • Rectified 600-cell (Norman W. Johnson) • Rectified hexacosichoron • Rectified polytetrahedron • Rox (Jonathan Bowers)

Images

Orthographic projections by Coxeter planes H4 - F4 [30] [20] [12] H3 A2 / B3 / D4 A3 / B2 [10] [6] [4] Stereographic projection Net

Related polytopes

Diminished rectified 600-cell

120-diminished rectified 600-cell Type4-polytope Cells840 cells: 600 square pyramid 120 pentagonal prism 120 pentagonal antiprism Faces2640: 1800 {3} 600 {4} 240 {5} Edges2400 Vertices600 Vertex figure Bi-diminished pentagonal prism (1) 3.3.3.3 + (4) 3.3.4 (2) 4.4.5 (2) 3.3.3.5 Symmetry group1/12[3,3,5], order 1200 Propertiesconvex

A related vertex-transitive polytope with equal edge lengths can be constructed by removing 120 vertices from the rectified 600-cell; it is not uniform because it contains square pyramid cells.[1] It was discovered by George Olshevsky, who calls it a swirlprismatodiminished rectified hexacosichoron. It has 840 cells (600 square pyramids, 120 pentagonal prisms, and 120 pentagonal antiprisms), 2640 faces (1800 triangles, 600 squares, and 240 pentagons), 2400 edges, and 600 vertices. It has a chiral bi-diminished pentagonal prism vertex figure.

Each removed vertex creates a pentagonal prism cell, diminishes two neighboring icosahedra into pentagonal antiprisms, and diminishes each adjacent octahedron into a square pyramid.[2]

This polytope can be partitioned into 12 rings of alternating 10 pentagonal prisms and 10 antiprisms, and 30 rings of square pyramids.

Schlegel diagram Orthogonal projection Two orthogonal rings shown 2 rings of 30 red square pyramids, one ring along perimeter, and one centered.
Net H4 family H4 family polytopes 120-cell rectified 120-cell truncated 120-cell cantellated 120-cell runcinated 120-cell cantitruncated 120-cell runcitruncated 120-cell omnitruncated 120-cell {5,3,3} r{5,3,3} t{5,3,3} rr{5,3,3} t0,3{5,3,3} tr{5,3,3} t0,1,3{5,3,3} t0,1,2,3{5,3,3} 600-cell rectified 600-cell truncated 600-cell cantellated 600-cell bitruncated 600-cell cantitruncated 600-cell runcitruncated 600-cell omnitruncated 600-cell {3,3,5} r{3,3,5} t{3,3,5} rr{3,3,5} 2t{3,3,5} tr{3,3,5} t0,1,3{3,3,5} t0,1,2,3{3,3,5} Pentagonal prism vertex figures r{p,3,5} Space S3 H3 Form Finite Compact Paracompact Noncompact Name r{3,3,5} r{4,3,5} r{5,3,5} r{6,3,5} r{7,3,5} ... r{∞,3,5} Image Cells {3,5} r{3,3} r{4,3} r{5,3} r{6,3} r{7,3} r{∞,3} References 1. Category S4: Scaliform Swirlprisms spidrox 2. Klitzing, Richard. "4D convex scaliform polychora swirlprismatodiminished rectified hexacosachoron". • Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • J.H. Conway and M.J.T. Guy: Four-Dimensional Archimedean Polytopes, Proceedings of the Colloquium on Convexity at Copenhagen, page 38 und 39, 1965 • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 • Four-dimensional Archimedean Polytopes (German), Marco Möller, 2004 PhD dissertation External links • Convex uniform polychora based on the hecatonicosachoron (120-cell) and hexacosichoron (600-cell) - Model 34, George Olshevsky. • Klitzing, Richard. "4D uniform polytopes (polychora) o3x3o5o - rox". • Archimedisches Polychor Nr. 45 (rectified 600-cell) Marco Möller's Archimedean polytopes in R4 (German) • H4 uniform polytopes with coordinates: r{3,3,5} Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Wikipedia
Rectified 5-cubes In five-dimensional geometry, a rectified 5-cube is a convex uniform 5-polytope, being a rectification of the regular 5-cube. 5-cube Rectified 5-cube Birectified 5-cube Birectified 5-orthoplex 5-orthoplex Rectified 5-orthoplex Orthogonal projections in A5 Coxeter plane There are 5 degrees of rectifications of a 5-polytope, the zeroth here being the 5-cube, and the 4th and last being the 5-orthoplex. Vertices of the rectified 5-cube are located at the edge-centers of the 5-cube. Vertices of the birectified 5-cube are located in the square face centers of the 5-cube. Rectified 5-cube Rectified 5-cube rectified penteract (rin) Type uniform 5-polytope Schläfli symbol r{4,3,3,3} Coxeter diagram = 4-faces4210 32 Cells20040 160 Faces40080 320 Edges 320 Vertices 80 Vertex figure Tetrahedral prism Coxeter group B5, [4,33], order 3840 Dual Base point (0,1,1,1,1,1)√2 Circumradius sqrt(2) = 1.414214 Properties convex, isogonal Alternate names • Rectified penteract (acronym: rin) (Jonathan Bowers) Construction The rectified 5-cube may be constructed from the 5-cube by truncating its vertices at the midpoints of its edges. Coordinates The Cartesian coordinates of the vertices of the rectified 5-cube with edge length ${\sqrt {2}}$ is given by all permutations of: $(0,\ \pm 1,\ \pm 1,\ \pm 1,\ \pm 1)$ Images orthographic projections Coxeter plane B5 B4 / D5 B3 / D4 / A2 Graph Dihedral symmetry [10] [8] [6] Coxeter plane B2 A3 Graph Dihedral symmetry [4] [4] Birectified 5-cube Birectified 5-cube birectified penteract (nit) Type uniform 5-polytope Schläfli symbol 2r{4,3,3,3} Coxeter diagram = 4-faces4210 32 Cells28040 160 80 Faces640320 320 Edges 480 Vertices 80 Vertex figure {3}×{4} Coxeter group B5, [4,33], order 3840 D5, [32,1,1], order 1920 Dual Base point (0,0,1,1,1,1)√2 Circumradius sqrt(3/2) = 1.224745 Properties convex, isogonal E. L. Elte identified it in 1912 as a semiregular polytope, identifying it as Cr52 as a second rectification of a 5-dimensional cross polytope. Alternate names • Birectified 5-cube/penteract • Birectified pentacross/5-orthoplex/triacontiditeron • Penteractitriacontiditeron (acronym: nit) (Jonathan Bowers) • Rectified 5-demicube/demipenteract Construction and coordinates The birectified 5-cube may be constructed by birectifying the vertices of the 5-cube at ${\sqrt {2}}$ of the edge length. The Cartesian coordinates of the vertices of a birectified 5-cube having edge length 2 are all permutations of: $\left(0,\ 0,\ \pm 1,\ \pm 1,\ \pm 1\right)$ Images orthographic projections Coxeter plane B5 B4 / D5 B3 / D4 / A2 Graph Dihedral symmetry [10] [8] [6] Coxeter plane B2 A3 Graph Dihedral symmetry [4] [4] Related polytopes 2-isotopic hypercubes Dim. 2 3 4 5 6 7 8 n Name t{4} r{4,3} 2t{4,3,3} 2r{4,3,3,3} 3t{4,3,3,3,3} 3r{4,3,3,3,3,3} 4t{4,3,3,3,3,3,3} ... Coxeter diagram Images Facets {3} {4} t{3,3} t{3,4} r{3,3,3} r{3,3,4} 2t{3,3,3,3} 2t{3,3,3,4} 2r{3,3,3,3,3} 2r{3,3,3,3,4} 3t{3,3,3,3,3,3} 3t{3,3,3,3,3,4} Vertex figure ( )v( ) { }×{ } { }v{ } {3}×{4} {3}v{4} {3,3}×{3,4} {3,3}v{3,4} Related polytopes These polytopes are a part of 31 uniform polytera generated from the regular 5-cube or 5-orthoplex. B5 polytopes β5 t1β5 t2γ5 t1γ5 γ5 t0,1β5 t0,2β5 t1,2β5 t0,3β5 t1,3γ5 t1,2γ5 t0,4γ5 t0,3γ5 t0,2γ5 t0,1γ5 t0,1,2β5 t0,1,3β5 t0,2,3β5 t1,2,3γ5 t0,1,4β5 t0,2,4γ5 t0,2,3γ5 t0,1,4γ5 t0,1,3γ5 t0,1,2γ5 t0,1,2,3β5 t0,1,2,4β5 t0,1,3,4γ5 t0,1,2,4γ5 t0,1,2,3γ5 t0,1,2,3,4γ5 Notes References • H.S.M. Coxeter: • H.S.M. 
Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "5D uniform polytopes (polytera)". o3x3o3o4o - rin, o3o3x3o4o - nit External links • Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Wikipedia
Rectified prism In geometry, a rectified prism (also rectified bipyramid) is one of an infinite set of polyhedra, constructed as a rectification of an n-gonal prism, truncating the vertices down to the midpoint of the original edges. In Conway polyhedron notation, it is represented as aPn, an ambo-prism. The lateral squares or rectangular faces of the prism become squares or rhombic faces, and new isosceles triangle faces are truncations of the original vertices. Set of rectified prisms Rectified pentagonal prism Conway polyhedron notationaPn Faces2 n-gons n squares 2n triangles Edges6n Vertices3n Symmetry groupDnh, [2,2n], (*22n), order 4n Rotation groupDn, [2,n]+, (22n), order 2n Dual polyhedronJoined prism Propertiesconvex Elements An n-gonal form has 3n vertices, 6n edges, and 2+3n faces: 2 regular n-gons, n rhombi, and 2n triangles. Forms The rectified square prism is the same as a semiregular cuboctahedron. n 3 4 5 6 7 n Image Net Related Cuboctahedron Rectified star prisms also exist, like a 5/2 form: Dual Set of joined prisms Joined pentagonal prism Conway polyhedron notationjPn Faces3n Edges6n Vertices2+3n Symmetry groupDnh, [2,2n], (*22n), order 4n Rotation groupDn, [2,n]+, (22n), order 2n Dual polyhedronRectified prism Rectified bipyramid Propertiesconvex The dual of a rectified prism is a joined prism or joined bipyramid, in Conway polyhedron notation. The join operation adds vertices at the center of faces, and replaces edges with rhombic faces between original and the neighboring face centers. The joined square prism is the same topology as the rhombic dodecahedron. The joined triangular prism is the Herschel graph. n 3 4 5 6 8 n Image Net Related Rhombic dodecahedron See also • Rectified antiprism External links • Conway Notation for Polyhedra Try: aPn and jPn, where n=3,4,5,6... example aP4 is a rectified square prism, and jP4 is a joined square prism. Convex polyhedra Platonic solids (regular) • tetrahedron • cube • octahedron • dodecahedron • icosahedron Archimedean solids (semiregular or uniform) • truncated tetrahedron • cuboctahedron • truncated cube • truncated octahedron • rhombicuboctahedron • truncated cuboctahedron • snub cube • icosidodecahedron • truncated dodecahedron • truncated icosahedron • rhombicosidodecahedron • truncated icosidodecahedron • snub dodecahedron Catalan solids (duals of Archimedean) • triakis tetrahedron • rhombic dodecahedron • triakis octahedron • tetrakis hexahedron • deltoidal icositetrahedron • disdyakis dodecahedron • pentagonal icositetrahedron • rhombic triacontahedron • triakis icosahedron • pentakis dodecahedron • deltoidal hexecontahedron • disdyakis triacontahedron • pentagonal hexecontahedron Dihedral regular • dihedron • hosohedron Dihedral uniform • prisms • antiprisms duals: • bipyramids • trapezohedra Dihedral others • pyramids • truncated trapezohedra • gyroelongated bipyramid • cupola • bicupola • frustum • bifrustum • rotunda • birotunda • prismatoid • scutoid Degenerate polyhedra are in italics.
Wikipedia
Rectified tesseract In geometry, the rectified tesseract, rectified 8-cell is a uniform 4-polytope (4-dimensional polytope) bounded by 24 cells: 8 cuboctahedra, and 16 tetrahedra. It has half the vertices of a runcinated tesseract, with its construction, called a runcic tesseract. Rectified tesseract Schlegel diagram Centered on cuboctahedron tetrahedral cells shown Type Uniform 4-polytope Schläfli symbol r{4,3,3} = $\left\{{\begin{array}{l}4\\3,3\end{array}}\right\}$ 2r{3,31,1} h3{4,3,3} Coxeter-Dynkin diagrams = Cells 24 8 (3.4.3.4) 16 (3.3.3) Faces 88 64 {3} 24 {4} Edges 96 Vertices 32 Vertex figure (Elongated equilateral-triangular prism) Symmetry group B4 [3,3,4], order 384 D4 [31,1,1], order 192 Properties convex, edge-transitive Uniform index 10 11 12 It has two uniform constructions, as a rectified 8-cell r{4,3,3} and a cantellated demitesseract, rr{3,31,1}, the second alternating with two types of tetrahedral cells. E. L. Elte identified it in 1912 as a semiregular polytope, labeling it as tC8. Construction The rectified tesseract may be constructed from the tesseract by truncating its vertices at the midpoints of its edges. The Cartesian coordinates of the vertices of the rectified tesseract with edge length 2 is given by all permutations of: $(0,\ \pm {\sqrt {2}},\ \pm {\sqrt {2}},\ \pm {\sqrt {2}})$ Images orthographic projections Coxeter plane B4 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane F4 A3 Graph Dihedral symmetry [12/3] [4] Wireframe 16 tetrahedral cells Projections In the cuboctahedron-first parallel projection of the rectified tesseract into 3-dimensional space, the image has the following layout: • The projection envelope is a cube. • A cuboctahedron is inscribed in this cube, with its vertices lying at the midpoint of the cube's edges. The cuboctahedron is the image of two of the cuboctahedral cells. • The remaining 6 cuboctahedral cells are projected to the square faces of the cube. • The 8 tetrahedral volumes lying at the triangular faces of the central cuboctahedron are the images of the 16 tetrahedral cells, two cells to each image. Alternative names • Rit (Jonathan Bowers: for rectified tesseract) • Ambotesseract (Neil Sloane & John Horton Conway) • Rectified tesseract/Runcic tesseract (Norman W. 
Johnson) • Runcic 4-hypercube/8-cell/octachoron/4-measure polytope/4-regular orthotope • Rectified 4-hypercube/8-cell/octachoron/4-measure polytope/4-regular orthotope Related uniform polytopes Runcic cubic polytopes Runcic n-cubes n45678 [1+,4,3n-2] = [3,3n-3,1] [1+,4,32] = [3,31,1] [1+,4,33] = [3,32,1] [1+,4,34] = [3,33,1] [1+,4,35] = [3,34,1] [1+,4,36] = [3,35,1] Runcic figure Coxeter = = = = = Schläfli h3{4,32} h3{4,33} h3{4,34} h3{4,35} h3{4,36} Tesseract polytopes B4 symmetry polytopes Name tesseract rectified tesseract truncated tesseract cantellated tesseract runcinated tesseract bitruncated tesseract cantitruncated tesseract runcitruncated tesseract omnitruncated tesseract Coxeter diagram = = Schläfli symbol {4,3,3} t1{4,3,3} r{4,3,3} t0,1{4,3,3} t{4,3,3} t0,2{4,3,3} rr{4,3,3} t0,3{4,3,3} t1,2{4,3,3} 2t{4,3,3} t0,1,2{4,3,3} tr{4,3,3} t0,1,3{4,3,3} t0,1,2,3{4,3,3} Schlegel diagram B4   Name 16-cell rectified 16-cell truncated 16-cell cantellated 16-cell runcinated 16-cell bitruncated 16-cell cantitruncated 16-cell runcitruncated 16-cell omnitruncated 16-cell Coxeter diagram = = = = = = Schläfli symbol {3,3,4} t1{3,3,4} r{3,3,4} t0,1{3,3,4} t{3,3,4} t0,2{3,3,4} rr{3,3,4} t0,3{3,3,4} t1,2{3,3,4} 2t{3,3,4} t0,1,2{3,3,4} tr{3,3,4} t0,1,3{3,3,4} t0,1,2,3{3,3,4} Schlegel diagram B4 References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966) • 2. Convex uniform polychora based on the tesseract (8-cell) and hexadecachoron (16-cell) - Model 11, George Olshevsky. • Klitzing, Richard. "4D uniform polytopes (polychora) o4x3o3o - rit". Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Wikipedia
Rectified truncated cube In geometry, the rectified truncated cube is a polyhedron, constructed as a rectified, truncated cube. It has 38 faces: 8 equilateral triangles, 24 isosceles triangles, and 6 octagons. Rectified truncated cube Faces38: 8 equilateral triangles 24 isosceles triangles 6 octagons Edges72 Vertices12+24 Schläfli symbolrt{4,3} Conway notationatC Symmetry groupOh, [4,3], (*432), order 48 Rotation groupO, [4,3]+, (432), order 24 Dual polyhedronJoined truncated cube Propertiesconvex Net Topologically, the triangles corresponding to the cube's vertices are always equilateral, although the octagons, while having equal edge lengths, do not have the same edge lengths with the equilateral triangles, having different but alternating angles, causing the other triangles to be isosceles instead. Related polyhedra The rectified truncated cube can be seen in sequence of rectification and truncation operations from the cube. Further truncation, and alternation operations creates two more polyhedra: Name Truncated cube Rectified truncated cube Truncated rectified truncated cube Snub rectified truncated cube Coxeter tC rtC trtC srtC Conway atC btC stC Image See also • Rectified truncated tetrahedron • Rectified truncated octahedron • Rectified truncated dodecahedron • Rectified truncated icosahedron References • Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (pp. 145–154 Chapter 8: Truncation) • John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 External links • George Hart's Conway interpreter: generates polyhedra in VRML, taking Conway notation as input
Wikipedia
Rectified truncated dodecahedron In geometry, the rectified truncated dodecahedron is a convex polyhedron, constructed as a rectified, truncated dodecahedron. It has 92 faces: 20 equilateral triangles, 60 isosceles triangles, and 12 decagons. Rectified truncated dodecahedron Faces92: 20 equilateral triangles 60 isosceles triangles 12 decagons Edges180 Vertices90 Schläfli symbolrt{5,3} Conway notationatD Symmetry groupIh, [5,3], (*532), order 120 Rotation groupI, [5,3]+, (532), order 60 Dual polyhedronJoined truncated dodecahedron Propertiesconvex Net Topologically, the triangles corresponding to the dodecahedrons's vertices are always equilateral, although the decagons, while having equal edge lengths, do not have the same edge lengths with the equilateral triangles, having different but alternating angles, causing the other triangles to be isosceles instead. Related polyhedra The rectified truncated dodecahedron can be seen in sequence of rectification and truncation operations from the dodecahedron. Further truncation, and alternation operations creates two more polyhedra: Name Truncated dodecahedron Rectified truncated dodecahedron Truncated rectified truncated dodecahedron Snub rectified truncated dodecahedron Coxeter tD rtD trtD srtD Conway atD btD stD Image See also • Rectified truncated tetrahedron • Rectified truncated octahedron • Rectified truncated cube • Rectified truncated icosahedron References • Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (pp. 145–154 Chapter 8: Truncation) • John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 External links • George Hart's Conway interpreter: generates polyhedra in VRML, taking Conway notation as input
Wikipedia
Rectified truncated icosahedron In geometry, the rectified truncated icosahedron is a convex polyhedron. It has 92 faces: 60 isosceles triangles, 12 regular pentagons, and 20 regular hexagons. It is constructed as a rectified, truncated icosahedron, rectification truncating vertices down to mid-edges. Rectified truncated icosahedron TypeNear-miss Johnson solid Faces92: 60 isosceles triangles 12 pentagons 20 hexagons Edges180 Vertices90 Vertex configuration3.6.3.6 3.5.3.6 Schläfli symbolrt{3,5} Conway notationatI[1] Symmetry groupIh, [5,3], (*532) order 120 Rotation groupI, [5,3]+, (532), order 60 Dual polyhedronRhombic enneacontahedron Propertiesconvex Net As a near-miss Johnson solid, under icosahedral symmetry, the pentagons are always regular, although the hexagons, while having equal edge lengths, do not have the same edge lengths with the pentagons, having slightly different but alternating angles, causing the triangles to be isosceles instead. The shape is a symmetrohedron with notation I(1,2,*,[2]) Images Dual By Conway polyhedron notation, the dual polyhedron can be called a joined truncated icosahedron, jtI, but it is topologically equivalent to the rhombic enneacontahedron with all rhombic faces. Related polyhedra The rectified truncated icosahedron can be seen in sequence of rectification and truncation operations from the truncated icosahedron. Further truncation, and alternation operations creates two more polyhedra: Name Truncated icosahedron Truncated truncated icosahedron Rectified truncated icosahedron Expanded truncated icosahedron Truncated rectified truncated icosahedron Snub rectified truncated icosahedron Coxeter tI ttI rtI rrtI trtI srtI Conway atI etI btI stI Image Net Conway dtI = kD kD kdtI jtI jtI otI mtI gtI Dual Net See also • Near-miss Johnson solid • Rectified truncated tetrahedron • Rectified truncated octahedron • Rectified truncated cube • Rectified truncated dodecahedron References 1. "PolyHédronisme". • Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (pp. 145–154 Chapter 8: Truncation) • John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 External links • George Hart's Conway interpreter: generates polyhedra in VRML, taking Conway notation as input Near-miss Johnson solids Truncated forms • Truncated triakis tetrahedron • Chamfered cube (Truncated rhombic dodecahedron) • Chamfered dodecahedron (Truncated rhombic triacontahedron) Other forms • Tetrated dodecahedron • Rectified truncated icosahedron • Pentahexagonal pyritoheptacontatetrahedron
Wikipedia
Rectified truncated octahedron In geometry, the rectified truncated octahedron is a convex polyhedron, constructed as a rectified, truncated octahedron. It has 38 faces: 24 isosceles triangles, 6 squares, and 8 hexagons. Rectified truncated octahedron Faces38: 24 isosceles triangles 6 squares 8 hexagons Edges72 Vertices12+24 Schläfli symbolrt{3,4} Conway notationatO Symmetry groupOh, [4,3], (*432), order 48 Rotation groupO, [4,3]+, (432), order 24 Dual polyhedronJoined truncated octahedron Propertiesconvex Net Topologically, the squares corresponding to the octahedron's vertices are always regular, although the hexagons, while having equal edge lengths, do not have the same edge lengths with the squares, having different but alternating angles, causing the triangles to be isosceles instead. Related polyhedra The rectified truncated octahedron can be seen in sequence of rectification and truncation operations from the octahedron. Further truncation, and alternation creates two more polyhedra: Name Truncated octahedron Rectified truncated octahedron Truncated rectified truncated octahedron Snub rectified truncated octahedron Coxeter tO rtO trtO srtO Conway atO btO stO Image Conway dtO = kC jtO mtO mtO Dual See also • Rectified truncated tetrahedron • Rectified truncated cube • Rectified truncated dodecahedron • Rectified truncated icosahedron References • Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (pp. 145–154 Chapter 8: Truncation) • John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 External links • George Hart's Conway interpreter: generates polyhedra in VRML, taking Conway notation as input
Wikipedia
Rectified truncated tetrahedron In geometry, the rectified truncated tetrahedron is a polyhedron, constructed as a rectified, truncated tetrahedron. It has 20 faces: 4 equilateral triangles, 12 isosceles triangles, and 4 regular hexagons. Rectified truncated tetrahedron Faces20: 4 equilateral triangles 12 isosceles triangles 4 hexagons Edges48 Vertices12+18 Schläfli symbolrt{3,3} Conway notationatT Symmetry groupTd, [3,3], (*332), order 24 Rotation groupT, [3,3]+, (332), order 12 Dual polyhedronJoined truncated tetrahedron Propertiesconvex Net Topologically, the triangles corresponding to the tetrahedron's vertices are always equilateral, although the hexagons, while having equal edge lengths, do not have the same edge lengths with the equilateral triangles, having different but alternating angles, causing the other triangles to be isosceles instead. Related polyhedra The rectified truncated tetrahedron can be seen in sequence of rectification and truncation operations from the tetrahedron. Further truncation, and alternation operations creates two more polyhedra: Name Truncated tetrahedron Rectified truncated tetrahedron Truncated rectified truncated tetrahedron Snub rectified truncated tetrahedron Coxeter tT rtT trtT srtT Conway atT btT stT Image Conway dtT = kT jtT mtT gtT Dual See also • Rectified truncated cube • Rectified truncated octahedron • Rectified truncated dodecahedron • Rectified truncated icosahedron References • Coxeter Regular Polytopes, Third edition, (1973), Dover edition, ISBN 0-486-61480-8 (pp. 145–154 Chapter 8: Truncation) • John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 External links • George Hart's Conway interpreter: generates polyhedra in VRML, taking Conway notation as input
Wikipedia
Rectilinear Steiner tree The rectilinear Steiner tree problem, minimum rectilinear Steiner tree problem (MRST), or rectilinear Steiner minimum tree problem (RSMT) is a variant of the geometric Steiner tree problem in the plane, in which the Euclidean distance is replaced with the rectilinear distance. The problem may be formally stated as follows: given n points in the plane, it is required to interconnect them all by a shortest network which consists only of vertical and horizontal line segments. It can be shown that such a network is a tree whose vertices are the input points plus some extra points (Steiner points).[1] The problem arises in the physical design of electronic design automation. In VLSI circuits, wire routing is carried out by wires running only in vertical and horizontal directions, due to high computational complexity of the task. Therefore wire length is the sum of the lengths of vertical and horizontal segments, and the distance between two pins of a net is actually the rectilinear distance ("Manhattan distance") between the corresponding geometric points in the design plane.[1] Properties It is known that the search for the RSMT may be restricted to the Hanan grid, constructed by drawing vertical and horizontal lines through each vertex.[2] Computational complexity The RSMT is an NP-hard problem, and as with other NP-hard problems, common approaches to tackle it are approximate algorithms, heuristic algorithms, and separation of efficiently solvable special cases. An overview of the approaches to the problem may be found in the 1992 book by Hwang, Richards and Winter, The Steiner Tree Problem.[3] Single-trunk Steiner trees The single-trunk Steiner tree is a tree that consists of a single horizontal segment and some vertical segments. A minimum single-trunk Steiner tree problem (MSTST) may be found in linear time. The idea is that STSTs for a given point set essentially have only one "degree of freedom", which is the position of the horizontal trunk. Further, it easy to see that if the Y-axis is split into segments by Y-coordinates of input points, then the length of a STST is constant within any such segment. Finally, it will be minimal if the trunk has the closest possible numbers of points below and above it. Therefore an optimal position of the trunk are defined by a median of the set of Y-coordinates of the points, which may be found in linear time. Once the trunk is found, the vertical segments may be easily computed. Notice however that while the construction of the connecting net takes linear time, the construction of the tree which involves both input points and Steiner points as its vertices will require O(n log n) time, since it essentially accomplishes sorting of the X-coordinates of the input points (along the split of the trunk into the edges of the tree).[4] A MSTST is fast to compute but is a poor approximation of the MRST. A better approximation, called the refined single trunk tree, may be found in O(n log n) time. It is optimal for point sets of sizes up to 4.[5] Approximations and heuristics A number of algorithms exist which start from the rectilinear minimum spanning tree (RMST; the minimum spanning tree in the plane with rectilinear distance) and try to decrease its length by introducing Steiner points. The RMST itself may be up to 1.5 times longer than MRST.[6] References 1. Naveed Sherwani, "Algorithms for VLSI Physical Design Automation" 2. M. Hanan, On Steiner’s problem with rectilinear distance, J. SIAM Appl. Math. 14 (1966), 255 - 265. 3. 
F.K. Hwang, D.S. Richards, P. Winter, The Steiner Tree Problem. Elsevier, North-Holland, 1992, ISBN 0-444-89098-X (hardbound) (Annals of Discrete Mathematics, vol. 53). 4. J. Soukup. "Circuit Layout". Proceedings of the IEEE, 69:1281–1304, October 1981 5. H. Chen, C. Qiao, F. Zhou, and C.-K. Cheng. "Refined single trunk tree: A rectilinear Steiner tree generator for interconnect prediction". In: Proc. ACM Intl. Workshop on System Level Interconnect Prediction, 2002, pp.85–89. 6. F. K. Hwang. "On Steiner minimal trees with rectilinear distance." SIAM Journal on Applied Mathematics, 30:104–114, 1976.
Wikipedia
Recurrent tensor In mathematics and physics, a recurrent tensor, with respect to a connection $\nabla $ on a manifold M, is a tensor T for which there is a one-form ω on M such that $\nabla T=\omega \otimes T.\,$ For more, see Riemannian geometry. Examples Parallel Tensors An example for recurrent tensors are parallel tensors which are defined by $\nabla A=0$ with respect to some connection $\nabla $. If we take a pseudo-Riemannian manifold $(M,g)$ then the metric g is a parallel and therefore recurrent tensor with respect to its Levi-Civita connection, which is defined via $\nabla ^{LC}g=0$ and its property to be torsion-free. Parallel vector fields ($\nabla X=0$) are examples of recurrent tensors that find importance in mathematical research. For example, if $X$ is a recurrent non-null vector field on a pseudo-Riemannian manifold satisfying $\nabla X=\omega \otimes X$ for some closed one-form $\omega $, then X can be rescaled to a parallel vector field.[1] In particular, non-parallel recurrent vector fields are null vector fields. Metric space Another example appears in connection with Weyl structures. Historically, Weyl structures emerged from the considerations of Hermann Weyl with regards to properties of parallel transport of vectors and their length.[2] By demanding that a manifold have an affine parallel transport in such a way that the manifold is locally an affine space, it was shown that the induced connection had a vanishing torsion tensor $T^{\nabla }(X,Y)=\nabla _{X}Y-\nabla _{Y}X-[X,Y]=0$. Additionally, he claimed that the manifold must have a particular parallel transport in which the ratio of two transported vectors is fixed. The corresponding connection $\nabla '$ which induces such a parallel transport satisfies $\nabla 'g=\varphi \otimes g$ for some one-form $\varphi $. Such a metric is a recurrent tensor with respect to $\nabla '$. As a result, Weyl called the resulting manifold $(M,g)$ with affine connection $\nabla $ and recurrent metric $g$ a metric space. In this sense, Weyl was not just referring to one metric but to the conformal structure defined by $g$. Under the conformal transformation $g\rightarrow e^{\lambda }g$, the form $\varphi $ transforms as $\varphi \rightarrow \varphi -d\lambda $. This induces a canonical map $F:[g]\rightarrow \Lambda ^{1}(M)$ on $(M,[g])$ defined by $F(e^{\lambda }g):=\varphi -d\lambda $, where $[g]$ is the conformal structure. $F$ is called a Weyl structure,[3] which more generally is defined as a map with property $F(e^{\lambda }g)=F(g)-d\lambda $. Recurrent spacetime One more example of a recurrent tensor is the curvature tensor ${\mathcal {R}}$ on a recurrent spacetime,[4] for which $\nabla {\mathcal {R}}=\omega \otimes {\mathcal {R}}$. References 1. Alekseevsky, Baum (2008) 2. Weyl (1918) 3. Folland (1970) 4. Walker (1948) Literature • Weyl, H. (1918). "Gravitation und Elektrizität". Sitzungsberichte der Preuss. Akad. D. Wiss.: 465. • A.G. Walker: On parallel fields of partially null vector spaces, The Quarterly Journal of Mathematics 1949, Oxford Univ. Press • E.M. Patterson: On symmetric recurrent tensors of the second order, The Quarterly Journal of Mathematics 1950, Oxford Univ. Press • J.-C. Wong: Recurrent Tensors on a Linearly Connected Differentiable Manifold, Transactions of the American Mathematical Society 1961, • G.B. Folland: Weyl Manifolds, Journal of Differential Geometry 1970 • D.V. Alekseevky; H. Baum (2008). Recent developments in pseudo-Riemannian geometry. European Mathematical Society. 
ISBN 978-3-03719-051-7.
Wikipedia
Recurrent point In mathematics, a recurrent point for a function f is a point that is in its own limit set by f. Any neighborhood containing the recurrent point will also contain (a countable number of) iterates of it as well. Definition Let $X$ be a Hausdorff space and $f\colon X\to X$ a function. A point $x\in X$ is said to be recurrent (for $f$) if $x\in \omega (x)$, i.e. if $x$ belongs to its $\omega $-limit set. This means that for each neighborhood $U$ of $x$ there exists $n>0$ such that $f^{n}(x)\in U$.[1] The set of recurrent points of $f$ is often denoted $R(f)$ and is called the recurrent set of $f$. Its closure is called the Birkhoff center of $f$,[2] and appears in the work of George David Birkhoff on dynamical systems.[3][4] Every recurrent point is a nonwandering point,[1] hence if $f$ is a homeomorphism and $X$ is compact, then $R(f)$ is an invariant subset of the non-wandering set of $f$ (and may be a proper subset). References 1. Irwin, M. C. (2001), Smooth dynamical systems, Advanced Series in Nonlinear Dynamics, vol. 17, World Scientific Publishing Co., Inc., River Edge, NJ, p. 47, doi:10.1142/9789812810120, ISBN 981-02-4599-8, MR 1867353. 2. Hart, Klaas Pieter; Nagata, Jun-iti; Vaughan, Jerry E. (2004), Encyclopedia of general topology, Elsevier, p. 390, ISBN 0-444-50355-2, MR 2049453. 3. Coven, Ethan M.; Hedlund, G. A. (1980), "${\bar {P}}={\bar {R}}$ for maps of the interval", Proceedings of the American Mathematical Society, 79 (2): 316–318, doi:10.2307/2043258, MR 0565362. 4. Birkhoff, G. D. (1927), "Chapter 7", Dynamical Systems, Amer. Math. Soc. Colloq. Publ., vol. 9, Providence, R. I.: American Mathematical Society. As cited by Coven & Hedlund (1980). This article incorporates material from Recurrent point on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia
Repeating decimal A repeating decimal or recurring decimal is decimal representation of a number whose digits are periodic (repeating its values at regular intervals) and the infinitely repeated portion is not zero. It can be shown that a number is rational if and only if its decimal representation is repeating or terminating (i.e. all except finitely many digits are zero). For example, the decimal representation of 1/3 becomes periodic just after the decimal point, repeating the single digit "3" forever, i.e. 0.333.... A more complicated example is 3227/555, whose decimal becomes periodic at the second digit following the decimal point and then repeats the sequence "144" forever, i.e. 5.8144144144.... At present, there is no single universally accepted notation or phrasing for repeating decimals. Another example of this is 593/53, which becomes periodic after the decimal point, repeating the 13-digit pattern "1886792452830" forever, i.e. 11.18867924528301886792452830.... "Repeating fraction" redirects here. Not to be confused with continued fraction. The infinitely repeated digit sequence is called the repetend or reptend. If the repetend is a zero, this decimal representation is called a terminating decimal rather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros.[1] Every terminating decimal representation can be written as a decimal fraction, a fraction whose denominator is a power of 10 (e.g. 1.585 = 1585/1000); it may also be written as a ratio of the form k/2n·5m (e.g. 1.585 = 317/23·52). However, every number with a terminating decimal representation also trivially has a second, alternative representation as a repeating decimal whose repetend is the digit 9. This is obtained by decreasing the final (rightmost) non-zero digit by one and appending a repetend of 9. Two examples of this are 1.000... = 0.999... and 1.585000... = 1.584999.... (This type of repeating decimal can be obtained by long division if one uses a modified form of the usual division algorithm.[2]) Any number that cannot be expressed as a ratio of two integers is said to be irrational. Their decimal representation neither terminates nor infinitely repeats, but extends forever without repetition (see § Every rational number is either a terminating or repeating decimal). Examples of such irrational numbers are √2 and π. Background Notation There are several notational conventions for representing repeating decimals. None of them are accepted universally. Different notations with examples Fraction Vinculum Dots Parentheses Arc Ellipsis 1/9 0.1 0..1 0.(1) 0.1 0.111... 1/3 = 3/9 0.3 0..3 0.(3) 0.3 0.333... 2/3 = 6/9 0.6 0..6 0.(6) 0.6 0.666... 9/11 = 81/99 0.81 0..8.1 0.(81) 0.81 0.8181... 7/12 = 525/900 0.583 0.58.3 0.58(3) 0.583 0.58333... 1/7 = 142857/999999 0.142857 0..14285.7 0.(142857) 0.142857 0.142857142857... 1/81 = 12345679/999999999 0.012345679 0..01234567.9 0.(012345679) 0.012345679 0.012345679012345679... 22/7 = 3142854/999999 3.142857 3..14285.7 3.(142857) 3.142857 3.142857142857... 593/53 = 111886792452819/9999999999999 11.1886792452830 11..188679245283.0 11.(1886792452830) 11.1886792452830 11.18867924528301886792452830... • Vinculum: In the United States, Canada, India, France, Germany, Italy, Switzerland, the Czech Republic, Slovakia, Slovenia, and Turkey, the convention is to draw a horizontal line (a vinculum) above the repetend. 
• Dots: In some Islamic countries, such as Pakistan, Iran, Turkey, Algeria and Egypt, as well as the United Kingdom, New Zealand, Australia, Japan, Thailand and India, South Korea, and the People's Republic of China, the convention is to place dots above the outermost numerals of the repetend. • Parentheses: In parts of Europe, incl. Austria, Denmark, Finland, Ukraine and Russia, as well as Vietnam and Israel the convention is to enclose the repetend in parentheses. This can cause confusion with the notation for standard uncertainty. • Arc: In Spain and some Latin American countries, such as Argentina, Brazil, Chile and Mexico, the arc notation over the repetend is also used as an alternative to the vinculum and the dots notation. • Ellipsis: Informally, repeating decimals are often represented by an ellipsis (three periods, 0.333...), especially when the previous notational conventions are first taught in school. This notation introduces uncertainty as to which digits should be repeated and even whether repetition is occurring at all, since such ellipses are also employed for irrational numbers; π, for example, can be represented as 3.14159.... In English, there are various ways to read repeating decimals aloud. For example, 1.234 may be read "one point two repeating three four", "one point two repeated three four", "one point two recurring three four", "one point two repetend three four" or "one point two into infinity three four". Likewise, 11.1886792452830 may be read "eleven point repeating one double eight six seven nine two four five two eight three zero", "eleven point repeated one double eight six seven nine two four five two eight three zero", "eleven point recurring one double eight six seven nine two four five two eight three zero" "eleven point repetend one double eight six seven nine two four five two eight three zero" or "eleven point into infinity one double eight six seven nine two four five two eight three zero". Decimal expansion and recurrence sequence In order to convert a rational number represented as a fraction into decimal form, one may use long division. For example, consider the rational number 5/74: 0.0675 74 ) 5.00000 4.44 560 518 420 370 500 etc. Observe that at each step we have a remainder; the successive remainders displayed above are 56, 42, 50. When we arrive at 50 as the remainder, and bring down the "0", we find ourselves dividing 500 by 74, which is the same problem we began with. Therefore, the decimal repeats: 0.0675675675.... Every rational number is either a terminating or repeating decimal For any given divisor, only finitely many different remainders can occur. In the example above, the 74 possible remainders are 0, 1, 2, ..., 73. If at any point in the division the remainder is 0, the expansion terminates at that point. Then the length of the repetend, also called "period", is defined to be 0. If 0 never occurs as a remainder, then the division process continues forever, and eventually, a remainder must occur that has occurred before. The next step in the division will yield the same new digit in the quotient, and the same new remainder, as the previous time the remainder was the same. Therefore, the following division will repeat the same results. 
The repeating sequence of digits is called "repetend" which has a certain length greater than 0, also called "period".[3] Every repeating or terminating decimal is a rational number Each repeating decimal number satisfies a linear equation with integer coefficients, and its unique solution is a rational number. To illustrate the latter point, the number α = 5.8144144144... above satisfies the equation 10000α − 10α = 58144.144144... − 58.144144... = 58086, whose solution is α = 58086/9990 = 3227/555. The process of how to find these integer coefficients is described below. Table of values • fraction decimal expansion ℓ10 binary expansion ℓ2 1/2 0.5 0 0.1 0 1/3 0.3 1 0.01 2 1/4 0.25 0 0.01 0 1/5 0.2 0 0.0011 4 1/6 0.16 1 0.001 2 1/7 0.142857 6 0.001 3 1/8 0.125 0 0.001 0 1/9 0.1 1 0.000111 6 1/10 0.1 0 0.00011 4 1/11 0.09 2 0.0001011101 10 1/12 0.083 1 0.0001 2 1/13 0.076923 6 0.000100111011 12 1/14 0.0714285 6 0.0001 3 1/15 0.06 1 0.0001 4 1/16 0.0625 0 0.0001 0 • fraction decimal expansion ℓ10 1/17 0.0588235294117647 16 1/18 0.05 1 1/19 0.052631578947368421 18 1/20 0.05 0 1/21 0.047619 6 1/22 0.045 2 1/23 0.0434782608695652173913 22 1/24 0.0416 1 1/25 0.04 0 1/26 0.0384615 6 1/27 0.037 3 1/28 0.03571428 6 1/29 0.0344827586206896551724137931 28 1/30 0.03 1 1/31 0.032258064516129 15 • fraction decimal expansion ℓ10 1/32 0.03125 0 1/33 0.03 2 1/34 0.02941176470588235 16 1/35 0.0285714 6 1/36 0.027 1 1/37 0.027 3 1/38 0.0263157894736842105 18 1/39 0.025641 6 1/40 0.025 0 1/41 0.02439 5 1/42 0.0238095 6 1/43 0.023255813953488372093 21 1/44 0.0227 2 1/45 0.02 1 1/46 0.02173913043478260869565 22 1/47 0.0212765957446808510638297872340425531914893617 46 • Thereby fraction is the unit fraction 1/n and ℓ10 is the length of the (decimal) repetend. The lengths ℓ10(n) of the decimal repetends of 1/n, n = 1, 2, 3, ..., are: 0, 0, 1, 0, 0, 1, 6, 0, 1, 0, 2, 1, 6, 6, 1, 0, 16, 1, 18, 0, 6, 2, 22, 1, 0, 6, 3, 6, 28, 1, 15, 0, 2, 16, 6, 1, 3, 18, 6, 0, 5, 6, 21, 2, 1, 22, 46, 1, 42, 0, 16, 6, 13, 3, 2, 6, 18, 28, 58, 1, 60, 15, 6, 0, 6, 2, 33, 16, 22, 6, 35, 1, 8, 3, 1, 18, 6, 6, 13, 0, 9, 5, 41, 6, 16, 21, 28, 2, 44, 1, 6, 22, 15, 46, 18, 1, 96, 42, 2, 0... (sequence A051626 in the OEIS). For comparison, the lengths ℓ2(n) of the binary repetends of the fractions 1/n, n = 1, 2, 3, ..., are: 0, 0, 2, 0, 4, 2, 3, 0, 6, 4, 10, 2, 12, 3, 4, 0, 8, 6, 18, 4, 6, 10, 11, 2, 20, 12, 18, 3, 28, 4, 5, 0, 10, 8, 12, 6, 36, 18, 12, 4, 20, 6, 14, 10, 12, 11, ... (=A007733[n], if n not a power of 2 else =0). The decimal repetends of 1/n, n = 1, 2, 3, ..., are: 0, 0, 3, 0, 0, 6, 142857, 0, 1, 0, 09, 3, 076923, 714285, 6, 0, 0588235294117647, 5, 052631578947368421, 0, 047619, 45, 0434782608695652173913, 6, 0, 384615, 037, 571428, 0344827586206896551724137931, 3, 032258064516129, 0, 03, 2941176470588235, 285714... (sequence A036275 in the OEIS). The decimal repetend lengths of 1/p, p = 2, 3, 5, ... (nth prime), are: 0, 1, 0, 6, 2, 6, 16, 18, 22, 28, 15, 3, 5, 21, 46, 13, 58, 60, 33, 35, 8, 13, 41, 44, 96, 4, 34, 53, 108, 112, 42, 130, 8, 46, 148, 75, 78, 81, 166, 43, 178, 180, 95, 192, 98, 99, 30, 222, 113, 228, 232, 7, 30, 50, 256, 262, 268, 5, 69, 28, 141, 146, 153, 155, 312, 79... (sequence A002371 in the OEIS). 
The least primes p for which 1/p has decimal repetend length n, n = 1, 2, 3, ..., are: 3, 11, 37, 101, 41, 7, 239, 73, 333667, 9091, 21649, 9901, 53, 909091, 31, 17, 2071723, 19, 1111111111111111111, 3541, 43, 23, 11111111111111111111111, 99990001, 21401, 859, 757, 29, 3191, 211, 2791, 353, 67, 103, 71, 999999000001, 2028119, 909090909090909091, 900900900900990990990991, 1676321, 83, 127, 173... (sequence A007138 in the OEIS). The least primes p for which k/p has n different cycles (1 ≤ k ≤ p−1), n = 1, 2, 3, ..., are: 7, 3, 103, 53, 11, 79, 211, 41, 73, 281, 353, 37, 2393, 449, 3061, 1889, 137, 2467, 16189, 641, 3109, 4973, 11087, 1321, 101, 7151, 7669, 757, 38629, 1231, 49663, 12289, 859, 239, 27581, 9613, 18131, 13757, 33931... (sequence A054471 in the OEIS). Citations 1. Courant, R. and Robbins, H. What Is Mathematics?: An Elementary Approach to Ideas and Methods, 2nd ed. Oxford, England: Oxford University Press, 1996: p. 67. 2. Beswick, Kim (2004), "Why Does 0.999... = 1?: A Perennial Question and Number Sense", Australian Mathematics Teacher, 60 (4): 7–9 3. For a base b and a divisor n, in terms of group theory this length divides $\operatorname {ord} _{n}(b):=\min\{L\in \mathbb {N} \,\mid \,b^{L}\equiv 1{\bmod {n}}\}$ (with modular arithmetic ≡ 1 mod n) which divides the Carmichael function $\lambda (n):=\max\{\operatorname {ord} _{n}(b)\,\mid \,\gcd(b,n)=1\}$ which again divides Euler's totient function φ(n). Fractions with prime denominators A fraction in lowest terms with a prime denominator other than 2 or 5 (i.e. coprime to 10) always produces a repeating decimal. The length of the repetend (period of the repeating decimal segment) of 1/p is equal to the order of 10 modulo p. If 10 is a primitive root modulo p, then the repetend length is equal to p − 1; if not, then the repetend length is a factor of p − 1. This result can be deduced from Fermat's little theorem, which states that 10p−1 ≡ 1 (mod p). The base-10 digital root of the repetend of the reciprocal of any prime number greater than 5 is 9.[1] If the repetend length of 1/p for prime p is equal to p − 1 then the repetend, expressed as an integer, is called a cyclic number. Cyclic numbers Main article: Cyclic number Examples of fractions belonging to this group are: • 1/7 = 0.142857, 6 repeating digits • 1/17 = 0.0588235294117647, 16 repeating digits • 1/19 = 0.052631578947368421, 18 repeating digits • 1/23 = 0.0434782608695652173913, 22 repeating digits • 1/29 = 0.0344827586206896551724137931, 28 repeating digits • 1/47 = 0.0212765957446808510638297872340425531914893617, 46 repeating digits • 1/59 = 0.0169491525423728813559322033898305084745762711864406779661, 58 repeating digits • 1/61 = 0.016393442622950819672131147540983606557377049180327868852459, 60 repeating digits • 1/97 = 0.010309278350515463917525773195876288659793814432989690721649484536082474226804123711340206185567, 96 repeating digits The list can go on to include the fractions 1/109, 1/113, 1/131, 1/149, 1/167, 1/179, 1/181, 1/193, 1/223, 1/229, etc. (sequence A001913 in the OEIS). Every proper multiple of a cyclic number (that is, a multiple having the same number of digits) is a rotation: • 1/7 = 1 × 0.142857... = 0.142857... • 2/7 = 2 × 0.142857... = 0.285714... • 3/7 = 3 × 0.142857... = 0.428571... • 4/7 = 4 × 0.142857... = 0.571428... • 5/7 = 5 × 0.142857... = 0.714285... • 6/7 = 6 × 0.142857... = 0.857142... 
The reason for the cyclic behavior is apparent from an arithmetic exercise of long division of 1/7: the sequential remainders are the cyclic sequence {1, 3, 2, 6, 4, 5}. See also the article 142,857 for more properties of this cyclic number. A fraction which is cyclic thus has a recurring decimal of even length that divides into two sequences in nines' complement form. For example 1/7 starts '142' and is followed by '857' while 6/7 (by rotation) starts '857' followed by its nines' complement '142'. The rotation of the repetend of a cyclic number always happens in such a way that each successive repetend is a bigger number than the previous one. In the succession above, for instance, we see that 0.142857... < 0.285714... < 0.428571... < 0.571428... < 0.714285... < 0.857142.... This, for cyclic fractions with long repetends, allows us to easily predict what the result of multiplying the fraction by any natural number n will be, as long as the repetend is known. A proper prime is a prime p which ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repetend with length p − 1. In such primes, each digit 0, 1,..., 9 appears in the repeating sequence the same number of times as does each other digit (namely, p − 1/10 times). They are:[2]: 166  61, 131, 181, 461, 491, 541, 571, 701, 811, 821, 941, 971, 1021, 1051, 1091, 1171, 1181, 1291, 1301, 1349, 1381, 1531, 1571, 1621, 1741, 1811, 1829, 1861,... (sequence A073761 in the OEIS). A prime is a proper prime if and only if it is a full reptend prime and congruent to 1 mod 10. If a prime p is both full reptend prime and safe prime, then 1/p will produce a stream of p − 1 pseudo-random digits. Those primes are 7, 23, 47, 59, 167, 179, 263, 383, 503, 863, 887, 983, 1019, 1367, 1487, 1619, 1823, 2063... (sequence A000353 in the OEIS). Other reciprocals of primes Some reciprocals of primes that do not generate cyclic numbers are: • 1/3 = 0.3, which has a period (repetend length) of 1. • 1/11 = 0.09, which has a period of two. • 1/13 = 0.076923, which has a period of six. • 1/31 = 0.032258064516129, which has a period of 15. • 1/37 = 0.027, which has a period of three. • 1/41 = 0.02439, which has a period of five. • 1/43 = 0.023255813953488372093, which has a period of 21. • 1/53 = 0.0188679245283, which has a period of 13. • 1/67 = 0.014925373134328358208955223880597, which has a period of 33. • 1/71 = 0.01408450704225352112676058338028169, which has a period of 35. • 1/73 = 0.01369863, which has a period of eight. • 1/79 = 0.0126582278481, which has a period of 13. • 1/83 = 0.01204819277108433734939759036144578313253, which has a period of 41. • 1/89 = 0.01123595505617977528089887640449438202247191, which has a period of 44. (sequence A006559 in the OEIS) The reason is that 3 is a divisor of 9, 11 is a divisor of 99, 41 is a divisor of 99999, etc. To find the period of 1/p, we can check whether the prime p divides some number 999...999 in which the number of digits divides p − 1. Since the period is never greater than p − 1, we can obtain this by calculating 10p−1 − 1/p. For example, for 11 we get ${\frac {10^{11-1}-1}{11}}=909090909$ and then by inspection find the repetend 09 and period of 2. Those reciprocals of primes can be associated with several sequences of repeating decimals. For example, the multiples of 1/13 can be divided into two sets, with different repetends. The first set is: • 1/13 = 0.076923... • 10/13 = 0.769230... • 9/13 = 0.692307... • 12/13 = 0.923076... • 3/13 = 0.230769... 
• 4/13 = 0.307692..., where the repetend of each fraction is a cyclic re-arrangement of 076923. The second set is: • 2/13 = 0.153846... • 7/13 = 0.538461... • 5/13 = 0.384615... • 11/13 = 0.846153... • 6/13 = 0.461538... • 8/13 = 0.615384..., where the repetend of each fraction is a cyclic re-arrangement of 153846. In general, the set of proper multiples of reciprocals of a prime p consists of n subsets, each with repetend length k, where nk = p − 1. Totient rule For an arbitrary integer n, the length L(n) of the decimal repetend of 1/n divides φ(n), where φ is the totient function. The length is equal to φ(n) if and only if 10 is a primitive root modulo n.[3] In particular, it follows that L(p) = p − 1 if and only if p is a prime and 10 is a primitive root modulo p. Then, the decimal expansions of n/p for n = 1, 2, ..., p − 1, all have period p − 1 and differ only by a cyclic permutation. Such numbers p are called full repetend primes. Reciprocals of composite integers coprime to 10 If p is a prime other than 2 or 5, the decimal representation of the fraction 1/p2 repeats: 1/49 = 0.020408163265306122448979591836734693877551. The period (repetend length) L(49) must be a factor of λ(49) = 42, where λ(n) is known as the Carmichael function. This follows from Carmichael's theorem which states that if n is a positive integer then λ(n) is the smallest integer m such that $a^{m}\equiv 1{\pmod {n}}$ for every integer a that is coprime to n. The period of 1/p2 is usually pTp, where Tp is the period of 1/p. There are three known primes for which this is not true, and for those the period of 1/p2 is the same as the period of 1/p because p2 divides 10p−1−1. These three primes are 3, 487, and 56598313 (sequence A045616 in the OEIS).[4] Similarly, the period of 1/pk is usually pk–1Tp If p and q are primes other than 2 or 5, the decimal representation of the fraction 1/pq repeats. An example is 1/119: 119 = 7 × 17 λ(7 × 17) = LCM(λ(7), λ(17)) = LCM(6, 16) = 48, where LCM denotes the least common multiple. The period T of 1/pq is a factor of λ(pq) and it happens to be 48 in this case: 1/119 = 0.008403361344537815126050420168067226890756302521. The period T of 1/pq is LCM(Tp, Tq), where Tp is the period of 1/p and Tq is the period of 1/q. If p, q, r, etc. are primes other than 2 or 5, and k, ℓ, m, etc. are positive integers, then ${\frac {1}{p^{k}q^{\ell }r^{m}\cdots }}$ is a repeating decimal with a period of $\operatorname {LCM} (T_{p^{k}},T_{q^{\ell }},T_{r^{m}},\ldots )$ where Tpk, Tqℓ, Trm,... are respectively the period of the repeating decimals 1/pk, 1/qℓ, 1/rm,... as defined above. Reciprocals of integers not coprime to 10 An integer that is not coprime to 10 but has a prime factor other than 2 or 5 has a reciprocal that is eventually periodic, but with a non-repeating sequence of digits that precede the repeating part. The reciprocal can be expressed as: ${\frac {1}{2^{a}\cdot 5^{b}p^{k}q^{\ell }\cdots }}\,,$ where a and b are not both zero. This fraction can also be expressed as: ${\frac {5^{a-b}}{10^{a}p^{k}q^{\ell }\cdots }}\,,$ if a > b, or as ${\frac {2^{b-a}}{10^{b}p^{k}q^{\ell }\cdots }}\,,$ if b > a, or as ${\frac {1}{10^{a}p^{k}q^{\ell }\cdots }}\,,$ if a = b. The decimal has: • An initial transient of max(a, b) digits after the decimal point. Some or all of the digits in the transient can be zeros. • A subsequent repetend which is the same as that for the fraction 1/pk qℓ ⋯. 
For example 1/28 = 0.03571428: • a = 2, b = 0, and the other factors pk qℓ ⋯ = 7 • there are 2 initial non-repeating digits, 03; and • there are 6 repeating digits, 571428, the same amount as 1/7 has. Converting repeating decimals to fractions Given a repeating decimal, it is possible to calculate the fraction that produces it. For example: $x$$=0.333333\ldots $ $10x$$=3.333333\ldots $(multiply each side of the above line by 10) $9x$$=3$(subtract the 1st line from the 2nd) $x$$={\frac {3}{9}}={\frac {1}{3}}$(reduce to lowest terms) Another example: $x$$=\ \ \ \ 0.836363636\ldots $ $10x$$=\ \ \ \ 8.36363636\ldots $(move decimal to start of repetition = move by 1 place = multiply by 10) $1000x$$=836.36363636\ldots $(collate 2nd repetition here with 1st above = move by 2 places = multiply by 100) $990x$$=828$(subtract to clear decimals) $x$$={\frac {828}{990}}={\frac {18\cdot 46}{18\cdot 55}}={\frac {46}{55}}$(reduce to lowest terms) A shortcut The procedure below can be applied in particular if the repetend has n digits, all of which are 0 except the final one which is 1. For instance for n = 7: ${\begin{aligned}x&=0.000000100000010000001\ldots \\10^{7}x&=1.000000100000010000001\ldots \\\left(10^{7}-1\right)x=9999999x&=1\\x&={\frac {1}{10^{7}-1}}={\frac {1}{9999999}}\end{aligned}}$ So this particular repeating decimal corresponds to the fraction 1/10n − 1, where the denominator is the number written as n 9s. Knowing just that, a general repeating decimal can be expressed as a fraction without having to solve an equation. For example, one could reason: ${\begin{aligned}7.48181818\ldots &=7.3+0.18181818\ldots \\[8pt]&={\frac {73}{10}}+{\frac {18}{99}}={\frac {73}{10}}+{\frac {9\cdot 2}{9\cdot 11}}={\frac {73}{10}}+{\frac {2}{11}}\\[12pt]&={\frac {11\cdot 73+10\cdot 2}{10\cdot 11}}={\frac {823}{110}}\end{aligned}}$ or ${\begin{aligned}11.18867924528301886792452830\ldots &=11+0.18867924528301886792452830\ldots \\[8pt]&=11+{\frac {10}{53}}={\frac {11\cdot 53+10}{53}}={\frac {593}{53}}\end{aligned}}$ It is possible to get a general formula expressing a repeating decimal with an n-digit period (repetend length), beginning right after the decimal point, as a fraction: ${\begin{aligned}x&=0.{\overline {a_{1}a_{2}\cdots a_{n}}}\\10^{n}x&=a_{1}a_{2}\cdots a_{n}.{\overline {a_{1}a_{2}\cdots a_{n}}}\\[5pt]\left(10^{n}-1\right)x=99\cdots 99x&=a_{1}a_{2}\cdots a_{n}\\[5pt]x&={\frac {a_{1}a_{2}\cdots a_{n}}{10^{n}-1}}={\frac {a_{1}a_{2}\cdots a_{n}}{99\cdots 99}}\end{aligned}}$ More explicitly, one gets the following cases: If the repeating decimal is between 0 and 1, and the repeating block is n digits long, first occurring right after the decimal point, then the fraction (not necessarily reduced) will be the integer number represented by the n-digit block divided by the one represented by n 9s. For example, • 0.444444... = 4/9 since the repeating block is 4 (a 1-digit block), • 0.565656... = 56/99 since the repeating block is 56 (a 2-digit block), • 0.012012... = 12/999 since the repeating block is 012 (a 3-digit block); this further reduces to 4/333. • 0.999999... = 9/9 = 1, since the repeating block is 9 (also a 1-digit block) If the repeating decimal is as above, except that there are k (extra) digits 0 between the decimal point and the repeating n-digit block, then one can simply add k digits 0 after the n digits 9 of the denominator (and, as before, the fraction may subsequently be simplified). For example, • 0.000444... 
• 0.000444... = 4/9000 since the repeating block is 4 and this block is preceded by 3 zeros,
• 0.005656... = 56/9900 since the repeating block is 56 and it is preceded by 2 zeros,
• 0.00012012... = 12/99900 = 1/8325 since the repeating block is 012 and it is preceded by 2 zeros.
Any repeating decimal not of the form described above can be written as a sum of a terminating decimal and a repeating decimal of one of the two above types (actually the first type suffices, but that could require the terminating decimal to be negative). For example,
• 1.23444... = 1.23 + 0.00444... = 123/100 + 4/900 = 1107/900 + 4/900 = 1111/900
• or alternatively 1.23444... = 0.79 + 0.44444... = 79/100 + 4/9 = 711/900 + 400/900 = 1111/900
• 0.3789789... = 0.3 + 0.0789789... = 3/10 + 789/9990 = 2997/9990 + 789/9990 = 3786/9990 = 631/1665
• or alternatively 0.3789789... = −0.6 + 0.9789789... = −6/10 + 978/999 = −5994/9990 + 9780/9990 = 3786/9990 = 631/1665
An even faster method is to ignore the decimal point completely and proceed as follows:
• 1.23444... = (1234 − 123)/900 = 1111/900 (the denominator has one 9 and two 0s because one digit repeats and there are two non-repeating digits after the decimal point)
• 0.3789789... = (3789 − 3)/9990 = 3786/9990 (the denominator has three 9s and one 0 because three digits repeat and there is one non-repeating digit after the decimal point)
It follows that any repeating decimal with period n, and k digits after the decimal point that do not belong to the repeating part, can be written as a (not necessarily reduced) fraction whose denominator is $(10^{n}-1)10^{k}$.
Conversely, the period of the repeating decimal of a fraction c/d will be (at most) the smallest number n such that $10^{n}-1$ is divisible by d. For example, the fraction 2/7 has d = 7, and the smallest n that makes $10^{n}-1$ divisible by 7 is n = 6, because 999999 = 7 × 142857. The period of the fraction 2/7 is therefore 6.

In compressed form
The following scheme compresses the above shortcut. Here $\mathbf {I} $ represents the digits of the integer part of the decimal number (to the left of the decimal point), $\mathbf {A} $ makes up the string of digits of the preperiod and $\#\mathbf {A} $ its length, and $\mathbf {P} $ is the string of repeated digits (the period), whose length $\#\mathbf {P} $ is nonzero. The numerator of the generated fraction is the integer written $\mathbf {I} \mathbf {A} \mathbf {P} $ minus the integer written $\mathbf {I} \mathbf {A} $; in the denominator, the digit $9$ is repeated $\#\mathbf {P} $ times, followed by the digit $0$ repeated $\#\mathbf {A} $ times. Note that in the absence of an integer part in the decimal, $\mathbf {I} $ will be represented by zero, which, being to the left of the other digits, will not affect the final result, and may be omitted in the calculation of the generated fraction.
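A minimal C sketch of this compressed scheme follows; the function names are illustrative (not part of the original presentation), and the use of long long limits it to short decimals — a robust version would use arbitrary-precision integers:

#include <stdio.h>
#include <string.h>

// Euclid's algorithm, used to reduce the final fraction.
static long long gcd(long long a, long long b) {
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a;
}

// Interpret a decimal digit string as an integer (empty string -> 0).
static long long digits_to_int(const char *s) {
    long long v = 0;
    for (; *s != '\0'; ++s) v = 10 * v + (*s - '0');
    return v;
}

// I.A(P): integer part I, preperiod A, period P, given as digit strings.
// Numerator = IAP - IA; denominator = #P nines followed by #A zeros.
static void to_fraction(const char *I, const char *A, const char *P) {
    char buf[64];
    snprintf(buf, sizeof buf, "%s%s%s", I, A, P);
    long long iap = digits_to_int(buf);
    snprintf(buf, sizeof buf, "%s%s", I, A);
    long long ia = digits_to_int(buf);
    long long num = iap - ia;
    long long den = 0;
    for (size_t i = 0; i < strlen(P); ++i) den = 10 * den + 9; // #P nines
    for (size_t i = 0; i < strlen(A); ++i) den *= 10;          // #A zeros
    long long g = gcd(num, den);
    printf("%s.%s(%s) = %lld/%lld\n", I, A, P, num / g, den / g);
}

int main(void) {
    to_fraction("3", "25", "4");   // 3.25444...   -> 2929/900
    to_fraction("0", "3", "789");  // 0.3789789... -> 631/1665
    return 0;
}

The two calls reproduce the first and last of the worked examples that follow.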
Examples:
${\begin{array}{lllll}3.254444\ldots &=3.25{\overline {4}}&={\begin{Bmatrix}\mathbf {I} =3&\mathbf {A} =25&\mathbf {P} =4\\&\#\mathbf {A} =2&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {3254-325}{900}}&={\dfrac {2929}{900}}\\\\0.512512\ldots &=0.{\overline {512}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =\emptyset &\mathbf {P} =512\\&\#\mathbf {A} =0&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {512-0}{999}}&={\dfrac {512}{999}}\\\\1.09191\ldots &=1.0{\overline {91}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =0&\mathbf {P} =91\\&\#\mathbf {A} =1&\#\mathbf {P} =2\end{Bmatrix}}&={\dfrac {1091-10}{990}}&={\dfrac {1081}{990}}\\\\1.333\ldots &=1.{\overline {3}}&={\begin{Bmatrix}\mathbf {I} =1&\mathbf {A} =\emptyset &\mathbf {P} =3\\&\#\mathbf {A} =0&\#\mathbf {P} =1\end{Bmatrix}}&={\dfrac {13-1}{9}}&={\dfrac {12}{9}}&={\dfrac {4}{3}}\\\\0.3789789\ldots &=0.3{\overline {789}}&={\begin{Bmatrix}\mathbf {I} =0&\mathbf {A} =3&\mathbf {P} =789\\&\#\mathbf {A} =1&\#\mathbf {P} =3\end{Bmatrix}}&={\dfrac {3789-3}{9990}}&={\dfrac {3786}{9990}}&={\dfrac {631}{1665}}\end{array}}$
The symbol $\emptyset $ in the examples above denotes the absence of digits of part $\mathbf {A} $ in the decimal, and therefore $\#\mathbf {A} =0$ and a corresponding absence in the generated fraction.

Repeating decimals as infinite series
A repeating decimal can also be expressed as an infinite series. That is, a repeating decimal can be regarded as the sum of an infinite number of rational numbers. To take the simplest example,
$0.{\overline {1}}={\frac {1}{10}}+{\frac {1}{100}}+{\frac {1}{1000}}+\cdots =\sum _{n=1}^{\infty }{\frac {1}{10^{n}}}$
The above series is a geometric series with first term 1/10 and common ratio 1/10. Because the absolute value of the common ratio is less than 1, the geometric series converges, and its exact value can be found as a fraction by using the following formula, where a is the first term of the series and r is the common ratio:
${\frac {a}{1-r}}={\frac {\frac {1}{10}}{1-{\frac {1}{10}}}}={\frac {1}{10-1}}={\frac {1}{9}}$
Similarly,
${\begin{aligned}0.{\overline {142857}}&={\frac {142857}{10^{6}}}+{\frac {142857}{10^{12}}}+{\frac {142857}{10^{18}}}+\cdots =\sum _{n=1}^{\infty }{\frac {142857}{10^{6n}}}\\[6px]\implies &\quad {\frac {a}{1-r}}={\frac {\frac {142857}{10^{6}}}{1-{\frac {1}{10^{6}}}}}={\frac {142857}{10^{6}-1}}={\frac {142857}{999999}}={\frac {1}{7}}\end{aligned}}$

Multiplication and cyclic permutation
The cyclic behavior of repeating decimals in multiplication also leads to the construction of integers which are cyclically permuted when multiplied by certain numbers. For example, 102564 × 4 = 410256. 102564 is the repetend of 4/39 and 410256 the repetend of 16/39.

Other properties of repetend lengths
Various properties of repetend lengths (periods) are given by Mitchell[5] and Dickson.[6]
• The period of 1/k for integer k is always ≤ k − 1.
• If p is prime, the period of 1/p divides evenly into p − 1.
• If k is composite, the period of 1/k is strictly less than k − 1.
• The period of c/k, for c coprime to k, equals the period of 1/k.
• If $k=2^{a}5^{b}n$ where n > 1 and n is not divisible by 2 or 5, then the length of the transient of 1/k is max(a, b), and the period equals r, where r is the multiplicative order of 10 mod n, that is the smallest integer such that $10^{r}\equiv 1{\pmod {n}}$.
• If p, p′, p″,... are distinct primes, then the period of 1/p p′ p″ ⋯ equals the lowest common multiple of the periods of 1/p, 1/p′, 1/p″,....
• If k and k′ have no common prime factors other than 2 or 5, then the period of 1/k k′ equals the least common multiple of the periods of 1/k and 1/k′.
• For prime p, if
${\text{period}}\left({\frac {1}{p}}\right)={\text{period}}\left({\frac {1}{p^{2}}}\right)=\cdots ={\text{period}}\left({\frac {1}{p^{m}}}\right)$
for some m, but
${\text{period}}\left({\frac {1}{p^{m}}}\right)\neq {\text{period}}\left({\frac {1}{p^{m+1}}}\right),$
then for c ≥ 0 we have
${\text{period}}\left({\frac {1}{p^{m+c}}}\right)=p^{c}\cdot {\text{period}}\left({\frac {1}{p}}\right).$
• If p is a proper prime ending in a 1, that is, if the repetend of 1/p is a cyclic number of length p − 1 and p = 10h + 1 for some h, then each digit 0, 1, ..., 9 appears in the repetend exactly h = (p − 1)/10 times.
For some other properties of repetends, see Armstrong and Armstrong.[7]

Extension to other bases
Various features of repeating decimals extend to the representation of numbers in all other integer bases, not just base 10:
• Every real number can be represented as an integer part followed by a radix point (the generalization of a decimal point to non-decimal systems) followed by a finite or infinite number of digits.
• If the base is an integer, a terminating sequence obviously represents a rational number.
• A rational number has a terminating sequence if all the prime factors of the denominator of the fully reduced fractional form are also factors of the base. These numbers make up a dense set in Q and R.
• If the positional numeral system is a standard one, that is, it has base $b\in \mathbb {Z} \smallsetminus \{-1,0,1\}$ combined with a consecutive set of digits $D:=\{d_{1},d_{1}+1,\dots ,d_{r}\}$ with r := |b|, $d_{r}:=d_{1}+r-1$ and 0 ∈ D, then a terminating sequence is obviously equivalent to the same sequence with a non-terminating repeating part consisting of the digit 0. If the base is positive, then there exists an order homomorphism from the lexicographical order of the right-sided infinite strings over the alphabet D into some closed interval of the reals, which maps the strings $0.A_{1}A_{2}\ldots A_{n}{\overline {d_{r}}}$ and $0.A_{1}A_{2}\ldots (A_{n}+1){\overline {d_{1}}}$ with $A_{i}\in D$ and $A_{n}\neq d_{r}$ to the same real number – and there are no other duplicate images. In the decimal system, for example, there is $0.{\overline {9}}=1.{\overline {0}}=1$; in the balanced ternary system there is $0.{\overline {1}}=1.{\overline {T}}={\tfrac {1}{2}}$.
• A rational number has an indefinitely repeating sequence of finite length ℓ if the reduced fraction's denominator contains a prime factor that is not a factor of the base. If q is the maximal factor of the reduced denominator which is coprime to the base, ℓ is the smallest exponent such that q divides $b^{\ell }-1$. It is the multiplicative order $\operatorname {ord} _{q}(b)$ of the residue class b mod q, which is a divisor of the Carmichael function λ(q), which in turn is smaller than q. The repeating sequence is preceded by a transient of finite length if the reduced fraction also shares a prime factor with the base. A repeating sequence $\left(0.{\overline {A_{1}A_{2}\ldots A_{\ell }}}\right)_{b}$ represents the fraction ${\frac {(A_{1}A_{2}\ldots A_{\ell })_{b}}{b^{\ell }-1}}.$
• An irrational number has a representation of infinite length that is not, from any point, an indefinitely repeating sequence of finite length.
For example, in duodecimal, 1/2 = 0.6, 1/3 = 0.4, 1/4 = 0.3 and 1/6 = 0.2 all terminate; 1/5 = $0.{\overline {2497}}$ repeats with period length 4, in contrast with the equivalent decimal expansion of 0.2; 1/7 = $0.{\overline {186A35}}$ has period 6 in duodecimal, just as it does in decimal.
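This characterization lends itself to direct computation. The following minimal C sketch (the function name is illustrative) strips from n the prime factors it shares with the base and then finds the multiplicative order; the product r * b can overflow for large n, so a robust version would use wider arithmetic:

#include <stdio.h>

static unsigned long long gcd_ull(unsigned long long a, unsigned long long b) {
    while (b != 0) { unsigned long long t = a % b; a = b; b = t; }
    return a;
}

// Length of the repeating part of 1/n written in base b, or 0 if the
// expansion terminates: remove from n every prime factor shared with b,
// then return the least len with b^len ≡ 1 (mod n).
unsigned long long repetend_length(unsigned long long n, unsigned long long b) {
    unsigned long long g;
    while ((g = gcd_ull(n, b)) > 1)
        n /= g;                      // strip factors of n dividing b
    if (n == 1)
        return 0;                    // terminating expansion
    unsigned long long r = b % n, len = 1;
    while (r != 1) {                 // multiplicative order of b mod n
        r = (r * b) % n;             // may overflow for large n
        ++len;
    }
    return len;
}

int main(void) {
    printf("%llu\n", repetend_length(28, 10)); // 6: 1/28 = 0.03(571428)
    printf("%llu\n", repetend_length(7, 12));  // 6: 1/7 in duodecimal
    return 0;
}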
If b is an integer base and k is an integer, then
${\frac {1}{k}}={\frac {1}{b}}+{\frac {(b-k)^{1}}{b^{2}}}+{\frac {(b-k)^{2}}{b^{3}}}+{\frac {(b-k)^{3}}{b^{4}}}+\cdots +{\frac {(b-k)^{N-1}}{b^{N}}}+\cdots ={\frac {1}{b}}{\frac {1}{1-{\frac {b-k}{b}}}}.$
For example, 1/7 in duodecimal:
${\frac {1}{7}}=\left({\frac {1}{10^{\phantom {1}}}}+{\frac {5}{10^{2}}}+{\frac {21}{10^{3}}}+{\frac {A5}{10^{4}}}+{\frac {441}{10^{5}}}+{\frac {1985}{10^{6}}}+\cdots \right)_{\text{base 12}}$
which is $0.{\overline {186A35}}$ in base 12 (here $10_{12}=12_{10}$, $10_{12}^{2}=144_{10}$, $21_{12}=25_{10}$, and $A5_{12}=125_{10}$).

Algorithm for positive bases
For a rational 0 < p/q < 1 (and base $b\in \mathbb {N} _{>1}$) there is the following algorithm producing the repetend together with its length:

function b_adic(b, p, q) // b ≥ 2; 0 < p < q
    static digits = "0123..."; // up to the digit with value b−1
begin
    s = "";  // the string of digits
    pos = 0; // all places are right of the radix point
    while not defined(occurs[p]) do
        occurs[p] = pos; // the position of the place with remainder p
        bp = b*p;
        z = floor(bp/q); // index z of digit within: 0 ≤ z ≤ b−1
        p = b*p − z*q;   // 0 ≤ p < q
        if p = 0 then
            L = 0;
            if not z = 0 then
                s = s . substring(digits, z, 1)
            end if
            return (s);
        end if
        s = s . substring(digits, z, 1); // append the character of the digit
        pos += 1;
    end while
    L = pos − occurs[p]; // the length of the repetend (being < q)
    // mark the digits of the repetend by a vinculum:
    for i from occurs[p] to pos−1 do
        substring(s, i, 1) = overline(substring(s, i, 1));
    end for
    return (s);
end function

The line z = floor(bp/q) calculates the digit z, and the line that follows it calculates the new remainder p′ of the division modulo the denominator q. As a consequence of the floor function floor we have
${\frac {bp}{q}}-1\;\;<\;\;z=\left\lfloor {\frac {bp}{q}}\right\rfloor \;\;\leq \;\;{\frac {bp}{q}},$
thus
$bp-q<zq\quad \implies \quad p':=bp-zq<q$
and
$zq\leq bp\quad \implies \quad 0\leq bp-zq=:p'\,.$
Because all these remainders p are non-negative integers less than q, there can be only a finite number of them, with the consequence that they must recur in the while loop. Such a recurrence is detected by the associative array occurs. The new digit z is formed in the integer division of bp by q, in which p is the only non-constant. The length L of the repetend equals the number of the remainders (see also section Every rational number is either a terminating or repeating decimal).

Applications to cryptography
Repeating decimals (also called decimal sequences) have found cryptographic and error-correction coding applications.[8] In these applications repeating decimals to base 2 are generally used, which gives rise to binary sequences. The maximum-length binary sequence for 1/p (when 2 is a primitive root of p) is given by:[9]
$a(i)=2^{i}{\bmod {p}}{\bmod {2}}$
These sequences of period p − 1 have an autocorrelation function that has a negative peak of −1 for a shift of (p − 1)/2. The randomness of these sequences has been examined by diehard tests.[10]

See also
• Decimal representation
• Full reptend prime
• Midy's theorem
• Parasitic number
• Trailing zero
• Unique prime
• 0.999..., a repeating decimal equal to one
• Pigeonhole principle

References and remarks
1. Gray, Alexander J. (March 2000). "Digital roots and reciprocals of primes". Mathematical Gazette. 84 (499): 86. doi:10.2307/3621484. JSTOR 3621484. S2CID 125834304. "For primes greater than 5, all the digital roots appear to have the same value, 9. We can confirm this if..."
2. Dickson, L.
E., History of the Theory of Numbers, Volume 1, Chelsea Publishing Co., 1952.
3. William E. Heal, "Some Properties of Repetends", Annals of Mathematics, Vol. 3, No. 4 (Aug., 1887), pp. 97–103.
4. Albert H. Beiler, Recreations in the Theory of Numbers, p. 79.
5. Mitchell, Douglas W., "A nonlinear random number generator with known, long cycle length", Cryptologia 17, January 1993, pp. 55–62.
6. Dickson, Leonard E., History of the Theory of Numbers, Vol. I, Chelsea Publ. Co., 1952 (orig. 1918), pp. 164–173.
7. Armstrong, N. J., and Armstrong, R. J., "Some properties of repetends", Mathematical Gazette 87, November 2003, pp. 437–443.
8. Kak, Subhash; Chatterjee, A., "On decimal sequences", IEEE Transactions on Information Theory, vol. IT-27, pp. 647–652, September 1981.
9. Kak, Subhash, "Encryption and error-correction using d-sequences", IEEE Transactions on Computers, vol. C-34, pp. 803–809, 1985.
10. Bellamy, J., "Randomness of D sequences via diehard testing", 2013. arXiv:1312.3618.

External links
• Weisstein, Eric W. "Repeating Decimal". MathWorld.
Recurrence relation
In mathematics, a recurrence relation is an equation according to which the $n$th term of a sequence of numbers is equal to some combination of the previous terms. Often, only $k$ previous terms of the sequence appear in the equation, for a parameter $k$ that is independent of $n$; this number $k$ is called the order of the relation. If the values of the first $k$ numbers in the sequence have been given, the rest of the sequence can be calculated by repeatedly applying the equation.
In linear recurrences, the $n$th term is equated to a linear function of the $k$ previous terms. A famous example is the recurrence for the Fibonacci numbers, $F_{n}=F_{n-1}+F_{n-2},$ where the order $k$ is two and the linear function merely adds the two previous terms. This example is a linear recurrence with constant coefficients, because the coefficients of the linear function (1 and 1) are constants that do not depend on $n$. For these recurrences, one can express the general term of the sequence as a closed-form expression of $n$. Linear recurrences with polynomial coefficients depending on $n$ are also important, because many common elementary and special functions have a Taylor series whose coefficients satisfy such a recurrence relation (see holonomic function).
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of $n$. The concept of a recurrence relation can be extended to multidimensional arrays, that is, indexed families that are indexed by tuples of natural numbers.

Definition
A recurrence relation is an equation that expresses each element of a sequence as a function of the preceding ones. More precisely, in the case where only the immediately preceding element is involved, a recurrence relation has the form
$u_{n}=\varphi (n,u_{n-1})\quad {\text{for}}\quad n>0,$
where $\varphi :\mathbb {N} \times X\to X$ is a function, X being a set to which the elements of the sequence must belong. For any $u_{0}\in X$, this defines a unique sequence with $u_{0}$ as its first element, called the initial value.[1] It is easy to modify the definition to get sequences starting from a term of index 1 or higher. This defines a recurrence relation of first order. A recurrence relation of order k has the form
$u_{n}=\varphi (n,u_{n-1},u_{n-2},\ldots ,u_{n-k})\quad {\text{for}}\quad n\geq k,$
where $\varphi :\mathbb {N} \times X^{k}\to X$ is a function that involves k consecutive elements of the sequence. In this case, k initial values are needed for defining a sequence.

Examples
Factorial
The factorial is defined by the recurrence relation
$n!=n(n-1)!\quad {\text{for}}\quad n>0,$
and the initial condition
$0!=1.$
This is an example of a linear recurrence with polynomial coefficients of order 1, with the simple polynomial $f(n)=n$ as its only coefficient.

Logistic map
An example of a recurrence relation is the logistic map:
$x_{n+1}=rx_{n}(1-x_{n}),$
with a given constant $r$; given the initial term $x_{0}$, each subsequent term is determined by this relation.

Fibonacci numbers
The recurrence of order two satisfied by the Fibonacci numbers is the canonical example of a homogeneous linear recurrence relation with constant coefficients (see below). The Fibonacci sequence is defined using the recurrence
$F_{n}=F_{n-1}+F_{n-2}$
with initial conditions
$F_{0}=0,\quad F_{1}=1.$
Explicitly, the recurrence yields the equations
$F_{2}=F_{1}+F_{0},\quad F_{3}=F_{2}+F_{1},\quad F_{4}=F_{3}+F_{2},$
etc.
We obtain the sequence of Fibonacci numbers, which begins
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
The recurrence can be solved by methods described below, yielding Binet's formula, which involves powers of the two roots of the characteristic polynomial $t^{2}=t+1$; the generating function of the sequence is the rational function
${\frac {t}{1-t-t^{2}}}.$

Binomial coefficients
A simple example of a multidimensional recurrence relation is given by the binomial coefficients ${\tbinom {n}{k}}$, which count the ways of selecting $k$ elements out of a set of $n$ elements. They can be computed by the recurrence relation
${\binom {n}{k}}={\binom {n-1}{k-1}}+{\binom {n-1}{k}},$
with the base cases ${\tbinom {n}{0}}={\tbinom {n}{n}}=1$. Using this formula to compute the values of all binomial coefficients generates an infinite array called Pascal's triangle. The same values can also be computed directly by a different formula that is not a recurrence, but uses factorials, multiplication and division, not just additions:
${\binom {n}{k}}={\frac {n!}{k!(n-k)!}}.$
The binomial coefficients can also be computed with a uni-dimensional recurrence:
${\binom {n}{k}}={\binom {n}{k-1}}(n-k+1)/k,$
with the initial value $ {\binom {n}{0}}=1$. (The division is not displayed as a fraction, to emphasize that it must be computed after the multiplication, so as not to introduce fractional numbers.) This recurrence is widely used in computers because it does not require building a table, as the bi-dimensional recurrence does, and does not involve very large integers, as the formula with factorials does (if one uses $ {\binom {n}{k}}={\binom {n}{n-k}},$ all involved integers are smaller than the final result).

Difference operator and difference equations
The difference operator is an operator that maps sequences to sequences, and, more generally, functions to functions. It is commonly denoted $\Delta ,$ and is defined, in functional notation, as
$(\Delta f)(x)=f(x+1)-f(x).$
It is thus a special case of finite difference. When using the index notation for sequences, the definition becomes
$(\Delta a)_{n}=a_{n+1}-a_{n}.$
The parentheses around $\Delta f$ and $\Delta a$ are generally omitted, and $\Delta a_{n}$ must be understood as the term of index n in the sequence $\Delta a,$ and not $\Delta $ applied to the element $a_{n}.$
Given a sequence $a=(a_{n})_{n\in \mathbb {N} },$ the first difference of a is $\Delta a.$ The second difference is $\Delta ^{2}a=(\Delta \circ \Delta )a=\Delta (\Delta a).$ A simple computation shows that
$\Delta ^{2}a_{n}=a_{n+2}-2a_{n+1}+a_{n}.$
More generally: the kth difference is defined recursively as $\Delta ^{k}=\Delta \circ \Delta ^{k-1},$ and one has
$\Delta ^{k}a_{n}=\sum _{t=0}^{k}(-1)^{t}{\binom {k}{t}}a_{n+k-t}.$
This relation can be inverted, giving
$a_{n+k}=a_{n}+{k \choose 1}\Delta a_{n}+\cdots +{k \choose k}\Delta ^{k}(a_{n}).$
A difference equation of order k is an equation that involves the k first differences of a sequence or a function, in the same way as a differential equation of order k relates the k first derivatives of a function.
The two above relations allow transforming a recurrence relation of order k into a difference equation of order k, and, conversely, a difference equation of order k into a recurrence relation of order k. Each transformation is the inverse of the other, and the sequences that are solutions of the difference equation are exactly those that satisfy the recurrence relation.
For example, the difference equation
$3\Delta ^{2}a_{n}+2\Delta a_{n}+7a_{n}=0$
is equivalent to the recurrence relation
$3a_{n+2}=4a_{n+1}-8a_{n},$
in the sense that the two equations are satisfied by the same sequences. As it is equivalent for a sequence to satisfy a recurrence relation or to be the solution of a difference equation, the two terms "recurrence relation" and "difference equation" are sometimes used interchangeably. See Rational difference equation and Matrix difference equation for examples of uses of "difference equation" instead of "recurrence relation".
Difference equations resemble differential equations, and this resemblance is often used to mimic methods for solving differential equations when solving difference equations, and therefore recurrence relations. Summation equations relate to difference equations as integral equations relate to differential equations. See time scale calculus for a unification of the theory of difference equations with that of differential equations.

From sequences to grids
Single-variable or one-dimensional recurrence relations are about sequences (i.e. functions defined on one-dimensional grids). Multi-variable or n-dimensional recurrence relations are about $n$-dimensional grids. Functions defined on $n$-grids can also be studied with partial difference equations.[2]

Solving
Solving linear recurrence relations with constant coefficients
Main article: Linear recurrence with constant coefficients

Solving first-order non-homogeneous recurrence relations with variable coefficients
For the general first-order non-homogeneous linear recurrence relation with variable coefficients:
$a_{n+1}=f_{n}a_{n}+g_{n},\qquad f_{n}\neq 0,$
there is also a nice method to solve it:[3]
$a_{n+1}-f_{n}a_{n}=g_{n}$
${\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {f_{n}a_{n}}{\prod _{k=0}^{n}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}$
${\frac {a_{n+1}}{\prod _{k=0}^{n}f_{k}}}-{\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}$
Let
$A_{n}={\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}.$
Then
$A_{n+1}-A_{n}={\frac {g_{n}}{\prod _{k=0}^{n}f_{k}}}$
$\sum _{m=0}^{n-1}(A_{m+1}-A_{m})=A_{n}-A_{0}=\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}$
${\frac {a_{n}}{\prod _{k=0}^{n-1}f_{k}}}=A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}$
$a_{n}=\left(\prod _{k=0}^{n-1}f_{k}\right)\left(A_{0}+\sum _{m=0}^{n-1}{\frac {g_{m}}{\prod _{k=0}^{m}f_{k}}}\right)$
If we apply the formula to $a_{n+1}=(1+hf_{nh})a_{n}+hg_{nh}$ and take the limit $h\to 0$, we get the formula for first-order linear differential equations with variable coefficients; the sum becomes an integral, and the product becomes the exponential function of an integral.

Solving general homogeneous linear recurrence relations
Many homogeneous linear recurrence relations may be solved by means of the generalized hypergeometric series. Special cases of these lead to recurrence relations for the orthogonal polynomials, and many special functions. For example, the solution to
$J_{n+1}={\frac {2n}{z}}J_{n}-J_{n-1}$
is given by
$J_{n}=J_{n}(z),$
the Bessel function, while
$(b-n)M_{n-1}+(2n-b+z)M_{n}-nM_{n+1}=0$
is solved by
$M_{n}=M(n,b;z),$
the confluent hypergeometric series. Sequences which are the solutions of linear difference equations with polynomial coefficients are called P-recursive. For these specific recurrence equations algorithms are known which find polynomial, rational or hypergeometric solutions.
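As a quick numerical check of the first-order closed form above, the following short C sketch compares direct iteration with the formula; the coefficient sequences f and g below are arbitrary illustrative choices:

#include <stdio.h>

// Illustrative coefficient sequences for a_{n+1} = f_n a_n + g_n.
static double f(int n) { return 1.0 + 1.0 / (n + 1); }
static double g(int n) { return (double)n; }

int main(void) {
    const int N = 10;
    const double a0 = 2.0;

    // Direct iteration of the recurrence.
    double a = a0;
    for (int n = 0; n < N; ++n)
        a = f(n) * a + g(n);

    // Closed form: a_N = (prod_{k<N} f_k) * (a_0 + sum_{m<N} g_m / prod_{k<=m} f_k).
    double prod = 1.0, sum = 0.0;
    for (int m = 0; m < N; ++m) {
        prod *= f(m);          // now holds prod_{k=0}^{m} f_k
        sum += g(m) / prod;
    }
    double closed = prod * (a0 + sum);

    printf("iterated = %.10f, closed form = %.10f\n", a, closed);
    return 0;
}

Both printed values agree (up to rounding), as the derivation predicts.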
Solving first-order rational difference equations
Main article: Rational difference equation
A first-order rational difference equation has the form $w_{t+1}={\tfrac {aw_{t}+b}{cw_{t}+d}}$. Such an equation can be solved by writing $w_{t}$ as a nonlinear transformation of another variable $x_{t}$ which itself evolves linearly. Then standard methods can be used to solve the linear difference equation in $x_{t}$.

Stability
Stability of linear higher-order recurrences
The linear recurrence of order $d$,
$a_{n}=c_{1}a_{n-1}+c_{2}a_{n-2}+\cdots +c_{d}a_{n-d},$
has the characteristic equation
$\lambda ^{d}-c_{1}\lambda ^{d-1}-c_{2}\lambda ^{d-2}-\cdots -c_{d}=0.$
The recurrence is stable, meaning that the iterates converge asymptotically to a fixed value, if and only if the eigenvalues (i.e., the roots of the characteristic equation), whether real or complex, are all less than unity in absolute value.

Stability of linear first-order matrix recurrences
Main article: Matrix difference equation
In the first-order matrix difference equation
$[x_{t}-x^{*}]=A[x_{t-1}-x^{*}]$
with state vector $x$ and transition matrix $A$, $x$ converges asymptotically to the steady state vector $x^{*}$ if and only if all eigenvalues of the transition matrix $A$ (whether real or complex) have an absolute value which is less than 1.

Stability of nonlinear first-order recurrences
Consider the nonlinear first-order recurrence
$x_{n}=f(x_{n-1}).$
This recurrence is locally stable, meaning that it converges to a fixed point $x^{*}$ from points sufficiently close to $x^{*}$, if the slope of $f$ in the neighborhood of $x^{*}$ is smaller than unity in absolute value: that is,
$|f'(x^{*})|<1.$
A nonlinear recurrence could have multiple fixed points, in which case some fixed points may be locally stable and others locally unstable; for continuous f two adjacent fixed points cannot both be locally stable. A nonlinear recurrence relation could also have a cycle of period $k$ for $k>1$. Such a cycle is stable, meaning that it attracts a set of initial conditions of positive measure, if the composite function
$g(x):=f\circ f\circ \cdots \circ f(x)$
with $f$ appearing $k$ times is locally stable according to the same criterion:
$|g'(x^{*})|<1,$
where $x^{*}$ is any point on the cycle. In a chaotic recurrence relation, the variable $x$ stays in a bounded region but never converges to a fixed point or an attracting cycle; any fixed points or cycles of the equation are unstable. See also logistic map, dyadic transformation, and tent map.

Relationship to differential equations
When solving an ordinary differential equation numerically, one typically encounters a recurrence relation. For example, when solving the initial value problem
$y'(t)=f(t,y(t)),\ \ y(t_{0})=y_{0},$
with Euler's method and a step size $h$, one calculates the values
$y_{0}=y(t_{0}),\ \ y_{1}=y(t_{0}+h),\ \ y_{2}=y(t_{0}+2h),\ \dots $
by the recurrence
$y_{n+1}=y_{n}+hf(t_{n},y_{n}),\quad t_{n}=t_{0}+nh.$
Systems of linear first-order differential equations can be discretized exactly analytically using the methods shown in the discretization article.

Applications
Mathematical biology
Some of the best-known difference equations have their origins in the attempt to model population dynamics. For example, the Fibonacci numbers were once used as a model for the growth of a rabbit population. The logistic map is used either directly to model population growth, or as a starting point for more detailed models of population dynamics.
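To make the logistic map concrete, here is a minimal C sketch iterating it; r = 3.7 and x0 = 0.5 are arbitrary illustrative values (this r lies in the map's chaotic regime):

#include <stdio.h>

int main(void) {
    double r = 3.7;  // growth parameter (illustrative)
    double x = 0.5;  // initial population fraction (illustrative)
    for (int n = 0; n < 20; ++n) {
        printf("x_%d = %f\n", n, x);
        x = r * x * (1.0 - x);  // x_{n+1} = r x_n (1 - x_n)
    }
    return 0;
}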
In this context, coupled difference equations are often used to model the interaction of two or more populations. For example, the Nicholson–Bailey model for a host–parasite interaction is given by
$N_{t+1}=\lambda N_{t}e^{-aP_{t}}$
$P_{t+1}=N_{t}(1-e^{-aP_{t}}),$
with $N_{t}$ representing the hosts, and $P_{t}$ the parasites, at time $t$. Integrodifference equations are a form of recurrence relation important to spatial ecology. These and other difference equations are particularly suited to modeling univoltine populations.

Computer science
Recurrence relations are also of fundamental importance in analysis of algorithms.[4][5] If an algorithm is designed so that it will break a problem into smaller subproblems (divide and conquer), its running time is described by a recurrence relation. A simple example is the time an algorithm takes to find an element in an ordered vector with $n$ elements, in the worst case. A naive algorithm will search from left to right, one element at a time. The worst possible scenario is when the required element is the last, so the number of comparisons is $n$. A better algorithm is called binary search. However, it requires a sorted vector. It will first check if the element is at the middle of the vector. If not, then it will check if the middle element is greater or lesser than the sought element. At this point, half of the vector can be discarded, and the algorithm can be run again on the other half. The number of comparisons will be given by
$c_{1}=1,\quad c_{n}=1+c_{n/2},$
the time complexity of which will be $O(\log _{2}(n))$.

Digital signal processing
In digital signal processing, recurrence relations can model feedback in a system, where outputs at one time become inputs for future time. They thus arise in infinite impulse response (IIR) digital filters. For example, the equation for a feedback IIR comb filter of delay $T$ is:
$y_{t}=(1-\alpha )x_{t}+\alpha y_{t-T},$
where $x_{t}$ is the input at time $t$, $y_{t}$ is the output at time $t$, and $\alpha $ controls how much of the delayed signal is fed back into the output. From this we can see that
$y_{t}=(1-\alpha )x_{t}+\alpha ((1-\alpha )x_{t-T}+\alpha y_{t-2T})$
$y_{t}=(1-\alpha )x_{t}+(\alpha -\alpha ^{2})x_{t-T}+\alpha ^{2}y_{t-2T}$
etc.

Economics
See also: time series analysis and simultaneous equations model
Recurrence relations, especially linear recurrence relations, are used extensively in both theoretical and empirical economics.[6][7] In particular, in macroeconomics one might develop a model of various broad sectors of the economy (the financial sector, the goods sector, the labor market, etc.) in which some agents' actions depend on lagged variables. The model would then be solved for current values of key variables (interest rate, real GDP, etc.) in terms of past and current values of other variables.

See also
• Holonomic sequences
• Iterated function
• Orthogonal polynomials
• Recursion
• Recursion (computer science)
• Lagged Fibonacci generator
• Master theorem (analysis of algorithms)
• Circle points segments proof
• Continued fraction
• Time scale calculus
• Combinatorial principles
• Infinite impulse response
• Integration by reduction formulae
• Mathematical induction

References
Footnotes
1. Jacobson, Nathan, Basic Algebra 2 (2nd ed.), § 0.4, p. 16.
2. Cheng, Sui Sun, Partial Difference Equations, CRC Press, 2003, ISBN 978-0-415-29884-1.
3. "Archived copy" (PDF). Archived (PDF) from the original on 2010-07-05.
Retrieved 2010-10-19.
4. Cormen, T., et al., Introduction to Algorithms, MIT Press, 2009.
5. Sedgewick, R.; Flajolet, P., An Introduction to the Analysis of Algorithms, Addison-Wesley, 2013.
6. Stokey, Nancy L.; Lucas, Robert E. Jr.; Prescott, Edward C. (1989). Recursive Methods in Economic Dynamics. Cambridge: Harvard University Press. ISBN 0-674-75096-9.
7. Ljungqvist, Lars; Sargent, Thomas J. (2004). Recursive Macroeconomic Theory (Second ed.). Cambridge: MIT Press. ISBN 0-262-12274-X.

External links
• "Recurrence relation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Weisstein, Eric W. "Recurrence Equation". MathWorld.
• "OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients)
Recursion (computer science)
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem.[1][2] Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[3]

The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
— Niklaus Wirth, Algorithms + Data Structures = Programs, 1976[4]

This article is about recursive approaches to solving problems. For proofs by recursion, see Mathematical induction. For recursion in computer science acronyms, see Recursive acronym § Computer-related examples. For general use of the term, see Recursion.

Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages (for instance, Clojure)[5] do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for.
Repeatedly calling a function from within itself may cause the call stack to have a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for large problems, it is fundamental to use optimization techniques such as tail call optimization.

Recursive functions and algorithms
A common algorithm design tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results.
This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

Base case
A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case".
The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.
For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say "compute the nth term (nth partial sum)".

Recursive data types
Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is a technique for representing data whose exact size is unknown to the programmer: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Inductively defined data
An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.
Another example of inductive definition is the natural numbers (or positive integers): A natural number is either 1 or n+1, where n is a natural number.
Similarly recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions.
By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complicated arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.

Coinductively defined data and corecursion
A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size. A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.
Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.

Types of recursion
Single recursion and multiple recursion
Recursion that contains only a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search.
Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.
Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively entails multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, while tracking two successive values at each step – see corecursion: examples. A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.

Indirect recursion
Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f.
Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.
Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, it is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions.

Anonymous recursion
Recursion is usually done by explicitly calling a function by name. However, recursion can also be done via implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

Structural versus generative recursion
See also: Structural recursion
Some authors classify recursion as either "structural" or "generative". The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.
— Felleisen, Findler, Flatt, and Krishnamurthi, How to Design Programs, 2001[6]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.
Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.
— Matthias Felleisen, Advanced Functional Programming, 2002[7]

This distinction is important in proving termination of a function.
• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.
• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.
• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.
• By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as "error of approximation", that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

Implementation issues
In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:
• Wrapper function (at top)
• Short-circuiting the base case, aka "arm's-length recursion" (at bottom)
• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough
On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this.

Wrapper function
A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.
Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as "level of recursion" or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.

Short-circuiting the base case
Factorial: ordinary vs. short-circuit

Ordinary recursion:
int fac1(int n) {
    if (n <= 0) return 1;
    else return fac1(n-1)*n;
}

Short-circuit recursion:
static int fac2(int n) {
    // assert(n >= 2);
    if (n == 2) return 2;
    else return fac2(n-1)*n;
}
int fac2wrapper(int n) {
    if (n <= 1) return 1;
    else return fac2(n);
}

Short-circuiting the base case, also known as arm's-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function. The code above shortcuts factorial cases 0 and 1.
Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search.
Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.
Conceptually, short-circuiting can be considered to either have the same base case and recursive step, checking the base case only before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely "check valid then recurse", as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.[8]

Depth-first search
A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section for standard recursive discussion.
The standard recursive algorithm for a DFS is:
• base case: If current node is Null, return false
• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children
In short-circuiting, this is instead:
• check value of current node, return true if match,
• otherwise, on children, if not Null, then recurse.
In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).
In the case of a perfect binary tree of height h, there are $2^{h+1}-1$ nodes and $2^{h+1}$ Null pointers as children (2 for each of the $2^{h}$ leaves), so short-circuiting cuts the number of function calls in half in the worst case.
In C, the standard recursive algorithm may be implemented as:

bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;  // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) ||
               tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;  // empty tree
    else
        return tree_contains_do(tree_node, i);  // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;  // found
    else  // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left,  i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is made only if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a boolean, so the overall expression evaluates to a boolean. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.
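For illustration, here is that single-expression variant as a hypothetical sketch; the name tree_contains_expr and the collapsed body are not part of the original example, and it reuses struct node and the non-Null precondition of tree_contains_do above:

// Assumes tree_node != NULL, as for tree_contains_do above
bool tree_contains_expr(struct node *tree_node, int i) {
    return tree_node->data == i
        || (tree_node->left  && tree_contains_expr(tree_node->left,  i))
        || (tree_node->right && tree_contains_expr(tree_node->right, i));
}

As the text notes, this is equivalent but arguably harder to read, with no efficiency gain.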
Hybrid algorithm
Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.

Recursion versus iteration
Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit call stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead. Implementing an algorithm using iteration may not be easily achievable.
Compare the templates to compute $x_{n}$ defined by $x_{n}=f(n,x_{n-1})$ from $x_{\text{base}}$:

function recursive(n)
    if n == base
        return xbase
    else
        return f(n, recursive(n-1))

function iterative(n)
    x = xbase
    for i = base+1 to n
        x = f(i, x)
    return x

For an imperative language the overhead is to define the function, and for a functional language the overhead is to define the accumulator variable x.
For example, a factorial function may be implemented iteratively in C by assigning to a loop index variable and accumulator variable, rather than by passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n) {
    unsigned int product = 1;  // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

Expressive power
Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program's runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[9][10]
Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and for loops are routinely rewritten in recursive form in functional languages.[11][12] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, may cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack, although tail call elimination may be a feature that is not covered by a language's specification, and different implementations of the same language may differ in tail call elimination capabilities.
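To make the stack-simulation direction concrete, here is a C sketch of the earlier tree search rewritten without recursion; the fixed-capacity stack is an illustrative simplification (a robust version would grow it dynamically or bound the tree height):

#include <stdbool.h>
#include <stddef.h>

struct node { int data; struct node *left, *right; };

bool tree_contains_iter(struct node *root, int i) {
    struct node *stack[1024];  // explicitly managed stack of pending subtrees
    size_t top = 0;
    if (root != NULL)
        stack[top++] = root;
    while (top > 0) {
        struct node *n = stack[--top];  // pop the next subtree to examine
        if (n->data == i)
            return true;
        if (n->left != NULL)   // defer subtrees instead of
            stack[top++] = n->left;     // recursing on them
        if (n->right != NULL)
            stack[top++] = n->right;
    }
    return false;
}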
Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the "factorial" example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

Stack space

In some programming languages, the maximum size of the call stack is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[13] Note the caveat below regarding the special case of tail recursion.

Vulnerability

Because recursive algorithms can be subject to stack overflows, they may be vulnerable to pathological or malicious input.[14] Some malware specifically targets a program's call stack and takes advantage of the stack's inherently recursive nature.[15] Even in the absence of malware, a stack overflow caused by unbounded recursion can be fatal to the program, and exception handling logic may not prevent the corresponding process from being terminated.[16]

Multiply recursive problems

Multiply recursive problems are inherently recursive, because of the prior state they need to track. One example is tree traversal as in depth-first search; though both recursive and iterative methods are used,[17] they contrast with list traversal and linear search in a list, which are singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.

Refactoring recursion

Recursive algorithms can be replaced with non-recursive counterparts.[18] One method for replacing recursive algorithms is to simulate them using heap memory in place of stack memory.[19] An alternative is to develop a replacement algorithm entirely based on non-recursive methods, which can be challenging.[20] For example, recursive algorithms for matching wildcards, such as Rich Salz's wildmat algorithm,[21] were once typical. Non-recursive algorithms for the same purpose, such as the Krauss matching wildcards algorithm, have been developed to avoid the drawbacks of recursion[22] and have improved only gradually based on techniques such as collecting tests and profiling performance.[23]

Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations.
For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the "for" and "while" loops.

Tail recursion:

//INPUT: Integers x, y such that x >= y and y >= 0
int gcd(int x, int y) {
    if (y == 0)
        return x;
    else
        return gcd(y, x % y);
}

Augmenting recursion:

//INPUT: n is an Integer such that n >= 0
int fact(int n) {
    if (n == 0)
        return 1;
    else
        return n * fact(n - 1);
}

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.

Order of execution

Consider these two functions:

Function 1

void recursiveFunction(int num) {
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

Function 2

void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}

Function 2 is function 1 with the lines swapped. In the case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Also note that the order of the print statements is reversed, which is due to the way the functions and statements are stored on the call stack.

Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

$\operatorname {fact} (n)={\begin{cases}1&{\mbox{if }}n=0\\n\cdot \operatorname {fact} (n-1)&{\mbox{if }}n>0\\\end{cases}}$

Pseudocode (recursive):

function factorial is:
    input: integer n such that n >= 0
    output: [n × (n-1) × (n-2) × ... × 1]
    1. if n is 0, return 1
    2. otherwise, return [ n × factorial(n-1) ]
end factorial

The function can also be written as a recurrence relation:

$b_{n}=nb_{n-1}$
$b_{0}=1$

This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the pseudocode above:

Computing the recurrence relation for n = 4:
b_4 = 4 × b_3
    = 4 × (3 × b_2)
    = 4 × (3 × (2 × b_1))
    = 4 × (3 × (2 × (1 × b_0)))
    = 4 × (3 × (2 × (1 × 1)))
    = 4 × (3 × (2 × 1))
    = 4 × (3 × 2)
    = 4 × 6
    = 24

This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages:

Pseudocode (iterative):

function factorial is:
    input: integer n such that n >= 0
    output: [n × (n-1) × (n-2) × ... × 1]
    1. create new variable called running_total with a value of 1
    2. begin loop
        1. if n is 0, exit loop
        2. set running_total to (running_total × n)
        3. decrement n
        4. repeat loop
    3. return running_total
end factorial

The imperative code above is equivalent to this mathematical definition using an accumulator variable t:

${\begin{aligned}\operatorname {fact} (n)&=\operatorname {fact_{acc}} (n,1)\\\operatorname {fact_{acc}} (n,t)&={\begin{cases}t&{\mbox{if }}n=0\\\operatorname {fact_{acc}} (n-1,nt)&{\mbox{if }}n>0\\\end{cases}}\end{aligned}}$

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.

Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

$\gcd(x,y)={\begin{cases}x&{\mbox{if }}y=0\\\gcd(y,\operatorname {remainder} (x,y))&{\mbox{if }}y>0\\\end{cases}}$

Pseudocode (recursive):

function gcd is:
    input: integer x, integer y such that x > 0 and y >= 0
    1. if y is 0, return x
    2. otherwise, return [ gcd( y, (remainder of x/y) ) ]
end gcd

Recurrence relation for greatest common divisor, where $x\%y$ expresses the remainder of $x/y$:

$\gcd(x,y)=\gcd(y,x\%y)$ if $y\neq 0$
$\gcd(x,0)=x$

Computing the recurrence relation for x = 27 and y = 9:
gcd(27, 9) = gcd(9, 27 % 9) = gcd(9, 0) = 9

Computing the recurrence relation for x = 111 and y = 259:
gcd(111, 259) = gcd(259, 111 % 259) = gcd(259, 111) = gcd(111, 259 % 111) = gcd(111, 37) = gcd(37, 111 % 37) = gcd(37, 0) = 37

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and the computation shown above shows the steps of evaluation that would be performed by a language that eliminates tail calls. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.

Pseudocode (iterative):

function gcd is:
    input: integer x, integer y such that x >= y and y >= 0
    1. create new variable called remainder
    2. begin loop
        1. if y is zero, exit loop
        2. set remainder to the remainder of x/y
        3. set x to y
        4. set y to remainder
        5. repeat loop
    3. return x
end gcd

The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.

Towers of Hanoi

Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[24][25] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

$\operatorname {hanoi} (n)={\begin{cases}1&{\mbox{if }}n=1\\2\cdot \operatorname {hanoi} (n-1)+1&{\mbox{if }}n>1\\\end{cases}}$

Recurrence relation for hanoi:

$h_{n}=2h_{n-1}+1$
$h_{1}=1$

Computing the recurrence relation for n = 4:
hanoi(4) = 2×hanoi(3) + 1
         = 2×(2×hanoi(2) + 1) + 1
         = 2×(2×(2×hanoi(1) + 1) + 1) + 1
         = 2×(2×(2×1 + 1) + 1) + 1
         = 2×(2×(3) + 1) + 1
         = 2×(7) + 1
         = 15

Example implementations:

Pseudocode (recursive):

function hanoi is:
    input: integer n, such that n >= 1
    1. if n is 1 then return 1
    2. return [ 2 * [ call hanoi(n-1) ] + 1 ]
end hanoi

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[26]

An explicit formula for Towers of Hanoi:

h_1 = 1   = 2^1 − 1
h_2 = 3   = 2^2 − 1
h_3 = 7   = 2^3 − 1
h_4 = 15  = 2^4 − 1
h_5 = 31  = 2^5 − 1
h_6 = 63  = 2^6 − 1
h_7 = 127 = 2^7 − 1

In general: h_n = 2^n − 1, for all n >= 1

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

/*
  Call binary_search with proper initial conditions.
  INPUT:
    data is an array of integers SORTED in ASCENDING order,
    toFind is the integer to search for,
    count is the total number of elements in the array
  OUTPUT:
    result of binary_search
*/
int search(int *data, int toFind, int count) {
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count-1);
}

/*
  Binary Search Algorithm.
  INPUT:
    data is an array of integers SORTED in ASCENDING order,
    toFind is the integer to search for,
    start is the minimum array index,
    end is the maximum array index
  OUTPUT:
    position of the integer toFind within array data,
    -1 if not found
*/
int binary_search(int *data, int toFind, int start, int end) {
    // Get the midpoint.
    int mid = start + (end - start)/2;   // Integer division

    if (start > end)                     // Stop condition (base case)
        return -1;
    else if (data[mid] == toFind)        // Found, return index
        return mid;
    else if (data[mid] > toFind)         // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid-1);
    else                                 // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid+1, end);
}

Recursive data structures (structural recursion)

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time. "Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms."[27]

The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively. As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.[7]

Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure.
Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;            // some integer data
    struct node *next;   // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list) {
    if (list != NULL) {                // base case
        printf("%d ", list->data);     // print integer data followed by a space
        list_print(list->next);        // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;            // some integer data
    struct node *left;   // pointer to the left subtree
    struct node *right;  // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;  // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) ||
               tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node) {
    if (tree_node != NULL) {               // base case
        tree_print(tree_node->left);       // go left
        printf("%d ", tree_node->data);    // print the integer followed by a space
        tree_print(tree_node->right);      // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, therefore the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.
import java.io.File;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            System.out.println(fs[i]);
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}

This code demonstrates both recursion and iteration - the files and directories are iterated over, and each directory is opened recursively. The "rtraverse" method is an example of direct recursion, whilst the "traverse" method is a wrapper function. The "base case" scenario is that there will always be a fixed number of files and/or directories in a given filesystem.

Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big-O term.

Shortcut rule (master theorem)

If the time-complexity of the function is in the form

$T(n)=a\cdot T(n/b)+f(n)$

then the Big O of the time-complexity is as follows:

• If $f(n)=O(n^{\log _{b}a-\varepsilon })$ for some constant $\varepsilon >0$, then $T(n)=\Theta (n^{\log _{b}a})$
• If $f(n)=\Theta (n^{\log _{b}a})$, then $T(n)=\Theta (n^{\log _{b}a}\log n)$
• If $f(n)=\Omega (n^{\log _{b}a+\varepsilon })$ for some constant $\varepsilon >0$, and if $a\cdot f(n/b)\leq c\cdot f(n)$ for some constant c < 1 and all sufficiently large n, then $T(n)=\Theta (f(n))$

where a represents the number of recursive calls at each level of recursion, b represents by what factor the input is smaller at the next level of recursion (i.e., the number of pieces into which the problem is divided), and f(n) represents the work that the function does independently of any recursion (e.g., partitioning, recombining) at each level of recursion.
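For example, merge sort makes $a = 2$ recursive calls on halves of the input ($b = 2$) and does $f(n) = \Theta(n)$ work merging the results. Here $\log_b a = \log_2 2 = 1$ and $f(n) = \Theta(n^1)$, so the second case applies and $T(n) = \Theta(n \log n)$. Likewise, the recursive binary search above has $a = 1$, $b = 2$, and $f(n) = \Theta(1) = \Theta(n^{\log_2 1})$, so again by the second case $T(n) = \Theta(\log n)$.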
See also

• Functional programming
• Computational problem
• Hierarchical and recursive queries in SQL
• Kleene–Rosser paradox
• Open recursion
• Recursion
• Sierpiński curve
• McCarthy 91 function
• μ-recursive functions
• Primitive recursive functions
• Tak (function)

Notes

1. Graham, Ronald; Knuth, Donald; Patashnik, Oren (1990). "1: Recurrent Problems". Concrete Mathematics. Addison-Wesley. ISBN 0-201-55802-5.
2. Kuhail, M. A.; Negreiros, J.; Seffah, A. (2021). "Teaching Recursive Thinking using Unplugged Activities" (PDF). WTE&TE. 19: 169–175.
3. Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). PWS Publishing Company. p. 427. ISBN 978-0-53494446-9.
4. Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126. ISBN 978-0-13022418-7.
5. "Functional Programming | Clojure for the Brave and True". www.braveclojure.com. Retrieved 2020-10-21.
6. Felleisen et al. 2001, Part V "Generative Recursion".
7. Felleisen, Matthias (2002). "Developing Interactive Web Programs". In Jeuring, Johan (ed.). Advanced Functional Programming: 4th International School (PDF). Springer. p. 108. ISBN 9783540448334.
8. Mongan, John; Giguère, Eric; Kindler, Noah (2013). Programming Interviews Exposed: Secrets to Landing Your Next Job (3rd ed.). Wiley. p. 115. ISBN 978-1-118-26136-1.
9. Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.
10. Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.
11. Shivers, Olin. "The Anatomy of a Loop - A story of scope and control" (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.
12. "The Anatomy of a Loop". Lambda the Ultimate. Retrieved 2012-09-03.
13. "27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation". Docs.python.org. Retrieved 2012-09-03.
14. Krauss, Kirk J. (2014). "Matching Wildcards: An Empirical Way to Tame an Algorithm". Dr. Dobb's Journal.
15. Mueller, Oliver (2012). "Anatomy of a Stack Smashing Attack and How GCC Prevents It". Dr. Dobb's Journal.
16. "StackOverflowException Class". .NET Framework Class Library. Microsoft Developer Network. 2018.
17. "Depth First Search (DFS): Iterative and Recursive Implementation". Techie Delight. 2018.
18. Mitrovic, Ivan. "Replace Recursion with Iteration". ThoughtWorks.
19. La, Woong Gyu (2015). "How to replace recursive functions using stack and while-loop to avoid the stack-overflow". CodeProject.
20. Moertel, Tom (2013). "Tricks of the trade: Recursion to Iteration, Part 2: Eliminating Recursion with the Time-Traveling Secret Feature Trick".
21. Salz, Rich (1991). "wildmat.c". GitHub.
22. Krauss, Kirk J. (2008). "Matching Wildcards: An Algorithm". Dr. Dobb's Journal.
23. Krauss, Kirk J. (2018). "Matching Wildcards: An Improved Algorithm for Big Data". Develop for Performance.
24. Graham, Knuth & Patashnik 1990, §1.1: The Tower of Hanoi
25. Epp 1995, pp. 427–430: The Tower of Hanoi
26. Epp 1995, pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence
27. Wirth 1976, p. 127

References

• Barron, David William (1968) [1967]. Written at Cambridge, UK. Gill, Stanley (ed.). Recursive techniques in programming. Macdonald Computer Monographs (1 ed.). London, UK: Macdonald & Co. (Publishers) Ltd. SBN 356-02201-3. (viii+64 pages)
• Felleisen, Matthias; Findler, Robert B.; Flatt, Matthew; Krishnamurthi, Shriram (2001). How To Design Programs: An Introduction to Computing and Programming. MIT Press. ISBN 0262062186.
• Rubio-Sanchez, Manuel (2017). Introduction to Recursive Programming. CRC Press. ISBN 978-1-351-64717-5.
• Pevac, Irena (2016). Practicing Recursion in Java. CreateSpace Independent. ISBN 978-1-5327-1227-2.
• Roberts, Eric (2005). Thinking Recursively with Java. Wiley. ISBN 978-0-47170146-0.
• Rohl, Jeffrey S. (1984). Recursion Via Pascal. Cambridge University Press. ISBN 978-0-521-26934-6.
• Helman, Paul; Veroff, Robert. Walls and Mirrors.
• Abelson, Harold; Sussman, Gerald Jay; Sussman, Julie (1996). Structure and Interpretation of Computer Programs (2nd ed.). MIT Press. ISBN 0-262-51087-1.
• Dijkstra, Edsger W. (1960). "Recursive Programming". Numerische Mathematik. 2 (1): 312–318. doi:10.1007/BF01386232. S2CID 127891023.
Recursive definition

In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set.

A recursive definition of a function defines values of the function for some inputs in terms of the values of the same function for other (usually smaller) inputs. For example, the factorial function n! is defined by the rules

${\begin{aligned}&0!=1.\\&(n+1)!=(n+1)\cdot n!.\end{aligned}}$

This definition is valid for each natural number n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure for computing the value of the function n!, starting from n = 0 and proceeding onwards with n = 1, 2, 3 etc. The recursion theorem states that such a definition indeed defines a function that is unique. The proof uses mathematical induction.[1]

An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set $\mathbb{N}$ of natural numbers is:

1. 1 is in $\mathbb{N}$.
2. If an element n is in $\mathbb{N}$ then n + 1 is in $\mathbb{N}$.
3. $\mathbb{N}$ is the intersection of all sets satisfying (1) and (2).

There are many sets that satisfy (1) and (2) – for example, the set {1, 1.649, 2, 2.649, 3, 3.649, …} satisfies the definition. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members. Note that this definition assumes that $\mathbb{N}$ is contained in a larger set (such as the set of real numbers) in which the operation + is defined.

Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. For example, the definition of the natural numbers presented here directly implies the principle of mathematical induction for natural numbers: if a property holds of the natural number 0 (or 1), and the property holds of n + 1 whenever it holds of n, then the property holds of all natural numbers (Aczel 1977:742).

Form of recursive definitions

Most recursive definitions have two foundations: a base case (basis) and an inductive clause. The difference between a circular definition and a recursive definition is that a recursive definition must always have base cases, cases that satisfy the definition without being defined in terms of the definition itself, and all other instances in the inductive clauses must be "smaller" in some sense (i.e., closer to those base cases that terminate the recursion) — a rule also known as "recur only with a simpler case".[2]

In contrast, a circular definition may have no base case, and may even define the value of a function in terms of that value itself — rather than in terms of other values of the function. Such a situation would lead to an infinite regress.

That recursive definitions are valid – meaning that a recursive definition identifies a unique function – is a theorem of set theory known as the recursion theorem, the proof of which is non-trivial.[3] Where the domain of the function is the natural numbers, sufficient conditions for the definition to be valid are that the value of f(0) (i.e., the base case) is given, and that for n > 0, an algorithm is given for determining f(n) in terms of n and $f(0),f(1),\dots ,f(n-1)$ (i.e., the inductive clause).
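As a concrete illustration of a definition of this shape (a hypothetical example of our own, not from the cited theory), let f(0) = 1 and, for n > 0, let f(n) = f(0) + f(1) + … + f(n−1). The inductive clause determines each value from all earlier ones, and a bottom-up computation in C might look like:

#include <stdio.h>

/* f(0) = 1; for n > 0, f(n) = f(0) + f(1) + ... + f(n-1).
   Each value is determined by all earlier values, matching the
   shape of the inductive clause described above. */
unsigned long f(unsigned int n) {
    unsigned long values[64];              /* assumes n < 64 */
    values[0] = 1;                         /* base case */
    for (unsigned int k = 1; k <= n; k++) {
        unsigned long sum = 0;
        for (unsigned int j = 0; j < k; j++)
            sum += values[j];              /* uses f(0), ..., f(k-1) */
        values[k] = sum;
    }
    return values[n];
}

int main(void) {
    for (unsigned int n = 0; n < 6; n++)
        printf("f(%u) = %lu\n", n, f(n));  /* prints 1, 1, 2, 4, 8, 16 */
    return 0;
}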
More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case. An outline of the general proof and the criteria can be found in James Munkres' Topology. However, a specific case (domain is restricted to the positive integers instead of any well-ordered set) of the general recursive definition will be given below.[4]

Principle of recursive definition

Let A be a set and let a0 be an element of A. If ρ is a function which assigns to each function f mapping a nonempty section of the positive integers into A, an element of A, then there exists a unique function $h:\mathbb {Z} _{+}\to A$ such that

${\begin{aligned}h(1)&=a_{0}\\h(i)&=\rho \left(h|_{\{1,2,\ldots ,i-1\}}\right){\text{ for }}i>1.\end{aligned}}$

Examples of recursive definitions

Elementary functions

Addition is defined recursively based on counting as

${\begin{aligned}&0+a=a,\\&(1+n)+a=1+(n+a).\end{aligned}}$

Multiplication is defined recursively as

${\begin{aligned}&0\cdot a=0,\\&(1+n)\cdot a=a+n\cdot a.\end{aligned}}$

Exponentiation is defined recursively as

${\begin{aligned}&a^{0}=1,\\&a^{1+n}=a\cdot a^{n}.\end{aligned}}$

Binomial coefficients can be defined recursively as

${\begin{aligned}&{\binom {a}{0}}=1,\\&{\binom {1+a}{1+n}}={\frac {(1+a){\binom {a}{n}}}{1+n}}.\end{aligned}}$

Prime numbers

The set of prime numbers can be defined as the unique set of positive integers satisfying

• 1 is not a prime number,
• any other positive integer is a prime number if and only if it is not divisible by any prime number smaller than itself.

The primality of the integer 1 (it is not prime) is the base case; checking the primality of any larger integer X by this definition requires knowing the primality of every integer between 1 and X, which is well defined by this definition. That last point can be proved by induction on X, for which it is essential that the second clause says "if and only if"; if it had just said "if", the primality of, for instance, the number 4 would not be clear, and the further application of the second clause would be impossible.

Non-negative even numbers

The even numbers can be defined as consisting of

• 0 is in the set E of non-negative evens (basis clause),
• For any element x in the set E, x + 2 is in E (inductive clause),
• Nothing is in E unless it is obtained from the basis and inductive clauses (extremal clause).

Well formed formulas

It is chiefly in logic or computer programming that recursive definitions are found. For example, a well-formed formula (wff) can be defined as:

1. a symbol which stands for a proposition – like p means "Connor is a lawyer."
2. The negation symbol (¬), followed by a wff – like ¬p means "It is not true that Connor is a lawyer."
3. One of the binary connectives – such as conjunction (∧), disjunction (∨), or implication (→) – combining two wffs. The symbol ∧ (AND) means "both are true", so p ∧ q may mean "Connor is a lawyer, and Mary likes music."

The value of such a recursive definition is that it can be used to determine whether any particular string of symbols is "well formed".

• p ∧ q is well formed, because it is AND (∧) combining the atomic wffs p and q.
• ¬(p ∧ q) is well formed, because it is NOT (¬) followed by p ∧ q, which is in turn a wff.
• (¬p) ∧ (¬q) is AND (∧) combining ¬p and ¬q; and both ¬p and ¬q are wffs.
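Such a definition translates directly into a recursive well-formedness checker. The C sketch below is our own illustration and assumes a simplified ASCII grammar (lowercase letters as propositions, '!' for ¬, '&', '|', '>' for the binary connectives, and mandatory parentheses around binary formulas); none of these conventions come from the article itself:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Try to parse one wff starting at s[*pos]; advance *pos past it.
   Each clause of the recursive definition becomes one branch. */
static bool parse_wff(const char *s, size_t *pos) {
    char c = s[*pos];
    if (c >= 'a' && c <= 'z') {            /* clause 1: atomic proposition */
        (*pos)++;
        return true;
    }
    if (c == '!') {                        /* clause 2: negation of a wff */
        (*pos)++;
        return parse_wff(s, pos);
    }
    if (c == '(') {                        /* clause 3: ( wff op wff ) */
        (*pos)++;
        if (!parse_wff(s, pos)) return false;
        char op = s[*pos];
        if (op != '&' && op != '|' && op != '>') return false;
        (*pos)++;
        if (!parse_wff(s, pos)) return false;
        if (s[*pos] != ')') return false;
        (*pos)++;
        return true;
    }
    return false;
}

bool is_wff(const char *s) {
    size_t pos = 0;
    return parse_wff(s, &pos) && pos == strlen(s);
}

int main(void) {
    printf("%d\n", is_wff("(p&q)"));    /* 1: well formed */
    printf("%d\n", is_wff("!(p&q)"));   /* 1: well formed */
    printf("%d\n", is_wff("(p&)"));     /* 0: not well formed */
    return 0;
}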
See also

• Mathematical induction
• Recursive data types
• Recursion
• Structural induction

Notes

1. Henkin, Leon (1960). "On Mathematical Induction". The American Mathematical Monthly. 67 (4): 323–338. doi:10.2307/2308975. ISSN 0002-9890. JSTOR 2308975.
2. "All About Recursion". www.cis.upenn.edu. Retrieved 2019-10-24.
3. For a proof of the Recursion Theorem, see On Mathematical Induction (1960) by Leon Henkin.
4. Munkres, James (1975). Topology, a first course (1st ed.). New Jersey: Prentice-Hall. p. 68, exercises 10 and 12. ISBN 0-13-925495-1.

References

• Halmos, Paul (1960). Naive set theory. van Nostrand. OCLC 802530334.
• Aczel, Peter (1977). "An Introduction to Inductive Definitions". In Barwise, J. (ed.). Handbook of Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Vol. 90. North-Holland. pp. 739–782. doi:10.1016/S0049-237X(08)71120-0. ISBN 0-444-86388-5.
• Hein, James L. (2010). Discrete Structures, Logic, and Computability. Jones & Bartlett. ISBN 978-0-7637-7206-2. OCLC 636352297.
Recursion

Recursion occurs when the definition of a concept or process depends on a simpler version of itself.[1] Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no infinite loop or infinite chain of references can occur. A process that exhibits recursion is recursive.

Formal definitions

In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties:

• A simple base case (or cases) — a terminating scenario that does not use recursion to produce an answer
• A recursive step — a set of rules that reduces all successive cases toward the base case.

For example, the following is a recursive definition of a person's ancestor. One's ancestor is either:

• One's parent (base case), or
• One's parent's ancestor (recursive step).

The Fibonacci sequence is another classic example of recursion:

Fib(0) = 0 as base case 1,
Fib(1) = 1 as base case 2,
For all integers n > 1, Fib(n) = Fib(n − 1) + Fib(n − 2).

Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: "Zero is a natural number, and each natural number has a successor, which is also a natural number."[2] By this base case and recursive rule, one can generate the set of all natural numbers.

Other recursively defined mathematical objects include factorials, functions (e.g., recurrence relations), sets (e.g., Cantor ternary set), and fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor.

Informal definition

Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.[3]

To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps.

Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure.

When a procedure is thus defined, this immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete. But even if it is properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old, partially executed invocation of the procedure; this requires some administration as to how far various simultaneous instances of the procedures have progressed. For this reason, recursive definitions are very rare in everyday situations.
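Returning to the Fibonacci definition given under Formal definitions above, the two base cases and the recursive step can be transcribed directly into C (a minimal sketch; this naive version is exponentially slow for large n, since each call spawns two more):

/* Direct transcription of the Fibonacci definition above. */
unsigned long fib(unsigned int n) {
    if (n == 0) return 0;              /* base case 1 */
    if (n == 1) return 1;              /* base case 2 */
    return fib(n - 1) + fib(n - 2);    /* recursive step */
}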
In language

Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[4][5]

This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. This is really just a special case of the mathematical definition of recursion.

This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... There are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another.[6] Over the years, languages in general have proved amenable to this kind of analysis.

The generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[7] Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.[8]

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word and, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. In order to provide a single denotation for it that is suitably flexible, it is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[9]

A recursive grammar is a formal grammar that contains recursive production rules.[10]

Recursive humor

Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of: Recursion, see Recursion.[11]

A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself ("recursion 86, 139, 141, 182, 202, 269"). Early versions of this joke can be found in Let's talk Lisp by Laurent Siklóssy (published by Prentice Hall PTR on December 1, 1975, with a copyright date of 1976) and in Software Tools by Kernighan and Plauger (published by Addison-Wesley Professional on January 11, 1976).
The joke also appears in The UNIX Programming Environment by Kernighan and Pike. It did not appear in the first edition of The C Programming Language. The joke is part of functional programming folklore and was already widespread in the functional programming community before the publication of the aforementioned books.[12][13]

Another joke is that "To understand recursion, you must understand recursion."[11] In the English-language version of the Google web search engine, when a search for "recursion" is made, the site suggests "Did you mean: recursion."[14] An alternative form is the following, from Andrew Plotkin: "If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is."

Recursive acronyms are other examples of recursive humor. PHP, for example, stands for "PHP Hypertext Preprocessor", WINE stands for "WINE Is Not an Emulator", GNU stands for "GNU's not Unix", and SPARQL denotes the "SPARQL Protocol and RDF Query Language".

In mathematics

Recursively defined sets

Main article: Recursive definition

Example: the natural numbers

See also: Closure (mathematics)

The canonical example of a recursively defined set is given by the natural numbers:

0 is in $\mathbb{N}$;
if n is in $\mathbb{N}$, then n + 1 is in $\mathbb{N}$.

The set of natural numbers is the smallest set satisfying the previous two properties.

In mathematical logic, the Peano axioms (or Peano postulates or Dedekind–Peano axioms) are axioms for the natural numbers presented in the 19th century by the German mathematician Richard Dedekind and by the Italian mathematician Giuseppe Peano. The Peano Axioms define the natural numbers referring to a recursive successor function and addition and multiplication as recursive functions.

Example: Proof procedure

Another interesting example is the set of all "provable" propositions in an axiomatic system that are defined in terms of a proof procedure which is inductively (or recursively) defined as follows:

• If a proposition is an axiom, it is a provable proposition.
• If a proposition can be derived from true reachable propositions by means of inference rules, it is a provable proposition.
• The set of provable propositions is the smallest set of propositions satisfying these conditions.

Finite subdivision rules

Main article: Finite subdivision rule

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard "middle thirds" technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.

Functional recursion

A function may be recursively defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must be reducible to non-recursively defined values: in this case F(0) = 0 and F(1) = 1. A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, cannot be expressed without recursion.
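The standard two-argument Ackermann function can be written in a few lines of C (a sketch for illustration; the value grows so quickly that only very small inputs, roughly m <= 3, are practical to evaluate this way):

/* Ackermann function: total and computable, but not primitive
   recursive; it cannot be expressed without nested recursion. */
unsigned long ackermann(unsigned long m, unsigned long n) {
    if (m == 0) return n + 1;
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}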
Proofs involving recursive definitions

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction — a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

Recursive optimization

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).

The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f: X → X, the theorem states that there is a unique function $F:\mathbb {N} \to X$ (where $\mathbb {N}$ denotes the set of natural numbers including zero) such that

$F(0)=a$
$F(n+1)=f(F(n))$

for any natural number n. Dedekind was the first to pose the problem of unique definition of set-theoretical functions on $\mathbb {N}$ by recursion, and gave a sketch of an argument in the 1888 essay "Was sind und was sollen die Zahlen?"[15]

Proof of uniqueness

Take two functions $F:\mathbb {N} \to X$ and $G:\mathbb {N} \to X$ such that:

$F(0)=a$
$G(0)=a$
$F(n+1)=f(F(n))$
$G(n+1)=f(G(n))$

where a is an element of X. It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:

Base Case: F(0) = a = G(0) so the equality holds for n = 0.

Inductive Step: Suppose F(k) = G(k) for some $k\in \mathbb {N}$. Then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1). Hence F(k) = G(k) implies F(k + 1) = G(k + 1).

By induction, F(n) = G(n) for all $n\in \mathbb {N}$.

In computer science

Main article: Recursion (computer science)

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming. This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.

A classic example of recursion is the definition of the factorial function, given here in Python code:

def factorial(n):
    if n > 0:
        return n * factorial(n - 1)
    else:
        return 1

The function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations which define one or more sequences recursively. Some specific kinds of recurrence relation can be "solved" to obtain a non-recursive definition (e.g., a closed-form expression).
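For example, the Tower of Hanoi recurrence $h_n = 2h_{n-1} + 1$ with $h_1 = 1$, which appears earlier in this document, can be solved by adding 1 to both sides: $h_n + 1 = 2(h_{n-1} + 1)$, so the quantity $h_n + 1$ doubles at each step. Hence $h_n + 1 = 2^{n-1}(h_1 + 1) = 2^n$, giving the closed-form expression $h_n = 2^n - 1$, which defines the same sequence without recursion.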
Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually the simplicity of instructions. The main disadvantage is that the memory usage of recursive algorithms may grow very quickly, rendering them impractical for larger instances. In biology Shapes that seem to have been created by recursive processes sometimes appear in plants and animals, such as in branching structures in which one large part branches out into two or more similar smaller parts. One example is Romanesco broccoli.[16] In the social sciences Authors use the concept of recursivity to foreground the situation in which specifically social scientists find themselves when producing knowledge about the world they are always already part of.[17][18] According to Audrey Alejandro, “as social scientists, the recursivity of our condition deals with the fact that we are both subjects (as discourses are the medium through which we analyse) and objects of the academic discourses we produce (as we are social agents belonging to the world we analyse).”[19] From this basis, she identifies in recursivity a fundamental challenge in the production of emancipatory knowledge which calls for the exercise of reflexive efforts: we are socialised into discourses and dispositions produced by the socio-political order we aim to challenge, a socio-political order that we may, therefore, reproduce unconsciously while aiming to do the contrary. The recursivity of our situation as scholars – and, more precisely, the fact that the dispositional tools we use to produce knowledge about the world are themselves produced by this world – both evinces the vital necessity of implementing reflexivity in practice and poses the main challenge in doing so. — Audrey Alejandro, Alejandro (2021) In business Recursion is sometimes referred to in management science as the process of iterating through levels of abstraction in large business entities.[20] A common example is the recursive nature of management hierarchies, ranging from line management to senior management via middle management. It also encompasses the larger issue of capital structure in corporate governance.[21] In art See also: Mathematics and art and Infinity mirror The Russian Doll or Matryoshka doll is a physical artistic example of the recursive concept.[22] Recursion has been used in paintings since Giotto's Stefaneschi Triptych, made in 1320. Its central panel contains the kneeling figure of Cardinal Stefaneschi, holding up the triptych itself as an offering.[23][24] This practice is more generally known as the Droste effect, an example of the Mise en abyme technique. M. C. Escher's Print Gallery (1956) is a print which depicts a distorted city containing a gallery which recursively contains the picture, and so ad infinitum.[25] In culture The film Inception has colloquialized the appending of the suffix -ception to a noun to jokingly indicate the recursion of something.[26] See also • Corecursion • Course-of-values recursion • Digital infinity • A Dream Within a Dream (poem) • Droste effect • False awakening • Fixed point combinator • Infinite compositions of analytic functions • Infinite loop • Infinite regress • Infinitism • Infinity mirror • Iterated function • Mathematical induction • Mise en abyme • Reentrant (subroutine) • Self-reference • Spiegel im Spiegel • Strange loop • Tail recursion • Tupper's self-referential formula • Turtles all the way down References 1. Causey, Robert L. (2006). Logic, sets, and recursion (2nd ed.). 
Sudbury, Mass.: Jones and Bartlett Publishers. ISBN 0-7637-3784-4. OCLC 62093042. 2. "Peano axioms | mathematics". Encyclopedia Britannica. Retrieved 2019-10-24. 3. "Definition of RECURSIVE". www.merriam-webster.com. Retrieved 2019-10-24. 4. Pinker, Steven (1994). The Language Instinct. William Morrow. 5. Pinker, Steven; Jackendoff, Ray (2005). "The faculty of language: What's so special about it?". Cognition. 95 (2): 201–236. CiteSeerX 10.1.1.116.7784. doi:10.1016/j.cognition.2004.08.004. PMID 15694646. S2CID 1599505. 6. Nordquist, Richard. "What Is Recursion in English Grammar?". ThoughtCo. Retrieved 2019-10-24. 7. Nevins, Andrew; Pesetsky, David; Rodrigues, Cilene (2009). "Evidence and argumentation: A reply to Everett (2009)" (PDF). Language. 85 (3): 671–681. doi:10.1353/lan.0.0140. S2CID 16915455. Archived from the original (PDF) on 2012-01-06. 8. Drucker, Thomas (4 January 2008). Perspectives on the History of Mathematical Logic. Springer Science & Business Media. p. 110. ISBN 978-0-8176-4768-1. 9. Barbara Partee and Mats Rooth. 1983. In Rainer Bäuerle et al., Meaning, Use, and Interpretation of Language. Reprinted in Paul Portner and Barbara Partee, eds. 2002. Formal Semantics: The Essential Readings. Blackwell. 10. Nederhof, Mark-Jan; Satta, Giorgio (2002), "Parsing Non-recursive Context-free Grammars", Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL '02), Stroudsburg, PA, USA: Association for Computational Linguistics, pp. 112–119, doi:10.3115/1073083.1073104. 11. Hunter, David (2011). Essentials of Discrete Mathematics. Jones and Bartlett. p. 494. ISBN 9781449604424. 12. Shaffer, Eric. "CS 173:Discrete Structures" (PDF). University of Illinois at Urbana-Champaign. Retrieved 7 July 2023. 13. "Introduction to Computer Science and Programming in C; Session 8: September 25, 2008" (PDF). Columbia University. Retrieved 7 July 2023. 14. "recursion - Google Search". www.google.com. Retrieved 2019-10-24. 15. A. Kanamori, "In Praise of Replacement", pp.50--52. Bulletin of Symbolic Logic, vol. 18, no. 1 (2012). Accessed 21 August 2023. 16. "Picture of the Day: Fractal Cauliflower". 28 December 2012. Retrieved 19 April 2020. 17. Bourdieu, Pierre (1992). "Double Bind et Conversion". Pour Une Anthropologie Réflexive. Paris: Le Seuil. 18. Giddens, Anthony (1987). Social Theory and Modern Sociology. Polity Press. 19. Alejandro, Audrey (2021). "Reflexive discourse analysis: A methodology for the practice of reflexivity". European Journal of International Relations. 27 (1): 171. doi:10.1177/1354066120969789. ISSN 1354-0661. S2CID 229461433. 20. "The Canadian Small Business–Bank Interface: A Recursive Model". SAGE Journals. 21. Beer, Stafford (1972). Brain Of The Firm. ISBN 978-0471948391. 22. Tang, Daisy. "Recursion". Retrieved 24 September 2015. More examples of recursion: Russian Matryoshka dolls. Each doll is made of solid wood or is hollow and contains another Matryoshka doll inside it. 23. "Giotto di Bondone and assistants: Stefaneschi triptych". The Vatican. Retrieved 16 September 2015. 24. Svozil, Karl (2018). Physical (A)Causality: Determinism, Randomness and Uncaused Events. Springer. p. 12. ISBN 9783319708157. 25. Cooper, Jonathan (5 September 2007). "Art and Mathematics". Retrieved 5 July 2020. 26. "-ception – The Rice University Neologisms Database". Rice University. Archived from the original on July 5, 2017. Retrieved December 23, 2016. Bibliography • Dijkstra, Edsger W. (1960). "Recursive Programming". Numerische Mathematik. 2 (1): 312–318. 
doi:10.1007/BF01386232. S2CID 127891023.
• Johnsonbaugh, Richard (2004). Discrete Mathematics. Prentice Hall. ISBN 978-0-13-117686-7.
• Hofstadter, Douglas (1999). Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books. ISBN 978-0-465-02656-2.
• Shoenfield, Joseph R. (2000). Recursion Theory. A K Peters Ltd. ISBN 978-1-56881-149-9.
• Causey, Robert L. (2001). Logic, Sets, and Recursion. Jones & Bartlett. ISBN 978-0-7637-1695-0.
• Cori, Rene; Lascar, Daniel; Pelletier, Donald H. (2001). Recursion Theory, Gödel's Theorems, Set Theory, Model Theory. Oxford University Press. ISBN 978-0-19-850050-6.
• Barwise, Jon; Moss, Lawrence S. (1996). Vicious Circles. Stanford Univ Center for the Study of Language and Information. ISBN 978-0-19-850050-6. (Offers a treatment of corecursion.)
• Rosen, Kenneth H. (2002). Discrete Mathematics and Its Applications. McGraw-Hill College. ISBN 978-0-07-293033-7.
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). Introduction to Algorithms. MIT Press. ISBN 978-0-262-03293-3.
• Kernighan, B.; Ritchie, D. (1988). The C Programming Language. Prentice Hall. ISBN 978-0-13-110362-7.
• Stokey, Nancy; Robert Lucas; Edward Prescott (1989). Recursive Methods in Economic Dynamics. Harvard University Press. ISBN 978-0-674-75096-8.
• Hungerford (1980). Algebra. Springer. ISBN 978-0-387-90518-1. (First chapter on set theory.)

External links

• Recursion – tutorial by Alan Gauld
• Zip Files All The Way Down
• Nevins, Andrew and David Pesetsky and Cilene Rodrigues. Evidence and Argumentation: A Reply to Everett (2009). Language 85.3: 671–681 (2009)
Recursive largest first algorithm

The Recursive Largest First (RLF) algorithm is a heuristic for the NP-hard graph coloring problem. It was originally proposed by Frank Leighton in 1979.[1] The RLF algorithm assigns colors to a graph's vertices by constructing each color class one at a time. It does this by identifying a maximal independent set of vertices in the graph, assigning these to the same color, and then removing these vertices from the graph. These actions are repeated on the remaining subgraph until no vertices remain. To form high-quality solutions (solutions using few colors), the RLF algorithm uses specialized heuristic rules to try to identify "good quality" independent sets. These heuristics make the RLF algorithm exact for bipartite, cycle, and wheel graphs.[2] In general, however, the algorithm is approximate and may return solutions that use more colors than the graph's chromatic number.

Description

The algorithm can be described by the following three steps. At the end of this process, ${\mathcal {S}}$ gives a partition of the vertices representing a feasible $|{\mathcal {S}}|$-colouring of the graph $G$.

1. Let ${\mathcal {S}}=\emptyset $ be an empty solution. Also, let $G=(V,E)$ be the graph we wish to color, comprising a vertex set $V$ and an edge set $E$.
2. Identify a maximal independent set $S\subseteq V$. To do this:
   1. The first vertex added to $S$ should be the vertex in $G$ that has the largest number of neighbors.
   2. Subsequent vertices added to $S$ should be chosen as those that (a) are not currently adjacent to any vertex in $S$, and (b) have a maximal number of neighbors that are adjacent to vertices in $S$. Ties in condition (b) can be broken by selecting the vertex with the minimum number of neighbors not in $S$. Vertices are added to $S$ in this way until it is impossible to add further vertices.
3. Now set ${\mathcal {S}}={\mathcal {S}}\cup \{S\}$ and remove the vertices of $S$ from $G$. If $G$ still contains vertices, then return to Step 2; otherwise end.

Example

Consider a wheel graph $G=(V,E)$ with hub vertex $g$ and outer cycle $a,b,c,d,e,f$. Since this is a wheel graph, it will be optimally colored by RLF. Executing the algorithm results in the vertices being selected and colored in the following order:

1. Vertex $g$ (color 1)
2. Vertices $a$, $c$, and then $e$ (color 2)
3. Vertices $b$, $d$, and then $f$ (color 3)

This gives the final three-colored solution ${\mathcal {S}}=\{\{g\},\{a,c,e\},\{b,d,f\}\}$.

Performance

Let $n$ be the number of vertices in the graph and let $m$ be the number of edges. Using big O notation, in his original publication Leighton states the complexity of RLF to be ${\mathcal {O}}(n^{3})$; however, this can be improved upon. Much of the expense of this algorithm is due to Step 2, where vertex selection is made according to the heuristic rules stated above. Indeed, each time a vertex is selected for addition to the independent set $S$, information regarding the neighbors needs to be recalculated for each uncolored vertex. These calculations can be performed in ${\mathcal {O}}(m)$ time, meaning that the overall complexity of RLF is ${\mathcal {O}}(mn)$.[2] If the heuristics of Step 2 are replaced with random selection, then the complexity of this algorithm reduces to ${\mathcal {O}}(n+m)$; however, the resultant algorithm will usually return lower-quality solutions than RLF,[2] and it is then no longer exact for bipartite, cycle, and wheel graphs.
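A minimal C++ sketch of the three steps above follows; it is illustrative only (the function name rlfColoring and the adjacency-set representation are assumptions, and it omits the bookkeeping needed to reach the ${\mathcal {O}}(mn)$ bound discussed above).

#include <iostream>
#include <set>
#include <vector>

// Illustrative sketch of RLF. adj[v] holds the neighbors of vertex v.
// Returns color[v] in {0, 1, ...} for every vertex v.
std::vector<int> rlfColoring(const std::vector<std::set<int>>& adj) {
  const int n = (int)adj.size();
  std::vector<int> color(n, -1); // -1 = uncolored
  int c = 0;                     // current color class
  int remaining = n;
  while (remaining > 0) {        // Step 3: repeat until no vertices remain
    std::set<int> U;             // uncolored vertices still addable to S
    std::set<int> W;             // uncolored vertices adjacent to S
    for (int v = 0; v < n; ++v)
      if (color[v] == -1) U.insert(v);
    while (!U.empty()) {         // Step 2: grow a maximal independent set S
      int best = -1, bestW = -1, bestU = -1;
      for (int v : U) {
        int inW = 0, inU = 0;    // neighbors of v in W resp. in U
        for (int u : adj[v]) {
          if (W.count(u)) ++inW;
          else if (U.count(u)) ++inU;
        }
        bool take;
        if (best == -1)      take = true;
        else if (W.empty())  take = inU > bestU;  // rule 2.1: max degree
        else take = inW > bestW ||                // rule 2.2 (b): max inW,
             (inW == bestW && inU < bestU);       //   ties: min neighbors in U
        if (take) { best = v; bestW = inW; bestU = inU; }
      }
      color[best] = c;           // add the chosen vertex to color class c
      --remaining;
      U.erase(best);
      for (int u : adj[best])    // its uncolored neighbors leave U for W
        if (U.count(u)) { U.erase(u); W.insert(u); }
    }
    ++c;                         // open the next color class
  }
  return color;
}

int main() {
  // The wheel graph of the example: hub 6 (= g), outer cycle 0..5 (= a..f).
  std::vector<std::set<int>> adj(7);
  auto addEdge = [&](int u, int v) { adj[u].insert(v); adj[v].insert(u); };
  for (int i = 0; i < 6; ++i) { addEdge(i, (i + 1) % 6); addEdge(i, 6); }
  for (int v : rlfColoring(adj)) std::cout << v << ' '; // three color classes
}

On the wheel graph of the example, the sketch first colors the hub alone and then splits the outer cycle into two alternating classes, matching the three-color solution above.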
In an empirical comparison by Lewis in 2021, RLF was shown to produce significantly better vertex colorings than alternative heuristics such as the ${\mathcal {O}}(n+m)$ greedy algorithm and the ${\mathcal {O}}((n+m)\lg n)$ DSatur algorithm on random graphs. However, runtimes with RLF were also seen to be higher than these alternatives due to its higher overall complexity.[2] References 1. Leighton, F. (1979). "A graph coloring algorithm for large scheduling problems". Journal of Research of the National Bureau of Standards. 84 (6): 489–503. doi:10.6028/jres.084.024. PMC 6756213. PMID 34880531. 2. Lewis, R. (2021). A Guide to Graph Colouring: Algorithms and Applications. Texts in Computer Science. Springer. doi:10.1007/978-3-030-81054-2. ISBN 978-3-030-81053-5. S2CID 57188465. External links • High-Performance Graph Colouring Algorithms Suite of graph coloring algorithms (implemented in C++) used in the book A Guide to Graph Colouring: Algorithms and Applications (Springer International Publishers, 2021).
Computable ordinal

In mathematics, specifically computability theory and set theory, an ordinal $\alpha $ is said to be computable or recursive if there is a computable well-ordering of a computable subset of the natural numbers having the order type $\alpha $. It is easy to check that $\omega $ is computable. The successor of a computable ordinal is computable, and the set of all computable ordinals is closed downwards.

The supremum of all computable ordinals is called the Church–Kleene ordinal, the first nonrecursive ordinal, and is denoted by $\omega _{1}^{CK}$. The Church–Kleene ordinal is a limit ordinal. An ordinal is computable if and only if it is smaller than $\omega _{1}^{CK}$. Since there are only countably many computable relations, there are also only countably many computable ordinals; thus, $\omega _{1}^{CK}$ is countable. The computable ordinals are exactly the ordinals that have an ordinal notation in Kleene's ${\mathcal {O}}$.

See also

• Arithmetical hierarchy • Large countable ordinal • Ordinal analysis • Ordinal notation
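Returning to the definition, $\omega +\omega $ is computable, for example: all that is needed is a computable well-ordering of the natural numbers of that order type, such as "all even numbers, in their usual order, precede all odd numbers". A minimal C++ sketch of such an ordering (an illustrative example with invented names, not drawn from the literature):

#include <algorithm>
#include <iostream>
#include <vector>

// A total computable comparator realizing a well-ordering of the
// natural numbers of order type ω + ω: the evens (in the usual order)
// come first, then the odds (in the usual order).
bool precedes(unsigned a, unsigned b) {
  if (a % 2 != b % 2) return a % 2 == 0; // evens before odds
  return a < b;                          // usual order within each block
}

int main() {
  std::vector<unsigned> v = {5, 2, 9, 0, 4, 7, 1};
  std::sort(v.begin(), v.end(), precedes);
  for (unsigned x : v) std::cout << x << ' '; // 0 2 4 1 5 7 9
}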
Recursive tree

In graph theory, a recursive tree (i.e., unordered tree) is a labeled, rooted tree. The vertices of a size-n recursive tree are labeled by distinct positive integers 1, 2, …, n, and the labels increase along every path away from the root, which is labeled 1. Recursive trees are non-planar, which means that the children of a particular vertex are not ordered; for example, the following two size-3 recursive trees are equivalent: 3/1\2 = 2/1\3. Recursive trees also appear in the literature under the name Increasing Cayley trees.

Properties

The number of size-n recursive trees is given by

$T_{n}=(n-1)!.$

Hence the exponential generating function $T(z)$ of the sequence $T_{n}$ is given by

$T(z)=\sum _{n\geq 1}T_{n}{\frac {z^{n}}{n!}}=\log \left({\frac {1}{1-z}}\right).$

Combinatorially, a recursive tree can be interpreted as a root followed by an unordered sequence of recursive trees. Let F denote the family of recursive trees. Then

$F=\circ +{\frac {1}{1!}}\cdot \circ \times F+{\frac {1}{2!}}\cdot \circ \times F*F+{\frac {1}{3!}}\cdot \circ \times F*F*F+\cdots =\circ \times \exp(F),$

where $\circ $ denotes the node labeled by 1, × the Cartesian product and $*$ the partition product for labeled objects. By translation of the formal description one obtains the differential equation for $T(z)$

$T'(z)=\exp(T(z)),$

with $T(0)=0$; indeed $T'(z)={\frac {1}{1-z}}=\exp(T(z))$.

Bijections

There are bijective correspondences between recursive trees of size n and permutations of size n − 1.

Applications

Recursive trees can be generated using a simple stochastic process; see the sketch below. Such random recursive trees are used as simple models for epidemics.
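A sketch of that stochastic process (an illustrative C++ fragment; the uniform-attachment rule is the standard one, the function name is invented): node k chooses its parent uniformly at random among the nodes 1, …, k − 1, so every size-n recursive tree arises with the same probability 1/(n − 1)!, consistent with $T_{n}=(n-1)!$.

#include <iostream>
#include <random>
#include <vector>

// Generate a random recursive tree on nodes 1..n: each node k >= 2
// attaches to a parent chosen uniformly among the earlier nodes.
std::vector<int> randomRecursiveTree(int n, std::mt19937& rng) {
  std::vector<int> parent(n + 1, 0); // parent[1] == 0 marks the root
  for (int k = 2; k <= n; ++k) {
    std::uniform_int_distribution<int> pick(1, k - 1);
    parent[k] = pick(rng);
  }
  return parent;
}

int main() {
  std::mt19937 rng(42);
  std::vector<int> p = randomRecursiveTree(8, rng);
  for (int k = 2; k <= 8; ++k)
    std::cout << p[k] << " -> " << k << '\n'; // edges parent -> child
}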
Computable set

In computability theory, a set of natural numbers is called computable, recursive, or decidable if there is an algorithm which takes a number as input, terminates after a finite amount of time (possibly depending on the given number) and correctly decides whether the number belongs to the set or not. A set which is not computable is called noncomputable or undecidable. A more general class of sets than the computable ones consists of the computably enumerable (c.e.) sets, also called semidecidable sets. For these sets, it is only required that there is an algorithm that correctly decides when a number is in the set; the algorithm may give no answer (but not the wrong answer) for numbers not in the set.

Formal definition

A subset $S$ of the natural numbers is called computable if there exists a total computable function $f$ such that $f(x)=1$ if $x\in S$ and $f(x)=0$ if $x\notin S$. In other words, the set $S$ is computable if and only if the indicator function $\mathbb {1} _{S}$ is computable.

Examples and non-examples

Examples:

• Every finite or cofinite subset of the natural numbers is computable. This includes these special cases:
  • The empty set is computable.
  • The entire set of natural numbers is computable.
  • Each natural number (as defined in standard set theory) is computable; that is, the set of natural numbers less than a given natural number is computable.
• The subset of prime numbers is computable.
• A recursive language is a computable subset of a formal language.
• The set of Gödel numbers of arithmetic proofs described in Kurt Gödel's paper "On formally undecidable propositions of Principia Mathematica and related systems I" is computable; see Gödel's incompleteness theorems.

Non-examples:

Main article: List of undecidable problems

• The set of Turing machines that halt is not computable.
• The isomorphism class of two finite simplicial complexes is not computable.
• The set of busy beaver champions is not computable.
• Hilbert's tenth problem is not computable.

Properties

If A is a computable set then the complement of A is a computable set. If A and B are computable sets then A ∩ B, A ∪ B and the image of A × B under the Cantor pairing function are computable sets. A is a computable set if and only if A and the complement of A are both computably enumerable (c.e.). The preimage of a computable set under a total computable function is a computable set. The image of a computable set under a total computable bijection is computable. (In general, the image of a computable set under a computable function is c.e., but possibly not computable). A is a computable set if and only if it is at level $\Delta _{1}^{0}$ of the arithmetical hierarchy. A is a computable set if and only if it is either the range of a nondecreasing total computable function, or the empty set. The image of a computable set under a nondecreasing total computable function is computable.

See also

• Recursively enumerable language • Recursive language • Recursion

External links

• Sakharov, Alex. "Recursive Set". MathWorld.
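To illustrate the formal definition above on one of the examples: the set of prime numbers is computable because trial division yields a total computable indicator function. A minimal C++ sketch (the function name is invented):

#include <iostream>

// Total computable indicator function of the set of primes:
// always terminates and returns 1 if n is in the set, 0 otherwise.
int indicatorPrime(unsigned n) {
  if (n < 2) return 0;
  for (unsigned d = 2; d * d <= n; ++d)
    if (n % d == 0) return 0;
  return 1;
}

int main() {
  for (unsigned n = 0; n < 10; ++n)
    std::cout << n << " in S: " << indicatorPrime(n) << '\n';
}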
Turing degree

In computer science and mathematical logic, the Turing degree (named after Alan Turing) or degree of unsolvability of a set of natural numbers measures the level of algorithmic unsolvability of the set.

Overview

The concept of Turing degree is fundamental in computability theory, where sets of natural numbers are often regarded as decision problems. The Turing degree of a set is a measure of how difficult it is to solve the decision problem associated with the set, that is, to determine whether an arbitrary number is in the given set. Two sets are Turing equivalent if they have the same level of unsolvability; each Turing degree is a collection of Turing equivalent sets, so that two sets are in different Turing degrees exactly when they are not Turing equivalent. Furthermore, the Turing degrees are partially ordered, so that if the Turing degree of a set X is less than the Turing degree of a set Y, then any (possibly noncomputable) procedure that correctly decides whether numbers are in Y can be effectively converted to a procedure that correctly decides whether numbers are in X. It is in this sense that the Turing degree of a set corresponds to its level of algorithmic unsolvability. The Turing degrees were introduced by Post (1944) and many fundamental results were established by Kleene & Post (1954). The Turing degrees have been an area of intense research since then. Many proofs in the area make use of a proof technique known as the priority method.

Turing equivalence

Main article: Turing reduction

For the rest of this article, the word set will refer to a set of natural numbers. A set X is said to be Turing reducible to a set Y if there is an oracle Turing machine that decides membership in X when given an oracle for membership in Y. The notation X ≤T Y indicates that X is Turing reducible to Y. Two sets X and Y are defined to be Turing equivalent if X is Turing reducible to Y and Y is Turing reducible to X. The notation X ≡T Y indicates that X and Y are Turing equivalent. The relation ≡T can be seen to be an equivalence relation, which means that for all sets X, Y, and Z:

• X ≡T X
• X ≡T Y implies Y ≡T X
• If X ≡T Y and Y ≡T Z then X ≡T Z.

A Turing degree is an equivalence class of the relation ≡T. The notation [X] denotes the equivalence class containing a set X. The entire collection of Turing degrees is denoted ${\mathcal {D}}$. The Turing degrees have a partial order ≤ defined so that [X] ≤ [Y] if and only if X ≤T Y. There is a unique Turing degree containing all the computable sets, and this degree is less than every other degree. It is denoted 0 (zero) because it is the least element of the poset ${\mathcal {D}}$. (It is common to use boldface notation for Turing degrees, in order to distinguish them from sets. When no confusion can occur, such as with [X], the boldface is not necessary.) For any sets X and Y, X join Y, written X ⊕ Y, is defined to be the union of the sets {2n : n ∈ X} and {2m+1 : m ∈ Y}. The Turing degree of X ⊕ Y is the least upper bound of the degrees of X and Y. Thus ${\mathcal {D}}$ is a join-semilattice. The least upper bound of degrees a and b is denoted a ∪ b. It is known that ${\mathcal {D}}$ is not a lattice, as there are pairs of degrees with no greatest lower bound.
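The even/odd coding behind the join can be made concrete with a small sketch (illustrative only, with invented names; the sets arising in degree theory are generally not computable, so the std::function objects here merely stand in for decision procedures relative to an oracle):

#include <functional>
#include <iostream>

// n = 2k encodes "k ∈ X"; n = 2k+1 encodes "k ∈ Y". Deciding X ⊕ Y
// thus reduces to deciding X and Y, and each of X, Y is trivially
// recovered from X ⊕ Y, so its degree is an upper bound of both.
using SetOracle = std::function<bool(unsigned long)>;

SetOracle join(SetOracle X, SetOracle Y) {
  return [X, Y](unsigned long n) {
    return n % 2 == 0 ? X(n / 2) : Y(n / 2);
  };
}

int main() {
  SetOracle X = [](unsigned long k) { return k % 3 == 0; }; // toy example
  SetOracle Y = [](unsigned long k) { return k > 5; };      // toy example
  SetOracle XY = join(X, Y);
  for (unsigned long n = 0; n < 8; ++n)
    std::cout << n << ": " << XY(n) << '\n';
}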
For any set X the notation X′ denotes the set of indices of oracle machines that halt (when given their index as input) when using X as an oracle. The set X′ is called the Turing jump of X. The Turing jump of a degree [X] is defined to be the degree [X′]; this is a valid definition because X′ ≡T Y′ whenever X ≡T Y. A key example is 0′, the degree of the halting problem.

Basic properties of the Turing degrees

• Every Turing degree is countably infinite, that is, it contains exactly $\aleph _{0}$ sets.
• There are $2^{\aleph _{0}}$ distinct Turing degrees.
• For each degree a the strict inequality a < a′ holds.
• For each degree a, the set of degrees below a is countable. The set of degrees greater than a has size $2^{\aleph _{0}}$.

Structure of the Turing degrees

A great deal of research has been conducted into the structure of the Turing degrees. The following survey lists only some of the many known results. One general conclusion that can be drawn from the research is that the structure of the Turing degrees is extremely complicated.

Order properties

• There are minimal degrees. A degree a is minimal if a is nonzero and there is no degree between 0 and a. Thus the order relation on the degrees is not a dense order.
• The Turing degrees are not linearly ordered by ≤T.[1]
• In fact, for every nonzero degree a there is a degree b incomparable with a.
• There is a set of $2^{\aleph _{0}}$ pairwise incomparable Turing degrees.
• There are pairs of degrees with no greatest lower bound. Thus ${\mathcal {D}}$ is not a lattice.
• Every countable partially ordered set can be embedded in the Turing degrees.
• An infinite strictly increasing sequence $a_{1},a_{2},\ldots $ of Turing degrees cannot have a least upper bound, but it always has an exact pair c, d such that $\forall e\,(e<c\wedge e<d\Leftrightarrow \exists i\;e\leq a_{i})$, and thus it has (non-unique) minimal upper bounds.
• Assuming the axiom of constructibility, it can be shown there is a maximal chain of degrees of order type $\omega _{1}$.[2]

Properties involving the jump

• For every degree a there is a degree strictly between a and a′. In fact, there is a countable family of pairwise incomparable degrees between a and a′.
• Jump inversion: a degree a is of the form b′ if and only if 0′ ≤ a.
• For any degree a there is a degree b such that a < b and b′ = a′; such a degree b is called low relative to a.
• There is an infinite sequence $a_{i}$ of degrees such that $a_{i+1}'\leq a_{i}$ for each i.
• Post's theorem establishes a close correspondence between the arithmetical hierarchy and finitely iterated Turing jumps of the empty set.

Logical properties

• Simpson (1977b) showed that the first-order theory of ${\mathcal {D}}$ in the language ⟨ ≤, = ⟩ or ⟨ ≤, ′, = ⟩ is many-one equivalent to the theory of true second-order arithmetic. This indicates that the structure of ${\mathcal {D}}$ is extremely complicated.
• Shore & Slaman (1999) showed that the jump operator is definable in the first-order structure of ${\mathcal {D}}$ with the language ⟨ ≤, = ⟩.

Recursively enumerable Turing degrees

A degree is called recursively enumerable (r.e.) or computably enumerable (c.e.) if it contains a recursively enumerable set. Every r.e. degree is below 0′, but not every degree below 0′ is r.e. However, a set $A$ is many-one reducible to 0′ iff $A$ is r.e.[3]

• Sacks (1964): The r.e. degrees are dense; between any two r.e. degrees there is a third r.e. degree.
• Lachlan (1966a) and Yates (1966): There are two r.e. degrees with no greatest lower bound in the r.e. degrees.
• Lachlan (1966a) and Yates (1966): There is a pair of nonzero r.e. degrees whose greatest lower bound is 0.
• Lachlan (1966b): There is no pair of r.e. degrees whose greatest lower bound is 0 and whose least upper bound is 0′.
This result is informally called the nondiamond theorem.

• Thomason (1971): Every finite distributive lattice can be embedded into the r.e. degrees. In fact, the countable atomless Boolean algebra can be embedded in a manner that preserves suprema and infima.
• Lachlan & Soare (1980): Not all finite lattices can be embedded in the r.e. degrees (via an embedding that preserves suprema and infima).
• L. A. Harrington and T. A. Slaman (see Nies, Shore & Slaman (1998)): The first-order theory of the r.e. degrees in the language ⟨ 0, ≤, = ⟩ is many-one equivalent to the theory of true first-order arithmetic.

Additionally, Shoenfield's limit lemma states that a set A satisfies $[A]\leq _{T}\emptyset '$ iff there is a "recursive approximation" to its characteristic function: a function g such that for sufficiently large s, $g(s)=\chi _{A}(s)$.[4] A set A is called n-r.e. if there is a family of functions $(A_{s})_{s\in \mathbb {N} }$ such that:[4]

• $A_{s}$ is a recursive approximation of A: for some t, for any s ≥ t we have $A_{s}(x)=A(x)$, conflating A with its characteristic function. (Removing this condition yields a definition of A being "weakly n-r.e.")
• $A_{s}$ is an "n-trial predicate": for all x, $A_{0}(x)=0$ and the cardinality of $\{s\mid A_{s}(x)\neq A_{s+1}(x)\}$ is $\leq n$.

Properties of n-r.e. degrees:[4]

• The class of sets of n-r.e. degree is a strict subclass of the class of sets of (n+1)-r.e. degree.
• For all n > 1 there are two (n+1)-r.e. degrees a, b with $\mathbf {a} \leq _{T}\mathbf {b} $, such that the segment $\{\mathbf {c} \mid \mathbf {a} \leq _{T}\mathbf {c} \leq _{T}\mathbf {b} \}$ contains no n-r.e. degrees.
• $A$ and ${\overline {A}}$ are (n+1)-r.e. iff both sets are weakly-n-r.e.

Post's problem and the priority method

"Post's problem" redirects here. For the other "Post's problem", see Post's correspondence problem.

Emil Post studied the r.e. Turing degrees and asked whether there is any r.e. degree strictly between 0 and 0′. The problem of constructing such a degree (or showing that none exist) became known as Post's problem. This problem was solved independently by Friedberg and Muchnik in the 1950s, who showed that these intermediate r.e. degrees do exist (the Friedberg–Muchnik theorem). Their proofs each developed the same new method for constructing r.e. degrees, which came to be known as the priority method. The priority method is now the main technique for establishing results about r.e. sets. The idea of the priority method for constructing an r.e. set X is to list a countable sequence of requirements that X must satisfy. For example, to construct an r.e. set X between 0 and 0′ it is enough to satisfy the requirements $A_{e}$ and $B_{e}$ for each natural number e, where $A_{e}$ requires that the oracle machine with index e does not compute 0′ from X and $B_{e}$ requires that the Turing machine with index e (and no oracle) does not compute X. These requirements are put into a priority ordering, which is an explicit bijection between the requirements and the natural numbers. The proof proceeds inductively with one stage for each natural number; these stages can be thought of as steps of time during which the set X is enumerated. At each stage, numbers may be put into X or forever (if not injured) prevented from entering X in an attempt to satisfy requirements (that is, force them to hold once all of X has been enumerated).
Sometimes, a number can be enumerated into X to satisfy one requirement but doing this would cause a previously satisfied requirement to become unsatisfied (that is, to be injured). The priority order on requirements is used to determine which requirement to satisfy in this case. The informal idea is that if a requirement is injured then it will eventually stop being injured after all higher priority requirements have stopped being injured, although not every priority argument has this property. An argument must be made that the overall set X is r.e. and satisfies all the requirements. Priority arguments can be used to prove many facts about r.e. sets; the requirements used and the manner in which they are satisfied must be carefully chosen to produce the required result. For example, a simple (and hence noncomputable r.e.) low X (low means X′ = 0′) can be constructed in infinitely many stages as follows. At the start of stage n, let $T_{n}$ be the output (binary) tape, identified with the set of cell indices where we placed 1 so far (so $X=\bigcup _{n}T_{n}$; $T_{0}=\emptyset $); and let $P_{n}(m)$ be the priority for not outputting 1 at location m; $P_{0}(m)=\infty $. At stage n, if possible (otherwise do nothing in the stage), pick the least i < n such that $\forall m\,P_{n}(m)\neq i$ and Turing machine i halts in < n steps on some input $S\supseteq T_{n}$ with $\forall m\in S\setminus T_{n}\;P_{n}(m)\geq i$. Choose any such (finite) S, set $T_{n+1}=S$, and for every cell m visited by machine i on S, set $P_{n+1}(m)=\min(i,P_{n}(m))$, set all priorities > i to ∞, and then set one priority-∞ cell (any will do) not in S to priority i. Essentially, we make machine i halt if we can do so without upsetting priorities < i, and then set priorities to prevent machines > i from disrupting the halt; all priorities are eventually constant. To see that X is low, note that machine i halts on X iff it halts in < n steps on some $T_{n}$ such that the machines < i that halt on X do so in < n − i steps (by recursion, this is uniformly computable from 0′). X is noncomputable since otherwise a Turing machine could halt on Y iff $Y\setminus X$ is nonempty, contradicting the construction since X excludes some priority-i cells for arbitrarily large i; and X is simple because for each i the number of priority-i cells is finite.

See also

• Martin measure

References

Monographs (undergraduate level)

• Cooper, S.B. (2004). Computability theory. Boca Raton, FL: Chapman & Hall/CRC. p. 424. ISBN 1-58488-237-9.
• Cutland, Nigel J. (1980). Computability, an introduction to recursive function theory. Cambridge-New York: Cambridge University Press. p. 251. ISBN 0-521-22384-9; ISBN 0-521-29465-7.

Monographs and survey articles (graduate level)

• Ambos-Spies, Klaus; Fejer, Peter (20 March 2006). "Degrees of Unsolvability" (PDF). Retrieved 20 August 2023. Unpublished.
• Epstein, R.L.; Haas, R.; Kramer, L.R. (1981). Leman, M.; Schmerl, J.; Soare, R. (eds.). "Hierarchies of sets and degrees below 0′". Lecture Notes in Mathematics. Springer-Verlag. 859.
• Lerman, M. (1983). Degrees of unsolvability. Perspectives in Mathematical Logic. Berlin: Springer-Verlag. ISBN 3-540-12155-2.
• Odifreddi, Piergiorgio (1989). Classical Recursion Theory. Studies in Logic and the Foundations of Mathematics. Vol. 125. Amsterdam: North-Holland. ISBN 978-0-444-87295-1. MR 0982269.
• Odifreddi, Piergiorgio (1999). Classical recursion theory. Vol. II. Studies in Logic and the Foundations of Mathematics. Vol. 143. Amsterdam: North-Holland. ISBN 978-0-444-50205-6. MR 1718169.
• Rogers, Hartley (1967). Theory of Recursive Functions and Effective Computability. Cambridge, Massachusetts: MIT Press. ISBN 9780262680523.
OCLC 933975989. Retrieved 6 May 2020. • Sacks, G.E. (1966). Degrees of Unsolvability. Annals of Mathematics Studies. Princeton University Press. ISBN 978-0-6910-7941-7. JSTOR j.ctt1b9x0r8. • Simpson, Stephen G. (1977a). "Degrees of Unsolvability: A Survey of Results". Annals of Mathematics Studies. Elsevier. 90: 631–652. doi:10.1016/S0049-237X(08)71117-0. • Shoenfield, Joseph R. (1971). Degrees of Unsolvability. North-Holland/Elsevier. ISBN 978-0-7204-2061-6. • Shore, R. (1993). "The theories of the T, tt, and wtt r.e. degrees: undecidability and beyond". In Univ. Nac. del Sur, Bahía Blanca (ed.). Proceedings of the IX Latin American Symposium on Mathematical Logic, Part 1 (Bahía Blanca, 1992). Notas Lógica Mat. Vol. 38. pp. 61–70. • Soare, Robert Irving (1987). Recursively Enumerable Sets and Degrees: A Study of Computable Functions and Computably Generated Sets. Perspectives in Mathematical Logic. Berlin: Springer-Verlag. ISBN 3-540-15299-7. • Soare, Robert Irving (1978). "Recursively enumerable sets and degrees". Bull. Amer. Math. Soc. 84 (6): 1149–1181. MR 0508451. Research papers • Chong, C.T.; Yu, Liang (December 2007). "Maximal Chains in the Turing Degrees". Journal of Symbolic Logic. 72 (4): 1219–1227. JSTOR 27588601. • DeAntonio, Jasper (24 September 2010). "The Turing degrees and their lack of linear order" (PDF). Retrieved 20 August 2023. • Kleene, Stephen Cole; Post, Emil L. (1954), "The upper semi-lattice of degrees of recursive unsolvability", Annals of Mathematics, Second Series, 59 (3): 379–407, doi:10.2307/1969708, ISSN 0003-486X, JSTOR 1969708, MR 0061078 • Lachlan, Alistair H. (1966a), "Lower Bounds for Pairs of Recursively Enumerable Degrees", Proceedings of the London Mathematical Society, 3 (1): 537–569, CiteSeerX 10.1.1.106.7893, doi:10.1112/plms/s3-16.1.537. • Lachlan, Alistair H. (1966b), "The impossibility of finding relative complements for recursively enumerable degrees", J. Symb. Log., 31 (3): 434–454, doi:10.2307/2270459, JSTOR 2270459, S2CID 30992462. • Lachlan, Alistair H.; Soare, Robert Irving (1980), "Not every finite lattice is embeddable in the recursively enumerable degrees", Advances in Mathematics, 37: 78–82, doi:10.1016/0001-8708(80)90027-4 • Nies, André; Shore, Richard A.; Slaman, Theodore A. (1998), "Interpretability and definability in the recursively enumerable degrees", Proceedings of the London Mathematical Society, 77 (2): 241–291, CiteSeerX 10.1.1.29.9588, doi:10.1112/S002461159800046X, ISSN 0024-6115, MR 1635141, S2CID 16488410 • Post, Emil L. (1944), "Recursively enumerable sets of positive integers and their decision problems", Bulletin of the American Mathematical Society, 50 (5): 284–316, doi:10.1090/S0002-9904-1944-08111-1, ISSN 0002-9904, MR 0010514 • Sacks, G.E. (1964), "The recursively enumerable degrees are dense", Annals of Mathematics, Second Series, 80 (2): 300–312, doi:10.2307/1970393, JSTOR 1970393 • Shore, Richard A.; Slaman, Theodore A. (1999), "Defining the Turing jump", Mathematical Research Letters, 6 (6): 711–722, doi:10.4310/mrl.1999.v6.n6.a10, ISSN 1073-2780, MR 1739227 • Simpson, Stephen G. (1977b). "First-order theory of the degrees of recursive unsolvability". Annals of Mathematics. Second Series. 105 (1): 121–139. doi:10.2307/1971028. ISSN 0003-486X. JSTOR 1971028. MR 0432435. • Thomason, S.K. (1971), "Sublattices of the recursively enumerable degrees", Z. Math. Logik Grundlag. Math., 17: 273–280, doi:10.1002/malq.19710170131 • Yates, C.E.M.
(1966), "A minimal pair of recursively enumerable degrees", Journal of Symbolic Logic, 31 (2): 159–168, doi:10.2307/2269807, JSTOR 2269807, S2CID 38778059 Notes 1. DeAntonio 2010, p. 9. 2. Chong & Yu 2007, p. 1224. 3. Odifreddi 1989, p. 252, 258. 4. Epstein, Haas & Kramer 1981. Authority control International • FAST National • Israel • United States Alan Turing • Turing machine • Turing test • Turing completeness • Turing's proof • Turing (microarchitecture) • Turing degree
Red–black tree

In computer science, a red–black tree is a specialised binary search tree data structure noted for fast storage and retrieval of ordered information, and a guarantee that operations will complete within a known time. Compared to other self-balancing binary search trees, the nodes in a red–black tree hold an extra bit called "color" representing "red" and "black", which is used when re-organising the tree to ensure that it is always approximately balanced.[3]

Red–black tree
Type: Tree
Invented: 1972
Invented by: Rudolf Bayer
Space complexity: $O(n)$
Time complexity (amortized / worst case): Search $O(\log n)$[1] / $O(\log n)$[1]; Insert $O(1)$[2] / $O(\log n)$[1]; Delete $O(1)$[2] / $O(\log n)$[1]

When the tree is modified, the new tree is rearranged and "repainted" to restore the coloring properties that constrain how unbalanced the tree can become in the worst case. The properties are designed such that this rearranging and recoloring can be performed efficiently. The (re-)balancing is not perfect, but guarantees searching in Big O time of $O(\log n)$, where $n$ is the number of entries (or keys) in the tree. The insert and delete operations, along with the tree rearrangement and recoloring, are also performed in $O(\log n)$ time.[4] Tracking the color of each node requires only one bit of information per node because there are only two colors. The tree does not contain any other data specific to it being a red–black tree, so its memory footprint is almost identical to that of a classic (uncolored) binary search tree. In some cases, the added bit of information can be stored at no added memory cost.

History

In 1972, Rudolf Bayer[5] invented a data structure that was a special order-4 case of a B-tree. These trees maintained all paths from root to leaf with the same number of nodes, creating perfectly balanced trees. However, they were not binary search trees. Bayer called them a "symmetric binary B-tree" in his paper and later they became popular as 2–3–4 trees or just 2–4 trees.[6] In a 1978 paper, "A Dichromatic Framework for Balanced Trees",[7] Leonidas J. Guibas and Robert Sedgewick derived the red–black tree from the symmetric binary B-tree.[8] The color "red" was chosen because it was the best-looking color produced by the color laser printer available to the authors while working at Xerox PARC.[9] Another response from Guibas states that it was because of the red and black pens available to them to draw the trees.[10] In 1993, Arne Andersson introduced the idea of a right-leaning tree to simplify insert and delete operations.[11] In 1999, Chris Okasaki showed how to make the insert operation purely functional. Its balance function needed to take care of only 4 unbalanced cases and one default balanced case.[12] The original algorithm used 8 unbalanced cases, but Cormen et al. (2001) reduced that to 6 unbalanced cases.[3] Sedgewick showed that the insert operation can be implemented in just 46 lines of Java code.[13][14] In 2008, Sedgewick proposed the left-leaning red–black tree, leveraging Andersson's idea that simplified the insert and delete operations. Sedgewick originally allowed nodes whose two children are red, making his trees more like 2–3–4 trees, but later the restriction that both children cannot be red was added, making the new trees more like 2–3 trees. Sedgewick implemented the insert algorithm in just 33 lines, significantly shortening his original 46 lines of code.[15][16]

Terminology
Figure 1: example of a red–black tree with explicit NIL leaves. Figure 2: example of a red–black tree with implicit left and right docking points.

A red–black tree is a special type of binary search tree, used in computer science to organize pieces of comparable data, such as text fragments or numbers (as e.g. the numbers in figures 1 and 2). The nodes carrying keys and/or data are frequently called "internal nodes", but to make this very specific they are also called non-NIL nodes in this article. The leaf nodes of red–black trees (NIL in figure 1) do not contain keys or data. These "leaves" need not be explicit individuals in computer memory: a NULL pointer can, as in all binary tree data structures, encode the fact that there is no child node at this position in the (parent) node. Nevertheless, by their position in the tree, these objects stand in relations to other nodes that are relevant to the RB structure: such an object may have a parent, a sibling (i.e., the other child of the parent), an uncle, even a nephew node, and it may be the child of another node, but never the parent of one. It is not really necessary to attribute a "color" to these end-of-path objects, because the condition "is NIL or BLACK" is implied by the condition "is NIL" (see also this remark). Figure 2 shows the conceptually same red–black tree without these NIL leaves. To arrive at the same notion of a path, one must notice that, e.g., 3 paths run through the node 1, namely a path through its left docking point plus 2 added paths through its right subtree, namely the paths through 6's left and right docking points. This way, these ends of the paths are also docking points for new nodes to be inserted, fully equivalent to the NIL leaves of figure 1. Instead, to save a marginal amount of execution time, these (possibly many) NIL leaves may be implemented as pointers to one unique (and black) sentinel node (instead of pointers of value NULL). As a conclusion, the fact that a child does not exist (is not a true node, does not contain data) can in all occurrences be specified by the very same NULL pointer or as the very same pointer to a sentinel node. Throughout this article, either choice is called NIL node and has the constant value NIL.

The black depth of a node is defined as the number of black nodes from the root to that node (i.e. the number of black ancestors). The black height of a red–black tree is the number of black nodes in any path from the root to the leaves, which, by requirement 4, is constant (alternatively, it could be defined as the black depth of any leaf node).[17]: 154–165  The black height of a node is the black height of the subtree rooted by it. In this article, the black height of a NIL node shall be set to 0, because its subtree is empty as suggested by figure 2, and its tree height is also 0.

Properties

In addition to the requirements imposed on a binary search tree the following must be satisfied by a red–black tree:[18]

1. Every node is either red or black.
2. All NIL nodes (figure 1) are considered black.
3. A red node does not have a red child.
4. Every path from a given node to any of its descendant NIL nodes goes through the same number of black nodes.
5. (Conclusion) If a node N has exactly one child, it must be a red child, because if it were black, its NIL descendants would sit at a different black depth than N's NIL child, violating requirement 4.

Some authors, e.g. Cormen et al.,[18] claim "the root is black" as a fifth requirement; but not Mehlhorn & Sanders[17] or Sedgewick & Wayne.[16]: 432–447  Since the root can always be changed from red to black, this rule has little effect on analysis.
This article also omits it, because it slightly disturbs the recursive algorithms and proofs. As an example, every perfect binary tree that consists only of black nodes is a red–black tree. The read-only operations, such as search or tree traversal, do not affect any of the requirements. In contrast, the modifying operations insert and delete easily maintain requirements 1 and 2, but with respect to the other requirements some extra effort must be made, to avoid introducing a violation of requirement 3, called a red-violation, or of requirement 4, called a black-violation.

The requirements enforce a critical property of red–black trees: the path from the root to the farthest leaf is no more than twice as long as the path from the root to the nearest leaf. The result is that the tree is height-balanced. Since operations such as inserting, deleting, and finding values require worst-case time proportional to the height $h$ of the tree, this upper bound on the height allows red–black trees to be efficient in the worst case, namely logarithmic in the number $n$ of entries, i.e. $h\in O(\log n)$, which is not the case for ordinary binary search trees. For a mathematical proof see section Proof of bounds. Red–black trees, like all binary search trees, allow quite efficient sequential access (e.g. in-order traversal, that is: in the order Left–Root–Right) of their elements. But they also support asymptotically optimal direct access via a traversal from root to leaf, resulting in $O(\log n)$ search time.

Analogy to B-trees of order 4

A red–black tree is similar in structure to a B-tree of order[19] 4, where each node can contain between 1 and 3 values and (accordingly) between 2 and 4 child pointers. In such a B-tree, each node will contain only one value matching the value in a black node of the red–black tree, with an optional value before and/or after it in the same node, both matching an equivalent red node of the red–black tree. One way to see this equivalence is to "move up" the red nodes in a graphical representation of the red–black tree, so that they align horizontally with their parent black node, creating together a horizontal cluster. In the B-tree, or in the modified graphical representation of the red–black tree, all leaf nodes are at the same depth. The red–black tree is then structurally equivalent to a B-tree of order 4, with a minimum fill factor of 33% of values per cluster and a maximum capacity of 3 values. This B-tree type is still more general than a red–black tree though, as it allows ambiguity in a red–black tree conversion: multiple red–black trees can be produced from an equivalent B-tree of order 4. If a B-tree cluster contains only 1 value, it is the minimum, black, and has two child pointers. If a cluster contains 3 values, then the central value will be black and each value stored on its sides will be red. If the cluster contains two values, however, either one can become the black node in the red–black tree (and the other one will be red). So the order-4 B-tree does not maintain which of the values contained in each cluster is the root black node for the whole cluster and the parent of the other values in the same cluster. Despite this, the operations on red–black trees are more economical in time because you don't have to maintain the vector of values.[20] It may be costly if values are stored directly in each node rather than being stored by reference.
B-tree nodes, however, are more economical in space because you don't need to store the color attribute for each node. Instead, you have to know which slot in the cluster vector is used. If values are stored by reference, e.g. objects, null references can be used and so the cluster can be represented by a vector containing 3 slots for value pointers plus 4 slots for child references in the tree. In that case, the B-tree can be more compact in memory, improving data locality. The same analogy can be made with B-trees of larger orders that can be structurally equivalent to a colored binary tree: you just need more colors. Suppose that you add blue; then the blue–red–black tree, defined like red–black trees but with the additional constraints that no two successive nodes in the hierarchy may be blue and that all blue nodes must be children of a red node, becomes equivalent to a B-tree whose clusters will have at most 7 values in the following colors: blue, red, blue, black, blue, red, blue (for each cluster, there will be at most 1 black node, 2 red nodes, and 4 blue nodes). For moderate volumes of values, insertions and deletions in a colored binary tree are faster compared to B-trees because colored trees don't attempt to maximise the fill factor of each horizontal cluster of nodes (only the minimum fill factor is guaranteed in colored binary trees, limiting the number of splits or junctions of clusters). B-trees will be faster for performing rotations (because rotations will frequently occur within the same cluster rather than with multiple separate nodes in a colored binary tree). For storing large volumes, however, B-trees will be much faster as they will be more compact by grouping several children in the same cluster where they can be accessed locally. All optimizations possible in B-trees to increase the average fill factors of clusters are possible in the equivalent multicolored binary tree. Notably, maximizing the average fill factor in a structurally equivalent B-tree is the same as reducing the total height of the multicolored tree, by increasing the number of non-black nodes. The worst case occurs when all nodes in a colored binary tree are black; the best case occurs when only a third of them are black (and the other two thirds are red nodes).

Applications and related data structures

Red–black trees offer worst-case guarantees for insertion time, deletion time, and search time. Not only does this make them valuable in time-sensitive applications such as real-time applications, but it makes them valuable building blocks in other data structures that provide worst-case guarantees; for example, many data structures used in computational geometry can be based on red–black trees, and the Completely Fair Scheduler used in current Linux kernels and the epoll system call implementation[21] use red–black trees. The AVL tree is another structure supporting $O(\log n)$ search, insertion, and removal. AVL trees can be colored red–black, and thus are a subset of RB trees. Their worst-case height is 0.720 times the worst-case height of RB trees, so AVL trees are more rigidly balanced. The performance measurements of Ben Pfaff with realistic test cases in 79 runs find AVL to RB ratios between 0.677 and 1.077, median at 0.947, and geometric mean 0.910.[22] WAVL trees have a performance in between those two.
Red–black trees are also particularly valuable in functional programming, where they are one of the most common persistent data structures, used to construct associative arrays and sets that can retain previous versions after mutations. The persistent version of red–black trees requires $O(\log n)$ space for each insertion or deletion, in addition to the $O(\log n)$ time. For every 2–4 tree, there are corresponding red–black trees with data elements in the same order. The insertion and deletion operations on 2–4 trees are also equivalent to color-flipping and rotations in red–black trees. This makes 2–4 trees an important tool for understanding the logic behind red–black trees, and this is why many introductory algorithm texts introduce 2–4 trees just before red–black trees, even though 2–4 trees are not often used in practice. In 2008, Sedgewick introduced a simpler version of the red–black tree called the left-leaning red–black tree[23] by eliminating a previously unspecified degree of freedom in the implementation. The LLRB maintains an additional invariant that all red links must lean left except during inserts and deletes. Red–black trees can be made isometric to either 2–3 trees[24] or 2–4 trees[23] for any sequence of operations. The 2–4 tree isometry was described in 1978 by Sedgewick.[7] With 2–4 trees, the isometry is resolved by a "color flip", corresponding to a split, in which the red color of two child nodes leaves the children and moves to the parent node. The original description of the tango tree, a type of tree optimised for fast searches, specifically uses red–black trees as part of its data structure.[25] As of Java 8, the HashMap has been modified such that instead of using a LinkedList to store different elements with colliding hashcodes, a red–black tree is used. This results in the improvement of time complexity of searching such an element from $O(m)$ to $O(\log m)$, where $m$ is the number of elements with colliding hashcodes.[26]

Operations

The read-only operations, such as search or tree traversal, on a red–black tree require no modification from those used for binary search trees, because every red–black tree is a special case of a simple binary search tree. However, the immediate result of an insertion or removal may violate the properties of a red–black tree, the restoration of which is called rebalancing, so that red–black trees become self-balancing. Rebalancing requires in the worst case a small number, $O(\log n)$ in Big O notation, where $n$ is the number of objects in the tree, and on average or amortized $O(1)$, a constant number,[27]: 310 [17]: 158  of color changes (which are very quick in practice); and no more than three tree rotations[28] (two for insertion). If the example implementation below is not suitable, other implementations with explanations may be found in Ben Pfaff's[29] annotated C library GNU libavl (v2.0.3 as of June 2019).
The details of the insert and removal operations will be demonstrated with example C++ code, which uses the type definitions, macros below, and the helper function for rotations:

// Basic type definitions:

enum color_t { BLACK, RED };

struct RBnode {     // node of red–black tree
  RBnode* parent;   // == NIL if root of the tree
  RBnode* child[2]; // == NIL if child is empty
    // The index is:
    //   LEFT  := 0, if (key < parent->key)
    //   RIGHT := 1, if (key > parent->key)
  enum color_t color;
  int key;
};

#define NIL   NULL // null pointer or pointer to sentinel node
#define LEFT  0
#define RIGHT 1
#define left  child[LEFT]
#define right child[RIGHT]

struct RBtree { // red–black tree
  RBnode* root; // == NIL if tree is empty
};

// Get the child direction (∈ { LEFT, RIGHT })
//   of the non-root non-NIL RBnode* N:
#define childDir(N) ( N == (N->parent)->right ? RIGHT : LEFT )

RBnode* RotateDirRoot(
    RBtree* T,   // red–black tree
    RBnode* P,   // root of subtree (may be the root of T)
    int dir) {   // dir ∈ { LEFT, RIGHT }
  RBnode* G = P->parent;
  RBnode* S = P->child[1-dir];
  RBnode* C;
  assert(S != NIL); // pointer to true node required
  C = S->child[dir];
  P->child[1-dir] = C;
  if (C != NIL) C->parent = P;
  S->child[dir] = P;
  P->parent = S;
  S->parent = G;
  if (G != NULL)
    G->child[ P == G->right ? RIGHT : LEFT ] = S;
  else
    T->root = S;
  return S; // new root of subtree
}

#define RotateDir(N,dir) RotateDirRoot(T,N,dir)
#define RotateLeft(N)    RotateDirRoot(T,N,LEFT)
#define RotateRight(N)   RotateDirRoot(T,N,RIGHT)

Notes to the sample code and diagrams of insertion and removal

The proposal breaks down both insertion and removal (leaving aside some very simple cases) into six constellations of nodes, edges and colors, which are called cases. The proposal contains, for both insertion and removal, exactly one case that advances one black level closer to the root and loops; the other five cases rebalance the tree on their own. The more complicated cases are pictured in a diagram.

• In the diagrams, one symbol stands for a red node, one for a (non-NIL) black node (of black height ≥ 1), and one for a non-NIL node of either red or black color, the color being the same throughout the same diagram. NIL nodes are not represented in the diagrams.
• The variable N denotes the current node, which is labeled N (in red or in black) in the diagrams.
• A diagram contains three columns and two to four actions. The left column shows the first iteration, the right column the higher iterations, the middle column shows the segmentation of a case into its different actions.[30]
  1. The action "entry" shows the constellation of nodes with their colors which defines a case and mostly violates some of the requirements. A blue border rings the current node N and the other nodes are labeled according to their relation to N.
  2. If a rotation is considered useful, this is pictured in the next action, which is labeled "rotation".
  3. If some recoloring is considered useful, this is pictured in the next action, which is labeled "color".[31]
  4. If there is still some need to repair, the cases make use of code of other cases, after a reassignment of the current node N, which then again carries a blue ring and relative to which other nodes may have to be reassigned also. This action is labeled "reassign". For both insert and delete there is (exactly) one case which iterates one black level closer to the root; then the reassigned constellation satisfies the respective loop invariant.
• A possibly numbered triangle with a black circle atop represents a red–black subtree (connected to its parent according to requirement 3) with a black height equal to the iteration level minus one, i.e. zero in the first iteration. Its root may be red or black. A possibly numbered triangle without the circle represents a red–black subtree with a black height one less, i.e. its parent has black height zero in the second iteration.

Remark

For simplicity, the sample code uses the disjunction:

U == NIL || U->color == BLACK // considered black

and the conjunction:

U != NIL && U->color == RED   // not considered black

Thereby, it must be kept in mind that neither statement is evaluated in full if U == NIL. Then in both cases U->color is not touched (see Short-circuit evaluation). (The comment considered black is in accordance with requirement 2.) The related if-statements have to occur far less frequently if the proposal[30] is realised.

Insertion

Insertion begins by placing the new (non-NIL) node, say N, at the position in the binary search tree of a NIL node whose in-order predecessor’s key compares less than the new node’s key, which in turn compares less than the key of its in-order successor. (Frequently, this positioning is the result of a search within the tree immediately preceding the insert operation and consists of a node P together with a direction dir with P->child[dir] == NIL.) The newly inserted node is temporarily colored red so that all paths contain the same number of black nodes as before. But if its parent, say P, is also red then this action introduces a red-violation.

void RBinsert1(
    RBtree* T,        // -> red–black tree
    struct RBnode* N, // -> node to be inserted
    struct RBnode* P, // -> parent node of N ( may be NULL )
    byte dir)         // side ( LEFT or RIGHT ) of P where to insert N
{
  struct RBnode* G; // -> parent node of P
  struct RBnode* U; // -> uncle of N

  N->color = RED;
  N->left  = NIL;
  N->right = NIL;
  N->parent = P;
  if (P == NULL) {   // There is no parent
    T->root = N;     // N is the new root of the tree T.
    return;          // insertion complete
  }
  P->child[dir] = N; // insert N as dir-child of P
  // start of the (do while)-loop:
  do {

The rebalancing loop of the insert operation has the following invariant:
• The current node N is (red) at the beginning of each iteration.
• Requirement 3 is satisfied for all pairs node←parent with the possible exception N←P when P is also red (a red-violation at N).
• All other properties (including requirement 4) are satisfied throughout the tree.

Notes to the insert diagrams

before                   case  rotation  assignment  after   next  Δh
P = — (N is root)        I3    —         —           —       →     —
P black                  I1    —         —           —       →     —
P red, G = —             I4    —         —           —       →     —
P red, U red             I2    —         N := G      ?       ?     2
P red, U black, x = i    I5    P↶N       N := P      x = o   I6    0
P red, U black, x = o    I6    P↷G       —           —       →     0

Insertion: This synopsis shows in its before columns that all possible cases[32] of constellations are covered.
• In the diagrams, P is used for N’s parent, G for the grandparent, and U denotes N’s uncle.
• The diagrams show the parent node P as the left child of its parent G even though it is possible for P to be on either side. The sample code covers both possibilities by means of the side variable dir.
• N is the insertion node, but as the operation proceeds other nodes may become current as well (see case I2).
• The diagrams show the cases where P is red also, the red-violation.
• The column x indicates the change in child direction, i.e. o (for "outer") means that P and N are both left or both right children, whereas i (for "inner") means that the child direction changes from P’s to N’s.
• The column group before defines the case, whose name is given in the column case. Thereby possible values in cells left empty are ignored. So in case I2 the sample code covers both possibilities of child directions of N, although the corresponding diagram shows only one.
• The rows in the synopsis are ordered such that the coverage of all possible RB cases is easily comprehensible.
• The column rotation indicates whether a rotation contributes to the rebalancing.
• The column assignment shows an assignment of N before entering a subsequent step. This possibly induces a reassignment of the other nodes P, G, U also.
• If something has been changed by the case, this is shown in the column group after.
• An arrow → in column next signifies that the rebalancing is complete with this step. If the column after determines exactly one case, this case is given as the subsequent one, otherwise there are question marks.
• The loop is contained in the sections "Insert case I1" and "Insert case I2", where in case I2 the problem of rebalancing is escalated $\Delta h=2$ tree levels or 1 black level higher in the tree, in that the grandfather G becomes the new current node N. So it takes maximally ${\tfrac {h}{2}}$ steps of iteration to repair the tree (where $h$ is the height of the tree). Because the probability of escalation decreases exponentially with each step, the total rebalancing cost is constant on average, indeed amortized constant.
• From the body of the loop, case I1 exits by itself and there are exiting branches to cases I4, I6, I5 + I6, and I3.
• Rotations occur in cases I6 and I5 + I6 – outside the loop. Therefore, at most two rotations occur in total.

Insert case I1

The current node’s parent P is black, so requirement 3 holds. Requirement 4 holds also according to the loop invariant.

    if (P->color == BLACK) {
        // Case_I1 (P black):
        return; // insertion complete
    }
    // From now on P is red.
    if ((G = P->parent) == NULL)
      goto Case_I4; // P red and root
    // else: P red and G != NULL.
    dir = childDir(P); // the side of parent G on which node P is located
    U = G->child[1-dir]; // uncle
    if (U == NIL || U->color == BLACK) // considered black
      goto Case_I56; // P red && U black

Insert case I2 (diagram: first iteration; higher iteration)

If both the parent P and the uncle U are red, then both of them can be repainted black and the grandparent G becomes red for maintaining requirement 4. Since any path through the parent or uncle must pass through the grandparent, the number of black nodes on these paths has not changed. However, the grandparent G may now violate requirement 3, if it has a red parent. After relabeling G to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 2 tree levels) higher.

    // Case_I2 (P+U red):
    P->color = BLACK;
    U->color = BLACK;
    G->color = RED;
    N = G; // new current node
    // iterate 1 black level higher
    //   (= 2 tree levels)
  } while ((P = N->parent) != NULL);
  // end of the (do while)-loop

Insert case I3

Insert case I2 has been executed ${\tfrac {h-1}{2}}$ times and the total height of the tree has increased by 1, now being $h~.$ The current node N is the (red) root of the tree, and all RB-properties are satisfied.

  // Leaving the (do while)-loop (after having fallen through from Case_I2).
  // Case_I3: N is the root and red.
  return; // insertion complete

Insert case I4

The parent P is red and the root. Because N is also red, requirement 3 is violated. But after switching P’s color the tree is in RB-shape.
The black height of the tree increases by 1.

Case_I4: // P is the root and red:
  P->color = BLACK;
  return; // insertion complete

Insert case I5 (diagram: first iteration; higher iteration)

The parent P is red but the uncle U is black. The ultimate goal is to rotate the parent node P to the grandparent position, but this will not work if N is an "inner" grandchild of G (i.e., if N is the left child of the right child of G or the right child of the left child of G). A dir-rotation at P switches the roles of the current node N and its parent P. The rotation adds paths through N (those in the subtree labeled 2, see diagram) and removes paths through P (those in the subtree labeled 4). But both P and N are red, so requirement 4 is preserved. Requirement 3 is restored in case I6.

Case_I56: // P red && U black:
  if (N == P->child[1-dir]) {
    // Case_I5 (P red && U black && N inner grandchild of G):
    RotateDir(P,dir); // P is never the root
    N = P; // new current node
    P = G->child[dir]; // new parent of N
    // fall through to Case_I6
  }

Insert case I6 (diagram: first iteration; higher iteration)

The current node N is now certain to be an "outer" grandchild of G (left of left child or right of right child). Now (1-dir)-rotate at G, putting P in place of G and making P the parent of N and G. G is black and its former child P is red, since requirement 3 was violated. After switching the colors of P and G the resulting tree satisfies requirement 3. Requirement 4 also remains satisfied, since all paths that went through the black G now go through the black P.

  // Case_I6 (P red && U black && N outer grandchild of G):
  RotateDirRoot(T,G,1-dir); // G may be the root
  P->color = BLACK;
  G->color = RED;
  return; // insertion complete
} // end of RBinsert1

Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables, it is in-place.

Simple cases

The label N denotes the current node, which at entry is the node to be deleted. If N is the root and does not have a non-NIL child, it is replaced by a NIL node, after which the tree is empty—and in RB-shape. If N has exactly one non-NIL child, it must be a red child, according to conclusion 5. If N is a red node, it cannot have exactly one non-NIL child, because that child would have to be black by requirement 3; furthermore, it cannot have exactly one black child according to conclusion 5. As a consequence, a red N is without any child and can simply be removed. If N is a black node, it may have two red children, a single red child or no non-NIL child at all. If N has a single red child, it is simply replaced with this child after painting the latter black. If N has two non-NIL children, an additional navigation to the minimum element in its right subtree (which is N’s in-order successor, say ${\text{y}}$) finds a node with no other node between N and ${\text{y}}$ (as shown here). This node ${\text{y}}$ does not have a left child and thus has at most one non-NIL child. If ${\text{y}}$ is to be removed in N’s place, the red–black tree data related with N and ${\text{y}}$, i.e. the color of and the pointers to and from the two nodes, have to be exchanged. (As a result, the modified red–black tree is the same as before, except that the order between N and ${\text{y}}$ is reversed.) This choice may result in one of the simpler cases above; but if ${\text{y}}$ is without child and black, we arrive at the complex case treated in the next section. The simple cases can be dispatched as in the following sketch.
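This sketch is a minimal illustration only, assuming the type definitions and helpers above; RBdeleteSimple is a name chosen here, the exchange with the in-order successor ${\text{y}}$ is presumed to have been carried out already, and RBdelete2 is the routine developed in the next section.

// A sketch of the simple deletion cases, assuming the type definitions
// and helpers above. Precondition: N has at most one non-NIL child
// (if N had two children, the exchange with its in-order successor y
// described above has already been carried out).
void RBdeleteSimple(RBtree* T, RBnode* N) {
  RBnode* C = (N->left != NIL) ? N->left : N->right;
  if (C != NIL) {                  // single child: it must be red
    C->color = BLACK;              // replace N by C and paint C black
    C->parent = N->parent;
    if (N->parent == NULL)
      T->root = C;
    else
      N->parent->child[childDir(N)] = C;
  } else if (N->parent == NULL) {  // childless root:
    T->root = NIL;                 //   the tree becomes empty
  } else if (N->color == RED) {    // childless red node:
    N->parent->child[childDir(N)] = NIL; // simply unlink it
  } else {                         // black non-root leaf:
    RBdelete2(T, N);               //   the complex case of the next section
  }
  // (deallocation of the removed node N is not shown)
}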
Removal of a black non-root leaf

The complex case is when N is not the root, is colored black and has no proper child (⇔ only NIL children). In the first iteration, N is replaced by NIL.

void RBdelete2(
    RBtree* T,        // -> red–black tree
    struct RBnode* N) // -> node to be deleted
{
  struct RBnode* P = N->parent; // -> parent node of N
  byte dir;         // side of P on which N is located (∈ { LEFT, RIGHT })
  struct RBnode* S; // -> sibling of N
  struct RBnode* C; // -> close   nephew
  struct RBnode* D; // -> distant nephew

  // P != NULL, since N is not the root.
  dir = childDir(N); // side of parent P on which the node N is located
  // Replace N at its parent P by NIL:
  P->child[dir] = NIL;
  goto Start_D;      // jump into the loop

  // start of the (do while)-loop:
  do {
    dir = childDir(N);   // side of parent P on which node N is located
Start_D:
    S = P->child[1-dir]; // sibling of N (has black height >= 1)
    D = S->child[1-dir]; // distant nephew
    C = S->child[  dir]; // close   nephew
    if (S->color == RED)
      goto Case_D3;                  // S red ===> P+C+D black
    // S is black:
    if (D != NIL && D->color == RED) // not considered black
      goto Case_D6;                  // D red && S black
    if (C != NIL && C->color == RED) // not considered black
      goto Case_D5;                  // C red && S+D black
    // Here both nephews are == NIL (first iteration) or black (later).
    if (P->color == RED)
      goto Case_D4;                  // P red && C+S+D black

The rebalancing loop of the delete operation has the following invariant:
• At the beginning of each iteration the black height of N equals the iteration number minus one, which means that in the first iteration it is zero and that N is a true black node in higher iterations.
• The number of black nodes on the paths through N is one less than before the deletion, whereas it is unchanged on all other paths, so that there is a black-violation at P if other paths exist.
• All other properties (including requirement 3) are satisfied throughout the tree.

Notes to the delete diagrams

before                  case  rotation  assignment  after  next        Δh
P = — (N is root)       D1    —         —           —      →           —
P, C, S, D black        D2    —         N := P      ?      ?           1
S red (P, C, D black)   D3    P↶S       —           —      D6, D5, D4  0
P red (C, S, D black)   D4    —         —           —      →           0
C red (S, D black)      D5    C↷S       —           —      D6          0
D red (S black)         D6    P↶S       —           —      →           0

Deletion: This synopsis shows in its before columns that all possible cases[32] of color constellations are covered.
• In the diagrams below, P is used for N’s parent, S for the sibling of N, C (meaning close nephew) for S’s child in the same direction as N, and D (meaning distant nephew) for S’s other child (S cannot be a NIL node in the first iteration, because it must have black height one, which was the black height of N before its deletion, but C and D may be NIL nodes).
• The diagrams show the current node N as the left child of its parent P even though it is possible for N to be on either side. The code samples cover both possibilities by means of the side variable dir.
• At the beginning (in the first iteration) of removal, N is the NIL node replacing the node to be deleted. Because its location in the parent node is the only thing of importance, it is symbolised in the left column of the delete diagrams by a special marker (meaning: the current node N is a NIL node and left child). As the operation proceeds proper nodes (of black height ≥ 1) may become current as well (see e.g. case D2).
• By counting the black bullets in a delete diagram it can be observed that the paths through N have one bullet less than the other paths. This means a black-violation at P—if it exists.
• The color constellation in column group before defines the case, whose name is given in the column case. Thereby possible values in cells left empty are ignored.
• The rows in the synopsis are ordered such that the coverage of all possible RB cases is easily comprehensible.
• The column rotation indicates whether a rotation contributes to the rebalancing.
• The column assignment shows an assignment of N before entering a subsequent iteration step. This possibly induces a reassignment of the other nodes P, C, S, D also.
• If something has been changed by the case, this is shown in the column group after.
• An arrow → in column next signifies that the rebalancing is complete with this step. If the column after determines exactly one case, this case is given as the subsequent one, otherwise there are question marks.
• The loop is contained in the sections from Start_D through "Delete case D2", where the problem of rebalancing is escalated $\Delta h=1$ level higher in the tree in that the parent P becomes the new current node N. So it takes maximally $h$ iterations to repair the tree (where $h$ is the height of the tree). Because the probability of escalation decreases exponentially with each iteration, the total rebalancing cost is constant on average, indeed amortized constant. (Just as an aside: Mehlhorn & Sanders point out: "AVL trees do not support constant amortized update costs."[17]: 165, 158  This is true for the rebalancing after a deletion, but not for AVL insertion.[33])
• From the body of the loop there are exiting branches to the cases D3, D6, D5 + D6, D4, and D1; the section "Delete case D3" of its own has three different exiting branches to the cases D6, D5 and D4.
• Rotations occur in cases D6 and D5 + D6 and D3 + D5 + D6 – all outside the loop. Therefore, at most three rotations occur in total.

Delete case D1

The current node N is the new root. One black node has been removed from every path, so the RB-properties are preserved. The black height of the tree decreases by 1.

  // Case_D1 (P == NULL):
  return; // deletion complete

Delete case D2 (diagram: first iteration; higher iteration; explanation of symbols above)

P, S, and S’s children are black. After painting S red all paths passing through S, which are precisely those paths not passing through N, have one less black node. Now all paths in the subtree rooted by P have the same number of black nodes, but one fewer than the paths that do not pass through P, so requirement 4 may still be violated. After relabeling P to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 1 tree level) higher.

    // Case_D2 (P+C+S+D black):
    S->color = RED;
    N = P; // new current node (maybe the root)
    // iterate 1 black level
    //   (= 1 tree level) higher
  } while ((P = N->parent) != NULL);
  // end of the (do while)-loop

Delete case D3 (diagram: first iteration; higher iteration)

The sibling S is red, so P and the nephews C and D have to be black. A dir-rotation at P turns S into N’s grandparent. Then after reversing the colors of P and S, the path through N is still short one black node. But N now has a red parent P and after the reassignment a black sibling S, so the transformations in cases D4, D5, or D6 are able to restore the RB-shape.

Case_D3: // S red && P+C+D black:
  RotateDirRoot(T,P,dir); // P may be the root
  P->color = RED;
  S->color = BLACK;
  S = C; // != NIL
  // now: P red && S black
  D = S->child[1-dir]; // distant nephew
  if (D != NIL && D->color == RED)
    goto Case_D6;      // D red && S black
  C = S->child[  dir]; // close   nephew
  if (C != NIL && C->color == RED)
    goto Case_D5;      // C red && S+D black
  // Otherwise C+D considered black.
  // fall through to Case_D4

Delete case D4 (diagram: first iteration; higher iteration)

The sibling S and S’s children are black, but P is red. Exchanging the colors of S and P does not affect the number of black nodes on paths going through S, but it does add one to the number of black nodes on paths going through N, making up for the deleted black node on those paths.

Case_D4: // P red && S+C+D black:
  S->color = RED;
  P->color = BLACK;
  return; // deletion complete

Delete case D5 (diagram: first iteration; higher iteration)

The sibling S is black, S’s close child C is red, and S’s distant child D is black. After a (1-dir)-rotation at S the nephew C becomes S’s parent and N’s new sibling. The colors of S and C are exchanged. All paths still have the same number of black nodes, but now N has a black sibling whose distant child is red, so the constellation is fit for case D6. Neither N nor its parent P are affected by this transformation, and P may be red or black (open color in the diagram).

Case_D5: // C red && S+D black:
  RotateDir(S,1-dir); // S is never the root
  S->color = RED;
  C->color = BLACK;
  D = S;
  S = C;
  // now: D red && S black
  // fall through to Case_D6

Delete case D6 (diagram: first iteration; higher iteration)

The sibling S is black, S’s distant child D is red. After a dir-rotation at P the sibling S becomes the parent of P and S’s distant child D. The colors of P and S are exchanged, and D is made black. The whole subtree still has the same color at its root S, namely either red or black (open color in the diagram), which refers to the same color both before and after the transformation. This way requirement 3 is preserved. The paths in the subtree not passing through N (in other words, passing through D and node 3 in the diagram) pass through the same number of black nodes as before, but N now has one additional black ancestor: either P has become black, or it was black and S was added as a black grandparent. Thus, the paths passing through N pass through one additional black node, so that requirement 4 is restored and the total tree is in RB-shape.

Case_D6: // D red && S black:
  RotateDirRoot(T,P,dir); // P may be the root
  S->color = P->color;
  P->color = BLACK;
  D->color = BLACK;
  return; // deletion complete
} // end of RBdelete2

Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables, it is in-place.

Proof of bounds

For $h\in \mathbb {N} $ there is a red–black tree of height $h$ with

$m_{h}=2^{\lfloor (h+1)/2\rfloor }+2^{\lfloor h/2\rfloor }-2={\begin{cases}2\cdot 2^{h/2}-2=2^{h/2+1}-2&{\text{if }}h{\text{ even}}\\3\cdot 2^{(h-1)/2}-2&{\text{if }}h{\text{ odd}}\end{cases}}$

nodes ($\lfloor \,\rfloor $ is the floor function), and there is no red–black tree of this tree height with fewer nodes—therefore it is minimal. Its black height is $\lceil h/2\rceil $ (with black root) or, for odd $h$ (then with a red root), also $(h-1)/2~.$

Proof

For a red–black tree of a certain height to have a minimal number of nodes, it must have exactly one longest path with a maximal number of red nodes, to achieve a maximal tree height with a minimal black height. Besides this path all other nodes have to be black.[16]: 444  (Proof sketch: if a node is taken off this tree it either loses height or some RB property.) The RB tree of height $h=1$ with red root is minimal.
This is in agreement with

$m_{1}=2^{\lfloor (1+1)/2\rfloor }+2^{\lfloor 1/2\rfloor }-2=2^{1}+2^{0}-2=1~.$

A minimal RB tree (RBh in figure 4) of height $h>1$ has a root whose two child subtrees are of different height. The higher child subtree is also a minimal RB tree, RBh–1, containing also a longest path that defines its height $h-1$; it has $m_{h-1}$ nodes and the black height $\lfloor (h-1)/2\rfloor =:s.$ The other subtree is a perfect binary tree of (black) height $s$ having $2^{s}-1=2^{\lfloor (h-1)/2\rfloor }-1$ black nodes—and no red node. Then the number of nodes is by induction

$m_{h}=\underbrace {m_{h-1}} _{\text{higher subtree}}+\underbrace {1} _{\text{root}}+\underbrace {2^{\lfloor (h-1)/2\rfloor }-1} _{\text{second subtree}}$

resulting in

$m_{h}=\left(2^{\lfloor h/2\rfloor }+2^{\lfloor (h-1)/2\rfloor }-2\right)+2^{\lfloor (h-1)/2\rfloor }=2^{\lfloor h/2\rfloor }+2^{\lfloor (h+1)/2\rfloor }-2$  ■

The graph of the function $m_{h}$ is convex and piecewise linear with breakpoints at $(h=2k\;|\;m_{2k}=2\cdot 2^{k}-2)$ where $k\in \mathbb {N} .$ The function has been tabulated as $m_{h}=$ A027383(h–1) for $h\geq 1$ (sequence A027383 in the OEIS).

Solving the function for $h$

The inequality $9>8=2^{3}$ leads to $3>2^{3/2}$, which for odd $h$ leads to $m_{h}=3\cdot 2^{(h-1)/2}-2={\bigl (}3\cdot 2^{-3/2}{\bigr )}\cdot 2^{(h+2)/2}-2>2\cdot 2^{h/2}-2$. So in both the even and the odd case, $h$ is in the interval

$\log _{2}(n+1)\leq h\leq 2\log _{2}(n+2)-2=2\log _{2}(n/2+1)\;{\bigl [}<2\log _{2}(n+1)\,{\bigr ]},$

where the lower bound is attained by the perfect binary tree and the upper bound by the minimal red–black tree, with $n$ being the number of nodes.[34]

Conclusion

A red–black tree with $n$ nodes (keys) has tree height $h\in O(\log n).$

Set operations and bulk operations

In addition to the single-element insert, delete and lookup operations, several set operations have been defined on red–black trees: union, intersection and set difference. Fast bulk operations for insertion or deletion can then be implemented based on these set functions. These set operations rely on two helper operations, Split and Join. With the new operations, the implementation of red–black trees can be more efficient and highly parallelizable.[35] In order to achieve its time complexities, this implementation requires that the root is allowed to be either red or black, and that every node stores its own black height.

• Join: The function Join is on two red–black trees t1 and t2 and a key k, where t1 < k < t2, i.e. all keys in t1 are less than k, and all keys in t2 are greater than k. It returns a tree containing all elements of t1 and t2, as well as k. If the two trees have the same black height, Join simply creates a new node with left subtree t1, root k and right subtree t2. If both t1 and t2 have a black root, k is set to be red; otherwise k is set to be black. If the black heights are unequal, suppose that t1 has larger black height than t2 (the other case is symmetric). Join follows the right spine of t1 until a black node c which is balanced with t2. At this point a new node with left child c, root k (set to be red) and right child t2 is created to replace c. The new node may invalidate the red–black invariant because at most three red nodes can appear in a row. This can be fixed with a double rotation. If the double-red issue propagates to the root, the root is then set to be black, restoring the properties. The cost of this function is the difference of the black heights between the two input trees.
• Split: To split a red–black tree into two smaller trees, those smaller than key x, and those larger than key x, first draw a path from the root by inserting x into the red–black tree. After this insertion, all values less than x will be found on the left of the path, and all values greater than x will be found on the right. By applying Join, all the subtrees on the left side are merged bottom-up using keys on the path as intermediate nodes from bottom to top to form the left tree, and the right part is symmetric. For some applications, Split also returns a boolean value denoting if x appears in the tree. The cost of Split is $O(\log n),$ order of the height of the tree. This algorithm actually has nothing to do with any special properties of a red–black tree, and may be used on any tree with a join operation, such as an AVL tree.

The join algorithm is as follows:

function joinRightRB(TL, k, TR):
    if (TL.color=black) and (TL.blackHeight=TR.blackHeight):
        return Node(TL,⟨k,red⟩,TR)
    T'=Node(TL.left,⟨TL.key,TL.color⟩,joinRightRB(TL.right,k,TR))
    if (TL.color=black) and (T'.right.color=T'.right.right.color=red):
        T'.right.right.color=black;
        return rotateLeft(T')
    return T'

function joinLeftRB(TL, k, TR):
    /* symmetric to joinRightRB */

function join(TL, k, TR):
    if TL.blackHeight>TR.blackHeight:
        T'=joinRightRB(TL,k,TR)
        if (T'.color=red) and (T'.right.color=red):
            T'.color=black
        return T'
    if TR.blackHeight>TL.blackHeight:
        /* symmetric */
    if (TL.color=black) and (TR.color=black):
        return Node(TL,⟨k,red⟩,TR)
    return Node(TL,⟨k,black⟩,TR)

The split algorithm is as follows:

function split(T, k):
    if (T = nil) return (nil, false, nil)
    if (k = T.key) return (T.left, true, T.right)
    if (k < T.key):
        (L',b,R') = split(T.left, k)
        return (L',b,join(R',T.key,T.right))
    (L',b,R') = split(T.right, k)
    return (join(T.left,T.key,L'),b,R')

The union of two red–black trees t1 and t2 representing sets A and B is a red–black tree t that represents A ∪ B. The following recursive function computes this union:

function union(t1, t2):
    if t1 = nil return t2
    if t2 = nil return t1
    (L1,b,R1)=split(t1,t2.key)
    proc1=start:
        TL=union(L1,t2.left)
    proc2=start:
        TR=union(R1,t2.right)
    wait all proc1,proc2
    return join(TL, t2.key, TR)

Here, split is presumed to return two trees: one holding the keys less than its input key, one holding the greater keys. (The algorithm is non-destructive, but an in-place destructive version exists also.)

The algorithm for intersection or difference is similar, but requires the Join2 helper routine, which is the same as Join but without the middle key. Based on the new functions for union, intersection or difference, either one key or multiple keys can be inserted to or deleted from the red–black tree. Since Split calls Join but does not deal with the balancing criteria of red–black trees directly, such an implementation is usually called the "join-based" implementation.

The complexity of each of union, intersection and difference is $O\left(m\log \left({n \over m}+1\right)\right)$ for two red–black trees of sizes $m$ and $n(\geq m)$. This complexity is optimal in terms of the number of comparisons. More importantly, since the recursive calls to union, intersection or difference are independent of each other, they can be executed in parallel with a parallel depth $O(\log m\log n)$.[35] When $m=1$, the join-based implementation has the same computational directed acyclic graph (DAG) as single-element insertion and deletion if the root of the larger tree is used to split the smaller tree.
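As an illustration of the join-based style, single-key insertion can be expressed with Split and Join alone. The following C++ sketch assumes hypothetical signatures mirroring the pseudocode above; the Node type and the split/join prototypes are assumptions made here, not code from the cited paper.

// Hypothetical C++ signatures mirroring the join-based pseudocode above;
// Node, split and join are assumptions, not part of the sample code earlier.
struct Node;                           // node carrying key, color, blackHeight
struct SplitResult { Node* left; bool found; Node* right; };
SplitResult split(Node* t, int k);     // as in the split pseudocode above
Node* join(Node* l, int k, Node* r);   // as in the join pseudocode above

// Single-key insertion in the join-based style: all rebalancing happens
// inside join, never in this function itself.
Node* insertJoinBased(Node* t, int k) {
  SplitResult s = split(t, k);         // keys < k to the left, > k to the right
  return join(s.left, k, s.right);     // reassemble around the new key k
}

Deletion is analogous: split at k, drop the middle key, and reassemble the two halves with the Join2 helper mentioned above.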
Parallel algorithms

Parallel algorithms for constructing red–black trees from sorted lists of items can run in constant time or $O(\log \log n)$ time, depending on the computer model, if the number of processors available is asymptotically proportional to the number $n$ of items where $n\to \infty $. Fast search, insertion, and deletion parallel algorithms are also known.[36]

The join-based algorithms for red–black trees are parallel for bulk operations, including union, intersection, construction, filter, map-reduce, and so on.

Parallel bulk operations

Basic operations like insertion, removal or update can be parallelised by defining operations that process bulks of multiple elements. It is also possible to process bulks with several basic operations, for example bulks may contain elements to insert and also elements to remove from the tree. The algorithms for bulk operations aren’t just applicable to the red–black tree, but can be adapted to other sorted sequence data structures as well, like the 2–3 tree, 2–3–4 tree and (a,b)-tree. In the following, different algorithms for bulk insert will be explained, but the same algorithms can also be applied to removal and update. Bulk insert is an operation that inserts each element of a sequence $I$ into a tree $T$.

Join-based

This approach can be applied to every sorted sequence data structure that supports efficient join- and split-operations.[37] The general idea is to split $I$ and $T$ into multiple parts and perform the insertions on these parts in parallel.

1. First the bulk $I$ of elements to insert must be sorted.
2. After that, the algorithm splits $I$ into $k\in \mathbb {N} ^{+}$ parts $\langle I_{1},\cdots ,I_{k}\rangle $ of about equal sizes.
3. Next the tree $T$ must be split into $k$ parts $\langle T_{1},\cdots ,T_{k}\rangle $ in such a way that for every $j\in \mathbb {N} ^{+}|\,1\leq j<k$ the following constraints hold:
   1. ${\text{last}}(I_{j})<{\text{first}}(T_{j+1})$
   2. ${\text{last}}(T_{j})<{\text{first}}(I_{j+1})$
4. Now the algorithm inserts each element of $I_{j}$ into $T_{j}$ sequentially. This step must be performed for every $j$, which can be done by up to $k$ processors in parallel.
5. Finally, the resulting trees will be joined to form the final result of the entire operation.

Note that in Step 3 the constraints for splitting $I$ assure that in Step 5 the trees can be joined again and the resulting sequence is sorted.

(figures: initial tree; split I and T; insert into the split T; join T)

The pseudo code shows a simple divide-and-conquer implementation of the join-based algorithm for bulk-insert. Both recursive calls can be executed in parallel. The join operation used here differs from the version explained in this article; instead, join2 is used, which omits the second parameter k.

bulkInsert(T, I, k):
    I.sort()
    bulkInsertRec(T, I, k)

bulkInsertRec(T, I, k):
    if k = 1:
        forall e in I: T.insert(e)
    else
        m := ⌊size(I) / 2⌋
        (T1, _, T2) := split(T, I[m])
        bulkInsertRec(T1, I[0 .. m], ⌈k / 2⌉)
        || bulkInsertRec(T2, I[m + 1 .. size(I) - 1], ⌊k / 2⌋)
        T ← join2(T1, T2)

Execution time

Sorting $I$ is not considered in this analysis.

#recursion levels                    $\in O(\log k)$
T(split) + T(join)                   $\in O(\log |T|)$
insertions per thread                $\in O\left({\frac {|I|}{k}}\right)$
T(insert)                            $\in O(\log |T|)$
T(bulkInsert) with $k$ = #processors $\in O\left(\log k\log |T|+{\frac {|I|}{k}}\log |T|\right)$

This can be improved by using parallel algorithms for splitting and joining.
In this case the execution time is $\in O\left(\log |T|+{\frac {|I|}{k}}\log |T|\right)$.[38]

Work

#splits, #joins        $\in O(k)$
W(split) + W(join)     $\in O(\log |T|)$
#insertions            $\in O(|I|)$
W(insert)              $\in O(\log |T|)$
W(bulkInsert)          $\in O(k\log |T|+|I|\log |T|)$

Pipelining

Another method of parallelizing bulk operations is to use a pipelining approach.[39] This can be done by breaking the task of processing a basic operation up into a sequence of subtasks. For multiple basic operations the subtasks can be processed in parallel by assigning each subtask to a separate processor.

1. First the bulk $I$ of elements to insert must be sorted.
2. For each element in $I$ the algorithm locates the according insertion position in $T$. This can be done in parallel for each element $\in I$ since $T$ won’t be mutated in this process. Now $I$ must be divided into subsequences $S$ according to the insertion position of each element. For example $s_{n,{\mathit {left}}}$ is the subsequence of $I$ that contains the elements whose insertion position would be to the left of node $n$.
3. The middle element $m_{n,{\mathit {dir}}}$ of every subsequence $s_{n,{\mathit {dir}}}$ will be inserted into $T$ as a new node $n'$. This can be done in parallel for each $m_{n,{\mathit {dir}}}$ since by definition the insertion position of each $m_{n,{\mathit {dir}}}$ is unique. If $s_{n,{\mathit {dir}}}$ contains elements to the left or to the right of $m_{n,{\mathit {dir}}}$, those will be contained in a new set of subsequences $S$ as $s_{n',{\mathit {left}}}$ or $s_{n',{\mathit {right}}}$.
4. Now $T$ possibly contains up to two consecutive red nodes at the end of the paths from the root to the leaves, which needs to be repaired. Note that, while repairing, the insertion positions of elements $\in S$ have to be updated, if the corresponding nodes are affected by rotations. If two nodes have different nearest black ancestors, they can be repaired in parallel. Since at most four nodes can have the same nearest black ancestor, the nodes at the lowest level can be repaired in a constant number of parallel steps. This step will be applied successively to the black levels above until $T$ is fully repaired.
5. Steps 3 and 4 will be repeated on the new subsequences until $S$ is empty. At this point every element $\in I$ has been inserted. Each application of these steps is called a stage. Since the length of the subsequences in $S$ is $\in O(|I|)$ and in every stage the subsequences are being cut in half, the number of stages is $\in O(\log |I|)$. Since all stages move up the black levels of the tree, they can be parallelised in a pipeline. Once a stage has finished processing one black level, the next stage is able to move up and continue at that level.

(figures: initial tree; find insert positions; stage 1 inserts elements; stage 1 begins to repair nodes; stage 2 inserts elements; stage 2 begins to repair nodes; stage 3 inserts elements; stage 3 begins to repair nodes; stage 3 continues to repair nodes)

Execution time

Sorting $I$ is not considered in this analysis. Also, $|I|$ is assumed to be smaller than $|T|$, otherwise it would be more efficient to construct the resulting tree from scratch.
T(find insert position)                  $\in O(\log |T|)$
#stages                                  $\in O(\log |I|)$
T(insert) + T(repair)                    $\in O(\log |T|)$
T(bulkInsert) with $|I|$ ~ #processors   $\in O(\log |I|+2\cdot \log |T|)=O(\log |T|)$

Work

W(find insert positions)   $\in O(|I|\log |T|)$
#insertions, #repairs      $\in O(|I|)$
W(insert) + W(repair)      $\in O(\log |T|)$
W(bulkInsert)              $\in O(2\cdot |I|\log |T|)=O(|I|\log |T|)$

Popular culture

A red–black tree was referenced correctly in an episode of Missing,[40] as noted by Robert Sedgewick in one of his lectures:[41]

Jess: It was the red door again.
Pollock: I thought the red door was the storage container.
Jess: But it wasn’t red anymore, it was black.
Antonio: So red turning to black means what?
Pollock: Budget deficits, red ink, black ink.
Antonio: It could be from a binary search tree. The red–black tree tracks every simple path from a node to a descendant leaf that has the same number of black nodes.
Jess: Does that help you with the ladies?

See also

• List of data structures
• Tree data structure
• Tree rotation
• AA tree, a variation of the red–black tree
• Left-leaning red–black tree
• AVL tree
• B-tree (2–3 tree, 2–3–4 tree, B+ tree, B*-tree, UB-tree)
• Scapegoat tree
• Splay tree
• T-tree
• WAVL tree

References and notes

1. Paton, James. "Red–Black Trees".
2. Rebalancing only (no lookup); see Tarjan and Mehlhorn.
3. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Red–Black Trees". Introduction to Algorithms (2nd ed.). MIT Press. pp. 273–301. ISBN 978-0-262-03293-3.
4. Morris, John (1998). "Red–Black Trees". Data Structures and Algorithms.
5. Bayer, Rudolf (1972). "Symmetric binary B-Trees: Data structure and maintenance algorithms". Acta Informatica. 1 (4): 290–306. doi:10.1007/BF00289509. S2CID 28836825.
6. Drozdek, Adam (2001). Data Structures and Algorithms in Java (2 ed.). Sams Publishing. p. 323. ISBN 978-0534376680.
7. Guibas, Leonidas J.; Sedgewick, Robert (1978). "A Dichromatic Framework for Balanced Trees". Proceedings of the 19th Annual Symposium on Foundations of Computer Science. pp. 8–21. doi:10.1109/SFCS.1978.3.
8. "Red Black Trees". eternallyconfuzzled.com. Archived from the original on 2007-09-27. Retrieved 2015-09-02.
9. Sedgewick, Robert (2012). Red–Black BSTs. Coursera. A lot of people ask why did we use the name red–black. Well, we invented this data structure, this way of looking at balanced trees, at Xerox PARC that was the home of the personal computer and many other innovations that we live with today entering[sic] graphic user interfaces, Ethernet and object-oriented programmings[sic] and many other things. But one of the things that was invented there was laser printing and we were very excited to have nearby color laser printer that could print things out in color and out of the colors the red looked the best. So, that's why we picked the color red to distinguish red links, the types of links, in three nodes. So, that's an answer to the question for people that have been asking.
10. "Where does the term "Red/Black Tree" come from?". programmers.stackexchange.com. Retrieved 2015-09-02.
11. Andersson, Arne (1993-08-11). "Balanced search trees made simple". In Dehne, Frank; Sack, Jörg-Rüdiger; Santoro, Nicola; Whitesides, Sue (eds.). Algorithms and Data Structures (Proceedings). Lecture Notes in Computer Science. Vol. 709. Springer-Verlag Berlin Heidelberg. pp. 60–71. CiteSeerX 10.1.1.118.6192. doi:10.1007/3-540-57155-8_236. ISBN 978-3-540-57155-1. Archived from the original on 2018-12-08. Alt URL
12.
Okasaki, Chris (1999-01-01). "Red–black trees in a functional setting". Journal of Functional Programming. 9 (4): 471–477. doi:10.1017/S0956796899003494. ISSN 1469-7653. S2CID 20298262. 13. Sedgewick, Robert (1983). Algorithms (1st ed.). Addison-Wesley. ISBN 978-0-201-06672-2. 14. Sedgewick, Robert; Wayne, Kevin. "RedBlackBST.java". algs4.cs.princeton.edu. Retrieved 7 April 2018. 15. Sedgewick, Robert (2008). "Left-leaning Red–Black Trees". 16. Sedgewick, Robert; Wayne, Kevin (2011). Algorithms (4th ed.). Addison-Wesley Professional. ISBN 978-0-321-57351-3. 17. Mehlhorn, Kurt; Sanders, Peter (2008). "7. Sorted Sequences" (PDF). Algorithms and Data Structures: The Basic Toolbox. Berlin/Heidelberg: Springer. CiteSeerX 10.1.1.148.2305. doi:10.1007/978-3-540-77978-0. ISBN 978-3-540-77977-3. 18. Cormen, Thomas; Leiserson, Charles; Rivest, Ronald; Stein, Clifford (2022). "13. Red–Black Trees". Introduction to Algorithms (4th ed.). MIT Press. pp. 331–332. ISBN 9780262046305. 19. Using Knuth’s definition of order: the maximum number of children 20. Sedgewick, Robert (1998). Algorithms in C++. Addison-Wesley Professional. pp. 565–575. ISBN 978-0-201-35088-3. 21. "The Implementation of epoll (1)". September 2014. 22. Pfaff 2004 23. "Robert Sedgewick" (PDF). Cs.princeton.edu. 4 June 2020. Retrieved 26 March 2022. 24. "Balanced Trees" (PDF). Cs.princeton.edu. Retrieved 26 March 2022. 25. Demaine, E. D.; Harmon, D.; Iacono, J.; Pătraşcu, M. (2007). "Dynamic Optimality—Almost" (PDF). SIAM Journal on Computing. 37 (1): 240. doi:10.1137/S0097539705447347. S2CID 1480961. 26. "How does a HashMap work in JAVA". coding-geek.com. 27. Tarjan, Robert Endre (April 1985). "Amortized Computational Complexity" (PDF). SIAM Journal on Algebraic and Discrete Methods. 6 (2): 306–318. doi:10.1137/0606031. 28. The important thing about these tree rotations is that they preserve the in-order sequence of the tree’s nodes. 29. "Ben Pfaff (2007): Online HTML version of a well-documented collection of binary search tree and balanced tree library routines". 30. The left columns contain far less nodes than the right ones, especially for removal. This indicates that some efficiency can be gained by pulling the first iteration out of the rebalancing loops of insertion and deletion, because many of the named nodes are NIL nodes in the first iteration and definitively non-NIL later. (See also this remark.) 31. Rotations have been placed before recoloring for reasons of clarity. But the two commute, so that it is free choice to move the rotation to the tail. 32. The same partitioning is found in Ben Pfaff. 33. Dinesh P. Mehta, Sartaj Sahni (Ed.) Handbook of Data Structures and Applications 10.4.2 34. Equality at the upper bound holds for the minimal RB trees RB2k of even height $2\cdot k$ with $n=2\cdot 2^{k}-2$ nodes and only for those. So the inequality is marginally more precise than the widespread $h<2\log _{2}(n+1),$ e. g. in Cormen p. 264. Moreover, these trees are binary trees that admit one and only one coloring conforming to the RB requirements 1 to 4. But there are further such trees, e. g. appending a child node to a black leaf always forces it to red. (A minimal RB tree of odd height allows to flip the root’s color from red to black.) 35. Blelloch, Guy E.; Ferizovic, Daniel; Sun, Yihan (2016), "Just Join for Parallel Ordered Sets" (PDF), Symposium on Parallel Algorithms and Architectures, Proc. of 28th ACM Symp. Parallel Algorithms and Architectures (SPAA 2016), ACM, pp. 
253–264, arXiv:1602.02120, doi:10.1145/2935764.2935768, ISBN 978-1-4503-4210-0, S2CID 2897793. 36. Park, Heejin; Park, Kunsoo (2001). "Parallel algorithms for red–black trees". Theoretical Computer Science. 262 (1–2): 415–435. doi:10.1016/S0304-3975(00)00287-5. Our parallel algorithm for constructing a red–black tree from a sorted list of $n$ items runs in $O(1)$ time with $n$ processors on the CRCW PRAM and runs in $O(\log \log n)$ time with $n/\log \log n$ processors on the EREW PRAM. 37. Sanders, Peter (2019). Mehlhorn, Kurt; Dietzfelbinger, Martin; Dementiev, Roman (eds.). Sequential and Parallel Algorithms and Data Structures : The Basic Toolbox. Springer eBooks. Cham: Springer. pp. 252–253. doi:10.1007/978-3-030-25209-0. ISBN 9783030252090. S2CID 201692657. 38. Akhremtsev, Yaroslav; Sanders, Peter (2016). "Fast Parallel Operations on Search Trees". HiPC 2016, the 23rd IEEE International Conference on High Performance Computing, Data, and Analytics, Hyderabad, India, December, 19-22. IEEE, Piscataway (NJ): 291–300. arXiv:1510.05433. Bibcode:2015arXiv151005433A. ISBN 978-1-5090-5411-4. 39. Jájá, Joseph (1992). An introduction to parallel algorithms. Reading, Mass. [u.a.]: Addison-Wesley. pp. 65–70. ISBN 0201548569. Zbl 0781.68009. 40. Missing (Canadian TV series). A, W Network (Canada); Lifetime (United States). 41. Robert Sedgewick (2012). B-Trees. Coursera. 9:48 minutes in. So not only is there some excitement in that dialogue but it's also technically correct that you don't often find with math in popular culture of computer science. A red–black tree tracks every simple path from a node to a descendant leaf with the same number of black nodes they got that right. Further reading • Mathworld: Red–Black Tree • San Diego State University: CS 660: Red–Black tree notes, by Roger Whitney • Pfaff, Ben (June 2004). "Performance Analysis of BSTs in System Software" (PDF). Stanford University. External links • Ben Pfaff: An Introduction to Binary Search Trees and Balanced Trees. 
Free Software Foundation, Boston 2004, ftp.gnu.org (PDF gzip; 1662 kB)
• A complete and working implementation in C
• OCW MIT Lecture on Red-black Trees by Erik Demaine
• Binary Search Tree Insertion Visualization on YouTube – Visualization of random and pre-sorted data insertions, in elementary binary search trees, and left-leaning red–black trees
• An intrusive red–black tree written in C++
• Red–black BSTs in 3.3 Balanced Search Trees
• Red–black BST Demo
Red auxiliary number

In the study of ancient Egyptian mathematics, red auxiliary numbers are numbers written in red ink in the Rhind Mathematical Papyrus, apparently used as aids for arithmetic computations involving fractions. They are considered to be among the first examples of a method that uses least common multiples.

References

• Gillings, Richard J. (1982). Mathematics in the Time of the Pharaohs. Dover Publications. ISBN 9780486243153. OCLC 301431218.
• Clagett, Marshall (1989). Ancient Egyptian Science: Ancient Egyptian mathematics. American Philosophical Society. ISBN 9780871692320. OCLC 313400062.
• Bunt, Lucas N. H.; Jones, Phillip S.; Bedient, Jack D. (2012). The Historical Roots of Elementary Mathematics. Dover Publications. ISBN 9780486139685. OCLC 868272907.
Redfield equation

In quantum mechanics, the Redfield equation is a Markovian master equation that describes the time evolution of the reduced density matrix ρ of a strongly coupled quantum system that is weakly coupled to an environment. The equation is named in honor of Alfred G. Redfield, who first applied it to nuclear magnetic resonance spectroscopy.[1]

There is a close connection to the Lindblad master equation. If a so-called secular approximation is performed, where only certain resonant interactions with the environment are retained, every Redfield equation transforms into a master equation of Lindblad type.

Redfield equations are trace-preserving and correctly produce a thermalized state for asymptotic propagation. However, in contrast to Lindblad equations, Redfield equations do not guarantee a positive time evolution of the density matrix. That is, it is possible to get negative populations during the time evolution. The Redfield equation approaches the correct dynamics for sufficiently weak coupling to the environment.

The general form of the Redfield equation is

${\frac {\partial }{\partial t}}\rho (t)=-{\frac {i}{\hbar }}[H,\rho (t)]-{\frac {1}{\hbar ^{2}}}\sum _{m}[S_{m},(\Lambda _{m}\rho (t)-\rho (t)\Lambda _{m}^{\dagger })]$

where $H$ is the Hermitian Hamiltonian, the $S_{m},\Lambda _{m}$ are operators that describe the coupling to the environment, and $[A,B]=AB-BA$ is the commutation bracket. The explicit form is given in the derivation below.

Derivation

Consider a quantum system coupled to an environment with a total Hamiltonian of $H_{\text{tot}}=H+H_{\text{int}}+H_{\text{env}}$. Furthermore, we assume that the interaction Hamiltonian can be written as $H_{\text{int}}=\sum _{n}S_{n}E_{n}$, where the $S_{n}$ act only on the system degrees of freedom, the $E_{n}$ only on the environment degrees of freedom.

The starting point of Redfield theory is the Nakajima–Zwanzig equation with ${\mathcal {P}}$ projecting on the equilibrium density operator of the environment and ${\mathcal {Q}}$ treated up to second order.[2] An equivalent derivation starts with second-order perturbation theory in the interaction $H_{\text{int}}$.[3] In both cases, the resulting equation of motion for the density operator in the interaction picture (with $H_{0,S}=H+H_{\text{env}}$) is

${\frac {\partial }{\partial t}}\rho _{\rm {I}}(t)=-{\frac {1}{\hbar ^{2}}}\sum _{m,n}\int _{t_{0}}^{t}dt'{\biggl (}C_{mn}(t-t'){\Bigl [}S_{m,\mathrm {I} }(t),S_{n,\mathrm {I} }(t')\rho _{\rm {I}}(t'){\Bigr ]}-C_{mn}^{\ast }(t-t'){\Bigl [}S_{m,\mathrm {I} }(t),\rho _{\rm {I}}(t')S_{n,\mathrm {I} }(t'){\Bigr ]}{\biggr )}$

Here, $t_{0}$ is some initial time, where the total state of the system and bath is assumed to be factorized, and we have introduced the bath correlation function $C_{mn}(t)={\text{tr}}(E_{m,\mathrm {I} }(t)E_{n}\rho _{\text{env,eq}})$ in terms of the density operator of the environment in thermal equilibrium, $\rho _{\text{env,eq}}$.

This equation is non-local in time: to get the derivative of the reduced density operator at time t, we need its values at all past times. As such, it cannot be easily solved. To construct an approximate solution, note that there are two time scales: a typical relaxation time $\tau _{r}$, which gives the time scale on which the environment affects the system time evolution, and the coherence time of the environment, $\tau _{c}$, which gives the typical time scale on which the correlation functions decay.
If the relation $\tau _{c}\ll \tau _{r}$ holds, then the integrand becomes approximately zero before the interaction-picture density operator changes significantly. In this case, the so-called Markov approximation $\rho _{\rm {I}}(t')\approx \rho _{\rm {I}}(t)$ holds. If we also move $t_{0}\to -\infty $ and change the integration variable $t'\to \tau =t-t'$, we end up with the Redfield master equation

${\frac {\partial }{\partial t}}\rho _{\rm {I}}(t)=-{\frac {1}{\hbar ^{2}}}\sum _{m,n}\int _{0}^{\infty }d\tau {\biggl (}C_{mn}(\tau ){\Bigl [}S_{m,\mathrm {I} }(t),S_{n,\mathrm {I} }(t-\tau )\rho _{\rm {I}}(t){\Bigr ]}-C_{mn}^{\ast }(\tau ){\Bigl [}S_{m,\mathrm {I} }(t),\rho _{\rm {I}}(t)S_{n,\mathrm {I} }(t-\tau ){\Bigr ]}{\biggr )}$

We can simplify this equation considerably if we use the shortcut $\Lambda _{m}=\sum _{n}\int _{0}^{\infty }d\tau C_{mn}(\tau )S_{n,\mathrm {I} }(-\tau )$. In the Schrödinger picture, the equation then reads

${\frac {\partial }{\partial t}}\rho (t)=-{\frac {i}{\hbar }}[H,\rho (t)]-{\frac {1}{\hbar ^{2}}}\sum _{m}[S_{m},\Lambda _{m}\rho (t)-\rho (t)\Lambda _{m}^{\dagger }]$

Secular approximation

The secular (Latin: saeculum, lit. 'century') approximation is an approximation valid for long times $t$. The time evolution of the Redfield relaxation tensor is neglected, since the Redfield equation describes weak coupling to the environment. Therefore, it is assumed that the relaxation tensor changes slowly in time, and it can be assumed constant for the duration of the interaction described by the interaction Hamiltonian. In general, the time evolution of the reduced density matrix can be written for the element $ab$ as

${\frac {\partial }{\partial t}}\rho _{ab}(t)=-i\omega _{ab}\rho _{ab}(t)-\sum _{cd}{\mathcal {R_{abcd}}}\rho _{cd}(t)$ (1)

where ${\mathcal {R}}$ is the time-independent Redfield relaxation tensor. Given that the actual coupling to the environment is weak (but non-negligible), the Redfield tensor is a small perturbation of the system Hamiltonian and the solution can be written as

$\rho _{ab}(t)=e^{-i\omega _{ab}t}{\rho }_{ab,\mathrm {I} }(t)$

where $\rho _{\rm {I}}(t)$ is not constant but a slowly changing amplitude reflecting the weak coupling to the environment. This is also a form of the interaction picture, hence the index "I".[note 1]

Taking the derivative of $\rho _{\rm {I}}(t)$ and substituting equation (1) for ${\frac {\partial }{\partial t}}\rho _{ab}(t)$, we are left with only the relaxation part of the equation

${\frac {\partial }{\partial t}}\rho _{ab,\mathrm {I} }(t)=-\sum _{cd}{\mathcal {R_{abcd}}}e^{i\omega _{ab}t-i\omega _{cd}t}\rho _{cd,\mathrm {I} }(t)$ .

We can integrate this equation under the condition that the interaction picture of the reduced density matrix $\rho _{\rm {I}}(t)$ changes slowly in time (which is true if ${\mathcal {R}}$ is small); then $\rho _{ab,\mathrm {I} }(t)\approx \rho _{ab,\mathrm {I} }(0)$, getting

$\rho _{ab,\mathrm {I} }(t)=\rho _{ab,\mathrm {I} }(0)-\sum _{cd}\int _{0}^{t}d\tau {\mathcal {R_{abcd}}}e^{i\omega _{ab}\tau -i\omega _{cd}\tau }\rho _{cd,\mathrm {I} }(t)=\rho _{ab,\mathrm {I} }(0)-\sum _{cd}{\mathcal {R_{abcd}}}{\frac {(e^{i\Delta \omega t}-1)}{i\Delta \omega }}\rho _{cd,\mathrm {I} }(t)$

where $\Delta \omega =\omega _{ab}-\omega _{cd}$.
In the limit of $\Delta \omega $ approaching zero, the fraction ${\frac {(e^{i\Delta \omega t}-1)}{i\Delta \omega }}$ approaches $t$, therefore the contribution of one element of the reduced density matrix to another element is proportional to time (and therefore dominates for long times $t$). If $\Delta \omega $ does not approach zero, the contribution of one element of the reduced density matrix to another oscillates with an amplitude proportional to ${\frac {1}{\Delta \omega }}$ (and therefore is negligible for long times $t$). It is therefore appropriate to neglect any contribution from non-diagonal elements ($cd$) to other non-diagonal elements ($ab$) and from non-diagonal elements ($cd$) to diagonal elements ($aa$, $a=b$), since the only case when frequencies of different modes are equal is the case of random degeneracy. The only elements left in the Redfield tensor to evaluate after the secular approximation are therefore:

• ${\mathcal {R}}_{aabb}$, the transfer of population from one state to another (from $b$ to $a$);
• ${\mathcal {R}}_{aaaa}$, the depopulation constant of state $a$; and
• ${\mathcal {R}}_{abab}$, the pure dephasing of the element $\rho _{ab}(t)$ (dephasing of coherence).

Notes

1. The interaction picture describes the evolution of the density matrix in a "frame of reference" where the changes due to Hamiltonian $H_{0}$ are not manifested. It is essentially the same transformation as entering a rotating frame of reference to solve a problem of combined rotating motion in classical mechanics. The interaction picture then describes only the envelope of the time evolution of the density matrix, where only the more subtle effects of the perturbation Hamiltonian manifest. The mathematical formula for a transformation from the Schrödinger picture to the interaction picture is given by $\psi _{\rm {I}}(t)=U^{\dagger }(t)\psi _{\rm {S}}(t)=e^{iH_{0}t/\hbar }\psi _{\rm {S}}(t)$, which is the same form as this equation.

References

1. Redfield, A.G. (1965-01-01). "The Theory of Relaxation Processes". Advances in Magnetic and Optical Resonance. 1: 1–32. doi:10.1016/B978-1-4832-3114-3.50007-6. ISBN 9781483231143. ISSN 1057-2732.
2. Volkhard May, Oliver Kuehn: Charge and Energy Transfer Dynamics in Molecular Systems. Wiley-VCH, 2000 ISBN 3-527-29608-5
3. Heinz-Peter Breuer, Francesco Petruccione: Theory of Open Quantum Systems. Oxford, 2002 ISBN 978-0-19-852063-4

External links

• brmesolve Bloch-Redfield master equation solver from QuTiP.
Redheffer matrix

In mathematics, a Redheffer matrix, often denoted $A_{n}$ as studied by Redheffer (1977), is a square (0,1) matrix whose entries aij are 1 if i divides j or if j = 1; otherwise, aij = 0. It is useful in some contexts to express Dirichlet convolution, or convolved divisor sums, in terms of matrix products involving the transpose of the $n^{th}$ Redheffer matrix.

Variants and definitions of component matrices

Since the invertibility of the Redheffer matrices is complicated by the initial column of ones in the matrix, it is often convenient to express $A_{n}:=C_{n}+D_{n}$ where $C_{n}:=[c_{ij}]$ is defined to be the (0,1) matrix whose entries are one if and only if $j=1$ and $i\neq 1$. The remaining one-valued entries in $A_{n}$ then correspond to the divisibility condition reflected by the matrix $D_{n}$, which, as can plainly be seen by an application of Möbius inversion, is always invertible with inverse $D_{n}^{-1}=\left[\mu (j/i)M_{i}(j)\right]$. We then have a characterization of the singularity of $A_{n}$ expressed by

$\det \left(A_{n}\right)=\det \left(D_{n}^{-1}C_{n}+I_{n}\right).$

If we define the function

$M_{j}(i):={\begin{cases}1,&{\text{if }}j{\text{ divides }}i;\\0,&{\text{otherwise,}}\end{cases}}$

then we can define the $n^{th}$ Redheffer (transpose) matrix to be the n×n square matrix $R_{n}=[M_{j}(i)]_{1\leq i,j\leq n}$ in usual matrix notation. We will continue to make use of this notation throughout the next sections.

Examples

The matrix below is the 12 × 12 Redheffer matrix. In the split sum-of-matrices notation for $A_{12}:=C_{12}+D_{12}$, the entries below corresponding to the initial column of ones in $C_{n}$ are marked in blue.

$\left({\begin{matrix}1&1&1&1&1&1&1&1&1&1&1&1\\{\color {blue}\mathbf {1} }&1&0&1&0&1&0&1&0&1&0&1\\{\color {blue}\mathbf {1} }&0&1&0&0&1&0&0&1&0&0&1\\{\color {blue}\mathbf {1} }&0&0&1&0&0&0&1&0&0&0&1\\{\color {blue}\mathbf {1} }&0&0&0&1&0&0&0&0&1&0&0\\{\color {blue}\mathbf {1} }&0&0&0&0&1&0&0&0&0&0&1\\{\color {blue}\mathbf {1} }&0&0&0&0&0&1&0&0&0&0&0\\{\color {blue}\mathbf {1} }&0&0&0&0&0&0&1&0&0&0&0\\{\color {blue}\mathbf {1} }&0&0&0&0&0&0&0&1&0&0&0\\{\color {blue}\mathbf {1} }&0&0&0&0&0&0&0&0&1&0&0\\{\color {blue}\mathbf {1} }&0&0&0&0&0&0&0&0&0&1&0\\{\color {blue}\mathbf {1} }&0&0&0&0&0&0&0&0&0&0&1\end{matrix}}\right)$

A corresponding application of the Möbius inversion formula shows that the $n^{th}$ Redheffer transpose matrix is always invertible, with inverse entries given by

$R_{n}^{-1}=\left[M_{j}(i)\cdot \mu \left({\frac {i}{j}}\right)\right]_{1\leq i,j\leq n},$

where $\mu (n)$ denotes the Möbius function. In this case, we have that the $12\times 12$ inverse Redheffer transpose matrix is given by

$R_{12}^{-1}=\left({\begin{matrix}1&0&0&0&0&0&0&0&0&0&0&0\\-1&1&0&0&0&0&0&0&0&0&0&0\\-1&0&1&0&0&0&0&0&0&0&0&0\\0&-1&0&1&0&0&0&0&0&0&0&0\\-1&0&0&0&1&0&0&0&0&0&0&0\\1&-1&-1&0&0&1&0&0&0&0&0&0\\-1&0&0&0&0&0&1&0&0&0&0&0\\0&0&0&-1&0&0&0&1&0&0&0&0\\0&0&-1&0&0&0&0&0&1&0&0&0\\1&-1&0&0&-1&0&0&0&0&1&0&0\\-1&0&0&0&0&0&0&0&0&0&1&0\\0&1&0&-1&0&-1&0&0&0&0&0&1\\\end{matrix}}\right)$

Key properties

Determinants

The determinant of the n × n square Redheffer matrix is given by the Mertens function M(n). In particular, the matrix $A_{n}$ is not invertible precisely when the Mertens function is zero, as happens, in particular, whenever $M$ changes sign.
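The identity $\det(A_{n})=M(n)$ can be checked numerically for small $n$. The following self-contained C++ sketch does so; the use of fraction-free (Bareiss) elimination for the exact integer determinant, and all function names, are choices made here for illustration, not constructions from the cited literature.

#include <cstdio>
#include <utility>
#include <vector>
using std::vector;

// Exact integer determinant by fraction-free (Bareiss) elimination;
// every division below is exact, so no rounding occurs.
long long det(vector<vector<long long>> a) {
  int n = (int)a.size();
  long long prev = 1, sign = 1;
  for (int k = 0; k + 1 < n; k++) {
    if (a[k][k] == 0) {                    // pivot needed
      int p = -1;
      for (int i = k + 1; i < n; i++)
        if (a[i][k] != 0) { p = i; break; }
      if (p < 0) return 0;                 // whole column zero: singular
      std::swap(a[k], a[p]);
      sign = -sign;
    }
    for (int i = k + 1; i < n; i++)
      for (int j = k + 1; j < n; j++)
        a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) / prev;
    prev = a[k][k];
  }
  return sign * a[n - 1][n - 1];
}

int main() {
  const int N = 12;
  int mertens = 0;
  for (int n = 1; n <= N; n++) {
    // Moebius mu(n) by trial factorization; Mertens M(n) as its partial sum.
    int m = n, mu = 1;
    for (int p = 2; p * p <= m; p++)
      if (m % p == 0) {
        m /= p;
        if (m % p == 0) { mu = 0; break; } // square factor: mu(n) = 0
        mu = -mu;
      }
    if (mu != 0 && m > 1) mu = -mu;
    mertens += mu;
    // Redheffer matrix (1-based): a_ij = 1 iff j == 1 or i divides j.
    vector<vector<long long>> A(n, vector<long long>(n, 0));
    for (int i = 1; i <= n; i++)
      for (int j = 1; j <= n; j++)
        A[i - 1][j - 1] = (j == 1 || j % i == 0) ? 1 : 0;
    printf("n = %2d   det(A_n) = %3lld   M(n) = %3d\n", n, det(A), mertens);
  }
  return 0;
}

For $n\leq 12$ the two printed columns agree, e.g. $\det(A_{2})=M(2)=0$, exhibiting the first singular Redheffer matrix.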
As a corollary of the disproof[1] of the Mertens conjecture, it follows that the Mertens function changes sign, and is therefore zero, infinitely many times, so the Redheffer matrix $A_{n}$ is singular at infinitely many natural numbers. The determinants of the Redheffer matrices are immediately tied to the Riemann Hypothesis through this relation with the Mertens function, since the Hypothesis is equivalent to showing that $M(x)=O\left(x^{1/2+\varepsilon }\right)$ for all (sufficiently small) $\varepsilon >0$. Factorizations of sums encoded by these matrices In a somewhat unconventional construction which reinterprets the (0,1) matrix entries to denote inclusion in some increasing sequence of indexing sets, we can see that these matrices are also related to factorizations of Lambert series. This observation is offered insofar as, for a fixed arithmetic function f, the coefficients of the next Lambert series expansion over f provide a so-called inclusion mask for the indices over which we sum f to arrive at the series coefficients of these expansions. Notably, observe that $\sum _{d|n}f(d)=\sum _{k=1}^{n}M_{k}(n)\cdot f(k)=[q^{n}]\left(\sum _{n\geq 1}{\frac {f(n)q^{n}}{1-q^{n}}}\right).$ Now these divisor sums, which as we can see from the above expansion are codified by Boolean (zero-one valued) inclusion in the sets of divisors of a natural number n, permit a re-interpretation of the Lambert series generating functions which enumerate them via yet another matrix-based construction. Namely, Merca and Schmidt (2017-2018) proved invertible matrix factorizations expanding these generating functions in the form of [2] $\sum _{n\geq 1}{\frac {f(n)q^{n}}{1-q^{n}}}={\frac {1}{(q;q)_{\infty }}}\sum _{n\geq 1}\left(\sum _{k=1}^{n}s_{n,k}f(k)\right)q^{n},$ where $(q;q)_{\infty }$ denotes the infinite q-Pochhammer symbol and where the lower triangular matrix sequence is exactly generated as the coefficients of $s_{n,k}=[q^{n}]{\frac {q^{k}}{1-q^{k}}}(q;q)_{\infty }$, though these terms also have interpretations as differences of special even (odd) indexed partition functions. Merca and Schmidt (2017) also proved a simple inversion formula which allows the implicit function f to be expressed as a sum over the convolved coefficients $\ell (n)=(f\ast 1)(n)$ of the original Lambert series generating function in the form of [3] $f(n)=\sum _{d|n}\sum _{k=1}^{n}p(d-k)\mu (n/d)\left[\sum _{j\geq 0 \atop k-j\geq 0}\ell (k-j)[q^{j}](q;q)_{\infty }\right],$ where p(n) denotes the partition function, $\mu (n)$ is the Möbius function, and the coefficients of $(q;q)_{\infty }$ inherit a quadratic dependence on j through the pentagonal number theorem. This inversion formula is compared to the inverses (when they exist) of the Redheffer matrices $A_{n}$ for the sake of completion here.
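The displayed coefficient identity can be confirmed with a short computation. The sketch below (plain Python; the function names are illustrative) expands each term $f(k)q^{k}/(1-q^{k})$ as the geometric series $q^{k}+q^{2k}+q^{3k}+\cdots$ and compares the truncated coefficients against the direct divisor sums:

def lambert_coeffs(f, N):
    # Coefficients [q^n], 1 <= n <= N, of sum_{k>=1} f(k) q^k / (1 - q^k).
    c = [0] * (N + 1)
    for k in range(1, N + 1):
        for m in range(k, N + 1, k):
            c[m] += f(k)
    return c

def divisor_sum(f, n):
    return sum(f(d) for d in range(1, n + 1) if n % d == 0)

f = lambda k: k            # with f(n) = n the divisor sum is sigma(n)
N = 20
c = lambert_coeffs(f, N)
assert all(c[n] == divisor_sum(f, n) for n in range(1, N + 1))
print(c[1:])               # sigma(1), ..., sigma(20)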
Provided that the underlying so-termed mask matrix, which specifies the inclusion of indices in the divisor sums at hand, is invertible, this type of construction can be used to expand Redheffer-like matrices for other special number-theoretic sums beyond the forms classically studied here. For example, in 2018 Mousavi and Schmidt extended such matrix-based factorization lemmas to the cases of Anderson-Apostol divisor sums (of which Ramanujan sums are a notable special case) and sums indexed over the integers that are relatively prime to each n (for example, the tally classically denoted by the Euler phi function).[4] More to the point, the examples considered in the applications section below suggest a study of the properties of what can be considered generalized Redheffer matrices representing other special number theoretic sums. Spectral radius and eigenspaces • If we denote the spectral radius of $A_{n}$ by $\rho _{n}$, i.e., the dominant maximum-modulus eigenvalue in the spectrum of $A_{n}$, then $\lim _{n\rightarrow \infty }{\frac {\rho _{n}}{\sqrt {n}}}=1,$ which bounds the asymptotic behavior of the spectrum of $A_{n}$ when n is large. It can also be shown that $1+{\sqrt {n-1}}\leq \rho _{n}<{\sqrt {n}}+O(\log n)$, and by a careful analysis (see the characteristic polynomial expansions below) that $\rho _{n}={\sqrt {n}}+\log {\sqrt {n}}+O(1)$. • The matrix $A_{n}$ has eigenvalue one with multiplicity $n-\left\lfloor \log _{2}(n)\right\rfloor -1$. • The dimension of the eigenspace $E_{\lambda }(A_{n})$ corresponding to the eigenvalue $\lambda :=1$ is known to be $\left\lfloor {\frac {n}{2}}\right\rfloor -1$. In particular, this implies that $A_{n}$ is not diagonalizable whenever $n\geq 5$. • For all other eigenvalues $\lambda \neq 1$ of $A_{n}$, the dimension of the corresponding eigenspaces $E_{\lambda }(A_{n})$ is one. Characterizing eigenvectors We have that $[a_{1},a_{2},\ldots ,a_{n}]$ is an eigenvector of $A_{n}^{T}$ corresponding to some eigenvalue $\lambda \in \sigma (A_{n})$ in the spectrum of $A_{n}$ if and only if for $n\geq 2$ the following two conditions hold: $\lambda a_{n}=\sum _{d|n}a_{d}\quad {\text{ and }}\quad \lambda a_{1}=\sum _{k=1}^{n}a_{k}.$ If we restrict ourselves to the so-called non-trivial cases where $\lambda \neq 1$, then given any initial eigenvector component $a_{1}$ we can recursively compute the remaining n-1 components according to the formula $a_{j}={\frac {1}{\lambda -1}}\sum _{d|j \atop d<j}a_{d}.$ With this in mind, for $\lambda \neq 1$ we can define the sequence $v_{\lambda }(n):={\begin{cases}1,&n=1;\\{\frac {1}{\lambda -1}}\sum _{d|n \atop d\neq n}v_{\lambda }(d),&n\geq 2.\end{cases}}$ There are a couple of curious implications related to the definitions of these sequences. First, we have that $\lambda \in \sigma (A_{n})$ if and only if $\sum _{k=1}^{n}v_{\lambda }(k)=\lambda .$ Secondly, we have an established formula for the Dirichlet series, or Dirichlet generating function, over these sequences for fixed $\lambda \neq 1$ which holds for all $\Re (s)>1$ given by $\sum _{n\geq 1}{\frac {v_{\lambda }(n)}{n^{s}}}={\frac {\lambda -1}{\lambda -\zeta (s)}},$ where $\zeta (s)$ of course as usual denotes the Riemann zeta function.
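Both the spectral-radius estimate and the eigenvalue criterion $\sum _{k=1}^{n}v_{\lambda }(k)=\lambda $ can be probed numerically. A minimal sketch, reusing the redheffer() helper from the earlier sketch; numerical eigenvalues carry floating-point error, so the agreement is approximate:

import numpy as np

def v_seq(lam, n):
    # v_lambda(1) = 1; v_lambda(k) = (lambda - 1)^(-1) * (sum over proper divisors).
    seq = [1.0]
    for k in range(2, n + 1):
        seq.append(sum(seq[d - 1] for d in range(1, k) if k % d == 0)
                   / (lam - 1.0))
    return seq

n = 12
eigs = np.linalg.eigvals(redheffer(n).astype(float))
lam = max(eigs, key=abs).real      # dominant eigenvalue, rho_n
print(lam, np.sqrt(n))             # rho_12 is close to sqrt(12)
print(sum(v_seq(lam, n)), lam)     # these two values should agree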
Bounds and properties of non-trivial eigenvalues A graph theoretic interpretation to evaluating the zeros of the characteristic polynomial of $A_{n}$ and bounding its coefficients is given in Section 5.1 of [5]. Estimates of the sizes of the Jordan blocks of $A_{n}$ corresponding to the eigenvalue one are given in [6]. A brief overview of the properties of a modified approach to factorizing the characteristic polynomial, $p_{A_{n}}(x)$, of these matrices is given here without the full scope of the somewhat technical proofs justifying the bounds from the references cited above. Namely, let the shorthand $s:=\lfloor \log _{2}(n)\rfloor $ and define a sequence of auxiliary polynomial expansions according to the formula $f_{n}(t):={\frac {p_{A_{n}}(t+1)}{t^{n-s-1}}}=t^{s+1}-\sum _{k=1}^{s}v_{nk}t^{s-k}.$ Then we know that $f_{n}(t)$ has two real roots, denoted by $t_{n}^{\pm }$, which satisfy $t_{n}^{\pm }=\pm {\sqrt {n}}+\log {\sqrt {n}}+\gamma -{\frac {3}{2}}+O\left({\frac {\log ^{2}(n)}{\sqrt {n}}}\right),$ where $\gamma \approx 0.577216$ is Euler's classical gamma constant, and where the remaining coefficients of these polynomials are bounded by $|v_{nk}|\leq {\frac {n\cdot \log ^{k-1}(n)}{(k-1)!}}.$ The zeros of $f_{n}(t)$ other than these two dominant real roots are much more constrained in size; for $n\sim 10^{6}$ there are only about 20 remaining complex zeros, all of comparatively small modulus, as illustrated by a plot in the freely available article cited above. Applications and generalizations We provide a few examples of the utility of a Redheffer matrix interpreted as a (0,1) matrix whose entries denote inclusion in an increasing sequence of index sets. These examples should serve to update the at times dated historical perspective of these matrices, which are often treated as footnote-worthy only by virtue of the inherent, and deep, relation of their determinants to the Mertens function and to equivalent statements of the Riemann Hypothesis. This interpretation is a great deal more combinatorial in construction than typical treatments of the special Redheffer matrix determinants. Nonetheless, this combinatorial twist on enumerating special sequences of sums has been explored more recently in a number of papers and is a topic of active interest in pre-print archives. Before diving into the full construction of this spin on the Redheffer matrix variants $R_{n}$ defined above, observe that this type of expansion is in many ways essentially just another variation of the usage of a Toeplitz matrix to represent truncated power series expressions where the matrix entries are coefficients of the formal variable in the series. Let us explore an application of this particular view of a (0,1) matrix as masking inclusion of summation indices in a finite sum over some fixed function. See the citations to the references [7] and [8] for existing generalizations of the Redheffer matrices in the context of general arithmetic function cases.
The inverse matrix terms are referred to as a generalized Möbius function within the context of sums of this type in [9]. Matrix products expanding Dirichlet convolutions and Dirichlet inverses First, given any two non-identically-zero arithmetic functions f and g, we can provide explicit matrix representations which encode their Dirichlet convolution in rows indexed by the natural numbers $1\leq n\leq x$: $D_{f,g}(x):=\left[M_{d}(n)f(d)g(n/d)\right]_{1\leq d,n\leq x}={\begin{bmatrix}0&0&\cdots &0&g(x)\\0&0&\cdots &g(x-1)&g(x)\\\ldots &\ldots &\ddots &\ddots &\cdots \\g(1)&g(2)&\cdots &g(x-1)&g(x)\end{bmatrix}}{\begin{bmatrix}0&0&\cdots &0&f(1)\\0&0&\cdots &f(2)&f(1)\\\ldots &\ldots &\ddots &\ddots &\cdots \\f(x)&f(x-1)&\cdots &f(2)&f(1)\end{bmatrix}}R_{x}^{T}.$ Then letting $e^{T}:=[1,1,\ldots ,1]$ denote the vector of all ones, it is easily seen that the $n^{th}$ entry of the row vector $e^{T}\cdot D_{f,g}(x)$ gives the convolved Dirichlet sums $(f\ast g)(n)=\sum _{d|n}f(d)g(n/d),$ for all $1\leq n\leq x$ where the upper index $x\geq 2$ is arbitrary. One task that is particularly onerous given an arbitrary function f is to determine its Dirichlet inverse exactly without resorting to the standard recursive definition of this function via yet another convolved divisor sum involving the same function f with its under-specified inverse to be determined: $f^{-1}(n)\ =\ {\frac {-1}{f(1)}}\mathop {\sum _{d\,\mid \,n}} _{d<n}f\left({\frac {n}{d}}\right)f^{-1}(d),\ n>1{\text{ where }}f^{-1}(1):=1/f(1).$ It is clear that in general the Dirichlet inverse $f^{-1}(n)$ for f, i.e., the uniquely defined arithmetic function such that $(f^{-1}\ast f)(n)=\delta _{n,1}$, involves sums of nested divisor sums of depth from one to $\omega (n)$, where this upper bound is the prime omega function which counts the number of distinct prime factors of n. As this example shows, we can formulate an alternate way to construct the Dirichlet inverse function values via matrix inversion with our variant Redheffer matrices, $R_{n}$.
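The recursive definition above translates directly into code. A sketch over exact rationals (standard library only; the test function is arbitrary and purely illustrative) that also verifies the defining property $(f^{-1}\ast f)(n)=\delta _{n,1}$:

from fractions import Fraction

def dirichlet_inverse(f, N):
    # f is a 1-indexed arithmetic function with f(1) != 0.
    inv = {1: 1 / Fraction(f(1))}
    for n in range(2, N + 1):
        s = sum(f(n // d) * inv[d] for d in range(1, n) if n % d == 0)
        inv[n] = -s / f(1)
    return inv

def dirichlet_convolve(f, g, n):
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

f = lambda n: Fraction(n * n + 1)   # arbitrary test function, f(1) = 2
N = 30
inv = dirichlet_inverse(f, N)
assert all(dirichlet_convolve(f, lambda m: inv[m], n) == (1 if n == 1 else 0)
           for n in range(1, N + 1))
print([inv[n] for n in range(1, 8)])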
Generalizations of the Redheffer matrix forms: GCD sums and other matrices whose entries denote inclusion in special sets There are several often-cited articles from worthy journals that seek to establish expansions of number theoretic divisor sums, convolutions, and Dirichlet series (to name a few) through matrix representations. Besides non-trivial estimates on the corresponding spectrum and eigenspaces, with truly notable and important applications of these representations, the underlying machinery in representing sums of these forms by matrix products is to effectively define a so-termed masking matrix whose zero-or-one valued entries denote inclusion in an increasing sequence of sets of the natural numbers $\{1,2,\ldots ,n\}$. To make this construction precise, consider the following: Let ${\mathcal {A}}_{n}\subseteq [1,n]\cap \mathbb {Z} $ be a sequence of index sets, and for any fixed arithmetic function $f:\mathbb {N} \longrightarrow \mathbb {C} $ define the sums $S_{{\mathcal {A}},f}(n)\mapsto S_{f}(n):=\sum _{k\in {\mathcal {A}}_{n}}f(k).$ One of the classes of sums considered by Mousavi and Schmidt (2017) defines the relatively prime divisor sums by setting the index sets in the last definition to be ${\mathcal {A}}_{n}\mapsto {\mathcal {G}}_{n}:=\{1\leq d\leq n:\gcd(d,n)=1\}.$ This class of sums can be used to express important special arithmetic functions of number theoretic interest, including Euler's phi function (where classically we define $m:=0$) as $\varphi (n)=\sum _{d\in {\mathcal {G}}_{n}}d^{m},$ and even the Möbius function through its representation as a discrete (finite) Fourier transform: $\mu (n)=\sum _{\stackrel {1\leq k\leq n}{\gcd(k,\,n)=1}}e^{2\pi i{\frac {k}{n}}}.$ Citations in the full paper provide other examples of this class of sums, including applications to cyclotomic polynomials (and their logarithms). The referenced article by Mousavi and Schmidt (2017) develops a factorization-theorem-like treatment to expanding these sums which is an analog to the Lambert series factorization results given in the previous section above. The associated matrices and their inverses for this definition of the index sets ${\mathcal {A}}_{n}$ then allow us to perform the analog of Möbius inversion for divisor sums, which can be used to express the summand functions f as a quasi-convolved sum over the inverse matrix entries and the left-hand-side special functions, such as $\varphi (n)$ or $\mu (n)$, pointed out in the last pair of examples. These inverse matrices have many curious properties (and a good reference pulling together a summary of all of them is currently lacking) which are best conveyed to new readers by inspection.
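Both of these representations are easy to confirm computationally. A short sketch with the standard library only; the discrete Fourier transform is rounded to the nearest integer since $\mu (n)$ is real. The inverse-matrix example below then lets the reader inspect those properties directly:

from math import gcd
import cmath

def phi(n):
    # Euler's phi as the gcd-restricted sum with m = 0.
    return sum(1 for d in range(1, n + 1) if gcd(d, n) == 1)

def mu(n):
    # Moebius function as a discrete Fourier transform over G_n.
    s = sum(cmath.exp(2j * cmath.pi * k / n)
            for k in range(1, n + 1) if gcd(k, n) == 1)
    return round(s.real)

print([phi(n) for n in range(1, 13)])   # 1, 1, 2, 2, 4, 2, 6, 4, 6, 4, 10, 4
print([mu(n) for n in range(1, 13)])    # 1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0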
With this in mind, consider the case of the upper index $x:=21$ and the relevant matrices defined for this case given as follows: $\left({\begin{smallmatrix}1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&1&1&1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&0&1&0&1&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&1&0&1&1&0&1&1&0&0&0&0&0&0&0&0&0&0&0&0\\1&0&1&0&0&0&1&0&1&0&0&0&0&0&0&0&0&0&0&0\\1&1&1&1&1&1&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\1&0&0&0&1&0&1&0&0&0&1&0&0&0&0&0&0&0&0&0\\1&1&1&1&1&1&1&1&1&1&1&1&0&0&0&0&0&0&0&0\\1&0&1&0&1&0&0&0&1&0&1&0&1&0&0&0&0&0&0&0\\1&1&0&1&0&0&1&1&0&0&1&0&1&1&0&0&0&0&0&0\\1&0&1&0&1&0&1&0&1&0&1&0&1&0&1&0&0&0&0&0\\1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&0&0&0&0\\1&0&0&0&1&0&1&0&0&0&1&0&1&0&0&0&1&0&0&0\\1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&1&0&0\\1&0&1&0&0&0&1&0&1&0&1&0&1&0&0&0&1&0&1&0\\1&1&0&1&1&0&0&1&0&1&1&0&1&0&0&1&1&0&1&1\\\end{smallmatrix}}\right)^{-1}=\left({\begin{smallmatrix}1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\-1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\-1&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&-1&-1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\-1&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&0&0&-1&-1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\1&0&-1&0&-1&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\-1&0&2&-1&0&0&-1&1&0&0&0&0&0&0&0&0&0&0&0&0\\-1&0&0&0&1&0&-1&0&1&0&0&0&0&0&0&0&0&0&0&0\\1&0&-1&1&0&-1&1&-1&-1&1&0&0&0&0&0&0&0&0&0&0\\-1&0&1&0&0&0&-1&0&0&0&1&0&0&0&0&0&0&0&0&0\\1&0&-1&0&0&0&1&0&0&-1&-1&1&0&0&0&0&0&0&0&0\\3&0&-2&0&-2&0&2&0&-1&0&-1&0&1&0&0&0&0&0&0&0\\-3&0&1&0&3&0&-1&-1&1&0&0&0&-1&1&0&0&0&0&0&0\\-1&0&1&0&1&0&-1&0&0&0&0&0&-1&0&1&0&0&0&0&0\\1&0&0&0&-2&0&0&1&0&0&1&-1&1&-1&-1&1&0&0&0&0\\-3&0&2&0&2&0&-2&0&1&0&0&0&-1&0&0&0&1&0&0&0\\3&0&-2&0&-2&0&2&0&-1&0&0&0&1&0&0&-1&-1&1&0&0\\1&0&-1&0&0&0&1&0&-1&0&0&0&0&0&0&0&-1&0&1&0\\-1&0&0&-1&1&1&0&-1&2&-1&-1&1&-1&1&1&-1&0&0&-1&1\\\end{smallmatrix}}\right)$ Examples of invertible matrices which define other special sums with non-standard, however, clear applications should be catalogued and listed in this generalizations section for completeness. An existing summary of inversion relations, and in particular, exact criteria under which sums of these forms can be inverted and related is found in many references on orthogonal polynomials. Other good examples of this type of factorization treatment to inverting relations between sums over sufficiently invertible, or well enough behaved triangular sets of weight coefficients include the Mobius inversion formula, the binomial transform, and the Stirling transform, among others. See also • Redheffer star product References 1. Odlyzko, A. M.; te Riele, H. J. J. (1985), "Disproof of the Mertens conjecture" (PDF), Journal für die reine und angewandte Mathematik, 1985 (357): 138–160, doi:10.1515/crll.1985.357.138, ISSN 0075-4102, MR 0783538, S2CID 13016831, Zbl 0544.10047 2. M. Merca; M. D. Schmidt (2018). "Factorization Theorems for Generalized Lambert Series and Applications". The Ramanujan Journal. arXiv:1712.00611. Bibcode:2017arXiv171200611M. 3. M. Merca; M. D. Schmidt (2017). "Generating Special Arithmetic Functions by Lambert Series Factorizations". arXiv:1706.00393 [math.NT]. 4. H. Mousavi; M. D. Schmidt (2018). "Factorization Theorems for Relatively Prime Divisor Sums, GCD Sums and Generalized Ramanujan Sums". arXiv:1810.08373 [math.NT]. 5. Dana, Will. "Eigenvalues of the Redheffer matrix and their relation to the Mertens function" (PDF). Retrieved 12 December 2018. 6. D. W. Robinson; W. W. 
Barret. "The Jordan l-Structure of a Matrix of Redheffer" (PDF). Retrieved 12 December 2018. 7. Gillespie, B. R. "Extending Redheffer's Matrix to Arbitrary Arithmetic Functions". Retrieved 12 December 2018. 8. M. Li; Q. Tan. "Divisibility of matrices associated with multiplicative functions" (PDF). Discrete Mathematics: 2276–2282. Retrieved 12 December 2018. 9. J. Sandor; B. Crstici (2004). Handbook of Number Theory II. The Netherlands: Kluwer Academic Publishers. p. 112. doi:10.1007/1-4020-2547-5. ISBN 978-1-4020-2546-4. • Redheffer, Ray (1977), "Eine explizit lösbare Optimierungsaufgabe", Numerische Methoden bei Optimierungsaufgaben, Band 3 (Tagung, Math. Forschungsinst., Oberwolfach, 1976), Basel, Boston, Berlin: Birkhäuser, pp. 213–216, MR 0468170 • W. Barrett and T. Jarvis (1992). "Spectral properties of a matrix of Redheffer". Linear Algebra and Its Applications. 162–164: 673–683. doi:10.1016/0024-3795(92)90401-U. • Cardon, David A. (2010). "Matrices related to Dirichlet series" (PDF). Journal of Number Theory. 130: 27–39. arXiv:0809.0076. Bibcode:2008arXiv0809.0076C. doi:10.1016/j.jnt.2009.05.013. S2CID 11407312. Retrieved 12 December 2018. External links and citations to related work • Weisstein, Eric W. "Redheffer matrix". MathWorld. • Cardinal, Jean-Paul. "Symmetric matrices related to the Mertens function". Retrieved 12 December 2018. • Kline, Jeffery (2020). "On the eigenstructure of sparse matrices related to the prime number theorem". Linear Algebra and Its Applications. 584: 409–430. doi:10.1016/j.laa.2019.09.022. Matrix classes Explicitly constrained entries • Alternant • Anti-diagonal • Anti-Hermitian • Anti-symmetric • Arrowhead • Band • Bidiagonal • Bisymmetric • Block-diagonal • Block • Block tridiagonal • Boolean • Cauchy • Centrosymmetric • Conference • Complex Hadamard • Copositive • Diagonally dominant • Diagonal • Discrete Fourier Transform • Elementary • Equivalent • Frobenius • Generalized permutation • Hadamard • Hankel • Hermitian • Hessenberg • Hollow • Integer • Logical • Matrix unit • Metzler • Moore • Nonnegative • Pentadiagonal • Permutation • Persymmetric • Polynomial • Quaternionic • Signature • Skew-Hermitian • Skew-symmetric • Skyline • Sparse • Sylvester • Symmetric • Toeplitz • Triangular • Tridiagonal • Vandermonde • Walsh • Z Constant • Exchange • Hilbert • Identity • Lehmer • Of ones • Pascal • Pauli • Redheffer • Shift • Zero Conditions on eigenvalues or eigenvectors • Companion • Convergent • Defective • Definite • Diagonalizable • Hurwitz • Positive-definite • Stieltjes Satisfying conditions on products or inverses • Congruent • Idempotent or Projection • Invertible • Involutory • Nilpotent • Normal • Orthogonal • Unimodular • Unipotent • Unitary • Totally unimodular • Weighing With specific applications • Adjugate • Alternating sign • Augmented • Bézout • Carleman • Cartan • Circulant • Cofactor • Commutation • Confusion • Coxeter • Distance • Duplication and elimination • Euclidean distance • Fundamental (linear differential equation) • Generator • Gram • Hessian • Householder • Jacobian • Moment • Payoff • Pick • Random • Rotation • Seifert • Shear • Similarity • Symplectic • Totally positive • Transformation Used in statistics • Centering • Correlation • Covariance • Design • Doubly stochastic • Fisher information • Hat • Precision • Stochastic • Transition Used in graph theory • Adjacency • Biadjacency • Degree • Edmonds • Incidence • Laplacian • Seidel adjacency • Tutte Used in science and engineering • Cabibbo–Kobayashi–Maskawa • Density 
Redheffer star product In mathematics, the Redheffer star product is a binary operation on linear operators that arises in connection to solving coupled systems of linear equations. It was introduced by Raymond Redheffer in 1959,[1] and has subsequently been widely adopted in computational methods for scattering matrices. Given two scattering matrices from different linear scatterers, the Redheffer star product yields the combined scattering matrix produced when some or all of the output channels of one scatterer are connected to inputs of another scatterer. Definition Suppose $A,B$ are the block matrices $A={\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}$ and $B={\begin{pmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{pmatrix}}$, whose blocks $A_{ij},B_{kl}$ have the same shape when $ij=kl$. The Redheffer star product is then defined by:[1] $A\star B={\begin{pmatrix}B_{11}(I-A_{12}B_{21})^{-1}A_{11}&B_{12}+B_{11}(I-A_{12}B_{21})^{-1}A_{12}B_{22}\\A_{21}+A_{22}(I-B_{21}A_{12})^{-1}B_{21}A_{11}&A_{22}(I-B_{21}A_{12})^{-1}B_{22}\end{pmatrix}},$ assuming that $(I-A_{12}B_{21}),(I-B_{21}A_{12})$ are invertible, where $I$ is an identity matrix conformable to $A_{12}B_{21}$ or $B_{21}A_{12}$, respectively. This can be rewritten several ways making use of the so-called push-through identity $(I-AB)A=A(I-BA)\iff A(I-BA)^{-1}=(I-AB)^{-1}A$. Redheffer's definition extends beyond matrices to linear operators on a Hilbert space ${\mathcal {H}}$.[2] By definition, $A_{ij},B_{kl}$ are linear endomorphisms of ${\mathcal {H}}$, making $A,B$ linear endomorphisms of ${\mathcal {H}}\oplus {\mathcal {H}}$, where $\oplus $ is the direct sum. However, the star product still makes sense as long as the transformations are compatible, which is possible when $A\in {\mathcal {L(H_{\gamma }\oplus H_{\alpha },H_{\alpha }\oplus H_{\gamma })}}$ and $B\in {\mathcal {L(H_{\alpha }\oplus H_{\beta },H_{\beta }\oplus H_{\alpha })}}$ so that $A\star B\in {\mathcal {L(H_{\gamma }\oplus H_{\beta },H_{\beta }\oplus H_{\gamma })}}$. Properties Existence $(I-A_{12}B_{21})^{-1}$ exists if and only if $(I-B_{21}A_{12})^{-1}$ exists.[3] Thus when either exists, so does the Redheffer star product. Identity The star identity is the identity on ${\mathcal {H}}\oplus {\mathcal {H}}$, or ${\begin{pmatrix}I&0\\0&I\end{pmatrix}}$.[2] Associativity The star product is associative, provided all of the relevant matrices are defined.[3] Thus $A\star B\star C=(A\star B)\star C=A\star (B\star C)$. Adjoint Provided either side exists, the adjoint of a Redheffer star product is $(A\star B)^{*}=B^{*}\star A^{*}$.[2] Inverse If $B$ is the left matrix inverse of $A$ such that $BA=I$, $A_{22}$ has a right inverse, and $A\star B$ exists, then $A\star B=I$.[2] Similarly, if $B$ is the left matrix inverse of $A$ such that $BA=I$, $A_{11}$ has a right inverse, and $B\star A$ exists, then $B\star A=I$. Also, if $A\star B=I$ and $A_{22}$ has a left inverse then $BA=I$. The star inverse equals the matrix inverse and both can be computed with block inversion as[2] ${\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}^{-1}={\begin{pmatrix}(A_{11}-A_{12}A_{22}^{-1}A_{21})^{-1}&(A_{21}-A_{22}A_{12}^{-1}A_{11})^{-1}\\(A_{12}-A_{11}A_{21}^{-1}A_{22})^{-1}&(A_{22}-A_{21}A_{11}^{-1}A_{12})^{-1}\end{pmatrix}}.$
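The defining block formula translates into a few lines of code. A minimal NumPy sketch (the function name and the convention that every block is k x k are ad hoc assumptions), together with a check that the star identity behaves as stated above:

import numpy as np

def star(A, B, k):
    # Redheffer star product of two 2k x 2k matrices split into k x k blocks.
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    I = np.eye(k)
    X = np.linalg.inv(I - A12 @ B21)
    Y = np.linalg.inv(I - B21 @ A12)
    return np.block([[B11 @ X @ A11, B12 + B11 @ X @ A12 @ B22],
                     [A21 + A22 @ Y @ B21 @ A11, A22 @ Y @ B22]])

A = np.random.default_rng(0).normal(size=(4, 4))
print(np.allclose(star(A, np.eye(4), 2), A))   # True: I is the star identity
print(np.allclose(star(np.eye(4), A, 2), A))   # True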
Derivation from a linear system The star product arises from solving multiple linear systems of equations that share variables in common. Often, each linear system models the behavior of one subsystem in a physical process and by connecting the multiple subsystems into a whole, one can eliminate variables shared across subsystems in order to obtain the overall linear system. For instance, let $\{x_{i}\}_{i=1}^{6}$ be elements of a Hilbert space ${\mathcal {H}}$ such that[4] ${\begin{pmatrix}x_{3}\\x_{6}\end{pmatrix}}={\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}{\begin{pmatrix}x_{5}\\x_{4}\end{pmatrix}}$ and ${\begin{pmatrix}x_{1}\\x_{4}\end{pmatrix}}={\begin{pmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{pmatrix}}{\begin{pmatrix}x_{3}\\x_{2}\end{pmatrix}}$ giving the following $4$ equations in $6$ variables: ${\begin{aligned}x_{3}&=A_{11}x_{5}+A_{12}x_{4}\\x_{6}&=A_{21}x_{5}+A_{22}x_{4}\\x_{1}&=B_{11}x_{3}+B_{12}x_{2}\\x_{4}&=B_{21}x_{3}+B_{22}x_{2}\end{aligned}}$. By substituting the first equation into the last we find: $x_{4}=(I-B_{21}A_{12})^{-1}(B_{21}A_{11}x_{5}+B_{22}x_{2})$. By substituting the last equation into the first we find: $x_{3}=(I-A_{12}B_{21})^{-1}(A_{11}x_{5}+A_{12}B_{22}x_{2})$. Eliminating $x_{3},x_{4}$ by substituting the two preceding equations into those for $x_{1},x_{6}$ results in the Redheffer star product being the matrix such that:[1] ${\begin{pmatrix}x_{1}\\x_{6}\end{pmatrix}}=(A\star B){\begin{pmatrix}x_{5}\\x_{2}\end{pmatrix}}.$
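This elimination can be confirmed numerically: solve the four equations directly for random data and compare with the star product. The sketch below reuses the star() helper from the previous sketch:

import numpy as np

rng = np.random.default_rng(1)
k = 3
A, B = rng.normal(size=(2 * k, 2 * k)), rng.normal(size=(2 * k, 2 * k))
A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
x5, x2 = rng.normal(size=k), rng.normal(size=k)

# Solve for the shared variable x4, then back-substitute:
x4 = np.linalg.solve(np.eye(k) - B21 @ A12, B21 @ A11 @ x5 + B22 @ x2)
x3 = A11 @ x5 + A12 @ x4
x6 = A21 @ x5 + A22 @ x4
x1 = B11 @ x3 + B12 @ x2

lhs = star(A, B, k) @ np.concatenate([x5, x2])
print(np.allclose(lhs, np.concatenate([x1, x6])))   # True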
Connection to scattering matrices Many scattering processes take on a form that motivates a different convention for the block structure of the linear system of a scattering matrix. Typically a physical device that performs a linear transformation on inputs, such as linear dielectric media on electromagnetic waves or in quantum mechanical scattering, can be encapsulated as a system which interacts with the environment through various ports, each of which accepts inputs and returns outputs. It is conventional to use a different notation for the Hilbert space, ${\mathcal {H}}_{i}$, whose subscript labels a port on the device. Additionally, any element, $c_{i}^{\pm }\in {\mathcal {H}}_{i}$, has an additional superscript labeling the direction of travel (where + indicates moving from port i to i+1 and - indicates the reverse). The equivalent notation for a Redheffer transformation, $R\in {\mathcal {L(H_{1}\oplus H_{2},H_{2}\oplus H_{1})}}$, used in the previous section is ${\begin{pmatrix}c_{2}^{+}\\c_{1}^{-}\end{pmatrix}}={\begin{pmatrix}R_{11}&R_{12}\\R_{21}&R_{22}\end{pmatrix}}{\begin{pmatrix}c_{1}^{+}\\c_{2}^{-}\end{pmatrix}}.$ The action of the S-matrix, $S\in {\mathcal {L(H_{1}\oplus H_{2},H_{1}\oplus H_{2})}}$, is defined with an additional flip compared to Redheffer's definition:[5] ${\begin{pmatrix}c_{1}^{-}\\c_{2}^{+}\end{pmatrix}}={\begin{pmatrix}S_{11}&S_{12}\\S_{21}&S_{22}\end{pmatrix}}{\begin{pmatrix}c_{1}^{+}\\c_{2}^{-}\end{pmatrix}},$ so $S={\begin{pmatrix}0&I\\I&0\end{pmatrix}}R.$ Note that in order for the off-diagonal identity matrices to be defined, we require ${\mathcal {H_{1},H_{2}}}$ to be the same underlying Hilbert space. (The subscript does not imply any difference, but is just a label for bookkeeping.) The star product, $\star _{S}$, for two S-matrices, $A,B$, is given by[5] $A\star _{S}B={\begin{pmatrix}A_{11}+A_{12}(I-B_{11}A_{22})^{-1}B_{11}A_{21}&A_{12}(I-B_{11}A_{22})^{-1}B_{12}\\B_{21}(I-A_{22}B_{11})^{-1}A_{21}&B_{22}+B_{21}(I-A_{22}B_{11})^{-1}A_{22}B_{12}\end{pmatrix}},$ where $A\in {\mathcal {L(H_{1}\oplus H_{2},H_{1}\oplus H_{2})}}$ and $B\in {\mathcal {L(H_{2}\oplus H_{3},H_{2}\oplus H_{3})}}$, so $A\star _{S}B\in {\mathcal {L(H_{1}\oplus H_{3},H_{1}\oplus H_{3})}}$. Properties These are analogues of the properties of $\star $ for $\star _{S}$. Most of them follow from the correspondence $J(A\star B)=(JA)\star _{S}(JB)$. $J$, the exchange operator, is also the S-matrix star identity defined below. For the rest of this section, $A,B,C$ are S-matrices. Existence $A\star _{S}B$ exists when either $(I-A_{22}B_{11})^{-1}$ or $(I-B_{11}A_{22})^{-1}$ exists. Identity The S-matrix star identity, $J$, is $J={\begin{pmatrix}0&I\\I&0\end{pmatrix}}$. This means $J\star _{S}S=S\star _{S}J=S$. Associativity Associativity of $\star _{S}$ follows from associativity of $\star $ and of matrix multiplication. Adjoint From the correspondence between $\star $ and $\star _{S}$, and the adjoint of $\star $, we have that $(A\star _{S}B)^{*}=J(B^{*}\star _{S}A^{*})J$. Inverse The matrix $\Sigma $ that is the S-matrix star product inverse of $S$ in the sense that $\Sigma \star _{S}S=S\star _{S}\Sigma =J$ is $JS^{-1}J$ where $S^{-1}$ is the ordinary matrix inverse and $J$ is as defined above. Connection to transfer matrices Observe that a scattering matrix can be rewritten as a transfer matrix, $T$, with action ${\begin{pmatrix}c_{2}^{+}\\c_{2}^{-}\end{pmatrix}}=T{\begin{pmatrix}c_{1}^{+}\\c_{1}^{-}\end{pmatrix}}$, where[6] $T={\begin{pmatrix}T_{\scriptscriptstyle ++}&T_{\scriptscriptstyle +-}\\T_{\scriptscriptstyle -+}&T_{\scriptscriptstyle --}\end{pmatrix}}={\begin{pmatrix}S_{21}-S_{22}S_{12}^{-1}S_{11}&S_{22}S_{12}^{-1}\\-S_{12}^{-1}S_{11}&S_{12}^{-1}\end{pmatrix}}.$ Here the subscripts relate the different directions of propagation at each port. As a result, the star product of scattering matrices ${\begin{pmatrix}c_{3}^{+}\\c_{1}^{-}\end{pmatrix}}=(S^{A}\star S^{B}){\begin{pmatrix}c_{1}^{+}\\c_{3}^{-}\end{pmatrix}}$ is analogous to the following matrix multiplication of transfer matrices[7] ${\begin{pmatrix}c_{3}^{+}\\c_{3}^{-}\end{pmatrix}}=(T^{A}T^{B}){\begin{pmatrix}c_{1}^{+}\\c_{1}^{-}\end{pmatrix}},$ where $T^{A}\in {\mathcal {L(H_{1}\oplus H_{1},H_{2}\oplus H_{2})}}$ and $T^{B}\in {\mathcal {L(H_{2}\oplus H_{2},H_{3}\oplus H_{3})}}$, so $T^{A}T^{B}\in {\mathcal {L(H_{1}\oplus H_{1},H_{3}\oplus H_{3})}}$. Generalizations Redheffer generalized the star product in several ways: Arbitrary bijections If there is a bijection $M\leftrightarrow L$ given by $L=f(M)$ then an associative star product can be defined by:[7] $A\star B=f^{-1}(f(A)f(B))$. The particular star product defined by Redheffer above is obtained from: $f(A)=((I-A)+(I+A)J)^{-1}((A-I)+(A+I)J)$ where $J(x,y)=(-x,y)$. 3 × 3 star product A star product can also be defined for 3 × 3 matrices.[8] Applications to scattering matrices In physics, the Redheffer star product appears when constructing a total scattering matrix from two or more subsystems. If system $A$ has a scattering matrix $S^{A}$ and system $B$ has scattering matrix $S^{B}$, then the combined system $AB$ has scattering matrix $S^{AB}=S^{A}\star S^{B}$.[5]
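The $\star _{S}$ formula can be implemented in the same way as $\star $. A sketch (NumPy assumed, ad hoc names) that also checks that the exchange matrix $J$ acts as the $\star _{S}$ identity:

import numpy as np

def star_s(A, B, k):
    # S-matrix star product, following the block formula above.
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    I = np.eye(k)
    X = np.linalg.inv(I - B11 @ A22)
    Y = np.linalg.inv(I - A22 @ B11)
    return np.block([[A11 + A12 @ X @ B11 @ A21, A12 @ X @ B12],
                     [B21 @ Y @ A21, B22 + B21 @ Y @ A22 @ B12]])

k = 2
J = np.block([[np.zeros((k, k)), np.eye(k)],
              [np.eye(k), np.zeros((k, k))]])
S = np.random.default_rng(2).normal(size=(2 * k, 2 * k))
print(np.allclose(star_s(J, S, k), S))   # True: J is the star_S identity
print(np.allclose(star_s(S, J, k), S))   # True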
Transmission line theory Many physical processes, including radiative transfer, neutron diffusion, circuit theory, and others are described by scattering processes whose formulation depends on the dimension of the process and the representation of the operators.[6] For probabilistic problems, the scattering equation may appear in a Kolmogorov-type equation. Electromagnetism The Redheffer star product can be used to solve for the propagation of electromagnetic fields in stratified, multilayered media.[9] Each layer in the structure has its own scattering matrix and the total structure's scattering matrix can be described as the star product between all of the layers.[10] A free software program that simulates electromagnetism in layered media is the Stanford Stratified Structure Solver. Semiconductor interfaces Kinetic models of consecutive semiconductor interfaces can use a scattering matrix formulation to model the motion of electrons between the semiconductors.[11] Factorization on graphs In the analysis of Schrödinger operators on graphs, the scattering matrix of a graph can be obtained as a generalized star product of the scattering matrices corresponding to its subgraphs.[12] References 1. Redheffer, Raymond (1959). "Inequalities for a Matrix Riccati Equation". Journal of Mathematics and Mechanics. 8 (3): 349–367. ISSN 0095-9057. JSTOR 24900576. 2. Redheffer, R. M. (1960). "On a Certain Linear Fractional Transformation". Journal of Mathematics and Physics. 39 (1–4): 269–286. doi:10.1002/sapm1960391269. ISSN 1467-9590. 3. Mistiri, F. (1986-01-01). "The Star-product and its Algebraic Properties". Journal of the Franklin Institute. 321 (1): 21–38. doi:10.1016/0016-0032(86)90053-0. ISSN 0016-0032. 4. Liu, Victor. "On scattering matrices and the Redheffer star product" (PDF). Retrieved 26 June 2021. 5. Rumpf, Raymond C. (2011). "Improved Formulation of Scattering Matrices for Semi-Analytical Methods that is Consistent with Convention". Progress in Electromagnetics Research B. 35: 241–261. doi:10.2528/PIERB11083107. ISSN 1937-6472. 6. Redheffer, Raymond (1962). "On the Relation of Transmission-Line Theory to Scattering and Transfer". Journal of Mathematics and Physics. 41 (1–4): 1–41. doi:10.1002/sapm19624111. ISSN 1467-9590. 7. Redheffer, Raymond (1960). "Supplementary Note on Matrix Riccati Equations". Journal of Mathematics and Mechanics. 9 (5): 745–748. ISSN 0095-9057. JSTOR 24900784. 8. Redheffer, Raymond M. (1960). "The Mycielski-Paszkowski Diffusion Problem". Journal of Mathematics and Mechanics. 9 (4): 607–621. ISSN 0095-9057. JSTOR 24900958. 9. Ko, D. Y. K.; Sambles, J. R. (1988-11-01). "Scattering matrix method for propagation of radiation in stratified media: attenuated total reflection studies of liquid crystals". JOSA A. 5 (11): 1863–1866. Bibcode:1988JOSAA...5.1863K. doi:10.1364/JOSAA.5.001863. ISSN 1520-8532. 10. Whittaker, D. M.; Culshaw, I. S. (1999-07-15). "Scattering-matrix treatment of patterned multilayer photonic structures". Physical Review B. 60 (4): 2610–2618. Bibcode:1999PhRvB..60.2610W. doi:10.1103/PhysRevB.60.2610. 11. Gosse, Laurent (2014-01-01). "Redheffer Products and Numerical Approximation of Currents in One-Dimensional Semiconductor Kinetic Models". Multiscale Modeling & Simulation. 12 (4): 1533–1560. doi:10.1137/130939584. ISSN 1540-3459. 12. Kostrykin, V.; Schrader, R. (2001-03-22). "The generalized star product and the factorization of scattering matrices on graphs". Journal of Mathematical Physics. 42 (4): 1563–1598. arXiv:math-ph/0008022.
Bibcode:2001JMP....42.1563K. doi:10.1063/1.1354641. ISSN 0022-2488. S2CID 6791638.
Redshift conjecture In mathematics, more specifically in chromatic homotopy theory, the redshift conjecture states, roughly, that algebraic K-theory $K(R)$ has chromatic level one higher than that of a complex-oriented ring spectrum R.[1] It was formulated by John Rognes in a lecture at Schloss Ringberg, Germany, in January 1999, and made more precise by him in a lecture at Mathematische Forschungsinstitut Oberwolfach, Germany, in September 2000.[2] In July 2022, Burklund, Schlank and Yuan announced a solution of a version of the redshift conjecture for arbitrary $E_{\infty }$-ring spectra, after Hahn and Wilson did so earlier in the case of the truncated Brown-Peterson spectra BP<n>.[3] References 1. Lawson, Tyler (2013). "Future directions" (PDF). Talbot 2013: Chromatic Homotopy Theory. MIT Talbot Workshop. 2. Rognes, John (2000). "Algebraic K-theory of finitely presented ring spectra" (PDF). Oberwolfach talk. 3. Burklund, Robert; Schlank, Tomer M.; Yuan, Allen (2022). "The Chromatic Nullstellensatz". arXiv:2207.09929 [math.AT]. Notes • Ausoni, C.; Rognes, J. (2008). "The chromatic red-shift in algebraic K-theory" (PDF). Enseign. Math. 54 (2): 9–11. • Westerland, C. (2017). "A higher chromatic analogue of the image of J". Geometry & Topology. 21 (2): 1033–93. arXiv:1210.2472. doi:10.2140/gt.2017.21.1033. S2CID 44643197. • Burklund, Robert; Schlank, Tomer M.; Yuan, Allen (2022). "The Chromatic Nullstellensatz". arXiv:2207.09929 [math.AT]. Further reading • Dundas, Bjørn Ian; Goodwillie, Thomas G.; McCarthy, Randy (2012). The Local Structure of Algebraic K-Theory (PDF). Algebra and Applications. Vol. 18. Springer-Verlag. p. 313 (or 301). ISBN 978-1447143932. External links • red-shift conjecture at the nLab
Reduced chi-squared statistic In statistics, the reduced chi-square statistic is used extensively in goodness of fit testing. It is also known as mean squared weighted deviation (MSWD) in isotopic dating[1] and variance of unit weight in the context of weighted least squares.[2][3] Its square root is called regression standard error,[4] standard error of the regression,[5][6] or standard error of the equation[7] (see Ordinary least squares § Reduced chi-squared). Definition It is defined as chi-square per degree of freedom:[8][9][10][11]: 85 [12][13][14][15] $\chi _{\nu }^{2}={\frac {\chi ^{2}}{\nu }},$ where the chi-squared is a weighted sum of squared deviations: $\chi ^{2}=\sum _{i}{\frac {(O_{i}-C_{i})^{2}}{\sigma _{i}^{2}}}$ with inputs: variance $\sigma _{i}^{2}$, observations O, and calculated data C.[8] The number of degrees of freedom, $\nu =n-m$, equals the number of observations n minus the number of fitted parameters m. In weighted least squares, the definition is often written in matrix notation as $\chi _{\nu }^{2}={\frac {r^{\mathrm {T} }Wr}{\nu }},$ where r is the vector of residuals, and W is the weight matrix, the inverse of the input (diagonal) covariance matrix of observations. If W is non-diagonal, then generalized least squares applies. In ordinary least squares, the definition simplifies to: $\chi _{\nu }^{2}={\frac {\mathrm {RSS} }{\nu }},$ $\mathrm {RSS} =\sum r^{2},$ where the numerator is the residual sum of squares (RSS). When the fit is just an ordinary mean, $\chi _{\nu }^{2}$ equals the sample variance (its square root is the sample standard deviation). Discussion As a general rule, when the variance of the measurement error is known a priori, a $\chi _{\nu }^{2}\gg 1$ indicates a poor model fit. A $\chi _{\nu }^{2}>1$ indicates that the fit has not fully captured the data (or that the error variance has been underestimated). In principle, a value of $\chi _{\nu }^{2}$ around $1$ indicates that the extent of the match between observations and estimates is in accord with the error variance. A $\chi _{\nu }^{2}<1$ indicates that the model is "over-fitting" the data: either the model is improperly fitting noise, or the error variance has been overestimated.[11]: 89  When the variance of the measurement error is only partially known, the reduced chi-squared may serve as a correction estimated a posteriori.
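As a concrete illustration of the definition, the sketch below (NumPy assumed; the synthetic straight-line data are purely illustrative) fits a line by least squares and evaluates $\chi _{\nu }^{2}$; with correctly estimated errors the value should come out near 1:

import numpy as np

def reduced_chi_squared(obs, calc, sigma, n_params):
    # chi-square per degree of freedom, nu = n - m.
    resid = (np.asarray(obs) - np.asarray(calc)) / np.asarray(sigma)
    return np.sum(resid ** 2) / (len(obs) - n_params)

rng = np.random.default_rng(3)
x = np.linspace(0.0, 9.0, 10)
sigma = np.full_like(x, 0.5)                      # known 1-sigma errors
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, x.size)  # noisy line
slope, intercept = np.polyfit(x, y, 1)            # m = 2 fitted parameters
print(reduced_chi_squared(y, intercept + slope * x, sigma, 2))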
Applications Geochronology In geochronology, the MSWD is a measure of goodness of fit that takes into account the relative importance of both the internal and external reproducibility, with most common usage in isotopic dating.[16][17][1][18][19][20] In general: • MSWD = 1 if the age data fit a univariate normal distribution in t (for the arithmetic mean age) or log(t) (for the geometric mean age) space, or if the compositional data fit a bivariate normal distribution in [log(U/He),log(Th/He)]-space (for the central age). • MSWD < 1 if the observed scatter is less than that predicted by the analytical uncertainties. In this case, the data are said to be "underdispersed", indicating that the analytical uncertainties were overestimated. • MSWD > 1 if the observed scatter exceeds that predicted by the analytical uncertainties. In this case, the data are said to be "overdispersed". This situation is the rule rather than the exception in (U-Th)/He geochronology, indicating an incomplete understanding of the isotope system. Several reasons have been proposed to explain the overdispersion of (U-Th)/He data, including unevenly distributed U-Th concentrations and radiation damage. Often the geochronologist will determine a series of age measurements on a single sample, with the measured value $x_{i}$ having a weighting $w_{i}$ and an associated error $\sigma _{x_{i}}$ for each age determination. As regards weighting, one can either weight all of the measured ages equally, or weight them by the proportion of the sample that they represent. For example, if two thirds of the sample was used for the first measurement and one third for the second and final measurement, then one might weight the first measurement twice that of the second. The arithmetic mean of the age determinations is ${\overline {x}}={\frac {\sum _{i=1}^{N}x_{i}}{N}},$ but this value can be misleading, unless each determination of the age is of equal significance. When each measured value can be assumed to have the same weighting, or significance, the biased and unbiased (or "sample" and "population" respectively) estimators of the variance are computed as follows: $\sigma ^{2}={\frac {\sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}}{N}}{\text{ and }}s^{2}={\frac {N}{N-1}}\cdot \sigma ^{2}={\frac {1}{N-1}}\cdot \sum _{i=1}^{N}(x_{i}-{\overline {x}})^{2}.$ The standard deviation is the square root of the variance. When individual determinations of an age are not of equal significance, it is better to use a weighted mean to obtain an "average" age, as follows: ${\overline {x}}^{*}={\frac {\sum _{i=1}^{N}w_{i}x_{i}}{\sum _{i=1}^{N}w_{i}}}.$ The biased weighted estimator of variance can be shown to be $\sigma ^{2}={\frac {\sum _{i=1}^{N}w_{i}(x_{i}-{\overline {x}}^{*})^{2}}{\sum _{i=1}^{N}w_{i}}},$ which can be computed as $\sigma ^{2}={\frac {\sum _{i=1}^{N}w_{i}x_{i}^{2}\cdot \sum _{i=1}^{N}w_{i}-{\big (}\sum _{i=1}^{N}w_{i}x_{i}{\big )}^{2}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}}}.$ The unbiased weighted estimator of the sample variance can be computed as follows: $s^{2}={\frac {\sum _{i=1}^{N}w_{i}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}\cdot {\sum _{i=1}^{N}w_{i}(x_{i}-{\overline {x}}^{*})^{2}}.$ Again, the corresponding standard deviation is the square root of the variance. The unbiased weighted estimator of the sample variance can also be computed on the fly as follows: $s^{2}={\frac {\sum _{i=1}^{N}w_{i}x_{i}^{2}\cdot \sum _{i=1}^{N}w_{i}-{\big (}\sum _{i=1}^{N}w_{i}x_{i}{\big )}^{2}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}.$ The unweighted mean square of the weighted deviations (unweighted MSWD) can then be computed, as follows: ${\text{MSWD}}_{u}={\frac {1}{N-1}}\cdot \sum _{i=1}^{N}{\frac {(x_{i}-{\overline {x}})^{2}}{\sigma _{x_{i}}^{2}}}.$ By analogy, the weighted mean square of the weighted deviations (weighted MSWD) can be computed as follows: ${\text{MSWD}}_{w}={\frac {\sum _{i=1}^{N}w_{i}}{{\big (}\sum _{i=1}^{N}w_{i}{\big )}^{2}-\sum _{i=1}^{N}w_{i}^{2}}}\cdot \sum _{i=1}^{N}{\frac {w_{i}(x_{i}-{\overline {x}}^{*})^{2}}{(\sigma _{x_{i}})^{2}}}.$
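The weighted mean and MSWD formulas above translate directly into code; a minimal sketch with NumPy, using hypothetical age data for illustration:

import numpy as np

def weighted_mean(x, w):
    return np.sum(w * x) / np.sum(w)

def mswd_unweighted(x, sigma):
    xbar = np.mean(x)
    return np.sum((x - xbar) ** 2 / sigma ** 2) / (len(x) - 1)

def mswd_weighted(x, w, sigma):
    xbar = weighted_mean(x, w)
    sw, sw2 = np.sum(w), np.sum(w ** 2)
    return sw / (sw ** 2 - sw2) * np.sum(w * (x - xbar) ** 2 / sigma ** 2)

ages = np.array([100.2, 99.5, 101.1, 100.7])   # hypothetical ages (Ma)
errs = np.array([0.6, 0.8, 0.7, 0.5])          # 1-sigma uncertainties
wts = np.ones_like(ages)                       # equal weighting
# With equal weights the weighted and unweighted MSWD coincide:
print(weighted_mean(ages, wts),
      mswd_unweighted(ages, errs),
      mswd_weighted(ages, wts, errs))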
Rasch Analysis In data analysis based on the Rasch Model, the reduced chi-squared statistic is called the outfit mean-square statistic, and the information-weighted reduced chi-squared statistic is called the infit mean-square statistic.[21] References 1. Wendt, I., and Carl, C., 1991, The statistical distribution of the mean squared weighted deviation, Chemical Geology, 275–285. 2. Strang, Gilbert; Borre, Kae (1997). Linear algebra, geodesy, and GPS. Wellesley-Cambridge Press. p. 301. ISBN 9780961408862. 3. Koch, Karl-Rudolf (2013). Parameter Estimation and Hypothesis Testing in Linear Models. Springer Berlin Heidelberg. Section 3.2.5. ISBN 9783662039762. 4. Julian Faraway (2000), Practical Regression and Anova using R 5. Kenney, J.; Keeping, E. S. (1963). Mathematics of Statistics. van Nostrand. p. 187. 6. Zwillinger, D. (1995). Standard Mathematical Tables and Formulae. Chapman&Hall/CRC. p. 626. ISBN 0-8493-2479-3. 7. Hayashi, Fumio (2000). Econometrics. Princeton University Press. ISBN 0-691-01018-8. 8. Laub, Charlie; Kuhl, Tonya L. (n.d.), How Bad is Good? A Critical Look at the Fitting of Reflectivity Models using the Reduced Chi-Square Statistic (PDF), University of California, Davis, archived from the original (PDF) on 6 October 2016, retrieved 30 May 2015 9. Taylor, John Robert (1997), An introduction to error analysis, University Science Books, p. 268 10. Kirkman, T. W. (n.d.), Chi-Square Curve Fitting, retrieved 30 May 2015 11. Bevington, Philip R. (1969), Data Reduction and Error Analysis for the Physical Sciences, New York: McGraw-Hill 12. Measurements and Their Uncertainties: A Practical Guide to Modern Error Analysis, By Ifan Hughes, Thomas Hase 13. Dealing with Uncertainties: A Guide to Error Analysis, By Manfred Drosg 14. Practical Statistics for Astronomers, By J. V. Wall, C. R. Jenkins 15. Computational Methods in Physics and Engineering, By Samuel Shaw Ming Wong 16. Dickin, A. P. 1995. Radiogenic Isotope Geology. Cambridge University Press, Cambridge, UK, 1995, ISBN 0-521-43151-4, ISBN 0-521-59891-5 17. McDougall, I. and Harrison, T. M. 1988. Geochronology and Thermochronology by the 40Ar/39Ar Method. Oxford University Press. 18. Lance P. Black, Sandra L. Kamo, Charlotte M. Allen, John N. Aleinikoff, Donald W. Davis, Russell J. Korsch, Chris Foudoulis 2003. TEMORA 1: a new zircon standard for Phanerozoic U–Pb geochronology. Chemical Geology 200, 155–170. 19. M. J. Streule, R. J. Phillips, M. P. Searle, D. J. Waters and M. S. A. Horstwood 2009. Evolution and chronology of the Pangong Metamorphic Complex adjacent to the Karakoram Fault, Ladakh: constraints from thermobarometry, metamorphic modelling and U-Pb geochronology. Journal of the Geological Society 166, 919–932 doi:10.1144/0016-76492008-117 20. Roger Powell, Janet Hergt, Jon Woodhead 2002. Improving isochron calculations with robust statistics and the bootstrap. Chemical Geology 185, 191–204. 21. Linacre, J.M. (2002). "What do Infit and Outfit, Mean-square and Standardized mean?". Rasch Measurement Transactions. 16 (2): 878.
Cone (topology) In topology, especially algebraic topology, the cone of a topological space $X$ is intuitively obtained by stretching X into a cylinder and then collapsing one of its end faces to a point. The cone of X is denoted by $CX$ or by $\operatorname {cone} (X)$. Definitions Formally, the cone of X is defined as: $CX=(X\times [0,1])\cup _{p}v\ =\ \varinjlim {\bigl (}(X\times [0,1])\hookleftarrow (X\times \{0\})\xrightarrow {p} v{\bigr )},$ where $v$ is a point (called the vertex of the cone) and $p$ is the projection to that point. In other words, it is the result of attaching the cylinder $X\times [0,1]$ by its face $X\times \{0\}$ to a point $v$ along the projection $p:{\bigl (}X\times \{0\}{\bigr )}\to v$. If $X$ is a non-empty compact subspace of Euclidean space, the cone on $X$ is homeomorphic to the union of segments from $X$ to any fixed point $v\not \in X$ such that these segments intersect only by $v$ itself. That is, the topological cone agrees with the geometric cone for compact spaces when the latter is defined. However, the topological cone construction is more general. The cone is a special case of a join: $CX\simeq X\star \{v\}=$ the join of $X$ with a single point $v\not \in X$.[1]: 76  Examples Here we often use a geometric cone ($CX$ where $X$ is a non-empty compact subspace of Euclidean space). The considered spaces are compact, so we get the same result up to homeomorphism. • The cone over a point p of the real line is a line-segment in $\mathbb {R} ^{2}$, $\{p\}\times [0,1]$. • The cone over two points {0, 1} is a "V" shape with endpoints at {0} and {1}. • The cone over a closed interval I of the real line is a filled-in triangle (with one of the edges being I), otherwise known as a 2-simplex (see the final example). • The cone over a polygon P is a pyramid with base P. • The cone over a disk is the solid cone of classical geometry (hence the concept's name). • The cone over a circle given by $\{(x,y,z)\in \mathbb {R} ^{3}\mid x^{2}+y^{2}=1{\mbox{ and }}z=0\}$ is the curved surface of the solid cone: $\{(x,y,z)\in \mathbb {R} ^{3}\mid x^{2}+y^{2}=(z-1)^{2}{\mbox{ and }}0\leq z\leq 1\}.$ This in turn is homeomorphic to the closed disc. More general examples:[1]: 77, Exercise.1  • The cone over an n-sphere is homeomorphic to the closed (n + 1)-ball. • The cone over an n-ball is also homeomorphic to the closed (n + 1)-ball. • The cone over an n-simplex is an (n + 1)-simplex. Properties All cones are path-connected since every point can be connected to the vertex point. Furthermore, every cone is contractible to the vertex point by the homotopy $h_{t}(x,s)=(x,(1-t)s)$. The cone is used in algebraic topology precisely because it embeds a space as a subspace of a contractible space. When X is compact and Hausdorff (essentially, when X can be embedded in Euclidean space), then the cone $CX$ can be visualized as the collection of lines joining every point of X to a single point. However, this picture fails when X is not compact or not Hausdorff, as generally the quotient topology on $CX$ will be finer than the set of lines joining X to a point. Cone functor The map $X\mapsto CX$ induces a functor $C\colon \mathbf {Top} \to \mathbf {Top} $ on the category of topological spaces Top. If $f\colon X\to Y$ is a continuous map, then $Cf\colon CX\to CY$ is defined by $(Cf)([x,t])=[f(x),t]$, where square brackets denote equivalence classes. 
Reduced cone If $(X,x_{0})$ is a pointed space, there is a related construction, the reduced cone, given by $(X\times [0,1])/(X\times \left\{0\right\}\cup \left\{x_{0}\right\}\times [0,1])$ where we take the basepoint of the reduced cone to be the equivalence class of $(x_{0},0)$. With this definition, the natural inclusion $x\mapsto (x,1)$ becomes a based map. This construction also gives a functor, from the category of pointed spaces to itself. See also • Cone (disambiguation) • Suspension (topology) • Desuspension • Mapping cone (topology) • Join (topology) References 1. Matoušek, Jiří (2007). Using the Borsuk-Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry (2nd ed.). Berlin-Heidelberg: Springer-Verlag. ISBN 978-3-540-00362-5. Written in cooperation with Anders Björner and Günter M. Ziegler, Section 4.3 • Allen Hatcher, Algebraic topology. Cambridge University Press, Cambridge, 2002. xii+544 pp. ISBN 0-521-79160-X and ISBN 0-521-79540-0 • "Cone". PlanetMath.
Reduced cost In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve (increase for a maximization problem, decrease for a minimization problem) before it would be possible for a corresponding variable to assume a positive value in the optimal solution. It is the cost for increasing a variable by a small amount, i.e., the first derivative from a certain point on the polyhedron that constrains the problem. When the point is a vertex in the polyhedron, the variable with the most extreme cost, negative for minimization and positive for maximization, is sometimes referred to as the steepest edge. Given a system minimize $\mathbf {c} ^{T}\mathbf {x} $ subject to $\mathbf {Ax} \leq \mathbf {b} ,\mathbf {x} \geq 0$, the reduced cost vector can be computed as $\mathbf {c} -\mathbf {A} ^{T}\mathbf {y} $, where $\mathbf {y} $ is the dual cost vector. It follows directly that for a minimization problem, any non-basic variables at their lower bounds with strictly negative reduced costs are eligible to enter the basis, while any basic variables must have a reduced cost that is exactly 0. For a maximization problem, the non-basic variables at their lower bounds that are eligible for entering the basis have a strictly positive reduced cost. Interpretation For the case where x and y are optimal, the reduced costs can help explain why variables attain the value they do. For each variable, the corresponding entry of $\mathbf {c} -\mathbf {A} ^{T}\mathbf {y} $ gives the reduced cost, showing how the constraints force the variable up or down. For non-basic variables, the distance to zero gives the minimal change in the objective coefficient needed to change the solution vector x. In pivot strategy In principle, a good pivot strategy would be to select whichever variable has the greatest reduced cost. However, the steepest edge might ultimately not be the most attractive, as the edge might be very short, thus affording only a small improvement in the objective function value. From a computational view, another problem is that to compute the steepest edge, an inner product must be computed for every variable in the system, making the computational cost too high in many cases. The Devex algorithm attempts to overcome the latter problem by estimating the reduced costs rather than calculating them at every pivot step, exploiting that a pivot step might not alter the reduced costs of all variables dramatically.
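As a worked numerical illustration of the formula $\mathbf {c} -\mathbf {A} ^{T}\mathbf {y} $, consider a tiny problem with a known optimal basis; the data are invented for illustration, and this is a sketch, not a general solver:

import numpy as np

# Minimize c^T x subject to A x = b, x >= 0.  Here the problem is
# min -x1 - 2*x2 with x1 + x2 + s = 4 (s is the slack of x1 + x2 <= 4).
A = np.array([[1.0, 1.0, 1.0]])        # columns: x1, x2, s
b = np.array([4.0])
c = np.array([-1.0, -2.0, 0.0])

basis = [1]                            # at the optimum x2 = 4 is basic
B = A[:, basis]
y = np.linalg.solve(B.T, c[basis])     # dual vector
print(y)                               # [-2.]
print(c - A.T @ y)                     # reduced costs [1. 0. 2.]: all >= 0,
                                       # so this basis is optimal; the basic
                                       # variable x2 has reduced cost 0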
In linear programming NOTE: This is a direct quote from the web site linked below: "Associated with each variable is a reduced cost value. However, the reduced cost value is only non-zero when the optimal value of a variable is zero. A somewhat intuitive way to think about the reduced cost variable is to think of it as indicating how much the cost of the activity represented by the variable must be reduced before any of that activity will be done. More precisely, ... the reduced cost value indicates how much the objective function coefficient on the corresponding variable must be improved before the value of the variable will be positive in the optimal solution. In the case of a minimization problem, "improved" means "reduced". So, in the case of a cost-minimization problem, where the objective function coefficients represent the per-unit cost of the activities represented by the variables, the "reduced cost" coefficients indicate how much each cost coefficient would have to be reduced before the activity represented by the corresponding variable would be cost-effective. In the case of a maximization problem, "improved" means "increased". In this case, where, for example, the objective function coefficient might represent the net profit per unit of the activity, the reduced cost value indicates how much the profitability of the activity would have to be increased in order for the activity to occur in the optimal solution. The units of the reduced-cost values are the same as the units of the corresponding objective function coefficients. If the optimal value of a variable is positive (not zero), then the reduced cost is always zero. If the optimal value of a variable is zero and the reduced cost corresponding to the variable is also zero, then there is at least one other corner that is also in the optimal solution. The value of this variable will be positive at one of the other optimal corners." See also • Linear programming • Shadow price References 1. "Interpreting LP Solutions - Reduced Cost". Courses.psu.edu. Retrieved 2013-08-08.
Row echelon form In linear algebra, a matrix is in echelon form if it has the shape resulting from a Gaussian elimination. A matrix being in row echelon form means that Gaussian elimination has operated on the rows, and column echelon form means that Gaussian elimination has operated on the columns. In other words, a matrix is in column echelon form if its transpose is in row echelon form. Therefore, only row echelon forms are considered in the remainder of this article. The similar properties of column echelon form are easily deduced by transposing all the matrices. Specifically, a matrix is in row echelon form if • All rows consisting of only zeroes are at the bottom.[1] • The leading entry (that is the left-most nonzero entry) of every nonzero row is to the right of the leading entry of every row above.[2] Some texts add the condition that the leading coefficient must be 1[3] while others regard this as reduced row echelon form. These two conditions imply that all entries in a column below a leading coefficient are zeros.[4] The following is an example of a 4x5 matrix in row echelon form, which is not in reduced row echelon form (see below): $\left[{\begin{array}{ccccc}1&a_{0}&a_{1}&a_{2}&a_{3}\\0&0&2&a_{4}&a_{5}\\0&0&0&1&a_{6}\\0&0&0&0&0\end{array}}\right]$ Many properties of matrices may be easily deduced from their row echelon form, such as the rank and the kernel. Reduced row echelon form A matrix is in reduced row echelon form (also called row canonical form) if it satisfies the following conditions:[5] • It is in row echelon form. • The leading entry in each nonzero row is a 1 (called a leading 1). • Each column containing a leading 1 has zeros in all its other entries. The reduced row echelon form of a matrix may be computed by Gauss–Jordan elimination. Unlike the row echelon form, the reduced row echelon form of a matrix is unique and does not depend on the algorithm used to compute it.[6] For a given matrix, despite the row echelon form not being unique, all row echelon forms and the reduced row echelon form have the same number of zero rows and the pivots are located in the same indices.[6] This is an example of a matrix in reduced row echelon form, which shows that the left part of the matrix is not always an identity matrix: $\left[{\begin{array}{ccccc}1&0&a_{1}&0&b_{1}\\0&1&a_{2}&0&b_{2}\\0&0&0&1&b_{3}\end{array}}\right]$ For matrices with integer coefficients, the Hermite normal form is a row echelon form that may be calculated using Euclidean division and without introducing any rational number or denominator. On the other hand, the reduced echelon form of a matrix with integer coefficients generally contains non-integer coefficients. Transformation to row echelon form Main article: Gaussian elimination By means of a finite sequence of elementary row operations, called Gaussian elimination, any matrix can be transformed to row echelon form. Since elementary row operations preserve the row space of the matrix, the row space of the row echelon form is the same as that of the original matrix. The resulting echelon form is not unique; any matrix that is in echelon form can be put in an (equivalent) echelon form by adding a scalar multiple of a row to one of the above rows, for example: ${\begin{bmatrix}1&3&-1\\0&1&7\\\end{bmatrix}}{\xrightarrow {\text{add row 2 to row 1}}}{\begin{bmatrix}1&4&6\\0&1&7\\\end{bmatrix}}.$ However, every matrix has a unique reduced row echelon form. 
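Gauss–Jordan elimination is short enough to state in full. A minimal Python sketch over exact rationals (standard library only; the helper name is ad hoc); applied to the matrix of the example above, whose reduction is continued below, it reproduces the unique reduced form:

from fractions import Fraction

def rref(matrix):
    # Gauss-Jordan elimination over exact rationals.
    m = [[Fraction(v) for v in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue                       # no pivot in this column
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        piv = m[pivot_row][col]
        m[pivot_row] = [v / piv for v in m[pivot_row]]   # make the leading 1
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

result = rref([[1, 3, -1], [0, 1, 7]])
print([[str(v) for v in row] for row in result])   # [['1', '0', '-22'], ['0', '1', '7']]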
In the above example, the reduced row echelon form can be found as ${\begin{bmatrix}1&3&-1\\0&1&7\\\end{bmatrix}}\xrightarrow {{\text{subtract 3}}\times {\text{(row 2) from row 1}}} {\begin{bmatrix}1&0&-22\\0&1&7\\\end{bmatrix}}.$ This means that the nonzero rows of the reduced row echelon form are the unique reduced row echelon generating set for the row space of the original matrix. Systems of linear equations A system of linear equations is said to be in row echelon form if its augmented matrix is in row echelon form. Similarly, a system of linear equations is said to be in reduced row echelon form or in canonical form if its augmented matrix is in reduced row echelon form. The canonical form may be viewed as an explicit solution of the linear system. In fact, the system is inconsistent if and only if one of the equations of the canonical form is reduced to 0 = 1.[7] Otherwise, regrouping in the right hand side all the terms of the equations but the leading ones, expresses the variables corresponding to the pivots as constants or linear functions of the other variables, if any. Notes 1. Phrased in terms of each individual zero row in Leon (2010, p. 13):"A matrix is said to be in row echelon form ... (iii) If there are rows whose entries are all zero, they are below the rows having nonzero entries." 2. Leon (2010, p. 13):"A matrix is said to be in row echelon form ... (ii) If row k does not consist entirely of zeros, the number of leading zero entries in row $k+1$ is greater than the number of leading zero entries in row k." 3. See, for instance, the first clause of the definition of row echelon form in Leon (2010, p. 13): "A matrix is said to be in row echelon form (i) If the first nonzero entry in each nonzero row is 1." 4. Meyer 2000, p. 44 5. Meyer 2000, p. 48 6. Anton, Howard; Rorres, Chris (2013-10-23). Elementary Linear Algebra: Applications Version, 11th Edition. Wiley Global Education. p. 21. ISBN 9781118879160. 7. Cheney, Ward; Kincaid, David R. (2010-12-29). Linear Algebra: Theory and Applications. Jones & Bartlett Publishers. pp. 47–50. ISBN 9781449613525. References • Leon, Steven J. (2010), Lynch, Deirdre; Hoffman, William; Celano, Caroline (eds.), Linear Algebra with Applications (8th ed.), Pearson, ISBN 978-0-13-600929-0, A matrix is said to be in row echelon form (i) If the first nonzero entry in each nonzero row is 1. (ii) If row k does not consist entirely of zeros, the number of leading zero entries in row $k+1$ is greater than the number of leading zero entries in row k. (iii) If there are rows whose entries are all zero, they are below the rows having nonzero entries.. • Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 978-0-89871-454-8. 
External links The Wikibook Linear Algebra has a page on the topic of: Row Reduction and Echelon Forms • Interactive Row Echelon Form with rational output
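To complement the description of Gauss–Jordan elimination above, here is a minimal sketch of the reduction to reduced row echelon form; it works over exact rationals to avoid floating-point pivoting issues, and the function name is an illustrative choice, not from any cited source.

```python
from fractions import Fraction

def rref(matrix):
    """Return the reduced row echelon form of `matrix` (a list of rows),
    computed by Gauss-Jordan elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue                                      # no pivot in this column
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]   # swap it into place
        lead = m[pivot_row][col]
        m[pivot_row] = [x / lead for x in m[pivot_row]]   # scale leading entry to 1
        for r in range(rows):                             # clear the rest of the column
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

# Rows come out as [1, 0, -22] and [0, 1, 7] (as Fractions),
# matching the worked example in the article.
print(rref([[1, 3, -1], [0, 1, 7]]))
```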
Reduced product In model theory, a branch of mathematical logic, and in algebra, the reduced product is a construction that generalizes both direct product and ultraproduct. For the reduced product in algebraic topology, see James reduced product. Let {Si | i ∈ I} be a family of structures of the same signature σ indexed by a set I, and let U be a filter on I. The domain of the reduced product is the quotient of the Cartesian product $\prod _{i\in I}S_{i}$ by a certain equivalence relation ~: two elements (ai) and (bi) of the Cartesian product are equivalent if $\left\{i\in I:a_{i}=b_{i}\right\}\in U$ If U only contains I as an element, the equivalence relation is trivial, and the reduced product is just the original Cartesian product. If U is an ultrafilter, the reduced product is an ultraproduct. Operations from σ are interpreted on the reduced product by applying the operation pointwise. Relations are interpreted by $R((a_{i}^{1})/{\sim },\dots ,(a_{i}^{n})/{\sim })\iff \{i\in I\mid R^{S_{i}}(a_{i}^{1},\dots ,a_{i}^{n})\}\in U.$ For example, if each structure is a vector space, then the reduced product is a vector space with addition defined as (a + b)i = ai + bi and multiplication by a scalar c as (ca)i = c ai. References • Chang, Chen Chung; Keisler, H. Jerome (1990) [1973]. Model Theory. Studies in Logic and the Foundations of Mathematics (3rd ed.). Elsevier. ISBN 978-0-444-88054-3., Chapter 6.
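For a finite index set the construction can be simulated directly. The sketch below is a toy illustration under stated assumptions (the index set, the choice of principal filter, and all names are invented for the example): it represents the filter U as an explicit family of subsets of I and tests the equivalence relation ~ defined above, with operations applied pointwise on representatives.

```python
I = {0, 1, 2}                         # finite index set (illustrative)
# The principal filter on I generated by {0, 1}: all supersets of {0, 1}.
# (If U contained only I itself, ~ would be trivial, as noted above.)
U = {frozenset({0, 1}), frozenset({0, 1, 2})}

def equivalent(a, b):
    """a ~ b iff the agreement set {i : a[i] == b[i]} belongs to U."""
    return frozenset(i for i in I if a[i] == b[i]) in U

print(equivalent((5, 7, 1), (5, 7, 9)))   # True:  they agree on {0, 1}, which is in U
print(equivalent((5, 7, 1), (5, 8, 1)))   # False: they agree on {0, 2}, which is not in U

# Operations are interpreted pointwise on representatives, e.g. for vector spaces:
def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

print(add((1, 2, 3), (10, 20, 30)))       # (11, 22, 33)
```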
Reduced residue system In mathematics, a subset R of the integers is called a reduced residue system modulo n if: 1. gcd(r, n) = 1 for each r in R, 2. R contains φ(n) elements, 3. no two elements of R are congruent modulo n.[1][2] Here φ denotes Euler's totient function. A reduced residue system modulo n can be formed from a complete residue system modulo n by removing all integers not relatively prime to n. For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: φ(12) = 4. Some other reduced residue systems modulo 12 are: • {13,17,19,23} • {−11,−7,−5,−1} • {−7,−13,13,31} • {35,43,53,61} Facts • If {r1, r2, ... , rφ(n)} is a reduced residue system modulo n with n > 2, then $\sum r_{i}\equiv 0\!\!\!\!\mod n$. • Every number in a reduced residue system modulo n is a generator for the additive group of integers modulo n. • If {r1, r2, ... , rφ(n)} is a reduced residue system modulo n, and a is an integer such that gcd(a, n) = 1, then {ar1, ar2, ... , arφ(n)} is also a reduced residue system modulo n.[3][4] See also • Complete residue system modulo m • Multiplicative group of integers modulo n • Congruence relation • Euler's totient function • Greatest common divisor • Least residue system modulo m • Modular arithmetic • Number theory • Residue number system Notes 1. Long (1972, p. 85) 2. Pettofrezzo & Byrkit (1970, p. 104) 3. Long (1972, p. 86) 4. Pettofrezzo & Byrkit (1970, p. 108) References • Long, Calvin T. (1972), Elementary Introduction to Number Theory (2nd ed.), Lexington: D. C. Heath and Company, LCCN 77171950 • Pettofrezzo, Anthony J.; Byrkit, Donald R. (1970), Elements of Number Theory, Englewood Cliffs: Prentice Hall, LCCN 71081766 External links • Residue systems at PlanetMath • Reduced residue system at MathWorld
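The defining conditions and the facts above are easy to verify by brute force for small moduli. The following minimal sketch (function names are illustrative) constructs the least nonnegative reduced residue system modulo n and checks the three defining conditions for the other systems listed.

```python
from math import gcd

def reduced_residue_system(n):
    """The least nonnegative reduced residue system modulo n."""
    return [r for r in range(n) if gcd(r, n) == 1]

def is_rrs(R, n):
    """Check the three defining conditions for a reduced residue system mod n."""
    phi = len(reduced_residue_system(n))
    return (all(gcd(r, n) == 1 for r in R)          # condition 1: coprime to n
            and len(R) == phi                        # condition 2: phi(n) elements
            and len({r % n for r in R}) == len(R))   # condition 3: pairwise incongruent

n = 12
R = reduced_residue_system(n)
print(R)                                   # [1, 5, 7, 11]; phi(12) = 4
print(is_rrs([13, 17, 19, 23], n))         # True, one of the systems listed above
print(sum(R) % n)                          # 0, illustrating the first fact (n > 2)

a = 5                                      # gcd(5, 12) = 1
print(is_rrs([a * r for r in R], n))       # True: {a*r} is again a reduced residue system
```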
Reduced ring In ring theory, a branch of mathematics, a ring is called a reduced ring if it has no non-zero nilpotent elements. Equivalently, a ring is reduced if it has no non-zero elements with square zero, that is, $x^{2}=0$ implies $x=0$. A commutative algebra over a commutative ring is called a reduced algebra if its underlying ring is reduced. The nilpotent elements of a commutative ring R form an ideal of R, called the nilradical of R; therefore a commutative ring is reduced if and only if its nilradical is zero. Moreover, a commutative ring is reduced if and only if the only element contained in all prime ideals is zero. A quotient ring R/I is reduced if and only if I is a radical ideal. Let ${\mathcal {N}}_{R}$ be the nilradical of a commutative ring $R$. There is a natural functor $R\mapsto R/{\mathcal {N}}_{R}$ from the category of commutative rings ${\text{Crng}}$ to the category of reduced rings ${\text{Red}}$, and it is left adjoint to the inclusion functor $I$ of ${\text{Red}}$ into ${\text{Crng}}$. The bijection ${\text{Hom}}_{\text{Red}}(R/{\mathcal {N}}_{R},S)\cong {\text{Hom}}_{\text{Crng}}(R,I(S))$ is induced from the universal property of quotient rings. Let D be the set of all zero-divisors in a reduced ring R. Then D is the union of all minimal prime ideals.[1] Over a Noetherian ring R, we say a finitely generated module M has locally constant rank if ${\mathfrak {p}}\mapsto \operatorname {dim} _{k({\mathfrak {p}})}(M\otimes k({\mathfrak {p}}))$ is a locally constant (or equivalently continuous) function on Spec R. Then R is reduced if and only if every finitely generated module of locally constant rank is projective.[2] Examples and non-examples • Subrings, products, and localizations of reduced rings are again reduced rings. • The ring of integers Z is a reduced ring. Every field and every polynomial ring over a field (in arbitrarily many variables) is a reduced ring. • More generally, every integral domain is a reduced ring since a nilpotent element is a fortiori a zero-divisor. On the other hand, not every reduced ring is an integral domain. For example, the ring Z[x, y]/(xy) contains x + (xy) and y + (xy) as zero-divisors, but no non-zero nilpotent elements. As another example, the ring Z × Z contains (1, 0) and (0, 1) as zero-divisors, but contains no non-zero nilpotent elements. • The ring Z/6Z is reduced; however, Z/4Z is not reduced: the class 2 + 4Z is nilpotent. In general, Z/nZ is reduced if and only if n = 0 or n is a square-free integer. • If R is a commutative ring and N is the nilradical of R, then the quotient ring R/N is reduced. • A commutative ring R of characteristic p for some prime number p is reduced if and only if its Frobenius endomorphism is injective (cf. Perfect field). Generalizations Reduced rings play an elementary role in algebraic geometry, where this concept is generalized to the concept of a reduced scheme. See also • Total quotient ring#The total ring of fractions of a reduced ring Notes 1. Proof: let ${\mathfrak {p}}_{i}$ be all the (possibly zero) minimal prime ideals. $D\subset \cup {\mathfrak {p}}_{i}:$ Let x be in D. Then xy = 0 for some nonzero y. Since R is reduced, (0) is the intersection of all ${\mathfrak {p}}_{i}$ and thus y is not in some ${\mathfrak {p}}_{i}$. Since xy is in all ${\mathfrak {p}}_{j}$, in particular in ${\mathfrak {p}}_{i}$, x is in ${\mathfrak {p}}_{i}$. $D\supset {\mathfrak {p}}_{i}:$ (following Kaplansky, Commutative Rings, Theorem 84). We drop the subscript i. Let $S=\{xy|x\in R-D,y\in R-{\mathfrak {p}}\}$. 
S is multiplicatively closed and so we can consider the localization $R\to R[S^{-1}]$. Let ${\mathfrak {q}}$ be the pre-image of a maximal ideal. Then ${\mathfrak {q}}$ is contained in both D and ${\mathfrak {p}}$ and by minimality ${\mathfrak {q}}={\mathfrak {p}}$. (This direction is immediate if R is Noetherian by the theory of associated primes.) 2. Eisenbud 1995, Exercise 20.13. References • N. Bourbaki, Commutative Algebra, Hermann Paris 1972, Chap. II, § 2.7 • N. Bourbaki, Algebra, Springer 1990, Chap. V, § 6.7 • Eisenbud, David (1995). Commutative Algebra with a View Toward Algebraic Geometry. Graduate Texts in Mathematics. Springer-Verlag. ISBN 0-387-94268-8.
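As a computational footnote to the Z/nZ example above, the squarefree criterion can be checked by exhaustive search for nonzero elements of square zero. This is a minimal sketch with illustrative function names; it tests only finite rings of the form Z/nZ, not reducedness in general.

```python
def is_reduced_Zn(n):
    """Z/nZ is reduced iff it has no nonzero x with x^2 = 0 (mod n)."""
    return all(x * x % n != 0 for x in range(1, n))

def is_squarefree(n):
    """n is squarefree iff no square p^2 > 1 divides n."""
    return all(n % (p * p) != 0 for p in range(2, int(n ** 0.5) + 1))

# The two predicates agree, illustrating the criterion stated above.
for n in range(2, 13):
    assert is_reduced_Zn(n) == is_squarefree(n)

print(is_reduced_Zn(6))   # True:  6 = 2 * 3 is squarefree
print(is_reduced_Zn(4))   # False: (2 + 4Z)^2 = 0, as in the example above
```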
Knot theory In topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life, such as those in shoelaces and rope, a mathematical knot differs in that the ends are joined so it cannot be undone, the simplest knot being a ring (or "unknot"). In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, $\mathbb {R} ^{3}$. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of $\mathbb {R} ^{3}$ upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting it or passing it through itself. Knots can be described in various ways. Using different description methods, there may be more than one description of the same knot. For example, a common method of describing a knot is a planar diagram called a knot diagram, in which any knot can be drawn in many different ways. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot. A complete algorithmic solution to this problem exists, which has unknown complexity. In practice, knots are often distinguished using a knot invariant, a "quantity" which is the same when computed from different descriptions of a knot. Important invariants include knot polynomials, knot groups, and hyperbolic invariants. The original motivation for the founders of knot theory was to create a table of knots and links, which are knots of several components entangled with each other. More than six billion knots and links have been tabulated since the beginnings of knot theory in the 19th century. To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces and objects other than circles can be used; see knot (mathematics). For example, a higher-dimensional knot is an n-dimensional sphere embedded in (n+2)-dimensional Euclidean space. History Main article: History of knot theory Archaeologists have discovered that knot tying dates back to prehistoric times. Besides their uses such as recording information and tying objects together, knots have interested humans for their aesthetics and spiritual symbolism. Knots appear in various forms of Chinese artwork dating from several centuries BC (see Chinese knotting). The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often representing strength in unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork. A mathematical theory of knots was first developed in 1771 by Alexandre-Théophile Vandermonde who explicitly noted the importance of topological features when discussing the properties of knots related to the geometry of position. Mathematical studies of knots began in the 19th century with Carl Friedrich Gauss, who defined the linking integral (Silver 2006). In the 1860s, Lord Kelvin's theory that atoms were knots in the aether led to Peter Guthrie Tait's creation of the first knot tables for complete classification. Tait, in 1885, published a table of knots with up to ten crossings, and what came to be known as the Tait conjectures. This record motivated the early knot theorists, but knot theory eventually became part of the emerging subject of topology. These topologists in the early part of the 20th century—Max Dehn, J. W. 
Alexander, and others—studied knots from the point of view of the knot group and invariants from homology theory such as the Alexander polynomial. This would be the main approach to knot theory until a series of breakthroughs transformed the subject. In the late 1970s, William Thurston introduced hyperbolic geometry into the study of knots with the hyperbolization theorem. Many knots were shown to be hyperbolic knots, enabling the use of geometry in defining new, powerful knot invariants. The discovery of the Jones polynomial by Vaughan Jones in 1984 (Sossinsky 2002, pp. 71–89), and subsequent contributions from Edward Witten, Maxim Kontsevich, and others, revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. A plethora of knot invariants have been invented since then, utilizing sophisticated tools such as quantum groups and Floer homology. In the last several decades of the 20th century, scientists became interested in studying physical knots in order to understand knotting phenomena in DNA and other polymers. Knot theory can be used to determine if a molecule is chiral (has a "handedness") or not (Simon 1986). Tangles, strings with both ends fixed in place, have been effectively used in studying the action of topoisomerase on DNA (Flapan 2000). Knot theory may be crucial in the construction of quantum computers, through the model of topological quantum computation (Collins 2006). Knot equivalence On the left, the unknot, and a knot equivalent to it. It can be more difficult to determine whether complex knots, such as the one on the right, are equivalent to the unknot. A knot is created by beginning with a one-dimensional line segment, wrapping it around itself arbitrarily, and then fusing its two free ends together to form a closed loop (Adams 2004) (Sossinsky 2002). Simply, we can say a knot $K$ is a "simple closed curve" (see Curve) — that is: a "nearly" injective and continuous function $K\colon [0,1]\to \mathbb {R} ^{3}$, with the only "non-injectivity" being $K(0)=K(1)$. Topologists consider knots and other entanglements such as links and braids to be equivalent if the knot can be pushed about smoothly, without intersecting itself, to coincide with another knot. The idea of knot equivalence is to give a precise definition of when two knots should be considered the same even when positioned quite differently in space. A formal mathematical definition is that two knots $K_{1},K_{2}$ are equivalent if there is an orientation-preserving homeomorphism $h\colon \mathbb {R} ^{3}\to \mathbb {R} ^{3}$ with $h(K_{1})=K_{2}$. What this definition of knot equivalence means is that two knots are equivalent when there is a continuous family of homeomorphisms $\{h_{t}:\mathbb {R} ^{3}\rightarrow \mathbb {R} ^{3}\ \mathrm {for} \ 0\leq t\leq 1\}$ of space onto itself, such that the last one of them carries the first knot onto the second knot. (In detail: Two knots $K_{1}$ and $K_{2}$ are equivalent if there exists a continuous mapping $H:\mathbb {R} ^{3}\times [0,1]\rightarrow \mathbb {R} ^{3}$ such that a) for each $t\in [0,1]$ the mapping taking $x\in \mathbb {R} ^{3}$ to $H(x,t)\in \mathbb {R} ^{3}$ is a homeomorphism of $\mathbb {R} ^{3}$ onto itself; b) $H(x,0)=x$ for all $x\in \mathbb {R} ^{3}$; and c) $H(K_{1},1)=K_{2}$. Such a function $H$ is known as an ambient isotopy.) 
These two notions of knot equivalence agree exactly about which knots are equivalent: Two knots that are equivalent under the orientation-preserving homeomorphism definition are also equivalent under the ambient isotopy definition, because any orientation-preserving homeomorphism of $\mathbb {R} ^{3}$ to itself is the final stage of an ambient isotopy starting from the identity. Conversely, two knots equivalent under the ambient isotopy definition are also equivalent under the orientation-preserving homeomorphism definition, because the $t=1$ (final) stage of the ambient isotopy must be an orientation-preserving homeomorphism carrying one knot to the other. The basic problem of knot theory, the recognition problem, is determining the equivalence of two knots. Algorithms exist to solve this problem, with the first given by Wolfgang Haken in the late 1960s (Hass 1998). Nonetheless, these algorithms can be extremely time-consuming, and a major issue in the theory is to understand how hard this problem really is (Hass 1998). The special case of recognizing the unknot, called the unknotting problem, is of particular interest (Hoste 2005). In February 2021 Marc Lackenby announced a new unknot recognition algorithm that runs in quasi-polynomial time.[1] Knot diagrams A useful way to visualise and manipulate knots is to project the knot onto a plane—think of the knot casting a shadow on the wall. A small change in the direction of projection will ensure that it is one-to-one except at the double points, called crossings, where the "shadow" of the knot crosses itself once transversely (Rolfsen 1976). At each crossing, to be able to recreate the original knot, the over-strand must be distinguished from the under-strand. This is often done by creating a break in the strand going underneath. The resulting diagram is an immersed plane curve with the additional data of which strand is over and which is under at each crossing. (These diagrams are called knot diagrams when they represent a knot and link diagrams when they represent a link.) Analogously, knotted surfaces in 4-space can be related to immersed surfaces in 3-space. A reduced diagram is a knot diagram in which there are no reducible crossings (also nugatory or removable crossings), or in which all of the reducible crossings have been removed.[2][3] A petal projection is a type of projection in which, instead of forming double points, all strands of the knot meet at a single crossing point, connected to it by loops forming non-nested "petals".[4] Reidemeister moves Main article: Reidemeister move In 1927, working with this diagrammatic form of knots, J. W. Alexander and Garland Baird Briggs, and independently Kurt Reidemeister, demonstrated that two knot diagrams belonging to the same knot can be related by a sequence of three kinds of moves on the diagram, shown below. These operations, now called the Reidemeister moves, are: 1. Twist and untwist in either direction. 2. Move one strand completely over another. 3. Move a strand completely over or under a crossing. (Figure: the Reidemeister moves, Type I, Type II, and Type III.) The proof that diagrams of equivalent knots are connected by Reidemeister moves relies on an analysis of what happens under the planar projection of the movement taking one knot to another. 
The movement can be arranged so that almost all of the time the projection will be a knot diagram, except at finitely many times when an "event" or "catastrophe" occurs, such as when more than two strands cross at a point or multiple strands become tangent at a point. A close inspection will show that complicated events can be eliminated, leaving only the simplest events: (1) a "kink" forming or being straightened out; (2) two strands becoming tangent at a point and passing through; and (3) three strands crossing at a point. These are precisely the Reidemeister moves (Sossinsky 2002, ch. 3) (Lickorish 1997, ch. 1). Knot invariants Main article: Knot invariant A knot invariant is a "quantity" that is the same for equivalent knots (Adams 2004) (Lickorish 1997) (Rolfsen 1976). For example, if the invariant is computed from a knot diagram, it should give the same value for two knot diagrams representing equivalent knots. An invariant may take the same value on two different knots, so by itself may be incapable of distinguishing all knots. An elementary invariant is tricolorability. "Classical" knot invariants include the knot group, which is the fundamental group of the knot complement, and the Alexander polynomial, which can be computed from the Alexander invariant, a module constructed from the infinite cyclic cover of the knot complement (Lickorish 1997)(Rolfsen 1976). In the late 20th century, invariants such as "quantum" knot polynomials, Vassiliev invariants and hyperbolic invariants were discovered. These aforementioned invariants are only the tip of the iceberg of modern knot theory. Knot polynomials Main article: Knot polynomial A knot polynomial is a knot invariant that is a polynomial. Well-known examples include the Jones and Alexander polynomials. A variant of the Alexander polynomial, the Alexander–Conway polynomial, is a polynomial in the variable z with integer coefficients (Lickorish 1997). The Alexander–Conway polynomial is actually defined in terms of links, which consist of one or more knots entangled with each other. The concepts explained above for knots, e.g. diagrams and Reidemeister moves, also hold for links. Consider an oriented link diagram, i.e. one in which every component of the link has a preferred direction indicated by an arrow. For a given crossing of the diagram, let $L_{+},L_{-},L_{0}$ be the oriented link diagrams resulting from changing the diagram as indicated in the figure: The original diagram might be either $L_{+}$ or $L_{-}$, depending on the chosen crossing's configuration. Then the Alexander–Conway polynomial, $C(z)$, is recursively defined according to the rules: • $C(O)=1$ (where $O$ is any diagram of the unknot) • $C(L_{+})=C(L_{-})+zC(L_{0}).$ The second rule is what is often referred to as a skein relation. To check that these rules give an invariant of an oriented link, one should determine that the polynomial does not change under the three Reidemeister moves. Many important knot polynomials can be defined in this way. The following is an example of a typical computation using a skein relation. It computes the Alexander–Conway polynomial of the trefoil knot. (In the original figures, yellow patches indicate where the relation is applied.) Applying the relation at a crossing of the trefoil, $C(\mathrm {trefoil} )=C(\mathrm {unknot} )+zC(\mathrm {Hopf\ link} )$, gives the unknot and the Hopf link. Applying the relation to the Hopf link where indicated, $C(\mathrm {Hopf\ link} )=C(\mathrm {unlink} )+zC(\mathrm {unknot} )$, gives a link deformable to one with 0 crossings (it is actually the unlink of two components) and an unknot. 
The unlink takes a bit of sneakiness: applying the relation to a diagram of the unknot with one extra kink gives $C(\mathrm {unknot} )=C(\mathrm {unknot} )+zC(\mathrm {unlink} )$, which implies that C(unlink of two components) = 0, since the first two polynomials are of the unknot and thus equal. Putting all this together will show: $C(\mathrm {trefoil} )=1+z(0+z)=1+z^{2}$ Since the Alexander–Conway polynomial is a knot invariant, this shows that the trefoil is not equivalent to the unknot. So the trefoil really is "knotted". • The left-handed trefoil knot. • The right-handed trefoil knot. Actually, there are two trefoil knots, called the right and left-handed trefoils, which are mirror images of each other (take a diagram of the trefoil given above and change each crossing to the other way to get the mirror image). These are not equivalent to each other, meaning that they are not amphichiral. This was shown by Max Dehn, before the invention of knot polynomials, using group theoretical methods (Dehn 1914). But the Alexander–Conway polynomial of each kind of trefoil will be the same, as can be seen by going through the computation above with the mirror image. The Jones polynomial can in fact distinguish between the left- and right-handed trefoil knots (Lickorish 1997). Hyperbolic invariants William Thurston proved many knots are hyperbolic knots, meaning that the knot complement (i.e., the set of points of 3-space not on the knot) admits a geometric structure, in particular that of hyperbolic geometry. The hyperbolic structure depends only on the knot, so any quantity computed from the hyperbolic structure is then a knot invariant (Adams 2004). The Borromean rings are a link with the property that removing one ring unlinks the others. SnapPea's cusp view: the Borromean rings complement from the perspective of an inhabitant living near the red component. Geometry lets us visualize what the inside of a knot or link complement looks like by imagining light rays as traveling along the geodesics of the geometry. An example is provided by the picture of the complement of the Borromean rings. The inhabitant of this link complement is viewing the space from near the red component. The balls in the picture are views of horoball neighborhoods of the link. By thickening the link in a standard way, the horoball neighborhoods of the link components are obtained. Even though the boundary of a neighborhood is a torus, when viewed from inside the link complement, it looks like a sphere. Each link component shows up as infinitely many spheres (of one color) as there are infinitely many light rays from the observer to the link component. The fundamental parallelogram (which is indicated in the picture) tiles both vertically and horizontally and shows how to extend the pattern of spheres infinitely. This pattern, the horoball pattern, is itself a useful invariant. Other hyperbolic invariants include the shape of the fundamental parallelogram, length of shortest geodesic, and volume. Modern knot and link tabulation efforts have utilized these invariants effectively. Fast computers and clever methods of obtaining these invariants make calculating these invariants, in practice, a simple task (Adams, Hildebrand & Weeks 1991). Higher dimensions A knot in three dimensions can be untied when placed in four-dimensional space. This is done by changing crossings. Suppose one strand is behind another as seen from a chosen point. Lift it into the fourth dimension, so there is no obstacle (the front strand having no component there); then slide it forward, and drop it back, now in front. 
Analogies for the plane would be lifting a string up off the surface, or removing a dot from inside a circle. In fact, in four dimensions, any non-intersecting closed loop of one-dimensional string is equivalent to an unknot. First "push" the loop into a three-dimensional subspace, which is always possible, though technical to explain. Four-dimensional space occurs in classical knot theory, however, and an important topic is the study of slice knots and ribbon knots. A notorious open problem asks whether every slice knot is also ribbon. Knotting spheres of higher dimension Since a knot can be considered topologically a 1-dimensional sphere, the next generalization is to consider a two-dimensional sphere ($\mathbb {S} ^{2}$) embedded in 4-dimensional Euclidean space ($\mathbb {R} ^{4}$). Such an embedding is knotted if there is no homeomorphism of $\mathbb {R} ^{4}$ onto itself taking the embedded 2-sphere to the standard "round" embedding of the 2-sphere. Suspended knots and spun knots are two typical families of such 2-sphere knots. The mathematical technique called "general position" implies that for a given n-sphere in m-dimensional Euclidean space, if m is large enough (depending on n), the sphere should be unknotted. In general, piecewise-linear n-spheres form knots only in (n + 2)-dimensional space (Zeeman 1963), although this is no longer a requirement for smoothly knotted spheres. In fact, there are smoothly knotted $(4k-1)$-spheres in 6k-dimensional space; e.g., there is a smoothly knotted 3-sphere in $\mathbb {R} ^{6}$ (Haefliger 1962) (Levine 1965). Thus the codimension of a smooth knot can be arbitrarily large when not fixing the dimension of the knotted sphere; however, any smooth k-sphere embedded in $\mathbb {R} ^{n}$ with $2n-3k-3>0$ is unknotted. The notion of a knot has further generalisations in mathematics, see: Knot (mathematics), isotopy classification of embeddings. Every knot in the n-sphere $\mathbb {S} ^{n}$ is the link of a real-algebraic set with isolated singularity in $\mathbb {R} ^{n+1}$ (Akbulut & King 1981). An n-knot is a single $\mathbb {S} ^{n}$ embedded in $\mathbb {R} ^{m}$. An n-link consists of k-copies of $\mathbb {S} ^{n}$ embedded in $\mathbb {R} ^{m}$, where k is a natural number. Both the $m=n+2$ and the $m>n+2$ cases are well studied, and so is the $n>1$ case.[5][6] Adding knots Main article: Knot sum Two knots can be added by cutting both knots and joining the pairs of ends. The operation is called the knot sum, or sometimes the connected sum or composition of two knots. This can be formally defined as follows (Adams 2004): consider a planar projection of each knot and suppose these projections are disjoint. Find a rectangle in the plane where one pair of opposite sides are arcs along each knot while the rest of the rectangle is disjoint from the knots. Form a new knot by deleting the first pair of opposite sides and adjoining the other pair of opposite sides. The resulting knot is a sum of the original knots. Depending on how this is done, two different knots (but no more) may result. This ambiguity in the sum can be eliminated regarding the knots as oriented, i.e. having a preferred direction of travel along the knot, and requiring the arcs of the knots in the sum are oriented consistently with the oriented boundary of the rectangle. The knot sum of oriented knots is commutative and associative. A knot is prime if it is non-trivial and cannot be written as the knot sum of two non-trivial knots. 
A knot that can be written as such a sum is composite. There is a prime decomposition for knots, analogous to prime and composite numbers (Schubert 1949). For oriented knots, this decomposition is also unique. Higher-dimensional knots can also be added but there are some differences. While you cannot form the unknot in three dimensions by adding two non-trivial knots, you can in higher dimensions, at least when one considers smooth knots in codimension at least 3. Knots can also be constructed using the circuit topology approach. This is done by combining basic units called soft contacts using five operations (Parallel, Series, Cross, Concerted, and Sub).[7][8] The approach is applicable to open chains as well and can also be extended to include the so-called hard contacts. Tabulating knots See also: List of prime knots and Knot tabulation Traditionally, knots have been catalogued in terms of crossing number. Knot tables generally include only prime knots, and only one entry for a knot and its mirror image (even if they are different) (Hoste, Thistlethwaite & Weeks 1998). The number of nontrivial knots of a given crossing number increases rapidly, making tabulation computationally difficult (Hoste 2005, p. 20). Tabulation efforts have succeeded in enumerating over 6 billion knots and links (Hoste 2005, p. 28). The sequence of the number of prime knots of a given crossing number, up to crossing number 16, is 0, 0, 1, 1, 2, 3, 7, 21, 49, 165, 552, 2176, 9988, 46972, 253293, 1388705... (sequence A002863 in the OEIS). While exponential upper and lower bounds for this sequence are known, it has not been proven that this sequence is strictly increasing (Adams 2004). The first knot tables by Tait, Little, and Kirkman used knot diagrams, although Tait also used a precursor to the Dowker notation. Different notations have been invented for knots which allow more efficient tabulation (Hoste 2005). The early tables attempted to list all knots of at most 10 crossings, and all alternating knots of 11 crossings (Hoste, Thistlethwaite & Weeks 1998). The development of knot theory due to Alexander, Reidemeister, Seifert, and others eased the task of verification and tables of knots up to and including 9 crossings were published by Alexander–Briggs and Reidemeister in the late 1920s. The first major verification of this work was done in the 1960s by John Horton Conway, who not only developed a new notation but also the Alexander–Conway polynomial (Conway 1970) (Doll & Hoste 1991). This verified the list of knots of at most 11 crossings and a new list of links up to 10 crossings. Conway found a number of omissions but only one duplication in the Tait–Little tables; however he missed the duplicates called the Perko pair, which would only be noticed in 1974 by Kenneth Perko (Perko 1974). This famous error would propagate when Dale Rolfsen added a knot table in his influential text, based on Conway's work. Conway's 1970 paper on knot theory also contains a typographical duplication on its non-alternating 11-crossing knots page and omits 4 examples — 2 previously listed in D. Lombardero's 1968 Princeton senior thesis and 2 more subsequently discovered by Alain Caudron. [see Perko (1982), Primality of certain knots, Topology Proceedings] Less famous is the duplicate in his 10 crossing link table: 2.-2.-20.20 is the mirror of 8*-20:-20. [See Perko (2016), Historical highlights of non-cyclic knot theory, J. Knot Theory Ramifications]. 
In the late 1990s Hoste, Thistlethwaite, and Weeks tabulated all the knots through 16 crossings (Hoste, Thistlethwaite & Weeks 1998). In 2003 Rankin, Flint, and Schermann tabulated the alternating knots through 22 crossings (Hoste 2005). In 2020 Burton tabulated all prime knots with up to 19 crossings (Burton 2020). Alexander–Briggs notation This is the most traditional notation, due to the 1927 paper of James W. Alexander and Garland B. Briggs and later extended by Dale Rolfsen in his knot table (see image above and List of prime knots). The notation simply organizes knots by their crossing number. One writes the crossing number with a subscript to denote its order amongst all knots with that crossing number. This order is arbitrary and so has no special significance (though in each number of crossings the twist knot comes after the torus knot). Links are written by the crossing number with a superscript to denote the number of components and a subscript to denote its order within the links with the same number of components and crossings. Thus the trefoil knot is notated $3_{1}$ and the Hopf link is $2_{1}^{2}$. Alexander–Briggs names in the range $10_{162}$ to $10_{166}$ are ambiguous, due to the discovery of the Perko pair in Charles Newton Little's original and subsequent knot tables, and differences in approach to correcting this error in knot tables and other publications created after this point.[9] Dowker–Thistlethwaite notation Main article: Dowker–Thistlethwaite notation The Dowker–Thistlethwaite notation, also called the Dowker notation or code, for a knot is a finite sequence of even integers. The numbers are generated by following the knot and marking the crossings with consecutive integers. Since each crossing is visited twice, this creates a pairing of even integers with odd integers. An appropriate sign is given to indicate over and undercrossing. For example, in this figure the knot diagram has crossings labelled with the pairs (1,6) (3,−12) (5,2) (7,8) (9,−4) and (11,−10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6, −12, 2, 8, −4, −10. A knot diagram has more than one possible Dowker notation, and there is a well-understood ambiguity when reconstructing a knot from a Dowker–Thistlethwaite notation. Conway notation Main article: Conway notation (knot theory) The Conway notation for knots and links, named after John Horton Conway, is based on the theory of tangles (Conway 1970). The advantage of this notation is that it reflects some properties of the knot or link. The notation describes how to construct a particular link diagram of the link. Start with a basic polyhedron, a 4-valent connected planar graph with no digon regions. Such a polyhedron is denoted first by the number of vertices, then by a number of asterisks which determine the polyhedron's position on a list of basic polyhedra. For example, 10** denotes the second 10-vertex polyhedron on Conway's list. Each vertex then has an algebraic tangle substituted into it (each vertex is oriented so there is no arbitrary choice in substitution). Each such tangle has a notation consisting of numbers and + or − signs. An example is 1*2 −3 2. The 1* denotes the only 1-vertex basic polyhedron. The 2 −3 2 is a sequence describing the continued fraction associated to a rational tangle. One inserts this tangle at the vertex of the basic polyhedron 1*. A more complicated example is 8*3.1.2 0.1.1.1.1.1 Here again 8* refers to a basic polyhedron with 8 vertices. The periods separate the notation for each tangle. 
Any link admits such a description, and it is clear this is a very compact notation even for very large crossing numbers. There are some further shorthands usually used. The last example is usually written 8*3:2 0, where the 1s are omitted while the number of dots is kept, except for the dots at the end. For an algebraic knot such as in the first example, 1* is often omitted. Conway's pioneering paper on the subject lists up to 10-vertex basic polyhedra which he uses to tabulate links, which have become standard for those links. For a further listing of higher vertex polyhedra, there are nonstandard choices available. Gauss code Main article: Gauss code Gauss code, similar to the Dowker–Thistlethwaite notation, represents a knot with a sequence of integers. However, rather than every crossing being represented by two different numbers, crossings are labeled with only one number. When the crossing is an overcrossing, a positive number is listed. At an undercrossing, a negative number. For example, the trefoil knot in Gauss code can be given as: 1,−2,3,−1,2,−3 Gauss code is limited in its ability to identify knots. This problem is partially addressed by the extended Gauss code. See also • List of knot theory topics • Molecular knot • Circuit topology • Quantum topology • Ribbon theory • Contact geometry#Legendrian submanifolds and knots • Knots and graphs • Necktie § Types of knot • Lamp cord trick References Sources • Adams, Colin (2004), The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots, American Mathematical Society, ISBN 978-0-8218-3678-1 • Adams, Colin; Crawford, Thomas; DeMeo, Benjamin; Landry, Michael; Lin, Alex Tong; Montee, MurphyKate; Park, Seojung; Venkatesh, Saraswathi; Yhee, Farrah (2015), "Knot projections with a single multi-crossing", Journal of Knot Theory and Its Ramifications, 24 (3): 1550011, 30, arXiv:1208.5742, doi:10.1142/S021821651550011X, MR 3342136, S2CID 119320887 • Adams, Colin; Hildebrand, Martin; Weeks, Jeffrey (1991), "Hyperbolic invariants of knots and links", Transactions of the American Mathematical Society, 326 (1): 1–56, doi:10.1090/s0002-9947-1991-0994161-2, JSTOR 2001854 • Akbulut, Selman; King, Henry C. (1981), "All knots are algebraic", Comment. Math. Helv., 56 (3): 339–351, doi:10.1007/BF02566217, S2CID 120218312 • Bar-Natan, Dror (1995), "On the Vassiliev knot invariants", Topology, 34 (2): 423–472, doi:10.1016/0040-9383(95)93237-2 • Burton, Benjamin A. (2020). "The Next 350 Million Knots". 36th International Symposium on Computational Geometry (SoCG 2020). Leibniz Int. Proc. Inform. Vol. 164. Schloss Dagstuhl–Leibniz-Zentrum für Informatik. pp. 25:1–25:17. doi:10.4230/LIPIcs.SoCG.2020.25. • Collins, Graham (April 2006), "Computing with Quantum Knots", Scientific American, 294 (4): 56–63, Bibcode:2006SciAm.294d..56C, doi:10.1038/scientificamerican0406-56, PMID 16596880 • Dehn, Max (1914), "Die beiden Kleeblattschlingen", Mathematische Annalen, 75 (3): 402–413, doi:10.1007/BF01563732, S2CID 120452571 • Conway, John H. (1970), "An enumeration of knots and links, and some of their algebraic properties", Computational Problems in Abstract Algebra, Pergamon, pp. 329–358, doi:10.1016/B978-0-08-012975-4.50034-5, ISBN 978-0-08-012975-4 • Doll, Helmut; Hoste, Jim (1991), "A tabulation of oriented links. With microfiche supplement", Math. 
Comp., 57 (196): 747–761, Bibcode:1991MaCom..57..747D, doi:10.1090/S0025-5718-1991-1094946-4 • Flapan, Erica (2000), When topology meets chemistry: A topological look at molecular chirality, Outlook, Cambridge University Press, ISBN 978-0-521-66254-3 • Haefliger, André (1962), "Knotted (4k − 1)-spheres in 6k-space", Annals of Mathematics, Second Series, 75 (3): 452–466, doi:10.2307/1970208, JSTOR 1970208 • Hass, Joel (1998), "Algorithms for recognizing knots and 3-manifolds", Chaos, Solitons and Fractals, 9 (4–5): 569–581, arXiv:math/9712269, Bibcode:1998CSF.....9..569H, doi:10.1016/S0960-0779(97)00109-4, S2CID 7381505 • Hoste, Jim; Thistlethwaite, Morwen; Weeks, Jeffrey (1998), "The First 1,701,935 Knots", Math. Intelligencer, 20 (4): 33–48, doi:10.1007/BF03025227, S2CID 18027155 • Hoste, Jim (2005). "The Enumeration and Classification of Knots and Links". Handbook of Knot Theory. pp. 209–232. doi:10.1016/B978-044451452-3/50006-X. ISBN 978-0-444-51452-3. • Levine, Jerome (1965), "A classification of differentiable knots", Annals of Mathematics, Second Series, 1982 (1): 15–50, doi:10.2307/1970561, JSTOR 1970561 • Kontsevich, M. (1993). "Vassiliev's knot invariants". I. M. Gelfand Seminar. ADVSOV. Vol. 16. pp. 137–150. doi:10.1090/advsov/016.2/04. ISBN 978-0-8218-4117-4. • Lickorish, W. B. Raymond (1997), An Introduction to Knot Theory, Graduate Texts in Mathematics, vol. 175, Springer-Verlag, doi:10.1007/978-1-4612-0691-0, ISBN 978-0-387-98254-0, S2CID 122824389 • Perko, Kenneth (1974), "On the classification of knots", Proceedings of the American Mathematical Society, 45 (2): 262–6, doi:10.2307/2040074, JSTOR 2040074 • Rolfsen, Dale (1976), Knots and Links, Mathematics Lecture Series, vol. 7, Berkeley, California: Publish or Perish, ISBN 978-0-914098-16-4, MR 0515288 • Schubert, Horst (1949). Die eindeutige Zerlegbarkeit eines Knotens in Primknoten. doi:10.1007/978-3-642-45813-2. ISBN 978-3-540-01419-5. • Silver, Daniel (2006). "Knot Theory's Odd Origins". American Scientist. 94 (2): 158. doi:10.1511/2006.2.158. • Simon, Jonathan (1986), "Topological chirality of certain molecules", Topology, 25 (2): 229–235, doi:10.1016/0040-9383(86)90041-8 • Sossinsky, Alexei (2002), Knots, mathematics with a twist, Harvard University Press, ISBN 978-0-674-00944-8 • Turaev, Vladimir G. (2016). Quantum Invariants of Knots and 3-Manifolds. doi:10.1515/9783110435221. ISBN 978-3-11-043522-1. S2CID 118682559. • Weisstein, Eric W. (2013). "Reduced Knot Diagram". MathWorld. Wolfram. Retrieved 8 May 2013. • Weisstein, Eric W. (2013a). "Reducible Crossing". MathWorld. Wolfram. Retrieved 8 May 2013. • Witten, Edward (1989), "Quantum field theory and the Jones polynomial", Comm. Math. Phys., 121 (3): 351–399, Bibcode:1989CMaPh.121..351W, doi:10.1007/BF01217730, S2CID 14951363 • Zeeman, Erik C. (1963), "Unknotting combinatorial balls", Annals of Mathematics, Second Series, 78 (3): 501–526, doi:10.2307/1970538, JSTOR 1970538 Footnotes 1. Marc Lackenby announces a new unknot recognition algorithm that runs in quasi-polynomial time, Mathematical Institute, University of Oxford, 2021-02-03, retrieved 2021-02-03 2. Weisstein 2013. 3. Weisstein 2013a. 4. Adams et al. 2015. 5. Levine, J.; Orr, K (2000), "A survey of applications of surgery to knot and link theory", Surveys on Surgery Theory: Papers Dedicated to C.T.C. Wall, Annals of mathematics studies, vol. 
1, Princeton University Press, CiteSeerX 10.1.1.64.4359, ISBN 978-0691049380 — An introductory article to high dimensional knots and links for the advanced readers 6. Ogasa, Eiji (2013), Introduction to high dimensional knots, arXiv:1304.6053, Bibcode:2013arXiv1304.6053O — An introductory article to high dimensional knots and links for beginners 7. Golovnev, Anatoly; Mashaghi, Alireza (7 December 2021). "Circuit Topology for Bottom-Up Engineering of Molecular Knots". Symmetry. 13 (12): 2353. arXiv:2106.03925. Bibcode:2021Symm...13.2353G. doi:10.3390/sym13122353. 8. Flapan, Erica; Mashaghi, Alireza; Wong, Helen (1 June 2023). "A tile model of circuit topology for self-entangled biopolymers". Scientific Reports. 13 (1): 8889. Bibcode:2023NatSR..13.8889F. doi:10.1038/s41598-023-35771-8. PMC 10235088. PMID 37264056. 9. "The Revenge of the Perko Pair", RichardElwes.co.uk. Accessed February 2016. Richard Elwes points out a common mistake in describing the Perko pair. Further reading Introductory textbooks There are a number of introductions to knot theory. A classical introduction for graduate students or advanced undergraduates is (Rolfsen 1976). Other good texts from the references are (Adams 2004) and (Lickorish 1997). Adams is informal and accessible for the most part to high schoolers. Lickorish is a rigorous introduction for graduate students, covering a nice mix of classical and modern topics. (Cromwell 2004) is suitable for undergraduates who know point-set topology; knowledge of algebraic topology is not required. • Burde, Gerhard; Zieschang, Heiner (1985), Knots, De Gruyter Studies in Mathematics, vol. 5, Walter de Gruyter, ISBN 978-3-11-008675-1 • Crowell, Richard H.; Fox, Ralph (1977). Introduction to Knot Theory. Springer. ISBN 978-0-387-90272-2. • Kauffman, Louis H. (1987), On Knots, Princeton University Press, ISBN 978-0-691-08435-0 • Kauffman, Louis H. (2013), Knots and Physics (4th ed.), World Scientific, ISBN 978-981-4383-00-4 • Cromwell, Peter R. (2004), Knots and Links, Cambridge University Press, ISBN 978-0-521-54831-1 Surveys • Menasco, William W.; Thistlethwaite, Morwen, eds. (2005), Handbook of Knot Theory, Elsevier, ISBN 978-0-444-51452-3 • Menasco and Thistlethwaite's handbook surveys a mix of topics relevant to current research trends in a manner accessible to advanced undergraduates but of interest to professional researchers. • Livio, Mario (2009), "Ch. 8: Unreasonable Effectiveness?", Is God a Mathematician?, Simon & Schuster, pp. 203–218, ISBN 978-0-7432-9405-8 External links Wikimedia Commons has media related to Knot theory. Look up knot theory in Wiktionary, the free dictionary. • "Mathematics and Knots" This is an online version of an exhibition developed for the 1989 Royal Society "PopMath RoadShow". Its aim was to use knots to present methods of mathematics to the general public. History • Thomson, Sir William (1867), "On Vortex Atoms", Proceedings of the Royal Society of Edinburgh, VI: 94–105 • Silliman, Robert H. 
(December 1963), "William Thomson: Smoke Rings and Nineteenth-Century Atomism", Isis, 54 (4): 461–474, doi:10.1086/349764, JSTOR 228151, S2CID 144988108 • Movie of a modern recreation of Tait's smoke ring experiment • History of knot theory (on the home page of Andrew Ranicki) Knot tables and software • KnotInfo: Table of Knot Invariants and Knot Theory Resources • The Knot Atlas — detailed info on individual knots in knot tables • KnotPlot — software to investigate geometric properties of knots • Knotscape — software to create images of knots • Knoutilus — online database and image generator of knots • KnotData.html — Wolfram Mathematica function for investigating knots • Regina — software for low-dimensional topology with native support for knots and links. Tables of prime knots with up to 19 crossings
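As a small computational complement to the Gauss code section above: a necessary, purely syntactic condition for a sequence of signed integers to be a Gauss code is that every crossing label occur exactly once as an overcrossing (positive) and once as an undercrossing (negative). The sketch below checks only this condition; the function name is illustrative, and passing the check does not guarantee that the code is realized by an actual knot diagram.

```python
def is_valid_gauss_code(code):
    """Each crossing label k must occur exactly once as +k (overcrossing)
    and exactly once as -k (undercrossing)."""
    labels = {abs(k) for k in code}
    return (len(code) == 2 * len(labels)
            and all(code.count(k) == 1 and code.count(-k) == 1 for k in labels))

print(is_valid_gauss_code([1, -2, 3, -1, 2, -3]))   # True: the trefoil code above
print(is_valid_gauss_code([1, -2, 3, -1, 2, 3]))    # False: crossing 3 never undercrosses
```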
Rewriting In mathematics, computer science, and logic, rewriting covers a wide range of methods of replacing subterms of a formula with other terms. Such methods may be achieved by rewriting systems (also known as rewrite systems, rewrite engines,[1][2] or reduction systems). In their most basic form, they consist of a set of objects, plus relations on how to transform those objects. Rewriting can be non-deterministic. One rule to rewrite a term could be applied in many different ways to that term, or more than one rule could be applicable. Rewriting systems then do not provide an algorithm for changing one term to another, but a set of possible rule applications. When combined with an appropriate algorithm, however, rewrite systems can be viewed as computer programs, and several theorem provers[3] and declarative programming languages are based on term rewriting.[4][5] Example cases Logic In logic, the procedure for obtaining the conjunctive normal form (CNF) of a formula can be implemented as a rewriting system.[6] The rules of an example of such a system would be: $\neg \neg A\to A$ (double negation elimination) $\neg (A\land B)\to \neg A\lor \neg B$ (De Morgan's laws) $\neg (A\lor B)\to \neg A\land \neg B$ $(A\land B)\lor C\to (A\lor C)\land (B\lor C)$ (distributivity) $A\lor (B\land C)\to (A\lor B)\land (A\lor C),$[note 1] where the symbol ($\to $) indicates that an expression matching the left hand side of the rule can be rewritten to one formed by the right hand side, and the symbols each denote a subexpression. In such a system, each rule is chosen so that the left side is equivalent to the right side, and consequently when the left side matches a subexpression, performing a rewrite of that subexpression from left to right maintains logical consistency and value of the entire expression. Arithmetic Term rewriting systems can be employed to compute arithmetic operations on natural numbers. To this end, each such number has to be encoded as a term. The simplest encoding is the one used in the Peano axioms, based on the constant 0 (zero) and the successor function S. For example, the numbers 0, 1, 2, and 3 are represented by the terms 0, S(0), S(S(0)), and S(S(S(0))), respectively. The following term rewriting system can then be used to compute sum and product of given natural numbers.[7] ${\begin{aligned}A+0&\to A&{\textrm {(1)}},\\A+S(B)&\to S(A+B)&{\textrm {(2)}},\\A\cdot 0&\to 0&{\textrm {(3)}},\\A\cdot S(B)&\to A+(A\cdot B)&{\textrm {(4)}}.\end{aligned}}$ For example, the computation of 2+2 to result in 4 can be duplicated by term rewriting as follows: $S(S(0))+S(S(0))$ $\;\;{\stackrel {(2)}{\to }}\;\;$ $S(\;S(S(0))+S(0)\;)$ $\;\;{\stackrel {(2)}{\to }}\;\;$ $S(S(\;S(S(0))+0\;))$ $\;\;{\stackrel {(1)}{\to }}\;\;$ $S(S(S(S(0)))),$ where the rule numbers are given above the rewrites-to arrow. As another example, the computation of 2⋅2 looks like: $S(S(0))\cdot S(S(0))$ $\;\;{\stackrel {(4)}{\to }}\;\;$ $S(S(0))+S(S(0))\cdot S(0)$ $\;\;{\stackrel {(4)}{\to }}\;\;$ $S(S(0))+S(S(0))+S(S(0))\cdot 0$ $\;\;{\stackrel {(3)}{\to }}\;\;$ $S(S(0))+S(S(0))+0$ $\;\;{\stackrel {(1)}{\to }}\;\;$ $S(S(0))+S(S(0))$ $\;\;{\stackrel {\textrm {s.a.}}{\to }}\;\;$ $S(S(S(S(0)))),$ where the last step comprises the previous example computation. Linguistics In linguistics, phrase structure rules, also called rewrite rules, are used in some systems of generative grammar,[8] as a means of generating the grammatically correct sentences of a language. 
Such a rule typically takes the form ${\rm {A\rightarrow X}}$, where A is a syntactic category label, such as noun phrase or sentence, and X is a sequence of such labels or morphemes, expressing the fact that A can be replaced by X in generating the constituent structure of a sentence. For example, the rule ${\rm {S\rightarrow NP\ VP}}$ means that a sentence can consist of a noun phrase (NP) followed by a verb phrase (VP); further rules will specify what sub-constituents a noun phrase and a verb phrase can consist of, and so on. Abstract rewriting systems Main article: Abstract rewriting system From the above examples, it is clear that we can think of rewriting systems in an abstract manner. We need to specify a set of objects and the rules that can be applied to transform them. The most general (unidimensional) setting of this notion is called an abstract reduction system[9] or abstract rewriting system (abbreviated ARS).[10] An ARS is simply a set A of objects, together with a binary relation → on A called the reduction relation, rewrite relation[11] or just reduction.[9] Many notions and notations can be defined in the general setting of an ARS. ${\overset {*}{\rightarrow }}$ is the reflexive transitive closure of $\rightarrow $. $\leftrightarrow $ is the symmetric closure of $\rightarrow $. ${\overset {*}{\leftrightarrow }}$ is the reflexive transitive symmetric closure of $\rightarrow $. The word problem for an ARS is determining, given x and y, whether $x{\overset {*}{\leftrightarrow }}y$. An object x in A is called reducible if there exists some other y in A such that $x\rightarrow y$; otherwise it is called irreducible or a normal form. An object y is called a "normal form of x" if $x{\stackrel {*}{\rightarrow }}y$, and y is irreducible. If the normal form of x is unique, then this is usually denoted with $x{\downarrow }$. If every object has at least one normal form, the ARS is called normalizing. $x\downarrow y$ or x and y are said to be joinable if there exists some z with the property that $x{\overset {*}{\rightarrow }}z{\overset {*}{\leftarrow }}y$. An ARS is said to possess the Church–Rosser property if $x{\overset {*}{\leftrightarrow }}y$ implies $x\downarrow y$. An ARS is confluent if for all w, x, and y in A, $x{\overset {*}{\leftarrow }}w{\overset {*}{\rightarrow }}y$ implies $x\downarrow y$. An ARS is locally confluent if and only if for all w, x, and y in A, $x\leftarrow w\rightarrow y$ implies $x{\mathbin {\downarrow }}y$. An ARS is said to be terminating or noetherian if there is no infinite chain $x_{0}\rightarrow x_{1}\rightarrow x_{2}\rightarrow \cdots $. A confluent and terminating ARS is called convergent or canonical. Important theorems for abstract rewriting systems are that an ARS is confluent iff it has the Church–Rosser property, Newman's lemma (a terminating ARS is confluent if and only if it is locally confluent), and that the word problem for an ARS is undecidable in general. String rewriting systems Main article: String rewriting system A string rewriting system (SRS), also known as semi-Thue system, exploits the free monoid structure of the strings (words) over an alphabet to extend a rewriting relation, $R$, to all strings in the alphabet that contain left- and respectively right-hand sides of some rules as substrings. Formally a semi-Thue system is a tuple $(\Sigma ,R)$ where $\Sigma $ is a (usually finite) alphabet, and $R$ is a binary relation between some (fixed) strings in the alphabet, called the set of rewrite rules. 
The one-step rewriting relation ${\underset {R}{\rightarrow }}$ induced by $R$ on $\Sigma ^{*}$ is defined as: if $s,t\in \Sigma ^{*}$ are any strings, then $s{\underset {R}{\rightarrow }}t$ if there exist $x,y,u,v\in \Sigma ^{*}$ such that $s=xuy$, $t=xvy$, and $uRv$. Since ${\underset {R}{\rightarrow }}$ is a relation on $\Sigma ^{*}$, the pair $(\Sigma ^{*},{\underset {R}{\rightarrow }})$ fits the definition of an abstract rewriting system. Since the empty string is in $\Sigma ^{*}$, $R$ is a subset of ${\underset {R}{\rightarrow }}$. If the relation $R$ is symmetric, then the system is called a Thue system. In an SRS, the reduction relation ${\overset {*}{\underset {R}{\rightarrow }}}$ is compatible with the monoid operation, meaning that $x{\overset {*}{\underset {R}{\rightarrow }}}y$ implies $uxv{\overset {*}{\underset {R}{\rightarrow }}}uyv$ for all strings $x,y,u,v\in \Sigma ^{*}$. Similarly, the reflexive transitive symmetric closure of ${\underset {R}{\rightarrow }}$, denoted ${\overset {*}{\underset {R}{\leftrightarrow }}}$, is a congruence, meaning it is an equivalence relation (by definition) and it is also compatible with string concatenation. The relation ${\overset {*}{\underset {R}{\leftrightarrow }}}$ is called the Thue congruence generated by $R$. In a Thue system, i.e. if $R$ is symmetric, the rewrite relation ${\overset {*}{\underset {R}{\rightarrow }}}$ coincides with the Thue congruence ${\overset {*}{\underset {R}{\leftrightarrow }}}$. The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Since ${\overset {*}{\underset {R}{\leftrightarrow }}}$ is a congruence, we can define the factor monoid ${\mathcal {M}}_{R}=\Sigma ^{*}/{\overset {*}{\underset {R}{\leftrightarrow }}}$ of the free monoid $\Sigma ^{*}$ by the Thue congruence. If a monoid ${\mathcal {M}}$ is isomorphic with ${\mathcal {M}}_{R}$, then the semi-Thue system $(\Sigma ,R)$ is called a monoid presentation of ${\mathcal {M}}$. We immediately get some very useful connections with other areas of algebra. For example, the alphabet $\{a,b\}$ with the rules $\{ab\rightarrow \varepsilon ,ba\rightarrow \varepsilon \}$, where $\varepsilon $ is the empty string, is a presentation of the free group on one generator. If instead the rules are just $\{ab\rightarrow \varepsilon \}$, then we obtain a presentation of the bicyclic monoid. Thus semi-Thue systems constitute a natural framework for solving the word problem for monoids and groups. In fact, every monoid has a presentation of the form $(\Sigma ,R)$, i.e. it may always be presented by a semi-Thue system, possibly over an infinite alphabet. The word problem for a semi-Thue system is undecidable in general; this result is sometimes known as the Post–Markov theorem.[12]
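One-step rewriting in a semi-Thue system is easy to realize concretely. The following Python sketch (illustrative, not from the article; representing R as a list of string pairs is an assumption made for the example) enumerates all one-step successors of a string and reduces to a normal form using the bicyclic-monoid presentation {ab → ε} mentioned above:

```python
# Illustrative sketch, not from the article: rules are (u, v) pairs of strings.

def one_step(s, rules):
    """Yield every t with s ->_R t, i.e. t = x v y whenever s = x u y and (u, v) in R."""
    for u, v in rules:
        start = 0
        while (i := s.find(u, start)) != -1:
            yield s[:i] + v + s[i + len(u):]
            start = i + 1

def normal_form(s, rules):
    """Repeatedly apply the first applicable rule until no rule applies."""
    while (t := next(one_step(s, rules), None)) is not None:
        s = t
    return s

rules = [("ab", "")]                 # the bicyclic-monoid presentation {ab -> epsilon}
print(list(one_step("aabb", rules)))  # ['ab']: the single redex occurrence
assert normal_form("aabbab", rules) == ""
```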
Term rewriting systems A term rewriting system (TRS) is a rewriting system whose objects are terms, which are expressions with nested sub-expressions. For example, the system shown under § Logic above is a term rewriting system. The terms in this system are composed of binary operators $(\vee )$ and $(\wedge )$ and the unary operator $(\neg )$. Also present in the rules are variables, which represent any possible term (though a single variable always represents the same term throughout a single rule). In contrast to string rewriting systems, whose objects are sequences of symbols, the objects of a term rewriting system form a term algebra. A term can be visualized as a tree of symbols, the set of admitted symbols being fixed by a given signature. Formal definition A rewrite rule is a pair of terms, commonly written as $l\rightarrow r$, to indicate that the left-hand side l can be replaced by the right-hand side r. A term rewriting system is a set R of such rules. A rule $l\rightarrow r$ can be applied to a term s if the left term l matches some subterm of s, that is, if there is some substitution $\sigma $ such that the subterm of $s$ rooted at some position p is the result of applying the substitution $\sigma $ to the term l. The subterm matching the left-hand side of the rule is called a redex or reducible expression.[13] The result term t of this rule application is then the result of replacing the subterm at position p in s by the term $r$ with the substitution $\sigma $ applied. In this case, $s$ is said to be rewritten in one step, or rewritten directly, to $t$ by the system $R$, formally denoted as $s\rightarrow _{R}t$, $s{\underset {R}{\rightarrow }}t$, or as $s{\overset {R}{\rightarrow }}t$ by some authors. If a term $t_{1}$ can be rewritten in several steps into a term $t_{n}$, that is, if $t_{1}{\underset {R}{\rightarrow }}t_{2}{\underset {R}{\rightarrow }}\cdots {\underset {R}{\rightarrow }}t_{n}$, the term $t_{1}$ is said to be rewritten to $t_{n}$, formally denoted as $t_{1}{\overset {+}{\underset {R}{\rightarrow }}}t_{n}$. In other words, the relation ${\overset {+}{\underset {R}{\rightarrow }}}$ is the transitive closure of the relation ${\underset {R}{\rightarrow }}$; often, the notation ${\overset {*}{\underset {R}{\rightarrow }}}$ is also used to denote the reflexive-transitive closure of ${\underset {R}{\rightarrow }}$, that is, $s{\overset {*}{\underset {R}{\rightarrow }}}t$ if $s=t$ or $s{\overset {+}{\underset {R}{\rightarrow }}}t$.[14] A term rewriting system given by a set $R$ of rules can be viewed as an abstract rewriting system as defined above, with terms as its objects and ${\underset {R}{\rightarrow }}$ as its rewrite relation. For example, $x*(y*z)\rightarrow (x*y)*z$ is a rewrite rule, commonly used to establish a normal form with respect to the associativity of $*$. That rule can be applied at the numerator in the term ${\frac {a*((a+1)*(a+2))}{1*(2*3)}}$ with the matching substitution $\{x\mapsto a,\;y\mapsto a+1,\;z\mapsto a+2\}$.[note 2] Applying that substitution to the rule's right-hand side yields the term $(a*(a+1))*(a+2)$, and replacing the numerator by that term yields ${\frac {(a*(a+1))*(a+2)}{1*(2*3)}}$, which is the result term of applying the rewrite rule. Altogether, applying the rewrite rule has achieved what is called "applying the associativity law for $*$ to ${\frac {a*((a+1)*(a+2))}{1*(2*3)}}$" in elementary algebra. Alternately, the rule could have been applied to the denominator of the original term, yielding ${\frac {a*((a+1)*(a+2))}{(1*2)*3}}$.
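The matching-and-replacement step just described can be spelled out in code. Below is an illustrative Python sketch (not from the article; terms are encoded as nested tuples and plain strings act as pattern variables, both assumptions made for the example) that matches the associativity rule against a term and applies the resulting substitution σ to the right-hand side:

```python
# Illustrative sketch, not from the article.

def is_var(t):
    """In this encoding, a bare string in a pattern is a variable."""
    return isinstance(t, str)

def match(pattern, term, subst=None):
    """Return a substitution sigma with sigma(pattern) == term, or None."""
    subst = dict(subst or {})
    if is_var(pattern):
        if pattern in subst and subst[pattern] != term:
            return None              # the same variable must match the same term
        subst[pattern] = term
        return subst
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(term, subst):
    """Apply a substitution to a term."""
    if is_var(term):
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(t, subst) for t in term[1:])

# The associativity rule x*(y*z) -> (x*y)*z, applied at the root of a*(b*c).
lhs = ("*", "x", ("*", "y", "z"))
rhs = ("*", ("*", "x", "y"), "z")
sigma = match(lhs, ("*", "a", ("*", "b", "c")))   # {'x': 'a', 'y': 'b', 'z': 'c'}
print(apply_subst(rhs, sigma))                    # ('*', ('*', 'a', 'b'), 'c')
```

A full rewriter would additionally search for a matching subterm at every position p, as the rewrite function in the arithmetic sketch above does.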
Termination Termination issues of rewrite systems in general are handled in Abstract rewriting system § Termination and convergence. For term rewriting systems in particular, the following additional subtleties are to be considered. Termination even of a system consisting of one rule with a linear left-hand side is undecidable.[15][16] Termination is also undecidable for systems using only unary function symbols; however, it is decidable for finite ground systems.[17] The following term rewrite system is normalizing,[note 3] but not terminating,[note 4] and not confluent:[18] ${\begin{aligned}f(x,x)&\rightarrow g(x),\\f(x,g(x))&\rightarrow b,\\h(c,x)&\rightarrow f(h(x,c),h(x,x)).\\\end{aligned}}$ The following two examples of terminating term rewrite systems are due to Toyama:[19] $f(0,1,x)\rightarrow f(x,x,x)$ and $g(x,y)\rightarrow x,$ $g(x,y)\rightarrow y.$ Their union is a non-terminating system, since ${\begin{aligned}&f(g(0,1),g(0,1),g(0,1))\\\rightarrow &f(0,g(0,1),g(0,1))\\\rightarrow &f(0,1,g(0,1))\\\rightarrow &f(g(0,1),g(0,1),g(0,1))\\\rightarrow &\cdots \end{aligned}}$ This result disproves a conjecture of Dershowitz,[20] who claimed that the union of two terminating term rewrite systems $R_{1}$ and $R_{2}$ is again terminating if all left-hand sides of $R_{1}$ and right-hand sides of $R_{2}$ are linear, and there are no "overlaps" between left-hand sides of $R_{1}$ and right-hand sides of $R_{2}$. All these properties are satisfied by Toyama's examples. See Rewrite order and Path ordering (term rewriting) for ordering relations used in termination proofs for term rewriting systems. Higher-order rewriting systems Higher-order rewriting systems are a generalization of first-order term rewriting systems to lambda terms, allowing higher-order functions and bound variables.[21] Various results about first-order TRSs can be reformulated for HRSs as well.[22] Graph rewriting systems Graph rewrite systems are another generalization of term rewrite systems, operating on graphs instead of (ground) terms and their corresponding tree representations. Trace rewriting systems Trace theory provides a means for discussing multiprocessing in more formal terms, such as via the trace monoid and the history monoid. Rewriting can be performed in trace systems as well. Philosophy Rewriting systems can be seen as programs that infer end-effects from a list of cause-effect relationships. In this way, rewriting systems can be considered to be automated causality provers. See also • Critical pair (logic) • Compiler • Knuth–Bendix completion algorithm • L-systems specify rewriting that is done in parallel. • Referential transparency in computer science • Regulated rewriting • Rho calculus • Interaction nets Notes 1. This variant of the previous rule is needed since the commutative law A∨B = B∨A cannot be turned into a rewrite rule. A rule like A∨B → B∨A would cause the rewrite system to be nonterminating. 2. since applying that substitution to the rule's left-hand side $x*(y*z)$ yields the numerator $a*((a+1)*(a+2))$ 3. i.e. for each term, some normal form exists, e.g. h(c,c) has the normal forms b and g(b), since h(c,c) → f(h(c,c),h(c,c)) → f(h(c,c),f(h(c,c),h(c,c))) → f(h(c,c),g(h(c,c))) → b, and h(c,c) → f(h(c,c),h(c,c)) → g(h(c,c)) → ... → g(b); neither b nor g(b) can be rewritten any further, therefore the system is not confluent 4. i.e., there are infinite derivations, e.g. h(c,c) → f(h(c,c),h(c,c)) → f(f(h(c,c),h(c,c)),h(c,c)) → f(f(f(h(c,c),h(c,c)),h(c,c)),h(c,c)) → ... Further reading • Baader, Franz; Nipkow, Tobias (1999). Term rewriting and all that. Cambridge University Press. ISBN 978-0-521-77920-3. 316 pages.
• Marc Bezem, Jan Willem Klop, Roel de Vrijer ("Terese"), Term Rewriting Systems ("TeReSe"), Cambridge University Press, 2003, ISBN 0-521-39115-6. This is the most recent comprehensive monograph. However, it uses a fair amount of not-yet-standard notation and definitions; for instance, the Church–Rosser property is defined to be identical with confluence. • Nachum Dershowitz and Jean-Pierre Jouannaud, "Rewrite Systems", Chapter 6 in Jan van Leeuwen (Ed.), Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics, Elsevier and MIT Press, 1990, ISBN 0-444-88074-7, pp. 243–320. The preprint of this chapter is freely available from the authors, but it is missing the figures. • Nachum Dershowitz and David Plaisted, "Rewriting", Chapter 9 in John Alan Robinson and Andrei Voronkov (Eds.), Handbook of Automated Reasoning, Volume 1. • Gérard Huet and Derek Oppen, Equations and Rewrite Rules, A Survey (1980), Stanford Verification Group, Report No. 15, Computer Science Department Report No. STAN-CS-80-785. • Jan Willem Klop, "Term Rewriting Systems", Chapter 1 in Samson Abramsky, Dov M. Gabbay and Tom Maibaum (Eds.), Handbook of Logic in Computer Science, Volume 2: Background: Computational Structures. • David Plaisted, "Equational reasoning and term rewriting systems", in Dov M. Gabbay, C. J. Hogger and John Alan Robinson (Eds.), Handbook of Logic in Artificial Intelligence and Logic Programming, Volume 1. • Jürgen Avenhaus and Klaus Madlener, "Term rewriting and equational reasoning", in Ranan B. Banerji (Ed.), Formal Techniques in Artificial Intelligence: A Sourcebook, Elsevier (1990). String rewriting • Ronald V. Book and Friedrich Otto, String-Rewriting Systems, Springer (1993). • Benjamin Benninghofen, Susanne Kemmerich and Michael M. Richter, Systems of Reductions, LNCS 277, Springer-Verlag (1987). Other • Martin Davis, Ron Sigal, Elaine J. Weyuker (1994), Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, 2nd edition, Academic Press, ISBN 0-12-206382-1. External links • The Rewriting Home Page • IFIP Working Group 1.6 • Researchers in rewriting by Aart Middeldorp, University of Innsbruck • Termination Portal • Maude System — a software implementation of a generic term rewriting system.[5] References 1. Joseph Goguen, "Proving and Rewriting", International Conference on Algebraic and Logic Programming, 1990, Nancy, France, pp. 1–24. 2. Sculthorpe, Neil; Frisby, Nicolas; Gill, Andy (2014). "The Kansas University rewrite engine" (PDF). Journal of Functional Programming. 24 (4): 434–473. doi:10.1017/S0956796814000185. ISSN 0956-7968. S2CID 16807490. Archived (PDF) from the original on 2017-09-22. Retrieved 2019-02-12. 3. Hsiang, Jieh; Kirchner, Hélène; Lescanne, Pierre; Rusinowitch, Michaël (1992). "The term rewriting approach to automated theorem proving". The Journal of Logic Programming. 14 (1–2): 71–99. doi:10.1016/0743-1066(92)90047-7. 4. Frühwirth, Thom (1998). "Theory and practice of constraint handling rules". The Journal of Logic Programming. 37 (1–3): 95–138. doi:10.1016/S0743-1066(98)10005-5. 5. Clavel, M.; Durán, F.; Eker, S.; Lincoln, P.; Martí-Oliet, N.; Meseguer, J.; Quesada, J.F. (2002). "Maude: Specification and programming in rewriting logic". Theoretical Computer Science. 285 (2): 187–243. doi:10.1016/S0304-3975(01)00359-0. 6. Kim Marriott; Peter J. Stuckey (1998). Programming with Constraints: An Introduction. MIT Press. pp. 436–. ISBN 978-0-262-13341-8. 7.
Jürgen Avenhaus; Klaus Madlener (1990). "Term Rewriting and Equational Reasoning". In R.B. Banerji (ed.). Formal Techniques in Artificial Intelligence. Sourcebook. Elsevier. pp. 1–43. Here: Example in sect.4.1, p.24. 8. Robert Freidin (1992). Foundations of Generative Syntax. MIT Press. ISBN 978-0-262-06144-5. 9. Book and Otto, p. 10. 10. Bezem et al., p. 7. 11. Bezem et al., p. 7. 12. Martin Davis et al. 1994, p. 178. 13. Klop, J. W. "Term Rewriting Systems" (PDF). Papers by Nachum Dershowitz and students. Tel Aviv University. p. 12. Archived (PDF) from the original on 15 August 2021. Retrieved 14 August 2021. 14. N. Dershowitz, J.-P. Jouannaud (1990). Jan van Leeuwen (ed.). Rewrite Systems. Handbook of Theoretical Computer Science. Vol. B. Elsevier. pp. 243–320; here: Sect. 2.3. 15. Max Dauchet (1989). "Simulation of Turing Machines by a Left-Linear Rewrite Rule". Proc. 3rd Int. Conf. on Rewriting Techniques and Applications. LNCS. Vol. 355. Springer. pp. 109–120. 16. Max Dauchet (Sep 1992). "Simulation of Turing machines by a regular rewrite rule". Theoretical Computer Science. 103 (2): 409–420. doi:10.1016/0304-3975(92)90022-8. 17. Gerard Huet, D.S. Lankford (Mar 1978). On the Uniform Halting Problem for Term Rewriting Systems (PDF) (Technical report). IRIA. p. 8. 283. Retrieved 16 June 2013. 18. Bernhard Gramlich (Jun 1993). "Relating Innermost, Weak, Uniform, and Modular Termination of Term Rewriting Systems". In Voronkov, Andrei (ed.). Proc. International Conference on Logic Programming and Automated Reasoning (LPAR). LNAI. Vol. 624. Springer. pp. 285–296. Archived from the original on 2016-03-04. Retrieved 2014-06-19. Here: Example 3.3. 19. Yoshihito Toyama (1987). "Counterexamples to Termination for the Direct Sum of Term Rewriting Systems" (PDF). Inf. Process. Lett. 25 (3): 141–143. doi:10.1016/0020-0190(87)90122-0. hdl:2433/99946. Archived (PDF) from the original on 2019-11-13. Retrieved 2019-11-13. 20. N. Dershowitz (1985). "Termination" (PDF). In Jean-Pierre Jouannaud (ed.). Proc. RTA. LNCS. Vol. 220. Springer. pp. 180–224. Archived (PDF) from the original on 2013-11-12. Retrieved 2013-06-16; here: p. 210. 21. Wolfram, D. A. (1993). The Clausal Theory of Types. Cambridge University Press. pp. 47–50. doi:10.1017/CBO9780511569906. ISBN 9780521395380. S2CID 42331173. 22. Nipkow, Tobias; Prehofer, Christian (1998). "Higher-Order Rewriting and Equational Reasoning". In Bibel, W.; Schmitt, P. (eds.). Automated Deduction - A Basis for Applications. Volume I: Foundations. Kluwer. pp. 399–430. Archived from the original on 2021-08-16. Retrieved 2021-08-16.
Irreducible fraction An irreducible fraction (or fraction in lowest terms, simplest form or reduced fraction) is a fraction in which the numerator and denominator are integers that have no common divisors other than 1 (and −1, when negative numbers are considered).[1] In other words, a fraction a/b is irreducible if and only if a and b are coprime, that is, if a and b have a greatest common divisor of 1. In higher mathematics, "irreducible fraction" may also refer to rational fractions such that the numerator and the denominator are coprime polynomials.[2] Every positive rational number can be represented as an irreducible fraction in exactly one way.[3] An equivalent definition is sometimes useful: if a and b are integers, then the fraction a/b is irreducible if and only if there is no other equal fraction c/d such that |c| < |a| or |d| < |b|, where |a| means the absolute value of a.[4] (Two fractions a/b and c/d are equal or equivalent if and only if ad = bc.) For example, 1/4, 5/6, and −101/100 are all irreducible fractions. On the other hand, 2/4 is reducible since it is equal in value to 1/2, and the numerator of 1/2 is less than the numerator of 2/4. A fraction that is reducible can be reduced by dividing both the numerator and denominator by a common factor. It can be fully reduced to lowest terms if both are divided by their greatest common divisor.[5] In order to find the greatest common divisor, the Euclidean algorithm or prime factorization can be used. The Euclidean algorithm is commonly preferred because it allows one to reduce fractions with numerators and denominators too large to be easily factored.[6] Examples ${\frac {120}{90}}={\frac {12}{9}}={\frac {4}{3}}$ In the first step both numbers were divided by 10, which is a factor common to both 120 and 90. In the second step, they were divided by 3. The final result, 4/3, is an irreducible fraction because 4 and 3 have no common factors other than 1. The original fraction could have also been reduced in a single step by using the greatest common divisor of 90 and 120, which is 30. As 120 ÷ 30 = 4, and 90 ÷ 30 = 3, one gets ${\frac {120}{90}}={\frac {4}{3}}$ Which method is faster "by hand" depends on the fraction and the ease with which common factors are spotted. In case a denominator and numerator remain that are too large to ensure they are coprime by inspection, a greatest common divisor computation is needed anyway to ensure the fraction is actually irreducible.
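The reduction procedure is straightforward to implement; the following sketch (illustrative, not from the article) uses the Euclidean algorithm via Python's math.gcd:

```python
# Illustrative sketch, not from the article: reduce a fraction to lowest terms.
from math import gcd

def reduce_fraction(a, b):
    """Return (a, b) divided by gcd(a, b), normalized to a positive denominator."""
    if b == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    g = gcd(a, b)
    a, b = a // g, b // g
    if b < 0:                      # e.g. 2/-3 becomes -2/3
        a, b = -a, -b
    return a, b

print(reduce_fraction(120, 90))    # (4, 3), matching the worked example
```

Python's built-in fractions.Fraction performs the same normalization automatically on construction.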
Uniqueness Every rational number has a unique representation as an irreducible fraction with a positive denominator[3] (however 2/3 = −2/−3, although both are irreducible). Uniqueness is a consequence of the unique prime factorization of integers, since a/b = c/d implies ad = bc, and so both sides of the latter must share the same prime factorization; yet a and b share no prime factors, so the set of prime factors of a (with multiplicity) is a subset of those of c and vice versa, meaning a = c and, by the same argument, b = d. Applications The fact that any rational number has a unique representation as an irreducible fraction is utilized in various proofs of the irrationality of the square root of 2 and of other irrational numbers. For example, one proof notes that if √2 could be represented as a ratio of integers, then it would have in particular the fully reduced representation a/b where a and b are the smallest possible; but given that a/b equals √2, so does (2b − a)/(a − b) (since cross-multiplying this with a/b shows that they are equal). Since a > b (because √2 is greater than 1), the latter is a ratio of two smaller integers. This is a contradiction, so the premise that the square root of two has a representation as the ratio of two integers is false. Generalization The notion of irreducible fraction generalizes to the field of fractions of any unique factorization domain: any element of such a field can be written as a fraction in which denominator and numerator are coprime, by dividing both by their greatest common divisor.[7] This applies notably to rational expressions over a field. The irreducible fraction for a given element is unique up to multiplication of denominator and numerator by the same invertible element. In the case of the rational numbers this means that any number has two irreducible fractions, related by a change of sign of both numerator and denominator; this ambiguity can be removed by requiring the denominator to be positive. In the case of rational functions the denominator could similarly be required to be a monic polynomial.[8] See also • Anomalous cancellation, an erroneous arithmetic procedure that produces the correct irreducible fraction by cancelling digits of the original unreduced form. • Diophantine approximation, the approximation of real numbers by rational numbers. References 1. Stepanov, S. A. (2001) [1994], "Fraction", Encyclopedia of Mathematics, EMS Press. 2. E.g., see Laudal, Olav Arnfinn; Piene, Ragni (2004), The Legacy of Niels Henrik Abel: The Abel Bicentennial, Oslo, June 3-8, 2002, Springer, p. 155, ISBN 9783540438267. 3. Scott, William (1844), Elements of Arithmetic and Algebra: For the Use of the Royal Military College, College text books, Sandhurst. Royal Military College, vol. 1, Longman, Brown, Green, and Longmans, p. 75. 4. Scott (1844), p. 74. 5. Sally, Judith D.; Sally, Paul J., Jr. (2012), "9.1. Reducing a fraction to lowest terms", Integers, Fractions, and Arithmetic: A Guide for Teachers, MSRI mathematical circles library, vol. 10, American Mathematical Society, pp. 131–134, ISBN 9780821887981. 6. Cuoco, Al; Rotman, Joseph (2013), Learning Modern Algebra, Mathematical Association of America Textbooks, Mathematical Association of America, p. 33, ISBN 9781939512017. 7. Garrett, Paul B. (2007), Abstract Algebra, CRC Press, p. 183, ISBN 9781584886907. 8. Grillet, Pierre Antoine (2007), Abstract Algebra, Graduate Texts in Mathematics, vol. 242, Springer, Lemma 9.2, p. 183, ISBN 9780387715681. External links • Weisstein, Eric W. "Reduced Fraction". MathWorld.
Irreducibility (mathematics) In mathematics, the concept of irreducibility is used in several ways. • A polynomial over a field may be an irreducible polynomial if it cannot be factored over that field. • In abstract algebra, irreducible can be an abbreviation for irreducible element of an integral domain; for example an irreducible polynomial. • In representation theory, an irreducible representation is a nontrivial representation with no nontrivial proper subrepresentations. Similarly, an irreducible module is another name for a simple module. • Absolutely irreducible is a term applied to mean irreducible, even after any finite extension of the field of coefficients. It applies in various situations, for example to irreducibility of a linear representation, or of an algebraic variety, where it means just the same as irreducible over an algebraic closure. • In commutative algebra, a commutative ring R is irreducible if its prime spectrum, that is, the topological space Spec R, is an irreducible topological space. • A matrix is irreducible if it is not similar via a permutation to a block upper triangular matrix (that has more than one block of positive size). (Replacing non-zero entries in the matrix by one, and viewing the matrix as the adjacency matrix of a directed graph, the matrix is irreducible if and only if that directed graph is strongly connected; see the sketch after this list.) • Also, a Markov chain is irreducible if there is a non-zero probability of transitioning (even if in more than one step) from any state to any other state. • In the theory of manifolds, an n-manifold is irreducible if any embedded (n − 1)-sphere bounds an embedded n-ball. Implicit in this definition is the use of a suitable category, such as the category of differentiable manifolds or the category of piecewise-linear manifolds. The notions of irreducibility in algebra and manifold theory are related. An n-manifold is called prime if it cannot be written as a connected sum of two n-manifolds (neither of which is an n-sphere). An irreducible manifold is thus prime, although the converse does not hold. From an algebraist's perspective, prime manifolds should be called "irreducible"; however, the topologist (in particular the 3-manifold topologist) finds the definition above more useful. The only compact, connected 3-manifolds that are prime but not irreducible are the trivial 2-sphere bundle over S1 and the twisted 2-sphere bundle over S1. See, for example, Prime decomposition (3-manifold). • A topological space is irreducible if it is not the union of two proper closed subsets. This notion is used in algebraic geometry, where spaces are equipped with the Zariski topology; it is not of much significance for Hausdorff spaces. See also irreducible component, algebraic variety. • In universal algebra, irreducible can refer to the inability to represent an algebraic structure as a composition of simpler structures using a product construction; for example subdirectly irreducible. • A 3-manifold is P²-irreducible if it is irreducible and contains no 2-sided $\mathbb {R} P^{2}$ (real projective plane). • An irreducible fraction (or fraction in lowest terms) is a vulgar fraction in which the numerator and denominator are smaller than those in any other equivalent fraction.
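The following sketch (illustrative, not part of the original list) tests matrix irreducibility through the directed-graph characterization given above, checking strong connectivity with two depth-first searches:

```python
# Illustrative sketch, not from the article: a square matrix is irreducible iff the
# directed graph with an edge i -> j whenever M[i][j] != 0 is strongly connected.

def reachable(adj, start):
    """Set of vertices reachable from start by depth-first search."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(w for w in adj[v] if w not in seen)
    return seen

def is_irreducible(M):
    n = len(M)
    adj = [[j for j in range(n) if M[i][j] != 0] for i in range(n)]   # edges
    radj = [[j for j in range(n) if M[j][i] != 0] for i in range(n)]  # reversed edges
    # Strongly connected iff every vertex is reachable from vertex 0 and vice versa.
    return len(reachable(adj, 0)) == n and len(reachable(radj, 0)) == n

print(is_irreducible([[0, 1], [1, 0]]))  # True:  0 <-> 1
print(is_irreducible([[1, 1], [0, 1]]))  # False: block upper triangular
```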
Irreducible representation In mathematics, specifically in the representation theory of groups and algebras, an irreducible representation $(\rho ,V)$ or irrep of an algebraic structure $A$ is a nonzero representation that has no proper nontrivial subrepresentation $(\rho |_{W},W)$, with $W\subset V$ closed under the action of $\{\rho (a):a\in A\}$. Every finite-dimensional unitary representation on a Hilbert space $V$ is the direct sum of irreducible representations. Irreducible representations are always indecomposable (i.e. cannot be decomposed further into a direct sum of representations), but the converse may not hold, e.g. the two-dimensional representation of the real numbers acting by upper triangular unipotent matrices is indecomposable but reducible. History Group representation theory was generalized by Richard Brauer from the 1940s to give modular representation theory, in which the matrix operators act on a vector space over a field $K$ of arbitrary characteristic, rather than a vector space over the field of real numbers or over the field of complex numbers. The structure analogous to an irreducible representation in the resulting theory is a simple module. Overview Further information: Group representation Let $\rho $ be a representation i.e. a homomorphism $\rho :G\to GL(V)$ of a group $G$ where $V$ is a vector space over a field $F$. If we pick a basis $B$ for $V$, $\rho $ can be thought of as a function (a homomorphism) from a group into a set of invertible matrices and in this context is called a matrix representation. However, it simplifies things greatly if we think of the space $V$ without a basis. A linear subspace $W\subset V$ is called $G$-invariant if $\rho (g)w\in W$ for all $g\in G$ and all $w\in W$. The co-restriction of $\rho $ to the general linear group of a $G$-invariant subspace $W\subset V$ is known as a subrepresentation. A representation $\rho :G\to GL(V)$ is said to be irreducible if it has only trivial subrepresentations (all representations can form a subrepresentation with the trivial $G$-invariant subspaces, e.g. the whole vector space $V$, and {0}).
If there is a proper nontrivial invariant subspace, $\rho $ is said to be reducible. Notation and terminology of group representations Group elements can be represented by matrices, although the term "represented" has a specific and precise meaning in this context. A representation of a group is a mapping from the group elements to the general linear group of matrices. As notation, let a, b, c, ... denote elements of a group G with group product signified without any symbol, so ab is the group product of a and b and is also an element of G, and let representations be indicated by D. The representation of a is written as $D(a)={\begin{pmatrix}D(a)_{11}&D(a)_{12}&\cdots &D(a)_{1n}\\D(a)_{21}&D(a)_{22}&\cdots &D(a)_{2n}\\\vdots &\vdots &\ddots &\vdots \\D(a)_{n1}&D(a)_{n2}&\cdots &D(a)_{nn}\\\end{pmatrix}}$ By definition of group representations, the representation of a group product is translated into matrix multiplication of the representations: $D(ab)=D(a)D(b)$ If e is the identity element of the group (so that ae = ea = a, etc.), then D(e) is an identity matrix, or identically a block matrix of identity matrices, since we must have $D(ea)=D(ae)=D(a)D(e)=D(e)D(a)=D(a)$ and similarly for all other group elements. The last two statements correspond to the requirement that D is a group homomorphism. Reducible and irreducible representations A representation is reducible if it contains a nontrivial G-invariant subspace; that is to say, all the matrices $D(a)$ can be put in upper triangular block form by the same invertible matrix $P$. In other words, there is a similarity transformation: $D'(a)\equiv P^{-1}D(a)P,$ which maps every matrix in the representation into the same pattern of upper triangular blocks; each leading diagonal block in this pattern then forms a subrepresentation of the group. That is to say, if the representation is, for example, of dimension 2, then we have: $D'(a)=P^{-1}D(a)P={\begin{pmatrix}D^{(11)}(a)&D^{(12)}(a)\\0&D^{(22)}(a)\end{pmatrix}},$ where $D^{(11)}(a)$ is a nontrivial subrepresentation. If we are able to find a matrix $P$ that makes $D^{(12)}(a)=0$ as well, then $D(a)$ is not only reducible but also decomposable. Notice: Even if a representation is reducible, its matrix representation may still not be the upper triangular block form. It will only have this form if we choose a suitable basis, which can be obtained by applying the matrix $P^{-1}$ above to the standard basis. Decomposable and indecomposable representations A representation is decomposable if all the matrices $D(a)$ can be put in block-diagonal form by the same invertible matrix $P$. In other words, there is a similarity transformation:[1] $D'(a)\equiv P^{-1}D(a)P,$ which diagonalizes every matrix in the representation into the same pattern of diagonal blocks. Each such block is then a group subrepresentation independent from the others. The representations D(a) and D′(a) are said to be equivalent representations.[2] The (k-dimensional, say) representation can be decomposed into a direct sum of k > 1 matrices: $D'(a)=P^{-1}D(a)P={\begin{pmatrix}D^{(1)}(a)&0&\cdots &0\\0&D^{(2)}(a)&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &D^{(k)}(a)\\\end{pmatrix}}=D^{(1)}(a)\oplus D^{(2)}(a)\oplus \cdots \oplus D^{(k)}(a),$ so D(a) is decomposable, and it is customary to label the decomposed matrices by a superscript in brackets, as in D(n)(a) for n = 1, 2, ..., k, although some authors just write the numerical label without parentheses.
The dimension of D(a) is the sum of the dimensions of the blocks: $\dim[D(a)]=\dim[D^{(1)}(a)]+\dim[D^{(2)}(a)]+\cdots +\dim[D^{(k)}(a)].$ If this is not possible, i.e. k = 1, then the representation is indecomposable.[1][3] Notice: Even if a representation is decomposable, its matrix representation may not be the diagonal block form. It will only have this form if we choose a suitable basis, which can be obtained by applying the matrix $P^{-1}$ above to the standard basis. Connection between irreducible representation and indecomposable representation An irreducible representation is by nature an indecomposable one. However, the converse may fail; under certain conditions, an indecomposable representation is necessarily irreducible: • When the group $G$ is finite and the representation is over the field $\mathbb {C} $, an indecomposable representation is an irreducible representation.[4] • When the group $G$ is finite and the representation is over a field $K$ with $\operatorname {char} (K)\nmid |G|$, an indecomposable representation is an irreducible representation. Examples of irreducible representations Trivial representation All groups $G$ have a one-dimensional, irreducible trivial representation by mapping all group elements to the identity transformation. One-dimensional representation Any one-dimensional representation is irreducible since it has no proper nontrivial subspaces. Irreducible complex representations The irreducible complex representations of a finite group G can be characterized using results from character theory. In particular, all complex representations decompose as a direct sum of irreps, and the number of irreps of $G$ is equal to the number of conjugacy classes of $G$.[5] • The irreducible complex representations of $\mathbb {Z} /n\mathbb {Z} $ are exactly given by the maps $1\mapsto \gamma $, where $\gamma $ is an $n$th root of unity. • Let $V$ be an $n$-dimensional complex representation of $S_{n}$ with basis $\{v_{i}\}_{i=1}^{n}$. Then $V$ decomposes as a direct sum of the irreps $V_{\text{triv}}=\mathbb {C} \left(\sum _{i=1}^{n}v_{i}\right)$ and the orthogonal subspace given by $V_{\text{std}}=\left\{\sum _{i=1}^{n}a_{i}v_{i}:a_{i}\in \mathbb {C} ,\sum _{i=1}^{n}a_{i}=0\right\}.$ The former irrep is one-dimensional and isomorphic to the trivial representation of $S_{n}$. The latter is $n-1$ dimensional and is known as the standard representation of $S_{n}$.[5] • Let $G$ be a group. The regular representation of $G$ is the free complex vector space on the basis $\{e_{g}\}_{g\in G}$ with the group action $g\cdot e_{g'}=e_{gg'}$, denoted $\mathbb {C} G.$ All irreducible representations of $G$ appear in the decomposition of $\mathbb {C} G$ as a direct sum of irreps. Example of an irreducible representation over $\mathbb {F} _{p}$ • Let $G$ be a $p$-group and $V=\mathbb {F} _{p}^{n}$ be a finite-dimensional irreducible representation of $G$ over $\mathbb {F} _{p}$. By the orbit–stabilizer theorem, the orbit of every element of $V$ under the action of the $p$-group $G$ has size a power of $p$. Since the sizes of all these orbits sum up to the size of $V$ (itself a power of $p$), and $0\in V$ lies in an orbit of size 1 containing only itself, there must be other orbits of size 1 for the sum to be divisible by $p$. That is, there exists some nonzero $v\in V$ such that $gv=v$ for all $g\in G$. The fixed vectors form a nonzero invariant subspace, which by irreducibility must be all of $V$; then every subspace of $V$ is invariant, which forces every irreducible representation of a $p$-group over $\mathbb {F} _{p}$ to be one-dimensional.
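The standard representation can be checked for irreducibility numerically. The following Python sketch (illustrative, not from the article) uses the character-theory criterion that a complex representation of a finite group is irreducible exactly when its character χ satisfies ⟨χ, χ⟩ = 1; the character of the standard representation of $S_{n}$ at a permutation g is (number of fixed points of g) − 1:

```python
# Illustrative sketch, not from the article: <chi, chi> for the standard rep of S_n.
from itertools import permutations

def std_character_norm(n):
    """Average of |chi(g)|^2 over S_n for the standard representation."""
    group = list(permutations(range(n)))
    total = 0
    for g in group:
        fixed = sum(1 for i in range(n) if g[i] == i)
        chi = fixed - 1            # permutation character minus the trivial summand
        total += chi * chi         # characters here are real, so no conjugation needed
    return total / len(group)

print(std_character_norm(3))  # 1.0 -> irreducible
print(std_character_norm(4))  # 1.0 -> irreducible
```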
Applications in theoretical physics and chemistry In quantum physics and quantum chemistry, each set of degenerate eigenstates of the Hamiltonian operator comprises a vector space V for a representation of the symmetry group of the Hamiltonian, a "multiplet", best studied through reduction to its irreducible parts. Identifying the irreducible representations therefore allows one to label the states and predict how they will split under perturbations or transition to other states in V. Thus, in quantum mechanics, irreducible representations of the symmetry group of the system partially or completely label the energy levels of the system, allowing the selection rules to be determined.[6] Lie groups Main article: Representation theory of Lie groups Lorentz group Main article: Representation theory of the Lorentz group The irreps of D(K) and D(J), where J is the generator of rotations and K the generator of boosts, can be used to build up the spin representations of the Lorentz group, because they are related to the spin matrices of quantum mechanics. This allows relativistic wave equations to be derived.[7] See also Associative algebras • Simple module • Indecomposable module • Representation of an associative algebra Lie groups • Representation theory of Lie algebras • Representation theory of SU(2) • Representation theory of SL2(R) • Representation theory of the Galilean group • Representation theory of diffeomorphism groups • Representation theory of the Poincaré group • Theorem of the highest weight References 1. E. P. Wigner (1959). Group theory and its application to the quantum mechanics of atomic spectra. Pure and applied physics. Academic press. p. 73. 2. W. K. Tung (1985). Group Theory in Physics. World Scientific. p. 32. ISBN 978-997-1966-560. 3. W. K. Tung (1985). Group Theory in Physics. World Scientific. p. 33. ISBN 978-997-1966-560. 4. Artin, Michael (2011). Algebra (2nd ed.). Pearson. p. 295. ISBN 978-0132413770. 5. Serre, Jean-Pierre (1977). Linear Representations of Finite Groups. Springer-Verlag. ISBN 978-0-387-90190-9. 6. "A Dictionary of Chemistry, Answers.com" (6th ed.). Oxford Dictionary of Chemistry. 7. T. Jaroszewicz; P. S. Kurzepa (1992). "Geometry of spacetime propagation of spinning particles". Annals of Physics. 216 (2): 226–267. Bibcode:1992AnPhy.216..226J. doi:10.1016/0003-4916(92)90176-M. Books • H. Weyl (1950). The theory of groups and quantum mechanics. Courier Dover Publications. p. 203. ISBN 978-0-486-60269-1. • P. R. Bunker; Per Jensen (2004). Fundamentals of molecular symmetry. CRC Press. ISBN 0-7503-0941-5. • A. D. Boardman; D. E. O'Conner; P. A. Young (1973). Symmetry and its applications in science. McGraw Hill. ISBN 978-0-07-084011-9. • V. Heine (2007). Group theory in quantum mechanics: an introduction to its present usage. Dover. ISBN 978-0-07-084011-9. • V. Heine (1993). Group Theory in Quantum Mechanics: An Introduction to Its Present Usage. Courier Dover Publications. ISBN 978-048-6675-855. • E. Abers (2004). Quantum Mechanics. Addison Wesley. p. 425. ISBN 978-0-13-146100-0. • B. R. Martin, G. Shaw (3 December 2008). Particle Physics (3rd ed.). Manchester Physics Series, John Wiley & Sons. p. 3. ISBN 978-0-470-03294-7. • Weinberg, S. (1995), The Quantum Theory of Fields, vol. 1, Cambridge university press, pp. 230–231, ISBN 978-0-521-55001-7 • Weinberg, S. (1996), The Quantum Theory of Fields, vol. 2, Cambridge university press, ISBN 978-0-521-55002-4 • Weinberg, S.
(2000), The Quantum Theory of Fields, vol. 3, Cambridge university press, ISBN 978-0-521-66000-6 • R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4. • P. W. Atkins (1970). Molecular Quantum Mechanics (Parts 1 and 2): An introduction to quantum chemistry. Vol. 1. Oxford University Press. pp. 125–126. ISBN 978-0-19-855129-4. Articles • Bargmann, V.; Wigner, E. P. (1948). "Group theoretical discussion of relativistic wave equations". Proc. Natl. Acad. Sci. U.S.A. 34 (5): 211–23. Bibcode:1948PNAS...34..211B. doi:10.1073/pnas.34.5.211. PMC 1079095. PMID 16578292. • E. Wigner (1937). "On Unitary Representations Of The Inhomogeneous Lorentz Group" (PDF). Annals of Mathematics. 40 (1): 149–204. Bibcode:1939AnMat..40..149W. doi:10.2307/1968551. JSTOR 1968551. MR 1503456. S2CID 121773411. Archived from the original (PDF) on 2015-10-04. Retrieved 2013-07-07. Further reading • Artin, Michael (1999). "Noncommutative Rings" (PDF). Chapter V. External links • "Commission on Mathematical and Theoretical Crystallography, Summer Schools on Mathematical Crystallography" (PDF). 2010. • van Beveren, Eef (2012). "Some notes on group theory" (PDF). Archived from the original (PDF) on 2011-05-20. Retrieved 2013-07-07. • Teleman, Constantin (2005). "Representation Theory" (PDF). • Finley. "Some Notes on Young Tableaux as useful for irreps of su(n)" (PDF). • Hunt (2008). "Irreducible Representation (IR) Symmetry Labels" (PDF). • Dermisek, Radovan (2008). "Representations of Lorentz Group" (PDF). Archived from the original (PDF) on 2018-11-23. Retrieved 2013-07-07. • Maciejko, Joseph (2007). "Representations of Lorentz and Poincaré groups" (PDF). • Woit, Peter (2015). "Quantum Mechanics for Mathematicians: Representations of the Lorentz Group" (PDF)., see chapter 40 • Drake, Kyle; Feinberg, Michael; Guild, David; Turetsky, Emma (2009). "Representations of the Symmetry Group of Spacetime" (PDF). • Finley. "Lie Algebra for the Poincaré, and Lorentz, Groups" (PDF). Archived from the original (PDF) on 2012-06-17. • Bekaert, Xavier; Boulanger, Niclas (2006). "The unitary representations of the Poincaré group in any spacetime dimension". arXiv:hep-th/0611263. • "McGraw-Hill dictionary of scientific and technical terms". Answers.com.
Reducing subspace In linear algebra, a reducing subspace $W$ of a linear map $T:V\to V$ from a Hilbert space $V$ to itself is an invariant subspace of $T$ whose orthogonal complement $W^{\perp }$ is also an invariant subspace of $T.$ That is, $T(W)\subseteq W$ and $T(W^{\perp })\subseteq W^{\perp }.$ One says that the subspace $W$ reduces the map $T.$ One says that a linear map is reducible if it has a nontrivial reducing subspace. Otherwise one says it is irreducible. If $V$ is of finite dimension $r$ and $W$ is a reducing subspace of the map $T:V\to V$ represented under basis $B$ by matrix $M\in \mathbb {R} ^{r\times r}$ then $M$ can be expressed as the sum $M=P_{W}MP_{W}+P_{W^{\perp }}MP_{W^{\perp }}$ where $P_{W}\in \mathbb {R} ^{r\times r}$ is the matrix of the orthogonal projection from $V$ to $W$ and $P_{W^{\perp }}=I-P_{W}$ is the matrix of the projection onto $W^{\perp }.$[1] (Here $I\in \mathbb {R} ^{r\times r}$ is the identity matrix.) Furthermore, $V$ has an orthonormal basis $B'$ with a subset that is an orthonormal basis of $W$. If $Q\in \mathbb {R} ^{r\times r}$ is the transition matrix from $B$ to $B'$ then with respect to $B'$ the matrix $Q^{-1}MQ$ representing $T$ is a block-diagonal matrix $Q^{-1}MQ=\left[{\begin{array}{cc}A&0\\0&B\end{array}}\right]$ with $A\in \mathbb {R} ^{d\times d},$ where $d=\dim W$, and $B\in \mathbb {R} ^{(r-d)\times (r-d)}.$
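As a concrete numerical check (illustrative, not part of the original article, and assuming NumPy is available), the projection-sum decomposition and the block-diagonal form can both be verified on a small symmetric map for which $W=\operatorname{span}((1,1))$ and its orthogonal complement are invariant:

```python
# Illustrative sketch, not from the article.
import numpy as np

M = np.array([[3.0, 1.0],
              [1.0, 3.0]])                       # W = span((1,1)) reduces this map

w = np.array([1.0, 1.0]) / np.sqrt(2)
P_W = np.outer(w, w)                             # orthogonal projection onto W
P_Wp = np.eye(2) - P_W                           # projection onto W-perp

# The decomposition M = P_W M P_W + P_Wp M P_Wp from the text:
assert np.allclose(M, P_W @ M @ P_W + P_Wp @ M @ P_Wp)

# In an orthonormal basis adapted to W and W-perp, the matrix is block-diagonal
# (here the blocks are 1x1 since dim W = 1):
Q = np.column_stack([w, np.array([1.0, -1.0]) / np.sqrt(2)])
print(np.round(np.linalg.inv(Q) @ M @ Q, 10))    # diag(4, 2)
```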
References 1. R. Dennis Cook (2018). An Introduction to Envelopes: Dimension Reduction for Efficient Estimation in Multivariate Statistics. Wiley. p. 7.
Reduct In universal algebra and in model theory, a reduct of an algebraic structure is obtained by omitting some of the operations and relations of that structure. The opposite of "reduct" is "expansion". Definition Let A be an algebraic structure (in the sense of universal algebra) or a structure in the sense of model theory, organized as a set X together with an indexed family of operations and relations φi on that set, with index set I. Then the reduct of A defined by a subset J of I is the structure consisting of the set X and the J-indexed family of operations and relations whose j-th operation or relation for j ∈ J is the j-th operation or relation of A. That is, this reduct is the structure A with the omission of those operations and relations φi for which i is not in J. A structure A is an expansion of B just when B is a reduct of A. That is, reduct and expansion are mutual converses. Examples The monoid (Z, +, 0) of integers under addition is a reduct of the group (Z, +, −, 0) of integers under addition and negation, obtained by omitting negation. By contrast, the monoid (N, +, 0) of natural numbers under addition is not the reduct of any group. Conversely, the group (Z, +, −, 0) is the expansion of the monoid (Z, +, 0), expanding it with the operation of negation.
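The definition can be modelled directly in code. The following sketch is illustrative only (not from the article; the dictionary encoding of a structure is an assumption made for the example):

```python
# Illustrative sketch, not from the article: a structure as a carrier set plus an
# indexed family of operations; a reduct keeps only the operations indexed by J.

group_Z = {
    "carrier": "Z",
    "ops": {"+": lambda a, b: a + b, "-": lambda a: -a, "0": 0},
}

def reduct(structure, J):
    """The reduct obtained by omitting every operation whose index is not in J."""
    return {
        "carrier": structure["carrier"],
        "ops": {name: op for name, op in structure["ops"].items() if name in J},
    }

monoid_Z = reduct(group_Z, {"+", "0"})   # the monoid (Z, +, 0) from the example
print(sorted(monoid_Z["ops"]))           # ['+', '0']
```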
References • Burris, Stanley N.; H. P. Sankappanavar (1981). A Course in Universal Algebra. Springer. ISBN 3-540-90578-2. • Hodges, Wilfrid (1993). Model theory. Cambridge University Press. ISBN 0-521-30442-3.
Multiple integral In mathematics (specifically multivariable calculus), a multiple integral is a definite integral of a function of several real variables, for instance, f(x, y) or f(x, y, z). Integrals of a function of two variables over a region in $\mathbb {R} ^{2}$ (the real-number plane) are called double integrals, and integrals of a function of three variables over a region in $\mathbb {R} ^{3}$ (real-number 3D space) are called triple integrals.[1] For multiple integrals of a single-variable function, see the Cauchy formula for repeated integration. Introduction Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function (in three-dimensional Cartesian space, where z = f(x, y)) and the plane which contains its domain.[1] If there are more variables, a multiple integral will yield hypervolumes of multidimensional functions. Multiple integration of a function in n variables: f(x1, x2, ..., xn) over a domain D is most commonly represented by nested integral signs in the reverse order of execution (the leftmost integral sign is computed last), followed by the function and integrand arguments in proper order (the integral with respect to the rightmost argument is computed last).
The domain of integration is either represented symbolically for every argument over each integral sign, or is abbreviated by a variable at the rightmost integral sign:[2] $\int \cdots \int _{\mathbf {D} }\,f(x_{1},x_{2},\ldots ,x_{n})\,dx_{1}\!\cdots dx_{n}$ Since the concept of an antiderivative is only defined for functions of a single real variable, the usual definition of the indefinite integral does not immediately extend to the multiple integral. Mathematical definition For n > 1, consider a so-called "half-open" n-dimensional hyperrectangular domain T, defined as: $T=[a_{1},b_{1})\times [a_{2},b_{2})\times \cdots \times [a_{n},b_{n})\subseteq \mathbb {R} ^{n}.$ Partition each interval [aj, bj) into a finite family Ij of non-overlapping subintervals ijα, with each subinterval closed at the left end, and open at the right end. Then the finite family of subrectangles C given by $C=I_{1}\times I_{2}\times \cdots \times I_{n}$ is a partition of T; that is, the subrectangles Ck are non-overlapping and their union is T. Let f : T → R be a function defined on T. Consider a partition C of T as defined above, such that C is a family of m subrectangles C1, ..., Cm and $T=C_{1}\cup C_{2}\cup \cdots \cup C_{m}$ We can approximate the total (n + 1)-dimensional volume bounded below by the n-dimensional hyperrectangle T and above by the n-dimensional graph of f with the following Riemann sum: $\sum _{k=1}^{m}f(P_{k})\,\operatorname {m} (C_{k})$ where Pk is a point in Ck and m(Ck) is the product of the lengths of the intervals whose Cartesian product is Ck, also known as the measure of Ck. The diameter of a subrectangle Ck is the largest of the lengths of the intervals whose Cartesian product is Ck. The diameter of a given partition of T is defined as the largest of the diameters of the subrectangles in the partition. Intuitively, as the diameter of the partition C is restricted smaller and smaller, the number of subrectangles m gets larger, and the measure m(Ck) of each subrectangle grows smaller. The function f is said to be Riemann integrable if the limit $S=\lim _{\delta \to 0}\sum _{k=1}^{m}f(P_{k})\,\operatorname {m} (C_{k})$ exists, where the limit is taken over all possible partitions of T of diameter at most δ.[3] If f is Riemann integrable, S is called the Riemann integral of f over T and is denoted $\int \cdots \int _{T}\,f(x_{1},x_{2},\ldots ,x_{n})\,dx_{1}\!\cdots dx_{n}$ Frequently this notation is abbreviated as $\int _{T}\!f(\mathbf {x} )\,d^{n}\mathbf {x} .$ where x represents the n-tuple (x1, …, xn) and dnx is the n-dimensional volume differential. The Riemann integral of a function defined over an arbitrary bounded n-dimensional set can be defined by extending that function to a function defined over a half-open rectangle whose values are zero outside the domain of the original function. Then the integral of the original function over the original domain is defined to be the integral of the extended function over its rectangular domain, if it exists. In what follows the Riemann integral in n dimensions will be called the multiple integral.
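The definition translates directly into a numerical approximation. The following Python sketch (illustrative, not from the article) implements the Riemann sum just defined for n = 2, sampling each subrectangle at its midpoint (the choice of the points Pk is free in the definition):

```python
# Illustrative sketch, not from the article: midpoint Riemann sum for a double integral.

def riemann_2d(f, a1, b1, a2, b2, n=400):
    """Riemann sum of f over [a1,b1) x [a2,b2) with an n-by-n uniform partition."""
    hx, hy = (b1 - a1) / n, (b2 - a2) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = a1 + (i + 0.5) * hx        # midpoint P_k of subrectangle C_k
            y = a2 + (j + 0.5) * hy
            total += f(x, y) * hx * hy     # f(P_k) * m(C_k)
    return total

# The integral of x*y over the unit square is exactly 1/4.
print(riemann_2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0))  # ~0.25
```

Refining the partition (increasing n) shrinks its diameter, and the sum converges to the Riemann integral whenever the integrand is Riemann integrable.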
Properties Multiple integrals have many properties common to those of integrals of functions of one variable (linearity, commutativity, monotonicity, and so on). One important property of multiple integrals is that the value of the integral is independent of the order of integration under certain conditions. This property is popularly known as Fubini's theorem.[4] Particular cases In the case of $T\subseteq \mathbb {R} ^{2}$, the integral $l=\iint _{T}f(x,y)\,dx\,dy$ is the double integral of f on T, and if $T\subseteq \mathbb {R} ^{3}$ the integral $l=\iiint _{T}f(x,y,z)\,dx\,dy\,dz$ is the triple integral of f on T. Notice that, by convention, the double integral has two integral signs, and the triple integral has three; this is a notational convention which is convenient when computing a multiple integral as an iterated integral, as shown later in this article. Methods of integration The resolution of problems with multiple integrals consists, in most cases, of finding a way to reduce the multiple integral to an iterated integral, a series of integrals of one variable, each being directly solvable. For continuous functions, this is justified by Fubini's theorem. Sometimes, it is possible to obtain the result of the integration by direct examination without any calculations. The following are some simple methods of integration:[1] Integrating constant functions When the integrand is a constant function c, the integral is equal to the product of c and the measure of the domain of integration. If c = 1 and the domain is a subregion of R2, the integral gives the area of the region, while if the domain is a subregion of R3, the integral gives the volume of the region. Example. Let f(x, y) = 2 and $D=\left\{(x,y)\in \mathbb {R} ^{2}\ :\ 2\leq x\leq 4\ ;\ 3\leq y\leq 6\right\}$ in which case $\int _{3}^{6}\int _{2}^{4}\ 2\ dx\,dy=2\int _{3}^{6}\int _{2}^{4}\ 1\ dx\,dy=2\cdot \operatorname {area} (D)=2\cdot (2\cdot 3)=12,$ since by definition we have: $\int _{3}^{6}\int _{2}^{4}\ 1\ dx\,dy=\operatorname {area} (D).$ Use of symmetry When the domain of integration is symmetric about the origin with respect to at least one of the variables of integration and the integrand is odd with respect to this variable, the integral is equal to zero, as the integrals over the two halves of the domain have the same absolute value but opposite signs. When the integrand is even with respect to this variable, the integral is equal to twice the integral over one half of the domain, as the integrals over the two halves of the domain are equal. Example 1. Consider the function f(x,y) = 2 sin(x) − 3y3 + 5 integrated over the domain $T=\left\{(x,y)\in \mathbb {R} ^{2}\ :\ x^{2}+y^{2}\leq 1\right\},$ a disc with radius 1 centered at the origin with the boundary included. Using the linearity property, the integral can be decomposed into three pieces: $\iint _{T}\left(2\sin x-3y^{3}+5\right)\,dx\,dy=\iint _{T}2\sin x\,dx\,dy-\iint _{T}3y^{3}\,dx\,dy+\iint _{T}5\,dx\,dy$ The function 2 sin(x) is an odd function in the variable x and the disc T is symmetric with respect to the y-axis, so the value of the first integral is 0. Similarly, the function 3y3 is an odd function of y, and T is symmetric with respect to the x-axis, and so the only contribution to the final result is that of the third integral. Therefore the original integral is equal to the area of the disk times 5, or 5π. Example 2.
Consider the function f(x, y, z) = x exp(y2 + z2) and as integration region the ball with radius 2 centered at the origin, $T=\left\{(x,y,z)\in \mathbb {R} ^{3}\ :\ x^{2}+y^{2}+z^{2}\leq 4\right\}.$ The ball is symmetric about all three axes, but it is sufficient to integrate with respect to x to show that the integral is 0, because the function is an odd function of that variable. Normal domains on R2 See also: Order of integration (calculus) This method is applicable to any domain D for which: • the projection of D onto either the x-axis or the y-axis is bounded by two values, a and b • any line perpendicular to this axis that passes between these two values intersects the domain in an interval whose endpoints are given by the graphs of two functions, α and β. Such a domain will be here called a normal domain. Elsewhere in the literature, normal domains are sometimes called type I or type II domains, depending on which axis the domain is fibred over. In all cases, the function to be integrated must be Riemann integrable on the domain, which is true (for instance) if the function is continuous. x-axis If the domain D is normal with respect to the x-axis and f : D → R is a continuous function, then α(x) and β(x) (both of which are defined on the interval [a, b]) are the two functions that determine D. Then, by Fubini's theorem:[5] $\iint _{D}f(x,y)\,dx\,dy=\int _{a}^{b}dx\int _{\alpha (x)}^{\beta (x)}f(x,y)\,dy.$ y-axis If D is normal with respect to the y-axis and f : D → R is a continuous function, then α(y) and β(y) (both of which are defined on the interval [a, b]) are the two functions that determine D. Again, by Fubini's theorem: $\iint _{D}f(x,y)\,dx\,dy=\int _{a}^{b}dy\int _{\alpha (y)}^{\beta (y)}f(x,y)\,dx.$ Normal domains on R3 If T is a domain that is normal with respect to the xy-plane and determined by the functions α(x, y) and β(x, y), then $\iiint _{T}f(x,y,z)\,dx\,dy\,dz=\iint _{D}\int _{\alpha (x,y)}^{\beta (x,y)}f(x,y,z)\,dz\,dx\,dy$ This definition is the same for the other five normality cases on R3. It can be generalized in a straightforward way to domains in Rn. Change of variables See also: Integration by substitution § Substitution for multiple variables The limits of integration are often not easily interchangeable (without normality or with complex formulae to integrate). One makes a change of variables to rewrite the integral in a more "comfortable" region, which can be described in simpler formulae. To do so, the function must be adapted to the new coordinates. Example 1a. The function is f(x, y) = (x − 1)2 + √y; if one adopts the substitution u = x − 1, v = y, hence x = u + 1, y = v, one obtains the new function f2(u, v) = u2 + √v. • The domain transforms similarly, because it is delimited by the original variables that were substituted (x and y in the example). • The differentials dx and dy transform via the absolute value of the determinant of the Jacobian matrix containing the partial derivatives of the transformation with respect to the new variables (consider, as an example, the differential transformation in polar coordinates). There exist three main "kinds" of changes of variable (one in R2, two in R3); however, more general substitutions can be made using the same principle.
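The role of the Jacobian factor that appears in such substitutions can be checked numerically before the individual coordinate systems are presented. The following is a minimal sketch (assuming the SciPy and NumPy libraries are available; the integrand and region are chosen purely for illustration): it evaluates the integral of x² + y² over the unit disc, once directly as a normal domain and once in polar coordinates with the factor ρ.

import numpy as np
from scipy.integrate import dblquad

# Direct evaluation over the unit disc, treated as a normal domain:
# for each x in [-1, 1], y runs between -sqrt(1 - x^2) and sqrt(1 - x^2).
direct, _ = dblquad(lambda y, x: x**2 + y**2,
                    -1.0, 1.0,
                    lambda x: -np.sqrt(1.0 - x**2),
                    lambda x: np.sqrt(1.0 - x**2))

# Polar form: the integrand becomes rho^2, and the differentials
# contribute the Jacobian factor rho, so the integrand is rho^3
# on the rectangle [0, 2*pi] x [0, 1].
polar, _ = dblquad(lambda rho, phi: rho**3,
                   0.0, 2.0 * np.pi,
                   lambda phi: 0.0,
                   lambda phi: 1.0)

print(direct, polar)  # both approximately pi/2 = 1.5707...

Both evaluations agree; dropping the factor ρ from the polar form would give 2π/3 instead of π/2, which illustrates why the determinant of the Jacobian cannot be omitted.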
Polar coordinates See also: Polar coordinate system In R2, if the domain has a circular symmetry and the function has some particular characteristics, one can apply the transformation to polar coordinates, which means that the generic points P(x, y) in Cartesian coordinates switch to their respective points in polar coordinates. That allows one to change the shape of the domain and simplify the operations. The fundamental relation to make the transformation is the following: $f(x,y)\rightarrow f(\rho \cos \varphi ,\rho \sin \varphi ).$ Example 2a. The function is f(x, y) = x + y and applying the transformation one obtains $f(x,y)=f(\rho \cos \varphi ,\rho \sin \varphi )=\rho \cos \varphi +\rho \sin \varphi =\rho (\cos \varphi +\sin \varphi ).$ Example 2b. The function is f(x, y) = x2 + y2, in this case one has: $f(x,y)=\rho ^{2}\left(\cos ^{2}\varphi +\sin ^{2}\varphi \right)=\rho ^{2}$ using the Pythagorean trigonometric identity (very useful to simplify this operation). The domain is transformed by determining the intervals of the radius ρ and of the angle φ from the description of the region in terms of x and y. Example 2c. The domain is D = {x2 + y2 ≤ 4}, that is, a disc of radius 2; it is evident that the covered angle is the full circle angle, so φ varies from 0 to 2π, while the radius varies from 0 to 2 (an annulus with inner radius zero is just a disc). Example 2d. The domain is D = {x2 + y2 ≤ 9, x2 + y2 ≥ 4, y ≥ 0}, that is, the circular crown (annulus) in the positive y half-plane; φ varies from 0 to π while ρ varies from 2 to 3. Therefore the transformed domain will be the following rectangle: $T=\{2\leq \rho \leq 3,\ 0\leq \varphi \leq \pi \}.$ The Jacobian determinant of that transformation is the following: ${\frac {\partial (x,y)}{\partial (\rho ,\varphi )}}={\begin{vmatrix}\cos \varphi &-\rho \sin \varphi \\\sin \varphi &\rho \cos \varphi \end{vmatrix}}=\rho $ which has been obtained by inserting the partial derivatives of x = ρ cos(φ), y = ρ sin(φ), in the first column with respect to ρ and in the second with respect to φ, so the dx dy differentials in this transformation become ρ dρ dφ. Once the function is transformed and the domain evaluated, it is possible to define the formula for the change of variables in polar coordinates: $\iint _{D}f(x,y)\,dx\,dy=\iint _{T}f(\rho \cos \varphi ,\rho \sin \varphi )\rho \,d\rho \,d\varphi .$ Note that φ ranges over the interval [0, 2π], while ρ, being a length, can only take non-negative values. Example 2e. The function is f(x, y) = x and the domain is the same as in Example 2d. From the previous analysis of D we know the intervals of ρ (from 2 to 3) and of φ (from 0 to π).
Now we transform the function: $f(x,y)=x\longrightarrow f(\rho ,\varphi )=\rho \cos \varphi .$ Finally, apply the integration formula: $\iint _{D}x\,dx\,dy=\iint _{T}\rho \cos \varphi \rho \,d\rho \,d\varphi .$ Once the intervals are known, one has $\int _{0}^{\pi }\int _{2}^{3}\rho ^{2}\cos \varphi \,d\rho \,d\varphi =\int _{0}^{\pi }\cos \varphi \ d\varphi \left[{\frac {\rho ^{3}}{3}}\right]_{2}^{3}={\Big [}\sin \varphi {\Big ]}_{0}^{\pi }\ \left(9-{\frac {8}{3}}\right)=0.$ Cylindrical coordinates In R3 the integration on domains with a circular base can be made by the passage to cylindrical coordinates; the transformation of the function is made by the following relation: $f(x,y,z)\rightarrow f(\rho \cos \varphi ,\rho \sin \varphi ,z)$ The domain transformation is easy to visualize, because only the shape of the base varies, while the height follows the shape of the starting region. Example 3a. The region is D = {x2 + y2 ≤ 9, x2 + y2 ≥ 4, 0 ≤ z ≤ 5} (that is, the "tube" whose base is the circular crown of Example 2d and whose height is 5); if the transformation is applied, this region is obtained: $T=\{2\leq \rho \leq 3,\ 0\leq \varphi \leq 2\pi ,\ 0\leq z\leq 5\}$ (that is, the parallelepiped whose base is similar to the rectangle in Example 2d and whose height is 5). Because the z component is unchanged by the transformation, the dx dy dz differentials vary as in the passage to polar coordinates: therefore, they become ρ dρ dφ dz. Finally, it is possible to apply the final formula for cylindrical coordinates: $\iiint _{D}f(x,y,z)\,dx\,dy\,dz=\iiint _{T}f(\rho \cos \varphi ,\rho \sin \varphi ,z)\rho \,d\rho \,d\varphi \,dz.$ This method is convenient for cylindrical or conical domains, and in regions where it is easy to identify the z interval and transform the circular base and the function. Example 3b. The function is f(x, y, z) = x2 + y2 + z and as integration domain this cylinder: D = {x2 + y2 ≤ 9, −5 ≤ z ≤ 5}. The transformation of D in cylindrical coordinates is the following: $T=\{0\leq \rho \leq 3,\ 0\leq \varphi \leq 2\pi ,\ -5\leq z\leq 5\}.$ while the function becomes $f(\rho \cos \varphi ,\rho \sin \varphi ,z)=\rho ^{2}+z$ Finally one can apply the integration formula: $\iiint _{D}\left(x^{2}+y^{2}+z\right)\,dx\,dy\,dz=\iiint _{T}\left(\rho ^{2}+z\right)\rho \,d\rho \,d\varphi \,dz;$ expanding the formula, one has $\int _{-5}^{5}dz\int _{0}^{2\pi }d\varphi \int _{0}^{3}\left(\rho ^{3}+\rho z\right)\,d\rho =2\pi \int _{-5}^{5}\left[{\frac {\rho ^{4}}{4}}+{\frac {\rho ^{2}z}{2}}\right]_{0}^{3}\,dz=2\pi \int _{-5}^{5}\left({\frac {81}{4}}+{\frac {9}{2}}z\right)\,dz=\cdots =405\pi .$ Spherical coordinates In R3 some domains have a spherical symmetry, so it is possible to specify the coordinates of every point of the integration region by two angles and one distance. It is therefore possible to use the passage to spherical coordinates; the function is transformed by this relation: $f(x,y,z)\longrightarrow f(\rho \cos \theta \sin \varphi ,\rho \sin \theta \sin \varphi ,\rho \cos \varphi )$ Points on the z-axis do not have a precise characterization in spherical coordinates, so θ can vary between 0 and 2π. The domain best suited to this passage is the sphere.
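The Jacobian determinant that appears in Example 4a below can be verified with a computer algebra system. A minimal sketch (assuming the SymPy library is available; variable names are illustrative):

from sympy import symbols, sin, cos, Matrix, simplify

# Spherical substitution with phi as the polar angle, as in the article.
rho, theta, phi = symbols('rho theta phi', positive=True)
x = rho * cos(theta) * sin(phi)
y = rho * sin(theta) * sin(phi)
z = rho * cos(phi)

# Jacobian matrix of (x, y, z) with respect to (rho, theta, phi).
J = Matrix([x, y, z]).jacobian(Matrix([rho, theta, phi]))
print(simplify(J.det()))  # -rho**2*sin(phi)

The sign of the determinant depends on the ordering chosen for the variables; only its absolute value ρ² sin φ enters the volume element.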
Example 4a. The domain is D = {x2 + y2 + z2 ≤ 16} (the ball with radius 4 centered at the origin); applying the transformation you get the region $T=\{0\leq \rho \leq 4,\ 0\leq \varphi \leq \pi ,\ 0\leq \theta \leq 2\pi \}.$ The Jacobian determinant of this transformation is the following: ${\frac {\partial (x,y,z)}{\partial (\rho ,\theta ,\varphi )}}={\begin{vmatrix}\cos \theta \sin \varphi &-\rho \sin \theta \sin \varphi &\rho \cos \theta \cos \varphi \\\sin \theta \sin \varphi &\rho \cos \theta \sin \varphi &\rho \sin \theta \cos \varphi \\\cos \varphi &0&-\rho \sin \varphi \end{vmatrix}}=-\rho ^{2}\sin \varphi $ Since sin φ ≥ 0 on the interval [0, π], the absolute value of the determinant is ρ2 sin φ; the dx dy dz differentials are therefore transformed to ρ2 sin(φ) dρ dθ dφ. This yields the final integration formula: $\iiint _{D}f(x,y,z)\,dx\,dy\,dz=\iiint _{T}f(\rho \sin \varphi \cos \theta ,\rho \sin \varphi \sin \theta ,\rho \cos \varphi )\rho ^{2}\sin \varphi \,d\rho \,d\theta \,d\varphi .$ It is better to use this method in case of spherical domains and in case of functions that can be easily simplified by the Pythagorean identity extended to R3 (see Example 4b); in other cases it can be better to use cylindrical coordinates (see Example 4c). In general the transformed integral has the form $\iiint _{T}f(a,b,c)\rho ^{2}\sin \varphi \,d\rho \,d\theta \,d\varphi ,$ where a, b, c stand for the spherical substitutions of x, y, z given above; the extra ρ2 and sin φ come from the Jacobian. Note that in Example 4c below the roles of φ and θ are reversed. Example 4b. D is the same region as in Example 4a and f(x, y, z) = x2 + y2 + z2 is the function to integrate. Its transformation is very easy: $f(\rho \sin \varphi \cos \theta ,\rho \sin \varphi \sin \theta ,\rho \cos \varphi )=\rho ^{2},$ while we know the intervals of the transformed region T from D: $T=\{0\leq \rho \leq 4,\ 0\leq \varphi \leq \pi ,\ 0\leq \theta \leq 2\pi \}.$ We therefore apply the integration formula: $\iiint _{D}\left(x^{2}+y^{2}+z^{2}\right)\,dx\,dy\,dz=\iiint _{T}\rho ^{2}\,\rho ^{2}\sin \varphi \,d\rho \,d\theta \,d\varphi ,$ and, developing, we get $\iiint _{T}\rho ^{4}\sin \varphi \,d\rho \,d\theta \,d\varphi =\int _{0}^{\pi }\sin \varphi \,d\varphi \int _{0}^{4}\rho ^{4}d\rho \int _{0}^{2\pi }d\theta =2\pi \int _{0}^{\pi }\sin \varphi \left[{\frac {\rho ^{5}}{5}}\right]_{0}^{4}\,d\varphi =2\pi \left[{\frac {\rho ^{5}}{5}}\right]_{0}^{4}{\Big [}-\cos \varphi {\Big ]}_{0}^{\pi }={\frac {4096\pi }{5}}.$ Example 4c. The domain D is the ball with center at the origin and radius 3a, $D=\left\{x^{2}+y^{2}+z^{2}\leq 9a^{2}\right\}$ and f(x, y, z) = x2 + y2 is the function to integrate. Looking at the domain, it seems convenient to adopt the passage to spherical coordinates; the intervals of the variables that delimit the new T region are: $T=\{0\leq \rho \leq 3a,\ 0\leq \varphi \leq 2\pi ,\ 0\leq \theta \leq \pi \}.$ However, applying the transformation, we get $f(x,y,z)=x^{2}+y^{2}\longrightarrow \rho ^{2}\sin ^{2}\theta \cos ^{2}\varphi +\rho ^{2}\sin ^{2}\theta \sin ^{2}\varphi =\rho ^{2}\sin ^{2}\theta .$ Applying the formula for integration we obtain: $\iiint _{T}\rho ^{2}\sin ^{2}\theta \rho ^{2}\sin \theta \,d\rho \,d\theta \,d\varphi =\iiint _{T}\rho ^{4}\sin ^{3}\theta \,d\rho \,d\theta \,d\varphi $ which can be solved by turning it into an iterated integral. $\iiint _{T}\rho ^{4}\sin ^{3}\theta \,d\rho \,d\theta \,d\varphi =\underbrace {\int _{0}^{3a}\rho ^{4}d\rho } _{I}\,\underbrace {\int _{0}^{\pi }\sin ^{3}\theta \,d\theta } _{II}\,\underbrace {\int _{0}^{2\pi }d\varphi } _{III}$.
$I=\left.\int _{0}^{3a}\rho ^{4}d\rho ={\frac {\rho ^{5}}{5}}\right\vert _{0}^{3a}={\frac {243}{5}}a^{5}$, $II=\int _{0}^{\pi }\sin ^{3}\theta \,d\theta =-\int _{0}^{\pi }\sin ^{2}\theta \,d(\cos \theta )=\int _{0}^{\pi }(\cos ^{2}\theta -1)\,d(\cos \theta )=\left.{\frac {\cos ^{3}\theta }{3}}\right|_{0}^{\pi }-\left.\cos \theta \right|_{0}^{\pi }={\frac {4}{3}}$, $III=\int _{0}^{2\pi }d\varphi =2\pi $. Collecting all parts, $\iiint _{T}\rho ^{4}\sin ^{3}\theta \,d\rho \,d\theta \,d\varphi =I\cdot II\cdot III={\frac {243}{5}}a^{5}\cdot {\frac {4}{3}}\cdot 2\pi ={\frac {648}{5}}\pi a^{5}$. Alternatively, this problem can be solved by using the passage to cylindrical coordinates. The new T intervals are $T=\left\{0\leq \rho \leq 3a,\ 0\leq \varphi \leq 2\pi ,\ -{\sqrt {9a^{2}-\rho ^{2}}}\leq z\leq {\sqrt {9a^{2}-\rho ^{2}}}\right\};$ the z interval has been obtained by dividing the ball into two hemispheres simply by solving the inequality from the formula of D (and then directly transforming x2 + y2 into ρ2). The new function is simply ρ2. Applying the integration formula $\iiint _{T}\rho ^{2}\rho \,d\rho \,d\varphi \,dz.$ Then we get ${\begin{aligned}\int _{0}^{2\pi }d\varphi \int _{0}^{3a}\rho ^{3}d\rho \int _{-{\sqrt {9a^{2}-\rho ^{2}}}}^{\sqrt {9a^{2}-\rho ^{2}}}\,dz&=2\pi \int _{0}^{3a}2\rho ^{3}{\sqrt {9a^{2}-\rho ^{2}}}\,d\rho \\&=-2\pi \int _{9a^{2}}^{0}(9a^{2}-t){\sqrt {t}}\,dt&&t=9a^{2}-\rho ^{2}\\&=2\pi \int _{0}^{9a^{2}}\left(9a^{2}{\sqrt {t}}-t{\sqrt {t}}\right)\,dt\\&=2\pi \left(\int _{0}^{9a^{2}}9a^{2}{\sqrt {t}}\,dt-\int _{0}^{9a^{2}}t{\sqrt {t}}\,dt\right)\\&=2\pi \left[9a^{2}{\frac {2}{3}}t^{\frac {3}{2}}-{\frac {2}{5}}t^{\frac {5}{2}}\right]_{0}^{9a^{2}}\\&=2\cdot 27\pi a^{5}\left(6-{\frac {18}{5}}\right)\\&={\frac {648\pi }{5}}a^{5}.\end{aligned}}$ Thanks to the passage to cylindrical coordinates it was possible to reduce the triple integral to an easier one-variable integral. See also the differential volume entry in nabla in cylindrical and spherical coordinates. Examples Double integral over a rectangle Let us assume that we wish to integrate a multivariable function f over a region A: $A=\left\{(x,y)\in \mathbf {R} ^{2}\ :\ 11\leq x\leq 14\ ;\ 7\leq y\leq 10\right\}{\mbox{ and }}f(x,y)=x^{2}+4y\,$ From this we formulate the iterated integral $\int _{7}^{10}\int _{11}^{14}(x^{2}+4y)\,dx\,dy$ The inner integral is performed first, integrating with respect to x and taking y as a constant, as it is not the variable of integration. The result of this integral, which is a function depending only on y, is then integrated with respect to y. ${\begin{aligned}\int _{11}^{14}\left(x^{2}+4y\right)\,dx&=\left[{\frac {1}{3}}x^{3}+4yx\right]_{x=11}^{x=14}\\&={\frac {1}{3}}(14)^{3}+4y(14)-{\frac {1}{3}}(11)^{3}-4y(11)\\&=471+12y\end{aligned}}$ We then integrate the result with respect to y. ${\begin{aligned}\int _{7}^{10}(471+12y)\ dy&={\Big [}471y+6y^{2}{\Big ]}_{y=7}^{y=10}\\&=471(10)+6(10)^{2}-471(7)-6(7)^{2}\\&=1719\end{aligned}}$ In cases where the double integral of the absolute value of the function is finite, the order of integration is interchangeable, that is, integrating with respect to x first and integrating with respect to y first produce the same result. That is Fubini's theorem.
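The equality of the two orders of integration for this example can also be checked symbolically. A minimal sketch (assuming the SymPy library; not part of the article's cited methods):

from sympy import symbols, integrate

x, y = symbols('x y')
f = x**2 + 4*y

# Integrate over x first and then y ...
dx_first = integrate(f, (x, 11, 14), (y, 7, 10))
# ... and over y first and then x.
dy_first = integrate(f, (y, 7, 10), (x, 11, 14))

print(dx_first, dy_first)  # both print 1719, matching the hand computation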
For example, doing the previous calculation with order reversed gives the same result: ${\begin{aligned}\int _{11}^{14}\int _{7}^{10}\,\left(x^{2}+4y\right)\,dy\,dx&=\int _{11}^{14}{\Big [}x^{2}y+2y^{2}{\Big ]}_{y=7}^{y=10}\,dx\\&=\int _{11}^{14}\,(3x^{2}+102)\,dx\\&={\Big [}x^{3}+102x{\Big ]}_{x=11}^{x=14}\\&=1719.\end{aligned}}$ Double integral over a normal domain Consider the region $D=\{(x,y)\in \mathbf {R} ^{2}\ :\ x\geq 0,y\leq 1,y\geq x^{2}\},$ that is, the region between the parabola y = x2 and the line y = 1 for x ≥ 0. Calculate $\iint _{D}(x+y)\,dx\,dy.$ This domain is normal with respect to both the x- and y-axes. To apply the formulae it is required to find the functions that determine D and the intervals over which these functions are defined. In this case the two functions are: $\alpha (x)=x^{2}{\text{ and }}\beta (x)=1$ while the interval is determined by the intersections of the two curves together with the constraint x ≥ 0, so the interval is [a, b] = [0, 1] (normality has been chosen with respect to the x-axis for a better visual understanding). It is now possible to apply the formula: $\iint _{D}(x+y)\,dx\,dy=\int _{0}^{1}dx\int _{x^{2}}^{1}(x+y)\,dy=\int _{0}^{1}dx\ \left[xy+{\frac {y^{2}}{2}}\right]_{x^{2}}^{1}$ (the inner integral is calculated first, treating x as a constant). The remaining operations consist of applying the basic techniques of integration: $\int _{0}^{1}\left[xy+{\frac {y^{2}}{2}}\right]_{x^{2}}^{1}\,dx=\int _{0}^{1}\left(x+{\frac {1}{2}}-x^{3}-{\frac {x^{4}}{2}}\right)dx=\cdots ={\frac {13}{20}}.$ If we choose normality with respect to the y-axis we could calculate $\int _{0}^{1}dy\int _{0}^{\sqrt {y}}(x+y)\,dx.$ and obtain the same value. Calculating volume Using the methods previously described, it is possible to calculate the volumes of some common solids. • Cylinder: The volume of a cylinder with height h and circular base of radius R can be calculated by integrating the constant function h over the circular base, using polar coordinates. $\mathrm {Volume} =\int _{0}^{2\pi }d\varphi \,\int _{0}^{R}h\rho \,d\rho =2\pi h\left[{\frac {\rho ^{2}}{2}}\right]_{0}^{R}=\pi R^{2}h$ This is in agreement with the formula for the volume of a prism $\mathrm {Volume} ={\text{base area}}\times {\text{height}}.$ • Sphere: The volume of a sphere with radius R can be calculated by integrating the constant function 1 over the sphere, using spherical coordinates. ${\begin{aligned}{\text{Volume}}&=\iiint _{D}f(x,y,z)\,dx\,dy\,dz\\&=\iiint _{D}1\,dV\\&=\iiint _{S}\rho ^{2}\sin \varphi \,d\rho \,d\theta \,d\varphi \\&=\int _{0}^{2\pi }\,d\theta \int _{0}^{\pi }\sin \varphi \,d\varphi \int _{0}^{R}\rho ^{2}\,d\rho \\&=2\pi \int _{0}^{\pi }\sin \varphi \,d\varphi \int _{0}^{R}\rho ^{2}\,d\rho \\&=2\pi \int _{0}^{\pi }\sin \varphi {\frac {R^{3}}{3}}\,d\varphi \\&={\frac {2}{3}}\pi R^{3}{\Big [}-\cos \varphi {\Big ]}_{0}^{\pi }={\frac {4}{3}}\pi R^{3}.\end{aligned}}$ • Tetrahedron (triangular pyramid or 3-simplex): The volume of a tetrahedron with its apex at the origin and edges of length ℓ along the x-, y- and z-axes can be calculated by integrating the constant function 1 over the tetrahedron.
${\begin{aligned}{\text{Volume}}&=\int _{0}^{\ell }dx\int _{0}^{\ell -x}\,dy\int _{0}^{\ell -x-y}\,dz\\&=\int _{0}^{\ell }dx\int _{0}^{\ell -x}(\ell -x-y)\,dy\\&=\int _{0}^{\ell }\left(\ell ^{2}-2\ell x+x^{2}-{\frac {(\ell -x)^{2}}{2}}\right)\,dx\\&=\ell ^{3}-\ell ^{3}+{\frac {\ell ^{3}}{3}}-\left[{\frac {\ell ^{2}x}{2}}-{\frac {\ell x^{2}}{2}}+{\frac {x^{3}}{6}}\right]_{0}^{\ell }\\&={\frac {\ell ^{3}}{3}}-{\frac {\ell ^{3}}{6}}={\frac {\ell ^{3}}{6}}\end{aligned}}$ This is in agreement with the formula for the volume of a pyramid $\mathrm {Volume} ={\frac {1}{3}}\times {\text{base area}}\times {\text{height}}={\frac {1}{3}}\times {\frac {\ell ^{2}}{2}}\times \ell ={\frac {\ell ^{3}}{6}}.$ Multiple improper integral In case of unbounded domains or functions not bounded near the boundary of the domain, one introduces the double improper integral or the triple improper integral. Multiple integrals and iterated integrals See also: Order of integration (calculus) Fubini's theorem states that if[4] $\iint _{A\times B}\left|f(x,y)\right|\,d(x,y)<\infty ,$ that is, if the integral is absolutely convergent, then the multiple integral will give the same result as either of the two iterated integrals: $\iint _{A\times B}f(x,y)\,d(x,y)=\int _{A}\left(\int _{B}f(x,y)\,dy\right)\,dx=\int _{B}\left(\int _{A}f(x,y)\,dx\right)\,dy.$ In particular this will occur if |f(x, y)| is a bounded function and A and B are bounded sets. If the integral is not absolutely convergent, care is needed not to confuse the concepts of multiple integral and iterated integral, especially since the same notation is often used for either concept. The notation $\int _{0}^{1}\int _{0}^{1}f(x,y)\,dy\,dx$ means, in some cases, an iterated integral rather than a true double integral. In an iterated integral, the outer integral $\int _{0}^{1}\cdots \,dx$ is the integral with respect to x of the following function of x: $g(x)=\int _{0}^{1}f(x,y)\,dy.$ A double integral, on the other hand, is defined with respect to area in the xy-plane. If the double integral exists, then it is equal to each of the two iterated integrals (either "dy dx" or "dx dy") and one often computes it by computing either of the iterated integrals. But sometimes the two iterated integrals exist when the double integral does not, and in some such cases the two iterated integrals are different numbers, i.e., one has $\int _{0}^{1}\int _{0}^{1}f(x,y)\,dy\,dx\neq \int _{0}^{1}\int _{0}^{1}f(x,y)\,dx\,dy.$ This is an instance of rearrangement of a conditionally convergent integral. On the other hand, some conditions ensure that the two iterated integrals are equal even though the double integral need not exist. By the Fichtenholz–Lichtenstein theorem, if f is bounded on [0, 1] × [0, 1] and both iterated integrals exist, then they are equal. Moreover, existence of the inner integrals ensures existence of the outer integrals.[6][7][8] The double integral need not exist in this case even as Lebesgue integral, according to Sierpiński.[9] The notation $\int _{[0,1]\times [0,1]}f(x,y)\,dx\,dy$ may be used if one wishes to be emphatic about intending a double integral rather than an iterated integral. Triple integral Main article: Volume integral The evaluation of a triple integral as an iterated integral is, as in the double case, justified by Fubini's theorem.[10][11] Dirichlet's theorem and Liouville's extension of it give closed-form evaluations of certain triple integrals over tetrahedron-like regions. Some practical applications Quite generally, just as in one variable, one can use the multiple integral to find the average of a function over a given set.
Given a set D ⊆ Rn and an integrable function f over D, the average value of f over its domain is given by ${\bar {f}}={\frac {1}{m(D)}}\int _{D}f(x)\,dx,$ where m(D) is the measure of D. Additionally, multiple integrals are used in many applications in physics. The examples below also show some variations in the notation. In mechanics, the moment of inertia is calculated as the volume integral (triple integral) of the density weighted with the square of the distance from the axis: $I_{z}=\iiint _{V}\rho r^{2}\,dV.$ The gravitational potential associated with a mass distribution given by a mass measure dm on three-dimensional Euclidean space R3 is[12] $V(\mathbf {x} )=-\iiint _{\mathbf {R} ^{3}}{\frac {G}{|\mathbf {x} -\mathbf {y} |}}\,dm(\mathbf {y} ).$ If there is a continuous function ρ(x) representing the density of the distribution at x, so that dm(x) = ρ(x)d3x, where d3x is the Euclidean volume element, then the gravitational potential is $V(\mathbf {x} )=-\iiint _{\mathbf {R} ^{3}}{\frac {G}{|\mathbf {x} -\mathbf {y} |}}\,\rho (\mathbf {y} )\,d^{3}\mathbf {y} .$ In electromagnetism, Maxwell's equations can be written using multiple integrals to calculate the total magnetic and electric fields.[13] In the following example, the electric field produced by a distribution of charges given by the volume charge density $\rho ({\vec {r}})$ is obtained by a triple integral of a vector function: ${\vec {E}}={\frac {1}{4\pi \varepsilon _{0}}}\iiint {\frac {{\vec {r}}-{\vec {r}}'}{\left\|{\vec {r}}-{\vec {r}}'\right\|^{3}}}\rho ({\vec {r}}')\,d^{3}r'.$ This can also be written as an integral with respect to a signed measure representing the charge distribution. See also • Main analysis theorems that relate multiple integrals: • Divergence theorem • Stokes' theorem • Green's theorem References 1. Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks Cole Cengage Learning. ISBN 978-0-495-01166-8. 2. Larson; Edwards (2014). Multivariable Calculus (10th ed.). Cengage Learning. ISBN 978-1-285-08575-3. 3. Rudin, Walter (1976). Principles of Mathematical Analysis. Walter Rudin Student Series in Advanced Mathematics (3rd ed.). McGraw–Hill. ISBN 978-0-07-054235-8. 4. Jones, Frank (2001). Lebesgue Integration on Euclidean Space. Jones and Bartlett. pp. 527–529. ISBN 9780763717087. 5. Stewart, James (2015-05-07). Calculus, 8th Edition. Cengage Learning. ISBN 978-1285740621. 6. Lewin, Jonathan (2003). An Interactive Introduction to Mathematical Analysis. Cambridge. Sect. 16.6. ISBN 978-1107694040. 7. Lewin, Jonathan (1987). "Some applications of the bounded convergence theorem for an introductory course in analysis". The American Mathematical Monthly. AMS. 94 (10): 988–993. doi:10.2307/2322609. JSTOR 2322609. 8. Sinclair, George Edward (1974). "A finitely additive generalization of the Fichtenholz–Lichtenstein theorem". Transactions of the American Mathematical Society. AMS. 193: 359–374. doi:10.2307/1996919. JSTOR 1996919. 9. Bogachev, Vladimir I. (2006). Measure Theory. Vol. 1. Springer. Item 3.10.49. 10. Rai University (2015-03-17). "Btech_II_ engineering mathematics_unit2". 11. "5.4 Triple Integrals - Calculus Volume 3 | OpenStax". openstax.org. Retrieved 2022-08-25. 12. Kibble, Tom W. B.; Berkshire, Frank H. (2004). Classical Mechanics (5th ed.). Imperial College Press. ISBN 978-1-86094-424-6. 13. Jackson, John D. (1998). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X. Further reading • Adams, Robert A. (2003).
Calculus: A Complete Course (5th ed.). ISBN 0-201-79131-5. • Jain, R. K.; Iyengar, S. R. K. (2009). Advanced Engineering Mathematics (3rd ed.). Narosa Publishing House. ISBN 978-81-7319-730-7. • Herman, Edwin "Jed"; Strang, Gilbert (2016). Calculus: Volume 3. OpenStax, Rice University, Houston, Texas. ISBN 978-1-50669-805-2. (PDF) External links • Weisstein, Eric W. "Multiple Integral". MathWorld. • L.D. Kudryavtsev (2001) [1994], "Multiple integral", Encyclopedia of Mathematics, EMS Press • Mathematical Assistant on Web: online evaluation of double integrals in Cartesian coordinates and polar coordinates (includes intermediate steps in the solution, powered by Maxima (software)) • Online Double Integral Calculator by WolframAlpha • Online Triple Integral Calculator by WolframAlpha
Reduction (mathematics) In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals. Algebra In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination. Calculus In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms. Static (Guyan) reduction In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps, it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem: ${\begin{bmatrix}K_{11}&K_{12}\\K_{21}&K_{22}\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}F_{1}\\F_{2}\end{bmatrix}}$ where K and F are known, and K, x and F are divided into submatrices as shown above. If F2 contains only zeros, and only x1 is desired, K can be reduced to yield the following system of equations ${\begin{bmatrix}K_{11,{\text{reduced}}}\end{bmatrix}}{\begin{bmatrix}x_{1}\end{bmatrix}}={\begin{bmatrix}F_{1}\end{bmatrix}}$ $K_{11,{\text{reduced}}}$ is obtained by writing out the set of equations as follows: $K_{11}x_{1}+K_{12}x_{2}=F_{1}$ (1) $K_{21}x_{1}+K_{22}x_{2}=0$ (2) Equation (2) can be solved for $x_{2}$ (assuming invertibility of $K_{22}$): $x_{2}=-K_{22}^{-1}K_{21}x_{1}.$ Substituting into (1) gives $K_{11}x_{1}-K_{12}K_{22}^{-1}K_{21}x_{1}=F_{1}.$ Thus $K_{11,{\text{reduced}}}=K_{11}-K_{12}K_{22}^{-1}K_{21}.$ In a similar fashion, any row or column i of F with a zero value may be eliminated if the corresponding value of xi is not desired. A reduced K may be reduced again. Note that, since each reduction requires an inversion, and each inversion is an operation with computational cost O(n3), most large matrices are pre-processed to reduce calculation time. History In the 9th century, the Persian mathematician Al-Khwarizmi's Al-Jabr introduced the fundamental concepts of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr.[1] The name "algebra" comes from the "al-jabr" in the title of his book.
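Returning to the static (Guyan) reduction described above, the block formula is straightforward to carry out numerically. A minimal sketch (assuming the NumPy library; the 4×4 stiffness matrix, the load vector and the 2/2 partition are illustrative assumptions):

import numpy as np

# Illustrative symmetric stiffness matrix, partitioned into 2x2 blocks.
K = np.array([[ 4.0, -1.0, -1.0,  0.0],
              [-1.0,  4.0,  0.0, -1.0],
              [-1.0,  0.0,  4.0, -1.0],
              [ 0.0, -1.0, -1.0,  4.0]])
F1 = np.array([1.0, 2.0])   # loads on the retained degrees of freedom
K11, K12 = K[:2, :2], K[:2, 2:]
K21, K22 = K[2:, :2], K[2:, 2:]

# Guyan reduction: K11_reduced = K11 - K12 K22^{-1} K21.
# A linear solve is used instead of forming K22^{-1} explicitly.
K11_reduced = K11 - K12 @ np.linalg.solve(K22, K21)
x1 = np.linalg.solve(K11_reduced, F1)

# Check against the unreduced system with F2 = 0.
x_full = np.linalg.solve(K, np.concatenate([F1, [0.0, 0.0]]))
print(np.allclose(x1, x_full[:2]))   # True

Solving a linear system rather than inverting K22 explicitly is the usual way to limit the cost and rounding error of the inversion steps mentioned above.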
References 1. Boyer, Carl B. (1991), "The Arabic Hegemony", A History of Mathematics (Second ed.), John Wiley & Sons, Inc., p. 229, ISBN 978-0-471-54397-8, It is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the translation above. The word al-jabr presumably meant something like "restoration" or "completion" and seems to refer to the transposition of subtracted terms to the other side of an equation, which is evident in the treatise; the word muqabalah is said to refer to "reduction" or "balancing"—that is, the cancellation of like terms on opposite sides of the equation.
Reduction (computability theory) In computability theory, many reducibility relations (also called reductions, reducibilities, and notions of reducibility) are studied. They are motivated by the question: given sets $A$ and $B$ of natural numbers, is it possible to effectively convert a method for deciding membership in $B$ into a method for deciding membership in $A$? If the answer to this question is affirmative then $A$ is said to be reducible to $B$. The study of reducibility notions is motivated by the study of decision problems. For many notions of reducibility, if any noncomputable set is reducible to a set $A$ then $A$ must also be noncomputable. This gives a powerful technique for proving that many sets are noncomputable. Reducibility relations A reducibility relation is a binary relation on sets of natural numbers that is • Reflexive: Every set is reducible to itself. • Transitive: If a set $A$ is reducible to a set $B$ and $B$ is reducible to a set $C$ then $A$ is reducible to $C$. These two properties imply that reducibility is a preorder on the powerset of the natural numbers. Not all preorders are studied as reducibility notions, however. The notions studied in computability theory have the informal property that $A$ is reducible to $B$ if and only if any (possibly noneffective) decision procedure for $B$ can be effectively converted to a decision procedure for $A$. The different reducibility relations vary in the methods they permit such a conversion process to use. Degrees of a reducibility relation Every reducibility relation (in fact, every preorder) induces an equivalence relation on the powerset of the natural numbers in which two sets are equivalent if and only if each one is reducible to the other. In computability theory, these equivalence classes are called the degrees of the reducibility relation. For example, the Turing degrees are the equivalence classes of sets of naturals induced by Turing reducibility. The degrees of any reducibility relation are partially ordered by the relation in the following manner. Let $\leq $ be a reducibility relation and let $C$ and $D$ be two of its degrees. Then $C\leq D$ if and only if there is a set $A$ in $C$ and a set $B$ in $D$ such that $A\leq B$. This is equivalent to the property that for every set $A$ in $C$ and every set $B$ in $D$, $A\leq B$, because any two sets in $C$ are equivalent and any two sets in $D$ are equivalent. It is common to use boldface notation to denote degrees. Turing reducibility Main article: Turing reduction The most fundamental reducibility notion is Turing reducibility. A set $A$ of natural numbers is Turing reducible to a set $B$ if and only if there is an oracle Turing machine that, when run with $B$ as its oracle set, will compute the indicator function (characteristic function) of $A$. Equivalently, $A$ is Turing reducible to $B$ if and only if there is an algorithm for computing the indicator function for $A$ provided that the algorithm is provided with a means to correctly answer questions of the form "Is $n$ in $B$?". Turing reducibility serves as a dividing line for other reducibility notions because, according to the Church-Turing thesis, it is the most general reducibility relation that is effective. Reducibility relations that imply Turing reducibility have come to be known as strong reducibilities, while those that are implied by Turing reducibility are weak reducibilities.
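Before turning to particular strong and weak reducibilities, the conversion of a decision procedure for B into one for A can be made concrete in code. A minimal Python sketch (the sets and the reduction function are toy examples chosen only for illustration): here A = {n : 3n ∈ B} is reduced to B by the computable function f(n) = 3n, so a single oracle query decides membership in A.

def f(n):
    # Computable reduction function: n is in A iff f(n) is in B.
    return 3 * n

def decide_A(n, oracle_B):
    # A decision procedure for A that may ask membership queries
    # about B, in the manner of an oracle Turing machine.
    return oracle_B(f(n))

# Toy oracle: B is the set of even numbers (illustrative only).
oracle_B = lambda m: m % 2 == 0

# A = {n : 3n is even}, which equals the set of even numbers.
print([n for n in range(10) if decide_A(n, oracle_B)])   # [0, 2, 4, 6, 8]

A reduction of this shape, a single query whose answer is returned unchanged, is a many-one reduction, one of the most restrictive notions listed below; the various reducibilities differ precisely in how such query behaviour is constrained.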
Restated in terms of degrees, a strong reducibility relation is one whose degrees form a finer equivalence relation than the Turing degrees, while a weak reducibility relation is one whose degrees form a coarser equivalence relation than Turing equivalence. Reductions stronger than Turing reducibility The strong reducibilities include • One-one reducibility: $A$ is one-one reducible to $B$ if there is a computable one-to-one function $f$ with $A(x)=B(f(x))$ for all $x$. • Many-one reducibility: $A$ is many-one reducible to $B$ if there is a computable function $f$ with $A(x)=B(f(x))$ for all $x$. • Truth-table reducibility: $A$ is truth-table reducible to $B$ if $A$ is Turing reducible to $B$ via a single (oracle) Turing machine which produces a total function relative to every oracle. • Weak truth-table reducibility: $A$ is weak truth-table reducible to $B$ if there is a Turing reduction from $A$ to $B$ and a computable function $f$ which bounds the use. Whenever $A$ is truth-table reducible to $B$, $A$ is also weak truth-table reducible to $B$, since one can construct a computable bound on the use by considering the maximum use over the tree of all oracles, which will exist if the reduction is total on all oracles. • Positive reducibility: $A$ is positive reducible to $B$ if and only if $A$ is truth-table reducible to $B$ in a way that one can compute for every $x$ a formula consisting of atoms of the form $B(0),B(1),...$ combined by and's and or's, where the and of $a$ and $b$ is 1 exactly when $a=1$ and $b=1$, and similarly for or. • Enumeration reducibility: similar to positive reducibility, relating the effective enumerability of $A$ to that of $B$. • Disjunctive reducibility: similar to positive reducibility, with the additional constraint that only or's are permitted. • Conjunctive reducibility: similar to positive reducibility, with the additional constraint that only and's are permitted. • Linear reducibility: similar to positive reducibility, but with the constraint that all atoms of the form $B(n)$ are combined by exclusive or's. In other words, $A$ is linear reducible to $B$ if and only if a computable function computes for each $x$ a finite set $F(x)$ given as an explicit list of numbers such that $x\in A$ if and only if $F(x)$ contains an odd number of elements of $B$. Many of these were introduced by Post (1944). Post was searching for a non-computable, computably enumerable set which the halting problem could not be Turing reduced to. As he could not construct such a set in 1944, he instead worked on the analogous problems for the various reducibilities that he introduced. These reducibilities have since been the subject of much research, and many relationships between them are known. Bounded reducibilities A bounded form of each of the above strong reducibilities can be defined. The most famous of these is bounded truth-table reduction, but there are also bounded Turing, bounded weak truth-table, and others. These first three are the most common ones and they are based on the number of queries. For example, a set $A$ is bounded truth-table reducible to $B$ if and only if the Turing machine $M$ computing $A$ relative to $B$ computes a list of up to $n$ numbers, queries $B$ on these numbers and then terminates for all possible oracle answers; the value $n$ is a constant independent of the input $x$.
The difference between bounded weak truth-table and bounded Turing reduction is that in the first case, the up to $n$ queries have to be made at the same time, while in the second case, the queries can be made one after the other. For that reason, there are cases where $A$ is bounded Turing reducible to $B$ but not bounded weak truth-table reducible to $B$. Strong reductions in computational complexity Main article: Reduction (complexity) The strong reductions listed above restrict the manner in which oracle information can be accessed by a decision procedure but do not otherwise limit the computational resources available. Thus if a set $A$ is decidable then $A$ is reducible to any set $B$ under any of the strong reducibility relations listed above, even if $A$ is not polynomial-time or exponential-time decidable. This is acceptable in the study of computability theory, which is interested in theoretical computability, but it is not reasonable for computational complexity theory, which studies which sets can be decided under certain asymptotical resource bounds. The most common reducibility in computational complexity theory is polynomial-time reducibility; a set $A$ is polynomial-time reducible to a set $B$ if there is a polynomial-time computable function $f$ such that for every $n$, $n$ is in $A$ if and only if $f(n)$ is in $B$. This reducibility is, essentially, a resource-bounded version of many-one reducibility. Other resource-bounded reducibilities are used in other contexts of computational complexity theory where other resource bounds are of interest. Reductions weaker than Turing reducibility Although Turing reducibility is the most general reducibility that is effective, weaker reducibility relations are commonly studied. These reducibilities are related to the relative definability of sets over arithmetic or set theory. They include: • Arithmetical reducibility: A set $A$ is arithmetical in a set $B$ if $A$ is definable over the standard model of Peano arithmetic with an extra predicate for $B$. Equivalently, according to Post's theorem, $A$ is arithmetical in $B$ if and only if $A$ is Turing reducible to $B^{(n)}$, the $n$th Turing jump of $B$, for some natural number $n$. The arithmetical hierarchy gives a finer classification of arithmetical reducibility. • Hyperarithmetical reducibility: A set $A$ is hyperarithmetical in a set $B$ if $A$ is $\Delta _{1}^{1}$ definable (see analytical hierarchy) over the standard model of Peano arithmetic with a predicate for $B$. Equivalently, $A$ is hyperarithmetical in $B$ if and only if $A$ is Turing reducible to $B^{(\alpha )}$, the $\alpha $th Turing jump of $B$, for some $B$-recursive ordinal $\alpha $. • Relative constructibility: A set $A$ is relatively constructible from a set $B$ if $A$ is in $L(B)$, the smallest transitive model of ZFC set theory containing $B$ and all the ordinals. References • K. Ambos-Spies and P. Fejer, 2006. "Degrees of Unsolvability." Unpublished preprint. • P. Odifreddi, 1989. Classical Recursion Theory, North-Holland. ISBN 0-444-87295-7 • P. Odifreddi, 1999. Classical Recursion Theory, Volume II, Elsevier. ISBN 0-444-50205-X • E. Post, 1944, "Recursively enumerable sets of positive integers and their decision problems", Bulletin of the American Mathematical Society, volume 50, pages 284–316. • H. Rogers, Jr., 1967. The Theory of Recursive Functions and Effective Computability, second edition 1987, MIT Press. ISBN 0-262-68052-1 (paperback), ISBN 0-07-053522-1 • G. Sacks, 1990. Higher Recursion Theory, Springer-Verlag.
ISBN 3-540-19305-7 Internet resources • Stanford Encyclopedia of Philosophy: Recursive Functions
Reduction strategy In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation.[1] Some authors use the term to refer to an evaluation strategy.[2][3] Definitions Formally, for an abstract rewriting system $(A,\to )$, a reduction strategy $\to _{S}$ is a binary relation on $A$ with $\to _{S}\subseteq {\overset {+}{\to }}$, where ${\overset {+}{\to }}$ is the transitive closure of $\to $ (but not the reflexive closure).[1] In addition the normal forms of the strategy must be the same as the normal forms of the original rewriting system, i.e. for all $a$, there exists a $b$ with $a\to b$ iff $\exists b'.a\to _{S}b'$.[4] A one-step reduction strategy is one where $\to _{S}\subseteq \to $. Otherwise it is a many-step strategy.[5] A deterministic strategy is one where $\to _{S}$ is a partial function, i.e. for each $a\in A$ there is at most one $b$ such that $a\to _{S}b$. Otherwise it is a nondeterministic strategy.[5] Term rewriting In a term rewriting system a rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term. One-step strategies for term rewriting include:[5] • leftmost-innermost: in each step the leftmost of the innermost redexes is contracted, where an innermost redex is a redex not containing any redexes[6] • leftmost-outermost: in each step the leftmost of the outermost redexes is contracted, where an outermost redex is a redex not contained in any redexes[6] • rightmost-innermost, rightmost-outermost: similarly Many-step strategies include:[5] • parallel-innermost: reduces all innermost redexes simultaneously. This is well-defined because the redexes are pairwise disjoint. • parallel-outermost: similarly • Gross-Knuth reduction,[7] also called full substitution or Kleene reduction:[5] all redexes in the term are simultaneously reduced Parallel outermost and Gross-Knuth reduction are hypernormalizing for all almost-orthogonal term rewriting systems, meaning that these strategies will eventually reach a normal form if it exists, even when performing (finitely many) arbitrary reductions between successive applications of the strategy.[8] Stratego is a domain-specific language designed specifically for programming term rewriting strategies.[9] Lambda calculus Main article: Lambda calculus § Reduction strategies In the context of the lambda calculus, normal-order reduction refers to leftmost-outermost reduction in the sense given above.[10] Normal-order reduction is normalizing, in the sense that if a term has a normal form, then normal-order reduction will eventually reach it, hence the name normal. This is known as the standardization theorem.[11][12] Leftmost reduction is sometimes used to refer to normal order reduction, as with a pre-order traversal the notions coincide, and similarly the leftmost-outermost redex is the redex with leftmost starting character when the lambda term is considered as a string of characters.[13][14] When "leftmost" is defined using an in-order traversal the notions are distinct.
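The innermost and outermost strategies above can be compared on a small executable example. A minimal Python sketch (the term encoding and the toy rule set, Peano addition plus an erasing projection rule, are illustrative assumptions, not taken from the cited sources): it runs leftmost-outermost and leftmost-innermost reduction to a normal form and counts the steps.

# Terms: 'z' is zero, ('s', t) is successor, ('add', t, u) and
# ('fst', t, u) are function symbols applied to two arguments.

def contract(t):
    # Return the reduct if t is a redex at the root, else None.
    if isinstance(t, tuple) and t[0] == 'add':
        if t[1] == 'z':
            return t[2]                               # add(z, y) -> y
        if isinstance(t[1], tuple) and t[1][0] == 's':
            return ('s', ('add', t[1][1], t[2]))      # add(s(x), y) -> s(add(x, y))
    if isinstance(t, tuple) and t[0] == 'fst':
        return t[1]                                   # fst(x, y) -> x (erases y)
    return None

def redexes(t, pos=()):
    # Redex positions in pre-order: outer before inner, left before right.
    found = [pos] if contract(t) is not None else []
    if isinstance(t, tuple):
        for i, sub in enumerate(t[1:], 1):
            found += redexes(sub, pos + (i,))
    return found

def plug(t, pos, new):
    # Replace the subterm of t at position pos by new.
    if not pos:
        return new
    parts = list(t)
    parts[pos[0]] = plug(t[pos[0]], pos[1:], new)
    return tuple(parts)

def subterm(t, pos):
    for i in pos:
        t = t[i]
    return t

def step(t, innermost):
    ps = redexes(t)
    if not ps:
        return None                                   # t is a normal form
    if innermost:
        # Keep only redexes with no redex strictly inside them.
        ps = [p for p in ps if not any(q != p and q[:len(p)] == p for q in ps)]
    return plug(t, ps[0], contract(subterm(t, ps[0])))

term = ('fst', 'z', ('add', ('s', 'z'), 'z'))
for innermost, name in [(False, 'leftmost-outermost'), (True, 'leftmost-innermost')]:
    t, steps = term, 0
    while (nxt := step(t, innermost)) is not None:
        t, steps = nxt, steps + 1
    print(name, 'reached', t, 'in', steps, 'step(s)')
# leftmost-outermost needs 1 step; leftmost-innermost needs 3, because it
# first normalizes an argument that the fst rule then discards.

The same phenomenon, an outermost strategy discarding work that an innermost strategy insists on doing, reappears in the lambda-calculus examples that follow.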
For example, when "leftmost" is defined using an in-order traversal, in the term $(\lambda x.x\Omega )(\lambda y.I)$, with $\Omega =(\lambda x.xx)(\lambda x.xx)$ the standard divergent term and $I=\lambda x.x$ the identity, the leftmost redex of the in-order traversal is $\Omega $ while the leftmost-outermost redex is the entire expression.[15] Applicative order reduction refers to leftmost-innermost reduction.[10] In contrast to normal order, applicative order reduction may not terminate, even when the term has a normal form.[10] For example, using applicative order reduction, the following sequence of reductions is possible: ${\begin{aligned}&(\mathbf {\lambda } x.z)((\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\&\ldots \end{aligned}}$ But using normal-order reduction, the same starting point reduces quickly to normal form: $(\mathbf {\lambda } x.z)((\lambda w.www)(\lambda w.www))$ $\rightarrow z$ Full β-reduction refers to the nondeterministic one-step strategy that allows reducing any redex at each step.[3] Takahashi's parallel β-reduction is the strategy that reduces all redexes in the term simultaneously.[16] Weak reduction Normal and applicative order reduction are strong in that they allow reduction under lambda abstractions. In contrast, weak reduction does not reduce under a lambda abstraction.[17] Call-by-name reduction is the weak reduction strategy that reduces the leftmost outermost redex not inside a lambda abstraction, while call-by-value reduction is the weak reduction strategy that reduces the leftmost innermost redex not inside a lambda abstraction. These strategies were devised to reflect the call-by-name and call-by-value evaluation strategies.[18] In fact, applicative order reduction was also originally introduced to model the call-by-value parameter passing technique found in Algol 60 and modern programming languages. When combined with the idea of weak reduction, the resulting call-by-value reduction is indeed a faithful approximation.[19] Unfortunately, weak reduction is not confluent,[17] and the traditional reduction equations of the lambda calculus are useless, because they suggest relationships that violate the weak evaluation regime.[19] However, it is possible to extend the system to be confluent by allowing a restricted form of reduction under an abstraction, in particular when the redex does not involve the variable bound by the abstraction.[17] For example, λx.(λy.x)z is in normal form for a weak reduction strategy because the redex (λy.x)z is contained in a lambda abstraction. But the term λx.(λy.y)z can still be reduced under the extended weak reduction strategy, because the redex (λy.y)z does not refer to x.[20] Optimal reduction Optimal reduction is motivated by the existence of lambda terms where there does not exist a sequence of reductions which reduces them without duplicating work. For example, consider ((λg.(g(g(λx.x)))) (λh.((λf.(f(f(λz.z)))) (λw.(h(w(λy.y))))))) It is composed of three nested terms, x=((λg. ... ) (λh.y)), y=((λf. ...) (λw.z) ), and z=λw.(h(w(λy.y))). There are only two possible β-reductions to be done here, on x and on y.
Reducing the outer x term first results in the inner y term being duplicated, and each copy will have to be reduced, but reducing the inner y term first will duplicate its argument z, which will cause work to be duplicated when the values of h and w are made known.[note 1] Optimal reduction is not a reduction strategy for the lambda calculus in a narrow sense because performing β-reduction loses the information about the substituted redexes being shared. Instead it is defined for the labelled lambda calculus, an annotated lambda calculus which captures a precise notion of the work that should be shared.[21]: 113–114  Labels consist of a countably infinite set of atomic labels, and concatenations $ab$, overlinings ${\overline {a}}$ and underlinings ${\underline {a}}$ of labels. A labelled term is a lambda calculus term where each subterm has a label. The standard initial labeling of a lambda term gives each subterm a unique atomic label.[21]: 132  Labelled β-reduction is given by:[22] $((\lambda x.M)^{\alpha }N)^{\beta }\to \beta {\overline {\alpha }}\cdot M[x\mapsto {\underline {\alpha }}\cdot N]$ where $\cdot $ concatenates labels, $\beta \cdot T^{\alpha }=T^{\beta \alpha }$, and substitution $M[x\mapsto N]$ is defined as follows (using the Barendregt convention):[22] ${\begin{aligned}x^{\alpha }[x\mapsto N]&=\alpha \cdot N&\quad (\lambda y.M)^{\alpha }[x\mapsto N]&=(\lambda y.M[x\mapsto N])^{\alpha }\\y^{\alpha }[x\mapsto N]&=y^{\alpha }&\quad (MN)^{\alpha }[x\mapsto P]&=(M[x\mapsto P]N[x\mapsto P])^{\alpha }\end{aligned}}$ The system can be proven to be confluent. Optimal reduction is then defined to be normal order or leftmost-outermost reduction using reduction by families, i.e. the parallel reduction of all redexes with the same function part label.[23] The strategy is optimal in the sense that it performs the optimal (minimal) number of family reduction steps.[24] A practical algorithm for optimal reduction was first described in 1989,[25] more than a decade after optimal reduction was first defined in 1974.[26] The Bologna Optimal Higher-order Machine (BOHM) is a prototype implementation of an extension of the technique to interaction nets.[21]: 362 [27] Lambdascope is a more recent implementation of optimal reduction, also using interaction nets.[28][note 2] Call by need reduction Call by need reduction can be defined similarly to optimal reduction as weak leftmost-outermost reduction using parallel reduction of redexes with the same label, for a slightly different labelled lambda calculus.[17] An alternate definition changes the beta rule to an operation that finds the next "needed" computation, evaluates it, and substitutes the result into all locations. This requires extending the beta rule to allow reducing terms that are not syntactically adjacent.[29] As with call-by-name and call-by-value, call-by-need reduction was devised to mimic the behavior of the evaluation strategy known as "call-by-need" or lazy evaluation. See also • Reduction system • Reduction semantics • Thunk Notes 1. Incidentally, the above term reduces to the identity function (λy.y), and is constructed by making wrappers which make the identity function available to the binders g=λh..., f=λw..., h=λx.x (at first), and w=λz.z (at first), all of which are applied to the innermost term λy.y. 2. A summary of recent research on optimal reduction can be found in the short article About the efficient reduction of lambda terms. References 1. Kirchner, Hélène (26 August 2015).
"Rewriting Strategies and Strategic Rewrite Programs". In Martí-Oliet, Narciso; Ölveczky, Peter Csaba; Talcott, Carolyn (eds.). Logic, Rewriting, and Concurrency: Essays Dedicated to José Meseguer on the Occasion of His 65th Birthday. Springer. ISBN 978-3-319-23165-5. Retrieved 14 August 2021. 2. Selinger, Peter; Valiron, Benoît (2009). "Quantum Lambda Calculus" (PDF). Semantic Techniques in Quantum Computation: 23. doi:10.1017/CBO9781139193313.005. ISBN 9780521513746. Retrieved 21 August 2021. 3. Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. p. 56. ISBN 0-262-16209-1. 4. Klop, Jan Willem; van Oostrom, Vincent; van Raamsdonk, Femke (2007). "Reduction Strategies and Acyclicity" (PDF). Rewriting, Computation and Proof. Lecture Notes in Computer Science. 4600: 89–112. CiteSeerX 10.1.1.104.9139. doi:10.1007/978-3-540-73147-4_5. ISBN 978-3-540-73146-7. 5. Klop, J. W. "Term Rewriting Systems" (PDF). Papers by Nachum Dershowitz and students. Tel Aviv University. p. 77. Retrieved 14 August 2021. 6. Horwitz, Susan B. "Lambda Calculus". CS704 Notes. University of Wisconsin Madison. Retrieved 19 August 2021. 7. Barendregt, H. P.; Eekelen, M. C. J. D.; Glauert, J. R. W.; Kennaway, J. R.; Plasmeijer, M. J.; Sleep, M. R. (1987). Term graph rewriting. Parallel Architectures and Languages Europe. Vol. 259. pp. 141–158. doi:10.1007/3-540-17945-3_8. 8. Antoy, Sergio; Middeldorp, Aart (September 1996). "A sequential reduction strategy" (PDF). Theoretical Computer Science. 165 (1): 75–95. doi:10.1016/0304-3975(96)00041-2. Retrieved 8 September 2021. 9. Kieburtz, Richard B. (November 2001). "A Logic for Rewriting Strategies". Electronic Notes in Theoretical Computer Science. 58 (2): 138–154. doi:10.1016/S1571-0661(04)00283-X. 10. Mazzola, Guerino; Milmeister, Gérard; Weissmann, Jody (21 October 2004). Comprehensive Mathematics for Computer Scientists 2. Springer Science & Business Media. p. 323. ISBN 978-3-540-20861-7. 11. Curry, Haskell B.; Feys, Robert (1958). Combinatory Logic. Vol. I. Amsterdam: North Holland. pp. 139–142. ISBN 0-7204-2208-6. 12. Kashima, Ryo. "A Proof of the Standardization Theorem in λ-Calculus" (PDF). Tokyo Institute of Technology. Retrieved 19 August 2021. 13. Vial, Pierre (7 December 2017). Non-Idempotent Typing Operators, beyond the λ-Calculus (PDF) (PhD). Sorbonne Paris Cité. p. 62. 14. Partain, William D. (December 1989). Graph Reduction Without Pointers (PDF) (PhD). University of North Carolina at Chapel Hill. Retrieved 10 January 2022. 15. Van Oostrom, Vincent; Toyama, Yoshihito (2016). Normalisation by Random Descent (PDF). 1st International Conference on Formal Structures for Computation and Deduction. p. 32:3. doi:10.4230/LIPIcs.FSCD.2016.32. 16. Takahashi, M. (April 1995). "Parallel Reductions in λ-Calculus". Information and Computation. 118 (1): 120–127. doi:10.1006/inco.1995.1057. 17. Blanc, Tomasz; Lévy, Jean-Jacques; Maranget, Luc (2005). "Sharing in the Weak Lambda-Calculus". Processes, Terms and Cycles: Steps on the Road to Infinity: Essays Dedicated to Jan Willem Klop on the Occasion of His 60th Birthday. Springer. pp. 70–87. CiteSeerX 10.1.1.129.147. doi:10.1007/11601548_7. ISBN 978-3-540-32425-6. 18. Sestoft, Peter (2002). Mogensen, T; Schmidt, D; Sudborough, I. H. (eds.). Demonstrating Lambda Calculus Reduction (PDF). pp. 420–435. ISBN 3-540-00326-6. {{cite book}}: |work= ignored (help) 19. Felleisen, Matthias (2009). Semantics engineering with PLT Redex. Cambridge, Mass.: MIT Press. p. 42. ISBN 978-0262062756. 20. Sestini, Filippo (2019). 
Normalization by Evaluation for Typed Weak lambda-Reduction (PDF). 24th International Conference on Types for Proofs and Programs (TYPES 2018). doi:10.4230/LIPIcs.TYPES.2018.6. 21. Asperti, Andrea; Guerrini, Stefano (1998). The optimal implementation of functional programming languages. Cambridge, UK: Cambridge University Press. ISBN 0521621127. 22. Fernández, Maribel; Siafakas, Nikolaos (30 March 2010). "Labelled Lambda-calculi with Explicit Copy and Erase". Electronic Proceedings in Theoretical Computer Science. 22: 49–64. arXiv:1003.5515v1. doi:10.4204/EPTCS.22.5. S2CID 15500633. 23. Lévy, Jean-Jacques (9–11 November 1987). Sharing in the Evaluation of lambda Expressions (PDF). Second Franco-Japanese Symposium on Programming of Future Generation Computers. Cannes, France. p. 187. ISBN 0444705260. 24. Terese (2003). Term rewriting systems. Cambridge, UK: Cambridge University Press. p. 518. ISBN 978-0-521-39115-3. 25. Lamping, John (1990). An algorithm for optimal lambda calculus reduction (PDF). 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '90. pp. 16–30. doi:10.1145/96709.96711. 26. Lévy, Jean-Jacques (June 1974). Réductions sures dans le lambda-calcul (PDF) (PhD) (in French). Université Paris VII. pp. 81–109. OCLC 476040273. Retrieved 17 August 2021. 27. Asperti, Andrea. "Bologna Optimal Higher-Order Machine, Version 1.1". GitHub. 28. van Oostrom, Vincent; van de Looij, Kees-Jan; Zwitserlood, Marijn (2004). ] (Lambdascope): Another optimal implementation of the lambda-calculus (PDF). Workshop on Algebra and Logic on Programming Systems (ALPS). 29. Chang, Stephen; Felleisen, Matthias (2012). "The Call-by-Need Lambda Calculus, Revisited" (PDF). Programming Languages and Systems. Lecture Notes in Computer Science. 7211: 128–147. doi:10.1007/978-3-642-28869-2_7. ISBN 978-3-642-28868-5. S2CID 6350826. External links • Lambda calculus reduction workbench
Wikipedia
Binary quadratic form In mathematics, a binary quadratic form is a quadratic homogeneous polynomial in two variables $q(x,y)=ax^{2}+bxy+cy^{2},$ where a, b, c are the coefficients. This article is about binary quadratic forms with integer coefficients. For binary quadratic forms with other coefficients, see quadratic form. When the coefficients can be arbitrary complex numbers, most results are not specific to the case of two variables, so they are described in quadratic form. A quadratic form with integer coefficients is called an integral binary quadratic form, often abbreviated to binary quadratic form. This article is entirely devoted to integral binary quadratic forms. This choice is motivated by their status as the driving force behind the development of algebraic number theory. Since the late nineteenth century, binary quadratic forms have given up their preeminence in algebraic number theory to quadratic and more general number fields, but advances specific to binary quadratic forms still occur on occasion. Pierre Fermat stated that if p is an odd prime then the equation $p=x^{2}+y^{2}$ has a solution if and only if $p\equiv 1{\pmod {4}}$, and he made similar statements about the equations $p=x^{2}+2y^{2}$, $p=x^{2}+3y^{2}$, $p=x^{2}-2y^{2}$ and $p=x^{2}-3y^{2}$. The expressions $x^{2}+y^{2},x^{2}+2y^{2},x^{2}-3y^{2}$ and so on are quadratic forms, and the theory of quadratic forms gives a unified way of looking at and proving these theorems. Another instance of quadratic forms is Pell's equation $x^{2}-ny^{2}=1$. Binary quadratic forms are closely related to ideals in quadratic fields; this allows the class number of a quadratic field to be calculated by counting the number of reduced binary quadratic forms of a given discriminant. The classical theta function of two variables is $\sum _{(m,n)\in \mathbb {Z} ^{2}}q^{m^{2}+n^{2}}$; if $f(x,y)$ is a positive definite quadratic form, then $\sum _{(m,n)\in \mathbb {Z} ^{2}}q^{f(m,n)}$ is a theta function. Equivalence Two forms f and g are called equivalent if there exist integers $\alpha ,\beta ,\gamma ,{\text{ and }}\delta $ such that the following conditions hold: ${\begin{aligned}f(\alpha x+\beta y,\gamma x+\delta y)&=g(x,y),\\\alpha \delta -\beta \gamma &=1.\end{aligned}}$ For example, with $f=x^{2}+4xy+2y^{2}$ and $\alpha =-3$, $\beta =2$, $\gamma =1$, and $\delta =-1$, we find that f is equivalent to $g=(-3x+2y)^{2}+4(-3x+2y)(x-y)+2(x-y)^{2}$, which simplifies to $-x^{2}+4xy-2y^{2}$. The above equivalence conditions define an equivalence relation on the set of integral quadratic forms. It follows that the quadratic forms are partitioned into equivalence classes, called classes of quadratic forms. A class invariant can mean either a function defined on equivalence classes of forms or a property shared by all forms in the same class. Lagrange used a different notion of equivalence, in which the second condition is replaced by $\alpha \delta -\beta \gamma =\pm 1$. Since Gauss it has been recognized that this definition is inferior to that given above. If there is a need to distinguish, sometimes forms are called properly equivalent using the definition above and improperly equivalent if they are equivalent in Lagrange's sense. In matrix terminology, which is used occasionally below, when ${\begin{pmatrix}\alpha &\beta \\\gamma &\delta \end{pmatrix}}$ has integer entries and determinant 1, the map $f(x,y)\mapsto f(\alpha x+\beta y,\gamma x+\delta y)$ is a (right) group action of $\mathrm {SL} _{2}(\mathbb {Z} )$ on the set of binary quadratic forms.
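The worked example above can be checked mechanically. The following is a minimal sketch (ours, not part of the article), assuming the third-party library SymPy is available; the helper name transform is our own:

```python
# Check that the unimodular substitution (alpha, beta, gamma, delta) = (-3, 2, 1, -1)
# carries f = x^2 + 4xy + 2y^2 to g = -x^2 + 4xy - 2y^2, as claimed above.
from sympy import symbols, expand

x, y = symbols("x y")

def transform(a, b, c, alpha, beta, gamma, delta):
    """Coefficients of f(alpha*x + beta*y, gamma*x + delta*y)
    for f = a*x^2 + b*x*y + c*y^2."""
    f = a*x**2 + b*x*y + c*y**2
    g = expand(f.subs({x: alpha*x + beta*y, y: gamma*x + delta*y},
                      simultaneous=True))
    return g.coeff(x, 2), g.coeff(x, 1).coeff(y, 1), g.coeff(y, 2)

alpha, beta, gamma, delta = -3, 2, 1, -1
assert alpha*delta - beta*gamma == 1              # determinant 1: proper equivalence
print(transform(1, 4, 2, alpha, beta, gamma, delta))   # -> (-1, 4, -2)
```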
The equivalence relation above then arises from the general theory of group actions. If $f=ax^{2}+bxy+cy^{2}$, then important invariants include • The discriminant $\Delta =b^{2}-4ac$. • The content, equal to the greatest common divisor of a, b, and c. Terminology has arisen for classifying classes and their forms in terms of their invariants. A form of discriminant $\Delta $ is definite if $\Delta <0$, degenerate if $\Delta $ is a perfect square, and indefinite otherwise. A form is primitive if its content is 1, that is, if its coefficients are coprime. If a form's discriminant is a fundamental discriminant, then the form is primitive.[1] Discriminants satisfy $\Delta \equiv 0,1{\pmod {4}}.$ Automorphisms If f is a quadratic form, a matrix ${\begin{pmatrix}\alpha &\beta \\\gamma &\delta \end{pmatrix}}$ in $\mathrm {SL} _{2}(\mathbb {Z} )$ is an automorphism of f if $f(\alpha x+\beta y,\gamma x+\delta y)=f(x,y)$. For example, the matrix ${\begin{pmatrix}3&-4\\-2&3\end{pmatrix}}$ is an automorphism of the form $f=x^{2}-2y^{2}$. The automorphisms of a form form a subgroup of $\mathrm {SL} _{2}(\mathbb {Z} )$. When f is definite, the group is finite, and when f is indefinite, it is infinite and cyclic. Representation A binary quadratic form $q(x,y)$ represents an integer $n$ if it is possible to find integers $x$ and $y$ satisfying the equation $n=q(x,y).$ Such an equation is a representation of n by q. Examples Diophantus considered whether, for an odd integer $n$, it is possible to find integers $x$ and $y$ for which $n=x^{2}+y^{2}$.[2] When $n=65$, we have ${\begin{aligned}65&=1^{2}+8^{2},\\65&=4^{2}+7^{2},\end{aligned}}$ so we find pairs $(x,y)=(1,8){\text{ and }}(4,7)$ that do the trick. We obtain more pairs that work by switching the values of $x$ and $y$ and/or by changing the sign of one or both of $x$ and $y$. In all, there are sixteen different solution pairs. On the other hand, when $n=3$, the equation $3=x^{2}+y^{2}$ does not have integer solutions. To see why, we note that $x^{2}\geq 4$ unless $x=-1,0$ or $1$. Thus, $x^{2}+y^{2}$ will exceed 3 unless $(x,y)$ is one of the nine pairs with $x$ and $y$ each equal to $-1,0$ or 1. We can check these nine pairs directly to see that none of them satisfies $3=x^{2}+y^{2}$, so the equation does not have integer solutions. A similar argument shows that for each $n$, the equation $n=x^{2}+y^{2}$ can have only a finite number of solutions, since $x^{2}+y^{2}$ will exceed $n$ unless the absolute values $|x|$ and $|y|$ are both at most ${\sqrt {n}}$. There are only a finite number of pairs satisfying this constraint. Another ancient problem involving quadratic forms asks us to solve Pell's equation. For instance, we may seek integers x and y so that $1=x^{2}-2y^{2}$. Changing signs of x and y in a solution gives another solution, so it is enough to seek solutions in positive integers. One solution is $(x,y)=(3,2)$, that is, there is an equality $1=3^{2}-2\cdot 2^{2}$. If $(x,y)$ is any solution to $1=x^{2}-2y^{2}$, then $(3x+4y,2x+3y)$ is another such pair. For instance, from the pair $(3,2)$, we compute $(3\cdot 3+4\cdot 2,2\cdot 3+3\cdot 2)=(17,12)$, and we can check that this satisfies $1=17^{2}-2\cdot 12^{2}$.
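Before iterating, here is a one-line symbolic check (a sketch of ours, not part of the article, again assuming SymPy) that the step $(x,y)\mapsto (3x+4y,2x+3y)$ preserves the value of $x^{2}-2y^{2}$, so it carries solutions of Pell's equation to solutions:

```python
# (3x + 4y)^2 - 2(2x + 3y)^2 expands back to x^2 - 2y^2 identically.
from sympy import symbols, expand

x, y = symbols("x y")
assert expand((3*x + 4*y)**2 - 2*(2*x + 3*y)**2 - (x**2 - 2*y**2)) == 0
```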
Iterating this process, we find further pairs $(x,y)$ with $1=x^{2}-2y^{2}$: ${\begin{aligned}(3\cdot 17+4\cdot 12,2\cdot 17+3\cdot 12)&=(99,70),\\(3\cdot 99+4\cdot 70,2\cdot 99+3\cdot 70)&=(577,408),\\&\vdots \end{aligned}}$ These values will keep growing in size, so we see there are infinitely many ways to represent 1 by the form $x^{2}-2y^{2}$. This recursive description was discussed in Theon of Smyrna's commentary on Euclid's Elements. The representation problem The oldest problem in the theory of binary quadratic forms is the representation problem: describe the representations of a given number $n$ by a given quadratic form f. "Describe" can mean various things: give an algorithm to generate all representations, a closed formula for the number of representations, or even just determine whether any representations exist. The examples above discuss the representation problem for the numbers 3 and 65 by the form $x^{2}+y^{2}$ and for the number 1 by the form $x^{2}-2y^{2}$. We see that 65 is represented by $x^{2}+y^{2}$ in sixteen different ways, while 1 is represented by $x^{2}-2y^{2}$ in infinitely many ways and 3 is not represented by $x^{2}+y^{2}$ at all. In the first case, the sixteen representations were explicitly described. It was also shown that the number of representations of an integer by $x^{2}+y^{2}$ is always finite. The sum of squares function $r_{2}(n)$ gives the number of representations of n by $x^{2}+y^{2}$ as a function of n. There is a closed formula[3] $r_{2}(n)=4(d_{1}(n)-d_{3}(n)),$ where $d_{1}(n)$ is the number of divisors of n that are congruent to 1 modulo 4 and $d_{3}(n)$ is the number of divisors of n that are congruent to 3 modulo 4. There are several class invariants relevant to the representation problem: • The set of integers represented by a class. If an integer n is represented by a form in a class, then it is represented by all other forms in a class. • The minimum absolute value represented by a class. This is the smallest nonnegative value in the set of integers represented by a class. • The congruence classes modulo the discriminant of a class represented by the class. The minimum absolute value represented by a class is zero for degenerate classes and positive for definite and indefinite classes. All numbers represented by a definite form $f=ax^{2}+bxy+cy^{2}$ have the same sign: positive if $a>0$ and negative if $a<0$. For this reason, the former are called positive definite forms and the latter are negative definite. The number of representations of an integer n by a form f is finite if f is definite and infinite if f is indefinite. We saw instances of this in the examples above: $x^{2}+y^{2}$ is positive definite and $x^{2}-2y^{2}$ is indefinite. Equivalent representations The notion of equivalence of forms can be extended to equivalent representations. Representations $m=f(x_{1},y_{1})$ and $n=g(x_{2},y_{2})$ are equivalent if there exists a matrix ${\begin{pmatrix}\alpha &\beta \\\gamma &\delta \end{pmatrix}}$ with integer entries and determinant 1 so that $f(\alpha x+\beta y,\gamma x+\delta y)=g(x,y)$ and ${\begin{pmatrix}\delta &-\beta \\-\gamma &\alpha \end{pmatrix}}{\begin{pmatrix}x_{1}\\y_{1}\end{pmatrix}}={\begin{pmatrix}x_{2}\\y_{2}\end{pmatrix}}$ The above conditions give a (right) action of the group $\mathrm {SL} _{2}(\mathbb {Z} )$ on the set of representations of integers by binary quadratic forms. 
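Returning briefly to the sum of squares function above: the closed formula $r_{2}(n)=4(d_{1}(n)-d_{3}(n))$ can be checked against a direct count of lattice points. A small sketch (ours, not from the article), using only the standard library:

```python
from math import isqrt

def r2_brute(n):
    """Count integer pairs (x, y) with x^2 + y^2 = n."""
    m = isqrt(n)
    return sum(1 for x in range(-m, m + 1) for y in range(-m, m + 1)
               if x * x + y * y == n)

def r2_formula(n):
    """4 * (divisors congruent to 1 mod 4, minus divisors congruent to 3 mod 4)."""
    d1 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 3)
    return 4 * (d1 - d3)

for n in (3, 65):
    print(n, r2_brute(n), r2_formula(n))   # 3 -> 0 and 0; 65 -> 16 and 16
```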
It follows that equivalence defined this way is an equivalence relation and in particular that the forms in equivalent representations are equivalent forms. As an example, let $f=x^{2}-2y^{2}$ and consider a representation $1=f(x_{1},y_{1})$. Such a representation is a solution to the Pell equation described in the examples above. The matrix ${\begin{pmatrix}3&-4\\-2&3\end{pmatrix}}$ has determinant 1 and is an automorphism of f. Acting on the representation $1=f(x_{1},y_{1})$ by this matrix yields the equivalent representation $1=f(3x_{1}+4y_{1},2x_{1}+3y_{1})$. This is the recursion step in the process described above for generating infinitely many solutions to $1=x^{2}-2y^{2}$. Iterating this matrix action, we find that the infinite set of representations of 1 by f that were determined above are all equivalent. There are generally finitely many equivalence classes of representations of an integer n by forms of given nonzero discriminant $\Delta $. A complete set of representatives for these classes can be given in terms of reduced forms defined in the section below. When $\Delta <0$, every representation is equivalent to a unique representation by a reduced form, so a complete set of representatives is given by the finitely many representations of n by reduced forms of discriminant $\Delta $. When $\Delta >0$, Zagier proved that every representation of a positive integer n by a form of discriminant $\Delta $ is equivalent to a unique representation $n=f(x,y)$ in which f is reduced in Zagier's sense and $x>0$, $y\geq 0$.[4] The set of all such representations constitutes a complete set of representatives for equivalence classes of representations. Reduction and class numbers Lagrange proved that for every value D, there are only finitely many classes of binary quadratic forms with discriminant D. Their number is the class number of discriminant D. He described an algorithm, called reduction, for constructing a canonical representative in each class, the reduced form, whose coefficients are the smallest in a suitable sense. Gauss gave a superior reduction algorithm in Disquisitiones Arithmeticae, which ever since has been the reduction algorithm most commonly given in textbooks. In 1981, Zagier published an alternative reduction algorithm which has found several uses as an alternative to Gauss's.[5] Composition Composition most commonly refers to a binary operation on primitive equivalence classes of forms of the same discriminant, one of the deepest discoveries of Gauss, which makes this set into a finite abelian group called the form class group (or simply class group) of discriminant $\Delta $. Class groups have since become one of the central ideas in algebraic number theory. From a modern perspective, the class group of a fundamental discriminant $\Delta $ is isomorphic to the narrow class group of the quadratic field $\mathbf {Q} ({\sqrt {\Delta }})$ of discriminant $\Delta $.[6] For negative $\Delta $, the narrow class group is the same as the ideal class group, but for positive $\Delta $ it may be twice as big. "Composition" also sometimes refers to, roughly, a binary operation on binary quadratic forms. The word "roughly" indicates two caveats: only certain pairs of binary quadratic forms can be composed, and the resulting form is not well-defined (although its equivalence class is). The composition operation on equivalence classes is defined by first defining composition of forms and then showing that this induces a well-defined operation on classes. 
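Before developing composition further, the reduction theory just described can be made concrete for $\Delta <0$: a positive definite form $(a,b,c)$ is reduced when $|b|\leq a\leq c$, with $b\geq 0$ whenever $|b|=a$ or $a=c$, and each class contains exactly one reduced form, so listing the reduced primitive forms of a discriminant computes its class number. A minimal sketch (ours, not from the article):

```python
from math import gcd, isqrt

def reduced_forms(D):
    """All reduced primitive positive definite forms of discriminant D < 0."""
    assert D < 0 and D % 4 in (0, 1)
    forms = []
    for a in range(1, isqrt(-D // 3) + 1):        # reduction forces 3a^2 <= -D
        for b in range(-a + 1, a + 1):            # |b| <= a, excluding b = -a
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and not (b < 0 and a == c) \
                        and gcd(gcd(a, abs(b)), c) == 1:
                    forms.append((a, b, c))
    return forms

print(reduced_forms(-23))   # [(1, 1, 6), (2, -1, 3), (2, 1, 3)]: class number 3
```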
"Composition" can also refer to a binary operation on representations of integers by forms. This operation is substantially more complicated than composition of forms, but arose first historically. We will consider such operations in a separate section below. Composition means taking 2 quadratic forms of the same discriminant and combining them to create a quadratic form of the same discriminant, as follows from Brahmagupta's identity. Composing forms and classes A variety of definitions of composition of forms has been given, often in an attempt to simplify the extremely technical and general definition of Gauss. We present here Arndt's method, because it remains rather general while being simple enough to be amenable to computations by hand. An alternative definition is described at Bhargava cubes. Suppose we wish to compose forms $f_{1}=A_{1}x^{2}+B_{1}xy+C_{1}y^{2}$ and $f_{2}=A_{2}x^{2}+B_{2}xy+C_{2}y^{2}$, each primitive and of the same discriminant $\Delta $. We perform the following steps: 1. Compute $B_{\mu }={\tfrac {B_{1}+B_{2}}{2}}$ and $e=\gcd(A_{1},A_{2},B_{\mu })$, and $A={\tfrac {A_{1}A_{2}}{e^{2}}}$ 2. Solve the system of congruences ${\begin{aligned}x&\equiv B_{1}{\pmod {2{\tfrac {A_{1}}{e}}}}\\x&\equiv B_{2}{\pmod {2{\tfrac {A_{2}}{e}}}}\\{\tfrac {B_{\mu }}{e}}x&\equiv {\tfrac {\Delta +B_{1}B_{2}}{2e}}{\pmod {2A}}\end{aligned}}$ It can be shown that this system always has a unique integer solution modulo $2A$. We arbitrarily choose such a solution and call it B. 3. Compute C such that $\Delta =B^{2}-4AC$. It can be shown that C is an integer. The form $Ax^{2}+Bxy+Cy^{2}$ is "the" composition of $f_{1}$ and $f_{2}$. We see that its first coefficient is well-defined, but the other two depend on the choice of B and C. One way to make this a well-defined operation is to make an arbitrary convention for how to choose B—for instance, choose B to be the smallest positive solution to the system of congruences above. Alternatively, we may view the result of composition, not as a form, but as an equivalence class of forms modulo the action of the group of matrices of the form ${\begin{pmatrix}1&n\\0&1\end{pmatrix}}$, where n is an integer. If we consider the class of $Ax^{2}+Bxy+Cy^{2}$ under this action, the middle coefficients of the forms in the class form a congruence class of integers modulo 2A. Thus, composition gives a well-defined function from pairs of binary quadratic forms to such classes. It can be shown that if $f_{1}$ and $f_{2}$ are equivalent to $g_{1}$ and $g_{2}$ respectively, then the composition of $f_{1}$ and $f_{2}$ is equivalent to the composition of $g_{1}$ and $g_{2}$. It follows that composition induces a well-defined operation on primitive classes of discriminant $\Delta $, and as mentioned above, Gauss showed these classes form a finite abelian group. The identity class in the group is the unique class containing all forms $x^{2}+Bxy+Cy^{2}$, i.e., with first coefficient 1. (It can be shown that all such forms lie in a single class, and the restriction $\Delta \equiv 0{\text{ or }}1{\pmod {4}}$ implies that there exists such a form of every discriminant.) To invert a class, we take a representative $Ax^{2}+Bxy+Cy^{2}$ and form the class of $Ax^{2}-Bxy+Cy^{2}$. Alternatively, we can form the class of $Cx^{2}+Bxy+Ay^{2}$ since this and $Ax^{2}-Bxy+Cy^{2}$ are equivalent. Genera of binary quadratic forms Gauss also considered a coarser notion of equivalence, with each coarse class called a genus of forms. 
Each genus is the union of a finite number of equivalence classes of the same discriminant, with the number of classes depending only on the discriminant. In the context of binary quadratic forms, genera can be defined either through congruence classes of numbers represented by forms or by genus characters defined on the set of forms. A third definition is a special case of the genus of a quadratic form in n variables. This states that forms are in the same genus if they are locally equivalent at all rational primes (including the Archimedean place). History There is circumstantial evidence of protohistoric knowledge of algebraic identities involving binary quadratic forms.[7] The first problem concerning binary quadratic forms asks for the existence or construction of representations of integers by particular binary quadratic forms. The prime examples are the solution of Pell's equation and the representation of integers as sums of two squares. Pell's equation was already considered by the Indian mathematician Brahmagupta in the 7th century CE. Several centuries later, his ideas were extended to a complete solution of Pell's equation known as the chakravala method, attributed to either of the Indian mathematicians Jayadeva or Bhāskara II.[8] The problem of representing integers by sums of two squares was considered in the 3rd century by Diophantus.[9] In the 17th century, inspired while reading Diophantus's Arithmetica, Fermat made several observations about representations by specific quadratic forms including that which is now known as Fermat's theorem on sums of two squares.[10] Euler provided the first proofs of Fermat's observations and added some new conjectures about representations by specific forms, without proof.[11] The general theory of quadratic forms was initiated by Lagrange in 1775 in his Recherches d'Arithmétique. Lagrange was the first to realize that "a coherent general theory required the simultaneous consideration of all forms."[12] He was the first to recognize the importance of the discriminant and to define the essential notions of equivalence and reduction, which, according to Weil, have "dominated the whole subject of quadratic forms ever since".[13] Lagrange showed that there are finitely many equivalence classes of given discriminant, thereby defining for the first time an arithmetic class number. His introduction of reduction allowed the quick enumeration of the classes of given discriminant and foreshadowed the eventual development of infrastructure. In 1798, Legendre published Essai sur la théorie des nombres, which summarized the work of Euler and Lagrange and added some of his own contributions, including the first glimpse of a composition operation on forms. The theory was vastly extended and refined by Gauss in Section V of Disquisitiones Arithmeticae. Gauss introduced a very general version of a composition operator that allows composing even forms of different discriminants and imprimitive forms. He replaced Lagrange's equivalence with the more precise notion of proper equivalence, and this enabled him to show that the primitive classes of given discriminant form a group under the composition operation. He introduced genus theory, which gives a powerful way to understand the quotient of the class group by the subgroup of squares. (Gauss and many subsequent authors wrote 2b in place of b; the modern convention allowing the coefficient of xy to be odd is due to Eisenstein).
These investigations of Gauss strongly influenced both the arithmetical theory of quadratic forms in more than two variables and the subsequent development of algebraic number theory, where quadratic fields are replaced with more general number fields. But the impact was not immediate. Section V of Disquisitiones contains truly revolutionary ideas and involves very complicated computations, sometimes left to the reader. Combined, the novelty and complexity made Section V notoriously difficult. Dirichlet published simplifications of the theory that made it accessible to a broader audience. The culmination of this work is his text Vorlesungen über Zahlentheorie. The third edition of this work includes two supplements by Dedekind. Supplement XI introduces ring theory, and from then on, especially after the 1897 publication of Hilbert's Zahlbericht, the theory of binary quadratic forms lost its preeminent position in algebraic number theory and became overshadowed by the more general theory of algebraic number fields. Even so, work on binary quadratic forms with integer coefficients continues to the present. This includes numerous results about quadratic number fields, which can often be translated into the language of binary quadratic forms, but also includes developments about the forms themselves, or ones that originated from thinking about forms, including Shanks's infrastructure, Zagier's reduction algorithm, Conway's topographs, and Bhargava's reinterpretation of composition through Bhargava cubes. See also • Bhargava cube • Fermat's theorem on sums of two squares • Legendre symbol • Brahmagupta's identity Notes 1. Cohen 1993, §5.2 2. Weil 2001, p. 30 3. Hardy & Wright 2008, Thm. 278 4. Zagier 1981 5. Zagier 1981 6. Fröhlich & Taylor 1993, Theorem 58 7. Weil 2001, Ch.I §§VI, VIII 8. Weil 2001, Ch.I §IX 9. Weil 2001, Ch.I §IX 10. Weil 2001, Ch.II §§VIII-XI 11. Weil 2001, Ch.III §§VII-IX 12. Weil 2001, p.318 13. Weil 2001, p.317 References • Johannes Buchmann, Ulrich Vollmer: Binary Quadratic Forms, Springer, Berlin 2007, ISBN 3-540-46367-4 • Duncan A. Buell: Binary Quadratic Forms, Springer, New York 1989 • David A. Cox, Primes of the Form $x^{2}+ny^{2}$: Fermat, Class Field Theory, and Complex Multiplication • Cohen, Henri (1993), A Course in Computational Algebraic Number Theory, Graduate Texts in Mathematics, vol. 138, Berlin, New York: Springer-Verlag, ISBN 978-3-540-55640-4, MR 1228206 • Fröhlich, Albrecht; Taylor, Martin (1993), Algebraic number theory, Cambridge Studies in Advanced Mathematics, vol. 27, Cambridge University Press, ISBN 978-0-521-43834-6, MR 1215934 • Hardy, G. H.; Wright, E. M. (2008) [1938], An Introduction to the Theory of Numbers, Revised by D. R. Heath-Brown and J. H. Silverman. Foreword by Andrew Wiles. (6th ed.), Oxford: Clarendon Press, ISBN 978-0-19-921986-5, MR 2445243, Zbl 1159.11001 • Weil, André (2001), Number Theory: An approach through history from Hammurapi to Legendre, Birkhäuser Boston • Zagier, Don (1981), Zetafunktionen und quadratische Körper: eine Einführung in die höhere Zahlentheorie, Springer External links • Peter Luschny, Positive numbers represented by a binary quadratic form • A. V. Malyshev (2001) [1994], "Binary quadratic form", Encyclopedia of Mathematics, EMS Press
Reductive group In mathematics, a reductive group is a type of linear algebraic group over a field. One definition is that a connected linear algebraic group G over a perfect field is reductive if it has a representation that has a finite kernel and is a direct sum of irreducible representations. Reductive groups include some of the most important groups in mathematics, such as the general linear group GL(n) of invertible matrices, the special orthogonal group SO(n), and the symplectic group Sp(2n). Simple algebraic groups and (more generally) semisimple algebraic groups are reductive. Claude Chevalley showed that the classification of reductive groups is the same over any algebraically closed field. In particular, the simple algebraic groups are classified by Dynkin diagrams, as in the theory of compact Lie groups or complex semisimple Lie algebras. Reductive groups over an arbitrary field are harder to classify, but for many fields such as the real numbers R or a number field, the classification is well understood. The classification of finite simple groups says that most finite simple groups arise as the group G(k) of k-rational points of a simple algebraic group G over a finite field k, or as minor variants of that construction. Reductive groups have a rich representation theory in various contexts. First, one can study the representations of a reductive group G over a field k as an algebraic group, which are actions of G on k-vector spaces. But also, one can study the complex representations of the group G(k) when k is a finite field, or the infinite-dimensional unitary representations of a real reductive group, or the automorphic representations of an adelic algebraic group. The structure theory of reductive groups is used in all these areas. Definitions Main article: Linear algebraic group A linear algebraic group over a field k is defined as a smooth closed subgroup scheme of GL(n) over k, for some positive integer n. Equivalently, a linear algebraic group over k is a smooth affine group scheme over k.
With the unipotent radical A connected linear algebraic group $G$ over an algebraically closed field is called semisimple if every smooth connected solvable normal subgroup of $G$ is trivial. More generally, a connected linear algebraic group $G$ over an algebraically closed field is called reductive if the largest smooth connected unipotent normal subgroup of $G$ is trivial.[1] This normal subgroup is called the unipotent radical and is denoted $R_{u}(G)$. (Some authors do not require reductive groups to be connected.) A group $G$ over an arbitrary field k is called semisimple or reductive if the base change $G_{\overline {k}}$ is semisimple or reductive, where ${\overline {k}}$ is an algebraic closure of k. (This is equivalent to the definition of reductive groups in the introduction when k is perfect.[2]) Any torus over k, such as the multiplicative group Gm, is reductive. With representation theory Over fields of characteristic zero, another equivalent definition of a reductive group is a connected group $G$ admitting a faithful semisimple representation which remains semisimple over an algebraic closure $k^{al}$ of k ([3], page 424). Simple reductive groups A linear algebraic group G over a field k is called simple (or k-simple) if it is semisimple, nontrivial, and every smooth connected normal subgroup of G over k is trivial or equal to G.[4] (Some authors call this property "almost simple".) This differs slightly from the terminology for abstract groups, in that a simple algebraic group may have nontrivial center (although the center must be finite). For example, for any integer n at least 2 and any field k, the group SL(n) over k is simple, and its center is the group scheme μn of nth roots of unity. A central isogeny of reductive groups is a surjective homomorphism with kernel a finite central subgroup scheme. Every reductive group over a field admits a central isogeny from the product of a torus and some simple groups. For example, over any field k, $GL(n)\cong (G_{m}\times SL(n))/\mu _{n}.$ It is slightly awkward that the definition of a reductive group over a field involves passage to the algebraic closure. For a perfect field k, that can be avoided: a linear algebraic group G over k is reductive if and only if every smooth connected unipotent normal k-subgroup of G is trivial. For an arbitrary field, the latter property defines a pseudo-reductive group, which is somewhat more general. Split reductive groups A reductive group G over a field k is called split if it contains a split maximal torus T over k (that is, a split torus in G whose base change to ${\overline {k}}$ is a maximal torus in $G_{\overline {k}}$). It is equivalent to say that T is a split torus in G that is maximal among all k-tori in G.[5] These kinds of groups are useful because their classification can be described through combinatorial data called root data. Examples GLn and SLn A fundamental example of a reductive group is the general linear group ${\text{GL}}_{n}$ of invertible n × n matrices over a field k, for a natural number n. In particular, the multiplicative group Gm is the group GL(1), and so its group Gm(k) of k-rational points is the group k* of nonzero elements of k under multiplication. Another reductive group is the special linear group SL(n) over a field k, the subgroup of matrices with determinant 1. In fact, SL(n) is a simple algebraic group for n at least 2.
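The central isogeny $GL(n)\cong (G_{m}\times SL(n))/\mu _{n}$ mentioned above can be illustrated numerically for n = 2 (a sketch of ours, not from the article, assuming the third-party library NumPy): every invertible matrix A factors as a scalar c times a matrix of determinant 1, and the factorizations of a fixed A differ exactly by an nth root of unity, i.e. by the kernel $\mu _{n}$:

```python
import numpy as np

n = 2
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                  # invertible, det A = 6
c = np.linalg.det(A) ** (1.0 / n)           # one n-th root of det A
S = A / c                                   # then det S = det A / c^n = 1
assert np.isclose(np.linalg.det(S), 1.0)
for k in range(n):                          # the other factorizations of A
    z = np.exp(2j * np.pi * k / n)          # an n-th root of unity
    assert np.allclose((z * c) * (S / z), A)
```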
O(n), SO(n), and Sp(n) An important simple group is the symplectic group Sp(2n) over a field k, the subgroup of GL(2n) that preserves a nondegenerate alternating bilinear form on the vector space $k^{2n}$. Likewise, the orthogonal group O(q) is the subgroup of the general linear group that preserves a nondegenerate quadratic form q on a vector space over a field k. The algebraic group O(q) has two connected components, and its identity component SO(q) is reductive, in fact simple for q of dimension n at least 3. (For k of characteristic 2 and n odd, the group scheme O(q) is in fact connected but not smooth over k. The simple group SO(q) can always be defined as the maximal smooth connected subgroup of O(q) over k.) When k is algebraically closed, any two (nondegenerate) quadratic forms of the same dimension are isomorphic, and so it is reasonable to call this group SO(n). For a general field k, different quadratic forms of dimension n can yield non-isomorphic simple groups SO(q) over k, although they all have the same base change to the algebraic closure ${\overline {k}}$. Tori The group $\mathbb {G} _{m}$ and products of it are called algebraic tori. They are examples of reductive groups since they embed in ${\text{GL}}_{n}$ through the diagonal, and from this representation, their unipotent radical is trivial. For example, $\mathbb {G} _{m}\times \mathbb {G} _{m}$ embeds in ${\text{GL}}_{2}$ via the map $(a_{1},a_{2})\mapsto {\begin{bmatrix}a_{1}&0\\0&a_{2}\end{bmatrix}}.$ Non-examples • A nontrivial unipotent group is not reductive, since its unipotent radical is the whole group. This includes the additive group $\mathbb {G} _{a}$. • The Borel group $B_{n}$ of ${\text{GL}}_{n}$ has a non-trivial unipotent radical, the subgroup $\mathbb {U} _{n}$ of upper-triangular matrices with $1$'s on the diagonal. This is an example of a non-reductive group which is not unipotent. Associated reductive group Note that the normality of the unipotent radical $R_{u}(G)$ implies that the quotient group $G/R_{u}(G)$ is reductive. For example, $B_{n}/(R_{u}(B_{n}))\cong \prod _{i=1}^{n}\mathbb {G} _{m}$ (a numerical illustration of this factorization appears below). Other characterizations of reductive groups Every compact connected Lie group has a complexification, which is a complex reductive algebraic group. In fact, this construction gives a one-to-one correspondence between compact connected Lie groups and complex reductive groups, up to isomorphism. For a compact Lie group K with complexification G, the inclusion from K into the complex reductive group G(C) is a homotopy equivalence, with respect to the classical topology on G(C). For example, the inclusion from the unitary group U(n) to GL(n,C) is a homotopy equivalence. For a reductive group G over a field of characteristic zero, all finite-dimensional representations of G (as an algebraic group) are completely reducible, that is, they are direct sums of irreducible representations.[6] That is the source of the name "reductive". Note, however, that complete reducibility fails for reductive groups in positive characteristic (apart from tori). In more detail: an affine group scheme G of finite type over a field k is called linearly reductive if its finite-dimensional representations are completely reducible.
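As promised above, here is a small numerical sketch (ours, not from the article, assuming NumPy) of the factorization behind $B_{n}/R_{u}(B_{n})\cong \mathbb {G} _{m}^{n}$: an invertible upper-triangular matrix factors as a diagonal part times a unipotent part, and the quotient map to the torus just reads off the diagonal.

```python
import numpy as np

b = np.array([[2.0, 1.0, 4.0],              # an element of B_3: invertible,
              [0.0, 3.0, 5.0],              # upper triangular
              [0.0, 0.0, 7.0]])
d = np.diag(np.diag(b))                     # image in B_3 / R_u(B_3) = Gm^3
u = np.linalg.inv(d) @ b                    # unipotent part, in R_u(B_3)
assert np.allclose(np.diag(u), 1.0)         # 1's on the diagonal
assert np.allclose(d @ u, b)                # b = (torus part) * (unipotent part)
```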
For k of characteristic zero, G is linearly reductive if and only if the identity component Go of G is reductive.[7] For k of characteristic p>0, however, Masayoshi Nagata showed that G is linearly reductive if and only if Go is of multiplicative type and G/Go has order prime to p.[8] Roots The classification of reductive algebraic groups is in terms of the associated root system, as in the theories of complex semisimple Lie algebras or compact Lie groups. Here is the way roots appear for reductive groups. Let G be a split reductive group over a field k, and let T be a split maximal torus in G; so T is isomorphic to (Gm)n for some n, with n called the rank of G. Every representation of T (as an algebraic group) is a direct sum of 1-dimensional representations.[9] A weight for G means an isomorphism class of 1-dimensional representations of T, or equivalently a homomorphism T → Gm. The weights form a group X(T) under tensor product of representations, with X(T) isomorphic to the product of n copies of the integers, Zn. The adjoint representation is the action of G by conjugation on its Lie algebra ${\mathfrak {g}}$. A root of G means a nonzero weight that occurs in the action of T ⊂ G on ${\mathfrak {g}}$. The subspace of ${\mathfrak {g}}$ corresponding to each root is 1-dimensional, and the subspace of ${\mathfrak {g}}$ fixed by T is exactly the Lie algebra ${\mathfrak {t}}$ of T.[10] Therefore, the Lie algebra of G decomposes into ${\mathfrak {t}}$ together with 1-dimensional subspaces indexed by the set Φ of roots: ${\mathfrak {g}}={\mathfrak {t}}\oplus \bigoplus _{\alpha \in \Phi }{\mathfrak {g}}_{\alpha }.$ For example, when G is the group GL(n), its Lie algebra ${\mathfrak {gl}}(n)$ is the vector space of all n × n matrices over k. Let T be the subgroup of diagonal matrices in G. Then the root-space decomposition expresses ${\mathfrak {gl}}(n)$ as the direct sum of the diagonal matrices and the 1-dimensional subspaces indexed by the off-diagonal positions (i, j). Writing L1,...,Ln for the standard basis for the weight lattice X(T) ≅ Zn, the roots are the elements Li − Lj for all i ≠ j from 1 to n. The roots of a semisimple group form a root system; this is a combinatorial structure which can be completely classified. More generally, the roots of a reductive group form a root datum, a slight variation.[11] The Weyl group of a reductive group G means the quotient group of the normalizer of a maximal torus by the torus, W = NG(T)/T. The Weyl group is in fact a finite group generated by reflections. For example, for the group GL(n) (or SL(n)), the Weyl group is the symmetric group Sn. There are finitely many Borel subgroups containing a given maximal torus, and they are permuted simply transitively by the Weyl group (acting by conjugation).[12] A choice of Borel subgroup determines a set of positive roots Φ+ ⊂ Φ, with the property that Φ is the disjoint union of Φ+ and −Φ+. Explicitly, the Lie algebra of B is the direct sum of the Lie algebra of T and the positive root spaces: ${\mathfrak {b}}={\mathfrak {t}}\oplus \bigoplus _{\alpha \in \Phi ^{+}}{\mathfrak {g}}_{\alpha }.$ For example, if B is the Borel subgroup of upper-triangular matrices in GL(n), then this is the obvious decomposition of the subspace ${\mathfrak {b}}$ of upper-triangular matrices in ${\mathfrak {gl}}(n)$. The positive roots are Li − Lj for 1 ≤ i < j ≤ n. A simple root means a positive root that is not a sum of two other positive roots. Write Δ for the set of simple roots.
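The root-space decomposition of ${\mathfrak {gl}}(n)$ described above can be checked numerically (a sketch of ours, not from the article, assuming NumPy): conjugating the matrix unit $E_{ij}$ by a diagonal torus element t scales it by $t_{i}/t_{j}$, the value of the root $L_{i}-L_{j}$ at t, while the diagonal matrices are fixed.

```python
import numpy as np

t_diag = np.array([2.0, 3.0, 5.0])          # a point of the maximal torus T in GL(3)
t = np.diag(t_diag)
t_inv = np.diag(1.0 / t_diag)
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3))
        E[i, j] = 1.0                       # matrix unit E_ij, a basis vector of gl(3)
        AdE = t @ E @ t_inv                 # adjoint action of t on E_ij
        assert np.allclose(AdE, (t_diag[i] / t_diag[j]) * E)
        if i != j:                          # off-diagonal: the root space for L_i - L_j
            print(f"L{i+1} - L{j+1} acts by {t_diag[i] / t_diag[j]}")
```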
The number r of simple roots is equal to the rank of the commutator subgroup of G, called the semisimple rank of G (which is simply the rank of G if G is semisimple). For example, the simple roots for GL(n) (or SL(n)) are Li − Li+1 for 1 ≤ i ≤ n − 1. Root systems are classified by the corresponding Dynkin diagram, which is a finite graph (with some edges directed or multiple). The set of vertices of the Dynkin diagram is the set of simple roots. In short, the Dynkin diagram describes the angles between the simple roots and their relative lengths, with respect to a Weyl group-invariant inner product on the weight lattice. The connected Dynkin diagrams (corresponding to simple groups) are pictured below. For a split reductive group G over a field k, an important point is that a root α determines not just a 1-dimensional subspace of the Lie algebra of G, but also a copy of the additive group Ga in G with the given Lie algebra, called a root subgroup Uα. The root subgroup is the unique copy of the additive group in G which is normalized by T and which has the given Lie algebra.[10] The whole group G is generated (as an algebraic group) by T and the root subgroups, while the Borel subgroup B is generated by T and the positive root subgroups. In fact, a split semisimple group G is generated by the root subgroups alone. Parabolic subgroups For a split reductive group G over a field k, the smooth connected subgroups of G that contain a given Borel subgroup B of G are in one-to-one correspondence with the subsets of the set Δ of simple roots (or equivalently, the subsets of the set of vertices of the Dynkin diagram). Let r be the order of Δ, the semisimple rank of G. Every parabolic subgroup of G is conjugate to a subgroup containing B by some element of G(k). As a result, there are exactly 2r conjugacy classes of parabolic subgroups in G over k.[13] Explicitly, the parabolic subgroup corresponding to a given subset S of Δ is the group generated by B together with the root subgroups U−α for α in S. For example, the parabolic subgroups of GL(n) that contain the Borel subgroup B above are the groups of invertible matrices with zero entries below a given set of squares along the diagonal, such as: $\left\{{\begin{bmatrix}*&*&*&*\\*&*&*&*\\0&0&*&*\\0&0&0&*\end{bmatrix}}\right\}$ By definition, a parabolic subgroup P of a reductive group G over a field k is a smooth k-subgroup such that the quotient variety G/P is proper over k, or equivalently projective over k. Thus the classification of parabolic subgroups amounts to a classification of the projective homogeneous varieties for G (with smooth stabilizer group; that is no restriction for k of characteristic zero). For GL(n), these are the flag varieties, parametrizing sequences of linear subspaces of given dimensions a1,...,ai contained in a fixed vector space V of dimension n: $0\subset S_{a_{1}}\subset \cdots \subset S_{a_{i}}\subset V.$ For the orthogonal group or the symplectic group, the projective homogeneous varieties have a similar description as varieties of isotropic flags with respect to a given quadratic form or symplectic form. For any reductive group G with a Borel subgroup B, G/B is called the flag variety or flag manifold of G. 
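The count of $2^{r}$ conjugacy classes of parabolic subgroups can be made explicit for GL(4), where r = 3. The following combinatorial sketch (ours, not from the article) enumerates the standard parabolics containing the upper-triangular Borel: each subset S of the simple roots merges adjacent diagonal blocks, giving a block-upper-triangular shape like the matrix displayed above.

```python
from itertools import combinations

n = 4
simple = [1, 2, 3]                          # positions of a_i = L_i - L_{i+1}
count = 0
for r in range(len(simple) + 1):
    for S in combinations(simple, r):
        # Including U_{-a_i} for i in S erases the cut between blocks i and i+1,
        # so the block sizes are read off from the positions NOT in S.
        cuts = [0] + [i for i in simple if i not in S] + [n]
        sizes = [cuts[k + 1] - cuts[k] for k in range(len(cuts) - 1)]
        print(f"S = {S}: block sizes {sizes}")
        count += 1
print(count)                                # 8 = 2^3 standard parabolic subgroups
```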
Classification of split reductive groups Chevalley showed in 1958 that the reductive groups over any algebraically closed field are classified up to isomorphism by root data.[14] In particular, the semisimple groups over an algebraically closed field are classified up to central isogenies by their Dynkin diagram, and the simple groups correspond to the connected diagrams. Thus there are simple groups of types An, Bn, Cn, Dn, E6, E7, E8, F4, G2. This result is essentially identical to the classifications of compact Lie groups or complex semisimple Lie algebras, by Wilhelm Killing and Élie Cartan in the 1880s and 1890s. In particular, the dimensions, centers, and other properties of the simple algebraic groups can be read from the list of simple Lie groups. It is remarkable that the classification of reductive groups is independent of the characteristic. For comparison, there are many more simple Lie algebras in positive characteristic than in characteristic zero. The exceptional groups G of type G2 and E6 had been constructed earlier, at least in the form of the abstract group G(k), by L. E. Dickson. For example, the group G2 is the automorphism group of an octonion algebra over k. By contrast, the Chevalley groups of type F4, E7, E8 over a field of positive characteristic were completely new. More generally, the classification of split reductive groups is the same over any field.[15] A semisimple group G over a field k is called simply connected if every central isogeny from a semisimple group to G is an isomorphism. (For G semisimple over the complex numbers, being simply connected in this sense is equivalent to G(C) being simply connected in the classical topology.) Chevalley's classification gives that, over any field k, there is a unique simply connected split semisimple group G with a given Dynkin diagram, with simple groups corresponding to the connected diagrams. At the other extreme, a semisimple group is of adjoint type if its center is trivial. The split semisimple groups over k with given Dynkin diagram are exactly the groups G/A, where G is the simply connected group and A is a k-subgroup scheme of the center of G. For example, the simply connected split simple groups over a field k corresponding to the "classical" Dynkin diagrams are as follows: • An: SL(n+1) over k; • Bn: the spin group Spin(2n+1) associated to a quadratic form of dimension 2n+1 over k with Witt index n, for example the form $q(x_{1},\ldots ,x_{2n+1})=x_{1}x_{2}+x_{3}x_{4}+\cdots +x_{2n-1}x_{2n}+x_{2n+1}^{2};$ • Cn: the symplectic group Sp(2n) over k; • Dn: the spin group Spin(2n) associated to a quadratic form of dimension 2n over k with Witt index n, which can be written as: $q(x_{1},\ldots ,x_{2n})=x_{1}x_{2}+x_{3}x_{4}+\cdots +x_{2n-1}x_{2n}.$ The outer automorphism group of a split reductive group G over a field k is isomorphic to the automorphism group of the root datum of G. Moreover, the automorphism group of G splits as a semidirect product: $\operatorname {Aut} (G)\cong \operatorname {Out} (G)\ltimes (G/Z)(k),$ where Z is the center of G.[16] For a split semisimple simply connected group G over a field, the outer automorphism group of G has a simpler description: it is the automorphism group of the Dynkin diagram of G. Reductive group schemes A group scheme G over a scheme S is called reductive if the morphism G → S is smooth and affine, and every geometric fiber $G_{\overline {k}}$ is reductive. 
(For a point p in S, the corresponding geometric fiber means the base change of G to an algebraic closure ${\overline {k}}$ of the residue field of p.) Extending Chevalley's work, Michel Demazure and Grothendieck showed that split reductive group schemes over any nonempty scheme S are classified by root data.[17] This statement includes the existence of Chevalley groups as group schemes over Z, and it says that every split reductive group over a scheme S is isomorphic to the base change of a Chevalley group from Z to S. Real reductive groups In the context of Lie groups rather than algebraic groups, a real reductive group is a Lie group G such that there is a linear algebraic group L over R whose identity component (in the Zariski topology) is reductive, and a homomorphism G → L(R) whose kernel is finite and whose image is open in L(R) (in the classical topology). It is also standard to assume that the image of the adjoint representation Ad(G) is contained in Int(gC) = Ad(L0(C)) (which is automatic for G connected).[18] In particular, every connected semisimple Lie group (meaning that its Lie algebra is semisimple) is reductive. Also, the Lie group R is reductive in this sense, since it can be viewed as the identity component of GL(1,R) ≅ R*. The problem of classifying the real reductive groups largely reduces to classifying the simple Lie groups. These are classified by their Satake diagram; or one can just refer to the list of simple Lie groups (up to finite coverings). Useful theories of admissible representations and unitary representations have been developed for real reductive groups in this generality. The main differences between this definition and the definition of a reductive algebraic group have to do with the fact that an algebraic group G over R may be connected as an algebraic group while the Lie group G(R) is not connected, and likewise for simply connected groups. For example, the projective linear group PGL(2) is connected as an algebraic group over any field, but its group of real points PGL(2,R) has two connected components. The identity component of PGL(2,R) (sometimes called PSL(2,R)) is a real reductive group that cannot be viewed as an algebraic group. Similarly, SL(2) is simply connected as an algebraic group over any field, but the Lie group SL(2,R) has fundamental group isomorphic to the integers Z, and so SL(2,R) has nontrivial covering spaces. By definition, all finite coverings of SL(2,R) (such as the metaplectic group) are real reductive groups. On the other hand, the universal cover of SL(2,R) is not a real reductive group, even though its Lie algebra is reductive, that is, the product of a semisimple Lie algebra and an abelian Lie algebra. For a connected real reductive group G, the quotient manifold G/K of G by a maximal compact subgroup K is a symmetric space of non-compact type. In fact, every symmetric space of non-compact type arises this way. These are central examples in Riemannian geometry of manifolds with nonpositive sectional curvature. For example, SL(2,R)/SO(2) is the hyperbolic plane, and SL(2,C)/SU(2) is hyperbolic 3-space. For a reductive group G over a field k that is complete with respect to a discrete valuation (such as the p-adic numbers Qp), the affine building X of G plays the role of the symmetric space. Namely, X is a simplicial complex with an action of G(k), and G(k) preserves a CAT(0) metric on X, the analog of a metric with nonpositive curvature. The dimension of the affine building is the k-rank of G. 
For example, the building of SL(2,Qp) is a tree. Representations of reductive groups For a split reductive group G over a field k, the irreducible representations of G (as an algebraic group) are parametrized by the dominant weights, which are defined as the intersection of the weight lattice X(T) ≅ Zn with a convex cone (a Weyl chamber) in Rn. In particular, this parametrization is independent of the characteristic of k. In more detail, fix a split maximal torus and a Borel subgroup, T ⊂ B ⊂ G. Then B is the semidirect product of T with a smooth connected unipotent subgroup U. Define a highest weight vector in a representation V of G over k to be a nonzero vector v such that B maps the line spanned by v into itself. Then B acts on that line through its quotient group T, by some element λ of the weight lattice X(T). Chevalley showed that every irreducible representation of G has a unique highest weight vector up to scalars; the corresponding "highest weight" λ is dominant; and every dominant weight λ is the highest weight of a unique irreducible representation L(λ) of G, up to isomorphism.[19] There remains the problem of describing the irreducible representation with given highest weight. For k of characteristic zero, there are essentially complete answers. For a dominant weight λ, define the Schur module ∇(λ) as the k-vector space of sections of the G-equivariant line bundle on the flag manifold G/B associated to λ; this is a representation of G. For k of characteristic zero, the Borel–Weil theorem says that the irreducible representation L(λ) is isomorphic to the Schur module ∇(λ). Furthermore, the Weyl character formula gives the character (and in particular the dimension) of this representation. For a split reductive group G over a field k of positive characteristic, the situation is far more subtle, because representations of G are typically not direct sums of irreducibles. For a dominant weight λ, the irreducible representation L(λ) is the unique simple submodule (the socle) of the Schur module ∇(λ), but it need not be equal to the Schur module. The dimension and character of the Schur module are given by the Weyl character formula (as in characteristic zero), a result of George Kempf.[20] The dimensions and characters of the irreducible representations L(λ) are in general unknown, although a large body of theory has been developed to analyze these representations. One important result, due to Henning Andersen, Jens Jantzen, and Wolfgang Soergel (proving Lusztig's conjecture in that case), is that the dimension and character of L(λ) are known when the characteristic p of k is much bigger than the Coxeter number of G. Their character formula for p large is based on the Kazhdan–Lusztig polynomials, which are combinatorially complex.[21] For any prime p, Simon Riche and Geordie Williamson conjectured a formula for the irreducible characters of a reductive group in terms of the p-Kazhdan–Lusztig polynomials, which are even more complex, but at least are computable.[22] Non-split reductive groups As discussed above, the classification of split reductive groups is the same over any field. By contrast, the classification of arbitrary reductive groups can be hard, depending on the base field. Some examples among the classical groups are: • Every nondegenerate quadratic form q over a field k determines a reductive group G = SO(q). Here G is simple if q has dimension n at least 3, since $G_{\overline {k}}$ is isomorphic to SO(n) over an algebraic closure ${\overline {k}}$.
The k-rank of G is equal to the Witt index of q (the maximum dimension of an isotropic subspace over k).[23] So the simple group G is split over k if and only if q has the maximum possible Witt index, $\lfloor n/2\rfloor $. • Every central simple algebra A over k determines a reductive group G = SL(1,A), the kernel of the reduced norm on the group of units A* (as an algebraic group over k). The degree of A means the square root of the dimension of A as a k-vector space. Here G is simple if A has degree n at least 2, since $G_{\overline {k}}$ is isomorphic to SL(n) over ${\overline {k}}$. If A has index r (meaning that A is isomorphic to the matrix algebra Mn/r(D) for a division algebra D of degree r over k), then the k-rank of G is (n/r) − 1.[24] So the simple group G is split over k if and only if A is a matrix algebra over k. As a result, the problem of classifying reductive groups over k essentially includes the problem of classifying all quadratic forms over k or all central simple algebras over k. These problems are easy for k algebraically closed, and they are understood for some other fields such as number fields, but for arbitrary fields there are many open questions. A reductive group over a field k is called isotropic if it has k-rank greater than 0 (that is, if it contains a nontrivial split torus), and otherwise anisotropic. For a semisimple group G over a field k, the following conditions are equivalent: • G is isotropic (that is, G contains a copy of the multiplicative group Gm over k); • G contains a parabolic subgroup over k not equal to G; • G contains a copy of the additive group Ga over k. For k perfect, it is also equivalent to say that G(k) contains a unipotent element other than 1.[25] For a connected linear algebraic group G over a local field k of characteristic zero (such as the real numbers), the group G(k) is compact in the classical topology (based on the topology of k) if and only if G is reductive and anisotropic.[26] Example: the orthogonal group SO(p,q) over R has real rank min(p,q), and so it is anisotropic if and only if p or q is zero.[23] A reductive group G over a field k is called quasi-split if it contains a Borel subgroup over k. A split reductive group is quasi-split. If G is quasi-split over k, then any two Borel subgroups of G are conjugate by some element of G(k).[27] Example: the orthogonal group SO(p,q) over R is split if and only if |p−q| ≤ 1, and it is quasi-split if and only if |p−q| ≤ 2.[23] Structure of semisimple groups as abstract groups For a simply connected split semisimple group G over a field k, Robert Steinberg gave an explicit presentation of the abstract group G(k).[28] It is generated by copies of the additive group of k indexed by the roots of G (the root subgroups), with relations determined by the Dynkin diagram of G. For a simply connected split semisimple group G over a perfect field k, Steinberg also determined the automorphism group of the abstract group G(k). Every automorphism is the product of an inner automorphism, a diagonal automorphism (meaning conjugation by a suitable ${\overline {k}}$-point of a maximal torus), a graph automorphism (corresponding to an automorphism of the Dynkin diagram), and a field automorphism (coming from an automorphism of the field k).[29] For a k-simple algebraic group G, Tits's simplicity theorem says that the abstract group G(k) is close to being simple, under mild assumptions. Namely, suppose that G is isotropic over k, and suppose that the field k has at least 4 elements. 
Let G(k)+ be the subgroup of the abstract group G(k) generated by k-points of copies of the additive group Ga over k contained in G. (By the assumption that G is isotropic over k, the group G(k)+ is nontrivial, and even Zariski dense in G if k is infinite.) Then the quotient group of G(k)+ by its center is simple (as an abstract group).[30] The proof uses Jacques Tits's machinery of BN-pairs. The exceptions for fields of order 2 or 3 are well understood. For k = F2, Tits's simplicity theorem remains valid except when G is split of type A1, B2, or G2, or non-split (that is, unitary) of type A2. For k = F3, the theorem holds except for G of type A1.[31] For a k-simple group G, in order to understand the whole group G(k), one can consider the Whitehead group W(k,G)=G(k)/G(k)+. For G simply connected and quasi-split, the Whitehead group is trivial, and so the whole group G(k) is simple modulo its center.[32] More generally, the Kneser–Tits problem asks for which isotropic k-simple groups the Whitehead group is trivial. In all known examples, W(k,G) is abelian. For an anisotropic k-simple group G, the abstract group G(k) can be far from simple. For example, let D be a division algebra with center a p-adic field k. Suppose that the dimension of D over k is finite and greater than 1. Then G = SL(1,D) is an anisotropic k-simple group. As mentioned above, G(k) is compact in the classical topology. Since it is also totally disconnected, G(k) is a profinite group (but not finite). As a result, G(k) contains infinitely many normal subgroups of finite index.[33] Lattices and arithmetic groups Let G be a linear algebraic group over the rational numbers Q. Then G can be extended to an affine group scheme G over Z, and this determines an abstract group G(Z). An arithmetic group means any subgroup of G(Q) that is commensurable with G(Z). (Arithmeticity of a subgroup of G(Q) is independent of the choice of Z-structure.) For example, SL(n,Z) is an arithmetic subgroup of SL(n,Q). For a Lie group G, a lattice in G means a discrete subgroup Γ of G such that the manifold G/Γ has finite volume (with respect to a G-invariant measure). For example, a discrete subgroup Γ is a lattice if G/Γ is compact. The Margulis arithmeticity theorem says, in particular: for a simple Lie group G of real rank at least 2, every lattice in G is an arithmetic group. The Galois action on the Dynkin diagram Main article: Tits index In seeking to classify reductive groups which need not be split, one step is the Tits index, which reduces the problem to the case of anisotropic groups. This reduction generalizes several fundamental theorems in algebra. For example, Witt's decomposition theorem says that a nondegenerate quadratic form over a field is determined up to isomorphism by its Witt index together with its anisotropic kernel. Likewise, the Artin–Wedderburn theorem reduces the classification of central simple algebras over a field to the case of division algebras. Generalizing these results, Tits showed that a reductive group over a field k is determined up to isomorphism by its Tits index together with its anisotropic kernel, an associated anisotropic semisimple k-group. For a reductive group G over a field k, the absolute Galois group Gal(ks/k) acts (continuously) on the "absolute" Dynkin diagram of G, that is, the Dynkin diagram of G over a separable closure ks (which is also the Dynkin diagram of G over an algebraic closure ${\overline {k}}$). 
The Tits index of G consists of the root datum of $G_{k_{s}}$, the Galois action on its Dynkin diagram, and a Galois-invariant subset of the vertices of the Dynkin diagram. Traditionally, the Tits index is drawn by circling the Galois orbits in the given subset.
There is a full classification of quasi-split groups in these terms. Namely, for each action of the absolute Galois group of a field k on a Dynkin diagram, there is a unique simply connected semisimple quasi-split group H over k with the given action. (For a quasi-split group, every Galois orbit in the Dynkin diagram is circled.) Moreover, any other simply connected semisimple group G over k with the given action is an inner form of the quasi-split group H, meaning that G is the group associated to an element of the Galois cohomology set $H^{1}(k,H/Z)$, where Z is the center of H. In other words, G is the twist of H associated to some H/Z-torsor over k, as discussed in the next section.
Example: Let q be a nondegenerate quadratic form of even dimension 2n over a field k of characteristic not 2, with n ≥ 5. (These restrictions can be avoided.) Let G be the simple group SO(q) over k. The absolute Dynkin diagram of G is of type $D_{n}$, and so its automorphism group is of order 2, switching the two "legs" of the $D_{n}$ diagram. The action of the absolute Galois group of k on the Dynkin diagram is trivial if and only if the signed discriminant d of q in $k^{*}/(k^{*})^{2}$ is trivial. If d is nontrivial, then it is encoded in the Galois action on the Dynkin diagram: the index-2 subgroup of the Galois group that acts as the identity is $\operatorname {Gal} (k_{s}/k({\sqrt {d}}))\subset \operatorname {Gal} (k_{s}/k)$. The group G is split if and only if q has Witt index n, the maximum possible, and G is quasi-split if and only if q has Witt index at least n − 1.[23]
Torsors and the Hasse principle
A torsor for an affine group scheme G over a field k means an affine scheme X over k with an action of G such that $X_{\overline {k}}$ is isomorphic to $G_{\overline {k}}$ with the action of $G_{\overline {k}}$ on itself by left translation. A torsor can also be viewed as a principal G-bundle over k with respect to the fppf topology on k, or the étale topology if G is smooth over k. The pointed set of isomorphism classes of G-torsors over k is called $H^{1}(k,G)$, in the language of Galois cohomology.
Torsors arise whenever one seeks to classify forms of a given algebraic object Y over a field k, meaning objects X over k which become isomorphic to Y over the algebraic closure of k. Namely, such forms (up to isomorphism) are in one-to-one correspondence with the set $H^{1}(k,\operatorname {Aut} (Y))$. For example, (nondegenerate) quadratic forms of dimension n over k are classified by $H^{1}(k,O(n))$, and central simple algebras of degree n over k are classified by $H^{1}(k,PGL(n))$. Also, k-forms of a given algebraic group G (sometimes called "twists" of G) are classified by $H^{1}(k,\operatorname {Aut} (G))$. These problems motivate the systematic study of G-torsors, especially for reductive groups G.
When possible, one hopes to classify G-torsors using cohomological invariants, which are invariants taking values in Galois cohomology with abelian coefficient groups M, $H^{a}(k,M)$. In this direction, Steinberg proved Serre's "Conjecture I": for a connected linear algebraic group G over a perfect field of cohomological dimension at most 1, $H^{1}(k,G)=1$.[34] (The case of a finite field was known earlier, as Lang's theorem.) It follows, for example, that every reductive group over a finite field is quasi-split.
Serre's Conjecture II predicts that for a simply connected semisimple group G over a field of cohomological dimension at most 2, $H^{1}(k,G)=1$. The conjecture is known for a totally imaginary number field (which has cohomological dimension 2). More generally, for any number field k, Martin Kneser, Günter Harder and Vladimir Chernousov (1989) proved the Hasse principle: for a simply connected semisimple group G over k, the map $H^{1}(k,G)\to \prod _{v}H^{1}(k_{v},G)$ is bijective.[35] Here v runs over all places of k, and $k_{v}$ is the corresponding local field (possibly R or C). Moreover, the pointed set $H^{1}(k_{v},G)$ is trivial for every nonarchimedean local field $k_{v}$, and so only the real places of k matter. The analogous result for a global field k of positive characteristic was proved earlier by Harder (1975): for every simply connected semisimple group G over k, $H^{1}(k,G)$ is trivial (since k has no real places).[36]
In the slightly different case of an adjoint group G over a number field k, the Hasse principle holds in a weaker form: the natural map $H^{1}(k,G)\to \prod _{v}H^{1}(k_{v},G)$ is injective.[37] For G = PGL(n), this amounts to the Albert–Brauer–Hasse–Noether theorem, saying that a central simple algebra over a number field is determined by its local invariants.
Building on the Hasse principle, the classification of semisimple groups over number fields is well understood. For example, there are exactly three Q-forms of the exceptional group $E_{8}$, corresponding to the three real forms of $E_{8}$.
See also
• The groups of Lie type are the finite simple groups constructed from simple algebraic groups over finite fields.
• Generalized flag variety, Bruhat decomposition, Schubert variety, Schubert calculus
• Schur algebra, Deligne–Lusztig theory
• Real form (Lie theory)
• Weil's conjecture on Tamagawa numbers
• Langlands classification, Langlands dual group, Langlands program, geometric Langlands program
• Special group, essential dimension
• Geometric invariant theory, Luna's slice theorem, Haboush's theorem
• Radical of an algebraic group
Notes
1. SGA 3 (2011), v. 3, Définition XIX.1.6.1.
2. Milne (2017), Proposition 21.60.
3. Milne. Linear Algebraic Groups (PDF). pp. 381–394.
4. Conrad (2014), after Proposition 5.1.17.
5. Borel (1991), 18.2(i).
6. Milne (2017), Theorem 22.42.
7. Milne (2017), Corollary 22.43.
8. Demazure & Gabriel (1970), Théorème IV.3.3.6.
9. Milne (2017), Theorem 12.12.
10. Milne (2017), Theorem 21.11.
11. Milne (2017), Corollary 21.12.
12. Milne (2017), Proposition 17.53.
13. Borel (1991), Proposition 21.12.
14. Chevalley (2005); Springer (1998), 9.6.2 and 10.1.1.
15. Milne (2017), Theorems 23.25 and 23.55.
16. Milne (2017), Corollary 23.47.
17. SGA 3 (2011), v. 3, Théorème XXV.1.1; Conrad (2014), Theorems 6.1.16 and 6.1.17.
18. Springer (1979), section 5.1.
19. Milne (2017), Theorem 22.2.
20. Jantzen (2003), Proposition II.4.5 and Corollary II.5.11.
21. Jantzen (2003), section II.8.22.
22. Riche & Williamson (2018), section 1.8.
23. Borel (1991), section 23.4.
24. Borel (1991), section 23.2.
25. Borel & Tits (1971), Corollaire 3.8.
26. Platonov & Rapinchuk (1994), Theorem 3.1.
27. Borel (1991), Theorem 20.9(i).
28. Steinberg (2016), Theorem 8.
29. Steinberg (2016), Theorem 30.
30. Tits (1964), Main Theorem; Gille (2009), Introduction.
31. Tits (1964), section 1.2.
32. Gille (2009), Théorème 6.1.
33. Platonov & Rapinchuk (1994), section 9.1.
34. Steinberg (1965), Theorem 1.9.
35. Platonov & Rapinchuk (1994), Theorem 6.6.
36. Platonov & Rapinchuk (1994), section 6.8.
37. Platonov & Rapinchuk (1994), Theorem 6.4.
References
• Borel, Armand (1991) [1969], Linear Algebraic Groups, Graduate Texts in Mathematics, vol. 126 (2nd ed.), New York: Springer Nature, doi:10.1007/978-1-4612-0941-6, ISBN 0-387-97370-2, MR 1102012
• Borel, Armand; Tits, Jacques (1971), "Éléments unipotents et sous-groupes paraboliques de groupes réductifs. I.", Inventiones Mathematicae, 12 (2): 95–104, Bibcode:1971InMat..12...95B, doi:10.1007/BF01404653, MR 0294349, S2CID 119837998
• Chevalley, Claude (2005) [1958], Cartier, P. (ed.), Classification des groupes algébriques semi-simples, Collected Works, Vol. 3, Springer Nature, ISBN 3-540-23031-9, MR 2124841
• Conrad, Brian (2014), "Reductive group schemes" (PDF), Autour des schémas en groupes, vol. 1, Paris: Société Mathématique de France, pp. 93–444, ISBN 978-2-85629-794-0, MR 3309122
• Demazure, Michel; Gabriel, Pierre (1970), Groupes algébriques. Tome I: Géométrie algébrique, généralités, groupes commutatifs, Paris: Masson, ISBN 978-2225616662, MR 0302656
• Demazure, M.; Grothendieck, A. (2011) [1970]. Gille, P.; Polo, P. (eds.). Schémas en groupes (SGA 3), I: Propriétés générales des schémas en groupes. Société Mathématique de France. ISBN 978-2-85629-323-2. MR 2867621. Revised and annotated edition of the 1970 original.
• Demazure, M.; Grothendieck, A. (1970). Schémas en groupes (SGA 3), II: Groupes de type multiplicatif, et structure des schémas en groupes généraux. Lecture Notes in Mathematics. Vol. 152. Berlin; New York: Springer-Verlag. doi:10.1007/BFb0059005. ISBN 978-3540051800. MR 0274459.
• Demazure, M.; Grothendieck, A. (2011) [1970]. Gille, P.; Polo, P. (eds.). Schémas en groupes (SGA 3), III: Structure des schémas en groupes réductifs. Société Mathématique de France. ISBN 978-2-85629-324-9. MR 2867622. Revised and annotated edition of the 1970 original.
• Gille, Philippe (2009), "Le problème de Kneser–Tits" (PDF), Séminaire Bourbaki. Vol. 2007/2008, Astérisque, vol. 326, Société Mathématique de France, pp. 39–81, ISBN 978-285629-269-3, MR 2605318
• Jantzen, Jens Carsten (2003) [1987], Representations of Algebraic Groups (2nd ed.), American Mathematical Society, ISBN 978-0-8218-3527-2, MR 2015057
• Milne, J. S. (2017), Algebraic Groups: The Theory of Group Schemes of Finite Type over a Field, Cambridge University Press, doi:10.1017/9781316711736, ISBN 978-1107167483, MR 3729270
• Platonov, Vladimir; Rapinchuk, Andrei (1994), Algebraic Groups and Number Theory, Academic Press, ISBN 0-12-558180-7, MR 1278263
• V.L. Popov (2001) [1994], "Reductive group", Encyclopedia of Mathematics, EMS Press
• Riche, Simon; Williamson, Geordie (2018), Tilting Modules and the p-Canonical Basis, Astérisque, vol. 397, Société Mathématique de France, arXiv:1512.08296, Bibcode:2015arXiv151208296R, ISBN 978-2-85629-880-0
• Springer, Tonny A. (1979), "Reductive groups", Automorphic Forms, Representations, and L-functions, vol. 1, American Mathematical Society, pp. 3–27, ISBN 0-8218-3347-2, MR 0546587
• Springer, Tonny A. (1998), Linear Algebraic Groups, Progress in Mathematics, vol. 9 (2nd ed.), Boston, MA: Birkhäuser Boston, doi:10.1007/978-0-8176-4840-4, ISBN 978-0-8176-4021-7, MR 1642713
• Steinberg, Robert (1965), "Regular elements of semisimple algebraic groups", Publications Mathématiques de l'IHÉS, 25: 49–80, doi:10.1007/bf02684397, MR 0180554, S2CID 55638217
• Steinberg, Robert (2016) [1968], Lectures on Chevalley Groups, University Lecture Series, vol. 66, American Mathematical Society, doi:10.1090/ulect/066, ISBN 978-1-4704-3105-1, MR 3616493
• Tits, Jacques (1964), "Algebraic and abstract simple groups", Annals of Mathematics, 80 (2): 313–329, doi:10.2307/1970394, JSTOR 1970394, MR 0164968
External links
• Demazure, M.; Grothendieck, A., Gille, P.; Polo, P. (eds.), Schémas en groupes (SGA 3), II: Groupes de type multiplicatif, et structure des schémas en groupes généraux. Revised and annotated edition of the 1970 original.
Reductive dual pair
In the mathematical field of representation theory, a reductive dual pair is a pair of subgroups (G, G′) of the isometry group Sp(W) of a symplectic vector space W, such that G is the centralizer of G′ in Sp(W) and vice versa, and these groups act reductively on W. Somewhat more loosely, one speaks of a dual pair whenever two groups are the mutual centralizers in a larger group, which is frequently a general linear group. The concept was introduced by Roger Howe in Howe (1979). Its strong ties with Classical Invariant Theory are discussed in Howe (1989a).
Examples
• The full symplectic group G = Sp(W) and the two-element group G′, the center of Sp(W), form a reductive dual pair. The double centralizer property is clear from the way these groups were defined: the centralizer of the group G in G is its center, and the centralizer of the center of any group is the group itself. The group G′ consists of the identity transformation and its negative, and can be interpreted as the orthogonal group of a one-dimensional vector space. It emerges from the subsequent development of the theory that this pair is a first instance of a general family of dual pairs consisting of a symplectic group and an orthogonal group, which are known as type I irreducible reductive dual pairs.
• Let X be an n-dimensional vector space, Y be its dual, and W be the direct sum of X and Y. Then W can be made into a symplectic vector space in a natural way, so that (X, Y) is its lagrangian polarization. The group G is the general linear group GL(X), which acts tautologically on X and contragrediently on Y. The centralizer of G in the symplectic group is the group G′ consisting of linear operators on W that act on X by multiplication by a non-zero scalar λ and on Y by scalar multiplication by its inverse $\lambda ^{-1}$. Then the centralizer of G′ is G, these two groups act completely reducibly on W, and hence form a reductive dual pair. The group G′ can be interpreted as the general linear group of a one-dimensional vector space. This pair is a member of a family of dual pairs consisting of general linear groups known as type II irreducible reductive dual pairs.
Structure theory and classification
The notion of a reductive dual pair makes sense over any field F, which we assume to be fixed throughout. Thus W is a symplectic vector space over F. If $W_{1}$ and $W_{2}$ are two symplectic vector spaces and $(G_{1},G'_{1})$, $(G_{2},G'_{2})$ are two reductive dual pairs in the corresponding symplectic groups, then we may form a new symplectic vector space $W=W_{1}\oplus W_{2}$ and a pair of groups $G=G_{1}\times G_{2}$, $G'=G'_{1}\times G'_{2}$ acting on W by isometries. It turns out that (G, G′) is a reductive dual pair. A reductive dual pair is called reducible if it can be obtained in this fashion from smaller groups, and irreducible otherwise. A reducible pair can be decomposed into a direct product of irreducible ones, and for many purposes, it is enough to restrict one's attention to the irreducible case.
Several classes of reductive dual pairs had appeared earlier in the work of André Weil. Roger Howe proved a classification theorem, which states that in the irreducible case, those pairs exhaust all possibilities. An irreducible reductive dual pair (G, G′) in Sp(W) is said to be of type II if there is a lagrangian subspace X in W that is invariant under both G and G′, and of type I otherwise.
An archetypical irreducible reductive dual pair of type II consists of a pair of general linear groups and arises as follows.
Let U and V be two vector spaces over F, $X=U\otimes _{F}V$ be their tensor product, and $Y=\operatorname {Hom} _{F}(X,F)$ its dual. Then the direct sum $W=X\oplus Y$ can be endowed with a symplectic form such that X and Y are lagrangian subspaces, and the restriction of the symplectic form to $X\times Y\subset W\times W$ coincides with the pairing between the vector space X and its dual Y. If G = GL(U) and G′ = GL(V), then both these groups act linearly on X and Y, the actions preserve the symplectic form on W, and (G, G′) is an irreducible reductive dual pair. Note that X is an invariant lagrangian subspace, hence this dual pair is of type II.
An archetypical irreducible reductive dual pair of type I consists of an orthogonal group and a symplectic group and is constructed analogously. Let U be an orthogonal vector space and V be a symplectic vector space over F, and $W=U\otimes _{F}V$ be their tensor product. The key observation is that W is a symplectic vector space whose bilinear form is obtained from the product of the forms on the tensor factors. Moreover, if G = O(U) and G′ = Sp(V) are the isometry groups of U and V, then they act on W in a natural way, these actions are symplectic, and (G, G′) is an irreducible reductive dual pair of type I.
These two constructions produce all irreducible reductive dual pairs over an algebraically closed field F, such as the field C of complex numbers. In general, one can replace vector spaces over F by vector spaces over a division algebra D over F, and proceed similarly to the above to construct an irreducible reductive dual pair of type II. For type I, one starts with a division algebra D with involution τ, a hermitian form on U, and a skew-hermitian form on V (both of them non-degenerate), and forms their tensor product over D, $W=U\otimes _{D}V$. Then W is naturally endowed with a structure of a symplectic vector space over F, the isometry groups of U and V act symplectically on W and form an irreducible reductive dual pair of type I. Roger Howe proved that, up to an isomorphism, any irreducible dual pair arises in this fashion. An explicit list for the case F = R appears in Howe (1989b).
See also
• Howe correspondence between representations of the members of a reductive dual pair.
• Heisenberg group
• Metaplectic group
References
• Howe, Roger E. (1979), "θ-series and invariant theory" (PDF), in Borel, Armand; Casselman, W. (eds.), Automorphic forms, representations and L-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, Providence, R.I.: American Mathematical Society, pp. 275–285, ISBN 978-0-8218-1435-2, MR 0546602
• Howe, Roger E. (1989a), "Remarks on classical invariant theory", Transactions of the American Mathematical Society, American Mathematical Society, 313 (2): 539–570, doi:10.2307/2001418, JSTOR 2001418.
• Howe, Roger E. (1989b), "Transcending classical invariant theory", Journal of the American Mathematical Society, American Mathematical Society, 2 (3): 535–552, doi:10.2307/1990942, JSTOR 1990942.
• Goodman, Roe; Wallach, Nolan R. (1998), Representations and Invariants of the Classical Groups, Cambridge University Press, ISBN 0-521-66348-2.
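The type II construction above is concrete enough to check numerically. Below is a minimal sketch (an illustration under our own conventions, not taken from the references above): the pair (g, h) in GL(U) × GL(V) acts on X = U ⊗ V by the Kronecker product and on Y = X* contragrediently, and this action preserves the standard symplectic form on W = X ⊕ Y.

```python
# Numerical sanity check of the type II dual pair construction (illustrative):
# <(x, y), (x', y')> = y'(x) - y(x') is the symplectic form on W = X + Y.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=(2, 2))                     # generic element of GL(U), dim U = 2
h = rng.normal(size=(3, 3))                     # generic element of GL(V), dim V = 3
A = np.kron(g, h)                               # action on X = U (tensor) V, dim X = 6
B = np.linalg.inv(A).T                          # contragredient action on Y = X*
M = np.block([[A, np.zeros((6, 6))],
              [np.zeros((6, 6)), B]])           # combined action on W = X + Y
J = np.block([[np.zeros((6, 6)), np.eye(6)],
              [-np.eye(6), np.zeros((6, 6))]])  # matrix of the symplectic form
assert np.allclose(M.T @ J @ M, J)              # the action is by isometries of W
```

The check works because $M^{T}JM=J$ reduces to $A^{T}A^{-T}=I$: acting contragrediently on the dual is exactly what is needed to preserve the pairing between X and Y.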
Redundant proof
In mathematical logic, a redundant proof is a proof that has a subset that is a shorter proof of the same result. In other words, a proof is redundant if it has more proof steps than are actually necessary to prove the result. Formally, a proof $\psi $ of $\kappa $ is considered redundant if there exists another proof $\psi ^{\prime }$ of $\kappa ^{\prime }$ such that $\kappa ^{\prime }\subseteq \kappa $ (i.e. $\kappa ^{\prime }\;{\text{subsumes}}\;\kappa $) and $|\psi ^{\prime }|<|\psi |$ where $|\varphi |$ is the number of nodes in $\varphi $.[1]
Local redundancy
A proof containing a subproof of the shapes (here omitted pivots indicate that the resolvents must be uniquely defined) $(\eta \odot \eta _{1})\odot (\eta \odot \eta _{2}){\text{ or }}\eta \odot (\eta _{1}\odot (\eta \odot \eta _{2}))$ is locally redundant. Indeed, both of these subproofs can be equivalently replaced by the shorter subproof $\eta \odot (\eta _{1}\odot \eta _{2})$. In the case of local redundancy, the pairs of redundant inferences having the same pivot occur close to each other in the proof. However, redundant inferences can also occur far apart in the proof. The following definition generalizes local redundancy by considering inferences with the same pivot that occur within different contexts. We write $\psi \left[\eta \right]$ to denote a proof-context $\psi \left[-\right]$ with a single placeholder replaced by the subproof $\eta $.
Global redundancy
A proof $\psi [\psi _{1}[\eta \odot _{p}\eta _{1}]\odot \psi _{2}[\eta \odot _{p}\eta _{2}]]{\text{ or }}\psi [\psi _{1}[\eta \odot _{p}(\eta _{1}\odot \psi _{2}[\eta \odot _{p}\eta _{2}])]]$ is potentially (globally) redundant. Furthermore, it is (globally) redundant if it can be rewritten to one of the following shorter proofs: $\psi [\eta \odot _{p}(\psi _{1}[\eta _{1}]\odot \psi _{2}[\eta _{2}])]{\text{ or }}\eta \odot _{p}\psi [\psi _{1}[\eta _{1}]\odot \psi _{2}[\eta _{2}]]{\text{ or }}\psi [\psi _{1}[\eta _{1}]\odot \psi _{2}[\eta _{2}]].$
Example
The proof ${\cfrac {{\cfrac {{\cfrac {\eta :\,p,q\,\,\,\,\eta _{1}:\,\neg p,r}{q,r}}p\,\,\,\,\,\,{\begin{array}{c}\\\eta _{3}:\,\neg q\end{array}}}{r}}q\,\,\,\,\,\,\,\,\,\,\,\,\,{\cfrac {{\cfrac {\eta \,\,\,\,\,\,\,\,\,\,\,\,\,\eta _{2}:\,\neg p,s,\neg r}{q,s,\neg r}}p\,\,\,\,{\begin{array}{c}\\\eta _{3}\end{array}}}{s,\neg r}}q}{\psi :\,s}}r$ is potentially globally redundant, as it is an instance of the first pattern in the definition: $((\eta \odot _{p}\eta _{1})\odot \eta _{3})\odot ((\eta \odot _{p}\eta _{2})\odot \eta _{3}).$
• The pattern is $\psi [\psi _{1}[\eta \odot _{p}\eta _{1}]\odot \psi _{2}[\eta \odot _{p}\eta _{2}]]$
• $\psi _{1}[-]=\psi _{2}[-]=\_\odot \eta _{3}{\text{ and }}\psi [-]=\_$
But it is not globally redundant because the replacement terms according to the definition contain $\psi _{1}[\eta _{1}]\odot \psi _{2}[\eta _{2}]$ in all the cases, and $\psi _{1}[\eta _{1}]\odot \psi _{2}[\eta _{2}]=(\eta _{1}\odot \eta _{3})\odot (\eta _{2}\odot \eta _{3})$ does not correspond to a proof. In particular, neither $\eta _{1}$ nor $\eta _{2}$ can be resolved with $\eta _{3}$, as they do not contain the literal $q$.
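The local-redundancy rewrite can be made concrete. The following is a minimal Python sketch (illustrative only, not the algorithm of the cited paper): literals are nonzero integers (negation = sign flip), clauses are frozensets, a proof is a binary tree of resolutions, and the rewrite replaces $(\eta \odot _{p}\eta _{1})\odot _{q}(\eta \odot _{p}\eta _{2})$ by $\eta \odot _{p}(\eta _{1}\odot _{q}\eta _{2})$ at the root, assuming the outer pivot q stems from $\eta _{1}$ and $\eta _{2}$.

```python
def resolve(c1, c2, p):
    """Resolvent of clauses c1 and c2 on pivot p (requires p in c1, -p in c2)."""
    assert p in c1 and -p in c2
    return (c1 - {p}) | (c2 - {-p})

class Node:
    def __init__(self, clause, left=None, right=None, pivot=None):
        self.clause, self.left, self.right, self.pivot = clause, left, right, pivot

def leaf(*lits):
    return Node(frozenset(lits))

def res(n1, n2, p):
    return Node(resolve(n1.clause, n2.clause, p), n1, n2, p)

def compress_local(node):
    """Rewrite (eta (.)p eta1) (.)q (eta (.)p eta2) into eta (.)p (eta1 (.)q eta2),
    saving one inference, when the two premises share eta and the pivot p."""
    l, r = node.left, node.right
    if l and r and l.pivot is not None and l.pivot == r.pivot and l.left is r.left:
        return res(l.left, res(l.right, r.right, node.pivot), l.pivot)
    return node

# Example: eta = {p, q}, eta1 = {-p, r}, eta2 = {-p, -r} with p=1, q=2, r=3.
eta, eta1, eta2 = leaf(1, 2), leaf(-1, 3), leaf(-1, -3)
proof = res(res(eta, eta1, 1), res(eta, eta2, 1), 3)   # three inferences
short = compress_local(proof)                          # two inferences
assert short.clause == proof.clause == frozenset({2})  # same conclusion
```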
The second pattern of potentially globally redundant proofs appearing in the global redundancy definition is related to the well-known notion of regularity. Informally, a proof is irregular if there is a path from a node to the root of the proof such that a literal is used more than once as a pivot in this path.
Notes
1. Fontaine, Pascal; Merz, Stephan; Woltzenlogel Paleo, Bruno. Compression of Propositional Resolution Proofs via Partial Regularization. 23rd International Conference on Automated Deduction, 2011.
Reeb foliation
In mathematics, the Reeb foliation is a particular foliation of the 3-sphere, introduced by the French mathematician Georges Reeb (1920–1993). It is based on dividing the sphere into two solid tori, along a 2-torus: see Clifford torus. Each of the solid tori is then foliated internally, in codimension 1, and the dividing torus surface forms one more leaf. By Novikov's compact leaf theorem, every smooth foliation of the 3-sphere includes a compact torus leaf, bounding a solid torus foliated in the same way.
References
• Reeb, Georges (1952). "Sur certaines propriétés topologiques des variétés feuillétées" [On certain topological properties of foliated manifolds]. Actualités Sci. Indust. (in French). Paris: Hermann. 1183.
• Candel, Alberto; Conlon, Lawrence (2000). Foliations. American Mathematical Society. p. 93. ISBN 0-8218-0809-5.
• Moerdijk, Ieke; Mrčun, J. (2003). Introduction to Foliations and Lie Groupoids. Cambridge Studies in Advanced Mathematics. Vol. 91. Cambridge University Press. p. 8. ISBN 0-521-83197-0.
External links
• Weisstein, Eric W. "Reeb Foliation". MathWorld.
Reeb graph
A Reeb graph[1] (named after Georges Reeb by René Thom) is a mathematical object reflecting the evolution of the level sets of a real-valued function on a manifold.[2] According to [3] a similar concept was introduced by G.M. Adelson-Velskii and A.S. Kronrod and applied to analysis of Hilbert's thirteenth problem.[4] Proposed by G. Reeb as a tool in Morse theory,[5] Reeb graphs are the natural tool to study multivalued functional relationships between 2D scalar fields $\psi $, $\lambda $, and $\phi $ arising from the conditions $\nabla \psi =\lambda \nabla \phi $ and $\lambda \neq 0$, because these relationships are single-valued when restricted to a region associated with an individual edge of the Reeb graph. This general principle was first used to study neutral surfaces in oceanography.[6]
Reeb graphs have also found a wide variety of applications in computational geometry and computer graphics,[1][7] including computer aided geometric design, topology-based shape matching,[8][9][10] topological data analysis,[11] topological simplification and cleaning, surface segmentation[12] and parametrization, efficient computation of level sets, neuroscience,[13] and geometrical thermodynamics.[3] In a special case of a function on a flat space (technically a simply connected domain), the Reeb graph forms a polytree and is also called a contour tree.[14]
Level set graphs help statistical inference related to estimating probability density functions and regression functions, and they can be used in cluster analysis and function optimization, among other things.[15]
Formal definition
Given a topological space X and a continuous function f: X → R, define an equivalence relation ∼ on X where p ∼ q whenever p and q belong to the same connected component of a single level set $f^{-1}(c)$ for some real c. The Reeb graph is the quotient space X/∼ endowed with the quotient topology.
Description for Morse functions
If f is a Morse function with distinct critical values, the Reeb graph can be described more explicitly. Its nodes, or vertices, correspond to the critical level sets $f^{-1}(c)$. The pattern in which the arcs, or edges, meet at the nodes/vertices reflects the change in topology of the level set $f^{-1}(t)$ as t passes through the critical value c. For example, if c is a minimum or a maximum of f, a component is created or destroyed; consequently, an arc originates or terminates at the corresponding node, which has degree 1. If c is a saddle point of index 1 and two components of $f^{-1}(t)$ merge at t = c as t increases, the corresponding vertex of the Reeb graph has degree 3 and looks like the letter "Y"; the same reasoning applies if the index of c is dim X − 1 and a component of $f^{-1}(c)$ splits into two.
References
1. Y. Shinagawa, T.L. Kunii, and Y.L. Kergosien, 1991. Surface coding based on Morse theory. IEEE Computer Graphics and Applications, 11(5), pp. 66–78.
2. Harish Doraiswamy, Vijay Natarajan, Efficient algorithms for computing Reeb graphs, Computational Geometry 42 (2009) 606–616.
3. Gorban, Alexander N. (2013). "Thermodynamic Tree: The Space of Admissible Paths". SIAM Journal on Applied Dynamical Systems. 12 (1): 246–278. arXiv:1201.6315. doi:10.1137/120866919. S2CID 5706376.
4. G. M. Adelson-Velskii, A. S. Kronrod, About level sets of continuous functions with partial derivatives, Dokl. Akad. Nauk SSSR, 49 (4) (1945), pp. 239–241.
5. G. Reeb, Sur les points singuliers d'une forme de Pfaff complètement intégrable ou d'une fonction numérique, C. R. Acad. Sci. Paris 222 (1946) 847–849.
6. Stanley, Geoffrey J. (June 2019). "Neutral surface topology". Ocean Modelling. 138: 88–106. arXiv:1903.10091. Bibcode:2019OcMod.138...88S. doi:10.1016/j.ocemod.2019.01.008. S2CID 85502820.
7. Y. Shinagawa and T.L. Kunii, 1991. Constructing a Reeb graph automatically from cross sections. IEEE Computer Graphics and Applications, 11(6), pp. 44–51.
8. Pascucci, Valerio; Scorzelli, Giorgio; Bremer, Peer-Timo; Mascarenhas, Ajith (2007). "Robust On-line Computation of Reeb Graphs: Simplicity and Speed" (PDF). ACM Transactions on Graphics. 26 (3): 58.1–58.9. doi:10.1145/1276377.1276449.
9. M. Hilaga, Y. Shinagawa, T. Kohmura and T.L. Kunii, 2001, August. Topology matching for fully automatic similarity estimation of 3D shapes. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (pp. 203–212). ACM.
10. Tung, Tony; Schmitt, Francis (2005). "The Augmented Multiresolution Reeb Graph Approach for Content-Based Retrieval of 3D Shapes". International Journal of Shape Modeling. 11 (1): 91–120. doi:10.1142/S0218654305000748.
11. "the Topology ToolKit".
12. Hajij, Mustafa; Rosen, Paul (2020). "An Efficient Data Retrieval Parallel Reeb Graph Algorithm". Algorithms. 13 (10): 258. doi:10.3390/a13100258.
13. Shailja, S; Zhang, Angela; Manjunath, B. S. (2021). "A Computational Geometry Approach for Modeling Neuronal Fiber Pathways". Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science. 12908: 175–185. doi:10.1007/978-3-030-87237-3_17. ISBN 978-3-030-87236-6. PMC 8560085. PMID 34729555.
14. Carr, Hamish; Snoeyink, Jack; Axen, Ulrike (2000), "Computing contour trees in all dimensions", Proc. 11th ACM-SIAM Symposium on Discrete Algorithms (SODA 2000), pp. 918–926, ISBN 9780898714531.
15. Klemelä, Jussi (2018). "Level set tree methods". Wiley Interdisciplinary Reviews: Computational Statistics. 10 (5): e1436. doi:10.1002/wics.1436. S2CID 58864566.
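To make the sweep idea behind contour-tree algorithms such as that of Carr, Snoeyink and Axen [14] concrete, here is a minimal union-find sketch (an illustration, not the published algorithm): it computes only the "join tree" half of the structure, recording how connected components of the sublevel sets $f^{-1}((-\infty ,t])$ of a function on a graph are born at local minima and merge at saddles as t increases. Vertex values are assumed distinct.

```python
def join_tree_events(values, edges):
    """values: {vertex: f(vertex)}; edges: iterable of vertex pairs."""
    neighbors = {v: set() for v in values}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    parent = {}                          # union-find forest over swept vertices
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    events = []
    for v in sorted(values, key=values.get):      # sweep by increasing f
        parent[v] = v
        roots = {find(w) for w in neighbors[v] if w in parent and w != v}
        if not roots:
            events.append(('component born', v))      # v is a local minimum
        elif len(roots) > 1:
            events.append(('components merge', v))    # v is a join saddle
        for c in roots:
            parent[c] = v                # v now represents the merged component
    return events

# Height function on a "W"-shaped path: two of the three minima merge at b,
# the rest merge at d.
vals = {'a': 0.0, 'b': 2.0, 'c': 1.0, 'd': 3.0, 'e': 0.5}
edgs = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'e')]
print(join_tree_events(vals, edgs))
```

Running the same sweep on −f gives the split tree; combining the two yields the contour tree of a simply connected domain.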
Reeb sphere theorem
In mathematics, the Reeb sphere theorem, named after Georges Reeb, states that a closed oriented connected manifold $M^{n}$ that admits a singular foliation having only centers is homeomorphic to the sphere $S^{n}$, and the foliation has exactly two singularities.
Morse foliation
A singularity of a foliation F is of Morse type if in its small neighborhood all leaves of the foliation are level sets of a Morse function, the singularity being a critical point of the function. The singularity is a center if it is a local extremum of the function; otherwise, the singularity is a saddle.
The relation between the number of centers c and the number of saddles $s$, specifically $c-s$, is tightly connected with the manifold topology. We denote $\operatorname {ind} p=\min(k,n-k)$, the index of a singularity $p$, where k is the index of the corresponding critical point of a Morse function. In particular, a center has index 0, and the index of a saddle is at least 1.
A Morse foliation F on a manifold M is a singular transversely oriented codimension one foliation of class $C^{2}$ with isolated singularities such that:
• each singularity of F is of Morse type,
• each singular leaf L contains a unique singularity p; in addition, if $\operatorname {ind} p=1$ then $L\setminus p$ is not connected.
Reeb sphere theorem
This is the case $c>s=0$, the case without saddles.
Theorem:[1] Let $M^{n}$ be a closed oriented connected manifold of dimension $n\geq 2$. Assume that $M^{n}$ admits a $C^{1}$-transversely oriented codimension one foliation $F$ with a nonempty set of singularities, all of them centers. Then the singular set of $F$ consists of two points and $M^{n}$ is homeomorphic to the sphere $S^{n}$.
It is a consequence of the Reeb stability theorem.
Generalization
The more general case is $c>s\geq 0.$
In 1978, Edward Wagneur generalized the Reeb sphere theorem to Morse foliations with saddles. He showed that the number of centers cannot be much larger than the number of saddles; notably, $c\leq s+2$. So there are exactly two cases when $c>s$: (1) $c=s+2,$ (2) $c=s+1.$ He obtained a description of the manifold admitting a foliation with singularities that satisfy (1).
Theorem:[2] Let $M^{n}$ be a compact connected manifold admitting a Morse foliation $F$ with $c$ centers and $s$ saddles. Then $c\leq s+2$. In case $c=s+2$,
• $M$ is homeomorphic to $S^{n}$,
• all saddles have index 1,
• each regular leaf is diffeomorphic to $S^{n-1}$.
Finally, in 2008, César Camacho and Bruno Scardua considered the case (2), $c=s+1$. This is possible only in a small number of low dimensions.
Theorem:[3] Let $M^{n}$ be a compact connected manifold and $F$ a Morse foliation on $M$. If $c=s+1$, then
• $n=2,4,8$ or $16$,
• $M^{n}$ is an Eells–Kuiper manifold.
References
1. Reeb, Georges (1946), "Sur les points singuliers d'une forme de Pfaff complètement intégrable ou d'une fonction numérique", C. R. Acad. Sci. Paris (in French), 222: 847–849, MR 0015613.
2. Wagneur, Edward (1978), "Formes de Pfaff à singularités non dégénérées", Annales de l'Institut Fourier (in French), 28 (3): xi, 165–176, MR 0511820.
3. Camacho, César; Scárdua, Bruno (2008), "On foliations with Morse singularities", Proceedings of the American Mathematical Society, 136 (11): 4065–4073, arXiv:math/0611395, doi:10.1090/S0002-9939-08-09371-4, MR 2425748.
Reeb stability theorem
In mathematics, the Reeb stability theorem, named after Georges Reeb, asserts that if one leaf of a codimension-one foliation is closed and has finite fundamental group, then all the leaves are closed and have finite fundamental group.
Reeb local stability theorem
Theorem:[1] Let $F$ be a $C^{1}$, codimension $k$ foliation of a manifold $M$ and $L$ a compact leaf with finite holonomy group. There exists a neighborhood $U$ of $L$, saturated in $F$ (also called invariant), in which all the leaves are compact with finite holonomy groups. Further, we can define a retraction $\pi :U\to L$ such that, for every leaf $L'\subset U$, $\pi |_{L'}:L'\to L$ is a covering map with a finite number of sheets and, for each $y\in L$, $\pi ^{-1}(y)$ is homeomorphic to a disk of dimension k and is transverse to $F$. The neighborhood $U$ can be taken to be arbitrarily small.
The last statement means in particular that, in a neighborhood of the point corresponding to a compact leaf with finite holonomy, the space of leaves is Hausdorff. Under certain conditions the Reeb local stability theorem may replace the Poincaré–Bendixson theorem in higher dimensions.[2] This is the case of codimension one, singular foliations $(M^{n},F)$, with $n\geq 3$, and some center-type singularity in $Sing(F)$.
The Reeb local stability theorem also has a version for a noncompact codimension-1 leaf.[3][4]
Reeb global stability theorem
An important problem in foliation theory is the study of the influence exerted by a compact leaf upon the global structure of a foliation. For certain classes of foliations, this influence is considerable.
Theorem:[1] Let $F$ be a $C^{1}$, codimension one foliation of a closed manifold $M$. If $F$ contains a compact leaf $L$ with finite fundamental group, then all the leaves of $F$ are compact, with finite fundamental group. If $F$ is transversely orientable, then every leaf of $F$ is diffeomorphic to $L$; $M$ is the total space of a fibration $f:M\to S^{1}$ over $S^{1}$, with fibre $L$, and $F$ is the fibre foliation, $\{f^{-1}(\theta )|\theta \in S^{1}\}$.
This theorem holds true even when $F$ is a foliation of a manifold with boundary, which is, a priori, tangent on certain components of the boundary and transverse on other components.[5] In this case it implies the Reeb sphere theorem.
The Reeb global stability theorem is false for foliations of codimension greater than one.[6] However, for some special kinds of foliations one has the following global stability results:
• In the presence of a certain transverse geometric structure:
Theorem:[7] Let $F$ be a complete conformal foliation of codimension $k\geq 3$ of a connected manifold $M$. If $F$ has a compact leaf with finite holonomy group, then all the leaves of $F$ are compact with finite holonomy group.
• For holomorphic foliations in a complex Kähler manifold:
Theorem:[8] Let $F$ be a holomorphic foliation of codimension $k$ in a compact complex Kähler manifold. If $F$ has a compact leaf with finite holonomy group then every leaf of $F$ is compact with finite holonomy group.
References
• C. Camacho, A. Lins Neto: Geometric theory of foliations, Boston, Birkhäuser, 1985
• I. Tamura, Topology of foliations: an introduction, Transl. of Math. Monographs, AMS, v. 97, 2006, 193 p.
Notes
1. G. Reeb (1952). Sur certaines propriétés topologiques des variétés feuillétées. Actualités Sci. Indust. Vol. 1183. Paris: Hermann.
2. J. Palis, jr., W. de Melo, Geometric theory of dynamical systems: an introduction, — New York, Springer, 1982.
3. T. Inaba, $C^{2}$ Reeb stability of noncompact leaves of foliations, — Proc. Japan Acad. Ser. A Math. Sci., 59: 158–160, 1983.
4. J. Cantwell and L. Conlon, Reeb stability for noncompact leaves in foliated 3-manifolds, — Proc. Amer. Math. Soc. 33 (1981), no. 2, 408–410.
5. C. Godbillon, Feuilletages: études géométriques, — Basel, Birkhäuser, 1991.
6. W.T. Wu and G. Reeb, Sur les éspaces fibres et les variétés feuillitées, — Hermann, 1952.
7. R.A. Blumenthal, Stability theorems for conformal foliations, — Proc. AMS. 91, 1984, p. 55–63.
8. J.V. Pereira, Global stability for holomorphic foliations on Kaehler manifolds, — Qual. Theory Dyn. Syst. 2 (2001), 381–384. arXiv:math/0002086v2
Reeb vector field
In mathematics, the Reeb vector field, named after the French mathematician Georges Reeb, is a notion that appears in various domains of contact geometry including:
• in a contact manifold, given a contact 1-form $\alpha $, the Reeb vector field satisfies $R\in \mathrm {ker} \ d\alpha ,\ \alpha (R)=1$,[1][2]
• in particular, in the context of Sasakian manifolds.
References
1. http://people.math.gatech.edu/%7Eetnyre/preprints/papers/phys.pdf
2. http://www2.im.uj.edu.pl/katedry/K.G/AutumnSchool/Monday.pdf
• Blair, David E. (2010). Riemannian geometry of contact and symplectic manifolds. Progress in Mathematics. Vol. 203 (Second edition of 2002 original ed.). Boston, MA: Birkhäuser Boston, Ltd. doi:10.1007/978-0-8176-4959-3. ISBN 978-0-8176-4958-6. MR 2682326. Zbl 1246.53001.
• McDuff, Dusa; Salamon, Dietmar (2017). Introduction to symplectic topology. Oxford Graduate Texts in Mathematics (Third edition of 1995 original ed.). Oxford: Oxford University Press. doi:10.1093/oso/9780198794899.001.0001. ISBN 978-0-19-879490-5. MR 3674984. Zbl 1380.53003.
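As a concrete illustration of the two defining conditions (a standard example, the contact form α = dz − y dx on R³, not taken from the references above), the Reeb vector field can be found pointwise by solving a linear system; the sketch below does this symbolically.

```python
# Minimal symbolic check: for alpha = dz - y dx on R^3, solve the defining
# conditions i_R(d alpha) = 0 and alpha(R) = 1 for the components of R.
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
a = sp.Matrix([-y, 0, 1])            # alpha = a_1 dx + a_2 dy + a_3 dz

# Coefficient matrix of the 2-form d(alpha): (d alpha)_{ij} = d_i a_j - d_j a_i
d_alpha = sp.Matrix(3, 3, lambda i, j: sp.diff(a[j], coords[i]) - sp.diff(a[i], coords[j]))

r1, r2, r3 = sp.symbols('r1 r2 r3')
R = sp.Matrix([r1, r2, r3])
eqs = [e for e in (d_alpha.T * R) if e != 0]   # R lies in the kernel of d(alpha)
eqs.append(a.dot(R) - 1)                       # alpha(R) = 1
print(sp.solve(eqs, [r1, r2, r3], dict=True))  # [{r1: 0, r2: 0, r3: 1}]
```

The solution r1 = r2 = 0, r3 = 1 says that the Reeb vector field of this contact form is R = ∂/∂z, as expected.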
Reed–Muller code
Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication.[1] Moreover, the proposed 5G standard[2] relies on the closely related polar codes[3] for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science.
Reed–Muller code RM(r,m)
• Named after: Irving S. Reed and David E. Muller
• Type: linear block code
• Block length: $2^{m}$
• Message length: $k=\sum _{i=0}^{r}{\binom {m}{i}}$
• Rate: $k/2^{m}$
• Distance: $2^{m-r}$
• Alphabet size: $2$
• Notation: $[2^{m},k,2^{m-r}]_{2}$-code
Reed–Muller codes generalize the Reed–Solomon codes and the Walsh–Hadamard code. Reed–Muller codes are linear block codes that are locally testable, locally decodable, and list decodable. These properties make them particularly useful in the design of probabilistically checkable proofs.
Traditional Reed–Muller codes are binary codes, which means that messages and codewords are binary strings. When r and m are integers with 0 ≤ r ≤ m, the Reed–Muller code with parameters r and m is denoted as RM(r, m). When asked to encode a message consisting of k bits, where $\textstyle k=\sum _{i=0}^{r}{\binom {m}{i}}$ holds, the RM(r, m) code produces a codeword consisting of $2^{m}$ bits.
Reed–Muller codes are named after David E. Muller, who discovered the codes in 1954,[4] and Irving S. Reed, who proposed the first efficient decoding algorithm.[5]
Description using low-degree polynomials
Reed–Muller codes can be described in several different (but ultimately equivalent) ways. The description that is based on low-degree polynomials is quite elegant and particularly suited for their application as locally testable codes and locally decodable codes.[6]
Encoder
A block code can have one or more encoding functions $ C:\{0,1\}^{k}\to \{0,1\}^{n}$ that map messages $ x\in \{0,1\}^{k}$ to codewords $ C(x)\in \{0,1\}^{n}$. The Reed–Muller code RM(r, m) has message length $\textstyle k=\sum _{i=0}^{r}{\binom {m}{i}}$ and block length $\textstyle n=2^{m}$. One way to define an encoding for this code is based on the evaluation of multilinear polynomials with m variables and total degree r. Every multilinear polynomial over the finite field with two elements can be written as follows: $p_{c}(Z_{1},\dots ,Z_{m})=\sum _{\underset {|S|\leq r}{S\subseteq \{1,\dots ,m\}}}c_{S}\cdot \prod _{i\in S}Z_{i}\,.$ The $ Z_{1},\dots ,Z_{m}$ are the variables of the polynomial, and the values $ c_{S}\in \{0,1\}$ are the coefficients of the polynomial. Since there are exactly $ k$ coefficients, the message $ x\in \{0,1\}^{k}$ consists of $ k$ values that can be used as these coefficients. In this way, each message $ x$ gives rise to a unique polynomial $ p_{x}$ in m variables. To construct the codeword $ C(x)$, the encoder evaluates $ p_{x}$ at all evaluation points $ a\in \{0,1\}^{m}$, where it interprets the sum as addition modulo two in order to obtain a bit $ (p_{x}(a){\bmod {2}})\in \{0,1\}$. That is, the encoding function is defined via $C(x)=\left(p_{x}(a){\bmod {2}}\right)_{a\in \{0,1\}^{m}}\,.$ The fact that the codeword $C(x)$ suffices to uniquely reconstruct $x$ follows from Lagrange interpolation, which states that the coefficients of a polynomial are uniquely determined when sufficiently many evaluation points are given.
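The evaluation-based encoding just described is easy to realize directly. The following is a minimal Python sketch (an illustration, not a reference implementation; it assumes the message bits list coefficients of lower-degree monomials first, in the lexicographic subset order that matches the worked example below):

```python
from itertools import combinations, product

def rm_encode(x, r, m):
    """Encode message bits x (length sum_{i<=r} C(m,i)) into 2^m codeword bits
    by evaluating the multilinear polynomial p_x at every point of {0,1}^m."""
    # One subset S of {0,...,m-1} with |S| <= r per message bit / coefficient c_S.
    monomials = [s for d in range(r + 1) for s in combinations(range(m), d)]
    assert len(x) == len(monomials)
    codeword = []
    for a in product([0, 1], repeat=m):          # all evaluation points, in order
        value = 0
        for coeff, s in zip(x, monomials):
            term = coeff
            for i in s:                          # product of the variables in S
                term &= a[i]
            value ^= term                        # addition modulo 2
        codeword.append(value)
    return codeword

# rm_encode([1,1,0,1,0, 0,1,0,1,0,1], 2, 4) reproduces the RM(2,4) codeword
# 1101 1110 0001 0010 from the example below.
```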
Since $C(0)=0$ and $C(x+y)=C(x)+C(y){\bmod {2}}$ holds for all messages $x,y\in \{0,1\}^{k}$, the function $C$ is a linear map. Thus the Reed–Muller code is a linear code.
Example
For the code RM(2, 4), the parameters are as follows: $ {\begin{aligned}r&=2\\m&=4\\k&=\textstyle {\binom {4}{2}}+{\binom {4}{1}}+{\binom {4}{0}}=6+4+1=11\\n&=2^{m}=16\\\end{aligned}}$
Let $ C:\{0,1\}^{11}\to \{0,1\}^{16}$ be the encoding function just defined. To encode the string x = 1 1010 010101 of length 11, the encoder first constructs the polynomial $ p_{x}$ in 4 variables: ${\begin{aligned}p_{x}(Z_{1},Z_{2},Z_{3},Z_{4})&=1+(1\cdot Z_{1}+0\cdot Z_{2}+1\cdot Z_{3}+0\cdot Z_{4})+(0\cdot Z_{1}Z_{2}+1\cdot Z_{1}Z_{3}+0\cdot Z_{1}Z_{4}+1\cdot Z_{2}Z_{3}+0\cdot Z_{2}Z_{4}+1\cdot Z_{3}Z_{4})\\&=1+Z_{1}+Z_{3}+Z_{1}Z_{3}+Z_{2}Z_{3}+Z_{3}Z_{4}\end{aligned}}$
Then it evaluates this polynomial at all 16 evaluation points (0101 means $Z_{1}=0,Z_{2}=1,Z_{3}=0,Z_{4}=1$): $p_{x}(0000)=1,\;p_{x}(0001)=1,\;p_{x}(0010)=0,\;p_{x}(0011)=1,\;$ $p_{x}(0100)=1,\;p_{x}(0101)=1,\;p_{x}(0110)=1,\;p_{x}(0111)=0,\;$ $p_{x}(1000)=0,\;p_{x}(1001)=0,\;p_{x}(1010)=0,\;p_{x}(1011)=1,\;$ $p_{x}(1100)=0,\;p_{x}(1101)=0,\;p_{x}(1110)=1,\;p_{x}(1111)=0\,.$ As a result, C(1 1010 010101) = 1101 1110 0001 0010 holds.
Decoder
As was already mentioned, Lagrange interpolation can be used to efficiently retrieve the message from a codeword. However, a decoder needs to work even if the codeword has been corrupted in a few positions, that is, when the received word is different from any codeword. In this case, a local decoding procedure can help.
The algorithm from Reed is based on the following property. We start from the codeword, that is, a sequence of evaluations of an unknown polynomial $ p_{x}$ in $ {\mathbb {F} }_{2}[X_{1},X_{2},...,X_{m}]$ of degree at most $ r$ that we want to find. The sequence may contain any number of errors, up to $ 2^{m-r-1}-1$ inclusive. If we take a monomial $ \mu $ of the highest degree $ d$ in $ p_{x}$ and sum the evaluations of the polynomial over all points where the variables in $ \mu $ take the values 0 and 1 and all other variables have value 0, we get the value of the coefficient (0 or 1) of $ \mu $ in $ p_{x}$ (there are $ 2^{d}$ such points). This is because every proper monomial divisor of $ \mu $ appears an even number of times in the sum, while $ \mu $ itself appears exactly once.
To take the possibility of errors into account, observe that the variables not in $ \mu $ may equally well be fixed to any other values. So instead of computing the sum only once, with the variables not in $ \mu $ set to 0, we compute it $ 2^{m-d}$ times, once for each fixed valuation of the other variables. If there are no errors, all those sums are equal to the value of the coefficient sought. The algorithm then takes the majority of the answers as the sought value. If the minority is larger than the maximum possible number of errors, the decoding step fails, since there are too many errors in the input code.
Once a coefficient is computed, if it is 1, the code is updated by removing the evaluations of the monomial $ \mu $ from the input code, and we continue with the next monomial, in decreasing order of degree.
Example
Let us consider the previous example and start from the code. With $ r=2,m=4$ we can fix at most 1 error in the code. Consider the input code as 1101 1110 0001 0110 (this is the previous code with one error). Since the degree of the polynomial $ p_{x}$ is at most $ r=2$, we start by searching for monomials of degree 2.
• $ \mu =X_{3}X_{4}$
• We start by looking for evaluation points with $ X_{1}=0,X_{2}=0,X_{3}\in \{0,1\},X_{4}\in \{0,1\}$. In the code this is: 1101 1110 0001 0110. The first sum is 1 (odd number of 1s).
• We look for evaluation points with $ X_{1}=0,X_{2}=1,X_{3}\in \{0,1\},X_{4}\in \{0,1\}$. In the code this is: 1101 1110 0001 0110. The second sum is 1.
• We look for evaluation points with $ X_{1}=1,X_{2}=0,X_{3}\in \{0,1\},X_{4}\in \{0,1\}$. In the code this is: 1101 1110 0001 0110. The third sum is 1.
• We look for evaluation points with $ X_{1}=1,X_{2}=1,X_{3}\in \{0,1\},X_{4}\in \{0,1\}$. In the code this is: 1101 1110 0001 0110. The fourth sum is 0 (even number of 1s).
The four sums don't agree (so we know there is an error), but the minority is not larger than the maximum number of errors allowed (1), so we take the majority and the coefficient of $ \mu $ is 1. We remove $ \mu $ from the code before continuing: code: 1101 1110 0001 0110, valuation of $ \mu $ is 0001 0001 0001 0001, the new code is 1100 1111 0000 0111.
• $ \mu =X_{2}X_{4}$
• 1100 1111 0000 0111. Sum is 0.
• 1100 1111 0000 0111. Sum is 0.
• 1100 1111 0000 0111. Sum is 1.
• 1100 1111 0000 0111. Sum is 0.
One error detected, coefficient is 0, no change to the current code.
• $ \mu =X_{1}X_{4}$
• 1100 1111 0000 0111. Sum is 0.
• 1100 1111 0000 0111. Sum is 0.
• 1100 1111 0000 0111. Sum is 1.
• 1100 1111 0000 0111. Sum is 0.
One error detected, coefficient is 0, no change to the current code.
• $ \mu =X_{2}X_{3}$
• 1100 1111 0000 0111. Sum is 1.
• 1100 1111 0000 0111. Sum is 1.
• 1100 1111 0000 0111. Sum is 1.
• 1100 1111 0000 0111. Sum is 0.
One error detected, coefficient is 1, valuation of $ \mu $ is 0000 0011 0000 0011, the current code is now 1100 1100 0000 0100.
• $ \mu =X_{1}X_{3}$
• 1100 1100 0000 0100. Sum is 1.
• 1100 1100 0000 0100. Sum is 1.
• 1100 1100 0000 0100. Sum is 1.
• 1100 1100 0000 0100. Sum is 0.
One error detected, coefficient is 1, valuation of $ \mu $ is 0000 0000 0011 0011, the current code is now 1100 1100 0011 0111.
• $ \mu =X_{1}X_{2}$
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 0.
One error detected, coefficient is 0, no change to the current code.
We now know all the coefficients of degree 2 of the polynomial, and we can move on to the monomials of degree 1. Notice that for each next lower degree there are twice as many sums, and each sum involves half as many positions.
• $ \mu =X_{4}$
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 0.
• 1100 1100 0011 0111. Sum is 1.
One error detected, coefficient is 0, no change to the current code.
• $ \mu =X_{3}$
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 1.
• 1100 1100 0011 0111. Sum is 0.
One error detected, coefficient is 1, valuation of $ \mu $ is 0011 0011 0011 0011, the current code is now 1111 1111 0000 0100.
Then we'll find 0 for $ \mu =X_{2}$, 1 for $ \mu =X_{1}$, and the current code becomes 1111 1111 1111 1011. For degree 0, we have 16 sums of only 1 bit each.
The minority is still of size 1, and we have found $ p_{x}=1+X_{1}+X_{3}+X_{1}X_{3}+X_{2}X_{3}+X_{3}X_{4}$ and the corresponding initial word 1 1010 010101.
Generalization to larger alphabets via low-degree polynomials
Using low-degree polynomials over a finite field $\mathbb {F} $ of size $q$, it is possible to extend the definition of Reed–Muller codes to alphabets of size $q$. Let $m$ and $d$ be positive integers, where $m$ should be thought of as larger than $d$. To encode a message $ x\in \mathbb {F} ^{k}$ of length $k=\textstyle {\binom {m+d}{m}}$, the message is again interpreted as an $m$-variate polynomial $p_{x}$ of total degree at most $d$ and with coefficients from $\mathbb {F} $. Such a polynomial indeed has $\textstyle {\binom {m+d}{m}}$ coefficients. The Reed–Muller encoding of $x$ is the list of all evaluations $p_{x}(a)$ over all $a\in \mathbb {F} ^{m}$. Thus the block length is $n=q^{m}$.
Description using a generator matrix
A generator matrix for a Reed–Muller code RM(r, m) of length $N=2^{m}$ can be constructed as follows. Let us write the set of all m-dimensional binary vectors as: $X=\mathbb {F} _{2}^{m}=\{x_{1},\ldots ,x_{N}\}.$ We define in N-dimensional space $\mathbb {F} _{2}^{N}$ the indicator vectors $\mathbb {I} _{A}\in \mathbb {F} _{2}^{N}$ on subsets $A\subset X$ by: $\left(\mathbb {I} _{A}\right)_{i}={\begin{cases}1&{\mbox{ if }}x_{i}\in A\\0&{\mbox{ otherwise}}\\\end{cases}}$ together with, also in $\mathbb {F} _{2}^{N}$, the binary operation $w\wedge z=(w_{1}\cdot z_{1},\ldots ,w_{N}\cdot z_{N}),$ referred to as the wedge product (not to be confused with the wedge product defined in exterior algebra). Here, $w=(w_{1},w_{2},\ldots ,w_{N})$ and $z=(z_{1},z_{2},\ldots ,z_{N})$ are points in $\mathbb {F} _{2}^{N}$ (N-dimensional binary vectors), and the operation $\cdot $ is the usual multiplication in the field $\mathbb {F} _{2}$.
$\mathbb {F} _{2}^{m}$ is an m-dimensional vector space over the field $\mathbb {F} _{2}$, so it is possible to write $(\mathbb {F} _{2})^{m}=\{(y_{m},\ldots ,y_{1})\mid y_{i}\in \mathbb {F} _{2}\}.$ We define in N-dimensional space $\mathbb {F} _{2}^{N}$ the following vectors of length $N$: $v_{0}=(1,1,\ldots ,1)$ and $v_{i}=\mathbb {I} _{H_{i}},$ where 1 ≤ i ≤ m and the $H_{i}$ are hyperplanes in $(\mathbb {F} _{2})^{m}$ (with dimension m − 1): $H_{i}=\{y\in (\mathbb {F} _{2})^{m}\mid y_{i}=0\}.$
The generator matrix
The Reed–Muller RM(r, m) code of order r and length $N=2^{m}$ is the code generated by $v_{0}$ and the wedge products of up to r of the $v_{i}$, 1 ≤ i ≤ m (where by convention a wedge product of fewer than one vector is the identity for the operation). In other words, we can build a generator matrix for the RM(r, m) code, using the vectors and their wedge product permutations up to r at a time ${v_{0},v_{1},\ldots ,v_{m},\ldots ,(v_{i_{1}}\wedge v_{i_{2}}),\ldots (v_{i_{1}}\wedge v_{i_{2}}\ldots \wedge v_{i_{r}})}$, as the rows of the generator matrix, where 1 ≤ $i_{k}$ ≤ m.
Example 1
Let m = 3.
Then N = 8, and $X=\mathbb {F} _{2}^{3}=\{(0,0,0),(0,0,1),(0,1,0)\ldots ,(1,1,1)\},$ and ${\begin{aligned}v_{0}&=(1,1,1,1,1,1,1,1)\\[2pt]v_{1}&=(1,0,1,0,1,0,1,0)\\[2pt]v_{2}&=(1,1,0,0,1,1,0,0)\\[2pt]v_{3}&=(1,1,1,1,0,0,0,0).\end{aligned}}$
The RM(1,3) code is generated by the set $\{v_{0},v_{1},v_{2},v_{3}\},\,$ or more explicitly by the rows of the matrix: ${\begin{pmatrix}1&1&1&1&1&1&1&1\\1&0&1&0&1&0&1&0\\1&1&0&0&1&1&0&0\\1&1&1&1&0&0&0&0\end{pmatrix}}$
Example 2
The RM(2,3) code is generated by the set: $\{v_{0},v_{1},v_{2},v_{3},v_{1}\wedge v_{2},v_{1}\wedge v_{3},v_{2}\wedge v_{3}\}$ or more explicitly by the rows of the matrix: ${\begin{pmatrix}1&1&1&1&1&1&1&1\\1&0&1&0&1&0&1&0\\1&1&0&0&1&1&0&0\\1&1&1&1&0&0&0&0\\1&0&0&0&1&0&0&0\\1&0&1&0&0&0&0&0\\1&1&0&0&0&0&0&0\\\end{pmatrix}}$
Properties
The following properties hold:
1. The set of all possible wedge products of up to m of the $v_{i}$ form a basis for $\mathbb {F} _{2}^{N}$.
2. The RM(r, m) code has rank $\sum _{s=0}^{r}{m \choose s}.$
3. RM(r, m) = RM(r, m − 1) | RM(r − 1, m − 1), where '|' denotes the bar product of two codes.
4. RM(r, m) has minimum Hamming weight $2^{m-r}$.
Proof
1. There are $\sum _{s=0}^{m}{m \choose s}=2^{m}=N$ such vectors and $\mathbb {F} _{2}^{N}$ has dimension N, so it is sufficient to check that the N vectors span; equivalently it is sufficient to check that $\mathrm {RM} (m,m)=\mathbb {F} _{2}^{N}$. Let x be a binary vector of length m, an element of X. Let $(x)_{i}$ denote the ith element of x. Define $y_{i}={\begin{cases}v_{i}&{\text{ if }}(x)_{i}=0\\v_{0}+v_{i}&{\text{ if }}(x)_{i}=1\\\end{cases}}$ where 1 ≤ i ≤ m. Then $\mathbb {I} _{\{x\}}=y_{1}\wedge \cdots \wedge y_{m}$. Expansion via the distributivity of the wedge product gives $\mathbb {I} _{\{x\}}\in \mathrm {RM} (m,m)$. Then since the vectors $\{\mathbb {I} _{\{x\}}\mid x\in X\}$ span $\mathbb {F} _{2}^{N}$, we have $\mathrm {RM} (m,m)=\mathbb {F} _{2}^{N}$.
2. By 1, all such wedge products must be linearly independent, so the rank of RM(r, m) must simply be the number of such vectors.
3. Omitted.
4. By induction. The RM(0, m) code is the repetition code of length $N=2^{m}$ and weight $N=2^{m-0}=2^{m-r}$. By 1, $\mathrm {RM} (m,m)=\mathbb {F} _{2}^{N}$ and has weight $1=2^{0}=2^{m-r}$. The article bar product (coding theory) gives a proof that the weight of the bar product of two codes $C_{1}$, $C_{2}$ is given by $\min\{2w(C_{1}),w(C_{2})\}$. If 0 < r < m and if
1. RM(r, m − 1) has weight $2^{m-1-r}$,
2. RM(r − 1, m − 1) has weight $2^{m-1-(r-1)}=2^{m-r}$,
then the bar product has weight $\min\{2\times 2^{m-1-r},2^{m-r}\}=2^{m-r}.$
Decoding RM codes
RM(r, m) codes can be decoded using majority logic decoding. The basic idea of majority logic decoding is to build several checksums for each received code word element. Since each of the different checksums must all have the same value (i.e., the value of the corresponding message word element), we can use majority logic decoding to decipher the value of the message word element. Once each order of the polynomial is decoded, the received word is modified accordingly by removing the corresponding codewords weighted by the decoded message contributions, up to the present stage. So for an rth-order RM code, we have to decode iteratively r + 1 times before we arrive at the final received code word. Also, the values of the message bits are calculated through this scheme; finally we can calculate the codeword by multiplying the message word (just decoded) with the generator matrix.
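A direct Python sketch of the majority-logic scheme just described (illustrative only, matching the bit ordering of the encoder sketch above; it assumes at most $2^{m-r-1}-1$ errors, so every majority vote is strict):

```python
from itertools import combinations, product

def rm_decode(word, r, m):
    """Reed's majority-logic decoder for RM(r, m): recover the message
    coefficients from a received word, highest-degree monomials first.
    Returns (message_bits, success_flag)."""
    word = list(word)
    coeffs = {}
    for d in range(r, -1, -1):
        for s in combinations(range(m), d):
            rest = [i for i in range(m) if i not in s]
            votes = []
            for b in product([0, 1], repeat=len(rest)):  # fix the other variables
                acc = 0
                for v in product([0, 1], repeat=d):      # sum over the 2^d points
                    point = [0] * m
                    for i, bit in zip(s, v):
                        point[i] = bit
                    for i, bit in zip(rest, b):
                        point[i] = bit
                    acc ^= word[int("".join(map(str, point)), 2)]
                votes.append(acc)
            coeff = 1 if 2 * sum(votes) > len(votes) else 0  # majority vote
            coeffs[s] = coeff
            if coeff:                      # strip mu's contribution from the word
                for a in product([0, 1], repeat=m):
                    if all(a[i] for i in s):
                        word[int("".join(map(str, a)), 2)] ^= 1
    monomials = [s for d2 in range(r + 1) for s in combinations(range(m), d2)]
    return [coeffs[s] for s in monomials], all(bit == 0 for bit in word)

# On the corrupted word from the example above, this recovers 1 1010 010101:
bits, ok = rm_decode([1,1,0,1, 1,1,1,0, 0,0,0,1, 0,1,1,0], 2, 4)
```

The success flag is simply whether the residual word is all-zero after the last stage, which is exactly the success indication discussed next.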
One indication that the decoding succeeded is an all-zero modified received word at the end of the (r + 1)-stage majority logic decoding. This technique was proposed by Irving S. Reed, and is more general when applied to other finite geometry codes.
Description using a recursive construction
A Reed–Muller code RM(r,m) exists for any integers $m\geq 0$ and $0\leq r\leq m$. RM(m, m) is defined as the universe ($2^{m},2^{m},1$) code. RM(−1,m) is defined as the trivial code ($2^{m},0,\infty $). The remaining RM codes may be constructed from these elementary codes using the length-doubling construction $\mathrm {RM} (r,m)=\{(\mathbf {u} ,\mathbf {u} +\mathbf {v} )\mid \mathbf {u} \in \mathrm {RM} (r,m-1),\mathbf {v} \in \mathrm {RM} (r-1,m-1)\}.$ From this construction, RM(r,m) is a binary linear block code (n, k, d) with length $n=2^{m}$, dimension $k(r,m)=k(r,m-1)+k(r-1,m-1)$ and minimum distance $d=2^{m-r}$ for $r\geq 0$. The dual code to RM(r,m) is RM(m−r−1,m). This shows that repetition and SPC codes are duals, biorthogonal and extended Hamming codes are duals, and that codes with k = n/2 are self-dual.
Special cases of Reed–Muller codes
Table of all RM(r,m) codes for m ≤ 5
All RM(r, m) codes with $0\leq m\leq 5$ and alphabet size 2 are listed here, annotated with the standard [n,k,d] coding theory notation for block codes. The code RM(r, m) is a $\textstyle [2^{m},k,2^{m-r}]_{2}$-code, that is, it is a linear code over a binary alphabet, has block length $\textstyle 2^{m}$, message length (or dimension) k, and minimum distance $\textstyle 2^{m-r}$.
m = 0: RM(−1,0) = (1,0,∞); RM(0,0) = (1,1,1)
m = 1: RM(−1,1) = (2,0,∞); RM(0,1) = (2,1,2); RM(1,1) = (2,2,1)
m = 2: RM(−1,2) = (4,0,∞); RM(0,2) = (4,1,4); RM(1,2) = (4,3,2); RM(2,2) = (4,4,1)
m = 3: RM(−1,3) = (8,0,∞); RM(0,3) = (8,1,8); RM(1,3) = (8,4,4); RM(2,3) = (8,7,2); RM(3,3) = (8,8,1)
m = 4: RM(−1,4) = (16,0,∞); RM(0,4) = (16,1,16); RM(1,4) = (16,5,8); RM(2,4) = (16,11,4); RM(3,4) = (16,15,2); RM(4,4) = (16,16,1)
m = 5: RM(−1,5) = (32,0,∞); RM(0,5) = (32,1,32); RM(1,5) = (32,6,16); RM(2,5) = (32,16,8); RM(3,5) = (32,26,4); RM(4,5) = (32,31,2); RM(5,5) = (32,32,1)
The diagonals of this table are the named families of codes:
• RM(m, m): ($2^{m}$, $2^{m}$, 1) universe codes
• RM(m − 1, m): ($2^{m}$, $2^{m}-1$, 2) SPC codes
• RM(m − 2, m): ($2^{m}$, $2^{m}-m-1$, 4) extended Hamming codes
• RM(r, m = 2r + 1): ($2^{2r+1}$, $2^{2r}$, $2^{r+1}$) self-dual codes
• RM(1, m): ($2^{m}$, m + 1, $2^{m-1}$) punctured Hadamard codes
• RM(0, m): ($2^{m}$, 1, $2^{m}$) repetition codes
• RM(−1, m): ($2^{m}$, 0, ∞) trivial codes
Properties of RM(r,m) codes for r ≤ 1 or r ≥ m − 1
• RM(0, m) codes are repetition codes of length $N=2^{m}$, rate $R={\tfrac {1}{N}}$ and minimum distance $d_{\min }=N$.
• RM(1, m) codes are parity check codes of length $N=2^{m}$, rate $R={\tfrac {m+1}{N}}$ and minimum distance $d_{\min }={\tfrac {N}{2}}$.
• RM(m − 1, m) codes are single parity check codes of length $N=2^{m}$, rate $R={\tfrac {N-1}{N}}$ and minimum distance $d_{\min }=2$.
• RM(m − 2, m) codes are the family of extended Hamming codes of length $N=2^{m}$ with minimum distance $d_{\min }=4$.[7]
References
1. Massey, James L. (1992), "Deep-space communications and coding: A marriage made in heaven", Advanced Methods for Satellite and Deep Space Communications, Lecture Notes in Control and Information Sciences, vol. 182, Springer-Verlag, pp. 1–17, CiteSeerX 10.1.1.36.4265, doi:10.1007/bfb0036046, ISBN 978-3540558514
2. "3GPP RAN1 meeting #87 final report". 3GPP. Retrieved 31 August 2017.
3. Arikan, Erdal (2009). "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels".
IEEE Transactions on Information Theory. 55 (7): 3051–3073. arXiv:0807.3917. doi:10.1109/TIT.2009.2021379. hdl:11693/11695. S2CID 889822. 4. Muller, David E. (1954). "Application of Boolean algebra to switching circuit design and to error detection". Transactions of the I.R.E. Professional Group on Electronic Computers. EC-3 (3): 6–12. doi:10.1109/irepgelc.1954.6499441. ISSN 2168-1740. 5. Reed, Irving S. (1954). "A class of multiple-error-correcting codes and the decoding scheme". Transactions of the IRE Professional Group on Information Theory. 4 (4): 38–49. doi:10.1109/tit.1954.1057465. hdl:10338.dmlcz/143797. ISSN 2168-2690. 6. Prahladh Harsha et al., Limits of Approximation Algorithms: PCPs and Unique Games (DIMACS Tutorial Lecture Notes), Section 5.2.1. 7. Trellis and Turbo Coding, C. Schlegel & L. Perez, Wiley Interscience, 2004, p. 149.

Further reading • Shu Lin; Daniel Costello (2005). Error Control Coding (2 ed.). Pearson. ISBN 978-0-13-017973-9. Chapter 4. • J.H. van Lint (1992). Introduction to Coding Theory. GTM. Vol. 86 (2 ed.). Springer-Verlag. ISBN 978-3-540-54894-2. Chapter 4.5.

External links • MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes section 6.4 • GPL Matlab-implementation of RM-codes • Source GPL Matlab-implementation of RM-codes • Weiss, E. (September 1962). "Generalized Reed-Muller codes". Information and Control. 5 (3): 213–222. doi:10.1016/s0019-9958(62)90555-7. ISSN 0019-9958.
Reed–Muller expansion In Boolean logic, a Reed–Muller expansion (or Davio expansion) is a decomposition of a Boolean function. For a Boolean function $f(x_{1},\ldots ,x_{n}):\mathbb {B} ^{n}\to \mathbb {B} $ we call ${\begin{aligned}f_{x_{i}}(x)&=f(x_{1},\ldots ,x_{i-1},1,x_{i+1},\ldots ,x_{n})\\f_{{\overline {x}}_{i}}(x)&=f(x_{1},\ldots ,x_{i-1},0,x_{i+1},\ldots ,x_{n})\end{aligned}}$ the positive and negative cofactors of $f$ with respect to $x_{i}$, and ${\begin{aligned}{\frac {\partial f}{\partial x_{i}}}&=f_{x_{i}}(x)\oplus f_{{\overline {x}}_{i}}(x)\end{aligned}}$ the boolean derivation of $f$ with respect to $x_{i}$, where ${\oplus }$ denotes the XOR operator. Then we have for the Reed–Muller or positive Davio expansion: $f=f_{{\overline {x}}_{i}}\oplus x_{i}{\frac {\partial f}{\partial x_{i}}}.$ Description This equation is written in a way that it resembles a Taylor expansion of $f$ about $x_{i}=0$. There is a similar decomposition corresponding to an expansion about $x_{i}=1$ (negative Davio expansion): $f=f_{x_{i}}\oplus {\overline {x}}_{i}{\frac {\partial f}{\partial x_{i}}}.$ Repeated application of the Reed–Muller expansion results in an XOR polynomial in $x_{1},\ldots ,x_{n}$: $f=a_{1}\oplus a_{2}x_{1}\oplus a_{3}x_{2}\oplus a_{4}x_{1}x_{2}\oplus \ldots \oplus a_{2^{n}}x_{1}\cdots x_{n}$ This representation is unique and sometimes also called Reed–Muller expansion.[1] E.g. for $n=2$ the result would be $f(x_{1},x_{2})=f_{{\overline {x}}_{1}{\overline {x}}_{2}}\oplus {\frac {\partial f_{{\overline {x}}_{2}}}{\partial x_{1}}}x_{1}\oplus {\frac {\partial f_{{\overline {x}}_{1}}}{\partial x_{2}}}x_{2}\oplus {\frac {\partial ^{2}f}{\partial x_{1}\partial x_{2}}}x_{1}x_{2}$ where ${\partial ^{2}f \over \partial x_{1}\partial x_{2}}=f_{{\bar {x}}_{1}{\bar {x}}_{2}}\oplus f_{{\bar {x}}_{1}x_{2}}\oplus f_{x_{1}{\bar {x}}_{2}}\oplus f_{x_{1}x_{2}}$. For $n=3$ the result would be $f(x_{1},x_{2},x_{3})=f_{{\bar {x}}_{1}{\bar {x}}_{2}{\bar {x}}_{3}}\oplus {\partial f_{{\bar {x}}_{2}{\bar {x}}_{3}} \over \partial x_{1}}x_{1}\oplus {\partial f_{{\bar {x}}_{1}{\bar {x}}_{3}} \over \partial x_{2}}x_{2}\oplus {\partial f_{{\bar {x}}_{1}{\bar {x}}_{2}} \over \partial x_{3}}x_{3}\oplus {\partial ^{2}f_{{\bar {x}}_{3}} \over \partial x_{1}\partial x_{2}}x_{1}x_{2}\oplus {\partial ^{2}f_{{\bar {x}}_{2}} \over \partial x_{1}\partial x_{3}}x_{1}x_{3}\oplus {\partial ^{2}f_{{\bar {x}}_{1}} \over \partial x_{2}\partial x_{3}}x_{2}x_{3}\oplus {\partial ^{3}f \over \partial x_{1}\partial x_{2}\partial x_{3}}x_{1}x_{2}x_{3}$ where ${\partial ^{3}f \over \partial x_{1}\partial x_{2}\partial x_{3}}=f_{{\bar {x}}_{1}{\bar {x}}_{2}{\bar {x}}_{3}}\oplus f_{{\bar {x}}_{1}{\bar {x}}_{2}x_{3}}\oplus f_{{\bar {x}}_{1}x_{2}{\bar {x}}_{3}}\oplus f_{{\bar {x}}_{1}x_{2}x_{3}}\oplus f_{x_{1}{\bar {x}}_{2}{\bar {x}}_{3}}\oplus f_{x_{1}{\bar {x}}_{2}x_{3}}\oplus f_{x_{1}x_{2}{\bar {x}}_{3}}\oplus f_{x_{1}x_{2}x_{3}}$. Geometric interpretation This $n=3$ case can be given a cubical geometric interpretation (or a graph-theoretic interpretation) as follows: when moving along the edge from ${\bar {x}}_{1}{\bar {x}}_{2}{\bar {x}}_{3}$ to $x_{1}{\bar {x}}_{2}{\bar {x}}_{3}$, XOR up the functions of the two end-vertices of the edge in order to obtain the coefficient of $x_{1}$. 
To move from ${\bar {x}}_{1}{\bar {x}}_{2}{\bar {x}}_{3}$ to $x_{1}x_{2}{\bar {x}}_{3}$ there are two shortest paths: one is a two-edge path passing through $x_{1}{\bar {x}}_{2}{\bar {x}}_{3}$ and the other one a two-edge path passing through ${\bar {x}}_{1}x_{2}{\bar {x}}_{3}$. These two paths encompass four vertices of a square, and XORing up the functions of these four vertices yields the coefficient of $x_{1}x_{2}$. Finally, to move from ${\bar {x}}_{1}{\bar {x}}_{2}{\bar {x}}_{3}$ to $x_{1}x_{2}x_{3}$ there are six shortest paths which are three-edge paths, and these six paths encompass all the vertices of the cube, therefore the coefficient of $x_{1}x_{2}x_{3}$ can be obtained by XORing up the functions of all eight of the vertices. (The other, unmentioned coefficients can be obtained by symmetry.) Paths The shortest paths all involve monotonic changes to the values of the variables, whereas non-shortest paths all involve non-monotonic changes of such variables; or, to put it another way, the shortest paths all have lengths equal to the Hamming distance between the starting and destination vertices. This means that it should be easy to generalize an algorithm for obtaining coefficients from a truth table by XORing up values of the function from appropriate rows of a truth table, even for hyperdimensional cases ($n=4$ and above). Between the starting and destination rows of a truth table, some variables have their values remaining fixed: find all the rows of the truth table such that those variables likewise remain fixed at those given values, then XOR up their functions and the result should be the coefficient for the monomial corresponding to the destination row. (In such monomial, include any variable whose value is 1 (at that row) and exclude any variable whose value is 0 (at that row), instead of including the negation of the variable whose value is 0, as in the minterm style.) Similar to binary decision diagrams (BDDs), where nodes represent Shannon expansion with respect to the according variable, we can define a decision diagram based on the Reed–Muller expansion. These decision diagrams are called functional BDDs (FBDDs). 
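The truth-table algorithm sketched above can be written out directly. The following minimal Python sketch (the function name and row encoding are illustrative assumptions, not from a referenced implementation) indexes the truth table so that bit i of the row index holds the value of x_{i+1}, and obtains the coefficient of each monomial by XORing up function values one variable at a time (the binary Möbius, or butterfly, transform):

def anf_coefficients(truth_table, n):
    # coeff[s] becomes the Reed–Muller (ANF) coefficient of the monomial
    # whose set of variables is given by the bit mask s
    coeff = list(truth_table)
    for i in range(n):                            # absorb one variable at a time
        for s in range(2 ** n):
            if s & (1 << i):                      # row with x_{i+1} = 1 ...
                coeff[s] ^= coeff[s ^ (1 << i)]   # ... XORs in its x_{i+1} = 0 partner
    return coeff

# f = x1 XOR x2: coefficients of 1, x1, x2, x1x2
print(anf_coefficients([0, 1, 1, 0], 2))   # -> [0, 1, 1, 0]
# f = x1 AND x2: only the x1x2 coefficient is 1
print(anf_coefficients([0, 0, 0, 1], 2))   # -> [0, 0, 0, 1]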
Derivations The Reed–Muller expansion can be derived from the XOR-form of the Shannon decomposition, using the identity ${\overline {x}}=1\oplus x$: ${\begin{aligned}f&=x_{i}f_{x_{i}}\oplus {\overline {x}}_{i}f_{{\overline {x}}_{i}}\\&=x_{i}f_{x_{i}}\oplus (1\oplus x_{i})f_{{\overline {x}}_{i}}\\&=x_{i}f_{x_{i}}\oplus f_{{\overline {x}}_{i}}\oplus x_{i}f_{{\overline {x}}_{i}}\\&=f_{{\overline {x}}_{i}}\oplus x_{i}{\frac {\partial f}{\partial x_{i}}}.\end{aligned}}$ Derivation of the expansion for $n=2$: ${\begin{aligned}f&=f_{{\bar {x}}_{1}}\oplus x_{1}{\partial f \over \partial x_{1}}\\&={\Big (}f_{{\bar {x}}_{2}}\oplus x_{2}{\partial f \over \partial x_{2}}{\Big )}_{{\bar {x}}_{1}}\oplus x_{1}{\partial {\Big (}f_{{\bar {x}}_{2}}\oplus x_{2}{\partial f \over \partial x_{2}}{\Big )} \over \partial x_{1}}\\&=f_{{\bar {x}}_{1}{\bar {x}}_{2}}\oplus x_{2}{\partial f_{{\bar {x}}_{1}} \over \partial x_{2}}\oplus x_{1}{\Big (}{\partial f_{{\bar {x}}_{2}} \over \partial x_{1}}\oplus x_{2}{\partial ^{2}f \over \partial x_{1}\partial x_{2}}{\Big )}\\&=f_{{\bar {x}}_{1}{\bar {x}}_{2}}\oplus x_{2}{\partial f_{{\bar {x}}_{1}} \over \partial x_{2}}\oplus x_{1}{\partial f_{{\bar {x}}_{2}} \over \partial x_{1}}\oplus x_{1}x_{2}{\partial ^{2}f \over \partial x_{1}\partial x_{2}}.\end{aligned}}$ Derivation of the second-order boolean derivative: ${\begin{aligned}{\partial ^{2}f \over \partial x_{1}\partial x_{2}}&={\partial \over \partial x_{1}}{\Big (}{\partial f \over \partial x_{2}}{\Big )}={\partial \over \partial x_{1}}(f_{{\bar {x}}_{2}}\oplus f_{x_{2}})\\&=(f_{{\bar {x}}_{2}}\oplus f_{x_{2}})_{{\bar {x}}_{1}}\oplus (f_{{\bar {x}}_{2}}\oplus f_{x_{2}})_{x_{1}}\\&=f_{{\bar {x}}_{1}{\bar {x}}_{2}}\oplus f_{{\bar {x}}_{1}x_{2}}\oplus f_{x_{1}{\bar {x}}_{2}}\oplus f_{x_{1}x_{2}}.\end{aligned}}$ See also • Algebraic normal form (ANF) • Ring sum normal form (RSNF) • Zhegalkin polynomial • Karnaugh map • Irving Stoy Reed • David Eugene Muller • Reed–Muller code References 1. Kebschull, Udo; Schubert, Endric; Rosenstiel, Wolfgang (1992). "Multilevel logic synthesis based on functional decision diagrams". Proceedings of the 3rd European Conference on Design Automation. Further reading • Жега́лкин [Zhegalkin], Ива́н Ива́нович [Ivan Ivanovich] (1927). "O Tekhnyke Vychyslenyi Predlozhenyi v Symbolytscheskoi Logykye" О технике вычислений предложений в символической логике [On the technique of calculating propositions in symbolic logic (Sur le calcul des propositions dans la logique symbolique)]. Matematicheskii Sbornik (in Russian and French). Moscow, Russia. 34 (1): 9–28. Mi msb7433. Archived from the original on 2017-10-12. Retrieved 2017-10-12. • Reed, Irving Stoy (September 1954). "A Class of Multiple-Error Correcting Codes and the Decoding Scheme". IRE Transactions on Information Theory. IT-4: 38–49. • Muller, David Eugene (September 1954). "Application of Boolean Algebra to Switching Circuit Design and to Error Detection". IRE Transactions on Electronic Computers. EC-3: 6–12. • Kebschull, Udo; Rosenstiel, Wolfgang (1993). "Efficient graph-based computation and manipulation of functional decision diagrams". Proceedings of the 4th European Conference on Design Automation: 278–282. • Maxfield, Clive "Max" (2006-11-29). "Reed-Muller Logic". Logic 101. EETimes. Part 3. Archived from the original on 2017-04-19. Retrieved 2017-04-19. • Steinbach, Bernd [in German]; Posthoff, Christian (2009). "Preface". Logic Functions and Equations - Examples and Exercises (1st ed.). Springer Science + Business Media B. V. p. xv. 
ISBN 978-1-4020-9594-8. LCCN 2008941076. • Perkowski, Marek A.; Grygiel, Stanislaw (1995-11-20). "6. Historical Overview of the Research on Decomposition". A Survey of Literature on Function Decomposition. Version IV. Functional Decomposition Group, Department of Electrical Engineering, Portland University, Portland, Oregon, USA. pp. 21–22. CiteSeerX 10.1.1.64.1129. (188 pages)
Reed–Solomon error correction Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960.[1] They have many applications, the most prominent of which include consumer technologies such as MiniDiscs, CDs, DVDs, Blu-ray discs, and QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6.

Reed–Solomon codes
• Named after: Irving S. Reed and Gustave Solomon
• Classification hierarchy: linear block code → polynomial code → Reed–Solomon code
• Block length: n
• Message length: k
• Distance: n − k + 1
• Alphabet size: q = p^m ≥ n (p prime); often n = q − 1
• Notation: [n, k, n − k + 1]_q-code
• Algorithms: Berlekamp–Massey, Euclidean, et al.
• Properties: maximum-distance separable code

Reed–Solomon codes operate on a block of data treated as a set of finite-field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t = n − k check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to t erroneous symbols, or locate and correct up to ⌊t/2⌋ erroneous symbols at unknown locations. As an erasure code, it can correct up to t erasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is up to the designer of the code and may be selected within wide limits. There are two basic types of Reed–Solomon codes – original view and BCH view – with BCH view being the most common, as BCH view decoders are faster and require less working storage than original view decoders.

History Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon, who were then staff members of MIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields" (Reed & Solomon 1960). The original encoding scheme described in the Reed & Solomon article used a variable polynomial based on the message to be encoded, where only a fixed set of values (evaluation points) to be encoded are known to encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of k (unencoded message length) out of n (encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to a BCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme and ones that use the BCH encoding scheme. Also in 1960, a practical fixed polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961.[2] The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson (1961).[3] By 1963 (or possibly earlier), J. J.
Stone (and others) recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes,[4] but Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not even cyclic codes. In 1969, an improved BCH scheme decoder was developed by Elwyn Berlekamp and James Massey, and has since been known as the Berlekamp–Massey decoding algorithm. In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm.[5] In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc, where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented in digital storage devices and digital communication standards, though they are being slowly replaced by Bose–Chaudhuri–Hocquenghem (BCH) codes. For example, Reed–Solomon codes are used in the Digital Video Broadcasting (DVB) standard DVB-S, in conjunction with a convolutional inner code, but BCH codes are used with LDPC in its successor, DVB-S2. In 1986, an original scheme decoder known as the Berlekamp–Welch algorithm was developed. In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders – see the Guruswami–Sudan list decoding algorithm. In 2002, another original scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.[6]

Applications Data storage Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects. Reed–Solomon coding is a key component of the compact disc. It was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block. The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.[7] DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code. Reed–Solomon error correction is also used in parchive files which are commonly posted accompanying multimedia files on USENET. The distributed online storage service Wuala (discontinued in 2015) also used Reed–Solomon when breaking up files.
Bar code Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Data Matrix, QR Code, and Aztec Code use Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure. Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology.

Data transmission Specialized forms of Reed–Solomon codes, specifically Cauchy-RS and Vandermonde-RS, can be used to overcome the unreliable nature of data transmission over erasure channels. The encoding process assumes an RS(N, K) code: N codewords of length N symbols, each storing K symbols of data, are generated and then sent over an erasure channel. Any combination of K codewords received at the other end is enough to reconstruct all of the N codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. Consequently, N is usually 2K, meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent. Reed–Solomon codes are also used in xDSL systems and CCSDS's Space Communications Protocol Specifications as a form of forward error correction.

Space transmission One significant application of Reed–Solomon coding was to encode the digital pictures sent back by the Voyager program. Voyager introduced Reed–Solomon coding concatenated with convolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications. Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes. Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on the Mars Pathfinder, Galileo, Mars Exploration Rover and Cassini missions, where they perform within about 1–1.5 dB of the ultimate limit, the Shannon capacity. These concatenated codes are now being replaced by more powerful turbo codes. Channel coding schemes used by NASA missions:[9]

• 1958–present: uncoded (Explorer, Mariner, many others)
• 1968–1978: convolutional codes (CC) (25, 1/2) (Pioneer, Venus)
• 1969–1975: Reed–Muller code (32, 6) (Mariner, Viking)
• 1977–present: binary Golay code (Voyager)
• 1977–present: RS(255, 223) + CC(7, 1/2) (Voyager, Galileo, many others)
• 1989–2003: RS(255, 223) + CC(7, 1/3) (Voyager)
• 1989–2003: RS(255, 223) + CC(14, 1/4) (Galileo)
• 1996–present: RS + CC(15, 1/6) (Cassini, Mars Pathfinder, others)
• 2004–present: turbo codes[nb 1] (Messenger, Stereo, MRO, others)
• est. 2009: LDPC codes (Constellation, MSL)

Constructions (encoding) The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size q, a block length n, and a message length k, with k < n ≤ q. The set of alphabet symbols is interpreted as the finite field of order q, and thus, q must be a prime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate R = k/n is some constant, and furthermore, the block length is equal to or one less than the alphabet size, that is, n = q or n = q − 1.
Reed & Solomon's original view: The codeword as a sequence of values There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords. In the original view of Reed & Solomon (1960), every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than k. In order to obtain a codeword of the Reed–Solomon code, the message symbols (each within the q-sized alphabet) are treated as the coefficients of a polynomial p of degree less than k, over the finite field F with q elements. In turn, the polynomial p is evaluated at n ≤ q distinct points $a_{1},\dots ,a_{n}$ of the field F, and the sequence of values is the corresponding codeword. Common choices for a set of evaluation points include {0, 1, 2, ..., n − 1}, {0, 1, α, α^2, ..., α^(n−2)}, or, for n < q, {1, α, α^2, ..., α^(n−1)}, where α is a primitive element of F. Formally, the set $\mathbf {C} $ of codewords of the Reed–Solomon code is defined as follows: $\mathbf {C} ={\Bigl \{}\;{\bigl (}p(a_{1}),p(a_{2}),\dots ,p(a_{n}){\bigr )}\;{\Big |}\;p{\text{ is a polynomial over }}F{\text{ of degree }}<k\;{\Bigr \}}\,.$ Since any two distinct polynomials of degree less than $k$ agree in at most $k-1$ points, any two codewords of the Reed–Solomon code disagree in at least $n-(k-1)=n-k+1$ positions. Furthermore, there are two polynomials that do agree in $k-1$ points but are not equal, and thus, the distance of the Reed–Solomon code is exactly $d=n-k+1$. Then the relative distance is $\delta =d/n=1-k/n+1/n=1-R+1/n\sim 1-R$, where $R=k/n$ is the rate. This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, every code satisfies $\delta +R\leq 1+1/n$. Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class of maximum distance separable codes. While the number of different polynomials of degree less than k and the number of different messages are both equal to $q^{k}$, and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon (1960) interprets the message x as the coefficients of the polynomial p, whereas subsequent constructions interpret the message as the values of the polynomial at the first k points $a_{1},\dots ,a_{k}$ and obtain the polynomial p by interpolating these values with a polynomial of degree less than k. The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to a systematic code, that is, the original message is always contained as a subsequence of the codeword. Simple encoding procedure: The message as a sequence of coefficients In the original construction of Reed & Solomon (1960), the message $x=(x_{1},\dots ,x_{k})\in F^{k}$ is mapped to the polynomial $p_{x}$ with $p_{x}(a)=\sum _{i=1}^{k}x_{i}a^{i-1}\,.$ The codeword of $x$ is obtained by evaluating $p_{x}$ at $n$ different points $a_{1},\dots ,a_{n}$ of the field $F$.
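As a small illustration of this evaluation map, the following Python sketch (hypothetical names; the field GF(929), the message (1, 2, 3) and the evaluation points 0, ..., 6 are the ones reused in the RS(7,3) examples later in this article) encodes a message in the original view:

q = 929                                 # a prime, so GF(q) is just arithmetic mod q

def rs_encode_original_view(msg, points):
    # msg = (x_1, ..., x_k) are the coefficients of p_x, lowest degree first
    def p(a):
        return sum(x * pow(a, i, q) for i, x in enumerate(msg)) % q
    return [p(a) for a in points]

# p_x(a) = 1 + 2a + 3a^2 evaluated at a = 0..6
print(rs_encode_original_view([1, 2, 3], range(7)))
# -> [1, 6, 17, 34, 57, 86, 121]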
Thus the classical encoding function $C:F^{k}\to F^{n}$ for the Reed–Solomon code is defined as follows: $C(x)={\bigl (}p_{x}(a_{1}),\dots ,p_{x}(a_{n}){\bigr )}\,.$ This function $C$ is a linear mapping, that is, it satisfies $C(x)=x^{T}A$ for the following $k\times n$-matrix $A$ with elements from $F$: $A={\begin{bmatrix}1&\dots &1&\dots &1\\a_{1}&\dots &a_{k}&\dots &a_{n}\\a_{1}^{2}&\dots &a_{k}^{2}&\dots &a_{n}^{2}\\\vdots &&\vdots &&\vdots \\a_{1}^{k-1}&\dots &a_{k}^{k-1}&\dots &a_{n}^{k-1}\end{bmatrix}}$ This matrix is the transpose of a Vandermonde matrix over $F$. In other words, the Reed–Solomon code is a linear code, and in the classical encoding procedure, its generator matrix is $A$. Systematic encoding procedure: The message as an initial sequence of values There is an alternative encoding procedure that also produces the Reed–Solomon code, but that does so in a systematic way. Here, the mapping from the message $x$ to the polynomial $p_{x}$ works differently: the polynomial $p_{x}$ is now defined as the unique polynomial of degree less than $k$ such that $p_{x}(a_{i})=x_{i}{\text{ for all }}i\in \{1,\dots ,k\}.$ To compute this polynomial $p_{x}$ from $x$, one can use Lagrange interpolation. Once it has been found, it is evaluated at the other points $a_{k+1},\dots ,a_{n}$ of the field. The alternative encoding function $C:F^{k}\to F^{n}$ for the Reed–Solomon code is then again just the sequence of values: $C(x)={\bigl (}p_{x}(a_{1}),\dots ,p_{x}(a_{n}){\bigr )}\,.$ Since the first $k$ entries of each codeword $C(x)$ coincide with $x$, this encoding procedure is indeed systematic. Since Lagrange interpolation is a linear transformation, $C$ is a linear mapping. In fact, we have $C(x)=xG$, where $G=(A{\text{'s left square submatrix}})^{-1}\cdot A={\begin{bmatrix}1&0&0&\dots &0&g_{1,k+1}&\dots &g_{1,n}\\0&1&0&\dots &0&g_{2,k+1}&\dots &g_{2,n}\\0&0&1&\dots &0&g_{3,k+1}&\dots &g_{3,n}\\\vdots &\vdots &\vdots &&\vdots &\vdots &&\vdots \\0&\dots &0&\dots &1&g_{k,k+1}&\dots &g_{k,n}\end{bmatrix}}$ Discrete Fourier transform and its inverse A discrete Fourier transform is essentially the same as the encoding procedure; it uses the generator polynomial p(x) to map a set of evaluation points into the message values as shown above: $C(x)={\bigl (}p_{x}(a_{1}),\dots ,p_{x}(a_{n}){\bigr )}\,.$ The inverse Fourier transform could be used to convert an error free set of n < q message values back into the encoding polynomial of k coefficients, with the constraint that in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of α: $a_{i}=\alpha ^{i-1}$ $a_{1},\dots ,a_{n}=\{1,\alpha ,\alpha ^{2},\dots ,\alpha ^{n-1}\}$ However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of the Gao decoder. The BCH view: The codeword as a sequence of coefficients In this view, the message is interpreted as the coefficients of a polynomial $p(x)$. The sender computes a related polynomial $s(x)$ of degree $n-1$ where $n\leq q-1$ and sends the polynomial $s(x)$. The polynomial $s(x)$ is constructed by multiplying the message polynomial $p(x)$, which has degree $k-1$, with a generator polynomial $g(x)$ of degree $n-k$ that is known to both the sender and the receiver. 
The generator polynomial $g(x)$ is defined as the polynomial whose roots are sequential powers of the Galois field primitive $\alpha $ $g(x)=\left(x-\alpha ^{i}\right)\left(x-\alpha ^{i+1}\right)\cdots \left(x-\alpha ^{i+n-k-1}\right)=g_{0}+g_{1}x+\cdots +g_{n-k-1}x^{n-k-1}+x^{n-k}$ For a "narrow sense code", $i=1$. $\mathbf {C} =\left\{\left(s_{1},s_{2},\dots ,s_{n}\right)\;{\Big |}\;s(a)=\sum _{i=1}^{n}s_{i}a^{i}{\text{ is a polynomial that has at least the roots }}\alpha ^{1},\alpha ^{2},\dots ,\alpha ^{n-k}\right\}.$ Systematic encoding procedure The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield a systematic encoding procedure, in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sending $s(x)=p(x)g(x)$, the encoder constructs the transmitted polynomial $s(x)$ such that the coefficients of the $k$ largest monomials are equal to the corresponding coefficients of $p(x)$, and the lower-order coefficients of $s(x)$ are chosen exactly in such a way that $s(x)$ becomes divisible by $g(x)$. Then the coefficients of $p(x)$ are a subsequence of the coefficients of $s(x)$. To get a code that is overall systematic, we construct the message polynomial $p(x)$ by interpreting the message as the sequence of its coefficients. Formally, the construction is done by multiplying $p(x)$ by $x^{t}$ to make room for the $t=n-k$ check symbols, dividing that product by $g(x)$ to find the remainder, and then compensating for that remainder by subtracting it. The $t$ check symbols are created by computing the remainder $s_{r}(x)$: $s_{r}(x)=p(x)\cdot x^{t}\ {\bmod {\ }}g(x).$ The remainder has degree at most $t-1$, whereas the coefficients of $x^{t-1},x^{t-2},\dots ,x^{1},x^{0}$ in the polynomial $p(x)\cdot x^{t}$ are zero. Therefore, the following definition of the codeword $s(x)$ has the property that the first $k$ coefficients are identical to the coefficients of $p(x)$: $s(x)=p(x)\cdot x^{t}-s_{r}(x)\,.$ As a result, the codewords $s(x)$ are indeed elements of $\mathbf {C} $, that is, they are divisible by the generator polynomial $g(x)$:[10] $s(x)\equiv p(x)\cdot x^{t}-s_{r}(x)\equiv s_{r}(x)-s_{r}(x)\equiv 0\mod g(x)\,.$ Properties The Reed–Solomon code is a [n, k, n − k + 1] code; in other words, it is a linear block code of length n (over F) with dimension k and minimum Hamming distance $ d_{\min }=n-k+1.$ The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size (n, k); this is known as the Singleton bound. Such a code is also called a maximum distance separable (MDS) code. The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, by $n-k$, the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to $(n-k)/2$ erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" in demodulator signal-to-noise ratios)—these are called erasures. A Reed–Solomon code (like any MDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation 2E + S ≤ n − k is satisfied, where $E$ is the number of errors and $S$ is the number of erasures in the block. 
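To make the systematic BCH-view encoding procedure described above concrete, the following Python sketch (plain integer arithmetic with hypothetical helper names) works over the prime field GF(929) with α = 3 and t = 4, the parameters of the worked example later in this article, and reproduces its generator polynomial and check symbols:

q, alpha, t = 929, 3, 4                 # GF(929), primitive element 3, n - k = 4

def poly_mul(a, b):                     # coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def poly_mod(a, g):                     # remainder of a(x) divided by monic g(x)
    a = list(a)
    for i in range(len(a) - 1, len(g) - 2, -1):
        c, lo = a[i], i - (len(g) - 1)
        if c:
            for j, gj in enumerate(g):
                a[lo + j] = (a[lo + j] - c * gj) % q
    return a[:len(g) - 1]

g = [1]                                 # g(x) = (x - a)(x - a^2)(x - a^3)(x - a^4)
for j in range(1, t + 1):
    g = poly_mul(g, [(-pow(alpha, j, q)) % q, 1])

p = [1, 2, 3]                           # p(x) = 3x^2 + 2x + 1
s_r = poly_mod([0] * t + p, g)          # s_r(x) = p(x) x^t mod g(x)
s = [(-c) % q for c in s_r] + p         # s(x) = p(x) x^t - s_r(x), systematic

print(g)     # [522, 568, 723, 809, 1], i.e. x^4 + 809x^3 + 723x^2 + 568x + 522
print(s_r)   # [455, 442, 738, 547],    i.e. 547x^3 + 738x^2 + 442x + 455
print(s)     # [474, 487, 191, 382, 1, 2, 3], the codeword, lowest degree first

The last k = 3 entries of the codeword are the message coefficients themselves, which is exactly the systematic property described above.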
The theoretical error bound can be described via the following formula for the AWGN channel for FSK:[11] $P_{b}\approx {\frac {2^{m-1}}{2^{m}-1}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }$ and for other modulation schemes: $P_{b}\approx {\frac {1}{m}}{\frac {1}{n}}\sum _{\ell =t+1}^{n}\ell {n \choose \ell }P_{s}^{\ell }(1-P_{s})^{n-\ell }$ where $ t={\frac {1}{2}}(d_{\min }-1)$, $P_{s}=1-(1-s)^{h}$, $h={\frac {m}{\log _{2}M}}$, $s$ is the symbol error rate in uncoded AWGN case and $M$ is the modulation order. For practical uses of Reed–Solomon codes, it is common to use a finite field $F$ with $2^{m}$ elements. In this case, each symbol can be represented as an $m$-bit value. The sender sends the data points as encoded blocks, and the number of symbols in the encoded block is $n=2^{m}-1$. Thus a Reed–Solomon code operating on 8-bit symbols has $n=2^{8}-1=255$ symbols per block. (This is a very popular value because of the prevalence of byte-oriented computer systems.) The number $k$, with $k<n$, of data symbols in the block is a design parameter. A commonly used code encodes $k=223$ eight-bit data symbols plus 32 eight-bit parity symbols in an $n=255$-symbol block; this is denoted as a $(n,k)=(255,223)$ code, and is capable of correcting up to 16 symbol errors per block. The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur in bursts. This is because it does not matter to the code how many bits in a symbol are in error — if multiple bits in a symbol are corrupted it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code. The Reed–Solomon code, like the convolutional code, is a transparent code. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened. The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding. Whether the Reed–Solomon code is cyclic or not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, if $\alpha $ is a primitive root of the field $F$, then by definition all non-zero elements of $F$ take the form $\alpha ^{i}$ for $i\in \{1,\dots ,q-1\}$, where $q=|F|$. Each polynomial $p$ over $F$ gives rise to a codeword $(p(\alpha ^{1}),\dots ,p(\alpha ^{q-1}))$. Since the function $a\mapsto p(\alpha a)$ is also a polynomial of the same degree, this function gives rise to a codeword $(p(\alpha ^{2}),\dots ,p(\alpha ^{q}))$; since $\alpha ^{q}=\alpha ^{1}$ holds, this codeword is the cyclic left-shift of the original codeword derived from $p$. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon code cyclic. 
Reed–Solomon codes in the BCH view are always cyclic because BCH codes are cyclic.

Remarks Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes. The Delsarte–Goethals–Seidel[12] theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known as puncturing allows omitting some of the encoded parity symbols.

BCH view decoders The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.

Peterson–Gorenstein–Zierler decoder Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961.[13] The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson (1961).[14]

Formulation The transmitted message, $(c_{0},\ldots ,c_{i},\ldots ,c_{n-1})$, is viewed as the coefficients of a polynomial s(x): $s(x)=\sum _{i=0}^{n-1}c_{i}x^{i}$ As a result of the Reed–Solomon encoding procedure, s(x) is divisible by the generator polynomial g(x): $g(x)=\prod _{j=1}^{n-k}(x-\alpha ^{j}),$ where α is a primitive element. Since s(x) is a multiple of the generator g(x), it follows that it "inherits" all of its roots: $s(x){\bmod {(}}x-\alpha ^{j})=g(x){\bmod {(}}x-\alpha ^{j})=0$ Therefore, $s(\alpha ^{j})=0,\ j=1,2,\ldots ,n-k$ The transmitted polynomial is corrupted in transit by an error polynomial e(x) to produce the received polynomial r(x): $r(x)=s(x)+e(x)$ $e(x)=\sum _{i=0}^{n-1}e_{i}x^{i}$ Coefficient ei will be zero if there is no error at that power of x and nonzero if there is an error. If there are ν errors at distinct powers ik of x, then $e(x)=\sum _{k=1}^{\nu }e_{i_{k}}x^{i_{k}}$ The goal of the decoder is to find the number of errors (ν), the positions of the errors (ik), and the error values at those positions (eik). From those, e(x) can be calculated and subtracted from r(x) to get the originally sent message s(x).

Syndrome decoding The decoder starts by evaluating the received polynomial at the points $\alpha ^{1}\dots \alpha ^{n-k}$. We call the results of that evaluation the "syndromes", Sj. They are defined as: ${\begin{aligned}S_{j}&=r(\alpha ^{j})=s(\alpha ^{j})+e(\alpha ^{j})=0+e(\alpha ^{j})\\&=e(\alpha ^{j})\\&=\sum _{k=1}^{\nu }e_{i_{k}}\left(\alpha ^{j}\right)^{i_{k}},\quad j=1,2,\ldots ,n-k\end{aligned}}$ Note that $s(\alpha ^{j})=0$ because $s(x)$ has roots at $\alpha ^{j}$, as shown in the previous section. The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error, and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
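Syndrome computation is just polynomial evaluation at the first n − k powers of α. The following Python sketch (hypothetical names; it reuses GF(929) with α = 3 and the received polynomial from the worked example below) computes the four syndromes:

q, alpha, t = 929, 3, 4
# received r(x) = 3x^6 + 2x^5 + 123x^4 + 456x^3 + 191x^2 + 487x + 474,
# coefficients lowest degree first
r = [474, 487, 191, 456, 123, 2, 3]

def syndromes(r, t):
    out = []
    for j in range(1, t + 1):
        a = pow(alpha, j, q)            # evaluate r at alpha^j
        out.append(sum(c * pow(a, i, q) for i, c in enumerate(r)) % q)
    return out

print(syndromes(r, t))                  # -> [732, 637, 762, 925]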
Error locators and error values For convenience, define the error locators Xk and error values Yk as: $X_{k}=\alpha ^{i_{k}},\ Y_{k}=e_{i_{k}}$ Then the syndromes can be written in terms of these error locators and error values as $S_{j}=\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}$ This definition of the syndrome values is equivalent to the previous since $(\alpha ^{j})^{i_{k}}=\alpha ^{j\cdot i_{k}}=(\alpha ^{i_{k}})^{j}=X_{k}^{j}$. The syndromes give a system of n − k ≥ 2ν equations in 2ν unknowns, but that system of equations is nonlinear in the Xk and does not have an obvious solution. However, if the Xk were known (see below), then the syndrome equations provide a linear system of equations that can easily be solved for the Yk error values. ${\begin{bmatrix}X_{1}^{1}&X_{2}^{1}&\cdots &X_{\nu }^{1}\\X_{1}^{2}&X_{2}^{2}&\cdots &X_{\nu }^{2}\\\vdots &\vdots &\ddots &\vdots \\X_{1}^{n-k}&X_{2}^{n-k}&\cdots &X_{\nu }^{n-k}\\\end{bmatrix}}{\begin{bmatrix}Y_{1}\\Y_{2}\\\vdots \\Y_{\nu }\end{bmatrix}}={\begin{bmatrix}S_{1}\\S_{2}\\\vdots \\S_{n-k}\end{bmatrix}}$ Consequently, the problem is finding the Xk, because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the Yk. In the variant of this algorithm where the locations of the errors are already known (when it is being used as an erasure code), this is the end. The error locations (Xk) are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to $n-k$ errors can be corrected. The rest of the algorithm serves to locate the errors, and will require syndrome values up to $2\nu $, instead of just the $\nu $ used thus far. This is why twice as many error-correcting symbols need to be added as the number of errors that can be corrected without knowing their locations.

Error locator polynomial There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies those error locations Xk. Define the error locator polynomial Λ(x) as $\Lambda (x)=\prod _{k=1}^{\nu }(1-xX_{k})=1+\Lambda _{1}x^{1}+\Lambda _{2}x^{2}+\cdots +\Lambda _{\nu }x^{\nu }$ The zeros of Λ(x) are the reciprocals $X_{k}^{-1}$. This follows from the above product notation construction, since if $x=X_{k}^{-1}$ then one of the multiplied terms will be zero, $(1-X_{k}^{-1}\cdot X_{k})=1-1=0$, making the whole polynomial evaluate to zero: $\Lambda (X_{k}^{-1})=0$ Let $j$ be any integer such that $1\leq j\leq \nu $. Multiply both sides by $Y_{k}X_{k}^{j+\nu }$ and it will still be zero. ${\begin{aligned}&Y_{k}X_{k}^{j+\nu }\Lambda (X_{k}^{-1})=0.\\[1ex]&Y_{k}X_{k}^{j+\nu }\left(1+\Lambda _{1}X_{k}^{-1}+\Lambda _{2}X_{k}^{-2}+\cdots +\Lambda _{\nu }X_{k}^{-\nu }\right)=0.\\[1ex]&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu }X_{k}^{-1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu }X_{k}^{-2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j+\nu }X_{k}^{-\nu }=0.\\[1ex]&Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j}=0.\end{aligned}}$ Sum for k = 1 to ν and it will still be zero. $\sum _{k=1}^{\nu }\left(Y_{k}X_{k}^{j+\nu }+\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}+\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}+\cdots +\Lambda _{\nu }Y_{k}X_{k}^{j}\right)=0$ Collect each term into its own sum.
$\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\left(\sum _{k=1}^{\nu }\Lambda _{1}Y_{k}X_{k}^{j+\nu -1}\right)+\left(\sum _{k=1}^{\nu }\Lambda _{2}Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\left(\sum _{k=1}^{\nu }\Lambda _{\nu }Y_{k}X_{k}^{j}\right)=0$ Extract the constant values of $\Lambda $ that are unaffected by the summation. $\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu }\right)+\Lambda _{1}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -1}\right)+\Lambda _{2}\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j+\nu -2}\right)+\cdots +\Lambda _{\nu }\left(\sum _{k=1}^{\nu }Y_{k}X_{k}^{j}\right)=0$ These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces to $S_{j+\nu }+\Lambda _{1}S_{j+\nu -1}+\cdots +\Lambda _{\nu -1}S_{j+1}+\Lambda _{\nu }S_{j}=0$ Subtracting $S_{j+\nu }$ from both sides yields $S_{j}\Lambda _{\nu }+S_{j+1}\Lambda _{\nu -1}+\cdots +S_{j+\nu -1}\Lambda _{1}=-S_{j+\nu }$ Recall that j was chosen to be any integer between 1 and ν inclusive, and this equivalence is true for any and all such values. Therefore, we have ν linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λi of the error location polynomial: ${\begin{bmatrix}S_{1}&S_{2}&\cdots &S_{\nu }\\S_{2}&S_{3}&\cdots &S_{\nu +1}\\\vdots &\vdots &\ddots &\vdots \\S_{\nu }&S_{\nu +1}&\cdots &S_{2\nu -1}\end{bmatrix}}{\begin{bmatrix}\Lambda _{\nu }\\\Lambda _{\nu -1}\\\vdots \\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-S_{\nu +1}\\-S_{\nu +2}\\\vdots \\-S_{\nu +\nu }\end{bmatrix}}$ The above assumes the decoder knows the number of errors ν, but that number has not been determined yet. The PGZ decoder does not determine ν directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial ν and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ν is reduced by one and the next smaller system is examined. (Gill n.d., p. 35)

Find the roots of the error locator polynomial Use the coefficients Λi found in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locators Xk are the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators $X_{k}$ (not their reciprocals $X_{k}^{-1}$). Chien search is an efficient implementation of this step.

Calculate the error values Once the error locators Xk are known, the error values can be determined. This can be done by direct solution for Yk in the error equations matrix given above, or using the Forney algorithm.

Calculate the error locations Calculate ik by taking the log base $\alpha $ of Xk. This is generally done using a precomputed lookup table.

Fix the errors Finally, e(x) is generated from ik and eik and then is subtracted from r(x) to get the originally sent message s(x), with errors corrected.

Example Consider the Reed–Solomon code defined in GF(929) with α = 3 and t = 4 (this is used in PDF417 barcodes) for an RS(7,3) code. The generator polynomial is $g(x)=(x-3)(x-3^{2})(x-3^{3})(x-3^{4})=x^{4}+809x^{3}+723x^{2}+568x+522$ If the message polynomial is p(x) = 3 x^2 + 2 x + 1, then a systematic codeword is encoded as follows.
$s_{r}(x)=p(x)\,x^{t}{\bmod {g}}(x)=547x^{3}+738x^{2}+442x+455$ $s(x)=p(x)\,x^{t}-s_{r}(x)=3x^{6}+2x^{5}+1x^{4}+382x^{3}+191x^{2}+487x+474$ Errors in transmission might cause this to be received instead. $r(x)=s(x)+e(x)=3x^{6}+2x^{5}+123x^{4}+456x^{3}+191x^{2}+487x+474$ The syndromes are calculated by evaluating r at powers of α. $S_{1}=r(3^{1})=3\cdot 3^{6}+2\cdot 3^{5}+123\cdot 3^{4}+456\cdot 3^{3}+191\cdot 3^{2}+487\cdot 3+474=732$ $S_{2}=r(3^{2})=637,\;S_{3}=r(3^{3})=762,\;S_{4}=r(3^{4})=925$ ${\begin{bmatrix}732&637\\637&762\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}-762\\-925\end{bmatrix}}={\begin{bmatrix}167\\004\end{bmatrix}}$ Using Gaussian elimination: ${\begin{bmatrix}001&000\\000&001\end{bmatrix}}{\begin{bmatrix}\Lambda _{2}\\\Lambda _{1}\end{bmatrix}}={\begin{bmatrix}329\\821\end{bmatrix}}$ Λ(x) = 329 x^2 + 821 x + 001, with roots x1 = 757 = 3^−3 and x2 = 562 = 3^−4. The coefficients can be reversed to produce roots with positive exponents, but typically this isn't used: R(x) = 001 x^2 + 821 x + 329, with roots 27 = 3^3 and 81 = 3^4, the logs of the roots corresponding to the error locations (right to left, location 0 is the last term in the codeword). To calculate the error values, apply the Forney algorithm:

Ω(x) = S(x) Λ(x) mod x^4 = 546 x + 732
Λ'(x) = 658 x + 821
e1 = −Ω(x1)/Λ'(x1) = 074
e2 = −Ω(x2)/Λ'(x2) = 122

Subtracting $e_{1}x^{3}+e_{2}x^{4}=74x^{3}+122x^{4}$ from the received polynomial r(x) reproduces the original codeword s.

Berlekamp–Massey decoder The Berlekamp–Massey algorithm is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errors e: $\Delta =S_{i}+\Lambda _{1}\ S_{i-1}+\cdots +\Lambda _{e}\ S_{i-e}$ and then adjusts Λ(x) and e so that a recalculated Δ would be zero. The article Berlekamp–Massey algorithm has a detailed description of the procedure. In the following example, C(x) is used to represent Λ(x).

Example Using the same data as the Peterson–Gorenstein–Zierler example above:

n   Sn+1   d     C                       B           b     m
0   732    732   197 x + 1               1           732   1
1   637    846   173 x + 1               1           732   2
2   762    412   634 x^2 + 173 x + 1     173 x + 1   412   1
3   925    576   329 x^2 + 821 x + 1     173 x + 1   412   2

The final value of C is the error locator polynomial, Λ(x).

Euclidean decoder Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of the extended Euclidean algorithm. Define S(x), Λ(x), and Ω(x) for t syndromes and e errors: ${\begin{aligned}S(x)&=S_{t}x^{t-1}+S_{t-1}x^{t-2}+\cdots +S_{2}x+S_{1}\\[1ex]\Lambda (x)&=\Lambda _{e}x^{e}+\Lambda _{e-1}x^{e-1}+\cdots +\Lambda _{1}x+1\\[1ex]\Omega (x)&=\Omega _{e}x^{e}+\Omega _{e-1}x^{e-1}+\cdots +\Omega _{1}x+\Omega _{0}\end{aligned}}$ The key equation is: $\Lambda (x)S(x)=Q(x)x^{t}+\Omega (x)$ For t = 6 and e = 3: ${\begin{bmatrix}\Lambda _{3}S_{6}&x^{8}\\\Lambda _{2}S_{6}+\Lambda _{3}S_{5}&x^{7}\\\Lambda _{1}S_{6}+\Lambda _{2}S_{5}+\Lambda _{3}S_{4}&x^{6}\\S_{6}+\Lambda _{1}S_{5}+\Lambda _{2}S_{4}+\Lambda _{3}S_{3}&x^{5}\\S_{5}+\Lambda _{1}S_{4}+\Lambda _{2}S_{3}+\Lambda _{3}S_{2}&x^{4}\\S_{4}+\Lambda _{1}S_{3}+\Lambda _{2}S_{2}+\Lambda _{3}S_{1}&x^{3}\\S_{3}+\Lambda _{1}S_{2}+\Lambda _{2}S_{1}&x^{2}\\S_{2}+\Lambda _{1}S_{1}&x\\S_{1}\end{bmatrix}}={\begin{bmatrix}Q_{2}x^{8}\\Q_{1}x^{7}\\Q_{0}x^{6}\\0\\0\\0\\\Omega _{2}x^{2}\\\Omega _{1}x\\\Omega _{0}\end{bmatrix}}$ The middle terms are zero due to the relationship between Λ and syndromes.
The extended Euclidean algorithm can find a series of polynomials of the form Ai(x) S(x) + Bi(x) x^t = Ri(x), where the degree of R decreases as i increases. Once the degree of Ri(x) < t/2, then Ai(x) = Λ(x), Bi(x) = −Q(x), and Ri(x) = Ω(x). B(x) and Q(x) don't need to be saved, so the algorithm becomes:

R−1 := x^t
R0 := S(x)
A−1 := 0
A0 := 1
i := 0
while degree of Ri ≥ t/2
    i := i + 1
    Q := Ri−2 / Ri−1
    Ri := Ri−2 − Q Ri−1
    Ai := Ai−2 − Q Ai−1

To set the low-order term of Λ(x) to 1, divide Λ(x) and Ω(x) by Ai(0):

Λ(x) = Ai / Ai(0)
Ω(x) = Ri / Ai(0)

Ai(0) is the constant (low-order) term of Ai.

Example Using the same data as the Peterson–Gorenstein–Zierler example above:

i    Ri                                          Ai
−1   001 x^4 + 000 x^3 + 000 x^2 + 000 x + 000   000
0    925 x^3 + 762 x^2 + 637 x + 732             001
1    683 x^2 + 676 x + 024                       697 x + 396
2    673 x + 596                                 608 x^2 + 704 x + 544

Λ(x) = A2 / 544 = 329 x^2 + 821 x + 001
Ω(x) = R2 / 544 = 546 x + 732

Decoder using discrete Fourier transform A discrete Fourier transform can be used for decoding.[15] To avoid conflict with syndrome names, let c(x) = s(x), the encoded codeword. r(x) and e(x) are the same as above. Define C(x), E(x), and R(x) as the discrete Fourier transforms of c(x), e(x), and r(x). Since r(x) = c(x) + e(x), and since a discrete Fourier transform is a linear operator, R(x) = C(x) + E(x). Transform r(x) to R(x) using the discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, t coefficients of R(x) and E(x) are the same as the syndromes: $R_{j}=E_{j}=S_{j}=r(\alpha ^{j})\qquad {\text{for }}1\leq j\leq t$ Use $R_{1}$ through $R_{t}$ as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders. Let v = number of errors. Generate E(x) using the known coefficients $E_{1}$ to $E_{t}$, the error locator polynomial, and these formulas ${\begin{aligned}E_{0}&=-{\frac {1}{\Lambda _{v}}}(E_{v}+\Lambda _{1}E_{v-1}+\cdots +\Lambda _{v-1}E_{1})\\E_{j}&=-(\Lambda _{1}E_{j-1}+\Lambda _{2}E_{j-2}+\cdots +\Lambda _{v}E_{j-v})&{\text{for }}t<j<n\end{aligned}}$ Then calculate C(x) = R(x) − E(x) and take the inverse transform (polynomial interpolation) of C(x) to produce c(x).

Decoding beyond the error-correction bound The Singleton bound states that the minimum distance d of a linear block code of size (n,k) is upper-bounded by n − k + 1. The distance d was usually understood to limit the error-correction capability to ⌊(d−1) / 2⌋. The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊(n−k) / 2⌋ errors. However, this error-correction bound is not exact. In 1999, Madhu Sudan and Venkatesan Guruswami at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes", introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code.[16] It applies to Reed–Solomon codes and more generally to algebraic geometric codes. This algorithm produces a list of codewords (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over $GF(2^{m})$ and its extensions.

Soft-decoding The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. A soft-decision decoder, by contrast, could associate with each symbol an additional value corresponding to the channel demodulator's confidence in the correctness of the symbol.
The advent of LDPC and turbo codes, which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami.[17] In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.[18]

Encoder Here we present a simple MATLAB implementation for an encoder.

function encoded = rsEncoder(msg, m, prim_poly, n, k)
    % RSENCODER Encode message with the Reed-Solomon algorithm
    % m is the number of bits per symbol
    % prim_poly: Primitive polynomial p(x). Ie for DM is 301
    % k is the size of the message
    % n is the total size (k + redundant)
    % Example: msg = uint8('Test')
    %          enc_msg = rsEncoder(msg, 8, 301, 12, numel(msg));

    % Get the alpha
    alpha = gf(2, m, prim_poly);

    % Get the Reed-Solomon generating polynomial g(x)
    g_x = genpoly(k, n, alpha);

    % Multiply the information by X^(n-k), or just pad with zeros at the end
    % to get space to add the redundant information
    msg_padded = gf([msg zeros(1, n - k)], m, prim_poly);

    % Get the remainder of the division of the extended message by the
    % Reed-Solomon generating polynomial g(x)
    [~, remainder] = deconv(msg_padded, g_x);

    % Now return the message with the redundant information
    encoded = msg_padded - remainder;
end

% Find the Reed-Solomon generating polynomial g(x); by the way, this is the
% same as the rsgenpoly function in MATLAB
function g = genpoly(k, n, alpha)
    g = 1;
    % A multiplication on the Galois field is just a convolution
    for k = mod(1 : n - k, n)
        g = conv(g, [1 alpha .^ (k)]);
    end
end

Decoder Now the decoding part:

function [decoded, error_pos, error_mag, g, S] = rsDecoder(encoded, m, prim_poly, n, k)
    % RSDECODER Decode a Reed-Solomon encoded message
    % Example:
    % [dec, ~, ~, ~, ~] = rsDecoder(enc_msg, 8, 301, 12, numel(msg))
    max_errors = floor((n - k) / 2);
    orig_vals = encoded.x;
    % Initialize the error vector
    errors = zeros(1, n);
    g = [];
    S = [];

    % Get the alpha
    alpha = gf(2, m, prim_poly);

    % Find the syndromes (check that dividing the message by the generator
    % polynomial leaves a remainder of zero)
    Synd = polyval(encoded, alpha .^ (1 : n - k));
    Syndromes = trim(Synd);

    % If all syndromes are zeros (perfectly divisible) there are no errors
    if isempty(Syndromes.x)
        decoded = orig_vals(1:k);
        error_pos = [];
        error_mag = [];
        g = [];
        S = Synd;
        return;
    end

    % Prepare for the Euclidean algorithm (used to find the error locating
    % polynomials)
    r0 = [1, zeros(1, 2 * max_errors)];
    r0 = gf(r0, m, prim_poly);
    r0 = trim(r0);
    size_r0 = length(r0);
    r1 = Syndromes;
    f0 = gf([zeros(1, size_r0 - 1) 1], m, prim_poly);
    f1 = gf(zeros(1, size_r0), m, prim_poly);
    g0 = f1;
    g1 = f0;

    % Do the Euclidean algorithm on the polynomials r0(x) and Syndromes(x) in
    % order to find the error locating polynomial
    while true
        % Do a long division
        [quotient, remainder] = deconv(r0, r1);
        % Add some zeros
        quotient = pad(quotient, length(g1));
        % Find quotient*g1 and pad
        c = conv(quotient, g1);
        c = trim(c);
        c = pad(c, length(g0));
        % Update g as g0 - quotient*g1
        g = g0 - c;
        % Check if the degree of remainder(x) is less than max_errors
        if all(remainder(1 : end - max_errors) == 0)
            break;
        end
        % Update r0, r1, g0, g1 and remove leading zeros
        r0 = trim(r1);
        r1 = trim(remainder);
        g0 = g1;
        g1 = g;
    end

    % Remove leading zeros
    g = trim(g);

    % Find the zeros of the error polynomial on this Galois field
    evalPoly = polyval(g, alpha .^ (n - 1 : -1 : 0));
    error_pos = gf(find(evalPoly == 0), m);

    % If no error position is found, we return the received word, because
    % there is basically nothing else that we can do
    if isempty(error_pos)
        decoded = orig_vals(1:k);
        error_mag = [];
        return;
    end

    % Prepare a linear system to solve the error polynomial and find the
    % error magnitudes
    size_error = length(error_pos);
    Syndrome_Vals = Syndromes.x;
    b(:, 1) = Syndrome_Vals(1:size_error);
    for idx = 1 : size_error
        e = alpha .^ (idx * (n - error_pos.x));
        err = e.x;
        er(idx, :) = err;
    end

    % Solve the linear system
    error_mag = (gf(er, m, prim_poly) \ gf(b, m, prim_poly))';
    % Put the error magnitude on the error vector
    errors(error_pos.x) = error_mag.x;
    % Bring this vector to the Galois field
    errors_gf = gf(errors, m, prim_poly);

    % Now to fix the errors just add with the encoded code
    decoded_gf = encoded(1:k) + errors_gf(1:k);
    decoded = decoded_gf.x;
end

% Remove leading zeros from Galois array
function gt = trim(g)
    gx = g.x;
    gt = gf(gx(find(gx, 1) : end), g.m, g.prim_poly);
end

% Add leading zeros
function xpad = pad(x, k)
    len = length(x);
    if len < k
        xpad = [zeros(1, k - len) x];
    else
        % already long enough; return unchanged (guards against an
        % undefined return value in the original listing)
        xpad = x;
    end
end

Reed–Solomon original view decoders The decoders described in this section use the Reed–Solomon original view of a codeword as a sequence of polynomial values, where the polynomial is based on the message to be encoded. The same set of fixed values is used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error locating polynomial) from the received message.

Theoretical decoder Reed & Solomon (1960) described a theoretical decoder that corrected errors by finding the most popular message polynomial. The decoder only knows the set of values $a_{1}$ to $a_{n}$ and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of n codeword values, taken k at a time, to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, then any errors in the codeword can be corrected by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the binomial coefficient $ {\binom {n}{k}}={n! \over (n-k)!k!}$, which is infeasible for even modest codes. For a $(255,249)$ code that can correct 3 errors, the naïve theoretical decoder would examine 359 billion subsets.

Berlekamp–Welch decoder In 1986, a decoder known as the Berlekamp–Welch algorithm was developed; it is able to recover the original message polynomial as well as an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity $O(n^{3})$, where $n$ is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message.

Example Using RS(7,3), GF(929), and the set of evaluation points ai = i − 1:

a = {0, 1, 2, 3, 4, 5, 6}

If the message polynomial is p(x) = 003 x^2 + 002 x + 001, the codeword is

c = {001, 006, 017, 034, 057, 086, 121}

Errors in transmission might cause this to be received instead.
b = c + e = {001, 006, 123, 456, 057, 086, 121}

The key equations are:

$b_{i}E(a_{i})-Q(a_{i})=0$

Assume the maximum number of errors: e = 2. The key equations become:

$b_{i}(e_{0}+e_{1}a_{i})-(q_{0}+q_{1}a_{i}+q_{2}a_{i}^{2}+q_{3}a_{i}^{3}+q_{4}a_{i}^{4})=-b_{i}a_{i}^{2}$

${\begin{bmatrix}001&000&928&000&000&000&000\\006&006&928&928&928&928&928\\123&246&928&927&925&921&913\\456&439&928&926&920&902&848\\057&228&928&925&913&865&673\\086&430&928&924&904&804&304\\121&726&928&923&893&713&562\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}000\\923\\437\\541\\017\\637\\289\end{bmatrix}}$

Using Gaussian elimination:

${\begin{bmatrix}001&000&000&000&000&000&000\\000&001&000&000&000&000&000\\000&000&001&000&000&000&000\\000&000&000&001&000&000&000\\000&000&000&000&001&000&000\\000&000&000&000&000&001&000\\000&000&000&000&000&000&001\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\end{bmatrix}}={\begin{bmatrix}006\\924\\006\\007\\009\\916\\003\end{bmatrix}}$

Q(x) = 003 x^4 + 916 x^3 + 009 x^2 + 007 x + 006
E(x) = 001 x^2 + 924 x + 006
Q(x) / E(x) = P(x) = 003 x^2 + 002 x + 001

Recalculate P(x) where E(x) = 0 : {2, 3} to correct b, resulting in the corrected codeword:

c = {001, 006, 017, 034, 057, 086, 121}

Gao decoder

In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.[6]

Example

Using the same data as the Berlekamp Welch example above:

• $R_{-1}=\prod _{i=1}^{n}(x-a_{i})$
• $R_{0}=$ Lagrange interpolation of $\{a_{i},b(a_{i})\}$ for i = 1 to n
• $A_{-1}=0$
• $A_{0}=1$

i = −1: R_{−1} = 001 x^7 + 908 x^6 + 175 x^5 + 194 x^4 + 695 x^3 + 094 x^2 + 720 x + 000; A_{−1} = 000
i = 0: R_0 = 055 x^6 + 440 x^5 + 497 x^4 + 904 x^3 + 424 x^2 + 472 x + 001; A_0 = 001
i = 1: R_1 = 702 x^5 + 845 x^4 + 691 x^3 + 461 x^2 + 327 x + 237; A_1 = 152 x + 237
i = 2: R_2 = 266 x^4 + 086 x^3 + 798 x^2 + 311 x + 532; A_2 = 708 x^2 + 176 x + 532

Q(x) = R_2 = 266 x^4 + 086 x^3 + 798 x^2 + 311 x + 532
E(x) = A_2 = 708 x^2 + 176 x + 532

(Optional) Divide Q(x) and E(x) by the most significant coefficient of E(x), which is 708:

Q(x) = 003 x^4 + 916 x^3 + 009 x^2 + 007 x + 006
E(x) = 001 x^2 + 924 x + 006
Q(x) / E(x) = P(x) = 003 x^2 + 002 x + 001

Recalculate P(x) where E(x) = 0 : {2, 3} to correct b, resulting in the corrected codeword:

c = {001, 006, 017, 034, 057, 086, 121}

See also

• BCH code
• Cyclic code
• Chien search
• Berlekamp–Massey algorithm
• Forward error correction
• Berlekamp–Welch algorithm
• Folded Reed–Solomon code

Notes

1. Andrews et al. (2007) provide simulation results which show that, for the same code rate (1/6), turbo codes outperform Reed–Solomon concatenated codes by up to 2 dB (bit error rate).[9]

References

1. Reed & Solomon (1960)
2. D. Gorenstein and N. Zierler, "A class of cyclic linear error-correcting codes in p^m symbols", J. SIAM, vol. 9, pp. 207–214, June 1961
3. W. Wesley Peterson, Error Correcting Codes, 1961
4. W. Wesley Peterson, Error Correcting Codes, second edition, 1972
5. Yasuo Sugiyama, Masao Kasahara, Shigeichi Hirasawa, and Toshihiko Namekawa. A method for solving key equation for decoding Goppa codes. Information and Control, 27:87–99, 1975.
6. Gao, Shuhong (January 2002), New Algorithm For Decoding Reed-Solomon Codes (PDF), Clemson
7. Immink, K. A. S. (1994), "Reed–Solomon Codes and the Compact Disc", in Wicker, Stephen B.; Bhargava, Vijay K. (eds.), Reed–Solomon Codes and Their Applications, IEEE Press, ISBN 978-0-7803-1025-4
8. J. Hagenauer, E. Offer, and L. Papke, Reed Solomon Codes and Their Applications. New York: IEEE Press, 1994, p. 433
9.
Andrews, Kenneth S., et al. "The development of turbo and LDPC codes for deep-space applications." Proceedings of the IEEE 95.11 (2007): 2142-2156. 10. See Lin & Costello (1983, p. 171), for example. 11. "Analytical Expressions Used in bercoding and BERTool". Archived from the original on 2019-02-01. Retrieved 2019-02-01. 12. Pfender, Florian; Ziegler, Günter M. (September 2004), "Kissing Numbers, Sphere Packings, and Some Unexpected Proofs" (PDF), Notices of the American Mathematical Society, 51 (8): 873–883, archived (PDF) from the original on 2008-05-09, retrieved 2009-09-28. Explains the Delsarte-Goethals-Seidel theorem as used in the context of the error correcting code for compact disc. 13. D. Gorenstein and N. Zierler, "A class of cyclic linear error-correcting codes in p^m symbols," J. SIAM, vol. 9, pp. 207–214, June 1961 14. Error Correcting Codes by W Wesley Peterson, 1961 15. Shu Lin and Daniel J. Costello Jr, "Error Control Coding" second edition, pp. 255–262, 1982, 2004 16. Guruswami, V.; Sudan, M. (September 1999), "Improved decoding of Reed–Solomon codes and algebraic geometry codes", IEEE Transactions on Information Theory, 45 (6): 1757–1767, CiteSeerX 10.1.1.115.292, doi:10.1109/18.782097 17. Koetter, Ralf; Vardy, Alexander (2003). "Algebraic soft-decision decoding of Reed–Solomon codes". IEEE Transactions on Information Theory. 49 (11): 2809–2825. CiteSeerX 10.1.1.13.2021. doi:10.1109/TIT.2003.819332. 18. Franke, Steven J.; Taylor, Joseph H. (2016). "Open Source Soft-Decision Decoder for the JT65 (63,12) Reed–Solomon Code" (PDF). QEX (May/June): 8–17. Archived (PDF) from the original on 2017-03-09. Retrieved 2017-06-07. Further reading • Gill, John (n.d.), EE387 Notes #7, Handout #28 (PDF), Stanford University, archived from the original (PDF) on June 30, 2014, retrieved April 21, 2010 • Hong, Jonathan; Vetterli, Martin (August 1995), "Simple Algorithms for BCH Decoding" (PDF), IEEE Transactions on Communications, 43 (8): 2324–2333, doi:10.1109/26.403765 • Lin, Shu; Costello, Jr., Daniel J. (1983), Error Control Coding: Fundamentals and Applications, New Jersey, NJ: Prentice-Hall, ISBN 978-0-13-283796-5 • Massey, J. L. (1969), "Shift-register synthesis and BCH decoding" (PDF), IEEE Transactions on Information Theory, IT-15 (1): 122–127, doi:10.1109/tit.1969.1054260 • Peterson, Wesley W. (1960), "Encoding and Error Correction Procedures for the Bose-Chaudhuri Codes", IRE Transactions on Information Theory, IT-6 (4): 459–470, doi:10.1109/TIT.1960.1057586 • Reed, Irving S.; Solomon, Gustave (1960), "Polynomial Codes over Certain Finite Fields", Journal of the Society for Industrial and Applied Mathematics, 8 (2): 300–304, doi:10.1137/0108018 • Welch, L. R. (1997), The Original View of Reed–Solomon Codes (PDF), Lecture Notes • Berlekamp, Elwyn R. (1967), Nonbinary BCH decoding, International Symposium on Information Theory, San Remo, Italy{{citation}}: CS1 maint: location missing publisher (link) • Berlekamp, Elwyn R. (1984) [1968], Algebraic Coding Theory (Revised ed.), Laguna Hills, CA: Aegean Park Press, ISBN 978-0-89412-063-3 • Cipra, Barry Arthur (1993), "The Ubiquitous Reed–Solomon Codes", SIAM News, 26 (1) • Forney, Jr., G. (October 1965), "On Decoding BCH Codes", IEEE Transactions on Information Theory, 11 (4): 549–557, doi:10.1109/TIT.1965.1053825 • Koetter, Ralf (2005), Reed–Solomon Codes, MIT Lecture Notes 6.451 (Video), archived from the original on 2013-03-13 • MacWilliams, F. J.; Sloane, N. J. A. 
(1977), The Theory of Error-Correcting Codes, New York, NY: North-Holland Publishing Company • Reed, Irving S.; Chen, Xuemin (1999), Error-Control Coding for Data Networks, Boston, MA: Kluwer Academic Publishers

External links

Information and tutorials
• Introduction to Reed–Solomon codes: principles, architecture and implementation (CMU)
• A Tutorial on Reed–Solomon Coding for Fault-Tolerance in RAID-like Systems
• Algebraic soft-decoding of Reed–Solomon codes
• Wikiversity:Reed–Solomon codes for coders
• BBC R&D White Paper WHP031
• Geisel, William A. (August 1990), Tutorial on Reed–Solomon Error Correction Coding, Technical Memorandum, NASA, TM-102162
• Concatenated codes by Dr. Dave Forney (scholarpedia.org)
• Reid, Jeff A. (April 1995), CRC and Reed Solomon ECC (PDF)

Implementations
• FEC library in C by Phil Karn (aka KA9Q) includes Reed–Solomon codec, both arbitrary and optimized (223,255) version
• Schifra Open Source C++ Reed–Solomon Codec
• Henry Minsky's RSCode library, Reed–Solomon encoder/decoder
• Open Source C++ Reed–Solomon Soft Decoding library
• MATLAB implementation of errors-and-erasures Reed–Solomon decoding
• Octave implementation in communications package
• Pure-Python implementation of a Reed–Solomon codec
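As a closing usage note for the MATLAB routines given earlier: the following round-trip is a sketch, not part of the original presentation; it assumes the Communications Toolbox gf type that rsEncoder and rsDecoder rely on, and the parameters simply mirror the comments in that code.

% Hypothetical round-trip using rsEncoder/rsDecoder from above:
% GF(2^8) with primitive polynomial 301, shortened (12, 4) code,
% which corrects up to floor((12 - 4)/2) = 4 symbol errors.
msg = double('Test');                 % 4 information symbols (gf needs doubles)
n = 12; k = numel(msg);
enc = rsEncoder(msg, 8, 301, n, k);   % 12 code symbols
enc(3) = enc(3) + gf(7, 8, 301);      % corrupt one symbol
dec = rsDecoder(enc, 8, 301, n, k);   % decode the corrupted word
disp(isequal(dec, msg))               % expected: 1 (message recovered)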
Rees algebra In commutative algebra, the Rees algebra of an ideal I in a commutative ring R is defined to be $R[It]=\bigoplus _{n=0}^{\infty }I^{n}t^{n}\subseteq R[t].$ The extended Rees algebra of I (which some authors[1] refer to as the Rees algebra of I) is defined as $R[It,t^{-1}]=\bigoplus _{n=-\infty }^{\infty }I^{n}t^{n}\subseteq R[t,t^{-1}].$ This construction has special interest in algebraic geometry since the projective scheme defined by the Rees algebra of an ideal in a ring is the blowing-up of the spectrum of the ring along the subscheme defined by the ideal.[2] Properties • Assume R is Noetherian; then R[It] is also Noetherian. The Krull dimension of the Rees algebra is $\dim R[It]=\dim R+1$ if I is not contained in any prime ideal P with $\dim(R/P)=\dim R$; otherwise $\dim R[It]=\dim R$. The Krull dimension of the extended Rees algebra is $\dim R[It,t^{-1}]=\dim R+1$.[3] • If $J\subseteq I$ are ideals in a Noetherian ring R, then the ring extension $R[Jt]\subseteq R[It]$ is integral if and only if J is a reduction of I.[3] • If I is an ideal in a Noetherian ring R, then the Rees algebra of I is the quotient of the symmetric algebra of I by its torsion submodule. Relationship with other blow-up algebras The associated graded ring of I may be defined as $\operatorname {gr} _{I}(R)=R[It]/IR[It].$ If R is a Noetherian local ring with maximal ideal ${\mathfrak {m}}$, then the special fiber ring of I is given by ${\mathcal {F}}_{I}(R)=R[It]/{\mathfrak {m}}R[It].$ The Krull dimension of the special fiber ring is called the analytic spread of I. References 1. Eisenbud, David (1995). Commutative Algebra with a View Toward Algebraic Geometry. Springer-Verlag. ISBN 978-3-540-78122-6. 2. Eisenbud-Harris, The geometry of schemes. Springer-Verlag, 197, 2000 3. Swanson, Irena; Huneke, Craig (2006). Integral Closure of Ideals, Rings, and Modules. Cambridge University Press. ISBN 9780521688604. External links • What Is the Rees Algebra of a Module? • Geometry behind Rees algebra (deformation to the normal cone)
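As a small worked example (standard material, though not spelled out in the article above): for the polynomial ring $R=k[x,y]$ and the ideal $I=(x,y)$, the Rees algebra has the presentation

$R[It]=R[xt,yt]\cong k[x,y][u,v]/(xv-yu),\qquad u\mapsto xt,\quad v\mapsto yt,$

and Proj of this graded ring is the blow-up of the affine plane at the origin, illustrating the connection with blowing up mentioned above.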
Rees decomposition In commutative algebra, a Rees decomposition is a way of writing a ring in terms of polynomial subrings. They were introduced by David Rees (1956).

Definition

Suppose that a ring R is a quotient of a polynomial ring k[x1,...] over a field by some homogeneous ideal. A Rees decomposition of R is a representation of R as a direct sum (of vector spaces)

$R=\bigoplus _{\alpha }\eta _{\alpha }k[\theta _{1},\ldots ,\theta _{f_{\alpha }}]$

where each $\eta _{\alpha }$ is a homogeneous element, the d elements $\theta _{i}$ are a homogeneous system of parameters for R, and $\eta _{\alpha }k[\theta _{f_{\alpha }+1},\ldots ,\theta _{d}]\subseteq k[\theta _{1},\ldots ,\theta _{f_{\alpha }}]$.

See also

• Stanley decomposition
• Hironaka decomposition

References

• Rees, D. (1956), "A basis theorem for polynomial modules", Proc. Cambridge Philos. Soc., 52: 12–16, MR 0074372
• Sturmfels, Bernd; White, Neil (1991), "Computing combinatorial decompositions of rings", Combinatorica, 11 (3): 275–293, doi:10.1007/BF01205079, MR 1122013
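To fix the notation with a degenerate example (an illustration added here, not from Rees's paper): the polynomial ring $R=k[x_{1},\ldots ,x_{d}]$ itself has the one-term Rees decomposition $R=1\cdot k[\theta _{1},\ldots ,\theta _{d}]$ with $\theta _{i}=x_{i}$, a single $\eta =1$ and $f=d$; the containment condition is vacuous because $k[\theta _{d+1},\ldots ,\theta _{d}]=k$. Nontrivial rings require several summands $\eta _{\alpha }$ with varying $f_{\alpha }$.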
Reeve tetrahedra In geometry, the Reeve tetrahedra are a family of polyhedra in three-dimensional space with vertices at (0, 0, 0), (1, 0, 0), (0, 1, 0) and (1, 1, r) where r is a positive integer. They are named after John Reeve, who in 1957 used them to show that higher-dimensional generalizations of Pick's theorem do not exist.[1]

Counterexample to generalizations of Pick's theorem

All vertices of a Reeve tetrahedron are lattice points (points whose coordinates are all integers). No other lattice points lie on the surface or in the interior of the tetrahedron. The volume of the Reeve tetrahedron with vertex (1, 1, r) is r/6. In 1957 Reeve used this tetrahedron to show that there exist tetrahedra with four lattice points as vertices, and containing no other lattice points, but with arbitrarily large volume.[2]

In two dimensions, the area of every polygon with lattice vertices is determined by a formula involving the number of lattice points on its boundary and in its interior, according to Pick's theorem. The Reeve tetrahedra imply that there can be no corresponding formula for the volume in three or more dimensions. Any such formula would be unable to distinguish the Reeve tetrahedra with different choices of r from each other, but their volumes are all different.[2] Despite this negative result, it is possible (as Reeve showed) to devise a more complicated formula for lattice polyhedron volume that combines the number of lattice points in the polyhedron, the number of points of a finer lattice in the polyhedron, and the Euler characteristic of the polyhedron.[2][3]

Ehrhart polynomial

The Ehrhart polynomial of any lattice polyhedron counts the number of lattice points that it contains when scaled up by an integer factor. The Ehrhart polynomial of the Reeve tetrahedron Tr of height r is[4]

$L({\mathcal {T}}_{r},t)={\frac {r}{6}}t^{3}+t^{2}+\left(2-{\frac {r}{6}}\right)t+1.$

Thus, for r ≥ 13, the coefficient of t in the Ehrhart polynomial of Tr is negative. This example shows that Ehrhart polynomials can sometimes have negative coefficients.[4]

References

1. Kiradjiev, Kristian (December 2018). "Connecting the Dots with Pick's Theorem" (PDF). Mathematics Today. Institute of Mathematics and its Applications. Retrieved January 6, 2023.
2. Reeve, J. E. (1957). "On the volume of lattice polyhedra". Proceedings of the London Mathematical Society. Third Series. 7: 378–395. doi:10.1112/plms/s3-7.1.378. MR 0095452.
3. Kołodziejczyk, Krzysztof (1996). "An "odd" formula for the volume of three-dimensional lattice polyhedra". Geometriae Dedicata. 61 (3): 271–278. doi:10.1007/BF00150027. MR 1397808. S2CID 121162659.
4. Beck, Matthias; Robins, Sinai (2015). Computing the Continuous Discretely: Integer-Point Enumeration in Polyhedra. Undergraduate Texts in Mathematics (Second ed.). New York: Springer. pp. 78–79, 82. doi:10.1007/978-1-4939-2969-6. ISBN 978-1-4939-2968-9. MR 3410115.
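Returning to the Ehrhart polynomial above, a quick consistency check (an observation added here, not taken from the cited sources): evaluating at t = 1 gives

$L({\mathcal {T}}_{r},1)={\frac {r}{6}}+1+\left(2-{\frac {r}{6}}\right)+1=4,$

matching the fact that the unscaled Reeve tetrahedron contains exactly its four vertices and no other lattice points, independently of r.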
Refinable function In mathematics, in the area of wavelet analysis, a refinable function is a function which fulfils some kind of self-similarity. A function $\varphi $ is called refinable with respect to the mask $h$ if

$\varphi (x)=2\cdot \sum _{k=0}^{N-1}h_{k}\cdot \varphi (2\cdot x-k)$

This condition is called the refinement equation, dilation equation or two-scale equation.

Using the convolution (denoted by a star, *) of a function with a discrete mask and the dilation operator $D$ one can write more concisely:

$\varphi =2\cdot D_{1/2}(h*\varphi )$

This means that one obtains the function again by convolving it with a discrete mask and then scaling it back. There is a similarity to iterated function systems and de Rham curves.

The operator $\varphi \mapsto 2\cdot D_{1/2}(h*\varphi )$ is linear. A refinable function is an eigenfunction of that operator. Its normalization is not uniquely determined: if $\varphi $ is a refinable function, then for every $c$ the function $c\cdot \varphi $ is refinable, too.

These functions play a fundamental role in wavelet theory as scaling functions.

Properties

Values at integral points

A refinable function is defined only implicitly. It may also be that there are several functions which are refinable with respect to the same mask. If $\varphi $ is required to have finite support, and its values at integer arguments are wanted, then the two-scale equation becomes a system of simultaneous linear equations.

Let $a$ be the minimum index and $b$ the maximum index of non-zero elements of $h$; then one obtains

${\begin{pmatrix}\varphi (a)\\\varphi (a+1)\\\vdots \\\varphi (b)\end{pmatrix}}={\begin{pmatrix}h_{a}&&&&&\\h_{a+2}&h_{a+1}&h_{a}&&&\\h_{a+4}&h_{a+3}&h_{a+2}&h_{a+1}&h_{a}&\\\ddots &\ddots &\ddots &\ddots &\ddots &\ddots \\&h_{b}&h_{b-1}&h_{b-2}&h_{b-3}&h_{b-4}\\&&&h_{b}&h_{b-1}&h_{b-2}\\&&&&&h_{b}\end{pmatrix}}{\begin{pmatrix}\varphi (a)\\\varphi (a+1)\\\vdots \\\varphi (b)\end{pmatrix}}.$

Using the discretization operator, call it $Q$ here, and the transfer matrix of $h$, named $T_{h}$, this can be written concisely as

$Q\varphi =T_{h}Q\varphi .$

This is again a fixed-point equation, but it can now be considered as an eigenvector–eigenvalue problem. That is, a finitely supported refinable function can exist only if $T_{h}$ has the eigenvalue 1 (this condition is necessary but not sufficient).

Values at dyadic points

From the values at integral points one can derive the values at dyadic points, i.e. points of the form $k\cdot 2^{-j}$, with $k\in \mathbb {Z} $ and $j\in \mathbb {N} $.

$\varphi =D_{1/2}(2\cdot (h*\varphi ))$

$D_{2}\varphi =2\cdot (h*\varphi )$

$Q(D_{2}\varphi )=Q(2\cdot (h*\varphi ))=2\cdot (h*Q\varphi )$

The star denotes the convolution of a discrete filter with a function. With this step one can compute the values at points of the form ${\frac {k}{2}}$. By iteratively replacing $\varphi $ by $D_{2}\varphi $ one obtains the values at all finer scales.

$Q(D_{2^{j+1}}\varphi )=2\cdot (h*Q(D_{2^{j}}\varphi ))$

Convolution

If $\varphi $ is refinable with respect to $h$, and $\psi $ is refinable with respect to $g$, then $\varphi *\psi $ is refinable with respect to $h*g$.

Differentiation

If $\varphi $ is refinable with respect to $h$, and the derivative $\varphi '$ exists, then $\varphi '$ is refinable with respect to $2\cdot h$. This can be interpreted as a special case of the convolution property, where one of the convolution operands is a derivative of the Dirac impulse.
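Returning to the values at integral points: the following small numerical sketch (added for illustration; the hat-function mask $h=(1/4,1/2,1/4)$ is an assumed example) builds the matrix with entries $2h_{2m-j}$ directly from the refinement equation and reads off the eigenvector for eigenvalue 1, recovering $\varphi (0)=0$, $\varphi (1)=1$, $\varphi (2)=0$ for the B-spline hat function supported on $[0,2]$.

% Values of the hat function at its integer support points, computed as
% the eigenvector for eigenvalue 1.  The refinement equation used is
% phi(m) = 2 * sum_j h(2m - j) * phi(j), with mask indices 0..2.
h = [1/4 1/2 1/4];
T = zeros(3);                     % T(m+1, j+1) = 2*h(2m - j), m, j = 0..2
for m = 0:2
    for j = 0:2
        idx = 2*m - j;            % index into the mask
        if idx >= 0 && idx <= 2
            T(m+1, j+1) = 2 * h(idx+1);
        end
    end
end
[V, D] = eig(T);
[~, p] = min(abs(diag(D) - 1));   % pick the eigenvalue closest to 1
v = V(:, p) / sum(V(:, p));       % normalize so the values sum to 1
disp(v')                          % expected: 0 1 0  (= phi(0), phi(1), phi(2))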
Integration

If $\varphi $ is refinable with respect to $h$, and there is an antiderivative $\Phi $ with $ \Phi (t)=\int _{0}^{t}\varphi (\tau )\,\mathrm {d} \tau $, then the antiderivative $t\mapsto \Phi (t)+c$ is refinable with respect to the mask $ {\frac {1}{2}}\cdot h$, where the constant $c$ must fulfill $ c\cdot \left(1-\sum _{j}h_{j}\right)=\sum _{j}h_{j}\cdot \Phi (-j)$.

If $\varphi $ has bounded support, then we can interpret integration as convolution with the Heaviside function and apply the convolution law.

Scalar products

Computing the scalar products of two refinable functions and their translates can be reduced to the two properties above. Let $T$ be the translation operator. It holds that

$\langle \varphi ,T_{k}\psi \rangle =\langle \varphi *\psi ^{*},T_{k}\delta \rangle =(\varphi *\psi ^{*})(k)$

where $\psi ^{*}$ is the adjoint of $\psi $ with respect to convolution, i.e., $\psi ^{*}$ is the flipped and complex-conjugated version of $\psi $, i.e., $\psi ^{*}(t)={\overline {\psi (-t)}}$.

Because of the above property, $\varphi *\psi ^{*}$ is refinable with respect to $h*g^{*}$, and its values at integral arguments can be computed as eigenvectors of the transfer matrix. This idea can be easily generalized to integrals of products of more than two refinable functions.[1]

Smoothness

A refinable function usually has a fractal shape. The design of continuous or smooth refinable functions is not obvious. Before smoothness can be enforced, it is necessary to measure the smoothness of refinable functions. Using the Villemoes machine[2] one can compute the smoothness of refinable functions in terms of Sobolev exponents.

In a first step, the refinement mask $h$ is factored into a filter $b$, which is a power of the smoothness factor $(1,1)$ (this is a binomial mask), and a remainder $q$. Roughly speaking, the binomial mask $b$ produces smoothness, while $q$ represents a fractal component which reduces smoothness again. Now the Sobolev exponent is roughly the order of $b$ minus the logarithm of the spectral radius of $T_{q*q^{*}}$.

Generalization

The concept of refinable functions can be generalized to functions of more than one variable, that is, functions from $\mathbb {R} ^{d}\to \mathbb {R} $. The simplest generalization is by tensor products. If $\varphi $ and $\psi $ are refinable with respect to $h$ and $g$, respectively, then $\varphi \otimes \psi $ is refinable with respect to $h\otimes g$.

The scheme can be generalized even more to different scaling factors with respect to different dimensions, or even to mixing data between dimensions.[3] Instead of scaling the argument by a scalar factor like 2, the coordinates are transformed by an integer matrix $M$. In order to let the scheme work, the absolute values of all eigenvalues of $M$ must be larger than one. (Maybe it also suffices that $\left|\det M\right|>1$.)

Formally the two-scale equation does not change very much:

$\varphi (x)=\left|\det M\right|\cdot \sum _{k\in \mathbb {Z} ^{d}}h_{k}\cdot \varphi (M\cdot x-k)$

$\varphi =\left|\det M\right|\cdot D_{M^{-1}}(h*\varphi )$

Examples

• If the definition is extended to distributions, then the Dirac impulse is refinable with respect to the unit vector $\delta $, known as the Kronecker delta. The $n$-th derivative of the Dirac distribution is refinable with respect to $2^{n}\cdot \delta $.
• The Heaviside function is refinable with respect to ${\frac {1}{2}}\cdot \delta $.
• The truncated power functions with exponent $n$ are refinable with respect to ${\frac {1}{2^{n+1}}}\cdot \delta $.
• The triangular function is a refinable function.[4] B-spline functions with successive integral nodes are refinable, because of the convolution theorem and the refinability of the characteristic function for the interval $[0,1)$ (a boxcar function). • All polynomial functions are refinable. For every refinement mask there is a polynomial that is uniquely defined up to a constant factor. For every polynomial of degree $n$ there are many refinement masks that all differ by a mask of type $v*(1,-1)^{n+1}$ for any mask $v$ and the convolutional power $(1,-1)^{n+1}$.[5] • A rational function $\varphi $ is refinable if and only if it can be represented using partial fractions as $\varphi (x)=\sum _{i\in \mathbb {Z} }{\frac {s_{i}}{(x-i)^{k}}}$, where $k$ is a positive natural number and $s$ is a real sequence with finitely many non-zero elements (a Laurent polynomial) such that $s|(s\uparrow 2)$ (read: $\exists h(z)\in \mathbb {R} [z,z^{-1}]\ h(z)\cdot s(z)=s(z^{2})$). The Laurent polynomial $2^{k-1}\cdot h$ is the associated refinement mask.[6] References 1. Dahmen, Wolfgang; Micchelli, Charles A. (1993). "Using the refinement equation for evaluating integrals of wavelets". Journal Numerical Analysis. SIAM. 30 (2): 507–537. doi:10.1137/0730024. 2. Villemoes, Lars. "Sobolev regularity of wavelets and stability of iterated filter banks". Archived from the original (PostScript) on 2002-05-11. 3. Berger, Marc A.; Wang, Yang (1992), "Multidimensional two-scale dilation equations (chapter IV)", in Chui, Charles K. (ed.), Wavelets: A Tutorial in Theory and Applications, Wavelet Analysis and its Applications, vol. 2, Academic Press, Inc., pp. 295–323 4. Nathanael, Berglund. "Reconstructing Refinable Functions". Archived from the original on 2009-04-04. Retrieved 2010-12-24. 5. Thielemann, Henning (2012-01-29). "How to refine polynomial functions". arXiv:1012.2453 [math.FA]. 6. Gustafson, Paul; Savir, Nathan; Spears, Ely (2006-11-14), "A Characterization of Refinable Rational Functions" (PDF), American Journal of Undergraduate Research, 5 (3): 11–20, doi:10.33697/ajur.2006.021 See also • Subdivision scheme
Gysin homomorphism In the field of mathematics known as algebraic topology, the Gysin sequence is a long exact sequence which relates the cohomology classes of the base space, the fiber and the total space of a sphere bundle. The Gysin sequence is a useful tool for calculating the cohomology rings given the Euler class of the sphere bundle, and vice versa. It was introduced by Gysin (1942), and is generalized by the Serre spectral sequence.

Definition

Consider a fiber-oriented sphere bundle with total space E, base space M, fiber Sk and projection map $\pi $:

$S^{k}\hookrightarrow E{\stackrel {\pi }{\longrightarrow }}M.$

Any such bundle defines a degree k + 1 cohomology class e called the Euler class of the bundle.

De Rham cohomology

Discussion of the sequence is clearest with de Rham cohomology. There, cohomology classes are represented by differential forms, so that e can be represented by a (k + 1)-form. The projection map $\pi $ induces a map in cohomology $H^{\ast }$ called its pullback $\pi ^{\ast }$:

$\pi ^{*}:H^{*}(M)\longrightarrow H^{*}(E).\,$

In the case of a fiber bundle, one can also define a pushforward map $\pi _{\ast }$

$\pi _{*}:H^{*}(E)\longrightarrow H^{*-k}(M)$

which acts by fiberwise integration of differential forms on the oriented sphere – note that this map goes "the wrong way": it is a covariant map between objects associated with a contravariant functor.

Gysin proved that the following is a long exact sequence

$\cdots \longrightarrow H^{n}(E){\stackrel {\pi _{*}}{\longrightarrow }}H^{n-k}(M){\stackrel {e_{\wedge }}{\longrightarrow }}H^{n+1}(M){\stackrel {\pi ^{*}}{\longrightarrow }}H^{n+1}(E)\longrightarrow \cdots $

where $e_{\wedge }$ is the wedge product of a differential form with the Euler class e.

Integral cohomology

The Gysin sequence is a long exact sequence not only for the de Rham cohomology of differential forms, but also for cohomology with integral coefficients. In the integral case one needs to replace the wedge product with the Euler class with the cup product, and the pushforward map no longer corresponds to integration.

Gysin homomorphism in algebraic geometry

Let i: X → Y be a (closed) regular embedding of codimension d, Y' → Y a morphism and i': X' = X ×Y Y' → Y' the induced map. Let N be the pullback of the normal bundle of i to X'. Then the refined Gysin homomorphism i! refers to the composition

$i^{!}:A_{k}(Y'){\overset {\sigma }{\longrightarrow }}A_{k}(N){\overset {\text{Gysin}}{\longrightarrow }}A_{k-d}(X')$

where

• σ is the specialization homomorphism, which sends a k-dimensional subvariety V to the normal cone of the intersection of V and X' in V. The result lies in N through $C_{X'/Y'}\hookrightarrow N$.
• The second map is the (usual) Gysin homomorphism induced by the zero-section embedding $X'\hookrightarrow N$.

The homomorphism i! encodes the intersection product in intersection theory, in that one either shows, or defines the intersection product of X and V by, the formula $X\cdot V=i^{!}[V].$[1]

Example: Given a vector bundle E, let s: X → E be a section of E. Then, when s is a regular section, $s^{!}[X]$ is the class of the zero-locus of s, where [X] is the fundamental class of X.[2]

See also

• Logarithmic form
• Wang sequence

Notes

1. Fulton 1998, Example 6.2.1.
2. Fulton 1998, Proposition 14.1. (c).
Sources • Bott, Raoul; Tu, Loring (1982), Differential Forms in Algebraic Topology, Graduate Texts in Mathematics, Springer-Verlag, ISBN 978-038790613-3 • Fulton, William (1998), Intersection theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol. 2 (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-1700-8, ISBN 978-3-540-62046-4, MR 1644323 • Gysin, Werner (1942), "Zur Homologietheorie der Abbildungen und Faserungen von Mannigfaltigkeiten", Commentarii Mathematici Helvetici, 14: 61–122, doi:10.1007/bf02565612, ISSN 0010-2571, MR 0006511
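A standard illustration of the sequence above (a worked example added here, not spelled out in the article): apply it to the circle bundle $S^{1}\hookrightarrow S^{2n+1}{\stackrel {\pi }{\longrightarrow }}\mathbb {CP} ^{n}$, whose Euler class $e\in H^{2}(\mathbb {CP} ^{n})$ is a generator. Since $H^{j}(S^{2n+1})=0$ for $0<j<2n+1$, exactness forces the maps $e_{\wedge }:H^{j}(\mathbb {CP} ^{n})\to H^{j+2}(\mathbb {CP} ^{n})$ to be isomorphisms for $0\leq j\leq 2n-2$ and the odd-degree groups to vanish, recovering the ring $H^{*}(\mathbb {CP} ^{n})\cong \mathbb {Z} [e]/(e^{n+1})$.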
Refinement (category theory) In category theory and related fields of mathematics, a refinement is a construction that generalizes the operations of "interior enrichment", like bornologification or saturation of a locally convex space. A dual construction is called envelope. Definition Suppose $K$ is a category, $X$ an object in $K$, and $\Gamma $ and $\Phi $ two classes of morphisms in $K$. The definition[1] of a refinement of $X$ in the class $\Gamma $ by means of the class $\Phi $ consists of two steps. • A morphism $\sigma :X'\to X$ in $K$ is called an enrichment of the object $X$ in the class of morphisms $\Gamma $ by means of the class of morphisms $\Phi $, if $\sigma \in \Gamma $, and for any morphism $\varphi :B\to X$ from the class $\Phi $ there exists a unique morphism $\varphi ':B\to X'$ in $K$ such that $\varphi =\sigma \circ \varphi '$. • An enrichment $\rho :E\to X$ of the object $X$ in the class of morphisms $\Gamma $ by means of the class of morphisms $\Phi $ is called a refinement of $X$ in $\Gamma $ by means of $\Phi $, if for any other enrichment $\sigma :X'\to X$ (of $X$ in $\Gamma $ by means of $\Phi $) there is a unique morphism $\upsilon :E\to X'$ in $K$ such that $\rho =\sigma \circ \upsilon $. The object $E$ is also called a refinement of $X$ in $\Gamma $ by means of $\Phi $. Notations: $\rho =\operatorname {ref} _{\Phi }^{\Gamma }X,\qquad E=\operatorname {Ref} _{\Phi }^{\Gamma }X.$ In a special case when $\Gamma $ is a class of all morphisms whose ranges belong to a given class of objects $L$ in $K$ it is convenient to replace $\Gamma $ with $L$ in the notations (and in the terms): $\rho =\operatorname {ref} _{\Phi }^{L}X,\qquad E=\operatorname {Ref} _{\Phi }^{L}X.$ Similarly, if $\Phi $ is a class of all morphisms whose ranges belong to a given class of objects $M$ in $K$ it is convenient to replace $\Phi $ with $M$ in the notations (and in the terms): $\rho =\operatorname {ref} _{M}^{\Gamma }X,\qquad E=\operatorname {Ref} _{M}^{\Gamma }X.$ For example, one can speak about a refinement of $X$ in the class of objects $L$ by means of the class of objects $M$: $\rho =\operatorname {ref} _{M}^{L}X,\qquad E=\operatorname {Ref} _{M}^{L}X.$ Examples 1. The bornologification[2][3] $X_{\operatorname {born} }$ of a locally convex space $X$ is a refinement of $X$ in the category $\operatorname {LCS} $ of locally convex spaces by means of the subcategory $\operatorname {Norm} $ of normed spaces: $X_{\operatorname {born} }=\operatorname {Ref} _{\operatorname {Norm} }^{\operatorname {LCS} }X$ 2. The saturation[4][3] $X^{\blacktriangle }$ of a pseudocomplete[5] locally convex space $X$ is a refinement in the category $\operatorname {LCS} $ of locally convex spaces by means of the subcategory $\operatorname {Smi} $ of the Smith spaces: $X^{\blacktriangle }=\operatorname {Ref} _{\operatorname {Smi} }^{\operatorname {LCS} }X$ See also • Envelope Notes 1. Akbarov 2016, p. 52. 2. Kriegl & Michor 1997, p. 35. 3. Akbarov 2016, p. 57. 4. Akbarov 2003, p. 194. 5. A topological vector space $X$ is said to be pseudocomplete if each totally bounded Cauchy net in $X$ converges. References • Kriegl, A.; Michor, P.W. (1997). The convenient setting of global analysis. Providence, Rhode Island: American Mathematical Society. ISBN 0-8218-0780-3. • Akbarov, S.S. (2003). "Pontryagin duality in the theory of topological vector spaces and in topological algebra". Journal of Mathematical Sciences. 113 (2): 179–349. doi:10.1023/A:1020929201133. S2CID 115297067. • Akbarov, S.S. (2016). 
"Envelopes and refinements in categories, with applications to functional analysis". Dissertationes Mathematicae. 513: 1–188. arXiv:1110.2013. doi:10.4064/dm702-12-2015. S2CID 118895911. Functional analysis (topics – glossary) Spaces • Banach • Besov • Fréchet • Hilbert • Hölder • Nuclear • Orlicz • Schwartz • Sobolev • Topological vector Properties • Barrelled • Complete • Dual (Algebraic/Topological) • Locally convex • Reflexive • Reparable Theorems • Hahn–Banach • Riesz representation • Closed graph • Uniform boundedness principle • Kakutani fixed-point • Krein–Milman • Min–max • Gelfand–Naimark • Banach–Alaoglu Operators • Adjoint • Bounded • Compact • Hilbert–Schmidt • Normal • Nuclear • Trace class • Transpose • Unbounded • Unitary Algebras • Banach algebra • C*-algebra • Spectrum of a C*-algebra • Operator algebra • Group algebra of a locally compact group • Von Neumann algebra Open problems • Invariant subspace problem • Mahler's conjecture Applications • Hardy space • Spectral theory of ordinary differential equations • Heat kernel • Index theorem • Calculus of variations • Functional calculus • Integral operator • Jones polynomial • Topological quantum field theory • Noncommutative geometry • Riemann hypothesis • Distribution (or Generalized functions) Advanced topics • Approximation property • Balanced set • Choquet theory • Weak topology • Banach–Mazur distance • Tomita–Takesaki theory •  Mathematics portal • Category • Commons Category theory Key concepts Key concepts • Category • Adjoint functors • CCC • Commutative diagram • Concrete category • End • Exponential • Functor • Kan extension • Morphism • Natural transformation • Universal property Universal constructions Limits • Terminal objects • Products • Equalizers • Kernels • Pullbacks • Inverse limit Colimits • Initial objects • Coproducts • Coequalizers • Cokernels and quotients • Pushout • Direct limit Algebraic categories • Sets • Relations • Magmas • Groups • Abelian groups • Rings (Fields) • Modules (Vector spaces) Constructions on categories • Free category • Functor category • Kleisli category • Opposite category • Quotient category • Product category • Comma category • Subcategory Higher category theory Key concepts • Categorification • Enriched category • Higher-dimensional algebra • Homotopy hypothesis • Model category • Simplex category • String diagram • Topos n-categories Weak n-categories • Bicategory (pseudofunctor) • Tricategory • Tetracategory • Kan complex • ∞-groupoid • ∞-topos Strict n-categories • 2-category (2-functor) • 3-category Categorified concepts • 2-group • 2-ring • En-ring • (Traced)(Symmetric) monoidal category • n-group • n-monoid • Category • Outline • Glossary
Refinement type In type theory, a refinement type[1][2][3] is a type endowed with a predicate which is assumed to hold for any element of the refined type. Refinement types can express preconditions when used as function arguments or postconditions when used as return types: for instance, the type of a function which accepts natural numbers and returns natural numbers greater than 5 may be written as $f:\mathbb {N} \rightarrow \{n\in \mathbb {N} \,|\,n>5\}$. Refinement types are thus related to behavioral subtyping.

History

The concept of refinement types was first introduced in Freeman and Pfenning's 1991 Refinement types for ML,[1] which presents a type system for a subset of Standard ML. The type system "preserves the decidability of ML's type inference" whilst still "allowing more errors to be detected at compile-time". In more recent times, refinement type systems have been developed for languages such as Haskell,[4][5] TypeScript[6] and Scala.

See also

• Liquid Haskell
• Dependent types

References

1. Freeman, T.; Pfenning, F. (1991). "Refinement types for ML" (PDF). Proceedings of the ACM Conference on Programming Language Design and Implementation. pp. 268–277. doi:10.1145/113445.113468.
2. Hayashi, S. (1993). "Logic of refinement types". Proceedings of the Workshop on Types for Proofs and Programs. pp. 157–172. CiteSeerX 10.1.1.38.6346. doi:10.1007/3-540-58085-9_74.
3. Denney, E. (1998). "Refinement types for specification". Proceedings of the IFIP International Conference on Programming Concepts and Methods. Vol. 125. Chapman & Hall. pp. 148–166. CiteSeerX 10.1.1.22.4988.
4. Vazou, Niki. Liquid Haskell: Refinement Types for Haskell. The 45th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL 2018).
5. Volkov, Nikita (2015). "Refinement types as a Haskell library".
6. Panagiotis, Vekris; Cosman, Benjamin; Jhala, Ranjit (2016). "Refinement types for TypeScript". Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation. pp. 310–325. arXiv:1604.02480. doi:10.1145/2908080.2908110.
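As a further illustration of preconditions in the notation used above (an example added here, not drawn from the cited papers): a safe integer division operator can push its "nonzero divisor" requirement into the argument type,

$\mathrm {div} :\mathbb {Z} \times \{d\in \mathbb {Z} \mid d\neq 0\}\to \mathbb {Z} ,$

so that a division by zero becomes a compile-time type error rather than a run-time failure.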
Reflection formula In mathematics, a reflection formula or reflection relation for a function f is a relationship between f(a − x) and f(x). It is a special case of a functional equation, and it is very common in the literature to use the term "functional equation" when "reflection formula" is meant. This article is about reflection in number theory and calculus. For reflection formulas in geometry, see Reflection (mathematics). Reflection formulas are useful for numerical computation of special functions. In effect, an approximation that has greater accuracy or only converges on one side of a reflection point (typically in the positive half of the complex plane) can be employed for all arguments. Known formulae The even and odd functions satisfy by definition simple reflection relations around a = 0. For all even functions, $f(-x)=f(x),$ and for all odd functions, $f(-x)=-f(x).$ A famous relationship is Euler's reflection formula $\Gamma (z)\Gamma (1-z)={\frac {\pi }{\sin {(\pi z)}}},\qquad z\not \in \mathbb {Z} $ for the gamma function $\Gamma (z)$, due to Leonhard Euler. There is also a reflection formula for the general n-th order polygamma function ψ(n)(z), $\psi ^{(n)}(1-z)+(-1)^{n+1}\psi ^{(n)}(z)=(-1)^{n}\pi {\frac {d^{n}}{dz^{n}}}\cot {(\pi z)}$ which springs trivially from the fact that the polygamma functions are defined as the derivatives of $\ln \Gamma $ and thus inherit the reflection formula. The Riemann zeta function ζ(z) satisfies ${\frac {\zeta (1-z)}{\zeta (z)}}={\frac {2\,\Gamma (z)}{(2\pi )^{z}}}\cos \left({\frac {\pi z}{2}}\right),$ and the Riemann Xi function ξ(z) satisfies $\xi (z)=\xi (1-z).$ References • Weisstein, Eric W. "Reflection Relation". MathWorld. • Weisstein, Eric W. "Polygamma Function". MathWorld.
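Returning to Euler's reflection formula above, here is a quick numerical sanity check (a sketch added for illustration; the sample point z = 1/4 is an arbitrary choice):

% Numerical check of Euler's reflection formula at z = 1/4:
% gamma(1/4)*gamma(3/4) should equal pi/sin(pi/4) = pi*sqrt(2).
z = 0.25;
lhs = gamma(z) * gamma(1 - z);
rhs = pi / sin(pi * z);
fprintf('lhs = %.12f, rhs = %.12f\n', lhs, rhs)   % agree to machine precision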
Reflection theorem In algebraic number theory, a reflection theorem or Spiegelungssatz (German for reflection theorem – see Spiegel and Satz) is one of a collection of theorems linking the sizes of different ideal class groups (or ray class groups), or the sizes of different isotypic components of a class group. The original example is due to Ernst Eduard Kummer, who showed that the class number of the cyclotomic field $\mathbb {Q} \left(\zeta _{p}\right)$, with p a prime number, will be divisible by p if the class number of the maximal real subfield $\mathbb {Q} \left(\zeta _{p}\right)^{+}$ is. Another example is due to Scholz.[1] A simplified version of his theorem states that if 3 divides the class number of a real quadratic field $\mathbb {Q} \left({\sqrt {d}}\right)$, then 3 also divides the class number of the imaginary quadratic field $\mathbb {Q} \left({\sqrt {-3d}}\right)$.

For reflection principles in set theory, see Reflection principle.

Leopoldt's Spiegelungssatz

Both of the above results are generalized by Leopoldt's "Spiegelungssatz", which relates the p-ranks of different isotypic components of the class group of a number field considered as a module over the Galois group of a Galois extension.

Let L/K be a finite Galois extension of number fields, with group G, degree prime to p, and L containing the p-th roots of unity. Let A be the p-Sylow subgroup of the class group of L. Let φ run over the irreducible characters of the group ring Qp[G] and let Aφ denote the corresponding direct summands of A. For any φ let $q=p^{\phi (1)}$ and let the G-rank eφ be the exponent in the index

$[A_{\phi }:A_{\phi }^{p}]=q^{e_{\phi }}.$

Let ω be the character of G describing its action on the p-th roots of unity:

$\zeta ^{g}=\zeta ^{\omega (g)}{\text{ for }}\zeta \in \mu _{p}.$

The reflection (Spiegelung) φ* is defined by

$\phi ^{*}(g)=\omega (g)\phi (g^{-1}).$

Let E be the unit group of K. We say that a unit ε is primary if $K({\sqrt[{p}]{\epsilon }})/K$ is unramified, and let E0 denote the group of primary units modulo $E^{p}$. Let δφ denote the G-rank of the φ component of E0.

The Spiegelungssatz states that

$|e_{\phi ^{*}}-e_{\phi }|\leq \delta _{\phi }.$

Extensions

Extensions of this Spiegelungssatz were given by Oriat and Oriat-Satge, where class groups were no longer associated with characters of the Galois group of K/k, but rather with ideals in a group ring over the Galois group of K/k. Leopoldt's Spiegelungssatz was generalized in a different direction by Kuroda, who extended it to a statement about ray class groups. This was further developed into the very general "T-S reflection theorem" of Georges Gras.[2] Kenkichi Iwasawa also provided an Iwasawa-theoretic reflection theorem.

References

1. A. Scholz, Über die Beziehung der Klassenzahlen quadratischer Körper zueinander, J. reine angew. Math., 166 (1932), 201–203.
2. Georges Gras, Class Field Theory: From Theory to Practice, Springer-Verlag, Berlin, 2004, pp. 157–158.
• Koch, Helmut (1997). Algebraic Number Theory. Encycl. Math. Sci. Vol. 62 (2nd printing of 1st ed.). Springer-Verlag. pp. 147–149. ISBN 3-540-63003-1. Zbl 0819.11044.
Reflection group In group theory and geometry, a reflection group is a discrete group which is generated by a set of reflections of a finite-dimensional Euclidean space. The symmetry group of a regular polytope or of a tiling of the Euclidean space by congruent copies of a regular polytope is necessarily a reflection group. Reflection groups also include Weyl groups and crystallographic Coxeter groups. While the orthogonal group is generated by reflections (by the Cartan–Dieudonné theorem), it is a continuous group (indeed, a Lie group), not a discrete group, and is generally considered separately.

Definition

Let E be a finite-dimensional Euclidean space. A finite reflection group is a subgroup of the general linear group of E which is generated by a set of orthogonal reflections across hyperplanes passing through the origin. An affine reflection group is a discrete subgroup of the affine group of E that is generated by a set of affine reflections of E (without the requirement that the reflection hyperplanes pass through the origin). The corresponding notions can be defined over other fields, leading to complex reflection groups and analogues of reflection groups over a finite field.

Examples

Plane

In two dimensions, the finite reflection groups are the dihedral groups, which are generated by reflections in two lines that form an angle of $\pi /n$ and correspond to the Coxeter diagram $I_{2}(n).$ In contrast, the cyclic point groups in two dimensions are not generated by reflections, nor do they contain any; they are subgroups of index 2 of a dihedral group.

Infinite reflection groups include the frieze groups $*\infty \infty $ and $*22\infty $ and the wallpaper groups $**$, $*2222$, $*333$, $*442$ and $*632$. If the angle between two lines is an irrational multiple of π, the group generated by reflections in these lines is infinite and non-discrete; hence it is not a reflection group.

Space

Finite reflection groups are the point groups Cnv, Dnh, and the symmetry groups of the five Platonic solids. Dual regular polyhedra (cube and octahedron, as well as dodecahedron and icosahedron) give rise to isomorphic symmetry groups. The classification of finite reflection groups of R3 is an instance of the ADE classification.

Relation with Coxeter groups

A reflection group W admits a presentation of a special kind discovered and studied by H. S. M. Coxeter.[1] The reflections in the faces of a fixed fundamental "chamber" are generators ri of W of order 2. All relations between them formally follow from the relations

$(r_{i}r_{j})^{c_{ij}}=1,$

expressing the fact that the product of the reflections ri and rj in two hyperplanes Hi and Hj meeting at an angle $\pi /c_{ij}$ is a rotation by the angle $2\pi /c_{ij}$ fixing the subspace Hi ∩ Hj of codimension 2. Thus, viewed as an abstract group, every reflection group is a Coxeter group.

Finite fields

When working over finite fields, one defines a "reflection" as a map that fixes a hyperplane (otherwise, for example, there would be no reflections in characteristic 2, as $-1=1$, so reflections are the identity). Geometrically, this amounts to including shears in a hyperplane. Reflection groups over finite fields of characteristic not 2 were classified by Zalesskiĭ & Serežkin (1981).

Generalizations

Discrete isometry groups of more general Riemannian manifolds generated by reflections have also been considered.
The most important class arises from Riemannian symmetric spaces of rank 1: the n-sphere Sn, corresponding to finite reflection groups, the Euclidean space Rn, corresponding to affine reflection groups, and the hyperbolic space Hn, where the corresponding groups are called hyperbolic reflection groups. In two dimensions, triangle groups include reflection groups of all three kinds. See also • Hyperplane arrangement • Chevalley–Shephard–Todd theorem • Reflection groups are related to kaleidoscopes.[2] References Notes 1. Coxeter (1934, 1935) 2. Goodman (2004). Bibliography • Coxeter, H.S.M. (1934), "Discrete groups generated by reflections", Ann. of Math., 35 (3): 588–621, CiteSeerX 10.1.1.128.471, doi:10.2307/1968753, JSTOR 1968753 • Coxeter, H.S.M. (1935), "The complete enumeration of finite groups of the form $r_{i}^{2}=(r_{i}r_{j})^{k_{ij}}=1$", J. London Math. Soc., 10: 21–25, doi:10.1112/jlms/s1-10.37.21 • Goodman, Roe (April 2004), "The Mathematics of Mirrors and Kaleidoscopes" (PDF), American Mathematical Monthly, 111 (4): 281–298, CiteSeerX 10.1.1.127.6227, doi:10.2307/4145238, JSTOR 4145238 • Zalesskiĭ, Aleksandr E.; Serežkin, V N (1981), "Finite Linear Groups Generated by Reflections", Math. USSR Izv., 17 (3): 477–503, Bibcode:1981IzMat..17..477Z, doi:10.1070/IM1981v017n03ABEH001369 Textbooks • Borovik, Alexandre; Borovik, Anna (2010), Mirrors and reflections : the geometry of finite reflection groups, New York: Springer, ISBN 9780387790664 • Grove, L. C.; Benson, C. T. (1985), Finite reflection groups, Graduate Texts in Mathematics, vol. 99 (2nd ed.), Springer-Verlag, New York, doi:10.1007/978-1-4757-1869-0, ISBN 0-387-96082-1, MR 0777684 • Humphreys, James E. (1992), Reflection groups and Coxeter groups, Cambridge University Press, ISBN 978-0-521-43613-7 External links • Media related to Reflection groups at Wikimedia Commons • "Reflection group", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
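To make the planar case concrete, here is a small numerical sketch (added for illustration; n = 5 is an arbitrary choice). Reflections across two lines through the origin meeting at angle π/n compose to a rotation by 2π/n, so their product has order n and the two reflections generate the dihedral group I₂(n):

% Two mirror lines through the origin at angle pi/n generate I2(n):
% the product of the two reflections is a rotation by 2*pi/n, so its
% n-th power is the identity.
n = 5;
refl = @(t) [cos(2*t) sin(2*t); sin(2*t) -cos(2*t)];  % reflection across the line at angle t
r1 = refl(0);
r2 = refl(pi/n);
rot = r1 * r2;                     % rotation by 2*pi/n (up to orientation)
disp(norm(rot^n - eye(2)))         % expected: ~0 (machine precision)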
Reflection principle (Wiener process) In the theory of probability for stochastic processes, the reflection principle for a Wiener process states that if the path of a Wiener process f(t) reaches a value f(s) = a at time t = s, then the subsequent path after time s has the same distribution as the reflection of the subsequent path about the value a.[1] More formally, the reflection principle refers to a lemma concerning the distribution of the supremum of the Wiener process, or Brownian motion. The result relates the distribution of the supremum of Brownian motion up to time t to the distribution of the process at time t. It is a corollary of the strong Markov property of Brownian motion.

Statement

If $(W(t):t\geq 0)$ is a Wiener process, and $a>0$ is a threshold (also called a crossing point), then the lemma states:

$\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right)=2\mathbb {P} (W(t)\geq a)$

Assuming $W(0)=0$, by the continuity of Wiener processes, each path (one sampled realization) of the Wiener process on $(0,t)$ which finishes at or above the threshold $a$ at time $t$ (that is, $W(t)\geq a$) must have reached the threshold ($W(t_{a})=a$) at some earlier first time $t_{a}\leq t$. (The path can cross level $a$ multiple times on the interval $(0,t)$; we take the earliest such time.) For every such path one can define another path $W'(t)$ on $(0,t)$ that is reflected, i.e. vertically flipped, on the sub-interval $(t_{a},t)$ symmetrically around level $a$ relative to the original path. These reflected paths are also samples of the Wiener process reaching value $W'(t_{a})=a$ on the interval $(0,t)$, but they finish below $a$. Thus, of all the paths that reach $a$ on the interval $(0,t)$, half will finish below $a$, and half will finish above. Hence, the probability of finishing above $a$ is half that of reaching $a$.

In a stronger form, the reflection principle says that if $\tau $ is a stopping time then the reflection of the Wiener process starting at $\tau $, denoted $(W^{\tau }(t):t\geq 0)$, is also a Wiener process, where:

$W^{\tau }(t)=W(t)\chi _{\left\{t\leq \tau \right\}}+(2W(\tau )-W(t))\chi _{\left\{t>\tau \right\}}$

and the indicator function is

$\chi _{\{t\leq \tau \}}={\begin{cases}1,&{\text{if }}t\leq \tau \\0,&{\text{otherwise }}\end{cases}}$

and $\chi _{\{t>\tau \}}$ is defined similarly. The stronger form implies the original lemma by choosing $\tau =\inf \left\{t\geq 0:W(t)=a\right\}$.

Proof

The earliest stopping time for reaching the crossing point a, $\tau _{a}:=\inf \left\{t:W(t)=a\right\}$, is an almost surely finite stopping time. Then we can apply the strong Markov property to deduce that the relative path subsequent to $\tau _{a}$, given by $X_{t}:=W(t+\tau _{a})-a$, is also simple Brownian motion independent of ${\mathcal {F}}_{\tau _{a}}^{W}$. Then the probability that $W(s)$ reaches the threshold $a$ at some time $s$ in the interval $[0,t]$ can be decomposed as

${\begin{aligned}\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right)&=\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a,W(t)\geq a\right)+\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a,W(t)<a\right)\\&=\mathbb {P} \left(W(t)\geq a\right)+\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a,X(t-\tau _{a})<0\right)\\\end{aligned}}$.
By the tower property for conditional expectations, the second term reduces to:

${\begin{aligned}\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a,X(t-\tau _{a})<0\right)&=\mathbb {E} \left[\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a,X(t-\tau _{a})<0|{\mathcal {F}}_{\tau _{a}}^{W}\right)\right]\\&=\mathbb {E} \left[\chi _{\sup _{0\leq s\leq t}W(s)\geq a}\mathbb {P} \left(X(t-\tau _{a})<0|{\mathcal {F}}_{\tau _{a}}^{W}\right)\right]\\&={\frac {1}{2}}\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right),\end{aligned}}$

since $X(t)$ is a standard Brownian motion independent of ${\mathcal {F}}_{\tau _{a}}^{W}$ and has probability $1/2$ of being less than $0$. The proof of the lemma is completed by substituting this into the second line of the first equation.[2]

${\begin{aligned}\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right)&=\mathbb {P} \left(W(t)\geq a\right)+{\frac {1}{2}}\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right)\\\mathbb {P} \left(\sup _{0\leq s\leq t}W(s)\geq a\right)&=2\mathbb {P} \left(W(t)\geq a\right)\end{aligned}}$.

Consequences

The reflection principle is often used to simplify distributional properties of Brownian motion. Considering Brownian motion on the restricted interval $(W(t):t\in [0,1])$, the reflection principle allows us to prove that the location of the maximum $t_{\text{max}}$, satisfying $W(t_{\text{max}})=\sup _{0\leq s\leq 1}W(s)$, has the arcsine distribution. This is one of the Lévy arcsine laws.[3]

References

1. Jacobs, Kurt (2010). Stochastic Processes for Physicists. Cambridge University Press. pp. 57–59. ISBN 9781139486798.
2. Mörters, P.; Peres, Y. (2010) Brownian Motion, CUP. ISBN 978-0-521-76018-8
3. Lévy, Paul (1940). "Sur certains processus stochastiques homogènes". Compositio Mathematica. 7: 283–339. Retrieved 15 February 2013.
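As a numerical illustration of the lemma (a Monte Carlo sketch added here; the threshold, horizon and grid are arbitrary choices, and time discretization slightly underestimates the supremum):

% Monte Carlo sanity check of the reflection principle:
% P(sup_{0<=s<=t} W(s) >= a) should be approximately 2*P(W(t) >= a).
rng(0);                           % arbitrary seed, for reproducibility
t = 1; a = 1;
nsteps = 500; npaths = 20000;
dW = sqrt(t/nsteps) * randn(npaths, nsteps);
W = cumsum(dW, 2);                % discretized Brownian paths on (0, t]
lhs = mean(max(W, [], 2) >= a);   % estimate of P(sup W >= a)
rhs = 2 * mean(W(:, end) >= a);   % 2 * estimate of P(W(t) >= a)
fprintf('sup-prob = %.4f, 2*tail = %.4f\n', lhs, rhs)   % both near 0.317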
Reflection principle In set theory, a branch of mathematics, a reflection principle says that it is possible to find sets that, with respect to any given property, resemble the class of all sets. There are several different forms of the reflection principle depending on exactly what is meant by "resemble". Weak forms of the reflection principle are theorems of ZF set theory due to Montague (1961), while stronger forms can be new and very powerful axioms for set theory. The name "reflection principle" comes from the fact that properties of the universe of all sets are "reflected" down to a smaller set.

Motivation

A naive version of the reflection principle states that "for any property of the universe of all sets we can find a set with the same property". This leads to an immediate contradiction: the universe of all sets contains all sets, but there is no set with the property that it contains all sets. To get useful (and non-contradictory) reflection principles we need to be more careful about what we mean by "property" and what properties we allow. Reflection principles are associated with attempts to formulate the idea that no one notion, idea, or statement can capture our whole view of the universe of sets.[1] Kurt Gödel described it as follows:[2]

The universe of all sets is structurally indefinable. One possible way to make this statement precise is the following: The universe of sets cannot be uniquely characterized (i.e., distinguished from all its initial segments) by any internal structural property of the membership relation in it which is expressible in any logic of finite or transfinite type, including infinitary logics of any cardinal number. This principle may be considered a generalization of the closure principle.
— 8.7.3, p. 280

All the principles for setting up the axioms of set theory should be reducible to Ackermann's principle: The Absolute is unknowable. The strength of this principle increases as we get stronger and stronger systems of set theory. The other principles are only heuristic principles. Hence, the central principle is the reflection principle, which presumably will be understood better as our experience increases. Meanwhile, it helps to separate out more specific principles which either give some additional information or are not yet seen clearly to be derivable from the reflection principle as we understand it now.
— 8.7.9, p. 283

Generally I believe that, in the last analysis, every axiom of infinity should be derivable from the (extremely plausible) principle that V is indefinable, where definability is to be taken in [a] more and more generalized and idealized sense.
— 8.7.16, p. 285

Georg Cantor expressed similar views on Absolute Infinity: all cardinality properties satisfied by this number are already satisfied by some smaller cardinal.

To find non-contradictory reflection principles we might argue informally as follows. Suppose that we have some collection A of methods for forming sets (for example, taking powersets, subsets, the axiom of replacement, and so on). We can imagine taking all sets obtained by repeatedly applying all these methods, and form these sets into a class X, which can be thought of as a model of some set theory. But in light of this view, V is not exhaustible by a handful of operations; otherwise it would be easily describable from below. This principle is known as the inexhaustibility of V.[3] As a result, V is larger than X.
Applying the methods in A to the set X itself would also result in a collection smaller than V, as V is not exhaustible from the image of X under the operations in A. Then we can introduce the following new principle for forming sets: "the collection of all sets obtained from some set by repeatedly applying all methods in the collection A is also a set". After adding this principle to A, V is still not exhaustible by the operations in this new A. This process may be repeated further and further, adding more and more operations to the set A and obtaining larger and larger models X. Each X resembles V in the sense that it shares the property with V of being closed under the operations in A. We can use this informal argument in two ways. We can try to formalize it in (say) ZF set theory; by doing this we obtain some theorems of ZF set theory, called reflection theorems. Alternatively we can use this argument to motivate introducing new axioms for set theory, such as some axioms asserting existence of large cardinals.[3] In ZFC In trying to formalize the argument for the reflection principle of the previous section in ZF set theory, it turns out to be necessary to add some conditions about the collection of properties A (for example, A might be finite). Doing this produces several closely related "reflection theorems", all of which state that we can find a set that is almost a model of ZFC. In contrast to stronger reflection principles, these are provable in ZFC. One of the most common reflection principles for ZFC is a theorem schema that can be described as follows: for any formula $\phi (x_{1},\ldots ,x_{n})$ with parameters, if $\phi (x_{1},\ldots ,x_{n})$ is true (in the set-theoretic universe $V$), then there is a level $V_{\alpha }$ of the cumulative hierarchy such that $V_{\alpha }\vDash \phi (x_{1},\ldots ,x_{n})$. This is known as the Lévy–Montague reflection principle,[4] or the Lévy reflection principle,[5] principally investigated in Lévy (1960) and Montague (1961).[6] Another version of this reflection principle says that for any finite number of formulas of ZFC we can find a set $V_{\alpha }$ in the cumulative hierarchy such that all the formulas in the set are absolute for $V_{\alpha }$ (which means very roughly that they hold in $V_{\alpha }$ if and only if they hold in the universe of all sets). So this says that the set $V_{\alpha }$ resembles the universe of all sets, at least as far as the given finite number of formulas is concerned. Another reflection principle for ZFC is a theorem schema that can be described as follows:[7][8] Let $\phi $ be a formula whose free variables are among $x_{1},\ldots ,x_{n}$. Then ZFC proves that $(\forall N)(\exists M{\supseteq }N)(\forall x_{1},\ldots ,x_{n}{\in }M)(\phi (x_{1},\ldots ,x_{n})\leftrightarrow \phi ^{M})$ where $\phi ^{M}$ denotes the relativization of $\phi $ to $M$ (that is, replacing all quantifiers appearing in $\phi $ of the form $\forall x$ and $\exists x$ by $\forall x{\in }M$ and $\exists x{\in }M$, respectively). Another form of the reflection principle in ZFC says that for any finite set of axioms of ZFC we can find a countable transitive model satisfying these axioms. (In particular this proves that, unless inconsistent, ZFC is not finitely axiomatizable, because if it were, it would prove the existence of a model of itself, and hence prove its own consistency, contradicting Gödel's second incompleteness theorem.) This version of the reflection theorem is closely related to the Löwenheim–Skolem theorem.
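For a concrete instance of the Lévy–Montague schema (an illustration supplied here, not drawn from the sources cited above), take $\phi $ to be the assertion that an inductive set exists: $\phi \;\equiv \;\exists x\,{\bigl (}\varnothing \in x\land \forall y\,(y\in x\to y\cup \{y\}\in x){\bigr )}.$ Since $\phi $ is true in $V$ by the axiom of infinity, the schema yields an ordinal $\alpha $ with $V_{\alpha }\vDash \phi $; indeed any $\alpha \geq \omega +1$ works, because $\omega \in V_{\omega +1}\subseteq V_{\alpha }$ and being inductive is absolute between $V_{\alpha }$ and $V$.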
If $\kappa $ is a strongly inaccessible cardinal, then there is a closed unbounded subset $C$ of $\kappa $, such that for every $\alpha \in C$, the identity function from $V_{\alpha }$ to $V_{\kappa }$ is an elementary embedding. As new axioms Bernays class theory Paul Bernays used a reflection principle as an axiom for one version of set theory (not Von Neumann–Bernays–Gödel set theory, which is a weaker theory). His reflection principle stated roughly that if A is a class with some property, then one can find a transitive set u such that A∩u has the same property when considered as a subset of the "universe" u. This is quite a powerful axiom and implies the existence of several of the smaller large cardinals, such as inaccessible cardinals. (Roughly speaking, the class of all ordinals in ZFC is an inaccessible cardinal apart from the fact that it is not a set, and the reflection principle can then be used to show that there is a set that has the same property, in other words that is an inaccessible cardinal.) Unfortunately, this cannot be axiomatized directly in ZFC, and a class theory like Morse–Kelley set theory normally has to be used. The consistency of Bernays's reflection principle is implied by the existence of an ω-Erdős cardinal. More precisely, the axioms of Bernays' class theory are:[9] 1. extensionality 2. class specification: for any formula $\phi $ without $a$ free, $\exists a\forall b(b\in a\leftrightarrow \phi \land b{\text{ is a set}})$ 3. subsets: $b\subseteq a\land a{\text{ is a set}}\to b{\text{ is a set}}$ 4. reflection: for any formula $\phi $, $\phi (A)\to \exists u(u{\text{ is a transitive set}}\land \phi ^{{\mathcal {P}}u}(A\cap u))$ 5. foundation 6. choice where ${\mathcal {P}}$ denotes the powerset. According to Akihiro Kanamori,[10]: 62  in a 1961 paper, Bernays considered the reflection schema $\phi \to \exists x({\text{transitive}}(x)\land \phi ^{x})$ for any formula $\phi $ without $x$ free, where ${\text{transitive}}(x)$ asserts that $x$ is transitive. Starting with the observation that set parameters $a_{1},\ldots ,a_{n}$ can appear in $\phi $ and $x$ can be required to contain them by introducing clauses $\exists y(a_{i}\in y)$ into $\phi $, Bernays established pairing, union, infinity, and replacement with this schema alone, in effect achieving a remarkably economical presentation of ZF. Others Some formulations of Ackermann set theory use a reflection principle. Ackermann's axiom states that, for any formula $\phi $ not mentioning $V$,[2] $a\in V\land b\in V\to {\bigl (}\forall x(\phi \to x\in V)\to \exists u{\in }V\,\forall x(x\in u\leftrightarrow \phi ){\bigr )}$ Peter Koellner showed that a general class of reflection principles deemed "intrinsically justified" are either inconsistent or weak, in that they are consistent relative to the Erdős cardinal.[11] However, there are more powerful reflection principles, which are closely related to the various large cardinal axioms. For almost every known large cardinal axiom there is a known reflection principle that implies it, and conversely all but the most powerful known reflection principles are implied by known large cardinal axioms.[9] An example of this is the wholeness axiom,[12] which implies the existence of super-n-huge cardinals for all finite n; its consistency is implied by an I3 rank-into-rank cardinal. One can also add an axiom saying that Ord is a Mahlo cardinal: for every closed unbounded class of ordinals C (definable by a formula with parameters), there is a regular ordinal in C.
This allows one to derive the existence of strongly inaccessible cardinals above any given ordinal, and much more. References • Jech, Thomas (2002), Set theory, third millennium edition (revised and expanded), Springer, ISBN 3-540-44085-2 • Kunen, Kenneth (1980), Set Theory: An Introduction to Independence Proofs, North-Holland, ISBN 0-444-85401-0 • Lévy, Azriel (1960), "Axiom schemata of strong infinity in axiomatic set theory", Pacific Journal of Mathematics, 10: 223–238, doi:10.2140/pjm.1960.10.223, ISSN 0030-8730, MR 0124205 • Montague, Richard (1961), "Fraenkel's addition to the axioms of Zermelo", in Bar-Hillel, Yehoshua; Poznanski, E. I. J.; Rabin, M. O.; Robinson, Abraham (eds.), Essays on the foundations of mathematics, Hebrew Univ., Jerusalem: Magnes Press, pp. 91–114, MR 0163840 • Reinhardt, W. N. (1974), "Remarks on reflection principles, large cardinals, and elementary embeddings", Axiomatic set theory, Proc. Sympos. Pure Math., vol. XIII, Part II, Providence, R. I.: Amer. Math. Soc., pp. 189–205, MR 0401475 Citations 1. Welch, Philip D. (12 November 2019). "Proving Theorems from Reflection". Reflections on the Foundations of Mathematics. Synthese Library. Vol. 407. Springer, Cham. pp. 79–97. doi:10.1007/978-3-030-15655-8_4. ISBN 978-3-030-15655-8. S2CID 192577454. 2. Wang, Hao (March 25, 2016). A Logical Journey: From Gödel to Philosophy. Bradford Books. pp. 280–285. ISBN 978-0262529167. 3. P. Maddy, "Believing the Axioms. I", pp. 501–503. Journal of Symbolic Logic vol. 53, no. 2 (1988). 4. Barton, Neil; Caicedo, Andrés Eduardo; Fuchs, Gunter; Hamkins, Joel David; Reitz, Jonas; Schindler, Ralf (2020). "Inner-Model Reflection Principles". Studia Logica. 108 (3): 573–595. arXiv:1708.06669. doi:10.1007/s11225-019-09860-7. S2CID 255073980. 5. S. D. Friedman, Evidence for Set-Theoretic Truth and the Hyperuniverse Programme (2016), p. 15. Accessed 28 March 2023. 6. A. Kanamori, The Higher Infinite, p. 58. Springer Monographs in Mathematics (2003). ISBN 978-3-540-88866-6. 7. "Section 3.8 (000F): Reflection principle". The Stacks Project. 2022. Retrieved 7 September 2022. 8. T. Jech, Set Theory: The Third Millennium Edition, revised and expanded, pp. 168–170. Springer Monographs in Mathematics (2006). ISBN 3-540-44085-2 9. Marshall R., M. Victoria (1989). "Higher order reflection principles". The Journal of Symbolic Logic. 54 (2): 474–489. doi:10.2307/2274862. JSTOR 2274862. S2CID 250351126. Retrieved 9 September 2022. 10. Kanamori, Akihiro (March 2009). "Bernays and Set Theory". The Bulletin of Symbolic Logic. 15 (1): 43–69. doi:10.2178/bsl/1231081769. JSTOR 25470304. S2CID 15567244. Retrieved 9 September 2022. 11. Koellner, Peter (February 2009). "On reflection principles". Annals of Pure and Applied Logic. 157 (2): 206–219. doi:10.1016/j.apal.2008.09.007. 12. Corazza, Paul (2000). "The Wholeness Axiom and Laver Sequences". Annals of Pure and Applied Logic. 105 (1–3): 157–260. doi:10.1016/s0168-0072(99)00052-4. External links • Mizar system proof: http://mizar.org/version/current/html/zf_refle.html
Reflecting cardinal In set theory, a mathematical discipline, a reflecting cardinal is a cardinal number κ for which there is a normal ideal I on κ such that for every X∈I+, the set of α∈κ for which X reflects at α is in I+. (A stationary subset S of κ is said to reflect at α<κ if S∩α is stationary in α.) Reflecting cardinals were introduced by Mekler & Shelah (1989). Every weakly compact cardinal is a reflecting cardinal, and is also a limit of reflecting cardinals. The consistency strength of an inaccessible reflecting cardinal is strictly greater than that of a greatly Mahlo cardinal, where a cardinal κ is called greatly Mahlo if it is κ+-Mahlo (Mekler & Shelah 1989). However, an inaccessible reflecting cardinal is not in general Mahlo; see https://mathoverflow.net/q/212597. See also • List of large cardinal properties References • Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (third millennium ed.), Berlin, New York: Springer-Verlag, p. 697, ISBN 978-3-540-44085-7 • Mekler, Alan H.; Shelah, Saharon (1989), "The consistency strength of 'every stationary set reflects'", Israel Journal of Mathematics, 67 (3): 353–366, doi:10.1007/BF02764953, ISSN 0021-2172, MR 1029909
Reflective subcategory In mathematics, a full subcategory A of a category B is said to be reflective in B when the inclusion functor from A to B has a left adjoint.[1]: 91  This adjoint is sometimes called a reflector, or localization.[2] Dually, A is said to be coreflective in B when the inclusion functor has a right adjoint. Informally, a reflector acts as a kind of completion operation. It adds in any "missing" pieces of the structure in such a way that reflecting it again has no further effect. Definition A full subcategory A of a category B is said to be reflective in B if for each B-object B there exists an A-object $A_{B}$ and a B-morphism $r_{B}\colon B\to A_{B}$ such that for each B-morphism $f\colon B\to A$ to an A-object $A$ there exists a unique A-morphism ${\overline {f}}\colon A_{B}\to A$ with ${\overline {f}}\circ r_{B}=f$. The pair $(A_{B},r_{B})$ is called the A-reflection of B. The morphism $r_{B}$ is called the A-reflection arrow. (Although often, for the sake of brevity, we speak about $A_{B}$ only as being the A-reflection of B). This is equivalent to saying that the embedding functor $E\colon \mathbf {A} \hookrightarrow \mathbf {B} $ is a right adjoint. The left adjoint functor $R\colon \mathbf {B} \to \mathbf {A} $ is called the reflector. The map $r_{B}$ is the unit of this adjunction. The reflector assigns to $B$ the A-object $A_{B}$, and $Rf$ for a B-morphism $f$ is determined by the evident commuting diagram (the diagram itself is omitted here). If all A-reflection arrows are (extremal) epimorphisms, then the subcategory A is said to be (extremal) epireflective. Similarly, it is bireflective if all reflection arrows are bimorphisms. All these notions are special cases of a common generalization: the $E$-reflective subcategory, where $E$ is a class of morphisms. The $E$-reflective hull of a class A of objects is defined as the smallest $E$-reflective subcategory containing A. Thus we can speak about reflective hull, epireflective hull, extremal epireflective hull, etc. An anti-reflective subcategory is a full subcategory A such that the only objects of B that have an A-reflection arrow are those that are already in A. Dual notions to the above-mentioned notions are coreflection, coreflection arrow, (mono)coreflective subcategory, coreflective hull, anti-coreflective subcategory. Examples Algebra • The category of abelian groups Ab is a reflective subcategory of the category of groups, Grp. The reflector is the functor that sends each group to its abelianization. In its turn, the category of groups is a reflective subcategory of the category of inverse semigroups.[3] • Similarly, the category of commutative associative algebras is a reflective subcategory of all associative algebras, where the reflector is quotienting out by the commutator ideal. This is used in the construction of the symmetric algebra from the tensor algebra. • Dually, the category of anti-commutative associative algebras is a reflective subcategory of all associative algebras, where the reflector is quotienting out by the anti-commutator ideal. This is used in the construction of the exterior algebra from the tensor algebra. • The category of fields is a reflective subcategory of the category of integral domains (with injective ring homomorphisms as morphisms). The reflector is the functor that sends each integral domain to its field of fractions. • The category of abelian torsion groups is a coreflective subcategory of the category of abelian groups. The coreflector is the functor sending each group to its torsion subgroup.
• The categories of elementary abelian groups, abelian p-groups, and p-groups are all reflective subcategories of the category of groups, and the kernels of the reflection maps are important objects of study; see focal subgroup theorem. • The category of groups is a coreflective subcategory of the category of monoids: the right adjoint maps a monoid to its group of units.[4] Topology • The category of Kolmogorov spaces (T0 spaces) is a reflective subcategory of Top, the category of topological spaces, and the Kolmogorov quotient is the reflector. • The category of completely regular spaces CReg is a reflective subcategory of Top. By taking Kolmogorov quotients, one sees that the subcategory of Tychonoff spaces is also reflective. • The category of all compact Hausdorff spaces is a reflective subcategory of the category of all Tychonoff spaces (and of the category of all topological spaces[2]: 140 ). The reflector is given by the Stone–Čech compactification. • The category of all complete metric spaces with uniformly continuous mappings is a reflective subcategory of the category of metric spaces. The reflector is the completion of a metric space on objects, and the extension by density on arrows.[1]: 90  • The category of sheaves is a reflective subcategory of presheaves on a topological space. The reflector is sheafification, which assigns to a presheaf the sheaf of sections of the bundle of its germs. Functional analysis • The category of Banach spaces is a reflective subcategory of the category of normed spaces and bounded linear operators. The reflector is the norm completion functor. Category theory • For any Grothendieck site (C, J), the topos of sheaves on (C, J) is a reflective subcategory of the topos of presheaves on C, with the special further property that the reflector functor is left exact. The reflector is the sheafification functor a : Presh(C) → Sh(C, J), and the adjoint pair (a, i) is an important example of a geometric morphism in topos theory. Properties • The components of the counit are isomorphisms.[2]: 140 [1] • If D is a reflective subcategory of C, then the inclusion functor D → C creates all limits that are present in C.[2]: 141  • A reflective subcategory has all colimits that are present in the ambient category.[2]: 141  • The monad induced by the reflector/localization adjunction is idempotent.[2]: 158  Notes 1. Mac Lane, Saunders (1998). Categories for the working mathematician (2nd ed.). New York: Springer. p. 89. ISBN 0387984038. OCLC 37928530. 2. Riehl, Emily (2017-03-09). Category theory in context. Mineola, New York. p. 140. ISBN 9780486820804. OCLC 976394474. 3. Lawson (1998), p. 63, Theorem 2. 4. "coreflective subcategory in nLab". ncatlab.org. Retrieved 2019-04-02. References • Adámek, Jiří; Horst Herrlich; George E. Strecker (1990). Abstract and Concrete Categories (PDF). New York: John Wiley & Sons. • Peter Freyd, Andre Scedrov (1990). Categories, Allegories. Mathematical Library Vol 39. North-Holland. ISBN 978-0-444-70368-2. • Herrlich, Horst (1968). Topologische Reflexionen und Coreflexionen. Lecture Notes in Math. 78. Berlin: Springer. • Mark V. Lawson (1998). Inverse semigroups: the theory of partial symmetries. World Scientific. ISBN 978-981-02-3316-7.
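The first algebra example above, the abelianization reflector from Grp to Ab, can be made concrete in a toy form. The Python sketch below is our own illustration (the string encoding and function name are choices we make here, not part of any cited source): words in the free group on two generators x, y, with inverses written as the uppercase letters X, Y, are sent to their exponent-sum vectors in Z², which is exactly the reflection arrow into the abelianization.

```python
from collections import Counter

def abelianize(word: str) -> tuple:
    """Reflection arrow from the free group on {x, y} to Z^2:
    a word (inverses written as X, Y) maps to its exponent-sum vector."""
    counts = Counter(word)
    return (counts['x'] - counts['X'], counts['y'] - counts['Y'])

print(abelianize('xyXY'))  # the commutator [x, y] -> (0, 0)
print(abelianize('xxyX'))  # x * x * y * x^{-1}  -> (1, 1)
```

Any homomorphism from the free group to an abelian group factors uniquely through this map, which is the universal property that makes abelianization a reflector.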
Equilateral pentagon In geometry, an equilateral pentagon is a polygon in the Euclidean plane with five sides of equal length. Its five vertex angles can take a range of sets of values, thus permitting it to form a family of pentagons. In contrast, the regular pentagon is unique, because it is equilateral and moreover it is equiangular (its five angles are equal; the measure is 108 degrees). Four intersecting equal circles arranged in a closed chain are sufficient to determine a convex equilateral pentagon. Each circle's center is one of four vertices of the pentagon. The remaining vertex is determined by one of the intersection points of the first and the last circle of the chain. Examples The example gallery (images omitted) groups equilateral pentagons as simple (convex or concave), with collinear edges, or complex (self-intersecting); its captions are: regular pentagon (108° internal angles); adjacent right angles (60°, 150°, 90°, 90°, 150°); reflexed regular pentagon (36°, 252°, 36°, 108°, 108°); dodecagonal versatile[1] (30°, 210°, 60°, 90°, 150°); degenerate into trapezoid (120°, 120°, 60°, 180°, 60°); regular star pentagram (36°); intersecting (36°, 108°, −36°, −36°, 108°); degenerate into triangle (≈28.07°, 180°, ≈75.96°, ≈75.96°, 180°); self-intersecting; degenerate (edge-vertex overlap). Internal angles of a convex equilateral pentagon When a convex equilateral pentagon is dissected into triangles, two of them appear as isosceles (triangles in orange and blue) while the other one is more general (triangle in green). We assume that we are given the adjacent angles $\alpha $ and $\beta $. According to the law of sines the length of the line dividing the green and blue triangles is: $a=2\sin \left({\frac {\beta }{2}}\right).$ The square of the length of the line dividing the orange and green triangles is: ${\begin{aligned}b^{2}&=1+a^{2}-2(1)(a)\cos \left(\alpha -{\frac {\pi }{2}}+{\frac {\beta }{2}}\right)\\&=1+4\sin ^{2}\left({\frac {\beta }{2}}\right)-4\sin \left({\frac {\beta }{2}}\right)\sin \left(\alpha +{\frac {\beta }{2}}\right).\\\end{aligned}}$ According to the law of cosines, the cosine of δ can be seen from the figure: $\cos(\delta )={\frac {1^{2}+1^{2}-b^{2}}{2(1)(1)}}\ .$ Simplifying, δ is obtained as function of α and β: $\delta =\arccos \left[\cos(\alpha )+\cos(\beta )-\cos(\alpha +\beta )-{\frac {1}{2}}\right].$ The remaining angles of the pentagon can be found geometrically: The remaining angles of the orange and blue triangles are readily found by noting that two angles of an isosceles triangle are equal while all three angles sum to 180°. Then $\epsilon ,\gamma ,$ and the two remaining angles of the green triangle can be found from four equations stating that the sum of the angles of the pentagon is 540°, the sum of the angles of the green triangle is 180°, the angle $\gamma $ is the sum of its three components, and the angle $\epsilon $ is the sum of its two components. A cyclic pentagon is equiangular if and only if it has equal sides and thus is regular. Likewise, a tangential pentagon is equilateral if and only if it has equal angles and thus is regular.[2] Tiling There are two infinite families of equilateral convex pentagons that tile the plane, one having two adjacent supplementary angles and the other having two non-adjacent supplementary angles.
Some of those pentagons can tile in more than one way, and there is one sporadic example of an equilateral pentagon that can tile the plane but does not belong to either of those two families; its angles are roughly 89°16', 144°32.5', 70°55', 135°22', and 99°54.5', no two supplementary.[3] A two-dimensional mapping Equilateral pentagons can intersect themselves either not at all, once, twice, or five times. The ones that don't intersect themselves are called simple, and they can be classified as either convex or concave. We here use the term "stellated" to refer to the ones that intersect themselves either twice or five times. We rule out, in this section, the equilateral pentagons that intersect themselves precisely once. Given that we rule out the pentagons that intersect themselves once, we can plot the rest as a function of two variables in the two-dimensional plane. Each pair of values (α, β) maps to a single point of the plane and also maps to a single pentagon. The periodicity of the values of α and β and the condition α ≥ β ≥ δ permit the size of the mapping to be limited. In the plane with coordinate axes α and β, the equation α = β is a line dividing the plane in two parts (south border shown in orange in the drawing). The equation δ = β as a curve divides the plane into different sections (north border shown in blue). Both borders enclose a continuous region of the plane whose points map to unique equilateral pentagons. Points outside the region just map to repeated pentagons—that is, pentagons that when rotated or reflected can match others already described. Pentagons that map exactly onto those borders have a line of symmetry. Inside the region of unique mappings there are three types of pentagons: stellated, concave and convex, separated by new borders. Stellated The stellated pentagons have sides intersected by others. A common example of this type of pentagon is the pentagram. A condition for a pentagon to be stellated, or self-intersecting, is to have 2α + β ≤ 180°. So, in the mapping, the line 2α + β = 180° (shown in orange at the north) is the border between the regions of stellated and non-stellated pentagons. Pentagons which map exactly to this border have a vertex touching another side. Concave The concave pentagons are non-stellated pentagons having at least one angle greater than 180°. The first angle which opens wider than 180° is γ, so the equation γ = 180° (border shown in green at right) is a curve which is the border of the regions of concave pentagons and others, called convex. Pentagons which map exactly to this border have at least two consecutive sides appearing as a double length side, which resembles a pentagon degenerated to a quadrilateral. Convex The convex pentagons have all of their five angles smaller than 180° and no sides intersecting others. A common example of this type of pentagon is the regular pentagon. References 1. Grünbaum, B. and Shephard, G.C., 1979. Spiral tilings and versatiles. Mathematics Teaching, 88, pp. 50-51. Spiral Tilings, Paul Gailiunas 2. De Villiers, Michael, "Equiangular cyclic and equilateral circumscribed polygons", Mathematical Gazette 95, March 2011, 102-107. 3. Schattschneider, Doris (1978), "Tiling the plane with congruent pentagons", Mathematics Magazine, 51 (1): 29–44, doi:10.1080/0025570X.1978.11976672, JSTOR 2689644, MR 0493766
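The closed-form expression for $\delta $ in terms of the adjacent angles $\alpha $ and $\beta $ given earlier is easy to check numerically. A minimal Python sketch (the function name and conventions are ours, added for illustration):

```python
import math

def delta(alpha: float, beta: float) -> float:
    """Angle delta of a convex equilateral pentagon with unit sides,
    given the two adjacent angles alpha and beta (in radians)."""
    return math.acos(math.cos(alpha) + math.cos(beta)
                     - math.cos(alpha + beta) - 0.5)

# Sanity check: in the regular pentagon every angle is 108 degrees.
a = b = math.radians(108)
print(math.degrees(delta(a, b)))  # 108.0, up to floating-point rounding
```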
Reflexive closure In mathematics, the reflexive closure of a binary relation $R$ on a set $X$ is the smallest reflexive relation on $X$ that contains $R.$ A relation is called reflexive if it relates every element of $X$ to itself. For example, if $X$ is a set of distinct numbers and $xRy$ means "$x$ is less than $y$", then the reflexive closure of $R$ is the relation "$x$ is less than or equal to $y$". Definition The reflexive closure $S$ of a relation $R$ on a set $X$ is given by $S=R\cup \{(x,x):x\in X\}$ In plain English, the reflexive closure of $R$ is the union of $R$ with the identity relation on $X.$ Example As an example, if $X=\{1,2,3,4\}$ $R=\{(1,1),(2,2),(3,3),(4,4)\}$ then the relation $R$ is already reflexive by itself, so it does not differ from its reflexive closure. However, if any of the pairs in $R$ was absent, it would be inserted for the reflexive closure. For example, if on the same set $X$ $R=\{(1,1),(2,2),(4,4)\}$ then the reflexive closure is $S=R\cup \{(x,x):x\in X\}=\{(1,1),(2,2),(3,3),(4,4)\}.$ See also • Symmetric closure – operation on binary relations • Transitive closure – Smallest transitive relation containing a given binary relation
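The definition translates directly into code. A minimal Python sketch (names are ours) reproducing the worked example above:

```python
def reflexive_closure(relation, universe):
    """Smallest reflexive relation on `universe` containing `relation`:
    the union of `relation` with the identity relation on `universe`."""
    return set(relation) | {(x, x) for x in universe}

X = {1, 2, 3, 4}
R = {(1, 1), (2, 2), (4, 4)}   # the pair (3, 3) is missing
print(sorted(reflexive_closure(R, X)))
# [(1, 1), (2, 2), (3, 3), (4, 4)]
```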
Reflexive sheaf In algebraic geometry, a reflexive sheaf is a coherent sheaf that is isomorphic to its second dual (as a sheaf of modules) via the canonical map. The second dual of a coherent sheaf is called the reflexive hull of the sheaf. A basic example of a reflexive sheaf is a locally free sheaf of finite rank and, in practice, a reflexive sheaf is thought of as a kind of vector bundle modulo some singularity. The notion is important both in scheme theory and complex algebraic geometry. For the theory of reflexive sheaves, one works over an integral noetherian scheme. A reflexive sheaf is torsion-free. The dual of a coherent sheaf is reflexive.[1] Usually, the product of reflexive sheaves is defined as the reflexive hull of their tensor product (so that the result is reflexive). A coherent sheaf F is said to be "normal" in the sense of Barth if the restriction $F(U)\to F(U-Y)$ is bijective for every open subset U and a closed subset Y of U of codimension at least 2. With this terminology, a coherent sheaf on an integral normal scheme is reflexive if and only if it is torsion-free and normal in the sense of Barth.[2] A reflexive sheaf of rank one on an integral locally factorial scheme is invertible.[3] A divisorial sheaf on a scheme X is a rank-one reflexive sheaf that is locally free at the generic points of the conductor DX of X.[4] For example, a canonical sheaf (dualizing sheaf) on a normal projective variety is a divisorial sheaf. See also • Torsionless module • Torsion sheaf • Twisted sheaf Notes 1. Hartshorne 1980, Corollary 1.2. 2. Hartshorne 1980, Proposition 1.6. 3. Hartshorne 1980, Proposition 1.9. 4. Kollár, Ch. 3, § 1. References • Hartshorne, R. (1980). "Stable reflexive sheaves". Math. Ann. 254 (2): 121–176. doi:10.1007/BF01467074. S2CID 122336784. • Hartshorne, R. (1982). "Stable reflexive sheaves. II". Invent. Math. 66: 165–190. Bibcode:1982InMat..66..165H. doi:10.1007/BF01404762. S2CID 122374039. • Kollár, János. "Chapter 3". Book on Moduli of Surfaces. Further reading • Greb, Daniel; Kebekus, Stefan; Kovacs, Sandor J.; Peternell, Thomas (2011). "Differential Forms on Log Canonical Spaces". Publications mathématiques de l'IHÉS. 114: 87–169. arXiv:1003.2913. doi:10.1007/s10240-011-0036-0. S2CID 115177340. External links • Reflexive sheaves on singular surfaces • Push-forward of locally free sheaves • http://www-personal.umich.edu/~kschwede/GeneralizedDivisors.pdf
Reform mathematics Reform mathematics is an approach to mathematics education, particularly in North America. It is based on principles explained in 1989 by the National Council of Teachers of Mathematics (NCTM). The NCTM document Curriculum and Evaluation Standards for School Mathematics (CESSM) set forth a vision for K–12 (ages 5–18) mathematics education in the United States and Canada. The CESSM recommendations were adopted by many local- and federal-level education agencies during the 1990s. In 2000, the NCTM revised its CESSM with the publication of Principles and Standards for School Mathematics (PSSM). Like those in the first publication, the updated recommendations became the basis for many states' mathematics standards, and for the methods used in textbooks developed by many federally-funded projects. The CESSM de-emphasized manual arithmetic in favor of students developing their own conceptual thinking and problem solving. The PSSM presents a more balanced view, but still has the same emphases. Mathematics instruction in this style has been labeled standards-based mathematics[1] or reform mathematics.[2] Principles and standards Mathematics education reform built up momentum in the early 1980s, as educators reacted to the "new math" of the 1960s and 1970s. The work of Piaget and other developmental psychologists had shifted the focus of mathematics educators from mathematics content to how children best learn mathematics.[3] The National Council of Teachers of Mathematics summarized the state of current research with the publication of Curriculum and Evaluation Standards in 1989 and Principles and Standards for School Mathematics in 2000, bringing definition to the reform movement in North America.[4] Reform mathematics curricula challenge students to make sense of new mathematical ideas through explorations and projects, often in real-world contexts.[3] Reform texts emphasize written and verbal communication, working in cooperative groups, and making connections between concepts and between representations. In contrast, "traditional" textbooks emphasize procedural mathematics and provide step-by-step examples with skill-building exercises. Traditional mathematics focuses on teaching algorithms that will lead to the correct answer of a particular problem. Because of this focus on application of algorithms, the student of traditional math must apply the specific method that is being taught. Reform mathematics de-emphasizes this algorithmic dependence.[5] Instead of leading students to find the exact answers to specific problems, reform educators focus students on the overall process which leads to an answer. Students' occasional errors are deemed less important than their understanding of an overall thought process. Research has shown that children make fewer mistakes with calculations and remember algorithms longer when they understand the concepts underlying the methods they use. In general, children in reform classes perform at least as well as children in traditional classes on tests of calculation skill, and perform considerably better on tests of problem solving.[6][7][8][9] Controversy Principles and Standards for School Mathematics was championed by educators, administrators and some mathematicians[10] as raising standards for all students; others criticized it for prioritizing the understanding of processes over the learning of standard calculation procedures.
Parents, educators and some mathematicians opposing reform mathematics complained about students becoming confused and frustrated, claiming that the style of instruction was inefficient and characterized by frequent false starts.[11] Proponents of reform mathematics countered that research showed that correctly-applied reform math curricula taught students basic math skills at least as well as curricula used in traditional programs, and additionally that reform math curricula were a more effective tool for teaching students the underlying concepts.[12] Communities that adopted reform curricula generally saw their students' math scores increase.[13] However, one study found that first-grade students with a below-average aptitude in math responded better to teacher-directed instruction.[14] During the 1990s, the large-scale adoption of curricula such as Mathland was criticized for partially or entirely abandoning teaching of standard arithmetic methods such as practicing regrouping or finding common denominators. Protests from groups such as Mathematically Correct led to many districts and states abandoning such textbooks. Some states—such as California—revised their mathematics standards to partially or largely repudiate the basic tenets of reform mathematics, and to re-emphasize mastery of standard mathematics facts and methods. The American Institutes for Research (AIR) reported in 2005 that the NCTM proposals "risk exposing students to unrealistically advanced mathematics content in the early grades."[15] This is in reference to NCTM's recommendation that algebraic concepts, such as understanding patterns and properties like commutativity (2+3=3+2), should be taught as early as first grade. The 2008 National Mathematics Advisory Panel called for a balance between reform and traditional mathematics teaching styles, rather than for a "war" to be waged between the proponents of the two styles.[16] In 2006 NCTM published its Curriculum Focal Points, which made clear that standard algorithms, as well as activities aiming at conceptual understanding, were to be included in all elementary school curricula. A common misconception was that reform educators did not want children to learn the standard methods of arithmetic. As the NCTM Focal Points made clear, such methods were still the ultimate goal, but reformers believed that conceptual understanding should come first. Reform educators believed that such understanding is best pursued by first allowing children to attempt to solve problems using their own understanding and methods. Eventually, under guidance from the teacher, students arrive at an understanding of standard methods. Even the controversial NCTM Standards of 1989 did not call for abandoning standard algorithms, but instead recommended a decreased emphasis on complex paper-and-pencil computation drills, and an increased emphasis on mental computation, estimation skills, thinking strategies for mastering basic facts, and conceptual understanding of arithmetic operations. During the peak of the controversy in the 1990s, unfavorable terminology for reform mathematics appeared in press and web articles, including Where's the math?,[17] anti-math,[18] math for dummies,[19] rainforest algebra,[20] math for women and minorities,[21] and new new math.[22] Most of these critical terms refer to the 1989 Standards rather than the PSSM.
Beginning in 2011, most states adopted the Common Core Standards, which attempted to incorporate reform ideas, rigor (introducing ideas at a younger age), and a leaner math curriculum. See also • A Mathematician's Lament • Education in the United States • Jo Boaler • Mathematically Correct, which opposes the NCTM standards • Mathematics education in the United States • National Council of Teachers of Mathematics • Prof David Klein (California State University Northridge), who opposes the NCTM standards Notes 1. Trafton, P. R.; Reys, B. J.; Wasman, D. G. (2001). "Standards-Based Mathematics Curriculum Materials: A Phrase in Search of a Definition". The Phi Delta Kappan. 83 (3): 259–64. doi:10.1177/003172170108300316. JSTOR 20440108. S2CID 119619052. 2. "Reform Mathematics vs the Basics". Mathematically Sane. Retrieved 2022-10-17. 3. John A. Van de Walle, Elementary and Middle School Mathematics: Teaching Developmentally, Longman, 2001, ISBN 0-8013-3253-2 4. See Van Hiele model for an example of research that influenced the NCTM Standards. 5. The NCTM Calls it "Math" 6. Carpenter, T.P. (1989), "Using Knowledge of Children's Mathematics Thinking in Classroom Teaching: An Experimental Study", American Educational Research Journal, 26 (4): 499–531, doi:10.3102/00028312026004499, S2CID 59384426 7. Villasenor, A.; Kepner, H. S. (1993), "Arithmetic from a Problem-Solving Perspective: An Urban Implementation", Journal for Research in Mathematics Education, 24 (24): 62–70, doi:10.2307/749386, JSTOR 749386 8. Fennema, E.; Carpenter, M. (1992), Davis & Maher (ed.), Learning to Use Children's Mathematics Thinking: A Case Study, Needham Heights, MA: Allyn and Bacon 9. Hiebert, James (1999), "Relationships between Research and the NCTM Standards", Journal for Research in Mathematics Education, 30 (1): 3–19, doi:10.2307/749627, JSTOR 749627 10. The position of the MAA is "We believe that PSSM outlines an ambitious, challenging and idealized program whose implementation would be a vast improvement over the current state of mathematics education." The MAA and the New NCTM Standards 11. Stokke, Anna (May 2015). "What to Do about Canada's Declining Math Scores". Education Policy; commentary #427. C. D. Howe Institute. Retrieved 11 June 2015. 12. "Which Curriculum Is Most Effective in Producing Gains in Students' Learning?". 13. ARC Center (2003), The ARC Center Tri-State Student Achievement Study executive summary (PDF), Bedford, MA: COMAP 14. Morgan, Paul; Farkas, George; Maczuga, Steve (20 June 2014), "Which Instructional Practices Most Help First-Grade Students With and Without Mathematics Difficulties?", Educational Evaluation and Policy Analysis, XX (X): 184–205, doi:10.3102/0162373714536608, PMC 4500292, PMID 26180268 15. "What the United States Can Learn From Singapore's World-Class Mathematics System" (PDF). Retrieved 2008-01-29. 16. Blanchard, Jessica (February 17, 2008). "State's proposed new math standards don't add up, critics say": "the district's elementary and middle-school math textbook choices lack a good balance between reform and traditional learning styles" 17. San Francisco Chronicle: Where's the Math? 18. The State's Invisible Math Standards: "With Zacarias' anti-math policies in force..." 19. Math Framework in California NCTM "A State Dummies Down", editorial, The Business Journal (Sacramento), 10 April 1995 20. Texas adopts textbook rejected by nation: Adoption of "Rainforest Algebra" appears to contradict this logic 21.
David Klein: "This misguided view of women and minorities..." 22. New, New Math = Controversy CBS News 5/28/2000 External links • NCTM standards online 120-day free access, otherwise the public is required to pay to purchase or view the standards.
Completeness (logic) In mathematical logic and metalogic, a formal system is called complete with respect to a particular property if every formula having the property can be derived using that system, i.e. is one of its theorems; otherwise the system is said to be incomplete. The term "complete" is also used without qualification, with differing meanings depending on the context, mostly referring to the property of semantical validity. Intuitively, a system is called complete in this particular sense, if it can derive every formula that is true. Not to be confused with Complete (complexity). Other properties related to completeness Main articles: Soundness and Consistency The property converse to completeness is called soundness: a system is sound with respect to a property (mostly semantical validity) if each of its theorems has that property. Forms of completeness Expressive completeness A formal language is expressively complete if it can express the subject matter for which it is intended. Functional completeness Main article: Functional completeness A set of logical connectives associated with a formal system is functionally complete if it can express all propositional functions. Semantic completeness Semantic completeness is the converse of soundness for formal systems. A formal system is complete with respect to tautologousness or "semantically complete" when all its tautologies are theorems, whereas a formal system is "sound" when all theorems are tautologies (that is, they are semantically valid formulas: formulas that are true under every interpretation of the language of the system that is consistent with the rules of the system). That is, $\models _{\mathcal {S}}\varphi \ \to \ \vdash _{\mathcal {S}}\varphi .$[1] For example, Gödel's completeness theorem establishes semantic completeness for first-order logic. Strong completeness A formal system S is strongly complete or complete in the strong sense if for every set of premises Γ, any formula that semantically follows from Γ is derivable from Γ. That is: $\Gamma \models _{\mathcal {S}}\varphi \ \to \ \Gamma \vdash _{\mathcal {S}}\varphi .$ Refutation completeness A formal system S is refutation-complete if it is able to derive false from every unsatisfiable set of formulas. That is, $\Gamma \models _{\mathcal {S}}\bot \to \ \Gamma \vdash _{\mathcal {S}}\bot .$[2] Every strongly complete system is also refutation-complete. Intuitively, strong completeness means that, given a formula set $\Gamma $, it is possible to compute every semantical consequence $\varphi $ of $\Gamma $, while refutation-completeness means that, given a formula set $\Gamma $ and a formula $\varphi $, it is possible to check whether $\varphi $ is a semantical consequence of $\Gamma $. Examples of refutation-complete systems include: SLD resolution on Horn clauses, superposition on equational clausal first-order logic, Robinson's resolution on clause sets.[3] The latter is not strongly complete: e.g. $\{a\}\models a\lor b$ holds even in the propositional subset of first-order logic, but $a\lor b$ cannot be derived from $\{a\}$ by resolution. However, $\{a,\lnot (a\lor b)\}\vdash \bot $ can be derived. Syntactical completeness A formal system S is syntactically complete or deductively complete or maximally complete if for each sentence (closed formula) φ of the language of the system either φ or ¬φ is a theorem of S. This is also called negation completeness, and is stronger than semantic completeness. 
In another sense, a formal system is syntactically complete if and only if no unprovable sentence can be added to it without introducing an inconsistency. Truth-functional propositional logic and first-order predicate logic are semantically complete, but not syntactically complete (for example, the propositional logic statement consisting of a single propositional variable A is not a theorem, and neither is its negation). Gödel's incompleteness theorem shows that any recursive system that is sufficiently powerful, such as Peano arithmetic, cannot be both consistent and syntactically complete. Structural completeness In superintuitionistic and modal logics, a logic is structurally complete if every admissible rule is derivable. References 1. Hunter, Geoffrey, Metalogic: An Introduction to the Metatheory of Standard First-Order Logic, University of California Press, 1971 2. David A. Duffy (1991). Principles of Automated Theorem Proving. Wiley. Here: sect. 2.2.3.1, p.33 3. Stuart J. Russell, Peter Norvig (1995). Artificial Intelligence: A Modern Approach. Prentice Hall. Here: sect. 9.7, p.286
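The resolution example above, which separates refutation completeness from strong completeness, can be checked mechanically. Below is a minimal propositional resolution loop in Python (a sketch we supply; encoding literals as signed integers is our own convention, not from the cited sources). Saturating $\{a\}$ under resolution never produces $a\lor b$, while the clause form of $\{a,\lnot (a\lor b)\}$, namely $\{a\},\{\lnot a\},\{\lnot b\}$, yields the empty clause.

```python
from itertools import combinations

def resolve(c1, c2):
    """All binary resolvents of two clauses (frozensets of signed ints,
    where the literals p and -p are complementary)."""
    return {frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2}

def saturate(clauses):
    """Close a clause set under binary resolution and return the result."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            new |= resolve(c1, c2)
        if new <= clauses:
            return clauses
        clauses |= new

A, B = 1, 2
# From {a} alone, resolution generates nothing; in particular not (a or b):
print(frozenset({A, B}) in saturate({frozenset({A})}))  # False
# From {a, not(a or b)}, i.e. clauses {a}, {not a}, {not b},
# the empty clause (falsity) is derived:
print(frozenset() in saturate({frozenset({A}),
                               frozenset({-A}),
                               frozenset({-B})}))        # True
```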
Regenerative process In applied probability, regenerative processes are a class of stochastic processes with the property that certain portions of the process can be treated as being statistically independent of each other.[2] This property can be used in the derivation of theoretical properties of such processes. History Regenerative processes were first defined by Walter L. Smith in Proceedings of the Royal Society A in 1955.[3][4] Definition A regenerative process is a stochastic process with time points at which, from a probabilistic point of view, the process restarts itself.[5] These time points may themselves be determined by the evolution of the process. That is to say, the process {X(t), t ≥ 0} is a regenerative process if there exist time points 0 ≤ T0 < T1 < T2 < ... such that the post-Tk process {X(Tk + t) : t ≥ 0} • has the same distribution as the post-T0 process {X(T0 + t) : t ≥ 0} • is independent of the pre-Tk process {X(t) : 0 ≤ t < Tk} for k ≥ 1.[6] Intuitively this means a regenerative process can be split into i.i.d. cycles.[7] When T0 = 0, X(t) is called a nondelayed regenerative process. Otherwise, the process is called a delayed regenerative process.[6] Examples • Renewal processes are regenerative processes, with T1 being the first renewal.[5] • Alternating renewal processes, where a system alternates between an 'on' state and an 'off' state.[5] • A recurrent Markov chain is a regenerative process, with T1 being the time of first recurrence.[5] This includes Harris chains. • Reflected Brownian motion is a regenerative process (where one measures the time it takes particles to leave and come back).[7] Properties • By the renewal reward theorem, with probability 1,[8] $\lim _{t\to \infty }{\frac {1}{t}}\int _{0}^{t}X(s)ds={\frac {\mathbb {E} [R]}{\mathbb {E} [\tau ]}}.$ where $\tau $ is the length of the first cycle and $R=\int _{0}^{\tau }X(s)ds$ is the value over the first cycle. • A measurable function of a regenerative process is a regenerative process with the same regeneration times.[8] References 1. Hurter, A. P.; Kaminsky, F. C. (1967). "An Application of Regenerative Stochastic Processes to a Problem in Inventory Control". Operations Research. 15 (3): 467–472. doi:10.1287/opre.15.3.467. JSTOR 168455. 2. Ross, S. M. (2010). "Renewal Theory and Its Applications". Introduction to Probability Models. pp. 421–641. doi:10.1016/B978-0-12-375686-2.00003-0. ISBN 9780123756862. 3. Schellhaas, Helmut (1979). "Semi-Regenerative Processes with Unbounded Rewards". Mathematics of Operations Research. 4: 70–78. doi:10.1287/moor.4.1.70. JSTOR 3689240. 4. Smith, W. L. (1955). "Regenerative Stochastic Processes". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 232 (1188): 6–31. Bibcode:1955RSPSA.232....6S. doi:10.1098/rspa.1955.0198. 5. Sheldon M. Ross (2007). Introduction to probability models. Academic Press. p. 442. ISBN 0-12-598062-0. 6. Haas, Peter J. (2002). "Regenerative Simulation". Stochastic Petri Nets. Springer Series in Operations Research and Financial Engineering. pp. 189–273. doi:10.1007/0-387-21552-2_6. ISBN 0-387-95445-7. 7. Asmussen, Søren (2003). "Regenerative Processes". Applied Probability and Queues. Stochastic Modelling and Applied Probability. Vol. 51. pp. 168–185. doi:10.1007/0-387-21525-5_6. ISBN 978-0-387-00211-8. 8.
Sigman, Karl (2009) Regenerative Processes, lecture notes Stochastic processes Discrete time • Bernoulli process • Branching process • Chinese restaurant process • Galton–Watson process • Independent and identically distributed random variables • Markov chain • Moran process • Random walk • Loop-erased • Self-avoiding • Biased • Maximal entropy Continuous time • Additive process • Bessel process • Birth–death process • pure birth • Brownian motion • Bridge • Excursion • Fractional • Geometric • Meander • Cauchy process • Contact process • Continuous-time random walk • Cox process • Diffusion process • Empirical process • Feller process • Fleming–Viot process • Gamma process • Geometric process • Hawkes process • Hunt process • Interacting particle systems • Itô diffusion • Itô process • Jump diffusion • Jump process • Lévy process • Local time • Markov additive process • McKean–Vlasov process • Ornstein–Uhlenbeck process • Poisson process • Compound • Non-homogeneous • Schramm–Loewner evolution • Semimartingale • Sigma-martingale • Stable process • Superprocess • Telegraph process • Variance gamma process • Wiener process • Wiener sausage Both • Branching process • Galves–Löcherbach model • Gaussian process • Hidden Markov model (HMM) • Markov process • Martingale • Differences • Local • Sub- • Super- • Random dynamical system • Regenerative process • Renewal process • Stochastic chains with memory of variable length • White noise Fields and other • Dirichlet process • Gaussian random field • Gibbs measure • Hopfield model • Ising model • Potts model • Boolean network • Markov random field • Percolation • Pitman–Yor process • Point process • Cox • Poisson • Random field • Random graph Time series models • Autoregressive conditional heteroskedasticity (ARCH) model • Autoregressive integrated moving average (ARIMA) model • Autoregressive (AR) model • Autoregressive–moving-average (ARMA) model • Generalized autoregressive conditional heteroskedasticity (GARCH) model • Moving-average (MA) model Financial models • Binomial options pricing model • Black–Derman–Toy • Black–Karasinski • Black–Scholes • Chan–Karolyi–Longstaff–Sanders (CKLS) • Chen • Constant elasticity of variance (CEV) • Cox–Ingersoll–Ross (CIR) • Garman–Kohlhagen • Heath–Jarrow–Morton (HJM) • Heston • Ho–Lee • Hull–White • LIBOR market • Rendleman–Bartter • SABR volatility • Vašíček • Wilkie Actuarial models • Bühlmann • Cramér–Lundberg • Risk process • Sparre–Anderson Queueing models • Bulk • Fluid • Generalized queueing network • M/G/1 • M/M/1 • M/M/c Properties • Càdlàg paths • Continuous • Continuous paths • Ergodic • Exchangeable • Feller-continuous • Gauss–Markov • Markov • Mixing • Piecewise-deterministic • Predictable • Progressively measurable • Self-similar • Stationary • Time-reversible Limit theorems • Central limit theorem • Donsker's theorem • Doob's martingale convergence theorems • Ergodic theorem • Fisher–Tippett–Gnedenko theorem • Large deviation principle • Law of large numbers (weak/strong) • Law of the iterated logarithm • Maximal ergodic theorem • Sanov's theorem • Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy) Inequalities • Burkholder–Davis–Gundy • Doob's martingale • Doob's upcrossing • Kunita–Watanabe • Marcinkiewicz–Zygmund Tools • Cameron–Martin formula • Convergence of random variables • Doléans-Dade exponential • Doob decomposition theorem • Doob–Meyer decomposition theorem • Doob's optional stopping theorem • Dynkin's formula • Feynman–Kac formula • 
Regev's theorem

In abstract algebra, Regev's theorem, proved by Amitai Regev (1971, 1972), states that the tensor product of two PI algebras is again a PI algebra. (A symbolic statement is sketched after the references below.)

References

• Regev, Amitai (1971), "Existence of polynomial identities in A ⊗_F B", Bulletin of the American Mathematical Society, 77 (6): 1067–1069, doi:10.1090/S0002-9904-1971-12869-0, ISSN 0002-9904, MR 0284468
• Regev, Amitai (1972), "Existence of identities in A ⊗ B", Israel Journal of Mathematics, 11 (2): 131–152, doi:10.1007/BF02762615, ISSN 0021-2172, MR 0314893
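For readers who want the statement in symbols, here is a standard textbook formalization (a sketch, not quoted from Regev's papers): an algebra A over a field F is a PI (polynomial identity) algebra when some nonzero element of the free associative algebra vanishes identically on A.

```latex
% PI algebra: A admits a nonzero polynomial identity over F, where
% f lies in the free associative algebra on x_1, ..., x_n.
\exists\, f \in F\langle x_1,\dots,x_n\rangle,\ f \neq 0,
\qquad f(a_1,\dots,a_n) = 0 \quad \text{for all } a_1,\dots,a_n \in A.

% Regev's theorem: the PI property is closed under tensor products.
A,\ B \ \text{PI algebras over } F
\ \Longrightarrow\ A \otimes_F B \ \text{is a PI algebra.}
```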
Regina (program)

Regina is a suite of mathematical software for 3-manifold topologists. It focuses on the study of 3-manifold triangulations and includes support for normal surfaces and angle structures.[1]

Regina
Original author(s): Ben Burton, David Letscher, Richard Rannard, Hyam Rubinstein
Developer(s): Ben Burton, Ryan Budney, William Pettersson
Initial release: December 2000
Stable release: 7.1 (September 2022)
Repository: github.com/regina-normal/regina
Written in: C++, Python
Operating system: Linux, Unix-like, macOS, Microsoft Windows, iOS
Available in: English
Type: Mathematical software
License: GPL
Website: regina-normal.github.io

Features

• Regina implements a variant of Rubinstein's 3-sphere recognition algorithm, which determines whether or not a triangulated 3-manifold is homeomorphic to the 3-sphere.
• Regina further implements connected sum decomposition, which decomposes a triangulated 3-manifold into a connected sum of triangulated prime 3-manifolds.
• Homology and Poincaré duality for 3-manifolds, including the torsion linking form.
• Includes portions of the SnapPea kernel for some geometric calculations.
• Has both a GUI and a Python interface (a usage sketch follows the references below).

See also

• Computational topology

References

1. "ORMS - Regina". orms.mfo.de. Retrieved 2022-10-11.
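The following is a minimal sketch of a session with the Python interface. It assumes the Regina 7 bindings; the names used here (Example3, sphere(), poincareHomologySphere(), homology(), isSphere()) are recalled from the project's documentation and may differ between versions (older releases used isThreeSphere()), so treat them as assumptions and consult regina-normal.github.io before relying on them.

```python
# Minimal sketch of Regina's Python bindings (assumes Regina 7; the
# class/method names below are assumptions and may vary by version).
import regina

# A built-in triangulation of the 3-sphere.
sphere = regina.Example3.sphere()
print(sphere.isSphere())    # 3-sphere recognition: expected True

# The Poincaré homology sphere has trivial first homology yet is not
# homeomorphic to S^3, so recognition must go beyond homology alone.
phs = regina.Example3.poincareHomologySphere()
print(phs.homology())       # expected: trivial first homology group
print(phs.isSphere())       # expected: False
```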
Regina S. Burachik

Regina Sandra Burachik is an Argentine[1] mathematician who works on optimization and analysis (particularly convex analysis, functional analysis, and non-smooth analysis). She is currently a professor at the University of South Australia.[2]

Regina S. Burachik
Nationality: Argentine
Alma mater: Instituto Nacional de Matemática Pura e Aplicada
Thesis: Generalized Proximal Point Method for the Variational Inequality Problem (1995)
Doctoral advisor: Alfredo Noel Iusem
Discipline: Mathematics
Sub-discipline: Mathematical optimization, mathematical analysis
Institutions: University of South Australia

She earned her Ph.D. from IMPA in 1995 under the supervision of Alfredo Noel Iusem (Generalized Proximal Point Method for the Variational Inequality Problem).[3] In her thesis, she "introduced and analyzed solution methods for variational inequalities, the latter being a generalization of the convex constrained optimization problem."[4] (A generic sketch of the classical proximal point iteration, which this line of work generalizes, follows the external links below.)

Selected publications

Articles
• with A. N. Iusem and B. F. Svaiter. "Enlargement of monotone operators with applications to variational inequalities", Set-Valued Analysis
• with A. N. Iusem. "A generalized proximal point algorithm for the variational inequality problem in a Hilbert space", SIAM Journal on Optimization
• with A. N. Iusem. "Set-valued mappings & enlargements of monotone operators", Optimization and Its Applications
• with B. F. Svaiter. "Maximal monotone operators, convex functions and a special family of enlargements", Set-Valued Analysis

Books
• with Iusem: Set-Valued Mappings and Enlargements of Monotone Operators (2007)
• Variational Analysis and Generalized Differentiation in Optimization and Control (2010, as editor)

References

1. "Ministério do Trabalho e Previdência". Ministério do Trabalho e Previdência.
2. Unisanet: Burachik
3. Regina Sandra Burachik at the Mathematics Genealogy Project
4. "Federal University of Rio de Janeiro: PESC".

External links

• Regina S. Burachik publications indexed by Google Scholar
• Page at the University of South Australia
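As noted above, here is a generic, textbook-style sketch of the classical proximal point iteration for minimizing a convex function, shown for f(x) = |x|, whose proximal map is soft-thresholding. It is meant only to illustrate the kind of method Burachik's thesis generalizes to variational inequalities; it is not her algorithm, and the step parameter is an arbitrary illustrative choice.

```python
def prox_abs(x: float, lam: float) -> float:
    """Proximal map of f(x) = |x|: argmin_y |y| + (1/(2*lam)) * (y - x)**2,
    i.e. soft-thresholding of x at level lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Proximal point iteration x_{k+1} = prox_{lam * f}(x_k) for f(x) = |x|.
x, lam = 5.0, 0.8        # starting point and step parameter (illustrative)
for _ in range(20):
    x = prox_abs(x, lam)
print(x)                 # converges to the unique minimizer x* = 0.0
```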