Roselyn E. Williams
Roselyn Elaine Williams is an American mathematician who is an Associate Professor and former chair of the mathematics department at Florida Agricultural and Mechanical University. Her decades-long involvement in the National Association of Mathematicians includes a 14-year term as secretary-treasurer.
Dr. Roselyn Williams
Born: November 1, 1950, Tallahassee, FL, USA
Known for: Support and promotion of HBCUs and her commitment to NAM
Scientific career
Doctoral advisor: Warren Douglas Nichols
Research
Williams' dissertation was in the field of finite-dimensional Hopf algebras, and she also has research interests in applications of mathematics to physics and chemistry.[1]
Education
Williams attended Spelman College in Atlanta, Georgia, where she was mentored by Dr. Etta Falconer who was chair of the mathematics department at the time.[2] She graduated in 1972 with a Bachelor of Science degree in mathematics. She then went to the University of Florida where she was the first African American master's degree student in mathematics.[2] After some time away from school, she returned to her graduate studies at Florida State University and earned a PhD in 1988 under the advisement of Warren Douglas Nichols.[3]
Career
After earning her master's degree, Williams became an instructor at Florida Agricultural and Mechanical University in Tallahassee, Florida, for five years before returning to graduate school to earn a PhD.[4] She returned to Florida Agricultural and Mechanical University as an Associate Professor of Mathematics after completing her doctoral degree in 1988.[1] While chair of the mathematics department she co-founded the Alliance for the Production of African American PhDs in the Mathematical Sciences, now known as the National Alliance for Doctoral Studies in the Mathematical Sciences.[5][6] Williams has been deeply involved in the National Association of Mathematicians (NAM), including serving as its Secretary-Treasurer from 2005 to 2019. She has won several National Science Foundation (NSF) grants, many for undergraduate research experiences and other programs at historically Black colleges and universities.[7] Additionally, in 2011, she was the local coordinator for the EDGE (Enhancing Diversity in Graduate Education) program, which supports women starting graduate degrees in mathematics.[8]
Honors
Williams was awarded the Dr. Etta Z. Falconer Award for Mentoring and Commitment to Diversity at the Infinite Possibilities Conference in 2012.[1] In 2020, she became an Association for Women in Mathematics (AWM) fellow for "her lifelong promotion of Historically Black Colleges and Universities and support of the EDGE Program; for her unwavering dedication to the National Association of Mathematicians; and for her unsung work to create AIM/ICERM’s REUF and the National Math Alliance."[9] Also in 2020, Williams was awarded the NAM Lifetime Achievement Award, which is "given to a Mathematician-Mathematics Educator who has provided at least twenty-five years of exemplary service to the mathematical sciences community and who has affirmed by others as having been the kind of professional and role model whose professional life has made a difference, a professional life worthy of emulating."[6][10]
References
1. "Department of Mathematics- Florida Agricultural and Mechanical University2020". famu.edu. Retrieved 2020-06-03.
2. "Roselyn Williams's Biography". The HistoryMakers. Retrieved 2020-06-03.
3. "Roselyn Williams - The Mathematics Genealogy Project". www.genealogy.math.ndsu.nodak.edu. Retrieved 2020-06-03.
4. "Roselyn Williams | Math Alliance: The National Alliance for Doctoral Studies in the Mathematical Sciences". Retrieved 2020-06-03.
5. "History of the Alliance | Math Alliance: The National Alliance for Doctoral Studies in the Mathematical Sciences". Retrieved 2020-06-04.
6. NAM Newsletter, Spring 2020 (PDF). nam-math.org: https://www.nam-math.org/include/pages/files/newsletters/2020%20Spring.pdf
7. "NSF Award Search: Simple Search Results". www.nsf.gov. Retrieved 2020-06-03.
8. "EDGE for Women". Retrieved June 3, 2020.{{cite web}}: CS1 maint: url-status (link)
9. "AWM Fellows". June 3, 2020.{{cite web}}: CS1 maint: url-status (link)
10. "Lifetime Achievement Award". www.nam-math.org. Retrieved 2020-06-04.
|
Wikipedia
|
Rosemary A. Bailey
Rosemary A. Bailey FRSE (born 1947) is a British statistician who works in the design of experiments and the analysis of variance and in related areas of combinatorial design, especially in association schemes. She has written books on the design of experiments, on association schemes, and on linear models in statistics.
Rosemary A. Bailey
Born: 1947
Citizenship: British
Alma mater: University of Oxford, England
Scientific career
Fields: Design of experiments, analysis of variance
Institutions: Mathematical Sciences Institute of Queen Mary, University of London, England
Thesis: Finite Permutation Groups (1974)
Doctoral advisor: Graham Higman
Website: www.maths.qmul.ac.uk/~rab/
Education and career
Bailey's first degree and Ph.D. were in mathematics at the University of Oxford. She was awarded her doctorate in 1974 for a dissertation on permutation groups, Finite Permutation Groups, supervised by Graham Higman.[1] Bailey's career has not been in pure mathematics but in statistics, where she has specialised in the algebraic problems associated with the design of experiments.
Bailey worked at the University of Edinburgh with David Finney and at The Open University. She spent 1981–91 in the Statistics Department of Rothamsted Experimental Station. In 1991 Bailey became Professor of Mathematical Sciences at Goldsmiths College in the University of London and then Professor of Statistics at Queen Mary, University of London where she is Professor Emerita of Statistics. She is currently Professor of Mathematics and Statistics in the School of Mathematics and Statistics at the University of St Andrews, Scotland.
Recognition
Bailey is a Fellow of the Institute of Mathematical Statistics[2] and in 2015 was elected a Fellow of the Royal Society of Edinburgh.[3]
Selected publications
• Bailey, R. A. (1994). Normal linear models. London: External Advisory Service, University of London. ISBN 0-7187-1176-9.
• Bailey, R. A. (2004). Association Schemes: Designed Experiments, Algebra and Combinatorics. Cambridge: Cambridge University Press. ISBN 978-0-521-82446-0.[4]
• Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge: Cambridge University Press. ISBN 978-0-521-68357-9.
• Speed, T. P.; Bailey, R. A. (1987). "Factorial Dispersion Models". International Statistical Review / Revue Internationale de Statistique. International Statistical Institute (ISI). 55 (3): 261–277. doi:10.2307/1403405. JSTOR 1403405.
References
1. Rosemary A. Bailey at the Mathematics Genealogy Project
2. Honored Fellows, Institute of Mathematical Statistics, retrieved 24 November 2017
3. "Professor Rosemary Anne Bailey FRSE - The Royal Society of Edinburgh". The Royal Society of Edinburgh. Retrieved 12 February 2018.
4. Zieschang, Paul-Hermann (2006). "Review: Association schemes: Designed experiments, algebra and combinatorics, by R. A. Bailey" (PDF). Bull. Amer. Math. Soc. (N.S.). 43 (2): 249–253. doi:10.1090/s0273-0979-05-01077-3.
External links
• Homepage of Professor Bailey at Queen Mary University of London
• Homepage of Professor Bailey at the School of Mathematics and Statistics, University of St Andrews
• R.A. Bailey at theoremoftheday.org
|
Wikipedia
|
Rosemary Renaut
Rosemary Anne Renaut is a British and American[1] computational mathematician whose research interests include inverse problems and regularization with applications to medical imaging and seismic analysis. She is a professor in the School of Mathematical and Statistical Sciences at Arizona State University.
Education and career
Renaut earned a bachelor's degree in 1980 at Durham University and then studied for Part III of the Mathematical Tripos in applied mathematics at the University of Cambridge. She completed her Ph.D. at Cambridge in 1985.[1] Her dissertation, Numerical Solution of Hyperbolic Partial Differential Equations, was supervised by Arieh Iserles.[2]
After postdoctoral research at RWTH Aachen University in Germany and the Chr. Michelsen Institute in Norway, she joined the Arizona State University faculty as an assistant professor in 1987. She was promoted to associate professor in 1991 and full professor in 1996, and chaired the Department of Mathematics from 1997 to 2001.[1]
She has also visited multiple other institutions, including a term as John von Neumann Professor at the Technical University of Munich in 2001–2002, and terms as program director for computational mathematics and mathematical biology at the National Science Foundation from 2008 to 2011 and 2014 to 2017.[1]
Recognition
Renaut has been a Fellow of the Institute of Mathematics and its Applications since 1996.[1] She was elected as a Fellow of the Society for Industrial and Applied Mathematics, in the 2022 Class of SIAM Fellows, "for contributions to ill-posed inverse problems and regularization, geophysical and medical imaging, and high order numerical methods".[3]
References
1. Curriculum vitae, retrieved 2022-04-02
2. Rosemary Renaut at the Mathematics Genealogy Project
3. "SIAM Announces Class of 2022 Fellows", SIAM News, 31 March 2022, retrieved 2022-03-31
External links
• Home page
• Rosemary Renaut publications indexed by Google Scholar
|
Wikipedia
|
Rosenau–Hyman equation
The Rosenau–Hyman equation or K(n,n) equation is a KdV-like equation having compacton solutions. This nonlinear partial differential equation is of the form[1]
$u_{t}+a(u^{n})_{x}+(u^{n})_{xxx}=0.\,$
The equation is named after Philip Rosenau and James M. Hyman, who used it in their 1993 study of compactons.[2]
The K(n,n) equation has the following traveling wave solutions:
• when a > 0
$u(x,t)=\left({\frac {2cn}{a(n+1)}}\sin ^{2}\left({\frac {n-1}{2n}}{\sqrt {a}}(x-ct+b)\right)\right)^{1/(n-1)},$
• when a < 0
$u(x,t)=\left({\frac {2cn}{a(n+1)}}\sinh ^{2}\left({\frac {n-1}{2n}}{\sqrt {-a}}(x-ct+b)\right)\right)^{1/(n-1)},$
$u(x,t)=\left({\frac {2cn}{a(n+1)}}\cosh ^{2}\left({\frac {n-1}{2n}}{\sqrt {-a}}(x-ct+b)\right)\right)^{1/(n-1)}.$
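The closed-form solutions above can be spot-checked against the equation. The sketch below, which assumes SymPy is available, substitutes the a > 0 solution with sample parameters (n = 2, a = c = 1, b = 0) into the K(n,n) equation and checks that the residual vanishes numerically at a few points:

```python
import sympy as sp

x, t = sp.symbols('x t')
n, a, b, c = 2, 1, 0, 1          # sample parameters for the a > 0 branch

# traveling-wave profile from the article
u = (sp.Rational(2*c*n, a*(n+1))
     * sp.sin(sp.Rational(n-1, 2*n)*sp.sqrt(a)*(x - c*t + b))**2)**sp.Rational(1, n-1)

# residual of u_t + a*(u^n)_x + (u^n)_xxx
res = sp.diff(u, t) + a*sp.diff(u**n, x) + sp.diff(u**n, x, 3)
res_fn = sp.lambdify((x, t), res)

# the residual should vanish identically; spot-check a few points
for xv, tv in [(0.3, 0.1), (1.7, 0.5), (-2.2, 1.3)]:
    assert abs(res_fn(xv, tv)) < 1e-9
```

The same check works for the a < 0 branches after swapping sin for sinh or cosh and the sign under the square root.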
References
1. Polyanin, Andrei D.; Zaitsev, Valentin F. (28 October 2002), Handbook of Nonlinear Partial Differential Equations (Second ed.), CRC Press, p. 891, ISBN 1584882972
2. Rosenau, Philip; Hyman, James M. (1993), "Compactons: Solitons with finite wavelength", Physical Review Letters, American Physical Society, 70 (5): 564–567, Bibcode:1993PhRvL..70..564R, doi:10.1103/PhysRevLett.70.564, PMID 10054146
|
Wikipedia
|
Rosenbrock methods
Rosenbrock methods refers to either of two distinct ideas in numerical computation, both named for Howard H. Rosenbrock.
Numerical solution of differential equations
Rosenbrock methods for stiff differential equations are a family of single-step methods for solving ordinary differential equations.[1][2] They are related to the implicit Runge–Kutta methods[3] and are also known as Kaps–Rentrop methods.[4]
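As a minimal illustration of the linearly implicit idea behind this family (a sketch, not any particular Kaps–Rentrop scheme), the simplest first-order Rosenbrock step replaces the nonlinear solve of implicit Euler with a single linear solve using the Jacobian of f; the function names here are illustrative:

```python
def ros1_step(f, dfdy, y, h, gamma=1.0):
    # One step of the simplest (first-order) Rosenbrock method for y' = f(y):
    # solve (1 - h*gamma*J) k = h*f(y) with J = f'(y), then y_new = y + k.
    # One linear solve per step -- no Newton iteration.
    k = h * f(y) / (1.0 - h * gamma * dfdy(y))
    return y + k

# stiff scalar test problem y' = -50*y: with h = 0.1, explicit Euler has
# |1 + h*lambda| = 4 and blows up, but the Rosenbrock step stays stable
f = lambda y: -50.0 * y
J = lambda y: -50.0
y = 1.0
for _ in range(20):
    y = ros1_step(f, J, y, 0.1)
assert abs(y) < 1e-10     # solution has decayed toward zero, as it should
```

For systems, the scalar division becomes a linear solve with the matrix I − hγJ; practical Rosenbrock codes use several stages and embedded error estimates.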
Search method
Rosenbrock search is a numerical optimization algorithm applicable to optimization problems in which the objective function is inexpensive to compute and the derivative either does not exist or cannot be computed efficiently.[5] The idea of Rosenbrock search is also used to initialize some root-finding routines, such as fzero (based on Brent's method) in Matlab. Rosenbrock search is a form of derivative-free search, but one that may perform better than other such methods on functions with sharp ridges.[6] The method often identifies such a ridge which, in many applications, leads to a solution.[7]
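The search idea can be sketched as follows. This is an illustrative implementation under the usual description of the method, not Rosenbrock's original code; the parameter values (initial step 0.5, expansion 3, contraction −0.5) are his classic choices, and all names are illustrative:

```python
import numpy as np

def rosenbrock_search(f, x0, step=0.5, alpha=3.0, beta=-0.5,
                      tol=1e-9, max_stages=500):
    # Derivative-free Rosenbrock search (sketch).  Trial moves are made
    # along an orthonormal set of directions; a successful move multiplies
    # that direction's step by alpha, a failure reverses and shrinks it by
    # beta.  After a stage with progress along every direction, the axes
    # are re-oriented toward the accumulated displacement (Gram-Schmidt).
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    D = np.eye(n)                    # current search directions (rows)
    d = np.full(n, step)             # signed step length per direction
    for _ in range(max_stages):
        progress = np.zeros(n)
        for _ in range(100):         # one exploratory stage
            moved = False
            for i in range(n):
                trial = x + d[i] * D[i]
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    progress[i] += d[i]
                    d[i] *= alpha
                    moved = True
                else:
                    d[i] *= beta
            if not moved:
                break
        if np.max(np.abs(d)) < tol:
            break
        if np.all(progress != 0.0):  # rotate the axes toward the progress
            A = [progress[i:] @ D[i:] for i in range(n)]
            newD = []
            for a in A:
                for q in newD:
                    a = a - (a @ q) * q
                nrm = np.linalg.norm(a)
                if nrm < 1e-12:      # numerically degenerate: skip rotation
                    break
                newD.append(a / nrm)
            if len(newD) == n:
                D = np.array(newD)
                d = np.full(n, step)
    return x, fx

# demo on the Rosenbrock "banana" function, the classic sharp-ridge example
banana = lambda z: (1 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
xmin, fmin = rosenbrock_search(banana, [-1.2, 1.0])
assert fmin < 1e-2
```

The axis rotation is what lets the method track a curved ridge: after each productive stage, the first direction aligns with the net displacement, so subsequent trial moves follow the valley floor.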
See also
• Rosenbrock function
• Adaptive coordinate descent
References
1. H. H. Rosenbrock, "Some general implicit processes for the numerical solution of differential equations", The Computer Journal (1963) 5(4): 329-330
2. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 17.5.1. Rosenbrock Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
3. "Archived copy" (PDF). Archived from the original (PDF) on 2013-10-29. Retrieved 2013-05-16.{{cite web}}: CS1 maint: archived copy as title (link)
4. "Rosenbrock Methods".
5. H. H. Rosenbrock, "An Automatic Method for Finding the Greatest or Least Value of a Function", The Computer Journal (1960) 3(3): 175-184
6. Leader, Jeffery J. (2004). Numerical Analysis and Scientific Computation. Addison Wesley. ISBN 0-201-73499-0.
7. Shoup, T., Mistree, F., Optimization methods: with applications for personal computers, 1987, Prentice Hall, pg. 120
External links
• http://www.applied-mathematics.net/optimization/rosenbrock.html
|
Wikipedia
|
Kummer surface
In algebraic geometry, a Kummer quartic surface, first studied by Ernst Kummer (1864), is an irreducible nodal surface of degree 4 in $\mathbb {P} ^{3}$ with the maximal possible number of 16 double points. Any such surface is the Kummer variety of the Jacobian variety of a smooth hyperelliptic curve of genus 2; i.e. a quotient of the Jacobian by the Kummer involution x ↦ −x. The Kummer involution has 16 fixed points: the 16 2-torsion points of the Jacobian, and they are the 16 singular points of the quartic surface. Resolving the 16 double points of the quotient of a (possibly nonalgebraic) torus by the Kummer involution gives a K3 surface with 16 disjoint rational curves; these K3 surfaces are also sometimes called Kummer surfaces.
Other surfaces closely related to Kummer surfaces include Weddle surfaces, wave surfaces, and tetrahedroids.
Geometry of the Kummer surface
Singular quartic surfaces and the double plane model
Let $K\subset \mathbb {P} ^{3}$ be a quartic surface with an ordinary double point p, near which K looks like a quadratic cone. Any projective line through p then meets K with multiplicity two at p, and will therefore meet the quartic K in just two other points. Identifying the lines in $\mathbb {P} ^{3}$ through the point p with $\mathbb {P} ^{2}$, we get a double cover from the blow up of K at p to $\mathbb {P} ^{2}$; this double cover sends $q\neq p$ to the line ${\overline {pq}}$, and sends any line in the tangent cone of p in K to itself. The ramification locus of the double cover is a plane curve C of degree 6, and all the nodes of K other than p map to nodes of C.
By the genus–degree formula, the maximal possible number of nodes on a sextic curve is obtained when the curve is a union of 6 lines, in which case we have 15 nodes. Hence the maximal number of nodes on a quartic is 16, and in this case they are all simple nodes (to show that $p$ is simple, project from another node). A quartic which attains these 16 nodes is called a Kummer quartic, and we will concentrate on them below.
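The node count in the argument above can be made explicit:

```latex
% Six general lines in the plane meet pairwise in \binom{6}{2} = 15 points,
% so the sextic branch curve C = L_1 \cup \dots \cup L_6 has 15 nodes.
% Together with the node p that we projected from, the quartic K has
\binom{6}{2} + 1 \;=\; \frac{6 \cdot 5}{2} + 1 \;=\; 16 \text{ nodes.}
```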
Since $p$ is a simple node, the tangent cone to this point is mapped to a conic under the double cover. This conic is in fact tangent to the six lines (stated without proof). Conversely, given a configuration of a conic and six lines tangent to it in the plane, we may define the double cover of the plane ramified over the union of these 6 lines. This double cover may be mapped to $\mathbb {P} ^{3}$ under a map which blows down the double cover of the special conic and is an isomorphism elsewhere (stated without proof).
The double plane and Kummer varieties of Jacobians
Starting from a smooth curve $C$ of genus 2, we may identify the Jacobian $Jac(C)$ with $Pic^{2}(C)$ under the map $x\mapsto x+K_{C}$. We now observe two facts: Since $C$ is a hyperelliptic curve the map from the symmetric product $Sym^{2}C$ to $Pic^{2}C$, defined by $\{p,q\}\mapsto p+q$, is the blow down of the graph of the hyperelliptic involution to the canonical divisor class. Moreover, the canonical map $C\to |K_{C}|^{*}$ is a double cover. Hence we get a double cover $Kum(C)\to Sym^{2}|K_{C}|^{*}$.
This double cover is the one which already appeared above: the 6 lines are the images of the odd symmetric theta divisors on $Jac(C)$, while the conic is the image of the blown-up 0. The conic is isomorphic to the canonical system via the isomorphism $T_{0}Jac(C)\cong |K_{C}|^{*}$, and each of the six lines is naturally isomorphic to the dual canonical system $|K_{C}|^{*}$ via the identification of theta divisors and translates of the curve $C$. There is a 1-1 correspondence between pairs of odd symmetric theta divisors and 2-torsion points on the Jacobian given by the fact that $(\Theta +w_{1})\cap (\Theta +w_{2})=\{w_{1}-w_{2},0\}$, where $w_{1},w_{2}$ are Weierstrass points (which are the odd theta characteristics in genus 2). Hence the branch points of the canonical map $C\to |K_{C}|^{*}$ appear on each of these copies of the canonical system as the intersection points of the lines and the tangency points of the lines and the conic.
Finally, since we know that every Kummer quartic is a Kummer variety of a Jacobian of a hyperelliptic curve, we show how to reconstruct the Kummer quartic surface directly from the Jacobian of a genus 2 curve: the Jacobian of $C$ maps to the complete linear system $|O_{Jac(C)}(2\Theta _{C})|\cong \mathbb {P} ^{2^{2}-1}$ (see the article on Abelian varieties). This map factors through the Kummer variety as a degree 4 map which has 16 nodes at the images of the 2-torsion points on $Jac(C)$.
Level 2 structure
Kummer's $16_{6}$ configuration
There are several crucial points relating the geometric, algebraic, and combinatorial aspects of the configuration of the nodes of the Kummer quartic:
• Any symmetric odd theta divisor on $Jac(C)$ is given by the set of points $\{q-w|q\in C\}$, where w is a Weierstrass point on $C$. This theta divisor contains six 2-torsion points: $w'-w$ such that $w'$ is a Weierstrass point.
• Two odd theta divisors given by Weierstrass points $w,w'$ intersect at $0$ and at $w-w'$.
• The translation of the Jacobian by a two torsion point is an isomorphism of the Jacobian as an algebraic surface, which maps the set of 2-torsion points to itself.
• In the complete linear system $|2\Theta _{C}|$ on $Jac(C)$, any odd theta divisor is mapped to a conic, which is the intersection of the Kummer quartic with a plane. Moreover, this complete linear system is invariant under shifts by 2-torsion points.
Hence we have a configuration of 16 conics in $\mathbb {P} ^{3}$, where each conic contains 6 nodes, and any two of the conics intersect in 2 nodes. This configuration is called the $16_{6}$ configuration or the Kummer configuration.
The Weil Pairing
The 2-torsion points on an Abelian variety admit a symplectic bilinear form called the Weil pairing. In the case of Jacobians of curves of genus two, every nontrivial 2-torsion point is uniquely expressed as a difference between two of the six Weierstrass points of the curve. The Weil pairing is given in this case by $\langle p_{1}-p_{2},p_{3}-p_{4}\rangle =\#\left(\{p_{1},p_{2}\}\cap \{p_{3},p_{4}\}\right)$, taken mod 2. One can recover many of the group-theoretic invariants of the group $Sp_{4}(2)$ via the geometry of the $16_{6}$ configuration.
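The pairing formula admits a small combinatorial sanity check. The sketch below labels the six Weierstrass points 0–5 (labels are illustrative), represents the nonzero 2-torsion points by unordered pairs, and verifies that the pairing, reduced mod 2 since it takes values in a group of order two, is alternating and nondegenerate:

```python
from itertools import combinations

# The six Weierstrass points are labeled 0..5; a nonzero 2-torsion point
# of Jac(C) corresponds to an unordered pair {i, j}, i.e. the class of
# w_i - w_j.  This gives the 15 nonzero 2-torsion points.
torsion = [frozenset(p) for p in combinations(range(6), 2)]

def weil(p, q):
    # the article's formula, #({p1,p2} ∩ {p3,p4}), reduced mod 2
    return len(p & q) % 2

# symplectic sanity checks: the pairing is alternating and nondegenerate
assert all(weil(p, p) == 0 for p in torsion)
assert all(any(weil(p, q) == 1 for q in torsion) for p in torsion)
```

Pairs sharing exactly one Weierstrass point pair nontrivially, while disjoint pairs (and a pair with itself) pair trivially, exactly the orthogonality structure underlying the $16_{6}$ configuration.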
Group theory, algebra and geometry
Below is a list of group-theoretic invariants and their geometric incarnations in the $16_{6}$ configuration.
• Polar lines
• Apolar complexes
• Klein configuration
• Fundamental quadrics
• Fundamental tetrahedra
• Rosenhain tetrads
• Göpel tetrads
References
• Barth, Wolf P.; Hulek, Klaus; Peters, Chris A.M.; Van de Ven, Antonius (2004), Compact Complex Surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol. 4, Springer-Verlag, Berlin, doi:10.1007/978-3-642-57739-0, ISBN 978-3-540-00832-3, MR 2030225
• Dolgachev, Igor (2012), Classical algebraic geometry. A modern view, Cambridge University Press, ISBN 978-1-107-01765-8, MR 2964027
• Hudson, R. W. H. T. (1990), Kummer's quartic surface, Cambridge Mathematical Library, Cambridge University Press, ISBN 978-0-521-39790-2, MR 1097176
• Kummer, Ernst Eduard (1864), "Über die Flächen vierten Grades mit sechzehn singulären Punkten", Monatsberichte der Königlichen Preußischen Akademie der Wissenschaften zu Berlin: 246–260 Reprinted in (Kummer 1975)
• Kummer, Ernst Eduard (1975), Collected Papers: Volume 2: Function Theory, Geometry, and Miscellaneous, Berlin, New York: Springer-Verlag, ISBN 978-0-387-06836-7, MR 0465761
• Voitsekhovskii, M.I. (2001) [1994], "Kummer_surface", Encyclopedia of Mathematics, EMS Press
This article incorporates material from the Citizendium article "Kummer surface", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
|
Wikipedia
|
Peter Rosenthal
Peter Michael Rosenthal (born June 1, 1941) is a Canadian-American Professor Emeritus of Mathematics at the University of Toronto,[2] an adjunct professor of Law at the University of Toronto,[3] and a lawyer in private practice.[4]
Peter Michael Rosenthal
Born: June 1, 1941, Queens, New York, USA
Citizenship: Canadian, American
Occupation(s): Mathematician, lawyer
Spouse: Helen Stephanie Rosenthal[1]
Children: Jeff Rosenthal
Academic background
Education: Queens College, City University of New York (BS); University of Michigan (PhD); University of Toronto (LLB)
Thesis: On Lattices of Invariant Subspaces (1967)
Doctoral advisor: Paul Halmos
Academic work
Institutions: University of Toronto
Website: www.math.toronto.edu/cms/people/faculty/rosenthal-peter/
Early life
Rosenthal grew up in Queens, New York with his parents and two brothers.[5]
Mathematics career
Rosenthal graduated from Queens College, City University of New York with a B.S. in Mathematics in 1962.[6] In 1963 he obtained an MA in Mathematics and in 1967 a Ph.D. in Mathematics from the University of Michigan;[6] his Ph.D. thesis advisor was Paul Halmos.[7] His thesis, "On lattices of invariant subspaces",[7] concerns operators on Hilbert space, and most of his subsequent research has been in operator theory and related fields. Much of his work has been related to the invariant subspace problem, the still-unsolved problem of the existence of invariant subspaces for bounded linear operators on Hilbert space. Among many other topics, he has made substantial contributions to the development of reflexive and reductive operator algebras and to the study of lattices of invariant subspaces, composition operators on the Hardy-Hilbert space and linear operator equations. His publications include many with his long-time collaborator Heydar Radjavi,[5] including the book "Invariant Subspaces" (Springer-Verlag, 1973; second edition 2003).
Rosenthal has supervised the Ph.D. theses of fifteen students[7] and the research work of a number of post-doctoral fellows.
Legal career
In parallel with his career in mathematics, Rosenthal has pursued a career in law. He worked as a paralegal before obtaining an LL.B. from the University of Toronto in 1990. He was called to the Ontario Bar in 1992.[3] He is a major figure in the Toronto legal community, and has been profiled by Toronto Life,[8] The Globe and Mail,[9] and the Toronto Star.[5] In 2006, Now Magazine named Rosenthal Toronto's "Best activist lawyer".[10] In May 2016, he was awarded a Law Society Medal by the Law Society of Upper Canada.[11]
Rosenthal represented Miguel Figueroa, the leader of the Communist Party of Canada, in the case Figueroa v. Canada before the Supreme Court of Canada.[9] The court ruled in Figueroa's favor, striking down a law that prohibited small political parties from obtaining the same tax benefits as large parties.
He has represented hundreds of activists who faced charges as a result of political protests, including Shawn Brant, John Clarke, Vicki Monague of Stop Dump Site 41, Dudley Laws and Jaggi Singh, and has written articles about some of those cases.[12]
Works
• Radjavi, Heydar; Rosenthal, Peter (1973), Invariant Subspaces, Springer, MR 0367682, 2nd edition MR2003221
• Radjavi, Heydar; Rosenthal, Peter (2000), Simultaneous Triangularization, Springer, ISBN 978-0-387-98466-7
• Martinez-Avendano, Ruben; Rosenthal, Peter (2006), An Introduction to Operators on the Hardy-Hilbert Space, Springer, ISBN 978-0-387-35418-7
• Axler, Sheldon; Rosenthal, Peter; Sarason, Donald, eds. (2010), A Glimpse at Hilbert Space Operators, Birkhäuser
• Rosenthal, Daniel; Rosenthal, David; Rosenthal, Peter (2014), A Readable Introduction to Real Mathematics, Springer, ISBN 978-3-319-05654-8, MR 3235953
References
1. "Helen Stephanie Rosenthal: Mathematician who loved to teach and was active in faculty association". University of Toronto News. Retrieved 2022-09-18.
2. "Rosenthal, Peter". Department of Mathematics. University of Toronto. Retrieved 2014-08-01.
3. "Peter Rosenthal". Faculty of Law. University of Toronto. Retrieved 2014-08-01.
4. "Makin - Peter Rosenthal Profile". Faculty of Law. University of Toronto. 2007-03-20. Retrieved 2014-08-01.
5. "Peter Rosenthal's passions for law and math make for a beautiful, if different, life". The Toronto Star. 2014-01-05. Retrieved 2014-08-01.
6. "Peter Rosenthal". Mathematical Association of America. Retrieved 2014-08-01.
7. "Peter Rosenthal". The Mathematics Genealogy Project. Retrieved 2014-08-01.
8. "The Agitator". torontolife.com. January 2008. Archived from the original on 2012-02-13. Retrieved 2014-08-01.
9. Kirk Makin (3 March 2007). "On the left side of the law". The Globe and Mail. Retrieved 2014-08-01.
10. "Peter Rosenthal selected Best Activist Lawyer". Faculty of Law. University of Toronto. 2007-03-20. Retrieved 2014-08-01.
11. "Law Society Announces 2016 Award Recipients". www.lsuc.on.ca. Retrieved 2016-06-09.
12. Rosenthal, Peter (2020). "Articles archived on Medium.com".
|
Wikipedia
|
Rose–Vinet equation of state
The Rose–Vinet equation of state is a set of equations used to describe the equation of state of solids. It is a modification of the Birch–Murnaghan equation of state.[1][2] The initial paper shows that the equation depends on only four inputs: the isothermal bulk modulus $B_{0}$, the derivative of the bulk modulus with respect to pressure $B_{0}'$, and the volume $V_{0}$, together with the thermal expansion; all evaluated at zero pressure ($P=0$) and at a single (reference) temperature. The same equation holds for all classes of solids and over a wide range of temperatures.
Let the cube root of the specific volume be
$\eta =\left({\frac {V}{V_{0}}}\right)^{\frac {1}{3}}$
then the equation of state is:
$P=3B_{0}\left({\frac {1-\eta }{\eta ^{2}}}\right)e^{{\frac {3}{2}}(B_{0}'-1)(1-\eta )}$
A similar equation was published by Stacey et al. in 1981.[3]
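The two formulas above transcribe directly into code. In the sketch below the function name and the sample parameters are illustrative (with $B_{0}$ and $B_{0}'$ in GPa and dimensionless respectively, the returned pressure is in GPa):

```python
import math

def vinet_pressure(V, V0, B0, B0prime):
    # Rose-Vinet pressure P(V) at the reference temperature.
    # V0: zero-pressure volume, B0: isothermal bulk modulus at P = 0,
    # B0prime: pressure derivative of the bulk modulus at P = 0.
    eta = (V / V0) ** (1.0 / 3.0)                    # cube root of V/V0
    return (3.0 * B0 * (1.0 - eta) / eta**2
            * math.exp(1.5 * (B0prime - 1.0) * (1.0 - eta)))

# sanity checks with illustrative values (B0 = 160 GPa, B0' = 4):
# zero pressure at V = V0, and compression (V < V0) gives P > 0
assert vinet_pressure(1.0, 1.0, 160.0, 4.0) == 0.0
assert vinet_pressure(0.9, 1.0, 160.0, 4.0) > 0.0
```

Note that for V > V0 the factor (1 − η) turns negative, giving the tensile (negative-pressure) branch of the equation.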
References
1. Pascal Vinet; John R. Smith; John Ferrante; James H. Rose (1987). "Temperature effects on the universal equation of state of solids". Physical Review B. 35 (4): 1945–1953. Bibcode:1987PhRvB..35.1945V. doi:10.1103/physrevb.35.1945. hdl:2060/19860019304. PMID 9941621. S2CID 24238001.
2. "Rose-Vinet (Universal) equation of state". SklogWiki.
3. F. D. Stacey; B. J. Brennan; R. D. Irvine (1981). "Finite strain theories and comparisons with seismological data". Surveys in Geophysics. 4 (4): 189–232. Bibcode:1981GeoSu...4..189S. doi:10.1007/BF01449185. S2CID 129899060.
|
Wikipedia
|
Arnold Ross
Arnold Ephraim Ross (August 24, 1906 – September 25, 2002) was a mathematician and educator who founded the Ross Mathematics Program, a number theory summer program for gifted high school students. He was born in Chicago, but spent his youth in Odesa, Ukraine, where he studied with Samuil Shatunovsky. Ross returned to Chicago and enrolled in University of Chicago graduate coursework under E. H. Moore, despite his lack of formal academic training. He received his Ph.D. and married his wife, Bee, in 1931.
Arnold Ephraim Ross
Ross in 1970
Born: Arnold Ephraim Chaimovich, August 24, 1906, Chicago, Illinois, US
Died: September 25, 2002 (aged 96)
Alma mater: University of Chicago
Known for: Mathematics education (Ross Mathematics Program)
Spouse(s): Bertha (Bee) Halley Horecker; Madeleine Green
Scientific career
Fields: Number theory
Institutions: California Institute of Technology, St. Louis University, University of Notre Dame, Ohio State University
Thesis: "On Representation of Integers by Indefinite Ternary Quadratic Forms of Quadratfrei Determinant" (1931)
Doctoral advisor: L. E. Dickson
Other academic advisors: Samuil Shatunovsky, E. H. Moore
Doctoral students: Margaret Willerding
Ross taught at several institutions including St. Louis University before becoming chair of University of Notre Dame's mathematics department in 1946. He started a teacher training program in mathematics that evolved into the Ross Mathematics Program in 1957 with the addition of high school students. The program moved with him to Ohio State University when he became their department chair in 1963. Though forced to retire in 1976, Ross ran the summer program until 2000. He had worked with over 2,000 students during more than forty summers.
The program is known as Ross's most significant work. Its attendees have since continued on to prominent research positions across the sciences. His program inspired several offshoots and was recognized by mathematicians as highly influential. Ross has received an honorary doctorate and several professional association awards for his instruction and service.
Early life and career
Ross was born Arnold Ephraim Chaimovich[1] on August 24, 1906, in Chicago to Ukrainian-Jewish immigrants.[2] He was an only child.[1] His mother supported the family as a physical therapist.[1] Ross returned to Odesa, Ukraine with his mother in 1909 for assistance from her extended family,[1] and stayed once World War I and the Russian Revolution broke out.[2] The two events led to widespread famine and economic woe in the region.[1] Ross learned Russian at the behest of his mother, and developed a love of the theater and language.[1] Ross's mother encouraged him to read, which he did often, and subscribed to a private library since Odesa had no public library.[1] He credited his favorite uncle, an X-ray diagnostician, with introducing him to mathematics.[1] The uncle had hired Samuil Shatunovsky to tutor his talented son, and Ross asked to join in.[1] As money meant little due to inflation, Shatunovsky was paid to tutor the two boys with a pound of French hard candy.[1] During this time, Ross was not taught with textbooks or lectured on geometric proofs.[1] His geometry teacher would ask the class to prove and justify ideas on the blackboard by trial and error.[1] Many universities were closed due to the famine, but Odesa University reopened and let a small group of adolescents attend, including Ross.[1]
Ross left Odesa—now part of Ukraine—in 1922 with the intention of returning to Chicago and studying topology with E. H. Moore at the University of Chicago.[1] After negotiating his way home, he worked at a family friend's bookbinding shop and continued to learn English at the Lewis Institute.[1] He also changed his surname from Chaimovich to Ross in 1922.[3] Ross used his salary from a year at the shop to enroll for one term at the University of Chicago in Moore's course.[1] Moore gave Ross special attention, knowing his untraditional background, and arranged for Ross to attend the topology class as the sole undergraduate.[1]
In Moore's teaching style, he would propose a conjecture and task the students with proving it.[1] Students could respond with counter-conjectures that they would defend.[1] Ross found Moore's method exciting,[1] and his pedagogy influenced Ross's own.[2] Ross graduated with a B.S. degree[4] and continued his study as Leonard Eugene Dickson's research assistant.[2] Ross earned an M.S. degree[4] and finished his Ph.D. in number theory at the University of Chicago in 1931 with Dickson as his adviser.[2] Ross's dissertation was entitled "On Representation of Integers by Indefinite Ternary Quadratic Forms".[1] He did not pay tuition after his first quarter, which he credited to Dickson.[1]
Ross married Bertha (Bee) Halley Horecker, a singer-musician and daughter of Ross's Chicago neighbors, in 1931,[1] received a National Research Council Fellowship for 1932,[5] and worked as a National Research Council postdoctoral fellow[4] at California Institute of Technology with Eric Temple Bell until 1933.[2] Ross moved back to Chicago and led the mathematics department at an experimental school started by Ph.D.s during the Great Depression, People's Junior College,[4] where he also taught physics.[1] Ross became an assistant professor at St. Louis University in 1935 and stayed for about 11 years.[2] In an interview, he said he advocated for a student who became the first black woman in the South to receive a master's degree in mathematics.[1] This exception led the university to admit black students despite the idea's widespread unpopularity.[1] During World War II, Ross served as a research mathematician for the U.S. Navy.[2] While in St. Louis, he befriended the Hungarian mathematician Gábor Szegő, who recommended Ross for a 1941 Brown University summer school that prepared young scientists to assist in the war, a program Ross attended.[1] He occasionally worked on proximity fuzes for Stromberg-Carlson's laboratory from 1941 to 1945[1] before accepting a position as head of University of Notre Dame's mathematics department in 1946.[2] He set out to change the school's research climate by inviting distinguished mathematicians including Paul Erdős, whom Ross made a full professor.[1]
Ross Mathematics Program
While at Notre Dame in 1947, Ross began a mathematics program that prioritized what he described as "the act of personal discovery through observation and experimentation" for high school and junior college teachers.[1] In 1957, the program expanded via the National Science Foundation's post-Sputnik funds for teacher retraining, and Ross let high school students attend.[1] This expansion became the Ross Mathematics Program,[1] a summer mathematics program for talented high school students.[2] The program lasts eight weeks and brings students with no prior knowledge to topics such as Gaussian integers and quadratic reciprocity.[2] Though the program teaches number theory, by its Gauss-inspired[6] motto, "Think deeply of simple things," its primary goal is to offer precollege students an intellectual experience[2] that Ross described as "a vivid apprenticeship to a life of exploration."[1][7] The program is known for its intensity, and is considered America's "most rigorous number theory program."[8] Ross was known to say, "No one leaves the program unchanged."[9]
This emphasis on computation alone too often produces students who have never practiced thinking for themselves, who have never asked why things work the way they do, who are not prepared to lead the way to future scientific innovation. It is precisely this independence of thought and questioning attitude that the Ross Program strives to nurture.
Ross Program brochure[8]
The program usually has 40–50 first-year students, 15 junior counselors, and 15 counselors.[2] Students are admitted by application—which includes a set of mathematical questions—or by showing "a great eagerness to learn."[2] First-year students meet daily for lectures in elementary number theory and thrice weekly for problem seminars.[2] They are encouraged to think like scientists and devise their own proofs and conjectures to the problems posed,[2] which occupies most of their free time.[8] Ross designed the daily problem sets,[9] and many questions contain his signature directions: "Prove or disprove and salvage if possible."[1] Successful students are asked to return as junior counselors and counselors in future summers.[2] Junior counselors revisit the daily lectures and help first-years with their questions.[2] They also can take advanced courses such as combinatorics[2] and graduate seminars.[9] Student problem sets are graded daily by the live-in counselors.[2]
The program was funded in the 1960s by a National Science Foundation (NSF) program that supported summer programs in science education, but not returning students.[2] As NSF support fluctuates, the program has been funded by various means including gifts from donors, scholarships from businesses, a National Security Agency grant, the university, and its mathematics department.[2] It also receives financial support from the Clay Mathematics Institute.[1][8]
The program grew rapidly with input from prominent mathematicians such as Ram Prakash Bambah, Hans Zassenhaus, Thoralf Skolem, and Max Dehn.[2] In the 1960s and 1970s, Ross brought mathematicians including Zassenhaus, Kurt Mahler, and Dijen K. Ray-Chaudhuri to teach there regularly.[1] Ross left Notre Dame to become chair of Ohio State University's mathematics department in 1963, and the program followed in the 1964 summer.[2] The program briefly moved to the University of Chicago in the summers of 1975–1978 at mathematician Felix Browder's invitation.[2] The program is unadvertised and depends on personal contacts and word of mouth to propagate.[1][2][8] It is recognized by mathematicians as one of the best mathematics programs for high school students.[8]
Retirement and death
Ross reached his mandatory retirement from Ohio State University in 1976,[2] when he became Professor Emeritus,[4] but continued to run the summer program through 2000,[9] after which he had a stroke that left him physically impaired and unable to teach.[1] Daniel Shapiro led the program upon Ross's exit.[1][10] Shapiro was a former counselor at the program.[3]
Ross received an honorary doctorate from Denison University in 1984,[4] the 1985 Mathematical Association of America (MAA) Award for Distinguished Service,[2] the 1998 MAA Citation for Public Service,[7] and was named an American Association for the Advancement of Science Fellow in 1988.[2] His teaching awards include Ohio State's Distinguished Teaching and Service Awards, and membership on the National Science Foundation's science education advisory board.[4]
Ross helped begin similar programs in West Germany, India, and Australia.[2] He consulted for a gifted children program in India in 1973, assisted from 1975 to 1983 in an Australian National University January summer program for talented youth modeled on his own, and helped start another program in Heidelberg, Germany in 1978.[4] He had previously created other mathematics programs, including the teacher training program (before it included high school students)[1] and another program for Columbus, Ohio inner city middle and high school students called "Horizons Unlimited" in 1970.[4]
Ross's wife, Bee, died in 1983 and left Ross in a deep depression.[1] His colleagues said he "lived only for his summer program" in this period.[1] He later met a French widow of a diplomat, Madeleine Green, and they married in 1990.[1]
Ross died on September 25, 2002.[11] Notices of the American Mathematical Society and MAA FOCUS ran memorial articles on Ross.[3][9][11] Mathematicians such as Karl Rubin expressed their personal debts to Ross.[3] He did not have any children.[2][12]
Legacy
Ross's biggest contribution to his field was not through his research, but through his mathematics education programs.[9] He ran the summer program every year from 1957 to 2000,[9] working with over 2,000 students.[1] His summer program graduates went on to prestigious research positions in fields across the sciences.[9] The Ross Program was acclaimed by mathematicians as highly influential.[8][9][13]
The Ross Program inspired many similar programs, the closest in likeness being the Program in Mathematics for Young Scientists (PROMYS) at Boston University and the Honors Math Camp at Southwest Texas State University.[1] Other programs at University of Chicago and University of Texas at San Antonio were inspired by Ross.[1] The founders of PROMYS were Ross Program alumni,[8] and when the Ross Program went to the University of Chicago for several years, mathematics chair Paul Sally slowly became supportive of the program and later began his own gifted students program.[1] Informally, Ross Program and Ross's students are known as "Ross-1s" and those who study under them (including PROMYS attendees) are known as "Ross-2s".[8]
The Arnold Ross Lecture Series, founded in his name in 1993[12] and run by the American Mathematical Society, puts mathematicians before high school audiences annually in cities across the United States.[1] Ohio State University organized two reunion-conferences for Ross with program alumni, friends of Ross, and a series of science lectures,[1] in 1996 and 2001.[14]
References
1. Jackson, Allyn (August 2001). "Interview with Arnold Ross" (PDF). Notices of the American Mathematical Society. American Mathematical Society. 48 (7): 691–698. ISSN 0002-9920. Archived (PDF) from the original on September 21, 2013. Retrieved September 14, 2013.
2. Shapiro, Daniel B. (October 1996). "A Conference Honoring Arnold Ross on His Ninetieth Birthday" (PDF). Notices of the American Mathematical Society. American Mathematical Society. 43 (10): 1151–1154. ISSN 0002-9920. Archived (PDF) from the original on July 22, 2013. Retrieved September 14, 2013.
3. Jackson, Allyn; Shapiro, Daniel, eds. (June–July 2003). "Arnold Ross (1906–2002)" (PDF). Notices of the American Mathematical Society. American Mathematical Society. 50 (6): 660–665. ISSN 0002-9920. Archived (PDF) from the original on July 22, 2013. Retrieved September 14, 2013.
4. Lax, Anneli; Woods, Alan C. (April 1986). "Award for Distinguished Service to Professor Arnold Ephraim Ross". American Mathematical Monthly. Mathematical Association of America. 93 (4): 245–246. doi:10.1080/00029890.1986.11971798. ISSN 0002-9890. JSTOR 2323671.
5. National Academy of Sciences (U.S.) (1930). Report of the National Academy of Sciences. United States National Academies. p. 164. NAP:11240.
6. William C. Bauldry (9 September 2011). Introduction to Real Analysis: An Educational Approach. John Wiley & Sons. p. 46. ISBN 978-1-118-16443-3. Retrieved September 20, 2013.
7. "1998 Citations for Public Service" (PDF). Notices of the American Mathematical Society. American Mathematical Society. 45 (4): 514–516. April 1998. ISSN 0002-9920. Archived (PDF) from the original on December 2, 2012. Retrieved September 20, 2013.
8. Wissner-Gross, Elizabeth (2007). What High Schools Don't Tell You: 300+ Secrets to Make Your Kid Irresistible to Colleges by Senior Year. Hudson Street Press. pp. 103–109. ISBN 978-1-59463-037-8. Retrieved September 20, 2013.
9. Stevens, Glenn (January 2003). "Memories of Arnold Ross" (PDF). MAA FOCUS. Mathematical Association of America. 23 (1): 17. ISSN 0731-2040. Archived (PDF) from the original on September 21, 2013. Retrieved September 20, 2013.
10. Edgar, Gerald A. (October 23, 2007). Measure, Topology, and Fractal Geometry. Springer-Verlag. p. XI. ISBN 978-0-387-74749-1. Retrieved September 20, 2013.
11. Stevens, Glenn (December 2002). "Memories of Arnold Ross" (PDF). MAA FOCUS. Mathematical Association of America. 22 (9): 22. ISSN 0731-2040. Archived (PDF) from the original on September 21, 2013. Retrieved September 20, 2013.
12. "Arnold Ross Obituary". Ohio State University Department of Mathematics. 2002. Archived from the original on September 21, 2013. Retrieved September 20, 2013.
13. Pohst, Michael (April 1994). "In Memoriam: Hans Zassenhaus (1912–1991)". Journal of Number Theory. 47 (1): 11. doi:10.1006/jnth.1994.1023. ISSN 0022-314X – via ScienceDirect Mathematics Backfile.
14. Shapiro, Daniel (September 25, 2002). "Arnold Ross 1906–2002". Ohio State University Department of Mathematics. Archived from the original on September 21, 2013. Retrieved September 20, 2013.
External links
• Arnold Ross at the Mathematics Genealogy Project
• Photos of Ross
• Ross Mathematics Program official website
Ross Street
Ross Howard Street (born 29 September 1945, Sydney) is an Australian mathematician specialising in category theory.[1][2][3][4]
Biography
Street completed his undergraduate and postgraduate study at the University of Sydney, where his dissertation advisor was Max Kelly. He is an emeritus professor of mathematics at Macquarie University, a fellow of the Australian Mathematical Society (1995), and was elected Fellow of the Australian Academy of Science in 1989.[5] He was awarded the Edgeworth David Medal of the Royal Society of New South Wales in 1977, and the Australian Mathematical Society's George Szekeres Medal in 2012.[2]
References
1. Street, Ross Howard (1945 - ), Biographical entry, Encyclopaedia of Australian Science
2. Street, Ross Howard, FAA (1945-), trove.nla.gov.au
3. "Category Theory, Algebra and Geometry". perso.uclouvain.be. Retrieved 10 April 2016.
4. "Centenary Medal". It's an Honour. 1 January 2001. For service to Australian society and science in mathematics
5. Emeritus Professor Ross Howard Street, Fellow, www.science.org.au
External links
• Personal webpage, maths.mq.edu.au
• Ross Street at the Mathematics Genealogy Project
• Ross Street publications indexed by Google Scholar
Rosser's trick
In mathematical logic, Rosser's trick is a method for proving Gödel's incompleteness theorems without the assumption that the theory being considered is ω-consistent (Smorynski 1977, p. 840; Mendelson 1977, p. 160). This method was introduced by J. Barkley Rosser in 1936, as an improvement of Gödel's original proof of the incompleteness theorems that was published in 1931.
While Gödel's original proof uses a sentence that says (informally) "This sentence is not provable", Rosser's trick uses a formula that says "If this sentence is provable, there is a shorter proof of its negation".
Background
Rosser's trick begins with the assumptions of Gödel's incompleteness theorem. A theory $T$ is selected which is effective, consistent, and includes a sufficient fragment of elementary arithmetic.
Gödel's proof shows that for any such theory there is a formula $\operatorname {Proof} _{T}(x,y)$ which has the intended meaning that $y$ is a natural number code (a Gödel number) for a formula and $x$ is the Gödel number for a proof, from the axioms of $T$, of the formula encoded by $y$. (In the remainder of this article, no distinction is made between the number $y$ and the formula encoded by $y$, and the number coding a formula $\phi $ is denoted $\#\phi $.) Furthermore, the formula $\operatorname {Pvbl} _{T}(y)$ is defined as $\exists x\operatorname {Proof} _{T}(x,y)$. It is intended to define the set of formulas provable from $T$.
The assumptions on $T$ also show that it is able to define a negation function ${\text{neg}}(y)$, with the property that if $y$ is a code for a formula $\phi $ then ${\text{neg}}(y)$ is a code for the formula $\neg \phi $. The negation function may take any value whatsoever for inputs that are not codes of formulas.
The Gödel sentence of the theory $T$ is a formula $\phi $, sometimes denoted $G_{T}$, such that $T$ proves $\phi \leftrightarrow \neg \operatorname {Pvbl} _{T}(\#\phi )$. Gödel's proof shows that if $T$ is consistent then it cannot prove its Gödel sentence; but in order to show that the negation of the Gödel sentence is also not provable, it is necessary to add the stronger assumption that the theory is ω-consistent, not merely consistent. For example, the theory $T={\text{PA}}+\neg {\text{G}}_{PA}$, where PA denotes the Peano axioms, proves $\neg G_{T}$. Rosser (1936) constructed a different self-referential sentence that can be used to replace the Gödel sentence in Gödel's proof, removing the need to assume ω-consistency.
The Rosser sentence
For a fixed arithmetical theory $T$, let $\operatorname {Proof} _{T}(x,y)$ and ${\text{neg}}(x)$ be the associated proof predicate and negation function.
A modified proof predicate $\operatorname {Proof} _{T}^{R}(x,y)$ is defined as:
$\operatorname {Proof} _{T}^{R}(x,y)\equiv \operatorname {Proof} _{T}(x,y)\land \lnot \exists z\leq x[\operatorname {Proof} _{T}(z,\operatorname {neg} (y))],$
which means that
$\lnot \operatorname {Proof} _{T}^{R}(x,y)\equiv \operatorname {Proof} _{T}(x,y)\to \exists z\leq x[\operatorname {Proof} _{T}(z,\operatorname {neg} (y))].$
This modified proof predicate is used to define a modified provability predicate $\operatorname {Pvbl} _{T}^{R}(y)$:
$\operatorname {Pvbl} _{T}^{R}(y)\equiv \exists x\operatorname {Proof} _{T}^{R}(x,y).$
Informally, $\operatorname {Pvbl} _{T}^{R}(y)$ is the claim that $y$ is provable via some coded proof $x$ such that there is no smaller coded proof of the negation of $y$. Under the assumption that $T$ is consistent, for each formula $\phi $ the formula $\operatorname {Pvbl} _{T}^{R}(\#\phi )$ will hold if and only if $\operatorname {Pvbl} _{T}(\#\phi )$ holds, because if there is a code for the proof of $\phi $, then (by the consistency of $T$) there is no code for the proof of $\neg \phi $. However, $\operatorname {Pvbl} _{T}(\#\phi )$ and $\operatorname {Pvbl} _{T}^{R}(\#\phi )$ have different properties from the point of view of provability in $T$.
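The bounded search in the modified predicate can be illustrated with a toy finite model in which proof codes are simply indices into a list of proved formulas. This is only a sketch of the ordering trick; the function names, the integer coding of formulas, and the list-based "proof system" are invented for illustration and have nothing to do with the actual arithmetization:

```python
# Toy stand-in for the arithmetized predicates (illustrative only):
# proof codes are indices into `proofs`, and proofs[x] is the code of
# the formula that proof x establishes. Formulas are coded as integers,
# with negation pairing 2k <-> 2k+1.

def proof(proofs, x, y):
    # Proof_T(x, y): x codes a proof of the formula coded by y.
    return x < len(proofs) and proofs[x] == y

def proof_R(proofs, neg, x, y):
    # Proof_T^R(x, y): x proves y, and no z <= x proves neg(y).
    return proof(proofs, x, y) and not any(
        proof(proofs, z, neg(y)) for z in range(x + 1))

def pvbl_R(proofs, neg, y):
    # Pvbl_T^R(y): some proof satisfies the modified predicate.
    return any(proof_R(proofs, neg, x, y) for x in range(len(proofs)))

neg = lambda y: y ^ 1  # toggle the last bit: neg(4) == 5, neg(5) == 4

# "Consistent" proof list: formula 4 is proved and its negation 5 is not,
# so ordinary and Rosser provability agree.
assert pvbl_R([2, 4, 6], neg, 4)

# "Inconsistent" proof list: both 4 and its negation 5 are proved.
# Ordinary provability holds for both, but the bounded search ensures
# that 4 and 5 are never BOTH Rosser-provable.
bad = [5, 2, 4]
assert not (pvbl_R(bad, neg, 4) and pvbl_R(bad, neg, 5))
```

The second assertion mirrors the key property of the Rosser predicate: whichever of a formula and its negation has the smaller proof code blocks Rosser-provability of the other.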
An immediate consequence of the definition is that if $T$ includes enough arithmetic, then it can prove that for every formula $\phi $, $\operatorname {Pvbl} _{T}^{R}(\#\phi )$ implies $\neg \operatorname {Pvbl} _{T}^{R}(\operatorname {neg} (\#\phi ))$. This is because otherwise there would be two numbers $n,m$, coding for the proofs of $\phi $ and $\neg \phi $, respectively, satisfying both $n<m$ and $m<n$. (In fact, $T$ only needs to prove that such a situation cannot hold for any two numbers, and to include some first-order logic.)
Using the diagonal lemma, let $\rho $ be a formula such that $T$ proves $\rho \leftrightarrow \neg \operatorname {Pvbl} _{T}^{R}(\#\rho )$. The formula $\rho $ is the Rosser sentence of the theory $T$.
Rosser's theorem
Let $T$ be an effective, consistent theory including a sufficient amount of arithmetic, with Rosser sentence $\rho $. Then the following hold (Mendelson 1977, p. 160):
1. $T$ does not prove $\rho $
2. $T$ does not prove $\neg \rho $
In order to prove this, one first shows that for a formula $y$ and a number $e$, if $\operatorname {Proof} _{T}^{R}(e,y)$ holds, then $T$ proves $\operatorname {Proof} _{T}^{R}(e,y)$. This is shown in a similar manner to what is done in Gödel's proof of the first incompleteness theorem: $T$ proves $\operatorname {Proof} _{T}(e,y)$, a relation between two concrete natural numbers; one then goes over all the natural numbers $z\leq e$ one by one, and for each such $z$, $T$ proves $\neg \operatorname {Proof} _{T}(z,\operatorname {neg} (y))$, again a relation between two concrete numbers.
The assumption that $T$ includes enough arithmetic (in fact, what is required is basic first-order logic) ensures that $T$ also proves $\operatorname {Pvbl} _{T}^{R}(y)$ in that case.
Furthermore, if $T$ is consistent and proves $\phi $, then there is a number $e$ coding for its proof in $T$, and there is no number coding for the proof of the negation of $\phi $ in $T$. Therefore $\operatorname {Proof} _{T}^{R}(e,\#\phi )$ holds, and thus $T$ proves $\operatorname {Pvbl} _{T}^{R}(\#\phi )$.
The proof of (1) is similar to that in Gödel's proof of the first incompleteness theorem: Assume $T$ proves $\rho $; then it follows, by the previous elaboration, that $T$ proves $\operatorname {Pvbl} _{T}^{R}(\#\rho )$. Thus $T$ also proves $\neg \rho $. But we assumed $T$ proves $\rho $, and this is impossible if $T$ is consistent. We are forced to conclude that $T$ does not prove $\rho $.
The proof of (2) also uses the particular form of $\operatorname {Proof} _{T}^{R}$. Assume $T$ proves $\neg \rho $; then it follows, by the previous elaboration, that $T$ proves $\operatorname {Pvbl} _{T}^{R}(\operatorname {neg} (\#\rho ))$. But by the immediate consequence of the definition of Rosser's provability predicate, mentioned in the previous section, it follows that $T$ proves $\neg \operatorname {Pvbl} _{T}^{R}(\#\rho )$. Thus $T$ also proves $\rho $. But we assumed $T$ proves $\neg \rho $, and this is impossible if $T$ is consistent. We are forced to conclude that $T$ does not prove $\neg \rho $.
References
• Mendelson (1977), Introduction to Mathematical Logic
• Smorynski (1977), "The incompleteness theorems", in Handbook of Mathematical Logic, Jon Barwise, Ed., North Holland, 1982, ISBN 0-444-86388-5
• Barkley Rosser (September 1936). "Extensions of some theorems of Gödel and Church". Journal of Symbolic Logic. 1 (3): 87–91. doi:10.2307/2269028. JSTOR 2269028. S2CID 36635388.
External links
• Avigad (2007), "Computability and Incompleteness", lecture notes.
Rostislav Grigorchuk
Rostislav Ivanovich Grigorchuk (Ukrainian: Ростисла́в Iва́нович Григорчу́к; b. February 23, 1953) is a mathematician working in different areas of mathematics including group theory, dynamical systems, geometry and computer science. He holds the rank of Distinguished Professor in the Mathematics Department of Texas A&M University. Grigorchuk is particularly well known for having constructed, in a 1984 paper,[1] the first example of a finitely generated group of intermediate growth, thus answering an important problem posed by John Milnor in 1968. This group is now known as the Grigorchuk group[2][3][4][5][6] and it is one of the important objects studied in geometric group theory, particularly in the study of branch groups, automaton groups and iterated monodromy groups. Grigorchuk is one of the pioneers of asymptotic group theory as well as of the theory of dynamically defined groups. He introduced the notion of branch groups[7][8][9][10] and developed the foundations of the related theory. Grigorchuk, together with his collaborators and students, initiated the theory of groups generated by finite Mealy type automata,[11][12][13] interpreted them as groups of fractal type,[14][15] developed the theory of groups acting on rooted trees,[16] and found numerous applications[17][18][19] of these groups in various fields of mathematics including functional analysis, topology, spectral graph theory, dynamical systems and ergodic theory.
Rostislav Ivanovich Grigorchuk
Born (1953-02-23) February 23, 1953
Vyshnivets, Ternopil Oblast, Ukraine
Alma materLomonosov Moscow State University
Known forresearcher in geometric group theory, discovering the Grigorchuk group
AwardsAward of Moscow Mathematical Society (1979), Bogolyubov Prize of National Academy of Sciences of Ukraine (2015), Leroy P. Steele Prize (2015), Humboldt Research Award by Germany’s Alexander von Humboldt Foundation (2020)
Scientific career
FieldsMathematics
InstitutionsTexas A&M University
Biographical data
Grigorchuk was born on February 23, 1953, in Ternopil Oblast, now Ukraine (in 1953 part of the USSR).[20] He received his undergraduate degree in 1975 from Moscow State University. He obtained a PhD (Candidate of Science) in Mathematics in 1978, also from Moscow State University, where his thesis advisor was Anatoly M. Stepin. Grigorchuk received a habilitation (Doctor of Science) degree in Mathematics in 1985 at the Steklov Institute of Mathematics in Moscow.[20] During the 1980s and 1990s, Rostislav Grigorchuk held positions at the Moscow State University of Transportation, and subsequently at the Steklov Institute of Mathematics and Moscow State University.[20] In 2002 Grigorchuk joined the faculty of Texas A&M University as a Professor of Mathematics, and he was promoted to the rank of Distinguished Professor in 2008.[21]
Rostislav Grigorchuk gave an invited address at the 1990 International Congress of Mathematicians in Kyoto,[22] an AMS Invited Address at the March 2004 meeting of the American Mathematical Society in Athens, Ohio,[23] and a plenary talk at the 2004 Winter Meeting of the Canadian Mathematical Society.[24]
Grigorchuk is the Editor-in-Chief of the journal "Groups, Geometry and Dynamics",[25] published by the European Mathematical Society, and is or was a member of the editorial boards of the journals "Mathematical Notes",[26] "International Journal of Algebra and Computation",[27] "Journal of Modern Dynamics",[28] "Geometriae Dedicata",[29] "Ukrainian Mathematical Journal",[30] "Algebra and Discrete Mathematics",[31] "Carpathian Mathematical Publications",[32] "Bukovinian Mathematical Journal",[33] and "Matematychni Studii".[34]
Mathematical contributions
Grigorchuk is most well known for having constructed the first example of a finitely generated group of intermediate growth which now bears his name and is called the Grigorchuk group (sometimes it is also called the first Grigorchuk group since Grigorchuk constructed several other groups that are also commonly studied). This group has growth that is faster than polynomial but slower than exponential. Grigorchuk constructed this group in a 1980 paper[35] and proved that it has intermediate growth in a 1984 article.[1] This result answered a long-standing open problem posed by John Milnor in 1968 about the existence of finitely generated groups of intermediate growth. Grigorchuk's group has a number of other remarkable mathematical properties. It is a finitely generated infinite residually finite 2-group (that is, every element of the group has a finite order which is a power of 2). It is also the first example of a finitely generated group that is amenable but not elementary amenable, thus providing an answer to another long-standing problem, posed by Mahlon Day in 1957.[36] Also Grigorchuk's group is "just infinite": that is, it is infinite but every proper quotient of this group is finite.[2]
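The contrast that the Grigorchuk group sits between can be made concrete by counting elements of word length at most n in a Cayley graph. The sketch below (the function names are ours; this is an illustration, not Grigorchuk's construction) computes ball sizes for Z² with the standard generators, where growth is polynomial (2n² + 2n + 1), and for the free group F₂, where growth is exponential (2·3ⁿ − 1); the ball sizes of the Grigorchuk group grow faster than every polynomial but slower than every exponential:

```python
def ball_sizes_Z2(n):
    # Sizes |B(k)| of balls in the Cayley graph of Z^2 with the
    # standard generators; polynomial growth: 2k^2 + 2k + 1.
    gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    seen = {(0, 0)}
    frontier = {(0, 0)}
    sizes = [1]
    for _ in range(n):
        frontier = {(x + dx, y + dy)
                    for (x, y) in frontier for (dx, dy) in gens} - seen
        seen |= frontier
        sizes.append(len(seen))
    return sizes

def ball_sizes_F2(n):
    # Ball sizes in the free group F_2: elements are reduced words
    # over a, A (= a^-1), b, B; exponential growth: 2 * 3^k - 1.
    inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    seen = {''}
    frontier = {''}
    sizes = [1]
    for _ in range(n):
        new = set()
        for w in frontier:
            for g in 'aAbB':
                # Multiplying by g either cancels the last letter or appends g.
                v = w[:-1] if w and w[-1] == inv[g] else w + g
                if v not in seen:
                    new.add(v)
        seen |= new
        frontier = new
        sizes.append(len(seen))
    return sizes

print(ball_sizes_Z2(4))  # [1, 5, 13, 25, 41]
print(ball_sizes_F2(4))  # [1, 5, 17, 53, 161]
```

A group of intermediate growth produces a sequence that eventually outpaces any such polynomial list yet falls behind any such geometric one; no finitely generated linear group can do this, which is part of what made Grigorchuk's example so striking.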
Grigorchuk's group is a central object in the study of the so-called branch groups and automata groups. These are finitely generated groups of automorphisms of rooted trees that are given by particularly nice recursive descriptions and that have remarkable self-similar properties. The study of branch, automata and self-similar groups has been particularly active in the 1990s and 2000s and a number of unexpected connections with other areas of mathematics have been discovered there, including dynamical systems, differential geometry, Galois theory, ergodic theory, random walks, fractals, Hecke algebras, bounded cohomology, functional analysis, and others. In particular, many of these self-similar groups arise as iterated monodromy groups of complex polynomials. Important connections have been discovered between the algebraic structure of self-similar groups and the dynamical properties of the polynomials in question, including encoding their Julia sets.[37]
Much of Grigorchuk's work in the 1990s and 2000s has been on developing the theory of branch, automata and self-similar groups and on exploring these connections. For example, Grigorchuk, with co-authors, obtained a counter-example to the conjecture of Michael Atiyah about L2-betti numbers of closed manifolds.[38][39]
Grigorchuk is also known for his contributions to the general theory of random walks on groups and the theory of amenable groups, particularly for obtaining in 1980[40] what is commonly known (see for example[41][42][43]) as Grigorchuk's co-growth criterion of amenability for finitely generated groups.
Awards and honors
In 1979 Rostislav Grigorchuk was awarded the Prize of the Moscow Mathematical Society.[44]
In 1991 he held a Fulbright Senior Scholarship[45] at Columbia University, New York.
In 2003 an international group theory conference in honor of Grigorchuk's 50th birthday was held in Gaeta, Italy.[46] Special anniversary issues of the "International Journal of Algebra and Computation",[47] the journal "Algebra and Discrete Mathematics"[20] and the book "Infinite Groups: Geometric, Combinatorial and Dynamical Aspects"[48] were dedicated to Grigorchuk's 50th birthday.
In 2009 Grigorchuk received the Association of Former Students Distinguished Achievement Award in Research[49] from Texas A&M University.
In 2012 he became a fellow of the American Mathematical Society.[50]
In 2015 Rostislav Grigorchuk was awarded the AMS Leroy P. Steele Prize for Seminal Contribution to Research.[51] In the same year he became a laureate[52] of the Bogolyubov Prize of the Ukrainian Academy of Science.
In 2020 Grigorchuk was named a laureate[53] of the Humboldt Research Award by Germany's Alexander von Humboldt Foundation.
See also
• Geometric group theory
• Growth of groups
• Iterated monodromy group
• Amenable groups
• Grigorchuk group
References
1. R. I. Grigorchuk, Degrees of growth of finitely generated groups and the theory of invariant means. Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya. vol. 48 (1984), no. 5, pp. 939-985
2. Pierre de la Harpe. Topics in geometric group theory. Chicago Lectures in Mathematics. University of Chicago Press, Chicago. ISBN 0-226-31719-6
3. Laurent Bartholdi. The growth of Grigorchuk's torsion group. International Mathematics Research Notices, 1998, no. 20, pp. 1049-1054
4. Tullio Ceccherini-Silberstein, Antonio Machì, and Fabio Scarabotti. The Grigorchuk group of intermediate growth. Rendiconti del Circolo Matematico di Palermo (2), vol. 50 (2001), no. 1, pp. 67-102
5. Yu. G. Leonov. On a lower bound for the growth function of the Grigorchuk group. (in Russian). Matematicheskie Zametki, vol. 67 (2000), no. 3, pp. 475-477; translation in: Mathematical Notes, vol. 67 (2000), no. 3-4, pp. 403-405
6. Roman Muchnik, and Igor Pak. Percolation on Grigorchuk groups. Communications in Algebra, vol. 29 (2001), no. 2, pp. 661-671.
7. Grigorchuk, R. I. Just infinite branch groups. New horizons in pro-p groups, Progr. Math., 184, Birkhäuser Boston, Boston, MA, 2000, pp. 121–179.
8. Bartholdi, Laurent; Grigorchuk, Rostislav I.; Šuniḱ, Zoran. Branch groups. Handbook of Algebra, Vol. 3, pp. 989–1112, Handb. Algebr., 3, Elsevier/North-Holland, Amsterdam, 2003.
9. Grigorchuk, Rostislav. Solved and unsolved problems around one group. Infinite groups: geometric, combinatorial and dynamical aspects, pp. 117–218, Progr. Math., 248, Birkhäuser, Basel, 2005.
10. de la Harpe, Pierre. Topics in geometric group theory. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 2000.
11. Grigorchuk, R. I.; Nekrashevich, V. V.; Sushchanskiĭ, V. I. Automata, dynamical systems, and groups. (Russian). Tr. Mat. Inst. Steklova 231 (2000), Din. Sist., Avtom. i Beskon. Gruppy, 134–214; translation in Proc. Steklov Inst. Math. 2000, no. 4(231), 128–203.
12. Bondarenko, Ievgen; Grigorchuk, Rostislav; Kravchenko, Rostyslav; Muntyan, Yevgen; Nekrashevych, Volodymyr; Savchuk, Dmytro; Šunić, Zoran. On classification of groups generated by 3-state automata over a 2-letter alphabet. Algebra Discrete Math. 2008, no. 1, 1–163.
13. Ceccherini-Silberstein, Tullio; Coornaert, Michel. Cellular automata and groups. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2010.
14. Bartholdi, Laurent; Grigorchuk, Rostislav; Nekrashevych, Volodymyr. From fractal groups to fractal sets. Fractals in Graz 2001, pp. 25–118, Trends Math., Birkhäuser, Basel, 2003.
15. Grigorchuk, Rostislav; Nekrashevych, Volodymyr; Šunić, Zoran. From self-similar groups to self-similar sets and spectra. Fractal geometry and stochastics V, pp. 175–207, Progr. Probab., 70, Birkhäuser/Springer, Cham, 2015.
16. Grigorchuk, R. I. Some problems of the dynamics of group actions on rooted trees. (Russian). Tr. Mat. Inst. Steklova 273 (2011).
17. Grigorchuk, Rostislav; Šunić, Zoran. Self-similarity and branching in group theory. Groups St Andrews 2005, Vol. 1, pp. 36–95, London Math. Soc. Lecture Note Ser., 339, Cambridge Univ. Press, Cambridge, 2007.
18. Nekrashevych, Volodymyr. Self-similar groups. Mathematical Surveys and Monographs, 117. American Mathematical Society, Providence, RI, 2005. xii+231 pp.
19. Grigorchuk, Rostislav; Nekrashevych, Volodymyr. Self-similar groups, operator algebras and Schur complement. J. Mod. Dyn. 1 (2007), no. 3, 323–370.
20. Editorial Statement, Algebra and Discrete Mathematics, (2003), no. 4
21. 2008 Personal News, Department of Mathematics, Texas A&M University. Accessed January 15, 2010.
22. R. I. Grigorchuk. On growth in group theory. Proceedings of the International Congress of Mathematicians, Vol. I, II (Kyoto, 1990), pp. 325-338, Math. Soc. Japan, Tokyo, 1991
23. Spring Central Section Meeting, Athens, OH, March 26-27, 2004. American Mathematical Society. Accessed January 15, 2010.
24. 2004 Winter Meeting, Canadian Mathematical Society. Accessed January 15, 2010.
25. Groups, Geometry and Dynamics
26. Editorial Board, Mathematical Notes
27. Editorial Board, International Journal of Algebra and Computation
28. Editorial Board, Journal of Modern Dynamics
29. Editorial Board, Geometriae Dedicata
30. Editorial Board, Ukrainian Mathematical Journal
31. Editorial Board, Algebra and Discrete Mathematics Archived 2008-11-21 at the Wayback Machine
32. Editorial Board, Carpathian Mathematical Publications
33. Editorial Board, Bukovinian Mathematical Journal
34. Editorial Board, Matematychni Studii
35. R. I. Grigorchuk. On Burnside's problem on periodic groups. (Russian) Funktsionalnyi Analiz i ego Prilozheniya, vol. 14 (1980), no. 1, pp. 53-54
36. Mahlon M. Day. Amenable semigroups. Illinois Journal of Mathematics, vol. 1 (1957), pp. 509-544.
37. Volodymyr Nekrashevych. Self-similar groups. Mathematical Surveys and Monographs, 117. American Mathematical Society, Providence, RI, 2005. ISBN 0-8218-3831-8
38. R. I. Grigorchuk, and A. Zuk. The lamplighter group as a group generated by a 2-state automaton, and its spectrum. Geometriae Dedicata, vol. 87 (2001), no. 1-3, pp. 209–244.
39. R. I. Grigorchuk, P. Linnell, T. Schick, and A. Zuk. On a question of Atiyah. Comptes Rendus de l'Académie des Sciences, Série I. vol. 331 (2000), no. 9, pp. 663-668.
40. R. I. Grigorchuk. Symmetrical random walks on discrete groups. Multicomponent random systems, pp. 285-325, Adv. Probab. Related Topics, 6, Marcel Dekker, New York, 1980; ISBN 0-8247-6831-0
41. R. Ortner, and W. Woess. Non-backtracking random walks and cogrowth of graphs. Canadian Journal of Mathematics, vol. 59 (2007), no. 4, pp. 828-844
42. Sam Northshield. Quasi-regular graphs, cogrowth, and amenability. Dynamical systems and differential equations (Wilmington, NC, 2002). Discrete and Continuous Dynamical Systems, Series A. 2003, suppl., pp. 678-687.
43. Richard Sharp. Critical exponents for groups of isometries. Geometriae Dedicata, vol. 125 (2007), pp. 63-74
44. Laureates of the Moscow Mathematical Society Prize
45. Fulbright Scholar Directory
46. International Conference on GROUP THEORY: combinatorial, geometric, and dynamical aspects of infinite groups. Archived 2010-12-12 at the Wayback Machine
47. Preface, International Journal of Algebra and Computation, vol. 15 (2005), no. 5-6, pp. v-vi
48. Bartholdi, L., Ceccherini-Silberstein, T., Smirnova-Nagnibeda, T., Zuk, A.Infinite Groups: Geometric, Combinatorial and Dynamical Aspects.
49. Recipients of The Association of Former Students Distinguished Achievement Awards, University Level, Texas A&M University
50. List of Fellows of the American Mathematical Society, retrieved 2013-01-19.
51. AMS 2015 Leroy P. Steele Prize
52. Laureates of Bogolyubov Prize of Ukrainian Academy of Science
53. Laureate of Humboldt Research Award
External links
• Web-page of Rostislav Grigorchuk at Texas A&M University
• Groups and Dynamics at Texas A&M University
Roswitha Blind
Roswitha Blind (also published as Roswitha Hammer)[1][2] is a German mathematician, specializing in convex geometry, discrete geometry, and polyhedral combinatorics, and a politician and organizer for the Social Democratic Party of Germany in Stuttgart.
Mathematics
As Roswitha Hammer, Blind completed a Ph.D. in 1974 at the University of Stuttgart. Her dissertation, Über konvexe Strukturen und die Beziehungen zur elementaren Konvexität, concerned convex geometry and discrete geometry and was supervised by Kurt Leichtweiss.[2]
She is best known in mathematics for a 1987 publication with Peter Mani-Levitska in which, solving a conjecture of Micha Perles, she and Mani-Levitska proved that the combinatorial structure of simple polytopes is completely determined by their graphs.[3] This result has been called the Blind–Mani theorem[4] or the Perles–Blind–Mani theorem.[5]
In a 1979 publication,[6] she introduced a class of convex polytopes sometimes called the Blind polytopes, generalizing the semiregular polytopes and Johnson solids, in which all faces are regular polytopes.[7]
Politics
Blind became a city councillor in the Möhringen-Vaihingen district of Stuttgart in 2004,[8] stepping down from that seat in 2009 in order to become chair of the Social Democratic Party of Germany local council group.[9] As councillor, in order to better serve the youth of her district, she became chair of a local football club, 1. FC Lauchhau-Lauchäcker, in 2006, also serving as president of the Stuttgart Sports Forum.[10]
She retired from politics in 2014,[8] and from her position with the football club in 2016.[10]
References
1. "Blind, Roswitha", MathSciNet, retrieved 2021-08-10
2. Roswitha Blind at the Mathematics Genealogy Project
3. Blind, Roswitha; Mani-Levitska, Peter (1987), "Puzzles and polytope isomorphisms", Aequationes Mathematicae, 34 (2–3): 287–297, doi:10.1007/BF01830678, MR 0921106.
4. Kalai, Gil (1995), "Combinatorics and convexity" (PDF), Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Zürich, 1994), Birkhäuser, Basel, pp. 1363–1374, MR 1404038
5. Gruber, Peter M. (2007), "Edge graphs of simple polytopes determine the combinatorial structure; the Perles–Blind–Mani theorem", Convex and Discrete Geometry, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 336, Berlin: Springer, pp. 275–277, ISBN 978-3-540-71132-2, MR 2335496
6. Blind, R. (1979), "Konvexe Polytope mit kongruenten regulären $(n-1)$-Seiten im $\mathbb {R} ^{n}$ ($n\geq 4$)", Commentarii Mathematici Helvetici (in German), 54 (2): 304–308, doi:10.1007/BF02566273, MR 0535060
7. Klitzing, Richard, "Johnson solids, Blind polytopes, and CRFs", Polytopes, retrieved 2021-08-10
8. Müller, Kai (16 July 2014), "Vaihinger Stadträtin Roswitha Blind sagt Adieu: Familie gewinnt gegen Politik", Stuttgarter Zeitung (in German)
9. Dank an Dr. Roswitha Blind (in German), SPD Stuttgart Möhringen-Fasanenhof-Sonnenberg, 30 August 2009, retrieved 2021-08-10
10. Kratz, Alexandra (1 March 2016), "Jahre zwischen Hoffnung und Hoffnungslosigkeit: Die Alt-Stadträtin Roswitha Blind gibt im März 2016 ihr Amt als Vorsitzende des 1. FC Lauchhau-Lauchäcker ab", Stuttgarter Zeitung (in German)
Rota's basis conjecture
In linear algebra and matroid theory, Rota's basis conjecture is an unproven conjecture concerning rearrangements of bases, named after Gian-Carlo Rota. It states that, if X is either a vector space of dimension n or more generally a matroid of rank n, with n disjoint bases Bi, then it is possible to arrange the elements of these bases into an n × n matrix in such a way that the rows of the matrix are exactly the given bases and the columns of the matrix are also bases. That is, it should be possible to find a second set of n disjoint bases Ci, each of which consists of one element from each of the bases Bi.
Examples
Rota's basis conjecture has a simple formulation for points in the Euclidean plane: it states that, given three triangles with distinct vertices, with each triangle colored with one of three colors, it must be possible to regroup the nine triangle vertices into three "rainbow" triangles having one vertex of each color. The triangles are all required to be non-degenerate, meaning that they do not have all three vertices on a line.
To see this as an instance of the basis conjecture, one may use either linear independence of the vectors ($x_{i},y_{i},1$) in a three-dimensional real vector space (where ($x_{i},y_{i}$) are the Cartesian coordinates of the triangle vertices) or equivalently one may use a matroid of rank three in which a set S of points is independent if either |S| ≤ 2 or S forms the three vertices of a non-degenerate triangle. For this linear algebra and this matroid, the bases are exactly the non-degenerate triangles. Given the three input triangles and the three rainbow triangles, it is possible to arrange the nine vertices into a 3 × 3 matrix in which each row contains the vertices of one of the single-color triangles and each column contains the vertices of one of the rainbow triangles.
Analogously, for points in three-dimensional Euclidean space, the conjecture states that the sixteen vertices of four non-degenerate tetrahedra of four different colors may be regrouped into four rainbow tetrahedra.
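Since the case n = 3 of the conjecture is known to be true, the planar statement can be verified by brute force over the 36 ways of matching up the color classes. A minimal sketch (the helper names and sample coordinates are illustrative, not from the literature):

```python
from itertools import permutations

def area2(p, q, r):
    # twice the signed area of triangle pqr; zero iff the points are collinear
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def rainbow_partition(A, B, C):
    """Regroup three color classes of three points each into three
    non-degenerate rainbow triangles, one point of each color per triangle."""
    for sigma in permutations(range(3)):
        for tau in permutations(range(3)):
            tris = [(A[i], B[sigma[i]], C[tau[i]]) for i in range(3)]
            if all(area2(*t) != 0 for t in tris):
                return tris
    return None  # never happens for non-degenerate inputs, by the n = 3 case

red   = [(0, 0), (1, 0), (0, 1)]
green = [(2, 2), (3, 2), (2, 3)]
blue  = [(5, 0), (6, 0), (5, 1)]
assert rainbow_partition(red, green, blue) is not None
```

The same exhaustive search, with `area2` replaced by a rank test, would check any rank-3 matroid instance of the conjecture.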
Partial results
The statement of Rota's basis conjecture was first published by Huang & Rota (1994), crediting it (without citation) to Rota in 1989.[1] The basis conjecture has been proven for paving matroids (for all n)[2] and for the case n ≤ 3 (for all types of matroid).[3] For arbitrary matroids, it is possible to arrange the basis elements into a matrix the first Ω(√n) columns of which are bases.[4] The basis conjecture for linear algebras over fields of characteristic zero and for even values of n would follow from another conjecture on Latin squares by Alon and Tarsi.[1][5] Based on this implication, the conjecture is known to be true for linear algebras over the real numbers for infinitely many values of n.[6]
Related problems
In connection with Tverberg's theorem, Bárány & Larman (1992) conjectured that, for every set of r (d + 1) points in d-dimensional Euclidean space, colored with d + 1 colors in such a way that there are r points of each color, there is a way to partition the points into rainbow simplices (sets of d + 1 points with one point of each color) in such a way that the convex hulls of these sets have a nonempty intersection.[7] For instance, the two-dimensional case (proven by Bárány and Larman) with r = 3 states that, for every set of nine points in the plane, colored with three colors and three points of each color, it is possible to partition the points into three intersecting rainbow triangles, a statement similar to Rota's basis conjecture which states that it is possible to partition the points into three non-degenerate rainbow triangles. The conjecture of Bárány and Larman allows a collinear triple of points to be considered as a rainbow triangle, whereas Rota's basis conjecture disallows this; on the other hand, Rota's basis conjecture does not require the triangles to have a common intersection. Substantial progress on the conjecture of Bárány and Larman was made by Blagojević, Matschke & Ziegler (2009).[8]
See also
• Rota's conjecture, a different conjecture by Rota about linear algebra and matroids
References
1. Huang, Rosa; Rota, Gian-Carlo (1994), "On the relations of various conjectures on Latin squares and straightening coefficients", Discrete Mathematics, 128 (1–3): 225–236, doi:10.1016/0012-365X(94)90114-7, MR 1271866. See in particular Conjecture 4, p. 226.
2. Geelen, Jim; Humphries, Peter J. (2006), "Rota's basis conjecture for paving matroids" (PDF), SIAM Journal on Discrete Mathematics, 20 (4): 1042–1045, CiteSeerX 10.1.1.63.6806, doi:10.1137/060655596, MR 2272246.
3. Chan, Wendy (1995), "An exchange property of matroid", Discrete Mathematics, 146 (1–3): 299–302, doi:10.1016/0012-365X(94)00071-3, MR 1360125.
4. Geelen, Jim; Webb, Kerri (2007), "On Rota's basis conjecture" (PDF), SIAM Journal on Discrete Mathematics, 21 (3): 802–804, doi:10.1137/060666494, MR 2354007.
5. Onn, Shmuel (1997), "A colorful determinantal identity, a conjecture of Rota, and Latin squares", The American Mathematical Monthly, 104 (2): 156–159, doi:10.2307/2974985, JSTOR 2974985, MR 1437419.
6. Glynn, David G. (2010), "The conjectures of Alon–Tarsi and Rota in dimension prime minus one", SIAM Journal on Discrete Mathematics, 24 (2): 394–399, doi:10.1137/090773751, MR 2646093.
7. Bárány, I.; Larman, D. G. (1992), "A colored version of Tverberg's theorem", Journal of the London Mathematical Society, Second Series, 45 (2): 314–320, CiteSeerX 10.1.1.108.9781, doi:10.1112/jlms/s2-45.2.314, MR 1171558.
8. Blagojević, Pavle V. M.; Matschke, Benjamin; Ziegler, Günter M. (2009), Optimal bounds for the colored Tverberg problem, arXiv:0910.4987, Bibcode:2009arXiv0910.4987B.
External links
• Rota's basis conjecture, Open Problem Garden.
Rota's conjecture
Rota's excluded minors conjecture is one of a number of conjectures made by mathematician Gian-Carlo Rota. It is considered to be an important problem by some members of the structural combinatorics community. Rota conjectured in 1971 that, for every finite field, the family of matroids that can be represented over that field has only finitely many excluded minors.[1] A proof of the conjecture has been announced by Geelen, Gerards, and Whittle.[2]
Statement of the conjecture
If $S$ is a set of points in a vector space defined over a field $F$, then the linearly independent subsets of $S$ form the independent sets of a matroid $M$; $S$ is said to be a representation of any matroid isomorphic to $M$. Not every matroid has a representation over every field; for instance, the Fano plane is representable only over fields of characteristic two. Other matroids are representable over no fields at all. The matroids that are representable over a particular field form a proper subclass of all matroids.
A minor of a matroid is another matroid formed by a sequence of two operations: deletion and contraction. In the case of points from a vector space, deleting a point is simply the removal of that point from $S$; contraction is a dual operation in which a point is removed and the remaining points are projected onto a hyperplane that does not contain the removed point. It follows that if a matroid is representable over a field, then so are all of its minors. A matroid that is not representable over $F$, and is minor-minimal with that property, is called an "excluded minor"; a matroid $M$ is representable over $F$ if and only if it contains no excluded minor.
For representability over the real numbers, there are infinitely many forbidden minors.[3] Rota's conjecture is that, for every finite field $F$, there is only a finite number of forbidden minors.
Partial results
W. T. Tutte proved that the binary matroids (matroids representable over the field of two elements) have a single forbidden minor, the uniform matroid $U{}_{4}^{2}$ (geometrically, a line with four points on it).[4][5]
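The counting obstruction behind Tutte's excluded minor is easy to check directly: representing $U{}_{4}^{2}$ over GF(2) would require four pairwise linearly independent (hence distinct) nonzero vectors in GF(2)², but only three nonzero vectors exist. A small sketch of that count (variable names are mine):

```python
from itertools import product

# all nonzero vectors of GF(2)^2 -- the only candidates for the four
# points of the 4-point line U_4^2 in a rank-2 binary representation
nonzero = [v for v in product((0, 1), repeat=2) if v != (0, 0)]

# over GF(2), two nonzero vectors are linearly dependent iff they are
# equal, so four pairwise independent points would need four distinct
# nonzero vectors -- but there are only three
assert len(nonzero) == 3
```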
A matroid is representable over the ternary field GF(3) if and only if it has none of the following four matroids as minors: a five-point line $U{}_{5}^{2}$, its dual matroid $U{}_{5}^{3}$ (five points in general position in three dimensions), the Fano plane, or the dual of the Fano plane. Thus, Rota's conjecture is true in this case as well.[6][7] As a consequence of this result and of the forbidden minor characterization by Tutte (1958) of the regular matroids (matroids that can be represented over all fields) it follows that a matroid is regular if and only if it is both binary and ternary.[7]
There are seven forbidden minors for the matroids representable over GF(4).[8] They are:
• The six-point line $U{}_{6}^{2}$.
• The dual $U{}_{6}^{4}$ to the six-point line, six points in general position in four dimensions.
• A self-dual six-point rank-three matroid with a single three-point line.
• The non-Fano matroid formed by the seven points at the vertices, edge midpoints, and centroid of an equilateral triangle in the Euclidean plane. This configuration is one of two known sets of planar points with fewer than $n/2$ two-point lines.[9]
• The dual of the non-Fano matroid.
• The eight-point matroid of a square antiprism.
• The matroid obtained by relaxing the unique pair of disjoint circuit-hyperplanes of the square antiprism.
This result won the 2003 Fulkerson Prize for its authors Jim Geelen, A. M. H. Gerards, and A. Kapoor.[10]
For GF(5), several forbidden minors on up to 12 elements are known,[11] but it is not known whether the list is complete.
Reported proof
Geoff Whittle announced during a 2013 visit to the UK that he, Jim Geelen, and Bert Gerards had solved Rota's Conjecture. The collaboration involved intense visits where the researchers sat in a room together, all day every day, in front of a whiteboard.[12] It would take them years to write up their research in its entirety and publish it.[13][14] An outline of the proof has appeared in the Notices of the AMS.[15]
See also
• Rota's basis conjecture, a different conjecture by Rota about linear algebra and matroids
References
1. Rota, Gian-Carlo (1971), "Combinatorial theory, old and new", Actes du Congrès International des Mathématiciens (Nice, 1970), Tome 3, Paris: Gauthier-Villars, pp. 229–233, MR 0505646.
2. "Solving Rota's conjecture" (PDF), Notices of the American Mathematical Society: 736–743, Aug 17, 2014
3. Vámos, P. (1978), "The missing axiom of matroid theory is lost forever", Journal of the London Mathematical Society, Second Series, 18 (3): 403–408, doi:10.1112/jlms/s2-18.3.403, MR 0518224.
4. Tutte, W. T. (1958), "A homotopy theorem for matroids. I, II", Transactions of the American Mathematical Society, 88: 144–174, doi:10.2307/1993244, MR 0101526.
5. Tutte, W. T. (1965), "Lectures on matroids", Journal of Research of the National Bureau of Standards, 69B: 1–47, doi:10.6028/jres.069b.001, MR 0179781. See in particular section 5.3, "Characterization of binary matroids", p.17.
6. Bixby, Robert E. (1979), "On Reid's characterization of the ternary matroids", Journal of Combinatorial Theory, Series B, 26 (2): 174–204, doi:10.1016/0095-8956(79)90056-X, MR 0532587. Bixby attributes this characterization of ternary matroids to Ralph Reid.
7. Seymour, P. D. (1979), "Matroid representation over GF(3)", Journal of Combinatorial Theory, Series B, 26 (2): 159–173, doi:10.1016/0095-8956(79)90055-8, MR 0532586.
8. Geelen, J. F.; Gerards, A. M. H.; Kapoor, A. (2000), "The excluded minors for GF(4)-representable matroids" (PDF), Journal of Combinatorial Theory, Series B, 79 (2): 247–299, doi:10.1006/jctb.2000.1963, MR 1769191, archived from the original (PDF) on 2010-09-24.
9. Kelly, L. M.; Moser, W. O. J. (1958), "On the number of ordinary lines determined by n points", Can. J. Math., 10: 210–219, doi:10.4153/CJM-1958-024-6.
10. 2003 Fulkerson Prize citation, retrieved 2012-08-18.
11. Betten, A.; Kingan, R. J.; Kingan, S. R. (2007), "A note on GF(5)-representable matroids" (PDF), MATCH Communications in Mathematical and in Computer Chemistry, 58 (2): 511–521, MR 2357372.
12. Geelen, Gerards and Whittle announce a proof of Rota's conjecture University of Waterloo, August 28, 2013
13. Rota's Conjecture: Researcher solves 40-year-old math problem PhysOrg, August 15, 2013.
14. CWI researcher proves famous Rota’s Conjecture Archived 2013-10-26 at the Wayback Machine CWI, August 22, 2013.
15. "Solving Rota's conjecture" (PDF), Notices of the American Mathematical Society: 736–743, Aug 17, 2014
Rotation matrix
In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, using the convention below, the matrix
$R={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}$
rotates points in the xy plane counterclockwise through an angle θ about the origin of a two-dimensional Cartesian coordinate system. To perform the rotation on a plane point with standard coordinates v = (x, y), it should be written as a column vector, and multiplied by the matrix R:
$R\mathbf {v} ={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}x\cos \theta -y\sin \theta \\x\sin \theta +y\cos \theta \end{bmatrix}}.$
If x and y are the endpoint coordinates of a unit vector at angle φ from the x axis, so that x = cos φ and y = sin φ, then the above equations become the trigonometric summation angle formulae. Indeed, a rotation matrix can be seen as the trigonometric summation angle formulae in matrix form. One way to understand this is to say we have a vector at an angle 30° from the x axis, and we wish to rotate that angle by a further 45°. We simply need to compute the vector endpoint coordinates at 75°.
The examples in this article apply to active rotations of vectors counterclockwise in a right-handed coordinate system (y counterclockwise from x) by pre-multiplication (R on the left). If any one of these is changed (such as rotating axes instead of vectors, a passive transformation), then the inverse of the example matrix should be used, which coincides with its transpose.
Since matrix multiplication has no effect on the zero vector (the coordinates of the origin), rotation matrices describe rotations about the origin. Rotation matrices provide an algebraic description of such rotations, and are used extensively for computations in geometry, physics, and computer graphics. In some literature, the term rotation is generalized to include improper rotations, characterized by orthogonal matrices with a determinant of −1 (instead of +1). These combine proper rotations with reflections (which invert orientation). In other cases, where reflections are not being considered, the label proper may be dropped. The latter convention is followed in this article.
Rotation matrices are square matrices, with real entries. More specifically, they can be characterized as orthogonal matrices with determinant 1; that is, a square matrix R is a rotation matrix if and only if RT = R−1 and det R = 1. The set of all orthogonal matrices of size n with determinant +1 is a representation of a group known as the special orthogonal group SO(n), one example of which is the rotation group SO(3). The set of all orthogonal matrices of size n with determinant +1 or −1 is a representation of the (general) orthogonal group O(n).
In two dimensions
In two dimensions, the standard rotation matrix has the following form:
$R(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}.$
This rotates column vectors by means of the following matrix multiplication,
${\begin{bmatrix}x'\\y'\\\end{bmatrix}}={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}{\begin{bmatrix}x\\y\\\end{bmatrix}}.$
Thus, the new coordinates (x′, y′) of a point (x, y) after rotation are
${\begin{aligned}x'&=x\cos \theta -y\sin \theta \,\\y'&=x\sin \theta +y\cos \theta \,\end{aligned}}.$
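These formulas translate directly into code. A minimal numerical check (helper names are mine, not a standard API):

```python
import math

def rot(theta):
    """The standard 2D rotation matrix R(theta) as a nested list."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(R, v):
    # multiply the column vector v = (x, y) by the 2x2 matrix R
    return (R[0][0] * v[0] + R[0][1] * v[1],
            R[1][0] * v[0] + R[1][1] * v[1])

x, y = apply(rot(math.radians(90)), (1.0, 0.0))
# a 90° counterclockwise rotation sends (1, 0) to (0, 1)
assert abs(x) < 1e-12 and abs(y - 1.0) < 1e-12
```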
Examples
For example, when the vector
$\mathbf {\hat {x}} ={\begin{bmatrix}1\\0\\\end{bmatrix}}$
is rotated by an angle θ, its new coordinates are
${\begin{bmatrix}\cos \theta \\\sin \theta \\\end{bmatrix}},$
and when the vector
$\mathbf {\hat {y}} ={\begin{bmatrix}0\\1\\\end{bmatrix}}$
is rotated by an angle θ, its new coordinates are
${\begin{bmatrix}-\sin \theta \\\cos \theta \\\end{bmatrix}}.$
Direction
The direction of vector rotation is counterclockwise if θ is positive (e.g. 90°), and clockwise if θ is negative (e.g. −90°) for $R(\theta )$. Thus the clockwise rotation matrix is found as
$R(-\theta )={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \\\end{bmatrix}}.$
The two-dimensional case is the only non-trivial (i.e. not one-dimensional) case where the rotation matrices group is commutative, so that it does not matter in which order multiple rotations are performed. An alternative convention uses rotating axes,[1] and the above matrices also represent a rotation of the axes clockwise through an angle θ.
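Both claims are easy to verify numerically: 2D rotations compose by adding angles in either order, and R(−θ), the transpose of R(θ), undoes R(θ). A sketch using the 30° + 45° = 75° example from earlier in the article (helper names are mine):

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(2) for j in range(2))

a, b = math.radians(30), math.radians(45)
# composing rotations adds angles, in either order (SO(2) is commutative)
assert close(matmul(rot(a), rot(b)), rot(a + b))
assert close(matmul(rot(b), rot(a)), rot(a + b))
# R(-a) is the transpose of R(a) and is its inverse
assert close(matmul(rot(-a), rot(a)), rot(0.0))
```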
Non-standard orientation of the coordinate system
If a standard right-handed Cartesian coordinate system is used, with the x-axis to the right and the y-axis up, the rotation R(θ) is counterclockwise. If a left-handed Cartesian coordinate system is used, with x directed to the right but y directed down, R(θ) is clockwise. Such non-standard orientations are rarely used in mathematics but are common in 2D computer graphics, which often have the origin in the top left corner and the y-axis down the screen or page.[2]
See below for other alternative conventions which may change the sense of the rotation produced by a rotation matrix.
Common 2D rotations
Particularly useful are the matrices
${\begin{bmatrix}0&-1\\[3pt]1&0\\\end{bmatrix}},\quad {\begin{bmatrix}-1&0\\[3pt]0&-1\\\end{bmatrix}},\quad {\begin{bmatrix}0&1\\[3pt]-1&0\\\end{bmatrix}}$
for 90°, 180°, and 270° counter-clockwise rotations.
A 180° rotation (middle) followed by a positive 90° rotation (left) is equivalent to a single negative 90° (positive 270°) rotation (right). Each of these figures depicts the result of a rotation relative to an upright starting position (bottom left) and includes the matrix representation of the permutation applied by the rotation (center right), as well as other related diagrams. See "Permutation notation" on Wikiversity for details.
Relationship with complex plane
Since
${\begin{bmatrix}0&-1\\1&0\end{bmatrix}}^{2}\ =\ {\begin{bmatrix}-1&0\\0&-1\end{bmatrix}}\ =-I,$
the matrices of the shape
${\begin{bmatrix}x&-y\\y&x\end{bmatrix}}$
form a ring isomorphic to the field of the complex numbers $\mathbb {C} $. Under this isomorphism, the rotation matrices correspond to the circle of unit complex numbers, the complex numbers of modulus 1.
If one identifies $\mathbb {R} ^{2}$ with $\mathbb {C} $ through the linear isomorphism $(a,b)\mapsto a+ib,$ the action of a matrix of the above form on vectors of $\mathbb {R} ^{2}$ corresponds to the multiplication by the complex number x + iy, and rotations correspond to multiplication by complex numbers of modulus 1.
As every rotation matrix can be written
${\begin{pmatrix}\cos t&-\sin t\\\sin t&\cos t\end{pmatrix}},$
the above correspondence associates such a matrix with the complex number
$\cos t+i\sin t=e^{it}$
(this last equality is Euler's formula).
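The correspondence can be checked directly: multiplying a point of the plane (viewed as a complex number) by e^{it} rotates it exactly as the matrix R(t) does. A small sketch (the sample point and variable names are mine):

```python
import cmath
import math

t = 0.7
p = complex(2.0, -1.0)          # identify the point (2, -1) with 2 - i
w = cmath.exp(1j * t) * p       # rotate by angle t via complex multiplication

# the same rotation via the matrix [[cos t, -sin t], [sin t, cos t]]
x, y = p.real, p.imag
wx = math.cos(t) * x - math.sin(t) * y
wy = math.sin(t) * x + math.cos(t) * y

assert abs(w.real - wx) < 1e-12 and abs(w.imag - wy) < 1e-12
```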
In three dimensions
See also: Rotation formalisms in three dimensions
A positive 90° rotation around the y-axis (left) after one around the z-axis (middle) gives a 120° rotation around the main diagonal (right).
In the top left corner are the rotation matrices, in the bottom right corner are the corresponding permutations of the cube with the origin in its center.
Basic 3D rotations
A basic 3D rotation (also called elemental rotation) is a rotation about one of the axes of a coordinate system. The following three basic rotation matrices rotate vectors by an angle θ about the x-, y-, or z-axis, in three dimensions, using the right-hand rule—which codifies their alternating signs. Notice that the right-hand rule only works when multiplying $R\cdot {\vec {x}}$. (The same matrices can also represent a clockwise rotation of the axes.[nb 1])
${\begin{alignedat}{1}R_{x}(\theta )&={\begin{bmatrix}1&0&0\\0&\cos \theta &-\sin \theta \\[3pt]0&\sin \theta &\cos \theta \\[3pt]\end{bmatrix}}\\[6pt]R_{y}(\theta )&={\begin{bmatrix}\cos \theta &0&\sin \theta \\[3pt]0&1&0\\[3pt]-\sin \theta &0&\cos \theta \\\end{bmatrix}}\\[6pt]R_{z}(\theta )&={\begin{bmatrix}\cos \theta &-\sin \theta &0\\[3pt]\sin \theta &\cos \theta &0\\[3pt]0&0&1\\\end{bmatrix}}\end{alignedat}}$
For column vectors, each of these basic vector rotations appears counterclockwise when the axis about which they occur points toward the observer, the coordinate system is right-handed, and the angle θ is positive. Rz, for instance, rotates a vector aligned with the x-axis toward the y-axis, as can easily be checked by operating with Rz on the vector (1,0,0):
$R_{z}(90^{\circ }){\begin{bmatrix}1\\0\\0\\\end{bmatrix}}={\begin{bmatrix}\cos 90^{\circ }&-\sin 90^{\circ }&0\\\sin 90^{\circ }&\quad \cos 90^{\circ }&0\\0&0&1\\\end{bmatrix}}{\begin{bmatrix}1\\0\\0\\\end{bmatrix}}={\begin{bmatrix}0&-1&0\\1&0&0\\0&0&1\\\end{bmatrix}}{\begin{bmatrix}1\\0\\0\\\end{bmatrix}}={\begin{bmatrix}0\\1\\0\\\end{bmatrix}}$
This is similar to the rotation produced by the above-mentioned two-dimensional rotation matrix. See below for alternative conventions which may apparently or actually invert the sense of the rotation produced by these matrices.
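The three basic rotation matrices and the worked example above can be reproduced directly (a pure-Python sketch; `Rx`, `Ry`, `Rz`, and `apply` are helper names introduced here):

```python
import math

def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(M, v):
    """Premultiply a column vector v by the 3x3 matrix M."""
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

# Rz(90 deg) carries the x-axis onto the y-axis, as in the worked example.
x, y, z = apply(Rz(math.pi / 2), (1, 0, 0))
assert abs(x) < 1e-12 and abs(y - 1) < 1e-12 and abs(z) < 1e-12
```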
General 3D rotations
Other 3D rotation matrices can be obtained from these three using matrix multiplication. For example, the product
${\begin{aligned}R=R_{z}(\alpha )\,R_{y}(\beta )\,R_{x}(\gamma )&={\overset {\text{yaw}}{\begin{bmatrix}\cos \alpha &-\sin \alpha &0\\\sin \alpha &\cos \alpha &0\\0&0&1\\\end{bmatrix}}}{\overset {\text{pitch}}{\begin{bmatrix}\cos \beta &0&\sin \beta \\0&1&0\\-\sin \beta &0&\cos \beta \\\end{bmatrix}}}{\overset {\text{roll}}{\begin{bmatrix}1&0&0\\0&\cos \gamma &-\sin \gamma \\0&\sin \gamma &\cos \gamma \\\end{bmatrix}}}\\&={\begin{bmatrix}\cos \alpha \cos \beta &\cos \alpha \sin \beta \sin \gamma -\sin \alpha \cos \gamma &\cos \alpha \sin \beta \cos \gamma +\sin \alpha \sin \gamma \\\sin \alpha \cos \beta &\sin \alpha \sin \beta \sin \gamma +\cos \alpha \cos \gamma &\sin \alpha \sin \beta \cos \gamma -\cos \alpha \sin \gamma \\-\sin \beta &\cos \beta \sin \gamma &\cos \beta \cos \gamma \\\end{bmatrix}}\end{aligned}}$
represents a rotation whose yaw, pitch, and roll angles are α, β and γ, respectively. More formally, it is an intrinsic rotation whose Tait–Bryan angles are α, β, γ, about axes z, y, x, respectively. Similarly, the product
${\begin{aligned}\\R=R_{z}(\gamma )\,R_{y}(\beta )\,R_{x}(\alpha )&={\begin{bmatrix}\cos \gamma &-\sin \gamma &0\\\sin \gamma &\cos \gamma &0\\0&0&1\\\end{bmatrix}}{\begin{bmatrix}\cos \beta &0&\sin \beta \\0&1&0\\-\sin \beta &0&\cos \beta \\\end{bmatrix}}{\begin{bmatrix}1&0&0\\0&\cos \alpha &-\sin \alpha \\0&\sin \alpha &\cos \alpha \\\end{bmatrix}}\\&={\begin{bmatrix}\cos \beta \cos \gamma &\sin \alpha \sin \beta \cos \gamma -\cos \alpha \sin \gamma &\cos \alpha \sin \beta \cos \gamma +\sin \alpha \sin \gamma \\\cos \beta \sin \gamma &\sin \alpha \sin \beta \sin \gamma +\cos \alpha \cos \gamma &\cos \alpha \sin \beta \sin \gamma -\sin \alpha \cos \gamma \\-\sin \beta &\sin \alpha \cos \beta &\cos \alpha \cos \beta \\\end{bmatrix}}\end{aligned}}$
represents an extrinsic rotation whose (improper) Euler angles are α, β, γ, about axes x, y, z.
These matrices produce the desired effect only if they are used to premultiply column vectors, and (since in general matrix multiplication is not commutative) only if they are applied in the specified order (see Ambiguities for more details). The order of rotation operations is from right to left; the matrix adjacent to the column vector is the first to be applied, and then the one to the left.[3]
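The yaw–pitch–roll product can be checked entry by entry against the closed form given above. A small sketch under the same conventions (the helper names `Rx`, `Ry`, `Rz`, `matmul` are introduced here; angles are arbitrary test values):

```python
import math

def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

alpha, beta, gamma = 0.3, -1.1, 0.8   # yaw, pitch, roll (test values)

# R = Rz(alpha) Ry(beta) Rx(gamma); the rightmost factor acts first.
R = matmul(Rz(alpha), matmul(Ry(beta), Rx(gamma)))

# Spot-check entries against the closed-form product given above.
assert abs(R[0][0] - math.cos(alpha) * math.cos(beta)) < 1e-12
assert abs(R[2][0] + math.sin(beta)) < 1e-12
assert abs(R[2][1] - math.cos(beta) * math.sin(gamma)) < 1e-12
```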
Conversion from rotation matrix to axis–angle
Every rotation in three dimensions is defined by its axis (a vector along this axis is unchanged by the rotation), and its angle — the amount of rotation about that axis (Euler rotation theorem).
There are several methods to compute the axis and angle from a rotation matrix (see also axis–angle representation). Here, we only describe the method based on the computation of the eigenvectors and eigenvalues of the rotation matrix. It is also possible to use the trace of the rotation matrix.
Determining the axis
Given a 3 × 3 rotation matrix R, a vector u parallel to the rotation axis must satisfy
$R\mathbf {u} =\mathbf {u} ,$
since the rotation of u around the rotation axis must result in u. The equation above may be solved for u, which is unique up to a scalar factor, unless R = I.
Further, the equation may be rewritten
$R\mathbf {u} =I\mathbf {u} \implies \left(R-I\right)\mathbf {u} =0,$
which shows that u lies in the null space of R − I.
Viewed in another way, u is an eigenvector of R corresponding to the eigenvalue λ = 1. Every rotation matrix must have this eigenvalue, the other two eigenvalues being complex conjugates of each other. It follows that a general rotation matrix in three dimensions has, up to a multiplicative constant, only one real eigenvector.
One way to determine the rotation axis is by showing that:
${\begin{aligned}0&=R^{\mathsf {T}}0+0\\&=R^{\mathsf {T}}\left(R-I\right)\mathbf {u} +\left(R-I\right)\mathbf {u} \\&=\left(R^{\mathsf {T}}R-R^{\mathsf {T}}+R-I\right)\mathbf {u} \\&=\left(I-R^{\mathsf {T}}+R-I\right)\mathbf {u} \\&=\left(R-R^{\mathsf {T}}\right)\mathbf {u} \end{aligned}}$
Since (R − Rᵀ) is a skew-symmetric matrix, we can choose u such that
$[\mathbf {u} ]_{\times }=\left(R-R^{\mathsf {T}}\right).$
The matrix–vector product becomes a cross product of a vector with itself, ensuring that the result is zero:
$\left(R-R^{\mathsf {T}}\right)\mathbf {u} =[\mathbf {u} ]_{\times }\mathbf {u} =\mathbf {u} \times \mathbf {u} =0\,$
Therefore, if
$R={\begin{bmatrix}a&b&c\\d&e&f\\g&h&i\\\end{bmatrix}},$
then
$\mathbf {u} ={\begin{bmatrix}h-f\\c-g\\d-b\\\end{bmatrix}}.$
The magnitude of u computed this way is ‖u‖ = 2 sin θ, where θ is the angle of rotation.
This does not work if R is symmetric. Above, if R − Rᵀ is zero, then all subsequent steps are invalid. In this case, it is necessary to diagonalize R and find the eigenvector corresponding to an eigenvalue of 1.
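The entry formula u = (h − f, c − g, d − b) is easy to apply by hand or in code. A sketch (the helper name `rotation_axis` is introduced here), using the 120° permutation rotation from the examples below as test data:

```python
import math

def rotation_axis(R):
    """Axis direction u from the skew part R - R^T (valid when R is not symmetric)."""
    return (R[2][1] - R[1][2],
            R[0][2] - R[2][0],
            R[1][0] - R[0][1])

# This permutation matrix rotates 120 degrees about the diagonal x = y = z.
P = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]

u = rotation_axis(P)
assert u == (1, 1, 1)                    # direction of the axis x = y = z

# The magnitude of u is 2 sin(theta), here 2 sin(120 deg) = sqrt(3).
norm = math.sqrt(sum(c * c for c in u))
assert abs(norm - 2 * math.sin(math.radians(120))) < 1e-12
```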
Determining the angle
To find the angle of a rotation, once the axis of the rotation is known, select a vector v perpendicular to the axis. Then the angle of the rotation is the angle between v and Rv.
A more direct method, however, is to simply calculate the trace: the sum of the diagonal elements of the rotation matrix. Care should be taken to select the right sign for the angle θ to match the chosen axis:
$\operatorname {tr} (R)=1+2\cos \theta ,$
from which follows that the angle's absolute value is
$|\theta |=\arccos \left({\frac {\operatorname {tr} (R)-1}{2}}\right).$
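The trace formula translates directly into code. A minimal sketch (the helper name `rotation_angle` is introduced here), again using the 120° permutation rotation, whose trace is 0:

```python
import math

def rotation_angle(R):
    """Absolute rotation angle from the trace: tr(R) = 1 + 2 cos(theta)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    return math.acos((tr - 1) / 2)

# 120-degree rotation about x = y = z (trace 0):
P = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
assert abs(rotation_angle(P) - math.radians(120)) < 1e-12
```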
Rotation matrix from axis and angle
The matrix of a proper rotation R by angle θ around the axis u = (ux, uy, uz), a unit vector with $u_{x}^{2}+u_{y}^{2}+u_{z}^{2}=1$, is given by:[4]
$R={\begin{bmatrix}\cos \theta +u_{x}^{2}\left(1-\cos \theta \right)&u_{x}u_{y}\left(1-\cos \theta \right)-u_{z}\sin \theta &u_{x}u_{z}\left(1-\cos \theta \right)+u_{y}\sin \theta \\u_{y}u_{x}\left(1-\cos \theta \right)+u_{z}\sin \theta &\cos \theta +u_{y}^{2}\left(1-\cos \theta \right)&u_{y}u_{z}\left(1-\cos \theta \right)-u_{x}\sin \theta \\u_{z}u_{x}\left(1-\cos \theta \right)-u_{y}\sin \theta &u_{z}u_{y}\left(1-\cos \theta \right)+u_{x}\sin \theta &\cos \theta +u_{z}^{2}\left(1-\cos \theta \right)\end{bmatrix}}.$
A derivation of this matrix from first principles can be found in section 9.2 of the reference.[5] The basic idea is to divide the problem into a few simple, known steps:
1. First rotate the given axis and the point such that the axis lies in one of the coordinate planes (xy, yz or zx).
2. Then rotate the given axis and the point such that the axis is aligned with one of the two coordinate axes of that coordinate plane (x, y or z).
3. Use one of the fundamental rotation matrices to rotate the point about the coordinate axis with which the rotation axis is now aligned.
4. Reverse the rotation of step 2 (undoing step 2).
5. Reverse the rotation of step 1 (undoing step 1).
This can be written more concisely as
$R=(\cos \theta )\,I+(\sin \theta )\,[\mathbf {u} ]_{\times }+(1-\cos \theta )\,(\mathbf {u} \otimes \mathbf {u} ),$
where [u]× is the cross product matrix of u; the expression u ⊗ u is the outer product, and I is the identity matrix. Alternatively, the matrix entries are:
$R_{jk}={\begin{cases}\cos ^{2}{\frac {\theta }{2}}+\sin ^{2}{\frac {\theta }{2}}\left(2u_{j}^{2}-1\right),\quad &{\text{if }}j=k\\2u_{j}u_{k}\sin ^{2}{\frac {\theta }{2}}-\varepsilon _{jkl}u_{l}\sin \theta ,\quad &{\text{if }}j\neq k\end{cases}}$
where εjkl is the Levi-Civita symbol with ε123 = 1. This is a matrix form of Rodrigues' rotation formula, (or the equivalent, differently parametrized Euler–Rodrigues formula) with[nb 2]
$\mathbf {u} \otimes \mathbf {u} =\mathbf {u} \mathbf {u} ^{\mathsf {T}}={\begin{bmatrix}u_{x}^{2}&u_{x}u_{y}&u_{x}u_{z}\\[3pt]u_{x}u_{y}&u_{y}^{2}&u_{y}u_{z}\\[3pt]u_{x}u_{z}&u_{y}u_{z}&u_{z}^{2}\end{bmatrix}},\qquad [\mathbf {u} ]_{\times }={\begin{bmatrix}0&-u_{z}&u_{y}\\[3pt]u_{z}&0&-u_{x}\\[3pt]-u_{y}&u_{x}&0\end{bmatrix}}.$
In $\mathbb {R} ^{3}$ the rotation of a vector x around the axis u by an angle θ can be written as:
$R_{\mathbf {u} }(\theta )\mathbf {x} =\mathbf {u} (\mathbf {u} \cdot \mathbf {x} )+\cos \left(\theta \right)(\mathbf {u} \times \mathbf {x} )\times \mathbf {u} +\sin \left(\theta \right)(\mathbf {u} \times \mathbf {x} )$
If the 3D space is right-handed and θ > 0, this rotation will be counterclockwise when u points towards the observer (Right-hand rule). Explicitly, with $({\boldsymbol {\alpha }},{\boldsymbol {\beta }},\mathbf {u} )$ a right-handed orthonormal basis,
$R_{\mathbf {u} }(\theta ){\boldsymbol {\alpha }}=\cos \left(\theta \right){\boldsymbol {\alpha }}+\sin \left(\theta \right){\boldsymbol {\beta }},\quad R_{\mathbf {u} }(\theta ){\boldsymbol {\beta }}=-\sin \left(\theta \right){\boldsymbol {\alpha }}+\cos \left(\theta \right){\boldsymbol {\beta }},\quad R_{\mathbf {u} }(\theta )\mathbf {u} =\mathbf {u} .$
Note that the striking differences from the equivalent Lie-algebraic formulation below are merely apparent.
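The Rodrigues matrix above can be implemented and sanity-checked in a few lines. A sketch (the helper name `axis_angle_matrix` is introduced here; the test case is a 120° rotation about the unit diagonal, which should reproduce the cyclic permutation matrix from the examples):

```python
import math

def axis_angle_matrix(u, theta):
    """R = cos(t) I + sin(t) [u]x + (1 - cos(t)) u (x) u, for a unit axis u."""
    x, y, z = u
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [[c + x*x*C,    x*y*C - z*s,  x*z*C + y*s],
            [y*x*C + z*s,  c + y*y*C,    y*z*C - x*s],
            [z*x*C - y*s,  z*y*C + x*s,  c + z*z*C]]

n = math.sqrt(3)
u = (1 / n, 1 / n, 1 / n)
R = axis_angle_matrix(u, math.radians(120))

# The axis is fixed: R u = u.
Ru = tuple(sum(R[i][j] * u[j] for j in range(3)) for i in range(3))
assert all(abs(Ru[i] - u[i]) < 1e-12 for i in range(3))

# 120 degrees about (1,1,1)/sqrt(3) is the cyclic permutation of the axes.
assert abs(R[1][0] - 1) < 1e-12 and abs(R[0][2] - 1) < 1e-12
```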
Properties
For any n-dimensional rotation matrix R acting on $\mathbb {R} ^{n},$
$R^{\mathsf {T}}=R^{-1}$ (The rotation is an orthogonal matrix)
It follows that:
$\det R=\pm 1$
A rotation is termed proper if det R = 1, and improper (or a roto-reflection) if det R = –1. For even dimensions n = 2k, the n eigenvalues λ of a proper rotation occur as pairs of complex conjugates which are roots of unity: λ = e±iθj for j = 1, ..., k, which is real only for λ = ±1. Therefore, there may be no vectors fixed by the rotation (λ = 1), and thus no axis of rotation. Any fixed eigenvectors occur in pairs, and the axis of rotation is an even-dimensional subspace.
For odd dimensions n = 2k + 1, a proper rotation R will have an odd number of eigenvalues, with at least one λ = 1 and the axis of rotation will be an odd dimensional subspace. Proof:
${\begin{aligned}\det \left(R-I\right)&=\det \left(R^{\mathsf {T}}\right)\det \left(R-I\right)=\det \left(R^{\mathsf {T}}R-R^{\mathsf {T}}\right)=\det \left(I-R^{\mathsf {T}}\right)\\&=\det(I-R)=\left(-1\right)^{n}\det \left(R-I\right)=-\det \left(R-I\right).\end{aligned}}$
Here I is the identity matrix, and we use det(RT) = det(R) = 1, as well as (−1)n = −1 since n is odd. Therefore, det(R – I) = 0, meaning there is a null vector v with (R – I)v = 0, that is Rv = v, a fixed eigenvector. There may also be pairs of fixed eigenvectors in the even-dimensional subspace orthogonal to v, so the total dimension of fixed eigenvectors is odd.
For example, in 2-space n = 2, a rotation by angle θ has eigenvalues λ = eiθ and λ = e−iθ, so there is no axis of rotation except when θ = 0, the case of the null rotation. In 3-space n = 3, the axis of a non-null proper rotation is always a unique line, and a rotation around this axis by angle θ has eigenvalues λ = 1, eiθ, e−iθ. In 4-space n = 4, the four eigenvalues are of the form e±iθ, e±iφ. The null rotation has θ = φ = 0. The case of θ = 0, φ ≠ 0 is called a simple rotation, with two unit eigenvalues forming an axis plane, and a two-dimensional rotation orthogonal to the axis plane. Otherwise, there is no axis plane. The case of θ = φ is called an isoclinic rotation, having eigenvalues e±iθ repeated twice, so every vector is rotated through an angle θ.
The trace of a rotation matrix is equal to the sum of its eigenvalues. For n = 2, a rotation by angle θ has trace 2 cos θ. For n = 3, a rotation around any axis by angle θ has trace 1 + 2 cos θ. For n = 4, the trace is 2(cos θ + cos φ), which becomes 4 cos θ for an isoclinic rotation.
Examples
• The 2 × 2 rotation matrix
$Q={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}$
corresponds to a 90° planar rotation clockwise about the origin.
• The transpose of the 2 × 2 matrix
$M={\begin{bmatrix}0.936&0.352\\0.352&-0.936\end{bmatrix}}$
is its inverse, but since its determinant is −1, this is not a proper rotation matrix; it is a reflection across the line 11y = 2x.
• The 3 × 3 rotation matrix
$Q={\begin{bmatrix}1&0&0\\0&{\frac {\sqrt {3}}{2}}&{\frac {1}{2}}\\0&-{\frac {1}{2}}&{\frac {\sqrt {3}}{2}}\end{bmatrix}}={\begin{bmatrix}1&0&0\\0&\cos 30^{\circ }&\sin 30^{\circ }\\0&-\sin 30^{\circ }&\cos 30^{\circ }\\\end{bmatrix}}$
corresponds to a −30° rotation around the x-axis in three-dimensional space.
• The 3 × 3 rotation matrix
$Q={\begin{bmatrix}0.36&0.48&-0.80\\-0.80&0.60&0.00\\0.48&0.64&0.60\end{bmatrix}}$
corresponds to a rotation of approximately −74° around the axis (−1/2,1,1) in three-dimensional space.
• The 3 × 3 permutation matrix
$P={\begin{bmatrix}0&0&1\\1&0&0\\0&1&0\end{bmatrix}}$
is a rotation matrix, as is the matrix of any even permutation, and rotates through 120° about the axis x = y = z.
• The 3 × 3 matrix
$M={\begin{bmatrix}3&-4&1\\5&3&-7\\-9&2&6\end{bmatrix}}$
has determinant +1, but is not orthogonal (its transpose is not its inverse), so it is not a rotation matrix.
• The 4 × 3 matrix
$M={\begin{bmatrix}0.5&-0.1&0.7\\0.1&0.5&-0.5\\-0.7&0.5&0.5\\-0.5&-0.7&-0.1\end{bmatrix}}$
is not square, and so cannot be a rotation matrix; yet MᵀM yields a 3 × 3 identity matrix (the columns are orthonormal).
• The 4 × 4 matrix
$Q=-I={\begin{bmatrix}-1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}}$
describes an isoclinic rotation in four dimensions, a rotation through equal angles (180°) in two orthogonal planes.
• The 5 × 5 rotation matrix
$Q={\begin{bmatrix}0&-1&0&0&0\\1&0&0&0&0\\0&0&-1&0&0\\0&0&0&-1&0\\0&0&0&0&1\end{bmatrix}}$
rotates vectors in the plane of the first two coordinate axes 90°, rotates vectors in the plane of the next two axes 180°, and leaves the last coordinate axis unmoved.
Geometry
In Euclidean geometry, a rotation is an example of an isometry, a transformation that moves points without changing the distances between them. Rotations are distinguished from other isometries by two additional properties: they leave (at least) one point fixed, and they leave "handedness" unchanged. In contrast, a translation moves every point, a reflection exchanges left- and right-handed ordering, a glide reflection does both, and an improper rotation combines a change in handedness with a normal rotation.
If a fixed point is taken as the origin of a Cartesian coordinate system, then every point can be given coordinates as a displacement from the origin. Thus one may work with the vector space of displacements instead of the points themselves. Now suppose (p1, ..., pn) are the coordinates of the vector p from the origin O to point P. Choose an orthonormal basis for our coordinates; then the squared distance to P, by Pythagoras, is
$d^{2}(O,P)=\|\mathbf {p} \|^{2}=\sum _{r=1}^{n}p_{r}^{2}$
which can be computed using the matrix multiplication
$\|\mathbf {p} \|^{2}={\begin{bmatrix}p_{1}\cdots p_{n}\end{bmatrix}}{\begin{bmatrix}p_{1}\\\vdots \\p_{n}\end{bmatrix}}=\mathbf {p} ^{\mathsf {T}}\mathbf {p} .$
A geometric rotation transforms lines to lines, and preserves ratios of distances between points. From these properties it can be shown that a rotation is a linear transformation of the vectors, and thus can be written in matrix form, Qp. The fact that a rotation preserves, not just ratios, but distances themselves, is stated as
$\mathbf {p} ^{\mathsf {T}}\mathbf {p} =(Q\mathbf {p} )^{\mathsf {T}}(Q\mathbf {p} ),$
or
${\begin{aligned}\mathbf {p} ^{\mathsf {T}}I\mathbf {p} &{}=\left(\mathbf {p} ^{\mathsf {T}}Q^{\mathsf {T}}\right)(Q\mathbf {p} )\\&{}=\mathbf {p} ^{\mathsf {T}}\left(Q^{\mathsf {T}}Q\right)\mathbf {p} .\end{aligned}}$
Because this equation holds for all vectors, p, one concludes that every rotation matrix, Q, satisfies the orthogonality condition,
$Q^{\mathsf {T}}Q=I.$
Rotations preserve handedness because they cannot change the ordering of the axes, which implies the special matrix condition,
$\det Q=+1.$
Equally important, it can be shown that any matrix satisfying these two conditions acts as a rotation.
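These two conditions, QᵀQ = I and det Q = +1, give a practical test for whether a given matrix is a rotation. A sketch (the helper name `is_rotation_matrix` is introduced here), exercised on two of the example matrices from the list above:

```python
def is_rotation_matrix(Q, tol=1e-9):
    """Check Q^T Q = I and det Q = +1 for a 3x3 matrix given as nested lists."""
    n = 3
    for i in range(n):
        for j in range(n):
            dot = sum(Q[k][i] * Q[k][j] for k in range(n))   # (Q^T Q)[i][j]
            if abs(dot - (1 if i == j else 0)) > tol:
                return False
    det = (Q[0][0] * (Q[1][1] * Q[2][2] - Q[1][2] * Q[2][1])
         - Q[0][1] * (Q[1][0] * Q[2][2] - Q[1][2] * Q[2][0])
         + Q[0][2] * (Q[1][0] * Q[2][1] - Q[1][1] * Q[2][0]))
    return abs(det - 1) < tol

# Even permutation: orthogonal with determinant +1, hence a rotation.
assert is_rotation_matrix([[0, 0, 1], [1, 0, 0], [0, 1, 0]])

# Determinant +1 but not orthogonal, hence not a rotation.
assert not is_rotation_matrix([[3, -4, 1], [5, 3, -7], [-9, 2, 6]])
```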
Multiplication
The inverse of a rotation matrix is its transpose, which is also a rotation matrix:
${\begin{aligned}\left(Q^{\mathsf {T}}\right)^{\mathsf {T}}\left(Q^{\mathsf {T}}\right)&=QQ^{\mathsf {T}}=I\\\det Q^{\mathsf {T}}&=\det Q=+1.\end{aligned}}$
The product of two rotation matrices is a rotation matrix:
${\begin{aligned}\left(Q_{1}Q_{2}\right)^{\mathsf {T}}\left(Q_{1}Q_{2}\right)&=Q_{2}^{\mathsf {T}}\left(Q_{1}^{\mathsf {T}}Q_{1}\right)Q_{2}=I\\\det \left(Q_{1}Q_{2}\right)&=\left(\det Q_{1}\right)\left(\det Q_{2}\right)=+1.\end{aligned}}$
For n > 2, multiplication of n × n rotation matrices is generally not commutative.
${\begin{aligned}Q_{1}&={\begin{bmatrix}0&-1&0\\1&0&0\\0&0&1\end{bmatrix}}&Q_{2}&={\begin{bmatrix}0&0&1\\0&1&0\\-1&0&0\end{bmatrix}}\\Q_{1}Q_{2}&={\begin{bmatrix}0&-1&0\\0&0&1\\-1&0&0\end{bmatrix}}&Q_{2}Q_{1}&={\begin{bmatrix}0&0&1\\1&0&0\\0&1&0\end{bmatrix}}.\end{aligned}}$
Noting that any identity matrix is a rotation matrix, and that matrix multiplication is associative, we may summarize all these properties by saying that the n × n rotation matrices form a group, which for n > 2 is non-abelian, called a special orthogonal group and denoted by SO(n), SO(n,R), SOn, or SOn(R). The group of n × n rotation matrices is isomorphic to the group of rotations in n-dimensional space, so multiplication of rotation matrices corresponds to composition of rotations, applied in right-to-left order of their corresponding matrices.
Ambiguities
The interpretation of a rotation matrix can be subject to many ambiguities.
In most cases the effect of the ambiguity is equivalent to inverting the rotation matrix (for these orthogonal matrices, equivalently, taking its transpose).
Alias or alibi (passive or active) transformation
The coordinates of a point P may change due to either a rotation of the coordinate system CS (alias), or a rotation of the point P (alibi). In the latter case, the rotation of P also produces a rotation of the vector v representing P. In other words, either P and v are fixed while CS rotates (alias), or CS is fixed while P and v rotate (alibi). Any given rotation can be legitimately described both ways, as vectors and coordinate systems actually rotate with respect to each other, about the same axis but in opposite directions. Throughout this article, we chose the alibi approach to describe rotations. For instance,
$R(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}$
represents a counterclockwise rotation of a vector v by an angle θ, or a rotation of CS by the same angle but in the opposite direction (i.e. clockwise). Alibi and alias transformations are also known as active and passive transformations, respectively.
Pre-multiplication or post-multiplication
The same point P can be represented either by a column vector v or a row vector w. Rotation matrices can either pre-multiply column vectors (Rv), or post-multiply row vectors (wR). However, Rv produces a rotation in the opposite direction with respect to wR. Throughout this article, rotations produced on column vectors are described by means of a pre-multiplication. To obtain exactly the same rotation (i.e. the same final coordinates of point P), the equivalent row vector must be post-multiplied by the transpose of R (i.e. wRᵀ).
Right- or left-handed coordinates
The matrix and the vector can be represented with respect to a right-handed or left-handed coordinate system. Throughout the article, we assumed a right-handed orientation, unless otherwise specified.
Vectors or forms
The vector space has a dual space of linear forms, and the matrix can act on either vectors or forms.
Decompositions
Independent planes
Consider the 3 × 3 rotation matrix
$Q={\begin{bmatrix}0.36&0.48&-0.80\\-0.80&0.60&0.00\\0.48&0.64&0.60\end{bmatrix}}.$
If Q acts in a certain direction, v, purely as a scaling by a factor λ, then we have
$Q\mathbf {v} =\lambda \mathbf {v} ,$
so that
$\mathbf {0} =(\lambda I-Q)\mathbf {v} .$
Thus λ is a root of the characteristic polynomial for Q,
${\begin{aligned}0&{}=\det(\lambda I-Q)\\&{}=\lambda ^{3}-{\tfrac {39}{25}}\lambda ^{2}+{\tfrac {39}{25}}\lambda -1\\&{}=(\lambda -1)\left(\lambda ^{2}-{\tfrac {14}{25}}\lambda +1\right).\end{aligned}}$
Two features are noteworthy. First, one of the roots (or eigenvalues) is 1, which tells us that some direction is unaffected by the matrix. For rotations in three dimensions, this is the axis of the rotation (a concept that has no meaning in any other dimension). Second, the other two roots are a pair of complex conjugates, whose product is 1 (the constant term of the quadratic), and whose sum is 2 cos θ (the negated linear term). This factorization is of interest for 3 × 3 rotation matrices because the same thing occurs for all of them. (As special cases, for a null rotation the "complex conjugates" are both 1, and for a 180° rotation they are both −1.) Furthermore, a similar factorization holds for any n × n rotation matrix. If the dimension, n, is odd, there will be a "dangling" eigenvalue of 1; and for any dimension the rest of the polynomial factors into quadratic terms like the one here (with the two special cases noted). We are guaranteed that the characteristic polynomial will have degree n and thus n eigenvalues. And since a rotation matrix commutes with its transpose, it is a normal matrix, so can be diagonalized. We conclude that every rotation matrix, when expressed in a suitable coordinate system, partitions into independent rotations of two-dimensional subspaces, at most n/2 of them.
The sum of the entries on the main diagonal of a matrix is called the trace; it does not change if we reorient the coordinate system, and always equals the sum of the eigenvalues. This has the convenient implication for 2 × 2 and 3 × 3 rotation matrices that the trace reveals the angle of rotation, θ, in the two-dimensional space (or subspace). For a 2 × 2 matrix the trace is 2 cos θ, and for a 3 × 3 matrix it is 1 + 2 cos θ. In the three-dimensional case, the subspace consists of all vectors perpendicular to the rotation axis (the invariant direction, with eigenvalue 1). Thus we can extract from any 3 × 3 rotation matrix a rotation axis and an angle, and these completely determine the rotation.
Sequential angles
The constraints on a 2 × 2 rotation matrix imply that it must have the form
$Q={\begin{bmatrix}a&-b\\b&a\end{bmatrix}}$
with a² + b² = 1. Therefore, we may set a = cos θ and b = sin θ, for some angle θ. To solve for θ it is not enough to look at a alone or b alone; we must consider both together to place the angle in the correct quadrant, using a two-argument arctangent function.
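The two-argument arctangent is exactly what `math.atan2` provides; it recovers θ in every quadrant, which a one-argument arctangent of b/a cannot do. A short check:

```python
import math

# Recover theta from a 2x2 rotation matrix [[a, -b], [b, a]]: atan2 uses both
# entries to place the angle in the correct quadrant.
for theta in (0.3, 2.5, -2.9):
    a, b = math.cos(theta), math.sin(theta)
    assert abs(math.atan2(b, a) - theta) < 1e-12
```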
Now consider the first column of a 3 × 3 rotation matrix,
${\begin{bmatrix}a\\b\\c\end{bmatrix}}.$
In general a² + b² will not equal 1, but rather some value r² < 1; nevertheless, a slight variation of the previous computation yields a so-called Givens rotation that transforms the column to
${\begin{bmatrix}r\\0\\c\end{bmatrix}},$
zeroing b. This acts on the subspace spanned by the x- and y-axes. We can then repeat the process for the xz-subspace to zero c. Acting on the full matrix, these two rotations produce the schematic form
$Q_{xz}Q_{xy}Q={\begin{bmatrix}1&0&0\\0&\ast &\ast \\0&\ast &\ast \end{bmatrix}}.$
Shifting attention to the second column, a Givens rotation of the yz-subspace can now zero the z value. This brings the full matrix to the form
$Q_{yz}Q_{xz}Q_{xy}Q={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}},$
which is an identity matrix. Thus we have decomposed Q as
$Q=Q_{xy}^{-1}Q_{xz}^{-1}Q_{yz}^{-1}.$
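The Givens step used above can be sketched numerically: a rotation in one coordinate plane is chosen to zero a single component, and two such steps drive a unit first column to (1, 0, 0). The helper names `givens` and `matvec` are introduced here; the test column is the first column of the 3 × 3 example matrix Q used earlier.

```python
import math

def givens(i, j, a, b, n=3):
    """n x n rotation in the (i, j) coordinate plane sending the pair (a, b) to (r, 0)."""
    r = math.hypot(a, b)
    c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
    G = [[float(p == q) for q in range(n)] for p in range(n)]
    G[i][i] = G[j][j] = c
    G[i][j], G[j][i] = s, -s
    return G

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v)))

# First column of the 3x3 example rotation matrix Q (a unit vector).
v = (0.36, -0.80, 0.48)

G1 = givens(0, 1, v[0], v[1])       # zero the second component
v1 = matvec(G1, v)
G2 = givens(0, 2, v1[0], v1[2])     # zero the third component
v2 = matvec(G2, v1)

# A unit column is driven to (1, 0, 0), as in the schematic reduction.
assert abs(v2[0] - 1) < 1e-9 and abs(v2[1]) < 1e-9 and abs(v2[2]) < 1e-9
```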
An n × n rotation matrix will have (n − 1) + (n − 2) + ⋯ + 2 + 1, or
$\sum _{k=1}^{n-1}k={\frac {1}{2}}n(n-1)$
entries below the diagonal that must be zeroed. We can zero them by extending the same idea of stepping through the columns with a series of rotations in a fixed sequence of planes. We conclude that the set of n × n rotation matrices, each of which has n² entries, can be parameterized by 1/2n(n − 1) angles.
The 24 possible sequences of rotation axes (trailing w: rotations about fixed world axes; trailing b: rotations about moving body axes):

xzxw, xzyw, xyxw, xyzw
yxyw, yxzw, yzyw, yzxw
zyzw, zyxw, zxzw, zxyw
xzxb, yzxb, xyxb, zyxb
yxyb, zxyb, yzyb, xzyb
zyzb, xyzb, zxzb, yxzb
In three dimensions this restates in matrix form an observation made by Euler, so mathematicians call the ordered sequence of three angles Euler angles. However, the situation is somewhat more complicated than we have so far indicated. Despite the small dimension, we actually have considerable freedom in the sequence of axis pairs we use; and we also have some freedom in the choice of angles. Thus we find many different conventions employed when three-dimensional rotations are parameterized for physics, or medicine, or chemistry, or other disciplines. When we include the option of world axes or body axes, 24 different sequences are possible. And while some disciplines call any sequence Euler angles, others give different names (Cardano, Tait–Bryan, roll-pitch-yaw) to different sequences.
One reason for the large number of options is that, as noted previously, rotations in three dimensions (and higher) do not commute. If we reverse a given sequence of rotations, we get a different outcome. This also implies that we cannot compose two rotations by adding their corresponding angles. Thus Euler angles are not vectors, despite a similarity in appearance as a triplet of numbers.
Nested dimensions
A 3 × 3 rotation matrix such as
$Q_{3\times 3}={\begin{bmatrix}\cos \theta &-\sin \theta &{\color {CadetBlue}0}\\\sin \theta &\cos \theta &{\color {CadetBlue}0}\\{\color {CadetBlue}0}&{\color {CadetBlue}0}&{\color {CadetBlue}1}\end{bmatrix}}$
suggests a 2 × 2 rotation matrix,
$Q_{2\times 2}={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}},$
is embedded in the upper left corner:
$Q_{3\times 3}=\left[{\begin{matrix}Q_{2\times 2}&\mathbf {0} \\\mathbf {0} ^{\mathsf {T}}&1\end{matrix}}\right].$
This is no illusion; not just one, but many, copies of n-dimensional rotations are found within (n + 1)-dimensional rotations, as subgroups. Each embedding leaves one direction fixed, which in the case of 3 × 3 matrices is the rotation axis. For example, we have
${\begin{aligned}Q_{\mathbf {x} }(\theta )&={\begin{bmatrix}{\color {CadetBlue}1}&{\color {CadetBlue}0}&{\color {CadetBlue}0}\\{\color {CadetBlue}0}&\cos \theta &-\sin \theta \\{\color {CadetBlue}0}&\sin \theta &\cos \theta \end{bmatrix}},\\[8px]Q_{\mathbf {y} }(\theta )&={\begin{bmatrix}\cos \theta &{\color {CadetBlue}0}&\sin \theta \\{\color {CadetBlue}0}&{\color {CadetBlue}1}&{\color {CadetBlue}0}\\-\sin \theta &{\color {CadetBlue}0}&\cos \theta \end{bmatrix}},\\[8px]Q_{\mathbf {z} }(\theta )&={\begin{bmatrix}\cos \theta &-\sin \theta &{\color {CadetBlue}0}\\\sin \theta &\cos \theta &{\color {CadetBlue}0}\\{\color {CadetBlue}0}&{\color {CadetBlue}0}&{\color {CadetBlue}1}\end{bmatrix}},\end{aligned}}$
fixing the x-axis, the y-axis, and the z-axis, respectively. The rotation axis need not be a coordinate axis; if u = (x,y,z) is a unit vector in the desired direction, then
${\begin{aligned}Q_{\mathbf {u} }(\theta )&={\begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix}}\sin \theta +\left(I-\mathbf {u} \mathbf {u} ^{\mathsf {T}}\right)\cos \theta +\mathbf {u} \mathbf {u} ^{\mathsf {T}}\\[8px]&={\begin{bmatrix}\left(1-x^{2}\right)c_{\theta }+x^{2}&-zs_{\theta }-xyc_{\theta }+xy&ys_{\theta }-xzc_{\theta }+xz\\zs_{\theta }-xyc_{\theta }+xy&\left(1-y^{2}\right)c_{\theta }+y^{2}&-xs_{\theta }-yzc_{\theta }+yz\\-ys_{\theta }-xzc_{\theta }+xz&xs_{\theta }-yzc_{\theta }+yz&\left(1-z^{2}\right)c_{\theta }+z^{2}\end{bmatrix}}\\[8px]&={\begin{bmatrix}x^{2}(1-c_{\theta })+c_{\theta }&xy(1-c_{\theta })-zs_{\theta }&xz(1-c_{\theta })+ys_{\theta }\\xy(1-c_{\theta })+zs_{\theta }&y^{2}(1-c_{\theta })+c_{\theta }&yz(1-c_{\theta })-xs_{\theta }\\xz(1-c_{\theta })-ys_{\theta }&yz(1-c_{\theta })+xs_{\theta }&z^{2}(1-c_{\theta })+c_{\theta }\end{bmatrix}},\end{aligned}}$
where cθ = cos θ, sθ = sin θ, is a rotation by angle θ leaving axis u fixed.
A direction in (n + 1)-dimensional space will be a unit magnitude vector, which we may consider a point on a generalized sphere, Sn. Thus it is natural to describe the rotation group SO(n + 1) as combining SO(n) and Sn. A suitable formalism is the fiber bundle,
$SO(n)\hookrightarrow SO(n+1)\to S^{n},$
where for every direction in the base space, Sn, the fiber over it in the total space, SO(n + 1), is a copy of the fiber space, SO(n), namely the rotations that keep that direction fixed.
Thus we can build an n × n rotation matrix by starting with a 2 × 2 matrix, aiming its fixed axis on S2 (the ordinary sphere in three-dimensional space), aiming the resulting rotation on S3, and so on up through Sn−1. A point on Sn can be selected using n numbers, so we again have 1/2n(n − 1) numbers to describe any n × n rotation matrix.
In fact, we can view the sequential angle decomposition, discussed previously, as reversing this process. The composition of n − 1 Givens rotations brings the first column (and row) to (1, 0, ..., 0), so that the remainder of the matrix is a rotation matrix of dimension one less, embedded so as to leave (1, 0, ..., 0) fixed.
Skew parameters via Cayley's formula
Main articles: Cayley transform and Skew-symmetric matrix
When an n × n rotation matrix Q does not have −1 as an eigenvalue (so that none of the planar rotations it comprises are 180° rotations), Q + I is an invertible matrix. Most rotation matrices fit this description, and for them it can be shown that (Q − I)(Q + I)−1 is a skew-symmetric matrix, A. Thus Aᵀ = −A; and since the diagonal is necessarily zero, and since the upper triangle determines the lower one, A contains 1/2n(n − 1) independent numbers.
Conveniently, I − A is invertible whenever A is skew-symmetric; thus we can recover the original matrix using the Cayley transform,
$A\mapsto (I+A)(I-A)^{-1},$
which maps any skew-symmetric matrix A to a rotation matrix. In fact, aside from the noted exceptions, we can produce any rotation matrix in this way. Although in practical applications we can hardly afford to ignore 180° rotations, the Cayley transform is still a potentially useful tool, giving a parameterization of most rotation matrices without trigonometric functions.
In three dimensions, for example, we have (Cayley 1846)
${\begin{aligned}&{\begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix}}\mapsto \\[3pt]\quad {\frac {1}{1+x^{2}+y^{2}+z^{2}}}&{\begin{bmatrix}1+x^{2}-y^{2}-z^{2}&2xy-2z&2y+2xz\\2xy+2z&1-x^{2}+y^{2}-z^{2}&2yz-2x\\2xz-2y&2x+2yz&1-x^{2}-y^{2}+z^{2}\end{bmatrix}}.\end{aligned}}$
If we condense the skew entries into a vector, (x,y,z), then we produce a 90° rotation around the x-axis for (1, 0, 0), around the y-axis for (0, 1, 0), and around the z-axis for (0, 0, 1). The 180° rotations are just out of reach; for, in the limit as x → ∞, (x, 0, 0) does approach a 180° rotation around the x axis, and similarly for other directions.
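The closed-form 3D Cayley transform displayed above is straightforward to implement. A sketch (the helper name `cayley` is introduced here), checking the claim that (1, 0, 0) yields a 90° rotation around the x-axis:

```python
def cayley(x, y, z):
    """Closed-form Cayley transform of the skew matrix built from (x, y, z)."""
    n = 1 + x*x + y*y + z*z
    return [[(1 + x*x - y*y - z*z) / n, (2*x*y - 2*z) / n,         (2*y + 2*x*z) / n],
            [(2*x*y + 2*z) / n,         (1 - x*x + y*y - z*z) / n, (2*y*z - 2*x) / n],
            [(2*x*z - 2*y) / n,         (2*x + 2*y*z) / n,         (1 - x*x - y*y + z*z) / n]]

# (1, 0, 0) gives a 90-degree rotation about the x-axis:
R = cayley(1, 0, 0)
expected = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]
assert all(abs(R[i][j] - expected[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```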
Decomposition into shears
For the 2D case, a rotation matrix can be decomposed into three shear matrices (Paeth 1986):
${\begin{aligned}R(\theta )&{}={\begin{bmatrix}1&-\tan {\frac {\theta }{2}}\\0&1\end{bmatrix}}{\begin{bmatrix}1&0\\\sin \theta &1\end{bmatrix}}{\begin{bmatrix}1&-\tan {\frac {\theta }{2}}\\0&1\end{bmatrix}}\end{aligned}}$
This is useful, for instance, in computer graphics, since shears can be implemented with fewer multiplication instructions than rotating a bitmap directly. On modern computers, this may not matter, but it can be relevant for very old or low-end microprocessors.
A rotation can also be written as two shears and scaling (Daubechies & Sweldens 1998):
${\begin{aligned}R(\theta )&{}={\begin{bmatrix}1&0\\\tan \theta &1\end{bmatrix}}{\begin{bmatrix}1&-\sin \theta \cos \theta \\0&1\end{bmatrix}}{\begin{bmatrix}\cos \theta &0\\0&{\frac {1}{\cos \theta }}\end{bmatrix}}\end{aligned}}$
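Both factorizations are easy to verify numerically (a sketch using NumPy; the angle is an arbitrary test value, assumed not an odd multiple of π so that the tangent factors are finite):

```python
import numpy as np

theta = 0.3  # arbitrary test angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Three shears (Paeth): horizontal, vertical, horizontal.
t = np.tan(theta / 2)
shear_h = np.array([[1.0, -t], [0.0, 1.0]])
shear_v = np.array([[1.0, 0.0], [np.sin(theta), 1.0]])
three_shears = shear_h @ shear_v @ shear_h

# Two shears and a scaling (Daubechies & Sweldens).
S1 = np.array([[1.0, 0.0], [np.tan(theta), 1.0]])
S2 = np.array([[1.0, -np.sin(theta) * np.cos(theta)], [0.0, 1.0]])
D  = np.diag([np.cos(theta), 1.0 / np.cos(theta)])
two_shears_scale = S1 @ S2 @ D
```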
Group theory
Below follow some basic facts about the role of the collection of all rotation matrices of a fixed dimension (here mostly 3) in mathematics and particularly in physics where rotational symmetry is a requirement of every truly fundamental law (due to the assumption of isotropy of space), and where the same symmetry, when present, is a simplifying property of many problems of less fundamental nature. Examples abound in classical mechanics and quantum mechanics. Knowledge of the part of the solutions pertaining to this symmetry applies (with qualifications) to all such problems and it can be factored out of a specific problem at hand, thus reducing its complexity. A prime example – in mathematics and physics – would be the theory of spherical harmonics. Their role in the group theory of the rotation groups is that of being a representation space for the entire set of finite-dimensional irreducible representations of the rotation group SO(3). For this topic, see Rotation group SO(3) § Spherical harmonics.
The main articles listed in each subsection are referred to for more detail.
Lie group
Main articles: Special orthogonal group and Rotation group SO(3)
The n × n rotation matrices for each n form a group, the special orthogonal group, SO(n). This algebraic structure is coupled with a topological structure inherited from $\operatorname {GL} _{n}(\mathbb {R} )$ in such a way that the operations of multiplication and taking the inverse are analytic functions of the matrix entries. Thus SO(n) is for each n a Lie group. It is compact and connected, but not simply connected. It is also a semi-simple group, in fact a simple group with the exception SO(4).[6] The relevance of this is that all theorems and all machinery from the theory of analytic manifolds (analytic manifolds are in particular smooth manifolds) apply and the well-developed representation theory of compact semi-simple groups is ready for use.
Lie algebra
Main article: Rotation group SO(3) § Lie algebra
The Lie algebra so(n) of SO(n) is given by
${\mathfrak {so}}(n)={\mathfrak {o}}(n)=\left\{X\in M_{n}(\mathbb {R} )\mid X=-X^{\mathsf {T}}\right\},$
and is the space of skew-symmetric matrices of dimension n; see classical group, where o(n) is the Lie algebra of O(n), the orthogonal group. For reference, the most common basis for so(3) is
$L_{\mathbf {x} }={\begin{bmatrix}0&0&0\\0&0&-1\\0&1&0\end{bmatrix}},\quad L_{\mathbf {y} }={\begin{bmatrix}0&0&1\\0&0&0\\-1&0&0\end{bmatrix}},\quad L_{\mathbf {z} }={\begin{bmatrix}0&-1&0\\1&0&0\\0&0&0\end{bmatrix}}.$
Exponential map
Main articles: Rotation group SO(3) § Exponential map, and Matrix exponential
Connecting the Lie algebra to the Lie group is the exponential map, which is defined using the standard matrix exponential series for eA.[7] For any skew-symmetric matrix A, exp(A) is always a rotation matrix.[nb 3]
An important practical example is the 3 × 3 case. In rotation group SO(3), it is shown that one can identify every A ∈ so(3) with an Euler vector ω = θu, where u = (x, y, z) is a unit magnitude vector.
By the properties of the identification ${\mathfrak {so}}(3)\cong \mathbb {R} ^{3}$, u is in the null space of A. Thus, u is left invariant by exp(A) and is hence a rotation axis.
According to Rodrigues' rotation formula on matrix form, one obtains,
${\begin{aligned}\exp(A)&=\exp {\bigl (}\theta (\mathbf {u} \cdot \mathbf {L} ){\bigr )}\\&=\exp \left({\begin{bmatrix}0&-z\theta &y\theta \\z\theta &0&-x\theta \\-y\theta &x\theta &0\end{bmatrix}}\right)\\&=I+\sin \theta \ \mathbf {u} \cdot \mathbf {L} +(1-\cos \theta )(\mathbf {u} \cdot \mathbf {L} )^{2},\end{aligned}}$
where
$\mathbf {u} \cdot \mathbf {L} ={\begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix}}.$
This is the matrix for a rotation around axis u by the angle θ. For full detail, see exponential map SO(3).
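Rodrigues' formula gives a closed form for the matrix exponential with no infinite series. A minimal sketch (using NumPy; u is assumed to be a unit vector):

```python
import numpy as np

def rotation_from_axis_angle(u, theta):
    """exp(theta * (u . L)) via Rodrigues' formula; u must be a unit vector."""
    x, y, z = u
    uL = np.array([[0.0,  -z,   y],
                   [  z, 0.0,  -x],
                   [ -y,   x, 0.0]])
    return np.eye(3) + np.sin(theta) * uL + (1.0 - np.cos(theta)) * (uL @ uL)

u = np.array([1.0, 2.0, 2.0]) / 3.0  # unit axis
R = rotation_from_axis_angle(u, 0.7)
```

The result is orthogonal with determinant 1, leaves u fixed, and has trace 1 + 2 cos θ, as a rotation by θ should.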
Baker–Campbell–Hausdorff formula
Main articles: Baker–Campbell–Hausdorff formula and Rotation group SO(3) § Baker–Campbell–Hausdorff formula
The BCH formula provides an explicit expression for Z = log(eXeY) in terms of a series expansion of nested commutators of X and Y.[8] This general expansion unfolds as[nb 4]
$Z=C(X,Y)=X+Y+{\tfrac {1}{2}}[X,Y]+{\tfrac {1}{12}}{\bigl [}X,[X,Y]{\bigr ]}-{\tfrac {1}{12}}{\bigl [}Y,[X,Y]{\bigr ]}+\cdots .$
In the 3 × 3 case, the general infinite expansion has a compact form,[9]
$Z=\alpha X+\beta Y+\gamma [X,Y],$
for suitable trigonometric function coefficients, detailed in the Baker–Campbell–Hausdorff formula for SO(3).
As a group identity, the above holds for all faithful representations, including the doublet (spinor representation), which is simpler. The same explicit formula thus follows straightforwardly through Pauli matrices; see the 2 × 2 derivation for SU(2). For the general n × n case, one might use Ref.[10]
Spin group
Main articles: Spin group and Rotation group SO(3) § Connection between SO(3) and SU(2)
The Lie group of n × n rotation matrices, SO(n), is not simply connected, so Lie theory tells us it is a homomorphic image of a universal covering group. Often the covering group, which in this case is called the spin group denoted by Spin(n), is simpler and more natural to work with.[11]
In the case of planar rotations, SO(2) is topologically a circle, S1. Its universal covering group, Spin(2), is isomorphic to the real line, R, under addition. Whenever angles of arbitrary magnitude are used one is taking advantage of the convenience of the universal cover. Every 2 × 2 rotation matrix is produced by a countable infinity of angles, separated by integer multiples of 2π. Correspondingly, the fundamental group of SO(2) is isomorphic to the integers, Z.
In the case of spatial rotations, SO(3) is topologically equivalent to three-dimensional real projective space, RP3. Its universal covering group, Spin(3), is isomorphic to the 3-sphere, S3. Every 3 × 3 rotation matrix is produced by two opposite points on the sphere. Correspondingly, the fundamental group of SO(3) is isomorphic to the two-element group, Z2.
We can also describe Spin(3) as isomorphic to quaternions of unit norm under multiplication, or to certain 4 × 4 real matrices, or to 2 × 2 complex special unitary matrices, namely SU(2). The covering maps for the first and the last case are given by
$\mathbb {H} \supset \{q\in \mathbb {H} :\|q\|=1\}\ni w+\mathbf {i} x+\mathbf {j} y+\mathbf {k} z\mapsto {\begin{bmatrix}1-2y^{2}-2z^{2}&2xy-2zw&2xz+2yw\\2xy+2zw&1-2x^{2}-2z^{2}&2yz-2xw\\2xz-2yw&2yz+2xw&1-2x^{2}-2y^{2}\end{bmatrix}}\in \mathrm {SO} (3),$
and
$\mathrm {SU} (2)\ni {\begin{bmatrix}\alpha &\beta \\-{\overline {\beta }}&{\overline {\alpha }}\end{bmatrix}}\mapsto {\begin{bmatrix}{\frac {1}{2}}\left(\alpha ^{2}-\beta ^{2}+{\overline {\alpha ^{2}}}-{\overline {\beta ^{2}}}\right)&{\frac {i}{2}}\left(-\alpha ^{2}-\beta ^{2}+{\overline {\alpha ^{2}}}+{\overline {\beta ^{2}}}\right)&-\alpha \beta -{\overline {\alpha }}{\overline {\beta }}\\{\frac {i}{2}}\left(\alpha ^{2}-\beta ^{2}-{\overline {\alpha ^{2}}}+{\overline {\beta ^{2}}}\right)&{\frac {1}{2}}\left(\alpha ^{2}+\beta ^{2}+{\overline {\alpha ^{2}}}+{\overline {\beta ^{2}}}\right)&-i\left(+\alpha \beta -{\overline {\alpha }}{\overline {\beta }}\right)\\\alpha {\overline {\beta }}+{\overline {\alpha }}\beta &i\left(-\alpha {\overline {\beta }}+{\overline {\alpha }}\beta \right)&\alpha {\overline {\alpha }}-\beta {\overline {\beta }}\end{bmatrix}}\in \mathrm {SO} (3).$
For a detailed account of the SU(2)-covering and the quaternionic covering, see spin group SO(3).
Many features of these cases are the same for higher dimensions. The coverings are all two-to-one, with SO(n), n > 2, having fundamental group Z2. The natural setting for these groups is within a Clifford algebra. One type of action of the rotations is produced by a kind of "sandwich", denoted by qvq∗. More importantly in applications to physics, the corresponding spin representation of the Lie algebra sits inside the Clifford algebra. It can be exponentiated in the usual way to give rise to a 2-valued representation, also known as projective representation of the rotation group. This is the case with SO(3) and SU(2), where the 2-valued representation can be viewed as an "inverse" of the covering map. By properties of covering maps, the inverse can be chosen one-to-one as a local section, but not globally.
Infinitesimal rotations
Main article: Infinitesimal rotation matrix
The matrices in the Lie algebra are not themselves rotations; the skew-symmetric matrices are derivatives, proportional differences of rotations. An actual "differential rotation", or infinitesimal rotation matrix has the form
$I+A\,d\theta ,$
where dθ is vanishingly small and A ∈ so(n), for instance with A = Lx,
$dL_{x}={\begin{bmatrix}1&0&0\\0&1&-d\theta \\0&d\theta &1\end{bmatrix}}.$
The computation rules are as usual except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals.[12] It turns out that the order in which infinitesimal rotations are applied is irrelevant. To see this exemplified, consult infinitesimal rotations SO(3).
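A minimal numeric illustration (a sketch using NumPy; the step size d is an arbitrary small number standing in for dθ): composing two infinitesimal rotations in either order gives the same matrix up to terms of second order in d.

```python
import numpy as np

d = 1e-6  # small step standing in for the infinitesimal d-theta
Lx = np.array([[0.0, 0.0,  0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
Ly = np.array([[0.0, 0.0,  1.0], [0.0, 0.0,  0.0], [-1.0, 0.0, 0.0]])

A = np.eye(3) + d * Lx  # infinitesimal rotation about x
B = np.eye(3) + d * Ly  # infinitesimal rotation about y

# The two orders of application differ only at second order:
# AB - BA = d^2 (Lx Ly - Ly Lx) = d^2 Lz
diff = A @ B - B @ A
```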
Conversions
See also: Rotation formalisms in three dimensions § Conversion formulae between formalisms
We have seen the existence of several decompositions that apply in any dimension, namely independent planes, sequential angles, and nested dimensions. In all these cases we can either decompose a matrix or construct one. We have also given special attention to 3 × 3 rotation matrices, and these warrant further attention, in both directions (Stuelpnagel 1964).
Quaternion
Main article: Quaternions and spatial rotation
Given the unit quaternion q = w + xi + yj + zk, the equivalent pre-multiplied (to be used with column vectors) 3 × 3 rotation matrix is [13]
$Q={\begin{bmatrix}1-2y^{2}-2z^{2}&2xy-2zw&2xz+2yw\\2xy+2zw&1-2x^{2}-2z^{2}&2yz-2xw\\2xz-2yw&2yz+2xw&1-2x^{2}-2y^{2}\end{bmatrix}}.$
Now every quaternion component appears multiplied by two in a term of degree two, and if all such terms are zero what is left is an identity matrix. This leads to an efficient, robust conversion from any quaternion – whether unit or non-unit – to a 3 × 3 rotation matrix. Given:
${\begin{aligned}n&=w^{2}+x^{2}+y^{2}+z^{2}\\s&={\begin{cases}0&{\text{if }}n=0\\{\frac {2}{n}}&{\text{otherwise}}\end{cases}}\\\end{aligned}}$
we can calculate
$Q={\begin{bmatrix}1-s(yy+zz)&s(xy-wz)&s(xz+wy)\\s(xy+wz)&1-s(xx+zz)&s(yz-wx)\\s(xz-wy)&s(yz+wx)&1-s(xx+yy)\end{bmatrix}}$
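A direct transcription of this conversion (a sketch using NumPy; the function name is illustrative) shows the homogeneity: scaling the quaternion leaves the rotation matrix unchanged.

```python
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Rotation matrix from a quaternion, unit or not (the zero quaternion
    gives the identity matrix)."""
    n = w*w + x*x + y*y + z*z
    s = 0.0 if n == 0.0 else 2.0 / n
    return np.array([
        [1 - s*(y*y + z*z), s*(x*y - w*z),     s*(x*z + w*y)],
        [s*(x*y + w*z),     1 - s*(x*x + z*z), s*(y*z - w*x)],
        [s*(x*z - w*y),     s*(y*z + w*x),     1 - s*(x*x + y*y)],
    ])

# Scaling the quaternion leaves the rotation unchanged (homogeneous coordinates).
Q1 = quat_to_matrix(1.0, 2.0, 3.0, 4.0)
Q2 = quat_to_matrix(2.0, 4.0, 6.0, 8.0)
```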
Freed from the demand for a unit quaternion, we find that nonzero quaternions act as homogeneous coordinates for 3 × 3 rotation matrices. The Cayley transform, discussed earlier, is obtained by scaling the quaternion so that its w component is 1. For a 180° rotation around any axis, w will be zero, which explains the Cayley limitation.
The sum of the entries along the main diagonal (the trace), plus one, equals 4 − 4(x2 + y2 + z2), which is 4w2. Thus we can write the trace itself as 2w2 + 2w2 − 1; and from the previous version of the matrix we see that the diagonal entries themselves have the same form: 2x2 + 2w2 − 1, 2y2 + 2w2 − 1, and 2z2 + 2w2 − 1. So we can easily compare the magnitudes of all four quaternion components using the matrix diagonal. We can, in fact, obtain all four magnitudes using sums and square roots, and choose consistent signs using the skew-symmetric part of the off-diagonal entries:
${\begin{aligned}t&=\operatorname {tr} Q=Q_{xx}+Q_{yy}+Q_{zz}\quad ({\text{the trace of }}Q)\\r&={\sqrt {1+t}}\\w&={\tfrac {1}{2}}r\\x&=\operatorname {sgn} \left(Q_{zy}-Q_{yz}\right)\left|{\tfrac {1}{2}}{\sqrt {1+Q_{xx}-Q_{yy}-Q_{zz}}}\right|\\y&=\operatorname {sgn} \left(Q_{xz}-Q_{zx}\right)\left|{\tfrac {1}{2}}{\sqrt {1-Q_{xx}+Q_{yy}-Q_{zz}}}\right|\\z&=\operatorname {sgn} \left(Q_{yx}-Q_{xy}\right)\left|{\tfrac {1}{2}}{\sqrt {1-Q_{xx}-Q_{yy}+Q_{zz}}}\right|\end{aligned}}$
Alternatively, use a single square root and division
${\begin{aligned}t&=\operatorname {tr} Q=Q_{xx}+Q_{yy}+Q_{zz}\\r&={\sqrt {1+t}}\\s&={\tfrac {1}{2r}}\\w&={\tfrac {1}{2}}r\\x&=\left(Q_{zy}-Q_{yz}\right)s\\y&=\left(Q_{xz}-Q_{zx}\right)s\\z&=\left(Q_{yx}-Q_{xy}\right)s\end{aligned}}$
This is numerically stable so long as the trace, t, is not negative; otherwise, we risk dividing by (nearly) zero. In that case, suppose Qxx is the largest diagonal entry, so x will have the largest magnitude (the other cases are derived by cyclic permutation); then the following is safe.
${\begin{aligned}r&={\sqrt {1+Q_{xx}-Q_{yy}-Q_{zz}}}\\s&={\tfrac {1}{2r}}\\w&=\left(Q_{zy}-Q_{yz}\right)s\\x&={\tfrac {1}{2}}r\\y&=\left(Q_{xy}+Q_{yx}\right)s\\z&=\left(Q_{zx}+Q_{xz}\right)s\end{aligned}}$
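The two branches above can be combined into one routine, with the remaining diagonal cases obtained by cyclic permutation of the indices (a sketch using NumPy; the function name is illustrative):

```python
import numpy as np

def matrix_to_quat(Q):
    """Extract (w, x, y, z) from a rotation matrix, branching on the trace
    versus the largest diagonal entry to keep the divisor away from zero."""
    t = np.trace(Q)
    if t > 0:
        r = np.sqrt(1.0 + t)
        s = 0.5 / r
        return (0.5 * r,
                (Q[2, 1] - Q[1, 2]) * s,
                (Q[0, 2] - Q[2, 0]) * s,
                (Q[1, 0] - Q[0, 1]) * s)
    # Trace non-positive: branch on the largest diagonal entry; the text's
    # Qxx-largest case is i = 0, the others follow by cyclic permutation.
    i = int(np.argmax(np.diag(Q)))
    j, k = (i + 1) % 3, (i + 2) % 3
    r = np.sqrt(1.0 + Q[i, i] - Q[j, j] - Q[k, k])
    s = 0.5 / r
    q = [0.0, 0.0, 0.0, 0.0]          # (w, x, y, z)
    q[0] = (Q[k, j] - Q[j, k]) * s
    q[1 + i] = 0.5 * r
    q[1 + j] = (Q[j, i] + Q[i, j]) * s
    q[1 + k] = (Q[k, i] + Q[i, k]) * s
    return tuple(q)
```

For example, the 180° rotation about the x-axis, diag(1, −1, −1), has trace −1 and is handled safely by the second branch.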
If the matrix contains significant error, such as accumulated numerical error, we may construct a symmetric 4 × 4 matrix,
$K={\frac {1}{3}}{\begin{bmatrix}Q_{xx}-Q_{yy}-Q_{zz}&Q_{yx}+Q_{xy}&Q_{zx}+Q_{xz}&Q_{zy}-Q_{yz}\\Q_{yx}+Q_{xy}&Q_{yy}-Q_{xx}-Q_{zz}&Q_{zy}+Q_{yz}&Q_{xz}-Q_{zx}\\Q_{zx}+Q_{xz}&Q_{zy}+Q_{yz}&Q_{zz}-Q_{xx}-Q_{yy}&Q_{yx}-Q_{xy}\\Q_{zy}-Q_{yz}&Q_{xz}-Q_{zx}&Q_{yx}-Q_{xy}&Q_{xx}+Q_{yy}+Q_{zz}\end{bmatrix}},$
and find the eigenvector, (x, y, z, w), of its largest magnitude eigenvalue. (If Q is truly a rotation matrix, that value will be 1.) The quaternion so obtained will correspond to the rotation matrix closest to the given matrix (Bar-Itzhack 2000). (Note: the formulation of the cited article is post-multiplied and works with row vectors.)
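Building K and taking its dominant eigenvector is a few lines with a symmetric eigensolver (a sketch using NumPy; the function name is illustrative):

```python
import numpy as np

def nearest_quaternion(Q):
    """Quaternion (x, y, z, w) of the rotation nearest to Q, via the dominant
    eigenvector of the symmetric 4x4 matrix K (Bar-Itzhack method)."""
    K = np.array([
        [Q[0,0]-Q[1,1]-Q[2,2], Q[1,0]+Q[0,1],        Q[2,0]+Q[0,2],        Q[2,1]-Q[1,2]],
        [Q[1,0]+Q[0,1],        Q[1,1]-Q[0,0]-Q[2,2], Q[2,1]+Q[1,2],        Q[0,2]-Q[2,0]],
        [Q[2,0]+Q[0,2],        Q[2,1]+Q[1,2],        Q[2,2]-Q[0,0]-Q[1,1], Q[1,0]-Q[0,1]],
        [Q[2,1]-Q[1,2],        Q[0,2]-Q[2,0],        Q[1,0]-Q[0,1],        Q[0,0]+Q[1,1]+Q[2,2]],
    ]) / 3.0
    vals, vecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    return vecs[:, -1], vals[-1]     # eigenvector of the largest eigenvalue

vec, val = nearest_quaternion(np.eye(3))
```

For a true rotation matrix such as the identity, the largest eigenvalue is 1 and the eigenvector is the identity quaternion (up to an overall sign).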
Polar decomposition
If the n × n matrix M is nonsingular, its columns are linearly independent vectors; thus the Gram–Schmidt process can adjust them to be an orthonormal basis. Stated in terms of numerical linear algebra, we convert M to an orthogonal matrix, Q, using QR decomposition. However, we often prefer a Q closest to M, which this method does not accomplish. For that, the tool we want is the polar decomposition (Fan & Hoffman 1955; Higham 1989).
To measure closeness, we may use any matrix norm invariant under orthogonal transformations. A convenient choice is the Frobenius norm, ‖Q − M‖F, squared, which is the sum of the squares of the element differences. Writing this in terms of the trace, Tr, our goal is,
Find Q minimizing Tr( (Q − M)T(Q − M) ), subject to QTQ = I.
Though written in matrix terms, the objective function is just a quadratic polynomial. We can minimize it in the usual way, by finding where its derivative is zero. For a 3 × 3 matrix, the orthogonality constraint implies six scalar equalities that the entries of Q must satisfy. To incorporate the constraint(s), we may employ a standard technique, Lagrange multipliers, assembled as a symmetric matrix, Y. Thus our method is:
Differentiate Tr( (Q − M)T(Q − M) + (QTQ − I)Y ) with respect to (the entries of) Q, and equate to zero.
Consider a 2 × 2 example. Including constraints, we seek to minimize
${\begin{aligned}&\left(Q_{xx}-M_{xx}\right)^{2}+\left(Q_{xy}-M_{xy}\right)^{2}+\left(Q_{yx}-M_{yx}\right)^{2}+\left(Q_{yy}-M_{yy}\right)^{2}\\&\quad {}+\left(Q_{xx}^{2}+Q_{yx}^{2}-1\right)Y_{xx}+\left(Q_{xy}^{2}+Q_{yy}^{2}-1\right)Y_{yy}+2\left(Q_{xx}Q_{xy}+Q_{yx}Q_{yy}\right)Y_{xy}.\end{aligned}}$
Taking the derivative with respect to Qxx, Qxy, Qyx, Qyy in turn, we assemble a matrix.
$2{\begin{bmatrix}Q_{xx}-M_{xx}+Q_{xx}Y_{xx}+Q_{xy}Y_{xy}&Q_{xy}-M_{xy}+Q_{xx}Y_{xy}+Q_{xy}Y_{yy}\\Q_{yx}-M_{yx}+Q_{yx}Y_{xx}+Q_{yy}Y_{xy}&Q_{yy}-M_{yy}+Q_{yx}Y_{xy}+Q_{yy}Y_{yy}\end{bmatrix}}$
In general, we obtain the equation
$0=2(Q-M)+2QY,$
so that
$M=Q(I+Y)=QS,$
where Q is orthogonal and S is symmetric. To ensure a minimum, the Y matrix (and hence S) must be positive definite. Linear algebra calls QS the polar decomposition of M, with S the positive square root of S2 = MTM.
$S^{2}=\left(Q^{\mathsf {T}}M\right)^{\mathsf {T}}\left(Q^{\mathsf {T}}M\right)=M^{\mathsf {T}}QQ^{\mathsf {T}}M=M^{\mathsf {T}}M$
When M is non-singular, the Q and S factors of the polar decomposition are uniquely determined. However, the determinant of S is positive because S is positive definite, so Q inherits the sign of the determinant of M. That is, Q is only guaranteed to be orthogonal, not a rotation matrix. This is unavoidable; an M with negative determinant has no uniquely defined closest rotation matrix.
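In practice the polar factors are commonly computed from a singular value decomposition, M = UΣVᵀ, giving Q = UVᵀ and S = VΣVᵀ; this SVD route is a standard computational technique, not part of the derivation above (a sketch using NumPy, with an arbitrary nonsingular test matrix):

```python
import numpy as np

def polar_decomposition(M):
    """Polar factors M = Q S via the SVD: Q = U V^T is orthogonal, and
    S = V Sigma V^T is the symmetric positive square root of M^T M."""
    U, sigma, Vt = np.linalg.svd(M)
    Q = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return Q, S

# A fixed nonsingular test matrix (arbitrary choice).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
Q, S = polar_decomposition(M)
```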
Axis and angle
Main article: Axis–angle representation
To efficiently construct a rotation matrix Q from an angle θ and a unit axis u, we can take advantage of symmetry and skew-symmetry within the entries. If x, y, and z are the components of the unit vector representing the axis, and
${\begin{aligned}c&=\cos \theta \\s&=\sin \theta \\C&=1-c\end{aligned}}$
then
$Q(\theta )={\begin{bmatrix}xxC+c&xyC-zs&xzC+ys\\yxC+zs&yyC+c&yzC-xs\\zxC-ys&zyC+xs&zzC+c\end{bmatrix}}$
Determining an axis and angle, like determining a quaternion, is only possible up to sign; that is, (u, θ) and (−u, −θ) correspond to the same rotation matrix, just as q and −q do. Axis–angle extraction presents further difficulties. The angle can be restricted to the range 0° to 180°, but angles are formally ambiguous by multiples of 360°. When the angle is zero, the axis is undefined. When the angle is 180°, the matrix becomes symmetric, which has implications for extracting the axis. Near multiples of 180°, care is needed to avoid numerical problems: in extracting the angle, a two-argument arctangent with atan2(sin θ, cos θ) equal to θ avoids the insensitivity of arccos; and in computing the axis magnitude in order to force unit magnitude, a brute-force approach can lose accuracy through underflow (Moler & Morrison 1983).
A partial approach is as follows:
${\begin{aligned}x&=Q_{zy}-Q_{yz}\\y&=Q_{xz}-Q_{zx}\\z&=Q_{yx}-Q_{xy}\\r&={\sqrt {x^{2}+y^{2}+z^{2}}}\\t&=Q_{xx}+Q_{yy}+Q_{zz}\\\theta &=\operatorname {atan2} (r,t-1)\end{aligned}}$
The x-, y-, and z-components of the axis would then be divided by r. A fully robust approach will use a different algorithm when t, the trace of the matrix Q, is negative, as with quaternion extraction. When r is zero because the angle is zero, an axis must be provided from some source other than the matrix.
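The partial approach transcribes directly (a sketch using NumPy; the function name is illustrative, and the degenerate cases discussed above are only flagged, not resolved):

```python
import numpy as np

def axis_angle(Q):
    """Partial extraction: angle via atan2, unnormalized axis from the
    skew-symmetric part (degenerate when the angle is 0 or 180 degrees)."""
    x = Q[2, 1] - Q[1, 2]
    y = Q[0, 2] - Q[2, 0]
    z = Q[1, 0] - Q[0, 1]
    r = np.sqrt(x*x + y*y + z*z)
    t = np.trace(Q)
    theta = np.arctan2(r, t - 1.0)
    axis = None if r == 0 else np.array([x, y, z]) / r
    return axis, theta

# Rotation about z by 0.5 radians.
c, s = np.cos(0.5), np.sin(0.5)
ax, th = axis_angle(np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]))
```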
Euler angles
Complexity of conversion escalates with Euler angles (used here in the broad sense). The first difficulty is to establish which of the twenty-four variations of Cartesian axis order we will use. Suppose the three angles are θ1, θ2, θ3; physics and chemistry may interpret these as
$Q(\theta _{1},\theta _{2},\theta _{3})=Q_{\mathbf {z} }(\theta _{1})Q_{\mathbf {y} }(\theta _{2})Q_{\mathbf {z} }(\theta _{3}),$
while aircraft dynamics may use
$Q(\theta _{1},\theta _{2},\theta _{3})=Q_{\mathbf {z} }(\theta _{3})Q_{\mathbf {y} }(\theta _{2})Q_{\mathbf {x} }(\theta _{1}).$
One systematic approach begins with choosing the rightmost axis. Among all permutations of (x, y, z), only two place that axis first; one is an even permutation and the other odd. Choosing parity thus establishes the middle axis. That leaves two choices for the leftmost axis, either duplicating the first or not. These three choices give us 3 × 2 × 2 = 12 variations; we double that to 24 by choosing static or rotating axes.
This is enough to construct a matrix from angles, but triples differing in many ways can give the same rotation matrix. For example, suppose we use the zyz convention above; then we have the following equivalent pairs:
(90°, 45°, −105°) ≡ (−270°, −315°, 255°), multiples of 360°
(72°, 0°, 0°) ≡ (40°, 0°, 32°), singular alignment
(45°, 60°, −30°) ≡ (−135°, −60°, 150°), bistable flip
Angles for any order can be found using a concise common routine (Herter & Lott 1993; Shoemake 1994).
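The zyz construction and the first two equivalences above can be checked numerically (a sketch using NumPy; the helper names Rz, Ry, and zyz are illustrative):

```python
import numpy as np

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def zyz(t1, t2, t3):
    """Euler angles in the zyz convention: Q = Qz(t1) Qy(t2) Qz(t3)."""
    return Rz(t1) @ Ry(t2) @ Rz(t3)

deg = np.pi / 180.0
# Multiples of 360 degrees give the same matrix.
P1 = zyz(90 * deg, 45 * deg, -105 * deg)
P2 = zyz(-270 * deg, -315 * deg, 255 * deg)
# Singular alignment: with the middle angle zero, only t1 + t3 matters.
S1 = zyz(72 * deg, 0.0, 0.0)
S2 = zyz(40 * deg, 0.0, 32 * deg)
```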
The problem of singular alignment, the mathematical analog of physical gimbal lock, occurs when the middle rotation aligns the axes of the first and last rotations. It afflicts every axis order at either even or odd multiples of 90°. These singularities are not characteristic of the rotation matrix as such, and only occur with the usage of Euler angles.
The singularities are avoided when considering and manipulating the rotation matrix as orthonormal row vectors (in 3D applications often named the right-vector, up-vector and out-vector) instead of as angles. The singularities are also avoided when working with quaternions.
Vector to vector formulation
In some instances it is interesting to describe a rotation by specifying how a vector is mapped into another through the shortest path (smallest angle). In $\mathbb {R} ^{3}$ this completely describes the associated rotation matrix. In general, given x, y ∈ $\mathbb {S} ^{n}$, the matrix
$R:=I+yx^{\mathsf {T}}-xy^{\mathsf {T}}+{\frac {1}{1+\langle x,y\rangle }}\left(yx^{\mathsf {T}}-xy^{\mathsf {T}}\right)^{2}$
belongs to SO(n + 1) and maps x to y.[14]
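This formula transcribes directly (a sketch using NumPy; the function name is illustrative, and the map is undefined when y = −x, where 1 + ⟨x, y⟩ vanishes):

```python
import numpy as np

def rotation_between(x, y):
    """Rotation taking unit vector x to unit vector y along the shortest arc."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    B = y @ x.T - x @ y.T                       # y x^T - x y^T
    return np.eye(len(x)) + B + (B @ B) / (1.0 + float(x.T @ y))

R = rotation_between([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```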
Uniform random rotation matrices
We sometimes need to generate a uniformly distributed random rotation matrix. It seems intuitively clear in two dimensions that this means the rotation angle is uniformly distributed between 0 and 2π. That intuition is correct, but does not carry over to higher dimensions. For example, if we decompose 3 × 3 rotation matrices in axis–angle form, the angle should not be uniformly distributed; the probability that (the magnitude of) the angle is at most θ should be (θ − sin θ)/π, for 0 ≤ θ ≤ π.
Since SO(n) is a connected and locally compact Lie group, we have a simple standard criterion for uniformity, namely that the distribution be unchanged when composed with any arbitrary rotation (a Lie group "translation"). This definition corresponds to what is called Haar measure. León, Massé & Rivest (2006) show how to use the Cayley transform to generate and test matrices according to this criterion.
We can also generate a uniform distribution in any dimension using the subgroup algorithm of Diaconis & Shahshahani (1987). This recursively exploits the nested dimensions group structure of SO(n), as follows. Generate a uniform angle and construct a 2 × 2 rotation matrix. To step from n to n + 1, generate a vector v uniformly distributed on the n-sphere Sn, embed the n × n matrix in the next larger size with last column (0, ..., 0, 1), and rotate the larger matrix so the last column becomes v.
As usual, we have special alternatives for the 3 × 3 case. Each of these methods begins with three independent random scalars uniformly distributed on the unit interval. Arvo (1992) takes advantage of the odd dimension to change a Householder reflection to a rotation by negation, and uses that to aim the axis of a uniform planar rotation.
Another method uses unit quaternions. Multiplication of rotation matrices is homomorphic to multiplication of quaternions, and multiplication by a unit quaternion rotates the unit sphere. Since the homomorphism is a local isometry, we immediately conclude that to produce a uniform distribution on SO(3) we may use a uniform distribution on S3. In practice: create a four-element vector where each element is a sampling of a normal distribution. Normalize its length and you have a uniformly sampled random unit quaternion which represents a uniformly sampled random rotation. Note that the aforementioned only applies to rotations in dimension 3. For a generalised idea of quaternions, one should look into Rotors.
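The quaternion recipe is short in code (a sketch using NumPy; the function name and the fixed seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def random_rotation():
    """Uniform (Haar) random element of SO(3): normalize a 4-vector of
    standard normals to a unit quaternion, then convert it to a matrix."""
    w, x, y, z = rng.normal(size=4)
    n = np.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

R = random_rotation()
```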
Euler angles can also be used, though not with each angle uniformly distributed (Murnaghan 1962; Miles 1965).
For the axis–angle form, the axis is uniformly distributed over the unit sphere of directions, S2, while the angle has the nonuniform distribution over [0,π] noted previously (Miles 1965).
See also
• Euler–Rodrigues formula
• Euler's rotation theorem
• Rodrigues' rotation formula
• Plane of rotation
• Axis–angle representation
• Rotation group SO(3)
• Rotation formalisms in three dimensions
• Rotation operator (vector space)
• Transformation matrix
• Yaw-pitch-roll system
• Kabsch algorithm
• Isometry
• Rigid transformation
• Rotations in 4-dimensional Euclidean space
• Trigonometric Identities
• Versor
Remarks
1. Note that if instead of rotating vectors, it is the reference frame that is being rotated, the signs on the sin θ terms will be reversed. If reference frame A is rotated anti-clockwise about the origin through an angle θ to create reference frame B, then Rx (with the signs flipped) will transform a vector described in reference frame A coordinates to reference frame B coordinates. Coordinate frame transformations in aerospace, robotics, and other fields are often performed using this interpretation of the rotation matrix.
2. Note that
$\mathbf {u} \otimes \mathbf {u} ={\bigl (}[\mathbf {u} ]_{\times }{\bigr )}^{2}+{\mathbf {I} }$
so that, in Rodrigues' notation, equivalently,
$\mathbf {R} =\mathbf {I} +(\sin \theta )[\mathbf {u} ]_{\times }+(1-\cos \theta ){\bigl (}[\mathbf {u} ]_{\times }{\bigr )}^{2}.$
3. Note that this exponential map of skew-symmetric matrices to rotation matrices is quite different from the Cayley transform discussed earlier, differing to the third order,
$e^{2A}-{\frac {I+A}{I-A}}=-{\tfrac {2}{3}}A^{3}+\mathrm {O} \left(A^{4}\right).$
Conversely, a skew-symmetric matrix A specifying a rotation matrix through the Cayley map specifies the same rotation matrix through the map exp(2 artanh A).
4. For a detailed derivation, see Derivative of the exponential map. Issues of convergence of this series to the right element of the Lie algebra are here swept under the carpet. Convergence is guaranteed when ‖X‖ + ‖Y‖ < log 2 and ‖Z‖ < log 2. If these conditions are not fulfilled, the series may still converge. A solution always exists since exp is onto in the cases under consideration.
Notes
1. Swokowski, Earl (1979). Calculus with Analytic Geometry (Second ed.). Boston: Prindle, Weber, and Schmidt. ISBN 0-87150-268-2.
2. W3C recommendation (2003). "Scalable Vector Graphics – the initial coordinate system".
3. "Rotation Matrices" (PDF). Retrieved 30 November 2021.
4. Taylor, Camillo J.; Kriegman, David J. (1994). "Minimization on the Lie Group SO(3) and Related Manifolds" (PDF). Technical Report No. 9405. Yale University.
5. Cole, Ian R. (January 2015). Modelling CPV (thesis). Loughborough University. hdl:2134/18050.
6. Baker (2003); Fulton & Harris (1991)
7. (Wedderburn 1934, §8.02)
8. Hall 2004, Ch. 3; Varadarajan 1984, §2.15
9. (Engø 2001)
10. Curtright, T L; Fairlie, D B; Zachos, C K (2014). "A compact formula for rotations as spin matrix polynomials". SIGMA. 10: 084. arXiv:1402.3541. Bibcode:2014SIGMA..10..084C. doi:10.3842/SIGMA.2014.084. S2CID 18776942.
11. Baker 2003, Ch. 5; Fulton & Harris 1991, pp. 299–315
12. (Goldstein, Poole & Safko 2002, §4.8)
13. Shoemake, Ken (1 July 1985). "Animating rotation with quaternion curves". SIGGRAPH Comput. Graph. Association for Computing Machinery. 19 (3): 245–254. doi:10.1145/325334.325242. ISBN 0897911660. Retrieved 3 January 2023.
14. Cid, Jose Ángel; Tojo, F. Adrián F. (2018). "A Lipschitz condition along a transversal foliation implies local uniqueness for ODEs". Electronic Journal of Qualitative Theory of Differential Equations. 13 (13): 1–14. arXiv:1801.01724. doi:10.14232/ejqtde.2018.1.13.
References
• Arvo, James (1992), "Fast random rotation matrices", in David Kirk (ed.), Graphics Gems III, San Diego: Academic Press Professional, pp. 117–120, Bibcode:1992grge.book.....K, ISBN 978-0-12-409671-4
• Baker, Andrew (2003), Matrix Groups: An Introduction to Lie Group Theory, Springer, ISBN 978-1-85233-470-3
• Bar-Itzhack, Itzhack Y. (Nov–Dec 2000), "New method for extracting the quaternion from a rotation matrix", Journal of Guidance, Control and Dynamics, 23 (6): 1085–1087, Bibcode:2000JGCD...23.1085B, doi:10.2514/2.4654, ISSN 0731-5090
• Björck, Åke; Bowie, Clazett (June 1971), "An iterative algorithm for computing the best estimate of an orthogonal matrix", SIAM Journal on Numerical Analysis, 8 (2): 358–364, Bibcode:1971SJNA....8..358B, doi:10.1137/0708036, ISSN 0036-1429
• Cayley, Arthur (1846), "Sur quelques propriétés des déterminants gauches", Journal für die reine und angewandte Mathematik, 1846 (32): 119–123, doi:10.1515/crll.1846.32.119, ISSN 0075-4102, S2CID 199546746; reprinted as article 52 in Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, vol. I (1841–1853), Cambridge University Press, pp. 332–336
• Diaconis, Persi; Shahshahani, Mehrdad (1987), "The subgroup algorithm for generating uniform random variables", Probability in the Engineering and Informational Sciences, 1: 15–32, doi:10.1017/S0269964800000255, ISSN 0269-9648, S2CID 122752374
• Engø, Kenth (June 2001), "On the BCH-formula in so(3)", BIT Numerical Mathematics, 41 (3): 629–632, doi:10.1023/A:1021979515229, ISSN 0006-3835, S2CID 126053191
• Fan, Ky; Hoffman, Alan J. (February 1955), "Some metric inequalities in the space of matrices", Proceedings of the American Mathematical Society, 6 (1): 111–116, doi:10.2307/2032662, ISSN 0002-9939, JSTOR 2032662
• Fulton, William; Harris, Joe (1991), Representation Theory: A First Course, Graduate Texts in Mathematics, vol. 129, New York, Berlin, Heidelberg: Springer, ISBN 978-0-387-97495-8, MR 1153249
• Goldstein, Herbert; Poole, Charles P.; Safko, John L. (2002), Classical Mechanics (third ed.), Addison Wesley, ISBN 978-0-201-65702-9
• Hall, Brian C. (2004), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Springer, ISBN 978-0-387-40122-5 (GTM 222)
• Herter, Thomas; Lott, Klaus (September–October 1993), "Algorithms for decomposing 3-D orthogonal matrices into primitive rotations", Computers & Graphics, 17 (5): 517–527, doi:10.1016/0097-8493(93)90003-R, ISSN 0097-8493
• Higham, Nicholas J. (October 1, 1989), "Matrix nearness problems and applications", in Gover, Michael J. C.; Barnett, Stephen (eds.), Applications of Matrix Theory, Oxford University Press, pp. 1–27, ISBN 978-0-19-853625-3
• León, Carlos A.; Massé, Jean-Claude; Rivest, Louis-Paul (February 2006), "A statistical model for random rotations", Journal of Multivariate Analysis, 97 (2): 412–430, doi:10.1016/j.jmva.2005.03.009, ISSN 0047-259X
• Miles, Roger E. (December 1965), "On random rotations in R3", Biometrika, 52 (3/4): 636–639, doi:10.2307/2333716, ISSN 0006-3444, JSTOR 2333716
• Moler, Cleve; Morrison, Donald (1983), "Replacing square roots by pythagorean sums", IBM Journal of Research and Development, 27 (6): 577–581, doi:10.1147/rd.276.0577, ISSN 0018-8646
• Murnaghan, Francis D. (1950), "The element of volume of the rotation group", Proceedings of the National Academy of Sciences, 36 (11): 670–672, Bibcode:1950PNAS...36..670M, doi:10.1073/pnas.36.11.670, ISSN 0027-8424, PMC 1063502, PMID 16589056
• Murnaghan, Francis D. (1962), The Unitary and Rotation Groups, Lectures on applied mathematics, Washington: Spartan Books
• Paeth, Alan W. (1986), "A Fast Algorithm for General Raster Rotation" (PDF), Proceedings, Graphics Interface '86: 77–81
• Daubechies, Ingrid; Sweldens, Wim (1998), "Factoring wavelet transforms into lifting steps" (PDF), Journal of Fourier Analysis and Applications, 4 (3): 247–269, doi:10.1007/BF02476026, S2CID 195242970
• Pique, Michael E. (1990), "Rotation Tools", in Andrew S. Glassner (ed.), Graphics Gems, San Diego: Academic Press Professional, pp. 465–469, ISBN 978-0-12-286166-6
• Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 21.5.2. Picking a Random Rotation Matrix", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
• Shepperd, Stanley W. (May–June 1978), "Quaternion from rotation matrix", Journal of Guidance and Control, 1 (3): 223–224, doi:10.2514/3.55767b
• Shoemake, Ken (1994), "Euler angle conversion", in Paul Heckbert (ed.), Graphics Gems IV, San Diego: Academic Press Professional, pp. 222–229, ISBN 978-0-12-336155-4
• Stuelpnagel, John (October 1964), "On the parameterization of the three-dimensional rotation group", SIAM Review, 6 (4): 422–430, Bibcode:1964SIAMR...6..422S, doi:10.1137/1006093, ISSN 0036-1445, S2CID 13990266 (Also NASA-CR-53568.)
• Varadarajan, Veeravalli S. (1984), Lie Groups, Lie Algebras, and Their Representation, Springer, ISBN 978-0-387-90969-1 (GTM 102)
• Wedderburn, Joseph H. M. (1934), Lectures on Matrices, AMS, ISBN 978-0-8218-3204-2
Rotation
Rotation or rotational motion is the circular movement of an object around a central line, known as an axis of rotation. A plane figure can rotate in either a clockwise or counterclockwise sense around a perpendicular axis intersecting anywhere inside or outside the figure at a center of rotation. A solid figure has an infinite number of possible axes and angles of rotation, including chaotic rotation (between arbitrary orientations), in contrast to rotation around a fixed axis.
The special case of a rotation with an internal axis passing through the body's own center of mass is known as a spin (or autorotation).[1] In that case, the surface intersection of the internal spin axis can be called a pole; for example, Earth's rotation defines the geographical poles. A rotation around a completely external axis is called a revolution (or orbit), e.g. Earth's orbit around the Sun. The ends of the external axis of revolution can be called the orbital poles.[1]
Either type of rotation is involved in a corresponding type of angular velocity (spin angular velocity and orbital angular velocity) and angular momentum (spin angular momentum and orbital angular momentum).
Mathematics
Main article: Rotation (mathematics)
Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps at least one point fixed. This definition applies to rotations in two dimensions (in a plane), in which exactly one point is kept fixed; and also in three dimensions (in space), in which additional points may be kept fixed (as in rotation around a fixed axis, where the fixed points form an infinite line).
All rigid body movements are rotations, translations, or combinations of the two.
A rotation is a progressive reorientation about a common point. That common point lies on the axis of the motion, and the axis is perpendicular to the plane of the motion.
If a rotation around a point or axis is followed by a second rotation around the same point/axis, a third rotation results. The reverse (inverse) of a rotation is also a rotation. Thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation, e.g. a translation.
Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, and followed by a rotation around the z axis. That is to say, any spatial rotation can be decomposed into a combination of principal rotations.
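This decomposition can be sketched in a few lines of Python (a minimal illustration using nested lists for matrices; the function names are chosen only for this example):

```python
import math

def rx(t):
    """Principal rotation about the x axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def ry(t):
    """Principal rotation about the y axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rz(t):
    """Principal rotation about the z axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    """3×3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# A general spatial rotation composed from principal rotations: Rz · Ry · Rx.
R = matmul(rz(0.3), matmul(ry(0.2), rx(0.1)))
```

The composed matrix R is again a rotation: it is orthogonal with determinant 1.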
See also: curl (mathematics), cyclic permutation, Euler angles, rigid body, rotation around a fixed axis, rotation group SO(3), rotation matrix, axis angle, quaternion, and isometry
Fixed axis vs. fixed point
The combination of any sequence of rotations of an object in three dimensions about a fixed point is always equivalent to a rotation about an axis (which may be considered to be a rotation in the plane that is perpendicular to that axis). Similarly, the rotation rate of an object in three dimensions at any instant is about some axis, although this axis may be changing over time.
In other than three dimensions, it does not make sense to describe a rotation as being around an axis, since more than one axis through the object may be kept fixed; instead, simple rotations are described as being in a plane. In four or more dimensions, a combination of two or more simple rotations, each in a plane, is not in general a rotation in a single plane.
Axis of 2-dimensional rotations
2-dimensional rotations, unlike the 3-dimensional ones, possess no axis of rotation, only a point about which the rotation occurs. This is equivalent, for linear transformations, to saying that there is no direction in the plane which is kept unchanged by a two-dimensional rotation, except, of course, for the identity.
The question of the existence of such a direction is the question of existence of an eigenvector for the matrix A representing the rotation. Every 2D rotation around the origin through an angle $\theta $ in counterclockwise direction can be quite simply represented by the following matrix:
$A={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}$
A standard eigenvalue determination leads to the characteristic equation
$\lambda ^{2}-2\lambda \cos \theta +1=0,$
which has
$\cos \theta \pm i\sin \theta $
as its eigenvalues. Therefore, there is no real eigenvalue whenever $\cos \theta \neq \pm 1$, meaning that no real vector in the plane is kept unchanged by A.
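This eigenvalue computation can be checked numerically. The sketch below (with the angle θ = 0.7 chosen arbitrarily for illustration) solves the characteristic equation with the quadratic formula over the complex numbers:

```python
import cmath
import math

theta = 0.7  # an arbitrary example angle, in radians

# Characteristic equation: λ² − 2λ·cosθ + 1 = 0.
c = math.cos(theta)
disc = cmath.sqrt(4.0 * c * c - 4.0)  # discriminant is negative for sinθ ≠ 0
lam1 = (2.0 * c + disc) / 2.0
lam2 = (2.0 * c - disc) / 2.0
# lam1 and lam2 equal cosθ ± i·sinθ, each of absolute value 1,
# so no real eigenvalue (hence no invariant direction) exists.
```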
Rotation angle and axis in 3 dimensions
Knowing that the trace is an invariant, the rotation angle $\alpha $ for a proper orthogonal 3×3 rotation matrix $A$ is found by
$\alpha =\cos ^{-1}\left({\frac {A_{11}+A_{22}+A_{33}-1}{2}}\right)$
Using the principal arc-cosine, this formula gives a rotation angle satisfying $0\leq \alpha \leq 180^{\circ }$. The corresponding rotation axis must be defined to point in a direction that limits the rotation angle to not exceed 180 degrees. (This can always be done because any rotation of more than 180 degrees about an axis $m$ can always be written as a rotation having $0\leq \alpha \leq 180^{\circ }$ if the axis is replaced with $n=-m$.)
Every proper rotation $A$ in 3D space has an axis of rotation, which is defined such that any vector $v$ that is aligned with the rotation axis will not be affected by rotation. Accordingly, $Av=v$, and the rotation axis therefore corresponds to an eigenvector of the rotation matrix associated with an eigenvalue of 1. As long as the rotation angle $\alpha $ is nonzero (i.e., the rotation is not the identity tensor), there is one and only one such direction. Because A has only real components, there is at least one real eigenvalue, and the remaining two eigenvalues must be complex conjugates of each other (see Eigenvalues and eigenvectors#Eigenvalues and the characteristic polynomial). Knowing that 1 is an eigenvalue, it follows that the remaining two eigenvalues are complex conjugates of each other; this does not imply that they are complex, since they could be real with double multiplicity. In the degenerate case of a rotation angle $\alpha =180^{\circ }$, the remaining two eigenvalues are both equal to −1. In the degenerate case of a zero rotation angle, the rotation matrix is the identity, and all three eigenvalues are 1 (which is the only case for which the rotation axis is arbitrary).
A spectral analysis is not required to find the rotation axis. If $n$ denotes the unit eigenvector aligned with the rotation axis, and if $\alpha $ denotes the rotation angle, then it can be shown that $2\sin(\alpha )n=\{A_{32}-A_{23},A_{13}-A_{31},A_{21}-A_{12}\}$. Consequently, the expense of an eigenvalue analysis can be avoided by simply normalizing this vector if it has a nonzero magnitude. On the other hand, if this vector has a zero magnitude, it means that $\sin(\alpha )=0$. In other words, this vector will be zero if and only if the rotation angle is 0 or 180 degrees, and the rotation axis may be assigned in this case by normalizing any column of $A+I$ that has a nonzero magnitude.[2]
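Both formulas combine into a short routine. The following is a sketch for the non-degenerate case (rotation angle strictly between 0 and 180 degrees), with names chosen only for this example:

```python
import math

def angle_axis(A):
    """Angle and unit axis of a proper orthogonal 3×3 matrix A.

    Sketch for the non-degenerate case 0 < alpha < 180 degrees: the angle
    comes from the trace, and the (unnormalized) axis from the entries of
    the skew-symmetric part, 2·sin(alpha)·n = {A32−A23, A13−A31, A21−A12}.
    """
    alpha = math.acos((A[0][0] + A[1][1] + A[2][2] - 1.0) / 2.0)
    v = [A[2][1] - A[1][2], A[0][2] - A[2][0], A[1][0] - A[0][1]]
    norm = math.sqrt(sum(x * x for x in v))  # equals 2·sin(alpha)
    return alpha, [x / norm for x in v]
```

For a rotation by 0.5 radians about the z axis, this returns the angle 0.5 and the axis (0, 0, 1), avoiding any eigenvalue analysis.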
This discussion applies to a proper rotation, and hence $\det A=1$. Any improper orthogonal 3×3 matrix $B$ may be written as $B=-A$, in which $A$ is proper orthogonal. That is, any improper orthogonal 3×3 matrix may be decomposed as a proper rotation (from which an axis of rotation can be found as described above) followed by an inversion (multiplication by −1). It follows that the rotation axis of $A$ is also the eigenvector of $B$ corresponding to an eigenvalue of −1.
Rotation plane
Main article: Rotation plane
Just as every three-dimensional rotation has a rotation axis, every three-dimensional rotation also has a plane, perpendicular to the rotation axis, that is left invariant by the rotation. The rotation, restricted to this plane, is an ordinary 2D rotation.
The proof proceeds similarly to the above discussion. First, suppose that all eigenvalues of the 3D rotation matrix A are real. This means that there is an orthogonal basis, made by the corresponding eigenvectors (which are necessarily orthogonal), in which the effect of the rotation matrix is just a scaling along each basis direction. If we write A in this basis, it is diagonal; but a diagonal orthogonal matrix has only +1s and −1s in its diagonal entries. Therefore, we do not have a proper rotation, but either the identity or the result of a sequence of reflections.
It follows, then, that a proper rotation has some complex eigenvalue. Let v be the corresponding eigenvector. Then, as we showed in the previous topic, ${\bar {v}}$ is also an eigenvector, and $v+{\bar {v}}$ and $i(v-{\bar {v}})$ are such that their scalar product vanishes:
$i(v^{\text{T}}+{\bar {v}}^{\text{T}})(v-{\bar {v}})=i(v^{\text{T}}v-{\bar {v}}^{\text{T}}{\bar {v}}+{\bar {v}}^{\text{T}}v-v^{\text{T}}{\bar {v}})=0$
because, since ${\bar {v}}^{\text{T}}{\bar {v}}$ is real, it equals its complex conjugate $v^{\text{T}}v$, and ${\bar {v}}^{\text{T}}v$ and $v^{\text{T}}{\bar {v}}$ are both representations of the same scalar product between $v$ and ${\bar {v}}$.
This means $v+{\bar {v}}$ and $i(v-{\bar {v}})$ are orthogonal vectors. Also, they are both real vectors by construction. These vectors span the same subspace as $v$ and ${\bar {v}}$, which is an invariant subspace under the application of A. Therefore, they span an invariant plane.
This plane is orthogonal to the invariant axis, which corresponds to the remaining eigenvector of A, with eigenvalue 1, because of the orthogonality of the eigenvectors of A.
Astronomy
Further information: Rotation period and Earth's rotation
In astronomy, rotation is a commonly observed phenomenon; it includes both spin (auto-rotation) and orbital revolution.
Spin
Stars, planets and similar bodies may spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features. Stellar rotation is measured through Doppler shift or by tracking active surface features. An example is sunspots, which rotate with the outer gases that make up the Sun's visible surface.
Under some circumstances orbiting bodies may lock their spin rotation to their orbital rotation around a larger body. This effect is called tidal locking; the Moon is tidally locked to the Earth.
Earth's rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravitation the closer one is to the equator. One consequence is that an object weighs slightly less at the equator than at the poles. Another is that over time the Earth has been slightly deformed into an oblate spheroid; a similar equatorial bulge develops for other planets.
Other consequences of the rotation of a planet are the phenomena of precession and nutation. Like a gyroscope, the overall effect is a slight "wobble" in the movement of the axis of a planet. Currently the tilt of the Earth's axis to its orbital plane (obliquity of the ecliptic) is 23.44 degrees, but this angle changes slowly (over thousands of years). (See also Precession of the equinoxes and Pole Star.)
Revolution
While revolution is often used as a synonym for rotation, in many fields, particularly astronomy and related fields, revolution, often referred to as orbital revolution for clarity, is used when one body moves around another, while rotation is used to mean the movement around an axis. Moons revolve around their planet, planets revolve about their star (such as the Earth around the Sun), and stars slowly revolve about their galactic center. The motion of the components of galaxies is complex, but it usually includes a rotation component.
Retrograde rotation
Most planets in the Solar System, including Earth, spin in the same direction as they orbit the Sun. The exceptions are Venus and Uranus. Venus may be thought of as rotating slowly backward (or being "upside down"). Uranus rotates nearly on its side relative to its orbit. Current speculation is that Uranus started off with a typical prograde orientation and was knocked on its side by a large impact early in its history. The dwarf planet Pluto (formerly considered a planet) is anomalous in several ways, including that it also rotates on its side.
Physics
Main article: Circular motion
The speed of rotation is given by the angular frequency (rad/s) or frequency (turns per time), or period (seconds, days, etc.). The time-rate of change of angular frequency is angular acceleration (rad/s²), caused by torque. The ratio of torque to the angular acceleration is given by the moment of inertia.
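As a numerical illustration of these relations, with hypothetical values chosen only for this example (a flywheel of moment of inertia 2.0 kg·m² driven from rest by a constant torque of 5.0 N·m):

```python
import math

# Hypothetical example values: a flywheel spun up from rest.
moment_of_inertia = 2.0                             # kg·m²
torque = 5.0                                        # N·m

angular_acceleration = torque / moment_of_inertia   # rad/s², ratio of torque to inertia
omega = angular_acceleration * 3.0                  # angular frequency after 3 s, rad/s
frequency = omega / (2.0 * math.pi)                 # turns per second
period = 1.0 / frequency                            # seconds per turn
```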
The angular velocity vector (an axial vector) also describes the direction of the axis of rotation. Similarly, the torque is an axial vector.
The physics of the rotation around a fixed axis is mathematically described with the axis–angle representation of rotations. According to the right-hand rule, the direction away from the observer is associated with clockwise rotation and the direction towards the observer with counterclockwise rotation, like a screw.
Cosmological principle
The laws of physics are currently believed to be invariant under any fixed rotation. (Although they do appear to change when viewed from a rotating viewpoint: see rotating frame of reference.)
In modern physical cosmology, the cosmological principle is the notion that the distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale, since the forces are expected to act uniformly throughout the universe and have no preferred direction, and should, therefore, produce no observable irregularities in the large scale structuring over the course of evolution of the matter field that was initially laid down by the Big Bang.
In particular, for a system which behaves the same regardless of how it is oriented in space, its Lagrangian is rotationally invariant. According to Noether's theorem, if the action (the integral over time of its Lagrangian) of a physical system is invariant under rotation, then angular momentum is conserved.
Euler rotations
Main article: Euler angles
Euler rotations provide an alternative description of a rotation, as a composition of three rotations, each defined as the movement obtained by changing one of the Euler angles while leaving the other two constant. Euler rotations are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes, and the third is an intrinsic rotation around an axis fixed in the body that moves.
These rotations are called precession, nutation, and intrinsic rotation.
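In the common z-x-z convention, the three rotations compose as follows (a minimal sketch with list-based matrices; the function names are chosen only for this example):

```python
import math

def rot_z(t):
    """Rotation about the z axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    """Rotation about the x axis."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mul(a, b):
    """3×3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zxz(precession, nutation, spin):
    # z-x-z convention: precession about the external z axis, nutation
    # about the line of nodes, and intrinsic rotation about the body z axis.
    return mul(rot_z(precession), mul(rot_x(nutation), rot_z(spin)))
```

With zero nutation the line-of-nodes rotation drops out and the precession and intrinsic angles simply add, since both are then rotations about the same z axis.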
Flight dynamics
In flight dynamics, the principal rotations described with Euler angles above are known as pitch, roll and yaw. The term rotation is also used in aviation to refer to the upward pitch (nose moves up) of an aircraft, particularly when starting the climb after takeoff.
Principal rotations have the advantage of modelling a number of physical systems such as gimbals and joysticks, so are easily visualised, and are a very compact way of storing a rotation. But they are difficult to use in calculations, as even simple operations like combining rotations are expensive, and they suffer from a form of gimbal lock in which the angles cannot be uniquely calculated for certain rotations.
Amusement rides
Many amusement rides provide rotation. A Ferris wheel has a horizontal central axis, and parallel axes for each gondola, where the rotation is opposite, provided by gravity or mechanically. As a result, at any time the orientation of the gondola is upright (not rotated), just translated. The tip of the translation vector describes a circle. A carousel provides rotation about a vertical axis. Many rides provide a combination of rotations about several axes. In Chair-O-Planes the rotation about the vertical axis is provided mechanically, while the rotation about the horizontal axis is due to the centripetal force. In roller coaster inversions the rotation about the horizontal axis is one or more full cycles, where inertia keeps people in their seats.
Sports
Rotation of a ball or other object, usually called spin, plays a role in many sports, including topspin and backspin in tennis, English, follow and draw in billiards and pool, curve balls in baseball, spin bowling in cricket, flying disc sports, etc. Table tennis paddles are manufactured with different surface characteristics to allow the player to impart a greater or lesser amount of spin to the ball.
Rotation of a player one or more times around a vertical axis may be called spin in figure skating, twirling (of the baton or the performer) in baton twirling, or 360, 540, 720, etc. in snowboarding, etc. Rotation of a player or performer one or more times around a horizontal axis may be called a flip, roll, somersault, heli, etc. in gymnastics, waterskiing, or many other sports, or a one-and-a-half, two-and-a-half, gainer (starting facing away from the water), etc. in diving, etc. A combination of vertical and horizontal rotation (back flip with 360°) is called a möbius in waterskiing freestyle jumping.
Rotation of a player around a vertical axis, generally between 180 and 360 degrees, may be called a spin move and is used as a deceptive or avoidance manoeuvre, or in an attempt to play, pass, or receive a ball or puck, etc., or to afford a player a view of the goal or other players. It is often seen in hockey, basketball, football of various codes, tennis, etc.
See also
• Absolute rotation – Rotation independent of any external reference
• Circular motion
• Instant centre of rotation – instantaneously fixed point on an arbitrarily moving rigid body
• Mach's principle – speculative hypothesis that a physical law relates the motion of the distant stars to the local inertial frame
• Orientation (geometry)
• Point reflection
• Rolling – motion of two objects in contact with each-other without sliding
• Rotation (quantity) – a unitless scalar representing the number of rotations
• Rotation around a fixed axis
• Rotation formalisms in three dimensions
• Rotating locomotion in living systems
• Top – spinning toy
References
1. Wormeli, R. (2009). Metaphors & Analogies: Power Tools for Teaching Any Subject. Stenhouse Publishers. p. 28. ISBN 978-1-57110-758-9. Retrieved 2023-07-27.
2. Brannon, R.M., "Rotation, Reflection, and Frame Change", 2018
3. "An Oasis, or a Secret Lair?". ESO Picture of the Week. Archived from the original on 11 October 2013. Retrieved 8 October 2013.
External links
• "Rotation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Product of Rotations at cut-the-knot. cut-the-knot.org
• When a Triangle is Equilateral at cut-the-knot. cut-the-knot.org
• Rotate Points Using Polar Coordinates, howtoproperly.com
• Rotation in Two Dimensions by Sergio Hannibal Mejia after work by Roger Germundsson and Understanding 3D Rotation by Roger Germundsson, Wolfram Demonstrations Project. demonstrations.wolfram.com
• Rotation, Reflection, and Frame Change: Orthogonal tensors in computational engineering mechanics, IOP Publishing
Rotation distance
In discrete mathematics and theoretical computer science, the rotation distance between two binary trees with the same number of nodes is the minimum number of tree rotations needed to reconfigure one tree into another. Because of a combinatorial equivalence between binary trees and triangulations of convex polygons, rotation distance is equivalent to the flip distance for triangulations of convex polygons.
Rotation distance was first defined by Karel Čulík II and Derick Wood in 1982.[1] Every two n-node binary trees have rotation distance at most 2n − 6, and some pairs of trees have exactly this distance. The computational complexity of computing the rotation distance is unknown.[2]
Definition
A binary tree is a structure consisting of a set of nodes, one of which is designated as the root node, in which each remaining node is either the left child or right child of some other node, its parent, and in which following the parent links from any node eventually leads to the root node. (In some sources, the nodes described here are called "internal nodes", there exists another set of nodes called "external nodes", each internal node is required to have exactly two children, and each external node is required to have zero children.[1] The version described here can be obtained by removing all the external nodes from such a tree.)
For any node x in the tree, there is a subtree of the same form, rooted at x and consisting of all the nodes that can reach x by following parent links. Each binary tree has a left-to-right ordering of its nodes, its inorder traversal, obtained by recursively traversing the left subtree (the subtree at the left child of the root, if such a child exists), then listing the root itself, and then recursively traversing the right subtree. In a binary search tree, each node is associated with a search key, and the left-to-right ordering is required to be consistent with the order of the keys.[2]
A tree rotation is an operation that changes the structure of a binary tree without changing its left-to-right ordering. Several self-balancing binary search tree data structures use these rotations as a primitive operation in their rebalancing algorithms. A rotation operates on two nodes x and y, where x is the parent of y, and restructures the tree by making y be the parent of x and taking the place of x in the tree. To free up one of the child links of y and make room to link x as a child of y, this operation may also need to move one of the children of y to become a child of x. There are two variations of this operation, a right rotation in which y begins as the left child of x and x ends as the right child of y, and a left rotation in which y begins as the right child of x and x ends as the left child of y.[2]
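Using a minimal encoding of tree shapes as nested pairs (None for an empty subtree, a pair (left, right) for a node), the two rotations at a node can be sketched as follows; the encoding and function names are chosen only for this example:

```python
# A tree shape is None (empty) or a pair (left, right). Rotations change
# the structure but preserve the left-to-right (inorder) sequence of nodes.

def rotate_right(t):
    """Right rotation at the root: ((A, B), C) becomes (A, (B, C)).

    The left child y of the root x becomes the new root; y's right
    subtree B moves over to become the left subtree of x.
    """
    (a, b), c = t
    return (a, (b, c))

def rotate_left(t):
    """Left rotation at the root: (A, (B, C)) becomes ((A, B), C)."""
    a, (b, c) = t
    return ((a, b), c)
```

The two operations are inverses of each other, as a rotation and its reverse must be.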
Any two trees that have the same left-to-right sequence of nodes may be transformed into each other by a sequence of rotations. The rotation distance between the two trees is the number of rotations in the shortest possible sequence of rotations that performs this transformation. It can also be described as the shortest path distance in a rotation graph, a graph that has a vertex for each binary tree on a given left-to-right sequence of nodes and an edge for each rotation between two trees.[2] This rotation graph is exactly the graph of vertices and edges of an associahedron.[3]
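For small trees the rotation graph can be explored directly. The sketch below (assuming the same nested-pair encoding of tree shapes, with None for an empty subtree) computes rotation distance by breadth-first search over rotations applied at any node:

```python
from collections import deque

def neighbors(t):
    """Yield all tree shapes one rotation away from t (at any node)."""
    if t is None:
        return
    a, b = t
    if a is not None:          # right rotation at this node
        x, y = a
        yield (x, (y, b))
    if b is not None:          # left rotation at this node
        x, y = b
        yield ((a, x), y)
    for na in neighbors(a):    # rotations inside the left subtree
        yield (na, b)
    for nb in neighbors(b):    # rotations inside the right subtree
        yield (a, nb)

def rotation_distance(s, t):
    """Shortest-path distance between s and t in the rotation graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in neighbors(u):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None
```

This brute-force search is only feasible for small n, since the number of tree shapes grows as the Catalan numbers.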
Equivalence to flip distance
Given a family of triangulations of some geometric object, a flip is an operation that transforms one triangulation to another by removing an edge between two triangles and adding the opposite diagonal to the resulting quadrilateral. The flip distance between two triangulations is the minimum number of flips needed to transform one triangulation into another. It can also be described as the shortest path distance in a flip graph, a graph that has a vertex for each triangulation and an edge for each flip between two triangulations. Flips and flip distances can be defined in this way for several different kinds of triangulations, including triangulations of sets of points in the Euclidean plane, triangulations of polygons, and triangulations of abstract manifolds.
There is a one-to-one correspondence between triangulations of a given convex polygon, with a designated root edge, and binary trees, taking triangulations of n-sided polygons into binary trees with n − 2 nodes. In this correspondence, each triangle of a triangulation corresponds to a node in a binary tree. The root node is the triangle having the designated root edge as one of its sides, and two nodes are linked as parent and child in the tree when the corresponding triangles share a diagonal in the triangulation. Under this correspondence, rotations in binary trees correspond exactly to flips in the corresponding triangulations. Therefore, the rotation distance on (n − 2)-node trees corresponds exactly to flip distance on triangulations of n-sided convex polygons.
Maximum value
Čulík & Wood (1982) define the "right spine" of a binary tree to be the path obtained by starting from the root and following right child links until reaching a node that has no right child. If a tree has the property that not all nodes belong to the right spine, there always exists a right rotation that increases the length of the right spine. For, in this case, there exists at least one node x on the right spine that has a left child y that is not on the right spine. Performing a right rotation on x and y adds y to the right spine without removing any other node from it. By repeatedly increasing the length of the right spine, any n-node tree can be transformed into the unique tree with the same node order in which all nodes belong to the right spine, in at most n − 1 steps. Given any two trees with the same node order, one can transform one into the other by transforming the first tree into a tree with all nodes on the right spine, and then reversing the same transformation of the second tree, in a total of at most 2n − 2 steps. Therefore, as Čulík & Wood (1982) proved, the rotation distance between any two trees is at most 2n − 2.[1]
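The right-spine argument translates directly into code. The sketch below (assuming the same nested-pair encoding of tree shapes, with None for an empty subtree) flattens a tree to its right spine, counting the right rotations used; by the argument above, the count is at most n − 1:

```python
def to_right_spine(t):
    """Flatten a tree shape to its right spine, counting right rotations."""
    count = 0
    # Each right rotation at the root moves the root's left child onto
    # the right spine; repeat until the root has no left child.
    while t is not None and t[0] is not None:
        (a, b), c = t
        t = (a, (b, c))
        count += 1
    if t is None:
        return t, count
    rest, more = to_right_spine(t[1])
    return (None, rest), count + more
```

For the three-node left chain this uses two rotations, matching the n − 1 bound.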
By considering the problem in terms of flips of convex polygons instead of rotations of trees, Sleator, Tarjan & Thurston (1988) were able to show that the rotation distance is at most 2n − 6. In terms of triangulations of convex polygons, the right spine is the sequence of triangles incident to the right endpoint of the root edge, and the tree in which all vertices lie on the spine corresponds to a fan triangulation for this vertex. The main idea of their improvement is to try flipping both given triangulations to a fan triangulation for any vertex, rather than only the one for the right endpoint of the root edge. It is not possible for all of these choices to simultaneously give the worst-case distance n − 1 from each starting triangulation, giving the improvement.[2]
Sleator, Tarjan & Thurston (1988) also used a geometric argument to show that, for infinitely many values of n, the maximum rotation distance is exactly 2n − 6. They again use the interpretation of the problem in terms of flips of triangulations of convex polygons, and they interpret the starting and ending triangulation as the top and bottom faces of a convex polyhedron with the convex polygon itself interpreted as a Hamiltonian circuit in this polyhedron. Under this interpretation, a sequence of flips from one triangulation to the other can be translated into a collection of tetrahedra that triangulate the given three-dimensional polyhedron. They find a family of polyhedra with the property that (in three-dimensional hyperbolic geometry) the polyhedra have large volume, but all tetrahedra inside them have much smaller volume, implying that many tetrahedra are needed in any triangulation. The binary trees obtained from translating the top and bottom sets of faces of these polyhedra back into trees have high rotation distance, at least 2n − 6.[2]
Subsequently, Pournin (2014) provided a proof that for all n ≥ 11, the maximum rotation distance is exactly 2n − 6. Pournin's proof is combinatorial, and avoids the use of hyperbolic geometry.[3]
Computational complexity
Unsolved problem in mathematics:
What is the complexity of computing the rotation distance between two trees?
As well as defining rotation distance, Čulík & Wood (1982) asked for the computational complexity of computing the rotation distance between two given trees. The existence of short rotation sequences between any two trees implies that testing whether the rotation distance is at most k belongs to the complexity class NP, but it is not known to be NP-complete, nor is it known to be solvable in polynomial time. A restricted variant of the problem, in which rotations are allowed only at the root node and at the right child of the root, can be solved in linear time by Fordham's algorithm, which relies on a classification of nodes into seven types and uses a lookup table to find the number of rotations required to transform a node of one type into another.
The rotation distance between any two trees can be lower bounded, in the equivalent view of polygon triangulations, by the number of diagonals that need to be removed from one triangulation and replaced by other diagonals to produce the other triangulation. It can also be upper bounded by twice this number, by partitioning the problem into subproblems along any diagonals shared between both triangulations and then applying the method of Čulík & Wood (1982) to each subproblem. This method provides an approximation algorithm for the problem with an approximation ratio of two.[4] A similar approach of partitioning into subproblems along shared diagonals leads to a fixed-parameter tractable algorithm for computing the rotation distance exactly.[5][6]
Determining the complexity of computing the rotation distance exactly, without parameterization, remains unsolved; the best algorithms currently known for the problem run in exponential time.[7]
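Although no polynomial-time algorithm is known, the distance can be computed exactly for small trees by exhaustive breadth-first search over single rotations, one of the exponential-time approaches mentioned above. A minimal Python sketch (the tuple encoding of trees and the function names are my own choices, not from the cited papers):

```python
from collections import deque

LEAF = None  # leaves are None; internal nodes are (left, right) tuples

def rotations(t):
    """Yield every tree reachable from t by a single rotation."""
    if t is LEAF:
        return
    l, r = t
    if l is not LEAF:           # right rotation at the root
        a, b = l
        yield (a, (b, r))
    if r is not LEAF:           # left rotation at the root
        b, c = r
        yield ((l, b), c)
    for l2 in rotations(l):     # rotations inside the left subtree
        yield (l2, r)
    for r2 in rotations(r):     # rotations inside the right subtree
        yield (l, r2)

def rotation_distance(s, t):
    """Exact rotation distance by breadth-first search (exponential time)."""
    if s == t:
        return 0
    seen = {s}
    frontier = deque([(s, 0)])
    while frontier:
        u, d = frontier.popleft()
        for v in rotations(u):
            if v == t:
                return d + 1
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    raise ValueError("trees must have the same number of nodes")
```

For example, the right comb and left comb on 3 internal nodes (4 leaves) are at distance 2, matching the bound 2n − 6 for n = 4 leaves.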
References
1. Čulík, Karel, II; Wood, Derick (1982), "A note on some tree similarity measures", Information Processing Letters, 15 (1): 39–42, doi:10.1016/0020-0190(82)90083-7, MR 0678031
2. Sleator, Daniel D.; Tarjan, Robert E.; Thurston, William P. (1988), "Rotation distance, triangulations, and hyperbolic geometry", Journal of the American Mathematical Society, 1 (3): 647–681, doi:10.1090/S0894-0347-1988-0928904-4, JSTOR 1990951, MR 0928904
3. Pournin, Lionel (2014), "The diameter of associahedra", Advances in Mathematics, 259: 13–42, doi:10.1016/j.aim.2014.02.035, MR 3197650
4. Cleary, Sean; St. John, Katherine (2010), "A linear-time approximation for rotation distance", Journal of Graph Algorithms and Applications, 14 (2): 385–390, doi:10.7155/jgaa.00212, MR 2740180
5. Cleary, Sean; St. John, Katherine (2009), "Rotation distance is fixed-parameter tractable", Information Processing Letters, 109 (16): 918–922, arXiv:0903.0197, doi:10.1016/j.ipl.2009.04.023, MR 2541971, S2CID 125834
6. Lucas, Joan M. (2010), "An improved kernel size for rotation distance in binary trees", Information Processing Letters, 110 (12–13): 481–484, doi:10.1016/j.ipl.2010.04.022, MR 2667389
7. Kanj, Iyad; Sedgwick, Eric; Xia, Ge (2017), "Computing the flip distance between triangulations", Discrete & Computational Geometry, 58 (2): 313–344, arXiv:1407.1525, doi:10.1007/s00454-017-9867-x, MR 3679938, S2CID 1961246
|
Wikipedia
|
Rotation number
In mathematics, the rotation number is an invariant of homeomorphisms of the circle.
Not to be confused with Rotation (quantity).
"Map winding number" redirects here. Not to be confused with Winding number or Turning number.
History
It was first defined by Henri Poincaré in 1885, in relation to the precession of the perihelion of a planetary orbit. Poincaré later proved a theorem characterizing the existence of periodic orbits in terms of rationality of the rotation number.
Definition
Suppose that $f:S^{1}\to S^{1}$ is an orientation-preserving homeomorphism of the circle $S^{1}=\mathbb {R} /\mathbb {Z} .$ Then f may be lifted to a homeomorphism $F:\mathbb {R} \to \mathbb {R} $ of the real line, satisfying
$F(x+m)=F(x)+m$
for every real number x and every integer m.
The rotation number of f is defined in terms of the iterates of F:
$\omega (f)=\lim _{n\to \infty }{\frac {F^{n}(x)-x}{n}}.$
Henri Poincaré proved that the limit exists and is independent of the choice of the starting point x. The lift F is unique modulo integers, therefore the rotation number is a well-defined element of $\mathbb {R} /\mathbb {Z} .$ Intuitively, it measures the average rotation angle along the orbits of f.
Example
If $f$ is a rotation by $2\pi N$ (where $0<N<1$), then
$F(x)=x+N,$
and its rotation number is $N$ (cf. irrational rotation).
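The defining limit can be checked numerically by iterating a lift, as in the rigid-rotation example above. A short Python sketch (the function name and iteration count are illustrative):

```python
def rotation_number(F, x0=0.0, n=10000):
    """Estimate omega(f) = lim (F^n(x0) - x0)/n for a lift F of f.
    The result may be reduced mod 1 to view it as an element of R/Z."""
    x = x0
    for _ in range(n):
        x = F(x)
    return (x - x0) / n

# For the rigid rotation F(x) = x + N the estimate equals N exactly.
```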
Properties
The rotation number is invariant under topological conjugacy, and even monotone topological semiconjugacy: if f and g are two homeomorphisms of the circle and
$h\circ f=g\circ h$
for a monotone continuous map h of the circle into itself (not necessarily a homeomorphism), then f and g have the same rotation number. This invariance was used by Poincaré and Arnaud Denjoy in the topological classification of homeomorphisms of the circle. There are two distinct possibilities.
• The rotation number of f is a rational number p/q (in lowest terms). Then f has a periodic orbit, every periodic orbit has period q, and the order of the points on each such orbit coincides with the order of the points for a rotation by p/q. Moreover, every forward orbit of f converges to a periodic orbit. The same is true for backward orbits, corresponding to iterations of f –1, but the limiting periodic orbits in forward and backward directions may be different.
• The rotation number of f is an irrational number θ. Then f has no periodic orbits: a point of period q would satisfy $F^{q}(x)=x+p$ for some integer p, forcing the rotation number to be the rational number p/q. There are two subcases.
1. There exists a dense orbit. In this case f is topologically conjugate to the irrational rotation by the angle θ and all orbits are dense. Denjoy proved that this possibility is always realized when f is twice continuously differentiable.
2. There exists a Cantor set C invariant under f. Then C is a unique minimal set and the orbits of all points both in forward and backward direction converge to C. In this case, f is semiconjugate to the irrational rotation by θ, and the semiconjugating map h of degree 1 is constant on components of the complement of C.
The rotation number is continuous when viewed as a map from the group of homeomorphisms (with C0 topology) of the circle into the circle.
See also
• Circle map
• Denjoy diffeomorphism
• Poincaré section
• Poincaré recurrence
• Poincaré–Bendixson theorem
References
• Herman, Michael Robert (December 1979). "Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations" [On the Differentiable Conjugation of Diffeomorphisms of the Circle to Rotations]. Publications Mathématiques de l'IHÉS (in French). 49: 5–233. doi:10.1007/BF02684798. S2CID 118356096.
• Sebastian van Strien, Rotation Numbers and Poincaré's Theorem (2001)
External links
• Michał Misiurewicz (ed.). "Rotation theory". Scholarpedia.
• Weisstein, Eric W. "Map Winding Number". From MathWorld--A Wolfram Web Resource.
Rotation of axes in two dimensions
In mathematics, a rotation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x′y′-Cartesian coordinate system in which the origin is kept fixed and the x′ and y′ axes are obtained by rotating the x and y axes counterclockwise through an angle $\theta $. A point P has coordinates (x, y) with respect to the original system and coordinates (x′, y′) with respect to the new system.[1] In the new coordinate system, the point P will appear to have been rotated in the opposite direction, that is, clockwise through the angle $\theta $. A rotation of axes in more than two dimensions is defined similarly.[2][3] A rotation of axes is a linear map[4][5] and a rigid transformation.
Motivation
Coordinate systems are essential for studying the equations of curves using the methods of analytic geometry. To use the method of coordinate geometry, the axes are placed at a convenient position with respect to the curve under consideration. For example, to study the equations of ellipses and hyperbolas, the foci are usually located on one of the axes and are situated symmetrically with respect to the origin. If the curve (hyperbola, parabola, ellipse, etc.) is not situated conveniently with respect to the axes, the coordinate system should be changed to place the curve at a convenient and familiar location and orientation. The process of making this change is called a transformation of coordinates.[6]
The solutions to many problems can be simplified by rotating the coordinate axes to obtain new axes through the same origin.
Derivation
The equations defining the transformation in two dimensions, which rotates the xy axes counterclockwise through an angle $\theta $ into the x′y′ axes, are derived as follows.
In the xy system, let the point P have polar coordinates $(r,\alpha )$. Then, in the x′y′ system, P will have polar coordinates $(r,\alpha -\theta )$.
Using trigonometric functions, we have
$x=r\cos \alpha $
(1)
$y=r\sin \alpha $
(2)
and using the standard trigonometric formulae for differences, we have
$x'=r\cos(\alpha -\theta )=r\cos \alpha \cos \theta +r\sin \alpha \sin \theta $
(3)
$y'=r\sin(\alpha -\theta )=r\sin \alpha \cos \theta -r\cos \alpha \sin \theta .$
(4)
Substituting equations (1) and (2) into equations (3) and (4), we obtain[7]
$x'=x\cos \theta +y\sin \theta $
(5)
$y'=-x\sin \theta +y\cos \theta .$
(6)
Equations (5) and (6) can be represented in matrix form as
${\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}},$
which is the standard matrix equation of a rotation of axes in two dimensions.[8]
The inverse transformation is[9]
$x=x'\cos \theta -y'\sin \theta $
(7)
$y=x'\sin \theta +y'\cos \theta ,$
(8)
or
${\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}{\begin{bmatrix}x'\\y'\end{bmatrix}}.$
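The forward and inverse transformations are straightforward to implement. A short Python sketch of equations (5)–(8) (the function names are illustrative):

```python
import math

def rotate_axes(x, y, theta):
    """New coordinates (x', y') of the point (x, y) after the axes
    are rotated counterclockwise through theta (equations (5)-(6))."""
    c, s = math.cos(theta), math.sin(theta)
    return (x * c + y * s, -x * s + y * c)

def rotate_axes_inverse(xp, yp, theta):
    """Recover (x, y) from (x', y') via equations (7)-(8)."""
    c, s = math.cos(theta), math.sin(theta)
    return (xp * c - yp * s, xp * s + yp * c)
```

Applying `rotate_axes` to the point (√3, 1) with θ = π/6 reproduces Example 1 below.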
Examples in two dimensions
Example 1
Find the coordinates of the point $P_{1}=(x,y)=({\sqrt {3}},1)$ after the axes have been rotated through the angle $\theta _{1}=\pi /6$, or 30°.
Solution:
$x'={\sqrt {3}}\cos(\pi /6)+1\sin(\pi /6)=({\sqrt {3}})({\sqrt {3}}/2)+(1)(1/2)=2$
$y'=1\cos(\pi /6)-{\sqrt {3}}\sin(\pi /6)=(1)({\sqrt {3}}/2)-({\sqrt {3}})(1/2)=0.$
The axes have been rotated counterclockwise through an angle of $\theta _{1}=\pi /6$ and the new coordinates are $P_{1}=(x',y')=(2,0)$. Note that the point appears to have been rotated clockwise through $\pi /6$ with respect to fixed axes so it now coincides with the (new) x′ axis.
Example 2
Find the coordinates of the point $P_{2}=(x,y)=(7,7)$ after the axes have been rotated clockwise 90°, that is, through the angle $\theta _{2}=-\pi /2$, or −90°.
Solution:
${\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}\cos(-\pi /2)&\sin(-\pi /2)\\-\sin(-\pi /2)&\cos(-\pi /2)\end{bmatrix}}{\begin{bmatrix}7\\7\end{bmatrix}}={\begin{bmatrix}0&-1\\1&0\end{bmatrix}}{\begin{bmatrix}7\\7\end{bmatrix}}={\begin{bmatrix}-7\\7\end{bmatrix}}.$
The axes have been rotated through an angle of $\theta _{2}=-\pi /2$, which is in the clockwise direction and the new coordinates are $P_{2}=(x',y')=(-7,7)$. Again, note that the point appears to have been rotated counterclockwise through $\pi /2$ with respect to fixed axes.
Rotation of conic sections
Main article: Conic section
The most general equation of the second degree has the form
$Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0$
($A,B,C$ not all zero).[10]
(9)
Through a change of coordinates (a rotation of axes and a translation of axes), equation (9) can be put into a standard form, which is usually easier to work with. It is always possible to rotate the coordinates at a specific angle so as to eliminate the x′y′ term. Substituting equations (7) and (8) into equation (9), we obtain
$A'x'^{2}+B'x'y'+C'y'^{2}+D'x'+E'y'+F'=0,$
(10)
where
• $A'=A\cos ^{2}\theta +B\sin \theta \cos \theta +C\sin ^{2}\theta ,$
• $B'=2(C-A)\sin \theta \cos \theta +B(\cos ^{2}\theta -\sin ^{2}\theta ),$
• $C'=A\sin ^{2}\theta -B\sin \theta \cos \theta +C\cos ^{2}\theta ,$
• $D'=D\cos \theta +E\sin \theta ,$
• $E'=-D\sin \theta +E\cos \theta ,$
• $F'=F.$
(11)
If $\theta $ is selected so that $\cot 2\theta =(A-C)/B$ we will have $B'=0$ and the x′y′ term in equation (10) will vanish.[11]
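The choice of θ can be verified numerically: taking θ = ½ arctan(B/(A − C)) (via atan2, so that A = C is handled) satisfies cot 2θ = (A − C)/B and makes B′ vanish. A Python sketch of equations (11) (the function names are mine):

```python
import math

def rotation_angle(A, B, C):
    """Angle theta with cot(2 theta) = (A - C)/B, eliminating the x'y' term."""
    return 0.5 * math.atan2(B, A - C)

def rotated_coefficients(A, B, C, theta):
    """Quadratic coefficients A', B', C' after axis rotation, equations (11)."""
    c, s = math.cos(theta), math.sin(theta)
    Ap = A * c * c + B * s * c + C * s * s
    Bp = 2 * (C - A) * s * c + B * (c * c - s * s)
    Cp = A * s * s - B * s * c + C * c * c
    return Ap, Bp, Cp
```

For instance, the hyperbola xy-form A = 1, B = 4, C = 1 gives θ = π/4 and rotated coefficients (3, 0, −1).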
When a problem arises with B, D and E all different from zero, they can be eliminated by performing in succession a rotation (eliminating B) and a translation (eliminating the D and E terms).[12]
Identifying rotated conic sections
A non-degenerate conic section given by equation (9) can be identified by evaluating $B^{2}-4AC$. The conic section is:[13]
• an ellipse or a circle, if $B^{2}-4AC<0$;
• a parabola, if $B^{2}-4AC=0$;
• a hyperbola, if $B^{2}-4AC>0$.
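The discriminant test above is a one-line computation. An illustrative Python sketch (the function name is mine; a non-degenerate conic is assumed):

```python
def classify_conic(A, B, C):
    """Classify a non-degenerate conic by the discriminant B^2 - 4AC."""
    d = B * B - 4 * A * C
    if d < 0:
        return "ellipse"   # includes the circle
    if d == 0:
        return "parabola"
    return "hyperbola"
```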
Generalization to several dimensions
Suppose a rectangular xyz-coordinate system is rotated around its z axis counterclockwise (looking down the positive z axis) through an angle $\theta $, so that the positive x axis rotates toward the positive y axis. The z coordinate of each point is unchanged and the x and y coordinates transform as above. The old coordinates (x, y, z) of a point Q are related to its new coordinates (x′, y′, z′) by[14]
${\begin{bmatrix}x'\\y'\\z'\end{bmatrix}}={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}{\begin{bmatrix}x\\y\\z\end{bmatrix}}.$
Generalizing to any finite number of dimensions, a rotation matrix $A$ is an orthogonal matrix that differs from the identity matrix in at most four elements. These four elements are of the form
$a_{ii}=a_{jj}=\cos \theta $ and $a_{ij}=-a_{ji}=\sin \theta ,$
for some $\theta $ and some i ≠ j.[15]
Example in several dimensions
Example 3
Find the coordinates of the point $P_{3}=(w,x,y,z)=(1,1,1,1)$ after the positive w axis has been rotated through the angle $\theta _{3}=\pi /12$, or 15°, into the positive z axis.
Solution:
${\begin{aligned}{\begin{bmatrix}w'\\x'\\y'\\z'\end{bmatrix}}&={\begin{bmatrix}\cos(\pi /12)&0&0&\sin(\pi /12)\\0&1&0&0\\0&0&1&0\\-\sin(\pi /12)&0&0&\cos(\pi /12)\end{bmatrix}}{\begin{bmatrix}w\\x\\y\\z\end{bmatrix}}\\[4pt]&\approx {\begin{bmatrix}0.96593&0.0&0.0&0.25882\\0.0&1.0&0.0&0.0\\0.0&0.0&1.0&0.0\\-0.25882&0.0&0.0&0.96593\end{bmatrix}}{\begin{bmatrix}1.0\\1.0\\1.0\\1.0\end{bmatrix}}={\begin{bmatrix}1.22475\\1.00000\\1.00000\\0.70711\end{bmatrix}}.\end{aligned}}$
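The matrix of Example 3 is an instance of the general form above, acting in the (w, z) plane. A short Python sketch building such a matrix for any pair of axes i ≠ j (function names are mine):

```python
import math

def rotation_of_axes_matrix(n, i, j, theta):
    """n-by-n axis-rotation matrix: the identity except
    a_ii = a_jj = cos(theta) and a_ij = -a_ji = sin(theta), for i < j."""
    M = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    M[i][i] = M[j][j] = math.cos(theta)
    M[i][j] = math.sin(theta)
    M[j][i] = -math.sin(theta)
    return M

def apply(M, v):
    """Matrix-vector product giving the new coordinates of v."""
    return [sum(row[c] * v[c] for c in range(len(v))) for row in M]
```

With n = 4, i = 0 (the w axis), j = 3 (the z axis) and θ = π/12, `apply` reproduces the numbers of Example 3.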
See also
• Rotation
• Rotation (mathematics)
Notes
1. Protter & Morrey (1970, p. 320)
2. Anton (1987, p. 231)
3. Burden & Faires (1993, p. 532)
4. Anton (1987, p. 247)
5. Beauregard & Fraleigh (1973, p. 266)
6. Protter & Morrey (1970, pp. 314–315)
7. Protter & Morrey (1970, pp. 320–321)
8. Anton (1987, p. 230)
9. Protter & Morrey (1970, p. 320)
10. Protter & Morrey (1970, p. 316)
11. Protter & Morrey (1970, pp. 321–322)
12. Protter & Morrey (1970, p. 324)
13. Protter & Morrey (1970, p. 326)
14. Anton (1987, p. 231)
15. Burden & Faires (1993, p. 532)
References
• Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
• Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3
• Protter, Murray H.; Morrey, Charles B., Jr. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, LCCN 76087042{{citation}}: CS1 maint: multiple names: authors list (link)
Earth's rotation
Earth's rotation or Earth's spin is the rotation of planet Earth around its own axis, as well as changes in the orientation of the rotation axis in space. Earth rotates eastward, in prograde motion. As viewed from the northern polar star Polaris, Earth turns counterclockwise.
The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's North Magnetic Pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica.
Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that the modern day is longer by about 1.7 milliseconds than a century ago,[1] slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased by about 2.3 milliseconds per century since the 8th century BCE.[2]
Scientists reported that in 2020 Earth had started spinning faster, after consistently spinning slower than 86,400 seconds per day in the decades before. On June 29, 2022, Earth's spin was completed in 1.59 milliseconds under 24 hours, setting a new record.[3] Because of that trend, engineers worldwide are discussing a 'negative leap second' and other possible timekeeping measures.[4]
This increase in speed is thought to be due to various factors, including the complex motion of its molten core, oceans, and atmosphere, the effect of celestial bodies such as the Moon, and possibly climate change, which is causing the ice at Earth's poles to melt. The masses of ice account for the Earth's shape being that of an oblate spheroid, bulging around the equator. When these masses are reduced, the poles rebound from the loss of weight, and Earth becomes more spherical, which has the effect of bringing mass closer to its centre of gravity. Conservation of angular momentum dictates that a mass distributed more closely around its centre of gravity spins faster.[5]
History
Among the ancient Greeks, several of the Pythagorean school believed in the rotation of Earth rather than the apparent diurnal rotation of the heavens. Perhaps the first was Philolaus (470–385 BCE), though his system was complicated, including a counter-earth rotating daily about a central fire.[6]
A more conventional picture was supported by Hicetas, Heraclides and Ecphantus in the fourth century BCE who assumed that Earth rotated but did not suggest that Earth revolved about the Sun. In the third century BCE, Aristarchus of Samos suggested the Sun's central place.
However, Aristotle in the fourth century BCE criticized the ideas of Philolaus as being based on theory rather than observation. He established the idea of a sphere of fixed stars that rotated about Earth.[7] This was accepted by most of those who came after, in particular Claudius Ptolemy (2nd century CE), who thought Earth would be devastated by gales if it rotated.[8]
In 499 CE, the Indian astronomer Aryabhata suggested that the spherical Earth rotates about its axis daily, and that the apparent movement of the stars is a relative motion caused by the rotation of Earth. He provided the following analogy: "Just as a man in a boat going in one direction sees the stationary things on the bank as moving in the opposite direction, in the same way to a man at Lanka the fixed stars appear to be going westward."[9][10]
In the 10th century, some Muslim astronomers accepted that Earth rotates around its axis.[11] According to al-Biruni, al-Sijzi (d. c. 1020) invented an astrolabe called al-zūraqī based on the idea believed by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky."[12][13] The prevalence of this view is further confirmed by a reference from the 13th century which states: "According to the geometers [or engineers] (muhandisīn), the Earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the Earth and not the stars."[12] Treatises were written to discuss its possibility, either as refutations or expressing doubts about Ptolemy's arguments against it.[14] At the Maragha and Samarkand observatories, Earth's rotation was discussed by Tusi (b. 1201) and Qushji (b. 1403); the arguments and evidence they used resemble those used by Copernicus.[15]
In medieval Europe, Thomas Aquinas accepted Aristotle's view[16] and so, reluctantly, did John Buridan[17] and Nicole Oresme[18] in the fourteenth century. Not until Nicolaus Copernicus in 1543 adopted a heliocentric world system did the contemporary understanding of Earth's rotation begin to be established. Copernicus pointed out that if the movement of Earth is violent, then the movement of the stars must be very much more so. He acknowledged the contribution of the Pythagoreans and pointed to examples of relative motion. For Copernicus this was the first step in establishing the simpler pattern of planets circling a central Sun.[19]
Tycho Brahe, who produced accurate observations on which Kepler based his laws of planetary motion, used Copernicus's work as the basis of a system assuming a stationary Earth. In 1600, William Gilbert strongly supported Earth's rotation in his treatise on Earth's magnetism[20] and thereby influenced many of his contemporaries.[21]: 208 Those like Gilbert who did not openly support or reject the motion of Earth about the Sun are called "semi-Copernicans".[21]: 221 A century after Copernicus, Riccioli disputed the model of a rotating Earth due to the lack of then-observable eastward deflections in falling bodies;[22] such deflections would later be called the Coriolis effect. However, the contributions of Kepler, Galileo and Newton gathered support for the theory of the rotation of Earth.
Empirical tests
Earth's rotation implies that the Equator bulges and the geographical poles are flattened. In his Principia, Newton predicted this flattening would amount to one part in 230, and pointed to the pendulum measurements taken by Richer in 1673 as corroboration of the change in gravity,[23] but initial measurements of meridian lengths by Picard and Cassini at the end of the 17th century suggested the opposite. However, measurements by Maupertuis and the French Geodesic Mission in the 1730s established the oblateness of Earth, thus confirming the positions of both Newton and Copernicus.[24]
In Earth's rotating frame of reference, a freely moving body follows an apparent path that deviates from the one it would follow in a fixed frame of reference. Because of the Coriolis effect, falling bodies veer slightly eastward from the vertical plumb line below their point of release, and projectiles veer right in the Northern Hemisphere (and left in the Southern) from the direction in which they are shot. The Coriolis effect is mainly observable at a meteorological scale, where it is responsible for the opposite directions of cyclone rotation in the Northern and Southern hemispheres (anticlockwise and clockwise, respectively).
Hooke, following a suggestion from Newton in 1679, tried unsuccessfully to verify the predicted eastward deviation of a body dropped from a height of 8.2 meters, but definitive results were obtained later, in the late 18th and early 19th centuries, by Giovanni Battista Guglielmini in Bologna, Johann Friedrich Benzenberg in Hamburg and Ferdinand Reich in Freiberg, using taller towers and carefully released weights.[n 1] A ball dropped from a height of 158.5 m departed by 27.4 mm from the vertical compared with a calculated value of 28.1 mm.
The most celebrated test of Earth's rotation is the Foucault pendulum first built by physicist Léon Foucault in 1851, which consisted of a lead-filled brass sphere suspended 67 m from the top of the Panthéon in Paris. Because of Earth's rotation under the swinging pendulum, the pendulum's plane of oscillation appears to rotate at a rate depending on latitude. At the latitude of Paris the predicted and observed shift was about 11 degrees clockwise per hour. Foucault pendulums now swing in museums around the world.
Periods
True solar day
Earth's rotation period relative to the Sun (solar noon to solar noon) is its true solar day or apparent solar day.[26] It depends on Earth's orbital motion and is thus affected by changes in the eccentricity and inclination of Earth's orbit. Both vary over thousands of years, so the annual variation of the true solar day also varies. Generally, it is longer than the mean solar day during two periods of the year and shorter during another two.[n 2] The true solar day tends to be longer near perihelion when the Sun apparently moves along the ecliptic through a greater angle than usual, taking about 10 seconds longer to do so. Conversely, it is about 10 seconds shorter near aphelion. It is about 20 seconds longer near a solstice when the projection of the Sun's apparent motion along the ecliptic onto the celestial equator causes the Sun to move through a greater angle than usual. Conversely, near an equinox the projection onto the equator is shorter by about 20 seconds. Currently, the perihelion and solstice effects combine to lengthen the true solar day near 22 December by 30 mean solar seconds, but the solstice effect is partially cancelled by the aphelion effect near 19 June when it is only 13 seconds longer. The effects of the equinoxes shorten it near 26 March and 16 September by 18 seconds and 21 seconds, respectively.[27][28]
Mean solar day
The average of the true solar day during the course of an entire year is the mean solar day, which contains 86,400 mean solar seconds. Currently, each of these seconds is slightly longer than an SI second because Earth's mean solar day is now slightly longer than it was during the 19th century due to tidal friction. The average length of the mean solar day since the introduction of the leap second in 1972 has been about 0 to 2 ms longer than 86,400 SI seconds.[29][30][31] Random fluctuations due to core-mantle coupling have an amplitude of about 5 ms.[32][33] The mean solar second between 1750 and 1892 was chosen in 1895 by Simon Newcomb as the independent unit of time in his Tables of the Sun. These tables were used to calculate the world's ephemerides between 1900 and 1983, so this second became known as the ephemeris second. In 1967 the SI second was made equal to the ephemeris second.[34]
The apparent solar time is a measure of Earth's rotation and the difference between it and the mean solar time is known as the equation of time.
Stellar and sidereal day
Earth's rotation period relative to the International Celestial Reference Frame, called its stellar day by the International Earth Rotation and Reference Systems Service (IERS), is 86164.098903691 seconds of mean solar time (UT1) (23h 56m 4.098903691s, 0.99726966323716 mean solar days).[35][n 3] Earth's rotation period relative to the precessing mean vernal equinox, named the sidereal day, is 86164.09053083288 seconds of mean solar time (UT1) (23h 56m 4.09053083288s, 0.99726956632908 mean solar days).[35] Thus, the sidereal day is shorter than the stellar day by about 8.4 ms.[37]
Both the stellar day and the sidereal day are shorter than the mean solar day by about 3 minutes 56 seconds. This is because Earth makes one additional rotation relative to the celestial reference frame as it orbits the Sun, for a total of about 366.24 rotations per year. The mean solar day in SI seconds is available from the IERS for the periods 1623–2005[38] and 1962–2005.[39]
Recently (1999–2010) the average annual length of the mean solar day in excess of 86,400 SI seconds has varied between 0.25 ms and 1 ms, which must be added to both the stellar and sidereal days given in mean solar time above to obtain their lengths in SI seconds (see Fluctuations in the length of day).
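The quoted periods can be cross-checked against one another. A small Python sketch (the constants are copied from the text; the `hms` helper is a hypothetical convenience):

```python
STELLAR_DAY = 86164.098903691     # s of mean solar time (UT1), per IERS
SIDEREAL_DAY = 86164.09053083288  # s of mean solar time (UT1), per IERS

def hms(seconds):
    """Split a duration in seconds into (hours, minutes, seconds)."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return int(h), int(m), s
```

The stellar day splits into 23h 56m 4.0989s, the two days differ by about 8.4 ms, and the ratio 86400/SIDEREAL_DAY recovers the 1.002737909350795 sidereal days per mean solar day cited below.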
Angular speed
The angular speed of Earth's rotation in inertial space is (7.2921150 ± 0.0000001)×10^−5 radians per SI second.[35][n 4] Multiplying by (180°/π radians) × (86,400 seconds/day) yields 360.9856 °/day, indicating that Earth rotates more than 360 degrees relative to the fixed stars in one solar day. Earth's movement along its nearly circular orbit while it is rotating once around its axis requires that Earth rotate slightly more than once relative to the fixed stars before the mean Sun can pass overhead again, even though it rotates only once (360°) relative to the mean Sun.[n 5] Multiplying the value in rad/s by Earth's equatorial radius of 6,378,137 m (WGS84 ellipsoid) (factors of 2π radians needed by both cancel) yields an equatorial speed of 465.10 metres per second (1,674.4 km/h).[40] Some sources state that Earth's equatorial speed is slightly less, or 1,669.8 km/h.[41] This is obtained by dividing Earth's equatorial circumference by 24 hours. However, the use of the solar day is incorrect; it must be the sidereal day, so the corresponding time unit must be a sidereal hour. This is confirmed by multiplying by the number of sidereal days in one mean solar day, 1.002 737 909 350 795,[35] which yields the equatorial speed in mean solar hours given above of 1,674.4 km/h.
The tangential speed of Earth's rotation at a point on Earth can be approximated by multiplying the speed at the equator by the cosine of the latitude.[42] For example, the Kennedy Space Center is located at latitude 28.59° N, which yields a speed of: cos(28.59°) × 1,674.4 km/h = 1,470.2 km/h. Latitude is a placement consideration for spaceports.
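The angular-speed and tangential-speed computations above can be reproduced directly. A short Python sketch (the constants are the values quoted in the text; function names are mine):

```python
import math

OMEGA = 7.2921150e-5      # rad/s, Earth's rotation rate in inertial space
EQ_RADIUS_M = 6378137.0   # m, WGS84 equatorial radius

def equatorial_speed_kmh():
    """Surface speed at the equator: omega * radius, converted to km/h."""
    return OMEGA * EQ_RADIUS_M * 3.6

def tangential_speed_kmh(latitude_deg):
    """Approximate surface rotation speed at a given latitude."""
    return equatorial_speed_kmh() * math.cos(math.radians(latitude_deg))
```

This yields about 465.10 m/s (1,674.4 km/h) at the equator and about 1,470.2 km/h at the Kennedy Space Center's latitude of 28.59° N.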
The peak of the Cayambe volcano is the point of Earth's surface farthest from its axis; thus, it rotates the fastest as Earth spins.[43]
Changes
In rotational axis
Earth's rotation axis moves with respect to the fixed stars (inertial space); the components of this motion are precession and nutation. It also moves with respect to Earth's crust; this is called polar motion.
Precession is a rotation of Earth's rotation axis, caused primarily by external torques from the gravity of the Sun, Moon and other bodies. The polar motion is primarily due to free core nutation and the Chandler wobble.
Tidal interactions
Over millions of years, Earth's rotation has been slowed significantly by tidal acceleration through gravitational interactions with the Moon. Thus angular momentum is slowly transferred to the Moon at a rate proportional to $r^{-6}$, where $r$ is the orbital radius of the Moon. This process has gradually increased the length of the day to its current value, and resulted in the Moon being tidally locked with Earth.
This gradual rotational deceleration is empirically documented by estimates of day lengths obtained from observations of tidal rhythmites and stromatolites; a compilation of these measurements[44] found that the length of the day has increased steadily from about 21 hours at 600 Myr ago[45] to the current 24-hour value. By counting the microscopic laminae that form at higher tides, tidal frequencies (and thus day lengths) can be estimated, much like counting tree rings, though these estimates can be increasingly unreliable at older ages.[46]
Resonant stabilization
The current rate of tidal deceleration is anomalously high, implying Earth's rotational velocity must have decreased more slowly in the past. Empirical data[44] tentatively shows a sharp increase in rotational deceleration about 600 Myr ago. Some models suggest that Earth maintained a constant day length of 21 hours throughout much of the Precambrian.[45] This day length corresponds to the semidiurnal resonant period of the thermally driven atmospheric tide; at this day length, the decelerative lunar torque could have been canceled by an accelerative torque from the atmospheric tide, resulting in no net torque and a constant rotational period. This stabilizing effect could have been broken by a sudden change in global temperature. Recent computational simulations support this hypothesis and suggest the Marinoan or Sturtian glaciations broke this stable configuration about 600 Myr ago; the simulated results agree quite closely with existing paleorotational data.[47]
Global events
Some recent large-scale events, such as the 2004 Indian Ocean earthquake, have caused the length of a day to shorten by 3 microseconds by reducing Earth's moment of inertia.[48] Post-glacial rebound, ongoing since the last ice age, is also changing the distribution of Earth's mass, thus affecting the moment of inertia of Earth and, by the conservation of angular momentum, Earth's rotation period.[49]
The length of the day can also be influenced by man-made structures. For example, NASA scientists calculated that the water stored in the Three Gorges Dam has increased the length of Earth's day by 0.06 microseconds due to the shift in mass.[50]
Measurement
The primary monitoring of Earth's rotation is performed by very-long-baseline interferometry coordinated with the Global Positioning System, satellite laser ranging, and other satellite geodesy techniques. This provides an absolute reference for the determination of universal time, precession, and nutation.[51] The absolute value of Earth rotation including UT1 and nutation can be determined using space geodetic observations, such as very-long-baseline interferometry and lunar laser ranging, whereas their derivatives, denoted as length-of-day excess and nutation rates, can be derived from satellite observations, such as GPS, GLONASS, Galileo[52] and satellite laser ranging to geodetic satellites.[53]
Ancient observations
There are recorded observations of solar and lunar eclipses by Babylonian and Chinese astronomers beginning in the 8th century BCE, as well as from the medieval Islamic world[54] and elsewhere. These observations can be used to determine changes in Earth's rotation over the last 27 centuries, since the length of the day is a critical parameter in the calculation of the place and time of eclipses. A change in day length of milliseconds per century shows up as a change of hours and thousands of kilometers in eclipse observations. The ancient data are consistent with a shorter day, meaning Earth was turning faster throughout the past.[55][56]
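To see why milliseconds per century matter, the cumulative clock error can be sketched by integrating an assumed linear lengthening of the day (~1.7 ms per century, a commonly quoted figure treated here as an assumption) over the 27-century eclipse record:

```python
# Cumulative drift from a linearly growing day length: after t centuries the
# excess day length is rate*t, so the accumulated error grows quadratically.
rate_per_century = 1.7e-3        # seconds of extra day length per century (assumed)
days_per_century = 36525
centuries = 27                   # span of the eclipse record

# drift = rate * days_per_century * t^2 / 2  (integral of rate*t over t)
drift_seconds = rate_per_century * days_per_century * centuries**2 / 2
drift_hours = drift_seconds / 3600

# Earth turns 15 degrees per hour, so the eclipse track shifts in longitude:
shift_deg = 15 * drift_hours
shift_km = shift_deg * 111       # ~111 km per degree of longitude at the equator

print(f"accumulated drift ~ {drift_hours:.1f} h, track shift ~ {shift_km:.0f} km")
```

The result is a drift of several hours and a track shift of thousands of kilometers, matching the magnitudes stated above.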
Cyclic variability
Around every 25–30 years Earth's rotation slows temporarily by a few milliseconds per day, usually lasting around five years. 2017 was the fourth consecutive year in which Earth's rotation had slowed. The cause of this variability has not yet been determined.[57]
Origin
Earth's original rotation was a vestige of the original angular momentum of the cloud of dust, rocks and gas that coalesced to form the Solar System. This primordial cloud was composed of hydrogen and helium produced in the Big Bang, as well as heavier elements ejected by supernovas. As this interstellar dust is heterogeneous, any asymmetry during gravitational accretion resulted in the angular momentum of the eventual planet.[58]
However, if the giant-impact hypothesis for the origin of the Moon is correct, this primordial rotation rate would have been reset by the Theia impact 4.5 billion years ago. Regardless of the speed and tilt of Earth's rotation before the impact, it would have experienced a day some five hours long after the impact.[59] Tidal effects would then have slowed this rate to its modern value.
See also
• Allais effect
• Diurnal cycle
• Earth's orbit
• Earth orientation parameters
• Formation and evolution of the Solar System
• Geodesic (in mathematics)
• Geodesics in general relativity
• Geodesy
• History of Earth
• History of geodesy
• Inner core super-rotation
• List of important publications in geology
• Nychthemeron
• Rossby wave
• Spherical Earth
• World Geodetic System
Notes
1. See Fallexperimente zum Nachweis der Erdrotation (German Wikipedia article).
2. When Earth's eccentricity exceeds 0.047 and perihelion is at an appropriate equinox or solstice, only one period with one peak balances another period that has two peaks.[27]
3. Aoki, the ultimate source of these figures, uses the term "seconds of UT1" instead of "seconds of mean solar time".[36]
4. It can be established that SI seconds apply to this value by following the citation in "USEFUL CONSTANTS" to E. Groten "Parameters of Common Relevance of Astronomy, Geodesy, and Geodynamics" which states units are SI units, except for an instance not relevant to this value.
5. In astronomy, unlike geometry, 360° means returning to the same point in some cyclical time scale, either one mean solar day or one sidereal day for rotation on Earth's axis, or one sidereal year or one mean tropical year or even one mean Julian year containing exactly 365.25 days for revolution around the Sun.
References
1. Dennis D. McCarthy; Kenneth P. Seidelmann (18 September 2009). Time: From Earth Rotation to Atomic Physics. John Wiley & Sons. p. 232. ISBN 978-3-527-62795-0.
2. Stephenson, F. Richard (2003). "Historical eclipses and Earth's rotation". Astronomy & Geophysics. 44 (2): 2.22–2.27. Bibcode:2003A&G....44b..22S. doi:10.1046/j.1468-4004.2003.44222.x.
3. Robert Lea (3 August 2022). "Earth sets record for the shortest day". Space.com. Retrieved 8 August 2022.
4. Knapton, Sarah (4 January 2021). "The Earth is spinning faster now than at any time in the past half century". The Telegraph. Retrieved 11 February 2021.
5. Pappas, Stephanie (25 September 2018). "Humans Contribute to Earth's Wobble, Scientists Say". Scientific American. Retrieved 12 August 2022.
6. Pseudo-Plutarchus, Placita philosophorum (874d-911c), Stephanus page 896, section A, line 5 Ἡρακλείδης ὁ Ποντικὸς καὶ Ἔκφαντος ὁ Πυθαγόρειος κινοῦσι μὲν τὴν γῆν, οὐ μήν γε μεταβατικῶς, ἀλλὰ τρεπτικῶς τροχοῦ δίκην ἐνηξονισμένην, ἀπὸ δυσμῶν ἐπ' ἀνατολὰς περὶ τὸ ἴδιον αὐτῆς κέντρον; Plutarchus Biogr., Phil., Numa, Chapter 11, section 1, line 5, Νομᾶς δὲ λέγεται καὶ τὸ τῆς Ἑστίας ἱερὸν ἐγκύκλιον περιβαλέσθαι τῷ ἀσβέστῳ πυρὶ φρουράν, ἀπομιμούμενος οὐ τὸ σχῆμα τῆς γῆς ὡς Ἑστίας οὔσης, ἀλλὰ τοῦ σύμπαντος κόσμου, οὗ μέσον οἱ Πυθαγορικοὶ τὸ πῦρ ἱδρῦσθαι νομίζουσι, καὶ τοῦτο Ἑστίαν καλοῦσι καὶ μονάδα· τὴν δὲ γῆν οὔτε ἀκίνητον οὔτε ἐν μέσῳ τῆς περιφορᾶς οὖσαν, ἀλλὰ κύκλῳ περὶ τὸ πῦρ αἰωρουμένην οὐ τῶν τιμιωτάτων οὐδὲ τῶν πρώτων τοῦ κόσμου μορίων ὑπάρχειν. Burch, George Bosworth (1954). "The Counter-Earth". Osiris. 11: 267–294. doi:10.1086/368583. JSTOR 301675. S2CID 144330867.
7. Aristotle. Of the Heavens. Book II, Ch 13. 1.
8. Ptolemy. Almagest Book I, Chapter 8.
9. "Archived copy" (PDF). Archived from the original (PDF) on 13 December 2013. Retrieved 8 December 2013.{{cite web}}: CS1 maint: archived copy as title (link)
10. Kim Plofker (2009). Mathematics in India. Princeton University Press. p. 71. ISBN 978-0-691-12067-6.
11. Alessandro Bausani (1973). "Cosmology and Religion in Islam". Scientia/Rivista di Scienza. 108 (67): 762.
12. Young, M. J. L., ed. (2 November 2006). Religion, Learning and Science in the 'Abbasid Period. Cambridge University Press. p. 413. ISBN 9780521028875.
13. Nasr, Seyyed Hossein (1 January 1993). An Introduction to Islamic Cosmological Doctrines. SUNY Press. p. 135. ISBN 9781438414195.
14. Ragep, Sally P. (2007). "Ibn Sīnā: Abū ʿAlī al‐Ḥusayn ibn ʿAbdallāh ibn Sīnā". In Thomas Hockey; et al. (eds.). The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 570–2. ISBN 978-0-387-31022-0. (PDF version)
15. Ragep, F. Jamil (2001a), "Tusi and Copernicus: The Earth's Motion in Context", Science in Context, 14 (1–2): 145–163, doi:10.1017/s0269889701000060, S2CID 145372613
16. Aquinas, Thomas. Commentaria in libros Aristotelis De caelo et Mundo. Lib II, cap XIV. trans in Grant, Edward, ed. (1974). A Source Book in Medieval Science. Harvard University Press. pages 496–500
17. Buridan, John (1942). Quaestiones super libris quattuo De Caelo et mundo. pp. 226–232. in Grant 1974, pp. 500–503
18. Oresme, Nicole. Le livre du ciel et du monde. pp. 519–539. in Grant 1974, pp. 503–510
19. Copernicus, Nicolas. On the Revolutions of the Heavenly Spheres. Book I, Chap 5–8.
20. Gilbert, William (1893). De Magnete, On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth. New York, J. Wiley & sons. pp. 313–347.
21. Russell, John L (1972). "Copernican System in Great Britain". In J. Dobrzycki (ed.). The Reception of Copernicus' Heliocentric Theory. ISBN 9789027703118.
22. Almagestum novum, chapter nine, cited in Graney, Christopher M. (2012). "126 arguments concerning the motion of the earth. GIOVANNI BATTISTA RICCIOLI in his 1651 ALMAGESTUM NOVUM". Journal for the History of Astronomy. 43: 215–226. arXiv:1103.2057.
23. Newton, Isaac (1846). Newton's Principia. Translated by A. Motte. New-York : Published by Daniel Adee. p. 412.
24. Shank, J. B. (2008). The Newton Wars and the Beginning of the French Enlightenment. University of Chicago Press. pp. 324, 355. ISBN 9780226749471.
25. "Starry Spin-up". Retrieved 24 August 2015.
26. "What Is Solar Noon?". timeanddate.com. Retrieved 15 July 2022.
27. Jean Meeus; J. M. A. Danby (January 1997). Mathematical Astronomy Morsels. Willmann-Bell. pp. 345–346. ISBN 978-0-943396-51-4.
28. Ricci, Pierpaolo. "pierpaoloricci.it/dati/giorno solare vero VERSIONE EN". Pierpaoloricci.it. Retrieved 22 September 2018.
29. "INTERNATIONAL EARTH ROTATION AND REFERENCE SYSTEMS SERVICE : EARTH ORIENTATION PARAMETERS : EOP (IERS) 05 C04". Hpiers.obspm.fr. Retrieved 22 September 2018.
30. "Physical basis of leap seconds" (PDF). Iopscience.iop.org. Retrieved 22 September 2018.
31. Leap seconds Archived 12 March 2015 at the Wayback Machine
32. "Prediction of Universal Time and LOD Variations" (PDF). Ien.it. Retrieved 22 September 2018.
33. R. Hide et al., "Topographic core-mantle coupling and fluctuations in the Earth's rotation" 1993.
34. Leap seconds by USNO Archived 12 March 2015 at the Wayback Machine
35. "USEFUL CONSTANTS". Hpiers.obspm.fr. Retrieved 22 September 2018.
36. Aoki, et al., "The new definition of Universal Time", Astronomy and Astrophysics 105 (1982) 359–361.
37. Seidelmann, P. Kenneth, ed. (1992). Explanatory Supplement to the Astronomical Almanac. Mill Valley, California: University Science Books. p. 48. ISBN 978-0-935702-68-2.
38. IERS Excess of the duration of the day to 86,400s … since 1623 Archived 3 October 2008 at the Wayback Machine Graph at end.
39. "Excess to 86400s of the duration day, 1995–1997". 13 August 2007. Archived from the original on 13 August 2007. Retrieved 22 September 2018.
40. Arthur N. Cox, ed., Allen's Astrophysical Quantities p.244.
41. Michael E. Bakich, The Cambridge planetary handbook, p.50.
42. Butterworth & Palmer. "Speed of the turning of the Earth". Ask an Astrophysicist. NASA Goddard Spaceflight Center.
43. Klenke, Paul. "Distance to the Center of the Earth". Summit Post. Retrieved 4 July 2018.
44. Williams, George E. (1 February 2000). "Geological constraints on the Precambrian history of Earth's rotation and the Moon's orbit". Reviews of Geophysics. 38 (1): 37–59. Bibcode:2000RvGeo..38...37W. doi:10.1029/1999RG900016. ISSN 1944-9208. S2CID 51948507.
45. Zahnle, K.; Walker, J. C. (1 January 1987). "A constant daylength during the Precambrian era?". Precambrian Research. 37 (2): 95–105. Bibcode:1987PreR...37...95Z. CiteSeerX 10.1.1.1020.8947. doi:10.1016/0301-9268(87)90073-8. ISSN 0301-9268. PMID 11542096.
46. Scrutton, C. T. (1 January 1978). "Periodic Growth Features in Fossil Organisms and the Length of the Day and Month". In Brosche, Professor Dr Peter; Sündermann, Professor Dr Jürgen (eds.). Tidal Friction and the Earth's Rotation. Springer Berlin Heidelberg. pp. 154–196. doi:10.1007/978-3-642-67097-8_12. ISBN 9783540090465.
47. Bartlett, Benjamin C.; Stevenson, David J. (1 January 2016). "Analysis of a Precambrian resonance-stabilized day length". Geophysical Research Letters. 43 (11): 5716–5724. arXiv:1502.01421. Bibcode:2016GeoRL..43.5716B. doi:10.1002/2016GL068912. ISSN 1944-8007. S2CID 36308735.
48. Sumatran earthquake sped up Earth's rotation, Nature, 30 December 2004.
49. Wu, P.; Peltier, W.R. (1984). "Pleistocene deglaciation and the earth's rotation: a new analysis". Geophysical Journal of the Royal Astronomical Society. 76 (3): 753–792. Bibcode:1984GeoJ...76..753W. doi:10.1111/j.1365-246X.1984.tb01920.x.
50. "NASA Details Earthquake Effects on the Earth". NASA/JPL. Retrieved 22 March 2019.
51. "Permanent monitoring". Hpiers.obspm.fr. Retrieved 22 September 2018.
52. Zajdel, Radosław; Sośnica, Krzysztof; Bury, Grzegorz; Dach, Rolf; Prange, Lars (July 2020). "System-specific systematic errors in earth rotation parameters derived from GPS, GLONASS, and Galileo". GPS Solutions. 24 (3): 74. doi:10.1007/s10291-020-00989-w.
53. Sośnica, K.; Bury, G.; Zajdel, R. (16 March 2018). "Contribution of Multi‐GNSS Constellation to SLR‐Derived Terrestrial Reference Frame". Geophysical Research Letters. 45 (5): 2339–2348. Bibcode:2018GeoRL..45.2339S. doi:10.1002/2017GL076850. S2CID 134160047.
54. "Solar and lunar eclipses recorded in medieval Arab chronicles", Historical Eclipses and Earth's Rotation, Cambridge University Press, pp. 431–455, 5 June 1997, doi:10.1017/cbo9780511525186.012, ISBN 9780521461948, retrieved 15 July 2022
55. Sid Perkins (6 December 2016). "Ancient eclipses show Earth's rotation is slowing". Science. doi:10.1126/science.aal0469.
56. FR Stephenson; LV Morrison; CY Hohonkerk (7 December 2016). "Measurement of the Earth's rotation: 720 BC to AD 2015". Proceedings of the Royal Society A. 472 (2196): 20160404. Bibcode:2016RSPSA.47260404S. doi:10.1098/rspa.2016.0404. PMC 5247521. PMID 28119545.
57. Nace, Trevor. "Earth's Rotation Is Mysteriously Slowing Down: Experts Predict Uptick In 2018 Earthquakes". Forbes. Retrieved 18 October 2019.
58. "Why do planets rotate?". Ask an Astronomer.
59. Stevenson, D. J. (1987). "Origin of the moon–The collision hypothesis". Annual Review of Earth and Planetary Sciences. 15 (1): 271–315. Bibcode:1987AREPS..15..271S. doi:10.1146/annurev.ea.15.050187.001415.
External links
• USNO Earth Orientation new site, being populated
• USNO IERS old site, to be abandoned
• IERS Earth Orientation Center: Earth rotation data and interactive analysis
• International Earth Rotation and Reference Systems Service (IERS)
• If the Earth's rotation period is less than 24 hours, why don't our clocks fall out of sync with the Sun?
|
Wikipedia
|
Rotation operator (quantum mechanics)
This article concerns the rotation operator, as it appears in quantum mechanics.
Quantum mechanical rotations
With every physical rotation $R$, we postulate a quantum mechanical rotation operator $D(R)$ which rotates quantum mechanical states.
$|\alpha \rangle _{R}=D(R)|\alpha \rangle $
In terms of the generators of rotation,
$D(\mathbf {\hat {n}} ,\phi )=\exp \left(-i\phi {\frac {\mathbf {\hat {n}} \cdot \mathbf {J} }{\hbar }}\right),$
where $\mathbf {\hat {n}} $ is rotation axis, $\mathbf {J} $ is angular momentum, and $\hbar $ is the reduced Planck constant.
The translation operator
The rotation operator $\operatorname {R} (z,\theta )$, with the first argument $z$ indicating the rotation axis and the second $\theta $ the rotation angle, can be built from the translation operator $\operatorname {T} (a)$ for infinitesimal rotations, as explained below. For this reason, it is first shown how the translation operator acts on a particle at position x (the particle is then in the state $|x\rangle $ according to quantum mechanics).
Translation of the particle at position $x$ to position $x+a$: $\operatorname {T} (a)|x\rangle =|x+a\rangle $
Because a translation of 0 does not change the position of the particle, we have (with 1 meaning the identity operator, which does nothing):
$\operatorname {T} (0)=1$
$\operatorname {T} (a)\operatorname {T} (da)|x\rangle =\operatorname {T} (a)|x+da\rangle =|x+a+da\rangle =\operatorname {T} (a+da)|x\rangle \Rightarrow \operatorname {T} (a)\operatorname {T} (da)=\operatorname {T} (a+da)$
A Taylor expansion gives:
$\operatorname {T} (da)=\operatorname {T} (0)+{\frac {d\operatorname {T} (0)}{da}}da+\cdots =1-{\frac {i}{\hbar }}p_{x}da$
with
$p_{x}=i\hbar {\frac {d\operatorname {T} (0)}{da}}$
From that follows:
$\operatorname {T} (a+da)=\operatorname {T} (a)\operatorname {T} (da)=\operatorname {T} (a)\left(1-{\frac {i}{\hbar }}p_{x}da\right)\Rightarrow {\frac {\operatorname {T} (a+da)-\operatorname {T} (a)}{da}}={\frac {d\operatorname {T} }{da}}=-{\frac {i}{\hbar }}p_{x}\operatorname {T} (a)$
This is a differential equation with the solution
$\operatorname {T} (a)=\exp \left(-{\frac {i}{\hbar }}p_{x}a\right).$
Additionally, suppose a Hamiltonian $H$ is independent of the $x$ position. Because the translation operator can be written in terms of $p_{x}$, and $[p_{x},H]=0$, we know that $[H,\operatorname {T} (a)]=0.$ This result means that linear momentum for the system is conserved.
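A numerical sketch can make $\operatorname{T}(a) = \exp(-ip_x a/\hbar)$ concrete: on a periodic grid, multiplying by $e^{-ika}$ in momentum space (applied via the FFT, with $\hbar = 1$) shifts a wave packet by $a$. The grid sizes and the Gaussian packet below are illustrative choices:

```python
import numpy as np

N, L = 512, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)   # momentum grid (hbar = 1)

psi = np.exp(-(x + 5.0)**2)                # Gaussian centered at x = -5

a = 5.0                                    # translate by +5
# T(a) acts as multiplication by exp(-i k a) in momentum space:
psi_shifted = np.fft.ifft(np.exp(-1j * k * a) * np.fft.fft(psi))

# The packet should now be centered at x = 0
center = x[np.argmax(np.abs(psi_shifted))]
print(f"new center ~ {center:.2f}")        # ~ 0.0
```

Because $\operatorname{T}(a)$ is unitary, the norm of the state is preserved by the shift.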
In relation to the orbital angular momentum
Classically, the angular momentum is $\mathbf {L} =\mathbf {r} \times \mathbf {p} $. This carries over to quantum mechanics, with $\mathbf {r} $ and $\mathbf {p} $ regarded as operators. Classically, an infinitesimal rotation by the angle $dt$ of the vector $\mathbf {r} =(x,y,z)$ about the $z$-axis to $\mathbf {r} '=(x',y',z)$, leaving $z$ unchanged, can be expressed by the following infinitesimal translations (using the Taylor approximation):
${\begin{aligned}x'&=r\cos(t+dt)=x-y\,dt+\cdots \\y'&=r\sin(t+dt)=y+x\,dt+\cdots \end{aligned}}$
From that follows for states:
$\operatorname {R} (z,dt)|r\rangle =\operatorname {R} (z,dt)|x,y,z\rangle =|x-y\,dt,y+x\,dt,z\rangle =\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)|x,y,z\rangle =\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)|r\rangle $
And consequently:
$\operatorname {R} (z,dt)=\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)$
Using
$T_{k}(a)=\exp \left(-{\frac {i}{\hbar }}p_{k}a\right)$
from above with $k=x,y$ and Taylor expansion we get:
$\operatorname {R} (z,dt)=\exp \left[-{\frac {i}{\hbar }}\left(xp_{y}-yp_{x}\right)dt\right]=\exp \left(-{\frac {i}{\hbar }}L_{z}dt\right)=1-{\frac {i}{\hbar }}L_{z}dt+\cdots $
with $L_{z}=xp_{y}-yp_{x}$ the $z$-component of the angular momentum according to the classical cross product.
To get a rotation for the angle $t$, we construct the following differential equation using the condition $\operatorname {R} (z,0)=1$:
${\begin{aligned}&\operatorname {R} (z,t+dt)=\operatorname {R} (z,t)\operatorname {R} (z,dt)\\[1.1ex]\Rightarrow {}&{\frac {d\operatorname {R} }{dt}}={\frac {\operatorname {R} (z,t+dt)-\operatorname {R} (z,t)}{dt}}=\operatorname {R} (z,t){\frac {\operatorname {R} (z,dt)-1}{dt}}=-{\frac {i}{\hbar }}L_{z}\operatorname {R} (z,t)\\[1.1ex]\Rightarrow {}&\operatorname {R} (z,t)=\exp \left(-{\frac {i}{\hbar }}\,t\,L_{z}\right)\end{aligned}}$
Similar to the translation operator, if we are given a Hamiltonian $H$ which is rotationally symmetric about the $z$-axis, $[L_{z},H]=0$ implies $[\operatorname {R} (z,t),H]=0$. This result means that angular momentum is conserved.
For the spin angular momentum about for example the $y$-axis we just replace $L_{z}$ with $ S_{y}={\frac {\hbar }{2}}\sigma _{y}$ (where $\sigma _{y}$ is the Pauli Y matrix) and we get the spin rotation operator
$\operatorname {D} (y,t)=\exp \left(-i{\frac {t}{2}}\sigma _{y}\right).$
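This operator can be checked numerically. The sketch below uses SciPy's matrix exponential; the closed form $\cos(t/2)\,1 - i\sin(t/2)\,\sigma_y$ follows from the Taylor series because $\sigma_y^2 = 1$. Rotating $|z+\rangle$ by $t = \pi/2$ about $y$ should yield $|x+\rangle$:

```python
import numpy as np
from scipy.linalg import expm

sigma_y = np.array([[0, -1j], [1j, 0]])
t = np.pi / 2

# D(y, t) = exp(-i t sigma_y / 2) and its closed form
D = expm(-1j * t/2 * sigma_y)
closed = np.cos(t/2) * np.eye(2) - 1j * np.sin(t/2) * sigma_y
assert np.allclose(D, closed)

z_up = np.array([1, 0], dtype=complex)            # |z+>
x_up = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |x+>
rotated = D @ z_up

print(np.allclose(rotated, x_up))                 # True
```

Note the half angle $t/2$ in the exponent: a spin-1/2 state only returns to itself (up to sign) after a $4\pi$ rotation.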
Effect on the spin operator and quantum states
See also: Rotation group SO(3) § A note on Lie algebra, and Change of basis § Endomorphisms
Operators can be represented by matrices. From linear algebra one knows that a certain matrix $A$ can be represented in another basis through the transformation
$A'=PAP^{-1}$
where $P$ is the basis transformation matrix. Suppose the vectors $b$ and $c$ play the role of the z-axis in one basis and another, respectively; both are perpendicular to the y-axis, with an angle $t$ between them. The spin operator $S_{b}$ in the first basis can then be transformed into the spin operator $S_{c}$ of the other basis through the following transformation:
$S_{c}=\operatorname {D} (y,t)S_{b}\operatorname {D} ^{-1}(y,t)$
From standard quantum mechanics we have the known results $ S_{b}|b+\rangle ={\frac {\hbar }{2}}|b+\rangle $ and $ S_{c}|c+\rangle ={\frac {\hbar }{2}}|c+\rangle $ where $|b+\rangle $ and $|c+\rangle $ are the top spins in their corresponding bases. So we have:
${\frac {\hbar }{2}}|c+\rangle =S_{c}|c+\rangle =\operatorname {D} (y,t)S_{b}\operatorname {D} ^{-1}(y,t)|c+\rangle \Rightarrow $
$S_{b}\operatorname {D} ^{-1}(y,t)|c+\rangle ={\frac {\hbar }{2}}\operatorname {D} ^{-1}(y,t)|c+\rangle $
Comparison with $ S_{b}|b+\rangle ={\frac {\hbar }{2}}|b+\rangle $ yields $|b+\rangle =D^{-1}(y,t)|c+\rangle $.
This means that if the state $|c+\rangle $ is rotated about the $y$-axis by an angle $t$, it becomes the state $|b+\rangle $, a result that can be generalized to arbitrary axes.
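The operator transformation above can be verified numerically. A sketch with $S_b = S_z$ and a rotation about $y$ by an arbitrary angle $t$, which should yield $\cos t\, S_z + \sin t\, S_x$ (spin-1/2, $\hbar = 1$):

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

t = 0.7                                   # arbitrary rotation angle
D = expm(-1j * t * sy)                    # D(y, t) = exp(-i t S_y)
S_c = D @ sz @ np.linalg.inv(D)           # S_c = D S_b D^{-1}

assert np.allclose(S_c, np.cos(t) * sz + np.sin(t) * sx)
print("S_z rotates into cos(t) S_z + sin(t) S_x")
```

This is the operator counterpart of rotating the quantization axis from $b$ to $c$ in the plane perpendicular to $y$.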
See also
• Symmetry in quantum mechanics
• Spherical basis
• Optical phase space
References
• L.D. Landau and E.M. Lifshitz: Quantum Mechanics: Non-Relativistic Theory, Pergamon Press, 1985
• P.A.M. Dirac: The Principles of Quantum Mechanics, Oxford University Press, 1958
• R.P. Feynman, R.B. Leighton and M. Sands: The Feynman Lectures on Physics, Addison-Wesley, 1965
|
Wikipedia
|
Quaternions and spatial rotation
Unit quaternions, known as versors, provide a convenient mathematical notation for representing spatial orientations and rotations of elements in three dimensional space. Specifically, they encode information about an axis-angle rotation about an arbitrary axis. Rotation and orientation quaternions have applications in computer graphics,[1] computer vision, robotics,[2] navigation, molecular dynamics, flight dynamics,[3] orbital mechanics of satellites,[4] and crystallographic texture analysis.[5]
When used to represent rotation, unit quaternions are also called rotation quaternions as they represent the 3D rotation group. When used to represent an orientation (rotation relative to a reference coordinate system), they are called orientation quaternions or attitude quaternions. A spatial rotation around a fixed point of $\theta $ radians about a unit axis $(X,Y,Z)$ that denotes the Euler axis is given by the quaternion $(C,X\,S,Y\,S,Z\,S)$, where $C=\cos(\theta /2)$ and $S=\sin(\theta /2)$.
Compared to rotation matrices, quaternions are more compact, efficient, and numerically stable. Compared to Euler angles, they are simpler to compose. However, they are not as intuitive and easy to understand and, due to the periodic nature of sine and cosine, rotation angles differing precisely by the natural period (here $4\pi $, because the angle enters through its half) will be encoded into identical quaternions, and recovered angles in radians will be limited to $[0,2\pi ]$.
Using quaternions as rotations
In 3-dimensional space, according to Euler's rotation theorem, any rotation or sequence of rotations of a rigid body or coordinate system about a fixed point is equivalent to a single rotation by a given angle $\theta $ about a fixed axis (called the Euler axis) that runs through the fixed point.[6] The Euler axis is typically represented by a unit vector ${\vec {u}}$ (${\hat {e}}$ in the picture). Therefore, any rotation in three dimensions can be represented by a vector ${\vec {u}}$ and an angle $\theta $.
Quaternions give a simple way to encode this axis–angle representation[7] using four real numbers, and can be used to apply (calculate) the corresponding rotation to a position vector (x, y, z) representing a point relative to the origin in R3.
Euclidean vectors such as (2, 3, 4) or (ax, ay, az) can be rewritten as 2 i + 3 j + 4 k or ax i + ay j + az k, where i, j, k are unit vectors representing the three Cartesian axes (traditionally x, y, z), and also obey the multiplication rules of the fundamental quaternion units by interpreting the Euclidean vector (ax, ay, az) as the vector part of the pure quaternion (0, ax, ay, az).
A rotation of angle $\theta $ around the axis defined by the unit vector
${\vec {u}}=(u_{x},u_{y},u_{z})=u_{x}\mathbf {i} +u_{y}\mathbf {j} +u_{z}\mathbf {k} $
can be represented by conjugation by a unit quaternion q. Since the quaternion product $\ (0+u_{x}\mathbf {i} +u_{y}\mathbf {j} +u_{z}\mathbf {k} )(0+u_{x}\mathbf {i} +u_{y}\mathbf {j} +u_{z}\mathbf {k} )$ equals $-1$, applying the Taylor series of the exponential function yields the following extension of Euler's formula:
$\mathbf {q} =e^{{\frac {\theta }{2}}{(u_{x}\mathbf {i} +u_{y}\mathbf {j} +u_{z}\mathbf {k} )}}=\cos {\frac {\theta }{2}}+(u_{x}\mathbf {i} +u_{y}\mathbf {j} +u_{z}\mathbf {k} )\sin {\frac {\theta }{2}}=\cos {\frac {\theta }{2}}+\mathbf {u} \sin {\frac {\theta }{2}}$
It can be shown [8] that the desired rotation can be applied to an ordinary vector $\mathbf {p} =(p_{x},p_{y},p_{z})=p_{x}\mathbf {i} +p_{y}\mathbf {j} +p_{z}\mathbf {k} $ in 3-dimensional space, considered as the vector part of the pure quaternion $\mathbf {p'} $, by evaluating the conjugation of p′ by q, given by:
$L(\mathbf {p'} ):=\mathbf {q} \mathbf {p'} \mathbf {q} ^{-1}=(0,\mathbf {r} ),$
$\mathbf {r} =\left(\cos ^{2}{\frac {\theta }{2}}-\sin ^{2}{\frac {\theta }{2}}\right)\mathbf {p} +2\sin ^{2}{\frac {\theta }{2}}\,(\mathbf {u} \cdot \mathbf {p} )\,\mathbf {u} +2\cos {\frac {\theta }{2}}\sin {\frac {\theta }{2}}\,(\mathbf {u} \times \mathbf {p} )=\cos \theta \,\mathbf {p} +(1-\cos \theta )(\mathbf {u} \cdot \mathbf {p} )\mathbf {u} +\sin \theta \,(\mathbf {u} \times \mathbf {p} ),$
using the Hamilton product, where the vector part of the pure quaternion L(p′) = (0, rx, ry, rz) is the new position vector of the point after the rotation. In a programmatic implementation, the conjugation is achieved by constructing a pure quaternion whose vector part is p, and then performing the quaternion conjugation. The vector part of the resulting pure quaternion is the desired vector r. Clearly, $L$ provides a linear transformation of the quaternion space to itself;[9] also, since $\mathbf {q} $ is unitary, the transformation is an isometry. Moreover, $L(\mathbf {q} )=\mathbf {q} $, so $L$ leaves vectors parallel to the axis $\mathbf {u} $ invariant. The claim is shown by decomposing $\mathbf {p} $ into a component parallel to $\mathbf {u} $ and a component normal to $\mathbf {u} $, and showing that the application of $L$ rotates the normal component. So let $\mathbf {n} $ be the component of $\mathbf {p} $ orthogonal to $\mathbf {u} $ and let $\mathbf {n} _{T}=\mathbf {u} \times \mathbf {n} $. A short calculation shows that the vector part of $L(0,\mathbf {n} )$ is given by $(\cos ^{2}{\frac {\theta }{2}}-\sin ^{2}{\frac {\theta }{2}})\mathbf {n} +2\cos {\frac {\theta }{2}}\sin {\frac {\theta }{2}}\,\mathbf {n} _{T}=\cos \theta \,\mathbf {n} +\sin \theta \,\mathbf {n} _{T}$.
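The conjugation recipe just described can be sketched directly with a hand-written Hamilton product (the function names here are illustrative):

```python
import math

def hamilton(a, b):
    # Quaternion product, components ordered (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(p, axis, theta):
    # axis must be a unit vector; p is an ordinary 3-vector
    s = math.sin(theta / 2)
    q = (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)
    q_inv = (q[0], -q[1], -q[2], -q[3])   # inverse = conjugate for a unit q
    # Conjugation q p' q^{-1} of the pure quaternion p' = (0, p)
    w, x, y, z = hamilton(hamilton(q, (0.0, *p)), q_inv)
    return (x, y, z)                       # vector part of the result

# 90 degrees about the z-axis sends x-hat to y-hat:
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```

The scalar part $w$ of the result is zero (up to rounding), as the text notes below for general conjugations.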
A geometric fact independent of quaternions is the existence of a two-to-one mapping from physical rotations to rotational transformation matrices. If 0 ⩽ $\theta $ ⩽ $2\pi $, a physical rotation about ${\vec {u}}$ by $\theta $ and a physical rotation about $-{\vec {u}}$ by $2\pi -\theta $ both achieve the same final orientation by disjoint paths through intermediate orientations. By inserting those vectors and angles into the formula for q above, one finds that if q represents the first rotation, -q represents the second rotation. This is a geometric proof that conjugation by q and by −q must produce the same rotational transformation matrix. That fact is confirmed algebraically by noting that the conjugation is quadratic in q, so the sign of q cancels, and does not affect the result. (See 2:1 mapping of SU(2) to SO(3)) If both rotations are a half-turn $(\theta =\pi )$, both q and -q will have a real coordinate equal to zero. Otherwise, one will have a positive real part, representing a rotation by an angle less than $\pi $, and the other will have a negative real part, representing a rotation by an angle greater than $\pi $.
Mathematically, this operation carries the set of all "pure" quaternions p (those with real part equal to zero)—which constitute a 3-dimensional space among the quaternions—into itself, by the desired rotation about the axis u, by the angle θ. (Each real quaternion is carried into itself by this operation. But for the purpose of rotations in 3-dimensional space, we ignore the real quaternions.)
The rotation is clockwise if our line of sight points in the same direction as ${\vec {u}}$.
Here, q is a unit quaternion and
$\mathbf {q} ^{-1}=e^{-{\frac {\theta }{2}}{(u_{x}\mathbf {i} +u_{y}\mathbf {j} +u_{z}\mathbf {k} )}}=\cos {\frac {\theta }{2}}-(u_{x}\mathbf {i} +u_{y}\mathbf {j} +u_{z}\mathbf {k} )\sin {\frac {\theta }{2}}.$
It follows that conjugation by the product of two quaternions is the composition of conjugations by these quaternions: If p and q are unit quaternions, then rotation (conjugation) by pq is
$\mathbf {pq} {\vec {v}}(\mathbf {pq} )^{-1}=\mathbf {pq} {\vec {v}}\mathbf {q} ^{-1}\mathbf {p} ^{-1}=\mathbf {p} (\mathbf {q} {\vec {v}}\mathbf {q} ^{-1})\mathbf {p} ^{-1}$,
which is the same as rotating (conjugating) by q and then by p. The scalar component of the result is necessarily zero.
The quaternion inverse of a rotation is the opposite rotation, since $\mathbf {q} ^{-1}(\mathbf {q} {\vec {v}}\mathbf {q} ^{-1})\mathbf {q} ={\vec {v}}$. The square of a quaternion rotation is a rotation by twice the angle around the same axis. More generally qn is a rotation by n times the angle around the same axis as q. This can be extended to arbitrary real n, allowing for smooth interpolation between spatial orientations; see Slerp.
Two rotation quaternions can be combined into one equivalent quaternion by the relation:
$\mathbf {q} '=\mathbf {q} _{2}\mathbf {q} _{1}$
in which q′ corresponds to the rotation q1 followed by the rotation q2. Thus, an arbitrary number of rotations can be composed together and then applied as a single rotation. (Note that quaternion multiplication is not commutative.)
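The composition rule can be checked numerically. In the following Python sketch (the helpers `qmul`, `rotate`, and `axis_angle_quat` are illustrative names, not from any particular library), two rotations are composed into a single quaternion and applying it once agrees with applying the two rotations in sequence:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def rotate(q, v):
    """Rotate a 3-vector v by the unit quaternion q via conjugation q v q*."""
    _, x, y, z = qmul(qmul(q, (0.0, *v)), qconj(q))
    return (x, y, z)

def axis_angle_quat(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about the unit `axis`."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)

q1 = axis_angle_quat((0, 0, 1), math.pi / 2)   # 90 deg about z
q2 = axis_angle_quat((1, 0, 0), math.pi / 2)   # 90 deg about x
q = qmul(q2, q1)                               # q1 followed by q2
v = (1.0, 0.0, 0.0)
# Applying the composed quaternion equals applying q1, then q2.
assert all(abs(a - b) < 1e-12
           for a, b in zip(rotate(q, v), rotate(q2, rotate(q1, v))))
```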
Example conjugation operation
Conjugating p by q refers to the operation p ↦ qpq−1.
Consider the rotation f around the axis ${\vec {v}}=\mathbf {i} +\mathbf {j} +\mathbf {k} $, with a rotation angle of 120°, or 2π/3 radians.
$\alpha ={\frac {2\pi }{3}}$
The length of ${\vec {v}}$ is √3, and the half angle is π/3 (60°), with cosine 1/2 (cos 60° = 0.5) and sine √3/2 (sin 60° ≈ 0.866). We are therefore dealing with a conjugation by the unit quaternion
${\begin{aligned}u&=\cos {\frac {\alpha }{2}}+\sin {\frac {\alpha }{2}}\cdot {\frac {1}{\|{\vec {v}}\|}}{\vec {v}}\\&=\cos {\frac {\pi }{3}}+\sin {\frac {\pi }{3}}\cdot {\frac {1}{\sqrt {3}}}{\vec {v}}\\&={\frac {1}{2}}+{\frac {\sqrt {3}}{2}}\cdot {\frac {1}{\sqrt {3}}}{\vec {v}}\\&={\frac {1}{2}}+{\frac {\sqrt {3}}{2}}\cdot {\frac {\mathbf {i} +\mathbf {j} +\mathbf {k} }{\sqrt {3}}}\\&={\frac {1+\mathbf {i} +\mathbf {j} +\mathbf {k} }{2}}\end{aligned}}$
If f is the rotation function,
$f(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} )=u(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} )u^{-1}$
It can be proven that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components. As a consequence,
$u^{-1}={\dfrac {1-\mathbf {i} -\mathbf {j} -\mathbf {k} }{2}}$
and
$f(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} )={\dfrac {1+\mathbf {i} +\mathbf {j} +\mathbf {k} }{2}}(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} ){\dfrac {1-\mathbf {i} -\mathbf {j} -\mathbf {k} }{2}}$
This can be simplified, using the ordinary rules for quaternion arithmetic, to
$f(a\mathbf {i} +b\mathbf {j} +c\mathbf {k} )=c\mathbf {i} +a\mathbf {j} +b\mathbf {k} $
As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long diagonal through the fixed point (observe how the three axes are permuted cyclically).
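The worked example can be verified directly in Python. This is a minimal sketch; the helper `qmul` and the `(w, x, y, z)` tuple layout are conventions chosen here:

```python
def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

u = (0.5, 0.5, 0.5, 0.5)         # (1 + i + j + k)/2
u_inv = (0.5, -0.5, -0.5, -0.5)  # inverse of a unit quaternion: negate i, j, k

def f(a, b, c):
    """Conjugate the pure quaternion ai + bj + ck by u."""
    _, x, y, z = qmul(qmul(u, (0.0, a, b, c)), u_inv)
    return (x, y, z)

# The 120 deg rotation about i + j + k permutes the axes cyclically.
assert f(1.0, 2.0, 3.0) == (3.0, 1.0, 2.0)
```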
Quaternion-derived rotation matrix
A quaternion rotation $\mathbf {p'} =\mathbf {q} \mathbf {p} \mathbf {q} ^{-1}$ (with $\mathbf {q} =q_{r}+q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} $) can be algebraically manipulated into a matrix rotation $\mathbf {p'} =\mathbf {Rp} $, where $\mathbf {R} $ is the rotation matrix given by:[10]
$\mathbf {R} ={\begin{bmatrix}1-2s(q_{j}^{2}+q_{k}^{2})&2s(q_{i}q_{j}-q_{k}q_{r})&2s(q_{i}q_{k}+q_{j}q_{r})\\2s(q_{i}q_{j}+q_{k}q_{r})&1-2s(q_{i}^{2}+q_{k}^{2})&2s(q_{j}q_{k}-q_{i}q_{r})\\2s(q_{i}q_{k}-q_{j}q_{r})&2s(q_{j}q_{k}+q_{i}q_{r})&1-2s(q_{i}^{2}+q_{j}^{2})\end{bmatrix}}$
Here $s=\|q\|^{-2}$ and if q is a unit quaternion, $s=1^{-2}=1$.
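The matrix above transcribes directly into code. A Python sketch (the function name is illustrative), using the factor s so the quaternion need not be normalized:

```python
import math

def quat_to_matrix(q):
    """Rotation matrix for q = (qr, qi, qj, qk), with s = 1/||q||^2."""
    qr, qi, qj, qk = q
    s = 1.0 / (qr*qr + qi*qi + qj*qj + qk*qk)
    return [
        [1 - 2*s*(qj*qj + qk*qk), 2*s*(qi*qj - qk*qr), 2*s*(qi*qk + qj*qr)],
        [2*s*(qi*qj + qk*qr), 1 - 2*s*(qi*qi + qk*qk), 2*s*(qj*qk - qi*qr)],
        [2*s*(qi*qk - qj*qr), 2*s*(qj*qk + qi*qr), 1 - 2*s*(qi*qi + qj*qj)],
    ]

# 90 deg about z: the first column (the image of x-hat) should be y-hat.
R = quat_to_matrix((math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)))
assert all(abs(R[i][0] - (0.0, 1.0, 0.0)[i]) < 1e-9 for i in range(3))
```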
This can be obtained by using vector calculus and linear algebra if we express $\mathbf {p} $ and $\mathbf {q} $ as scalar and vector parts and use the formula for the multiplication operation in the equation $\mathbf {p'} =\mathbf {q} \mathbf {p} \mathbf {q} ^{-1}$. If we write $\mathbf {p} $ as $\left(0,\ \mathbf {p} \right)$, $\mathbf {p} '$ as $\left(0,\ \mathbf {p} '\right)$ and $\mathbf {q} $ as $\left(q_{r},\ \mathbf {v} \right)$, where $\mathbf {v} =\left(q_{i},q_{j},q_{k}\right)$, our equation turns into $\left(0,\ \mathbf {p} '\right)=\left(q_{r},\ \mathbf {v} \right)\left(0,\ \mathbf {p} \right)s\left(q_{r},\ -\mathbf {v} \right)$. By using the formula for multiplication of two quaternions that are expressed as scalar and vector parts,
$\left(r_{1},\ {\vec {v}}_{1}\right)\left(r_{2},\ {\vec {v}}_{2}\right)=\left(r_{1}r_{2}-{\vec {v}}_{1}\cdot {\vec {v}}_{2},\ r_{1}{\vec {v}}_{2}+r_{2}{\vec {v}}_{1}+{\vec {v}}_{1}\times {\vec {v}}_{2}\right),$
this equation can be rewritten as
${\begin{aligned}(0,\ \mathbf {p} ')=&((q_{r},\ \mathbf {v} )(0,\ \mathbf {p} ))s(q_{r},\ -\mathbf {v} )\\=&(q_{r}0-\mathbf {v} \cdot \mathbf {p} ,\ q_{r}\mathbf {p} +0\mathbf {v} +\mathbf {v} \times \mathbf {p} )s(q_{r},\ -\mathbf {v} )\\=&s(-\mathbf {v} \cdot \mathbf {p} ,\ q_{r}\mathbf {p} +\mathbf {v} \times \mathbf {p} )(q_{r},\ -\mathbf {v} )\\=&s(-\mathbf {v} \cdot \mathbf {p} q_{r}-(q_{r}\mathbf {p} +\mathbf {v} \times \mathbf {p} )\cdot (-\mathbf {v} ),\ (-\mathbf {v} \cdot \mathbf {p} )(-\mathbf {v} )+q_{r}(q_{r}\mathbf {p} +\mathbf {v} \times \mathbf {p} )+(q_{r}\mathbf {p} +\mathbf {v} \times \mathbf {p} )\times (-\mathbf {v} ))\\=&s\left(-\mathbf {v} \cdot \mathbf {p} q_{r}+q_{r}\mathbf {v} \cdot \mathbf {p} ,\ \mathbf {v} \left(\mathbf {v} \cdot \mathbf {p} \right)+q_{r}^{2}\mathbf {p} +q_{r}\mathbf {v} \times \mathbf {p} +\mathbf {v} \times \left(q_{r}\mathbf {p} +\mathbf {v} \times \mathbf {p} \right)\right)\\=&\left(0,\ s\left(\mathbf {v} \otimes \mathbf {v} +q_{r}^{2}\mathbf {I} +2q_{r}[\mathbf {v} ]_{\times }+[\mathbf {v} ]_{\times }^{2}\right)\mathbf {p} \right),\end{aligned}}$
where $\otimes $ denotes the outer product, $\mathbf {I} $ is the identity matrix and $[\mathbf {v} ]_{\times }$ is the transformation matrix that when multiplied from the right with a vector $\mathbf {u} $ gives the cross product $\mathbf {v} \times \mathbf {u} $.
Since $\mathbf {p} '=\mathbf {R} \mathbf {p} $, we can identify $\mathbf {R} $ as $s\left(\mathbf {v} \otimes \mathbf {v} +q_{r}^{2}\mathbf {I} +2q_{r}[\mathbf {v} ]_{\times }+[\mathbf {v} ]_{\times }^{2}\right)$, which upon expansion should result in the expression written in matrix form above.
Recovering the axis-angle representation
The expression $\mathbf {q} \mathbf {p} \mathbf {q} ^{-1}$ rotates any vector quaternion $\mathbf {p} $ around an axis given by the vector $\mathbf {a} $ by the angle $\theta $, where $\mathbf {a} $ and $\theta $ depend on the quaternion $\mathbf {q} =q_{r}+q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} $.
$\mathbf {a} $ and $\theta $ can be found from the following equations:
${\begin{aligned}(a_{x},a_{y},a_{z})={}&{\frac {(q_{i},q_{j},q_{k})}{\sqrt {q_{i}^{2}+q_{j}^{2}+q_{k}^{2}}}}\\[2pt]\theta =2\operatorname {atan2} &\left({\sqrt {q_{i}^{2}+q_{j}^{2}+q_{k}^{2}}},\,q_{r}\right),\end{aligned}}$
where $\operatorname {atan2} $ is the two-argument arctangent.
Care should be taken when the quaternion approaches a scalar, since due to degeneracy the axis of an identity rotation is not well-defined.
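These equations translate into a short Python sketch (names are illustrative); the degenerate near-identity case is handled by returning an arbitrary axis:

```python
import math

def quat_to_axis_angle(q):
    """Recover the rotation axis and angle from q = (qr, qi, qj, qk).
    The axis of a (near-)identity rotation is ill-defined, so an
    arbitrary axis is returned in that degenerate case."""
    qr, qi, qj, qk = q
    n = math.sqrt(qi*qi + qj*qj + qk*qk)
    angle = 2.0 * math.atan2(n, qr)
    if n < 1e-12:
        return (1.0, 0.0, 0.0), angle
    return (qi/n, qj/n, qk/n), angle

# q for a rotation of 0.6 rad about the y axis:
axis, angle = quat_to_axis_angle((math.cos(0.3), 0.0, math.sin(0.3), 0.0))
assert abs(angle - 0.6) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(axis, (0.0, 1.0, 0.0)))
```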
The composition of spatial rotations
A benefit of the quaternion formulation of the composition of two rotations RB and RA is that it yields directly the rotation axis and angle of the composite rotation RC = RBRA.
Let the quaternion associated with a spatial rotation R be constructed from its rotation axis S with the rotation angle $\varphi $ around this axis. The associated quaternion is given by
$S=\cos {\frac {\varphi }{2}}+\mathbf {S} \sin {\frac {\varphi }{2}}.$
Then the composition of the rotation RB with RA is the rotation RC = RBRA with rotation axis and angle defined by the product of the quaternions
$A=\cos {\frac {\alpha }{2}}+\mathbf {A} \sin {\frac {\alpha }{2}}\quad {\text{and}}\quad B=\cos {\frac {\beta }{2}}+\mathbf {B} \sin {\frac {\beta }{2}},$
that is
$C=\cos {\frac {\gamma }{2}}+\mathbf {C} \sin {\frac {\gamma }{2}}=\left(\cos {\frac {\beta }{2}}+\mathbf {B} \sin {\frac {\beta }{2}}\right)\left(\cos {\frac {\alpha }{2}}+\mathbf {A} \sin {\frac {\alpha }{2}}\right).$
Expand this product to obtain
$\cos {\frac {\gamma }{2}}+\mathbf {C} \sin {\frac {\gamma }{2}}=\left(\cos {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}-\mathbf {B} \cdot \mathbf {A} \sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}}\right)+\left(\mathbf {B} \sin {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}+\mathbf {A} \sin {\frac {\alpha }{2}}\cos {\frac {\beta }{2}}+\mathbf {B} \times \mathbf {A} \sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}}\right).$
Divide both sides of this equation by the identity, which is the law of cosines on a sphere,
$\cos {\frac {\gamma }{2}}=\cos {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}-\mathbf {B} \cdot \mathbf {A} \sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}},$
and compute
$\mathbf {C} \tan {\frac {\gamma }{2}}={\frac {\mathbf {B} \tan {\frac {\beta }{2}}+\mathbf {A} \tan {\frac {\alpha }{2}}+\mathbf {B} \times \mathbf {A} \tan {\frac {\beta }{2}}\tan {\frac {\alpha }{2}}}{1-\mathbf {B} \cdot \mathbf {A} \tan {\frac {\beta }{2}}\tan {\frac {\alpha }{2}}}}.$
This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two rotations. He derived this formula in 1840 (see page 408).[11]
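Rodrigues' formula can be checked against the quaternion product numerically. In the Python sketch below (helper names and the chosen axes/angles are illustrative), the left-hand side C tan(γ/2) is read off as the vector part of the product divided by its scalar part:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

# Two rotations: angle alpha about unit axis A, then beta about unit axis B.
A, alpha = (0.0, 0.0, 1.0), 0.8
B, beta = (1.0, 0.0, 0.0), 0.6

# Right-hand side of Rodrigues' formula for C tan(gamma/2).
ta, tb = math.tan(alpha / 2), math.tan(beta / 2)
dot = sum(x*y for x, y in zip(A, B))
bxa = (B[1]*A[2] - B[2]*A[1], B[2]*A[0] - B[0]*A[2], B[0]*A[1] - B[1]*A[0])
rhs = tuple((B[i]*tb + A[i]*ta + bxa[i]*tb*ta) / (1 - dot*tb*ta)
            for i in range(3))

# Left-hand side: vector part over scalar part of the quaternion product BA.
qa = (math.cos(alpha/2),) + tuple(x*math.sin(alpha/2) for x in A)
qb = (math.cos(beta/2),) + tuple(x*math.sin(beta/2) for x in B)
qc = qmul(qb, qa)
lhs = tuple(v / qc[0] for v in qc[1:])
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```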
The three rotation axes A, B, and C form a spherical triangle, and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles. Hamilton[12] presented the component form of these equations, showing that the quaternion product computes the third vertex of a spherical triangle from two given vertices and their associated arc-lengths, which also defines an algebra for points in elliptic geometry.
Axis–angle composition
Removing the $ \cos {\frac {\gamma }{2}}$ term from the expanded product leaves the rotation axis times some constant, which can then be normalized. Care should be taken when normalizing the axis vector for $\gamma $ equal to $0$ or $2k\pi $, where the vector is near $0$; this corresponds to the identity rotation, i.e. a zero rotation about any axis.
${\begin{aligned}\gamma &=2\cos ^{-1}\left(\cos {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}-\mathbf {B} \cdot \mathbf {A} \sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}}\right)\\\mathbf {D} &=\mathbf {B} \sin {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}+\mathbf {A} \sin {\frac {\alpha }{2}}\cos {\frac {\beta }{2}}+\mathbf {B} \times \mathbf {A} \sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}}\end{aligned}}$
Or, with angle-addition trigonometric substitutions:
${\begin{aligned}\gamma &=2\cos ^{-1}\left({\frac {1}{2}}\left(\left(1-\mathbf {A} \cdot \mathbf {B} \right)\cos {\frac {\beta -\alpha }{2}}+\left(1+\mathbf {A} \cdot \mathbf {B} \right)\cos {\frac {\beta +\alpha }{2}}\right)\right)\\\mathbf {D} &=\left(\sin {\frac {\beta +\alpha }{2}}-\sin {\frac {\beta -\alpha }{2}}\right)\mathbf {A} +\left(\sin {\frac {\beta +\alpha }{2}}+\sin {\frac {\beta -\alpha }{2}}\right)\mathbf {B} +\left(\cos {\frac {\beta -\alpha }{2}}-\cos {\frac {\beta +\alpha }{2}}\right)\mathbf {B} \times \mathbf {A} \end{aligned}}$
finally normalizing the rotation axis: $ {\frac {\mathbf {D} }{2\sin {\frac {1}{2}}\gamma }}$ or $ {\frac {\mathbf {D} }{\|\mathbf {D} \|}}$.
Differentiation with respect to the rotation quaternion
The rotated quaternion p' = q p q−1 needs to be differentiated with respect to the rotating quaternion q when the rotation is estimated from numerical optimization. Estimation of the rotation angle is an essential procedure in 3D object registration and camera calibration. For unitary q and pure imaginary p, that is, for a rotation in 3D space, the derivatives of the rotated quaternion can be represented using matrix calculus notation as
${\begin{aligned}{\frac {\partial \mathbf {p'} }{\partial \mathbf {q} }}\equiv \left[{\frac {\partial \mathbf {p'} }{\partial q_{0}}},{\frac {\partial \mathbf {p'} }{\partial q_{x}}},{\frac {\partial \mathbf {p'} }{\partial q_{y}}},{\frac {\partial \mathbf {p'} }{\partial q_{z}}}\right]=\left[\mathbf {pq} -(\mathbf {pq} )^{*},(\mathbf {pqi} )^{*}-\mathbf {pqi} ,(\mathbf {pqj} )^{*}-\mathbf {pqj} ,(\mathbf {pqk} )^{*}-\mathbf {pqk} \right].\end{aligned}}$
A derivation can be found in [13].
Background
Quaternions
Main article: Quaternions
The complex numbers can be defined by introducing an abstract symbol i which satisfies the usual rules of algebra and additionally the rule i2 = −1. This is sufficient to reproduce all of the rules of complex number arithmetic; for example:
$(a+b\mathbf {i} )(c+d\mathbf {i} )=ac+ad\mathbf {i} +b\mathbf {i} c+b\mathbf {i} d\mathbf {i} =ac+ad\mathbf {i} +bc\mathbf {i} +bd\mathbf {i} ^{2}=(ac-bd)+(bc+ad)\mathbf {i} .$
In the same way the quaternions can be defined by introducing abstract symbols i, j, k which satisfy the rules i2 = j2 = k2 = i j k = −1 and the usual algebraic rules except the commutative law of multiplication (a familiar example of such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic follow, such as the rules on multiplication of quaternion basis elements. Using these rules, one can show that:
${\begin{aligned}&(a+b\mathbf {i} +c\mathbf {j} +d\mathbf {k} )(e+f\mathbf {i} +g\mathbf {j} +h\mathbf {k} )=\\&(ae-bf-cg-dh)+(af+be+ch-dg)\mathbf {i} +(ag-bh+ce+df)\mathbf {j} +(ah+bg-cf+de)\mathbf {k} .\end{aligned}}$
The imaginary part $b\mathbf {i} +c\mathbf {j} +d\mathbf {k} $ of a quaternion behaves like a vector ${\vec {v}}=(b,c,d)$ in three-dimensional vector space, and the real part a behaves like a scalar in R. When quaternions are used in geometry, it is more convenient to define them as a scalar plus a vector:
$a+b\mathbf {i} +c\mathbf {j} +d\mathbf {k} =a+{\vec {v}}.$
Some might find it strange to add a number to a vector, as they are objects of very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and another one with zero scalar/real part:
$q_{1}=s+{\vec {v}}=\left(s,{\vec {0}}\right)+\left(0,{\vec {v}}\right).$
We can express quaternion multiplication in the modern language of vector cross and dot products (which were actually inspired by the quaternions in the first place[14]). When multiplying the vector/imaginary parts, in place of the rules i2 = j2 = k2 = ijk = −1 we have the quaternion multiplication rule:
${\vec {v}}{\vec {w}}=-{\vec {v}}\cdot {\vec {w}}+{\vec {v}}\times {\vec {w}},$
where:
• ${\vec {v}}{\vec {w}}$ is the resulting quaternion,
• ${\vec {v}}\times {\vec {w}}$ is vector cross product (a vector),
• ${\vec {v}}\cdot {\vec {w}}$ is vector scalar product (a scalar).
Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while scalar–scalar and scalar–vector multiplications commute. From these rules it follows immediately that (see details):
$q_{1}q_{2}=\left(s+{\vec {v}}\right)\left(t+{\vec {w}}\right)=\left(st-{\vec {v}}\cdot {\vec {w}}\right)+\left(s{\vec {w}}+t{\vec {v}}+{\vec {v}}\times {\vec {w}}\right).$
The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm ratio (see details):
$q_{1}^{-1}=\left(s+{\vec {v}}\right)^{-1}={\frac {\left(s+{\vec {v}}\right)^{*}}{\lVert s+{\vec {v}}\rVert ^{2}}}={\frac {s-{\vec {v}}}{s^{2}+\lVert {\vec {v}}\rVert ^{2}}},$
as can be verified by direct calculation (note the similarity to the multiplicative inverse of complex numbers).
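That direct calculation is easy to carry out in code. A minimal Python sketch (helper names illustrative) that verifies q q⁻¹ is the identity quaternion:

```python
def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qinv(q):
    """Reciprocal: the conjugate divided by the squared norm."""
    w, x, y, z = q
    n2 = w*w + x*x + y*y + z*z
    return (w/n2, -x/n2, -y/n2, -z/n2)

q = (1.0, 2.0, 3.0, 4.0)
one = qmul(q, qinv(q))
assert abs(one[0] - 1.0) < 1e-12
assert all(abs(c) < 1e-12 for c in one[1:])
```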
Rotation identity
Main article: Rodrigues' rotation formula
Let ${\vec {u}}$ be a unit vector (the rotation axis) and let $q=\cos {\frac {\alpha }{2}}+{\vec {u}}\sin {\frac {\alpha }{2}}$. Our goal is to show that
${\vec {v}}'=q{\vec {v}}q^{-1}=\left(\cos {\frac {\alpha }{2}}+{\vec {u}}\sin {\frac {\alpha }{2}}\right)\,{\vec {v}}\,\left(\cos {\frac {\alpha }{2}}-{\vec {u}}\sin {\frac {\alpha }{2}}\right)$
yields the vector ${\vec {v}}$ rotated by an angle $\alpha $ around the axis ${\vec {u}}$. Expanding out (and bearing in mind that ${\vec {u}}{\vec {v}}={\vec {u}}\times {\vec {v}}-{\vec {u}}\cdot {\vec {v}}$), we have
${\begin{aligned}{\vec {v}}'&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+\left({\vec {u}}{\vec {v}}-{\vec {v}}{\vec {u}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}-{\vec {u}}{\vec {v}}{\vec {u}}\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}-\left(\left({\vec {u}}\times {\vec {v}}\right)-\left({\vec {u}}\cdot {\vec {v}}\right)\right){\vec {u}}\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}-\left(\left({\vec {u}}\times {\vec {v}}\right){\vec {u}}-\left({\vec {u}}\cdot {\vec {v}}\right){\vec {u}}\right)\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}-\left(\left(\left({\vec {u}}\times {\vec {v}}\right)\times {\vec {u}}-\left({\vec {u}}\times {\vec {v}}\right)\cdot {\vec {u}}\right)-\left({\vec {u}}\cdot {\vec {v}}\right){\vec {u}}\right)\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}-\left(\left({\vec {v}}-\left({\vec {u}}\cdot {\vec {v}}\right){\vec {u}}\right)-0-\left({\vec {u}}\cdot {\vec {v}}\right){\vec {u}}\right)\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}-\left({\vec {v}}-2{\vec {u}}\left({\vec {u}}\cdot {\vec {v}}\right)\right)\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}+\left(2{\vec {u}}\left({\vec {u}}\cdot {\vec {v}}\right)-{\vec {v}}\right)\sin ^{2}{\frac {\alpha }{2}}\\[6pt]\end{aligned}}$
If we let ${\vec {v}}_{\bot }$ and ${\vec {v}}_{\|}$ equal the components of ${\vec {v}}$ perpendicular and parallel to ${\vec {u}}$ respectively, then ${\vec {v}}={\vec {v}}_{\bot }+{\vec {v}}_{\|}$ and ${\vec {u}}\left({\vec {u}}\cdot {\vec {v}}\right)={\vec {v}}_{\|}$, leading to
${\begin{aligned}{\vec {v}}'&={\vec {v}}\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}+\left(2{\vec {u}}\left({\vec {u}}\cdot {\vec {v}}\right)-{\vec {v}}\right)\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&=\left({\vec {v}}_{\|}+{\vec {v}}_{\bot }\right)\cos ^{2}{\frac {\alpha }{2}}+2\left({\vec {u}}\times {\vec {v}}\right)\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}+\left({\vec {v}}_{\|}-{\vec {v}}_{\bot }\right)\sin ^{2}{\frac {\alpha }{2}}\\[6pt]&={\vec {v}}_{\|}\left(\cos ^{2}{\frac {\alpha }{2}}+\sin ^{2}{\frac {\alpha }{2}}\right)+\left({\vec {u}}\times {\vec {v}}\right)\left(2\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}\right)+{\vec {v}}_{\bot }\left(\cos ^{2}{\frac {\alpha }{2}}-\sin ^{2}{\frac {\alpha }{2}}\right)\\[6pt]\end{aligned}}$
Using the Pythagorean and double-angle trigonometric identities, we then have
${\begin{aligned}{\vec {v}}'&={\vec {v}}_{\|}\left(\cos ^{2}{\frac {\alpha }{2}}+\sin ^{2}{\frac {\alpha }{2}}\right)+\left({\vec {u}}\times {\vec {v}}\right)\left(2\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}\right)+{\vec {v}}_{\bot }\left(\cos ^{2}{\frac {\alpha }{2}}-\sin ^{2}{\frac {\alpha }{2}}\right)\\[6pt]&={\vec {v}}_{\|}+\left({\vec {u}}\times {\vec {v}}\right)\sin \alpha +{\vec {v}}_{\bot }\cos \alpha \end{aligned}}$
This is the formula of a rotation by $\alpha $ around the ${\vec {u}}$ axis.
Quaternion rotation operations
A very formal explanation of the properties used in this section is given by Altman.[15]
The hypersphere of rotations
Main article: 3D rotation group
Visualizing the space of rotations
Unit quaternions represent the group of Euclidean rotations in three dimensions in a very straightforward way. The correspondence between rotations and quaternions can be understood by first visualizing the space of rotations itself.
In order to visualize the space of rotations, it helps to consider a simpler case. Any rotation in three dimensions can be described by a rotation by some angle about some axis; for our purposes, we will use an axis vector to establish handedness for our angle. Consider the special case in which the axis of rotation lies in the xy plane. We can then specify the axis of one of these rotations by a point on a circle through which the vector crosses, and we can select the radius of the circle to denote the angle of rotation.
Similarly, a rotation whose axis of rotation lies in the xy plane can be described as a point on a sphere of fixed radius in three dimensions. Beginning at the north pole of a sphere in three-dimensional space, we specify the point at the north pole to be the identity rotation (a zero angle rotation). Just as in the case of the identity rotation, no axis of rotation is defined, and the angle of rotation (zero) is irrelevant. A rotation having a very small rotation angle can be specified by a slice through the sphere parallel to the xy plane and very near the north pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation. As the rotation angles become larger, the slice moves in the negative z direction, and the circles become larger until the equator of the sphere is reached, which will correspond to a rotation angle of 180 degrees. Continuing southward, the radii of the circles now become smaller (corresponding to the absolute value of the angle of the rotation considered as a negative number). Finally, as the south pole is reached, the circles shrink once more to the identity rotation, which is also specified as the point at the south pole.
Notice that a number of characteristics of such rotations and their representations can be seen by this visualization. The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and this neighborhood becomes flat as the neighborhood shrinks. Also, each rotation is actually represented by two antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the north to south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360 degrees. (the "longitude" of a point then represents a particular axis of rotation.) Note however that this set of rotations is not closed under composition. Two successive rotations with axes in the xy plane will not necessarily give a rotation whose axis lies in the xy plane, and thus cannot be represented as a point on the sphere. This will not be the case with a general rotation in 3-space, in which rotations do form a closed set under composition.
This visualization can be extended to a general rotation in 3-dimensional space. The identity rotation is a point, and a small angle of rotation about some axis can be represented as a point on a sphere with a small radius. As the angle of rotation grows, the sphere grows, until the angle of rotation reaches 180 degrees, at which point the sphere begins to shrink, becoming a point as the angle approaches 360 degrees (or zero degrees from the negative direction). This set of expanding and contracting spheres represents a hypersphere in four dimensional space (a 3-sphere). Just as in the simpler example above, each rotation represented as a point on the hypersphere is matched by its antipodal point on that hypersphere. The "latitude" on the hypersphere will be half of the corresponding angle of rotation, and the neighborhood of any point will become "flatter" (i.e. be represented by a 3-D Euclidean space of points) as the neighborhood shrinks. This behavior is matched by the set of unit quaternions: A general quaternion represents a point in a four dimensional space, but constraining it to have unit magnitude yields a three-dimensional space equivalent to the surface of a hypersphere. The magnitude of the unit quaternion will be unity, corresponding to a hypersphere of unit radius. The vector part of a unit quaternion represents the radius of the 2-sphere corresponding to the axis of rotation, and its magnitude is the sine of half the angle of rotation. Each rotation is represented by two unit quaternions of opposite sign, and, as in the space of rotations in three dimensions, the quaternion product of two unit quaternions will yield a unit quaternion. Also, the space of unit quaternions is "flat" in any infinitesimal neighborhood of a given unit quaternion.
Parameterizing the space of rotations
We can parameterize the surface of a sphere with two coordinates, such as latitude and longitude. But latitude and longitude are ill-behaved (degenerate) at the north and south poles, though the poles are not intrinsically different from any other points on the sphere. At the poles (latitudes +90° and −90°), the longitude becomes meaningless.
It can be shown that no two-parameter coordinate system can avoid such degeneracy. We can avoid such problems by embedding the sphere in three-dimensional space and parameterizing it with three Cartesian coordinates (w, x, y), placing the north pole at (w, x, y) = (1, 0, 0), the south pole at (w, x, y) = (−1, 0, 0), and the equator at w = 0, x2 + y2 = 1. Points on the sphere satisfy the constraint w2 + x2 + y2 = 1, so we still have just two degrees of freedom though there are three coordinates. A point (w, x, y) on the sphere represents a rotation in the ordinary space around the horizontal axis directed by the vector (x, y, 0) by an angle $\alpha =2\cos ^{-1}w=2\sin ^{-1}{\sqrt {x^{2}+y^{2}}}$.
In the same way the hyperspherical space of 3D rotations can be parameterized by three angles (Euler angles), but any such parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock. We can avoid this by using four Euclidean coordinates w, x, y, z, with w2 + x2 + y2 + z2 = 1. The point (w, x, y, z) represents a rotation around the axis directed by the vector (x, y, z) by an angle $\alpha =2\cos ^{-1}w=2\sin ^{-1}{\sqrt {x^{2}+y^{2}+z^{2}}}.$
Explaining quaternions' properties with rotations
Non-commutativity
The multiplication of quaternions is non-commutative. This fact explains how the p ↦ q p q−1 formula can work at all, having q q−1 = 1 by definition. Since the multiplication of unit quaternions corresponds to the composition of three-dimensional rotations, this property can be made intuitive by showing that three-dimensional rotations are not commutative in general.
Set two books next to each other. Rotate one of them 90 degrees clockwise around the z axis, then flip it 180 degrees around the x axis. Take the other book, flip it 180° around the x axis first, then rotate it 90° clockwise around the z axis. The two books do not end up parallel. This shows that, in general, the composition of two different rotations around two distinct spatial axes will not commute.
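The book experiment has a direct quaternion analogue. In the Python sketch below (the helper `qmul` is an illustrative name), the two orders of multiplication produce different quaternions:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

qz90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))  # 90 deg about z
qx180 = (0.0, 1.0, 0.0, 0.0)                                     # 180 deg about x

# "z first, then x" versus "x first, then z": note the reversed products.
z_then_x = qmul(qx180, qz90)
x_then_z = qmul(qz90, qx180)
assert any(abs(a - b) > 1e-9 for a, b in zip(z_then_x, x_then_z))
```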
Orientation
The vector cross product, used to define the axis–angle representation, does confer an orientation ("handedness") to space: in a three-dimensional vector space, the three vectors in the equation a × b = c will always form a right-handed set (or a left-handed set, depending on how the cross product is defined), thus fixing an orientation in the vector space. Alternatively, the dependence on orientation is expressed in referring to such ${\vec {u}}$ that specifies a rotation as to axial vectors. In quaternionic formalism the choice of an orientation of the space corresponds to order of multiplication: ij = k but ji = −k. If one reverses the orientation, then the formula above becomes p ↦ q−1 p q, i.e., a unit q is replaced with the conjugate quaternion – the same behaviour as of axial vectors.
Alternative conventions
It is reported[16] that the existence and continued usage of an alternative quaternion convention in the aerospace and, to a lesser extent, robotics community is incurring a significant and ongoing cost [sic]. This alternative convention was proposed by Shuster[17] and departs from tradition by reversing the definition for multiplying quaternion basis elements such that under Shuster's convention, $\mathbf {i} \mathbf {j} =-\mathbf {k} $ whereas Hamilton's definition is $\mathbf {i} \mathbf {j} =\mathbf {k} $. This convention is also referred to as the "JPL convention" for its use in some parts of NASA's Jet Propulsion Laboratory.
Under Shuster's convention, the formula for multiplying two quaternions is altered such that
$\left(r_{1},\ {\vec {v}}_{1}\right)\left(r_{2},\ {\vec {v}}_{2}\right)=\left(r_{1}r_{2}-{\vec {v}}_{1}\cdot {\vec {v}}_{2},\ r_{1}{\vec {v}}_{2}+r_{2}{\vec {v}}_{1}\mathbin {\color {red}\mathbf {-} } {\vec {v}}_{1}\times {\vec {v}}_{2}\right),\qquad {\text{(Alternative convention, usage discouraged!)}}$
The formula for rotating a vector by a quaternion is altered to be
${\begin{aligned}\mathbf {p} '_{\text{alt}}={}&(\mathbf {v} \otimes \mathbf {v} +q_{r}^{2}\mathbf {I} \mathbin {\color {red}\mathbf {-} } 2q_{r}[\mathbf {v} ]_{\times }+[\mathbf {v} ]_{\times }^{2})\mathbf {p} &{\text{(Alternative convention, usage discouraged!)}}\\=&\ (\mathbf {I} \mathbin {\color {red}\mathbf {-} } 2q_{r}[\mathbf {v} ]_{\times }+2[\mathbf {v} ]_{\times }^{2})\mathbf {p} &\end{aligned}}$
To identify the change under Shuster's convention, note that the sign before the cross product is flipped from plus to minus.
Finally, the formula for converting a quaternion to a rotation matrix is altered to be
${\begin{aligned}\mathbf {R} _{\text{alt}}&=\mathbf {I} \mathbin {\color {red}\mathbf {-} } 2q_{r}[\mathbf {v} ]_{\times }+2[\mathbf {v} ]_{\times }^{2}\qquad {\text{(Alternative convention, usage discouraged!)}}\\&={\begin{bmatrix}1-2s(q_{j}^{2}+q_{k}^{2})&2s(q_{i}q_{j}+q_{k}q_{r})&2s(q_{i}q_{k}-q_{j}q_{r})\\2s(q_{i}q_{j}-q_{k}q_{r})&1-2s(q_{i}^{2}+q_{k}^{2})&2s(q_{j}q_{k}+q_{i}q_{r})\\2s(q_{i}q_{k}+q_{j}q_{r})&2s(q_{j}q_{k}-q_{i}q_{r})&1-2s(q_{i}^{2}+q_{j}^{2})\end{bmatrix}}\end{aligned}}$
which is exactly the transpose of the rotation matrix converted under the traditional convention.
Software applications by convention used
The table below groups applications by their adherence to either quaternion convention:[16]
Hamilton multiplication convention:
• Wolfram Mathematica
• MATLAB Robotics System Toolbox
• MATLAB Aerospace Toolbox[19]
• ROS
• Eigen
• Boost quaternions
• Quaternion.js
• Ceres Solver
• SciPy spatial.transform.Rotation library
• SymPy symbolic mathematics library
• numpy-quaternion library

Shuster multiplication convention:
• Microsoft DirectX Math Library
While use of either convention does not impact the capability or correctness of applications thus created, the authors of [16] argued that the Shuster convention should be abandoned because it departs from the much older quaternion multiplication convention by Hamilton and may never be adopted by the mathematics or theoretical physics communities.
Comparison with other representations of rotations
Advantages of quaternions
The representation of a rotation as a quaternion (4 numbers) is more compact than the representation as an orthogonal matrix (9 numbers). Furthermore, for a given axis and angle, one can easily construct the corresponding quaternion, and conversely, for a given quaternion one can easily read off the axis and the angle. Both of these are much harder with matrices or Euler angles.
In video games and other applications, one is often interested in "smooth rotations", meaning that the scene should slowly rotate and not in a single step. This can be accomplished by choosing a curve such as the spherical linear interpolation in the quaternions, with one endpoint being the identity transformation 1 (or some other initial rotation) and the other being the intended final rotation. This is more problematic with other representations of rotations.
When composing several rotations on a computer, rounding errors necessarily accumulate. A quaternion that is slightly off still represents a rotation after being normalized: a matrix that is slightly off may not be orthogonal any more and is harder to convert back to a proper orthogonal matrix.
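Renormalizing a drifted quaternion is a one-line projection back onto the unit 3-sphere, in contrast with re-orthogonalizing a matrix. A Python sketch (the drifted values below are hypothetical, standing in for accumulated rounding error):

```python
import math

def qnormalize(q):
    """Project a drifted quaternion back onto the unit 3-sphere."""
    n = math.sqrt(sum(c*c for c in q))
    return tuple(c / n for c in q)

# Hypothetical quaternion whose norm has drifted away from 1 through
# accumulated rounding in a long chain of compositions.
drifted = (0.7072, 0.0001, -0.0002, 0.7069)
q = qnormalize(drifted)
assert abs(sum(c*c for c in q) - 1.0) < 1e-12
```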
Quaternions also avoid a phenomenon called gimbal lock which can result when, for example in pitch/yaw/roll rotational systems, the pitch is rotated 90° up or down, so that yaw and roll then correspond to the same motion, and a degree of freedom of rotation is lost. In a gimbal-based aerospace inertial navigation system, for instance, this could have disastrous results if the aircraft is in a steep dive or ascent.
From a quaternion to an orthogonal matrix
The orthogonal matrix corresponding to a rotation by the unit quaternion z = a + b i + c j + d k (with | z | = 1) when post-multiplying with a column vector is given by
$R={\begin{pmatrix}a^{2}+b^{2}-c^{2}-d^{2}&2bc-2ad&2bd+2ac\\2bc+2ad&a^{2}-b^{2}+c^{2}-d^{2}&2cd-2ab\\2bd-2ac&2cd+2ab&a^{2}-b^{2}-c^{2}+d^{2}\\\end{pmatrix}}.$
This rotation matrix is used on vector w as $w_{\text{rotated}}=R\cdot w$. The quaternion representation of this rotation is given by:
${\begin{bmatrix}0\\w_{\text{rotated}}\end{bmatrix}}=z{\begin{bmatrix}0\\w\end{bmatrix}}z^{*},$
where $z^{*}$ is the conjugate of the quaternion $z$, given by $\mathbf {z} ^{*}=a-b\mathbf {i} -c\mathbf {j} -d\mathbf {k} $
Also, quaternion multiplication is defined as (assuming a and b are quaternions, like z above):
$ab=\left(a_{0}b_{0}-{\vec {a}}\cdot {\vec {b}};a_{0}{\vec {b}}+b_{0}{\vec {a}}+{\vec {a}}\times {\vec {b}}\right)$
where the order a, b is important since the cross product of two vectors is not commutative.
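The component form of this product is easy to transcribe directly. The following Python sketch, with quaternions as (w, x, y, z) tuples (a convention assumed here), implements the formula above:

```python
def qmul(a, b):
    """Hamilton product ab of quaternions a = (a0, a_vec), b = (b0, b_vec)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,   # a0*b0 - a_vec . b_vec
            aw * bx + bw * ax + ay * bz - az * by,   # a0*b_vec + b0*a_vec + a_vec x b_vec
            aw * by + bw * ay + az * bx - ax * bz,
            aw * bz + bw * az + ax * by - ay * bx)
```

Non-commutativity shows up immediately: `qmul(i, j)` gives k while `qmul(j, i)` gives −k.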
A more efficient calculation in which the quaternion does not need to be unit normalized is given by[20]
$R={\begin{pmatrix}1-cc-dd&bc-ad&bd+ac\\bc+ad&1-bb-dd&cd-ab\\bd-ac&cd+ab&1-bb-cc\\\end{pmatrix}},$
where the following intermediate quantities have been defined:
${\begin{alignedat}{2}&\ \ s=2/(a\cdot a+b\cdot b+c\cdot c+d\cdot d),\\&{\begin{array}{lll}bs=b\cdot s,&cs=c\cdot s,&ds=d\cdot s,\\ab=a\cdot bs,&ac=a\cdot cs,&ad=a\cdot ds,\\bb=b\cdot bs,&bc=b\cdot cs,&bd=b\cdot ds,\\cc=c\cdot cs,&cd=c\cdot ds,&dd=d\cdot ds.\\\end{array}}\end{alignedat}}$
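The efficient conversion above translates directly into code. A minimal Python sketch, with quaternions as (a, b, c, d) tuples matching the notation of the formulas:

```python
def quat_to_matrix(q):
    """Rotation matrix for quaternion q = (a, b, c, d); q need not be unit."""
    a, b, c, d = q
    # Intermediate quantities exactly as in the formulas above.
    s = 2.0 / (a * a + b * b + c * c + d * d)
    bs, cs, ds = b * s, c * s, d * s
    ab, ac, ad = a * bs, a * cs, a * ds
    bb, bc, bd = b * bs, b * cs, b * ds
    cc, cd, dd = c * cs, c * ds, d * ds
    return [[1 - cc - dd, bc - ad,     bd + ac],
            [bc + ad,     1 - bb - dd, cd - ab],
            [bd - ac,     cd + ab,     1 - bb - cc]]
```

Because of the normalizing factor s, scaling the quaternion by any nonzero constant leaves the resulting matrix unchanged.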
From an orthogonal matrix to a quaternion
One must be careful when converting a rotation matrix to a quaternion, as several straightforward methods tend to be unstable when the trace (sum of the diagonal elements) of the rotation matrix is zero or very small. For a stable method of converting an orthogonal matrix to a quaternion, see Rotation matrix § Quaternion.
Fitting quaternions
The above section described how to recover a quaternion q from a 3 × 3 rotation matrix Q. Suppose, however, that we have some matrix Q that is not a pure rotation—due to round-off errors, for example—and we wish to find the quaternion q that most accurately represents Q. In that case we construct a symmetric 4 × 4 matrix
$K={\frac {1}{3}}{\begin{bmatrix}Q_{xx}-Q_{yy}-Q_{zz}&Q_{yx}+Q_{xy}&Q_{zx}+Q_{xz}&Q_{zy}-Q_{yz}\\Q_{yx}+Q_{xy}&Q_{yy}-Q_{xx}-Q_{zz}&Q_{zy}+Q_{yz}&Q_{xz}-Q_{zx}\\Q_{zx}+Q_{xz}&Q_{zy}+Q_{yz}&Q_{zz}-Q_{xx}-Q_{yy}&Q_{yx}-Q_{xy}\\Q_{zy}-Q_{yz}&Q_{xz}-Q_{zx}&Q_{yx}-Q_{xy}&Q_{xx}+Q_{yy}+Q_{zz}\end{bmatrix}},$
and find the eigenvector (x, y, z, w) corresponding to the largest eigenvalue (that value will be 1 if and only if Q is a pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix Q .[21]
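A direct NumPy-based sketch of this procedure (assuming NumPy is available; the quaternion is returned in the (x, y, z, w) component order used by the matrix K above):

```python
import numpy as np

def fit_quaternion(Q):
    """Quaternion (x, y, z, w) of the rotation closest to the 3x3 matrix Q."""
    (Qxx, Qxy, Qxz), (Qyx, Qyy, Qyz), (Qzx, Qzy, Qzz) = Q
    K = np.array([
        [Qxx - Qyy - Qzz, Qyx + Qxy,       Qzx + Qxz,       Qzy - Qyz],
        [Qyx + Qxy,       Qyy - Qxx - Qzz, Qzy + Qyz,       Qxz - Qzx],
        [Qzx + Qxz,       Qzy + Qyz,       Qzz - Qxx - Qyy, Qyx - Qxy],
        [Qzy - Qyz,       Qxz - Qzx,       Qyx - Qxy,       Qxx + Qyy + Qzz],
    ]) / 3.0
    vals, vecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    return vecs[:, -1]               # unit eigenvector of the largest eigenvalue
```

For a pure rotation the largest eigenvalue is 1 and the recovered quaternion is exact, up to sign (q and −q represent the same rotation).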
Performance comparisons
This section discusses the performance implications of using quaternions versus other methods (axis/angle or rotation matrices) to perform rotations in 3D.
Results
Storage requirements
Method Storage
Rotation matrix 9
Quaternion 3 or 4 (see below)
Angle–axis 3 or 4 (see below)
Only three of the quaternion components are independent, as a rotation is represented by a unit quaternion. For further calculation one usually needs all four elements, so all calculations would suffer additional expense from recovering the fourth component. Likewise, angle–axis can be stored in a three-component vector by multiplying the unit direction by the angle (or a function thereof), but this comes at additional computational cost when using it for calculations.
Performance comparison of rotation chaining operations
Method             # multiplies   # add/subtracts   total operations
Rotation matrices  27             18                45
Quaternions        16             12                28
Performance comparison of vector rotating operations[22][23]
Method                                       # multiplies   # add/subtracts   # sin/cos   total operations
Rotation matrix                              9              6                 0           15
Quaternions *   Without intermediate matrix  15             15                0           30
                With intermediate matrix     21             18                0           39
Angle–axis      Without intermediate matrix  18             13                2           30 + 3
                With intermediate matrix     21             16                2           37 + 2
* Quaternions can be implicitly converted to a rotation-like matrix (12 multiplications and 12 additions/subtractions), which makes the cost of rotating subsequent vectors equal to that of the rotation matrix method.
Used methods
There are three basic approaches to rotating a vector v→:
1. Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v→. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, making it the most efficient method for rotating a vector.
2. A rotation can be represented by a unit-length quaternion q = (w, r→) with scalar (real) part w and vector (imaginary) part r→. The rotation can be applied to a 3D vector v→ via the formula ${\vec {v}}_{\text{new}}={\vec {v}}+2{\vec {r}}\times ({\vec {r}}\times {\vec {v}}+w{\vec {v}})$. This requires only 15 multiplications and 15 additions to evaluate (or 18 multiplications and 12 additions if the factor of 2 is done via multiplication.) This formula, originally thought to be used with axis/angle notation (Rodrigues' formula), can also be applied to quaternion notation. This yields the same result as the less efficient but more compact formula of quaternion multiplication ${\vec {v}}_{\text{new}}=q{\vec {v}}q^{-1}$.
3. Use the angle/axis formula to convert an angle/axis to a rotation matrix R then multiplying with a vector, or, similarly, use a formula to convert quaternion notation to a rotation matrix, then multiplying with a vector. Converting the angle/axis to R costs 12 multiplications, 2 function calls (sin, cos), and 10 additions/subtractions; from item 1, rotating using R adds an additional 9 multiplications and 6 additions for a total of 21 multiplications, 16 add/subtractions, and 2 function calls (sin, cos). Converting a quaternion to R costs 12 multiplications and 12 additions/subtractions; from item 1, rotating using R adds an additional 9 multiplications and 6 additions for a total of 21 multiplications and 18 additions/subtractions.
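A sketch of method 2, the 15-multiply, 15-add formula, in Python (quaternions as (w, x, y, z) tuples):

```python
def rotate_by_quat(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, r) via v + 2 r x (r x v + w v)."""
    w, rx, ry, rz = q
    vx, vy, vz = v
    # t = r x v + w v
    tx = ry * vz - rz * vy + w * vx
    ty = rz * vx - rx * vz + w * vy
    tz = rx * vy - ry * vx + w * vz
    # v + 2 (r x t)
    return (vx + 2.0 * (ry * tz - rz * ty),
            vy + 2.0 * (rz * tx - rx * tz),
            vz + 2.0 * (rx * ty - ry * tx))
```

Rotating (1, 0, 0) by 90° about the z-axis yields (0, 1, 0), matching the q v q⁻¹ form.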
Performance comparison of n vector rotating operations
Method                                       # multiplies   # add/subtracts   # sin/cos   total operations
Rotation matrix                              9n             6n                0           15n
Quaternions *   Without intermediate matrix  15n            15n               0           30n
                With intermediate matrix     9n + 12        6n + 12           0           15n + 24
Angle–axis      Without intermediate matrix  18n            12n + 1           2           30n + 3
                With intermediate matrix     9n + 12        6n + 10           2           15n + 24
Pairs of unit quaternions as rotations in 4D space
A pair of unit quaternions zl and zr can represent any rotation in 4D space. Given a four-dimensional vector v→, and assuming that it is a quaternion, we can rotate the vector v→ like this:
$f\left({\vec {v}}\right)=\mathbf {z} _{\rm {l}}{\vec {v}}\mathbf {z} _{\rm {r}}={\begin{pmatrix}a_{\rm {l}}&-b_{\rm {l}}&-c_{\rm {l}}&-d_{\rm {l}}\\b_{\rm {l}}&a_{\rm {l}}&-d_{\rm {l}}&c_{\rm {l}}\\c_{\rm {l}}&d_{\rm {l}}&a_{\rm {l}}&-b_{\rm {l}}\\d_{\rm {l}}&-c_{\rm {l}}&b_{\rm {l}}&a_{\rm {l}}\end{pmatrix}}{\begin{pmatrix}w\\x\\y\\z\end{pmatrix}}{\begin{pmatrix}a_{\rm {r}}&-b_{\rm {r}}&-c_{\rm {r}}&-d_{\rm {r}}\\b_{\rm {r}}&a_{\rm {r}}&d_{\rm {r}}&-c_{\rm {r}}\\c_{\rm {r}}&-d_{\rm {r}}&a_{\rm {r}}&b_{\rm {r}}\\d_{\rm {r}}&c_{\rm {r}}&-b_{\rm {r}}&a_{\rm {r}}\end{pmatrix}}.$
The pair of matrices represents a rotation of ℝ4. Note that since $(\mathbf {z} _{\rm {l}}{\vec {v}})\mathbf {z} _{\rm {r}}=\mathbf {z} _{\rm {l}}({\vec {v}}\mathbf {z} _{\rm {r}})$, the two matrices must commute. Therefore, there are two commuting subgroups of the group of four dimensional rotations. Arbitrary four-dimensional rotations have 6 degrees of freedom; each matrix represents 3 of those 6 degrees of freedom.
Since the generators of the four-dimensional rotations can be represented by pairs of quaternions (as follows), all four-dimensional rotations can also be represented.
${\begin{aligned}\mathbf {z} _{\rm {l}}{\vec {v}}\mathbf {z} _{\rm {r}}&={\begin{pmatrix}1&-dt_{ab}&-dt_{ac}&-dt_{ad}\\dt_{ab}&1&-dt_{bc}&-dt_{bd}\\dt_{ac}&dt_{bc}&1&-dt_{cd}\\dt_{ad}&dt_{bd}&dt_{cd}&1\end{pmatrix}}{\begin{pmatrix}w\\x\\y\\z\end{pmatrix}}\\[3pt]\mathbf {z} _{\rm {l}}&=1+{dt_{ab}+dt_{cd} \over 2}i+{dt_{ac}-dt_{bd} \over 2}j+{dt_{ad}+dt_{bc} \over 2}k\\[3pt]\mathbf {z} _{\rm {r}}&=1+{dt_{ab}-dt_{cd} \over 2}i+{dt_{ac}+dt_{bd} \over 2}j+{dt_{ad}-dt_{bc} \over 2}k\end{aligned}}$
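The quaternion form of this 4D rotation is straightforward to sketch in Python: treating the 4-vector (w, x, y, z) as a quaternion, the map is just two Hamilton products.

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + bw * ax + ay * bz - az * by,
            aw * by + bw * ay + az * bx - ax * bz,
            aw * bz + bw * az + ax * by - ay * bx)

def rotate4d(zl, zr, v):
    """Rotate the 4-vector v = (w, x, y, z), viewed as a quaternion, to zl v zr."""
    return qmul(qmul(zl, v), zr)
```

With zl = zr = i, for example, the point (1, 0, 0, 0) is sent to (−1, 0, 0, 0) while j and k are fixed: a half-turn in the w–x plane.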
See also
• Anti-twister mechanism
• Binary polyhedral group
• Biquaternion
• Charts on SO(3)
• Clifford algebras
• Conversion between quaternions and Euler angles
• Covering space
• Dual quaternion
• Applications of dual quaternions to 2D geometry
• Elliptic geometry
• Rotation formalisms in three dimensions
• Rotation (mathematics)
• Spin group
• Slerp, spherical linear interpolation
• Olinde Rodrigues
• William Rowan Hamilton
References
1. Shoemake, Ken (1985). "Animating Rotation with Quaternion Curves" (PDF). Computer Graphics. 19 (3): 245–254. doi:10.1145/325165.325242. Presented at SIGGRAPH '85.
2. J. M. McCarthy, 1990, Introduction to Theoretical Kinematics, MIT Press
3. Amnon Katz (1996) Computational Rigid Vehicle Dynamics, Krieger Publishing Co. ISBN 978-1575240169
4. J. B. Kuipers (1999) Quaternions and rotation Sequences: a Primer with Applications to Orbits, Aerospace, and Virtual Reality, Princeton University Press ISBN 978-0-691-10298-6
5. Karsten Kunze, Helmut Schaeben (November 2004). "The Bingham Distribution of Quaternions and Its Spherical Radon Transform in Texture Analysis". Mathematical Geology. 36 (8): 917–943. doi:10.1023/B:MATG.0000048799.56445.59. S2CID 55009081.
6. Euclidean and non-Euclidean Geometry. Patrick J. Ryan, Cambridge University Press, Cambridge, 1987.
7. I.L. Kantor. Hypercomplex numbers, Springer-Verlag, New York, 1989.
8. Andrew J. Hanson. Visualizing Quaternions, Morgan Kaufmann Publishers, Amsterdam, 2006.
9. J.H. Conway and D.A. Smith. On Quaternions and Octonions, A.K. Peters, Natick, MA, 2003.
10. "comp.graphics.algorithms FAQ". Retrieved 2 July 2017.
11. Rodrigues, O. (1840), Des lois géométriques qui régissent les déplacements d'un système solide dans l'espace, et la variation des coordonnées provenant de ses déplacements con- sidérés indépendamment des causes qui peuvent les produire, Journal de Mathématiques Pures et Appliquées de Liouville 5, 380–440.
12. William Rowan Hamilton (1844 to 1850) On quaternions or a new system of imaginaries in algebra, Philosophical Magazine, link to David R. Wilkins collection at Trinity College, Dublin
13. Lee, Byung-Uk (1991), "Differentiation with Quaternions, Appendix B" (PDF), Stereo Matching of Skull Landmarks (Ph. D. thesis), Stanford University, pp. 57–58
14. Altmann, Simon L. (1989). "Hamilton, Rodrigues, and the Quaternion Scandal". Mathematics Magazine. 62 (5): 306. doi:10.2307/2689481. JSTOR 2689481.
15. Simon L. Altman (1986) Rotations, Quaternions, and Double Groups, Dover Publications (see especially Ch. 12).
16. Sommer, H. (2018), "Why and How to Avoid the Flipped Quaternion Multiplication", Aerospace, 5 (3): 72, arXiv:1801.07478, Bibcode:2018Aeros...5...72S, doi:10.3390/aerospace5030072, ISSN 2226-4310
17. Shuster, M.D (1993), "A Survey of attitude representations", Journal of the Astronautical Sciences, 41 (4): 439–517, Bibcode:1993JAnSc..41..439S, ISSN 0021-9142
18. "MATLAB Aerospace Toolbox quatrotate".
19. The MATLAB Aerospace Toolbox uses the Hamilton multiplication convention, however because it applies *passive* rather than *active* rotations, the quaternions listed are in effect active rotations using the Shuster convention.[18]
20. Alan Watt and Mark Watt (1992) Advanced Animation and Rendering Techniques: Theory and Practice, ACM Press ISBN 978-0201544121
21. Bar-Itzhack, Itzhack Y. (Nov–Dec 2000), "New method for extracting the quaternion from a rotation matrix", Journal of Guidance, Control and Dynamics, 23 (6): 1085–1087, Bibcode:2000JGCD...23.1085B, doi:10.2514/2.4654, ISSN 0731-5090
22. Eberly, D., Rotation Representations and performance issues
23. "Bitbucket". bitbucket.org.
Further reading
• Grubin, Carl (1970). "Derivation of the quaternion scheme via the Euler axis and angle". Journal of Spacecraft and Rockets. 7 (10): 1261–1263. Bibcode:1970JSpRo...7.1261G. doi:10.2514/3.30149.
• Battey-Pratt, E. P.; Racey, T. J. (1980). "Geometric Model for Fundamental Particles". International Journal of Theoretical Physics. 19 (6): 437–475. Bibcode:1980IJTP...19..437B. doi:10.1007/BF00671608. S2CID 120642923.
• Arribas, M.; Elipe, A.; Palacios, M. (2006). "Quaternions and the rotations of a rigid body". Celest. Mech. Dyn. Astron. 96 (3–4): 239–251. Bibcode:2006CeMDA..96..239A. doi:10.1007/s10569-006-9037-6. S2CID 123591599.
External links and resources
• Shoemake, Ken. "Quaternions" (PDF). Archived (PDF) from the original on 2020-05-03.
• "Simple Quaternion type and operations in over seventy-five computer languages". on Rosetta Code
• Hart, John C. "Quaternion Demonstrator".
• Dam, Eik B.; Koch, Martin; Lillholm, Martin (1998). "Quaternions, Interpolation and Animation" (PDF).
• Leandra, Vicci (2001). "Quaternions and Rotations in 3-Space: The Algebra and its Geometric Interpretation" (PDF).
• Howell, Thomas; Lafon, Jean-Claude (1975). "The Complexity of the Quaternion Product, TR75-245" (PDF). Cornell University.
• Horn, Berthold K.P. (2001). "Some Notes on Unit Quaternions and Rotation" (PDF).
• Lee, Byung-Uk (1991). Unit Quaternion Representation of Rotation - Appendix A, Differentiation with Quaternions - Appendix B (PDF) (Ph. D. Thesis). Stanford University.
• Vance, Rod. "Some examples of connected Lie groups". Archived from the original on 2018-12-15.
• "Visual representation of quaternion rotation".
Rotation system
In combinatorial mathematics, rotation systems (also called combinatorial embeddings or combinatorial maps) encode embeddings of graphs onto orientable surfaces by describing the circular ordering of a graph's edges around each vertex. A more formal definition of a rotation system involves pairs of permutations; such a pair is sufficient to determine a multigraph, a surface, and a 2-cell embedding of the multigraph onto the surface.
Every rotation scheme defines a unique 2-cell embedding of a connected multigraph on a closed oriented surface (up to orientation-preserving topological equivalence). Conversely, any embedding of a connected multigraph G on an oriented closed surface defines a unique rotation system having G as its underlying multigraph. This fundamental equivalence between rotation systems and 2-cell-embeddings was first settled in a dual form by Lothar Heffter in the 1890s[1] and extensively used by Ringel during the 1950s.[2] Independently, Edmonds gave the primal form of the theorem[3] and the details of his study have been popularized by Youngs.[4] The generalization to multigraphs was presented by Gross and Alpert.[5]
Rotation systems are related to, but not the same as, the rotation maps used by Reingold et al. (2002) to define the zig-zag product of graphs. A rotation system specifies a circular ordering of the edges around each vertex, while a rotation map specifies a (non-circular) permutation of the edges at each vertex. In addition, rotation systems can be defined for any graph, while as Reingold et al. define them rotation maps are restricted to regular graphs.
Formal definition
Formally, a rotation system is defined as a pair (σ, θ) where σ and θ are permutations acting on the same ground set B, θ is a fixed-point-free involution, and the group <σ, θ> generated by σ and θ acts transitively on B.
To derive a rotation system from a 2-cell embedding of a connected multigraph G on an oriented surface, let B consist of the darts (or flags, or half-edges) of G; that is, for each edge of G we form two elements of B, one for each endpoint of the edge. Even when an edge has the same vertex as both of its endpoints, we create two darts for that edge. We let θ(b) be the other dart formed from the same edge as b; this is clearly an involution with no fixed points. We let σ(b) be the dart in the clockwise position from b in the cyclic order of edges incident to the same vertex, where "clockwise" is defined by the orientation of the surface.
If a multigraph is embedded on an orientable but not oriented surface, it generally corresponds to two rotation systems, one for each of the two orientations of the surface. These two rotation systems have the same involution θ, but the permutation σ for one rotation system is the inverse of the corresponding permutation for the other rotation system.
Recovering the embedding from the rotation system
To recover a multigraph from a rotation system, we form a vertex for each orbit of σ, and an edge for each orbit of θ. A vertex is incident with an edge if these two orbits have a nonempty intersection. Thus, the number of incidences per vertex is the size of the orbit, and the number of incidences per edge is exactly two. If a rotation system is derived from a 2-cell embedding of a connected multigraph G, the graph derived from the rotation system is isomorphic to G.
To embed the graph derived from a rotation system onto a surface, form a disk for each orbit of σθ, and glue two disks together along an edge e whenever the two darts corresponding to e belong to the two orbits corresponding to these disks. The result is a 2-cell embedding of the derived multigraph, the two-cells of which are the disks corresponding to the orbits of σθ. The surface of this embedding can be oriented in such a way that the clockwise ordering of the edges around each vertex is the same as the clockwise ordering given by σ.
Characterizing the surface of the embedding
According to the Euler formula we can deduce the genus g of the closed orientable surface defined by the rotation system $(\sigma ,\theta )$ (that is, the surface on which the underlying multigraph is 2-cell embedded).[6] Notice that $V=|Z(\sigma )|$, $E=|Z(\theta )|$ and $F=|Z(\sigma \theta )|$. We find that
$g=1-{\frac {1}{2}}(V-E+F)=1-{\frac {1}{2}}(|Z(\sigma )|-|Z(\theta )|+|Z(\sigma \theta )|)$
where $Z(\phi )$ denotes the set of the orbits of permutation $\phi $.
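These orbit counts are easy to compute mechanically. The following Python sketch represents σ and θ as dictionaries on the dart set and evaluates the genus formula above; the dart labels in the usage example are illustrative assumptions:

```python
def orbits(perm):
    """Cycles of a permutation given as a dict from the ground set to itself."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        cycles.append(cycle)
    return cycles

def genus(sigma, theta):
    """Genus of the closed orientable surface defined by (sigma, theta)."""
    sigma_theta = {x: sigma[theta[x]] for x in sigma}  # (sigma theta)(x) = sigma(theta(x))
    V = len(orbits(sigma))         # vertices
    E = len(orbits(theta))         # edges
    F = len(orbits(sigma_theta))   # faces
    return 1 - (V - E + F) // 2
```

A single vertex with two interleaved loops, σ = (0 1 2 3) and θ = (0 2)(1 3), gives V = 1, E = 2, F = 1 and hence genus 1: the standard embedding of a bouquet of two circles on the torus.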
See also
• Combinatorial map
Notes
1. Heffter (1891), Heffter (1898)
2. Ringel (1965)
3. Edmonds (1960a), Edmonds (1960b)
4. Youngs (1963)
5. Gross & Alpert (1974)
6. Lando & Zvonkin (2004), formula 1.3, p. 38.
References
• Cori, R.; Machì, A. (1992). "Maps, hypermaps and their automorphisms: a survey". Expositiones Mathematicae. 10: 403–467. MR 1190182.
• Edmonds, J. (1960a). "A combinatorial representation for polyhedral surfaces". Notices of the American Mathematical Society. 7: 646.
• Edmonds, John Robert (1960b). A combinatorial representation for oriented polyhedral surfaces (PDF) (Masters). University of Maryland. hdl:1903/24820.
• Gross, J. L.; Alpert, S. R. (1974). "The topological theory of current graphs". Journal of Combinatorial Theory, Series B. 17 (3): 218–233. doi:10.1016/0095-8956(74)90028-8. MR 0363971.
• Heffter, L. (1891). "Über das Problem der Nachbargebiete". Mathematische Annalen. 38 (4): 477–508. doi:10.1007/BF01203357. S2CID 121206491.
• Heffter, L. (1898). "Über metacyklische Gruppen und Nachbarcontigurationen". Mathematische Annalen. 50 (2–3): 261–268. doi:10.1007/BF01448067. S2CID 120691296.
• Lando, Sergei K.; Zvonkin, Alexander K. (2004). Graphs on Surfaces and Their Applications. Encyclopaedia of Mathematical Sciences: Lower-Dimensional Topology II. Vol. 141. Springer-Verlag. ISBN 978-3-540-00203-1..
• Mohar, Bojan; Thomassen, Carsten (2001). Graphs on Surfaces. Johns Hopkins University Press. ISBN 0-8018-6689-8.
• Reingold, O.; Vadhan, S.; Wigderson, A. (2002). "Entropy waves, the zig-zag graph product, and new constant-degree expanders". Annals of Mathematics. 155 (1): 157–187. arXiv:math/0406038. doi:10.2307/3062153. JSTOR 3062153. MR 1888797. S2CID 120739405.
• Ringel, G. (1965). "Das Geschlecht des vollständigen paaren Graphen". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 28 (3–4): 139–150. doi:10.1007/BF02993245. MR 0189012. S2CID 120414651.
• Youngs, J. W. T. (1963). "Minimal imbeddings and the genus of a graph". Journal of Mathematics and Mechanics. 12 (2): 303–315. doi:10.1512/iumj.1963.12.12021. MR 0145512.
Rotational cryptanalysis
In cryptography, rotational cryptanalysis is a generic cryptanalytic attack against algorithms that rely on three operations: modular addition, rotation and XOR — ARX for short. Algorithms relying on these operations are popular because they are relatively cheap in both hardware and software and run in constant time, making them safe from timing attacks in common implementations.
The basic idea of rotational cryptanalysis is that both the bit rotation and XOR operations preserve correlations between bit-rotated pairs of inputs, and that addition of bit-rotated inputs also partially preserves bit rotation correlations. Rotational pairs of inputs can thus be used to "see through" the cipher's cascaded ARX operations to a greater degree than might be expected.[1] This ability to "see" correlations through rounds of processing can then be exploited to break the cipher in a way that is similar to differential cryptanalysis.
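The preservation properties are easy to check empirically. In the sketch below (32-bit words assumed), XOR and rotation preserve a rotational pair exactly, while modular addition does so only with some probability for random inputs, which is what a rotational attack exploits:

```python
import random

def rotl32(x, r):
    """Rotate the 32-bit word x left by r bits."""
    r %= 32
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def addition_survival_rate(trials=10000, r=7, seed=1):
    """Fraction of random pairs whose rotational relation survives modular addition."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.getrandbits(32), rng.getrandbits(32)
        # XOR preserves rotational pairs exactly, for every input:
        assert rotl32(x ^ y, r) == rotl32(x, r) ^ rotl32(y, r)
        # Modular addition preserves them only sometimes:
        if rotl32((x + y) & 0xFFFFFFFF, r) == (rotl32(x, r) + rotl32(y, r)) & 0xFFFFFFFF:
            hits += 1
    return hits / trials
```

With the fixed seed above, the measured survival rate comes out close to 1/4.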
The term "rotational cryptanalysis" was coined by Dmitry Khovratovich and Ivica Nikolić in their 2010 paper "Rotational Cryptanalysis of ARX", which presented the best cryptanalytic attacks at the time against a reduced-round Threefish cipher, part of the Skein hash function, a SHA-3 competition candidate.[1][2] A follow-up attack by the same authors and Christian Rechberger breaks the collision resistance of up to 53 of 72 rounds in Skein-256 and 57 of 72 rounds in Skein-512; it also affects the Threefish cipher.[3]
References
1. Dmitry Khovratovich & Ivica Nikolić (2010). "Rotational Cryptanalysis of ARX" (PDF). University of Luxembourg. {{cite journal}}: Cite journal requires |journal= (help)
2. Bruce Schneier (2010-02-07). "Schneier on Security: New Attack on Threefish".
3. Dmitry Khovratovich; Ivica Nikolic; Christian Rechberger (2010-10-20). "Rotational Rebound Attacks on Reduced Skein". Cryptology ePrint Archive.
Rotation formalisms in three dimensions
In geometry, various formalisms exist to express a rotation in three dimensions as a mathematical transformation. In physics, this concept is applied to classical mechanics where rotational (or angular) kinematics is the science of quantitative description of a purely rotational motion. The orientation of an object at a given instant is described with the same tools, as it is defined as an imaginary rotation from a reference placement in space, rather than an actually observed rotation from a previous placement in space.
For broader coverage of this topic, see Rotation group SO(3).
According to Euler's rotation theorem the rotation of a rigid body (or three-dimensional coordinate system with a fixed origin) is described by a single rotation about some axis. Such a rotation may be uniquely described by a minimum of three real parameters. However, for various reasons, there are several ways to represent it. Many of these representations use more than the necessary minimum of three parameters, although each of them still has only three degrees of freedom.
An example where rotation representation is used is in computer vision, where an automated observer needs to track a target. Consider a rigid body, with three orthogonal unit vectors fixed to its body (representing the three axes of the object's local coordinate system). The basic problem is to specify the orientation of these three unit vectors, and hence the rigid body, with respect to the observer's coordinate system, regarded as a reference placement in space.
Rotations and motions
Main articles: Motion (geometry) and Rotation (mathematics)
Rotation formalisms focus on proper (orientation-preserving) motions of the Euclidean space with one fixed point, which is what a rotation refers to. Although physical motions with a fixed point are an important case (such as motions described in the center-of-mass frame, or motions of a joint), this approach yields knowledge about all motions: any proper motion of the Euclidean space decomposes into a rotation around the origin and a translation, and whichever order they are composed in, the "pure" rotation component does not change, being uniquely determined by the complete motion.
One can also understand "pure" rotations as linear maps in a vector space equipped with a Euclidean structure, rather than as maps of points of a corresponding affine space. In other words, a rotation formalism captures only the rotational part of a motion, which contains three degrees of freedom, and ignores the translational part, which contains another three.
When representing a rotation as numbers in a computer, some people prefer the quaternion representation or the axis+angle representation, because they avoid the gimbal lock that can occur with Euler rotations.[1]
Formalism alternatives
Rotation matrix
Main article: Rotation matrix
The above-mentioned triad of unit vectors is also called a basis. Specifying the coordinates (components) of vectors of this basis in its current (rotated) position, in terms of the reference (non-rotated) coordinate axes, will completely describe the rotation. The three unit vectors, û, v̂ and ŵ, that form the rotated basis each consist of 3 coordinates, yielding a total of 9 parameters.
These parameters can be written as the elements of a 3 × 3 matrix A, called a rotation matrix. Typically, the coordinates of each of these vectors are arranged along a column of the matrix (however, beware that an alternative definition of rotation matrix exists and is widely used, where the vectors' coordinates defined above are arranged by rows[2])
$\mathbf {A} ={\begin{bmatrix}{\hat {\mathbf {u} }}_{x}&{\hat {\mathbf {v} }}_{x}&{\hat {\mathbf {w} }}_{x}\\{\hat {\mathbf {u} }}_{y}&{\hat {\mathbf {v} }}_{y}&{\hat {\mathbf {w} }}_{y}\\{\hat {\mathbf {u} }}_{z}&{\hat {\mathbf {v} }}_{z}&{\hat {\mathbf {w} }}_{z}\\\end{bmatrix}}$
The elements of the rotation matrix are not all independent—as Euler's rotation theorem dictates, the rotation matrix has only three degrees of freedom.
The rotation matrix has the following properties:
• A is a real, orthogonal matrix, hence each of its rows or columns represents a unit vector.
• The eigenvalues of A are
$\left\{1,e^{\pm i\theta }\right\}=\{1,\ \cos \theta +i\sin \theta ,\ \cos \theta -i\sin \theta \}$
where i is the standard imaginary unit with the property i2 = −1
• The determinant of A is +1, equivalent to the product of its eigenvalues.
• The trace of A is 1 + 2 cos θ, equivalent to the sum of its eigenvalues.
The angle θ which appears in the eigenvalue expression corresponds to the angle of the Euler axis and angle representation. The eigenvector corresponding to the eigenvalue of 1 is the accompanying Euler axis, since the axis is the only (nonzero) vector which remains unchanged by left-multiplying (rotating) it with the rotation matrix.
The above properties are equivalent to
${\begin{aligned}|{\hat {\mathbf {u} }}|=|{\hat {\mathbf {v} }}|=|{\hat {\mathbf {w} }}|&=1\\{\hat {\mathbf {u} }}\cdot {\hat {\mathbf {v} }}&=0\\{\hat {\mathbf {u} }}\times {\hat {\mathbf {v} }}&={\hat {\mathbf {w} }}\,,\end{aligned}}$
which is another way of stating that (û, v̂, ŵ) form a 3D orthonormal basis. These statements comprise a total of 6 conditions (the cross product contains 3), leaving the rotation matrix with just 3 degrees of freedom, as required.
Two successive rotations represented by matrices A1 and A2 are easily combined as elements of a group,
$\mathbf {A} _{\text{total}}=\mathbf {A} _{2}\mathbf {A} _{1}$
(Note the order, since the vector being rotated is multiplied from the right).
The ease by which vectors can be rotated using a rotation matrix, as well as the ease of combining successive rotations, make the rotation matrix a useful and popular way to represent rotations, even though it is less concise than other representations.
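A small Python sketch of composing rotations as matrix products, using 90° rotations about the z- and x-axes whose entries are exact integers:

```python
def matmul3(A, B):
    """Product of two 3x3 matrices (lists of rows)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec3(A, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 90 degrees about the z-axis
Rx = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]   # 90 degrees about the x-axis
```

Rotating first by Rz and then by Rx is the single matrix `matmul3(Rx, Rz)`; note the order, matching A_total = A2 A1 above, and that reversing it gives a different rotation.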
Euler axis and angle (rotation vector)
Main article: Axis–angle representation
From Euler's rotation theorem we know that any rotation can be expressed as a single rotation about some axis. The axis is the unit vector (unique except for sign) which remains unchanged by the rotation. The magnitude of the angle is also unique, with its sign being determined by the sign of the rotation axis.
The axis can be represented as a three-dimensional unit vector
${\hat {\mathbf {e} }}={\begin{bmatrix}e_{x}\\e_{y}\\e_{z}\end{bmatrix}}$
and the angle by a scalar θ.
Since the axis is normalized, it has only two degrees of freedom. The angle adds the third degree of freedom to this rotation representation.
One may wish to express rotation as a rotation vector, or Euler vector, an un-normalized three-dimensional vector the direction of which specifies the axis, and the length of which is θ,
$\mathbf {r} =\theta {\hat {\mathbf {e} }}\,.$
The rotation vector is useful in some contexts, as it represents a three-dimensional rotation with only three scalar values (its components), representing the three degrees of freedom. This is also true for representations based on sequences of three Euler angles (see below).
If the rotation angle θ is zero, the axis is not uniquely defined. Combining two successive rotations, each represented by an Euler axis and angle, is not straightforward, and in fact does not satisfy the law of vector addition, which shows that finite rotations are not really vectors at all. It is best to employ the rotation matrix or quaternion notation, calculate the product, and then convert back to Euler axis and angle.
Euler rotations
The idea behind Euler rotations is to split the complete rotation of the coordinate system into three simpler constitutive rotations, called precession, nutation, and intrinsic rotation, each of which is an increment of one of the Euler angles. Notice that the outer matrix represents a rotation around one of the axes of the reference frame, and the inner matrix represents a rotation around one of the moving frame axes. The middle matrix represents a rotation around an intermediate axis called the line of nodes.
However, the definition of Euler angles is not unique and in the literature many different conventions are used. These conventions depend on the axes about which the rotations are carried out, and their sequence (since rotations are not commutative).
The convention being used is usually indicated by specifying the axes about which the consecutive rotations (before being composed) take place, referring to them by index (1, 2, 3) or letter (X, Y, Z). The engineering and robotics communities typically use 3-1-3 Euler angles. Notice that, once the independent rotations are composed, they no longer act about fixed axes of the reference frame: the most external matrix rotates the other two, leaving the second rotation matrix over the line of nodes, and the third one in a frame comoving with the body. There are 3 × 3 × 3 = 27 possible combinations of three basic rotations but only 3 × 2 × 2 = 12 of them can be used for representing arbitrary 3D rotations as Euler angles. These 12 combinations avoid consecutive rotations around the same axis (such as XXY) which would reduce the degrees of freedom that can be represented.
Therefore, Euler angles are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. Other conventions (e.g., rotation matrix or quaternions) are used to avoid this problem.
In aviation, the orientation of the aircraft is usually expressed as intrinsic Tait–Bryan angles following the z-y′-x″ convention, which are called heading, elevation, and bank (or synonymously, yaw, pitch, and roll).
Quaternions
Main article: Quaternions and spatial rotation
Quaternions, which form a four-dimensional vector space, have proven very useful in representing rotations due to several advantages over the other representations mentioned in this article.
A quaternion representation of rotation is written as a versor (normalized quaternion):
${\hat {\mathbf {q} }}=q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} +q_{r}={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\\q_{r}\end{bmatrix}}$
The above definition stores the quaternion as an array following the convention used in (Wertz 1980) and (Markley 2003). An alternative definition, used for example in (Coutsias 1999) and (Schmidt 2001), defines the "scalar" term as the first quaternion element, with the other elements shifted down one position.
In terms of the Euler axis
${\hat {\mathbf {e} }}={\begin{bmatrix}e_{x}\\e_{y}\\e_{z}\end{bmatrix}}$
and angle θ this versor's components are expressed as follows:
${\begin{aligned}q_{i}&=e_{x}\sin {\frac {\theta }{2}}\\q_{j}&=e_{y}\sin {\frac {\theta }{2}}\\q_{k}&=e_{z}\sin {\frac {\theta }{2}}\\q_{r}&=\cos {\frac {\theta }{2}}\end{aligned}}$
Inspection shows that the quaternion parametrization obeys the following constraint:
$q_{i}^{2}+q_{j}^{2}+q_{k}^{2}+q_{r}^{2}=1$
The last term (in our definition) is often called the scalar term, which has its origin in quaternions when understood as the mathematical extension of the complex numbers, written as
$a+bi+cj+dk\qquad {\text{with }}a,b,c,d\in \mathbb {R} $
and where {i, j, k} are the hypercomplex numbers satisfying
${\begin{array}{ccccccc}i^{2}&=&j^{2}&=&k^{2}&=&-1\\ij&=&-ji&=&k&&\\jk&=&-kj&=&i&&\\ki&=&-ik&=&j&&\end{array}}$
Quaternion multiplication, which is used to specify a composite rotation, is performed in the same manner as multiplication of complex numbers, except that the order of the elements must be taken into account, since multiplication is not commutative. In matrix notation we can write quaternion multiplication as
${\tilde {\mathbf {q} }}\otimes \mathbf {q} ={\begin{bmatrix}\;\;\,q_{r}&\;\;\,q_{k}&-q_{j}&\;\;\,q_{i}\\-q_{k}&\;\;\,q_{r}&\;\;\,q_{i}&\;\;\,q_{j}\\\;\;\,q_{j}&-q_{i}&\;\;\,q_{r}&\;\;\,q_{k}\\-q_{i}&-q_{j}&-q_{k}&\;\;\,q_{r}\end{bmatrix}}{\begin{bmatrix}{\tilde {q}}_{i}\\{\tilde {q}}_{j}\\{\tilde {q}}_{k}\\{\tilde {q}}_{r}\end{bmatrix}}={\begin{bmatrix}\;\;\,{\tilde {q}}_{r}&-{\tilde {q}}_{k}&\;\;\,{\tilde {q}}_{j}&\;\;\,{\tilde {q}}_{i}\\\;\;\,{\tilde {q}}_{k}&\;\;\,{\tilde {q}}_{r}&-{\tilde {q}}_{i}&\;\;\,{\tilde {q}}_{j}\\-{\tilde {q}}_{j}&\;\;\,{\tilde {q}}_{i}&\;\;\,{\tilde {q}}_{r}&\;\;\,{\tilde {q}}_{k}\\-{\tilde {q}}_{i}&-{\tilde {q}}_{j}&-{\tilde {q}}_{k}&\;\;\,{\tilde {q}}_{r}\end{bmatrix}}{\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\\q_{r}\end{bmatrix}}$
Combining two consecutive quaternion rotations is therefore just as simple as using the rotation matrix. Just as two successive rotation matrices, A1 followed by A2, are combined as
$\mathbf {A} _{3}=\mathbf {A} _{2}\mathbf {A} _{1},$
we can represent this with quaternion parameters in a similarly concise way:
$\mathbf {q} _{3}=\mathbf {q} _{2}\otimes \mathbf {q} _{1}$
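The composition rule can be checked numerically. The sketch below (plain Python; scalar-last component order (qi, qj, qk, qr) as in this article; helper names such as `qmul` are illustrative) builds two quaternions from axis-angle data, composes them as q3 = q2 ⊗ q1, and confirms that rotating a vector by q3, via the standard sandwich product q (v, 0) q*, matches the two successive rotations:

```python
import math

def qmul(a, b):
    # Hamilton product a ⊗ b; quaternions stored scalar-last as (qi, qj, qk, qr)
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def from_axis_angle(axis, theta):
    s = math.sin(theta / 2)
    return (axis[0] * s, axis[1] * s, axis[2] * s, math.cos(theta / 2))

def rotate(q, v):
    # v' = q ⊗ (v, 0) ⊗ q*, where the conjugate q* flips the vector part
    conj = (-q[0], -q[1], -q[2], q[3])
    return qmul(qmul(q, (v[0], v[1], v[2], 0.0)), conj)[:3]

q1 = from_axis_angle((0, 0, 1), math.pi / 2)   # 90° about z
q2 = from_axis_angle((1, 0, 0), math.pi / 2)   # 90° about x
q3 = qmul(q2, q1)                              # q3 = q2 ⊗ q1, same order as A2 A1

v = (1.0, 0.0, 0.0)
twice = rotate(q2, rotate(q1, v))
once = rotate(q3, v)
assert all(abs(x - y) < 1e-12 for x, y in zip(twice, once))
```

Since q2 (q1 v q1*) q2* = (q2 ⊗ q1) v (q2 ⊗ q1)*, the single composed quaternion reproduces the sequential result exactly.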
Quaternions are a very popular parametrization due to the following properties:
• More compact than the matrix representation and less susceptible to round-off errors
• The quaternion elements vary continuously over the unit sphere in ℝ4 (denoted by S3) as the orientation changes, avoiding discontinuous jumps (inherent to three-dimensional parameterizations)
• Expression of the rotation matrix in terms of quaternion parameters involves no trigonometric functions
• It is simple to combine two individual rotations represented as quaternions using a quaternion product
Like rotation matrices, quaternions must sometimes be renormalized due to rounding errors, to make sure that they correspond to valid rotations. The computational cost of renormalizing a quaternion, however, is much less than for normalizing a 3 × 3 matrix.
Quaternions also capture the spinorial character of rotations in three dimensions. For a three-dimensional object connected to its (fixed) surroundings by slack strings or bands, the strings or bands can be untangled after two complete turns about some fixed axis from an initial untangled state. Algebraically, the quaternion describing such a rotation changes from a scalar +1 (initially), through (scalar + pseudovector) values to scalar −1 (at one full turn), through (scalar + pseudovector) values back to scalar +1 (at two full turns). This cycle repeats every 2 turns. After 2n turns (integer n > 0), without any intermediate untangling attempts, the strings/bands can be partially untangled back to the 2(n − 1) turns state with each application of the same procedure used in untangling from 2 turns to 0 turns. Applying the same procedure n times will take a 2n-tangled object back to the untangled or 0 turn state. The untangling process also removes any rotation-generated twisting about the strings/bands themselves. Simple 3D mechanical models can be used to demonstrate these facts.
Rodrigues vector
See also: Rodrigues' rotation formula
The Rodrigues vector (sometimes called the Gibbs vector, with coordinates called Rodrigues parameters)[3][4] can be expressed in terms of the axis and angle of the rotation as follows:
$\mathbf {g} ={\hat {\mathbf {e} }}\tan {\frac {\theta }{2}}$
This representation is a higher-dimensional analog of the gnomonic projection, mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane.
It has a discontinuity at 180° (π radians): as any rotation vector r tends to an angle of π radians, its tangent tends to infinity.
A rotation g followed by a rotation f in the Rodrigues representation has the simple rotation composition form
$(\mathbf {g} ,\mathbf {f} )={\frac {\mathbf {g} +\mathbf {f} +\mathbf {f} \times \mathbf {g} }{1-\mathbf {g} \cdot \mathbf {f} }}\,.$
Today, the most straightforward way to prove this formula is in the (faithful) doublet representation, where g = n̂ tan a, etc.
The combinatoric features of the Pauli matrix derivation just mentioned are also identical to the equivalent quaternion derivation below. Construct a quaternion associated with a spatial rotation R as,
$S=\cos {\frac {\phi }{2}}+\sin {\frac {\phi }{2}}\mathbf {S} .$
Then the composition of the rotation RB with RA is the rotation RC = RBRA, with rotation axis and angle defined by the product of the quaternions,
$A=\cos {\frac {\alpha }{2}}+\sin {\frac {\alpha }{2}}\mathbf {A} \quad {\text{and}}\quad B=\cos {\frac {\beta }{2}}+\sin {\frac {\beta }{2}}\mathbf {B} ,$
that is
$C=\cos {\frac {\gamma }{2}}+\sin {\frac {\gamma }{2}}\mathbf {C} =\left(\cos {\frac {\beta }{2}}+\sin {\frac {\beta }{2}}\mathbf {B} \right)\left(\cos {\frac {\alpha }{2}}+\sin {\frac {\alpha }{2}}\mathbf {A} \right).$
Expand this quaternion product to
$\cos {\frac {\gamma }{2}}+\sin {\frac {\gamma }{2}}\mathbf {C} =\left(\cos {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}-\sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}}\mathbf {B} \cdot \mathbf {A} \right)+\left(\sin {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}\mathbf {B} +\sin {\frac {\alpha }{2}}\cos {\frac {\beta }{2}}\mathbf {A} +\sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}}\mathbf {B} \times \mathbf {A} \right).$
Divide both sides of this equation by the identity resulting from the previous one,
$\cos {\frac {\gamma }{2}}=\cos {\frac {\beta }{2}}\cos {\frac {\alpha }{2}}-\sin {\frac {\beta }{2}}\sin {\frac {\alpha }{2}}\mathbf {B} \cdot \mathbf {A} ,$
and evaluate
$\tan {\frac {\gamma }{2}}\mathbf {C} ={\frac {\tan {\frac {\beta }{2}}\mathbf {B} +\tan {\frac {\alpha }{2}}\mathbf {A} +\tan {\frac {\beta }{2}}\tan {\frac {\alpha }{2}}\mathbf {B} \times \mathbf {A} }{1-\tan {\frac {\beta }{2}}\tan {\frac {\alpha }{2}}\mathbf {B} \cdot \mathbf {A} }}.$
This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two component rotations. Rodrigues derived this formula in 1840 (see page 408).[3] The three rotation axes A, B, and C form a spherical triangle and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles.
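As a numeric sanity check (a sketch, not from the source), one can compare the composite axis and angle produced by this formula with the ratio of vector part to scalar part of the quaternion product q_B ⊗ q_A, which equals tan(γ/2) C:

```python
import math

def qmul(a, b):
    # Hamilton product, scalar-last storage (qi, qj, qk, qr)
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2], u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# First rotation: alpha about A_hat; second rotation: beta about B_hat
alpha, a_hat = math.pi / 2, (0.0, 0.0, 1.0)
beta, b_hat = math.pi / 2, (1.0, 0.0, 0.0)

# Rodrigues' composite-axis formula:
# tan(γ/2) C = (tb B + ta A + tb ta B×A) / (1 - tb ta B·A)
ta, tb = math.tan(alpha / 2), math.tan(beta / 2)
num = [tb * b + ta * a + tb * ta * c
       for a, b, c in zip(a_hat, b_hat, cross(b_hat, a_hat))]
den = 1 - tb * ta * dot(b_hat, a_hat)
tan_half_c = [n / den for n in num]

# Same composite from the quaternion product q_B ⊗ q_A
qa = tuple(x * math.sin(alpha / 2) for x in a_hat) + (math.cos(alpha / 2),)
qb = tuple(x * math.sin(beta / 2) for x in b_hat) + (math.cos(beta / 2),)
qc = qmul(qb, qa)
expected = [qc[i] / qc[3] for i in range(3)]   # vector part / scalar part

assert all(abs(x - y) < 1e-12 for x, y in zip(tan_half_c, expected))
```

For these two 90° rotations the composite is a 120° rotation about (1, −1, 1)/√3, and both computations give tan(γ/2) C = (1, −1, 1).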
Modified Rodrigues parameters (MRPs) can be expressed in terms of Euler axis and angle by
$\mathbf {p} ={\hat {\mathbf {e} }}\tan {\frac {\theta }{4}}\,.$
Its components can be expressed in terms of the components of a unit quaternion representing the same rotation as
$p_{x,y,z}={\frac {q_{i,j,k}}{1+q_{r}}}\,.$
The modified Rodrigues vector is a stereographic projection mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane. The projection of the opposite quaternion −q results in a different modified Rodrigues vector ps than the projection of the original quaternion q. Comparing components one obtains that
$p_{x,y,z}^{s}={\frac {-q_{i,j,k}}{1-q_{r}}}={\frac {-p_{x,y,z}}{\mathbf {p} ^{2}}}\,.$
Notably, if one of these vectors lies inside the unit sphere, the other will lie outside.
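Both MRP relations are easy to verify numerically. The following sketch (plain Python; scalar-last quaternion layout as in this article) computes the MRP of a rotation and the shadow set obtained by projecting the opposite quaternion −q, and checks the |p| = tan(θ/4) and ps = −p/|p|² relations:

```python
import math

# Modified Rodrigues parameters from a unit quaternion (qi, qj, qk, qr),
# together with the "shadow" set obtained by projecting the opposite quaternion -q.
theta = 2 * math.pi / 3                        # 120° rotation about z
q = (0.0, 0.0, math.sin(theta / 2), math.cos(theta / 2))

p = [q[i] / (1 + q[3]) for i in range(3)]      # p = q_vec / (1 + q_r)
ps = [-q[i] / (1 - q[3]) for i in range(3)]    # shadow: projection of -q

# MRP magnitude is tan(theta/4)
norm_p = math.sqrt(sum(x * x for x in p))
assert abs(norm_p - math.tan(theta / 4)) < 1e-12

# Shadow relation: ps = -p / |p|^2
norm2 = sum(x * x for x in p)
assert all(abs(a + b / norm2) < 1e-12 for a, b in zip(ps, p))
```

Here |p| = tan 30° < 1 while |ps| = √3 > 1, illustrating the inside/outside pairing noted above.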
Cayley–Klein parameters
See definition at Wolfram MathWorld.
Higher-dimensional analogues
See also: Rotations in 4-dimensional Euclidean space
Vector transformation law
Active rotations of a 3D vector p in Euclidean space around an axis n over an angle η can be easily written in terms of dot and cross products as follows:
$\mathbf {p} '=p_{\parallel }\mathbf {n} +\cos {\eta }\,\mathbf {p} _{\perp }+\sin {\eta }\,\mathbf {n} \wedge \mathbf {p} $
wherein
$p_{\parallel }=\mathbf {p} \cdot \mathbf {n} $
is the longitudinal component of p along n, given by the dot product,
$\mathbf {p} _{\perp }=\mathbf {p} -(\mathbf {p} \cdot \mathbf {n} )\mathbf {n} $
is the transverse component of p with respect to n, and
$\mathbf {n} \wedge \mathbf {p} $
is the cross product of n with p.
The above formula shows that the longitudinal component of p remains unchanged, whereas the transverse portion of p is rotated in the plane perpendicular to n. This plane is spanned by the transverse portion of p itself and a direction perpendicular to both p and n. The rotation is directly identifiable in the equation as a 2D rotation over an angle η.
Passive rotations can be described by the same formula, but with an inverse sign of either η or n.
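The decomposition can be sketched directly in code. The example below (plain Python; it assumes the right-handed active convention in which the transverse term is sin η (n × p), and the helper names are illustrative) rotates a vector about a unit axis and checks the result against a known rotation:

```python
import math

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2], u[0] * v[1] - u[1] * v[0]]

def rotate(p, n, eta):
    # Keep the component of p parallel to the unit axis n; rotate the
    # transverse component by eta in the plane perpendicular to n.
    p_par = sum(a * b for a, b in zip(p, n))           # scalar longitudinal component
    p_perp = [a - p_par * b for a, b in zip(p, n)]     # transverse component
    nxp = cross(n, p)
    return [p_par * b + math.cos(eta) * t + math.sin(eta) * c
            for b, t, c in zip(n, p_perp, nxp)]

# 90° about z maps x-hat to y-hat in the active, right-handed convention
out = rotate([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], math.pi / 2)
assert all(abs(a - b) < 1e-12 for a, b in zip(out, [0.0, 1.0, 0.0]))

# Rotation preserves length for a generic vector
v = rotate([1.0, 2.0, 3.0], [0.0, 0.0, 1.0], 0.7)
assert abs(math.sqrt(sum(x * x for x in v)) - math.sqrt(14.0)) < 1e-12
```

Flipping the sign of either `eta` or `n`, as stated above, produces the corresponding passive rotation.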
Conversion formulae between formalisms
Main article: Charts on SO(3)
Rotation matrix ↔ Euler angles
The Euler angles (φ, θ, ψ) can be extracted from the rotation matrix A by inspecting the rotation matrix in analytical form.
Rotation matrix → Euler angles (z-x-z extrinsic)
Using the x-convention, the 3-1-3 extrinsic Euler angles φ, θ and ψ (around the z-axis, x-axis and again the z-axis) can be obtained as follows:
${\begin{aligned}\phi &=\operatorname {atan2} \left(A_{31},A_{32}\right)\\\theta &=\arccos \left(A_{33}\right)\\\psi &=-\operatorname {atan2} \left(A_{13},A_{23}\right)\end{aligned}}$
Note that atan2(a, b) is equivalent to arctan a/b, except that it also takes into account the quadrant in which the point (b, a) lies; see atan2.
When implementing the conversion, one has to take into account several situations:[5]
• There are generally two solutions in the interval [−π, π]3. The above formula works only when θ is within the interval [0, π].
• For the special case A33 = ±1 (that is, θ = 0 or π), φ and ψ are not individually determined and must be derived jointly from A11 and A12.
• There are countably infinitely many solutions outside of the interval [−π, π]3.
• Whether all mathematical solutions apply for a given application depends on the situation.
Euler angles (z-y′-x″ intrinsic) → rotation matrix
The rotation matrix A is generated from the 3-2-1 intrinsic Euler angles by multiplying the three matrices generated by rotations about the axes.
$\mathbf {A} =\mathbf {A} _{3}\mathbf {A} _{2}\mathbf {A} _{1}=\mathbf {A} _{Z}\mathbf {A} _{Y}\mathbf {A} _{X}$
The axes of the rotation depend on the specific convention being used. For the x-convention the rotations are about the x-, y- and z-axes with angles ϕ, θ and ψ, the individual matrices are as follows:
${\begin{aligned}\mathbf {A} _{X}&={\begin{bmatrix}1&0&0\\0&\cos \phi &-\sin \phi \\0&\sin \phi &\cos \phi \end{bmatrix}}\\[5px]\mathbf {A} _{Y}&={\begin{bmatrix}\cos \theta &0&\sin \theta \\0&1&0\\-\sin \theta &0&\cos \theta \end{bmatrix}}\\[5px]\mathbf {A} _{Z}&={\begin{bmatrix}\cos \psi &-\sin \psi &0\\\sin \psi &\cos \psi &0\\0&0&1\end{bmatrix}}\end{aligned}}$
This yields
$\mathbf {A} ={\begin{bmatrix}\cos \theta \cos \psi &-\cos \phi \sin \psi +\sin \phi \sin \theta \cos \psi &\sin \phi \sin \psi +\cos \phi \sin \theta \cos \psi \\\cos \theta \sin \psi &\cos \phi \cos \psi +\sin \phi \sin \theta \sin \psi &-\sin \phi \cos \psi +\cos \phi \sin \theta \sin \psi \\-\sin \theta &\sin \phi \cos \theta &\cos \phi \cos \theta \\\end{bmatrix}}$
Note: This is valid for a right-hand system, which is the convention used in almost all engineering and physics disciplines.
The interpretation of these right-handed rotation matrices is that they express coordinate transformations (passive) as opposed to point transformations (active). Because A expresses a rotation from the local frame 1 to the global frame 0 (i.e., A encodes the axes of frame 1 with respect to frame 0), the elementary rotation matrices are composed as above. Because the inverse rotation is just the rotation transposed, if we wanted the global-to-local rotation from frame 0 to frame 1, we would write
$\mathbf {A} ^{\mathsf {T}}=(\mathbf {A} _{Z}\mathbf {A} _{Y}\mathbf {A} _{X})^{\mathsf {T}}=\mathbf {A} _{X}^{\mathsf {T}}\mathbf {A} _{Y}^{\mathsf {T}}\mathbf {A} _{Z}^{\mathsf {T}}\,.$
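The closed-form matrix above can be checked by multiplying the three elementary matrices directly. The sketch below (plain Python; helper names are illustrative) compares A = A_Z A_Y A_X against the expanded entries for sample angles:

```python
import math

def a_x(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def a_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def a_z(psi):
    c, s = math.cos(psi), math.sin(psi)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

phi, theta, psi = 0.3, -0.7, 1.2
a = matmul(a_z(psi), matmul(a_y(theta), a_x(phi)))   # A = A_Z A_Y A_X

cf, sf = math.cos(phi), math.sin(phi)
ct, st = math.cos(theta), math.sin(theta)
cp, sp = math.cos(psi), math.sin(psi)
closed = [
    [ct * cp, -cf * sp + sf * st * cp, sf * sp + cf * st * cp],
    [ct * sp, cf * cp + sf * st * sp, -sf * cp + cf * st * sp],
    [-st, sf * ct, cf * ct],
]
assert all(abs(a[i][j] - closed[i][j]) < 1e-12 for i in range(3) for j in range(3))
```

The entry A31 = −sin θ is the basis of the usual pitch extraction in the reverse conversion.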
Rotation matrix ↔ Euler axis/angle
If the Euler angle θ is not a multiple of π, the Euler axis ê and angle θ can be computed from the elements of the rotation matrix A as follows:
${\begin{aligned}\theta &=\arccos {\frac {A_{11}+A_{22}+A_{33}-1}{2}}\\e_{1}&={\frac {A_{32}-A_{23}}{2\sin \theta }}\\e_{2}&={\frac {A_{13}-A_{31}}{2\sin \theta }}\\e_{3}&={\frac {A_{21}-A_{12}}{2\sin \theta }}\end{aligned}}$
Alternatively, the following method can be used:
Eigendecomposition of the rotation matrix yields the eigenvalues 1 and cos θ ± i sin θ. The Euler axis is the eigenvector corresponding to the eigenvalue of 1, and θ can be computed from the remaining eigenvalues.
The Euler axis can be also found using singular value decomposition since it is the normalized vector spanning the null-space of the matrix I − A.
To convert the other way the rotation matrix corresponding to an Euler axis ê and angle θ can be computed according to Rodrigues' rotation formula (with appropriate modification) as follows:
$\mathbf {A} =\mathbf {I} _{3}\cos \theta +(1-\cos \theta ){\hat {\mathbf {e} }}{\hat {\mathbf {e} }}^{\mathsf {T}}+\left[{\hat {\mathbf {e} }}\right]_{\times }\sin \theta $
with I3 the 3 × 3 identity matrix, and
$\left[{\hat {\mathbf {e} }}\right]_{\times }={\begin{bmatrix}0&-e_{3}&e_{2}\\e_{3}&0&-e_{1}\\-e_{2}&e_{1}&0\end{bmatrix}}$
is the cross-product matrix.
This expands to:
${\begin{aligned}A_{11}&=(1-\cos \theta )e_{1}^{2}+\cos \theta \\A_{12}&=(1-\cos \theta )e_{1}e_{2}-e_{3}\sin \theta \\A_{13}&=(1-\cos \theta )e_{1}e_{3}+e_{2}\sin \theta \\A_{21}&=(1-\cos \theta )e_{2}e_{1}+e_{3}\sin \theta \\A_{22}&=(1-\cos \theta )e_{2}^{2}+\cos \theta \\A_{23}&=(1-\cos \theta )e_{2}e_{3}-e_{1}\sin \theta \\A_{31}&=(1-\cos \theta )e_{3}e_{1}-e_{2}\sin \theta \\A_{32}&=(1-\cos \theta )e_{3}e_{2}+e_{1}\sin \theta \\A_{33}&=(1-\cos \theta )e_{3}^{2}+\cos \theta \end{aligned}}$
See also: Rotation matrix § Rotation matrix from axis and angle
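Both directions of this conversion can be exercised in a short round trip. The sketch below (plain Python; helper names are illustrative) builds a matrix from an axis and angle via Rodrigues' formula, recovers the axis and angle from the matrix, and checks that the axis is fixed by the rotation:

```python
import math

def axis_angle_to_mat(e, theta):
    # A = I cos θ + (1 − cos θ) e e^T + [e]_x sin θ  (Rodrigues' rotation formula)
    c, s, v = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    e1, e2, e3 = e
    return [[v * e1 * e1 + c, v * e1 * e2 - e3 * s, v * e1 * e3 + e2 * s],
            [v * e2 * e1 + e3 * s, v * e2 * e2 + c, v * e2 * e3 - e1 * s],
            [v * e3 * e1 - e2 * s, v * e3 * e2 + e1 * s, v * e3 * e3 + c]]

def mat_to_axis_angle(a):
    # Valid when θ is not a multiple of π (so that sin θ ≠ 0)
    theta = math.acos((a[0][0] + a[1][1] + a[2][2] - 1) / 2)
    d = 2 * math.sin(theta)
    e = ((a[2][1] - a[1][2]) / d, (a[0][2] - a[2][0]) / d, (a[1][0] - a[0][1]) / d)
    return e, theta

axis = tuple(x / math.sqrt(3) for x in (1.0, 1.0, 1.0))
theta = 1.1
a = axis_angle_to_mat(axis, theta)
e_out, t_out = mat_to_axis_angle(a)

assert abs(t_out - theta) < 1e-12
assert all(abs(x - y) < 1e-12 for x, y in zip(e_out, axis))

# The axis is fixed by the rotation: A e = e
ae = [sum(a[i][k] * axis[k] for k in range(3)) for i in range(3)]
assert all(abs(x - y) < 1e-12 for x, y in zip(ae, axis))
```

The trace-based angle extraction is exactly the eigenvalue relation tr A = 1 + 2 cos θ noted earlier.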
Rotation matrix ↔ quaternion
When computing a quaternion from the rotation matrix there is a sign ambiguity, since q and −q represent the same rotation.
One way of computing the quaternion
$\mathbf {q} ={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\\q_{r}\end{bmatrix}}=q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} +q_{r}$
from the rotation matrix A is as follows:
${\begin{aligned}q_{r}&={\frac {1}{2}}{\sqrt {1+A_{11}+A_{22}+A_{33}}}\\q_{i}&={\frac {1}{4q_{r}}}\left(A_{32}-A_{23}\right)\\q_{j}&={\frac {1}{4q_{r}}}\left(A_{13}-A_{31}\right)\\q_{k}&={\frac {1}{4q_{r}}}\left(A_{21}-A_{12}\right)\end{aligned}}$
There are three other mathematically equivalent ways to compute q. Numerical inaccuracy can be reduced by avoiding situations in which the denominator is close to zero. One of the other three methods looks as follows:[6][7]
${\begin{aligned}q_{i}&={\frac {1}{2}}{\sqrt {1+A_{11}-A_{22}-A_{33}}}\\q_{j}&={\frac {1}{4q_{i}}}\left(A_{12}+A_{21}\right)\\q_{k}&={\frac {1}{4q_{i}}}\left(A_{13}+A_{31}\right)\\q_{r}&={\frac {1}{4q_{i}}}\left(A_{32}-A_{23}\right)\end{aligned}}$
The rotation matrix corresponding to the quaternion q can be computed as follows:
$\mathbf {A} =\left(q_{r}^{2}-{\check {\mathbf {q} }}^{\mathsf {T}}{\check {\mathbf {q} }}\right)\mathbf {I} _{3}+2{\check {\mathbf {q} }}{\check {\mathbf {q} }}^{\mathsf {T}}+2q_{r}\mathbf {\mathcal {Q}} $
where
${\check {\mathbf {q} }}={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\end{bmatrix}}\,,\quad \mathbf {\mathcal {Q}} ={\begin{bmatrix}0&-q_{k}&q_{j}\\q_{k}&0&-q_{i}\\-q_{j}&q_{i}&0\end{bmatrix}}$
which gives
$\mathbf {A} ={\begin{bmatrix}1-2q_{j}^{2}-2q_{k}^{2}&2\left(q_{i}q_{j}-q_{k}q_{r}\right)&2\left(q_{i}q_{k}+q_{j}q_{r}\right)\\2\left(q_{i}q_{j}+q_{k}q_{r}\right)&1-2q_{i}^{2}-2q_{k}^{2}&2\left(q_{j}q_{k}-q_{i}q_{r}\right)\\2\left(q_{i}q_{k}-q_{j}q_{r}\right)&2\left(q_{j}q_{k}+q_{i}q_{r}\right)&1-2q_{i}^{2}-2q_{j}^{2}\end{bmatrix}}$
or equivalently
$\mathbf {A} ={\begin{bmatrix}-1+2q_{i}^{2}+2q_{r}^{2}&2\left(q_{i}q_{j}-q_{k}q_{r}\right)&2\left(q_{i}q_{k}+q_{j}q_{r}\right)\\2\left(q_{i}q_{j}+q_{k}q_{r}\right)&-1+2q_{j}^{2}+2q_{r}^{2}&2\left(q_{j}q_{k}-q_{i}q_{r}\right)\\2\left(q_{i}q_{k}-q_{j}q_{r}\right)&2\left(q_{j}q_{k}+q_{i}q_{r}\right)&-1+2q_{k}^{2}+2q_{r}^{2}\end{bmatrix}}$
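A robust implementation typically evaluates all four equivalent extraction formulas and keeps the one with the largest pivot, so that the divisor stays well away from zero (in the spirit of Shepperd's method; the sketch below is plain Python with illustrative helper names, not a reference implementation). A round trip through the matrix form above recovers the quaternion up to the unavoidable global sign:

```python
import math

def quat_to_mat(q):
    qi, qj, qk, qr = q
    return [[1 - 2 * (qj * qj + qk * qk), 2 * (qi * qj - qk * qr), 2 * (qi * qk + qj * qr)],
            [2 * (qi * qj + qk * qr), 1 - 2 * (qi * qi + qk * qk), 2 * (qj * qk - qi * qr)],
            [2 * (qi * qk - qj * qr), 2 * (qj * qk + qi * qr), 1 - 2 * (qi * qi + qj * qj)]]

def mat_to_quat(a):
    # Each pivot equals 4 q_x^2 for one component; use the largest.
    tr_r = 1 + a[0][0] + a[1][1] + a[2][2]   # = 4 q_r^2
    tr_i = 1 + a[0][0] - a[1][1] - a[2][2]   # = 4 q_i^2
    tr_j = 1 - a[0][0] + a[1][1] - a[2][2]   # = 4 q_j^2
    tr_k = 1 - a[0][0] - a[1][1] + a[2][2]   # = 4 q_k^2
    m = max(tr_r, tr_i, tr_j, tr_k)
    p = math.sqrt(m) / 2
    if m == tr_r:
        return ((a[2][1] - a[1][2]) / (4 * p), (a[0][2] - a[2][0]) / (4 * p),
                (a[1][0] - a[0][1]) / (4 * p), p)
    if m == tr_i:
        return (p, (a[0][1] + a[1][0]) / (4 * p), (a[0][2] + a[2][0]) / (4 * p),
                (a[2][1] - a[1][2]) / (4 * p))
    if m == tr_j:
        return ((a[0][1] + a[1][0]) / (4 * p), p, (a[1][2] + a[2][1]) / (4 * p),
                (a[0][2] - a[2][0]) / (4 * p))
    return ((a[0][2] + a[2][0]) / (4 * p), (a[1][2] + a[2][1]) / (4 * p), p,
            (a[1][0] - a[0][1]) / (4 * p))

# Unit quaternion for a 1.0 rad rotation about (1, 2, 3)/√14
n, s, c = math.sqrt(14), math.sin(0.5), math.cos(0.5)
q = (1 / n * s, 2 / n * s, 3 / n * s, c)
q2 = mat_to_quat(quat_to_mat(q))

# q and -q represent the same rotation; compare up to a global sign
sign = 1.0 if sum(x * y for x, y in zip(q, q2)) > 0 else -1.0
assert all(abs(x - sign * y) < 1e-12 for x, y in zip(q, q2))
```

The sum-and-difference combinations used in each branch (e.g. A12 + A21 = 4 qi qj) follow directly from the matrix entries above.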
Euler angles ↔ quaternion
Main article: Conversion between quaternions and Euler angles
Euler angles (z-x-z extrinsic) → quaternion
We will consider the x-convention 3-1-3 extrinsic Euler angles for the following algorithm. The terms of the algorithm depend on the convention used.
We can compute the quaternion
$\mathbf {q} ={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\\q_{r}\end{bmatrix}}=q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} +q_{r}$
from the Euler angles (ϕ, θ, ψ) as follows:
${\begin{aligned}q_{i}&=\cos {\frac {\phi -\psi }{2}}\sin {\frac {\theta }{2}}\\q_{j}&=\sin {\frac {\phi -\psi }{2}}\sin {\frac {\theta }{2}}\\q_{k}&=\sin {\frac {\phi +\psi }{2}}\cos {\frac {\theta }{2}}\\q_{r}&=\cos {\frac {\phi +\psi }{2}}\cos {\frac {\theta }{2}}\end{aligned}}$
Euler angles (z-y′-x″ intrinsic) → quaternion
A quaternion equivalent to yaw (ψ), pitch (θ) and roll (ϕ) angles, or intrinsic Tait–Bryan angles following the z-y′-x″ convention, can be computed by
${\begin{aligned}q_{i}&=\sin {\frac {\phi }{2}}\cos {\frac {\theta }{2}}\cos {\frac {\psi }{2}}-\cos {\frac {\phi }{2}}\sin {\frac {\theta }{2}}\sin {\frac {\psi }{2}}\\q_{j}&=\cos {\frac {\phi }{2}}\sin {\frac {\theta }{2}}\cos {\frac {\psi }{2}}+\sin {\frac {\phi }{2}}\cos {\frac {\theta }{2}}\sin {\frac {\psi }{2}}\\q_{k}&=\cos {\frac {\phi }{2}}\cos {\frac {\theta }{2}}\sin {\frac {\psi }{2}}-\sin {\frac {\phi }{2}}\sin {\frac {\theta }{2}}\cos {\frac {\psi }{2}}\\q_{r}&=\cos {\frac {\phi }{2}}\cos {\frac {\theta }{2}}\cos {\frac {\psi }{2}}+\sin {\frac {\phi }{2}}\sin {\frac {\theta }{2}}\sin {\frac {\psi }{2}}\end{aligned}}$
Quaternion → Euler angles (z-x-z extrinsic)
Given the rotation quaternion
$\mathbf {q} ={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\\q_{r}\end{bmatrix}}=q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} +q_{r}\,,$
the x-convention 3-1-3 extrinsic Euler Angles (φ, θ, ψ) can be computed by
${\begin{aligned}\phi &=\operatorname {atan2} \left(\left(q_{i}q_{k}+q_{j}q_{r}\right),-\left(q_{j}q_{k}-q_{i}q_{r}\right)\right)\\\theta &=\arccos \left(-q_{i}^{2}-q_{j}^{2}+q_{k}^{2}+q_{r}^{2}\right)\\\psi &=\operatorname {atan2} \left(\left(q_{i}q_{k}-q_{j}q_{r}\right),\left(q_{j}q_{k}+q_{i}q_{r}\right)\right)\end{aligned}}$
Quaternion → Euler angles (z-y′-x″ intrinsic)
Given the rotation quaternion
$\mathbf {q} ={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\\q_{r}\end{bmatrix}}=q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} +q_{r}\,,$
yaw, pitch and roll angles, or intrinsic Tait–Bryan angles following the z-y′-x″ convention, can be computed by
${\begin{aligned}{\text{roll}}&=\operatorname {atan2} \left(2\left(q_{r}q_{i}+q_{j}q_{k}\right),1-2\left(q_{i}^{2}+q_{j}^{2}\right)\right)\\{\text{pitch}}&=\arcsin \left(2\left(q_{r}q_{j}-q_{k}q_{i}\right)\right)\\{\text{yaw}}&=\operatorname {atan2} \left(2\left(q_{r}q_{k}+q_{i}q_{j}\right),1-2\left(q_{j}^{2}+q_{k}^{2}\right)\right)\end{aligned}}$
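The two Tait–Bryan conversions above are mutually inverse away from the pitch singularity. The sketch below (plain Python; scalar-last quaternion layout as in this article, helper names illustrative) implements both and checks a round trip for |pitch| < π/2:

```python
import math

def ypr_to_quat(yaw, pitch, roll):
    # z-y'-x'' intrinsic Tait-Bryan angles -> quaternion (qi, qj, qk, qr)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

def quat_to_ypr(q):
    qi, qj, qk, qr = q
    roll = math.atan2(2 * (qr * qi + qj * qk), 1 - 2 * (qi * qi + qj * qj))
    pitch = math.asin(2 * (qr * qj - qk * qi))
    yaw = math.atan2(2 * (qr * qk + qi * qj), 1 - 2 * (qj * qj + qk * qk))
    return yaw, pitch, roll

angles = (1.1, -0.5, 0.3)          # yaw, pitch, roll, with |pitch| < pi/2
back = quat_to_ypr(ypr_to_quat(*angles))
assert all(abs(x - y) < 1e-12 for x, y in zip(angles, back))
```

At pitch = ±π/2 the arcsin argument reaches ±1 and yaw and roll become coupled (gimbal lock), so a production implementation would handle that case separately.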
Euler axis–angle ↔ quaternion
Given the Euler axis ê and angle θ, the quaternion
$\mathbf {q} ={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\\q_{r}\end{bmatrix}}=q_{i}\mathbf {i} +q_{j}\mathbf {j} +q_{k}\mathbf {k} +q_{r}\,,$
can be computed by
${\begin{aligned}q_{i}&={\hat {e}}_{1}\sin {\frac {\theta }{2}}\\q_{j}&={\hat {e}}_{2}\sin {\frac {\theta }{2}}\\q_{k}&={\hat {e}}_{3}\sin {\frac {\theta }{2}}\\q_{r}&=\cos {\frac {\theta }{2}}\end{aligned}}$
Given the rotation quaternion q, define
${\check {\mathbf {q} }}={\begin{bmatrix}q_{i}\\q_{j}\\q_{k}\end{bmatrix}}\,.$
Then the Euler axis ê and angle θ can be computed by
${\begin{aligned}{\hat {\mathbf {e} }}&={\frac {\check {\mathbf {q} }}{\left\|{\check {\mathbf {q} }}\right\|}}\\\theta &=2\arccos q_{r}\end{aligned}}$
Rodrigues vector → Rotation matrix
The definition of the Rodrigues vector can be related to the rotation quaternion:
${\begin{cases}g_{i}={\dfrac {q_{i}}{q_{r}}}=e_{x}\tan \left({\dfrac {\theta }{2}}\right)\\g_{j}={\dfrac {q_{j}}{q_{r}}}=e_{y}\tan \left({\dfrac {\theta }{2}}\right)\\g_{k}={\dfrac {q_{k}}{q_{r}}}=e_{z}\tan \left({\dfrac {\theta }{2}}\right)\end{cases}}$
By making use of the following property
$1=q_{r}^{2}+q_{i}^{2}+q_{j}^{2}+q_{k}^{2}=q_{r}^{2}\left(1+{\frac {q_{i}^{2}}{q_{r}^{2}}}+{\frac {q_{j}^{2}}{q_{r}^{2}}}+{\frac {q_{k}^{2}}{q_{r}^{2}}}\right)=q_{r}^{2}\left(1+g_{i}^{2}+g_{j}^{2}+g_{k}^{2}\right)$
The formula can be obtained by factoring $q_{r}^{2}$ from the final expression obtained for quaternions:
$\mathbf {A} =q_{r}^{2}{\begin{bmatrix}{\frac {1}{q_{r}^{2}}}-2{\frac {q_{j}^{2}}{q_{r}^{2}}}-2{\frac {q_{k}^{2}}{q_{r}^{2}}}&2\left({\frac {q_{i}}{q_{r}}}{\frac {q_{j}}{q_{r}}}-{\frac {q_{k}}{q_{r}}}\right)&2\left({\frac {q_{i}}{q_{r}}}{\frac {q_{k}}{q_{r}}}+{\frac {q_{j}}{q_{r}}}\right)\\2\left({\frac {q_{i}}{q_{r}}}{\frac {q_{j}}{q_{r}}}+{\frac {q_{k}}{q_{r}}}\right)&{\frac {1}{q_{r}^{2}}}-2{\frac {q_{i}^{2}}{q_{r}^{2}}}-2{\frac {q_{k}^{2}}{q_{r}^{2}}}&2\left({\frac {q_{j}}{q_{r}}}{\frac {q_{k}}{q_{r}}}-{\frac {q_{i}}{q_{r}}}\right)\\2\left({\frac {q_{i}}{q_{r}}}{\frac {q_{k}}{q_{r}}}-{\frac {q_{j}}{q_{r}}}\right)&2\left({\frac {q_{j}}{q_{r}}}{\frac {q_{k}}{q_{r}}}+{\frac {q_{i}}{q_{r}}}\right)&{\frac {1}{q_{r}^{2}}}-2{\frac {q_{i}^{2}}{q_{r}^{2}}}-2{\frac {q_{j}^{2}}{q_{r}^{2}}}\end{bmatrix}}$
Leading to the final formula:
$\mathbf {A} ={\frac {1}{1+g_{i}^{2}+g_{j}^{2}+g_{k}^{2}}}{\begin{bmatrix}1+g_{i}^{2}-g_{j}^{2}-g_{k}^{2}&2\left(g_{i}g_{j}-g_{k}\right)&2\left(g_{i}g_{k}+g_{j}\right)\\2\left(g_{i}g_{j}+g_{k}\right)&1-g_{i}^{2}+g_{j}^{2}-g_{k}^{2}&2\left(g_{j}g_{k}-g_{i}\right)\\2\left(g_{i}g_{k}-g_{j}\right)&2\left(g_{j}g_{k}+g_{i}\right)&1-g_{i}^{2}-g_{j}^{2}+g_{k}^{2}\end{bmatrix}}$
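Since the Gibbs vector g = ê tan(θ/2) encodes the same rotation as the axis–angle pair, the matrix above must agree with Rodrigues' rotation formula. The sketch below (plain Python; helper names illustrative) checks this entry by entry:

```python
import math

def rodrigues_to_mat(g):
    # Rotation matrix from the Gibbs (Rodrigues) vector g = e tan(θ/2)
    gi, gj, gk = g
    n = 1 + gi * gi + gj * gj + gk * gk
    return [[(1 + gi * gi - gj * gj - gk * gk) / n, 2 * (gi * gj - gk) / n, 2 * (gi * gk + gj) / n],
            [2 * (gi * gj + gk) / n, (1 - gi * gi + gj * gj - gk * gk) / n, 2 * (gj * gk - gi) / n],
            [2 * (gi * gk - gj) / n, 2 * (gj * gk + gi) / n, (1 - gi * gi - gj * gj + gk * gk) / n]]

def axis_angle_to_mat(e, theta):
    # Rodrigues' rotation formula
    c, s, v = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    e1, e2, e3 = e
    return [[v * e1 * e1 + c, v * e1 * e2 - e3 * s, v * e1 * e3 + e2 * s],
            [v * e2 * e1 + e3 * s, v * e2 * e2 + c, v * e2 * e3 - e1 * s],
            [v * e3 * e1 - e2 * s, v * e3 * e2 + e1 * s, v * e3 * e3 + c]]

axis = tuple(x / math.sqrt(14) for x in (1.0, 2.0, 3.0))
theta = 0.9
g = tuple(x * math.tan(theta / 2) for x in axis)   # g = e tan(θ/2)

a1 = rodrigues_to_mat(g)
a2 = axis_angle_to_mat(axis, theta)
assert all(abs(a1[i][j] - a2[i][j]) < 1e-12 for i in range(3) for j in range(3))
```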
Conversion formulae for derivatives
Rotation matrix ↔ angular velocities
The angular velocity vector
${\boldsymbol {\omega }}={\begin{bmatrix}\omega _{x}\\\omega _{y}\\\omega _{z}\end{bmatrix}}$
can be extracted from the time derivative of the rotation matrix dA/dt by the following relation:
$[{\boldsymbol {\omega }}]_{\times }={\begin{bmatrix}0&-\omega _{z}&\omega _{y}\\\omega _{z}&0&-\omega _{x}\\-\omega _{y}&\omega _{x}&0\end{bmatrix}}={\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{\mathsf {T}}$
The derivation is adapted from Ioffe[8] as follows:
For any vector r0, consider r(t) = A(t)r0 and differentiate it:
${\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}={\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {r} _{0}={\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{\mathsf {T}}(t)\mathrm {r} (t)$
The derivative of a vector is the linear velocity of its tip. Since A is a rotation matrix, by definition the length of r(t) is always equal to the length of r0, and hence it does not change with time. Thus, when r(t) rotates, its tip moves along a circle, and the linear velocity of its tip is tangential to the circle; i.e., always perpendicular to r(t). In this specific case, the relationship between the linear velocity vector and the angular velocity vector is
${\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}={\boldsymbol {\omega }}(t)\times \mathbf {r} (t)=[{\boldsymbol {\omega }}]_{\times }\mathbf {r} (t)$
(see circular motion and cross product).
By the transitivity of the abovementioned equations,
${\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{\mathsf {T}}(t)\mathbf {r} (t)=[{\boldsymbol {\omega }}]_{\times }\mathbf {r} (t)$
which implies
${\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{\mathsf {T}}(t)=[{\boldsymbol {\omega }}]_{\times }$
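The relation can be verified numerically with a finite-difference derivative. The sketch below (plain Python; helper names illustrative) spins a frame about z at a constant rate and recovers the skew-symmetric matrix [ω]×:

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

omega_z, t, h = 0.8, 0.5, 1e-6
a_plus, a_minus = rot_z(omega_z * (t + h)), rot_z(omega_z * (t - h))

# Central-difference approximation of dA/dt, then (dA/dt) A^T
da_dt = [[(a_plus[i][j] - a_minus[i][j]) / (2 * h) for j in range(3)] for i in range(3)]
skew = matmul(da_dt, transpose(rot_z(omega_z * t)))   # should equal [ω]_x

assert abs(skew[1][0] - omega_z) < 1e-5   # entry (2,1) of [ω]_x is +ω_z
assert abs(skew[0][1] + omega_z) < 1e-5
assert abs(skew[0][0]) < 1e-5             # diagonal vanishes
```

Skew-symmetry of (dA/dt) Aᵀ is guaranteed analytically by differentiating A Aᵀ = I.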
Quaternion ↔ angular velocities
The angular velocity vector
${\boldsymbol {\omega }}={\begin{bmatrix}\omega _{x}\\\omega _{y}\\\omega _{z}\end{bmatrix}}$
can be obtained from the derivative of the quaternion dq/dt as follows:[9]
${\begin{bmatrix}0\\\omega _{x}\\\omega _{y}\\\omega _{z}\end{bmatrix}}=2{\frac {\mathrm {d} \mathbf {q} }{\mathrm {d} t}}{\tilde {\mathbf {q} }}$
where q̃ is the conjugate (inverse) of q.
Conversely, the derivative of the quaternion is
${\frac {\mathrm {d} \mathbf {q} }{\mathrm {d} t}}={\frac {1}{2}}{\begin{bmatrix}0\\\omega _{x}\\\omega _{y}\\\omega _{z}\end{bmatrix}}\mathbf {q} \,.$
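This kinematic equation is what attitude propagators integrate in practice. The sketch below (plain Python; scalar-last quaternion layout as in this article, forward Euler with per-step renormalization, helper names illustrative) integrates a constant angular velocity and compares against the exact solution:

```python
import math

def qmul(a, b):
    # Hamilton product, scalar-last storage (qi, qj, qk, qr)
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def normalize(q):
    n = math.sqrt(sum(x * x for x in q))
    return tuple(x / n for x in q)

omega = (0.0, 0.0, 1.0)        # constant angular velocity about z, 1 rad/s
q = (0.0, 0.0, 0.0, 1.0)       # identity orientation
h, steps = 1e-3, 1000          # integrate over 1 s

for _ in range(steps):
    # dq/dt = 1/2 (ω, 0) ⊗ q; forward Euler step plus renormalization
    dq = qmul((omega[0], omega[1], omega[2], 0.0), q)
    q = normalize(tuple(x + 0.5 * h * d for x, d in zip(q, dq)))

# Exact solution after t = 1 s: rotation by |ω| t about z
exact = (0.0, 0.0, math.sin(0.5), math.cos(0.5))
assert all(abs(x - y) < 1e-3 for x, y in zip(q, exact))
```

The renormalization step here is the quaternion counterpart of re-orthogonalizing a drifting rotation matrix, and it is far cheaper.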
Rotors in a geometric algebra
The formalism of geometric algebra (GA) provides an extension and interpretation of the quaternion method. Central to GA is the geometric product of vectors, an extension of the traditional inner and cross products, given by
$\mathbf {ab} =\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \wedge \mathbf {b} $
where the symbol ∧ denotes the exterior product or wedge product. This product of vectors a, and b produces two terms: a scalar part from the inner product and a bivector part from the wedge product. This bivector describes the plane perpendicular to what the cross product of the vectors would return.
Bivectors in GA have some unusual properties compared to vectors. Under the geometric product, bivectors have a negative square: the bivector x̂ŷ describes the xy-plane. Its square is (x̂ŷ)2 = x̂ŷx̂ŷ. Because the unit basis vectors are orthogonal to each other, the geometric product reduces to the antisymmetric outer product – x̂ and ŷ can be swapped freely at the cost of a factor of −1. The square reduces to −x̂x̂ŷŷ = −1 since the basis vectors themselves square to +1.
This result holds generally for all bivectors, and as a result the bivector plays a role similar to the imaginary unit. Geometric algebra uses bivectors in its analogue to the quaternion, the rotor, given by
$\mathbf {R} =\exp \left({\frac {-{\hat {\mathbf {B} }}\theta }{2}}\right)=\cos {\frac {\theta }{2}}-{\hat {\mathbf {B} }}\sin {\frac {\theta }{2}}\,,$
where B̂ is a unit bivector that describes the plane of rotation. Because B̂ squares to −1, the power series expansion of R generates the trigonometric functions. The rotation formula that maps a vector a to a rotated vector b is then
$\mathbf {b} =\mathbf {RaR} ^{\dagger }$
where
$\mathbf {R} ^{\dagger }=\exp \left({\frac {1}{2}}{\hat {\mathbf {B} }}\theta \right)=\cos {\frac {\theta }{2}}+{\hat {\mathbf {B} }}\sin {\frac {\theta }{2}}$
is the reverse of $\mathbf {R} $ (reversing the order of the vectors in ${\hat {\mathbf {B} }}$ is equivalent to changing its sign).
Example. A rotation about the axis
${\hat {\mathbf {v} }}={\frac {1}{\sqrt {3}}}\left({\hat {\mathbf {x} }}+{\hat {\mathbf {y} }}+{\hat {\mathbf {z} }}\right)$
can be accomplished by converting v̂ to its dual bivector,
${\hat {\mathbf {B} }}={\hat {\mathbf {x} }}{\hat {\mathbf {y} }}{\hat {\mathbf {z} }}{\hat {\mathbf {v} }}=\mathbf {i} {\hat {\mathbf {v} }}\,,$
where i = x̂ŷẑ is the unit volume element, the only trivector (pseudoscalar) in three-dimensional space. The result is
${\hat {\mathbf {B} }}={\frac {1}{\sqrt {3}}}\left({\hat {\mathbf {y} }}{\hat {\mathbf {z} }}+{\hat {\mathbf {z} }}{\hat {\mathbf {x} }}+{\hat {\mathbf {x} }}{\hat {\mathbf {y} }}\right)\,.$
In three-dimensional space, however, it is often simpler to leave the expression B̂ = iv̂ unexpanded, using the fact that i commutes with all objects in 3D and also squares to −1. A rotation of the x̂ vector in this plane by an angle θ is then
${\hat {\mathbf {x} }}'=\mathbf {R} {\hat {\mathbf {x} }}\mathbf {R} ^{\dagger }=e^{-i{\hat {\mathbf {v} }}{\frac {\theta }{2}}}{\hat {\mathbf {x} }}e^{i{\hat {\mathbf {v} }}{\frac {\theta }{2}}}={\hat {\mathbf {x} }}\cos ^{2}{\frac {\theta }{2}}+\mathbf {i} \left({\hat {\mathbf {x} }}{\hat {\mathbf {v} }}-{\hat {\mathbf {v} }}{\hat {\mathbf {x} }}\right)\cos {\frac {\theta }{2}}\sin {\frac {\theta }{2}}+{\hat {\mathbf {v} }}{\hat {\mathbf {x} }}{\hat {\mathbf {v} }}\sin ^{2}{\frac {\theta }{2}}$
Recognizing that
$\mathbf {i} ({\hat {\mathbf {x} }}{\hat {\mathbf {v} }}-{\hat {\mathbf {v} }}{\hat {\mathbf {x} }})=2\mathbf {i} ({\hat {\mathbf {x} }}\wedge {\hat {\mathbf {v} }})$
and that −v̂x̂v̂ is the reflection of x̂ about the plane perpendicular to v̂ gives a geometric interpretation to the rotation operation: the rotation preserves the components that are parallel to v̂ and changes only those that are perpendicular. The terms are then computed:
${\begin{aligned}{\hat {\mathbf {v} }}{\hat {\mathbf {x} }}{\hat {\mathbf {v} }}&={\frac {1}{3}}\left(-{\hat {\mathbf {x} }}+2{\hat {\mathbf {y} }}+2{\hat {\mathbf {z} }}\right)\\2\mathbf {i} {\hat {\mathbf {x} }}\wedge {\hat {\mathbf {v} }}&=2\mathbf {i} {\frac {1}{\sqrt {3}}}\left({\hat {\mathbf {x} }}{\hat {\mathbf {y} }}+{\hat {\mathbf {x} }}{\hat {\mathbf {z} }}\right)={\frac {2}{\sqrt {3}}}\left({\hat {\mathbf {y} }}-{\hat {\mathbf {z} }}\right)\end{aligned}}$
The result of the rotation is then
${\hat {\mathbf {x} }}'={\hat {\mathbf {x} }}\left(\cos ^{2}{\frac {\theta }{2}}-{\frac {1}{3}}\sin ^{2}{\frac {\theta }{2}}\right)+{\frac {2}{3}}{\hat {\mathbf {y} }}\sin {\frac {\theta }{2}}\left(\sin {\frac {\theta }{2}}+{\sqrt {3}}\cos {\frac {\theta }{2}}\right)+{\frac {2}{3}}{\hat {\mathbf {z} }}\sin {\frac {\theta }{2}}\left(\sin {\frac {\theta }{2}}-{\sqrt {3}}\cos {\frac {\theta }{2}}\right)$
A simple check on this result is the angle θ = 2π/3. Such a rotation should map x̂ to ŷ. Indeed, the rotation reduces to
${\begin{aligned}{\hat {\mathbf {x} }}'&={\hat {\mathbf {x} }}\left({\frac {1}{4}}-{\frac {1}{3}}{\frac {3}{4}}\right)+{\frac {2}{3}}{\hat {\mathbf {y} }}{\frac {\sqrt {3}}{2}}\left({\frac {\sqrt {3}}{2}}+{\sqrt {3}}{\frac {1}{2}}\right)+{\frac {2}{3}}{\hat {\mathbf {z} }}{\frac {\sqrt {3}}{2}}\left({\frac {\sqrt {3}}{2}}-{\sqrt {3}}{\frac {1}{2}}\right)\\&=0{\hat {\mathbf {x} }}+{\hat {\mathbf {y} }}+0{\hat {\mathbf {z} }}={\hat {\mathbf {y} }}\end{aligned}}$
exactly as expected. This rotation formula is valid not only for vectors but for any multivector. In addition, when Euler angles are used, the complexity of the operation is much reduced. Compounded rotations come from multiplying the rotors, so the total rotor from Euler angles is
$\mathbf {R} =\mathbf {R} _{\gamma '}\mathbf {R} _{\beta '}\mathbf {R} _{\alpha }=\exp \left({\frac {-\mathbf {i} {\hat {\mathbf {z} }}'\gamma }{2}}\right)\exp \left({\frac {-\mathbf {i} {\hat {\mathbf {x} }}'\beta }{2}}\right)\exp \left({\frac {-\mathbf {i} {\hat {\mathbf {z} }}\alpha }{2}}\right)$
but
${\begin{aligned}{\hat {\mathbf {x} }}'&=\mathbf {R} _{\alpha }{\hat {\mathbf {x} }}\mathbf {R} _{\alpha }^{\dagger }\quad {\text{and}}\\{\hat {\mathbf {z} }}'&=\mathbf {R} _{\beta '}{\hat {\mathbf {z} }}\mathbf {R} _{\beta '}^{\dagger }\,.\end{aligned}}$
These rotors come back out of the exponentials like so:
$\mathbf {R} _{\beta '}=\cos {\frac {\beta }{2}}-\mathbf {i} \mathbf {R} _{\alpha }{\hat {\mathbf {x} }}\mathbf {R} _{\alpha }^{\dagger }\sin {\frac {\beta }{2}}=\mathbf {R} _{\alpha }\mathbf {R} _{\beta }\mathbf {R} _{\alpha }^{\dagger }$
where Rβ refers to rotation in the original coordinates. Similarly for the γ rotation,
$\mathbf {R} _{\gamma '}=\mathbf {R} _{\beta '}\mathbf {R} _{\gamma }\mathbf {R} _{\beta '}^{\dagger }=\mathbf {R} _{\alpha }\mathbf {R} _{\beta }\mathbf {R} _{\alpha }^{\dagger }\mathbf {R} _{\gamma }\mathbf {R} _{\alpha }\mathbf {R} _{\beta }^{\dagger }\mathbf {R} _{\alpha }^{\dagger }\,.$
Because Rγ and Rα commute (rotations in the same plane must commute), the total rotor becomes
$\mathbf {R} =\mathbf {R} _{\alpha }\mathbf {R} _{\beta }\mathbf {R} _{\gamma }$
Thus, the compounded rotations of Euler angles become a series of equivalent rotations in the original fixed frame.
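Since unit quaternions compose exactly like rotors, the reduction R<sub>γ′</sub>R<sub>β′</sub>R<sub>α</sub> = R<sub>α</sub>R<sub>β</sub>R<sub>γ</sub> can be checked numerically. A minimal Python sketch; the helper names are illustrative, not from the text:

```python
import math

def quat(axis, angle):
    # unit quaternion (w, x, y, z) for a rotation by `angle` about unit `axis`
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)

def qmul(a, b):
    # Hamilton product of two quaternions
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qrot(q, v):
    # rotate vector v by quaternion q via the sandwich product q (0, v) q*
    w, x, y, z = q
    p = qmul(qmul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))
    return p[1:]

alpha, beta, gamma = 0.3, 0.7, 1.1
zhat, xhat = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)

q_a = quat(zhat, alpha)        # R_alpha about the fixed z axis
x_p = qrot(q_a, xhat)          # rotated x axis
q_bp = quat(x_p, beta)         # R_beta' about the rotated x axis
z_p = qrot(q_bp, zhat)         # rotated z axis
q_gp = quat(z_p, gamma)        # R_gamma' about the rotated z axis

intrinsic = qmul(q_gp, qmul(q_bp, q_a))                       # R_gamma' R_beta' R_alpha
fixed = qmul(q_a, qmul(quat(xhat, beta), quat(zhat, gamma)))  # R_alpha R_beta R_gamma
err = max(abs(i - f) for i, f in zip(intrinsic, fixed))
print(err)  # ≈ 0: both orderings give the same total rotor
```

The two products agree to floating-point precision, mirroring the rotor derivation: conjugating each rotor back into the fixed frame reverses the apparent order of composition.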
While rotors in geometric algebra work almost identically to quaternions in three dimensions, the power of this formalism is its generality: this method is appropriate and valid in spaces with any number of dimensions. In 3D, rotations have three degrees of freedom, one for each linearly independent plane (bivector) in which the rotation can take place. It is known that pairs of quaternions can be used to generate rotations in 4D, yielding six degrees of freedom, and the geometric algebra approach verifies this result: in 4D, there are six linearly independent bivectors that can be used as the generators of rotations.
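The worked x̂ → ŷ example can also be cross-checked with Rodrigues' rotation formula, the vector form equivalent to the rotor sandwich product (a sketch, not the text's notation):

```python
import math

def rotate(v, axis, theta):
    # Rodrigues' formula: v' = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))
    kx, ky, kz = axis
    cross = (ky*v[2] - kz*v[1], kz*v[0] - kx*v[2], kx*v[1] - ky*v[0])
    dot = kx*v[0] + ky*v[1] + kz*v[2]
    c, s = math.cos(theta), math.sin(theta)
    return tuple(v[i]*c + cross[i]*s + axis[i]*dot*(1 - c) for i in range(3))

s3 = 1 / math.sqrt(3)
x_rot = rotate((1.0, 0.0, 0.0), (s3, s3, s3), 2*math.pi/3)
print(x_rot)  # ≈ (0, 1, 0): x̂ maps to ŷ, as derived above
```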
Angle-angle-angle
Rotations can be modeled as an axis and an angle, as illustrated by a gyroscope: the axis passes through the rotor, and the amount of spin around that axis is shown by the rotor's rotation. This rotation can be expressed as angle ∗ axis, where axis is a unit vector specifying the direction of the rotor's axis. Any direction from the origin defines a rotation axis, with the magnitude of the angle given by the distance from the origin. From any other point in space, the same direction vector, applied relative to the orientation represented by that starting point rather than the origin, produces the same change around the same axis the unit vector specifies. The angle ∗ axis scaling gives each point a unique coordinate in angle-angle-angle notation, and the difference between two coordinates immediately yields the single axis of rotation and angle between the two orientations.
The natural log of a quaternion represents a rotation as three angles around three axes of rotation, expressed in arc length; it is similar to Euler angles, but order-independent.[10] By the Lie product formula, the addition of rotations is defined as the sum of infinitesimal steps of each rotation applied in series; this implies that the rotations are all applied in the same instant, rather than as a series of rotations applied subsequently.
The axes of rotation are aligned to the standard Cartesian x, y, z axes. These rotations may be simply added and subtracted, especially when the frames being rotated are fixed to each other, as in IK chains. Differences between two objects in the same reference frame are found by simply subtracting their orientations. Rotations applied from external sources, or from sources relative to the current rotation, still require multiplication; Rodrigues' composite rotation formula, given below, applies.
The rotation from each axis coordinate represents rotating the plane perpendicular to the specified axis simultaneously with all the other axes. Although the measures can be considered as angles, the representation is actually the arc length of the curve; an angle implies a rotation around a point, whereas a curvature is a delta applied to the current point in an inertial direction.
An observational note: log quaternions have rings, or octaves, of rotations; that is, rotations greater than 4π have related curves. Curvatures that approach this boundary appear to jump orbits chaotically.
For 'human readable' angles the 1-norm can be used to rescale the angles to look more 'appropriate':
$\mathbf {Q} ={\begin{bmatrix}X\\Y\\Z\end{bmatrix}}$
Other related values are immediately derivable:
${\begin{aligned}\|\mathbf {Q} \|{\text{ or }}\|\mathbf {Q} \|_{2}&={\sqrt {X^{2}+Y^{2}+Z^{2}}}\\[6pt]\|\mathbf {Q} \|_{1}&=|X|+|Y|+|Z|\end{aligned}}$
The total angle of rotation:
$\theta =\|\mathbf {Q} \|$
The axis of rotation:
${\text{Axis}}(\ln \mathbf {Q} )={\begin{bmatrix}{\frac {X}{\theta }}\\{\frac {Y}{\theta }}\\{\frac {Z}{\theta }}\end{bmatrix}}$
Quaternion representation
$\mathbf {q} ={\begin{bmatrix}\cos {\frac {\theta }{2}}\\\sin {\frac {\theta }{2}}{\frac {X}{\|\mathbf {Q} \|}}\\\sin {\frac {\theta }{2}}{\frac {Y}{\|\mathbf {Q} \|}}\\\sin {\frac {\theta }{2}}{\frac {Z}{\|\mathbf {Q} \|}}\end{bmatrix}}$
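This mapping from a rotation vector to a quaternion can be sketched as follows; the zero-rotation guard is an added assumption, not part of the formula above:

```python
import math

def rotvec_to_quat(Q):
    # Q = angle * unit_axis ("log quaternion"); returns quaternion (w, x, y, z)
    theta = math.sqrt(Q[0]**2 + Q[1]**2 + Q[2]**2)
    if theta < 1e-12:
        return (1.0, 0.0, 0.0, 0.0)          # identity rotation
    s = math.sin(theta / 2) / theta          # sin(theta/2) / ||Q||
    return (math.cos(theta / 2), Q[0]*s, Q[1]*s, Q[2]*s)

q = rotvec_to_quat((math.pi, 0.0, 0.0))      # half turn about x
print(q)  # ≈ (0, 1, 0, 0)
```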
Basis matrix computation
This was built from rotating the vectors (1,0,0), (0,1,0), (0,0,1), and reducing constants.
Given an input Q = [X, Y, Z] with θ = ‖Q‖,
${\begin{matrix}q_{r}=\cos {\frac {\theta }{2}}\\q_{i}=\sin {\frac {\theta }{2}}\cdot {\frac {X}{\|\mathbf {Q} \|}}\\q_{j}=\sin {\frac {\theta }{2}}\cdot {\frac {Y}{\|\mathbf {Q} \|}}\\q_{k}=\sin {\frac {\theta }{2}}\cdot {\frac {Z}{\|\mathbf {Q} \|}}\end{matrix}}$
Which are used to compute the resulting matrix
${\begin{bmatrix}1-2q_{j}^{2}-2q_{k}^{2}&2(q_{i}q_{j}-q_{k}q_{r})&2(q_{i}q_{k}+q_{j}q_{r})\\2(q_{i}q_{j}+q_{k}q_{r})&1-2q_{i}^{2}-2q_{k}^{2}&2(q_{j}q_{k}-q_{i}q_{r})\\2(q_{i}q_{k}-q_{j}q_{r})&2(q_{j}q_{k}+q_{i}q_{r})&1-2q_{i}^{2}-2q_{j}^{2}\end{bmatrix}}$
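This is the standard unit-quaternion-to-matrix conversion; a minimal sketch, assuming half-angle quaternion components as in the quaternion representation above:

```python
import math

def quat_to_matrix(qr, qi, qj, qk):
    # rotation matrix from a unit quaternion (qr is the scalar part)
    return [
        [1 - 2*(qj*qj + qk*qk), 2*(qi*qj - qk*qr),     2*(qi*qk + qj*qr)],
        [2*(qi*qj + qk*qr),     1 - 2*(qi*qi + qk*qk), 2*(qj*qk - qi*qr)],
        [2*(qi*qk - qj*qr),     2*(qj*qk + qi*qr),     1 - 2*(qi*qi + qj*qj)],
    ]

# 90 degrees about z: q = (cos 45, 0, 0, sin 45)
h = math.sqrt(0.5)
M = quat_to_matrix(h, 0.0, 0.0, h)
col_x = [row[0] for row in M]   # first column: the image of the x axis
print(col_x)  # ≈ [0, 1, 0]: x̂ maps to ŷ
```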
Alternate basis calculation
Alternatively this can be used. Given A = [X, Y, Z], convert to angle-axis θ = ‖A‖, and [x, y, z] = A/‖A‖.
Compute some partial expressions:
${\begin{matrix}x_{y}=xy(1-\cos \theta )&w_{x}=x\sin \theta &x_{x}=xx(1-\cos \theta )\\y_{z}=yz(1-\cos \theta )&w_{y}=y\sin \theta &y_{y}=yy(1-\cos \theta )\\x_{z}=xz(1-\cos \theta )&w_{z}=z\sin \theta &z_{z}=zz(1-\cos \theta )\end{matrix}}$
Compute the resulting matrix:
${\begin{bmatrix}\cos \theta +x_{x}&x_{y}+w_{z}&w_{y}+x_{z}\\w_{z}+x_{y}&\cos \theta +y_{y}&y_{z}-w_{x}\\x_{z}-w_{y}&w_{x}+y_{z}&\cos \theta +z_{z}\end{bmatrix}}$
Expanded:
${\begin{bmatrix}\cos \theta +x^{2}(1-\cos \theta )&xy(1-\cos \theta )-z\sin \theta &y\sin \theta +xz(1-\cos \theta )\\z\sin \theta +xy(1-\cos \theta )&\cos \theta +y^{2}(1-\cos \theta )&yz(1-\cos \theta )-x\sin \theta \\xz(1-\cos \theta )-y\sin \theta &x\sin \theta +yz(1-\cos \theta )&\cos \theta +z^{2}(1-\cos \theta )\end{bmatrix}}$
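The expanded matrix can be checked by confirming that it leaves its own rotation axis fixed; a sketch (the function name is illustrative):

```python
import math

def axis_angle_matrix(axis, theta):
    # expanded Rodrigues matrix for a unit axis (x, y, z) and angle theta
    x, y, z = axis
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [
        [c + x*x*C,   x*y*C - z*s, y*s + x*z*C],
        [z*s + x*y*C, c + y*y*C,   y*z*C - x*s],
        [x*z*C - y*s, x*s + y*z*C, c + z*z*C],
    ]

axis = (0.0, 0.0, 1.0)
R = axis_angle_matrix(axis, math.pi / 2)
fixed_axis = [sum(R[i][j] * axis[j] for j in range(3)) for i in range(3)]
print(fixed_axis)  # ≈ [0, 0, 1]: R leaves its rotation axis unchanged
```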
Vector rotation
Rotate the vector v = (X, Y, Z) around the rotation vector Q = (X, Y, Z).
The angle of rotation will be θ = ‖Q‖.
Calculate the cosine of the angle times the vector to rotate, plus the sine of the angle times the cross product of the rotation axis and the vector, plus one minus the cosine of the angle, times the dot product of the vector and the rotation axis, times the axis of rotation.
$\mathbf {v} '=\cos(\theta )\mathbf {v} +\sin(\theta )\left({\frac {\mathbf {Q} }{\|\mathbf {Q} \|}}\times \mathbf {v} \right)+(1-\cos(\theta ))\left({\frac {\mathbf {Q} }{\|\mathbf {Q} \|}}\cdot \mathbf {v} \right){\frac {\mathbf {Q} }{\|\mathbf {Q} \|}}$
Note that the dot product includes the cosine of the angle between the vector being rotated and the axis of rotation, times the length of v, and the cross product includes the sine of that same angle.
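The formula can be collected into one function; a sketch, where the zero-angle guard is an added assumption:

```python
import math

def rotate_by_rotvec(v, Q):
    # rotate v by the rotation vector Q = angle * unit_axis
    theta = math.sqrt(Q[0]**2 + Q[1]**2 + Q[2]**2)
    if theta < 1e-12:
        return tuple(v)                      # no rotation
    k = tuple(q / theta for q in Q)          # unit axis
    c, s = math.cos(theta), math.sin(theta)
    cross = (k[1]*v[2] - k[2]*v[1], k[2]*v[0] - k[0]*v[2], k[0]*v[1] - k[1]*v[0])
    dot = k[0]*v[0] + k[1]*v[1] + k[2]*v[2]
    return tuple(c*v[i] + s*cross[i] + (1 - c)*dot*k[i] for i in range(3))

# a quarter turn about z maps x̂ to ŷ
v2 = rotate_by_rotvec((1.0, 0.0, 0.0), (0.0, 0.0, math.pi / 2))
print(v2)  # ≈ (0, 1, 0)
```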
Rotate a rotation vector
Rodrigues' composite rotation formula gives the result of rotating a given rotation vector Q = (X, Y, Z) around another rotation vector A = (X′, Y′, Z′).
From the initial rotation vectors, extract the angles and axes:
${\begin{aligned}\theta &={\frac {\|\mathbf {Q} \|}{2}}\\[6pt]\gamma &={\frac {\|\mathbf {A} \|}{2}}\end{aligned}}$
Normalized axis of rotation for the current frame:
${\hat {\mathbf {Q} }}={\frac {\mathbf {Q} }{\|\mathbf {Q} \|}}$
Normalized axis of rotation to rotate the frame around:
${\hat {\mathbf {A} }}={\frac {\mathbf {A} }{\|\mathbf {A} \|}}$
The resulting angle of the rotation is
$\alpha =2\arccos \left(\cos(\theta )\cos(\gamma )-\sin(\theta )\sin(\gamma ){\hat {\mathbf {Q} }}\cdot {\hat {\mathbf {A} }}\right)$
or
$\alpha =2\arccos \left({\frac {1}{2}}\left({\cos(\theta -\gamma )}(1-{\hat {\mathbf {Q} }}\cdot {\hat {\mathbf {A} }})+{\cos(\theta +\gamma )}(1+{\hat {\mathbf {Q} }}\cdot {\hat {\mathbf {A} }})\right)\right)$
The resultant, unnormalized axis of rotation:
$\mathbf {r} =\sin \gamma \cos \theta {\hat {\mathbf {A} }}+\sin \theta \cos \gamma {\hat {\mathbf {Q} }}+\sin \theta \sin \gamma {\hat {\mathbf {A} }}\times {\hat {\mathbf {Q} }}$
or
$\mathbf {r} =\left({\hat {\mathbf {A} }}\times {\hat {\mathbf {Q} }}\right){\bigl (}{\cos({\theta }-\gamma )}-{\cos({\theta }+\gamma )}{\bigr )}+{\hat {\mathbf {A} }}{\bigl (}{\sin(\theta +\gamma )}-{\sin(\theta -\gamma )}{\bigr )}+{\hat {\mathbf {Q} }}{\bigl (}{\sin(\theta +\gamma )}+{\sin({\theta }-\gamma )}{\bigr )}$
The Rodrigues rotation formula suggests that the sine of the resulting angle above can be used to normalize the axis; however, this fails over large ranges. Instead, normalize the result axis as any other vector:
${\hat {\mathbf {R} }}={\frac {\mathbf {r} }{\|\mathbf {r} \|}}$
And the final frame rotation coordinate:
$\mathbf {R} =\alpha {\hat {\mathbf {R} }}$
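The composition steps above can be collected into one function. A sketch, assuming both inputs are nonzero; the scalar term uses the quaternion-product sign convention cos θ cos γ − sin θ sin γ Q̂·Â:

```python
import math

def compose(Q, A):
    # result of rotating rotation vector Q around rotation vector A
    nq = math.sqrt(sum(x*x for x in Q))
    na = math.sqrt(sum(x*x for x in A))
    t, g = nq / 2, na / 2                                  # half angles
    qh = tuple(x / nq for x in Q)
    ah = tuple(x / na for x in A)
    d = sum(a*b for a, b in zip(ah, qh))
    w = math.cos(t)*math.cos(g) - math.sin(t)*math.sin(g)*d
    alpha = 2 * math.acos(max(-1.0, min(1.0, w)))          # resulting angle
    cross = (ah[1]*qh[2] - ah[2]*qh[1],
             ah[2]*qh[0] - ah[0]*qh[2],
             ah[0]*qh[1] - ah[1]*qh[0])
    r = tuple(math.sin(g)*math.cos(t)*ah[i] + math.sin(t)*math.cos(g)*qh[i]
              + math.sin(t)*math.sin(g)*cross[i] for i in range(3))
    nr = math.sqrt(sum(x*x for x in r))                    # normalize the axis
    return tuple(alpha * x / nr for x in r)

# two quarter turns about z compose to a half turn about z
out = compose((0.0, 0.0, math.pi / 2), (0.0, 0.0, math.pi / 2))
print(out)  # ≈ (0, 0, pi)
```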
Spin rotation around a fixed axis
A rotation vector Q represents a frame of three axes; these axes may be used as shorthand axes to rotate the rotation around, using the above method for rotating a rotation vector. These expressions are best represented as code fragments.
Set up some constants used in the other expressions:
${\begin{aligned}n_{x}&={\frac {Q_{x}}{\|\mathbf {Q} \|}}\\n_{y}&={\frac {Q_{y}}{\|\mathbf {Q} \|}}\\n_{z}&={\frac {Q_{z}}{\|\mathbf {Q} \|}}\\{\text{angle}}&=\|\mathbf {Q} \|\\s&=\sin({\text{angle}})\\c_{1}&=\cos({\text{angle}})\\c&=1-c_{1}\end{aligned}}$
using the above values:
${\text{x-axis}}=\left[x=cn_{x}^{2}+c_{1},\;y=cn_{x}n_{y}+sn_{z},\;z=cn_{x}n_{z}-sn_{y}\right]$
or
${\text{y-axis}}=\left[x=cn_{y}n_{x}-sn_{z},\;y=cn_{y}^{2}+c_{1},\;z=cn_{y}n_{z}+sn_{x}\right]$
or
${\text{z-axis}}=\left[x=cn_{z}n_{x}+sn_{y},\;y=cn_{z}n_{y}-sn_{x},\;z=cn_{z}^{2}+c_{1}\right]$
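The three fragments can be written as one function returning all three axes; a sketch with an illustrative function name:

```python
import math

def frame_axes(Q):
    # the three basis axes of the frame described by rotation vector Q
    n = math.sqrt(sum(x*x for x in Q))
    nx, ny, nz = (x / n for x in Q)
    s = math.sin(n)
    c1 = math.cos(n)
    c = 1 - c1
    x_axis = (c*nx*nx + c1, c*nx*ny + s*nz, c*nx*nz - s*ny)
    y_axis = (c*ny*nx - s*nz, c*ny*ny + c1, c*ny*nz + s*nx)
    z_axis = (c*nz*nx + s*ny, c*nz*ny - s*nx, c*nz*nz + c1)
    return x_axis, y_axis, z_axis

X, Y, Z = frame_axes((0.0, 0.0, math.pi / 2))
print(X)  # ≈ (0, 1, 0): a quarter turn about z carries x̂ to ŷ
```

The three returned axes are mutually orthogonal unit vectors, i.e. the columns of the rotation's basis matrix.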
Conversion from Basis Matrix
Compute d, the cosine of the rotation angle, from the trace of the matrix:
$d={\frac {\left({\text{basis}}_{{\text{right}}_{X}}+{\text{basis}}_{{\text{up}}_{Y}}+{\text{basis}}_{{\text{forward}}_{Z}}\right)-1}{2}}$
Convert to the angle of rotation:
${\begin{aligned}\theta &=\arccos d\\[6pt]yz&={\text{basis}}_{{\text{up}}_{Z}}-{\text{basis}}_{{\text{forward}}_{Y}}\\[6pt]xz&={\text{basis}}_{{\text{forward}}_{X}}-{\text{basis}}_{{\text{right}}_{Z}}\\[6pt]xy&={\text{basis}}_{{\text{right}}_{Y}}-{\text{basis}}_{{\text{up}}_{X}}\end{aligned}}$
Compute the normal factor:
${\begin{aligned}{\text{normal}}&={\frac {1}{\sqrt {yz^{2}+xz^{2}+xy^{2}}}}\\[6pt]\mathbf {n} &={\begin{bmatrix}yz\cdot {\text{normal}}\\xz\cdot {\text{normal}}\\xy\cdot {\text{normal}}\end{bmatrix}}\end{aligned}}$
the resulting angle-angle-angle is n ⋅ θ.
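As a sketch, assuming the basis vectors right/up/forward are the columns of a row-major matrix (a layout assumption, not stated in the text), and using θ = arccos d, which inverts the trace identity trace = 1 + 2 cos θ:

```python
import math

def matrix_to_rotvec(m):
    # m[row][col]; right, up, forward assumed to be the three columns
    d = (m[0][0] + m[1][1] + m[2][2] - 1) / 2          # cosine of the angle
    theta = math.acos(max(-1.0, min(1.0, d)))
    yz = m[2][1] - m[1][2]                             # up_Z - forward_Y
    xz = m[0][2] - m[2][0]                             # forward_X - right_Z
    xy = m[1][0] - m[0][1]                             # right_Y - up_X
    norm = math.sqrt(yz*yz + xz*xz + xy*xy)
    return tuple(theta * v / norm for v in (yz, xz, xy))

# a quarter turn about z
M = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
rv = matrix_to_rotvec(M)
print(rv)  # ≈ (0, 0, pi/2)
```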
Conversion from normal vector (Y)
This represents a normal as a rotation, assuming that the Y axis vector (0,1,0) points up. If some other axis is considered primary, the coordinates can simply be swapped.
This assumes a normalized input vector in the direction of the normal
$\mathbf {N} ={\begin{bmatrix}{\text{normal}}_{X}\\{\text{normal}}_{Y}\\{\text{normal}}_{Z}\end{bmatrix}}$
The angle is the sum of the absolute values of the x- and z-coordinates (or y and x if Z is up, or y and z if X is up):
${\text{angle}}=|N_{x}|+|N_{z}|$
If the angle is 0, the conversion is done and the result is (0,0,0); otherwise compute the reciprocal
$r={\frac {1}{\text{angle}}}$
Some temporary values; these values are just partials referenced later:
$\mathbf {t} ={\begin{bmatrix}N_{x}\cdot r\\N_{y}\\N_{z}\cdot r\end{bmatrix}}$
Use the projected normal on the Y axis as the angle to rotate:
${\begin{aligned}{\text{target}}_{\text{angle}}&=\arccos t_{Y}\\[6pt]{\text{result}}&={\begin{bmatrix}t_{Z}\cdot {\text{target}}_{\text{angle}}\\0\\-t_{X}\cdot {\text{target}}_{\text{angle}}\end{bmatrix}}\end{aligned}}$
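In standard axis-angle terms, the rotation carrying ŷ onto a unit normal N has axis ŷ × N = (N_z, 0, −N_x), normalized, and angle arccos N_y. A sketch of that equivalent computation, which normalizes by the 2-norm rather than the text's 1-norm shortcut:

```python
import math

def normal_to_rotvec(N):
    # rotation vector carrying the up axis (0, 1, 0) onto unit normal N
    s = math.sqrt(N[0]**2 + N[2]**2)      # length of the axis y x N
    if s < 1e-12:
        # N is parallel to y: either no rotation or a half turn about x
        return (0.0, 0.0, 0.0) if N[1] > 0 else (math.pi, 0.0, 0.0)
    angle = math.acos(max(-1.0, min(1.0, N[1])))
    return (N[2] * angle / s, 0.0, -N[0] * angle / s)

rv = normal_to_rotvec((0.0, 0.0, 1.0))
print(rv)  # ≈ (pi/2, 0, 0): a quarter turn about x̂ carries ŷ to ẑ
```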
Align normal using basis
The default tangent and bitangent of a rotation that has only its normal set are irregular. Alternatively, build a basis matrix and convert from the basis using the above-mentioned method. Compute the normal twist of the above, and the matrix to convert:
${\text{normal}}_{\text{twist}}={\sqrt {t_{Z}^{2}+t_{X}^{2}}}$
${\begin{bmatrix}\left(N_{y}\cdot {\frac {-t_{X}}{{\text{normal}}_{\text{twist}}}}\right)&N_{x}&{\frac {t_{Z}}{{\text{normal}}_{\text{twist}}}}\\\left(N_{z}\cdot {\frac {t_{Z}}{{\text{normal}}_{\text{twist}}}}\right)-\left(N_{x}\cdot {\frac {-t_{X}}{{\text{normal}}_{\text{twist}}}}\right)&N_{y}&0\\\left(-N_{y}\cdot {\frac {t_{Z}}{{\text{normal}}_{\text{twist}}}}\right)&N_{z}&{\frac {-t_{X}}{{\text{normal}}_{\text{twist}}}}\end{bmatrix}}$
and then use the basis to log quaternion conversion as follows.
Align normal directly
Alternatively, this is the direct computation that results in a log quaternion; compute the result vector above, and then
${\begin{aligned}t_{X_{n}}&=t_{X}\cdot {\text{normal}}_{\text{twist}}\\[4pt]t_{Z_{n}}&=t_{Z}\cdot {\text{normal}}_{\text{twist}}\\[4pt]s&=\sin({\text{target}}_{\text{angle}})\\[4pt]c&=1-\cos({\text{target}}_{\text{angle}})\end{aligned}}$
The angle is
${\text{angle}}=\arccos \left({\frac {\left(t_{Y}+1\right)\left(1-t_{X_{n}}\right)}{2}}-1\right);$
These partial products are used below:
${\begin{aligned}yz&=s\cdot n_{X}\\[4pt]xz&=\left(2-c\cdot \left(n_{X}^{2}+n_{Z}^{2}\right)\right)\cdot t_{Z_{n}}\\[4pt]xy&=s\cdot n_{X}\cdot t_{Z_{n}}+s\cdot n_{Z}\cdot \left(1-t_{X_{n}}\right)\end{aligned}}$
Compute the normalized rotation vector (axis of rotation):
$n={\begin{bmatrix}{\frac {yz}{\sqrt {yz^{2}+xz^{2}+xy^{2}}}}\\{\frac {xz}{\sqrt {yz^{2}+xz^{2}+xy^{2}}}}\\{\frac {xy}{\sqrt {yz^{2}+xz^{2}+xy^{2}}}}\end{bmatrix}}$
and finally compute the resulting log quaternion.
${\text{final}}_{\text{result}}={\text{angle}}\cdot {n}$
Conversion from axis-angle
This assumes the input axis a = [X, Y, Z] is normalized. If there is zero rotation, the result is (0,0,0); otherwise
$\theta ={\text{angle}}\,;\quad {\text{result}}=\theta \,\mathbf {a}$
See also
• Euler filter
• Orientation (geometry)
• Rotation around a fixed axis
• Three-dimensional rotation operator
References
1. "Fiducial Marker Tracking for Augmented Reality".
2. Weisstein, Eric W. "Rotation Matrix". MathWorld.
3. Rodrigues, Olinde (1840). "Des lois géometriques qui regissent les déplacements d'un systéme solide dans l'espace, et de la variation des coordonnées provenant de ces déplacement considérées indépendant des causes qui peuvent les produire". J. Math. Pures Appl. 5: 380–440. online
4. cf. J Willard Gibbs (1884). Elements of Vector Analysis, New Haven, p. 67
5. Direct and inverse kinematics lecture notes, page 5
6. Mebius, Johan (2007). "Derivation of the Euler–Rodrigues formula for three-dimensional rotations from the general formula for four-dimensional rotations". arXiv:math/0701759.
7. Shuster, Malcolm D. (1993). "A Survey of Attitude Representations". Journal of the Astronautical Sciences. 41 (4): 439–517. Archived from the original (pdf) on 2022-09-05.
8. Physics - Mark Ioffe - W(t) in terms of matrices
9. Quaternions and Rotation lecture notes, p. 14-15
10. d3x0r. "STFRPhysics Repository".
Further reading
• Shuster, M.D. (1993). "A Survey of Attitude Representations" (PDF). Journal of the Astronautical Sciences. 41 (4): 439–517. Bibcode:1993JAnSc..41..439S. Archived from the original (PDF) on 2019-09-25.
• Taubin, G. (2011). "3D Rotations". IEEE Computer Graphics and Applications. 31 (6): 84–89. doi:10.1109/MCG.2011.92. PMID 24808261.
• Coutsias, E.; Romero, L. (2004). "The Quaternions with an application to Rigid Body Dynamics". Sandia Technical Report. Sandia National Laboraties. SAND2004-0153.
• Markley, F. Landis (2003). "Attitude Error Representations for Kalman Filtering". Journal of Guidance, Control and Dynamics. 26 (2): 311–7. Bibcode:2003JGCD...26..311M. doi:10.2514/2.5048. hdl:2060/20020060647.
• Goldstein, H. (1980). Classical Mechanics (2nd ed.). Addison–Wesley. ISBN 0-201-02918-9.
• Wertz, James R. (1980). Spacecraft Attitude Determination and Control. D. Reidel. ISBN 90-277-1204-2.
• Schmidt, J.; Niemann, H. (2001). "Using Quaternions for Parametrizing 3-D Rotations in Unconstrained Nonlinear Optimization". Proceedings of the Vision Modeling and Visualization Conference 2001. pp. 399–406. ISBN 3898380289.
• Landau, L.; Lifshitz, E.M. (1976). Mechanics (3rd ed.). Pergamon Press. ISBN 0-08-021022-8.
• Klumpp, A.R. (December 1976). "Singularity-Free Extraction of a Quaternion from a Direction-Cosine Matrix". Journal of Spacecraft and Rockets. 13 (12): 754–5. Bibcode:1976JSpRo..13..754K. doi:10.2514/3.27947.
• Doran, C.; Lasenby, A. (2003). Geometric Algebra for Physicists. Cambridge University Press. ISBN 978-0-521-71595-9.
• Terzakis, G.; Lourakis, M.; Ait-Boudaoud, D. (2018). "Modified Rodrigues Parameters: An Efficient Representation of Orientation in 3D Vision and Graphics". Journal of Mathematical Imaging and Vision. 60 (3): 422–442. doi:10.1007/s10851-017-0765-x.
• Rowenhorst, D.; Rollett, A.D.; Rohrer, G.S.; Groeber, M.; Jackson, M.; Konijnenberg, P.J.; De Graef, M. (2015). "Consistent representations of and conversions between 3D rotations". Modelling and Simulation in Materials Science and Engineering. 23 (8): 083501. Bibcode:2015MSMSE..23h3501R. doi:10.1088/0965-0393/23/8/083501.
External links
• Media related to Rotation in three dimensions at Wikimedia Commons
• EuclideanSpace has a wealth of information on rotation representation
• Q36. How do I generate a rotation matrix from Euler angles? and Q37. How do I convert a rotation matrix to Euler angles? — The Matrix and Quaternions FAQ
• Imaginary numbers are not Real – the Geometric Algebra of Spacetime – Section "Rotations and Geometric Algebra" derives and applies the rotor description of rotations
• Starlino's DCM Tutorial – Direction cosine matrix theory tutorial and applications. Space orientation estimation algorithm using accelerometer, gyroscope and magnetometer IMU devices. Using complimentary filter (popular alternative to Kalman filter) with DCM matrix.
Rotational symmetry
Rotational symmetry, also known as radial symmetry in geometry, is the property a shape has when it looks the same after some rotation by a partial turn. An object's degree of rotational symmetry is the number of distinct orientations in which it looks exactly the same for each rotation.
Certain geometric objects are partially symmetric when rotated at specific angles, such as squares rotated 90°; however, the only geometric objects that are fully rotationally symmetric at any angle are circles, spheres, and other spheroids.[1][2]
Formal treatment
See also: Rotational invariance
Formally, rotational symmetry is symmetry with respect to some or all rotations in m-dimensional Euclidean space. Rotations are direct isometries, i.e., isometries preserving orientation. Therefore, a symmetry group of rotational symmetry is a subgroup of E+(m) (see Euclidean group).
Symmetry with respect to all rotations about all points implies translational symmetry with respect to all translations, so space is homogeneous, and the symmetry group is the whole E(m). With the modified notion of symmetry for vector fields the symmetry group can also be E+(m).
For symmetry with respect to rotations about a point we can take that point as origin. These rotations form the special orthogonal group SO(m), the group of m × m orthogonal matrices with determinant 1. For m = 3 this is the rotation group SO(3).
In another definition of the word, the rotation group of an object is the symmetry group within E+(n), the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group.
Laws of physics are SO(3)-invariant if they do not distinguish different directions in space. Because of Noether's theorem, the rotational symmetry of a physical system is equivalent to the angular momentum conservation law.
Discrete rotational symmetry
Rotational symmetry of order n, also called n-fold rotational symmetry, or discrete rotational symmetry of the nth order, with respect to a particular point (in 2D) or axis (in 3D) means that rotation by an angle of ${\tfrac {360^{\circ }}{n}}$ (180°, 120°, 90°, 72°, 60°, 51 3⁄7°, etc.) does not change the object. A "1-fold" symmetry is no symmetry (all objects look alike after a rotation of 360°).
The notation for n-fold symmetry is Cn or simply n. The actual symmetry group is specified by the point or axis of symmetry, together with the n. For each point or axis of symmetry, the abstract group type is cyclic group of order n, Zn. Although for the latter also the notation Cn is used, the geometric and abstract Cn should be distinguished: there are other symmetry groups of the same abstract group type which are geometrically different, see cyclic symmetry groups in 3D.
The fundamental domain is a sector of ${\tfrac {360^{\circ }}{n}}.$
Examples without additional reflection symmetry:
• n = 2, 180°: the dyad; letters Z, N, S; the outlines, albeit not the colors, of the yin and yang symbol; the Union Flag (as divided along the flag's diagonal and rotated about the flag's center point)
• n = 3, 120°: triad, triskelion, Borromean rings; sometimes the term trilateral symmetry is used;
• n = 4, 90°: tetrad, swastika
• n = 6, 60°: hexad, Star of David (this one has additional reflection symmetry)
• n = 8, 45°: octad; an octagonal muqarnas ceiling (computer-generated)
Cn is the rotation group of a regular n-sided polygon in 2D and of a regular n-sided pyramid in 3D.
If there is e.g. rotational symmetry with respect to an angle of 100°, then also with respect to one of 20°, the greatest common divisor of 100° and 360°.
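This closure property can be checked directly: repeating a 100° rotation generates exactly the multiples of gcd(100°, 360°):

```python
import math

# all angles reachable by repeating a 100-degree rotation (mod 360)
angles = {(k * 100) % 360 for k in range(36)}
smallest = min(a for a in angles if a > 0)
print(smallest, math.gcd(100, 360))  # 20 20
```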
A typical 3D object with rotational symmetry (possibly also with perpendicular axes) but no mirror symmetry is a propeller.
Examples
Image gallery (examples of C2 through C6): a double pendulum fractal; a roundabout traffic sign; the US Bicentennial Star; the starting position in shogi; the Snoldelev Stone's interlocked drinking horns design.
Multiple symmetry axes through the same point
For discrete symmetry with multiple symmetry axes through the same point, there are the following possibilities:
• In addition to an n-fold axis, n perpendicular 2-fold axes: the dihedral groups Dn of order 2n (n ≥ 2). This is the rotation group of a regular prism, or regular bipyramid. Although the same notation is used, the geometric and abstract Dn should be distinguished: there are other symmetry groups of the same abstract group type which are geometrically different, see dihedral symmetry groups in 3D.
• 4×3-fold and 3×2-fold axes: the rotation group T of order 12 of a regular tetrahedron. The group is isomorphic to alternating group A4.
• 3×4-fold, 4×3-fold, and 6×2-fold axes: the rotation group O of order 24 of a cube and a regular octahedron. The group is isomorphic to symmetric group S4.
• 6×5-fold, 10×3-fold, and 15×2-fold axes: the rotation group I of order 60 of a dodecahedron and an icosahedron. The group is isomorphic to alternating group A5. The group contains 10 versions of D3 and 6 versions of D5 (rotational symmetries like prisms and antiprisms).
In the case of the Platonic solids, the 2-fold axes are through the midpoints of opposite edges, and the number of them is half the number of edges. The other axes are through opposite vertices and through centers of opposite faces, except in the case of the tetrahedron, where the 3-fold axes are each through one vertex and the center of one face.
Rotational symmetry with respect to any angle
Rotational symmetry with respect to any angle is, in two dimensions, circular symmetry. The fundamental domain is a half-line.
In three dimensions we can distinguish cylindrical symmetry and spherical symmetry (no change when rotating about one axis, or for any rotation). That is, no dependence on the angle using cylindrical coordinates and no dependence on either angle using spherical coordinates. The fundamental domain is a half-plane through the axis, and a radial half-line, respectively. Axisymmetric and axisymmetrical are adjectives which refer to an object having cylindrical symmetry, or axisymmetry (i.e. rotational symmetry with respect to a central axis) like a doughnut (torus). An example of approximate spherical symmetry is the Earth (with respect to density and other physical and chemical properties).
In 4D, continuous or discrete rotational symmetry about a plane corresponds to corresponding 2D rotational symmetry in every perpendicular plane, about the point of intersection. An object can also have rotational symmetry about two perpendicular planes, e.g. if it is the Cartesian product of two rotationally symmetry 2D figures, as in the case of e.g. the duocylinder and various regular duoprisms.
Rotational symmetry with translational symmetry
Arrangement within a primitive cell of 2- and 4-fold rotocenters. A fundamental domain is indicated in yellow.
Arrangement within a primitive cell of 2-, 3-, and 6-fold rotocenters, alone or in combination (consider the 6-fold symbol as a combination of a 2- and a 3-fold symbol); in the case of 2-fold symmetry only, the shape of the parallelogram can be different. For the case p6, a fundamental domain is indicated in yellow.
2-fold rotational symmetry together with single translational symmetry is one of the Frieze groups. There are two rotocenters per primitive cell.
Together with double translational symmetry the rotation groups are the following wallpaper groups, with axes per primitive cell:
• p2 (2222): 4×2-fold; rotation group of a parallelogrammic, rectangular, and rhombic lattice.
• p3 (333): 3×3-fold; not the rotation group of any lattice (every lattice is the same upside-down, i.e. has 2-fold symmetry, which this symmetry lacks); it is e.g. the rotation group of the regular triangular tiling with the equilateral triangles alternately colored.
• p4 (442): 2×4-fold, 2×2-fold; rotation group of a square lattice.
• p6 (632): 1×6-fold, 2×3-fold, 3×2-fold; rotation group of a hexagonal lattice.
• 2-fold rotocenters (including possible 4-fold and 6-fold), if present at all, form the translate of a lattice equal to the translational lattice, scaled by a factor 1/2. In the case translational symmetry in one dimension, a similar property applies, though the term "lattice" does not apply.
• 3-fold rotocenters (including possible 6-fold), if present at all, form a regular hexagonal lattice equal to the translational lattice, rotated by 30° (or equivalently 90°), and scaled by a factor ${\tfrac {1}{3}}{\sqrt {3}}$
• 4-fold rotocenters, if present at all, form a regular square lattice equal to the translational lattice, rotated by 45°, and scaled by a factor ${\tfrac {1}{2}}{\sqrt {2}}$
• 6-fold rotocenters, if present at all, form a regular hexagonal lattice which is the translate of the translational lattice.
Scaling of a lattice divides the number of points per unit area by the square of the scale factor. Therefore, the number of 2-, 3-, 4-, and 6-fold rotocenters per primitive cell is 4, 3, 2, and 1, respectively, again including 4-fold as a special case of 2-fold, etc.
3-fold rotational symmetry at one point and 2-fold at another one (or ditto in 3D with respect to parallel axes) implies rotation group p6, i.e. double translational symmetry and 6-fold rotational symmetry at some point (or, in 3D, parallel axis). The translation distance for the symmetry generated by one such pair of rotocenters is $2{\sqrt {3}}$ times their distance.
Euclidean plane Hyperbolic plane
Hexakis triangular tiling, an example of p6, [6,3]+, (632) (with colors) and p6m, [6,3], (*632) (without colors); the lines are reflection axes if colors are ignored, and a special kind of symmetry axis if colors are not ignored: reflection reverts the colors. Rectangular line grids in three orientations can be distinguished.
Order 3-7 kisrhombille, an example of [7,3]+ (732) symmetry and [7,3], (*732) (without colors)
See also
• Ambigram
• Axial symmetry
• Crystallographic restriction theorem
• Lorentz symmetry
• Point groups in three dimensions
• Screw axis
• Space group
• Translational symmetry
References
1. Gálvez, José A.; Mira, Pablo. "Rotational symmetry of Weingarten spheres in homogeneous three-manifolds".
2. Maksimov, Dmitrii N. "Topological Bound States in the Continuum in Arrays of Dielectric Spheres". LV Kirensky Institute of Physics, Krasnoyarsk, Russia.
• Weyl, Hermann (1982) [1952]. Symmetry. Princeton: Princeton University Press. ISBN 0-691-02374-3.
External links
• Media related to Rotational symmetry at Wikimedia Commons
• Rotational Symmetry Examples from Math Is Fun
|
Wikipedia
|
Angular velocity
In physics, angular velocity (symbol ω, sometimes Ω), also known as angular frequency vector,[1] is a pseudovector representation of how the angular position or orientation of an object changes with time, i.e. how quickly an object rotates (spins or revolves) around an axis of rotation and how fast the axis itself changes direction.
Angular velocity
Common symbols: ω
SI unit: rad⋅s−1
In SI base units: s−1
Extensive?: yes
Intensive?: yes (for rigid body only)
Conserved?: no
Behaviour under coord transformation: pseudovector
Derivations from other quantities: ω = dθ / dt
Dimension: ${\mathsf {T}}^{-1}$
The magnitude of the pseudovector, $\omega =\|{\boldsymbol {\omega }}\|$, represents the angular speed (or angular frequency), the rate at which the object rotates (spins or revolves). The pseudovector direction ${\hat {\boldsymbol {\omega }}}={\boldsymbol {\omega }}/\omega $ is normal to the instantaneous plane of rotation or angular displacement.
There are two types of angular velocity:
• Orbital angular velocity refers to how fast a point object revolves about a fixed origin, i.e. the time rate of change of its angular position relative to the origin.
• Spin angular velocity refers to how fast a rigid body rotates with respect to its center of rotation and is independent of the choice of origin, in contrast to orbital angular velocity.
Angular velocity has dimension of angle per unit time; this is analogous to linear velocity, with angle replacing distance, with time in common. The SI unit of angular velocity is radians per second,[2] although degrees per second (°/s) is also common. The radian is a dimensionless quantity, thus the SI units of angular velocity are dimensionally equivalent to reciprocal seconds, s−1, although rad/s is preferable.[3]
The sense of angular velocity is conventionally specified by the right-hand rule: if the fingers of the right hand curl in the sense of the rotation, the thumb points along the angular velocity vector, so the rotation appears counter-clockwise when viewed from the tip of the vector. Negation (multiplication by −1) leaves the magnitude unchanged but flips the axis in the opposite direction.[4]
For example, a geostationary satellite, which completes one orbit per day above the equator (360 degrees per 24 hours), has angular velocity magnitude (angular speed) ω = 360°/24 h = 15°/h (or 2π rad/24 h ≈ 0.26 rad/h) and angular velocity direction (a unit vector) parallel to Earth's rotation axis (${\hat {\omega }}={\hat {Z}}$, in the geocentric coordinate system). If angle is measured in radians, the linear velocity is the radius times the angular velocity, ${\boldsymbol {v}}=r{\boldsymbol {\omega }}$. With orbital radius 42,000 km from the Earth's center, the satellite's tangential speed through space is thus v = 42,000 km × 0.26 rad/h ≈ 11,000 km/h. The angular velocity is positive since the satellite travels eastward with the Earth's rotation (counter-clockwise from above the north pole).
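A quick numeric check of the geostationary example, using the article's rounded figures:

```python
import math

period_h = 24.0                 # one revolution per day
omega = 2 * math.pi / period_h  # angular speed in rad/h
r_km = 42_000                   # orbital radius from Earth's center

v_kmh = r_km * omega            # tangential speed, v = r * omega
print(f"omega ≈ {omega:.3f} rad/h")   # ≈ 0.262 rad/h
print(f"v ≈ {v_kmh:.0f} km/h")        # ≈ 11,000 km/h
```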
Orbital angular velocity of a point particle
Particle in two dimensions
In the simplest case of circular motion at radius $r$, with position given by the angular displacement $\phi (t)$ from the x-axis, the orbital angular velocity is the rate of change of angle with respect to time: $ \omega ={\frac {d\phi }{dt}}$. If $\phi $ is measured in radians, the arc-length from the positive x-axis around the circle to the particle is $\ell =r\phi $, and the linear velocity is $ v(t)={\frac {d\ell }{dt}}=r\omega (t)$, so that $ \omega ={\frac {v}{r}}$.
In the general case of a particle moving in the plane, the orbital angular velocity is the rate at which the position vector relative to a chosen origin "sweeps out" angle. The diagram shows the position vector $\mathbf {r} $ from the origin $O$ to a particle $P$, with its polar coordinates $(r,\phi )$. (All variables are functions of time $t$.) The particle has linear velocity splitting as $\mathbf {v} =\mathbf {v} _{\|}+\mathbf {v} _{\perp }$, with the radial component $\mathbf {v} _{\|}$ parallel to the radius, and the cross-radial (or tangential) component $\mathbf {v} _{\perp }$ perpendicular to the radius. When there is no radial component, the particle moves around the origin in a circle; but when there is no cross-radial component, it moves in a straight line from the origin. Since radial motion leaves the angle unchanged, only the cross-radial component of linear velocity contributes to angular velocity.
The angular velocity ω is the rate of change of angular position with respect to time, which can be computed from the cross-radial velocity as:
$\omega ={\frac {d\phi }{dt}}={\frac {v_{\perp }}{r}}.$
Here the cross-radial speed $v_{\perp }$ is the signed magnitude of $\mathbf {v} _{\perp }$, positive for counter-clockwise motion, negative for clockwise. Taking polar coordinates for the linear velocity $\mathbf {v} $ gives magnitude $v$ (linear speed) and angle $\theta $ relative to the radius vector; in these terms, $v_{\perp }=v\sin(\theta )$, so that
$\omega ={\frac {v\sin(\theta )}{r}}.$
These formulas may be derived by writing $\mathbf {r} =(r\cos(\varphi ),r\sin(\varphi ))$, where $r$ is the distance to the origin as a function of time and $\varphi $ is the angle between the position vector and the x-axis, also as a function of time. Then $ {\frac {d\mathbf {r} }{dt}}=({\dot {r}}\cos(\varphi )-r{\dot {\varphi }}\sin(\varphi ),{\dot {r}}\sin(\varphi )+r{\dot {\varphi }}\cos(\varphi ))$, which equals ${\dot {r}}(\cos(\varphi ),\sin(\varphi ))+r{\dot {\varphi }}(-\sin(\varphi ),\cos(\varphi ))={\dot {r}}{\hat {r}}+r{\dot {\varphi }}{\hat {\varphi }}$ (see Unit vector in cylindrical coordinates). Since $ {\frac {d\mathbf {r} }{dt}}=\mathbf {v} $, the radial component of the velocity is ${\dot {r}}$, because ${\hat {r}}$ is a radial unit vector, and the perpendicular component is $r{\dot {\varphi }}$, because ${\hat {\varphi }}$ is a perpendicular unit vector.
In two dimensions, angular velocity is a number with plus or minus sign indicating orientation, but not pointing in a direction. The sign is conventionally taken to be positive if the radius vector turns counter-clockwise, and negative if clockwise. Angular velocity then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the two axes.
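In coordinates, this signed two-dimensional angular velocity is the z-component of r × v divided by r² — a minimal sketch (the helper name is illustrative):

```python
def angular_velocity_2d(x, y, vx, vy):
    """Signed orbital angular velocity about the origin:
    omega = v_perp / r = (x*vy - y*vx) / r^2 (a pseudoscalar)."""
    r2 = x * x + y * y
    return (x * vy - y * vx) / r2

# Counter-clockwise circular motion at radius 2 with speed 3:
print(angular_velocity_2d(2.0, 0.0, 0.0, 3.0))   # 1.5 (positive, CCW)
# Same point, clockwise motion: the sign flips.
print(angular_velocity_2d(2.0, 0.0, 0.0, -3.0))  # -1.5
```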
Particle in three dimensions
In three-dimensional space, we again have the position vector r of a moving particle. Here, orbital angular velocity is a pseudovector whose magnitude is the rate at which r sweeps out angle, and whose direction is perpendicular to the instantaneous plane in which r sweeps out angle (i.e. the plane spanned by r and v). However, as there are two directions perpendicular to any plane, an additional condition is necessary to uniquely specify the direction of the angular velocity; conventionally, the right-hand rule is used.
Let the pseudovector $\mathbf {u} $ be the unit vector perpendicular to the plane spanned by r and v, so that the right-hand rule is satisfied (i.e. the instantaneous direction of angular displacement is counter-clockwise looking from the top of $\mathbf {u} $). Taking polar coordinates $(r,\phi )$ in this plane, as in the two-dimensional case above, one may define the orbital angular velocity vector as:
${\boldsymbol {\omega }}=\omega \mathbf {u} ={\frac {d\phi }{dt}}\mathbf {u} ={\frac {v\sin(\theta )}{r}}\mathbf {u} ,$
where θ is the angle between r and v. In terms of the cross product, this is:
${\boldsymbol {\omega }}={\frac {\mathbf {r} \times \mathbf {v} }{r^{2}}}.$[5]
From the above equation, one can recover the tangential velocity as:
$\mathbf {v} _{\perp }={\boldsymbol {\omega }}\times \mathbf {r} $
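The two formulas above — ω = (r × v)/r² and v⊥ = ω × r — can be checked numerically; a NumPy sketch with illustrative values:

```python
import numpy as np

r = np.array([3.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])          # radial part (1,0,0), tangential part (0,2,0)

omega = np.cross(r, v) / np.dot(r, r)  # orbital angular velocity pseudovector
v_perp = np.cross(omega, r)            # recovered tangential component

print(omega)   # ≈ (0, 0, 2/3), magnitude v_perp / r
print(v_perp)  # ≈ (0, 2, 0): the radial part of v is discarded
```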
Spin angular velocity of a rigid body or reference frame
Given a rotating frame of three unit coordinate vectors, all three must have the same angular speed at each instant. In such a frame, each vector may be considered as a moving particle with constant scalar radius.
The rotating frame appears in the context of rigid bodies, and special tools have been developed for it: the spin angular velocity may be described as a vector or equivalently as a tensor.
Consistent with the general definition, the spin angular velocity of a frame is defined as the orbital angular velocity of any of the three vectors (same for all) with respect to its own center of rotation. The addition of angular velocity vectors for frames is also defined by the usual vector addition (composition of linear movements), and can be useful to decompose the rotation as in a gimbal. All components of the vector can be calculated as derivatives of the parameters defining the moving frames (Euler angles or rotation matrices). As in the general case, addition is commutative: $\omega _{1}+\omega _{2}=\omega _{2}+\omega _{1}$.
By Euler's rotation theorem, any rotating frame possesses an instantaneous axis of rotation, which is the direction of the angular velocity vector, and the magnitude of the angular velocity is consistent with the two-dimensional case.
If we choose a reference point ${\boldsymbol {R}}$ fixed in the rigid body, the velocity ${\dot {\boldsymbol {r}}}$ of any point in the body is given by
${\dot {\boldsymbol {r}}}={\dot {\boldsymbol {R}}}+{\boldsymbol {\omega }}\times ({\boldsymbol {r}}-{\boldsymbol {R}})$
Components from the basis vectors of a body-fixed frame
Consider a rigid body rotating about a fixed point O. Construct a reference frame in the body consisting of an orthonormal set of vectors $\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}$ fixed to the body and with their common origin at O. The spin angular velocity vector of both frame and body about O is then
${\boldsymbol {\omega }}=\left({\dot {\mathbf {e} }}_{1}\cdot \mathbf {e} _{2}\right)\mathbf {e} _{3}+\left({\dot {\mathbf {e} }}_{2}\cdot \mathbf {e} _{3}\right)\mathbf {e} _{1}+\left({\dot {\mathbf {e} }}_{3}\cdot \mathbf {e} _{1}\right)\mathbf {e} _{2},$
where ${\dot {\mathbf {e} }}_{i}={\frac {d\mathbf {e} _{i}}{dt}}$ is the time rate of change of the frame vector $\mathbf {e} _{i},i=1,2,3,$ due to the rotation.
This formula is incompatible with the expression for orbital angular velocity
${\boldsymbol {\omega }}={\frac {\mathbf {r} \times \mathbf {v} }{r^{2}}},$
as that formula defines angular velocity for a single point about O, while the formula in this section applies to a frame or rigid body. In the case of a rigid body a single ${\boldsymbol {\omega }}$ has to account for the motion of all particles in the body.
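The basis-vector formula above can be verified in the simplest case, a frame spinning about the z-axis at rate θ̇, where the time derivatives of the basis vectors are available analytically (a NumPy sketch):

```python
import numpy as np

theta, theta_dot = 0.7, 2.0  # arbitrary angle and angular rate about z

c, s = np.cos(theta), np.sin(theta)
e1 = np.array([c, s, 0.0])
e2 = np.array([-s, c, 0.0])
e3 = np.array([0.0, 0.0, 1.0])
# Analytic time derivatives of the basis vectors for rotation about z:
e1_dot = theta_dot * np.array([-s, c, 0.0])
e2_dot = theta_dot * np.array([-c, -s, 0.0])
e3_dot = np.zeros(3)

omega = (e1_dot @ e2) * e3 + (e2_dot @ e3) * e1 + (e3_dot @ e1) * e2
print(omega)  # recovers (0, 0, theta_dot): rotation about z at rate 2.0
```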
Components from Euler angles
The components of the spin angular velocity pseudovector were first calculated by Leonhard Euler using his Euler angles and an intermediate frame defined by three axes:
• One axis of the reference frame (the precession axis)
• The line of nodes of the moving frame with respect to the reference frame (nutation axis)
• One axis of the moving frame (the intrinsic rotation axis)
Euler proved that the projections of the angular velocity pseudovector on each of these three axes are the derivatives of the associated angles (which is equivalent to decomposing the instantaneous rotation into three instantaneous Euler rotations). Therefore:[6]
${\boldsymbol {\omega }}={\dot {\alpha }}\mathbf {u} _{1}+{\dot {\beta }}\mathbf {u} _{2}+{\dot {\gamma }}\mathbf {u} _{3}$
This basis is not orthonormal and it is difficult to use, but now the velocity vector can be changed to the fixed frame or to the moving frame with just a change of bases. For example, changing to the mobile frame:
${\boldsymbol {\omega }}=({\dot {\alpha }}\sin \beta \sin \gamma +{\dot {\beta }}\cos \gamma ){\hat {\mathbf {i} }}+({\dot {\alpha }}\sin \beta \cos \gamma -{\dot {\beta }}\sin \gamma ){\hat {\mathbf {j} }}+({\dot {\alpha }}\cos \beta +{\dot {\gamma }}){\hat {\mathbf {k} }}$
where ${\hat {\mathbf {i} }},{\hat {\mathbf {j} }},{\hat {\mathbf {k} }}$ are unit vectors for the frame fixed in the moving body. This example has been made using the Z-X-Z convention for Euler angles.
Tensor
See also: Skew-symmetric matrix
The angular velocity vector ${\boldsymbol {\omega }}=(\omega _{x},\omega _{y},\omega _{z})$ defined above may be equivalently expressed as an angular velocity tensor, the matrix (or linear mapping) W = W(t) defined by:
$W={\begin{pmatrix}0&-\omega _{z}&\omega _{y}\\\omega _{z}&0&-\omega _{x}\\-\omega _{y}&\omega _{x}&0\\\end{pmatrix}}$
This is an infinitesimal rotation matrix. The linear mapping W acts as $({\boldsymbol {\omega }}\times )$:
${\boldsymbol {\omega }}\times \mathbf {r} =W\mathbf {r} .$
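The identity W r = ω × r can be checked directly; a NumPy sketch (`skew` is an illustrative helper name):

```python
import numpy as np

def skew(w):
    """Angular velocity tensor W such that W @ r == cross(w, r)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [wz,  0.0, -wx],
                     [-wy, wx,  0.0]])

w = np.array([1.0, -2.0, 0.5])
r = np.array([0.3, 0.4, -1.0])
print(np.allclose(skew(w) @ r, np.cross(w, r)))  # True
```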
Calculation of angular velocity tensor of a rotating frame
A vector $\mathbf {r} $ undergoing uniform circular motion around a fixed axis satisfies:
${\frac {d\mathbf {r} }{dt}}={\boldsymbol {\omega }}\times \mathbf {r} =W\mathbf {r} $
Let $A(t)=[\mathbf {e} _{1}(t)\ \mathbf {e} _{2}(t)\ \mathbf {e} _{3}(t)]$ be the orientation matrix of a frame, whose columns $\mathbf {e} _{1}$, $\mathbf {e} _{2}$, and $\mathbf {e} _{3}$ are the moving orthonormal coordinate vectors of the frame. We can obtain the angular velocity tensor W(t) of A(t) as follows:
The angular velocity $\omega $ must be the same for each of the column vectors $\mathbf {e} _{i}$, so we have:
${\begin{aligned}{\frac {dA}{dt}}&={\begin{bmatrix}{\dfrac {d\mathbf {e} _{1}}{dt}}&{\dfrac {d\mathbf {e} _{2}}{dt}}&{\dfrac {d\mathbf {e} _{3}}{dt}}\end{bmatrix}}\\&={\begin{bmatrix}\omega \times \mathbf {e} _{1}&\omega \times \mathbf {e} _{2}&\omega \times \mathbf {e} _{3}\end{bmatrix}}\\&={\begin{bmatrix}W\mathbf {e} _{1}&W\mathbf {e} _{2}&W\mathbf {e} _{3}\end{bmatrix}}\\&=WA,\end{aligned}}$
which holds even if A(t) does not rotate uniformly. Therefore the angular velocity tensor is:
$W={\frac {dA}{dt}}A^{-1}={\frac {dA}{dt}}A^{\mathsf {T}},$
since the inverse of an orthogonal matrix $A$ is its transpose $A^{\mathsf {T}}$.
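A sketch checking W = (dA/dt) Aᵀ for a frame rotating about the z-axis, where dA/dt is available in closed form:

```python
import numpy as np

theta, theta_dot = 1.2, 3.0   # frame rotating about z

c, s = np.cos(theta), np.sin(theta)
A = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
# Analytic time derivative of A (chain rule through theta(t)):
dA = theta_dot * np.array([[-s, -c, 0.0], [c, -s, 0.0], [0.0, 0.0, 0.0]])

W = dA @ A.T                   # angular velocity tensor
omega = np.array([W[2, 1], W[0, 2], W[1, 0]])  # Hodge dual (wx, wy, wz)
print(np.round(W, 6))          # skew-symmetric, off-diagonal entries ±theta_dot
print(omega)                   # recovers (0, 0, theta_dot)
```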
Properties
See also: Infinitesimal rotation
In general, the angular velocity in an n-dimensional space is the time derivative of the angular displacement tensor, which is a second rank skew-symmetric tensor.
This tensor W will have n(n−1)/2 independent components, which is the dimension of the Lie algebra of the Lie group of rotations of an n-dimensional inner product space.[7]
Duality with respect to the velocity vector
In three dimensions, angular velocity can be represented by a pseudovector because second-rank skew-symmetric tensors are dual to pseudovectors in three dimensions. Since the angular velocity tensor W = W(t) is a skew-symmetric matrix:
$W={\begin{pmatrix}0&-\omega _{z}&\omega _{y}\\\omega _{z}&0&-\omega _{x}\\-\omega _{y}&\omega _{x}&0\\\end{pmatrix}},$
its Hodge dual is a vector, which is precisely the previous angular velocity vector ${\boldsymbol {\omega }}=[\omega _{x},\omega _{y},\omega _{z}]$.
Exponential of W
If we know an initial frame A(0) and we are given a constant angular velocity tensor W, we can obtain A(t) for any given t. Recall the matrix differential equation:
${\frac {dA}{dt}}=W\cdot A.$
This equation can be integrated to give:
$A(t)=e^{Wt}A(0),$
which shows a connection with the Lie group of rotations.
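For a skew-symmetric W the exponential has the closed form of Rodrigues' rotation formula, exp(Wt) = I + (sin ωt / ω) W + ((1 − cos ωt) / ω²) W² with ω = |ω|; a sketch checking that this reproduces a plane rotation (Rodrigues' formula is used here in place of a general-purpose matrix exponential):

```python
import numpy as np

w_vec = np.array([0.0, 0.0, 2.0])        # constant angular velocity about z
w = np.linalg.norm(w_vec)
W = np.array([[0.0, -w_vec[2], w_vec[1]],
              [w_vec[2], 0.0, -w_vec[0]],
              [-w_vec[1], w_vec[0], 0.0]])

def exp_Wt(W, w, t):
    """Closed-form matrix exponential exp(Wt) for skew-symmetric W
    (Rodrigues' formula), valid for w = |omega| > 0."""
    return np.eye(3) + np.sin(w * t) / w * W + (1 - np.cos(w * t)) / w**2 * (W @ W)

t = 0.9
c, s = np.cos(w * t), np.sin(w * t)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(np.allclose(exp_Wt(W, w, t), Rz))  # True: exp(Wt) is a rotation by w*t about z
```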
W is skew-symmetric
We prove that the angular velocity tensor is skew-symmetric, i.e. $W={\frac {dA(t)}{dt}}\cdot A^{\text{T}}$ satisfies $W^{\text{T}}=-W$.
A rotation matrix A is orthogonal, its inverse being its transpose, so we have $I=A\cdot A^{\text{T}}$. For $A=A(t)$ a frame matrix, taking the time derivative of the equation gives:
$0={\frac {dA}{dt}}A^{\text{T}}+A{\frac {dA^{\text{T}}}{dt}}$
Applying the formula $(AB)^{\text{T}}=B^{\text{T}}A^{\text{T}}$,
$0={\frac {dA}{dt}}A^{\text{T}}+\left({\frac {dA}{dt}}A^{\text{T}}\right)^{\text{T}}=W+W^{\text{T}}$
Thus, W is the negative of its transpose, which implies it is skew symmetric.
Coordinate-free description
At any instant $t$, the angular velocity tensor represents a linear map between the position vector $\mathbf {r} (t)$ and the velocity vector $\mathbf {v} (t)$ of a point on a rigid body rotating around the origin:
$\mathbf {v} =W\mathbf {r} .$
The relation between this linear map and the angular velocity pseudovector ${\boldsymbol {\omega }}$ is the following.
Because W is the derivative of an orthogonal transformation, the bilinear form
$B(\mathbf {r} ,\mathbf {s} )=(W\mathbf {r} )\cdot \mathbf {s} $
is skew-symmetric. Thus we can apply the fact from exterior algebra that there is a unique linear form $L$ on $\Lambda ^{2}V$ such that
$L(\mathbf {r} \wedge \mathbf {s} )=B(\mathbf {r} ,\mathbf {s} )$
where $\mathbf {r} \wedge \mathbf {s} \in \Lambda ^{2}V$ is the exterior product of $\mathbf {r} $ and $\mathbf {s} $.
Taking the sharp L♯ of L we get
$(W\mathbf {r} )\cdot \mathbf {s} =L^{\sharp }\cdot (\mathbf {r} \wedge \mathbf {s} )$
Introducing ${\boldsymbol {\omega }}:={\star }(L^{\sharp })$, as the Hodge dual of L♯, and applying the definition of the Hodge dual twice supposing that the preferred unit 3-vector is $\star 1$
$(W\mathbf {r} )\cdot \mathbf {s} ={\star }({\star }(L^{\sharp })\wedge \mathbf {r} \wedge \mathbf {s} )={\star }({\boldsymbol {\omega }}\wedge \mathbf {r} \wedge \mathbf {s} )={\star }({\boldsymbol {\omega }}\wedge \mathbf {r} )\cdot \mathbf {s} =({\boldsymbol {\omega }}\times \mathbf {r} )\cdot \mathbf {s} ,$
where
${\boldsymbol {\omega }}\times \mathbf {r} :={\star }({\boldsymbol {\omega }}\wedge \mathbf {r} )$
by definition.
Because $\mathbf {s} $ is an arbitrary vector, the nondegeneracy of the scalar product implies
$W\mathbf {r} ={\boldsymbol {\omega }}\times \mathbf {r} $
Angular velocity as a vector field
Since the spin angular velocity tensor of a rigid body (in its rest frame) is a linear transformation that maps positions to velocities (within the rigid body), it can be regarded as a constant vector field. In particular, the spin angular velocity is a Killing vector field corresponding to an element of the Lie algebra $\mathfrak {so}(3)$ of the 3-dimensional rotation group SO(3).
Also, it can be shown that the spin angular velocity vector field is exactly half of the curl of the linear velocity vector field v(r) of the rigid body. In symbols,
${\boldsymbol {\omega }}={\frac {1}{2}}\nabla \times \mathbf {v} $
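The relation ω = ½ ∇ × v can be checked numerically; since v(r) = ω × r is linear in r, central differences recover the curl essentially exactly (a sketch):

```python
import numpy as np

omega = np.array([0.5, -1.0, 2.0])

def v(r):
    """Rigid-body linear velocity field v(r) = omega x r."""
    return np.cross(omega, r)

def curl(f, r, h=1e-5):
    """Central-difference curl of a vector field f at point r."""
    J = np.empty((3, 3))           # J[i, j] = d f_i / d r_j
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        J[:, j] = (f(r + dr) - f(r - dr)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

r = np.array([1.0, 2.0, 3.0])
print(0.5 * curl(v, r))  # ≈ (0.5, -1.0, 2.0), recovering omega
```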
Rigid body considerations
The same equations for the angular speed can be obtained by reasoning about a rotating rigid body. Here it is not assumed that the rigid body rotates around the origin; instead, it may be supposed to rotate around an arbitrary point that moves with a linear velocity V(t) at each instant.
To obtain the equations, it is convenient to imagine a rigid body attached to the frames and consider a coordinate system that is fixed with respect to the rigid body. Then we will study the coordinate transformations between this coordinate system and the fixed "laboratory" system.
As shown in the figure on the right, the lab system's origin is at point O, the rigid body system origin is at O′, and the vector from O to O′ is R. A particle (i) in the rigid body is located at point P, and the position vector of this particle is Ri in the lab frame and ri in the body frame. It is seen that the position of the particle can be written:
$\mathbf {R} _{i}=\mathbf {R} +\mathbf {r} _{i}$
The defining characteristic of a rigid body is that the distance between any two points in a rigid body is unchanging in time. This means that the length of the vector $\mathbf {r} _{i}$ is unchanging. By Euler's rotation theorem, we may replace the vector $\mathbf {r} _{i}$ with ${\mathcal {R}}\mathbf {r} _{io}$ where ${\mathcal {R}}$ is a 3×3 rotation matrix and $\mathbf {r} _{io}$ is the position of the particle at some fixed point in time, say t = 0. This replacement is useful, because now it is only the rotation matrix ${\mathcal {R}}$ that is changing in time and not the reference vector $\mathbf {r} _{io}$, as the rigid body rotates about point O′. Also, since the three columns of the rotation matrix represent the three versors of a reference frame rotating together with the rigid body, any rotation about any axis becomes now visible, while the vector $\mathbf {r} _{i}$ would not rotate if the rotation axis were parallel to it, and hence it would only describe a rotation about an axis perpendicular to it (i.e., it would not see the component of the angular velocity pseudovector parallel to it, and would only allow the computation of the component perpendicular to it). The position of the particle is now written as:
$\mathbf {R} _{i}=\mathbf {R} +{\mathcal {R}}\mathbf {r} _{io}$
Taking the time derivative yields the velocity of the particle:
$\mathbf {V} _{i}=\mathbf {V} +{\frac {d{\mathcal {R}}}{dt}}\mathbf {r} _{io}$
where Vi is the velocity of the particle (in the lab frame) and V is the velocity of O′ (the origin of the rigid body frame). Since ${\mathcal {R}}$ is a rotation matrix its inverse is its transpose. So we substitute ${\mathcal {I}}={\mathcal {R}}^{\text{T}}{\mathcal {R}}$:
$\mathbf {V} _{i}=\mathbf {V} +{\frac {d{\mathcal {R}}}{dt}}{\mathcal {I}}\mathbf {r} _{io}$
$\mathbf {V} _{i}=\mathbf {V} +{\frac {d{\mathcal {R}}}{dt}}{\mathcal {R}}^{\text{T}}{\mathcal {R}}\mathbf {r} _{io}$
$\mathbf {V} _{i}=\mathbf {V} +{\frac {d{\mathcal {R}}}{dt}}{\mathcal {R}}^{\text{T}}\mathbf {r} _{i}$
or
$\mathbf {V} _{i}=\mathbf {V} +W\mathbf {r} _{i}$
where $W={\frac {d{\mathcal {R}}}{dt}}{\mathcal {R}}^{\text{T}}$ is the previous angular velocity tensor.
It can be proved that this is a skew symmetric matrix, so we can take its dual to get a 3 dimensional pseudovector that is precisely the previous angular velocity vector ${\boldsymbol {\omega }}$:
${\boldsymbol {\omega }}=[\omega _{x},\omega _{y},\omega _{z}]$
Substituting ω for W into the above velocity expression, and replacing matrix multiplication by an equivalent cross product:
$\mathbf {V} _{i}=\mathbf {V} +{\boldsymbol {\omega }}\times \mathbf {r} _{i}$
It can be seen that the velocity of a point in a rigid body can be divided into two terms – the velocity of a reference point fixed in the rigid body plus the cross product term involving the orbital angular velocity of the particle with respect to the reference point. This angular velocity is what physicists call the "spin angular velocity" of the rigid body, as opposed to the orbital angular velocity of the reference point O′ about the origin O.
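The decomposition V_i = V + ω × r_i can be sketched directly (the numbers are illustrative):

```python
import numpy as np

omega = np.array([0.0, 0.0, 1.5])    # spin angular velocity of the body
V_ref = np.array([1.0, 0.0, 0.0])    # velocity of the chosen reference point

def point_velocity(r_i):
    """V_i = V + omega x r_i, with r_i measured from the reference point."""
    return V_ref + np.cross(omega, r_i)

print(point_velocity(np.array([0.0, 0.0, 0.0])))  # the reference point itself: (1, 0, 0)
print(point_velocity(np.array([2.0, 0.0, 0.0])))  # a point 2 units ahead: (1, 3, 0)
```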
Consistency
We have supposed that the rigid body rotates around an arbitrary point. We should prove that the spin angular velocity previously defined is independent of the choice of origin, which means that the spin angular velocity is an intrinsic property of the spinning rigid body. (Note the marked contrast of this with the orbital angular velocity of a point particle, which certainly does depend on the choice of origin.)
See the graph to the right: the origin of the lab frame is O, while O1 and O2 are two fixed points on the rigid body, whose velocities are $\mathbf {v} _{1}$ and $\mathbf {v} _{2}$ respectively. Suppose the angular velocities with respect to O1 and O2 are ${\boldsymbol {\omega }}_{1}$ and ${\boldsymbol {\omega }}_{2}$ respectively. Since point P and O2 each have only one velocity,
$\mathbf {v} _{1}+{\boldsymbol {\omega }}_{1}\times \mathbf {r} _{1}=\mathbf {v} _{2}+{\boldsymbol {\omega }}_{2}\times \mathbf {r} _{2}$
$\mathbf {v} _{2}=\mathbf {v} _{1}+{\boldsymbol {\omega }}_{1}\times \mathbf {r} =\mathbf {v} _{1}+{\boldsymbol {\omega }}_{1}\times (\mathbf {r} _{1}-\mathbf {r} _{2})$
The above two yields that
$({\boldsymbol {\omega }}_{2}-{\boldsymbol {\omega }}_{1})\times \mathbf {r} _{2}=0$
Since the point P (and thus $\mathbf {r} _{2}$) is arbitrary, it follows that
${\boldsymbol {\omega }}_{1}={\boldsymbol {\omega }}_{2}$
If the reference point is the instantaneous axis of rotation the expression of the velocity of a point in the rigid body will have just the angular velocity term. This is because the velocity of the instantaneous axis of rotation is zero. An example of the instantaneous axis of rotation is the hinge of a door. Another example is the point of contact of a purely rolling spherical (or, more generally, convex) rigid body.
See also
• Angular acceleration
• Angular frequency
• Angular momentum
• Areal velocity
• Isometry
• Orthogonal group
• Rigid body dynamics
• Vorticity
References
1. Cummings, Karen; Halliday, David (2007). Understanding physics. New Delhi: John Wiley & Sons Inc., authorized reprint to Wiley – India. pp. 449, 484, 485, 487. ISBN 978-81-265-0882-2.(UP1)
2. Taylor, Barry N. (2009). International System of Units (SI) (revised 2008 ed.). DIANE Publishing. p. 27. ISBN 978-1-4379-1558-7. Extract of page 27
3. "Units with special names and symbols; units that incorporate special names and symbols".
4. Hibbeler, Russell C. (2009). Engineering Mechanics. Upper Saddle River, New Jersey: Pearson Prentice Hall. pp. 314, 153. ISBN 978-0-13-607791-6.(EM1)
5. Singh, Sunil K. Angular Velocity. Rice University. Retrieved 21 May 2021 – via OpenStax.
6. Hedrih, K. S.: Leonhard Euler (1707–1783) and rigid body dynamics
7. Rotations and Angular Momentum on the Classical Mechanics page of the website of John Baez, especially Questions 1 and 2.
• Symon, Keith (1971). Mechanics. Addison-Wesley, Reading, MA. ISBN 978-0-201-07392-8.
• Landau, L.D.; Lifshitz, E.M. (1997). Mechanics. Butterworth-Heinemann. ISBN 978-0-7506-2896-9.
External links
Look up angular velocity in Wiktionary, the free dictionary.
Wikimedia Commons has media related to Angular velocity.
• A college text-book of physics By Arthur Lalanne Kimball (Angular Velocity of a particle)
• Pickering, Steve (2009). "ω Speed of Rotation [Angular Velocity]". Sixty Symbols. Brady Haran for the University of Nottingham.
Rotational invariance
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.
Mathematics
Functions
For example, the function
$f(x,y)=x^{2}+y^{2}$
is invariant under rotations of the plane around the origin, because for a rotated set of coordinates through any angle θ
$x'=x\cos \theta -y\sin \theta $
$y'=x\sin \theta +y\cos \theta $
the function, after some cancellation of terms, takes exactly the same form
$f(x',y')={x}^{2}+{y}^{2}$
The rotation of coordinates can be expressed using matrix form using the rotation matrix,
${\begin{bmatrix}x'\\y'\\\end{bmatrix}}={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}{\begin{bmatrix}x\\y\\\end{bmatrix}}.$
or symbolically x′ = Rx. Symbolically, the rotation invariance of a real-valued function of two real variables is
$f(\mathbf {x} ')=f(\mathbf {Rx} )=f(\mathbf {x} )$
In words, the function of the rotated coordinates takes exactly the same form as it did with the initial coordinates, the only difference is the rotated coordinates replace the initial ones. For a real-valued function of three or more real variables, this expression extends easily using appropriate rotation matrices.
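A minimal numeric check of this invariance for f(x, y) = x² + y²:

```python
import math

def f(x, y):
    return x * x + y * y        # squared distance from the origin

def rotate(x, y, theta):
    """Apply the 2x2 rotation matrix to the point (x, y)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

x, y, theta = 3.0, -1.0, 0.8
xp, yp = rotate(x, y, theta)
print(math.isclose(f(xp, yp), f(x, y)))  # True for every angle theta
```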
The concept also extends to a vector-valued function f of one or more variables;
$\mathbf {f} (\mathbf {x} ')=\mathbf {f} (\mathbf {Rx} )=\mathbf {f} (\mathbf {x} )$
In all the above cases, the arguments (here called "coordinates" for concreteness) are rotated, not the function itself.
Operators
For a function
$f:X\rightarrow X$
which maps elements from a subset X of Euclidean space to itself, rotational invariance may also mean that the function commutes with rotations of elements in X. This also applies for an operator that acts on such functions. An example is the two-dimensional Laplace operator
$\nabla ^{2}={\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}$
which acts on a function f to obtain another function ∇2f. This operator is invariant under rotations.
If g is the function g(p) = f(R(p)), where R is any rotation, then (∇2g)(p) = (∇2f )(R(p)); that is, rotating a function merely rotates its Laplacian.
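This commutation can be verified symbolically, for instance with SymPy (the test function below is an arbitrary choice):

```python
import sympy as sp

x, y, t = sp.symbols('x y theta', real=True)
f = x**3 * y - 2 * x * y**2 + y     # any smooth test function

# g(p) = f(R(p)): substitute the rotated coordinates into f simultaneously
xr = x * sp.cos(t) - y * sp.sin(t)
yr = x * sp.sin(t) + y * sp.cos(t)
g = f.subs({x: xr, y: yr}, simultaneous=True)

lap = lambda h: sp.diff(h, x, 2) + sp.diff(h, y, 2)
# Laplacian of the rotated function vs. rotated Laplacian of f:
lhs = sp.simplify(lap(g))
rhs = sp.simplify(lap(f).subs({x: xr, y: yr}, simultaneous=True))
print(sp.simplify(lhs - rhs) == 0)  # True: the Laplacian commutes with rotations
```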
Physics
In physics, if a system behaves the same regardless of how it is oriented in space, then its Lagrangian is rotationally invariant. According to Noether's theorem, if the action (the integral over time of its Lagrangian) of a physical system is invariant under rotation, then angular momentum is conserved.
Application to quantum mechanics
Further information: Rotation operator (quantum mechanics) and Symmetry in quantum mechanics
In quantum mechanics, rotational invariance is the property that after a rotation the new system still obeys Schrödinger's equation. That is
$[R,E-H]=0$
for any rotation R. Since the rotation does not depend explicitly on time, it commutes with the energy operator. Thus for rotational invariance we must have [R, H] = 0.
For infinitesimal rotations (in the xy-plane for this example; it may be done likewise for any plane) by an angle dθ the (infinitesimal) rotation operator is
$R=1+J_{z}d\theta \,,$
then
$\left[1+J_{z}d\theta ,{\frac {d}{dt}}\right]=0\,,$
thus
${\frac {d}{dt}}J_{z}=0\,,$
in other words angular momentum is conserved.
See also
• Axial symmetry
• Invariant measure
• Isotropy
• Maxwell's theorem
• Rotational symmetry
References
• Stenger, Victor J. (2000). Timeless Reality. Prometheus Books. Especially chpt. 12. Nontechnical.
|
Wikipedia
|
Rota–Baxter algebra
In mathematics, a Rota–Baxter algebra is an associative algebra, together with a particular linear map R which satisfies the Rota–Baxter identity. It first appeared in the work of the American mathematician Glen E. Baxter[1] in the realm of probability theory. Baxter's work was further explored from different angles by Gian-Carlo Rota,[2][3][4] Pierre Cartier,[5] and Frederic V. Atkinson,[6] among others. Baxter's derivation of the identity that later bore his name emanated from some of the fundamental results of the famous probabilist Frank Spitzer in random walk theory.[7][8]
In the 1980s, the Rota-Baxter operator of weight 0 in the context of Lie algebras was rediscovered as the operator form of the classical Yang–Baxter equation,[9] named after the well-known physicists Chen-Ning Yang and Rodney Baxter.
The study of Rota–Baxter algebras experienced a renaissance this century, beginning with several developments in the algebraic approach to renormalization of perturbative quantum field theory,[10] dendriform algebras, associative analogues of the classical Yang–Baxter equation,[11] and mixable shuffle product constructions.[12]
Definition and first properties
Let k be a commutative ring and let $\lambda \in k$ be given. A linear operator R on a k-algebra A is called a Rota–Baxter operator of weight $\lambda $ if it satisfies the Rota–Baxter relation of weight $\lambda $:
$R(x)R(y)=R(R(x)y)+R(xR(y))+\lambda R(xy)$
for all $x,y\in A$. Then the pair $(A,R)$ or simply A is called a Rota–Baxter algebra of weight $\lambda $. In some literature, $\theta =-\lambda $ is used in which case the above equation becomes
$R(x)R(y)+\theta R(xy)=R(R(x)y)+R(xR(y)),$
called the Rota-Baxter equation of weight $\theta $. The terms Baxter operator algebra and Baxter algebra are also used.
Let $R$ be a Rota–Baxter operator of weight $\lambda $. Then $-\lambda Id-R$ is also a Rota–Baxter operator of weight $\lambda $. Further, for $\mu $ in k, $\mu R$ is a Rota–Baxter operator of weight $\mu \lambda $.
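A standard discrete example of a weight-1 Rota–Baxter operator is the strict partial-sum operator on finite sequences with pointwise multiplication. The sketch below checks the weight-1 identity numerically on random sequences:

```python
import random

def rb(a):
    # strict partial sums: R(a)_n = a_0 + ... + a_{n-1}, with R(a)_0 = 0
    out, s = [], 0
    for v in a:
        out.append(s)
        s += v
    return out

def mul(a, b):
    # pointwise product of sequences
    return [x * y for x, y in zip(a, b)]

random.seed(0)
a = [random.randint(-5, 5) for _ in range(8)]
b = [random.randint(-5, 5) for _ in range(8)]

# R(x)R(y) = R(R(x)y) + R(xR(y)) + R(xy), i.e. lambda = 1
lhs = mul(rb(a), rb(b))
rhs = [p + q + r for p, q, r in zip(rb(mul(rb(a), b)),
                                    rb(mul(a, rb(b))),
                                    rb(mul(a, b)))]
assert lhs == rhs
```

The identity is just the decomposition of the double sum over pairs (j, k) with j, k < n into the regions j < k, k < j, and j = k.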
Examples
Integration by parts
Integration by parts is an example of a Rota–Baxter algebra of weight 0. Let $C(R)$ be the algebra of continuous functions from the real line to the real line, and let $f(x)\in C(R)$ be a continuous function. Define integration as the Rota–Baxter operator
$I(f)(x)=\int _{0}^{x}f(t)dt\;.$
Let G(x) = I(g)(x) and F(x) = I(f)(x). Then the formula for integration by parts can be written in terms of these variables as
$F(x)G(x)=\int _{0}^{x}f(t)G(t)dt+\int _{0}^{x}F(t)g(t)dt\;.$
In other words
$I(f)(x)I(g)(x)=I(fI(g)(t))(x)+I(I(f)(t)g)(x)\;,$
which shows that I is a Rota–Baxter operator of weight 0.
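The weight-0 identity can be verified symbolically. The sketch below (using SymPy, with sin and exp as arbitrary test functions) checks it:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

def Int(h):
    # the Rota–Baxter operator: definite integration from 0 to x
    return sp.integrate(h.subs(x, t), (t, 0, x))

f, g = sp.sin(x), sp.exp(x)        # any continuous functions will do
lhs = Int(f) * Int(g)
rhs = Int(f * Int(g)) + Int(Int(f) * g)
assert sp.simplify(lhs - rhs) == 0
```

Note the absence of the λ·R(xy) term: for integration the "diagonal" contribution has measure zero, which is exactly why the weight is 0 rather than 1 as in the discrete partial-sum case.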
Spitzer identity
The Spitzer identity is named after the American mathematician Frank Spitzer. It is regarded as a remarkable stepping stone in the theory of sums of independent random variables in fluctuation theory of probability. It can naturally be understood in terms of Rota–Baxter operators.
Bohnenblust–Spitzer identity
Notes
1. Baxter, G. (1960). "An analytic problem whose solution follows from a simple algebraic identity". Pacific J. Math. 10 (3): 731–742. doi:10.2140/pjm.1960.10.731. MR 0119224.
2. Rota, G.-C. (1969). "Baxter algebras and combinatorial identities, I, II". Bull. Amer. Math. Soc. 75 (2): 325–329. doi:10.1090/S0002-9904-1969-12156-7.; ibid. 75, 330–334, (1969). Reprinted in: Gian-Carlo Rota on Combinatorics: Introductory papers and commentaries, J. P. S. Kung Ed., Contemp. Mathematicians, Birkhäuser Boston, Boston, MA, 1995.
3. G.-C. Rota, Baxter operators, an introduction, In: Gian-Carlo Rota on Combinatorics, Introductory papers and commentaries, J.P.S. Kung Ed., Contemp. Mathematicians, Birkhäuser Boston, Boston, MA, 1995.
4. G.-C. Rota and D. Smith, Fluctuation theory and Baxter algebras, Instituto Nazionale di Alta Matematica, IX, 179–201, (1972). Reprinted in: Gian-Carlo Rota on Combinatorics: Introductory papers and commentaries, J. P. S. Kung Ed., Contemp. Mathematicians, Birkhäuser Boston, Boston, MA, 1995.
5. Cartier, P. (1972). "On the structure of free Baxter algebras". Advances in Mathematics. 9 (2): 253–265. doi:10.1016/0001-8708(72)90018-7.
6. Atkinson, F. V. (1963). "Some aspects of Baxter's functional equation". J. Math. Anal. Appl. 7: 1–30. doi:10.1016/0022-247X(63)90075-1.
7. Spitzer, F. (1956). "A combinatorial lemma and its application to probability theory". Trans. Amer. Math. Soc. 82 (2): 323–339. doi:10.1090/S0002-9947-1956-0079851-X.
8. Spitzer, F. (1976). "Principles of random walks". Graduate Texts in Mathematics. 34 (Second ed.). New York, Heidelberg: Springer-Verlag. {{cite journal}}: Cite journal requires |journal= (help)
9. Semenov-Tian-Shansky, M.A. (1983). "What is a classical r-matrix?". Func. Anal. Appl. 17 (4): 259–272. doi:10.1007/BF01076717. S2CID 120134842.
10. Connes, A.; Kreimer, D. (2000). "Renormalization in quantum field theory and the Riemann-Hilbert problem. I. The Hopf algebra structure of graphs and the main theorem". Comm. Math. Phys. 210 (1): 249–273. arXiv:hep-th/9912092. Bibcode:2000CMaPh.210..249C. doi:10.1007/s002200050779. S2CID 17448874.
11. Aguiar, M. (2000). "Infinitesimal Hopf algebras". Contemp. Math. Contemporary Mathematics. 267: 1–29. doi:10.1090/conm/267/04262. ISBN 9780821821268.
12. Guo, L.; Keigher, W. (2000). "Baxter algebras and shuffle products". Advances in Mathematics. 150: 117–149. arXiv:math/0407155. doi:10.1006/aima.1999.1858.
External links
• Li Guo. WHAT IS...a Rota-Baxter Algebra? Notices of the AMS, December 2009, Volume 56 Issue 11
Roth's theorem on arithmetic progressions
Roth's theorem on arithmetic progressions is a result in additive combinatorics concerning the existence of arithmetic progressions in subsets of the natural numbers. It was first proven by Klaus Roth in 1953.[1] Roth's theorem is a special case of Szemerédi's theorem for the case $k=3$.
For Roth's theorem on Diophantine approximation of algebraic numbers, see Roth's theorem.
Statement
A subset A of the natural numbers is said to have positive upper density if
$\limsup _{n\to \infty }{\frac {|A\cap \{1,2,3,\dotsc ,n\}|}{n}}>0$.
Roth's theorem on arithmetic progressions (infinite version): A subset of the natural numbers with positive upper density contains a 3-term arithmetic progression.
An alternate, more qualitative, formulation of the theorem is concerned with the maximum size of a Salem–Spencer set which is a subset of $[N]=\{1,\dots ,N\}$. Let $r_{3}([N])$ be the size of the largest subset of $[N]$ which contains no 3-term arithmetic progression.
Roth's theorem on arithmetic progressions (finitary version): $r_{3}([N])=o(N)$.
Improving upper and lower bounds on $r_{3}([N])$ is still an open research problem.
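For small N, the quantity r₃([N]) can be computed by exhaustive search. The sketch below recovers the first few known values (OEIS sequence A003002):

```python
from itertools import combinations

def has_3ap(s):
    # a 3-term AP a < b < c satisfies a + c == 2b
    return any(a + c == 2 * b for a, b, c in combinations(sorted(s), 3))

def r3(n):
    # size of the largest 3-AP-free subset of {1, ..., n}, by brute force
    for size in range(n, 0, -1):
        if any(not has_3ap(c) for c in combinations(range(1, n + 1), size)):
            return size
    return 0

values = [r3(n) for n in range(1, 10)]
assert values == [1, 2, 2, 3, 4, 4, 4, 4, 5]
```

The search is exponential in N, which is precisely why asymptotic bounds of the kind discussed below are needed.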
History
The first result in this direction was Van der Waerden's theorem in 1927, which states that for sufficiently large $N$, coloring the integers $\{1,\dots ,N\}$ with $r$ colors must produce a monochromatic $k$-term arithmetic progression.[2]
Later, in 1936, Erdős and Turán conjectured the much stronger result that any subset of the integers with positive density contains arbitrarily long arithmetic progressions. In 1942, Raphaël Salem and Donald C. Spencer provided a construction of a 3-AP-free set (i.e. a set with no 3-term arithmetic progressions) of size ${\frac {N}{e^{O(\log N/\log \log N)}}}$,[3] disproving an additional conjecture of Erdős and Turán that $r_{3}([N])=N^{1-\delta }$ for some $\delta >0$.[4]
In 1953, Roth partially resolved the initial conjecture by proving, using Fourier-analytic methods, that any set of positive density must contain an arithmetic progression of length 3. Eventually, in 1975, Szemerédi proved Szemerédi's theorem using combinatorial techniques, resolving the original conjecture in full.
Proof techniques
The original proof given by Roth used Fourier-analytic methods. Later, another proof was given using Szemerédi's regularity lemma.
Proof sketch via Fourier analysis
In 1953, Roth used Fourier analysis to prove an upper bound of $r_{3}([N])=O\left({\frac {N}{\log \log N}}\right)$. Below is a sketch of this proof.
Define the Fourier transform of a function $f:\mathbb {Z} \rightarrow \mathbb {C} $ to be the function ${\widehat {f}}$ satisfying
${\widehat {f}}(\theta )=\sum _{x\in \mathbb {Z} }f(x)e(-x\theta )$,
where $e(t)=e^{2\pi it}$.
Let $A$ be a 3-AP-free subset of $\{1,\dots ,N\}$. The proof proceeds in 3 steps.
1. Show that $A$ admits a large Fourier coefficient.
2. Deduce that there exists a subprogression of $\{1,\dots ,N\}$ such that $A$ has a density increment when restricted to this subprogression.
3. Iterate Step 2 to obtain an upper bound on $|A|$.
Step 1
For functions, $f,g,h:\mathbb {Z} \rightarrow \mathbb {C} ,$ define
$\Lambda (f,g,h)=\sum _{x,y\in \mathbb {Z} }f(x)g(x+y)h(x+2y)$
Counting Lemma: Let $f,g:\mathbb {Z} \rightarrow \mathbb {C} $ satisfy $\sum _{n\in \mathbb {Z} }|f(n)|^{2},\sum _{n\in \mathbb {Z} }|g(n)|^{2}\leq M$. Define $\Lambda _{3}(f)=\Lambda (f,f,f)$. Then $|\Lambda _{3}(f)-\Lambda _{3}(g)|\leq 3M\|{\widehat {f-g}}\|_{\infty }$.
The counting lemma tells us that if the Fourier Transforms of $f$ and $g$ are "close", then the number of 3-term arithmetic progressions between the two should also be "close." Let $\alpha =|A|/N$ be the density of $A$. Define the functions $f=\mathbf {1} _{A}$ (i.e. the indicator function of $A$), and $g=\alpha \cdot \mathbf {1} _{[N]}$. Step 1 can then be deduced by applying the Counting Lemma to $f$ and $g$, which tells us that there exists some $\theta $ such that
$\left|\sum _{n=1}^{N}(1_{A}-\alpha )(n)e(\theta n)\right|\geq {\frac {\alpha ^{2}}{10}}N$.
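For indicator functions, the quantity Λ₃(1_A) defined above simply counts pairs (x, y) with x, x + y, x + 2y all in A, including the trivial progressions with y = 0 and the reversed ones with y < 0. A small sketch:

```python
def lambda3(A, N):
    # Lambda_3(1_A): pairs (x, y) in Z^2 with x, x+y, x+2y all in A;
    # since A is contained in [N], it suffices to let y range over -N..N
    A = set(A)
    return sum(1 for x in A for y in range(-N, N + 1)
               if x + y in A and x + 2 * y in A)

assert lambda3({1, 2, 4, 5}, 5) == 4       # 3-AP-free: only the trivial y = 0 pairs
assert lambda3({1, 2, 3}, 5) == 3 + 2      # trivial pairs plus (1,2,3) and its reverse
```

Because a 3-AP-free set contributes only the |A| trivial pairs, Λ₃(1_A) is far smaller than the value Λ₃(g) for the comparison function g, and the counting lemma converts this gap into a large Fourier coefficient.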
Step 2
Given the $\theta $ from step 1, we first show that it's possible to split up $[N]$ into relatively large subprogressions such that the character $x\mapsto e(x\theta )$ is roughly constant on each subprogression.
Lemma 1: Let $0<\eta <1,\theta \in \mathbb {R} $. Assume that $N>C\eta ^{-6}$ for a universal constant $C$. Then it is possible to partition $[N]$ into arithmetic progressions $P_{i}$ with length $N^{1/3}\leq |P_{i}|\leq 2N^{1/3}$ such that $\sup _{x,y\in P_{i}}|e(x\theta )-e(y\theta )|<\eta $ for all $i$.
Next, we apply Lemma 1 to obtain a partition into subprogressions. We then use the fact that $\theta $ produced a large coefficient in step 1 to show that one of these subprogressions must have a density increment:
Lemma 2: Let $A$ be a 3-AP-free subset of $[N]$, with $|A|=\alpha N$ and $N>C\alpha ^{-12}$. Then, there exists a subprogression $P\subset [N]$ such that $|P|\geq N^{1/3}$ and $|A\cap P|\geq (\alpha +\alpha ^{2}/40)|P|$.
Step 3
We now iterate step 2. Let $\alpha _{t}$ be the density of $A$ after the $t$th iteration. We have $\alpha _{0}=\alpha $ and $\alpha _{t+1}\geq \alpha _{t}+\alpha _{t}^{2}/40.$ First, observe that $\alpha $ doubles (i.e. we reach $T$ such that $\alpha _{T}\geq 2\alpha _{0}$) after at most $40/\alpha +1$ steps. It doubles again (i.e. we reach $\alpha _{T}\geq 4\alpha _{0}$) after at most $20/\alpha +1$ further steps. Since $\alpha \leq 1$, this process must terminate after at most $O(1/\alpha )$ steps.
Let $N_{t}$ be the size of our current progression after $t$ iterations. By Lemma 2, we can always continue the process whenever $N_{t}\geq C\alpha _{t}^{-12},$ and thus when the process terminates we have that $N_{t}\leq C\alpha _{t}^{-12}\leq C\alpha ^{-12}.$ Also, note that when we pass to a subprogression, its length drops to no less than the cube root of the previous length. Therefore
$N\leq N_{t}^{3^{t}}\leq (C\alpha ^{-12})^{3^{O(1/\alpha )}}=e^{e^{O(1/\alpha )}}.$
Therefore $\alpha =O(1/\log \log N),$ so $|A|=O\left({\frac {N}{\log \log N}}\right),$ as desired. $\blacksquare $
Unfortunately, this technique does not generalize directly to larger arithmetic progressions to prove Szemerédi's theorem. An extension of this proof eluded mathematicians for decades until 1998, when Timothy Gowers developed the field of higher-order Fourier analysis specifically to generalize the above proof to prove Szemerédi's theorem.[5]
Proof sketch via graph regularity
Below is an outline of a proof using the Szemerédi regularity lemma.
Let $G$ be a graph and $X,Y\subseteq V(G)$. We call $(X,Y)$ an $\epsilon $-regular pair if for all $A\subset X,B\subset Y$ with $|A|\geq \epsilon |X|,|B|\geq \epsilon |Y|$, one has $|d(A,B)-d(X,Y)|\leq \epsilon $.
A partition ${\mathcal {P}}=\{V_{1},\ldots ,V_{k}\}$ of $V(G)$ is an $\epsilon $-regular partition if
$\sum _{(i,j)\in [k]^{2},(V_{i},V_{j}){\text{ not }}\epsilon {\text{-regular}}}|V_{i}||V_{j}|\leq \epsilon |V(G)|^{2}$.
Then the Szemerédi regularity lemma says that for every $\epsilon >0$, there exists a constant $M$ such that every graph has an $\epsilon $-regular partition into at most $M$ parts.
We can also prove that triangles between $\epsilon $-regular sets of vertices must come along with many other triangles. This is known as the triangle counting lemma.
Triangle Counting Lemma: Let $G$ be a graph and $X,Y,Z$ be subsets of the vertices of $G$ such that $(X,Y),(Y,Z),(Z,X)$ are all $\epsilon $-regular pairs for some $\epsilon >0$. Let $d_{XY},d_{XZ},d_{YZ}$ denote the edge densities $d(X,Y),d(X,Z),d(Y,Z)$ respectively. If $d_{XY},d_{XZ},d_{YZ}\geq 2\epsilon $, then the number of triples $(x,y,z)\in X\times Y\times Z$ such that $x,y,z$ form a triangle in $G$ is at least
$(1-2\epsilon )(d_{XY}-\epsilon )(d_{XZ}-\epsilon )(d_{YZ}-\epsilon )\cdot |X||Y||Z|$.
Using the triangle counting lemma and the Szemerédi regularity lemma, we can prove the triangle removal lemma, a special case of the graph removal lemma.[6]
Triangle Removal Lemma: For all $\epsilon >0$, there exists $\delta >0$ such that any graph on $n$ vertices with at most $\delta n^{3}$ triangles can be made triangle-free by removing at most $\epsilon n^{2}$ edges.
This has an interesting corollary pertaining to graphs $G$ on $N$ vertices where every edge of $G$ lies in a unique triangle. Specifically, all such graphs must have $o(N^{2})$ edges.
Take a set $A$ with no 3-term arithmetic progressions. Now, construct a tripartite graph $G$ whose parts $X,Y,Z$ are all copies of $\mathbb {Z} /(2N+1)\mathbb {Z} $. Connect a vertex $x\in X$ to a vertex $y\in Y$ if $y-x\in A$. Similarly, connect $z\in Z$ with $y\in Y$ if $z-y\in A$. Finally, connect $x\in X$ with $z\in Z$ if $(z-x)/2\in A$.
This construction is set up so that if $x,y,z$ form a triangle, then we get elements $y-x,{\frac {z-x}{2}},z-y$ that all belong to $A$. These numbers form an arithmetic progression in the listed order. The assumption on $A$ then tells us this progression must be trivial: the elements listed above are all equal. But this condition is equivalent to the assertion that $x,y,z$ is an arithmetic progression in $\mathbb {Z} /(2N+1)\mathbb {Z} $. Consequently, every edge of $G$ lies in exactly one triangle. The desired conclusion follows. $\blacksquare $
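The unique-triangle property of this construction can be verified directly for a small 3-AP-free set. The sketch below checks it for the edges between X and Y (the other two edge classes are symmetric); the set {1, 2, 4, 5} ⊂ [5] is 3-AP-free, and remains so modulo 11:

```python
def xy_edges_in_unique_triangle(A, N):
    m = 2 * N + 1
    Am = {a % m for a in A}          # A viewed inside Z/(2N+1)Z
    inv2 = pow(2, -1, m)             # 2 is invertible modulo the odd number m
    for x in range(m):               # x in X
        for y in range(m):           # y in Y
            if (y - x) % m not in Am:
                continue             # (x, y) is not an edge
            # z in Z completes a triangle iff z - y and (z - x)/2 both lie in A
            tri = sum(1 for z in range(m)
                      if (z - y) % m in Am and ((z - x) * inv2) % m in Am)
            if tri != 1:
                return False
    return True

assert xy_edges_in_unique_triangle({1, 2, 4, 5}, 5)
```

Each edge (x, y) lies in exactly one triangle because the progression (y − x, (z − x)/2, z − y) must be constant, which pins down z = 2y − x.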
Extensions and generalizations
Szemerédi's theorem resolved the original conjecture and generalized Roth's theorem to arithmetic progressions of arbitrary length. Since then it has been extended in multiple fashions to create new and interesting results.
Furstenberg and Katznelson[7] used ergodic theory to prove a multidimensional version and Leibman and Bergelson[8] extended it to polynomial progressions as well. Later, Green and Tao proved the Green–Tao theorem which says that the prime numbers contain arbitrarily long arithmetic progressions. Since the prime numbers are a subset of density 0, they introduced a "relative" Szemerédi theorem which applies to subsets with density 0 that satisfy certain pseudorandomness conditions. Later on Conlon, Fox, and Zhao[9][10] strengthened this theorem by weakening the necessary pseudorandomness condition. In 2020, Bloom and Sisask[11] proved that any set $A$ such that $\sum _{n\in A}{\frac {1}{n}}$ diverges must contain arithmetic progressions of length 3; this is the first non-trivial case of another conjecture of Erdős postulating that any such set must in fact contain arbitrarily long arithmetic progressions.
Improving bounds
See also: Salem–Spencer set § Size, and Erdős conjecture on arithmetic progressions § Progress and related results
There has also been work done on improving the bound in Roth's theorem. The bound from the original proof of Roth's theorem showed that
$r_{3}([N])\leq c\cdot {\frac {N}{\log \log N}}$
for some constant $c$. Over the years this bound has been continually lowered by Szemerédi,[12] Heath-Brown,[13] Bourgain,[14][15] and Sanders.[16][17] The current (July 2020) best bound is due to Bloom and Sisask[11] who showed the existence of an absolute constant c>0 such that
$r_{3}([N])\leq {\frac {N}{(\log N)^{1+c}}}.$
In February 2023, a preprint by Kelley and Meka[18][19] gave a new bound of
$r_{3}([N])\leq 2^{-\Omega ((\log N)^{c})}\cdot N.$
Four days later, Bloom and Sisask simplified the result and improved it slightly to $r_{3}([N])\leq \exp(-c(\log N)^{1/11})N$.[20]
There has also been work done on the other end, constructing the largest set with no three-term arithmetic progressions. The best construction has barely been improved since 1946 when Behrend[21] improved on the initial construction by Salem and Spencer and proved
$r_{3}([N])\geq N\exp(-c{\sqrt {\log N}})$.
Because the construction has not been improved upon in over 70 years, it is conjectured that Behrend's set is asymptotically very close in size to the largest possible set with no three-term progressions.[11] If correct, the Kelley–Meka bound would prove this conjecture.
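Behrend's construction can be sketched concretely: write numbers in base 2d + 1 with digits at most d, so that adding two such numbers (or doubling one) produces no carries, and keep only the numbers whose digit vectors lie on a fixed sphere. Since a sphere contains no three collinear points, the resulting set is 3-AP-free. A small illustration (the parameter choices are arbitrary):

```python
from collections import defaultdict
from itertools import combinations, product

def behrend_set(d, k):
    # digit vectors in {0,...,d}^k grouped by squared Euclidean norm;
    # base 2d+1 guarantees digitwise addition and doubling are carry-free
    shells = defaultdict(list)
    for v in product(range(d + 1), repeat=k):
        shells[sum(c * c for c in v)].append(v)
    best = max(shells.values(), key=len)        # the most populous sphere
    base = 2 * d + 1
    return sorted(sum(c * base**i for i, c in enumerate(v)) for v in best)

def has_3ap(s):
    ss = set(s)
    return any((a + c) % 2 == 0 and (a + c) // 2 in ss
               for a, c in combinations(sorted(ss), 2))

S = behrend_set(3, 4)
assert not has_3ap(S)
```

If x + z = 2y held for three distinct members, their digit vectors u, w, v would satisfy u + w = 2v with |u| = |w| = |v|, forcing u = w by strict convexity of the Euclidean norm.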
Roth's theorem in finite fields
As a variation, we can consider the analogous problem over finite fields. Consider the vector space $\mathbb {F} _{3}^{n}$ over the field with three elements, and let $r_{3}(\mathbb {F} _{3}^{n})$ be the size of the largest subset of $\mathbb {F} _{3}^{n}$ which contains no 3-term arithmetic progression. This problem is equivalent to the cap set problem, which asks for the largest subset of $\mathbb {F} _{3}^{n}$ such that no 3 points lie on a line. The cap set problem can be seen as a generalization of the card game Set.
In 1982, Brown and Buhler[22] were the first to show that $r_{3}(\mathbb {F} _{3}^{n})=o(3^{n}).$ In 1995, Roy Meshulam[23] used a technique similar to the Fourier-analytic proof of Roth's theorem to show that $r_{3}(\mathbb {F} _{3}^{n})=O\left({\frac {3^{n}}{n}}\right).$ This bound was improved to $O(3^{n}/n^{1+\epsilon })$ in 2012 by Bateman and Katz.[24]
In 2016, Ernie Croot, Vsevolod Lev, Péter Pál Pach, Jordan Ellenberg and Dion Gijswijt developed a new technique based on the polynomial method to prove that $r_{3}(\mathbb {F} _{3}^{n})=O(2.756^{n})$.[25][26][27]
The best known lower bound is approximately $2.218^{n}$, given in 2022 by Tyrrell.[28]
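For very small n the cap set problem can be solved by brute force, using the fact that in $\mathbb {F} _{3}^{n}$ three distinct points are collinear exactly when they sum to zero coordinatewise. The sketch below recovers the known values $r_{3}(\mathbb {F} _{3}^{1})=2$ and $r_{3}(\mathbb {F} _{3}^{2})=4$:

```python
from itertools import combinations, product

def cap3(n):
    # largest subset of F_3^n with no 3-term AP (equivalently, a maximal cap set)
    pts = list(product(range(3), repeat=n))
    def is_cap(s):
        # three distinct points of F_3^n form a line iff they sum to 0 mod 3
        return all(any((a[i] + b[i] + c[i]) % 3 for i in range(n))
                   for a, b, c in combinations(s, 3))
    for size in range(len(pts), 0, -1):
        if any(is_cap(s) for s in combinations(pts, size)):
            return size

assert [cap3(n) for n in (1, 2)] == [2, 4]
```

The search space grows doubly exponentially in n, so already n = 4 is out of reach for this naive method; the polynomial-method bound above is what controls the general case.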
Roth's theorem with popular differences
Another generalization of Roth's theorem shows that for positive density subsets, there not only exists a 3-term arithmetic progression, but that there exist many 3-APs all with the same common difference.
Roth's theorem with popular differences: For all $\epsilon >0$, there exists some $n_{0}=n_{0}(\epsilon )$ such that for every $n>n_{0}$ and $A\subset \mathbb {F} _{3}^{n}$ with $|A|=\alpha 3^{n},$ there exists some $y\neq 0$ such that $|\{x:x,x+y,x+2y\in A\}|\geq (\alpha ^{3}-\epsilon )3^{n}.$
If $A$ is chosen randomly from $\mathbb {F} _{3}^{n},$ then we would expect there to be $\alpha ^{3}3^{n}$ progressions for each value of $y$. The popular differences theorem thus states that for each $A$ with positive density, there is some $y$ such that the number of 3-APs with common difference $y$ is close to what we would expect.
This theorem was first proven by Green in 2005,[29] who gave a bound of $n_{0}={\text{tow}}((1/\epsilon )^{O(1)}),$ where ${\text{tow}}$ is the tower function. In 2019, Fox and Pham improved the bound to $n_{0}={\text{tow}}(O(\log {\frac {1}{\epsilon }})).$[30]
A corresponding statement is also true in $\mathbb {Z} $ for both 3-APs and 4-APs.[31] However, the claim has been shown to be false for 5-APs.[32]
References
1. Roth, Klaus (1953). "On certain sets of integers". Journal of the London Mathematical Society. 28 (1): 104–109. doi:10.1112/jlms/s1-28.1.104.
2. van der Waerden, B. L. (1927). "Beweis einer Baudetschen Vermutung". Nieuw. Arch. Wisk. 15: 212–216.
3. Salem, Raphaël; Spencer, Donald C. (1942). "On sets of integers which contain no three terms in arithmetical progression". Proceedings of the National Academy of Sciences of the United States of America. 28 (12): 561–563. Bibcode:1942PNAS...28..561S. doi:10.1073/pnas.28.12.561. MR 0007405. PMC 1078539. PMID 16588588.
4. Erdös, Paul; Turán, Paul (1936). "On Some Sequences of Integers". Journal of the London Mathematical Society. 11 (4): 261–264. doi:10.1112/jlms/s1-11.4.261. MR 1574918.
5. Gowers, W. T. (1998). "A new proof of Szemerédi's theorem for arithmetic progressions of length four". Geometric and Functional Analysis. 8 (3): 529–551. doi:10.1007/s000390050065.
6. Fox, Jacob (2011), "A new proof of the graph removal lemma", Annals of Mathematics, Second Series, 174 (1): 561–579, arXiv:1006.1300, doi:10.4007/annals.2011.174.1.17, MR 2811609, S2CID 8250133
7. Furstenberg, Hillel; Katznelson, Yitzhak (1978). "An ergodic Szemerédi theorem for commuting transformations". Journal d'Analyse Mathématique. 38 (1): 275–291. doi:10.1007/BF02790016. MR 0531279. S2CID 123386017.
8. Bergelson, Vitaly; Leibman, Alexander (1996). "Polynomial extensions of van der Waerden's and Szemerédi's theorems". Journal of the American Mathematical Society. 9 (3): 725–753. doi:10.1090/S0894-0347-96-00194-4. MR 1325795.
9. Conlon, David; Fox, Jacob; Zhao, Yufei (2015). "A relative Szemerédi theorem". Geometric and Functional Analysis. 25 (3): 733–762. arXiv:1305.5440. doi:10.1007/s00039-015-0324-9. MR 3361771.
10. Zhao, Yufei (2014). "An arithmetic transference proof of a relative Szemerédi theorem". Mathematical Proceedings of the Cambridge Philosophical Society. 156 (2): 255–261. arXiv:1307.4959. Bibcode:2014MPCPS.156..255Z. doi:10.1017/S0305004113000662. MR 3177868. S2CID 119673319.
11. Thomas F. Bloom, Olof Sisask, Breaking the logarithmic barrier in Roth's theorem on arithmetic progressions, arXiv:2007.03528, 2020
12. Szemerédi, Endre (1990). "Integer sets containing no arithmetic progressions". Acta Mathematica Hungarica. 56 (1–2): 155–158. doi:10.1007/BF01903717. MR 1100788.
13. Heath-Brown, Roger (1987). "Integer sets containing no arithmetic progressions". Journal of the London Mathematical Society. 35 (3): 385–394. doi:10.1112/jlms/s2-35.3.385. MR 0889362.
14. Bourgain, Jean (1999). "On triples in arithmetic progression". Geometric and Functional Analysis. 9 (5): 968–984. doi:10.1007/s000390050105. MR 1726234. S2CID 392820.
15. Bourgain, Jean (2008). "Roth's theorem on progressions revisited". Journal d'Analyse Mathématique. 104 (1): 155–192. doi:10.1007/s11854-008-0020-x. MR 2403433. S2CID 16985451.
16. Sanders, Tom (2012). "On certain other sets of integers". Annals of Mathematics. 185 (1): 53–82. arXiv:1007.5444. doi:10.1007/s11854-012-0003-9. MR 2892617. S2CID 119727492.
17. Sanders, Tom (2011). "On Roth's theorem on progressions". Annals of Mathematics. 174 (1): 619–636. arXiv:1011.0104. doi:10.4007/annals.2011.174.1.20. MR 2811612. S2CID 53331882.
18. Kelley, Zander; Meka, Raghu (2023-02-10). "Strong Bounds for 3-Progressions". arXiv:2302.05537 [math.NT].
19. Sloman, Leila (2023-03-21). "Surprise Computer Science Proof Stuns Mathematicians". Quanta Magazine.
20. Bloom, Thomas F.; Sisask, Olof (2023-02-14). "The Kelley--Meka bounds for sets free of three-term arithmetic progressions". arXiv:2302.07211 [math.NT].
21. Behrend, F. A. (1946). "On sets of integers which contain no three terms in arithmetical progression". Proceedings of the National Academy of Sciences of the United States of America. 32 (12): 331–332. Bibcode:1946PNAS...32..331B. doi:10.1073/pnas.32.12.331. PMC 1078964. PMID 16578230.
22. Brown, T. C.; Buhler, J. P. (1982). "A density version of a geometric Ramsey theorem". Journal of Combinatorial Theory. Series A. 32 (1): 20–34. doi:10.1016/0097-3165(82)90062-0.
23. Meshulam, Roy (1995). "On subsets of finite abelian groups with no 3-term arithmetic progressions". Journal of Combinatorial Theory. Series A. 71 (1): 168–172. doi:10.1016/0097-3165(95)90024-1.
24. Bateman, M.; Katz, N. (2012). "New bounds on cap sets". Journal of the American Mathematical Society. 25 (2): 585–613. doi:10.1090/S0894-0347-2011-00725-X.
25. Ellenberg, Jordan S.; Gijswijt, Dion (2016). "On large subsets of $\mathbb {F} _{q}^{n}$ with no three-term arithmetic progression". Annals of Mathematics, Second Series. 185 (1): 339–343. arXiv:1605.09223. doi:10.4007/annals.2017.185.1.8. S2CID 119683140.
26. Croot, Ernie; Lev, Vsevolod F.; Pach, Péter Pál (2017). "Progression-free sets in $\mathbb {Z} _{4}^{n}$ are exponentially small". Annals of Mathematics. 2nd series. 185 (1): 331–337. arXiv:1605.01506. doi:10.4007/annals.2017.185.1.7.
27. Klarreich, Erica (May 31, 2016). "Simple Set Game Proof Stuns Mathematicians". Quanta.
28. Tyrrell, Fred (2022). "New lower bounds for cap sets". arXiv:2209.10045 [math.CO].
29. Green, Ben (2005). "A Szemerédi-type regularity lemma in abelian groups, with applications". Geometric and Functional Analysis. 15 (2): 340–376. doi:10.1007/s00039-005-0509-8. MR 2153903.
30. Fox, Jacob; Pham, Huy Tuan (April 2021). "Popular progression differences in vector spaces". International Mathematics Research Notices. 2021 (7): 5261–5289. arXiv:1708.08482. Bibcode:2017arXiv170808482F. doi:10.1093/imrn/rny240.
31. Green, Ben; Tao, Terence (2010). "An Arithmetic Regularity Lemma, an Associated Counting Lemma, and Applications". An Irregular Mind. Bolyai Society Mathematical Studies. Vol. 21. Bolyai Society Mathematical Studies. pp. 261–334. arXiv:1002.2028. Bibcode:2010arXiv1002.2028G. doi:10.1007/978-3-642-14444-8_7. ISBN 978-3-642-14443-1. S2CID 115174575.
32. Bergelson, Vitaly; Host, Bernard; Kra, Bryna (2005). "Multiple recurrence and nilsequences. With an appendix by Imre Ruzsa". Inventiones Mathematicae. 160 (2): 261–303. doi:10.1007/s00222-004-0428-6. S2CID 1380361.
External links
• Edmonds, Chelsea; Koutsoukou-Argyraki, Angeliki; Paulson, Lawrence C. Roth's Theorem on Arithmetic Progressions (Formal proof development in Isabelle/HOL, Archive of Formal Proofs)
Alice Roth
Alice Roth (6 February 1905 – 22 July 1977)[1] was a Swiss mathematician who invented the Swiss cheese set and made significant contributions to approximation theory. She was born, lived and died in Bern, Switzerland.
Life
Alice attended the Höhere Töchterschule of Zürich, a municipal school for higher education for girls. After graduation in 1924 she studied mathematics, physics and astronomy at ETH Zurich under George Pólya. She graduated with a diploma in 1930. Her master's thesis was titled "Extension of Weierstrass's Approximation Theorem to the complex plane and to an infinite interval". After that, she was a teacher at multiple high schools for girls in the Zurich area while continuing to work with Pólya at ETH. In 1938 she became the second woman to graduate with a PhD from ETH.[2] Her PhD thesis was titled "Properties of approximations and radial limits of meromorphic and entire functions" and was so well regarded that it received a monetary prize and the ETH silver medal. Her supervisors were George Pólya and Heinz Hopf.
From 1940 she was mathematics and physics teacher at Humboldtianum in Bern, a private school. It was only after her retirement in 1971 that she returned to mathematical research, again in the area of complex approximation. She published three papers on her own, as well as a shared paper with Paul Gauthier of the University of Montreal and Harvard University professor Joseph L. Walsh. In 1975, at the age of 70, she was invited to give a public lecture at the University of Montreal.
In 1976 she was diagnosed with cancer, and she died the next year.
Contribution to mathematics
One of the main results of Roth's 1938 thesis was an example of a compact set on which not every continuous function can be approximated uniformly by rational functions. This set, now known as the "Swiss cheese,"[3] was forgotten and independently rediscovered in 1952 in Russia by Mergelyan, and proper credit was restored by 1969.
The following excerpt by her former student, Peter Wilker, appeared in an obituary he wrote after her death: "In Switzerland, as elsewhere, women mathematicians are few and far between.... Alice Roth's dissertation was awarded a medal from the ETH, and appeared shortly after its completion in a Swiss mathematical journal....One year later war broke out, the world had other worries than mathematics, and Alice Roth's work was simply forgotten. So completely forgotten that around 1950 a Russian mathematician re-discovered similar results without having the slightest idea that a young Swiss woman mathematician had published the same ideas more than a decade before he did. However, her priority was recognized."[4]
Roth developed other important results during her brief return to research at the end of her life: "Roth's past as well as future work was to have a strong and lasting influence on mathematicians working in this area [rational approximation theory]. Her Swiss cheese has been modified (to an entire variety of cheeses)[5].... Roth's Fusion Lemma, which appeared in her 1976 paper[6]...influenced a new generation of mathematicians worldwide."[4]
Lecture series and movie
ETH Zürich's Department of Mathematics now sponsors the annual Alice Roth Lecture Series to honor women with outstanding achievements in mathematics.[7] The inaugural lecture was delivered in March 2022 by number theorist and later Fields medalist Maryna Viazovska, who spoke on "Fourier interpolation pairs and their applications".[8] The Spring 2023 lecture will be given by harmonic analyst Gigliola Staffilani.
ETH Zürich has also produced an 8 minute documentary movie about Alice Roth's life and work.[9]
References
1. "Alice Roth". Agnesscott.edu. Retrieved 2015-05-15.
2. "The Mathematics Genealogy Project - Alice Roth". Genealogy.math.uni-bielefeld.de. Archived from the original on 2015-05-18. Retrieved 2015-05-15.
3. "Exercise on Alice Roth's Swiss cheese" (PDF). Math.tamu.edu. Retrieved 2015-05-16.
4. Ulrich Daepp, Paul Gauthier, Pamela Gorkin, and Gerald Schmieder, "Alice in Switzerland: The Life and Mathematics of Alice Roth," Mathematics Intelligencer, Vol. 27, No. 1 (2005), 41–54.
5. Joel Feinstein. "Classicalisation of Swiss Cheeses" (PDF). Math.chalmers.se. Retrieved 2015-05-16.
6. Roth, Alice (1 December 1978). "Uniform Approximation by Meromorphic Functions on Closed Sets with Continuous extension into the Boundary". Canadian Journal of Mathematics. 30 (6): 1243–1255. doi:10.4153/CJM-1978-103-4. S2CID 124804743.
7. Alice Roth Lectures, Department of Mathematics, ETH Zürich
8. Alice Roth Lecture 2022 – Maryna Viazovska, Apr 6, 2022. Lecture was given on 16 March 2022.
9. Alice Roth, Pioneer Mathematician, ETH Zürich, March 17, 2022
External links
• Alice Roth portrait, a video from ETH Zurich department of Mathematics.
Rothberger space
In mathematics, a Rothberger space is a topological space that satisfies a certain basic selection principle. A Rothberger space is a space in which for every sequence of open covers ${\mathcal {U}}_{1},{\mathcal {U}}_{2},\ldots $ of the space there are sets $U_{1}\in {\mathcal {U}}_{1},U_{2}\in {\mathcal {U}}_{2},\ldots $ such that the family $\{U_{n}:n\in \mathbb {N} \}$ covers the space.
History
In 1938, Fritz Rothberger introduced his property known as $C''$.[1]
Characterizations
Combinatorial characterization
For subsets of the real line, the Rothberger property can be characterized using continuous functions into the Baire space $\mathbb {N} ^{\mathbb {N} }$. A subset $A$ of $\mathbb {N} ^{\mathbb {N} }$ is guessable if there is a function $g\in \mathbb {N} ^{\mathbb {N} }$ such that the sets $\{n:f(n)=g(n)\}$ are infinite for all functions $f\in A$. A subset of the real line is Rothberger iff every continuous image of that space into the Baire space is guessable. In particular, every subset of the real line of cardinality less than $\mathrm {cov} ({\mathcal {M}})$[2] is Rothberger.
Topological game characterization
Let $X$ be a topological space. The Rothberger game ${\text{G}}_{1}(\mathbf {O} ,\mathbf {O} )$ played on $X$ is a game with two players Alice and Bob.
1st round: Alice chooses an open cover ${\mathcal {U}}_{1}$ of $X$. Bob chooses a set $U_{1}\in {\mathcal {U}}_{1}$.
2nd round: Alice chooses an open cover ${\mathcal {U}}_{2}$ of $X$. Bob chooses a set $U_{2}\in {\mathcal {U}}_{2}$.
etc.
If the family $\{U_{n}:n\in \mathbb {N} \}$ is a cover of the space $X$, then Bob wins the game ${\text{G}}_{1}(\mathbf {O} ,\mathbf {O} )$. Otherwise, Alice wins.
A player has a winning strategy if he knows how to play in order to win the game ${\text{G}}_{1}(\mathbf {O} ,\mathbf {O} )$ (formally, a winning strategy is a function).
• A topological space is Rothberger iff Alice has no winning strategy in the game ${\text{G}}_{1}(\mathbf {O} ,\mathbf {O} )$ played on this space.[3]
• Let $X$ be a metric space. Bob has a winning strategy in the game ${\text{G}}_{1}(\mathbf {O} ,\mathbf {O} )$ played on the space $X$ iff the space $X$ is countable.[3][4][5]
Properties
• Every countable topological space is Rothberger.
• Every Luzin set is Rothberger.[1]
• Every Rothberger subset of the real line has strong measure zero.[1]
• In the Laver model for the consistency of the Borel conjecture, every Rothberger subset of the real line is countable.
References
1. Rothberger, Fritz (1938-01-01). "Eine Verschärfung der Eigenschaft C". Fundamenta Mathematicae (in German). 30 (1): 50–55. doi:10.4064/fm-30-1-50-55. ISSN 0016-2736.
2. Bartoszynski, Tomek; Judah, Haim (1995-08-15). Set Theory: On the Structure of the Real Line. Taylor & Francis. ISBN 9781568810447.
3. Pawlikowski, Janusz. "Undetermined sets of point-open games". Fundamenta Mathematicae. 144 (3). ISSN 0016-2736.
4. Scheepers, Marion (1995-01-01). "A direct proof of a theorem of Telgársky". Proceedings of the American Mathematical Society. 123 (11): 3483–3485. doi:10.1090/S0002-9939-1995-1273523-1. ISSN 0002-9939.
5. Telgársky, Rastislav (1984-06-01). "On games of Topsoe". Mathematica Scandinavica. 54: 170–176. doi:10.7146/math.scand.a-12050. ISSN 1903-1807.
|
Wikipedia
|
Rothe–Hagen identity
In mathematics, the Rothe–Hagen identity is a mathematical identity valid for all complex numbers ($x,y,z$) except where its denominators vanish:
$\sum _{k=0}^{n}{\frac {x}{x+kz}}{x+kz \choose k}{\frac {y}{y+(n-k)z}}{y+(n-k)z \choose n-k}={\frac {x+y}{x+y+nz}}{x+y+nz \choose n}.$
It is a generalization of Vandermonde's identity, and is named after Heinrich August Rothe and Johann Georg Hagen.
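The identity can be checked numerically with exact rational arithmetic. The sketch below (an illustrative check, not part of the original article) uses the generalized binomial coefficient ${a \choose k}=a(a-1)\cdots (a-k+1)/k!$ and integer parameters chosen so that no denominator vanishes.

```python
from fractions import Fraction

def binom(a, k):
    """Generalized binomial coefficient C(a, k) = a(a-1)...(a-k+1) / k!."""
    result = Fraction(1)
    for i in range(k):
        result *= Fraction(a - i, i + 1)
    return result

def lhs(x, y, z, n):
    """Left-hand side of the Rothe-Hagen identity, as an exact rational."""
    return sum(
        Fraction(x, x + k * z) * binom(x + k * z, k)
        * Fraction(y, y + (n - k) * z) * binom(y + (n - k) * z, n - k)
        for k in range(n + 1)
    )

def rhs(x, y, z, n):
    """Right-hand side of the Rothe-Hagen identity."""
    return Fraction(x + y, x + y + n * z) * binom(x + y + n * z, n)

assert lhs(3, 5, 2, 4) == rhs(3, 5, 2, 4)
# With z = 0 the identity reduces to Vandermonde's identity.
assert lhs(3, 5, 0, 4) == rhs(3, 5, 0, 4) == binom(8, 4)
```

Setting z = 0 in the code recovers Vandermonde's convolution, matching the remark above.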
References
• Chu, Wenchang (2010), "Elementary proofs for convolution identities of Abel and Hagen-Rothe", Electronic Journal of Combinatorics, 17 (1), N24, doi:10.37236/473.
• Gould, H. W. (1956), "Some generalizations of Vandermonde's convolution", The American Mathematical Monthly, 63 (2): 84–91, doi:10.1080/00029890.1956.11988763, JSTOR 2306429, MR 0075170. See especially pp. 89–91.
• Hagen, Johann G. (1891), Synopsis Der Hoeheren Mathematik, Berlin, formula 17, pp. 64–68, vol. I{{citation}}: CS1 maint: location missing publisher (link). As cited by Gould (1956).
• Ma, Xinrong (2011), "Two matrix inversions associated with the Hagen-Rothe formula, their q-analogues and applications", Journal of Combinatorial Theory, Series A, 118 (4): 1475–1493, doi:10.1016/j.jcta.2010.12.012, MR 2763069.
• Rothe, Heinrich August (1793), Formulae De Serierum Reversione Demonstratio Universalis Signis Localibus Combinatorio-Analyticorum Vicariis Exhibita: Dissertatio Academica, Leipzig. As cited by Gould (1956).
|
Wikipedia
|
Rotor (mathematics)
A rotor is an object in the geometric algebra (also called Clifford algebra) of a vector space that represents a rotation about the origin.[1] The term originated with William Kingdon Clifford,[2] in showing that the quaternion algebra is just a special case of Hermann Grassmann's "theory of extension" (Ausdehnungslehre).[3] Hestenes[4] defined a rotor to be any element $R$ of a geometric algebra that can be written as the product of an even number of unit vectors and satisfies $R{\tilde {R}}=1$, where ${\tilde {R}}$ is the "reverse" of $R$—that is, the product of the same vectors, but in reverse order.
This article is about the object in geometric algebra. For the vector concept, see Rotor (operator).
Definition
In mathematics, a rotor in the geometric algebra of a vector space V is the same thing as an element of the spin group Spin(V). We define this group below.
Let V be a vector space equipped with a positive definite quadratic form q, and let Cl(V) be the geometric algebra associated to V. The algebra Cl(V) is the quotient of the tensor algebra of V by the relations $v\cdot v=q(v)$ for all $v\in V$. (The tensor product in Cl(V) is what is called the geometric product in geometric algebra and in this article is denoted by $\cdot $.) The Z-grading on the tensor algebra of V descends to a Z/2Z-grading on Cl(V), which we denote by
$\operatorname {Cl} (V)=\operatorname {Cl} ^{\text{even}}(V)\oplus \operatorname {Cl} ^{\text{odd}}(V).$
Here, Cleven(V) is generated by even-degree blades and Clodd(V) is generated by odd-degree blades.
There is a unique antiautomorphism of Cl(V) which restricts to the identity on V: this is called the transpose, and the transpose of any multivector a is denoted by ${\tilde {a}}$. On a blade (i.e., a simple tensor), it simply reverses the order of the factors. The spin group Spin(V) is defined to be the subgroup of Cleven(V) consisting of multivectors R such that $R{\tilde {R}}=1.$ That is, it consists of multivectors that can be written as a product of an even number of unit vectors.
Action as rotation on the vector space
Figure: Rotation of a vector a through angle θ, as a double reflection along two unit vectors n and m, separated by angle θ/2 (not just θ). Each prime on a indicates a reflection. The plane of the diagram is the plane of rotation.
Reflections along a vector in geometric algebra may be represented as (minus) sandwiching a multivector M between a non-null vector v perpendicular to the hyperplane of reflection and that vector's inverse v−1:
$-vMv^{-1}$
Rotors are produced by composing an even number of such reflections and are therefore of even grade. Under a rotation generated by the rotor R, a general multivector M will transform double-sidedly as
$RMR^{-1}.$
This action gives a surjective homomorphism $\operatorname {Spin} (V)\to \operatorname {SO} (V)$ presenting Spin(V) as a double cover of SO(V). (See Spin group for more details.)
Restricted alternative formulation
For a Euclidean space, it may be convenient to consider an alternative formulation, and some authors define the operation of reflection as (minus) the sandwiching of a unit (i.e. normalized) multivector:
$-vMv,\quad v^{2}=1,$
forming rotors that are automatically normalised:
$R{\tilde {R}}={\tilde {R}}R=1.$
The derived rotor action is then expressed as a sandwich product with the reverse:
$RM{\tilde {R}}$
For a reflection for which the associated vector squares to a negative scalar, as may be the case with a pseudo-Euclidean space, such a vector can only be normalized up to the sign of its square, and additional bookkeeping of the sign when applying the rotor becomes necessary. The formulation in terms of the sandwich product with the inverse as above suffers no such shortcoming.
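The sandwich formula $RM{\tilde {R}}$ can be made concrete in the smallest nontrivial case. The following sketch (illustrative code, not part of the original article) implements the even-graded sandwich in Cl(2,0), where a multivector is a quadruple (scalar, e1, e2, e12) and the rotor $R=\cos(\theta /2)-\sin(\theta /2)\,e_{12}$ rotates a vector by θ in the e1-e2 plane.

```python
import math

# A multivector in Cl(2,0) is a 4-tuple (scalar, e1, e2, e12).
def gp(a, b):
    """Geometric product in Cl(2,0), using e1*e1 = e2*e2 = 1, e12 = e1*e2."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (
        a0 * b0 + a1 * b1 + a2 * b2 - a3 * b3,
        a0 * b1 + a1 * b0 - a2 * b3 + a3 * b2,
        a0 * b2 + a2 * b0 + a1 * b3 - a3 * b1,
        a0 * b3 + a3 * b0 + a1 * b2 - a2 * b1,
    )

def reverse(a):
    """The reverse flips the sign of the bivector part."""
    return (a[0], a[1], a[2], -a[3])

def rotor(theta):
    """Rotor cos(theta/2) - sin(theta/2) e12 for a rotation by theta."""
    return (math.cos(theta / 2), 0.0, 0.0, -math.sin(theta / 2))

def rotate(v, theta):
    """Apply R v R~ to the vector v = (x, y)."""
    R = rotor(theta)
    out = gp(gp(R, (0.0, v[0], v[1], 0.0)), reverse(R))
    return (out[1], out[2])

x, y = rotate((1.0, 0.0), math.pi / 2)   # e1 rotated by 90 degrees
assert math.isclose(x, 0.0, abs_tol=1e-12) and math.isclose(y, 1.0)
```

Note that the rotor is automatically normalised, $R{\tilde {R}}=1$, since $\cos ^{2}+\sin ^{2}=1$.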
Rotations of multivectors and spinors
Although rotors, being multivectors, also transform double-sidedly, rotors can be combined and form a group, and so multiple rotors compose single-sidedly. The alternative formulation above is not self-normalizing and motivates the definition of spinor in geometric algebra as an object that transforms single-sidedly – i.e., spinors may be regarded as non-normalised rotors in which the reverse rather than the inverse is used in the sandwich product.
Homogeneous representation algebras
In homogeneous representation algebras such as conformal geometric algebra, a rotor in the representation space corresponds to a rotation about an arbitrary point, a translation or possibly another transformation in the base space.
See also
• Double rotation
• Lie group
• Euler's formula
• Generator (mathematics)
• Versor
References
1. Doran, Chris; Lasenby, Anthony (2007). Geometric Algebra for Physicists. Cambridge, England: Cambridge University Press. p. 592. ISBN 9780521715959.
2. Clifford, William Kingdon (1878). "Applications of Grassmann's Extensive Algebra". American Journal of Mathematics. 1 (4): 353. doi:10.2307/2369379. JSTOR 2369379.
3. Grassmann, Hermann (1862). Die Ausdehnungslehre (second ed.). Berlin: T. C. F. Enslin. p. 400.
4. Hestenes, David (1987). Clifford algebra to geometric calculus (paperback ed.). Dordrecht, Holland: D. Reidel. p. 105. Hestenes uses the notation $R^{\dagger }$ for the reverse.
|
Wikipedia
|
Rotunda (geometry)
In geometry, a rotunda is any member of a family of dihedral-symmetric polyhedra. They are similar to a cupola, but instead of alternating squares and triangles, they alternate pentagons and triangles around an axis. The pentagonal rotunda is a Johnson solid.
Set of rotundas (example: pentagonal rotunda)
Faces: 1 n-gon, 1 2n-gon, n pentagons, 2n triangles
Edges: 7n
Vertices: 4n
Symmetry group: Cnv, [n], (*nn), order 2n
Rotation group: Cn, [n]+, (nn), order n
Properties: convex
Other forms can be generated with dihedral symmetry and distorted equilateral pentagons.
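As a consistency check on the counts above, an n-gonal rotunda satisfies Euler's formula V − E + F = 2. A small illustrative sketch (not part of the original article):

```python
def rotunda_counts(n):
    """Counts for an n-gonal rotunda: one n-gon, one 2n-gon,
    n pentagons and 2n triangles; 7n edges; 4n vertices."""
    faces = 1 + 1 + n + 2 * n
    edges = 7 * n
    vertices = 4 * n
    return vertices, edges, faces

# Euler's formula V - E + F = 2 holds for every n-gonal rotunda.
for n in range(3, 9):
    v, e, f = rotunda_counts(n)
    assert v - e + f == 2
```

For n = 5 this gives (20, 35, 17), the vertex, edge, and face counts of the pentagonal rotunda.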
Examples
Rotundas, for n = 3, 4, 5, 6, 7, 8:
• triangular rotunda
• square rotunda
• pentagonal rotunda
• hexagonal rotunda
• heptagonal rotunda
• octagonal rotunda
Star-rotunda
Star-rotundas, for n = 5, 7, 9, 11:
• pentagrammic rotunda
• heptagrammic rotunda
• enneagrammic rotunda
• hendecagrammic rotunda
See also
• Birotunda
References
• Norman W. Johnson, "Convex Solids with Regular Faces", Canadian Journal of Mathematics, 18, 1966, pages 169–200. Contains the original enumeration of the 92 solids and the conjecture that there are no others.
• Victor A. Zalgaller (1969). Convex Polyhedra with Regular Faces. Consultants Bureau. No ISBN. The first proof that there are only 92 Johnson solids.
|
Wikipedia
|
Rouché's theorem
Rouché's theorem, named after Eugène Rouché, states that for any two complex-valued functions f and g holomorphic inside some region $K$ with closed contour $\partial K$, if |g(z)| < |f(z)| on $\partial K$, then f and f + g have the same number of zeros inside $K$, where each zero is counted as many times as its multiplicity. This theorem assumes that the contour $\partial K$ is simple, that is, without self-intersections. Rouché's theorem is an easy consequence of a stronger symmetric Rouché's theorem described below.
For the theorem in linear algebra, see Rouché–Capelli theorem.
Usage
The theorem is usually used to simplify the problem of locating zeros, as follows. Given an analytic function, we write it as the sum of two parts, one of which is simpler and grows faster than (thus dominates) the other part. We can then locate the zeros by looking at only the dominating part. For example, the polynomial $z^{5}+3z^{3}+7$ has exactly 5 zeros in the disk $|z|<2$ since $|3z^{3}+7|\leq 31<32=|z^{5}|$ for every $|z|=2$, and $z^{5}$, the dominating part, has five zeros in the disk.
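The count in this example can be confirmed numerically via the argument principle: the number of zeros inside $|z|<2$ equals the winding number of $f$ around the origin along $|z|=2$. A rough numerical sketch (illustrative code, not part of the original article; the sample count is chosen ad hoc so that each phase step stays small):

```python
import cmath
import math

def zeros_inside(f, radius, samples=20000):
    """Count zeros of f in |z| < radius by tracking the total change
    of arg f(z) along the circle |z| = radius (argument principle)."""
    total = 0.0
    prev = f(complex(radius, 0.0))
    for k in range(1, samples + 1):
        z = radius * cmath.exp(2j * math.pi * k / samples)
        cur = f(z)
        total += cmath.phase(cur / prev)  # phase step, assumed < pi per sample
        prev = cur
    return round(total / (2 * math.pi))

f = lambda z: z**5 + 3 * z**3 + 7
assert zeros_inside(f, 2) == 5   # matches the Rouché estimate

# The dominating-part bound used above: |3z^3 + 7| <= 31 < 32 = |z^5| on |z| = 2.
circle = (2 * cmath.exp(2j * math.pi * t / 1000) for t in range(1000))
assert max(abs(3 * z**3 + 7) for z in circle) < 32
```

The last assertion checks, on sample points of the circle, the inequality that makes $z^{5}$ the dominating part.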
Geometric explanation
It is possible to provide an informal explanation of Rouché's theorem.
Let C be a closed, simple curve (i.e., not self-intersecting). Let h(z) = f(z) + g(z). If f and g are both holomorphic on the interior of C, then h must also be holomorphic on the interior of C. Then, with the conditions imposed above, Rouché's theorem in its original (non-symmetric) form says that
If |f(z)| > |h(z) − f(z)|, for every z in C, then f and h have the same number of zeros in the interior of C.
Notice that the condition |f(z)| > |h(z) − f(z)| means that for any z, the distance from f(z) to the origin is larger than the length of h(z) − f(z), which in the following picture means that for each point on the blue curve, the segment joining it to the origin is longer than the green segment associated with it. Informally we can say that the blue curve f(z) is always closer to the red curve h(z) than it is to the origin.
The previous paragraph shows that h(z) must wind around the origin exactly as many times as f(z). The index of both curves around zero is therefore the same, so by the argument principle, f(z) and h(z) must have the same number of zeros inside C.
One popular, informal way to summarize this argument is as follows: If a person were to walk a dog on a leash around and around a tree, such that the distance between the person and the tree is always greater than the length of the leash, then the person and the dog go around the tree the same number of times.
Applications
See also: Properties of polynomial roots § Bounds on (complex) polynomial roots
Bounding roots
Consider the polynomial $z^{2}+2az+b^{2}$ (where $a>b>0$). By the quadratic formula it has two zeros at $-a\pm {\sqrt {a^{2}-b^{2}}}$. Rouché's theorem can be used to obtain more precise positions of them. Since
$|z^{2}+b^{2}|\leq 2b^{2}<2a|z|{\text{ for all }}|z|=b,$
Rouché's theorem says that the polynomial has exactly one zero inside the disk $|z|<b$. Since $-a-{\sqrt {a^{2}-b^{2}}}$ is clearly outside the disk, we conclude that the zero is $-a+{\sqrt {a^{2}-b^{2}}}$.
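For a concrete instance of this example (with, say, a = 3 and b = 1; values chosen purely for illustration, not from the article):

```python
import math

a, b = 3.0, 1.0                      # requires a > b > 0
r_plus = -a + math.sqrt(a * a - b * b)
r_minus = -a - math.sqrt(a * a - b * b)

# Rouché's comparison on |z| = b: |z^2 + b^2| <= 2b^2 < 2a|z|.
assert 2 * b * b < 2 * a * b

# Exactly one of the two zeros lies in the disk |z| < b, namely -a + sqrt(a^2 - b^2).
inside = [r for r in (r_plus, r_minus) if abs(r) < b]
assert inside == [r_plus]
```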
More generally, consider a polynomial $f(z)=a_{n}z^{n}+\cdots +a_{0}$. If $|a_{k}|r^{k}>\sum _{j\neq k}|a_{j}|r^{j}$ for some $r>0$ and some $k\in \{0,\ldots ,n\}$, then by Rouché's theorem the polynomial has exactly $k$ roots inside $B(0,r)$.
This sort of argument can be useful in locating residues when one applies Cauchy's residue theorem.
Fundamental theorem of algebra
Rouché's theorem can also be used to give a short proof of the fundamental theorem of algebra. Let
$p(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots +a_{n}z^{n},\quad a_{n}\neq 0$
and choose $R>0$ so large that:
$|a_{0}+a_{1}z+\cdots +a_{n-1}z^{n-1}|\leq \sum _{j=0}^{n-1}|a_{j}|R^{j}<|a_{n}|R^{n}=|a_{n}z^{n}|{\text{ for }}|z|=R.$
Since $a_{n}z^{n}$ has $n$ zeros inside the disk $|z|<R$ (because $R>0$), it follows from Rouché's theorem that $p$ also has the same number of zeros inside the disk.
One advantage of this proof over the others is that it shows not only that a polynomial must have a zero but the number of its zeros is equal to its degree (counting, as usual, multiplicity).
Another use of Rouché's theorem is to prove the open mapping theorem for analytic functions. We refer to the article for the proof.
Symmetric version
A stronger version of Rouché's theorem was published by Theodor Estermann in 1962.[1] It states: let $K\subset G$ be a bounded region with continuous boundary $\partial K$. Two holomorphic functions $f,\,g\in {\mathcal {H}}(G)$ have the same number of roots (counting multiplicity) in $K$, if the strict inequality
$|f(z)-g(z)|<|f(z)|+|g(z)|\qquad \left(z\in \partial K\right)$
holds on the boundary $\partial K.$
The original version of Rouché's theorem then follows from this symmetric version applied to the functions $f+g,f$ together with the trivial inequality $|f(z)+g(z)|\geq 0$ (in fact this inequality is strict since $f(z)+g(z)=0$ for some $z\in \partial K$ would imply $|g(z)|=|f(z)|$).
The statement can be understood intuitively as follows. By considering $-g$ in place of $g$, the condition can be rewritten as $|f(z)+g(z)|<|f(z)|+|g(z)|$ for $z\in \partial K$. Since $|f(z)+g(z)|\leq |f(z)|+|g(z)|$ always holds by the triangle inequality, this is equivalent to saying that $|f(z)+g(z)|\neq |f(z)|+|g(z)|$ on $\partial K$, which in turn means that for $z\in \partial K$ the functions $f(z)$ and $g(z)$ are non-vanishing and $\arg {f(z)}\neq \arg {g(z)}$.
Intuitively, if the values of $f$ and $g$ never pass through the origin and never point in the same direction as $z$ circles along $\partial K$, then $f(z)$ and $g(z)$ must wind around the origin the same number of times.
Proof of the symmetric form of Rouché's theorem
Let $C\colon [0,1]\to \mathbb {C} $ be a simple closed curve whose image is the boundary $\partial K$. The hypothesis implies that f has no roots on $\partial K$, hence by the argument principle, the number Nf(K) of zeros of f in K is
${\frac {1}{2\pi i}}\oint _{C}{\frac {f'(z)}{f(z)}}\,dz={\frac {1}{2\pi i}}\oint _{f\circ C}{\frac {dz}{z}}=\mathrm {Ind} _{f\circ C}(0),$
i.e., the winding number of the closed curve $f\circ C$ around the origin; similarly for g. The hypothesis ensures that g(z) is not a negative real multiple of f(z) for any z = C(x), thus 0 does not lie on the line segment joining f(C(x)) to g(C(x)), and
$H_{t}(x)=(1-t)f(C(x))+tg(C(x))$
is a homotopy between the curves $f\circ C$ and $g\circ C$ avoiding the origin. The winding number is homotopy-invariant: the function
$I(t)=\mathrm {Ind} _{H_{t}}(0)={\frac {1}{2\pi i}}\oint _{H_{t}}{\frac {dz}{z}}$
is continuous and integer-valued, hence constant. This shows
$N_{f}(K)=\mathrm {Ind} _{f\circ C}(0)=\mathrm {Ind} _{g\circ C}(0)=N_{g}(K).$
See also
• Fundamental theorem of algebra, for a short proof using Rouché's theorem
• Hurwitz's theorem (complex analysis)
• Rational root theorem
• Properties of polynomial roots
• Riemann mapping theorem
• Sturm's theorem
References
1. Estermann, T. (1962). Complex Numbers and Functions. Athlone Press, Univ. of London. p. 156.
• Beardon, Alan (1979). Complex Analysis: The Argument Principle in Analysis and Topology. John Wiley and Sons. p. 131. ISBN 0-471-99672-6.
• Conway, John B. (1978). Functions of One Complex Variable I. Springer-Verlag New York. ISBN 978-0-387-90328-6.
• Titchmarsh, E. C. (1939). The Theory of Functions (2nd ed.). Oxford University Press. pp. 117–119, 198–203. ISBN 0-19-853349-7.
• Rouché É., Mémoire sur la série de Lagrange, Journal de l'École Polytechnique, tome 22, 1862, p. 193-224. Theorem appears at p. 217. See Gallica archives.
|
Wikipedia
|
Rouché–Capelli theorem
In linear algebra, the Rouché–Capelli theorem determines the number of solutions for a system of linear equations, given the rank of its augmented matrix and coefficient matrix. The theorem is variously known as the:
• Rouché–Capelli theorem in English speaking countries, Italy and Brazil;
• Kronecker–Capelli theorem in Austria, Poland, Croatia, Romania, Serbia and Russia;
• Rouché–Fontené theorem in France;
• Rouché–Frobenius theorem in Spain and many countries in Latin America;
• Frobenius theorem in the Czech Republic and in Slovakia.
Not to be confused with Rouché's theorem.
Formal statement
A system of linear equations with n variables has a solution if and only if the rank of its coefficient matrix A is equal to the rank of its augmented matrix [A|b].[1] If there are solutions, they form an affine subspace of $\mathbb {R} ^{n}$ of dimension n − rank(A). In particular:
• if n = rank(A), the solution is unique,
• otherwise there are infinitely many solutions.
Example
Consider the system of equations
x + y + 2z = 3,
x + y + z = 1,
2x + 2y + 2z = 2.
The coefficient matrix is
$A={\begin{bmatrix}1&1&2\\1&1&1\\2&2&2\\\end{bmatrix}},$
and the augmented matrix is
$(A|B)=\left[{\begin{array}{ccc|c}1&1&2&3\\1&1&1&1\\2&2&2&2\end{array}}\right].$
Since both matrices have the same rank, namely 2, there exists at least one solution; and since this rank is less than the number of unknowns (3), there are infinitely many solutions.
In contrast, consider the system
x + y + 2z = 3,
x + y + z = 1,
2x + 2y + 2z = 5.
The coefficient matrix is
$A={\begin{bmatrix}1&1&2\\1&1&1\\2&2&2\\\end{bmatrix}},$
and the augmented matrix is
$(A|B)=\left[{\begin{array}{ccc|c}1&1&2&3\\1&1&1&1\\2&2&2&5\end{array}}\right].$
In this example the coefficient matrix has rank 2, while the augmented matrix has rank 3; so this system of equations has no solution. Indeed, an increase in the number of linearly independent columns has made the system of equations inconsistent.
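The rank comparison in both examples can be reproduced with a small exact row reduction over the rationals (an illustrative sketch, not a robust linear-algebra routine):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * p for a, p in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 1, 2], [1, 1, 1], [2, 2, 2]]

def consistent(A, b):
    """Rouché-Capelli: solvable iff rank(A) == rank of the augmented matrix."""
    return rank(A) == rank([row + [bi] for row, bi in zip(A, b)])

assert rank(A) == 2
assert consistent(A, [3, 1, 2])       # first system: infinitely many solutions
assert not consistent(A, [3, 1, 5])   # second system: inconsistent
```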
See also
• Cramer's rule
• Gaussian elimination
References
1. Shafarevich, Igor R.; Remizov, Alexey (2012-08-23). Linear Algebra and Geometry. Springer Science & Business Media. p. 56. ISBN 9783642309946.
• A. Carpinteri (1997). Structural mechanics. Taylor and Francis. p. 74. ISBN 0-419-19160-7.
External links
• Kronecker-Capelli Theorem at Wikibooks
• Kronecker-Capelli's Theorem - YouTube video with a proof
• Kronecker-Capelli theorem in the Encyclopaedia of Mathematics
|
Wikipedia
|
Rough number
A k-rough number, as defined by Finch in 2001 and 2003, is a positive integer whose prime factors are all greater than or equal to k. k-roughness has alternatively been defined as requiring all prime factors to strictly exceed k.[1]
Examples (after Finch)
1. Every odd positive integer is 3-rough.
2. Every positive integer that is congruent to 1 or 5 mod 6 is 5-rough.
3. Every positive integer is 2-rough, since all its prime factors, being prime numbers, exceed 1.
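Under Finch's convention (all prime factors ≥ k), roughness is easy to test by trial division. A small sketch (illustrative, not from the article):

```python
def smallest_prime_factor(n):
    """Least prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def is_k_rough(n, k):
    """True if every prime factor of n is >= k; 1 is vacuously k-rough."""
    return n == 1 or smallest_prime_factor(n) >= k

# The 5-rough numbers below 30 are exactly those congruent to 1 or 5 mod 6,
# in line with example 2 above.
fives = [n for n in range(1, 30) if is_k_rough(n, 5)]
assert fives == [1, 5, 7, 11, 13, 17, 19, 23, 25, 29]
assert all(n % 6 in (1, 5) for n in fives)
```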
See also
• Buchstab function, used to count rough numbers
• Smooth number
Notes
1. p. 130, Naccache and Shparlinski 2009.
References
• Weisstein, Eric W. "Rough Number". MathWorld.
• Finch's definition from Number Theory Archives
• "Divisibility, Smoothness and Cryptographic Applications", D. Naccache and I. E. Shparlinski, pp. 115–173 in Algebraic Aspects of Digital Communications, eds. Tanush Shaska and Engjell Hasimaj, IOS Press, 2009, ISBN 9781607500193.
The On-Line Encyclopedia of Integer Sequences (OEIS) lists p-rough numbers for small p:
• 2-rough numbers: A000027
• 3-rough numbers: A005408
• 5-rough numbers: A007310
• 7-rough numbers: A007775
• 11-rough numbers: A008364
• 13-rough numbers: A008365
• 17-rough numbers: A008366
• 19-rough numbers: A166061
• 23-rough numbers: A166063
|
Wikipedia
|
Rough path
In stochastic analysis, a rough path is a generalization of the notion of smooth path allowing one to construct a robust solution theory for controlled differential equations driven by classically irregular signals, for example a Wiener process. The theory was developed in the 1990s by Terry Lyons.[1][2][3] Several accounts of the theory are available.[4][5][6][7]
Rough path theory is focused on capturing and making precise the interactions between highly oscillatory and non-linear systems. It builds upon the harmonic analysis of L. C. Young, the geometric algebra of K. T. Chen, the Lipschitz function theory of H. Whitney and core ideas of stochastic analysis. The concepts and the uniform estimates have widespread application in pure and applied mathematics and beyond. The theory provides a toolbox to recover with relative ease many classical results in stochastic analysis (Wong-Zakai, the Stroock-Varadhan support theorem, construction of stochastic flows, etc.) without using specific probabilistic properties such as the martingale property or predictability. It also extends Itô's theory of SDEs far beyond the semimartingale setting.

At the heart of the mathematics is the challenge of describing a smooth but potentially highly oscillatory and multidimensional path $x_{t}$ effectively, so as to accurately predict its effect on a nonlinear dynamical system $\mathrm {d} y_{t}=f(y_{t})\,\mathrm {d} x_{t},y_{0}=a$. The signature is a homomorphism from the monoid of paths (under concatenation) into the grouplike elements of the free tensor algebra. It provides a graduated summary of the path $x$. This noncommutative transform is faithful for paths up to appropriate null modifications. These graduated summaries or features of a path are at the heart of the definition of a rough path; locally they remove the need to look at the fine structure of the path. Taylor's theorem explains how any smooth function can, locally, be expressed as a linear combination of certain special functions (monomials based at that point). Coordinate iterated integrals (terms of the signature) form a more subtle algebra of features that can describe a stream or path in an analogous way; they allow a definition of a rough path and form a natural linear "basis" for continuous functions on paths.
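For piecewise-linear paths, the first two levels of the signature can be accumulated segment by segment with Chen's identity. The sketch below (illustrative code, not from the article) records the increment (level 1) and the iterated integrals $\textstyle \int \int _{s<t}\mathrm {d} X_{s}^{i}\,\mathrm {d} X_{t}^{j}$ (level 2), whose antisymmetric part is the Lévy area.

```python
def signature_level2(points):
    """Levels 1 and 2 of the signature of the piecewise-linear path
    through `points` in R^d, accumulated with Chen's identity."""
    d = len(points[0])
    s1 = [0.0] * d
    s2 = [[0.0] * d for _ in range(d)]
    for p, q in zip(points, points[1:]):
        inc = [qi - pi for qi, pi in zip(q, p)]
        for i in range(d):
            for j in range(d):
                # Chen: S2(concat) = S2(old) + S1(old) (x) S1(segment) + S2(segment);
                # for a straight segment, S2 = inc_i * inc_j / 2.
                s2[i][j] += s1[i] * inc[j] + inc[i] * inc[j] / 2
        for i in range(d):
            s1[i] += inc[i]
    return s1, s2

# Right-angle path: one step east, one step north.
s1, s2 = signature_level2([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
assert s1 == [1.0, 1.0]
# Shuffle identity: S^{ij} + S^{ji} = S^i * S^j.
assert s2[0][1] + s2[1][0] == s1[0] * s1[1]
# Lévy area of the right-angle path is 1/2.
assert (s2[0][1] - s2[1][0]) / 2 == 0.5
```

These low-order terms are exactly the "graduated summaries" described above: level 1 sees only the endpoints, while level 2 already distinguishes the order in which the two directions were traversed.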
Martin Hairer used rough paths to construct a robust solution theory for the KPZ equation.[8] He then proposed a generalization known as the theory of regularity structures[9] for which he was awarded a Fields medal in 2014.
Motivation
Rough path theory aims to make sense of the controlled differential equation
$\mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j}.$
where the control, the continuous path $X_{t}$ taking values in a Banach space, need not be differentiable nor of bounded variation. A canonical example of the control $X_{t}$ is the sample path of a Wiener process. In this case, the controlled differential equation above can be interpreted as a stochastic differential equation, and integration against "$\mathrm {d} X_{t}^{j}$" can be defined in the sense of Itô. However, Itô's calculus is defined in the $L^{2}$ sense and is in particular not a pathwise definition. Rough paths give an almost sure pathwise definition of stochastic differential equations. The rough path notion of solution is well-posed in the sense that if $X(n)_{t}$ is a sequence of smooth paths converging to $X_{t}$ in the $p$-variation metric (described below), and
$\mathrm {d} Y(n)_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y(n)_{t})\,\mathrm {d} X(n)_{t}^{j};$
$\mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j},$
then $Y(n)$ converges to $Y$ in the $p$-variation metric. This continuity property and the deterministic nature of solutions make it possible to simplify and strengthen many results in stochastic analysis, such as Freidlin–Wentzell large deviation theory[10] as well as results about stochastic flows.
In fact, rough path theory can go far beyond the scope of Itô and Stratonovich calculus and makes sense of differential equations driven by non-semimartingale paths, such as Gaussian processes and Markov processes.[11]
Definition of a rough path
Rough paths are paths taking values in the truncated free tensor algebra (more precisely: in the free nilpotent group embedded in the free tensor algebra), which this section now briefly recalls. The tensor powers of $\mathbb {R} ^{d}$, denoted ${\big (}\mathbb {R} ^{d}{\big )}^{\otimes n}$, are equipped with the projective norm $\Vert \cdot \Vert $ (see Topological tensor product, note that rough path theory in fact works for a more general class of norms). Let $T^{(n)}(\mathbb {R} ^{d})$ be the truncated tensor algebra
$T^{(n)}(\mathbb {R} ^{d})=\bigoplus _{i=0}^{n}{\big (}\mathbb {R} ^{d}{\big )}^{\otimes i},$ where by convention $(\mathbb {R} ^{d})^{\otimes 0}\cong \mathbb {R} $.
Let $\triangle _{0,1}$ be the simplex $\{(s,t):0\leq s\leq t\leq 1\}$. Let $p\geq 1$. Let $\mathbf {X} $ and $\mathbf {Y} $ be continuous maps $\triangle _{0,1}\to T^{(\lfloor p\rfloor )}(\mathbb {R} ^{d})$. Let $\mathbf {X} ^{j}$ denote the projection of $\mathbf {X} $ onto $j$-tensors and likewise for $\mathbf {Y} ^{j}$. The $p$-variation metric is defined as
$d_{p}\left(\mathbf {X} ,\mathbf {Y} \right):=\max _{j=1,\ldots ,\lfloor p\rfloor }\sup _{0=t_{0}<t_{1}<\cdots <t_{n}=1}\left(\sum _{i=0}^{n-1}\Vert \mathbf {X} _{t_{i},t_{i+1}}^{j}-\mathbf {Y} _{t_{i},t_{i+1}}^{j}\Vert ^{\frac {p}{j}}\right)^{\frac {j}{p}}$
where the supremum is taken over all finite partitions $\{0=t_{0}<t_{1}<\cdots <t_{n}=1\}$ of $[0,1]$.
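Restricted to a fixed grid of sample times, the supremum in $d_{p}$ over sub-partitions of the grid can be computed exactly by dynamic programming. The following Python sketch is an illustration only, limited to the level-$j=1$ term; the function name and setup are ours, not part of the standard theory.

```python
import numpy as np

def p_variation_level1(x, y, p):
    """Level-1 term of d_p(X, Y) for two paths sampled on a common grid.

    x, y: arrays of shape (n, d).  The supremum over partitions is taken
    over sub-partitions of the grid points via dynamic programming:
    V[i] = max_{j < i} V[j] + ||(x[i]-x[j]) - (y[i]-y[j])||^p.
    """
    n = len(x)
    V = np.zeros(n)
    for i in range(1, n):
        diffs = (x[i] - x[:i]) - (y[i] - y[:i])   # increment differences over [t_j, t_i]
        V[i] = np.max(V[:i] + np.linalg.norm(diffs, axis=1) ** p)
    return V[-1] ** (1.0 / p)

# For a monotone scalar path against the zero path with p >= 1, the
# supremum is attained by the single increment, so the distance is 1.
x = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
assert abs(p_variation_level1(x, np.zeros_like(x), 2.0) - 1.0) < 1e-12
```

Computing the full metric would apply the same dynamic programme to each tensor level $j$ with exponent $p/j$ and take the maximum over $j$.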
A continuous function $\mathbf {X} :\triangle _{0,1}\rightarrow T^{(\lfloor p\rfloor )}(\mathbb {R} ^{d})$ is a $p$-geometric rough path if there exists a sequence of paths with finite total variation $X(1),X(2),\ldots $ such that
$\mathbf {X} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} X(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\cdots <s_{\lfloor p\rfloor }<t}\,\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{\lfloor p\rfloor }}\right)$
converges in the $p$-variation metric to $\mathbf {X} $ as $n\rightarrow \infty $.[12]
Universal limit theorem
A central result in rough path theory is Lyons' Universal Limit theorem.[1] One (weak) version of the result is the following: Let $X(n)$ be a sequence of paths with finite total variation and let
$\mathbf {X} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} X(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\ldots <s_{\lfloor p\rfloor }<t}\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{\lfloor p\rfloor }}\right)$ denote the rough path lift of $X(n)$.
Suppose that $\mathbf {X} (n)$ converges in the $p$-variation metric to a $p$-geometric rough path $\mathbf {X} $ as $n\to \infty $. Let $(V_{j}^{i})_{j=1,\ldots ,d}^{i=1,\ldots ,n}$ be functions that have at least $\lfloor p\rfloor $ bounded derivatives and whose $\lfloor p\rfloor $-th derivatives are $\alpha $-Hölder continuous for some $\alpha >p-\lfloor p\rfloor $. Let $Y(n)$ be the solution to the differential equation
$\mathrm {d} Y(n)_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y(n)_{t})\,\mathrm {d} X(n)_{t}^{j}$
and let $\mathbf {Y} (n)$ be defined as
$\mathbf {Y} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\,\mathrm {d} Y(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\ldots <s_{\lfloor p\rfloor }<t}\mathrm {d} Y(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} Y(n)_{s_{\lfloor p\rfloor }}\right).$
Then $\mathbf {Y} (n)$ converges in the $p$-variation metric to a $p$-geometric rough path $\mathbf {Y} $.
Moreover, $\mathbf {Y} $ is the solution to the differential equation
$\mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j}\qquad (\star )$
driven by the geometric rough path $\mathbf {X} $.
Concisely, the theorem can be interpreted as saying that the solution map (also known as the Itô–Lyons map) $\Phi :G\Omega _{p}(\mathbb {R} ^{d})\to G\Omega _{p}(\mathbb {R} ^{e})$ of the RDE $(\star )$ is continuous (and in fact locally Lipschitz) in the $p$-variation topology. Hence rough path theory demonstrates that by viewing driving signals as rough paths, one obtains a robust solution theory for classical stochastic differential equations and beyond.
Examples of rough paths
Brownian motion
Let $(B_{t})_{t\geq 0}$ be a multidimensional standard Brownian motion. Let $\circ $ denote the Stratonovich integration. Then
$\mathbf {B} _{s,t}=\left(1,\int _{s<s_{1}<t}\circ \mathrm {d} B_{s_{1}},\int _{s<s_{1}<s_{2}<t}\circ \mathrm {d} B_{s_{1}}\otimes \circ \mathrm {d} B_{s_{2}}\right)$
is a $p$-geometric rough path for any $2<p<3$. This geometric rough path is called the Stratonovich Brownian rough path.
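Numerically, the level-2 Stratonovich integrals can be approximated by those of the piecewise-linear interpolation of a sampled Brownian path, accumulated segment by segment. The NumPy sketch below (variable names are ours) also checks the identity $\mathbf {B} _{s,t}^{2}+(\mathbf {B} _{s,t}^{2})^{\top }=(B_{t}-B_{s})\otimes (B_{t}-B_{s})$, which holds exactly for piecewise-linear paths and characterizes geometric rough paths at level 2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 1000, 2, 1.0
dB = rng.normal(0.0, np.sqrt(T / n), size=(n, d))      # Brownian increments
B = np.vstack([np.zeros(d), np.cumsum(dB, axis=0)])    # sampled path, B_0 = 0

# Level-2 iterated integral of the piecewise-linear interpolation,
# accumulated over successive segments (a discrete Chen's identity):
#   X2 <- X2 + (B_{t_k} - B_0) (x) dB_k + (1/2) dB_k (x) dB_k
X2 = np.zeros((d, d))
for k in range(n):
    X2 += np.outer(B[k] - B[0], dB[k]) + 0.5 * np.outer(dB[k], dB[k])

inc = B[-1] - B[0]
# Symmetric part equals half the squared increment (the geometric/shuffle
# identity); the antisymmetric part is the Lévy area.
assert np.allclose(X2 + X2.T, np.outer(inc, inc))
```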
Fractional Brownian motion
More generally, let $B_{H}(t)$ be a multidimensional fractional Brownian motion (a process whose coordinate components are independent fractional Brownian motions) with $H>{\frac {1}{4}}$. If $B_{H}^{m}(t)$ is the $m$-th dyadic piecewise linear interpolation of $B_{H}(t)$, then
${\begin{aligned}\mathbf {B} _{H}^{m}(s,t)=\left(1,\int _{s<s_{1}<t}\right.&\mathrm {d} B_{H}^{m}(s_{1}),\int _{s<s_{1}<s_{2}<t}\,\mathrm {d} B_{H}^{m}(s_{1})\otimes \mathrm {d} B_{H}^{m}(s_{2}),\\&\left.\int _{s<s_{1}<s_{2}<s_{3}<t}\mathrm {d} B_{H}^{m}(s_{1})\otimes \mathrm {d} B_{H}^{m}(s_{2})\otimes \mathrm {d} B_{H}^{m}(s_{3})\right)\end{aligned}}$
converges almost surely in the $p$-variation metric to a $p$-geometric rough path for $p>{\frac {1}{H}}$.[13] This limiting geometric rough path can be used to make sense of differential equations driven by fractional Brownian motion with Hurst parameter $H>{\frac {1}{4}}$. When $0<H\leq {\frac {1}{4}}$, the above limit along dyadic approximations does not converge in $p$-variation. However, one can still make sense of differential equations provided one exhibits a rough path lift; the existence of such a (non-unique) lift is a consequence of the Lyons–Victoir extension theorem.
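A one-dimensional fractional Brownian motion (one coordinate of the process above) can be sampled on a grid by Cholesky factorisation of its covariance $R(s,t)={\tfrac {1}{2}}\left(s^{2H}+t^{2H}-|t-s|^{2H}\right)$. The sketch below is an illustration only: the function name and the diagonal jitter are ours, and the $O(n^{3})$ cost rules it out for large grids.

```python
import numpy as np

def fbm_sample(H, n, T=1.0, seed=0):
    """Sample B_H at n+1 grid points of [0, T] via Cholesky factorisation
    of the fBm covariance R(s, t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2."""
    t = np.linspace(T / n, T, n)                       # strictly positive times
    s, u = np.meshgrid(t, t, indexing="ij")
    R = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))      # small jitter for stability
    z = np.random.default_rng(seed).standard_normal(n)
    return np.concatenate([[0.0], L @ z])              # prepend B_H(0) = 0

path = fbm_sample(H=0.3, n=200)    # rougher than Brownian motion (H = 1/2)
```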
Non-uniqueness of enhancement
In general, let $(X_{t})_{t\geq 0}$ be a $\mathbb {R} ^{d}$-valued stochastic process. If one can construct, almost surely, functions $(s,t)\rightarrow \mathbf {X} _{s,t}^{j}\in {\big (}\mathbb {R} ^{d}{\big )}^{\otimes j}$ so that
$\mathbf {X} :(s,t)\rightarrow (1,X_{t}-X_{s},\mathbf {X} _{s,t}^{2},\ldots ,\mathbf {X} _{s,t}^{\lfloor p\rfloor })$
is a $p$-geometric rough path, then $\mathbf {X} _{s,t}$ is an enhancement of the process $X$. Once an enhancement has been chosen, the machinery of rough path theory will allow one to make sense of the controlled differential equation
$\mathrm {d} Y_{t}^{i}=\sum _{j=1}^{d}V_{j}^{i}(Y_{t})\,\mathrm {d} X_{t}^{j}$
for sufficiently regular vector fields $V_{j}^{i}.$
Note that every stochastic process (even if it is a deterministic path) can have more than one (in fact, uncountably many) possible enhancements.[14] Different enhancements will give rise to different solutions to the controlled differential equations. In particular, it is possible to enhance Brownian motion to a geometric rough path in a way other than the Brownian rough path.[15] This implies that the Stratonovich calculus is not the only theory of stochastic calculus that satisfies the classical product rule
$\mathrm {d} (X_{t}\cdot Y_{t})=X_{t}\,\mathrm {d} Y_{t}+Y_{t}\,\mathrm {d} X_{t}.$
In fact, any enhancement of Brownian motion as a geometric rough path will give rise to a calculus that satisfies this classical product rule. Itô calculus does not come directly from enhancing Brownian motion as a geometric rough path, but rather as a branched rough path.
Applications in stochastic analysis
Stochastic differential equations driven by non-semimartingales
Rough path theory gives a pathwise notion of solution to (stochastic) differential equations of the form
$\mathrm {d} Y_{t}=b(Y_{t})\,\mathrm {d} t+\sigma (Y_{t})\,\mathrm {d} X_{t}$
provided that the multidimensional stochastic process $X_{t}$ can be almost surely enhanced as a rough path and that the drift $b$ and the volatility $\sigma $ are sufficiently smooth (see the section on the Universal Limit Theorem).
There are many examples of Markov processes, Gaussian processes, and other processes that can be enhanced as rough paths.[16]
There are, in particular, many results on solutions to differential equations driven by fractional Brownian motion that have been proved using a combination of Malliavin calculus and rough path theory. In fact, it has been proved that the solution to a controlled differential equation driven by a class of Gaussian processes, which includes fractional Brownian motion with Hurst parameter $H>{\frac {1}{4}}$, has a smooth density under Hörmander's condition on the vector fields.[17][18]
Freidlin–Wentzell's large deviation theory
Let $L(V,W)$ denote the space of bounded linear maps from a Banach space $V$ to another Banach space $W$.
Let $B_{t}$ be a $d$-dimensional standard Brownian motion. Let $b:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{n}$ and $\sigma :\mathbb {R} ^{n}\rightarrow L(\mathbb {R} ^{d},\mathbb {R} ^{n})$ be twice-differentiable functions whose second derivatives are $\alpha $-Hölder for some $\alpha >0$.
Let $X^{\varepsilon }$ be the unique solution to the stochastic differential equation
$\mathrm {d} X_{t}^{\varepsilon }=b(X_{t}^{\varepsilon })\,\mathrm {d} t+{\sqrt {\varepsilon }}\,\sigma (X_{t}^{\varepsilon })\circ \mathrm {d} B_{t};\quad X_{0}^{\varepsilon }=a,$
where $\circ $ denotes Stratonovich integration.
Freidlin–Wentzell large deviation theory studies the asymptotic behavior, as $\varepsilon \rightarrow 0$, of $\mathbb {P} [X^{\varepsilon }\in F]$ for closed or open sets $F$ with respect to the uniform topology.
The Universal Limit Theorem guarantees that the Itô map sending the control path $(t,{\sqrt {\varepsilon }}B_{t})$ to the solution $X^{\varepsilon }$ is continuous from the $p$-variation topology to the $p$-variation topology (and hence the uniform topology). Therefore, the contraction principle in large deviation theory reduces Freidlin–Wentzell's problem to demonstrating the large deviation principle for $(t,{\sqrt {\varepsilon }}B_{t})$ in the $p$-variation topology.[10]
This strategy can be applied not just to differential equations driven by Brownian motion but also to differential equations driven by any stochastic process that can be enhanced as a rough path, such as fractional Brownian motion.
Stochastic flow
Once again, let $B_{t}$ be a $d$-dimensional Brownian motion. Assume that the drift term $b$ and the volatility term $\sigma $ have sufficient regularity so that the stochastic differential equation
$\mathrm {d} \phi _{s,t}(x)=b(\phi _{s,t}(x))\,\mathrm {d} t+\sigma {(\phi _{s,t}(x))}\,\mathrm {d} B_{t};\quad \phi _{s,s}(x)=x$
has a unique solution in the sense of rough paths. A basic question in the theory of stochastic flows is whether the flow map $\phi _{s,t}(x)$ exists and satisfies the cocycle property that for all $s\leq u\leq t$,
$\phi _{u,t}(\phi _{s,u}(x))=\phi _{s,t}(x)$
outside a null set independent of $s,u,t$.
The Universal Limit Theorem once again reduces this problem to whether the Brownian rough path $\mathbf {B_{s,t}} $ exists and satisfies the multiplicative property that for all $s\leq u\leq t$,
$\mathbf {B} _{s,u}\otimes \mathbf {B} _{u,t}=\mathbf {B} _{s,t}$
outside a null set independent of $s$, $u$ and $t$.
In fact, rough path theory gives the existence and uniqueness of $\phi _{s,t}(x)$ not only outside a null set independent of $s$, $t$ and $x$, but also independent of the drift $b$ and the volatility $\sigma $.
As in the case of Freidlin–Wentzell theory, this strategy applies not only to differential equations driven by Brownian motion but to any stochastic process that can be enhanced as a rough path.
Controlled rough path
Controlled rough paths, introduced by M. Gubinelli,[5] are paths $\mathbf {Y} $ for which the rough integral
$\int _{s}^{t}\mathbf {Y} _{u}\,\mathrm {d} X_{u}$
can be defined for a given geometric rough path $X$.
More precisely, let $L(V,W)$ denote the space of bounded linear maps from a Banach space $V$ to another Banach space $W$.
Given a $p$-geometric rough path
$\mathbf {X} =(1,\mathbf {X} ^{1},\ldots ,\mathbf {X} ^{\lfloor p\rfloor })$
on $\mathbb {R} ^{d}$, a $\gamma $-controlled path is a function $\mathbf {Y} _{s}=(\mathbf {Y} _{s}^{0},\mathbf {Y} _{s}^{1},\ldots ,\mathbf {Y} _{s}^{\lfloor \gamma \rfloor })$ such that $\mathbf {Y} ^{j}:[0,1]\rightarrow L((\mathbb {R} ^{d})^{\otimes j+1},\mathbb {R} ^{n})$ and that there exists $M>0$ such that for all $0\leq s\leq t\leq 1$ and $j=0,1,\ldots ,\lfloor \gamma \rfloor $,
$\Vert \mathbf {Y} _{s}^{j}\Vert \leq M$
and
$\left\|\mathbf {Y} _{t}^{j}-\sum _{i=0}^{\lfloor \gamma \rfloor -j}\mathbf {Y} _{s}^{j+i}\mathbf {X} _{s,t}^{i}\right\|\leq M|t-s|^{\frac {\gamma -j}{p}}.$
Example: Lip(γ) function
Let $\mathbf {X} =(1,\mathbf {X} ^{1},\ldots ,\mathbf {X} ^{\lfloor p\rfloor })$ be a $p$-geometric rough path satisfying the Hölder condition that there exists $M>0$ such that for all $0\leq s\leq t\leq 1$ and all $j=1,2,\ldots ,\lfloor p\rfloor $,
$\Vert \mathbf {X} _{s,t}^{j}\Vert \leq M(t-s)^{\frac {j}{p}},$
where $\mathbf {X} ^{j}$ denotes the $j$-th tensor component of $\mathbf {X} $. Let $\gamma \geq 1$. Let $f:\mathbb {R} ^{d}\rightarrow \mathbb {R} ^{n}$ be a $\lfloor \gamma \rfloor $-times differentiable function whose $\lfloor \gamma \rfloor $-th derivative is $(\gamma -\lfloor \gamma \rfloor )$-Hölder continuous; then
$(f(\mathbf {X} _{s}^{1}),Df(\mathbf {X} _{s}^{1}),\ldots ,D^{\lfloor \gamma \rfloor }f(\mathbf {X} _{s}^{1}))$
is a $\gamma $-controlled path.
Integral of a controlled path is a controlled path
If $\mathbf {Y} $ is a $\gamma $-controlled path where $\gamma >p-1$, then
$\int _{s}^{t}\mathbf {Y} _{u}\,\mathrm {d} X_{u}$
is defined and the path
$\left(\int _{s}^{t}\mathbf {Y} _{u}\,\mathrm {d} X_{u},\mathbf {Y} _{s}^{0},\mathbf {Y} _{s}^{1},\ldots ,\mathbf {Y} _{s}^{\lfloor \gamma -1\rfloor }\right)$
is a $\gamma $-controlled path.
Solution to controlled differential equation is a controlled path
Let $V:\mathbb {R} ^{n}\rightarrow L(\mathbb {R} ^{d},\mathbb {R} ^{n})$ be a function that has at least $\lfloor \gamma \rfloor $ derivatives and whose $\lfloor \gamma \rfloor $-th derivatives are $(\gamma -\lfloor \gamma \rfloor )$-Hölder continuous for some $\gamma >p$. Let $Y$ be the solution to the differential equation
$\mathrm {d} Y_{t}=V(Y_{t})\,\mathrm {d} X_{t}.$
Define
${\frac {\mathrm {d} Y}{\mathrm {d} X}}(\cdot )=V(\cdot );$
${\frac {\mathrm {d} ^{r+1}Y}{\mathrm {d} ^{r+1}X}}(\cdot )=D\left({\frac {\mathrm {d} ^{r}Y}{\mathrm {d} ^{r}X}}\right)(\cdot )V(\cdot ),$
where $D$ denotes the derivative operator, then
$\left(Y_{t},{\frac {\mathrm {d} Y}{\mathrm {d} X}}(Y_{t}),{\frac {\mathrm {d} ^{2}Y}{\mathrm {d} ^{2}X}}(Y_{t}),\ldots ,{\frac {\mathrm {d} ^{\lfloor \gamma \rfloor }Y}{\mathrm {d} ^{\lfloor \gamma \rfloor }X}}(Y_{t})\right)$
is a $\gamma $-controlled path.
Signature
Let $X:[0,1]\rightarrow \mathbb {R} ^{d}$ be a continuous function with finite total variation. Define
$S(X)_{s,t}=\left(1,\int _{s<s_{1}<t}\mathrm {d} X_{s_{1}},\int _{s<s_{1}<s_{2}<t}\mathrm {d} X_{s_{1}}\otimes \mathrm {d} X_{s_{2}},\ldots ,\int _{s<s_{1}<\cdots <s_{n}<t}\mathrm {d} X_{s_{1}}\otimes \cdots \otimes \mathrm {d} X_{s_{n}},\ldots \right).$
The signature of a path is defined to be $S(X)_{0,1}$.
The signature can also be defined for geometric rough paths. Let $\mathbf {X} $ be a geometric rough path and let $\mathbf {X} (n)$ be a sequence of paths with finite total variation such that
$\mathbf {X} (n)_{s,t}=\left(1,\int _{s<s_{1}<t}\,\mathrm {d} X(n)_{s_{1}},\ldots ,\int _{s<s_{1}<\cdots <s_{\lfloor p\rfloor }<t}\,\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{\lfloor p\rfloor }}\right)$
converges in the $p$-variation metric to $\mathbf {X} $. Then
$\int _{s<s_{1}<\cdots <s_{N}<t}\,\mathrm {d} X(n)_{s_{1}}\otimes \cdots \otimes \mathrm {d} X(n)_{s_{N}}$
converges as $n\rightarrow \infty $ for each $N$. The signature of the geometric rough path $\mathbf {X} $ can be defined as the limit of $S(X(n))_{s,t}$ as $n\rightarrow \infty $.
The signature satisfies Chen's identity,[19] that
$S(\mathbf {X} )_{s,u}\otimes S(\mathbf {X} )_{u,t}=S(\mathbf {X} )_{s,t}$
for all $s\leq u\leq t$.
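Chen's identity can be checked directly in a truncated tensor algebra: the signature of a single linear segment with increment $v$ is the tensor exponential $(1,v,v^{\otimes 2}/2!,v^{\otimes 3}/3!,\ldots )$, and concatenation of paths corresponds to multiplication of signatures. A NumPy sketch in $T^{(3)}(\mathbb {R} ^{2})$ (the helper names are ours) verifies Chen's identity and the level-2 shuffle identity for a piecewise-linear path.

```python
import numpy as np

N = 3  # truncation level

def seg_sig(v):
    """Signature of a linear segment with increment v: the tensor
    exponential (1, v, v⊗v/2!, v⊗v⊗v/3!)."""
    levels = [np.array(1.0)]
    for k in range(1, N + 1):
        levels.append(np.tensordot(levels[-1], v, axes=0) / k)
    return levels

def tensor_mul(a, b):
    """Product in the truncated tensor algebra T^(N)(R^d)."""
    return [sum(np.tensordot(a[i], b[k - i], axes=0) for i in range(k + 1))
            for k in range(N + 1)]

def signature(points):
    """Truncated signature of the piecewise-linear path through `points`,
    built by multiplying segment signatures (Chen's identity)."""
    sig = seg_sig(np.zeros(points.shape[1]))           # identity element
    for k in range(len(points) - 1):
        sig = tensor_mul(sig, seg_sig(points[k + 1] - points[k]))
    return sig

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
whole = signature(pts)
inc = pts[-1] - pts[0]
assert np.allclose(whole[1], inc)                      # level 1 = total increment
assert np.allclose(whole[2] + whole[2].T, np.outer(inc, inc))  # shuffle identity
# Chen's identity: S(X)_{s,u} ⊗ S(X)_{u,t} = S(X)_{s,t}
left, right = signature(pts[:3]), signature(pts[2:])
assert all(np.allclose(u, v) for u, v in zip(tensor_mul(left, right), whole))
```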
Kernel of the signature transform
The set of paths whose signature is the trivial sequence, or more precisely,
$S(\mathbf {X} )_{0,1}=(1,0,0,\ldots )$
can be completely characterized using the idea of tree-like path.
A $p$-geometric rough path is tree-like if there exists a continuous function $h:[0,1]\rightarrow [0,\infty )$ such that $h(0)=h(1)=0$ and for all $j=1,\ldots ,\lfloor p\rfloor $ and all $0\leq s\leq t\leq 1$,
$\Vert \mathbf {X} _{s,t}^{j}\Vert ^{p}\leq h(t)+h(s)-2\inf _{u\in [s,t]}h(u)$
where $\mathbf {X} ^{j}$ denotes the $j$-th tensor component of $\mathbf {X} $.
A geometric rough path $\mathbf {X} $ satisfies $S(\mathbf {X} )_{0,1}=(1,0,\ldots )$ if and only if $\mathbf {X} $ is tree-like.[20][21]
Given the signature of a path, it is possible to reconstruct the unique path that has no tree-like pieces.[22][23]
Infinite dimensions
It is also possible to extend the core results of rough path theory to infinite dimensions, provided that the norm on the tensor algebra satisfies a certain admissibility condition.[24]
References
1. Lyons, Terry (1998). "Differential equations driven by rough signals". Revista Matemática Iberoamericana. 14 (2): 215–310. doi:10.4171/RMI/240. ISSN 0213-2230. S2CID 59183294. Zbl 0923.34056. Wikidata Q55933523.
2. Lyons, Terry; Qian, Zhongmin (2002). System Control and Rough Paths. Oxford Mathematical Monographs. Oxford: Clarendon Press. doi:10.1093/acprof:oso/9780198506485.001.0001. ISBN 9780198506485. Zbl 1029.93001.
3. Lyons, Terry; Caruana, Michael; Levy, Thierry (2007). Differential equations driven by rough paths, vol. 1908 of Lecture Notes in Mathematics. Springer.
4. Lejay, A. (2003). "An Introduction to Rough Paths". Séminaire de Probabilités XXXVII. Lecture Notes in Mathematics. Vol. 1832. pp. 1–59. doi:10.1007/978-3-540-40004-2_1. ISBN 978-3-540-20520-3. S2CID 12401468.
5. Gubinelli, Massimiliano (November 2004). "Controlling rough paths". Journal of Functional Analysis. 216 (1): 86–140. doi:10.1016/J.JFA.2004.01.002. ISSN 0022-1236. S2CID 119717942. Zbl 1058.60037. Wikidata Q56689330.
6. Friz, Peter K.; Victoir, Nicolas (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge Studies in Advanced Mathematics. Cambridge University Press.
7. Friz, Peter K.; Hairer, Martin (2014). A Course on Rough Paths, with an introduction to regularity structures. Springer.
8. Hairer, Martin (7 June 2013). "Solving the KPZ equation". Annals of Mathematics. 178 (2): 559–664. arXiv:1109.6811. doi:10.4007/ANNALS.2013.178.2.4. ISSN 0003-486X. JSTOR 23470800. MR 3071506. S2CID 119247908. Zbl 1281.60060. Wikidata Q56689331.
9. Hairer, Martin (2014). "A theory of regularity structures". Inventiones Mathematicae. 198 (2): 269–504. arXiv:1303.5113. Bibcode:2014InMat.198..269H. doi:10.1007/s00222-014-0505-4. S2CID 119138901.
10. Ledoux, Michel; Qian, Zhongmin; Zhang, Tusheng (December 2002). "Large deviations and support theorem for diffusion processes via rough paths". Stochastic Processes and their Applications. 102 (2): 265–283. doi:10.1016/S0304-4149(02)00176-X. ISSN 1879-209X. Zbl 1075.60510. Wikidata Q56689332.
11. Friz, Peter K.; Victoir, Nicolas (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications (Cambridge Studies in Advanced Mathematics ed.). Cambridge University Press.
12. Lyons, Terry; Qian, Zhongmin (2002). System Control and Rough Paths. Oxford Mathematical Monographs. Oxford: Clarendon Press. doi:10.1093/acprof:oso/9780198506485.001.0001. ISBN 9780198506485. Zbl 1029.93001.
13. Coutin, Laure; Qian, Zhongmin (2002). "Stochastic analysis, rough path analysis and fractional Brownian motions". Probability Theory and Related Fields. 122: 108–140. doi:10.1007/s004400100158. S2CID 120581658.
14. Lyons, Terry; Victoir, Nicholas (2007). "An extension theorem to rough paths". Annales de l'Institut Henri Poincaré C. 24 (5): 835–847. Bibcode:2007AIHPC..24..835L. doi:10.1016/j.anihpc.2006.07.004.
15. Friz, Peter; Gassiat, Paul; Lyons, Terry (2015). "Physical Brownian motion in a magnetic field as a rough path". Transactions of the American Mathematical Society. 367 (11): 7939–7955. arXiv:1302.2531. doi:10.1090/S0002-9947-2015-06272-2. S2CID 59358406.
16. Friz, Peter K.; Victoir, Nicolas (2010). Multidimensional Stochastic Processes as Rough Paths: Theory and Applications (Cambridge Studies in Advanced Mathematics ed.). Cambridge University Press.
17. Cass, Thomas; Friz, Peter (2010). "Densities for rough differential equations under Hörmander's condition". Annals of Mathematics. 171 (3): 2115–2141. arXiv:0708.3730. doi:10.4007/annals.2010.171.2115. S2CID 17276607.
18. Cass, Thomas; Hairer, Martin; Litterer, Christian; Tindel, Samy (2015). "Smoothness of the density for solutions to Gaussian rough differential equations". The Annals of Probability. 43: 188–239. arXiv:1209.3100. doi:10.1214/13-AOP896. S2CID 17308794.
19. Chen, Kuo-Tsai (1954). "Iterated Integrals and Exponential Homomorphisms". Proceedings of the London Mathematical Society. s3-4: 502–512. doi:10.1112/plms/s3-4.1.502.
20. Hambly, Ben; Lyons, Terry (2010). "Uniqueness for the signature of a path of bounded variation and the reduced path group". Annals of Mathematics. 171: 109–167. arXiv:math/0507536. doi:10.4007/annals.2010.171.109. S2CID 15915599.
21. Boedihardjo, Horatio; Geng, Xi; Lyons, Terry; Yang, Danyu (2016). "The signature of a rough path: Uniqueness". Advances in Mathematics. 293: 720–737. arXiv:1406.7871. doi:10.1016/j.aim.2016.02.011. S2CID 3634324.
22. Lyons, Terry; Xu, Weijun (2018). "Inverting the signature of a path". Journal of the European Mathematical Society. 20 (7): 1655–1687. arXiv:1406.7833. doi:10.4171/JEMS/796. S2CID 67847036.
23. Geng, Xi (2016). "Reconstruction for the Signature of a Rough Path". Proceedings of the London Mathematical Society. 114 (3): 495–526. arXiv:1508.06890. doi:10.1112/plms.12013. S2CID 3641736.
24. Cass, Thomas; Driver, Bruce; Lim, Nengli; Litterer, Christian. "On the integration of weakly geometric rough paths". Journal of the Mathematical Society of Japan.
Rough set
In computer science, a rough set, first described by Polish computer scientist Zdzisław I. Pawlak, is a formal approximation of a crisp set (i.e., conventional set) in terms of a pair of sets which give the lower and the upper approximation of the original set. In the standard version of rough set theory (Pawlak 1991), the lower- and upper-approximation sets are crisp sets, but in other variations, the approximating sets may be fuzzy sets.
Definitions
The following section contains an overview of the basic framework of rough set theory, as originally proposed by Zdzisław I. Pawlak, along with some of the key definitions. More formal properties and boundaries of rough sets can be found in Pawlak (1991) and cited references. The initial and basic theory of rough sets is sometimes referred to as "Pawlak Rough Sets" or "classical rough sets", as a means to distinguish from more recent extensions and generalizations.
Information system framework
Let $I=(\mathbb {U} ,\mathbb {A} )$ be an information system (attribute–value system), where $\mathbb {U} $ is a non-empty, finite set of objects (the universe) and $\mathbb {A} $ is a non-empty, finite set of attributes such that $a:\mathbb {U} \rightarrow V_{a}$ for every $a\in \mathbb {A} $. $V_{a}$ is the set of values that attribute $a$ may take. The information table assigns a value $a(x)$ from $V_{a}$ to each attribute $a$ and object $x$ in the universe $\mathbb {U} $.
With any $P\subseteq \mathbb {A} $ there is an associated equivalence relation $\mathrm {IND} (P)$:
$\mathrm {IND} (P)=\left\{(x,y)\in \mathbb {U} ^{2}\mid \forall a\in P,a(x)=a(y)\right\}$
The relation $\mathrm {IND} (P)$ is called the $P$-indiscernibility relation. The partition of $\mathbb {U} $ determined by $\mathrm {IND} (P)$ is the family of all equivalence classes of $\mathrm {IND} (P)$ and is denoted by $\mathbb {U} /\mathrm {IND} (P)$ (or $\mathbb {U} /P$).
If $(x,y)\in \mathrm {IND} (P)$, then $x$ and $y$ are indiscernible (or indistinguishable) by attributes from $P$.
The equivalence classes of the $P$-indiscernibility relation are denoted $[x]_{P}$.
Example: equivalence-class structure
For example, consider the following information table:
Sample Information System

Object     $P_{1}$  $P_{2}$  $P_{3}$  $P_{4}$  $P_{5}$
$O_{1}$    1        2        0        1        1
$O_{2}$    1        2        0        1        1
$O_{3}$    2        0        0        1        0
$O_{4}$    0        0        1        2        1
$O_{5}$    2        1        0        2        1
$O_{6}$    0        0        1        2        2
$O_{7}$    2        0        0        1        0
$O_{8}$    0        1        2        2        1
$O_{9}$    2        1        0        2        2
$O_{10}$   2        0        0        1        0
When the full set of attributes $P=\{P_{1},P_{2},P_{3},P_{4},P_{5}\}$ is considered, we see that we have the following seven equivalence classes:
${\begin{cases}\{O_{1},O_{2}\}\\\{O_{3},O_{7},O_{10}\}\\\{O_{4}\}\\\{O_{5}\}\\\{O_{6}\}\\\{O_{8}\}\\\{O_{9}\}\end{cases}}$
Thus, the two objects within the first equivalence class, $\{O_{1},O_{2}\}$, cannot be distinguished from each other based on the available attributes, and the three objects within the second equivalence class, $\{O_{3},O_{7},O_{10}\}$, cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects.
It is apparent that different attribute subset selections will in general lead to different indiscernibility classes. For example, if attribute $P=\{P_{1}\}$ alone is selected, we obtain the following, much coarser, equivalence-class structure:
${\begin{cases}\{O_{1},O_{2}\}\\\{O_{3},O_{5},O_{7},O_{9},O_{10}\}\\\{O_{4},O_{6},O_{8}\}\end{cases}}$
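The equivalence-class structures above can be computed mechanically by grouping objects on their restriction to the chosen attributes. A short Python sketch over the sample information system (the variable and function names are ours):

```python
from collections import defaultdict

# The sample information system: object -> values of (P1, ..., P5).
table = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def partition(table, attrs):
    """Equivalence classes of IND(P) for attribute indices `attrs` (0-based):
    objects are equivalent iff they agree on every attribute in P."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return {frozenset(c) for c in classes.values()}

full = partition(table, range(5))        # the seven classes listed above
coarse = partition(table, [0])           # P1 alone: three coarser classes
assert len(full) == 7 and frozenset({"O3", "O7", "O10"}) in full
assert len(coarse) == 3 and frozenset({"O4", "O6", "O8"}) in coarse
```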
Definition of a rough set
Let $X\subseteq \mathbb {U} $ be a target set that we wish to represent using attribute subset $P$; that is, we are told that an arbitrary set of objects $X$ comprises a single class, and we wish to express this class (i.e., this subset) using the equivalence classes induced by attribute subset $P$. In general, $X$ cannot be expressed exactly, because the set may include and exclude objects which are indistinguishable on the basis of attributes $P$.
For example, consider the target set $X=\{O_{1},O_{2},O_{3},O_{4}\}$, and let attribute subset $P=\{P_{1},P_{2},P_{3},P_{4},P_{5}\}$, the full available set of features. The set $X$ cannot be expressed exactly, because the objects $O_{3}$, $O_{7}$ and $O_{10}$ are indiscernible under $P$. Thus, there is no way to represent any set $X$ which includes $O_{3}$ but excludes objects $O_{7}$ and $O_{10}$.
However, the target set $X$ can be approximated using only the information contained within $P$ by constructing the $P$-lower and $P$-upper approximations of $X$:
${\underline {P}}X=\{x\mid [x]_{P}\subseteq X\}$
${\overline {P}}X=\{x\mid [x]_{P}\cap X\neq \emptyset \}$
Lower approximation and positive region
The $P$-lower approximation, or positive region, is the union of all equivalence classes in $[x]_{P}$ which are contained by (i.e., are subsets of) the target set – in the example, ${\underline {P}}X=\{O_{1},O_{2}\}\cup \{O_{4}\}$, the union of the two equivalence classes in $[x]_{P}$ which are contained in the target set. The lower approximation is the complete set of objects in $\mathbb {U} /P$ that can be positively (i.e., unambiguously) classified as belonging to target set $X$.
Upper approximation and negative region
The $P$-upper approximation is the union of all equivalence classes in $[x]_{P}$ which have non-empty intersection with the target set – in the example, ${\overline {P}}X=\{O_{1},O_{2}\}\cup \{O_{4}\}\cup \{O_{3},O_{7},O_{10}\}$, the union of the three equivalence classes in $[x]_{P}$ that have non-empty intersection with the target set. The upper approximation is the complete set of objects in $\mathbb {U} /P$ that cannot be positively (i.e., unambiguously) classified as belonging to the complement (${\overline {X}}$) of the target set $X$. In other words, the upper approximation is the complete set of objects that are possibly members of the target set $X$.
The set $\mathbb {U} -{\overline {P}}X$ therefore represents the negative region, containing the set of objects that can be definitely ruled out as members of the target set.
Boundary region
The boundary region, given by set difference ${\overline {P}}X-{\underline {P}}X$, consists of those objects that can neither be ruled in nor ruled out as members of the target set $X$.
In summary, the lower approximation of a target set is a conservative approximation consisting of only those objects which can positively be identified as members of the set. (These objects have no indiscernible "clones" which are excluded by the target set.) The upper approximation is a liberal approximation which includes all objects that might be members of the target set. (Some objects in the upper approximation may not be members of the target set.) From the perspective of $\mathbb {U} /P$, the lower approximation contains objects that are members of the target set with certainty (probability = 1), while the upper approximation contains objects that are members of the target set with non-zero probability (probability > 0).
The rough set
The tuple $\langle {\underline {P}}X,{\overline {P}}X\rangle $ composed of the lower and upper approximation is called a rough set; thus, a rough set is composed of two crisp sets, one representing a lower boundary of the target set $X$, and the other representing an upper boundary of the target set $X$.
The accuracy of the rough-set representation of the set $X$ can be given (Pawlak 1991) by the following:
$\alpha _{P}(X)={\frac {\left|{\underline {P}}X\right|}{\left|{\overline {P}}X\right|}}$
That is, the accuracy of the rough set representation of $X$, $\alpha _{P}(X)$, $0\leq \alpha _{P}(X)\leq 1$, is the ratio of the number of objects which can positively be placed in $X$ to the number of objects that can possibly be placed in $X$ – this provides a measure of how closely the rough set is approximating the target set. Clearly, when the upper and lower approximations are equal (i.e., boundary region empty), then $\alpha _{P}(X)=1$, and the approximation is perfect; at the other extreme, whenever the lower approximation is empty, the accuracy is zero (regardless of the size of the upper approximation).
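For the running example ($X=\{O_{1},O_{2},O_{3},O_{4}\}$ under the full attribute set), the approximations, boundary region and accuracy $\alpha _{P}(X)=3/6=0.5$ follow directly from the definitions. A short Python sketch (the string encoding of the rows is ours):

```python
from collections import defaultdict

# Sample information system; each string lists the values of P1..P5.
rows = {"O1": "12011", "O2": "12011", "O3": "20010", "O4": "00121",
        "O5": "21021", "O6": "00122", "O7": "20010", "O8": "01221",
        "O9": "21022", "O10": "20010"}

classes = defaultdict(set)
for obj, row in rows.items():
    classes[row].add(obj)
eq = {obj: classes[row] for obj, row in rows.items()}   # x -> [x]_P

X = {"O1", "O2", "O3", "O4"}
lower = {x for x in rows if eq[x] <= X}                 # [x]_P ⊆ X
upper = {x for x in rows if eq[x] & X}                  # [x]_P ∩ X ≠ ∅
boundary = upper - lower
accuracy = len(lower) / len(upper)

assert lower == {"O1", "O2", "O4"}
assert boundary == {"O3", "O7", "O10"}
assert accuracy == 0.5
```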
Objective analysis
Rough set theory is one of many methods that can be employed to analyse uncertain (including vague) systems, although less common than more traditional methods of probability, statistics, entropy and Dempster–Shafer theory. However, a key difference, and a unique strength, of using classical rough set theory is that it provides an objective form of analysis (Pawlak et al. 1995). Unlike other methods, such as those given above, classical rough set analysis requires no additional information, external parameters, models, functions, grades or subjective interpretations to determine set membership – instead it only uses the information presented within the given data (Düntsch and Gediga 1995). More recent adaptations of rough set theory, such as dominance-based, decision-theoretic and fuzzy rough sets, have introduced more subjectivity to the analysis.
Definability
In general, the upper and lower approximations are not equal; in such cases, we say that target set $X$ is undefinable or roughly definable on attribute set $P$. When the upper and lower approximations are equal (i.e., the boundary is empty), ${\overline {P}}X={\underline {P}}X$, then the target set $X$ is definable on attribute set $P$. We can distinguish the following special cases of undefinability:
• Set $X$ is internally undefinable if ${\underline {P}}X=\emptyset $ and ${\overline {P}}X\neq \mathbb {U} $. This means that on attribute set $P$, there are no objects which we can be certain belong to target set $X$, but there are objects which we can definitively exclude from set $X$.
• Set $X$ is externally undefinable if ${\underline {P}}X\neq \emptyset $ and ${\overline {P}}X=\mathbb {U} $. This means that on attribute set $P$, there are objects which we can be certain belong to target set $X$, but there are no objects which we can definitively exclude from set $X$.
• Set $X$ is totally undefinable if ${\underline {P}}X=\emptyset $ and ${\overline {P}}X=\mathbb {U} $. This means that on attribute set $P$, there are no objects which we can be certain belong to target set $X$, and there are no objects which we can definitively exclude from set $X$. Thus, on attribute set $P$, we cannot decide whether any object is, or is not, a member of $X$.
Reduct and core
An interesting question is whether there are attributes in the information system (attribute–value table) which are more important to the knowledge represented in the equivalence class structure than other attributes. Often, we wonder whether there is a subset of attributes which can, by itself, fully characterize the knowledge in the database; such an attribute set is called a reduct.
Formally, a reduct is a subset of attributes $\mathrm {RED} \subseteq P$ such that
• $[x]_{\mathrm {RED} }$ = $[x]_{P}$, that is, the equivalence classes induced by the reduced attribute set $\mathrm {RED} $ are the same as the equivalence class structure induced by the full attribute set $P$.
• the attribute set $\mathrm {RED} $ is minimal, in the sense that $[x]_{(\mathrm {RED} -\{a\})}\neq [x]_{P}$ for any attribute $a\in \mathrm {RED} $; in other words, no attribute can be removed from set $\mathrm {RED} $ without changing the equivalence classes $[x]_{P}$.
A reduct can be thought of as a sufficient set of features – sufficient, that is, to represent the category structure. In the example table above, attribute set $\{P_{3},P_{4},P_{5}\}$ is a reduct – the information system projected on just these attributes possesses the same equivalence class structure as that expressed by the full attribute set:
${\begin{cases}\{O_{1},O_{2}\}\\\{O_{3},O_{7},O_{10}\}\\\{O_{4}\}\\\{O_{5}\}\\\{O_{6}\}\\\{O_{8}\}\\\{O_{9}\}\end{cases}}$
Attribute set $\{P_{3},P_{4},P_{5}\}$ is a reduct because eliminating any of these attributes causes a collapse of the equivalence-class structure, with the result that $[x]_{\mathrm {RED} }\neq [x]_{P}$.
The reduct of an information system is not unique: there may be many subsets of attributes which preserve the equivalence-class structure (i.e., the knowledge) expressed in the information system. In the example information system above, another reduct is $\{P_{1},P_{2},P_{5}\}$, producing the same equivalence-class structure as $[x]_{P}$.
The set of attributes which is common to all reducts is called the core: the core is the set of attributes which is possessed by every reduct, and therefore consists of attributes which cannot be removed from the information system without causing collapse of the equivalence-class structure. The core may be thought of as the set of necessary attributes – necessary, that is, for the category structure to be represented. In the example, the only such attribute is $\{P_{5}\}$; any one of the other attributes can be removed singly without damaging the equivalence-class structure, and hence these are all dispensable. However, removing $\{P_{5}\}$ by itself does change the equivalence-class structure, and thus $\{P_{5}\}$ is the indispensable attribute of this information system, and hence the core.
It is possible for the core to be empty, which means that there is no indispensable attribute: any single attribute in such an information system can be deleted without altering the equivalence-class structure. In such cases, there is no essential or necessary attribute which is required for the class structure to be represented.
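For small tables, reducts and the core can be found by exhaustive search: an attribute subset is a reduct if it induces the same partition as the full attribute set and no proper subset of it does. The sketch below (a brute-force illustration only; the search is exponential in the number of attributes, and the table again reconstructs the article's sample information system) computes all reducts and their intersection, the core:

```python
from itertools import combinations

# Sketch: brute-force computation of all reducts and the core.
# TABLE reconstructs the article's sample information system.
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}
ALL = range(5)   # attribute indices for P1..P5

def partition(attrs):
    classes = {}
    for obj, row in TABLE.items():
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(obj)
    return frozenset(frozenset(c) for c in classes.values())

FULL = partition(ALL)

def is_reduct(attrs):
    if partition(attrs) != FULL:
        return False   # does not preserve the equivalence-class structure
    # minimality: dropping any single attribute must change the partition
    return all(partition([b for b in attrs if b != a]) != FULL
               for a in attrs)

reducts = [set(c)
           for k in range(1, 6)
           for c in combinations(ALL, k)
           if is_reduct(c)]
core = set.intersection(*reducts)   # attributes present in every reduct
```

On this table the search finds four reducts, including $\{P_{3},P_{4},P_{5}\}$ and $\{P_{1},P_{2},P_{5}\}$ named above, and their common core is $\{P_{5}\}$, matching the discussion.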
Attribute dependency
One of the most important aspects of database analysis or data acquisition is the discovery of attribute dependencies; that is, we wish to discover which variables are strongly related to which other variables. Generally, it is these strong relationships that will warrant further investigation, and that will ultimately be of use in predictive modeling.
In rough set theory, the notion of dependency is defined very simply. Let us take two (disjoint) sets of attributes, set $P$ and set $Q$, and inquire what degree of dependency obtains between them. Each attribute set induces an (indiscernibility) equivalence class structure, the equivalence classes induced by $P$ given by $[x]_{P}$, and the equivalence classes induced by $Q$ given by $[x]_{Q}$.
Let $[x]_{Q}=\{Q_{1},Q_{2},Q_{3},\dots ,Q_{N}\}$, where $Q_{i}$ is a given equivalence class from the equivalence-class structure induced by attribute set $Q$. Then, the dependency of attribute set $Q$ on attribute set $P$, $\gamma _{P}(Q)$, is given by
$\gamma _{P}(Q)={\frac {\sum _{i=1}^{N}\left|{\underline {P}}Q_{i}\right|}{\left|\mathbb {U} \right|}}\leq 1$
That is, for each equivalence class $Q_{i}$ in $[x]_{Q}$, we add up the size of its lower approximation by the attributes in $P$, i.e., ${\underline {P}}Q_{i}$. The size of this lower approximation (as above, for an arbitrary set $X$) is the number of objects which on attribute set $P$ can be positively identified as belonging to target set $Q_{i}$. Added across all equivalence classes in $[x]_{Q}$, the numerator above represents the total number of objects which – based on attribute set $P$ – can be positively categorized according to the classification induced by attributes $Q$. The dependency ratio therefore expresses the proportion (within the entire universe) of such classifiable objects. The dependency $\gamma _{P}(Q)$ "can be interpreted as a proportion of such objects in the information system for which it suffices to know the values of attributes in $P$ to determine the values of attributes in $Q$".
Another, intuitive, way to consider dependency is to take the partition induced by $Q$ as the target class $C$, and consider $P$ as the attribute set we wish to use in order to "re-construct" the target class $C$. If $P$ can completely reconstruct $C$, then $Q$ depends totally upon $P$; if $P$ results in a poor and perhaps a random reconstruction of $C$, then $Q$ does not depend upon $P$ at all.
Thus, this measure of dependency expresses the degree of functional (i.e., deterministic) dependency of attribute set $Q$ on attribute set $P$; it is not symmetric. The relationship of this notion of attribute dependency to more traditional information-theoretic (i.e., entropic) notions of attribute dependence has been discussed in a number of sources (e.g., Pawlak, Wong, & Ziarko 1988; Yao & Yao 2002; Wong, Ziarko, & Ye 1986, Quafafou & Boussouf 2000).
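The dependency degree follows directly from the lower approximations. The sketch below (Python, with the table again reconstructing the article's sample information system; attribute indices are 0-based) computes $\gamma _{P}(Q)$ and contrasts a total dependency with a partial one:

```python
# Sketch: the dependency degree gamma_P(Q) on the sample table.
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def equivalence_classes(attrs):
    classes = {}
    for obj, row in TABLE.items():
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(obj)
    return list(classes.values())

def gamma(p_attrs, q_attrs):
    """Sum of |P-lower approximation of each Q-class| over |U|:
    the fraction of objects whose Q-class is determined by P."""
    q_classes = equivalence_classes(q_attrs)
    p_classes = equivalence_classes(p_attrs)
    positive = sum(len(p) for q in q_classes for p in p_classes if p <= q)
    return positive / len(TABLE)

total = gamma([0, 1], [3])   # P4 depends totally on {P1, P2}: gamma = 1.0
partial = gamma([2], [3])    # but only partially on {P3}: gamma = 0.3
```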
Rule extraction
The category representations discussed above are all extensional in nature; that is, a category or complex class is simply the sum of all its members. To represent a category is, then, just to be able to list or identify all the objects belonging to that category. However, extensional category representations have very limited practical use, because they provide no insight for deciding whether novel (never-before-seen) objects are members of the category.
What is generally desired is an intensional description of the category, a representation of the category based on a set of rules that describe the scope of the category. The choice of such rules is not unique, and therein lies the issue of inductive bias. See Version space and Model selection for more about this issue.
There are a few rule-extraction methods. We will start with a rule-extraction procedure based on Ziarko & Shan (1995).
Decision matrices
Let us say that we wish to find the minimal set of consistent rules (logical implications) that characterize our sample system. For a set of condition attributes ${\mathcal {P}}=\{P_{1},P_{2},P_{3},\dots ,P_{n}\}$ and a decision attribute $Q,Q\notin {\mathcal {P}}$, these rules should have the form $P_{i}^{a}P_{j}^{b}\dots P_{k}^{c}\to Q^{d}$, or, spelled out,
$(P_{i}=a)\land (P_{j}=b)\land \dots \land (P_{k}=c)\to (Q=d)$
where $\{a,b,c,\dots \}$ are legitimate values from the domains of their respective attributes. This is a form typical of association rules, and the number of items in $\mathbb {U} $ which match the condition/antecedent is called the support for the rule. The method for extracting such rules given in Ziarko & Shan (1995) is to form a decision matrix corresponding to each individual value $d$ of decision attribute $Q$. Informally, the decision matrix for value $d$ of decision attribute $Q$ lists all attribute–value pairs that differ between objects having $Q=d$ and $Q\neq d$.
This is best explained by example (which also avoids a lot of notation). Consider the table above, and let $P_{4}$ be the decision variable (i.e., the variable on the right side of the implications) and let $\{P_{1},P_{2},P_{3}\}$ be the condition variables (on the left side of the implication). We note that the decision variable $P_{4}$ takes on two different values, namely $\{1,2\}$. We treat each case separately.
First, we look at the case $P_{4}=1$, and we divide up $\mathbb {U} $ into objects that have $P_{4}=1$ and those that have $P_{4}\neq 1$. (Note that objects with $P_{4}\neq 1$ in this case are simply the objects that have $P_{4}=2$, but in general, $P_{4}\neq 1$ would include all objects having any value for $P_{4}$ other than $P_{4}=1$, and there may be several such classes of objects (for example, those having $P_{4}=2,3,4,etc.$).) In this case, the objects having $P_{4}=1$ are $\{O_{1},O_{2},O_{3},O_{7},O_{10}\}$ while the objects which have $P_{4}\neq 1$ are $\{O_{4},O_{5},O_{6},O_{8},O_{9}\}$. The decision matrix for $P_{4}=1$ lists all the differences between the objects having $P_{4}=1$ and those having $P_{4}\neq 1$; that is, the decision matrix lists all the differences between $\{O_{1},O_{2},O_{3},O_{7},O_{10}\}$ and $\{O_{4},O_{5},O_{6},O_{8},O_{9}\}$. We put the "positive" objects ($P_{4}=1$) as the rows, and the "negative" objects $P_{4}\neq 1$ as the columns.
Decision matrix for $P_{4}=1$
Object$O_{4}$$O_{5}$$O_{6}$$O_{8}$$O_{9}$
$O_{1}$ $P_{1}^{1},P_{2}^{2},P_{3}^{0}$$P_{1}^{1},P_{2}^{2}$$P_{1}^{1},P_{2}^{2},P_{3}^{0}$$P_{1}^{1},P_{2}^{2},P_{3}^{0}$$P_{1}^{1},P_{2}^{2}$
$O_{2}$ $P_{1}^{1},P_{2}^{2},P_{3}^{0}$$P_{1}^{1},P_{2}^{2}$$P_{1}^{1},P_{2}^{2},P_{3}^{0}$$P_{1}^{1},P_{2}^{2},P_{3}^{0}$$P_{1}^{1},P_{2}^{2}$
$O_{3}$ $P_{1}^{2},P_{3}^{0}$$P_{2}^{0}$$P_{1}^{2},P_{3}^{0}$$P_{1}^{2},P_{2}^{0},P_{3}^{0}$$P_{2}^{0}$
$O_{7}$ $P_{1}^{2},P_{3}^{0}$$P_{2}^{0}$$P_{1}^{2},P_{3}^{0}$$P_{1}^{2},P_{2}^{0},P_{3}^{0}$$P_{2}^{0}$
$O_{10}$ $P_{1}^{2},P_{3}^{0}$$P_{2}^{0}$$P_{1}^{2},P_{3}^{0}$$P_{1}^{2},P_{2}^{0},P_{3}^{0}$$P_{2}^{0}$
To read this decision matrix, look, for example, at the intersection of row $O_{3}$ and column $O_{6}$, showing $P_{1}^{2},P_{3}^{0}$ in the cell. This means that with regard to decision value $P_{4}=1$, object $O_{3}$ differs from object $O_{6}$ on attributes $P_{1}$ and $P_{3}$, and the particular values on these attributes for the positive object $O_{3}$ are $P_{1}=2$ and $P_{3}=0$. This tells us that the correct classification of $O_{3}$ as belonging to decision class $P_{4}=1$ rests on attributes $P_{1}$ and $P_{3}$; although one or the other might be dispensable, we know that at least one of these attributes is indispensable.
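A decision matrix of this kind is straightforward to build programmatically. The sketch below (Python; attribute indices are 0-based, so $P_{1}$ is index 0, and the values reconstruct the article's sample information system) records, for each positive/negative pair of objects, the attribute–value pairs on which they differ:

```python
# Sketch: building the decision matrix for P4 = 1.
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def decision_matrix(cond_attrs, dec_attr, dec_value):
    """Map each (positive, negative) object pair to the set of
    (attribute, value) pairs -- values taken from the positive
    object -- on which the two objects differ."""
    pos = [o for o, r in TABLE.items() if r[dec_attr] == dec_value]
    neg = [o for o, r in TABLE.items() if r[dec_attr] != dec_value]
    return {(p, n): {(a, TABLE[p][a]) for a in cond_attrs
                     if TABLE[p][a] != TABLE[n][a]}
            for p in pos for n in neg}

M = decision_matrix([0, 1, 2], dec_attr=3, dec_value=1)
cell = M[("O3", "O6")]   # {(0, 2), (2, 0)}: written P1^2, P3^0 above
```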
Next, from each decision matrix we form a set of Boolean expressions, one expression for each row of the matrix. The items within each cell are aggregated disjunctively, and the individual cells are then aggregated conjunctively. Thus, for the above table we have the following five Boolean expressions:
${\begin{cases}(P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\\(P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\\(P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})\\(P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})\\(P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})\end{cases}}$
Each statement here is essentially a highly specific (probably too specific) rule governing the membership in class $P_{4}=1$ of the corresponding object. For example, the last statement, corresponding to object $O_{10}$, states that all the following must be satisfied:
1. Either $P_{1}$ must have value 2, or $P_{3}$ must have value 0, or both.
2. $P_{2}$ must have value 0.
3. Either $P_{1}$ must have value 2, or $P_{3}$ must have value 0, or both.
4. Either $P_{1}$ must have value 2, or $P_{2}$ must have value 0, or $P_{3}$ must have value 0, or any combination thereof.
5. $P_{2}$ must have value 0.
It is clear that there is a large amount of redundancy here, and the next step is to simplify using traditional Boolean algebra. The statement $(P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2}\lor P_{3}^{0})\land (P_{1}^{1}\lor P_{2}^{2})$ corresponding to objects $\{O_{1},O_{2}\}$ simplifies to $P_{1}^{1}\lor P_{2}^{2}$, which yields the implication
$(P_{1}=1)\lor (P_{2}=2)\to (P_{4}=1)$
Likewise, the statement $(P_{1}^{2}\lor P_{3}^{0})\land (P_{2}^{0})\land (P_{1}^{2}\lor P_{3}^{0})\land (P_{1}^{2}\lor P_{2}^{0}\lor P_{3}^{0})\land (P_{2}^{0})$ corresponding to objects $\{O_{3},O_{7},O_{10}\}$ simplifies to $P_{1}^{2}P_{2}^{0}\lor P_{3}^{0}P_{2}^{0}$. This gives us the implication
$(P_{1}=2\land P_{2}=0)\lor (P_{3}=0\land P_{2}=0)\to (P_{4}=1)$
The above implications can also be written as the following rule set:
${\begin{cases}(P_{1}=1)\to (P_{4}=1)\\(P_{2}=2)\to (P_{4}=1)\\(P_{1}=2)\land (P_{2}=0)\to (P_{4}=1)\\(P_{3}=0)\land (P_{2}=0)\to (P_{4}=1)\end{cases}}$
It can be noted that each of the first two rules has a support of 1 (i.e., the antecedent matches two objects), while each of the last two rules has a support of 2. To finish writing the rule set for this knowledge system, the same procedure as above (starting with writing a new decision matrix) should be followed for the case of $P_{4}=2$, thus yielding a new set of implications for that decision value (i.e., a set of implications with $P_{4}=2$ as the consequent). In general, the procedure will be repeated for each possible value of the decision variable.
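The simplification step can also be sketched in code. Each row is a conjunction (CNF) of cells, each cell a disjunction of attribute–value literals; absorption removes redundant cells, and distributing AND over OR yields the minimal disjunctive form. This is a hand-rolled illustration of the Boolean algebra involved, not the procedure of any particular rough-set package:

```python
from itertools import product

def cnf_to_minimal_dnf(cnf):
    """cnf: iterable of clauses, each a set of (attribute, value)
    literals.  Returns the minimal DNF as a set of frozenset terms."""
    clauses = {frozenset(c) for c in cnf}
    # absorption: (A or B) absorbs (A or B or C)
    clauses = {c for c in clauses if not any(d < c for d in clauses)}
    # distribution: pick one literal from every remaining clause
    terms = {frozenset(picks) for picks in product(*clauses)}
    # drop contradictory terms (two values for the same attribute)
    terms = {t for t in terms if len({a for a, _ in t}) == len(t)}
    # keep only minimal terms (drop supersets of another term)
    return {t for t in terms if not any(s < t for s in terms)}

# Row for O1/O2 above: (P1^1 v P2^2 v P3^0) ^ (P1^1 v P2^2) ^ ...
a, b, c = ("P1", 1), ("P2", 2), ("P3", 0)
row_o1 = [{a, b, c}, {a, b}, {a, b, c}, {a, b, c}, {a, b}]
dnf_o1 = cnf_to_minimal_dnf(row_o1)    # simplifies to P1^1 v P2^2

# Row for O3/O7/O10 above.
d, e = ("P1", 2), ("P2", 0)
row_o3 = [{d, c}, {e}, {d, c}, {d, e, c}, {e}]
dnf_o3 = cnf_to_minimal_dnf(row_o3)    # P1^2 P2^0  v  P3^0 P2^0
```

The two results reproduce the implications derived above for objects $\{O_{1},O_{2}\}$ and $\{O_{3},O_{7},O_{10}\}$.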
LERS rule induction system
The data system LERS (Learning from Examples based on Rough Sets) (Grzymala-Busse 1997) can induce rules from inconsistent data, i.e., data with conflicting objects. Two objects are conflicting when they are characterized by the same values of all attributes, but they belong to different concepts (classes). LERS uses rough set theory to compute lower and upper approximations for concepts involved in conflicts with other concepts.
Rules induced from the lower approximation of the concept certainly describe the concept, hence such rules are called certain. On the other hand, rules induced from the upper approximation of the concept describe the concept possibly, so these rules are called possible. For rule induction LERS uses three algorithms: LEM1, LEM2, and IRIM.
The LEM2 algorithm of LERS is frequently used for rule induction and is used not only in LERS but also in other systems, e.g., in RSES (Bazan et al. 2004). LEM2 explores the search space of attribute–value pairs. Its input data set is a lower or upper approximation of a concept, so its input data set is always consistent. In general, LEM2 computes a local covering and then converts it into a rule set. We will quote a few definitions to describe the LEM2 algorithm.
The LEM2 algorithm is based on an idea of an attribute–value pair block. Let $X$ be a nonempty lower or upper approximation of a concept represented by a decision-value pair $(d,w)$. Set $X$ depends on a set $T$ of attribute–value pairs $t=(a,v)$ if and only if
$\emptyset \neq [T]=\bigcap _{t\in T}[t]\subseteq X.$
Set $T$ is a minimal complex of $X$ if and only if $X$ depends on $T$ and no proper subset $S$ of $T$ exists such that $X$ depends on $S$. Let $\mathbb {T} $ be a nonempty collection of nonempty sets of attribute–value pairs. Then $\mathbb {T} $ is a local covering of $X$ if and only if the following three conditions are satisfied:
• each member $T$ of $\mathbb {T} $ is a minimal complex of $X$;
• $\bigcup _{T\in \mathbb {T} }[T]=X$;
• $\mathbb {T} $ is minimal, i.e., $\mathbb {T} $ has the smallest possible number of members.
For our sample information system, LEM2 will induce the following rules:
${\begin{cases}(P_{1},1)\to (P_{4},1)\\(P_{5},0)\to (P_{4},1)\\(P_{1},0)\to (P_{4},2)\\(P_{2},1)\to (P_{4},2)\end{cases}}$
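These conditions can be checked mechanically. The sketch below (Python; the table reconstructs the article's sample information system, with attributes indexed from 0) verifies that the two rules for $(P_{4},1)$ are minimal complexes whose blocks together cover the concept:

```python
# Sketch: checking the LEM2 conditions for the (P4, 1) rules above.
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def block(pair):
    """[t]: objects matching one attribute-value pair (P1 is index 0)."""
    a, v = pair
    return {o for o, row in TABLE.items() if row[a] == v}

def depends_on(X, T):
    """X depends on T: the blocks of T have a nonempty intersection
    that is contained in X."""
    inter = set(TABLE)
    for t in T:
        inter &= block(t)
    return bool(inter) and inter <= X

def is_minimal_complex(X, T):
    return depends_on(X, T) and not any(depends_on(X, T - {t}) for t in T)

# Concept (P4, 1); the table is consistent, so its lower approximation
# equals the concept itself.
X = {o for o, row in TABLE.items() if row[3] == 1}
t1, t2 = (0, 1), (4, 0)              # the pairs (P1, 1) and (P5, 0)
is_covering = (is_minimal_complex(X, {t1}) and is_minimal_complex(X, {t2})
               and block(t1) | block(t2) == X)
```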
Other rule-learning methods can be found, e.g., in Pawlak (1991), Stefanowski (1998), Bazan et al. (2004), etc.
Incomplete data
Rough set theory is useful for rule induction from incomplete data sets. Using this approach we can distinguish between three types of missing attribute values: lost values (the values that were recorded but currently are unavailable), attribute-concept values (these missing attribute values may be replaced by any attribute value limited to the same concept), and "do not care" conditions (the original values were irrelevant). A concept (class) is a set of all objects classified (or diagnosed) the same way.
Two special data sets with missing attribute values were extensively studied: in the first case, all missing attribute values were lost (Stefanowski and Tsoukias, 2001), in the second case, all missing attribute values were "do not care" conditions (Kryszkiewicz, 1999).
In the attribute-concept value interpretation of a missing attribute value, the missing value may be replaced by any value of the attribute domain restricted to the concept to which the object with the missing value belongs (Grzymala-Busse and Grzymala-Busse, 2007). For example, suppose that for a patient the value of the attribute Temperature is missing, that this patient is sick with flu, and that all remaining patients sick with flu have the value high or very-high for Temperature; under the attribute-concept value interpretation, the missing value is then replaced by high and very-high. Additionally, the characteristic relation (see, e.g., Grzymala-Busse and Grzymala-Busse, 2007) makes it possible to process data sets with all three kinds of missing attribute values at the same time: lost values, "do not care" conditions, and attribute-concept values.
Applications
Rough set methods can be applied as a component of hybrid solutions in machine learning and data mining. They have been found to be particularly useful for rule induction and feature selection (semantics-preserving dimensionality reduction). Rough set-based data analysis methods have been successfully applied in bioinformatics, economics and finance, medicine, multimedia, web and text mining, signal and image processing, software engineering, robotics, and engineering (e.g. power systems and control engineering). More recently, the three regions of rough sets have been interpreted as regions of acceptance, rejection and deferment, leading to a three-way decision-making approach that can potentially open up interesting future applications.
History
The idea of a rough set was proposed by Pawlak (1981) as a new mathematical tool to deal with vague concepts. Comer, Grzymala-Busse, Iwinski, Nieminen, Novotny, Pawlak, Obtulowicz, and Pomykala have studied algebraic properties of rough sets. Different algebraic semantics have been developed by P. Pagliani, I. Düntsch, M. K. Chakraborty, M. Banerjee and A. Mani; these have been extended to more generalized rough sets by G. Cattaneo and A. Mani, in particular. Rough sets can be used to represent ambiguity, vagueness and general uncertainty.
Extensions and generalizations
Since the development of rough sets, extensions and generalizations have continued to evolve. Initial developments focused on the relationship - both similarities and differences - with fuzzy sets. While some literature contends these concepts are different, other literature considers that rough sets are a generalization of fuzzy sets - as represented through either fuzzy rough sets or rough fuzzy sets. Pawlak (1995) considered that fuzzy and rough sets should be treated as being complementary to each other, addressing different aspects of uncertainty and vagueness.
Three notable extensions of classical rough sets are:
• Dominance-based rough set approach (DRSA) is an extension of rough set theory for multi-criteria decision analysis (MCDA), introduced by Greco, Matarazzo and Słowiński (2001). The main change in this extension of classical rough sets is the substitution of the indiscernibility relation by a dominance relation, which permits the formalism to deal with inconsistencies typical in consideration of criteria and preference-ordered decision classes.
• Decision-theoretic rough sets (DTRS) is a probabilistic extension of rough set theory introduced by Yao, Wong, and Lingras (1990). It utilizes a Bayesian decision procedure for minimum risk decision making. Elements are included into the lower and upper approximations based on whether their conditional probability is above thresholds $\textstyle \alpha $ and $\textstyle \beta $. These upper and lower thresholds determine region inclusion for elements. This model is unique and powerful since the thresholds themselves are calculated from a set of six loss functions representing classification risks.
• Game-theoretic rough sets (GTRS) is a game theory-based extension of rough set that was introduced by Herbert and Yao (2011). It utilizes a game-theoretic environment to optimize certain criteria of rough sets based classification or decision making in order to obtain effective region sizes.
Rough membership
Rough sets can be also defined, as a generalisation, by employing a rough membership function instead of objective approximation. The rough membership function expresses a conditional probability that $x$ belongs to $X$ given $\textstyle \mathbb {R} $. This can be interpreted as a degree that $x$ belongs to $X$ in terms of information about $x$ expressed by $\textstyle \mathbb {R} $.
Rough membership primarily differs from the fuzzy membership in that the membership of union and intersection of sets cannot, in general, be computed from their constituent membership as is the case of fuzzy sets. In this, rough membership is a generalization of fuzzy membership. Furthermore, the rough membership function is grounded more in probability than the conventionally held concepts of the fuzzy membership function.
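In the standard formulation, the rough membership of $x$ in $X$ is the fraction of $x$'s indiscernibility class that falls inside $X$: $\mu _{X}(x)=\left|[x]\cap X\right|/\left|[x]\right|$. A sketch in Python, with the table again reconstructing the article's sample information system:

```python
# Sketch: the rough membership function on the sample table.
TABLE = {
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def rough_membership(obj, attrs, target):
    """mu_X(x) = |[x] & X| / |[x]|, where [x] is the indiscernibility
    class of obj on the given attribute indices (P1 is index 0)."""
    cls = {o for o, row in TABLE.items()
           if all(row[a] == TABLE[obj][a] for a in attrs)}
    return len(cls & target) / len(cls)

X = {o for o, row in TABLE.items() if row[3] == 1}   # concept P4 = 1
mu = rough_membership("O1", [2], X)   # on P3 alone: 5 of 7 classmates in X
```

Values strictly between 0 and 1, as here, correspond to objects in the boundary region; 1 corresponds to the lower approximation.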
Other generalizations
Several generalizations of rough sets have been introduced, studied and applied to solving problems. Here are some of these generalizations:
• rough multisets (Grzymala-Busse, 1987)
• fuzzy rough sets extend the rough set concept through the use of fuzzy equivalence classes (Nakamura, 1988)
• alpha rough set theory (α-RST) - a generalization of rough set theory that allows approximation using fuzzy concepts (Quafafou, 2000)
• intuitionistic fuzzy rough sets (Cornelis, De Cock and Kerre, 2003)
• generalized rough fuzzy sets (Feng, 2010)
• rough intuitionistic fuzzy sets (Thomas and Nair, 2011)
• soft rough fuzzy sets and soft fuzzy rough sets (Meng, Zhang and Qin, 2011)
• composite rough sets (Zhang, Li and Chen, 2014)
See also
• Algebraic semantics
• Alternative set theory
• Analog computer
• Description logic
• Fuzzy logic
• Fuzzy set theory
• Granular computing
• Near sets
• Rough fuzzy hybridization
• Type-2 fuzzy sets and systems
• Decision-theoretic rough sets
• Version space
• Dominance-based rough set approach
References
• Pawlak, Zdzisław (1982). "Rough sets". International Journal of Parallel Programming. 11 (5): 341–356. doi:10.1007/BF01001956. S2CID 9240608.
• Bazan, Jan; Szczuka, Marcin; Wojna, Arkadiusz; Wojnarski, Marcin (2004). On the evolution of rough set exploration system. pp. 592–601. CiteSeerX 10.1.1.60.3957. doi:10.1007/978-3-540-25929-9_73. ISBN 978-3-540-22117-3.
• Dubois, D.; Prade, H. (1990). "Rough fuzzy sets and fuzzy rough sets". International Journal of General Systems. 17 (2–3): 191–209. doi:10.1080/03081079008935107.
• Herbert, J. P.; Yao, J. T. (2011). "Game-theoretic Rough Sets". Fundamenta Informaticae. 108 (3–4): 267–286. doi:10.3233/FI-2011-423.
• Greco, Salvatore; Matarazzo, Benedetto; Słowiński, Roman (2001). "Rough sets theory for multicriteria decision analysis". European Journal of Operational Research. 129 (1): 1–47. doi:10.1016/S0377-2217(00)00167-3.
• Grzymala-Busse, Jerzy (1997). "A new version of the rule induction system LERS". Fundamenta Informaticae. 31: 27–39. doi:10.3233/FI-1997-3113.
• Grzymala-Busse, Jerzy; Grzymala-Busse, Witold (2007). An experimental comparison of three rough set approaches to missing attribute values. pp. 31–50. doi:10.1007/978-3-540-71200-8_3. ISBN 978-3-540-71198-8.
• Kryszkiewicz, Marzena (1999). "Rules in incomplete systems". Information Sciences. 113 (3–4): 271–292. doi:10.1016/S0020-0255(98)10065-8.
• Pawlak, Zdzisław Rough Sets Research Report PAS 431, Institute of Computer Science, Polish Academy of Sciences (1981)
• Pawlak, Zdzisław; Wong, S. K. M.; Ziarko, Wojciech (1988). "Rough sets: Probabilistic versus deterministic approach". International Journal of Man-Machine Studies. 29: 81–95. doi:10.1016/S0020-7373(88)80032-4.
• Pawlak, Zdzisław (1991). Rough Sets: Theoretical Aspects of Reasoning About Data. Dordrecht: Kluwer Academic Publishing. ISBN 978-0-7923-1472-1.
• Slezak, Dominik; Wroblewski, Jakub; Eastwood, Victoria; Synak, Piotr (2008). "Brighthouse: an analytic data warehouse for ad-hoc queries" (PDF). Proceedings of the VLDB Endowment. 1 (2): 1337–1345. doi:10.14778/1454159.1454174.
• Stefanowski, Jerzy (1998). "On rough set based approaches to induction of decision rules". In Polkowski, Lech; Skowron, Andrzej (eds.). Rough Sets in Knowledge Discovery 1: Methodology and Applications. Heidelberg: Physica-Verlag. pp. 500–529.
• Stefanowski, Jerzy; Tsoukias, Alexis (2001). "Incomplete information tables and rough classification". Computational Intelligence. 17 (3): 545–566. doi:10.1111/0824-7935.00162. S2CID 22795201.
• Wong, S. K. M.; Ziarko, Wojciech; Ye, R. Li (1986). "Comparison of rough-set and statistical methods in inductive learning". International Journal of Man-Machine Studies. 24: 53–72. doi:10.1016/S0020-7373(86)80033-5.
• Yao, J. T.; Yao, Y. Y. (2002). "Induction of classification rules by granular computing". Proceedings of the Third International Conference on Rough Sets and Current Trends in Computing (TSCTC'02). London, UK: Springer-Verlag. pp. 331–338.
• Ziarko, Wojciech (1998). "Rough sets as a methodology for data mining". Rough Sets in Knowledge Discovery 1: Methodology and Applications. Heidelberg: Physica-Verlag. pp. 554–576.
• Ziarko, Wojciech; Shan, Ning (1995). "Discovering attribute relationships, dependencies and rules by using rough sets". Proceedings of the 28th Annual Hawaii International Conference on System Sciences (HICSS'95). Hawaii. pp. 293–299.
• Pawlak, Zdzisław (1999). "Decision rules, Bayes' rule and rough sets". New Direction in Rough Sets, Data Mining, and Granular-soft Computing: 1–9.
• Pawlak, Zdzisław. "Rough relations". Reports, Institute of Computer Science.
• Orlowska, E. (1987). "Reasoning about vague concepts". Bulletin of the Polish Academy of Sciences. 35: 643–652.
• Polkowski, L. (2002). "Rough sets: Mathematical foundations". Advances in Soft Computing.
• Skowron, A. (1996). "Rough sets and vague concepts". Fundamenta Informaticae: 417–431.
• Burgin M. (1990). Theory of Named Sets as a Foundational Basis for Mathematics, In Structures in mathematical theories: Reports of the San Sebastian international symposium, September 25–29, 1990 (http://www.blogg.org/blog-30140-date-2005-10-26.html)
Round-robin item allocation
Round robin is a procedure for fair item allocation. It can be used to allocate several indivisible items among several people, such that the allocation is "almost" envy-free: each agent believes that the bundle he received is at least as good as the bundle of any other agent, when at most one item is removed from the other bundle. In sports, the round-robin procedure is called a draft.
Setting
There are m objects to allocate, and n people ("agents") with equal rights to these objects. Each person has different preferences over the objects. The preferences of an agent are given by a vector of values - a value for each object. It is assumed that the value of a bundle for an agent is the sum of the values of the objects in the bundle (in other words, the agents' valuations are an additive set function on the set of objects).
Description
The protocol proceeds as follows:
1. Number the people arbitrarily from 1 to $n$;
2. While there are unassigned objects:
• Let each person from 1 to $n$ pick an unassigned object.
It is assumed that each person in his turn picks an unassigned object with a highest value among the remaining objects.
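As a minimal sketch (not taken from any cited implementation), the picking loop above can be coded directly for additive valuations. The agent names and valuations below are the Alice/George example used later in the Efficiency considerations section; agents pick in dictionary order:

```python
# A minimal sketch of the round-robin protocol for additive valuations.
# Agents pick in dict order; each picks a highest-valued unassigned item.

def round_robin(valuations, items):
    """valuations: {agent: {item: value}}; returns {agent: [items]}."""
    remaining = set(items)
    bundles = {agent: [] for agent in valuations}
    while remaining:
        for agent, values in valuations.items():
            if not remaining:
                break
            # Each agent, in his turn, picks an unassigned item of
            # highest value among the remaining items.
            best = max(remaining, key=lambda item: values[item])
            bundles[agent].append(best)
            remaining.remove(best)
    return bundles

valuations = {
    "Alice":  {"z": 12, "y": 10, "x": 8, "w": 7, "v": 4, "u": 1},
    "George": {"z": 19, "y": 16, "x": 8, "w": 6, "v": 5, "u": 1},
}
bundles = round_robin(valuations, "zyxwvu")
# Alice picks z, x, v; George picks y, w, u.
```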
Additivity requirement
The round-robin protocol requires additivity, since it requires each agent to pick his "best item" without knowing what other items he is going to get; additivity of valuations guarantees that there is always a "best item" (an item with a highest value). In other words, it assumes that the items are independent goods. The additivity requirement can be relaxed to weak additivity.
Properties
The round-robin protocol is very simple to execute: it requires only m steps. Each agent can order the objects in advance by descending value (this takes $O(m\log m)$ time per agent) and then pick an object in $O(1)$ time.
The final allocation is EF1 - envy-free up to one object. This means that, for every pair of agents $i$ and $j$, if at most one object is removed from the bundle of $j$, then $i$ does not envy $j$.
Proof:[1] For every agent $i$, divide the selections made by the agents into subsequences: the first subsequence starts at agent 1 and ends at agent $i-1$; the later subsequences start at $i$ and end at $i-1$. In the later subsequences, agent $i$ chooses first, so he can choose his best remaining item, so he does not envy any other agent. Agent $i$ can envy only one of the agents $1,...,i-1$, and the envy comes only from an item they selected in the first subsequence. If this item is removed, agent $i$ does not envy.
Additionally, round-robin guarantees that each agent receives the same number of items (m/n, if m is divisible by n), or almost the same number of items (if m is not divisible by n). Thus, it is useful in situations with simple cardinality constraints, such as: assigning course-seats to students where each student must receive the same number of courses.
Efficiency considerations
Round-robin guarantees approximate fairness, but the outcome might be inefficient. As a simple example, suppose the valuations are:
Item           | z  | y  | x | w | v | u
Alice's value  | 12 | 10 | 8 | 7 | 4 | 1
George's value | 19 | 16 | 8 | 6 | 5 | 1
Round-robin, when Alice chooses first, yields the allocation $(zxv,ywu)$ with utilities (24,23) and social welfare 47. It is not Pareto efficient, since it is dominated, for example, by the allocation $(yxw,zvu)$, with utilities (25,25).
An alternative algorithm, which may attain a higher social welfare, is the Iterated maximum-weight matching algorithm.[2] In each iteration, it finds a maximum-weight matching in the bipartite graph in which the nodes are the agents and the items, and the edge weights are the agents' values for the items. In the above example, the first matching is $(y,z)$, the second is $(w,x)$, and the third is $(u,v)$. The total allocation is $(ywu,zxv)$ with utilities (18,32); the social welfare (the sum of utilities) is 50, which is higher than in the round-robin allocation.
Note that even iterated maximum-weight matching does not guarantee Pareto efficiency, as the above allocation is dominated by (xwv, zyu) with utilities (19,36).
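The iteration above can be sketched as follows. This is an illustrative implementation, not the one from the cited paper: each round, a maximum-weight matching of one distinct item per agent is found by brute force over permutations, which is only practical for small instances (a real implementation would use a matching algorithm such as the Hungarian method):

```python
# Iterated maximum-weight matching, sketched by brute force.
# Each round assigns one distinct remaining item to every agent so that
# the sum of the agents' values is maximized.
from itertools import permutations

def iterated_matching(valuations, items):
    agents = list(valuations)
    remaining = list(items)
    bundles = {agent: [] for agent in agents}
    while remaining:
        k = min(len(agents), len(remaining))
        # Try every assignment of k distinct items to the agents.
        chosen = max(
            permutations(remaining, k),
            key=lambda perm: sum(valuations[a][g] for a, g in zip(agents, perm)),
        )
        for agent, good in zip(agents, chosen):
            bundles[agent].append(good)
            remaining.remove(good)
    return bundles

valuations = {
    "Alice":  {"z": 12, "y": 10, "x": 8, "w": 7, "v": 4, "u": 1},
    "George": {"z": 19, "y": 16, "x": 8, "w": 6, "v": 5, "u": 1},
}
bundles = iterated_matching(valuations, "zyxwvu")
# Matchings (y,z), (w,x), (u,v): Alice gets {y,w,u}, George gets {z,x,v}.
```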
Round-robin for groups
The round-robin algorithm can be used to fairly allocate items among groups. In this setting, all members in each group consume the same bundle, but different members in each group may have different preferences over the items. This raises the question of how each group should decide which item to choose in its turn. Suppose that the goal of each group is to maximize the fraction of its members that are "happy", that is, feel that the allocation is fair (according to their personal preferences). Suppose also that the agents have binary additive valuations, that is, each agent values each item at either 1 ("approve") or 0 ("disapprove"). Then, each group can decide what item to pick using weighted approval voting:[3]
• Each group member is assigned a weight. The weight of member j is a certain function w(rj,sj), where:
• rj is the number of remaining goods that j approves;
• sj is the number of goods that j's group should still get such that the chosen fairness criterion is satisfied for j.
• Each remaining item is assigned a weight. The weight of item g is the sum of weights of the agents who approve g: sum of w(rj,sj) for all j such that j values g at 1.
• The group picks an item with the largest weight.
The resulting algorithm is called RWAV (round-robin with weighted approval voting). The weight function w(r,s) is determined based on an auxiliary function B(r,s), defined by the following recurrence relation:
• $B(r,s):=1~~{\text{if}}~~s\leq 0;$
• $B(r,s):=0~~{\text{if}}~~0<s~{\text{and}}~r<s;$
• $B(r,s):=\min {\bigg [}{\frac {1}{2}}[B(r-1,s)+B(r-1,s-1)],B(r-2,s-1){\bigg ]}~~{\text{otherwise}}$.
Intuitively, B(r,s) of an agent represents the probability that the agent is happy with the final allocation. If s ≤ 0, then by definition this probability is 1: the agent needs no more goods to be happy. If 0<s and r<s, then this probability is 0: the agent cannot be happy, since he needs more goods than are available. Otherwise, B(r,s) is the average between B(r-1,s) - when the other group takes a good wanted by the agent, and B(r-1,s-1) - when the agent's group takes a good wanted by the agent. The term B(r-2,s-1) represents the situation when both groups take a good wanted by the agent. Once B(r,s) is computed, the weight function w is defined as follows:
$w(r,s):=B(r,s)-B(r-1,s)$
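The recurrence for B and the weight function w can be implemented directly; this is a straightforward transcription of the definitions above, with memoization added:

```python
# The auxiliary function B(r, s) from the recurrence above, and the
# weight function w(r, s) = B(r, s) - B(r - 1, s).
from functools import lru_cache

@lru_cache(maxsize=None)
def B(r, s):
    if s <= 0:
        return 1.0          # the agent needs no more goods
    if r < s:
        return 0.0          # the agent needs more goods than remain
    return min(0.5 * (B(r - 1, s) + B(r - 1, s - 1)), B(r - 2, s - 1))

def w(r, s):
    return B(r, s) - B(r - 1, s)

# Reproduces the table entries below, e.g. B(2, 1) = 0.75, B(3, 2) = 0.375.
```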
When using this weight function and running RWAV with two groups, the fraction of happy members in group 1 is at least B(r, s(r)), and the fraction of happy members in group 2 is at least B(r-1, s(r)).[3]: Lemma 3.6 The function s(r) is determined by the fairness criterion. For example, for 1-out-of-3 maximin-share fairness, s(r) = floor(r/3). The following table shows some values of the function B, with the values of B(r-1, floor(r/3)) boldfaced:
r−s ↓ \ s → |  0 |   1   |   2   |   3   |   4   |   5   |   6   |   7   |   8   |   9   |  10
−1          |  – | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
0           |  1 | 0.500 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
1           |  1 | 0.750 | 0.375 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
2           |  1 | 0.875 | 0.625 | 0.313 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
3           |  1 | 0.938 | 0.781 | 0.547 | 0.273 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
4           |  1 | 0.969 | 0.875 | 0.711 | 0.492 | 0.246 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
5           |  1 | 0.984 | 0.930 | 0.820 | 0.656 | 0.451 | 0.226 | 0.000 | 0.000 | 0.000 | 0.000
6           |  1 | 0.992 | 0.961 | 0.891 | 0.773 | 0.612 | 0.419 | 0.209 | 0.000 | 0.000 | 0.000
7           |  1 | 0.996 | 0.979 | 0.935 | 0.854 | 0.733 | 0.576 | 0.393 | 0.196 | 0.000 | 0.000
8           |  1 | 0.998 | 0.988 | 0.961 | 0.908 | 0.820 | 0.698 | 0.546 | 0.371 | 0.185 | 0.000
9           |  1 | 0.999 | 0.994 | 0.978 | 0.943 | 0.882 | 0.790 | 0.668 | 0.519 | 0.352 | 0.176
10          |  1 | 1.000 | 0.997 | 0.987 | 0.965 | 0.923 | 0.857 | 0.762 | 0.641 | 0.497 | 0.336
From this one can conclude that the RWAV algorithm guarantees that, in both groups, at least 75% of the members feel that the allocation is 1-out-of-3 MMS fair.
Extensions
1. The round-robin protocol guarantees EF1 when the items are goods (i.e., valued positively by all agents) and when they are chores (i.e., valued negatively by all agents). However, when there are both goods and chores, it does not guarantee EF1. An adaptation of round-robin called double round-robin guarantees EF1 even with a mixture of goods and chores.[4]
2. When agents have more complex cardinality constraints (i.e., the items are divided into categories, and for each category of items, there is an upper bound on the number of items each agent can get from this category), round-robin might fail. However, combining round-robin with the envy-graph procedure gives an algorithm that finds allocations that are both EF1 and satisfy the cardinality constraints.[5]
See also
Round-robin is a special case of a picking sequence.
Round-robin protocols are used in other areas besides fair item allocation. For example, see round-robin scheduling and round-robin tournament.
References
1. Caragiannis, Ioannis; Kurokawa, David; Moulin, Hervé; Procaccia, Ariel D.; Shah, Nisarg; Wang, Junxing (2016). The Unreasonable Fairness of Maximum Nash Welfare (PDF). Proceedings of the 2016 ACM Conference on Economics and Computation - EC '16. p. 305. doi:10.1145/2940716.2940726. ISBN 9781450339360.
2. Brustle, Johannes; Dippel, Jack; Narayan, Vishnu V.; Suzuki, Mashbat; Vetta, Adrian (2020-07-13). "One Dollar Each Eliminates Envy". Proceedings of the 21st ACM Conference on Economics and Computation. EC '20. Virtual Event, Hungary: Association for Computing Machinery: 23–39. arXiv:1912.02797. doi:10.1145/3391403.3399447. ISBN 978-1-4503-7975-5. S2CID 208637311.
3. Segal-Halevi, Erel; Suksompong, Warut (2019-12-01). "Democratic fair allocation of indivisible goods". Artificial Intelligence. 277: 103167. arXiv:1709.02564. doi:10.1016/j.artint.2019.103167. ISSN 0004-3702. S2CID 203034477.
4. Haris Aziz, Ioannis Caragiannis, Ayumi Igarashi, Toby Walsh (2019). "Fair Allocation of Indivisible Goods and Chores" (PDF). IJCAI 2019 conference.
5. Biswas, Arpita; Barman, Siddharth (2018-07-13). "Fair division under cardinality constraints". Proceedings of the 27th International Joint Conference on Artificial Intelligence. IJCAI'18. Stockholm, Sweden: AAAI Press: 91–97. arXiv:1804.09521. ISBN 978-0-9992411-2-7.
Rounding
Rounding means replacing a number with an approximate value that has a shorter, simpler, or more explicit representation. For example, replacing $23.4476 with $23.45, the fraction 312/937 with 1/3, or the expression √2 with 1.414.
Rounding is often done to obtain a value that is easier to report and communicate than the original. Rounding can also be important to avoid misleadingly precise reporting of a computed number, measurement, or estimate; for example, a quantity that was computed as 123,456 but is known to be accurate only to within a few hundred units is usually better stated as "about 123,500".
On the other hand, rounding of exact numbers will introduce some round-off error in the reported result. Rounding is almost unavoidable when reporting many computations – especially when dividing two numbers in integer or fixed-point arithmetic; when computing mathematical functions such as square roots, logarithms, and sines; or when using a floating-point representation with a fixed number of significant digits. In a sequence of calculations, these rounding errors generally accumulate, and in certain ill-conditioned cases they may make the result meaningless.
Accurate rounding of transcendental mathematical functions is difficult because the number of extra digits that need to be calculated to resolve whether to round up or down cannot be known in advance. This problem is known as "the table-maker's dilemma".
Rounding has many similarities to the quantization that occurs when physical quantities must be encoded by numbers or digital signals.
A wavy equals sign (≈: approximately equal to) is sometimes used to indicate rounding of exact numbers, e.g., 9.98 ≈ 10. This sign was introduced by Alfred George Greenhill in 1892.[1]
Ideal characteristics of rounding methods include:
1. Rounding should be done by a function. This way, when the same input is rounded in different instances, the output is unchanged.
2. Calculations done with rounding should be close to those done without rounding.
• As a result of (1) and (2), the output from rounding should be close to its input, often as close as possible by some metric.
3. To be considered rounding, the range will be a subset of the domain, in general discrete. A classical range is the integers, Z.
4. Rounding should preserve symmetries that already exist between the domain and range. With finite precision (or a discrete domain), this translates to removing bias.
5. A rounding method should have utility in computer science or human arithmetic where finite precision is used, and speed is a consideration.
Because it is not usually possible for a method to satisfy all ideal characteristics, many different rounding methods exist.
As a general rule, rounding is idempotent;[2] i.e., once a number has been rounded, rounding it again will not change its value. Rounding functions are also monotonic; i.e., rounding a larger number gives a larger or equal result than rounding a smaller number. In the general case of a discrete range, they are piecewise constant functions.
Types of rounding
Typical rounding problems include:
Rounding problem | Example input | Result | Rounding criterion
Approximating an irrational number by a fraction | π | 22/7 | 1-digit denominator
Approximating a rational number by another fraction with smaller numerator and denominator | 399/941 | 3/7 | 1-digit denominator
Approximating a fraction that has a periodic decimal expansion by a finite decimal fraction | 5/3 | 1.6667 | 4 decimal places
Approximating a fractional decimal number by one with fewer digits | 2.1784 | 2.18 | 2 decimal places
Approximating a decimal integer by an integer with more trailing zeros | 23,217 | 23,200 | 3 significant figures
Approximating a large decimal integer using scientific notation | 300,999,999 | 3.01 × 10⁸ | 3 significant figures
Approximating a value by a multiple of a specified amount | 48.2 | 45 | Multiple of 15
Rounding each of a finite set of real numbers (mostly fractions) to an integer (sometimes the second-nearest integer) so that the sum of the rounded numbers equals the rounded sum of the numbers (needed, e.g., (1) for the apportionment of seats, implemented, e.g., by the largest remainder method, see Mathematics of apportionment, and (2) for distributing the total VAT of an invoice to its items) | {3/12, 4/12, 5/12} | {0, 0, 1} | Sum of rounded elements equals rounded sum of elements
Rounding to integer
The most basic form of rounding is to replace an arbitrary number by an integer. All the following rounding modes are concrete implementations of an abstract single-argument "round()" procedure. These are true functions (with the exception of those that use randomness).
Directed rounding to an integer
These four methods are called directed rounding, as the displacements from the original number x to the rounded value y are all directed toward or away from the same limiting value (0, +∞, or −∞). Directed rounding is used in interval arithmetic and is often required in financial calculations.
If x is positive, round-down is the same as round-toward-zero, and round-up is the same as round-away-from-zero. If x is negative, round-down is the same as round-away-from-zero, and round-up is the same as round-toward-zero. In any case, if x is an integer, y is just x.
Where many calculations are done in sequence, the choice of rounding method can have a very significant effect on the result. A famous instance involved a new index set up by the Vancouver Stock Exchange in 1982. It was initially set at 1000.000 (three decimal places of accuracy), and after 22 months had fallen to about 520 — whereas stock prices had generally increased in the period. The problem was caused by the index being recalculated thousands of times daily, and always being rounded down to 3 decimal places, in such a way that the rounding errors accumulated. Recalculating with better rounding gave an index value of 1098.892 at the end of the same period.[3]
For the examples below, sgn(x) refers to the sign function applied to the original number, x.
Rounding down
• round down (or take the floor, or round toward negative infinity): y is the largest integer that does not exceed x.
$y=\mathrm {floor} (x)=\left\lfloor x\right\rfloor =-\left\lceil -x\right\rceil $
For example, 23.7 gets rounded to 23, and −23.2 gets rounded to −24.
Rounding up
• round up (or take the ceiling, or round toward positive infinity): y is the smallest integer that is not less than x.
$y=\operatorname {ceil} (x)=\left\lceil x\right\rceil =-\left\lfloor -x\right\rfloor $
For example, 23.2 gets rounded to 24, and −23.7 gets rounded to −23.
Rounding toward zero
• round toward zero (or truncate, or round away from infinity): y is the integer that is closest to x such that it is between 0 and x (included); i.e. y is the integer part of x, without its fraction digits.
$y=\operatorname {truncate} (x)=\operatorname {sgn}(x)\left\lfloor \left|x\right|\right\rfloor =-\operatorname {sgn}(x)\left\lceil -\left|x\right|\right\rceil ={\begin{cases}\left\lfloor x\right\rfloor &x\geq 0\\\left\lceil x\right\rceil &x<0\\\end{cases}}$
For example, 23.7 gets rounded to 23, and −23.7 gets rounded to −23.
Rounding away from zero
• round away from zero (or round toward infinity): y is the integer that is closest to 0 (or equivalently, to x) such that x is between 0 and y (included).
$y=\operatorname {sgn}(x)\left\lceil \left|x\right|\right\rceil =-\operatorname {sgn}(x)\left\lfloor -\left|x\right|\right\rfloor ={\begin{cases}\left\lceil x\right\rceil &x\geq 0\\\left\lfloor x\right\rfloor &x<0\\\end{cases}}$
For example, 23.2 gets rounded to 24, and −23.2 gets rounded to −24.
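The four directed modes above map directly onto Python's math functions; round-away-from-zero has no standard library function, so it is composed from the other two (a minimal sketch, with illustrative function names):

```python
# The four directed rounding modes, sketched with Python's math functions.
import math

def round_down(x):            # toward -inf (floor)
    return math.floor(x)

def round_up(x):              # toward +inf (ceiling)
    return math.ceil(x)

def round_toward_zero(x):     # truncation
    return math.trunc(x)

def round_away_from_zero(x):  # no stdlib equivalent; composed from floor/ceil
    return math.ceil(x) if x >= 0 else math.floor(x)

# round_down(23.7) == 23,  round_down(-23.2) == -24
# round_up(23.2) == 24,    round_up(-23.7) == -23
# round_toward_zero(-23.7) == -23
# round_away_from_zero(23.2) == 24, round_away_from_zero(-23.2) == -24
```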
Rounding to the nearest integer
Rounding a number x to the nearest integer requires some tie-breaking rule for those cases when x is exactly half-way between two integers — that is, when the fraction part of x is exactly 0.5.
If it were not for the 0.5 fractional parts, the round-off errors introduced by the round to nearest method would be symmetric: for every fraction that gets rounded down (such as 0.268), there is a complementary fraction (namely, 0.732) that gets rounded up by the same amount.
When rounding a large set of fixed-point numbers with uniformly distributed fractional parts, the rounding errors by all values, with the omission of those having 0.5 fractional part, would statistically compensate each other. This means that the expected (average) value of the rounded numbers is equal to the expected value of the original numbers when numbers with fractional part 0.5 from the set are removed.
In practice, floating-point numbers are typically used, which have even more computational nuances because they are not equally spaced.
Rounding half up
The following tie-breaking rule, called round half up (or round half toward positive infinity), is widely used in many disciplines. That is, half-way values of x are always rounded up.
• If the fractional part of x is exactly 0.5, then y = x + 0.5
$y=\left\lfloor x+0.5\right\rfloor =-\left\lceil -x-0.5\right\rceil =\left\lceil {\frac {\lfloor 2x\rfloor }{2}}\right\rceil $
For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −23.
However, some programming languages (such as Java and Python) use the name "half up" for what is here called round half away from zero.[4][5]
This method only requires checking one digit to determine rounding direction in two's complement and similar representations.
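The formula above, y = ⌊x + 0.5⌋, translates directly into code (a sketch; note that it rounds negative ties toward +∞, unlike round-half-away-from-zero):

```python
# Round half up (half toward +inf) via the formula y = floor(x + 0.5).
import math

def round_half_up(x):
    return math.floor(x + 0.5)

# round_half_up(23.5) == 24, while round_half_up(-23.5) == -23:
# negative ties also go "up", toward positive infinity.
```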
Rounding half down
One may also use round half down (or round half toward negative infinity) as opposed to the more common round half up.
• If the fractional part of x is exactly 0.5, then y = x − 0.5
$y=\left\lceil x-0.5\right\rceil =-\left\lfloor -x+0.5\right\rfloor =\left\lfloor {\frac {\lceil 2x\rceil }{2}}\right\rfloor $
For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −24.
However, some programming languages (such as Java and Python) use the name "half down" for what is here called round half toward zero.[4][5]
Rounding half toward zero
One may also round half toward zero (or round half away from infinity) as opposed to the conventional round half away from zero.
• If the fractional part of x is exactly 0.5, then y = x − 0.5 if x is positive, and y = x + 0.5 if x is negative.
$y=\operatorname {sgn}(x)\left\lceil \left|x\right|-0.5\right\rceil =-\operatorname {sgn}(x)\left\lfloor -\left|x\right|+0.5\right\rfloor $
For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −23.
This method treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias toward zero.
Rounding half away from zero
The other tie-breaking method commonly taught and used is the round half away from zero (or round half toward infinity), namely:
• If the fractional part of x is exactly 0.5, then y = x + 0.5 if x is positive, and y = x − 0.5 if x is negative.
$y=\operatorname {sgn}(x)\left\lfloor \left|x\right|+0.5\right\rfloor =-\operatorname {sgn}(x)\left\lceil -\left|x\right|-0.5\right\rceil $
For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −24.
This can be more efficient on binary computers because only the first omitted bit needs to be considered to determine if it rounds up (on a 1) or down (on a 0). This is one method used when rounding to significant figures due to its simplicity.
This method, also known as commercial rounding, treats positive and negative values symmetrically, and therefore is free of overall positive/negative bias if the original numbers are positive or negative with equal probability. It does, however, still have bias away from zero.
It is often used for currency conversions and price roundings (when the amount is first converted into the smallest significant subdivision of the currency, such as cents of a euro), as it is easy to explain by considering only the first fractional digit, independently of supplementary precision digits or the sign of the amount (ensuring strict equivalence between the payer and the recipient of the amount).
Rounding half to even
A tie-breaking rule without positive/negative bias and without bias toward/away from zero is round half to even. By this convention, if the fractional part of x is 0.5, then y is the even integer nearest to x. Thus, for example, +23.5 becomes +24, as does +24.5; however, −23.5 becomes −24, as does −24.5. This function minimizes the expected error when summing over rounded figures, even when the inputs are mostly positive or mostly negative, provided they are neither mostly even nor mostly odd.
This variant of the round-to-nearest method is also called convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, odd–even rounding,[6] or bankers' rounding.
This is the default rounding mode used in IEEE 754 operations for results in binary floating-point formats, and the more sophisticated mode used when rounding to significant figures.
By eliminating bias, repeated addition or subtraction of independent numbers, as in a one-dimensional random walk, will give a rounded result with an error that tends to grow in proportion to the square root of the number of operations rather than linearly.
However, this rule distorts the distribution by increasing the probability of evens relative to odds. Typically this is less important than the biases that are eliminated by this method.
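Python's built-in round() implements round half to even, so it can be used to illustrate this mode directly:

```python
# Python's round() uses round half to even (bankers' rounding), the
# IEEE 754 default, so ties alternate between rounding up and down.
assert round(23.5) == 24    # tie: 24 is the even neighbour
assert round(24.5) == 24    # tie again goes to the even integer 24
assert round(-23.5) == -24  # symmetric for negative ties
```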
Rounding half to odd
A similar tie-breaking rule to round half to even is round half to odd. In this approach, if the fractional part of x is 0.5, then y is the odd integer nearest to x. Thus, for example, +23.5 becomes +23, as does +22.5; while −23.5 becomes −23, as does −22.5.
This method is also free from positive/negative bias and bias toward/away from zero, provided the numbers to be rounded are neither mostly even nor mostly odd. It also shares the round half to even property of distorting the original distribution, as it increases the probability of odds relative to evens. It was the method used for bank balances in the United Kingdom when it decimalized its currency.[7]
This variant is almost never used in computations, except in situations where one wants to avoid increasing the scale of floating-point numbers, which have a limited exponent range. With round half to even, a non-infinite number would round to infinity, and a small denormal value would round to a normal non-zero value. Effectively, this mode prefers preserving the existing scale of tie numbers, avoiding out-of-range results when possible for numeral systems of even radix (such as binary and decimal).
Rounding to prepare for shorter precision
This rounding mode, known as rounding to prepare for shorter precision (RPSP), is used to avoid a wrong result from double (including multiple) rounding. A wrong result after double rounding is avoided if all roundings except the final one are done using RPSP, and only the final rounding uses the externally requested mode.
With decimal arithmetic, if there is a choice between numbers whose smallest significant digits form one of the pairs 0 or 1, 4 or 5, 5 or 6, 9 or 0, then the digit different from 0 or 5 shall be selected; otherwise, the choice is arbitrary. IBM specifies[8] that, in the latter case, the digit with the smaller magnitude shall be selected. RPSP can be applied with a step between two consecutive roundings as small as a single digit (for example, rounding to 1/10 can be applied after rounding to 1/100). For example, when rounding to integer,
• 20.0 is rounded to 20;
• 20.01, 20.1, 20.9, 20.99, 21, 21.01, 21.9, 21.99 are rounded to 21;
• 22.0, 22.1, 22.9, 22.99 are rounded to 22;
• 24.0, 24.1, 24.9, 24.99 are rounded to 24;
• 25.0 is rounded to 25;
• 25.01, 25.1 are rounded to 26.
In the example from "Double rounding" section, rounding 9.46 to one decimal gives 9.4, which rounding to integer in turn gives 9.
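A sketch of the decimal RPSP rule above for rounding to integer (the function name is illustrative, and the symmetric handling of negative inputs is an assumption not spelled out in the text):

```python
# RPSP to integer for decimal arithmetic, following the digit rule above:
# exact values stay put; otherwise prefer the neighbour whose last digit
# is not 0 or 5; in the remaining (arbitrary) cases, take the smaller
# magnitude, per the IBM definition.  Negative values are handled
# symmetrically (an assumption).
import math

def rpsp_to_int(x):
    if x == int(x):
        return int(x)                    # exact: no rounding needed
    sign = 1 if x > 0 else -1
    lo = math.floor(abs(x))              # smaller-magnitude neighbour
    hi = lo + 1                          # larger-magnitude neighbour
    special_pairs = {(0, 1), (4, 5), (5, 6), (9, 0)}
    if (lo % 10, hi % 10) in special_pairs:
        # Select the neighbour whose last digit is not 0 or 5.
        chosen = hi if hi % 10 not in (0, 5) else lo
    else:
        chosen = lo                      # arbitrary case: smaller magnitude
    return sign * chosen

# Reproduces the examples above: 20.01 -> 21, 22.1 -> 22, 24.1 -> 24,
# 25.1 -> 26, and 20.0 -> 20.
```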
With binary arithmetic, the rounding is done as "round to odd" (not to be confused with "round half to odd"). For example, when rounding to 1/4:
• x == 2.0 => result is 2
• 2.0 < x < 2.5 => result is 2.25
• x == 2.5 => result is 2.5
• 2.5 < x < 3.0 => result is 2.75
• x == 3.0 => result is 3.0
For correct results, RPSP shall be applied with a step of at least 2 binary digits; otherwise, a wrong result may appear. For example,
• 3.125 RPSP to 1/4 => result is 3.25
• 3.25 RPSP to 1/2 => result is 3.5
• 3.5 round-half-to-even to 1 => result is 4 (wrong)
If the step is 2 bits or more, RPSP gives 3.25 which, in turn, round-half-to-even to integer results in 3.
RPSP is implemented in hardware in IBM zSeries and pSeries.
Alternating tie-breaking
One method, more obscure than most, is to alternate direction when rounding a number with 0.5 fractional part. All others are rounded to the closest integer.
• Whenever the fractional part is 0.5, alternate rounding up or down: for the first occurrence of a 0.5 fractional part, round up, for the second occurrence, round down, and so on. Alternatively, the first 0.5 fractional part rounding can be determined by a random seed. "Up" and "down" can be any two rounding methods that oppose each other - toward and away from positive infinity or toward and away from zero.
Provided that 0.5 fractional parts occur many times between restarts of the alternation, this method is effectively bias-free. With guaranteed zero bias, it is useful if the numbers are to be summed or averaged.
Random tie-breaking
• If the fractional part of x is 0.5, choose y randomly between x + 0.5 and x − 0.5, with equal probability. All others are rounded to the closest integer.
Like round-half-to-even and round-half-to-odd, this rule is essentially free of overall bias, but it is also fair among even and odd y values. An advantage over alternate tie-breaking is that the last direction of rounding on the 0.5 fractional part does not have to be "remembered".
Stochastic rounding
Rounding to either the closest integer toward negative infinity or the closest integer toward positive infinity, with a probability dependent on the proximity, is called stochastic rounding; it gives an unbiased result on average.[9]
$\operatorname {Round} (x)={\begin{cases}\lfloor x\rfloor &{\text{ with probability }}1-(x-\lfloor x\rfloor )=\lfloor x\rfloor -x+1\\\lfloor x\rfloor +1&{\text{ with probability }}{x-\lfloor x\rfloor }\end{cases}}$
For example, 1.6 would be rounded to 1 with probability 0.4 and to 2 with probability 0.6.
Stochastic rounding can be accurate in a way that a rounding function can never be. For example, suppose one started with 0 and added 0.3 to that one hundred times while rounding the running total between every addition. The result would be 0 with regular rounding, but with stochastic rounding, the expected result would be 30, which is the same value obtained without rounding. This can be useful in machine learning where the training may use low precision arithmetic iteratively.[9] Stochastic rounding is also a way to achieve 1-dimensional dithering.
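A minimal sketch of stochastic rounding, with a simulation showing the unbiasedness described above (function name illustrative):

```python
# Stochastic rounding: round up with probability equal to the fractional
# part, down otherwise, so the expected value equals the input.
import math
import random

def stochastic_round(x, rng=random):
    frac = x - math.floor(x)
    return math.floor(x) + (1 if rng.random() < frac else 0)

random.seed(0)
samples = [stochastic_round(0.3) for _ in range(10_000)]
mean = sum(samples) / len(samples)
# Each sample is 0 or 1, but the mean stays close to 0.3, unlike any
# deterministic rounding of 0.3.
```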
Comparison of approaches for rounding to an integer
Value | Down (toward −∞) | Up (toward +∞) | Toward 0 | Away from 0 | Half down | Half up | Half toward 0 | Half away from 0 | Half to even | Half to odd | RPSP | Alternating tie-break: avg (SD) | Random tie-break: avg (SD) | Stochastic: avg (SD)
+1.8 | +1 | +2 | +1 | +2 | +2 | +2 | +2 | +2 | +2 | +2 | +1 | +2 (0) | +2 (0) | +1.8 (0.4)
+1.5 | +1 | +2 | +1 | +2 | +1 | +2 | +1 | +2 | +2 | +1 | +1 | +1.5 (0.5) | +1.5 (0.5) | +1.5 (0.5)
+1.2 | +1 | +2 | +1 | +2 | +1 | +1 | +1 | +1 | +1 | +1 | +1 | +1 (0) | +1 (0) | +1.2 (0.4)
+0.8 | 0 | +1 | 0 | +1 | +1 | +1 | +1 | +1 | +1 | +1 | +1 | +1 (0) | +1 (0) | +0.8 (0.4)
+0.5 | 0 | +1 | 0 | +1 | 0 | +1 | 0 | +1 | 0 | +1 | +1 | +0.5 (0.5) | +0.5 (0.5) | +0.5 (0.5)
+0.2 | 0 | +1 | 0 | +1 | 0 | 0 | 0 | 0 | 0 | 0 | +1 | 0 (0) | 0 (0) | +0.2 (0.4)
−0.2 | −1 | 0 | 0 | −1 | 0 | 0 | 0 | 0 | 0 | 0 | −1 | 0 (0) | 0 (0) | −0.2 (0.4)
−0.5 | −1 | 0 | 0 | −1 | −1 | 0 | 0 | −1 | 0 | −1 | −1 | −0.5 (0.5) | −0.5 (0.5) | −0.5 (0.5)
−0.8 | −1 | 0 | 0 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 (0) | −1 (0) | −0.8 (0.4)
−1.2 | −2 | −1 | −1 | −2 | −1 | −1 | −1 | −1 | −1 | −1 | −1 | −1 (0) | −1 (0) | −1.2 (0.4)
−1.5 | −2 | −1 | −1 | −2 | −2 | −1 | −1 | −2 | −2 | −1 | −1 | −1.5 (0.5) | −1.5 (0.5) | −1.5 (0.5)
−1.8 | −2 | −1 | −1 | −2 | −2 | −2 | −2 | −2 | −2 | −2 | −1 | −2 (0) | −2 (0) | −1.8 (0.4)
Rounding to other values
Rounding to a specified multiple
The most common type of rounding is to round to an integer; or, more generally, to an integer multiple of some increment — such as rounding to whole tenths of seconds, hundredths of a dollar, to whole multiples of 1/2 or 1/8 inch, to whole dozens or thousands, etc.
In general, rounding a number x to a multiple of some specified positive value m entails the following steps:
$\mathrm {roundToMultiple} (x,m)=\mathrm {round} (x/m)\times m$
For example, rounding x = 2.1784 dollars to whole cents (i.e., to a multiple of 0.01) entails computing 2.1784 / 0.01 = 217.84, then rounding that to 218, and finally computing 218 × 0.01 = 2.18.
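The steps above can be sketched in a few lines, using round half to even as the inner round(); any of the integer rounding modes from the previous section could be substituted:

```python
# roundToMultiple(x, m) = round(x / m) * m, here with Python's built-in
# round() (round half to even) as the inner rounding function.
def round_to_multiple(x, m):
    return round(x / m) * m

# Rounding 2.1784 dollars to whole cents gives 2.18 (up to tiny
# floating-point error); rounding 48.2 to a multiple of 15 gives 45.
```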
When rounding to a predetermined number of significant digits, the increment m depends on the magnitude of the number to be rounded (or of the rounded result).
The increment m is normally a finite fraction in whatever numeral system is used to represent the numbers. For display to humans, that usually means the decimal numeral system (that is, m is an integer times a power of 10, like 1/1000 or 25/100). For intermediate values stored in digital computers, it often means the binary numeral system (m is an integer times a power of 2).
The abstract single-argument "round()" function that returns an integer from an arbitrary real value has at least a dozen distinct concrete definitions, presented in the rounding to integer section. The abstract two-argument "roundToMultiple()" function is formally defined here, but in many cases it is used with the implicit increment m = 1, in which case it reduces to the single-argument function and admits the same dozen concrete definitions.
Rounding to a specified power
Rounding to a specified power is very different from rounding to a specified multiple; for example, it is common in computing to need to round a number to a whole power of 2. The steps, in general, to round a positive number x to a power of some positive number b other than 1, are:
$\mathrm {roundToPower} (x,b)=b^{\mathrm {round} (\log _{b}x)},x>0,b>0,b\neq 1$
Many of the caveats applicable to rounding to a multiple are applicable to rounding to a power.
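The formula above can be sketched in Python (the function name mirrors the article's roundToPower; "nearest" is measured on the logarithmic scale):

```python
import math

def round_to_power(x, b):
    """Round a positive x to the nearest power of b (b > 0, b != 1),
    where distance is measured between logarithms."""
    if x <= 0 or b <= 0 or b == 1:
        raise ValueError("requires x > 0, b > 0, b != 1")
    return b ** round(math.log(x, b))

# Rounding to a whole power of 2, as is common in computing:
# 100 lies between 64 and 128; log2(100) ~ 6.64, so it rounds to 2**7.
nearest_pow2 = round_to_power(100, 2)  # 128
```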
Scaled rounding
This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale.
For example, resistors are supplied with preferred numbers on a logarithmic scale. In particular, for resistors with a 10% accuracy, they are supplied with nominal values 100, 120, 150, 180, 220, etc. rounded to multiples of 10 (E12 series). If a calculation indicates a resistor of 165 ohms is required then log(150) = 2.176, log(165) = 2.217 and log(180) = 2.255. The logarithm of 165 is closer to the logarithm of 180 therefore a 180 ohm resistor would be the first choice if there are no other considerations.
Whether a value x ∈ (a, b) rounds to a or b depends upon whether the squared value x2 is greater than or less than the product ab. The value 165 rounds to 180 in the resistors example because 1652 = 27225 is greater than 150 × 180 = 27000.
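The resistor example can be sketched as follows, assuming Python; the function name and the restriction to one decade of the E12 series are illustrative choices, not part of any standard library:

```python
import math

# E12 preferred values in one decade (10% tolerance resistors)
E12 = [100, 120, 150, 180, 220, 270, 330, 390, 470, 560, 680, 820]

def nearest_preferred(x, values=E12):
    """Pick the preferred value closest to x on a logarithmic scale."""
    return min(values, key=lambda v: abs(math.log(v) - math.log(x)))

# 165 ohms: the log-distance to 180 is smaller than to 150;
# equivalently 165**2 = 27225 exceeds 150 * 180 = 27000.
choice = nearest_preferred(165)  # 180
```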
Floating-point rounding
In floating-point arithmetic, rounding aims to turn a given value x into a value y with a specified number of significant digits. In other words, y should be a multiple of a number m that depends on the magnitude of x. The number m is a power of the base (usually 2 or 10) of the floating-point representation.
Apart from this detail, all the variants of rounding discussed above apply to the rounding of floating-point numbers as well. The algorithm for such rounding is presented in the Scaled rounding section above, but with a constant scaling factor s = 1, and an integer base b > 1.
Where the rounded result would overflow the result for a directed rounding is either the appropriate signed infinity when "rounding away from zero", or the highest representable positive finite number (or the lowest representable negative finite number if x is negative), when "rounding toward zero". The result of an overflow for the usual case of round to nearest is always the appropriate infinity.
Rounding to a simple fraction
In some contexts it is desirable to round a given number x to a "neat" fraction — that is, the nearest fraction y = m/n whose numerator m and denominator n do not exceed a given maximum. This problem is fairly distinct from that of rounding a value to a fixed number of decimal or binary digits, or to a multiple of a given unit m. This problem is related to Farey sequences, the Stern–Brocot tree, and continued fractions.
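Python's fractions module solves the closely related problem of bounding only the denominator, walking the continued-fraction convergents mentioned above:

```python
import math
from fractions import Fraction

# Nearest fraction to pi whose denominator does not exceed 1000.
# limit_denominator follows the Stern-Brocot / continued-fraction
# structure referred to in the text.
approx = Fraction(math.pi).limit_denominator(1000)
# 355/113, the classic rational approximation to pi
```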
Rounding to an available value
Finished lumber, writing paper, capacitors, and many other products are usually sold in only a few standard sizes.
Many design procedures describe how to calculate an approximate value, and then "round" to some standard size using phrases such as "round down to nearest standard value", "round up to nearest standard value", or "round to nearest standard value".[10][11]
When a set of preferred values is equally spaced on a logarithmic scale, choosing the closest preferred value to any given value can be seen as a form of scaled rounding. Such rounded values can be directly calculated.[12]
Rounding in other contexts
Dithering and error diffusion
When digitizing continuous signals, such as sound waves, the overall effect of a number of measurements is more important than the accuracy of each individual measurement. In these circumstances, dithering, and a related technique, error diffusion, are normally used. A related technique called pulse-width modulation is used to achieve analog type output from an inertial device by rapidly pulsing the power with a variable duty cycle.
Error diffusion tries to ensure the error, on average, is minimized. When dealing with a gentle slope from one to zero, the output would be zero for the first few terms until the sum of the error and the current value becomes greater than 0.5, in which case a 1 is output and the difference subtracted from the error so far. Floyd–Steinberg dithering is a popular error diffusion procedure when digitizing images.
As a one-dimensional example, suppose the numbers 0.9677, 0.9204, 0.7451, and 0.3091 occur in order and each is to be rounded to a multiple of 0.01. In this case the cumulative sums, 0.9677, 1.8881 = 0.9677 + 0.9204, 2.6332 = 0.9677 + 0.9204 + 0.7451, and 2.9423 = 0.9677 + 0.9204 + 0.7451 + 0.3091, are each rounded to a multiple of 0.01: 0.97, 1.89, 2.63, and 2.94. The first of these and the differences of adjacent values give the desired rounded values: 0.97, 0.92 = 1.89 − 0.97, 0.74 = 2.63 − 1.89, and 0.31 = 2.94 − 2.63.
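The one-dimensional scheme above can be sketched in Python (the function name is illustrative); rounding the cumulative sums and taking differences keeps the running total of the rounded values close to the true total:

```python
def diffuse_round(values, m):
    """Round each value to a multiple of m so that the running sum of
    the rounded values tracks the true running sum (1-D error diffusion)."""
    out, total, rounded_total = [], 0.0, 0.0
    for v in values:
        total += v
        target = round(total / m) * m        # round the cumulative sum
        out.append(round(target - rounded_total, 10))  # kill float noise
        rounded_total = target
    return out

rounded = diffuse_round([0.9677, 0.9204, 0.7451, 0.3091], 0.01)
# [0.97, 0.92, 0.74, 0.31], matching the worked example
```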
Monte Carlo arithmetic
Monte Carlo arithmetic is a technique in Monte Carlo methods where the rounding is randomly up or down. Stochastic rounding can be used for Monte Carlo arithmetic, but in general, just rounding up or down with equal probability is more often used. Repeated runs will give a random distribution of results which can indicate the stability of the computation.[13]
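A minimal sketch of the random up-or-down rounding used in Monte Carlo arithmetic (function name illustrative; a real implementation would perturb every arithmetic operation, not a single value):

```python
import math
import random
import statistics

def random_round(x):
    """Round a non-integer x up or down with equal probability;
    stochastic rounding would instead weight by the fractional part."""
    f = math.floor(x)
    if x == f:
        return f
    return f + (1 if random.random() < 0.5 else 0)

random.seed(0)  # fixed seed for reproducibility of this sketch
runs = [random_round(1.5) for _ in range(10_000)]
mean = statistics.fmean(runs)  # scatters around 1.5
```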
Exact computation with rounded arithmetic
It is possible to use rounded arithmetic to evaluate the exact value of a function with integer domain and range. For example, if an integer n is known to be a perfect square, its square root can be computed by converting n to a floating-point value z, computing the approximate square root x of z with floating point, and then rounding x to the nearest integer y. If n is not too big, the floating-point round-off error in x will be less than 0.5, so the rounded value y will be the exact square root of n. This is essentially why slide rules could be used for exact arithmetic.
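The square-root example can be sketched as follows; the rounded float result is exact as long as the round-off error in the float square root stays below 0.5 (for very large integers, Python's math.isqrt is the robust alternative):

```python
import math

def exact_isqrt(n):
    """Exact square root of a perfect square n, computed by rounding
    an approximate floating-point square root to the nearest integer."""
    return round(math.sqrt(n))

root = exact_isqrt(12345**2)  # 12345, exactly
```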
Double rounding
Rounding a number twice in succession to different levels of precision, with the latter precision being coarser, is not guaranteed to give the same result as rounding once to the final precision except in the case of directed rounding.[nb 1] For instance rounding 9.46 to one decimal gives 9.5, and then 10 when rounding to integer using rounding half to even, but would give 9 when rounded to integer directly. Borman and Chatfield[14] discuss the implications of double rounding when comparing data rounded to one decimal place to specification limits expressed using integers.
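The 9.46 example can be reproduced directly in Python, whose round() uses round half to even at each step:

```python
# Two-step rounding: 9.46 -> 9.5 (one decimal) -> 10 (integer)
two_step = round(round(9.46, 1))

# Single-step rounding to integer gives a different answer
one_step = round(9.46)  # 9
```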
In Martinez v. Allstate and Sendejo v. Farmers, litigated between 1995 and 1997, the insurance companies argued that double rounding premiums was permissible and in fact required. The US courts ruled against the insurance companies and ordered them to adopt rules to ensure single rounding.[15]
Some computer languages and the IEEE 754-2008 standard dictate that in straightforward calculations the result should not be rounded twice. This has been a particular problem for Java, which is designed to run identically on different machines; special programming tricks had to be used to achieve this with x87 floating point.[16][17] The Java language was changed to allow different results where the difference does not matter, and to require the strictfp qualifier when results must conform exactly; strict floating point was restored in Java 17.[18]
In some algorithms, an intermediate result is computed in a larger precision and must then be rounded to the final precision. Double rounding can be avoided by choosing an adequate rounding for the intermediate computation: avoid producing midpoints of the final rounding (except when the midpoint is exact). In binary arithmetic, the idea is to round the result toward zero and set the least significant bit to 1 if the rounded result is inexact; this is called sticky rounding.[19] Equivalently, it consists of returning the intermediate result when it is exactly representable, and otherwise the nearest floating-point number with an odd significand; this is why it is also known as rounding to odd.[20][21] A concrete realization of this approach, for binary and decimal arithmetic, is rounding to prepare for shorter precision.
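A decimal sketch of round-to-odd (names and the specific example are illustrative). The intermediate precision is two digits finer than the final one, which is what guarantees the intermediate result is never a midpoint of the final rounding; exact Fractions keep the sketch free of float noise:

```python
from fractions import Fraction

def round_to_odd(x, m):
    """Round x to a multiple of m with round-to-odd (sticky rounding):
    truncate toward zero and, if that was inexact, force the last
    digit of the quotient to be odd."""
    q = Fraction(x) / Fraction(m)
    t = int(q)                       # truncate toward zero
    if q != t and t % 2 == 0:        # inexact with an even last digit
        t += 1 if q > 0 else -1      # bump away from zero to make it odd
    return t * Fraction(m)

x = Fraction("0.12501")
# Intermediate rounding two digits finer than the final precision:
mid = round_to_odd(x, Fraction("0.0001"))                  # 0.1251
# Final round-to-nearest now agrees with direct rounding (0.13);
# a naive nearest intermediate (0.1250) would have given 0.12.
final = round(mid / Fraction("0.01")) * Fraction("0.01")
```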
Table-maker's dilemma
William M. Kahan coined the term "The Table-Maker's Dilemma" for the unknown cost of rounding transcendental functions:
Nobody knows how much it would cost to compute y^w correctly rounded for every two floating-point arguments at which it does not over/underflow. Instead, reputable math libraries compute elementary transcendental functions mostly within slightly more than half an ulp and almost always well within one ulp. Why can't y^w be rounded within half an ulp like SQRT? Because nobody knows how much computation it would cost... No general way exists to predict how many extra digits will have to be carried to compute a transcendental expression and round it correctly to some preassigned number of digits. Even the fact (if true) that a finite number of extra digits will ultimately suffice may be a deep theorem.[22]
The IEEE 754 floating-point standard guarantees that add, subtract, multiply, divide, fused multiply–add, square root, and floating-point remainder will give the correctly rounded result of the infinite-precision operation. No such guarantee was given in the 1985 standard for more complex functions and they are typically only accurate to within the last bit at best. However, the 2008 standard guarantees that conforming implementations will give correctly rounded results which respect the active rounding mode; implementation of the functions, however, is optional.
Using the Gelfond–Schneider theorem and Lindemann–Weierstrass theorem, many of the standard elementary functions can be proved to return transcendental results, except on some well-known arguments; therefore, from a theoretical point of view, it is always possible to correctly round such functions. However, for an implementation of such a function, determining a limit for a given precision on how accurate results need to be computed, before a correctly rounded result can be guaranteed, may demand a lot of computation time or may be out of reach.[23] In practice, when this limit is not known (or only a very large bound is known), some decision has to be made in the implementation (see below); but according to a probabilistic model, correct rounding can be satisfied with a very high probability when using an intermediate accuracy of up to twice the number of digits of the target format plus some small constant (after taking special cases into account).
Some programming packages offer correct rounding. The GNU MPFR package gives correctly rounded arbitrary precision results. Some other libraries implement elementary functions with correct rounding in double precision:
• IBM's ml4j, which stands for Mathematical Library for Java, written by Abraham Ziv and Moshe Olshansky in 1999, correctly rounded to nearest only.[24][25] This library was claimed to be portable, but only binaries for PowerPC/AIX, SPARC/Solaris and x86/Windows NT were provided. According to its documentation, this library uses a first step with an accuracy a bit larger than double precision, a second step based on double-double arithmetic, and a third step with a 768-bit precision based on arrays of IEEE 754 double-precision floating-point numbers.
• IBM's Accurate portable mathematical library (abbreviated as APMathLib or just MathLib),[26][27] also called libultim,[28] with correct rounding to nearest only. This library uses up to 768 bits of working precision. It was included in the GNU C Library in 2001,[29] but the "slow paths" (providing correct rounding) were removed from 2018 to 2021.
• Sun Microsystems's libmcr, in the 4 rounding modes.[30] For the difficult cases, this library also uses multiple precision, and the number of words is increased by 2 each time the Table-maker's dilemma occurs (with undefined behavior in the very unlikely event that some limit of the machine is reached).
• CRlibm, written in the old Arénaire team (LIP, ENS Lyon). It supports the 4 rounding modes and is proved, using the knowledge of the hardest-to-round cases.[31][32]
• The CORE-MATH project provides some correctly rounded functions in the 4 rounding modes, using the knowledge of the hardest-to-round cases.[33][34]
There exist computable numbers for which a rounded value can never be determined no matter how many digits are calculated. Specific instances cannot be given, but this follows from the undecidability of the halting problem. For instance, if Goldbach's conjecture is true but unprovable, then the result of rounding the following value up to the next integer cannot be determined: either 1 + 10^−n, where n is the first even number greater than 4 which is not the sum of two primes, or 1 if there is no such number. The rounded result is 2 if such a number n exists and 1 otherwise. The value before rounding can, however, be approximated to any given precision even if the conjecture is unprovable.
Interaction with string searches
Rounding can adversely affect a string search for a number. For example, π rounded to four digits is "3.1416" but a simple search for this string will not discover "3.14159" or any other value of π rounded to more than four digits. In contrast, truncation does not suffer from this problem; for example, a simple string search for "3.1415", which is π truncated to four digits, will discover values of π truncated to more than four digits.
History
The concept of rounding is very old, perhaps older than the concept of division itself. Some ancient clay tablets found in Mesopotamia contain tables with rounded values of reciprocals and square roots in base 60.[35] Rounded approximations to π, the length of the year, and the length of the month are also ancient—see base 60 examples.
The round-to-even method has served as the ASTM (E-29) standard since 1940. The origins of the terms unbiased rounding and statistician's rounding are fairly self-explanatory. In the 1906 fourth edition of Probability and Theory of Errors,[36] Robert Simpson Woodward called this "the computer's rule", indicating that it was then in common use by human computers who calculated mathematical tables. Churchill Eisenhart indicated the practice was already "well established" in data analysis by the 1940s.[37]
The origin of the term bankers' rounding remains more obscure. If this rounding method was ever a standard in banking, the evidence has proved extremely difficult to find. To the contrary, section 2 of the European Commission report The Introduction of the Euro and the Rounding of Currency Amounts[38] suggests that there had previously been no standard approach to rounding in banking; and it specifies that "half-way" amounts should be rounded up.
Until the 1980s, the rounding method used in floating-point computer arithmetic was usually fixed by the hardware, poorly documented, inconsistent, and different for each brand and model of computer. This situation changed after the IEEE 754 floating-point standard was adopted by most computer manufacturers. The standard allows the user to choose among several rounding modes, and in each case specifies precisely how the results should be rounded. These features made numerical computations more predictable and machine-independent, and made possible the efficient and consistent implementation of interval arithmetic.
Rounding to multiples of 5 or 2 is itself an object of study. For example, Jörg Baten used age heaping in many studies to evaluate the numeracy level of ancient populations. He developed the ABCC Index, which makes it possible to compare numeracy across regions even where no historical sources measured the population's literacy.[39]
Rounding functions in programming languages
Most programming languages provide functions or special syntax to round fractional numbers in various ways. The earliest numeric languages, such as FORTRAN and C, would provide only one method, usually truncation (toward zero). This default method could be implied in certain contexts, such as when assigning a fractional number to an integer variable, or using a fractional number as an index of an array. Other kinds of rounding had to be programmed explicitly; for example, rounding a positive number to the nearest integer could be implemented by adding 0.5 and truncating.
In the last decades, however, the syntax and the standard libraries of most languages have commonly provided at least the four basic rounding functions (up, down, to nearest, and toward zero). The tie-breaking method can vary depending on the language and version or might be selectable by the programmer. Several languages follow the lead of the IEEE 754 floating-point standard, and define these functions as taking a double-precision float argument and returning the result of the same type, which then may be converted to an integer if necessary. This approach may avoid spurious overflows because floating-point types have a larger range than integer types. Some languages, such as PHP, provide functions that round a value to a specified number of decimal digits (e.g., from 4321.5678 to 4321.57 or 4300). In addition, many languages provide a printf or similar string formatting function, which allows one to convert a fractional number to a string, rounded to a user-specified number of decimal places (the precision). On the other hand, truncation (round to zero) is still the default rounding method used by many languages, especially for the division of two integer values.
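For instance, in Python the four basic modes and the tie-breaking behavior look like this:

```python
import math

x = -2.7
down    = math.floor(x)   # -3: round down (toward -infinity)
up      = math.ceil(x)    # -2: round up (toward +infinity)
to_zero = math.trunc(x)   # -2: toward zero, the historical default
nearest = round(x)        # -3: round to nearest
tie     = round(2.5)      # 2:  Python breaks ties to the even neighbor
```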
In contrast, CSS and SVG do not define any specific maximum precision for numbers and measurements, which they treat and expose in their DOM and in their IDL interface as strings as if they had infinite precision, and do not discriminate between integers and floating-point values; however, the implementations of these languages will typically convert these numbers into IEEE 754 double-precision floating-point values before exposing the computed digits with a limited precision (notably within standard JavaScript or ECMAScript[40] interface bindings).
Other rounding standards
Some disciplines or institutions have issued standards or directives for rounding.
US weather observations
In a guideline issued in mid-1966,[41] the U.S. Office of the Federal Coordinator for Meteorology determined that weather data should be rounded to the nearest whole number, with the "round half up" tie-breaking rule. For example, 1.5 rounded to integer should become 2, and −1.5 should become −1. Prior to that date, the tie-breaking rule was "round half away from zero".
Negative zero in meteorology
Some meteorologists may write "−0" to indicate a temperature between 0.0 and −0.5 degrees (exclusive) that was rounded to an integer. This notation is used when the negative sign is considered important, no matter how small the magnitude; for example, when rounding temperatures in the Celsius scale, where below zero indicates freezing.
See also
• Cash rounding, dealing with the absence of extremely low-value coins
• Data binning, a similar operation
• Gal's accurate tables
• Interval arithmetic
• ISO/IEC 80000
• Kahan summation algorithm
• Party-list proportional representation, an application of rounding to integers that has been thoroughly investigated
• Signed-digit representation
• Truncation
Notes
1. A case where double rounding always leads to the same value as directly rounding to the final precision is when the radix is odd.
References
1. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling: Linear Algebra as an Introduction to Abstract Mathematics. World Scientific, Singapur 2016, ISBN 978-981-4730-35-8, p. 186.
2. Kulisch, Ulrich W. (July 1977). "Mathematical foundation of computer arithmetic". IEEE Transactions on Computers. C-26 (7): 610–621. doi:10.1109/TC.1977.1674893. S2CID 35883481.
3. Higham, Nicholas John (2002). Accuracy and stability of numerical algorithms. p. 54. ISBN 978-0-89871-521-7.
4. "java.math.RoundingMode". Oracle.
5. "decimal — Decimal fixed point and floating point arithmetic". Python Software Foundation.
6. Engineering Drafting Standards Manual (NASA), X-673-64-1F, p90
7. Schedule 1 of the Decimal Currency Act 1969
8. IBM z/Architecture Principles of Operation
9. Gupta, Suyog; Angrawl, Ankur; Gopalakrishnan, Kailash; Narayanan, Pritish (2016-02-09). "Deep Learning with Limited Numerical Precision". p. 3. arXiv:1502.02551 [cs.LG].
10. "Zener Diode Voltage Regulators" (PDF). Archived (PDF) from the original on 2011-07-13. Retrieved 2010-11-24.
11. "Build a Mirror Tester"
12. Bruce Trump, Christine Schneider. "Excel Formula Calculates Standard 1%-Resistor Values". Electronic Design, 2002-01-21.
13. Parker, D. Stott; Eggert, Paul R.; Pierce, Brad (2000-03-28). "Monte Carlo Arithmetic: a framework for the statistical analysis of roundoff errors". IEEE Computation in Science and Engineering.
14. Borman, Phil; Chatfield, Marion (2015-11-10). "Avoid the perils of using rounded data". Journal of Pharmaceutical and Biomedical Analysis. 115: 506–507. doi:10.1016/j.jpba.2015.07.021. PMID 26299526.
15. Deborah R. Hensler (2000). Class Action Dilemmas: Pursuing Public Goals for Private Gain. RAND. pp. 255–293. ISBN 0-8330-2601-1.
16. Samuel A. Figueroa (July 1995). "When is double rounding innocuous?". ACM SIGNUM Newsletter. ACM. 30 (3): 21–25. doi:10.1145/221332.221334. S2CID 14829295.
17. Roger Golliver (October 1998). "Efficiently producing default orthogonal IEEE double results using extended IEEE hardware" (PDF). Intel.
18. Darcy, Joseph D. "JEP 306: Restore Always-Strict Floating-Point Semantics". Retrieved 2021-09-12.
19. Moore, J. Strother; Lynch, Tom; Kaufmann, Matt (1996). "A mechanically checked proof of the correctness of the kernel of the AMD5K86 floating-point division algorithm" (PDF). IEEE Transactions on Computers. 47. CiteSeerX 10.1.1.43.3309. doi:10.1109/12.713311. Retrieved 2016-08-02.
20. Boldo, Sylvie; Melquiond, Guillaume (2008). "Emulation of a FMA and correctly-rounded sums: proved algorithms using rounding to odd" (PDF). IEEE Transactions on Computers. 57 (4): 462–471. doi:10.1109/TC.2007.70819. S2CID 1850330. Retrieved 2016-08-02.
21. "21718 – real.c rounding not perfect". gcc.gnu.org.
22. Kahan, William Morton. "A Logarithm Too Clever by Half". Retrieved 2008-11-14.
23. Muller, Jean-Michel; Brisebarre, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Stehlé, Damien; Torres, Serge (2010). "Chapter 12: Solving the Table Maker's Dilemma". Handbook of Floating-Point Arithmetic (1 ed.). Birkhäuser. doi:10.1007/978-0-8176-4705-6. ISBN 978-0-8176-4704-9. LCCN 2009939668.
24. "NA Digest Sunday, April 18, 1999 Volume 99 : Issue 16". 1999-04-18. Retrieved 2022-08-29.
25. "Math Library for Java". Archived from the original on 1999-05-08.
26. "Accurate Portable Mathematical Library". Archived from the original on 2005-02-07.
27. mathlib on GitHub.
28. "libultim – ultimate correctly-rounded elementary-function library". Archived from the original on 2021-03-01.
29. "Git - glibc.git/commit". Sourceware.org. Retrieved 2022-07-18.
30. "libmcr – correctly-rounded elementary-function library".
31. "CRlibm – Correctly Rounded mathematical library". Archived from the original on 2016-10-27.
32. crlibm on GitHub
33. "The CORE-MATH project". Retrieved 2022-08-30.
34. Sibidanov, Alexei; Zimmermann, Paul; Glondu, Stéphane (2022). The CORE-MATH Project. 29th IEEE Symposium on Computer Arithmetic (ARITH 2022). Retrieved 2022-08-30.
35. Duncan J. Melville. "YBC 7289 clay tablet". 2006
36. Probability and theory of errors. 1906. {{cite book}}: |website= ignored (help)
37. Churchill Eisenhart (1947). "Effects of Rounding or Grouping Data". In Eisenhart; Hastay; Wallis (eds.). Selected Techniques of Statistical Analysis for Scientific and Industrial Research, and Production and Management Engineering. New York: McGraw-Hill. pp. 187–223. Retrieved 2014-01-30.
38. "The Introduction of the Euro and the Rounding of Currency Amounts" (PDF). Archived (PDF) from the original on 2010-10-09. Retrieved 2011-08-19.
39. Baten, Jörg (2009). "Quantifying Quantitative Literacy: Age Heaping and the History of Human Capital" (PDF). Journal of Economic History. 69 (3): 783–808. doi:10.1017/S0022050709001120. hdl:10230/481. S2CID 35494384.
40. "ECMA-262 ECMAScript Language Specification" (PDF). ecma-international.org.
41. OFCM, 2005: Federal Meteorological Handbook No. 1 Archived 1999-04-20 at the Wayback Machine, Washington, DC., 104 pp.
External links
• Weisstein, Eric W. "Rounding". MathWorld.
• An introduction to different rounding algorithms that is accessible to a general audience but especially useful to those studying computer science and electronics.
• How To Implement Custom Rounding Procedures by Microsoft (broken)
Round-off error
In computing, a roundoff error,[1] also called rounding error,[2] is the difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic.[3] Rounding errors are due to inexactness in the representation of real numbers and the arithmetic operations done with them. This is a form of quantization error.[4] When using approximation equations or algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals of numerical analysis is to estimate computation errors.[5] Computation errors, also called numerical errors, include both truncation errors and roundoff errors.
When a sequence of calculations involving roundoff error is performed, errors may accumulate, sometimes dominating the calculation. In ill-conditioned problems, significant error may accumulate.[6]
In short, there are two major facets of roundoff errors involved in numerical calculations:[7]
1. The ability of computers to represent both magnitude and precision of numbers is inherently limited.
2. Certain numerical manipulations are highly sensitive to roundoff errors. This can result both from mathematical considerations and from the way in which computers perform arithmetic operations.
Representation error
The error introduced by attempting to represent a number using a finite string of digits is a form of roundoff error called representation error.[8] Here are some examples of representation error in decimal representations:
Notation | Representation | Approximation | Error
1/7 | 0.142 857... | 0.142 857 | 0.000 000 142 857...
ln 2 | 0.693 147 180 559 945 309 41... | 0.693 147 | 0.000 000 180 559 945 309 41...
log10 2 | 0.301 029 995 663 981 195 21... | 0.3010 | 0.000 029 995 663 981 195 21...
∛2 | 1.259 921 049 894 873 164 76... | 1.259 92 | 0.000 001 049 894 873 164 76...
√2 | 1.414 213 562 373 095 048 80... | 1.414 21 | 0.000 003 562 373 095 048 80...
e | 2.718 281 828 459 045 235 36... | 2.718 281 828 459 045 | 0.000 000 000 000 000 235 36...
π | 3.141 592 653 589 793 238 46... | 3.141 592 653 589 793 | 0.000 000 000 000 000 238 46...
Increasing the number of digits allowed in a representation reduces the magnitude of possible roundoff errors, but any representation limited to finitely many digits will still cause some degree of roundoff error for uncountably many real numbers. Additional digits used for intermediary steps of a calculation are known as guard digits.[9]
Rounding multiple times can cause error to accumulate.[10] For example, if 9.945309 is rounded to two decimal places (9.95), then rounded again to one decimal place (10.0), the total error is 0.054691. Rounding 9.945309 to one decimal place (9.9) in a single step introduces less error (0.045309). This can occur, for example, when software performs arithmetic in x86 80-bit floating-point and then rounds the result to IEEE 754 binary64 floating-point.
Floating-point number system
Compared with the fixed-point number system, the floating-point number system is more efficient in representing real numbers so it is widely used in modern computers. While the real numbers $\mathbb {R} $ are infinite and continuous, a floating-point number system $F$ is finite and discrete. Thus, representation error, which leads to roundoff error, occurs under the floating-point number system.
Notation of floating-point number system
A floating-point number system $F$ is characterized by $4$ integers:
• $\beta $: base or radix
• $p$: precision
• $[L,U]$: exponent range, where $L$ is the lower bound and $U$ is the upper bound
Any $x\in F$ has the following form:
$x=\pm (\underbrace {d_{0}.d_{1}d_{2}\ldots d_{p-1}} _{\text{mantissa}})_{\beta }\times \beta ^{\overbrace {E} ^{\text{exponent}}}=\pm d_{0}\times \beta ^{E}+d_{1}\times \beta ^{E-1}+\ldots +d_{p-1}\times \beta ^{E-(p-1)}$
where $d_{i}$ is an integer such that $0\leq d_{i}\leq \beta -1$ for $i=0,1,\ldots ,p-1$, and $E$ is an integer such that $L\leq E\leq U$.
Normalized floating-number system
• A floating-point number system is normalized if the leading digit $d_{0}$ is always nonzero unless the number is zero.[3] Since the mantissa is $d_{0}.d_{1}d_{2}\ldots d_{p-1}$, the mantissa of a nonzero number in a normalized system satisfies $1\leq {\text{mantissa}}<\beta $. Thus, the normalized form of a nonzero IEEE floating-point number is $\pm 1.bb\ldots b\times 2^{E}$ where $b\in \{0,1\}$. In binary, the leading digit is always $1$, so it is not written out and is called the implicit bit. This gives an extra bit of precision, reducing the roundoff error caused by representation error.
• Since a floating-point number system $F$ is finite and discrete, it cannot represent all real numbers: infinitely many real numbers must be approximated, through rounding rules, by finitely many representable ones. The floating-point approximation of a given real number $x$ is denoted $fl(x)$.
• The total number of normalized floating-point numbers is
$2(\beta -1)\beta ^{p-1}(U-L+1)+1,$
where
• $2$ counts choice of sign, being positive or negative
• $(\beta -1)$ counts choice of the leading digit
• $\beta ^{p-1}$ counts remaining mantissa
• $U-L+1$ counts choice of exponents
• $1$ counts the case when the number is $0$.
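The counting formula above can be sketched for a toy system (function name illustrative):

```python
def normalized_count(beta, p, L, U):
    """Total number of values in a normalized floating-point system
    with base beta, precision p, and exponent range [L, U],
    including zero: 2*(beta-1)*beta**(p-1)*(U-L+1) + 1."""
    return 2 * (beta - 1) * beta ** (p - 1) * (U - L + 1) + 1

# A toy binary system with p = 3 and exponents -1..1:
# 2 * 1 * 4 * 3 + 1 = 25 representable values.
count = normalized_count(2, 3, -1, 1)  # 25
```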
IEEE standard
In the IEEE standard the base is binary, i.e. $\beta =2$, and normalization is used. The IEEE standard stores the sign, exponent, and mantissa in separate fields of a floating point word, each of which has a fixed width (number of bits). The two most commonly used levels of precision for floating-point numbers are single precision and double precision.
Precision | Sign (bits) | Exponent (bits) | Mantissa (bits)
Single | 1 | 8 | 23
Double | 1 | 11 | 52
Machine epsilon
Machine epsilon can be used to measure the level of roundoff error in the floating-point number system. Here are two different definitions.[3]
• The machine epsilon, denoted $\epsilon _{\text{mach}}$, is the maximum possible absolute relative error in representing a nonzero real number $x$ in a floating-point number system.
$\epsilon _{\text{mach}}=\max _{x}{\frac {|x-fl(x)|}{|x|}}$
• The machine epsilon, denoted $\epsilon _{\text{mach}}$, is the smallest number $\epsilon $ such that $fl(1+\epsilon )>1$. Thus, $fl(1+\delta )=fl(1)=1$ whenever $|\delta |<\epsilon _{\text{mach}}$.
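Assuming Python (IEEE double precision), the second definition can be probed by halving; restricting $\epsilon$ to powers of two yields the conventional machine epsilon $2^{-52}$:

```python
import sys

# Find the smallest power-of-two eps with fl(1 + eps) > 1.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
# For IEEE double precision this gives 2**-52,
# which matches sys.float_info.epsilon.
```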
Roundoff error under different rounding rules
There are two common rounding rules, round-by-chop and round-to-nearest. The IEEE standard uses round-to-nearest.
• Round-by-chop: The base-$\beta $ expansion of $x$ is truncated after the $(p-1)$-th digit.
• This rounding rule is biased because it always moves the result toward zero.
• Round-to-nearest: $fl(x)$ is set to the nearest floating-point number to $x$. When there is a tie, the floating-point number whose last stored digit is even (also, the last digit, in binary form, is equal to 0) is used.
• For IEEE standard where the base $\beta $ is $2$, this means when there is a tie it is rounded so that the last digit is equal to $0$.
• This rounding rule is more accurate but more computationally expensive.
• Rounding so that the last stored digit is even when there is a tie ensures that it is not rounded up or down systematically. This is to try to avoid the possibility of an unwanted slow drift in long calculations due simply to a biased rounding.
• The following example illustrates the level of roundoff error under the two rounding rules.[3] The rounding rule, round-to-nearest, leads to less roundoff error in general.
x       Round-by-chop   Roundoff Error   Round-to-nearest   Roundoff Error
1.649   1.6             0.049            1.6                0.049
1.650   1.6             0.050            1.6                0.050
1.651   1.6             0.051            1.7                -0.049
1.699   1.6             0.099            1.7                -0.001
1.749   1.7             0.049            1.7                0.049
1.750   1.7             0.050            1.8                -0.050
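The tie-breaking behaviour in the table can be reproduced with Python's decimal module, which implements both rounding rules (a brief illustration, not part of the cited material):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_DOWN

q = Decimal('0.1')  # round to one digit after the point
# Tie cases: round-half-to-even sends 1.650 down (6 is even) and 1.750 up (8 is even).
assert Decimal('1.650').quantize(q, rounding=ROUND_HALF_EVEN) == Decimal('1.6')
assert Decimal('1.750').quantize(q, rounding=ROUND_HALF_EVEN) == Decimal('1.8')
# Round-by-chop always truncates toward zero.
assert Decimal('1.699').quantize(q, rounding=ROUND_DOWN) == Decimal('1.6')
```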
Calculating roundoff error in IEEE standard
Suppose round-to-nearest and IEEE double precision are used.
• Example: the decimal number $(9.4)_{10}=(1001.{\overline {0110}})_{2}$ can be rearranged into
$+1.\underbrace {0010110011001100110011001100110011001100110011001100} _{\text{52 bits}}110\ldots \times 2^{3}$
Since the 53rd bit to the right of the binary point is a 1 and is followed by other nonzero bits, the round-to-nearest rule requires rounding up, that is, adding 1 to the 52nd bit. Thus, the normalized floating-point representation in the IEEE standard of 9.4 is
$fl(9.4)=1.0010110011001100110011001100110011001100110011001101\times 2^{3}.$
• Now the roundoff error can be calculated when representing $9.4$ with $fl(9.4)$.
This representation is derived by discarding the infinite tail
$0.{\overline {1100}}\times 2^{-52}\times 2^{3}=0.{\overline {0110}}\times 2^{-51}\times 2^{3}=0.4\times 2^{-48}$
from the right and then adding $1\times 2^{-52}\times 2^{3}=2^{-49}$ in the rounding step.
Then $fl(9.4)=9.4-0.4\times 2^{-48}+2^{-49}=9.4+(0.2)_{10}\times 2^{-49}$.
Thus, the roundoff error is $(0.2\times 2^{-49})_{10}$.
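This error can be confirmed exactly in Python, since `Fraction(9.4)` recovers the exact rational value of the stored double (a quick verification of the derivation above):

```python
from fractions import Fraction

exact = Fraction(94, 10)   # the real number 9.4
stored = Fraction(9.4)     # exact value of the IEEE double nearest to 9.4
# The derivation gives fl(9.4) = 9.4 + 0.2 * 2**-49, i.e. 1/(5 * 2**49).
assert stored - exact == Fraction(1, 5 * 2**49)
```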
Measuring roundoff error by using machine epsilon
The machine epsilon $\epsilon _{\text{mach}}$ can be used to measure the level of roundoff error when using the two rounding rules above. Below are the formulas and corresponding proof.[3] The first definition of machine epsilon is used here.
Theorem
1. Round-by-chop: $\epsilon _{\text{mach}}=\beta ^{1-p}$
2. Round-to-nearest: $\epsilon _{\text{mach}}={\frac {1}{2}}\beta ^{1-p}$
Proof
Let $x=d_{0}.d_{1}d_{2}\ldots d_{p-1}d_{p}\ldots \times \beta ^{n}\in \mathbb {R} $ where $n\in [L,U]$, and let $fl(x)$ be the floating-point representation of $x$. Using round-by-chop,
${\begin{aligned}{\frac {|x-fl(x)|}{|x|}}&={\frac {|d_{0}.d_{1}d_{2}\ldots d_{p-1}d_{p}d_{p+1}\ldots \times \beta ^{n}-d_{0}.d_{1}d_{2}\ldots d_{p-1}\times \beta ^{n}|}{|d_{0}.d_{1}d_{2}\ldots \times \beta ^{n}|}}\\&={\frac {|d_{p}.d_{p+1}\ldots \times \beta ^{n-p}|}{|d_{0}.d_{1}d_{2}\ldots \times \beta ^{n}|}}\\&={\frac {|d_{p}.d_{p+1}d_{p+2}\ldots |}{|d_{0}.d_{1}d_{2}\ldots |}}\times \beta ^{-p}\end{aligned}}$
In order to determine the maximum of this quantity, one needs to find the maximum of the numerator and the minimum of the denominator. Since $d_{0}\neq 0$ (normalized system), the minimum value of the denominator is $1$. The numerator is bounded above by $(\beta -1).(\beta -1){\overline {(\beta -1)}}=\beta $. Thus, ${\frac {|x-fl(x)|}{|x|}}\leq {\frac {\beta }{1}}\times \beta ^{-p}=\beta ^{1-p}$. Therefore, $\epsilon _{\text{mach}}=\beta ^{1-p}$ for round-by-chop. The proof for round-to-nearest is similar.
• Note that the first definition of machine epsilon is not quite equivalent to the second definition when using the round-to-nearest rule but it is equivalent for round-by-chop.
Roundoff error caused by floating-point arithmetic
Even when the operands of an operation are machine numbers, numbers that can be represented exactly as floating-point numbers, performing floating-point arithmetic on them may lead to roundoff error in the final result.
Addition
Machine addition consists of lining up the decimal points of the two numbers to be added, adding them, and then storing the result again as a floating-point number. The addition itself can be done in higher precision but the result must be rounded back to the specified precision, which may lead to roundoff error.[3]
• For example, adding $1$ to $2^{-53}$ in IEEE double precision as follows,
${\begin{aligned}1.00\ldots 0\times 2^{0}+1.00\ldots 0\times 2^{-53}&=1.\underbrace {00\ldots 0} _{\text{52 bits}}\times 2^{0}+0.\underbrace {00\ldots 0} _{\text{52 bits}}1\times 2^{0}\\&=1.\underbrace {00\ldots 0} _{\text{52 bits}}1\times 2^{0}.\end{aligned}}$
This is saved as $1.\underbrace {00\ldots 0} _{\text{52 bits}}\times 2^{0}$ since round-to-nearest is used in IEEE standard. Therefore, $1+2^{-53}$ is equal to $1$ in IEEE double precision and the roundoff error is $2^{-53}$.
This example shows that roundoff error can be introduced when adding a large number and a small number. The shifting of the decimal points in the mantissas to make the exponents match causes the loss of some of the less significant digits. The loss of precision may be described as absorption.[11]
Note that the addition of two floating-point numbers can produce roundoff error when their sum is an order of magnitude greater than that of the larger of the two.
• For example, consider a normalized floating-point number system with base $10$ and precision $2$. Then $fl(62)=6.2\times 10^{1}$ and $fl(41)=4.1\times 10^{1}$. Note that $62+41=103$ but $fl(103)=1.0\times 10^{2}$. There is a roundoff error of $103-fl(103)=3$.
This kind of error can occur alongside an absorption error in a single operation.
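Both effects can be demonstrated in Python (an illustrative sketch): absorption in IEEE double precision, and the two-digit decimal example above via the decimal module:

```python
from decimal import Decimal, localcontext

# Absorption: near 1e16 the spacing between doubles is 2, so adding 1 is lost.
assert 1e16 + 1.0 == 1e16

# The base-10, precision-2 example: 62 + 41 = 103 rounds to 1.0e2.
with localcontext() as ctx:
    ctx.prec = 2
    s = Decimal(62) + Decimal(41)
    assert s == Decimal('1.0E+2')
```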
Multiplication
In general, the product of two p-digit mantissas contains up to 2p digits, so the result might not fit in the mantissa.[3] Thus roundoff error will be involved in the result.
• For example, consider a normalized floating-point number system with the base $\beta =10$ and the mantissa digits are at most $2$. Then $fl(77)=7.7\times 10^{1}$ and $fl(88)=8.8\times 10^{1}$. Note that $77\times 88=6776$ but $fl(6776)=6.7\times 10^{3}$ since there are at most $2$ mantissa digits. The roundoff error would be $6776-fl(6776)=6776-6.7\times 10^{3}=76$.
Division
In general, the quotient of two p-digit mantissas may contain more than p digits. Thus roundoff error will be involved in the result.
• For example, if the normalized floating-point number system above is still being used, then $1/3=0.333\ldots $ but $fl(1/3)=fl(0.333\ldots )=3.3\times 10^{-1}$. So, the tail $0.333\ldots -3.3\times 10^{-1}=0.00333\ldots $ is cut off.
Subtraction
Absorption also applies to subtraction.
• For example, subtracting $2^{-60}$ from $1$ in IEEE double precision as follows,
${\begin{aligned}1.00\ldots 0\times 2^{0}-1.00\ldots 0\times 2^{-60}&=\underbrace {1.00\ldots 0} _{\text{60 bits}}\times 2^{0}-\underbrace {0.00\ldots 01} _{\text{60 bits}}\times 2^{0}\\&=\underbrace {0.11\ldots 1} _{\text{60 bits}}\times 2^{0}.\end{aligned}}$
This is saved as $\underbrace {1.00\ldots 0} _{\text{53 bits}}\times 2^{0}$ since round-to-nearest is used in IEEE standard. Therefore, $1-2^{-60}$ is equal to $1$ in IEEE double precision and the roundoff error is $-2^{-60}$.
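The absorption in this subtraction can be checked directly in Python:

```python
# 2**-60 is well below half the spacing of doubles just below 1.0 (which is 2**-53),
# so it is absorbed and the difference rounds back to 1.0 exactly.
assert 1.0 - 2**-60 == 1.0
```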
Subtracting two nearly equal numbers is called subtractive cancellation.[3] When the leading digits are cancelled, the result may be too small to be represented exactly and will simply be represented as $0$.
• For example, let $|\epsilon |<\epsilon _{\text{mach}}$, using the second definition of machine epsilon. What is the result of $(1+\epsilon )-(1-\epsilon )$?
It is known that $1+\epsilon $ and $1-\epsilon $ are nearly equal numbers, and $(1+\epsilon )-(1-\epsilon )=1+\epsilon -1+\epsilon =2\epsilon $. However, in the floating-point number system, $fl((1+\epsilon )-(1-\epsilon ))=fl(1+\epsilon )-fl(1-\epsilon )=1-1=0$. Although $2\epsilon $ is easily big enough to be represented, both instances of $\epsilon $ have been rounded away giving $0$.
Even with a somewhat larger $\epsilon $, the result is still significantly unreliable in typical cases. There is not much faith in the accuracy of the value because the greatest uncertainty in any floating-point number lies in its rightmost digits.
• For example, $1.99999\times 10^{2}-1.99998\times 10^{2}=0.00001\times 10^{2}=1\times 10^{-5}\times 10^{2}=1\times 10^{-3}$. The result $1\times 10^{-3}$ is clearly representable, but there is not much faith in it.
This is closely related to the phenomenon of catastrophic cancellation, in which the two numbers are known to be approximations.
Accumulation of roundoff error
Errors can be magnified or accumulated when a sequence of calculations is applied on an initial input with roundoff error due to inexact representation.
Unstable algorithms
An algorithm or numerical process is called stable if small changes in the input only produce small changes in the output, and unstable if large changes in the output are produced.[12] For example, the computation of $f(x)={\sqrt {1+x}}-1$ using the "obvious" method is unstable near $x=0$ due to the large error introduced in subtracting two similar quantities, whereas the equivalent expression $\textstyle {f(x)={\frac {x}{{\sqrt {1+x}}+1}}}$ is stable.[12]
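The difference between the two formulations is easy to observe numerically. In the sketch below, 5e-13 is the leading Taylor term $x/2$ of $f(x)$ at $x=10^{-12}$; the naive form is off by the cancellation error, while the stable form agrees with the true value to full precision:

```python
import math

x = 1e-12
naive  = math.sqrt(1.0 + x) - 1.0        # unstable: subtracts two nearly equal quantities
stable = x / (math.sqrt(1.0 + x) + 1.0)  # algebraically equivalent, stable

# True value is x/2 - x**2/8 + ... ~= 5e-13 - 1.25e-25.
assert abs(stable - 5e-13) < 1e-24                   # stable form: tiny error
assert abs(naive - 5e-13) > abs(stable - 5e-13)      # naive form: much larger error
```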
Ill-conditioned problems
Even if a stable algorithm is used, the solution to a problem may still be inaccurate due to the accumulation of roundoff error when the problem itself is ill-conditioned.
The condition number of a problem is the ratio of the relative change in the solution to the relative change in the input.[3] A problem is well-conditioned if small relative changes in input result in small relative changes in the solution. Otherwise, the problem is ill-conditioned.[3] In other words, a problem is ill-conditioned if its condition number is "much larger" than 1.
The condition number is introduced as a measure of the roundoff errors that can result when solving ill-conditioned problems.[7]
See also
• Precision (arithmetic)
• Truncation
• Rounding
• Loss of significance
• Floating point
• Kahan summation algorithm
• Machine epsilon
• Wilkinson's polynomial
References
1. Butt, Rizwan (2009), Introduction to Numerical Analysis Using MATLAB, Jones & Bartlett Learning, pp. 11–18, ISBN 978-0-76377376-2
2. Ueberhuber, Christoph W. (1997), Numerical Computation 1: Methods, Software, and Analysis, Springer, pp. 139–146, ISBN 978-3-54062058-7
3. Forrester, Dick (2018). Math/Comp241 Numerical Methods (lecture notes). Dickinson College.
4. Aksoy, Pelin; DeNardis, Laura (2007), Information Technology in Theory, Cengage Learning, p. 134, ISBN 978-1-42390140-2
5. Ralston, Anthony; Rabinowitz, Philip (2012), A First Course in Numerical Analysis, Dover Books on Mathematics (2nd ed.), Courier Dover Publications, pp. 2–4, ISBN 978-0-48614029-2
6. Chapman, Stephen (2012), MATLAB Programming with Applications for Engineers, Cengage Learning, p. 454, ISBN 978-1-28540279-6
7. Chapra, Steven (2012). Applied Numerical Methods with MATLAB for Engineers and Scientists (3rd ed.). McGraw-Hill. ISBN 9780073401102.
8. Laplante, Philip A. (2000). Dictionary of Computer Science, Engineering and Technology. CRC Press. p. 420. ISBN 978-0-84932691-2.
9. Higham, Nicholas John (2002). Accuracy and Stability of Numerical Algorithms (2 ed.). Society for Industrial and Applied Mathematics (SIAM). pp. 43–44. ISBN 978-0-89871521-7.
10. Volkov, E. A. (1990). Numerical Methods. Taylor & Francis. p. 24. ISBN 978-1-56032011-1.
11. Biran, Adrian B.; Breiner, Moshe (2010). "5". What Every Engineer Should Know About MATLAB and Simulink. Boca Raton, Florida: CRC Press. pp. 193–194. ISBN 978-1-4398-1023-1.
12. Collins, Charles (2005). "Condition and Stability" (PDF). Department of Mathematics in University of Tennessee. Retrieved 2018-10-28.
Further reading
• Matt Parker (2021). Humble Pi: When Math Goes Wrong in the Real World. Riverhead Books. ISBN 978-0593084694.
External links
• Roundoff Error at MathWorld.
• Goldberg, David (March 1991). "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (PDF). ACM Computing Surveys. 23 (1): 5–48. doi:10.1145/103162.103163. S2CID 222008826. Retrieved 2016-01-20. (, )
• 20 Famous Software Disasters
∂
The character ∂ (Unicode: U+2202) is a stylized cursive d mainly used as a mathematical symbol, usually to denote a partial derivative such as ${\partial z}/{\partial x}$ (read as "the partial derivative of z with respect to x").[1][2] It is also used for boundary of a set, the boundary operator in a chain complex, and the conjugate of the Dolbeault operator on smooth differential forms over a complex manifold. It should be distinguished from other similar-looking symbols such as lowercase Greek letter delta (𝛿) or the lowercase Latin letter eth (ð).
Not to be confused with 𝛿, ð, σ, ə, ә, or d.
History
The symbol was originally introduced in 1770 by Nicolas de Condorcet, who used it for a partial differential, and adopted for the partial derivative by Adrien-Marie Legendre in 1786.[3] It represents a specialized cursive type of the letter d, just as the integral sign originates as a specialized type of a long s (first used in print by Leibniz in 1686). Use of the symbol was discontinued by Legendre, but it was taken up again by Carl Gustav Jacob Jacobi in 1841,[4] whose usage became widely adopted.[5]
Names and coding
The symbol is variously referred to as "partial", "curly d", "rounded d", "curved d", "dabba",[6] or "Jacobi's delta",[5] or as "del"[7] (but this name is also used for the "nabla" symbol ∇). It may also be pronounced simply "dee",[8] "partial dee",[9][10] "doh",[11][12] or "die".[13]
The Unicode character U+2202 ∂ PARTIAL DIFFERENTIAL is accessed by the HTML entities &part; or &#8706;, and the equivalent LaTeX symbol (Computer Modern glyph: $\partial $) is accessed by \partial.
Uses
∂ is also used to denote the following:
• The Jacobian ${\frac {\partial (x,y,z)}{\partial (u,v,w)}}$.
• The boundary of a set in topology.
• The boundary operator on a chain complex in homological algebra.
• The boundary operator of a differential graded algebra.
• The conjugate of the Dolbeault operator on complex differential forms.
• The boundary ∂(S) of a set of vertices S in a graph is the set of edges leaving S, which defines a cut.
See also
• d'Alembert operator
• Differentiable programming
• Differential operator § Notations
• List of mathematical symbols
• Notation for differentiation
• 𝒹 (Unicode MATHEMATICAL SCRIPT SMALL D)
• ꝺ (lowercase d in Insular script)
• δ (lowercase Greek Delta)
• д (lowercase Cyrillic De, looks similar when italicized in some typefaces)
References
1. Christopher, Essex (2013). Calculus : a complete course. p. 682. ISBN 9780321781079. OCLC 872345701.
2. "Calculus III - Partial Derivatives". tutorial.math.lamar.edu. Retrieved 2020-09-16.
3. Adrien-Marie Legendre, "Memoire sur la manière de distinguer les maxima des minima dans le Calcul des Variations," Histoire de l'Académie Royale des Sciences (1786), pp. 7–37.
4. Carl Gustav Jacob Jacobi, "De determinantibus Functionalibus," Crelle's Journal 22 (1841), pp. 319–352.
5. "The "curly d" was used in 1770 by Antoine-Nicolas Caritat, Marquis de Condorcet (1743-1794) in 'Memoire sur les Equations aux différence partielles,' which was published in Histoire de l'Académie Royale des Sciences, pp. 151-178, Annee M. DCCLXXIII (1773). On page 152, Condorcet says:
Dans toute la suite de ce Memoire, dz & ∂z désigneront ou deux differences partielles de z, dont une par rapport a x, l'autre par rapport a y, ou bien dz sera une différentielle totale, & ∂z une difference partielle.
However, the "curly d" was first used in the form ∂u/∂x by Adrien Marie Legendre in 1786 in his 'Memoire sur la manière de distinguer les maxima des minima dans le Calcul des Variations,' Histoire de l'Académie Royale des Sciences, Annee M. DCCLXXXVI (1786), pp. 7-37, Paris, M. DCCXXXVIII (1788). On footnote of page 8, it reads:
Pour éviter toute ambiguité, je représenterai par ∂u/∂x le coefficient de x dans la différence de u, & par du/dx la différence complète de u divisée par dx.
Legendre abandoned the symbol and it was re-introduced by Carl Gustav Jacob Jacobi in 1841. Jacobi used it extensively in his remarkable paper 'De determinantibus Functionalibus" Crelle's Journal, Band 22, pp. 319-352, 1841 (pp. 393-438 of vol. 1 of the Collected Works).
Sed quia uncorum accumulatio et legenti et scribenti molestior fieri solet, praetuli characteristica d differentialia vulgaria, differentialia autem partialia characteristica ∂ denotare.
The "curly d" symbol is sometimes called the "rounded d" or "curved d" or Jacobi's delta. It corresponds to the cursive "dey" (equivalent to our d) in the Cyrillic alphabet." Aldrich, John. "Earliest Uses of Symbols of Calculus". Retrieved 16 January 2014.
6. Gokhale, Mujumdar, Kulkarni, Singh, Atal, Engineering Mathematics I, p. 10.2, Nirali Prakashan ISBN 8190693549.
7. Bhardwaj, R.S. (2005), Mathematics for Economics & Business (2nd ed.), p. 6.4, ISBN 9788174464507
8. Silverman, Richard A. (1989), Essential Calculus: With Applications, p. 216, ISBN 9780486660974
9. Pemberton, Malcolm; Rau, Nicholas (2011), Mathematics for Economists: An Introductory Textbook, p. 271, ISBN 9781442612761
10. Munem, Mustafa; Foulis, David (1978). Calculus with Analytic Geometry. New York, NY: Worth Publishers, Inc. p. 828. ISBN 0-87901-087-8.
11. Bowman, Elizabeth (2014), Video Lecture for University of Alabama in Huntsville, archived from the original on 2021-12-22
12. Karmalkar, S., Department of Electrical Engineering, IIT Madras (2008), Lecture-25-PN Junction(Contd), archived from the original on 2021-12-22, retrieved 2020-04-22
13. Christopher, Essex; Adams, Robert Alexander (2014). Calculus : a complete course (Eighth ed.). p. 682. ISBN 9780321781079. OCLC 872345701.
Rouse Ball Professor of Mathematics
The Rouse Ball Professorship of Mathematics is one of the senior chairs in the Mathematics Departments at the University of Cambridge and the University of Oxford. The two positions were founded in 1927 by a bequest from the mathematician W. W. Rouse Ball. At Cambridge, this bequest was made with the "hope (but not making it in any way a condition) that it might be found practicable for such Professor or Reader to include in his or her lectures and treatment historical and philosophical aspects of the subject."[1]
List of Rouse Ball Professors at Cambridge
• 1928–1950 John Edensor Littlewood
• 1950–1958 Abram Samoilovitch Besicovitch
• 1958–1969 Harold Davenport
• 1971–1993 John G. Thompson
• 1994–1997 Nigel Hitchin
• 1998–2020 William Timothy Gowers
• 2023– Wendelin Werner[2]
List of Rouse Ball Professors at Oxford
The chair at Oxford was established with a £25,000 bequest and was initially advertised by the University as a Chair in Mathematical Physics.[3] The Rouse Ball Professor is now hosted at the university's Mathematical Institute.[4]
• 1928–1950 E. A. Milne
• 1952–1972 Charles Coulson
• 1973–1999 Roger Penrose, Emeritus Rouse Ball Professor of Mathematics
• 1999–2020 Philip Candelas, Emeritus Rouse Ball Professor of Mathematics[5]
• 2020– Luis Fernando Alday, presently Rouse Ball Professor of Mathematics[4]
See also
• Rouse Ball Professor of English Law
References
1. "The University Officers", Statutes and Ordinances of the University of Cambridge (PDF), University of Cambridge, p. 673
2. "Cambridge University Reporter 6683". www.admin.cam.ac.uk. Retrieved 31 May 2023.
3. Fauvel, John; Flood, Raymond; Wilson, Robin (2013). Oxford figures : eight centuries of the mathematical sciences (Second ed.). Oxford: Oxford University Press. pp. 313–314. ISBN 9780199681976.
4. "Luis Fernando Alday | Mathematical Institute". www.maths.ox.ac.uk. Retrieved 27 June 2022.
5. Fauvel, John; Flood, Raymond; Wilson, Robin (2013). Oxford figures : eight centuries of the mathematical sciences (Second ed.). Oxford: Oxford University Press. p. 360. ISBN 9780199681976.
Rousseeuw Prize for Statistics
The Rousseeuw Prize for Statistics awards innovations in statistical research with impact on society. This biennial prize is awarded in even years, and consists of a medal, a certificate, and a monetary reward of US$1,000,000, similar to the Nobel Prize in other disciplines.[1] The home institution of the Prize is the King Baudouin Foundation (KBF) in Belgium, which appoints the international jury and carries out the selection procedure. The award money comes from the Rousseeuw Foundation created by the statistician Peter Rousseeuw.
The Rousseeuw Prize for Statistics
King Philippe handing out the first Rousseeuw Prize
Awarded forinnovations in statistical research with impact on statistical practice and society
CountryBelgium
Presented by
• The King Baudouin Foundation
• The Rousseeuw Foundation
Reward(s)A medal, a certificate, and a monetary award of US$1,000,000
First awarded2022
Websiterousseeuwprize.org
The first Rousseeuw Prize was awarded on October 12, 2022, at KU Leuven, presented by His Majesty King Philippe of Belgium.[2][3] The awarded topic was Causal Inference with application in Medicine and Public Health, with laureates James Robins, Andrea Rotnitzky, Thomas Richardson, Miguel Hernán and Eric Tchetgen Tchetgen.[4][5][6][7][8]
Laureates
Year Laureate Institution Country Awarded innovation
2022 James Robins[9] Harvard School of Public Health United States "for their pioneering work on Causal Inference with applications in Medicine and Public Health."[5]
Andrea Rotnitzky[10] Torcuato di Tella University Argentina
Thomas Richardson[11] University of Washington United States
Miguel Hernán[9] Harvard School of Public Health United States
Eric Tchetgen Tchetgen[12] Wharton School of the University of Pennsylvania United States
Nominations for the prize are submitted to its website[1] together with letters of recommendation. The organizers of the prize and its ceremony are Mia Hubert and Stefan Van Aelst.
See also
• International Prize in Statistics
• COPSS Presidents' Award
• COPSS Distinguished Achievement Award and Lectureship
References
1. "Rousseeuw Prize (about)". Retrieved February 9, 2023.
2. "King Philippe presents Prize of 1 million dollars at KU Leuven (extract from television news, in Dutch)". Retrieved November 3, 2022.
3. "The Rousseeuw Prize for Statistics (from the website of the Royal Palace, in Dutch and French)". Retrieved November 3, 2022.
4. "Statistics gets $1 million award (Science News)". Retrieved November 2, 2022.
5. "First Rousseeuw Prize Awarded for Work on Causal Inference (AMSTAT News)". August 2022. Retrieved November 2, 2022.
6. "First Rousseeuw Prize for Statistics awarded for pioneering research on causal inference". Retrieved November 2, 2022.
7. "Rousseeuw Prize winners announced (in Institute of Mathematical Statistics Bulletin)". Retrieved November 3, 2022.
8. "Rousseeuw Prize for Statistics (from RTBF, the French-language public Belgian Radio)". Retrieved November 3, 2022.
9. "Biostats Faculty Receive Prestigious Rousseeuw Prize for Statistics". Retrieved January 13, 2023.
10. "Andrea Rotnitzky, ganadora del Rousseeuw Prize for Statistics". Retrieved January 13, 2023.
11. "Richardson is co-recipient of the 2022 Rousseeuw Prize for Statistics". Retrieved January 13, 2023.
12. "Eric J. Tchetgen Tchetgen Awards". Retrieved January 13, 2023.
External links
• The Rousseeuw Prize for Statistics, official site
• The King Baudouin Foundation, official site
Routh's theorem
In geometry, Routh's theorem determines the ratio of areas between a given triangle and a triangle formed by the pairwise intersections of three cevians. The theorem states that if in triangle $ABC$ points $D$, $E$, and $F$ lie on segments $BC$, $CA$, and $AB$, then writing ${\tfrac {CD}{BD}}=x$, ${\tfrac {AE}{CE}}=y$, and ${\tfrac {BF}{AF}}=z$, the signed area of the triangle formed by the cevians $AD$, $BE$, and $CF$ is
$S_{ABC}{\frac {(xyz-1)^{2}}{(xy+y+1)(yz+z+1)(zx+x+1)}},$
where $S_{ABC}$ is the area of the triangle $ABC$.
This theorem was given by Edward John Routh on page 82 of his Treatise on Analytical Statics with Numerous Examples in 1896. The particular case $x=y=z=2$ has become popularized as the one-seventh area triangle. The $x=y=z=1$ case implies that the three medians are concurrent (through the centroid).
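The theorem can be verified numerically with exact rational arithmetic. The sketch below (an illustration, with the ratio conventions of the statement above) constructs the cevian triangle for a reference triangle and recovers $1/7$ for $x=y=z=2$ and $0$ for the concurrent-median case:

```python
from fractions import Fraction as Fr

def intersect(p1, p2, p3, p4):
    # Intersection of line p1p2 with line p3p4 (standard determinant formula).
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def tri_area(p, q, r):
    # Half the absolute cross product (shoelace formula).
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def routh_ratio(x, y, z):
    A, B, C = (Fr(0), Fr(0)), (Fr(1), Fr(0)), (Fr(0), Fr(1))
    lerp = lambda P, Q, t: (P[0] + t * (Q[0] - P[0]), P[1] + t * (Q[1] - P[1]))
    D = lerp(B, C, Fr(1) / (1 + x))   # CD/BD = x
    E = lerp(C, A, Fr(1) / (1 + y))   # AE/CE = y
    F = lerp(A, B, Fr(1) / (1 + z))   # BF/AF = z
    P = intersect(B, E, C, F)
    Q = intersect(C, F, A, D)
    R = intersect(A, D, B, E)
    return tri_area(P, Q, R) / tri_area(A, B, C)

assert routh_ratio(Fr(2), Fr(2), Fr(2)) == Fr(1, 7)   # one-seventh area triangle
assert routh_ratio(Fr(1), Fr(1), Fr(1)) == 0          # medians are concurrent
```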
Proof
Suppose that the area of triangle $ABC$ is 1. For triangle $ABD$ and line $FRC$, using Menelaus's theorem, we obtain:
${\frac {AF}{FB}}\times {\frac {BC}{CD}}\times {\frac {DR}{RA}}=1$
Then ${\frac {DR}{RA}}={\frac {BF}{FA}}\times {\frac {DC}{CB}}={\frac {zx}{x+1}}$, so the area of triangle $ARC$ is:
$S_{ARC}={\frac {AR}{AD}}S_{ADC}={\frac {AR}{AD}}\times {\frac {DC}{BC}}S_{ABC}={\frac {x}{zx+x+1}}$
Similarly, $S_{BPA}={\frac {y}{xy+y+1}}$ and $S_{CQB}={\frac {z}{yz+z+1}}$. Thus the area of triangle $PQR$ is:
${\begin{aligned}S_{PQR}&=S_{ABC}-S_{ARC}-S_{BPA}-S_{CQB}\\&=1-{\frac {x}{zx+x+1}}-{\frac {y}{xy+y+1}}-{\frac {z}{yz+z+1}}\\&={\frac {(xyz-1)^{2}}{(xz+x+1)(yx+y+1)(zy+z+1)}}.\end{aligned}}$
Citations
The citation commonly given for Routh's theorem is Routh's Treatise on Analytical Statics with Numerous Examples, Volume 1, Chap. IV, in the second edition of 1896, p. 82, possibly because that edition has been more readily to hand. However, Routh gave the theorem already in the first edition of 1891, Volume 1, Chap. IV, p. 89. Although there is a change in pagination between the editions, the wording of the relevant footnote remained the same.
Routh concludes his extended footnote with a caveat:
"The author has not met with these expressions for the areas of two triangles that often occur. He has therefore placed them here in order that the argument in the text may be more easily understood."
Presumably, Routh felt those circumstances had not changed in the five years between editions. On the other hand, the title of Routh's book had been used earlier by Isaac Todhunter; both had been coached by William Hopkins.
Although Routh published the theorem in his book, that is not its first published statement. It is stated and proved as rider (vii) on page 33 of Solutions of the Cambridge Senate-house Problems and Riders for the Year 1878 (the mathematical tripos of that year), available at https://archive.org/details/solutionscambri00glaigoog; the problems marked with roman numerals are attributed to Glaisher. Routh was a famous Mathematical Tripos coach when his book came out and was surely familiar with the content of the 1878 tripos examination. Thus, his statement that "the author has not met with these expressions for the areas of two triangles that often occur" is puzzling.
Problems in this spirit have a long history in recreational mathematics and mathematical pedagogy, perhaps one of the oldest instances being the determination of the proportions of the fourteen regions of the Stomachion board. With Routh's Cambridge in mind, the one-seventh-area triangle, associated in some accounts with Richard Feynman, shows up, for example, as Question 100, p. 80, in Euclid's Elements of Geometry (Fifth School Edition), by Robert Potts (1805–1885) of Trinity College, published in 1859; compare also his Questions 98 and 99 on the same page. Potts stood twenty-sixth Wrangler in 1832 and then, like Hopkins and Routh, coached at Cambridge. Potts's expository writings in geometry were recognized by a medal at the International Exhibition of 1862, as well as by an Hon. LL.D. from the College of William and Mary, Williamsburg, Virginia.
References
• Murray S. Klamkin and A. Liu (1981) "Three more proofs of Routh's theorem", Crux Mathematicorum 7:199–203.
• H. S. M. Coxeter (1969) Introduction to Geometry, statement p. 211, proof pp. 219–20, 2nd edition, Wiley, New York.
• J. S. Kline and D. Velleman (1995) "Yet another proof of Routh's theorem" (1995) Crux Mathematicorum 21:37–40
• Ivan Niven (1976) "A New Proof of Routh's Theorem", Mathematics Magazine 49(1): 25–7, doi:10.2307/2689876
• Jay Warendorff, Routh's Theorem, The Wolfram Demonstrations Project.
• Weisstein, Eric W. "Routh's Theorem". MathWorld.
• Routh's Theorem by Cross Products at MathPages
• Ayoub, Ayoub B. (2011/2012) "Routh's theorem revisited", Mathematical Spectrum 44 (1): 24-27.
Routhian mechanics
In classical mechanics, Routh's procedure or Routhian mechanics is a hybrid formulation of Lagrangian mechanics and Hamiltonian mechanics developed by Edward John Routh. Correspondingly, the Routhian is the function which replaces both the Lagrangian and Hamiltonian functions. Routhian mechanics is equivalent to Lagrangian mechanics and Hamiltonian mechanics, and introduces no new physics. It offers an alternative way to solve mechanical problems.
Definitions
The Routhian, like the Hamiltonian, can be obtained from a Legendre transform of the Lagrangian, and has a similar mathematical form to the Hamiltonian, but is not exactly the same. The difference between the Lagrangian, Hamiltonian, and Routhian functions are their variables. For a given set of generalized coordinates representing the degrees of freedom in the system, the Lagrangian is a function of the coordinates and velocities, while the Hamiltonian is a function of the coordinates and momenta.
The Routhian differs from these functions in that some coordinates are chosen to have corresponding generalized velocities, the rest to have corresponding generalized momenta. This choice is arbitrary, and can be done to simplify the problem. It also has the consequence that the Routhian equations are exactly the Hamiltonian equations for some coordinates and corresponding momenta, and the Lagrangian equations for the rest of the coordinates and their velocities. In each case the Lagrangian and Hamiltonian functions are replaced by a single function, the Routhian. The full set thus has the advantages of both sets of equations, with the convenience of splitting one set of coordinates to the Hamilton equations, and the rest to the Lagrangian equations.
In the case of Lagrangian mechanics, the generalized coordinates q1, q2, ... and the corresponding velocities dq1/dt, dq2/dt, ..., and possibly time[nb 1] t, enter the Lagrangian,
$L(q_{1},q_{2},\ldots ,{\dot {q}}_{1},{\dot {q}}_{2},\ldots ,t)\,,\quad {\dot {q}}_{i}={\frac {dq_{i}}{dt}}\,,$
where the overdots denote time derivatives.
In Hamiltonian mechanics, the generalized coordinates q1, q2, ... and the corresponding generalized momenta p1, p2, ..., and possibly time, enter the Hamiltonian,
$H(q_{1},q_{2},\ldots ,p_{1},p_{2},\ldots ,t)=\sum _{i}{\dot {q}}_{i}p_{i}-L(q_{1},q_{2},\ldots ,{\dot {q}}_{1}(p_{1}),{\dot {q}}_{2}(p_{2}),\ldots ,t)\,,\quad p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}\,,$
where the second equation is the definition of the generalized momentum pi corresponding to the coordinate qi (partial derivatives are denoted using ∂). The velocities dqi/dt are expressed as functions of their corresponding momenta by inverting their defining relation. In this context, pi is said to be the momentum "canonically conjugate" to qi.
The Routhian is intermediate between L and H; some coordinates q1, q2, ..., qn are chosen to have corresponding generalized momenta p1, p2, ..., pn, the rest of the coordinates ζ1, ζ2, ..., ζs to have generalized velocities dζ1/dt, dζ2/dt, ..., dζs/dt, and time may appear explicitly;[1][2]
Routhian (n + s degrees of freedom)
$R(q_{1},\ldots ,q_{n},\zeta _{1},\ldots ,\zeta _{s},p_{1},\ldots ,p_{n},{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}(p_{i})-L(q_{1},\ldots ,q_{n},\zeta _{1},\ldots ,\zeta _{s},{\dot {q}}_{1}(p_{1}),\ldots ,{\dot {q}}_{n}(p_{n}),{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)\,,$
where again the generalized velocity dqi/dt is to be expressed as a function of generalized momentum pi via its defining relation. The choice of which n coordinates are to have corresponding momenta, out of the n + s coordinates, is arbitrary.
The above is used by Landau and Lifshitz, and Goldstein. Some authors may define the Routhian to be the negative of the above definition.[3]
Given the length of the general definition, a more compact notation is to use boldface for tuples (or vectors) of the variables, thus q = (q1, q2, ..., qn), ζ = (ζ1, ζ2, ..., ζs), p = (p1, p2, ..., pn), and dζ/dt = (dζ1/dt, dζ2/dt, ..., dζs/dt), so that
$R(\mathbf {q} ,{\boldsymbol {\zeta }},\mathbf {p} ,{\dot {\boldsymbol {\zeta }}},t)=\mathbf {p} \cdot {\dot {\mathbf {q} }}-L(\mathbf {q} ,{\boldsymbol {\zeta }},{\dot {\mathbf {q} }},{\dot {\boldsymbol {\zeta }}},t)\,,$
where · is the dot product defined on the tuples, for the specific example appearing here:
$\mathbf {p} \cdot {\dot {\mathbf {q} }}=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}\,.$
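As a concrete check of this definition, the short sketch below (Python; the mass, potential, and test values are illustrative choices, not from the text) builds the Routhian numerically for a planar central-force Lagrangian L = (m/2)(ṙ² + r²φ̇²) − V(r), trading the angle φ for its conjugate momentum pφ = mr²φ̇:

```python
m, k = 2.0, 3.0                       # illustrative mass and potential strength
V = lambda r: -k / r                  # attractive central potential V(r)

def lagrangian(r, rdot, phidot):
    # L = (m/2)(r'^2 + r^2 phi'^2) - V(r); phi itself does not appear
    return 0.5 * m * (rdot**2 + r**2 * phidot**2) - V(r)

def routhian(r, rdot, p_phi):
    # R = p_phi * phidot - L, with phidot recovered by inverting
    # the defining relation p_phi = dL/dphidot = m r^2 phidot
    phidot = p_phi / (m * r**2)
    return p_phi * phidot - lagrangian(r, rdot, phidot)

# Closed form of the same Legendre transform:
# R = p_phi^2/(2 m r^2) - (m/2) r'^2 + V(r)
r, rdot, p_phi = 1.5, 0.4, 0.7
closed = p_phi**2 / (2 * m * r**2) - 0.5 * m * rdot**2 + V(r)
assert abs(routhian(r, rdot, p_phi) - closed) < 1e-12
```

Here the inversion of pφ = mr²φ̇ is immediate; for Lagrangians that are not quadratic in the velocities, that inversion step carries the real work of the transform.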
Equations of motion
For reference, the Euler-Lagrange equations for s degrees of freedom are a set of s coupled second order ordinary differential equations in the coordinates
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}={\frac {\partial L}{\partial q_{j}}}\,,$
where j = 1, 2, ..., s, and the Hamiltonian equations for n degrees of freedom are a set of 2n coupled first order ordinary differential equations in the coordinates and momenta
${\dot {q}}_{i}={\frac {\partial H}{\partial p_{i}}}\,,\quad {\dot {p}}_{i}=-{\frac {\partial H}{\partial q_{i}}}\,.$
Below, the Routhian equations of motion are obtained in two ways; in the process, other useful derivatives are found that can be used elsewhere.
Two degrees of freedom
Consider the case of a system with two degrees of freedom, q and ζ, with generalized velocities dq/dt and dζ/dt, and a Lagrangian that may depend explicitly on time. (The generalization to any number of degrees of freedom follows exactly the same procedure as with two.)[4] The Lagrangian of the system will have the form
$L(q,\zeta ,{\dot {q}},{\dot {\zeta }},t)$
The differential of L is
$dL={\frac {\partial L}{\partial q}}dq+{\frac {\partial L}{\partial \zeta }}d\zeta +{\frac {\partial L}{\partial {\dot {q}}}}d{\dot {q}}+{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial L}{\partial t}}dt\,.$
Now change variables, from the set (q, ζ, dq/dt, dζ/dt) to (q, ζ, p, dζ/dt), simply switching the velocity dq/dt to the momentum p. This change of variables in the differentials is the Legendre transformation. The differential of the new function to replace L will be a sum of differentials in dq, dζ, dp, d(dζ/dt), and dt. Using the definition of generalized momentum and Lagrange's equation for the coordinate q:
$p={\frac {\partial L}{\partial {\dot {q}}}}\,,\quad {\dot {p}}={\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}}}={\frac {\partial L}{\partial q}}$
we have
$dL={\dot {p}}dq+{\frac {\partial L}{\partial \zeta }}d\zeta +pd{\dot {q}}+{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial L}{\partial t}}dt$
and to replace pd(dq/dt) by (dq/dt)dp, recall the product rule for differentials,[nb 2] and substitute
$pd{\dot {q}}=d({\dot {q}}p)-{\dot {q}}dp$
to obtain the differential of a new function in terms of the new set of variables:
$d(L-p{\dot {q}})={\dot {p}}dq+{\frac {\partial L}{\partial \zeta }}d\zeta -{\dot {q}}dp+{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial L}{\partial t}}dt\,.$
Introducing the Routhian
$R(q,\zeta ,p,{\dot {\zeta }},t)=p{\dot {q}}(p)-L$
where again the velocity dq/dt is a function of the momentum p, we have
$dR=-{\dot {p}}dq-{\frac {\partial L}{\partial \zeta }}d\zeta +{\dot {q}}dp-{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}-{\frac {\partial L}{\partial t}}dt\,,$
but from the above definition, the differential of the Routhian is
$dR={\frac {\partial R}{\partial q}}dq+{\frac {\partial R}{\partial \zeta }}d\zeta +{\frac {\partial R}{\partial p}}dp+{\frac {\partial R}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial R}{\partial t}}dt\,.$
Comparing the coefficients of the differentials dq, dζ, dp, d(dζ/dt), and dt, the results are Hamilton's equations for the coordinate q,
${\dot {q}}={\frac {\partial R}{\partial p}}\,,\quad {\dot {p}}=-{\frac {\partial R}{\partial q}}\,,$
and Lagrange's equation for the coordinate ζ
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\zeta }}}}={\frac {\partial R}{\partial \zeta }}$
which follow from
${\frac {\partial L}{\partial \zeta }}=-{\frac {\partial R}{\partial \zeta }}\,,\quad {\frac {\partial L}{\partial {\dot {\zeta }}}}=-{\frac {\partial R}{\partial {\dot {\zeta }}}}\,,$
and taking the total time derivative of the second equation and equating to the first. Notice the Routhian replaces the Hamiltonian and Lagrangian functions in all the equations of motion.
The remaining equation states that the partial time derivatives of L and R are negatives of each other
${\frac {\partial L}{\partial t}}=-{\frac {\partial R}{\partial t}}\,.$
Any number of degrees of freedom
For n + s coordinates as defined above, with Routhian
$R(q_{1},\ldots ,q_{n},\zeta _{1},\ldots ,\zeta _{s},p_{1},\ldots ,p_{n},{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}(p_{i})-L$
the equations of motion can be derived by a Legendre transformation of this Routhian as in the previous section, but another way is to simply take the partial derivatives of R with respect to the coordinates qi and ζj, momenta pi, and velocities dζj/dt, where i = 1, 2, ..., n, and j = 1, 2, ..., s. The derivatives are
${\frac {\partial R}{\partial q_{i}}}=-{\frac {\partial L}{\partial q_{i}}}=-{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}=-{\dot {p}}_{i}$
${\frac {\partial R}{\partial p_{i}}}={\dot {q}}_{i}$
${\frac {\partial R}{\partial \zeta _{j}}}=-{\frac {\partial L}{\partial \zeta _{j}}}\,,$
${\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}=-{\frac {\partial L}{\partial {\dot {\zeta }}_{j}}}\,,$
${\frac {\partial R}{\partial t}}=-{\frac {\partial L}{\partial t}}\,.$
The first two are identically the Hamiltonian equations. Equating the total time derivative of the fourth set of equations with the third (for each value of j) gives the Lagrangian equations. The fifth is just the same relation between time partial derivatives as before. To summarize[5]
Routhian equations of motion (n + s degrees of freedom)
${\dot {q}}_{i}={\frac {\partial R}{\partial p_{i}}}\,,\quad {\dot {p}}_{i}=-{\frac {\partial R}{\partial q_{i}}}\,,$
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}={\frac {\partial R}{\partial \zeta _{j}}}\,.$
The total number of equations is 2n + s: there are 2n Hamiltonian equations plus s Lagrange equations.
Energy
Since the Lagrangian has the same units as energy, the units of the Routhian are also energy. In SI units this is the joule.
Taking the total time derivative of the Lagrangian leads to the general result
${\frac {\partial L}{\partial t}}=-{\frac {d}{dt}}\left(\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}+\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial L}{\partial {\dot {\zeta }}_{j}}}-L\right)\,.$
If the Lagrangian is independent of time, the partial time derivative of the Lagrangian is zero, ∂L/∂t = 0, so the quantity under the total time derivative in brackets must be a constant; it is the total energy of the system[6]
$E=\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}+\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial L}{\partial {\dot {\zeta }}_{j}}}-L\,.$
(If there are external fields interacting with the constituents of the system, they can vary throughout space but not time). This expression requires the partial derivatives of L with respect to all the velocities dqi/dt and dζj/dt. Under the same condition of R being time independent, the energy in terms of the Routhian is a little simpler, substituting the definition of R and the partial derivatives of R with respect to the velocities dζj/dt,
$E=R-\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}\,.$
Notice only the partial derivatives of R with respect to the velocities dζj/dt are needed. If s = 0 and the Routhian is explicitly time-independent, then E = R; that is, the Routhian equals the energy of the system. The same expression for R when s = 0 is also the Hamiltonian, so in all cases E = R = H.
If the Routhian has explicit time dependence, the total energy of the system is not constant. The general result is
${\frac {\partial R}{\partial t}}={\dfrac {d}{dt}}\left(R-\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}\right)\,,$
which can be derived from the total time derivative of R in the same way as for L.
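The relation E = R − Σj ζ̇j ∂R/∂ζ̇j above can be verified numerically. The sketch below (Python; the system and numbers are illustrative, not from the text) uses the planar central-force Routhian R = pφ²/(2mr²) − (m/2)ṙ² + V(r), whose single non cyclic velocity is ṙ, and recovers the total energy T + V with a finite-difference derivative:

```python
m, k = 1.0, 2.0                      # illustrative mass and potential strength
V = lambda r: -k / r

def R(r, rdot, p_phi):
    # Routhian of a planar central-force problem (phi cyclic)
    return p_phi**2 / (2 * m * r**2) - 0.5 * m * rdot**2 + V(r)

def energy_from_routhian(r, rdot, p_phi, h=1e-6):
    # E = R - rdot * dR/drdot, derivative taken by central difference
    dR_drdot = (R(r, rdot + h, p_phi) - R(r, rdot - h, p_phi)) / (2 * h)
    return R(r, rdot, p_phi) - rdot * dR_drdot

r, rdot, p_phi = 1.2, 0.3, 0.8
# total energy T + V, with T = (m/2)(r'^2 + r^2 phi'^2) written via p_phi
kinetic = 0.5 * m * rdot**2 + p_phi**2 / (2 * m * r**2)
assert abs(energy_from_routhian(r, rdot, p_phi) - (kinetic + V(r))) < 1e-6
```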
Cyclic coordinates
Often the Routhian approach may offer no advantage, but one notable case where it is useful is when a system has cyclic coordinates (also called "ignorable coordinates"), by definition those coordinates which do not appear in the original Lagrangian. The Lagrangian equations are powerful results, used frequently in theory and practice, since the equations of motion in the coordinates are easy to set up. However, if cyclic coordinates occur, there will still be equations to solve for all the coordinates, including the cyclic coordinates despite their absence in the Lagrangian. The Hamiltonian equations are useful theoretical results, but less useful in practice because coordinates and momenta are coupled together in the solutions; after solving the equations, the coordinates and momenta must be eliminated from each other. Nevertheless, the Hamiltonian equations are perfectly suited to cyclic coordinates because the equations in the cyclic coordinates trivially vanish, leaving only the equations in the non cyclic coordinates.
The Routhian approach has the best of both approaches, because cyclic coordinates can be split off to the Hamiltonian equations and eliminated, leaving behind the non cyclic coordinates to be solved from the Lagrangian equations. Overall fewer equations need to be solved compared to the Lagrangian approach.
The Routhian formulation is useful for systems with cyclic coordinates, because by definition those coordinates do not enter L, and hence R. The corresponding partial derivatives of L and R with respect to those coordinates are zero, which equates to the corresponding generalized momenta reducing to constants. To make this concrete, if the qi are all cyclic coordinates, and the ζj are all non cyclic, then
${\frac {\partial L}{\partial q_{i}}}={\dot {p}}_{i}=-{\frac {\partial R}{\partial q_{i}}}=0\quad \Rightarrow \quad p_{i}=\alpha _{i}\,,$
where the αi are constants. With these constants substituted into the Routhian, R is a function of only the non cyclic coordinates and velocities (and in general time also)
$R(\zeta _{1},\ldots ,\zeta _{s},\alpha _{1},\ldots ,\alpha _{n},{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)=\sum _{i=1}^{n}\alpha _{i}{\dot {q}}_{i}(\alpha _{i})-L(\zeta _{1},\ldots ,\zeta _{s},{\dot {q}}_{1}(\alpha _{1}),\ldots ,{\dot {q}}_{n}(\alpha _{n}),{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)\,,$
Of the 2n Hamiltonian equations in the cyclic coordinates, the momentum equations vanish automatically, and the velocity equations simply express the cyclic velocities in terms of the constants and the non cyclic coordinates,
${\dot {q}}_{i}={\frac {\partial R}{\partial \alpha _{i}}}=f_{i}(\zeta _{1}(t),\ldots ,\zeta _{s}(t),{\dot {\zeta }}_{1}(t),\ldots ,{\dot {\zeta }}_{s}(t),\alpha _{1},\ldots ,\alpha _{n},t)\,,\quad {\dot {p}}_{i}=-{\frac {\partial R}{\partial q_{i}}}=0\,,$
and the s Lagrangian equations are in the non cyclic coordinates
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}={\frac {\partial R}{\partial \zeta _{j}}}\,.$
Thus the problem has been reduced to solving the Lagrangian equations in the non cyclic coordinates, with the advantage of the Hamiltonian equations cleanly removing the cyclic coordinates. If the time dependence of the cyclic coordinates is of interest, the equations for ${\dot {q}}_{i}$ can be integrated using those solutions to compute $q_{i}(t)$.
Examples
Routh's procedure does not guarantee the equations of motion will be simple; however, it will lead to fewer equations.
Central potential in spherical coordinates
One general class of mechanical systems with cyclic coordinates is those with central potentials, because potentials of this form depend only on the radial separation and not on angles.
Consider a particle of mass m under the influence of a central potential V(r) in spherical polar coordinates (r, θ, φ)
$L(r,{\dot {r}},\theta ,{\dot {\theta }},{\dot {\phi }})={\frac {m}{2}}({\dot {r}}^{2}+{r}^{2}{\dot {\theta }}^{2}+r^{2}\sin ^{2}\theta {\dot {\phi }}^{2})-V(r)\,.$
Notice φ is cyclic, because it does not appear in the Lagrangian. The momentum conjugate to φ is the constant
$p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}}=mr^{2}\sin ^{2}\theta {\dot {\phi }}\,,$
in which r, θ, and dφ/dt can vary with time, but the angular momentum pφ is constant. The Routhian can be taken to be
${\begin{aligned}R(r,{\dot {r}},\theta ,{\dot {\theta }})&=p_{\phi }{\dot {\phi }}-L\\&=p_{\phi }{\dot {\phi }}-{\frac {m}{2}}{\dot {r}}^{2}-{\frac {m}{2}}r^{2}{\dot {\theta }}^{2}-{\frac {p_{\phi }{\dot {\phi }}}{2}}+V(r)\\&={\frac {p_{\phi }{\dot {\phi }}}{2}}-{\frac {m}{2}}{\dot {r}}^{2}-{\frac {m}{2}}r^{2}{\dot {\theta }}^{2}+V(r)\\&={\frac {p_{\phi }^{2}}{2mr^{2}\sin ^{2}\theta }}-{\frac {m}{2}}{\dot {r}}^{2}-{\frac {m}{2}}r^{2}{\dot {\theta }}^{2}+V(r)\,.\end{aligned}}$
We can solve for r and θ using Lagrange's equations, and do not need to solve for φ since it is eliminated by Hamilton's equations. The r equation is
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {r}}}}={\frac {\partial R}{\partial r}}\quad \Rightarrow \quad -m{\ddot {r}}=-{\frac {p_{\phi }^{2}}{mr^{3}\sin ^{2}\theta }}-mr{\dot {\theta }}^{2}+{\frac {\partial V}{\partial r}}\,,$
and the θ equation is
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\theta }}}}={\frac {\partial R}{\partial \theta }}\quad \Rightarrow \quad -m(2r{\dot {r}}{\dot {\theta }}+r^{2}{\ddot {\theta }})=-{\frac {p_{\phi }^{2}\cos \theta }{mr^{2}\sin ^{3}\theta }}\,.$
The Routhian approach has obtained two coupled nonlinear equations. By contrast the Lagrangian approach leads to three nonlinear coupled equations, mixing in the first and second time derivatives of φ in all of them, despite its absence from the Lagrangian.
The r equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {r}}}}={\frac {\partial L}{\partial r}}\quad \Rightarrow \quad m{\ddot {r}}=mr{\dot {\theta }}^{2}+mr\sin ^{2}\theta {\dot {\phi }}^{2}-{\frac {\partial V}{\partial r}}\,,$
the θ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad 2r{\dot {r}}{\dot {\theta }}+r^{2}{\ddot {\theta }}=r^{2}\sin \theta \cos \theta {\dot {\phi }}^{2}\,,$
the φ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\phi }}}}={\frac {\partial L}{\partial \phi }}\quad \Rightarrow \quad 2r{\dot {r}}\sin ^{2}\theta {\dot {\phi }}+2r^{2}\sin \theta \cos \theta {\dot {\theta }}{\dot {\phi }}+r^{2}\sin ^{2}\theta {\ddot {\phi }}=0\,.$
Spherical pendulum
Consider the spherical pendulum, a mass m (known as a "pendulum bob") attached to a rigid rod of length ℓ and negligible mass, subject to a local gravitational field g. The system rotates about the vertical with angular velocity dφ/dt, which is not constant. The angle between the rod and the vertical is θ, which is also not constant.
The Lagrangian is[nb 3]
$L(\theta ,{\dot {\theta }},{\dot {\phi }})={\frac {m\ell ^{2}}{2}}({\dot {\theta }}^{2}+\sin ^{2}\theta {\dot {\phi }}^{2})+mg\ell \cos \theta \,,$
and φ is the cyclic coordinate for the system with constant momentum
$p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}}=m\ell ^{2}\sin ^{2}\theta {\dot {\phi }}\,.$
which is again, physically, the angular momentum of the system about the vertical. The angle θ and angular velocity dφ/dt vary with time, but the angular momentum is constant. The Routhian is
${\begin{aligned}R(\theta ,{\dot {\theta }})&=p_{\phi }{\dot {\phi }}-L\\&=p_{\phi }{\dot {\phi }}-{\frac {m\ell ^{2}}{2}}{\dot {\theta }}^{2}-{\frac {p_{\phi }{\dot {\phi }}}{2}}-mg\ell \cos \theta \\&={\frac {p_{\phi }{\dot {\phi }}}{2}}-{\frac {m\ell ^{2}}{2}}{\dot {\theta }}^{2}-mg\ell \cos \theta \\&={\frac {p_{\phi }^{2}}{2m\ell ^{2}\sin ^{2}\theta }}-{\frac {m\ell ^{2}}{2}}{\dot {\theta }}^{2}-mg\ell \cos \theta \end{aligned}}$
The θ equation is found from the Lagrangian equations
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\theta }}}}={\frac {\partial R}{\partial \theta }}\quad \Rightarrow \quad -m\ell ^{2}{\ddot {\theta }}=-{\frac {p_{\phi }^{2}\cos \theta }{m\ell ^{2}\sin ^{3}\theta }}+mg\ell \sin \theta \,,$
or simplifying by introducing the constants
$a={\frac {p_{\phi }^{2}}{m^{2}\ell ^{4}}}\,,\quad b={\frac {g}{\ell }}\,,$
gives
${\ddot {\theta }}=a{\frac {\cos \theta }{\sin ^{3}\theta }}-b\sin \theta \,.$
This equation resembles the simple nonlinear pendulum equation, because the pendulum can swing through the vertical axis, with an additional term to account for the rotation about the vertical axis (the constant a is related to the angular momentum pφ).
Applying the Lagrangian approach there are two nonlinear coupled equations to solve.
The θ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad m\ell ^{2}{\ddot {\theta }}=m\ell ^{2}\sin \theta \cos \theta {\dot {\phi }}^{2}-mg\ell \sin \theta \,,$
and the φ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\phi }}}}={\frac {\partial L}{\partial \phi }}\quad \Rightarrow \quad 2\sin \theta \cos \theta {\dot {\theta }}{\dot {\phi }}+\sin ^{2}\theta {\ddot {\phi }}=0\,.$
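As a sanity check on the single Routhian equation for the spherical pendulum, a fourth-order Runge-Kutta integration of θ̈ = a cos θ/sin³θ − b sin θ should conserve the energy E = (mℓ²/2)θ̇² + pφ²/(2mℓ²sin²θ) − mgℓ cos θ. A minimal sketch, with illustrative parameter values and step sizes:

```python
import math

m, l, g, p_phi = 1.0, 1.0, 9.81, 2.0      # illustrative values
a, b = p_phi**2 / (m**2 * l**4), g / l

def accel(th):
    # theta'' = a cos(th)/sin(th)^3 - b sin(th)
    return a * math.cos(th) / math.sin(th)**3 - b * math.sin(th)

def energy(th, thdot):
    # conserved energy E = (m l^2/2) th'^2 + p_phi^2/(2 m l^2 sin^2 th) - m g l cos th
    return (0.5 * m * l**2 * thdot**2
            + p_phi**2 / (2 * m * l**2 * math.sin(th)**2)
            - m * g * l * math.cos(th))

def rk4_step(th, thdot, dt):
    # classic fourth-order Runge-Kutta step for the pair (th, th')
    k1 = (thdot, accel(th))
    k2 = (thdot + 0.5*dt*k1[1], accel(th + 0.5*dt*k1[0]))
    k3 = (thdot + 0.5*dt*k2[1], accel(th + 0.5*dt*k2[0]))
    k4 = (thdot + dt*k3[1], accel(th + dt*k3[0]))
    return (th + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            thdot + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

th, thdot, dt = 1.0, 0.0, 1e-3
E0 = energy(th, thdot)
for _ in range(5000):                     # integrate for 5 time units
    th, thdot = rk4_step(th, thdot, dt)
assert abs(energy(th, thdot) - E0) < 1e-6
```

The angular-momentum barrier pφ²/(2mℓ²sin²θ) keeps θ away from the vertical, so the bob nutates between two turning angles instead of passing through θ = 0.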
Heavy symmetrical top
The heavy symmetrical top of mass M has Lagrangian[7][8]
$L(\theta ,{\dot {\theta }},{\dot {\psi }},{\dot {\phi }})={\frac {I_{1}}{2}}({\dot {\theta }}^{2}+{\dot {\phi }}^{2}\sin ^{2}\theta )+{\frac {I_{3}}{2}}({\dot {\psi }}^{2}+{\dot {\phi }}^{2}\cos ^{2}\theta )+I_{3}{\dot {\psi }}{\dot {\phi }}\cos \theta -Mg\ell \cos \theta $
where ψ, φ, θ are the Euler angles: θ is the angle between the vertical z-axis and the top's z′-axis, ψ is the rotation of the top about its own z′-axis, and φ is the azimuthal angle of the top's z′-axis around the vertical z-axis. The principal moments of inertia are I1 about the top's own x′-axis, I2 about the top's own y′-axis, and I3 about the top's own z′-axis. Since the top is symmetric about its z′-axis, I1 = I2. Here the simple relation for local gravitational potential energy V = Mgℓ cos θ is used, where g is the acceleration due to gravity and the centre of mass of the top is a distance ℓ from its tip along its z′-axis.
The angles ψ, φ are cyclic. The constant momenta are the angular momenta of the top about its axis and its precession about the vertical, respectively:
$p_{\psi }={\frac {\partial L}{\partial {\dot {\psi }}}}=I_{3}{\dot {\psi }}+I_{3}{\dot {\phi }}\cos \theta $
$p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}}={\dot {\phi }}(I_{1}\sin ^{2}\theta +I_{3}\cos ^{2}\theta )+I_{3}{\dot {\psi }}\cos \theta $
From these, eliminating dψ/dt:
$p_{\phi }-p_{\psi }\cos \theta =I_{1}{\dot {\phi }}\sin ^{2}\theta $
we have
${\dot {\phi }}={\frac {p_{\phi }-p_{\psi }\cos \theta }{I_{1}\sin ^{2}\theta }}\,,$
and to eliminate dφ/dt, substitute this result into pψ and solve for dψ/dt to find
${\dot {\psi }}={\frac {p_{\psi }}{I_{3}}}-\cos \theta \left({\frac {p_{\phi }-p_{\psi }\cos \theta }{I_{1}\sin ^{2}\theta }}\right)\,.$
The Routhian can be taken to be
$R(\theta ,{\dot {\theta }})=p_{\psi }{\dot {\psi }}+p_{\phi }{\dot {\phi }}-L={\frac {1}{2}}(p_{\psi }{\dot {\psi }}+p_{\phi }{\dot {\phi }})-{\frac {I_{1}{\dot {\theta }}^{2}}{2}}+Mg\ell \cos \theta $
and since
${\frac {p_{\phi }{\dot {\phi }}}{2}}={\frac {p_{\phi }^{2}}{2I_{1}\sin ^{2}\theta }}-{\frac {p_{\psi }p_{\phi }\cos \theta }{2I_{1}\sin ^{2}\theta }}\,,$
${\frac {p_{\psi }{\dot {\psi }}}{2}}={\frac {p_{\psi }^{2}}{2I_{3}}}-{\frac {p_{\psi }p_{\phi }\cos \theta }{2I_{1}\sin ^{2}\theta }}+{\frac {p_{\psi }^{2}\cos ^{2}\theta }{2I_{1}\sin ^{2}\theta }}$
we have
$R={\frac {p_{\psi }^{2}}{2I_{3}}}+{\frac {p_{\psi }^{2}\cos ^{2}\theta }{2I_{1}\sin ^{2}\theta }}+{\frac {p_{\phi }^{2}}{2I_{1}\sin ^{2}\theta }}-{\frac {p_{\psi }p_{\phi }\cos \theta }{I_{1}\sin ^{2}\theta }}-{\frac {I_{1}{\dot {\theta }}^{2}}{2}}+Mg\ell \cos \theta \,.$
The first term is constant, and can be ignored since only the derivatives of R will enter the equations of motion. The simplified Routhian, without loss of information, is thus
$R={\frac {1}{2I_{1}\sin ^{2}\theta }}\left[p_{\psi }^{2}\cos ^{2}\theta +p_{\phi }^{2}-2p_{\psi }p_{\phi }\cos \theta \right]-{\frac {I_{1}{\dot {\theta }}^{2}}{2}}+Mg\ell \cos \theta $
The equation of motion for θ is, by direct calculation,
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\theta }}}}={\frac {\partial R}{\partial \theta }}\quad \Rightarrow \quad $
$-I_{1}{\ddot {\theta }}=-{\frac {\cos \theta }{I_{1}\sin ^{3}\theta }}\left[p_{\psi }^{2}\cos ^{2}\theta +p_{\phi }^{2}-2p_{\psi }p_{\phi }\cos \theta \right]+{\frac {1}{2I_{1}\sin ^{2}\theta }}\left[-2p_{\psi }^{2}\cos \theta \sin \theta +2p_{\psi }p_{\phi }\sin \theta \right]-Mg\ell \sin \theta \,,$
or by introducing the constants
$a={\frac {p_{\psi }^{2}}{I_{1}^{2}}}\,,\quad b={\frac {p_{\phi }^{2}}{I_{1}^{2}}}\,,\quad c={\frac {2p_{\psi }p_{\phi }}{I_{1}^{2}}}\,,\quad k={\frac {Mg\ell }{I_{1}}}\,,$
a simpler form of the equation is obtained
${\ddot {\theta }}={\frac {\cos \theta }{\sin ^{3}\theta }}(a\cos ^{2}\theta +b-c\cos \theta )+{\frac {1}{2\sin \theta }}(2a\cos \theta -c)+k\sin \theta \,.$
Although the equation is highly nonlinear, there is only one equation to solve; it was obtained directly, and the cyclic coordinates are not involved.
By contrast, the Lagrangian approach leads to three nonlinear coupled equations to solve, despite the absence of the coordinates ψ and φ in the Lagrangian.
The θ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad I_{1}{\ddot {\theta }}=(I_{1}-I_{3}){\dot {\phi }}^{2}\sin \theta \cos \theta -I_{3}{\dot {\psi }}{\dot {\phi }}\sin \theta +Mg\ell \sin \theta \,,$
the ψ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\psi }}}}={\frac {\partial L}{\partial \psi }}\quad \Rightarrow \quad {\ddot {\psi }}+{\ddot {\phi }}\cos \theta -{\dot {\phi }}{\dot {\theta }}\sin \theta =0\,,$
and the φ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\phi }}}}={\frac {\partial L}{\partial \phi }}\quad \Rightarrow \quad {\ddot {\phi }}(I_{1}\sin ^{2}\theta +I_{3}\cos ^{2}\theta )+{\dot {\phi }}(I_{1}-I_{3})2\sin \theta \cos \theta {\dot {\theta }}+I_{3}{\ddot {\psi }}\cos \theta -I_{3}{\dot {\psi }}\sin \theta {\dot {\theta }}=0\,.$
Classical charged particle in a uniform magnetic field
Consider a classical charged particle of mass m and electric charge q in a static (time-independent) uniform (constant throughout space) magnetic field B.[9] The Lagrangian for a charged particle in a general electromagnetic field given by the magnetic potential A and electric potential $\phi $ is
$L={\frac {m}{2}}{\dot {\mathbf {r} }}^{2}-q\phi +q{\dot {\mathbf {r} }}\cdot \mathbf {A} \,,$
It is convenient to use cylindrical coordinates (r, θ, z), so that
${\dot {\mathbf {r} }}=\mathbf {v} =(v_{r},v_{\theta },v_{z})=({\dot {r}},r{\dot {\theta }},{\dot {z}})\,,$
$\mathbf {B} =(B_{r},B_{\theta },B_{z})=(0,0,B)\,.$
In this case of no electric field, the electric potential is zero, $\phi =0$, and we can choose the axial gauge for the magnetic potential
$\mathbf {A} ={\frac {1}{2}}\mathbf {B} \times \mathbf {r} \quad \Rightarrow \quad \mathbf {A} =(A_{r},A_{\theta },A_{z})=(0,Br/2,0)\,,$
and the Lagrangian is
$L(r,{\dot {r}},{\dot {\theta }},{\dot {z}})={\frac {m}{2}}({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}+{\dot {z}}^{2})+{\frac {qBr^{2}{\dot {\theta }}}{2}}\,.$
Notice the Lagrangian has an effectively cylindrical symmetry (although it also depends on the angular velocity), since its only spatial dependence is on the radial distance from an imaginary cylinder axis.
There are two cyclic coordinates, θ and z. The canonical momenta conjugate to θ and z are the constants
$p_{\theta }={\frac {\partial L}{\partial {\dot {\theta }}}}=mr^{2}{\dot {\theta }}+{\frac {qBr^{2}}{2}}\,,\quad p_{z}={\frac {\partial L}{\partial {\dot {z}}}}=m{\dot {z}}\,,$
so the velocities are
${\dot {\theta }}={\frac {1}{mr^{2}}}\left(p_{\theta }-{\frac {qBr^{2}}{2}}\right)\,,\quad {\dot {z}}={\frac {p_{z}}{m}}\,.$
The angular momentum about the z axis is not pθ, but the quantity mr2dθ/dt, which is not conserved due to the contribution from the magnetic field. The canonical momentum pθ is the conserved quantity. It is still the case that pz is the linear or translational momentum along the z axis, which is also conserved.
The radial component r and angular velocity dθ/dt can vary with time, but pθ is constant, and since pz is constant it follows dz/dt is constant. The Routhian can take the form
${\begin{aligned}R(r,{\dot {r}})&=p_{\theta }{\dot {\theta }}+p_{z}{\dot {z}}-L\\&=p_{\theta }{\dot {\theta }}+p_{z}{\dot {z}}-{\frac {m}{2}}{\dot {r}}^{2}-\left({\frac {p_{\theta }{\dot {\theta }}}{2}}-{\frac {qBr^{2}{\dot {\theta }}}{4}}\right)-{\frac {p_{z}{\dot {z}}}{2}}-{\frac {qBr^{2}{\dot {\theta }}}{2}}\\[6pt]&={\frac {p_{\theta }{\dot {\theta }}}{2}}-{\frac {qBr^{2}{\dot {\theta }}}{4}}+{\frac {p_{z}{\dot {z}}}{2}}-{\frac {m}{2}}{\dot {r}}^{2}\\[6pt]&={\frac {p_{\theta }^{2}}{2mr^{2}}}+{\frac {(qB)^{2}r^{2}}{8m}}-{\frac {qBp_{\theta }}{2m}}+{\frac {p_{z}^{2}}{2m}}-{\frac {m}{2}}{\dot {r}}^{2}\end{aligned}}$
where in the second line the momentum definitions give (m/2)r²(dθ/dt)² = pθ(dθ/dt)/2 − qBr²(dθ/dt)/4 and (m/2)(dz/dt)² = pz(dz/dt)/2, and in the last line the velocities have been eliminated using the expressions for dθ/dt and dz/dt above. The −qBpθ/2m and pz2/2m terms are constants and can be ignored without loss of continuity. The Hamiltonian equations for θ and z automatically vanish and do not need to be solved for. The Lagrangian equation in r
${\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {r}}}}={\frac {\partial R}{\partial r}}$
is by direct calculation
$-m{\ddot {r}}=-{\frac {p_{\theta }^{2}}{mr^{3}}}+{\frac {(qB)^{2}r}{4m}}\,,$
which after collecting terms is
$m{\ddot {r}}={\frac {1}{m}}\left[{\frac {p_{\theta }^{2}}{r^{3}}}-{\frac {(qB)^{2}r}{4}}\right]\,,$
and simplifying further by introducing the constants
$a={\frac {p_{\theta }^{2}}{m^{2}}}\,,\quad b=-{\frac {(qB)^{2}}{4m^{2}}}\,,$
the differential equation is
${\ddot {r}}={\frac {a}{r^{3}}}+br$
To see how z changes with time, integrate the momenta expression for pz above
$z={\frac {p_{z}}{m}}t+c_{z}\,,$
where cz is an arbitrary constant, the initial value of z specified by the initial conditions.
The motion of the particle in this system is helicoidal, with the axial motion uniform (constant) but the radial and angular components varying in a spiral according to the equation of motion derived above. The initial conditions on r, dr/dt, θ, dθ/dt will determine if the trajectory of the particle has a constant r or varying r. If initially dr/dt = 0 and r takes the equilibrium value at which the radial acceleration also vanishes, then r remains constant and the motion is a perfect helix. If r is constant, the angular velocity is also constant according to the conserved pθ.
With the Lagrangian approach, the equation for r would include dθ/dt which has to be eliminated, and there would be equations for θ and z to solve for.
The r equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {r}}}}={\frac {\partial L}{\partial r}}\quad \Rightarrow \quad m{\ddot {r}}=mr{\dot {\theta }}^{2}+qBr{\dot {\theta }}\,,$
the θ equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad m(2r{\dot {r}}{\dot {\theta }}+r^{2}{\ddot {\theta }})+qBr{\dot {r}}=0\,,$
and the z equation is
${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {z}}}}={\frac {\partial L}{\partial z}}\quad \Rightarrow \quad m{\ddot {z}}=0\,.$
The z equation is trivial to integrate, but the r and θ equations are not; in any case, the time derivatives are mixed in all the equations and must be eliminated.
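The radial equation r̈ = a/r³ + br (with b < 0) can be explored numerically; the constants below are illustrative and not tied to particular values of q, B, or m. Started exactly at the equilibrium radius r* = (−a/b)^(1/4), where the two terms cancel, r stays fixed (the perfect-helix case), while a perturbed start makes r oscillate about r*:

```python
a, b = 1.0, -4.0                      # illustrative constants (b < 0)
r_star = (-a / b) ** 0.25             # equilibrium radius: a/r^3 + b r = 0

def accel(r):
    # radial equation of motion r'' = a/r^3 + b r
    return a / r**3 + b * r

def rk4_step(r, rdot, dt):
    # classic fourth-order Runge-Kutta step for the pair (r, r')
    k1 = (rdot, accel(r))
    k2 = (rdot + 0.5*dt*k1[1], accel(r + 0.5*dt*k1[0]))
    k3 = (rdot + 0.5*dt*k2[1], accel(r + 0.5*dt*k2[0]))
    k4 = (rdot + dt*k3[1], accel(r + dt*k3[0]))
    return (r + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            rdot + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# Started exactly at the equilibrium radius, r stays constant (perfect helix):
r, rdot = r_star, 0.0
for _ in range(2000):
    r, rdot = rk4_step(r, rdot, 1e-3)
assert abs(r - r_star) < 1e-9

# Perturbed slightly, r oscillates about r_star instead of running away:
r, rdot = 1.1 * r_star, 0.0
rs = []
for _ in range(20000):
    r, rdot = rk4_step(r, rdot, 1e-3)
    rs.append(r)
assert 0.8 * r_star < min(rs) and max(rs) < 1.25 * r_star
```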
See also
• Calculus of variations
• Phase space
• Configuration space
• Many-body problem
• Rigid body mechanics
Footnotes
1. The coordinates are functions of time, so the Lagrangian always has implicit time-dependence via the coordinates. If the Lagrangian changes with time irrespective of the coordinates, usually due to some time-dependent potential, then the Lagrangian is said to have "explicit" time-dependence. Similarly for the Hamiltonian and Routhian functions.
2. For two functions u and v, the differential of the product is d(uv) = udv + vdu.
3. The potential energy is actually
$V=mg\ell (1-\cos \theta )\,,$
but since the first term is constant, it can be ignored in the Lagrangian (and Routhian) which only depend on derivatives of coordinates and velocities. Subtracting this from the kinetic energy means a plus sign in the Lagrangian, not minus.
Notes
1. Goldstein 1980, p. 352
2. Landau & Lifshitz 1976, p. 134
3. Hand & Finch 1998, p. 23
4. Landau & Lifshitz 1976, p. 134
5. Goldstein 1980, p. 352
6. Landau & Lifshitz 1976, p. 134
7. Goldstein 1980, p. 214
8. Kibble & Berkshire 2004, p. 236
9. Kibble & Berkshire 2004, p. 243
References
• Landau, L. D.; Lifshitz, E. M. (15 January 1976). Mechanics (3rd ed.). Butterworth Heinemann. p. 134. ISBN 9780750628969.
• Hand, L. N.; Finch, J. D. (13 November 1998). Analytical Mechanics (2nd ed.). Cambridge University Press. p. 23. ISBN 9780521575720.
• Kibble, T. W. B.; Berkshire, F. H. (2004). Classical Mechanics (5th ed.). Imperial College Press. p. 236. ISBN 9781860944352.
• Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). San Francisco, CA: Addison Wesley. pp. 352–353. ISBN 0201029189.
• Goldstein, Herbert; Poole, Charles P., Jr.; Safko, John L. (2002). Classical Mechanics (3rd ed.). San Francisco, CA: Addison Wesley. pp. 347–349. ISBN 0-201-65702-3.
|
Wikipedia
|
Latin square
In combinatorics and in experimental design, a Latin square is an n × n array filled with n different symbols, each occurring exactly once in each row and exactly once in each column. An example of a 3×3 Latin square is
ABC
CAB
BCA
The name "Latin square" was inspired by mathematical papers by Leonhard Euler (1707–1783), who used Latin characters as symbols,[2] but any set of symbols can be used: in the above example, the alphabetic sequence A, B, C can be replaced by the integer sequence 1, 2, 3. Euler began the general theory of Latin squares.
History
The Korean mathematician Choi Seok-jeong published an example of Latin squares of order nine in 1700, using them to construct a magic square, predating Leonhard Euler by 67 years.[3]
Reduced form
A Latin square is said to be reduced (also, normalized or in standard form) if both its first row and its first column are in their natural order.[4] For example, the Latin square above is not reduced because its first column is A, C, B rather than A, B, C.
Any Latin square can be reduced by permuting (that is, reordering) the rows and columns. Here switching the above matrix's second and third rows yields the following square:
ABC
BCA
CAB
This Latin square is reduced; both its first row and its first column are alphabetically ordered A, B, C.
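The reduction procedure just described can be sketched in a few lines of Python (the function name is illustrative, not from any library): first permute the columns so that the first row is in natural order, then permute the rows so that the first column is too.

```python
def reduce_latin_square(square):
    """Permute columns, then rows, so that the first row and the
    first column are both in natural (sorted) order."""
    n = len(square)
    # Reorder columns so the first row becomes sorted.
    order = sorted(range(n), key=lambda j: square[0][j])
    by_cols = [[row[j] for j in order] for row in square]
    # Reorder rows so the first column becomes sorted.
    return sorted(by_cols, key=lambda row: row[0])

square = [["A", "B", "C"],
          ["C", "A", "B"],
          ["B", "C", "A"]]
print(reduce_latin_square(square))
# [['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']]
```

Because row and column permutations preserve the Latin property, the result is always a Latin square.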
Properties
Orthogonal array representation
If each entry of an n × n Latin square is written as a triple (r,c,s), where r is the row, c is the column, and s is the symbol, we obtain a set of n² triples called the orthogonal array representation of the square. For example, the orthogonal array representation of the Latin square
123
231
312
is
{ (1, 1, 1), (1, 2, 2), (1, 3, 3), (2, 1, 2), (2, 2, 3), (2, 3, 1), (3, 1, 3), (3, 2, 1), (3, 3, 2) },
where for example the triple (2, 3, 1) means that in row 2 and column 3 there is the symbol 1. Orthogonal arrays are usually written in array form where the triples are the rows, such as:
r c s
1 1 1
1 2 2
1 3 3
2 1 2
2 2 3
2 3 1
3 1 3
3 2 1
3 3 2
The definition of a Latin square can be written in terms of orthogonal arrays:
• A Latin square is a set of n² triples (r, c, s), where 1 ≤ r, c, s ≤ n, such that all ordered pairs (r, c) are distinct, all ordered pairs (r, s) are distinct, and all ordered pairs (c, s) are distinct.
This means that the n² ordered pairs (r, c) are all the pairs (i, j) with 1 ≤ i, j ≤ n, once each. The same is true of the ordered pairs (r, s) and the ordered pairs (c, s).
The orthogonal array representation shows that rows, columns and symbols play rather similar roles, as will be made clear below.
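As a sketch (the helper names are illustrative), the triple representation and the three distinctness conditions translate directly into Python:

```python
def to_orthogonal_array(square):
    """The set of (row, column, symbol) triples, 1-based."""
    n = len(square)
    return {(r + 1, c + 1, square[r][c]) for r in range(n) for c in range(n)}

def is_latin(square):
    """Latin iff the (r, c), (r, s) and (c, s) pairs are each all distinct."""
    n = len(square)
    triples = to_orthogonal_array(square)
    return all(
        len({(t[i], t[j]) for t in triples}) == n * n
        for i, j in ((0, 1), (0, 2), (1, 2))
    )

square = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]
print((2, 3, 1) in to_orthogonal_array(square))  # True: row 2, column 3, symbol 1
print(is_latin(square))                          # True
print(is_latin([[1, 1], [2, 2]]))                # False: symbols repeat in rows
```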
Equivalence classes of Latin squares
See also: Small Latin squares and quasigroups
Many operations on a Latin square produce another Latin square (for example, turning it upside down).
If we permute the rows, permute the columns, or permute the names of the symbols of a Latin square, we obtain a new Latin square said to be isotopic to the first. Isotopism is an equivalence relation, so the set of all Latin squares is divided into subsets, called isotopy classes, such that two squares in the same class are isotopic and two squares in different classes are not isotopic.
Another type of operation is easiest to explain using the orthogonal array representation of the Latin square. If we systematically and consistently reorder the three items in each triple (that is, permute the three columns in the array form), another orthogonal array (and, thus, another Latin square) is obtained. For example, we can replace each triple (r,c,s) by (c,r,s) which corresponds to transposing the square (reflecting about its main diagonal), or we could replace each triple (r,c,s) by (c,s,r), which is a more complicated operation. Altogether there are 6 possibilities including "do nothing", giving us 6 Latin squares called the conjugates (also parastrophes) of the original square.[5]
Finally, we can combine these two equivalence operations: two Latin squares are said to be paratopic, also main class isotopic, if one of them is isotopic to a conjugate of the other. This is again an equivalence relation, with the equivalence classes called main classes, species, or paratopy classes.[5] Each main class contains up to six isotopy classes.
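A minimal sketch of the conjugate operation, assuming symbols 0 to n − 1 so they can serve as indices (the function name is illustrative):

```python
from itertools import permutations

def conjugates(square):
    """The six conjugate squares, one per permutation of the
    coordinates of every (r, c, s) triple; symbols must be 0..n-1."""
    n = len(square)
    triples = [(r, c, square[r][c]) for r in range(n) for c in range(n)]
    result = []
    for perm in permutations(range(3)):
        out = [[None] * n for _ in range(n)]
        for t in triples:
            # Reinterpret the permuted triple as (row, column, symbol).
            out[t[perm[0]]][t[perm[1]]] = t[perm[2]]
        result.append(out)
    return result

square = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
conj = conjugates(square)
print(len(conj))          # 6 conjugates, including the square itself
print(conj[0] == square)  # True: the identity permutation
```

The permutation (r,c,s) → (c,r,s) produces the transpose, as described above.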
Number of n × n Latin squares
There is no known easily computable formula for the number Ln of n × n Latin squares with symbols 1, 2, ..., n. The most accurate upper and lower bounds known for large n are far apart. One classic result[6] is that
$\prod _{k=1}^{n}\left(k!\right)^{n/k}\geq L_{n}\geq {\frac {\left(n!\right)^{2n}}{n^{n^{2}}}}.$
A simple and explicit formula for the number of Latin squares was published in 1992, but it is still not easily computable due to the exponential increase in the number of terms. This formula for the number Ln of n × n Latin squares is
$L_{n}=n!\sum _{A\in B_{n}}^{}(-1)^{\sigma _{0}(A)}{\binom {\operatorname {per} A}{n}},$
where Bn is the set of all n × n {0, 1}-matrices, σ0(A) is the number of zero entries in matrix A, and per(A) is the permanent of matrix A.[7]
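For very small n the formula can be checked by direct evaluation; the sketch below (brute-force permanent, illustrative helper names) sums over all 2^(n²) zero-one matrices and reproduces L1 = 1, L2 = 2, L3 = 12:

```python
from itertools import permutations, product
from math import comb, factorial

def permanent(A):
    # Direct expansion over permutations; fine for tiny n.
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

def count_latin_squares(n):
    # Shao-Wei formula: L_n = n! * sum over all n x n 0-1 matrices A
    # of (-1)^(number of zeros of A) * C(per A, n).
    total = 0
    for bits in product((0, 1), repeat=n * n):
        A = [bits[i * n:(i + 1) * n] for i in range(n)]
        total += (-1) ** bits.count(0) * comb(permanent(A), n)
    return factorial(n) * total

print([count_latin_squares(n) for n in range(1, 4)])  # [1, 2, 12]
```

Beyond n = 4 the 2^(n²) terms make this hopeless, which is exactly the "exponential increase in the number of terms" noted above.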
The table below contains all known exact values. It can be seen that the numbers grow exceedingly quickly. For each n, the number of Latin squares altogether (sequence A002860 in the OEIS) is n! (n − 1)! times the number of reduced Latin squares (sequence A000315 in the OEIS).
The numbers of Latin squares of various sizes
n | reduced Latin squares of size n (sequence A000315 in the OEIS) | all Latin squares of size n (sequence A002860 in the OEIS)
1 | 1 | 1
2 | 1 | 2
3 | 1 | 12
4 | 4 | 576
5 | 56 | 161,280
6 | 9,408 | 812,851,200
7 | 16,942,080 | 61,479,419,904,000
8 | 535,281,401,856 | 108,776,032,459,082,956,800
9 | 377,597,570,964,258,816 | 5,524,751,496,156,892,842,531,225,600
10 | 7,580,721,483,160,132,811,489,280 | 9,982,437,658,213,039,871,725,064,756,920,320,000
11 | 5,363,937,773,277,371,298,119,673,540,771,840 | 776,966,836,171,770,144,107,444,346,734,230,682,311,065,600,000
12 | ≈ 1.62 × 10^44 | —
13 | ≈ 2.51 × 10^56 | —
14 | ≈ 2.33 × 10^70 | —
15 | ≈ 1.50 × 10^86 | —
For each n, each isotopy class (sequence A040082 in the OEIS) contains up to (n!)³ Latin squares (the exact number varies), while each main class (sequence A003090 in the OEIS) contains either 1, 2, 3 or 6 isotopy classes.
Equivalence classes of Latin squares
n | main classes (sequence A003090 in the OEIS) | isotopy classes (sequence A040082 in the OEIS) | structurally distinct squares (sequence A264603 in the OEIS)
1 | 1 | 1 | 1
2 | 1 | 1 | 1
3 | 1 | 1 | 1
4 | 2 | 2 | 12
5 | 2 | 2 | 192
6 | 12 | 22 | 145,164
7 | 147 | 564 | 1,524,901,344
8 | 283,657 | 1,676,267 | —
9 | 19,270,853,541 | 115,618,721,533 | —
10 | 34,817,397,894,749,939 | 208,904,371,354,363,006 | —
11 | 2,036,029,552,582,883,134,196,099 | 12,216,177,315,369,229,261,482,540 | —
The number of structurally distinct Latin squares (i.e. the squares cannot be made identical by means of rotation, reflection, and/or permutation of the symbols) for n = 1 up to 7 is 1, 1, 1, 12, 192, 145164, 1524901344 respectively (sequence A264603 in the OEIS).
Examples
Main article: Small Latin squares and quasigroups
We give one example of a Latin square from each main class up to order five.
${\begin{bmatrix}1\end{bmatrix}}\quad {\begin{bmatrix}1&2\\2&1\end{bmatrix}}\quad {\begin{bmatrix}1&2&3\\2&3&1\\3&1&2\end{bmatrix}}$
${\begin{bmatrix}1&2&3&4\\2&1&4&3\\3&4&1&2\\4&3&2&1\end{bmatrix}}\quad {\begin{bmatrix}1&2&3&4\\2&4&1&3\\3&1&4&2\\4&3&2&1\end{bmatrix}}$
${\begin{bmatrix}1&2&3&4&5\\2&3&5&1&4\\3&5&4&2&1\\4&1&2&5&3\\5&4&1&3&2\end{bmatrix}}\quad {\begin{bmatrix}1&2&3&4&5\\2&4&1&5&3\\3&5&4&2&1\\4&1&5&3&2\\5&3&2&1&4\end{bmatrix}}$
They are, respectively, the multiplication tables of the following structures:
• {0} – the trivial 1-element group
• $\mathbb {Z} _{2}$ – the binary group
• $\mathbb {Z} _{3}$ – cyclic group of order 3
• $\mathbb {Z} _{2}\times \mathbb {Z} _{2}$ – the Klein four-group
• $\mathbb {Z} _{4}$ – cyclic group of order 4
• $\mathbb {Z} _{5}$ – cyclic group of order 5
• the last one is an example of a quasigroup, or rather a loop, which is not associative.
Transversals and rainbow matchings
A transversal in a Latin square is a choice of n cells, where each row contains one cell, each column contains one cell, and there is one cell containing each symbol.
One can consider a Latin square as a complete bipartite graph in which the rows are vertices of one part, the columns are vertices of the other part, each cell is an edge (between its row and its column), and the symbols are colors. The rules of the Latin squares imply that this is a proper edge coloring. With this definition, a Latin transversal is a matching in which each edge has a different color; such a matching is called a rainbow matching.
Therefore, many results on Latin squares/rectangles are contained in papers with the term "rainbow matching" in their title, and vice versa.[8]
Some Latin squares have no transversal. For example, when n is even, an n-by-n Latin square in which the value of cell i,j is (i+j) mod n has no transversal. Here are two examples:
${\begin{bmatrix}1&2\\2&1\end{bmatrix}}\quad {\begin{bmatrix}1&2&3&4\\2&3&4&1\\3&4&1&2\\4&1&2&3\end{bmatrix}}$
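Both examples can be checked by brute force: a candidate transversal is a permutation σ selecting cell (r, σ(r)) in row r, and it is a transversal exactly when the selected symbols are all distinct (helper names illustrative):

```python
from itertools import permutations

def has_transversal(square):
    """Brute force: pick cell (r, sigma(r)) in each row; a transversal
    needs all n chosen symbols to be distinct."""
    n = len(square)
    return any(
        len({square[r][sigma[r]] for r in range(n)}) == n
        for sigma in permutations(range(n))
    )

def circulant(n):
    """The square whose cell (i, j) holds (i + j) mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

print(has_transversal(circulant(2)))  # False
print(has_transversal(circulant(4)))  # False
print(has_transversal(circulant(3)))  # True (n odd)
```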
In 1967, H. J. Ryser conjectured that, when n is odd, every n-by-n Latin square has a transversal.[9]
In 1975, S. K. Stein and Brualdi conjectured that, when n is even, every n-by-n Latin square has a partial transversal of size n−1.[10]
A more general conjecture of Stein is that a transversal of size n−1 exists not only in Latin squares but also in any n-by-n array of n symbols, as long as each symbol appears exactly n times.[9]
Some weaker versions of these conjectures have been proved:
• Every n-by-n Latin square has a partial transversal of size 2n/3.[11]
• Every n-by-n Latin square has a partial transversal of size n − √n.[12]
• Every n-by-n Latin square has a partial transversal of size n − 11 log₂²(n).[13]
Algorithms
For small squares it is possible to generate permutations and test whether the Latin square property is met. For larger squares, Jacobson and Matthews' algorithm allows sampling from a uniform distribution over the space of n × n Latin squares.[14]
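The generate-and-test approach mentioned for small squares can be sketched as follows (illustrative helper; feasible only up to about n = 4 or 5):

```python
from itertools import permutations, product

def latin_squares(n):
    """Generate-and-test enumeration: stack row permutations and keep
    the stacks whose columns also contain every symbol exactly once."""
    symbols = range(n)
    for rows in product(permutations(symbols), repeat=n):
        if all(len({rows[i][j] for i in range(n)}) == n for j in range(n)):
            yield [list(r) for r in rows]

# Counts match the table above for n = 1..4:
print([sum(1 for _ in latin_squares(n)) for n in range(1, 5)])  # [1, 2, 12, 576]
```

The search space of (n!)^n row stacks is why uniform sampling for larger n needs the Jacobson–Matthews approach instead.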
Applications
Statistics and mathematics
• In the design of experiments, Latin squares are a special case of row-column designs for two blocking factors.[15][16]
• In algebra, Latin squares are related to generalizations of groups; in particular, Latin squares are characterized as being the multiplication tables (Cayley tables) of quasigroups. A binary operation whose table of values forms a Latin square is said to obey the Latin square property.
Error correcting codes
Sets of Latin squares that are orthogonal to each other have found an application as error correcting codes in situations where communication is disturbed by more types of noise than simple white noise, such as when attempting to transmit broadband Internet over powerlines.[17][18][19]
Firstly, the message is sent by using several frequencies, or channels, a common method that makes the signal less vulnerable to noise at any one specific frequency. A letter in the message to be sent is encoded by sending a series of signals at different frequencies at successive time intervals. In the example below, the letters A to L are encoded by sending signals at four different frequencies, in four time slots. The letter C, for instance, is encoded by first sending at frequency 3, then 4, 1 and 2.
${\begin{matrix}A\\B\\C\\D\\\end{matrix}}{\begin{bmatrix}1&2&3&4\\2&1&4&3\\3&4&1&2\\4&3&2&1\\\end{bmatrix}}\quad {\begin{matrix}E\\F\\G\\H\\\end{matrix}}{\begin{bmatrix}1&3&4&2\\2&4&3&1\\3&1&2&4\\4&2&1&3\\\end{bmatrix}}\quad {\begin{matrix}I\\J\\K\\L\\\end{matrix}}{\begin{bmatrix}1&4&2&3\\2&3&1&4\\3&2&4&1\\4&1&3&2\\\end{bmatrix}}$
The encodings of the twelve letters are formed from three Latin squares that are orthogonal to each other. Now imagine that there is added noise in channels 1 and 2 during the whole transmission. The letter A would then be picked up as:
${\begin{matrix}12&12&123&124\end{matrix}}$
In other words, in the first slot we receive signals from both frequency 1 and frequency 2; while the third slot has signals from frequencies 1, 2 and 3. Because of the noise, we can no longer tell if the first two slots were 1,1 or 1,2 or 2,1 or 2,2. But the 1,2 case is the only one that yields a sequence matching a letter in the above table, the letter A. Similarly, we may imagine a burst of static over all frequencies in the third slot:
${\begin{matrix}1&2&1234&4\end{matrix}}$
Again, we are able to infer from the table of encodings that it must have been the letter A being transmitted. The number of errors this code can spot is one less than the number of time slots. It has also been proven that if the number of frequencies is a prime or a power of a prime, the orthogonal Latin squares produce error detecting codes that are as efficient as possible.
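The decoding just described can be sketched in Python; the codeword table transcribes the rows of the three orthogonal squares shown above, and decode is an illustrative helper that keeps every letter consistent with the received frequency sets:

```python
# The twelve codewords: rows of the three mutually orthogonal order-4
# Latin squares shown above (letters A..L).
CODE = {
    "A": (1, 2, 3, 4), "B": (2, 1, 4, 3), "C": (3, 4, 1, 2), "D": (4, 3, 2, 1),
    "E": (1, 3, 4, 2), "F": (2, 4, 3, 1), "G": (3, 1, 2, 4), "H": (4, 2, 1, 3),
    "I": (1, 4, 2, 3), "J": (2, 3, 1, 4), "K": (3, 2, 4, 1), "L": (4, 1, 3, 2),
}

def decode(received):
    """received: one set of frequencies per time slot (noise may add
    extra frequencies).  Return the letters consistent with every slot."""
    return [letter for letter, word in CODE.items()
            if all(f in slot for f, slot in zip(word, received))]

# Noise on channels 1 and 2 for the whole transmission of 'A':
print(decode([{1, 2}, {1, 2}, {1, 2, 3}, {1, 2, 4}]))  # ['A']
# A burst of static over all frequencies in the third slot:
print(decode([{1}, {2}, {1, 2, 3, 4}, {4}]))           # ['A']
```

In both noisy cases only the letter A remains consistent, matching the analysis above.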
Mathematical puzzles
The problem of determining if a partially filled square can be completed to form a Latin square is NP-complete.[20]
The popular Sudoku puzzles are a special case of Latin squares; any solution to a Sudoku puzzle is a Latin square. Sudoku imposes the additional restriction that nine particular 3×3 adjacent subsquares must also contain the digits 1–9 (in the standard version). See also Mathematics of Sudoku.
The more recent KenKen and Strimko puzzles are also examples of Latin squares.
Board games
Latin squares have been used as the basis for several board games, notably the popular abstract strategy game Kamisado.
Agronomic research
Latin squares are used in the design of agronomic research experiments to minimise experimental errors.[21]
Heraldry
The Latin square also figures in the arms of the Statistical Society of Canada,[22] being specifically mentioned in its blazon. Also, it appears in the logo of the International Biometric Society.[23]
Generalizations
• A Latin rectangle is a generalization of a Latin square in which there are n columns and n possible values, but the number of rows may be smaller than n. Each value still appears at most once in each row and column.
• A Graeco-Latin square is a pair of two Latin squares such that, when one is laid on top of the other, each ordered pair of symbols appears exactly once.
• A Latin hypercube is a generalization of a Latin square from two dimensions to multiple dimensions.
See also
• Block design
• Combinatorial design
• Eight queens puzzle
• Futoshiki
• Magic square
• Problems in Latin squares
• Rook's graph, a graph that has Latin squares as its colorings
• Sator Square
• Vedic square
• Word square
Notes
1. Busby, Mattha (27 June 2020). "Cambridge college to remove window commemorating eugenicist". The Guardian. Retrieved 2020-06-28.
2. Wallis, W. D.; George, J. C. (2011), Introduction to Combinatorics, CRC Press, p. 212, ISBN 978-1-4398-0623-4
3. Colbourn, Charles J.; Dinitz, Jeffrey H. (2 November 2006). Handbook of Combinatorial Designs (2nd ed.). CRC Press. p. 12. ISBN 9781420010541. Retrieved 28 March 2017.
4. Dénes & Keedwell 1974, p. 128
5. Dénes & Keedwell 1974, p. 126
6. van Lint & Wilson 1992, pp. 161-162
7. Jia-yu Shao; Wan-di Wei (1992). "A formula for the number of Latin squares". Discrete Mathematics. 110 (1–3): 293–296. doi:10.1016/0012-365x(92)90722-r.
8. Gyarfas, Andras; Sarkozy, Gabor N. (2012). "Rainbow matchings and partial transversals of Latin squares". arXiv:1208.5670 [math.CO].
9. Aharoni, Ron; Berger, Eli; Kotlar, Dani; Ziv, Ran (2017-01-04). "On a conjecture of Stein". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 87 (2): 203–211. doi:10.1007/s12188-016-0160-3. ISSN 0025-5858. S2CID 119139740.
10. Stein, Sherman (1975-08-01). "Transversals of Latin squares and their generalizations". Pacific Journal of Mathematics. 59 (2): 567–575. doi:10.2140/pjm.1975.59.567. ISSN 0030-8730.
11. Koksma, Klaas K. (1969-07-01). "A lower bound for the order of a partial transversal in a latin square". Journal of Combinatorial Theory. 7 (1): 94–95. doi:10.1016/s0021-9800(69)80009-8. ISSN 0021-9800.
12. Woolbright, David E (1978-03-01). "An n × n Latin square has a transversal with at least n − √n distinct symbols". Journal of Combinatorial Theory, Series A. 24 (2): 235–237. doi:10.1016/0097-3165(78)90009-2. ISSN 0097-3165.
13. Hatami, Pooya; Shor, Peter W. (2008-10-01). "A lower bound for the length of a partial transversal in a Latin square". Journal of Combinatorial Theory, Series A. 115 (7): 1103–1113. doi:10.1016/j.jcta.2008.01.002. ISSN 0097-3165.
14. Jacobson, M. T.; Matthews, P. (1996). "Generating uniformly distributed random latin squares". Journal of Combinatorial Designs. 4 (6): 405–437. doi:10.1002/(sici)1520-6610(1996)4:6<405::aid-jcd3>3.0.co;2-j.
15. Bailey, R.A. (2008), "6 Row-Column designs and 9 More about Latin squares", Design of Comparative Experiments, Cambridge University Press, ISBN 978-0-521-68357-9, MR 2422352
16. Shah, Kirti R.; Sinha, Bikas K. (1989), "4 Row-Column Designs", Theory of Optimal Designs, Lecture Notes in Statistics, vol. 54, Springer-Verlag, pp. 66–84, ISBN 0-387-96991-8, MR 1016151
17. Colbourn, C.J.; Kløve, T.; Ling, A.C.H. (2004). "Permutation arrays for powerline communication". IEEE Trans. Inf. Theory. 50: 1289–1291. doi:10.1109/tit.2004.828150. S2CID 15920471.
18. Euler's revolution, New Scientist, 24 March 2007, pp 48–51
19. Huczynska, Sophie (2006). "Powerline communication and the 36 officers problem". Philosophical Transactions of the Royal Society A. 364 (1849): 3199–3214. Bibcode:2006RSPTA.364.3199H. doi:10.1098/rsta.2006.1885. PMID 17090455. S2CID 17662664.
20. C. Colbourn (1984). "The complexity of completing partial latin squares". Discrete Applied Mathematics. 8: 25–30. doi:10.1016/0166-218X(84)90075-1.
21. The application of Latin square in agronomic research
22. "Letters Patent Confering the SSC Arms". ssc.ca. Archived from the original on 2013-05-21.
23. The International Biometric Society Archived 2005-05-07 at the Wayback Machine
References
• Bailey, R.A. (2008). "6 Row-Column designs and 9 More about Latin squares". Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. MR 2422352.
• Dénes, J.; Keedwell, A. D. (1974). Latin squares and their applications. New York-London: Academic Press. p. 547. ISBN 0-12-209350-X. MR 0351850.
• Shah, Kirti R.; Sinha, Bikas K. (1989). "4 Row-Column Designs". Theory of Optimal Designs. Lecture Notes in Statistics. Vol. 54. Springer-Verlag. pp. 66–84. ISBN 0-387-96991-8. MR 1016151.
• van Lint, J. H.; Wilson, R. M. (1992). A Course in Combinatorics. Cambridge University Press. p. 157. ISBN 0-521-42260-4.
Further reading
• Dénes, J. H.; Keedwell, A. D. (1991). Latin squares: New developments in the theory and applications. Annals of Discrete Mathematics. Vol. 46. Paul Erdős (foreword). Amsterdam: Academic Press. ISBN 0-444-88899-3. MR 1096296.
• Hinkelmann, Klaus; Kempthorne, Oscar (2008). Design and Analysis of Experiments. Vol. I, II (Second ed.). Wiley. ISBN 978-0-470-38551-7. MR 2363107.
• Hinkelmann, Klaus; Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9. MR 2363107.
• Hinkelmann, Klaus; Kempthorne, Oscar (2005). Design and Analysis of Experiments, Volume 2: Advanced Experimental Design (First ed.). Wiley. ISBN 978-0-471-55177-5. MR 2129060.
• Knuth, Donald (2011). The Art of Computer Programming, Volume 4A: Combinatorial Algorithms, Part 1. Reading, Massachusetts: Addison-Wesley. ISBN 978-0-201-03804-0.
• Laywine, Charles F.; Mullen, Gary L. (1998). Discrete mathematics using Latin squares. Wiley-Interscience Series in Discrete Mathematics and Optimization. New York: John Wiley & Sons, Inc. ISBN 0-471-24064-8. MR 1644242.
• Shah, K. R.; Sinha, Bikas K. (1996). "Row-column designs". In S. Ghosh and C. R. Rao (ed.). Design and analysis of experiments. Handbook of Statistics. Vol. 13. Amsterdam: North-Holland Publishing Co. pp. 903–937. ISBN 0-444-82061-2. MR 1492586.
• Raghavarao, Damaraju (1988). Constructions and Combinatorial Problems in Design of Experiments (corrected reprint of the 1971 Wiley ed.). New York: Dover. ISBN 0-486-65685-3. MR 1102899.
• Street, Anne Penfold; Street, Deborah J. (1987). Combinatorics of Experimental Design. New York: Oxford University Press. ISBN 0-19-853256-3. MR 0908490.
• Berger, Paul D.; Maurer, Robert E.; Celli, Giovana B. (2017). Experimental Design with Applications in Management, Engineering, and the Sciences (2nd ed.). Springer. pp. 267–282.
External links
• Weisstein, Eric W. "Latin Square". MathWorld.
• Latin Squares in the Encyclopaedia of Mathematics
• Latin Squares in the Online Encyclopedia of Integer Sequences
Row and column vectors
In linear algebra, a column vector with $m$ elements is an $m\times 1$ matrix[1] consisting of a single column of $m$ entries, for example,
${\boldsymbol {x}}={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{m}\end{bmatrix}}.$
Similarly, a row vector is a $1\times n$ matrix for some $n$, consisting of a single row of $n$ entries,
${\boldsymbol {a}}={\begin{bmatrix}a_{1}&a_{2}&\dots &a_{n}\end{bmatrix}}.$
(Throughout this article, boldface is used for both row and column vectors.)
The transpose (indicated by T) of any row vector is a column vector, and the transpose of any column vector is a row vector:
${\begin{bmatrix}x_{1}\;x_{2}\;\dots \;x_{m}\end{bmatrix}}^{\rm {T}}={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{m}\end{bmatrix}}$
and
${\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{m}\end{bmatrix}}^{\rm {T}}={\begin{bmatrix}x_{1}\;x_{2}\;\dots \;x_{m}\end{bmatrix}}.$
The set of all row vectors with n entries in a given field (such as the real numbers) forms an n-dimensional vector space; similarly, the set of all column vectors with m entries forms an m-dimensional vector space.
The space of row vectors with n entries can be regarded as the dual space of the space of column vectors with n entries, since any linear functional on the space of column vectors can be represented as the left-multiplication of a unique row vector.
Notation
To simplify writing column vectors in-line with other text, sometimes they are written as row vectors with the transpose operation applied to them.
${\boldsymbol {x}}={\begin{bmatrix}x_{1}\;x_{2}\;\dots \;x_{m}\end{bmatrix}}^{\rm {T}}$
or
${\boldsymbol {x}}={\begin{bmatrix}x_{1},x_{2},\dots ,x_{m}\end{bmatrix}}^{\rm {T}}$
Some authors also use the convention of writing both column vectors and row vectors as rows, but separating row vector elements with commas and column vector elements with semicolons (see alternative notation 2 in the table below).
Notation | Row vector | Column vector
Standard matrix notation (array spaces, no commas, transpose signs) | ${\begin{bmatrix}x_{1}\;x_{2}\;\dots \;x_{m}\end{bmatrix}}$ | ${\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{m}\end{bmatrix}}{\text{ or }}{\begin{bmatrix}x_{1}\;x_{2}\;\dots \;x_{m}\end{bmatrix}}^{\rm {T}}$
Alternative notation 1 (commas, transpose signs) | ${\begin{bmatrix}x_{1},x_{2},\dots ,x_{m}\end{bmatrix}}$ | ${\begin{bmatrix}x_{1},x_{2},\dots ,x_{m}\end{bmatrix}}^{\rm {T}}$
Alternative notation 2 (commas and semicolons, no transpose signs) | ${\begin{bmatrix}x_{1},x_{2},\dots ,x_{m}\end{bmatrix}}$ | ${\begin{bmatrix}x_{1};x_{2};\dots ;x_{m}\end{bmatrix}}$
Operations
In matrix multiplication, each entry of the product matrix is the product of a row vector of the first matrix with a column vector of the second.
The dot product of two column vectors a, b, considered as elements of a coordinate space, is equal to the matrix product of the transpose of a with b,
$\mathbf {a} \cdot \mathbf {b} =\mathbf {a} ^{\intercal }\mathbf {b} ={\begin{bmatrix}a_{1}&\cdots &a_{n}\end{bmatrix}}{\begin{bmatrix}b_{1}\\\vdots \\b_{n}\end{bmatrix}}=a_{1}b_{1}+\cdots +a_{n}b_{n}\,,$
By the symmetry of the dot product, the dot product of two column vectors a, b is also equal to the matrix product of the transpose of b with a,
$\mathbf {b} \cdot \mathbf {a} =\mathbf {b} ^{\intercal }\mathbf {a} ={\begin{bmatrix}b_{1}&\cdots &b_{n}\end{bmatrix}}{\begin{bmatrix}a_{1}\\\vdots \\a_{n}\end{bmatrix}}=a_{1}b_{1}+\cdots +a_{n}b_{n}\,.$
The matrix product of a column and a row vector gives the outer product of two vectors a, b, an example of the more general tensor product. The matrix product of the column vector representation of a and the row vector representation of b gives the components of their dyadic product,
$\mathbf {a} \otimes \mathbf {b} =\mathbf {a} \mathbf {b} ^{\intercal }={\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}}{\begin{bmatrix}b_{1}&b_{2}&b_{3}\end{bmatrix}}={\begin{bmatrix}a_{1}b_{1}&a_{1}b_{2}&a_{1}b_{3}\\a_{2}b_{1}&a_{2}b_{2}&a_{2}b_{3}\\a_{3}b_{1}&a_{3}b_{2}&a_{3}b_{3}\\\end{bmatrix}}\,,$
which is the transpose of the matrix product of the column vector representation of b and the row vector representation of a,
$\mathbf {b} \otimes \mathbf {a} =\mathbf {b} \mathbf {a} ^{\intercal }={\begin{bmatrix}b_{1}\\b_{2}\\b_{3}\end{bmatrix}}{\begin{bmatrix}a_{1}&a_{2}&a_{3}\end{bmatrix}}={\begin{bmatrix}b_{1}a_{1}&b_{1}a_{2}&b_{1}a_{3}\\b_{2}a_{1}&b_{2}a_{2}&b_{2}a_{3}\\b_{3}a_{1}&b_{3}a_{2}&b_{3}a_{3}\\\end{bmatrix}}\,.$
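A direct sketch of the two products in pure Python (helper names illustrative):

```python
def dot(a, b):
    """a^T b: the 1 x n row a^T times the n x 1 column b."""
    return sum(x * y for x, y in zip(a, b))

def outer(a, b):
    """a b^T: the n x 1 column a times the 1 x n row b."""
    return [[x * y for y in b] for x in a]

a, b = [1, 2, 3], [4, 5, 6]
print(dot(a, b))    # 32
print(dot(b, a))    # 32 -- the dot product is symmetric
print(outer(a, b))  # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
print(outer(b, a))  # the transpose of the matrix above
```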
Matrix transformations
Main article: Transformation matrix
An n × n matrix M can represent a linear map and act on row and column vectors as the linear map's transformation matrix. For a row vector v, the product vM is another row vector p:
$\mathbf {v} M=\mathbf {p} \,.$
Another n × n matrix Q can act on p,
$\mathbf {p} Q=\mathbf {t} \,.$
Then one can write t = pQ = vMQ, so the matrix product transformation MQ maps v directly to t. Continuing with row vectors, further transformations of n-space are applied to the right of previous outputs.
When a column vector is transformed to another column vector under an n × n matrix action, the operation occurs to the left,
$\mathbf {p} ^{\mathrm {T} }=M\mathbf {v} ^{\mathrm {T} }\,,\quad \mathbf {t} ^{\mathrm {T} }=Q\mathbf {p} ^{\mathrm {T} },$
leading to the algebraic expression QM vT for the composed output from the input vT. With this use of column vectors for input, successive matrix transformations accumulate to the left.
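A small numeric sketch of the two conventions (illustrative helpers; note that M and Q here are symmetric, so the row and column forms happen to yield the same components):

```python
def mat_vec(M, v):
    """Column-vector convention: components of M v^T."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def vec_mat(v, M):
    """Row-vector convention: components of v M."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

M = [[0, 1], [1, 0]]  # swap the two coordinates
Q = [[2, 0], [0, 3]]  # scale the axes
v = [1, 2]

# Row convention: transformations compose on the right, v M Q.
print(vec_mat(vec_mat(v, M), Q))  # [4, 3]
# Column convention: transformations mount up on the left, Q M v^T.
print(mat_vec(Q, mat_vec(M, v)))  # [4, 3]
```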
See also
• Covariance and contravariance of vectors
• Index notation
• Vector of ones
• Single-entry vector
• Standard unit vector
• Unit vector
Notes
1. Artin, Michael (1991). Algebra. Englewood Cliffs, NJ: Prentice-Hall. p. 2. ISBN 0-13-004763-5.
References
See also: Linear algebra § Further reading
• Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0
• Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on March 1, 2001
• Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
• Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall
Row equivalence
In linear algebra, two matrices are row equivalent if one can be changed to the other by a sequence of elementary row operations. Alternatively, two m × n matrices are row equivalent if and only if they have the same row space. The concept is most commonly applied to matrices that represent systems of linear equations, in which case two matrices of the same size are row equivalent if and only if the corresponding homogeneous systems have the same set of solutions, or equivalently the matrices have the same null space.
Because elementary row operations are reversible, row equivalence is an equivalence relation. It is commonly denoted by a tilde (~).[1]
There is a similar notion of column equivalence, defined by elementary column operations; two matrices are column equivalent if and only if their transpose matrices are row equivalent. Two rectangular matrices that can be converted into one another allowing both elementary row and column operations are called simply equivalent.
Elementary row operations
An elementary row operation is any one of the following moves:
1. Swap: Swap two rows of a matrix.
2. Scale: Multiply a row of a matrix by a nonzero constant.
3. Pivot: Add a multiple of one row of a matrix to another row.
Two matrices A and B are row equivalent if it is possible to transform A into B by a sequence of elementary row operations.
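The three operations can be sketched directly (illustrative helpers operating in place on a matrix stored as a list of rows; exact fractions would be needed for non-integer scaling):

```python
def swap(M, i, j):
    """Swap rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale(M, i, k):
    """Multiply row i by a nonzero constant k."""
    assert k != 0
    M[i] = [k * x for x in M[i]]

def pivot(M, i, j, k):
    """Add k times row j to row i."""
    M[i] = [a + k * b for a, b in zip(M[i], M[j])]

# A single pivot relates the two example matrices of the next section:
A = [[1, 0, 0], [1, 1, 1]]
pivot(A, 1, 0, -1)  # subtract row 0 from row 1
print(A)  # [[1, 0, 0], [0, 1, 1]]
```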
Row space
Main article: Row space
The row space of a matrix is the set of all possible linear combinations of its row vectors. If the rows of the matrix represent a system of linear equations, then the row space consists of all linear equations that can be deduced algebraically from those in the system. Two m × n matrices are row equivalent if and only if they have the same row space.
For example, the matrices
${\begin{pmatrix}1&0&0\\0&1&1\end{pmatrix}}\;\;\;\;{\text{and}}\;\;\;\;{\begin{pmatrix}1&0&0\\1&1&1\end{pmatrix}}$
are row equivalent, the row space being all vectors of the form ${\begin{pmatrix}a&b&b\end{pmatrix}}$. The corresponding systems of homogeneous equations convey the same information:
${\begin{matrix}x=0\\y+z=0\end{matrix}}\;\;\;\;{\text{and}}\;\;\;\;{\begin{matrix}x=0\\x+y+z=0.\end{matrix}}$
In particular, both of these systems imply every equation of the form $ax+by+bz=0.\,$
Equivalence of the definitions
The fact that two matrices are row equivalent if and only if they have the same row space is an important theorem in linear algebra. The proof is based on the following observations:
1. Elementary row operations do not affect the row space of a matrix. In particular, any two row equivalent matrices have the same row space.
2. Any matrix can be reduced by elementary row operations to a matrix in reduced row echelon form.
3. Two matrices in reduced row echelon form have the same row space if and only if they are equal.
This line of reasoning also proves that every matrix is row equivalent to a unique matrix with reduced row echelon form.
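The uniqueness of the reduced row echelon form can be checked mechanically. As an illustrative sketch (SymPy is our choice of tool, not the article's), both example matrices from above reduce to the same reduced row echelon form:

```python
from sympy import Matrix

# Two matrices are row equivalent iff their reduced row echelon
# forms agree; Matrix.rref() returns (rref_matrix, pivot_columns).
A = Matrix([[1, 0, 0], [0, 1, 1]])
B = Matrix([[1, 0, 0], [1, 1, 1]])
rref_A, _ = A.rref()
rref_B, _ = B.rref()
print(rref_A == rref_B)  # both reduce to [[1, 0, 0], [0, 1, 1]]
```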
Additional properties
• Because the null space of a matrix is the orthogonal complement of the row space, two matrices are row equivalent if and only if they have the same null space.
• The rank of a matrix is equal to the dimension of the row space, so row equivalent matrices must have the same rank. This is equal to the number of pivots in the reduced row echelon form.
• A matrix is invertible if and only if it is row equivalent to the identity matrix.
• Matrices A and B are row equivalent if and only if there exists an invertible matrix P such that A=PB.[2]
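The last property can be illustrated numerically. A hedged sketch with NumPy (our choice of tool): for the two row-equivalent 2 × 3 matrices from the earlier example, an invertible P exists relating them; here P is the elementary matrix for adding row 0 to row 1, picked by hand.

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 1]])
B = np.array([[1, 0, 0],
              [1, 1, 1]])
# P encodes the single elementary row operation "add row 0 to row 1".
P = np.array([[1, 0],
              [1, 1]])
print(np.array_equal(P @ A, B))      # True: B = P A
print(round(np.linalg.det(P)) != 0)  # True: P is invertible
```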
See also
• Elementary row operations
• Row space
• Basis (linear algebra)
• Row reduction
• (Reduced) row echelon form
References
1. Lay 2005, p. 21, Example 4
2. Roman 2008, p. 9, Example 0.3
• Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0
• Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on March 1, 2001
• Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
• Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall
• Roman, Steven (2008). Advanced Linear Algebra. Graduate Texts in Mathematics. Vol. 135 (3rd ed.). Springer Science+Business Media, LLC. ISBN 978-0-387-72828-5.
External links
The Wikibook Linear Algebra has a page on the topic of: Row Equivalence
Rowbottom cardinal
In set theory, a Rowbottom cardinal, introduced by Rowbottom (1971), is a certain kind of large cardinal number.
An uncountable cardinal number $\kappa $ is said to be $\lambda $-Rowbottom if for every function f: [κ]<ω → λ (where λ < κ) there is a set H of order type $\kappa $ that is quasi-homogeneous for f, i.e., for every n, the f-image of the set of n-element subsets of H has fewer than $\lambda $ elements. $\kappa $ is Rowbottom if it is $\omega _{1}$-Rowbottom.
Every Ramsey cardinal is Rowbottom, and every Rowbottom cardinal is Jónsson. By a theorem of Kleinberg, the theories ZFC + “there is a Rowbottom cardinal” and ZFC + “there is a Jónsson cardinal” are equiconsistent.
In general, Rowbottom cardinals need not be large cardinals in the usual sense: Rowbottom cardinals could be singular. It is an open question whether ZFC + “$\aleph _{\omega }$ is Rowbottom” is consistent. If it is, it has much higher consistency strength than the existence of a Rowbottom cardinal. The axiom of determinacy does imply that $\aleph _{\omega }$ is Rowbottom (but contradicts the axiom of choice).
References
• Kanamori, Akihiro (2003). The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings (2nd ed.). Springer. ISBN 3-540-00384-3.
• Rowbottom, Frederick (1971) [1964], "Some strong axioms of infinity incompatible with the axiom of constructibility", Annals of Pure and Applied Logic, 3 (1): 1–44, doi:10.1016/0003-4843(71)90009-X, ISSN 0168-0072, MR 0323572
John Rowning
John Rowning (c. 1701 – November 1771) was an English mathematician, clergyman, and philosopher. He wrote on natural philosophy and designed measuring and calculating instruments. In his book he rejected the idea of a Newtonian ether and explained gravitational forces as the action of God.
Rowning was the son of a namesake from Ashby-with-Fenby, Lincolnshire, who may have been a watchmaker, as a brother took up that profession. He was educated in Glanford Brigg before joining Magdalene College, Cambridge as a sizar, graduating BA in 1724 and MA four years later. He became a teacher of experimental philosophy while also working with William Deane on the design of philosophical instruments. In 1733 he designed a barometer whose sensitivity could be altered by using tubes of varying diameters.[1] He also wrote on graphical solutions to parabolic equations.[2] He was made rector at Westley Waterless, Cambridgeshire in 1734 and later at Anderby. He published A Compendious System of Natural Philosophy, which went through revisions from 1735 to 1772. Rowning believed that matter could both attract and repel, citing as examples the cohesion of mercury drops and other liquids and capillary rise, and suggested that attractive forces acted at short range and turned repulsive beyond a certain distance.[3] Joseph Priestley was educated at Daventry Academy, where Rowning's text was used, and in some of his views on optics he followed the idea that particles could attract or repel depending on distance.[4] Rowning believed that the brightness of stars varied only with their distance from the Earth.[5] He was a member and sometime secretary of the Gentlemen's Society of Spalding.[6]
He was married and had three sons and a daughter. He died at his home on Carey Street in London.
References
1. Rowning, John (1733-12-31). "V. A description of a barometer, wherin the scale of variation may be encreased at pleasure". Philosophical Transactions of the Royal Society of London. 38 (427): 39–42. doi:10.1098/rstl.1733.0005. ISSN 0261-0523.
2. Rowning, John (1771-12-31). "XXIV. Directions for making a machine for finding the roots of equations universally, with the manner of using it". Philosophical Transactions of the Royal Society of London. 60: 240–256. doi:10.1098/rstl.1770.0024. ISSN 0261-0523.
3. Rowlinson, J.S. (2005). Cohesion: A Scientific History of Intermolecular Forces. Cambridge University Press. pp. 21–25.
4. Schofield, R.E. (2005). "Joseph Priestley, Natural Philosopher" (PDF). Bull. Hist. Chem. 30 (2): 57–62.
5. McCormmach, Russell (1968). "John Michell and Henry Cavendish: Weighing the Stars". The British Journal for the History of Science. 4 (2): 126–155. doi:10.1017/S0007087400003459. ISSN 0007-0874.
6. McConnell, Anita (2004-09-23). "Rowning, John". In Matthew, H. C. G.; Harrison, B. (eds.). The Oxford Dictionary of National Biography. Oxford: Oxford University Press. pp. ref:odnb/24230. doi:10.1093/ref:odnb/24230. Retrieved 2022-07-06.
External links
• A compendious system of natural philosophy Volume 1 Volume 2
Row and column spaces
In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
Let $\mathbb {F} $ be a field. The column space of an m × n matrix with components from $\mathbb {F} $ is a linear subspace of the m-space $\mathbb {F} ^{m}$. The dimension of the column space is called the rank of the matrix and is at most min(m, n).[1] A definition for matrices over a ring $\mathbb {K} $ is also possible.
The row space is defined similarly.
The row space and the column space of a matrix A are sometimes denoted as C(AT) and C(A) respectively.[2]
This article considers matrices of real numbers. The row and column spaces are subspaces of the real spaces $\mathbb {R} ^{n}$ and $\mathbb {R} ^{m}$ respectively.[3]
Overview
Let A be an m-by-n matrix. Then
1. rank(A) = dim(rowsp(A)) = dim(colsp(A)),[4]
2. rank(A) = number of pivots in any echelon form of A,
3. rank(A) = the maximum number of linearly independent rows or columns of A.[5]
If one considers the matrix as a linear transformation from $\mathbb {R} ^{n}$ to $\mathbb {R} ^{m}$, then the column space of the matrix equals the image of this linear transformation.
The column space of a matrix A is the set of all linear combinations of the columns in A. If A = [a1 ⋯ an], then colsp(A) = span({a1, ..., an}).
The concept of row space generalizes to matrices over $\mathbb {C} $, the field of complex numbers, or over any field.
Intuitively, given a matrix A, the action of the matrix A on a vector x will return a linear combination of the columns of A weighted by the coordinates of x as coefficients. Another way to look at this is that it will (1) first project x into the row space of A, (2) perform an invertible transformation, and (3) place the resulting vector y in the column space of A. Thus the result y = Ax must reside in the column space of A. See singular value decomposition for more details on this second interpretation.
Example
Given a matrix J:
$J={\begin{bmatrix}2&4&1&3&2\\-1&-2&1&0&5\\1&6&2&2&2\\3&6&2&5&1\end{bmatrix}}$
the rows are $\mathbf {r} _{1}={\begin{bmatrix}2&4&1&3&2\end{bmatrix}}$, $\mathbf {r} _{2}={\begin{bmatrix}-1&-2&1&0&5\end{bmatrix}}$, $\mathbf {r} _{3}={\begin{bmatrix}1&6&2&2&2\end{bmatrix}}$, $\mathbf {r} _{4}={\begin{bmatrix}3&6&2&5&1\end{bmatrix}}$. Consequently, the row space of J is the subspace of $\mathbb {R} ^{5}$ spanned by { r1, r2, r3, r4 }. Since these four row vectors are linearly independent, the row space is 4-dimensional. Moreover, in this case it can be seen that they are all orthogonal to the vector n = [6, −1, 4, −4, 0], so it can be deduced that the row space consists of all vectors in $\mathbb {R} ^{5}$ that are orthogonal to n.
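These claims about J can be verified numerically. The following sketch uses NumPy (our choice of tool, not the article's): the rank confirms that the four rows are independent, and J n = 0 confirms that every row is orthogonal to n.

```python
import numpy as np

J = np.array([[ 2,  4, 1, 3, 2],
              [-1, -2, 1, 0, 5],
              [ 1,  6, 2, 2, 2],
              [ 3,  6, 2, 5, 1]])
n = np.array([6, -1, 4, -4, 0])
print(np.linalg.matrix_rank(J))  # 4: the rows are linearly independent
print(J @ n)                     # all zeros: each row is orthogonal to n
```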
Column space
Definition
Let K be a field of scalars. Let A be an m × n matrix, with column vectors v1, v2, ..., vn. A linear combination of these vectors is any vector of the form
$c_{1}\mathbf {v} _{1}+c_{2}\mathbf {v} _{2}+\cdots +c_{n}\mathbf {v} _{n},$
where c1, c2, ..., cn are scalars. The set of all possible linear combinations of v1, ..., vn is called the column space of A. That is, the column space of A is the span of the vectors v1, ..., vn.
Any linear combination of the column vectors of a matrix A can be written as the product of A with a column vector:
${\begin{array}{rcl}A{\begin{bmatrix}c_{1}\\\vdots \\c_{n}\end{bmatrix}}&=&{\begin{bmatrix}a_{11}&\cdots &a_{1n}\\\vdots &\ddots &\vdots \\a_{m1}&\cdots &a_{mn}\end{bmatrix}}{\begin{bmatrix}c_{1}\\\vdots \\c_{n}\end{bmatrix}}={\begin{bmatrix}c_{1}a_{11}+\cdots +c_{n}a_{1n}\\\vdots \\c_{1}a_{m1}+\cdots +c_{n}a_{mn}\end{bmatrix}}=c_{1}{\begin{bmatrix}a_{11}\\\vdots \\a_{m1}\end{bmatrix}}+\cdots +c_{n}{\begin{bmatrix}a_{1n}\\\vdots \\a_{mn}\end{bmatrix}}\\&=&c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n}\end{array}}$
Therefore, the column space of A consists of all possible products Ax, for x ∈ Kn. This is the same as the image (or range) of the corresponding matrix transformation.
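The identity above, Ax as a weighted combination of the columns of A, is easy to check numerically. A minimal NumPy sketch (NumPy is our choice; the 3 × 2 matrix is the one from the example that follows):

```python
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [2, 0]])
x = np.array([3, 5])
# Weight each column of A by the corresponding entry of x.
combo = 3 * A[:, 0] + 5 * A[:, 1]
print(np.array_equal(A @ x, combo))  # True
```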
Example
If $A={\begin{bmatrix}1&0\\0&1\\2&0\end{bmatrix}}$, then the column vectors are v1 = [1, 0, 2]T and v2 = [0, 1, 0]T. A linear combination of v1 and v2 is any vector of the form
$c_{1}{\begin{bmatrix}1\\0\\2\end{bmatrix}}+c_{2}{\begin{bmatrix}0\\1\\0\end{bmatrix}}={\begin{bmatrix}c_{1}\\c_{2}\\2c_{1}\end{bmatrix}}$
The set of all such vectors is the column space of A. In this case, the column space is precisely the set of vectors (x, y, z) ∈ R3 satisfying the equation z = 2x (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).
Basis
The columns of A span the column space, but they may not form a basis if the column vectors are not linearly independent. Fortunately, elementary row operations do not affect the dependence relations between the column vectors. This makes it possible to use row reduction to find a basis for the column space.
For example, consider the matrix
$A={\begin{bmatrix}1&3&1&4\\2&7&3&9\\1&5&3&1\\1&2&0&8\end{bmatrix}}.$
The columns of this matrix span the column space, but they may not be linearly independent, in which case some subset of them will form a basis. To find this basis, we reduce A to reduced row echelon form:
${\begin{bmatrix}1&3&1&4\\2&7&3&9\\1&5&3&1\\1&2&0&8\end{bmatrix}}\sim {\begin{bmatrix}1&3&1&4\\0&1&1&1\\0&2&2&-3\\0&-1&-1&4\end{bmatrix}}\sim {\begin{bmatrix}1&0&-2&1\\0&1&1&1\\0&0&0&-5\\0&0&0&5\end{bmatrix}}\sim {\begin{bmatrix}1&0&-2&0\\0&1&1&0\\0&0&0&1\\0&0&0&0\end{bmatrix}}.$[6]
At this point, it is clear that the first, second, and fourth columns are linearly independent, while the third column is a linear combination of the first two. (Specifically, v3 = −2v1 + v2.) Therefore, the first, second, and fourth columns of the original matrix are a basis for the column space:
${\begin{bmatrix}1\\2\\1\\1\end{bmatrix}},\;\;{\begin{bmatrix}3\\7\\5\\2\end{bmatrix}},\;\;{\begin{bmatrix}4\\9\\1\\8\end{bmatrix}}.$
Note that the independent columns of the reduced row echelon form are precisely the columns with pivots. This makes it possible to determine which columns are linearly independent by reducing only to echelon form.
The above algorithm can be used in general to find the dependence relations between any set of vectors, and to pick out a basis from any spanning set. Also finding a basis for the column space of A is equivalent to finding a basis for the row space of the transpose matrix AT.
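The procedure above can be sketched with SymPy (an illustrative choice of tool): `rref()` reports the pivot columns, and the corresponding columns of the original matrix form a basis for the column space.

```python
from sympy import Matrix

A = Matrix([[1, 3, 1, 4],
            [2, 7, 3, 9],
            [1, 5, 3, 1],
            [1, 2, 0, 8]])
_, pivots = A.rref()
print(pivots)                       # (0, 1, 3)
# Take the pivot columns of the *original* A, not of its rref.
basis = [A.col(j) for j in pivots]  # columns 1, 2, and 4 of A
```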
To find the basis in a practical setting (e.g., for large matrices), the singular-value decomposition is typically used.
Dimension
Main article: Rank (linear algebra)
The dimension of the column space is called the rank of the matrix. The rank is equal to the number of pivots in the reduced row echelon form, and is the maximum number of linearly independent columns that can be chosen from the matrix. For example, the 4 × 4 matrix in the example above has rank three.
Because the column space is the image of the corresponding matrix transformation, the rank of a matrix is the same as the dimension of the image. For example, the transformation $\mathbb {R} ^{4}\to \mathbb {R} ^{4}$ described by the matrix above maps all of $\mathbb {R} ^{4}$ to some three-dimensional subspace.
The nullity of a matrix is the dimension of the null space, and is equal to the number of columns in the reduced row echelon form that do not have pivots.[7] The rank and nullity of a matrix A with n columns are related by the equation:
$\operatorname {rank} (A)+\operatorname {nullity} (A)=n.\,$
This is known as the rank–nullity theorem.
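An illustrative check of rank + nullity = n for the 4 × 4 example matrix above (SymPy is our choice of tool): rank 3, nullity 1, and n = 4 columns.

```python
from sympy import Matrix

A = Matrix([[1, 3, 1, 4],
            [2, 7, 3, 9],
            [1, 5, 3, 1],
            [1, 2, 0, 8]])
print(A.rank())            # 3
print(len(A.nullspace()))  # 1: one column of the rref lacks a pivot
print(A.rank() + len(A.nullspace()) == A.cols)  # True
```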
Relation to the left null space
The left null space of A is the set of all vectors x such that xTA = 0T. It is the same as the null space of the transpose of A. The product of the matrix AT and the vector x can be written in terms of the dot product of vectors:
$A^{\mathsf {T}}\mathbf {x} ={\begin{bmatrix}\mathbf {v} _{1}\cdot \mathbf {x} \\\mathbf {v} _{2}\cdot \mathbf {x} \\\vdots \\\mathbf {v} _{n}\cdot \mathbf {x} \end{bmatrix}},$
because row vectors of AT are transposes of column vectors vk of A. Thus ATx = 0 if and only if x is orthogonal (perpendicular) to each of the column vectors of A.
It follows that the left null space (the null space of AT) is the orthogonal complement to the column space of A.
For a matrix A, the column space, row space, null space, and left null space are sometimes referred to as the four fundamental subspaces.
For matrices over a ring
Similarly the column space (sometimes disambiguated as right column space) can be defined for matrices over a ring K as
$\sum \limits _{k=1}^{n}\mathbf {v} _{k}c_{k}$
for any c1, ..., cn, with the vector m-space replaced by a right free module; this reverses the order of scalar multiplication, so that the scalar ck multiplies the vector vk on the right, in the unusual vector–scalar order.[8]
Row space
Definition
Let K be a field of scalars. Let A be an m × n matrix, with row vectors r1, r2, ..., rm. A linear combination of these vectors is any vector of the form
$c_{1}\mathbf {r} _{1}+c_{2}\mathbf {r} _{2}+\cdots +c_{m}\mathbf {r} _{m},$
where c1, c2, ..., cm are scalars. The set of all possible linear combinations of r1, ..., rm is called the row space of A. That is, the row space of A is the span of the vectors r1, ..., rm.
For example, if
$A={\begin{bmatrix}1&0&2\\0&1&0\end{bmatrix}},$
then the row vectors are r1 = [1, 0, 2] and r2 = [0, 1, 0]. A linear combination of r1 and r2 is any vector of the form
$c_{1}{\begin{bmatrix}1&0&2\end{bmatrix}}+c_{2}{\begin{bmatrix}0&1&0\end{bmatrix}}={\begin{bmatrix}c_{1}&c_{2}&2c_{1}\end{bmatrix}}.$
The set of all such vectors is the row space of A. In this case, the row space is precisely the set of vectors (x, y, z) ∈ K3 satisfying the equation z = 2x (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).
For a matrix that represents a homogeneous system of linear equations, the row space consists of all linear equations that follow from those in the system.
The column space of A is equal to the row space of AT.
Basis
The row space is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the row space.
For example, consider the matrix
$A={\begin{bmatrix}1&3&2\\2&7&4\\1&5&2\end{bmatrix}}.$
The rows of this matrix span the row space, but they may not be linearly independent, in which case the rows will not be a basis. To find a basis, we reduce A to row echelon form:
Here r1, r2, r3 denote the rows of the matrix.
${\begin{aligned}{\begin{bmatrix}1&3&2\\2&7&4\\1&5&2\end{bmatrix}}&\xrightarrow {\mathbf {r} _{2}-2\mathbf {r} _{1}\to \mathbf {r} _{2}} {\begin{bmatrix}1&3&2\\0&1&0\\1&5&2\end{bmatrix}}\xrightarrow {\mathbf {r} _{3}-\,\,\mathbf {r} _{1}\to \mathbf {r} _{3}} {\begin{bmatrix}1&3&2\\0&1&0\\0&2&0\end{bmatrix}}\\&\xrightarrow {\mathbf {r} _{3}-2\mathbf {r} _{2}\to \mathbf {r} _{3}} {\begin{bmatrix}1&3&2\\0&1&0\\0&0&0\end{bmatrix}}\xrightarrow {\mathbf {r} _{1}-3\mathbf {r} _{2}\to \mathbf {r} _{1}} {\begin{bmatrix}1&0&2\\0&1&0\\0&0&0\end{bmatrix}}.\end{aligned}}$
Once the matrix is in echelon form, the nonzero rows are a basis for the row space. In this case, the basis is { [1, 3, 2], [2, 7, 4] }. Another possible basis { [1, 0, 2], [0, 1, 0] } comes from a further reduction.[9]
This algorithm can be used in general to find a basis for the span of a set of vectors. If the matrix is further simplified to reduced row echelon form, then the resulting basis is uniquely determined by the row space.
It is sometimes convenient to find a basis for the row space from among the rows of the original matrix instead (for example, this result is useful in giving an elementary proof that the determinantal rank of a matrix is equal to its rank). Since row operations can affect linear dependence relations of the row vectors, such a basis is instead found indirectly using the fact that the column space of AT is equal to the row space of A. Using the example matrix A above, find AT and reduce it to row echelon form:
$A^{\mathrm {T} }={\begin{bmatrix}1&2&1\\3&7&5\\2&4&2\end{bmatrix}}\sim {\begin{bmatrix}1&2&1\\0&1&2\\0&0&0\end{bmatrix}}.$
The pivots indicate that the first two columns of AT form a basis of the column space of AT. Therefore, the first two rows of A (before any row reductions) also form a basis of the row space of A.
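This indirect method can be sketched with SymPy (our choice of tool, not the article's): the pivot columns of AT indicate which rows of the original A form a basis of its row space.

```python
from sympy import Matrix

A = Matrix([[1, 3, 2],
            [2, 7, 4],
            [1, 5, 2]])
# Row-reduce the transpose; its pivot columns correspond to rows of A.
_, pivots = A.T.rref()
print(pivots)  # (0, 1): the first two rows of A are a basis
```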
Dimension
Main article: Rank (linear algebra)
The dimension of the row space is called the rank of the matrix. This is the same as the maximum number of linearly independent rows that can be chosen from the matrix, or equivalently the number of pivots. For example, the 3 × 3 matrix in the example above has rank two.[9]
The rank of a matrix is also equal to the dimension of the column space. The dimension of the null space is called the nullity of the matrix, and is related to the rank by the following equation:
$\operatorname {rank} (A)+\operatorname {nullity} (A)=n,$
where n is the number of columns of the matrix A. The equation above is known as the rank–nullity theorem.
Relation to the null space
The null space of matrix A is the set of all vectors x for which Ax = 0. The product of the matrix A and the vector x can be written in terms of the dot product of vectors:
$A\mathbf {x} ={\begin{bmatrix}\mathbf {r} _{1}\cdot \mathbf {x} \\\mathbf {r} _{2}\cdot \mathbf {x} \\\vdots \\\mathbf {r} _{m}\cdot \mathbf {x} \end{bmatrix}},$
where r1, ..., rm are the row vectors of A. Thus Ax = 0 if and only if x is orthogonal (perpendicular) to each of the row vectors of A.
It follows that the null space of A is the orthogonal complement to the row space. For example, if the row space is a plane through the origin in three dimensions, then the null space will be the perpendicular line through the origin. This provides a proof of the rank–nullity theorem (see dimension above).
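A small numerical check (NumPy, our choice): for the 3 × 3 example matrix above, the vector (−2, 0, 1) solves Ax = 0, so it is orthogonal to every row and spans the null space, the perpendicular line through the origin.

```python
import numpy as np

A = np.array([[1, 3, 2],
              [2, 7, 4],
              [1, 5, 2]])
x = np.array([-2, 0, 1])
print(A @ x)  # [0 0 0]: x lies in the null space, perpendicular to each row
```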
The row space and null space are two of the four fundamental subspaces associated with a matrix A (the other two being the column space and left null space).
Relation to coimage
If V and W are vector spaces, then the kernel of a linear transformation T: V → W is the set of vectors v ∈ V for which T(v) = 0. The kernel of a linear transformation is analogous to the null space of a matrix.
If V is an inner product space, then the orthogonal complement to the kernel can be thought of as a generalization of the row space. This is sometimes called the coimage of T. The transformation T is one-to-one on its coimage, and the coimage maps isomorphically onto the image of T.
When V is not an inner product space, the coimage of T can be defined as the quotient space V / ker(T).
See also
• Euclidean subspace
References & Notes
1. Linear algebra, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.
2. Strang, Gilbert (2016). Introduction to linear algebra (Fifth ed.). Wellesley, MA: Wellesley-Cambridge Press. pp. 128, 168. ISBN 978-0-9802327-7-6. OCLC 956503593.
3. Anton (1987, p. 179)
4. Anton (1987, p. 183)
5. Beauregard & Fraleigh (1973, p. 254)
6. This computation uses the Gauss–Jordan row-reduction algorithm. Each of the shown steps involves multiple elementary row operations.
7. Columns without pivots represent free variables in the associated homogeneous system of linear equations.
8. Important only if K is not commutative. Actually, this form is merely a product Ac of the matrix A to the column vector c from Kn where the order of factors is preserved, unlike the formula above.
9. The example is valid over the real numbers, the rational numbers, and other number fields. It is not necessarily correct over fields and rings with non-zero characteristic.
See also: Linear algebra § Further reading
Further reading
• Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
• Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0
• Banerjee, Sudipto; Roy, Anindya (June 6, 2014), Linear Algebra and Matrix Analysis for Statistics (1st ed.), CRC Press, ISBN 978-1-42-009538-8
• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X
• Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall
• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on March 1, 2001
• Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
• Strang, Gilbert (July 19, 2005), Linear Algebra and Its Applications (4th ed.), Brooks Cole, ISBN 978-0-03-010567-8
External links
Wikibooks has a book on the topic of: Linear Algebra/Column and Row Spaces
• Weisstein, Eric W. "Row Space". MathWorld.
• Weisstein, Eric W. "Column Space". MathWorld.
• Gilbert Strang, MIT Linear Algebra Lecture on the Four Fundamental Subspaces at Google Video, from MIT OpenCourseWare
• Khan Academy video tutorial
• Lecture on column space and nullspace by Gilbert Strang of MIT
• Row Space and Column Space
Royal Dutch Mathematical Society
The Royal Dutch Mathematical Society (Koninklijk Wiskundig Genootschap in Dutch, abbreviated as KWG) was founded in 1778.[1] Its goal is to promote the development of mathematics, both from a theoretical and applied point of view.
Royal Dutch Mathematical Society
Koninklijk Wiskundig Genootschap (translation: Royal Mathematical Society)
AbbreviationKWG
Founded1778
TypeScientific society
Location
• Netherlands
FieldMathematics
Chair
Barry Koren
Websitehttps://www.wiskgenoot.nl/
The society publishes the quarterly journal Nieuw Archief voor Wiskunde, the magazine Pythagoras, wiskundetijdschrift voor jongeren[2] for high school children, and the scientific journal Indagationes Mathematicae.
Each year the society organizes a winter symposium for high school teachers, and biannually it organizes the Dutch Mathematical Congress. Once every three years, the society awards the prestigious Brouwer Medal, named after L. E. J. Brouwer, to a distinguished mathematician.
Honorary members
Honorary members of the Koninklijk Wiskundig Genootschap[3]
Date of award | Name
30-04-1938 I. M. Vinogradov
28-09-1957 Paul Erdős
28-09-1957 Kurt Mahler
28-09-1957 Alfred Tarski
25-04-1964 Mark Kac
21-05-1966 Pavel Alexandrov
21-05-1966 Dirk Jan Struik
1966 Johannes van der Corput
1966 Jan Arnoldus Schouten
1966 Willem van der Woude
29-03-1978 O. Bottema
29-03-1978 Harold Scott MacDonald Coxeter
29-03-1978 Bartel Leendert van der Waerden
29-03-1978 J.H. Wansink
10-04-1985 Hans Freudenthal
08-04-1988 Adriaan Cornelis Zaanen
08-04-1988 Nicolaas Govert de Bruijn
17-04-1998 Fred van der Blij
17-04-1998 Jacob Korevaar
17-04-1998 J.J. Seidel
16-04-2004 P.C. Baayen
16-04-2004 J. H. van Lint
16-04-2004 J.A.F. de Rijk (pseudonym "Bruno Ernst")
13-04-2007 J. van de Craats
13-04-2007 Dirk van Dalen
13-04-2007 F. Verhulst
05-04-2013 Robert Tijdeman
22-03-2016 Rien Kaashoek
22-03-2016 Henk van der Vorst
23-04-2019 Herman te Riele
Institutional members
The society has the following institutional members:[4]
• Centrum Wiskunde & Informatica
• Delft University of Technology
• Eindhoven University of Technology
• Leiden University
• Radboud University Nijmegen
• University of Amsterdam:
• Institute for Logic, Language and Computation
• Korteweg-de Vries Institute for Mathematics
• University of Groningen
• Utrecht University
• Vrije Universiteit Amsterdam
References
1. Beckers, Danny (2008), "The Royal Dutch Mathematical Society since 1778" (PDF), Nieuw Archief voor Wiskunde, 9 (2): 147–149, MR 2454666.
2. "Pythagoras, wiskundetijdschrift voor jongeren". pyth.eu (in Dutch). Koninklijk Wiskundig Genootschap. Retrieved 9 October 2022.
3. "Ereleden". www.wiskgenoot.nl (in Dutch). Retrieved 2021-01-11.
4. "Instituutsleden". www.wiskgenoot.nl (in Dutch). Retrieved 2021-01-11.
External links
• "KWG". wiskgenoot.nl Official website. Koninklijk Wiskundig Genootschap. 2022. Retrieved 9 October 2022.
The European Mathematical Society
International member societies
• European Consortium for Mathematics in Industry
• European Society for Mathematical and Theoretical Biology
National member societies
• Austria
• Belarus
• Belgium
• Belgian Mathematical Society
• Belgian Statistical Society
• Bosnia and Herzegovina
• Bulgaria
• Croatia
• Cyprus
• Czech Republic
• Denmark
• Estonia
• Finland
• France
• Mathematical Society of France
• Society of Applied & Industrial Mathematics
• Société Francaise de Statistique
• Georgia
• Germany
• German Mathematical Society
• Association of Applied Mathematics and Mechanics
• Greece
• Hungary
• Iceland
• Ireland
• Israel
• Italy
• Italian Mathematical Union
• Società Italiana di Matematica Applicata e Industriale
• The Italian Association of Mathematics applied to Economic and Social Sciences
• Latvia
• Lithuania
• Luxembourg
• Macedonia
• Malta
• Montenegro
• Netherlands
• Norway
• Norwegian Mathematical Society
• Norwegian Statistical Association
• Poland
• Portugal
• Romania
• Romanian Mathematical Society
• Romanian Society of Mathematicians
• Russia
• Moscow Mathematical Society
• St. Petersburg Mathematical Society
• Ural Mathematical Society
• Slovakia
• Slovak Mathematical Society
• Union of Slovak Mathematicians and Physicists
• Slovenia
• Spain
• Catalan Society of Mathematics
• Royal Spanish Mathematical Society
• Spanish Society of Statistics and Operations Research
• The Spanish Society of Applied Mathematics
• Sweden
• Swedish Mathematical Society
• Swedish Society of Statisticians
• Switzerland
• Turkey
• Ukraine
• United Kingdom
• Edinburgh Mathematical Society
• Institute of Mathematics and its Applications
• London Mathematical Society
Academic Institutional Members
• Abdus Salam International Centre for Theoretical Physics
• Academy of Sciences of Moldova
• Bernoulli Center
• Centre de Recerca Matemàtica
• Centre International de Rencontres Mathématiques
• Centrum voor Wiskunde en Informatica
• Emmy Noether Research Institute for Mathematics
• Erwin Schrödinger International Institute for Mathematical Physics
• European Institute for Statistics, Probability and Operations Research
• Institut des Hautes Études Scientifiques
• Institut Henri Poincaré
• Institut Mittag-Leffler
• Institute for Mathematical Research
• International Centre for Mathematical Sciences
• Isaac Newton Institute for Mathematical Sciences
• Mathematisches Forschungsinstitut Oberwolfach
• Mathematical Research Institute
• Max Planck Institute for Mathematics in the Sciences
• Research Institute of Mathematics of the Voronezh State University
• Serbian Academy of Science and Arts
• Mathematical Society of Serbia
• Stefan Banach International Mathematical Center
• Thomas Stieltjes Institute for Mathematics
Institutional Members
• Central European University
• Faculty of Mathematics at the University of Barcelona
• Cellule MathDoc
|
Wikipedia
|
Royal Spanish Mathematical Society
The Royal Spanish Mathematical Society (Spanish: Real Sociedad Matemática Española, RSME) is the main professional society of Spanish mathematicians and represents Spanish mathematics within the European Mathematical Society (EMS) and the International Mathematical Union (IMU).
Real Sociedad Matemática Española
President: Eva Gallardo
Parent organization: International Mathematical Union and European Mathematical Society
Website: RSME Website
History
The RSME was founded in 1911 by a group of mathematicians, among whom were Luis Octavio de Toledo y Zulueta and Julio Rey Pastor, under the name of the Spanish Mathematical Society. The initiative arose at the first congress of the Spanish Association for the Progress of Science (AEPC), where the convenience of establishing a mathematics society was raised.
Throughout its more than 100 years, the society has gone through periods of greater and lesser activity. Since 1996 it has been in one of its most active periods; as of August 2005 it had about 1,700 members, including individual members as well as institutional members such as university faculties and departments and secondary schools.
It has reciprocal agreements with a large number of mathematical societies around the world. It is one of the societies that forms part of the Spanish Mathematical Committee and is an institutional member of the European Mathematical Society (EMS) and of the Confederation of Spanish Scientific Societies (COSCE).
Presidents
• José Echegaray y Eizaguirre: 1911-1916
• Zoel García de Galdeano: 1916-1920
• Leonardo Torres Quevedo: 1920-1924
• Luis Octavio de Toledo y Zulueta: 1924-1934
• Julio Rey Pastor: 1934-1934
• Juan López Soler: 1935-1937
• José Barinaga: 1937-1939
• Juan López Soler: 1939-1954
• Julio Rey Pastor: 1955-1961
• Alberto Dou Mas de Xaxàs: 1961-1963
• Francisco Botella: 1963-1970
• Enrique Linés Escardó: 1970-1976
• José Javier Etayo: 1976-1982
• Pedro Luis García Pérez: 1982-1988
• José Manuel Aroca: 1988-1996
• Antonio Martínez Naveira: 1996-2000
• Carlos Andradas: 2000-2006
• Olga Gil Medrano: 2006-2009
• Antonio Campillo López: 2009-2015
• Francisco Marcellán Español: 2015-2022
• Eva Gallardo: 2022-
Activities
The RSME actively collaborates with other scientific societies in Spain. Notable joint activities include the celebration of the World Mathematical Year in 2000; the preparation of the Spanish candidacy for, and the subsequent organization of, the International Congress of Mathematicians (ICM) held in Madrid in August 2006; and work on the Senate report on the teaching of science in secondary education (2003–04 academic year).
The RSME prepares, through its various commissions, reports on topics such as the situation of mathematical research in Spain, the problems of teaching mathematics in high school, the situation of mathematics in relation to the European higher education area, professional opportunities and the participation of women in mathematical research.
In addition, it is involved in international cooperation projects: digitization of mathematical literature, support of mathematics in Latin America, among others. The society organized the first Meeting of Latin American Mathematical Societies that took place in September 2003 in Santiago de Compostela, one of the results of which was the creation of the Network of Latin American Mathematical Organizations.
Regular activities
Among the regular activities of the RSME are:
• The Spanish Mathematical Olympiad, organized annually since 1964: a competition in which the high school students who will form the Spanish teams for the International Mathematical Olympiad and the Ibero-American Mathematical Olympiad are selected and prepared. In 2004 the final phase of the Ibero-American Mathematical Olympiad was organized in Castellón, and in 2008 the final phase of the International Mathematical Olympiad was held in Madrid.
• Congresses, held approximately every two years. They feature plenary lectures for a wide audience as well as more specialized sessions on specific research topics in the different areas of mathematics and its applications, including the history and didactics of mathematics. Among the most prominent: in June 2003 the first joint congress with the American Mathematical Society was held in Seville; in February 2005 the first joint congress was organized in collaboration with the Spanish Society of Applied Mathematics, the Spanish Society of Statistics and Operations Research, and the Catalan Society of Mathematics; in 2007 the first joint congress with the Société Mathématique de France was held in Zaragoza; in 2009 the first joint meeting with the Mexican Mathematical Society took place in Oaxaca, and it has been held every two years since; and in 2011 a congress commemorating the centenary of the RSME was held in Ávila.
• Scientific Sessions: two or three are organized each year on specific research topics, each lasting one day (two at most), at different universities. They have taken place, for example, in Zaragoza, Salamanca, Cantabria, Barcelona, Seville, Elche, Alicante, the Polytechnic University of Catalonia, and La Rioja.
• Summer School of Mathematical Research "Lluís Santaló". It has been held at the Menéndez Pelayo International University since 2002.
• School of Mathematical Education "Miguel de Guzmán", held for the first time in 2005 in La Coruña.
• The Divulgamat website is a virtual center for the dissemination of Mathematics.
Awards
• Medals of the Royal Spanish Mathematical Society: distinctions that express the community's public recognition of people who have made outstanding contributions in any area of mathematical endeavour. The first edition was in 2015.[1][2]
• 2020 María Jesús Carro Rossell, Antonio Ros Mulero.[3]
• 2019 Marisa Fernández Rodríguez, Jesús María Sanz Serna, Sebastià Xambó.[4]
• 2018 Consuelo Martínez, Adolfo Quirós, Juan Luis Vázquez.[5]
• 2017 Antonio Campillo López, Manuel de León Rodríguez, Marta Sanz-Solé[6]
• 2016 José Bonet Solves, María Gaspar Alonso-Vega, María Teresa Lozano Imízcoz[7][8]
• 2015 José Luis Fernández Pérez, Marta Macho Stadler, Antonio Martínez Naveira[9]
• Premio José Luis Rubio de Francia: one of the most important mathematics awards in Spain,[10] and the highest distinction awarded by the RSME.[11] It is aimed at young researchers in mathematics who are Spanish or who have carried out their work in Spain. First awarded in 2004, it is given annually.
The list of winners is as follows:[12]
• 2021: Ujué Etayo Rodriguez
• 2020: Daniel Sanz Alonso[13]
• 2019: María Ángeles García Ferrero[14]
• 2018: Joaquim Serra Montolí
• 2017: Angelo Lucia
• 2016: Xavier Ros-Oton
• 2015: Roger Casals
• 2014: Nuno Freitas
• 2013: Ángel Castro Martínez
• 2012: María Pe Pereira[15]
• 2011: Alberto Enciso Carrasco
• 2010: Carlos Beltrán Álvarez
• 2009: Álvaro Pelayo
• 2008: Francisco Gancedo
• 2007: Pablo Mira Carrillo
• 2006: Santiago Morales Domingo
• 2005: Javier Parcet
• 2004: Joaquim Puig
• Premio Vicent Caselles: an annual distinction for young Spanish researchers whose doctoral work is pioneering and influential in international mathematical research. The first edition was in 2015, and six awards are given annually.[16]
• 2021: Jon Asier Bárcena, Xavier Fernández-Real, José Ángel González-Prieto, Mercedes Pelegrín García, Abraham Rueda and María de la Paz Tirado[13]
• 2020: María Cumplido, Judit Muñoz Matute, Ujué Etayo, Diego Alonso Orán, Alessandro Audrito, Rubén Campoy García[17]
• 2019: María Ángeles García Ferrero, Marithania Silvero, Umberto Martínez Peñas, Daniel Álvarez Gavela, Xabier García Martínez and Carlos Mudarra[18]
• 2018: David Beltran, David Gómez Castro, David González Álvaro, Vanesa Guerrero, Álvaro del Pino, Carolina Vallejo Rodríguez[19]
• 2017: Óscar Domínguez Bonilla, Javier Gómez Serrano, Angelo Lucia, María Medina, Marina Murillo, Beatriz Sinova, Félix del Teso[20]
• 2016: Roger Casals, Francesc Castellà, Leonardo Colombo, José Manuel Conde Alonso, Martín López García, Jesús Yepes Nicolás[21]
• 2015: Alejandro Castro Castilla, Jezabel Curbelo Hernández, Javier Fresán Leal, Rafael Granero Belinchón, Luís Hernández Corbato, Xavier Ros Oton[22]
Publications
• Periodical publications: members receive La Gaceta de la RSME,[23][24] a quarterly magazine with varied mathematical content published since 1998. In addition, a weekly electronic newsletter with the most notable news is sent by email. From April 2005 to October 2007 the society also published the electronic magazine Matematicalia, oriented to mathematical outreach.
• Non-periodical publications: among these, facsimile editions of Leonhard Euler's Introductio in analysin infinitorum and Isaac Newton's De Analysi per Aequationes Numero Terminorum Infinitas stand out, both with annotated Spanish translations. The society also publishes the series "Publicaciones de la Real Sociedad Matemática Española", consisting of proceedings of conferences sponsored by the RSME.
• The RSME publishes collections of books, scientific and popular texts, in collaboration with publishers and scientific societies, and a research journal, the Revista Matemática Iberoamericana.
• The RSME works with the Basque Center for Applied Mathematics, other mathematical organizations in Spain (ESTALMAT, Sociedad Matemática Aplicada), the government of Spain, the Institute of Mathematical Sciences, and several Spanish universities.
References
1. "Medallas de la Real Sociedad Matemática Española" (in Spanish). Archived from the original on 6 August 2016.
2. "Premios y Becas de la RSME" (in Spanish). Archived from the original on 21 March 2017.
3. López, Nerea Diez (3 July 2020). "María Jesús Carro y Antonio Ros recibirán las Medallas de la RSME 2020". Real Sociedad Matemática Española (in Spanish).
4. López, Nerea Diez (28 June 2019). "Resolución de la edición de 2019 de las Medallas de la RSME". Real Sociedad Matemática Española (in Spanish).
5. López, Nerea Diez (17 June 2018). "Resolución de la edición de 2018 de las Medallas de la RSME". Real Sociedad Matemática Española (in Spanish).
6. "Resolución de la edición de 2017 de las Medallas de la RSME". Real Sociedad Matemática Española (in Spanish). Archived from the original on 22 July 2018.
7. María Teresa Lozano, premiada con la medalla de la Real Sociedad Matemática Española 2016
8. "Resolución de la edición de 2016 de las Medallas de la RSME". Real Sociedad Matemática Española (in Spanish). Archived from the original on 19 December 2016.
9. Resolución de la primera edición de las Medallas de la RSME 2015
10. "Resolución del Premio José Luis Rubio de Francia 2015 y las Medallas RSME 2016". Real Sociedad Matemática Española (in Spanish). Archived from the original on 16 March 2017.
11. "Mathematics People" (PDF). Notices of the AMS. 60 (11): 1472. December 2013.
12. López, Nerea Diez (2 June 2020). "Premio José Luis Rubio de Francia". Real Sociedad Matemática Española (in Spanish).
13. "Los Premios Vicent Caselles 2021 reconocen la excelencia investigadora de seis jóvenes matemáticos". FBBVA (in Spanish). 2021-07-20. Retrieved 2021-07-27.
14. Europa Press (1 July 2020). "María Ángeles García Ferrero, Premio José Luis Rubio de Francia 2019 de la Real Sociedad Matemática Española". Europa Press (in Spanish). Madrid.
15. Burgos, Diario de (29 May 2013). "La burgalesa María Pe Pereira, primera mujer galardonada con el Premio José Luis Rubio de Francia". Diario de Burgos (in Spanish).
16. López, Nerea Diez (3 June 2020). "Premios Vicent Caselles RSME-FBBVA". Real Sociedad Matemática Española (in Spanish).
17. Europa Press (7 July 2020). "Los Premios Vicent Caselles de la RSME y Fundación BBVA reconocen la excelencia investigadora de 6 jóvenes matemáticos". Europa Press (in Spanish). Madrid.
18. Valladolid, Diario de (9 July 2019). "La matemática que explica el calor". Diario de Valladolid (in Spanish).
19. López, Nerea Diez (3 June 2020). "Resolución de los Premios Vicent Caselles RSME – Fundación BBVA 2018". Real Sociedad Matemática Española (in Spanish).
20. López, Nerea Diez (3 June 2020). "Resolución de los Premios Vicent Caselles RSME - Fundación BBVA 2017". Real Sociedad Matemática Española (in Spanish).
21. López, Nerea Diez (3 June 2020). "Resolución de los Premios Vicent Caselles RSME - Fundación BBVA 2016". Real Sociedad Matemática Española (in Spanish).
22. López, Nerea Diez (3 June 2020). "Resolución de los Premios Vicent Caselles RSME - Fundación BBVA 2015". Real Sociedad Matemática Española (in Spanish).
23. La Gaceta de la RSME
24. Gaceta de la Real Sociedad Matematica Española en dialnet
External links
• Portal de la Red de Organizaciones Latinoamericanas de Matemáticas
• La Gaceta de la RSME Archived 2015-02-23 at the Wayback Machine
• Centenario de la RSME
|
Wikipedia
|
Rozanne Colchester
Rozanne Felicity Hastings Colchester (née Medhurst, 10 November 1922 – 17 November 2016) worked in British intelligence in the 1940s.[1]
Early life
She met Mussolini and Hitler before the Second World War.[2][3]
Career
In 1941 she joined Bletchley Park as a decoder, recruited by her father, Charles Medhurst, who was himself involved in intelligence. Because she spoke Italian, she joined the RAF section.[1] Following a successful interview she was taken on immediately and completed two days' training delivered by Joe Hooper.[4]
Colchester thus entered one of "Britain's most secret organisations", much of whose work "was based on the forensic decrypting and ordering of thousands of enemy messages". Like many of the women working alongside her at Bletchley Park, Colchester played a major role in decrypting and ordering the enemy's incoming messages, and her decoding experience helped uncover "general patterns of communications and confirmed logistical information".[5] In an interview with The Guardian, she recalled that conditions at Bletchley Park were "very hard" and described the work as "monotonous, sluggish work", but said that she gradually came to understand the coding as time went on.[6]
After the war, she worked for the Secret Intelligence Service in an undisclosed role. She served in Cairo and Istanbul where she helped investigate the double agent Kim Philby.[7]
Personal life
In 1946, she married Halsey Sparrowe Colchester, who became vicar of Great Tew and Little Tew, Oxfordshire, having previously been a Foreign Office diplomat and head of personnel at MI6; they had four sons and a daughter.[8][9]
References
1. "Rozanne Colchester (née Medhurst)" (PDF).
2. "The extraordinary female codebreakers of Bletchley Park". Telegraph.co.uk. Archived from the original on 4 January 2015. Retrieved 17 April 2016.
3. "An Unlikely Asset". Dangerous Women Project. 9 April 2016. Retrieved 17 April 2016.
4. "Bletchley Park decoder Rozanne Colchester - BBC Radio Oxford". Howard Bentham. 23 February 2015. Event occurs at 4:22. Retrieved 15 April 2016.
5. "An Unlikely Asset - Dangerous Women Project". 9 April 2016. Retrieved 8 March 2018.
6. McCrum, Robert (7 November 2010). "Women spies in the second world war: "It was horrible and wonderful. Like a love affair"". The Guardian. Retrieved 8 March 2018.
7. "The Guardian - Women spies in the second world war". TheGuardian.com. 7 November 2010.
8. The Foreign Office List and Diplomatic and Consular Year Book 1965, Harrison & Sons, 1965, p. 165
9. "Obituary: The Rev Halsey Colchester". Independent.co.uk. 23 October 2011.
|
Wikipedia
|
Da Ruan
Da Ruan (Chinese: 阮达; September 10, 1960 – July 31, 2011) was a Chinese-Belgian mathematician, scientist, and professor. He held a Ph.D. from Ghent University.[1]
Da Ruan
Born: September 10, 1960, Shanghai, China
Died: July 31, 2011 (aged 50), Mol, Belgium
Nationality: Belgian
Occupation: Mathematician
Bibliography
• Fuzzy set theory and advanced mathematical applications (1995, Kluwer Academic Publishers)[2]
References
1. Li, Tianrui (2011). "Obituary: Da Ruan (10 September 1960 to 31 July 2011)". International Journal of General Systems. 40 (8): 775–776. doi:10.1080/03081079.2011.625631.
2. Da Ruan (30 June 1995). Fuzzy set theory and advanced mathematical applications. Kluwer Academic Publishers. ISBN 978-0-7923-9586-7. Retrieved 8 June 2012.
|
Wikipedia
|
Rubik's Cube group
The Rubik's Cube group is a group $(G,\cdot )$ that represents the structure of the Rubik's Cube mechanical puzzle. Each element of the set $G$ corresponds to a cube move, which is the effect of any sequence of rotations of the cube's faces. With this representation, not only can any cube move be represented, but any position of the cube as well, by detailing the cube moves required to rotate the solved cube into that position. Indeed with the solved position as a starting point, there is a one-to-one correspondence between each of the legal positions of the Rubik's Cube and the elements of $G$.[1][2] The group operation $\cdot $ is the composition of cube moves, corresponding to the result of performing one cube move after another.
The Rubik's Cube group is constructed by labeling each of the 48 non-center facets with the integers 1 to 48. Each configuration of the cube can be represented as a permutation of the labels 1 to 48, depending on the position of each facet. Using this representation, the solved cube is the identity permutation which leaves the cube unchanged, while the twelve cube moves that rotate a layer of the cube 90 degrees are represented by their respective permutations. The Rubik's Cube group is the subgroup of the symmetric group $S_{48}$ generated by the six permutations corresponding to the six clockwise cube moves. With this construction, any configuration of the cube reachable through a sequence of cube moves is within the group. Its operation $\cdot $ refers to the composition of two permutations; within the cube, this refers to combining two sequences of cube moves together, doing one after the other. The Rubik's Cube group is non-abelian as composition of cube moves is not commutative; doing two sequences of cube moves in a different order can result in a different configuration.
Cube moves
A $3\times 3\times 3$ Rubik's Cube consists of $6$ faces, each with $9$ colored squares called facets, for a total of $54$ facets. A solved cube has all of the facets on each face having the same color.
A cube move rotates one of the $6$ faces: $90^{\circ },180^{\circ }$ or $-90^{\circ }$ (half-turn metric).[3] A center facet rotates about its axis but otherwise stays in the same position.[1]
Cube moves are described with the Singmaster notation:[4]
Basic 90° 180° -90°
$F$ turns the front clockwise $F^{2}$ turns the front clockwise twice $F^{\prime }$ turns the front counter-clockwise
$B$ turns the back clockwise $B^{2}$ turns the back clockwise twice $B^{\prime }$ turns the back counter-clockwise
$U$ turns the top clockwise $U^{2}$ turns the top clockwise twice $U^{\prime }$ turns the top counter-clockwise
$D$ turns the bottom clockwise $D^{2}$ turns the bottom clockwise twice $D^{\prime }$ turns the bottom counter-clockwise
$L$ turns the left face clockwise $L^{2}$ turns the left face clockwise twice $L^{\prime }$ turns the left face counter-clockwise
$R$ turns the right face clockwise $R^{2}$ turns the right face clockwise twice $R^{\prime }$ turns the right face counter-clockwise
The empty move is $E$.[note 1] The concatenation $LLLL$ is the same as $E$, and $RRR$ is the same as $R^{\prime }$.
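These relations can be checked with elementary permutation arithmetic. The sketch below uses a toy stand-in for a face turn: like a real clockwise face move, it is a product of five disjoint 4-cycles of facets, but the labels 0–19 are hypothetical and not the standard 1–48 facet numbering.

```python
def compose(p, q):
    """Permutation product: apply q first, then p (tuples mapping i -> p[i])."""
    return tuple(p[q[i]] for i in range(len(p)))

def power(p, n):
    r = tuple(range(len(p)))  # identity permutation, i.e. the empty move E
    for _ in range(n):
        r = compose(p, r)
    return r

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def from_cycles(n, cycles):
    """Build a permutation of {0, ..., n-1} from disjoint cycles."""
    p = list(range(n))
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a] = b  # a -> b within the cycle
    return tuple(p)

# Toy face turn: five disjoint 4-cycles (hypothetical labels 0-19)
L = from_cycles(20, [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11],
                     [12, 13, 14, 15], [16, 17, 18, 19]])
E = tuple(range(20))

assert power(L, 4) == E           # LLLL is the empty move E
assert power(L, 3) == inverse(L)  # RRR has the same effect as R'
```

Because every face move consists only of 4-cycles, any face move has order 4, which is exactly why four quarter-turns cancel and three equal the inverse turn.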
Group structure
The following uses the notation described in How to solve the Rubik's Cube. The orientation of the six centre facets is fixed.
We can identify each of the six face rotations as elements in the symmetric group on the set of non-center facets. More concretely, we can label the non-center facets by the numbers 1 through 48, and then identify the six face rotations as elements of the symmetric group S48 according to how each move permutes the various facets. The Rubik's Cube group, G, is then defined to be the subgroup of S48 generated by the 6 face rotations, $\{F,B,U,D,L,R\}$.
The cardinality of G is given by
$|G|=43{,}252{,}003{,}274{,}489{,}856{,}000\,\!={\bigl (}{\bigl (}12!\cdot 8!{\bigr )}\div 2{\bigr )}\cdot {\bigl (}2^{12}\div 2{\bigr )}\cdot {\bigl (}3^{8}\div 3{\bigr )}=2^{27}3^{14}5^{3}7^{2}11$.[5][6]
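The counting formula above can be reproduced directly; a quick check in Python:

```python
from math import factorial

# (12! * 8! / 2) piece arrangements with even overall parity,
# 2^12 / 2 edge orientations, 3^8 / 3 corner orientations
order = (factorial(12) * factorial(8) // 2) * (2**12 // 2) * (3**8 // 3)

assert order == 43_252_003_274_489_856_000
assert order == 2**27 * 3**14 * 5**3 * 7**2 * 11  # the stated prime factorisation
```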
Despite being this large, God's Number for Rubik's Cube is 20; that is, any position can be solved in 20 or fewer moves[3] (where a half-twist is counted as a single move; if a half-twist is counted as two quarter-twists, then God's number is 26[7]).
The largest order of an element in G is 1260. For example, one such element of order 1260 is
$(RU^{2}D^{-1}BD^{-1})$.[1]
G is non-abelian since, for example, $FR$ is not the same as $RF$. That is, not all cube moves commute with each other.[2]
Subgroups
We consider two subgroups of G: First the subgroup Co of cube orientations, the moves that leave the position of every block fixed, but can change the orientations of blocks. This group is a normal subgroup of G. It can be represented as the normal closure of some moves that flip a few edges or twist a few corners. For example, it is the normal closure of the following two moves:
$BR^{\prime }D^{2}RB^{\prime }U^{2}BR^{\prime }D^{2}RB^{\prime }U^{2},\,\!$ (twist two corners)
$RUDB^{2}U^{2}B^{\prime }UBUB^{2}D^{\prime }R^{\prime }U^{\prime },\,\!$ (flip two edges).
Second, we take the subgroup $C_{P}$ of cube permutations, the moves which can change the positions of the blocks, but leave the orientation fixed. For this subgroup there are several choices, depending on the precise way you define orientation.[note 2] One choice is the following group, given by generators (the last generator is a 3 cycle on the edges):
$C_{p}=[U^{2},D^{2},F,B,L^{2},R^{2},R^{2}U^{\prime }FB^{\prime }R^{2}F^{\prime }BU^{\prime }R^{2}].\,\!$
Since Co is a normal subgroup and the intersection of Co and Cp is the identity and their product is the whole cube group, it follows that the cube group G is the semi-direct product of these two groups. That is
$G=C_{o}\rtimes C_{p}.\,$
Next we can take a closer look at these two groups. The structure of Co is
$\mathbb {Z} _{3}^{7}\times \mathbb {Z} _{2}^{11},\ $
since the group of rotations of each corner (resp. edge) cube is $\mathbb {Z} _{3}$ (resp. $\mathbb {Z} _{2}$), and in each case all but one may be rotated freely, but these rotations determine the orientation of the last one. Noticing that there are 8 corners and 12 edges, and that all the rotation groups are abelian, gives the above structure.
Cube permutations, Cp, is a little more complicated. It has the following two disjoint normal subgroups: the group of even permutations on the corners A8 and the group of even permutations on the edges A12. Complementary to these two subgroups is a permutation that swaps two corners and swaps two edges. It turns out that these generate all possible permutations, which means
$C_{p}=(A_{8}\times A_{12})\,\rtimes \mathbb {Z} _{2}.$
Putting all the pieces together we get that the cube group is isomorphic to
$(\mathbb {Z} _{3}^{7}\times \mathbb {Z} _{2}^{11})\rtimes \,((A_{8}\times A_{12})\rtimes \mathbb {Z} _{2}).$
This group can also be described as the subdirect product
$[(\mathbb {Z} _{3}^{7}\rtimes \mathrm {S} _{8})\times (\mathbb {Z} _{2}^{11}\rtimes \mathrm {S} _{12})]^{\frac {1}{2}}$,
in the notation of Griess.
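As a consistency check on the semi-direct product decomposition, the orders of the two factors must multiply back to $|G|$; a short sketch:

```python
from math import factorial

c_o = 3**7 * 2**11                                    # |Z_3^7 x Z_2^11|
c_p = (factorial(8) // 2) * (factorial(12) // 2) * 2  # |(A_8 x A_12) semidirect Z_2|

assert c_o * c_p == 43_252_003_274_489_856_000        # |G| as computed above
```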
Generalizations
When the centre facet symmetries are taken into account, the symmetry group is a subgroup of
$[\mathbb {Z} _{4}^{6}\times (\mathbb {Z} _{3}^{7}\rtimes \mathrm {S} _{8})\times (\mathbb {Z} _{2}^{11}\rtimes \mathrm {S} _{12})]^{\frac {1}{2}}.$
(This unimportance of centre facet rotations is an implicit example of a quotient group at work, shielding the reader from the full automorphism group of the object in question.)
The symmetry group of the Rubik's Cube obtained by disassembling and reassembling it is slightly larger: namely it is the direct product
$\mathbb {Z} _{4}^{6}\times (\mathbb {Z} _{3}\wr \mathrm {S} _{8})\times (\mathbb {Z} _{2}\wr \mathrm {S} _{12}).$
The first factor is accounted for solely by rotations of the centre pieces, the second solely by symmetries of the corners, and the third solely by symmetries of the edges. The latter two factors are examples of generalized symmetric groups, which are themselves examples of wreath products.
The simple groups that occur as quotients in the composition series of the standard cube group (i.e. ignoring centre piece rotations) are $A_{8}$, $A_{12}$, $\mathbb {Z} _{3}$ (7 times), and $\mathbb {Z} _{2}$ (12 times).
Conjugacy classes
It has been reported that the Rubik's Cube Group has 81,120 conjugacy classes.[8] The number was calculated by counting the number of even and odd conjugacy classes in the edge and corner groups separately and then multiplying them, ensuring that the total parity is always even. Special care must be taken to count so-called parity-sensitive conjugacy classes, whose elements always differ when conjugated with any even element versus any odd element.[9]
Number of conjugacy classes in the Rubik's Cube Group and various subgroups[9]
Group No. even No. odd No. ps Total
Corner positions 12 10 2 22
Edge positions 40 37 3 77
All positions 856
Corners 140 130 10 270
Edges 308 291 17 599
Whole cube 81,120
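The totals in the table are consistent with pairing corner classes and edge classes of equal parity, with parity-sensitive classes pairing only among themselves. The sketch below checks that reading of the table; the pairing rule is an inference from the numbers shown, not the full argument of the cited post.

```python
# (even, odd, parity-sensitive) class counts, read from the table above
corner_positions, edge_positions = (12, 10, 2), (40, 37, 3)
corners, edges = (140, 130, 10), (308, 291, 17)

def combine(c, e):
    # pair equal-parity classes; parity-sensitive classes pair only with each other
    return c[0] * e[0] + c[1] * e[1] + c[2] * e[2]

assert combine(corner_positions, edge_positions) == 856  # all positions
assert combine(corners, edges) == 81_120                 # whole cube
```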
See also
• Commutator
• Conjugacy class
• Coset
• Optimal solutions for Rubik's Cube
• Solvable group
• Thistlethwaite's algorithm
Notes
1. Not to be confused with $E$ as used in the extended Singmaster Notation, where it represents a quarter-turn of the equator layer (i.e., the central layer between $U$ and $D$), in the same direction as $D$.
2. One way of defining orientation is as follows, adapted from pages 314–315 of Metamagical Themas by Douglas Hofstadter. Define two notions: the chief color of a block and the chief facet of a position, where a position means the location of a block. The chief facet of a position will be the one on the front or back face of the cube, if that position has such a facet; otherwise it will be the one on the left or right face. There are nine chief facets on F, nine on B, two on L, and two on R. The chief color of a block is defined as the color that should be on the block's chief facet when the block "comes home" to its proper position in a solved cube. A cube move $X$ preserves orientation if, when $X$ has been applied to a solved cube, the chief color of every block is on the chief facet of its position.
References
1. Joyner, David (2002). Adventures in group theory: Rubik's Cube, Merlin's machine, and Other Mathematical Toys. Johns Hopkins University Press. ISBN 0-8018-6947-1.
2. Davis, Tom (2006). "Group Theory via Rubik's Cube" (PDF).
3. Rokicki, Tomas; et al. "God's Number is 20".
4. Singmaster, David (1981). Notes on Rubik's Magic Cube. Penguin Books. ISBN 0-907395-00-7.
5. Schönert, Martin. "Analyzing Rubik's Cube with GAP".
6. Tom Davis, "Rubik's Cube. Part II", p.23 in, Zvezdelina Stankova, Tom Rike (eds), A Decade of the Berkeley Math Circle, American Mathematical Society, 2015 ISBN 978-0-8218-4912-5.
7. God's Number is 26 in the Quarter-Turn Metric
8. Garron, Lucas (March 8, 2010). "The Permutation Group of the Rubik's Cube" (PDF). Semantic Scholar. S2CID 18785794. Archived from the original (PDF) on February 22, 2019. Retrieved August 1, 2020.
9. brac37 (October 20, 2009). "Conjugacy classes of the cube". Domain of the Cube Forum. Retrieved August 1, 2020.
|
Wikipedia
|
Rudolf Ernest Langer
Rudolf Ernest Langer (8 March 1894 – 11 March 1968) was an American mathematician, known for the Langer correction and as a president of the Mathematical Association of America.[1]
Career
Langer, the elder brother of William L. Langer, earned his PhD in 1922 from Harvard University under G. D. Birkhoff. He taught mathematics at Dartmouth College from 1922 to 1925. From 1927 to 1964 he was a mathematics professor at the University of Wisconsin-Madison and, from 1942 to 1952, the chair of the mathematics department.[1] From 1956 to 1963 he was the director of the Army Mathematics Research Center; he was succeeded as director by J. Barkley Rosser.[2] Langer's doctoral students include Homer Newell, Jr. and Henry Scheffé.
Works
• "Developments associated with a boundary problem not linear in the parameter". Trans. Amer. Math. Soc. 25 (2): 155–172. 1923. doi:10.1090/s0002-9947-1923-1501235-3. MR 1501235.
• "On the momental constants of a summable function". Trans. Amer. Math. Soc. 28 (1): 168–182. 1926. doi:10.1090/s0002-9947-1926-1501338-6. MR 1501338.
• "On the theory of integral equations with discontinuous kernels". Trans. Amer. Math. Soc. 28 (4): 585–639. 1926. doi:10.1090/s0002-9947-1926-1501367-2. MR 1501367.
• "The boundary problem associated with a differential equation in which the coefficient of the parameter changes sign". Trans. Amer. Math. Soc. 31 (1): 1–24. 1929. doi:10.1090/s0002-9947-1929-1501464-4. MR 1501464.
• "On the zeros of exponential sums and integrals". Bull. Amer. Math. Soc. 37 (4): 213–239. 1931. doi:10.1090/s0002-9904-1931-05133-8. MR 1562129.
• "On the asymptotic solutions of ordinary differential equations, with an application to the Bessel functions of large order". Trans. Amer. Math. Soc. 33 (1): 23–64. 1931. doi:10.1090/s0002-9947-1931-1501574-0. MR 1501574.
• "On the asymptotic solutions of differential equations, with an application to the Bessel functions of large complex order". Trans. Amer. Math. Soc. 34 (3): 447–480. 1932. doi:10.1090/s0002-9947-1932-1501648-5. MR 1501648.
• "On an inverse problem in differential equations". Bull. Am. Math. Soc. 39 (10): 814–820. 1933. doi:10.1090/s0002-9904-1933-05752-x. MR 1562734.
• "The asymptotic solutions of ordinary differential equations of the second order, with special reference to the Stokes phenomenon". Bull. Amer. Math. Soc. 40 (8): 545–582. 1934. doi:10.1090/s0002-9904-1934-05913-5. MR 1562910.
• "The solutions of the Mathieu equation with a complex variable and at least one parameter large". Trans. Amer. Math. Soc. 36 (3): 637–710. 1934. doi:10.1090/s0002-9947-1934-1501760-2. MR 1501760.
• "On the asymptotic solutions of ordinary differential equations, with reference to the Stokes' phenomenon about a singular point". Trans. Amer. Math. Soc. 37 (3): 397–416. 1935. doi:10.1090/s0002-9947-1935-1501793-7. MR 1501793.
• "On determination of earth conductivity from observed surface potentials". Bull. Am. Math. Soc. 42 (10): 747–754. 1936. doi:10.1090/s0002-9904-1936-06420-7. MR 1563417.
• "On the connection formulas and the solutions of the wave equations". Phys. Rev. 51 (8): 669–676. 1937. Bibcode:1937PhRv...51..669L. doi:10.1103/physrev.51.669.
• "The asymptotic solutions of ordinary linear differential equations of the second order, with special reference to a turning point". Trans. Amer. Math. Soc. 67: 461–490. 1949. doi:10.1090/s0002-9947-1949-0033420-2. MR 0033420.
• "Differential Equations, Ordinary". Encyclopædia Britannica. Vol. 7 (1967 and 1968 ed.). pp. 407–412.
References
1. MAA presidents: Rudolf Ernest Langer
2. "Memorial Resolution on the Death of Emeritus Professor J. Barkley Rosser" (PDF), University of Wisconsin, Madison, March 5, 1990, archived from the original (PDF) on June 8, 2011
External links
• Rudolf Ernest Langer at the Mathematics Genealogy Project
Rudolf Gorenflo
Rudolf Gorenflo (31 July 1930 – 20 October 2017)[1] was a German mathematician.
Biography
Gorenflo was born on July 31, 1930, in Friedrichstal, Germany. From 1950 to 1956 he attended the Karlsruhe Institute of Technology, from which he received his diploma in mathematics. From 1957 to 1961 he was a scientific assistant there, and afterwards worked for a year at the Standard Electric Lorenz Company. From 1962 to 1970 he worked at the Max Planck Institute for Plasma Physics in Garching, near Munich. He moved to the Technical University in Aachen in 1970 and a year later became a professor there.[2]
In 1972 he was invited as a guest professor to the University of Heidelberg, and in October 1973 he became a full professor at the Free University of Berlin. In 1995 he became a professor at the University of Tokyo, and in October 1998 he returned to the Free University as professor emeritus. During his life he collaborated with scientists from China, Israel, Italy, Japan, the former Soviet Union, the United States and Vietnam.[2]
Academic work
As of 2013, he had published over 250 peer-reviewed articles, one of which has over 1800 citations. His works have appeared in such journals as the Journal of Vibration and Control and various Journal of Physics journals.[3]
References
1. "Prof. Dr. Rudolf Gorenflo" (in German). FU Berlin. 2017-11-03. Retrieved 2017-11-07.
2. "Prof. Dr. Rudolf Gorenflo". Fractional Calculus Modeling. Retrieved December 11, 2013.
3. "Rudolf Gorenflo". Google Scholar. Retrieved August 30, 2020.
Rudolf Kruse
Rudolf Kruse (born 12 September 1952 in Rotenburg/Wümme) is a German computer scientist and mathematician.
Education and professional career
Rudolf Kruse obtained his diploma degree in mathematics in 1979 from the TU Braunschweig, Germany, a PhD in mathematics in 1980, and the venia legendi in mathematics in 1984, all from the same university. Following a short stay at the Fraunhofer Society, in 1986 he joined the University of Braunschweig as a professor of computer science. From 1996 to 2017 he was a full professor at the Department of Computer Science of the Otto-von-Guericke Universität Magdeburg, where he led the computational intelligence research group. Since October 2017 he has been an emeritus professor.
Research activities
He has carried out research and projects in statistics, artificial intelligence, expert systems, fuzzy control, fuzzy data analysis, computational intelligence, and information mining. His research group has been successful in a range of industrial applications.
Rudolf Kruse has coauthored 40 books as well as more than 450 refereed technical papers in various scientific areas. He is associate editor of several scientific journals. He is a fellow of the International Fuzzy Systems Association (IFSA[1]), fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI[2]) and fellow of the Institute of Electrical and Electronics Engineers (IEEE).
References
1. "IFSA". Archived from the original on 2012-03-15. Retrieved 2012-03-15.
2. "Home | European Association for Artificial Intelligence".
External links
• Web pages of the Computational Intelligence group
• Personal Homepage R. Kruse
• Scientific Publications (DBLP)
Rudolf Lipschitz
Rudolf Otto Sigismund Lipschitz (14 May 1832 – 7 October 1903) was a German mathematician who made contributions to mathematical analysis (where he gave his name to the Lipschitz continuity condition) and differential geometry, as well as number theory, algebras with involution and classical mechanics.
Rudolf Lipschitz
Rudolf Lipschitz
Born(1832-05-14)14 May 1832
Königsberg, Province of Prussia
Died7 October 1903(1903-10-07) (aged 71)
Bonn, German Empire
NationalityGerman
Alma materUniversity of Königsberg
Known forLipschitz continuity
Lipschitz integral condition
Lipschitz quaternion
Scientific career
FieldsMathematics
InstitutionsUniversity of Bonn
Doctoral advisorGustav Dirichlet
Martin Ohm
Doctoral studentsFelix Klein
Biography
Rudolf Lipschitz was born on 14 May 1832 in Königsberg. He was the son of a landowner and was raised at his father's estate at Bönkein, near Königsberg.[1] He entered the University of Königsberg when he was 15, but later moved to the University of Berlin, where he studied with Gustav Dirichlet. Despite having his studies delayed by illness, Lipschitz graduated with a PhD in Berlin in 1853.[2]
After receiving his PhD, Lipschitz began teaching at local Gymnasiums. In 1857 he married Ida Pascha, the daughter of one of the landowners with an estate near his father's,[1] and earned his habilitation at the University of Bonn, where he remained as a privatdozent. In 1862 Lipschitz became an extraordinary professor at the University of Breslau, where he spent the following two years. In 1864 Lipschitz moved back to Bonn as a full professor; he was the first Jewish full professor at the University of Bonn, and in 1869 he was appointed to Bonn's first chair of mathematics.[3] He remained there for the rest of his career, and there he supervised the dissertation of Felix Klein. Lipschitz died on 7 October 1903 in Bonn.[4]
Rediscovery of Clifford algebra
Lipschitz discovered Clifford algebras in 1880,[5][6] two years after William K. Clifford (1845–1879) and independently of him, and he was the first to use them in the study of orthogonal transformations. Up to 1950, people mentioned "Clifford–Lipschitz numbers" when they referred to this discovery of Lipschitz. Yet Lipschitz's name suddenly disappeared from the publications involving Clifford algebras; for instance Claude Chevalley (1909–1984)[7] gave the name "Clifford group" to an object that is never mentioned in Clifford's works, but stems from Lipschitz's. Pertti Lounesto (1945–2002) contributed greatly to recalling the importance of Lipschitz's role.[8][9]
Selected publications
• Lehrbuch der Analysis (two volumes, Bonn 1877, 1880);
• Wissenschaft und Staat (Bonn, 1874);
• Untersuchungen über die Summen von Quadraten (Bonn, 1886);
• Bedeutung der theoretischen Mechanik (Berlin, 1876).
See also
• Cauchy–Lipschitz theorem
• Lipschitz domain
• Lipschitz quaternion
• Lipschitz continuity
• Uniform, Hölder and Lipschitz continuity
• Lipschitz distance
• Lipschitz-continuous maps and contractions
• Concave moduli and Lipschitz approximation
• Dini–Lipschitz criterion
• Dini–Lipschitz test
References
1. "Rudolf Lipschitz - Biography".
2. McElroy, Tucker (2009). A to Z of Mathematicians. Infobase Publishing. p. 176. ISBN 978-1-438-10921-3.
3. Purkert, Walter (2012). Bonn. In: Bergmann, B., Epple, M., Ungar, R. (eds) Transcending Tradition. Springer. pp. 88–113. ISBN 978-3-642-22464-5.
4. Chang, Sooyoung (2011). Academic Genealogy of Mathematicians. World Scientific. p. 27. ISBN 978-9-814-28229-1.
5. R. Lipschitz (1880). "Principes d'un calcul algébrique qui contient comme espèces particulières le calcul des quantités imaginaires et des quaternions". C. R. Acad. Sci. Paris. 91: 619–621, 660–664.
6. R. Lipschitz (signed) (1959). "Correspondence". Ann. of Math. 69 (1): 247–251. doi:10.2307/1970102. JSTOR 1970102.
7. Chevalley, Claude (1997). The Algebraic Theory of Spinors and Clifford Algebras (Collected Works Vol. 2 ed.). Springer-Verlag. pp. 48, 113. ISBN 978-3-540-57063-9.
8. Lounesto, Pertti (1997). Clifford Algebras and Spinors. Cambridge University Press. p. 220. ISBN 978-0-521-59916-0.
9. Jacques Helmstetter, Artibano Micali: Quadratic Mappings and Clifford Algebras, Birkhäuser, 2008, ISBN 978-3-7643-8605-4 Introduction, p. ix ff.
External links
Media related to Rudolf Lipschitz at Wikimedia Commons
• O'Connor, John J.; Robertson, Edmund F., "Rudolf Lipschitz", MacTutor History of Mathematics Archive, University of St Andrews
• Rudolf Lipschitz at the Mathematics Genealogy Project
• H. Kortum. 1903 Obituary. pp. 56–59. Retrieved 16 July 2006. {{cite book}}: |work= ignored (help) (digitalized document, provided without fee by Göttingen Digitalization Project, in German)
Rudvalis group
In the area of modern algebra known as group theory, the Rudvalis group Ru is a sporadic simple group of order
2¹⁴ · 3³ · 5³ · 7 · 13 · 29
= 145926144000
≈ 1×10¹¹.
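The stated order can be recomputed directly from the prime factorization; a quick sanity check in Python:

```python
# Verify the order of the Rudvalis group from its prime factorization.
order = 2**14 * 3**3 * 5**3 * 7 * 13 * 29
assert order == 145926144000
print(f"{order:.1e}")  # prints 1.5e+11, i.e. on the order of 10^11
```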
History
Ru is one of the 26 sporadic groups and was found by Arunas Rudvalis (1973, 1984) and constructed by John H. Conway and David B. Wales (1973). Its Schur multiplier has order 2, and its outer automorphism group is trivial.
In 1982 Robert Griess showed that Ru cannot be a subquotient of the monster group.[1] Thus it is one of the 6 sporadic groups called the pariahs.
Properties
The Rudvalis group acts as a rank 3 permutation group on 4060 points, with one point stabilizer being the Ree group ²F₄(2), the automorphism group of the Tits group. This representation gives rise to a strongly regular graph srg(4060, 2304, 1328, 1280). That is, each vertex has 2304 neighbors and 1755 non-neighbors, any two adjacent vertices have 1328 common neighbors, while any two non-adjacent ones have 1280 (Griess 1998, p. 125).
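These parameters can be cross-checked with the standard feasibility identity for strongly regular graphs (the identity is textbook material, not from this article): counting edges between the neighborhood and non-neighborhood of a vertex gives k(k − λ − 1) = (v − k − 1)μ, which determines μ from the other three parameters.

```python
# Feasibility check for the strongly regular graph of the rank-3 action:
# in any srg(v, k, lam, mu), k*(k - lam - 1) = (v - k - 1)*mu.
v, k, lam = 4060, 2304, 1328
non_neighbors = v - k - 1
assert non_neighbors == 1755                     # non-neighbors per vertex
mu = k * (k - lam - 1) // non_neighbors          # solve the identity for mu
assert k * (k - lam - 1) == non_neighbors * mu   # the division is exact
print(mu)  # prints 1280
```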
Its double cover acts on a 28-dimensional lattice over the Gaussian integers. The lattice has 4×4060 minimal vectors; if minimal vectors are identified whenever one is 1, i, –1, or –i times another, then the 4060 equivalence classes can be identified with the points of the rank 3 permutation representation. Reducing this lattice modulo the principal ideal
$(1+i)\ $
gives an action of the Rudvalis group on a 28-dimensional vector space over the field $\mathbb {F} _{2}$ with 2 elements. Duncan (2006) used the 28-dimensional lattice to construct a vertex operator algebra acted on by the double cover.
Parrott (1976) characterized the Rudvalis group by the centralizer of a central involution. Aschbacher & Smith (2004) gave another characterization as part of their identification of the Rudvalis group as one of the quasithin groups.
Maximal subgroups
Wilson (1984) found the 15 conjugacy classes of maximal subgroups of Ru as follows:
• ²F₄(2) = ²F₄(2)′.2
• 2⁶.U₃(3).2
• (2² × Sz(8)):3
• 2³⁺⁸:L₃(2)
• U₃(5):2
• 2¹⁺⁴⁺⁶.S₅
• PSL₂(25).2²
• A₈
• PSL₂(29)
• 5²:4.S₅
• 3.A₆.2²
• 5¹⁺²:[2⁵]
• L₂(13):2
• A₆.2²
• 5:4 × A₅
References
1. Griess (1982)
• Aschbacher, Michael; Smith, Stephen D. (2004), The classification of quasithin groups. I Structure of Strongly Quasithin K-groups, Mathematical Surveys and Monographs, vol. 111, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3410-7, MR 2097623
• Conway, John H.; Wales, David B. (1973), "The construction of the Rudvalis simple group of order 145926144000", Journal of Algebra, 27 (3): 538–548, doi:10.1016/0021-8693(73)90063-X
• John F. Duncan (2008). "Moonshine for Rudvalis's sporadic group". arXiv:math/0609449v1.
• Griess, Robert L. (1982), "The Friendly Giant" (PDF), Inventiones Mathematicae, 69 (1): 1–102, Bibcode:1982InMat..69....1G, doi:10.1007/BF01389186, hdl:2027.42/46608
• Griess, Robert L. (1998), Twelve Sporadic Groups, Springer-Verlag
• Parrott, David (1976), "A characterization of the Rudvalis simple group", Proceedings of the London Mathematical Society, Third Series, 32 (1): 25–51, doi:10.1112/plms/s3-32.1.25, ISSN 0024-6115, MR 0390043
• Rudvalis, Arunas (1973), "A new simple group of order 2¹⁴3³5³7.13.29", Notices of the American Mathematical Society (20): A–95
• Rudvalis, Arunas (1984), "A rank 3 simple group of order 2¹⁴3³5³7.13.29. I", Journal of Algebra, 86 (1): 181–218, doi:10.1016/0021-8693(84)90063-2, ISSN 0021-8693, MR 0727376
• Rudvalis, Arunas (1984), "A rank 3 simple group G of order 2¹⁴3³5³7.13.29. II. Characters of G and Ĝ", Journal of Algebra, 86 (1): 219–258, doi:10.1016/0021-8693(84)90064-4, ISSN 0021-8693, MR 0727377
• Wilson, Robert A. (1984), "The geometry and maximal subgroups of the simple groups of A. Rudvalis and J. Tits", Proceedings of the London Mathematical Society, Third Series, 48 (3): 533–563, doi:10.1112/plms/s3-48.3.533, ISSN 0024-6115, MR 0735227
External links
• MathWorld: Rudvalis Group
• Atlas of Finite Group Representations: Rudvalis group
Rudy Horne
Rudy Lee Horne (1968 – 2017) was an American mathematician and professor of mathematics at Morehouse College. He worked on dynamical systems, including nonlinear waves. He was the mathematics consultant for the film Hidden Figures.[2]
Rudy Horne
Born
Rudy Lee Horne
EducationCrete-Monee High School
Alma materUniversity of Colorado Boulder (PhD)
University of Oklahoma
Known forNonlinear optics
Hidden Figures
Scientific career
InstitutionsMorehouse College
University of North Carolina at Chapel Hill
Florida State University
California State University, East Bay
ThesisCollision-Induced Timing Jitter and Four-Wave Mixing in Wavelength-Division Multiplexing Soliton Systems (2001)
Doctoral advisorMark J. Ablowitz[1]
Early life and education
Horne grew up in the south side of Chicago.[3] His father worked at Sherwin-Williams.[4] He graduated from Crete-Monee High School.[2][5] He completed a double degree in mathematics and physics at the University of Oklahoma in 1991.[6][3] He joined the University of Colorado Boulder for his postgraduate studies, earning a master's in physics in 1994 and in mathematics in 1996. He completed his doctorate, Collision induced timing jitter and four-wave mixing in wavelength division multiplexing soliton systems, in 2001 which was supervised by Mark J. Ablowitz.[1][7] He was the first African American to graduate from the University of Colorado Boulder Department of Applied Mathematics.[8]
Career and research
After completing his PhD, Horne held a position at California State University, East Bay,[2] before working as a postdoctoral researcher at the University of North Carolina at Chapel Hill with Chris Jones.[9] Horne joined Florida State University in 2005.[8][10] He moved to Morehouse College in 2010 and was promoted to associate professor of mathematics in 2015.[2] He continued to study four-wave mixing.[11] His work considered nonlinear optical phenomena.[9][12][13] He uncovered effects in parity-time symmetric systems.[14]
Horne was recommended to serve as a mathematics consultant for Hidden Figures by Morehouse College.[15][16] He worked closely with director Theodore Melfi and ensured the actors knew how to pronounce "Euler's".[2][17][18][19][20][21] He spent four months working with 20th Century Fox.[8] In particular, Horne worked with Taraji P. Henson on the mathematics she required for her role as Katherine Johnson.[22][23] He taught the cast how to get excited by mathematics.[24] His handwriting is on screen during a scene at the beginning of the film where Katherine Johnson solves a quadratic equation.[3] He appeared on the interview series In the Know.[25] Horne completed a Mathematical Association of America MathFest tour where he discussed the mathematics in Hidden Figures, focusing on the calculations behind John Glenn's orbit of the Earth in 1962.[26][27] He appeared on NPR's Closer Look.[28]
He died on December 11, 2017 after surgery for a torn aorta.[29][30][2] The University of Colorado Boulder established a Rudy Lee Horne Memorial Fellowship in his honour.[8][31] He was described as a "rock star", inspiring generations of black students.[32][22] He was awarded the National Association of Mathematicians (NAM) lifetime achievement award posthumously in 2018,[33] and was recognized by Mathematically Gifted & Black as a Black History Month 2018 Honoree.[4]
References
1. Rudy Horne at the Mathematics Genealogy Project
2. "Rudy L. Horne dies at 49; Chicago native checked the math in 'Hidden Figures'". Chicago Sun-Times. Retrieved 2018-09-09.
3. McCleland, Jacob. "OU Graduate Makes Sure "Hidden Figures" Math Adds Up". Retrieved 2018-09-09.
4. "Rudy L. Horne, Jr. - Mathematically Gifted & Black". Mathematically Gifted & Black. Retrieved 2018-09-09.
5. "Crete-Monee School District, IL - CMMS Celebrates Black History Month". www.cm201u.org. Retrieved 2018-09-09.
6. "Math, Movies and OU". www.ou.edu. Retrieved 2018-09-09.
7. "Rudy Horne, Jr. - Mathematician of the African Diaspora". www.math.buffalo.edu. Retrieved 2018-09-09.
8. "Honoring Dr. Rudy Horne". Applied Mathematics. 2017-12-21. Retrieved 2018-09-09.
9. Systems, Dynamical. "In Memoriam: Rudy L. Horne". Dynamical Systems. Retrieved 2018-09-09.
10. "Rudy Horne's Math Page". www.math.fsu.edu. Retrieved 2018-09-09.
11. Horne, Rudy L.; Jones, Christopher K. R. T.; Schäfer, Tobias (2008). "The Suppression of Four-Wave Mixing by Random Dispersion". SIAM Journal on Applied Mathematics. 69 (3): 690–703. doi:10.1137/070680539. JSTOR 40233639.
12. Horne, Rudy L. (2011-05-11). "Geometric methods and optical phenomena: Wave stability in certain optical devices" (PDF). Brown. Archived from the original (PDF) on 2018-09-10. Retrieved 2018-09-09.
13. "Joint Mathematics Meetings". jointmathematicsmeetings.org. Retrieved 2018-09-09.
14. "Rudy Horne's Passing | Department of Mathematics". math.unc.edu. Retrieved 2018-09-09.
15. Esser, Mark (2017-04-27). "Plotting a Path from NASA Grids to NIST Graphics". NIST. Retrieved 2018-09-09.
16. Miller, Gerri (2018-06-20). "Meet the people behind the film Hidden Figures". Science News for Students. Retrieved 2018-09-09.
17. Hunt, Fern (2017). "Hidden Figures" (PDF). AMS. Retrieved 2018-09-09.
18. "Exploring the Math in 'Hidden Figures'". Inside Science. 2017-02-24. Retrieved 2018-09-09.
19. "Being Counted: Professor Talks 'Hidden Figures' and Minority Women in Math | American University Washington D.C." American University. Retrieved 2018-09-09.
20. "Morehouse Magazine Special Anniversary Issue". Issuu. Retrieved 2018-09-09.
21. "Rudy L. Horne | BFI". www.bfi.org.uk. Retrieved 2018-09-09.
22. "This "Hidden Figures" Mathematician Inspired Generations Of Black Students". dose. 2018-01-26. Retrieved 2018-09-09.
23. "DO THE MATH - An Amazing True Story Plus A Dedicated Team Adds Up To "Hidden Figures" - Producers Guild of America". www.producersguild.org. Retrieved 2018-09-09.
24. "On 'Hidden Figures' Set, NASA's Early Years Take Center Stage". Space.com. Retrieved 2018-09-09.
25. In The Know (2017-08-14), In The Know: Meet Dr. Rudy Horne, retrieved 2018-09-09
26. "Invited Addresses | Mathematical Association of America". www.maa.org. Retrieved 2018-09-09.
27. "Hidden Figures: Bringing Math, Physics, History, and Race to Hollywood -Free Movie Screening, Colloquium & Reception | MCAIM". mcaim.math.lsa.umich.edu. Retrieved 2018-09-09.
28. "Closer Look: 'Hidden Figures'; Women In Hip-Hop; And More | 90.1 FM WABE". 90.1 FM WABE. Retrieved 2018-09-09.
29. College, Morehouse. "Morehouse College | House News". www.morehouse.edu. Retrieved 2018-09-09.
30. "In Memoriam | Mathematical Association of America". www.maa.org. Retrieved 2018-09-09.
31. "Rudy Lee Horne Endowed Graduate Fellowship in Applied Mathematics Fund | CU Boulder | Giving to CU". giving.cu.edu. Retrieved 2018-09-09.
32. "Rudy Horne: Math rock star remembered - US Black Engineer". US Black Engineer. Retrieved 2018-09-09.
33. "Lifetime Achievement Award". www.nam-math.org. Retrieved 2018-09-09.
Ruel Vance Churchill
Ruel Vance Churchill (12 December 1899, Akron, Indiana – 31 October 1987, Ann Arbor, Michigan) was an American mathematician known for writing three widely used textbooks on applied mathematics.[1]
In 1922 Churchill received his undergraduate degree from the University of Chicago. In 1929 he received his PhD from the University of Michigan under George Rainich with thesis On the Geometry of the Riemann Tensor.[2] He spent his entire career as a member of the U. of Michigan mathematics faculty and retired in 1965 as professor emeritus.[3] His doctoral students include Earl D. Rainville.
Books
• Complex Variables and Applications, McGraw-Hill, 1st edition 1948, 2nd edition 1960, The 3rd (1974) and later editions were co-authored with James Ward Brown
• Fourier Series and Boundary Value Problems, McGraw-Hill, 1941, 2nd edition 1963[4]
• Modern Operational Mathematics in Engineering, McGraw-Hill, 1944[5]
• Operational Mathematics, McGraw-Hill, 1958, 2nd edition of the 1944 book but with a new title, 3rd edition 1972
Selected articles
• Churchill, R. V. (1932). "On the geometry of the Riemann tensor". Trans. Amer. Math. Soc. 34 (1): 126–152. doi:10.1090/s0002-9947-1932-1501632-1. MR 1501632.
• Churchill, R. V. (1932). "Canonical forms for symmetric linear vector functions in pseudo-Euclidean space". Trans. Amer. Math. Soc. 34 (4): 784–794. doi:10.1090/s0002-9947-1932-1501663-1. MR 1501663.
• Churchill, R. V. (1942). "Expansions in series of non-orthogonal functions". Bull. Amer. Math. Soc. 48 (2): 143–149. doi:10.1090/s0002-9904-1942-07628-2. MR 0005940.
• with R. C. F. Bartels: Bartels, R. C. F.; Churchill, R. V. (1942). "Resolution of boundary problems by the use of a generalized convolution". Bull. Amer. Math. Soc. 48 (4): 276–282. doi:10.1090/s0002-9904-1942-07655-5. MR 0005994.
• with C. L. Dolph: Churchill, R. V.; Dolph, C. L. (1954). "Inverse transforms of Legendre transforms". Proc. Amer. Math. Soc. 5: 93–100. doi:10.1090/s0002-9939-1954-0062872-4. MR 0062872.
References
1. Ruel Vance Churchill, Faculty History Project, U. of Michigan, Ann Arbor
2. Ruel Vance Churchill at the Mathematics Genealogy Project
3. University of Michigan: Faculty Member Resources
4. Levinson, N. (1941). "Review: Fourier Series and Boundary Values Problems by R. V. Churchill" (PDF). Bull. Amer. Math. Soc. 47 (7): 538–539. doi:10.1090/s0002-9904-1941-07480-x.
5. Camp, Glen D. (October 1945). "Review: Modern Operational Methods in Engineering by R. V. Churchill". National Mathematics Magazine. 20 (1): 44–46. doi:10.2307/3029973. hdl:2027/mdp.39015000983000. JSTOR 3029973.
External links
Dr Ruel Vance Churchill at Find a Grave
Perron–Frobenius theorem
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude, that this eigenvalue is real, and that the corresponding eigenvector can be chosen to have strictly positive components; it also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem,[1] Hawkins–Simon condition[2]); to demography (Leslie population age distribution model);[3] to social networks (DeGroot learning process); to Internet search engines (PageRank);[4] and even to the ranking of football teams.[5] The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.[6][7]
Statement
Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers Ak as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of non-negative matrices.
Positive matrices
Let $A=(a_{ij})$ be an $n\times n$ positive matrix: $a_{ij}>0$ for $1\leq i,j\leq n$. Then the following statements hold.
1. There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value: |λ| < r. Thus, the spectral radius $\rho (A)$ is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number.
2. The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for AT, the transpose of A.)
3. There exists an eigenvector v = (v1,...,vn)T of A with eigenvalue r such that all components of v are positive: A v = r v, vi > 0 for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w : wT A = r wT, wi > 0.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron-Frobenius eigenvector, leading eigenvector, or dominant eigenvector.
4. There are no other positive (moreover non-negative) eigenvectors except positive multiples of v (respectively, left eigenvectors except w), i.e., all other eigenvectors must have at least one negative or non-real component.
5. $\lim _{k\rightarrow \infty }A^{k}/r^{k}=vw^{T}$, where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix v wT is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
6. Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue.
7. A "Min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of [Ax]i / xi taken over i. Then g is a real valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue.
8. Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then $r=\sup _{x>0}\inf _{y>0}{\frac {y^{\top }Ax}{y^{\top }x}}=\inf _{x>0}\sup _{y>0}{\frac {y^{\top }Ax}{y^{\top }x}}=\inf _{x>0}\sup _{y>0}\sum _{i,j=1}^{n}y_{i}a_{ij}x_{j}/\sum _{i=1}^{n}y_{i}x_{i}.$ [8]
9. Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then $r=\sup _{p}\inf _{x>0}\sum _{i=1}^{n}p_{i}[Ax]_{i}/x_{i}.$ [9][10]
10. Fiedler formula: $r=\sup _{z>0}\ \inf _{x>0,\ y>0,\ x\circ y=z}{\frac {y^{\top }Ax}{y^{\top }x}}=\sup _{z>0}\ \inf _{x>0,\ y>0,\ x\circ y=z}\sum _{i,j=1}^{n}y_{i}a_{ij}x_{j}/\sum _{i=1}^{n}y_{i}x_{i}.$[11]
11. The Perron–Frobenius eigenvalue satisfies the inequalities
$\min _{i}\sum _{j}a_{ij}\leq r\leq \max _{i}\sum _{j}a_{ij}.$
All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1-7 can be found in Meyer[12] chapter 8 claims 8.2.11–15 page 667 and exercises 8.2.5,7,9 pages 668–669.
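The row-sum bounds in fact 11 are easy to check numerically. The following sketch (NumPy; the 5×5 matrix is an arbitrary randomly generated positive example, not anything canonical) extracts the Perron root as the eigenvalue of maximal modulus and compares it with the extreme row sums:

```python
import numpy as np

# Arbitrary strictly positive matrix for illustration.
rng = np.random.default_rng(0)
A = rng.random((5, 5)) + 0.1

# For a positive matrix the eigenvalue of maximal modulus is real,
# positive and simple (facts 1-2): the Perron root.
eigvals = np.linalg.eigvals(A)
r = max(eigvals, key=abs).real

# Fact 11: min row sum <= r <= max row sum.
row_sums = A.sum(axis=1)
```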
The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while $w^{T}v=1$.
Non-negative matrices
There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues.[13][14] However, for the example $A=\left({\begin{smallmatrix}0&1\\1&0\end{smallmatrix}}\right)$, the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for $A=\left({\begin{smallmatrix}0&1\\0&0\end{smallmatrix}}\right)$, the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive.
However, Frobenius found a special subclass of non-negative matrices — irreducible matrices — for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form $\omega r$, where r is a real strictly positive eigenvalue, and $\omega $ ranges over the complex hth roots of 1 for some positive integer h called the period of the matrix. The eigenvector corresponding to r has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below.
Classification of matrices
Let A be an n × n square matrix over a field F. The matrix A is irreducible if any of the following equivalent properties holds.
Definition 1: A does not have non-trivial invariant coordinate subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of Fn. More explicitly, for any linear subspace spanned by standard basis vectors ei1, ..., eik with 0 < k < n, its image under the action of A is not contained in the same subspace.
Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P:
$PAP^{-1}\neq {\begin{pmatrix}E&F\\O&G\end{pmatrix}},$
where E and G are non-trivial (i.e. of size greater than zero) square matrices.
Definition 3: One can associate with a matrix A a certain directed graph GA. It has n vertices labeled 1,...,n, and there is an edge from vertex i to vertex j precisely when aij ≠ 0. Then the matrix A is irreducible if and only if its associated graph GA is strongly connected.
If F is the field of real or complex numbers, then we also have the following condition.
Definition 4: The group representation of $(\mathbb {R} ,+)$ on $\mathbb {R} ^{n}$ or $(\mathbb {C} ,+)$ on $\mathbb {C} ^{n}$ given by $t\mapsto \exp(tA)$ has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.)
A matrix is reducible if it is not irreducible.
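Definition 3 translates into a small numerical test. The sketch below (NumPy) uses the standard equivalence that a non-negative matrix is irreducible exactly when $(I+A)^{n-1}$ has no zero entry (see property 1 under "Further properties" below); applying it to $|A|$ makes it work for matrices of either sign. This is one possible implementation, not the only one:

```python
import numpy as np

def is_irreducible(A):
    """Definition 3 in matrix form: A is irreducible iff its directed graph
    (edge i -> j whenever a_ij != 0) is strongly connected, which holds
    iff (I + |A|)^(n-1) has no zero entry."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (np.abs(A) > 0), n - 1)
    return bool((M > 0).all())

A = np.array([[0, 1], [1, 0]])   # 2-cycle: strongly connected
B = np.array([[0, 1], [0, 0]])   # no path from vertex 2 back to vertex 1
```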
A real matrix A is primitive if it is non-negative and its mth power is positive for some natural number m (i.e. all entries of Am are positive).
Let A be real and non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (Am)ii > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in GA (see Kitchens[15] page 16). The period is also called the index of imprimitivity (Meyer[12] page 674) or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices.
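The gcd definition of the period can be computed directly. A minimal sketch in NumPy, assuming A is irreducible and checking exponents up to n² (enough for small examples like these):

```python
import math
import numpy as np

def period(A):
    """Period of an irreducible non-negative matrix A: the gcd of all m
    with (A^m)_00 > 0.  For irreducible A every index yields the same gcd."""
    n = A.shape[0]
    g, M = 0, np.eye(n)
    for m in range(1, n * n + 1):
        M = M @ A
        if M[0, 0] > 0:
            g = math.gcd(g, m)
    return g

C2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # 2-cycle: period 2
P2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # positive, hence aperiodic
```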
All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period.
Results for non-negative matrices were first obtained by Frobenius in 1912.
Perron–Frobenius theorem for irreducible non-negative matrices
Let $A$ be an irreducible non-negative $N\times N$ matrix with period $h$ and spectral radius $\rho (A)=r$. Then the following statements hold.
• The number $r\in \mathbb {R} ^{+}$ is a positive real number and it is an eigenvalue of the matrix $A$. It is called the Perron–Frobenius eigenvalue.
• The Perron–Frobenius eigenvalue $r$ is simple. Both right and left eigenspaces associated with $r$ are one-dimensional.
• $A$ has a right eigenvector $\mathbf {v} $ and a left eigenvector $\mathbf {w} $ with eigenvalue $r$ whose components are all positive. Moreover, the only eigenvectors whose components are all positive are those associated with the eigenvalue $r$.
• The matrix $A$ has exactly $h$ (where $h$ is the period) complex eigenvalues with absolute value $r$. Each of them is a simple root of the characteristic polynomial and is the product of $r$ with an $h$th root of unity.
• Let $\omega =2\pi /h$. Then the matrix $A$ is similar to $e^{i\omega }A$; consequently the spectrum of $A$ is invariant under multiplication by $e^{i\omega }$ (i.e. under rotation of the complex plane by the angle $\omega $).
• If $h>1$ then there exists a permutation matrix $P$ such that
$PAP^{-1}={\begin{pmatrix}O&A_{1}&O&O&\ldots &O\\O&O&A_{2}&O&\ldots &O\\\vdots &\vdots &\vdots &\vdots &&\vdots \\O&O&O&O&\ldots &A_{h-1}\\A_{h}&O&O&O&\ldots &O\end{pmatrix}},$
where $O$ denotes a zero matrix and the blocks along the main diagonal are square matrices.
• Collatz–Wielandt formula: for all non-negative non-zero vectors $\mathbf {x} $ let $f(\mathbf {x} )$ be the minimum value of $[A\mathbf {x} ]_{i}/x_{i}$ taken over all those $i$ such that $x_{i}\neq 0$. Then $f$ is a real valued function whose maximum is the Perron–Frobenius eigenvalue.
• The Perron–Frobenius eigenvalue satisfies the inequalities
$\min _{i}\sum _{j}a_{ij}\leq r\leq \max _{i}\sum _{j}a_{ij}.$
The example $A=\left({\begin{smallmatrix}0&0&1\\0&0&1\\1&1&0\end{smallmatrix}}\right)$ shows that the (square) zero-matrices along the diagonal may be of different sizes, the blocks Aj need not be square, and h need not divide n.
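For the 3×3 example above (irreducible with period h = 2), the peripheral structure of the spectrum can be checked numerically; a short NumPy sketch:

```python
import numpy as np

# The matrix from the example above: irreducible with period h = 2.
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 1, 0]], dtype=float)

ev = np.linalg.eigvals(A)
r = max(abs(ev))                 # spectral radius: sqrt(2)
# Exactly h = 2 eigenvalues have modulus r, namely r and -r
# (r times the second roots of unity), as the theorem predicts.
peripheral = [z for z in ev if abs(abs(z) - r) < 1e-9]
```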
Further properties
Let A be an irreducible non-negative matrix, then:
1. $(I+A)^{n-1}$ is a positive matrix (Meyer[12] claim 8.3.5 p. 672). For a non-negative A, positivity of $(I+A)^{n-1}$ is also sufficient for irreducibility.[16]
2. Wielandt's theorem.[17] If |B| < A, then ρ(B) ≤ ρ(A). If equality holds (i.e. if $\mu =\rho (A)e^{i\varphi }$ is an eigenvalue of B), then $B=e^{i\varphi }DAD^{-1}$ for some diagonal unitary matrix D (i.e. the diagonal elements of D are equal to $e^{i\Theta _{l}}$ and the off-diagonal elements are zero).[18]
3. If some power $A^{q}$ is reducible, then it is completely reducible, i.e. for some permutation matrix P, it is true that: $PA^{q}P^{-1}={\begin{pmatrix}A_{1}&O&O&\dots &O\\O&A_{2}&O&\dots &O\\\vdots &\vdots &\vdots &&\vdots \\O&O&O&\dots &A_{d}\\\end{pmatrix}}$, where the Ai are irreducible matrices having the same maximal eigenvalue. The number d of these matrices is the greatest common divisor of q and h, where h is the period of A.[19]
4. If $c(x)=x^{n}+c_{k_{1}}x^{n-k_{1}}+c_{k_{2}}x^{n-k_{2}}+\dots +c_{k_{s}}x^{n-k_{s}}$ is the characteristic polynomial of A in which only the non-zero terms are listed, then the period of A equals the greatest common divisor of $k_{1},k_{2},\dots ,k_{s}$.[20]
5. Cesàro averages: $\lim _{k\rightarrow \infty }1/k\sum _{i=0,...,k}A^{i}/r^{i}=(vw^{T}),$ where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix v wT is the spectral projection corresponding to r, the Perron projection.[21]
6. Let r be the Perron–Frobenius eigenvalue, then the adjoint matrix for (r-A) is positive.[22]
7. If A has at least one non-zero diagonal element, then A is primitive.[23]
8. If 0 ≤ A < B, then rA ≤ rB. Moreover, if B is irreducible, then the inequality is strict: rA < rB.
A matrix A is primitive provided it is non-negative and $A^{m}$ is positive for some m, and hence $A^{k}$ is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A:[24]
• If A is a non-negative primitive matrix of size n, then $A^{n^{2}-2n+2}$ is positive. Moreover, this is the best possible result, since for the matrix M below, the power $M^{k}$ is not positive for every $k<n^{2}-2n+2$, since $(M^{n^{2}-2n+1})_{11}=0$.
$M=\left({\begin{smallmatrix}0&1&0&0&\cdots &0\\0&0&1&0&\cdots &0\\0&0&0&1&\cdots &0\\\vdots &\vdots &\vdots &\vdots &&\vdots \\0&0&0&0&\cdots &1\\1&1&0&0&\cdots &0\end{smallmatrix}}\right)$
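Wielandt's bound and the sharpness example can be verified numerically for a small n; a sketch with n = 4, for which the bound n² − 2n + 2 equals 10:

```python
import numpy as np

n = 4
M = np.zeros((n, n), dtype=np.int64)
M[np.arange(n - 1), np.arange(1, n)] = 1   # superdiagonal of ones
M[-1, 0] = M[-1, 1] = 1                    # last row: 1 1 0 ... 0

k = n * n - 2 * n + 2                      # Wielandt's bound: 10 for n = 4
Mk = np.linalg.matrix_power(M, k)          # should be entrywise positive
Mk_prev = np.linalg.matrix_power(M, k - 1) # still has a zero at (1, 1)
```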
Applications
Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The following examples given below only scratch the surface of its vast application domain.
Non-negative matrices
The Perron–Frobenius theorem does not apply directly to general non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix)[25]
PAP−1 = $\left({\begin{smallmatrix}B_{1}&*&*&\cdots &*\\0&B_{2}&*&\cdots &*\\\vdots &\vdots &\vdots &&\vdots \\0&0&0&\cdots &*\\0&0&0&\cdots &B_{h}\end{smallmatrix}}\right)$
where P is a permutation matrix and each Bi is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of PAP−1, moreover the spectrum of A is just the union of the spectra of the Bi.
The invertibility of A can also be studied. The inverse of PAP−1 (if it exists) must have diagonal blocks of the form Bi−1 so if any Bi isn't invertible then neither is PAP−1 or A. Conversely let D be the block-diagonal matrix corresponding to PAP−1, in other words PAP−1 with the asterisks zeroised. If each Bi is invertible then so is D and D−1(PAP−1) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if Nk = 0 the inverse of 1 − N is 1 + N + N2 + ... + Nk−1) so PAP−1 and A are both invertible.
Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible Bi. For example, the Perron root is the maximum of the ρ(Bi). While there will still be eigenvectors with non-negative components it is quite possible that none of these will be positive.
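A small numerical illustration of the block reduction (NumPy; the blocks chosen here are arbitrary examples): the spectrum of the block upper-triangular matrix is the union of the block spectra, so the Perron root is the maximum of the block spectral radii.

```python
import numpy as np

# A reducible matrix already in block upper-triangular normal form.
B1 = np.array([[0.0, 2.0],
               [2.0, 0.0]])               # irreducible block, rho(B1) = 2
B2 = np.array([[3.0]])                    # irreducible block, rho(B2) = 3
star = np.array([[1.0], [1.0]])           # the off-diagonal "*" block
A = np.block([[B1, star],
              [np.zeros((1, 2)), B2]])

# Perron root of A = max(rho(B1), rho(B2)) = 3.
r = max(abs(np.linalg.eigvals(A)))
```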
Stochastic matrices
A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible.
If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.
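A minimal numerical illustration with a hypothetical 2-state transition matrix (irreducible and aperiodic, so the powers converge to the Perron projection, whose identical rows give the stationary distribution):

```python
import numpy as np

# A row-stochastic, irreducible, aperiodic transition matrix.
A = np.array([[0.5, 0.5],
              [0.2, 0.8]])

ones = np.ones(2)
# The all-ones vector is a right eigenvector for eigenvalue 1 = rho(A).
# High powers approximate the Perron projection; its rows all equal
# the stationary distribution pi, which satisfies pi A = pi.
P = np.linalg.matrix_power(A, 200)
pi = P[0]
```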
Algebraic graph theory
The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative n-square matrix is the graph with vertices numbered 1, ..., n and arc ij if and only if Aij ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible.[26][27]
Finite Markov chains
The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).
Compact operators
Main article: Krein–Rutman theorem
More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.[28]
Proof methods
A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950); he used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work.[29] Yet another proof is based on spectral theory,[30] from which part of the arguments below are borrowed.
Perron root is strictly maximal eigenvalue for positive (and primitive) matrices
If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues, hence r is the spectral radius of A.
This statement does not hold for general non-negative irreducible matrices, which have h eigenvalues with the same absolute value as r, where h is the period of A.
Proof for positive matrices
Let A be a positive matrix and assume that its spectral radius is ρ(A) = 1 (otherwise consider A/ρ(A)). Hence there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less than or equal to 1 in absolute value. Suppose that some eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer m such that $A^{m}$ is a positive matrix and the real part of $\lambda ^{m}$ is negative. Let ε be half the smallest diagonal entry of $A^{m}$ and set $T=A^{m}-\varepsilon I$, which is yet another positive matrix. Moreover, if Ax = λx then $A^{m}x=\lambda ^{m}x$, thus $\lambda ^{m}-\varepsilon $ is an eigenvalue of T. Because of the choice of m this point lies outside the unit disk; consequently ρ(T) > 1. On the other hand, all the entries in T are positive and less than or equal to those in $A^{m}$, so by Gelfand's formula $\rho (T)\leq \rho (A^{m})\leq \rho (A)^{m}=1$. This contradiction means that λ = 1 and there can be no other eigenvalues on the unit circle.
Exactly the same argument applies to the case of primitive matrices, once we invoke the following simple lemma, which clarifies the properties of primitive matrices.
Lemma
Given a non-negative A, assume there exists m such that $A^{m}$ is positive; then $A^{m+1},A^{m+2},A^{m+3},\dots $ are all positive.
Indeed, $A^{m+1}=AA^{m}$, so it can have a zero element only if some row of A is entirely zero, but in that case the same row of $A^{m}$ would be zero, contradicting positivity.
Applying the same argument as above to primitive matrices proves the main claim.
Power method and the positive eigenpair
For a positive (or more generally irreducible non-negative) matrix A the dominant eigenvector is real and strictly positive (for a merely non-negative A it is, respectively, non-negative).
This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix A the sequence of vectors $b_{k+1}=Ab_{k}/|Ab_{k}|$ converges to the eigenvector with the maximum eigenvalue. (The initial vector $b_{0}$ can be chosen arbitrarily outside some measure-zero set.) Starting with a non-negative vector $b_{0}$ produces a sequence of non-negative vectors $b_{k}$, hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for A, proving the assertion. The corresponding eigenvalue is non-negative.
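The iteration just described can be sketched as follows (NumPy; the matrix and the iteration count are arbitrary illustrative choices):

```python
import numpy as np

def power_method(A, iters=200):
    """Power iteration b_{k+1} = A b_k / |A b_k|, started from a positive
    vector so that every iterate stays non-negative."""
    b = np.ones(A.shape[0])
    for _ in range(iters):
        b = A @ b
        b /= np.linalg.norm(b)
    return b, b @ A @ b          # eigenvector and its Rayleigh quotient

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # a positive matrix
v, r = power_method(A)           # v is strictly positive, A v = r v
```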
The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this.
Second, one must ensure strict positivity of all of the components of the eigenvector in the case of irreducible matrices. This follows from the following fact, which is of independent interest:
Lemma: given a positive (or more generally irreducible non-negative) matrix A and a non-negative eigenvector v for A, the vector v is necessarily strictly positive and the corresponding eigenvalue is also strictly positive.
Proof. One of the definitions of irreducibility for non-negative matrices is that for all indices i, j there exists m such that $(A^{m})_{ij}$ is strictly positive. Given a non-negative eigenvector v, at least one of its components, say the j-th, is strictly positive. The corresponding eigenvalue is then strictly positive: indeed, given n such that $(A^{n})_{jj}>0$, one has $r^{n}v_{j}=(A^{n}v)_{j}\geq (A^{n})_{jj}v_{j}>0$, hence r is strictly positive. Next, for any index i, choose m such that $(A^{m})_{ij}>0$; then $r^{m}v_{i}=(A^{m}v)_{i}\geq (A^{m})_{ij}v_{j}>0$, hence $v_{i}$ is strictly positive, i.e. the eigenvector is strictly positive.
Multiplicity one
This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer.[12]
Let v be a strictly positive eigenvector corresponding to r, and let w be another eigenvector with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − r has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero (otherwise α would not be maximal). If u were non-zero, it would be a non-negative eigenvector, and by the lemma of the previous section it would be strictly positive, contradicting the fact that one of its components is zero. Hence u = 0, i.e. w is a multiple of v, and the eigenspace is one-dimensional.
Case: There are no Jordan cells corresponding to the Perron–Frobenius eigenvalue r or to the other eigenvalues with the same absolute value.
If there were a Jordan cell, then the infinity norm $\|(A/r)^{k}\|_{\infty }$ would tend to infinity for k → ∞, but that contradicts the existence of the positive eigenvector.
Assume r = 1 (otherwise replace A by A/r). Letting v be a Perron–Frobenius strictly positive eigenvector, so Av = v, then:
$\|v\|_{\infty }=\|A^{k}v\|_{\infty }\geq \|A^{k}\|_{\infty }\min _{i}(v_{i}),~~\Rightarrow ~~\|A^{k}\|_{\infty }\leq \|v\|_{\infty }/\min _{i}(v_{i}).$ So $\|A^{k}\|_{\infty }$ is bounded for all k. This gives another proof that there are no eigenvalues which have greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan cell for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of the Jordan cell implies that $\|A^{k}\|_{\infty }$ is unbounded. For a two by two matrix:
$J^{k}={\begin{pmatrix}\lambda &1\\0&\lambda \end{pmatrix}}^{k}={\begin{pmatrix}\lambda ^{k}&k\lambda ^{k-1}\\0&\lambda ^{k}\end{pmatrix}},$
hence $\|J^{k}\|_{\infty }=1+k$ (for |λ| = 1), so it tends to infinity as k does. Since $J^{k}=C^{-1}A^{k}C$, we have $\|A^{k}\|\geq \|J^{k}\|/(\|C^{-1}\|\,\|C\|)$, so it also tends to infinity. The resulting contradiction implies that there are no Jordan cells for the corresponding eigenvalues.
Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of nonprimitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work.
No other non-negative eigenvectors
Given positive (or more generally irreducible non-negative matrix) A, the Perron–Frobenius eigenvector is the only (up to multiplication by constant) non-negative eigenvector for A.
Other eigenvectors must contain negative or complex components: eigenvectors for different eigenvalues are orthogonal in some sense, and two positive eigenvectors cannot be orthogonal, so they would have to correspond to the same eigenvalue; but the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional.
Assume there exists an eigenpair (λ, y) for A such that the vector y is positive, and let (r, x) be the pair where x is the left Perron–Frobenius eigenvector for A (i.e. an eigenvector for AT). Then $rx^{T}y=(x^{T}A)y=x^{T}(Ay)=\lambda x^{T}y$; since $x^{T}y>0$, one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one.[31]
Collatz–Wielandt formula
Given a positive (or more generally irreducible non-negative matrix) A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real-valued function, whose maximum is the Perron–Frobenius eigenvalue r.
For the proof we denote the maximum of f by R; the proof requires showing R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, consider an arbitrary non-negative non-zero vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now take the positive left eigenvector w for A associated with the Perron–Frobenius eigenvalue r, i.e. $w^{T}A=rw^{T}$; then $\xi w^{T}x=w^{T}(\xi x)\leq w^{T}(Ax)=(w^{T}A)x=rw^{T}x$. Hence f(x) = ξ ≤ r, which implies R ≤ r.[32]
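The Collatz–Wielandt function and the inequality f(x) ≤ r can be checked numerically; a sketch with a hypothetical 2×2 positive matrix:

```python
import numpy as np

def f(A, x):
    """Collatz-Wielandt function: min over {i : x_i != 0} of (Ax)_i / x_i."""
    mask = x != 0
    return ((A @ x)[mask] / x[mask]).min()

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                 # positive matrix
r = max(np.linalg.eigvals(A).real)         # Perron root: (5 + sqrt(5))/2

# f is maximized at the Perron eigenvector, where it equals r.
w, V = np.linalg.eig(A)
v = np.abs(V[:, np.argmax(w.real)])        # positive Perron eigenvector
```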
Perron projection as a limit: $A^{k}/r^{k}$
Let A be a positive (or more generally, primitive) matrix, and let r be its Perron–Frobenius eigenvalue.
1. There exists a limit $A^{k}/r^{k}$ for k → ∞; denote it by P.
2. P is a projection operator: P2 = P, which commutes with A: AP = PA.
3. The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for PT—by the Perron–Frobenius eigenvector w for AT).
4. P = vwT, where v,w are normalized such that wT v = 1.
5. Hence P is a positive operator.
Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices.
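Claims 1–4 can be observed numerically for a small positive matrix; an illustrative sketch (the matrix is an arbitrary choice with eigenvalues 3 and −1, so r = 3):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                 # positive; eigenvalues 3 and -1
r = 3.0

# Claim 1: (A/r)^k converges; the limit P is the Perron projection.
# Here v = w = (1, 1)^T with w^T v = 1 after scaling, so P = v w^T
# has every entry equal to 1/2.
P = np.linalg.matrix_power(A / r, 100)
```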
Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is the simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above).
Given that M is diagonalizable, M is conjugate to a diagonal matrix with eigenvalues $r_{1},\dots ,r_{n}$ on the diagonal (denote $r_{1}=r$). The matrix $M^{k}/r^{k}$ will be conjugate to the diagonal matrix with entries $1,(r_{2}/r)^{k},\dots ,(r_{n}/r)^{k}$, which tends to the diagonal matrix $\operatorname {diag} (1,0,\dots ,0)$ for k → ∞, so the limit exists. The same method works for a general M (without assuming that M is diagonalizable).
The projection and commutativity properties are elementary corollaries of the definition: $MM^{k}/r^{k}=M^{k}/r^{k}\,M$; $P^{2}=\lim M^{2k}/r^{2k}=P$. The third fact is also elementary: $M(Pu)=M\lim M^{k}/r^{k}u=\lim rM^{k+1}/r^{k+1}u$, so taking the limit yields M(Pu) = r(Pu); hence the image of P lies in the r-eigenspace of M, which is one-dimensional by the assumptions.
Denote by v the r-eigenvector for M (and by w the one for MT). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of $w^{T}$. So P takes the form $avw^{T}$ for some a, and its trace equals $aw^{T}v$. The trace of a projector equals the dimension of its image; it was proved above that the image is not more than one-dimensional, and since P acts identically on the r-eigenvector for M, it is exactly one-dimensional. Choosing the normalization $w^{T}v=1$ thus implies $P=vw^{T}$.
Inequalities for Perron–Frobenius eigenvalue
For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality:
$r\;\leq \;\max _{i}\sum _{j}a_{ij}.$
This is not specific to non-negative matrices: for any matrix A with an eigenvalue $\scriptstyle \lambda $ it is true that $\scriptstyle |\lambda |\;\leq \;\max _{i}\sum _{j}|a_{ij}|$. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct:
Any matrix induced norm satisfies the inequality $\scriptstyle \|A\|\geq |\lambda |$ for any eigenvalue $\scriptstyle \lambda $ because, if $\scriptstyle x$ is a corresponding eigenvector, $\scriptstyle \|A\|\geq |Ax|/|x|=|\lambda x|/|x|=|\lambda |$. The infinity norm of a matrix is the maximum of row sums: $\scriptstyle \left\|A\right\|_{\infty }=\max \limits _{1\leq i\leq m}\sum _{j=1}^{n}|a_{ij}|.$ Hence the desired inequality is exactly $\scriptstyle \|A\|_{\infty }\geq |\lambda |$ applied to the non-negative matrix A.
Another inequality is:
$\min _{i}\sum _{j}a_{ij}\;\leq \;r.$
This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), then there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say wi) is 1. Then r = (Aw)i ≥ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r and this observation can be extended to all non-negative matrices by continuity.
Another way to argue it is via the Collatz-Wielandt formula. One takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality.
Perron projection
The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property:
The Perron projection of an irreducible non-negative square matrix is a positive matrix.
Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if P is its Perron projection then AP = PA = ρ(A)P so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1 then it can be decomposed as P ⊕ (1 − P)A so that $A^{n}=P+(1-P)A^{n}$. As n increases the second of these terms decays to zero leaving P as the limit of $A^{n}$ as n → ∞.
The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality which is what facilitates the decomposition.
Peripheral projection
The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.
Cyclicity
Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, $R^{h}=P$, and the cyclic group P, R, $R^{2}$, ..., $R^{h-1}$ represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula $\scriptstyle h^{-1}\sum _{1}^{h}\lambda ^{-k}R^{k}$. All of these projections (including the Perron projection) have the same positive diagonal, moreover choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A so the difference between $A^{n}$ and $R^{n}$ is $A^{n}-R^{n}=(1-P)A^{n}$, representing the transients of $A^{n}$ which eventually decay to zero. P may be computed as the limit of $A^{nh}$ as n → ∞.
Counterexamples
The matrices L = $\left({\begin{smallmatrix}1&0&0\\1&0&0\\1&1&1\end{smallmatrix}}\right)$, P = $\left({\begin{smallmatrix}1&0&0\\1&0&0\\\!\!\!-1&1&1\end{smallmatrix}}\right)$, T = $\left({\begin{smallmatrix}0&1&1\\1&0&1\\1&1&0\end{smallmatrix}}\right)$, M = $\left({\begin{smallmatrix}0&1&0&0&0\\1&0&0&0&0\\0&0&0&1&0\\0&0&0&0&1\\0&0&1&0&0\end{smallmatrix}}\right)$ provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P; thus when the original matrix is reducible the projections may lose non-negativity, and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive, but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth. If ω = $e^{i\pi /3}$ then $\omega ^{6}=1$ and the eigenvalues of M are {1, $\omega ^{2}$, $\omega ^{3}$ = −1, $\omega ^{4}$} with a dimension-2 eigenspace for +1, so ω and $\omega ^{5}$ are both absent. More precisely, since M is block-diagonal cyclic, the eigenvalues are {1, −1} for the first block and {1, $\omega ^{2}$, $\omega ^{4}$} for the lower one.
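The spectrum of the matrix M above can be verified numerically; all its eigenvalues lie on the unit circle, +1 occurs with multiplicity two, and $e^{i\pi /3}$ is absent:

```python
import numpy as np

# The block-diagonal cyclic matrix M from the text (2-cycle plus 3-cycle).
M = np.array([[0, 1, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 1, 0, 0]], dtype=float)

ev = np.linalg.eigvals(M)
```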
Terminology
A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For avoidance of doubt a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous.[33]
The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector.
Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents. The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.
See also
• Min-max theorem
• Z-matrix (mathematics)
• M-matrix
• P-matrix
• Hurwitz matrix
• Metzler matrix (Quasipositive matrix)
• Positive operator
• Krein–Rutman theorem
Notes
1. Bowles, Samuel (1981-06-01). "Technical change and the profit rate: a simple proof of the Okishio theorem". Cambridge Journal of Economics. 5 (2): 183–186. doi:10.1093/oxfordjournals.cje.a035479. ISSN 0309-166X.
2. Meyer 2000, pp. 8.3.6 p. 681 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
3. Meyer 2000, pp. 8.3.7 p. 683 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
4. Langville & Meyer 2006, section 15.2, p. 167. Langville, Amy N.; Meyer, Carl D. (2006-07-23). Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press. ISBN 978-0691122021. Archived from the original on July 10, 2014. Retrieved 2016-10-31.
5. Keener 1993, p. 80
6. Landau, Edmund (1895), "Zur relativen Wertbemessung der Turnierresultaten", Deutsches Wochenschach, XI: 366–369
7. Landau, Edmund (1915), "Über Preisverteilung bei Spielturnieren", Zeitschrift für Mathematik und Physik, 63: 192–202
8. Birkhoff, Garrett and Varga, Richard S., 1958. Reactor criticality and nonnegative matrices. Journal of the Society for Industrial and Applied Mathematics, 6(4), pp.354-377.
9. Donsker, M.D. and Varadhan, S.S., 1975. On a variational formula for the principal eigenvalue for operators with maximum principle. Proceedings of the National Academy of Sciences, 72(3), pp.780-783.
10. Friedland, S., 1981. Convex spectral functions. Linear and multilinear algebra, 9(4), pp.299-316.
11. Miroslav Fiedler; Charles R. Johnson; Thomas L. Markham; Michael Neumann (1985). "A Trace Inequality for M-matrices and the Symmetrizability of a Real Matrix by a Positive Diagonal Matrix". Linear Algebra and Its Applications. 71: 81–94. doi:10.1016/0024-3795(85)90237-X.
12. Meyer 2000, pp. chapter 8 page 665 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
13. Meyer 2000, pp. chapter 8.3 page 670. "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
14. Gantmacher 2000, p. chapter XIII.3 theorem 3 page 66
15. Kitchens, Bruce (1998), Symbolic dynamics: one-sided, two-sided and countable state markov shifts., Springer, ISBN 9783540627388
16. Minc, Henryk (1988). Nonnegative matrices. New York: John Wiley & Sons. p. 6 [Corollary 2.2]. ISBN 0-471-83966-3.
17. Gradshtein, Izrailʹ Solomonovich (18 September 2014). Table of integrals, series, and products. Elsevier. ISBN 978-0-12-384934-2. OCLC 922964628.
18. Meyer 2000, pp. claim 8.3.11 p. 675 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
19. Gantmacher 2000, p. section XIII.5 theorem 9
20. Meyer 2000, pp. page 679 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
21. Meyer 2000, pp. example 8.3.2 p. 677 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
22. Gantmacher 2000, p. section XIII.2.2 page 62
23. Meyer 2000, pp. example 8.3.3 p. 678 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
24. Meyer 2000, pp. chapter 8 example 8.3.4 page 679 and exercise 8.3.9 p. 685 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
25. Varga 2002, p. 2.43 (page 51)
26. Brualdi, Richard A.; Ryser, Herbert J. (1992). Combinatorial Matrix Theory. Cambridge: Cambridge UP. ISBN 978-0-521-32265-2.
27. Brualdi, Richard A.; Cvetkovic, Dragos (2009). A Combinatorial Approach to Matrix Theory and Its Applications. Boca Raton, FL: CRC Press. ISBN 978-1-4200-8223-4.
28. Mackey, Michael C. (1992). Time's Arrow: The origins of thermodynamic behaviour. New York: Springer-Verlag. ISBN 978-0-387-97702-7.
29. Gantmacher 2000, p. section XIII.2.2 page 54
30. Smith, Roger (2006), "A Spectral Theoretic Proof of Perron–Frobenius" (PDF), Mathematical Proceedings of the Royal Irish Academy, 102 (1): 29–35, doi:10.3318/PRIA.2002.102.1.29
31. Meyer 2000, pp. chapter 8 claim 8.2.10 page 666 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
32. Meyer 2000, pp. chapter 8 page 666 "Archived copy" (PDF). Archived from the original (PDF) on March 7, 2010. Retrieved 2010-03-07.
33. For surveys of results on irreducibility, see Olga Taussky-Todd and Richard A. Brualdi.
References
• Perron, Oskar (1907), "Zur Theorie der Matrices", Mathematische Annalen, 64 (2): 248–263, doi:10.1007/BF01449896, hdl:10338.dmlcz/104432, S2CID 123460172
• Frobenius, Georg (May 1912), "Ueber Matrizen aus nicht negativen Elementen", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften: 456–477
• Frobenius, Georg (1908), "Über Matrizen aus positiven Elementen, 1", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften: 471–476
• Frobenius, Georg (1909), "Über Matrizen aus positiven Elementen, 2", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften: 514–518
• Gantmacher, Felix (2000) [1959], The Theory of Matrices, Volume 2, AMS Chelsea Publishing, ISBN 978-0-8218-2664-5 (1959 edition had different title: "Applications of the theory of matrices". Also the numeration of chapters is different in the two editions.)
• Langville, Amy; Meyer, Carl (2006), Google's PageRank and Beyond, Princeton University Press, doi:10.1007/s10791-008-9063-y, ISBN 978-0-691-12202-1, S2CID 7646929
• Keener, James (1993), "The Perron–Frobenius theorem and the ranking of football teams", SIAM Review, 35 (1): 80–93, doi:10.1137/1035004, JSTOR 2132526
• Meyer, Carl (2000), Matrix analysis and applied linear algebra (PDF), SIAM, ISBN 978-0-89871-454-8, archived from the original (PDF) on 2010-03-07
• Minc, Henryk (1988), Nonnegative matrices, John Wiley & Sons, New York, ISBN 0-471-83966-3
• Romanovsky, V. (1933), "Sur les zéros des matrices stocastiques", Bulletin de la Société Mathématique de France, 61: 213–219, doi:10.24033/bsmf.1206
• Collatz, Lothar (1942), "Einschließungssatz für die charakteristischen Zahlen von Matrizen", Mathematische Zeitschrift, 48 (1): 221–226, doi:10.1007/BF01180013, S2CID 120958677
• Wielandt, Helmut (1950), "Unzerlegbare, nicht negative Matrizen", Mathematische Zeitschrift, 52 (1): 642–648, doi:10.1007/BF02230720, hdl:10338.dmlcz/100322, S2CID 122189604
Further reading
• Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. ISBN 0-89871-321-8.
• Chris Godsil and Gordon Royle, Algebraic Graph Theory, Springer, 2001.
• A. Graham, Nonnegative Matrices and Applicable Topics in Linear Algebra, John Wiley & Sons, New York, 1987.
• R. A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, 1990
• Bas Lemmens and Roger Nussbaum, Nonlinear Perron-Frobenius Theory, Cambridge Tracts in Mathematics 189, Cambridge Univ. Press, 2012.
• S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability London: Springer-Verlag, 1993. ISBN 0-387-19832-6 (2nd edition, Cambridge University Press, 2009)
• Seneta, E. Non-negative matrices and Markov chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973) ISBN 978-0-387-29765-1
• Suprunenko, D.A. (2001) [1994], "Perron–Frobenius theorem", Encyclopedia of Mathematics, EMS Press (The claim that Aj has order n/h at the end of the statement of the theorem is incorrect.)
• Varga, Richard S. (2002), Matrix Iterative Analysis (2nd ed.), Springer-Verlag.
|
Wikipedia
|
Ruelle zeta function
In mathematics, the Ruelle zeta function is a zeta function associated with a dynamical system. It is named after mathematical physicist David Ruelle.
Formal definition
Let f be a function defined on a manifold M, such that the set of fixed points Fix(f^n) is finite for all n > 1. Further let φ be a function on M with values in d × d complex matrices. The zeta function of the first kind is[1]
$\zeta (z)=\exp \left(\sum _{m\geq 1}{\frac {z^{m}}{m}}\sum _{x\in \operatorname {Fix} (f^{m})}\operatorname {Tr} \left(\prod _{k=0}^{m-1}\varphi (f^{k}(x))\right)\right)$
Examples
In the special case d = 1, φ = 1, we have[1]
$\zeta (z)=\exp \left(\sum _{m\geq 1}{\frac {z^{m}}{m}}\left|\operatorname {Fix} (f^{m})\right|\right)$
which is the Artin–Mazur zeta function.
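As an illustrative sketch (the choice of map and the truncation depth are our own, not part of the definition): for the angle-doubling map x ↦ 2x (mod 1), f^m has |Fix(f^m)| = 2^m − 1 fixed points, and the defining series sums in closed form to (1 − z)/(1 − 2z), since Σ z^m(2^m − 1)/m = log(1 − z) − log(1 − 2z). A truncation of the series reproduces this:

```python
import math

def artin_mazur_zeta(fix_counts, z, terms):
    """Truncated Artin-Mazur zeta: exp(sum_{m=1}^{terms} z^m / m * |Fix(f^m)|)."""
    return math.exp(sum(z ** m / m * fix_counts(m) for m in range(1, terms + 1)))

# Angle-doubling map x -> 2x (mod 1): f^m has 2^m - 1 fixed points.
doubling = lambda m: 2 ** m - 1

z = 0.1  # well inside the radius of convergence |z| < 1/2
approx = artin_mazur_zeta(doubling, z, terms=60)
closed_form = (1 - z) / (1 - 2 * z)  # exp(log(1-z) - log(1-2z))
print(abs(approx - closed_form) < 1e-12)  # True
```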
The Ihara zeta function is an example of a Ruelle zeta function.[2]
See also
• List of zeta functions
References
1. Terras (2010) p. 28
2. Terras (2010) p. 29
• Lapidus, Michel L.; van Frankenhuijsen, Machiel (2006). Fractal geometry, complex dimensions and zeta functions. Geometry and spectra of fractal strings. Springer Monographs in Mathematics. New York, NY: Springer-Verlag. ISBN 0-387-33285-5. Zbl 1119.28005.
• Kotani, Motoko; Sunada, Toshikazu (2000). "Zeta functions of finite graphs". J. Math. Sci. Univ. Tokyo. 7: 7–25.
• Terras, Audrey (2010). Zeta Functions of Graphs: A Stroll through the Garden. Cambridge Studies in Advanced Mathematics. Vol. 128. Cambridge University Press. ISBN 0-521-11367-9. Zbl 1206.05003.
• Ruelle, David (2002). "Dynamical Zeta Functions and Transfer Operators" (PDF). Bulletin of AMS. 8 (59): 887–895.
Ruffini's rule
In mathematics, Ruffini's rule is a method for computation of the Euclidean division of a polynomial by a binomial of the form x – r. It was described by Paolo Ruffini in 1804.[1] The rule is a special case of synthetic division in which the divisor is a linear factor.
Algorithm
The rule establishes a method for dividing the polynomial:
$P(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}$
by the binomial:
$Q(x)=x-r$
to obtain the quotient polynomial:
$R(x)=b_{n-1}x^{n-1}+b_{n-2}x^{n-2}+\cdots +b_{1}x+b_{0}.$
The algorithm is in fact the long division of P(x) by Q(x).
To divide P(x) by Q(x):
1. Take the coefficients of P(x) and write them down in order. Then, write r at the bottom-left edge just over the line:
${\begin{array}{c|c c c c|c}&a_{n}&a_{n-1}&\dots &a_{1}&a_{0}\\r&&&&&\\\hline &&&&&\\\end{array}}$
2. Pass the leftmost coefficient ($a_{n}$) to the bottom just under the line.
${\begin{array}{c|c c c c|c}&a_{n}&a_{n-1}&\dots &a_{1}&a_{0}\\r&&&&&\\\hline &a_{n}&&&&\\&=b_{n-1}&&&&\end{array}}$
3. Multiply the rightmost number under the line by r, and write it over the line and one position to the right.
${\begin{array}{c|c c c c|c}&a_{n}&a_{n-1}&\dots &a_{1}&a_{0}\\r&&b_{n-1}\cdot r&&&\\\hline &a_{n}&&&&\\&=b_{n-1}&&&&\end{array}}$
4. Add the two values just placed in the same column.
${\begin{array}{c|c c c c|c}&a_{n}&a_{n-1}&\dots &a_{1}&a_{0}\\r&&b_{n-1}\cdot r&&&\\\hline &a_{n}&b_{n-1}\cdot r+a_{n-1}&&&\\&=b_{n-1}&=b_{n-2}&&&\end{array}}$
5. Repeat steps 3 and 4 until no numbers remain.
${\begin{array}{c|c c c c|c}&a_{n}&a_{n-1}&\dots &a_{1}&a_{0}\\r&&b_{n-1}\cdot r&\dots &b_{1}\cdot r&b_{0}\cdot r\\\hline &a_{n}&b_{n-1}\cdot r+a_{n-1}&\dots &b_{1}\cdot r+a_{1}&a_{0}+b_{0}\cdot r\\&=b_{n-1}&=b_{n-2}&\dots &=b_{0}&=s\\\end{array}}$
The b values are the coefficients of the result (R(x)) polynomial, the degree of which is one less than that of P(x). The final value obtained, s, is the remainder. The polynomial remainder theorem asserts that the remainder is equal to P(r), the value of the polynomial at r.
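The tabular procedure translates directly into a few lines of code. In this sketch the function name and the coefficient convention (highest degree first) are our own choices, not part of the rule itself:

```python
def ruffini(coeffs, r):
    """Divide a polynomial by (x - r) using Ruffini's rule.

    coeffs lists the dividend's coefficients from the highest degree
    down to the constant term.  Returns (quotient_coeffs, remainder).
    """
    b = [coeffs[0]]              # step 2: bring down the leading coefficient
    for a in coeffs[1:]:
        b.append(b[-1] * r + a)  # steps 3-4: multiply by r, add the next column
    return b[:-1], b[-1]         # the last value is the remainder s = P(r)

# The article's worked example: P(x) = 2x^3 + 3x^2 - 4 divided by x + 1 (r = -1).
print(ruffini([2, 3, 0, -4], -1))  # ([2, 1, -1], -3)
```

By the polynomial remainder theorem, the second return value also evaluates P(r) without dividing.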
Example
Here is an example of polynomial division as described above.
Let:
$P(x)=2x^{3}+3x^{2}-4\,\!$
$Q(x)=x+1.\,\!$
P(x) will be divided by Q(x) using Ruffini's rule. The main problem is that Q(x) is not a binomial of the form x − r, but rather x + r. Q(x) must be rewritten as
$Q(x)=x+1=x-(-1).\,\!$
Now the algorithm is applied:
1. Write down the coefficients and r. Note that, since P(x) has no x term, its coefficient is written as 0:
| 2 3 0 | -4
| |
-1 | |
----|--------------------|-------
| |
| |
2. Pass the first coefficient down:
| 2 3 0 | -4
| |
-1 | |
----|--------------------|-------
| 2 |
| |
3. Multiply the last obtained value by r:
| 2 3 0 | -4
| |
-1 | -2 |
----|--------------------|-------
| 2 |
| |
4. Add the values:
| 2 3 0 | -4
| |
-1 | -2 |
----|--------------------|-------
| 2 1 |
| |
5. Repeat steps 3 and 4 until the division is complete:
| 2 3 0 | -4
| |
-1 | -2 -1 | 1
----|----------------------------
| 2 1 -1 | -3
|{result coefficients}|{remainder}
So, since dividend = divisor × quotient + remainder, we have
$P(x)=Q(x)R(x)+s\,\!$, where
$R(x)=2x^{2}+x-1\,\!$ and $s=-3;\quad \Rightarrow 2x^{3}+3x^{2}-4=(2x^{2}+x-1)(x+1)-3\!$
Application to polynomial factorization
Ruffini's rule can be used when one needs the quotient of a polynomial P by a binomial of the form $x-r.$ (When one needs only the remainder, the polynomial remainder theorem provides a simpler method.)
A typical example, where one needs the quotient, is the factorization of a polynomial $p(x)$ for which one knows a root r:
The remainder of the Euclidean division of $p(x)$ by r is 0, and, if the quotient is $q(x),$ the Euclidean division is written as
$p(x)=q(x)\,(x-r).$
This gives a (possibly partial) factorization of $p(x),$ which can be computed with Ruffini's rule. Then, $p(x)$ can be further factored by factoring $q(x).$
The fundamental theorem of algebra states that every polynomial of positive degree has at least one complex root. Combined with the above process, this implies that every polynomial $p(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}$ can be factored as
$p(x)=a_{n}(x-r_{1})\cdots (x-r_{n}),$
where $r_{1},\ldots ,r_{n}$ are complex numbers.
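The root-then-deflate loop just described can be sketched as follows. The helper function and the cubic are illustrative choices, and a numerical root finder (here NumPy's `roots`) stands in for "knowing" a root:

```python
import numpy as np

def deflate(coeffs, r):
    """One Ruffini step: divide by (x - r) and return the quotient coefficients."""
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(b[-1] * r + a)
    return b[:-1]  # the dropped last value is the (near-zero) remainder p(r)

# Illustrative polynomial: p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3).
coeffs = [1.0, -6.0, 11.0, -6.0]
roots = []
while len(coeffs) > 1:
    r = np.roots(coeffs)[0]      # pick any root of the current quotient
    roots.append(r)
    coeffs = deflate(coeffs, r)  # peel off the factor (x - r)

found = sorted(round(float(np.real(r)), 6) for r in roots)
print(found)  # [1.0, 2.0, 3.0]
```

Each pass shortens the polynomial by one degree, so the loop terminates with the complete list of roots (with multiplicity).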
History
The method was invented by Paolo Ruffini, who took part in a competition organized by the Italian Scientific Society (of Forty). The challenge was to devise a method to find the roots of any polynomial. Five submissions were received. In 1804 Ruffini's was awarded first place and his method was published. He later published refinements of his work in 1807 and again in 1813.
See also
• Lill's method, doing the division graphically
• Horner's method
References
1. Cajori, Florian (1911). "Horner's method of approximation anticipated by Ruffini" (PDF). Bulletin of the American Mathematical Society. 17 (8): 389–444. doi:10.1090/s0002-9904-1911-02072-9.
External links
• Weisstein, Eric W. "Ruffini's rule". MathWorld.
• Media related to Ruffini's rule at Wikimedia Commons
Carlo Masi
Ruggero Freddi (born October 6, 1976) is an Italian mathematics lecturer and former gay pornographic film actor known professionally as Carlo Masi.
Carlo Masi
Born
Ruggero Freddi
(1976-10-06) October 6, 1976
Rome, Italy
Alma mater Sapienza University of Rome (BS, MS, PhD)
Other names Carlo Masi
Years active 2004–2009
Employer COLT Studio Group
Spouse
Gustavo Leguizamon
(m. 2018)
Scientific career
Fields Mathematics
Institutions Sapienza University of Rome
Thesis Morse index of multiple blow-up solutions to the two-dimensional sinh-poisson equation (2020)
Doctoral advisor Angela Pistoia
Massimo Grossi
Website ruggerofreddi.it
Early life and education
Freddi was born in Rome in 1976 to a poor family. His parents divorced when he was three years old.[1] At the age of 14, he began to work out at a local gym, practicing bodybuilding assiduously.[2][3][4] In 2002, when he was about to complete his first cycle of study at the Sapienza University of Rome, he moved to Canada, and subsequently to New York.[5][3]
Career
In 2003, Masi completed a Master of Science (MSc) degree in computer engineering[6][7][8] at the Sapienza University of Rome and worked in an artificial intelligence laboratory.[9][10]
Pornography
In 2004, after being contacted by a Colt Studio Group (CSG) recruiter, he made his debut in the gay pornography industry in his first porn movie, Big 'N Plenty.[3] After his debut, he signed an exclusive model contract with CSG. He has consistently promoted safe sex and engaged with the Italian LGBT community.[11][3]
In 2006, he was selected to appear on the cover of COLT 40,[12] a coffee table book published to celebrate the fortieth anniversary of the production company.[13]
In 2007, Masi and his future husband, Adam Champ (Gustavo Leguizamon), were selected to appear on the cover of the Damron 2007 Men's Travel Guide.[14][15]
In 2008, CSG and CalExotics released a dildo reproduction of Masi's penis.[16] That same year, he was named the first and only Colt Man Emeritus and his contract was extended to a lifetime one.[17] Also in 2008, Masi and Champ were selected to appear on the cover of Adam Gay Film & Video Directory Magazine.[18]
During his porn career, he was a guest on national Italian TV shows such as Chiambretti Night, L'Infedele and Sugo.[19][20][21] Moreover, he was featured in several tours across America, Mexico and Europe[22][23][24] to promote the CSG brand. His porn career lasted six years (from age 28 to 34). Following a disagreement with Colt in 2009, Masi retired from the pornography industry.[25]
He then entered the theatre industry.[10][26] In 2010, he was officially announced as a permanent member of the cast of Saturday Night Live Italia but he never appeared on the show.[27][28] In 2011, he was included in the anthologies Porn from Andy Warhol to X-Tube[29][30] and Gay Porn Heroes: 100 Most Famous Porn Stars.[31] In 2013, he was interviewed for the documentary HUSTLABALL BERLIN - A Documentary That Bares All.[32] In 2014, he was included in the coffee table book produced by Colt entitled Hairy Chested Men.[33]
Theatre
In 2009, he made his theatre debut with Senzaparole, a reinterpretation of Samuel Beckett's Act Without Words I,[34][35] directed by Andrea Adriatico and staged in Bologna with the Teatri di vita theatre company[36] and later in Rome at the Teatro India.[37][38]
Academia
After working in the theatre, Masi decided to return to the Sapienza University of Rome. There he earned a Bachelor of Science degree (cum laude) in mathematics, with a score of 110/110 and then a Master of Science degree (cum laude) in mathematics with a score of 110/110.[39][8][3] In 2020, he completed a Ph.D. in Mathematical Models for Engineering, Electromagnetism and Nanosciences at Sapienza University of Rome focusing on the application of Morse theory to a Dirichlet problem traced back to Poisson equations.[39][40] His doctoral advisors were Angela Pistoia and Massimo Grossi.[41][42][3]
While he was working on his doctorate, he was a lecturer for Analysis 1[43] and Analysis 2[8] courses at the Faculty of Engineering at the Sapienza University of Rome.[44]
Media attention
In 2017, an article published by la Repubblica[45] brought to light his past as a porn actor, causing a media frenzy.[46][47] The story was picked up by numerous newspapers around the world.[48]
In 2020, the Strega Prize-winning writer Walter Siti published La natura è innocente – Due vite quasi vere.[49] The book is a double biography, told in alternating chapters, one of which is the life of Ruggero Freddi.[50]
In 2023, he returned to the media spotlight after winning an unfair-dismissal lawsuit against his former employer, the Sapienza University of Rome, which had dismissed him without cause and refused to pay him for work done in 2019. The university was ordered to pay him €2,500 for his hours worked and damages of €1,500 for "unjustified dismissal".[51][52][53][54][55]
Personal life
In 2015, Masi married Prince Giovanni Ravaschieri Fieschi Del Drago in Porto.[56] Del Drago died in 2016.[57]
During an appearance on Pomeriggio Cinque,[58] he proposed to his partner, Gustavo Leguizamon.[59] The civil union was celebrated on May 4, 2018,[60] and was broadcast live on Pomeriggio Cinque.[61]
Selected videography
Year Title Studio Director
2004 Big 'N Plenty[62] Colt Studios John Rutherford
Muscle Up![62]
Buckleroose - Special Collectable Edition
eXposed: The Making Of A Legend
2005 Minute Man 23[62]
Wide Strokes[62]
2006 Dual: Taking it Like a Man[62]
Man Country[62]
Waterbucks 2[62]
2007 Naked Muscles: The New Breed[62]
Hawai'i[62]
Paradise Found (no sex)[62] Buckshot Productions Steve Landess and Kristofer Weston
2009 Hot Bods[62] Colt Studios John Rutherford
MuscleHeads[62]
2010 Colt Icon: Luke Garrett[62]
2014 Top Shots[62]
Top Shots 3[62]
Awards
Year Award Category Film Partnered Result
2005 GayVN Award Best Sex Scene Big 'N Plenty (2004) Karim Nominee
2006 HeatGay Award Best Actor – – Won
2008 XBIZ Award[63] LGBT Performer of the Year – – Nominee
GayVN Award[64] Best Sex Scene Naked Muscles: The New Breed (2007) Tom Chase Nominee
See also
Wikimedia Commons has media related to Carlo Masi.
• LGBT people in science
• List of actors in gay pornographic films
References
Citations
1. Matthews, David (May 17, 2018). "Interview with Ruggero Freddi". timeshighereducation.com. Archived from the original on March 26, 2020. Retrieved May 21, 2020.
2. Siti 2020, p. 72–73.
3. Morgana (2017-11-11). "Ruggero Freddi biografia: la storia di Carlo Masi". WDonna.it (in Italian). Archived from the original on 2019-04-02. Retrieved 2020-04-16.
4. Corbelli, Giulio Maria (June 2, 2005). "Un ragazzo molto Colt – Intervista a Carlo Masi" (in Italian). CulturaGay.it. Archived from the original on January 19, 2019. Retrieved November 10, 2020.
5. Siti 2020, p. 128.
6. Siti 2020, p. 132–134.
7. "Facevo il pornodivo per pellicole gay. Oggi sono docente di ingegneria a La Sapienza di Roma". L'HuffPost (in Italian). 2017-10-27. Archived from the original on 2018-01-06. Retrieved 2020-04-16.
8. Giovanna, Cavalli (2017-10-27). "Porn star turned lecturer earns students' respect". Corriere della Sera (in Italian). Archived from the original on 2018-04-16. Retrieved 2020-03-26.
9. Siti 2020, p. 133–134.
10. Arcolaci, Alessia (2017-10-28). "Basta col porno, adesso salgo in cattedra". Vanity Fair (in Italian). Archived from the original on 2017-11-20. Retrieved 2020-03-25.
11. Siti 2020, p. 317.
12. Rutherford, John (2006). Colt 40. Colt Studio Group. ISBN 978-1-933842-15-8.
13. AVN, Peter Johnson. "COLT to Celebrate 40th Anniversary AVN". AVN. Archived from the original on 2020-04-22. Retrieved 2020-04-17.
14. Gatta, Gina (December 2007). Damron Men's Travel Guide. Scb Distributors. ISBN 978-0-929435-62-6.
15. XBIZ (21 September 2007). "COLT Exclusives on the Cover of Damron Guide". XBIZ. Archived from the original on 2020-04-22. Retrieved 2020-04-16.
16. "Carlo Masi Teams With Cal Exotics for Casting". AVN. 2009-03-04. Archived from the original on 2020-04-22. Retrieved 2020-04-16.
17. Spencer, Jeremy (2008-04-04). "COLT Studio Renews Contract with Adam Champ and Carlo Masi". XBIZ. Archived from the original on 2018-05-04. Retrieved 2020-04-01.
18. Adam Gay Film & Video Directory Magazine Carlo Masi & Adam Champ 2008 Single Issue Magazine – January 1, 2008. Archived from the original on May 22, 2020. Retrieved May 21, 2020.
19. "Sugo - Carlo Masi". YouTube.
20. Magnani, Niccolò (2017-10-26). "CARLO MASI/ Ruggero Freddi: da porno attore gay a professore all'Università La Sapienza (Pomeriggio 5)". Il Sussidiario (in Italian). Archived from the original on 2020-04-22. Retrieved 2020-04-14.
21. "Porn Actor Carlo Masi attends 'Chiambretti Night' Italian Tv Show..." Getty Images (in Italian). Archived from the original on 2020-04-22. Retrieved 2020-04-14.
22. "Parte il Tour degli uomini Colt: ecco chi ci sarà e le tappe". Gay.it (in Italian). 2007-11-10. Archived from the original on 2020-04-22. Retrieved 2020-04-15.
23. AVN, G. Zisk Rice. "COLT Men Ready for Mexican Tour AVN". AVN. Archived from the original on 2020-04-22. Retrieved 2020-04-15.
24. AVN, Peter Johnson. "Europe on Four COLT Exclusives a Day! AVN". AVN. Archived from the original on 2020-04-22. Retrieved 2020-04-15.
25. Carradori, Niccolò (November 26, 2017). "I Went From Being A Porn Star to University Lecturer". vice.com. Retrieved May 21, 2020.
26. Siti 2020, p. 251-252.
27. "Riparte con Mary Carbone Saturday Night Live - Affaritaliani.it". www.affaritaliani.it. Archived from the original on 2020-04-22. Retrieved 2020-04-15.
28. "Mary Carbone, Giovanni Conversano, Rocco Pietrantonio e George Leonard al Saturday Night Live – MondoReality" (in Italian). 5 November 2010. Archived from the original on 2020-04-22. Retrieved 2020-04-16.
29. Clarke, Kevin (2013). Porn: From Andy Warhol to X-Tube. Bruno Gmünder Verlag. ISBN 978-3-86787-591-2.
30. "La storia del porno gay in un volume". Gay.it (in Italian). 2011-09-03. Archived from the original on 2020-04-22. Retrieved 2020-04-15.
31. Adams, J. C. (2011). Gay Porn Heroes: 100 Most Famous Porn Stars. Bruno Gmünder Verlag. ISBN 978-3-86787-169-3.
32. Amos (2014-05-25). ""HUSTLABALL BERLIN"— A Documentary That Bares All". Reviews by Amos Lassen. Archived from the original on 2014-09-03. Retrieved 2020-04-15.
33. Colt (September 2014). Hairy Chested Men. Bruno Gmunder Verlag GmbH. ISBN 978-3-86787-761-9.
34. "Masi, pornodivo gay per Beckett interpreta un desiderio impossibile - la Repubblica.it". Archivio - la Repubblica.it (in Italian). 7 September 2010. Archived from the original on 2014-01-19. Retrieved 2020-03-25.
35. Sassi, Edoardo (2010-09-04). "Ingegnere e porno divo va in scena con Beckett". Corriere della Sera (in Italian). Retrieved 2020-04-14.
36. Trobetta, Sergio (2010-12-08). "Il porno o Beckett è tutto spettacolo". La Stampa. Archived from the original on 2010-08-14. Retrieved 2020-03-26.
37. "Senza parole - Teatri di vita per Short (hot) Theatre". Teatro e Critica (in Italian). 2010-09-06. Archived from the original on April 24, 2020. Retrieved 2020-03-25.
38. Siti 2020, p. 128-129.
39. Siti 2020, p. 315.
40. Siti 2020, p. 325.
41. Freddi, Ruggero (March 2022). "Morse Index of Multiple Blow-Up Solutions to the Two-Dimensional Sinh-Poisson Equation" (PDF). Analysis in Theory and Applications. 38: 26–78. doi:10.4208/ata.OA-2020-0037. S2CID 244866305.
42. Freddi, Ruggero (2020). Morse index of multiple blow-up solutions to the two-dimensional sinh-poisson equation (PDF) (Thesis). arXiv:2001.02137. Bibcode:2020arXiv200102137F. Archived from the original (PDF) on April 22, 2020. Retrieved April 19, 2021.
43. Carradori, Niccolò (2017-11-26). "I Went From Being A Porn Star to University Lecturer". Vice. Archived from the original on 2020-03-26. Retrieved 2020-03-26.
44. Siti 2020, p. 315-316.
45. "Il prof era un pornodivo: gli studenti lo scoprono su Facebook". Repubblica Tv - la Repubblica.it (in Italian). 2017-10-26. Archived from the original on 2017-12-05. Retrieved 2020-03-26.
46. "Students accidentally discover their maths teacher is a former gay porn star". Attitude.co.uk. 2017-11-06. Archived from the original on 2020-03-26. Retrieved 2020-03-26.
47. Matthews, David (2018-05-17). "Interview with Ruggero Freddi". Times Higher Education (THE). Archived from the original on 2020-03-26. Retrieved 2020-03-26.
48. Siti 2020, p. 318.
49. Siti 2020.
50. Serino, Gian Paolo (2020-03-10). "Walter Siti mette a nudo la natura (innocente) dell'uomo". ilGiornale.it (in Italian). Archived from the original on 2020-03-11. Retrieved 2020-03-25.
51. "Ruggero Freddi, chi è l'ex porno attore cacciato dalla Sapienza". la Repubblica (in Italian). 2023-03-02. Retrieved 2023-03-11.
52. "Former Gay Adult Actor Wins Lawsuit Against University That Fired Him". www.out.com. Retrieved 2023-03-11.
53. "Ruggero Freddi, ex pornostar e professore universitario: "La Sapienza mi ha cacciato, ora deve risarcirmi"". Vanity Fair Italia (in Italian). 2023-03-02. Retrieved 2023-03-11.
54. "Gay Ex-Adult Film Star Sues University for Firing Him and Wins". Yahoo News. 8 March 2023. Retrieved 2023-03-11.
55. [email protected] (2023-03-03). "Former gay porn actor wins court case against university following unfair dismissal". GCN. Retrieved 2023-03-11.
56. Siti 2020, p. 290-291.
57. Siti 2020, p. 307-308.
58. Da pornodivo a docente all'Università - Pomeriggio Cinque Video, retrieved 2020-04-15
59. Proposta di matrimonio - Pomeriggio Cinque Video, retrieved 2020-03-25
60. Besanvalle, James (2018-05-05). "Gay porn star couple wed, 12 years on from their first gay sex scene together". Gay Star News. Archived from the original on 2018-07-09. Retrieved 2020-03-25.
61. "Matrimonio in diretta a Pomeriggio Cinque, Ruggero Freddi si sposa e Vladimir Luxuria celebra le nozze". TgCom24. 2018-05-04. Archived from the original on 2018-05-04. Retrieved 2020-03-25.
62. "Carlo Masi: Gay Erotic Video Index". www.gayeroticvideoindex.com. Archived from the original on 2010-07-06. Retrieved 2020-04-16.
63. "XBIZ Awards (2008)". IMDb. Retrieved 2020-04-14.
64. Naked Muscles: The New Breed - IMDb, retrieved 2020-04-14
Bibliography
• Siti, Walter (2020). La natura è innocente - due vite quasi vere. Rizzoli Libri. ISBN 9788817146449.
External links
• Official website
• Carlo Masi at IMDb
• Carlo Masi at the Internet Adult Film Database
Authority control: Academics
• zbMATH
Ruggiero Torelli
Ruggiero Torelli (7 June 1884, in Naples – 9 September 1915) was an Italian mathematician who introduced Torelli's theorem.
Publications
• Ruggiero Torelli (1913). "Sulle varietà di Jacobi". Rendiconti della Reale accademia nazionale dei Lincei. 22 (5): 98–103.
• Torelli, Ruggiero (1995), Ciliberto, Ciro; Ribenboim, Paulo; Sernesi, Edoardo (eds.), Collected papers of Ruggiero Torelli, Queen's Papers in Pure and Applied Mathematics, vol. 101, Kingston, ON: Queen's University, ISBN 0-88911-707-1, MR 1374332
See also
• Torelli group
References
• Severi, Francesco (1916), "Ruggiero Torelli", Bollettino di bibliografia e storia delle scienze matematiche, Obituary, 18: 11–21
External links
• Biography
• Biography in Italian
Authority control
International
• ISNI
• VIAF
National
• France
• BnF data
• Germany
• Italy
• Israel
• United States
• Vatican
Academics
• zbMATH
Other
• IdRef
Rui Loja Fernandes
Rui António Loja Fernandes (July 20, 1965, Coimbra) is a Portuguese mathematician working in the USA.
Rui Loja Fernandes
Born July 20, 1965
Coimbra, Portugal
Nationality Portuguese
Alma mater University of Minnesota
Scientific career
Fields Mathematics
Institutions Instituto Superior Técnico, University of Illinois at Urbana-Champaign
Thesis Completely Integrable bi-Hamiltonian Systems (1994)
Doctoral advisor Peter Olver
Education and career
Fernandes obtained a bachelor's degree in Physics Engineering at Instituto Superior Técnico (Lisbon, Portugal) in 1988. He then moved to the USA and earned a master's degree in Mathematics in 1991 and a PhD in Mathematics in 1994 from the University of Minnesota. His PhD thesis, entitled "Completely Integrable bi-Hamiltonian Systems", was written under the supervision of Peter J. Olver.[1]
In 1994 he returned to Instituto Superior Técnico, where he worked first as an Assistant Professor (1994-2002), then as an Associate Professor (2003-2007), and finally as a Full Professor (2007-2012).[2][3]
In 2012 he moved back to the USA, and since then he has been the Lois M. Lackner Professor of Mathematics at the University of Illinois at Urbana–Champaign.[4] In 2016, he became a Fellow of the American Mathematical Society "for contributions to the study of Poisson geometry and Lie algebroids, and for service to the mathematical community."[5]
Research
Fernandes's research focuses on differential geometry, more precisely on Poisson and symplectic geometry. Among his best-known results are a solution to the long-standing problem of describing the obstructions to the integrability of Lie algebroids[6] and a new geometric proof of Conn's linearization theorem,[7] both obtained in collaboration with Marius Crainic.
He is the author of more than 40 research papers in peer-reviewed journals[8] and has supervised 6 PhD students as of 2021.[1]
References
1. Rui Loja Fernandes at the Mathematics Genealogy Project
2. "Rui Loja Fernandes| DMIST". www.math.tecnico.ulisboa.pt. Retrieved 2021-03-27.
3. "CAMGSD — Members". camgsd.tecnico.ulisboa.pt. Retrieved 2021-03-27.
4. "Rui Loja Fernandes". illinois.edu. Retrieved May 13, 2017.
5. List of Fellows of the American Mathematical Society, retrieved 2017-08-09.
6. Crainic, Marius; Fernandes, Rui (2003-03-01). "Integrability of Lie brackets". Annals of Mathematics. 157 (2): 575–620. doi:10.4007/annals.2003.157.575. ISSN 0003-486X.
7. Crainic, Marius; Fernandes, Rui Loja (2011-03-01). "A geometric approach to Conn's linearization theorem". Annals of Mathematics. 173 (2): 1121–1139. doi:10.4007/annals.2011.173.2.14. ISSN 0003-486X.
8. "Rui Loja Fernandes". scholar.google.com. Retrieved 2021-03-27.
External links
• Rui Loja Fernandes publications indexed by Google Scholar
• "Rui Loja Fernandes". ulisboa.pt. Retrieved May 13, 2017.
• "Rui Loja Fernandes". illinois.edu. Retrieved March 27, 2021.
Rule 184
Rule 184 is a one-dimensional binary cellular automaton rule, notable for solving the majority problem as well as for its ability to simultaneously describe several, seemingly quite different, particle systems:
• Rule 184 can be used as a simple model for traffic flow in a single lane of a highway, and forms the basis for many cellular automaton models of traffic flow with greater sophistication. In this model, particles (representing vehicles) move in a single direction, stopping and starting depending on the cars in front of them. The number of particles remains unchanged throughout the simulation. Because of this application, Rule 184 is sometimes called the "traffic rule".[1]
• Rule 184 also models a form of deposition of particles onto an irregular surface, in which each local minimum of the surface is filled with a particle in each step. At each step of the simulation, the number of particles increases. Once placed, a particle never moves.
• Rule 184 can be understood in terms of ballistic annihilation, a system of particles moving both leftwards and rightwards through a one-dimensional medium. When two such particles collide, they annihilate each other, so that at each step the number of particles remains unchanged or decreases.
The apparent contradiction between these descriptions is resolved by different ways of associating features of the automaton's state with particles.
The name of Rule 184 is a Wolfram code that defines the evolution of its states. The earliest research on Rule 184 is by Li (1987) and Krug & Spohn (1988). In particular, Krug and Spohn already describe all three types of particle system modeled by Rule 184.[2]
Definition
A state of the Rule 184 automaton consists of a one-dimensional array of cells, each containing a binary value (0 or 1). In each step of its evolution, the Rule 184 automaton applies the following rule to each of the cells in the array, simultaneously for all cells, to determine the new state of the cell:[3]
current pattern 111 110 101 100 011 010 001 000
new state for center cell 1 0 1 1 1 0 0 0
An entry in this table defines the new state of each cell as a function of the previous state and the previous values of the neighboring cells on either side. The name for this rule, Rule 184, is the Wolfram code describing the state table above: the bottom row of the table, 10111000, when viewed as a binary number, is equal to the decimal number 184.[4]
The rule set for Rule 184 may also be described intuitively, in several different ways:
• At each step, whenever there exists in the current state a 1 immediately followed by a 0, these two symbols swap places. Based on this description, Krug & Spohn (1988) call Rule 184 a deterministic version of a "kinetic Ising model with asymmetric spin-exchange dynamics".
• At each step, if a cell with value 1 has a cell with value 0 immediately to its right, the 1 moves rightwards leaving a 0 behind. A 1 with another 1 to its right remains in place, while a 0 that does not have a 1 to its left stays a 0. This description is most apt for the application to traffic flow modeling.[5]
• If a cell has state 0, its new state is taken from the cell to its left. Otherwise, its new state is taken from the cell to its right. That is, each cell can be implemented by a two-way demultiplexer with the two adjacent cells being inputs, and the cell itself acting as the selector line. Each cell's next state is determined by the demultiplexer's output. This operation is closely related to a Fredkin gate.[6]
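These equivalent descriptions are easy to check mechanically. Below is a minimal Python sketch (an illustration, not code from any of the cited sources) that applies the rule table on a ring of cells and verifies the Wolfram code, the "swap each 10 to 01" reading, and the conservation of 1s:

```python
def rule184_step(cells):
    """One synchronous Rule 184 update on a ring (periodic boundaries)."""
    table = {  # bottom row of the rule table: 10111000 (binary) = 184
        (1, 1, 1): 1, (1, 1, 0): 0, (1, 0, 1): 1, (1, 0, 0): 1,
        (0, 1, 1): 1, (0, 1, 0): 0, (0, 0, 1): 0, (0, 0, 0): 0,
    }
    n = len(cells)
    return [table[cells[i - 1], cells[i], cells[(i + 1) % n]] for i in range(n)]

# The rule's name is its table row read as a binary number:
assert int("10111000", 2) == 184

state = [1, 1, 0, 1, 0, 0, 0, 1]
nxt = rule184_step(state)
assert nxt == [1, 0, 1, 0, 1, 0, 0, 1]   # every "10" pair became "01"
assert sum(nxt) == sum(state)            # the number of 1s is conserved
```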
Dynamics and majority classification
From the descriptions of the rules above, two important properties of its dynamics may immediately be seen. First, in Rule 184, for any finite set of cells with periodic boundary conditions, the number of 1s and the number of 0s in a pattern remains invariant throughout the pattern's evolution. Rule 184 and its reflection are the only nontrivial[7] elementary cellular automata to have this property of number conservation.[8] Similarly, if the density of 1s is well-defined for an infinite array of cells, it remains invariant as the automaton carries out its steps.[9] And second, although Rule 184 is not symmetric under left-right reversal, it does have a different symmetry: reversing left and right and at the same time swapping the roles of the 0 and 1 symbols produces a cellular automaton with the same update rule.
Patterns in Rule 184 typically quickly stabilize, either to a pattern in which the cell states move in lockstep one position leftwards at each step, or to a pattern that moves one position rightwards at each step. Specifically, if the initial density of cells with state 1 is less than 50%, the pattern stabilizes into clusters of cells in state 1, spaced two units apart, with the clusters separated by blocks of cells in state 0. Patterns of this type move rightwards. If, on the other hand, the initial density is greater than 50%, the pattern stabilizes into clusters of cells in state 0, spaced two units apart, with the clusters separated by blocks of cells in state 1, and patterns of this type move leftwards. If the density is exactly 50%, the initial pattern stabilizes (more slowly) to a pattern that can equivalently be viewed as moving either leftwards or rightwards at each step: an alternating sequence of 0s and 1s.[10]
The majority problem is the problem of constructing a cellular automaton that, when run on any finite set of cells, can compute the value held by a majority of its cells. In a sense, Rule 184 solves this problem, as follows: if Rule 184 is run on a finite set of cells with periodic boundary conditions, with an unequal number of 0s and 1s, then each cell will eventually see two consecutive states of the majority value infinitely often, but will see two consecutive states of the minority value only finitely many times.[11] The majority problem cannot be solved perfectly if it is required that all cells eventually stabilize to the majority state,[12] but the Rule 184 solution avoids this impossibility result by relaxing the criterion by which the automaton recognizes a majority.
Traffic flow
If one interprets each 1-cell in Rule 184 as containing a particle, these particles behave in many ways similarly to automobiles in a single lane of traffic: they move forward at a constant speed if there is open space in front of them, and otherwise they stop. Traffic models such as Rule 184 and its generalizations that discretize both space and time are commonly called particle-hopping models.[13] Although very primitive, the Rule 184 model of traffic flow already predicts some of the familiar emergent features of real traffic: clusters of freely moving cars separated by stretches of open road when traffic is light, and waves of stop-and-go traffic when it is heavy.[14]
It is difficult to pinpoint the first use of Rule 184 for traffic flow simulation, in part because the focus of research in this area has been less on achieving the greatest level of mathematical abstraction and more on verisimilitude: even the earlier papers on cellular automaton based traffic flow simulation typically make the model more complex in order to more accurately simulate real traffic. Nevertheless, Rule 184 is fundamental to traffic simulation by cellular automata. Wang, Kwong & Hui (1998), for instance, state that "the basic cellular automaton model describing a one-dimensional traffic flow problem is rule 184." Nagel (1996) writes "Much work using CA models for traffic is based on this model." Several authors describe one-dimensional models with vehicles moving at multiple speeds; such models degenerate to Rule 184 in the single-speed case.[15] Gaylord & Nishidate (1996) extend the Rule 184 dynamics to two-lane highway traffic with lane changes; their model shares with Rule 184 the property that it is symmetric under simultaneous left-right and 0-1 reversal. Biham, Middleton & Levine (1992) describe a two-dimensional city grid model in which the dynamics of individual lanes of traffic is essentially that of Rule 184.[16] For an in-depth survey of cellular automaton traffic modeling and associated statistical mechanics, see Maerivoet & De Moor (2005) and Chowdhury, Santen & Schadschneider (2000).
When viewing Rule 184 as a traffic model, it is natural to consider the average speed of the vehicles. When the density of traffic is less than 50%, this average speed is simply one unit of distance per unit of time: after the system stabilizes, no car ever slows. However, when the density is a number ρ greater than 1/2, the average speed of traffic is ${\tfrac {1-\rho }{\rho }}$. Thus, the system exhibits a second-order kinetic phase transition at ρ = 1/2. When Rule 184 is interpreted as a traffic model, and started from a random configuration whose density is at this critical value ρ = 1/2, then the average speed approaches its stationary limit as the square root of the number of steps. Instead, for random configurations whose density is not at the critical value, the approach to the limiting speed is exponential.[17]
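The behavior above the critical density can be observed in a short simulation. The sketch below (illustrative Python, with an arbitrarily chosen ring size and initial jam) uses the traffic reading of the rule: a car advances exactly when the cell ahead is empty. After stabilization, the measured average speed matches (1 − ρ)/ρ:

```python
def traffic_step(road):
    """One Rule 184 step, read as traffic: a car (1) advances iff the cell ahead is empty."""
    n = len(road)
    return [
        1 if (road[i] == 1 and road[(i + 1) % n] == 1)      # blocked car stays put
             or (road[i] == 0 and road[i - 1] == 1)         # the car behind moves in
        else 0
        for i in range(n)
    ]

road = [1] * 8 + [0] * 4     # ring of 12 cells, density rho = 2/3
for _ in range(20):          # more than enough steps for the jam pattern to stabilize
    road = traffic_step(road)

n = len(road)
moving = sum(1 for i in range(n) if road[i] == 1 and road[(i + 1) % n] == 0)
assert moving == 4                   # the number of moving cars equals the number of holes
assert moving / sum(road) == 0.5     # average speed (1 - rho)/rho = 0.5
```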
Surface deposition
As shown in the figure, and as originally described by Krug & Spohn (1988),[18] Rule 184 may be used to model deposition of particles onto a surface. In this model, one has a set of particles that occupy a subset of the positions in a square lattice oriented diagonally (the darker particles in the figure). If a particle is present at some position of the lattice, the lattice positions below and to the right, and below and to the left of the particle must also be filled, so the filled part of the lattice extends infinitely downward to the left and right. The boundary between filled and unfilled positions (the thin black line in the figure) is interpreted as modeling a surface, onto which more particles may be deposited. At each time step, the surface grows by the deposition of new particles in each local minimum of the surface; that is, at each position where it is possible to add one new particle that has existing particles below it on both sides (the lighter particles in the figure).
To model this process by Rule 184, observe that the boundary between filled and unfilled lattice positions can be marked by a polygonal line, the segments of which separate adjacent lattice positions and have slopes +1 and −1. Model a segment with slope +1 by an automaton cell with state 0, and a segment with slope −1 by an automaton cell with state 1. The local minima of the surface are the points where a segment of slope −1 lies to the left of a segment of slope +1; that is, in the automaton, a position where a cell with state 1 lies to the left of a cell with state 0. Adding a particle to that position corresponds to changing the states of these two adjacent cells from 1,0 to 0,1, so advancing the polygonal line. This is exactly the behavior of Rule 184.[19]
Related work on this model concerns deposition in which the arrival times of additional particles are random, rather than having particles arrive at all local minima simultaneously.[20] These stochastic growth processes can be modeled as an asynchronous cellular automaton.
Ballistic annihilation
Ballistic annihilation describes a process by which moving particles and antiparticles annihilate each other when they collide. In the simplest version of this process, the system consists of a single type of particle and antiparticle, moving at equal speeds in opposite directions in a one-dimensional medium.[21]
This process can be modeled by Rule 184, as follows. The particles are modeled as points that are aligned, not with the cells of the automaton, but rather with the interstices between cells. Two consecutive cells that both have state 0 model a particle at the space between these two cells that moves rightwards one cell at each time step. Symmetrically, two consecutive cells that both have state 1 model an antiparticle that moves leftwards one cell at each time step. The remaining possibilities for two consecutive cells are that they both have differing states; this is interpreted as modeling a background material without any particles in it, through which the particles move. With this interpretation, the particles and antiparticles interact by ballistic annihilation: when a rightwards-moving particle and a leftwards-moving antiparticle meet, the result is a region of background from which both particles have vanished, without any effect on any other nearby particles.[22]
The behavior of certain other systems, such as one-dimensional cyclic cellular automata, can also be described in terms of ballistic annihilation.[23] There is a technical restriction on the particle positions for the ballistic annihilation view of Rule 184 that does not arise in these other systems, stemming from the alternating pattern of the background: in the particle system corresponding to a Rule 184 state, if two consecutive particles are both of the same type they must be an odd number of cells apart, while if they are of opposite types they must be an even number of cells apart. However this parity restriction does not play a role in the statistical behavior of this system.
Pivato (2007) uses a similar but more complicated particle-system view of Rule 184: he not only views alternating 0–1 regions as background, but also considers regions consisting solely of a single state to be background as well. Based on this view he describes seven different particles formed by boundaries between regions, and classifies their possible interactions. See Chopard & Droz (1998, pp. 188–190) for a more general survey of the cellular automaton models of annihilation processes.
Context free parsing
In his book A New Kind of Science, Stephen Wolfram points out that rule 184, when run on patterns with density 50%, can be interpreted as parsing the context free language describing strings formed from nested parentheses. This interpretation is closely related to the ballistic annihilation view of rule 184: in Wolfram's interpretation, an open parenthesis corresponds to a left-moving particle while a close parenthesis corresponds to a right-moving particle.[24]
See also
• Rule 30, Rule 90, and Rule 110, other one-dimensional cellular automata with different behavior
Notes
1. E.g. see Fukś (1997).
2. One can find many later papers that, when mentioning Rule 184, cite the early papers of Stephen Wolfram. However, Wolfram's papers consider only automata that are symmetric under left-right reversal, and therefore do not describe Rule 184.
3. This rule table is already given in a shorthand form in the name "Rule 184", but it can be found explicitly e.g. in Fukś (1997).
4. For the definition of this code, see Wolfram (2002), p.53. For the calculation of this code for Rule 184, see e.g. Boccara & Fukś (1998).
5. See, e.g., Boccara & Fukś (1998).
6. Li (1992). Li used this interpretation as part of a generalization of Rule 184 to nonlocal neighborhood structures.
7. Rules 170, 204, and 240 trivially exhibit this property, as in each of these rules, every cell is simply copied from one of the three cells above it on each step.
8. Boccara & Fukś (1998); Alonso-Sanz (2011).
9. Boccara & Fukś (1998) have investigated more general automata with similar conservation properties, as has Moreira (2003).
10. Li (1987).
11. Capcarrere, Sipper & Tomassini (1996); Fukś (1997); Sukumar (1998).
12. Land & Belew (1995).
13. Nagel (1996); Chowdhury, Santen & Schadschneider (2000).
14. Tadaki & Kikuchi (1994).
15. For several models of this type see Nagel & Schreckenberg (1992), Fukui & Ishibashi (1996), and Fukś & Boccara (1998). Nagel (1996) observes the equivalence of these models to rule 184 in the single-speed case and lists several additional papers on this type of model.
16. See also Tadaki & Kikuchi (1994) for additional analysis of this model.
17. Fukś & Boccara (1998).
18. See also Belitsky & Ferrari (1995) and Chopard & Droz (1998, p. 29).
19. Krug & Spohn (1988).
20. Also discussed by Krug & Spohn (1988).
21. Redner (2001).
22. Krug & Spohn (1988); Belitsky & Ferrari (1995).
23. Belitsky & Ferrari (1995).
24. Wolfram (2002, pp. 989, 1109).
References
• Alonso-Sanz, Ramon (2011). "Number-preserving rules". Discrete Systems with Memory. World Scientific series on nonlinear science, Ser. A. Vol. 75. World Scientific. pp. 55–57. ISBN 9789814343633.
• Belitsky, Vladimir; Ferrari, Pablo A. (1995). "Ballistic annihilation and deterministic surface growth". Journal of Statistical Physics. 80 (3–4): 517–543. Bibcode:1995JSP....80..517B. CiteSeerX 10.1.1.4.7901. doi:10.1007/BF02178546. S2CID 16293185.
• Biham, Ofer; Middleton, A. Alan; Levine, Dov (1992). "Self-organization and a dynamic transition in traffic-flow models". Physical Review A. 46 (10): R6124–R6127. arXiv:cond-mat/9206001. Bibcode:1992PhRvA..46.6124B. doi:10.1103/PhysRevA.46.R6124. PMID 9907993. S2CID 14543020.
• Boccara, Nino; Fukś, Henryk (1998). "Cellular automaton rules conserving the number of active sites". Journal of Physics A: Mathematical and General. 31 (28): 6007–6018. arXiv:adap-org/9712003. Bibcode:1998JPhA...31.6007B. doi:10.1088/0305-4470/31/28/014. S2CID 14807539.
• Capcarrere, Mathieu S.; Sipper, Moshe; Tomassini, Marco (1996). "Two-state, r = 1 cellular automaton that classifies density" (PDF). Physical Review Letters. 77 (24): 4969–4971. Bibcode:1996PhRvL..77.4969C. doi:10.1103/PhysRevLett.77.4969. PMID 10062680.
• Chopard, Bastien; Droz, Michel (1998). Cellular Automata Modeling of Physical Systems. Cambridge University Press. ISBN 978-0-521-67345-7.
• Chowdhury, Debashish; Santen, Ludger; Schadschneider, Andreas (2000). "Statistical physics of vehicular traffic and some related systems". Physics Reports. 329 (4): 199–329. arXiv:cond-mat/0007053. Bibcode:2000PhR...329..199C. doi:10.1016/S0370-1573(99)00117-9. S2CID 119526662.
• Fukś, Henryk (1997). "Solution of the density classification problem with two similar cellular automata rules". Physical Review E. 55 (3): R2081–R2084. arXiv:comp-gas/9703001. Bibcode:1997PhRvE..55.2081F. doi:10.1103/PhysRevE.55.R2081. S2CID 118954791.
• Fukś, Henryk; Boccara, Nino (1998). "Generalized deterministic traffic rules" (PDF). International Journal of Modern Physics C. 9 (1): 1–12. arXiv:adap-org/9705003. Bibcode:1998IJMPC...9....1F. doi:10.1142/S0129183198000029. S2CID 119938282. Archived from the original (PDF) on 27 September 2007.
• Fukui, M.; Ishibashi, Y. (1996). "Traffic flow in 1D cellular automaton model including cars moving with high speed". Journal of the Physical Society of Japan. 65 (6): 1868–1870. Bibcode:1996JPSJ...65.1868F. doi:10.1143/JPSJ.65.1868.
• Gaylord, Richard J.; Nishidate, Kazume (1996). "Traffic Flow". Modeling Nature: Cellular Automata Simulations with Mathematica. Springer-Verlag. pp. 29–34. ISBN 978-0-387-94620-7.
• Krug, J.; Spohn, H. (1988). "Universality classes for deterministic surface growth". Physical Review A. 38 (8): 4271–4283. Bibcode:1988PhRvA..38.4271K. doi:10.1103/PhysRevA.38.4271. PMID 9900880.
• Land, Mark; Belew, Richard (1995). "No perfect two-state cellular automata for density classification exists". Physical Review Letters. 74 (25): 5148–5150. Bibcode:1995PhRvL..74.5148L. doi:10.1103/PhysRevLett.74.5148. PMID 10058695.
• Li, Wentian (1987). "Power spectra of regular languages and cellular automata" (PDF). Complex Systems. 1: 107–130. Archived from the original (PDF) on 2007-10-07.
• Li, Wentian (1992). "Phenomenology of nonlocal cellular automata". Journal of Statistical Physics. 68 (5–6): 829–882. Bibcode:1992JSP....68..829L. CiteSeerX 10.1.1.590.1708. doi:10.1007/BF01048877. S2CID 17337112.
• Maerivoet, Sven; De Moor, Bart (2005). "Cellular automata models of road traffic". Physics Reports. 419 (1): 1–64. arXiv:physics/0509082. Bibcode:2005PhR...419....1M. doi:10.1016/j.physrep.2005.08.005. S2CID 41394950.
• Moreira, Andres (2003). "Universality and decidability of number-conserving cellular automata". Theoretical Computer Science. 292 (3): 711–721. arXiv:nlin.CG/0306032. Bibcode:2003nlin......6032M. doi:10.1016/S0304-3975(02)00065-8. S2CID 14909462.
• Nagel, Kai (1996). "Particle hopping models and traffic flow theory". Physical Review E. 53 (5): 4655–4672. arXiv:cond-mat/9509075. Bibcode:1996PhRvE..53.4655N. doi:10.1103/PhysRevE.53.4655. PMID 9964794. S2CID 20466753.
• Nagel, Kai; Schreckenberg, Michael (1992). "A cellular automaton model for freeway traffic". Journal de Physique I. 2 (12): 2221–2229. Bibcode:1992JPhy1...2.2221N. doi:10.1051/jp1:1992277. S2CID 37135830.
• Pivato, M. (2007). "Defect particle kinematics in one-dimensional cellular automata". Theoretical Computer Science. 377 (1–3): 205–228. arXiv:math.DS/0506417. doi:10.1016/j.tcs.2007.03.014. S2CID 12650387.
• Redner, Sidney (2001). "8.5 Ballistic Annihilation". A Guide to First-Passage Processes. Cambridge University Press. p. 288. ISBN 9780521652483.
• Sukumar, N. (1998). "Effect of boundary conditions on cellular automata that classify density". arXiv:comp-gas/9804001.
• Tadaki, Shin-ichi; Kikuchi, Macato (1994). "Jam phases in a two-dimensional cellular automaton model of traffic flow". Physical Review E. 50 (6): 4564–4570. arXiv:patt-sol/9409004. Bibcode:1994PhRvE..50.4564T. doi:10.1103/PhysRevE.50.4564. PMID 9962535. S2CID 17516156.
• Wang, Bing-Hong; Kwong, Yvonne-Roamy; Hui, Pak-Ming (1998). "Statistical mechanical approach to Fukui-Ishibashi traffic flow models". Physical Review E. 57 (3): 2568–2573. Bibcode:1998PhRvE..57.2568W. doi:10.1103/PhysRevE.57.2568.
• Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media.
External links
• Rule 184 in Wolfram's atlas of cellular automata
Structural rule
In proof theory, a structural rule is an inference rule that does not refer to any logical connective, but instead operates on the judgment or sequents directly. Structural rules often mimic intended meta-theoretic properties of the logic. Logics that deny one or more of the structural rules are classified as substructural logics.
Common structural rules
Three common structural rules are:
• Weakening, where the hypotheses or conclusion of a sequent may be extended with additional members. In symbolic form weakening rules can be written as ${\frac {\Gamma \vdash \Sigma }{\Gamma ,A\vdash \Sigma }}$ on the left of the turnstile, and ${\frac {\Gamma \vdash \Sigma }{\Gamma \vdash \Sigma ,A}}$ on the right.
• Contraction, where two equal (or unifiable) members on the same side of a sequent may be replaced by a single member (or common instance). Symbolically: ${\frac {\Gamma ,A,A\vdash \Sigma }{\Gamma ,A\vdash \Sigma }}$ and ${\frac {\Gamma \vdash A,A,\Sigma }{\Gamma \vdash A,\Sigma }}$. Also known as factoring in automated theorem proving systems using resolution. Known as idempotency of entailment in classical logic.
• Exchange, where two members on the same side of a sequent may be swapped. Symbolically: ${\frac {\Gamma _{1},A,\Gamma _{2},B,\Gamma _{3}\vdash \Sigma }{\Gamma _{1},B,\Gamma _{2},A,\Gamma _{3}\vdash \Sigma }}$ and ${\frac {\Gamma \vdash \Sigma _{1},A,\Sigma _{2},B,\Sigma _{3}}{\Gamma \vdash \Sigma _{1},B,\Sigma _{2},A,\Sigma _{3}}}$. (This is also known as the permutation rule.)
A logic without any of the above structural rules would interpret the sides of a sequent as pure sequences; with exchange, they are multisets; and with both contraction and exchange they are sets.
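This sequence/multiset/set reading can be made concrete with a small sketch (Python here purely as illustration; the judgments themselves are not code). Two contexts that differ as sequences may coincide once exchange, or exchange together with contraction, is admitted:

```python
from collections import Counter

# One side of a sequent, read three ways:
gamma1 = ("A", "A", "B")
gamma2 = ("B", "A")

# As pure sequences (no structural rules): different contexts
assert gamma1 != gamma2

# With exchange only: order is irrelevant, so compare as multisets;
# these still differ, since A occurs twice in one and once in the other
assert Counter(gamma1) != Counter(gamma2)

# With exchange and contraction: duplicates collapse, so compare as sets
assert set(gamma1) == set(gamma2) == {"A", "B"}
```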
These are not the only possible structural rules. A famous structural rule is known as cut. Considerable effort is spent by proof theorists in showing that cut rules are superfluous in various logics. More precisely, what is shown is that cut is only (in a sense) a tool for abbreviating proofs, and does not add to the theorems that can be proved. The successful 'removal' of cut rules, known as cut elimination, is directly related to the philosophy of computation as normalization (see Curry–Howard correspondence); it often gives a good indication of the complexity of deciding a given logic.
See also
• Affine logic – substructural logic whose proof theory rejects the structural rule of contraction
• Linear logic – System of resource-aware logic
• Ordered logic (linear logic)
• Relevance logic – mathematical logic system that imposes certain restrictions on implication
• Separation logic
Rule of division (combinatorics)
In combinatorics, the rule of division is a counting principle. It states that there are n/d ways to do a task if it can be done using a procedure that can be carried out in n ways, and for each way w, exactly d of the n ways correspond to the way w. In a nutshell, the division rule is a common way to ignore "unimportant" differences when counting things.[1]
Applied to Sets
In the terms of a set: "If the finite set A is the union of n pairwise disjoint subsets each with d elements, then n = |A|/d."[1]
As a function
The rule of division formulated in terms of functions: "If f is a function from A to B, where A and B are finite sets, and for every value y ∈ B there are exactly d values x ∈ A such that f (x) = y (in which case, we say that f is d-to-one), then |B| = |A|/d."[1]
Examples
Example 1
- How many different ways are there to seat four people around a circular table, where two seatings are considered the same when each person has the same left neighbor and the same right neighbor?
To solve this exercise, first pick an arbitrary seat and assign it to person 1; label the remaining seats in numerical order, clockwise around the table. There are 4 seats to choose for the first person, 3 for the second, 2 for the third, and just 1 option left for the last, so there are 4! = 24 possible seatings. However, two seatings count as the same arrangement when everyone has the same left and right neighbours, so each arrangement is produced by 4 different seatings (one for each rotation of the table).
Because each arrangement corresponds to d = 4 seatings, the division rule (n/d) gives 24/4 = 6 different seating arrangements for 4 people around the table.
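The count in Example 1 can be checked by brute force. This sketch (illustrative Python; the names are made up) enumerates all 4! seatings and collapses rotations, confirming the division-rule answer:

```python
from itertools import permutations
from math import factorial

people = ("Ann", "Bob", "Cal", "Dee")

def canonical(seating):
    """Rotate the seating so that the first person sits in seat 1.
    Rotations preserve everyone's left and right neighbours."""
    i = seating.index(people[0])
    return seating[i:] + seating[:i]

distinct = {canonical(p) for p in permutations(people)}

# Division rule: n = 4! seatings, d = 4 rotations per arrangement
assert len(distinct) == factorial(4) // 4 == 6
```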
Example 2
- We have 6 coloured bricks in total, 4 red and 2 white. In how many distinguishable ways can we arrange them in a row?
If all six bricks had different colours, there would be 6! = 720 ways to arrange them. Since bricks of the same colour are indistinguishable, we divide by the number of ways to reorder each colour among itself:
The 4 red bricks can be reordered in 4! = 24 ways.
The 2 white bricks can be reordered in 2! = 2 ways.
Total arrangements of 4 red and 2 white bricks = 6!/(4! · 2!) = 720/48 = 15.
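Example 2 can likewise be verified directly (a small Python sketch comparing brute force against the division-rule formula):

```python
from itertools import permutations
from math import factorial

# Brute force: the distinct orderings of four red (R) and two white (W) bricks
arrangements = set(permutations("RRRRWW"))

# Division rule: 6! orderings, each arrangement counted 4! * 2! times
by_division = factorial(6) // (factorial(4) * factorial(2))

assert len(arrangements) == by_division == 15
```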
See also
• Combinatorial principles
Notes
1. Rosen 2012, pp. 385–386
References
• Rosen, Kenneth H (2012). Discrete Mathematics and Its Applications. McGraw-Hill Education. ISBN 978-0077418939.
Further reading
• Lehman, Eric; Leighton, F Thomson; Meyer, Albert R; Mathematics for Computer Science, 2018. https://courses.csail.mit.edu/6.042/spring18/mcs.pdf
Digit sum
In mathematics, the digit sum of a natural number in a given number base is the sum of all its digits. For example, the digit sum of the decimal number $9045$ would be $9+0+4+5=18.$
Definition
Let $n$ be a natural number. We define the digit sum for base $b>1$, $F_{b}:\mathbb {N} \rightarrow \mathbb {N} $ to be the following:
$F_{b}(n)=\sum _{i=0}^{k}d_{i}$
where $k=\lfloor \log _{b}{n}\rfloor $ is one less than the number of digits in the number in base $b$, and
$d_{i}={\frac {n{\bmod {b^{i+1}}}-n{\bmod {b^{i}}}}{b^{i}}}$
is the value of each digit of the number.
For example, in base 10, the digit sum of 84001 is $F_{10}(84001)=8+4+0+0+1=13.$
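The definition amounts to summing the base-b digits one at a time, which can be sketched as follows (illustrative Python, not from the cited sources):

```python
def digit_sum(n, b=10):
    """F_b(n): the sum of the base-b digits of the natural number n."""
    total = 0
    while n > 0:
        total += n % b   # the lowest remaining digit
        n //= b
    return total

assert digit_sum(9045) == 18
assert digit_sum(84001) == 13
assert digit_sum(0b10110, 2) == 3   # binary digit sum = Hamming weight
```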
For any two bases $2\leq b_{1}<b_{2}$ and for sufficiently large natural numbers $n,$
$\sum _{k=0}^{n}F_{b_{1}}(k)<\sum _{k=0}^{n}F_{b_{2}}(k).$[1]
The sum of the base 10 digits of the integers 0, 1, 2, ... is given by OEIS: A007953 in the On-Line Encyclopedia of Integer Sequences. Borwein & Borwein (1992) use the generating function of this integer sequence (and of the analogous sequence for binary digit sums) to derive several rapidly converging series with rational and transcendental sums.[2]
Extension to negative integers
The digit sum can be extended to the negative integers by use of a signed-digit representation to represent each integer.
Applications
The concept of a decimal digit sum is closely related to, but not the same as, the digital root, which is the result of repeatedly applying the digit sum operation until the remaining value is only a single digit. The digital root of any non-zero integer will be a number in the range 1 to 9, whereas the digit sum can take any value. Digit sums and digital roots can be used for quick divisibility tests: a natural number is divisible by 3 or 9 if and only if its digit sum (or digital root) is divisible by 3 or 9, respectively. For divisibility by 9, this test is called the rule of nines and is the basis of the casting out nines technique for checking calculations.
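The relation between digit sum, digital root, and divisibility by 9 can be checked with a short sketch (illustrative Python; the closed form 1 + (n − 1) mod 9 is the standard identity for the digital root of a positive integer):

```python
def digital_root(n):
    """Apply the decimal digit sum repeatedly until one digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# Rule of nines: a positive n is divisible by 9 exactly when its digital root is 9
assert digital_root(9045) == 9 and 9045 % 9 == 0
assert digital_root(84001) == 4 and 84001 % 9 == 4

# Closed form for positive n
assert all(digital_root(n) == 1 + (n - 1) % 9 for n in range(1, 1000))
```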
Digit sums are also a common ingredient in checksum algorithms, and were used to check the arithmetic operations of early computers.[3] Earlier, in an era of hand calculation, Edgeworth (1888) suggested using sums of 50 digits taken from mathematical tables of logarithms as a form of random number generation; if one assumes that each digit is random, then by the central limit theorem, these digit sums will have a random distribution closely approximating a Gaussian distribution.[4]
The digit sum of the binary representation of a number is known as its Hamming weight or population count; algorithms for performing this operation have been studied, and it has been included as a built-in operation in some computer architectures and some programming languages. These operations are used in computing applications including cryptography, coding theory, and computer chess.
Harshad numbers are defined in terms of divisibility by their digit sums, and Smith numbers are defined by the equality of their digit sums with the digit sums of their prime factorizations.
See also
• Arithmetic dynamics
• Casting out nines
• Checksum
• Digital root
• Hamming weight
• Harshad number
• Perfect digital invariant
• Sideways sum
• Smith number
• Sum-product number
References
1. Bush, L. E. (1940), "An asymptotic formula for the average sum of the digits of integers", American Mathematical Monthly, Mathematical Association of America, 47 (3): 154–156, doi:10.2307/2304217, JSTOR 2304217.
2. Borwein, J. M.; Borwein, P. B. (1992), "Strange series and high precision fraud" (PDF), American Mathematical Monthly, 99 (7): 622–640, doi:10.2307/2324993, hdl:1959.13/1043650, JSTOR 2324993.
3. Bloch, R. M.; Campbell, R. V. D.; Ellis, M. (1948), "The Logical Design of the Raytheon Computer", Mathematical Tables and Other Aids to Computation, American Mathematical Society, 3 (24): 286–295, doi:10.2307/2002859, JSTOR 2002859.
4. Edgeworth, F. Y. (1888), "The Mathematical Theory of Banking" (PDF), Journal of the Royal Statistical Society, 51 (1): 113–127, archived from the original (PDF) on 2006-09-13.
External links
• Weisstein, Eric W. "Digit Sum". MathWorld.
• Simple applications of digit sum
Rule of twelfths
The rule of twelfths is an approximation to a sine curve. It can be used as a rule of thumb for estimating a changing quantity where both the quantity and the steps are easily divisible by 12. Typical uses are predicting the height of the tide or the change in day length over the seasons.
The rule
The rule states that over the first period the quantity increases by 1/12. Then in the second period by 2/12, in the third by 3/12, in the fourth by 3/12, fifth by 2/12 and at the end of the sixth period reaches its maximum with an increase of 1/12. The steps are 1:2:3:3:2:1 giving a total change of 12/12. Over the next six intervals the quantity reduces in a similar manner by 1, 2, 3, 3, 2, 1 twelfths.
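The 1:2:3:3:2:1 steps can be compared directly against the sine curve they approximate; a short numerical sketch (Python, for illustration):

```python
import math

# cumulative fraction of the total change after k of 6 periods
rule = [0, 1, 3, 6, 9, 11, 12]  # cumulative twelfths from the 1,2,3,3,2,1 steps
for k in range(7):
    exact = (1 - math.cos(math.radians(30 * k))) / 2  # value on the sine curve
    print(f"period {k}: rule {rule[k]/12:.5f}  exact {exact:.5f}")
```

At the halfway point (k = 3) the rule and the sine curve agree exactly; elsewhere the rule is off by at most about 0.017 of the total range.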
Period | Values | Increment | Cumulative
1 | Rule | 1/12 = 0.08333 | 1/12 = 0.08333
1 | Actual | (cos 0° − cos 30°)/2 = 0.06699 | (1 − cos 30°)/2 = 0.06699
2 | Rule | 2/12 = 0.16667 | 3/12 = 0.25
2 | Actual | (cos 30° − cos 60°)/2 = 0.18301 | (1 − cos 60°)/2 = 0.25
3 | Rule | 3/12 = 0.25 | 6/12 = 0.5
3 | Actual | (cos 60° − cos 90°)/2 = 0.25 | (1 − cos 90°)/2 = 0.5
4 | Rule | 3/12 = 0.25 | 9/12 = 0.75
4 | Actual | (cos 90° − cos 120°)/2 = 0.25 | (1 − cos 120°)/2 = 0.75
5 | Rule | 2/12 = 0.16667 | 11/12 = 0.91667
5 | Actual | (cos 120° − cos 150°)/2 = 0.18301 | (1 − cos 150°)/2 = 0.93301
6 | Rule | 1/12 = 0.08333 | 12/12 = 1
6 | Actual | (cos 150° − cos 180°)/2 = 0.06699 | (1 − cos 180°)/2 = 1
Applications
In many parts of the world the tides approximate a semi-diurnal sine curve; that is, there are two high and two low tides per day. Each period then equates to one hour, with the tide rising by 1, 2, 3, 3, 2 and finally 1 twelfths of its total range in successive hours, from low tide to high tide in about six hours; the tide then falls by the same pattern over the next six hours, back to low tide. In places where there is only one high and one low water per day, the rule can be used by taking each step to be two hours. If the tidal curve does not approximate a sine wave, the rule cannot be used.[1][2] This is important when navigating a boat or a ship in shallow water, and when launching and retrieving boats on slipways on a tidal shore.[3]
The rule is also useful for estimating the monthly change in sunrise and sunset and thus day length.[4]
Example calculations
Tides
If a tide table gives the information that tomorrow's low water would be at noon and that the water level at this time would be two metres above chart datum, and that at the following high tide the water level would be 14 metres, then the height of water at 3:00 p.m. can be calculated as follows:
• The total increase in water level between low and high tide would be: 14 - 2 = 12 metres.
• In the first hour the water level would rise by 1 twelfth of the total (12 m) or: 1 m
• In the second hour the water level would rise by another 2 twelfths of the total (12 m) or: 2 m
• In the third hour the water level would rise by another 3 twelfths of the total (12 m) or: 3 m
• This gives the increase in the water level by 3:00 p.m. as 6 metres.
This represents only the increase - the total depth of the water (relative to chart datum) will include the 2 m depth at low tide: 6 m + 2 m = 8 metres.
The calculation can be simplified by adding twelfths together and reducing the fraction beforehand:
Rise of tide in three hours $=\left({1 \over 12}+{2 \over 12}+{3 \over 12}\right)\times 12\ \mathrm {m} =\left({6 \over 12}\right)\times 12\ \mathrm {m} =\left({1 \over 2}\right)\times 12\ \mathrm {m} =6\ \mathrm {m} $
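The worked example above can be written as a small helper (a sketch; the function name and interface are illustrative):

```python
def tide_height(low, high, hours_after_low):
    """Estimate water height by the rule of twelfths.

    hours_after_low: whole hours since low water (0..6).
    """
    twelfths = [0, 1, 3, 6, 9, 11, 12]  # cumulative rise after each hour
    return low + (high - low) * twelfths[hours_after_low] / 12

assert tide_height(2, 14, 3) == 8.0   # the worked example: 8 metres
assert tide_height(2, 14, 6) == 14.0  # high water after six hours
```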
Daylength
If midwinter sunrise and sunset are at 09:00 and 15:00, and midsummer at 03:00 and 21:00, then over the six months from one solstice to the other the sunrise and sunset times each shift by 0:30, 1:00, 1:30, 1:30, 1:00 and 0:30 in successive months, so the day length changes by twice those amounts: 1:00, 2:00, 3:00, 3:00, 2:00 and 1:00 per month. More equatorial latitudes change by less, but still in the same proportions; more polar by more.
Caveats
The rule is a rough approximation only and should be applied with great caution when used for navigational purposes. Officially produced tide tables should be used in preference whenever possible.
The rule assumes that all tides behave in a regular manner. This is not true of some geographical locations, such as Poole Harbour[5] or the Solent,[6] where there are "double" high waters, or Weymouth Bay,[5] where there is a double low water.
The rule assumes that the period between high and low tides is six hours, but this is an underestimate (the mean interval is closer to six hours and a quarter) and it varies from place to place and from day to day.
References
1. "Rule of Twelfths for quick tidal estimates". DIY Wood Boat. Retrieved 19 December 2017.
2. Getchell, David R. (1994). The Outboard Boater's Handbook: Advanced Seamanship and Practical Skills. International Marine. p. 195. ISBN 978-0-07-023053-8.
3. Sweet, Robert J. (16 September 2004). The Weekend Navigator: Simple Boat Navigation with GPS and Electronics. p. 162. ISBN 978-0-07-143035-7.
4. McAdam, Marcus. "The Rule of Twelfths". Mc2Photography.com. Retrieved 2021-03-11. The same Rule of Twelfths can be applied to the duration of the days.
5. Heritage, Trevor. "Poole Harbour and its tides" (PDF). Shrimperowners. Retrieved 19 December 2017.
6. Ridge, M J, FRICS MCIT. "English Channel double tides". Bristol Nomads windsurfing club. Archived from the original on 22 August 2009. Retrieved 19 December 2017.{{cite web}}: CS1 maint: multiple names: authors list (link)
Cayley's ruled cubic surface
In differential geometry, Cayley's ruled cubic surface is the ruled cubic surface
$x^{3}+(4x\,z+y)x=0.\ $
Not to be confused with Cayley's nodal cubic surface.
It contains a nodal line of self-intersection and two cuspidal points at infinity.[1]
In projective coordinates it is $x^{3}+(4x\,z+y\,w)x=0.\ $
References
1. "Ruled Cubics | Mathematical Institute". www.maths.ox.ac.uk. Retrieved 2020-08-08.
External links
• Cubical ruled surface
• Weisstein, Eric W. "Cayley Surface". MathWorld.
Regulated function
In mathematics, a regulated function, or ruled function, is a certain kind of well-behaved function of a single real variable. Regulated functions arise as a class of integrable functions, and have several equivalent characterisations. Regulated functions were introduced by Nicolas Bourbaki in 1949, in their book "Livre IV: Fonctions d'une variable réelle".
Definition
Let X be a Banach space with norm || · ||X. A function f : [0, T] → X is said to be a regulated function if one (and hence both) of the following two equivalent conditions holds true:[1]
• for every t in the interval [0, T], both the left and right limits f(t−) and f(t+) exist in X (apart from, obviously, f(0−) and f(T+));
• there exists a sequence of step functions φn : [0, T] → X converging uniformly to f (i.e. with respect to the supremum norm || · ||∞).
It requires a little work to show that these two conditions are equivalent. However, it is relatively easy to see that the second condition may be re-stated in the following equivalent ways:
• for every δ > 0, there is some step function φδ : [0, T] → X such that
$\|f-\varphi _{\delta }\|_{\infty }=\sup _{t\in [0,T]}\|f(t)-\varphi _{\delta }(t)\|_{X}<\delta ;$
• f lies in the closure of the space Step([0, T]; X) of all step functions from [0, T] into X (taking closure with respect to the supremum norm in the space B([0, T]; X) of all bounded functions from [0, T] into X).
Properties of regulated functions
Let Reg([0, T]; X) denote the set of all regulated functions f : [0, T] → X.
• Sums and scalar multiples of regulated functions are again regulated functions. In other words, Reg([0, T]; X) is a vector space over the same field K as the space X; typically, K will be the real or complex numbers. If X is equipped with an operation of multiplication, then products of regulated functions are again regulated functions. In other words, if X is a K-algebra, then so is Reg([0, T]; X).
• The supremum norm is a norm on Reg([0, T]; X), and Reg([0, T]; X) is a topological vector space with respect to the topology induced by the supremum norm.
• As noted above, Reg([0, T]; X) is the closure in B([0, T]; X) of Step([0, T]; X) with respect to the supremum norm.
• If X is a Banach space, then Reg([0, T]; X) is also a Banach space with respect to the supremum norm.
• Reg([0, T]; R) forms an infinite-dimensional real Banach algebra: finite linear combinations and products of regulated functions are again regulated functions.
• Since a continuous function defined on a compact space (such as [0, T]) is automatically uniformly continuous, every continuous function f : [0, T] → X is also regulated. In fact, with respect to the supremum norm, the space C0([0, T]; X) of continuous functions is a closed linear subspace of Reg([0, T]; X).
• If X is a Banach space, then the space BV([0, T]; X) of functions of bounded variation forms a dense linear subspace of Reg([0, T]; X):
$\mathrm {Reg} ([0,T];X)={\overline {\mathrm {BV} ([0,T];X)}}{\mbox{ w.r.t. }}\|\cdot \|_{\infty }.$
• If X is a Banach space, then a function f : [0, T] → X is regulated if and only if it is of bounded φ-variation for some φ:
$\mathrm {Reg} ([0,T];X)=\bigcup _{\varphi }\mathrm {BV} _{\varphi }([0,T];X).$
• If X is a separable Hilbert space, then Reg([0, T]; X) satisfies a compactness theorem known as the Fraňková–Helly selection theorem.
• The set of discontinuities of a regulated function (in particular, of a function of bounded variation) is countable, since such functions have only jump-type discontinuities. To see this it is sufficient to note that, given $\epsilon >0$, the set of points at which the right and left limits differ by more than $\epsilon $ is finite. In particular, the discontinuity set has measure zero, from which it follows that a regulated function has a well-defined Riemann integral.
• Remark: By the Baire category theorem, the set of points of discontinuity of such a function, an $F_{\sigma }$ set, is either meagre or else has nonempty interior. This is not always equivalent to countability.[2]
• The integral, as defined on step functions in the obvious way, extends naturally to Reg([0, T]; X) by defining the integral of a regulated function to be the limit of the integrals of any sequence of step functions converging uniformly to it. This extension is well-defined and satisfies all of the usual properties of an integral. In particular, the regulated integral
• is a bounded linear function from Reg([0, T]; X) to X; hence, in the case X = R, the integral is an element of the space that is dual to Reg([0, T]; R);
• agrees with the Riemann integral.
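The construction in the last property can be illustrated numerically: approximate a regulated function by step functions on finer and finer partitions and integrate those. A minimal sketch, assuming midpoint sampling and X = R:

```python
import numpy as np

def regulated_integral(f, T, n):
    """Integral of the step function that samples f at the midpoint
    of each of n equal pieces of [0, T]; converges to the regulated
    integral of f as n grows."""
    ts = (np.arange(n) + 0.5) * T / n
    return float(np.sum(f(ts)) * T / n)

# a regulated function with a jump at t = 1: f(t) = t, then f(t) = 3
f = lambda t: np.where(t < 1.0, t, 3.0)
# exact value: integral of t on [0,1] plus 3 on [1,2] = 0.5 + 3.0
assert abs(regulated_integral(f, 2.0, 10_000) - 3.5) < 1e-6
```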
References
1. Dieudonné 1969, §7.6
2. Stackexchange discussion
• Aumann, Georg (1954), Reelle Funktionen, Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete, Bd LXVIII (in German), Berlin: Springer-Verlag, pp. viii+416 MR0061652
• Dieudonné, Jean (1969), Foundations of Modern Analysis, Academic Press, pp. xviii+387 MR0349288
• Fraňková, Dana (1991), "Regulated functions", Math. Bohem., 116 (1): 20–59, ISSN 0862-7959 MR1100424
• Gordon, Russell A. (1994), The Integrals of Lebesgue, Denjoy, Perron, and Henstock, Graduate Studies in Mathematics, 4, Providence, RI: American Mathematical Society, pp. xii+395, ISBN 0-8218-3805-9 MR1288751
• Lang, Serge (1985), Differential Manifolds (Second ed.), New York: Springer-Verlag, pp. ix+230, ISBN 0-387-96113-5 MR772023
External links
• "How to show that a set of discontinuous points of an increasing function is at most countable". Stack Exchange. November 23, 2011.
• "Bounded variation functions have jump-type discontinuities". Stack Exchange. November 28, 2013.
• "How discontinuous can a derivative be?". Stack Exchange. February 22, 2012.
Ruled join
In algebraic geometry, given irreducible subvarieties V, W of a projective space Pn, the ruled join of V and W is the union of all lines from V to W in P2n+1, where V, W are embedded into P2n+1 so that the last (resp. first) n + 1 coordinates on V (resp. W) vanish.[1] It is denoted by J(V, W). For example, if V and W are linear subspaces, then their join is their linear span, the smallest linear subspace containing them.
The join of several subvarieties is defined in a similar way.
See also
• Secant variety
References
1. Fulton 1998, Example 8.4.5.
• Dickenstein, Alicia; Schreyer, Frank-Olaf; Sommese, Andrew J. (2010-07-10). Algorithms in Algebraic Geometry. Springer Science & Business Media. ISBN 9780387751559.
• Fulton, William (1998), Intersection theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol. 2 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-62046-4, MR 1644323
• Flenner, H.; O'Carroll, L.; Vogel, W. (29 June 2013). Joins and Intersections. ISBN 9783662038178.
• Russo, Francesco. "Geometry of Special Varieties" (PDF). University of Catania. Retrieved 7 March 2018.
Ruled surface
In geometry, a surface S is ruled (also called a scroll) if through every point of S there is a straight line that lies on S. Examples include the plane, the lateral surface of a cylinder or cone, a conical surface with elliptical directrix, the right conoid, the helicoid, and the tangent developable of a smooth curve in space.
A ruled surface can be described as the set of points swept by a moving straight line. For example, a cone is formed by keeping one point of a line fixed whilst moving another point along a circle. A surface is doubly ruled if through every one of its points there are two distinct lines that lie on the surface. The hyperbolic paraboloid and the hyperboloid of one sheet are doubly ruled surfaces. The plane is the only surface which contains at least three distinct lines through each of its points (Fuchs & Tabachnikov 2007).
The properties of being ruled or doubly ruled are preserved by projective maps, and therefore are concepts of projective geometry. In algebraic geometry, ruled surfaces are sometimes considered to be surfaces in affine or projective space over a field, but they are also sometimes considered as abstract algebraic surfaces without an embedding into affine or projective space, in which case "straight line" is understood to mean an affine or projective line.
Definition and parametric representation
A two dimensional differentiable manifold is called a ruled surface if it is the union of one parametric family of lines. The lines of this family are the generators of the ruled surface.
A ruled surface can be described by a parametric representation of the form
• (CR) $\quad \mathbf {x} (u,v)={\color {red}\mathbf {c} (u)}+v\;{\color {blue}\mathbf {r} (u)}\ ,\ v\in \mathbb {R} \ ,$.
Any curve $\;v\mapsto \mathbf {x} (u_{0},v)\;$ with fixed parameter $u=u_{0}$ is a generator (line) and the curve $\;u\mapsto \mathbf {c} (u)\;$ is the directrix of the representation. The vectors $\;\mathbf {r} (u)\neq {\bf {0\;}}$ describe the directions of the generators.
The directrix may collapse to a point (in case of a cone, see example below).
Alternatively the ruled surface (CR) can be described by
• (CD) $\quad \mathbf {x} (u,v)=(1-v)\;{\color {red}\mathbf {c} (u)}+v\;{\color {green}\mathbf {d} (u)}\ $
with the second directrix $\;\mathbf {d} (u)=\mathbf {c} (u)+\mathbf {r} (u)\;$.
Alternatively, one can start with two non-intersecting curves $\mathbf {c} (u),\mathbf {d} (u)$ as directrices and obtain, by (CD), a ruled surface with line directions $\;\mathbf {r} (u)=\mathbf {d} (u)-\mathbf {c} (u)\ .$
For the generation of a ruled surface by two directrices (or one directrix and the vectors of line directions), not only the geometric shape of these curves is essential: the particular parametric representations chosen also influence the shape of the ruled surface (see examples a), d)).
For theoretical investigations representation (CR) is more advantageous, because the parameter $v$ appears only once.
Examples
Right circular cylinder
$\ x^{2}+y^{2}=a^{2}\ $:
$\mathbf {x} (u,v)=(a\cos u,a\sin u,v)^{T}$
$={\color {red}(a\cos u,a\sin u,0)^{T}}\;+\;v\;{\color {blue}(0,0,1)^{T}}$
$=(1-v)\;{\color {red}(a\cos u,a\sin u,0)^{T}}\;+\;v\;{\color {green}(a\cos u,a\sin u,1)^{T}}\ .$
with
$\mathbf {c} (u)=(a\cos u,a\sin u,0)^{T}\ ,\ \mathbf {r} (u)=(0,0,1)^{T}\ ,\ \mathbf {d} (u)=(a\cos u,a\sin u,1)^{T}\ .$
Right circular cone
Main article: Right circular cone
$\ x^{2}+y^{2}=z^{2}\ $:
$\mathbf {x} (u,v)=(\cos u,\sin u,1)^{T}\;+\;v\;(\cos u,\sin u,1)^{T}$
$=(1-v)\;(\cos u,\sin u,1)^{T}\;+\;v\;(2\cos u,2\sin u,2)^{T}.$
with $\quad \mathbf {c} (u)=(\cos u,\sin u,1)^{T}\;=\;\mathbf {r} (u)\ ,\quad \mathbf {d} (u)=(2\cos u,2\sin u,2)^{T}\ .$
In this case one could have used the apex as the directrix, i.e.: $\ \mathbf {c} (u)=(0,0,0)^{T}\ $ and $\ \mathbf {r} (u)=(\cos u,\sin u,1)^{T}\ $ as the line directions.
For any cone one can choose the apex as the directrix; this case shows that the directrix of a ruled surface may degenerate to a point.
Helicoid
Main article: Helicoid
$\mathbf {x} (u,v)=\;(v\cos u,v\sin u,ku)^{T}\;$
$=\;(0,0,ku)^{T}\;+\;v\;(\cos u,\sin u,0)^{T}\ $
$=\;(1-v)\;(0,0,ku)^{T}\;+\;v\;(\cos u,\sin u,ku)^{T}\ .$
The directrix $\ \mathbf {c} (u)=(0,0,ku)^{T}\;$ is the z-axis, the line directions are $\;\mathbf {r} (u)=\ (\cos u,\sin u,0)^{T}\;$ and the second directrix $\ \mathbf {d} (u)=(\cos u,\sin u,ku)^{T}\ $ is a helix.
The helicoid is a special case of the ruled generalized helicoids.
Cylinder, cone and hyperboloids
Further information: Hyperboloid
The parametric representation
$\mathbf {x} (u,v)=(1-v)\;(\cos(u-\varphi ),\sin(u-\varphi ),-1)^{T}\;+\;v\;(\cos(u+\varphi ),\sin(u+\varphi ),1)^{T}$
has two horizontal circles as directrices. The additional parameter $\varphi $ allows one to vary the parametric representations of the circles. For
$\varphi =0\ $ one gets the cylinder $x^{2}+y^{2}=1$, for
$\varphi =\pi /2\ $ one gets the cone $x^{2}+y^{2}=z^{2}$ and for
$0<\varphi <\pi /2\ $ one gets a hyperboloid of one sheet with equation $\ {\tfrac {x^{2}+y^{2}}{a^{2}}}-{\tfrac {z^{2}}{c^{2}}}=1\ $ and the semi axes $\ a=\cos \varphi \;,\;c=\cot \varphi $.
A hyperboloid of one sheet is a doubly ruled surface.
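The family above can be checked numerically: points produced by the (CD) representation with $0<\varphi <\pi /2$ satisfy the implicit equation of the one-sheet hyperboloid. A sketch (Python, for illustration):

```python
import math

def x(u, v, phi):
    """Representation (CD) with two horizontal unit circles as directrices."""
    c = (math.cos(u - phi), math.sin(u - phi), -1.0)  # lower circle
    d = (math.cos(u + phi), math.sin(u + phi),  1.0)  # upper circle
    return tuple((1 - v) * ci + v * di for ci, di in zip(c, d))

phi = math.pi / 4                            # 0 < phi < pi/2
a, c_ax = math.cos(phi), 1 / math.tan(phi)   # semi-axes a = cos(phi), c = cot(phi)
for u, v in [(0.3, 0.2), (1.0, 0.8), (2.0, -0.5)]:
    px, py, pz = x(u, v, phi)
    # implicit equation (x^2 + y^2)/a^2 - z^2/c^2 = 1 holds on every ruling
    assert abs((px**2 + py**2) / a**2 - pz**2 / c_ax**2 - 1) < 1e-9
```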
Hyperbolic paraboloid
Main article: Hyperbolic paraboloid
If the two directrices in (CD) are the lines
$\mathbf {c} (u)=(1-u)\mathbf {a} _{1}+u\mathbf {a} _{2},\quad \mathbf {d} (u)=(1-u)\mathbf {b} _{1}+u\mathbf {b} _{2}$
one gets
$\mathbf {x} (u,v)=(1-v){\big (}(1-u)\mathbf {a} _{1}+u\mathbf {a} _{2}{\big )}\ +\ v{\big (}(1-u)\mathbf {b} _{1}+u\mathbf {b} _{2}{\big )}\ $,
which is the hyperbolic paraboloid that interpolates the 4 points $\ \mathbf {a} _{1},\;\mathbf {a} _{2},\;\mathbf {b} _{1},\;\mathbf {b} _{2}\ $ bilinearly.[1]
Obviously the ruled surface is a doubly ruled surface, because any point lies on two lines of the surface.
For the example shown in the diagram:
$\ \mathbf {a} _{1}=(0,0,0)^{T},\;\mathbf {a} _{2}=(1,0,0)^{T},\;\mathbf {b} _{1}=(0,1,0)^{T},\;\mathbf {b} _{2}=(1,1,1)^{T}\ $.
The hyperbolic paraboloid has the equation $z=xy$.
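For the four points of the diagram's example, the bilinear patch can be verified to lie on $z=xy$; a short sketch:

```python
a1, a2 = (0, 0, 0), (1, 0, 0)
b1, b2 = (0, 1, 0), (1, 1, 1)

def x(u, v):
    """Bilinear patch: interpolate along each directrix, then between them."""
    c = tuple((1 - u) * p + u * q for p, q in zip(a1, a2))
    d = tuple((1 - u) * p + u * q for p, q in zip(b1, b2))
    return tuple((1 - v) * p + v * q for p, q in zip(c, d))

for u, v in [(0.25, 0.5), (0.7, 0.3), (1.0, 1.0)]:
    px, py, pz = x(u, v)
    assert abs(pz - px * py) < 1e-12  # the patch lies on z = xy
```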
Möbius strip
Main article: Möbius strip
The ruled surface
$\mathbf {x} (u,v)=\mathbf {c} (u)+v\;\mathbf {r} (u)$
with
$\mathbf {c} (u)=(\cos 2u,\sin 2u,0)^{T}\ $ (circle as directrix),
$\mathbf {r} (u)=(\cos u\cos 2u,\cos u\sin 2u,\sin u)^{T}\ ,\quad 0\leq u<\pi \ ,$
contains a Möbius strip.
The diagram shows the Möbius strip for $-0.3\leq v\leq 0.3$.
A simple calculation shows $\det(\mathbf {\dot {c}} (0)\;,\;\mathbf {\dot {r}} (0)\;,\;\mathbf {r} (0))\;\neq \;0\ $ (see next section). Hence the given realization of the Möbius strip is not developable. But developable Möbius strips do exist.[2]
Further examples
• Conoid
• Catalan surface
• Developable rollers (oloid, sphericon)
Tangent planes, developable surfaces
For the considerations below any necessary derivative is assumed to exist.
For the determination of the normal vector at a point one needs the partial derivatives of the representation $\quad \mathbf {x} (u,v)=\mathbf {c} (u)+v\;\mathbf {r} (u)$ :
$\mathbf {x} _{u}=\mathbf {\dot {c}} (u)+v\;\mathbf {\dot {r}} (u)\ $ ,$\quad \mathbf {x} _{v}=\;\mathbf {r} (u)$
Hence the normal vector is
• $\mathbf {n} =\mathbf {x} _{u}\times \mathbf {x} _{v}=\mathbf {\dot {c}} \times \mathbf {r} +v(\mathbf {\dot {r}} \times \mathbf {r} )\ .$
Because $\mathbf {n} \cdot \mathbf {r} =0$ (a mixed product with two equal vectors is always 0), vector $\mathbf {r} (u_{0})$ is a tangent vector at any point $\mathbf {x} (u_{0},v)$. The tangent planes along this line are all the same if $\mathbf {\dot {r}} \times \mathbf {r} $ is a multiple of $\mathbf {\dot {c}} \times \mathbf {r} $. This is possible only if the three vectors $\mathbf {\dot {c}} \;,\;\mathbf {\dot {r}} \;,\;\mathbf {r} \ $ lie in a plane, i.e. they are linearly dependent. The linear dependence of three vectors can be checked using their determinant:
• The tangent planes along the line $\mathbf {x} (u_{0},v)=\mathbf {c} (u_{0})+v\;\mathbf {r} (u_{0})$ are equal, if
$\det(\mathbf {\dot {c}} (u_{0})\;,\;\mathbf {\dot {r}} (u_{0})\;,\;\mathbf {r} (u_{0}))\;=\;0\ .$
The importance of this determinant condition is shown by the following statement:
• A ruled surface $\quad \mathbf {x} (u,v)=\mathbf {c} (u)+v\;\mathbf {r} (u)$ is developable into a plane if its Gaussian curvature vanishes at every point. This is exactly the case if
$\det(\mathbf {\dot {c}} \;,\;\mathbf {\dot {r}} \;,\;\mathbf {r} )\;=\;0\quad $
holds at every point.[3]
The generators of any ruled surface coalesce with one family of its asymptotic lines. For developable surfaces they also form one family of its lines of curvature. It can be shown that any developable surface is a cone, a cylinder or a surface formed by all tangents of a space curve.[4]
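The determinant condition can be checked numerically with finite-difference derivatives; a sketch comparing a cylinder (developable) with one ruling family of the one-sheet hyperboloid (not developable):

```python
import numpy as np

def det_condition(c, r, u, h=1e-6):
    """det(c'(u), r'(u), r(u)); vanishing everywhere means developable."""
    dc = (c(u + h) - c(u - h)) / (2 * h)  # central difference for c'
    dr = (r(u + h) - r(u - h)) / (2 * h)  # central difference for r'
    return float(np.linalg.det(np.column_stack([dc, dr, r(u)])))

# cylinder: circle directrix, constant vertical line direction
cyl_c = lambda u: np.array([np.cos(u), np.sin(u), 0.0])
cyl_r = lambda u: np.array([0.0, 0.0, 1.0])
# one ruling family of the hyperboloid x^2 + y^2 - z^2 = 1
hyp_c = lambda u: np.array([np.cos(u), np.sin(u), 0.0])
hyp_r = lambda u: np.array([-np.sin(u), np.cos(u), 1.0])

assert abs(det_condition(cyl_c, cyl_r, 0.7)) < 1e-6        # developable
assert abs(det_condition(hyp_c, hyp_r, 0.7) - 1.0) < 1e-3  # nonzero: not developable
```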
Application and history of developable surfaces
The determinant condition for developable surfaces is used to determine numerically developable connections between space curves (directrices). The diagram shows a developable connection between two ellipses contained in different planes (one horizontal, the other vertical) and its development.[5]
An impression of the usage of developable surfaces in Computer Aided Design (CAD) is given in Interactive design of developable surfaces[6]
A historical survey on developable surfaces can be found in Developable Surfaces: Their History and Application[7]
Ruled surfaces in algebraic geometry
In algebraic geometry, ruled surfaces were originally defined as projective surfaces in projective space containing a straight line through any given point. This immediately implies that there is a projective line on the surface through any given point, and this condition is now often used as the definition of a ruled surface: ruled surfaces are defined to be abstract projective surfaces satisfying this condition that there is a projective line through any point. This is equivalent to saying that they are birational to the product of a curve and a projective line. Sometimes a ruled surface is defined to be one satisfying the stronger condition that it has a fibration over a curve with fibers that are projective lines. This excludes the projective plane, which has a projective line through every point but cannot be written as such a fibration.
Ruled surfaces appear in the Enriques classification of projective complex surfaces, because every algebraic surface of Kodaira dimension $-\infty $ is a ruled surface (or a projective plane, if one uses the restrictive definition of ruled surface). Every minimal projective ruled surface other than the projective plane is the projective bundle of a 2-dimensional vector bundle over some curve. The ruled surfaces with base curve of genus 0 are the Hirzebruch surfaces.
Ruled surfaces in architecture
Doubly ruled surfaces are the inspiration for curved hyperboloid structures that can be built with a latticework of straight elements, namely:
• Hyperbolic paraboloids, such as saddle roofs.
• Hyperboloids of one sheet, such as cooling towers and some trash bins.
The RM-81 Agena rocket engine employed straight cooling channels that were laid out in a ruled surface to form the throat of the nozzle section.
• Hyperboloid cooling towers at Didcot Power Station, UK; the surface can be doubly ruled.
• Doubly ruled water tower with toroidal tank, by Jan Bogusławski in Ciechanów, Poland
• A hyperboloid Kobe Port Tower, Kobe, Japan, with a double ruling.
• Hyperboloid water tower, 1896 in Nizhny Novgorod.
• The gridshell of Shukhov Tower in Moscow, whose sections are doubly ruled.
• A ruled helicoid spiral staircase inside Cremona's Torrazzo.
• Village church in Selo, Slovenia: both the roof (conical) and the wall (cylindrical) are ruled surfaces.
• A hyperbolic paraboloid roof of Warszawa Ochota railway station in Warsaw, Poland.
• A ruled conical hat.
• Corrugated roof tiles ruled by parallel lines in one direction, and sinusoidal in the perpendicular direction
• Construction of a planar surface by ruling (screeding) concrete
References
1. G. Farin: Curves and Surfaces for Computer Aided Geometric Design, Academic Press, 1990, ISBN 0-12-249051-7, p. 250
2. W. Wunderlich: Über ein abwickelbares Möbiusband, Monatshefte für Mathematik 66, 1962, S. 276-289.
3. W. Kühnel: Differentialgeometrie, p. 58–60
4. G. Farin: p. 380
5. E. Hartmann: Geometry and Algorithms for CAD, lecture note, TU Darmstadt, p. 113
6. Tang, Bo, Wallner, Pottmann: Interactive design of developable surfaces, ACM Trans. Graph. (2015), DOI: 10.1145/2832906
7. Snezana Lawrence: Developable Surfaces: Their History and Application, in Nexus Network Journal 13(3) · October 2011, doi:10.1007/s00004-011-0087-z
• Do Carmo, Manfredo P. : Differential Geometry of Curves and Surfaces, Prentice-Hall; 1 edition, 1976 ISBN 978-0132125895
• Barth, Wolf P.; Hulek, Klaus; Peters, Chris A.M.; Van de Ven, Antonius (2004), Compact Complex Surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol. 4, Springer-Verlag, Berlin, doi:10.1007/978-3-642-57739-0, ISBN 978-3-540-00832-3, MR 2030225
• Beauville, Arnaud (1996), Complex algebraic surfaces, London Mathematical Society Student Texts, vol. 34 (2nd ed.), Cambridge University Press, doi:10.1017/CBO9780511623936, ISBN 978-0-521-49510-3, MR 1406314
• Edge, W. L. (1931), The Theory of Ruled Surfaces, Cambridge University Press – via Internet Archive. Review: Bulletin of the American Mathematical Society 37 (1931), 791-793, doi:10.1090/S0002-9904-1931-05248-4
• Fuchs, D.; Tabachnikov, Serge (2007), "16.5 There are no non-planar triply ruled surfaces", Mathematical Omnibus: Thirty Lectures on Classic Mathematics, American Mathematical Society, p. 228, ISBN 9780821843161.
• Li, Ta-chʻien (ed.) (2011), Problems and Solutions in Mathematics, 3103 (2nd ed.), World Scientific Publishing Company, ISBN 9789810234805 {{citation}}: |first= has generic name (help).
• Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, ISBN 978-0-8284-1087-8.
• Iskovskikh, V.A. (2001) [1994], "Ruled surface", Encyclopedia of Mathematics, EMS Press
• Sharp, John (2008), D-Forms: surprising new 3-D forms from flat curved shapes, Tarquin, ISBN 978-1-899618-87-3. Review: Séquin, Carlo H. (2009), Journal of Mathematics and the Arts 3: 229–230, doi:10.1080/17513470903332913
External links
• Weisstein, Eric W. "Ruled Surface". MathWorld.
• Ruled surface pictures from the University of Arizona
• Examples of developable surfaces on the Rhino3DE website
Straightedge and compass construction
In geometry, straightedge-and-compass construction – also known as ruler-and-compass construction, Euclidean construction, or classical construction – is the construction of lengths, angles, and other geometric figures using only an idealized ruler and a pair of compasses.
The idealized ruler, known as a straightedge, is assumed to be infinite in length, have only one edge, and no markings on it. The compass is assumed to have no maximum or minimum radius, and is assumed to "collapse" when lifted from the page, so may not be directly used to transfer distances. (This is an unimportant restriction since, using a multi-step procedure, a distance can be transferred even with a collapsing compass; see compass equivalence theorem. Note however that whilst a non-collapsing compass held against a straightedge might seem to be equivalent to marking it, the neusis construction is still impermissible and this is what unmarked really means: see Markable rulers below.) More formally, the only permissible constructions are those granted by the first three postulates of Euclid's Elements.
Every point constructible using straightedge and compass may also be constructed using a compass alone, or using a straightedge alone if given a single circle and its center.
Ancient Greek mathematicians first conceived straightedge-and-compass constructions, and a number of ancient problems in plane geometry impose this restriction. The ancient Greeks developed many constructions, but in some cases were unable to do so. Gauss showed that some polygons are constructible but that most are not. Some of the most famous straightedge-and-compass problems were proved impossible by Pierre Wantzel in 1837 using field theory, namely trisecting an arbitrary angle and doubling the volume of a cube (see § Impossible constructions). Many of these problems are easily solvable provided that other geometric transformations are allowed; for example, neusis construction can be used to solve both of these problems.
In terms of algebra, a length is constructible if and only if it represents a constructible number, and an angle is constructible if and only if its cosine is a constructible number. A number is constructible if and only if it can be written using the four basic arithmetic operations and the extraction of square roots but of no higher-order roots.
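As a quick numerical illustration of this algebraic characterization (a sketch, not part of the article's sources): the cosine of 72° has the well-known closed form (√5 − 1)/4, which uses only the four arithmetic operations and one square root, so an angle of 72° is constructible.

```python
import math

# cos(2*pi/5) = (sqrt(5) - 1) / 4: only field operations and a single
# square root are needed, so the angle 72 degrees is constructible.
constructed = (math.sqrt(5) - 1) / 4
assert abs(constructed - math.cos(2 * math.pi / 5)) < 1e-12
```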
Straightedge and compass tools
The "straightedge" and "compass" of straightedge-and-compass constructions are idealized versions of real-world rulers and compasses.
• The straightedge is an infinitely long edge with no markings on it. It can only be used to draw a line segment between two points, or to extend an existing line segment.
• The compass can have an arbitrarily large radius with no markings on it (unlike certain real-world compasses). Circles and circular arcs can be drawn starting from two given points: the centre and a point on the circle. The compass may or may not collapse (i.e. fold after being taken off the page, erasing its 'stored' radius).
• Lines and circles constructed have infinite precision and zero width.
Actual compasses do not collapse and modern geometric constructions often use this feature. A 'collapsing compass' would appear to be a less powerful instrument. However, by the compass equivalence theorem in Proposition 2 of Book 1 of Euclid's Elements, no power is lost by using a collapsing compass. Although the proposition is correct, its proofs have a long and checkered history.[1] In any case, the equivalence is why this feature is not stipulated in the definition of the ideal compass.
Each construction must be mathematically exact. "Eyeballing" distances (looking at the construction and guessing at its accuracy) or using markings on a ruler, are not permitted. Each construction must also terminate. That is, it must have a finite number of steps, and not be the limit of ever closer approximations. (If an unlimited number of steps is permitted, some otherwise-impossible constructions become possible by means of infinite sequences converging to a limit.)
Stated this way, straightedge-and-compass constructions appear to be a parlour game, rather than a serious practical problem; but the purpose of the restriction is to ensure that constructions can be proved to be exactly correct.
History
The ancient Greek mathematicians first attempted straightedge-and-compass constructions, and they discovered how to construct sums, differences, products, ratios, and square roots of given lengths.[2]: p. 1 They could also construct half of a given angle, a square whose area is twice that of another square, a square having the same area as a given polygon, and regular polygons of 3, 4, or 5 sides[2]: p. xi (or one with twice the number of sides of a given polygon[2]: pp. 49–50 ). But they could not construct one third of a given angle except in particular cases, or a square with the same area as a given circle, or regular polygons with other numbers of sides.[2]: p. xi Nor could they construct the side of a cube whose volume is twice the volume of a cube with a given side.[2]: p. 29
Hippocrates and Menaechmus showed that the volume of the cube could be doubled by finding the intersections of hyperbolas and parabolas, but these cannot be constructed by straightedge and compass.[2]: p. 30 In the fifth century BCE, Hippias used a curve that he called a quadratrix to both trisect the general angle and square the circle, and Nicomedes in the second century BCE showed how to use a conchoid to trisect an arbitrary angle;[2]: p. 37 but these methods also cannot be followed with just straightedge and compass.
No progress on the unsolved problems was made for two millennia, until in 1796 Gauss showed that a regular polygon with 17 sides could be constructed; five years later he gave a sufficient criterion for a regular polygon of n sides to be constructible.[2]: pp. 51 ff.
In 1837 Pierre Wantzel published a proof of the impossibility of trisecting an arbitrary angle or of doubling the volume of a cube,[3] based on the impossibility of constructing cube roots of lengths. He also showed that Gauss's sufficient constructibility condition for regular polygons is also necessary.[4]
Then in 1882 Lindemann showed that $\pi $ is a transcendental number, and thus that it is impossible by straightedge and compass to construct a square with the same area as a given circle.[2]: p. 47
The basic constructions
All straightedge-and-compass constructions consist of repeated application of five basic constructions using the points, lines and circles that have already been constructed. These are:
• Creating the line through two points
• Creating the circle that contains one point and has a center at another point
• Creating the point at the intersection of two (non-parallel) lines
• Creating the one point or two points in the intersection of a line and a circle (if they intersect)
• Creating the one point or two points in the intersection of two circles (if they intersect).
For example, starting with just two distinct points, we can create a line or either of two circles (in turn, using each point as centre and passing through the other point). If we draw both circles, two new points are created at their intersections. Drawing lines between the two original points and one of these new points completes the construction of an equilateral triangle.
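The equilateral-triangle construction above reduces, in coordinates, to intersecting two circles. The following sketch (an illustration, with the circle-intersection formula written out here) computes the apex from two starting points:

```python
import math

def circle_intersections(p, r1, q, r2):
    """Intersection points of the circle of center p, radius r1 with the
    circle of center q, radius r2; assumes they meet in two points."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(dx, dy)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)   # distance from p along line pq
    h = math.sqrt(r1 * r1 - a * a)              # half-chord height
    mx, my = p[0] + a * dx / d, p[1] + a * dy / d
    return ((mx - h * dy / d, my + h * dx / d),
            (mx + h * dy / d, my - h * dx / d))

A, B = (0.0, 0.0), (1.0, 0.0)
r = math.dist(A, B)
# Each circle is centered at one point and passes through the other.
C, _ = circle_intersections(A, r, B, r)
# C completes an equilateral triangle with A and B.
assert abs(math.dist(A, C) - r) < 1e-12
assert abs(math.dist(B, C) - r) < 1e-12
```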
Therefore, in any geometric problem we have an initial set of symbols (points and lines), an algorithm, and some results. From this perspective, geometry is equivalent to an axiomatic algebra, replacing its elements by symbols. Probably Gauss first realized this, and used it to prove the impossibility of some constructions; only much later did Hilbert find a complete set of axioms for geometry.
Common straightedge-and-compass constructions
The most-used straightedge-and-compass constructions include:
• Constructing the perpendicular bisector of a segment
• Finding the midpoint of a segment
• Drawing a perpendicular line from a point to a line
• Bisecting an angle
• Mirroring a point in a line
• Constructing a line through a point tangent to a circle
• Constructing a circle through three noncollinear points
• Drawing a line through a given point parallel to a given line
Constructible points
Main article: Constructible number
Straightedge-and-compass constructions corresponding to algebraic operations
One can associate an algebra to our geometry using a Cartesian coordinate system made of two lines, and represent points of our plane by vectors. Finally we can write these vectors as complex numbers.
Using the equations for lines and circles, one can show that the points at which they intersect lie in a quadratic extension of the smallest field F containing two points on the line, the center of the circle, and the radius of the circle. That is, they are of the form x + y√k, where x, y, and k are in F.
Since the field of constructible points is closed under square roots, it contains all points that can be obtained by a finite sequence of quadratic extensions of the field of complex numbers with rational coefficients. By the above paragraph, one can show that any constructible point can be obtained by such a sequence of extensions. As a corollary of this, one finds that the degree of the minimal polynomial for a constructible point (and therefore of any constructible length) is a power of 2. In particular, any constructible point (or length) is an algebraic number, though not every algebraic number is constructible; for example, the cube root of 2 is algebraic but not constructible.[3]
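The cube-root-of-2 example can be verified with elementary arithmetic (a sketch, not from the sources; it uses the fact that an integer cubic with no rational root is irreducible over the rationals, the candidates being given by the rational root theorem):

```python
from fractions import Fraction

def p(x):
    # Candidate minimal polynomial of the cube root of 2.
    return x**3 - 2

# Rational root theorem: any rational root of x^3 - 2 divides the
# constant term, so the only candidates are +-1 and +-2. None works,
# so the cubic is irreducible over Q.
assert all(p(Fraction(r)) != 0 for r in (1, -1, 2, -2))

# The cube root of 2 is a root, so its minimal polynomial has degree 3,
# which is not a power of 2 -- hence the number is not constructible.
assert abs(p(2 ** (1 / 3))) < 1e-12
```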
Constructible angles
There is a bijection between the angles that are constructible and the points that are constructible on any constructible circle. The angles that are constructible form an abelian group under addition modulo 2π (which corresponds to multiplication of the points on the unit circle viewed as complex numbers). The angles that are constructible are exactly those whose tangent (or equivalently, sine or cosine) is constructible as a number. For example, the regular heptadecagon (the seventeen-sided regular polygon) is constructible because
${\begin{aligned}\cos {\left({\frac {2\pi }{17}}\right)}&=\,-{\frac {1}{16}}\,+\,{\frac {1}{16}}{\sqrt {17}}\,+\,{\frac {1}{16}}{\sqrt {34-2{\sqrt {17}}}}\\[5mu]&\qquad +\,{\frac {1}{8}}{\sqrt {17+3{\sqrt {17}}-{\sqrt {34-2{\sqrt {17}}}}-2{\sqrt {34+2{\sqrt {17}}}}}}\end{aligned}}$
as discovered by Gauss.[5]
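Gauss's closed form can be checked numerically (an illustrative sketch; the tolerance is arbitrary): every operation in it is arithmetic or a square root, which is precisely what makes the heptadecagon constructible.

```python
import math

# Gauss's expression for cos(2*pi/17), built from arithmetic and square roots.
s17 = math.sqrt(17)
cos17 = (-1 / 16 + s17 / 16 + math.sqrt(34 - 2 * s17) / 16
         + math.sqrt(17 + 3 * s17 - math.sqrt(34 - 2 * s17)
                     - 2 * math.sqrt(34 + 2 * s17)) / 8)
assert abs(cos17 - math.cos(2 * math.pi / 17)) < 1e-12
```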
The group of constructible angles is closed under the operation that halves angles (which corresponds to taking square roots in the complex numbers). The only angles of finite order that may be constructed starting with two points are those whose order is either a power of two, or a product of a power of two and a finite set of distinct Fermat primes. In addition there is a dense set of constructible angles of infinite order.
Relation to complex arithmetic
Given a set of points in the Euclidean plane, selecting any one of them to be called 0 and another to be called 1, together with an arbitrary choice of orientation allows us to consider the points as a set of complex numbers.
Given any such interpretation of a set of points as complex numbers, the points constructible using valid straightedge-and-compass constructions alone are precisely the elements of the smallest field containing the original set of points and closed under the complex conjugate and square root operations (to avoid ambiguity, we can specify the square root with complex argument less than π). The elements of this field are precisely those that may be expressed as a formula in the original points using only the operations of addition, subtraction, multiplication, division, complex conjugate, and square root, which is easily seen to be a countable dense subset of the plane. Each of these six operations corresponds to a simple straightedge-and-compass construction. From such a formula it is straightforward to produce a construction of the corresponding point by combining the constructions for each of the arithmetic operations. More efficient constructions of a particular set of points correspond to shortcuts in such calculations.
Equivalently (and with no need to arbitrarily choose two points) we can say that, given an arbitrary choice of orientation, a set of points determines a set of complex ratios given by the ratios of the differences between any two pairs of points. The set of ratios constructible using straightedge and compass from such a set of ratios is precisely the smallest field containing the original ratios and closed under taking complex conjugates and square roots.
For example, the real part, imaginary part and modulus of a point or ratio z (taking one of the two viewpoints above) are constructible as these may be expressed as
$\mathrm {Re} (z)={\frac {z+{\bar {z}}}{2}}\;$
$\mathrm {Im} (z)={\frac {z-{\bar {z}}}{2i}}\;$
$\left|z\right|={\sqrt {z{\bar {z}}}}.\;$
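These three identities can be checked directly with complex arithmetic (a minimal sketch, with an example value chosen here):

```python
import cmath
import math

z = 3 + 4j
conj = z.conjugate()

assert (z + conj) / 2 == z.real            # Re(z) = (z + conj(z)) / 2
assert (z - conj) / (2j) == z.imag         # Im(z) = (z - conj(z)) / (2i)
assert math.isclose(abs(cmath.sqrt(z * conj)), abs(z))   # |z| = sqrt(z * conj(z))
```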
Doubling the cube and trisection of an angle (except for special angles such as any φ such that φ/(2π) is a rational number with denominator not divisible by 3) require ratios which are the solution to cubic equations, while squaring the circle requires a transcendental ratio. None of these are in the fields described, hence no straightedge-and-compass construction for these exists.
Impossible constructions
The ancient Greeks thought that the construction problems they could not solve were simply obstinate, not unsolvable.[6] With modern methods, however, these straightedge-and-compass constructions have been shown to be logically impossible to perform. (The problems themselves, however, are solvable, and the Greeks knew how to solve them without the constraint of working only with straightedge and compass.)
Squaring the circle
Main article: Squaring the circle
The most famous of these problems, squaring the circle, otherwise known as the quadrature of the circle, involves constructing a square with the same area as a given circle using only straightedge and compass.
Squaring the circle has been proved impossible, as it involves generating a transcendental number, that is, √π. Only certain algebraic numbers can be constructed with ruler and compass alone, namely those constructed from the integers with a finite sequence of operations of addition, subtraction, multiplication, division, and taking square roots. The phrase "squaring the circle" is often used to mean "doing the impossible" for this reason.
Without the constraint of requiring solution by ruler and compass alone, the problem is easily solvable by a wide variety of geometric and algebraic means, and was solved many times in antiquity.[7]
A method which comes very close to approximating the "quadrature of the circle" can be achieved using a Kepler triangle.
Doubling the cube
Main article: Doubling the cube
Doubling the cube is the construction, using only a straightedge and compass, of the edge of a cube that has twice the volume of a cube with a given edge. This is impossible because the cube root of 2, though algebraic, cannot be computed from integers by addition, subtraction, multiplication, division, and taking square roots. This follows because its minimal polynomial over the rationals has degree 3. This construction is possible using a straightedge with two marks on it and a compass.
Angle trisection
Main article: Angle trisection
Angle trisection is the construction, using only a straightedge and a compass, of an angle that is one-third of a given arbitrary angle. This is impossible in the general case. For example, the angle 2π/5 radians (72° = 360°/5) can be trisected, but the angle of π/3 radians (60°) cannot be trisected.[8] The general trisection problem is also easily solved when a straightedge with two marks on it is allowed (a neusis construction).
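The 60° case can be made concrete (a sketch, not from the sources): trisecting 60° would construct cos 20°, and the triple-angle identity cos 3θ = 4cos³θ − 3cosθ shows that cos 20° satisfies the cubic 8x³ − 6x − 1 = 0. That cubic has no rational root, so it is irreducible over the rationals; a minimal polynomial of degree 3 is not a power of 2, hence cos 20° is not constructible.

```python
import math

# cos(3*theta) = 4*cos(theta)**3 - 3*cos(theta); with theta = 20 degrees
# and cos(60 deg) = 1/2, this gives 8*x**3 - 6*x - 1 = 0 for x = cos(20 deg).
x = math.cos(math.pi / 9)
assert abs(8 * x**3 - 6 * x - 1) < 1e-12
```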
Distance to an ellipse
The line segment from any point in the plane to the nearest point on a circle can be constructed, but the segment from any point in the plane to the nearest point on an ellipse of positive eccentricity cannot in general be constructed.[9]
Alhazen's problem
In 1997, the Oxford mathematician Peter M. Neumann proved the theorem that there is no ruler-and-compass construction for the general solution of the ancient Alhazen's problem (billiard problem or reflection from a spherical mirror).[10][11]
Constructing regular polygons
Main article: Constructible polygon
Some regular polygons (e.g. a pentagon) are easy to construct with straightedge and compass; others are not. This led to the question: Is it possible to construct all regular polygons with straightedge and compass?
Carl Friedrich Gauss in 1796 showed that a regular 17-sided polygon can be constructed, and five years later showed that a regular n-sided polygon can be constructed with straightedge and compass if the odd prime factors of n are distinct Fermat primes. Gauss conjectured that this condition was also necessary; the conjecture was proven by Pierre Wantzel in 1837.[4]
The first few constructible regular polygons have the following numbers of sides:
3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, 272... (sequence A003401 in the OEIS)
There are known to be an infinitude of constructible regular polygons with an even number of sides (because if a regular n-gon is constructible, then so is a regular 2n-gon and hence a regular 4n-gon, 8n-gon, etc.). However, there are only 31 known constructible regular n-gons with an odd number of sides.
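The Gauss–Wantzel criterion is easy to state as code. The sketch below is illustrative and hardcodes the five known Fermat primes, which suffices for any n of practical size:

```python
FERMAT_PRIMES = (3, 5, 17, 257, 65537)   # the only known Fermat primes

def constructible_ngon(n):
    """Gauss-Wantzel: a regular n-gon is constructible with straightedge
    and compass iff n >= 3 and n = 2^k times a product of distinct
    Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:      # a repeated Fermat prime factor is not allowed
                return False
    return n == 1

first = [n for n in range(3, 21) if constructible_ngon(n)]
assert first == [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```

The generated list matches the start of OEIS A003401 quoted above; for example 15 = 3 × 5 is constructible, while 7 and 9 = 3² are not.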
Constructing a triangle from three given characteristic points or lengths
Sixteen key points of a triangle are its vertices, the midpoints of its sides, the feet of its altitudes, the feet of its internal angle bisectors, and its circumcenter, centroid, orthocenter, and incenter. These can be taken three at a time to yield 139 distinct nontrivial problems of constructing a triangle from three points.[12] Of these problems, three involve a point that can be uniquely constructed from the other two points; 23 can be non-uniquely constructed (in fact for infinitely many solutions) but only if the locations of the points obey certain constraints; in 74 the problem is constructible in the general case; and in 39 the required triangle exists but is not constructible.
Twelve key lengths of a triangle are the three side lengths, the three altitudes, the three medians, and the three angle bisectors. Together with the three angles, these give 95 distinct combinations, 63 of which give rise to a constructible triangle, 30 of which do not, and two of which are underdefined.[13]: pp. 201–203
Restricted constructions
Various attempts have been made to restrict the allowable tools for constructions under various rules, in order to determine what is still constructible and how it may be constructed, as well as determining the minimum criteria necessary to still be able to construct everything that compass and straightedge can.
Constructing with only ruler or only compass
It is possible (according to the Mohr–Mascheroni theorem) to construct anything with just a compass if it can be constructed with a ruler and compass, provided that the given data and the data to be found consist of discrete points (not lines or circles). The truth of this theorem depends on the truth of Archimedes' axiom,[14] which is not first-order in nature. Examples of compass-only constructions include Napoleon's problem.
It is impossible to take a square root with just a ruler, so some things that cannot be constructed with a ruler can be constructed with a compass; but (by the Poncelet–Steiner theorem) given a single circle and its center, they can be constructed.
Extended constructions
The ancient Greeks classified constructions into three major categories, depending on the complexity of the tools required for their solution. If a construction used only a straightedge and compass, it was called planar; if it also required one or more conic sections (other than the circle), then it was called solid; the third category included all constructions that did not fall into either of the other two categories.[15] This categorization meshes nicely with the modern algebraic point of view. A complex number that can be expressed using only the field operations and square roots (as described above) has a planar construction. A complex number that includes also the extraction of cube roots has a solid construction.
In the language of fields, a complex number that is planar has degree a power of two, and lies in a field extension that can be broken down into a tower of fields where each extension has degree two. A complex number that has a solid construction has degree with prime factors of only two and three, and lies in a field extension that is at the top of a tower of fields where each extension has degree 2 or 3.
Solid constructions
A point has a solid construction if it can be constructed using a straightedge, compass, and a (possibly hypothetical) conic drawing tool that can draw any conic with already constructed focus, directrix, and eccentricity. The same set of points can often be constructed using a smaller set of tools. For example, using a compass, straightedge, and a piece of paper on which we have the parabola y=x2 together with the points (0,0) and (1,0), one can construct any complex number that has a solid construction. Likewise, a tool that can draw any ellipse with already constructed foci and major axis (think two pins and a piece of string) is just as powerful.[16]
The ancient Greeks knew that doubling the cube and trisecting an arbitrary angle both had solid constructions. Archimedes gave a solid construction of the regular 7-gon. The quadrature of the circle does not have a solid construction.
A regular n-gon has a solid construction if and only if n=2a3bm where a and b are some non-negative integers and m is a product of zero or more distinct Pierpont primes (primes of the form 2r3s+1). Therefore, a regular n-gon admits a solid, but not planar, construction if and only if n is in the sequence
7, 9, 13, 14, 18, 19, 21, 26, 27, 28, 35, 36, 37, 38, 39, 42, 45, 52, 54, 56, 57, 63, 65, 70, 72, 73, 74, 76, 78, 81, 84, 90, 91, 95, 97... (sequence A051913 in the OEIS)
The set of n for which a regular n-gon has no solid construction is the sequence
11, 22, 23, 25, 29, 31, 33, 41, 43, 44, 46, 47, 49, 50, 53, 55, 58, 59, 61, 62, 66, 67, 69, 71, 75, 77, 79, 82, 83, 86, 87, 88, 89, 92, 93, 94, 98, 99, 100... (sequence A048136 in the OEIS)
Like the question with Fermat primes, it is an open question as to whether there are an infinite number of Pierpont primes.
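The solid-constructibility criterion can likewise be sketched in code (illustrative only; trial division is used for primality):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_pierpont_prime(p):
    """A Pierpont prime has the form 2^r * 3^s + 1."""
    if not is_prime(p):
        return False
    m = p - 1
    for q in (2, 3):
        while m % q == 0:
            m //= q
    return m == 1

def solid_constructible_ngon(n):
    """A regular n-gon has a solid construction iff n = 2^a * 3^b * m,
    with m a product of distinct Pierpont primes greater than 3."""
    if n < 3:
        return False
    for q in (2, 3):        # the 2^a * 3^b part is unrestricted
        while n % q == 0:
            n //= q
    p = 5
    while n > 1:
        if n % p == 0:
            if not is_pierpont_prime(p):
                return False
            n //= p
            if n % p == 0:  # repeated Pierpont prime factor not allowed
                return False
        else:
            p += 2
    return True

not_solid = [n for n in range(3, 30) if not solid_constructible_ngon(n)]
assert not_solid == [11, 22, 23, 25, 29]
```

The failures below 30 match the start of OEIS A048136 quoted above; 7, 13 and 19 pass because 2·3, 4·3 and 2·3² are one less than a Pierpont prime, while 11 fails because 10 = 2·5 is not of the form 2^r·3^s.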
Angle trisection
What if, together with the straightedge and compass, we had a tool that could (only) trisect an arbitrary angle? Such constructions are solid constructions, but there exist numbers with solid constructions that cannot be constructed using such a tool. For example, we cannot double the cube with such a tool.[17] On the other hand, every regular n-gon that has a solid construction can be constructed using such a tool.
Origami
Main article: Huzita–Hatori axioms
The mathematical theory of origami is more powerful than straightedge-and-compass construction. Folds satisfying the Huzita–Hatori axioms can construct exactly the same set of points as the extended constructions using a compass and conic drawing tool. Therefore, origami can also be used to solve cubic equations (and hence quartic equations), and thus solve two of the classical problems.[18]
Markable rulers
Main article: Neusis construction
Archimedes, Nicomedes and Apollonius gave constructions involving the use of a markable ruler. This would permit them, for example, to take a line segment, two lines (or circles), and a point; and then draw a line which passes through the given point and intersects the two given lines, such that the distance between the points of intersection equals the given segment. This the Greeks called neusis ("inclination", "tendency" or "verging"), because the new line tends to the point. In this expanded scheme, we can trisect an arbitrary angle (see Archimedes' trisection) or extract an arbitrary cube root (due to Nicomedes). Hence, any distance whose ratio to an existing distance is the solution of a cubic or a quartic equation is constructible. Using a markable ruler, regular polygons with solid constructions, like the heptagon, are constructible; and John H. Conway and Richard K. Guy give constructions for several of them.[19]
The neusis construction is more powerful than a conic drawing tool, as one can construct complex numbers that do not have solid constructions. In fact, using this tool one can solve some quintics that are not solvable using radicals.[20] It is known that one cannot solve an irreducible polynomial of prime degree greater than or equal to 7 using the neusis construction, so it is not possible to construct a regular 23-gon or 29-gon using this tool. Benjamin and Snyder proved that it is possible to construct the regular 11-gon, but did not give a construction.[21] It is still open as to whether a regular 25-gon or 31-gon is constructible using this tool.
Trisect a straight segment
Given a straight line segment AB, it can be divided into three equal segments, or indeed into any number of equal parts, by a construction based on the intercept theorem: mark equal steps along an auxiliary ray from A, join the last mark to B, and draw parallels through the other marks.
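In coordinates, the division points produced by the intercept-theorem construction are simply equally spaced points along the segment (a minimal sketch with example coordinates chosen here):

```python
def divide_segment(a, b, k):
    """The k-1 interior points dividing segment AB into k equal parts;
    the intercept theorem shows these points are constructible with
    straightedge and compass."""
    return [(a[0] + i * (b[0] - a[0]) / k, a[1] + i * (b[1] - a[1]) / k)
            for i in range(1, k)]

pts = divide_segment((0.0, 0.0), (3.0, 0.0), 3)
assert pts == [(1.0, 0.0), (2.0, 0.0)]
```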
Computation of binary digits
In 1998 Simon Plouffe gave a ruler-and-compass algorithm that can be used to compute binary digits of certain numbers.[22] The algorithm involves the repeated doubling of an angle and becomes physically impractical after about 20 binary digits.
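The arithmetic behind the geometric step is the binary doubling map: doubling an angle doubles the fraction it represents of a full turn, and the integer part that "overflows" is the next binary digit. The sketch below shows only this arithmetic shadow of Plouffe's construction (the geometric doubling itself is done with ruler and compass):

```python
def binary_digits(x, n):
    """First n binary digits of x in (0, 1), extracted by repeated
    doubling; each doubling mirrors one ruler-and-compass angle
    doubling in Plouffe's construction."""
    digits = []
    for _ in range(n):
        x *= 2
        d = int(x)      # the overflow bit is the next binary digit
        digits.append(d)
        x -= d
    return digits

assert binary_digits(0.625, 8) == [1, 0, 1, 0, 0, 0, 0, 0]   # 0.625 = 0.101 in binary
```

As the article notes, floating-point (like physical) doubling loses a bit of precision per step, which is why the physical procedure becomes impractical after roughly 20 digits.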
See also
• Carlyle circle
• Geometric cryptography
• Geometrography
• List of interactive geometry software, most of which can simulate straightedge-and-compass constructions
• Mathematics of paper folding
• Underwood Dudley, a mathematician who has made a sideline of collecting false straightedge-and-compass proofs.
References
1. Godfried Toussaint, "A new look at Euclid’s second proposition," The Mathematical Intelligencer, Vol. 15, No. 3, (1993), pp. 12-24.
2. Bold, Benjamin. Famous Problems of Geometry and How to Solve Them, Dover Publications, 1982 (orig. 1969).
3. Wantzel, Pierre-Laurent (1837). "Recherches sur les moyens de reconnaître si un problème de Géométrie peut se résoudre avec la règle et le compas" (PDF). Journal de Mathématiques Pures et Appliquées. 1. 2: 366–372. Retrieved 3 March 2014.
4. Kazarinoff, Nicholas D. (2003) [1970]. Ruler and the Round. Mineola, N.Y.: Dover. pp. 29–30. ISBN 978-0-486-42515-3.
5. Weisstein, Eric W. "Trigonometry Angles--Pi/17". MathWorld.
6. Stewart, Ian. Galois Theory. p. 75.
7. Instructions for trisecting a 72˚ angle.
8. Azad, H., and Laradji, A., "Some impossible constructions in elementary geometry", Mathematical Gazette 88, November 2004, 548–551.
9. Neumann, Peter M. (1998), "Reflections on Reflection in a Spherical Mirror", American Mathematical Monthly, 105 (6): 523–528, doi:10.1080/00029890.1998.12004920, JSTOR 2589403, MR 1626185
10. Highfield, Roger (1 April 1997), "Don solves the last puzzle left by ancient Greeks", Electronic Telegraph, 676, archived from the original on November 23, 2004, retrieved 2008-09-24
11. Pascal Schreck, Pascal Mathis, Vesna Marinkoviċ, and Predrag Janičiċ. "Wernick's list: A final update", Forum Geometricorum 16, 2016, pp. 69–80. http://forumgeom.fau.edu/FG2016volume16/FG201610.pdf
12. Posamentier, Alfred S., and Lehmann, Ingmar. The Secrets of Triangles, Prometheus Books, 2012.
13. Avron, Arnon (1990). "On strict strong constructibility with a compass alone". Journal of Geometry. 38 (1–2): 12–15. doi:10.1007/BF01222890. S2CID 1537763.
14. T.L. Heath, "A History of Greek Mathematics, Volume I"
15. P. Hummel, "Solid constructions using ellipses", The Pi Mu Epsilon Journal, 11(8), 429 -- 435 (2003)
16. Gleason, Andrew: "Angle trisection, the heptagon, and the triskaidecagon", Amer. Math. Monthly 95 (1988), no. 3, 185-194.
17. Row, T. Sundara (1966). Geometric Exercises in Paper Folding. New York: Dover.
18. Conway, John H. and Richard Guy: The Book of Numbers
19. A. Baragar, "Constructions using a Twice-Notched Straightedge", The American Mathematical Monthly, 109 (2), 151 -- 164 (2002).
20. E. Benjamin, C. Snyder, "On the construction of the regular hendecagon by marked ruler and compass", Mathematical Proceedings of the Cambridge Philosophical Society, 156 (3), 409 -- 424 (2014).
21. Simon Plouffe (1998). "The Computation of Certain Numbers Using a Ruler and Compass". Journal of Integer Sequences. 1: 13. Bibcode:1998JIntS...1...13P. ISSN 1530-7638.
External links
• Regular polygon constructions by Dr. Math at The Math Forum @ Drexel
• Construction with the Compass Only at cut-the-knot
• Angle Trisection by Hippocrates at cut-the-knot
• Weisstein, Eric W. "Angle Trisection". MathWorld.
Ruler function
In number theory, the ruler function of an integer $n$ can be either of two closely related functions. One of these functions counts the number of times $n$ can be evenly divided by two, which for the numbers 1, 2, 3, ... is
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, ... (sequence A007814 in the OEIS).
Alternatively, the ruler function can be defined as the same numbers plus one, which for the numbers 1, 2, 3, ... produces the sequence
1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, ... (sequence A001511 in the OEIS).
As well as being related by adding one, these two sequences are related in a different way: the second one can be formed from the first one by removing all the zeros, and the first one can be formed from the second one by adding zeros at the start and between every pair of numbers. For either definition of the ruler function, the rising and falling patterns of the values of this function resemble the lengths of marks on rulers with traditional units such as inches. These functions should be distinguished from Thomae's function, a function on real numbers which behaves similarly to the ruler function when restricted to the dyadic rational numbers.
In advanced mathematics, the 0-based ruler function is the 2-adic valuation of the number,[1] and the lexicographically earliest infinite square-free word over the natural numbers.[2] It also gives the position of the bit that changes at each step of the Gray code.[3]
In the Tower of Hanoi puzzle, with the disks of the puzzle numbered in order by their size, the 1-based ruler function gives the number of the disk to move at each step in an optimal solution to the puzzle.[4] A simulation of the puzzle, in conjunction with other methods for generating its optimal sequence of moves, can be used in an algorithm for generating the sequence of values of the ruler function in constant time per value.[3]
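Both forms of the ruler function are straightforward to compute by repeated halving; a short sketch (function names are illustrative):

```python
def ruler0(n):
    """0-based ruler function: the 2-adic valuation of n (n >= 1)."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def ruler1(n):
    """1-based ruler function: ruler0(n) + 1."""
    return ruler0(n) + 1

print([ruler0(n) for n in range(1, 17)])
# prints [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
# In an optimal Tower of Hanoi solution, step k moves disk number ruler1(k).
```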
References
1. Erickson, Alejandro; Isgur, Abraham; Jackson, Bradley W.; Ruskey, Frank; Tanny, Stephen M. (January 2012). "Nested Recurrence Relations with Conolly-like Solutions". SIAM Journal on Discrete Mathematics. 26 (1): 206–238. arXiv:1509.02613. Bibcode:2015arXiv150902613E. doi:10.1137/100795425. ISSN 0895-4801. S2CID 8116882.
2. Guay-Paquet, Mathieu; Shallit, Jeffrey (November 2009). "Avoiding squares and overlaps over the natural numbers". Discrete Mathematics. 309 (21): 6245–6254. doi:10.1016/j.disc.2009.06.004. S2CID 8646044.
3. Herter, Felix; Rote, Günter (November 2018). "Loopless Gray code enumeration and the Tower of Bucharest". Theoretical Computer Science. 748: 40–54. arXiv:1604.06707. Bibcode:2016arXiv160406707H. doi:10.1016/j.tcs.2017.11.017. S2CID 4014870.
4. Hinz, Andreas M.; Klavžar, Sandi; Milutinović, Uroš; Petr, Ciril (2013). The Tower of Hanoi – Myths and Maths. Basel: Springer Basel. pp. 60–61. doi:10.1007/978-3-0348-0237-6. ISBN 978-3-0348-0236-9.
External links
• Weisstein, Eric W. "Ruler function". MathWorld.
Rule of inference
In philosophy of logic and logic, a rule of inference, inference rule or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion (or conclusions). For example, the rule of inference called modus ponens takes two premises, one in the form "If p then q" and another in the form "p", and returns the conclusion "q". The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other non-classical logics), in the sense that if the premises are true (under an interpretation), then so is the conclusion.
Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves a general designation. But a rule of inference's action is purely syntactic, and does not need to preserve any semantic property: any function from sets of formulae to formulae counts as a rule of inference. Usually only rules that are recursive are important; i.e. rules such that there is an effective procedure for determining whether any given formula is the conclusion of a given set of formulae according to the rule. An example of a rule that is not effective in this sense is the infinitary ω-rule.[1]
Popular rules of inference in propositional logic include modus ponens, modus tollens, and contraposition. First-order predicate logic uses rules of inference to deal with logical quantifiers.
Standard form
In formal logic (and many related areas), rules of inference are usually given in the following standard form:
Premise#1
Premise#2
...
Premise#n
Conclusion
This expression states that whenever in the course of some logical derivation the given premises have been obtained, the specified conclusion can be taken for granted as well. The exact formal language that is used to describe both premises and conclusions depends on the actual context of the derivations. In a simple case, one may use logical formulae, such as in:
$A\to B$
${\underline {A\quad \quad \quad }}\,\!$
$B\!$
This is the modus ponens rule of propositional logic. Rules of inference are often formulated as schemata employing metavariables.[2] In the rule (schema) above, the metavariables A and B can be instantiated to any element of the universe (or sometimes, by convention, a restricted subset such as propositions) to form an infinite set of inference rules.
A proof system is formed from a set of rules chained together to form proofs, also called derivations. Any derivation has only one final conclusion, which is the statement proved or derived. If premises are left unsatisfied in the derivation, then the derivation is a proof of a hypothetical statement: "if the premises hold, then the conclusion holds."
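Since a rule of inference acts purely on syntax, it can be sketched as a partial function from premise formulae to a conclusion. A minimal illustration of modus ponens, with formulae encoded as strings or ('->', p, q) tuples (this encoding is an assumption for the example, not part of the source):

```python
# Modus ponens as a syntactic function: from a formula of the form p -> q
# and the formula p, return q; any other inputs are rejected.

def modus_ponens(premise1, premise2):
    """Apply modus ponens, or raise ValueError if the rule does not apply."""
    if isinstance(premise1, tuple) and premise1[0] == '->' and premise1[1] == premise2:
        return premise1[2]
    raise ValueError("modus ponens does not apply")

conclusion = modus_ponens(('->', 'p', 'q'), 'p')  # 'q'
```

Because the rule is a schema, the same function works when the metavariables are instantiated to compound formulae, e.g. `modus_ponens(('->', ('->', 'a', 'b'), 'c'), ('->', 'a', 'b'))` yields `'c'`.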
Example: Hilbert systems for two propositional logics
In a Hilbert system, the premises and conclusion of the inference rules are simply formulae of some language, usually employing metavariables. For graphical compactness of the presentation and to emphasize the distinction between axioms and rules of inference, this section uses the sequent notation ($\vdash $) instead of a vertical presentation of rules. In this notation,
${\begin{array}{c}{\text{Premise }}1\\{\text{Premise }}2\\\hline {\text{Conclusion}}\end{array}}$
is written as $({\text{Premise }}1),({\text{Premise }}2)\vdash ({\text{Conclusion}})$.
The formal language for classical propositional logic can be expressed using just negation (¬), implication (→) and propositional symbols. A well-known axiomatization, comprising three axiom schemata and one inference rule (modus ponens), is:
(CA1) ⊢ A → (B → A)
(CA2) ⊢ (A → (B → C)) → ((A → B) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(MP) A, A → B ⊢ B
It may seem redundant to have two notions of inference in this case, ⊢ and →. In classical propositional logic, they indeed coincide; the deduction theorem states that A ⊢ B if and only if ⊢ A → B. There is however a distinction worth emphasizing even in this case: the first notation describes a deduction, that is an activity of passing from sentences to sentences, whereas A → B is simply a formula made with a logical connective, implication in this case. Without an inference rule (like modus ponens in this case), there is no deduction or inference. This point is illustrated in Lewis Carroll's dialogue called "What the Tortoise Said to Achilles",[3] as well as later attempts by Bertrand Russell and Peter Winch to resolve the paradox introduced in the dialogue.
For some non-classical logics, the deduction theorem does not hold. For example, the three-valued logic of Łukasiewicz can be axiomatized as:[4]
(CA1) ⊢ A → (B → A)
(LA2) ⊢ (A → B) → ((B → C) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(LA4) ⊢ ((A → ¬A) → A) → A
(MP) A, A → B ⊢ B
This sequence differs from classical logic by the change in axiom 2 and the addition of axiom 4. The classical deduction theorem does not hold for this logic, however a modified form does hold, namely A ⊢ B if and only if ⊢ A → (A → B).[5]
Admissibility and derivability
In a set of rules, an inference rule could be redundant in the sense that it is admissible or derivable. A derivable rule is one whose conclusion can be derived from its premises using the other rules. An admissible rule is one whose conclusion holds whenever the premises hold. All derivable rules are admissible. To appreciate the difference, consider the following set of rules for defining the natural numbers (the judgment $n\,\,{\mathsf {nat}}$ asserts the fact that $n$ is a natural number):
${\begin{matrix}{\begin{array}{c}\\\hline {\mathbf {0} \,\,{\mathsf {nat}}}\end{array}}&{\begin{array}{c}{n\,\,{\mathsf {nat}}}\\\hline {\mathbf {s(} n\mathbf {)} \,\,{\mathsf {nat}}}\end{array}}\end{matrix}}$
The first rule states that 0 is a natural number, and the second states that s(n) is a natural number if n is. In this proof system, the following rule, demonstrating that the second successor of a natural number is also a natural number, is derivable:
${\begin{array}{c}{n\,\,{\mathsf {nat}}}\\\hline {\mathbf {s(s(} n\mathbf {))} \,\,{\mathsf {nat}}}\end{array}}$
Its derivation is the composition of two uses of the successor rule above. The following rule for asserting the existence of a predecessor for any nonzero number is merely admissible:
${\begin{array}{c}{\mathbf {s(} n\mathbf {)} \,\,{\mathsf {nat}}}\\\hline {n\,\,{\mathsf {nat}}}\end{array}}$
This is a true fact of natural numbers, as can be proven by induction. (To prove that this rule is admissible, assume a derivation of the premise and induct on it to produce a derivation of $n\,\,{\mathsf {nat}}$.) However, it is not derivable, because it depends on the structure of the derivation of the premise. Because of this, derivability is stable under additions to the proof system, whereas admissibility is not. To see the difference, suppose the following nonsense rule were added to the proof system:
${\begin{array}{c}\\\hline {\mathbf {s(-3)} \,\,{\mathsf {nat}}}\end{array}}$
In this new system, the double-successor rule is still derivable. However, the rule for finding the predecessor is no longer admissible, because there is no way to derive $\mathbf {-3} \,\,{\mathsf {nat}}$. The brittleness of admissibility comes from the way it is proved: since the proof can induct on the structure of the derivations of the premises, extensions to the system add new cases to this proof, which may no longer hold.
Admissible rules can be thought of as theorems of a proof system. For instance, in a sequent calculus where cut elimination holds, the cut rule is admissible.
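The sense in which a derivable rule is "a composition of the existing rules" can be made concrete by representing derivations as values built only by the primitive rules. A small sketch for the natural-number judgment above (the string encoding of derivations is an assumption for illustration):

```python
# Derivations of the judgment 'n nat', built only from the two primitive
# rules: zero() derives '0 nat', and succ(d) derives 's(n) nat' from a
# derivation d of 'n nat'.

def zero():
    return ('0', 'nat')

def succ(derivation):
    n, _ = derivation
    return ('s(' + n + ')', 'nat')

def double_succ(derivation):
    """Derived rule: literally two compositions of the successor rule."""
    return succ(succ(derivation))

double_succ(zero())  # ('s(s(0))', 'nat')
```

The predecessor rule has no such definition in terms of `zero` and `succ`: its justification inspects the structure of the premise's derivation, which is exactly why it is only admissible.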
See also
• Argumentation scheme
• Immediate inference
• Inference objection
• Law of thought
• List of rules of inference
• Logical truth
• Structural rule
References
1. Boolos, George; Burgess, John; Jeffrey, Richard C. (2007). Computability and logic. Cambridge: Cambridge University Press. p. 364. ISBN 978-0-521-87752-7.
2. John C. Reynolds (2009) [1998]. Theories of Programming Languages. Cambridge University Press. p. 12. ISBN 978-0-521-10697-9.
3. Kosta Dosen (1996). "Logical consequence: a turn in style". In Maria Luisa Dalla Chiara; Kees Doets; Daniele Mundici; Johan van Benthem (eds.). Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995. Springer. p. 290. ISBN 978-0-7923-4383-7. preprint (with different pagination)
4. Bergmann, Merrie (2008). An introduction to many-valued and fuzzy logic: semantics, algebras, and derivation systems. Cambridge University Press. p. 100. ISBN 978-0-521-88128-9.
5. Bergmann, Merrie (2008). An introduction to many-valued and fuzzy logic: semantics, algebras, and derivation systems. Cambridge University Press. p. 114. ISBN 978-0-521-88128-9.
Differentiation rules
This is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus.
Elementary rules of differentiation
Unless otherwise stated, all functions are functions of real numbers (R) that return real values; although more generally, the formulae below apply wherever they are well defined[1][2] — including the case of complex numbers (C).[3]
Constant term rule
For any value of $c$, where $c\in \mathbb {R} $, if $f(x)$ is the constant function given by $f(x)=c$, then ${\frac {df}{dx}}=0$.[4]
Proof
Let $c\in \mathbb {R} $ and $f(x)=c$. By the definition of the derivative,
${\begin{aligned}f'(x)&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\\&=\lim _{h\to 0}{\frac {(c)-(c)}{h}}\\&=\lim _{h\to 0}{\frac {0}{h}}\\&=\lim _{h\to 0}0\\&=0\end{aligned}}$
This shows that the derivative of any constant function is 0.
Differentiation is linear
Main article: Linearity of differentiation
For any functions $f$ and $g$ and any real numbers $a$ and $b$, the derivative of the function $h(x)=af(x)+bg(x)$ with respect to $x$ is: $h'(x)=af'(x)+bg'(x).$
In Leibniz's notation this is written as:
${\frac {d(af+bg)}{dx}}=a{\frac {df}{dx}}+b{\frac {dg}{dx}}.$
Special cases include:
• The constant factor rule
$(af)'=af'$
• The sum rule
$(f+g)'=f'+g'$
• The subtraction rule
$(f-g)'=f'-g'.$
The product rule
Main article: Product rule
For the functions f and g, the derivative of the function h(x) = f(x) g(x) with respect to x is
$h'(x)=(fg)'(x)=f'(x)g(x)+f(x)g'(x).$
In Leibniz's notation this is written
${\frac {d(fg)}{dx}}=g{\frac {df}{dx}}+f{\frac {dg}{dx}}.$
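The product rule is easy to sanity-check numerically with a central finite difference; this is a numeric sketch of the identity, not a proof:

```python
import math

# Compare the finite-difference derivative of h = f*g with f'g + f g'.
def diff(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, g = math.sin, math.exp
x = 0.7
lhs = diff(lambda t: f(t) * g(t), x)           # (fg)'(x), numerically
rhs = diff(f, x) * g(x) + f(x) * diff(g, x)    # f'(x)g(x) + f(x)g'(x)
assert abs(lhs - rhs) < 1e-6
```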
The chain rule
Main article: Chain rule
The derivative of the function $h(x)=f(g(x))$ is
$h'(x)=f'(g(x))\cdot g'(x).$
In Leibniz's notation, this is written as:
${\frac {d}{dx}}h(x)=\left.{\frac {d}{dz}}f(z)\right|_{z=g(x)}\cdot {\frac {d}{dx}}g(x),$
often abridged to
${\frac {dh(x)}{dx}}={\frac {df(g(x))}{dg(x)}}\cdot {\frac {dg(x)}{dx}}.$
Focusing on the notion of maps, and the differential being a map ${\text{D}}$, this is written in a more concise way as:
$[{\text{D}}(f\circ g)]_{x}=[{\text{D}}f]_{g(x)}\cdot [{\text{D}}g]_{x}\,.$
The inverse function rule
Main article: Inverse functions and differentiation
If the function f has an inverse function g, meaning that $g(f(x))=x$ and $f(g(y))=y,$ then
$g'={\frac {1}{f'\circ g}}.$
In Leibniz notation, this is written as
${\frac {dx}{dy}}={\frac {1}{\frac {dy}{dx}}}.$
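The inverse function rule can likewise be checked numerically; a sketch with f = exp and g = ln, so that g'(y) should equal 1 / f'(g(y)) = 1/y:

```python
import math

# Numeric sketch of the inverse function rule with f = exp, g = ln.
def diff(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

y = 2.5
g_prime = diff(math.log, y)                      # g'(y), numerically
via_rule = 1 / diff(math.exp, math.log(y))       # 1 / f'(g(y))
assert abs(g_prime - via_rule) < 1e-6
```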
Power laws, polynomials, quotients, and reciprocals
The polynomial or elementary power rule
Main article: Power rule
If $f(x)=x^{r}$, for any real number $r\neq 0,$ then
$f'(x)=rx^{r-1}.$
When $r=1,$ this becomes the special case that if $f(x)=x,$ then $f'(x)=1.$
Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial.
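The combination of these three rules amounts to a simple operation on coefficient lists; a minimal sketch, where `p[k]` is assumed to hold the coefficient of x**k:

```python
# Power rule plus linearity applied coefficient-by-coefficient:
# d/dx sum(p[k] * x**k) = sum(k * p[k] * x**(k-1)).

def poly_derivative(p):
    """Differentiate a polynomial given as a list of coefficients."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

# d/dx (4 + 3x + 5x^3) = 3 + 15x^2
poly_derivative([4, 3, 0, 5])  # [3, 0, 15]
```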
The reciprocal rule
The derivative of $h(x)={\frac {1}{f(x)}}$ for any (nonvanishing) function f is:
$h'(x)=-{\frac {f'(x)}{(f(x))^{2}}}$ wherever f is non-zero.
In Leibniz's notation, this is written
${\frac {d(1/f)}{dx}}=-{\frac {1}{f^{2}}}{\frac {df}{dx}}.$
The reciprocal rule can be derived either from the quotient rule, or from the combination of power rule and chain rule.
The quotient rule
Main article: Quotient rule
If f and g are functions, then:
$\left({\frac {f}{g}}\right)'={\frac {f'g-g'f}{g^{2}}}\quad $ wherever g is nonzero.
This can be derived from the product rule and the reciprocal rule.
Generalized power rule
Main article: Power rule
The elementary power rule generalizes considerably. The most general power rule is the functional power rule: for any functions f and g,
$(f^{g})'=\left(e^{g\ln f}\right)'=f^{g}\left(f'{g \over f}+g'\ln f\right),\quad $
wherever both sides are well defined.
Special cases
• If $ f(x)=x^{a}\!$, then $ f'(x)=ax^{a-1}$ when a is any non-zero real number and x is positive.
• The reciprocal rule may be derived as the special case where $ g(x)=-1\!$.
Derivatives of exponential and logarithmic functions
${\frac {d}{dx}}\left(c^{ax}\right)={ac^{ax}\ln c},\qquad c>0$
The equation above holds for all c, but for $ c<0$ the derivative involves a complex number.
${\frac {d}{dx}}\left(e^{ax}\right)=ae^{ax}$
${\frac {d}{dx}}\left(\log _{c}x\right)={1 \over x\ln c},\qquad c>1$
The equation above also holds for all c, but yields a complex number if $ c<0\!$.
${\frac {d}{dx}}\left(\ln x\right)={1 \over x},\qquad x>0.$
${\frac {d}{dx}}\left(\ln |x|\right)={1 \over x},\qquad x\neq 0.$
${\frac {d}{dx}}\left(W(x)\right)={1 \over {x+e^{W(x)}}},\qquad x>-{1 \over e}.\qquad $where $W(x)$ is the Lambert W function
${\frac {d}{dx}}\left(x^{x}\right)=x^{x}(1+\ln x).$
${\frac {d}{dx}}\left(f(x)^{g(x)}\right)=g(x)f(x)^{g(x)-1}{\frac {df}{dx}}+f(x)^{g(x)}\ln {(f(x))}{\frac {dg}{dx}},\qquad {\text{if }}f(x)>0,{\text{ and if }}{\frac {df}{dx}}{\text{ and }}{\frac {dg}{dx}}{\text{ exist.}}$
${\frac {d}{dx}}\left(f_{1}(x)^{f_{2}(x)^{\left(...\right)^{f_{n}(x)}}}\right)=\left[\sum \limits _{k=1}^{n}{\frac {\partial }{\partial x_{k}}}\left(f_{1}(x_{1})^{f_{2}(x_{2})^{\left(...\right)^{f_{n}(x_{n})}}}\right)\right]{\biggr \vert }_{x_{1}=x_{2}=...=x_{n}=x},{\text{ if }}f_{i<n}(x)>0{\text{ and }}$ ${\frac {df_{i}}{dx}}{\text{ exists. }}$
Logarithmic derivatives
The logarithmic derivative is another way of stating the rule for differentiating the logarithm of a function (using the chain rule):
$(\ln f)'={\frac {f'}{f}}\quad $ wherever f is positive.
Logarithmic differentiation is a technique which uses logarithms and its differentiation rules to simplify certain expressions before actually applying the derivative.
Logarithms can be used to remove exponents, convert products into sums, and convert division into subtraction — each of which may lead to a simplified expression for taking derivatives.
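A standard worked instance is h(x) = x**x: taking logarithms gives ln h = x ln x, so h'/h = ln x + 1 and h'(x) = x**x (ln x + 1), as stated in the table above. A numeric sketch of the check:

```python
import math

# Logarithmic differentiation of h(x) = x**x: ln h = x ln x gives
# h'/h = ln x + 1, hence h'(x) = x**x * (ln x + 1); verified numerically.
def diff(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3
numeric = diff(lambda t: t ** t, x)
formula = x ** x * (math.log(x) + 1)
assert abs(numeric - formula) < 1e-6
```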
Derivatives of trigonometric functions
Main article: Differentiation of trigonometric functions
$(\sin x)'=\cos x={\frac {e^{ix}+e^{-ix}}{2}}$ $(\arcsin x)'={1 \over {\sqrt {1-x^{2}}}}$
$(\cos x)'=-\sin x={\frac {e^{-ix}-e^{ix}}{2i}}$ $(\arccos x)'=-{1 \over {\sqrt {1-x^{2}}}}$
$(\tan x)'=\sec ^{2}x={1 \over \cos ^{2}x}=1+\tan ^{2}x$ $(\arctan x)'={1 \over 1+x^{2}}$
$(\cot x)'=-\csc ^{2}x=-{1 \over \sin ^{2}x}=-1-\cot ^{2}x$ $(\operatorname {arccot} x)'={1 \over -1-x^{2}}$
$(\sec x)'=\sec {x}\tan {x}$ $(\operatorname {arcsec} x)'={1 \over |x|{\sqrt {x^{2}-1}}}$
$(\csc x)'=-\csc {x}\cot {x}$ $(\operatorname {arccsc} x)'=-{1 \over |x|{\sqrt {x^{2}-1}}}$
The derivatives in the table above are for when the range of the inverse secant is $[0,\pi ]\!$ and when the range of the inverse cosecant is $\left[-{\frac {\pi }{2}},{\frac {\pi }{2}}\right]\!$.
It is common to additionally define an inverse tangent function with two arguments, $\arctan(y,x)\!$. Its value lies in the range $[-\pi ,\pi ]\!$ and reflects the quadrant of the point $(x,y)\!$. For the first and fourth quadrant (i.e. $x>0\!$) one has $\arctan(y,x>0)=\arctan(y/x)\!$. Its partial derivatives are
${\frac {\partial \arctan(y,x)}{\partial y}}={\frac {x}{x^{2}+y^{2}}}$, and ${\frac {\partial \arctan(y,x)}{\partial x}}={\frac {-y}{x^{2}+y^{2}}}.$
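These partial derivatives can be spot-checked numerically (a sketch in Python; `math.atan2(y, x)` implements the two-argument inverse tangent, and the test point is an arbitrary choice away from the branch cut):

```python
import math

# Numerical check of the stated partial derivatives of the two-argument
# inverse tangent, using Python's math.atan2(y, x).
def d_dy(y, x, h=1e-7):
    return (math.atan2(y + h, x) - math.atan2(y - h, x)) / (2 * h)

def d_dx(y, x, h=1e-7):
    return (math.atan2(y, x + h) - math.atan2(y, x - h)) / (2 * h)

y, x = 1.3, -0.7   # an arbitrary point in the second quadrant
r2 = x * x + y * y
print(abs(d_dy(y, x) - x / r2) < 1e-6)   # d/dy = x / (x^2 + y^2)
print(abs(d_dx(y, x) + y / r2) < 1e-6)   # d/dx = -y / (x^2 + y^2)
```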
Derivatives of hyperbolic functions
$(\sinh x)'=\cosh x={\frac {e^{x}+e^{-x}}{2}}$ $(\operatorname {arcsinh} x)'={1 \over {\sqrt {1+x^{2}}}}$
$(\cosh x)'=\sinh x={\frac {e^{x}-e^{-x}}{2}}$ $(\operatorname {arccosh} x)'={\frac {1}{\sqrt {x^{2}-1}}}$
$(\tanh x)'={\operatorname {sech} ^{2}x}={1 \over \cosh ^{2}x}=1-\tanh ^{2}x$ $(\operatorname {arctanh} x)'={1 \over 1-x^{2}}$
$(\coth x)'=-\operatorname {csch} ^{2}x=-{1 \over \sinh ^{2}x}=1-\coth ^{2}x$ $(\operatorname {arccoth} x)'=(\operatorname {arctanh} {\frac {1}{x}})'={\frac {-x^{-2}}{1-{\frac {1}{x^{2}}}}}={\frac {1}{1-x^{2}}}$
$(\operatorname {sech} x)'=-\operatorname {sech} {x}\tanh {x}$ $(\operatorname {arcsech} x)'=-{1 \over x{\sqrt {1-x^{2}}}}$
$(\operatorname {csch} x)'=-\operatorname {csch} {x}\coth {x}$ $(\operatorname {arccsch} x)'=-{1 \over |x|{\sqrt {1+x^{2}}}}$
See Hyperbolic functions for restrictions on these derivatives.
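A couple of the table entries can be spot-checked numerically; the sketch below (in Python, at an arbitrary point inside both domains) compares a central difference against the closed forms for tanh and artanh:

```python
import math

def dcentral(fn, x, h=1e-6):
    # symmetric difference quotient
    return (fn(x + h) - fn(x - h)) / (2 * h)

x = 0.8
print(abs(dcentral(math.tanh, x) - (1 - math.tanh(x) ** 2)) < 1e-8)   # (tanh x)'
print(abs(dcentral(math.atanh, x) - 1 / (1 - x * x)) < 1e-8)          # (artanh x)'
```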
Derivatives of special functions
Gamma function
$\Gamma (x)=\int _{0}^{\infty }t^{x-1}e^{-t}\,dt$
${\begin{aligned}\Gamma '(x)&=\int _{0}^{\infty }t^{x-1}e^{-t}\ln t\,dt\\&=\Gamma (x)\left(\sum _{n=1}^{\infty }\left(\ln \left(1+{\dfrac {1}{n}}\right)-{\dfrac {1}{x+n}}\right)-{\dfrac {1}{x}}\right)\\&=\Gamma (x)\psi (x)\end{aligned}}$
where $\psi (x)$ is the digamma function, given by the parenthesized series to the right of $\Gamma (x)$ in the line above.
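The series representation above suggests a direct numerical check of Γ′(x) = Γ(x)ψ(x); in the Python sketch below, ψ is evaluated by truncating that series at N terms (the truncation point is an arbitrary choice; the tail decays roughly like 1/N):

```python
import math

# Check Gamma'(x) = Gamma(x) * psi(x), with psi(x) computed from the
# truncated series  sum_{n=1}^{N} (ln(1 + 1/n) - 1/(x + n)) - 1/x.
N = 100_000  # truncation point (an assumption for illustration)

def digamma(x):
    s = -1.0 / x
    for n in range(1, N + 1):
        s += math.log(1.0 + 1.0 / n) - 1.0 / (x + n)
    return s

def gamma_prime(x, h=1e-6):
    # central difference of the gamma function itself
    return (math.gamma(x + h) - math.gamma(x - h)) / (2 * h)

x = 2.5
print(abs(math.gamma(x) * digamma(x) - gamma_prime(x)) < 1e-4)
```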
Riemann Zeta function
$\zeta (x)=\sum _{n=1}^{\infty }{\frac {1}{n^{x}}}$
${\begin{aligned}\zeta '(x)&=-\sum _{n=1}^{\infty }{\frac {\ln n}{n^{x}}}=-{\frac {\ln 2}{2^{x}}}-{\frac {\ln 3}{3^{x}}}-{\frac {\ln 4}{4^{x}}}-\cdots \\&=-\sum _{p{\text{ prime}}}{\frac {p^{-x}\ln p}{(1-p^{-x})^{2}}}\prod _{q{\text{ prime}},q\neq p}{\frac {1}{1-q^{-x}}}\end{aligned}}$
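Since the differentiated series is obtained term by term, truncating both series at the same N gives a consistent pair that a central difference can confirm (a Python sketch; N and the evaluation point are arbitrary choices):

```python
import math

# Check zeta'(x) = -sum ln(n)/n^x by comparing the term-by-term
# differentiated series against a central difference of the truncated
# Dirichlet series itself.
N = 2000  # truncation point (an assumption for illustration)

def zeta_trunc(x):
    return sum(n ** -x for n in range(1, N + 1))

def zeta_prime_trunc(x):
    return -sum(math.log(n) * n ** -x for n in range(1, N + 1))

x, h = 2.0, 1e-6
numeric = (zeta_trunc(x + h) - zeta_trunc(x - h)) / (2 * h)
print(abs(zeta_prime_trunc(x) - numeric) < 1e-6)
```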
Derivatives of integrals
Main article: Differentiation under the integral sign
Suppose that it is required to differentiate with respect to x the function
$F(x)=\int _{a(x)}^{b(x)}f(x,t)\,dt,$
where the functions $f(x,t)$ and ${\frac {\partial }{\partial x}}\,f(x,t)$ are both continuous in both $t$ and $x$ in some region of the $(t,x)$ plane, including $a(x)\leq t\leq b(x),$ $x_{0}\leq x\leq x_{1}$, and the functions $a(x)$ and $b(x)$ are both continuous and both have continuous derivatives for $x_{0}\leq x\leq x_{1}$. Then for $\,x_{0}\leq x\leq x_{1}$:
$F'(x)=f(x,b(x))\,b'(x)-f(x,a(x))\,a'(x)+\int _{a(x)}^{b(x)}{\frac {\partial }{\partial x}}\,f(x,t)\;dt\,.$
This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus.
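The rule can be verified numerically on a made-up example; the Python sketch below takes f(x, t) = sin(xt), a(x) = x, b(x) = x² (so a′(x) = 1, b′(x) = 2x and ∂f/∂x = t cos(xt)), evaluates the integrals with Simpson's rule, and compares the formula against a central difference of F:

```python
import math

# Leibniz integral rule check for F(x) = integral from x to x^2 of sin(x t) dt.
def simpson(g, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3.0

def F(x):
    return simpson(lambda t: math.sin(x * t), x, x * x)

def F_prime_leibniz(x):
    return (math.sin(x ** 3) * (2 * x)          # f(x, b(x)) * b'(x)
            - math.sin(x * x) * 1.0             # f(x, a(x)) * a'(x)
            + simpson(lambda t: t * math.cos(x * t), x, x * x))

x = 1.2
numeric = (F(x + 1e-6) - F(x - 1e-6)) / 2e-6
print(abs(F_prime_leibniz(x) - numeric) < 1e-5)
```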
Derivatives to nth order
Some rules exist for computing the n-th derivative of functions, where n is a positive integer. These include:
Faà di Bruno's formula
Main article: Faà di Bruno's formula
If f and g are n-times differentiable, then
${\frac {d^{n}}{dx^{n}}}[f(g(x))]=n!\sum _{\{k_{m}\}}f^{(r)}(g(x))\prod _{m=1}^{n}{\frac {1}{k_{m}!}}\left({\frac {g^{(m)}(x)}{m!}}\right)^{k_{m}}$
where $ r=\sum _{m=1}^{n}k_{m}$ and the set $\{k_{m}\}$ consists of all non-negative integer solutions of the Diophantine equation $ \sum _{m=1}^{n}mk_{m}=n$.
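The formula can be checked on a small case; the Python sketch below applies it to f = exp, g = sin (so f^(r)(g(x)) = e^(sin x)), using the standard normalization in which each factor enters as (g^(m)(x)/m!)^(k_m)/k_m! with r = Σ k_m, and compares n = 3 against the hand-computed third derivative e^(sin x)(cos³x − 3 sin x cos x − cos x):

```python
import math
from itertools import product

# Faa di Bruno's formula for f(g(x)) with f = exp, g = sin.
def faa_di_bruno_exp_sin(x, n):
    # derivatives of sin cycle: cos, -sin, -cos, sin
    cycle = [math.cos(x), -math.sin(x), -math.cos(x), math.sin(x)]
    g_deriv = lambda m: cycle[(m - 1) % 4]
    total = 0.0
    # enumerate all non-negative k_1..k_n with sum(m * k_m) = n
    for ks in product(*(range(n // m + 1) for m in range(1, n + 1))):
        if sum(m * k for m, k in zip(range(1, n + 1), ks)) != n:
            continue
        term = math.exp(math.sin(x))      # f^(r)(g(x)) = e^(sin x) for f = exp
        for m, k in zip(range(1, n + 1), ks):
            term *= (g_deriv(m) / math.factorial(m)) ** k / math.factorial(k)
        total += term
    return math.factorial(n) * total

x = 0.7
expected = math.exp(math.sin(x)) * (math.cos(x) ** 3
                                    - 3 * math.sin(x) * math.cos(x)
                                    - math.cos(x))
print(abs(faa_di_bruno_exp_sin(x, 3) - expected) < 1e-12)
```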
General Leibniz rule
Main article: General Leibniz rule
If f and g are n-times differentiable, then
${\frac {d^{n}}{dx^{n}}}[f(x)g(x)]=\sum _{k=0}^{n}{\binom {n}{k}}{\frac {d^{n-k}}{dx^{n-k}}}f(x){\frac {d^{k}}{dx^{k}}}g(x)$
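Polynomials make a convenient test bed because their derivatives are exact; the Python sketch below (with arbitrarily chosen f = 1 + x² and g = x³) compares the n-th derivative of the product computed directly against the general Leibniz sum:

```python
import math

# General Leibniz rule check on polynomials, stored as coefficient
# lists with the lowest degree first.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_diff(p, k=1):
    for _ in range(k):
        p = [i * c for i, c in enumerate(p)][1:] or [0]
    return p

def leibniz_nth(p, q, n):
    # sum over k of C(n, k) * p^(n-k) * q^(k)
    total = [0]
    for k in range(n + 1):
        term = poly_mul(poly_diff(p, n - k), poly_diff(q, k))
        term = [math.comb(n, k) * c for c in term]
        total = [a + b for a, b in
                 zip(total + [0] * len(term), term + [0] * len(total))]
    return total

f = [1, 0, 1]      # 1 + x^2
g = [0, 0, 0, 1]   # x^3
direct = poly_diff(poly_mul(f, g), 3)
via_leibniz = leibniz_nth(f, g, 3)
# trim trailing zeros before comparing
trim = lambda p: p[:max((i for i, c in enumerate(p) if c), default=0) + 1]
print(trim(direct) == trim(via_leibniz))
```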
See also
• Differentiable function – Mathematical function whose derivative exists
• Differential of a function – Notion in calculus
• Differentiation of integrals – Problem in mathematics
• Differentiation under the integral sign – Leibniz integral rule
• Hyperbolic functions – Collective name of 6 mathematical functions
• Inverse hyperbolic functions – Mathematical functions
• Inverse trigonometric functions – Inverse functions of the trigonometric functions
• Lists of integrals
• List of mathematical functions
• Matrix calculus – Specialized notation for multivariable calculus
• Trigonometric functions – Functions of an angle
• Vector calculus identities – Mathematical identities
References
1. Calculus (5th edition), F. Ayres, E. Mendelson, Schaum's Outline Series, 2009, ISBN 978-0-07-150861-2.
2. Advanced Calculus (3rd edition), R. Wrede, M.R. Spiegel, Schaum's Outline Series, 2010, ISBN 978-0-07-162366-7.
3. Complex Variables, M.R. Spiegel, S. Lipschutz, J.J. Schiller, D. Spellman, Schaum's Outlines Series, McGraw Hill (USA), 2009, ISBN 978-0-07-161569-3
4. "Differentiation Rules". University of Waterloo - CEMC Open Courseware. Retrieved 3 May 2022.
Sources and further reading
These rules are given in many books, both on elementary and advanced calculus, in pure and applied mathematics. Those in this article (in addition to the above references) can be found in:
• Mathematical Handbook of Formulas and Tables (3rd edition), S. Lipschutz, M.R. Spiegel, J. Liu, Schaum's Outline Series, 2009, ISBN 978-0-07-154855-7.
• The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010, ISBN 978-0-521-57507-2.
• Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3
• NIST Handbook of Mathematical Functions, F. W. J. Olver, D. W. Lozier, R. F. Boisvert, C. W. Clark, Cambridge University Press, 2010, ISBN 978-0-521-19225-5.
External links
• Derivative calculator with formula simplification
|
Wikipedia
|
Rules of passage (logic)
In mathematical logic, the rules of passage govern how quantifiers distribute over the basic logical connectives of first-order logic. The rules of passage govern the "passage" (translation) from any formula of first-order logic to the equivalent formula in prenex normal form, and vice versa.
The rules
See Quine (1982: 119, chpt. 23). Let Q and Q' denote ∀ and ∃ or vice versa. β denotes a formula in which x does not appear. The rules of passage then include the following sentences, whose main connective is the biconditional:
• $Qx[\lnot \alpha (x)]\leftrightarrow \lnot Q'x[\alpha (x)].$
• $\ Qx[\beta \lor \alpha (x)]\leftrightarrow (\beta \lor Qx\alpha (x)).$
• $\exists x[\alpha (x)\lor \gamma (x)]\leftrightarrow (\exists x\alpha (x)\lor \exists x\gamma (x)).$
• $\ Qx[\beta \land \alpha (x)]\leftrightarrow (\beta \land Qx\alpha (x)).$
• $\forall x\,[\alpha (x)\land \gamma (x)]\leftrightarrow (\forall x\,\alpha (x)\land \forall x\,\gamma (x)).$
The following conditional sentences can also be taken as rules of passage:
• $\exists x[\alpha (x)\land \gamma (x)]\rightarrow (\exists x\alpha (x)\land \exists x\gamma (x)).$
• $(\forall x\,\alpha (x)\lor \forall x\,\gamma (x))\rightarrow \forall x\,[\alpha (x)\lor \gamma (x)].$
• $(\exists x\,\alpha (x)\land \forall x\,\gamma (x))\rightarrow \exists x\,[\alpha (x)\land \gamma (x)].$
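Since the rules are first-order validities, each can be brute-force checked over a small finite domain; the Python sketch below (domain size 3 is an arbitrary choice) verifies two of the biconditionals for every predicate α and every truth value of β:

```python
from itertools import product

# Brute-force check of two rules of passage over a small finite domain,
# with alpha ranging over all predicates and beta over both truth values.
D = range(3)   # a 3-element domain

for beta in (False, True):
    for alpha_bits in product((False, True), repeat=len(D)):
        alpha = dict(zip(D, alpha_bits))
        # forall x [beta or alpha(x)]  <->  beta or forall x alpha(x)
        lhs = all(beta or alpha[x] for x in D)
        rhs = beta or all(alpha[x] for x in D)
        assert lhs == rhs
        # exists x [beta and alpha(x)]  <->  beta and exists x alpha(x)
        lhs = any(beta and alpha[x] for x in D)
        rhs = beta and any(alpha[x] for x in D)
        assert lhs == rhs
print("all cases pass")
```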
"Rules of passage" first appeared in French, in the writings of Jacques Herbrand. Quine employed the English translation of the phrase in each edition of his Methods of Logic, starting in 1950.
See also
• First-order logic
• Prenex normal form
• Quantifier
References
• Willard Quine, 1982. Methods of Logic, 4th ed. Harvard Univ. Press.
• Jean Van Heijenoort, 1967. From Frege to Gödel: A Source Book on Mathematical Logic. Harvard Univ. Press.
External links
• Stanford Encyclopedia of Philosophy: "Classical Logic" by Stewart Shapiro.
Adolfo Rumbos
Adolfo J. Rumbos is an American mathematician whose research interests include nonlinear analysis and boundary value problems.[1] He is the Joseph N. Fiske Professor of Mathematics at Pomona College in Claremont, California.[1]
References
1. "Adolfo Rumbos". Pomona College. June 2015. Retrieved August 26, 2021.
External links
• Faculty page at Pomona College
Taut foliation
In mathematics, tautness is a rigidity property of foliations. A taut foliation is a codimension 1 foliation of a closed manifold with the property that every leaf meets a transverse circle.[1]: 155 By a transverse circle is meant a closed loop that is always transverse to the tangent field of the foliation.
If the foliated manifold has non-empty tangential boundary, then a codimension 1 foliation is taut if every leaf meets a transverse circle or a transverse arc with endpoints on the tangential boundary. Equivalently, by a result of Dennis Sullivan, a codimension 1 foliation is taut if there exists a Riemannian metric that makes each leaf a minimal surface. Furthermore, for compact manifolds the existence, for every leaf $L$, of a transverse circle meeting $L$, implies the existence of a single transverse circle meeting every leaf.
Taut foliations were brought to prominence by the work of William Thurston and David Gabai.
Relation to Reebless foliations
Taut foliations are closely related to the concept of Reebless foliation. A taut foliation cannot have a Reeb component, since the component would act like a "dead-end" from which a transverse curve could never escape; consequently, the boundary torus of the Reeb component has no transverse circle puncturing it. A Reebless foliation can fail to be taut but the only leaves of the foliation with no puncturing transverse circle must be compact, and in particular, homeomorphic to a torus.
Properties
The existence of a taut foliation implies various useful properties about a closed 3-manifold. For example, a closed, orientable 3-manifold, which admits a taut foliation with no sphere leaf, must be irreducible, covered by $\mathbb {R} ^{3}$, and have negatively curved fundamental group.
Rummler–Sullivan theorem
By a theorem of Hansklaus Rummler and Dennis Sullivan, the following conditions are equivalent for transversely orientable codimension one foliations $\left(M,{\mathcal {F}}\right)$ of closed, orientable, smooth manifolds M:[2][1]: 158
• ${\mathcal {F}}$ is taut;
• there is a flow transverse to ${\mathcal {F}}$ which preserves some volume form on M;
• there is a Riemannian metric on M for which the leaves of ${\mathcal {F}}$ are least area surfaces.
References
1. Calegari, Danny (2007). Foliations and the Geometry of 3-Manifolds. Clarendon Press.
2. Alvarez Lopez, Jesús A. (1990). "On Riemannian foliations with minimal leaves". Annales de l'Institut Fourier. 40 (1): 163–176.
Alternated hexagonal tiling honeycomb
In three-dimensional hyperbolic geometry, the alternated hexagonal tiling honeycomb, h{6,3,3}, or , is a semiregular tessellation with tetrahedron and triangular tiling cells arranged in a truncated tetrahedron vertex figure. It is named after its construction, as an alternation of the hexagonal tiling honeycomb.
Alternated hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Semiregular honeycomb
Schläfli symbolsh{6,3,3}
s{3,6,3}
2s{6,3,6}
2s{6,3[3]}
s{3[3,3]}
Coxeter diagrams ↔
↔
↔ ↔
Cells{3,3}
{3[3]}
Facestriangle {3}
Vertex figure
truncated tetrahedron
Coxeter groups${\overline {P}}_{3}$, [3,3[3]]
1/2 ${\overline {V}}_{3}$, [6,3,3]
1/2 ${\overline {Y}}_{3}$, [3,6,3]
1/2 ${\overline {Z}}_{3}$, [6,3,6]
1/2 ${\overline {VP}}_{3}$, [6,3[3]]
1/2 ${\overline {PP}}_{3}$, [3[3,3]]
PropertiesVertex-transitive, edge-transitive, quasiregular
A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions.
Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space.
Symmetry constructions
It has five alternated constructions from reflectional Coxeter groups all with four mirrors and only the first being regular: [6,3,3], [3,6,3], [6,3,6], [6,3[3]] and [3[3,3]] , having 1, 4, 6, 12 and 24 times larger fundamental domains respectively. In Coxeter notation subgroup markups, they are related as: [6,(3,3)*] (remove 3 mirrors, index 24 subgroup); [3,6,3*] or [3*,6,3] (remove 2 mirrors, index 6 subgroup); [1+,6,3,6,1+] (remove two orthogonal mirrors, index 4 subgroup); all of these are isomorphic to [3[3,3]]. The ringed Coxeter diagrams are , , , and , representing different types (colors) of hexagonal tilings in the Wythoff construction.
Related honeycombs
The alternated hexagonal tiling honeycomb has 3 related forms: the cantic hexagonal tiling honeycomb, ; the runcic hexagonal tiling honeycomb, ; and the runcicantic hexagonal tiling honeycomb, .
Cantic hexagonal tiling honeycomb
Cantic hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolsh2{6,3,3}
Coxeter diagrams ↔
Cellsr{3,3}
t{3,3}
h2{6,3}
Facestriangle {3}
hexagon {6}
Vertex figure
wedge
Coxeter groups${\overline {P}}_{3}$, [3,3[3]]
PropertiesVertex-transitive
The cantic hexagonal tiling honeycomb, h2{6,3,3}, or , is composed of octahedron, truncated tetrahedron, and trihexagonal tiling facets, with a wedge vertex figure.
Runcic hexagonal tiling honeycomb
Runcic hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolsh3{6,3,3}
Coxeter diagrams ↔
Cells{3,3}
{}x{3}
rr{3,3}
{3[3]}
Facestriangle {3}
square {4}
hexagon {6}
Vertex figure
triangular cupola
Coxeter groups${\overline {P}}_{3}$, [3,3[3]]
PropertiesVertex-transitive
The runcic hexagonal tiling honeycomb, h3{6,3,3}, or , has tetrahedron, triangular prism, cuboctahedron, and triangular tiling facets, with a triangular cupola vertex figure.
Runcicantic hexagonal tiling honeycomb
Runcicantic hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolsh2,3{6,3,3}
Coxeter diagrams ↔
Cellst{3,3}
{}x{3}
tr{3,3}
h2{6,3}
Facestriangle {3}
square {4}
hexagon {6}
Vertex figure
rectangular pyramid
Coxeter groups${\overline {P}}_{3}$, [3,3[3]]
PropertiesVertex-transitive
The runcicantic hexagonal tiling honeycomb, h2,3{6,3,3}, or , has truncated tetrahedron, triangular prism, truncated octahedron, and trihexagonal tiling facets, with a rectangular pyramid vertex figure.
See also
• Convex uniform honeycombs in hyperbolic space
• Regular tessellations of hyperbolic 3-space
• Paracompact uniform honeycombs
• Semiregular honeycomb
• Hexagonal tiling honeycomb
References
• Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. ISBN 0-486-61480-8. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
• The Beauty of Geometry: Twelve Essays (1999), Dover Publications, LCCN 99-35678, ISBN 0-486-40919-8 (Chapter 10, Regular Honeycombs in Hyperbolic Space Archived 2016-06-10 at the Wayback Machine) Table III
• Jeffrey R. Weeks The Shape of Space, 2nd edition ISBN 0-8247-0709-5 (Chapters 16–17: Geometries on Three-manifolds I,II)
• N. W. Johnson, R. Kellerhals, J. G. Ratcliffe, S. T. Tschantz, The size of a hyperbolic Coxeter simplex, Transformation Groups (1999), Volume 4, Issue 4, pp 329–353
• N. W. Johnson, R. Kellerhals, J. G. Ratcliffe, S. T. Tschantz, Commensurability classes of hyperbolic Coxeter groups, (2002) H3: p130.
Order-4 hexagonal tiling honeycomb
In the field of hyperbolic geometry, the order-4 hexagonal tiling honeycomb arises as one of 11 regular paracompact honeycombs in 3-dimensional hyperbolic space. It is paracompact because it has cells composed of an infinite number of faces. Each cell is a hexagonal tiling whose vertices lie on a horosphere: a flat plane in hyperbolic space that approaches a single ideal point at infinity.
Order-4 hexagonal tiling honeycomb
Perspective projection view
within Poincaré disk model
TypeHyperbolic regular honeycomb
Paracompact uniform honeycomb
Schläfli symbols{6,3,4}
{6,31,1}
t0,1{(3,6)2}
Coxeter diagrams
↔
↔
↔
Cells{6,3}
Faceshexagon {6}
Edge figuresquare {4}
Vertex figure
octahedron
DualOrder-6 cubic honeycomb
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
${\overline {DV}}_{3}$, [6,31,1]
${\widehat {VV}}_{3}$, [(6,3)[2]]
PropertiesRegular, quasiregular
The Schläfli symbol of the order-4 hexagonal tiling honeycomb is {6,3,4}. Since that of the hexagonal tiling is {6,3}, this honeycomb has four such hexagonal tilings meeting at each edge. Since the Schläfli symbol of the octahedron is {3,4}, the vertex figure of this honeycomb is an octahedron. Thus, eight hexagonal tilings meet at each vertex of this honeycomb, and the six edges meeting at each vertex lie along three orthogonal axes.[1]
Images
Perspective projection
One cell, viewed from outside the Poincaré sphere
The vertices of a t{(3,∞,3)} tiling exist as a 2-hypercycle within this honeycomb
The honeycomb is analogous to the H2 order-4 apeirogonal tiling, {∞,4}, shown here with one green apeirogon outlined by its horocycle
Symmetry
The order-4 hexagonal tiling honeycomb has three reflective simplex symmetry constructions.
The half-symmetry uniform construction {6,31,1} has two types (colors) of hexagonal tilings, with Coxeter diagram ↔ . A quarter-symmetry construction also exists, with four colors of hexagonal tilings: .
An additional two reflective symmetries exist with non-simplectic fundamental domains: [6,3*,4], which is index 6, with Coxeter diagram ; and [6,(3,4)*], which is index 48. The latter has a cubic fundamental domain, and an octahedral Coxeter diagram with three axial infinite branches: . It can be seen as using eight colors to color the hexagonal tilings of the honeycomb.
The order-4 hexagonal tiling honeycomb contains , which tile 2-hypercycle surfaces and are similar to the truncated infinite-order triangular tiling, :
Related polytopes and honeycombs
The order-4 hexagonal tiling honeycomb is a regular hyperbolic honeycomb in 3-space, and one of 11 which are paracompact.
11 paracompact regular honeycombs
{6,3,3}
{6,3,4}
{6,3,5}
{6,3,6}
{4,4,3}
{4,4,4}
{3,3,6}
{4,3,6}
{5,3,6}
{3,6,3}
{3,4,4}
There are fifteen uniform honeycombs in the [6,3,4] Coxeter group family, including this regular form, and its dual, the order-6 cubic honeycomb.
[6,3,4] family honeycombs
{6,3,4} r{6,3,4} t{6,3,4} rr{6,3,4} t0,3{6,3,4} tr{6,3,4} t0,1,3{6,3,4} t0,1,2,3{6,3,4}
{4,3,6} r{4,3,6} t{4,3,6} rr{4,3,6} 2t{4,3,6} tr{4,3,6} t0,1,3{4,3,6} t0,1,2,3{4,3,6}
The order-4 hexagonal tiling honeycomb has a related alternated honeycomb, ↔ , with triangular tiling and octahedron cells.
It is a part of sequence of regular honeycombs of the form {6,3,p}, all of which are composed of hexagonal tiling cells:
{6,3,p} honeycombs
Space H3
Form Paracompact Noncompact
Name {6,3,3} {6,3,4} {6,3,5} {6,3,6} {6,3,7} {6,3,8} ... {6,3,∞}
Coxeter
Image
Vertex
figure
{3,p}
{3,3}
{3,4}
{3,5}
{3,6}
{3,7}
{3,8}
{3,∞}
This honeycomb is also related to the 16-cell, cubic honeycomb and order-4 dodecahedral honeycomb, all of which have octahedral vertex figures.
{p,3,4} regular honeycombs
Space S3 E3 H3
Form Finite Affine Compact Paracompact Noncompact
Name {3,3,4}
{4,3,4}
{5,3,4}
{6,3,4}
{7,3,4}
{8,3,4}
... {∞,3,4}
Image
Cells
{3,3}
{4,3}
{5,3}
{6,3}
{7,3}
{8,3}
{∞,3}
The aforementioned honeycombs are also quasiregular:
Regular and Quasiregular honeycombs: {p,3,4} and {p,31,1}
Space Euclidean 4-space Euclidean 3-space Hyperbolic 3-space
Name {3,3,4}
{3,31,1} = $\left\{3,{3 \atop 3}\right\}$
{4,3,4}
{4,31,1} = $\left\{4,{3 \atop 3}\right\}$
{5,3,4}
{5,31,1} = $\left\{5,{3 \atop 3}\right\}$
{6,3,4}
{6,31,1} = $\left\{6,{3 \atop 3}\right\}$
Coxeter
diagram
= = = =
Image
Cells
{p,3}
Rectified order-4 hexagonal tiling honeycomb
Rectified order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolsr{6,3,4} or t1{6,3,4}
Coxeter diagrams
↔
↔
↔
Cells{3,4}
r{6,3}
Facestriangle {3}
hexagon {6}
Vertex figure
square prism
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
${\overline {BP}}_{3}$, [4,3[3]]
${\overline {DV}}_{3}$, [6,31,1]
${\overline {DP}}_{3}$, [3[]×[]]
PropertiesVertex-transitive, edge-transitive
The rectified order-4 hexagonal tiling honeycomb, t1{6,3,4}, has octahedral and trihexagonal tiling facets, with a square prism vertex figure.
It is similar to the 2D hyperbolic tetraapeirogonal tiling, r{∞,4}, which alternates apeirogonal and square faces:
Truncated order-4 hexagonal tiling honeycomb
Truncated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolt{6,3,4} or t0,1{6,3,4}
Coxeter diagram
↔
Cells{3,4}
t{6,3}
Facestriangle {3}
dodecagon {12}
Vertex figure
square pyramid
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
${\overline {DV}}_{3}$, [6,31,1]
PropertiesVertex-transitive
The truncated order-4 hexagonal tiling honeycomb, t0,1{6,3,4}, has octahedron and truncated hexagonal tiling facets, with a square pyramid vertex figure.
It is similar to the 2D hyperbolic truncated order-4 apeirogonal tiling, t{∞,4}, with apeirogonal and square faces:
Bitruncated order-4 hexagonal tiling honeycomb
Bitruncated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbol2t{6,3,4} or t1,2{6,3,4}
Coxeter diagram
↔
↔
↔
Cellst{4,3}
t{3,6}
Facessquare {4}
hexagon {6}
Vertex figure
digonal disphenoid
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
${\overline {BP}}_{3}$, [4,3[3]]
${\overline {DV}}_{3}$, [6,31,1]
${\overline {DP}}_{3}$, [3[]×[]]
PropertiesVertex-transitive
The bitruncated order-4 hexagonal tiling honeycomb, t1,2{6,3,4}, has truncated octahedron and hexagonal tiling cells, with a digonal disphenoid vertex figure.
Cantellated order-4 hexagonal tiling honeycomb
Cantellated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolrr{6,3,4} or t0,2{6,3,4}
Coxeter diagram
↔
Cellsr{3,4}
{}x{4}
rr{6,3}
Facestriangle {3}
square {4}
hexagon {6}
Vertex figure
wedge
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
${\overline {DV}}_{3}$, [6,31,1]
PropertiesVertex-transitive
The cantellated order-4 hexagonal tiling honeycomb, t0,2{6,3,4}, has cuboctahedron, cube, and rhombitrihexagonal tiling cells, with a wedge vertex figure.
Cantitruncated order-4 hexagonal tiling honeycomb
Cantitruncated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symboltr{6,3,4} or t0,1,2{6,3,4}
Coxeter diagram
↔
Cellst{3,4}
{}x{4}
tr{6,3}
Facessquare {4}
hexagon {6}
dodecagon {12}
Vertex figure
mirrored sphenoid
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
${\overline {DV}}_{3}$, [6,31,1]
PropertiesVertex-transitive
The cantitruncated order-4 hexagonal tiling honeycomb, t0,1,2{6,3,4}, has truncated octahedron, cube, and truncated trihexagonal tiling cells, with a mirrored sphenoid vertex figure.
Runcinated order-4 hexagonal tiling honeycomb
Runcinated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolt0,3{6,3,4}
Coxeter diagram
↔
Cells{4,3}
{}x{4}
{6,3}
{}x{6}
Facessquare {4}
hexagon {6}
Vertex figure
irregular triangular antiprism
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
PropertiesVertex-transitive
The runcinated order-4 hexagonal tiling honeycomb, t0,3{6,3,4}, has cube, hexagonal tiling and hexagonal prism cells, with an irregular triangular antiprism vertex figure.
It contains the 2D hyperbolic rhombitetrahexagonal tiling, rr{4,6}, with square and hexagonal faces. The tiling also has a half symmetry construction .
=
Runcitruncated order-4 hexagonal tiling honeycomb
Runcitruncated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolt0,1,3{6,3,4}
Coxeter diagram
Cellsrr{3,4}
{}x{4}
{}x{12}
t{6,3}
Facestriangle {3}
square {4}
dodecagon {12}
Vertex figure
isosceles-trapezoidal pyramid
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
PropertiesVertex-transitive
The runcitruncated order-4 hexagonal tiling honeycomb, t0,1,3{6,3,4}, has rhombicuboctahedron, cube, dodecagonal prism, and truncated hexagonal tiling cells, with an isosceles-trapezoidal pyramid vertex figure.
Runcicantellated order-4 hexagonal tiling honeycomb
The runcicantellated order-4 hexagonal tiling honeycomb is the same as the runcitruncated order-6 cubic honeycomb.
Omnitruncated order-4 hexagonal tiling honeycomb
Omnitruncated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolt0,1,2,3{6,3,4}
Coxeter diagram
Cellstr{4,3}
tr{6,3}
{}x{12}
{}x{8}
Facessquare {4}
hexagon {6}
octagon {8}
dodecagon {12}
Vertex figure
irregular tetrahedron
Coxeter groups${\overline {BV}}_{3}$, [4,3,6]
PropertiesVertex-transitive
The omnitruncated order-4 hexagonal tiling honeycomb, t0,1,2,3{6,3,4}, has truncated cuboctahedron, truncated trihexagonal tiling, dodecagonal prism, and octagonal prism cells, with an irregular tetrahedron vertex figure.
Alternated order-4 hexagonal tiling honeycomb
Alternated order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Semiregular honeycomb
Schläfli symbolsh{6,3,4}
Coxeter diagrams ↔
Cells{3[3]}
{3,4}
Facestriangle {3}
Vertex figure
truncated octahedron
Coxeter groups${\overline {BP}}_{3}$, [4,3[3]]
PropertiesVertex-transitive, edge-transitive, quasiregular
The alternated order-4 hexagonal tiling honeycomb, ↔ , is composed of triangular tiling and octahedron cells, in a truncated octahedron vertex figure.
Cantic order-4 hexagonal tiling honeycomb
Cantic order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolsh2{6,3,4}
Coxeter diagrams ↔
Cellsh2{6,3}
t{3,4}
r{3,4}
Facestriangle {3}
square {4}
hexagon {6}
Vertex figure
wedge
Coxeter groups${\overline {BP}}_{3}$, [4,3[3]]
PropertiesVertex-transitive
The cantic order-4 hexagonal tiling honeycomb, ↔ , is composed of trihexagonal tiling, truncated octahedron, and cuboctahedron cells, with a wedge vertex figure.
Runcic order-4 hexagonal tiling honeycomb
Runcic order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolsh3{6,3,4}
Coxeter diagrams ↔
Cells{3[3]}
rr{3,4}
{4,3}
{}x{3}
Facestriangle {3}
square {4}
Vertex figure
triangular cupola
Coxeter groups${\overline {BP}}_{3}$, [4,3[3]]
PropertiesVertex-transitive
The runcic order-4 hexagonal tiling honeycomb, ↔ , is composed of triangular tiling, rhombicuboctahedron, cube, and triangular prism cells, with a triangular cupola vertex figure.
Runcicantic order-4 hexagonal tiling honeycomb
Runcicantic order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolsh2,3{6,3,4}
Coxeter diagrams ↔
Cellsh2{6,3}
tr{3,4}
t{4,3}
{}x{3}
Facestriangle {3}
square {4}
hexagon {6}
octagon {8}
Vertex figure
rectangular pyramid
Coxeter groups${\overline {BP}}_{3}$, [4,3[3]]
PropertiesVertex-transitive
The runcicantic order-4 hexagonal tiling honeycomb, ↔ , is composed of trihexagonal tiling, truncated cuboctahedron, truncated cube, and triangular prism cells, with a rectangular pyramid vertex figure.
Quarter order-4 hexagonal tiling honeycomb
Quarter order-4 hexagonal tiling honeycomb
TypeParacompact uniform honeycomb
Schläfli symbolq{6,3,4}
Coxeter diagram ↔
Cells{3[3]}
{3,3}
t{3,3}
h2{6,3}
Facestriangle {3}
hexagon {6}
Vertex figure
triangular cupola
Coxeter groups${\overline {DP}}_{3}$, [3[]x[]]
PropertiesVertex-transitive
The quarter order-4 hexagonal tiling honeycomb, q{6,3,4}, or , is composed of triangular tiling, trihexagonal tiling, tetrahedron, and truncated tetrahedron cells, with a triangular cupola vertex figure.
See also
• Convex uniform honeycombs in hyperbolic space
• Regular tessellations of hyperbolic 3-space
• Paracompact uniform honeycombs
References
1. Coxeter The Beauty of Geometry, 1999, Chapter 10, Table III
• Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. ISBN 0-486-61480-8. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
• The Beauty of Geometry: Twelve Essays (1999), Dover Publications, LCCN 99-35678, ISBN 0-486-40919-8 (Chapter 10, Regular Honeycombs in Hyperbolic Space) Table III
• Jeffrey R. Weeks The Shape of Space, 2nd edition ISBN 0-8247-0709-5 (Chapter 16-17: Geometries on Three-manifolds I,II)
• Norman Johnson Uniform Polytopes, Manuscript
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
• N.W. Johnson: Geometries and Transformations, (2018) Chapter 13: Hyperbolic Coxeter groups
Runcicantellated 24-cell honeycomb
In four-dimensional Euclidean geometry, the runcicantellated 24-cell honeycomb is a uniform space-filling honeycomb.
TypeUniform 4-honeycomb
Schläfli symbolst0,2,3{3,4,3,3}
s2,3{3,4,3,3}
Coxeter diagrams
4-face typet0,1,3{3,3,4}
2t{3,3,4}
{3}×{3}
t{3,3}×{}
Cell type
Face type
Vertex figure
Coxeter groups${\tilde {F}}_{4}$, [3,4,3,3]
PropertiesVertex transitive
Alternate names
• Runcicantellated icositetrachoric tetracomb/honeycomb
• Prismatorhombated icositetrachoric tetracomb (pricot)
• Great diprismatodisicositetrachoric tetracomb
Related honeycombs
The [3,4,3,3], , Coxeter group generates 31 permutations of uniform tessellations; 28 are unique in this family and ten are shared in the [4,3,3,4] and [4,3,31,1] families. The alternation (13) is also repeated in other families.
F4 honeycombs
Extended
symmetry
Extended
diagram
OrderHoneycombs
[3,3,4,3]×1
1, 3, 5, 6, 8,
9, 10, 11, 12
[3,4,3,3]×1
2, 4, 7, 13,
14, 15, 16, 17,
18, 19, 20, 21,
22, 23, 24, 25,
26, 27, 28, 29
[(3,3)[3,3,4,3*]]
=[(3,3)[31,1,1,1]]
=[3,4,3,3]
=
=
×4
(2), (4), (7), (13)
See also
Regular and uniform honeycombs in 4-space:
• Tesseractic honeycomb
• 16-cell honeycomb
• 24-cell honeycomb
• Rectified 24-cell honeycomb
• Snub 24-cell honeycomb
• 5-cell honeycomb
• Truncated 5-cell honeycomb
• Omnitruncated 5-cell honeycomb
References
• Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8 p. 296, Table II: Regular honeycombs
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 118
• Klitzing, Richard. "4D Euclidean tesselations". o3x3x4o3x - apricot - O118
Fundamental convex regular and uniform honeycombs in dimensions 2–9
Space Family ${\tilde {A}}_{n-1}$ ${\tilde {C}}_{n-1}$ ${\tilde {B}}_{n-1}$ ${\tilde {D}}_{n-1}$ ${\tilde {G}}_{2}$ / ${\tilde {F}}_{4}$ / ${\tilde {E}}_{n-1}$
E2 Uniform tiling {3[3]} δ3 hδ3 qδ3 Hexagonal
E3 Uniform convex honeycomb {3[4]} δ4 hδ4 qδ4
E4 Uniform 4-honeycomb {3[5]} δ5 hδ5 qδ5 24-cell honeycomb
E5 Uniform 5-honeycomb {3[6]} δ6 hδ6 qδ6
E6 Uniform 6-honeycomb {3[7]} δ7 hδ7 qδ7 222
E7 Uniform 7-honeycomb {3[8]} δ8 hδ8 qδ8 133 • 331
E8 Uniform 8-honeycomb {3[9]} δ9 hδ9 qδ9 152 • 251 • 521
E9 Uniform 9-honeycomb {3[10]} δ10 hδ10 qδ10
E10 Uniform 10-honeycomb {3[11]} δ11 hδ11 qδ11
En-1 Uniform (n-1)-honeycomb {3[n]} δn hδn qδn 1k2 • 2k1 • k21
Runcic 5-cubes
In five-dimensional geometry, a runcic 5-cube (also called a runcic 5-demicube or runcihalf 5-cube) is a convex uniform 5-polytope. There are 2 runcic forms for the 5-cube. Runcic 5-cubes have half the vertices of runcinated 5-cubes.
5-cube
Runcic 5-cube
=
5-demicube
=
Runcicantic 5-cube
=
Orthogonal projections in B5 Coxeter plane
Runcic 5-cube
Typeuniform 5-polytope
Schläfli symbolh3{4,3,3,3}
Coxeter-Dynkin diagram
4-faces42
Cells360
Faces880
Edges720
Vertices160
Vertex figure
Coxeter groupsD5, [32,1,1]
Propertiesconvex
Alternate names
• Cantellated 5-demicube/demipenteract
• Small rhombated hemipenteract (sirhin) (Jonathan Bowers)[1]
Cartesian coordinates
The Cartesian coordinates for the 160 vertices of a runcic 5-cube centered at the origin are coordinate permutations:
(±1,±1,±1,±3,±3)
with an odd number of plus signs.
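The vertex count implied by this rule can be checked directly. The short Python sketch below is illustrative only (it is not part of the article); it enumerates the signed coordinate permutations under the stated odd-plus-sign convention and recovers the 160 vertices given in the infobox.

```python
from itertools import permutations, product

def parity_signed_perms(base):
    """All distinct coordinate permutations of `base`, with signs chosen
    so that the number of plus signs is odd (the convention stated above)."""
    return {
        tuple(s * c for s, c in zip(signs, perm))
        for perm in set(permutations(base))
        for signs in product((1, -1), repeat=len(base))
        if signs.count(1) % 2 == 1
    }

print(len(parity_signed_perms((1, 1, 1, 3, 3))))  # 160
```

The same helper applied to the magnitudes (1,1,3,5,5) yields the 480 vertices of the runcicantic 5-cube described below.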
Images
orthographic projections
Coxeter plane B5
Graph
Dihedral symmetry [10/2]
Coxeter plane D5 D4
Graph
Dihedral symmetry [8] [6]
Coxeter plane D3 A3
Graph
Dihedral symmetry [4] [4]
Related polytopes
It has half the vertices of the runcinated 5-cube, as compared here in the B5 Coxeter plane projections:
Runcic 5-cube
Runcinated 5-cube
Runcic n-cubes
n45678
[1+,4,3n-2]
= [3,3n-3,1]
[1+,4,32]
= [3,31,1]
[1+,4,33]
= [3,32,1]
[1+,4,34]
= [3,33,1]
[1+,4,35]
= [3,34,1]
[1+,4,36]
= [3,35,1]
Runcic
figure
Coxeter
=
=
=
=
=
Schläfli h3{4,32} h3{4,33} h3{4,34} h3{4,35} h3{4,36}
Runcicantic 5-cube
Typeuniform 5-polytope
Schläfli symbolt0,1,2{3,32,1}
h3{4,33}
Coxeter-Dynkin diagram
4-faces42
Cells360
Faces1040
Edges1200
Vertices480
Vertex figure
Coxeter groupsD5, [32,1,1]
Propertiesconvex
Alternate names
• Cantitruncated 5-demicube/demipenteract
• Great rhombated hemipenteract (girhin) (Jonathan Bowers)[2]
Cartesian coordinates
The Cartesian coordinates for the 480 vertices of a runcicantic 5-cube centered at the origin are coordinate permutations:
(±1,±1,±3,±5,±5)
with an odd number of plus signs.
Images
orthographic projections
Coxeter plane B5
Graph
Dihedral symmetry [10/2]
Coxeter plane D5 D4
Graph
Dihedral symmetry [8] [6]
Coxeter plane D3 A3
Graph
Dihedral symmetry [4] [4]
Related polytopes
It has half the vertices of the runcicantellated 5-cube, as compared here in the B5 Coxeter plane projections:
Runcicantic 5-cube
Runcicantellated 5-cube
Related polytopes
This polytope is based on the 5-demicube, part of a dimensional family of uniform polytopes called demihypercubes, formed by alternating the hypercube family.
There are 23 uniform 5-polytopes that can be constructed from the D5 symmetry of the 5-demicube; 8 are unique to this family, and 15 are shared within the 5-cube family.
D5 polytopes
h{4,3,3,3}
h2{4,3,3,3}
h3{4,3,3,3}
h4{4,3,3,3}
h2,3{4,3,3,3}
h2,4{4,3,3,3}
h3,4{4,3,3,3}
h2,3,4{4,3,3,3}
Notes
1. Klitzing, (x3o3o *b3x3o - sirhin)
2. Klitzing, (x3x3o *b3x3o - girhin)
References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
• Klitzing, Richard. "5D uniform polytopes (polytera)". x3o3o *b3x3o - sirhin, x3x3o *b3x3o - girhin
External links
• Weisstein, Eric W. "Hypercube". MathWorld.
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
Fundamental convex regular and uniform polytopes in dimensions 2–10
Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn
Regular polygon Triangle Square p-gon Hexagon Pentagon
Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron
Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell
Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube
Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221
Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321
Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421
Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube
Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube
Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope
Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Runcic 6-cubes
In six-dimensional geometry, a runcic 6-cube is a convex uniform 6-polytope. There are 2 unique runcic forms for the 6-cube.
6-demicube
=
Runcic 6-cube
=
Runcicantic 6-cube
=
Orthogonal projections in D6 Coxeter plane
Runcic 6-cube
Typeuniform 6-polytope
Schläfli symbolt0,2{3,33,1}
h3{4,34}
Coxeter-Dynkin diagram =
5-faces
4-faces
Cells
Faces
Edges3840
Vertices640
Vertex figure
Coxeter groupsD6, [33,1,1]
Propertiesconvex
Alternate names
• Cantellated 6-demicube/demihexeract
• Small rhombated hemihexeract (Acronym sirhax) (Jonathan Bowers)[1]
Cartesian coordinates
The Cartesian coordinates for the vertices of a runcic 6-cube centered at the origin are coordinate permutations:
(±1,±1,±1,±3,±3,±3)
with an odd number of plus signs.
Images
orthographic projections
Coxeter plane B6
Graph
Dihedral symmetry [12/2]
Coxeter plane D6 D5
Graph
Dihedral symmetry [10] [8]
Coxeter plane D4 D3
Graph
Dihedral symmetry [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Related polytopes
Runcic n-cubes
n45678
[1+,4,3n-2]
= [3,3n-3,1]
[1+,4,32]
= [3,31,1]
[1+,4,33]
= [3,32,1]
[1+,4,34]
= [3,33,1]
[1+,4,35]
= [3,34,1]
[1+,4,36]
= [3,35,1]
Runcic
figure
Coxeter
=
=
=
=
=
Schläfli h3{4,32} h3{4,33} h3{4,34} h3{4,35} h3{4,36}
Runcicantic 6-cube
Typeuniform 6-polytope
Schläfli symbolt0,1,2{3,33,1}
h2,3{4,34}
Coxeter-Dynkin diagram =
5-faces
4-faces
Cells
Faces
Edges5760
Vertices1920
Vertex figure
Coxeter groupsD6, [33,1,1]
Propertiesconvex
Alternate names
• Cantitruncated 6-demicube/demihexeract
• Great rhombated hemihexeract (Acronym girhax) (Jonathan Bowers)[2]
Cartesian coordinates
The Cartesian coordinates for the vertices of a runcicantic 6-cube centered at the origin are coordinate permutations:
(±1,±1,±3,±5,±5,±5)
with an odd number of plus signs.
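The 1920 vertices in the infobox follow from this rule by simple counting: the number of distinct orderings of the magnitudes times the number of admissible sign patterns. The sketch below is an illustration, not sourced from the article.

```python
from math import factorial

# Magnitudes (1,1,3,5,5,5): 6!/(2!*1!*3!) = 60 distinct orderings,
# each combined with 2**5 = 32 sign patterns having an odd number
# of plus signs.
orderings = factorial(6) // (factorial(2) * factorial(1) * factorial(3))
vertex_count = orderings * 2 ** 5
print(orderings, vertex_count)  # 60 1920
```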
Images
orthographic projections
Coxeter plane B6
Graph
Dihedral symmetry [12/2]
Coxeter plane D6 D5
Graph
Dihedral symmetry [10] [8]
Coxeter plane D4 D3
Graph
Dihedral symmetry [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Related polytopes
This polytope is based on the 6-demicube, part of a dimensional family of uniform polytopes called demihypercubes, formed by alternating the hypercube family.
There are 47 uniform polytopes with D6 symmetry; 31 are shared by the B6 symmetry, and 16 are unique:
D6 polytopes
h{4,34}
h2{4,34}
h3{4,34}
h4{4,34}
h5{4,34}
h2,3{4,34}
h2,4{4,34}
h2,5{4,34}
h3,4{4,34}
h3,5{4,34}
h4,5{4,34}
h2,3,4{4,34}
h2,3,5{4,34}
h2,4,5{4,34}
h3,4,5{4,34}
h2,3,4,5{4,34}
Notes
1. Klitzing, (x3o3o *b3x3o3o - sirhax)
2. Klitzing, (x3x3o *b3x3o3o - girhax)
References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
• Klitzing, Richard. "6D uniform polytopes (polypeta)". x3o3o *b3x3o3o, x3x3o *b3x3o3o
External links
• Weisstein, Eric W. "Hypercube". MathWorld.
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
Fundamental convex regular and uniform polytopes in dimensions 2–10
Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn
Regular polygon Triangle Square p-gon Hexagon Pentagon
Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron
Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell
Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube
Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221
Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321
Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421
Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube
Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube
Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope
Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Runcinated 16-cell honeycomb
In four-dimensional Euclidean geometry, the runcinated 16-cell honeycomb is a uniform space-filling honeycomb. It can be seen as a runcination of the regular 16-cell honeycomb, containing rectified 24-cell, runcinated tesseract, cuboctahedral prism, and 3-3 duoprism cells.
TypeUniform 4-honeycomb
Schläfli symbolt03{3,3,4,3}
Coxeter-Dynkin diagrams
4-face typer{3,4,3}
t03{4,3,3}
r{3,4}×{}
{3}×{3}
Cell typer{3,4}
{4,3}
{3,3}
Face type{3}, {4}
Vertex figure
Coxeter groups${\tilde {F}}_{4}$, [3,4,3,3]
PropertiesVertex transitive
Alternate names
• Runcinated hexadecachoric tetracomb/honeycomb
• Small prismated demitesseractic tetracomb (spaht)
• Small disicositetrachoric tetracomb
Related honeycombs
The [3,4,3,3], , Coxeter group generates 31 permutations of uniform tessellations; 28 are unique in this family and ten are shared in the [4,3,3,4] and [4,3,31,1] families. The alternation (13) is also repeated in other families.
F4 honeycombs
Extended
symmetry
Extended
diagram
OrderHoneycombs
[3,3,4,3]×1
1, 3, 5, 6, 8,
9, 10, 11, 12
[3,4,3,3]×1
2, 4, 7, 13,
14, 15, 16, 17,
18, 19, 20, 21,
22, 23, 24, 25,
26, 27, 28, 29
[(3,3)[3,3,4,3*]]
=[(3,3)[31,1,1,1]]
=[3,4,3,3]
=
=
×4
(2), (4), (7), (13)
See also
Regular and uniform honeycombs in 4-space:
• Tesseractic honeycomb
• 16-cell honeycomb
• 24-cell honeycomb
• Rectified 24-cell honeycomb
• Snub 24-cell honeycomb
• 5-cell honeycomb
• Truncated 5-cell honeycomb
• Omnitruncated 5-cell honeycomb
References
• Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8 p. 296, Table II: Regular honeycombs
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 122
• Klitzing, Richard. "4D Euclidean tesselations". x3o3o4x3o - spaht - O122
Fundamental convex regular and uniform honeycombs in dimensions 2–9
Space Family ${\tilde {A}}_{n-1}$ ${\tilde {C}}_{n-1}$ ${\tilde {B}}_{n-1}$ ${\tilde {D}}_{n-1}$ ${\tilde {G}}_{2}$ / ${\tilde {F}}_{4}$ / ${\tilde {E}}_{n-1}$
E2 Uniform tiling {3[3]} δ3 hδ3 qδ3 Hexagonal
E3 Uniform convex honeycomb {3[4]} δ4 hδ4 qδ4
E4 Uniform 4-honeycomb {3[5]} δ5 hδ5 qδ5 24-cell honeycomb
E5 Uniform 5-honeycomb {3[6]} δ6 hδ6 qδ6
E6 Uniform 6-honeycomb {3[7]} δ7 hδ7 qδ7 222
E7 Uniform 7-honeycomb {3[8]} δ8 hδ8 qδ8 133 • 331
E8 Uniform 8-honeycomb {3[9]} δ9 hδ9 qδ9 152 • 251 • 521
E9 Uniform 9-honeycomb {3[10]} δ10 hδ10 qδ10
E10 Uniform 10-honeycomb {3[11]} δ11 hδ11 qδ11
En-1 Uniform (n-1)-honeycomb {3[n]} δn hδn qδn 1k2 • 2k1 • k21
Steric 7-cubes
In seven-dimensional geometry, a steric 7-cube (or runcinated 7-demicube) is a convex uniform 7-polytope, being a runcination of the uniform 7-demicube. There are 4 unique runcinations of the 7-demicube, including truncations and cantellations.
7-demicube
Steric 7-cube
Stericantic 7-cube
Steriruncic 7-cube
Steriruncicantic 7-cube
Orthogonal projections in D7 Coxeter plane
Steric 7-cube
Typeuniform 7-polytope
Schläfli symbolt0,3{3,34,1}
h4{4,35}
Coxeter-Dynkin diagram
5-faces
4-faces
Cells
Faces
Edges20160
Vertices2240
Vertex figure
Coxeter groupsD7, [34,1,1]
Propertiesconvex
Cartesian coordinates
The Cartesian coordinates for the vertices of a steric 7-cube centered at the origin are coordinate permutations:
(±1,±1,±1,±1,±3,±3,±3)
with an odd number of plus signs.
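Both a brute-force enumeration and the closed-form count agree with the 2240 vertices stated above; the following Python sketch (illustrative only, not from the source) cross-checks the two.

```python
from itertools import permutations, product
from math import comb

base = (1, 1, 1, 1, 3, 3, 3)  # steric 7-cube coordinate magnitudes
verts = {
    tuple(s * c for s, c in zip(signs, perm))
    for perm in set(permutations(base))
    for signs in product((1, -1), repeat=7)
    if signs.count(1) % 2 == 1  # odd number of plus signs
}
# Closed form: C(7,3) orderings of the magnitudes times 2**6 odd-parity signs
print(len(verts), comb(7, 3) * 2 ** 6)  # 2240 2240
```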
Images
orthographic projections
Coxeter
plane
B7 D7 D6
Graph
Dihedral
symmetry
[14/2] [12] [10]
Coxeter plane D5 D4 D3
Graph
Dihedral
symmetry
[8] [6] [4]
Coxeter
plane
A5 A3
Graph
Dihedral
symmetry
[6] [4]
Related polytopes
Dimensional family of steric n-cubes
n5678
[1+,4,3n-2]
= [3,3n-3,1]
[1+,4,33]
= [3,32,1]
[1+,4,34]
= [3,33,1]
[1+,4,35]
= [3,34,1]
[1+,4,36]
= [3,35,1]
Steric
figure
Coxeter
=
=
=
=
Schläfli h4{4,33} h4{4,34} h4{4,35} h4{4,36}
Stericantic 7-cube
Images
orthographic projections
Coxeter
plane
B7 D7 D6
Graph
Dihedral
symmetry
[14/2] [12] [10]
Coxeter plane D5 D4 D3
Graph
Dihedral
symmetry
[8] [6] [4]
Coxeter
plane
A5 A3
Graph
Dihedral
symmetry
[6] [4]
Steriruncic 7-cube
Images
orthographic projections
Coxeter
plane
B7 D7 D6
Graph
Dihedral
symmetry
[14/2] [12] [10]
Coxeter plane D5 D4 D3
Graph
Dihedral
symmetry
[8] [6] [4]
Coxeter
plane
A5 A3
Graph
Dihedral
symmetry
[6] [4]
Steriruncicantic 7-cube
Images
orthographic projections
Coxeter
plane
B7 D7 D6
Graph
Dihedral
symmetry
[14/2] [12] [10]
Coxeter plane D5 D4 D3
Graph
Dihedral
symmetry
[8] [6] [4]
Coxeter
plane
A5 A3
Graph
Dihedral
symmetry
[6] [4]
Related polytopes
This polytope is based on the 7-demicube, part of a dimensional family of uniform polytopes called demihypercubes, formed by alternating the hypercube family.
There are 95 uniform polytopes with D7 symmetry; 63 are shared by the B7 symmetry, and 32 are unique:
D7 polytopes
t0(141)
t0,1(141)
t0,2(141)
t0,3(141)
t0,4(141)
t0,5(141)
t0,1,2(141)
t0,1,3(141)
t0,1,4(141)
t0,1,5(141)
t0,2,3(141)
t0,2,4(141)
t0,2,5(141)
t0,3,4(141)
t0,3,5(141)
t0,4,5(141)
t0,1,2,3(141)
t0,1,2,4(141)
t0,1,2,5(141)
t0,1,3,4(141)
t0,1,3,5(141)
t0,1,4,5(141)
t0,2,3,4(141)
t0,2,3,5(141)
t0,2,4,5(141)
t0,3,4,5(141)
t0,1,2,3,4(141)
t0,1,2,3,5(141)
t0,1,2,4,5(141)
t0,1,3,4,5(141)
t0,2,3,4,5(141)
t0,1,2,3,4,5(141)
Notes
References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
• Klitzing, Richard. "7D uniform polytopes (polyexa)".
External links
• Weisstein, Eric W. "Hypercube". MathWorld.
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
Fundamental convex regular and uniform polytopes in dimensions 2–10
Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn
Regular polygon Triangle Square p-gon Hexagon Pentagon
Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron
Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell
Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube
Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221
Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321
Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421
Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube
Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube
Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope
Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Runcinated 7-orthoplexes
In seven-dimensional geometry, a runcinated 7-orthoplex is a convex uniform 7-polytope with 3rd order truncations (runcination) of the regular 7-orthoplex.
Orthogonal projections in B6 Coxeter plane
7-orthoplex
Runcinated 7-orthoplex
Biruncinated 7-orthoplex
Runcitruncated 7-orthoplex
Biruncitruncated 7-orthoplex
Runcicantellated 7-orthoplex
Biruncicantellated 7-orthoplex
Runcicantitruncated 7-orthoplex
Biruncicantitruncated 7-orthoplex
There are 16 unique runcinations of the 7-orthoplex with permutations of truncations and cantellations. Eight are more simply constructed from the 7-cube.
These polytopes are among 127 uniform 7-polytopes with B7 symmetry.
Runcinated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt0,3{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges23520
Vertices2240
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Small prismated hecatonicosoctaexon (acronym: spaz) (Jonathan Bowers)[1]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Biruncinated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt1,4{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges60480
Vertices6720
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Small biprismated hecatonicosoctaexon (Acronym sibpaz) (Jonathan Bowers)[2]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Runcitruncated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt0,1,3{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges50400
Vertices6720
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Prismatotruncated hecatonicosoctaexon (acronym: potaz) (Jonathan Bowers)[3]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph too complex
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Biruncitruncated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt1,2,4{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges120960
Vertices20160
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Biprismatotruncated hecatonicosoctaexon (acronym: baptize) (Jonathan Bowers)[4]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Runcicantellated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt0,2,3{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges33600
Vertices6720
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Prismatorhombated hecatonicosoctaexon (acronym: parz) (Jonathan Bowers)[5]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph too complex
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Biruncicantellated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt1,3,4{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges100800
Vertices20160
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Biprismatorhombated hecatonicosoctaexon (acronym: boparz) (Jonathan Bowers)[6]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Runcicantitruncated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt0,1,2,3{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges60480
Vertices13440
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Great prismated hecatonicosoctaexon (acronym: gopaz) (Jonathan Bowers)[7]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph too complex
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Biruncicantitruncated 7-orthoplex
Typeuniform 7-polytope
Schläfli symbolt1,2,3,4{35,4}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges161280
Vertices40320
Vertex figure
Coxeter groupsB7, [4,35]
Propertiesconvex
Alternate names
• Great biprismated hecatonicosoctaexon (acronym: gibpaz) (Jonathan Bowers)[8]
Images
orthographic projections
Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4
Graph
Dihedral symmetry [14] [12] [10]
Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3
Graph
Dihedral symmetry [8] [6] [4]
Coxeter plane A5 A3
Graph
Dihedral symmetry [6] [4]
Notes
1. Klitzing, (o3o3o3x3o3o4x - spaz)
2. Klitzing, (o3x3o3o3x3o4o - sibpaz)
3. Klitzing, (o3o3o3x3x3o4x - potaz)
4. Klitzing, (o3o3x3o3x3x4o - baptize)
5. Klitzing, (o3o3o3x3x3o4x - parz)
6. Klitzing, (o3x3o3x3x3o4o - boparz)
7. Klitzing, (o3o3o3x3x3x4x - gopaz)
8. Klitzing, (o3o3x3x3x3x3o - gibpaz)
References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 Wiley: Kaleidoscopes: Selected Writings of H.S.M. Coxeter
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
• Klitzing, Richard. "7D uniform polytopes (polyexa)". o3o3o3x3o3o4x - spaz, o3x3o3o3x3o4o - sibpaz, o3o3o3x3x3o4x - potaz, o3o3x3o3x3x4o - baptize, o3o3o3x3x3o4x - parz, o3x3o3x3x3o4o - boparz, o3o3o3x3x3x4x - gopaz, o3o3x3x3x3x3o - gibpaz
External links
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
Fundamental convex regular and uniform polytopes in dimensions 2–10
Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn
Regular polygon Triangle Square p-gon Hexagon Pentagon
Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron
Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell
Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube
Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221
Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321
Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421
Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube
Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube
Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope
Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Runcinated 7-simplexes
In seven-dimensional geometry, a runcinated 7-simplex is a convex uniform 7-polytope with 3rd order truncations (runcination) of the regular 7-simplex.
7-simplex
Runcinated 7-simplex
Biruncinated 7-simplex
Runcitruncated 7-simplex
Biruncitruncated 7-simplex
Runcicantellated 7-simplex
Biruncicantellated 7-simplex
Runcicantitruncated 7-simplex
Biruncicantitruncated 7-simplex
Orthogonal projections in A7 Coxeter plane
There are 8 unique runcinations of the 7-simplex with permutations of truncations and cantellations.
Runcinated 7-simplex
Typeuniform 7-polytope
Schläfli symbolt0,3{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges2100
Vertices280
Vertex figure
Coxeter groupA7, [36], order 40320
Propertiesconvex
Alternate names
• Small prismated octaexon (acronym: spo) (Jonathan Bowers)[1]
Coordinates
The vertices of the runcinated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,0,1,1,1,2). This construction is based on facets of the runcinated 8-orthoplex.
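Since there is no sign condition here, the vertex count is simply the number of distinct orderings of the multiset, 8!/(4!·3!·1!) = 280, matching the infobox. A brief Python check (illustrative, not from the source):

```python
from itertools import permutations

# Distinct orderings of (0,0,0,0,1,1,1,2); all 280 vertices lie in the
# hyperplane where the coordinates sum to 5.
verts = set(permutations((0, 0, 0, 0, 1, 1, 1, 2)))
print(len(verts))  # 280
```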
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Biruncinated 7-simplex
Typeuniform 7-polytope
Schläfli symbolt1,4{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges4200
Vertices560
Vertex figure
Coxeter groupA7, [36], order 40320
Propertiesconvex
Alternate names
• Small biprismated octaexon (sibpo) (Jonathan Bowers)[2]
Coordinates
The vertices of the biruncinated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,1,1,1,2,2). This construction is based on facets of the biruncinated 8-orthoplex.
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Runcitruncated 7-simplex
Typeuniform 7-polytope
Schläfli symbolt0,1,3{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges4620
Vertices840
Vertex figure
Coxeter groupA7, [36], order 40320
Propertiesconvex
Alternate names
• Prismatotruncated octaexon (acronym: patto) (Jonathan Bowers)[3]
Coordinates
The vertices of the runcitruncated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,0,1,1,2,3). This construction is based on facets of the runcitruncated 8-orthoplex.
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Biruncitruncated 7-simplex
Typeuniform 7-polytope
Schläfli symbolt1,2,4{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges8400
Vertices1680
Vertex figure
Coxeter groupA7, [36], order 40320
Propertiesconvex
Alternate names
• Biprismatotruncated octaexon (acronym: bipto) (Jonathan Bowers)[4]
Coordinates
The vertices of the biruncitruncated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,1,1,2,3,3). This construction is based on facets of the biruncitruncated 8-orthoplex.
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Runcicantellated 7-simplex
runcicantellated 7-simplex
Type uniform 7-polytope
Schläfli symbol t0,2,3{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges 3360
Vertices 840
Vertex figure
Coxeter group A7, [3⁶], order 40320
Properties convex
Alternate names
• Prismatorhombated octaexon (acronym: paro) (Jonathan Bowers)[5]
Coordinates
The vertices of the runcicantellated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,0,1,2,2,3). This construction is based on facets of the runcicantellated 8-orthoplex.
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Biruncicantellated 7-simplex
biruncicantellated 7-simplex
Type uniform 7-polytope
Schläfli symbol t1,3,4{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges
Vertices 1680
Vertex figure
Coxeter group A7, [3⁶], order 40320
Properties convex
Alternate names
• Biprismatorhombated octaexon (acronym: bipro) (Jonathan Bowers)
Coordinates
The vertices of the biruncicantellated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,1,2,2,3,3). This construction is based on facets of the biruncicantellated 8-orthoplex.
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Runcicantitruncated 7-simplex
runcicantitruncated 7-simplex
Type uniform 7-polytope
Schläfli symbol t0,1,2,3{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges 5880
Vertices 1680
Vertex figure
Coxeter group A7, [3⁶], order 40320
Properties convex
Alternate names
• Great prismated octaexon (acronym: gapo) (Jonathan Bowers)[6]
Coordinates
The vertices of the runcicantitruncated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,0,1,2,3,4). This construction is based on facets of the runcicantitruncated 8-orthoplex.
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Biruncicantitruncated 7-simplex
biruncicantitruncated 7-simplex
Type uniform 7-polytope
Schläfli symbol t1,2,3,4{3,3,3,3,3,3}
Coxeter-Dynkin diagrams
6-faces
5-faces
4-faces
Cells
Faces
Edges 11760
Vertices 3360
Vertex figure
Coxeter group A7, [3⁶], order 40320
Properties convex
Alternate names
• Great biprismated octaexon (acronym: gibpo) (Jonathan Bowers)[7]
Coordinates
The vertices of the biruncicantitruncated 7-simplex can be most simply positioned in 8-space as permutations of (0,0,0,1,2,3,4,4). This construction is based on facets of the biruncicantitruncated 8-orthoplex.
Images
orthographic projections
Ak Coxeter plane A7 A6 A5
Graph
Dihedral symmetry [8] [7] [6]
Ak Coxeter plane A4 A3 A2
Graph
Dihedral symmetry [5] [4] [3]
Related polytopes
These polytopes are among 71 uniform 7-polytopes with A7 symmetry.
A7 polytopes
t0
t1
t2
t3
t0,1
t0,2
t1,2
t0,3
t1,3
t2,3
t0,4
t1,4
t2,4
t0,5
t1,5
t0,6
t0,1,2
t0,1,3
t0,2,3
t1,2,3
t0,1,4
t0,2,4
t1,2,4
t0,3,4
t1,3,4
t2,3,4
t0,1,5
t0,2,5
t1,2,5
t0,3,5
t1,3,5
t0,4,5
t0,1,6
t0,2,6
t0,3,6
t0,1,2,3
t0,1,2,4
t0,1,3,4
t0,2,3,4
t1,2,3,4
t0,1,2,5
t0,1,3,5
t0,2,3,5
t1,2,3,5
t0,1,4,5
t0,2,4,5
t1,2,4,5
t0,3,4,5
t0,1,2,6
t0,1,3,6
t0,2,3,6
t0,1,4,6
t0,2,4,6
t0,1,5,6
t0,1,2,3,4
t0,1,2,3,5
t0,1,2,4,5
t0,1,3,4,5
t0,2,3,4,5
t1,2,3,4,5
t0,1,2,3,6
t0,1,2,4,6
t0,1,3,4,6
t0,2,3,4,6
t0,1,2,5,6
t0,1,3,5,6
t0,1,2,3,4,5
t0,1,2,3,4,6
t0,1,2,3,5,6
t0,1,2,4,5,6
t0,1,2,3,4,5,6
Notes
1. Klitzing, (x3o3o3x3o3o3o - spo)
2. Klitzing, (o3x3o3o3x3o3o - sibpo)
3. Klitzing, (x3x3o3x3o3o3o - patto)
4. Klitzing, (o3x3x3o3x3o3o - bipto)
5. Klitzing, (x3o3x3x3o3o3o - paro)
6. Klitzing, (x3x3x3x3o3o3o - gapo)
7. Klitzing, (o3x3x3x3x3o3o - gibpo)
References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D.
• Klitzing, Richard. "7D uniform polytopes (polyexa)". x3o3o3x3o3o3o - spo, o3x3o3o3x3o3o - sibpo, x3x3o3x3o3o3o - patto, o3x3x3o3x3o3o - bipto, x3o3x3x3o3o3o - paro, x3x3x3x3o3o3o - gapo, o3x3x3x3x3o3o - gibpo
External links
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
Fundamental convex regular and uniform polytopes in dimensions 2–10
Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn
Regular polygon Triangle Square p-gon Hexagon Pentagon
Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron
Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell
Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube
Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221
Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321
Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421
Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube
Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube
Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope
Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
|
Wikipedia
|
Runcinated 5-cubes
In five-dimensional geometry, a runcinated 5-cube is a convex uniform 5-polytope that is a runcination (a 3rd order truncation) of the regular 5-cube.
5-cube
Runcinated 5-cube
Runcinated 5-orthoplex
Runcitruncated 5-cube
Runcicantellated 5-cube
Runcicantitruncated 5-cube
Runcitruncated 5-orthoplex
Runcicantellated 5-orthoplex
Runcicantitruncated 5-orthoplex
Orthogonal projections in B5 Coxeter plane
There are 8 unique degrees of runcinations of the 5-cube, along with permutations of truncations and cantellations. Four are more simply constructed relative to the 5-orthoplex.
Runcinated 5-cube
Runcinated 5-cube
Type Uniform 5-polytope
Schläfli symbol t0,3{4,3,3,3}
Coxeter diagram
4-faces 202 10
80
80
32
Cells 1240 40
240
320
160
320
160
Faces 2160 240
960
640
320
Edges 1440 480+960
Vertices 320
Vertex figure
Coxeter group B5 [4,3,3,3]
Properties convex
Alternate names
• Small prismated penteract (Acronym: span) (Jonathan Bowers)
Coordinates
The Cartesian coordinates of the vertices of a runcinated 5-cube having edge length 2 are all permutations of:
$\left(\pm 1,\ \pm 1,\ \pm 1,\ \pm (1+{\sqrt {2}}),\ \pm (1+{\sqrt {2}})\right)$
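The vertex count can be recovered from these coordinates by enumeration: there are 5!/(3!·2!) = 10 arrangements of the absolute values (1, 1, 1, 1+√2, 1+√2), each with 2⁵ = 32 independent sign choices, giving the 320 vertices listed above. A minimal sketch (assuming Python):

```python
from itertools import permutations, product
from math import sqrt

a = 1 + sqrt(2)
# All permutations of (1, 1, 1, a, a) combined with every choice of signs
verts = {tuple(s * x for s, x in zip(signs, perm))
         for perm in set(permutations((1, 1, 1, a, a)))
         for signs in product((-1, 1), repeat=5)}

print(len(verts))  # 320 vertices of the runcinated 5-cube
```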
Images
orthographic projections
Coxeter plane B5 B4 / D5 B3 / D4 / A2
Graph
Dihedral symmetry [10] [8] [6]
Coxeter plane B2 A3
Graph
Dihedral symmetry [4] [4]
Runcitruncated 5-cube
Runcitruncated 5-cube
Type Uniform 5-polytope
Schläfli symbol t0,1,3{4,3,3,3}
Coxeter-Dynkin diagrams
4-faces 202 10
80
80
32
Cells 1560 40
240
320
320
160
320
160
Faces 3760 240
960
320
960
640
640
Edges 3360 480+960+1920
Vertices 960
Vertex figure
Coxeter group B5, [3,3,3,4]
Properties convex
Alternate names
• Runcitruncated penteract
• Prismatotruncated penteract (Acronym: pattin) (Jonathan Bowers)
Construction and coordinates
The Cartesian coordinates of the vertices of a runcitruncated 5-cube having edge length 2 are all permutations of:
$\left(\pm 1,\ \pm (1+{\sqrt {2}}),\ \pm (1+{\sqrt {2}}),\ \pm (1+2{\sqrt {2}}),\ \pm (1+2{\sqrt {2}})\right)$
Images
orthographic projections
Coxeter plane B5 B4 / D5 B3 / D4 / A2
Graph
Dihedral symmetry [10] [8] [6]
Coxeter plane B2 A3
Graph
Dihedral symmetry [4] [4]
Runcicantellated 5-cube
Runcicantellated 5-cube
Type Uniform 5-polytope
Schläfli symbol t0,2,3{4,3,3,3}
Coxeter-Dynkin diagram
4-faces 202 10
80
80
32
Cells 1240 40
240
320
320
160
160
Faces 2960 240
480
960
320
640
320
Edges 2880 960+960+960
Vertices 960
Vertex figure
Coxeter group B5 [4,3,3,3]
Properties convex
Alternate names
• Runcicantellated penteract
• Prismatorhombated penteract (Acronym: prin) (Jonathan Bowers)
Coordinates
The Cartesian coordinates of the vertices of a runcicantellated 5-cube having edge length 2 are all permutations of:
$\left(\pm 1,\ \pm 1,\ \pm (1+{\sqrt {2}}),\ \pm (1+2{\sqrt {2}}),\ \pm (1+2{\sqrt {2}})\right)$
Images
orthographic projections
Coxeter plane B5 B4 / D5 B3 / D4 / A2
Graph
Dihedral symmetry [10] [8] [6]
Coxeter plane B2 A3
Graph
Dihedral symmetry [4] [4]
Runcicantitruncated 5-cube
Runcicantitruncated 5-cube
Type Uniform 5-polytope
Schläfli symbol t0,1,2,3{4,3,3,3}
Coxeter-Dynkin diagram
4-faces 202
Cells 1560
Faces 4240
Edges 4800
Vertices 1920
Vertex figure
Irregular 5-cell
Coxeter group B5 [4,3,3,3]
Properties convex, isogonal
Alternate names
• Runcicantitruncated penteract
• Biruncicantitruncated pentacross
• great prismated penteract (gippin) (Jonathan Bowers)
Coordinates
The Cartesian coordinates of the vertices of a runcicantitruncated 5-cube having an edge length of 2 are given by all permutations of coordinates and sign of:
$\left(1,\ 1+{\sqrt {2}},\ 1+2{\sqrt {2}},\ 1+3{\sqrt {2}},\ 1+3{\sqrt {2}}\right)$
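These coordinates give 5!/2! = 60 orderings of (1, 1+√2, 1+2√2, 1+3√2, 1+3√2) times 2⁵ = 32 sign patterns, i.e. the 1920 vertices in the table. A minimal sketch (assuming Python):

```python
from itertools import permutations, product
from math import sqrt

r = sqrt(2)
base = (1, 1 + r, 1 + 2 * r, 1 + 3 * r, 1 + 3 * r)
# Apply every coordinate permutation and every sign combination
verts = {tuple(s * x for s, x in zip(signs, perm))
         for perm in set(permutations(base))
         for signs in product((-1, 1), repeat=5)}

print(len(verts))  # 1920 vertices of the runcicantitruncated 5-cube
```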
Images
orthographic projections
Coxeter plane B5 B4 / D5 B3 / D4 / A2
Graph
Dihedral symmetry [10] [8] [6]
Coxeter plane B2 A3
Graph
Dihedral symmetry [4] [4]
Related polytopes
These polytopes are part of a set of 31 uniform polytera (uniform 5-polytopes) generated from the regular 5-cube or 5-orthoplex.
B5 polytopes
β5
t1β5
t2γ5
t1γ5
γ5
t0,1β5
t0,2β5
t1,2β5
t0,3β5
t1,3γ5
t1,2γ5
t0,4γ5
t0,3γ5
t0,2γ5
t0,1γ5
t0,1,2β5
t0,1,3β5
t0,2,3β5
t1,2,3γ5
t0,1,4β5
t0,2,4γ5
t0,2,3γ5
t0,1,4γ5
t0,1,3γ5
t0,1,2γ5
t0,1,2,3β5
t0,1,2,4β5
t0,1,3,4γ5
t0,1,2,4γ5
t0,1,2,3γ5
t0,1,2,3,4γ5
References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D.
• Klitzing, Richard. "5D uniform polytopes (polytera)". o3x3o3o4x - span, o3x3o3x4x - pattin, o3x3x3o4x - prin, o3x3x3x4x - gippin
External links
• Glossary for hyperspace, George Olshevsky.
• Polytopes of Various Dimensions, Jonathan Bowers
• Runcinated uniform polytera (spid), Jonathan Bowers
• Multi-dimensional Glossary
Runcinated 5-simplexes
In five-dimensional geometry, a runcinated 5-simplex is a convex uniform 5-polytope that is a runcination (a 3rd-order truncation) of the regular 5-simplex.
5-simplex
Runcinated 5-simplex
Runcitruncated 5-simplex
Birectified 5-simplex
Runcicantellated 5-simplex
Runcicantitruncated 5-simplex
Orthogonal projections in A5 Coxeter plane
There are 4 unique runcinations of the 5-simplex, along with permutations of truncations and cantellations.
Runcinated 5-simplex
Runcinated 5-simplex
Type Uniform 5-polytope
Schläfli symbol t0,3{3,3,3,3}
Coxeter-Dynkin diagram
4-faces 47 6 t0,3{3,3,3}
20 {3}×{3}
15 { }×r{3,3}
6 r{3,3,3}
Cells 255 45 {3,3}
180 { }×{3}
30 r{3,3}
Faces 420 240 {3}
180 {4}
Edges 270
Vertices 60
Vertex figure
Coxeter group A5 [3,3,3,3], order 720
Properties convex
Alternate names
• Runcinated hexateron
• Small prismated hexateron (Acronym: spix) (Jonathan Bowers)[1]
Coordinates
The vertices of the runcinated 5-simplex can be most simply constructed on a hyperplane in 6-space as permutations of (0,0,1,1,1,2) or of (0,1,1,1,2,2), seen as facets of a runcinated 6-orthoplex, or a biruncinated 6-cube respectively.
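Both counts in the table can be checked from the first construction: the distinct permutations of (0,0,1,1,1,2) number 6!/(2!·3!) = 60, and, assuming (as holds for this uniform construction) that edges connect vertices at the minimal separation, the edges are exactly the pairs at squared distance 2, i.e. transpositions of two coordinates differing by 1. A minimal sketch (assuming Python):

```python
from itertools import permutations, combinations

# Vertices: distinct permutations of (0,0,1,1,1,2) on a hyperplane in 6-space
verts = sorted(set(permutations((0, 0, 1, 1, 1, 2))))

# Edges join nearest vertices: squared Euclidean distance 2
def d2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

edges = [e for e in combinations(verts, 2) if d2(*e) == 2]

print(len(verts), len(edges))  # 60 vertices, 270 edges
```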
Images
orthographic projections
Ak Coxeter plane A5 A4
Graph
Dihedral symmetry [6] [5]
Ak Coxeter plane A3 A2
Graph
Dihedral symmetry [4] [3]
Runcitruncated 5-simplex
Runcitruncated 5-simplex
Type Uniform 5-polytope
Schläfli symbol t0,1,3{3,3,3,3}
Coxeter-Dynkin diagram
4-faces 47 6 t0,1,3{3,3,3}
20 {3}×{6}
15 { }×r{3,3}
6 rr{3,3,3}
Cells 315
Faces 720
Edges 630
Vertices 180
Vertex figure
Coxeter group A5 [3,3,3,3], order 720
Properties convex, isogonal
Alternate names
• Runcitruncated hexateron
• Prismatotruncated hexateron (Acronym: pattix) (Jonathan Bowers)[2]
Coordinates
The coordinates can be made in 6-space, as 180 permutations of:
(0,0,1,1,2,3)
This construction exists as one of 64 orthant facets of the runcitruncated 6-orthoplex.
Images
orthographic projections
Ak Coxeter plane A5 A4
Graph
Dihedral symmetry [6] [5]
Ak Coxeter plane A3 A2
Graph
Dihedral symmetry [4] [3]
Runcicantellated 5-simplex
Runcicantellated 5-simplex
Type Uniform 5-polytope
Schläfli symbol t0,2,3{3,3,3,3}
Coxeter-Dynkin diagram
4-faces 47
Cells 255
Faces 570
Edges 540
Vertices 180
Vertex figure
Coxeter group A5 [3,3,3,3], order 720
Properties convex, isogonal
Alternate names
• Runcicantellated hexateron
• Biruncitruncated 5-simplex/hexateron
• Prismatorhombated hexateron (Acronym: pirx) (Jonathan Bowers)[3]
Coordinates
The coordinates can be made in 6-space, as 180 permutations of:
(0,0,1,2,2,3)
This construction exists as one of 64 orthant facets of the runcicantellated 6-orthoplex.
Images
orthographic projections
Ak Coxeter plane A5 A4
Graph
Dihedral symmetry [6] [5]
Ak Coxeter plane A3 A2
Graph
Dihedral symmetry [4] [3]
Runcicantitruncated 5-simplex
Runcicantitruncated 5-simplex
Type Uniform 5-polytope
Schläfli symbol t0,1,2,3{3,3,3,3}
Coxeter-Dynkin diagram
4-faces 47 6 t0,1,2,3{3,3,3}
20 {3}×{6}
15 {}×t{3,3}
6 tr{3,3,3}
Cells 315 45 t0,1,2{3,3}
120 { }×{3}
120 { }×{6}
30 t{3,3}
Faces 810 120 {3}
450 {4}
240 {6}
Edges 900
Vertices 360
Vertex figure
Irregular 5-cell
Coxeter group A5 [3,3,3,3], order 720
Properties convex, isogonal
Alternate names
• Runcicantitruncated hexateron
• Great prismated hexateron (Acronym: gippix) (Jonathan Bowers)[4]
Coordinates
The coordinates can be made in 6-space, as 360 permutations of:
(0,0,1,2,3,4)
This construction exists as one of 64 orthant facets of the runcicantitruncated 6-orthoplex.
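The table's vertex and edge counts follow from this construction by enumeration: 6!/2! = 360 distinct permutations of (0,0,1,2,3,4) give the vertices, and the nearest-neighbor pairs (squared distance 2, a swap of two coordinates differing by 1) recover the 900 edges. A minimal sketch (assuming Python):

```python
from itertools import permutations, combinations

# Vertices: distinct permutations of (0,0,1,2,3,4) in 6-space
verts = sorted(set(permutations((0, 0, 1, 2, 3, 4))))

# Edges: pairs of vertices at the minimal squared distance, 2
edges = [(u, v) for u, v in combinations(verts, 2)
         if sum((a - b) ** 2 for a, b in zip(u, v)) == 2]

print(len(verts), len(edges))  # 360 vertices, 900 edges
```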
Images
orthographic projections
Ak Coxeter plane A5 A4
Graph
Dihedral symmetry [6] [5]
Ak Coxeter plane A3 A2
Graph
Dihedral symmetry [4] [3]
Related uniform 5-polytopes
These polytopes are in a set of 19 uniform 5-polytopes based on the [3,3,3,3] Coxeter group, all shown here in A5 Coxeter plane orthographic projections. (Vertices are colored by projection overlap order: red, orange, yellow, green, cyan, blue, and purple, with each successive color marking progressively more overlapping vertices.)
A5 polytopes
t0
t1
t2
t0,1
t0,2
t1,2
t0,3
t1,3
t0,4
t0,1,2
t0,1,3
t0,2,3
t1,2,3
t0,1,4
t0,2,4
t0,1,2,3
t0,1,2,4
t0,1,3,4
t0,1,2,3,4
Notes
1. Klitzing, (x3o3o3x3o - spidtix)
2. Klitzing, (x3x3o3x3o - pattix)
3. Klitzing, (x3o3x3x3o - pirx)
4. Klitzing, (x3x3x3x3o - gippix)
References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D.
• Klitzing, Richard. "5D uniform polytopes (polytera)". x3o3o3x3o - spidtix, x3x3o3x3o - pattix, x3o3x3x3o - pirx, x3x3x3x3o - gippix
External links
• Glossary for hyperspace, George Olshevsky.
• Polytopes of Various Dimensions, Jonathan Bowers
• Runcinated uniform polytera (spid), Jonathan Bowers
• Multi-dimensional Glossary