Stencil (numerical analysis)

In mathematics, especially the areas of numerical analysis concentrating on the numerical solution of partial differential equations, a stencil is a geometric arrangement of a nodal group that relates to the point of interest by using a numerical approximation routine. Stencils are the basis for many algorithms to numerically solve partial differential equations (PDE). Two examples of stencils are the five-point stencil and the Crank–Nicolson method stencil. Stencils are classified into two categories, compact and non-compact, the difference being the layers from the point of interest that are also used for calculation. In the notation used for one-dimensional stencils, n−1, n, n+1 indicate the time steps, where time steps n and n−1 have known solutions and time step n+1 is to be calculated. The spatial locations of the finite volumes used in the calculation are indicated by j−1, j and j+1.

Etymology

Graphical representations of node arrangements and their coefficients arose early in the study of PDEs. Authors continue to use varying terms for these, such as "relaxation patterns", "operating instructions", "lozenges", or "point patterns".[1][2] The term "stencil" was coined for such patterns to reflect the concept of laying out a stencil in the usual sense over a computational grid to reveal just the numbers needed at a particular step.[2]

Calculation of coefficients

The finite difference coefficients for a given stencil are fixed by the choice of node points.
The coefficients may be calculated by taking the derivative of the Lagrange polynomial interpolating between the node points,[3] by computing the Taylor expansion around each node point and solving a linear system,[4] or by enforcing that the stencil is exact for monomials up to the degree of the stencil.[3] For equi-spaced nodes, they may be calculated efficiently as the Padé approximant of $x^{s}\cdot (\log x)^{m}$, where $m$ is the order of the stencil and $s$ is the ratio of the distance between the leftmost derivative and the left function entries divided by the grid spacing.[5]

See also

• Compact stencil • Non-compact stencil • Five-point stencil

References

1. Emmons, Howard W. (1 October 1944). "The numerical solution of partial differential equations" (PDF). Quarterly of Applied Mathematics. 2 (3): 173–195. doi:10.1090/qam/10680. Retrieved 17 April 2017. 2. Milne, William Edmund (1953). Numerical solution of differential equations (1st ed.). Wiley. pp. 128–131. OCLC 527661. Retrieved 17 April 2017. 3. Fornberg, Bengt; Flyer, Natasha (2015). "Brief Summary of Finite Difference Methods". A Primer on Radial Basis Functions with Applications to the Geosciences. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611974041.ch1. ISBN 9781611974027. Retrieved 9 April 2017. 4. Taylor, Cameron. "Finite Difference Coefficients Calculator". web.media.mit.edu. Retrieved 9 April 2017. 5. Fornberg, Bengt (January 1998). "Classroom Note: Calculation of Weights in Finite Difference Formulas". SIAM Review. 40 (3): 685–691. Bibcode:1998SIAMR..40..685F. doi:10.1137/S0036144596322507.

Further reading

• W. F. Spotz. High-Order Compact Finite Difference Schemes for Computational Mechanics. PhD thesis, University of Texas at Austin, Austin, TX, 1995.
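The linear-system approach to coefficient calculation described above (enforcing exactness on monomials) can be sketched in a few lines. This is a minimal illustrative sketch in Python with NumPy, not code from the cited sources; the function name `fd_coefficients` is our own:

```python
import numpy as np
from math import factorial

def fd_coefficients(nodes, order):
    """Weights w_i such that sum_i w_i * f(x_i) approximates
    f^(order)(0), found by requiring the stencil to be exact
    for the monomials 1, x, ..., x^(len(nodes)-1) at x = 0."""
    nodes = np.asarray(nodes, dtype=float)
    n = len(nodes)
    # Row k of the system enforces exactness for x^k:
    # sum_i w_i * x_i**k equals k! when k == order, else 0.
    A = np.vander(nodes, n, increasing=True).T
    b = np.zeros(n)
    b[order] = factorial(order)
    return np.linalg.solve(A, b)

# Central first-derivative stencil on nodes -1, 0, 1:
# recovers the classic weights -1/2, 0, 1/2 (per grid spacing h).
print(fd_coefficients([-1, 0, 1], 1))
```

On the five-point stencil −2, −1, 0, 1, 2 with order 2, the same routine reproduces the familiar second-derivative weights −1/12, 4/3, −5/2, 4/3, −1/12.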
Wikipedia
Step function

In mathematics, a function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces. This article is about a piecewise constant function. For the unit step function, see Heaviside step function.

Definition and first consequences

A function $f\colon \mathbb {R} \rightarrow \mathbb {R} $ is called a step function if it can be written as $f(x)=\sum \limits _{i=0}^{n}\alpha _{i}\chi _{A_{i}}(x)$ for all real numbers $x$, where $n\geq 0$, $\alpha _{i}$ are real numbers, $A_{i}$ are intervals, and $\chi _{A}$ is the indicator function of $A$: $\chi _{A}(x)={\begin{cases}1&{\text{if }}x\in A\\0&{\text{if }}x\notin A\\\end{cases}}$ In this definition, the intervals $A_{i}$ can be assumed to have the following two properties: 1. The intervals are pairwise disjoint: $A_{i}\cap A_{j}=\emptyset $ for $i\neq j$ 2. The union of the intervals is the entire real line: $\bigcup _{i=0}^{n}A_{i}=\mathbb {R} .$ Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function $f=4\chi _{[-5,1)}+3\chi _{(0,6)}$ can be written as $f=0\chi _{(-\infty ,-5)}+4\chi _{[-5,0]}+7\chi _{(0,1)}+3\chi _{[1,6)}+0\chi _{[6,\infty )}.$

Variations in the definition

Sometimes, the intervals are required to be right-open[1] or allowed to be singletons.[2] The condition that the collection of intervals must be finite is often dropped, especially in school mathematics,[3][4][5] though it must still be locally finite, resulting in the definition of piecewise constant functions.

Examples

• A constant function is a trivial example of a step function.
Then there is only one interval, $A_{0}=\mathbb {R} .$
• The sign function sgn(x), which is −1 for negative numbers and +1 for positive numbers, is the simplest non-constant step function.
• The Heaviside function H(x), which is 0 for negative numbers and 1 for positive numbers, is equivalent to the sign function up to a shift and scale of range ($H=(\operatorname {sgn} +1)/2$). It is the mathematical concept behind some test signals, such as those used to determine the step response of a dynamical system.
• The rectangular function, the normalized boxcar function, is used to model a unit pulse.

Non-examples

• The integer part function is not a step function according to the definition of this article, since it has an infinite number of intervals. However, some authors also define step functions with an infinite number of intervals.[6]

Properties

• The sum and product of two step functions are again step functions. The product of a step function with a number is also a step function. As such, the step functions form an algebra over the real numbers.
• A step function takes only a finite number of values. If the intervals $A_{i},$ for $i=0,1,\dots ,n$ in the above definition of the step function are disjoint and their union is the real line, then $f(x)=\alpha _{i}$ for all $x\in A_{i}.$
• The definite integral of a step function is a piecewise linear function.
• The Lebesgue integral of a step function $\textstyle f=\sum _{i=0}^{n}\alpha _{i}\chi _{A_{i}}$ is $\textstyle \int f\,dx=\sum _{i=0}^{n}\alpha _{i}\ell (A_{i}),$ where $\ell (A)$ is the length of the interval $A$, and it is assumed here that all intervals $A_{i}$ have finite length.
In fact, this equality (viewed as a definition) can be the first step in constructing the Lebesgue integral.[7]
• A discrete random variable is sometimes defined as a random variable whose cumulative distribution function is piecewise constant.[8] In this case, it is locally a step function (globally, it may have an infinite number of steps). Usually, however, any random variable with only countably many possible values is called a discrete random variable; in this case its cumulative distribution function is not necessarily locally a step function, as infinitely many intervals can accumulate in a finite region.

See also

• Crenel function • Piecewise • Sigmoid function • Simple function • Step detection • Heaviside step function • Piecewise-constant valuation

References

1. "Step Function". 2. "Step Functions - Mathonline". 3. "Mathwords: Step Function". 4. https://study.com/academy/lesson/step-function-definition-equation-examples.html 5. "Step Function". 6. Bachman, Narici, Beckenstein (5 April 2002). "Example 7.2.2". Fourier and Wavelet Analysis. Springer, New York, 2000. ISBN 0-387-98899-8. 7. Weir, Alan J (10 May 1973). "3". Lebesgue integration and measure. Cambridge University Press, 1973. ISBN 0-521-09751-7. 8. Bertsekas, Dimitri P. (2002). Introduction to Probability. Tsitsiklis, John N. Belmont, Mass.: Athena Scientific. ISBN 188652940X. OCLC 51441829.
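A step function in the right-open-interval variant noted above, together with the Lebesgue integral formula $\sum_i \alpha_i \ell(A_i)$, can be sketched in a few lines of Python. This is our own illustrative sketch: the representation as (alpha, a, b) triples and the function names are assumptions, not standard API:

```python
def make_step(pieces, default=0.0):
    """Step function f = sum_i alpha_i * chi_[a_i, b_i) for a
    finite list of (alpha, a, b) triples with disjoint right-open
    intervals; x outside every interval maps to `default`."""
    def f(x):
        for alpha, a, b in pieces:
            if a <= x < b:
                return alpha
        return default
    return f

def lebesgue_integral(pieces):
    """Integral of the step function: sum of alpha_i times the
    length of [a_i, b_i), assuming every interval is bounded."""
    return sum(alpha * (b - a) for alpha, a, b in pieces)

# The example from the text, rewritten with right-open intervals
# (endpoints have measure zero, so the integral is unchanged):
pieces = [(4, -5, 0), (7, 0, 1), (3, 1, 6)]
f = make_step(pieces)
print(f(-2), f(0.5), f(3), f(10))   # 4 7 3 0.0
print(lebesgue_integral(pieces))    # 4*5 + 7*1 + 3*5 = 42
```

The disjointness assumption from the definition is what makes the linear scan well defined: at most one interval can contain any given x.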
Stephan Luckhaus

Stephan Luckhaus is a German mathematician who is a professor at the University of Leipzig working in pure and applied analysis.[1] His PhD was obtained in 1978 under the supervision of Willi Jäger at the University of Heidelberg.[2] He was elected to the German Academy of Sciences Leopoldina in 2007.[1]

References

1. Faculty profile, retrieved 2011-05-06. 2. Stephan Luckhaus at the Mathematics Genealogy Project

External links

• Website at the University of Leipzig
Stephan Ramon Garcia

Stephan Ramon Garcia is an American mathematician. He is the W.M. Keck Distinguished Service Professor[1] and Professor of Mathematics at Pomona College, in California, United States. Garcia has been a faculty member at Pomona since 2006. He is the author of more than 100 research papers, many with undergraduate co-authors, and four books. Garcia works in operator theory, complex variables, matrix analysis, number theory, and discrete geometry. He serves on the editorial board of several well-known journals and has received four National Science Foundation grants as principal investigator.[1]

Early life and education

Garcia earned his Bachelor of Arts with high distinction in 1997 and his PhD in Mathematics in 2003, both from the University of California, Berkeley.[1] He joined Pomona College in 2006.

Personal life

Garcia is married to Gizem Karaali. They have two children, Reyhan and Altay.

Published works

In addition to his 89 research articles, Stephan Ramon Garcia has published four books over the course of his academic career. His first book, Introduction to Model Spaces and Their Operators,[1] was written in collaboration with Javad Mashreghi and William Ross and was published by Cambridge University Press in 2016. In 2017, Cambridge University Press published his second book, A Second Course in Linear Algebra,[1] written in collaboration with Roger Horn. His third book, Finite Blaschke Products and Their Connections,[1] was written in collaboration with Javad Mashreghi and William Ross and published by Springer in 2018. His most recent book, 100 Years of Math Milestones: The Pi Mu Epsilon Centennial Collection,[1] was written with Steven J. Miller and published by the American Mathematical Society in July 2019.
Awards

Throughout his academic career, Garcia has received numerous awards. In 1999, Garcia was named an Outstanding Graduate Student Instructor at the University of California at Berkeley.[2] In 2003, Garcia was awarded the Nikki Kose Memorial Teaching Prize. In 2005, Garcia was awarded the Mochizuki Memorial Fund Award by the University of California at Santa Barbara.[2] His first award at Pomona College was the Wig Distinguished Professor Award, given to him in May 2009.[2] Garcia was nominated for CASE Professor of the Year in 2011 and 2012.[2] Garcia was the first professor to receive the 2019 Mary P. Dolciani Award for Excellence in Research.[3][4]

Grants and research distinctions

Garcia was chosen as the first recipient of the Mary P. Dolciani Award for Excellence in Research for his extensive research history. Aside from publishing 89 research papers, Garcia has helped co-author 29 research articles by his students, some of whom have won awards under his supervision.[2] Garcia has also received four National Science Foundation grants in the areas of complex symmetric operators and function theory (2006–10); Complex symmetric operators - theory and applications (2010–2014); Operators on Hilbert Space (2013–2016); and, most recently, Opportunities in Operator Theory (2019–2021).[1]

References

1. "Stephan Ramon Garcia". Pomona College. June 1, 2015. Retrieved February 6, 2020. 2. "The Pomona College News Archive". Retrieved February 6, 2020. 3. "Mary P. Dolciani Halloran Foundation - Meet Mary". www.dolcianihalloranfoundation.org. Retrieved February 6, 2020. 4. "The Latest". American Mathematical Society. Retrieved February 6, 2020.
Stephanie Singer

Stephanie Frank Singer is an American mathematician and local politician in Philadelphia, Pennsylvania. Singer began her career in academia as an assistant professor at Haverford College, serving from 1991 to 2002. She then pursued a career in data science before going into politics, serving as a city commissioner in Philadelphia.

Early life and education

Singer was born in 1964 to Maxine and Daniel Singer and was raised in Washington, D.C.

Career

Academic

Singer earned a Ph.D. in 1991 at New York University.[1] She is the author of two books: • Linearity, Symmetry, and Prediction in the Hydrogen Atom (Springer, Undergraduate Texts in Mathematics 115, 2005) • Symmetry in Mechanics: A Gentle, Modern Introduction (Birkhauser Boston, 2001).[2] Additionally, she is the translator of a book by Yvette Kosmann-Schwarzbach, Groups and Symmetries: From Finite Groups to Lie Groups (Springer, 2010).[3] In a 2017 article in The Chronicle of Higher Education she discussed her experience as a victim of sexual harassment at Haverford College.[4]

Politics

Singer was elected Democratic Party committeeperson for Philadelphia's 8th Ward in 2008. In 2011, she was elected as a city commissioner, defeating 36-year incumbent Marge Tartaglione. Singer served one term as city commissioner from 2012 to 2015.[5] In October 2018, Singer launched a podcast entitled Defend Democracy!, in which she reflects on her experience as a former election official, data strategist, and successful candidate, with advice for those interested in entering politics.[6]

References

1. Stephanie Singer at the Mathematics Genealogy Project 2. "Symmetry in Mechanics: A Gentle, Modern Introduction | Mathematical Association of America". www.maa.org. Retrieved 2020-01-17. 3. Reviews of Groups and Symmetries: Aloysius Helminck (2011), MR2553682; Thomas R. Hagedorn (2010), MAA Reviews. 4. "I Spoke Up Against My Harasser — and Paid a Steep Price".
The Chronicle of Higher Education. Retrieved 2020-01-17. 5. Biography of Commissioner Stephanie Singer, Office of the Philadelphia City Commissioners, archived from the original on 2015-04-15, retrieved 2015-12-10. 6. "Defend Democracy! • A podcast on Anchor". Anchor. Retrieved 2019-01-16.

External links

• Campaign Scientific
Stephen Cook

Stephen Arthur Cook OC OOnt (born December 14, 1939) is an American-Canadian computer scientist and mathematician who has made significant contributions to the fields of complexity theory and proof complexity. He is a university professor at the University of Toronto, Department of Computer Science and Department of Mathematics.

Born: December 14, 1939, Buffalo, New York
Alma mater: Harvard University; University of Michigan
Known for: NP-completeness; propositional proof complexity; Cook–Levin theorem
Awards: Turing Award (1982); Gödel Lecture (1999); CRM-Fields-PIMS prize (1999); John L. Synge Award (2006); Bernard Bolzano Medal; Gerhard Herzberg Canada Gold Medal for Science and Engineering (2012); Officer of the Order of Canada (2015); BBVA Foundation Frontiers of Knowledge Award (2015)
Fields: Computer science
Institutions: University of Toronto; University of California, Berkeley
Thesis: On the Minimum Computation Time of Functions (1966)
Doctoral advisor: Hao Wang
Doctoral students: Mark Braverman,[1] Toniann Pitassi, Walter Savitch, Arvind Gupta, Anna Lubiw

Biography

Cook received his bachelor's degree in 1961 from the University of Michigan, and his master's degree and PhD from Harvard University's Mathematics Department in 1962 and 1966, respectively.[2] He joined the University of California, Berkeley, mathematics department in 1966 as an assistant professor, and stayed there until 1970 when he was denied reappointment.
In a speech celebrating the 30th anniversary of the Berkeley electrical engineering and computer sciences department, fellow Turing Award winner and Berkeley professor Richard Karp said that, "It is to our everlasting shame that we were unable to persuade the math department to give him tenure."[3] Cook joined the faculty of the University of Toronto, Computer Science and Mathematics Departments in 1970 as an associate professor, where he was promoted to professor in 1975 and Distinguished Professor in 1985. Research Stephen Cook is considered one of the forefathers of computational complexity theory. During his PhD, Cook worked on complexity of functions, mainly on multiplication. In his seminal 1971 paper "The Complexity of Theorem Proving Procedures",[4] Cook formalized the notions of polynomial-time reduction (also known as Cook reduction) and NP-completeness, and proved the existence of an NP-complete problem by showing that the Boolean satisfiability problem (usually known as SAT) is NP-complete. This theorem was proven independently by Leonid Levin in the Soviet Union, and has thus been given the name the Cook–Levin theorem. The paper also formulated the most famous problem in computer science, the P vs. NP problem. Informally, the "P vs. NP" question asks whether every optimization problem whose answers can be efficiently verified for correctness/optimality can be solved optimally with an efficient algorithm. Given the abundance of such optimization problems in everyday life, a positive answer to the "P vs. NP" question would likely have profound practical and philosophical consequences. Cook conjectures that there are optimization problems (with easily checkable solutions) that cannot be solved by efficient algorithms, i.e., P is not equal to NP. 
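The asymmetry at the heart of the P vs. NP question, that checking a proposed solution is easy even when finding one may not be, can be illustrated with SAT itself. The sketch below is our own illustration (the DIMACS-style clause encoding is an assumption, not Cook's notation); it verifies a candidate assignment for a CNF formula in time linear in the formula's size:

```python
def verify_sat(clauses, assignment):
    """Polynomial-time certificate check for SAT: `clauses` is a
    CNF formula given as a list of clauses, each clause a list of
    nonzero ints where k means variable k and -k its negation;
    `assignment` maps each variable number to True/False.
    Easy verification is exactly what places SAT in NP."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(verify_sat(formula, {1: True, 2: True, 3: False}))   # True
print(verify_sat(formula, {1: True, 2: False, 3: True}))   # False
```

Finding a satisfying assignment, by contrast, has no known polynomial-time algorithm; the Cook–Levin theorem shows that every problem in NP reduces to this satisfiability question.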
This conjecture has generated a great deal of research in computational complexity theory, which has considerably improved our understanding of the inherent difficulty of computational problems and what can be computed efficiently. Yet the conjecture remains open and is among the seven famous Millennium Prize Problems.[5][6] In 1982, Cook received the Turing Award for his contributions to complexity theory. His citation reads: For his advancement of our understanding of the complexity of computation in a significant and profound way. His seminal paper, The Complexity of Theorem Proving Procedures, presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the foundations for the theory of NP-Completeness. The ensuing exploration of the boundaries and nature of NP-complete class of problems has been one of the most active and important research activities in computer science for the last decade. In his 1975 paper "Feasibly Constructive Proofs and the Propositional Calculus",[7] he introduced the equational theory PV (standing for Polynomial-time Verifiable) to formalize the notion of proofs using only polynomial-time concepts. He made another major contribution to the field in his 1979 paper with his student Robert A. Reckhow, "The Relative Efficiency of Propositional Proof Systems",[8] in which they formalized the notions of p-simulation and efficient propositional proof system, which started an area now called propositional proof complexity. They proved that the existence of a proof system in which every true formula has a short proof is equivalent to NP = coNP. Cook co-authored a book in this area with his student Phuong The Nguyen, titled Logical Foundations of Proof Complexity.[9] His main research areas are complexity theory and proof complexity, with excursions into programming language semantics, parallel computation, and artificial intelligence.
Other areas that he has contributed to include bounded arithmetic, bounded reverse mathematics, complexity of higher type functions, complexity of analysis, and lower bounds in propositional proof systems.

Some other contributions

He named the complexity class NC after Nick Pippenger. The complexity class SC is named after him.[10] The definition of the complexity class AC0 and its hierarchy AC were also introduced by him.[11] According to Don Knuth, the KMP algorithm was inspired by Cook's automata for recognizing concatenated palindromes in linear time.[12]

Awards and honors

Cook was awarded an NSERC E.W.R. Steacie Memorial Fellowship in 1977 and a Killam Research Fellowship in 1982, and received the CRM-Fields-PIMS prize in 1999. He has won the John L. Synge Award and the Bernard Bolzano Medal, and is a fellow of the Royal Society of London and the Royal Society of Canada. Cook was elected to membership in the National Academy of Sciences (United States) and the American Academy of Arts and Sciences. He is a corresponding member of the Göttingen Academy of Sciences and Humanities. Cook won the ACM Turing Award in 1982.
The Association for Computing Machinery named him an ACM Fellow in 2008 for his fundamental contributions to the theory of computational complexity.[13] He was selected by the Association for Symbolic Logic to give the Gödel Lecture in 1999.[14] The Government of Ontario appointed him to the Order of Ontario in 2013, the province's highest honor.[15] He won the 2012 Gerhard Herzberg Canada Gold Medal for Science and Engineering, the highest honor for scientists and engineers in Canada.[16] The Herzberg Medal is awarded by NSERC for "both the sustained excellence and overall influence of research work conducted in Canada in the natural sciences or engineering".[17] He was named an Officer of the Order of Canada in 2015.[18] Cook was granted the BBVA Foundation Frontiers of Knowledge Award 2015 in the Information and Communication Technologies category "for his important role in identifying what computers can and cannot solve efficiently," in the words of the jury's citation. His work, it continues, "has had a dramatic impact in all fields where complex computations are crucial." Cook has supervised numerous MSc students, and 36 PhD students have completed their degrees under his supervision.[1]

Personal life

Cook lives with his wife in Toronto. They have two sons, Gordon and James.[19] He plays the violin and enjoys sailing. He is commonly known by the short form of his name, Steve Cook.

See also

• List of pioneers in computer science

References

1. Stephen Cook at the Mathematics Genealogy Project 2. Kapron, Bruce. "Stephen Arthur Cook". A. M. Turing Award. Retrieved October 23, 2018. 3. Richard Karp (2003). "A Personal View of Computer Science at Berkeley". University of California Berkeley. Retrieved February 12, 2023. 4. Stephen Cook (1971), The Complexity of Theorem Proving Procedures (PDF) – via University of Toronto; Stephen A. Cook (2009) [1971]. "The Complexity of Theorem-Proving Procedures". Retrieved February 12, 2023. 5. P vs.
NP Archived October 14, 2013, at the Wayback Machine problem on Millennium Prize Problems page – Clay Mathematics Institute 6. P vs. NP Archived September 27, 2007, at the Wayback Machine problem's official description by Stephen Cook on Millennium Prize Problems 7. Cook, Stephen A. (May 5, 1975). "Feasibly constructive proofs and the propositional calculus (Preliminary Version)". Proceedings of seventh annual ACM symposium on Theory of computing - STOC '75. STOC '75. New York: Association for Computing Machinery. pp. 83–97. doi:10.1145/800116.803756. ISBN 978-1-4503-7419-4. S2CID 13309619. 8. Cook, Stephen A.; Reckhow, Robert A. (1979). "The Relative Efficiency of Propositional Proof Systems". The Journal of Symbolic Logic. 44 (1): 36–50. doi:10.2307/2273702. ISSN 0022-4812. JSTOR 2273702. S2CID 2187041. 9. "Logical Foundations of Proof Complexity"'s official page 10. ""Steve's class": origin of SC". Theoretical Computer Science – Stack Exchange. 11. "Who introduced the complexity class AC?". Theoretical Computer Science – Stack Exchange. 12. "Twenty Questions for Donald Knuth". 13. Association for Computing Machinery. "Stephen A Cook". awards.acm.org. Retrieved February 12, 2023. 14. "Gödel Lecturers – Association for Symbolic Logic". Retrieved November 8, 2021. 15. "25 Appointees Named to Ontario's Highest Honour". Ministry of Citizenship and Immigration. 16. Emily, Chung (February 27, 2013). "Computer scientist wins Canada's top science prize". cbc.ca. Retrieved February 27, 2013. 17. "Current Winner – 2012 – Stephen Cook". June 28, 2016. 18. "SaltWire | Halifax". www.saltwire.com. Retrieved February 12, 2023. 19. "Stephen A. Cook – Home Page". External links Wikimedia Commons has media related to Stephen Cook. • Home page of Stephen A. Cook • 'P versus NP' and the Limits of Computation – Public lecture given by Stephen Cook at the University of Toronto • Oral history interview with Stephen Cook at Charles Babbage Institute, University of Minnesota. 
Cook discussed his education at the University of Michigan and Harvard University and early work at the University of California, Berkeley, and his growing interest in problems of computational complexity. Cook recounted his move to the University of Toronto in 1970 and the reception of his work on NP-completeness, leading up to his A.M. Turing Award. • Stephen Arthur Cook at the Mathematics Genealogy Project • Stephen A. Cook at DBLP Bibliography Server
Stephen Alexander (astronomer)

Stephen Alexander (September 1, 1806 – June 25, 1883) was a noted American astronomer and educator.

Born: September 1, 1806, Schenectady
Died: June 25, 1883 (aged 76), Princeton
Alma mater: Union College; Princeton Theological Seminary
Occupation: Astronomer, university teacher, author
Employer: Princeton University

Early years

He was born in Schenectady, New York on September 1, 1806.[1] He was the brother-in-law of Joseph Henry, the first secretary of the Smithsonian, and worked closely with him.[2] He was educated at Union College, where he graduated in 1824, and at Princeton Theological Seminary, where he graduated in 1832.[1]

Career

He became a tutor in mathematics at Princeton University in 1832; he would later become professor of astronomy and mathematics and advocate for the construction of Princeton's first observatory.[3] Alexander relied on the assistance of a free African American man named Alfred Scudder, who worked for him at Princeton during the 1850s.[4][5] Because of his role as Alexander's assistant on campus, Scudder received the nickname "Assistant Professor of Natural Philosophy" from students.[4][5] Alexander was elected a member of the American Philosophical Society[6] in 1839 and an Associate Fellow of the American Academy of Arts and Sciences[7] in 1850. In 1860, he headed an expedition to the coast of Labrador to observe the solar eclipse of July 18 of that year, and he later observed the eclipse of 1869.[1] He was one of the original members of the National Academy of Sciences in 1862, and a member of the American Philosophical Society and the American Association for the Advancement of Science, serving as president of the latter organization in 1859.
His principal writings are "Physical Phenomena attendant upon Solar Eclipses", read before the American Philosophical Society in 1848; a paper on the "Fundamental Principles of Mathematics", read before the American Association for the Advancement of Science in 1848; another on the "Origin of the Forms and the Present Condition of some of the Clusters of Stars and several of the Nebulae", read before the American Association in 1850; others on the "Form and Equatorial Diameter of the Asteroid Planets" and "Harmonies in the Arrangement of the Solar System which seem to be Confirmatory of the Nebular Hypothesis of Laplace", presented to the National Academy of Sciences; and a "Statement and Exposition of Certain Harmonies of the Solar System", which was published by the Smithsonian Institution in 1875.[8]

Works

Among his many noteworthy astronomical papers are:[1] • Fundamental Principles of Mathematics • Statement and Exposition of Certain Harmonies of the Solar System

References

Citations

1. Johnson 1906, p. 77 2. Hockey 2009 3. National Academy of Sciences 4. Princeton University 2017 5. Yannielli 2017 6. "APS Member History". search.amphilsoc.org. Retrieved 2021-04-09. 7. American Academy of Arts and Sciences 8. Leonard & Marquis 1963

Sources

• "Book of Members, 1780-2010: Chapter A" (PDF). American Academy of Arts and Sciences. Retrieved April 14, 2011 – via www.amacad.org. • Hockey, Thomas (2009). "Alexander, Stephen". The Biographical Encyclopedia of Astronomers. Springer Publishing. ISBN 978-0-387-31022-0. Retrieved August 22, 2012. • Johnson, Rossiter, ed. (1906). "Alexander, Stephen". The Biographical Dictionary of America. Vol. 1. Boston, Mass.: American Biographical Society. p. 77. Retrieved November 19, 2020 – via en.wikisource.org. This article incorporates text from this source, which is in the public domain. • Leonard, John William; Marquis, Albert Nelson, eds.
(1963), Who Was Who in America: Historical Volume 1607-1896, Chicago: Quincy Who's Who • "Stephen Alexander". National Academy of Sciences. Retrieved December 19, 2017 – via www.nasonline.org. • Yannielli, Joseph (November 6, 2017). "African Americans on Campus, 1746–1876". The Princeton & Slavery Project. Retrieved December 19, 2017. • Princeton University (November 6, 2017). "Alfred N. C. Scudder". The Princeton & Slavery Project. Retrieved December 19, 2017. External links • National Academy of Sciences Biographical Memoir
Wikipedia
Stephen Childress William Stephen Childress is an American applied mathematician, author and professor emeritus at the Courant Institute of Mathematical Sciences. He works on classical fluid mechanics, asymptotic methods and singular perturbations, geophysical fluid dynamics,[1] magnetohydrodynamics and dynamo theory, mathematical models in biology,[2] and locomotion in fluids.[3] He is also a co-founder of the Courant Institute's Applied Mathematics Lab.[4] William Stephen Childress • Nationality: American • Alma mater: California Institute of Technology, Princeton University • Fields: Mathematics, Applied Mathematics • Institutions: Courant Institute of Mathematical Sciences, New York University • Doctoral advisor: Paco Axel Lagerstrom Published books • 1977: Mechanics of Swimming and Flying, online ISBN 9780511569593.[5] • 1978: Mathematical models in developmental biology with Jerome K. Percus, ISBN 978-1470410803 • 1987: Topics in Geophysical Fluid Dynamics: Atmospheric Dynamics, Dynamo Theory, and Climate Dynamics, with M. Ghil. Softcover ISBN 978-0-387-96475-1, eBook ISBN 978-1-4612-1052-8.[6] • 1995: Stretch, Twist, Fold: The Fast Dynamo with Andrew D. Gilbert, ISBN 978-3662140147, ISBN 3662140144 • 2009: An Introduction to Theoretical Fluid Mechanics, ISBN 978-0-8218-4888-3.[7] • 2012: Natural Locomotion in Fluids and on Surfaces: Swimming, Flying, and Sliding. Edited with Anette Hosoi, William W. Schultz, Jane Wang. Hardcover ISBN 978-1-4614-3996-7, Softcover ISBN 978-1-4899-9916-0, eBook ISBN 978-1-4614-3997-4[8] • 2018: Construction of Steady-state Hydrodynamic Dynamos. I. Spatially Periodic Fields, ISBN 978-1378904725 Recognition • 1976 Guggenheim Fellowship for Natural Sciences, US & Canada[9] • 2008 Fellow of the American Physical Society[10] References 1. "Erosion has a point, and an edge". ScienceDaily. Archived from the original on 2020-11-12. Retrieved 2022-06-25. 2. "How do birds breathe better?
Researchers' discovery will throw you for a loop". 16 March 2021. 3. Salleh, Anna (ABC) (15 January 2014). "'Jellyfish' flying machine keeps upright". www.abc.net.au. Archived from the original on 9 August 2020. Retrieved 25 June 2022. 4. "Applied Math Lab - People". math.nyu.edu. Archived from the original on 2021-05-15. Retrieved 2022-06-27. 5. Childress, Stephen (June 25, 1981). Mechanics of Swimming and Flying. Cambridge University Press. doi:10.1017/CBO9780511569593. ISBN 9780521236133. Archived from the original on January 30, 2022. Retrieved June 25, 2022. 6. Topics in Geophysical Fluid Dynamics: Atmospheric Dynamics, Dynamo Theory, and Climate Dynamics. Applied Mathematical Sciences. Vol. 60. 1987. doi:10.1007/978-1-4612-1052-8. ISBN 978-0-387-96475-1. Archived from the original on 2022-01-30. Retrieved 2022-06-25 – via link.springer.com. 7. "Childress: An Introduction to Theoretical Fluid Mechanics". American Mathematical Society. Archived from the original on 2021-01-27. Retrieved 2022-06-25. 8. Natural Locomotion in Fluids and on Surfaces. The IMA Volumes in Mathematics and its Applications. Vol. 155. 2012. doi:10.1007/978-1-4614-3997-4. ISBN 978-1-4614-3996-7. Archived from the original on 2022-03-03. Retrieved 2022-06-25 – via link.springer.com. 9. "Stephen Childress". John Simon Guggenheim Memorial Foundation. Archived from the original on 2021-05-16. Retrieved 2022-06-25. 10. "APS Fellow Archive". Archived from the original on 2022-06-27. Retrieved 2022-06-26. External links • William Stephen Childress's home page
Stephen Cole Kleene Stephen Cole Kleene (/ˈkleɪni/ KLAY-nee;[note 1] January 5, 1909 – January 25, 1994) was an American mathematician. One of the students of Alonzo Church, Kleene, along with Rózsa Péter, Alan Turing, Emil Post, and others, is best known as a founder of the branch of mathematical logic known as recursion theory, which subsequently helped to provide the foundations of theoretical computer science. Kleene's work grounds the study of computable functions. A number of mathematical concepts are named after him: the Kleene hierarchy, Kleene algebra, the Kleene star (Kleene closure), Kleene's recursion theorem, and the Kleene fixed-point theorem. He also invented regular expressions in 1951 to describe McCulloch-Pitts neural networks, and made significant contributions to the foundations of mathematical intuitionism. Stephen Kleene • Born: January 5, 1909, Hartford, Connecticut, U.S. • Died: January 25, 1994 (aged 85), Madison, Wisconsin, U.S. • Nationality: American • Alma mater: Amherst College, Princeton University • Known for: contributions to intuitionism; Kleene–Mostowski hierarchy; Kleene–Rosser paradox; Kleene star; Kleene's algorithm; Kleene's theorem; realizability; regular expressions; Kleene's smn theorem • Awards: Leroy P. Steele Prize (1983), National Medal of Science (1990) • Fields: Mathematics • Institutions: University of Wisconsin–Madison • Doctoral advisor: Alonzo Church • Doctoral students: Robert Constable, Joan Moschovakis, Yiannis Moschovakis, Nels David Nelson, Dick de Jongh Biography Kleene was awarded a bachelor's degree from Amherst College in 1930. He received a Ph.D. in mathematics from Princeton University in 1934, where his thesis, entitled A Theory of Positive Integers in Formal Logic, was supervised by Alonzo Church. In the 1930s, he did important work on Church's lambda calculus. In 1935, he joined the mathematics department at the University of Wisconsin–Madison, where he spent nearly all of his career.
After two years as an instructor, he was appointed assistant professor in 1937. While a visiting scholar at the Institute for Advanced Study in Princeton, 1939–1940, he laid the foundation for recursion theory, an area that would be his lifelong research interest. In 1941, he returned to Amherst College, where he spent one year as an associate professor of mathematics. During World War II, Kleene was a lieutenant commander in the United States Navy. He was an instructor of navigation at the U.S. Naval Reserve's Midshipmen's School in New York, and then a project director at the Naval Research Laboratory in Washington, D.C. In 1946, Kleene returned to the University of Wisconsin-Madison, becoming a full professor in 1948 and the Cyrus C. MacDuffee professor of mathematics in 1964. He served two terms as the Chair of the Department of Mathematics and one term as the Chair of the Department of Numerical Analysis (later renamed the Department of Computer Science). He also served as Dean of the College of Letters and Science from 1969 to 1974. During his years at the University of Wisconsin he was thesis advisor to 13 Ph.D. students. He retired from the University of Wisconsin in 1979. In 1999 the mathematics library at the University of Wisconsin was renamed in his honor.[3] Kleene's teaching at Wisconsin resulted in three texts in mathematical logic, Kleene (1952, 1967) and Kleene and Vesley (1965). The first two are often cited and still in print. Kleene (1952) wrote alternative proofs of Gödel's incompleteness theorems that enhanced their canonical status and made them easier to teach and understand. Kleene and Vesley (1965) is the classic American introduction to intuitionistic logic and mathematical intuitionism. [...] recursive function theory is of central importance in computer science.
Kleene is responsible for many of the fundamental results in the area, including the Kleene normal form theorem (1936), the Kleene recursion theorem (1938), the development of the arithmetical and hyper-arithmetical hierarchies in the 1940s and 1950s, the Kleene-Post theory of degrees of unsolvability (1954), and higher-type recursion theory, which he began in the late 1950s and returned to in the late 1970s. [...] Beginning in the late 1940s, Kleene also worked in a second area, Brouwer's intuitionism. Using tools from recursion theory, he introduced recursive realizability, an important technique for interpreting intuitionistic statements. In the summer of 1951 at the RAND Corporation, he produced a major breakthrough in a third area when he gave an important characterization of events accepted by a finite automaton.[4] Kleene served as president of the Association for Symbolic Logic, 1956–1958, and of the International Union of History and Philosophy of Science,[5] 1961. The importance of Kleene's work led to Daniel Dennett coining the saying, published in 1978, that "Kleeneness is next to Gödelness."[6] In 1990, he was awarded the National Medal of Science. Kleene and his wife Nancy Elliott had four children. He had a lifelong devotion to the family farm in Maine. An avid mountain climber, he had a strong interest in nature and the environment, and was active in many conservation causes. Legacy At each conference of the Symposium on Logic in Computer Science the Kleene award, in honour of Stephen Cole Kleene, is given for the best student paper.[7] Selected publications • 1935. Stephen Cole Kleene (Jan 1935). "A Theory of Positive Integers in Formal Logic. Part I". American Journal of Mathematics. 57 (1): 153–173. doi:10.2307/2372027. JSTOR 2372027. • 1935. Stephen Cole Kleene (Apr 1935). "A Theory of Positive Integers in Formal Logic. Part II". American Journal of Mathematics. 57 (2): 219–244. doi:10.2307/2371199. JSTOR 2371199. • 1935.
Stephen Cole Kleene; J.B. Rosser (Jul 1935). "The Inconsistency of Certain Formal Logics". Annals of Mathematics. 2nd Series. 36 (3): 630–636. doi:10.2307/1968646. JSTOR 1968646. • 1936. "General recursive functions of natural numbers". Mathematische Annalen (112): 727–742. 1936. • 1936. "λ-definability and recursiveness". Duke Mathematical Journal. 2 (2): 340–352. 1936. • 1938. "On Notations for Ordinal Numbers" (PDF). Journal of Symbolic Logic. 3 (4): 150–155. 1938. doi:10.2307/2267778. JSTOR 2267778. S2CID 34314018. • 1943. "Recursive predicates and quantifiers". Transactions of the American Mathematical Society. 53 (1): 41–73. Jan 1943. doi:10.1090/S0002-9947-1943-0007371-8. • 1951. Kleene, Stephen Cole (15 December 1951). "Representation of Events in Nerve Nets and Finite Automata" (PDF). U. S. Air Force Project Rand Research Memorandum. No. RM-704. The RAND Corporation. • 1952. Introduction to Metamathematics. New York: Van Nostrand. (Ishi Press: 2009 reprint).[8] • 1956. Kleene, Stephen Cole (1956). Shannon, Claude; McCarthy, John (eds.). Representation of Events in Nerve Nets and Finite Automata. OCLC 564148. • 1965 (with Richard Eugene Vesley). The Foundations of Intuitionistic Mathematics. North-Holland.[9] • 1967. Mathematical Logic. John Wiley & Sons. Dover reprint, 2002. ISBN 0-486-42533-9. • 1981. "Origins of Recursive Function Theory" in Annals of the History of Computing 3, No. 1. • 1987. "Reflections on Church's thesis". Notre Dame Journal of Formal Logic. 28 (4): 490–498. Oct 1987. doi:10.1305/ndjfl/1093637645. See also • Kleene–Brouwer order • Kleene–Rosser paradox • Kleene's O • Kleene's T predicate • List of pioneers in computer science Notes 1. Although his last name is commonly pronounced /ˈkliːni/ KLEE-nee or /kliːn/ KLEEN, Kleene himself pronounced it /ˈkleɪni/ KLAY-nee.[1] His son, Ken Kleene, wrote: "As far as I am aware this pronunciation is incorrect in all known languages.
I believe that this novel pronunciation was invented by my father."[2] However, the surname is common in the Netherlands, where 'ee' is pronounced like the 'ay' in 'hail', though shorter; Kleene may have been aware of this. References 1. Pace, Eric (January 27, 1994). "Stephen C. Kleene Is Dead at 85; Was Leader in Computer Science". The New York Times. 2. Entry "Stephen Kleene" in the Free On-line Dictionary of Computing. 3. "S. C. Kleene". Retrieved February 8, 2021. 4. Keisler, H. Jerome (September 1994). "Stephen Cole Kleene 1909–1994". Notices of the AMS. 41 (7): 792. 5. IUHPS website; also known as "International Union of the History and the Philosophy of Science". A member of ICSU, the International Council for Science (formerly named International Council of Scientific Unions). 6. Daniel Dennett and Karel Lambert, "kleene", in The Philosophical Lexicon, 7th ed. (Newark, DE: American Philosophical Association, 1978), 5; and Hyperborea (blogger pseudonym), "Dennett's Logocentric Lexicon" (9 December 2007): http://aeconomics.blogspot.com/2007/12/dennetts-logocentric-lexicon.html 7. "LICS – Archive". lics.siglog.org. 8. WorldCat: editions for 'Introduction to metamathematics.'. OCLC 523942. 9. Bishop, Errett (1965). "Review: The foundations of intuitionistic mathematics, by Stephen Cole Kleene and Richard Eugene Vesley" (PDF). Bulletin of the American Mathematical Society. 71 (6): 850–852. doi:10.1090/s0002-9904-1965-11412-4. External links • O'Connor, John J.; Robertson, Edmund F., "Stephen Cole Kleene", MacTutor History of Mathematics Archive, University of St Andrews • Biographical memoir – by Saunders Mac Lane • Kleene bibliography • "The Princeton Mathematics Community in the 1930s: Transcript Number 23 (PMC23): Stephen C. Kleene and J. Barkley Rosser". Archived from the original on 10 March 2015.
– Interview with Kleene and John Barkley Rosser about their experiences at Princeton • Stephen Cole Kleene at DBLP Bibliography Server
Stephen Drury (mathematician) Stephen William Drury is an Anglo-Canadian mathematician and professor of mathematics at McGill University.[1] He specializes in mathematical analysis, harmonic analysis and linear algebra.[2] He received his doctorate from the University of Cambridge in 1970 under the supervision of Nicholas Varopoulos[3] and completed his postdoctoral training at the Faculté des sciences d'Orsay, France. He was recruited to McGill by Professor Carl Herz in 1972. Among other contributions, he solved the Sidon set union problem,[4][5] worked on restrictions of Fourier and Radon transforms to curves,[6] and generalized von Neumann's inequality.[7] In operator theory, the Drury–Arveson space is named after William Arveson and him.[8] His research now pertains to the interplay between matrix theory and harmonic analysis and their applications to graph theory. References 1. "Stephen W Drury". McGill University. Retrieved 2019-02-11. 2. "S. W. Drury – Research Interests". Retrieved 2019-02-11. 3. "Sam (Stephen William) Drury". Mathematics Genealogy Project. 4. Drury, S.W., 1970, Sur les ensembles de Sidon, C.R. Acad. Sci. Paris, 271, pp. 162–164 5. "Carl Herz 1930–1995" (PDF). American Mathematical Society. Retrieved 29 July 2019. 6. Drury, S.W., 1985. Restriction of Fourier transforms to curves. Ann. Inst. Fourier, 35(1), pp. 117–123. 7. Drury, S.W., 1978. A generalization of von Neumann's inequality to the complex ball. Proceedings of the American Mathematical Society, 68(3), pp. 300–304. 8. Fang, Quanlei (June 2017). "Operator theory in Drury–Arveson Space" (PDF). Technion – Israel Institute of Technology. Retrieved 29 July 2019.
Stephen A. Fulling Stephen Albert Fulling (born 29 April 1945, Evansville, Indiana) is an American mathematician and mathematical physicist, specializing in the mathematics of quantum theory, general relativity, and the spectral and asymptotic theory of differential operators.[1] He is known for preliminary work that led to the discovery of the hypothetical Unruh effect (also known as the Fulling-Davies-Unruh effect).[2] Stephen Albert Fulling • Born: 29 April 1945, Evansville, Indiana • Nationality: U.S. • Alma mater: Harvard University, Princeton University • Known for: Fulling–Davies–Unruh effect • Fields: Mathematics, Physics • Institutions: University of Wisconsin-Milwaukee, King's College London, Texas A&M University • Thesis: Scalar Quantum Field Theory in a Closed Universe of Constant Curvature (1972) • Academic advisor: Arthur Wightman Education and career After secondary education at Missouri's Lindbergh High School,[3] Fulling graduated in 1967 with an A.B. in physics from Harvard University. At Princeton University he became a graduate student in physics and received an M.S. in 1969 and a Ph.D. in 1972.[4] His thesis, Scalar Quantum Field Theory in a Closed Universe of Constant Curvature, was supervised by Arthur Wightman.[5] Fulling was a postdoc from 1972 to 1974 at the University of Wisconsin-Milwaukee and from 1974 to 1976 at King's College London. At Texas A&M University he joined the mathematics faculty in 1976[3] and was promoted to full professor in 1984.
In addition to mathematics, he holds a joint appointment in physics and astronomy.[4] Alongside more than a hundred papers, he has authored two books, Aspects of Quantum Field Theory in Curved Space-Time (Cambridge University Press, 1989) and Linearity and the Mathematics of Several Variables (World Scientific, 2000).[3] In 2018 Fulling was elected a fellow of the American Physical Society.[3] He has also been elected a foreign member of the Royal Society of Sciences in Uppsala.[6] Selected publications Books • Fulling, Stephen A. (1989-08-24). Aspects of Quantum Field Theory in Curved Spacetime. ISBN 9780521377683. • Fulling, Stephen A.; Sinyakov, Michael N.; Tischchenko, Sergei V. (2000). Linearity and the Mathematics of Several Variables. World Scientific. ISBN 978-981-02-4196-4. Articles • Fulling, Stephen A. (1973). "Nonuniqueness of Canonical Field Quantization in Riemannian Space-Time". Physical Review D. 7 (10): 2850–2862. Bibcode:1973PhRvD...7.2850F. doi:10.1103/PhysRevD.7.2850. • Fulling, S. A.; Davies, P. C. W. (1976). "Radiation from a Moving Mirror in Two Dimensional Space-Time: Conformal Anomaly". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 348 (1654): 393–414. Bibcode:1976RSPSA.348..393F. doi:10.1098/rspa.1976.0045. S2CID 122176090. • Christensen, S. M.; Fulling, S. A. (1977). "Trace anomalies and the Hawking effect". Physical Review D. 15 (8): 2088–2104. Bibcode:1977PhRvD..15.2088C. doi:10.1103/PhysRevD.15.2088. • Parker, Leonard; Fulling, S. A. (1974). "Adiabatic regularization of the energy-momentum tensor of a quantized field in homogeneous spaces". Physical Review D. 9 (2): 341–354. Bibcode:1974PhRvD...9..341P. doi:10.1103/PhysRevD.9.341. • Davies, P. C. W.; Fulling, S. A.; Unruh, W. G. (1976). "Energy-momentum tensor near an evaporating black hole". Physical Review D. 13 (10): 2720–2723. Bibcode:1976PhRvD..13.2720D. doi:10.1103/PhysRevD.13.2720. S2CID 124632555.
• Parker, Leonard; Fulling, S. A. (1973). "Quantized Matter Fields and the Avoidance of Singularities in General Relativity". Physical Review D. 7 (8): 2357–2374. Bibcode:1973PhRvD...7.2357P. doi:10.1103/PhysRevD.7.2357. • Fulling, S. A.; Parker, Leonard; Hu, B. L. (1974). "Conformal energy-momentum tensor in curved spacetime: Adiabatic regularization and renormalization". Physical Review D. 10 (12): 3905–3924. Bibcode:1974PhRvD..10.3905F. doi:10.1103/PhysRevD.10.3905. See also • Differential operator • Fulling–Davies–Unruh effect • General relativity • Mathematical formulation of quantum mechanics • Quantum Field Theory References 1. "Stephen Fulling, Texas A&M University". community.wolfram.com. 2. Fulling, Davies, and Unruh were in communication, and the full significance of the mathematical phenomenon was unclear until Unruh related it to both temperature and particle detectors. In 2019 Fulling and Wilson suggested that what Davies discovered is a separate effect. Fulling, S A; Wilson, J H (2019). "The equivalence principle at work in radiation from unaccelerated atoms and mirrors" (PDF). Physica Scripta. 94 (1): 014004. arXiv:1805.01013. Bibcode:2019PhyS...94a4004F. doi:10.1088/1402-4896/aaecaa. ISSN 0031-8949. S2CID 21706009. 3. "Texas A&M Mathematician Stephen Fulling Elected as American Physical Society Fellow". Texas A&M, Science (science.tamu.edu). 31 October 2018. 4. "Stephen Fulling". Mathematics, Texas A&M University (math.tamu.edu). 5. Stephen Albert Fulling at the Mathematics Genealogy Project 6. "Fulling's Curriculum Vitae" (PDF). math.tamu.edu. Retrieved March 5, 2022. External links • Oral history interview transcript with Stephen Fulling on 8 July 2021, American Institute of Physics, Niels Bohr Library & Archives
Stephen Gange Stephen Gange is an American epidemiologist and academic administrator serving as the interim provost of Johns Hopkins University since May 1, 2023. He is a professor of epidemiology at the Johns Hopkins Bloomberg School of Public Health. Life Gange earned a Ph.D. in statistics from the University of Wisconsin–Madison in 1994.[1][2] In 2007, he became a full professor of epidemiology in the Johns Hopkins Bloomberg School of Public Health.[3][2] He is a fellow of the American College of Epidemiology and an elected member of the American Epidemiological Society.[3][2] In 2015, he became the executive vice provost for academic affairs of Johns Hopkins University. On May 1, 2023, Gange succeeded Sunil Kumar as interim provost.[3] References 1. "Stephen John Gange, Ph.D., Joint Appointment in Medicine". Johns Hopkins Medicine. Retrieved 2023-07-02. 2. "Stephen Gange | Office of the Provost". Retrieved 2023-07-02. 3. "Stephen Gange named interim provost and senior vice president for academic affairs". The Hub. 2023-02-22. Retrieved 2023-07-02.
Steve Simpson (mathematician) Stephen George Simpson is an American mathematician whose research concerns the foundations of mathematics, including work in mathematical logic, recursion theory, and Ramsey theory. He is known for his extensive development of the field of reverse mathematics founded by Harvey Friedman, in which the goal is to determine which axioms are needed to prove certain mathematical theorems.[1] He has also argued for the benefits of finitistic mathematical systems, such as primitive recursive arithmetic, which do not include actual infinity.[2] Stephen G. Simpson (photo: Steve Simpson at Oberwolfach, 2008). Alma mater: MIT. Known for: reverse mathematics. Fields: mathematics. Institutions: Pennsylvania State University; Vanderbilt University. Thesis: Admissible Ordinals and Recursion Theory. Doctoral advisor: Gerald Sacks. Doctoral students: John R. Steel. A conference in honor of Simpson's 70th birthday was organized in May 2016.[3] Education Simpson graduated in 1966 from Lehigh University with a B.A. (summa cum laude) and M.A. in mathematics.[4] He earned a Ph.D. from the Massachusetts Institute of Technology in 1971, with a dissertation entitled Admissible Ordinals and Recursion Theory and supervised by Gerald Sacks.[5] Career After short-term positions at Yale University, the University of California, Berkeley, and the University of Oxford, Simpson became an assistant professor at the Pennsylvania State University in 1975. At Penn State, he was Raymond N. Shibley professor from 1987 to 1992.[4] In 2016, his wife, computer scientist Padma Raghavan, moved from Penn State to Vanderbilt University to become vice provost for research,[6] and Simpson followed her, becoming a research professor at Vanderbilt.[7] Selected publications • Simpson, Stephen G. (1977), "First order theory of the degrees of recursive unsolvability", Annals of Mathematics, 105 (1): 121–139, doi:10.2307/1971028, JSTOR 1971028, MR 0432435.
• Friedman, Harvey M.; Simpson, Stephen G.; Smith, Rick L. (1983), "Countable algebra and set existence axioms", Annals of Pure and Applied Logic, 25 (2): 141–181, doi:10.1016/0168-0072(83)90012-X, MR 0725732. • Carlson, Timothy J.; Simpson, Stephen G. (1984), "A dual form of Ramsey's theorem", Advances in Mathematics, 53 (3): 265–290, doi:10.1016/0001-8708(84)90026-4, MR 0753869. • Simpson, Stephen G. (1988), "Partial realizations of Hilbert's Program", Journal of Symbolic Logic, 53 (2): 349–363, doi:10.2307/2274508, JSTOR 2274508, MR 0947843. • Simpson, Stephen G. (1999), Subsystems of second order arithmetic, Perspectives in Mathematical Logic, Berlin: Springer-Verlag, doi:10.1007/978-3-642-59971-2, ISBN 3-540-64882-8, MR 1723993. 2nd ed., 2009, MR2517689. References 1. Elwes, Richard (2013), Math in 100 key breakthroughs (PDF), Quercus, New York, p. 397, ISBN 978-1-62365-054-4, MR 3222699. 2. Wolchover, Natalie (December 6, 2013), "Dispute over infinity divides mathematicians" (PDF), Scientific American. 3. The Foundational Impact of Recursion Theory: In honor of Steve Simpson's 70th birthday, May 22, 2016, retrieved 2016-05-06. 4. Simpson, Stephen G. (January 21, 2016), Curriculum vitae (PDF), retrieved 2016-05-06. 5. Steve Simpson at the Mathematics Genealogy Project. 6. Moran, Melanie (December 2015), "Vanderbilt names Padma Raghavan as vice provost for research", Research news @ Vanderbilt, Vanderbilt University, retrieved 2016-05-06. 7. Faculty profile, Vanderbilt University, retrieved 2016-05-06. External links • Home page at PSU • Google scholar profile
Stephen Hawking Stephen William Hawking (8 January 1942 – 14 March 2018) was an English theoretical physicist, cosmologist, and author who, at the time of his death, was director of research at the Centre for Theoretical Cosmology at the University of Cambridge.[7][17][18] Between 1979 and 2009, he was the Lucasian Professor of Mathematics at the University of Cambridge, widely viewed as one of the most prestigious academic posts in the world.[19] Stephen Hawking CH CBE FRS FRSA (photo: Hawking in the 1980s). Born: Stephen William Hawking, 8 January 1942, Oxford, England. Died: 14 March 2018 (aged 76), Cambridge, England. Resting place: Westminster Abbey.[1] Education: University College, Oxford (BA); Trinity Hall, Cambridge (PhD). Known for: Hawking radiation; A Brief History of Time; Penrose–Hawking theorems; black hole information paradox; micro black hole; primordial black hole; chronology protection conjecture; soft hair (no-hair theorem); Bekenstein–Hawking formula; Hawking energy; Hawking–Page phase transition; Gibbons–Hawking ansatz; Gibbons–Hawking effect; Gibbons–Hawking space; Gibbons–Hawking–York boundary term; Hartle–Hawking state; Thorne–Hawking–Preskill bet. Spouses: Jane Wilde (m. 1965; div. 1995); Elaine Mason (m. 1995; div. 2007). Children: 3, including Lucy. Awards: Adams Prize (1966); Eddington Medal (1975); Maxwell Medal and Prize (1976); Heineman Prize (1976); Hughes Medal (1976); Albert Einstein Award (1978); Albert Einstein Medal (1979); RAS Gold Medal (1985); Dirac Medal (1987); Wolf Prize (1988); Prince of Asturias Award (1989); Foreign Associate of the National Academy of Sciences (1992); Andrew Gemant Award (1998); Naylor Prize and Lectureship (1999); Lilienfeld Prize (1999); Albert Medal (1999); Copley Medal (2006); Presidential Medal of Freedom (2009); Breakthrough Prize in Fundamental Physics (2012); BBVA Foundation Frontiers of Knowledge Award (2015). Fields: general relativity; quantum gravity. Institutions: University of Cambridge; California Institute of Technology; Perimeter Institute for Theoretical Physics. Thesis: Properties of Expanding Universes (1966). Doctoral advisor: Dennis W. Sciama.[2] Other academic advisors: Robert Berman.[3] Doctoral students: Bruce Allen,[2][4] Raphael Bousso,[2][5] Bernard Carr,[2][6][7] Fay Dowker,[2][8] Christophe Galfard,[9] Gary Gibbons,[2][10][7] Thomas Hertog,[2][11] Raymond Laflamme,[2][12] Don Page,[2][13] Malcolm Perry,[2][14][7] Christopher Pope, Marika Taylor,[2][15] Alan Yuille, Wu Zhongchao,[2][16] and 27 others.[2] Website: www.hawking.org.uk
Hawking was born in Oxford into a family of physicians. In October 1959, at the age of 17, he began his university education at University College, Oxford, where he received a first-class BA degree in physics. In October 1962, he began his graduate work at Trinity Hall at the University of Cambridge where, in March 1966, he obtained his PhD degree in applied mathematics and theoretical physics, specialising in general relativity and cosmology. In 1963, at age 21, Hawking was diagnosed with an early-onset slow-progressing form of motor neurone disease that gradually, over decades, paralysed him.[20][21] After the loss of his speech, he communicated through a speech-generating device initially through use of a handheld switch, and eventually by using a single cheek muscle.[22] Hawking's scientific works included a collaboration with Roger Penrose on gravitational singularity theorems in the framework of general relativity, and the theoretical prediction that black holes emit radiation, often called Hawking radiation. Initially, Hawking radiation was controversial.
By the late 1970s and following the publication of further research, the discovery was widely accepted as a major breakthrough in theoretical physics. Hawking was the first to set out a theory of cosmology explained by a union of the general theory of relativity and quantum mechanics. He was a vigorous supporter of the many-worlds interpretation of quantum mechanics.[23][24] Hawking achieved commercial success with several works of popular science in which he discussed his theories and cosmology in general. His book A Brief History of Time appeared on the Sunday Times bestseller list for a record-breaking 237 weeks. Hawking was a Fellow of the Royal Society, a lifetime member of the Pontifical Academy of Sciences, and a recipient of the Presidential Medal of Freedom, the highest civilian award in the United States. In 2002, Hawking was ranked number 25 in the BBC's poll of the 100 Greatest Britons. He died in 2018 at the age of 76, after having motor neurone disease for more than 50 years. 
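The lead above credits Hawking with the prediction that black holes radiate and names the Bekenstein–Hawking formula; for reference, the standard textbook expressions (not stated explicitly in the article) for the Hawking temperature of a Schwarzschild black hole of mass M and the Bekenstein–Hawking entropy of a horizon of area A are:

```latex
% Hawking temperature: inversely proportional to the black hole mass M
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}}

% Bekenstein–Hawking entropy: proportional to the horizon area A
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} c^{3} A}{4 G \hbar}
```

Because the temperature scales as 1/M, a black hole grows hotter as it radiates away mass, so evaporation accelerates; this is the physical basis of the claim, discussed later in the article, that black holes may radiate until they exhaust their energy and evaporate.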
Early life Family Hawking was born on 8 January 1942[25][26] in Oxford to Frank and Isobel Eileen Hawking (née Walker).[27][28] Hawking's mother was born into a family of doctors in Glasgow, Scotland.[29][30] His wealthy paternal great-grandfather, from Yorkshire, over-extended himself buying farm land and then went bankrupt in the great agricultural depression during the early 20th century.[30] His paternal great-grandmother saved the family from financial ruin by opening a school in their home.[30] Despite their families' financial constraints, both parents attended the University of Oxford, where Frank read medicine and Isobel read Philosophy, Politics and Economics.[28] Isobel worked as a secretary for a medical research institute, and Frank was a medical researcher.[28][31] Hawking had two younger sisters, Philippa and Mary, and an adopted brother, Edward Frank David (1955–2003).[32] In 1950, when Hawking's father became head of the division of parasitology at the National Institute for Medical Research, the family moved to St Albans, Hertfordshire.[33][34] In St Albans, the family was considered highly intelligent and somewhat eccentric;[33][35] meals were often spent with each person silently reading a book.[33] They lived a frugal existence in a large, cluttered, and poorly maintained house and travelled in a converted London taxicab.[36][37] During one of Hawking's father's frequent absences working in Africa,[38] the rest of the family spent four months in Mallorca visiting his mother's friend Beryl and her husband, the poet Robert Graves.[39] Primary and secondary school years Hawking began his schooling at the Byron House School in Highgate, London. He later blamed its "progressive methods" for his failure to learn to read while at the school.[40][33] In St Albans, the eight-year-old Hawking attended St Albans High School for Girls for a few months. At that time, younger boys could attend one of the houses.[39][41] Hawking attended two private (i.e. 
fee-paying) schools, first Radlett School[41] and from September 1952, St Albans School, Hertfordshire,[26][42] after passing the eleven-plus a year early.[43] The family placed a high value on education.[33] Hawking's father wanted his son to attend Westminster School, but the 13-year-old Hawking was ill on the day of the scholarship examination. His family could not afford the school fees without the financial aid of a scholarship, so Hawking remained at St Albans.[44][45] A positive consequence was that Hawking remained close to a group of friends with whom he enjoyed board games, the manufacture of fireworks, model aeroplanes and boats,[46] and long discussions about Christianity and extrasensory perception.[47] From 1958 on, with the help of the mathematics teacher Dikran Tahta, they built a computer from clock parts, an old telephone switchboard and other recycled components.[48][49] Although known at school as "Einstein", Hawking was not initially successful academically.[50] With time, he began to show considerable aptitude for scientific subjects and, inspired by Tahta, decided to read mathematics at university.[51][52][53] Hawking's father advised him to study medicine, concerned that there were few jobs for mathematics graduates.[54] He also wanted his son to attend University College, Oxford, his own alma mater. As it was not possible to read mathematics there at the time, Hawking decided to study physics and chemistry. 
Despite his headmaster's advice to wait until the next year, Hawking was awarded a scholarship after taking the examinations in March 1959.[55][56] Undergraduate years Hawking began his university education at University College, Oxford,[26] in October 1959 at the age of 17.[57] For the first eighteen months, he was bored and lonely – he found the academic work "ridiculously easy".[58][59] His physics tutor, Robert Berman, later said, "It was only necessary for him to know that something could be done, and he could do it without looking to see how other people did it."[3] A change occurred during his second and third years when, according to Berman, Hawking made more of an effort "to be one of the boys". He developed into a popular, lively and witty college-member, interested in classical music and science fiction.[57] Part of the transformation resulted from his decision to join the college boat-club, the University College Boat Club, where he coxed a rowing-crew.[60][61] The rowing-coach at the time noted that Hawking cultivated a daredevil image, steering his crew on risky courses that led to damaged boats.[60][62] Hawking estimated that he studied about 1,000 hours during his three years at Oxford. These unimpressive study habits made sitting his finals a challenge, and he decided to answer only theoretical physics questions rather than those requiring factual knowledge. A first-class degree was a condition of acceptance for his planned graduate study in cosmology at the University of Cambridge.[63][64] Anxious, he slept poorly the night before the examinations, and the result was on the borderline between first- and second-class honours, making a viva (oral examination) with the Oxford examiners necessary.[64][65] Hawking was concerned that he was viewed as a lazy and difficult student. So, when asked at the viva to describe his plans, he said, "If you award me a First, I will go to Cambridge. 
If I receive a Second, I shall stay in Oxford, so I expect you will give me a First."[64][66] He was held in higher regard than he believed; as Berman commented, the examiners "were intelligent enough to realise they were talking to someone far cleverer than most of themselves".[64] After receiving a first-class BA degree in physics and completing a trip to Iran with a friend, he began his graduate work at Trinity Hall, Cambridge, in October 1962.[26][67][68] Post-graduate years Hawking's first year as a doctoral student was difficult. He was initially disappointed to find that he had been assigned Dennis William Sciama, one of the founders of modern cosmology, as a supervisor rather than the noted astronomer Fred Hoyle,[69][70] and he found his training in mathematics inadequate for work in general relativity and cosmology.[71] After being diagnosed with motor neurone disease, Hawking fell into a depression – though his doctors advised that he continue with his studies, he felt there was little point.[72] His disease progressed more slowly than doctors had predicted. Although Hawking had difficulty walking unsupported, and his speech was almost unintelligible, an initial diagnosis that he had only two years to live proved unfounded. 
With Sciama's encouragement, he returned to his work.[73][74] Hawking started developing a reputation for brilliance and brashness when he publicly challenged the work of Fred Hoyle and his student Jayant Narlikar at a lecture in June 1964.[75][76] When Hawking began his doctoral studies, there was much debate in the physics community about the prevailing theories of the creation of the universe: the Big Bang and Steady State theories.[77] Inspired by Roger Penrose's theorem of a spacetime singularity in the centre of black holes, Hawking applied the same thinking to the entire universe; and, during 1965, he wrote his thesis on this topic.[78][79] Hawking's thesis was approved in 1966.[80] There were other positive developments: Hawking received a research fellowship at Gonville and Caius College at Cambridge;[81] he obtained his PhD degree in applied mathematics and theoretical physics, specialising in general relativity and cosmology, in March 1966;[82] and his essay "Singularities and the Geometry of Space–Time" shared top honours with one by Penrose to win that year's prestigious Adams Prize.[83][82] Career 1966–1975 In his work, and in collaboration with Penrose, Hawking extended the singularity theorem concepts first explored in his doctoral thesis. This included not only the existence of singularities but also the theory that the universe might have started as a singularity.
Their joint essay was the runner-up in the 1968 Gravity Research Foundation competition.[84][85] In 1970, they published a proof that if the universe obeys the general theory of relativity and fits any of the models of physical cosmology developed by Alexander Friedmann, then it must have begun as a singularity.[86][87][88] In 1969, Hawking accepted a specially created Fellowship for Distinction in Science to remain at Caius.[89] In 1970, Hawking postulated what became known as the second law of black hole dynamics, that the event horizon of a black hole can never get smaller.[90] With James M. Bardeen and Brandon Carter, he proposed the four laws of black hole mechanics, drawing an analogy with thermodynamics.[91] To Hawking's irritation, Jacob Bekenstein, a graduate student of John Wheeler, went further, and ultimately correctly, to apply thermodynamic concepts literally.[92][93] In the early 1970s, Hawking's work with Carter, Werner Israel, and David C. Robinson strongly supported Wheeler's no-hair theorem, which states that no matter what original material a black hole is created from, it can be completely described by the properties of mass, electrical charge and rotation.[94][95] His essay titled "Black Holes" won the Gravity Research Foundation Award in January 1971.[96] Hawking's first book, The Large Scale Structure of Space-Time, written with George Ellis, was published in 1973.[97] Beginning in 1973, Hawking moved into the study of quantum gravity and quantum mechanics.[98][97] His work in this area was spurred by a visit to Moscow and discussions with Yakov Borisovich Zel'dovich and Alexei Starobinsky, whose work showed that according to the uncertainty principle, rotating black holes emit particles.[99] To Hawking's annoyance, his much-checked calculations produced findings that contradicted his second law, which claimed black holes could never get smaller,[100] and supported Bekenstein's reasoning about their entropy.[99][101] His results,
which Hawking presented from 1974, showed that black holes emit radiation, known today as Hawking radiation, which may continue until they exhaust their energy and evaporate.[102][103][104] Initially, Hawking radiation was controversial. By the late 1970s and following the publication of further research, the discovery was widely accepted as a significant breakthrough in theoretical physics.[105][106][107] Hawking was elected a Fellow of the Royal Society (FRS) in 1974, a few weeks after the announcement of Hawking radiation. At the time, he was one of the youngest scientists to become a Fellow.[108][109] Hawking was appointed to the Sherman Fairchild Distinguished Visiting Professorship at the California Institute of Technology (Caltech) in 1974. He worked with a friend on the faculty, Kip Thorne,[110][7] and engaged him in a scientific wager about whether the X-ray source Cygnus X-1 was a black hole. The wager was an "insurance policy" against the proposition that black holes did not exist.[111] Hawking acknowledged that he had lost the bet in 1990, a bet that was the first of several he was to make with Thorne and others.[112] Hawking had maintained ties to Caltech, spending a month there almost every year since this first visit.[113] 1975–1990 Hawking returned to Cambridge in 1975 to a more academically senior post, as reader in gravitational physics. The mid-to-late 1970s were a period of growing public interest in black holes and the physicists who were studying them. 
Hawking was regularly interviewed for print and television.[114][115] He also received increasing academic recognition of his work.[116] In 1975, he was awarded both the Eddington Medal and the Pius XI Gold Medal, and in 1976 the Dannie Heineman Prize, the Maxwell Medal and Prize and the Hughes Medal.[117][118] He was appointed a professor with a chair in gravitational physics in 1977.[119] The following year he received the Albert Einstein Medal and an honorary doctorate from the University of Oxford.[120][116] In 1979, Hawking was elected Lucasian Professor of Mathematics at the University of Cambridge.[116][121] His inaugural lecture in this role was titled: "Is the End in Sight for Theoretical Physics?" and proposed N = 8 supergravity as the leading theory to solve many of the outstanding problems physicists were studying.[122] His promotion coincided with a health-crisis which led to his accepting, albeit reluctantly, some nursing services at home.[123] At the same time, he was also making a transition in his approach to physics, becoming more intuitive and speculative rather than insisting on mathematical proofs. "I would rather be right than rigorous", he told Kip Thorne.[124] In 1981, he proposed that information in a black hole is irretrievably lost when a black hole evaporates. 
This information paradox violates the fundamental tenet of quantum mechanics, and led to years of debate, including "the Black Hole War" with Leonard Susskind and Gerard 't Hooft.[125][126] Cosmological inflation – a theory proposing that following the Big Bang, the universe initially expanded incredibly rapidly before settling down to a slower expansion – was proposed by Alan Guth and also developed by Andrei Linde.[127] Following a conference in Moscow in October 1981, Hawking and Gary Gibbons[7] organised a three-week Nuffield Workshop in the summer of 1982 on "The Very Early Universe" at Cambridge University, a workshop that focused mainly on inflation theory.[128][129][130] Hawking also began a new line of quantum-theory research into the origin of the universe. In 1981 at a Vatican conference, he presented work suggesting that there might be no boundary – or beginning or ending – to the universe.[131][132] Hawking subsequently developed the research in collaboration with Jim Hartle,[7] and in 1983 they published a model, known as the Hartle–Hawking state. It proposed that prior to the Planck epoch, the universe had no boundary in space-time; before the Big Bang, time did not exist and the concept of the beginning of the universe is meaningless.[133] The initial singularity of the classical Big Bang models was replaced with a region akin to the North Pole. One cannot travel north of the North Pole, but there is no boundary there – it is simply the point where all north-running lines meet and end.[134][135] Initially, the no-boundary proposal predicted a closed universe, which had implications about the existence of God. As Hawking explained, "If the universe has no boundaries but is self-contained... 
then God would not have had any freedom to choose how the universe began."[136] Hawking did not rule out the existence of a Creator, asking in A Brief History of Time "Is the unified theory so compelling that it brings about its own existence?",[137] also stating "If we discover a complete theory, it would be the ultimate triumph of human reason – for then we should know the mind of God";[138] in his early work, Hawking spoke of God in a metaphorical sense. In the same book he suggested that the existence of God was not necessary to explain the origin of the universe. Later discussions with Neil Turok led to the realisation that the existence of God was also compatible with an open universe.[139] Further work by Hawking in the area of arrows of time led to the 1985 publication of a paper theorising that if the no-boundary proposition were correct, then when the universe stopped expanding and eventually collapsed, time would run backwards.[140] A paper by Don Page and independent calculations by Raymond Laflamme led Hawking to withdraw this concept.[141] Honours continued to be awarded: in 1981 he was awarded the American Franklin Medal,[142] and in the 1982 New Year Honours appointed a Commander of the Order of the British Empire (CBE).[143][144][145] These awards did not significantly change Hawking's financial status, and motivated by the need to finance his children's education and home-expenses, he decided in 1982 to write a popular book about the universe that would be accessible to the general public.[146][147] Instead of publishing with an academic press, he signed a contract with Bantam Books, a mass-market publisher, and received a large advance for his book.[148][149] A first draft of the book, called A Brief History of Time, was completed in 1984.[150] One of the first messages Hawking produced with his speech-generating device was a request for his assistant to help him finish writing A Brief History of Time.[151] Peter Guzzardi, his editor at Bantam, 
pushed him to explain his ideas clearly in non-technical language, a process that required many revisions from an increasingly irritated Hawking.[152] The book was published in April 1988 in the US and in June in the UK, and it proved to be an extraordinary success, rising quickly to the top of best-seller lists in both countries and remaining there for months.[153][154][155] The book was translated into many languages,[156] and as of 2009, has sold an estimated 9 million copies.[155] Media attention was intense,[156] and a Newsweek magazine cover and a television special both described him as "Master of the Universe".[157] Success led to significant financial rewards, but also the challenges of celebrity status.[158] Hawking travelled extensively to promote his work, and enjoyed partying and dancing into the small hours.[156] Difficulty in refusing invitations and visitors left him limited time for work and for his students.[159] Some colleagues were resentful of the attention Hawking received, feeling it was due to his disability.[160][161] He received further academic recognition, including five more honorary degrees,[157] the Gold Medal of the Royal Astronomical Society (1985),[162] the Paul Dirac Medal (1987)[157] and, jointly with Penrose, the prestigious Wolf Prize (1988).[163] In the 1989 Birthday Honours, he was appointed a Companion of Honour (CH).[159][164] He reportedly declined a knighthood in the late 1990s in objection to the UK's science funding policy.[165][166] 1990–2000 Hawking pursued his work in physics: in 1993 he co-edited a book on Euclidean quantum gravity with Gary Gibbons and published a collected edition of his own articles on black holes and the Big Bang.[167] In 1994, at Cambridge's Newton Institute, Hawking and Penrose delivered a series of six lectures that were published in 1996 as "The Nature of Space and Time".[168] In 1997, he conceded a 1991 public scientific wager made with Kip Thorne and John Preskill of Caltech.
Hawking had bet that Penrose's proposal of a "cosmic censorship conjecture" – that there could be no "naked singularities" unclothed within a horizon – was correct.[169] After discovering his concession might have been premature, a new and more refined wager was made. This one specified that such singularities would occur without extra conditions.[170] The same year, Thorne, Hawking and Preskill made another bet, this time concerning the black hole information paradox.[171][172] Thorne and Hawking argued that since general relativity made it impossible for black holes to radiate and lose information, the mass-energy and information carried by Hawking radiation must be "new", and not from inside the black hole event horizon. Since this contradicted the quantum mechanics of microcausality, quantum mechanics theory would need to be rewritten. Preskill argued the opposite, that since quantum mechanics suggests that the information emitted by a black hole relates to information that fell in at an earlier time, the concept of black holes given by general relativity must be modified in some way.[173] Hawking also maintained his public profile, including bringing science to a wider audience. A film version of A Brief History of Time, directed by Errol Morris and produced by Steven Spielberg, premiered in 1992. Hawking had wanted the film to be scientific rather than biographical, but he was persuaded otherwise. The film, while a critical success, was not widely released.[174] A popular-level collection of essays, interviews, and talks titled Black Holes and Baby Universes and Other Essays was published in 1993,[175] and a six-part television series Stephen Hawking's Universe and a companion book appeared in 1997. 
As Hawking insisted, this time the focus was entirely on science.[176][177] 2000–2018 Hawking continued his writings for a popular audience, publishing The Universe in a Nutshell in 2001,[178] and A Briefer History of Time, which he wrote in 2005 with Leonard Mlodinow to update his earlier works with the aim of making them accessible to a wider audience, and God Created the Integers, which appeared in 2006.[179] Along with Thomas Hertog at CERN and Jim Hartle, from 2006 on Hawking developed a theory of top-down cosmology, which says that the universe had not one unique initial state but many different ones, and therefore that it is inappropriate to formulate a theory that predicts the universe's current configuration from one particular initial state.[180] Top-down cosmology posits that the present "selects" the past from a superposition of many possible histories. In doing so, the theory suggests a possible resolution of the fine-tuning question.[181][182] Hawking continued to travel widely, including trips to Chile, Easter Island, South Africa, Spain (to receive the Fonseca Prize in 2008),[183][184] Canada,[185] and numerous trips to the United States.[186] For practical reasons related to his disability, Hawking increasingly travelled by private jet, and by 2011 that had become his only mode of international travel.[187] By 2003, consensus among physicists was growing that Hawking was wrong about the loss of information in a black hole.[188] In a 2004 lecture in Dublin, he conceded his 1997 bet with Preskill, but described his own, somewhat controversial solution to the information paradox problem, involving the possibility that black holes have more than one topology.[189][173] In the 2005 paper he published on the subject, he argued that the information paradox was explained by examining all the alternative histories of universes, with the information loss in those with black holes being cancelled out by those without such loss.[172][190] In January 2014, he 
called the alleged loss of information in black holes his "biggest blunder".[191] As part of another longstanding scientific dispute, Hawking had emphatically argued, and bet, that the Higgs boson would never be found.[192] The particle was proposed to exist as part of the Higgs field theory by Peter Higgs in 1964. Hawking and Higgs engaged in a heated and public debate over the matter in 2002 and again in 2008, with Higgs criticising Hawking's work and complaining that Hawking's "celebrity status gives him instant credibility that others do not have."[193] The particle was discovered in July 2012 at CERN following construction of the Large Hadron Collider. Hawking quickly conceded that he had lost his bet[194][195] and said that Higgs should win the Nobel Prize for Physics,[196] which he did in 2013.[197] In 2007, Hawking and his daughter Lucy published George's Secret Key to the Universe, a children's book designed to explain theoretical physics in an accessible fashion and featuring characters similar to those in the Hawking family.[198] The book was followed by sequels in 2009, 2011, 2014 and 2016.[199] In 2002, following a UK-wide vote, the BBC included Hawking in their list of the 100 Greatest Britons.[200] He was awarded the Copley Medal from the Royal Society (2006),[201] the Presidential Medal of Freedom, which is America's highest civilian honour (2009),[202] and the Russian Special Fundamental Physics Prize (2013).[203] Several buildings have been named after him, including the Stephen W. 
Hawking Science Museum in San Salvador, El Salvador,[204] the Stephen Hawking Building in Cambridge,[205] and the Stephen Hawking Centre at the Perimeter Institute in Canada.[206] Appropriately, given Hawking's association with time, he unveiled the mechanical "Chronophage" (or time-eating) Corpus Clock at Corpus Christi College, Cambridge in September 2008.[207][208] During his career, Hawking supervised 39 successful PhD students.[2] One doctoral student did not successfully complete the PhD.[2] As required by Cambridge University policy, Hawking retired as Lucasian Professor of Mathematics in 2009.[121][209] Despite suggestions that he might leave the United Kingdom as a protest against public funding cuts to basic scientific research,[210] Hawking worked as director of research at the Cambridge University Department of Applied Mathematics and Theoretical Physics.[211] On 28 June 2009, as a tongue-in-cheek test of his 1992 conjecture that travel into the past is effectively impossible, Hawking held a party open to all, complete with hors d'oeuvres and iced champagne, but publicised the party only after it was over so that only time-travellers would know to attend; as expected, nobody showed up to the party.[212] On 20 July 2015, Hawking helped launch Breakthrough Initiatives, an effort to search for extraterrestrial life.[213] Hawking created Stephen Hawking: Expedition New Earth, a documentary on space colonisation, as a 2017 episode of Tomorrow's World.[214][215] In August 2015, Hawking said that not all information is lost when something enters a black hole and there might be a possibility to retrieve information from a black hole according to his theory.[216] In July 2017, Hawking was awarded an Honorary Doctorate from Imperial College London.[217] Hawking's final paper – A smooth exit from eternal inflation? 
– was posthumously published in the Journal of High Energy Physics on 27 April 2018.[218][219] Personal life Marriages Hawking met his future wife, Jane Wilde, at a party in 1962. The following year, Hawking was diagnosed with motor neurone disease. In October 1964, the couple became engaged to marry, aware of the potential challenges that lay ahead due to Hawking's shortened life expectancy and physical limitations.[120][220] Hawking later said that the engagement gave him "something to live for".[221] The two were married on 14 July 1965 in their shared hometown of St Albans.[81] The couple resided in Cambridge, within Hawking's walking distance to the Department of Applied Mathematics and Theoretical Physics (DAMTP). During their first years of marriage, Jane lived in London during the week as she completed her degree at Westfield College. They travelled to the United States several times for conferences and physics-related visits. Jane began a PhD programme through Westfield College in medieval Spanish poetry (completed in 1981). The couple had three children: Robert, born May 1967,[222][223] Lucy, born November 1970,[224] and Timothy, born April 1979.[116] Hawking rarely discussed his illness and physical challenges—even, in a precedent set during their courtship, with Jane.[225] His disabilities meant that the responsibilities of home and family rested firmly on his wife's increasingly overwhelmed shoulders, leaving him more time to think about physics.[226] Upon his appointment in 1974 to a year-long position at the California Institute of Technology in Pasadena, California, Jane proposed that a graduate or post-doctoral student live with them and help with his care. Hawking accepted, and Bernard Carr travelled with them as the first of many students who fulfilled this role.[227][228] The family spent a generally happy and stimulating year in Pasadena.[229] Hawking returned to Cambridge in 1975 to a new home and a new job, as reader. 
Don Page, with whom Hawking had begun a close friendship at Caltech, arrived to work as the live-in graduate student assistant. With Page's help and that of a secretary, Jane's responsibilities were reduced so she could return to her doctoral thesis and her new interest in singing.[230] Around December 1977, Jane met organist Jonathan Hellyer Jones when singing in a church choir. Hellyer Jones became close to the Hawking family, and by the mid-1980s, he and Jane had developed romantic feelings for each other.[119][231][232] According to Jane, her husband was accepting of the situation, stating "he would not object so long as I continued to love him".[119][233][234] Jane and Hellyer Jones were determined not to break up the family, and their relationship remained platonic for a long period.[235] By the 1980s, Hawking's marriage had been strained for many years. Jane felt overwhelmed by the intrusion into their family life of the required nurses and assistants.[236] The impact of his celebrity status was challenging for colleagues and family members, while the prospect of living up to a worldwide fairytale image was daunting for the couple.[237][181] Hawking's views of religion also contrasted with Jane's strong Christian faith and resulted in tension.[181][238][239] After a tracheotomy in 1985, Hawking required a full-time nurse, and nursing care was split across three daily shifts. In the late 1980s, Hawking grew close to one of his nurses, Elaine Mason, to the dismay of some colleagues, caregivers, and family members, who were disturbed by her strength of personality and protectiveness.[240] In February 1990, Hawking told Jane that he was leaving her for Mason,[241] and departed the family home.[143] After his divorce from Jane in 1995, Hawking married Mason in September,[143][242] declaring, "It's wonderful – I have married the woman I love."[243] In 1999, Jane Hawking published a memoir, Music to Move the Stars, describing her marriage to Hawking and its breakdown.
Its revelations caused a sensation in the media but, as was his usual practice regarding his personal life, Hawking made no public comment except to say that he did not read biographies about himself.[244] After his second marriage, Hawking's family felt excluded and marginalised from his life.[239] For a period of about five years in the early 2000s, his family and staff became increasingly worried that he was being physically abused.[245] Police investigations took place, but were closed as Hawking refused to make a complaint.[246] In 2006, Hawking and Mason quietly divorced,[247][248] and Hawking resumed closer relationships with Jane, his children, and his grandchildren.[181][248] Reflecting on this happier period, a revised version of Jane's book, re-titled Travelling to Infinity: My Life with Stephen, appeared in 2007,[246] and was made into a film, The Theory of Everything, in 2014.[249] Disability Hawking had a rare early-onset, slow-progressing form of motor neurone disease (MND; also known as amyotrophic lateral sclerosis (ALS) or Lou Gehrig's disease), a fatal neurodegenerative disease that affects the motor neurones in the brain and spinal cord, which gradually paralysed him over decades.[21] Hawking had experienced increasing clumsiness during his final year at Oxford, including a fall on some stairs and difficulties when rowing.[250][251] The problems worsened, and his speech became slightly slurred. His family noticed the changes when he returned home for Christmas, and medical investigations were begun.[252][253] The MND diagnosis came when Hawking was 21, in 1963. 
At the time, doctors gave him a life expectancy of two years.[254][255] In the late 1960s, Hawking's physical abilities declined: he began to use crutches and could no longer give lectures regularly.[256] As he slowly lost the ability to write, he developed compensatory visual methods, including seeing equations in terms of geometry.[257][258] The physicist Werner Israel later compared the achievements to Mozart composing an entire symphony in his head.[259][260] Hawking was fiercely independent and unwilling to accept help or make concessions for his disabilities. He preferred to be regarded as "a scientist first, popular science writer second, and, in all the ways that matter, a normal human being with the same desires, drives, dreams, and ambitions as the next person."[261] His wife Jane later noted: "Some people would call it determination, some obstinacy. I've called it both at one time or another."[262] He required much persuasion to accept the use of a wheelchair at the end of the 1960s,[263] but ultimately became notorious for the wildness of his wheelchair driving.[264] Hawking was a popular and witty colleague, but his illness, as well as his reputation for brashness, distanced him from some.[262] When Hawking first began using a wheelchair he was using standard motorised models. The earliest surviving example of these chairs was made by BEC Mobility and sold by Christie's in November 2018 for £296,750.[265] Hawking continued to use this type of chair until the early 1990s, at which time his ability to use his hands to drive a wheelchair deteriorated. Hawking used a variety of different chairs from that time, including a DragonMobility Dragon elevating powerchair from 2007, as shown in the April 2008 photo of Hawking attending NASA's 50th anniversary;[266] a Permobil C350 from 2014; and then a Permobil F3 from 2016.[267] Hawking's speech deteriorated, and by the late 1970s he could be understood by only his family and closest friends. 
To communicate with others, Hawking relied on someone who knew him well to interpret his slurred speech.[268] Spurred by a dispute with the university over who would pay for the ramp needed for him to enter his workplace, Hawking and his wife campaigned for improved access and support for those with disabilities in Cambridge,[269][270] including adapted student housing at the university.[271] In general, Hawking had ambivalent feelings about his role as a disability rights champion: while wanting to help others, he also sought to detach himself from his illness and its challenges.[272] His lack of engagement in this area led to some criticism.[273] During a visit to CERN on the border of France and Switzerland in mid-1985, Hawking contracted pneumonia, which in his condition was life-threatening; he was so ill that Jane was asked if life support should be terminated. She refused, but the consequence was a tracheotomy, which required round-the-clock nursing care and caused the loss of what remained of his speech.[274][275] The National Health Service was ready to pay for a nursing home, but Jane was determined that he would live at home. The cost of the care was funded by an American foundation.[276][277] Nurses were hired for the three shifts needed to provide the round-the-clock support he required.
One of those employed was Elaine Mason, who was to become Hawking's second wife.[278] For his communication, Hawking initially raised his eyebrows to choose letters on a spelling card,[279] but in 1986 he received a computer program called the "Equalizer" from Walter Woltosz, CEO of Words Plus, who had developed an earlier version of the software to help his mother-in-law, who also had ALS and had lost her ability to speak and write.[280] In a method he used for the rest of his life, Hawking could now simply press a switch to select phrases, words or letters from a bank of about 2,500–3,000 that were scanned.[281][282] The program was originally run on a desktop computer. Elaine Mason's husband, David, a computer engineer, adapted a small computer and attached it to his wheelchair.[283] Released from the need to use somebody to interpret his speech, Hawking commented that "I can communicate better now than before I lost my voice."[284] The voice he used had an American accent and is no longer produced.[285][286] Despite the later availability of other voices, Hawking retained this original voice, saying that he preferred it and identified with it.[287] Originally, Hawking activated a switch using his hand and could produce up to 15 words per minute.[151] Lectures were prepared in advance and were sent to the speech synthesiser in short sections to be delivered.[285] Hawking gradually lost the use of his hand, and in 2005 he began to control his communication device with movements of his cheek muscles,[288][289][290] with a rate of about one word per minute.[289] With this decline there was a risk of him developing locked-in syndrome, so Hawking collaborated with Intel Corporation researchers on systems that could translate his brain patterns or facial expressions into switch activations. 
After several prototypes that did not perform as planned, they settled on an adaptive word predictor made by the London-based startup SwiftKey, which used a system similar to his original technology. Hawking had an easier time adapting to the new system, which was further developed by feeding it large amounts of Hawking's papers and other written materials, and which used predictive software similar to that of smartphone keyboards.[181][280][290][291] By 2009, he could no longer drive his wheelchair independently, but the same people who created his new typing mechanics were working on a method to drive his chair using movements made by his chin. This proved difficult, since Hawking could not move his neck, and trials showed that while he could indeed drive the chair, the movement was sporadic and jumpy.[280][292] Near the end of his life, Hawking experienced increased breathing difficulties, often requiring the use of a ventilator, and was regularly hospitalised.[181] Disability outreach Starting in the 1990s, Hawking accepted the mantle of role model for disabled people, lecturing and participating in fundraising activities.[293] At the turn of the century, he and eleven other humanitarians signed the Charter for the Third Millennium on Disability, which called on governments to prevent disability and protect the rights of disabled people.[294][295] In 1999, Hawking was awarded the Julius Edgar Lilienfeld Prize of the American Physical Society.[296] In August 2012, Hawking narrated the "Enlightenment" segment of the 2012 Summer Paralympics opening ceremony in London.[297] In 2013, the biographical documentary film Hawking, in which Hawking himself is featured, was released.[298] In September 2013, he expressed support for the legalisation of assisted suicide for the terminally ill.[299] In August 2014, Hawking accepted the Ice Bucket Challenge to promote ALS/MND awareness and raise contributions for research.
As he had pneumonia in 2013, he was advised not to have ice poured over him, but his children volunteered to accept the challenge on his behalf.[300] Plans for a trip to space In late 2006, Hawking revealed in a BBC interview that one of his greatest unfulfilled desires was to travel to space;[301] on hearing this, Richard Branson offered a free flight into space with Virgin Galactic, which Hawking immediately accepted. Besides personal ambition, he was motivated by the desire to increase public interest in spaceflight and to show the potential of people with disabilities.[302] On 26 April 2007, Hawking flew aboard a specially-modified Boeing 727–200 jet operated by Zero-G Corp off the coast of Florida to experience weightlessness.[303] Fears that the manoeuvres would cause him undue discomfort proved groundless, and the flight was extended to eight parabolic arcs.[301] It was described as a successful test to see if he could withstand the g-forces involved in space flight.[304] At the time, the date of Hawking's trip to space was projected to be as early as 2009, but commercial flights to space did not commence before his death.[305] Death Hawking died at his home in Cambridge on 14 March 2018, at the age of 76.[306][307][308] His family stated that he "died peacefully".[309][310] He was eulogised by figures in science, entertainment, politics, and other areas.[311][312][313][314] The Gonville and Caius College flag flew at half-mast and a book of condolences was signed by students and visitors.[315][316][317] A tribute was made to Hawking in the closing speech by IPC President Andrew Parsons at the closing ceremony of the 2018 Paralympic Winter Games in Pyeongchang, South Korea.[318] His private funeral took place on 31 March 2018,[319] at Great St Mary's Church, Cambridge.[319][320] Guests at the funeral included The Theory of Everything actors Eddie Redmayne and Felicity Jones, Queen guitarist and astrophysicist Brian May, and model Lily Cole.[321][322] In 
addition, actor Benedict Cumberbatch, who played Stephen Hawking in Hawking, astronaut Tim Peake, Astronomer Royal Martin Rees and physicist Kip Thorne provided readings at the service.[323] Although Hawking was an atheist, the funeral took place with a traditional Anglican service.[324][325] Following the cremation, a service of thanksgiving was held at Westminster Abbey on 15 June 2018, after which his ashes were interred in the Abbey's nave, between the graves of Sir Isaac Newton and Charles Darwin.[1][321][326][327] Inscribed on his memorial stone are the words "Here lies what was mortal of Stephen Hawking 1942–2018" and his most famed equation, the Bekenstein–Hawking entropy formula $S={\frac {kc^{3}A}{4\hbar G}}$.[328] He directed, at least fifteen years before his death, that this equation be his epitaph.[329][330][note 1] In June 2018, it was announced that Hawking's words, set to music by Greek composer Vangelis, would be beamed into space from a European Space Agency satellite dish in Spain with the aim of reaching the nearest black hole, 1A 0620-00.[335] Hawking's final broadcast interview, about the detection of gravitational waves resulting from the collision of two neutron stars, occurred in October 2017.[336] His final words to the world appeared posthumously, in April 2018, in the form of a Smithsonian TV Channel documentary entitled Leaving Earth: Or How to Colonize a Planet.[337][338] One of his final research studies, entitled A smooth exit from eternal inflation?, about the origin of the universe, was published in the Journal of High Energy Physics in May 2018.[339][218][340] Later, in October 2018, another of his final research studies, entitled Black Hole Entropy and Soft Hair,[341] was published, and dealt with the "mystery of what happens to the information held by objects once they disappear into a black hole".[342][343] Also in October 2018, Hawking's last book, Brief Answers to the Big Questions, a popular science book presenting his final comments on the most important questions
facing humankind, was published.[344][345][346] On 8 November 2018, an auction of 22 personal possessions of Stephen Hawking, including his doctoral thesis ("Properties of Expanding Universes", PhD thesis, Cambridge University, 1965) and wheelchair, took place, and fetched about £1.8 m.[347][348] Proceeds from the auction sale of the wheelchair went to two charities, the Motor Neurone Disease Association and the Stephen Hawking Foundation;[349] proceeds from Hawking's other items went to his estate.[348] In March 2019, it was announced that the Royal Mint would issue a commemorative 50p coin, only available as a commemorative edition,[350] in honour of Hawking.[351] The same month, Hawking's nurse, Patricia Dowdy, was struck off the nursing register for "failures over his care and financial misconduct."[352] In May 2021 it was announced that an Acceptance-in-Lieu agreement between HMRC, the Department for Culture, Media and Sport, Cambridge University Library, Science Museum Group, and the Hawking Estate, would see around 10,000 pages of Hawking's scientific and other papers remain in Cambridge, while objects including his wheelchairs, speech synthesisers, and personal memorabilia from his former Cambridge office would be housed at the Science Museum.[353] In February 2022 the "Stephen Hawking at Work" display opened at the Science Museum, London as the start of a two-year nationwide tour.[354] Personal views Philosophy is unnecessary At Google's Zeitgeist Conference in 2011, Stephen Hawking said that "philosophy is dead". He believed that philosophers "have not kept up with modern developments in science", "have not taken science sufficiently seriously and so Philosophy is no longer relevant to knowledge claims", "their art is dead" and that scientists "have become the bearers of the torch of discovery in our quest for knowledge". 
He said that philosophical problems can be answered by science, particularly new scientific theories which "lead us to a new and very different picture of the universe and our place in it".[355] His view was both praised and criticized.[356] Future of humanity In 2006, Hawking posed an open question on the Internet: "In a world that is in chaos politically, socially and environmentally, how can the human race sustain another 100 years?", later clarifying: "I don't know the answer. That is why I asked the question, to get people to think about it, and to be aware of the dangers we now face."[357] Hawking expressed concern that life on Earth is at risk from a sudden nuclear war, a genetically engineered virus, global warming, or other dangers humans have not yet thought of.[302][358] Hawking stated: "I regard it as almost inevitable that either a nuclear confrontation or environmental catastrophe will cripple the Earth at some point in the next 1,000 years", and considered an "asteroid collision" to be the biggest threat to the planet.[344] Such a planet-wide disaster need not result in human extinction if the human race were to be able to colonise additional planets before the disaster.[358] Hawking viewed spaceflight and the colonisation of space as necessary for the future of humanity.[302][359] Hawking stated that, given the vastness of the universe, aliens likely exist, but that contact with them should be avoided.[360][361] He warned that aliens might pillage Earth for resources. In 2010 he said, "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans."[361] Hawking warned that superintelligent artificial intelligence could be pivotal in steering humanity's fate, stating that "the potential benefits are huge... Success in creating AI would be the biggest event in human history. 
It might also be the last, unless we learn how to avoid the risks."[362][363] He feared that "an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has", and that "The real risk with AI isn't malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble".[364] He also considered that the enormous wealth generated by machines would need to be redistributed to prevent exacerbated economic inequality.[364] Hawking was concerned about the future emergence of a race of "superhumans" that would be able to design their own evolution,[344] and also argued that computer viruses in today's world should be considered a new form of life, stating that "maybe it says something about human nature, that the only form of life we have created so far is purely destructive. Talk about creating life in our own image."[365] Religion and atheism Hawking was an atheist.[366][367] In an interview published in The Guardian, Hawking regarded "the brain as a computer which will stop working when its components fail", and the concept of an afterlife as a "fairy story for people afraid of the dark".[307][138] In 2011, narrating the first episode of the American television series Curiosity on the Discovery Channel, Hawking declared: We are each free to believe what we want and it is my view that the simplest explanation is there is no God. No one created the universe and no one directs our fate. This leads me to a profound realisation. There is probably no heaven, and no afterlife either. We have this one life to appreciate the grand design of the universe, and for that, I am extremely grateful.[368][369] Hawking's association with atheism and freethinking was in evidence from his university years onwards, when he had been a member of Oxford University's humanist group.
He was later scheduled to appear as the keynote speaker at a 2017 Humanists UK conference.[370] In an interview with El Mundo, he said: Before we understand science, it is natural to believe that God created the universe. But now science offers a more convincing explanation. What I meant by 'we would know the mind of God' is, we would know everything that God would know, if there were a God, which there isn't. I'm an atheist.[366] In addition, Hawking stated: If you like, you can call the laws of science 'God', but it wouldn't be a personal God that you would meet and put questions to.[344] Politics Hawking was a longstanding Labour Party supporter.[371][372] He recorded a tribute for the 2000 Democratic presidential candidate Al Gore,[373] called the 2003 invasion of Iraq a "war crime",[372][374] campaigned for nuclear disarmament,[371][372] and supported stem cell research,[372][375] universal health care,[376] and action to prevent climate change.[377] In August 2014, Hawking was one of 200 public figures who were signatories to a letter to The Guardian expressing their hope that Scotland would vote to remain part of the United Kingdom in September's referendum on that issue.[378] Hawking believed a United Kingdom withdrawal from the European Union (Brexit) would damage the UK's contribution to science as modern research needs international collaboration, and that free movement of people in Europe encourages the spread of ideas.[379] Hawking said to Theresa May, "I deal with tough mathematical questions every day, but please don't ask me to help with Brexit."[380] Hawking was disappointed by Brexit and warned against envy and isolationism.[381] Hawking was greatly concerned over health care, and maintained that without the UK National Health Service, he could not have survived into his 70s.[382] Hawking especially feared privatisation. 
He stated, "The more profit is extracted from the system, the more private monopolies grow and the more expensive healthcare becomes. The NHS must be preserved from commercial interests and protected from those who want to privatise it."[383] Hawking blamed the Conservatives for cutting funding to the NHS, weakening it by privatisation, lowering staff morale through holding pay back and reducing social care.[384] Hawking accused Jeremy Hunt of cherry picking evidence which Hawking maintained debased science.[382] Hawking also stated, "There is overwhelming evidence that NHS funding and the numbers of doctors and nurses are inadequate, and it is getting worse."[385] In June 2017, Hawking endorsed the Labour Party in the 2017 UK general election, citing the Conservatives' proposed cuts to the NHS. But he was also critical of Labour leader Jeremy Corbyn, expressing scepticism over whether the party could win a general election under him.[386] Hawking feared Donald Trump's policies on global warming could endanger the planet and make global warming irreversible. He said, "Climate change is one of the great dangers we face, and it's one we can prevent if we act now. By denying the evidence for climate change, and pulling out of the Paris Agreement, Donald Trump will cause avoidable environmental damage to our beautiful planet, endangering the natural world, for us and our children."[387] Hawking further stated that this could lead Earth "to become like Venus, with a temperature of two hundred and fifty degrees, and raining sulphuric acid".[388] Hawking was also a supporter of a universal basic income.[389] He was critical of the Israeli government's position on the Israeli–Palestinian conflict, stating that their policy "is likely to lead to disaster."[390] Appearances in popular media In 1988, Hawking, Arthur C. Clarke and Carl Sagan were interviewed in God, the Universe and Everything Else. 
They discussed the Big Bang theory, God and the possibility of extraterrestrial life.[391] At the release party for the home video version of A Brief History of Time, Leonard Nimoy, who had played Spock on Star Trek, learned that Hawking was interested in appearing on the show. Nimoy made the necessary contact, and Hawking played a holographic simulation of himself in an episode of Star Trek: The Next Generation in 1993.[392][393] The same year, his synthesiser voice was recorded for the Pink Floyd song "Keep Talking",[394][175] and in 1999 for an appearance on The Simpsons.[395] Hawking appeared in documentaries titled The Real Stephen Hawking (2001),[295] Stephen Hawking: Profile (2002)[396] and Hawking (2013), and the documentary series Stephen Hawking, Master of the Universe (2008).[397] Hawking also guest-starred in Futurama[181] and had a recurring role in The Big Bang Theory.[398] Hawking allowed the use of his copyrighted voice[399][400] in the biographical 2014 film The Theory of Everything, in which he was portrayed by Eddie Redmayne in an Academy Award-winning role.[401] Hawking was featured at the Monty Python Live (Mostly) show in 2014.
In a pre-recorded video, he was shown running down Brian Cox with his wheelchair before singing an extended version of the "Galaxy Song".[402][403] Hawking used his fame to advertise products, including a wheelchair,[295] National Savings,[404] British Telecom, Specsavers, Egg Banking,[405] and Go Compare.[406] In 2015, he applied to trademark his name.[407] Broadcast in March 2018 just a week or two before his death, Hawking was the voice of The Book Mark II on The Hitchhiker's Guide to the Galaxy radio series, and he was the guest of Neil deGrasse Tyson on StarTalk.[408] On 8 January 2022, Google featured Hawking in a Google Doodle on the occasion of his 80th birth anniversary.[409] Awards and honours Hawking being presented by his daughter Lucy Hawking at the lecture he gave for NASA's 50th anniversary, 2008 Hawking received numerous awards and honours. Among the earliest of these, in 1974 he was elected a Fellow of the Royal Society (FRS).[7] At that time, his nomination read: Hawking has made major contributions to the field of general relativity. These derive from a deep understanding of what is relevant to physics and astronomy, and especially from a mastery of wholly new mathematical techniques. Following the pioneering work of Penrose he established, partly alone and partly in collaboration with Penrose, a series of successively stronger theorems establishing the fundamental result that all realistic cosmological models must possess singularities. Using similar techniques, Hawking has proved the basic theorems on the laws governing black holes: that stationary solutions of Einstein's equations with smooth event horizons must necessarily be axisymmetric; and that in the evolution and interaction of black holes, the total surface area of the event horizons must increase. In collaboration with G. Ellis, Hawking is the author of an impressive and original treatise on "Space-time in the Large".
The citation continues, "Other important work by Hawking relates to the interpretation of cosmological observations and to the design of gravitational wave detectors."[410] Hawking was also a member of the American Academy of Arts and Sciences (1984),[411] the American Philosophical Society (1984),[412] and the United States National Academy of Sciences (1992).[413] Hawking received the 2015 BBVA Foundation Frontiers of Knowledge Award in Basic Sciences, shared with Viatcheslav Mukhanov, for discovering that galaxies were formed from quantum fluctuations in the early Universe. At the 2016 Pride of Britain Awards, Hawking received the lifetime achievement award "for his contribution to science and British culture".[414] After receiving the award from Prime Minister Theresa May, Hawking humorously requested that she not seek his help with Brexit.[414] The Hawking Fellowship In 2017, the Cambridge Union Society, in conjunction with Hawking, established the Professor Stephen Hawking Fellowship. The fellowship is awarded annually to an individual who has made an exceptional contribution to the STEM fields and social discourse,[415] with a particular focus on impacts affecting the younger generations. Each fellow delivers a lecture on a topic of their choosing, known as the "Hawking Lecture".[416] Hawking himself accepted the inaugural fellowship, and he delivered the first Hawking Lecture in his last public appearance before his death.[417][418] Medal for Science Communication Hawking was a member of the advisory board of the Starmus Festival, and had a major role in acknowledging and promoting science communication. 
The Stephen Hawking Medal for Science Communication is an annual award initiated in 2016 to honour members of the arts community for contributions that help build awareness of science.[419] Recipients receive a medal bearing a portrait of Hawking by Alexei Leonov; the reverse depicts Leonov performing the first spacewalk alongside the "Red Special", the guitar of Queen musician and astrophysicist Brian May (music being another major component of the Starmus Festival).[420] The Starmus III Festival in 2016 was a tribute to Stephen Hawking, and the book of all Starmus III lectures, "Beyond the Horizon", was also dedicated to him. The first recipients of the medals, which were awarded at the festival, were chosen by Hawking himself. They were composer Hans Zimmer, physicist Jim Al-Khalili, and the science documentary Particle Fever.[421] Publications Popular books • A Brief History of Time (1988)[199] • Black Holes and Baby Universes and Other Essays (1993)[422] • The Universe in a Nutshell (2001)[199] • On the Shoulders of Giants (2002)[199] • God Created the Integers: The Mathematical Breakthroughs That Changed History (2005)[199] • The Dreams That Stuff Is Made of: The Most Astounding Papers of Quantum Physics and How They Shook the Scientific World (2011)[423] • My Brief History (2013)[199] Hawking's memoir. • Brief Answers to the Big Questions (2018)[344][424] Co-authored • The Nature of Space and Time (with Roger Penrose) (1996) • The Large, the Small and the Human Mind (with Roger Penrose, Abner Shimony and Nancy Cartwright) (1997) • The Future of Spacetime (with Kip Thorne, Igor Novikov, Timothy Ferris and introduction by Alan Lightman, Richard H. 
Price) (2002) • A Briefer History of Time (with Leonard Mlodinow) (2005)[199] • The Grand Design (with Leonard Mlodinow) (2010)[199] Forewords • Black Holes & Time Warps: Einstein's Outrageous Legacy (Kip Thorne, and introduction by Frederick Seitz) (1994) • The Physics of Star Trek (Lawrence Krauss) (1995) Children's fiction Co-written with his daughter Lucy. • George's Secret Key to the Universe (2007)[199] • George's Cosmic Treasure Hunt (2009)[199] • George and the Big Bang (2011)[199] • George and the Unbreakable Code (2014) • George and the Blue Moon (2016) Films and series • A Brief History of Time (1992)[425] • Stephen Hawking's Universe (1997)[426][233] • Hawking – BBC television film (2004) starring Benedict Cumberbatch • Horizon: The Hawking Paradox (2005)[427] • Masters of Science Fiction (2007)[428] • Stephen Hawking and the Theory of Everything (2007) • Stephen Hawking: Master of the Universe (2008)[429] • Into the Universe with Stephen Hawking (2010)[430] • Brave New World with Stephen Hawking (2011)[431] • Stephen Hawking's Grand Design (2012)[432] • The Big Bang Theory (2012, 2014–2015, 2017) • Stephen Hawking: A Brief History of Mine (2013)[433] • The Theory of Everything – Feature film (2014) starring Eddie Redmayne[434] • Genius by Stephen Hawking (2016) Selected academic works • S. W. Hawking; R. Penrose (27 January 1970). "The Singularities of Gravitational Collapse and Cosmology". Proceedings of the Royal Society A. 314 (1519): 529–548. Bibcode:1970RSPSA.314..529H. doi:10.1098/RSPA.1970.0021. ISSN 1364-5021. S2CID 120208756. Wikidata Q55872061. • S. W. Hawking (May 1971). "Gravitational Radiation from Colliding Black Holes". Physical Review Letters. 26 (21): 1344–1346. Bibcode:1971PhRvL..26.1344H. doi:10.1103/PHYSREVLETT.26.1344. ISSN 0031-9007. Wikidata Q21706376. • Stephen Hawking (June 1972). "Black holes in general relativity". Communications in Mathematical Physics. 25 (2): 152–166. Bibcode:1972CMaPh..25..152H. doi:10.1007/BF01877517. 
ISSN 0010-3616. S2CID 121527613. Wikidata Q56453197. • Stephen Hawking (March 1974). "Black hole explosions?". Nature. 248 (5443): 30–31. Bibcode:1974Natur.248...30H. doi:10.1038/248030A0. ISSN 1476-4687. S2CID 4290107. Wikidata Q54017915. • Stephen Hawking (September 1982). "The development of irregularities in a single bubble inflationary universe". Physics Letters B. 115 (4): 295–297. Bibcode:1982PhLB..115..295H. doi:10.1016/0370-2693(82)90373-2. ISSN 0370-2693. Wikidata Q29398982. • J. B. Hartle; S. W. Hawking (December 1983). "Wave function of the Universe". Physical Review D. 28 (12): 2960–2975. Bibcode:1983PhRvD..28.2960H. doi:10.1103/PHYSREVD.28.2960. ISSN 1550-7998. Wikidata Q21707690. • Stephen Hawking; C J Hunter (1 October 1996). "The gravitational Hamiltonian in the presence of non-orthogonal boundaries". Classical and Quantum Gravity. 13 (10): 2735–2752. arXiv:gr-qc/9603050. Bibcode:1996CQGra..13.2735H. CiteSeerX 10.1.1.339.8756. doi:10.1088/0264-9381/13/10/012. ISSN 0264-9381. S2CID 10715740. Wikidata Q56551504. • S. W. Hawking (October 2005). "Information loss in black holes". Physical Review D. 72 (8). arXiv:hep-th/0507171. Bibcode:2005PhRvD..72h4013H. doi:10.1103/PHYSREVD.72.084013. ISSN 1550-7998. S2CID 118893360. Wikidata Q21651473. • Stephen Hawking; Thomas Hertog (April 2018). "A smooth exit from eternal inflation?". Journal of High Energy Physics. 2018 (4). arXiv:1707.07702. Bibcode:2018JHEP...04..147H. doi:10.1007/JHEP04(2018)147. ISSN 1126-6708. S2CID 13745992. Wikidata Q55878494. Notes 1. 
By considering the effect of a black hole's event horizon on virtual particle production, Hawking found in 1974, much to his surprise, that black holes emit black-body radiation associated with a temperature that can be expressed (in the nonspinning case) as: $T={\frac {\hbar c^{3}}{8\pi GMk}},$ where $T$ is black hole temperature, $\hbar $ is the reduced Planck constant, $c$ is the speed of light, $G$ is the Newtonian constant of gravitation, $M$ is the mass of the black hole, and $k$ is the Boltzmann constant. This relationship between concepts from the disparate fields of general relativity, quantum mechanics and thermodynamics implies the existence of deep connections between them and may presage their unification. It is inscribed on Hawking's memorial stone.[331] The equation's most fundamental implication can be obtained as follows. According to thermodynamics, this temperature is associated with an entropy, $S$, such that $T=Mc^{2}/2S,$ where $Mc^{2}$ is the energy of a (nonspinning) black hole as expressed with Einstein's formula.[332] Combining equations then gives: $S={\frac {4\pi GM^{2}k}{\hbar c}}.$ Now, the radius of a nonspinning black hole is given by $r={\frac {2GM}{c^{2}}},$ and since its surface area is just $A=4\pi r^{2},$ $S$ can be expressed in terms of surface area as:[329][333] $S_{\text{BH}}={\frac {kc^{3}}{4\hbar G}}A,$ where the subscript BH stands for either "black hole" or "Bekenstein–Hawking". This can be expressed more simply as a proportionality between two dimensionless ratios: ${\frac {S_{\text{BH}}}{k}}={\frac {1}{4}}{\frac {A}{l_{\text{P}}^{2}}},$ where $l_{\text{P}}={\sqrt {\hbar G/c^{3}}}$ is the Planck length. 
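The formulas above can be illustrated numerically. The following sketch (not from the article; it uses approximate CODATA/IAU constant values chosen for illustration) evaluates the Hawking temperature and the Bekenstein–Hawking entropy for a nonspinning black hole of one solar mass:

```python
import math

# Approximate physical constants (illustrative values, SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newtonian constant of gravitation, m^3 kg^-1 s^-2
k = 1.380649e-23         # Boltzmann constant, J/K
M_sun = 1.98892e30       # approximate solar mass, kg

def hawking_temperature(M):
    """T = hbar c^3 / (8 pi G M k), the nonspinning-case formula."""
    return hbar * c**3 / (8 * math.pi * G * M * k)

def bh_entropy(M):
    """S_BH = k c^3 A / (4 hbar G), with A = 4 pi r^2 and r = 2 G M / c^2."""
    r = 2 * G * M / c**2          # Schwarzschild radius
    A = 4 * math.pi * r**2        # horizon surface area
    return k * c**3 * A / (4 * hbar * G)

T = hawking_temperature(M_sun)   # on the order of 1e-7 K: far colder than the CMB
S = bh_entropy(M_sun)            # on the order of 1e54 J/K: an enormous entropy
print(f"T = {T:.3e} K, S_BH = {S:.3e} J/K")
```

The two functions also satisfy the thermodynamic relation quoted above, T = Mc²/2S, which is how the combined entropy formula is obtained.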
Jacob Bekenstein had conjectured the proportionality; Hawking confirmed it and established the constant of proportionality at $1/4$.[308][103] Calculations based on string theory, first carried out in 1995, have been found to yield the same result.[334] This relationship is conjectured to be valid not just for black holes, but also (since entropy is proportional to information) as an upper bound on the amount of information that can be contained in any volume of space, which has in turn spawned deeper reflections on the possible nature of reality. See also • On the Origin of Time, a book by Thomas Hertog References Citations 1. Shirbon, Estelle (20 March 2018). "Stephen Hawking to Join Newton, Darwin in Final Resting Place". London: Reuters. Archived from the original on 21 March 2018. Retrieved 21 March 2018. 2. Stephen Hawking at the Mathematics Genealogy Project 3. Ferguson 2011, p. 29. 4. Allen, Bruce (1983). Vacuum energy and general relativity (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 5. Bousso, Raphael (1997). Pair creation of black holes in cosmology (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 6. Carr, Bernard John (1976). Primordial black holes (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 7. Bernard Carr; George F. R. Ellis; Gary Gibbons; James Hartle; Thomas Hertog; Roger Penrose; Malcolm Perry; Kip S. Thorne (July 2019). "Stephen William Hawking CH CBE. 8 January 1942—14 March 2018". Biographical Memoirs of Fellows of the Royal Society. 66: 267–308. arXiv:2002.03185. doi:10.1098/RSBM.2019.0001. ISSN 0080-4606. S2CID 131986323. Wikidata Q63347107. 8. Dowker, Helen Fay (1991). Space-time wormholes (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 9. 
Galfard, Christophe Georges Gunnar Sven (2006). Black hole information & branes (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 10. Gibbons, Gary William (1973). Some aspects of gravitational radiation and gravitational collapse (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 11. Hertog, Thomas (2002). The origin of inflation (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 12. Laflamme, Raymond (1988). Time and quantum cosmology (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 5 February 2014. 13. Page, Don Nelson (1976). Accretion into and emission from black holes (PhD thesis). California Institute of Technology. Archived from the original on 21 February 2014. Retrieved 6 February 2014. 14. Perry, Malcolm John (1978). Black holes and quantum mechanics (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 6 February 2014. 15. Taylor-Robinson, Marika Maxine (1998). Problems in M theory. lib.cam.ac.uk (PhD thesis). University of Cambridge. OCLC 894603647. EThOS uk.bl.ethos.625075. Archived from the original on 1 May 2018. Retrieved 1 May 2018. 16. Wu, Zhongchao (1984). Cosmological models and the inflationary universe (PhD thesis). University of Cambridge. Archived from the original on 25 January 2016. Retrieved 7 February 2014. 17. "Centre for Theoretical Cosmology: Outreach Stephen Hawking". University of Cambridge. Archived from the original on 30 August 2015. Retrieved 23 June 2013. 18. "About Stephen". Stephen Hawking Official Website. Archived from the original on 30 August 2015. Retrieved 23 June 2013. 19. "Michael Green to become Lucasian Professor of Mathematics". The Daily Telegraph. Retrieved 11 December 2012. 20. 
"Mind over matter: How Stephen Hawking defied Motor Neurone Disease for 50 years". The Independent. 26 November 2015. Archived from the original on 23 August 2017. Retrieved 15 September 2017. 21. "How Has Stephen Hawking Lived to 70 with ALS?". Scientific American. 7 January 2012. Archived from the original on 30 August 2015. Retrieved 23 December 2014. Q: How frequent are these cases of very slow-progressing forms of ALS? A: I would say probably less than a few percent. 22. Stephen Hawking: An inspirational story of willpower and strength. Swagatham Canada https://www.swagathamcanada.com/inspirational/stephen-hawking-an-inspirational-story-of-willpower-and-strength/ Archived 6 November 2021 at the Wayback Machine 26 October 2021 23. Gardner, Martin (September/October 2001). "Multiverses and Blackberries" Archived 28 July 2016 at the Wayback Machine. "Notes of a Fringe-Watcher". Skeptical Inquirer. Volume 25, No. 5. 24. Price, Michael Clive (February 1995). "THE EVERETT FAQ" Archived 20 April 2016 at the Wayback Machine. Department of Physics, Washington University in St. Louis. Retrieved 17 December 2014. 25. "UPI Almanac for Monday, 8 Jan 2018". United Press International. 8 January 2018. Archived from the original on 8 January 2018. Retrieved 21 September 2019. …British physicist and author Stephen Hawking 1942 (age 76) 26. Anon (2015). "Hawking, Prof. Stephen William". Who's Who (online Oxford University Press ed.). A & C Black. doi:10.1093/ww/9780199540884.013.19510. (Subscription or UK public library membership required.) 27. Larsen 2005, pp. xiii, 2. 28. Ferguson 2011, p. 21. 29. "Mind over matter Stephen Hawking". The Herald. Glasgow. Archived from the original on 30 May 2016. Retrieved 14 March 2018. 30. Ferguson, Kitty (6 January 2012). "Stephen Hawking, "Equal to Anything!" [Excerpt]". Scientific American. Archived from the original on 22 March 2018. Retrieved 21 March 2018. 31. White & Gribbin 2002, p. 6. 32. Larsen 2005, pp. 2, 5. 33. 
Ferguson 2011, p. 22. 34. Larsen 2005, p. xiii. 35. White & Gribbin 2002, p. 12. 36. Ferguson 2011, pp. 22–23. 37. White & Gribbin 2002, pp. 11–12. 38. White & Gribbin 2002, p. 13. 39. Larsen 2005, p. 3. 40. Hawking, Stephen (7 December 2013). "Stephen Hawking: "I'm happy if I have added something to our understanding of the universe"". Radio Times. Archived from the original on 7 January 2017. Retrieved 6 January 2017. 41. Ferguson 2011, p. 24. 42. White & Gribbin 2002, p. 8. 43. My brief history – Stephen Hawking (2013). 44. White & Gribbin 2002, pp. 7–8. 45. Larsen 2005, p. 4. 46. Ferguson 2011, pp. 25–26. 47. White & Gribbin 2002, pp. 14–16. 48. Ferguson 2011, p. 26. 49. White & Gribbin 2002, pp. 19–20. 50. Ferguson 2011, p. 25. 51. White & Gribbin 2002, pp. 17–18. 52. Ferguson 2011, p. 27. 53. Hoare, Geoffrey; Love, Eric (5 January 2007). "Dick Tahta". The Guardian. London. Archived from the original on 8 January 2014. Retrieved 5 March 2012. 54. White & Gribbin 2002, p. 41. 55. Ferguson 2011, pp. 27–28. 56. White & Gribbin 2002, pp. 42–43. 57. Ferguson 2011, p. 28. 58. Ferguson 2011, pp. 28–29. 59. White & Gribbin 2002, pp. 46–47, 51. 60. Ferguson 2011, pp. 30–31. 61. Hawking 1992, p. 44. 62. White & Gribbin 2002, p. 50. 63. White & Gribbin 2002, p. 53. 64. Ferguson 2011, p. 31. 65. White & Gribbin 2002, p. 54. 66. White & Gribbin 2002, pp. 54–55. 67. White & Gribbin 2002, p. 56. 68. Ferguson 2011, pp. 31–32. 69. Ferguson 2011, p. 33. 70. White & Gribbin 2002, p. 58. 71. Ferguson 2011, pp. 33–34. 72. White & Gribbin 2002, pp. 61–63. 73. Ferguson 2011, p. 36. 74. White & Gribbin 2002, pp. 69–70. 75. Ferguson 2011, p. 42. 76. White & Gribbin 2002, pp. 68–69. 77. Ferguson 2011, p. 34. 78. "Stephen Hawking's PhD thesis, explained simply". 30 October 2017. Archived from the original on 13 December 2017. Retrieved 27 November 2017. 79. White & Gribbin 2002, pp. 71–72. 80. 
Stephen Hawking (1966), Properties of expanding universes, doi:10.17863/CAM.11283, OCLC 62793673, Wikidata Q42307084 81. Ferguson 2011, pp. 43–44. 82. Ferguson 2011, p. 47. 83. Larsen 2005, p. xix. 84. White & Gribbin 2002, p. 101. 85. Ferguson 2011, pp. 61, 64. 86. Ferguson 2011, pp. 64–65. 87. White & Gribbin 2002, pp. 115–16. 88. S. W. Hawking; R. Penrose (27 January 1970). "The Singularities of Gravitational Collapse and Cosmology". Proceedings of the Royal Society A. 314 (1519): 529–548. Bibcode:1970RSPSA.314..529H. doi:10.1098/RSPA.1970.0021. ISSN 1364-5021. S2CID 120208756. Wikidata Q55872061. 89. Ferguson 2011, p. 49. 90. Ferguson 2011, pp. 65–67. 91. Larsen 2005, p. 38. 92. Ferguson 2011, pp. 67–68. 93. White & Gribbin 2002, pp. 123–24. 94. Larsen 2005, p. 33. 95. R.D. Blandford (30 March 1989). "Astrophysical Black Holes". In Hawking, S.W.; Israel, W. (eds.). Three Hundred Years of Gravitation. Cambridge University Press. p. 278. ISBN 978-0-521-37976-2. 96. Larsen 2005, p. 35. 97. Ferguson 2011, p. 68. 98. Larsen 2005, p. 39. 99. White & Gribbin 2002, p. 146. 100. Ferguson 2011, p. 70. 101. Larsen 2005, p. 41. 102. Stephen Hawking (March 1974). "Black hole explosions?". Nature. 248 (5443): 30–31. Bibcode:1974Natur.248...30H. doi:10.1038/248030A0. ISSN 1476-4687. S2CID 4290107. Wikidata Q54017915. 103. Stephen Hawking (August 1975). "Particle creation by black holes". Communications in Mathematical Physics. 43 (3): 199–220. Bibcode:1975CMaPh..43..199H. doi:10.1007/BF02345020. ISSN 0010-3616. S2CID 55539246. Wikidata Q55869076. 104. Ferguson 2011, pp. 69–73. 105. Ferguson 2011, pp. 70–74. 106. Larsen 2005, pp. 42–43. 107. White & Gribbin 2002, pp. 150–51. 108. Larsen 2005, p. 44. 109. White & Gribbin 2002, p. 133. 110. Ferguson 2011, pp. 82, 86. 111. Ferguson 2011, pp. 86–88. 112. Ferguson 2011, pp. 150, 189, 219. 113. Ferguson 2011, p. 95. 114. Ferguson 2011, p. 90. 115. White & Gribbin 2002, pp. 132–33. 116. Ferguson 2011, p. 92. 117. 
White & Gribbin 2002, p. 162. 118. Larsen 2005, p. xv. 119. Ferguson 2011, p. 91. 120. Larsen 2005, p. xiv. 121. "Stephen Hawking to retire as Cambridge's Professor of Mathematics". The Daily Telegraph. 23 October 2008. Archived from the original on 16 March 2018. Retrieved 15 March 2018. 122. Ferguson 2011, pp. 93–94. 123. Ferguson 2011, pp. 92–93. 124. Ferguson 2011, p. 96. 125. Ferguson 2011, pp. 96–101. 126. Susskind, Leonard (7 July 2008). The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. Hachette Digital, Inc. pp. 9, 18. ISBN 978-0-316-01640-7. Archived from the original on 18 January 2017. Retrieved 23 February 2016. 127. Ferguson 2011, pp. 108–11. 128. Ferguson 2011, pp. 111–14. 129. See Guth (1997) for a popular description of the workshop, or The Very Early Universe, ISBN 0-521-31677-4 eds Gibbons, Hawking & Siklos for a detailed report. 130. Stephen Hawking (September 1982). "The development of irregularities in a single bubble inflationary universe". Physics Letters B. 115 (4): 295–297. Bibcode:1982PhLB..115..295H. doi:10.1016/0370-2693(82)90373-2. ISSN 0370-2693. Wikidata Q29398982. 131. Ferguson 2011, pp. 102–103. 132. White & Gribbin 2002, p. 180. 133. J. B. Hartle; S. W. Hawking (December 1983). "Wave function of the Universe". Physical Review D. 28 (12): 2960–2975. Bibcode:1983PhRvD..28.2960H. doi:10.1103/PHYSREVD.28.2960. ISSN 1550-7998. Wikidata Q21707690. 134. Baird 2007, p. 234. 135. White & Gribbin 2002, pp. 180–83. 136. Ferguson 2011, p. 129. 137. Ferguson 2011, p. 130. 138. Sample, Ian (15 May 2011). "Stephen Hawking: 'There is no heaven; it's a fairy story'". The Guardian. Archived from the original on 20 September 2013. Retrieved 17 May 2011. 139. Yulsman 2003, pp. 174–176. 140. Ferguson 2011, pp. 180–182. 141. Ferguson 2011, p. 182. 142. White & Gribbin 2002, p. 274. 143. Larsen 2005, pp. x–xix. 144. Ferguson 2011, p. 114. 145. "No. 48837". The London Gazette (Supplement). 30 December 1981. 
p. 8. 146. Ferguson 2011, pp. 134–35. 147. White & Gribbin 2002, pp. 205, 220–21. 148. Ferguson 2011, p. 134. 149. White & Gribbin 2002, pp. 220–27. 150. Ferguson 2011, p. 135. 151. Ferguson 2011, p. 175. 152. Ferguson 2011, pp. 140–42. 153. Ferguson 2011, p. 143. 154. White & Gribbin 2002, pp. 243–45. 155. Radford, Tim (31 July 2009). "How God propelled Stephen Hawking into the bestsellers lists". The Guardian. Archived from the original on 16 December 2013. Retrieved 5 March 2012. 156. Ferguson 2011, pp. 143–44. 157. Ferguson 2011, p. 146. 158. Ferguson 2011, pp. 145–46. 159. Ferguson 2011, p. 149. 160. Ferguson 2011, pp. 147–48. 161. White & Gribbin 2002, pp. 230–31. 162. Larsen 2005, p. xvi. 163. White & Gribbin 2002, p. 279. 164. "No. 48837". The London Gazette (Supplement). 16 June 1989. p. 18. 165. Peterkin, Tom (15 June 2008). "Stephen Hawking Warns Government over 'Disastrous' Science Funding Cuts". The Telegraph. Archived from the original on 2 April 2018. Retrieved 2 April 2018. 166. Guyoncourt, Sally (14 March 2018). "Why Professor Stephen Hawking Never Had a Knighthood". I News. Archived from the original on 16 June 2018. Retrieved 15 June 2018. 167. Ferguson 2011, p. 180. 168. Ferguson 2011, p. 188. 169. Ferguson 2011, pp. 189–90. 170. Ferguson 2011, p. 190. 171. Hawking, S.W.; Thorne, K.S.; Preskill (6 February 1997). "Black hole information bet". Pasadena, California. Archived from the original on 11 May 2013. Retrieved 20 April 2013. 172. S. W. Hawking (October 2005). "Information loss in black holes". Physical Review D. 72 (8). arXiv:hep-th/0507171. Bibcode:2005PhRvD..72h4013H. doi:10.1103/PHYSREVD.72.084013. ISSN 1550-7998. S2CID 118893360. Wikidata Q21651473. 173. Preskill, John. "John Preskill's comments about Stephen Hawking's concession". Archived from the original on 26 February 2012. Retrieved 29 February 2012. 174. Ferguson 2011, pp. 168–70. 175. Ferguson 2011, p. 178. 176. Ferguson 2011, p. 189. 177. Larsen 2005, p. 97. 178. 
Ferguson 2011, pp. 199–200. 179. Ferguson 2011, pp. 222–23. 180. Highfield, Roger (26 June 2008). "Stephen Hawking's explosive new theory". The Daily Telegraph. Archived from the original on 5 February 2015. Retrieved 9 April 2012. 181. Highfield, Roger (3 January 2012). "Stephen Hawking: driven by a cosmic force of will". The Daily Telegraph. London. Archived from the original on 9 January 2015. Retrieved 7 December 2012. 182. Stephen Hawking; Thomas Hertog (23 June 2006). "Populating the landscape: A top-down approach". Physical Review D. 73 (12). arXiv:hep-th/0602091. Bibcode:2006PhRvD..73l3527H. doi:10.1103/PHYSREVD.73.123527. ISSN 1550-7998. S2CID 9856127. Wikidata Q27442267. 183. Ferguson 2011, p. 233. 184. "Fonseca Prize 2008". University of Santiago de Compostela. Archived from the original on 5 June 2009. Retrieved 7 August 2009. 185. Ferguson 2011, p. 239. 186. Ferguson 2011, p. 269. 187. Ferguson 2011, pp. 197, 269. 188. Ferguson 2011, pp. 216–17. 189. Ferguson 2011, pp. 217–20. 190. Ferguson 2011, pp. 223–24. 191. Kwong, Matt (28 January 2014). "Stephen Hawking's black holes 'blunder' stirs debate". CBC News. Archived from the original on 17 March 2018. Retrieved 14 March 2018. 192. Ferguson 2011, pp. 95, 236. 193. Ferguson 2011, pp. 94–95, 236. 194. Wright, Robert (17 July 2012). "Why Some Physicists Bet Against the Higgs Boson". The Atlantic. Archived from the original on 7 April 2013. Retrieved 1 April 2013. 195. "Stephen Hawking loses Higgs boson particle bet – Video". The Guardian. London. 5 July 2012. Archived from the original on 20 September 2013. Retrieved 1 April 2013. 196. "Higgs boson breakthrough should earn physicist behind search Nobel Prize: Stephen Hawking". National Post. Agence France-Presse. 4 July 2012. Retrieved 1 April 2013. 197. Amos, Jonathan (8 October 2013). "Higgs: Five decades of noble endeavour". BBC News. Archived from the original on 11 June 2016. Retrieved 10 May 2016. 198. Ferguson 2011, pp. 230–231. 199. "Books". 
Stephen Hawking Official Website. Archived from the original on 13 March 2012. Retrieved 28 February 2012. 200. "100 great British heroes". BBC News. 21 August 2002. Archived from the original on 4 November 2010. Retrieved 10 May 2016. 201. "Oldest, space-travelled, science prize awarded to Hawking". The Royal Society. 24 August 2006. Archived from the original on 22 January 2015. Retrieved 29 August 2008. 202. MacAskill, Ewen (13 August 2009). "Obama presents presidential medal of freedom to 16 recipients". The Guardian. London. Archived from the original on 7 September 2013. Retrieved 5 March 2012. 203. "2013 Fundamental Physics Prize Awarded to Alexander Polyakov". Fundamental Physics Prize. Archived from the original on 19 January 2015. Retrieved 11 December 2012. 204. Komar, Oliver; Buechner, Linda (October 2000). "The Stephen W. Hawking Science Museum in San Salvador Central America Honours the Fortitude of a Great Living Scientist". Journal of College Science Teaching. XXX (2). Archived from the original on 30 July 2009. Retrieved 28 September 2008. 205. "The Stephen Hawking Building". BBC News. 18 April 2007. Archived from the original on 23 March 2012. Retrieved 24 February 2012. 206. "Grand Opening of the Stephen Hawking Centre at Perimeter Institute" (Press release). Perimeter Institute. Archived from the original on 29 December 2012. Retrieved 6 June 2012. 207. Ferguson 2011, pp. 237–38. 208. "Time to unveil Corpus Clock". Cambridgenetwork.co.uk. 22 September 2008. Archived from the original on 25 January 2016. Retrieved 10 September 2015. 209. "Hawking gives up academic title". BBC News. 30 September 2009. Archived from the original on 3 October 2009. Retrieved 1 October 2009. 210. Ferguson 2011, pp. 238–39. 211. "Professor Stephen Hawking to stay at Cambridge University beyond 2012". The Daily Telegraph. London. 26 March 2010. Archived from the original on 9 January 2014. Retrieved 9 February 2013. 212. Billings, Lee (2 September 2014). 
"Time Travel Simulation Resolves 'Grandfather Paradox'". Scientific American. Archived from the original on 4 September 2016. Retrieved 2 September 2016. 213. Katz, Gregory (20 July 2015). "Searching for ET: Hawking to look for extraterrestrial life". Associated Press. Archived from the original on 22 July 2015. Retrieved 20 July 2015. 214. "Tomorrow's World returns to BBC with startling warning from Stephen Hawking – we must leave Earth". The Telegraph. 2 May 2017. Archived from the original on 5 May 2017. Retrieved 5 May 2017. 215. "Stephen Hawking will test his theory that humans must leave Earth. Let's hope he's wrong". USA Today. 4 May 2017. Archived from the original on 4 May 2017. Retrieved 5 May 2017. 216. "Stephen Hawking says he has a way to escape from a black hole". New Scientist. Archived from the original on 8 January 2017. Retrieved 31 May 2017. 217. "Stephen Hawking awarded Imperial College London's highest honour". Imperial College London. 17 July 2017. Archived from the original on 14 March 2018. Retrieved 19 July 2017. 218. Stephen Hawking; Thomas Hertog (April 2018). "A smooth exit from eternal inflation?". Journal of High Energy Physics. 2018 (4). arXiv:1707.07702. Bibcode:2018JHEP...04..147H. doi:10.1007/JHEP04(2018)147. ISSN 1126-6708. S2CID 13745992. Wikidata Q55878494. 219. "Hawking's last paper co-authored with ERC grantee posits new cosmology". EurekAlert!. 2 May 2018. Archived from the original on 2 May 2018. Retrieved 3 May 2018. 220. Ferguson 2011, pp. 37–40. 221. Ferguson 2011, p. 40. 222. Ferguson 2011, pp. 45–47. 223. White & Gribbin 2002, pp. 92–98. 224. Ferguson 2011, p. 65. 225. Ferguson 2011, pp. 37–39, 77. 226. Ferguson 2011, p. 78. 227. Ferguson 2011, pp. 82–83. 228. Hawking, Stephen (1994). Black Holes and Baby Universes and Other Essays. Random House. p. 20. ISBN 978-0-553-37411-7. Archived from the original on 18 January 2017. Retrieved 23 February 2016. 229. Ferguson 2011, pp. 83–88. 230. Ferguson 2011, pp. 89–90. 231. 
Sources • Hawking, Stephen (2013). My Brief History. Bantam. ISBN 978-0-345-53528-3. Retrieved 9 September 2013. • Baird, Eric (2007). Relativity in Curved Spacetime: Life Without Special Relativity. Chocolate Tree Books. ISBN 978-0-9557068-0-6. • Boslough, John (1989). Stephen Hawking's Universe: An Introduction to the Most Remarkable Scientist of Our Time. Avon Books. ISBN 978-0-380-70763-8. Retrieved 4 March 2012. • Ferguson, Kitty (2011). Stephen Hawking: His Life and Work. Transworld. ISBN 978-1-4481-1047-6. • Gibbons, Gary W.; Hawking, Stephen W.; Siklos, S.T.C., eds. (1983). The Very Early Universe: Proceedings of the Nuffield Workshop, Cambridge, 21 June to 9 July 1982. Cambridge University Press. ISBN 978-0-521-31677-4. • Hawking, Jane (2007). Travelling to Infinity: My Life With Stephen. Alma. ISBN 978-1-84688-115-2. • Hawking, Stephen W. (1994). Black Holes and Baby Universes and Other Essays. Bantam Books. ISBN 978-0-553-37411-7. • Hawking, Stephen W.; Ellis, George F.R. (1973). The Large Scale Structure of Space-Time. 
Cambridge University Press. ISBN 978-0-521-09906-6. • Hawking, Stephen W. (1992). Stephen Hawking's A Brief History of Time: A Reader's Companion. Bantam Books. Bibcode:1992bhtr.book.....H. ISBN 978-0-553-07772-8. • Hawking, Stephen W.; Israel, Werner (1989). Three Hundred Years of Gravitation. Cambridge University Press. ISBN 978-0-521-37976-2. • Larsen, Kristine (2005). Stephen Hawking: A Biography. Greenwood Publishing. ISBN 978-0-313-32392-8. • Mialet, Hélène (2003). "Is the end in sight for the Lucasian chair? Stephen Hawking as Millennium Professor". In Knox, Kevin C.; Noakes, Richard (eds.). From Newton to Hawking: A History of Cambridge University's Lucasian Professors of Mathematics. Cambridge University Press. pp. 425–460. ISBN 978-0-521-66310-6. • Mialet, Hélène (2012). Hawking Incorporated: Stephen Hawking and the Anthropology of the Knowing Subject. University of Chicago Press. ISBN 978-0-226-52226-5. • Okuda, Michael; Okuda, Denise (1999). The Star Trek Encyclopedia: A Reference Guide to the Future. Pocket Books. ISBN 978-0-671-53609-1. • Pickover, Clifford A. (2008). Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press. ISBN 978-0-19-533611-5. Retrieved 4 March 2012. • White, Michael; Gribbin, John (2002). Stephen Hawking: A Life in Science (2nd ed.). National Academies Press. ISBN 978-0-309-08410-9. • Yulsman, Tom (2003). Origins: The Quest for Our Cosmic Roots. CRC Press. ISBN 978-0-7503-0765-9. External links • Official website • Professor Stephen Hawking Collection on In Our Time at the BBC • "Archival material relating to Stephen Hawking". UK National Archives. 
• Stephen Hawking publications indexed by Google Scholar • Stephen Hawking collected news and commentary at The New York Times • Stephen Hawking's publications indexed by the Scopus bibliographic database (subscription required) • O'Connor, John J.; Robertson, Edmund F., "Stephen Hawking", MacTutor History of Mathematics Archive, University of St Andrews • Stephen Hawking collected news and commentary at The Guardian • Stephen Hawking discography at Discogs • Stephen Hawking at IMDb
Yoshua Bengio • Nick Bostrom • Paul Christiano • Eric Drexler • Sam Harris • Stephen Hawking • Dan Hendrycks • Geoffrey Hinton • Bill Joy • Shane Legg • Elon Musk • Steve Omohundro • Huw Price • Martin Rees • Stuart J. Russell • Jaan Tallinn • Max Tegmark • Frank Wilczek • Roman Yampolskiy • Eliezer Yudkowsky Other • Statement on AI risk of extinction • Human Compatible • Open letter on artificial intelligence (2015) • Our Final Invention • The Precipice • Superintelligence: Paths, Dangers, Strategies • Do You Trust This Computer? • Artificial Intelligence Act Category Authority control International • FAST • ISNI • VIAF National • Norway • Chile • Spain • France • BnF data • Catalonia • Germany • Italy • Israel • Finland • Belgium • United States • Sweden • Latvia • Japan • Czech Republic • Australia • Greece • Korea • Romania • Croatia • Netherlands • Poland • Portugal • Vatican Academics • CiNii • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • Scopus • zbMATH Artists • MusicBrainz People • Deutsche Synchronkartei • Deutsche Biographie • Trove • 2 Other • SNAC • IdRef
Wikipedia
Stephen Milne (mathematician) Stephen Carl Milne is an American mathematician who works in the fields of analysis, analytic number theory, and combinatorics. Milne received a bachelor's degree from San Diego State University in 1972 and a Ph.D. from the University of California, San Diego (UCSD) in 1976. His thesis, Peano curves and smoothness of functions, was written under Adriano M. Garsia.[1] From 1976 to 1978 he was a Gibbs Instructor at Yale University. Milne taught at Texas A&M University, UCSD, the University of Kentucky, and Ohio State University, where he became an associate professor in 1982 and a full professor in 1985. Milne works on algebraic combinatorics, classical analysis, special functions, analytic number theory, and Lie algebras (generalizations of the Macdonald identities). From 1981 to 1983 he was a Sloan Fellow. In 2007 he was the joint recipient, with Heiko Harborth, of the Euler Medal. In 2012 Milne was elected a Fellow of the American Mathematical Society.[2] Selected publications • Milne, Stephen C. (1978). "A q-analog of restricted growth functions, Dobinski's equality, and Charlier polynomials". Trans. Amer. Math. Soc. 245: 89–118. doi:10.1090/s0002-9947-1978-0511401-8. MR 0511401. • with Glenn Lilly: Milne, Stephen C.; Lilly, Glenn M. (1992). "The Aℓ and Cℓ Bailey transform and lemma". Bull. Amer. Math. Soc. (N.S.). 26: 258–263. arXiv:math/9204236. doi:10.1090/s0273-0979-1992-00268-9. MR 1118702. S2CID 119144316. • Milne, S. C. (1996). "New infinite families of exact sums of squares formulas, Jacobi elliptic functions, and Ramanujan's tau function". Proc Natl Acad Sci U S A. 93 (26): 15004–15008. arXiv:math/0008068. Bibcode:1996PNAS...9315004M. doi:10.1073/pnas.93.26.15004. PMC 26345. PMID 11038532. • Infinite families of exact sums of squares formulas, Jacobi elliptic functions, continued fractions, and Schur functions. Kluwer Academic Publishers. 2002. ISBN 9781402004919. References 1.
Stephen Carl Milne at the Mathematics Genealogy Project 2. List of Fellows of the American Mathematical Society External links • Homepage at Ohio State University
Stephen Hilbert Stephen R. Hilbert is an American mathematician best known as co-author of the Bramble–Hilbert lemma, which he published with James H. Bramble in 1970.[1] Hilbert's area of specialty is numerical analysis.[2][3] He has been a professor of mathematics at Ithaca College since 1968.[4][5] Additionally, he taught mathematics at Cornell University as a visiting program professor during the 2003–2004 academic year.[6] Born: Brooklyn, New York, U.S. • Nationality: American • Alma mater: University of Maryland (PhD); University of Notre Dame (BS) • Known for: Bramble–Hilbert lemma • Fields: Mathematics • Institutions: Ithaca College; Cornell University Early life and education Hilbert was born in Brooklyn, New York.[3] As a teenager, he attended Regis High School in New York City.[3] He received his BS in mathematics from the University of Notre Dame in 1964 and his PhD in applied mathematics from the University of Maryland in 1969.[3][7] He completed his dissertation, Numerical Solutions of Elliptic Partial Differential Equations, with Bramble as his advisor.[2][3] Awards and honors • Distinguished College Teaching of Mathematics Award – 1994 – Seaway Section of the Mathematical Association of America[3][8] • Dana Teaching Fellow – 1985[3] • Dana Teaching Fellow – 1982[3] • Outstanding Faculty Award – 1979 – School of Humanities and Sciences, Ithaca College[3] Publications • Barron's GMAT. Jaffe, Eugene D., and Stephen Hilbert, 2009, Barron's Educational Series, ISBN 978-0-7641-3993-2, 497 pgs • Calculus: An Active Approach with Projects. Hilbert, Stephen, et al., 1993–1994, John Wiley & Sons; Reissued 2010 by Mathematical Association of America, ISBN 978-0-88385-763-2, 307 pgs • Estimation of Linear Functionals on Sobolev Spaces with Application to Fourier Transforms and Spline Interpolation. Bramble, James H., and Stephen R. Hilbert. SIAM Journal on Numerical Analysis (Society for Industrial and Applied Mathematics) (Vol. 7, No.
1 (Mar., 1970)): 112–124. • A Mollifier Useful for Approximations in Sobolev Spaces and Some Applications to Approximating Solutions of Differential Equations. Hilbert, Stephen. Mathematics of Computation (American Mathematical Society) (Vol. 27, No. 121 (Jan., 1973)): 81–89. References 1. James H. Bramble and Stephen R. Hilbert (1970). "Estimation of Linear Functionals on Sobolev Spaces with Application to Fourier Transforms and Spline Interpolation". SIAM Journal on Numerical Analysis. Society for Industrial and Applied Mathematics. 7 (Vol. 7, No. 1 (Mar., 1970)): 112–124. Bibcode:1970SJNA....7..112B. doi:10.1137/0707006. JSTOR 2949585. 2. "The Mathematics Genealogy Project – Stephen Hilbert". genealogy.math.ndsu.nodak.edu. Retrieved May 4, 2011. 3. "Seaway Section of the MAA: Distinguished Teaching Awards". math.binghamton.edu. Archived from the original on July 20, 2011. Retrieved May 4, 2011. 4. An Introduction to Sobolev Spaces and Interpolation Spaces, Lecture Notes of the Unione Matematica Italiana, 2007, Volume 3, 53–57, doi:10.1007/978-3-540-71483-5_11 5. "Stephen Hilbert – Ithaca College". faculty.ithaca.edu. Retrieved May 4, 2011. 6. "Cornell Math – Welcome New Faculty and Graduate Students". math.cornell.edu. Retrieved May 16, 2011. 7. "School of Humanities and Sciences – Undergraduate Catalog 2008 – 2009 – Ithaca College". ithaca.edu/catalogs. Retrieved May 4, 2011. 8. "History of Ithaca College Department of Mathematics". ithaca.edu/hs/depts/math. Retrieved July 17, 2021. External links • Stephen Hilbert at the Mathematics Genealogy Project
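For context, the Bramble–Hilbert lemma named above admits the following standard statement (a paraphrase from the general finite-element literature, not taken from this article; conventions for the degree m vary between sources):

```latex
% Bramble--Hilbert lemma (one standard form): a bounded linear functional
% that annihilates polynomials of degree at most m is controlled by the
% top-order Sobolev seminorm alone.
Let $\Omega \subset \mathbb{R}^n$ be a bounded Lipschitz domain, and let
$\ell$ be a bounded linear functional on $H^{m+1}(\Omega)$ with
\[
  \ell(q) = 0 \quad \text{for every polynomial } q \text{ of degree at most } m .
\]
Then there is a constant $C = C(\Omega, m)$ such that
\[
  |\ell(u)| \le C \, |u|_{H^{m+1}(\Omega)}
  \qquad \text{for all } u \in H^{m+1}(\Omega),
\]
where $|u|_{H^{m+1}(\Omega)}$ is the seminorm involving only the
derivatives of order exactly $m+1$.
```

In finite-element analysis this is the tool that converts "the interpolation error vanishes on polynomials" into error bounds in terms of the mesh size.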
Stephen R. Doty Stephen Richard Doty (born April 16, 1953) is an American mathematician specializing in algebraic representation theory (especially modular representation theory). He earned a doctorate in mathematics from the University of Notre Dame in 1982 under the supervision of Warren J. Wong with the dissertation The Submodule Structure of Weyl Modules for Groups of Type A_n.[1] After postdoctoral positions at the University of Washington and the University of Notre Dame, he joined the faculty at Loyola University Chicago in 1987. Born: 16 April 1953, Salt Lake City, Utah, US • Nationality: American • Alma mater: University of Notre Dame • Fields: Mathematics • Doctoral advisor: Warren Wong In 2007 Doty was named the inaugural Yip Fellow of Magdalene College, Cambridge University. In 2009 he was a Mercator Professor in Germany. Selected publications • Doty, Stephen (1989), "The strong linkage principle", American Journal of Mathematics, 111 (1): 135–141, doi:10.2307/2374483, JSTOR 2374483, MR 0980303 • Doty, Stephen (1999), "Representation theory of reductive normal algebraic monoids", Transactions of the American Mathematical Society, 351 (6): 2539–2551, doi:10.1090/S0002-9947-99-02462-9, MR 1653351 • Doty, Stephen; Giaquinto, Anthony (2002), "Presenting Schur algebras", International Mathematics Research Notices, 2002 (36): 1907–1944, doi:10.1155/S1073792802201026, MR 1920169, S2CID 2017942 • Doty, Stephen (2003), "Presenting generalized q-Schur algebras", Representation Theory, 7 (9): 196–213, doi:10.1090/S1088-4165-03-00176-6, MR 1990659, S2CID 16028303 References 1. Stephen R. Doty at the Mathematics Genealogy Project External links • Home page • Mathematical Reviews author profile
Stephen Rallis Stephen James Rallis (May 17, 1942 – April 17, 2012) was an American mathematician who worked on group representations, automorphic forms, the Siegel–Weil formula, and Langlands L-functions. Born: 17 May 1942, Bennington, Vermont • Died: 17 April 2012 (aged 69) • Nationality: American • Alma mater: Massachusetts Institute of Technology; Harvard University • Known for: Rallis inner product formula; automorphic descent • Fields: Mathematics • Institutions: Ohio State University • Doctoral advisor: Bertram Kostant[1] • Doctoral students: Dihua Jiang Career Rallis received a B.A. in 1964 from Harvard University, a Ph.D. in 1968 from the Massachusetts Institute of Technology, and spent 1968–1970 at the Institute for Advanced Study in Princeton. After two years at Stony Brook, two years at the Université de Strasbourg, and several visiting positions, he joined the faculty at Ohio State University in 1977 and stayed there for the rest of his career. Work Beginning in the 1970s, Rallis and Gérard Schiffmann wrote a series of papers on the Weil representation. This led to Rallis's work with Kudla in which they developed a far-reaching generalization of the Siegel–Weil formula: the regularized Siegel–Weil formula and the first term identity.[2] These results have prompted other mathematicians to extend Siegel–Weil to other cases.[3] Rallis's 1984 paper giving proofs of certain examples of the Howe duality conjecture was the start of his work on what is now known as "The Rallis Inner Product Formula", which relates the inner product of a pair of theta functions to a special value or residue of a Langlands L-function.[4] This cornerstone of what Wee Teck Gan et al.[5] term the Rallis program on the theta correspondence has found wide applications.
Rallis then adapted the classical idea of doubling a quadratic space to create the "Piatetski–Shapiro and Rallis Doubling Method" for constructing integral representations of L-functions, and thus he and Piatetski-Shapiro obtained the first general result on L-functions for all classical groups.[6] The 1990 Wolf Prize to Piatetski–Shapiro[7] cites this work with Rallis as one of Piatetski–Shapiro's main achievements. Whereas it had previously been assumed that all the L-functions constructed by the Rankin–Selberg integral method were a subset of those constructed by the Langlands–Shahidi method, the 1992 paper by Rallis with Piatetski-Shapiro and Schiffmann on the Rankin–Selberg integrals for the group G_2 showed this was not the case and opened the way for determining many new examples of L-functions represented by Rankin–Selberg integrals.[8] The L-functions studied by Rallis are important because of their connections with the Langlands functoriality conjecture. Rallis, with David Soudry and David Ginzburg, wrote a series of papers culminating in their book "The descent map from automorphic representations of GL(n) to classical groups". Their automorphic descent method constructs an explicit inverse map to the (standard) Langlands functorial lift and has had major applications to the analysis of functoriality.[9] Also, using the "Rallis tower property"[10] from his 1984 paper on the Howe duality conjecture, Rallis, Ginzburg, and Soudry studied global exceptional correspondences and found new examples of functorial lifts. Rallis gave an invited address on his work "Poles of Standard L-functions" at the 1990 International Congress of Mathematicians in Kyoto.[11] In 2003, the conference "Automorphic Representations, L-Functions and Applications: Progress and Prospects" was held in honor of Rallis's 60th birthday,[12] and according to the conference proceedings it "reflects the depth and breadth of Rallis's influence".
In January 2015, the Journal of Number Theory published a special issue in honor of Steve Rallis's contributions to mathematics.[13] Rallis's biography is included in the MacTutor History of Mathematics archive.[14] In a series of papers between 2004 and 2009, David Ginzburg, Dihua Jiang, and Stephen Rallis proved one direction of the global Gan–Gross–Prasad conjecture.[15][16][17] Rallis's ideas had a significant and lasting impact on the theory of automorphic forms.[18] His mathematical life was characterized by long-term collaborations with mathematicians including Stephen Kudla, Hervé Jacquet, and Ilya Piatetski-Shapiro. Selected publications Articles • Langlands' functoriality and the Weil representation. Amer. J. Math. 104 (1982), no. 3, 469–515. MR0658543 • On the Howe duality conjecture. Compositio Math. 51 (1984), no. 3, 333–399. MR0743016 • with Stephen Kudla: On the Weil–Siegel formula. J. Reine Angew. Math. 387 (1988), no. 1, 1–68. MR0946349 • with Ilya Piatetski-Shapiro: A new way to get Euler products. J. Reine Angew. Math. 392 (1988), 110–124. MR0965059 • with Ilya Piatetski-Shapiro and Gerard Schiffmann: Rankin–Selberg integrals for the group G_2. Amer. J. Math. 114 (1992), no. 6, 1269–1315. MR1198304 • with Stephen Kudla: A regularized Siegel–Weil formula: the first term identity. Ann. of Math. (2) 140 (1994), no. 1, 1–80. MR1289491 • with Hervé Jacquet: Uniqueness of linear periods. Compositio Math. 102 (1996), no. 1, 65–123. MR1394521 • with David Ginzburg and David Soudry: A tower of theta correspondences for G_2. Duke Math. J. 88 (1997), no. 3, 537–624. MR1455531 • with David Ginzburg and David Soudry: On explicit lifts of cusp forms from GL(m) to classical groups. Annals of Mathematics (2) 150 (1999), no. 3, 807–866. MR1740991 • with Erez Lapid: On the nonnegativity of L(1/2,pi) for SO_2(n + 1). Ann. of Math. (2) 157 (2003), no. 3, 891–917.
MR1983784 • with Avraham Aizenbud, Dmitry Gourevitch and Gerard Schiffmann: Multiplicity one theorems. Annals of Mathematics (2) 172 (2010), no. 2, 1407–1434. MR2680495 Books • L-functions and the oscillator representation. Springer. 1987. ISBN 0691081565. • with Stephen Gelbart and Ilya Piatetski-Shapiro: Explicit constructions of automorphic L-functions. Springer. 1987. • with David Ginzburg and David Soudry: The descent map from automorphic representations of GL(n) to classical groups. World Scientific Publication Co. 2011. Sources and further reading • James Cogdell, Dihua Jiang, Coordinating Editors (March 2013). "Remembering Steve Rallis" (PDF). Notices of the AMS. 60 (4): 466–469. • O'Connor, John J.; Robertson, Edmund F., "Stephen Rallis", MacTutor History of Mathematics Archive, University of St Andrews References 1. Stephen Rallis at the Mathematics Genealogy Project 2. W.T. Gan, Y. Qiu, and S. Takeda (2014) "The Regularized Siegel–Weil Formula (The Second Term Identity) and the Rallis Inner Product Formula," Inventiones Math. 198, 739–831 3. J. Cogdell, H. Jacquet, D. Jiang, S. Kudla, (2015), eds. "Steve Rallis (1942–2012)," Journal of Number Theory, 146, 1–3 4. J. Cogdell and D. Jiang, coordinating eds., "Remembering Steve Rallis," Notices of the AMS 60 (2013), issue 4, 466–469 5. W.T. Gan, Y. Qiu, and S. Takeda (2014) "The Regularized Siegel–Weil Formula (The Second Term Identity) and the Rallis Inner Product Formula," Inventiones Math. 198, 739–831 6. J. Cogdell, H. Jacquet, D. Jiang, S. Kudla, (2015), eds. "Steve Rallis (1942–2012)," Journal of Number Theory, 146, 1–3 7. O'Connor, John J.; Robertson, Edmund F., "Ilya Piatetski–Shapiro", MacTutor History of Mathematics archive, University of St Andrews 8. D.
Bump (2005) "The Rankin–Selberg method: an introduction and survey" in Automorphic Representations, L-Functions and Applications: Progress and Prospects: Proceedings of a Conference honoring Steve Rallis on the occasion of his 60th birthday, de Gruyter, Berlin (Ohio State University Research Institute Publications 11), ISSN 0942-0363, ISBN 3-11-017939-3 9. J. Cogdell, H. Jacquet, D. Jiang, S. Kudla, (2015), eds. "Steve Rallis (1942–2012)," Journal of Number Theory, 146, 1–3 10. W.T. Gan, Y. Qiu, and S. Takeda (2014)"The Regularized Siegel–Weil Formula (The Second Term Identity) and the Rallis Inner Product Formula," Inventiones Math. 198, 739–831 11. S. Rallis "Poles of standard L functions," Proceedings of the International Congress of Mathematicians (Kyoto, 1990), Vol. I, II (1991), 833–845, Math. Soc. Japan, Tokyo. 12. J. Cogdell et al., eds. (2005) Automorphic Representations, L-Functions and Applications: Progress and Prospects: Proceedings of a Conference honoring Steve Rallis on the occasion of his 60th birthday, de Gruyter, Berlin (Ohio State University Research Institute Publications 11), ISSN 0942-0363, ISBN 3-11-017939-3 13. J. Cogdell, H. Jacquet, D. Jiang, S. Kudla, (2015), eds. "Steve Rallis (1942–2012)," Journal of Number Theory, 146, 1–3 14. O'Connor, John J.; Robertson, Edmund F. "Stephen James Rallis," MacTutor History of Mathematics archive, University of St. Andrews (http://www-history.mcs.st-andrews.ac.uk/Biographies) 15. Ginzburg, David; Jiang, Dihua; Rallis, Stephen (2004), "On the nonvanishing of the central value of the Rankin–Selberg L-functions.", Journal of the American Mathematical Society, 17 (3): 679–722, doi:10.1090/S0894-0347-04-00455-2 16. Ginzburg, David; Jiang, Dihua; Rallis, Stephen (2005), "On the nonvanishing of the central value of the Rankin–Selberg L-functions, II.", Automorphic Representations, L-functions and Applications: Progress and Prospects, Berlin: Ohio State Univ. Math. Res. Inst. Publ. 11, de Gruyter: 157–191 17. 
Ginzburg, David; Jiang, Dihua; Rallis, Stephen (2009), "Models for certain residual representations of unitary groups. Automorphic forms and L-functions I.", Global Aspects, Providence, RI: Contemp. Math., 488, Amer. Math. Soc.: 125–146 18. J. Cogdell and D. Jiang, coordinating eds., "Remembering Steve Rallis," Notices of the AMS 60 (2013), issue 4, 466–469
Stephen Schanuel Stephen H. Schanuel (14 July 1933 – 21 July 2014) was an American mathematician working in the fields of abstract algebra and category theory, number theory, and measure theory.[1][2] Born: 14 July 1933, St. Louis, Missouri • Died: 21 July 2014 (aged 81) • Nationality: American • Alma mater: Columbia University • Known for: Schanuel's conjecture; Schanuel's lemma • Fields: Mathematics • Institutions: University at Buffalo • Doctoral advisor: Serge Lang • Doctoral students: W. Dale Brownawell Life While he was a graduate student at the University of Chicago, he discovered Schanuel's lemma, an essential lemma in homological algebra.[2] Schanuel received his Ph.D. in mathematics from Columbia University in 1963, under the supervision of Serge Lang.[2] Work Shortly after receiving his Ph.D., he stated a conjecture in the field of transcendental number theory, which remains an important open problem to this day.[2] Schanuel was a professor emeritus of mathematics at the University at Buffalo.[1] Books • Lawvere, F. William; Schanuel, Stephen Hoel (2009) [1997]. Conceptual Mathematics: A First Introduction to Categories (2nd ed.). Cambridge University Press. ISBN 978-0-521-89485-2. References 1. "Recent alumni deaths". Princeton Alumni Weekly. April 22, 2015. Retrieved 23 June 2015. 2. "Steve Schanuel has passed away". University at Buffalo, Mathematics Department. Retrieved 23 June 2015. External links • Stephen Schanuel at the Mathematics Genealogy Project • Weisstein, Eric W. "Schanuel's Conjecture". MathWorld.
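For context, Schanuel's lemma, named in the article above, admits a short standard statement (paraphrased from the general homological-algebra literature, not from this article):

```latex
% Schanuel's lemma: the kernels of two projective presentations of the
% same module agree up to projective direct summands.
Given two short exact sequences of modules over a ring $R$,
\[
  0 \to K \to P \to M \to 0
  \qquad \text{and} \qquad
  0 \to K' \to P' \to M \to 0,
\]
with $P$ and $P'$ projective, there is an isomorphism
\[
  K \oplus P' \cong K' \oplus P .
\]
```

This is the observation that makes notions such as projective dimension independent of the chosen resolution.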
Stephen Siklos Stephen Siklos (1950 – 17 August 2019) was a lecturer in the Faculty of Mathematics at the University of Cambridge. He is known for setting up the Sixth Term Examination Papers, used for undergraduate mathematics admissions at several British universities.[1] Early life Siklos was born in Epsom, Surrey, England in 1950.[2][1] His father, Theo Siklos, was an educator and his mother, Ruth Siklos, an almoner.[1] He was educated at Collyer's School before reading the Mathematical Tripos at Pembroke College, University of Cambridge, where he graduated with a master's degree in mathematics and was awarded the Tyson Medal.[2][1] Academic career In 1973, he began doing research in general relativity under Stephen Hawking, completing his dissertation, titled "Singularities, Invariants and Cosmology", in 1976.[3] From 1980 to 1999 he lectured at Cambridge and was the director of studies at Newnham College. In 1999, he joined Jesus College as a senior tutor, later becoming the college's president.[1] References 1. Clackson, James (23 September 2019). "Stephen Siklos obituary". The Guardian. 2. "Stephen Siklos, 1950-2019 | Features: Faculty Insights". www.maths.cam.ac.uk. 3. "Stephen Siklos - The Mathematics Genealogy Project". www.genealogy.math.ndsu.nodak.edu.
Stephen Twinoburyo Stephen Twinoburyo (8 January 1970 – 1 January 2019)[1] was a Ugandan scientist, mathematician, lecturer, and entrepreneur. He was the CEO of Scimatic Solutions, a South African company which helps students with maths and science tuition. Born: Stephen Twinoburyo Mucunguzi, 8 January 1970, Mbarara, Uganda • Died: 1 January 2019 (aged 48), South Africa • Resting place: Kakoba, Mbarara • Nationality: Ugandan • Education: Makerere University; University of Pretoria • Alma mater: University of South Africa • Children: Three • Fields: Mathematics; Science; Politics • Institutions: Scimatic Solutions Early life and education Twinoburyo was born on 8 January 1970,[2] in Mbarara, Uganda.[3] He was the second of seven children, and his father worked as a town clerk.[2] He attended Ntare School, and was head prefect there in 1989.[1] In 1990, he started studying engineering at Makerere University, and later relocated to South Africa.[1][3] During his time at Makerere, he was chairman of Lumumba Hall.[1] He later studied mathematics as a part-time degree at the University of South Africa, completing the course in 2007.[4] Career In 1994, Twinoburyo visited Soweto, South Africa,[3] and it inspired him to move to the country in 1997.[2][3] He lectured at the University of Pretoria,[2] and taught in colleges in Pretoria and Cape Town.[4] In 2008, Twinoburyo founded the Association of Ugandan Professionals in South Africa (AUPSA),[1] and served as its chairman.[4][5] In 2009, he organised a meeting of Ugandan expatriates in South Africa.
The meeting was held in Sandton, South Africa.[4] AUPSA was set up to connect Ugandan expatriates living in South Africa.[1][4] Twinoburyo also worked for the Ugandan Civil Alliance Network.[6] In 2010, Twinoburyo said that Ugandans were unhappy about the ticket prices for the 2010 FIFA World Cup in South Africa.[7] In 2011, he condemned alleged human rights abuses in Uganda, and asked South African president Jacob Zuma not to attend the inauguration of Ugandan president Yoweri Museveni.[6] In 2014, Twinoburyo set up and became the CEO of Scimatic Solutions, a South African company which helps students with maths and science tuition.[2][8] He was inspired to set up the company after visiting the California Science Center and National Air and Space Museum in Washington, D.C.[8] The company is based in Hatfield, Pretoria.[8] Personal life Twinoburyo and his wife had three children.[1] He died in South Africa on 1 January 2019 of a heart attack.[2][3][5] His body was repatriated to Uganda.[2] References 1. Tumushabe, Alfred (6 January 2019). "Twinoburyo was obsessed with maths, critical of NRM leadership". Daily Monitor. Retrieved 6 January 2019. 2. Tumushabe, Alfred (2 January 2019). "Body of Ugandan mathematician who died in South Africa to be repatriated". Daily Monitor. Retrieved 5 January 2019. 3. Mubiru, Apollo (2 January 2019). "Mathematician Twinoburyo dies in South Africa". New Vision. Retrieved 5 January 2019. 4. Kundai, Keith (6 June 2013). "AUPSA Chairman Stephen Twinoburyo". African Pro. Retrieved 6 January 2019. 5. "Death Announcement: The Late Stephen Twinoburyo: Former President Association of Ugandan Professionals in South Africa (AUPSA) Suffers Heart Attack". Ugandan Diaspora. 2 January 2019. Retrieved 6 January 2019. 6. "Ugandans in SA want Zuma to decline inauguration invitation". Eyewitness News. 6 May 2011. Retrieved 5 January 2019. 7. Naik, Sameer (5 June 2010). "Africa puts on its game face". Saturday Star. 
Retrieved 6 January 2019 – via PressReader. 8. "An office makes a perfect classroom". The Sunday Times. 21 September 2014. Retrieved 5 January 2019 – via PressReader.
Stephen Wolfram Stephen Wolfram (/ˈwʊlfrəm/ WUUL-frəm; born 29 August 1959) is a British-American[6] computer scientist, physicist, and businessman. He is known for his work in computer science, mathematics, and theoretical physics.[7][8] In 2012, he was named a fellow of the American Mathematical Society.[9] He is currently an adjunct professor at the University of Illinois Department of Computer Science.[10] Born: 29 August 1959, London, England • Nationality: British, American • Education: Dragon School,[1] Eton College • Alma mater: St. John's College, Oxford (no degree); California Institute of Technology (PhD, 19 November 1979) • Known for: Mathematica; Wolfram Alpha; A New Kind of Science;[2] Wolfram Language • Awards: MacArthur Fellowship (1981) • Fields: Mathematics,[3] physics, computing, cellular automata • Institutions: Wolfram Research; Thinking Machines Corporation;[4] California Institute of Technology; Institute for Advanced Study; University of Illinois at Urbana–Champaign • Thesis: Some Topics in Theoretical High-Energy Physics (1980) • Doctoral advisor: Richard D. Field[5] • Website: www.stephenwolfram.com; twitter.com/stephen_wolfram As a businessman, he is the founder and CEO of the software company Wolfram Research, where he works as chief designer of Mathematica and the Wolfram Alpha answer engine. Early life Family Stephen Wolfram was born in London in 1959 to Hugo and Sybil Wolfram, both German Jewish refugees to the United Kingdom.[11] His maternal grandmother was British psychoanalyst Kate Friedlander. Wolfram's father, Hugo Wolfram, was a textile manufacturer and served as managing director of the Lurex Company—makers of the fabric Lurex.[12] Wolfram's mother, Sybil Wolfram, was a Fellow and Tutor in Philosophy at Lady Margaret Hall, University of Oxford, from 1964 to 1993.[13] Stephen Wolfram is married to a mathematician.
They have four children together.[14][15] Education Wolfram was educated at Eton College, but left prematurely in 1976.[16] As a young child, Wolfram had difficulties learning arithmetic.[17] He entered St. John's College, Oxford, at age 17 and left in 1978[18] without graduating[19][20] to attend the California Institute of Technology the following year, where he received a PhD[21] in particle physics in 1980.[22] Wolfram's thesis committee was composed of Richard Feynman, Peter Goldreich, Frank J. Sciulli and Steven Frautschi, and chaired by Richard D. Field.[22][23] Early career Wolfram, at the age of 15, began research in applied quantum field theory and particle physics and published scientific papers in peer-reviewed scientific journals including Nuclear Physics B, Australian Journal of Physics, Nuovo Cimento, and Physical Review D. Working independently, Wolfram published a widely cited paper on heavy quark production at age 18[2] and nine other papers.[24] Wolfram's work with Geoffrey C. Fox on the theory of the strong interaction is still used in experimental particle physics. Following his PhD, Wolfram joined the faculty at Caltech and became the youngest recipient[25] of a MacArthur Fellowship in 1981, at age 21.[19] Later career Complex systems and cellular automata In 1983, Wolfram left for the School of Natural Sciences of the Institute for Advanced Study in Princeton. By that time, he was no longer interested in particle physics. Instead, he began pursuing investigations into cellular automata, mainly with computer simulations. 
He produced a series of papers systematically investigating the class of elementary cellular automata, conceiving the Wolfram code, a naming system for one-dimensional cellular automata, and a classification scheme for the complexity of their behaviour.[26] He conjectured that the Rule 110 cellular automaton might be Turing complete, which was later proved correct.[27] Wolfram's cellular-automata work came to be cited in more than 10,000 papers.[24] In the mid-1980s, Wolfram worked on simulations of physical processes (such as turbulent fluid flow) with cellular automata on the Connection Machine alongside Richard Feynman[28] and helped initiate the field of complex systems. In 1984, he was a participant in the Founding Workshops of the Santa Fe Institute, along with Nobel laureates Murray Gell-Mann, Manfred Eigen, and Philip Warren Anderson, and future laureate Frank Wilczek.[29] In 1986, he founded the Center for Complex Systems Research (CCSR) at the University of Illinois at Urbana–Champaign.[30] In 1987, he founded the journal Complex Systems.[30] Symbolic Manipulation Program Wolfram led the development of the computer algebra system SMP (Symbolic Manipulation Program) in the Caltech physics department during 1979–1981. A dispute with the administration over the intellectual property rights regarding SMP—patents, copyright, and faculty involvement in commercial ventures—eventually led him to resign from Caltech.[31] SMP was further developed and marketed commercially by Inference Corp. of Los Angeles during 1983–1988. Mathematica In 1986, Wolfram left the Institute for Advanced Study for the University of Illinois at Urbana–Champaign, where he had founded their Center for Complex Systems Research, and started to develop the computer algebra system Mathematica, which was first released on 23 June 1988, when he left academia. 
In 1987, he founded Wolfram Research, which continues to develop and market the program.[2] A New Kind of Science From 1992 to 2002, Wolfram worked on his controversial book A New Kind of Science,[2][32] which presents an empirical study of simple computational systems. Additionally, it argues that for fundamental reasons these types of systems, rather than traditional mathematics, are needed to model and understand complexity in nature. Wolfram's conclusion is that the universe is discrete in its nature, and runs on fundamental laws which can be described as simple programs. He predicts that a realization of this within scientific communities will have a revolutionary influence on physics, chemistry, biology, and a majority of scientific areas in general, hence the book's title. The book was met with skepticism and criticism that Wolfram took credit for the work of others.[33][34] Wolfram Alpha computational knowledge engine Main article: Wolfram Alpha In March 2009, Wolfram announced Wolfram Alpha, an answer engine. Wolfram Alpha launched in May 2009,[35] and a paid-for version with extra features launched in February 2012; it was met with criticism for its high price, which was later dropped from $50.00 to $2.00.[36][37] The engine is based on natural language processing and a large library of algorithms. The application programming interface allows other applications to extend and enhance Wolfram Alpha.[38] Touchpress In 2010, Wolfram co-founded Touchpress along with Theodore Gray, Max Whitby, and John Cromie. The company specialised in creating in-depth premium apps and games covering a wide range of educational subjects designed for children, parents, students, and educators. Since the launch, Touchpress has published more than 100 apps.[39] The company is no longer active.
Wolfram Language In March 2014, at the annual South by Southwest (SXSW) event, Wolfram officially announced the Wolfram Language as a new general multi-paradigm programming language,[40] though it was previously available through Mathematica and not an entirely new programming language. The documentation for the language was pre-released in October 2013 to coincide with the bundling of Mathematica and the Wolfram Language on every Raspberry Pi computer with some controversy because of the proprietary nature of the Wolfram Language.[41] While the Wolfram Language has existed for over 30 years as the primary programming language used in Mathematica, it was not officially named until 2014, and is not widely used.[42][43] Wolfram Physics Project In April 2020, Wolfram announced the "Wolfram Physics Project" as an effort to reduce and explain all the laws of physics within a paradigm of a hypergraph that is transformed by minimal rewriting rules that obey the Church-Rosser property.[44][45] The effort is a continuation of the ideas he originally described in A New Kind of Science. Wolfram claims that "From an extremely simple model, we're able to reproduce special relativity, general relativity and the core results of quantum mechanics." Physicists are generally unimpressed with Wolfram's claim, and state that Wolfram's results are non-quantitative and arbitrary.[46][47] Personal interests and activities Wolfram has an extensive log of personal analytics, including emails received and sent, keystrokes made, meetings and events attended, phone calls, even physical movement dating back to the 1980s. In the preface of A New Kind of Science, he noted that he recorded over one-hundred million keystrokes and one-hundred mouse miles. He has stated "[personal analytics] can give us a whole new dimension to experiencing our lives."[48] Stephen Wolfram was involved as a scientific consultant for the 2016 film Arrival. 
He and his son Christopher wrote some of the code featured on-screen, such as the code in graphics depicting an analysis of the alien logograms, for which they used the Wolfram Language.[49][50] Bibliography • Metamathematics: Foundations & Physicalization, (2022), Wolfram Media, Inc, ASIN:B0BPN7SHN3 • Combinators: A Centennial View (2021) • A Project to Find the Fundamental Theory of Physics (2020), Publisher: Wolfram Media, ISBN 978-1-57955-035-6 • Adventures of a Computational Explorer (2019) • Idea Makers: Personal Perspectives on the Lives & Ideas of Some Notable People (2016)[51] • Elementary Introduction to the Wolfram Language (2015)[52] • Wolfram, Stephen (2002). A new kind of science. Champaign, IL: Wolfram Media. ISBN 1-57955-008-8. OCLC 47831356. • The Mathematica Book (multiple editions) • Cellular Automata and Complexity: Collected Papers (1994) • Theory and Applications of Cellular Automata (1986) References 1. My Life in Technology—As Told at the Computer History Museum 2. Giles, J. (2002). "Stephen Wolfram: What kind of science is this?". Nature. 417 (6886): 216–218. Bibcode:2002Natur.417..216G. doi:10.1038/417216a. PMID 12015565. S2CID 10636328. 3. Wolfram, S. (2013). "Computer algebra". Proceedings of the 38th international symposium on International symposium on symbolic and algebraic computation – ISSAC '13. pp. 7–8. doi:10.1145/2465506.2465930. ISBN 9781450320597. S2CID 37099593. 4. Stephen Wolfram's publications indexed by the Scopus bibliographic database. (subscription required) 5. Wolfram, Stephen (1980). Some topics in theoretical high-energy physics. Caltech Library (phd). California Institute of Technology. Retrieved 8 May 2018. 6. "Biographical Facts for Stephen Wolfram". www.stephenwolfram.com. Archived from the original on 4 February 2018. Retrieved 2 March 2017. 7. "Stephen Wolfram". Wolfram Alpha. Retrieved 15 May 2012. 8. "Stephen Wolfram: 'I am an information pack rat'". New Scientist. Retrieved 19 April 2014. 9. 
List of Fellows of the American Mathematical Society, retrieved 1 September 2013. 10. , retrieved 9 December 2022 11. The Universal Mind: The Evolution of Machine Intelligence and Human Psychology, Xiphias Press, 1 Sep 2016, Michael Peragine 12. Telling a good yarn by Jenny Lunnon, Oxford Times, Thursday 21 September 2006. 13. Kate Friedländer née Frankl (1902–1949), Psychoanalytikerinnen. Biografisches Lexikon. 14. "Stephen Wolfram". Sunday Profile. 31 May 2009. Australian Broadcasting Corporation. 15. "The Life and Times of Stephen Wolfram: Biographical Facts". Retrieved 3 May 2023. 16. A Speech for (High-School) Graduates by Stephen Wolfram (a commencement speech for Stanford Online High School), StephenWolfram.com, 9 June 2014: "You know, as it happens, I myself never officially graduated from high school, and this is actually the first high school graduation I've ever been to." 17. PHYSICIST AWARDED 'GENIUS' PRIZE FINDS REALITY IN INVISIBLE WORLD, by GLADWIN HILL, New York Times, 24 May 1981: "When I first went to school, they thought I was behind, he says, because I didn't want to read the silly books they gave us. And I never was able to do arithmetic. It was when he got into higher mathematics, such as calculus, he says, that he realized there was an invisible world that he wanted to explore." 18. Complexity: A Guided Tour by Melanie Mitchell, 2009, p. 151: "In the early 1980s, Stephen Wolfram, a physicist working at the Institute for Advanced Study in Princeton, became fascinated by cellular automata and the patterns they make. Wolfram is one of those legendary child prodigies people like to tell stories about. Born in London in 1959, Wolfram published his first physics paper at 15. Two years later, in the summer after his first year at Oxford, . . . Wolfram wrote a paper in the field of "quantum chromodynamics" that attracted the attention of Nobel-Prize-winning physicist Murray Gell-Mann, who invited Wolfram to join his group at Caltech…" 19. 
Arndt, Michael (17 May 2002). "Stephen Wolfram's Simple Science". BusinessWeek. Retrieved 1 January 2022. 20. Stephen Wolfram: 'The textbook has never interested me': The British child genius who abandoned physics to devote himself to coding and the cosmos, by Zoë Corbyn, The Guardian, Saturday 28 June 2014: "He entered Oxford University at 17 without A-levels and left around a year later without graduating. He was bored and he had been invited to cross the pond by the prestigious California Institute of Technology (Caltech) to do a PhD. "I had written a bunch of papers and so was pretty well known by that time,"" ... 21. Stephen Wolfram at the Mathematics Genealogy Project 22. Wolfram, Stephen (1980). Some Topics in Theoretical High-Energy Physics (PhD thesis). California Institute of Technology. 23. Application 24. Levy, Steven (1 June 2002). "The Man Who Cracked The Code to Everything..." Wired.com. Retrieved 22 November 2018. 25. "FOUNDATION TO SUPPORT 21 AS 'GENIUSES' FOR 5 YEARS". The New York Times. Retrieved 26 March 2023. 26. Regis, Ed (1987). Who Got Einstein's Office: Eccentricity and Genius at the Institute for Advanced Study, Addison-Wesley, Reading. ISBN 0201120658 27. Cook, Matthew (2004). "Universality in Elementary Cellular Automata". Complex Systems. 15 (1). ISSN 0891-2513. Retrieved 24 June 2015. 28. W. Daniel Hillis (February 1989). "Richard Feynman and The Connection Machine". Physics Today. Archived from the original on 28 July 2009. Retrieved 3 November 2006. 29. Pines, David (2018). Pines, David (ed.). Emerging Syntheses in Science: Proceedings of the Founding Workshops of the Santa Fe Institute (PDF). Menlo Park, California: Addison-Wesley. pp. 183–190. doi:10.1201/9780429492594. ISBN 9780429492594. S2CID 142670544. Archived from the original (PDF) on 11 August 2018. 30. "The Man Who Cracked The Code to Everything". Wired. Retrieved 7 April 2012. 31. Kolata, G. (1983). "Caltech Torn by Dispute over Software". Science. 220 (4600): 932–934. 
Bibcode:1983Sci...220..932K. doi:10.1126/science.220.4600.932. PMID 17816011. 32. Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media. ISBN 1579550088. 33. "Stephen Wolfram, A New Kind of Science". bactra.org. Retrieved 17 October 2022. 34. Giles, Jim (1 May 2002). "What kind of science is this?". Nature. 417 (6886): 216–218. Bibcode:2002Natur.417..216G. doi:10.1038/417216a. ISSN 1476-4687. PMID 12015565. S2CID 10636328. 35. Wolfram, Stephen (5 March 2009). "Wolfram|Alpha Is Coming!". Wolfram blog. Retrieved 9 March 2009. 36. Sorrel, Charlie. "Wolfram Alpha for iPhone Drops from $50 to $2". Wired. ISSN 1059-1028. Retrieved 17 October 2022. 37. "Announcing Wolfram|Alpha Pro". Wolfram|Alpha blog. Retrieved 7 April 2012. 38. Johnson, Bobbie (9 March 2009). "British search engine 'could rival Google'". The Guardian. Retrieved 9 March 2009. 39. "Popular Science columnist earns prestigious American Chemical Society award". American Chemical Society. Retrieved 25 December 2018. 40. Wolfram Language reference page Retrieved on 14 May 2014 41. Shankland, Stephen. "Premium Mathematica software free on budget Raspberry Pi". CNET. Retrieved 18 March 2021. 42. Slate's article Stephen Wolfram's New Programming Language: He Can Make The World Computable, 6 March 2014. Retrieved on 14 May 2014. 43. "TIOBE Index". TIOBE. Retrieved 17 October 2022. 44. "Stephen Wolfram Invites You to Solve Physics". Wired. ISSN 1059-1028. Retrieved 15 April 2020. 45. "Stephen Wolfram's hypergraph project aims for a fundamental theory of physics". Science News. 14 April 2020. Retrieved 23 April 2020. 46. Becker, Adam (6 May 2020). "Physicists Criticize Stephen Wolfram's 'Theory of Everything'". Scientific American. Retrieved 10 May 2020. 47. "The Trouble With Stephen Wolfram's New 'Fundamental Theory of Physics'". Gizmodo. 2020. Retrieved 23 April 2020. 48. Stephen, Wolfram. "The Personal Analytics of My Life". Wired. Retrieved 18 October 2016. 49. 
How Arrival's Designers Crafted a Mesmerizing Language, Margaret Rhodes, Wired, 16 November 2016. 50. "Dissecting the alien language in 'Arrival'". Engadget. Retrieved 16 November 2016. 51. Siegfried, Tom (13 August 2016). "'Idea Makers' tackles scientific thinkers' big ideas and personal lives Human side of science emphasized in new book". Science News. Society for Science & the Public. Retrieved 11 October 2022. 52. Stephen Wolfram Aims to Democratize His Software by Steve Lohr, The New York Times, 14 December 2015. External links • Official website • Wolfram Foundation • Stephen Wolfram at the Mathematics Genealogy Project • Stephen Wolfram at IMDb • Stephen Wolfram at TED • Stephen Wolfram on Charlie Rose • Works by Stephen Wolfram at Open Library
Stephens' constant Stephens' constant expresses the density of certain subsets of the prime numbers.[1][2] Let $a$ and $b$ be two multiplicatively independent integers, that is, $a^{m}b^{n}\neq 1$ except when both $m$ and $n$ equal zero. Consider the set $T(a,b)$ of prime numbers $p$ such that $p$ evenly divides $a^{k}-b$ for some power $k$. The density of the set $T(a,b)$ relative to the set of all primes is a rational multiple of $C_{S}=\prod _{p}\left(1-{\frac {p}{p^{3}-1}}\right)=0.57595996889294543964316337549249669\ldots $(sequence A065478 in the OEIS) Stephens' constant is closely related to the Artin constant $C_{A}$ that arises in the study of primitive roots.[3][4] $C_{S}=\prod _{p}\left(C_{A}+\left({{1-p^{2}} \over {p^{2}(p-1)}}\right)\right)\left({{p} \over {(p+1+{{1} \over {p}})}}\right)$ See also • Euler product • Twin prime constant References 1. Stephens, P. J. (1976). "Prime Divisor of Second-Order Linear Recurrences, I." Journal of Number Theory. 8 (3): 313–332. doi:10.1016/0022-314X(76)90010-X. 2. Weisstein, Eric W. "Stephens' Constant". MathWorld. 3. Moree, Pieter; Stevenhagen, Peter (2000). "A two-variable Artin conjecture". Journal of Number Theory. 85 (2): 291–304. arXiv:math/9912250. doi:10.1006/jnth.2000.2547. S2CID 119739429. 4. Moree, Pieter (2000). "Approximation of singular series and automata". Manuscripta Mathematica. 101 (3): 385–399. doi:10.1007/s002290050222. S2CID 121036172.
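Because each factor of the product behaves like $1-p^{-2}$, truncating the product over primes below a finite bound already gives several correct digits. The following is a small computational sketch, not part of the article; the truncation bound of 100,000 is an arbitrary choice that leaves a truncation error on the order of 10⁻⁶:

```python
def primes_below(limit):
    """Primes below `limit`, via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, limit, i)))
    return [i for i in range(limit) if sieve[i]]

def stephens_constant(limit=100_000):
    """Truncated product over primes p < limit of (1 - p/(p^3 - 1))."""
    c = 1.0
    for p in primes_below(limit):
        c *= 1.0 - p / (p ** 3 - 1)
    return c

print(stephens_constant())  # ≈ 0.575960, vs C_S = 0.5759599688...
```

The truncation error shrinks only like $1/(N\log N)$, so full precision would require either a much larger bound or series acceleration of the kind discussed by Moree.[4]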
Steffensen's method In numerical analysis, Steffensen's method is an iterative method for root-finding named after Johan Frederik Steffensen which is similar to Newton's method, but with certain situational advantages. In particular, Steffensen's method achieves similar quadratic convergence, but without using derivatives as Newton's method does. Simple description The simplest form of the formula for Steffensen's method occurs when it is used to find a zero of a real function f; that is, to find the real value $x_{\star }$ that satisfies $f(x_{\star })=0.$ Near the solution $x_{\star },$ the function $f$ is supposed to approximately satisfy $-1<f'(x_{\star })<0;$ this condition makes $~f~$ adequate as a correction-function for $~x~$ for finding its own solution, although it is not required to work efficiently. For some functions, Steffensen's method can work even if this condition is not met, but in such a case, the starting value $~x_{0}~$ must be very close to the actual solution $~x_{\star }~,$ and convergence to the solution may be slow. Given an adequate starting value $~x_{0}~,$ a sequence of values $~x_{0},\,x_{1},\,x_{2},\dots ,\,x_{n},\,\dots ~$ can be generated using the formula below. When it works, each value in the sequence is much closer to the solution $~x_{\star }~$ than the prior value. 
The value $~x_{n}~$ from the current step generates the value $~x_{n+1}~$ for the next step, via this formula:[1] $x_{n+1}=x_{n}-{\frac {\,f(x_{n})\,}{g(x_{n})}}~$ for $n=0,1,2,3,...~;$ where the slope function $~g(x)~$ is a composite of the original function $~f~$ given by the following formula: $g(x)={\frac {\,f{\bigl (}x+f(x){\bigr )}\,}{f(x)}}-1~$ or perhaps more clearly, $g(x)={\frac {\,f(x+h)-f(x)\,}{h}}~$ where $~h=f(x)~$ is a step-size between the last iteration point, x, and an auxiliary point located at $~x+h~.$ The function $~g~$ is the average value for the slope of the function $f$ between the last sequence point $~\left(x,y\right)={\bigl (}x_{n},\,f\left(x_{n}\right){\bigr )}~$ and the auxiliary point $~{\bigl (}x,y{\bigr )}={\bigl (}x_{n}+h,\,f\left(x_{n}+h\right)\,{\bigr )}~,$ with the step $~h=f(x_{n})~.$ It is also called the first-order divided difference of $~f~$ between those two points. It is only for the purpose of finding $~h~$ for this auxiliary point that the value of the function $~f~$ must be an adequate correction to get closer to its own solution, and for that reason fulfill the requirement that $~-1<f'(x_{\star })<0~.$ For all other parts of the calculation, Steffensen's method only requires the function $~f~$ to be continuous and to actually have a nearby solution.[1] Several modest modifications of the step $~h~$ in the formula for the slope $~g~$ exist, such as multiplying it by  1 /2 or  3 /4, to accommodate functions $~f~$ that do not quite meet the requirement. Advantages and drawbacks The main advantage of Steffensen's method is that it has quadratic convergence[1] like Newton's method – that is, both methods find roots to an equation $~f~$ just as 'quickly'. In this case quickly means that for both methods, the number of correct digits in the answer doubles with each step. 
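In code, the update rule above is only a few lines. The following is a minimal illustration, not taken from the article; the test function $f(x)={\tfrac {1}{2}}(\cos x-x)$ and the starting value 0.5 are arbitrary choices, picked so that $f'(x_{\star })\approx -0.84$ lies in the interval $(-1,0)$ mentioned above:

```python
import math

def steffensen(f, x, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/g(x_n), where
    g(x) = (f(x + f(x)) - f(x)) / f(x) is the first-order
    divided difference of f with step size h = f(x)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:           # close enough to a root: stop
            break
        g = (f(x + fx) - fx) / fx   # slope estimate with h = f(x)
        if g == 0:                  # degenerate slope: give up
            break
        x -= fx / g
    return x

# f(x) = (cos x - x)/2 has the same root as cos x = x,
# the Dottie number x_star = 0.7390851332...
root = steffensen(lambda t: 0.5 * (math.cos(t) - t), 0.5)
print(root)  # ≈ 0.7390851332151607
```

Starting from 0.5, the iterates reach machine precision in about four steps, consistent with the quadratic convergence discussed below.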
But the formula for Newton's method requires evaluation of the function's derivative $~f'~$ as well as the function $~f~,$ while Steffensen's method only requires $~f~$ itself. This is important when the derivative is not easily or efficiently available. The price for the quick convergence is the double function evaluation: Both $~f(x_{n})~$ and $~f(x_{n}+h)~$ must be calculated, which might be time-consuming if $~f~$ is a complicated function. For comparison, the secant method needs only one function evaluation per step. The secant method increases the number of correct digits by "only" a factor of roughly 1.6 per step, but one can do twice as many steps of the secant method within a given time. Since the secant method can carry out twice as many steps in the same time as Steffensen's method,[lower-alpha 1] when both algorithms succeed, the secant method actually converges faster than Steffensen's method in practical use: The secant method achieves a factor of about (1.6)2 ≈ 2.6 times as many digits for every two steps (two function evaluations), compared to Steffensen's factor of 2 for every one step (two function evaluations). Similar to most other iterative root-finding algorithms, the crucial weakness in Steffensen's method is the choice of the starting value $~x_{0}~.$ If the value of $~x_{0}~$ is not 'close enough' to the actual solution $~x_{\star }~,$ the method may fail and the sequence of values $~x_{0},\,x_{1},\,x_{2},\,x_{3},\,\dots ~$ may either flip-flop between two extremes, or diverge to infinity, or both. Derivation using Aitken's delta-squared process The version of Steffensen's method implemented in the MATLAB code shown below can be found using the Aitken's delta-squared process for accelerating convergence of a sequence. To compare the following formulae to the formulae in the section above, notice that $x_{n}=p\,-\,p_{n}~.$ This method assumes starting with a linearly convergent sequence and increases the rate of convergence of that sequence. 
If the signs of $~p_{n},\,p_{n+1},\,p_{n+2}~$ agree and $~p_{n}~$ is 'sufficiently close' to the desired limit of the sequence $~p~,$ we can assume the following: ${\frac {\,p_{n+1}-p\,}{\,p_{n}-p\,}}~\approx ~{\frac {\,p_{n+2}-p\,}{\,p_{n+1}-p\,}}$ then $(p_{n+1}-p)^{2}~\approx ~(\,p_{n+2}-p\,)\,(\,p_{n}-p\,)~$ so $p_{n+1}^{2}-2\,p_{n+1}\,p+p^{2}~\approx ~p_{n+2}\;p_{n}-(\,p_{n}+p_{n+2}\,)\,p+p^{2}$ and hence $(\,p_{n+2}-2\,p_{n+1}+p_{n}\,)\,p~\approx ~p_{n+2}\,p_{n}-p_{n+1}^{2}~.$ Solving for the desired limit of the sequence $~p~$ gives: $p~\approx ~{\frac {\,p_{n+2}\,p_{n}-p_{n+1}^{2}\,}{\,p_{n+2}-2\,p_{n+1}+p_{n}\,}}~=~{\frac {\,p_{n}^{2}+p_{n}\,p_{n+2}+2\,p_{n}\,p_{n+1}-2\,p_{n}\,p_{n+1}-p_{n}^{2}-p_{n+1}^{2}\,}{\,p_{n+2}-2\,p_{n+1}+p_{n}\,}}$ $=~{\frac {\,(\,p_{n}^{2}+p_{n}\,p_{n+2}-2\,p_{n}\,p_{n+1}\,)-(\,p_{n}^{2}-2\,p_{n}\,p_{n+1}+p_{n+1}^{2}\,)\,}{\,p_{n+2}-2\,p_{n+1}+p_{n}\,}}$ $=~p_{n}-{\frac {\,(\,p_{n+1}-p_{n})^{2}\,}{\,p_{n+2}-2\,p_{n+1}+p_{n}\,}}~,$ which results in the more rapidly convergent sequence: $p~\approx ~p_{n+3}~=~p_{n}-{\frac {\,(\,p_{n+1}-p_{n}\,)^{2}\,}{\,p_{n+2}-2\,p_{n+1}+p_{n}\,}}~.$ Code example In Matlab Here is the source for an implementation of Steffensen's Method in MATLAB.

function Steffensen(f, p0, tol)
% This function takes as inputs: a fixed-point iteration function, f,
% an initial guess to the fixed point, p0, and a tolerance, tol.
% The fixed-point iteration function is assumed to be input as an
% inline function.
% This function will calculate and return the fixed point, p,
% that makes the expression f(x) = p true to within the desired
% tolerance, tol.

format compact  % This shortens the output.
format long     % This prints more decimal places.

for i = 1:1000  % Do a large, but finite, number of iterations, so that
                % if the method fails to converge, we won't be stuck in
                % an infinite loop.
    p1 = f(p0);  % Calculate the next two guesses for the fixed point.
    p2 = f(p1);
    p = p0 - (p1 - p0)^2/(p2 - 2*p1 + p0)  % Use Aitken's delta-squared method
                 % to find a better approximation to p0.
                 % (No semicolon, so each iterate is printed.)
    if abs(p - p0) < tol  % Test to see if we are within tolerance.
        break             % If we are, stop the iterations; we have our answer.
    end
    p0 = p;  % Update p0 for the next iteration.
end

if abs(p - p0) > tol  % If we fail to meet the tolerance, output a
                      % message of failure.
    'failed to converge in 1000 iterations.'
end

In Python Here is the source for an implementation of Steffensen's Method in Python.

from typing import Callable, Iterator

Func = Callable[[float], float]


def g(f: Func, x: float, fx: float) -> Func:
    """First-order divided difference function.

    Arguments:
        f: Function input to g
        x: Point at which to evaluate g
        fx: Function f evaluated at x
    """
    return lambda x: f(x + fx) / fx - 1


def steff(f: Func, x: float) -> Iterator[float]:
    """Steffensen algorithm for finding roots.

    This generator yields the x_{n+1} value first; each time the
    generator iterates, it yields the next value x_{n+2}.

    Arguments:
        f: Function whose root we are searching for
        x: Starting value; on the n-th iteration x holds x_n
    """
    while True:
        fx = f(x)
        gx = g(f, x, fx)(x)
        if gx == 0:
            break
        else:
            x = x - fx / gx  # Update to x_{n+1}
            yield x          # Yield value

Generalization to Banach space Steffensen's method can also be used to find an input $~x=x_{\star }~$ for a different kind of function $~F~$ that produces output the same as its input: $~x_{\star }=F(x_{\star })~$ for the special value $~x_{\star }~.$ Solutions like $~x_{\star }~$ are called fixed points. Many of these functions can be used to find their own solutions by repeatedly recycling the result back as input, but the rate of convergence can be slow, or the function can fail to converge at all, depending on the individual function. Steffensen's method accelerates this convergence, to make it quadratic.
For orientation, the root function $~f~$ and the fixed-point functions are simply related by $~F(x)=x+\varepsilon \,f(x)~,$ where $~\varepsilon ~$ is some scalar constant small enough in magnitude to make $~F~$ stable under iteration, but large enough for the non-linearity of the function $~f~$ to be appreciable. (For the sake of this comparison, all issues of a general Banach space versus the basic real numbers are momentarily ignored.) This method for finding fixed points of a real-valued function has been generalised for functions $~F:X\to X~$ that map a Banach space $~X~$ onto itself, or even more generally $~F:X\to Y~$ that map from one Banach space $~X~$ into another Banach space $~Y~.$ The generalized method assumes that a family of bounded linear operators $~\{\;L(u,v):u,v\in X\;\}~$ associated with $~u~$ and $~v~$ can be devised that (locally) satisfies the following condition.[2] $F\left(u\right)-F\left(v\right)=L\left(u,v\right)\,{\bigl [}\,u-v\,{\bigr ]}~.\qquad \qquad \qquad \qquad ~$eqn. (𝄋) If division is possible in the Banach space, the linear operator $~L~$ can be obtained from $L\left(u,v\right)={\bigl [}\,F\left(u\right)-F\left(v\right)\,{\bigr ]}{\bigl [}\,u-v\,{\bigr ]}^{-1}~,$ which may provide some insight. (The quotient form is shown for orientation; it is not required per se.) Expressed in this way, the linear operator $~L~$ can be more easily seen to be an elaborate version of the divided difference $~g~$ discussed in the first section, above. Note that the division is not necessary; the only requirement is that the operator $~L~$ satisfy the equation marked with the segno, (𝄋). For the basic real number function $~f~$, given in the first section, the function simply takes in and puts out real numbers. There, the function $~g~$ is a divided difference. In the generalized form here, the operator $~L~$ is the analogue of a divided difference for use in the Banach space.
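In finite dimensions, one concrete operator $L(u,v)$ satisfying the (𝄋) condition can be built by updating one coordinate of $v$ at a time, so that the column sums telescope to $F(u)-F(v)$. The sketch below is an illustration, not from the article; the two-dimensional map $F$, the starting point, and the tolerances are arbitrary choices:

```python
def dd_operator(F, u, v):
    """Columns of a divided-difference matrix L(u, v), built by swapping
    one coordinate of v for the matching coordinate of u at a time,
    so that F(u) - F(v) = L(u, v) (u - v) holds exactly (telescoping)."""
    n = len(u)
    cols, w_prev = [], list(v)
    for j in range(n):
        w = list(w_prev)
        w[j] = u[j]
        if u[j] == v[j]:
            cols.append([0.0] * n)  # degenerate column; the identity still holds
        else:
            Fw, Fp = F(w), F(w_prev)
            cols.append([(Fw[i] - Fp[i]) / (u[j] - v[j]) for i in range(n)])
        w_prev = w
    return cols  # cols[j][i] is the entry L[i][j]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def F(u):  # an arbitrary contractive fixed-point map on R^2
    return [(u[0] + u[1] ** 2) / 4 + 0.1, (u[0] ** 2 + u[1]) / 4 + 0.2]

# Generalized Steffensen iteration:
#   x_{n+1} = x_n + [I - L(F(x_n), x_n)]^{-1} [F(x_n) - x_n]
x = [0.0, 0.0]
for _ in range(20):
    Fx = F(x)
    if max(abs(Fx[i] - x[i]) for i in range(2)) < 1e-12:
        break
    L = dd_operator(F, Fx, x)
    A = [[(1.0 if i == j else 0.0) - L[j][i] for j in range(2)] for i in range(2)]
    d = solve2(A, [Fx[0] - x[0], Fx[1] - x[1]])
    x = [x[0] + d[0], x[1] + d[1]]

print(x)  # converges to the fixed point, roughly (0.1586, 0.2750)
```

Plain fixed-point iteration of this $F$ converges only linearly; the divided-difference step above reaches machine precision in a handful of iterations.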
The operator $~L~$ is roughly equivalent to a matrix whose entries are all functions of vector arguments $~u~$ and $~v~$. Steffensen's method is then very similar to Newton's method, except that it uses the divided difference $~L{\bigl (}\,F\left(x\right),\,x\,{\bigr )}~$ instead of the derivative $~F'(x)~.$ Note that for arguments $~x~$ close to some fixed point $~x_{\star }~$, and for fixed-point functions $~F~$ whose linear operators $~L~$ meet the marked (𝄋) condition, $~F'(x)\approx L{\bigl (}\,F\left(x\right),\,x\,{\bigr )}\approx I~,$ where $~I~$ is the identity operator. In the case that division is possible in the Banach space, the generalized iteration formula is given by $x_{n+1}=x_{n}+{\Bigl [}I-L{\bigl (}F\left(x_{n}\right),x_{n}{\bigr )}{\Bigr ]}^{-1}{\Bigl [}F\left(x_{n}\right)-x_{n}{\Bigr ]}~,$ for $~n=1,\,2,\,3,\,...~.$ In the more general case in which division may not be possible, the iteration formula requires finding a solution $~x_{n+1}~$ close to $~x_{n}~$ for which ${\Bigl [}I-L{\bigl (}F\left(x_{n}\right),x_{n}{\bigr )}{\Bigr ]}{\Bigl [}x_{n+1}-x_{n}{\Bigr ]}=F\left(x_{n}\right)-x_{n}~.$ If the operator $~L~$ satisfies ${\Bigl \|}L\left(u,v\right)-L\left(x,y\right){\Bigr \|}\leq k{\biggl (}{\Bigl \|}u-x{\Bigr \|}+{\Bigl \|}v-y{\Bigr \|}{\biggr )}~$ for some positive real constant $~k~,$ then the method converges quadratically to a fixed point of $~F~$ if the initial approximation $~x_{0}~$ is 'sufficiently close' to the desired solution $~x_{\star }~$ that satisfies $~x_{\star }=F(x_{\star })~.$
Johnson, L.W.; Scholz, D.R. (June 1968). "On Steffensen's method". SIAM Journal on Numerical Analysis. 5 (2): 296–302. doi:10.1137/0705026. JSTOR 2949443.
Sterbenz lemma In floating-point arithmetic, the Sterbenz lemma or Sterbenz's lemma[1] is a theorem giving conditions under which floating-point differences are computed exactly. It is named after Pat H. Sterbenz, who published a variant of it in 1974.[2] Sterbenz lemma — In a floating-point number system with subnormal numbers, if $x$ and $y$ are floating-point numbers such that ${\frac {y}{2}}\leq x\leq 2y,$ then $x-y$ is also a floating-point number. Thus, a correctly rounded floating-point subtraction $x\ominus y=\operatorname {fl} (x-y)=x-y$ is computed exactly. The Sterbenz lemma applies to IEEE 754, the most widely used floating-point number system in computers. Proof Let $\beta $ be the radix of the floating-point system and $p$ the precision. Consider several easy cases first: • If $x$ is zero then $x-y=-y$, and if $y$ is zero then $x-y=x$, so the result is trivial because floating-point negation is always exact. • If $x=y$ the result is zero and thus exact. • If $x<0$ then we must also have $y/2\leq x<0$ so $y<0$. In this case, $x-y=-{\bigl (}(-x)-(-y){\bigr )}$, so the result follows from the theorem restricted to $x,y\geq 0$. • If $x\leq y$, we can write $x-y=-(y-x)$ with $x/2\leq y\leq 2x$, so the result follows from the theorem restricted to $x\geq y$. For the rest of the proof, assume $0<y<x\leq 2y$ without loss of generality. Write $x,y>0$ in terms of their positive integral significands $s_{x},s_{y}\leq \beta ^{p}-1$ and minimal exponents $e_{x},e_{y}$: ${\begin{aligned}x&=s_{x}\cdot \beta ^{e_{x}-p+1}\\y&=s_{y}\cdot \beta ^{e_{y}-p+1}\end{aligned}}$ Note that $x$ and $y$ may be subnormal—we do not assume $s_{x},s_{y}\geq \beta ^{p-1}$. The subtraction gives: ${\begin{aligned}x-y&=s_{x}\cdot \beta ^{e_{x}-p+1}-s_{y}\cdot \beta ^{e_{y}-p+1}\\&=s_{x}\beta ^{e_{x}-e_{y}}\cdot \beta ^{e_{y}-p+1}-s_{y}\cdot \beta ^{e_{y}-p+1}\\&=(s_{x}\beta ^{e_{x}-e_{y}}-s_{y})\cdot \beta ^{e_{y}-p+1}.\end{aligned}}$ Let $s'=s_{x}\beta ^{e_{x}-e_{y}}-s_{y}$.
Since $0<y<x$ we have: • $e_{y}\leq e_{x}$, so $e_{x}-e_{y}\geq 0$, from which we can conclude $\beta ^{e_{x}-e_{y}}$ is an integer and therefore so is $s'=s_{x}\beta ^{e_{x}-e_{y}}-s_{y}$; and • $x-y>0$, so $s'>0$. Further, since $x\leq 2y$, we have $x-y\leq y$, so that $s'\cdot \beta ^{e_{y}-p+1}=x-y\leq y=s_{y}\cdot \beta ^{e_{y}-p+1}$ which implies that $0<s'\leq s_{y}\leq \beta ^{p}-1.$ Hence $x-y=s'\cdot \beta ^{e_{y}-p+1},\quad {\text{for}}\quad 0<s'\leq \beta ^{p}-1,$ so $x-y$ is a floating-point number. ◻ Note: Even if $x$ and $y$ are normal, i.e., $s_{x},s_{y}\geq \beta ^{p-1}$, we cannot prove that $s'\geq \beta ^{p-1}$ and therefore cannot prove that $x-y$ is also normal. For example, the difference of the two smallest positive normal floating-point numbers $x=(\beta ^{p-1}+1)\cdot \beta ^{e_{\mathrm {min} }-p+1}$ and $y=\beta ^{p-1}\cdot \beta ^{e_{\mathrm {min} }-p+1}$ is $x-y=1\cdot \beta ^{e_{\mathrm {min} }-p+1}$ which is necessarily subnormal. In floating-point number systems without subnormal numbers, such as CPUs in nonstandard flush-to-zero mode instead of the standard gradual underflow, the Sterbenz lemma does not apply. Relation to catastrophic cancellation The Sterbenz lemma may be contrasted with the phenomenon of catastrophic cancellation: • The Sterbenz lemma asserts that if $x$ and $y$ are sufficiently close floating-point numbers then their difference $x-y$ is computed exactly by floating-point arithmetic $x\ominus y=\operatorname {fl} (x-y)$, with no rounding needed. • The phenomenon of catastrophic cancellation is that if ${\tilde {x}}$ and ${\tilde {y}}$ are approximations to true numbers $x$ and $y$—whether the approximations arise from prior rounding error or from series truncation or from physical uncertainty or anything else—the error of the difference ${\tilde {x}}-{\tilde {y}}$ from the desired difference $x-y$ is inversely proportional to $x-y$. 
Thus, the closer $x$ and $y$ are, the worse ${\tilde {x}}-{\tilde {y}}$ may be as an approximation to $x-y$, even if the subtraction itself is computed exactly. In other words, the Sterbenz lemma shows that subtracting nearby floating-point numbers is exact, but if the numbers you have are approximations then even their exact difference may be far off from the difference of numbers you wanted to subtract. Use in numerical analysis The Sterbenz lemma is instrumental in proving theorems on error bounds in numerical analysis of floating-point algorithms. For example, Heron's formula $A={\sqrt {s(s-a)(s-b)(s-c)}}$ for the area of triangle with side lengths $a$, $b$, and $c$, where $s=(a+b+c)/2$ is the semi-perimeter, may give poor accuracy for long narrow triangles if evaluated directly in floating-point arithmetic. However, for $a\geq b\geq c$, the alternative formula $A={\frac {1}{4}}{\sqrt {{\bigl (}a+(b+c){\bigr )}{\bigl (}c-(a-b){\bigr )}{\bigl (}c+(a-b){\bigr )}{\bigl (}a+(b-c){\bigr )}}}$ can be proven, with the help of the Sterbenz lemma, to have low forward error for all inputs.[3][4][5] References 1. Muller, Jean-Michel; Brunie, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Joldes, Mioara; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Torres, Serge (2018). Handbook of Floating-Point Arithmetic (2nd ed.). Gewerbestrasse 11, 6330 Cham, Switzerland: Birkhäuser. Lemma 4.1, p. 101. doi:10.1007/978-3-319-76526-6. ISBN 978-3-319-76525-9.{{cite book}}: CS1 maint: location (link) 2. Sterbenz, Pat H. (1974). Floating-Point Computation. Englewood Cliffs, NJ, United States: Prentice-Hall. Theorem 4.3.1 and Corollary, p. 138. ISBN 0-13-322495-3. 3. Kahan, W. (2014-09-04). "Miscalculating Area and Angles of a Needle-like Triangle" (PDF). Lecture Notes for Introductory Numerical Analysis Classes. Retrieved 2020-09-17. 4. Goldberg, David (March 1991). "What every computer scientist should know about floating-point arithmetic". ACM Computing Surveys. 
New York, NY, United States: Association for Computing Machinery. 23 (1): 5–48. doi:10.1145/103162.103163. ISSN 0360-0300. S2CID 222008826. Retrieved 2020-09-17. 5. Boldo, Sylvie (April 2013). Nannarelli, Alberto; Seidel, Peter-Michael; Tang, Ping Tak Peter (eds.). How to Compute the Area of a Triangle: a Formal Revisit. 21st IEEE Symposium on Computer Arithmetic. IEEE Computer Society. pp. 91–98. doi:10.1109/ARITH.2013.29. ISBN 978-0-7695-4957-6. ISSN 1063-6889.
Wikipedia
Stericated 6-simplexes In six-dimensional geometry, a stericated 6-simplex is a convex uniform 6-polytope with 4th order truncations (sterication) of the regular 6-simplex. 6-simplex Stericated 6-simplex Steritruncated 6-simplex Stericantellated 6-simplex Stericantitruncated 6-simplex Steriruncinated 6-simplex Steriruncitruncated 6-simplex Steriruncicantellated 6-simplex Steriruncicantitruncated 6-simplex Orthogonal projections in A6 Coxeter plane There are 8 unique sterications for the 6-simplex with permutations of truncations, cantellations, and runcinations. Stericated 6-simplex Stericated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces700 Cells1470 Faces1400 Edges630 Vertices105 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Small cellated heptapeton (Acronym: scal) (Jonathan Bowers)[1] Coordinates The vertices of the stericated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,1,1,1,2). This construction is based on facets of the stericated 7-orthoplex. Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Steritruncated 6-simplex Steritruncated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,1,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces945 Cells2940 Faces3780 Edges2100 Vertices420 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Cellitruncated heptapeton (Acronym: catal) (Jonathan Bowers)[2] Coordinates The vertices of the steritruncated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,1,1,2,3). This construction is based on facets of the steritruncated 7-orthoplex. 
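Since the constructions above give vertices as coordinate permutations, the stated vertex counts can be checked by counting the distinct permutations of each multiset. A short illustrative Python check (not part of the article):

```python
from itertools import permutations

def vertex_count(coords):
    """Number of distinct permutations of a coordinate multiset."""
    return len(set(permutations(coords)))

# Stericated 6-simplex: permutations of (0,0,1,1,1,1,2) -> 105 vertices.
assert vertex_count((0, 0, 1, 1, 1, 1, 2)) == 105
# Steritruncated 6-simplex: permutations of (0,0,1,1,1,2,3) -> 420 vertices.
assert vertex_count((0, 0, 1, 1, 1, 2, 3)) == 420
```

Equivalently, by the multinomial count, 7!/(2!·4!) = 105 and 7!/(2!·3!) = 420.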
Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Stericantellated 6-simplex Stericantellated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,2,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces1050 Cells3465 Faces5040 Edges3150 Vertices630 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Cellirhombated heptapeton (Acronym: cral) (Jonathan Bowers)[3] Coordinates The vertices of the stericantellated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,1,2,2,3). This construction is based on facets of the stericantellated 7-orthoplex. Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Stericantitruncated 6-simplex stericantitruncated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,1,2,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces1155 Cells4410 Faces7140 Edges5040 Vertices1260 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Celligreatorhombated heptapeton (Acronym: cagral) (Jonathan Bowers)[4] Coordinates The vertices of the stericantitruncated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,1,2,3,4). This construction is based on facets of the stericantitruncated 7-orthoplex. 
Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Steriruncinated 6-simplex steriruncinated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,3,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces700 Cells1995 Faces2660 Edges1680 Vertices420 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Celliprismated heptapeton (Acronym: copal) (Jonathan Bowers)[5] Coordinates The vertices of the steriruncinated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,2,2,2,3). This construction is based on facets of the steriruncinated 7-orthoplex. Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Steriruncitruncated 6-simplex steriruncitruncated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,1,3,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces945 Cells3360 Faces5670 Edges4410 Vertices1260 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Celliprismatotruncated heptapeton (Acronym: captal) (Jonathan Bowers)[6] Coordinates The vertices of the steriruncitruncated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,2,2,3,4). This construction is based on facets of the steriruncitruncated 7-orthoplex. 
Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Steriruncicantellated 6-simplex steriruncicantellated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,2,3,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces1050 Cells3675 Faces5880 Edges4410 Vertices1260 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Bistericantitruncated 6-simplex as t1,2,3,5{3,3,3,3,3} • Celliprismatorhombated heptapeton (Acronym: copril) (Jonathan Bowers)[7] Coordinates The vertices of the steriruncicantellated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,2,3,3,4). This construction is based on facets of the steriruncicantellated 7-orthoplex. Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Steriruncicantitruncated 6-simplex Steriruncicantitruncated 6-simplex Typeuniform 6-polytope Schläfli symbolt0,1,2,3,4{3,3,3,3,3} Coxeter-Dynkin diagrams 5-faces105 4-faces1155 Cells4620 Faces8610 Edges7560 Vertices2520 Vertex figure Coxeter groupA6, [35], order 5040 Propertiesconvex Alternate names • Great cellated heptapeton (Acronym: gacal) (Jonathan Bowers)[8] Coordinates The vertices of the steriruncicantitruncated 6-simplex can be most simply positioned in 7-space as permutations of (0,0,1,2,3,4,5). This construction is based on facets of the steriruncicantitruncated 7-orthoplex. Images orthographic projections Ak Coxeter plane A6 A5 A4 Graph Dihedral symmetry [7] [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Related uniform 6-polytopes These polytopes are part of 35 uniform 6-polytopes based on the [3,3,3,3,3] Coxeter group, all shown here in A6 Coxeter plane orthographic projections. 
A6 polytopes t0 t1 t2 t0,1 t0,2 t1,2 t0,3 t1,3 t2,3 t0,4 t1,4 t0,5 t0,1,2 t0,1,3 t0,2,3 t1,2,3 t0,1,4 t0,2,4 t1,2,4 t0,3,4 t0,1,5 t0,2,5 t0,1,2,3 t0,1,2,4 t0,1,3,4 t0,2,3,4 t1,2,3,4 t0,1,2,5 t0,1,3,5 t0,2,3,5 t0,1,4,5 t0,1,2,3,4 t0,1,2,3,5 t0,1,2,4,5 t0,1,2,3,4,5 Notes 1. Klitzing, (x3o3o3o3x3o - scal) 2. Klitzing, (x3x3o3o3x3o - catal) 3. Klitzing, (x3o3x3o3x3o - cral) 4. Klitzing, (x3x3x3o3x3o - cagral) 5. Klitzing, (x3o3o3x3x3o - copal) 6. Klitzing, (x3x3o3x3x3o - captal) 7. Klitzing, ( x3o3x3x3x3o - copril) 8. Klitzing, (x3x3x3x3x3o - gacal) References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "6D uniform polytopes (polypeta)". 
External links • Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Stericantitruncated 16-cell honeycomb In four-dimensional Euclidean geometry, the stericantitruncated 16-cell honeycomb is a uniform space-filling honeycomb. Stericantitruncated 16-cell honeycomb (No image) TypeUniform 4-honeycomb Schläfli symbolt0,1,2,4{3,3,4,3} s2,3,4{3,4,3,3} Coxeter-Dynkin diagrams 4-face type t0,1,2{3,3,4} t0,1,3{4,3,3} t0,1,2{4,3}x{} t1{4,3}x{} {3}x{6} Cell type Face type Vertex figure Coxeter groups${\tilde {F}}_{4}$, [3,4,3,3] PropertiesVertex transitive Alternate names • Great cellirhombated icositetrachoric tetracomb (gicaricot) • Runcicantic hexadecachoric tetracomb Related honeycombs The [3,4,3,3] Coxeter group generates 31 permutations of uniform tessellations, of which 28 are unique in this family and ten are shared in the [4,3,3,4] and [4,3,31,1] families. The alternation (13) is also repeated in other families. F4 honeycombs Extended symmetry Extended diagram OrderHoneycombs [3,3,4,3]×1 1, 3, 5, 6, 8, 9, 10, 11, 12 [3,4,3,3]×1 2, 4, 7, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22 23, 24, 25, 26, 27, 28, 29 [(3,3)[3,3,4,3*]] =[(3,3)[31,1,1,1]] =[3,4,3,3] = = ×4 (2), (4), (7), (13) See also Regular and uniform honeycombs in 4-space: • Tesseractic honeycomb • 16-cell honeycomb • 24-cell honeycomb • Rectified 24-cell honeycomb • Snub 24-cell honeycomb • 5-cell honeycomb • Truncated 5-cell honeycomb • Omnitruncated 5-cell honeycomb References • Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8 p. 296, Table II: Regular honeycombs • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 
200 (1988) 3-45] • George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 121 (Wrongly named runcinated icositetrachoric honeycomb) • Klitzing, Richard. "4D Euclidean tesselations". x3x3x4o3x - gicaricot - O130 Fundamental convex regular and uniform honeycombs in dimensions 2–9 Space Family ${\tilde {A}}_{n-1}$ ${\tilde {C}}_{n-1}$ ${\tilde {B}}_{n-1}$ ${\tilde {D}}_{n-1}$ ${\tilde {G}}_{2}$ / ${\tilde {F}}_{4}$ / ${\tilde {E}}_{n-1}$ E2 Uniform tiling {3[3]} δ3 hδ3 qδ3 Hexagonal E3 Uniform convex honeycomb {3[4]} δ4 hδ4 qδ4 E4 Uniform 4-honeycomb {3[5]} δ5 hδ5 qδ5 24-cell honeycomb E5 Uniform 5-honeycomb {3[6]} δ6 hδ6 qδ6 E6 Uniform 6-honeycomb {3[7]} δ7 hδ7 qδ7 222 E7 Uniform 7-honeycomb {3[8]} δ8 hδ8 qδ8 133 • 331 E8 Uniform 8-honeycomb {3[9]} δ9 hδ9 qδ9 152 • 251 • 521 E9 Uniform 9-honeycomb {3[10]} δ10 hδ10 qδ10 E10 Uniform 10-honeycomb {3[11]} δ11 hδ11 qδ11 En-1 Uniform (n-1)-honeycomb {3[n]} δn hδn qδn 1k2 • 2k1 • k21
Stericantitruncated tesseractic honeycomb In four-dimensional Euclidean geometry, the stericantitruncated tesseractic honeycomb is a uniform space-filling honeycomb. It is composed of runcitruncated 16-cell, cantitruncated tesseract, rhombicuboctahedral prism, truncated cuboctahedral prism, and 4-8 duoprism facets, arranged around an irregular 5-cell vertex figure. Stericantitruncated tesseractic honeycomb (No image) TypeUniform honeycomb Schläfli symbolt0,1,2,4{4,3,3,4} Coxeter-Dynkin diagrams 4-face type runcitruncated 16-cell cantitruncated tesseract rhombicuboctahedral prism truncated cuboctahedral prism 4-8 duoprism Cell typeTruncated cuboctahedron Rhombicuboctahedron Truncated tetrahedron Octagonal prism Hexagonal prism Cube Triangular prism Face type{3}, {4}, {6}, {8} Vertex figureirr. square pyramid pyramid Coxeter groups${\tilde {C}}_{4}$, [4,3,3,4] PropertiesVertex transitive Related honeycombs The [4,3,3,4] Coxeter group generates 31 permutations of uniform tessellations, 21 with distinct symmetry and 20 with distinct geometry. The expanded tesseractic honeycomb (also known as the stericated tesseractic honeycomb) is geometrically identical to the tesseractic honeycomb. Three of the symmetric honeycombs are shared in the [3,4,3,3] family. Two alternations (13) and (17), and the quarter tesseractic (2) are repeated in other families. C4 honeycombs Extended symmetry Extended diagram Order Honeycombs [4,3,3,4]: ×1 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 [[4,3,3,4]] ×2 (1), (2), (13), 18 (6), 19, 20 [(3,3)[1+,4,3,3,4,1+]] ↔ [(3,3)[31,1,1,1]] ↔ [3,4,3,3] ↔ ↔ ×6 14, 15, 16, 17 See also Regular and uniform honeycombs in 4-space: • Tesseractic honeycomb • 16-cell honeycomb • 24-cell honeycomb • Truncated 24-cell honeycomb • Snub 24-cell honeycomb • 5-cell honeycomb • Truncated 5-cell honeycomb • Omnitruncated 5-cell honeycomb References • Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8 p. 
296, Table II: Regular honeycombs • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) • Klitzing, Richard. "4D Euclidean tesselations". x4x3x3o4x - gicartit - O101 Fundamental convex regular and uniform honeycombs in dimensions 2–9 Space Family ${\tilde {A}}_{n-1}$ ${\tilde {C}}_{n-1}$ ${\tilde {B}}_{n-1}$ ${\tilde {D}}_{n-1}$ ${\tilde {G}}_{2}$ / ${\tilde {F}}_{4}$ / ${\tilde {E}}_{n-1}$ E2 Uniform tiling {3[3]} δ3 hδ3 qδ3 Hexagonal E3 Uniform convex honeycomb {3[4]} δ4 hδ4 qδ4 E4 Uniform 4-honeycomb {3[5]} δ5 hδ5 qδ5 24-cell honeycomb E5 Uniform 5-honeycomb {3[6]} δ6 hδ6 qδ6 E6 Uniform 6-honeycomb {3[7]} δ7 hδ7 qδ7 222 E7 Uniform 7-honeycomb {3[8]} δ8 hδ8 qδ8 133 • 331 E8 Uniform 8-honeycomb {3[9]} δ9 hδ9 qδ9 152 • 251 • 521 E9 Uniform 9-honeycomb {3[10]} δ10 hδ10 qδ10 E10 Uniform 10-honeycomb {3[11]} δ11 hδ11 qδ11 En-1 Uniform (n-1)-honeycomb {3[n]} δn hδn qδn 1k2 • 2k1 • k21
Stericated 5-simplexes In five-dimensional geometry, a stericated 5-simplex is a convex uniform 5-polytope with fourth-order truncations (sterication) of the regular 5-simplex. 5-simplex Stericated 5-simplex Steritruncated 5-simplex Stericantellated 5-simplex Stericantitruncated 5-simplex Steriruncitruncated 5-simplex Steriruncicantitruncated 5-simplex (Omnitruncated 5-simplex) Orthogonal projections in A5 and A4 Coxeter planes There are six unique sterications of the 5-simplex, including permutations of truncations, cantellations, and runcinations. The simplest stericated 5-simplex is also called an expanded 5-simplex, with the first and last nodes ringed, for being constructible by an expansion operation applied to the regular 5-simplex. The highest form, the steriruncicantitruncated 5-simplex is more simply called an omnitruncated 5-simplex with all of the nodes ringed. Stericated 5-simplex Stericated 5-simplex Type Uniform 5-polytope Schläfli symbol 2r2r{3,3,3,3} 2r{32,2} = $2r\left\{{\begin{array}{l}3,3\\3,3\end{array}}\right\}$ Coxeter-Dynkin diagram or 4-faces 62 6+6 {3,3,3} 15+15 {}×{3,3} 20 {3}×{3} Cells 180 60 {3,3} 120 {}×{3} Faces 210 120 {3} 90 {4} Edges 120 Vertices 30 Vertex figure Tetrahedral antiprism Coxeter group A5×2, [[3,3,3,3]], order 1440 Properties convex, isogonal, isotoxal A stericated 5-simplex can be constructed by an expansion operation applied to the regular 5-simplex, and thus is also sometimes called an expanded 5-simplex. It has 30 vertices, 120 edges, 210 faces (120 triangles and 90 squares), 180 cells (60 tetrahedra and 120 triangular prisms) and 62 4-faces (12 5-cells, 30 tetrahedral prisms and 20 3-3 duoprisms). Alternate names • Expanded 5-simplex • Stericated hexateron • Small cellated dodecateron (Acronym: scad) (Jonathan Bowers)[1] Cross-sections The maximal cross-section of the stericated hexateron with a 4-dimensional hyperplane is a runcinated 5-cell. 
This cross-section divides the stericated hexateron into two pentachoral hypercupolas consisting of 6 5-cells, 15 tetrahedral prisms and 10 3-3 duoprisms each. Coordinates The vertices of the stericated 5-simplex can be constructed on a hyperplane in 6-space as permutations of (0,1,1,1,1,2). This represents the positive orthant facet of the stericated 6-orthoplex. A second construction in 6-space, from the center of a rectified 6-orthoplex is given by coordinate permutations of: (1,-1,0,0,0,0) The Cartesian coordinates in 5-space for the normalized vertices of an origin-centered stericated hexateron are: $\left(\pm 1,\ 0,\ 0,\ 0,\ 0\right)$ $\left(0,\ \pm 1,\ 0,\ 0,\ 0\right)$ $\left(0,\ 0,\ \pm 1,\ 0,\ 0\right)$ $\left(\pm 1/2,\ 0,\ \pm 1/2,\ -{\sqrt {1/8}},\ -{\sqrt {3/8}}\right)$ $\left(\pm 1/2,\ 0,\ \pm 1/2,\ {\sqrt {1/8}},\ {\sqrt {3/8}}\right)$ $\left(0,\ \pm 1/2,\ \pm 1/2,\ -{\sqrt {1/8}},\ {\sqrt {3/8}}\right)$ $\left(0,\ \pm 1/2,\ \pm 1/2,\ {\sqrt {1/8}},\ -{\sqrt {3/8}}\right)$ $\left(\pm 1/2,\ \pm 1/2,\ 0,\ \pm {\sqrt {1/2}},\ 0\right)$ Root system Its 30 vertices represent the root vectors of the simple Lie group A5. It is also the vertex figure of the 5-simplex honeycomb. 
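Both constructions above are easy to verify by enumeration. The following Python sketch (illustrative, not part of the article) counts the 30 permutation vertices, confirms they lie on a common hyperplane, and counts the 30 A5 root vectors:

```python
from itertools import permutations

# First construction: permutations of (0,1,1,1,1,2) in 6-space.
verts = set(permutations((0, 1, 1, 1, 1, 2)))
assert len(verts) == 30                 # 6!/4! = 30 vertices
assert all(sum(v) == 6 for v in verts)  # all lie on one hyperplane

# Second construction: the 30 root vectors of A5,
# coordinate permutations of (1,-1,0,0,0,0).
roots = set(permutations((1, -1, 0, 0, 0, 0)))
assert len(roots) == 30
assert all(sum(r) == 0 for r in roots)  # roots lie in the sum-zero hyperplane
```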
Images orthographic projections Ak Coxeter plane A5 A4 Graph Dihedral symmetry [6] [[5]]=[10] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [[3]]=[6] orthogonal projection with [6] symmetry Steritruncated 5-simplex Steritruncated 5-simplex Type Uniform 5-polytope Schläfli symbol t0,1,4{3,3,3,3} Coxeter-Dynkin diagram 4-faces 62 6 t{3,3,3} 15 {}×t{3,3} 20 {3}×{6} 15 {}×{3,3} 6 t0,3{3,3,3} Cells 330 Faces 570 Edges 420 Vertices 120 Vertex figure Coxeter group A5 [3,3,3,3], order 720 Properties convex, isogonal Alternate names • Steritruncated hexateron • Celliprismated hexateron (Acronym: cappix) (Jonathan Bowers)[2] Coordinates The coordinates can be made in 6-space, as 120 permutations of: (0,1,1,1,2,3) This construction exists as one of 64 orthant facets of the steritruncated 6-orthoplex. Images orthographic projections Ak Coxeter plane A5 A4 Graph Dihedral symmetry [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Stericantellated 5-simplex Stericantellated 5-simplex Type Uniform 5-polytope Schläfli symbol t0,2,4{3,3,3,3} Coxeter-Dynkin diagram or 4-faces 62 12 rr{3,3,3} 30 rr{3,3}x{} 20 {3}×{3} Cells 420 60 rr{3,3} 240 {}×{3} 90 {}×{}×{} 30 r{3,3} Faces 900 360 {3} 540 {4} Edges 720 Vertices 180 Vertex figure Coxeter group A5×2, [[3,3,3,3]], order 1440 Properties convex, isogonal Alternate names • Stericantellated hexateron • Cellirhombated dodecateron (Acronym: card) (Jonathan Bowers)[3] Coordinates The coordinates can be made in 6-space, as permutations of: (0,1,1,2,2,3) This construction exists as one of 64 orthant facets of the stericantellated 6-orthoplex. 
Images orthographic projections Ak Coxeter plane A5 A4 Graph Dihedral symmetry [6] [[5]]=[10] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [[3]]=[6] Stericantitruncated 5-simplex Stericantitruncated 5-simplex Type Uniform 5-polytope Schläfli symbol t0,1,2,4{3,3,3,3} Coxeter-Dynkin diagram 4-faces 62 Cells 480 Faces 1140 Edges 1080 Vertices 360 Vertex figure Coxeter group A5 [3,3,3,3], order 720 Properties convex, isogonal Alternate names • Stericantitruncated hexateron • Celligreatorhombated hexateron (Acronym: cograx) (Jonathan Bowers)[4] Coordinates The coordinates can be made in 6-space, as 360 permutations of: (0,1,1,2,3,4) This construction exists as one of 64 orthant facets of the stericantitruncated 6-orthoplex. Images orthographic projections Ak Coxeter plane A5 A4 Graph Dihedral symmetry [6] [5] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [3] Steriruncitruncated 5-simplex Steriruncitruncated 5-simplex Type Uniform 5-polytope Schläfli symbol t0,1,3,4{3,3,3,3} 2t{32,2} Coxeter-Dynkin diagram or 4-faces 62 12 t0,1,3{3,3,3} 30 {}×t{3,3} 20 {6}×{6} Cells 450 Faces 1110 Edges 1080 Vertices 360 Vertex figure Coxeter group A5×2, [[3,3,3,3]], order 1440 Properties convex, isogonal Alternate names • Steriruncitruncated hexateron • Celliprismatotruncated dodecateron (Acronym: captid) (Jonathan Bowers)[5] Coordinates The coordinates can be made in 6-space, as 360 permutations of: (0,1,2,2,3,4) This construction exists as one of 64 orthant facets of the steriruncitruncated 6-orthoplex. 
Images orthographic projections Ak Coxeter plane A5 A4 Graph Dihedral symmetry [6] [[5]]=[10] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [[3]]=[6] Omnitruncated 5-simplex Omnitruncated 5-simplex Type Uniform 5-polytope Schläfli symbol t0,1,2,3,4{3,3,3,3} 2tr{32,2} Coxeter-Dynkin diagram or 4-faces 62 12 t0,1,2,3{3,3,3} 30 {}×tr{3,3} 20 {6}×{6} Cells 540 360 t{3,4} 90 {4,3} 90 {}×{6} Faces 1560 480 {6} 1080 {4} Edges 1800 Vertices 720 Vertex figure Irregular 5-cell Coxeter group A5×2, [[3,3,3,3]], order 1440 Properties convex, isogonal, zonotope The omnitruncated 5-simplex has 720 vertices, 1800 edges, 1560 faces (480 hexagons and 1080 squares), 540 cells (360 truncated octahedra, 90 cubes, and 90 hexagonal prisms), and 62 4-faces (12 omnitruncated 5-cells, 30 truncated octahedral prisms, and 20 6-6 duoprisms). Alternate names • Steriruncicantitruncated 5-simplex (Full description of omnitruncation for 5-polytopes by Johnson) • Omnitruncated hexateron • Great cellated dodecateron (Acronym: gocad) (Jonathan Bowers)[6] Coordinates The vertices of the omnitruncated 5-simplex can be most simply constructed on a hyperplane in 6-space as permutations of (0,1,2,3,4,5). These coordinates come from the positive orthant facet of the steriruncicantitruncated 6-orthoplex, t0,1,2,3,4{34,4}, . Images orthographic projections Ak Coxeter plane A5 A4 Graph Dihedral symmetry [6] [[5]]=[10] Ak Coxeter plane A3 A2 Graph Dihedral symmetry [4] [[3]]=[6] Permutohedron The omnitruncated 5-simplex is the permutohedron of order 6. It is also a zonotope, the Minkowski sum of six line segments parallel to the six lines through the origin and the six vertices of the 5-simplex. Orthogonal projection, vertices labeled as a permutohedron. Related honeycomb The omnitruncated 5-simplex honeycomb is constructed by omnitruncated 5-simplex facets with 3 facets around each ridge. It has Coxeter-Dynkin diagram of . 
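As a permutohedron, the vertex and edge counts of the omnitruncated 5-simplex can be reproduced by direct enumeration: vertices are the 720 orderings of (0,1,2,3,4,5), and two vertices share an edge exactly when they differ by swapping the two coordinates holding consecutive values. A sketch in Python (illustrative, not from the article):

```python
from itertools import permutations

# Vertices of the order-6 permutohedron: all orderings of (0,1,2,3,4,5).
verts = list(permutations(range(6)))
assert len(verts) == 720

# Edges join permutations differing by a swap of two consecutive values,
# so every vertex has degree 5 and there are 720*5/2 = 1800 edges,
# matching the edge count of the omnitruncated 5-simplex.
edges = set()
for v in verts:
    for k in range(5):
        i, j = v.index(k), v.index(k + 1)
        w = list(v)
        w[i], w[j] = w[j], w[i]
        edges.add(frozenset((v, tuple(w))))
assert len(edges) == 1800
```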
Coxeter group ${\tilde {I}}_{1}$ ${\tilde {A}}_{2}$ ${\tilde {A}}_{3}$ ${\tilde {A}}_{4}$ ${\tilde {A}}_{5}$ Coxeter-Dynkin Picture Name Apeirogon Hextille Omnitruncated 3-simplex honeycomb Omnitruncated 4-simplex honeycomb Omnitruncated 5-simplex honeycomb Facets Full snub 5-simplex The full snub 5-simplex or omnisnub 5-simplex, defined as an alternation of the omnitruncated 5-simplex, is not uniform, but it can be given Coxeter diagram and symmetry [[3,3,3,3]]+, and constructed from 12 snub 5-cells, 30 snub tetrahedral antiprisms, 20 3-3 duoantiprisms, and 360 irregular 5-cells filling the gaps at the deleted vertices. Related uniform polytopes These polytopes are a part of 19 uniform 5-polytopes based on the [3,3,3,3] Coxeter group, all shown here in A5 Coxeter plane orthographic projections. (Vertices are colored by projection overlap order, red, orange, yellow, green, cyan, blue, purple having progressively more vertices) A5 polytopes t0 t1 t2 t0,1 t0,2 t1,2 t0,3 t1,3 t0,4 t0,1,2 t0,1,3 t0,2,3 t1,2,3 t0,1,4 t0,2,4 t0,1,2,3 t0,1,2,4 t0,1,3,4 t0,1,2,3,4 Notes 1. Klitzing, (x3o3o3o3x - scad) 2. Klitzing, (x3x3o3o3x - cappix) 3. Klitzing, (x3o3x3o3x - card) 4. Klitzing, (x3x3x3o3x - cograx) 5. Klitzing, (x3x3o3x3x - captid) 6. Klitzing, (x3x3x3x3x - gocad) References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. 
Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "5D uniform polytopes (polytera)". x3o3o3o3x - scad, x3x3o3o3x - cappix, x3o3x3o3x - card, x3x3x3o3x - cograx, x3x3o3x3x - captid, x3x3x3x3x - gocad External links • Glossary for hyperspace, George Olshevsky. • Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Stericated 6-cubes In six-dimensional geometry, a stericated 6-cube is a convex uniform 6-polytope, constructed as a sterication (4th order truncation) of the regular 6-cube. 6-cube Stericated 6-cube Steritruncated 6-cube Stericantellated 6-cube Stericantitruncated 6-cube Steriruncinated 6-cube Steriruncitruncated 6-cube Steriruncicantellated 6-cube Steriruncicantitruncated 6-cube Orthogonal projections in B6 Coxeter plane There are 8 unique sterications for the 6-cube with permutations of truncations, cantellations, and runcinations. Stericated 6-cube Stericated 6-cube Typeuniform 6-polytope Schläfli symbol2r2r{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges5760 Vertices960 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Small cellated hexeract (Acronym: scox) (Jonathan Bowers)[1] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steritruncated 6-cube Steritruncated 6-cube Typeuniform 6-polytope Schläfli symbolt0,1,4{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges19200 Vertices3840 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Cellitruncated hexeract (Acronym: catax) (Jonathan Bowers)[2] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Stericantellated 6-cube Stericantellated 6-cube Typeuniform 6-polytope Schläfli symbol2r2r{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges28800 Vertices5760 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Cellirhombated hexeract (Acronym: crax) (Jonathan Bowers)[3] Images orthographic projections Coxeter plane B6 B5 B4 Graph 
Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Stericantitruncated 6-cube stericantitruncated 6-cube Typeuniform 6-polytope Schläfli symbolt0,1,2,4{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges46080 Vertices11520 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Celligreatorhombated hexeract (Acronym: cagorx) (Jonathan Bowers)[4] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncinated 6-cube steriruncinated 6-cube Typeuniform 6-polytope Schläfli symbolt0,3,4{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges15360 Vertices3840 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Celliprismated hexeract (Acronym: copox) (Jonathan Bowers)[5] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncitruncated 6-cube steriruncitruncated 6-cube Typeuniform 6-polytope Schläfli symbol2t2r{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges40320 Vertices11520 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Celliprismatotruncated hexeract (Acronym: captix) (Jonathan Bowers)[6] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncicantellated 6-cube steriruncicantellated 6-cube Typeuniform 6-polytope Schläfli symbolt0,2,3,4{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges40320 Vertices11520 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Celliprismatorhombated hexeract (Acronym: coprix) (Jonathan 
Bowers)[7] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncicantitruncated 6-cube Steriruncicantitruncated 6-cube Typeuniform 6-polytope Schläfli symboltr2r{4,3,3,3,3} Coxeter-Dynkin diagrams 5-faces 4-faces Cells Faces Edges69120 Vertices23040 Vertex figure Coxeter groupsB6, [4,3,3,3,3] Propertiesconvex Alternate names • Great cellated hexeract (Acronym: gocax) (Jonathan Bowers)[8] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Related polytopes These polytopes are from a set of 63 uniform 6-polytopes generated from the B6 Coxeter group, including the regular 6-cube or 6-orthoplex. B6 polytopes β6 t1β6 t2β6 t2γ6 t1γ6 γ6 t0,1β6 t0,2β6 t1,2β6 t0,3β6 t1,3β6 t2,3γ6 t0,4β6 t1,4γ6 t1,3γ6 t1,2γ6 t0,5γ6 t0,4γ6 t0,3γ6 t0,2γ6 t0,1γ6 t0,1,2β6 t0,1,3β6 t0,2,3β6 t1,2,3β6 t0,1,4β6 t0,2,4β6 t1,2,4β6 t0,3,4β6 t1,2,4γ6 t1,2,3γ6 t0,1,5β6 t0,2,5β6 t0,3,4γ6 t0,2,5γ6 t0,2,4γ6 t0,2,3γ6 t0,1,5γ6 t0,1,4γ6 t0,1,3γ6 t0,1,2γ6 t0,1,2,3β6 t0,1,2,4β6 t0,1,3,4β6 t0,2,3,4β6 t1,2,3,4γ6 t0,1,2,5β6 t0,1,3,5β6 t0,2,3,5γ6 t0,2,3,4γ6 t0,1,4,5γ6 t0,1,3,5γ6 t0,1,3,4γ6 t0,1,2,5γ6 t0,1,2,4γ6 t0,1,2,3γ6 t0,1,2,3,4β6 t0,1,2,3,5β6 t0,1,2,4,5β6 t0,1,2,4,5γ6 t0,1,2,3,5γ6 t0,1,2,3,4γ6 t0,1,2,3,4,5γ6 Notes 1. Klitzing, (x4o3o3o3x3o - scox) 2. Klitzing, (x4x3o3o3x3o - catax) 3. Klitzing, (x4o3x3o3x3o - crax) 4. Klitzing, (x4x3x3o3x3o - cagorx) 5. Klitzing, (x4o3o3x3x3o - copox) 6. Klitzing, (x4x3o3x3x3o - captix) 7. Klitzing, (x4o3x3x3x3o - coprix) 8. Klitzing, (x4x3x3x3x3o - gocax) References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C.
Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "6D uniform polytopes (polypeta)". External links • Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Wikipedia
Pentic 7-cubes In seven-dimensional geometry, a pentic 7-cube is a convex uniform 7-polytope, related to the uniform 7-demicube. There are 8 unique forms. 7-demicube (half 7-cube, h{4,35}) Pentic 7-cube h5{4,35} Penticantic 7-cube h2,5{4,35} Pentiruncic 7-cube h3,5{4,35} Pentiruncicantic 7-cube h2,3,5{4,35} Pentisteric 7-cube h4,5{4,35} Pentistericantic 7-cube h2,4,5{4,35} Pentisteriruncic 7-cube h3,4,5{4,35} Pentisteriruncicantic 7-cube h2,3,4,5{4,35} Orthogonal projections in D7 Coxeter plane Pentic 7-cube Pentic 7-cube Typeuniform 7-polytope Schläfli symbolt0,4{3,34,1} h5{4,35} Coxeter-Dynkin diagram 5-faces 4-faces Cells Faces Edges13440 Vertices1344 Vertex figure Coxeter groupsD7, [34,1,1] Propertiesconvex Cartesian coordinates The Cartesian coordinates for the vertices of a pentic 7-cube centered at the origin are coordinate permutations: (±1,±1,±1,±1,±1,±3,±3) with an odd number of plus signs. Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Related polytopes Dimensional family of pentic n-cubes n678 [1+,4,3n-2] = [3,3n-3,1] [1+,4,34] = [3,33,1] [1+,4,35] = [3,34,1] [1+,4,36] = [3,35,1] Pentic figure Coxeter = = = Schläfli h5{4,34} h5{4,35} h5{4,36} Penticantic 7-cube Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentiruncic 7-cube Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentiruncicantic 7-cube Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral
symmetry [6] [4] Pentisteric 7-cube Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentistericantic 7-cube Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentisteriruncic 7-cube Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentisteriruncicantic 7-cube Images orthographic projections Coxeter plane B7 D7 D6 Graph Dihedral symmetry [14/2] [12] [10] Coxeter plane D5 D4 D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Related polytopes This polytope is based on the 7-demicube, a part of a dimensional family of uniform polytopes called demihypercubes for being alternation of the hypercube family. There are 95 uniform polytopes with D7 symmetry, 63 are shared by the BC7 symmetry, and 32 are unique: D7 polytopes t0(141) t0,1(141) t0,2(141) t0,3(141) t0,4(141) t0,5(141) t0,1,2(141) t0,1,3(141) t0,1,4(141) t0,1,5(141) t0,2,3(141) t0,2,4(141) t0,2,5(141) t0,3,4(141) t0,3,5(141) t0,4,5(141) t0,1,2,3(141) t0,1,2,4(141) t0,1,2,5(141) t0,1,3,4(141) t0,1,3,5(141) t0,1,4,5(141) t0,2,3,4(141) t0,2,3,5(141) t0,2,4,5(141) t0,3,4,5(141) t0,1,2,3,4(141) t0,1,2,3,5(141) t0,1,2,4,5(141) t0,1,3,4,5(141) t0,2,3,4,5(141) t0,1,2,3,4,5(141) Notes References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. 
Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "7D uniform polytopes (polyexa)". External links • Weisstein, Eric W. "Hypercube". MathWorld. • Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
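The vertex count stated above for the pentic 7-cube follows directly from its coordinate rule (permutations of (±1,±1,±1,±1,±1,±3,±3) with an odd number of plus signs) and can be checked by brute-force enumeration. A minimal sketch in plain Python (the helper name is ours, introduced only for illustration):

```python
from itertools import permutations, product

def pentic_7cube_vertices():
    """Permutations of (±1,±1,±1,±1,±1,±3,±3) with an odd number of plus signs."""
    points = set()
    for perm in set(permutations((1, 1, 1, 1, 1, 3, 3))):
        for signs in product((1, -1), repeat=7):
            if signs.count(1) % 2 == 1:          # odd number of '+' signs
                points.add(tuple(s * m for s, m in zip(signs, perm)))
    return points

print(len(pentic_7cube_vertices()))   # 1344, matching the stated vertex count
```

The count also follows combinatorially: 21 placements of the two 3s times the 64 sign patterns (half of 2^7) with an odd number of plus signs gives 21 × 64 = 1344.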
Stericated 7-orthoplexes In seven-dimensional geometry, a stericated 7-orthoplex is a convex uniform 7-polytope with 4th order truncations (sterication) of the regular 7-orthoplex. Orthogonal projections in B6 Coxeter plane 7-orthoplex Stericated 7-orthoplex Steritruncated 7-orthoplex Bisteritruncated 7-orthoplex Stericantellated 7-orthoplex Stericantitruncated 7-orthoplex Bistericantitruncated 7-orthoplex Steriruncinated 7-orthoplex Steriruncitruncated 7-orthoplex Steriruncicantellated 7-orthoplex Bisteriruncitruncated 7-orthoplex Steriruncicantitruncated 7-orthoplex There are 24 unique sterications for the 7-orthoplex with permutations of truncations, cantellations, and runcinations. 14 are more simply constructed from the 7-cube. This polytope is one of 127 uniform 7-polytopes with B7 symmetry. Stericated 7-orthoplex Stericated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Small cellated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[1] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steritruncated 7-orthoplex steritruncated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,1,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Cellitruncated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[2] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Bisteritruncated 7-orthoplex bisteritruncated 7-orthoplex
Typeuniform 7-polytope Schläfli symbolt1,2,5{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Bicellitruncated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[3] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Stericantellated 7-orthoplex Stericantellated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,2,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Cellirhombated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[4] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Stericantitruncated 7-orthoplex stericantitruncated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,1,2,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Celligreatorhombated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[5] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Bistericantitruncated 7-orthoplex bistericantitruncated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt1,2,3,5{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Bicelligreatorhombated hecatonicosoctaexon (acronym: ) 
(Jonathan Bowers)[6] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncinated 7-orthoplex Steriruncinated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,3,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Celliprismated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[7] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph too complex Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncitruncated 7-orthoplex steriruncitruncated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,1,3,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Celliprismatotruncated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[8] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncicantellated 7-orthoplex steriruncicantellated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,2,3,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Celliprismatorhombated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[9] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph 
Dihedral symmetry [6] [4] Steriruncicantitruncated 7-orthoplex steriruncicantitruncated 7-orthoplex Typeuniform 7-polytope Schläfli symbolt0,1,2,3,4{35,4} Coxeter-Dynkin diagrams 6-faces 5-faces 4-faces Cells Faces Edges Vertices Vertex figure Coxeter groupsB7, [4,35] Propertiesconvex Alternate names • Great cellated hecatonicosoctaexon (acronym: ) (Jonathan Bowers)[10] Images orthographic projections Coxeter plane B7 / A6 B6 / D7 B5 / D6 / A4 Graph Dihedral symmetry [14] [12] [10] Coxeter plane B4 / D5 B3 / D4 / A2 B2 / D3 Graph Dihedral symmetry [8] [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Notes 1. Klitzing, (x3o3o3o3x3o4o - ) 2. Klitzing, (x3x3o3o3x3o4o - ) 3. Klitzing, (o3x3x3o3o3x4o - ) 4. Klitzing, (x3o3x3o3x3o4o - ) 5. Klitzing, (x3x3x3o3x3o4o - ) 6. Klitzing, (o3x3x3x3o3x4o - ) 7. Klitzing, (x3o3o3x3x3o4o - ) 8. Klitzing, (x3x3o3x3x3o4o - ) 9. Klitzing, (x3o3x3x3x3o4o - ) 10. Klitzing, (x3x3x3x3x3o4o - ) References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "7D uniform polytopes (polyexa)". 
External links • Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Steric 6-cubes In six-dimensional geometry, a steric 6-cube is a convex uniform 6-polytope. There are 4 unique steric forms of the 6-cube. 6-demicube = Steric 6-cube = Stericantic 6-cube = Steriruncic 6-cube = Steriruncicantic 6-cube = Orthogonal projections in D6 Coxeter plane Steric 6-cube Steric 6-cube Typeuniform 6-polytope Schläfli symbolt0,3{3,33,1} h4{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges3360 Vertices480 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex Alternate names • Runcinated demihexeract/6-demicube • Small prismated hemihexeract (Acronym sophax) (Jonathan Bowers)[1] Cartesian coordinates The Cartesian coordinates for the 480 vertices of a steric 6-cube centered at the origin are coordinate permutations: (±1,±1,±1,±1,±1,±3) with an odd number of plus signs. Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Related polytopes Dimensional family of steric n-cubes n5678 [1+,4,3n-2] = [3,3n-3,1] [1+,4,33] = [3,32,1] [1+,4,34] = [3,33,1] [1+,4,35] = [3,34,1] [1+,4,36] = [3,35,1] Steric figure Coxeter = = = = Schläfli h4{4,33} h4{4,34} h4{4,35} h4{4,36} Stericantic 6-cube Stericantic 6-cube Typeuniform 6-polytope Schläfli symbolt0,1,3{3,33,1} h2,4{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges12960 Vertices2880 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex Alternate names • Runcitruncated demihexeract/6-demicube • Prismatotruncated hemihexeract (Acronym pithax) (Jonathan Bowers)[2] Cartesian coordinates The Cartesian coordinates for the 2880 vertices of a stericantic 6-cube centered at the origin are coordinate permutations: (±1,±1,±1,±3,±3,±5) with an odd number of plus signs. 
Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncic 6-cube Steriruncic 6-cube Typeuniform 6-polytope Schläfli symbolt0,2,3{3,33,1} h3,4{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges7680 Vertices1920 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex Alternate names • Runcicantellated demihexeract/6-demicube • Prismatorhombated hemihexeract (Acronym prohax) (Jonathan Bowers)[3] Cartesian coordinates The Cartesian coordinates for the 1920 vertices of a steriruncic 6-cube centered at the origin are coordinate permutations: (±1,±1,±1,±1,±3,±5) with an odd number of plus signs. Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Steriruncicantic 6-cube Steriruncicantic 6-cube Typeuniform 6-polytope Schläfli symbolt0,1,2,3{3,33,1} h2,3,4{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges17280 Vertices5760 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex Alternate names • Runcicantitruncated demihexeract/6-demicube • Great prismated hemihexeract (Acronym gophax) (Jonathan Bowers)[4] Cartesian coordinates The Cartesian coordinates for the 5760 vertices of a steriruncicantic 6-cube centered at the origin are coordinate permutations: (±1,±1,±1,±3,±5,±7) with an odd number of plus signs. 
Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Related polytopes There are 47 uniform polytopes with D6 symmetry, 31 are shared by the B6 symmetry, and 16 are unique: D6 polytopes h{4,34} h2{4,34} h3{4,34} h4{4,34} h5{4,34} h2,3{4,34} h2,4{4,34} h2,5{4,34} h3,4{4,34} h3,5{4,34} h4,5{4,34} h2,3,4{4,34} h2,3,5{4,34} h2,4,5{4,34} h3,4,5{4,34} h2,3,4,5{4,34} Notes 1. Klitzing, (x3o3o *b3o3x3o - sophax) 2. Klitzing, (x3x3o *b3o3x3o - pithax) 3. Klitzing, (x3o3o *b3x3x3o - prohax) 4. Klitzing, (x3x3o *b3x3x3o - gophax) References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. • Klitzing, Richard. "6D uniform polytopes (polypeta)". x3o3o *b3o3x3o - sophax, x3x3o *b3o3x3o - pithax, x3o3o *b3x3x3o - prohax, x3x3o *b3x3x3o - gophax External links • Weisstein, Eric W. "Hypercube". MathWorld. 
• Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
Steriruncicantic tesseractic honeycomb In four-dimensional Euclidean geometry, the steriruncicantic tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. Steriruncicantic tesseractic honeycomb (No image) TypeUniform honeycomb Schläfli symbolh2,3,4{4,3,3,4} Coxeter-Dynkin diagram = 4-face typet0123{4,3,3} tr{4,3,3} 2t{4,3,3} t{3,3}×{} Cell typetr{4,3} t{3,4} t{3,3} t{4}×{} t{3}×{} {3}×{} Face type{8} {6} {4} Vertex figure Coxeter group${\tilde {B}}_{4}$ = [4,3,31,1] Dual? Propertiesvertex-transitive Alternate names • great prismated demitesseractic tetracomb (giphatit) • great diprismatodemitesseractic tetracomb Related honeycombs The [4,3,31,1], , Coxeter group generates 31 permutations of uniform tessellations, 23 with distinct symmetry and 4 with distinct geometry. There are two alternated forms: the alternations (19) and (24) have the same geometry as the 16-cell honeycomb and snub 24-cell honeycomb respectively. B4 honeycombs Extended symmetry Extended diagram Order Honeycombs [4,3,31,1]: ×1 5, 6, 7, 8 <[4,3,31,1]>: ↔[4,3,3,4] ↔ ×2 9, 10, 11, 12, 13, 14, (10), 15, 16, (13), 17, 18, 19 [3[1+,4,3,31,1]] ↔ [3[3,31,1,1]] ↔ [3,3,4,3] ↔ ↔ ×3 1, 2, 3, 4 [(3,3)[1+,4,3,31,1]] ↔ [(3,3)[31,1,1,1]] ↔ [3,4,3,3] ↔ ↔ ×12 20, 21, 22, 23 See also Regular and uniform honeycombs in 4-space: • Tesseractic honeycomb • 16-cell honeycomb • 24-cell honeycomb • Rectified 24-cell honeycomb • Truncated 24-cell honeycomb • Snub 24-cell honeycomb • 5-cell honeycomb • Truncated 5-cell honeycomb • Omnitruncated 5-cell honeycomb Notes References • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 
200 (1988) 3-45] • George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) • Klitzing, Richard. "4D Euclidean tesselations". x3x3o *b3x4x - giphatit - O111 Fundamental convex regular and uniform honeycombs in dimensions 2–9 Space Family ${\tilde {A}}_{n-1}$ ${\tilde {C}}_{n-1}$ ${\tilde {B}}_{n-1}$ ${\tilde {D}}_{n-1}$ ${\tilde {G}}_{2}$ / ${\tilde {F}}_{4}$ / ${\tilde {E}}_{n-1}$ E2 Uniform tiling {3[3]} δ3 hδ3 qδ3 Hexagonal E3 Uniform convex honeycomb {3[4]} δ4 hδ4 qδ4 E4 Uniform 4-honeycomb {3[5]} δ5 hδ5 qδ5 24-cell honeycomb E5 Uniform 5-honeycomb {3[6]} δ6 hδ6 qδ6 E6 Uniform 6-honeycomb {3[7]} δ7 hδ7 qδ7 222 E7 Uniform 7-honeycomb {3[8]} δ8 hδ8 qδ8 133 • 331 E8 Uniform 8-honeycomb {3[9]} δ9 hδ9 qδ9 152 • 251 • 521 E9 Uniform 9-honeycomb {3[10]} δ10 hδ10 qδ10 E10 Uniform 10-honeycomb {3[11]} δ11 hδ11 qδ11 En-1 Uniform (n-1)-honeycomb {3[n]} δn hδn qδn 1k2 • 2k1 • k21
Pentic 6-cubes In six-dimensional geometry, a pentic 6-cube is a convex uniform 6-polytope. 6-demicube (half 6-cube) = Pentic 6-cube = Penticantic 6-cube = Pentiruncic 6-cube = Pentiruncicantic 6-cube = Pentisteric 6-cube = Pentistericantic 6-cube = Pentisteriruncic 6-cube = Pentisteriruncicantic 6-cube = Orthogonal projections in D6 Coxeter plane There are 8 pentic forms of the 6-cube. Pentic 6-cube Pentic 6-cube Typeuniform 6-polytope Schläfli symbolt0,4{3,33,1} h5{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges1440 Vertices192 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex The pentic 6-cube, , has half of the vertices of a pentellated 6-cube, . Alternate names • Stericated 6-demicube/demihexeract • Small cellated hemihexeract (Acronym: sochax) (Jonathan Bowers)[1] Cartesian coordinates The Cartesian coordinates for the vertices, centered at the origin are coordinate permutations: (±1,±1,±1,±1,±1,±3) with an odd number of plus signs. Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Penticantic 6-cube Penticantic 6-cube Typeuniform 6-polytope Schläfli symbolt0,1,4{3,33,1} h2,5{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges9600 Vertices1920 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex The penticantic 6-cube, , has half of the vertices of a penticantellated 6-cube, . Alternate names • Steritruncated 6-demicube/demihexeract • cellitruncated hemihexeract (Acronym: cathix) (Jonathan Bowers)[2] Cartesian coordinates The Cartesian coordinates for the vertices, centered at the origin are coordinate permutations: (±1,±1,±3,±3,±3,±5) with an odd number of plus signs. 
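The two coordinate constructions above can be checked by brute-force enumeration. The sketch below (plain Python; the helper name is ours, introduced only for illustration) generates the point sets from the stated permutation-and-sign rules and counts them; the totals match the vertex counts given in the infoboxes (192 and 1920).

```python
from itertools import permutations, product

def demicube_vertices(magnitudes):
    """All coordinate permutations of `magnitudes`, under every sign
    pattern having an odd number of plus signs (demicube convention)."""
    points = set()
    for perm in set(permutations(magnitudes)):
        for signs in product((1, -1), repeat=len(magnitudes)):
            if signs.count(1) % 2 == 1:          # odd number of '+' signs
                points.add(tuple(s * m for s, m in zip(signs, perm)))
    return points

print(len(demicube_vertices((1, 1, 1, 1, 1, 3))))   # pentic 6-cube: 192
print(len(demicube_vertices((1, 1, 3, 3, 3, 5))))   # penticantic 6-cube: 1920
```

Since all magnitudes are nonzero, every (permutation, sign-pattern) pair yields a distinct point, so the counts factor as (distinct arrangements) × 32, the number of odd-plus sign patterns on 6 coordinates.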
Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentiruncic 6-cube Pentiruncic 6-cube Typeuniform 6-polytope Schläfli symbolt0,2,4{3,33,1} h3,5{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges10560 Vertices1920 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex The pentiruncic 6-cube, , has half of the vertices of a pentiruncinated 6-cube (penticantellated 6-orthoplex), . Alternate names • Stericantellated 6-demicube/demihexeract • cellirhombated hemihexeract (Acronym: crohax) (Jonathan Bowers)[3] Cartesian coordinates The Cartesian coordinates for the vertices, centered at the origin are coordinate permutations: (±1,±1,±1,±3,±3,±5) with an odd number of plus signs. Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentiruncicantic 6-cube Pentiruncicantic 6-cube Typeuniform 6-polytope Schläfli symbolt0,1,2,4{3,33,1} h2,3,5{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges20160 Vertices5760 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex The pentiruncicantic 6-cube, , has half of the vertices of a pentiruncicantellated 6-cube (pentiruncicantellated 6-orthoplex), . Alternate names • Stericantitruncated demihexeract, stericantitruncated 6-demicube • Celligreatorhombated hemihexeract (Acronym: cagrohax) (Jonathan Bowers)[4] Cartesian coordinates The Cartesian coordinates for the vertices, centered at the origin are coordinate permutations: (±1,±1,±3,±3,±5,±7) with an odd number of plus signs. 
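For constructions of this kind the vertex count has a closed form: the number of distinct arrangements of the coordinate magnitudes, times the 2^(n-1) sign patterns with an odd number of plus signs (valid because all magnitudes are nonzero). A short sketch in plain Python, applied to the two coordinate tuples stated above (the helper name is ours):

```python
from math import factorial
from collections import Counter

def vertex_count(magnitudes):
    """Distinct arrangements of the (nonzero) magnitudes, times the
    2**(n-1) sign patterns having an odd number of plus signs."""
    n = len(magnitudes)
    arrangements = factorial(n)
    for multiplicity in Counter(magnitudes).values():
        arrangements //= factorial(multiplicity)
    return arrangements * 2 ** (n - 1)

print(vertex_count((1, 1, 1, 3, 3, 5)))   # pentiruncic 6-cube: 1920
print(vertex_count((1, 1, 3, 3, 5, 7)))   # pentiruncicantic 6-cube: 5760
```

Both results agree with the vertex counts given in the infoboxes (1920 and 5760).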
Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentisteric 6-cube Pentisteric 6-cube Typeuniform 6-polytope Schläfli symbolt0,3,4{3,33,1} h4,5{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges5280 Vertices960 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex The pentisteric 6-cube, , has half of the vertices of a pentistericated 6-cube (pentitruncated 6-orthoplex), Alternate names • Steriruncinated 6-demicube/demihexeract • Small celliprismated hemihexeract (Acronym: cophix) (Jonathan Bowers)[5] Cartesian coordinates The Cartesian coordinates for the vertices, centered at the origin are coordinate permutations: (±1,±1,±1,±1,±3,±5) with an odd number of plus signs. Images orthographic projections Coxeter plane B6 Graph Dihedral symmetry [12/2] Coxeter plane D6 D5 Graph Dihedral symmetry [10] [8] Coxeter plane D4 D3 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Pentistericantic 6-cube Pentistericantic 6-cube Typeuniform 6-polytope Schläfli symbolt0,1,3,4{3,33,1} h2,4,5{4,34} Coxeter-Dynkin diagram = 5-faces 4-faces Cells Faces Edges23040 Vertices5760 Vertex figure Coxeter groupsD6, [33,1,1] Propertiesconvex The pentistericantic 6-cube, , has half of the vertices of a pentistericantellated 6-cube (pentiruncitruncated 6-orthoplex), . Alternate names • Steriruncitruncated demihexeract/6-demicube • Celliprismatotruncated hemihexeract (Acronym: capthix) (Jonathan Bowers)[6] Cartesian coordinates The Cartesian coordinates for the vertices, centered at the origin are coordinate permutations: (±1,±1,±3,±3,±5,±7) with an odd number of plus signs. 
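The same enumeration check works for the two coordinate rules above, and it also confirms that every generated vertex lies at the same distance from the origin, as expected for a vertex-transitive polytope. A minimal sketch in plain Python (helper names are ours):

```python
from itertools import permutations, product

def vertices(magnitudes):
    """Signed permutations of `magnitudes` with an odd number of plus signs."""
    points = set()
    for perm in set(permutations(magnitudes)):
        for signs in product((1, -1), repeat=len(magnitudes)):
            if signs.count(1) % 2 == 1:
                points.add(tuple(s * m for s, m in zip(signs, perm)))
    return points

pentisteric = vertices((1, 1, 1, 1, 3, 5))        # stated count: 960
pentistericantic = vertices((1, 1, 3, 3, 5, 7))   # stated count: 5760
print(len(pentisteric), len(pentistericantic))    # 960 5760

# All vertices share one squared circumradius (signs cannot change it).
assert len({sum(c * c for c in p) for p in pentisteric}) == 1
```

Both totals match the infobox vertex counts, since permuting magnitudes and flipping signs never changes the multiset of absolute coordinate values.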
Orthographic projections exist in the B6 [12/2], D6 [10], D5 [8], D4 [6], D3 [4], A5 [6], and A3 [4] Coxeter planes.

Pentisteriruncic 6-cube

Type: uniform 6-polytope. Schläfli symbol: t0,2,3,4{3,34,1} = h3,4,5{4,34}. Edges: 15360. Vertices: 3840. Coxeter group: D6, [33,1,1]. Properties: convex.

The pentisteriruncic 6-cube has half of the vertices of a pentisteriruncinated 6-cube (penticantitruncated 6-orthoplex).

Alternate names
• Steriruncicantellated 6-demicube/demihexeract
• Celliprismatorhombated hemihexeract (acronym: caprohax) (Jonathan Bowers)[7]

Cartesian coordinates
The Cartesian coordinates for the vertices, centered at the origin, are the coordinate permutations of (±1,±1,±1,±3,±5,±7) with an odd number of plus signs.

Orthographic projections exist in the B6 [12/2], D6 [10], D5 [8], D4 [6], D3 [4], A5 [6], and A3 [4] Coxeter planes.

Pentisteriruncicantic 6-cube

Type: uniform 6-polytope. Schläfli symbol: t0,1,2,3,4{3,32,1} = h2,3,4,5{4,34}. Edges: 34560. Vertices: 11520. Coxeter group: D6, [33,1,1]. Properties: convex.

The pentisteriruncicantic 6-cube has half of the vertices of a pentisteriruncicantellated 6-cube (pentisteriruncicantitruncated 6-orthoplex).

Alternate names
• Steriruncicantitruncated 6-demicube/demihexeract
• Great cellated hemihexeract (acronym: gochax) (Jonathan Bowers)[8]

Cartesian coordinates
The Cartesian coordinates for the vertices, centered at the origin, are the coordinate permutations of (±1,±1,±3,±3,±5,±7) with an odd number of plus signs.
Orthographic projections exist in the B6 [12/2], D6 [10], D5 [8], D4 [6], D3 [4], A5 [6], and A3 [4] Coxeter planes.

Related polytopes

There are 47 uniform polytopes with D6 symmetry; 31 are shared with the B6 symmetry, and 16 are unique:

D6 polytopes: h{4,34}, h2{4,34}, h3{4,34}, h4{4,34}, h5{4,34}, h2,3{4,34}, h2,4{4,34}, h2,5{4,34}, h3,4{4,34}, h3,5{4,34}, h4,5{4,34}, h2,3,4{4,34}, h2,3,5{4,34}, h2,4,5{4,34}, h3,4,5{4,34}, h2,3,4,5{4,34}

Notes
1. Klitzing, (x3o3o *b3o3x3o3o - sochax)
2. Klitzing, (x3x3o *b3o3x3o3o - cathix)
3. Klitzing, (x3o3o *b3x3x3o3o - crohax)
4. Klitzing, (x3x3o *b3x3x3o3o - cagrohax)
5. Klitzing, (x3o3o *b3o3x3x3x - cophix)
6. Klitzing, (x3x3o *b3o3x3x3x - capthix)
7. Klitzing, (x3o3o *b3x3x3x3x - caprohax)
8. Klitzing, (x3x3o *b3x3x3x3o - gochax)

References
• H.S.M. Coxeter:
  • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
  • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
  • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10]
  • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
  • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D.
• Klitzing, Richard. "6D uniform polytopes (polypeta)". x3o3o *b3o3x3o3o - sochax, x3x3o *b3o3x3o3o - cathix, x3o3o *b3x3x3o3o - crohax, x3x3o *b3x3x3o3o - cagrohax, x3o3o *b3o3x3x3x - cophix, x3x3o *b3o3x3x3x - capthix, x3o3o *b3x3x3x3x - caprohax, x3x3o *b3x3x3x3o - gochax

External links
• Weisstein, Eric W. "Hypercube". MathWorld.
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
Stericated 7-cubes

In seven-dimensional geometry, a stericated 7-cube is a convex uniform 7-polytope with 4th-order truncations (sterication) of the regular 7-cube. There are 24 unique sterications of the 7-cube with permutations of truncations, cantellations, and runcinations; 10 are more simply constructed from the 7-orthoplex. This polytope is one of 127 uniform 7-polytopes with B7 symmetry.

Each member of the family is a convex uniform 7-polytope with Coxeter group B7, [4,35]. Orthographic projections exist in the B7/A6 [14], B6/D7 [12], B5/D6/A4 [10], B4/D5 [8], B3/D4/A2 [6], B2/D3 [4], A5 [6], and A3 [4] Coxeter planes.

• Stericated 7-cube, t0,4{4,35}. Alternate name: small cellated hepteract (Jonathan Bowers)[1]
• Bistericated 7-cube, t1,5{4,35}. Alternate name: small bicellated hepteractihecatonicosoctaexon (Jonathan Bowers)[2]
• Steritruncated 7-cube, t0,1,4{4,35}. Alternate name: cellitruncated hepteract (Jonathan Bowers)[3]
• Bisteritruncated 7-cube, t1,2,5{4,35}. Alternate name: bicellitruncated hepteract (Jonathan Bowers)[4]
• Stericantellated 7-cube, t0,2,4{4,35}. Alternate name: cellirhombated hepteract (Jonathan Bowers)[5]
• Bistericantellated 7-cube, t1,3,5{4,35}. Alternate name: bicellirhombihepteract (Jonathan Bowers)[6]
• Stericantitruncated 7-cube, t0,1,2,4{4,35}. Alternate name: celligreatorhombated hepteract (Jonathan Bowers)[7]
• Bistericantitruncated 7-cube, t1,2,3,5{4,35}. Alternate name: bicelligreatorhombated hepteract (Jonathan Bowers)[8]
• Steriruncinated 7-cube, t0,3,4{4,35}. Alternate name: celliprismated hepteract (Jonathan Bowers)[9]
• Steriruncitruncated 7-cube, t0,1,3,4{4,35}. Alternate name: celliprismatotruncated hepteract (Jonathan Bowers)[10]
• Steriruncicantellated 7-cube, t0,2,3,4{4,35}. Alternate name: celliprismatorhombated hepteract (Jonathan Bowers)[11]
• Bisteriruncitruncated 7-cube, t1,2,4,5{4,35}. Alternate name: bicelliprismatotruncated hepteractihecatonicosoctaexon (Jonathan Bowers)[12]
• Steriruncicantitruncated 7-cube, t0,1,2,3,4{4,35}. Alternate name: great cellated hepteract (Jonathan Bowers)[13]
• Bisteriruncicantitruncated 7-cube, t1,2,3,4,5{4,35}. Alternate name: great bicellated hepteractihecatonicosoctaexon (Jonathan Bowers)[14]

Notes
1. Klitzing, (x3o3o3o3x3o4o - )
2. Klitzing, (x3o3x3o3x3o4o - )
3. Klitzing, (x3x3o3o3x3o4o - )
4. Klitzing, (o3x3x3o3o3x4o - )
5. Klitzing, (x3o3x3o3x3o4o - )
6. Klitzing, (o3x3o3x3o3x4o - )
7. Klitzing, (x3x3x3o3x3o4o - )
8. Klitzing, (o3x3x3x3o3x4o - )
9. Klitzing, (x3o3o3x3x3o4o - )
10. Klitzing, (x3x3x3o3x3o4o - )
11. Klitzing, (x3o3x3x3x3o4o - )
12. Klitzing, (o3x3x3o3x3x4o - )
13. Klitzing, (x3x3x3x3x3o4o - )
14. Klitzing, (o3x3x3x3x3x4o - )

References
• H.S.M. Coxeter:
  • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
  • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
  • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
  • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
  • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Norman Johnson Uniform Polytopes, Manuscript (1991)
• N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D.
• Klitzing, Richard.
"7D uniform polytopes (polyexa)". x3o3o3o3x3o4o - , x3o3x3o3x3o4o - , x3x3o3o3x3o4o - , o3x3x3o3o3x4o - , x3o3x3o3x3o4o - , o3x3o3x3o3x4o - , x3x3x3o3x3o4o - , o3x3x3x3o3x4o - , x3o3o3x3x3o4o - , x3x3x3o3x3o4o - , x3o3x3x3x3o4o - , o3x3x3o3x3x4o - , x3x3x3x3x3o4o - , o3x3x3x3x3x4o -

External links
• Polytopes of Various Dimensions
• Multi-dimensional Glossary
Steritruncated 16-cell honeycomb

In four-dimensional Euclidean geometry, the steritruncated 16-cell honeycomb is a uniform space-filling honeycomb, with runcinated 24-cell, truncated 16-cell, octahedral prism, 3-6 duoprism, and truncated tetrahedral prism cells.

Type: uniform 4-honeycomb. Schläfli symbol: t0,1,4{3,3,4,3}. 4-face types: t0,3{3,4,3}, t{3,3,4}, {3,4}x{}, {3}x{6}, t{3,3}x{}. Face types: {3}, {4}, {6}. Coxeter group: ${\tilde {F}}_{4}$, [3,4,3,3]. Properties: vertex-transitive.

Alternate names
• Celliprismated icositetrachoric tetracomb (capicot)
• Great prismatotetracontaoctachoric tetracomb

Related honeycombs
The [3,4,3,3] Coxeter group generates 31 permutations of uniform tessellations; 28 are unique in this family and ten are shared in the [4,3,3,4] and [4,3,31,1] families. The alternation (13) is also repeated in other families.

F4 honeycombs, by extended symmetry, order, and honeycombs:
• [3,3,4,3], ×1: 1, 3, 5, 6, 8, 9, 10, 11, 12
• [3,4,3,3], ×1: 2, 4, 7, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29
• [(3,3)[3,3,4,3*]] = [(3,3)[31,1,1,1]] = [3,4,3,3], ×4: (2), (4), (7), (13)

See also
Regular and uniform honeycombs in 4-space:
• Tesseractic honeycomb
• 16-cell honeycomb
• 24-cell honeycomb
• Rectified 24-cell honeycomb
• Snub 24-cell honeycomb
• 5-cell honeycomb
• Truncated 5-cell honeycomb
• Omnitruncated 5-cell honeycomb

References
• Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8, p. 296, Table II: Regular honeycombs
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit.
200 (1988) 3-45]
• George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 121 (Wrongly named runcinated icositetrachoric honeycomb)
• Klitzing, Richard. "4D Euclidean tesselations". x3x3o4o3x - capicot - O127
Stirling number In mathematics, Stirling numbers arise in a variety of analytic and combinatorial problems. They are named after James Stirling, who introduced them in a purely algebraic setting in his book Methodus differentialis (1730).[1] They were rediscovered and given a combinatorial meaning by Masanobu Saka in 1782.[2] Two different sets of numbers bear this name: the Stirling numbers of the first kind and the Stirling numbers of the second kind. Additionally, Lah numbers are sometimes referred to as Stirling numbers of the third kind. Each kind is detailed in its respective article, this one serving as a description of relations between them. A common property of all three kinds is that they describe coefficients relating three different sequences of polynomials that frequently arise in combinatorics. Moreover, all three can be defined as the number of partitions of n elements into k non-empty subsets, where each subset is endowed with a certain kind of order (no order, cyclical, or linear). Notation Main articles: Stirling numbers of the first kind and Stirling numbers of the second kind Several different notations for Stirling numbers are in use. Ordinary (signed) Stirling numbers of the first kind are commonly denoted: $s(n,k)\,.$ Unsigned Stirling numbers of the first kind, which count the number of permutations of n elements with k disjoint cycles, are denoted: ${\biggl [}{n \atop k}{\biggr ]}=c(n,k)=|s(n,k)|=(-1)^{n-k}s(n,k)\,$ Stirling numbers of the second kind, which count the number of ways to partition a set of n elements into k nonempty subsets:[3] ${\biggl \{}{\!n\! \atop \!k\!}{\biggr \}}=S(n,k)=S_{n}^{(k)}\,$ Abramowitz and Stegun use an uppercase $S$ and a blackletter ${\mathfrak {S}}$, respectively, for the first and second kinds of Stirling number. The notation of brackets and braces, in analogy to binomial coefficients, was introduced in 1935 by Jovan Karamata and promoted later by Donald Knuth. 
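The two counting definitions just given, permutations of n elements by number of disjoint cycles for the unsigned first kind, and partitions of an n-set into k nonempty subsets for the second kind, can be checked by brute force for small n. A Python sketch (the function names are mine, not from the article):

```python
from itertools import permutations, product
from math import factorial

def cycles(perm):
    """Number of disjoint cycles of a permutation given as a tuple p, p[i] = image of i."""
    seen, count = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

def stirling1_unsigned(n, k):
    # [n k]: permutations of n elements with exactly k cycles
    return sum(1 for p in permutations(range(n)) if cycles(p) == k)

def stirling2(n, k):
    # {n k}: partitions of an n-set into k nonempty blocks, counted as
    # surjections onto k labels divided by k! (which removes the labeling)
    surjections = sum(1 for f in product(range(k), repeat=n) if len(set(f)) == k)
    return surjections // factorial(k)

assert stirling1_unsigned(4, 2) == 11
assert stirling2(4, 2) == 7
assert sum(stirling1_unsigned(5, k) for k in range(6)) == factorial(5)
```

The last assertion reflects the fact that every permutation has some number of cycles, so the unsigned first-kind numbers in a row sum to n!.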
(The bracket notation conflicts with a common notation for Gaussian coefficients.[4]) The mathematical motivation for this type of notation, as well as additional Stirling number formulae, may be found on the page for Stirling numbers and exponential generating functions. Another infrequent notation is $s_{1}(n,k)$ and $s_{2}(n,k)$. Expansions of falling and rising factorials Stirling numbers express coefficients in expansions of falling and rising factorials (also known as the Pochhammer symbol) as polynomials. That is, the falling factorial, defined as $(x)_{n}=x(x-1)\cdots (x-n+1)$, is a polynomial in x of degree n whose expansion is $(x)_{n}=\sum _{k=0}^{n}s(n,k)x^{k}$ with (signed) Stirling numbers of the first kind as coefficients. Note that (x)0 = 1 because it is an empty product. The notations $x^{\underline {n}}$ for the falling factorial and $x^{\overline {n}}$ for the rising factorial are also often used.[5] (Confusingly, the Pochhammer symbol that many use for falling factorials is used in special functions for rising factorials.) Similarly, the rising factorial, defined as $x^{(n)}=x(x+1)\cdots (x+n-1)$, is a polynomial in x of degree n whose expansion is $x^{(n)}=\sum _{k=0}^{n}{\biggl [}{n \atop k}{\biggr ]}x^{k}=\sum _{k=0}^{n}(-1)^{n-k}s(n,k)x^{k}$ with unsigned Stirling numbers of the first kind as coefficients. One of these expansions can be derived from the other by observing that $x^{(n)}=(-1)^{n}(-x)_{n}$. Stirling numbers of the second kind express the reverse relations: $x^{n}=\sum _{k=0}^{n}S(n,k)(x)_{k}$ and $x^{n}=\sum _{k=0}^{n}(-1)^{n-k}S(n,k)x^{(k)}.$ As change of basis coefficients Considering the set of polynomials in the (indeterminate) variable x as a vector space, each of the three sequences $x^{0},x^{1},x^{2},x^{3},\dots \quad (x)_{0},(x)_{1},(x)_{2},\dots \quad x^{(0)},x^{(1)},x^{(2)},\dots $ is a basis. 
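The expansions above can be verified numerically by generating s(n, k) and S(n, k) from their standard recurrences and evaluating both sides at integer points; a sketch under those definitions:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s1(n, k):
    """Signed Stirling numbers of the first kind,
    via s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

@lru_cache(maxsize=None)
def s2(n, k):
    """Stirling numbers of the second kind,
    via S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 1 or k > n:
        return 0
    return k * s2(n - 1, k) + s2(n - 1, k - 1)

def falling(x, n):
    """Falling factorial (x)_n = x (x-1) ... (x-n+1)."""
    result = 1
    for i in range(n):
        result *= x - i
    return result

# (x)_n = sum_k s(n,k) x^k  and  x^n = sum_k S(n,k) (x)_k
for x in range(-3, 6):
    for n in range(7):
        assert falling(x, n) == sum(s1(n, k) * x**k for k in range(n + 1))
        assert x**n == sum(s2(n, k) * falling(x, k) for k in range(n + 1))
```

Since both sides are polynomials of degree n, agreement on more than n points already forces the identities to hold as polynomials.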
That is, every polynomial in x can be written as a sum $a_{0}x^{(0)}+a_{1}x^{(1)}+\dots +a_{n}x^{(n)}$ for some unique coefficients $a_{i}$ (similarly for the other two bases). The above relations then express the change of basis between them, as summarized in the following commutative diagram: The coefficients for the two bottom changes are described by the Lah numbers below. Since coefficients in any basis are unique, one can define Stirling numbers this way, as the coefficients expressing polynomials of one basis in terms of another, that is, the unique numbers relating $x^{n}$ with falling and rising factorials as above. Falling factorials define, up to scaling, the same polynomials as binomial coefficients: $ {\binom {x}{k}}=(x)_{k}/k!$. The changes between the standard basis $\textstyle x^{0},x^{1},x^{2},\dots $ and the basis $ {\binom {x}{0}},{\binom {x}{1}},{\binom {x}{2}},\dots $ are thus described by similar formulas: $x^{n}=\sum _{k=0}^{n}{\biggl \{}{\!n\! \atop \!k\!}{\biggr \}}k!{\binom {x}{k}}\quad {\text{and}}\quad {\binom {x}{n}}=\sum _{k=0}^{n}{\frac {s(n,k)}{n!}}x^{k}$. Example Expressing a polynomial in the basis of falling factorials is useful for calculating sums of the polynomial evaluated at consecutive integers. Indeed, the sum of falling factorials with fixed k can be expressed as another falling factorial (for $k\neq -1$) $\sum _{0\leq i<n}(i)_{k}={\frac {(n)_{k+1}}{k+1}}$ This can be proved by induction. For example, the sum of fourth powers of integers up to n (this time with n included), is: ${\begin{aligned}\sum _{i=0}^{n}i^{4}&=\sum _{i=0}^{n}\sum _{k=0}^{4}{\biggl \{}{\!4\! \atop \!k\!}{\biggr \}}(i)_{k}=\sum _{k=0}^{4}{\biggl \{}{\!4\! \atop \!k\!}{\biggr \}}\sum _{i=0}^{n}(i)_{k}=\sum _{k=0}^{4}{\biggl \{}{\!4\! \atop \!k\!}{\biggr \}}{\frac {(n{+}1)_{k+1}}{k{+}1}}\\[10mu]&={\biggl \{}{\!4\! \atop \!1\!}{\biggr \}}{\frac {(n{+}1)_{2}}{2}}+{\biggl \{}{\!4\! \atop \!2\!}{\biggr \}}{\frac {(n{+}1)_{3}}{3}}+{\biggl \{}{\!4\!
\atop \!3\!}{\biggr \}}{\frac {(n{+}1)_{4}}{4}}+{\biggl \{}{\!4\! \atop \!4\!}{\biggr \}}{\frac {(n{+}1)_{5}}{5}}\\[8mu]&={\frac {1}{2}}(n{+}1)_{2}+{\frac {7}{3}}(n{+}1)_{3}+{\frac {6}{4}}(n{+}1)_{4}+{\frac {1}{5}}(n{+}1)_{5}\,.\end{aligned}}$ Here the Stirling numbers can be computed from their definition as the number of partitions of 4 elements into k non-empty unlabeled subsets. In contrast, the sum $ \sum _{i=0}^{n}i^{k}$ in the standard basis is given by Faulhaber's formula, which in general is more complicated. As inverse matrices The Stirling numbers of the first and second kinds can be considered inverses of one another: $\sum _{j=k}^{n}s(n,j)S(j,k)=\sum _{j=k}^{n}(-1)^{n-j}{\biggl [}{n \atop j}{\biggr ]}{\biggl \{}{\!j\! \atop \!k\!}{\biggr \}}=\delta _{n,k}$ and $\sum _{j=k}^{n}S(n,j)s(j,k)=\sum _{j=k}^{n}(-1)^{j-k}{\biggl \{}{\!n\! \atop \!j\!}{\biggr \}}{\biggl [}{j \atop k}{\biggr ]}=\delta _{n,k},$ where $\delta _{nk}$ is the Kronecker delta. These two relationships may be understood to be matrix inverse relationships. That is, let s be the lower triangular matrix of Stirling numbers of the first kind, whose matrix elements $s_{nk}=s(n,k).\,$ The inverse of this matrix is S, the lower triangular matrix of Stirling numbers of the second kind, whose entries are $S_{nk}=S(n,k).$ Symbolically, this is written $s^{-1}=S\,$ Although s and S are infinite, so calculating a product entry involves an infinite sum, the matrix multiplications work because these matrices are lower triangular, so only a finite number of terms in the sum are nonzero. Lah numbers Main article: Lah numbers The Lah numbers $L(n,k)={n-1 \choose k-1}{\frac {n!}{k!}}$ are sometimes called Stirling numbers of the third kind.[6] By convention, $L(0,0)=1$ and $L(n,k)=0$ if $n<k$ or $k=0<n$.
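The matrix-inverse relationship between the two kinds described above is easy to check on finite truncations, since lower-triangularity makes every product entry a finite sum. A minimal Python sketch:

```python
# Truncate the infinite triangular matrices to an N x N block and verify
# that the signed first-kind and second-kind matrices are mutual inverses.
N = 8
s = [[0] * N for _ in range(N)]  # s[n][k] = signed Stirling number, first kind
S = [[0] * N for _ in range(N)]  # S[n][k] = Stirling number, second kind
s[0][0] = S[0][0] = 1
for n in range(1, N):
    for k in range(n + 1):
        s[n][k] = (s[n - 1][k - 1] if k > 0 else 0) - (n - 1) * s[n - 1][k]
        S[n][k] = (S[n - 1][k - 1] if k > 0 else 0) + k * S[n - 1][k]

# Lower-triangularity keeps every product entry a finite sum.
prod = [[sum(s[n][j] * S[j][k] for j in range(N)) for k in range(N)]
        for n in range(N)]
assert all(prod[n][k] == (1 if n == k else 0)
           for n in range(N) for k in range(N))
```

The truncated product is exactly the identity, not an approximation, because the entry at (n, k) only involves indices j with k ≤ j ≤ n.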
These numbers are coefficients expressing falling factorials in terms of rising factorials and vice versa: $x^{(n)}=\sum _{k=0}^{n}L(n,k)(x)_{k}\quad $ and $\quad (x)_{n}=\sum _{k=0}^{n}(-1)^{n-k}L(n,k)x^{(k)}.$ As above, this means they express the change of basis between the bases $(x)_{0},(x)_{1},(x)_{2},\cdots $ and $x^{(0)},x^{(1)},x^{(2)},\cdots $, completing the diagram. In particular, one formula is the inverse of the other, thus: $\sum _{j=k}^{n}(-1)^{j-k}L(n,j)L(j,k)=\delta _{n,k}.$ Similarly, composing the change of basis from $x^{(n)}$ to $x^{n}$ with the change of basis from $x^{n}$ to $(x)_{n}$ gives the change of basis directly from $x^{(n)}$ to $(x)_{n}$: $L(n,k)=\sum _{j=k}^{n}{\biggl [}{n \atop j}{\biggr ]}{\biggl \{}{\!j\! \atop \!k\!}{\biggr \}},$ and similarly for other compositions. In terms of matrices, if $L$ denotes the matrix with entries $L_{nk}=L(n,k)$ and $L^{-}$ denotes the matrix with entries $L_{nk}^{-}=(-1)^{n-k}L(n,k)$, then one is the inverse of the other: $L^{-}=L^{-1}$. Composing the matrix of unsigned Stirling numbers of the first kind with the matrix of Stirling numbers of the second kind gives the Lah numbers: $L=|s|\cdot S$. Enumeratively, $ \left\{{\!n\! \atop \!k\!}\right\},\left[{n \atop k}\right],L(n,k)$ can be defined as the number of partitions of n elements into k non-empty unlabeled subsets, where each subset is endowed with no order, a cyclic order, or a linear order, respectively. In particular, this implies the inequalities: ${\biggl \{}{\!n\! 
\atop \!k\!}{\biggr \}}\leq {\biggl [}{n \atop k}{\biggr ]}\leq L(n,k).$ Inversion relations and the Stirling transform For any pair of sequences, $\{f_{n}\}$ and $\{g_{n}\}$, related by a finite sum Stirling number formula given by $g_{n}=\sum _{k=0}^{n}\left\{{\begin{matrix}n\\k\end{matrix}}\right\}f_{k},$ for all integers $n\geq 0$, we have a corresponding inversion formula for $f_{n}$ given by $f_{n}=\sum _{k=0}^{n}\left[{\begin{matrix}n\\k\end{matrix}}\right](-1)^{n-k}g_{k}.$ The lower indices could be any integer between $ 0$ and $ n$. These inversion relations between the two sequences translate into functional equations between the sequence exponential generating functions given by the Stirling (generating function) transform as ${\widehat {G}}(z)={\widehat {F}}\left(e^{z}-1\right)$ and ${\widehat {F}}(z)={\widehat {G}}\left(\log(1+z)\right).$ For $D=d/dx$, the differential operators $x^{n}D^{n}$ and $(xD)^{n}$ are related by the following formulas for all integers $n\geq 0$:[7] ${\begin{aligned}(xD)^{n}&=\sum _{k=0}^{n}S(n,k)x^{k}D^{k}\\x^{n}D^{n}&=\sum _{k=0}^{n}s(n,k)(xD)^{k}=(xD)_{n}=xD(xD-1)\ldots (xD-n+1)\end{aligned}}$ Another pair of "inversion" relations involving the Stirling numbers relate the forward differences and the ordinary $n^{th}$ derivatives of a function, $f(x)$, which is analytic for all $x$ by the formulas[8] ${\frac {1}{k!}}{\frac {d^{k}}{dx^{k}}}f(x)=\sum _{n=k}^{\infty }{\frac {s(n,k)}{n!}}\Delta ^{n}f(x)$ ${\frac {1}{k!}}\Delta ^{k}f(x)=\sum _{n=k}^{\infty }{\frac {S(n,k)}{n!}}{\frac {d^{n}}{dx^{n}}}f(x).$ Similar properties Table of similarities Stirling numbers of the first kind Stirling numbers of the second kind $\left[{n+1 \atop k}\right]=n\left[{n \atop k}\right]+\left[{n \atop k-1}\right]$ $\left\{{n+1 \atop k}\right\}=k\left\{{n \atop k}\right\}+\left\{{n \atop k-1}\right\}$ $\sum _{k=0}^{n}\left[{n \atop k}\right]=n!$ $\sum _{k=0}^{n}\left\{{n \atop k}\right\}=B_{n}$, where $B_{n}$ is the n-th Bell number $\sum 
_{k=0}^{n}\left[{n \atop k}\right]x^{k}=x^{(n)}$, where $\{x^{(n)}\}_{n\in \mathbb {N} }$ is the rising factorials $\sum _{k=0}^{n}\left\{{n \atop k}\right\}x^{k}=T_{n}(x)$, where $\{T_{n}\}_{n\in \mathbb {N} }$ is the Touchard polynomials $\left[{n \atop 0}\right]=\delta _{n},\ \left[{n \atop n-1}\right]={\binom {n}{2}},\ \left[{n \atop n}\right]=1$ $\left\{{n \atop 0}\right\}=\delta _{n},\ \left\{{n \atop n-1}\right\}={\binom {n}{2}},\ \left\{{n \atop n}\right\}=1$ $\left[{n+1 \atop k+1}\right]=\sum _{j=k}^{n}\left[{n \atop j}\right]{\binom {j}{k}}$ $\left\{{n+1 \atop k+1}\right\}=\sum _{j=k}^{n}{\binom {n}{j}}\left\{{j \atop k}\right\}$ $\left[{\begin{matrix}n+1\\k+1\end{matrix}}\right]=\sum _{j=k}^{n}{\frac {n!}{j!}}\left[{j \atop k}\right]$ $\left\{{n+1 \atop k+1}\right\}=\sum _{j=k}^{n}(k+1)^{n-j}\left\{{j \atop k}\right\}$ $\left[{n+k+1 \atop n}\right]=\sum _{j=0}^{k}(n+j)\left[{n+j \atop j}\right]$ $\left\{{n+k+1 \atop k}\right\}=\sum _{j=0}^{k}j\left\{{n+j \atop j}\right\}$ $\left[{n \atop l+m}\right]{\binom {l+m}{l}}=\sum _{k}\left[{k \atop l}\right]\left[{n-k \atop m}\right]{\binom {n}{k}}$ $\left\{{n \atop l+m}\right\}{\binom {l+m}{l}}=\sum _{k}\left\{{k \atop l}\right\}\left\{{n-k \atop m}\right\}{\binom {n}{k}}$ $\left[{n+k \atop n}\right]{\underset {n\to \infty }{\sim }}{\frac {n^{2k}}{2^{k}k!}}.$ $\left\{{n+k \atop n}\right\}{\underset {n\to \infty }{\sim }}{\frac {n^{2k}}{2^{k}k!}}.$ $\sum _{n=k}^{\infty }\left[{n \atop k}\right]{\frac {x^{n}}{n!}}={\frac {(-\log(1-x))^{k}}{k!}}.$ $\sum _{n=k}^{\infty }\left\{{n \atop k}\right\}{\frac {x^{n}}{n!}}={\frac {(e^{x}-1)^{k}}{k!}}.$ $\left[{n \atop k}\right]=\sum _{0\leq i_{1}<\ldots <i_{n-k}<n}i_{1}i_{2}\cdots i_{n-k}.$ $\left\{{n \atop k}\right\}=\sum _{\begin{array}{c}c_{1}+\ldots +c_{k}=n-k\\c_{1},\ldots ,\ c_{k}\ \geq \ 0\end{array}}1^{c_{1}}2^{c_{2}}\cdots k^{c_{k}}$ See the specific articles for details. 
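Several identities from the table, such as the row sums over the unsigned first kind giving n! and over the second kind giving the Bell numbers, can be confirmed numerically. In this sketch the Bell numbers are generated with the Bell triangle, an independent construction chosen here for the check rather than something used in the article:

```python
from math import comb, factorial

N = 9
c = [[0] * N for _ in range(N)]  # unsigned first kind [n k]
S = [[0] * N for _ in range(N)]  # second kind {n k}
c[0][0] = S[0][0] = 1
for n in range(1, N):
    for k in range(n + 1):
        c[n][k] = (c[n - 1][k - 1] if k > 0 else 0) + (n - 1) * c[n - 1][k]
        S[n][k] = (S[n - 1][k - 1] if k > 0 else 0) + k * S[n - 1][k]

# Bell numbers via the Bell triangle: each row starts with the last
# entry of the previous row; B_n is the first entry of row n.
bell, row = [1], [1]
for _ in range(N - 1):
    nxt = [row[-1]]
    for v in row:
        nxt.append(nxt[-1] + v)
    row = nxt
    bell.append(row[0])

for n in range(N):
    assert sum(c[n]) == factorial(n)  # sum_k [n k] = n!
    assert sum(S[n]) == bell[n]       # sum_k {n k} = B_n
    if n >= 1:
        assert c[n][n - 1] == comb(n, 2) == S[n][n - 1]
```

The last assertion checks the shared boundary values [n, n-1] = {n, n-1} = C(n, 2) listed in the table.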
Symmetric formulae Abramowitz and Stegun give the following symmetric formulae that relate the Stirling numbers of the first and second kind.[9] $\left[{n \atop k}\right]=\sum _{j=n}^{2n-k}(-1)^{j-k}{\binom {j-1}{k-1}}{\binom {2n-k}{j}}\left\{{j-k \atop j-n}\right\}$ and $\left\{{n \atop k}\right\}=\sum _{j=n}^{2n-k}(-1)^{j-k}{\binom {j-1}{k-1}}{\binom {2n-k}{j}}\left[{j-k \atop j-n}\right]$ Stirling numbers with negative integral values The Stirling numbers can be extended to negative integral values, but not all authors do so in the same way.[10][11][12] Regardless of the approach taken, it is worth noting that Stirling numbers of first and second kind are connected by the relations: ${\biggl [}{n \atop k}{\biggr ]}={\biggl \{}{\!-k\! \atop \!-n\!}{\biggr \}}\quad {\text{and}}\quad {\biggl \{}{\!n\! \atop \!k\!}{\biggr \}}={\biggl [}{-k \atop -n}{\biggr ]}$ when n and k are nonnegative integers. So we have the following table for $\left[{-n \atop -k}\right]$:

n \ k    −1    −2    −3    −4    −5
 −1       1     1     1     1     1
 −2       0     1     3     7    15
 −3       0     0     1     6    25
 −4       0     0     0     1    10
 −5       0     0     0     0     1

Donald Knuth[12] defined the more general Stirling numbers by extending a recurrence relation to all integers. In this approach, $ \left[{n \atop k}\right]$ and $ \left\{{\!n\! \atop \!k\!}\right\}$ are zero if n is negative and k is nonnegative, or if n is nonnegative and k is negative, and so we have, for any integers n and k, ${\biggl [}{n \atop k}{\biggr ]}={\biggl \{}{\!-k\! \atop \!-n\!}{\biggr \}}\quad {\text{and}}\quad {\biggl \{}{\!n\! \atop \!k\!}{\biggr \}}={\biggl [}{-k \atop -n}{\biggr ]}.$ On the other hand, for positive integers n and k, David Branson[11] defined $ \left[{-n \atop -k}\right]\!,$ $ \left\{{\!-n\! \atop \!-k\!}\right\}\!,$ $ \left[{-n \atop k}\right]\!,$ and $ \left\{{\!-n\! \atop \!k\!}\right\}$ (but not $ \left[{n \atop -k}\right]$ or $ \left\{{\!n\! \atop \!-k\!}\right\}$).
In this approach, one has the following extension of the recurrence relation of the Stirling numbers of the first kind: ${\biggl [}{-n \atop k}{\biggr ]}={\frac {(-1)^{n+1}}{n!}}\sum _{i=1}^{n}{\frac {(-1)^{i+1}}{i^{k}}}{\binom {n}{i}}.$ For example, $ \left[{-5 \atop k}\right]={\frac {1}{120}}{\Bigl (}5-{\frac {10}{2^{k}}}+{\frac {10}{3^{k}}}-{\frac {5}{4^{k}}}+{\frac {1}{5^{k}}}{\Bigr )}.$ This leads to the following table of values of $ \left[{n \atop k}\right]$ for negative integral n.

n \ k  0  1  2  3  4
−1  $1$  $1$  $1$  $1$  $1$
−2  ${\tfrac {-1}{2}}$  ${\tfrac {-3}{4}}$  ${\tfrac {-7}{8}}$  ${\tfrac {-15}{16}}$  ${\tfrac {-31}{32}}$
−3  ${\tfrac {1}{6}}$  ${\tfrac {11}{36}}$  ${\tfrac {85}{216}}$  ${\tfrac {575}{1296}}$  ${\tfrac {3661}{7776}}$
−4  ${\tfrac {-1}{24}}$  ${\tfrac {-25}{288}}$  ${\tfrac {-415}{3456}}$  ${\tfrac {-5845}{41472}}$  ${\tfrac {-76111}{497664}}$
−5  ${\tfrac {1}{120}}$  ${\tfrac {137}{7200}}$  ${\tfrac {12019}{432000}}$  ${\tfrac {874853}{25920000}}$  ${\tfrac {58067611}{1555200000}}$

In this case $ \sum _{n=1}^{\infty }\left[{-n \atop -k}\right]=B_{k}$ where $B_{k}$ is a Bell number, and so one may define the negative Bell numbers by $ \sum _{n=1}^{\infty }\left[{-n \atop k}\right]=:B_{-k}$. For example, this produces $ \sum _{n=1}^{\infty }\left[{-n \atop 1}\right]=B_{-1}={\frac {1}{e}}\sum _{j=1}^{\infty }{\frac {1}{j\cdot j!}}={\frac {1}{e}}\int _{0}^{1}{\frac {e^{t}-1}{t}}dt=0.4848291\dots $, generally $ B_{-k}={\frac {1}{e}}\sum _{j=1}^{\infty }{\frac {1}{j^{k}j!}}$.
ISBN 978-3-540-39032-9. 6. Sándor, Jozsef; Crstici, Borislav (2004). Handbook of Number Theory II. Kluwer Academic Publishers. p. 464. ISBN 9781402025464. 7. Concrete Mathematics exercise 13 of section 6. Note that this formula immediately implies the first positive-order Stirling number transformation given in the main article on generating function transformations. 8. Olver, Frank; Lozier, Daniel; Boisvert, Ronald; Clark, Charles (2010). "NIST Handbook of Mathematical Functions". Nist Handbook of Mathematical Functions. (Section 26.8) 9. Goldberg, K.; Newman, M; Haynsworth, E. (1972), "Stirling Numbers of the First Kind, Stirling Numbers of the Second Kind", in Abramowitz, Milton; Stegun, Irene A. (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 10th printing, New York: Dover, pp. 824–825 10. Loeb, Daniel E. (1992) [Received 3 Nov 1989]. "A generalization of the Stirling numbers". Discrete Mathematics. 103 (3): 259–269. doi:10.1016/0012-365X(92)90318-A. 11. Branson, David (August 1994). "An extension of Stirling numbers" (PDF). The Fibonacci Quarterly. Archived (PDF) from the original on 2011-08-27. Retrieved Dec 6, 2017. 12. D.E. Knuth, 1992. References • Rosen, Kenneth H., ed. (2018), Handbook of Discrete and Combinatorial Mathematics, CRC Press, ISBN 978-1-5848-8780-5 • Mansour, Toufik; Schork, Mathias (2015), Commutation Relations, Normal Ordering, and Stirling Numbers, CRC Press, ISBN 978-1-4665-7989-7 Further reading • Adamchik, Victor (1997). "On Stirling Numbers and Euler Sums" (PDF). Journal of Computational and Applied Mathematics. 79: 119–130. doi:10.1016/s0377-0427(96)00167-7. Archived (PDF) from the original on 2004-12-14. • Benjamin, Arthur T.; Preston, Gregory O.; Quinn, Jennifer J. (2002). "A Stirling Encounter with Harmonic Numbers" (PDF). Mathematics Magazine. 75 (2): 95–103. CiteSeerX 10.1.1.383.722. doi:10.2307/3219141. JSTOR 3219141. Archived (PDF) from the original on 2020-09-10. 
• Boyadzhiev, Khristo N. (2012). "Close encounters with the Stirling numbers of the second kind" (PDF). Mathematics Magazine. 85 (4): 252–266. arXiv:1806.09468. doi:10.4169/math.mag.85.4.252. S2CID 115176876. Archived (PDF) from the original on 2015-09-05. • Comtet, Louis (1970). "Valeur de s(n, k)". Analyse Combinatoire, Tome Second (in French): 51. • Comtet, Louis (1974). Advanced Combinatorics: The Art of Finite and Infinite Expansions. Dordrecht-Holland/Boston-U.S.A.: Reidel Publishing Company. ISBN 9789027703804. • Hsien-Kuei Hwang (1995). "Asymptotic Expansions for the Stirling Numbers of the First Kind". Journal of Combinatorial Theory, Series A. 71 (2): 343–351. doi:10.1016/0097-3165(95)90010-1. • Knuth, D.E. (1992), "Two notes on notation", Amer. Math. Monthly, 99 (5): 403–422, arXiv:math/9205211, doi:10.2307/2325085, JSTOR 2325085, S2CID 119584305 • Miksa, Francis L. (January 1956). "Stirling numbers of the first kind: 27 leaves reproduced from typewritten manuscript on deposit in the UMT File". Mathematical Tables and Other Aids to Computation: Reviews and Descriptions of Tables and Books. 10 (53): 37–38. JSTOR 2002617. • Miksa, Francis L. (1972) [1964]. "Combinatorial Analysis, Table 24.4, Stirling Numbers of the Second Kind". In Abramowitz, Milton; Stegun, Irene A. (eds.). Handbook of Mathematical Functions (with Formulas, Graphs and Mathematical Tables). 55. U.S. Dept. of Commerce, National Bureau of Standards, Applied Math. p. 835. • Mitrinović, Dragoslav S. (1959). "Sur les nombres de Stirling de première espèce et les polynômes de Stirling" (PDF). Publications de la Faculté d'Electrotechnique de l'Université de Belgrade, Série Mathématiques et Physique (in French) (23): 1–20. ISSN 0522-8441. Archived (PDF) from the original on 2009-06-17. • O'Connor, John J.; Robertson, Edmund F. (September 1998). "James Stirling (1692–1770)". • Sixdeniers, J. M.; Penson, K. A.; Solomon, A. I. (2001). 
"Extended Bell and Stirling Numbers From Hypergeometric Exponentiation" (PDF). Journal of Integer Sequences. 4: 01.1.4. arXiv:math/0106123. Bibcode:2001JIntS...4...14S. • Spivey, Michael Z. (2007). "Combinatorial sums and finite differences". Discrete Math. 307 (24): 3130–3146. doi:10.1016/j.disc.2007.03.052. • Sloane, N. J. A. (ed.). "Sequence A008275 (Stirling numbers of first kind)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. • Sloane, N. J. A. (ed.). "Sequence A008277 (Stirling numbers of 2nd kind)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Classes of natural numbers Powers and related numbers • Achilles • Power of 2 • Power of 3 • Power of 10 • Square • Cube • Fourth power • Fifth power • Sixth power • Seventh power • Eighth power • Perfect power • Powerful • Prime power Of the form a × 2b ± 1 • Cullen • Double Mersenne • Fermat • Mersenne • Proth • Thabit • Woodall Other polynomial numbers • Hilbert • Idoneal • Leyland • Loeschian • Lucky numbers of Euler Recursively defined numbers • Fibonacci • Jacobsthal • Leonardo • Lucas • Padovan • Pell • Perrin Possessing a specific set of other numbers • Amenable • Congruent • Knödel • Riesel • Sierpiński Expressible via specific sums • Nonhypotenuse • Polite • Practical • Primary pseudoperfect • Ulam • Wolstenholme Figurate numbers 2-dimensional centered • Centered triangular • Centered square • Centered pentagonal • Centered hexagonal • Centered heptagonal • Centered octagonal • Centered nonagonal • Centered decagonal • Star non-centered • Triangular • Square • Square triangular • Pentagonal • Hexagonal • Heptagonal • Octagonal • Nonagonal • Decagonal • Dodecagonal 3-dimensional centered • Centered tetrahedral • Centered cube • Centered octahedral • Centered dodecahedral • Centered icosahedral non-centered • Tetrahedral • Cubic • Octahedral • Dodecahedral • Icosahedral • Stella octangula pyramidal • Square pyramidal 4-dimensional non-centered • Pentatope • Squared 
triangular • Tesseractic Combinatorial numbers • Bell • Cake • Catalan • Dedekind • Delannoy • Euler • Eulerian • Fuss–Catalan • Lah • Lazy caterer's sequence • Lobb • Motzkin • Narayana • Ordered Bell • Schröder • Schröder–Hipparchus • Stirling first • Stirling second • Telephone number • Wedderburn–Etherington Primes • Wieferich • Wall–Sun–Sun • Wolstenholme prime • Wilson Pseudoprimes • Carmichael number • Catalan pseudoprime • Elliptic pseudoprime • Euler pseudoprime • Euler–Jacobi pseudoprime • Fermat pseudoprime • Frobenius pseudoprime • Lucas pseudoprime • Lucas–Carmichael number • Somer–Lucas pseudoprime • Strong pseudoprime Arithmetic functions and dynamics Divisor functions • Abundant • Almost perfect • Arithmetic • Betrothed • Colossally abundant • Deficient • Descartes • Hemiperfect • Highly abundant • Highly composite • Hyperperfect • Multiply perfect • Perfect • Practical • Primitive abundant • Quasiperfect • Refactorable • Semiperfect • Sublime • Superabundant • Superior highly composite • Superperfect Prime omega functions • Almost prime • Semiprime Euler's totient function • Highly cototient • Highly totient • Noncototient • Nontotient • Perfect totient • Sparsely totient Aliquot sequences • Amicable • Perfect • Sociable • Untouchable Primorial • Euclid • Fortunate Other prime factor or divisor related numbers • Blum • Cyclic • Erdős–Nicolas • Erdős–Woods • Friendly • Giuga • Harmonic divisor • Jordan–Pólya • Lucas–Carmichael • Pronic • Regular • Rough • Smooth • Sphenic • Størmer • Super-Poulet • Zeisel Numeral system-dependent numbers Arithmetic functions and dynamics • Persistence • Additive • Multiplicative Digit sum • Digit sum • Digital root • Self • Sum-product Digit product • Multiplicative digital root • Sum-product Coding-related • Meertens Other • Dudeney • Factorion • Kaprekar • Kaprekar's constant • Keith • Lychrel • Narcissistic • Perfect digit-to-digit invariant • Perfect digital invariant • Happy P-adic numbers-related • Automorphic 
• Trimorphic Digit-composition related • Palindromic • Pandigital • Repdigit • Repunit • Self-descriptive • Smarandache–Wellin • Undulating Digit-permutation related • Cyclic • Digit-reassembly • Parasitic • Primeval • Transposable Divisor-related • Equidigital • Extravagant • Frugal • Harshad • Polydivisible • Smith • Vampire Other • Friedman Binary numbers • Evil • Odious • Pernicious Generated via a sieve • Lucky • Prime Sorting related • Pancake number • Sorting number Natural language related • Aronson's sequence • Ban Graphemics related • Strobogrammatic • Mathematics portal
Stern prime A Stern prime, named for Moritz Abraham Stern, is a prime number that is not the sum of a smaller prime and twice the square of a nonzero integer. That is, if for a prime q there is no smaller prime p and nonzero integer b such that q = p + 2b², then q is a Stern prime. The known Stern primes are 2, 3, 17, 137, 227, 977, 1187, 1493 (sequence A042978 in the OEIS). So, for example, if we try subtracting from 137 the first few squares doubled in order, we get {135, 129, 119, 105, 87, 65, 39, 9}, none of which are prime. That means that 137 is a Stern prime. On the other hand, 139 is not a Stern prime, since we can express it as 137 + 2(1²), or 131 + 2(2²), etc. In fact, many primes have more than one such representation. Given a twin prime, the larger prime of the pair has a Goldbach representation (namely, a representation as the sum of two primes) of p + 2(1²). If that prime is the largest of a prime quadruplet, p + 8, then p + 2(2²) is also valid. Sloane's OEIS: A007697 lists odd numbers with at least n Goldbach representations. Leonhard Euler observed that as numbers get larger, they have more representations of the form $p+2b^{2}$, suggesting that there may be a largest number with no such representations; i.e., the above list of Stern primes might be not only finite, but complete. According to Jud McCranie, these are the only Stern primes among the first 100,000 primes. All the known Stern primes have more efficient Waring representations than their Goldbach representations would suggest. There also exist odd composite Stern numbers: the only known ones are 5777 and 5993. Goldbach once incorrectly conjectured that all Stern numbers are prime. (See OEIS: A060003 for odd Stern numbers.) Christian Goldbach conjectured in a letter to Leonhard Euler that every odd integer is of the form p + 2b² for integer b and prime p. Laurent Hodges believes that Stern became interested in the problem after reading a book of Goldbach's correspondence.
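The defining test is straightforward to run by brute force. A minimal Python sketch (trial-division primality, and the helper names `is_prime` and `is_stern_prime` are ours) reproduces the list of known Stern primes:

```python
def is_prime(n):
    # Trial division; adequate for the small ranges used here.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def is_stern_prime(q):
    # q is a Stern prime if it is prime and no smaller prime p and
    # nonzero integer b satisfy q = p + 2*b**2.
    if not is_prime(q):
        return False
    b = 1
    while 2 * b * b < q:
        if is_prime(q - 2 * b * b):
            return False
        b += 1
    return True

stern = [q for q in range(2, 2000) if is_stern_prime(q)]
assert stern == [2, 3, 17, 137, 227, 977, 1187, 1493]
```

All eight known Stern primes fall below 2000, so this search recovers the full list quoted above.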
At the time, 1 was considered a prime, so 3 was not considered a Stern prime given the representation 1 + 2(1²). The rest of the list remains the same under either definition. References • Hodges, Laurent (1993). "A Lesser-Known Goldbach Conjecture". Mathematics Magazine. 66 (1): 45–47. doi:10.2307/2690477. Prime number classes By formula • Fermat (22n + 1) • Mersenne (2p − 1) • Double Mersenne (22p−1 − 1) • Wagstaff (2p + 1)/3 • Proth (k·2n + 1) • Factorial (n! ± 1) • Primorial (pn# ± 1) • Euclid (pn# + 1) • Pythagorean (4n + 1) • Pierpont (2m·3n + 1) • Quartan (x4 + y4) • Solinas (2m ± 2n ± 1) • Cullen (n·2n + 1) • Woodall (n·2n − 1) • Cuban (x3 − y3)/(x − y) • Leyland (xy + yx) • Thabit (3·2n − 1) • Williams ((b−1)·bn − 1) • Mills (⌊A3n⌋) By integer sequence • Fibonacci • Lucas • Pell • Newman–Shanks–Williams • Perrin • Partitions • Bell • Motzkin By property • Wieferich (pair) • Wall–Sun–Sun • Wolstenholme • Wilson • Lucky • Fortunate • Ramanujan • Pillai • Regular • Strong • Stern • Supersingular (elliptic curve) • Supersingular (moonshine theory) • Good • Super • Higgs • Highly cototient • Unique Base-dependent • Palindromic • Emirp • Repunit (10n − 1)/9 • Permutable • Circular • Truncatable • Minimal • Delicate • Primeval • Full reptend • Unique • Happy • Self • Smarandache–Wellin • Strobogrammatic • Dihedral • Tetradic Patterns • Twin (p, p + 2) • Bi-twin chain (n ± 1, 2n ± 1, 4n ± 1, …) • Triplet (p, p + 2 or p + 4, p + 6) • Quadruplet (p, p + 2, p + 6, p + 8) • k-tuple • Cousin (p, p + 4) • Sexy (p, p + 6) • Chen • Sophie Germain/Safe (p, 2p + 1) • Cunningham (p, 2p ± 1, 4p ± 3, 8p ± 7, ...) • Arithmetic progression (p + a·n, n = 0, 1, 2, 3, ...)
• Balanced (consecutive p − n, p, p + n) By size • Mega (1,000,000+ digits) • Largest known • list Complex numbers • Eisenstein prime • Gaussian prime Composite numbers • Pseudoprime • Catalan • Elliptic • Euler • Euler–Jacobi • Fermat • Frobenius • Lucas • Somer–Lucas • Strong • Carmichael number • Almost prime • Semiprime • Sphenic number • Interprime • Pernicious Related topics • Probable prime • Industrial-grade prime • Illegal prime • Formula for primes • Prime gap First 60 primes • 2 • 3 • 5 • 7 • 11 • 13 • 17 • 19 • 23 • 29 • 31 • 37 • 41 • 43 • 47 • 53 • 59 • 61 • 67 • 71 • 73 • 79 • 83 • 89 • 97 • 101 • 103 • 107 • 109 • 113 • 127 • 131 • 137 • 139 • 149 • 151 • 157 • 163 • 167 • 173 • 179 • 181 • 191 • 193 • 197 • 199 • 211 • 223 • 227 • 229 • 233 • 239 • 241 • 251 • 257 • 263 • 269 • 271 • 277 • 281 List of prime numbers
Stern–Brocot tree In number theory, the Stern–Brocot tree is an infinite complete binary tree in which the vertices correspond one-for-one to the positive rational numbers, whose values are ordered from left to right as in a search tree. The Stern–Brocot tree was introduced independently by Moritz Stern (1858) and Achille Brocot (1861). Stern was a German number theorist; Brocot was a French clockmaker who used the Stern–Brocot tree to design systems of gears with a gear ratio close to some desired value by finding a ratio of smooth numbers near that value. The root of the Stern–Brocot tree corresponds to the number 1. The parent-child relation between numbers in the Stern–Brocot tree may be defined in terms of continued fractions or mediants, and a path in the tree from the root to any other number q provides a sequence of approximations to q with smaller denominators than q. Because the tree contains each positive rational number exactly once, a breadth-first search of the tree provides a method of listing all positive rationals that is closely related to Farey sequences. The left subtree of the Stern–Brocot tree, containing the rational numbers in the range (0,1), is called the Farey tree. A tree of continued fractions Every positive rational number $q$ may be expressed as a continued fraction of the form $q=a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{\ddots +{\cfrac {1}{a_{k}}}}}}}}}}}=[a_{0};a_{1},a_{2},\ldots ,a_{k}]$ where $k$ and $a_{0}$ are non-negative integers, and each subsequent coefficient $a_{i}$ is a positive integer. This representation is not unique because $[a_{0};a_{1},a_{2},\ldots ,a_{k-1},1]=[a_{0};a_{1},a_{2},\ldots ,a_{k-1}+1],$ but using this equivalence to replace every continued fraction ending with a one by a shorter continued fraction shows that every rational number has a unique representation in which the last coefficient is greater than one.
Then, unless $q=1$, the number $q$ has a parent in the Stern–Brocot tree given by the continued fraction expression $[a_{0};a_{1},a_{2},\ldots ,a_{k}-1].$ Equivalently this parent is formed by decreasing the denominator in the innermost term of the continued fraction by 1, and contracting with the previous term if the fraction becomes ${\tfrac {1}{1}}$. For instance, the rational number 23⁄16 has the continued fraction representation ${\frac {23}{16}}=1+{\cfrac {1}{2+{\cfrac {1}{3+{\frac {1}{2}}}}}}=[1;2,3,2],$ so its parent in the Stern–Brocot tree is the number $[1;2,3,1]=[1;2,4]=1+{\cfrac {1}{2+{\frac {1}{4}}}}={\frac {13}{9}}.$ Conversely each number $q$ in the Stern–Brocot tree has exactly two children: if $q=[a_{0};a_{1},a_{2},\ldots ,a_{k}]=[a_{0};a_{1},a_{2},\ldots ,a_{k}-1,1]$ then one child is the number represented by the continued fraction $\displaystyle [a_{0};a_{1},a_{2},\ldots ,a_{k}+1]$ while the other child is represented by the continued fraction $[a_{0};a_{1},a_{2},\ldots ,a_{k}-1,2].$ One of these children is less than $q$ and this is the left child; the other is greater than $q$ and it is the right child (in fact the former expression gives the left child if $k$ is odd, and the right child if $k$ is even). For instance, the continued fraction representation of 13⁄9 is [1;2,4] and its two children are [1;2,5] = 16⁄11 (the right child) and [1;2,3,2] = 23⁄16 (the left child). It is clear that for each finite continued fraction expression one can repeatedly move to its parent, and reach the root [1;]=1⁄1 of the tree in finitely many steps (in a0 + ... + ak − 1 steps to be precise). Therefore, every positive rational number appears exactly once in this tree. Moreover all descendants of the left child of any number q are less than q, and all descendants of the right child of q are greater than q. The numbers at depth d in the tree are the numbers for which the sum of the continued fraction coefficients is d + 1. 
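The parent and child rules above can be sketched directly from the continued-fraction representation. In the following Python sketch the helper names (`cf`, `from_cf`, `parent`, `children`) are ours; the 23/16 and 13/9 examples from the text are re-derived at the end.

```python
from fractions import Fraction

def cf(q):
    # Continued fraction [a0; a1, ..., ak] of a positive rational q,
    # with the last coefficient > 1 (except for q = 1, which gives [1]).
    a, num, den = [], q.numerator, q.denominator
    while den:
        a.append(num // den)
        num, den = den, num % den
    return a

def from_cf(a):
    x = Fraction(a[-1])
    for c in reversed(a[:-1]):
        x = c + 1 / x
    return x

def parent(q):
    # Decrease the last coefficient by 1; contract a trailing 1
    # since [..., c, 1] = [..., c + 1].  (q must not be the root 1.)
    a = cf(q)
    a[-1] -= 1
    if a[-1] == 1 and len(a) > 1:
        a.pop()
        a[-1] += 1
    return from_cf(a)

def children(q):
    # [a0; ..., ak + 1] and [a0; ..., ak - 1, 2]; one is smaller, one larger.
    a = cf(q)
    c1 = from_cf(a[:-1] + [a[-1] + 1])
    c2 = from_cf(a[:-1] + [a[-1] - 1, 2])
    return min(c1, c2), max(c1, c2)     # (left child, right child)

assert cf(Fraction(23, 16)) == [1, 2, 3, 2]
assert parent(Fraction(23, 16)) == Fraction(13, 9)
assert children(Fraction(13, 9)) == (Fraction(23, 16), Fraction(16, 11))
```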
Mediants and binary search The Stern–Brocot tree forms an infinite binary search tree with respect to the usual ordering of the rational numbers.[1][2] The set of rational numbers descending from a node q is defined by the open interval (Lq,Hq) where Lq is the ancestor of q that is smaller than q and closest to it in the tree (or Lq = 0 if q has no smaller ancestor) while Hq is the ancestor of q that is larger than q and closest to it in the tree (or Hq = +∞ if q has no larger ancestor). The path from the root 1 to a number q in the Stern–Brocot tree may be found by a binary search algorithm, which may be expressed in a simple way using mediants. Augment the non-negative rational numbers to include a value 1/0 (representing +∞) that is by definition greater than all other rationals. The binary search algorithm proceeds as follows: • Initialize two values L and H to 0/1 and 1/0, respectively. • Until q is found, repeat the following steps: • Let L = a/b and H = c/d; compute the mediant M = (a + c)/(b + d). • If M is less than q, then q is in the open interval (M,H); replace L by M and continue. • If M is greater than q, then q is in the open interval (L,M); replace H by M and continue. • In the remaining case, q = M; terminate the search algorithm. The sequence of values M computed by this search is exactly the sequence of values on the path from the root to q in the Stern–Brocot tree. Each open interval (L,H) occurring at some step in the search is the interval (LM,HM) representing the descendants of the mediant M. The parent of q in the Stern–Brocot tree is the last mediant found that is not equal to q. This binary search procedure can be used to convert floating-point numbers into rational numbers.
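The search loop above can be sketched in a few lines of Python (the helper names are ours, not standard): `stern_brocot_path` returns the sequence of mediants, which is exactly the root-to-q path, and `best_approx` is a variant for real inputs that keeps the closest mediant within a denominator bound.

```python
from fractions import Fraction
import math

def stern_brocot_path(q):
    # Mediant binary search from L = 0/1, H = 1/0; returns all mediants visited.
    ln, ld, hn, hd = 0, 1, 1, 0
    path = []
    while True:
        mn, md = ln + hn, ld + hd          # mediant of L and H
        path.append(Fraction(mn, md))
        if path[-1] < q:
            ln, ld = mn, md
        elif path[-1] > q:
            hn, hd = mn, md
        else:
            return path

def best_approx(x, max_den):
    # Walk toward a positive real x, keeping the closest mediant
    # whose denominator does not exceed max_den.
    ln, ld, hn, hd = 0, 1, 1, 0
    best, best_err = None, float("inf")
    while True:
        mn, md = ln + hn, ld + hd
        if md > max_den:
            return best
        err = abs(x - mn / md)
        if err < best_err:
            best, best_err = Fraction(mn, md), err
        if mn < x * md:
            ln, ld = mn, md
        else:
            hn, hd = mn, md

path = stern_brocot_path(Fraction(23, 16))
assert path[-1] == Fraction(23, 16)
assert path[-2] == Fraction(13, 9)          # the parent: last mediant != q
assert best_approx(math.pi, 1000) == Fraction(355, 113)
```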
By stopping once the desired precision is reached, floating-point numbers can be approximated to arbitrary precision.[3] If a real number x is approximated by any rational number a/b that is not in the sequence of mediants found by the algorithm above, then the sequence of mediants contains a closer approximation to x that has a denominator at most equal to b; in that sense, these mediants form the best rational approximations to x. The Stern–Brocot tree may itself be defined directly in terms of mediants: the left child of any number q is the mediant of q with its closest smaller ancestor, and the right child of q is the mediant of q with its closest larger ancestor. In this formula, q and its ancestor must both be taken in lowest terms, and if there is no smaller or larger ancestor then 0/1 or 1/0 should be used respectively. For example, using 7/5, its closest smaller ancestor is 4/3, so its left child is (4 + 7)/(3 + 5) = 11/8, and its closest larger ancestor is 3/2, so its right child is (7 + 3)/(5 + 2) = 10/7. Relation to Farey sequences The Farey sequence of order n is the sorted sequence of fractions in the closed interval [0,1] that have denominator less than or equal to n. As in the binary search technique for generating the Stern–Brocot tree, the Farey sequences can be constructed using mediants: the Farey sequence of order n + 1 is formed from the Farey sequence of order n by computing the mediant of each two consecutive values in the Farey sequence of order n, keeping the subset of mediants that have denominator exactly equal to n + 1, and placing these mediants between the two values from which they were computed. A similar process of mediant insertion, starting with a different pair of interval endpoints [0/1,1/0], may also be seen to describe the construction of the vertices at each level of the Stern–Brocot tree.
The Stern–Brocot sequence of order 0 is the sequence [0/1,1/0], and the Stern–Brocot sequence of order i is the sequence formed by inserting a mediant between each consecutive pair of values in the Stern–Brocot sequence of order i − 1. The Stern–Brocot sequence of order i consists of all values at the first i levels of the Stern–Brocot tree, together with the boundary values 0/1 and 1/0, in numerical order. Thus the Stern–Brocot sequences differ from the Farey sequences in two ways: they eventually include all positive rationals, not just the rationals within the interval [0,1], and at the nth step all mediants are included, not only the ones with denominator equal to n. The Farey sequence of order n may be found by an inorder traversal of the left subtree of the Stern–Brocot tree, backtracking whenever a number with denominator greater than n is reached. Additional properties If ${{\frac {p_{1}}{q_{1}}},{\frac {p_{2}}{q_{2}}},\dots ,{\frac {p_{n}}{q_{n}}}}$ are all the rationals at the same depth in the Stern–Brocot tree, then $\sum _{k=1}^{n}{\frac {1}{p_{k}q_{k}}}=1$. Moreover, if ${{\frac {p}{q}}<{\frac {p'}{q'}}}$ are two consecutive fractions at or above a certain level in the tree (in the sense that any fraction between them must be in a lower level of the tree), then $p'q-pq'=1$.[4] Along with the definitions in terms of continued fractions and mediants described above, the Stern–Brocot tree may also be defined as a Cartesian tree for the rational numbers, prioritized by their denominators. In other words, it is the unique binary search tree of the rational numbers in which the parent of any vertex q has a smaller denominator than q (or if q and its parent are both integers, in which the parent is smaller than q).
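The mediant-insertion construction and the two properties stated above (the depth sum $\sum 1/(p_k q_k) = 1$ and neighbour unimodularity $p'q - pq' = 1$) can be checked with a short Python sketch. Pairs are kept as (numerator, denominator) tuples so that the boundary value 1/0 is representable; the helper name `stern_brocot_seq` is ours.

```python
from fractions import Fraction

def stern_brocot_seq(order):
    # Order-0 sequence is [0/1, 1/0]; each step inserts the mediant
    # between every pair of consecutive entries.
    seq = [(0, 1), (1, 0)]
    for _ in range(order):
        nxt = [seq[0]]
        for (a, b), (c, d) in zip(seq, seq[1:]):
            nxt += [(a + c, b + d), (c, d)]
        seq = nxt
    return seq

s = stern_brocot_seq(3)
assert s == [(0, 1), (1, 3), (1, 2), (2, 3), (1, 1),
             (3, 2), (2, 1), (3, 1), (1, 0)]

# Unimodularity of neighbouring fractions p/q < p'/q': p'q - pq' = 1.
assert all(c * b - a * d == 1 for (a, b), (c, d) in zip(s, s[1:]))

# Rationals first appearing at order 3 are those at depth 2; sum 1/(p*q) = 1.
prev = set(stern_brocot_seq(2))
depth2 = [pq for pq in s if pq not in prev]
assert sum(Fraction(1, p * q) for p, q in depth2) == 1

# The Farey sequence of order 3 is the portion of the tree in [0,1]
# with denominators at most 3 (the final 1/0 is excluded via [:-1]).
farey3 = sorted({Fraction(p, q) for p, q in stern_brocot_seq(4)[:-1] if p <= q <= 3})
assert farey3 == [Fraction(0), Fraction(1, 3), Fraction(1, 2), Fraction(2, 3), Fraction(1)]
```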
It follows from the theory of Cartesian trees that the lowest common ancestor of any two numbers q and r in the Stern–Brocot tree is the rational number in the closed interval [q, r] that has the smallest denominator among all numbers in this interval. Permuting the vertices on each level of the Stern–Brocot tree by a bit-reversal permutation produces a different tree, the Calkin–Wilf tree, in which the children of each number a/b are the two numbers a/(a + b) and (a + b)/b. Like the Stern–Brocot tree, the Calkin–Wilf tree contains each positive rational number exactly once, but it is not a binary search tree. See also • Minkowski's question-mark function, whose definition for rational arguments is closely related to the Stern–Brocot tree • Calkin–Wilf tree Notes 1. Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994), Concrete mathematics (Second ed.), Addison-Wesley, pp. 116–118, ISBN 0-201-55802-5 2. Gibbons, Jeremy; Lester, David; Bird, Richard (2006), "Functional pearl: Enumerating the rationals", Journal of Functional Programming, 16 (3): 281–291, doi:10.1017/S0956796806005880, S2CID 14237968. 3. Sedgewick and Wayne, Introduction to Programming in Java. A Java implementation of this algorithm can be found here. 4. Bogomolny credits this property to Pierre Lamothe, a Canadian music theorist. References • Brocot, Achille (1861), "Calcul des rouages par approximation, nouvelle méthode", Revue Chronométrique, 3: 186–194. • Brocot, Achille (1862), "Calcul des rouages par approximation, nouvelle méthode", https://gallica.bnf.fr/ark:/12148/bpt6k1661912?rk=21459;2 • Stern, Moritz A. (1858), "Ueber eine zahlentheoretische Funktion", Journal für die reine und angewandte Mathematik, 55: 193–220. • Berstel, Jean; Lauve, Aaron; Reutenauer, Christophe; Saliola, Franco V. (2009), Combinatorics on words. Christoffel words and repetitions in words, CRM Monograph Series, vol. 
27, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4480-9, Zbl 1161.68043 External links • Aiylam, Dhroova (2013), Modified Stern-Brocot Sequences, arXiv:1301.6807, Bibcode:2013arXiv1301.6807A • Austin, David, Trees, Teeth, and Time: The mathematics of clock making, Feature Column from the AMS • Bogomolny, Alexander, Stern Brocot-Tree, cut-the-knot, retrieved 2008-09-03. • Sloane, N. J. A., The Stern–Brocot or Farey Tree, On-line Encyclopedia of Integer Sequences. • Wildberger, Norman, MF96: Fractions and the Stern-Brocot tree. • Weisstein, Eric W., "Stern–Brocot Tree", MathWorld • Stern–Brocot tree at PlanetMath. • Infinite fractions, Numberphile • Amazing Graphs III, Numberphile • OEIS sequence A002487 (Stern's diatomic series (or Stern-Brocot sequence))
Stereohedron In geometry and crystallography, a stereohedron is a convex polyhedron that fills space isohedrally, meaning that the symmetries of the tiling take any copy of the stereohedron to any other copy. Two-dimensional analogues to the stereohedra are called planigons. Higher-dimensional polytopes can also be stereohedra, though they would more accurately be called stereotopes. Plesiohedra A subset of stereohedra are called plesiohedra, defined as the Voronoi cells of a symmetric Delone set. Parallelohedra are plesiohedra which are space-filling by translation only. Edges here are colored as parallel vectors. Parallelohedra cube hexagonal prism rhombic dodecahedron elongated dodecahedron truncated octahedron Other periodic stereohedra The catoptric tessellations contain stereohedron cells. Dihedral angles are integer divisors of 180°, and are colored by their order. The first three are the fundamental domains of ${\tilde {C}}_{3}$, ${\tilde {B}}_{3}$, and ${\tilde {A}}_{3}$ symmetry, represented by Coxeter–Dynkin diagrams. ${\tilde {B}}_{3}$ is a half symmetry of ${\tilde {C}}_{3}$, and ${\tilde {A}}_{3}$ is a quarter symmetry. Any space-filling stereohedron with symmetry elements can be dissected into smaller identical cells which are also stereohedra. The name modifiers below, half, quarter, and eighth represent such dissections.
Catoptric cells Faces 4 5 6 8 12 Type Tetrahedra Square pyramid Triangular bipyramid Cube Octahedron Rhombic dodecahedron Images 1/48 (1) 1/24 (2) 1/12 (4) 1/12 (4) 1/24 (2) 1/6 (8) 1/6 (8) 1/12 (4) 1/4 (12) 1 (48) 1/2 (24) 1/3 (16) 2 (96) Symmetry (order) C1 1 C1v 2 D2d 4 C1v 2 C1v 2 C4v 8 C2v 4 C2v 4 C3v 6 Oh 48 D3d 12 D4h 16 Oh 48 Honeycomb Eighth pyramidille Triangular pyramidille Oblate tetrahedrille Half pyramidille Square quarter pyramidille Pyramidille Half oblate octahedrille Quarter oblate octahedrille Quarter cubille Cubille Oblate cubille Oblate octahedrille Dodecahedrille Other convex polyhedra that are stereohedra but not parallelohedra nor plesiohedra include the gyrobifastigium. Others Faces 8 10 12 Symmetry (order) D2d (8) D4h (16) Images Cell Gyrobifastigium Elongated gyrobifastigium Ten of diamonds Elongated square bipyramid References • Ivanov, A. B. (2001) [1994], "Stereohedron", Encyclopedia of Mathematics, EMS Press • B. N. Delone, N. N. Sandakova, Theory of stereohedra Trudy Mat. Inst. Steklov., 64 (1961) pp. 28–51 (Russian) • Goldberg, Michael Three Infinite Families of Tetrahedral Space-Fillers Journal of Combinatorial Theory A, 16, pp. 348–354, 1974. • Goldberg, Michael The space-filling pentahedra, Journal of Combinatorial Theory, Series A Volume 13, Issue 3, November 1972, Pages 437-443 PDF • Goldberg, Michael The Space-filling Pentahedra II, Journal of Combinatorial Theory 17 (1974), 375–378. PDF • Goldberg, Michael On the space-filling hexahedra Geom. Dedicata, June 1977, Volume 6, Issue 1, pp 99–108 PDF • Goldberg, Michael On the space-filling heptahedra Geometriae Dedicata, June 1978, Volume 7, Issue 2, pp 175–184 PDF • Goldberg, Michael Convex Polyhedral Space-Fillers of More than Twelve Faces. Geom. Dedicata 8, 491-500, 1979. • Goldberg, Michael On the space-filling octahedra, Geometriae Dedicata, January 1981, Volume 10, Issue 1, pp 323–335 PDF • Goldberg, Michael On the Space-filling Decahedra. Structural Topology, 1982, num.
Type 10-II PDF • Goldberg, Michael On the space-filling enneahedra Geometriae Dedicata, June 1982, Volume 12, Issue 3, pp 297–306 PDF
Stephen Smale

Stephen Smale (born July 15, 1930) is an American mathematician, known for his research in topology, dynamical systems and mathematical economics. He was awarded the Fields Medal in 1966[2] and spent more than three decades on the mathematics faculty of the University of California, Berkeley (1960–1961 and 1964–1995), where he is currently Professor Emeritus, with research interests in algorithms, numerical analysis and global analysis.[3]

Born: July 15, 1930, Flint, Michigan
Nationality: American
Alma mater: University of Michigan
Known for: Generalized Poincaré conjecture, Handle decomposition, Homoclinic orbit, Smale's horseshoe, Smale's theorem, Smale conjecture, Smale's problems, Morse–Smale system, Morse–Smale diffeomorphism, Palais–Smale compactness condition, Blum–Shub–Smale machine, Smale–Williams attractor, Morse–Palais lemma, Regular homotopy, Sard's theorem, Sphere eversion, Structural stability, Whitehead torsion, Diffeomorphism
Awards: Wolf Prize (2007), National Medal of Science (1996), Chauvenet Prize (1988),[1] Fields Medal (1966), Oswald Veblen Prize in Geometry (1966), Sloan Fellowship (1960)
Fields: Mathematics
Institutions: Toyota Technological Institute at Chicago; City University of Hong Kong; University of Chicago; Columbia University; University of California, Berkeley
Thesis: Regular Curves on Riemannian Manifolds (1957)
Doctoral advisor: Raoul Bott
Doctoral students: Rufus Bowen, César Camacho, Robert L. Devaney, John Guckenheimer, Morris Hirsch, Nancy Kopell, Jacob Palis, Themistocles M. Rassias, James Renegar, Siavash Shahshahani, Mike Shub

Education and career

Smale was born in Flint, Michigan and entered the University of Michigan in 1948.[4][5] Initially, he was a good student, placing into an honors calculus sequence taught by Bob Thrall and earning A's. However, his sophomore and junior years were marred by mediocre grades: mostly Bs, Cs, and even an F in nuclear physics.
However, with some luck, Smale was accepted as a graduate student at the University of Michigan's mathematics department. Yet again, Smale performed poorly in his first years, earning a C average as a graduate student. When the department chair, Hildebrandt, threatened to kick Smale out, he began to take his studies more seriously.[6] Smale finally earned his PhD in 1957, under Raoul Bott, beginning his career as an instructor at the University of Chicago.

Early in his career, Smale was involved in controversy over remarks he made regarding his work habits while proving the higher-dimensional Poincaré conjecture. He said that his best work had been done "on the beaches of Rio."[7][8] He has been politically active in various movements in the past, such as the Free Speech Movement. In 1966, having travelled to Moscow under an NSF grant to accept the Fields Medal, he held a press conference there to denounce the American position in Vietnam, Soviet intervention in Hungary and Soviet maltreatment of intellectuals. After his return to the US, he was unable to renew the grant.[9] At one time he was subpoenaed[10] by the House Un-American Activities Committee.

In 1960, Smale received a Sloan Research Fellowship and was appointed to the Berkeley mathematics faculty, moving to a professorship at Columbia the following year. In 1964 he returned to a professorship at Berkeley, where he has spent the main part of his career. He became a professor emeritus at Berkeley in 1995 and took up a post as professor at the City University of Hong Kong. Over the years he also amassed one of the finest private mineral collections in existence.
Many of Smale's mineral specimens can be seen in the book The Smale Collection: Beauty in Natural Crystals.[11] From 2003 to 2012, Smale was a professor at the Toyota Technological Institute at Chicago;[12] starting August 1, 2009, he became a Distinguished University Professor at the City University of Hong Kong.[13] In 1988, Smale was the recipient of the Chauvenet Prize[1] of the MAA. In 2007, Smale was awarded the Wolf Prize in mathematics.[14]

Research

Smale proved that the oriented diffeomorphism group of the two-dimensional sphere has the same homotopy type as the special orthogonal group SO(3) of 3 × 3 rotation matrices.[15] Smale's theorem has been reproved and extended a few times, notably to higher dimensions in the form of the Smale conjecture,[16] as well as to other topological types.[17] In another early work, he studied the immersions of the two-dimensional sphere into Euclidean space.[18] By relating immersion theory to the algebraic topology of Stiefel manifolds, he was able to fully clarify when two immersions can be deformed into one another through a family of immersions. Directly from his results it followed that the standard immersion of the sphere into three-dimensional space can be deformed (through immersions) into its negation, which is now known as sphere eversion. He also extended his results to higher-dimensional spheres,[19] and his doctoral student Morris Hirsch extended his work to immersions of general smooth manifolds.[20] Along with John Nash's work on isometric immersions, the Hirsch–Smale immersion theory was highly influential in Mikhael Gromov's early work on the development of the h-principle, which abstracted and applied their ideas to contexts other than that of immersions.[21] In the study of dynamical systems, Smale introduced what is now known as a Morse–Smale system.[22] For these dynamical systems, Smale was able to prove Morse inequalities relating the cohomology of the underlying space to the dimensions of the (un)stable manifolds.
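The horseshoe map listed among Smale's best-known contributions is built from a stretch-and-fold of a square; the points that never escape form a Cantor set on which the dynamics is chaotic. A piecewise-linear caricature of the expanding direction alone (an illustrative model, not Smale's original construction):

```python
def step(y):
    """One iteration in the expanding direction of a piecewise-linear
    horseshoe: stretch [0,1] by a factor of 3 and fold back; points in
    the middle third are mapped off the square and escape (None)."""
    if y <= 1/3:
        return 3 * y
    if y >= 2/3:
        return 3 - 3 * y
    return None  # escapes the square

def survives(y, n):
    """Does y stay in the square for n iterations?"""
    for _ in range(n):
        y = step(y)
        if y is None:
            return False
    return True

# The fraction of initial points surviving n iterations shrinks like
# (2/3)**n; the points surviving forever form the middle-thirds Cantor
# set, on which the dynamics is conjugate to a shift on symbols.
N = 100_000
fractions = {n: sum(survives(i / N, n) for i in range(N)) / N for n in (1, 2, 3)}
```

In the full two-dimensional horseshoe the contracting direction records the backward itinerary as well, so orbits in the invariant set correspond to bi-infinite sequences of two symbols.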
Part of the significance of these results is from Smale's theorem asserting that the gradient flow of any Morse function can be arbitrarily well approximated by a Morse–Smale system without closed orbits.[23] Using these tools, Smale was able to construct self-indexing Morse functions, where the value of the function equals its Morse index at any critical point.[24] Using these self-indexing Morse functions as a key tool, Smale resolved the generalized Poincaré conjecture in every dimension greater than four.[25] Building on these works, he also established the more powerful h-cobordism theorem the following year, together with the full classification of simply-connected smooth five-dimensional manifolds.[26][24] Smale also identified the Smale horseshoe, inspiring much subsequent research. He also outlined a research program carried out by many others. Smale is also known for injecting Morse theory into mathematical economics, as well as recent explorations of various theories of computation. In 1998 he compiled a list of 18 problems in mathematics to be solved in the 21st century, known as Smale's problems.[27] This list was compiled in the spirit of Hilbert's famous list of problems produced in 1900. In fact, Smale's list contains some of the original Hilbert problems, including the Riemann hypothesis and the second half of Hilbert's sixteenth problem, both of which are still unsolved. Other famous problems on his list include the Poincaré conjecture (now a theorem, proved by Grigori Perelman), the P = NP problem, and the Navier–Stokes equations, all of which have been designated Millennium Prize Problems by the Clay Mathematics Institute. Books • Smale, Steve (1980). The mathematics of time: essays on dynamical systems, economic processes, and related topics. New York-Berlin: Springer-Verlag. doi:10.1007/978-1-4613-8101-3. ISBN 0-387-90519-7. MR 0607330. Zbl 0451.58001. • Blum, Lenore; Cucker, Felipe; Shub, Michael; Smale, Steve (1998). 
Complexity and real computation. With a foreword by Richard M. Karp. New York: Springer-Verlag. doi:10.1007/978-1-4612-0701-6. ISBN 0-387-98281-7. MR 1479636. S2CID 12510680. Zbl 0948.68068. • Hirsch, Morris W.; Smale, Stephen; Devaney, Robert L. (2013). Differential equations, dynamical systems, and an introduction to chaos (Third edition of 1974 original ed.). Amsterdam: Academic Press. doi:10.1016/C2009-0-61160-0. ISBN 978-0-12-382010-5. MR 3293130. Zbl 1239.37001. • Cucker, F.; Wong, R., eds. (2000). The collected papers of Stephen Smale. In three volumes. Singapore: Singapore University Press. doi:10.1142/4424. ISBN 981-02-4307-3. MR 1781696. Zbl 0995.01005. Important publications • Smale, Stephen (1959a). "A classification of immersions of the two-sphere". Transactions of the American Mathematical Society. 90 (2): 281–290. doi:10.1090/S0002-9947-1959-0104227-9. MR 0104227. Zbl 0089.18102. • Smale, Stephen (1959b). "The classification of immersions of spheres in Euclidean spaces". Annals of Mathematics. Second Series. 69 (2): 327–344. doi:10.2307/1970186. JSTOR 1970186. MR 0105117. Zbl 0089.18201. • Smale, Stephen (1959c). "Diffeomorphisms of the 2-sphere". Proceedings of the American Mathematical Society. 10 (4): 621–626. doi:10.1090/S0002-9939-1959-0112149-8. MR 0112149. Zbl 0118.39103. • Smale, Stephen (1960). "Morse inequalities for a dynamical system". Bulletin of the American Mathematical Society. 66 (1): 43–49. doi:10.1090/S0002-9904-1960-10386-2. MR 0117745. Zbl 0100.29701. • Smale, Stephen (1961a). "On gradient dynamical systems". Annals of Mathematics. Second Series. 74 (1): 199–206. doi:10.2307/1970311. JSTOR 1970311. MR 0133139. Zbl 0136.43702. • Smale, Stephen (1961b). "Generalized Poincaré's conjecture in dimensions greater than four". Annals of Mathematics. Second Series. 74 (2): 391–406. doi:10.2307/1970239. JSTOR 1970239. MR 0137124. Zbl 0099.39202. • Smale, S. (1962a). "On the structure of manifolds". American Journal of Mathematics. 
84 (3): 387–399. doi:10.2307/2372978. JSTOR 2372978. MR 0153022. Zbl 0109.41103. • Smale, Stephen (1962b). "On the structure of 5-manifolds". Annals of Mathematics. Second Series. 75 (1): 38–46. doi:10.2307/1970417. JSTOR 1970417. MR 0141133. Zbl 0101.16103. • Smale, S. (1965). "An infinite dimensional version of Sard's theorem". Amer. J. Math. 87 (4): 861–866. doi:10.2307/2373250. JSTOR 2373250. • Smale, Stephen (1967). "Differentiable dynamical systems". Bulletin of the American Mathematical Society. 73 (6): 747–817. doi:10.1090/S0002-9904-1967-11798-1. MR 0228014. • Blum, Lenore; Shub, Mike; Smale, Steve (1989). "On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines". Bull. Amer. Math. Soc. New Series. 21 (1): 1–46. doi:10.1090/S0273-0979-1989-15750-9. • Shub, Michael; Smale, Stephen (1993). "Complexity of Bézout's Theorem I: Geometric Aspects". Journal of the American Mathematical Society. Providence, Rhode Island: American Mathematical Society. 6 (2): 459–501. doi:10.2307/2152805. JSTOR 2152805. • Smale, Steve (1998). "Mathematical problems for the next century". The Mathematical Intelligencer. 20 (2): 7–15. doi:10.1007/BF03025291. MR 1631413. S2CID 1331144. Zbl 0947.01011. • Smale, Steve (2000). "Mathematical problems for the next century". In Arnold, V.; Atiyah, M.; Lax, P.; Mazur, B. (eds.). Mathematics: frontiers and perspectives. Providence, RI: American Mathematical Society. pp. 271–294. ISBN 0-8218-2070-2. MR 1754783. Zbl 1031.00005. • Cucker, Felipe; Smale, Steve (2002). "On the mathematical foundations of learning". Bull. Amer. Math. Soc. New Series. 39 (1): 1–49. doi:10.1090/S0273-0979-01-00923-5. • Cucker, Felipe; Smale, Steve (2007). "Emergent behavior in flocks". IEEE Trans. Autom. Control. 52 (5): 852–862. doi:10.1109/TAC.2007.895842. S2CID 206590734.* See also • 5-manifold • Axiom A • Geometric mechanics • Homotopy principle • Mean value problem References 1. 
Smale, Steve (1985). "On the Efficiency of Algorithms in Analysis". Bulletin of the American Mathematical Society. New Series. 13 (2): 87–121. doi:10.1090/S0273-0979-1985-15391-1. 2. "How Math Got Its 'Nobel'". The New York Times. 8 August 2014. Retrieved 21 October 2016. 3. "Stephen Smale". University of California, Berkeley. Retrieved 27 November 2021. 4. William L. Hosch, ed. (2010). The Britannica Guide to Geometry. Britannica Educational Publishing. p. 225. ISBN 9781615302178. 5. Batterson, Steve (2000). Steven Smale: The Mathematician Who Broke the Dimension Barrier. American Mathematical Soc. p. 11. ISBN 9780821826966. 6. Video on YouTube 7. He discovered the famous Smale horseshoe map on a beach in Leme, Rio de Janeiro. See: S. Smale (1996), Chaos: Finding a Horseshoe on the Beaches of Rio. 8. CS Aravinda (2018). "ICM 2018: On the beaches of Rio de Janeiro". Bhāvanā. 2 (3). Retrieved 8 October 2022. 9. Andrew Jamison (5 October 1967). "Math Professors Question Denial Of Smale Grant". The Harvard Crimson. Retrieved 13 February 2022. 10. Greenberg, D. S. (1966-10-07). "The Smale Case: NSF and Berkeley Pass Through a Case of Jitters". Science. American Association for the Advancement of Science (AAAS). 154 (3745): 130–133. Bibcode:1966Sci...154..130G. doi:10.1126/science.154.3745.130. ISSN 0036-8075. PMID 17740098. 11. "Lithographie LTD". www.lithographie.org. 12. "Faculty Alumni". ttic.edu. 13. Stephen Smale Vita. Accessed November 18, 2009. 14. "The Hebrew University of Jerusalem - Division of Marketing & Communication". www.huji.ac.il. Archived from the original on 2016-03-03. Retrieved 2007-02-04. 15. Smale 1959c. 16. Hatcher, Allen E. (1983). "A proof of the Smale conjecture, Diff(S3) ≃ O(4)". Annals of Mathematics. Second Series. 117 (3): 553–607. doi:10.2307/2007035. JSTOR 2007035. MR 0701256. Zbl 0531.57028. 17. Earle, Clifford J.; Eells, James (1969). "A fibre bundle description of Teichmüller theory". Journal of Differential Geometry. 3 (1–2): 19–43. 
doi:10.4310/jdg/1214428816. MR 0276999. Zbl 0185.32901. 18. Smale 1959a. 19. Smale 1959b. 20. Hirsch, Morris W. (1959). "Immersions of manifolds". Transactions of the American Mathematical Society. 93 (2): 242–276. doi:10.1090/S0002-9947-1959-0119214-4. MR 0119214. Zbl 0113.17202. 21. Gromov, Mikhael (1986). Partial differential relations. Ergebnisse der Mathematik und ihrer Grenzgebiete, 3. Folge. Vol. 9. Berlin: Springer-Verlag. doi:10.1007/978-3-662-02267-2. ISBN 3-540-12177-3. MR 0864505. Zbl 0651.53001. 22. Smale 1960. 23. Smale 1961a. 24. Milnor, John (1965). Lectures on the h-cobordism theorem. Notes by L. Siebenmann and J. Sondow. Princeton, NJ: Princeton University Press. doi:10.1515/9781400878055. ISBN 9781400878055. MR 0190942. Zbl 0161.20302. 25. Smale 1961b. 26. Smale 1962a; Smale 1962b. 27. Smale 1998; Smale 2000. External links • "Stephen Smale". Google Scholar. • Stephen Smale at the Mathematics Genealogy Project • O'Connor, John J.; Robertson, Edmund F., "Stephen Smale", MacTutor History of Mathematics Archive, University of St Andrews • Weisstein, Eric W. "Smale's Problems". MathWorld. • Robion Kirby, Stephen Smale: The Mathematician Who Broke the Dimension Barrier, a book review of a biography in the Notices of the AMS. 
Personal websites at universities
• Steven Smale at the City University of Hong Kong
• Stephen Smale at the University of Chicago
• Steve Smale at the University of California, Berkeley
Steven J. Miller

Steven Joel Miller is a mathematician who specializes in analytic number theory and has also worked in applied fields such as sabermetrics and linear programming.[1] He is a co-author, with Ramin Takloo-Bighash, of An Invitation to Modern Number Theory (Princeton University Press, 2006); with Midge Cozzens, of The Mathematics of Encryption: An Elementary Introduction (AMS Mathematical World series 29, Providence, RI, 2013); and with Stephan Ramon Garcia, of 100 Years of Math Milestones: The Pi Mu Epsilon Centennial Collection (American Mathematical Society, 2019). He also edited Theory and Applications of Benford's Law (Princeton University Press, 2015) and wrote The Mathematics of Optimization: How to do things faster (AMS Pure and Applied Undergraduate Texts, Volume 30, 2017) and The Probability Lifesaver: All the Tools You Need to Understand Chance (Princeton University Press, 2017). He has written over 100 papers on topics including accounting, Benford's law, computer science, economics, marketing, mathematics, physics, probability, sabermetrics, and statistics, available on the arXiv and his homepage.

Nationality: American
Alma mater: Yale University; Princeton University
Fields: Mathematics
Institutions: Williams College, Smith College, Mount Holyoke College, Brown University, Boston University, Ohio State University, American Institute of Mathematics, NYU, Princeton University
Thesis: 1 and 2 Level Densities for Families of Elliptic Curves: Evidence for the Underlying Group Symmetries (2002)
Doctoral advisors: Peter Sarnak, Henryk Iwaniec
Website: web.williams.edu/Mathematics/sjmiller/public_html/

Academic career

Miller earned his B.S. in mathematics and physics at Yale University and completed his graduate studies in mathematics at Princeton University in 2002. His Ph.D.
thesis, titled "1 and 2 Level Densities for Families of Elliptic Curves: Evidence for the Underlying Group Symmetries," was written under the direction of Peter Sarnak and Henryk Iwaniec.[2] He is currently a professor of mathematics at Williams College, where he has served as Director of the Williams SMALL REU Program and is the faculty president of the Williams Phi Beta Kappa chapter.[3] He is also a faculty fellow at the Erdős Institute.[4] He was included in the 2019 class of fellows of the American Mathematical Society "for contributions to number theory and service to the mathematical community, particularly in support of mentoring undergraduate research".[5] Books Miller has published six books. • 100 Years of Math Milestones: The Pi Mu Epsilon Centennial Collection (with Stephan Ramon Garcia): https://bookstore.ams.org/mbk-121 • Benford's Law: Theory and Applications (editor): https://press.princeton.edu/books/hardcover/9780691147611/benfords-law • An Invitation to Modern Number Theory (with Ramin Takloo-Bighash): https://press.princeton.edu/books/hardcover/9780691120607/an-invitation-to-modern-number-theory • The Mathematics of Encryption: An Elementary Introduction (with Margaret Cozzens): https://bookstore.ams.org/mawrld-29 • Mathematics of Optimization: How to do Things Faster: https://bookstore.ams.org/amstext-30/ • The Probability Lifesaver: All the Tools You Need to Understand Chance: https://press.princeton.edu/books/hardcover/9780691149547/the-probability-lifesaver Controversies In the aftermath of the 2020 United States presidential election, Miller performed a statistical analysis of the integrity of mail-in voting in Pennsylvania. The data underlying the analysis was collected by former Trump staffer Matt Braynard's Voter Integrity Fund, which called 20,000 Republican voters in Pennsylvania who, according to state records, had requested but not returned ballots.
Of the 20,000 called, 2,684 agreed to take the survey; 463 of these reported that they actually had mailed in a ballot, and 556 reported that they had not requested a ballot in the first place.[6] In his statement to the court (Exhibit A of Donald J. Trump for President v. Boockvar), Miller stated: "I estimate that with a reasonable degree of mathematical certainty (based on the data I received being accurate and a representative sample of the population) the number of the 165,412 mail-in ballots requested by someone other than the registered Republican is at least 37,000, and the number of the 165,412 mail-in ballots requested by registered Republicans and returned but not counted is at least 38,910 ... The analysis is based on responses from a data set drawn from 165,412 registered Republican voters who had a mail-in ballot requested in their name but not counted in the election. We estimate on the order of 41,000 of these ballots were requested by someone other than the proper voter. Who made such requests, and why? One possible explanation is that ballots were requested by others. Another possible explanation is that a large number of people requested ballots and forgot they did so later. Again, the conclusions above are based on the data provided being both accurate and a representative sample."[7] Miller's statement drew sharp criticism from his peers, centered on the low response rate of the phone survey, which yielded unrepresentative data on which Miller's estimates were based. Miller apologized for the "lack of clarity and due diligence" in a leaked early draft of his work.[6] Richard D. De Veaux, Vice President of the American Statistical Association and Professor of Statistics at Williams College, commented "any estimates based on unverifiable or biased data are inaccurate, wrong and unfounded.
To apply naïve statistical formulas to biased data and publish this is both irresponsible and unethical".[8] In interviews Miller has gone on the record about being a conservative.[9] Research Experiences for Undergraduates Miller has continuously run summer research groups in, among other topics, Benford's law, combinatorics, discrete geometry, number theory, probability, and random matrix theory at Williams College as part of the SMALL REU (Research Experiences for Undergraduates). In 2020, with several colleagues, he helped create the Polymath REU in response to the loss of opportunities for student research after many summer programs were cancelled because of COVID-19. The program has been supported by the National Science Foundation and Elsevier. From its homepage: Our goal is to provide research opportunities to every undergraduate who wishes to explore advanced mathematics. The program consists of research projects in a variety of mathematical topics and runs in the spirit of the Polymath Project. Each project is mentored by an active researcher with experience in undergraduate mentoring. Each project consists of 20-30 undergraduates, a main mentor, and additional mentors (usually graduate students). This group works towards solving a research problem and writing a paper. Each participant decides what they wish to obtain from the program, and participates accordingly. Students interested in either program should apply through Math Programs. College Courses Starting in 2014, and consistently from 2016 onward, Miller has recorded his courses and made them freely available through YouTube. Below is a subset; the complete list, including different iterations of each course, is available at his homepage.
• Advanced Applied Analysis: Math 466: https://web.williams.edu/Mathematics/sjmiller/public_html/466Fa17/index.htm • Advanced Applied Linear Programming: Math 416: https://web.williams.edu/Mathematics/sjmiller/public_html/416/index.htm • Advanced Analysis: Math 389: https://web.williams.edu/Mathematics/sjmiller/public_html/389/index.htm • Applied Analysis: Math 317 (Operations Research): https://web.williams.edu/Mathematics/sjmiller/public_html/317/index.htm • Complex Analysis: Math 383: https://web.williams.edu/Mathematics/sjmiller/public_html/383Fa21/ • Multivariable Calculus: Math 150: http://web.williams.edu/Mathematics/sjmiller/public_html/150Sp21/ • Number Theory: Math 313: https://web.williams.edu/Mathematics/sjmiller/public_html/313Sp17/index.htm • Operations Research: Math 317: https://web.williams.edu/Mathematics/sjmiller/public_html/317Fa19/ • Probability: Math/Stat 341: https://web.williams.edu/Mathematics/sjmiller/public_html/341Fa21/ • Problem Solving: Math 331: http://web.williams.edu/Mathematics/sjmiller/public_html/331Fa18/ References 1. "Steven J. Miller". Retrieved 11 May 2020. 2. Steven J. Miller at the Mathematics Genealogy Project 3. "Steven J". Web.williams.edu. Retrieved 2016-01-20. 4. "Steven J. Miller". Williams College. Retrieved 11 May 2020. 5. 2019 Class of the Fellows of the AMS, American Mathematical Society, retrieved 2018-11-07 6. Paris, Francesca (24 November 2020). "Williams prof disavows own finding of mishandled GOP ballots". The Berkshire Eagle. 7. https://williamsrecord.com/wp-content/uploads/2020/11/Attachment-A.pdf 8. De Veaux, Richard D. (25 November 2020). "A rebuttal to Steven Miller's "REPORT ON PA GOP MAIL-IN BALLOT REQUESTS"". The Williams Record. 9. Paris, Francesca (2020-11-24). "Williams prof disavows own finding of mishandled GOP ballots". The Berkshire Eagle.
In interviews with The Williams Record and the public radio station WAMC, Miller has gone on the record about being a conservative. External links • Steven J. Miller at the Mathematics Genealogy Project
Steven G. Krantz Steven George Krantz (born February 3, 1951) is an American scholar, mathematician, and writer. He has authored more than 350 research papers and published more than 150 books.[1] Additionally, Krantz has edited journals such as the Notices of the American Mathematical Society and The Journal of Geometric Analysis. Steven George Krantz. Born: February 3, 1951, San Francisco, California, US. Alma mater: University of California at Santa Cruz; Princeton University. Known for: complex analysis, harmonic analysis, partial differential equations, differential geometry, Lie theory, geometric measure theory. Spouse: Randi D. Ruden (m. 1974). Awards: Chauvenet Prize (1992). Institutions: UCLA, Princeton University, Penn State, Washington University in St. Louis. Doctoral advisors: Elias M. Stein, Joseph J. Kohn. Early life and education Steven Krantz grew up in Redwood City, California and graduated from Sequoia High School in the class of 1967.[1] Krantz was an undergraduate at the University of California, Santa Cruz (UCSC), graduating summa cum laude in 1971. In the math department at UCSC his teachers included Nick Burgoyne, Marvin Greenberg, Ed Landesman, and Stan Philipp. Krantz obtained his Ph.D. in mathematics from Princeton University in 1974 under the direction of Elias M. Stein and Joseph J. Kohn.
Other influences included Fred Almgren, Robert Gunning, and Ed Nelson.[2] Biography Krantz's research interests include several complex variables, harmonic analysis, partial differential equations, differential geometry, interpolation of operators, Lie theory, smoothness of functions, convexity theory, the corona problem, the inner functions problem, Fourier analysis, singular integrals, Lusin area integrals, Lipschitz spaces, finite difference operators, Hardy spaces, functions of bounded mean oscillation, geometric measure theory, sets of positive reach, the implicit function theorem, approximation theory, real analytic functions, analysis on the Heisenberg group, complex function theory, and real analysis.[3] He applied wavelet analysis to plastic surgery, creating software for facial recognition.[4] Krantz has also written software for the pharmaceutical industry. Krantz has worked on the inhomogeneous Cauchy–Riemann equations (he obtained the first sharp estimates in a variety of nonisotropic norms), on separate smoothness of functions (most notably with hypotheses about smoothness along integral curves of vector fields), on analysis on the Heisenberg group and other nilpotent Lie groups, on harmonic analysis in several complex variables, on the function theory of several complex variables, on the harmonic analysis of several real variables, on partial differential equations, on complex geometry, on the automorphism groups of domains in complex space, and on the geometry of complex domains. He has worked with Siqi Fu, Robert E. Greene, Alexander Isaev and Kang-Tae Kim on the Bergman kernel, the Bergman metric, and automorphism groups of domains; with Song-Ying Li on the harmonic analysis of several complex variables; and with Marco Peloso on harmonic analysis, the inhomogeneous Cauchy–Riemann equations, Hodge theory, and the analysis of the worm domain. Krantz's book on the geometry of complex domains, written jointly with Robert E.
Greene and Kang-Tae Kim, appeared in 2011. Krantz's monographs include Function Theory of Several Complex Variables, Complex Analysis: The Geometric Viewpoint, A Primer of Real Analytic Functions (joint with Harold R. Parks), The Implicit Function Theorem (joint with Harold Parks), Geometric Integration Theory (joint with Harold Parks), and The Geometry of Complex Domains (joint with Kang-Tae Kim and Robert E. Greene). His book The Proof is in the Pudding: A Look at the Changing Nature of Mathematical Proof looks at the history and evolving nature of the proof concept. Krantz's book A Mathematician Comes of Age, published by the Mathematical Association of America, is an exploration of the concept of mathematical maturity. Krantz is the author of textbooks and popular books.[5] His books Mathematical Apocrypha and Mathematical Apocrypha Redux are collections of anecdotes about famous mathematicians.[2] Krantz's book An Episodic History of Mathematics: Mathematical Culture through Problem Solving is a blend of history and problem solving. A Mathematician's Survival Guide and The Survival of a Mathematician are about how to get into the mathematics profession and how to survive in it. Krantz's book with Harold R. Parks, A Mathematical Odyssey: Journey from the Real to the Complex, is an entrée to mathematics for the layman. His book I, Mathematician (joint with Peter Casazza and Randi D. Ruden) is a study, with contributions from many mathematicians, of how mathematicians think of themselves and how others think of mathematicians. The book The Theory and Practice of Conformal Geometry is a study of classical conformal geometry in the complex plane, and is the first Dover book that is not a reprint of a classic but is instead a new book. Krantz has had 9 master's students and 20 Ph.D. students. Among the latter are Xiaojun Huang (holder of the Bergman Prize), Marco Peloso, Fausto Di Biase, Daowei Ma, and Siqi Fu.
Krantz has organized conferences, including the Summer Workshop in Several Complex Variables held in Santa Cruz in 1989 and attended by 250 people. He was the principal lecturer at a CBMS conference at George Mason University in 1992. He organized and spoke at a conference on the corona problem held at the Fields Institute in Toronto, Canada in June 2012. In 2012 he became a Fellow of the American Mathematical Society.[6] Krantz has an Erdős number of 1. In recent years Krantz has collaborated with Arni S. R. Rao of Augusta University to study the COVID-19 epidemic. They have more than twenty papers and book chapters as well as several virtual seminars on the topic. Teaching Krantz has taught at University of California, Los Angeles, Princeton University, Pennsylvania State University, and Washington University in St. Louis, where he served as chair of the mathematics department. He has been a visiting faculty member at the Institute for Advanced Study, Princeton, the University of Paris, the Universidad Autónoma de Madrid, Pohang Institute of Science and Technology, the Mathematical Sciences Research Institute, the American Institute of Mathematics, Australian National University (as the Richardson Fellow), Texas A&M (as the Frontiers Lecturer), the University of Umeå, Uppsala University, the University of Oslo, Politecnico di Torino, the University of Seoul, Université Paul Sabatier, and Beijing University. Editor Krantz was editor-in-chief of the Notices of the American Mathematical Society for 2010 through 2015.[7] Krantz is also editor-in-chief of the Journal of Mathematical Analysis and Applications and managing editor and founder of the Journal of Geometric Analysis. He also edits for The American Mathematical Monthly, Complex Variables and Elliptic Equations, and The Bulletin of the American Mathematical Society. Krantz is editor-in-chief of the Springer journal Complex Analysis and its Synergies.
Awards and recognitions • Distinguished Teaching Award, UCLA Alumni Association, 1979[8] • Chauvenet Prize of the MAA, 1992[9][10] • Beckenbach Book Prize of the MAA, 1994[11] • Kemper Prize, 1994[12] • Outstanding Academic Book Award, Current Review for Academic Libraries, 1998[13] • Washington University Faculty Mentor Award, 2007[14] • Sequoia High School Hall of Fame inductee, 2009[15] • Listed in Who's Who and American Men and Women of Science • Fellow of the American Mathematical Society, 2012[16] • Erdős number of 1 Selected publications Krantz has published more than 230 scholarly articles and 130 books. • Krantz, Steven G. (1980), "Holomorphic functions of bounded mean oscillation and mapping properties of the Szegő projection.", Duke Mathematical Journal, 47 (4): 743–761, doi:10.1215/S0012-7094-80-04744-4 • Krantz, Steven G.; Greene, Robert (1982), "Deformations of complex structure, estimates for the Cauchy–Riemann equations, and stability of the Bergman kernel.", Advances in Mathematics, 43: 1–86, doi:10.1016/0001-8708(82)90028-7 • Krantz, Steven G.; Burns, Daniel (1994), "Rigidity of holomorphic mappings and a new Schwarz lemma at the boundary.", Journal of the American Mathematical Society, 7 (3): 661–676, doi:10.2307/2152787, JSTOR 2152787 • Krantz, Steven G.; Kim, Kang-Tae (2008), "Complex scaling and geometric analysis of several variables.", Bulletin of the Korean Mathematical Society, 45 (3): 523–561, arXiv:math/0610710, doi:10.4134/bkms.2008.45.3.523, S2CID 14078303 • Freshman Calculus (with Bonic, Robert A., and Cranford, Estelle) (D. C. Heath, 1971, ISBN 0669520500) • Calculus: Single and Multivariable (with Blank, Brian E.) (2nd ed., John Wiley and Sons, 2011, ISBN 0470453605) • Function Theory of Several Complex Variables (2nd ed., American Mathematical Society, 2001, ISBN 0-8218-2724-3) • Function Theory of One Complex Variable (with Greene, Robert E.) 
(3rd ed., American Mathematical Society, 2006, ISBN 0821839624) • Complex Analysis: The Geometric Viewpoint (2nd ed., Mathematical Association of America, 2004, ISBN 0-88385-035-4) • A Primer of Real Analytic Functions (with Parks, Harold R.) (2nd ed., Birkhäuser Publishing, 2002, ISBN 0-8176-4264-1) • The Implicit Function Theorem: History, Theory, and Applications (with Parks, Harold R.) (Birkhäuser Publishing, 2002, ISBN 0-8176-4285-4) • A Panorama of Harmonic Analysis (Mathematical Association of America, 1999, ISBN 0-88385-031-1) • A Mathematician's Survival Guide (American Mathematical Society, 2003, ISBN 0-8218-3455-X) • The Survival of a Mathematician (American Mathematical Society, 2008, ISBN 0-8218-4629-9) • Mathematical Apocrypha (Mathematical Association of America, 2002, ISBN 0-88385-539-9) • Mathematical Apocrypha Redux (Mathematical Association of America, 2005, ISBN 0-88385-554-2) • Geometric Integration Theory (with Parks, Harold R.) (Birkhauser, 2008, ISBN 0-8176-4676-0) • The Proof is in the Pudding: The Changing Nature of Mathematical Proof (Springer, 2011, ISBN 0-387-48908-8) • The Geometry of Complex Domains (with Greene, Robert E. and Kim, Kang-Tae) (Birkhauser, 2011, ISBN 0-8176-4139-4) • A Mathematician Comes of Age (Mathematical Association of America, 2011, ISBN 0-88385-578-X) • Elements of Advanced Mathematics, 3rd ed. (Taylor & Francis/CRC Press, 2012, ISBN 978-1439898345) • Real Analysis and Foundations, 2nd ed. 
(Taylor & Francis/CRC Press, 2004, ISBN 978-1584884835) • A TeX Primer for Scientists (with Stanley Sawyer) (Taylor & Francis/CRC Press, 1995, ISBN 978-0849371592) • A Handbook of Typography for the Mathematical Sciences (Taylor & Francis/CRC Press, 2000, ISBN 978-1584881490) • Geometric Analysis of the Bergman Kernel and Metric (Springer, 2013, ISBN 978-1-4614-7923-9) • How to Teach Mathematics: Third Edition (American Math Society, 2015) • The Theory and Practice of Conformal Geometry (Dover Publishing, 2015) • I, Mathematician, I (with Peter Casazza and Randi D. Ruden) (Mathematical Association of America, 2015) • I, Mathematician, II (with Peter Casazza and Randi D. Ruden) (COMAP, 2016) • A Primer of Mathematical Writing, 2nd edition (American Mathematical Society, 2017) • Harmonic and Complex Analysis in Several Variables (Springer, 2017, ISBN 978-3-319-63229-2) • Geometric Analysis of the Bergman Kernel and Metric (Birkhauser, 2013) • A Guide to Functional Analysis (Mathematical Association of America, 2013) • Foundations of Real Analysis (Taylor & Francis/CRC Press, 2013) • Convex Analysis (Taylor & Francis, 2015) • Essentials of Mathematical Thinking (Taylor & Francis, 2017) • Handbook of Complex Analysis (Taylor & Francis, 2017) • Transition to Analysis with Proof (Taylor & Francis, 2017) • Elementary Introduction to the Lebesgue Integral (Taylor & Francis, 2018) • An Episodic History of Mathematics: Mathematical Culture through Problem Solving (Mathematical Association of America, 2010) References 1. Bishop, Shaun (2009-03-27). "Sequoia High School alumni inducted into Hall of Fame". The Mercury News. Retrieved 2018-04-06. 2. Washington University Newsroom 3. Washington University News and Information 4. Tony Fitzpatrick (September 20, 2002). "Researchers collaborate to make plastic surgery more precise". Record. Washington University. Retrieved June 16, 2013. 5. Steven G. Krantz home page 6. 
List of Fellows of the American Mathematical Society, retrieved 2013-01-27. 7. Elaine Kehoe (December 2009). "Steven G. Krantz Appointed Notices Editor" (PDF). Notices of the American Mathematical Society. 56 (11): 1445–1446. Retrieved June 16, 2013. 8. "Distinguished Teaching Awards". UCLA General Catalog. Retrieved 2022-11-14. 9. "Chauvenet Prizes". Mathematical Association of America. Retrieved 2022-11-14. 10. Krantz, Steven G. (1987). "What Is Several Complex Variables?". The American Mathematical Monthly. Taylor & Francis. 94 (3): 236–256. doi:10.1080/00029890.1987.12000623. ISSN 0002-9890. 11. Beckenbach Prize of the MAA, 1994 12. Kemper Prize, 1994 13. Outstanding Academic Book Award, 1998 14. Washington University Faculty Mentor Award, 2007 15. Sequoia High School Hall of Fame, 2009 16. "Fellows of the American Mathematical Society". American Mathematical Society. 2018-11-26. Retrieved 2022-11-14. External links • Faculty page at Washington University • Steven G. Krantz at the Mathematics Genealogy Project
Steven Kleiman Steven Lawrence Kleiman (born March 31, 1942) is an American mathematician. Steven Kleiman. Born: March 31, 1942, Boston, Massachusetts, U.S. Alma mater: Massachusetts Institute of Technology; Harvard University. Fields: Mathematics. Institutions: Massachusetts Institute of Technology. Doctoral advisor: Oscar Zariski. Doctoral students: Spencer Bloch, Ranee Brylinski, Susan Jane Colley, George Kempf, Dan Laksov, Ragni Piene, Abramo Hefez. Professional career Kleiman is a professor emeritus of mathematics at the Massachusetts Institute of Technology. Born in Boston, he did his undergraduate studies at MIT. He received his Ph.D. from Harvard University in 1965, after studying there with Oscar Zariski and David Mumford, and joined the MIT faculty in 1969.[1] Kleiman held a NATO Postdoctoral Fellowship (1966–1967), a Sloan Fellowship (1968), and a Guggenheim Fellowship (1979). Contributions Kleiman is known for his work in algebraic geometry and commutative algebra. He has made seminal contributions to motivic cohomology, moduli theory, intersection theory, and enumerative geometry. A 2002 study of 891 academic collaborations in enumerative geometry and intersection theory covered by Mathematical Reviews found that he was not only the most prolific author in those areas, but also the one with the most collaborative ties, and the most central author of the field in terms of closeness centrality; the study's authors proposed to name the collaboration graph of the field in his honor.[2] Awards and honors In 1989 the University of Copenhagen awarded him an honorary doctorate[3] and in May 2002 the Norwegian Academy of Science and Letters hosted a conference in honor of his 60th birthday and elected him as a foreign member.[4] In 1992 Kleiman was elected a foreign member of the Royal Danish Academy of Sciences and Letters.
In 2012 he became a fellow of the American Mathematical Society.[5] He was an invited speaker at the International Congress of Mathematicians in Nice in 1970.[6] Selected publications • Kleiman, Steven L. (1966), "Toward a numerical theory of ampleness", Annals of Mathematics, Second Series, 84 (3): 293–344, doi:10.2307/1970447, JSTOR 1970447. • Kleiman, S. L. (1968), Algebraic cycles and Weil conjectures. Dix exposés sur la cohomologie des schémas, North-Holland, Amsterdam; Masson, Paris, pp. 359–386. • Altman, I.; Kleiman, Steven L. (1970), Introduction to Grothendieck duality theory, Springer-Verlag. • Kleiman, Steven L. (1974), "The transversality of a general translate", Compositio Mathematica, 28 (3): 287–297. • Altman, Allen B.; Kleiman, Steven L. (1980), "Compactifying the Picard scheme", Advances in Mathematics, 35 (1): 50–112, doi:10.1016/0001-8708(80)90043-2. • Kleiman, Steven; Thorup, Anders L. (1994), "A geometric theory of the Buchsbaum-Rim multiplicity", Journal of Algebra, 167 (1): 168–231, doi:10.1006/jabr.1994.1182. • Gaffney, T.; Kleiman, Steven L. (1999), "Specialization of integral dependence for modules", Inventiones Mathematicae, 137 (3): 541–574, arXiv:alg-geom/9610003, Bibcode:1999InMat.137..541G, doi:10.1007/s002220050335, S2CID 7215999. See also • Cone of curves (Kleiman-Mori cone) • Kleiman's theorem References 1. Duren, Peter L.; Askey, Richard (1989), A Century of Mathematics in America, American Mathematical Society, p. 543, ISBN 0-8218-0124-4. 2. Alberich, R.; Miret, J. M.; Miro-Julia, J.; Rosselló, F.; Xambó, S. (2002), The Kleiman graph (PDF). 3. MIT TechTalk, February 14, 1990. 4. Reports to the president 2001–2002, MIT Mathematics Dept.; Conference in Honor of Steven Kleiman's 60th Birthday, Dan Grayson, Univ. of Illinois. 5. List of Fellows of the American Mathematical Society, retrieved January 27, 2013. 6.
International Mathematical Union (IMU) External links • Steven Kleiman at the Mathematics Genealogy Project
Steve Shnider Steve Shnider is a retired professor of mathematics at Bar Ilan University.[1] He received a PhD in Mathematics from Harvard University in 1972, under Shlomo Sternberg.[2] His main interests are in the differential geometry of fiber bundles; algebraic methods in the theory of deformation of geometric structures; symplectic geometry; supersymmetry; operads; and Hopf algebras. He retired in 2014.[3] Book on operads A 2002 book by Markl, Shnider, and Stasheff, Operads in algebra, topology, and physics, was the first book to provide a systematic treatment of operad theory, an area of mathematics that came to prominence in the 1990s and found many applications in algebraic topology, category theory, graph cohomology, representation theory, algebraic geometry, combinatorics, knot theory, moduli spaces, and other areas. The book was the subject of a Featured Review in Mathematical Reviews by Alexander A. Voronov[4][5] which stated, in particular: "The first book whose main goal is the theory of operads per se ... a book such as this one has been long awaited by a wide scientific readership, including mathematicians and theoretical physicists ... a great piece of mathematical literature and will be helpful to anyone who needs to use operads, from graduate students to mature mathematicians and physicists." Bibliography According to Mathematical Reviews, Shnider's work has been cited over 300 times by over 300 authors by 2010. Books • Markl, Martin; Shnider, Steven; Stasheff, James D (2002), Operads in algebra, topology, and physics, Mathematical surveys and monographs, v.
96, American Mathematical Society, ISBN 978-0-8218-2134-3, OCLC 318373640 • Shnider, Steven; Sternberg, Shlomo (1993), Quantum groups : from coalgebras to Drinfeld algebras : a guided tour, Graduate texts in mathematical physics, 2., International Press, cop, ISBN 978-1-57146-000-4, OCLC 438550743 • Shnider, Steven; Wells, Raymond O (1989), Supermanifolds, super twistor spaces and super Yang-Mills fields, Séminaire de Mathematiques Supérieures, Séminaire Scientifique OTAN (Nato advanced study institute), Département de Mathématiques et de Statistique, Université de Montréal, 106, Montréal (Québec) Presses de l'Univ. de Montréal, ISBN 978-2-7606-0286-1, OCLC 230986063 Selected papers • Katz, Mikhail G.; Schaps, David; Shnider, Steve (2013), "Almost Equal: The Method of Adequality from Diophantus to Fermat and Beyond", Perspectives on Science, 21 (3): 283–324, arXiv:1210.7750, Bibcode:2012arXiv1210.7750K, doi:10.1162/POSC_a_00101, S2CID 57569974. • Bangert, Victor; Katz, Mikhail G; Shnider, Steven; Weinberger, Shmuel (2009), "E7, Wirtinger inequalities, Cayley 4-form, and homotopy", Duke Mathematical Journal, Duke University Press, 146 (1): 35–70, arXiv:math.DG/0608006, doi:10.1215/00127094-2008-061, OCLC 298960889, S2CID 2575584 References 1. "Prof. Steven Shnider - staff page" (in Hebrew). Bar Ilan University. Archived from the original on 2011-02-21. Retrieved 2010-01-21. 2. Steven David Shnider, Mathematics Genealogy Project. Accessed January 24, 2010 3. Meeting in honor of Steve Shnider's Retirement. Bar Ilan University. Accessed June 28, 2019 4. Alexander A. Voronov, "Review of Operads in algebra, topology and physics", MR 1898414 (2003f:18011), MathSciNet. 5. AMS Bookstore listing, American Mathematical Society.
Accessed January 24, 2010
Stewart's theorem In geometry, Stewart's theorem yields a relation between the lengths of the sides and the length of a cevian in a triangle. Its name is in honour of the Scottish mathematician Matthew Stewart, who published the theorem in 1746.[1] Statement Let a, b, c be the lengths of the sides of a triangle. Let d be the length of a cevian to the side of length a. If the cevian divides the side of length a into two segments of length m and n, with m adjacent to c and n adjacent to b, then Stewart's theorem states that $b^{2}m+c^{2}n=a(d^{2}+mn).$ A common mnemonic used by students to memorize this equation (after rearranging the terms) is: ${\underset {{\text{A }}man{\text{ and his }}dad}{man\ +\ dad}}=\!\!\!\!\!\!{\underset {{\text{put a }}bomb{\text{ in the }}sink.}{bmb\ +\ cnc}}$ The theorem may be written more symmetrically using signed lengths of segments. That is, take the length AB to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line. In this formulation, the theorem states that if A, B, C are collinear points, and P is any point, then $\left({\overline {PA}}^{2}\cdot {\overline {BC}}\right)+\left({\overline {PB}}^{2}\cdot {\overline {CA}}\right)+\left({\overline {PC}}^{2}\cdot {\overline {AB}}\right)+\left({\overline {AB}}\cdot {\overline {BC}}\cdot {\overline {CA}}\right)=0.$[2] In the special case that the cevian is the median (that is, it divides the opposite side into two segments of equal length), the result is known as Apollonius' theorem. Proof The theorem can be proved as an application of the law of cosines.[3] Let θ be the angle between m and d and θ' the angle between n and d. Then θ' is the supplement of θ, and so cos θ' = −cos θ. 
Applying the law of cosines in the two small triangles using angles θ and θ' produces ${\begin{aligned}c^{2}&=m^{2}+d^{2}-2dm\cos \theta ,\\b^{2}&=n^{2}+d^{2}-2dn\cos \theta '\\&=n^{2}+d^{2}+2dn\cos \theta .\end{aligned}}$ Multiplying the first equation by n and the third equation by m and adding them eliminates cos θ. One obtains ${\begin{aligned}b^{2}m+c^{2}n&=nm^{2}+n^{2}m+(m+n)d^{2}\\&=(m+n)(mn+d^{2})\\&=a(mn+d^{2}),\\\end{aligned}}$ which is the required equation. Alternatively, the theorem can be proved by drawing a perpendicular from the vertex of the triangle to the base and using the Pythagorean theorem to write the distances b, c, d in terms of the altitude. The left and right hand sides of the equation then reduce algebraically to the same expression.[2] History According to Hutton & Gregory (1843, p. 220), Stewart published the result in 1746 when he was a candidate to replace Colin Maclaurin as Professor of Mathematics at the University of Edinburgh. Coxeter & Greitzer (1967, p. 6) state that the result was probably known to Archimedes around 300 B.C.E. They go on to say (mistakenly) that the first known proof was provided by R. Simson in 1751. Hutton & Gregory (1843) state that the result is used by Simson in 1748 and by Simpson in 1752, and its first appearance in Europe given by Lazare Carnot in 1803. See also • Mass point geometry Notes 1. Stewart, Matthew (1746), Some General Theorems of Considerable Use in the Higher Parts of Mathematics, Edinburgh: Sands, Murray and Cochran "Proposition II" 2. Russell 1905, p. 3 3. Proof of Stewart's Theorem at PlanetMath. References • Coxeter, H.S.M.; Greitzer, S.L. (1967), Geometry Revisited, New Mathematical Library #19, The Mathematical Association of America, ISBN 0-88385-619-0 • Hutton, C.; Gregory, O. (1843), A Course of Mathematics, vol. II, Longman, Orme & Co. 
• Russell, John Wellesley (1905), "Chapter 1 §3: Stewart's Theorem", Pure Geometry, Clarendon Press, OCLC 5259132 Further reading • I. S. Amarasinghe, Solutions to the Problem 43.3: Stewart's Theorem (A New Proof for the Stewart's Theorem using Ptolemy's Theorem), Mathematical Spectrum, Vol 43(3), pp. 138–139, 2011. • Ostermann, Alexander; Wanner, Gerhard (2012), Geometry by Its History, Springer, p. 112, ISBN 978-3-642-29162-3 External links • Weisstein, Eric W. "Stewart's Theorem". MathWorld. • Stewart's Theorem at PlanetMath.
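Stewart's relation $b^{2}m+c^{2}n=a(d^{2}+mn)$ holds for any triangle and any cevian, so it can be checked numerically from coordinates. The sketch below is purely illustrative (the function name and the test triangles are arbitrary choices, not part of the theorem's literature):

```python
import math

def check_stewart(A, B, C, t):
    """Check b^2*m + c^2*n = a*(d^2 + m*n) for the cevian drawn from
    vertex A to the point P that divides side BC as P = B + t*(C - B)."""
    dist = math.dist
    a, b, c = dist(B, C), dist(C, A), dist(A, B)   # side lengths, a = BC
    P = (B[0] + t * (C[0] - B[0]), B[1] + t * (C[1] - B[1]))
    d = dist(A, P)                                  # cevian length
    m, n = dist(B, P), dist(P, C)                   # m adjacent to c, n adjacent to b
    return math.isclose(b * b * m + c * c * n, a * (d * d + m * n), rel_tol=1e-9)

print(check_stewart((0.0, 0.0), (5.0, 0.0), (1.0, 4.0), 0.3))   # True
```

With t = 0.5 the cevian is the median, and the same check verifies the Apollonius special case.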
Centrum Wiskunde & Informatica

The Centrum Wiskunde & Informatica (abbr. CWI; English: "National Research Institute for Mathematics and Computer Science") is a research centre in the field of mathematics and theoretical computer science. It is part of the institutes organization of the Dutch Research Council (NWO) and is located at the Amsterdam Science Park. This institute is famous as the creation site of the programming language Python. It was a founding member of the European Research Consortium for Informatics and Mathematics (ERCIM).

Type: National research institute • Established: 1946 • President: Prof.dr. A.G. de Kok • Administrative staff: ~200 • Location: Amsterdam, Netherlands • Website: www.cwi.nl

Early history

The institute was founded in 1946 by Johannes van der Corput, David van Dantzig, Jurjen Koksma, Hendrik Anthony Kramers, Marcel Minnaert and Jan Arnoldus Schouten. It was originally called Mathematical Centre (in Dutch: Mathematisch Centrum). One early mission was to develop mathematical prediction models to assist large Dutch engineering projects, such as the Delta Works. During this early period, the institute also helped with designing the wings of the Fokker F27 Friendship airplane, voted in 2006 as the most beautiful Dutch design of the 20th century.[1][2] The computer science component developed soon after. Adriaan van Wijngaarden, considered the founder of computer science (or informatica) in the Netherlands, was the director of the institute for almost 20 years. Edsger Dijkstra did most of his early influential work on algorithms and formal methods at CWI. The first Dutch computers, the Electrologica X1 and Electrologica X8, were both designed at the centre, and Electrologica was created as a spinoff to manufacture the machines.
In 1983, the name of the institute was changed to Centrum Wiskunde & Informatica (CWI) to reflect a governmental push for emphasizing computer science research in the Netherlands.[3]

Recent research

The institute is known for its work in fields such as operations research, software engineering, information processing, and mathematical applications in life sciences and logistics. More recent examples of research results from CWI include the development of scheduling algorithms for the Dutch railway system (the Nederlandse Spoorwegen, one of the busiest rail networks in the world) and the development of the Python programming language by Guido van Rossum. Python has played an important role in the development of the Google search platform from the beginning, and it continues to do so as the system grows and evolves.[4] Many information retrieval techniques used by packages such as SPSS were initially developed by Data Distilleries, a CWI spinoff.[5][6] Work at the institute has been recognized by national and international research awards, such as the Lanchester Prize (awarded yearly by INFORMS), the Gödel Prize (awarded by ACM SIGACT) and the Spinoza Prize. Most of its senior researchers hold part-time professorships at other Dutch universities, with the institute producing over 170 full professors during the course of its history. Several CWI researchers have been recognized as members of the Royal Netherlands Academy of Arts and Sciences, the Academia Europaea, or as knights in the Order of the Netherlands Lion.[7] In February 2017, CWI in association with Google announced a successful collision attack on the SHA-1 cryptographic hash algorithm.[8]

European Internet

CWI was an early user of the Internet in Europe, in the form of a TCP/IP connection to NSFNET. Piet Beertema at CWI established one of the first two connections outside the United States to the NSFNET (shortly after France's INRIA)[9][10][11] for EUnet on 17 November 1988.
The first domain name registered under the Dutch country code top-level domain (.nl) was cwi.nl.[12][13][14] When cwi.nl was registered, on 1 May 1986, .nl effectively became the first active ccTLD outside the United States.[15] For the first ten years CWI, or rather Beertema, managed the .nl administration; in 1996 this task was transferred to its spin-off SIDN.[12] The Amsterdam Internet Exchange (one of the largest Internet exchanges in the world, in terms of both members and throughput traffic) is located at the neighbouring SARA (an early CWI spin-off) and Nikhef institutes. The World Wide Web Consortium (W3C) office for the Benelux countries is located at CWI.[16]

Spin-off companies

CWI has demonstrated a continuing effort to put the work of its researchers at the disposal of society, mainly by collaborating with commercial companies and creating spin-off businesses. In 2000 CWI established "CWI Incubator BV", a dedicated company with the aim of generating high-tech spin-off companies.[17] Some of the CWI spin-offs include:[18]

• 1956: Electrologica, a pioneering Dutch computer manufacturer.
• 1971: SARA (now called SURF), founded as a center for data processing activities for Vrije Universiteit Amsterdam, Universiteit van Amsterdam, and the CWI.
• 1990: DigiCash, an electronic money corporation founded by David Chaum.
• 1994: NLnet, an Internet Service Provider.
• 1994: General Design / Satama Amsterdam, a design company, acquired by LBi (then Lost Boys international).
• 1995: Data Distilleries, developer of analytical database software aimed at information retrieval, eventually becoming part of SPSS and acquired by IBM.
• 1996: Stichting Internet Domeinregistratie Nederland (SIDN), the .nl top-level domain registrar.
• 2000: Software Improvement Group (SIG), a software improvement and legacy code analysis company.
• 2008: MonetDB, a high-tech database technology company, developer of the MonetDB column-store.
• 2008: Vectorwise, an analytical database technology company, founded in cooperation with the Ingres Corporation (now Actian) and eventually acquired by it. • 2010: Spinque, a company providing search technology for information retrieval specialists. • 2013: MonetDB Solutions, a database services company. • 2016: Seita, a technology company providing demand response services for the energy sector. Software and languages • ABC programming language • Algol 60 • Algol 68 • Alma-0, a multi-paradigm computer programming language • ASF+SDF Meta Environment, programming language specification and prototyping system, IDE generator • Cascading Style Sheets • MonetDB • NetHack • Python programming language • RascalMPL, general purpose meta programming language • RDFa • SMIL • van Wijngaarden grammar • XForms • XHTML • XML Events Notable people • Richard Askey • Adrian Baddeley • Theo Bemelmans • Piet Beertema • Jan Bergstra • Gerrit Blaauw • Peter Boncz • Hugo Brandt Corstius • Stefan Brands • Andries Brouwer • Harry Buhrman • Dick Bulterman • David Chaum • Ronald Cramer • Theodorus Dekker • Edsger Dijkstra • Constance van Eeden • Peter van Emde Boas • Richard D. Gill • Jan Friso Groote • Dick Grune • Michiel Hazewinkel • Jan Hemelrijk • Martin L. Kersten • Willem Klein • Jurjen Ferdinand Koksma • Tom Koornwinder • Kees Koster • Monique Laurent • Gerrit Lekkerkerker • Arjen Lenstra • Jan Karel Lenstra • Gijsbert de Leve • Barry Mailloux • Massimo Marchiori • Lambert Meertens • Rob Mokken • Albert Nijenhuis • Steven Pemberton • Herman te Riele • Guido van Rossum • Alexander Schrijver • Jan H. van Schuppen • Marc Stevens • John Tromp • John V. Tucker • Paul Vitányi • Hans van Vliet • Marc Voorhoeve • Adriaan van Wijngaarden • Ronald de Wolf • Peter Wynn References 1. "Fokker F27 Friendship wins 2006 Best Dutch Design Election". 2. "Fokker Friendship beste Nederlandse design". 5 May 2006. 3. Bennie Mols: ERCOM: The Centrum voor Wiskunde en Informatica turns 60. 
In: Newsletter of the European Mathematical Society, No. 56 (September 2007), p. 43 (online) 4. "Quotes about Python". Python.org. Retrieved 13 July 2012. 5. "SPSS and Data Distilleries". Python.org. Archived from the original on 24 February 2015. Retrieved 24 February 2015. 6. Sumath, S; Sivanandam, S.N. (2006). Introduction to Data Mining and its Applications. Springer Berlin Heidelberg. p. 743. doi:10.1007/978-3-540-34351-6. ISBN 978-3-540-34350-9. 7. "Lex Schrijver receives EURO Gold Medal 2015". cwi. 25 April 2013. Retrieved 19 February 2018. 8. Announcing the first SHA1 collision 9. "The path to digital literacy and network culture in France (1980s to 1990s)". The Routledge Companion to Global Internet Histories. Taylor & Francis. 2017. pp. 84–89. ISBN 978-1317607656. 10. [Et Dieu crea l'Internet, Christian Huitema, ISBN 2-212-08855-8, 1995, page 10] 11. Andrianarisoa, Menjanirina (2 March 2012). "A brief history of the internet". 12. "CWI History: details". CWI. Retrieved 9 February 2020. 13. (in Dutch) De geschiedenis van SIDN Archived 2013-07-27 at the Wayback Machine (History of SIDN), Official website of SIDN 14. "Kees Neggers: Global Networking Requires Global Collaboration | Internet Hall of Fame". www.internethalloffame.org. Retrieved 3 April 2020. 15. "Our milestones". SIDN. Archived from the original on 8 August 2020. Retrieved 17 August 2022. 16. "The World Wide Web Consortium - Benelux Office". W3C. Retrieved 17 August 2022. 17. "Spin-off companies' details". CWI Amsterdam. Retrieved 8 July 2014. 18. "Spin-off companies". CWI Amsterdam. Retrieved 8 July 2014. 
External links • Official website
Dirichlet process In probability theory, Dirichlet processes (after the distribution associated with Peter Gustav Lejeune Dirichlet) are a family of stochastic processes whose realizations are probability distributions. In other words, a Dirichlet process is a probability distribution whose range is itself a set of probability distributions. It is often used in Bayesian inference to describe the prior knowledge about the distribution of random variables—how likely it is that the random variables are distributed according to one or another particular distribution. As an example, a bag of 100 real-world dice is a random probability mass function (random pmf)—to sample this random pmf you put your hand in the bag and draw out a die, that is, you draw a pmf. A bag of dice manufactured using a crude process 100 years ago will likely have probabilities that deviate wildly from the uniform pmf, whereas a bag of state-of-the-art dice used by Las Vegas casinos may have barely perceptible imperfections. We can model the randomness of pmfs with the Dirichlet distribution.[1] The Dirichlet process is specified by a base distribution $H$ and a positive real number $\alpha $ called the concentration parameter (also known as scaling parameter). The base distribution is the expected value of the process, i.e., the Dirichlet process draws distributions "around" the base distribution the way a normal distribution draws real numbers around its mean. However, even if the base distribution is continuous, the distributions drawn from the Dirichlet process are almost surely discrete. The scaling parameter specifies how strong this discretization is: in the limit of $\alpha \rightarrow 0$, the realizations are all concentrated at a single value, while in the limit of $\alpha \rightarrow \infty $ the realizations become continuous. Between the two extremes the realizations are discrete distributions with less and less concentration as $\alpha $ increases. 
The Dirichlet process can also be seen as the infinite-dimensional generalization of the Dirichlet distribution. In the same way as the Dirichlet distribution is the conjugate prior for the categorical distribution, the Dirichlet process is the conjugate prior for infinite, nonparametric discrete distributions. A particularly important application of Dirichlet processes is as a prior probability distribution in infinite mixture models. The Dirichlet process was formally introduced by Thomas Ferguson in 1973.[2] It has since been applied in data mining and machine learning, among others for natural language processing, computer vision and bioinformatics. Introduction Dirichlet processes are usually used when modelling data that tends to repeat previous values in a so-called "rich get richer" fashion. Specifically, suppose that the generation of values $X_{1},X_{2},\dots $ can be simulated by the following algorithm. Input: $H$ (a probability distribution called base distribution), $\alpha $ (a positive real number called scaling parameter) For $n\geq 1$: a) With probability ${\frac {\alpha }{\alpha +n-1}}$ draw $X_{n}$ from $H$. b) With probability ${\frac {n_{x}}{\alpha +n-1}}$ set $X_{n}=x$, where $n_{x}$ is the number of previous observations of $x$. (Formally, $n_{x}:=|\{j\colon X_{j}=x{\text{ and }}j<n\}|$ where $|\cdot |$ denotes the number of elements in the set.) At the same time, another common model for data is that the observations $X_{1},X_{2},\dots $ are assumed to be independent and identically distributed (i.i.d.) according to some (random) distribution $P$. The goal of introducing Dirichlet processes is to be able to describe the procedure outlined above in this i.i.d. model. The $X_{1},X_{2},\dots $ observations in the algorithm are not independent, since we have to consider the previous results when generating the next value. They are, however, exchangeable. 
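The sequential "rich get richer" procedure described above can be simulated directly. The sketch below is illustrative (the function name and the choice of a uniform base distribution are arbitrary): case (a) draws a fresh value from $H$ with probability $\alpha/(\alpha+k-1)$, and case (b) repeats a past value with probability proportional to its count, which a uniform choice over all previous draws achieves.

```python
import random

def dp_draws(alpha, base_sample, n):
    """Simulate X_1, ..., X_n by the sequential rule: with probability
    alpha/(alpha + k - 1) draw a new value from the base distribution H,
    otherwise repeat a previous value with probability proportional to its count."""
    xs = []
    for k in range(1, n + 1):
        if random.random() < alpha / (alpha + k - 1):
            xs.append(base_sample())           # case (a): fresh draw from H
        else:
            xs.append(random.choice(xs))       # case (b): uniform over past draws
    return xs

random.seed(0)
draws = dp_draws(alpha=1.0, base_sample=random.random, n=1000)
print(len(draws), len(set(draws)))   # few distinct values: the "rich get richer" effect
```

Note that at k = 1 the fresh-draw probability is exactly 1, so the list is never empty when a repeat is attempted.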
This fact can be shown by calculating the joint probability distribution of the observations and noticing that the resulting formula only depends on which $x$ values occur among the observations and how many repetitions they each have. Because of this exchangeability, de Finetti's representation theorem applies and it implies that the observations $X_{1},X_{2},\dots $ are conditionally independent given a (latent) distribution $P$. This $P$ is a random variable itself and has a distribution. This distribution (over distributions) is called a Dirichlet process ($\operatorname {DP} $). In summary, this means that we get an equivalent procedure to the above algorithm: 1. Draw a distribution $P$ from $\operatorname {DP} \left(H,\alpha \right)$ 2. Draw observations $X_{1},X_{2},\dots $ independently from $P$. In practice, however, drawing a concrete distribution $P$ is impossible, since its specification requires an infinite amount of information. This is a common phenomenon in the context of Bayesian non-parametric statistics where a typical task is to learn distributions on function spaces, which involve effectively infinitely many parameters. The key insight is that in many applications the infinite-dimensional distributions appear only as an intermediary computational device and are not required for either the initial specification of prior beliefs or for the statement of the final inference. Formal definition Given a measurable set S, a base probability distribution H and a positive real number $\alpha $, the Dirichlet process $\operatorname {DP} (H,\alpha )$ is a stochastic process whose sample path (or realization, i.e. an infinite sequence of random variates drawn from the process) is a probability distribution over S, such that the following holds. 
For any measurable finite partition of S, denoted $\{B_{i}\}_{i=1}^{n}$, ${\text{if }}X\sim \operatorname {DP} (H,\alpha )$ ${\text{then }}(X(B_{1}),\dots ,X(B_{n}))\sim \operatorname {Dir} (\alpha H(B_{1}),\dots ,\alpha H(B_{n})),$ where $\operatorname {Dir} $ denotes the Dirichlet distribution and the notation $X\sim D$ means that the random variable $X$ has the distribution $D$. Alternative views There are several equivalent views of the Dirichlet process. Besides the formal definition above, the Dirichlet process can be defined implicitly through de Finetti's theorem as described in the first section; this is often called the Chinese restaurant process. A third alternative is the stick-breaking process, which defines the Dirichlet process constructively by writing a distribution sampled from the process as $f(x)=\sum _{k=1}^{\infty }\beta _{k}\delta _{x_{k}}(x)$, where $\{x_{k}\}_{k=1}^{\infty }$ are samples from the base distribution $H$, $\delta _{x_{k}}$ is an indicator function centered on $x_{k}$ (zero everywhere except for $\delta _{x_{k}}(x_{k})=1$) and the $\beta _{k}$ are defined by a recursive scheme that repeatedly samples from the beta distribution $\operatorname {Beta} (1,\alpha )$. The Chinese restaurant process A widely employed metaphor for the Dirichlet process is based on the so-called Chinese restaurant process. The metaphor is as follows: Imagine a Chinese restaurant in which customers enter. A new customer sits down at a table with a probability proportional to the number of customers already sitting there. Additionally, a customer opens a new table with a probability proportional to the scaling parameter $\alpha $. After infinitely many customers entered, one obtains a probability distribution over infinitely many tables to be chosen. This probability distribution over the tables is a random sample of the probabilities of observations drawn from a Dirichlet process with scaling parameter $\alpha $. 
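The stick-breaking construction mentioned above lends itself to a simple truncated simulation: sampling only the first K break points leaves almost no stick behind when K is large. A minimal sketch, with an arbitrary illustrative standard-normal base distribution and truncation level:

```python
import random

def stick_breaking(alpha, base_sample, K):
    """Truncated stick-breaking draw from DP(H, alpha): K atom locations
    theta_k ~ H with weights beta_k = beta'_k * prod_{i<k}(1 - beta'_i)."""
    weights, remaining = [], 1.0
    for _ in range(K):
        b = random.betavariate(1, alpha)   # beta'_k ~ Beta(1, alpha)
        weights.append(remaining * b)
        remaining *= 1.0 - b               # stick left after this break
    atoms = [base_sample() for _ in range(K)]
    return atoms, weights

random.seed(1)
atoms, weights = stick_breaking(alpha=2.0, base_sample=lambda: random.gauss(0.0, 1.0), K=100)
print(round(sum(weights), 6))   # close to 1: the truncation discards little mass
```

Smaller values of alpha put most of the weight on the first few atoms, matching the concentration behaviour described in the introduction.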
If one associates draws from the base measure $H$ with every table, the resulting distribution over the sample space $S$ is a random sample of a Dirichlet process. The Chinese restaurant process is related to the Pólya urn sampling scheme which yields samples from finite Dirichlet distributions. Because customers sit at a table with a probability proportional to the number of customers already sitting at the table, two properties of the DP can be deduced: 1. The Dirichlet process exhibits a self-reinforcing property: The more often a given value has been sampled in the past, the more likely it is to be sampled again. 2. Even if $H$ is a distribution over an uncountable set, there is a nonzero probability that two samples will have exactly the same value because the probability mass will concentrate on a small number of tables. The stick-breaking process A third approach to the Dirichlet process is the so-called stick-breaking process view. Conceptually, this involves repeatedly breaking off and discarding a random fraction (sampled from a Beta distribution) of a "stick" that is initially of length 1. Remember that draws from a Dirichlet process are distributions over a set $S$. As noted previously, the distribution drawn is discrete with probability 1. In the stick-breaking process view, we explicitly use the discreteness and give the probability mass function of this (random) discrete distribution as: $f(\theta )=\sum _{k=1}^{\infty }\beta _{k}\cdot \delta _{\theta _{k}}(\theta )$ where $\delta _{\theta _{k}}$ is the indicator function which evaluates to zero everywhere, except for $\delta _{\theta _{k}}(\theta _{k})=1$. Since this distribution is random itself, its mass function is parameterized by two sets of random variables: the locations $\left\{\theta _{k}\right\}_{k=1}^{\infty }$ and the corresponding probabilities $\left\{\beta _{k}\right\}_{k=1}^{\infty }$. In the following, we present without proof what these random variables are. 
The locations $\theta _{k}$ are independent and identically distributed according to $H$, the base distribution of the Dirichlet process. The probabilities $\beta _{k}$ are given by a procedure resembling the breaking of a unit-length stick (hence the name): $\beta _{k}=\beta '_{k}\cdot \prod _{i=1}^{k-1}\left(1-\beta '_{i}\right)$ where $\beta '_{k}$ are independent random variables with the beta distribution $\operatorname {Beta} (1,\alpha )$. The resemblance to 'stick-breaking' can be seen by considering $\beta _{k}$ as the length of a piece of a stick. We start with a unit-length stick and in each step we break off a portion of the remaining stick according to $\beta '_{k}$ and assign this broken-off piece to $\beta _{k}$. The formula can be understood by noting that after the first k − 1 values have their portions assigned, the length of the remainder of the stick is $\prod _{i=1}^{k-1}\left(1-\beta '_{i}\right)$ and this piece is broken according to $\beta '_{k}$ and gets assigned to $\beta _{k}$. The smaller $\alpha $ is, the less of the stick will be left for subsequent values (on average), yielding more concentrated distributions. The stick-breaking process is similar to the construction where one samples sequentially from marginal beta distributions in order to generate a sample from a Dirichlet distribution.[4] The Pólya urn scheme Yet another way to visualize the Dirichlet process and Chinese restaurant process is as a modified Pólya urn scheme sometimes called the Blackwell-MacQueen sampling scheme. Imagine that we start with an urn filled with $\alpha $ black balls. Then we proceed as follows: 1. Each time we need an observation, we draw a ball from the urn. 2. If the ball is black, we generate a new (non-black) colour uniformly, label a new ball this colour, drop the new ball into the urn along with the ball we drew, and return the colour we generated. 3. 
Otherwise, label a new ball with the colour of the ball we drew, drop the new ball into the urn along with the ball we drew, and return the colour we observed. The resulting distribution over colours is the same as the distribution over tables in the Chinese restaurant process. Furthermore, when we draw a black ball, if rather than generating a new colour, we instead pick a random value from a base distribution $H$ and use that value to label the new ball, the resulting distribution over labels will be the same as the distribution over the values in a Dirichlet process. Use as a prior distribution The Dirichlet Process can be used as a prior distribution to estimate the probability distribution that generates the data. In this section, we consider the model ${\begin{aligned}P&\sim {\textrm {DP}}(H,\alpha )\\X_{1},\cdots ,X_{n}|P&{\overset {\textrm {i.i.d.}}{\sim }}P.\end{aligned}}$ The Dirichlet Process distribution satisfies prior conjugacy, posterior consistency, and the Bernstein–von Mises theorem.[5] Prior conjugacy In this model, the posterior distribution is again a Dirichlet process. This means that the Dirichlet process is a conjugate prior for this model. The posterior distribution is given by ${\begin{aligned}P|X_{1},\cdots ,X_{n}&\sim {\textrm {DP}}\left({\frac {\alpha }{\alpha +n}}H+{\frac {1}{\alpha +n}}\sum _{i=1}^{n}\delta _{X_{i}},\;\alpha +n\right)\\&={\textrm {DP}}\left({\frac {\alpha }{\alpha +n}}H+{\frac {n}{\alpha +n}}\mathbb {P} _{n},\;\alpha +n\right)\end{aligned}}$ where $\mathbb {P} _{n}$ is defined below. Posterior consistency If we take the frequentist view of probability, we believe there is a true probability distribution $P_{0}$ that generated the data. Then it turns out that the Dirichlet process is consistent in the weak topology, which means that for every weak neighbourhood $U$ of $P_{0}$, the posterior probability of $U$ converges to $1$. 
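Prior conjugacy gives a simple recipe for sampling the next observation given data: a draw from the posterior base measure mixes a fresh draw from $H$ (probability $\alpha/(\alpha+n)$) with a uniformly chosen observed value (total probability $n/(\alpha+n)$). A minimal sketch, with an arbitrary illustrative uniform base distribution:

```python
import random

def posterior_predictive(data, alpha, base_sample):
    """One draw from the DP posterior predictive: with probability
    alpha/(alpha + n) a fresh draw from H, otherwise a uniformly chosen
    observation (total weight n/(alpha + n) on the empirical measure)."""
    n = len(data)
    if random.random() < alpha / (alpha + n):
        return base_sample()
    return random.choice(data)

random.seed(2)
obs = [3.1, 3.0, 7.2]
samples = [posterior_predictive(obs, alpha=1.0, base_sample=lambda: random.uniform(0.0, 10.0))
           for _ in range(10000)]
frac_repeats = sum(s in obs for s in samples) / len(samples)
print(frac_repeats)   # close to n/(alpha + n) = 0.75
```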
Bernstein–von Mises theorem

In order to interpret the credible sets as confidence sets, a Bernstein–von Mises theorem is needed. In the case of the Dirichlet process we compare the posterior distribution with the empirical process $\mathbb {P} _{n}={\frac {1}{n}}\sum _{i=1}^{n}\delta _{X_{i}}$. Suppose ${\mathcal {F}}$ is a $P_{0}$-Donsker class, i.e. ${\sqrt {n}}\left(\mathbb {P} _{n}-P_{0}\right)\rightsquigarrow G_{P_{0}}$ for some Brownian bridge $G_{P_{0}}$. Suppose also that there exists a function $F$ with $F(x)\geq \sup _{f\in {\mathcal {F}}}f(x)$ and $\int F^{2}\,\mathrm {d} H<\infty $. Then, $P_{0}$ almost surely, ${\sqrt {n}}\left(P-\mathbb {P} _{n}\right)|X_{1},\cdots ,X_{n}\rightsquigarrow G_{P_{0}}.$ This implies that the credible sets one constructs are asymptotic confidence sets, and that Bayesian inference based on the Dirichlet process is asymptotically also valid frequentist inference.

Use in Dirichlet mixture models

To understand what Dirichlet processes are and the problem they solve we consider the example of data clustering. It is a common situation that data points are assumed to be distributed in a hierarchical fashion where each data point belongs to a (randomly chosen) cluster and the members of a cluster are further distributed randomly within that cluster.

Example 1

For example, we might be interested in how people will vote on a number of questions in an upcoming election. A reasonable model for this situation might be to classify each voter as a liberal, a conservative or a moderate and then model the event that a voter says "Yes" to any particular question as a Bernoulli random variable with the probability dependent on which political cluster they belong to. By looking at how votes were cast in previous years on similar pieces of legislation one could fit a predictive model using a simple clustering algorithm such as k-means.
That algorithm, however, requires knowing in advance the number of clusters that generated the data. In many situations, it is not possible to determine this ahead of time, and even when we can reasonably assume a number of clusters we would still like to be able to check this assumption. For example, in the voting example above the division into liberal, conservative and moderate might not be finely tuned enough; attributes such as a religion, class or race could also be critical for modelling voter behaviour, resulting in more clusters in the model. Example 2 As another example, we might be interested in modelling the velocities of galaxies using a simple model assuming that the velocities are clustered, for instance by assuming each velocity is distributed according to the normal distribution $v_{i}\sim N(\mu _{k},\sigma ^{2})$, where the $i$th observation belongs to the $k$th cluster of galaxies with common expected velocity. In this case it is far from obvious how to determine a priori how many clusters (of common velocities) there should be and any model for this would be highly suspect and should be checked against the data. By using a Dirichlet process prior for the distribution of cluster means we circumvent the need to explicitly specify ahead of time how many clusters there are, although the concentration parameter still controls it implicitly. We consider this example in more detail. A first naive model is to presuppose that there are $K$ clusters of normally distributed velocities with common known fixed variance $\sigma ^{2}$. 
Denoting the event that the $i$th observation is in the $k$th cluster as $z_{i}=k$ we can write this model as: ${\begin{aligned}(v_{i}\mid z_{i}=k,\mu _{k})&\sim N(\mu _{k},\sigma ^{2})\\\operatorname {P} (z_{i}=k)&=\pi _{k}\\({\boldsymbol {\pi }}\mid \alpha )&\sim \operatorname {Dir} \left({\frac {\alpha }{K}}\cdot \mathbf {1} _{K}\right)\\\mu _{k}&\sim H(\lambda )\end{aligned}}$ That is, we assume that the data belongs to $K$ distinct clusters with means $\mu _{k}$ and that $\pi _{k}$ is the (unknown) prior probability of a data point belonging to the $k$th cluster. We assume that we have no initial information distinguishing the clusters, which is captured by the symmetric prior $\operatorname {Dir} \left(\alpha /K\cdot \mathbf {1} _{K}\right)$. Here $\operatorname {Dir} $ denotes the Dirichlet distribution and $\mathbf {1} _{K}$ denotes a vector of length $K$ where each element is 1. We further assign independent and identical prior distributions $H(\lambda )$ to each of the cluster means, where $H$ may be any parametric distribution with parameters denoted as $\lambda $. The hyper-parameters $\alpha $ and $\lambda $ are taken to be known fixed constants, chosen to reflect our prior beliefs about the system. To understand the connection to Dirichlet process priors we rewrite this model in an equivalent but more suggestive form: ${\begin{aligned}(v_{i}\mid {\tilde {\mu }}_{i})&\sim N({\tilde {\mu }}_{i},\sigma ^{2})\\{\tilde {\mu }}_{i}&\sim G=\sum _{k=1}^{K}\pi _{k}\delta _{\mu _{k}}({\tilde {\mu }}_{i})\\({\boldsymbol {\pi }}\mid \alpha )&\sim \operatorname {Dir} \left({\frac {\alpha }{K}}\cdot \mathbf {1} _{K}\right)\\\mu _{k}&\sim H(\lambda )\end{aligned}}$ Instead of imagining that each data point is first assigned a cluster and then drawn from the distribution associated to that cluster we now think of each observation being associated with parameter ${\tilde {\mu }}_{i}$ drawn from some discrete distribution $G$ with support on the $K$ means. 
That is, we are now treating the ${\tilde {\mu }}_{i}$ as being drawn from the random distribution $G$ and our prior information is incorporated into the model by the distribution over distributions $G$. We would now like to extend this model to work without pre-specifying a fixed number of clusters $K$. Mathematically, this means we would like to select a random prior distribution $G({\tilde {\mu }}_{i})=\sum _{k=1}^{\infty }\pi _{k}\delta _{\mu _{k}}({\tilde {\mu }}_{i})$ where the values of the cluster means $\mu _{k}$ are again independently distributed according to $H\left(\lambda \right)$ and the distribution over $\pi _{k}$ is symmetric over the infinite set of clusters. This is exactly what is accomplished by the model: ${\begin{aligned}(v_{i}\mid {\tilde {\mu }}_{i})&\sim N({\tilde {\mu }}_{i},\sigma ^{2})\\{\tilde {\mu }}_{i}&\sim G\\G&\sim \operatorname {DP} (H(\lambda ),\alpha )\end{aligned}}$ With this in hand we can better understand the computational merits of the Dirichlet process. Suppose that we wanted to draw $n$ observations from the naive model with exactly $K$ clusters. A simple algorithm for doing this would be to draw $K$ values of $\mu _{k}$ from $H(\lambda )$, a distribution $\pi $ from $\operatorname {Dir} \left(\alpha /K\cdot \mathbf {1} _{K}\right)$ and then for each observation independently sample the cluster $k$ with probability $\pi _{k}$ and the value of the observation according to $N\left(\mu _{k},\sigma ^{2}\right)$. It is easy to see that this algorithm does not work in the case where we allow infinitely many clusters, because this would require sampling an infinite-dimensional parameter ${\boldsymbol {\pi }}$. However, it is still possible to sample observations $v_{i}$. One can, for example, use the Chinese restaurant representation described below and calculate the probabilities for the occupied clusters and for creating a new cluster. This avoids having to explicitly specify ${\boldsymbol {\pi }}$.
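The Chinese-restaurant sampling scheme just mentioned can be sketched as follows; this is an illustrative implementation with an assumed base distribution $N(0,5^{2})$, not code from the article. Observation $i$ joins an existing cluster $k$ with probability proportional to its current size, or opens a new cluster with probability proportional to $\alpha $, so ${\boldsymbol {\pi }}$ never has to be written down.

```python
import random

def sample_dp_mixture(n, alpha, sigma=1.0, seed=0):
    """Draw n velocities from the DP mixture via the Chinese restaurant
    representation: observation i joins existing cluster k with probability
    counts[k] / (i + alpha) and opens a new cluster with probability
    alpha / (i + alpha). The base distribution H is taken to be N(0, 5^2)."""
    rng = random.Random(seed)
    counts, means, data = [], [], []
    for i in range(n):
        r, acc, k = rng.uniform(0.0, i + alpha), 0.0, None
        for j, c in enumerate(counts):
            acc += c
            if r < acc:
                k = j
                break
        if k is None:                         # open a new cluster
            counts.append(0)
            means.append(rng.gauss(0.0, 5.0))  # mu_k ~ H
            k = len(counts) - 1
        counts[k] += 1
        data.append(rng.gauss(means[k], sigma))
    return data, counts, means
```

With $\alpha =0$ every observation after the first joins the single existing cluster; larger $\alpha $ yields more clusters on average.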
Other solutions are based on a truncation of clusters: a (high) upper bound on the true number of clusters is introduced, and cluster indices higher than this bound are treated as a single cluster. Fitting the model described above based on observed data $D$ means finding the posterior distribution $p\left({\boldsymbol {\pi }},{\boldsymbol {\mu }}\mid D\right)$ over cluster probabilities and their associated means. In the infinite-dimensional case it is obviously impossible to write down the posterior explicitly. It is, however, possible to draw samples from this posterior using a modified Gibbs sampler.[6] This is the critical fact that makes the Dirichlet process prior useful for inference. Applications of the Dirichlet process Dirichlet processes are frequently used in Bayesian nonparametric statistics. "Nonparametric" here does not mean a parameter-less model; rather, it means a model in which representations grow as more data are observed. Bayesian nonparametric models have gained considerable popularity in the field of machine learning because of the above-mentioned flexibility, especially in unsupervised learning. In a Bayesian nonparametric model, the prior and posterior distributions are not parametric distributions, but stochastic processes.[7] The fact that the Dirichlet distribution is a probability distribution on the simplex of vectors of non-negative numbers that sum to one makes it a good candidate to model distributions over distributions or distributions over functions. Additionally, the nonparametric nature of this model makes it an ideal candidate for clustering problems where the number of distinct clusters is unknown beforehand. The Dirichlet process has also been used for developing mixture-of-experts models, in the context of supervised learning algorithms (regression or classification settings).
Examples include mixtures of Gaussian process experts, where the number of required experts must be inferred from the data.[8][9] As draws from a Dirichlet process are discrete, an important use is as a prior probability in infinite mixture models. In this case, $S$ is the parametric set of component distributions. The generative process is therefore that a sample is drawn from a Dirichlet process, and for each data point, in turn, a value is drawn from this sample distribution and used as the component distribution for that data point. The fact that there is no limit to the number of distinct components which may be generated makes this kind of model appropriate for the case when the number of mixture components is not well-defined in advance. Examples include the infinite mixture of Gaussians model[10] and associated mixture regression models.[11] The infinite nature of these models also lends them to natural language processing applications, where it is often desirable to treat the vocabulary as an infinite, discrete set. The Dirichlet process can also be used for nonparametric hypothesis testing, i.e. to develop Bayesian nonparametric versions of the classical nonparametric hypothesis tests, such as the sign test, the Wilcoxon rank-sum test, and the Wilcoxon signed-rank test. For instance, Bayesian nonparametric versions of the Wilcoxon rank-sum test and the Wilcoxon signed-rank test have been developed by using the imprecise Dirichlet process, a prior ignorance Dirichlet process. Related distributions • The Pitman–Yor process is a generalization of the Dirichlet process to accommodate power-law tails • The hierarchical Dirichlet process extends the ordinary Dirichlet process for modelling grouped data. References 1. Frigyik, Bela A.; Kapila, Amol; Gupta, Maya R. "Introduction to the Dirichlet Distribution and Related Processes" (PDF). Retrieved 2 September 2021. 2. Ferguson, Thomas (1973). "Bayesian analysis of some nonparametric problems". Annals of Statistics.
1 (2): 209–230. doi:10.1214/aos/1176342360. MR 0350949. 3. "Dirichlet Process and Dirichlet Distribution -- Polya Restaurant Scheme and Chinese Restaurant Process". 4. For the proof, see Paisley, John (August 2010). "A simple proof of the stick-breaking construction of the Dirichlet Process" (PDF). Columbia University. Archived from the original (PDF) on January 22, 2015. 5. Aad van der Vaart, Subhashis Ghosal (2017). Fundamentals of Bayesian Nonparametric Inference. Cambridge University Press. ISBN 978-0-521-87826-5. 6. Sudderth, Erik (2006). Graphical Models for Visual Object Recognition and Tracking (PDF) (Ph.D.). MIT Press. 7. Nils Lid Hjort; Chris Holmes, Peter Müller; Stephen G. Walker (2010). Bayesian Nonparametrics. Cambridge University Press. ISBN 978-0-521-51346-3. 8. Sotirios P. Chatzis, "A Latent Variable Gaussian Process Model with Pitman-Yor Process Priors for Multiclass Classification," Neurocomputing, vol. 120, pp. 482-489, Nov. 2013. doi:10.1016/j.neucom.2013.04.029 9. Sotirios P. Chatzis, Yiannis Demiris, "Nonparametric mixtures of Gaussian processes with power-law behaviour," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 12, pp. 1862-1871, Dec. 2012. doi:10.1109/TNNLS.2012.2217986 10. Rasmussen, Carl (2000). "The Infinite Gaussian Mixture Model" (PDF). Advances in Neural Information Processing Systems. 12: 554–560. 11. Sotirios P. Chatzis, Dimitrios Korkinof, and Yiannis Demiris, "A nonparametric Bayesian approach toward robot learning by demonstration," Robotics and Autonomous Systems, vol. 60, no. 6, pp. 789–802, June 2012. 
doi:10.1016/j.robot.2012.02.005 External links • Introduction to the Dirichlet Distribution and Related Processes by Frigyik, Kapila and Gupta • Yee Whye Teh's overview of Dirichlet processes • Webpage for the NIPS 2003 workshop on non-parametric Bayesian methods • Michael Jordan's NIPS 2005 tutorial: Nonparametric Bayesian Methods: Dirichlet Processes, Chinese Restaurant Processes and All That • Peter Green's summary of construction of Dirichlet Processes • Peter Green's paper on probabilistic models of Dirichlet Processes with implications for statistical modelling and analysis • Zoubin Ghahramani's UAI 2005 tutorial on Nonparametric Bayesian methods • GIMM software for performing cluster analysis using Infinite Mixture Models • A Toy Example of Clustering using Dirichlet Process. by Zhiyuan Weng
Stick number In the mathematical theory of knots, the stick number is a knot invariant that intuitively gives the smallest number of straight "sticks" stuck end to end needed to form a knot. Specifically, given any knot $K$, the stick number of $K$, denoted by $\operatorname {stick} (K)$, is the smallest number of edges of a polygonal path equivalent to $K$. Known values Six is the lowest stick number for any nontrivial knot. There are few knots whose stick number can be determined exactly. Gyo Taek Jin determined the stick number of a $(p,q)$-torus knot $T(p,q)$ when the parameters $p$ and $q$ are not too far from each other:[1] $\operatorname {stick} (T(p,q))=2q$, if $2\leq p<q\leq 2p.$ The same result was found independently around the same time by a research group around Colin Adams, but for a smaller range of parameters.[2] Bounds The stick number of a knot sum can be upper bounded by the stick numbers of the summands:[3] ${\text{stick}}(K_{1}\#K_{2})\leq {\text{stick}}(K_{1})+{\text{stick}}(K_{2})-3\,$ Related invariants The stick number of a knot $K$ is related to its crossing number $c(K)$ by the following inequalities:[4] ${\frac {1}{2}}(7+{\sqrt {8\,c(K)+1}})\leq {\text{stick}}(K)\leq {\frac {3}{2}}(c(K)+1).$ These inequalities are both tight for the trefoil knot, which has a crossing number of 3 and a stick number of 6. References Notes 1. Jin 1997 2. Adams et al. 1997 3. Adams et al. 1997, Jin 1997 4. Negami 1991, Calvo 2001, Huh & Oh 2011 Introductory material • Adams, C. C. (May 2001), "Why knot: knots, molecules and stick numbers", Plus Magazine. An accessible introduction into the topic, also for readers with little mathematical background. • Adams, C. C. (2004), The Knot Book: An elementary introduction to the mathematical theory of knots, Providence, RI: American Mathematical Society, ISBN 0-8218-3678-1. Research articles • Adams, Colin C.; Brennan, Bevin M.; Greilsheimer, Deborah L.; Woo, Alexander K.
(1997), "Stick numbers and composition of knots and links", Journal of Knot Theory and its Ramifications, 6 (2): 149–161, doi:10.1142/S0218216597000121, MR 1452436 • Calvo, Jorge Alberto (2001), "Geometric knot spaces and polygonal isotopy", Journal of Knot Theory and its Ramifications, 10 (2): 245–267, arXiv:math/9904037, doi:10.1142/S0218216501000834, MR 1822491 • Eddy, Thomas D.; Shonkwiler, Clayton (2019), New stick number bounds from random sampling of confined polygons, arXiv:1909.00917 • Jin, Gyo Taek (1997), "Polygon indices and superbridge indices of torus knots and links", Journal of Knot Theory and its Ramifications, 6 (2): 281–289, doi:10.1142/S0218216597000170, MR 1452441 • Negami, Seiya (1991), "Ramsey theorems for knots, links and spatial graphs", Transactions of the American Mathematical Society, 324 (2): 527–541, doi:10.2307/2001731, MR 1069741 • Huh, Youngsik; Oh, Seungsang (2011), "An upper bound on stick number of knots", Journal of Knot Theory and its Ramifications, 20 (5): 741–747, arXiv:1512.03592, doi:10.1142/S0218216511008966, MR 2806342 External links • Weisstein, Eric W., "Stick number", MathWorld • "Stick numbers for minimal stick knots", KnotPlot Research and Development Site.
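The exact torus-knot values and the crossing-number bounds quoted above are easy to evaluate numerically; a sketch (function names are illustrative, not from the literature):

```python
import math

def stick_bounds(c):
    """Crossing-number bounds: (7 + sqrt(8c + 1))/2 <= stick(K) <= 3(c + 1)/2."""
    return (7 + math.sqrt(8 * c + 1)) / 2, 1.5 * (c + 1)

def torus_stick(p, q):
    """Jin's exact value stick(T(p, q)) = 2q, valid for 2 <= p < q <= 2p."""
    if not (2 <= p < q <= 2 * p):
        raise ValueError("outside the range where the formula is known")
    return 2 * q
```

For the trefoil T(2, 3), with crossing number 3, both bounds evaluate to 6, matching the exact stick number returned by `torus_stick(2, 3)`.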
Ludwig Stickelberger

Ludwig Stickelberger (18 May 1850 – 11 April 1936) was a Swiss mathematician who made important contributions to linear algebra (theory of elementary divisors) and algebraic number theory (Stickelberger relation in the theory of cyclotomic fields).

Born: 18 May 1850, Buch, Schaffhausen
Died: 11 April 1936 (aged 85), Basel
Nationality: Swiss
Alma mater: University of Heidelberg, University of Berlin
Known for: Stickelberger relation, Frobenius–Stickelberger theorem
Fields: Mathematics
Institutions: University of Freiburg
Thesis: De problemate quodam ad duarum bilinearium vel quadraticarum transformationem pertinente (1874)
Doctoral advisors: Ernst Kummer, Karl Weierstrass

Short biography Stickelberger was born in Buch in the canton of Schaffhausen into the family of a pastor. He graduated from a gymnasium in 1867 and then studied at the University of Heidelberg. In 1874 he received a doctorate in Berlin under the direction of Karl Weierstrass for his work on the transformation of quadratic forms to a diagonal form. In the same year, he obtained his Habilitation from the Polytechnicum in Zurich (now ETH Zurich). In 1879 he became an extraordinary professor at the Albert Ludwigs University of Freiburg. From 1896 to 1919 he worked there as a full professor, and from 1919 until his return to Basel in 1924 he held the title of a distinguished professor ("ordentlicher Honorarprofessor"). He was married in 1895, but his wife and son both died in 1918. Stickelberger died on 11 April 1936 and was buried next to his wife and son in Freiburg. Mathematical contributions Stickelberger's obituary lists a total of 14 publications: his thesis (in Latin), 8 further papers that he authored which appeared during his lifetime, 4 joint papers with Georg Frobenius and a posthumously published paper written circa 1915.
Despite this modest output, he is characterized there as "one of the sharpest among the pupils of Weierstrass" and a "mathematician of high rank". Stickelberger's thesis and several later papers streamline and complete earlier investigations of various authors, in a direct and elegant way. Linear algebra Stickelberger's work on the classification of pairs of bilinear and quadratic forms filled in important gaps in the theory earlier developed by Weierstrass and Darboux. Augmented with the contemporaneous work of Frobenius, it set the theory of elementary divisors upon a rigorous foundation. An important 1878 paper of Stickelberger and Frobenius gave the first complete treatment of the classification of finitely generated abelian groups and sketched the relation with the theory of modules that had just been developed by Dedekind. Number theory Three joint papers with Frobenius deal with the theory of elliptic functions. Today Stickelberger's name is most closely associated with his 1890 paper that established the Stickelberger relation for cyclotomic Gauss sums. This generalized earlier work of Jacobi and Kummer and was later used by Hilbert in his formulation of the reciprocity laws in algebraic number fields. The Stickelberger relation also yields information about the structure of the class group of a cyclotomic field as a module over its abelian Galois group (cf. Iwasawa theory). References • Lothar Heffter, Ludwig Stickelberger, Jahresbericht der Deutschen Mathematiker-Vereinigung, XLVII (1937), pp. 79–86 • Ludwig Stickelberger, Ueber eine Verallgemeinerung der Kreistheilung, Mathematische Annalen 37 (1890), pp. 321–367 External links • Works by or about Ludwig Stickelberger at Internet Archive
Stickelberger's theorem In mathematics, Stickelberger's theorem is a result of algebraic number theory, which gives some information about the Galois module structure of class groups of cyclotomic fields. A special case was first proven by Ernst Kummer (1847), while the general result is due to Ludwig Stickelberger (1890).[1] The Stickelberger element and the Stickelberger ideal Let Km denote the mth cyclotomic field, i.e. the extension of the rational numbers obtained by adjoining the mth roots of unity to $\mathbb {Q} $ (where m ≥ 2 is an integer). It is a Galois extension of $\mathbb {Q} $ with Galois group Gm isomorphic to the multiplicative group of integers modulo m ($\mathbb {Z} $/m$\mathbb {Z} $)×. The Stickelberger element (of level m or of Km) is an element in the group ring $\mathbb {Q} $[Gm] and the Stickelberger ideal (of level m or of Km) is an ideal in the group ring $\mathbb {Z} $[Gm]. They are defined as follows. Let ζm denote a primitive mth root of unity. The isomorphism from ($\mathbb {Z} $/m$\mathbb {Z} $)× to Gm is given by sending a to σa defined by the relation $\sigma _{a}(\zeta _{m})=\zeta _{m}^{a}$. The Stickelberger element of level m is defined as $\theta (K_{m})={\frac {1}{m}}{\underset {(a,m)=1}{\sum _{a=1}^{m}}}a\cdot \sigma _{a}^{-1}\in \mathbb {Q} [G_{m}].$ The Stickelberger ideal of level m, denoted I(Km), is the set of integral multiples of θ(Km) which have integral coefficients, i.e. $I(K_{m})=\theta (K_{m})\mathbb {Z} [G_{m}]\cap \mathbb {Z} [G_{m}].$ More generally, if F is any abelian number field whose Galois group over $\mathbb {Q} $ is denoted GF, then the Stickelberger element of F and the Stickelberger ideal of F can be defined. By the Kronecker–Weber theorem there is an integer m such that F is contained in Km. Fix the least such m (this is the (finite part of the) conductor of F over $\mathbb {Q} $). There is a natural group homomorphism Gm → GF given by restriction, i.e.
if σ ∈ Gm, its image in GF is its restriction to F, denoted resmσ. The Stickelberger element of F is then defined as $\theta (F)={\frac {1}{m}}{\underset {(a,m)=1}{\sum _{a=1}^{m}}}a\cdot \mathrm {res} _{m}\sigma _{a}^{-1}\in \mathbb {Q} [G_{F}].$ The Stickelberger ideal of F, denoted I(F), is defined as in the case of Km, i.e. $I(F)=\theta (F)\mathbb {Z} [G_{F}]\cap \mathbb {Z} [G_{F}].$ In the special case where F = Km, the Stickelberger ideal I(Km) is generated by (a − σa)θ(Km) as a varies over $\mathbb {Z} $/m$\mathbb {Z} $. This is not true for general F.[2] Examples If F is a totally real field of conductor m, then[3] $\theta (F)={\frac {\varphi (m)}{2[F:\mathbb {Q} ]}}\sum _{\sigma \in G_{F}}\sigma ,$ where φ is the Euler totient function and [F : $\mathbb {Q} $] is the degree of F over $\mathbb {Q} $. Statement of the theorem Stickelberger's Theorem[4] Let F be an abelian number field. Then, the Stickelberger ideal of F annihilates the class group of F. Note that θ(F) itself need not be an annihilator, but any multiple of it in $\mathbb {Z} $[GF] is. Explicitly, the theorem says that if α ∈ $\mathbb {Z} $[GF] is such that $\alpha \theta (F)=\sum _{\sigma \in G_{F}}a_{\sigma }\sigma \in \mathbb {Z} [G_{F}]$ and if J is any fractional ideal of F, then $\prod _{\sigma \in G_{F}}\sigma \left(J^{a_{\sigma }}\right)$ is a principal ideal. See also • Gross–Koblitz formula • Herbrand–Ribet theorem • Thaine's theorem • Jacobi sum • Gauss sum Notes 1. Washington 1997, Notes to chapter 6 2. Washington 1997, Lemma 6.9 and the comments following it 3. Washington 1997, §6.2 4. Washington 1997, Theorem 6.10 References • Cohen, Henri (2007). Number Theory – Volume I: Tools and Diophantine Equations. Graduate Texts in Mathematics. Vol. 239. Springer-Verlag. pp. 150–170. ISBN 978-0-387-49922-2. Zbl 1119.11001. • Boas Erez, Darstellungen von Gruppen in der Algebraischen Zahlentheorie: eine Einführung • Fröhlich, A. (1977). "Stickelberger without Gauss sums". In Fröhlich, A.
(ed.). Algebraic Number Fields, Proc. Symp. London Math. Soc., Univ. Durham 1975. Academic Press. pp. 589–607. ISBN 0-12-268960-7. Zbl 0376.12002. • Ireland, Kenneth; Rosen, Michael (1990). A Classical Introduction to Modern Number Theory. Graduate Texts in Mathematics. Vol. 84 (2nd ed.). New York: Springer-Verlag. doi:10.1007/978-1-4757-2103-4. ISBN 978-1-4419-3094-1. MR 1070716. • Kummer, Ernst (1847), "Über die Zerlegung der aus Wurzeln der Einheit gebildeten complexen Zahlen in ihre Primfactoren", Journal für die Reine und Angewandte Mathematik, 1847 (35): 327–367, doi:10.1515/crll.1847.35.327 • Stickelberger, Ludwig (1890), "Ueber eine Verallgemeinerung der Kreistheilung", Mathematische Annalen, 37 (3): 321–367, doi:10.1007/bf01721360, JFM 22.0100.01, MR 1510649 • Washington, Lawrence (1997), Introduction to Cyclotomic Fields, Graduate Texts in Mathematics, vol. 83 (2 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94762-4, MR 1421575 External links • PlanetMath page
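As a numerical illustration of the definitions above (the helper name is hypothetical; exact rational arithmetic via the fractions module), the Stickelberger element of Km can be computed directly from its defining sum:

```python
from fractions import Fraction
from math import gcd

def stickelberger_element(m):
    """Coefficients of theta(K_m) = (1/m) sum_{(a,m)=1} a * sigma_a^{-1}
    in Q[G_m], returned as {b: coefficient of sigma_b}.
    Uses pow(a, -1, m) for the inverse mod m (Python 3.8+)."""
    theta = {}
    for a in range(1, m + 1):
        if gcd(a, m) == 1:
            b = pow(a, -1, m)        # sigma_a^{-1} = sigma_{a^{-1} mod m}
            theta[b] = theta.get(b, Fraction(0)) + Fraction(a, m)
    return theta
```

For m = 5 this gives θ(K5) = (1/5)σ1 + (3/5)σ2 + (2/5)σ3 + (4/5)σ4; since the units modulo m pair up as a and m − a, the coefficients always sum to φ(m)/2.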
Reflected Brownian motion In probability theory, reflected Brownian motion (or regulated Brownian motion,[1][2] both with the acronym RBM) is a Wiener process in a space with reflecting boundaries.[3] In the physical literature, this process describes diffusion in a confined space and it is often called confined Brownian motion. For example it can describe the motion of hard spheres in water confined between two walls.[4] RBMs have been shown to describe queueing models experiencing heavy traffic[2] as first proposed by Kingman[5] and proven by Iglehart and Whitt.[6][7] Definition A d–dimensional reflected Brownian motion Z is a stochastic process on $\mathbb {R} _{+}^{d}$ uniquely defined by • a d–dimensional drift vector μ • a d×d non-singular covariance matrix Σ and • a d×d reflection matrix R.[8] where X(t) is an unconstrained Brownian motion and[9] $Z(t)=X(t)+RY(t)$ with Y(t) a d–dimensional vector where • Y is continuous and non–decreasing with Y(0) = 0 • Yj only increases at times for which Zj = 0 for j = 1,2,...,d • Z(t) ∈ $\mathbb {R} _{+}^{d}$, t ≥ 0. The reflection matrix describes boundary behaviour. In the interior of $\scriptstyle \mathbb {R} _{+}^{d}$ the process behaves like a Wiener process; on the boundary "roughly speaking, Z is pushed in direction Rj whenever the boundary surface $\scriptstyle \{z\in \mathbb {R} _{+}^{d}:z_{j}=0\}$ is hit, where Rj is the jth column of the matrix R."[9] Stability conditions Stability conditions are known for RBMs in 1, 2, and 3 dimensions. "The problem of recurrence classification for SRBMs in four and higher dimensions remains open."[9] In the special case where R is an M-matrix then necessary and sufficient conditions for stability are[9] 1. R is a non-singular matrix and 2. R−1μ < 0. 
Marginal and stationary distribution One dimension The marginal distribution (transient distribution) of a one-dimensional Brownian motion starting at 0 restricted to positive values (a single reflecting barrier at 0) with drift μ and variance σ2 is $\mathbb {P} (Z(t)\leq z)=\Phi \left({\frac {z-\mu t}{\sigma t^{1/2}}}\right)-e^{2\mu z/\sigma ^{2}}\Phi \left({\frac {-z-\mu t}{\sigma t^{1/2}}}\right)$ for all t ≥ 0 (with Φ the cumulative distribution function of the normal distribution), which yields, taking t → ∞ (for μ < 0), an exponential distribution[2] $\mathbb {P} (Z<z)=1-e^{2\mu z/\sigma ^{2}}.$ For fixed t, the distribution of Z(t) coincides with the distribution of the running maximum M(t) of the Brownian motion, $Z(t)\sim M(t)=\sup _{s\in [0,t]}X(s).$ Note, however, that the distributions of the two processes as a whole are very different. In particular, M(t) is increasing in t, which is not the case for Z(t). The heat kernel for reflected Brownian motion with a reflecting barrier at $p_{b}$, on the half-line $x\geq p_{b}$, is $f(x,p_{b})={\frac {e^{-((x-u)/a)^{2}/2}+e^{-((x+u-2p_{b})/a)^{2}/2}}{a(2\pi )^{1/2}}}.$ Multiple dimensions The stationary distribution of a reflected Brownian motion in multiple dimensions is tractable analytically when there is a product form stationary distribution,[10] which occurs when the process is stable and[11] $2\Sigma =RD+DR'$ where D = diag(Σ). In this case the probability density function is[8] $p(z_{1},z_{2},\ldots ,z_{d})=\prod _{k=1}^{d}\eta _{k}e^{-\eta _{k}z_{k}}$ where ηk = 2μkγk/Σkk and γ = R−1μ. Closed-form expressions for situations where the product form condition does not hold can be computed numerically as described below in the simulation section. Simulation One dimension In one dimension the simulated process is the absolute value of a Wiener process.
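Before simulating, the closed-form one-dimensional laws given above provide a useful check; a minimal sketch (function names are illustrative):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rbm_cdf(z, t, mu, sigma):
    """Transient law P(Z(t) <= z) for one-dimensional RBM started at 0
    with drift mu and variance sigma^2, reflecting barrier at 0."""
    s = sigma * math.sqrt(t)
    return Phi((z - mu * t) / s) - math.exp(2 * mu * z / sigma ** 2) * Phi((-z - mu * t) / s)

def rbm_stationary_cdf(z, mu, sigma):
    """Stationary law for mu < 0: P(Z < z) = 1 - exp(2 mu z / sigma^2)."""
    return 1.0 - math.exp(2 * mu * z / sigma ** 2)
```

As t grows, the transient distribution approaches the stationary exponential law, and empirical distributions produced by a path simulator can be compared against these functions.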
The following MATLAB program creates a sample path.[12]

% rbm.m -- simulate reflected Brownian motion with drift mu
n = 10^4; h = 10^(-3); t = h .* (0:n);
mu = -1;
X = zeros(1, n+1); B = X;    % X: reflected path, B: free Brownian path
B(1) = 3; X(1) = 3;
for k = 2:n+1
    Y = sqrt(h) * randn;
    U = rand(1);
    B(k) = B(k-1) + mu * h - Y;
    M = (Y + sqrt(Y^2 - 2 * h * log(U))) / 2;   % sampled running maximum over the step
    X(k) = max(M - Y, X(k-1) + h * mu - Y);
end
subplot(2, 1, 1)
plot(t, X, 'k-');
subplot(2, 1, 2)
plot(t, X - B, 'k-');

The error involved in discrete simulations has been quantified.[13] Multiple dimensions QNET allows simulation of steady state RBMs.[14][15][16] Other boundary conditions Feller described possible boundary conditions for the process[17][18][19] • absorption[17] or killed Brownian motion,[20] a Dirichlet boundary condition • instantaneous reflection,[17] as described above, a Neumann boundary condition • elastic reflection, a Robin boundary condition • delayed reflection[17] (the time spent on the boundary is positive with probability one) • partial reflection[17] where the process is either immediately reflected or is absorbed • sticky Brownian motion.[21] See also • Skorokhod problem References 1. Dieker, A. B. (2011). "Reflected Brownian Motion". Wiley Encyclopedia of Operations Research and Management Science. doi:10.1002/9780470400531.eorms0711. ISBN 9780470400531. 2. Harrison, J. Michael (1985). Brownian Motion and Stochastic Flow Systems (PDF). John Wiley & Sons. ISBN 978-0471819394. 3. Veestraeten, D. (2004). "The Conditional Probability Density Function for a Reflected Brownian Motion". Computational Economics. 24 (2): 185–207. doi:10.1023/B:CSEM.0000049491.13935.af. S2CID 121673717. 4. Faucheux, Luc P.; Libchaber, Albert J. (1994-06-01). "Confined Brownian motion". Physical Review E. 49 (6): 5158–5163. doi:10.1103/PhysRevE.49.5158. ISSN 1063-651X. 5. Kingman, J. F. C. (1962). "On Queues in Heavy Traffic". Journal of the Royal Statistical Society. Series B (Methodological). 24 (2): 383–392. doi:10.1111/j.2517-6161.1962.tb00465.x. JSTOR 2984229. 6.
Iglehart, Donald L.; Whitt, Ward (1970). "Multiple Channel Queues in Heavy Traffic. I". Advances in Applied Probability. 2 (1): 150–177. doi:10.2307/3518347. JSTOR 3518347. S2CID 202104090. 7. Iglehart, Donald L.; Ward, Whitt (1970). "Multiple Channel Queues in Heavy Traffic. II: Sequences, Networks, and Batches" (PDF). Advances in Applied Probability. 2 (2): 355–369. doi:10.2307/1426324. JSTOR 1426324. S2CID 120281300. Retrieved 30 Nov 2012. 8. Harrison, J. M.; Williams, R. J. (1987). "Brownian models of open queueing networks with homogeneous customer populations" (PDF). Stochastics. 22 (2): 77. doi:10.1080/17442508708833469. 9. Bramson, M.; Dai, J. G.; Harrison, J. M. (2010). "Positive recurrence of reflecting Brownian motion in three dimensions" (PDF). The Annals of Applied Probability. 20 (2): 753. arXiv:1009.5746. doi:10.1214/09-AAP631. S2CID 2251853. 10. Harrison, J. M.; Williams, R. J. (1992). "Brownian Models of Feedforward Queueing Networks: Quasireversibility and Product Form Solutions". The Annals of Applied Probability. 2 (2): 263. doi:10.1214/aoap/1177005704. JSTOR 2959751. 11. Harrison, J. M.; Reiman, M. I. (1981). "On the Distribution of Multidimensional Reflected Brownian Motion". SIAM Journal on Applied Mathematics. 41 (2): 345–361. doi:10.1137/0141030. 12. Kroese, Dirk P.; Taimre, Thomas; Botev, Zdravko I. (2011). Handbook of Monte Carlo Methods. John Wiley & Sons. p. 202. ISBN 978-1118014950. 13. Asmussen, S.; Glynn, P.; Pitman, J. (1995). "Discretization Error in Simulation of One-Dimensional Reflecting Brownian Motion". The Annals of Applied Probability. 5 (4): 875. doi:10.1214/aoap/1177004597. JSTOR 2245096. 14. Dai, Jim G.; Harrison, J. Michael (1991). "Steady-State Analysis of RBM in a Rectangle: Numerical Methods and A Queueing Application". The Annals of Applied Probability. 1 (1): 16–35. CiteSeerX 10.1.1.44.5520. doi:10.1214/aoap/1177005979. JSTOR 2959623. 15. Dai, Jiangang "Jim" (1990). "Section A.5 (code for BNET)" (PDF). 
Steady-state analysis of reflected Brownian motions: characterization, numerical methods and queueing applications (Ph.D. thesis). Stanford University. Dept. of Mathematics. Retrieved 5 December 2012. 16. Dai, J. G.; Harrison, J. M. (1992). "Reflected Brownian Motion in an Orthant: Numerical Methods for Steady-State Analysis" (PDF). The Annals of Applied Probability. 2 (1): 65–86. doi:10.1214/aoap/1177005771. JSTOR 2959654. 17. Skorokhod, A. V. (1962). "Stochastic Equations for Diffusion Processes in a Bounded Region. II". Theory of Probability and Its Applications. 7: 3–23. doi:10.1137/1107002. 18. Feller, W. (1954). "Diffusion processes in one dimension". Transactions of the American Mathematical Society. 77: 1–31. doi:10.1090/S0002-9947-1954-0063607-6. MR 0063607. 19. Engelbert, H. J.; Peskir, G. (2012). "Stochastic Differential Equations for Sticky Brownian Motion" (PDF). Probab. Statist. Group Manchester Research Report (5). 20. Chung, K. L.; Zhao, Z. (1995). "Killed Brownian Motion". From Brownian Motion to Schrödinger's Equation. Grundlehren der mathematischen Wissenschaften. Vol. 312. p. 31. doi:10.1007/978-3-642-57856-4_2. ISBN 978-3-642-63381-2. 21. Itō, K.; McKean, H. P. (1996). "Time changes and killing". Diffusion Processes and their Sample Paths. p. 164. doi:10.1007/978-3-642-62025-6_6. ISBN 978-3-540-60629-1.
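For readers without MATLAB, the sample-path routine shown above can be ported to Python/NumPy. The following is an illustrative sketch, not from the source; the function name, defaults, and seeding are ours:

```python
import numpy as np

def simulate_rbm(n=10_000, h=1e-3, mu=-1.0, x0=3.0, seed=None):
    """Sample path of reflected Brownian motion on [0, inf) with drift mu.

    Port of the MATLAB routine above: at each step the Brownian increment
    and (via the uniform draw U) a running-maximum correction are sampled
    together, so the reflection at 0 is imposed within each step rather
    than by naive truncation. Returns the reflected path X and the free
    Brownian path B.
    """
    rng = np.random.default_rng(seed)
    X = np.empty(n + 1)
    B = np.empty(n + 1)
    X[0] = B[0] = x0
    for k in range(1, n + 1):
        Y = np.sqrt(h) * rng.standard_normal()
        U = rng.random()
        B[k] = B[k - 1] + mu * h - Y
        M = (Y + np.sqrt(Y**2 - 2.0 * h * np.log(U))) / 2.0
        X[k] = max(M - Y, X[k - 1] + mu * h - Y)
    return X, B
```

The difference X − B is then the nondecreasing regulator process that pushes the path back into the half-line at the boundary.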
Wikipedia
Decagrammic prism In geometry, the decagrammic prism is one of an infinite set of nonconvex prisms formed by square sides and two regular star polygon caps, in this case two decagrams.

Decagrammic prism
Type: Uniform polyhedron
Faces: 2 decagrams, 10 squares
Edges: 30
Vertices: 20
Vertex configuration: 10/3.4.4
Wythoff symbol: 2 10/3 | 2
Symmetry group: D10h, [2,10], (*2.10.10), order 40
Dual polyhedron: Decagrammic bipyramid
Properties: nonconvex

It has 12 faces (10 squares and 2 decagrams), 30 edges, and 20 vertices.
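These counts follow the general pattern for any n-gonal (or star n/d-gonal) prism: n side squares plus two caps. A quick illustrative check, not from the article:

```python
def prism_counts(n):
    """Face, edge, and vertex counts of an n-gonal prism
    (n = 10 for the decagrammic prism, whose caps are {10/3} decagrams)."""
    faces = n + 2        # n squares + 2 polygonal caps
    edges = 3 * n        # n edges per cap + n vertical edges
    vertices = 2 * n     # n vertices per cap
    return faces, edges, vertices

F, E, V = prism_counts(10)
```

For n = 10 this gives the 12 faces, 30 edges, and 20 vertices stated above, with V − E + F = 2, as expected for a genus-0 (sphere-like) surface.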
Stiefel manifold In mathematics, the Stiefel manifold $V_{k}(\mathbb {R} ^{n})$ is the set of all orthonormal k-frames in $\mathbb {R} ^{n}.$ That is, it is the set of ordered orthonormal k-tuples of vectors in $\mathbb {R} ^{n}.$ It is named after Swiss mathematician Eduard Stiefel. Likewise one can define the complex Stiefel manifold $V_{k}(\mathbb {C} ^{n})$ of orthonormal k-frames in $\mathbb {C} ^{n}$ and the quaternionic Stiefel manifold $V_{k}(\mathbb {H} ^{n})$ of orthonormal k-frames in $\mathbb {H} ^{n}$. More generally, the construction applies to any real, complex, or quaternionic inner product space. In some contexts, a non-compact Stiefel manifold is defined as the set of all linearly independent k-frames in $\mathbb {R} ^{n},\mathbb {C} ^{n},$ or $\mathbb {H} ^{n};$ this is homotopy equivalent, as the compact Stiefel manifold is a deformation retract of the non-compact one, by Gram–Schmidt. Statements about the non-compact form correspond to those for the compact form, replacing the orthogonal group (or unitary or symplectic group) with the general linear group. Topology Let $\mathbb {F} $ stand for $\mathbb {R} ,\mathbb {C} ,$ or $\mathbb {H} .$ The Stiefel manifold $V_{k}(\mathbb {F} ^{n})$ can be thought of as a set of n × k matrices by writing a k-frame as a matrix of k column vectors in $\mathbb {F} ^{n}.$ The orthonormality condition is expressed by A*A = $I_{k}$ where A* denotes the conjugate transpose of A and $I_{k}$ denotes the k × k identity matrix. 
We then have $V_{k}(\mathbb {F} ^{n})=\left\{A\in \mathbb {F} ^{n\times k}:A^{*}A=I_{k}\right\}.$ The topology on $V_{k}(\mathbb {F} ^{n})$ is the subspace topology inherited from $\mathbb {F} ^{n\times k}.$ With this topology $V_{k}(\mathbb {F} ^{n})$ is a compact manifold whose dimension is given by ${\begin{aligned}\dim V_{k}(\mathbb {R} ^{n})&=nk-{\frac {1}{2}}k(k+1)\\\dim V_{k}(\mathbb {C} ^{n})&=2nk-k^{2}\\\dim V_{k}(\mathbb {H} ^{n})&=4nk-k(2k-1)\end{aligned}}$ As a homogeneous space Each of the Stiefel manifolds $V_{k}(\mathbb {F} ^{n})$ can be viewed as a homogeneous space for the action of a classical group in a natural manner. Every orthogonal transformation of a k-frame in $\mathbb {R} ^{n}$ results in another k-frame, and any two k-frames are related by some orthogonal transformation. In other words, the orthogonal group O(n) acts transitively on $V_{k}(\mathbb {R} ^{n}).$ The stabilizer subgroup of a given frame is the subgroup isomorphic to O(n−k) which acts nontrivially on the orthogonal complement of the space spanned by that frame. Likewise the unitary group U(n) acts transitively on $V_{k}(\mathbb {C} ^{n})$ with stabilizer subgroup U(n−k) and the symplectic group Sp(n) acts transitively on $V_{k}(\mathbb {H} ^{n})$ with stabilizer subgroup Sp(n−k). In each case $V_{k}(\mathbb {F} ^{n})$ can be viewed as a homogeneous space: ${\begin{aligned}V_{k}(\mathbb {R} ^{n})&\cong {\mbox{O}}(n)/{\mbox{O}}(n-k)\\V_{k}(\mathbb {C} ^{n})&\cong {\mbox{U}}(n)/{\mbox{U}}(n-k)\\V_{k}(\mathbb {H} ^{n})&\cong {\mbox{Sp}}(n)/{\mbox{Sp}}(n-k)\end{aligned}}$ When k = n, the corresponding action is free so that the Stiefel manifold $V_{n}(\mathbb {F} ^{n})$ is a principal homogeneous space for the corresponding classical group. 
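The dimension formulas above can be cross-checked against the quotient descriptions $V_{k}\cong G(n)/G(n-k)$, using the standard facts dim O(n) = n(n−1)/2, dim U(n) = n², and dim Sp(n) = n(2n+1). An illustrative sketch:

```python
def dim_stiefel(n, k, field):
    """Real dimension of V_k(F^n), from the formulas above (field: 'R', 'C', 'H')."""
    return {"R": n * k - k * (k + 1) // 2,
            "C": 2 * n * k - k * k,
            "H": 4 * n * k - k * (2 * k - 1)}[field]

def dim_group(n, field):
    """Real dimension of O(n), U(n), Sp(n) respectively."""
    return {"R": n * (n - 1) // 2,
            "C": n * n,
            "H": n * (2 * n + 1)}[field]

# dim V_k(F^n) = dim G(n) - dim G(n-k), matching the quotient G(n)/G(n-k)
for field in ("R", "C", "H"):
    for n in range(1, 8):
        for k in range(0, n + 1):
            assert dim_stiefel(n, k, field) == dim_group(n, field) - dim_group(n - k, field)
```

For example, dim V₁(ℝ³) = 2 (the sphere S²) and dim Vₙ(ℝⁿ) = dim O(n), consistent with the principal homogeneous space statement.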
When k is strictly less than n then the special orthogonal group SO(n) also acts transitively on $V_{k}(\mathbb {R} ^{n})$ with stabilizer subgroup isomorphic to SO(n−k) so that $V_{k}(\mathbb {R} ^{n})\cong {\mbox{SO}}(n)/{\mbox{SO}}(n-k)\qquad {\mbox{for }}k<n.$ The same holds for the action of the special unitary group on $V_{k}(\mathbb {C} ^{n})$ $V_{k}(\mathbb {C} ^{n})\cong {\mbox{SU}}(n)/{\mbox{SU}}(n-k)\qquad {\mbox{for }}k<n.$ Thus for k = n − 1, the Stiefel manifold is a principal homogeneous space for the corresponding special classical group. Uniform measure The Stiefel manifold can be equipped with a uniform measure, i.e. a Borel measure that is invariant under the action of the groups noted above. For example, $V_{1}(\mathbb {R} ^{2})$ which is isomorphic to the unit circle in the Euclidean plane, has as its uniform measure the obvious uniform measure (arc length) on the circle. It is straightforward to sample this measure on $V_{k}(\mathbb {F} ^{n})$ using Gaussian random matrices: if $A\in \mathbb {F} ^{n\times k}$ is a random matrix with independent entries identically distributed according to the standard normal distribution on $\mathbb {F} $ and A = QR is the QR factorization of A, then the matrices, $Q\in \mathbb {F} ^{n\times k},R\in \mathbb {F} ^{k\times k}$ are independent random variables and Q is distributed according to the uniform measure on $V_{k}(\mathbb {F} ^{n}).$ This result is a consequence of the Bartlett decomposition theorem.[1] Special cases A 1-frame in $\mathbb {F} ^{n}$ is nothing but a unit vector, so the Stiefel manifold $V_{1}(\mathbb {F} ^{n})$ is just the unit sphere in $\mathbb {F} ^{n}.$ Therefore: ${\begin{aligned}V_{1}(\mathbb {R} ^{n})&=S^{n-1}\\V_{1}(\mathbb {C} ^{n})&=S^{2n-1}\\V_{1}(\mathbb {H} ^{n})&=S^{4n-1}\end{aligned}}$ Given a 2-frame in $\mathbb {R} ^{n},$ let the first vector define a point in Sn−1 and the second a unit tangent vector to the sphere at that point. 
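The Gaussian/QR recipe above for sampling the uniform measure can be sketched in NumPy. One caveat worth flagging (an assumption beyond the text): numerical QR routines do not fix the signs of R's diagonal, so the columns are renormalized to make the factorization unique, which is what makes Q exactly uniformly distributed:

```python
import numpy as np

def sample_stiefel(n, k, seed=None):
    """Draw Q uniformly from the real Stiefel manifold V_k(R^n):
    fill A with i.i.d. standard normals, take the reduced QR factorization,
    and flip column signs so that R has positive diagonal."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, k))
    Q, R = np.linalg.qr(A)           # reduced QR: Q is n x k, R is k x k
    Q = Q * np.sign(np.diag(R))      # sign fix: diag(R) > 0
    return Q
```

The orthonormality condition A*A = I_k from the Topology section then holds up to rounding error.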
In this way, the Stiefel manifold $V_{2}(\mathbb {R} ^{n})$ may be identified with the unit tangent bundle to Sn−1. When k = n or n−1 we saw in the previous section that $V_{k}(\mathbb {F} ^{n})$ is a principal homogeneous space, and therefore diffeomorphic to the corresponding classical group: ${\begin{aligned}V_{n-1}(\mathbb {R} ^{n})&\cong \mathrm {SO} (n)\\V_{n-1}(\mathbb {C} ^{n})&\cong \mathrm {SU} (n)\end{aligned}}$ ${\begin{aligned}V_{n}(\mathbb {R} ^{n})&\cong \mathrm {O} (n)\\V_{n}(\mathbb {C} ^{n})&\cong \mathrm {U} (n)\\V_{n}(\mathbb {H} ^{n})&\cong \mathrm {Sp} (n)\end{aligned}}$ Functoriality Given an orthogonal inclusion between vector spaces $X\hookrightarrow Y,$ the image of a set of k orthonormal vectors is orthonormal, so there is an induced closed inclusion of Stiefel manifolds, $V_{k}(X)\hookrightarrow V_{k}(Y),$ and this is functorial. More subtly, given an n-dimensional vector space X, the dual basis construction gives a bijection between bases for X and bases for the dual space $X^{*},$ which is continuous, and thus yields a homeomorphism of top Stiefel manifolds $V_{n}(X){\stackrel {\sim }{\to }}V_{n}(X^{*}).$ This is also functorial for isomorphisms of vector spaces. As a principal bundle There is a natural projection $p:V_{k}(\mathbb {F} ^{n})\to G_{k}(\mathbb {F} ^{n})$ from the Stiefel manifold $V_{k}(\mathbb {F} ^{n})$ to the Grassmannian of k-planes in $\mathbb {F} ^{n}$ which sends a k-frame to the subspace spanned by that frame. The fiber over a given point P in $G_{k}(\mathbb {F} ^{n})$ is the set of all orthonormal k-frames contained in the space P. This projection has the structure of a principal G-bundle where G is the associated classical group of degree k. Take the real case for concreteness. There is a natural right action of O(k) on $V_{k}(\mathbb {R} ^{n})$ which rotates a k-frame in the space it spans. This action is free but not transitive. 
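The right O(k) action just described changes the frame but not the k-plane it spans, which is why its orbits are exactly the fibers of the projection to the Grassmannian. A small numerical illustration (the matrices are arbitrary examples of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# A point of V_3(R^6): a 6x3 matrix with orthonormal columns.
A, _ = np.linalg.qr(rng.standard_normal((6, 3)))

# A random element of O(3), acting on the right.
Qk, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = A @ Qk

# B is again a 3-frame, and it spans the same 3-plane as A:
assert np.allclose(B.T @ B, np.eye(3))   # still orthonormal
assert np.allclose(A @ A.T, B @ B.T)     # same orthogonal projector onto the span
```

Equality of the projectors A Aᵀ and B Bᵀ is exactly the statement that the two frames lie in the same fiber over the Grassmannian.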
The orbits of this action are precisely the orthonormal k-frames spanning a given k-dimensional subspace; that is, they are the fibers of the map p. Similar arguments hold in the complex and quaternionic cases. We then have a sequence of principal bundles: ${\begin{aligned}\mathrm {O} (k)&\to V_{k}(\mathbb {R} ^{n})\to G_{k}(\mathbb {R} ^{n})\\\mathrm {U} (k)&\to V_{k}(\mathbb {C} ^{n})\to G_{k}(\mathbb {C} ^{n})\\\mathrm {Sp} (k)&\to V_{k}(\mathbb {H} ^{n})\to G_{k}(\mathbb {H} ^{n})\end{aligned}}$ The vector bundles associated to these principal bundles via the natural action of G on $\mathbb {F} ^{k}$ are just the tautological bundles over the Grassmannians. In other words, the Stiefel manifold $V_{k}(\mathbb {F} ^{n})$ is the orthogonal, unitary, or symplectic frame bundle associated to the tautological bundle on a Grassmannian. When one passes to the $n\to \infty $ limit, these bundles become the universal bundles for the classical groups. Homotopy The Stiefel manifolds fit into a family of fibrations: $V_{k-1}(\mathbb {R} ^{n-1})\to V_{k}(\mathbb {R} ^{n})\to S^{n-1},$ thus the first non-trivial homotopy group of the space $V_{k}(\mathbb {R} ^{n})$ is in dimension n − k. Moreover, $\pi _{n-k}V_{k}(\mathbb {R} ^{n})\simeq {\begin{cases}\mathbb {Z} &n-k{\text{ even or }}k=1\\\mathbb {Z} _{2}&n-k{\text{ odd and }}k>1\end{cases}}$ This result is used in the obstruction-theoretic definition of Stiefel–Whitney classes. See also • Flag manifold • Matrix Langevin distribution[2][3] References 1. Muirhead, Robb J. (1982). Aspects of Multivariate Statistical Theory. John Wiley & Sons, Inc., New York. pp. xix+673. ISBN 0-471-09442-0. 2. Chikuse, Yasuko (1 May 2003). "Concentrated matrix Langevin distributions". Journal of Multivariate Analysis. 85 (2): 375–394. doi:10.1016/S0047-259X(02)00065-9. ISSN 0047-259X. 3. Pal, Subhadip; Sengupta, Subhajit; Mitra, Riten; Banerjee, Arunava (September 2020). 
"Conjugate Priors and Posterior Inference for the Matrix Langevin Distribution on the Stiefel Manifold". Bayesian Analysis. 15 (3): 871–908. doi:10.1214/19-BA1176. ISSN 1936-0975. • Hatcher, Allen (2002). Algebraic Topology. Cambridge University Press. ISBN 0-521-79540-0. • Husemoller, Dale (1994). Fibre Bundles (3rd ed.). New York: Springer-Verlag. ISBN 0-387-94087-1. • James, Ioan Mackenzie (1976). The topology of Stiefel manifolds. CUP Archive. ISBN 978-0-521-21334-9. • "Stiefel manifold", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Stiefel–Whitney class In mathematics, in particular in algebraic topology and differential geometry, the Stiefel–Whitney classes are a set of topological invariants of a real vector bundle that describe the obstructions to constructing everywhere independent sets of sections of the vector bundle. Stiefel–Whitney classes are indexed from 0 to n, where n is the rank of the vector bundle. If the Stiefel–Whitney class of index i is nonzero, then there cannot exist $(n-i+1)$ everywhere linearly independent sections of the vector bundle. A nonzero nth Stiefel–Whitney class indicates that every section of the bundle must vanish at some point. A nonzero first Stiefel–Whitney class indicates that the vector bundle is not orientable. For example, the first Stiefel–Whitney class of the Möbius strip, as a line bundle over the circle, is not zero, whereas the first Stiefel–Whitney class of the trivial line bundle over the circle, $S^{1}\times \mathbb {R} $, is zero. The Stiefel–Whitney class was named for Eduard Stiefel and Hassler Whitney and is an example of a $\mathbb {Z} /2\mathbb {Z} $-characteristic class associated to real vector bundles. In algebraic geometry one can also define analogous Stiefel–Whitney classes for vector bundles with a non-degenerate quadratic form, taking values in etale cohomology groups or in Milnor K-theory. As a special case one can define Stiefel–Whitney classes for quadratic forms over fields, the first two cases being the discriminant and the Hasse–Witt invariant (Milnor 1970). Introduction General presentation For a real vector bundle E, the Stiefel–Whitney class of E is denoted by w(E). It is an element of the cohomology ring $H^{\ast }(X;\mathbb {Z} /2\mathbb {Z} )=\bigoplus _{i\geq 0}H^{i}(X;\mathbb {Z} /2\mathbb {Z} )$ where X is the base space of the bundle E, and $\mathbb {Z} /2\mathbb {Z} $ (often alternatively denoted by $\mathbb {Z} _{2}$) is the commutative ring whose only elements are 0 and 1. 
The component of $w(E)$ in $H^{i}(X;\mathbb {Z} /2\mathbb {Z} )$ is denoted by $w_{i}(E)$ and called the i-th Stiefel–Whitney class of E. Thus, $w(E)=w_{0}(E)+w_{1}(E)+w_{2}(E)+\cdots $, where each $w_{i}(E)$ is an element of $H^{i}(X;\mathbb {Z} /2\mathbb {Z} )$. The Stiefel–Whitney class $w(E)$ is an invariant of the real vector bundle E; i.e., when F is another real vector bundle which has the same base space X as E, and if F is isomorphic to E, then the Stiefel–Whitney classes $w(E)$ and $w(F)$ are equal. (Here isomorphic means that there exists a vector bundle isomorphism $E\to F$ which covers the identity $\mathrm {id} _{X}\colon X\to X$.) While it is in general difficult to decide whether two real vector bundles E and F are isomorphic, the Stiefel–Whitney classes $w(E)$ and $w(F)$ can often be computed easily. If they are different, one knows that E and F are not isomorphic. As an example, over the circle $S^{1}$, there is a line bundle (i.e., a real vector bundle of rank 1) that is not isomorphic to a trivial bundle. This line bundle L is the Möbius strip (which is a fiber bundle whose fibers can be equipped with vector space structures in such a way that it becomes a vector bundle). The cohomology group $H^{1}(S^{1};\mathbb {Z} /2\mathbb {Z} )$ has just one element other than 0. This element is the first Stiefel–Whitney class $w_{1}(L)$ of L. Since the trivial line bundle over $S^{1}$ has first Stiefel–Whitney class 0, it is not isomorphic to L. Two real vector bundles E and F which have the same Stiefel–Whitney class are not necessarily isomorphic. This happens for instance when E and F are trivial real vector bundles of different ranks over the same base space X. It can also happen when E and F have the same rank: the tangent bundle of the 2-sphere $S^{2}$ and the trivial real vector bundle of rank 2 over $S^{2}$ have the same Stiefel–Whitney class, but they are not isomorphic. 
But if two real line bundles over X have the same Stiefel–Whitney class, then they are isomorphic. Origins The Stiefel–Whitney classes $w_{i}(E)$ get their name because Eduard Stiefel and Hassler Whitney discovered them as mod-2 reductions of the obstruction classes to constructing $n-i+1$ everywhere linearly independent sections of the vector bundle E restricted to the i-skeleton of X. Here n denotes the dimension of the fibre of the vector bundle $F\to E\to X$. To be precise, provided X is a CW-complex, Whitney defined classes $W_{i}(E)$ in the i-th cellular cohomology group of X with twisted coefficients, the coefficient system being the $(i-1)$-st homotopy group of the Stiefel manifold $V_{n-i+1}(F)$ of $n-i+1$ linearly independent vectors in the fibres of E. Whitney proved that $W_{i}(E)=0$ if and only if E, when restricted to the i-skeleton of X, has $n-i+1$ linearly-independent sections. Since $\pi _{i-1}V_{n-i+1}(F)$ is either infinite-cyclic or isomorphic to $\mathbb {Z} /2\mathbb {Z} $, there is a canonical reduction of the $W_{i}(E)$ classes to classes $w_{i}(E)\in H^{i}(X;\mathbb {Z} /2\mathbb {Z} )$ which are the Stiefel–Whitney classes. Moreover, whenever $\pi _{i-1}V_{n-i+1}(F)=\mathbb {Z} /2\mathbb {Z} $, the two classes are identical. Thus, $w_{1}(E)=0$ if and only if the bundle $E\to X$ is orientable. The $w_{0}(E)$ class contains no information, because it is equal to 1 by definition. Its creation by Whitney was an act of creative notation, allowing the Whitney sum formula $w(E_{1}\oplus E_{2})=w(E_{1})w(E_{2})$ to be true. Definitions Throughout, $H^{i}(X;G)$ denotes singular cohomology of a space X with coefficients in the group G. The word map always means a continuous function between topological spaces.
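In terms of graded components, the Whitney sum formula mentioned above unpacks degree by degree (all coefficients mod 2):

```latex
\begin{aligned}
w_1(E_1 \oplus E_2) &= w_1(E_1) + w_1(E_2),\\
w_2(E_1 \oplus E_2) &= w_2(E_1) + w_1(E_1)\smile w_1(E_2) + w_2(E_2),
\end{aligned}
```

and in general $w_{k}(E_{1}\oplus E_{2})=\sum _{i+j=k}w_{i}(E_{1})\smile w_{j}(E_{2})$.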
Axiomatic definition The Stiefel–Whitney characteristic class $w(E)\in H^{*}(X;\mathbb {Z} /2\mathbb {Z} )$ of a finite rank real vector bundle E on a paracompact base space X is defined as the unique class such that the following axioms are fulfilled: 1. Normalization: The Whitney class of the tautological line bundle over the real projective space $\mathbf {P} ^{1}(\mathbb {R} )$ is nontrivial, i.e., $w(\gamma _{1}^{1})=1+a\in H^{*}(\mathbf {P} ^{1}(\mathbb {R} );\mathbb {Z} /2\mathbb {Z} )=(\mathbb {Z} /2\mathbb {Z} )[a]/(a^{2})$. 2. Rank: $w_{0}(E)=1\in H^{0}(X),$ and for i above the rank of E, $w_{i}=0\in H^{i}(X)$, that is, $w(E)\in H^{\leqslant \mathrm {rank} (E)}(X).$ 3. Whitney product formula: $w(E\oplus F)=w(E)\smallsmile w(F)$, that is, the Whitney class of a direct sum is the cup product of the summands' classes. 4. Naturality: $w(f^{*}E)=f^{*}w(E)$ for any real vector bundle $E\to X$ and map $f\colon X'\to X$, where $f^{*}E$ denotes the pullback vector bundle. The uniqueness of these classes is proved, for example, in sections 17.2–17.6 of Husemoller or section 8 of Milnor and Stasheff. There are several proofs of existence, coming from various constructions with different flavours; their coherence is ensured by the uniqueness statement. The infinite Grassmannians and vector bundles This section describes a construction using the notion of classifying space. For any vector space V, let $Gr_{n}(V)$ denote the Grassmannian, the space of n-dimensional linear subspaces of V, and denote the infinite Grassmannian $Gr_{n}=Gr_{n}(\mathbb {R} ^{\infty })$. Recall that it is equipped with the tautological bundle $\gamma ^{n}\to Gr_{n},$ a rank n vector bundle that can be defined as the subbundle of the trivial bundle of fiber V whose fiber at a point $W\in Gr_{n}(V)$ is the subspace represented by W. Let $f\colon X\to Gr_{n}$ be a continuous map to the infinite Grassmannian.
Then, up to isomorphism, the bundle induced by the map f on X $f^{*}\gamma ^{n}\in \mathrm {Vect} _{n}(X)$ depends only on the homotopy class of the map [f]. The pullback operation thus gives a morphism from the set $[X;Gr_{n}]$ of maps $X\to Gr_{n}$ modulo homotopy equivalence, to the set $\mathrm {Vect} _{n}(X)$ of isomorphism classes of vector bundles of rank n over X. (The important fact in this construction is that if X is a paracompact space, this map is a bijection. This is the reason why we call infinite Grassmannians the classifying spaces of vector bundles.) Now, by the naturality axiom (4) above, $w_{j}(f^{*}\gamma ^{n})=f^{*}w_{j}(\gamma ^{n})$. So it suffices in principle to know the values of $w_{j}(\gamma ^{n})$ for all j. However, the cohomology ring $H^{*}(Gr_{n},\mathbb {Z} _{2})$ is free on specific generators $x_{j}\in H^{j}(Gr_{n},\mathbb {Z} _{2})$ arising from a standard cell decomposition, and it then turns out that these generators are in fact just given by $x_{j}=w_{j}(\gamma ^{n})$. Thus, for any rank-n bundle, $w_{j}=f^{*}x_{j}$, where f is the appropriate classifying map. This in particular provides one proof of the existence of the Stiefel–Whitney classes. The case of line bundles We now restrict the above construction to line bundles, i.e., we consider the space $\mathrm {Vect} _{1}(X)$ of line bundles over X. The Grassmannian of lines $Gr_{1}$ is just the infinite projective space $\mathbf {P} ^{\infty }(\mathbf {R} )=\mathbf {R} ^{\infty }/\mathbf {R} ^{*},$ which is doubly covered by the infinite sphere $S^{\infty }$ by antipodal points. This sphere $S^{\infty }$ is contractible, so we have ${\begin{aligned}\pi _{1}(\mathbf {P} ^{\infty }(\mathbf {R} ))&=\mathbf {Z} /2\mathbf {Z} \\\pi _{i}(\mathbf {P} ^{\infty }(\mathbf {R} ))&=\pi _{i}(S^{\infty })=0&&i>1\end{aligned}}$ Hence P∞(R) is the Eilenberg–MacLane space $K(\mathbb {Z} /2\mathbb {Z} ,1)$.
It is a property of Eilenberg–MacLane spaces that $\left[X;\mathbf {P} ^{\infty }(\mathbf {R} )\right]=H^{1}(X;\mathbb {Z} /2\mathbb {Z} )$ for any X, with the isomorphism given by f → f*η, where η is the generator of $H^{1}(\mathbf {P} ^{\infty }(\mathbf {R} );\mathbf {Z} /2\mathbf {Z} )=\mathbb {Z} /2\mathbf {Z} $. Applying the former remark that α : [X, Gr1] → Vect1(X) is also a bijection, we obtain a bijection $w_{1}\colon {\text{Vect}}_{1}(X)\to H^{1}(X;\mathbf {Z} /2\mathbf {Z} )$; this defines the Stiefel–Whitney class w1 for line bundles. The group of line bundles If Vect1(X) is considered as a group under the operation of tensor product, then the Stiefel–Whitney class, w1 : Vect1(X) → H1(X; Z/2Z), is an isomorphism. That is, w1(λ ⊗ μ) = w1(λ) + w1(μ) for all line bundles λ, μ → X. For example, since H1(S1; Z/2Z) = Z/2Z, there are only two line bundles over the circle up to bundle isomorphism: the trivial one, and the open Möbius strip (i.e., the Möbius strip with its boundary deleted). The same construction for complex vector bundles shows that the Chern class defines a bijection between complex line bundles over X and H2(X; Z), because the corresponding classifying space is P∞(C), a K(Z, 2). This isomorphism is true for topological line bundles; the obstruction to injectivity of the Chern class for algebraic vector bundles is the Jacobian variety. Properties Topological interpretation of vanishing 1. wi(E) = 0 whenever i > rank(E). 2. If Ek has $s_{1},\ldots ,s_{\ell }$ sections which are everywhere linearly independent then the $\ell $ top degree Whitney classes vanish: $w_{k-\ell +1}=\cdots =w_{k}=0$. 3. The first Stiefel–Whitney class is zero if and only if the bundle is orientable. In particular, a manifold M is orientable if and only if w1(TM) = 0. 4. The bundle admits a spin structure if and only if both the first and second Stiefel–Whitney classes are zero. 5.
For an orientable bundle, the second Stiefel–Whitney class is in the image of the natural map H2(M, Z) → H2(M, Z/2Z) (equivalently, the so-called third integral Stiefel–Whitney class is zero) if and only if the bundle admits a spinc structure. 6. All the Stiefel–Whitney numbers (see below) of a smooth compact manifold X vanish if and only if the manifold is the boundary of some smooth compact (unoriented) manifold (Warning: Some Stiefel–Whitney class could still be non-zero, even if all the Stiefel–Whitney numbers vanish!) Uniqueness of the Stiefel–Whitney classes The bijection above for line bundles implies that any functor θ satisfying the four axioms above is equal to w, by the following argument. The second axiom yields θ(γ1) = 1 + θ1(γ1). For the inclusion map i : P1(R) → P∞(R), the pullback bundle $i^{*}\gamma ^{1}$ is equal to $\gamma _{1}^{1}$. Thus the first and third axioms imply $i^{*}\theta _{1}\left(\gamma ^{1}\right)=\theta _{1}\left(i^{*}\gamma ^{1}\right)=\theta _{1}\left(\gamma _{1}^{1}\right)=w_{1}\left(\gamma _{1}^{1}\right)=w_{1}\left(i^{*}\gamma ^{1}\right)=i^{*}w_{1}\left(\gamma ^{1}\right).$ Since the map $i^{*}:H^{1}\left(\mathbf {P} ^{\infty }(\mathbf {R} );\mathbf {Z} /2\mathbf {Z} \right)\to H^{1}\left(\mathbf {P} ^{1}(\mathbf {R} );\mathbf {Z} /2\mathbf {Z} \right)$ is an isomorphism, $\theta _{1}(\gamma ^{1})=w_{1}(\gamma ^{1})$ and θ(γ1) = w(γ1) follow. Let E be a real vector bundle of rank n over a space X. Then E admits a splitting map, i.e. a map f : X′ → X for some space X′ such that $f^{*}:H^{*}(X;\mathbf {Z} /2\mathbf {Z} )\to H^{*}(X';\mathbf {Z} /2\mathbf {Z} )$ is injective and $f^{*}E=\lambda _{1}\oplus \cdots \oplus \lambda _{n}$ for some line bundles $\lambda _{i}\to X'$. Any line bundle over X is of the form $g^{*}\gamma ^{1}$ for some map g, and $\theta \left(g^{*}\gamma ^{1}\right)=g^{*}\theta \left(\gamma ^{1}\right)=g^{*}w\left(\gamma ^{1}\right)=w\left(g^{*}\gamma ^{1}\right),$ by naturality.
Thus θ = w on ${\text{Vect}}_{1}(X)$. It follows from the fourth axiom above that $f^{*}\theta (E)=\theta (f^{*}E)=\theta (\lambda _{1}\oplus \cdots \oplus \lambda _{n})=\theta (\lambda _{1})\cdots \theta (\lambda _{n})=w(\lambda _{1})\cdots w(\lambda _{n})=w(f^{*}E)=f^{*}w(E).$ Since $f^{*}$ is injective, θ = w. Thus the Stiefel–Whitney class is the unique functor satisfying the four axioms above. Non-isomorphic bundles with the same Stiefel–Whitney classes Although the map $w_{1}\colon \mathrm {Vect} _{1}(X)\to H^{1}(X;\mathbb {Z} /2\mathbb {Z} )$ is a bijection, the corresponding map is not necessarily injective in higher dimensions. For example, consider the tangent bundle $TS^{n}$ for n even. With the canonical embedding of $S^{n}$ in $\mathbb {R} ^{n+1}$, the normal bundle $\nu $ to $S^{n}$ is a line bundle. Since $S^{n}$ is orientable, $\nu $ is trivial. The sum $TS^{n}\oplus \nu $ is just the restriction of $T\mathbb {R} ^{n+1}$ to $S^{n}$, which is trivial since $\mathbb {R} ^{n+1}$ is contractible. Hence w(TSn) = w(TSn)w(ν) = w(TSn ⊕ ν) = 1. But, provided n is even, TSn → Sn is not trivial; its Euler class $e(TS^{n})=\chi (TS^{n})[S^{n}]=2[S^{n}]\not =0$, where [Sn] denotes a fundamental class of Sn and χ the Euler characteristic. Related invariants Stiefel–Whitney numbers If we work on a manifold of dimension n, then any product of Stiefel–Whitney classes of total degree n can be paired with the Z/2Z-fundamental class of the manifold to give an element of Z/2Z, a Stiefel–Whitney number of the vector bundle. For example, if the manifold has dimension 3, there are three linearly independent Stiefel–Whitney numbers, given by $w_{1}^{3},w_{1}w_{2},w_{3}$. In general, if the manifold has dimension n, the number of possible independent Stiefel–Whitney numbers is the number of partitions of n. The Stiefel–Whitney numbers of the tangent bundle of a smooth manifold are called the Stiefel–Whitney numbers of the manifold. They are known to be cobordism invariants. 
It was proven by Lev Pontryagin that if B is a smooth compact (n+1)–dimensional manifold with boundary equal to M, then the Stiefel–Whitney numbers of M are all zero.[1] Moreover, it was proved by René Thom that if all the Stiefel–Whitney numbers of M are zero then M can be realised as the boundary of some smooth compact manifold.[2] One Stiefel–Whitney number of importance in surgery theory is the de Rham invariant of a (4k+1)-dimensional manifold, $w_{2}w_{4k-1}.$ Wu classes The Stiefel–Whitney classes $w_{k}$ are the Steenrod squares of the Wu classes $v_{k}$, defined by Wu Wenjun in (Wu 1955). Most simply, the total Stiefel–Whitney class is the total Steenrod square of the total Wu class: $\operatorname {Sq} (v)=w$. Wu classes are most often defined implicitly in terms of Steenrod squares, as the cohomology class representing the Steenrod squares. Let the manifold X be n-dimensional. Then, for any cohomology class x of degree $n-k$, $v_{k}\cup x=\operatorname {Sq} ^{k}(x)$. Or more narrowly, we can demand $\langle v_{k}\cup x,\mu \rangle =\langle \operatorname {Sq} ^{k}(x),\mu \rangle $, again for cohomology classes x of degree $n-k$.[3] Integral Stiefel–Whitney classes The element $\beta w_{i}\in H^{i+1}(X;\mathbf {Z} )$ is called the (i + 1)-st integral Stiefel–Whitney class, where β is the Bockstein homomorphism, corresponding to reduction modulo 2, Z → Z/2Z: $\beta \colon H^{i}(X;\mathbf {Z} /2\mathbf {Z} )\to H^{i+1}(X;\mathbf {Z} ).$ For instance, the third integral Stiefel–Whitney class is the obstruction to a Spinc structure. Relations over the Steenrod algebra Over the Steenrod algebra, the Stiefel–Whitney classes of a smooth manifold (defined as the Stiefel–Whitney classes of the tangent bundle) are generated by those of the form $w_{2^{i}}$.
In particular, the Stiefel–Whitney classes satisfy the Wu formula, named for Wu Wenjun:[4] $Sq^{i}(w_{j})=\sum _{t=0}^{i}{j+t-i-1 \choose t}w_{i-t}w_{j+t}.$ See also • Characteristic class for a general survey, in particular Chern class, the direct analogue for complex vector bundles • Real projective space References 1. Pontryagin, Lev S. (1947). "Characteristic cycles on differentiable manifolds". Mat. Sbornik. New Series (in Russian). 21 (63): 233–284. 2. Milnor, John W.; Stasheff, James D. (1974). Characteristic Classes. Princeton University Press. pp. 50–53. ISBN 0-691-08122-0. 3. Milnor, John W.; Stasheff, James D. (1974). Characteristic Classes. Princeton University Press. pp. 131–133. ISBN 0-691-08122-0. 4. (May 1999, p. 197) • Dale Husemoller, Fibre Bundles, Springer-Verlag, 1994. • May, J. Peter (1999), A Concise Course in Algebraic Topology (PDF), Chicago: University of Chicago Press, retrieved 2009-08-07 • Milnor, John Willard (1970), With an appendix by J. Tate, "Algebraic K-theory and quadratic forms", Inventiones Mathematicae, 9: 318–344, doi:10.1007/BF01425486, ISSN 0020-9910, MR 0260844, Zbl 0199.55501 External links • Wu class at the Manifold Atlas
Riemann–Stieltjes integral In mathematics, the Riemann–Stieltjes integral is a generalization of the Riemann integral, named after Bernhard Riemann and Thomas Joannes Stieltjes. The definition of this integral was first published in 1894 by Stieltjes.[1] It serves as an instructive and useful precursor of the Lebesgue integral, and an invaluable tool in unifying equivalent forms of statistical theorems that apply to discrete and continuous probability. Formal definition The Riemann–Stieltjes integral of a real-valued function $f$ of a real variable on the interval $[a,b]$ with respect to another real-to-real function $g$ is denoted by $\int _{x=a}^{b}f(x)\,\mathrm {d} g(x).$ Its definition uses a sequence of partitions $P$ of the interval $[a,b]$ $P=\{a=x_{0}<x_{1}<\cdots <x_{n}=b\}.$ The integral, then, is defined to be the limit, as the mesh (the length of the longest subinterval) of the partitions approaches $0$, of the approximating sum $S(P,f,g)=\sum _{i=0}^{n-1}f(c_{i})\left[g(x_{i+1})-g(x_{i})\right]$ where $c_{i}$ is in the $i$-th subinterval $[x_{i};x_{i+1}]$. The two functions $f$ and $g$ are respectively called the integrand and the integrator. Typically $g$ is taken to be monotone (or at least of bounded variation) and right-semicontinuous (however this last is essentially convention). We specifically do not require $g$ to be continuous, which allows for integrals that have point mass terms. 
The "limit" is here understood to be a number A (the value of the Riemann–Stieltjes integral) such that for every ε > 0, there exists δ > 0 such that for every partition P with norm(P) < δ, and for every choice of points ci in [xi, xi+1], $|S(P,f,g)-A|<\varepsilon .\,$ Properties The Riemann–Stieltjes integral admits integration by parts in the form $\int _{a}^{b}f(x)\,\mathrm {d} g(x)=f(b)g(b)-f(a)g(a)-\int _{a}^{b}g(x)\,\mathrm {d} f(x)$ and the existence of either integral implies the existence of the other.[2] On the other hand, a classical result[3] shows that the integral is well-defined if f is α-Hölder continuous and g is β-Hölder continuous with α + β > 1 . If $f(x)$ is bounded on $[a,b]$, $g(x)$ increases monotonically, and $g'(x)$ is Riemann integrable, then the Riemann–Stieltjes integral is related to the Riemann integral by $\int _{a}^{b}f(x)\,\mathrm {d} g(x)=\int _{a}^{b}f(x)g'(x)\,\mathrm {d} x$ For a step function $g(x)={\begin{cases}0&{\text{if }}x\leq s\\1&{\text{if }}x>s\\\end{cases}}$ where $a<s<b$, if $f$ is continuous at $s$, then $\int _{a}^{b}f(x)\,\mathrm {d} g(x)=f(s)$ Application to probability theory If g is the cumulative probability distribution function of a random variable X that has a probability density function with respect to Lebesgue measure, and f is any function for which the expected value $\operatorname {E} \left[\,\left|f(X)\right|\,\right]$ is finite, then the probability density function of X is the derivative of g and we have $\operatorname {E} [f(X)]=\int _{-\infty }^{\infty }f(x)g'(x)\,\mathrm {d} x.$ But this formula does not work if X does not have a probability density function with respect to Lebesgue measure. 
In particular, it does not work if the distribution of X is discrete (i.e., all of the probability is accounted for by point-masses), and even if the cumulative distribution function g is continuous, it does not work if g fails to be absolutely continuous (the Cantor function may serve as an example of this failure). But the identity $\operatorname {E} [f(X)]=\int _{-\infty }^{\infty }f(x)\,\mathrm {d} g(x)$ holds if g is any cumulative probability distribution function on the real line, no matter how ill-behaved. In particular, no matter how ill-behaved the cumulative distribution function g of a random variable X, if the moment $\operatorname {E} \left[X^{n}\right]$ exists, then it is equal to $\operatorname {E} \left[X^{n}\right]=\int _{-\infty }^{\infty }x^{n}\,\mathrm {d} g(x).$ Application to functional analysis The Riemann–Stieltjes integral appears in the original formulation of F. Riesz's theorem which represents the dual space of the Banach space C[a,b] of continuous functions in an interval [a,b] as Riemann–Stieltjes integrals against functions of bounded variation. Later, that theorem was reformulated in terms of measures. The Riemann–Stieltjes integral also appears in the formulation of the spectral theorem for (non-compact) self-adjoint (or more generally, normal) operators in a Hilbert space. In this theorem, the integral is considered with respect to a spectral family of projections.[4] Existence of the integral The best simple existence theorem states that if f is continuous and g is of bounded variation on [a, b], then the integral exists.[5][6][7] A function g is of bounded variation if and only if it is the difference between two (bounded) monotone functions. If g is not of bounded variation, then there will be continuous functions which cannot be integrated with respect to g. In general, the integral is not well-defined if f and g share any points of discontinuity, but there are other cases as well. 
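The identity E[f(X)] = ∫ f dg for an arbitrary CDF can be seen concretely for a purely discrete distribution: the Riemann–Stieltjes sums pick up one term f(c)·p per jump of g of size p. A small sketch (the example distribution and helper names are ours), approximating E[X] and E[X²] for a fair die directly from its CDF:

```python
import math

def rs_sum(f, g, a, b, n):
    """Left-tagged Riemann-Stieltjes sum over a uniform partition of [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(f(xs[i]) * (g(xs[i + 1]) - g(xs[i])) for i in range(n))

def die_cdf(x):
    """CDF of a fair six-sided die: pure jumps of size 1/6 at x = 1, ..., 6."""
    return min(max(math.floor(x), 0), 6) / 6

# E[X] and E[X^2] as Riemann-Stieltjes integrals against the CDF.  The
# partition size is chosen so that no node lands exactly on a jump point.
mean = rs_sum(lambda x: x, die_cdf, 0.0, 6.5, 130001)
second = rs_sum(lambda x: x * x, die_cdf, 0.0, 6.5, 130001)
print(mean, second)  # approaches 3.5 and 91/6 as the mesh shrinks
```

No density exists anywhere here, yet the Stieltjes sums converge to the expectations.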
Geometric interpretation A 3D plot, with $x$, $f(x)$, and $g(x)$ all along orthogonal axes, leads to a geometric interpretation of the Riemann–Stieltjes integral.[8] If the $g$-$x$ plane is horizontal and the $f$-direction is pointing upward, then the surface to be considered is like a curved fence. The fence follows the curve traced by $g(x)$, and the height of the fence is given by $f(x)$. The fence is the section of the $g$-sheet (i.e., the $g$ curve extended along the $f$ axis) that is bounded between the $g$-$x$ plane and the $f$-sheet. The Riemann–Stieltjes integral is the area of the projection of this fence onto the $f$-$g$ plane — in effect, its "shadow". The slope of $g(x)$ weights the area of the projection. The values of $x$ for which $g(x)$ has the steepest slope $g'(x)$ correspond to regions of the fence with the greatest projection and thereby carry the most weight in the integral. When $g$ is a step function $g(x)={\begin{cases}0&{\text{if }}x\leq s\\1&{\text{if }}x>s\\\end{cases}}$ the fence has a rectangular "gate" of width 1 and height equal to $f(s)$. Thus the gate, and its projection, have area equal to $f(s)$, the value of the Riemann–Stieltjes integral. Generalization An important generalization is the Lebesgue–Stieltjes integral, which generalizes the Riemann–Stieltjes integral in a way analogous to how the Lebesgue integral generalizes the Riemann integral. If improper Riemann–Stieltjes integrals are allowed, then the Lebesgue integral is not strictly more general than the Riemann–Stieltjes integral. The Riemann–Stieltjes integral also generalizes to the case when either the integrand ƒ or the integrator g take values in a Banach space. If g : [a,b] → X takes values in the Banach space X, then it is natural to assume that it is of strongly bounded variation, meaning that $\sup \sum _{i}\|g(t_{i-1})-g(t_{i})\|_{X}<\infty $ the supremum being taken over all finite partitions $a=t_{0}\leq t_{1}\leq \cdots \leq t_{n}=b$ of the interval [a,b]. 
This generalization plays a role in the study of semigroups, via the Laplace–Stieltjes transform. The Itô integral extends the Riemann–Stieltjes integral to encompass integrands and integrators which are stochastic processes rather than simple functions; see also stochastic calculus. Generalized Riemann–Stieltjes integral A slight generalization[9] is to consider in the above definition partitions P that refine another partition Pε, meaning that P arises from Pε by the addition of points, rather than from partitions with a finer mesh. Specifically, the generalized Riemann–Stieltjes integral of f with respect to g is a number A such that for every ε > 0 there exists a partition Pε such that for every partition P that refines Pε, $|S(P,f,g)-A|<\varepsilon \,$ for every choice of points ci in [xi, xi+1]. This generalization exhibits the Riemann–Stieltjes integral as the Moore–Smith limit on the directed set of partitions of [a, b].[10][11] A consequence is that with this definition, the integral $ \int _{a}^{b}f(x)\,\mathrm {d} g(x)$ can still be defined in cases where f and g have a point of discontinuity in common. Darboux sums The Riemann–Stieltjes integral can be efficiently handled using an appropriate generalization of Darboux sums. 
For a partition P and a nondecreasing function g on [a, b] define the upper Darboux sum of f with respect to g by $U(P,f,g)=\sum _{i=1}^{n}\,\,[\,g(x_{i})-g(x_{i-1})\,]\,\sup _{x\in [x_{i-1},x_{i}]}f(x)$ and the lower sum by $L(P,f,g)=\sum _{i=1}^{n}\,\,[\,g(x_{i})-g(x_{i-1})\,]\,\inf _{x\in [x_{i-1},x_{i}]}f(x).$ Then the generalized Riemann–Stieltjes integral of f with respect to g exists if and only if, for every ε > 0, there exists a partition P such that $U(P,f,g)-L(P,f,g)<\varepsilon .$ Furthermore, f is Riemann–Stieltjes integrable with respect to g (in the classical sense) if $\lim _{\operatorname {mesh} (P)\to 0}[\,U(P,f,g)-L(P,f,g)\,]=0.\quad $[12] Examples and special cases Differentiable g(x) Given a $g(x)$ which is continuously differentiable over $\mathbb {R} $, it can be shown that the equality $\int _{a}^{b}f(x)\,\mathrm {d} g(x)=\int _{a}^{b}f(x)g'(x)\,\mathrm {d} x$ holds, where the integral on the right-hand side is the standard Riemann integral, assuming that $f$ can be integrated by the Riemann–Stieltjes integral. More generally, the Riemann integral equals the Riemann–Stieltjes integral if $g$ is the Lebesgue integral of its derivative; in this case $g$ is said to be absolutely continuous. It may be the case that $g$ has jump discontinuities, or may have derivative zero almost everywhere while still being continuous and increasing (for example, $g$ could be the Cantor function or “Devil's staircase”), in either of which cases the Riemann–Stieltjes integral is not captured by any expression involving derivatives of g. Riemann integral The standard Riemann integral is a special case of the Riemann–Stieltjes integral where $g(x)=x$. Rectifier Consider the function $g(x)=\max\{0,x\}$ used in the study of neural networks, called a rectified linear unit (ReLU). Then the Riemann–Stieltjes integral can be evaluated as $\int _{a}^{b}f(x)\,\mathrm {d} g(x)=\int _{g(a)}^{g(b)}f(x)\,\mathrm {d} x$ where the integral on the right-hand side is the standard Riemann integral. 
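The Darboux-sum criterion above can be illustrated numerically. In the sketch below (our assumptions: uniform partitions, and sup/inf approximated by sampling each cell, which is adequate for a demonstration), the gap U − L shrinks as the mesh is refined for a continuous f and a nondecreasing g:

```python
import math

def darboux(f, g, a, b, n, samples=50):
    """Upper and lower Darboux sums of f with respect to a nondecreasing g,
    on a uniform partition; sup/inf are approximated by sampling each cell."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    U = L = 0.0
    for i in range(n):
        w = g(xs[i + 1]) - g(xs[i])  # nonnegative, since g is nondecreasing
        vals = [f(xs[i] + (xs[i + 1] - xs[i]) * j / samples)
                for j in range(samples + 1)]
        U += w * max(vals)
        L += w * min(vals)
    return U, L

f, g = math.sin, (lambda x: x ** 3)  # g is nondecreasing on [0, 2]
for n in (10, 100, 1000):
    U, L = darboux(f, g, 0.0, 2.0, n)
    print(n, U - L)  # the gap shrinks as the mesh is refined

# Both sums bracket the Riemann integral of sin(x) * 3x^2 over [0, 2].
exact = -6 * math.cos(2) + 12 * math.sin(2) - 6
print(exact)
```

For this smooth pair the gap decays like O(1/n), consistent with the mesh criterion quoted above.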
Cavalieri integration Cavalieri's principle can be used to calculate areas bounded by curves using Riemann–Stieltjes integrals.[13] The integration strips of Riemann integration are replaced with strips that are non-rectangular in shape. The method is to transform a "Cavaliere region" with a transformation $h$, or to use $g=h^{-1}$ as the integrator. For a given function $f(x)$ on an interval $[a,b]$, a "translational function" $a(y)$ must intersect $(x,f(x))$ exactly once for any shift in the interval. A "Cavaliere region" is then bounded by $f(x),a(y)$, the $x$-axis, and $b(y)=a(y)+(b-a)$. The area of the region is then $\int _{a(y)}^{b(y)}f(x)\,dx\ =\ \int _{a'}^{b'}f(x)\,dg(x),$ where $a'$ and $b'$ are the $x$-values where $a(y)$ and $b(y)$ intersect $f(x)$. Notes 1. Stieltjes (1894), pp. 68–71. 2. Hille & Phillips (1974), §3.3. 3. Young (1936). 4. See Riesz & Sz. Nagy (1990) for details. 5. Johnsonbaugh & Pfaffenberger (2010), p. 219. 6. Rudin (1964), pp. 121–122. 7. Kolmogorov & Fomin (1975), p. 368. 8. Bullock (1988) 9. Introduced by Pollard (1920) and now standard in analysis. 10. McShane (1952). 11. Hildebrandt (1938) calls it the Pollard–Moore–Stieltjes integral. 12. Graves (1946), Chap. XII, §3. 13. T. L. Grobler, E. R. Ackermann, A. J. van Zyl & J. C. Olivier Cavaliere integration from Council for Scientific and Industrial Research References • Bullock, Gregory L. (May 1988). "A Geometric Interpretation of the Riemann-Stieltjes Integral". The American Mathematical Monthly. Mathematical Association of America. 95 (5): 448–455. doi:10.1080/00029890.1988.11972030. JSTOR 2322483. • Graves, Lawrence (1946). The Theory of Functions of Real Variables. International series in pure and applied mathematics. McGraw-Hill. via HathiTrust • Hildebrandt, T.H. (1938). "Definitions of Stieltjes integrals of the Riemann type". The American Mathematical Monthly. 45 (5): 265–278. doi:10.1080/00029890.1938.11990804. 
ISSN 0002-9890. JSTOR 2302540. MR 1524276. • Hille, Einar; Phillips, Ralph S. (1974). Functional analysis and semi-groups. Providence, RI: American Mathematical Society. MR 0423094. • Johnsonbaugh, Richard F.; Pfaffenberger, William Elmer (2010). Foundations of mathematical analysis. Mineola, NY: Dover Publications. ISBN 978-0-486-47766-4. • Kolmogorov, Andrey; Fomin, Sergei V. (1975) [1970]. Introductory Real Analysis. Translated by Silverman, Richard A. (Revised English ed.). Dover Press. ISBN 0-486-61226-0. • McShane, E. J. (1952). "Partial orderings & Moore-Smith limit" (PDF). The American Mathematical Monthly. 59: 1–11. doi:10.2307/2307181. JSTOR 2307181. Retrieved 2 November 2010. • Pollard, Henry (1920). "The Stieltjes integral and its generalizations". The Quarterly Journal of Pure and Applied Mathematics. 49. • Riesz, F.; Sz. Nagy, B. (1990). Functional Analysis. Dover Publications. ISBN 0-486-66289-6. • Rudin, Walter (1964). Principles of mathematical analysis (Second ed.). New York, NY: McGraw-Hill. • Shilov, G. E.; Gurevich, B. L. (1978). Integral, Measure, and Derivative: A unified approach. Translated by Silverman, Richard A. Dover Publications. Bibcode:1966imdu.book.....S. ISBN 0-486-63519-8. • Stieltjes, Thomas Jan (1894). "Recherches sur les fractions continues". Ann. Fac. Sci. Toulouse. VIII: 1–122. MR 1344720. • Stroock, Daniel W. (1998). A Concise Introduction to the Theory of Integration (3rd ed.). Birkhauser. ISBN 0-8176-4073-8. • Young, L.C. (1936). "An inequality of the Hölder type, connected with Stieltjes integration". Acta Mathematica. 67 (1): 251–282. doi:10.1007/bf02401743. 
Stieltjes matrix In mathematics, particularly matrix theory, a Stieltjes matrix, named after Thomas Joannes Stieltjes, is a real symmetric positive definite matrix with nonpositive off-diagonal entries. A Stieltjes matrix is necessarily an M-matrix. The inverse of an n×n Stieltjes matrix is a nonsingular symmetric nonnegative matrix, though the converse of this statement is not true in general for n > 2. From the above definition, a Stieltjes matrix is a symmetric invertible Z-matrix whose eigenvalues have positive real parts. As it is a Z-matrix, its off-diagonal entries are less than or equal to zero. See also • Hurwitz matrix • Metzler matrix References • David M. Young (2003). Iterative Solution of Large Linear Systems. Dover Publications. p. 42. ISBN 0-486-42548-7. • Anne Greenbaum (1987). Iterative Methods for Solving Linear Systems. SIAM. p. 162. ISBN 0-89871-396-X.
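The defining conditions, and the entrywise-nonnegative inverse mentioned above, can be checked directly for a small example (the matrix and helper functions below are ours, chosen for illustration):

```python
# A 3x3 Stieltjes matrix: real symmetric, positive definite, with
# nonpositive off-diagonal entries (the classic second-difference matrix).
A = [[ 2.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  2.0]]

def det(M):
    """Determinant by Laplace expansion along the first row (fine for tiny M)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def leading_minors(M):
    """Sylvester's criterion: a symmetric matrix is positive definite
    iff all leading principal minors are positive."""
    return [det([row[:k] for row in M[:k]]) for k in range(1, len(M) + 1)]

def inverse(M):
    """Inverse via the adjugate/cofactor formula (adequate for a 3x3 demo)."""
    n, d = len(M), det(M)
    inv = [[0.0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            minor = [row[:r] + row[r + 1:] for k, row in enumerate(M) if k != c]
            inv[r][c] = (-1) ** (r + c) * det(minor) / d
    return inv

off_diag_ok = all(A[i][j] <= 0 for i in range(3) for j in range(3) if i != j)
minors = leading_minors(A)  # all positive, so A is positive definite
Ainv = inverse(A)
print(off_diag_ok, minors)
print(Ainv)  # every entry is nonnegative, as stated above
```

Here the inverse works out to (1/4)·[[3, 2, 1], [2, 4, 2], [1, 2, 3]], which is entrywise nonnegative.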
Stieltjes moment problem In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (m0, m1, m2, ...) to be of the form $m_{n}=\int _{0}^{\infty }x^{n}\,d\mu (x)$ for some measure μ. If such a measure μ exists, one asks whether it is unique. The essential difference between this and other well-known moment problems is that this is on a half-line [0, ∞), whereas in the Hausdorff moment problem one considers a bounded interval [0, 1], and in the Hamburger moment problem one considers the whole line (−∞, ∞). Existence Let $\Delta _{n}=\left[{\begin{matrix}m_{0}&m_{1}&m_{2}&\cdots &m_{n}\\m_{1}&m_{2}&m_{3}&\cdots &m_{n+1}\\m_{2}&m_{3}&m_{4}&\cdots &m_{n+2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\m_{n}&m_{n+1}&m_{n+2}&\cdots &m_{2n}\end{matrix}}\right]$ and $\Delta _{n}^{(1)}=\left[{\begin{matrix}m_{1}&m_{2}&m_{3}&\cdots &m_{n+1}\\m_{2}&m_{3}&m_{4}&\cdots &m_{n+2}\\m_{3}&m_{4}&m_{5}&\cdots &m_{n+3}\\\vdots &\vdots &\vdots &\ddots &\vdots \\m_{n+1}&m_{n+2}&m_{n+3}&\cdots &m_{2n+1}\end{matrix}}\right].$ Then { mn : n = 0, 1, 2, ... } is a moment sequence of some measure on $[0,\infty )$ with infinite support if and only if for all n, both $\det(\Delta _{n})>0\ \mathrm {and} \ \det \left(\Delta _{n}^{(1)}\right)>0.$ { mn : n = 0, 1, 2, ... } is a moment sequence of some measure on $[0,\infty )$ with finite support of size m if and only if for all $n\leq m$, both $\det(\Delta _{n})>0\ \mathrm {and} \ \det \left(\Delta _{n}^{(1)}\right)>0$ and for all larger $n$ $\det(\Delta _{n})=0\ \mathrm {and} \ \det \left(\Delta _{n}^{(1)}\right)=0.$ Uniqueness There are several sufficient conditions for uniqueness, for example, Carleman's condition, which states that the solution is unique if $\sum _{n\geq 1}m_{n}^{-1/(2n)}=\infty ~.$ References • Reed, Michael; Simon, Barry (1975), Fourier Analysis, Self-Adjointness, Methods of modern mathematical physics, vol. 2, Academic Press, p.
341 (exercise 25), ISBN 0-12-585002-6
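The existence criterion can be tested on a concrete sequence. The moments of the measure e^(−x) dx on [0, ∞) are mn = n!, so both families of Hankel determinants above should be strictly positive; the sketch below (helper code ours) verifies this with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        for r in range(c + 1, n):
            ratio = M[r][c] / M[c][c]
            M[r] = [a - ratio * b for a, b in zip(M[r], M[c])]
    result = Fraction(sign)
    for c in range(n):
        result *= M[c][c]
    return result

m = [factorial(k) for k in range(12)]  # moments of exp(-x) dx on [0, inf)
dets, dets1 = [], []
for n in range(5):
    dets.append(det([[m[i + j] for j in range(n + 1)] for i in range(n + 1)]))
    dets1.append(det([[m[i + j + 1] for j in range(n + 1)] for i in range(n + 1)]))
print(dets)
print(dets1)  # all strictly positive: (n!) is a Stieltjes moment sequence
```

Exact Fractions are used so that positivity is not an artifact of floating-point rounding.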
Stieltjes constants In mathematics, the Stieltjes constants are the numbers $\gamma _{k}$ that occur in the Laurent series expansion of the Riemann zeta function: $\zeta (1+s)={\frac {1}{s}}+\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\gamma _{n}s^{n}.$ The constant $\gamma _{0}=\gamma =0.577\dots $ is known as the Euler–Mascheroni constant. Representations The Stieltjes constants are given by the limit $\gamma _{n}=\lim _{m\to \infty }\left\{\sum _{k=1}^{m}{\frac {(\ln k)^{n}}{k}}-\int _{1}^{m}{\frac {(\ln x)^{n}}{x}}\,dx\right\}=\lim _{m\rightarrow \infty }{\left\{\sum _{k=1}^{m}{\frac {(\ln k)^{n}}{k}}-{\frac {(\ln m)^{n+1}}{n+1}}\right\}}.$ (In the case n = 0, the first summand requires evaluation of $0^{0}$, which is taken to be 1.) Cauchy's differentiation formula leads to the integral representation $\gamma _{n}={\frac {(-1)^{n}n!}{2\pi }}\int _{0}^{2\pi }e^{-nix}\zeta \left(e^{ix}+1\right)dx.$ Various representations in terms of integrals and infinite series are given in works of Jensen, Franel, Hermite, Hardy, Ramanujan, Ainsworth, Howell, Coppo, Connon, Coffey, Choi, Blagouchine and some other authors.[1][2][3][4][5][6] In particular, Jensen-Franel's integral formula, often erroneously attributed to Ainsworth and Howell, states that $\gamma _{n}={\frac {1}{2}}\delta _{n,0}+{\frac {1}{i}}\int _{0}^{\infty }{\frac {dx}{e^{2\pi x}-1}}\left\{{\frac {(\ln(1-ix))^{n}}{1-ix}}-{\frac {(\ln(1+ix))^{n}}{1+ix}}\right\}\,,\qquad \quad n=0,1,2,\ldots $ where δn,k is the Kronecker symbol (Kronecker delta).[5][6] Among other formulae, we find $\gamma _{n}=-{\frac {\pi }{2(n+1)}}\int _{-\infty }^{\infty }{\frac {\left(\ln \left({\frac {1}{2}}\pm ix\right)\right)^{n+1}}{\cosh ^{2}\pi x}}\,dx\qquad \qquad \qquad \qquad \qquad \qquad n=0,1,2,\ldots $ ${\begin{array}{l}\displaystyle \gamma _{1}=-\left[\gamma -{\frac {\ln 2}{2}}\right]\ln 2+i\int _{0}^{\infty }{\frac {dx}{e^{\pi x}+1}}\left\{{\frac {\ln(1-ix)}{1-ix}}-{\frac {\ln(1+ix)}{1+ix}}\right\}\\[6mm]\displaystyle \gamma 
_{1}=-\gamma ^{2}-\int _{0}^{\infty }\left[{\frac {1}{1-e^{-x}}}-{\frac {1}{x}}\right]e^{-x}\ln x\,dx\end{array}}$ see.[1][5][7] As concerns series representations, a famous series implying an integer part of a logarithm was given by Hardy in 1912[8] $\gamma _{1}={\frac {\ln 2}{2}}\sum _{k=2}^{\infty }{\frac {(-1)^{k}}{k}}\lfloor \log _{2}{k}\rfloor \cdot \left(2\log _{2}{k}-\lfloor \log _{2}{2k}\rfloor \right)$ Israilov[9] gave semi-convergent series in terms of Bernoulli numbers $B_{2k}$ $\gamma _{m}=\sum _{k=1}^{n}{\frac {(\ln k)^{m}}{k}}-{\frac {(\ln n)^{m+1}}{m+1}}-{\frac {(\ln n)^{m}}{2n}}-\sum _{k=1}^{N-1}{\frac {B_{2k}}{(2k)!}}\left[{\frac {(\ln x)^{m}}{x}}\right]_{x=n}^{(2k-1)}-\theta \cdot {\frac {B_{2N}}{(2N)!}}\left[{\frac {(\ln x)^{m}}{x}}\right]_{x=n}^{(2N-1)}\,,\qquad 0<\theta <1$ Connon,[10] Blagouchine[6][11] and Coppo[1] gave several series with the binomial coefficients ${\begin{array}{l}\displaystyle \gamma _{m}=-{\frac {1}{m+1}}\sum _{n=0}^{\infty }{\frac {1}{n+1}}\sum _{k=0}^{n}(-1)^{k}{\binom {n}{k}}(\ln(k+1))^{m+1}\\[7mm]\displaystyle \gamma _{m}=-{\frac {1}{m+1}}\sum _{n=0}^{\infty }{\frac {1}{n+2}}\sum _{k=0}^{n}(-1)^{k}{\binom {n}{k}}{\frac {(\ln(k+1))^{m+1}}{k+1}}\\[7mm]\displaystyle \gamma _{m}=-{\frac {1}{m+1}}\sum _{n=0}^{\infty }H_{n+1}\sum _{k=0}^{n}(-1)^{k}{\binom {n}{k}}(\ln(k+2))^{m+1}\\[7mm]\displaystyle \gamma _{m}=\sum _{n=0}^{\infty }\left|G_{n+1}\right|\sum _{k=0}^{n}(-1)^{k}{\binom {n}{k}}{\frac {(\ln(k+1))^{m}}{k+1}}\end{array}}$ where Gn are Gregory's coefficients, also known as reciprocal logarithmic numbers (G1=+1/2, G2=−1/12, G3=+1/24, G4=−19/720,... ). 
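The limit representation given at the start of the Representations section can be evaluated numerically in pure Python. In the sketch below, the extra term −(ln m)^n/(2m) is a first Euler–Maclaurin correction that we add to speed convergence (it is not part of the stated limit):

```python
import math

def stieltjes_gamma(n, m=200000):
    """gamma_n from the limit representation, plus a first Euler-Maclaurin
    correction term -(ln m)^n / (2m) to accelerate the convergence."""
    s = sum(math.log(k) ** n / k for k in range(1, m + 1))  # note: 0.0 ** 0 == 1.0
    lm = math.log(m)
    return s - lm ** (n + 1) / (n + 1) - lm ** n / (2 * m)

g0 = stieltjes_gamma(0)
g1 = stieltjes_gamma(1)
print(g0)  # close to +0.57721566... (the Euler-Mascheroni constant)
print(g1)  # close to -0.07281584...
```

With m = 200000 and the correction term, both values come out accurate to roughly nine decimal places.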
More general series of the same nature include these examples[11] $\gamma _{m}=-{\frac {(\ln(1+a))^{m+1}}{m+1}}+\sum _{n=0}^{\infty }(-1)^{n}\psi _{n+1}(a)\sum _{k=0}^{n}(-1)^{k}{\binom {n}{k}}{\frac {(\ln(k+1))^{m}}{k+1}},\quad \Re (a)>-1$ and $\gamma _{m}=-{\frac {1}{r(m+1)}}\sum _{l=0}^{r-1}(\ln(1+a+l))^{m+1}+{\frac {1}{r}}\sum _{n=0}^{\infty }(-1)^{n}N_{n+1,r}(a)\sum _{k=0}^{n}(-1)^{k}{\binom {n}{k}}{\frac {(\ln(k+1))^{m}}{k+1}},\quad \Re (a)>-1,\;r=1,2,3,\ldots $ or $\gamma _{m}=-{\frac {1}{{\tfrac {1}{2}}+a}}\left\{{\frac {(-1)^{m}}{m+1}}\,\zeta ^{(m+1)}(0,1+a)-(-1)^{m}\zeta ^{(m)}(0)-\sum _{n=0}^{\infty }(-1)^{n}\psi _{n+2}(a)\sum _{k=0}^{n}(-1)^{k}{\binom {n}{k}}{\frac {(\ln(k+1))^{m}}{k+1}}\right\},\quad \Re (a)>-1$ where ψn(a) are the Bernoulli polynomials of the second kind and Nn,r(a) are the polynomials given by the generating equation ${\frac {(1+z)^{a+m}-(1+z)^{a}}{\ln(1+z)}}=\sum _{n=0}^{\infty }N_{n,m}(a)z^{n},\qquad |z|<1,$ respectively (note that Nn,1(a) = ψn(a)).[12] Oloa and Tauraso[13] showed that series with harmonic numbers may lead to Stieltjes constants ${\begin{array}{l}\displaystyle \sum _{n=1}^{\infty }{\frac {H_{n}-(\gamma +\ln n)}{n}}=-\gamma _{1}-{\frac {1}{2}}\gamma ^{2}+{\frac {1}{12}}\pi ^{2}\\[6mm]\displaystyle \sum _{n=1}^{\infty }{\frac {H_{n}^{2}-(\gamma +\ln n)^{2}}{n}}=-\gamma _{2}-2\gamma \gamma _{1}-{\frac {2}{3}}\gamma ^{3}+{\frac {5}{3}}\zeta (3)\end{array}}$ Blagouchine[6] obtained slowly-convergent series involving unsigned Stirling numbers of the first kind $\left[{\cdot \atop \cdot }\right]$ $\gamma _{m}={\frac {1}{2}}\delta _{m,0}+{\frac {(-1)^{m}m!}{\pi }}\sum _{n=1}^{\infty }{\frac {1}{n\cdot n!}}\sum _{k=0}^{\lfloor n/2\rfloor }{\frac {(-1)^{k}\cdot \left[{2k+2 \atop m+1}\right]\cdot \left[{n \atop 2k+1}\right]}{(2\pi )^{2k+1}}}\,,\qquad m=0,1,2,...,$ as well as semi-convergent series with rational terms only $\gamma _{m}={\frac {1}{2}}\delta _{m,0}+(-1)^{m}m!\cdot \sum _{k=1}^{N}{\frac {\left[{2k \atop 
m+1}\right]\cdot B_{2k}}{(2k)!}}+\theta \cdot {\frac {(-1)^{m}m!\cdot \left[{2N+2 \atop m+1}\right]\cdot B_{2N+2}}{(2N+2)!}},\qquad 0<\theta <1,$ where m=0,1,2,... In particular, the series for the first Stieltjes constant has a surprisingly simple form $\gamma _{1}=-{\frac {1}{2}}\sum _{k=1}^{N}{\frac {B_{2k}\cdot H_{2k-1}}{k}}+\theta \cdot {\frac {B_{2N+2}\cdot H_{2N+1}}{2N+2}},\qquad 0<\theta <1,$ where Hn is the nth harmonic number.[6] More complicated series for Stieltjes constants are given in works of Lehmer, Liang, Todd, Lavrik, Israilov, Stankus, Keiper, Nan-You, Williams, Coffey.[2][3][6] Bounds and asymptotic growth The Stieltjes constants satisfy the bound $|\gamma _{n}|\leq {\begin{cases}\displaystyle {\frac {2(n-1)!}{\pi ^{n}}}\,,\qquad &n=1,3,5,\ldots \\[3mm]\displaystyle {\frac {4(n-1)!}{\pi ^{n}}}\,,\qquad &n=2,4,6,\ldots \end{cases}}$ given by Berndt in 1972.[14] Better bounds in terms of elementary functions were obtained by Lavrik[15] $|\gamma _{n}|\leq {\frac {n!}{2^{n+1}}},\qquad n=1,2,3,\ldots $ by Israilov[9] $|\gamma _{n}|\leq {\frac {n!C(k)}{(2k)^{n}}},\qquad n=1,2,3,\ldots $ with k=1,2,... and C(1)=1/2, C(2)=7/12,... 
, by Nan-You and Williams[16] $|\gamma _{n}|\leq {\begin{cases}\displaystyle {\frac {2(2n)!}{n^{n+1}(2\pi )^{n}}}\,,\qquad &n=1,3,5,\ldots \\[4mm]\displaystyle {\frac {4(2n)!}{n^{n+1}(2\pi )^{n}}}\,,\qquad &n=2,4,6,\ldots \end{cases}}$ by Blagouchine[6] ${\begin{array}{ll}\displaystyle -{\frac {{\big |}{B}_{m+1}{\big |}}{m+1}}<\gamma _{m}<{\frac {(3m+8)\cdot {\big |}{B}_{m+3}{\big |}}{24}}-{\frac {{\big |}{B}_{m+1}{\big |}}{m+1}},&m=1,5,9,\ldots \\[12pt]\displaystyle {\frac {{\big |}B_{m+1}{\big |}}{m+1}}-{\frac {(3m+8)\cdot {\big |}B_{m+3}{\big |}}{24}}<\gamma _{m}<{\frac {{\big |}{B}_{m+1}{\big |}}{m+1}},&m=3,7,11,\ldots \\[12pt]\displaystyle -{\frac {{\big |}{B}_{m+2}{\big |}}{2}}<\gamma _{m}<{\frac {(m+3)(m+4)\cdot {\big |}{B}_{m+4}{\big |}}{48}}-{\frac {{\big |}B_{m+2}{\big |}}{2}},\qquad &m=2,6,10,\ldots \\[12pt]\displaystyle {\frac {{\big |}{B}_{m+2}{\big |}}{2}}-{\frac {(m+3)(m+4)\cdot {\big |}{B}_{m+4}{\big |}}{48}}<\gamma _{m}<{\frac {{\big |}{B}_{m+2}{\big |}}{2}},&m=4,8,12,\ldots \\\end{array}}$ where Bn are Bernoulli numbers, and by Matsuoka[17][18] $|\gamma _{n}|<10^{-4}e^{n\ln \ln n}\,,\qquad n=5,6,7,\ldots $ As for estimates in terms of non-elementary functions, Knessl, Coffey[19] and Fekih-Ahmed[20] obtained quite accurate results. 
For example, Knessl and Coffey give the following formula that approximates the Stieltjes constants relatively well for large n.[19] If v is the unique solution of $2\pi \exp(v\tan v)=n{\frac {\cos(v)}{v}}$ with $0<v<\pi /2$, and if $u=v\tan v$, then $\gamma _{n}\sim {\frac {B}{\sqrt {n}}}e^{nA}\cos(an+b)$ where $A={\frac {1}{2}}\ln(u^{2}+v^{2})-{\frac {u}{u^{2}+v^{2}}}$ $B={\frac {2{\sqrt {2\pi }}{\sqrt {u^{2}+v^{2}}}}{[(u+1)^{2}+v^{2}]^{1/4}}}$ $a=\tan ^{-1}\left({\frac {v}{u}}\right)+{\frac {v}{u^{2}+v^{2}}}$ $b=\tan ^{-1}\left({\frac {v}{u}}\right)-{\frac {1}{2}}\left({\frac {v}{u+1}}\right).$ Up to n = 100000, the Knessl-Coffey approximation correctly predicts the sign of γn with the single exception of n = 137.[19] Numerical values The first few values are:[21]

n = 0: +0.5772156649015328606065120900824024310421593359 (OEIS A001620)
n = 1: −0.0728158454836767248605863758749013191377363383 (OEIS A082633)
n = 2: −0.0096903631928723184845303860352125293590658061 (OEIS A086279)
n = 3: +0.0020538344203033458661600465427533842857158044 (OEIS A086280)
n = 4: +0.0023253700654673000574681701775260680009044694 (OEIS A086281)
n = 5: +0.0007933238173010627017533348774444448307315394 (OEIS A086282)
n = 6: −0.0002387693454301996098724218419080042777837151 (OEIS A183141)
n = 7: −0.0005272895670577510460740975054788582819962534 (OEIS A183167)
n = 8: −0.0003521233538030395096020521650012087417291805 (OEIS A183206)
n = 9: −0.0000343947744180880481779146237982273906207895 (OEIS A184853)
n = 10: +0.0002053328149090647946837222892370653029598537 (OEIS A184854)
n = 100: −4.2534015717080269623144385197278358247028931053 × 10^17
n = 1000: −1.5709538442047449345494023425120825242380299554 × 10^486
n = 10000: −2.2104970567221060862971082857536501900234397174 × 10^6883
n = 100000: +1.9919273063125410956582272431568589205211659777 × 10^83432

For large n, the Stieltjes constants grow rapidly in absolute value, and change signs in a complex pattern. 
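The Knessl–Coffey approximation above can be implemented in a few lines. The sketch below (bisection bracket and comparison value are ours) solves the transcendental equation for v in logarithmic form to avoid overflow; for n = 100 the result agrees with the tabulated γ100 in sign and order of magnitude:

```python
import math

def knessl_coffey(n):
    """Leading-order Knessl-Coffey approximation of gamma_n.  The defining
    equation 2*pi*exp(v*tan v) = n*cos(v)/v is solved for v in (0, pi/2) by
    bisection on its logarithm (the left-hand side overflows near pi/2)."""
    lo, hi = 1e-6, math.pi / 2 - 1e-6
    for _ in range(200):
        v = (lo + hi) / 2
        if math.log(2 * math.pi) + v * math.tan(v) < math.log(n * math.cos(v) / v):
            lo = v
        else:
            hi = v
    v = (lo + hi) / 2
    u = v * math.tan(v)
    r2 = u * u + v * v
    A = 0.5 * math.log(r2) - u / r2
    B = 2 * math.sqrt(2 * math.pi) * math.sqrt(r2) / ((u + 1) ** 2 + v * v) ** 0.25
    a = math.atan2(v, u) + v / r2
    b = math.atan2(v, u) - 0.5 * v / (u + 1)
    return B / math.sqrt(n) * math.exp(n * A) * math.cos(a * n + b)

est = knessl_coffey(100)
print(est)  # same sign and magnitude as the tabulated gamma_100 ~ -4.2534e17
```

The oscillatory factor cos(an + b) is what produces the complicated sign pattern noted above.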
Further information related to the numerical evaluation of Stieltjes constants may be found in works of Keiper,[22] Kreminski,[23] Plouffe,[24] Johansson[25][26] and Blagouchine.[26] First, Johansson provided values of the Stieltjes constants up to n = 100000, accurate to over 10000 digits each (the numerical values can be retrieved from the LMFDB). Later, Johansson and Blagouchine devised a particularly efficient algorithm for computing generalized Stieltjes constants (see below) for large n and complex a, which can also be used for ordinary Stieltjes constants.[26] In particular, it allows one to compute γn to 1000 digits in a minute for any n up to n = 10^100. Generalized Stieltjes constants General information More generally, one can define Stieltjes constants γn(a) that occur in the Laurent series expansion of the Hurwitz zeta function: $\zeta (s,a)={\frac {1}{s-1}}+\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\gamma _{n}(a)(s-1)^{n}.$ Here a is a complex number with Re(a)>0. Since the Hurwitz zeta function is a generalization of the Riemann zeta function, we have γn(1) = γn. The zeroth constant is simply the negative of the digamma function, γ0(a) = −Ψ(a),[27] while the other constants are not known to be reducible to any elementary or classical function of analysis. Nevertheless, there are numerous representations for them. For example, there exists the following asymptotic representation $\gamma _{n}(a)=\lim _{m\to \infty }\left\{\sum _{k=0}^{m}{\frac {(\ln(k+a))^{n}}{k+a}}-{\frac {(\ln(m+a))^{n+1}}{n+1}}\right\},\qquad {\begin{array}{l}n=0,1,2,\ldots \\[1mm]a\neq 0,-1,-2,\ldots \end{array}}$ due to Berndt and Wilton. 
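The Berndt–Wilton representation just quoted, together with the identity γ0(a) = −Ψ(a), can be checked numerically: since Ψ(1) = −γ and Ψ(2) = 1 − γ (standard digamma values), γ0(1) should equal γ and γ0(2) should equal γ − 1. A pure-Python sketch, with an Euler–Maclaurin correction term that is our addition:

```python
import math

def gen_gamma0(a, m=200000):
    """gamma_0(a) from the Berndt-Wilton limit representation, plus a
    trapezoidal (Euler-Maclaurin) correction -1/(2(m+a))."""
    s = sum(1.0 / (k + a) for k in range(m + 1))
    return s - math.log(m + a) - 1.0 / (2 * (m + a))

euler_gamma = 0.5772156649015329
print(gen_gamma0(1.0), euler_gamma)  # gamma_0(1) = -digamma(1) = gamma
print(gen_gamma0(2.0))               # gamma_0(2) = -digamma(2) = gamma - 1
```

The pair of values also illustrates the recurrence γn(a+1) = γn(a) − (ln a)^n/a stated below, in the case n = 0, a = 1.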
The analog of Jensen-Franel's formula for the generalized Stieltjes constant is the Hermite formula[5] $\gamma _{n}(a)=\left[{\frac {1}{2a}}-{\frac {\ln {a}}{n+1}}\right](\ln a)^{n}-i\int _{0}^{\infty }{\frac {dx}{e^{2\pi x}-1}}\left\{{\frac {(\ln(a-ix))^{n}}{a-ix}}-{\frac {(\ln(a+ix))^{n}}{a+ix}}\right\},\qquad {\begin{array}{l}n=0,1,2,\ldots \\[1mm]\Re (a)>0\end{array}}$ Similar representations are given by the following formulas:[26] $\gamma _{n}(a)=-{\frac {{\big (}\ln(a-{\frac {1}{2}}){\big )}^{n+1}}{n+1}}+i\int _{0}^{\infty }{\frac {dx}{e^{2\pi x}+1}}\left\{{\frac {{\big (}\ln(a-{\frac {1}{2}}-ix){\big )}^{n}}{a-{\frac {1}{2}}-ix}}-{\frac {{\big (}\ln(a-{\frac {1}{2}}+ix){\big )}^{n}}{a-{\frac {1}{2}}+ix}}\right\},\qquad {\begin{array}{l}n=0,1,2,\ldots \\[1mm]\Re (a)>{\frac {1}{2}}\end{array}}$ and $\gamma _{n}(a)=-{\frac {\pi }{2(n+1)}}\int _{0}^{\infty }{\frac {{\big (}\ln(a-{\frac {1}{2}}-ix){\big )}^{n+1}+{\big (}\ln(a-{\frac {1}{2}}+ix){\big )}^{n+1}}{{\big (}\cosh(\pi x){\big )}^{2}}}\,dx,\qquad {\begin{array}{l}n=0,1,2,\ldots \\[1mm]\Re (a)>{\frac {1}{2}}\end{array}}$ Generalized Stieltjes constants satisfy the following recurrence relation $\gamma _{n}(a+1)=\gamma _{n}(a)-{\frac {(\ln a)^{n}}{a}}\,,\qquad {\begin{array}{l}n=0,1,2,\ldots \\[1mm]a\neq 0,-1,-2,\ldots \end{array}}$ as well as the multiplication theorem $\sum _{l=0}^{n-1}\gamma _{p}\left(a+{\frac {l}{n}}\right)=(-1)^{p}n\left[{\frac {\ln n}{p+1}}-\Psi (an)\right](\ln n)^{p}+n\sum _{r=0}^{p-1}(-1)^{r}{\binom {p}{r}}\gamma _{p-r}(an)\cdot (\ln n)^{r}\,,\qquad \qquad n=2,3,4,\ldots $ where ${\binom {p}{r}}$ denotes the binomial coefficient (see[28] and,[29] pp. 101–102). First generalized Stieltjes constant The first generalized Stieltjes constant has a number of remarkable properties. 
• Malmsten's identity (reflection formula for the first generalized Stieltjes constants): the reflection formula for the first generalized Stieltjes constant has the following form $\gamma _{1}{\biggl (}{\frac {m}{n}}{\biggr )}-\gamma _{1}{\biggl (}1-{\frac {m}{n}}{\biggr )}=2\pi \sum _{l=1}^{n-1}\sin {\frac {2\pi ml}{n}}\cdot \ln \Gamma {\biggl (}{\frac {l}{n}}{\biggr )}-\pi (\gamma +\ln 2\pi n)\cot {\frac {m\pi }{n}}$ where m and n are positive integers such that m<n. This formula was long attributed to Almkvist and Meurman, who derived it in the 1990s.[30] However, it was recently reported that this identity, albeit in a slightly different form, was first obtained by Carl Malmsten in 1846.[5][31] • Rational arguments theorem: the first generalized Stieltjes constant at rational argument may be evaluated in a quasi-closed form via the following formula ${\begin{array}{ll}\displaystyle \gamma _{1}{\biggl (}{\frac {r}{m}}{\biggr )}=&\displaystyle \gamma _{1}+\gamma ^{2}+\gamma \ln 2\pi m+\ln 2\pi \cdot \ln {m}+{\frac {1}{2}}(\ln m)^{2}+(\gamma +\ln 2\pi m)\cdot \Psi \left({\frac {r}{m}}\right)\\[5mm]\displaystyle &\displaystyle \qquad +\pi \sum _{l=1}^{m-1}\sin {\frac {2\pi rl}{m}}\cdot \ln \Gamma {\biggl (}{\frac {l}{m}}{\biggr )}+\sum _{l=1}^{m-1}\cos {\frac {2\pi rl}{m}}\cdot \zeta ''\left(0,{\frac {l}{m}}\right)\end{array}}\,,\qquad \quad r=1,2,3,\ldots ,m-1\,.$ see Blagouchine.[5][27] An alternative proof was later proposed by Coffey[32] and several other authors. • Finite summations: there are numerous summation formulae for the first generalized Stieltjes constants.
For example, ${\begin{array}{ll}\displaystyle \sum _{r=0}^{m-1}\gamma _{1}\left(a+{\frac {r}{m}}\right)=m\ln {m}\cdot \Psi (am)-{\frac {m}{2}}(\ln m)^{2}+m\gamma _{1}(am)\,,\qquad a\in \mathbb {C} \\[6mm]\displaystyle \sum _{r=1}^{m-1}\gamma _{1}\left({\frac {r}{m}}\right)=(m-1)\gamma _{1}-m\gamma \ln {m}-{\frac {m}{2}}(\ln m)^{2}\\[6mm]\displaystyle \sum _{r=1}^{2m-1}(-1)^{r}\gamma _{1}{\biggl (}{\frac {r}{2m}}{\biggr )}=-\gamma _{1}+m(2\gamma +\ln 2+2\ln m)\ln 2\\[6mm]\displaystyle \sum _{r=0}^{2m-1}(-1)^{r}\gamma _{1}{\biggl (}{\frac {2r+1}{4m}}{\biggr )}=m\left\{4\pi \ln \Gamma {\biggl (}{\frac {1}{4}}{\biggr )}-\pi {\big (}4\ln 2+3\ln \pi +\ln m+\gamma {\big )}\right\}\\[6mm]\displaystyle \sum _{r=1}^{m-1}\gamma _{1}{\biggl (}{\frac {r}{m}}{\biggr )}\cdot \cos {\dfrac {2\pi rk}{m}}=-\gamma _{1}+m(\gamma +\ln 2\pi m)\ln \left(2\sin {\frac {k\pi }{m}}\right)+{\frac {m}{2}}\left\{\zeta ''\left(0,{\frac {k}{m}}\right)+\zeta ''\left(0,1-{\frac {k}{m}}\right)\right\}\,,\qquad k=1,2,\ldots ,m-1\\[6mm]\displaystyle \sum _{r=1}^{m-1}\gamma _{1}{\biggl (}{\frac {r}{m}}{\biggr )}\cdot \sin {\dfrac {2\pi rk}{m}}={\frac {\pi }{2}}(\gamma +\ln 2\pi m)(2k-m)-{\frac {\pi m}{2}}\left\{\ln \pi -\ln \sin {\frac {k\pi }{m}}\right\}+m\pi \ln \Gamma {\biggl (}{\frac {k}{m}}{\biggr )}\,,\qquad k=1,2,\ldots ,m-1\\[6mm]\displaystyle \sum _{r=1}^{m-1}\gamma _{1}{\biggl (}{\frac {r}{m}}{\biggr )}\cdot \cot {\frac {\pi r}{m}}=\displaystyle {\frac {\pi }{6}}{\Big \{}(1-m)(m-2)\gamma +2(m^{2}-1)\ln 2\pi -(m^{2}+2)\ln {m}{\Big \}}-2\pi \sum _{l=1}^{m-1}l\cdot \ln \Gamma \left({\frac {l}{m}}\right)\\[6mm]\displaystyle \sum _{r=1}^{m-1}{\frac {r}{m}}\cdot \gamma _{1}{\biggl (}{\frac {r}{m}}{\biggr )}={\frac {1}{2}}\left\{(m-1)\gamma _{1}-m\gamma \ln {m}-{\frac {m}{2}}(\ln m)^{2}\right\}-{\frac {\pi }{2m}}(\gamma +\ln 2\pi m)\sum _{l=1}^{m-1}l\cdot \cot {\frac {\pi l}{m}}-{\frac {\pi }{2}}\sum _{l=1}^{m-1}\cot {\frac {\pi l}{m}}\cdot \ln \Gamma {\biggl (}{\frac {l}{m}}{\biggr )}\end{array}}$ 
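As a plausibility check (not a proof), the summation $\sum _{r=1}^{m-1}\gamma _{1}(r/m)=(m-1)\gamma _{1}-m\gamma \ln {m}-{\frac {m}{2}}(\ln m)^{2}$ can be verified numerically for m = 3, again by truncating the limit representation of γ1(a); the reference values of γ and γ1 are hard-coded:

```python
import math

EULER = 0.5772156649015329     # Euler-Mascheroni constant gamma
GAMMA1 = -0.0728158454836767   # Stieltjes constant gamma_1 = gamma_1(1)

def gamma1(a, m=200_000):
    """Crude value of gamma_1(a) from the truncated limit representation;
    the truncation error is of order log(m)/m."""
    s = sum(math.log(k + a) / (k + a) for k in range(m + 1))
    return s - math.log(m + a) ** 2 / 2

m = 3
lhs = sum(gamma1(r / m) for r in range(1, m))
rhs = (m - 1) * GAMMA1 - m * EULER * math.log(m) - (m / 2) * math.log(m) ** 2
print(lhs, rhs)   # both are approximately -3.8585
```

The same helper also reproduces the particular value γ1(1/2) = −2γ ln 2 − (ln 2)² + γ1 ≈ −1.35346 to within the truncation error.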
For more details and further summation formulae, see.[5][29] • Some particular values: some particular values of the first generalized Stieltjes constant at rational arguments may be reduced to the gamma-function, the first Stieltjes constant and elementary functions. For instance, $\gamma _{1}\left({\frac {1}{2}}\right)=-2\gamma \ln 2-(\ln 2)^{2}+\gamma _{1}=-1.353459680\ldots $ At points 1/4, 3/4 and 1/3, values of first generalized Stieltjes constants were independently obtained by Connon[33] and Blagouchine[29] ${\begin{array}{l}\displaystyle \gamma _{1}\left({\frac {1}{4}}\right)=2\pi \ln \Gamma \left({\frac {1}{4}}\right)-{\frac {3\pi }{2}}\ln \pi -{\frac {7}{2}}(\ln 2)^{2}-(3\gamma +2\pi )\ln 2-{\frac {\gamma \pi }{2}}+\gamma _{1}=-5.518076350\ldots \\[6mm]\displaystyle \gamma _{1}\left({\frac {3}{4}}\right)=-2\pi \ln \Gamma \left({\frac {1}{4}}\right)+{\frac {3\pi }{2}}\ln \pi -{\frac {7}{2}}(\ln 2)^{2}-(3\gamma -2\pi )\ln 2+{\frac {\gamma \pi }{2}}+\gamma _{1}=-0.3912989024\ldots \\[6mm]\displaystyle \gamma _{1}\left({\frac {1}{3}}\right)=-{\frac {3\gamma }{2}}\ln 3-{\frac {3}{4}}(\ln 3)^{2}+{\frac {\pi }{4{\sqrt {3}}}}\left\{\ln 3-8\ln 2\pi -2\gamma +12\ln \Gamma \left({\frac {1}{3}}\right)\right\}+\gamma _{1}=-3.259557515\ldots \end{array}}$ At points 2/3, 1/6 and 5/6 ${\begin{array}{l}\displaystyle \gamma _{1}\left({\frac {2}{3}}\right)=-{\frac {3\gamma }{2}}\ln 3-{\frac {3}{4}}(\ln 3)^{2}-{\frac {\pi }{4{\sqrt {3}}}}\left\{\ln 3-8\ln 2\pi -2\gamma +12\ln \Gamma \left({\frac {1}{3}}\right)\right\}+\gamma _{1}=-0.5989062842\ldots \\[6mm]\displaystyle \gamma _{1}\left({\frac {1}{6}}\right)=-{\frac {3\gamma }{2}}\ln 3-{\frac {3}{4}}(\ln 3)^{2}-(\ln 2)^{2}-(3\ln 3+2\gamma )\ln 2+{\frac {3\pi {\sqrt {3}}}{2}}\ln \Gamma \left({\frac {1}{6}}\right)\\[5mm]\displaystyle \qquad \qquad \quad -{\frac {\pi }{2{\sqrt {3}}}}\left\{3\ln 3+11\ln 2+{\frac {15}{2}}\ln \pi +3\gamma \right\}+\gamma _{1}=-10.74258252\ldots \\[6mm]\displaystyle \gamma _{1}\left({\frac 
{5}{6}}\right)=-{\frac {3\gamma }{2}}\ln 3-{\frac {3}{4}}(\ln 3)^{2}-(\ln 2)^{2}-(3\ln 3+2\gamma )\ln 2-{\frac {3\pi {\sqrt {3}}}{2}}\ln \Gamma \left({\frac {1}{6}}\right)\\[6mm]\displaystyle \qquad \qquad \quad +{\frac {\pi }{2{\sqrt {3}}}}\left\{3\ln 3+11\ln 2+{\frac {15}{2}}\ln \pi +3\gamma \right\}+\gamma _{1}=-0.2461690038\ldots \end{array}}$ These values were calculated by Blagouchine.[29] To the same author are also due ${\begin{array}{ll}\displaystyle \gamma _{1}{\biggl (}{\frac {1}{5}}{\biggr )}=&\displaystyle \gamma _{1}+{\frac {\sqrt {5}}{2}}\left\{\zeta ''\left(0,{\frac {1}{5}}\right)+\zeta ''\left(0,{\frac {4}{5}}\right)\right\}+{\frac {\pi {\sqrt {10+2{\sqrt {5}}}}}{2}}\ln \Gamma {\biggl (}{\frac {1}{5}}{\biggr )}\\[5mm]&\displaystyle +{\frac {\pi {\sqrt {10-2{\sqrt {5}}}}}{2}}\ln \Gamma {\biggl (}{\frac {2}{5}}{\biggr )}+\left\{{\frac {\sqrt {5}}{2}}\ln {2}-{\frac {\sqrt {5}}{2}}\ln {\big (}1+{\sqrt {5}}{\big )}-{\frac {5}{4}}\ln 5-{\frac {\pi {\sqrt {25+10{\sqrt {5}}}}}{10}}\right\}\cdot \gamma \\[5mm]&\displaystyle -{\frac {\sqrt {5}}{2}}\left\{\ln 2+\ln 5+\ln \pi +{\frac {\pi {\sqrt {25-10{\sqrt {5}}}}}{10}}\right\}\cdot \ln {\big (}1+{\sqrt {5}})+{\frac {\sqrt {5}}{2}}(\ln 2)^{2}+{\frac {{\sqrt {5}}{\big (}1-{\sqrt {5}}{\big )}}{8}}(\ln 5)^{2}\\[5mm]&\displaystyle +{\frac {3{\sqrt {5}}}{4}}\ln 2\cdot \ln 5+{\frac {\sqrt {5}}{2}}\ln 2\cdot \ln \pi +{\frac {\sqrt {5}}{4}}\ln 5\cdot \ln \pi -{\frac {\pi {\big (}2{\sqrt {25+10{\sqrt {5}}}}+5{\sqrt {25+2{\sqrt {5}}}}{\big )}}{20}}\ln 2\\[5mm]&\displaystyle -{\frac {\pi {\big (}4{\sqrt {25+10{\sqrt {5}}}}-5{\sqrt {5+2{\sqrt {5}}}}{\big )}}{40}}\ln 5-{\frac {\pi {\big (}5{\sqrt {5+2{\sqrt {5}}}}+{\sqrt {25+10{\sqrt {5}}}}{\big )}}{10}}\ln \pi \\[5mm]&\displaystyle =-8.030205511\ldots \\[6mm]\displaystyle \gamma _{1}{\biggl (}{\frac {1}{8}}{\biggr )}=&\displaystyle \gamma _{1}+{\sqrt {2}}\left\{\zeta ''\left(0,{\frac {1}{8}}\right)+\zeta ''\left(0,{\frac {7}{8}}\right)\right\}+2\pi {\sqrt {2}}\ln \Gamma 
{\biggl (}{\frac {1}{8}}{\biggr )}-\pi {\sqrt {2}}{\big (}1-{\sqrt {2}}{\big )}\ln \Gamma {\biggl (}{\frac {1}{4}}{\biggr )}\\[5mm]&\displaystyle -\left\{{\frac {1+{\sqrt {2}}}{2}}\pi +4\ln {2}+{\sqrt {2}}\ln {\big (}1+{\sqrt {2}}{\big )}\right\}\cdot \gamma -{\frac {1}{\sqrt {2}}}{\big (}\pi +8\ln 2+2\ln \pi {\big )}\cdot \ln {\big (}1+{\sqrt {2}})\\[5mm]&\displaystyle -{\frac {7{\big (}4-{\sqrt {2}}{\big )}}{4}}(\ln 2)^{2}+{\frac {1}{\sqrt {2}}}\ln 2\cdot \ln \pi -{\frac {\pi {\big (}10+11{\sqrt {2}}{\big )}}{4}}\ln 2-{\frac {\pi {\big (}3+2{\sqrt {2}}{\big )}}{2}}\ln \pi \\[5mm]&\displaystyle =-16.64171976\ldots \\[6mm]\displaystyle \gamma _{1}{\biggl (}{\frac {1}{12}}{\biggr )}=&\displaystyle \gamma _{1}+{\sqrt {3}}\left\{\zeta ''\left(0,{\frac {1}{12}}\right)+\zeta ''\left(0,{\frac {11}{12}}\right)\right\}+4\pi \ln \Gamma {\biggl (}{\frac {1}{4}}{\biggr )}+3\pi {\sqrt {3}}\ln \Gamma {\biggl (}{\frac {1}{3}}{\biggr )}\\[5mm]&\displaystyle -\left\{{\frac {2+{\sqrt {3}}}{2}}\pi +{\frac {3}{2}}\ln 3-{\sqrt {3}}(1-{\sqrt {3}})\ln {2}+2{\sqrt {3}}\ln {\big (}1+{\sqrt {3}}{\big )}\right\}\cdot \gamma \\[5mm]&\displaystyle -2{\sqrt {3}}{\big (}3\ln 2+\ln 3+\ln \pi {\big )}\cdot \ln {\big (}1+{\sqrt {3}})-{\frac {7-6{\sqrt {3}}}{2}}(\ln 2)^{2}-{\frac {3}{4}}(\ln 3)^{2}\\[5mm]&\displaystyle +{\frac {3{\sqrt {3}}(1-{\sqrt {3}})}{2}}\ln 3\cdot \ln 2+{\sqrt {3}}\ln 2\cdot \ln \pi -{\frac {\pi {\big (}17+8{\sqrt {3}}{\big )}}{2{\sqrt {3}}}}\ln 2\\[5mm]&\displaystyle +{\frac {\pi {\big (}1-{\sqrt {3}}{\big )}{\sqrt {3}}}{4}}\ln 3-\pi {\sqrt {3}}(2+{\sqrt {3}})\ln \pi =-29.84287823\ldots \end{array}}$ Second generalized Stieltjes constant The second generalized Stieltjes constant is much less studied than the first constant. 
Similarly to the first generalized Stieltjes constant, the second generalized Stieltjes constant at rational argument may be evaluated via the following formula ${\begin{array}{rl}\displaystyle \gamma _{2}{\biggl (}{\frac {r}{m}}{\biggr )}=\gamma _{2}+{\frac {2}{3}}\sum _{l=1}^{m-1}\cos {\frac {2\pi rl}{m}}\cdot \zeta '''\left(0,{\frac {l}{m}}\right)-2(\gamma +\ln 2\pi m)\sum _{l=1}^{m-1}\cos {\frac {2\pi rl}{m}}\cdot \zeta ''\left(0,{\frac {l}{m}}\right)\\[6mm]\displaystyle \quad +\pi \sum _{l=1}^{m-1}\sin {\frac {2\pi rl}{m}}\cdot \zeta ''\left(0,{\frac {l}{m}}\right)-2\pi (\gamma +\ln 2\pi m)\sum _{l=1}^{m-1}\sin {\frac {2\pi rl}{m}}\cdot \ln \Gamma {\biggl (}{\frac {l}{m}}{\biggr )}-2\gamma _{1}\ln {m}\\[6mm]\displaystyle \quad -\gamma ^{3}-\left[(\gamma +\ln 2\pi m)^{2}-{\frac {\pi ^{2}}{12}}\right]\cdot \Psi {\biggl (}{\frac {r}{m}}{\biggr )}+{\frac {\pi ^{3}}{12}}\cot {\frac {\pi r}{m}}-\gamma ^{2}\ln {\big (}4\pi ^{2}m^{3}{\big )}+{\frac {\pi ^{2}}{12}}(\gamma +\ln {m})\\[6mm]\displaystyle \quad -\gamma {\big (}(\ln 2\pi )^{2}+4\ln m\cdot \ln 2\pi +2(\ln m)^{2}{\big )}-\left\{(\ln 2\pi )^{2}+2\ln 2\pi \cdot \ln m+{\frac {2}{3}}(\ln m)^{2}\right\}\ln m\end{array}}\,,\qquad \quad r=1,2,3,\ldots ,m-1.$ see Blagouchine.[5] An equivalent result was later obtained by Coffey by another method.[32] References 1. Coppo, Marc-Antoine (1999). "Nouvelles expressions des constantes de Stieltjes". Expositiones Mathematicae. 17: 349–358. 2. Coffey, Mark W. (2009). "Series representations for the Stieltjes constants". arXiv:0905.1111 [math-ph]. 3. Coffey, Mark W. (2010). "Addison-type series representation for the Stieltjes constants". J. Number Theory. 130 (9): 2049–2064. doi:10.1016/j.jnt.2010.01.003. 4. Choi, Junesang (2013). "Certain integral representations of Stieltjes constants". Journal of Inequalities and Applications. 532: 1–10. 5. Blagouchine, Iaroslav V. (2015). 
"A theorem for the closed-form evaluation of the first generalized Stieltjes constant at rational arguments and some related summations". Journal of Number Theory. 148: 537–592. arXiv:1401.3724. doi:10.1016/j.jnt.2014.08.009. Addendum: vol. 151, pp. 276-277, 2015. 6. Blagouchine, Iaroslav V. (2016). "Expansions of generalized Euler's constants into the series of polynomials in π−2 and into the formal enveloping series with rational coefficients only". Journal of Number Theory. 158: 365–396. arXiv:1501.00740. doi:10.1016/j.jnt.2015.06.012. Corrigendum: vol. 173, pp. 631-632, 2017. 7. "A couple of definite integrals related to Stieltjes constants". Stack Exchange. 8. Hardy, G. H. (1912). "Note on Dr. Vacca's series for γ". Q. J. Pure Appl. Math. 43: 215–216. 9. Israilov, M. I. (1981). "On the Laurent decomposition of Riemann's zeta function [in Russian]". Trudy Mat. Inst. Akad. Nauk. SSSR. 158: 98–103. 10. Donal F. Connon Some applications of the Stieltjes constants, arXiv:0901.2083 11. Blagouchine, Iaroslav V. (2018). "Three notes on Ser's and Hasse's representations for the zeta-functions" (PDF). INTEGERS: The Electronic Journal of Combinatorial Number Theory. 18A (#A3): 1–45. arXiv:1606.02044. 12. Actually Blagouchine gives more general formulas, which are valid for the generalized Stieltjes constants as well. 13. "A closed form for the series ..." Stack Exchange. 14. Bruce C. Berndt. On the Hurwitz Zeta-function. Rocky Mountain Journal of Mathematics, vol. 2, no. 1, pp. 151-157, 1972. 15. A. F. Lavrik. On the main term of the divisor's problem and the power series of the Riemann's zeta function in a neighbourhood of its pole (in Russian). Trudy Mat. Inst. Akad. Nauk. SSSR, vol. 142, pp. 165-173, 1976. 16. Z. Nan-You and K. S. Williams. Some results on the generalized Stieltjes constants. Analysis, vol. 14, pp. 147-162, 1994. 17. Y. Matsuoka. Generalized Euler constants associated with the Riemann zeta function.
Number Theory and Combinatorics: Japan 1984, World Scientific, Singapore, pp. 279-295, 1985 18. Y. Matsuoka. On the power series coefficients of the Riemann zeta function. Tokyo Journal of Mathematics, vol. 12, no. 1, pp. 49-58, 1989. 19. Charles Knessl and Mark W. Coffey. An effective asymptotic formula for the Stieltjes constants. Math. Comp., vol. 80, no. 273, pp. 379-386, 2011. 20. Lazhar Fekih-Ahmed. A New Effective Asymptotic Formula for the Stieltjes Constants, arXiv:1407.5567 21. Choudhury, B. K. (1995). "The Riemann zeta-function and its derivatives". Proc. R. Soc. A. 450 (1940): 477–499. Bibcode:1995RSPSA.450..477C. doi:10.1098/rspa.1995.0096. S2CID 124034712. 22. Keiper, J.B. (1992). "Power series expansions of Riemann ζ-function". Math. Comp. 58 (198): 765–773. Bibcode:1992MaCom..58..765K. doi:10.1090/S0025-5718-1992-1122072-5. 23. Kreminski, Rick (2003). "Newton-Cotes integration for approximating Stieltjes generalized Euler constants". Math. Comp. 72 (243): 1379–1397. Bibcode:2003MaCom..72.1379K. doi:10.1090/S0025-5718-02-01483-7. 24. Simon Plouffe. Stieltjes Constants, from 0 to 78, 256 digits each 25. Johansson, Fredrik (2015). "Rigorous high-precision computation of the Hurwitz zeta function and its derivatives". Num. Alg. 69 (2): 253–270. arXiv:1309.2877. doi:10.1007/s11075-014-9893-1. S2CID 10344040. 26. Johansson, Fredrik; Blagouchine, Iaroslav (2019). "Computing Stieltjes constants using complex integration". Mathematics of Computation. 88 (318): 1829–1850. arXiv:1804.01679. doi:10.1090/mcom/3401. S2CID 4619883. 27. "Definite integral". Stack Exchange. 28. Connon, Donal F. (2009). "New proofs of the duplication and multiplication formulae for the gamma and the Barnes double gamma functions". arXiv:0903.4539 [math.CA]. 29. Iaroslav V. Blagouchine Rediscovery of Malmsten's integrals, their evaluation by contour integration methods and some related results. The Ramanujan Journal, vol. 35, no. 1, pp. 21-110, 2014. Erratum-Addendum: vol. 42, pp.
777-781, 2017. PDF 30. V. Adamchik. A class of logarithmic integrals. Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, pp. 1-8, 1997. 31. "Evaluation of a particular integral". Stack Exchange. 32. Mark W. Coffey Functional equations for the Stieltjes constants, arXiv:1402.3746 33. Donal F. Connon The difference between two Stieltjes constants, arXiv:0906.0277
Wikipedia
Stieltjes polynomials For the orthogonal polynomials, see Stieltjes–Wigert polynomials. For the Stieltjes polynomial solutions of Fuchsian differential equations, see Heine–Stieltjes polynomials. In mathematics, the Stieltjes polynomials En are polynomials associated to a family of orthogonal polynomials Pn. They are unrelated to the Stieltjes polynomial solutions of differential equations. Stieltjes originally considered the case where the orthogonal polynomials Pn are the Legendre polynomials. The Gauss–Kronrod quadrature formula uses the zeros of Stieltjes polynomials. Definition If P0, P1, ... form a sequence of orthogonal polynomials for some inner product, then the Stieltjes polynomial En is a degree n polynomial orthogonal to Pn–1(x)xk for k = 0, 1, ..., n – 1. References • Ehrich, Sven (2001) [1994], "Stieltjes polynomials", Encyclopedia of Mathematics, EMS Press
Stieltjes–Wigert polynomials Not to be confused with Stieltjes polynomials. In mathematics, Stieltjes–Wigert polynomials (named after Thomas Jan Stieltjes and Carl Severin Wigert) are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme, for the weight function[1] $w(x)={\frac {k}{\sqrt {\pi }}}x^{-1/2}\exp(-k^{2}\log ^{2}x)$ on the positive real line x > 0. The moment problem for the Stieltjes–Wigert polynomials is indeterminate; in other words, there are many other measures giving the same family of orthogonal polynomials (see Krein's condition). Koekoek et al. (2010) give in Section 14.27 a detailed list of the properties of these polynomials. Definition The polynomials are given in terms of basic hypergeometric functions and the Pochhammer symbol by[2] $\displaystyle S_{n}(x;q)={\frac {1}{(q;q)_{n}}}{}_{1}\phi _{1}(q^{-n},0;q,-q^{n+1}x),$ where $q=\exp \left(-{\frac {1}{2k^{2}}}\right).$ Orthogonality Since the moment problem for these polynomials is indeterminate, there are many different weight functions on [0, ∞) for which they are orthogonal. Two examples of such weight functions are ${\frac {1}{(-x,-qx^{-1};q)_{\infty }}}$ and ${\frac {k}{\sqrt {\pi }}}x^{-1/2}\exp \left(-k^{2}\log ^{2}x\right).$ Notes 1. Up to a constant factor this is w(q−1/2x) for the weight function w in Szegő (1975), Section 2.7. See also Koornwinder et al. (2010), Section 18.27(vi). 2. Up to a constant factor Sn(x;q)=pn(q−1/2x) for pn(x) in Szegő (1975), Section 2.7. References • Gasper, George; Rahman, Mizan (2004), Basic hypergeometric series, Encyclopedia of Mathematics and its Applications, vol. 96 (2nd ed.), Cambridge University Press, ISBN 978-0-521-83357-8, MR 2128719 • Koekoek, Roelof; Lesky, Peter A.; Swarttouw, René F.
(2010), Hypergeometric orthogonal polynomials and their q-analogues, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, doi:10.1007/978-3-642-05014-5, ISBN 978-3-642-05013-8, MR 2656096 • Koornwinder, Tom H.; Wong, Roderick S. C.; Koekoek, Roelof; Swarttouw, René F. (2010), "Ch. 18, Orthogonal polynomials", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. • Szegő, Gábor (1975), Orthogonal Polynomials, Colloquium Publications 23, American Mathematical Society, Fourth Edition, ISBN 978-0-8218-1023-1, MR 0372517 • Stieltjes, T. -J. (1894), "Recherches sur les fractions continues", Ann. Fac. Sci. Toulouse (in French), VIII (4): 1–122, doi:10.5802/afst.108, JFM 25.0326.01, MR 1344720 • Wang, Xiang-Sheng; Wong, Roderick (2010). "Uniform asymptotics of some q-orthogonal polynomials". J. Math. Anal. Appl. 364 (1): 79–87. doi:10.1016/j.jmaa.2009.10.038. • Wigert, S. (1923), "Sur les polynomes orthogonaux et l'approximation des fonctions continues", Arkiv för matematik, astronomi och fysik (in French), 17: 1–15, JFM 49.0296.01
Stiffness matrix In the finite element method for the numerical solution of elliptic partial differential equations, the stiffness matrix is a matrix that represents the system of linear equations that must be solved in order to ascertain an approximate solution to the differential equation. The stiffness matrix for the Poisson problem For simplicity, we will first consider the Poisson problem $-\nabla ^{2}u=f$ on some domain Ω, subject to the boundary condition u = 0 on the boundary of Ω. To discretize this equation by the finite element method, one chooses a set of basis functions {φ1, …, φn} defined on Ω which also vanish on the boundary. One then approximates $u\approx u^{h}=u_{1}\varphi _{1}+\cdots +u_{n}\varphi _{n}.$ The coefficients u1, u2, …, un are determined so that the error in the approximation is orthogonal to each basis function φi: $\int _{x\in \Omega }\varphi _{i}\cdot f\,dx=-\int _{x\in \Omega }\varphi _{i}\nabla ^{2}u^{h}\,dx=-\sum _{j}\left(\int _{x\in \Omega }\varphi _{i}\nabla ^{2}\varphi _{j}\,dx\right)\,u_{j}=\sum _{j}\left(\int _{x\in \Omega }\nabla \varphi _{i}\cdot \nabla \varphi _{j}\,dx\right)u_{j}.$ The stiffness matrix is the n-element square matrix A defined by $\mathbf {A} _{ij}=\int _{x\in \Omega }\nabla \varphi _{i}\cdot \nabla \varphi _{j}\,dx.$ By defining the vector F with components $ \mathbf {F} _{i}=\int _{\Omega }\varphi _{i}f\,dx,$ the coefficients ui are determined by the linear system Au = F. The stiffness matrix is symmetric, i.e. Aij = Aji, so all its eigenvalues are real. Moreover, it is a strictly positive-definite matrix, so that the system Au = F always has a unique solution. (For other problems, these nice properties will be lost.) Note that the stiffness matrix will be different depending on the computational grid used for the domain and what type of finite element is used. 
For example, the stiffness matrix when piecewise quadratic finite elements are used will have more degrees of freedom than piecewise linear elements. The stiffness matrix for other problems Determining the stiffness matrix for other PDEs follows essentially the same procedure, but it can be complicated by the choice of boundary conditions. As a more complex example, consider the elliptic equation $-\sum _{k,l}{\frac {\partial }{\partial x_{k}}}\left(a^{kl}{\frac {\partial u}{\partial x_{l}}}\right)=f$ where $\mathbf {A} (x)=a^{kl}(x)$ is a positive-definite matrix defined for each point x in the domain. We impose the Robin boundary condition $-\sum _{k,l}\nu _{k}a^{kl}{\frac {\partial u}{\partial x_{l}}}=c(u-g),$ where νk is the component of the unit outward normal vector ν in the k-th direction. The system to be solved is $\sum _{j}\left(\sum _{k,l}\int _{\Omega }a^{kl}{\frac {\partial \varphi _{i}}{\partial x_{k}}}{\frac {\partial \varphi _{j}}{\partial x_{l}}}dx+\int _{\partial \Omega }c\varphi _{i}\varphi _{j}\,ds\right)u_{j}=\int _{\Omega }\varphi _{i}f\,dx+\int _{\partial \Omega }c\varphi _{i}g\,ds,$ as can be shown using an analogue of Green's identity. The coefficients ui are still found by solving a system of linear equations, but the matrix representing the system is markedly different from that for the ordinary Poisson problem. In general, to each scalar elliptic operator L of order 2k, there is associated a bilinear form B on the Sobolev space Hk, so that the weak formulation of the equation Lu = f is $B[u,v]=(f,v)$ for all functions v in Hk. Then the stiffness matrix for this problem is $\mathbf {A} _{ij}=B[\varphi _{j},\varphi _{i}].$ Practical assembly of the stiffness matrix In order to implement the finite element method on a computer, one must first choose a set of basis functions and then compute the integrals defining the stiffness matrix. 
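As a minimal sketch of this assembly process, consider the one-dimensional analogue −u″ = f on (0, 1) with u = 0 at both ends, discretized with piecewise-linear "hat" basis functions on a uniform grid. Each element contributes a 2×2 local matrix that is summed into the global matrix (names and structure here are illustrative, not taken from any particular finite element library):

```python
import numpy as np

def poisson_stiffness_1d(n_elements, length=1.0):
    """Assemble the stiffness matrix for -u'' = f with homogeneous
    Dirichlet conditions, using piecewise-linear hat functions.
    On an element of width h the local matrix is (1/h) [[1, -1], [-1, 1]]."""
    h = length / n_elements
    local = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    A = np.zeros((n_elements + 1, n_elements + 1))
    for e in range(n_elements):        # element e connects nodes e and e+1
        A[e:e + 2, e:e + 2] += local
    return A[1:-1, 1:-1]               # drop boundary nodes where u = 0

A = poisson_stiffness_1d(4)
print(A)   # tridiagonal: 8 on the diagonal, -4 off it (since h = 1/4)
```

The result is the familiar matrix (1/h)·tridiag(−1, 2, −1); it is symmetric, sparse, and positive definite, in line with the properties stated above.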
Usually, the domain Ω is discretized by some form of mesh generation, wherein it is divided into non-overlapping triangles or quadrilaterals, which are generally referred to as elements. The basis functions are then chosen to be polynomials of some order within each element, and continuous across element boundaries. The simplest choices are piecewise linear for triangular elements and piecewise bilinear for rectangular elements. The element stiffness matrix A[k] for element Tk is the matrix $\mathbf {A} _{ij}^{[k]}=\int _{T_{k}}\nabla \varphi _{i}\cdot \nabla \varphi _{j}\,dx.$ The element stiffness matrix is zero for most values of i and j, for which the corresponding basis functions are zero within Tk. The full stiffness matrix A is the sum of the element stiffness matrices. In particular, for basis functions that are only supported locally, the stiffness matrix is sparse. For many standard choices of basis functions, i.e. piecewise linear basis functions on triangles, there are simple formulas for the element stiffness matrices. For example, for piecewise linear elements, consider a triangle with vertices (x1, y1), (x2, y2), (x3, y3), and define the 2×3 matrix $\mathbf {D} =\left[{\begin{matrix}x_{3}-x_{2}&x_{1}-x_{3}&x_{2}-x_{1}\\y_{3}-y_{2}&y_{1}-y_{3}&y_{2}-y_{1}\end{matrix}}\right].$ Then the element stiffness matrix is $\mathbf {A} ^{[k]}={\frac {\mathbf {D} ^{\mathsf {T}}\mathbf {D} }{4\operatorname {area} (T)}}.$ When the differential equation is more complicated, say by having an inhomogeneous diffusion coefficient, the integral defining the element stiffness matrix can be evaluated by Gaussian quadrature. The condition number of the stiffness matrix depends strongly on the quality of the numerical grid. In particular, triangles with small angles in the finite element mesh induce large eigenvalues of the stiffness matrix, degrading the solution quality. References • Ern, A.; Guermond, J.-L. 
(2004), Theory and Practice of Finite Elements, New York, NY: Springer-Verlag, ISBN 0387205748 • Gockenbach, M.S. (2006), Understanding and Implementing the Finite Element Method, Philadelphia, PA: SIAM, ISBN 0898716144 • Grossmann, C.; Roos, H.-G.; Stynes, M. (2007), Numerical Treatment of Partial Differential Equations, Berlin, Germany: Springer-Verlag, ISBN 978-3-540-71584-9 • Johnson, C. (2009), Numerical Solution of Partial Differential Equations by the Finite Element Method, Dover, ISBN 978-0486469003 • Zienkiewicz, O.C.; Taylor, R.L.; Zhu, J.Z. (2005), The Finite Element Method: Its Basis and Fundamentals (6th ed.), Oxford, UK: Elsevier Butterworth-Heinemann, ISBN 978-0750663205
Stiff equation In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise definition of stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution. When integrating a differential equation numerically, one would expect the requisite step size to be relatively small in a region where the solution curve displays much variation and to be relatively large where the solution curve straightens out to approach a line with slope nearly zero. For some problems this is not the case. In order for a numerical method to give a reliable solution to the differential system sometimes the step size is required to be at an unacceptably small level in a region where the solution curve is very smooth. The phenomenon is known as stiffness. In some cases there may be two different problems with the same solution, yet one is not stiff and the other is. The phenomenon cannot therefore be a property of the exact solution, since this is the same for both problems, and must be a property of the differential system itself. Such systems are thus known as stiff systems. Motivating example Consider the initial value problem $\,y'(t)=-15y(t),\quad t\geq 0,\quad y(0)=1.$ (1) The exact solution (shown in cyan) is $y(t)=e^{-15t},\quad y(t)\to 0{\text{ as }}t\to \infty .$ (2) We seek a numerical solution that exhibits the same behavior. The figure (right) illustrates the numerical issues for various numerical integrators applied on the equation. 1. Euler's method with a step size of $h={\tfrac {1}{4}}$ oscillates wildly and quickly exits the range of the graph (shown in red). 2. Euler's method with half the step size, $h={\tfrac {1}{8}}$, produces a solution within the graph boundaries, but oscillates about zero (shown in green). 3. 
The trapezoidal method (that is, the two-stage Adams–Moulton method) is given by $y_{n+1}=y_{n}+{\tfrac {1}{2}}h{\bigl (}f(t_{n},y_{n})+f(t_{n+1},y_{n+1}){\bigr )},$ (3) where $y'=f(t,y)$. Applying this method instead of Euler's method gives a much better result (blue). The numerical results decrease monotonically to zero, just as the exact solution does. One of the most prominent examples of the stiff ordinary differential equations (ODEs) is a system that describes the chemical reaction of Robertson:[1] ${\begin{aligned}{\dot {x}}&=-0.04x+10^{4}y\cdot z\\{\dot {y}}&=0.04x-10^{4}y\cdot z-3\cdot 10^{7}y^{2}\\{\dot {z}}&=3\cdot 10^{7}y^{2}\end{aligned}}$ (4) If one treats this system on a short interval, for example, $t\in [0,40]$ there is no problem in numerical integration. However, if the interval is very large (1011 say), then many standard codes fail to integrate it correctly. Additional examples are the sets of ODEs resulting from the temporal integration of large chemical reaction mechanisms. Here, the stiffness arises from the coexistence of very slow and very fast reactions. To solve them, the software packages KPP and Autochem can be used. Stiffness ratio Consider the linear constant coefficient inhomogeneous system $\mathbf {y} '=\mathbf {A} \mathbf {y} +\mathbf {f} (x),$ (5) where $\mathbf {y} ,\mathbf {f} \in \mathbb {R} ^{n}$ and $\mathbf {A} $ is a constant, diagonalizable, $n\times n$ matrix with eigenvalues $\lambda _{t}\in \mathbb {C} ,t=1,2,\ldots ,n$ (assumed distinct) and corresponding eigenvectors $\mathbf {c} _{t}\in \mathbb {C} ^{n},t=1,2,\ldots ,n$. The general solution of (5) takes the form $\mathbf {y} (x)=\sum _{t=1}^{n}\kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}+\mathbf {g} (x),$ (6) where the $\kappa _{t}$ are arbitrary constants and $\mathbf {g} (x)$ is a particular integral. 
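For the linear test problem of the motivating example, each method above reduces to repeated multiplication by a fixed amplification factor, so its behaviour is easy to reproduce (a sketch; step sizes chosen to match the figure description):

```python
# y' = -15 y, y(0) = 1.  Each step multiplies y_n by a fixed factor.

def euler_factor(h, k=-15.0):
    # explicit Euler: y_{n+1} = (1 + h k) y_n
    return 1.0 + h * k

def trapezoid_factor(h, k=-15.0):
    # trapezoidal rule: y_{n+1} = (1 + h k / 2) / (1 - h k / 2) * y_n
    return (1.0 + h * k / 2.0) / (1.0 - h * k / 2.0)

for name, g in [("Euler, h = 1/4      ", euler_factor(0.25)),
                ("Euler, h = 1/8      ", euler_factor(0.125)),
                ("trapezoidal, h = 1/8", trapezoid_factor(0.125))]:
    print(name, [round(g ** n, 4) for n in range(5)])
```

With h = 1/4 the Euler factor is −2.75, so the iterates blow up; with h = 1/8 it is −0.875, giving a decaying but oscillating sequence; the trapezoidal factor ≈ 0.0323 decays monotonically to zero, like the exact solution.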
Now let us suppose that $\operatorname {Re} (\lambda _{t})<0,\qquad t=1,2,\ldots ,n,$ (7) which implies that each of the terms $e^{\lambda _{t}x}\mathbf {c} _{t}\to 0$ as $x\to \infty $, so that the solution $\mathbf {y} (x)$ approaches $\mathbf {g} (x)$ asymptotically as $x\to \infty $; the term $e^{\lambda _{t}x}\mathbf {c} _{t}$ will decay monotonically if $\lambda _{t}$ is real and sinusoidally if $\lambda _{t}$ is complex. Interpreting $x$ to be time (as it often is in physical problems), $ \sum _{t=1}^{n}\kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}$ is called the transient solution and $\mathbf {g} (x)$ the steady-state solution. If $\left|\operatorname {Re} (\lambda _{t})\right|$ is large, then the corresponding term $\kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}$ will decay quickly as $x$ increases and is thus called a fast transient; if $\left|\operatorname {Re} (\lambda _{t})\right|$ is small, the corresponding term $\kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}$ decays slowly and is called a slow transient. Let ${\overline {\lambda }},{\underline {\lambda }}\in \{\lambda _{t},t=1,2,\ldots ,n\}$ be defined by ${\bigl |}\operatorname {Re} ({\overline {\lambda }}){\bigr |}\geq {\bigl |}\operatorname {Re} (\lambda _{t}){\bigr |}\geq {\bigl |}\operatorname {Re} ({\underline {\lambda }}){\bigr |},\qquad t=1,2,\ldots ,n$ (8) so that $\kappa _{t}e^{{\overline {\lambda }}x}\mathbf {c} _{t}$ is the fastest transient and $\kappa _{t}e^{{\underline {\lambda }}x}\mathbf {c} _{t}$ the slowest. We now define the stiffness ratio as[2] ${\frac {{\bigl |}\operatorname {Re} ({\overline {\lambda }}){\bigr |}}{{\bigl |}\operatorname {Re} ({\underline {\lambda }}){\bigr |}}}.$ (9) Characterization of stiffness In this section we consider various aspects of the phenomenon of stiffness. 
"Phenomenon" is probably a more appropriate word than "property", since the latter rather implies that stiffness can be defined in precise mathematical terms; it turns out not to be possible to do this in a satisfactory manner, even for the restricted class of linear constant coefficient systems. We shall also see several qualitative statements that can be (and mostly have been) made in an attempt to encapsulate the notion of stiffness, and state what is probably the most satisfactory of these as a "definition" of stiffness. J. D. Lambert defines stiffness as follows: If a numerical method with a finite region of absolute stability, applied to a system with any initial conditions, is forced to use in a certain interval of integration a step length which is excessively small in relation to the smoothness of the exact solution in that interval, then the system is said to be stiff in that interval. There are other characteristics which are exhibited by many examples of stiff problems, but for each there are counterexamples, so these characteristics do not make good definitions of stiffness. Nonetheless, definitions based upon these characteristics are in common use by some authors and are good clues as to the presence of stiffness. Lambert refers to these as "statements" rather than definitions, for the aforementioned reasons. A few of these are: 1. A linear constant coefficient system is stiff if all of its eigenvalues have negative real part and the stiffness ratio is large. 2. Stiffness occurs when stability requirements, rather than those of accuracy, constrain the step length. 3. Stiffness occurs when some components of the solution decay much more rapidly than others.[3] Etymology The origin of the term "stiffness" has not been clearly established. According to Joseph Oakland Hirschfelder, the term "stiff" is used because such systems correspond to tight coupling between the driver and driven in servomechanisms.[4] According to Richard. L. Burden and J. 
Douglas Faires, Significant difficulties can occur when standard numerical techniques are applied to approximate the solution of a differential equation when the exact solution contains terms of the form $e^{\lambda t}$, where $\lambda $ is a complex number with negative real part. . . . Problems involving rapidly decaying transient solutions occur naturally in a wide variety of applications, including the study of spring and damping systems, the analysis of control systems, and problems in chemical kinetics. These are all examples of a class of problems called stiff (mathematical stiffness) systems of differential equations, due to their application in analyzing the motion of spring and mass systems having large spring constants (physical stiffness).[5] For example, the initial value problem $m{\ddot {x}}+c{\dot {x}}+kx=0,\qquad x(0)=x_{0},\qquad {\dot {x}}(0)=0,$ (10) with $m=1$, $c=1001$, $k=1000$, can be written in the form (5) with $n=2$ and $\mathbf {A} ={\begin{pmatrix}0&1\\-1000&-1001\end{pmatrix}},\qquad \mathbf {f} (t)={\begin{pmatrix}0\\0\end{pmatrix}},\qquad \mathbf {x} (0)={\begin{pmatrix}x_{0}\\0\end{pmatrix}},$ (11) and has eigenvalues ${\overline {\lambda }}=-1000,{\underline {\lambda }}=-1$. Both eigenvalues have negative real part and the stiffness ratio is ${\frac {|-1000|}{|-1|}}=1000,$ (12) which is fairly large. System (10) then certainly satisfies statements 1 and 3. Here the spring constant $k$ is large and the damping constant $c$ is even larger.[6] (While "large" is not a precisely defined term, the larger these quantities are, the more pronounced the effect of stiffness will be.) The exact solution to (10) is $x(t)=x_{0}\left(-{\frac {1}{999}}e^{-1000t}+{\frac {1000}{999}}e^{-t}\right)\approx x_{0}e^{-t}.$ (13) Equation (13) behaves quite similarly to a simple exponential $x_{0}e^{-t}$, but the presence of the $e^{-1000t}$ term, even with a small coefficient, is enough to make the numerical computation very sensitive to step size.
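The eigenvalues and stiffness ratio quoted above can be checked numerically. The following pure-Python sketch solves the characteristic polynomial $\lambda^{2}+1001\lambda+1000=0$ of the matrix $\mathbf{A}$ in (11):

```python
import math

# Eigenvalues of A = [[0, 1], [-1000, -1001]] from the characteristic
# polynomial det(A - lambda*I) = lambda^2 + 1001*lambda + 1000 = 0.
b, c = 1001.0, 1000.0
disc = math.sqrt(b * b - 4.0 * c)
lam_fast = (-b - disc) / 2.0   # -1000: the fastest transient
lam_slow = (-b + disc) / 2.0   # -1: the slowest transient

stiffness_ratio = abs(lam_fast) / abs(lam_slow)
print(lam_fast, lam_slow, stiffness_ratio)  # -1000.0 -1.0 1000.0
```

Both eigenvalues are real and negative, and the ratio of their magnitudes reproduces the value 1000 in (12).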
Stable integration of (10) requires a very small step size until well into the smooth part of the solution curve, resulting in an error much smaller than required for accuracy. Thus the system also satisfies statement 2 and Lambert's definition. A-stability The behaviour of numerical methods on stiff problems can be analyzed by applying these methods to the test equation $y'=ky$ subject to the initial condition $y(0)=1$ with $k\in \mathbb {C} $. The solution of this equation is $y(t)=e^{kt}$. This solution approaches zero as $t\to \infty $ when $\operatorname {Re} (k)<0.$ If the numerical method also exhibits this behaviour (for a fixed step size), then the method is said to be A-stable.[7] A numerical method that is L-stable (see below) has the stronger property that the solution approaches zero in a single step as the step size goes to infinity. A-stable methods do not exhibit the instability problems as described in the motivating example. Runge–Kutta methods Runge–Kutta methods applied to the test equation $y'=k\cdot y$ take the form $y_{n+1}=\phi (hk)\cdot y_{n}$, and, by induction, $y_{n}={\bigl (}\phi (hk){\bigr )}^{n}\cdot y_{0}$. The function $\phi $ is called the stability function. Thus, the condition that $y_{n}\to 0$ as $n\to \infty $ is equivalent to $|\phi (hk)|<1$. This motivates the definition of the region of absolute stability (sometimes referred to simply as stability region), which is the set ${\bigl \{}z\in \mathbb {C} \,{\big |}\,|\phi (z)|<1{\bigr \}}$. The method is A-stable if the region of absolute stability contains the set ${\bigl \{}z\in \mathbb {C} \,{\big |}\,\operatorname {Re} (z)<0{\bigr \}}$, that is, the left half plane. Example: The Euler methods Consider the Euler methods above. The explicit Euler method applied to the test equation $y'=k\cdot y$ is $y_{n+1}=y_{n}+h\cdot f(t_{n},y_{n})=y_{n}+h\cdot (ky_{n})=y_{n}+h\cdot k\cdot y_{n}=(1+h\cdot k)y_{n}.$ Hence, $y_{n}=(1+hk)^{n}\cdot y_{0}$ with $\phi (z)=1+z$. 
The region of absolute stability for this method is thus ${\bigl \{}z\in \mathbb {C} \,{\big |}\,|1+z|<1{\bigr \}}$ which is the disk depicted on the right. The Euler method is not A-stable. The motivating example had $k=-15$. The value of z when taking step size $h={\tfrac {1}{4}}$ is $z=-15\times {\tfrac {1}{4}}=-3.75$, which is outside the stability region. Indeed, the numerical results do not converge to zero. However, with step size $h={\tfrac {1}{8}}$, we have $z=-1.875$ which is just inside the stability region and the numerical results converge to zero, albeit rather slowly. Example: Trapezoidal method Consider the trapezoidal method $y_{n+1}=y_{n}+{\tfrac {1}{2}}h\cdot {\bigl (}f(t_{n},y_{n})+f(t_{n+1},y_{n+1}){\bigr )}.$ Applied to the test equation $y'=k\cdot y$, it gives $y_{n+1}=y_{n}+{\tfrac {1}{2}}h\cdot \left(ky_{n}+ky_{n+1}\right).$ Solving for $y_{n+1}$ yields $y_{n+1}={\frac {1+{\frac {1}{2}}hk}{1-{\frac {1}{2}}hk}}\cdot y_{n}.$ Thus, the stability function is $\phi (z)={\frac {1+{\frac {1}{2}}z}{1-{\frac {1}{2}}z}}$ and the region of absolute stability is $\left\{z\in \mathbb {C} \ \left|\ \left|{\frac {1+{\frac {1}{2}}z}{1-{\frac {1}{2}}z}}\right|<1\right.\right\}.$ This region contains the left half-plane, so the trapezoidal method is A-stable. In fact, the stability region is identical to the left half-plane, and thus the numerical solution of $y'=k\cdot y$ converges to zero if and only if the exact solution does. Nevertheless, the trapezoidal method does not have perfect behavior: it damps all decaying components, but rapidly decaying components are damped only very mildly, because $\phi (z)\to 1$ as $z\to -\infty $. This led to the concept of L-stability: a method is L-stable if it is A-stable and $|\phi (z)|\to 0$ as $z\to \infty $. The trapezoidal method is A-stable but not L-stable.
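The contrast between the two methods on the motivating example ($k=-15$, $h={\tfrac {1}{4}}$, so $z=-3.75$) can be seen by iterating the stability functions directly. This is a minimal sketch; the step count of 20 is chosen arbitrarily:

```python
k = -15.0   # test equation y' = k*y, y(0) = 1, from the motivating example
h = 0.25    # step size; z = h*k = -3.75 lies outside Euler's stability disk

phi_euler = 1.0 + h * k                           # explicit Euler: phi(z) = 1 + z
phi_trap = (1 + h * k / 2) / (1 - h * k / 2)      # trapezoidal: phi(z) = (1 + z/2)/(1 - z/2)

y_euler, y_trap = 1.0, 1.0
for _ in range(20):
    y_euler *= phi_euler
    y_trap *= phi_trap

print(abs(y_euler))  # blows up: |phi(z)| = |1 + z| = 2.75 > 1
print(abs(y_trap))   # decays:   |phi(z)| < 1, as guaranteed by A-stability
```

The Euler iterates grow by a factor 2.75 per step, while the trapezoidal iterates shrink at every step, matching the stability-region analysis above.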
The implicit Euler method is an example of an L-stable method.[8] General theory The stability function of a Runge–Kutta method with coefficients $\mathbf {A} $ and $\mathbf {b} $ is given by $\phi (z)={\frac {\det \left(\mathbf {I} -z\mathbf {A} +z\mathbf {e} \mathbf {b} ^{\mathsf {T}}\right)}{\det(\mathbf {I} -z\mathbf {A} )}},$ where $\mathbf {e} $ denotes the vector with all ones. This is a rational function (one polynomial divided by another). Explicit Runge–Kutta methods have a strictly lower triangular coefficient matrix $\mathbf {A} $ and thus their stability function is a polynomial. It follows that explicit Runge–Kutta methods cannot be A-stable. The stability function of implicit Runge–Kutta methods is often analyzed using order stars. The order star for a method with stability function $\phi $ is defined to be the set ${\bigl \{}z\in \mathbb {C} \,{\big |}\,|\phi (z)|>|e^{z}|{\bigr \}}$. A method is A-stable if and only if its stability function has no poles in the left half-plane and its order star contains no purely imaginary numbers.[9] Multistep methods Linear multistep methods have the form $y_{n+1}=\sum _{i=0}^{s}a_{i}y_{n-i}+h\sum _{j=-1}^{s}b_{j}f\left(t_{n-j},y_{n-j}\right).$ Applied to the test equation, they become $y_{n+1}=\sum _{i=0}^{s}a_{i}y_{n-i}+hk\sum _{j=-1}^{s}b_{j}y_{n-j},$ which can be simplified to $\left(1-b_{-1}z\right)y_{n+1}-\sum _{j=0}^{s}\left(a_{j}+b_{j}z\right)y_{n-j}=0$ where $z=hk$. This is a linear recurrence relation. The method is A-stable if all solutions $\{y_{n}\}$ of the recurrence relation converge to zero when $\operatorname {Re} (z)<0$. The characteristic polynomial is $\Phi (z,w)=w^{s+1}-\sum _{i=0}^{s}a_{i}w^{s-i}-z\sum _{j=-1}^{s}b_{j}w^{s-j}.$ All solutions converge to zero for a given value of $z$ if all solutions $w$ of $\Phi (z,w)=0$ lie in the unit circle.
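As a sanity check of this root condition, the trapezoidal method can be written as a one-step method of the above form (with $s=0$, $a_{0}=1$, $b_{-1}=b_{0}={\tfrac {1}{2}}$), so $\Phi (z,w)=(1-{\tfrac {1}{2}}z)w-(1+{\tfrac {1}{2}}z)$ has a single root. The sketch below (sample points chosen arbitrarily) verifies that this root stays inside the unit circle throughout the left half-plane:

```python
def trap_root(z):
    # Trapezoidal rule as a multistep method: s = 0, a_0 = 1,
    # b_{-1} = b_0 = 1/2, so Phi(z, w) = (1 - z/2) w - (1 + z/2)
    # has the single root below (same as the stability function).
    return (1 + z / 2) / (1 - z / 2)

# Root condition: for Re(z) < 0, every root must satisfy |w| < 1.
for z in (-0.5, -2.0, -10.0, complex(-1, 3)):
    w = trap_root(z)
    print(z, abs(w))
    assert abs(w) < 1  # holds for every sample in the left half-plane
```

This single root is exactly the stability function computed earlier, illustrating how the one-step and multistep analyses agree.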
The region of absolute stability for a multistep method of the above form is then the set of all $z\in \mathbb {C} $ for which all $w$ such that $\Phi (z,w)=0$ satisfy $|w|<1$. Again, if this set contains the left half-plane, the multistep method is said to be A-stable. Example: The second-order Adams–Bashforth method Let us determine the region of absolute stability for the two-step Adams–Bashforth method $y_{n+1}=y_{n}+h\left({\tfrac {3}{2}}f(t_{n},y_{n})-{\tfrac {1}{2}}f(t_{n-1},y_{n-1})\right).$ The characteristic polynomial is $\Phi (z,w)=w^{2}-\left(1+{\tfrac {3}{2}}z\right)w+{\tfrac {1}{2}}z=0$ which has roots $w={\tfrac {1}{2}}\left(1+{\tfrac {3}{2}}z\pm {\sqrt {1+z+{\tfrac {9}{4}}z^{2}}}\right),$ thus the region of absolute stability is $\left\{z\in \mathbb {C} \ \left|\ \left|{\tfrac {1}{2}}\left(1+{\tfrac {3}{2}}z\pm {\sqrt {1+z+{\tfrac {9}{4}}z^{2}}}\right)\right|<1\right.\right\}.$ This region is shown on the right. It does not include all of the left half-plane (in fact it only includes the segment $-1\leq z\leq 0$ of the real axis), so the Adams–Bashforth method is not A-stable. General theory Explicit multistep methods can never be A-stable, just like explicit Runge–Kutta methods. Implicit multistep methods can only be A-stable if their order is at most 2. The latter result is known as the second Dahlquist barrier; it restricts the usefulness of linear multistep methods for stiff equations. An example of a second-order A-stable method is the trapezoidal rule mentioned above, which can also be considered as a linear multistep method.[10] See also • Backward differentiation formula, a family of implicit methods especially used for the solution of stiff differential equations • Condition number • Differential inclusion, an extension of the notion of differential equation that allows discontinuities, in part as way to sidestep some stiffness issues • Explicit and implicit methods Notes 1. Robertson, H. H. (1966).
"The solution of a set of reaction rate equations". Numerical analysis: an introduction. Academic Press. pp. 178–182. 2. Lambert (1992, pp. 216–217) 3. Lambert (1992, pp. 217–220) 4. Hirshfelder (1963) 5. Burden & Faires (1993, p. 314) 6. Kreyszig (1972, pp. 62–68) 7. This definition is due to Dahlquist (1963). 8. The definition of L-stability is due to Ehle (1969). 9. The definition is due to Wanner, Hairer & Nørsett (1978); see also Iserles & Nørsett (1991). 10. See Dahlquist (1963). References • Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3. • Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3 (1): 27–43, doi:10.1007/BF01963532, hdl:10338.dmlcz/103497, S2CID 120241743. • Eberly, David (2008), Stability analysis for systems of differential equations (PDF). • Ehle, B. L. (1969), On Padé approximations to the exponential function and A-stable methods for the numerical solution of initial value problems (PDF), University of Waterloo. • Gear, C. W. (1971), Numerical Initial-Value Problems in Ordinary Differential Equations, Englewood Cliffs: Prentice Hall, Bibcode:1971nivp.book.....G. • Gear, C. W. (1981), "Numerical solution of ordinary differential equations: Is there anything left to do?", SIAM Review, 23 (1): 10–24, doi:10.1137/1023002. • Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (second ed.), Berlin: Springer-Verlag, ISBN 978-3-540-60452-5. • Hirshfelder, J. O. (1963), "Applied Mathematics as used in Theoretical Chemistry", American Mathematical Society Symposium: 367–376. • Iserles, Arieh; Nørsett, Syvert (1991), Order Stars, Chapman & Hall, ISBN 978-0-412-35260-7. • Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8. • Lambert, J. D. (1977), D. 
Jacobs (ed.), "The initial value problem for ordinary differential equations", The State of the Art in Numerical Analysis, New York: Academic Press: 451–501. • Lambert, J. D. (1992), Numerical Methods for Ordinary Differential Systems, New York: Wiley, ISBN 978-0-471-92990-1. • Mathews, John; Fink, Kurtis (1992), Numerical methods using MATLAB. • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 17.5. Stiff Sets of Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. • Shampine, L. F.; Gear, C. W. (1979), "A user's view of solving stiff ordinary differential equations", SIAM Review, 21 (1): 1–17, doi:10.1137/1021001. • Wanner, Gerhard; Hairer, Ernst; Nørsett, Syvert (1978), "Order stars and stability theory", BIT, 18 (4): 475–489, doi:10.1007/BF01932026, S2CID 8824105. • Stability of Runge-Kutta Methods External links • An Introduction to Physically Based Modeling: Energy Functions and Stiffness • Stiff systems Lawrence F. Shampine and Skip Thompson Scholarpedia, 2(3):2855. doi:10.4249/scholarpedia.2855
Stigler's law of eponymy Stigler's law of eponymy, proposed by University of Chicago statistics professor Stephen Stigler in his 1980 publication Stigler’s law of eponymy,[1] states that no scientific discovery is named after its original discoverer. Examples include Hubble's law, which was derived by Georges Lemaître two years before Edwin Hubble; the Pythagorean theorem, which was known to Babylonian mathematicians before Pythagoras; and Halley's Comet, which was observed by astronomers since at least 240 BC (although its official designation is due to Halley's being the first to mathematically predict its return, not to its discovery). Stigler himself named the sociologist Robert K. Merton as the discoverer of "Stigler's law" to show that it follows its own decree, though the phenomenon had previously been noted by others.[2] Derivation Historical acclaim for discoveries is often assigned to persons of note who bring attention to an idea that is not yet widely known, whether or not that person was its original inventor – theories may be named long after their discovery. In the case of eponymy, the idea becomes named after that person, even if that person is acknowledged by historians of science not to be the one who discovered it. Often, several people will arrive at a new idea around the same time, as in the case of calculus. It can be dependent on the publicity of the new work and the fame of its publisher as to whether the scientist's name becomes historically associated. Similar concepts There is a similar quote attributed to Mark Twain: It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a photograph, or a telephone or any other important thing – and the last man gets the credit and we forget the others. He added his little mite – that is all he did.
These object lessons should teach us that ninety-nine parts of all things that proceed from the intellect are plagiarisms, pure and simple; and the lesson ought to make us modest. But nothing can do that.[3] Stephen Stigler's father, the economist George Stigler, also examined the process of discovery in economics. He said, "If an earlier, valid statement of a theory falls on deaf ears, and a later restatement is accepted by the science, this is surely proof that the science accepts ideas only when they fit into the then-current state of the science." He gave several examples in which the original discoverer was not recognized as such.[4] The term "Matthew effect" was coined by Robert K. Merton to describe how eminent scientists get more credit than a comparatively unknown researcher, even if their work is similar, so that credit will usually be given to researchers who are already famous. Merton notes: This pattern of recognition, skewed in favor of the established scientist, appears principally (i) in cases of collaboration and (ii) in cases of independent multiple discoveries made by scientists of distinctly different rank.[5] The effect applies specifically to women through the Matilda effect. Boyer's law was named by Hubert Kennedy in 1972. It says, "Mathematical formulas and theorems are usually not named after their original discoverers" and was named after Carl Boyer, whose book A History of Mathematics contains many examples of this law.
Kennedy observed that "it is perhaps interesting to note that this is probably a rare instance of a law whose statement confirms its own validity".[6] "Everything of importance has been said before by somebody who did not discover it" is an adage attributed to Alfred North Whitehead.[7] List of examples See also • List of misnamed theorems • List of persons considered father or mother of a scientific field • Eponym • Scientific priority • Matthew effect • Matilda effect • Obliteration by incorporation • Theories and sociology of the history of science • Standing on the shoulders of giants References 1. Gieryn, T. F., ed. (1980). Science and social structure: a festschrift for Robert K. Merton. New York: NY Academy of Sciences. pp. 147–57. ISBN 0-89766-043-9., republished in Stigler's collection "Statistics on the Table" 2. For example Henry Dudeney noted in his 1917 Amusements in Mathematics solution 129 that Pell's equation was called that "apparently because Pell neither first propounded the question nor first solved it!" 3. "Letter to Helen Keller". American Foundation for the Blind. 1903. 4. Diamond, Arthur M. Jr. (December 2005). "Measurement, incentives, and constraints in Stigler's economics of science" (PDF). The European Journal of the History of Economic Thought. 12 (4): 639–640. doi:10.1080/09672560500370292. S2CID 154618308. Retrieved 12 January 2015. (Link is to Art Diamond's personal web site.) 5. Merton, Robert K. (5 January 1968). "The Matthew Effect in Science". Science. 159 (3810): 56–63. Bibcode:1968Sci...159...56M. doi:10.1126/science.159.3810.56. PMID 17737466. S2CID 3526819. 6. Kennedy, H.C. (January 1972). "Who discovered Boyer's Law?". The American Mathematical Monthly. 79 (1): 66–67. doi:10.2307/2978134. JSTOR 2978134. 7. Menand, Louis (19 February 2007). "Notable Quotables". The New Yorker. Retrieved 27 March 2009. Further reading • Stigler, George J. (1982a). The Economist as Preacher, and Other Essays. 
Chicago: The University of Chicago Press. ISBN 0-226-77430-9. • Stigler, Stephen M. (1980). Gieryn, F. (ed.). "Stigler's law of eponymy". Transactions of the New York Academy of Sciences. 39: 147–58. doi:10.1111/j.2164-0947.1980.tb02775.x. (Festschrift for Robert K. Merton) • Stigler, Stephen M. (1983). "Who discovered Bayes's theorem?". The American Statistician. 37 (4): 290–6. doi:10.2307/2682766. JSTOR 2682766. • Kern, Scott E (September–October 2002). "Whose Hypothesis? Ciphering, Sectorials, D Lesions, Freckles and the Operation of Stigler's Law". Cancer Biology & Therapy. Landes Bioscience. 1 (5): 571–581. doi:10.4161/cbt.1.5.225. ISSN 1555-8576. PMID 12496492. Retrieved 28 March 2009. External links • Miller, Jeff. "Eponymy and Laws of Eponymy". on Miller, Jeff. "Earliest known uses of some of the words of mathematics". • Malcolm Gladwell (19 December 2006). "In the Air: Who says big ideas are rare?". The New Yorker. Retrieved 6 May 2008. Stigler's law is described near the end of the article
Stinespring dilation theorem In mathematics, Stinespring's dilation theorem, also called Stinespring's factorization theorem, named after W. Forrest Stinespring, is a result from operator theory that represents any completely positive map on a C*-algebra A as a composition of two completely positive maps each of which has a special form: 1. A *-representation of A on some auxiliary Hilbert space K followed by 2. An operator map of the form T ↦ V*TV. Stinespring's theorem can thus be viewed as a structure theorem for completely positive maps from a C*-algebra into the algebra of bounded operators on a Hilbert space: it shows that completely positive maps are simple modifications of *-representations (also called *-homomorphisms). Formulation In the case of a unital C*-algebra, the result is as follows: Theorem. Let A be a unital C*-algebra, H be a Hilbert space, and B(H) be the bounded operators on H. For every completely positive map $\Phi :A\to B(H),$ there exists a Hilbert space K and a unital *-homomorphism $\pi :A\to B(K)$ such that $\Phi (a)=V^{\ast }\pi (a)V,$ where $V:H\to K$ is a bounded operator. Furthermore, we have $\|\Phi (1)\|=\|V\|^{2}.$ Informally, one can say that every completely positive map $\Phi $ can be "lifted" up to a map of the form $V^{*}(\cdot )V$. The converse of the theorem is true trivially. So Stinespring's result classifies completely positive maps. Sketch of proof We now briefly sketch the proof. Let $K=A\otimes H$. For $a\otimes h,\ b\otimes g\in K$, define $\langle a\otimes h,b\otimes g\rangle _{K}:=\langle \Phi (b^{*}a)h,g\rangle _{H}=\langle h,\Phi (a^{*}b)g\rangle _{H}$ and extend by semi-linearity to all of K. This is a Hermitian sesquilinear form because $\Phi $ is compatible with the * operation. Complete positivity of $\Phi $ is then used to show that this sesquilinear form is in fact positive semidefinite.
Since positive semidefinite Hermitian sesquilinear forms satisfy the Cauchy–Schwarz inequality, the subset $K'=\{x\in K\mid \langle x,x\rangle _{K}=0\}\subset K$ is a subspace. We can remove degeneracy by considering the quotient space $K/K'$. The completion of this quotient space is then a Hilbert space, also denoted by $K$. Next define $\pi (a)(b\otimes g)=ab\otimes g$ and $Vh=1_{A}\otimes h$. One can check that $\pi $ and $V$ have the desired properties. Notice that $V$ is just the natural algebraic embedding of H into K. One can verify that $V^{\ast }(a\otimes h)=\Phi (a)h$ holds. In particular $V^{\ast }V=\Phi (1)$ holds so that $V$ is an isometry if and only if $\Phi (1)=1$. In this case H can be embedded, in the Hilbert space sense, into K and $V^{\ast }$, acting on K, becomes the projection onto H. Symbolically, we can write $\Phi (a)=P_{H}\;\pi (a){\Big |}_{H}.$ In the language of dilation theory, this is to say that $\Phi (a)$ is a compression of $\pi (a)$. It is therefore a corollary of Stinespring's theorem that every unital completely positive map is the compression of some *-homomorphism. Minimality The triple (π, V, K) is called a Stinespring representation of Φ. A natural question is now whether one can reduce a given Stinespring representation in some sense. Let K1 be the closed linear span of π(A) VH. By property of *-representations in general, K1 is an invariant subspace of π(a) for all a. Also, K1 contains VH. 
Define $\pi _{1}(a)=\pi (a){\Big |}_{K_{1}}.$ We can compute directly ${\begin{aligned}\pi _{1}(a)\pi _{1}(b)&=\pi (a){\Big |}_{K_{1}}\pi (b){\Big |}_{K_{1}}\\&=\pi (a)\pi (b){\Big |}_{K_{1}}\\&=\pi (ab){\Big |}_{K_{1}}\\&=\pi _{1}(ab)\end{aligned}}$ and if k and ℓ lie in K1 ${\begin{aligned}\langle \pi _{1}(a^{*})k,\ell \rangle &=\langle \pi (a^{*})k,\ell \rangle \\&=\langle \pi (a)^{*}k,\ell \rangle \\&=\langle k,\pi (a)\ell \rangle \\&=\langle k,\pi _{1}(a)\ell \rangle \\&=\langle \pi _{1}(a)^{*}k,\ell \rangle .\end{aligned}}$ So (π1, V, K1) is also a Stinespring representation of Φ and has the additional property that K1 is the closed linear span of π(A) V H. Such a representation is called a minimal Stinespring representation. Uniqueness Let (π1, V1, K1) and (π2, V2, K2) be two Stinespring representations of a given Φ. Define a partial isometry W : K1 → K2 by $\;W\pi _{1}(a)V_{1}h=\pi _{2}(a)V_{2}h.$ On V1H ⊂ K1, this gives the intertwining relation $\;W\pi _{1}=\pi _{2}W.$ In particular, if both Stinespring representations are minimal, W is unitary. Thus minimal Stinespring representations are unique up to a unitary transformation. Some consequences We mention a few of the results which can be viewed as consequences of Stinespring's theorem. Historically, some of the results below preceded Stinespring's theorem. GNS construction The Gelfand–Naimark–Segal (GNS) construction is as follows. Let H in Stinespring's theorem be 1-dimensional, i.e. the complex numbers. So Φ now is a positive linear functional on A. If we assume Φ is a state, that is, Φ has norm 1, then the isometry $V:H\to K$ is determined by $V1=\xi $ for some $\xi \in K$ of unit norm. So ${\begin{aligned}\Phi (a)=V^{*}\pi (a)V&=\langle V^{*}\pi (a)V1,1\rangle _{H}\\&=\langle \pi (a)V1,V1\rangle _{K}\\&=\langle \pi (a)\xi ,\xi \rangle _{K}\end{aligned}}$ and we have recovered the GNS representation of states. 
This is one way to see that completely positive maps, rather than merely positive ones, are the true generalizations of positive functionals. A linear positive functional on a C*-algebra is absolutely continuous with respect to another such functional (called a reference functional) if it is zero on any positive element on which the reference positive functional is zero. This leads to a noncommutative generalization of the Radon–Nikodym theorem. The usual density operator of states on the matrix algebras with respect to the standard trace is nothing but the Radon–Nikodym derivative when the reference functional is chosen to be trace. Belavkin introduced the notion of complete absolute continuity of one completely positive map with respect to another (reference) map and proved an operator variant of the noncommutative Radon–Nikodym theorem for completely positive maps. A particular case of this theorem corresponding to a tracial completely positive reference map on the matrix algebras leads to the Choi operator as a Radon–Nikodym derivative of a CP map with respect to the standard trace (see Choi's Theorem). Choi's theorem It was shown by Choi that if $\Phi :B(G)\to B(H)$ is completely positive, where G and H are finite-dimensional Hilbert spaces of dimensions n and m respectively, then Φ takes the form: $\Phi (a)=\sum _{i=1}^{nm}V_{i}^{*}aV_{i}.$ This is called Choi's theorem on completely positive maps. Choi proved this using linear algebra techniques, but his result can also be viewed as a special case of Stinespring's theorem: Let (π, V, K) be a minimal Stinespring representation of Φ. By minimality, K has dimension less than that of $C^{n\times n}\otimes C^{m}$. So without loss of generality, K can be identified with $K=\bigoplus _{i=1}^{nm}C_{i}^{n}.$ Each $C_{i}^{n}$ is a copy of the n-dimensional Hilbert space. 
From $\pi (a)(b\otimes g)=ab\otimes g$, we see that the above identification of K can be arranged so $\;P_{i}\pi (a)P_{i}=a$, where Pi is the projection from K to $C_{i}^{n}$. Let $V_{i}=P_{i}V$. We have $\Phi (a)=\sum _{i=1}^{nm}(V^{*}P_{i})(P_{i}\pi (a)P_{i})(P_{i}V)=\sum _{i=1}^{nm}V_{i}^{*}aV_{i}$ and Choi's result is proved. Choi's result is a particular case of the noncommutative Radon–Nikodym theorem for completely positive (CP) maps corresponding to a tracial completely positive reference map on the matrix algebras. In strong operator form this general theorem was proven by Belavkin in 1985, who showed the existence of the positive density operator representing a CP map which is completely absolutely continuous with respect to a reference CP map. The uniqueness of this density operator in the reference Stinespring representation simply follows from the minimality of this representation. Thus, Choi's operator is the Radon–Nikodym derivative of a finite-dimensional CP map with respect to the standard trace. Notice that, in proving Choi's theorem, as well as Belavkin's theorem from Stinespring's formulation, the argument does not give the Kraus operators Vi explicitly, unless one makes the various identifications of spaces explicit. On the other hand, Choi's original proof involves direct calculation of those operators. Naimark's dilation theorem Naimark's theorem says that every B(H)-valued, weakly countably-additive measure on some compact Hausdorff space X can be "lifted" so that the measure becomes a spectral measure. It can be proved by combining the fact that C(X) is a commutative C*-algebra and Stinespring's theorem. Sz.-Nagy's dilation theorem This result states that every contraction on a Hilbert space has a unitary dilation with the minimality property. Application In quantum information theory, quantum channels, or quantum operations, are defined to be completely positive maps between C*-algebras.
Being a classification for all such maps, Stinespring's theorem is important in that context. For example, the uniqueness part of the theorem has been used to classify certain classes of quantum channels. For the comparison of different channels and the computation of their mutual fidelities and information, another representation of the channels by their "Radon–Nikodym" derivatives, introduced by Belavkin, is useful. In the finite-dimensional case, Choi's theorem as the tracial variant of Belavkin's Radon–Nikodym theorem for completely positive maps is also relevant. The operators $\{V_{i}\}$ from the expression $\Phi (a)=\sum _{i=1}^{nm}V_{i}^{*}aV_{i}$ are called the Kraus operators of Φ. The expression $\sum _{i=1}^{nm}V_{i}^{*}(\cdot )V_{i}$ is sometimes called the operator sum representation of Φ. References • M.-D. Choi, Completely Positive Linear Maps on Complex Matrices, Linear Algebra and its Applications, 10, 285–290 (1975). • V. P. Belavkin, P. Staszewski, Radon–Nikodym Theorem for Completely Positive Maps, Reports on Mathematical Physics, v. 24, No 1, 49–55 (1986). • V. Paulsen, Completely Bounded Maps and Operator Algebras, Cambridge University Press, 2003. • W. F. Stinespring, Positive Functions on C*-algebras, Proceedings of the American Mathematical Society, 6, 211–216 (1955).
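In finite dimensions, Kraus operators can be computed from the Choi matrix of a CP map by an eigendecomposition, as suggested by Choi's theorem above. The sketch below assumes NumPy is available and uses the completely depolarizing map $\Phi (a)=\operatorname {tr} (a)\,I/n$ as an illustrative example; since sign and transpose conventions for the Choi matrix vary between sources, the code verifies the operator-sum representation numerically rather than relying on a fixed convention:

```python
import numpy as np

n = 2  # input dimension; Phi maps n x n matrices to n x n matrices

def phi(a):
    # Example CP map: Phi(a) = tr(a)/n * I (completely depolarizing).
    return np.trace(a) / n * np.eye(n, dtype=complex)

# Choi matrix C = sum_ij E_ij (x) Phi(E_ij).
C = np.zeros((n * n, n * n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[i, j] = 1
        C += np.kron(E, phi(E))

# Complete positivity <=> C >= 0; eigendecomposition of C yields Kraus
# operators K_k (the article's V_i correspond to K_i^* in the form
# Phi(a) = sum_i V_i^* a V_i).
vals, vecs = np.linalg.eigh(C)
kraus = [np.sqrt(max(lam.real, 0)) * vecs[:, k].reshape(n, n).T
         for k, lam in enumerate(vals) if lam > 1e-12]

# Verify the operator-sum representation on a random test matrix.
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rebuilt = sum(K @ a @ K.conj().T for K in kraus)
assert np.allclose(rebuilt, phi(a))
```

For this map the Choi matrix is $I_{4}/2$, so the decomposition produces four Kraus operators of equal weight, consistent with the bound $nm$ on their number.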
Stirling numbers of the second kind In mathematics, particularly in combinatorics, a Stirling number of the second kind (or Stirling partition number) is the number of ways to partition a set of n objects into k non-empty subsets and is denoted by $S(n,k)$ or $\textstyle \left\{{n \atop k}\right\}$.[1] Stirling numbers of the second kind occur in the field of mathematics called combinatorics and the study of partitions. They are named after James Stirling. The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices. This article is devoted to specifics of Stirling numbers of the second kind. Identities linking the two kinds appear in the article on Stirling numbers. Definition The Stirling numbers of the second kind, written $S(n,k)$ or $\lbrace \textstyle {n \atop k}\rbrace $ or with other notations, count the number of ways to partition a set of $n$ labelled objects into $k$ nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with precisely $k$ equivalence classes that can be defined on an $n$ element set. In fact, there is a bijection between the set of partitions and the set of equivalence relations on a given set. Obviously, $\left\{{n \atop n}\right\}=1$ for n ≥ 0, and $\left\{{n \atop 1}\right\}=1$ for n ≥ 1, as the only way to partition an n-element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part. 
Unlike Stirling numbers of the first kind, they can be calculated using a one-sum formula:[2] $\left\{{n \atop k}\right\}={\frac {1}{k!}}\sum _{i=0}^{k}(-1)^{k-i}{\binom {k}{i}}i^{n}=\sum _{i=0}^{k}{\frac {(-1)^{k-i}i^{n}}{(k-i)!i!}}.$ The Stirling numbers of the second kind may also be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials[3] $(x)_{n}=x(x-1)(x-2)\cdots (x-n+1).$ (In particular, (x)0 = 1 because it is an empty product.) In other words $\sum _{k=0}^{n}\left\{{n \atop k}\right\}(x)_{k}=x^{n}.$ Notation Various notations have been used for Stirling numbers of the second kind. The brace notation $ \textstyle \lbrace {n \atop k}\rbrace $ was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers.[4][5] This led Knuth to use it, as shown here, in the first volume of The Art of Computer Programming (1968).[6][7] According to the third edition of The Art of Computer Programming, this notation was also used earlier by Jovan Karamata in 1935.[8][9] The notation S(n, k) was used by Richard Stanley in his book Enumerative Combinatorics and also, much earlier, by many other writers.[6] The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources. Relation to Bell numbers Main article: Bell number Since the Stirling number $\left\{{n \atop k}\right\}$ counts set partitions of an n-element set into k parts, the sum $B_{n}=\sum _{k=0}^{n}\left\{{n \atop k}\right\}$ over all values of k is the total number of partitions of a set with n members. This number is known as the nth Bell number. 
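The one-sum formula translates directly into code, and summing a row of these values reproduces the Bell numbers just described (an illustrative sketch; the function names are ours):

```python
from math import comb, factorial

def stirling2(n, k):
    """S(n, k) via the explicit alternating sum (exact integer arithmetic)."""
    return sum((-1) ** (k - i) * comb(k, i) * i ** n for i in range(k + 1)) // factorial(k)

def bell(n):
    """n-th Bell number as the row sum of Stirling numbers of the second kind."""
    return sum(stirling2(n, k) for k in range(n + 1))
```

The integer division by k! is exact, since the alternating sum counts k! times the number of partitions (equivalently, the number of surjections onto a k-set).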
Analogously, the ordered Bell numbers can be computed from the Stirling numbers of the second kind via $a_{n}=\sum _{k=0}^{n}k!\left\{{n \atop k}\right\}.$[10] Table of values Below is a triangular array of values for the Stirling numbers of the second kind (sequence A008277 in the OEIS):

n \ k    0    1     2     3      4      5      6     7    8   9  10
  0      1
  1      0    1
  2      0    1     1
  3      0    1     3     1
  4      0    1     7     6      1
  5      0    1    15    25     10      1
  6      0    1    31    90     65     15      1
  7      0    1    63   301    350    140     21     1
  8      0    1   127   966   1701   1050    266    28    1
  9      0    1   255  3025   7770   6951   2646   462   36   1
 10      0    1   511  9330  34105  42525  22827  5880  750  45   1

As with the binomial coefficients, this table could be extended to k > n, but the entries would all be 0. Properties Recurrence relation Stirling numbers of the second kind obey the recurrence relation $\left\{{n+1 \atop k}\right\}=k\left\{{n \atop k}\right\}+\left\{{n \atop k-1}\right\}\quad {\mbox{for}}\;0<k<n$ with initial conditions $\left\{{n \atop n}\right\}=1\quad {\mbox{ for}}\;n\geq 0\quad {\text{ and }}\quad \left\{{n \atop 0}\right\}=\left\{{0 \atop n}\right\}=0\quad {\text{ for }}n>0{\text{.}}$ For instance, the number 25 in column k = 3 and row n = 5 is given by 25 = 7 + (3×6), where 7 is the number above and to the left of 25, 6 is the number above 25 and 3 is the column containing the 6. To prove this recurrence, observe that a partition of the $n+1$ objects into k nonempty subsets either contains the $(n+1)$-th object as a singleton or it does not. The number of ways that the singleton is one of the subsets is given by $\left\{{n \atop k-1}\right\}$ since we must partition the remaining n objects into the available $k-1$ subsets. In the other case the $(n+1)$-th object belongs to a subset containing other objects. The number of ways is given by $k\left\{{n \atop k}\right\}$ since we partition all objects other than the $(n+1)$-th into k subsets, and then we are left with k choices for inserting object $n+1$. Summing these two values gives the desired result.
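The recurrence and its initial conditions suffice to build the whole table row by row, as in this sketch (our own naming):

```python
def stirling2_triangle(nmax):
    """Rows 0..nmax of Stirling numbers of the second kind, built from
    the recurrence S(n+1, k) = k*S(n, k) + S(n, k-1)."""
    rows = [[1]]  # S(0, 0) = 1
    for n in range(nmax):
        prev = rows[-1]
        row = [0]  # S(n+1, 0) = 0 whenever n+1 > 0
        for k in range(1, n + 2):
            left = prev[k - 1]                 # S(n, k-1)
            above = prev[k] if k <= n else 0   # S(n, k)
            row.append(k * above + left)
        rows.append(row)
    return rows
```

Running it reproduces the worked example above: entry (n, k) = (5, 3) equals 3 times the entry above it plus the entry above-left, 25 = 3×6 + 7.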
Another recurrence relation is given by $\left\lbrace {\begin{matrix}n\\k\end{matrix}}\right\rbrace ={\frac {k^{n}}{k!}}-\sum _{r=1}^{k-1}{\frac {\left\lbrace {\begin{matrix}n\\r\end{matrix}}\right\rbrace }{(k-r)!}}.$ Simple identities Some simple identities include $\left\{{n \atop n-1}\right\}={\binom {n}{2}}.$ This is because dividing n elements into n − 1 sets necessarily means dividing them into one set of size 2 and n − 2 sets of size 1. Therefore we need only pick those two elements; and $\left\{{n \atop 2}\right\}=2^{n-1}-1.$ To see this, first note that there are $2^{n}$ ordered pairs of complementary subsets A and B. In one case, A is empty, and in another B is empty, so $2^{n}-2$ ordered pairs of subsets remain. Finally, since we want unordered pairs rather than ordered pairs we divide this last number by 2, giving the result above. Another explicit expansion of the recurrence relation gives identities in the spirit of the above example. Identities The table in section 6.1 of Concrete Mathematics provides a plethora of generalized forms of finite sums involving the Stirling numbers. Several particular finite sums relevant to this article include:
${\begin{aligned}\left\{{n+1 \atop k+1}\right\}&=\sum _{j=k}^{n}{n \choose j}\left\{{j \atop k}\right\}\\\left\{{n+1 \atop k+1}\right\}&=\sum _{j=k}^{n}(k+1)^{n-j}\left\{{j \atop k}\right\}\\\left\{{n+k+1 \atop k}\right\}&=\sum _{j=0}^{k}j\left\{{n+j \atop j}\right\}\\\left\{{n \atop \ell +m}\right\}{\binom {\ell +m}{\ell }}&=\sum _{k}\left\{{k \atop \ell }\right\}\left\{{n-k \atop m}\right\}{\binom {n}{k}}\end{aligned}}$ Explicit formula The Stirling numbers of the second kind are given by the explicit formula: $\left\{{n \atop k}\right\}={\frac {1}{k!}}\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}j^{n}=\sum _{j=0}^{k}{\frac {(-1)^{k-j}j^{n}}{(k-j)!j!}}.$ This can be derived by using inclusion-exclusion to count the number of surjections from n to k and using the fact that the number of such surjections is $ k!\left\{{n \atop k}\right\}$. Additionally, this formula is a special case of the kth forward difference of the monomial $x^{n}$ evaluated at x = 0: $\Delta ^{k}x^{n}=\sum _{j=0}^{k}(-1)^{k-j}{k \choose j}(x+j)^{n}.$ Because the Bernoulli polynomials may be written in terms of these forward differences, one immediately obtains a relation in the Bernoulli numbers: $B_{m}(0)=\sum _{k=0}^{m}{\frac {(-1)^{k}k!}{k+1}}\left\{{m \atop k}\right\}.$ Another explicit formula given in the NIST Handbook of Mathematical Functions is $\left\{{n \atop k}\right\}=\sum _{\begin{array}{c}c_{1}+\ldots +c_{k}=n-k\\c_{1},\ldots ,\ c_{k}\ \geq \ 0\end{array}}1^{c_{1}}2^{c_{2}}\cdots k^{c_{k}}$ Parity The parity of a Stirling number of the second kind is equal to the parity of a related binomial coefficient: $\left\{{n \atop k}\right\}\equiv {\binom {z}{w}}\ {\pmod {2}},$ where $z=n-\left\lceil \displaystyle {\frac {k+1}{2}}\right\rceil ,\ w=\left\lfloor \displaystyle {\frac {k-1}{2}}\right\rfloor .$ This relation is specified by mapping n and k coordinates onto the Sierpiński triangle. 
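The binomial-coefficient congruence can be verified directly for small n and k against the explicit summation formula (a self-contained sketch; `stirling2` is our helper, not library code):

```python
from math import comb, factorial

def stirling2(n, k):
    """S(n, k) via the explicit alternating sum."""
    return sum((-1) ** (k - i) * comb(k, i) * i ** n for i in range(k + 1)) // factorial(k)

# S(n, k) and C(z, w) have the same parity, where
# z = n - ceil((k + 1) / 2) and w = floor((k - 1) / 2):
for n in range(1, 16):
    for k in range(1, n + 1):
        z = n - (k + 2) // 2   # integer form of n - ceil((k + 1) / 2)
        w = (k - 1) // 2
        assert stirling2(n, k) % 2 == comb(z, w) % 2
```

For example, with (n, k) = (9, 5) we get z = 6 and w = 2, and indeed both S(9, 5) = 6951 and C(6, 2) = 15 are odd.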
More directly, let two sets contain positions of 1's in binary representations of results of respective expressions: ${\begin{aligned}\mathbb {A} :\ \sum _{i\in \mathbb {A} }2^{i}&=n-k,\\\mathbb {B} :\ \sum _{j\in \mathbb {B} }2^{j}&=\left\lfloor {\dfrac {k-1}{2}}\right\rfloor .\end{aligned}}$ One can mimic a bitwise AND operation by intersecting these two sets: ${\begin{Bmatrix}n\\k\end{Bmatrix}}\,{\bmod {\,}}2={\begin{cases}0,&\mathbb {A} \cap \mathbb {B} \neq \emptyset ;\\1,&\mathbb {A} \cap \mathbb {B} =\emptyset ;\end{cases}}$ to obtain the parity of a Stirling number of the second kind in O(1) time. In pseudocode: ${\begin{Bmatrix}n\\k\end{Bmatrix}}\,{\bmod {\,}}2:=\left[\left(\left(n-k\right)\ \And \ \left(\left(k-1\right)\,\mathrm {div} \,2\right)\right)=0\right];$ where $\left[b\right]$ is the Iverson bracket. The parity of a central Stirling number of the second kind $\textstyle \left\{{2n \atop n}\right\}$ is odd if and only if $n$ is a fibbinary number, a number whose binary representation has no two consecutive 1s.[11] Generating functions For a fixed integer n, the ordinary generating function for Stirling numbers of the second kind $\left\{{n \atop 0}\right\},\left\{{n \atop 1}\right\},\ldots $ is given by $\sum _{k=0}^{n}\left\{{n \atop k}\right\}x^{k}=T_{n}(x),$ where $T_{n}(x)$ are Touchard polynomials. If one sums the Stirling numbers against the falling factorial instead, one can show the following identities, among others: $\sum _{k=0}^{n}\left\{{n \atop k}\right\}(x)_{k}=x^{n}$ and $\sum _{k=1}^{n+1}\left\{{n+1 \atop k}\right\}(x-1)_{k-1}=x^{n},$ which has the special case $\sum _{k=0}^{n}\left\{{n \atop k}\right\}(n)_{k}=n^{n}$.
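The falling-factorial identity above can be spot-checked at small integer points (an illustrative sketch; the helper names are ours):

```python
from math import comb, factorial

def stirling2(n, k):
    """S(n, k) via the explicit alternating sum."""
    return sum((-1) ** (k - i) * comb(k, i) * i ** n for i in range(k + 1)) // factorial(k)

def falling(x, k):
    """Falling factorial (x)_k = x (x - 1) ... (x - k + 1)."""
    result = 1
    for j in range(k):
        result *= x - j
    return result

# x^n = sum_k S(n, k) (x)_k, checked here at small integer points:
for n in range(7):
    for x in range(7):
        assert sum(stirling2(n, k) * falling(x, k) for k in range(n + 1)) == x ** n
```

Since both sides are polynomials in x of degree n, agreement at n + 1 integer points already forces the identity to hold for all x.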
For a fixed integer k, the Stirling numbers of the second kind have rational ordinary generating function $\sum _{n=k}^{\infty }\left\{{n \atop k}\right\}x^{n-k}=\prod _{r=1}^{k}{\frac {1}{1-rx}}={\frac {1}{x^{k+1}(1/x)_{k+1}}}$ and have an exponential generating function given by $\sum _{n=k}^{\infty }\left\{{n \atop k}\right\}{\frac {x^{n}}{n!}}={\frac {(e^{x}-1)^{k}}{k!}}.$ A mixed bivariate generating function for the Stirling numbers of the second kind is $\sum _{0\leq k\leq n}\left\{{n \atop k}\right\}{\frac {x^{n}}{n!}}y^{k}=e^{y(e^{x}-1)}.$ Lower and upper bounds If $n\geq 2$ and $1\leq k\leq n-1$, then ${\frac {1}{2}}(k^{2}+k+2)k^{n-k-1}-1\leq \left\{{n \atop k}\right\}\leq {\frac {1}{2}}{n \choose k}k^{n-k}$ [12] Asymptotic approximation For a fixed value of $k$, the asymptotic value of the Stirling numbers of the second kind as $n\rightarrow \infty $ is given by $\left\{{n \atop k}\right\}{\underset {n\to \infty }{\sim }}{\frac {k^{n}}{k!}}.$ If $k=o({\sqrt {n}})$ (where o denotes the little o notation) then $\left\{{n+k \atop n}\right\}{\underset {n\to \infty }{\sim }}{\frac {n^{2k}}{2^{k}k!}}.$[13] Maximum for fixed n For fixed $n$, $\left\{{n \atop k}\right\}$ has a single maximum, which is attained for at most two consecutive values of k.
That is, there is an integer $k_{n}$ such that $\left\{{n \atop 1}\right\}<\left\{{n \atop 2}\right\}<\cdots <\left\{{n \atop k_{n}}\right\}\geq \left\{{n \atop k_{n}+1}\right\}>\cdots >\left\{{n \atop n}\right\}.$ Looking at the table of values above, the first few values for $k_{n}$ are $0,1,1,2,2,3,3,4,4,4,5,\ldots $ When $n$ is large $k_{n}{\underset {n\to \infty }{\sim }}{\frac {n}{\log n}},$ and the maximum value of the Stirling number can be approximated with $\log \left\{{n \atop k_{n}}\right\}=n\log n-n\log \log n-n+O(n\log \log n/\log n).$ [12] Applications Moments of the Poisson distribution If X is a random variable with a Poisson distribution with expected value λ, then its n-th moment is $E(X^{n})=\sum _{k=0}^{n}\left\{{n \atop k}\right\}\lambda ^{k}.$ In particular, the nth moment of the Poisson distribution with expected value 1 is precisely the number of partitions of a set of size n, i.e., it is the nth Bell number (this fact is Dobiński's formula). Moments of fixed points of random permutations Let the random variable X be the number of fixed points of a uniformly distributed random permutation of a finite set of size m. Then the nth moment of X is $E(X^{n})=\sum _{k=0}^{m}\left\{{n \atop k}\right\}.$ Note: The upper bound of summation is m, not n. In other words, the nth moment of this probability distribution is the number of partitions of a set of size n into no more than m parts. This is proved in the article on random permutation statistics, although the notation is a bit different. Rhyming schemes The Stirling numbers of the second kind can represent the total number of rhyme schemes for a poem of n lines. $S(n,k)$ gives the number of possible rhyming schemes for n lines using k unique rhyming syllables. As an example, for a poem of 3 lines, there is 1 rhyme scheme using just one rhyme (aaa), 3 rhyme schemes using two rhymes (aab, aba, abb), and 1 rhyme scheme using three rhymes (abc). 
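The rhyme-scheme interpretation can be made concrete by generating the schemes themselves, line by line: each new line either reuses one of the rhymes heard so far or introduces a new one. This is a sketch under our own naming:

```python
def rhyme_schemes(n):
    """All rhyme schemes for an n-line poem as restricted growth strings,
    e.g. 'aab' means lines 1 and 2 rhyme with each other, line 3 does not."""
    schemes = ['']
    for _ in range(n):
        new = []
        for s in schemes:
            used = (max(map(ord, s)) - ord('a') + 1) if s else 0
            # the next line may reuse any rhyme heard so far, or add a new one
            for c in range(used + 1):
                new.append(s + chr(ord('a') + c))
        schemes = new
    return schemes
```

For n = 3 this yields the five schemes listed above: aaa, aab, aba, abb, abc; the total count for n lines is the nth Bell number.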
Variants r-Stirling numbers of the second kind The r-Stirling number of the second kind $\left\{{n \atop k}\right\}_{r}$ counts the number of partitions of a set of n objects into k non-empty disjoint subsets, such that the first r elements are in distinct subsets.[14] These numbers satisfy the recurrence relation $\left\{{n \atop k}\right\}_{r}=k\left\{{n-1 \atop k}\right\}_{r}+\left\{{n-1 \atop k-1}\right\}_{r}$ Some combinatorial identities and a connection between these numbers and context-free grammars can be found in Triana (2022).[15] Associated Stirling numbers of the second kind An r-associated Stirling number of the second kind is the number of ways to partition a set of n objects into k subsets, with each subset containing at least r elements.[16] It is denoted by $S_{r}(n,k)$ and obeys the recurrence relation $S_{r}(n+1,k)=k\ S_{r}(n,k)+{\binom {n}{r-1}}S_{r}(n-r+1,k-1)$ The 2-associated numbers (sequence A008299 in the OEIS) appear elsewhere as "Ward numbers" and as the magnitudes of the coefficients of Mahler polynomials. Reduced Stirling numbers of the second kind Denote the n objects to partition by the integers 1, 2, ..., n. Define the reduced Stirling numbers of the second kind, denoted $S^{d}(n,k)$, to be the number of ways to partition the integers 1, 2, ..., n into k nonempty subsets such that all elements in each subset have pairwise distance at least d. That is, for any integers i and j in a given subset, it is required that $|i-j|\geq d$. It has been shown that these numbers satisfy $S^{d}(n,k)=S(n-d+1,k-d+1),\ n\geq k\geq d$ (hence the name "reduced").[17] Observe (both by definition and by the reduction formula) that $S^{1}(n,k)=S(n,k)$, the familiar Stirling numbers of the second kind. See also • Stirling number • Stirling numbers of the first kind • Bell number – the number of partitions of a set with n members • Stirling polynomials • Twelvefold way • Learning materials related to Partition related number triangles at Wikiversity References 1.
Ronald L. Graham, Donald E. Knuth, Oren Patashnik (1988) Concrete Mathematics, Addison–Wesley, Reading MA. ISBN 0-201-14236-8, p. 244. 2. "Stirling Numbers of the Second Kind, Theorem 3.4.1". 3. Confusingly, the notation that combinatorialists use for falling factorials coincides with the notation used in special functions for rising factorials; see Pochhammer symbol. 4. Transformation of Series by a Variant of Stirling's Numbers, Imanuel Marx, The American Mathematical Monthly 69, #6 (June–July 1962), pp. 530–532, JSTOR 2311194. 5. Antonio Salmeri, Introduzione alla teoria dei coefficienti fattoriali, Giornale di Matematiche di Battaglini 90 (1962), pp. 44–54. 6. Knuth, D.E. (1992), "Two notes on notation", Amer. Math. Monthly, 99 (5): 403–422, arXiv:math/9205211, Bibcode:1992math......5211K, doi:10.2307/2325085, JSTOR 2325085, S2CID 119584305 7. Donald E. Knuth, Fundamental Algorithms, Reading, Mass.: Addison–Wesley, 1968. 8. p. 66, Donald E. Knuth, Fundamental Algorithms, 3rd ed., Reading, Mass.: Addison–Wesley, 1997. 9. Jovan Karamata, Théorèmes sur la sommabilité exponentielle et d'autres sommabilités s'y rattachant, Mathematica (Cluj) 9 (1935), pp. 164–178. 10. Sprugnoli, Renzo (1994), "Riordan arrays and combinatorial sums" (PDF), Discrete Mathematics, 132 (1–3): 267–290, doi:10.1016/0012-365X(92)00570-H, MR 1297386 11. Chan, O-Yeat; Manna, Dante (2010), "Congruences for Stirling numbers of the second kind" (PDF), Gems in Experimental Mathematics, Contemporary Mathematics, vol. 517, Providence, Rhode Island: American Mathematical Society, pp. 97–111, doi:10.1090/conm/517/10135, MR 2731094 12. Rennie, B.C.; Dobson, A.J. (1969). "On stirling numbers of the second kind". Journal of Combinatorial Theory. 7 (2): 116–121. doi:10.1016/S0021-9800(69)80045-1. ISSN 0021-9800. 13. L. C. Hsu, Note on an Asymptotic Expansion of the nth Difference of Zero, AMS Vol.19 NO.2 1948, pp. 273–277 14. Broder, A. (1984). The r-Stirling numbers.
Discrete Mathematics 49, 241–259. 15. Triana, J. (2022). r-Stirling numbers of the second kind through context-free grammars. Journal of Automata, Languages and Combinatorics 27(4), 323–333. 16. L. Comtet, Advanced Combinatorics, Reidel, 1974, p. 222. 17. A. Mohr and T.D. Porter, Applications of Chromatic Polynomials Involving Stirling Numbers, Journal of Combinatorial Mathematics and Combinatorial Computing 70 (2009), 57–64. • Boyadzhiev, Khristo (2012). "Close encounters with the Stirling numbers of the second kind". Mathematics Magazine. 85 (4): 252–266. arXiv:1806.09468. doi:10.4169/math.mag.85.4.252. S2CID 115176876. • "Stirling numbers of the second kind". PlanetMath. • Weisstein, Eric W. "Stirling Number of the Second Kind". MathWorld. • Calculator for Stirling Numbers of the Second Kind • Set Partitions: Stirling Numbers • Jack van der Elsen (2005). Black and white transformations. Maastricht. ISBN 90-423-0263-1.
Stirling polynomials In mathematics, the Stirling polynomials are a family of polynomials that generalize important sequences of numbers appearing in combinatorics and analysis, which are closely related to the Stirling numbers, the Bernoulli numbers, and the generalized Bernoulli polynomials. There are multiple variants of the Stirling polynomial sequence considered below, most notably the Sheffer sequence form, $S_{k}(x)$, defined through the special form of its exponential generating function, and the Stirling (convolution) polynomials, $\sigma _{n}(x)$, which satisfy a characteristic ordinary generating function and are of use in generalizing the Stirling numbers (of both kinds) to arbitrary complex-valued inputs. The "convolution polynomial" variant and its properties are considered second, in the last subsection of the article. Still other variants of the Stirling polynomials are studied in the supplementary links to the articles given in the references. Definition and examples For nonnegative integers k, the Stirling polynomials, Sk(x), are a Sheffer sequence for $(g(t),{\bar {f}}(t)):=\left(e^{-t},\log \left({\frac {t}{1-e^{-t}}}\right)\right)$ [1] defined by the exponential generating function $\left({t \over {1-e^{-t}}}\right)^{x+1}=\sum _{k=0}^{\infty }S_{k}(x){t^{k} \over k!}.$ The Stirling polynomials are a special case of the Nørlund polynomials (or generalized Bernoulli polynomials) [2] each with exponential generating function $\left({t \over {e^{t}-1}}\right)^{a}e^{zt}=\sum _{k=0}^{\infty }B_{k}^{(a)}(z){t^{k} \over k!},$ given by the relation $S_{k}(x)=B_{k}^{(x+1)}(x+1)$.
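Because the exponential generating function is explicit, the polynomials can be evaluated with exact rational series arithmetic. At an integer argument such as x = 2 the generating function is simply the cube of the power series t/(1 − e^{−t}), so k! times its k-th coefficient gives S_k(2), e.g. S_1(2) = 3/2 and S_3(2) = 9/4. The helpers below, including the truncation order N, are our own illustrative sketch:

```python
from fractions import Fraction
from math import factorial

N = 8  # number of series coefficients to keep

def series_mul(a, b):
    """Product of two truncated power series (lists of N Fractions)."""
    return [sum(a[i] * b[m - i] for i in range(m + 1)) for m in range(N)]

def series_inv(a):
    """Reciprocal of a truncated power series with a[0] == 1."""
    inv = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for m in range(1, N):
        inv[m] = -sum(a[i] * inv[m - i] for i in range(1, m + 1))
    return inv

# (1 - e^{-t}) / t = sum_m (-1)^m t^m / (m + 1)!
base = [Fraction((-1) ** m, factorial(m + 1)) for m in range(N)]
g = series_inv(base)                   # series of t / (1 - e^{-t})
g3 = series_mul(series_mul(g, g), g)   # (t / (1 - e^{-t}))^{x+1} at x = 2
S_at_2 = [factorial(k) * g3[k] for k in range(N)]  # S_k(2) = k! [t^k] g^3
```

These values agree with evaluating the tabulated polynomials at x = 2, for example S_2(2) = (3·4 + 5·2 + 2)/12 = 2.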
The first 10 Stirling polynomials are given in the following table:

 k   Sk(x)
 0   $1$
 1   ${\scriptstyle {\frac {1}{2}}}(x+1)$
 2   ${\scriptstyle {\frac {1}{12}}}(3x^{2}+5x+2)$
 3   ${\scriptstyle {\frac {1}{8}}}(x^{3}+2x^{2}+x)$
 4   ${\scriptstyle {\frac {1}{240}}}(15x^{4}+30x^{3}+5x^{2}-18x-8)$
 5   ${\scriptstyle {\frac {1}{96}}}(3x^{5}+5x^{4}-5x^{3}-13x^{2}-6x)$
 6   ${\scriptstyle {\frac {1}{4032}}}(63x^{6}+63x^{5}-315x^{4}-539x^{3}-84x^{2}+236x+96)$
 7   ${\scriptstyle {\frac {1}{1152}}}(9x^{7}-84x^{5}-98x^{4}+91x^{3}+194x^{2}+80x)$
 8   ${\scriptstyle {\frac {1}{34560}}}(135x^{8}-180x^{7}-1890x^{6}-840x^{5}+6055x^{4}+8140x^{3}+884x^{2}-3088x-1152)$
 9   ${\scriptstyle {\frac {1}{7680}}}(15x^{9}-45x^{8}-270x^{7}+182x^{6}+1687x^{5}+1395x^{4}-1576x^{3}-2684x^{2}-1008x)$

Yet another variant of the Stirling polynomials is considered in [3] (see also the subsection on Stirling convolution polynomials below). In particular, the article by I. Gessel and R. P. Stanley defines the modified Stirling polynomial sequences, $f_{k}(n):=S(n+k,n)$ and $g_{k}(n):=c(n,n-k)$ where $c(n,k):=(-1)^{n-k}s(n,k)$ are the unsigned Stirling numbers of the first kind, in terms of the two Stirling number triangles for non-negative integers $n\geq 1,\ k\geq 0$. For fixed $k\geq 0$, both $f_{k}(n)$ and $g_{k}(n)$ are polynomials of the input $n\in \mathbb {Z} ^{+}$ each of degree $2k$ and with leading coefficient given by the double factorial term $(1\cdot 3\cdot 5\cdots (2k-1))/(2k)!$. Properties Below $B_{k}(x)$ denote the Bernoulli polynomials and $B_{k}=B_{k}(0)$ the Bernoulli numbers under the convention $B_{1}=B_{1}(0)=-{\tfrac {1}{2}};$ $s_{m,n}$ denotes a Stirling number of the first kind; and $S_{m,n}$ denotes Stirling numbers of the second kind.
• Special values: ${\begin{aligned}S_{k}(-m)&={\frac {(-1)^{k}}{k+m-1 \choose k}}S_{k+m-1,m-1}&&0<m\in \mathbb {Z} \\[6pt]S_{k}(-1)&=\delta _{k,0}\\[6pt]S_{k}(0)&=(-1)^{k}B_{k}\\[6pt]S_{k}(1)&=(-1)^{k+1}((k-1)B_{k}+kB_{k-1})\\[6pt]S_{k}(2)&={\tfrac {(-1)^{k}}{2}}((k-1)(k-2)B_{k}+3k(k-2)B_{k-1}+2k(k-1)B_{k-2})\\[6pt]S_{k}(k)&=k!\\[6pt]\end{aligned}}$ • If $m\in \mathbb {N} $ and $m<k$ then: $S_{k}(m)={(m+1)}{\binom {k}{m+1}}\sum _{j=0}^{m}(-1)^{m-j}s_{m+1,m+1-j}{\frac {B_{k-j}}{k-j}}.$ • If $m\in \mathbb {N} $ and $m\geq k$ then:[4] $S_{k}(m)=(-1)^{k}B_{k}^{(m+1)}(0),$ and: $S_{k}(m)={(-1)^{k} \over {m \choose k}}s_{m+1,m+1-k}.$ • The sequence $S_{k}(x-1)$ is of binomial type, since $S_{k}(x+y-1)=\sum _{i=0}^{k}{k \choose i}S_{i}(x-1)S_{k-i}(y-1).$ Moreover, this basic recursion holds: $S_{k}(x)=(x-k){S_{k}(x-1) \over x}+kS_{k-1}(x+1).$ • Explicit representations involving Stirling numbers can be deduced with Lagrange's interpolation formula: ${\begin{aligned}S_{k}(x)&=\sum _{n=0}^{k}(-1)^{k-n}S_{k+n,n}{{x+n \choose n}{x+k+1 \choose k-n} \over {k+n \choose n}}\\[6pt]&=\sum _{n=0}^{k}(-1)^{n}s_{k+n+1,n+1}{{x-k \choose n}{x-k-n-1 \choose k-n} \over {k+n \choose k}}\\[6pt]&=k!\sum _{j=0}^{k}(-1)^{k-j}\sum _{m=j}^{k}{x+m \choose m}{m \choose j}L_{k+m}^{(-k-j)}(-j)\\[6pt]\end{aligned}}$ Here, $L_{n}^{(\alpha )}$ are Laguerre polynomials. • The following relations hold as well: ${k+m \choose k}S_{k}(x-m)=\sum _{i=0}^{k}(-1)^{k-i}{k+m \choose i}S_{k-i+m,m}\cdot S_{i}(x),$ ${k-m \choose k}S_{k}(x+m)=\sum _{i=0}^{k}{k-m \choose i}s_{m,m-k+i}\cdot S_{i}(x).$ • By differentiating the generating function it readily follows that $S_{k}^{\prime }(x)=-\sum _{j=0}^{k-1}{k \choose j}S_{j}(x){\frac {B_{k-j}}{k-j}}.$ Stirling convolution polynomials Definition and examples Another variant of the Stirling polynomial sequence corresponds to a special case of the convolution polynomials studied by Knuth's article [5] and in the Concrete Mathematics reference. 
We first define these polynomials through the Stirling numbers of the first kind as $\sigma _{n}(x)=\left[{\begin{matrix}x\\x-n\end{matrix}}\right]\cdot {\frac {1}{x(x-1)\cdots (x-n)}}.$ It follows that these polynomials satisfy the next recurrence relation given by $(x+1)\sigma _{n}(x+1)=(x-n)\sigma _{n}(x)+x\sigma _{n-1}(x),\ n\geq 1.$ These Stirling "convolution" polynomials may be used to define the Stirling numbers, $\scriptstyle {\left[{\begin{matrix}x\\x-n\end{matrix}}\right]}$ and $\scriptstyle {\left\{{\begin{matrix}x\\x-n\end{matrix}}\right\}}$, for integers $n\geq 0$ and arbitrary complex values of $x$. The next table provides several special cases of these Stirling polynomials for the first few $n\geq 0$:

 n   σn(x)
 0   ${\frac {1}{x}}$
 1   ${\frac {1}{2}}$
 2   ${\frac {1}{24}}(3x-1)$
 3   ${\frac {1}{48}}(x^{2}-x)$
 4   ${\frac {1}{5760}}(15x^{3}-30x^{2}+5x+2)$
 5   ${\frac {1}{11520}}(3x^{4}-10x^{3}+5x^{2}+2x)$
 6   ${\frac {1}{2903040}}(63x^{5}-315x^{4}+315x^{3}+91x^{2}-42x-16)$
 7   ${\frac {1}{5806080}}(9x^{6}-63x^{5}+105x^{4}+7x^{3}-42x^{2}-16x)$
 8   ${\frac {1}{1393459200}}(135x^{7}-1260x^{6}+3150x^{5}-840x^{4}-2345x^{3}-540x^{2}+404x+144)$
 9   ${\frac {1}{2786918400}}(15x^{8}-180x^{7}+630x^{6}-448x^{5}-665x^{4}+100x^{3}+404x^{2}+144x)$
10   ${\frac {1}{367873228800}}(99x^{9}-1485x^{8}+6930x^{7}-8778x^{6}-8085x^{5}+8195x^{4}+11792x^{3}+2068x^{2}-2288x-768)$

Generating functions This variant of the Stirling polynomial sequence has particularly nice ordinary generating functions of the following forms: ${\begin{aligned}\left({\frac {ze^{z}}{e^{z}-1}}\right)^{x}&=\sum _{n\geq 0}x\sigma _{n}(x)z^{n}\\\left({\frac {1}{z}}\ln {\frac {1}{1-z}}\right)^{x}&=\sum _{n\geq 0}x\sigma _{n}(x+n)z^{n}.\end{aligned}}$ More generally, if ${\mathcal {S}}_{t}(z)$ is a power series that satisfies $\ln \left(1-z{\mathcal {S}}_{t}(z)^{t-1}\right)=-z{\mathcal {S}}_{t}(z)^{t}$, we have that ${\mathcal {S}}_{t}(z)^{x}=\sum _{n\geq 0}x\sigma _{n}(x+tn)z^{n}.$ We also have the related series identity [6] $\sum _{n\geq 0}(-1)^{n-1}\sigma _{n}(n-1)z^{n}={\frac {z}{\ln(1+z)}}=1+{\frac {z}{2}}-{\frac {z^{2}}{12}}+\cdots ,$ and the Stirling (Sheffer) polynomial related generating functions given by $\sum _{n\geq 0}(-1)^{n+1}m\cdot \sigma _{n}(n-m)z^{n}=\left({\frac {z}{\ln(1+z)}}\right)^{m}$ $\sum _{n\geq 0}(-1)^{n+1}m\cdot \sigma _{n}(m)z^{n}=\left({\frac {z}{1-e^{-z}}}\right)^{m}.$ Properties and relations For integers $0\leq k\leq n$ and $r,s\in \mathbb {C} $, these polynomials satisfy the two Stirling convolution formulas given by $(r+s)\sigma _{n}(r+s+tn)=rs\sum _{k=0}^{n}\sigma _{k}(r+tk)\sigma _{n-k}(s+t(n-k))$ and $n\sigma _{n}(r+s+tn)=s\sum _{k=0}^{n}k\sigma _{k}(r+tk)\sigma _{n-k}(s+t(n-k)).$ When $n,m\in \mathbb {N} $, we also have that the polynomials, $\sigma _{n}(m)$, are defined through their relations to the Stirling numbers ${\begin{aligned}\left\{{\begin{matrix}n\\m\end{matrix}}\right\}&=(-1)^{n-m+1}{\frac {n!}{(m-1)!}}\sigma _{n-m}(-m)\ ({\text{when }}m<0)\\\left[{\begin{matrix}n\\m\end{matrix}}\right]&={\frac {n!}{(m-1)!}}\sigma _{n-m}(n)\ ({\text{when }}m>n),\end{aligned}}$ and their relations to the Bernoulli numbers given by ${\begin{aligned}\sigma _{n}(m)&={\frac {(-1)^{m+n-1}}{m!(n-m)!}}\sum _{0\leq k<m}\left[{\begin{matrix}m\\m-k\end{matrix}}\right]{\frac {B_{n-k}}{n-k}},\ n\geq m>0\\\sigma _{n}(m)&=-{\frac {B_{n}}{n\cdot n!}},\ m=0.\end{aligned}}$ See also • Bernoulli polynomials • Bernoulli polynomials of the second kind • Sheffer and Appell sequences • Difference polynomials • Special polynomial generating functions • Gregory coefficients References 1. See section 4.8.8 of The Umbral Calculus (1984) reference linked below. 2. See Norlund polynomials on MathWorld. 3. Gessel & Stanley (1978). "Stirling polynomials". J. Combin. Theory Ser. A. 53: 24–33. doi:10.1016/0097-3165(78)90042-0. 4. Section 4.4.8 of The Umbral Calculus. 5. Knuth, D. E. (1992). "Convolution Polynomials". Mathematica J. 2: 67–78. arXiv:math/9207221. Bibcode:1992math......7221K.
The article contains definitions and properties of special convolution polynomial families defined by special generating functions of the form $F(z)^{x}$ for $F(0)=1$. Special cases of these convolution polynomial sequences include the binomial power series, ${\mathcal {B}}_{t}(z)=1+z{\mathcal {B}}_{t}(z)^{t}$, so-termed tree polynomials, the Bell numbers, $B(n)$, and the Laguerre polynomials. For $F_{n}(x):=[z^{n}]F(z)^{x}$, the polynomials $n!\cdot F_{n}(x)$ are said to be of binomial type, and moreover, satisfy the generating function relation ${\frac {xF_{n}(x+tn)}{(x+tn)}}=[z^{n}]{\mathcal {F}}_{t}(z)^{x}$ for all $t\in \mathbb {C} $, where ${\mathcal {F}}_{t}(z)$ is implicitly defined by a functional equation of the form ${\mathcal {F}}_{t}(z)=F\left(z{\mathcal {F}}_{t}(z)^{t}\right)$. The article also discusses asymptotic approximations and methods applied to polynomial sequences of this type. 6. Section 7.4 of Concrete Mathematics. • Erdélyi, A.; Magnus, W.; Oberhettinger, F. & Tricomi, F. G. Higher Transcendental Functions. Volume III. New York. • Graham; Knuth & Patashnik (1994). Concrete Mathematics: A Foundation for Computer Science. • S. Roman (1984). The Umbral Calculus. External links • Weisstein, Eric W. "Stirling Polynomial". MathWorld. • Weisstein, Eric W. "Nørlund Polynomial". MathWorld. • This article incorporates material from Stirling polynomial on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Stirling transform In combinatorial mathematics, the Stirling transform of a sequence { an : n = 1, 2, 3, ... } of numbers is the sequence { bn : n = 1, 2, 3, ... } given by $b_{n}=\sum _{k=1}^{n}\left\{{\begin{matrix}n\\k\end{matrix}}\right\}a_{k},$ where $\left\{{\begin{matrix}n\\k\end{matrix}}\right\}$ is the Stirling number of the second kind, also denoted S(n,k) (with a capital S), which is the number of partitions of a set of size n into k parts. The inverse transform is $a_{n}=\sum _{k=1}^{n}s(n,k)b_{k},$ where s(n,k) (with a lower-case s) is a Stirling number of the first kind. Bernstein and Sloane (cited below) state "If an is the number of objects in some class with points labeled 1, 2, ..., n (with all labels distinct, i.e. ordinary labeled structures), then bn is the number of objects with points labeled 1, 2, ..., n (with repetitions allowed)." If $f(x)=\sum _{n=1}^{\infty }{a_{n} \over n!}x^{n}$ is a formal power series, and $g(x)=\sum _{n=1}^{\infty }{b_{n} \over n!}x^{n}$ with an and bn as above, then $g(x)=f(e^{x}-1).\,$ Likewise, the inverse transform leads to the generating function identity $f(x)=g(\log(1+x)).$ See also • Binomial transform • Generating function transformation • List of factorial and binomial topics References • Bernstein, M.; Sloane, N. J. A. (1995). "Some canonical sequences of integers". Linear Algebra and Its Applications. 226/228: 57–72. arXiv:math/0205301. doi:10.1016/0024-3795(94)00245-9. S2CID 14672360. • Khristo N. Boyadzhiev, Notes on the Binomial Transform, Theory and Table, with Appendix on the Stirling Transform (2018), World Scientific.
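The transform pair defined above can be checked numerically. The sketch below (our own helper names) implements both directions and verifies that they invert one another; since S(n, 0) = 0 for n ≥ 1, the all-ones sequence transforms to the Bell numbers 1, 2, 5, 15, 52, ...:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind via the explicit alternating sum."""
    return sum((-1) ** (k - i) * comb(k, i) * i ** n for i in range(k + 1)) // factorial(k)

def stirling1_signed(n, k):
    """Signed Stirling number of the first kind via s(n+1, k) = s(n, k-1) - n*s(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return stirling1_signed(n - 1, k - 1) - (n - 1) * stirling1_signed(n - 1, k)

def stirling_transform(a):
    """b_n = sum_k S(n, k) a_k for a 1-indexed sequence given as a list."""
    return [sum(stirling2(n, k) * a[k - 1] for k in range(1, n + 1))
            for n in range(1, len(a) + 1)]

def inverse_stirling_transform(b):
    """a_n = sum_k s(n, k) b_k, with s the signed first kind."""
    return [sum(stirling1_signed(n, k) * b[k - 1] for k in range(1, n + 1))
            for n in range(1, len(b) + 1)]
```

The round trip works because the two triangular Stirling matrices are inverses of one another, as noted in the articles above.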
Stirling's approximation In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of $n$. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre.[1][2][3] One way of stating the approximation involves the logarithm of the factorial: $\ln(n!)=n\ln n-n+O(\ln n),$ where the big O notation means that, for all sufficiently large values of $n$, the difference between $\ln(n!)$ and $n\ln n-n$ will be at most proportional to the logarithm. In computer science applications such as the worst-case lower bound for comparison sorting, it is convenient to use instead the binary logarithm, giving the equivalent form $\log _{2}(n!)=n\log _{2}n-n\log _{2}e+O(\log _{2}n).$ The error term in either base can be expressed more precisely as ${\tfrac {1}{2}}\log(2\pi n)+O({\tfrac {1}{n}})$, corresponding to an approximate formula for the factorial itself, $n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.$ Here the sign $\sim $ means that the two quantities are asymptotic, that is, that their ratio tends to 1 as $n$ tends to infinity. The following version of the bound holds for all $n\geq 1$, rather than only asymptotically: ${\sqrt {2\pi n}}\ \left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n+1}}<n!<{\sqrt {2\pi n}}\ \left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.$ Derivation Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum $\ln(n!)=\sum _{j=1}^{n}\ln j$ with an integral: $\sum _{j=1}^{n}\ln j\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1.$ The full formula, together with precise estimates of its error, can be derived as follows. 
Instead of approximating $n!$, one considers its natural logarithm, as this is a slowly varying function: $\ln(n!)=\ln 1+\ln 2+\cdots +\ln n.$ The right-hand side of this equation minus ${\tfrac {1}{2}}(\ln 1+\ln n)={\tfrac {1}{2}}\ln n$ is the approximation by the trapezoid rule of the integral $\ln(n!)-{\tfrac {1}{2}}\ln n\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1,$ and the error in this approximation is given by the Euler–Maclaurin formula: ${\begin{aligned}\ln(n!)-{\tfrac {1}{2}}\ln n&={\tfrac {1}{2}}\ln 1+\ln 2+\ln 3+\cdots +\ln(n-1)+{\tfrac {1}{2}}\ln n\\&=n\ln n-n+1+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}\left({\frac {1}{n^{k-1}}}-1\right)+R_{m,n},\end{aligned}}$ where $B_{k}$ is a Bernoulli number, and Rm,n is the remainder term in the Euler–Maclaurin formula. Take limits to find that $\lim _{n\to \infty }\left(\ln(n!)-n\ln n+n-{\tfrac {1}{2}}\ln n\right)=1-\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}+\lim _{n\to \infty }R_{m,n}.$ Denote this limit as $y$. Because the remainder Rm,n in the Euler–Maclaurin formula satisfies $R_{m,n}=\lim _{n\to \infty }R_{m,n}+O\left({\frac {1}{n^{m}}}\right),$ where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form: $\ln(n!)=n\ln \left({\frac {n}{e}}\right)+{\tfrac {1}{2}}\ln n+y+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)n^{k-1}}}+O\left({\frac {1}{n^{m}}}\right).$ Taking the exponential of both sides and choosing any positive integer $m$, one obtains a formula involving an unknown quantity $e^{y}$. For m = 1, the formula is $n!=e^{y}{\sqrt {n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).$ The quantity $e^{y}$ can be found by taking the limit on both sides as $n$ tends to infinity and using Wallis' product, which shows that $e^{y}={\sqrt {2\pi }}$. 
Therefore, one obtains Stirling's formula: $n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).$ Alternative derivations An alternative formula for $n!$ using the gamma function is $n!=\int _{0}^{\infty }x^{n}e^{-x}\,{\rm {d}}x.$ (as can be seen by repeated integration by parts). Rewriting and changing variables x = ny, one obtains $n!=\int _{0}^{\infty }e^{n\ln x-x}\,{\rm {d}}x=e^{n\ln n}n\int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y.$ Applying Laplace's method, one has $\int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y\sim {\sqrt {\frac {2\pi }{n}}}e^{-n},$ which recovers Stirling's formula: $n!\sim e^{n\ln n}n{\sqrt {\frac {2\pi }{n}}}e^{-n}={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.$ Higher orders In fact, further corrections can also be obtained using Laplace's method. From the previous result, we know that $\Gamma (x)\sim x^{x}e^{-x}$, so we "peel off" this dominant term, then perform a change of variables, to obtain: $x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{x(1+t-e^{t})}dt$ Now the function $t\mapsto 1+t-e^{t}$ is unimodal, with maximum value zero. Locally around zero, it looks like $-t^{2}/2$, which is why we are able to perform Laplace's method. In order to extend Laplace's method to higher orders, we perform another change of variables by $1+t-e^{t}=-\tau ^{2}/2$. This equation cannot be solved in closed form, but its solution can be expanded as a power series, which gives $t=\tau -\tau ^{2}/6+\tau ^{3}/36+a_{4}\tau ^{4}+O(\tau ^{5})$. Substituting this back into the integral gives $x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{-x\tau ^{2}/2}(1-\tau /3+\tau ^{2}/12+4a_{4}\tau ^{3}+O(\tau ^{4}))d\tau ={\sqrt {2\pi }}(x^{-1/2}+x^{-3/2}/12)+O(x^{-5/2})$ Notice that we do not actually need to find $a_{4}$, since it is cancelled out by the integral. Higher orders can be achieved by computing more terms in $t=\tau +\cdots $. 
Thus we get Stirling's formula to two orders: $n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+O\left({\frac {1}{n^{2}}}\right)\right).$ Complex-analytic version A complex-analysis version of this method[4] is to consider ${\frac {1}{n!}}$ as a Taylor coefficient of the exponential function $e^{z}=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}$, computed by Cauchy's integral formula as ${\frac {1}{n!}}={\frac {1}{2\pi i}}\oint \limits _{|z|=r}{\frac {e^{z}}{z^{n+1}}}\,\mathrm {d} z.$ This line integral can then be approximated using the saddle-point method with an appropriate choice of contour radius $r=r_{n}$. The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term. Speed of convergence and error estimates Stirling's formula is in fact the first approximation to the following series (now called the Stirling series):[5] $n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).$ An explicit formula for the coefficients in this series was given by G. Nemes.[6] Further terms are listed in the On-Line Encyclopedia of Integer Sequences as A001163 and A001164. The first graph in this section shows the relative error vs. $n$, for 1 through all 5 terms listed above. (Bender and Orszag[7] p. 218) gives the asymptotic formula for the coefficients: $A_{2j+1}\sim (-1)^{j}2(2j)!/(2\pi )^{2(j+1)}$ which shows that it grows superexponentially, and that by ratio test the radius of convergence is zero. As n → ∞, the error in the truncated series is asymptotically equal to the first omitted term. This is an example of an asymptotic expansion. It is not a convergent series; for any particular value of $n$ there are only so many terms of the series that improve accuracy, after which accuracy worsens. 
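This behaviour is easy to observe numerically. The sketch below (a minimal illustration; the coefficients $B_{2k}/(2k(2k-1))$ of the logarithmic series are hardcoded through $k=8$) evaluates the truncated series at $n=1$, where the effect is most visible, and records how the error first shrinks and then grows:

```python
import math

# Coefficients B_{2k} / (2k(2k-1)), k = 1..8, of the logarithmic Stirling series
COEFFS = [1/12, -1/360, 1/1260, -1/1680, 1/1188, -691/360360, 1/156, -3617/122400]

def ln_factorial_stirling(n, terms):
    # ln(n!) from the Stirling series truncated after `terms` correction terms
    s = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
    for k in range(terms):
        s += COEFFS[k] / n ** (2 * k + 1)
    return s

n = 1  # small n makes the eventual divergence of the series apparent
errors = [abs(ln_factorial_stirling(n, t) - math.lgamma(n + 1)) for t in range(9)]

# The error decreases for the first few terms, reaches a minimum,
# and then increases as further terms are added.
assert errors.index(min(errors)) == 3
assert errors[8] > errors[3]
```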
This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, let S(n, t) be the Stirling series to $t$ terms evaluated at $n$. The graphs show $\left|\ln \left({\frac {S(n,t)}{n!}}\right)\right|,$ which, when small, is essentially the relative error. Writing Stirling's series in the form $\ln(n!)\sim n\ln n-n+{\tfrac {1}{2}}\ln(2\pi n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots ,$ it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term. More precise bounds, due to Robbins,[8] valid for all positive integers $n$ are ${\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n+1}}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.$ A looser version of this bound is that ${\frac {n!e^{n}}{n^{n+{\frac {1}{2}}}}}\in ({\sqrt {2\pi }},e]$ for all $n\geq 1$. Stirling's formula for the gamma function For all positive integers, $n!=\Gamma (n+1),$ where Γ denotes the gamma function. However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. If Re(z) > 0, then $\ln \Gamma (z)=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{z}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t.$ Repeated integration by parts gives $\ln \Gamma (z)\sim z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\sum _{n=1}^{N-1}{\frac {B_{2n}}{2n(2n-1)z^{2n-1}}},$ where $B_{n}$ is the $n$th Bernoulli number (note that the limit of the sum as $N\to \infty $ is not convergent, so this formula is just an asymptotic expansion). The formula is valid for $z$ large enough in absolute value, when |arg(z)| < π − ε, where ε is positive, with an error term of O(z−2N+ 1). 
The corresponding approximation may now be written: $\Gamma (z)={\sqrt {\frac {2\pi }{z}}}\,{\left({\frac {z}{e}}\right)}^{z}\left(1+O\left({\frac {1}{z}}\right)\right).$ where the expansion is identical to that of Stirling's series above for $n!$, except that $n$ is replaced with z − 1.[9] A further application of this asymptotic expansion is for complex argument z with constant Re(z). See for example the Stirling formula applied in Im(z) = t of the Riemann–Siegel theta function on the straight line 1/4 + it. Error bounds For any positive integer $N$, the following notation is introduced: $\ln \Gamma (z)=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\sum \limits _{n=1}^{N-1}{\frac {B_{2n}}{2n\left({2n-1}\right)z^{2n-1}}}+R_{N}(z)$ and $\Gamma (z)={\sqrt {\frac {2\pi }{z}}}\left({\frac {z}{e}}\right)^{z}\left({\sum \limits _{n=0}^{N-1}{\frac {a_{n}}{z^{n}}}+{\widetilde {R}}_{N}(z)}\right).$ Then[10][11] ${\begin{aligned}|R_{N}(z)|&\leq {\frac {|B_{2N}|}{2N(2N-1)|z|^{2N-1}}}\times {\begin{cases}1&{\text{ if }}\left|\arg z\right|\leq {\frac {\pi }{4}},\\\left|\csc(\arg z)\right|&{\text{ if }}{\frac {\pi }{4}}<\left|\arg z\right|<{\frac {\pi }{2}},\\\sec ^{2N}\left({\tfrac {\arg z}{2}}\right)&{\text{ if }}\left|\arg z\right|<\pi ,\end{cases}}\\[6pt]\left|{\widetilde {R}}_{N}(z)\right|&\leq \left({\frac {\left|a_{N}\right|}{|z|^{N}}}+{\frac {\left|a_{N+1}\right|}{|z|^{N+1}}}\right)\times {\begin{cases}1&{\text{ if }}\left|\arg z\right|\leq {\frac {\pi }{4}},\\\left|\csc(2\arg z)\right|&{\text{ if }}{\frac {\pi }{4}}<\left|\arg z\right|<{\frac {\pi }{2}}.\end{cases}}\end{aligned}}$ For further information and other error bounds, see the cited papers. 
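As a quick numerical check of the truncated expansion for $\ln \Gamma (z)$ above, the following sketch (illustrative only) keeps the first four correction terms, with the Bernoulli numbers $B_{2},\dots ,B_{8}$ hardcoded, and compares against the standard library's `math.lgamma`; consistent with the error bounds, the residual is on the order of the first omitted term $B_{10}/(90z^{9})$:

```python
import math

BERNOULLI = [1 / 6, -1 / 30, 1 / 42, -1 / 30]  # B_2, B_4, B_6, B_8

def lgamma_stirling(z):
    # ln Gamma(z) from the Stirling series with four correction terms, Re(z) > 0
    s = z * math.log(z) - z + 0.5 * math.log(2 * math.pi / z)
    for n in range(1, len(BERNOULLI) + 1):
        s += BERNOULLI[n - 1] / (2 * n * (2 * n - 1) * z ** (2 * n - 1))
    return s

for z in (5.0, 10.0, 20.0):
    # The truncation error is bounded by the first omitted term, ~ B_10 / (90 z^9)
    assert abs(lgamma_stirling(z) - math.lgamma(z)) < 1e-8
```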
A convergent version of Stirling's formula Thomas Bayes showed, in a letter to John Canton published by the Royal Society in 1763, that Stirling's formula did not give a convergent series.[12] Obtaining a convergent version of Stirling's formula entails evaluating Binet's formula: $\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\ln \Gamma (x)-x\ln x+x-{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}.$ One way to do this is by means of a convergent series of inverted rising factorials. If $z^{\bar {n}}=z(z+1)\cdots (z+n-1),$ then $\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\sum _{n=1}^{\infty }{\frac {c_{n}}{(x+1)^{\bar {n}}}},$ where $c_{n}={\frac {1}{n}}\int _{0}^{1}x^{\bar {n}}\left(x-{\tfrac {1}{2}}\right)\,{\rm {d}}x={\frac {1}{2n}}\sum _{k=1}^{n}{\frac {k|s(n,k)|}{(k+1)(k+2)}},$ where s(n, k) denotes the Stirling numbers of the first kind. From this one obtains a version of Stirling's series ${\begin{aligned}\ln \Gamma (x)&=x\ln x-x+{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}+{\frac {1}{12(x+1)}}+{\frac {1}{12(x+1)(x+2)}}+\\&\quad +{\frac {59}{360(x+1)(x+2)(x+3)}}+{\frac {29}{60(x+1)(x+2)(x+3)(x+4)}}+\cdots ,\end{aligned}}$ which converges when Re(x) > 0. 
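The coefficients $c_{n}$ above can be generated exactly from the stated formula. The following sketch (illustrative, using exact rational arithmetic via Python's `fractions` module and the standard recurrence for the unsigned Stirling numbers of the first kind) reproduces the coefficients $\tfrac{1}{12},\tfrac{1}{12},\tfrac{59}{360},\tfrac{29}{60}$ appearing in the convergent series:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1_abs(n, k):
    # Unsigned Stirling numbers of the first kind |s(n, k)|
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling1_abs(n - 1, k - 1) + (n - 1) * stirling1_abs(n - 1, k)

def c(n):
    # c_n = (1 / 2n) * sum_{k=1}^{n} k * |s(n, k)| / ((k + 1)(k + 2))
    total = sum(Fraction(k * stirling1_abs(n, k), (k + 1) * (k + 2))
                for k in range(1, n + 1))
    return total / (2 * n)

assert [c(n) for n in range(1, 5)] == [
    Fraction(1, 12), Fraction(1, 12), Fraction(59, 360), Fraction(29, 60)]
```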
Stirling's formula may also be given in convergent form as[13] $\Gamma (x)={\sqrt {2\pi }}x^{x-{\frac {1}{2}}}e^{-x+\mu (x)}$ where $\mu \left(x\right)=\sum _{n=0}^{\infty }\left(\left(x+n+{\frac {1}{2}}\right)\ln \left(1+{\frac {1}{x+n}}\right)-1\right).$ Versions suitable for calculators The approximation $\Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {z}{e}}{\sqrt {z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}}}\right)^{z}$ and its equivalent form $2\ln \Gamma (z)\approx \ln(2\pi )-\ln z+z\left(2\ln z+\ln \left(z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}\right)-2\right)$ can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.[14] Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:[15] $\Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {1}{e}}\left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)\right)^{z},$ or equivalently, $\ln \Gamma (z)\approx {\tfrac {1}{2}}\left(\ln(2\pi )-\ln z\right)+z\left(\ln \left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)-1\right).$ An alternative approximation for the gamma function stated by Srinivasa Ramanujan (Ramanujan 1988) is $\Gamma (1+x)\approx {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{\frac {1}{6}}$ for x ≥ 0. The equivalent approximation for ln n! 
has an asymptotic error of 1/1400n3 and is given by $\ln n!\approx n\ln n-n+{\tfrac {1}{6}}\ln(8n^{3}+4n^{2}+n+{\tfrac {1}{30}})+{\tfrac {1}{2}}\ln \pi .$ The approximation may be made precise by giving paired upper and lower bounds; one such inequality is[16][17][18][19] ${\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{100}}\right)^{1/6}<\Gamma (1+x)<{\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{1/6}.$ History The formula was first discovered by Abraham de Moivre[2] in the form $n!\sim [{\rm {constant}}]\cdot n^{n+{\frac {1}{2}}}e^{-n}.$ De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely ${\sqrt {2\pi }}$.[3] See also • Lanczos approximation • Spouge's approximation References 1. Dutka, Jacques (1991), "The early history of the factorial function", Archive for History of Exact Sciences, 43 (3): 225–249, doi:10.1007/BF00389433 2. Le Cam, L. (1986), "The central limit theorem around 1935", Statistical Science, 1 (1): 78–96, doi:10.1214/ss/1177013818, JSTOR 2245503, MR 0833276; see p. 81, "The result, obtained using a formula originally proved by de Moivre but now called Stirling's formula, occurs in his 'Doctrine of Chances' of 1733." 3. Pearson, Karl (1924), "Historical note on the origin of the normal curve of errors", Biometrika, 16 (3/4): 402–404 [p. 403], doi:10.2307/2331714, JSTOR 2331714, I consider that the fact that Stirling showed that De Moivre's arithmetical constant was ${\sqrt {2\pi }}$ does not entitle him to claim the theorem, [...] 4. Flajolet, Philippe; Sedgewick, Robert (2009), Analytic Combinatorics, Cambridge, UK: Cambridge University Press, p. 555, doi:10.1017/CBO9780511801655, ISBN 978-0-521-89806-5, MR 2483235 5. Olver, F. W. J.; Olde Daalhuis, A. B.; Lozier, D. W.; Schneider, B. I.; Boisvert, R. F.; Clark, C. W.; Miller, B. R. & Saunders, B. 
V., "5.11 Gamma function properties: Asymptotic Expansions", NIST Digital Library of Mathematical Functions, Release 1.0.13 of 2016-09-16 6. Nemes, Gergő (2010), "On the coefficients of the asymptotic expansion of $n!$", Journal of Integer Sequences, 13 (6): 5 7. Bender, Carl M.; Orszag, Steven A. (2009). Advanced mathematical methods for scientists and engineers. 1: Asymptotic methods and perturbation theory (Nachdr. ed.). New York, NY: Springer. ISBN 978-0-387-98931-0. 8. Robbins, Herbert (1955), "A Remark on Stirling's Formula", The American Mathematical Monthly, 62 (1): 26–29, doi:10.2307/2308012, JSTOR 2308012 9. Spiegel, M. R. (1999), Mathematical handbook of formulas and tables, McGraw-Hill, p. 148 10. Schäfke, F. W.; Sattler, A. (1990), "Restgliedabschätzungen für die Stirlingsche Reihe", Note di Matematica, 10 (suppl. 2): 453–470, MR 1221957 11. G. Nemes, Error bounds and exponential improvements for the asymptotic expansions of the gamma function and its reciprocal, Proc. Roy. Soc. Edinburgh Sect. A 145 (2015), 571–596. 12. A letter from the late Reverend Mr. Thomas Bayes, F. R. S. to John Canton, M. A. and F. R. S. (PDF), 24 November 1763, archived (PDF) from the original on 2012-01-28, retrieved 2012-03-01 13. Artin, Emil (2015). The Gamma Function. Dover. p. 24. 14. Toth, V. T. Programmable Calculators: Calculators and the Gamma Function (2006) Archived 2005-12-31 at the Wayback Machine. 15. Nemes, Gergő (2010), "New asymptotic expansion for the Gamma function", Archiv der Mathematik, 95 (2): 161–169, doi:10.1007/s00013-010-0146-9 16. Karatsuba, Ekatherina A. (2001), "On the asymptotic representation of the Euler gamma function by Ramanujan", Journal of Computational and Applied Mathematics, 135 (2): 225–240, doi:10.1016/S0377-0427(00)00586-0, MR 1850542 17. Mortici, Cristinel (2011), "Ramanujan's estimate for the gamma function via monotonicity arguments", Ramanujan J., 25: 149–154 18. 
Mortici, Cristinel (2011), "Improved asymptotic formulas for the gamma function", Comput. Math. Appl., 61: 3364–3369. 19. Mortici, Cristinel (2011), "On Ramanujan's large argument formula for the gamma function", Ramanujan J., 26: 185–192. Further reading • Abramowitz, M. & Stegun, I. (2002), Handbook of Mathematical Functions • Paris, R. B. & Kaminski, D. (2001), Asymptotics and Mellin–Barnes Integrals, New York: Cambridge University Press, ISBN 978-0-521-79001-7 • Whittaker, E. T. & Watson, G. N. (1996), A Course in Modern Analysis (4th ed.), New York: Cambridge University Press, ISBN 978-0-521-58807-2 • Romik, Dan (2000), "Stirling's approximation for $n!$: the ultimate short proof?", The American Mathematical Monthly, 107 (6): 556–557, doi:10.2307/2589351, MR 1767064 • Li, Yuan-Chuan (July 2006), "A note on an identity of the gamma function and Stirling's formula", Real Analysis Exchange, 32 (1): 267–271, MR 2329236 External links Wikimedia Commons has media related to Stirling's approximation. • "Stirling_formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Peter Luschny, Approximation formulas for the factorial function n! • Weisstein, Eric W., "Stirling's Approximation", MathWorld • Stirling's approximation at PlanetMath. 
Stochastic Stochastic (/stəˈkæstɪk/; from Ancient Greek στόχος (stókhos) 'aim, guess')[1] refers to the property of being well described by a random probability distribution.[1] Although stochasticity and randomness are distinct in that the former refers to a modeling approach and the latter refers to phenomena themselves, these two terms are often used synonymously. Furthermore, in probability theory, the formal concept of a stochastic process is also referred to as a random process.[2][3][4][5][6] Stochasticity is used in many different fields, including the natural sciences such as biology,[7] chemistry,[8] ecology,[9] neuroscience,[10] and physics,[11] as well as technology and engineering fields such as image processing, signal processing,[12] information theory,[13] computer science,[14] cryptography,[15] and telecommunications.[16] It is also used in finance, due to seemingly random changes in financial markets[17][18][19] as well as in medicine, linguistics, music, media, colour theory, botany, manufacturing, and geomorphology. Etymology The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence.[1] In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics".[20] This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz,[21] who in 1917 wrote in German the word Stochastik with a sense meaning random. 
The term stochastic process first appeared in English in a 1934 paper by Joseph Doob.[1] For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin,[22][23] though the German term had been used earlier in 1931 by Andrey Kolmogorov.[24] Mathematics In the early 1930s, Aleksandr Khinchin gave the first mathematical definition of a stochastic process as a family of random variables indexed by the real line.[25][22] Further fundamental work on probability theory and stochastic processes was done by Khinchin as well as other mathematicians such as Andrey Kolmogorov, Joseph Doob, William Feller, Maurice Fréchet, Paul Lévy, Wolfgang Doeblin, and Harald Cramér.[27][28] Decades later Cramér referred to the 1930s as the "heroic period of mathematical probability theory".[28] In mathematics, the theory of stochastic processes is an important contribution to probability theory,[29] and continues to be an active topic of research for both theory and applications.[30][31][32] The word stochastic is used to describe other terms and objects in mathematics. Examples include a stochastic matrix, which describes a stochastic process known as a Markov process, and stochastic calculus, which involves differential equations and integrals based on stochastic processes such as the Wiener process, also called the Brownian motion process. Natural science One of the simplest continuous-time stochastic processes is Brownian motion. This was first observed by botanist Robert Brown while looking through a microscope at pollen grains in water. Physics The Monte Carlo method is a stochastic method popularized by physics researchers Stanisław Ulam, Enrico Fermi, John von Neumann, and Nicholas Metropolis.[33] The use of randomness and the repetitive nature of the process are analogous to the activities conducted at a casino. 
Methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread. Perhaps the most famous early use was by Enrico Fermi in the 1930s, when he used a random method to calculate the properties of the newly discovered neutron. Monte Carlo methods were central to the simulations required for the Manhattan Project, though they were severely limited by the computational tools of the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The RAND Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling. Biology Stochastic resonance: In biological systems, introducing stochastic "noise" has been found to help improve the signal strength of the internal feedback loops for balance and other vestibular communication.[34] It has been found to help diabetic and stroke patients with balance control.[35] Many biochemical events also lend themselves to stochastic analysis. 
Gene expression, for example, has a stochastic component through the molecular collisions—as during binding and unbinding of RNA polymerase to a gene promoter—via the solution's Brownian motion. Creativity Simonton (2003, Psych Bulletin) argues that creativity in science (of scientists) is a constrained stochastic behaviour such that new theories in all sciences are, at least in part, the product of a stochastic process. Computer science Stochastic ray tracing is the application of Monte Carlo simulation to the computer graphics ray tracing algorithm. "Distributed ray tracing samples the integrand at many randomly chosen points and averages the results to obtain a better approximation. It is essentially an application of the Monte Carlo method to 3D computer graphics, and for this reason is also called Stochastic ray tracing." Stochastic forensics analyzes computer crime by viewing computers as stochastic processes. In artificial intelligence, stochastic programs work by using probabilistic methods to solve problems, as in simulated annealing, stochastic neural networks, stochastic optimization, genetic algorithms, and genetic programming. A problem itself may be stochastic as well, as in planning under uncertainty. Finance The financial markets use stochastic models to represent the seemingly random behaviour of various financial assets, including the random behavior of the price of one currency compared to that of another (such as the price of US Dollar compared to that of the Euro), and also to represent random behaviour of interest rates. These models are then used by quantitative analysts to value options on stock prices, bond prices, and on interest rates, see Markov models. Moreover, it is at the heart of the insurance industry. Geomorphology The formation of river meanders has been analyzed as a stochastic process. 
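The Monte Carlo methods mentioned in several of the sections above all rest on the same idea, namely repeated random sampling. A minimal illustration (plain Python, with a fixed seed for reproducibility) is the classic estimate of π from points drawn uniformly in the unit square:

```python
import random

def estimate_pi(samples, seed=0):
    # The fraction of random points in the unit square that land inside
    # the quarter circle of radius 1, times 4, approximates pi.
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(samples))
    return 4 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159; error shrinks like 1/sqrt(samples)
```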
Language and linguistics Non-deterministic approaches in language studies are largely inspired by the work of Ferdinand de Saussure, for example, in functionalist linguistic theory, which argues that competence is based on performance.[36][37] This distinction in functional theories of grammar should be carefully distinguished from the langue and parole distinction. To the extent that linguistic knowledge is constituted by experience with language, grammar is argued to be probabilistic and variable rather than fixed and absolute. This conception of grammar as probabilistic and variable follows from the idea that one's competence changes in accordance with one's experience with language. Though this conception has been contested,[38] it has also provided the foundation for modern statistical natural language processing[39] and for theories of language learning and change.[40] Manufacturing Manufacturing processes are assumed to be stochastic processes. This assumption is largely valid for either continuous or batch manufacturing processes. Testing and monitoring of the process is recorded using a process control chart which plots a given process control parameter over time. Typically a dozen or many more parameters will be tracked simultaneously. Statistical models are used to define limit lines which define when corrective actions must be taken to bring the process back to its intended operational window. This same approach is used in the service industry where parameters are replaced by processes related to service level agreements. Media The marketing and the changing movement of audience tastes and preferences, as well as the solicitation of and the scientific appeal of certain film and television debuts (i.e., their opening weekends, word-of-mouth, top-of-mind knowledge among surveyed groups, star name recognition and other elements of social media outreach and advertising), are determined in part by stochastic modeling. 
A recent attempt at repeat business analysis was done by Japanese scholars and is part of the Cinematic Contagion Systems patented by Geneva Media Holdings, and such modeling has been used in data collection from the time of the original Nielsen ratings to modern studio and television test audiences. Medicine Stochastic effect, or "chance effect" is one classification of radiation effects that refers to the random, statistical nature of the damage. In contrast to the deterministic effect, severity is independent of dose. Only the probability of an effect increases with dose. Music In music, mathematical processes based on probability can generate stochastic elements. Stochastic processes may be used in music to compose a fixed piece or may be produced in performance. Stochastic music was pioneered by Iannis Xenakis, who coined the term stochastic music. Specific examples of mathematics, statistics, and physics applied to music composition are the use of the statistical mechanics of gases in Pithoprakta, statistical distribution of points on a plane in Diamorphoses, minimal constraints in Achorripsis, the normal distribution in ST/10 and Atrées, Markov chains in Analogiques, game theory in Duel and Stratégie, group theory in Nomos Alpha (for Siegfried Palm), set theory in Herma and Eonta,[41] and Brownian motion in N'Shima. Xenakis frequently used computers to produce his scores, such as the ST series including Morsima-Amorsima and Atrées, and founded CEMAMu. Earlier, John Cage and others had composed aleatoric or indeterminate music, which is created by chance processes but does not have the strict mathematical basis (Cage's Music of Changes, for example, uses a system of charts based on the I-Ching). Lejaren Hiller and Leonard Issacson used generative grammars and Markov chains in their 1957 Illiac Suite. 
Modern electronic music production techniques make these processes relatively simple to implement, and many hardware devices such as synthesizers and drum machines incorporate randomization features. Generative music techniques are therefore readily accessible to composers, performers, and producers. Social sciences Stochastic social science theory is similar to systems theory in that events are interactions of systems, although with a marked emphasis on unconscious processes. The event creates its own conditions of possibility, rendering it unpredictable if simply for the number of variables involved. Stochastic social science theory can be seen as an elaboration of a kind of 'third axis' in which to situate human behavior alongside the traditional 'nature vs. nurture' opposition. See Julia Kristeva on her usage of the 'semiotic', Luce Irigaray on reverse Heideggerian epistemology, and Pierre Bourdieu on polythetic space for examples of stochastic social science theory. The term stochastic terrorism has come into frequent use[42] with regard to lone wolf terrorism. The terms "Scripted Violence" and "Stochastic Terrorism" are linked in a "cause <> effect" relationship. "Scripted violence" rhetoric can result in an act of "stochastic terrorism". The phrase "scripted violence" has been used in social science since at least 2002.[43] Author David Neiwert, who wrote the book Alt-America, told Salon interviewer Chauncey Devega: Scripted violence is where a person who has a national platform describes the kind of violence that they want to be carried out. He identifies the targets and leaves it up to the listeners to carry out this violence. It is a form of terrorism. It is an act and a social phenomenon where there is an agreement to inflict massive violence on a whole segment of society. Again, this violence is led by people in high-profile positions in the media and the government. They're the ones who do the scripting, and it is ordinary people who carry it out. 
Think of it like Charles Manson and his followers. Manson wrote the script; he didn't commit any of those murders. He just had his followers carry them out.[44] Subtractive color reproduction When color reproductions are made, the image is separated into its component colors by taking multiple photographs filtered for each color. One resultant film or plate represents each of the cyan, magenta, yellow, and black data. Color printing is a binary system, where ink is either present or not present, so all color separations to be printed must be translated into dots at some stage of the work-flow. Traditional line screens which are amplitude modulated had problems with moiré but were used until stochastic screening became available. A stochastic (or frequency modulated) dot pattern creates a sharper image. See also • Jump process • Sortition • Stochastic process Notes 1. Doob, when citing Khinchin, uses the term 'chance variable', which used to be an alternative term for 'random variable'.[26] References 1. "Stochastic". Lexico UK English Dictionary. Oxford University Press. Archived from the original on January 2, 2020. 2. Robert J. Adler; Jonathan E. Taylor (29 January 2009). Random Fields and Geometry. Springer Science & Business Media. pp. 7–8. ISBN 978-0-387-48116-6. 3. David Stirzaker (2005). Stochastic Processes and Models. Oxford University Press. p. 45. ISBN 978-0-19-856814-8. 4. Loïc Chaumont; Marc Yor (19 July 2012). Exercises in Probability: A Guided Tour from Measure Theory to Random Processes, Via Conditioning. Cambridge University Press. p. 175. ISBN 978-1-107-60655-5. 5. Murray Rosenblatt (1962). Random Processes. Oxford University Press. p. 91. ISBN 9780758172174. 6. Olav Kallenberg (8 January 2002). Foundations of Modern Probability. Springer Science & Business Media. pp. 24 and 25. ISBN 978-0-387-95313-7. 7. Paul C. Bressloff (22 August 2014). Stochastic Processes in Cell Biology. Springer. ISBN 978-3-319-08488-6. 8. N.G. Van Kampen (30 August 2011). 
Stochastic Processes in Physics and Chemistry. Elsevier. ISBN 978-0-08-047536-3. 9. Russell Lande; Steinar Engen; Bernt-Erik Sæther (2003). Stochastic Population Dynamics in Ecology and Conservation. Oxford University Press. ISBN 978-0-19-852525-7. 10. Carlo Laing; Gabriel J Lord (2010). Stochastic Methods in Neuroscience. OUP Oxford. ISBN 978-0-19-923507-0. 11. Wolfgang Paul; Jörg Baschnagel (11 July 2013). Stochastic Processes: From Physics to Finance. Springer Science & Business Media. ISBN 978-3-319-00327-6. 12. Edward R. Dougherty (1999). Random processes for image and signal processing. SPIE Optical Engineering Press. ISBN 978-0-8194-2513-3. 13. Thomas M. Cover; Joy A. Thomas (28 November 2012). Elements of Information Theory. John Wiley & Sons. p. 71. ISBN 978-1-118-58577-1. 14. Michael Baron (15 September 2015). Probability and Statistics for Computer Scientists, Second Edition. CRC Press. p. 131. ISBN 978-1-4987-6060-7. 15. Jonathan Katz; Yehuda Lindell (2007-08-31). Introduction to Modern Cryptography: Principles and Protocols. CRC Press. p. 26. ISBN 978-1-58488-586-3. 16. François Baccelli; Bartlomiej Blaszczyszyn (2009). Stochastic Geometry and Wireless Networks. Now Publishers Inc. pp. 200–. ISBN 978-1-60198-264-3. 17. J. Michael Steele (2001). Stochastic Calculus and Financial Applications. Springer Science & Business Media. ISBN 978-0-387-95016-7. 18. Marek Musiela; Marek Rutkowski (21 January 2006). Martingale Methods in Financial Modelling. Springer Science & Business Media. ISBN 978-3-540-26653-2. 19. Steven E. Shreve (3 June 2004). Stochastic Calculus for Finance II: Continuous-Time Models. Springer Science & Business Media. ISBN 978-0-387-40101-0. 20. O. B. Sheĭnin (2006). Theory of probability and statistics as exemplified in short dictums. NG Verlag. p. 5. ISBN 978-3-938417-40-9. 21. Oscar Sheynin; Heinrich Strecker (2011). Alexandr A. Chuprov: Life, Work, Correspondence. V&R unipress GmbH. p. 136. ISBN 978-3-89971-812-6. 22. 
Doob, Joseph (1934). "Stochastic Processes and Statistics". Proceedings of the National Academy of Sciences of the United States of America. 20 (6): 376–379. Bibcode:1934PNAS...20..376D. doi:10.1073/pnas.20.6.376. PMC 1076423. PMID 16587907. 23. Khintchine, A. (1934). "Korrelationstheorie der stationeren stochastischen Prozesse". Mathematische Annalen. 109 (1): 604–615. doi:10.1007/BF01449156. ISSN 0025-5831. S2CID 122842868. 24. Kolmogoroff, A. (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung". Mathematische Annalen. 104 (1): 1. doi:10.1007/BF01457949. ISSN 0025-5831. S2CID 119439925. 25. Vere-Jones, David (2006). "Khinchin, Aleksandr Yakovlevich". Encyclopedia of Statistical Sciences. p. 4. doi:10.1002/0471667196.ess6027.pub2. ISBN 0471667196. 26. Snell, J. Laurie (2005). "Obituary: Joseph Leonard Doob". Journal of Applied Probability. 42 (1): 251. doi:10.1239/jap/1110381384. ISSN 0021-9002. 27. Bingham, N. (2000). "Studies in the history of probability and statistics XLVI. Measure into probability: from Lebesgue to Kolmogorov". Biometrika. 87 (1): 145–156. doi:10.1093/biomet/87.1.145. ISSN 0006-3444. 28. Cramer, Harald (1976). "Half a Century with Probability Theory: Some Personal Recollections". The Annals of Probability. 4 (4): 509–546. doi:10.1214/aop/1176996025. ISSN 0091-1798. 29. Applebaum, David (2004). "Lévy processes: From probability to finance and quantum groups". Notices of the AMS. 51 (11): 1336–1347. 30. Jochen Blath; Peter Imkeller; Sylvie Roelly (2011). Surveys in Stochastic Processes. European Mathematical Society. pp. 5–. ISBN 978-3-03719-072-2. 31. Michel Talagrand (12 February 2014). Upper and Lower Bounds for Stochastic Processes: Modern Methods and Classical Problems. Springer Science & Business Media. pp. 4–. ISBN 978-3-642-54075-2. 32. Paul C. Bressloff (22 August 2014). Stochastic Processes in Cell Biology. Springer. pp. vii–ix. ISBN 978-3-319-08488-6. 33. 
Douglas Hubbard "How to Measure Anything: Finding the Value of Intangibles in Business" p. 46, John Wiley & Sons, 2007 34. Hänggi, P. (2002). "Stochastic Resonance in Biology How Noise Can Enhance Detection of Weak Signals and Help Improve Biological Information Processing". ChemPhysChem. 3 (3): 285–90. doi:10.1002/1439-7641(20020315)3:3<285::AID-CPHC285>3.0.CO;2-A. PMID 12503175. 35. Priplata, A.; et al. (2006). "Noise-Enhanced Balance Control in Patients with Diabetes and Patients with Stroke" (PDF). Ann Neurol. 59 (1): 4–12. doi:10.1002/ana.20670. PMID 16287079. S2CID 3140340. 36. Newmeyer, Frederick. 2001. "The Prague School and North American functionalist approaches to syntax" Journal of Linguistics 37, pp. 101–126. "Since most American functionalists adhere to this trend, I will refer to it and its practitioners with the initials 'USF'. Some of the more prominent USFs are Joan Bybee, William Croft, Talmy Givon, John Haiman, Paul Hopper, Marianne Mithun and Sandra Thompson. In its most extreme form (Hopper 1987, 1988), USF rejects the Saussurean dichotomies such as langue vs. parole and synchrony vs. diachrony. All adherents of this tendency feel that the Chomskyan advocacy of a sharp distinction between competence and performance is at best unproductive and obscurantist; at worst theoretically unmotivated." 37. Bybee, Joan. "Usage-based phonology." p. 213 in Darnel, Mike (ed). 1999. Functionalism and Formalism in Linguistics: General papers. John Benjamins Publishing Company 38. Chomsky (1959). Review of Skinner's Verbal Behavior, Language, 35: 26–58 39. Manning and Schütze, (1999) Foundations of Statistical Natural Language Processing, MIT Press. Cambridge, MA 40. Bybee (2007) Frequency of use and the organization of language. Oxford: Oxford University Press 41.
Ilias Chrissochoidis, Stavros Houliaras, and Christos Mitsakis, "Set theory in Xenakis' EONTA", in International Symposium Iannis Xenakis, ed. Anastasia Georgaki and Makis Solomos (Athens: The National and Kapodistrian University, 2005), 241–249. 42. Anthony Scaramucci says he does not support President Trump's reelection on YouTube published August 12, 2019 CNN 43. Hamamoto, Darrell Y. (2002). "Empire of Death: Militarized Society and the Rise of Serial Killing and Mass Murder". New Political Science. 24 (1): 105–120. doi:10.1080/07393140220122662. S2CID 145617529. 44. DeVega, Chauncey (1 November 2018). "Author David Neiwert on the outbreak of political violence". Salon. Retrieved 13 December 2018. Further reading • Formalized Music: Thought and Mathematics in Composition by Iannis Xenakis, ISBN 1-57647-079-2 • Frequency and the Emergence of Linguistic Structure by Joan Bybee and Paul Hopper (eds.), ISBN 1-58811-028-1/ISBN 90-272-2948-1 (Eur.) • The Stochastic Empirical Loading and Dilution Model provides documentation and computer code for modeling stochastic processes in Visual Basic for Applications. External links • The dictionary definition of stochastic at Wiktionary
Wikipedia
Burgers' equation Burgers' equation or Bateman–Burgers equation is a fundamental partial differential equation and convection–diffusion equation[1] occurring in various areas of applied mathematics, such as fluid mechanics,[2] nonlinear acoustics,[3] gas dynamics, and traffic flow.[4] The equation was first introduced by Harry Bateman in 1915[5][6] and later studied by Johannes Martinus Burgers in 1948.[7] For a given field $u(x,t)$ and diffusion coefficient (or kinematic viscosity, as in the original fluid mechanical context) $\nu $, the general form of Burgers' equation (also known as viscous Burgers' equation) in one space dimension is the dissipative system: ${\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=\nu {\frac {\partial ^{2}u}{\partial x^{2}}}.$ When the diffusion term is absent (i.e. $\nu =0$), Burgers' equation becomes the inviscid Burgers' equation: ${\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=0,$ which is a prototype for conservation equations that can develop discontinuities (shock waves). The previous equation is the advective form of the Burgers' equation. The conservative form is found to be more useful in numerical integration ${\frac {\partial u}{\partial t}}+{\frac {1}{2}}{\frac {\partial (u^{2})}{\partial x}}=0.$ Terms There are 4 parameters in Burgers' equation: $u,x,t$ and $\nu $. In a system consisting of a moving viscous fluid with one spatial ($x$) and one temporal ($t$) dimension, e.g. a thin ideal pipe with fluid running through it, Burgers' equation describes the speed of the fluid at each location along the pipe as time progresses. The terms of the equation represent the following quantities:[8] • $x$: spatial coordinate • $t$: temporal coordinate • $u(x,t)$: speed of fluid at the indicated spatial and temporal coordinates • $\nu $: viscosity of fluid The viscosity is a constant physical property of the fluid, and the other parameters represent the dynamics contingent on that viscosity. 
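As a sketch of why the conservative form lends itself to numerical integration, the following first-order upwind finite-volume scheme (an illustrative choice, assuming $u>0$ throughout so information travels rightward) advances the inviscid equation while conserving the total of $u$ exactly:

```python
import math

def upwind_step(u, dx, dt):
    """One step of a first-order upwind finite-volume scheme for the
    conservative form u_t + (u^2/2)_x = 0 on a periodic grid, assuming
    u > 0 so information propagates to the right."""
    flux = [0.5 * v * v for v in u]
    # cell j is updated with the flux difference against cell j-1;
    # flux[-1] wraps around, giving periodic boundary conditions
    return [u[j] - dt / dx * (flux[j] - flux[j - 1]) for j in range(len(u))]

n = 200
dx = 2 * math.pi / n
u = [1.5 + 0.5 * math.sin(j * dx) for j in range(n)]  # smooth, positive
dt = 0.4 * dx / max(u)                                # CFL condition
for _ in range(100):
    u = upwind_step(u, dx, dt)
```

Because the per-cell flux differences telescope around the periodic domain, the discrete integral sum(u)*dx is preserved to rounding error even as the profile steepens, which is the practical advantage of the conservative form over the advective form.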
Inviscid Burgers' equation The inviscid Burgers' equation is a conservation equation, more generally a first-order quasilinear hyperbolic equation. The solution of the equation, together with the initial condition ${\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=0,\quad u(x,0)=f(x),$ can be constructed by the method of characteristics. The characteristic equations are ${\frac {dx}{dt}}=u,\quad {\frac {du}{dt}}=0.$ Integration of the second equation tells us that $u$ is constant along the characteristic, and integration of the first equation shows that the characteristics are straight lines, i.e., $u=c,\quad x=ut+\xi ,$ where $\xi $ is the point (or parameter) on the x-axis (t = 0) of the x-t plane from which the characteristic curve is drawn. Since $u$ on the $x$-axis is known from the initial condition, and since $u$ is unchanged as we move along the characteristic emanating from each point $x=\xi $, we write $u=c=f(\xi )$ on each characteristic. Therefore, the family of trajectories of characteristics parametrized by $\xi $ is $x=f(\xi )t+\xi .$ Thus, the solution is given by $u(x,t)=f(\xi )=f(x-ut),\quad \xi =x-f(\xi )t.$ This is an implicit relation that determines the solution of the inviscid Burgers' equation provided characteristics don't intersect. If the characteristics do intersect, a classical solution to the PDE does not exist, and a shock wave forms. Whether characteristics can intersect or not depends on the initial condition.
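The implicit relation $\xi =x-f(\xi )t$ can be solved numerically before characteristics cross; the sketch below uses simple fixed-point iteration, an illustrative method that converges when $t\,|f'|<1$ (function and parameter choices are assumptions, not part of the original text):

```python
import math

def characteristics_solution(f, x, t, iters=200):
    """Evaluate u(x, t) for the inviscid Burgers' equation by solving the
    implicit relation xi = x - f(xi) * t with fixed-point iteration and
    returning u = f(xi). Valid only before characteristics intersect."""
    xi = x
    for _ in range(iters):
        xi = x - f(xi) * t
    return f(xi)

# Initial condition f(x) = exp(-x^2); for this profile the breaking time
# exceeds 1, so t = 0.5 is safely before shock formation.
f = lambda x: math.exp(-x * x)
u = characteristics_solution(f, x=1.0, t=0.5)
```

By construction the returned value satisfies the implicit form $u=f(x-ut)$ at the queried point, which can be checked directly.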
In fact, the breaking time before a shock wave can be formed is given by[9] $t_{b}=\inf _{x}\left({\frac {-1}{f'(x)}}\right)$ Inviscid Burgers' equation for linear initial condition Subrahmanyan Chandrasekhar provided the explicit solution in 1943 when the initial condition is linear, i.e., $f(x)=ax+b$, where a and b are constants.[10] The explicit solution is $u(x,t)={\frac {ax+b}{at+1}}.$ This solution is also the complete integral of the inviscid Burgers' equation because it contains as many arbitrary constants as the number of independent variables appearing in the equation.[11] Using this complete integral, Chandrasekhar obtained the general solution described for arbitrary initial conditions from the envelope of the complete integral. Viscous Burgers' equation The viscous Burgers' equation can be converted to a linear equation by the Cole–Hopf transformation,[12][13][14] $u=-2\nu {\frac {1}{\phi }}{\frac {\partial \phi }{\partial x}},$ which turns it into the equation ${\frac {\partial }{\partial x}}\left({\frac {1}{\phi }}{\frac {\partial \phi }{\partial t}}\right)=\nu {\frac {\partial }{\partial x}}\left({\frac {1}{\phi }}{\frac {\partial ^{2}\phi }{\partial x^{2}}}\right)$ which can be integrated with respect to $x$ to obtain ${\frac {\partial \phi }{\partial t}}=\nu {\frac {\partial ^{2}\phi }{\partial x^{2}}}+g(t)\phi $ where $g(t)$ is a function that depends on boundary conditions. If $g(t)=0$ identically (e.g. 
if the problem is to be solved on a periodic domain), then we get the diffusion equation ${\frac {\partial \phi }{\partial t}}=\nu {\frac {\partial ^{2}\phi }{\partial x^{2}}}.$ The diffusion equation can be solved, and the Cole–Hopf transformation inverted, to obtain the solution to the Burgers' equation: $u(x,t)=-2\nu {\frac {\partial }{\partial x}}\ln \left\{(4\pi \nu t)^{-1/2}\int _{-\infty }^{\infty }\exp \left[-{\frac {(x-x')^{2}}{4\nu t}}-{\frac {1}{2\nu }}\int _{0}^{x'}f(x'')dx''\right]dx'\right\}.$ Other forms Generalized Burgers' equation The generalized Burgers' equation replaces the quasilinear convective term $u\,\partial u/\partial x$ with a more general form, i.e., ${\frac {\partial u}{\partial t}}+c(u){\frac {\partial u}{\partial x}}=\nu {\frac {\partial ^{2}u}{\partial x^{2}}},$ where $c(u)$ is an arbitrary function of $u$. The inviscid equation ($\nu =0$) is still a quasilinear hyperbolic equation for $c(u)>0$, and its solution can be constructed using the method of characteristics as before.[15] Stochastic Burgers' equation Adding space-time noise $\eta (x,t)={\dot {W}}(x,t)$, where $W$ is an $L^{2}(\mathbb {R} )$ Wiener process, yields the stochastic Burgers' equation[16] ${\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=\nu {\frac {\partial ^{2}u}{\partial x^{2}}}-\lambda {\frac {\partial \eta }{\partial x}}.$ This stochastic PDE is the one-dimensional version of the Kardar–Parisi–Zhang equation in a field $h(x,t)$ upon substituting $u(x,t)=-\lambda \partial h/\partial x$. See also • Euler–Tricomi equation • Chaplygin's equation • Conservation equation • Fokker–Planck equation References 1. Misra, Souren; Raghurama Rao, S. V.; Bobba, Manoj Kumar (2010-09-01). "Relaxation system based sub-grid scale modelling for large eddy simulation of Burgers' equation". International Journal of Computational Fluid Dynamics. 24 (8): 303–315. Bibcode:2010IJCFD..24..303M. doi:10.1080/10618562.2010.523518. ISSN 1061-8562. S2CID 123001189. 2.
It relates to the Navier–Stokes momentum equation with the pressure term removed Burgers Equation (PDF): here the variable is the flow speed y=u 3. It arises from Westervelt equation with an assumption of strictly forward propagating waves and the use of a coordinate transformation to a retarded time frame: here the variable is the pressure 4. Musha, Toshimitsu; Higuchi, Hideyo (1978-05-01). "Traffic Current Fluctuation and the Burgers Equation". Japanese Journal of Applied Physics. 17 (5): 811. Bibcode:1978JaJAP..17..811M. doi:10.1143/JJAP.17.811. ISSN 1347-4065. S2CID 121252757. 5. Bateman, H. (1915). "Some recent researches on the motion of fluids". Monthly Weather Review. 43 (4): 163–170. Bibcode:1915MWRv...43..163B. doi:10.1175/1520-0493(1915)43<163:SRROTM>2.0.CO;2. 6. Whitham, G. B. (2011). Linear and nonlinear waves (Vol. 42). John Wiley & Sons. 7. Burgers, J. M. (1948). "A Mathematical Model Illustrating the Theory of Turbulence". Advances in Applied Mechanics. 1: 171–199. doi:10.1016/S0065-2156(08)70100-5. ISBN 9780123745798. 8. Cameron, Maria. "Notes on Burgers's Equation" (PDF). 9. Olver, Peter J. (2013). Introduction to Partial Differential Equations. Undergraduate Texts in Mathematics. Online: Springer. p. 37. doi:10.1007/978-3-319-02099-0. ISBN 978-3-319-02098-3. S2CID 220617008. 10. Chandrasekhar, S. (1943). On the decay of plane shock waves (Report). Ballistic Research Laboratories. Report No. 423. 11. Forsyth, A. R. (1903). A Treatise on Differential Equations. London: Macmillan. 12. Cole, Julian (1951). "On a quasi-linear parabolic equation occurring in aerodynamics". Quarterly of Applied Mathematics. 9 (3): 225–236. doi:10.1090/qam/42889. JSTOR 43633894. 13. Eberhard Hopf (September 1950). "The partial differential equation ut + uux = μuxx". Communications on Pure and Applied Mathematics. 3 (3): 201–230. doi:10.1002/cpa.3160030302. 14. Kevorkian, J. (1990). Partial Differential Equations: Analytical Solution Techniques. Belmont: Wadsworth. pp. 
31–35. ISBN 0-534-12216-7. 15. Courant, R., & Hilbert, D. Methods of Mathematical Physics. Vol. II. 16. Wang, W.; Roberts, A. J. (2015). "Diffusion Approximation for Self-similarity of Stochastic Advection in Burgers' Equation". Communications in Mathematical Physics. 333 (3): 1287–1316. arXiv:1203.0463. Bibcode:2015CMaPh.333.1287W. doi:10.1007/s00220-014-2117-7. S2CID 119650369. External links • Burgers' Equation at EqWorld: The World of Mathematical Equations. • Burgers' Equation at NEQwiki, the nonlinear equations encyclopedia.
Stochastic dynamic programming Originally introduced by Richard E. Bellman in (Bellman 1957), stochastic dynamic programming is a technique for modelling and solving problems of decision making under uncertainty. Closely related to stochastic programming and dynamic programming, stochastic dynamic programming represents the problem under scrutiny in the form of a Bellman equation. The aim is to compute a policy prescribing how to act optimally in the face of uncertainty. A motivating example: Gambling game A gambler has $2; she is allowed to play a game of chance 4 times, and her goal is to maximize her probability of ending up with at least $6. If the gambler bets $$b$ on a play of the game, then with probability 0.4 she wins the game, recoups the initial bet, and increases her capital position by $$b$; with probability 0.6, she loses the bet amount $$b$; all plays are pairwise independent. On any play of the game, the gambler may not bet more money than she has available at the beginning of that play.[1] Stochastic dynamic programming can be employed to model this problem and determine a betting strategy that, for instance, maximizes the gambler's probability of attaining a wealth of at least $6 by the end of the betting horizon. Note that if there is no limit to the number of games that can be played, the problem becomes a variant of the well-known St. Petersburg paradox.
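Before introducing the dynamic-programming machinery, it is instructive to evaluate one fixed (and, as it turns out, poor) policy exactly. The sketch below, which is illustrative and not part of the original formulation, enumerates every outcome of the policy "always bet $1":

```python
from itertools import product

def success_probability_bet_one():
    """Exact success probability of the fixed policy 'always bet $1':
    start with $2, play 4 times, win each play with probability 0.4,
    and succeed when the final wealth is at least $6."""
    p = 0.0
    for outcome in product([True, False], repeat=4):  # win/lose pattern
        wealth, prob = 2, 1.0
        for win in outcome:
            bet = min(1, wealth)       # cannot bet more than she holds
            wealth += bet if win else -bet
            prob *= 0.4 if win else 0.6
        if wealth >= 6:
            p += prob
    return p

p = success_probability_bet_one()  # only the all-wins path reaches $6
```

This policy succeeds only when all four plays are won, giving probability $0.4^{4}=0.0256$; the optimal policy produced by the recursion developed below does markedly better (about 0.198), which is the point of the stochastic dynamic programming treatment.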
Formal background Consider a discrete system defined on $n$ stages in which each stage $t=1,\ldots ,n$ is characterized by • an initial state $s_{t}\in S_{t}$, where $S_{t}$ is the set of feasible states at the beginning of stage $t$; • a decision variable $x_{t}\in X_{t}$, where $X_{t}$ is the set of feasible actions at stage $t$ – note that $X_{t}$ may be a function of the initial state $s_{t}$; • an immediate cost/reward function $p_{t}(s_{t},x_{t})$, representing the cost/reward at stage $t$ if $s_{t}$ is the initial state and $x_{t}$ the action selected; • a state transition function $g_{t}(s_{t},x_{t})$ that leads the system towards state $s_{t+1}=g_{t}(s_{t},x_{t})$. Let $f_{t}(s_{t})$ represent the optimal cost/reward obtained by following an optimal policy over stages $t,t+1,\ldots ,n$. Without loss of generality, in what follows we will consider a reward maximisation setting. In deterministic dynamic programming one usually deals with functional equations taking the following structure $f_{t}(s_{t})=\max _{x_{t}\in X_{t}}\{p_{t}(s_{t},x_{t})+f_{t+1}(s_{t+1})\}$ where $s_{t+1}=g_{t}(s_{t},x_{t})$ and the boundary condition of the system is $f_{n}(s_{n})=\max _{x_{n}\in X_{n}}\{p_{n}(s_{n},x_{n})\}.$ The aim is to determine the set of optimal actions that maximise $f_{1}(s_{1})$. Given the current state $s_{t}$ and the current action $x_{t}$, we know with certainty the reward secured during the current stage and – thanks to the state transition function $g_{t}$ – the future state towards which the system transitions. In practice, however, even if we know the state of the system at the beginning of the current stage as well as the decision taken, the state of the system at the beginning of the next stage and the current period reward are often random variables that can be observed only at the end of the current stage. Stochastic dynamic programming deals with problems in which the current period reward and/or the next period state are random, i.e.
with multi-stage stochastic systems. The decision maker's goal is to maximise expected (discounted) reward over a given planning horizon. In their most general form, stochastic dynamic programs deal with functional equations taking the following structure $f_{t}(s_{t})=\max _{x_{t}\in X_{t}(s_{t})}\left\{({\text{expected reward during stage }}t\mid s_{t},x_{t})+\alpha \sum _{s_{t+1}}\Pr(s_{t+1}\mid s_{t},x_{t})f_{t+1}(s_{t+1})\right\}$ where • $f_{t}(s_{t})$ is the maximum expected reward that can be attained during stages $t,t+1,\ldots ,n$, given state $s_{t}$ at the beginning of stage $t$; • $x_{t}$ belongs to the set $X_{t}(s_{t})$ of feasible actions at stage $t$ given initial state $s_{t}$; • $\alpha $ is the discount factor; • $\Pr(s_{t+1}\mid s_{t},x_{t})$ is the conditional probability that the state at the end of stage $t$ is $s_{t+1}$ given current state $s_{t}$ and selected action $x_{t}$. Markov decision processes represent a special class of stochastic dynamic programs in which the underlying stochastic process is a stationary process that features the Markov property. Gambling game as a stochastic dynamic program Gambling game can be formulated as a Stochastic Dynamic Program as follows: there are $n=4$ games (i.e. stages) in the planning horizon • the state $s$ in period $t$ represents the initial wealth at the beginning of period $t$; • the action given state $s$ in period $t$ is the bet amount $b$; • the transition probability $p_{i,j}^{a}$ from state $i$ to state $j$ when action $a$ is taken in state $i$ is easily derived from the probability of winning (0.4) or losing (0.6) a game. Let $f_{t}(s)$ be the probability that, by the end of game 4, the gambler has at least $6, given that she has $$s$ at the beginning of game $t$. • the immediate profit incurred if action $b$ is taken in state $s$ is given by the expected value $p_{t}(s,b)=0.4f_{t+1}(s+b)+0.6f_{t+1}(s-b)$. 
To derive the functional equation, define $b_{t}(s)$ as a bet that attains $f_{t}(s)$; then, at the beginning of game $t=4$, • if $s<3$ it is impossible to attain the goal, i.e. $f_{4}(s)=0$ for $s<3$; • if $s\geq 6$ the goal is attained, i.e. $f_{4}(s)=1$ for $s\geq 6$; • if $3\leq s\leq 5$ the gambler should bet enough to attain the goal, i.e. $f_{4}(s)=0.4$ for $3\leq s\leq 5$. For $t<4$ the functional equation is $f_{t}(s)=\max _{b_{t}(s)}\{0.4f_{t+1}(s+b)+0.6f_{t+1}(s-b)\}$, where $b_{t}(s)$ ranges in $0,...,s$; the aim is to find $f_{1}(2)$. Given the functional equation, an optimal betting policy can be obtained via forward recursion or backward recursion algorithms, as outlined below. Solution methods Stochastic dynamic programs can be solved to optimality by using backward recursion or forward recursion algorithms. Memoization is typically employed to enhance performance. However, like deterministic dynamic programming, its stochastic variant also suffers from the curse of dimensionality. For this reason, approximate solution methods are typically employed in practical applications. Backward recursion Given a bounded state space, backward recursion (Bertsekas 2000) begins by tabulating $f_{n}(k)$ for every possible state $k$ belonging to the final stage $n$. Once these values are tabulated, together with the associated optimal state-dependent actions $x_{n}(k)$, it is possible to move to stage $n-1$ and tabulate $f_{n-1}(k)$ for all possible states belonging to stage $n-1$. The process continues by considering in a backward fashion all remaining stages up to the first one. Once this tabulation process is complete, $f_{1}(s)$ – the value of an optimal policy given initial state $s$ – as well as the associated optimal action $x_{1}(s)$ can be easily retrieved from the table.
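The functional equation and boundary conditions above translate directly into a memoised backward recursion. The sketch below treats the end of the horizon as a terminal stage $t=5$, which is equivalent to the boundary conditions on $f_{4}$; the variable names are illustrative:

```python
from functools import lru_cache

N, GOAL, P_WIN = 4, 6, 0.4   # plays, target wealth, win probability

@lru_cache(maxsize=None)
def f(t, s):
    """Maximum probability of finishing with at least GOAL dollars,
    holding wealth s at the beginning of stage t."""
    if t == N + 1:                       # horizon reached: did we make it?
        return 1.0 if s >= GOAL else 0.0
    return max(P_WIN * f(t + 1, s + b) + (1 - P_WIN) * f(t + 1, s - b)
               for b in range(s + 1))    # feasible bets b = 0, ..., s

value = f(1, 2)  # optimal success probability starting with $2
```

Backward recursion as implemented here tabulates every state it reaches; the forward-recursion walkthrough that follows instead touches only the states actually needed for $f_{1}(2)$.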
Since the computation proceeds in a backward fashion, it is clear that backward recursion may lead to computation of a large number of states that are not necessary for the computation of $f_{1}(s)$. Forward recursion Given the initial state $s$ of the system at the beginning of period 1, forward recursion (Bertsekas 2000) computes $f_{1}(s)$ by progressively expanding the functional equation (forward pass). This involves recursive calls for all $f_{t+1}(\cdot ),f_{t+2}(\cdot ),\ldots $ that are necessary for computing a given $f_{t}(\cdot )$. The value of an optimal policy and its structure are then retrieved via a backward pass in which these suspended recursive calls are resolved. A key difference from backward recursion is the fact that $f_{t}$ is computed only for states that are relevant for the computation of $f_{1}(s)$. Memoization is employed to avoid recomputation of states that have already been considered. Example: Gambling game We shall illustrate forward recursion in the context of the Gambling game instance previously discussed. We begin the forward pass by considering $f_{1}(2)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 1,2,3,4}}\\\hline 0&0.4f_{2}(2+0)+0.6f_{2}(2-0)\\1&0.4f_{2}(2+1)+0.6f_{2}(2-1)\\2&0.4f_{2}(2+2)+0.6f_{2}(2-2)\\\end{array}}\right.$ At this point we have not yet computed $f_{2}(4),f_{2}(3),f_{2}(2),f_{2}(1),f_{2}(0)$, which are needed to compute $f_{1}(2)$; we proceed and compute these items. Note that $f_{2}(2+0)=f_{2}(2-0)=f_{2}(2)$, therefore one can leverage memoization and perform the necessary computations only once.
Computation of $f_{2}(4),f_{2}(3),f_{2}(2),f_{2}(1),f_{2}(0)$ $f_{2}(0)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(0+0)+0.6f_{3}(0-0)\\\end{array}}\right.$ $f_{2}(1)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(1+0)+0.6f_{3}(1-0)\\1&0.4f_{3}(1+1)+0.6f_{3}(1-1)\\\end{array}}\right.$ $f_{2}(2)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(2+0)+0.6f_{3}(2-0)\\1&0.4f_{3}(2+1)+0.6f_{3}(2-1)\\2&0.4f_{3}(2+2)+0.6f_{3}(2-2)\\\end{array}}\right.$ $f_{2}(3)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(3+0)+0.6f_{3}(3-0)\\1&0.4f_{3}(3+1)+0.6f_{3}(3-1)\\2&0.4f_{3}(3+2)+0.6f_{3}(3-2)\\3&0.4f_{3}(3+3)+0.6f_{3}(3-3)\\\end{array}}\right.$ $f_{2}(4)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 2,3,4}}\\\hline 0&0.4f_{3}(4+0)+0.6f_{3}(4-0)\\1&0.4f_{3}(4+1)+0.6f_{3}(4-1)\\2&0.4f_{3}(4+2)+0.6f_{3}(4-2)\end{array}}\right.$ We have now computed $f_{2}(k)$ for all $k$ that are needed to compute $f_{1}(2)$. However, this has led to additional suspended recursions involving $f_{3}(4),f_{3}(3),f_{3}(2),f_{3}(1),f_{3}(0)$. We proceed and compute these values.
Computation of $f_{3}(5),f_{3}(4),f_{3}(3),f_{3}(2),f_{3}(1),f_{3}(0)$ $f_{3}(0)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(0+0)+0.6f_{4}(0-0)\\\end{array}}\right.$ $f_{3}(1)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(1+0)+0.6f_{4}(1-0)\\1&0.4f_{4}(1+1)+0.6f_{4}(1-1)\\\end{array}}\right.$ $f_{3}(2)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(2+0)+0.6f_{4}(2-0)\\1&0.4f_{4}(2+1)+0.6f_{4}(2-1)\\2&0.4f_{4}(2+2)+0.6f_{4}(2-2)\\\end{array}}\right.$ $f_{3}(3)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(3+0)+0.6f_{4}(3-0)\\1&0.4f_{4}(3+1)+0.6f_{4}(3-1)\\2&0.4f_{4}(3+2)+0.6f_{4}(3-2)\\3&0.4f_{4}(3+3)+0.6f_{4}(3-3)\\\end{array}}\right.$ $f_{3}(4)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(4+0)+0.6f_{4}(4-0)\\1&0.4f_{4}(4+1)+0.6f_{4}(4-1)\\2&0.4f_{4}(4+2)+0.6f_{4}(4-2)\end{array}}\right.$ $f_{3}(5)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4f_{4}(5+0)+0.6f_{4}(5-0)\\1&0.4f_{4}(5+1)+0.6f_{4}(5-1)\end{array}}\right.$ Since stage 4 is the last stage in our system, the values $f_{4}(\cdot )$ represent boundary conditions that are easily computed as follows. 
Boundary conditions ${\begin{array}{ll}f_{4}(0)=0&b_{4}(0)=\{0\}\\f_{4}(1)=0&b_{4}(1)=\{0,1\}\\f_{4}(2)=0&b_{4}(2)=\{0,1,2\}\\f_{4}(3)=0.4&b_{4}(3)=\{3\}\\f_{4}(4)=0.4&b_{4}(4)=\{2,3,4\}\\f_{4}(5)=0.4&b_{4}(5)=\{1,2,3,4,5\}\\f_{4}(d)=1&b_{4}(d)=\{0,\ldots ,d-6\}{\text{ for }}d\geq 6\end{array}}$ At this point it is possible to proceed and recover the optimal policy and its value via a backward pass involving, at first, stage 3. Backward pass involving $f_{3}(\cdot )$ $f_{3}(0)=\max \left\{{\begin{array}{rr}b&{\text{success probability in periods 3,4}}\\\hline 0&0.4(0)+0.6(0)=0\\\end{array}}\right.$ $f_{3}(1)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0&\leftarrow b_{3}(1)=0\\1&0.4(0)+0.6(0)=0&\leftarrow b_{3}(1)=1\\\end{array}}\right.$ $f_{3}(2)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0\\1&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{3}(2)=1\\2&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{3}(2)=2\\\end{array}}\right.$ $f_{3}(3)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(3)=0\\1&0.4(0.4)+0.6(0)=0.16\\2&0.4(0.4)+0.6(0)=0.16\\3&0.4(1)+0.6(0)=0.4&\leftarrow b_{3}(3)=3\\\end{array}}\right.$ $f_{3}(4)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(4)=0\\1&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{3}(4)=1\\2&0.4(1)+0.6(0)=0.4&\leftarrow b_{3}(4)=2\\\end{array}}\right.$ $f_{3}(5)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4\\1&0.4(1)+0.6(0.4)=0.64&\leftarrow b_{3}(5)=1\\\end{array}}\right.$ and, then, stage 2. 
Backward pass involving $f_{2}(\cdot )$ $f_{2}(0)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0&\leftarrow b_{2}(0)=0\\\end{array}}\right.$ $f_{2}(1)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0)+0.6(0)=0\\1&0.4(0.16)+0.6(0)=0.064&\leftarrow b_{2}(1)=1\\\end{array}}\right.$ $f_{2}(2)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.16)+0.6(0.16)=0.16&\leftarrow b_{2}(2)=0\\1&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{2}(2)=1\\2&0.4(0.4)+0.6(0)=0.16&\leftarrow b_{2}(2)=2\\\end{array}}\right.$ $f_{2}(3)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4&\leftarrow b_{2}(3)=0\\1&0.4(0.4)+0.6(0.16)=0.256\\2&0.4(0.64)+0.6(0)=0.256\\3&0.4(1)+0.6(0)=0.4&\leftarrow b_{2}(3)=3\\\end{array}}\right.$ $f_{2}(4)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.4)+0.6(0.4)=0.4\\1&0.4(0.64)+0.6(0.4)=0.496&\leftarrow b_{2}(4)=1\\2&0.4(1)+0.6(0.16)=0.496&\leftarrow b_{2}(4)=2\\\end{array}}\right.$ We finally recover the value $f_{1}(2)$ of an optimal policy $f_{1}(2)=\max \left\{{\begin{array}{rrr}b&{\text{success probability in periods 1,2,3,4}}&{\mbox{max}}\\\hline 0&0.4(0.16)+0.6(0.16)=0.16\\1&0.4(0.4)+0.6(0.064)=0.1984&\leftarrow b_{1}(2)=1\\2&0.4(0.496)+0.6(0)=0.1984&\leftarrow b_{1}(2)=2\\\end{array}}\right.$ This is the optimal policy that has been previously illustrated. Note that there are multiple optimal policies leading to the same optimal value $f_{1}(2)=0.1984$; for instance, in the first game one may either bet $1 or $2. Python implementation. The following is a complete Python implementation of this example. 
from typing import List, Tuple
import functools

class memoize:
    def __init__(self, func):
        self.func = func
        self.memoized = {}
        self.method_cache = {}

    def __call__(self, *args):
        return self.cache_get(self.memoized, args,
                              lambda: self.func(*args))

    def __get__(self, obj, objtype):
        return self.cache_get(self.method_cache, obj,
                              lambda: self.__class__(functools.partial(self.func, obj)))

    def cache_get(self, cache, key, func):
        try:
            return cache[key]
        except KeyError:
            cache[key] = func()
        return cache[key]

    def reset(self):
        self.memoized = {}
        self.method_cache = {}

class State:
    '''the state of the gambler's ruin problem'''

    def __init__(self, t: int, wealth: float):
        '''state constructor

        Arguments:
            t {int} -- time period
            wealth {float} -- initial wealth
        '''
        self.t, self.wealth = t, wealth

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __str__(self):
        return str(self.t) + " " + str(self.wealth)

    def __hash__(self):
        return hash(str(self))

class GamblersRuin:
    def __init__(self, bettingHorizon: int, targetWealth: float,
                 pmf: List[List[Tuple[int, float]]]):
        '''the gambler's ruin problem

        Arguments:
            bettingHorizon {int} -- betting horizon
            targetWealth {float} -- target wealth
            pmf {List[List[Tuple[int, float]]]} -- probability mass function
        '''
        # initialize instance variables
        self.bettingHorizon, self.targetWealth, self.pmf = bettingHorizon, targetWealth, pmf

        # lambdas
        self.ag = lambda s: [i for i in range(0, min(self.targetWealth//2, s.wealth) + 1)]  # action generator
        self.st = lambda s, a, r: State(s.t + 1, s.wealth - a + a*r)                        # state transition
        self.iv = lambda s, a, r: 1 if s.wealth - a + a*r >= self.targetWealth else 0       # immediate value function

        self.cache_actions = {}  # cache with optimal state/action pairs

    def f(self, wealth: float) -> float:
        s = State(0, wealth)
        return self._f(s)

    def q(self, t: int, wealth: float) -> float:
        s = State(t, wealth)
        return self.cache_actions[str(s)]

    @memoize
    def _f(self, s: State) -> float:
        # Forward recursion
        v = max([sum([p[1]*(self._f(self.st(s, a, p[0]))
                            if s.t < self.bettingHorizon - 1
                            else self.iv(s, a, p[0]))      # future value
                      for p in self.pmf[s.t]])             # random variable realisations
                 for a in self.ag(s)])                     # actions

        opt_a = lambda a: sum([p[1]*(self._f(self.st(s, a, p[0]))
                                     if s.t < self.bettingHorizon - 1
                                     else self.iv(s, a, p[0]))
                               for p in self.pmf[s.t]]) == v
        q = [k for k in filter(opt_a, self.ag(s))]              # retrieve best action list
        self.cache_actions[str(s)] = q[0] if bool(q) else None  # store an action in dictionary
        return v                                                # return value

instance = {"bettingHorizon": 4, "targetWealth": 6,
            "pmf": [[(0, 0.6), (2, 0.4)] for i in range(0, 4)]}
gr, initial_wealth = GamblersRuin(**instance), 2

# f_1(x) is gambler's probability of attaining $targetWealth at the end of bettingHorizon
print("f_1("+str(initial_wealth)+"): " + str(gr.f(initial_wealth)))

# Recover optimal action for period 2 when initial wealth at the beginning of period 2 is $1.
t, initial_wealth = 1, 1
print("b_"+str(t+1)+"("+str(initial_wealth)+"): " + str(gr.q(t, initial_wealth)))

Java implementation. GamblersRuin.java is a standalone Java 8 implementation of the above example. Approximate dynamic programming An introduction to approximate dynamic programming is provided by (Powell 2009). Further reading • Bellman, R. (1957), Dynamic Programming, Princeton University Press, ISBN 978-0-486-42809-3. Dover paperback edition (2003). • Ross, S. M.; Birnbaum, Z. W.; Lukacs, E. (1983), Introduction to Stochastic Dynamic Programming, Elsevier, ISBN 978-0-12-598420-1. • Bertsekas, D. P. (2000), Dynamic Programming and Optimal Control (2nd ed.), Athena Scientific, ISBN 978-1-886529-09-0. In two volumes. • Powell, W. B. 
(2009), "What you should know about approximate dynamic programming", Naval Research Logistics, 56 (1): 239–249, CiteSeerX 10.1.1.150.1854, doi:10.1002/nav.20347, S2CID 7134937 See also • Control theory – Branch of engineering and mathematics • Dynamic programming – Problem optimization method • Reinforcement learning – Field of machine learning • Stochastic control – Probabilistic optimal control • Stochastic process – Collection of random variables • Stochastic programming – Framework for modeling optimization problems that involve uncertainty References 1. This problem is adapted from W. L. Winston, Operations Research: Applications and Algorithms (7th Edition), Duxbury Press, 2003, chap. 19, example 3.
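As a quick consistency check of the worked example above, the stage-4 boundary values $f_{4}(d)$ can be reproduced directly (a standalone sketch, independent of the class-based implementation; here the bet b simply ranges over 0..d, and `f4` is a name introduced for illustration):

```python
def f4(d):
    # With one game left, betting b wins with probability 0.4 (gaining b)
    # and loses with probability 0.6 (forfeiting b); success means reaching $6.
    return max(0.4 * (d + b >= 6) + 0.6 * (d - b >= 6) for b in range(d + 1))

table = {d: f4(d) for d in range(8)}
# reproduces f_4(0)=f_4(1)=f_4(2)=0, f_4(3)=f_4(4)=f_4(5)=0.4, f_4(d)=1 for d>=6
```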
Wikipedia
Stochastic Eulerian Lagrangian method In computational fluid dynamics, the Stochastic Eulerian Lagrangian Method (SELM)[1] is an approach to capture essential features of fluid-structure interactions subject to thermal fluctuations while introducing approximations which facilitate analysis and the development of tractable numerical methods. SELM is a hybrid approach utilizing an Eulerian description for the continuum hydrodynamic fields and a Lagrangian description for elastic structures. Thermal fluctuations are introduced through stochastic driving fields. Approaches are also introduced for discretizing the stochastic fields of the SPDEs to obtain numerical methods that take into account discretization artifacts and maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics.[1] The SELM fluid-structure equations typically used are $\rho {\frac {d{u}}{d{t}}}=\mu \,\Delta u-\nabla p+\Lambda [\Upsilon (V-\Gamma {u})]+\lambda +f_{\mathrm {thm} }(x,t)$ $m{\frac {d{V}}{d{t}}}=-\Upsilon (V-\Gamma {u})-\nabla \Phi [X]+\xi +F_{\mathrm {thm} }$ ${\frac {d{X}}{d{t}}}=V.$ The pressure p is determined by the incompressibility condition for the fluid $\nabla \cdot u=0.\,$ The $\Gamma ,\Lambda $ operators couple the Eulerian and Lagrangian degrees of freedom. The $X,V$ denote the composite vectors of the full set of Lagrangian coordinates for the structures. The $\Phi $ is the potential energy for a configuration of the structures. The $f_{\mathrm {thm} },F_{\mathrm {thm} }$ are stochastic driving fields accounting for thermal fluctuations. The $\lambda ,\xi $ are Lagrange multipliers imposing constraints, such as local rigid body deformations. 
To ensure that dissipation occurs only through the $\Upsilon $ coupling and not as a consequence of the interconversion by the operators $\Gamma ,\Lambda $, the following adjoint condition is imposed $\Gamma =\Lambda ^{T}.$ Thermal fluctuations are introduced through Gaussian random fields with mean zero and the covariance structure $\langle f_{\mathrm {thm} }(s)f_{\mathrm {thm} }^{T}(t)\rangle =-\left(2k_{B}{T}\right)\left(\mu \Delta -\Lambda \Upsilon \Gamma \right)\delta (t-s).$ $\langle F_{\mathrm {thm} }(s)F_{\mathrm {thm} }^{T}(t)\rangle =2k_{B}{T}\Upsilon \delta (t-s).$ $\langle f_{\mathrm {thm} }(s)F_{\mathrm {thm} }^{T}(t)\rangle =-2k_{B}{T}\Lambda \Upsilon \delta (t-s).$ To obtain simplified descriptions and efficient numerical methods, approximations in various limiting physical regimes have been considered to remove dynamics on small time-scales or inertial degrees of freedom. In different limiting regimes, the SELM framework can be related to the immersed boundary method, accelerated Stokesian dynamics, and the arbitrary Lagrangian–Eulerian method. The SELM approach has been shown to yield stochastic fluid-structure dynamics that are consistent with statistical mechanics. In particular, the SELM dynamics have been shown to satisfy detailed balance for the Gibbs–Boltzmann ensemble. Different types of coupling operators have also been introduced, allowing for descriptions of structures involving generalized coordinates and additional translational or rotational degrees of freedom. For numerically discretizing the SELM SPDEs, general methods were also introduced for deriving numerical stochastic fields for SPDEs that take discretization artifacts into account to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics.[1] See also • Immersed boundary method • Stokesian dynamics • Volume of fluid method • Level-set method • Marker-and-cell method References 1. Atzberger, Paul (2011). 
"Stochastic Eulerian Lagrangian Methods for Fluid Structure Interactions with Thermal Fluctuations". Journal of Computational Physics. 230 (8): 2821–2837. arXiv:1009.5648. Bibcode:2011JCoPh.230.2821A. doi:10.1016/j.jcp.2010.12.028. S2CID 6067032. 1. Atzberger, P.J.; Kramer, P.R.; Peskin, C.S. (2007). "A Stochastic Immersed Boundary Method for Fluid-Structure Dynamics at Microscopic Length Scales". Journal of Computational Physics. 224 (2): 1255–92. arXiv:0910.5748. Bibcode:2007JCoPh.224.1255A. doi:10.1016/j.jcp.2006.11.015. S2CID 17977915. 2. Peskin, C.S. (2002). "The immersed boundary method". Acta Numerica. 11: 479–517. doi:10.1017/S0962492902000077. S2CID 53517954. Software : Numerical Codes • Mango-Selm : Stochastic Eulerian Lagrangian and Immersed Boundary Methods, 3D Simulation Package, (Python interface, LAMMPS MD Integration), P. Atzberger, UCSB
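The adjoint condition $\Gamma =\Lambda ^{T}$ above can be illustrated with a toy discretization (a minimal NumPy sketch under stated assumptions: the grid, the marker positions, and the smoothing kernel are illustrative choices, not the kernels prescribed by the method):

```python
import numpy as np

# Toy 1D periodic grid with a few Lagrangian marker points.
N = 32
h = 1.0 / N
grid = np.arange(N) * h
markers = np.array([0.13, 0.55, 0.71])

def weights(points):
    # periodic distance from each marker to each grid node
    r = np.abs(points[:, None] - grid[None, :])
    r = np.minimum(r, 1.0 - r)
    # generic smooth bump of support 3h (illustrative, not Peskin's delta)
    w = np.where(r < 3 * h, (1.0 + np.cos(np.pi * r / (3 * h))) / 2.0, 0.0)
    return w / w.sum(axis=1, keepdims=True)  # normalize interpolation rows

Gamma = weights(markers)  # interpolation operator: grid field -> marker values
Lam = Gamma.T             # spreading operator, chosen so that Gamma = Lam^T

u = np.sin(2 * np.pi * grid)     # a smooth Eulerian field
F = np.array([1.0, -2.0, 0.5])   # Lagrangian forces

# Adjointness: <Lam F, u>_grid equals <F, Gamma u>_markers (up to grid-spacing
# weights), so work exchanged through the coupling is accounted consistently.
lhs = (Lam @ F) @ u
rhs = F @ (Gamma @ u)
```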
Stochastic Gronwall inequality The stochastic Gronwall inequality is a generalization of Gronwall's inequality. It has been used for proving the well-posedness of path-dependent stochastic differential equations under local monotonicity and coercivity assumptions with respect to the supremum norm.[1][2] Statement Let $X(t),\,t\geq 0$ be a non-negative right-continuous $({\mathcal {F}}_{t})_{t\geq 0}$-adapted process. Assume that $A:[0,\infty )\to [0,\infty )$ is a deterministic non-decreasing càdlàg function with $A(0)=0$ and let $H(t),\,t\geq 0$ be a non-decreasing and càdlàg adapted process starting from $H(0)\geq 0$. Further, let $M(t),\,t\geq 0$ be an $({\mathcal {F}}_{t})_{t\geq 0}$-local martingale with $M(0)=0$ and càdlàg paths. Assume that for all $t\geq 0$, $X(t)\leq \int _{0}^{t}X^{*}(u^{-})\,dA(u)+M(t)+H(t),$ where $X^{*}(u):=\sup _{r\in [0,u]}X(r)$, and define $c_{p}={\frac {p^{-p}}{1-p}}$. Then the following estimates hold for $p\in (0,1)$ and $T>0$:[1][2] • If $\mathbb {E} {\big (}H(T)^{p}{\big )}<\infty $ and $H$ is predictable, then $\mathbb {E} \left[\left(X^{*}(T)\right)^{p}{\Big \vert }{\mathcal {F}}_{0}\right]\leq {\frac {c_{p}}{p}}\mathbb {E} \left[(H(T))^{p}{\big \vert }{\mathcal {F}}_{0}\right]\exp \left\lbrace c_{p}^{1/p}A(T)\right\rbrace $; • If $\mathbb {E} {\big (}H(T)^{p}{\big )}<\infty $ and $M$ has no negative jumps, then $\mathbb {E} \left[\left(X^{*}(T)\right)^{p}{\Big \vert }{\mathcal {F}}_{0}\right]\leq {\frac {c_{p}+1}{p}}\mathbb {E} \left[(H(T))^{p}{\big \vert }{\mathcal {F}}_{0}\right]\exp \left\lbrace (c_{p}+1)^{1/p}A(T)\right\rbrace $; • If $\mathbb {E} H(T)<\infty ,$ then $\displaystyle {\mathbb {E} \left[\left(X^{*}(T)\right)^{p}{\Big \vert }{\mathcal {F}}_{0}\right]\leq {\frac {c_{p}}{p}}\left(\mathbb {E} \left[H(T){\big \vert }{\mathcal {F}}_{0}\right]\right)^{p}\exp \left\lbrace c_{p}^{1/p}A(T)\right\rbrace }$. Proof The result is proven using Lenglart's inequality.[1] References 1. Mehri, Sima; Scheutzow, Michael (2021). 
"A stochastic Gronwall lemma and well-posedness of path-dependent SDEs driven by martingale noise". Latin American Journal of Probability and Mathematical Statistics. 18: 193-209. doi:10.30757/ALEA.v18-09. S2CID 201660248. 2. von Renesse, Max; Scheutzow, Michael (2010). "Existence and uniqueness of solutions of stochastic functional differential equations". Random Oper. Stoch. Equ. 18 (3): 267-284. arXiv:0812.1726. doi:10.1515/rose.2010.015. S2CID 18595968.
Stochastic block model The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social networks by Paul W. Holland et al.[1] The stochastic block model is important in statistics, machine learning, and network science, where it serves as a useful benchmark for the task of recovering community structure in graph data. 
Definition The stochastic block model takes the following parameters: • The number $n$ of vertices; • a partition of the vertex set $\{1,\ldots ,n\}$ into disjoint subsets $C_{1},\ldots ,C_{r}$, called communities; • a symmetric $r\times r$ matrix $P$ of edge probabilities. The edge set is then sampled at random as follows: any two vertices $u\in C_{i}$ and $v\in C_{j}$ are connected by an edge with probability $P_{ij}$. An example problem is: given a graph with $n$ vertices, where the edges are sampled as described, recover the groups $C_{1},\ldots ,C_{r}$. Special cases If the probability matrix is a constant, in the sense that $P_{ij}=p$ for all $i,j$, then the result is the Erdős–Rényi model $G(n,p)$. This case is degenerate—the partition into communities becomes irrelevant—but it illustrates a close relationship to the Erdős–Rényi model. The planted partition model is the special case that the values of the probability matrix $P$ are a constant $p$ on the diagonal and another constant $q$ off the diagonal. Thus two vertices within the same community share an edge with probability $p$, while two vertices in different communities share an edge with probability $q$. Sometimes it is this restricted model that is called the stochastic block model. The case where $p>q$ is called an assortative model, while the case $p<q$ is called disassortative. Returning to the general stochastic block model, a model is called strongly assortative if $P_{ii}>P_{jk}$ whenever $j\neq k$: all diagonal entries dominate all off-diagonal entries. A model is called weakly assortative if $P_{ii}>P_{ij}$ whenever $i\neq j$: each diagonal entry is only required to dominate the rest of its own row and column.[2] Disassortative forms of this terminology exist, by reversing all inequalities. 
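The edge-sampling procedure in the definition can be sketched directly (a minimal NumPy sketch; `sample_sbm` and its signature are names introduced here for illustration, not a standard library routine):

```python
import numpy as np

def sample_sbm(sizes, P, rng=None):
    """Sample an undirected graph from a stochastic block model.

    sizes -- community sizes, e.g. [50, 50] (the partition C_1, ..., C_r)
    P     -- symmetric r x r matrix of edge probabilities
    Returns the adjacency matrix and the community label of each vertex.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    labels = np.repeat(np.arange(len(sizes)), sizes)   # community of each vertex
    n = labels.size
    probs = P[labels[:, None], labels[None, :]]        # P[label(u), label(v)]
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # sample each pair once
    A = (upper | upper.T).astype(int)                  # symmetrize, no self-loops
    return A, labels

# planted partition model: p = 0.9 within communities, q = 0.1 between
A, z = sample_sbm([4, 4], np.array([[0.9, 0.1], [0.1, 0.9]]))
```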
For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.[2] Typical statistical tasks Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery. Detection The goal of detection algorithms is simply to determine, given a sampled graph, whether the graph has latent community structure. More precisely, a graph might be generated, with some known prior probability, from a known stochastic block model, and otherwise from a similar Erdős–Rényi model. The algorithmic task is to correctly identify which of these two underlying models generated the graph.[3] Partial recovery In partial recovery, the goal is to approximately determine the latent partition into communities, in the sense of finding a partition that is correlated with the true partition significantly better than a random guess.[4] Exact recovery In exact recovery, the goal is to recover the latent partition into communities exactly. The community sizes and probability matrix may be known[5] or unknown.[6] Statistical lower bounds and threshold behavior Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds.[7][3][8] Suppose that we allow the size $n$ of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate as $n$ increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used. 
For partial recovery, the appropriate scaling is to take $P_{ij}={\tilde {P}}_{ij}/n$ for fixed ${\tilde {P}}$, resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with probability matrix $P=\left({\begin{array}{cc}{\tilde {p}}/n&{\tilde {q}}/n\\{\tilde {q}}/n&{\tilde {p}}/n\end{array}}\right),$ partial recovery is feasible[4] with probability $1-o(1)$ whenever $({\tilde {p}}-{\tilde {q}})^{2}>2({\tilde {p}}+{\tilde {q}})$, whereas any estimator fails[3] partial recovery with probability $1-o(1)$ whenever $({\tilde {p}}-{\tilde {q}})^{2}<2({\tilde {p}}+{\tilde {q}})$. For exact recovery, the appropriate scaling is to take $P_{ij}={\tilde {P}}_{ij}\log n/n$, resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model with $r$ equal-sized communities, the threshold lies at ${\sqrt {\tilde {p}}}-{\sqrt {\tilde {q}}}={\sqrt {r}}$. In fact, the exact recovery threshold is known for the fully general stochastic block model.[5] Algorithms In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection that is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case. However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms include spectral clustering of the vertices,[9][4][5][10] semidefinite programming,[2][8] forms of belief propagation,[7][11] and community detection[12] among others. Variants Several variants of the model exist. 
One minor tweak allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition.[5] More significant variants include the degree-corrected stochastic block model,[13] the hierarchical stochastic block model,[14] the geometric block model,[15] the censored block model and the mixed-membership block model.[16] Topic models The stochastic block model has been recognised as a topic model on bipartite networks.[17] In a network of documents and words, the stochastic block model can identify topics: groups of words with a similar meaning. Extensions to signed graphs Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights or equivalently using a difference of adjacency matrices of two stochastic block models.[18] DARPA/MIT/AWS Graph Challenge: streaming stochastic block partition GraphChallenge[19] encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field. Streaming stochastic block partition has been one of the challenges since 2017.[20] Spectral clustering has demonstrated outstanding performance compared to the original and even improved[21] base algorithm, matching its quality of clusters while being multiple orders of magnitude faster.[22][23] See also • blockmodeling • Girvan–Newman algorithm – Community detection algorithm • Lancichinetti–Fortunato–Radicchi benchmark – Algorithm for generating benchmark networks with communities References 1. Holland, Paul W; Laskey, Kathryn Blackmond; Leinhardt, Samuel (1983). "Stochastic blockmodels: First steps". 
Social Networks. 5 (2): 109–137. doi:10.1016/0378-8733(83)90021-7. ISSN 0378-8733. S2CID 34098453. Archived from the original on 2023-02-04. Retrieved 2021-06-16. 2. Amini, Arash A.; Levina, Elizaveta (June 2014). "On semidefinite relaxations for the block model". arXiv:1406.5647 [cs.LG]. 3. Mossel, Elchanan; Neeman, Joe; Sly, Allan (February 2012). "Stochastic Block Models and Reconstruction". arXiv:1202.1499 [math.PR]. 4. Massoulie, Laurent (November 2013). "Community detection thresholds and the weak Ramanujan property". arXiv:1311.3085 [cs.SI]. 5. Abbe, Emmanuel; Sandon, Colin (March 2015). "Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms". arXiv:1503.00609 [math.PR]. 6. Abbe, Emmanuel; Sandon, Colin (June 2015). "Recovering communities in the general stochastic block model without knowing the parameters". arXiv:1506.03729 [math.PR]. 7. Decelle, Aurelien; Krzakala, Florent; Moore, Cristopher; Zdeborová, Lenka (September 2011). "Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications". Physical Review E. 84 (6): 066106. arXiv:1109.3041. Bibcode:2011PhRvE..84f6106D. doi:10.1103/PhysRevE.84.066106. PMID 22304154. S2CID 15788070. 8. Abbe, Emmanuel; Bandeira, Afonso S.; Hall, Georgina (May 2014). "Exact Recovery in the Stochastic Block Model". arXiv:1405.3267 [cs.SI]. 9. Krzakala, Florent; Moore, Cristopher; Mossel, Elchanan; Neeman, Joe; Sly, Allan; Lenka, Lenka; Zhang, Pan (October 2013). "Spectral redemption in clustering sparse networks". Proceedings of the National Academy of Sciences. 110 (52): 20935–20940. arXiv:1306.5550. Bibcode:2013PNAS..11020935K. doi:10.1073/pnas.1312486110. PMC 3876200. PMID 24277835. 10. Lei, Jing; Rinaldo, Alessandro (February 2015). "Consistency of spectral clustering in stochastic block models". The Annals of Statistics. 43 (1): 215–237. arXiv:1312.2050. doi:10.1214/14-AOS1274. ISSN 0090-5364. S2CID 88519551. 11. 
Mossel, Elchanan; Neeman, Joe; Sly, Allan (September 2013). "Belief Propagation, Robust Reconstruction, and Optimal Recovery of Block Models". The Annals of Applied Probability. 26 (4): 2211–2256. arXiv:1309.1380. Bibcode:2013arXiv1309.1380M. doi:10.1214/15-AAP1145. S2CID 184446. 12. Fathi, Reza (April 2019). "Efficient Distributed Community Detection in the Stochastic Block Model". arXiv:1904.07494 [cs.DC]. 13. Karrer, Brian; Newman, Mark E J (2011). "Stochastic blockmodels and community structure in networks". Physical Review E. 83 (1): 016107. arXiv:1008.3926. Bibcode:2011PhRvE..83a6107K. doi:10.1103/PhysRevE.83.016107. PMID 21405744. S2CID 9068097. Archived from the original on 2023-02-04. Retrieved 2021-06-16. 14. Peixoto, Tiago (2014). "Hierarchical block structures and high-resolution model selection in large networks". Physical Review X. 4 (1): 011047. arXiv:1310.4377. Bibcode:2014PhRvX...4a1047P. doi:10.1103/PhysRevX.4.011047. S2CID 5841379. Archived from the original on 2021-06-24. Retrieved 2021-06-16. 15. Galhotra, Sainyam; Mazumdar, Arya; Pal, Soumyabrata; Saha, Barna (February 2018). "The Geometric Block Model". AAAI. 32. arXiv:1709.05510. doi:10.1609/aaai.v32i1.11905. S2CID 19152144. 16. Airoldi, Edoardo; Blei, David; Feinberg, Stephen; Xing, Eric (May 2007). "Mixed membership stochastic blockmodels". Journal of Machine Learning Research. 9: 1981–2014. arXiv:0705.4485. Bibcode:2007arXiv0705.4485A. PMC 3119541. PMID 21701698. 17. Martin Gerlach; Tiago Peixoto; Eduardo Altmann (2018). "A network approach to topic models". Science Advances. 4 (7): eaaq1360. arXiv:1708.01677. Bibcode:2018SciA....4.1360G. doi:10.1126/sciadv.aaq1360. PMC 6051742. PMID 30035215. 18. Alyson Fox; Geoffrey Sanders; Andrew Knyazev (2018). "Investigation of Spectral Clustering for Signed Graph Matrix Representations". 2018 IEEE High Performance extreme Computing Conference (HPEC). pp. 1–7. doi:10.1109/HPEC.2018.8547575. ISBN 978-1-5386-5989-2. OSTI 1476177. S2CID 54443034. 19. 
Archived 2023-02-04 at the Wayback Machine DARPA/MIT/AWS Graph Challenge 20. Archived 2023-02-04 at the Wayback Machine DARPA/MIT/AWS Graph Challenge Champions 21. A. J. Uppal; J. Choi; T. B. Rolinger; H. Howie Huang (2021). "Faster Stochastic Block Partition Using Aggressive Initial Merging, Compressed Representation, and Parallelism Control". 2021 IEEE High Performance Extreme Computing Conference (HPEC). pp. 1–7. doi:10.1109/HPEC49654.2021.9622836. ISBN 978-1-6654-2369-4. S2CID 244780210. 22. David Zhuzhunashvili; Andrew Knyazev (2017). "Preconditioned spectral clustering for stochastic block partition streaming graph challenge". 2017 IEEE High Performance Extreme Computing Conference (HPEC). arXiv:1708.07481. doi:10.1109/HPEC.2017.8091045. S2CID 19781504. 23. Lisa Durbeck; Peter Athanas (2020). "Incremental Streaming Graph Partitioning". 2020 IEEE High Performance Extreme Computing Conference (HPEC). pp. 1–8. doi:10.1109/HPEC43674.2020.9286181. ISBN 978-1-7281-9219-2. S2CID 229376193.
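As an illustration of the spectral clustering approach cited in the Algorithms section, the sign pattern of the second eigenvector of the adjacency matrix recovers two planted communities on a dense, strongly assortative instance (a toy sketch with illustrative parameters, not one of the cited challenge implementations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60                              # two planted communities of 30 vertices each
z = np.repeat([0, 1], n // 2)       # true labels
p, q = 0.9, 0.05                    # within / between edge probabilities
probs = np.where(z[:, None] == z[None, :], p, q)
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = (upper | upper.T).astype(float)

# The top eigenvector of A is roughly constant (Perron direction); the second
# eigenvector approximates a signed block indicator, so its signs give a guess.
vals, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
v2 = vecs[:, -2]                    # eigenvector of the second-largest eigenvalue
guess = (v2 > 0).astype(int)

# agreement with the planted partition, up to relabelling the two communities
agreement = max(np.mean(guess == z), np.mean(guess != z))
```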
Stochastic cellular automaton Stochastic cellular automata or probabilistic cellular automata (PCA) or random cellular automata or locally interacting Markov chains[1][2] are an important extension of cellular automata. Cellular automata are a discrete-time dynamical system of interacting entities, whose state is discrete. The state of the collection of entities is updated at each discrete time according to some simple homogeneous rule. All entities' states are updated in parallel or synchronously. Stochastic cellular automata are CA whose updating rule is stochastic: the new entities' states are chosen according to probability distributions. They are discrete-time random dynamical systems. Despite the simplicity of the updating rules, complex behaviour, such as self-organization, may emerge from the spatial interaction between the entities. As a mathematical object, a PCA may be considered in the framework of stochastic processes as an interacting particle system in discrete time. See [3] for a more detailed introduction. PCA as Markov stochastic processes As discrete-time Markov processes, PCA are defined on a product space $E=\prod _{k\in G}S_{k}$ (cartesian product) where $G$ is a finite or infinite graph, like $\mathbb {Z} $, and where $S_{k}$ is a finite space, like for instance $S_{k}=\{-1,+1\}$ or $S_{k}=\{0,1\}$. The transition probability has a product form $P(d\sigma |\eta )=\otimes _{k\in G}p_{k}(d\sigma _{k}|\eta )$ where $\eta \in E$ and $p_{k}(d\sigma _{k}|\eta )$ is a probability distribution on $S_{k}$. In general some locality is required: $p_{k}(d\sigma _{k}|\eta )=p_{k}(d\sigma _{k}|\eta _{V_{k}})$ where $\eta _{V_{k}}=(\eta _{j})_{j\in V_{k}}$ with ${V_{k}}$ a finite neighbourhood of k. See [4] for a more detailed introduction following the probability-theoretic point of view. Examples of stochastic cellular automaton Majority cellular automaton There is a version of the majority cellular automaton with probabilistic updating rules. 
See Toom's rule. Relation to lattice random fields PCA may be used to simulate the Ising model of ferromagnetism in statistical mechanics.[5] Some categories of models have been studied from a statistical mechanics point of view. Cellular Potts model There is a strong connection[6] between probabilistic cellular automata and the cellular Potts model, in particular when it is implemented in parallel. Non-Markovian generalization The Galves–Löcherbach model is an example of a generalized PCA with a non-Markovian aspect. References 1. Toom, A. L. (1978), Locally Interacting Systems and their Application in Biology: Proceedings of the School-Seminar on Markov Interaction Processes in Biology, held in Pushchino, March 1976, Lecture Notes in Mathematics, vol. 653, Springer-Verlag, Berlin-New York, ISBN 978-3-540-08450-1, MR 0479791 2. R. L. Dobrushin; V. I. Kri︠u︡kov; A. L. Toom (1978). Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis. ISBN 9780719022067. 3. Fernandez, R.; Louis, P.-Y.; Nardi, F. R. (2018). "Chapter 1: Overview: PCA Models and Issues". In Louis, P.-Y.; Nardi, F. R. (eds.). Probabilistic Cellular Automata. Springer. doi:10.1007/978-3-319-65558-1_1. ISBN 9783319655581. S2CID 64938352. 4. P.-Y. Louis PhD 5. Vichniac, G. (1984), "Simulating physics with cellular automata", Physica D, 10 (1–2): 96–115, Bibcode:1984PhyD...10...96V, doi:10.1016/0167-2789(84)90253-7. 6. Boas, Sonja E. M.; Jiang, Yi; Merks, Roeland M. H.; Prokopiou, Sotiris A.; Rens, Elisabeth G. (2018). "Chapter 18: Cellular Potts Model: Applications to Vasculogenesis and Angiogenesis". In Louis, P.-Y.; Nardi, F. R. (eds.). Probabilistic Cellular Automata. Springer. doi:10.1007/978-3-319-65558-1_18. hdl:1887/69811. ISBN 9783319655581. Further reading • Almeida, R. M.; Macau, E. E. N. (2010), "Stochastic cellular automata model for wildland fire spread dynamics", 9th Brazilian Conference on Dynamics, Control and their Applications, June 7–11, 2010, doi:10.1088/1742-6596/285/1/012038.
• Clarke, K. C.; Hoppen, S. (1997), "A self-modifying cellular automaton model of historical urbanization in the San Francisco Bay area" (PDF), Environment and Planning B: Planning and Design, 24 (2): 247–261, doi:10.1068/b240247, S2CID 40847078. • Mahajan, Meena Bhaskar (1992), Studies in language classes defined by different types of time-varying cellular automata, Ph.D. dissertation, Indian Institute of Technology Madras. • Nishio, Hidenosuke; Kobuchi, Youichi (1975), "Fault tolerant cellular spaces", Journal of Computer and System Sciences, 11 (2): 150–170, doi:10.1016/s0022-0000(75)80065-1, MR 0389442. • Smith, Alvy Ray, III (1972), "Real-time language recognition by one-dimensional cellular automata", Journal of Computer and System Sciences, 6 (3): 233–253, doi:10.1016/S0022-0000(72)80004-7, MR 0309383. • Louis, P.-Y.; Nardi, F. R., eds. (2018). Probabilistic Cellular Automata. Emergence, Complexity and Computation. Vol. 27. Springer. doi:10.1007/978-3-319-65558-1. hdl:2158/1090564. ISBN 9783319655581. • Agapie, A.; Andreica, A.; Giuclea, M. (2014), "Probabilistic Cellular Automata", Journal of Computational Biology, 21 (9): 699–708, doi:10.1089/cmb.2014.0074, PMC 4148062, PMID 24999557
Stochastic chains with memory of variable length Stochastic chains with memory of variable length are a family of stochastic chains of finite order in a finite alphabet in which, at each time step, only a finite suffix of the past, called the context, is needed to predict the next symbol. These models were introduced in the information theory literature by Jorma Rissanen in 1983,[1] as a universal tool for data compression, but have more recently been used to model data in different areas such as biology,[2] linguistics[3] and music.[4] Definition A stochastic chain with memory of variable length is a stochastic chain $(X_{n})_{n\in Z}$, taking values in a finite alphabet $A$, and characterized by a probabilistic context tree $(\tau ,p)$, such that • $\tau $ is the set of all contexts. A context $X_{n-l},\ldots ,X_{n-1}$, where $l$ is the length of the context, is a finite portion of the past $X_{-\infty },\ldots ,X_{n-1}$ that is relevant for predicting the next symbol $X_{n}$; • $p$ is a family of transition probabilities associated with each context. History The class of stochastic chains with memory of variable length was introduced by Jorma Rissanen in the article A Universal Data Compression System.[1] The class was popularized in the statistical and probabilistic community by P. Bühlmann and A. J. Wyner in 1999, in the article Variable Length Markov Chains. Named by Bühlmann and Wyner as “variable length Markov chains” (VLMC), these chains are also known as “variable-order Markov models" (VOM), “probabilistic suffix trees”[2] and “context tree models”.[5] The name “stochastic chains with memory of variable length” seems to have been introduced by Galves and Löcherbach, in 2008, in the article of the same name.[6] Examples Interrupted light source Consider a system consisting of a lamp, an observer and a door between the two. The lamp has two possible states: on, represented by 1, or off, represented by 0.
When the lamp is on, the observer may see the light through the door, depending on the state of the door at that time: open, 1, or closed, 0. The door's state is independent of the state of the lamp. Let $(X_{n})_{n\geq 0}$ be a Markov chain representing the state of the lamp, with values in $A=\{0,1\}$ and with probability transition matrix $p$. Also, let $(\xi _{n})_{n\geq 0}$ be a sequence of independent random variables representing the door's states, also taking values in $A$, independent of the chain $(X_{n})_{n\geq 0}$ and such that $\mathbb {P} (\xi _{n}=1)=1-\varepsilon $ where $0<\varepsilon <1$. Define a new sequence $(Z_{n})_{n\geq 0}$ such that $Z_{n}=X_{n}\xi _{n}$ for every $n\geq 0$. To determine the last instant at which the observer could see the lamp on, one has to identify the last instant $k$, with $k<n$, at which $Z_{k}=1$. Using a context tree it is possible to represent the past states of the sequence, showing which of them are relevant for identifying the next state. The stochastic chain $(Z_{n})_{n\in \mathbb {Z} }$ is then a chain with memory of variable length, taking values in $A$ and compatible with the probabilistic context tree $(\tau ,p)$, where $\tau =\{1,10,100,\cdots \}\cup \{0^{\infty }\}.$ Inferences in chains with variable length Given a sample $X_{l},\ldots ,X_{n}$, one can find the appropriate context tree using the following algorithms. The context algorithm In the article A Universal Data Compression System,[1] Rissanen introduced a consistent algorithm to estimate the probabilistic context tree that generates the data. The algorithm can be summarized in two steps: 1. Given the sample produced by a chain with memory of variable length, start with the maximal tree whose branches are all the candidate contexts for the sample; 2. The branches of this tree are then pruned until the smallest tree that is well adapted to the data is obtained.
Whether or not to shorten a context is decided through a given gain function, such as the log-likelihood ratio. Let $X_{0},\ldots ,X_{n-1}$ be a sample from a finite probabilistic context tree $(\tau ,p)$. For any sequence $x_{-j}^{-1}$ with $j\leq n$, denote by $N_{n}(x_{-j}^{-1})$ the number of occurrences of the sequence in the sample, i.e., $N_{n}(x_{-j}^{-1})=\sum _{t=0}^{n-j}\mathbf {1} \left\{X_{t}^{t+j-1}=x_{-j}^{-1}\right\}$ Rissanen first built a candidate context of maximal length, given by $X_{n-K(n)}^{n-1}$, where $K(n)=C\log {n}$ and $C$ is an arbitrary positive constant. The intuitive reason for the choice of $C\log {n}$ is the impossibility of estimating the probabilities of sequences of length greater than $\log {n}$ from a sample of size $n$. From there, Rissanen shortens the maximal candidate by successively pruning branches according to a sequence of tests based on the statistical likelihood ratio. In a more formal definition, if $\sum _{b\in A}N_{n}(x_{-k}^{-1}b)>0$, define the estimator of the transition probability $p$ by ${\hat {p}}_{n}(a\mid x_{-k}^{-1})={\frac {N_{n}(x_{-k}^{-1}a)}{\sum _{b\in A}N_{n}(x_{-k}^{-1}b)}}$ where $x_{-k}^{-1}a=(x_{-k},\ldots ,x_{-1},a)$. If $\sum _{b\in A}N_{n}(x_{-k}^{-1}b)\,=\,0$, define ${\hat {p}}_{n}(a\mid x_{-k}^{-1})\,=\,1/|A|$. For $i\geq 1$, define $\Lambda _{n}(x_{-i}^{-1})\,=\,2\,\sum _{y\in A}\sum _{a\in A}N_{n}(yx_{-i}^{-1}a)\log \left[{\frac {{\hat {p}}_{n}(a\mid x_{-i}^{-1}y)}{{\hat {p}}_{n}(a\mid x_{-i}^{-1})}}\right]\,$ where $yx_{-i}^{-1}=(y,x_{-i},\ldots ,x_{-1})$ and ${\hat {p}}_{n}(a\mid x_{-i}^{-1}y)={\frac {N_{n}(yx_{-i}^{-1}a)}{\sum _{b\in A}N_{n}(yx_{-i}^{-1}b)}}.$ Note that $\Lambda _{n}(x_{-i}^{-1})$ is the log-likelihood ratio for testing the consistency of the sample with the probabilistic context tree $(\tau ,p)$ against the alternative that it is consistent with $(\tau ',p')$, where $\tau $ and $\tau '$ differ only by a set of sibling nodes.
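The counts $N_{n}$ and the estimator ${\hat {p}}_{n}$ defined above can be computed directly. The following sketch (our own illustrative code; the lamp transition probability 0.8 and $\varepsilon =0.1$ are arbitrary choices) applies them to data simulated from the interrupted light source chain $Z_{n}=X_{n}\xi _{n}$ of the previous section:

```python
import random

def N(sample, w):
    """N_n(w): number of occurrences of the block w inside the sample."""
    w = list(w)
    return sum(1 for t in range(len(sample) - len(w) + 1)
               if sample[t:t + len(w)] == w)

def p_hat(sample, ctx, a, alphabet=(0, 1)):
    """Estimator of the transition probability p(a | ctx); falls back to
    the uniform value 1/|A| when the context never occurs in the sample."""
    denom = sum(N(sample, list(ctx) + [b]) for b in alphabet)
    if denom == 0:
        return 1.0 / len(alphabet)
    return N(sample, list(ctx) + [a]) / denom

# simulate the interrupted light source: the lamp X is a Markov chain,
# the door xi is i.i.d. Bernoulli(1 - eps), and the observation is Z = X * xi
rng = random.Random(1)
x, eps, Z = 1, 0.1, []
for _ in range(5000):
    x = x if rng.random() < 0.8 else 1 - x       # lamp keeps its state w.p. 0.8
    xi = 1 if rng.random() < 1 - eps else 0      # door open w.p. 1 - eps
    Z.append(x * xi)

print(p_hat(Z, (1,), 1))     # estimated P(Z_n = 1 | Z_{n-1} = 1)
print(p_hat(Z, (1, 0), 1))   # estimated P(Z_n = 1 | Z_{n-2} Z_{n-1} = 10)
```

Comparing such estimates for a context and its one-symbol extensions is exactly the kind of information the pruning tests above are built from.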
The length of the current estimated context is defined by ${\hat {\ell }}_{n}(X_{0}^{n-1})=\max \left\{i=1,\ldots ,K(n):\Lambda _{n}(X_{n-i}^{n-1})\,>\,C\log n\right\}\,$ where $C$ is any positive constant. Finally, by Rissanen,[1] the following consistency result holds. Given a sample $X_{0},\ldots ,X_{n-1}$ from a finite probabilistic context tree $(\tau ,p)$, $P\left({\hat {\ell }}_{n}(X_{0}^{n-1})\neq \ell (X_{0}^{n-1})\right)\longrightarrow 0,$ as $n\rightarrow \infty $. Bayesian information criterion (BIC) The estimator of the context tree by BIC with a penalty constant $c>0$ is defined as ${\hat {\tau }}_{\mathrm {BIC} }={\underset {\tau \in {\mathcal {T}}_{n}}{\arg \max }}\{\log L_{\tau }(X_{1}^{n})-c\,\mathrm {df} (\tau )\log n\}$ where $\mathrm {df} (\tau )$ is the number of degrees of freedom of the tree $\tau $. Smallest maximizer criterion (SMC) The smallest maximizer criterion[3] selects the smallest tree $\tau $ in a set of champion trees $C$ such that $\lim _{n\to \infty }{\frac {\log L_{\tau }(X_{1}^{n})-\log L_{\hat {\tau }}(X_{1}^{n})}{n}}=0$ See also • Variable-order Markov model • Markov chain • Stochastic process References 1. Rissanen, J (Sep 1983). "A Universal Data Compression System". IEEE Transactions on Information Theory. 29 (5): 656–664. doi:10.1109/TIT.1983.1056741. 2. Bejenaro, G (2001). "Variations on probabilistic suffix trees: statistical modeling and prediction of protein families". Bioinformatics. 17 (5): 23–43. doi:10.1093/bioinformatics/17.1.23. PMID 11222260. 3. Galves A, Galves C, Garcia J, Garcia NL, Leonardi F (2012). "Context tree selection and linguistic rhythm retrieval from written texts". The Annals of Applied Statistics. 6 (5): 186–209. arXiv:0902.3619. doi:10.1214/11-AOAS511. 4. Dubnov S, Assayag G, Lartillot O, Bejenaro G (2003). "Using machine-learning methods for musical style modeling". Computer. 36 (10): 73–80. CiteSeerX 10.1.1.628.4614. doi:10.1109/MC.2003.1236474. 5. Galves A, Garivier A, Gassiat E (2012). "Joint estimation of intersecting context tree models".
Scandinavian Journal of Statistics. 40 (2): 344–362. arXiv:1102.0673. doi:10.1111/j.1467-9469.2012.00814.x. 6. Galves A, Löcherbach E (2008). "Stochastic chains with memory of variable length". TICSP Series. 38: 117–133.
Continuity in probability In probability theory, a stochastic process is said to be continuous in probability or stochastically continuous if its distributions converge whenever the values in the index set converge.[1][2] Definition Let $X=(X_{t})_{t\in T}$ be a stochastic process in $\mathbb {R} ^{n}$. The process $X$ is continuous in probability when $X_{r}$ converges in probability to $X_{s}$ whenever $r$ converges to $s$.[2] Examples and Applications Feller processes are continuous in probability at $t=0$. Continuity in probability is sometimes used as one of the defining properties of a Lévy process.[1] Any process that is continuous in probability and has independent increments has a version that is càdlàg.[2] As a result, some authors immediately define a Lévy process as being càdlàg and having independent increments.[3] References 1. Applebaum, D. "Lectures on Lévy processes and Stochastic calculus, Braunschweig; Lecture 2: Lévy processes" (PDF). University of Sheffield. pp. 37–53. 2. Kallenberg, Olav (2002). Foundations of Modern Probability (2nd ed.). New York: Springer. p. 286. 3. Kallenberg, Olav (2002). Foundations of Modern Probability (2nd ed.). New York: Springer. p. 290.
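As a numerical illustration (our own sketch, not from the cited references): a Poisson process has jump discontinuities, so its paths are not continuous, yet it is continuous in probability, since the chance of seeing any jump in a window of length $h$ is $1-e^{-\lambda h}\to 0$ as $h\to 0$. A quick Monte Carlo check:

```python
import random

def p_change(lam, h, n, rng):
    """Estimate P(X_{t+h} != X_t) for a rate-lam Poisson process.
    By memorylessness, the process changes over a window of length h
    iff the next interarrival time (Exponential(lam)) is <= h."""
    hits = sum(1 for _ in range(n) if rng.expovariate(lam) <= h)
    return hits / n

rng = random.Random(0)
probs = [p_change(1.0, h, 20000, rng) for h in (1.0, 0.1, 0.01)]
# probs shrinks toward 0 as h -> 0, tracking 1 - exp(-h)
```

The estimates decrease toward zero with $h$, which is precisely convergence in probability of $X_{t+h}$ to $X_{t}$ despite the discontinuity of every sample path.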
Stochastic analysis on manifolds In mathematics, stochastic analysis on manifolds or stochastic differential geometry is the study of stochastic analysis over smooth manifolds. It is therefore a synthesis of stochastic analysis and differential geometry. The connection between analysis and stochastic processes stems from the fundamental relation that the infinitesimal generator of a continuous strong Markov process is a second-order elliptic operator. The infinitesimal generator of Brownian motion is the Laplace operator, and the transition probability density $p(t,x,y)$ of Brownian motion is the minimal heat kernel of the heat equation. Interpreting the paths of Brownian motion as characteristic curves of the operator, Brownian motion can be seen as a stochastic counterpart of a flow to a second-order partial differential operator. Stochastic analysis on manifolds investigates stochastic processes on non-linear state spaces or manifolds. The classical theory can be reformulated in a coordinate-free representation, since on a manifold it is often complicated (or not possible) to formulate objects using coordinates of $\mathbb {R} ^{d}$. Thus, we require additional structure in the form of a linear connection or Riemannian metric to define martingales and Brownian motion on manifolds. Being controlled by the Riemannian metric, Brownian motion is therefore by definition a local object. However, its stochastic behaviour determines global aspects of the topology and geometry of the manifold. Brownian motion is defined to be the diffusion process generated by the Laplace-Beltrami operator ${\tfrac {1}{2}}\Delta _{M}$ with respect to a manifold $M$ and can be constructed as the solution to a non-canonical stochastic differential equation on a Riemannian manifold. As there is no Hörmander representation of the operator $\Delta _{M}$ if the manifold is not parallelizable, i.e. if the tangent bundle is not trivial, there is no canonical procedure to construct Brownian motion.
However, this obstacle can be overcome if the manifold is equipped with a connection: we can then introduce the stochastic horizontal lift of a semimartingale and the stochastic development by the so-called Eells-Elworthy-Malliavin construction.[1][2] The latter is a generalisation of the horizontal lift of smooth curves to horizontal curves in the frame bundle, such that the anti-development and the horizontal lift are connected by a stochastic differential equation. Using this, we can consider an SDE on the orthonormal frame bundle of a Riemannian manifold whose solution is Brownian motion and projects down to the (base) manifold via stochastic development. A visual representation of this construction corresponds to the construction of a spherical Brownian motion by rolling the manifold without slipping along the paths (or footprints) of Brownian motion left in Euclidean space.[3] Stochastic differential geometry provides insight into classical analytic problems, and offers new approaches to prove results by means of probability. For example, one can apply Brownian motion to the Dirichlet problem at infinity for Cartan-Hadamard manifolds[4] or give a probabilistic proof of the Atiyah-Singer index theorem.[5] Stochastic differential geometry also applies in other areas of mathematics (e.g. mathematical finance). For example, we can convert classical arbitrage theory into differential-geometric language (also called geometric arbitrage theory).[6] Preface For the reader's convenience and if not stated otherwise, let $(\Omega ,{\mathcal {A}},({\mathcal {F}}_{t})_{t\geq 0},\mathbb {P} )$ be a filtered probability space and $M$ be a smooth manifold. The filtration satisfies the usual conditions, i.e. it is right-continuous and complete. We use the Stratonovich integral, which obeys the classical chain rule (in contrast to the Itô calculus).
The main advantage for us lies in the fact that stochastic differential equations are then stable under diffeomorphisms $f:M\to N$ between manifolds, i.e. if $X$ is a solution, then $f(X)$ is also a solution of the correspondingly transformed stochastic differential equation. Notation: • $TM$ is the tangent bundle of $M$. • $T^{*}M$ is the cotangent bundle of $M$. • $\Gamma (TM)$ is the $C^{\infty }(M)$-module of vector fields on $M$. • $X\circ dZ$ is the Stratonovich integral. • $C_{c}^{\infty }(M)$ is the space of test functions on $M$, i.e. $f\in C_{c}^{\infty }(M)$ is smooth and has compact support. • ${\widehat {M}}:=M\cup \{\infty \}$ is the one-point compactification (or Alexandroff compactification). Flow processes Flow processes (also called $L$-diffusions) are the probabilistic counterpart of integral curves (flow lines) of vector fields. In contrast, a flow process is defined with respect to a second-order differential operator, and thus generalises the notion of deterministic flows, which are defined with respect to a first-order operator. Partial differential operator in Hörmander form Let $A\in \Gamma (TM)$ be a vector field, understood as a derivation by the $C^{\infty }(M)$-isomorphism $\Gamma (TM)\to \operatorname {Der} _{\mathbb {R} }C^{\infty }(M),\quad A\mapsto (f\mapsto Af)$ for $f\in C^{\infty }(M)$. The map $Af:M\to \mathbb {R} $ is defined by $Af(x):=A_{x}(f)$. For the composition, we set $A^{2}f:=A(Af)$ for $f\in C^{\infty }(M)$. A partial differential operator (PDO) $L:C^{\infty }(M)\to C^{\infty }(M)$ is given in Hörmander form if and only if there are vector fields $A_{0},A_{1},\dots ,A_{r}\in \Gamma (TM)$ such that $L$ can be written in the form $L=A_{0}+\sum \limits _{i=1}^{r}A_{i}^{2}$. Flow process Let $L$ be a PDO in Hörmander form on $M$ and $x\in M$ a starting point.
An adapted and continuous $M$-valued process $X$ with $X_{0}=x$ is called a flow process to $L$ starting in $x$, if for every test function $f\in C_{c}^{\infty }(M)$ and $t\in \mathbb {R} _{+}$ the process $N(f)_{t}:=f(X_{t})-f(X_{0})-\int _{0}^{t}Lf(X_{r})\mathrm {d} r$ is a martingale, i.e. $\mathbb {E} \left(N(f)_{t}\mid {\mathcal {F}}_{s}\right)=N(f)_{s},\quad \forall s\leq t$. Remark For a test function $f\in C_{c}^{\infty }(M)$, a PDO $L$ in Hörmander form and a flow process $X_{t}^{x}$ (starting in $x$), the flow equation also holds, though in contrast to the deterministic case only in the mean: ${\frac {\mathrm {d} }{\mathrm {d} t}}\mathbb {E} f(X_{t}^{x})=\mathbb {E} \left[Lf(X_{t}^{x})\right]$, and we can recover the PDO by taking the time derivative at time 0, i.e. $\left.{\frac {\mathrm {d} }{\mathrm {d} t}}\right|_{t=0}\mathbb {E} f(X_{t}^{x})=Lf(x)$. Lifetime and explosion time Let $\emptyset \neq U\subset \mathbb {R} ^{n}$ be open and $\xi >0$ a predictable stopping time. We call $\xi $ the lifetime of a continuous semimartingale $X=(X_{t})_{0\leq t<\xi }$ on $U$ if • there is a sequence of stopping times $(\xi _{n})$ with $\xi _{n}\nearrow \xi $, such that $\xi _{n}<\xi $ $\mathbb {P} $-almost surely on $\{0<\xi <\infty \}$; • the stopped process $(X_{t\wedge \xi _{n}})$ is a semimartingale. Moreover, if $X_{\xi _{n}(\omega )}\to \partial U$ for almost all $\omega \in \{\xi <\infty \}$, we call $\xi $ the explosion time. A flow process $X$ can have a finite lifetime $\xi $. By this we mean that $X=(X_{t})_{t<\xi }$ is defined such that if $t\to \xi $, then $\mathbb {P} $-almost surely on $\{\xi <\infty \}$ we have $X_{t}\to \infty $ in the one-point compactification ${\widehat {M}}:=M\cup \{\infty \}$. In that case we extend the process path-wise by $X_{t}:=\infty $ for $t\geq \xi $. Semimartingales on a manifold A process $X$ is a semimartingale on $M$, if for every $f\in C^{2}(M)$ the random variable $f(X)$ is an $\mathbb {R} $-semimartingale, i.e.
the composition of any smooth function $f$ with the process $X$ is a real-valued semimartingale. It can be shown that any $M$-semimartingale is a solution of a stochastic differential equation on $M$. If the semimartingale is only defined up to a finite lifetime $\xi $, we can always construct a semimartingale with infinite lifetime by a transformation of time. A semimartingale has a quadratic variation with respect to a section in the bundle of bilinear forms on $TM$. Introducing the Stratonovich integral of a differential form $\alpha $ along the semimartingale $X$, we can study the so-called winding behaviour of $X$, i.e. a generalisation of the winding number. Stratonovich integral of a 1-form Let $X$ be an $M$-valued semimartingale and $\alpha \in \Gamma (T^{*}M)$ be a 1-form. We call the integral $\int _{X}\alpha :=\int \alpha (\circ dX)$ the Stratonovich integral of $\alpha $ along $X$. For $f\in C^{\infty }(M)$ we define $\int f(X)\,\alpha (\circ dX):=\int f(X)\circ d{\Big (}\int _{X}\alpha {\Big )}$. SDEs on a manifold A stochastic differential equation on a manifold $M$, denoted SDE on $M$, is defined by the pair $(A,Z)$ consisting of a bundle homomorphism (i.e. a homomorphism of vector bundles) and a semimartingale, or equivalently by the ($r+1$)-tuple $(A_{1},\dots ,A_{r},Z)$ with given vector fields $A_{1},\dots ,A_{r}$. Using the Whitney embedding theorem, we can show that there is a unique maximal solution to every SDE on $M$ with initial condition $X_{0}=x$. Once the maximal solution is identified, we directly recover a flow process $X^{x}$ to the operator $L$. Definition An SDE on $M$ is a pair $(A,Z)$, where • $Z=(Z_{t})_{t\in \mathbb {R} _{+}}$ is a continuous semimartingale on a finite-dimensional $\mathbb {R} $-vector space $E$; and • $A:M\times E\to TM$ is a (smooth) homomorphism of vector bundles over $M$, $A:(x,e)\mapsto A(x)e$, where $A(x):E\to T_{x}M$ is a linear map.
The stochastic differential equation $(A,Z)$ is denoted by $dX=A(X)\circ dZ$ or $dX=\sum \limits _{i=1}^{r}A_{i}(X)\circ dZ^{i}.$ The latter follows from setting $A_{i}:=A(\cdot )e_{i}$ with respect to a basis $(e_{i})_{i=1,\dots ,r}$ and $\mathbb {R} $-valued semimartingales $(Z^{i})_{i=1,\dots ,r}$ with $Z=\sum \limits _{i=1}^{r}Z^{i}e_{i}$. As for given vector fields $A_{1},\dots ,A_{r}\in \Gamma (TM)$ there is exactly one bundle homomorphism $A$ such that $A_{i}:=A(\cdot )e_{i}$, our definition of an SDE on $M$ as $(A_{1},\dots ,A_{r},Z)$ is plausible. If $Z$ has only finite lifetime, we can transform the time horizon into the infinite case.[7] Solutions Let $(A,Z)$ be an SDE on $M$ and $x_{0}:\Omega \to M$ an ${\mathcal {F}}_{0}$-measurable random variable. Let $(X_{t})_{t<\zeta }$ be a continuous adapted $M$-valued process with lifetime $\zeta $ on the same probability space as $Z$. Then $(X_{t})_{t<\zeta }$ is called a solution to the SDE $dX=A(X)\circ dZ$ with initial condition $X_{0}=x_{0}$ up to the lifetime $\zeta $, if for every test function $f\in C_{c}^{\infty }(M)$ the process $f(X)$ is an $\mathbb {R} $-valued semimartingale and for every stopping time $\tau $ with $0\leq \tau <\zeta $, it holds $\mathbb {P} $-almost surely that $f(X_{\tau })=f(x_{0})+\int _{0}^{\tau }(df)_{X_{s}}A(X_{s})\circ \mathrm {d} Z_{s}$, where $(df)_{x}:T_{x}M\to \mathbb {R} $ is the differential of $f$ at the point $x$. Following the idea from above, by definition $f(X)$ is a semimartingale for every test function $f\in C_{c}^{\infty }(M)$, so that $X$ is a semimartingale on $M$. If the lifetime is maximal, i.e. $\{\zeta <\infty \}\subset \left\{\lim \limits _{t\nearrow \zeta }X_{t}=\infty {\text{ in }}{\widehat {M}}\right\}$ $\mathbb {P} $-almost surely, we call this solution the maximal solution.
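A concrete instance of such an SDE, sketched below in our own code (not from the cited references), uses the classical extrinsic description of spherical Brownian motion: for $S^{2}\subset \mathbb {R} ^{3}$, the Stratonovich equation $dX=A(X)\circ dB$ with $A(x)e=e-\langle x,e\rangle x$ (orthogonal projection onto $T_{x}S^{2}$) and a 3-dimensional Brownian motion $B$ is solved by Brownian motion on the sphere. A Heun-type (Stratonovich-consistent) scheme, with an explicit renormalisation added purely for numerical stability:

```python
import math
import random

def proj(x, v):
    """A(x)v = v - <x, v> x: orthogonal projection of v onto the tangent plane at x."""
    d = sum(xi * vi for xi, vi in zip(x, v))
    return [vi - d * xi for vi, xi in zip(v, x)]

def heun_step(x, db):
    """One Heun (predictor-corrector) step for the Stratonovich SDE dX = A(X) ∘ dB."""
    a1 = proj(x, db)
    predictor = [xi + ai for xi, ai in zip(x, a1)]
    a2 = proj(predictor, db)
    y = [xi + 0.5 * (u + v) for xi, u, v in zip(x, a1, a2)]
    # the exact solution stays on the sphere; the discretisation only does so
    # approximately, so we project back as a numerical stabilisation
    n = math.sqrt(sum(c * c for c in y))
    return [c / n for c in y]

rng = random.Random(0)
x, h = [0.0, 0.0, 1.0], 1e-3
for _ in range(1000):  # simulate up to time t = 1
    db = [rng.gauss(0.0, math.sqrt(h)) for _ in range(3)]
    x = heun_step(x, db)
```

Here the bundle homomorphism $A(x)$ is the projection $I-xx^{\top }$, and the driving semimartingale $Z$ is the flat Brownian motion $B$ on $E=\mathbb {R} ^{3}$.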
The lifetime of a maximal solution $X$ can be extended to all of $\mathbb {R} _{+}$, and after extending $f$ to the whole of ${\widehat {M}}$, the equation $f(X_{t})=f(X_{0})+\int _{0}^{t}(df)_{X}A(X)\circ dZ,\quad t\geq 0$, holds up to indistinguishability.[8] Remark Let $Z=(t,B)$ with a $d$-dimensional Brownian motion $B=(B_{1},\dots ,B_{d})$; then we can show that every maximal solution starting in $x_{0}$ is a flow process to the operator $L=A_{0}+{\frac {1}{2}}\sum \limits _{i=1}^{d}A_{i}^{2}$. Martingales and Brownian motion Brownian motions on manifolds are the flow processes of the Laplace-Beltrami operator. It is possible to construct Brownian motion on Riemannian manifolds $(M,g)$. However, to follow a canonical ansatz, we need some additional structure. Let ${\mathcal {O}}(d)$ be the orthogonal group; we consider the canonical SDE on the orthonormal frame bundle $O(M)$ over $M$, whose solution is Brownian motion. The orthonormal frame bundle is the collection of all sets $O_{x}(M)$ of orthonormal frames of the tangent space $T_{x}M$, $O(M):=\bigcup \limits _{x\in M}O_{x}(M)$, or in other words, the ${\mathcal {O}}(d)$-principal bundle associated to $TM$. Let $W$ be an $\mathbb {R} ^{d}$-valued semimartingale. The solution $U$ of the SDE $dU_{t}=\sum \limits _{i=1}^{d}A_{i}(U_{t})\circ dW_{t}^{i},\quad U_{0}=u_{0},$ whose projection under $\pi :O(M)\to M$ is a Brownian motion $X$ on the Riemannian manifold, is the stochastic development of $W$ on $M$. Conversely, we call $W$ the anti-development of $U$ or, respectively, of $X=\pi (U)$. In short, we get the following relations: $W\leftrightarrow U\leftrightarrow X$, where • $U$ is an $O(M)$-valued semimartingale; and • $X$ is an $M$-valued semimartingale. For a Riemannian manifold we always use the Levi-Civita connection and the corresponding Laplace-Beltrami operator $\Delta _{M}$.
The key observation is that there exists a lifted version of the Laplace-Beltrami operator on the orthonormal frame bundle. The fundamental relation reads, for $f\in C^{\infty }(M)$, $\Delta _{M}f(x)=\Delta _{O(M)}(f\circ \pi )(u)$ for all $u\in O(M)$ with $\pi u=x$, and the operator $\Delta _{O(M)}$ on $O(M)$ is well-defined for so-called horizontal vector fields. The operator $\Delta _{O(M)}$ is called Bochner's horizontal Laplace operator. Martingales with linear connection To define martingales, we need a linear connection $\nabla $. Using the connection, we can characterise $\nabla $-martingales as those processes whose anti-development is a local martingale. It is also possible to define $\nabla $-martingales without using the anti-development. We write ${\stackrel {m}{=}}$ to indicate that equality holds modulo differentials of local martingales. Let $X$ be an $M$-valued semimartingale. Then $X$ is a martingale or $\nabla $-martingale, if and only if for every $f\in C^{\infty }(M)$, it holds that $d(f\circ X)\,\,{\stackrel {m}{=}}\,\,{\tfrac {1}{2}}(\nabla df)(dX,dX).$ Brownian motion on a Riemannian manifold Let $(M,g)$ be a Riemannian manifold with Laplace-Beltrami operator $\Delta _{M}$. An adapted $M$-valued process $X$ with maximal lifetime $\xi $ is called a Brownian motion on $(M,g)$, if for every $f\in C^{\infty }(M)$ $f(X)-{\frac {1}{2}}\int \Delta _{M}f(X)\mathrm {d} t$ is a local $\mathbb {R} $-martingale with lifetime $\xi $. Hence, Brownian motion is the diffusion process to ${\tfrac {1}{2}}\Delta _{M}$. Note that this characterisation does not provide a canonical procedure to define Brownian motion. References and notes 1. Stochastic differential equations on manifolds. Vol. 70. 1982. 2. Géométrie différentielle stochastique. 1978. 3. Stochastische Analysis: Eine Einführung in die Theorie der stetigen Semimartingale. pp. 349–544. ISBN 978-3-519-02229-9. 4.
Brownian Motion and the Dirichlet Problem at Infinity on Two-dimensional Cartan-Hadamard Manifolds. Vol. 41. 2014. pp. 443–462. doi:10.1007/s11118-013-9376-3. 5. Stochastic Analysis on Manifolds. Vol. 38. 6. Geometric Arbitrage Theory and Market Dynamics. Vol. 7. 2015. doi:10.3934/jgm.2015.7.431. 7. Stochastische Analysis: Eine Einführung in die Theorie der stetigen Semimartingale. p. 364. ISBN 978-3-519-02229-9. 8. Wolfgang Hackenbroch and Anton Thalmaier, Vieweg+Teubner Verlag Wiesbaden (ed.), Stochastische Analysis: Eine Einführung in die Theorie der stetigen Semimartingale, p. 364, ISBN 978-3-519-02229-9 Bibliography • Wolfgang Hackenbroch and Anton Thalmaier, Vieweg+Teubner Verlag Wiesbaden (ed.), Stochastische Analysis: Eine Einführung in die Theorie der stetigen Semimartingale [Stochastic Analysis: An introduction to the theory of continuous semimartingales], pp. 349–544, ISBN 978-3-519-02229-9 • Nobuyuki Ikeda and Shinzo Watanabe, North Holland (ed.), Stochastic Differential Equations and Diffusion Processes • Elton P. Hsu, American Mathematical Society (ed.), "Stochastic Analysis on Manifolds", Graduate Studies in Mathematics, vol. 38 • K. D. Elworthy (1982), Cambridge University Press (ed.), Stochastic Differential Equations on Manifolds, doi:10.1017/CBO9781107325609
Wikipedia
Stochastic simulation A stochastic simulation is a simulation of a system that has variables that can change stochastically (randomly) with individual probabilities.[1] Realizations of these random variables are generated and inserted into a model of the system. Outputs of the model are recorded, and then the process is repeated with a new set of random values. These steps are repeated until a sufficient amount of data is gathered. In the end, the distribution of the outputs shows the most probable estimates as well as a frame of expectations regarding what ranges of values the variables are more or less likely to fall in.[1] Often the random variables inserted into the model are created on a computer with a random number generator (RNG). The U(0,1) uniform distribution outputs of the random number generator are then transformed into random variables with the probability distributions that are used in the system model.[2] Etymology Stochastic originally meant "pertaining to conjecture"; from Greek stokhastikos "able to guess, conjecturing": from stokhazesthai "guess"; from stokhos "a guess, aim, target, mark". The sense of "randomly determined" was first recorded in 1934, from German Stochastik.[3] Discrete-event simulation In order to determine the next event in a stochastic simulation, the rates of all possible changes to the state of the model are computed and then ordered in an array. Next, the cumulative sum of the array is taken, and the final cell contains the number R, the total event rate. This cumulative array is now a discrete cumulative distribution, and can be used to choose the next event by picking a random number z~U(0,R) and choosing the first event whose cumulative rate exceeds z. Probability distributions A probability distribution is used to describe the potential outcome of a random variable.
Discrete distributions A discrete distribution limits the outcomes: the variable can only take on discrete values.[4] Bernoulli distribution Main article: Bernoulli distribution A random variable X is Bernoulli-distributed with parameter p if it has two possible outcomes, usually encoded 1 (success or default) or 0 (failure or survival),[5] where the probabilities of success and failure are $P(X=1)=p$ and $P(X=0)=1-p$ where $0\leq p\leq 1$. To produce a random variable X with a Bernoulli distribution from a U(0,1) uniform distribution made by a random number generator, we define $X={\begin{cases}1,&{\text{if }}0\leq U<p\\0,&{\text{if }}1\geq U\geq p\end{cases}}$ such that the probability for $P(X=1)=P(0\leq U<p)=p$ and $P(X=0)=P(1\geq U\geq p)=1-p$.[2] Example: Toss of coin Define $X={\begin{cases}1&{\text{if heads comes up}}\\0&{\text{if tails comes up}}\end{cases}}$ For a fair coin, both realizations are equally likely. We can generate realizations of this random variable X from a $U(0,1)$ uniform distribution provided by a random number generator (RNG) by having $X=1$ if the RNG outputs a value between 0 and 0.5 and $X=0$ if the RNG outputs a value between 0.5 and 1. ${\begin{aligned}P(X=1)&=P(0\leq U<1/2)=1/2\\P(X=0)&=P(1\geq U\geq 1/2)=1/2\end{aligned}}$ Of course, the two outcomes may not be equally likely (e.g. success of medical treatment).[6] Binomial distribution Main article: Binomial distribution A binomially distributed random variable Y with parameters n and p is obtained as the sum of n independent and identically Bernoulli-distributed random variables X1, X2, ..., Xn.[4] Example: A coin is tossed three times. Find the probability of getting exactly two heads. This problem can be solved by looking at the sample space. There are three ways to get two heads.
HHH, HHT, HTH, THH, TTH, THT, HTT, TTT The answer is 3/8 (= 0.375).[7] Poisson distribution Main article: Poisson distribution A Poisson process is a process where events occur randomly in an interval of time or space.[2][8] The probability distribution for Poisson processes with constant rate λ per time interval is given by the following equation.[4] $P(k{\text{ events in interval}})={\frac {\lambda ^{k}e^{-\lambda }}{k!}}$ Defining $N(t)$ as the number of events that occur in the time interval $t$, $P(N(t)=k)={\frac {(t\lambda )^{k}}{k!}}e^{-t\lambda }$ It can be shown that the inter-arrival times between events are exponentially distributed, with a cumulative distribution function (CDF) of $F(t)=1-e^{-t\lambda }$. The inverse of the exponential CDF is given by $t=-{\frac {1}{\lambda }}\ln(u)$ where $u$ is a $U(0,1)$ uniformly distributed random variable.[2] Simulating a Poisson process with a constant rate $\lambda $ for the number of events $N$ that occur in the interval $[t_{\text{start}},t_{\text{end}}]$ can be carried out with the following algorithm.[9] 1. Begin with $N=0$ and $t=t_{\text{start}}$ 2. Generate a random variable $u$ from the $U(0,1)$ uniform distribution 3. Update the time with $t=t-\ln(u)/\lambda $ 4. If $t>t_{\text{end}}$, then stop. Else continue to step 5. 5. $N=N+1$ 6. Continue to step 2 Direct and first reaction methods Both were published by Dan Gillespie in 1977; the direct method performs a linear search on the cumulative array. See Gillespie algorithm.
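The exponential waiting times derived above and the cumulative-array event selection described under discrete-event simulation are exactly the ingredients of Gillespie's direct method. A minimal sketch, where the birth-death toy system and its rate constants are invented for illustration:

```python
import random

def direct_method(propensities, state, stoichiometry, t_end, seed=0):
    """Gillespie direct method: exponential waiting times plus a
    linear search on the cumulative rate array."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, tuple(state))]
    while True:
        rates = [a(state) for a in propensities]
        total = sum(rates)                # R, the total event rate
        if total == 0:
            break                         # no event can fire any more
        t += rng.expovariate(total)       # exponential inter-event time
        if t > t_end:
            break
        z = rng.uniform(0, total)         # z ~ U(0, R)
        cumulative = 0.0
        for j, r in enumerate(rates):     # first event whose cumulative
            cumulative += r               # rate exceeds z
            if z < cumulative:
                break
        state = [s + d for s, d in zip(state, stoichiometry[j])]
        trajectory.append((t, tuple(state)))
    return trajectory

# Toy birth-death system: X -> X+1 at rate k1, X -> X-1 at rate k2*X.
k1, k2 = 1.0, 0.1
traj = direct_method([lambda s: k1, lambda s: k2 * s[0]],
                     state=[10], stoichiometry=[(1,), (-1,)], t_end=50.0)
```

Each entry of `traj` is a `(time, state)` pair; event times are strictly increasing and the population can never go negative, since the death propensity vanishes at zero.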
Gillespie's Stochastic Simulation Algorithm (SSA) is essentially an exact procedure for numerically simulating the time evolution of a well-stirred chemically reacting system by taking proper account of the randomness inherent in such a system.[10] It is rigorously based on the same microphysical premise that underlies the chemical master equation and gives a more realistic representation of a system's evolution than the deterministic reaction rate equation (RRE) represented mathematically by ODEs.[10] As with the chemical master equation, the SSA converges, in the limit of large numbers of reactants, to the same solution as the law of mass action. Next reaction method Published in 2000 by Gibson and Bruck,[11] the next reaction method improves over the first reaction method by reducing the number of random numbers that need to be generated. To make the sampling of reactions more efficient, an indexed priority queue is used to store the reaction times. To make the computation of reaction propensities more efficient, a dependency graph is also used. This dependency graph tells which reaction propensities to update after a particular reaction has fired. While more efficient, the next reaction method requires more complex data structures than either direct simulation or the first reaction method. Optimised and sorting direct methods Published in 2004[12] and 2005. These methods sort the cumulative array to reduce the average search depth of the algorithm. The former runs a presimulation to estimate the firing frequency of reactions, whereas the latter sorts the cumulative array on-the-fly. Logarithmic direct method Published in 2006. This is a binary search on the cumulative array, thus reducing the worst-case time complexity of reaction sampling to O(log M). Partial-propensity methods Published in 2009, 2010, and 2011 (Ramaswamy 2009, 2010, 2011).
Use factored-out, partial reaction propensities to reduce the computational cost to scale with the number of species in the network, rather than the (larger) number of reactions. Four variants exist: • PDM, the partial-propensity direct method. Has a computational cost that scales linearly with the number of different species in the reaction network, independent of the coupling class of the network (Ramaswamy 2009). • SPDM, the sorting partial-propensity direct method. Uses dynamic bubble sort to reduce the pre-factor of the computational cost in multi-scale reaction networks where the reaction rates span several orders of magnitude (Ramaswamy 2009). • PSSA-CR, the partial-propensity SSA with composition-rejection sampling. Reduces the computational cost to constant time (i.e., independent of network size) for weakly coupled networks (Ramaswamy 2010) using composition-rejection sampling (Slepoy 2008). • dPDM, the delay partial-propensity direct method. Extends PDM to reaction networks that incur time delays (Ramaswamy 2011) by providing a partial-propensity variant of the delay-SSA method (Bratsun 2005, Cai 2007). The use of partial-propensity methods is limited to elementary chemical reactions, i.e., reactions with at most two different reactants. Every non-elementary chemical reaction can be equivalently decomposed into a set of elementary ones, at the expense of a linear (in the order of the reaction) increase in network size. Approximate methods A general drawback of stochastic simulations is that for big systems, too many events happen to take each into account individually in a simulation. The following methods can dramatically improve simulation speed through approximations. τ leaping method Since the SSA method keeps track of each transition, it would be impractical to implement for certain applications due to high time complexity.
Gillespie proposed an approximation procedure, the tau-leaping method, which decreases computational time with minimal loss of accuracy.[13] Instead of taking incremental steps in time, keeping track of X(t) at each time step as in the SSA method, the tau-leaping method leaps from one subinterval to the next, approximating how many transitions take place during a given subinterval. It is assumed that the value of the leap, τ, is small enough that there is no significant change in the value of the transition rates along the subinterval [t, t + τ]. This condition is known as the leap condition. The tau-leaping method thus has the advantage of simulating many transitions in one leap while not losing significant accuracy, resulting in a speed-up in computational time.[14] Conditional difference method This method approximates reversible processes (which includes random walk/diffusion processes) by taking only the net rates of the opposing events of a reversible process into account. The main advantage of this method is that it can be implemented with a simple if-statement replacing the previous transition rates of the model with new, effective rates. The model with the replaced transition rates can thus be solved, for instance, with the conventional SSA.[15] Continuous simulation While a discrete state space clearly distinguishes between particular states (values), in a continuous space this is not possible because of continuity: the system changes continuously over time, and the variables of the model change continuously as well.
Continuous simulation thereby simulates the system over time, given differential equations determining the rates of change of the state variables.[16] Examples of continuous systems are the predator/prey model[17] and cart-pole balancing.[18] Normal distribution Main article: Normal distribution The random variable X is said to be normally distributed with parameters μ and σ, abbreviated by X ∼ N(μ, σ2), if the density of the random variable is given by the formula[4] $f_{X}(x)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}},\quad x\in \mathbb {R} .$ Many things actually are normally distributed, or very close to it. For example, height and intelligence are approximately normally distributed; measurement errors also often have a normal distribution.[19] Exponential distribution Main article: Exponential distribution The exponential distribution describes the time between events in a Poisson process, i.e. a process in which events occur continuously and independently at a constant average rate. The exponential distribution is popular, for example, in queuing theory when we want to model the time we have to wait until a certain event takes place. Examples include the time until the next client enters the store, the time until a certain company defaults or the time until some machine has a defect.[4] Student's t-distribution Main article: Student's t-distribution Student's t-distribution is used in finance as a probabilistic model of asset returns. The density function of the t-distribution is given by the following equation:[4] $f(t)={\frac {\Gamma ({\frac {\nu +1}{2}})}{{\sqrt {\nu \pi }}\,\Gamma ({\frac {\nu }{2}})}}\left(1+{\frac {t^{2}}{\nu }}\right)^{-{\frac {\nu +1}{2}}},$ where $\nu $ is the number of degrees of freedom and $\Gamma $ is the gamma function. For large values of n, the t-distribution does not differ significantly from a standard normal distribution.
Usually, for values n > 30, the t-distribution is considered approximately equal to the standard normal distribution. Other distributions • Generalized extreme value distribution Combined simulation It is often possible to model one and the same system using completely different world views. Discrete event simulation of a problem, as well as continuous event simulation of it (continuous simulation with discrete events that disrupt the continuous flow), may eventually lead to the same answers. Sometimes, however, the techniques can answer different questions about a system. If we need to answer all the questions, or if we don't know what purposes the model will be used for, it is convenient to apply a combined continuous/discrete methodology.[20] Similar techniques can change from a discrete, stochastic description to a deterministic, continuum description in a time- and space-dependent manner.[21] The use of this technique enables the capturing of noise due to small copy numbers, while being much faster to simulate than the conventional Gillespie algorithm. Furthermore, the use of the deterministic continuum description enables the simulation of arbitrarily large systems. Monte Carlo simulation Monte Carlo is an estimation procedure. The main idea is that if it is necessary to know the average value of some random variable and its distribution cannot be stated, and if it is possible to take samples from the distribution, we can estimate it by taking the samples, independently, and averaging them. If there are sufficient samples, then the law of large numbers says the average must be close to the true value. The central limit theorem says that the average has a Gaussian distribution around the true value.[22] As a simple example, suppose we need to measure the area of a shape with a complicated, irregular outline. The Monte Carlo approach is to draw a square around the shape and measure the square.
Then we throw darts into the square, as uniformly as possible. The fraction of darts falling on the shape gives the ratio of the area of the shape to the area of the square. In fact, it is possible to cast almost any integral problem, or any averaging problem, into this form. It is necessary to have a good way to tell if you're inside the outline, and a good way to figure out how many darts to throw. Last but not least, we need to throw the darts uniformly, i.e., using a good random number generator.[22] Application There are wide possibilities for use of the Monte Carlo method:[1] • Statistical experiments using generation of random variables (e.g. dice) • sampling methods • Mathematics (e.g. numerical integration, multiple integrals) • Reliability Engineering • Project Management (SixSigma) • Experimental particle physics • Simulations • Risk Measurement/Risk Management (e.g. Portfolio value estimation) • Economics (e.g. finding the best fitting demand curve) • Process Simulation • Operations Research Random number generators For simulation experiments (including Monte Carlo) it is necessary to generate random numbers (as values of variables). The problem is that the computer is a highly deterministic machine—basically, behind each process there is always an algorithm, a deterministic computation changing inputs to outputs; therefore it is not easy to generate uniformly spread random numbers over a defined interval or set.[1] A random number generator is a device capable of producing a sequence of numbers which cannot be "easily" identified with deterministic properties. This sequence is then called a sequence of stochastic numbers.[23] The algorithms typically rely on pseudorandom numbers, computer-generated numbers mimicking true random numbers, to generate a realization, one possible outcome of a process.[24] Methods for obtaining random numbers have existed for a long time and are used in many different fields (such as gaming).
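The dart-throwing procedure described above can be sketched in a few lines. Here the shape is a quarter disc inside the unit square, so the hit fraction estimates π/4 (an invented example, not from the source):

```python
import random

def monte_carlo_area(inside, n_darts, seed=0):
    """Estimate the fraction of the unit square covered by a shape
    by throwing uniform darts and counting hits."""
    rng = random.Random(seed)
    hits = sum(inside(rng.random(), rng.random()) for _ in range(n_darts))
    return hits / n_darts

# Quarter disc of radius 1: its area within the unit square is pi/4.
quarter_disc = lambda x, y: x * x + y * y <= 1.0
estimate = 4 * monte_carlo_area(quarter_disc, n_darts=100_000)
print(estimate)  # an estimate of pi; the error shrinks like 1/sqrt(n_darts)
```

As the surrounding text notes, all that is needed is a membership test (`inside`), a sample size, and a good uniform generator.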
However, these numbers suffer from a certain bias. Currently the best methods expected to produce truly random sequences are natural methods that take advantage of the random nature of quantum phenomena.[23] See also • Deterministic simulation • Gillespie algorithm • Network simulation • Network traffic simulation • Simulation language • Queueing theory • Discretization • Hybrid stochastic simulations References 1. DLOUHÝ, M.; FÁBRY, J.; KUNCOVÁ, M.. Simulace pro ekonomy. Praha : VŠE, 2005. 2. Dekking, Frederik Michel (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. Springer. ISBN 1-85233-896-2. OCLC 783259968. 3. stochastic. (n.d.). Online Etymology Dictionary. Retrieved January 23, 2014, from Dictionary.com website: http://dictionary.reference.com/browse/stochastic 4. Rachev, Svetlozar T. Stoyanov, Stoyan V. Fabozzi, Frank J., "Chapter 1 Concepts of Probability" in Advanced Stochastic Models, Risk Assessment, and Portfolio Optimization : The Ideal Risk, Uncertainty, and Performance Measures, Hoboken, NJ, USA: Wiley, 2008 5. Rachev, Svetlozar T.; Stoyanov, Stoyan V.; Fabozzi, Frank J. (2011-04-14). A Probability Metrics Approach to Financial Risk Measures. doi:10.1002/9781444392715. ISBN 9781444392715. 6. Bernoulli Distribution, The University of Chicago - Department of Statistics, [online] available at http://galton.uchicago.edu/~eichler/stat22000/Handouts/l12.pdf 7. "The Binomial Distribution". Archived from the original on 2014-02-26. Retrieved 2014-01-25. 8. Haight, Frank A. (1967). Handbook of the Poisson distribution. Wiley. OCLC 422367440. 9. Sigman, Karl. "Poisson processes, and Compound (batch) Poisson processes" (PDF). 10. Stephen Gilmore, An Introduction to Stochastic Simulation - Stochastic Simulation Algorithms, University of Edinburgh, [online] available at http://www.doc.ic.ac.uk/~jb/conferences/pasta2006/slides/stochastic-simulation-introduction.pdf 11. Michael A. 
Gibson and Jehoshua Bruck, Efficient exact stochastic simulation of chemical systems with many species and many channels, J. Phys. Chem. A, 104:1876–1899, 2000. 12. Y. Cao, H. Li, and L. Petzold. Efficient formulation of the stochastic simulation algorithm for chemically reacting systems, J. Chem. Phys, 121(9):4059–4067, 2004. 13. Gillespie, D.T. (1976). "A General Method for Numerically Simulating the stochastic time evolution of coupled chemical reactions". Journal of Computational Physics. 22 (4): 403–434. Bibcode:1976JCoPh..22..403G. doi:10.1016/0021-9991(76)90041-3. 14. H.T. Banks, Anna Broido, Brandi Canter, Kaitlyn Gayvert, Shuhua Hu, Michele Joyner, Kathryn Link, Simulation Algorithms for Continuous Time Markov Chain Models, [online] available at http://www.ncsu.edu/crsc/reports/ftp/pdf/crsc-tr11-17.pdf 15. Spill, F; Maini, PK; Byrne, HM (2016). "Optimisation of simulations of stochastic processes by removal of opposing reactions". Journal of Chemical Physics. 144 (8): 084105. arXiv:1602.02655. Bibcode:2016JChPh.144h4105S. doi:10.1063/1.4942413. PMID 26931679. S2CID 13334842. 16. Crespo-Márquez, A., R. R. Usano and R. D. Aznar, 1993, "Continuous and Discrete Simulation in a Production Planning System. A Comparative Study" 17. Louis G. Birta, Gilbert Arbez (2007). Modelling and Simulation, p. 255. Springer. 18. "Pole Balancing Tutorial". 19. University of Notre Dame, Normal Distribution, [online] available at http://www3.nd.edu/~rwilliam/stats1/x21.pdf 20. Francois E. Cellier, Combined Continuous/Discrete Simulation Applications, Techniques, and Tools 21. Spill, F.; et al. (2015). "Hybrid approaches for multiple-species stochastic reaction–diffusion models". Journal of Computational Physics. 299: 429–445. arXiv:1507.07992. Bibcode:2015JCoPh.299..429S. doi:10.1016/j.jcp.2015.07.002. PMC 4554296. PMID 26478601. 22. Cosma Rohilla Shalizi, Monte Carlo, and Other Kinds of Stochastic Simulation, [online] available at http://bactra.org/notebooks/monte-carlo.html 23.
Donald E. Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms - chapter 3: Random Numbers (Addison-Wesley, Boston, 1998). 24. Andreas Hellander, Stochastic Simulation and Monte Carlo Methods, [online] available at http://www.it.uu.se/edu/course/homepage/bervet2/MCkompendium/mc.pdf • (Slepoy 2008): Slepoy, A; Thompson, AP; Plimpton, SJ (2008). "A constant-time kinetic Monte Carlo algorithm for simulation of large biochemical reaction networks". Journal of Chemical Physics. 128 (20): 205101. Bibcode:2008JChPh.128t5101S. doi:10.1063/1.2919546. PMID 18513044. • (Bratsun 2005): D. Bratsun; D. Volfson; J. Hasty; L. Tsimring (2005). "Delay-induced stochastic oscillations in gene regulation". PNAS. 102 (41): 14593–8. Bibcode:2005PNAS..10214593B. doi:10.1073/pnas.0503858102. PMC 1253555. PMID 16199522. • (Cai 2007): X. Cai (2007). "Exact stochastic simulation of coupled chemical reactions with delays". J. Chem. Phys. 126 (12): 124108. Bibcode:2007JChPh.126l4108C. doi:10.1063/1.2710253. PMID 17411109. • Hartmann, A.K. (2009). Practical Guide to Computer Simulations. World Scientific. ISBN 978-981-283-415-7. Archived from the original on 2009-02-11. Retrieved 2012-05-03. • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 17.7. Stochastic Simulation of Chemical Reaction Networks". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. • (Ramaswamy 2009): R. Ramaswamy; N. Gonzalez-Segredo; I. F. Sbalzarini (2009). "A new class of highly efficient exact stochastic simulation algorithms for chemical reaction networks". J. Chem. Phys. 130 (24): 244104. arXiv:0906.1992. Bibcode:2009JChPh.130x4104R. doi:10.1063/1.3154624. PMID 19566139. S2CID 4952205. • (Ramaswamy 2010): R. Ramaswamy; I. F. Sbalzarini (2010). "A partial-propensity variant of the composition-rejection stochastic simulation algorithm for chemical reaction networks" (PDF). J. Chem. Phys. 132 (4): 044102.
Bibcode:2010JChPh.132d4102R. doi:10.1063/1.3297948. PMID 20113014. • (Ramaswamy 2011): R. Ramaswamy; I. F. Sbalzarini (2011). "A partial-propensity formulation of the stochastic simulation algorithm for chemical reaction networks with delays" (PDF). J. Chem. Phys. 134 (1): 014106. Bibcode:2011JChPh.134a4106R. doi:10.1063/1.3521496. PMID 21218996. S2CID 4949530. External links Software • cayenne - Fast, easy to use Python package for stochastic simulations. Implementations of direct, tau-leaping, and tau-adaptive algorithms. • StochSS - StochSS: Stochastic Simulation Service - A Cloud Computing Framework for Modeling and Simulation of Stochastic Biochemical Systems. • ResAssure - Stochastic reservoir simulation software - solves fully implicit, dynamic three-phase fluid flow equations for every geological realisation. • Cain - Stochastic simulation of chemical kinetics. Direct, next reaction, tau-leaping, hybrid, etc. • pSSAlib - C++ implementations of all partial-propensity methods. • StochPy - Stochastic modelling in Python • STEPS - STochastic Engine for Pathway Simulation using swig to create Python interface to C/C++ code
Stochastic geometry In mathematics, stochastic geometry is the study of random spatial patterns. At the heart of the subject lies the study of random point patterns. This leads to the theory of spatial point processes, hence notions of Palm conditioning, which extend to the more abstract setting of random measures. Models There are various models for point processes, typically based on but going beyond the classic homogeneous Poisson point process (the basic model for complete spatial randomness) to find expressive models which allow effective statistical methods. The point pattern theory provides a major building block for generation of random object processes, allowing construction of elaborate random spatial patterns. The simplest version, the Boolean model, places a random compact object at each point of a Poisson point process. More complex versions allow interactions based in various ways on the geometry of objects. Different directions of application include: the production of models for random images either as set-union of objects, or as patterns of overlapping objects; also the generation of geometrically inspired models for the underlying point process (for example, the point pattern distribution may be biased by an exponential factor involving the area of the union of the objects; this is related to the Widom–Rowlinson model[1] of statistical mechanics). Random object What is meant by a random object? A complete answer to this question requires the theory of random closed sets, which makes contact with advanced concepts from measure theory. The key idea is to focus on the probabilities of the given random closed set hitting specified test sets. There arise questions of inference (for example, estimate the set which encloses a given point pattern) and theories of generalizations of means etc. to apply to random sets. 
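The Boolean model described above (a compact object placed at each point of a Poisson point process) can be sampled directly; a sketch on the unit-square window, with fixed-radius discs and an invented intensity:

```python
import math
import random

def boolean_model(intensity, radius, seed=0):
    """Sample a Boolean model: fixed-radius discs centred at the points
    of a homogeneous Poisson point process on the unit square."""
    rng = random.Random(seed)
    # The number of germs in the window is Poisson(intensity): count
    # exponential(intensity) inter-arrival steps until they exceed 1.
    n, acc = 0, rng.expovariate(intensity)
    while acc < 1.0:
        n += 1
        acc += rng.expovariate(intensity)
    centres = [(rng.random(), rng.random()) for _ in range(n)]

    def covered(x, y):
        """Is (x, y) inside the union of the discs?"""
        return any(math.hypot(x - cx, y - cy) <= radius
                   for cx, cy in centres)

    return centres, covered

centres, covered = boolean_model(intensity=50, radius=0.05)
```

The `covered` predicate is exactly a hitting test for singleton test sets, the kind of quantity the random-closed-set viewpoint above emphasises.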
Connections are now being made between this latter work and recent developments in geometric mathematical analysis concerning general metric spaces and their geometry. Good parametrizations of specific random sets can allow us to refer random object processes to the theory of marked point processes; object-point pairs are viewed as points in a larger product space formed as the product of the original space and the space of parametrization. Line and hyper-flat processes Suppose we are concerned no longer with compact objects, but with objects which are spatially extended: lines on the plane or flats in 3-space. This leads to consideration of line processes, and of processes of flats or hyper-flats. There can no longer be a preferred spatial location for each object; however the theory may be mapped back into point process theory by representing each object by a point in a suitable representation space. For example, in the case of directed lines in the plane one may take the representation space to be a cylinder. A complication is that the Euclidean motion symmetries will then be expressed on the representation space in a somewhat unusual way. Moreover, calculations need to take account of interesting spatial biases (for example, line segments are less likely to be hit by random lines to which they are nearly parallel) and this provides an interesting and significant connection to the hugely significant area of stereology, which in some respects can be viewed as yet another theme of stochastic geometry. It is often the case that calculations are best carried out in terms of bundles of lines hitting various test-sets, rather than by working in representation space. Line and hyper-flat processes have their own direct applications, but also find application as one way of creating tessellations dividing space; hence for example one may speak of Poisson line tessellations. 
A notable recent result[2] proves that the cell at the origin of the Poisson line tessellation is approximately circular when conditioned to be large. Tessellations in stochastic geometry can of course be produced by other means, for example by using Voronoi and variant constructions, and also by iterating various means of construction. Origin of the name The name appears to have been coined by David Kendall and Klaus Krickeberg[3] while preparing for a June 1969 Oberwolfach workshop, though antecedents for the theory stretch back much further under the name geometric probability. The term "stochastic geometry" was also used by Frisch and Hammersley in 1963[4] as one of two suggestions for names of a theory of "random irregular structures" inspired by percolation theory. Applications This brief description has focused on the theory[3][5] of stochastic geometry, which allows a view of the structure of the subject. However, much of the life and interest of the subject, and indeed many of its original ideas, flow from a very wide range of applications, for example: astronomy,[6] spatially distributed telecommunications,[7] wireless network modeling and analysis,[8] modeling of channel fading,[9][10] forestry,[11] the statistical theory of shape,[12] material science,[13] multivariate analysis, problems in image analysis[14] and stereology. There are links to statistical mechanics,[15] Markov chain Monte Carlo, and implementations of the theory in statistical computing (for example, spatstat[16] in R). Most recently determinantal and permanental point processes (connected to random matrix theory) are beginning to play a role.[17] See also • Nearest neighbour function • Spherical contact distribution function • Factorial moment measure • Moment measure • Continuum percolation theory • Random graphs • Spatial statistics • Stochastic geometry models of wireless networks • Mathematical morphology • Information geometry • Stochastic differential geometry References 1. 
Chayes, J. T.; Chayes, L.; Kotecký, R. (1995). "The analysis of the Widom-Rowlinson model by stochastic geometric methods". Communications in Mathematical Physics. 172 (3): 551–569. Bibcode:1995CMaPh.172..551C. doi:10.1007/BF02101808. 2. Kovalenko, I. N. (1999). "A simplified proof of a conjecture of D. G. Kendall concerning shapes of random polygons". Journal of Applied Mathematics and Stochastic Analysis. 12 (4): 301–310. doi:10.1155/S1048953399000283. 3. See foreword in Stoyan, D.; Kendall, W. S.; Mecke, J. (1987). Stochastic geometry and its applications. Wiley. ISBN 0-471-90519-4. 4. Frisch, H. L.; Hammersley, J. M. (1963). "Percolation processes and related topics". SIAM Journal on Applied Mathematics. 11 (4): 894–918. doi:10.1137/0111066. 5. Schneider, R.; Weil, W. (2008). Stochastic and Integral Geometry. Probability and Its Applications. Springer. doi:10.1007/978-3-540-78859-1. ISBN 978-3-540-78858-4. MR 2455326. 6. Martinez, V. J.; Saar, E. (2001). Statistics of The Galaxy Distribution. Chapman & Hall. ISBN 1-58488-084-8. 7. Baccelli, F.; Klein, M.; Lebourges, M.; Zuyev, S. (1997). "Stochastic geometry and architecture of communication networks". Telecommunication Systems. 7: 209–227. doi:10.1023/A:1019172312328. 8. M. Haenggi. Stochastic geometry for wireless networks. Cambridge University Press, 2012. 9. Piterbarg, V. I.; Wong, K. T. (2005). "Spatial-Correlation-Coefficient at the Basestation, in Closed-Form Explicit Analytic Expression, Due to Heterogeneously Poisson Distributed Scatterers". IEEE Antennas and Wireless Propagation Letters. 4 (1): 385–388. Bibcode:2005IAWPL...4..385P. doi:10.1109/LAWP.2005.857968. 10. Abdulla, M.; Shayan, Y. R. (2014). "Large-Scale Fading Behavior for a Cellular Network with Uniform Spatial Distribution". Wireless Communications and Mobile Computing. 4 (7): 1–17. arXiv:1302.0891. doi:10.1002/WCM.2565. 11. Stoyan, D.; Penttinen, A. (2000). "Recent Applications of Point Process Methods in Forestry Statistics". 
Statistical Science. 15: 61–78. 12. Kendall, D. G. (1989). "A survey of the statistical theory of shape". Statistical Science. 4 (2): 87–99. doi:10.1214/ss/1177012582. 13. Torquato, S. (2002). Random heterogeneous materials. Springer-Verlag. ISBN 0-387-95167-9. 14. Van Lieshout, M. N. M. (1995). Stochastic Geometry Models in Image Analysis and Spatial Statistics. CWI Tract, 108. CWI. ISBN 90-6196-453-9. 15. Georgii, H.-O.; Häggström, O.; Maes, C. (2001). "The random geometry of equilibrium phases". Phase Transitions and Critical Phenomena. Vol. 18. Academic Press. pp. 1–142. 16. Baddeley, A.; Turner, R. (2005). "Spatstat: An R package for analyzing spatial point patterns". Journal of Statistical Software. 12 (6): 1–42. doi:10.18637/jss.v012.i06. 17. McCullagh, P.; Møller, J. (2006). "The permanental process". Advances in Applied Probability. 38 (4): 873–888. doi:10.1239/aap/1165414583.
Wikipedia
Stochastic gradient Langevin dynamics Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique that combines characteristics of stochastic gradient descent, a Robbins–Monro optimization algorithm, and Langevin dynamics, a mathematical extension of molecular dynamics models. Like stochastic gradient descent, SGLD is an iterative optimization algorithm which uses minibatching to create a stochastic gradient estimator, as used in SGD to optimize a differentiable objective function.[1] Unlike traditional SGD, SGLD can be used for Bayesian learning as a sampling method. SGLD may be viewed as Langevin dynamics applied to posterior distributions, with the key difference that the likelihood gradient terms are minibatched, as in SGD. Like Langevin dynamics, SGLD produces samples from a posterior distribution of parameters based on available data. First described by Welling and Teh in 2011, the method has applications in many contexts which require optimization, and is most notably applied in machine learning problems. 
Formal definition Given some parameter vector $\theta $, its prior distribution $p(\theta )$, and a set of data points $X=\{x_{i}\}_{i=1}^{N}$, Langevin dynamics samples from the posterior distribution $p(\theta \mid X)\propto p(\theta )\prod _{i=1}^{N}p(x_{i}\mid \theta )$ by updating the chain: $\Delta \theta _{t}={\frac {\varepsilon _{t}}{2}}\left(\nabla \log p(\theta _{t})+\sum _{i=1}^{N}\nabla \log p(x_{i}\mid \theta _{t})\right)+\eta _{t}$ Stochastic gradient Langevin dynamics uses a modified update procedure with minibatched likelihood terms: $\Delta \theta _{t}={\frac {\varepsilon _{t}}{2}}\left(\nabla \log p(\theta _{t})+{\frac {N}{n}}\sum _{i=1}^{n}\nabla \log p(x_{t_{i}}\mid \theta _{t})\right)+\eta _{t}$ where $n<N$ is a positive integer (the minibatch size), $\eta _{t}\sim {\mathcal {N}}(0,\varepsilon _{t})$ is Gaussian noise, $p(x\mid \theta )$ is the likelihood of the data given the parameter vector $\theta $, and the step sizes $\varepsilon _{t}$ satisfy the conditions: $\sum _{t=1}^{\infty }\varepsilon _{t}=\infty \quad \sum _{t=1}^{\infty }\varepsilon _{t}^{2}<\infty $ For early iterations of the algorithm, each parameter update mimics stochastic gradient descent; however, as the algorithm approaches a local minimum or maximum, the gradient shrinks to zero and the chain produces samples surrounding the maximum a posteriori mode, allowing for posterior inference. This process generates approximate samples from the posterior by balancing the variance of the injected Gaussian noise against that of the stochastic gradient. Application SGLD is applicable in any optimization context for which it is desirable to quickly obtain posterior samples instead of a maximum a posteriori mode. In doing so, the method maintains the computational efficiency of stochastic gradient descent when compared to traditional gradient descent while providing additional information regarding the landscape around the critical point of the objective function. 
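As a concrete sketch of the update rule, consider a toy model in which the data are Gaussian with unknown mean and the prior on that mean is Gaussian. The model, step-size schedule, and all names below are illustrative assumptions, not part of the original formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for illustration): x_i ~ N(theta, 1) with
# prior theta ~ N(0, 10), so the posterior over theta is again Gaussian.
N = 1000
data = rng.normal(2.0, 1.0, size=N)          # data generated with theta = 2.0

def grad_log_prior(theta):
    return -theta / 10.0                      # d/dtheta of log N(theta; 0, 10)

def grad_log_lik(theta, batch):
    return np.sum(batch - theta)              # d/dtheta of sum_i log N(x_i; theta, 1)

n = 50                                        # minibatch size
theta, samples = 0.0, []
for t in range(1, 5001):
    eps = 0.5 / (N * t ** 0.6)                # schedule with sum eps_t = inf, sum eps_t^2 < inf
    batch = rng.choice(data, size=n, replace=False)
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)
    theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))  # eta_t ~ N(0, eps_t)
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])      # discard burn-in
```

Early iterations behave like SGD steps toward the mode; once the gradient term shrinks, the injected noise dominates and the iterates behave like approximate posterior samples.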
In practice, SGLD can be applied to the training of Bayesian neural networks in deep learning, a task in which the method provides a distribution over model parameters. By introducing information about the variance of these parameters, SGLD characterizes the generalizability of these models at certain points in training.[2] Additionally, obtaining samples from a posterior distribution permits uncertainty quantification by means of confidence intervals, a feature which is not possible using traditional stochastic gradient descent. Variants and associated algorithms If gradient computations are exact, SGLD reduces to the Langevin Monte Carlo algorithm,[3] first introduced in the lattice field theory literature. This algorithm is also a reduction of Hamiltonian Monte Carlo, consisting of a single leapfrog step proposal rather than a series of steps.[4] Since SGLD can be formulated as a modification of both stochastic gradient descent and MCMC methods, the method lies at the intersection between optimization and sampling algorithms; the method maintains SGD's ability to quickly converge to regions of low cost while providing samples to facilitate posterior inference. Under relaxed constraints on the step sizes $\varepsilon _{t}$ such that they do not approach zero asymptotically, SGLD fails to produce samples for which the Metropolis–Hastings rejection rate is zero, and thus a Metropolis–Hastings rejection step becomes necessary.[1] The resulting algorithm, dubbed the Metropolis-adjusted Langevin algorithm,[5] adds an accept/reject step: the proposal $\theta ^{t+1}$ is rejected whenever ${\frac {p^{*}(\theta ^{t+1})\,q(\theta ^{t}\mid \theta ^{t+1})}{p^{*}(\theta ^{t})\,q(\theta ^{t+1}\mid \theta ^{t})}}<u,\ u\sim {\mathcal {U}}[0,1]$ where $q(\theta '\mid \theta )$ is a normal density centered one Langevin step from $\theta $ and $p^{*}$ is the target distribution. 
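A minimal sketch of a Metropolis-adjusted Langevin step, here with exact gradients and a standard normal target; the target, step size, and chain length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):                 # log target density, up to an additive constant
    return -0.5 * x * x       # standard normal (illustrative choice)

def grad_log_p(x):
    return -x

def log_q(x_to, x_from, eps):
    # Log density (up to a constant) of the Langevin proposal:
    # N(x_from + (eps/2) * grad_log_p(x_from), eps)
    mean = x_from + 0.5 * eps * grad_log_p(x_from)
    return -0.5 * (x_to - mean) ** 2 / eps

eps, x, chain = 0.5, 3.0, []
for _ in range(20000):
    prop = x + 0.5 * eps * grad_log_p(x) + np.sqrt(eps) * rng.normal()
    log_alpha = (log_p(prop) + log_q(x, prop, eps)
                 - log_p(x) - log_q(prop, x, eps))
    if np.log(rng.uniform()) < log_alpha:     # Metropolis–Hastings correction
        x = prop
    chain.append(x)

sample_mean = np.mean(chain[2000:])           # near 0 for the N(0, 1) target
sample_var = np.var(chain[2000:])             # near 1
```

The normalization constants of the proposal densities cancel in the acceptance ratio, which is why `log_q` can omit them.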
Mixing rates and algorithmic convergence Recent contributions have proven upper bounds on mixing times for both the traditional Langevin algorithm and the Metropolis-adjusted Langevin algorithm.[5] Given in Ma et al., 2018, these bounds quantify the rate at which the algorithms converge to the true posterior distribution, defined formally as: $\tau (\varepsilon ;p^{0})=\min \left\{k\mid \left\|p^{k}-p^{*}\right\|_{\mathrm {TV} }\leq \varepsilon \right\}$ where $\varepsilon \in (0,1)$ is an arbitrary error tolerance, $p^{0}$ is some initial distribution, $p^{*}$ is the posterior distribution, and $\|\cdot \|_{\mathrm {TV} }$ is the total variation norm. Under some regularity conditions on an L-Lipschitz smooth objective function $U(x)$ which is m-strongly convex outside of a region of radius $R$ with condition number $\kappa ={\frac {L}{m}}$, the mixing rate bounds are: $\tau _{ULA}(\varepsilon ,p^{0})\leq {\mathcal {O}}\left(e^{32LR^{2}}\kappa ^{2}{\frac {d}{\varepsilon ^{2}}}\ln \left({\frac {d}{\varepsilon ^{2}}}\right)\right)$ $\tau _{MALA}(\varepsilon ,p^{0})\leq {\mathcal {O}}\left(e^{16LR^{2}}\kappa ^{3/2}d^{1/2}\left(d\ln \kappa +\ln \left({\frac {1}{\varepsilon }}\right)\right)^{3/2}\right)$ where $\tau _{ULA}$ and $\tau _{MALA}$ refer to the mixing rates of the unadjusted Langevin algorithm and the Metropolis-adjusted Langevin algorithm respectively. These bounds are important because they show that computational complexity is polynomial in dimension $d$ provided that $LR^{2}$ is ${\mathcal {O}}(\log d)$. References 1. Welling, Max; Teh, Yee Whye (2011). "Bayesian Learning via Stochastic Gradient Langevin Dynamics" (PDF). Proceedings of the 28th International Conference on Machine Learning: 681–688. 2. Chaudhari, Pratik; Choromanska, Anna; Soatto, Stefano; LeCun, Yann; Baldassi, Carlo; Borgs, Christian; Chayes, Jennifer; Sagun, Levent; Zecchina, Riccardo (2017). "Entropy-sgd: Biasing gradient descent into wide valleys". arXiv:1611.01838 [cs.LG]. 3. Kennedy, A. D. (1990). 
"The theory of hybrid stochastic algorithms". Probabilistic Methods in Quantum Field Theory and Quantum Gravity. Plenum Press. pp. 209–223. ISBN 0-306-43602-7. 4. Neal, R. (2011). "MCMC Using Hamiltonian Dynamics". Handbook of Markov Chain Monte Carlo. CRC Press. ISBN 978-1-4200-7941-8. 5. Ma, Y. A.; Chen, Y.; Jin, C.; Flammarion, N.; Jordan, M. I. (2018). "Sampling Can Be Faster Than Optimization". arXiv:1811.08413 [stat.ML].
Stochastic homogenization In homogenization theory, a branch of mathematics, stochastic homogenization is a technique for understanding solutions to partial differential equations with oscillatory random coefficients.[1] References 1. "Stochastic Homogenization". www.mis.mpg.de. Retrieved 11 June 2018.
Stochastic calculus Stochastic calculus is a branch of mathematics that operates on stochastic processes. It allows a consistent theory of integration to be defined for integrals of stochastic processes with respect to stochastic processes. The field was founded by the Japanese mathematician Kiyosi Itô during World War II. 
The best-known stochastic process to which stochastic calculus is applied is the Wiener process (named in honor of Norbert Wiener), which is used for modeling Brownian motion, as described by Louis Bachelier in 1900 and by Albert Einstein in 1905, and other physical diffusion processes of particles subject to random forces. Since the 1970s, the Wiener process has been widely applied in financial mathematics and economics to model the evolution in time of stock prices and bond interest rates. The main flavours of stochastic calculus are the Itô calculus and its variational relative the Malliavin calculus. For technical reasons the Itô integral is the most useful for general classes of processes, but the related Stratonovich integral is frequently useful in problem formulation (particularly in engineering disciplines). The Stratonovich integral can readily be expressed in terms of the Itô integral. The main benefit of the Stratonovich integral is that it obeys the usual chain rule and therefore does not require Itô's lemma. This enables problems to be expressed in a coordinate-system-invariant form, which is invaluable when developing stochastic calculus on manifolds other than Rn. The dominated convergence theorem does not hold for the Stratonovich integral; consequently it is very difficult to prove results without re-expressing the integrals in Itô form. Itô integral Main article: Itô calculus The Itô integral is central to the study of stochastic calculus. The integral $\int H\,dX$ is defined for a semimartingale X and locally bounded predictable process H. 
Stratonovich integral Main article: Stratonovich integral The Stratonovich integral or Fisk–Stratonovich integral of a semimartingale $X$ against another semimartingale Y can be defined in terms of the Itô integral as $\int _{0}^{t}X_{s-}\circ dY_{s}:=\int _{0}^{t}X_{s-}dY_{s}+{\frac {1}{2}}\left[X,Y\right]_{t}^{c},$ where $[X,Y]_{t}^{c}$ denotes the quadratic covariation of the continuous parts of X and Y. The alternative notation $\int _{0}^{t}X_{s}\,\partial Y_{s}$ is also used to denote the Stratonovich integral. Applications An important application of stochastic calculus is in mathematical finance, in which asset prices are often assumed to follow stochastic differential equations. For example, the Black–Scholes model prices options as if they follow a geometric Brownian motion, illustrating the opportunities and risks from applying stochastic calculus. Stochastic integrals Besides the classical Itô and Fisk–Stratonovich integrals, many different notions of stochastic integrals exist, such as the Hitsuda–Skorokhod integral, the Marcus integral, the Ogawa integral and more. See also • Itô calculus • Itô's lemma • Stratonovich integral • Semimartingale • Wiener process References • Fima C Klebaner, 2012, Introduction to Stochastic Calculus with Application (3rd Edition). World Scientific Publishing, ISBN 9781848168312 • Szabados, T. S.; Székely, B. Z. (2008). "Stochastic Integration Based on Simple, Symmetric Random Walks". Journal of Theoretical Probability. 22: 203. arXiv:0712.3908. doi:10.1007/s10959-007-0140-8. 
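The conversion formula between the two integrals can be checked numerically on a single simulated Brownian path, taking X = Y = W so that the correction term is half the quadratic variation; the grid size and seed below are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Approximate int_0^T W dW in the Ito and Stratonovich senses on one path.
T, n = 1.0, 200000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)   # Brownian increments
W = np.concatenate([[0.0], np.cumsum(dW)])     # Brownian path on the grid

ito = np.sum(W[:-1] * dW)                      # left-endpoint (Ito) sums
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)    # midpoint (Stratonovich) sums

# Conversion formula: Stratonovich = Ito + (1/2)[W, W]_T, with [W, W]_T = T,
# so the difference should be close to T/2 = 0.5.
correction = strat - ito
```

The midpoint sums telescope to $W_T^2/2$, matching the ordinary chain rule, while the Itô sums converge to $(W_T^2-T)/2$; the gap between them is the quadratic-variation correction in the formula above.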
Stochastic logarithm In stochastic calculus, the stochastic logarithm of a semimartingale $Y$ such that $Y\neq 0$ and $Y_{-}\neq 0$ is the semimartingale $X$ given by[1] $dX_{t}={\frac {dY_{t}}{Y_{t-}}},\quad X_{0}=0.$ In layperson's terms, the stochastic logarithm of $Y$ measures the cumulative percentage change in $Y$. Notation and terminology The process $X$ obtained above is commonly denoted ${\mathcal {L}}(Y)$. The terminology stochastic logarithm arises from the similarity of ${\mathcal {L}}(Y)$ to the natural logarithm $\log(Y)$: If $Y$ is absolutely continuous with respect to time and $Y\neq 0$, then $X$ solves, path-by-path, the differential equation ${\frac {dX_{t}}{dt}}={\frac {\frac {dY_{t}}{dt}}{Y_{t}}},$ whose solution is $X=\log |Y|-\log |Y_{0}|$. General formula and special cases • Without any assumptions on the semimartingale $Y$ (other than $Y\neq 0,Y_{-}\neq 0$), one has[1] ${\mathcal {L}}(Y)_{t}=\log {\Biggl |}{\frac {Y_{t}}{Y_{0}}}{\Biggr |}+{\frac {1}{2}}\int _{0}^{t}{\frac {d[Y]_{s}^{c}}{Y_{s-}^{2}}}+\sum _{s\leq t}{\Biggl (}{\frac {\Delta Y_{s}}{Y_{s-}}}-\log {\Biggl |}1+{\frac {\Delta Y_{s}}{Y_{s-}}}{\Biggr |}{\Biggr )},\qquad t\geq 0,$ where $[Y]^{c}$ is the continuous part of the quadratic variation of $Y$ and the sum extends over the (countably many) jumps of $Y$ up to time $t$. • If $Y$ is continuous, then ${\mathcal {L}}(Y)_{t}=\log {\Biggl |}{\frac {Y_{t}}{Y_{0}}}{\Biggr |}+{\frac {1}{2}}\int _{0}^{t}{\frac {d[Y]_{s}^{c}}{Y_{s-}^{2}}},\qquad t\geq 0.$ In particular, if $Y$ is a geometric Brownian motion, then $X$ is a Brownian motion with a constant drift rate. • If $Y$ is continuous and of finite variation, then ${\mathcal {L}}(Y)=\log {\Biggl |}{\frac {Y}{Y_{0}}}{\Biggr |}.$ Here $Y$ need not be differentiable with respect to time; for example, $Y$ can equal 1 plus the Cantor function. Properties • The stochastic logarithm is an inverse operation to the stochastic exponential: If $\Delta X\neq -1$, then ${\mathcal {L}}({\mathcal {E}}(X))=X-X_{0}$. 
Conversely, if $Y\neq 0$ and $Y_{-}\neq 0$, then ${\mathcal {E}}({\mathcal {L}}(Y))=Y/Y_{0}$.[1] • Unlike the natural logarithm $\log(Y_{t})$, which depends only on the value of $Y$ at time $t$, the stochastic logarithm ${\mathcal {L}}(Y)_{t}$ depends not only on $Y_{t}$ but on the whole history of $Y$ in the time interval $[0,t]$. For this reason one must write ${\mathcal {L}}(Y)_{t}$ and not ${\mathcal {L}}(Y_{t})$. • The stochastic logarithm of a local martingale that does not vanish together with its left limit is again a local martingale. • All the formulae and properties above apply also to the stochastic logarithm of a complex-valued $Y$. • The stochastic logarithm can also be defined for processes $Y$ that are absorbed in zero after jumping to zero. Such a definition is meaningful up to the first time that $Y$ reaches $0$ continuously.[2] Useful identities • Converse of the Yor formula:[1] If $Y^{(1)},Y^{(2)}$ do not vanish together with their left limits, then ${\mathcal {L}}{\bigl (}Y^{(1)}Y^{(2)}{\bigr )}={\mathcal {L}}{\bigl (}Y^{(1)}{\bigr )}+{\mathcal {L}}{\bigl (}Y^{(2)}{\bigr )}+{\bigl [}{\mathcal {L}}{\bigl (}Y^{(1)}{\bigr )},{\mathcal {L}}{\bigl (}Y^{(2)}{\bigr )}{\bigr ]}.$ • Stochastic logarithm of $1/{\mathcal {E}}(X)$:[2] If $\Delta X\neq -1$, then ${\mathcal {L}}{\biggl (}{\frac {1}{{\mathcal {E}}(X)}}{\biggr )}_{t}=X_{0}-X_{t}-[X]_{t}^{c}+\sum _{s\leq t}{\frac {(\Delta X_{s})^{2}}{1+\Delta X_{s}}}.$ Applications • Girsanov's theorem can be paraphrased as follows: Let $Q$ be a probability measure equivalent to another probability measure $P$. Denote by $Z$ the uniformly integrable martingale closed by $Z_{\infty }=dQ/dP$. For a semimartingale $U$ the following are equivalent: 1. Process $U$ is special under $Q$. 2. Process $U+[U,{\mathcal {L}}(Z)]$ is special under $P$. If either of these conditions holds, then the $Q$-drift of $U$ equals the $P$-drift of $U+[U,{\mathcal {L}}(Z)]$. References 1. Jacod, Jean; Shiryaev, Albert Nikolaevich (2003). 
Limit theorems for stochastic processes (2nd ed.). Berlin: Springer. pp. 134–138. ISBN 3-540-43932-3. OCLC 50554399. 2. Larsson, Martin; Ruf, Johannes (2019). "Stochastic exponentials and logarithms on stochastic intervals — A survey". Journal of Mathematical Analysis and Applications. 476 (1): 2–12. doi:10.1016/j.jmaa.2018.11.040. S2CID 119148331. See also • Stochastic exponential
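In a discrete-time sketch, the inverse relationship between the stochastic exponential and the stochastic logarithm can be checked directly: with jumps $\Delta X$, the discrete stochastic exponential is the running product of $(1+\Delta X)$, and applying the discrete stochastic logarithm recovers $X-X_{0}$ exactly. The jump sizes below are an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

# Treat X as a discrete-time (pure-jump) semimartingale with jumps dX > -1.
dX = rng.normal(0.0, 0.05, size=1000)
X = np.concatenate([[0.0], np.cumsum(dX)])

# Stochastic exponential E(X): dE = E_{-} dX, i.e. E_t = prod_s (1 + dX_s).
E = np.concatenate([[1.0], np.cumprod(1.0 + dX)])

# Stochastic logarithm L(Y): dL = dY / Y_{-}, applied here to Y = E(X).
dL = np.diff(E) / E[:-1]
L = np.concatenate([[0.0], np.cumsum(dL)])

# In this discrete setting L(E(X)) = X - X_0 holds exactly,
# up to floating-point rounding.
```

Each increment satisfies $\Delta L = E_{-}\,\Delta X / E_{-} = \Delta X$, which is the discrete analogue of the identity ${\mathcal {L}}({\mathcal {E}}(X))=X-X_{0}$.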
Mixing (mathematics) In mathematics, mixing is an abstract concept originating from physics: the attempt to describe the irreversible thermodynamic process of mixing in the everyday world: e.g. mixing paint, mixing drinks, industrial mixing. The concept appears in ergodic theory—the study of stochastic processes and measure-preserving dynamical systems. Several different definitions for mixing exist, including strong mixing, weak mixing and topological mixing, with the last not requiring a measure to be defined. Some of the different definitions of mixing can be arranged in a hierarchical order; thus, strong mixing implies weak mixing. Furthermore, weak mixing (and thus also strong mixing) implies ergodicity: that is, every system that is weakly mixing is also ergodic (and so one says that mixing is a "stronger" condition than ergodicity). Informal explanation The mathematical definition of mixing aims to capture the ordinary every-day process of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, and so on. To provide the mathematical rigor, such descriptions begin with the definition of a measure-preserving dynamical system, written as $(X,{\mathcal {A}},\mu ,T)$. The set $X$ is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure $\mu $ is understood to define the natural volume of the space $X$ and of its subspaces. The collection of subspaces is denoted by ${\mathcal {A}}$, and the size of any given subset $A\subset X$ is $\mu (A)$; the size is its volume. Naively, one could imagine ${\mathcal {A}}$ to be the power set of $X$; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach-Tarski paradox). Thus, conventionally, ${\mathcal {A}}$ consists of the measurable subsets—the subsets that do have a volume. 
It is conventionally taken to be the Borel σ-algebra—the collection of subsets that can be constructed from the open sets by taking intersections, unions and set complements; these can always be taken to be measurable. The time evolution of the system is described by a map $T:X\to X$. Given some subset $A\subset X$, its map $T(A)$ will in general be a deformed version of $A$ – it is squashed or stretched, folded or cut into pieces. Mathematical examples include the baker's map and the horseshoe map, both inspired by bread-making. The set $T(A)$ must have the same volume as $A$; the squashing/stretching does not alter the volume of the space, only its distribution. Such a system is "measure-preserving" (area-preserving, volume-preserving). A formal difficulty arises when one tries to reconcile the volume of sets with the need to preserve their size under a map. The problem arises because, in general, several different points in the domain of a function can map to the same point in its range; that is, there may be $x\neq y$ with $T(x)=T(y)$. Worse, a single point $x\in X$ has no size. These difficulties can be avoided by working with the inverse map $T^{-1}:{\mathcal {A}}\to {\mathcal {A}}$; it will map any given subset $A\subset X$ to the parts that were assembled to make it: these parts are $T^{-1}(A)\in {\mathcal {A}}$. It has the important property of not "losing track" of where things came from. More strongly, it has the important property that any (measure-preserving) map ${\mathcal {A}}\to {\mathcal {A}}$ is the inverse of some map $X\to X$. The proper definition of a volume-preserving map is one for which $\mu (A)=\mu (T^{-1}(A))$ because $T^{-1}(A)$ describes all the pieces-parts that $A$ came from. One is now interested in studying the time evolution of the system. If a set $A\in {\mathcal {A}}$ eventually visits all of $X$ over a long period of time (that is, if $\cup _{k=1}^{n}T^{k}(A)$ approaches all of $X$ for large $n$), the system is said to be ergodic. 
If every set $A$ behaves in this way, the system is a conservative system, placed in contrast to a dissipative system, where some subsets $A$ wander away, never to be returned to. An example would be water running downhill: once it has run down, it will never come back up again. The lake that forms at the bottom of this river can, however, become well-mixed. The ergodic decomposition theorem states that every ergodic system can be split into two parts: the conservative part, and the dissipative part. Mixing is a stronger statement than ergodicity. Mixing asks for this ergodic property to hold between any two sets $A,B$, and not just between some set $A$ and $X$. That is, given any two sets $A,B\in {\mathcal {A}}$, a system is said to be (topologically) mixing if there is an integer $N$ such that, for all $n>N$, one has that $T^{n}(A)\cap B\neq \varnothing $. Here, $\cap $ denotes set intersection and $\varnothing $ is the empty set. The above definition of topological mixing should be enough to provide an informal idea of mixing (it is equivalent to the formal definition, given below). However, it made no mention of the volume of $A$ and $B$, and, indeed, there is another definition that explicitly works with the volume. Several, actually; one has both strong mixing and weak mixing; they are inequivalent, although a strong mixing system is always weakly mixing. The measure-based definitions are not compatible with the definition of topological mixing: there are systems which are one, but not the other. The general situation remains cloudy: for example, given three sets $A,B,C\in {\mathcal {A}}$, one can define 3-mixing. As of 2020, it is not known if 2-mixing implies 3-mixing. (If one thinks of ergodicity as "1-mixing", then it is clear that 1-mixing does not imply 2-mixing; there are systems that are ergodic but not mixing.) The concept of strong mixing is made in reference to the volume of a pair of sets. 
Consider, for example, a set $A$ of colored dye that is being mixed into a cup of some sort of sticky liquid, say, corn syrup, or shampoo, or the like. Practical experience shows that mixing sticky fluids can be quite hard: there is usually some corner of the container into which it is hard to get the dye mixed. Pick as the set $B$ that hard-to-reach corner. The question of mixing is then, can $A$, after a long enough period of time, not only penetrate into $B$ but also fill $B$ with the same proportion as it does elsewhere? One phrases the definition of strong mixing as the requirement that $\lim _{n\to \infty }\mu \left(T^{-n}A\cap B\right)=\mu (A)\mu (B).$ The time parameter $n$ serves to separate $A$ and $B$ in time, so that one is mixing $A$ while holding the test volume $B$ fixed. The product $\mu (A)\mu (B)$ is a bit more subtle. Imagine that the volume $B$ is 10% of the total volume, and that the volume of dye $A$ will also be 10% of the grand total. If $A$ is uniformly distributed, then it is occupying 10% of $B$, which itself is 10% of the total, and so, in the end, after mixing, the part of $A$ that is in $B$ is 1% of the total volume. That is, $\mu \left({\mbox{after-mixing}}(A)\cap B\right)=\mu (A)\mu (B).$ This product-of-volumes has more than a passing resemblance to the notion of independent events in probability; this is not an accident, but rather a consequence of the fact that measure theory and probability theory are the same theory: they share the same axioms (the Kolmogorov axioms), even as they use different notation. The reason for using $T^{-n}A$ instead of $T^{n}A$ in the definition is a bit subtle, but it follows from the same reasons why $T^{-1}A$ was used to define the concept of a measure-preserving map. When looking at how much dye got mixed into the corner $B$, one wants to look at where that dye "came from" (presumably, it was poured in at the top, at some time in the past). One must be sure that every place it might have "come from" eventually gets mixed into $B$. 
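The strong-mixing requirement above can be illustrated with a Monte Carlo estimate for the dyadic (doubling) map $T(x)=2x{\bmod {1}}$, which preserves Lebesgue measure on $[0,1)$; the sets, sample size, and iteration count below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo check of mu(A ∩ T^{-n} B) -> mu(A) mu(B) for the doubling map.
x = rng.uniform(size=500_000)
in_A = x < 0.5                 # A = [0, 1/2), mu(A) = 1/2

y = x.copy()
for _ in range(8):             # apply T eight times (doubling is exact in floats)
    y = (2.0 * y) % 1.0
in_TnB = y < 0.5               # x lies in T^{-8}B iff T^8(x) lies in B = [0, 1/2)

mixed = np.mean(in_A & in_TnB)  # estimates mu(A ∩ T^{-8} B)
product = 0.5 * 0.5             # mu(A) mu(B) = 0.25
```

For these dyadic sets the limit is reached immediately: the estimate agrees with $\mu (A)\mu (B)=0.25$ up to Monte Carlo error, reflecting the fact that after a few doublings the dye $A$ is spread uniformly through the test volume $B$.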
Mixing in dynamical systems Let $(X,{\mathcal {A}},\mu ,T)$ be a measure-preserving dynamical system, with T being the time-evolution or shift operator. The system is said to be strong mixing if, for any $A,B\in {\mathcal {A}}$, one has $\lim _{n\to \infty }\mu \left(A\cap T^{-n}B\right)=\mu (A)\mu (B).$ For shifts parametrized by a continuous variable instead of a discrete integer n, the same definition applies, with $T^{-n}$ replaced by $T_{g}$ with g being the continuous-time parameter. A dynamical system is said to be weak mixing if one has $\lim _{n\to \infty }{\frac {1}{n}}\sum _{k=0}^{n-1}\left|\mu (A\cap T^{-k}B)-\mu (A)\mu (B)\right|=0.$ In other words, $T$ is strong mixing if $\mu (A\cap T^{-n}B)-\mu (A)\mu (B)\to 0$ in the usual sense, weak mixing if $\left|\mu (A\cap T^{-n}B)-\mu (A)\mu (B)\right|\to 0,$ in the Cesàro sense, and ergodic if $\mu \left(A\cap T^{-n}B\right)\to \mu (A)\mu (B)$ in the Cesàro sense. Hence, strong mixing implies weak mixing, which implies ergodicity. However, the converse is not true: there exist ergodic dynamical systems which are not weakly mixing, and weakly mixing dynamical systems which are not strongly mixing. The Chacon system was historically the first example given of a system that is weak-mixing but not strong-mixing.[1] L2 formulation The properties of ergodicity, weak mixing and strong mixing of a measure-preserving dynamical system can also be characterized by the average of observables. 
By von Neumann's ergodic theorem, ergodicity of a dynamical system $(X,{\mathcal {A}},\mu ,T)$ is equivalent to the property that, for any function $f\in L^{2}(X,\mu )$, the sequence $(f\circ T^{n})_{n\geq 0}$ converges strongly and in the sense of Cesàro to $\int _{X}f\,d\mu $, i.e., $\lim _{N\to \infty }\left\|{1 \over N}\sum _{n=0}^{N-1}f\circ T^{n}-\int _{X}f\,d\mu \right\|_{L^{2}(X,\mu )}=0.$ A dynamical system $(X,{\mathcal {A}},\mu ,T)$ is weakly mixing if, for any functions $f$ and $g\in L^{2}(X,\mu ),$ $\lim _{N\to \infty }{1 \over N}\sum _{n=0}^{N-1}\left|\int _{X}f\circ T^{n}\cdot gd\mu -\int _{X}f\,d\mu \cdot \int _{X}g\,d\mu \right|=0.$ A dynamical system $(X,{\mathcal {A}},\mu ,T)$ is strongly mixing if, for any function $f\in L^{2}(X,\mu ),$ the sequence $(f\circ T^{n})_{n\geq 0}$ converges weakly to $\int _{X}f\,d\mu ,$ i.e., for any function $g\in L^{2}(X,\mu ),$ $\lim _{n\to \infty }\int _{X}f\circ T^{n}\cdot g\,d\mu =\int _{X}f\,d\mu \cdot \int _{X}g\,d\mu .$ Since the system is assumed to be measure preserving, this last line is equivalent to saying that the covariance $\lim _{n\to \infty }\operatorname {Cov} (f\circ T^{n},g)=0,$ so that the random variables $f\circ T^{n}$ and $g$ become orthogonal as $n$ grows. Actually, since this works for any function $g,$ one can informally see mixing as the property that the random variables $f\circ T^{n}$ and $g$ become independent as $n$ grows. Products of dynamical systems Given two measured dynamical systems $(X,\mu ,T)$ and $(Y,\nu ,S),$ one can construct a dynamical system $(X\times Y,\mu \otimes \nu ,T\times S)$ on the Cartesian product by defining $(T\times S)(x,y)=(T(x),S(y)).$ We then have the following characterizations of weak mixing: Proposition. A dynamical system $(X,\mu ,T)$ is weakly mixing if and only if, for any ergodic dynamical system $(Y,\nu ,S)$, the system $(X\times Y,\mu \otimes \nu ,T\times S)$ is also ergodic. Proposition. 
A dynamical system $(X,\mu ,T)$ is weakly mixing if and only if $(X^{2},\mu \otimes \mu ,T\times T)$ is also ergodic. If this is the case, then $(X^{2},\mu \otimes \mu ,T\times T)$ is also weakly mixing. Generalizations The definition given above is sometimes called strong 2-mixing, to distinguish it from higher orders of mixing. A strong 3-mixing system may be defined as a system for which $\lim _{m,n\to \infty }\mu (A\cap T^{-m}B\cap T^{-m-n}C)=\mu (A)\mu (B)\mu (C)$ holds for all measurable sets A, B, C. We can define strong k-mixing similarly. A system which is strong k-mixing for all k = 2,3,4,... is called mixing of all orders. It is unknown whether strong 2-mixing implies strong 3-mixing. It is known that strong m-mixing implies ergodicity. Examples Irrational rotations of the circle, and more generally irreducible translations on a torus, are ergodic but neither strongly nor weakly mixing with respect to the Lebesgue measure. Many maps considered chaotic are strongly mixing for some well-chosen invariant measure, including the dyadic map, Arnold's cat map, horseshoe maps, Kolmogorov automorphisms, and the Anosov flow (the geodesic flow on the unit tangent bundle of compact manifolds of negative curvature). Topological mixing A form of mixing may be defined without appeal to a measure, using only the topology of the system. A continuous map $f:X\to X$ is said to be topologically transitive if, for every pair of non-empty open sets $A,B\subset X$, there exists an integer n such that $f^{n}(A)\cap B\neq \varnothing $ where $f^{n}$ is the nth iterate of f. In operator theory, a topologically transitive bounded linear operator (a continuous linear map on a topological vector space) is usually called a hypercyclic operator. A related idea is expressed by the wandering set.
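Topological transitivity can be made concrete for the doubling map f(x) = 2x mod 1 on the circle, which is in fact topologically mixing: a dyadic interval of length 2⁻ᵏ maps onto the whole circle after k iterations, so its image eventually meets every non-empty open set. A sketch using exact interval arithmetic; the starting interval is an arbitrary illustrative choice.

```python
from fractions import Fraction

# Iterate a half-open interval [a, b) under the doubling map f(x) = 2x mod 1,
# keeping exact rational endpoints. A dyadic interval of length 2**-k
# covers the whole circle after k steps, witnessing topological mixing.
def iterate_interval(a, b, n):
    """Return f^n([a, b)) as a list of disjoint half-open intervals."""
    pieces = [(Fraction(a), Fraction(b))]
    for _ in range(n):
        new = []
        for lo, hi in pieces:
            lo, hi = 2 * lo, 2 * hi
            if hi - lo >= 1:                 # image wraps the whole circle
                return [(Fraction(0), Fraction(1))]
            shift = lo - (lo % 1)            # bring lo back into [0, 1)
            lo, hi = lo - shift, hi - shift
            if hi <= 1:
                new.append((lo, hi))
            else:                            # image wraps past the point 0
                new.append((lo, Fraction(1)))
                new.append((Fraction(0), hi - 1))
        pieces = new
    return pieces

print(iterate_interval(Fraction(3, 8), Fraction(1, 2), 3))
# the image of a length-1/8 interval covers all of [0, 1) after 3 steps
```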
Lemma: If X is a complete metric space with no isolated point, then f is topologically transitive if and only if there exists a hypercyclic point $x\in X$, that is, a point x such that its orbit $\{f^{n}(x):n\in \mathbb {N} \}$ is dense in X. A system is said to be topologically mixing if, given open sets $A$ and $B$, there exists an integer N, such that, for all $n>N$, one has $f^{n}(A)\cap B\neq \varnothing .$ For a continuous-time system, $f^{n}$ is replaced by the flow $\varphi _{g}$, with g being the continuous parameter, with the requirement that a non-empty intersection hold for all $\Vert g\Vert >N$. A weak topological mixing is one that has no non-constant continuous (with respect to the topology) eigenfunctions of the shift operator. Topological mixing neither implies, nor is implied by either weak or strong mixing: there are examples of systems that are weak mixing but not topologically mixing, and examples that are topologically mixing but not strong mixing. Mixing in stochastic processes Let $(X_{t})_{-\infty <t<\infty }$ be a stochastic process on a probability space $(\Omega ,{\mathcal {F}},\mathbb {P} )$. The sequence space into which the process maps can be endowed with a topology, the product topology. The open sets of this topology are called cylinder sets. These cylinder sets generate a σ-algebra, the Borel σ-algebra; this is the smallest σ-algebra that contains the topology. Define a function $\alpha $, called the strong mixing coefficient, as $\alpha (s)=\sup \left\{|\mathbb {P} (A\cap B)-\mathbb {P} (A)\mathbb {P} (B)|:-\infty <t<\infty ,A\in X_{-\infty }^{t},B\in X_{t+s}^{\infty }\right\}$ for all $-\infty <s<\infty $. The symbol $X_{a}^{b}$, with $-\infty \leq a\leq b\leq \infty $ denotes a sub-σ-algebra of the σ-algebra; it is the set of cylinder sets that are specified between times a and b, i.e. the σ-algebra generated by $\{X_{a},X_{a+1},\ldots ,X_{b}\}$. 
The process $(X_{t})_{-\infty <t<\infty }$ is said to be strongly mixing if $\alpha (s)\to 0$ as $s\to \infty $. That is to say, a strongly mixing process is such that, in a way that is uniform over all times $t$ and all events, the events before time $t$ and the events after time $t+s$ tend towards being independent as $s\to \infty $; more colloquially, the process, in a strong sense, forgets its history. Mixing in Markov processes Suppose $(X_{t})$ were a stationary Markov process with stationary distribution $\mathbb {Q} $ and let $L^{2}(\mathbb {Q} )$ denote the space of Borel-measurable functions that are square-integrable with respect to the measure $\mathbb {Q} $. Also let ${\mathcal {E}}_{t}\varphi (x)=\mathbb {E} [\varphi (X_{t})\mid X_{0}=x]$ denote the conditional expectation operator on $L^{2}(\mathbb {Q} ).$ Finally, let $Z=\left\{\varphi \in L^{2}(\mathbb {Q} ):\int \varphi \,d\mathbb {Q} =0\right\}$ denote the space of square-integrable functions with mean zero. The ρ-mixing coefficients of the process $(X_{t})$ are $\rho _{t}=\sup _{\varphi \in Z:\,\|\varphi \|_{2}=1}\|{\mathcal {E}}_{t}\varphi \|_{2}.$ The process is called ρ-mixing if these coefficients converge to zero as t → ∞, and “ρ-mixing with exponential decay rate” if ρt < e−δt for some δ > 0.
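For a chain with finitely many states, the conditional expectation operator 𝓔ₜ is simply the t-step transition matrix acting on functions, so ρₜ can be computed directly. A sketch for a two-state chain; the transition matrix is an illustrative choice, and with two states the mean-zero subspace Z is one-dimensional, so the supremum is attained in a single direction and ρₜ decays exactly like |λ₂|ᵗ, where λ₂ is the second eigenvalue of the transition matrix.

```python
import numpy as np

# rho-mixing coefficient of a stationary 2-state Markov chain (a sketch;
# the transition matrix P is an arbitrary illustrative choice).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Stationary distribution Q solves Q P = Q (left eigenvector for eigenvalue 1).
w, v = np.linalg.eig(P.T)
q = np.real(v[:, np.argmax(np.real(w))])
q = q / q.sum()                      # here Q = [2/3, 1/3]

def rho(t):
    """sup over mean-zero, L2(Q)-unit phi of ||E_t phi||_{L2(Q)}."""
    Pt = np.linalg.matrix_power(P, t)
    # With two states, Z = {phi : q . phi = 0} is spanned by one direction.
    phi = np.array([q[1], -q[0]])
    phi = phi / np.sqrt((q * phi ** 2).sum())   # normalize in L2(Q)
    Et_phi = Pt @ phi                           # conditional expectation E_t phi
    return float(np.sqrt((q * Et_phi ** 2).sum()))

for t in (1, 2, 5, 10):
    print(t, rho(t))   # decays like |lambda_2|**t = 0.7**t
```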
For a stationary Markov process, the coefficients ρt may either decay at an exponential rate or be always equal to one.[2] The α-mixing coefficients of the process $(X_{t})$ are $\alpha _{t}=\sup _{\varphi \in Z:\|\varphi \|_{\infty }=1}\|{\mathcal {E}}_{t}\varphi \|_{1}.$ The process is called α-mixing if these coefficients converge to zero as t → ∞; it is “α-mixing with exponential decay rate” if αt < γe−δt for some δ > 0; and it is α-mixing with a sub-exponential decay rate if αt < ξ(t) for some non-increasing function $\xi $ satisfying ${\frac {\ln \xi (t)}{t}}\to 0$ as $t\to \infty $.[2] The α-mixing coefficients are always smaller than the ρ-mixing ones: αt ≤ ρt; therefore, if the process is ρ-mixing, it will necessarily be α-mixing too. However, when ρt = 1, the process may still be α-mixing with a sub-exponential decay rate. The β-mixing coefficients are given by $\beta _{t}=\int \sup _{0\leq \varphi \leq 1}\left|{\mathcal {E}}_{t}\varphi (x)-\int \varphi \,d\mathbb {Q} \right|\,d\mathbb {Q} .$ The process is called β-mixing if these coefficients converge to zero as t → ∞; it is β-mixing with an exponential decay rate if βt < γe−δt for some δ > 0; and it is β-mixing with a sub-exponential decay rate if βtξ(t) → 0 as t → ∞ for some non-increasing function $\xi $ satisfying ${\frac {\ln \xi (t)}{t}}\to 0$ as $t\to \infty $.[2] A strictly stationary Markov process is β-mixing if and only if it is an aperiodic recurrent Harris chain. The β-mixing coefficients are always bigger than the α-mixing ones, so if a process is β-mixing it will also be α-mixing. There is no direct relationship between β-mixing and ρ-mixing: neither of them implies the other. References • V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics, (1968) W. A. Benjamin, Inc. • Achim Klenke, Probability Theory, (2006) Springer ISBN 978-1-84800-047-6 • Chen, Xiaohong; Hansen, Lars Peter; Carrasco, Marine (2010). "Nonlinearity and temporal dependence". Journal of Econometrics.
155 (2): 155–169. CiteSeerX 10.1.1.597.8777. doi:10.1016/j.jeconom.2009.10.001. S2CID 10567129. 1. Matthew Nicol and Karl Petersen, (2009) "Ergodic Theory: Basic Examples and Constructions", Encyclopedia of Complexity and Systems Science, Springer https://doi.org/10.1007/978-0-387-30440-3_177 2. Chen, Hansen & Carrasco (2010)
Wikipedia
Stochastic multicriteria acceptability analysis Stochastic multicriteria acceptability analysis (SMAA) is a multiple-criteria decision analysis method for problems with missing or incomplete information. Description In such problems, the criteria and preference information can be uncertain, inaccurate or partially missing. Incomplete information is represented in SMAA using suitable probability distributions. The method is based on stochastic simulation by drawing random values for criteria measurements and weights from their corresponding distributions.[1] SMAA can handle mixed cardinal and ordinal information. Ordinal information is treated by a special joint distribution that preserves the ordinal information.[2] A survey of the different variants and applications of SMAA is given by Tervonen and Figueira.[3] Open source implementations of SMAA can be found at the website SMAA.fi.[4] References 1. Lahdelma, R.; Salminen, P. (2001). "SMAA-2: Stochastic Multicriteria Acceptability Analysis for Group Decision Making". Operations Research. 49 (3): 444–454. CiteSeerX 10.1.1.138.4807. doi:10.1287/opre.49.3.444.11220. 2. Sousa R., Yevseyeva I., Pinto da Costa J.F., Cardoso J.S. (2013). Multicriteria models for learning ordinal data: A literature review. In Yang X.S. Artificial Intelligence, Evolutionary Computing and Metaheuristics: In the Footsteps of Alan Turing. Studies in Computational Intelligence 427, Springer. 3. Tervonen, T.; Figueira, J. (2008). "A survey on stochastic multicriteria acceptability analysis methods". Journal of Multi-Criteria Decision Analysis. 15 (1–2): 1–14. doi:10.1002/mcda.407. 4. Tervonen, Tommi. "Open source decision aiding software for real-life applications". SMAA.fi. Retrieved 17 December 2016.
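The simulation scheme described above can be sketched in a few lines. This toy example draws criterion weights uniformly from the simplex and noisy criterion values from uniform distributions, then estimates each alternative's rank-1 acceptability index, i.e. how often it scores best. The alternatives, criterion means and noise spread are invented for illustration; this is not the full SMAA-2 procedure.

```python
import random

# Toy SMAA-style Monte Carlo: random simplex weights + random criterion
# values -> rank-1 acceptability indices. All data below are invented.
means = {"A": [0.8, 0.3], "B": [0.5, 0.6], "C": [0.2, 0.9]}
spread = 0.1                        # uniform +/- noise on each criterion

def simplex_weights(rng, k):
    """Draw k weights uniformly from the (k-1)-simplex (stick breaking)."""
    cuts = sorted(rng.random() for _ in range(k - 1))
    return [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]

def rank1_acceptability(iters=20_000, seed=1):
    rng = random.Random(seed)
    wins = {name: 0 for name in means}
    for _ in range(iters):
        w = simplex_weights(rng, 2)
        scores = {n: sum(wi * (m + rng.uniform(-spread, spread))
                         for wi, m in zip(w, ms))
                  for n, ms in means.items()}
        wins[max(scores, key=scores.get)] += 1
    return {n: c / iters for n, c in wins.items()}

print(rank1_acceptability())   # indices lie in [0, 1] and sum to 1
```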
Queueing theory Queueing theory is the mathematical study of waiting lines, or queues.[1] A queueing model is constructed so that queue lengths and waiting time can be predicted.[1] Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the system of incoming calls at the Copenhagen Telephone Exchange Company.[1] These ideas have since seen applications in telecommunication, traffic engineering, computing,[2] project management, and particularly industrial engineering, where they are applied in the design of factories, shops, offices, and hospitals.[3][4] Spelling The spelling "queueing" over "queuing" is typically encountered in the academic research field. In fact, one of the flagship journals of the field is Queueing Systems. Single queueing nodes A queue or queueing node can be thought of as nearly a black box. Jobs (also called customers or requests, depending on the field) arrive to the queue, possibly wait some time, take some time being processed, and then depart from the queue. However, the queueing node is not quite a pure black box since some information is needed about the inside of the queuing node. The queue has one or more servers which can each be paired with an arriving job. When the job is completed and departs, that server will again be free to be paired with another arriving job. An analogy often used is that of the cashier at a supermarket. (There are other models, but this one is commonly encountered in the literature.) Customers arrive, are processed by the cashier, and depart. Each cashier processes one customer at a time, and hence this is a queueing node with only one server. 
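The single-server node described above can be simulated with the Lindley recursion, which advances each customer's waiting time as W₊ = max(0, W + S − A), where S is the current customer's service time and A the time until the next arrival. A sketch under an illustrative M/M/1 assumption (exponential interarrival and service times; the rates below are arbitrary choices).

```python
import random

# Minimal single-server FIFO queue (the cashier picture) via the Lindley
# recursion. With exponential interarrival and service times this is an
# M/M/1 node; lam and mu are illustrative rate choices.
def average_wait(lam, mu, n_customers=200_000, seed=42):
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        total += w                        # record this customer's wait
        service = rng.expovariate(mu)     # this customer's service time
        interarrival = rng.expovariate(lam)  # gap to the next arrival
        w = max(0.0, w + service - interarrival)
    return total / n_customers

# Theory for M/M/1: mean wait in queue is rho / (mu - lam), rho = lam / mu.
print(average_wait(lam=0.8, mu=1.0))   # ≈ 0.8 / (1.0 - 0.8) = 4.0
```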
A setting where a customer will leave immediately if the cashier is busy when the customer arrives, is referred to as a queue with no buffer (or no waiting area). A setting with a waiting zone for up to n customers is called a queue with a buffer of size n. Birth-death process The behaviour of a single queue (also called a queueing node) can be described by a birth–death process, which describes the arrivals and departures from the queue, along with the number of jobs currently in the system. If k denotes the number of jobs in the system (either being serviced or waiting if the queue has a buffer of waiting jobs), then an arrival increases k by 1 and a departure decreases k by 1. The system transitions between values of k by "births" and "deaths", which occur at the arrival rates $\lambda _{i}$ and the departure rates $\mu _{i}$ for each job $i$. For a queue, these rates are generally considered not to vary with the number of jobs in the queue, so a single average rate of arrivals/departures per unit time is assumed. Under this assumption, this process has an arrival rate of $\lambda ={\text{avg}}(\lambda _{1},\lambda _{2},\dots ,\lambda _{k})$ and a departure rate of $\mu ={\text{avg}}(\mu _{1},\mu _{2},\dots ,\mu _{k})$. Balance equations The steady state equations for the birth-and-death process, known as the balance equations, are as follows. Here $P_{n}$ denotes the steady state probability to be in state n. $\mu _{1}P_{1}=\lambda _{0}P_{0}$ $\lambda _{0}P_{0}+\mu _{2}P_{2}=(\lambda _{1}+\mu _{1})P_{1}$ $\lambda _{n-1}P_{n-1}+\mu _{n+1}P_{n+1}=(\lambda _{n}+\mu _{n})P_{n}$ The first two equations imply $P_{1}={\frac {\lambda _{0}}{\mu _{1}}}P_{0}$ and $P_{2}={\frac {\lambda _{1}}{\mu _{2}}}P_{1}+{\frac {1}{\mu _{2}}}(\mu _{1}P_{1}-\lambda _{0}P_{0})={\frac {\lambda _{1}}{\mu _{2}}}P_{1}={\frac {\lambda _{1}\lambda _{0}}{\mu _{2}\mu _{1}}}P_{0}$. 
By mathematical induction, $P_{n}={\frac {\lambda _{n-1}\lambda _{n-2}\cdots \lambda _{0}}{\mu _{n}\mu _{n-1}\cdots \mu _{1}}}P_{0}=P_{0}\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}$. The condition $\sum _{n=0}^{\infty }P_{n}=P_{0}+P_{0}\sum _{n=1}^{\infty }\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}=1$ leads to $P_{0}={\frac {1}{1+\sum _{n=1}^{\infty }\prod _{i=0}^{n-1}{\frac {\lambda _{i}}{\mu _{i+1}}}}}$ which, together with the equation for $P_{n}$ $(n\geq 1)$, fully describes the required steady state probabilities. Kendall's notation Single queueing nodes are usually described using Kendall's notation in the form A/S/c where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs, and c the number of servers at the node.[5][6] For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process (where inter-arrival durations are exponentially distributed) and have exponentially distributed service times (the M denotes a Markov process). In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times. Example analysis of an M/M/1 queue Consider a queue with one server and the following characteristics: • $\lambda $: the arrival rate (the reciprocal of the expected time between each customer arriving, e.g. 10 customers per second) • $\mu $: the reciprocal of the mean service time (the expected number of consecutive service completions per the same unit time, e.g. per 30 seconds) • n: the parameter characterizing the number of customers in the system • $P_{n}$: the probability of there being n customers in the system in steady state Further, let $E_{n}$ represent the number of times the system enters state n, and $L_{n}$ represent the number of times the system leaves state n. Then $\left\vert E_{n}-L_{n}\right\vert \in \{0,1\}$ for all n. 
That is, the number of times the system leaves a state differs by at most 1 from the number of times it enters that state, since it will either return into that state at some time in the future ($E_{n}=L_{n}$) or not ($\left\vert E_{n}-L_{n}\right\vert =1$). When the system arrives at a steady state, the arrival rate should be equal to the departure rate. Thus the balance equations $\mu P_{1}=\lambda P_{0}$ $\lambda P_{0}+\mu P_{2}=(\lambda +\mu )P_{1}$ $\lambda P_{n-1}+\mu P_{n+1}=(\lambda +\mu )P_{n}$ imply $P_{n}={\frac {\lambda }{\mu }}P_{n-1},\ n=1,2,\ldots $ The fact that $P_{0}+P_{1}+\cdots =1$ leads to the geometric distribution formula $P_{n}=(1-\rho )\rho ^{n}$ where $\rho ={\frac {\lambda }{\mu }}<1$. Simple two-equation queue A common basic queuing system is attributed to Erlang and is a modification of Little's Law. Given an arrival rate λ, a dropout rate σ, and a departure rate μ, length of the queue L is defined as: $L={\frac {\lambda -\sigma }{\mu }}$. Assuming an exponential distribution for the rates, the waiting time W can be defined as the proportion of arrivals that are served. 
This is equal to the exponential survival rate of those who do not drop out over the waiting period, giving: ${\frac {\mu }{\lambda }}=e^{-W{\mu }}$ The second equation is commonly rewritten as: $W={\frac {1}{\mu }}\mathrm {ln} {\frac {\lambda }{\mu }}$ The two-stage one-box model is common in epidemiology.[7] History In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory.[8][9][10] He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and M/D/k queueing model in 1920.[11] In Kendall's notation: • M stands for "Markov" or "memoryless", and means arrivals occur according to a Poisson process • D stands for "deterministic", and means jobs arriving at the queue require a fixed amount of service • k describes the number of servers at the queueing node (k = 1, 2, 3, ...) If the node has more jobs than servers, then jobs will queue and wait for service. The M/G/1 queue was solved by Felix Pollaczek in 1930,[12] a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula.[11][13] After the 1940s, queueing theory became an area of research interest to mathematicians.[13] In 1953, David George Kendall solved the GI/M/k queue[14] and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 using an integral equation.[15] John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula.[16] Leonard Kleinrock worked on the application of queueing theory to message switching in the early 1960s and packet switching in the early 1970s. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. 
His theoretical work published in the early 1970s underpinned the use of packet switching in the ARPANET, a forerunner to the Internet. The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered.[17] Systems with coupled orbits are an important part in queueing theory in the application to wireless networks and signal processing.[18] Modern day application of queueing theory concerns among other things product development where (material) products have a spatiotemporal existence, in the sense that products have a certain volume and a certain duration.[19] Problems such as performance metrics for the M/G/k queue remain an open problem.[11][13] Service disciplines Various scheduling policies can be used at queuing nodes: First in, first out Also called first-come, first-served (FCFS),[20] this principle states that customers are served one at a time and that the customer that has been waiting the longest is served first.[21] Last in, first out This principle also serves customers one at a time, but the customer with the shortest waiting time will be served first.[21] Also known as a stack. Processor sharing Service capacity is shared equally between customers.[21] Priority Customers with high priority are served first.[21] Priority queues can be of two types: non-preemptive (where a job in service cannot be interrupted) and preemptive (where a job in service can be interrupted by a higher-priority job). 
No work is lost in either model.[22] Shortest job first The next job to be served is the one with the smallest size.[23] Preemptive shortest job first The next job to be served is the one with the smallest original size.[24] Shortest remaining processing time The next job to serve is the one with the smallest remaining processing requirement.[25] Service facility • Single server: customers line up and there is only one server • Several parallel servers (single queue): customers line up and there are several servers • Several parallel servers (several queues): there are many counters and customers can decide for which to queue Unreliable server Server failures occur according to a stochastic (random) process (usually Poisson) and are followed by setup periods during which the server is unavailable. The interrupted customer remains in the service area until server is fixed.[26] Customer waiting behavior • Balking: customers decide not to join the queue if it is too long • Jockeying: customers switch between queues if they think they will get served faster by doing so • Reneging: customers leave the queue if they have waited too long for service Arriving customers not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue. Queueing networks Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network. For networks of m nodes, the state of the system can be described by an m–dimensional vector (x1, x2, ..., xm) where xi represents the number of customers at each node. 
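Returning to a single queueing node: the balance-equation solution Pₙ = P₀ ∏ᵢ λᵢ/μᵢ₊₁ derived earlier can be evaluated directly. A sketch for a birth-death chain truncated at a finite number of states (the truncation level and rates are illustrative choices); with constant rates it reproduces the M/M/1 geometric distribution Pₙ = (1 − ρ)ρⁿ.

```python
# Stationary distribution of a truncated birth-death chain from the
# product formula P_n = P_0 * prod_{i=0}^{n-1} lambda_i / mu_{i+1}.
def birth_death_stationary(lam, mu):
    """lam[i] is the birth rate out of state i (i = 0..N-1) and
    mu[i] is the death rate out of state i+1 (i = 0..N-1)."""
    weights = [1.0]
    for l, m in zip(lam, mu):
        weights.append(weights[-1] * l / m)
    total = sum(weights)                  # normalization: sum P_n = 1
    return [w / total for w in weights]

# Constant rates: an M/M/1 queue truncated at 200 jobs (tail mass ~ rho**201
# is negligible here, so the truncation barely perturbs the answer).
lam, mu, N = 0.5, 1.0, 200
P = birth_death_stationary([lam] * N, [mu] * N)
rho = lam / mu
print(P[0], 1 - rho)                 # ≈ 0.5
print(P[3], (1 - rho) * rho ** 3)    # ≈ 0.0625
```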
The simplest non-trivial networks of queues are called tandem queues.[27] The first significant results in this area were Jackson networks,[28][29] for which an efficient product-form stationary distribution exists and for which mean value analysis[30] allows average metrics such as throughput and sojourn times to be computed.[31] If the total number of customers in the network remains constant, the network is called a closed network and has been shown to also have a product–form stationary distribution by the Gordon–Newell theorem.[32] This result was extended to the BCMP network,[33] where a network with very general service times, regimes, and customer routing is shown to also exhibit a product–form stationary distribution. The normalizing constant can be calculated with Buzen's algorithm, proposed in 1973.[34] Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes.[35] Another type of network is the G-network, first proposed by Erol Gelenbe in 1993:[36] these networks do not assume exponential time distributions like the classic Jackson network. Routing algorithms In discrete-time networks where there is a constraint on which service nodes can be active at any time, the max-weight scheduling algorithm chooses a service policy to give optimal throughput in the case that each job visits only a single service node.[20] In the more general case where jobs can visit more than one node, backpressure routing gives optimal throughput. A network scheduler must choose a queueing algorithm, which affects the characteristics of the larger network. Mean-field limits Mean-field models consider the limiting behaviour of the empirical measure (proportion of queues in different states) as the number of queues m approaches infinity. The impact of other queues on any given queue in the network is approximated by a differential equation.
The deterministic model converges to the same stationary distribution as the original model.[37] Heavy traffic/diffusion approximations In a system with high occupancy rates (utilisation near 1), a heavy traffic approximation can be used to approximate the queueing length process by a reflected Brownian motion,[38] Ornstein–Uhlenbeck process, or more general diffusion process.[39] The number of dimensions of the Brownian process is equal to the number of queueing nodes, with the diffusion restricted to the non-negative orthant. Fluid limits Main article: Fluid limit Fluid models are continuous deterministic analogs of queueing networks obtained by taking the limit when the process is scaled in time and space, allowing heterogeneous objects. This scaled trajectory converges to a deterministic equation which allows the stability of the system to be proven. It is known that a queueing network can be stable but have an unstable fluid limit.[40] See also • Ehrenfest model • Erlang unit • Line management • Network simulation • Project production management • Queue area • Queueing delay • Queue management system • Queuing Rule of Thumb • Random early detection • Renewal theory • Throughput • Scheduling (computing) • Traffic jam • Traffic generation model • Flow network References 1. Sundarapandian, V. (2009). "7. Queueing Theory". Probability, Statistics and Queueing Theory. PHI Learning. ISBN 978-8120338449. 2. Lawrence W. Dowdy, Virgilio A.F. Almeida, Daniel A. Menasce. "Performance by Design: Computer Capacity Planning by Example". Archived from the original on 2016-05-06. Retrieved 2009-07-08. 3. Schlechter, Kira (March 2, 2009). "Hershey Medical Center to open redesigned emergency room". The Patriot-News. Archived from the original on June 29, 2016. Retrieved March 12, 2009. 4. Mayhew, Les; Smith, David (December 2006). Using queuing theory to analyse completion times in accident and emergency departments in the light of the Government 4-hour target. 
Cass Business School. ISBN 978-1-905752-06-5. Archived from the original on September 7, 2021. Retrieved 2008-05-20. 5. Tijms, H.C, Algorithmic Analysis of Queues, Chapter 9 in A First Course in Stochastic Models, Wiley, Chichester, 2003 6. Kendall, D. G. (1953). "Stochastic Processes Occurring in the Theory of Queues and their Analysis by the Method of the Imbedded Markov Chain". The Annals of Mathematical Statistics. 24 (3): 338–354. doi:10.1214/aoms/1177728975. JSTOR 2236285. 7. Hernández-Suarez, Carlos (2010). "An application of queuing theory to SIS and SEIS epidemic models". Math. Biosci. 7 (4): 809–823. doi:10.3934/mbe.2010.7.809. PMID 21077709. 8. "Agner Krarup Erlang (1878-1929) | plus.maths.org". Pass.maths.org.uk. 1997-04-30. Archived from the original on 2008-10-07. Retrieved 2013-04-22. 9. Asmussen, S. R.; Boxma, O. J. (2009). "Editorial introduction". Queueing Systems. 63 (1–4): 1–2. doi:10.1007/s11134-009-9151-8. S2CID 45664707. 10. Erlang, Agner Krarup (1909). "The theory of probabilities and telephone conversations" (PDF). Nyt Tidsskrift for Matematik B. 20: 33–39. Archived from the original (PDF) on 2011-10-01. 11. Kingman, J. F. C. (2009). "The first Erlang century—and the next". Queueing Systems. 63 (1–4): 3–4. doi:10.1007/s11134-009-9147-4. S2CID 38588726. 12. Pollaczek, F., Ueber eine Aufgabe der Wahrscheinlichkeitstheorie, Math. Z. 1930 13. Whittle, P. (2002). "Applied Probability in Great Britain". Operations Research. 50 (1): 227–239. doi:10.1287/opre.50.1.227.17792. JSTOR 3088474. 14. Kendall, D.G.:Stochastic processes occurring in the theory of queues and their analysis by the method of the imbedded Markov chain, Ann. Math. Stat. 1953 15. Pollaczek, F., Problèmes Stochastiques posés par le phénomène de formation d'une queue 16. Kingman, J. F. C.; Atiyah (October 1961). "The single server queue in heavy traffic". Mathematical Proceedings of the Cambridge Philosophical Society. 57 (4): 902. Bibcode:1961PCPS...57..902K. 
doi:10.1017/S0305004100036094. JSTOR 2984229. S2CID 62590290. 17. Ramaswami, V. (1988). "A stable recursion for the steady state vector in Markov chains of M/G/1 type". Communications in Statistics. Stochastic Models. 4: 183–188. doi:10.1080/15326348808807077. 18. Morozov, E. (2017). "Stability analysis of a multiclass retrial system with coupled orbit queues". Proceedings of 14th European Workshop. Lecture Notes in Computer Science. Vol. 17. pp. 85–98. doi:10.1007/978-3-319-66583-2_6. ISBN 978-3-319-66582-5. 19. "Simulation and queueing network modeling of single-product production campaigns". ScienceDirect. 20. Manuel, Laguna (2011). Business Process Modeling, Simulation and Design. Pearson Education India. p. 178. ISBN 9788131761359. Retrieved 6 October 2017. 21. Penttinen A., Chapter 8 – Queueing Systems, Lecture Notes: S-38.145 - Introduction to Teletraffic Theory. 22. Harchol-Balter, M. (2012). "Scheduling: Non-Preemptive, Size-Based Policies". Performance Modeling and Design of Computer Systems. pp. 499–507. doi:10.1017/CBO9781139226424.039. ISBN 9781139226424. 23. Andrew S. Tanenbaum; Herbert Bos (2015). Modern Operating Systems. Pearson. ISBN 978-0-13-359162-0. 24. Harchol-Balter, M. (2012). "Scheduling: Preemptive, Size-Based Policies". Performance Modeling and Design of Computer Systems. pp. 508–517. doi:10.1017/CBO9781139226424.040. ISBN 9781139226424. 25. Harchol-Balter, M. (2012). "Scheduling: SRPT and Fairness". Performance Modeling and Design of Computer Systems. pp. 518–530. doi:10.1017/CBO9781139226424.041. ISBN 9781139226424. 26. Dimitriou, I. (2019). "A Multiclass Retrial System With Coupled Orbits And Service Interruptions: Verification of Stability Conditions". Proceedings of FRUCT 24. 7: 75–82. 27. "Archived copy" (PDF). Archived (PDF) from the original on 2017-03-29. Retrieved 2018-08-02. 28. Jackson, J. R. (1957). "Networks of Waiting Lines". Operations Research. 5 (4): 518–521.
Wikipedia
Statistical parsing Statistical parsing is a group of parsing methods within natural language processing. The methods have in common that they associate grammar rules with a probability. Grammar rules are traditionally viewed in computational linguistics as defining the valid sentences in a language. Within this mindset, the idea of associating each rule with a probability then provides the relative frequency of any given grammar rule and, by deduction, the probability of a complete parse for a sentence. (The probability associated with a grammar rule may be induced, but the application of that grammar rule within a parse tree and the computation of the probability of the parse tree based on its component rules is a form of deduction.) Using this concept, statistical parsers make use of a procedure to search over a space of all candidate parses, and the computation of each candidate's probability, to derive the most probable parse of a sentence. The Viterbi algorithm is one popular method of searching for the most probable parse. "Search" in this context is an application of search algorithms in artificial intelligence. As an example, think about the sentence "The can can hold water". A reader would instantly see that there is an object called "the can" and that this object is performing the action 'can' (i.e. is able to); and the thing the object is able to do is "hold"; and the thing the object is able to hold is "water". Using more linguistic terminology, "The can" is a noun phrase composed of a determiner followed by a noun, and "can hold water" is a verb phrase which is itself composed of a verb followed by a verb phrase. But is this the only interpretation of the sentence? Certainly "The can can" is a perfectly valid noun-phrase referring to a type of dance, and "hold water" is also a valid verb-phrase, although the coerced meaning of the combined sentence is non-obvious. 
This lack of meaning is not seen as a problem by most linguists (for a discussion on this point, see Colorless green ideas sleep furiously) but from a pragmatic point of view it is desirable to obtain the first interpretation rather than the second, and statistical parsers achieve this by ranking the interpretations based on their probability. (In this example various assumptions about the grammar have been made, such as a simple left-to-right derivation rather than head-driven, its use of noun phrases rather than the currently fashionable determiner phrases, and no type-check preventing a concrete noun being combined with an abstract verb phrase. None of these assumptions affects the thesis of the argument, and a comparable argument can be made using any other grammatical formalism.) There are a number of methods that statistical parsing algorithms frequently use. While few algorithms will use all of these, they give a good overview of the general field. Most statistical parsing algorithms are based on a modified form of chart parsing. The modifications are necessary to support an extremely large number of grammatical rules, and therefore a large search space, and essentially involve applying classical artificial intelligence algorithms to the traditionally exhaustive search. Some examples of these optimisations are searching only a likely subset of the search space (stack search), optimising the search probability (Baum–Welch algorithm), and discarding parses that are too similar to be treated separately (Viterbi algorithm).
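The ranking of candidate parses by probability can be made concrete with a probabilistic CKY chart parser over a toy grammar. The sketch below uses an invented PCFG in Chomsky normal form for the "The can can hold water" example; the rules, probabilities, and function names are illustrative assumptions, not taken from any treebank or from this article.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form. All rules and probabilities are
# invented for illustration; a real parser estimates them from a treebank.
LEXICAL = {  # (category, word) -> P(category -> word)
    ("Det", "the"): 1.0,
    ("N", "can"): 0.6, ("N", "water"): 0.4,
    ("V", "can"): 0.4, ("V", "hold"): 0.6,
}
BINARY = {  # (parent, (left, right)) -> rule probability
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Det", "N")): 1.0,
    ("VP", ("V", "VP")): 0.5,  # modal "can" followed by a verb phrase
    ("VP", ("V", "N")): 0.5,   # e.g. "hold water"
}

def cky(words):
    """Return (probability, tree) of the most probable S parse, or None."""
    best = defaultdict(dict)  # best[(i, j)][A] = (prob, tree) for span [i, j)
    for i, w in enumerate(words):
        for (cat, word), p in LEXICAL.items():
            if word == w and p > best[(i, i + 1)].get(cat, (0.0,))[0]:
                best[(i, i + 1)][cat] = (p, (cat, w))
    n = len(words)
    for width in range(2, n + 1):            # grow spans bottom-up
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):        # all binary split points
                for (parent, (l, r)), p in BINARY.items():
                    if l in best[(i, k)] and r in best[(k, j)]:
                        pl, tl = best[(i, k)][l]
                        pr, tr = best[(k, j)][r]
                        prob = p * pl * pr
                        if prob > best[(i, j)].get(parent, (0.0,))[0]:
                            best[(i, j)][parent] = (prob, (parent, tl, tr))
    return best[(0, n)].get("S")

prob, tree = cky("the can can hold water".split())
print(prob)   # probability of the best parse
print(tree)   # nested tuple ('S', ('NP', ...), ('VP', ...))
```

Under this grammar the "modal" reading discussed above wins because its rule and lexical probabilities multiply to the highest score among all chart entries spanning the sentence.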
Notable people in statistical parsing • Eugene Charniak Author of Statistical techniques for natural language parsing amongst many other contributions • Fred Jelinek Applied and developed numerous techniques from Information Theory to build the field • David Magerman Major contributor to turning the field from theoretical to practical by managing data • James Curran Applying the MaxEnt algorithm, word representation, and other contributions • Michael Collins (computational linguist) First very high performance statistical parser • Joshua Goodman Hypergraphs, and other generalizations between different methods See also • Statistical machine translation • Statistical semantics • Stochastic context-free grammar
Stochastic partial differential equation Stochastic partial differential equations (SPDEs) generalize partial differential equations via random force terms and coefficients, in the same way ordinary stochastic differential equations generalize ordinary differential equations. They have relevance to quantum field theory, statistical mechanics, and spatial modeling.[1][2] Examples One of the most studied SPDEs is the stochastic heat equation,[3] which may formally be written as $\partial _{t}u=\Delta u+\xi \;,$ where $\Delta $ is the Laplacian and $\xi $ denotes space-time white noise. Other examples also include stochastic versions of famous linear equations, such as the wave equation[4] and the Schrödinger equation.[5] Discussion One difficulty is their lack of regularity. In one-dimensional space, solutions to the stochastic heat equation are only almost 1/2-Hölder continuous in space and 1/4-Hölder continuous in time. For dimensions two and higher, solutions are not even function-valued, but can be made sense of as random distributions. For linear equations, one can usually find a mild solution via semigroup techniques.[6] However, problems start to appear when considering non-linear equations. For example $\partial _{t}u=\Delta u+P(u)+\xi ,$ where $P$ is a polynomial. In this case it is not even clear how one should make sense of the equation. Such an equation will also not have a function-valued solution in dimension larger than one, and hence no pointwise meaning. It is well known that the space of distributions has no product structure. This is the core problem of such a theory. This leads to the need of some form of renormalization. An early attempt to circumvent such problems for some specific equations was the so called da Prato–Debussche trick which involved studying such non-linear equations as perturbations of linear ones.[7] However, this can only be used in very restrictive settings, as it depends on both the non-linear factor and on the regularity of the driving noise term.
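One way to build intuition for the linear stochastic heat equation is to discretize it. The sketch below, which is not from the article, uses an explicit finite-difference scheme on a periodic grid, with space-time white noise approximated by independent Gaussians scaled by $1/{\sqrt {\Delta x\,\Delta t}}$; the grid sizes, time horizon, and function name are arbitrary illustrative choices.

```python
import numpy as np

# Explicit finite-difference simulation of du = u_xx dt + dW in 1D with
# periodic boundary conditions. The discretized white noise has variance
# 1/(dx*dt) per grid cell per time step, a standard discretization choice.
def simulate_stochastic_heat(n_x=64, n_t=2000, length=1.0, t_end=0.01, seed=0):
    rng = np.random.default_rng(seed)
    dx = length / n_x
    dt = t_end / n_t
    assert dt <= dx**2 / 2, "explicit scheme requires dt <= dx^2/2 for stability"
    u = np.zeros(n_x)  # zero initial condition
    for _ in range(n_t):
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2  # discrete Laplacian
        xi = rng.standard_normal(n_x) / np.sqrt(dx * dt)          # space-time white noise
        u = u + dt * (lap + xi)
    return u

u = simulate_stochastic_heat()
print(u.shape)  # (64,)
```

The rough, fractal-looking profiles such a simulation produces reflect the limited Hölder regularity discussed above.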
In recent years, the field has drastically expanded, and now there exists a large machinery to guarantee local existence for a variety of sub-critical SPDEs.[8] See also • Brownian surface • Kardar–Parisi–Zhang equation • Kushner equation • Malliavin calculus • Polynomial chaos • Wick product • Zakai equation References 1. Prévôt, Claudia; Röckner, Michael (2007). A Concise Course on Stochastic Partial Differential Equations. Lecture Notes in Mathematics. Berlin Heidelberg: Springer-Verlag. ISBN 978-3-540-70780-6. 2. Krainski, Elias T.; Gómez-Rubio, Virgilio; Bakka, Haakon; Lenzi, Amanda; Castro-Camilo, Daniela; Simpson, Daniel; Lindgren, Finn; Rue, Håvard (2018). Advanced Spatial Modeling with Stochastic Partial Differential Equations Using R and INLA. Boca Raton, FL: Chapman and Hall/CRC Press. ISBN 978-1-138-36985-6. 3. Edwards, S.F.; Wilkinson, D.R. (1982-05-08). "The Surface Statistics of a Granular Aggregate". Proc. R. Soc. Lond. A. 381 (1780): 17–31. doi:10.1098/rspa.1982.0056. 4. Dalang, Robert C.; Frangos, N. E. (1998). "The Stochastic Wave Equation in Two Spatial Dimensions". The Annals of Probability. 26 (1): 187–212. ISSN 0091-1798. 5. Diósi, Lajos; Strunz, Walter T. (1997-11-24). "The non-Markovian stochastic Schrödinger equation for open systems". Physics Letters A. 235 (6): 569–573. arXiv:quant-ph/9706050. doi:10.1016/S0375-9601(97)00717-2. ISSN 0375-9601. 6. Walsh, John B. (1986). Carmona, René; Kesten, Harry; Walsh, John B.; Hennequin, P. L. (eds.). "An introduction to stochastic partial differential equations". École d'Été de Probabilités de Saint Flour XIV - 1984. Lecture Notes in Mathematics. Springer Berlin Heidelberg. 1180: 265–439. doi:10.1007/bfb0074920. ISBN 978-3-540-39781-6. 7. Da Prato, Giuseppe; Debussche, Arnaud (2003). "Strong Solutions to the Stochastic Quantization Equations". Annals of Probability. 31 (4): 1900–1916. JSTOR 3481533. 8. Corwin, Ivan; Shen, Hao (2020). 
"Some recent progress in singular stochastic partial differential equations". Bull. Amer. Math. Soc. 57 (3): 409–454. doi:10.1090/bull/1670. Further reading • Bain, A.; Crisan, D. (2009). Fundamentals of Stochastic Filtering. Stochastic Modelling and Applied Probability. Vol. 60. New York: Springer. ISBN 978-0387768953. • Holden, H.; Øksendal, B.; Ubøe, J.; Zhang, T. (2010). Stochastic Partial Differential Equations: A Modeling, White Noise Functional Approach. Universitext (2nd ed.). New York: Springer. doi:10.1007/978-0-387-89488-1. ISBN 978-0-387-89487-4. • Lindgren, F.; Rue, H.; Lindström, J. (2011). "An Explicit Link between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach". Journal of the Royal Statistical Society Series B: Statistical Methodology. 73 (4): 423–498. doi:10.1111/j.1467-9868.2011.00777.x. hdl:20.500.11820/1084d335-e5b4-4867-9245-ec9c4f6f4645. ISSN 1369-7412. • Xiu, D. (2010). Numerical Methods for Stochastic Computations: A Spectral Method Approach. Princeton University Press. ISBN 978-0-691-14212-8. External links • "A Minicourse on Stochastic Partial Differential Equations" (PDF). 2006. • Hairer, Martin (2009). "An Introduction to Stochastic PDEs". arXiv:0907.4178 [math.PR].
Stochastic portfolio theory Stochastic portfolio theory (SPT) is a mathematical theory for analyzing stock market structure and portfolio behavior introduced by E. Robert Fernholz in 2002. It is descriptive as opposed to normative, and is consistent with the observed behavior of actual markets. Normative assumptions, which serve as a basis for earlier theories like modern portfolio theory (MPT) and the capital asset pricing model (CAPM), are absent from SPT. SPT uses continuous-time random processes (in particular, continuous semi-martingales) to represent the prices of individual securities. Processes with discontinuities, such as jumps, have also been incorporated into the theory.[citation needed] Stocks, portfolios and markets SPT considers stocks and stock markets, but its methods can be applied to other classes of assets as well. A stock is represented by its price process, usually in the logarithmic representation. In this case, the market is a collection of stock-price processes $X_{i},$ for $i=1,\dots ,n,$ each defined by a continuous semimartingale $d\log X_{i}(t)=\gamma _{i}(t)\,dt+\sum _{\nu =1}^{d}\xi _{i\nu }(t)\,dW_{\nu }(t)$ where $W:=(W_{1},\dots ,W_{d})$ is a $d$-dimensional Brownian motion (Wiener) process with $d\geq n$, and the processes $\gamma _{i}$ and $\xi _{i\nu }$ are progressively measurable with respect to the Brownian filtration $\{{\mathcal {F}}_{t}\}=\{{\mathcal {F}}_{t}^{W}\}$.
In this representation $\gamma _{i}(t)$ is called the (compound) growth rate of $X_{i},$ and the covariance between $\log X_{i}$ and $\log X_{j}$ is $\sigma _{ij}(t)=\sum _{\nu =1}^{d}\xi _{i\nu }(t)\xi _{j\nu }(t).$ It is frequently assumed that, for all $i,$ the process $\xi _{i1}^{2}(t)+\cdots +\xi _{id}^{2}(t)$ is positive, locally square-integrable, and does not grow too rapidly as $t\rightarrow \infty .$ The logarithmic representation is equivalent to the classical arithmetic representation, which uses the rate of return $\alpha _{i}(t);$ however, the growth rate can be a meaningful indicator of long-term performance of a financial asset, whereas the rate of return has an upward bias. The relation between the rate of return and the growth rate is $\alpha _{i}(t)=\gamma _{i}(t)+{\frac {\sigma _{ii}(t)}{2}}$ The usual convention in SPT is to assume that each stock has a single share outstanding, so $X_{i}(t)$ represents the total capitalization of the $i$-th stock at time $t,$ and $X(t)=X_{1}(t)+\cdots +X_{n}(t)$ is the total capitalization of the market. Dividends can be included in this representation, but are omitted here for simplicity. An investment strategy $\pi =(\pi _{1},\cdots ,\pi _{n})$ is a vector of bounded, progressively measurable processes; the quantity $\pi _{i}(t)$ represents the proportion of total wealth invested in the $i$-th stock at time $t$, and $\pi _{0}(t):=1-\sum _{i=1}^{n}\pi _{i}(t)$ is the proportion hoarded (invested in a money market with zero interest rate). Negative weights correspond to short positions. The cash strategy $\kappa \equiv 0$ (with $\kappa _{0}\equiv 1$) keeps all wealth in the money market. A strategy $\pi $ is called a portfolio if it is fully invested in the stock market, that is, if $\pi _{1}(t)+\cdots +\pi _{n}(t)=1$ holds at all times.
The value process $Z_{\pi }$ of a strategy $\pi $ is always positive and satisfies $d\log Z_{\pi }(t)=\sum _{i=1}^{n}\pi _{i}(t)\,d\log X_{i}(t)+\gamma _{\pi }^{*}(t)\,dt$ where the process $\gamma _{\pi }^{*}$ is called the excess growth rate process and is given by $\gamma _{\pi }^{*}(t):={\frac {1}{2}}\sum _{i=1}^{n}\pi _{i}(t)\sigma _{ii}(t)-{\frac {1}{2}}\sum _{i,j=1}^{n}\pi _{i}(t)\pi _{j}(t)\sigma _{ij}(t)$ This expression is non-negative for a portfolio with non-negative weights $\pi _{i}(t)$ and has been used in quadratic optimization of stock portfolios, a special case of which is optimization with respect to the logarithmic utility function. The market weight processes, $\mu _{i}(t):={\frac {X_{i}(t)}{X_{1}(t)+\cdots +X_{n}(t)}}$ where $i=1,\dots ,n$ define the market portfolio $\mu $. With the initial condition $Z_{\mu }(0)=X(0),$ the associated value process will satisfy $Z_{\mu }(t)=X(t)$ for all $t.$ A number of conditions can be imposed on a market, sometimes to model actual markets and sometimes to emphasize certain types of hypothetical market behavior. Some commonly invoked conditions are: 1. A market is nondegenerate if the eigenvalues of the covariance matrix $(\sigma _{ij}(t))_{1\leq i,j\leq n}$ are bounded away from zero. It has bounded variance if the eigenvalues are bounded. 2. A market is coherent if $\lim _{t\rightarrow \infty }t^{-1}\log(\mu _{i}(t))=0$ for all $i=1,\dots ,n.$ 3. A market is diverse on $[0,T]$ if there exists $\varepsilon >0$ such that $\mu _{\max }(t)\leq 1-\varepsilon $ for $t\in [0,T].$ 4. A market is weakly diverse on $[0,T]$ if there exists $\varepsilon >0$ such that ${\frac {1}{T}}\int _{0}^{T}\mu _{\max }(t)\,dt\leq 1-\varepsilon $ Diversity and weak diversity are rather weak conditions, and markets are generally far more diverse than would be tested by these extremes.
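The excess growth rate formula above is straightforward to evaluate numerically. The sketch below uses an illustrative covariance matrix and portfolio weights (not from the article) and checks the stated non-negativity for a long-only portfolio.

```python
import numpy as np

# Numerical sketch of the excess growth rate
#   gamma*_pi = (1/2) sum_i pi_i sigma_ii - (1/2) sum_{i,j} pi_i pi_j sigma_ij
# for hypothetical weights and covariance matrix of the log-prices.
def excess_growth_rate(pi, sigma):
    pi = np.asarray(pi, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return 0.5 * pi @ np.diag(sigma) - 0.5 * pi @ sigma @ pi

sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # illustrative covariance matrix
pi = np.array([0.5, 0.3, 0.2])           # long-only portfolio weights

g = excess_growth_rate(pi, sigma)
print(g)   # non-negative, since all weights are non-negative
```

The first term is the weighted average of the individual variances, the second is the portfolio variance; their gap is the diversification benefit that the excess growth rate captures.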
A measure of market diversity is market entropy, defined by $S(\mu (t))=-\sum _{i=1}^{n}\mu _{i}(t)\log(\mu _{i}(t)).$ Stochastic stability We consider the vector process $(\mu _{(1)}(t),\dots ,\mu _{(n)}(t)),$ with $0\leq t<\infty $ of ranked market weights $\max _{1\leq i\leq n}\mu _{i}(t)=:\mu _{(1)}(t)\geq \mu _{(2)}(t)\geq \cdots \geq \mu _{(n)}(t):=\min _{1\leq i\leq n}\mu _{i}(t)$ where ties are resolved “lexicographically”, always in favor of the lowest index. The log-gaps $G^{(k,k+1)}(t):=\log(\mu _{(k)}(t)/\mu _{(k+1)}(t)),$ where $0\leq t<\infty $ and $k=1,\dots ,n-1$ are continuous, non-negative semimartingales; we denote by $\Lambda ^{(k,k+1)}(t)=L^{G^{(k,k+1)}}(t;0)$ their local times at the origin. These quantities measure the amount of turnover between ranks $k$ and $k+1$ during the time-interval $[0,t]$. A market is called stochastically stable, if $(\mu _{(1)}(t),\cdots ,\mu _{(n)}(t))$ converges in distribution as $t\rightarrow \infty $ to a random vector $(M_{(1)},\cdots ,M_{(n)})$ with values in the Weyl chamber $\{(x_{1},\dots ,x_{n})\mid x_{1}>x_{2}>\dots >x_{n}{\text{ and }}\sum _{i=1}^{n}x_{i}=1\}$ of the unit simplex, and if the strong law of large numbers $\lim _{t\rightarrow \infty }{\frac {\Lambda ^{(k,k+1)}(t)}{t}}=\lambda ^{(k,k+1)}>0$ holds for suitable real constants $\lambda ^{(1,2)},\dots ,\lambda ^{(n-1,n)}.$ Arbitrage and the numeraire property Given any two investment strategies $\pi ,\rho $ and a real number $T>0$, we say that $\pi $ is arbitrage relative to $\rho $ over the time-horizon $[0,T]$, if $\mathbb {P} (Z_{\pi }(T)\geq Z_{\rho }(T))=1$ and $\mathbb {P} (Z_{\pi }(T)>Z_{\rho }(T))>0$ both hold; this relative arbitrage is called “strong” if $\mathbb {P} (Z_{\pi }(T)>Z_{\rho }(T))=1.$ When $\rho $ is $\kappa \equiv 0,$ we recover the usual definition of arbitrage relative to cash.
We say that a given strategy $\nu $ has the numeraire property if, for any strategy $\pi ,$ the ratio $Z_{\pi }/Z_{\nu }$ is a $\mathbb {P} $−supermartingale. In such a case, the process $1/Z_{\nu }$ is called a “deflator” for the market. No arbitrage is possible, over any given time horizon, relative to a strategy $\nu $ that has the numeraire property (either with respect to the underlying probability measure $\mathbb {P} $, or with respect to any other probability measure which is equivalent to $\mathbb {P} $). A strategy $\nu $ with the numeraire property maximizes the asymptotic growth rate from investment, in the sense that $\limsup _{T\rightarrow \infty }{\frac {1}{T}}\log \left({\frac {Z_{\pi }(T)}{Z_{\nu }(T)}}\right)\leq 0$ holds for any strategy $\pi $; it also maximizes the expected log-utility from investment, in the sense that for any strategy $\pi $ and real number $T>0$ we have $\mathbb {E} [\log(Z_{\pi }(T))]\leq \mathbb {E} [\log(Z_{\nu }(T))].$ If the vector $\alpha (t)=(\alpha _{1}(t),\cdots ,\alpha _{n}(t))'$ of instantaneous rates of return, and the matrix $\sigma (t)=(\sigma _{ij}(t))_{1\leq i,j\leq n}$ of instantaneous covariances, are known, then the strategy $\nu (t)=\arg \max _{p\in \mathbb {R} ^{n}}(p'\alpha (t)-{\tfrac {1}{2}}p'\sigma (t)p)\qquad {\text{ for all }}0\leq t<\infty $ has the numeraire property whenever the indicated maximum is attained. The study of the numeraire portfolio links SPT to the so-called Benchmark approach to Mathematical Finance, which takes such a numeraire portfolio as given and provides a way to price contingent claims, without any further assumptions. A probability measure $\mathbb {Q} $ is called equivalent martingale measure (EMM) on a given time-horizon $[0,T]$, if it has the same null sets as $\mathbb {P} $ on ${\mathcal {F}}_{T}$, and if the processes $X_{1}(t),\dots ,X_{n}(t)$ with $0\leq t\leq T$ are all $\mathbb {Q} $−martingales.
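When $\sigma (t)$ is positive definite, the quadratic problem defining $\nu (t)$ has the standard closed-form maximizer $p=\sigma (t)^{-1}\alpha (t)$, found by setting the gradient $\alpha (t)-\sigma (t)p$ to zero. A numerical sketch with illustrative inputs (the numbers are assumptions, not from the article):

```python
import numpy as np

# Log-optimal (numeraire) strategy for a snapshot of the market:
# maximize p'alpha - (1/2) p' sigma p  =>  p = sigma^{-1} alpha.
alpha = np.array([0.08, 0.05])            # hypothetical instantaneous rates of return
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])          # hypothetical instantaneous covariance matrix

nu = np.linalg.solve(sigma, alpha)        # numeraire-strategy stock weights
nu0 = 1.0 - nu.sum()                      # remainder held in the money market

# Sanity check: the gradient alpha - sigma @ nu vanishes at the maximizer.
print(nu, nu0, np.max(np.abs(alpha - sigma @ nu)))
```

Note that the resulting weights need not sum to one, which is why the strategy generally carries a money-market position $\nu _{0}$ as well.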
Assuming that such an EMM exists, arbitrage is not possible on $[0,T]$ relative to either cash $\kappa $ or to the market portfolio $\mu $ (or more generally, relative to any strategy $\rho $ whose wealth process $Z_{\rho }$ is a martingale under some EMM). Conversely, if $\pi ,\rho $ are portfolios and one of them is arbitrage relative to the other on $[0,T]$ then no EMM can exist on this horizon. Functionally-generated portfolios Suppose we are given a smooth function $\mathbb {G} :U\rightarrow (0,\infty )$ on some neighborhood $U$ of the unit simplex in $\mathbb {R} ^{n}$ . We call $\pi _{i}^{\mathbb {G} }(t):=\mu _{i}(t)\left(D_{i}\log(\mathbb {G} (\mu (t)))+1-\sum _{j=1}^{n}\mu _{j}(t)D_{j}\log(\mathbb {G} (\mu (t)))\right)\qquad {\text{ for }}1\leq i\leq n$ the portfolio generated by the function $\mathbb {G} $. It can be shown that all the weights of this portfolio are non-negative, if its generating function $\mathbb {G} $ is concave. Under mild conditions, the relative performance of this functionally-generated portfolio $\pi ^{\mathbb {G} }$ with respect to the market portfolio $\mu $, is given by the F-G decomposition $\log \left({\frac {Z_{\pi ^{\mathbb {G} }}(T)}{Z_{\mu }(T)}}\right)=\log \left({\frac {\mathbb {G} (\mu (T))}{\mathbb {G} (\mu (0))}}\right)+\int _{0}^{T}g(t)\,dt$ which involves no stochastic integrals. Here the expression $g(t):={\frac {-1}{2\mathbb {G} (\mu (t))}}\sum _{i=1}^{n}\sum _{j=1}^{n}D_{ij}^{2}\mathbb {G} (\mu (t))\mu _{i}(t)\mu _{j}(t)\tau _{ij}^{\mu }(t)$ is called the drift process of the portfolio (and it is a non-negative quantity if the generating function $\mathbb {G} $ is concave); and the quantities $\tau _{ij}^{\mu }(t):=\sum _{\nu =1}^{d}(\xi _{i\nu }(t)-\xi _{\nu }^{\mu }(t))(\xi _{j\nu }(t)-\xi _{\nu }^{\mu }(t)),\qquad \xi _{\nu }^{\mu }(t):=\sum _{i=1}^{n}\mu _{i}(t)\xi _{i\nu }(t)$ with $1\leq i,j\leq n$ are called the relative covariances between $\log(X_{i})$ and $\log(X_{j})$ with respect to the market. Examples 1.
The constant function $\mathbb {G} :=w>0$ generates the market portfolio $\mu $, 2. The geometric mean function $\mathbb {H} (x):=(x_{1}\cdots x_{n})^{\frac {1}{n}}$ generates the equal-weighted portfolio $\varphi _{i}(n)={\frac {1}{n}}$ for all $1\leq i\leq n$, 3. The modified entropy function $\mathbb {S} ^{c}(x)=c-\sum _{i=1}^{n}x_{i}\cdot \log(x_{i})$ for any $c>0$ generates the modified entropy-weighted portfolio, 4. The function $\mathbb {D} ^{(p)}(x):=(\sum _{i=1}^{n}x_{i}^{p})^{\frac {1}{p}}$ with $0<p<1$ generates the diversity-weighted portfolio $\delta _{i}^{(p)}(t)={\frac {(\mu _{i}(t))^{p}}{\sum _{j=1}^{n}(\mu _{j}(t))^{p}}}$ with drift process $(1-p)\gamma _{\delta ^{(p)}}^{*}(t)$. Arbitrage relative to the market The excess growth rate of the market portfolio admits the representation $2\gamma _{\mu }^{*}(t)=\sum _{i=1}^{n}\mu _{i}(t)\tau _{ii}^{\mu }(t)$ as a capitalization-weighted average relative stock variance. This quantity is nonnegative; if it happens to be bounded away from zero, namely $\gamma _{\mu }^{*}(t)={\frac {1}{2}}\sum _{i=1}^{n}\mu _{i}(t)\tau _{ii}^{\mu }(t)\geq h>0,$ for all $0\leq t<\infty $ for some real constant $h$, then it can be shown using the F-G decomposition that, for every $T>\mathbb {S} (\mu (0))/h,$ there exists a constant $c>0$ for which the modified entropic portfolio $\Theta ^{(c)}$ is strict arbitrage relative to the market $\mu $ over $[0,T]$; see Fernholz and Karatzas (2005) for details. It is an open question whether such arbitrage exists over arbitrary time horizons (for two special cases, in which the answer to this question turns out to be affirmative, please see the paragraph below and the next section).
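The diversity-weighted portfolio from Example 4 is simple to compute from a snapshot of market weights. The sketch below uses illustrative market weights (not from the article) and shows the characteristic effect of the weighting: large caps are underweighted and small caps overweighted relative to the market.

```python
import numpy as np

# Diversity-weighted portfolio delta_i = mu_i^p / sum_j mu_j^p for 0 < p < 1,
# computed from hypothetical market weights.
mu = np.array([0.40, 0.25, 0.20, 0.10, 0.05])   # illustrative market weights, sum to 1

p = 0.5
delta = mu**p / np.sum(mu**p)                   # diversity-weighted portfolio

print(delta)
assert abs(delta.sum() - 1.0) < 1e-12
# Raising weights to the power p < 1 shrinks them toward equal weighting:
assert delta[0] < mu[0] and delta[-1] > mu[-1]
```

As $p\rightarrow 1$ the portfolio approaches the market portfolio, and as $p\rightarrow 0$ it approaches the equal-weighted portfolio of Example 2.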
If the eigenvalues of the covariance matrix $(\sigma _{ij}(t))_{1\leq i,j\leq n}$ are bounded away from both zero and infinity, the condition $\gamma _{\mu }^{*}\geq h>0$ can be shown to be equivalent to diversity, namely $\mu _{\max }\leq 1-\varepsilon $ for a suitable $\varepsilon \in (0,1).$ Then the diversity-weighted portfolio $\delta ^{(p)}$ leads to strict arbitrage relative to the market portfolio over sufficiently long time horizons; whereas, suitable modifications of this diversity-weighted portfolio realize such strict arbitrage over arbitrary time horizons. An example: volatility-stabilized markets We consider the example of a system of stochastic differential equations $d\log(X_{i}(t))={\frac {\alpha }{2\mu _{i}(t)}}\,dt+{\frac {1}{\sqrt {\mu _{i}(t)}}}\,dW_{i}(t)$ with $1\leq i\leq n,$ a given real constant $\alpha \geq 0,$ and an $n$-dimensional Brownian motion $(W_{1},\dots ,W_{n}).$ It follows from the work of Bass and Perkins (2002) that this system has a weak solution, which is unique in distribution. Fernholz and Karatzas (2005) show how to construct this solution in terms of scaled and time-changed squared Bessel processes, and prove that the resulting system is coherent. The total market capitalization $X$ behaves here as geometric Brownian motion with drift, and has the same constant growth rate as the largest stock; whereas the excess growth rate of the market portfolio is a positive constant. On the other hand, the relative market weights $\mu _{i}$ with $1\leq i\leq n$ have the dynamics of multi-allele Wright-Fisher processes. This model is an example of a non-diverse market with unbounded variances, in which strong arbitrage opportunities with respect to the market portfolio $\mu $ exist over arbitrary time horizons, as was shown by Banner and Fernholz (2008). Moreover, Pal (2012) derived the joint density of market weights at fixed times and at certain stopping times.
Rank-based portfolios We fix an integer $m\in \{2,\dots ,n-1\}$ and construct two capitalization-weighted portfolios: one consisting of the top $m$ stocks, denoted $\zeta $, and one consisting of the bottom $n-m$ stocks, denoted $\eta $. More specifically, $\zeta _{i}(t)={\frac {\sum _{k=1}^{m}\mu _{(k)}(t)\mathbf {1} _{\{\mu _{i}(t)=\mu _{(k)}(t)\}}}{\sum _{l=1}^{m}\mu _{(l)}(t)}}\qquad {\text{ and }}\eta _{i}(t)={\frac {\sum _{k=m+1}^{n}\mu _{(k)}(t)\mathbf {1} _{\{\mu _{i}(t)=\mu _{(k)}(t)\}}}{\sum _{l=m+1}^{n}\mu _{(l)}(t)}}$ for $1\leq i\leq n.$ Fernholz (1999), (2002) showed that the relative performance of the large-stock portfolio with respect to the market is given as $\log \left({\frac {Z_{\zeta }(T)}{Z_{\mu }(T)}}\right)=\log \left({\frac {\mu _{(1)}(T)+\cdots +\mu _{(m)}(T)}{\mu _{(1)}(0)+\cdots +\mu _{(m)}(0)}}\right)-{\frac {1}{2}}\int _{0}^{T}{\frac {\mu _{(m)}(t)}{\mu _{(1)}(t)+\cdots +\mu _{(m)}(t)}}\,d\Lambda ^{(m,m+1)}(t).$ Indeed, if there is no turnover at the mth rank during the interval $[0,T]$, the fortunes of $\zeta $ relative to the market are determined solely on the basis of how the total capitalization of this sub-universe of the $m$ largest stocks fares, at time $T$ versus time 0; whenever there is turnover at the $m$-th rank, though, $\zeta $ has to sell at a loss a stock that gets “relegated” to the lower league, and buy a stock that has risen in value and been promoted. This accounts for the “leakage” that is evident in the last term, an integral with respect to the cumulative turnover process $\Lambda ^{(m,m+1)}$ of the relative weight in the large-cap portfolio $\zeta $ of the stock that occupies the mth rank.
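The sub-universe portfolios $\zeta $ and $\eta $ can be computed directly from a snapshot of market weights: each renormalizes the capitalization weights within its own group of ranks. The sketch below uses illustrative weights; the function name and the tie-free handling (plain sorting instead of lexicographic tie-breaking) are simplifying assumptions.

```python
import numpy as np

# Top-m (zeta) and bottom n-m (eta) capitalization-weighted portfolios,
# built from hypothetical market weights mu. Ties are ignored for simplicity.
def top_bottom_portfolios(mu, m):
    mu = np.asarray(mu, dtype=float)
    order = np.argsort(mu)[::-1]          # stock indices from largest to smallest
    top, bottom = order[:m], order[m:]
    zeta = np.zeros_like(mu)
    eta = np.zeros_like(mu)
    zeta[top] = mu[top] / mu[top].sum()   # renormalize within the top m stocks
    eta[bottom] = mu[bottom] / mu[bottom].sum()
    return zeta, eta

mu = [0.35, 0.25, 0.20, 0.12, 0.08]       # illustrative market weights
zeta, eta = top_bottom_portfolios(mu, m=2)
print(zeta)   # weight only on the two largest stocks
print(eta)    # weight only on the three smallest stocks
```

In a dynamic setting these portfolios must trade whenever a stock crosses the $m$-th rank, which is exactly the turnover that the local-time “leakage” term above accounts for.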
The reverse situation prevails with the portfolio $\eta $ of small stocks, which gets to sell at a profit stocks that are being promoted to the “upper capitalization” league, and buy relatively cheaply stocks that are being relegated: $\log \left({\frac {Z_{\eta }(T)}{Z_{\mu }(T)}}\right)=\log \left({\frac {\mu _{(m+1)}(T)+\cdots +\mu _{(n)}(T)}{\mu _{(m+1)}(0)+\cdots +\mu _{(n)}(0)}}\right)+{\frac {1}{2}}\int _{0}^{T}{\frac {\mu _{(m+1)}(t)}{\mu _{(m+1)}(t)+\cdots +\mu _{(n)}(t)}}\,d\Lambda ^{(m,m+1)}(t).$ It is clear from these two expressions that, in a coherent and stochastically stable market, the small-stock cap-weighted portfolio $\eta $ will tend to outperform its large-stock counterpart $\zeta $, at least over large time horizons; in particular, we have under those conditions $\lim _{T\rightarrow \infty }{\frac {1}{T}}\log \left({\frac {Z_{\eta }(T)}{Z_{\zeta }(T)}}\right)={\frac {\lambda ^{(m,m+1)}}{2}}\,\mathbb {E} \left({\frac {M_{(m)}}{M_{(1)}+\cdots +M_{(m)}}}+{\frac {M_{(m+1)}}{M_{(m+1)}+\cdots +M_{(n)}}}\right)>0.$ This quantifies the so-called size effect. In Fernholz (1999, 2002), constructions such as these are generalized to include functionally generated portfolios based on ranked market weights. First- and second-order models First- and second-order models are hybrid Atlas models that reproduce some of the structure of real stock markets. First-order models have only rank-based parameters, and second-order models have both rank-based and name-based parameters. Suppose that $X_{1},\ldots ,X_{n}$ is a coherent market, and that the limits $\mathbf {\sigma } _{k}^{2}=\lim _{t\to \infty }t^{-1}\langle \log \mu _{(k)}\rangle (t)$ and $\mathbf {g} _{k}=\lim _{T\to \infty }{\frac {1}{T}}\int _{0}^{T}\sum _{i=1}^{n}\mathbf {1} _{\{r_{t}(i)=k\}}\,d\log \mu _{i}(t)$ exist for $k=1,\ldots ,n$, where $r_{t}(i)$ is the rank of $X_{i}(t)$.
Then the Atlas model ${\widehat {X}}_{1},\ldots ,{\widehat {X}}_{n}$ defined by $d\log {\widehat {X}}_{i}(t)=\sum _{k=1}^{n}\mathbf {g} _{k}\,\mathbf {1} _{\{{\hat {r}}_{t}(i)=k\}}\,dt+\sum _{k=1}^{n}\mathbf {\sigma } _{k}\mathbf {1} _{\{{\hat {r}}_{t}(i)=k\}}\,dW_{i}(t),$ where ${\hat {r}}_{t}(i)$ is the rank of ${\widehat {X}}_{i}(t)$ and $(W_{1},\ldots ,W_{n})$ is an $n$-dimensional Brownian motion process, is the first-order model for the original market, $X_{1},\ldots ,X_{n}$. Under reasonable conditions, the capital distribution curve for a first-order model will be close to that of the original market. However, a first-order model is ergodic in the sense that each stock asymptotically spends $(1/n)$-th of its time at each rank, a property that is not present in actual markets. In order to vary the proportion of time that a stock spends at each rank, it is necessary to use some form of hybrid Atlas model with parameters that depend on both rank and name. An effort in this direction was made by Fernholz, Ichiba, and Karatzas (2013), who introduced a second-order model for the market with rank- and name-based growth parameters, and variance parameters that depended on rank alone. References • Fernholz, E.R. (2002). Stochastic Portfolio Theory. New York: Springer-Verlag.
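The first-order dynamics defined above can be simulated with a basic Euler–Maruyama scheme. The following is only a sketch: the helper `simulate_first_order` is hypothetical, and the drift and volatility values are illustrative Atlas-type parameters, not estimates from any actual market.

```python
import math
import random

def simulate_first_order(g, sigma, x0, dt=1e-3, steps=5000, seed=0):
    """Euler-Maruyama sketch of a first-order (rank-based) model:
    d log X_i = g_{rank(i)} dt + sigma_{rank(i)} dW_i."""
    rng = random.Random(seed)
    logx = [math.log(v) for v in x0]
    n = len(logx)
    for _ in range(steps):
        # 0-based ranks: rank 0 is the largest capitalization
        order = sorted(range(n), key=lambda i: -logx[i])
        rank = {i: k for k, i in enumerate(order)}
        logx = [logx[i] + g[rank[i]] * dt
                + sigma[rank[i]] * math.sqrt(dt) * rng.gauss(0.0, 1.0)
                for i in range(n)]
    return [math.exp(v) for v in logx]

# Atlas-type parameters: only the bottom-ranked stock has positive drift,
# and the rank-based drifts sum to zero
g = [-0.02, -0.02, -0.02, 0.06]
sigma = [0.1, 0.1, 0.1, 0.1]
x = simulate_first_order(g, sigma, x0=[4.0, 3.0, 2.0, 1.0])
mu = [v / sum(x) for v in x]   # resulting market weights
```

Because drift and volatility are looked up by current rank at every step, each name spends time at every rank, which is the ergodic behavior of first-order models described above.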
Probability vector

In mathematics and statistics, a probability vector or stochastic vector is a vector with non-negative entries that add up to one.

"Stochastic vector" redirects here. For the concept of a random vector, see Multivariate random variable.

The positions (indices) of a probability vector represent the possible outcomes of a discrete random variable, and the vector gives the probability mass function of that random variable, which is the standard way of characterizing a discrete probability distribution.[1]

Examples

Here are some examples of probability vectors. The vectors can be either columns or rows.

• $x_{0}={\begin{bmatrix}0.5\\0.25\\0.25\end{bmatrix}},$
• $x_{1}={\begin{bmatrix}0\\1\\0\end{bmatrix}},$
• $x_{2}={\begin{bmatrix}0.65&0.35\end{bmatrix}},$
• $x_{3}={\begin{bmatrix}0.3&0.5&0.07&0.1&0.03\end{bmatrix}}.$

Geometric interpretation

Writing out the vector components of a vector $p$ as

$p={\begin{bmatrix}p_{1}\\p_{2}\\\vdots \\p_{n}\end{bmatrix}}\quad {\text{or}}\quad p={\begin{bmatrix}p_{1}&p_{2}&\cdots &p_{n}\end{bmatrix}}$

the vector components must sum to one: $\sum _{i=1}^{n}p_{i}=1$ Each individual component must have a probability between zero and one: $0\leq p_{i}\leq 1$ for all $i$. Therefore, the set of stochastic vectors coincides with the standard $(n-1)$-simplex. It is a point if $n=1$, a segment if $n=2$, a (filled) triangle if $n=3$, a (filled) tetrahedron if $n=4$, etc.

Properties

• The mean of the components of any probability vector is $1/n$.
• The shortest probability vector has the value $1/n$ as each component of the vector, and has a length of $ 1/{\sqrt {n}}$.
• The longest probability vector has the value 1 in a single component and 0 in all others, and has a length of 1.
• The shortest vector corresponds to maximum uncertainty, the longest to maximum certainty.
• The length of a probability vector is equal to $ {\sqrt {n\sigma ^{2}+1/n}}$, where $\sigma ^{2}$ is the variance of the elements of the probability vector.
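The length identity in the last property can be checked numerically. A small sketch, where the helper `length_identity` is hypothetical and $\sigma ^{2}$ is taken as the population variance of the components:

```python
import math

def length_identity(p):
    """Check |p| = sqrt(n * sigma^2 + 1/n) for a probability vector p."""
    n = len(p)
    assert all(v >= 0 for v in p) and abs(sum(p) - 1.0) < 1e-12
    length = math.sqrt(sum(v * v for v in p))       # Euclidean norm of p
    mean = 1.0 / n                                  # mean of any probability vector
    var = sum((v - mean) ** 2 for v in p) / n       # population variance
    return length, math.sqrt(n * var + 1.0 / n)

length, rhs = length_identity([0.5, 0.25, 0.25])
# both sides equal sqrt(0.375); the uniform vector gives 1/sqrt(n),
# and a one-hot vector gives length 1, matching the properties above
```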
See also • Stochastic matrix • Dirichlet distribution References 1. Jacobs, Konrad (1992), Discrete Stochastics, Basler Lehrbücher [Basel Textbooks], vol. 3, Birkhäuser Verlag, Basel, p. 45, doi:10.1007/978-3-0348-8645-1, ISBN 3-7643-2591-7, MR 1139766.
Stochastic optimization

Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization.[1] Stochastic optimization methods generalize deterministic methods for deterministic problems.

This article is about iterative methods. For the modeling (and optimization) of decisions under uncertainty, see stochastic programming. For the context of control theory, see stochastic control.

Methods for stochastic functions

Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system,[2][3] and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:

• stochastic approximation (SA), by Robbins and Monro (1951)[4]
• stochastic gradient descent
• finite-difference SA by Kiefer and Wolfowitz (1952)[5]
• simultaneous perturbation SA by Spall (1992)[6]
• scenario optimization

Randomized search methods

On the other hand, even when the data set consists of precise measurements, some methods introduce randomness into the search process to accelerate progress.[7] Such randomness can also make the method less sensitive to modeling errors.
Another advantage is that the randomness introduced into the search process can be used to obtain interval estimates of the minimum of a function via extreme value statistics.[8][9] Further, the injected randomness may enable the method to escape a local optimum and eventually to approach a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost certain good performance uniformly across many data sets, for many sorts of problems. Stochastic optimization methods of this kind include:

• simulated annealing by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi (1983)[10]
• quantum annealing
• Probability Collectives by D.H. Wolpert, S.R. Bieniawski and D.G. Rajnarayan (2011)[11]
• reactive search optimization (RSO) by Roberto Battiti, G. Tecchiolli (1994),[12] recently reviewed in the reference book[13]
• cross-entropy method by Rubinstein and Kroese (2004)[14]
• random search by Anatoly Zhigljavsky (1991)[15]
• informational search[16]
• stochastic tunneling[17]
• parallel tempering a.k.a. replica exchange[18]
• stochastic hill climbing
• swarm algorithms
• evolutionary algorithms
• genetic algorithms by Holland (1975)[19]
• evolution strategies
• cascade object optimization & modification algorithm (2016)[20]

In contrast, some authors have argued that randomization can only improve a deterministic algorithm if the deterministic algorithm was poorly designed in the first place.[21] Fred W. Glover[22] argues that reliance on random elements may prevent the development of more intelligent and better deterministic components. The way in which results of stochastic optimization algorithms are usually presented (e.g., presenting only the average, or even the best, out of N runs without any mention of the spread), may also result in a positive bias towards randomness.
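As an illustration of one method from the list above, here is a minimal simulated annealing sketch for a one-dimensional double-well objective. All parameter values (temperature schedule, step size, iteration count) are illustrative, not canonical.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.999,
                        iters=5000, seed=42):
    """Minimal simulated annealing sketch for a 1-D objective f.
    Uphill moves are accepted with probability exp(-delta/T);
    the temperature T decays geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # Metropolis-style acceptance: always downhill, sometimes uphill
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# double-well objective: local minimum near x = 1, global minimum near x = -1
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
best_x, best_f = simulated_annealing(f, x0=2.0)
```

Early on, when the temperature is high, the walk accepts many uphill moves and can cross the barrier between the two wells; as the temperature decays, the acceptance rule increasingly behaves like pure descent.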
See also • Global optimization • Machine learning • Scenario optimization • Gaussian process • State Space Model • Model predictive control • Nonlinear programming • Entropic value at risk References 1. Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. Wiley. ISBN 978-0-471-33052-3. 2. Fu, M. C. (2002). "Optimization for Simulation: Theory vs. Practice". INFORMS Journal on Computing. 14 (3): 192–227. doi:10.1287/ijoc.14.3.192.113. 3. M.C. Campi and S. Garatti. The Exact Feasibility of Randomized Solutions of Uncertain Convex Programs. SIAM J. on Optimization, 19, no.3: 1211–1230, 2008. 4. Robbins, H.; Monro, S. (1951). "A Stochastic Approximation Method". Annals of Mathematical Statistics. 22 (3): 400–407. doi:10.1214/aoms/1177729586. 5. J. Kiefer; J. Wolfowitz (1952). "Stochastic Estimation of the Maximum of a Regression Function". Annals of Mathematical Statistics. 23 (3): 462–466. doi:10.1214/aoms/1177729392. 6. Spall, J. C. (1992). "Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation". IEEE Transactions on Automatic Control. 37 (3): 332–341. CiteSeerX 10.1.1.19.4562. doi:10.1109/9.119632. 7. Holger H. Hoos and Thomas Stützle, Stochastic Local Search: Foundations and Applications, Morgan Kaufmann / Elsevier, 2004. 8. M. de Carvalho (2011). "Confidence intervals for the minimum of a function using extreme value statistics" (PDF). International Journal of Mathematical Modelling and Numerical Optimisation. 2 (3): 288–296. doi:10.1504/IJMMNO.2011.040793. 9. M. de Carvalho (2012). "A generalization of the Solis-Wets method" (PDF). Journal of Statistical Planning and Inference. 142 (3): 633‒644. doi:10.1016/j.jspi.2011.08.016. 10. S. Kirkpatrick; C. D. Gelatt; M. P. Vecchi (1983). "Optimization by Simulated Annealing". Science. 220 (4598): 671–680. Bibcode:1983Sci...220..671K. CiteSeerX 10.1.1.123.7607. doi:10.1126/science.220.4598.671. PMID 17813860. S2CID 205939. 11. D.H. Wolpert; S.R. 
Bieniawski; D.G. Rajnarayan (2011). "Probability Collectives in Optimization". Santa Fe Institute. 12. Battiti, Roberto; Gianpietro Tecchiolli (1994). "The reactive tabu search" (PDF). ORSA Journal on Computing. 6 (2): 126–140. doi:10.1287/ijoc.6.2.126. 13. Battiti, Roberto; Mauro Brunato; Franco Mascia (2008). Reactive Search and Intelligent Optimization. Springer Verlag. ISBN 978-0-387-09623-0. 14. Rubinstein, R. Y.; Kroese, D. P. (2004). The Cross-Entropy Method. Springer-Verlag. ISBN 978-0-387-21240-1. 15. Zhigljavsky, A. A. (1991). Theory of Global Random Search. Kluwer Academic. ISBN 978-0-7923-1122-5. 16. Kagan E.; Ben-Gal I. (2014). "A Group-Testing Algorithm with Online Informational Learning". IIE Transactions. 46 (2): 164–184. doi:10.1080/0740817X.2013.803639. S2CID 18588494. 17. W. Wenzel; K. Hamacher (1999). "Stochastic tunneling approach for global optimization of complex potential energy landscapes". Phys. Rev. Lett. 82 (15): 3003. arXiv:physics/9903008. Bibcode:1999PhRvL..82.3003W. doi:10.1103/PhysRevLett.82.3003. S2CID 5113626. 18. E. Marinari; G. Parisi (1992). "Simulated tempering: A new monte carlo scheme". Europhys. Lett. 19 (6): 451–458. arXiv:hep-lat/9205018. Bibcode:1992EL.....19..451M. doi:10.1209/0295-5075/19/6/002. S2CID 12321327. 19. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley. ISBN 978-0-201-15767-3. Archived from the original on 2006-07-19. 20. Tavridovich, S. A. (2017). "COOMA: an object-oriented stochastic optimization algorithm". International Journal of Advanced Studies. 7 (2): 26–47. doi:10.12731/2227-930x-2017-2-26-47. 21. Yudkowsky, Eliezer. "Worse Than Random - LessWrong". 22. Glover, F. (2007). "Tabu search—uncharted domains". Annals of Operations Research. 149: 89–98. CiteSeerX 10.1.1.417.8223. doi:10.1007/s10479-006-0113-9. S2CID 6854578. Further reading • Michalewicz, Z. and Fogel, D. B. (2000), How to Solve It: Modern Heuristics, Springer-Verlag, New York. 
External links • COSP
Stochastic tunneling

In numerical analysis, stochastic tunneling (STUN) is an approach to global optimization based on Monte Carlo sampling of the objective function to be minimized, in which the function is nonlinearly transformed to allow for easier tunneling among regions containing function minima. Easier tunneling allows for faster exploration of sample space and faster convergence to a good solution.

Idea

Monte Carlo method-based optimization techniques sample the objective function by randomly "hopping" from the current solution vector to another with a difference in the function value of $\Delta E$. The acceptance probability of such a trial jump is in most cases chosen to be $\min \left(1;\exp \left(-\beta \cdot \Delta E\right)\right)$ (Metropolis criterion) with an appropriate parameter $\beta $.

The general idea of STUN is to circumvent the slow dynamics of ill-shaped energy functions that one encounters, for example, in spin glasses by tunneling through such barriers. This goal is achieved by Monte Carlo sampling of a transformed function that lacks this slow dynamics. In the "standard form" the transformation reads $f_{STUN}:=1-\exp \left(-\gamma \cdot \left(E(x)-E_{o}\right)\right)$ where $E_{o}$ is the lowest function value found so far. This transformation preserves the loci of the minima. $f_{STUN}$ is then used in place of $E$ in the original algorithm, giving a new acceptance probability of $\min \left(1;\exp \left(-\beta \cdot \Delta f_{STUN}\right)\right)$ The effect of such a transformation is shown in the graph.

Dynamically adaptive stochastic tunneling

A variation on always tunneling is to do so only when trapped at a local minimum. $\gamma $ is then adjusted to tunnel out of the minimum and pursue a more globally optimum solution. Detrended fluctuation analysis is the recommended way of determining if trapped at a local minimum.
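A minimal sketch of the STUN idea, assuming a one-dimensional double-well energy and illustrative values of $\gamma $, $\beta $ and the step size. The helper `stun` is hypothetical and omits the dynamically adaptive variant.

```python
import math
import random

def stun(E, x0, gamma=1.0, beta=5.0, step=0.3, iters=20000, seed=1):
    """Stochastic tunneling sketch: Metropolis sampling of the transformed
    objective f_STUN = 1 - exp(-gamma * (E(x) - E0)), where E0 is the
    lowest energy found so far."""
    transform = lambda e, e0: 1.0 - math.exp(-gamma * (e - e0))
    rng = random.Random(seed)
    x, ex = x0, E(x0)
    best_x, e0 = x0, ex
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        ec = E(cand)
        if ec < e0:                  # new record: re-center the transform
            best_x, e0 = cand, ec
        delta = transform(ec, e0) - transform(ex, e0)
        if delta < 0 or rng.random() < math.exp(-beta * delta):
            x, ex = cand, ec
    return best_x, e0

# double-well energy: local minimum near x = +1, global minimum near x = -1
E = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
best_x, e0 = stun(E, x0=1.0)
```

Because $f_{STUN}$ is bounded above by 1, even very high energy barriers map to a bounded penalty, so the transformed walk can cross them far more easily than a Metropolis walk on $E$ itself.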
Other approaches • Simulated annealing • Parallel tempering • Genetic algorithm • Differential evolution References • K. Hamacher (2006). "Adaptation in Stochastic Tunneling Global Optimization of Complex Potential Energy Landscapes". Europhys. Lett. 74 (6): 944–950. Bibcode:2006EL.....74..944H. doi:10.1209/epl/i2006-10058-0. S2CID 250761754. • K. Hamacher & W. Wenzel (1999). "The Scaling Behaviour of Stochastic Minimization Algorithms in a Perfect Funnel Landscape". Phys. Rev. E. 59 (1): 938–941. arXiv:physics/9810035. Bibcode:1999PhRvE..59..938H. doi:10.1103/PhysRevE.59.938. S2CID 119096368. • W. Wenzel & K. Hamacher (1999). "A Stochastic tunneling approach for global minimization". Phys. Rev. Lett. 82 (15): 3003–3007. arXiv:physics/9903008. Bibcode:1999PhRvL..82.3003W. doi:10.1103/PhysRevLett.82.3003. S2CID 5113626. • Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller and Edward Teller (June 1953). "Equation of State Calculations by Fast Computing Machines" (PDF). The Journal of Chemical Physics. 21 (6): 1087–1092. Bibcode:1953JChPh..21.1087M. doi:10.1063/1.1699114. OSTI 4390578. S2CID 1046577.{{cite journal}}: CS1 maint: multiple names: authors list (link) • Mingjie Lin (December 2010). "Improving FPGA Placement with Dynamically Adaptive Stochastic Tunneling". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 29 (12): 1858–1869. doi:10.1109/tcad.2010.2061670. S2CID 8706692.
Stoic logic

Stoic logic is the system of propositional logic developed by the Stoic philosophers in ancient Greece. It was one of the two great systems of logic in the classical world. It was largely built and shaped by Chrysippus, the third head of the Stoic school in the 3rd century BCE. Chrysippus's logic differed from Aristotle's term logic because it was based on the analysis of propositions rather than terms.

The smallest unit in Stoic logic is an assertible (the Stoic equivalent of a proposition), which is the content of a statement such as "it is day". Assertibles have a truth-value that may change over time: they are true or false depending on when they are expressed (e.g. the assertible "it is night" is true only when it is night).[1] In contrast, Aristotelian propositions strongly affirm or deny a predicate of a subject and seek to have their truth validated or falsified independent of context. Compound assertibles can be built up from simple ones through the use of logical connectives. The resulting syllogistic was grounded on five basic indemonstrable arguments to which all other syllogisms were claimed to be reducible.

Towards the end of antiquity Stoic logic was neglected in favour of Aristotle's logic, and as a result the Stoic writings on logic did not survive; the only accounts of it were incomplete reports by other writers. Knowledge about Stoic logic as a system was lost until the 20th century, when logicians familiar with the modern propositional calculus reappraised the ancient accounts of it.

Background

Stoicism is a school of philosophy which developed in the Hellenistic period around a generation after the time of Aristotle.[2] The Stoics believed that the universe operated according to reason, i.e.
by a God which is immersed in nature itself.[2] Logic (logike) was the part of philosophy which examined reason (logos).[3] Achieving a happy life—a life worth living—requires logical thought.[2] The Stoics held that an understanding of ethics was impossible without logic.[4] In the words of Inwood, the Stoics believed that:[5]

Logic helps a person see what is the case, reason effectively about practical affairs, stand his or her ground amid confusion, differentiate the certain from the probable, and so forth.

Aristotle's term logic can be viewed as a logic of classification.[6] It makes use of four logical terms "all", "some", "is/are", and "is/are not" and to that extent is fairly static.[6][7] The Stoics needed a logic that examines choice and consequence.[4] The Stoics therefore developed a logic of propositions which uses connectives such as "if ... then", "either ... or", and "not both".[8] Such connectives are part of everyday reasoning.[8] Socrates in the Dialogues of Plato often asks a fellow citizen if they believe a certain thing; when they agree, Socrates then proceeds to show how the consequences are logically false or absurd, inferring that the original belief must be wrong.[8] Similar attempts at forensic reasoning must have been used in the law-courts, and they are a fundamental part of Greek mathematics.[8] Aristotle himself was familiar with propositions, and his pupils Theophrastus and Eudemus had examined hypothetical syllogisms, but there was no attempt by the Peripatetic school to develop these ideas into a system of logic.[9]

The Stoic tradition of logic originated in the 4th century BCE in a different school of philosophy known as the Megarian school.[10] It was two dialecticians of this school, Diodorus Cronus and his pupil Philo, who developed their own theories of modalities and of conditional propositions.[10] The founder of Stoicism, Zeno of Citium, studied under the Megarians and he was said to have been a fellow pupil with Philo.[11]
However, the outstanding figure in the development of Stoic logic was Chrysippus of Soli (c. 279 – c. 206 BCE), the third head of the Stoic school.[10] Chrysippus shaped much of Stoic logic as we know it, creating a system of propositional logic.[12] As a logician Chrysippus is sometimes said to rival Aristotle in stature.[11] The logical writings by Chrysippus are, however, almost entirely lost;[10] instead, his system has to be reconstructed from the partial and incomplete accounts preserved in the works of later authors such as Sextus Empiricus, Diogenes Laërtius, and Galen.[11]

Propositions

To the Stoics, logic was a wide field of knowledge which included the study of language, grammar, rhetoric and epistemology.[3] However, all of these fields were interrelated, and the Stoics developed their logic (or "dialectic") within the context of their theory of language and epistemology.[13]

Assertibles

The Stoics held that any meaningful utterance will involve three items: the sounds uttered; the thing which is referred to or described by the utterance; and an incorporeal item—the lektón (sayable)—that which is conveyed in the language.[14] The lekton is not a statement but the content of a statement, and it corresponds to a complete utterance.[15][16] A lekton can be something such as a question or a command, but Stoic logic operates on those lekta which are called "assertibles" (axiomata), described as a proposition which is either true or false and which affirms or denies.[15][17] Examples of assertibles include "it is night", "it is raining this afternoon", and "no one is walking."[18][19] The assertibles are truth-bearers.[20] They can never be true and false at the same time (law of noncontradiction) and they must be at least true or false (law of excluded middle).[21] The Stoics catalogued these simple assertibles according to whether they are affirmative or negative, and whether they are definite or indefinite (or both).[22] The assertibles are much like modern
propositions; however, their truth value can change depending on when they are asserted.[1] Thus an assertible such as "it is night" will only be true when it is night and not when it is day.[17]

Compound assertibles

Simple assertibles can be connected to each other to form compound or non-simple assertibles.[23] This is achieved through the use of logical connectives.[23] Chrysippus seems to have been responsible for introducing the three main types of connectives: the conditional (if), conjunctive (and), and disjunctive (or).[24] A typical conditional takes the form of "if p then q";[25] whereas a conjunction takes the form of "both p and q";[25] and a disjunction takes the form of "either p or q".[26] The or they used is exclusive, unlike the inclusive or generally used in modern formal logic.[27] These connectives are combined with the use of not for negation.[28] Thus the conditional can take the following four forms:[29]

If p, then q | If not p, then q | If p, then not q | If not p, then not q

Later Stoics added more connectives: the pseudo-conditional took the form of "since p then q"; and the causal assertible took the form of "because p then q".[a] There was also a comparative (or dissertive): "more/less (likely) p than q".[30]

Logical connectives

Name               | Type                      | Example
Conditional        | if                        | if it is day, it is light
Conjunction        | and                       | it is day and light
Disjunction        | either ... or             | either it is day or night
Pseudo-conditional | since                     | since it is day, it is light
Causal             | because                   | because it is day, it is light
Comparative        | more/less likely ... than | more likely it is day than night

Modality

Assertibles can also be distinguished by their modal properties[b]—whether they are possible, impossible, necessary, or non-necessary.[31] In this the Stoics were building on an earlier Megarian debate initiated by Diodorus Cronus.[31] Diodorus had defined possibility in a way which seemed to adopt a form of fatalism.[32] Diodorus defined possible as "that which either is or will be true".[33] Thus there are no possibilities that are forever unrealised; whatever is possible is, or one day will be, true.[32] His pupil Philo, rejecting this, defined possible as "that which is capable of being true by the proposition's own nature",[33] thus a statement like "this piece of wood can burn" is possible, even if it spent its entire existence on the bottom of the ocean.[34] Chrysippus, on the other hand, was a causal determinist: he thought that true causes inevitably give rise to their effects and that all things arise in this way.[35] But he was not a logical determinist or fatalist: he wanted to distinguish between possible and necessary truths.[35] Thus he took a middle position between Diodorus and Philo, combining elements of both their modal systems.[36] Chrysippus's set of Stoic modal definitions was as follows:[37]

Modal definitions

Name          | Definition
possible      | An assertible which can become true and is not hindered by external things from becoming true
impossible    | An assertible which cannot become true, or which can become true but is hindered by external things from becoming true
necessary     | An assertible which (when true) cannot become false, or which can become false but is hindered by external things from becoming false
non-necessary | An assertible which can become false and is not hindered by external things from becoming false

Syllogistic

Arguments

In Stoic logic, an argument form contains two (or more) premises related to one another as cause and effect.[38] A typical Stoic syllogism is:[39]

If it is day, it is light;
It is day;
Therefore it is light.

It has a non-simple assertible for the first premise ("If it is day, it is light") and a simple assertible for the second premise ("It is day").[39] The second premise doesn't always have to be simple but it will have fewer components than the first.[39] In more formal terms this type of syllogism is:[17]

If p, then q;
p;
Therefore q.

As with Aristotle's term logic, Stoic logic also uses variables, but the values of the variables are propositions not terms.[40] Chrysippus listed five basic argument forms, which he regarded as true beyond dispute.[c] These five indemonstrable arguments are made up of conditional, disjunction, and negated-conjunction connectives,[41] and all other arguments are reducible to these five indemonstrable arguments.[16]

Indemonstrable arguments

Name[d] | Description | Modern sequent | Example
Modus ponens | If p, then q. p. Therefore, q. | $p\to q,\;p\;\;\vdash \;\;q$ | If it is day, it is light. It is day. Therefore, it is light.
Modus tollens | If p, then q. Not q. Therefore, not p. | $p\to q,\;\neg q\;\;\vdash \;\neg p$ | If it is day, it is light. It is not light. Therefore, it is not day.
Conjunctive syllogism | Not both p and q. p. Therefore, not q. | $\neg (p\land q),\;p\;\;\vdash \;\neg q$ | It is not both day and night. It is day. Therefore, it is not night.
Modus tollendo ponens | Either p or q. Not p. Therefore, q. | $p\lor q,\;\neg p\;\;\vdash \;\;q$ | It is either day or night. It is not day. Therefore, it is night.
Modus ponendo tollens | Either p or q. p. Therefore, not q. | $p{\underline {\lor }}q,\;p\;\;\vdash \;\neg q$ | It is either day or night. It is day. Therefore, it is not night.
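The validity of the five indemonstrables can be checked mechanically by enumerating truth assignments. A short sketch, modelling the Stoic "or" as exclusive, as the text notes:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no assignment of truth values to p, q
    makes every premise true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b   # material conditional
xor = lambda a, b: a != b             # the Stoic 'or', read exclusively

# the five indemonstrables: a list of premises followed by a conclusion
assert valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q)
assert valid([lambda p, q: implies(p, q), lambda p, q: not q], lambda p, q: not p)
assert valid([lambda p, q: not (p and q), lambda p, q: p], lambda p, q: not q)
assert valid([lambda p, q: xor(p, q), lambda p, q: not p], lambda p, q: q)
# the fifth genuinely needs the exclusive 'or': with an inclusive 'or',
# premises p-or-q and p would not license the conclusion not-q
assert valid([lambda p, q: xor(p, q), lambda p, q: p], lambda p, q: not q)
```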
There can be many variations of these five indemonstrable arguments.[42] For example, the assertibles in the premises can be more complex, and the following syllogism is a valid example of the second indemonstrable (modus tollens):[29]

if both p and q, then r; not r; therefore not: both p and q

Similarly one can incorporate negation into these arguments.[29] A valid example of the fourth indemonstrable (modus tollendo ponens or disjunctive syllogism) is:[43]

either [not p] or q; not [not p]; therefore q

which, incorporating the principle of double negation, is equivalent to:[43]

either [not p] or q; p; therefore q

Analysis

Many arguments are not in the form of the five indemonstrables, and the task is to show how they can be reduced to one of the five types.[28] A simple example of Stoic reduction is reported by Sextus Empiricus:[44]

if both p and q, then r; not r; but also p; therefore not q

This can be reduced to two separate indemonstrable arguments of the second and third type:[45]

if both p and q, then r; not r; therefore not: both p and q
not: both p and q; p; therefore not q

The Stoics stated that complex syllogisms could be reduced to the indemonstrables through the use of four ground rules or themata.[46] Of these four themata, only two have survived.[33] One, the so-called first thema, was a rule of antilogism:[33]

When from two [assertibles] a third follows, then from either of them together with the contradictory of the conclusion the contradictory of the other follows (Apuleius, De Interpretatione 209. 9–14).
The other, the third thema, was a cut rule by which chain syllogisms could be reduced to simple syllogisms.[e] The importance of these rules is not altogether clear.[47] In the 2nd century BCE Antipater of Tarsus is said to have introduced a simpler method involving the use of fewer themata, although few details survive concerning this.[47] In any case, the themata cannot have been a necessary part of every analysis.[48]

Paradoxes

Why should not the philosopher develop his own reason? You turn to vessels of crystal, I to the syllogism called The Liar; you to myrrhine glassware, I to the syllogism called The Denyer.
–Epictetus, Discourses, iii.9.20

In addition to describing which inferences are valid ones, part of a Stoic's logical training was the enumeration and refutation of false arguments, including the identification of paradoxes.[49] A false argument could be one with a false premise or which is formally incorrect, however paradoxes represented a challenge to the basic logical notions of the Stoics such as truth or falsehood.[50] One famous paradox, known as The Liar, asked "A man says he is lying; is what he says true or false?"—if the man says something true then it seems he is lying, but if he is lying then he is not saying something true, and so on.[51] Chrysippus is known to have written several books on this paradox, although it is not known what solution he offered for it.[52] Another paradox known as the Sorites or "Heap" asked "How many grains of wheat do you need before you get a heap?"[52] It was said to challenge the idea of true or false by offering up the possibility of vagueness.[52] The response of Chrysippus however was: "That doesn't harm me, for like a skilled driver I shall restrain my horses before I reach the edge ...
In like manner I restrain myself in advance and stop replying to sophistical questions."[52]

However, this mastery of logical puzzles, study of paradoxes, and dissection of arguments[53] was not an end in itself, but rather its purpose was for the Stoics to cultivate their rational powers.[54] Stoic logic was thus a method of self-discovery.[55] Its aim was to enable ethical reflection, permit secure and confident arguing, and lead the pupil to truth.[53] The end result would be thought that is consistent, clear and precise, and which exposes confusion, murkiness and inconsistency.[56] Diogenes Laërtius gives a list of dialectical virtues, which were probably invented by Chrysippus:[57]

First he mentions aproptosia, which means literally 'not falling forward' and is defined as 'knowledge of when one should give assent or not'; next aneikaiotes, 'unhastiness', defined as 'strong-mindedness against the probable (or plausible), so as not to give in to it'; third, anelenxia, 'irrefutability', the definition of which is 'strength in argument, so as not to be driven by it to the contradictory'; and fourth, amataiotes, 'lack of emptyheadedness', defined as 'a disposition which refers impressions (phantasiai) to the correct logos.[57]

Later reception

For around five hundred years Stoic logic was one of the two great systems of logic.[58] The logic of Chrysippus was discussed alongside that of Aristotle, and it may well have been more prominent since Stoicism was the dominant philosophical school.[59] From a modern perspective Aristotle's term logic and the Stoic logic of propositions appear complementary, but they were sometimes regarded as rival systems.[28] In late antiquity the Stoic school fell into decline, and the last pagan philosophical school, the Neoplatonists, adopted Aristotle's logic for their own.[60] Only elements of Stoic logic made their way into the logical writings of later commentators such as Boethius, transmitting confused parts of Stoic
logic to the Middle Ages.[59] Propositional logic was redeveloped by Peter Abelard in the 12th-century, but by the mid-15th-century the only logic which was being studied was a simplified version of Aristotle's.[61] In the 18th-century Immanuel Kant declared that "since Aristotle ... logic has not been able to advance a single step, and is thus to all appearance a closed and complete body of doctrine."[62] To 19th-century historians, who believed that Hellenistic philosophy represented a decline from that of Plato and Aristotle, Stoic logic was seen with contempt.[63] Carl Prantl thought that Stoic logic was "dullness, triviality, and scholastic quibbling" and he welcomed the fact that the works of Chrysippus were no longer extant.[64] Eduard Zeller remarked that "the whole contribution of the Stoics to the field of logic consists in their having clothed the logic of the Peripatetics with a new terminology."[65] Although developments in modern logic that parallel Stoic logic[66] began in the middle of the 19th-century with the work of George Boole and Augustus De Morgan,[61] Stoic logic itself was only reappraised in the 20th-century,[64] beginning with the work of Polish logician Jan Łukasiewicz[64] and Benson Mates.[64] What we see as a result is a close similarity between [these] methods of reasoning and the behaviour of digital computers. ... The code happens to come from the nineteenth-century logician and mathematician George Boole, whose aim was to codify the relations studied much earlier by Chrysippus (albeit with greater abstraction and sophistication). Later generations built on Boole's insights ... but the logic that made it all possible was the interconnected logic of an interconnected universe, discovered by the ancient Chrysippus, who labored long ago under an old Athenian stoa.[67] Notes a. 
^ The minimum requirement for a conditional is that the consequent follows from the antecedent.[25] The pseudo-conditional adds that the antecedent must also be true. The causal assertible adds an asymmetry rule such that if p is the cause/reason for q, then q cannot be the cause/reason for p. Bobzien 1999, p. 109 b. ^ "Stoic modal logic is not a logic of modal propositions (e.g., propositions of the type 'It is possible that it is day' ...) ... instead, their modal theory was about non-modalized propositions like 'It is day', insofar as they are possible, necessary, and so forth." Bobzien 1999, p. 117 c. ^ Most of these argument forms had already been discussed by Theophrastus, but: "It is plain that even if Theophrastus discussed (1)–(5), he did not anticipate Chrysippus' achievement. ... his Aristotelian approach to the study and organization of argument-forms would have given his discussion of mixed hypothetical syllogisms an utterly unStoical aspect." Barnes 1999, p. 83 d. ^ These Latin names date from the Middle Ages. Shenefelt & White 2013, p. 288 e. ^ For a brief summary of these themata see Susanne Bobzien's Ancient Logic article for the Stanford Encyclopedia of Philosophy. For a detailed (and technical) analysis of the themata, including a tentative reconstruction of the two lost ones, see Bobzien 1999, pp. 137–148 Citations 1. Bobzien 1999, p. 95 2. Shenefelt & White 2013, p. 74 3. Sellars 2006, p. 55 4. Shenefelt & White 2013, p. 78 5. Inwood 2003, p. 229 6. Shenefelt & White 2013, p. 73 7. Sellars 2006, p. 57 8. Shenefelt & White 2013, p. 79 9. Ierodiakonou 2009, p. 507 10. Bobzien 1996a, p. 880 11. Sellars 2006, p. 56 12. Shenefelt & White 2013, p. 80 13. O'Toole & Jennings 2004, p. 400 14. Everson 1994, p. 85 15. Johansen & Rosenmeier 1998, p. 466 16. Ierodiakonou 2006, p. 678 17. Sellars 2006, p. 58 18. Sellars 2006, pp. 58–59 19. Bobzien 1999, p. 102 20. Bobzien 1999, p. 92 21. Shenefelt & White 2013, p. 88 22. Bobzien 1999, pp. 97–98 23. 
Bobzien 1999, p. 103 24. Bobzien 1999, p. 105 25. Bobzien 1999, p. 106 26. Bobzien 1999, p. 109 27. Inwood 2003, p. 231 28. Sellars 2006, p. 60 29. Bobzien 1999, p. 129 30. Bobzien 1999, pp. 109–111 31. Sellars 2006, p. 59 32. Adamson 2015, p. 136 33. Bobzien 2020 34. Adamson 2015, p. 138 35. Adamson 2015, p. 58 36. Bobzien 1999, p. 120 37. Bobzien 1999, p. 118 38. Zeller 1880, p. 113 39. Bobzien 1999, p. 121 40. Bobzien 1996a, p. 881 41. Asmus & Restall 2012, p. 21 42. Bobzien 1999, p. 128 43. Shenefelt & White 2013, p. 87 44. Ierodiakonou 2009, p. 521 45. Ierodiakonou 2009, p. 522 46. Bobzien 1996b, p. 133 47. Barnes 1997, p. 82 48. Barnes 1997, p. 83 49. Inwood 2003, p. 232 50. Ierodiakonou 2009, p. 525 51. Asmus & Restall 2012, p. 20 52. Ierodiakonou 2009, p. 526 53. Nussbaum 2009, p. 349 54. Long 2001, p. 95 55. Long 2001, p. 102 56. Nussbaum 2009, pp. 348–349 57. Long 2001, p. 92 citing Diogenes Laërtius, vii. 46f. 58. Kneale & Kneale 1962, p. 113 59. Kneale & Kneale 1962, p. 177 60. Sharples 2003, p. 156 61. Hurley 2011, p. 6 62. O'Toole & Jennings 2004, p. 403 quoting Kant's Critique of Pure Reason. 63. O'Toole & Jennings 2004, p. 403 64. O'Toole & Jennings 2004, p. 397 65. Zeller 1880, p. 124 66. Bonevac & Dever 2012, p. 181 67. Shenefelt & White 2013, pp. 96–97 References • Adamson, Peter (2015), Philosophy in the Hellenistic and Roman Worlds, Oxford University Press, ISBN 978-0-19-872802-3 • Asmus, Conrad; Restall, Greg (2012), "A History of the Consequence Relations", in Gabbay, Dov M.; Pelletier, Francis Jeffry; Woods, John (eds.), Handbook of the History of Logic, vol.
2, Elsevier, ISBN 978-0-444-52937-4 • Barnes, Jonathan (1997), Logic and the Imperial Stoa, Brill, ISBN 90-04-10828-9 • Barnes, Jonathan (1999), "Logic: The Peripatetics", in Algra, Keimpe (ed.), The Cambridge History of Hellenistic Philosophy, Cambridge University Press, ISBN 0-521-25028-5 • Bobzien, Susanne (1996a), "Logic", in Hornblower, Simon; Spawforth, Antony (eds.), The Oxford Classical Dictionary, Oxford University Press, ISBN 978-0-1986-6172-6 • Bobzien, Susanne (1996b), "Stoic Syllogistic", Oxford Studies in Ancient Philosophy 14, Oxford University Press, ISBN 978-0-1982-3670-2 • Bobzien, Susanne (1999), "Logic: The Stoics", in Algra, Keimpe (ed.), The Cambridge History of Hellenistic Philosophy, Cambridge University Press, ISBN 0-521-25028-5 • Bobzien, Susanne (2020). "Ancient Logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. • Bonevac, Daniel; Dever, Josh (2012), "A History of the Connectives", in Gabbay, Dov M.; Pelletier, Francis Jeffry; Woods, John (eds.), Handbook of the History of Logic, vol. 2, Elsevier, ISBN 978-0-444-52937-4 • Everson, Stephen (1994), Companions to Ancient Thought 3: Language, Cambridge University Press • Hadot, Pierre (2002), What is Ancient Philosophy?, Harvard University Press, ISBN 0-674-00733-6 • Hurley, Patrick J.
(2011), A Concise Introduction to Logic, Wadsworth, ISBN 978-0-8400-3417-5 • Ierodiakonou, Katerina (2006), "Stoicism", in Wilson, Nigel (ed.), Encyclopedia of Ancient Greece, Psychology Press, ISBN 978-0-4158-7396-3 • Ierodiakonou, Katerina (2009), "Stoic Logic", in Gill, Mary Louise; Pellegrin, Pierre (eds.), A Companion to Ancient Philosophy, Wiley-Blackwell, ISBN 978-1-4051-8834-0 • Inwood, Brad (2003), "Stoicism", in Furley, David (ed.), Routledge History of Philosophy Volume II: Aristotle to Augustine, Routledge, ISBN 978-0-4153-0874-8 • Johansen, Karsten Friis; Rosenmeier, Henrik (1998), A History of Ancient Philosophy: From the Beginnings to Augustine, Routledge, ISBN 0-415-12738-6 • Kenny, Anthony (2006), Ancient Philosophy, Oxford University Press, ISBN 978-0-19-875272-1 • Kneale, William; Kneale, Martha (1962), The Development of Logic, Clarendon Press • Long, A. A. (2001), "Dialectic and the Stoic Sage", Stoic Studies, University of California Press, ISBN 0-520-22974-6 • Nussbaum, Martha C. (2009), The Therapy of Desire: Theory and Practice in Hellenistic Ethics, Princeton University Press, ISBN 978-0-691-14131-2 • O'Toole, Robert R.; Jennings, Raymond E. (2004), "The Megarians and the Stoics", in Gabbay, Dov M.; Woods, John (eds.), Handbook of the History of Logic, vol. 1, Elsevier, ISBN 0-444-51596-8 • Sellars, John (2006), Ancient Philosophies: Stoicism, Acumen, ISBN 978-1-84465-053-8 • Shenefelt, Michael; White, Heidi (2013), If A, Then B: How Logic Shaped the World, Columbia University Press, ISBN 978-0-231-53519-9 • Sharples, Robert W. (2003), "The Peripatetic School", in Furley, David (ed.), Routledge History of Philosophy Volume II: Aristotle to Augustine, Routledge, ISBN 978-0-4153-0874-8 • Zeller, Eduard (1880), The Stoics, Epicureans and Sceptics, Longmans, Green, and Co. External links • Bobzien, Susanne. "Ancient Logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. 
• Stoic Logic (1953) by Benson Mates (1919–2009)
Wikipedia
Stokes operator The Stokes operator, named after George Gabriel Stokes, is an unbounded linear operator used in the theory of partial differential equations, specifically in the fields of fluid dynamics and electromagnetics. Definition If we define $P_{\sigma }$ as the Leray projection onto divergence-free vector fields, then the Stokes operator $A$ is defined by $A:=-P_{\sigma }\Delta ,$ where $\Delta \equiv \nabla ^{2}$ is the Laplacian. Since $A$ is unbounded, we must also give its domain of definition, which is defined as ${\mathcal {D}}(A)=H^{2}\cap V$, where $V=\{{\vec {u}}\in (H_{0}^{1}(\Omega ))^{n}|\operatorname {div} \,{\vec {u}}=0\}$. Here, $\Omega $ is a bounded open set in $\mathbb {R} ^{n}$ (usually n = 2 or 3), $H^{2}(\Omega )$ and $H_{0}^{1}(\Omega )$ are the standard Sobolev spaces, and the divergence of ${\vec {u}}$ is taken in the distribution sense. Properties For a given domain $\Omega $ which is open, bounded, and has $C^{2}$ boundary, the Stokes operator $A$ is a self-adjoint positive-definite operator with respect to the $L^{2}$ inner product. It has an orthonormal basis of eigenfunctions $\{w_{k}\}_{k=1}^{\infty }$ corresponding to eigenvalues $\{\lambda _{k}\}_{k=1}^{\infty }$ which satisfy $0<\lambda _{1}<\lambda _{2}\leq \lambda _{3}\leq \cdots \leq \lambda _{k}\leq \cdots $ and $\lambda _{k}\rightarrow \infty $ as $k\rightarrow \infty $. Note that the smallest eigenvalue is unique and non-zero. These properties allow one to define powers of the Stokes operator. Let $\alpha >0$ be a real number. We define $A^{\alpha }$ by its action on ${\vec {u}}\in {\mathcal {D}}(A)$: $A^{\alpha }{\vec {u}}=\sum _{k=1}^{\infty }\lambda _{k}^{\alpha }u_{k}{\vec {w_{k}}}$ where $u_{k}:=({\vec {u}},{\vec {w_{k}}})$ and $(\cdot ,\cdot )$ is the $L^{2}(\Omega )$ inner product.
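The spectral definition of $A^{\alpha }$ above can be illustrated numerically. The sketch below is a finite-dimensional stand-in only: it uses a discrete 1D Dirichlet Laplacian rather than the genuine Stokes operator (which would also require the Leray projection), but that matrix is likewise symmetric positive-definite with increasing eigenvalues, so the same recipe $A^{\alpha }{\vec {u}}=\sum _{k}\lambda _{k}^{\alpha }u_{k}{\vec {w_{k}}}$ applies.

```python
import numpy as np

# Finite-dimensional stand-in for the Stokes operator: a discrete 1D Dirichlet
# Laplacian (symmetric positive-definite). Illustrates spectral powers only;
# the real Stokes operator would also involve the Leray projection.
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

lam, W = np.linalg.eigh(A)  # eigenvalues in increasing order, orthonormal eigenvectors

def frac_power(alpha):
    """Spectral power: A^alpha u = sum_k lam_k^alpha (u, w_k) w_k."""
    return W @ np.diag(lam ** alpha) @ W.T

u = np.random.default_rng(0).standard_normal(n)
sqrtA = frac_power(0.5)
```

Applying `frac_power(0.5)` twice recovers the action of `A`, and `frac_power(-1.0)` gives a bounded inverse, mirroring the properties of $A^{-1}$ discussed next.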
The inverse $A^{-1}$ of the Stokes operator is a bounded, compact, self-adjoint operator in the space $H:=\{{\vec {u}}\in (L^{2}(\Omega ))^{n}|\operatorname {div} \,{\vec {u}}=0{\text{ and }}\gamma ({\vec {u}})=0\}$, where $\gamma $ is the trace operator. Furthermore, $A^{-1}:H\rightarrow V$ is injective. References • Temam, Roger (2001), Navier-Stokes Equations: Theory and Numerical Analysis, AMS Chelsea Publishing, ISBN 0-8218-2737-5 • Constantin, Peter; Foias, Ciprian (1988), Navier-Stokes Equations, University of Chicago Press
Stolarsky mean In mathematics, the Stolarsky mean is a generalization of the logarithmic mean. It was introduced by Kenneth B. Stolarsky in 1975.[1] Definition For two positive real numbers x, y the Stolarsky mean is defined as: ${\begin{aligned}S_{p}(x,y)&=\lim _{(\xi ,\eta )\to (x,y)}\left({\frac {\xi ^{p}-\eta ^{p}}{p(\xi -\eta )}}\right)^{1/(p-1)}\\[10pt]&={\begin{cases}x&{\text{if }}x=y\\\left({\frac {x^{p}-y^{p}}{p(x-y)}}\right)^{1/(p-1)}&{\text{else}}\end{cases}}\end{aligned}}$ Derivation It is derived from the mean value theorem, which states that a secant line, cutting the graph of a differentiable function $f$ at $(x,f(x))$ and $(y,f(y))$, has the same slope as a line tangent to the graph at some point $\xi $ in the interval $[x,y]$. $\exists \xi \in [x,y]\ f'(\xi )={\frac {f(x)-f(y)}{x-y}}$ The Stolarsky mean is obtained by $\xi =\left[f'\right]^{-1}\left({\frac {f(x)-f(y)}{x-y}}\right)$ when choosing $f(x)=x^{p}$. Special cases • $\lim _{p\to -\infty }S_{p}(x,y)$ is the minimum. • $S_{-1}(x,y)$ is the geometric mean. • $\lim _{p\to 0}S_{p}(x,y)$ is the logarithmic mean. It can be obtained from the mean value theorem by choosing $f(x)=\ln x$. • $S_{\frac {1}{2}}(x,y)$ is the power mean with exponent ${\frac {1}{2}}$. • $\lim _{p\to 1}S_{p}(x,y)$ is the identric mean. It can be obtained from the mean value theorem by choosing $f(x)=x\cdot \ln x$. • $S_{2}(x,y)$ is the arithmetic mean. • $S_{3}(x,y)=QM(x,y,GM(x,y))$ is a connection to the quadratic mean and the geometric mean. • $\lim _{p\to \infty }S_{p}(x,y)$ is the maximum. Generalizations One can generalize the mean to n + 1 variables by considering the mean value theorem for divided differences for the nth derivative. One obtains $S_{p}(x_{0},\dots ,x_{n})={f^{(n)}}^{-1}(n!\cdot f[x_{0},\dots ,x_{n}])$ for $f(x)=x^{p}$. See also • Mean References 1. Stolarsky, Kenneth B. (1975). "Generalizations of the logarithmic mean". Mathematics Magazine. 48: 87–92. doi:10.2307/2689825. ISSN 0025-570X.
JSTOR 2689825. Zbl 0302.26003.
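A minimal sketch (function names are hypothetical) implementing the closed form above for p ∉ {0, 1}, together with the logarithmic mean for comparison. The special cases can then be checked numerically: $S_{2}$ is the arithmetic mean, $S_{-1}$ the geometric mean, $S_{1/2}$ the power mean with exponent 1/2, and small p approximates the logarithmic mean.

```python
import math

def stolarsky(p, x, y):
    """Stolarsky mean S_p(x, y) for positive x, y; p = 0 and p = 1 are
    excluded (they are the logarithmic and identric limits, respectively)."""
    if x == y:
        return float(x)
    return ((x ** p - y ** p) / (p * (x - y))) ** (1.0 / (p - 1.0))

def log_mean(x, y):
    """Logarithmic mean, the p -> 0 limit of S_p."""
    return (x - y) / (math.log(x) - math.log(y))
```

For instance, `stolarsky(2, 4, 9)` gives 6.5, the arithmetic mean of 4 and 9, while `stolarsky(-1, 4, 9)` gives 6.0, their geometric mean.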
Stolz–Cesàro theorem In mathematics, the Stolz–Cesàro theorem is a criterion for proving the convergence of a sequence. The theorem is named after mathematicians Otto Stolz and Ernesto Cesàro, who stated and proved it for the first time. The Stolz–Cesàro theorem can be viewed as a generalization of the Cesàro mean, but also as a l'Hôpital's rule for sequences. Statement of the theorem for the */∞ case Let $(a_{n})_{n\geq 1}$ and $(b_{n})_{n\geq 1}$ be two sequences of real numbers. Assume that $(b_{n})_{n\geq 1}$ is a strictly monotone and divergent sequence (i.e. strictly increasing and approaching $+\infty $, or strictly decreasing and approaching $-\infty $) and the following limit exists: $\lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=l.\ $ Then, the limit $\lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}=l.\ $ Statement of the theorem for the 0/0 case Let $(a_{n})_{n\geq 1}$ and $(b_{n})_{n\geq 1}$ be two sequences of real numbers. Assume now that $(a_{n})\to 0$ and $(b_{n})\to 0$ while $(b_{n})_{n\geq 1}$ is strictly decreasing. If $\lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=l,\ $ then $\lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}=l.\ $[1] Proofs Proof of the theorem for the */∞ case Case 1: suppose $(b_{n})$ strictly increasing and divergent to $+\infty $, and $-\infty <l<\infty $. By hypothesis, we have that for all $\epsilon /2>0$ there exists $\nu >0$ such that $\forall n>\nu $ $\left|\,{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}-l\,\right|<{\frac {\epsilon }{2}},$ which is to say $l-\epsilon /2<{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}<l+\epsilon /2,\quad \forall n>\nu .$ Since $(b_{n})$ is strictly increasing, $b_{n+1}-b_{n}>0$, and the following holds $(l-\epsilon /2)(b_{n+1}-b_{n})<a_{n+1}-a_{n}<(l+\epsilon /2)(b_{n+1}-b_{n}),\quad \forall n>\nu $. 
Next we notice that $a_{n}=[(a_{n}-a_{n-1})+\dots +(a_{\nu +2}-a_{\nu +1})]+a_{\nu +1}$ thus, by applying the above inequality to each of the terms in the square brackets, we obtain ${\begin{aligned}&(l-\epsilon /2)(b_{n}-b_{\nu +1})+a_{\nu +1}=(l-\epsilon /2)[(b_{n}-b_{n-1})+\dots +(b_{\nu +2}-b_{\nu +1})]+a_{\nu +1}<a_{n}\\&a_{n}<(l+\epsilon /2)[(b_{n}-b_{n-1})+\dots +(b_{\nu +2}-b_{\nu +1})]+a_{\nu +1}=(l+\epsilon /2)(b_{n}-b_{\nu +1})+a_{\nu +1}.\end{aligned}}$ Now, since $b_{n}\to +\infty $ as $n\to \infty $, there is an $n_{0}>0$ such that $b_{n}>0$ for all $n>n_{0}$, and we can divide the two inequalities by $b_{n}$ for all $n>\max\{\nu ,n_{0}\}$ $(l-\epsilon /2)+{\frac {a_{\nu +1}-b_{\nu +1}(l-\epsilon /2)}{b_{n}}}<{\frac {a_{n}}{b_{n}}}<(l+\epsilon /2)+{\frac {a_{\nu +1}-b_{\nu +1}(l+\epsilon /2)}{b_{n}}}.$ The two sequences (which are only defined for $n>n_{0}$ as there could be an $N\leq n_{0}$ such that $b_{N}=0$) $c_{n}^{\pm }:={\frac {a_{\nu +1}-b_{\nu +1}(l\pm \epsilon /2)}{b_{n}}}$ are infinitesimal since $b_{n}\to +\infty $ and the numerator is a constant number, hence for all $\epsilon /2>0$ there exists $n_{\pm }>n_{0}>0$, such that ${\begin{aligned}&|c_{n}^{+}|<\epsilon /2,\quad \forall n>n_{+},\\&|c_{n}^{-}|<\epsilon /2,\quad \forall n>n_{-},\end{aligned}}$ therefore $l-\epsilon <l-\epsilon /2+c_{n}^{-}<{\frac {a_{n}}{b_{n}}}<l+\epsilon /2+c_{n}^{+}<l+\epsilon ,\quad \forall n>\max \lbrace \nu ,n_{\pm }\rbrace =:N>0,$ which concludes the proof. The case with $(b_{n})$ strictly decreasing and divergent to $-\infty $, and $l<\infty $ is similar. Case 2: we assume $(b_{n})$ strictly increasing and divergent to $+\infty $, and $l=+\infty $. 
Proceeding as before, for all $2M>0$ there exists $\nu >0$ such that for all $n>\nu $ ${\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}>2M.$ Again, by applying the above inequality to each of the terms inside the square brackets we obtain $a_{n}>2M(b_{n}-b_{\nu +1})+a_{\nu +1},\quad \forall n>\nu ,$ and ${\frac {a_{n}}{b_{n}}}>2M+{\frac {a_{\nu +1}-2Mb_{\nu +1}}{b_{n}}},\quad \forall n>\max\{\nu ,n_{0}\}.$ The sequence $(c_{n})_{n>n_{0}}$ defined by $c_{n}:={\frac {a_{\nu +1}-2Mb_{\nu +1}}{b_{n}}}$ is infinitesimal, thus $\forall M>0\,\exists {\bar {n}}>n_{0}>0{\text{ such that }}-M<c_{n}<M,\,\forall n>{\bar {n}},$ combining this inequality with the previous one we conclude ${\frac {a_{n}}{b_{n}}}>2M+c_{n}>M,\quad \forall n>\max\{\nu ,{\bar {n}}\}=:N.$ The proofs of the other cases with $(b_{n})$ strictly increasing or decreasing and approaching $+\infty $ or $-\infty $ respectively and $l=\pm \infty $ all proceed in this same way. Proof of the theorem for the 0/0 case Case 1: we first consider the case with $l<\infty $ and $(b_{n})$ strictly decreasing. 
This time, for each $\nu >0$, we can write $a_{n}=(a_{n}-a_{n+1})+\dots +(a_{n+\nu -1}-a_{n+\nu })+a_{n+\nu },$ and for any $\epsilon /2>0,$ $\exists n_{0}$ such that for all $n>n_{0}$ we have ${\begin{aligned}&(l-\epsilon /2)(b_{n}-b_{n+\nu })+a_{n+\nu }=(l-\epsilon /2)[(b_{n}-b_{n+1})+\dots +(b_{n+\nu -1}-b_{n+\nu })]+a_{n+\nu }<a_{n}\\&a_{n}<(l+\epsilon /2)[(b_{n}-b_{n+1})+\dots +(b_{n+\nu -1}-b_{n+\nu })]+a_{n+\nu }=(l+\epsilon /2)(b_{n}-b_{n+\nu })+a_{n+\nu }.\end{aligned}}$ The two sequences $c_{\nu }^{\pm }:={\frac {a_{n+\nu }-b_{n+\nu }(l\pm \epsilon /2)}{b_{n}}}$ are infinitesimal since by hypothesis $a_{n+\nu },b_{n+\nu }\to 0$ as $\nu \to \infty $, thus for all $\epsilon /2>0$ there are $\nu _{\pm }>0$ such that ${\begin{aligned}&|c_{\nu }^{+}|<\epsilon /2,\quad \forall \nu >\nu _{+},\\&|c_{\nu }^{-}|<\epsilon /2,\quad \forall \nu >\nu _{-},\end{aligned}}$ thus, choosing $\nu $ appropriately (which is to say, taking the limit with respect to $\nu $) we obtain $l-\epsilon <l-\epsilon /2+c_{\nu }^{-}<{\frac {a_{n}}{b_{n}}}<l+\epsilon /2+c_{\nu }^{+}<l+\epsilon ,\quad \forall n>n_{0}$ which concludes the proof. Case 2: we assume $l=+\infty $ and $(b_{n})$ strictly decreasing. For all $2M>0$ there exists $n_{0}>0$ such that for all $n>n_{0},$ ${\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}>2M\implies a_{n}-a_{n+1}>2M(b_{n}-b_{n+1}).$ Therefore, for each $\nu >0,$ ${\frac {a_{n}}{b_{n}}}>2M+{\frac {a_{n+\nu }-2Mb_{n+\nu }}{b_{n}}},\quad \forall n>n_{0}.$ The sequence $c_{\nu }:={\frac {a_{n+\nu }-2Mb_{n+\nu }}{b_{n}}}$ converges to $0$ (keeping $n$ fixed). Hence $\forall M>0\,~\exists {\bar {\nu }}>0$ such that $-M<c_{\nu }<M,\,\forall \nu >{\bar {\nu }},$ and, choosing $\nu $ conveniently, we conclude the proof ${\frac {a_{n}}{b_{n}}}>2M+c_{\nu }>M,\quad \forall n>n_{0}.$ Applications and examples The theorem concerning the ∞/∞ case has a few notable consequences which are useful in the computation of limits. 
Arithmetic mean Let $(x_{n})$ be a sequence of real numbers which converges to $l$, define $a_{n}:=\sum _{m=1}^{n}x_{m}=x_{1}+\dots +x_{n},\quad b_{n}:=n$ then $(b_{n})$ is strictly increasing and diverges to $+\infty $. We compute $\lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=\lim _{n\to \infty }x_{n+1}=\lim _{n\to \infty }x_{n}=l$ therefore $\lim _{n\to \infty }{\frac {x_{1}+\dots +x_{n}}{n}}=\lim _{n\to \infty }x_{n}.$ Given any sequence $(x_{n})_{n\geq 1}$ of real numbers, suppose that $\lim _{n\to \infty }x_{n}$ exists (finite or infinite), then $\lim _{n\to \infty }{\frac {x_{1}+\dots +x_{n}}{n}}=\lim _{n\to \infty }x_{n}.$ Geometric mean Let $(x_{n})$ be a sequence of positive real numbers converging to $l$ and define $a_{n}:=\log(x_{1}\cdots x_{n}),\quad b_{n}:=n,$ again we compute $\lim _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=\lim _{n\to \infty }\log {\Big (}{\frac {x_{1}\cdots x_{n+1}}{x_{1}\cdots x_{n}}}{\Big )}=\lim _{n\to \infty }\log(x_{n+1})=\lim _{n\to \infty }\log(x_{n})=\log(l),$ where we used the fact that the logarithm is continuous. Thus $\lim _{n\to \infty }{\frac {\log(x_{1}\cdots x_{n})}{n}}=\lim _{n\to \infty }\log {\Big (}(x_{1}\cdots x_{n})^{\frac {1}{n}}{\Big )}=\log(l),$ since the logarithm is both continuous and injective we can conclude that $\lim _{n\to \infty }{\sqrt[{n}]{x_{1}\cdots x_{n}}}=\lim _{n\to \infty }x_{n}$. 
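The two conclusions just derived are easy to check numerically; the snippet below (illustrative only) applies them to $x_{n}=1+1/n$, whose Cesàro (arithmetic) and geometric means must both tend to the same limit 1. The geometric mean is computed via logarithms, exactly the trick used in the derivation above.

```python
import math

# x_n = 1 + 1/n converges to 1, so both the arithmetic (Cesàro) means and the
# geometric means of the first N terms must approach 1 as well.
N = 100_000
xs = [1.0 + 1.0 / n for n in range(1, N + 1)]

cesaro = sum(xs) / N                                     # (x_1 + ... + x_N)/N
geometric = math.exp(sum(math.log(x) for x in xs) / N)   # (x_1 ... x_N)^(1/N)
```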
Given any sequence $(x_{n})_{n\geq 1}$ of (strictly) positive real numbers, suppose that $\lim _{n\to \infty }x_{n}$ exists (finite or infinite), then $\lim _{n\to \infty }{\sqrt[{n}]{x_{1}\cdots x_{n}}}=\lim _{n\to \infty }x_{n}.$ Suppose we are given a sequence $(y_{n})_{n\geq 1}$ and we are asked to compute $\lim _{n\to \infty }{\sqrt[{n}]{y_{n}}},$ defining $y_{0}=1$ and $x_{n}=y_{n}/y_{n-1}$ we obtain $\lim _{n\to \infty }{\sqrt[{n}]{x_{1}\dots x_{n}}}=\lim _{n\to \infty }{\sqrt[{n}]{\frac {y_{1}\dots y_{n}}{y_{0}\cdot y_{1}\dots y_{n-1}}}}=\lim _{n\to \infty }{\sqrt[{n}]{y_{n}}},$ if we apply the property above $\lim _{n\to \infty }{\sqrt[{n}]{y_{n}}}=\lim _{n\to \infty }x_{n}=\lim _{n\to \infty }{\frac {y_{n}}{y_{n-1}}}.$ This last form is usually the most useful to compute limits. Given any sequence $(y_{n})_{n\geq 1}$ of (strictly) positive real numbers, suppose that $\lim _{n\to \infty }{\frac {y_{n+1}}{y_{n}}}$ exists (finite or infinite), then $\lim _{n\to \infty }{\sqrt[{n}]{y_{n}}}=\lim _{n\to \infty }{\frac {y_{n+1}}{y_{n}}}.$ Example 1 $\lim _{n\to \infty }{\sqrt[{n}]{n}}=\lim _{n\to \infty }{\frac {n+1}{n}}=1.$ Example 2 ${\begin{aligned}\lim _{n\to \infty }{\frac {\sqrt[{n}]{n!}}{n}}&=\lim _{n\to \infty }{\frac {(n+1)!(n^{n})}{n!(n+1)^{n+1}}}\\&=\lim _{n\to \infty }{\frac {n^{n}}{(n+1)^{n}}}=\lim _{n\to \infty }{\frac {1}{(1+{\frac {1}{n}})^{n}}}={\frac {1}{e}}\end{aligned}}$ where we used the representation of $e$ as the limit of a sequence. History The ∞/∞ case is stated and proved on pages 173–175 of Stolz's 1885 book and also on page 54 of Cesàro's 1888 article. It appears as Problem 70 in Pólya and Szegő (1925).
The general form Statement The general form of the Stolz–Cesàro theorem is the following:[2] If $(a_{n})_{n\geq 1}$ and $(b_{n})_{n\geq 1}$ are two sequences such that $(b_{n})_{n\geq 1}$ is monotone and unbounded, then: $\liminf _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}\leq \liminf _{n\to \infty }{\frac {a_{n}}{b_{n}}}\leq \limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}}\leq \limsup _{n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}.$ Proof Instead of proving the previous statement, we shall prove a slightly different one; first we introduce a notation: let $(a_{n})_{n\geq 1}$ be any sequence, its partial sum will be denoted by $A_{n}:=\sum _{m=1}^{n}a_{m}$. The equivalent statement we shall prove is: Let $(a_{n})_{n\geq 1},(b_{n})_{n\geq 1}$ be any two sequences of real numbers such that • $b_{n}>0,\quad \forall n\in {\mathbb {Z} }_{>0}$, • $\lim _{n\to \infty }B_{n}=+\infty $, then $\liminf _{n\to \infty }{\frac {a_{n}}{b_{n}}}\leq \liminf _{n\to \infty }{\frac {A_{n}}{B_{n}}}\leq \limsup _{n\to \infty }{\frac {A_{n}}{B_{n}}}\leq \limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}}.$ Proof of the equivalent statement First we notice that: • $\liminf _{n\to \infty }{\frac {A_{n}}{B_{n}}}\leq \limsup _{n\to \infty }{\frac {A_{n}}{B_{n}}}$ holds by definition of limit superior and limit inferior; • $\liminf _{n\to \infty }{\frac {a_{n}}{b_{n}}}\leq \liminf _{n\to \infty }{\frac {A_{n}}{B_{n}}}$ holds if and only if $\limsup _{n\to \infty }{\frac {A_{n}}{B_{n}}}\leq \limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}}$ because $\liminf _{n\to \infty }x_{n}=-\limsup _{n\to \infty }(-x_{n})$ for any sequence $(x_{n})_{n\geq 1}$. Therefore we need only to show that $\limsup _{n\to \infty }{\frac {A_{n}}{B_{n}}}\leq \limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}}$. If $L:=\limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}}=+\infty $ there is nothing to prove, hence we can assume $L<+\infty $ (it can be either finite or $-\infty $).
By definition of $\limsup $, for all $l>L$ there is a natural number $\nu >0$ such that ${\frac {a_{n}}{b_{n}}}<l,\quad \forall n>\nu .$ We can use this inequality so as to write $A_{n}=A_{\nu }+a_{\nu +1}+\dots +a_{n}<A_{\nu }+l(B_{n}-B_{\nu }),\quad \forall n>\nu ,$ Because $b_{n}>0$, we also have $B_{n}>0$ and we can divide by $B_{n}$ to get ${\frac {A_{n}}{B_{n}}}<{\frac {A_{\nu }-lB_{\nu }}{B_{n}}}+l,\quad \forall n>\nu .$ Since $B_{n}\to +\infty $ as $n\to +\infty $, the sequence ${\frac {A_{\nu }-lB_{\nu }}{B_{n}}}\to 0{\text{ as }}n\to +\infty {\text{ (keeping }}\nu {\text{ fixed)}},$ and we obtain $\limsup _{n\to \infty }{\frac {A_{n}}{B_{n}}}\leq l,\quad \forall l>L,$ By definition of least upper bound, this precisely means that $\limsup _{n\to \infty }{\frac {A_{n}}{B_{n}}}\leq L=\limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}},$ and we are done. Proof of the original statement Now, take $(a_{n}),(b_{n})$ as in the statement of the general form of the Stolz-Cesàro theorem and define $\alpha _{1}=a_{1},\alpha _{k}=a_{k}-a_{k-1},\,\forall k>1\quad \beta _{1}=b_{1},\beta _{k}=b_{k}-b_{k-1}\,\forall k>1$ since $(b_{n})$ is strictly monotone (we can assume strictly increasing for example), $\beta _{n}>0$ for all $n$ and since $b_{n}\to +\infty $ also $\mathrm {B} _{n}=b_{1}+(b_{2}-b_{1})+\dots +(b_{n}-b_{n-1})=b_{n}\to +\infty $, thus we can apply the theorem we have just proved to $(\alpha _{n}),(\beta _{n})$ (and their partial sums $(\mathrm {A} _{n}),(\mathrm {B} _{n})$) $\limsup _{n\to \infty }{\frac {a_{n}}{b_{n}}}=\limsup _{n\to \infty }{\frac {\mathrm {A} _{n}}{\mathrm {B} _{n}}}\leq \limsup _{n\to \infty }{\frac {\alpha _{n}}{\beta _{n}}}=\limsup _{n\to \infty }{\frac {a_{n}-a_{n-1}}{b_{n}-b_{n-1}}},$ which is exactly what we wanted to prove. References • Mureşan, Marian (2008), A Concrete Approach to Classical Analysis, Berlin: Springer, pp. 85–88, ISBN 978-0-387-78932-3. 
• Stolz, Otto (1885), Vorlesungen über allgemeine Arithmetik: nach den Neueren Ansichten, Leipzig: Teubners, pp. 173–175. • Cesàro, Ernesto (1888), "Sur la convergence des séries", Nouvelles annales de mathématiques, Series 3, 7: 49–59. • Pólya, George; Szegő, Gábor (1925), Aufgaben und Lehrsätze aus der Analysis, vol. I, Berlin: Springer. • A. D. R. Choudary, Constantin Niculescu: Real Analysis on Intervals. Springer, 2014, ISBN 9788132221487, pp. 59-62 • J. Marshall Ash, Allan Berele, Stefan Catoiu: Plausible and Genuine Extensions of L’Hospital's Rule. Mathematics Magazine, Vol. 85, No. 1 (February 2012), pp. 52–60 (JSTOR) External links • l'Hôpital's rule and Stolz-Cesàro theorem at imomath.com • Proof of Stolz–Cesàro theorem at PlanetMath. Notes 1. Choudary, A. D. R.; Niculescu, Constantin (2014). Real Analysis on Intervals. Springer India. pp. 59–60. ISBN 978-81-322-2147-0. 2. l'Hôpital's rule and Stolz-Cesàro theorem at imomath.com This article incorporates material from Stolz-Cesaro theorem on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Stone duality In mathematics, there is an ample supply of categorical dualities between certain categories of topological spaces and categories of partially ordered sets. Today, these dualities are usually collected under the label Stone duality, since they form a natural generalization of Stone's representation theorem for Boolean algebras. These concepts are named in honor of Marshall Stone. Stone-type dualities also provide the foundation for pointless topology and are exploited in theoretical computer science for the study of formal semantics. This article gives pointers to special cases of Stone duality and explains a very general instance thereof in detail. Overview of Stone-type dualities Probably the most general duality that is classically referred to as "Stone duality" is the duality between the category Sob of sober spaces with continuous functions and the category SFrm of spatial frames with appropriate frame homomorphisms. The dual category of SFrm is the category of spatial locales denoted by SLoc. The categorical equivalence of Sob and SLoc is the basis for the mathematical area of pointless topology, which is devoted to the study of Loc—the category of all locales, of which SLoc is a full subcategory. The involved constructions are characteristic for this kind of duality, and are detailed below. Now one can easily obtain a number of other dualities by restricting to certain special classes of sober spaces: • The category CohSp of coherent sober spaces (and coherent maps) is equivalent to the category CohLoc of coherent (or spectral) locales (and coherent maps), on the assumption of the Boolean prime ideal theorem (in fact, this statement is equivalent to that assumption). The significance of this result stems from the fact that CohLoc in turn is dual to the category DLat01 of bounded distributive lattices. Hence, DLat01 is dual to CohSp—one obtains Stone's representation theorem for distributive lattices. 
• When restricting further to coherent sober spaces that are Hausdorff, one obtains the category Stone of so-called Stone spaces. On the side of DLat01, the restriction yields the subcategory Bool of Boolean algebras. Thus one obtains Stone's representation theorem for Boolean algebras. • Stone's representation for distributive lattices can be extended via an equivalence of coherent spaces and Priestley spaces (ordered topological spaces, that are compact and totally order-disconnected). One obtains a representation of distributive lattices via ordered topologies: Priestley's representation theorem for distributive lattices. Many other Stone-type dualities could be added to these basic dualities. Duality of sober spaces and spatial locales The lattice of open sets The starting point for the theory is the fact that every topological space is characterized by a set of points X and a system Ω(X) of open sets of elements from X, i.e. a subset of the powerset of X. It is known that Ω(X) has certain special properties: it is a complete lattice within which suprema and finite infima are given by set unions and finite set intersections, respectively. Furthermore, it contains both X and the empty set. Since the embedding of Ω(X) into the powerset lattice of X preserves finite infima and arbitrary suprema, Ω(X) inherits the following distributivity law: $x\wedge \bigvee S=\bigvee \{\,x\wedge s:s\in S\,\},$ for every element (open set) x and every subset S of Ω(X). Hence Ω(X) is not an arbitrary complete lattice but a complete Heyting algebra (also called frame or locale – the various names are primarily used to distinguish several categories that have the same class of objects but different morphisms: frame morphisms, locale morphisms and homomorphisms of complete Heyting algebras). Now an obvious question is: To what extent is a topological space characterized by its locale of open sets? As already hinted at above, one can go even further. 
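Before going further, the frame laws described below for Ω(X) can be checked on a small finite example (the space and its topology are made up for illustration): the open-set lattice of a three-point space is closed under unions and finite intersections, and satisfies the distributivity law $x\wedge \bigvee S=\bigvee \{\,x\wedge s:s\in S\,\}$.

```python
from itertools import combinations

# A small topology on X = {1, 2, 3}: the chosen opens are closed under
# arbitrary (here: all) unions and finite intersections.
opens = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2}),
         frozenset({1, 2, 3})}

def join(sets):
    """Supremum in Omega(X): the set union (empty join = empty set)."""
    out = frozenset()
    for s in sets:
        out |= s
    return out

subsets = [set(S) for r in range(len(opens) + 1) for S in combinations(opens, r)]

closed_under_union = all(join(S) in opens for S in subsets)
closed_under_meet = all((U & V) in opens for U in opens for V in opens)
# Frame distributivity: x ∧ ⋁S = ⋁{x ∧ s : s ∈ S} (a set-theoretic identity
# here, since meets and joins are intersections and unions).
distributive = all(x & join(S) == join({x & s for s in S})
                   for x in opens for S in subsets)
```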
The category Top of topological spaces has as morphisms the continuous functions, where a function f is continuous if the inverse image f −1(O) of any open set in the codomain of f is open in the domain of f. Thus any continuous function f from a space X to a space Y defines an inverse mapping f −1 from Ω(Y) to Ω(X). Furthermore, it is easy to check that f −1 (like any inverse image map) preserves finite intersections and arbitrary unions and therefore is a morphism of frames. If we define Ω(f) = f −1 then Ω becomes a contravariant functor from the category Top to the category Frm of frames and frame morphisms. Using the tools of category theory, the task of finding a characterization of topological spaces in terms of their open set lattices is equivalent to finding a functor from Frm to Top which is adjoint to Ω. Points of a locale The goal of this section is to define a functor pt from Frm to Top that in a certain sense "inverts" the operation of Ω by assigning to each locale L a set of points pt(L) (hence the notation pt) with a suitable topology. But how can we recover the set of points just from the locale, though it is not given as a lattice of sets? It is certain that one cannot expect in general that pt can reproduce all of the original elements of a topological space just from its lattice of open sets – for example all sets with the indiscrete topology yield (up to isomorphism) the same locale, such that the information on the specific set is no longer present. However, there is still a reasonable technique for obtaining "points" from a locale, which indeed gives an example of a central construction for Stone-type duality theorems. Let us first look at the points of a topological space X. One is usually tempted to consider a point of X as an element x of the set X, but there is in fact a more useful description for our current investigation. 
Any point x gives rise to a continuous function px from the one-element topological space 1 (all subsets of which are open) to the space X by defining px(1) = x. Conversely, any function from 1 to X clearly determines one point: the element that it "points" to. Therefore, the set of points of a topological space is equivalently characterized as the set of functions from 1 to X.

When using the functor Ω to pass from Top to Frm, all set-theoretic elements of a space are lost, but – using a fundamental idea of category theory – one can just as well work on the function spaces. Indeed, any "point" px: 1 → X in Top is mapped to a morphism Ω(px): Ω(X) → Ω(1). The open set lattice of the one-element topological space Ω(1) is just (isomorphic to) the two-element locale 2 = { 0, 1 } with 0 < 1. After these observations it appears reasonable to define the set of points of a locale L to be the set of frame morphisms from L to 2. Yet, there is no guarantee that every point of the locale Ω(X) is in one-to-one correspondence with a point of the topological space X (consider again the indiscrete topology, for which the open set lattice has only one "point").

Before defining the required topology on pt(L), it is worthwhile to clarify the concept of a point of a locale further. The perspective motivated above suggests considering a point of a locale L as a frame morphism p from L to 2. Such morphisms are characterized equivalently by the inverse images of the two elements of 2. From the properties of frame morphisms, one can derive that p −1(0) is a lower set (since p is monotone) which contains a greatest element ap = ⋁ p −1(0) (since p preserves arbitrary suprema). In addition, the principal ideal p −1(0) is a prime ideal, since p preserves finite infima, and thus its generator ap is a meet-prime element. Now the set-theoretic complement of p −1(0), given by p −1(1), is a completely prime filter, because p −1(0) is a principal prime ideal.
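The correspondence between frame morphisms p: L → 2 and meet-prime elements can be made concrete on a small frame. The sketch below assumes a hypothetical four-element chain, the opens ∅ ⊂ {1} ⊂ {1, 2} ⊂ X of the space X = {1, 2, 3}:

```python
# Hypothetical toy frame: the 4-element chain of opens of X = {1, 2, 3}.
L = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
top = frozenset({1, 2, 3})

def meet_prime(m):
    # m is meet-prime: m != top and  a ∧ b <= m  implies  a <= m or b <= m
    return m != top and all(a <= m or b <= m
                            for a in L for b in L if a & b <= m)

points = [m for m in L if meet_prime(m)]
print(len(points))  # 3: in a chain, every non-top element is meet-prime

# Each meet-prime m determines a frame morphism p: L -> 2 with
# p(a) = 0 exactly when a <= m; its kernel p^{-1}(0) is the
# principal prime ideal generated by m.
def p(a, m):
    return 0 if a <= m else 1

assert [p(a, frozenset({1})) for a in L] == [0, 0, 1, 1]
```

So this locale has three points, matching the three points of the space X; the indiscrete counterexample in the text shows that such a match cannot be expected in general.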
It turns out that all of these descriptions uniquely determine the initial frame morphism. We sum up: a point of a locale L is equivalently described as:

• a frame morphism from L to 2
• a principal prime ideal of L
• a meet-prime element of L
• a completely prime filter of L.

All of these descriptions have their place within the theory and it is convenient to switch between them as needed.

The functor pt

Now that a set of points is available for any locale, it remains to equip this set with an appropriate topology in order to define the object part of the functor pt. This is done by defining the open sets of pt(L) as

φ(a) = { p ∈ pt(L) | p(a) = 1 },

for every element a of L. Here we viewed the points of L as morphisms, but one can of course state a similar definition for all of the other equivalent characterizations. It can be shown that setting Ω(pt(L)) = { φ(a) | a ∈ L } really does yield a topological space (pt(L), Ω(pt(L))). It is common to abbreviate this space as pt(L).

Finally, pt can be defined on morphisms of Frm rather canonically by defining, for a frame morphism g from L to M, pt(g): pt(M) → pt(L) as pt(g)(p) = p ∘ g. In words, we obtain a morphism from L to 2 (a point of L) by applying the morphism g to get from L to M before applying the morphism p that maps from M to 2. Again, this can be formalized using the other descriptions of points of a locale as well – for example, one can just calculate (p ∘ g) −1(0).

The adjunction of Top and Loc

As noted several times before, pt and Ω usually are not inverses. In general, X is neither homeomorphic to pt(Ω(X)), nor is L order-isomorphic to Ω(pt(L)). However, when introducing the topology of pt(L) above, a mapping φ from L to Ω(pt(L)) was applied. This mapping is indeed a frame morphism. Conversely, we can define a continuous function ψ from X to pt(Ω(X)) by setting ψ(x) = Ω(px), where px is just the characteristic function for the point x from 1 to X as described above.
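The definition of the opens φ(a) can be tested on a toy frame as well. In the sketch below (a hypothetical four-element chain, with points represented as meet-prime elements m, so that p(a) = 1 amounts to a ≰ m), the sets φ(a) again form a topology:

```python
# Hypothetical toy frame L (a 4-element chain) with its meet-prime "points".
L = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]
top = frozenset({1, 2, 3})
points = [m for m in L if m != top]   # in a chain: all non-top elements

def phi(a):
    # phi(a) = { p | p(a) = 1 } = { m meet-prime | not a <= m }
    return frozenset(m for m in points if not a <= m)

opens = {phi(a) for a in L}
# The phi(a) form a topology on pt(L); here it is again a 4-element
# chain of opens, and phi is injective on this example.
assert sorted(len(O) for O in opens) == [0, 1, 2, 3]
print(sorted(len(O) for O in opens))  # [0, 1, 2, 3]
```

On this example φ is a bijection onto Ω(pt(L)), illustrating the map φ: L → Ω(pt(L)) used in the adjunction below in the best-behaved case.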
Another convenient description is given by viewing points of a locale as meet-prime elements. In this case we have ψ(x) = X \ Cl{x}, where Cl{x} denotes the topological closure of the set {x} and \ is just set difference.

At this point we already have more than enough data to obtain the desired result: the functors Ω and pt define an adjunction between the categories Top and Loc = Frm^op, where pt is right adjoint to Ω and the natural transformations ψ and φ^op provide the required unit and counit, respectively.

The duality theorem

The above adjunction is not an equivalence of the categories Top and Loc (or, equivalently, a duality of Top and Frm). For this it is necessary that both ψ and φ are isomorphisms in their respective categories.

For a space X, ψ: X → pt(Ω(X)) is a homeomorphism if and only if it is bijective. Using the characterization via meet-prime elements of the open set lattice, one sees that this is the case if and only if every meet-prime open set is of the form X \ Cl{x} for a unique x. Equivalently, every join-prime closed set is the closure of a unique point, where "join-prime" can be replaced by (join-)irreducible, since we are in a distributive lattice. Spaces with this property are called sober.

Conversely, for a locale L, φ: L → Ω(pt(L)) is always surjective. It is additionally injective if and only if any two elements a and b of L for which a is not less-or-equal to b can be separated by points of the locale; formally: if not a ≤ b, then there is a point p in pt(L) such that p(a) = 1 and p(b) = 0. If this condition is satisfied for all elements of the locale, then the locale is spatial, or said to have enough points. (See also well-pointed category for a similar condition in more general categories.)

Finally, one can verify that for every space X, Ω(X) is spatial, and for every locale L, pt(L) is sober.
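Sobriety of a small space can be checked directly from this characterization. A sketch, assuming the hypothetical three-point space X = {1, 2, 3} with opens ∅, {1}, {1, 2}, X, verifies that every meet-prime open set equals X \ Cl{x} for exactly one x:

```python
# Hypothetical toy space: X = {1, 2, 3} with opens {}, {1}, {1, 2}, X.
X = frozenset({1, 2, 3})
Omega = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def closure(x):
    # Cl{x}: intersection of all closed sets (complements of opens)
    # that contain x.
    c = X
    for O in Omega:
        if x not in O:
            c &= X - O
    return c

def meet_prime(m):
    return m != X and all(a <= m or b <= m
                          for a in Omega for b in Omega if a & b <= m)

# Sobriety: every meet-prime open equals X \ Cl{x} for a unique point x.
for m in filter(meet_prime, Omega):
    witnesses = [x for x in X if X - closure(x) == m]
    assert len(witnesses) == 1
print("the space is sober")
```

Here Cl{1} = X, Cl{2} = {2, 3} and Cl{3} = {3}, so ψ sends 1, 2, 3 to the meet-prime opens ∅, {1} and {1, 2} respectively, and ψ is a bijection.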
Hence, it follows that the above adjunction of Top and Loc restricts to an equivalence of the full subcategories Sob of sober spaces and SLoc of spatial locales. This main result is completed by the observation that the functor pt ∘ Ω, sending each space to the points of its open set lattice, is left adjoint to the inclusion functor from Sob to Top. For a space X, pt(Ω(X)) is called its soberification. The case of the functor Ω ∘ pt is symmetric, but a special name for this operation is not commonly used.