Saul Kaplun

Saul Kaplun (July 3, 1924,[1] Lwów,[2] Poland, now Lviv, Ukraine – February 13, 1964,[1] Pasadena, California, U.S.) was a Polish-American aerodynamicist[2] at the California Institute of Technology (Caltech).

Family

Kaplun was the only child[3] of Jewish immigrants from Poland: Morris J. Kaplun (February 12, 1888,[4] Kamenetz-Podolsk,[5] Ukraine – ?), a textile businessman and industrialist[6] and, beginning in the 1930s,[8] a prominent Zionist philanthropist,[7] and Betty (Bettina) Kaplun[4] (? – 1963).[3] Saul and his parents, refugees from Nazi persecution,[8] lived in Lwów until 1939, when they fled Poland;[4] they immigrated to New York shortly before World War II.[3] He became a naturalized American citizen in 1944 and served in the United States Navy from 1944 to 1946.[2]

Upon his death at age 39 of a heart attack, three months after his mother's death,[3] Saul Kaplun left behind a grieving father,[1] who sought to perpetuate his son's memory by endowing several educational projects in Israel and the United States. At the dedication ceremony of one of the institutions he funded, at Tel Aviv University,[9] Morris Kaplun himself "learned for the first time just how important a man [his son] was in his field. The late Dr. Saul Kaplun… left behind 'a roomful of manuscripts' which Prof.
Paco Lagerstrom, of the California Institute of Technology, who spoke at the dedication, said contained 'a wealth of scientific ideas far outweighing his published work'."[10]

Career

Saul Kaplun received his PhD in 1954 under the advisorship of Paco Lagerstrom at the California Institute of Technology,[11] with the dissertation The role of coordinate systems in boundary layer theory.[12] Kaplun and Lagerstrom later collaborated on a published article,[13] and Lagerstrom edited Kaplun's papers for publication as a monograph after the latter's death.[14] Kaplun spent his entire academic career, a total of 20 years, at Caltech and received four degrees there.[2] He became a research fellow in aeronautics upon completing his PhD in 1954 and was a senior research fellow in aeronautics on the Caltech faculty from 1957 until his death.[2]

At a memorial ceremony held at Caltech that same year, Dr. Clark Blanchard Millikan, Caltech professor of aeronautics, stated, "Saul Kaplun's very special hallmark as a scientist was his unusual intuition. He lived with a problem till he 'saw' the solution. This enabled him to understand the essence of some fundamental problems but also made it difficult for others to understand his work. His work could in general not be explained by discursive reasoning; one had to make an effort to share his intuitive thinking."[2] At the same ceremony, Millikan noted that "Saul Kaplun's work played a decisive role in the development of applied mathematics at the California Institute,"[2] and Caltech President Lee Alvin DuBridge eulogized, "Saul Kaplun had a brilliant analytical and creative mind and made many profound and original contributions to the theory of fluid mechanics. He was an applied mathematician of extraordinary ability and had already won wide and admiring recognition for his work."[2]

Publications

Dr.
Millikan stated at the 1964 Caltech ceremony, "Few publications bear his name as author… however, in very many publications by others the author expresses his thanks to Saul Kaplun for having contributed some fundamental ideas to the work, or states that he has used methods due to Kaplun. By now his work has won world-wide recognition among specialists."[2]

After Kaplun's untimely death, his published papers and much of his unpublished work were edited by his former PhD advisor, Lagerstrom, together with Louis Norberg Howard of MIT and Ching-shi Liu of Caltech, and were published in 1967 in book form under the title Fluid Mechanics and Singular Perturbations: A Collection of Papers by Saul Kaplun.[14]

Kaplun's work has been cited and extolled by colleagues and authors in the field. Robert Edmund O'Malley wrote, "The work of Kaplun and Lagerstrom at Caltech in the 1950s was especially important to the development of matched expansions and its applications to fluid mechanics."[15] Sunil Datta wrote, "Singular perturbation method… It was left to the genius of Saul Kaplun (1957) to recognize the analogy between the theory of flow at small Reynolds number and boundary layer theory and to apply to it the singular perturbation method."[16]

Articles

• Kaplun, Saul; Lagerstrom, P. A. (1957). "Asymptotic Expansions of Navier-Stokes Solutions for Small Reynolds Numbers". Journal of Mathematics and Mechanics. 6 (5): 585–593. JSTOR 24900496.
• Kaplun, Saul (1957). "Low Reynolds Number Flow Past a Circular Cylinder". Journal of Mathematics and Mechanics. 6 (5): 595–603. JSTOR 24900497.

Theses

EngD thesis
• Kaplun, Saul (1951). Dimensional analysis of the inflation process of parachute canopies (EngD). California Institute of Technology. doi:10.7907/25MM-HY18.

PhD thesis
• Kaplun, Saul (1954). The role of coordinate systems in boundary layer theory (PDF) (PhD).
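The matched-expansion technique associated with Kaplun and Lagerstrom can be illustrated on a standard textbook model problem (this example is not taken from Kaplun's papers; it is the usual classroom sketch of the method):

```latex
% Singularly perturbed ODE with a boundary layer at x = 0:
\[
  \varepsilon y'' + y' + y = 0, \qquad y(0)=0,\quad y(1)=1, \qquad 0 < \varepsilon \ll 1 .
\]
% Outer expansion (valid away from x = 0): setting \varepsilon = 0
% gives y' + y = 0, so with y(1) = 1,
\[
  y_{\text{out}}(x) = e^{\,1-x}.
\]
% Inner expansion near x = 0: rescale X = x/\varepsilon; to leading
% order Y'' + Y' = 0, so with Y(0) = 0,
\[
  Y(X) = A\,(1 - e^{-X}).
\]
% Matching (in the spirit of Kaplun's intermediate limits): the inner
% solution as X \to \infty must agree with the outer solution as
% x \to 0, forcing A = e. A uniformly valid composite approximation is
\[
  y(x) \approx e^{\,1-x} - e^{\,1 - x/\varepsilon}.
\]
```

The analogy Datta credits to Kaplun is that low-Reynolds-number flow has the same structure: a "Stokes" (inner) region near the body matched to an "Oseen" (outer) region far away.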
Legacy

Morris Kaplun's philanthropy included funding of the Saul Kaplun Institute of Applied Mathematics and Space Physics at Tel Aviv University, Israel,[9][17] dedicated in February 1966 in the presence of Prof. Paco Lagerstrom and others, and of the Saul Kaplun Building for Applied Mathematics and Theoretical Physics at the Hebrew University of Jerusalem, Israel, dedicated a month later in the presence of the university president, Eliahu Eilat.[18] Morris Kaplun also established a memorial fellowship at Caltech, awarded in perpetuity in his son's name, which funds graduate research in applied mathematics.[19] The father wrote a short book about his son: My Son Saul: Saul Kaplun, July 3, 1924 – February 13, 1964, in Memoriam, published in 1965.[1]

References

1. My Son Saul. January 1965. Retrieved July 22, 2020.
2. "Bio" (PDF). calteches.library.caltech.edu. Retrieved July 22, 2020.
3. "December 03, 1965 – Image 10". The Detroit Jewish News Digital Archives.
4. "Litigation documents" (PDF). www.crt-ii.org. Retrieved July 22, 2020.
5. "Kamyanets Podilskyy, Ukraine (Pages 89–102)". www.jewishgen.org.
6. "4 Receive Kaplun Prizes". March 4, 1977.
7. Profile. January 1968. Retrieved July 22, 2020.
8. "Kaplun Foundation". Kaplun Foundation.
9. "Tel Aviv University to Get New Facility Named After Dr. Saul Kaplun". October 20, 1964.
10. "Jewish Post 10 June 1966 — Hoosier State Chronicles: Indiana's Digital Historic Newspaper Program". newspapers.library.in.gov.
11. "Saul Kaplun – the Mathematics Genealogy Project".
12. Kaplun, Saul (1954). The role of coordinate systems in boundary layer theory (PhD). California Institute of Technology. doi:10.7907/9QZM-8W36.
13. Kaplun, Saul; Lagerstrom, P. A. (1957). "Asymptotic Expansions of Navier-Stokes Solutions for Small Reynolds Numbers". Journal of Mathematics and Mechanics. 6 (5): 585–593. JSTOR 24900496.
14.
Fluid Mechanics and Singular Perturbations. Elsevier. ISBN 9780123955746.
15. O'Malley, Robert E. (July 22, 2014). Historical Developments in Singular Perturbations. Springer International Publishing. doi:10.1007/978-3-319-11924-3. ISBN 978-3-319-11923-6.
16. "Mathematics and Fluid Mechanics" (PDF). bharataganitaparisad.com. 2018. Retrieved July 22, 2020.
17. "The Institute of Applied Mathematics Dedicated at Tel Aviv University" (in Hebrew). jpress.org.il.
18. "Saul Kaplun $300,000 Building Dedicated at Hebrew University". March 17, 1966.
19. "Caltech Computing + Mathematical Sciences | Honors and Awards". Caltech Computing + Mathematical Sciences.
Saunders Mac Lane

Saunders Mac Lane (4 August 1909 – 14 April 2005) was an American mathematician who co-founded category theory with Samuel Eilenberg.

Born: 4 August 1909, Taftville, Connecticut, U.S.
Died: 14 April 2005 (aged 95),[1] San Francisco, California, U.S.
Nationality: American
Alma mater: Yale University; University of Chicago; University of Göttingen
Known for: Acyclic models; category theory; shuffle algebra; standard complex; Mac Lane coherence theorem; Mac Lane set theory; Mac Lane's condition; Mac Lane's planarity criterion; Eilenberg–MacLane spaces; Steinitz–Mac Lane exchange lemma
Awards: Chauvenet Prize (1941);[2][3] Leroy P. Steele Prize (1986); National Medal of Science (1989)
Fields: Mathematics (mathematical logic, algebraic number theory, algebraic topology)
Institutions: Harvard University; Cornell University; University of Chicago; Columbia University
Doctoral advisors: Hermann Weyl; Paul Bernays
Doctoral students: Steve Awodey; David Eisenbud; William Howard; Irving Kaplansky; Roger Lyndon; Michael D. Morley; Anil Nerode; Robert Solovay; John G. Thompson

Early life and education

Mac Lane was born in Norwich, Connecticut, near where his family lived in Taftville.[4] He was christened "Leslie Saunders MacLane", but "Leslie" fell into disuse because his parents, Donald MacLane and Winifred Saunders, came to dislike it. He began inserting a space into his surname because his first wife found it difficult to type the name without a space.[5] He was the oldest of three brothers; one of his brothers, Gerald MacLane, also became a mathematics professor, at Rice University and Purdue University. A sister died as a baby. His father and grandfather were both ministers; his grandfather had been a Presbyterian but was expelled from the church for believing in evolution, and his father was a Congregationalist.
His mother, Winifred, studied at Mount Holyoke College and taught English, Latin, and mathematics.[4] In high school, Mac Lane's favorite subject was chemistry. While he was in high school, his father died, and he came under his grandfather's care. His half-uncle, a lawyer, was determined to send him to Yale University, where many of his relatives had been educated, and paid his way there beginning in 1926.

As a freshman, he became disillusioned with chemistry. His mathematics instructor, Lester S. Hill, coached him for a local mathematics competition, which he won, setting the direction for his future work. He went on to study mathematics and physics as a double major, taking courses from Jesse Beams, Ernest William Brown, Ernest Lawrence, F. S. C. Northrop, and Øystein Ore, among others. He graduated from Yale with a B.A. in 1930.[4] During this period, he published his first scientific paper, a physics paper co-authored with Irving Langmuir.

In 1929, at a party of Yale football supporters in Montclair, New Jersey, Mac Lane (there to be presented with a prize for having the best grade point average yet recorded at Yale) met Robert Maynard Hutchins, the new president of the University of Chicago, who encouraged him to go there for his graduate studies and soon afterwards offered him a scholarship. Mac Lane neglected to actually apply to the program, but showed up and was admitted anyway. At Chicago, the subjects he studied included set theory with E. H. Moore, number theory with Leonard Eugene Dickson, the calculus of variations with Gilbert Ames Bliss, and logic with Mortimer J. Adler.[4]

In 1931, having earned his master's degree and feeling restless at Chicago, he won a fellowship from the Institute of International Education and became one of the last Americans to study at the University of Göttingen prior to its decline under the Nazis. His greatest influences there were Paul Bernays and Hermann Weyl.
By the time he finished his doctorate in 1934, Bernays had been forced to leave because he was Jewish, and Weyl became his main examiner. At Göttingen, Mac Lane also studied with Gustav Herglotz and Emmy Noether. Within days of finishing his degree, he married Dorothy Jones, from Chicago, and soon returned to the U.S.[4][6][7]

Career

From 1934 through 1938, Mac Lane held short-term appointments at Yale University, Harvard University, Cornell University, and the University of Chicago. He then held a tenure-track appointment at Harvard from 1938 to 1947. In 1941, while giving a series of visiting lectures at the University of Michigan, he met Samuel Eilenberg and began what would become a fruitful collaboration on the interplay between algebra and topology. In 1944 and 1945, he also directed Columbia University's Applied Mathematics Group, which was involved in the war effort as a contractor for the Applied Mathematics Panel; the mathematics he worked on in this group concerned differential equations for fire-control systems.[4]

In 1947, he accepted an offer to return to Chicago, where (in part because of the university's involvement in the Manhattan Project, and in part because of the administrative efforts of Marshall Stone) many other famous mathematicians and physicists had also recently moved. He traveled as a Guggenheim Fellow to ETH Zurich for the 1947–1948 term, where he worked with Heinz Hopf. Mac Lane succeeded Stone as department chair in 1952 and served for six years.[4]

Mac Lane was vice president of the National Academy of Sciences and the American Philosophical Society, and president of the American Mathematical Society. While presiding over the Mathematical Association of America in the 1950s, he initiated its activities aimed at improving the teaching of modern mathematics. He was a member of the National Science Board, 1974–1980, advising the American government.
In 1976, he led a delegation of mathematicians to China to study the conditions affecting mathematics there. Mac Lane was elected to the National Academy of Sciences in 1949 and received the National Medal of Science in 1989.

Contributions

After a thesis in mathematical logic, his early work was in field theory and valuation theory. He wrote on valuation rings and Witt vectors, and on separability in infinite field extensions. He started writing on group extensions in 1942, and in 1943 began his research on what are now called Eilenberg–MacLane spaces K(G, n), spaces having a single non-trivial homotopy group G in dimension n. This work opened the way to group cohomology in general. After introducing, via the Eilenberg–Steenrod axioms, the abstract approach to homology theory, he and Eilenberg originated category theory in 1945. He is especially known for his work on coherence theorems.

A recurring feature of category theory, abstract algebra, and some other mathematics as well is the use of diagrams, consisting of arrows (morphisms) linking objects, such as products and coproducts. According to McLarty (2005), this diagrammatic approach to contemporary mathematics largely stems from Mac Lane (1948). Mac Lane also coined the term Yoneda lemma for a lemma, discovered by Nobuo Yoneda, that is essential background to many central concepts of category theory.[8]

Mac Lane had an exemplary devotion to writing approachable texts, starting with his very influential A Survey of Modern Algebra, coauthored in 1941 with Garrett Birkhoff. From then on, it was possible to teach elementary modern algebra to undergraduates using an English-language text. His Categories for the Working Mathematician remains the definitive introduction to category theory.

Mac Lane supervised the PhDs of, among many others, David Eisenbud, William Howard, Irving Kaplansky, Michael Morley, Anil Nerode, Robert Solovay, and John G. Thompson.
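The Yoneda lemma mentioned above admits a compact statement (this is the standard formulation, added here for the reader's convenience rather than drawn from the article's sources):

```latex
% Yoneda lemma: for a locally small category C, a functor
% F : C -> Set, and an object A of C, there is a bijection
\[
  \mathrm{Nat}\bigl(\mathrm{Hom}_{\mathcal C}(A,-),\, F\bigr) \;\cong\; F(A),
\]
% natural in both A and F. The bijection sends a natural
% transformation \alpha to the element \alpha_A(\mathrm{id}_A).
```

In particular, taking F to be another hom-functor shows that A can be recovered, up to isomorphism, from the functor Hom(A, −), which is why the lemma underlies so many categorical embedding and representability arguments.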
In addition to reviewing a fair amount of his mathematical output, the obituary articles McLarty (2005, 2007) clarify Mac Lane's contributions to the philosophy of mathematics. Mac Lane (1986) is an approachable introduction to his views on this subject.

Selected works

• 1997 (1941). A Survey of Modern Algebra (with Garrett Birkhoff). A K Peters. ISBN 1-56881-068-7.
• 1948. "Groups, categories and duality". Proceedings of the National Academy of Sciences of the USA. 34: 263–267.
• 1995 (1963). Homology. Springer (Classics in Mathematics). ISBN 978-0-387-58662-5. (Originally Band 114 of Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen.) AMS review by David Buchsbaum.
• 1999 (1967). Algebra (with Garrett Birkhoff). Chelsea. ISBN 0-8218-1646-2.
• 1998 (1972). Categories for the Working Mathematician. Springer (Graduate Texts in Mathematics). ISBN 0-387-98403-8.
• 1986. Mathematics, Form and Function. Springer-Verlag. ISBN 0-387-96217-4.
• 1992. Sheaves in Geometry and Logic: A First Introduction to Topos Theory (with Ieke Moerdijk). ISBN 0-387-97710-4.
• 1995. "Mathematics at Göttingen under the Nazis" (PDF). Notices of the AMS. 42 (10): 1134–1138.
• 2005. Saunders Mac Lane: A Mathematical Autobiography. A K Peters. ISBN 1-56881-150-0.

See also

• Foundations of geometry
• PROP (category theory)
• SPQR tree

Notes

1. Pearce, Jeremy (21 April 2005). "Saunders Mac Lane, 95, Pioneer of Algebra's Category Theory, Dies". The New York Times. Retrieved 28 August 2020.
2. Mac Lane, Saunders (1940). "Modular Fields". Amer. Math. Monthly. 47 (5): 67–84. doi:10.2307/2302685. JSTOR 2302685.
3. Mac Lane, Saunders (1939). "Some Recent Advances in Algebra". Amer. Math. Monthly. 46 (1): 3–19. doi:10.2307/2302916. JSTOR 2302916.
4. Albers, Donald J.; Alexanderson, Gerald L.; Reid, Constance, eds. (1990). "Saunders Mac Lane". More Mathematical People. Harcourt Brace Jovanovich. pp. 196–219.
5. Mac Lane (2005), p. 6.
6. Mac Lane, Saunders (Oct 1995).
"Mathematics at Göttingen under the Nazis" (PDF). Notices of the AMS. 42 (10): 1134–1138.
7. Segal, Sanford L. (April 1996). "Letters to the Editor: Corrections on Mac Lane's Article" (PDF). Notices of the AMS. 43 (4): 405–406.
8. Kinoshita, Yoshiki (23 April 1996). "Prof. Nobuo Yoneda passed away". Retrieved 21 December 2013.

References

• Nadis, Steve; Yau, Shing-Tung (2013). "Chapter 4. Analysis and Algebra Meet Topology: Marston Morse, Hassler Whitney, and Saunders Mac Lane". A History in Sum. Cambridge, MA: Harvard University Press. pp. 86–115. ISBN 978-0-674-72500-3. JSTOR j.ctt6wpqft. MR 3100544. Zbl 1290.01005. (E-book: ISBN 978-0-674-72655-0.)

Biographical references

• McLarty, Colin (2005). "Saunders Mac Lane (1909–2005): His Mathematical Life and Philosophical Works". Philosophia Mathematica. Series III. 13 (3): 237–251. doi:10.1093/philmat/nki038. MR 2192173. Zbl 1094.01010. With a selected bibliography emphasizing Mac Lane's philosophical writings.
• McLarty, Colin (2007). "The Last Mathematician from Hilbert's Göttingen: Saunders Mac Lane as Philosopher of Mathematics". The British Journal for the Philosophy of Science. 58 (1): 77–112. CiteSeerX 10.1.1.828.5753. doi:10.1093/bjps/axl030. MR 2301283. S2CID 53561655. Zbl 1122.01017.
• Lawvere, William (2007). "Saunders Mac Lane". New Dictionary of Scientific Biography. New York: Charles Scribner's Sons. pp. 237–251. ISBN 978-0684315591.

External links

• O'Connor, John J.; Robertson, Edmund F. "Saunders Mac Lane". MacTutor History of Mathematics Archive. University of St Andrews.
• Obituary press release from the University of Chicago.
• Photographs of Mac Lane, 1984–1999.
• Kutateladze, S. S. Saunders Mac Lane, the Knight of Mathematics.
• Saunders Mac Lane at the Mathematics Genealogy Project.
Wikipedia
Bayes factor The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other.[1] The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, although it uses the (integrated) marginal likelihood rather than the maximized likelihood. As such, the two quantities coincide only under simple hypotheses (e.g., two specific parameter values).[2] Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.[3] Although conceptually simple, the computation of the Bayes factor can be challenging, depending on the complexity of the model and the hypotheses.[4] Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested.[5] For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio applies in the case of a precise (equality-constrained) hypothesis against an unrestricted alternative.[6][7] Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is known as the Bayesian information criterion (BIC);[8] in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper, since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite. Definition The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters.[9] The posterior probability $\Pr(M|D)$ of a model M given data D is given by Bayes' theorem: $\Pr(M|D)={\frac {\Pr(D|M)\Pr(M)}{\Pr(D)}}.$ The key data-dependent term $\Pr(D|M)$ represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison. 
Given a model selection problem in which one wishes to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors $\theta _{1}$ and $\theta _{2}$, is assessed by the Bayes factor K given by $K={\frac {\Pr(D|M_{1})}{\Pr(D|M_{2})}}={\frac {\int \Pr(\theta _{1}|M_{1})\Pr(D|\theta _{1},M_{1})\,d\theta _{1}}{\int \Pr(\theta _{2}|M_{2})\Pr(D|\theta _{2},M_{2})\,d\theta _{2}}}={\frac {\frac {\Pr(M_{1}|D)\Pr(D)}{\Pr(M_{1})}}{\frac {\Pr(M_{2}|D)\Pr(D)}{\Pr(M_{2})}}}={\frac {\Pr(M_{1}|D)}{\Pr(M_{2}|D)}}{\frac {\Pr(M_{2})}{\Pr(M_{1})}}.$ When the two models have equal prior probability, so that $\Pr(M_{1})=\Pr(M_{2})$, the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If, instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure.[10] It thus guards against overfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,[11] with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.[12] Other approaches are: • to treat model comparison as a decision problem, computing the expected value or cost of each model choice; • to use minimum message length (MML); • to use minimum description length (MDL). 
Interpretation A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. Harold Jeffreys gave a scale for the interpretation of K:[13]

K                          | dHart    | bits       | Strength of evidence
$<10^{0}$                  | < 0      | < 0        | Negative (supports M2)
$10^{0}$ to $10^{1/2}$     | 0 to 5   | 0 to 1.6   | Barely worth mentioning
$10^{1/2}$ to $10^{1}$     | 5 to 10  | 1.6 to 3.3 | Substantial
$10^{1}$ to $10^{3/2}$     | 10 to 15 | 3.3 to 5.0 | Strong
$10^{3/2}$ to $10^{2}$     | 15 to 20 | 5.0 to 6.6 | Very strong
$>10^{2}$                  | > 20     | > 6.6      | Decisive

The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. According to I. J. Good, a change in a weight of evidence of 1 deciban or 1/3 of a bit (i.e. a change in an odds ratio from evens to about 5:4) is about as finely as humans can reasonably perceive their degree of belief in a hypothesis in everyday use.[14] An alternative table, widely cited, is provided by Kass and Raftery (1995):[10]

log10 K  | K         | Strength of evidence
0 to 1/2 | 1 to 3.2  | Not worth more than a bare mention
1/2 to 1 | 3.2 to 10 | Substantial
1 to 2   | 10 to 100 | Strong
> 2      | > 100     | Decisive

Example Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = 1⁄2, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. 
The likelihood can be calculated according to the binomial distribution: ${{200 \choose 115}q^{115}(1-q)^{85}}.$ Thus we have for M1 $P(X=115\mid M_{1})={200 \choose 115}\left({1 \over 2}\right)^{200}\approx 0.006$ whereas for M2 we have $P(X=115\mid M_{2})=\int _{0}^{1}{200 \choose 115}q^{115}(1-q)^{85}dq={1 \over 201}\approx 0.005$ The ratio is then 1.2, which is "barely worth mentioning" even though it points very slightly towards M1. A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = 1⁄2 is 0.02, and the corresponding two-tailed probability of a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example one that reflects the fact that you expect the numbers of successes and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test. A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely ${\frac {115}{200}}=0.575$, whence $\textstyle P(X=115\mid M_{2})={{200 \choose 115}{\hat {q}}^{115}(1-{\hat {q}})^{85}}\approx 0.06$ (rather than averaging over all possible q). That gives a likelihood ratio of 0.1 and points towards M2. M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. 
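The numbers in this example are easy to reproduce; the following is a minimal Python sketch using only the standard library (variable names are illustrative, not from the source):

```python
from math import comb

n, successes = 200, 115

# Marginal likelihood under M1: q fixed at 1/2
p_m1 = comb(n, successes) * 0.5 ** n

# Marginal likelihood under M2: with a uniform prior on q, the integral of
# C(n,k) q^k (1-q)^(n-k) over [0,1] equals 1/(n+1)
p_m2 = 1 / (n + 1)

bayes_factor = p_m1 / p_m2
print(f"Bayes factor K = {bayes_factor:.2f}")  # about 1.2: "barely worth mentioning"

# Classical likelihood ratio, using the MLE q_hat = 115/200 for M2
q_hat = successes / n
p_m2_ml = comb(n, successes) * q_hat ** successes * (1 - q_hat) ** (n - successes)
print(f"Likelihood ratio = {p_m1 / p_m2_ml:.2f}")  # about 0.1: points towards M2
```

The contrast between the two printed ratios illustrates the point of the example: integrating over the prior penalizes M2's extra flexibility, while maximizing over q rewards it.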
The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.[15] On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its Akaike information criterion (AIC) value is $2\cdot 0-2\cdot \ln(0.005956)\approx 10.2467$. Model M2 has 1 parameter, and so its AIC value is $2\cdot 1-2\cdot \ln(0.056991)\approx 7.7297$. Hence M1 is about $\exp \left({\frac {7.7297-10.2467}{2}}\right)\approx 0.284$ times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded. See also • Akaike information criterion • Approximate Bayesian computation • Bayesian information criterion • Deviance information criterion • Lindley's paradox • Minimum message length • Model selection Statistical ratios • Odds ratio • Relative risk References 1. Morey, Richard D.; Romeijn, Jan-Willem; Rouder, Jeffrey N. (2016). "The philosophy of Bayes factors and the quantification of statistical evidence". Journal of Mathematical Psychology. 72: 6–18. doi:10.1016/j.jmp.2015.11.001. 2. Lesaffre, Emmanuel; Lawson, Andrew B. (2012). "Bayesian hypothesis testing". Bayesian Biostatistics. Somerset: John Wiley & Sons. pp. 72–78. doi:10.1002/9781119942412.ch3. ISBN 978-0-470-01823-1. 3. Ly, Alexander; et al. (2020). "The Bayesian Methodology of Sir Harold Jeffreys as a Practical Alternative to the P Value Hypothesis Test". Computational Brain & Behavior. 3 (2): 153–161. doi:10.1007/s42113-019-00070-x. 4. Llorente, Fernando; et al. (2022). "Marginal likelihood computation for model selection and hypothesis testing: an extensive review". SIAM Review. to appear. arXiv:2005.08334. 5. Congdon, Peter (2014). 
"Estimating model probabilities or marginal likelihoods in practice". Applied Bayesian Modelling (2nd ed.). Wiley. pp. 38–40. ISBN 978-1-119-95151-3. 6. Koop, Gary (2003). "Model Comparison: The Savage–Dickey Density Ratio". Bayesian Econometrics. Somerset: John Wiley & Sons. pp. 69–71. ISBN 0-470-84567-8. 7. Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul (2010). "Bayesian hypothesis testing for psychologists: A tutorial on the Savage–Dickey method" (PDF). Cognitive Psychology. 60 (3): 158–189. doi:10.1016/j.cogpsych.2009.12.001. PMID 20064637. S2CID 206867662. 8. Ibrahim, Joseph G.; Chen, Ming-Hui; Sinha, Debajyoti (2001). "Bayesian Information Criterion". Bayesian Survival Analysis. New York: Springer. pp. 246–254. doi:10.1007/978-1-4757-3447-8_6. ISBN 0-387-95277-2. 9. Gill, Jeff (2002). "Bayesian Hypothesis Testing and the Bayes Factor". Bayesian Methods : A Social and Behavioral Sciences Approach. Chapman & Hall. pp. 199–237. ISBN 1-58488-288-3. 10. Robert E. Kass & Adrian E. Raftery (1995). "Bayes Factors" (PDF). Journal of the American Statistical Association. 90 (430): 791. doi:10.2307/2291091. JSTOR 2291091. 11. Toni, T.; Stumpf, M.P.H. (2009). "Simulation-based model selection for dynamical systems in systems and population biology". Bioinformatics. 26 (1): 104–10. arXiv:0911.1705. doi:10.1093/bioinformatics/btp619. PMC 2796821. PMID 19880371. 12. Robert, C.P.; J. Cornuet; J. Marin & N.S. Pillai (2011). "Lack of confidence in approximate Bayesian computation model choice". Proceedings of the National Academy of Sciences. 108 (37): 15112–15117. Bibcode:2011PNAS..10815112R. doi:10.1073/pnas.1102900108. PMC 3174657. PMID 21876135. 13. Jeffreys, Harold (1998) [1961]. The Theory of Probability (3rd ed.). Oxford, England. p. 432. ISBN 9780191589676.{{cite book}}: CS1 maint: location missing publisher (link) 14. Good, I.J. (1979). "Studies in the History of Probability and Statistics. XXXVII A. M. 
Turing's statistical work in World War II". Biometrika. 66 (2): 393–396. doi:10.1093/biomet/66.2.393. MR 0548210. 15. Sharpening Ockham's Razor On a Bayesian Strop Further reading • Bernardo, J.; Smith, A. F. M. (1994). Bayesian Theory. John Wiley. ISBN 0-471-92416-4. • Denison, D. G. T.; Holmes, C. C.; Mallick, B. K.; Smith, A. F. M. (2002). Bayesian Methods for Nonlinear Classification and Regression. John Wiley. ISBN 0-471-49036-9. • Dienes, Z. (2019). How do I know what my theory predicts? Advances in Methods and Practices in Psychological Science doi:10.1177/2515245919876960 • Duda, Richard O.; Hart, Peter E.; Stork, David G. (2000). "Section 9.6.5". Pattern classification (2nd ed.). Wiley. pp. 487–489. ISBN 0-471-05669-3. • Gelman, A.; Carlin, J.; Stern, H.; Rubin, D. (1995). Bayesian Data Analysis. London: Chapman & Hall. ISBN 0-412-03991-5. • Jaynes, E. T. (1994), Probability Theory: the logic of science, chapter 24. • Kadane, Joseph B.; Dickey, James M. (1980). "Bayesian Decision Theory and the Simplification of Models". In Kmenta, Jan; Ramsey, James B. (eds.). Evaluation of Econometric Models. New York: Academic Press. pp. 245–268. ISBN 0-12-416550-8. • Lee, P. M. (2012). Bayesian Statistics: an introduction. Wiley. ISBN 9781118332573. • Winkler, Robert (2003). Introduction to Bayesian Inference and Decision (2nd ed.). Probabilistic. ISBN 0-9647938-4-9. External links • BayesFactor —an R package for computing Bayes factors in common research designs • Bayes factor calculator — Online calculator for informed Bayes factors • Bayes Factor Calculators —web-based version of much of the BayesFactor package
Savitch's theorem In computational complexity theory, Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function $f\in \Omega (\log(n))$, ${\mathsf {NSPACE}}\left(f\left(n\right)\right)\subseteq {\mathsf {DSPACE}}\left(f\left(n\right)^{2}\right).$ In other words, if a nondeterministic Turing machine can solve a problem using $f(n)$ space, a deterministic Turing machine can solve the same problem in the square of that space bound.[1] Although it seems that nondeterminism may produce exponential gains in time (as formalized in the unproven exponential time hypothesis), Savitch's theorem shows that it has a markedly more limited effect on space requirements.[2] Proof The proof relies on an algorithm for STCON, the problem of determining whether there is a path between two vertices in a directed graph, which runs in $O\left((\log n)^{2}\right)$ space for $n$ vertices. The basic idea of the algorithm is to solve recursively a somewhat more general problem, testing the existence of a path from a vertex $s$ to another vertex $t$ that uses at most $k$ edges, for a parameter $k$ given as input. STCON is a special case of this problem where $k$ is set large enough to impose no restriction on the paths (for instance, equal to the total number of vertices in the graph, or any larger value). To test for a $k$-edge path from $s$ to $t$, a deterministic algorithm can iterate through all vertices $u$, and recursively search for paths of half the length from $s$ to $u$ and from $u$ to $t$. 
This algorithm can be expressed in pseudocode (in Python syntax) as follows:

def stcon(s, t) -> bool:
    """Test whether a path of any length exists from s to t."""
    return k_edge_path(s, t, n)  # n is the number of vertices

def k_edge_path(s, t, k) -> bool:
    """Test whether a path of length at most k exists from s to t."""
    if k == 0:
        return s == t
    if k == 1:
        # a path of length at most 1 exists if s and t coincide
        # or are joined by an edge
        return s == t or (s, t) in edges
    for u in vertices:
        if k_edge_path(s, u, floor(k / 2)) and k_edge_path(u, t, ceil(k / 2)):
            return True
    return False

Because each recursive call halves the parameter $k$, the number of levels of recursion is $\lceil \log _{2}n\rceil $. Each level requires $O(\log n)$ bits of storage for its function arguments and local variables: $k$ and the vertices $s$, $t$, and $u$ require $\lceil \log _{2}n\rceil $ bits each. The total auxiliary space complexity is thus $O\left((\log n)^{2}\right)$. The input graph is considered to be represented in a separate read-only memory and does not contribute to this auxiliary space bound. Alternatively, it may be represented as an implicit graph. Although described above in the form of a program in a high-level language, the same algorithm may be implemented with the same asymptotic space bound on a Turing machine. This algorithm can be applied to an implicit graph whose vertices represent the configurations of a nondeterministic Turing machine and its tape, running within a given space bound $f(n)$. The edges of this graph represent the nondeterministic transitions of the machine, $s$ is set to the initial configuration of the machine, and $t$ is set to a special vertex representing all accepting halting states. In this case, the algorithm returns true when the machine has a nondeterministic accepting path, and false otherwise. The number of configurations in this graph is $O(2^{f(n)})$, from which it follows that applying the algorithm to this implicit graph uses space $O(f(n)^{2})$. 
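As a sanity check, the recursion can be run on a small explicit graph. The following self-contained sketch uses an invented four-vertex path graph as example data, with the length-1 base case also accepting s == t so that paths shorter than the budget k are found:

```python
from math import floor, ceil

# Invented example data: a directed path 0 -> 1 -> 2 -> 3
vertices = [0, 1, 2, 3]
edges = {(0, 1), (1, 2), (2, 3)}
n = len(vertices)

def k_edge_path(s, t, k) -> bool:
    """Test whether a path of length at most k exists from s to t."""
    if k == 0:
        return s == t
    if k == 1:
        # accept both the empty path (s == t) and a single edge
        return s == t or (s, t) in edges
    for u in vertices:
        # split the budget: at most ceil(k/2) + floor(k/2) = k edges in total
        if k_edge_path(s, u, floor(k / 2)) and k_edge_path(u, t, ceil(k / 2)):
            return True
    return False

def stcon(s, t) -> bool:
    """Test whether a path of any length exists from s to t."""
    return k_edge_path(s, t, n)

print(stcon(0, 3))  # True: 0 -> 1 -> 2 -> 3
print(stcon(3, 0))  # False: no edges leave vertex 3
```

The recursion depth is logarithmic in k, mirroring the $O\left((\log n)^{2}\right)$ space analysis, although this Python version is of course exponentially slow in time.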
Thus by deciding connectivity in a graph representing nondeterministic Turing machine configurations, one can decide membership in the language recognized by that machine, in space proportional to the square of the space used by the Turing machine. Corollaries Some important corollaries of the theorem include: PSPACE = NPSPACE That is, the languages that can be recognized by deterministic polynomial-space Turing machines and nondeterministic polynomial-space Turing machines are the same. This follows directly from the fact that the square of a polynomial function is still a polynomial function. It is believed that a similar relationship does not exist between the polynomial time complexity classes, P and NP, although this is still an open question. NL ⊆ L2 That is, all languages that can be solved nondeterministically in logarithmic space can be solved deterministically in the complexity class ${\mathsf {\color {Blue}L}}^{2}={\mathsf {DSPACE}}\left(\left(\log n\right)^{2}\right).$ This follows from the fact that STCON is NL-complete. See also • Immerman–Szelepcsényi theorem – Nondeterministic space complexity classes are closed under complementation Notes 1. Arora & Barak (2009) p.86 2. Arora & Barak (2009) p.92 References • Arora, Sanjeev; Barak, Boaz (2009), Computational complexity. A modern approach, Cambridge University Press, ISBN 978-0-521-42426-4, Zbl 1193.68112 • Papadimitriou, Christos (1993), "Section 7.3: The Reachability Method", Computational Complexity (1st ed.), Addison Wesley, pp. 149–150, ISBN 0-201-53082-1 • Savitch, Walter J. (1970), "Relationships between nondeterministic and deterministic tape complexities", Journal of Computer and System Sciences, 4 (2): 177–192, doi:10.1016/S0022-0000(70)80006-X, hdl:10338.dmlcz/120475 • Sipser, Michael (1997), "Section 8.1: Savitch's Theorem", Introduction to the Theory of Computation, PWS Publishing, pp. 
279–281, ISBN 0-534-94728-X External links • Lance Fortnow, Foundations of Complexity, Lesson 18: Savitch's Theorem. Accessed 09/09/09. • Richard J. Lipton, Savitch’s Theorem. Gives a historical account on how the proof was discovered.
Savitzky–Golay filter A Savitzky–Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known as convolution, by fitting successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. When the data points are equally spaced, an analytical solution to the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures,[1][2] was popularized by Abraham Savitzky and Marcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964.[3][4] Some errors in the tables have been corrected.[5] The method has been extended for the treatment of 2- and 3-dimensional data. Savitzky and Golay's paper is one of the most widely cited papers in the journal Analytical Chemistry[6] and is classed by that journal as one of its "10 seminal papers", saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article".[7] Applications The data consists of a set of points {xj, yj}, j = 1, ..., n, where xj is an independent variable and yj is an observed value. They are treated with a set of m convolution coefficients, Ci, according to the expression $Y_{j}=\sum _{i={\tfrac {1-m}{2}}}^{\tfrac {m-1}{2}}C_{i}\,y_{j+i},\qquad {\frac {m+1}{2}}\leq j\leq n-{\frac {m-1}{2}}$ Selected convolution coefficients are shown in the tables, below. 
For example, for smoothing by a 5-point quadratic polynomial, m = 5, i = −2, −1, 0, 1, 2 and the jth smoothed data point, Yj, is given by $Y_{j}={\frac {1}{35}}(-3y_{j-2}+12y_{j-1}+17y_{j}+12y_{j+1}-3y_{j+2})$, where C−2 = −3/35, C−1 = 12/35, etc. There are numerous applications of smoothing, which is performed primarily to make the data appear to be less noisy than it really is. The following are applications of numerical differentiation of data.[8] Note: when calculating the nth derivative, an additional scaling factor of ${\frac {n!}{h^{n}}}$ may be applied to all calculated data points to obtain absolute values (see expressions for ${\frac {d^{n}Y}{dx^{n}}}$, below, for details). [Figures: (1) synthetic Lorentzian + noise (blue) and 1st derivative (green); (2) titration curve (blue) for malonic acid and 2nd derivative (green), with the part in the light blue box magnified 10 times; (3) Lorentzian on exponential baseline (blue) and 2nd derivative (green); (4) sum of two Lorentzians (blue) and 2nd derivative (green); (5) 4th derivative of the sum of two Lorentzians.] 1. Location of maxima and minima in experimental data curves. This was the application that first motivated Savitzky.[4] The first derivative of a function is zero at a maximum or minimum. The diagram shows data points belonging to a synthetic Lorentzian curve, with added noise (blue diamonds). Data are plotted on a scale of half width, relative to the peak maximum at zero. The smoothed curve (red line) and 1st derivative (green) were calculated with 7-point cubic Savitzky–Golay filters. Linear interpolation of the first derivative values at positions either side of the zero-crossing gives the position of the peak maximum. 3rd derivatives can also be used for this purpose. 2. Location of an end-point in a titration curve. An end-point is an inflection point where the second derivative of the function is zero.[9] The titration curve for malonic acid illustrates the power of the method. 
The first end-point at 4 ml is barely visible, but the second derivative allows its value to be easily determined by linear interpolation to find the zero crossing. 3. Baseline flattening. In analytical chemistry it is sometimes necessary to measure the height of an absorption band against a curved baseline.[10] Because the curvature of the baseline is much less than the curvature of the absorption band, the second derivative effectively flattens the baseline. Three measures of the derivative height, which is proportional to the absorption band height, are the "peak-to-valley" distances h1 and h2, and the height from baseline, h3.[11] 4. Resolution enhancement in spectroscopy. Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum: they have reduced half-width. This allows partially overlapping bands to be "resolved" into separate (negative) peaks.[12] The diagram illustrates how this may be used also for chemical analysis, using measurement of "peak-to-valley" distances. In this case the valleys are a property of the 2nd derivative of a Lorentzian. (x-axis position is relative to the position of the peak maximum on a scale of half width at half height). 5. Resolution enhancement with 4th derivative (positive peaks). The minima are a property of the 4th derivative of a Lorentzian. Moving average A moving average filter is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. It is often used in technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series. An unweighted moving average filter is the simplest convolution filter. Each subset of the data set is fitted by a straight horizontal line. 
It was not included in the Savitzky–Golay tables of convolution coefficients as all the coefficient values are identical, with the value 1/m. Derivation of convolution coefficients When the data points are equally spaced, an analytical solution to the least-squares equations can be found.[2] This solution forms the basis of the convolution method of numerical smoothing and differentiation. Suppose that the data consists of a set of n points (xj, yj) (j = 1, ..., n), where xj is an independent variable and yj is a datum value. A polynomial will be fitted by linear least squares to a set of m (an odd number) adjacent data points, each separated by an interval h. Firstly, a change of variable is made $z={{x-{\bar {x}}} \over h}$ where ${\bar {x}}$ is the value of the central point. z takes the values ${\tfrac {1-m}{2}},\cdots ,0,\cdots ,{\tfrac {m-1}{2}}$ (e.g. m = 5 → z = −2, −1, 0, 1, 2).[note 1] The polynomial, of degree k, is defined as $Y=a_{0}+a_{1}z+a_{2}z^{2}\cdots +a_{k}z^{k}.$[note 2] The coefficients a0, a1 etc. are obtained by solving the normal equations (bold a represents a vector, bold J represents a matrix). ${\mathbf {a} }=\left({{\mathbf {J} }^{\mathbf {T} }{\mathbf {J} }}\right)^{-{\mathbf {1} }}{\mathbf {J} }^{\mathbf {T} }{\mathbf {y} },$ where $\mathbf {J} $ is a Vandermonde matrix, that is, the $i$-th row of $\mathbf {J} $ has values $1,z_{i},z_{i}^{2},\dots $. For example, for a cubic polynomial fitted to 5 points, z = −2, −1, 0, 1, 2, the normal equations are solved as follows. 
$\mathbf {J} ={\begin{pmatrix}1&-2&4&-8\\1&-1&1&-1\\1&0&0&0\\1&1&1&1\\1&2&4&8\end{pmatrix}}$ $\mathbf {J^{T}J} ={\begin{pmatrix}m&\sum z&\sum z^{2}&\sum z^{3}\\\sum z&\sum z^{2}&\sum z^{3}&\sum z^{4}\\\sum z^{2}&\sum z^{3}&\sum z^{4}&\sum z^{5}\\\sum z^{3}&\sum z^{4}&\sum z^{5}&\sum z^{6}\\\end{pmatrix}}={\begin{pmatrix}m&0&\sum z^{2}&0\\0&\sum z^{2}&0&\sum z^{4}\\\sum z^{2}&0&\sum z^{4}&0\\0&\sum z^{4}&0&\sum z^{6}\\\end{pmatrix}}={\begin{pmatrix}5&0&10&0\\0&10&0&34\\10&0&34&0\\0&34&0&130\\\end{pmatrix}}$ Now, the normal equations can be factored into two separate sets of equations, by rearranging rows and columns, with $\mathbf {J^{T}J} _{\text{even}}={\begin{pmatrix}5&10\\10&34\\\end{pmatrix}}\quad \mathrm {and} \quad \mathbf {J^{T}J} _{\text{odd}}={\begin{pmatrix}10&34\\34&130\\\end{pmatrix}}$ Expressions for the inverse of each of these matrices can be obtained using Cramer's rule $(\mathbf {J^{T}J} )_{\text{even}}^{-1}={1 \over 70}{\begin{pmatrix}34&-10\\-10&5\\\end{pmatrix}}\quad \mathrm {and} \quad (\mathbf {J^{T}J} )_{\text{odd}}^{-1}={1 \over 144}{\begin{pmatrix}130&-34\\-34&10\\\end{pmatrix}}$ The normal equations become ${\begin{pmatrix}{a_{0}}\\{a_{2}}\\\end{pmatrix}}_{j}={1 \over 70}{\begin{pmatrix}34&-10\\-10&5\end{pmatrix}}{\begin{pmatrix}1&1&1&1&1\\4&1&0&1&4\\\end{pmatrix}}{\begin{pmatrix}y_{j-2}\\y_{j-1}\\y_{j}\\y_{j+1}\\y_{j+2}\end{pmatrix}}$ and ${\begin{pmatrix}a_{1}\\a_{3}\\\end{pmatrix}}_{j}={1 \over 144}{\begin{pmatrix}130&-34\\-34&10\\\end{pmatrix}}{\begin{pmatrix}-2&-1&0&1&2\\-8&-1&0&1&8\\\end{pmatrix}}{\begin{pmatrix}y_{j-2}\\y_{j-1}\\y_{j}\\y_{j+1}\\y_{j+2}\\\end{pmatrix}}$ Multiplying out and removing common factors, ${\begin{aligned}a_{0,j}&={1 \over 35}(-3y_{j-2}+12y_{j-1}+17y_{j}+12y_{j+1}-3y_{j+2})\\a_{1,j}&={1 \over 12}(y_{j-2}-8y_{j-1}+8y_{j+1}-y_{j+2})\\a_{2,j}&={1 \over 14}(2y_{j-2}-y_{j-1}-2y_{j}-y_{j+1}+2y_{j+2})\\a_{3,j}&={1 \over 12}(-y_{j-2}+2y_{j-1}-2y_{j+1}+y_{j+2})\end{aligned}}$ The coefficients of y in these 
expressions are known as convolution coefficients. They are elements of the matrix $\mathbf {C=(J^{T}J)^{-1}J^{T}} $ In general, $(C\otimes y)_{j}\ =Y_{j}=\sum _{i=-{\tfrac {m-1}{2}}}^{\tfrac {m-1}{2}}C_{i}\,y_{j+i},\qquad {\frac {m+1}{2}}\leq j\leq n-{\frac {m-1}{2}}$ In matrix notation this example is written as ${\begin{pmatrix}Y_{3}\\Y_{4}\\Y_{5}\\\vdots \end{pmatrix}}={1 \over 35}{\begin{pmatrix}-3&12&17&12&-3&0&0&\cdots \\0&-3&12&17&12&-3&0&\cdots \\0&0&-3&12&17&12&-3&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{pmatrix}}{\begin{pmatrix}y_{1}\\y_{2}\\y_{3}\\y_{4}\\y_{5}\\y_{6}\\y_{7}\\\vdots \end{pmatrix}}$ Tables of convolution coefficients, calculated in the same way for m up to 25, were published for the Savitzky–Golay smoothing filter in 1964.[3][5] The value of the central point, z = 0, is obtained from a single set of coefficients, a0 for smoothing, a1 for the 1st derivative, etc. The numerical derivatives are obtained by differentiating Y. This means that the derivatives are calculated for the smoothed data curve. For a cubic polynomial ${\begin{aligned}Y&=a_{0}+a_{1}z+a_{2}z^{2}+a_{3}z^{3}=a_{0}&{\text{ at }}z=0,x={\bar {x}}\\{\frac {dY}{dx}}&={\frac {1}{h}}\left({a_{1}+2a_{2}z+3a_{3}z^{2}}\right)={\frac {1}{h}}a_{1}&{\text{ at }}z=0,x={\bar {x}}\\{\frac {d^{2}Y}{dx^{2}}}&={\frac {1}{h^{2}}}\left({2a_{2}+6a_{3}z}\right)={\frac {2}{h^{2}}}a_{2}&{\text{ at }}z=0,x={\bar {x}}\\{\frac {d^{3}Y}{dx^{3}}}&={\frac {6}{h^{3}}}a_{3}\end{aligned}}$ In general, polynomials of degree (0 and 1),[note 3] (2 and 3), (4 and 5) etc. give the same coefficients for smoothing and even derivatives. Polynomials of degree (1 and 2), (3 and 4) etc. give the same coefficients for odd derivatives. Algebraic expressions It is not always necessary to use the Savitzky–Golay tables. 
The summations in the matrix JTJ can be evaluated in closed form, ${\begin{aligned}\sum _{z=-{\frac {m-1}{2}}}^{\frac {m-1}{2}}z^{2}&={m(m^{2}-1) \over 12}\\\sum z^{4}&={m(m^{2}-1)(3m^{2}-7) \over 240}\\\sum z^{6}&={m(m^{2}-1)(3m^{4}-18m^{2}+31) \over 1344}\end{aligned}}$ so that algebraic formulae can be derived for the convolution coefficients.[13][note 4] Functions that are suitable for use with a curve that has an inflection point are: Smoothing, polynomial degree 2,3 : $C_{0i}={\frac {\left({3m^{2}-7-20i^{2}}\right)/4}{m\left({m^{2}-4}\right)/3}};\quad {\frac {1-m}{2}}\leq i\leq {\frac {m-1}{2}}$ (the range of values for i also applies to the expressions below) 1st derivative: polynomial degree 3,4 $C_{1i}={\frac {5\left({3m^{4}-18m^{2}+31}\right)i-28\left({3m^{2}-7}\right)i^{3}}{m\left({m^{2}-1}\right)\left({3m^{4}-39m^{2}+108}\right)/15}}$ 2nd derivative: polynomial degree 2,3 $C_{2i}={\frac {12mi^{2}-m\left({m^{2}-1}\right)}{m^{2}\left({m^{2}-1}\right)\left({m^{2}-4}\right)/30}}$ 3rd derivative: polynomial degree 3,4 $C_{3i}={\frac {-\left({3m^{2}-7}\right)i+20i^{3}}{m\left({m^{2}-1}\right)\left({3m^{4}-39m^{2}+108}\right)/420}}$ Simpler expressions that can be used with curves that don't have an inflection point are: Smoothing, polynomial degree 0,1 (moving average): $C_{0i}={\frac {1}{m}}$ 1st derivative, polynomial degree 1,2: $C_{1i}={\frac {i}{m(m^{2}-1)/12}}$ Higher derivatives can be obtained. For example, a fourth derivative can be obtained by performing two passes of a second derivative function.[14] Use of orthogonal polynomials An alternative to fitting m data points by a simple polynomial in the subsidiary variable, z, is to use orthogonal polynomials. $Y=b_{0}P^{0}(z)+b_{1}P^{1}(z)\cdots +b_{k}P^{k}(z).$ where P0, ..., Pk is a set of mutually orthogonal polynomials of degree 0, ..., k. 
Full details on how to obtain expressions for the orthogonal polynomials and the relationship between the coefficients b and a are given by Guest.[2] Expressions for the convolution coefficients are easily obtained because the normal equations matrix, JTJ, is a diagonal matrix, as the product of any two orthogonal polynomials is zero by virtue of their mutual orthogonality. Therefore, each non-zero element of its inverse is simply the reciprocal of the corresponding element in the normal equations matrix. The calculation is further simplified by using recursion to build orthogonal Gram polynomials. The whole calculation can be coded in a few lines of PASCAL, a computer language well-adapted for calculations involving recursion.[15]

Treatment of first and last points

Savitzky–Golay filters are most commonly used to obtain the smoothed or derivative value at the central point, z = 0, using a single set of convolution coefficients. The first and last (m − 1)/2 points of the series cannot be calculated using this process. Various strategies can be employed to avoid this inconvenience.
• The data could be artificially extended by adding, in reverse order, copies of the first (m − 1)/2 points at the beginning and copies of the last (m − 1)/2 points at the end. For instance, with m = 5, two points are added at each end of the data y1, ..., yn, giving y3, y2, y1, ..., yn, yn−1, yn−2.
• Looking again at the fitting polynomial, it is obvious that data can be calculated for all values of z by using all sets of convolution coefficients for a single polynomial, a0 .. ak.
For a cubic polynomial ${\begin{aligned}Y&=a_{0}+a_{1}z+a_{2}z^{2}+a_{3}z^{3}\\{\frac {dY}{dx}}&={\frac {1}{h}}(a_{1}+2a_{2}z+3a_{3}z^{2})\\{\frac {d^{2}Y}{dx^{2}}}&={\frac {1}{h^{2}}}(2a_{2}+6a_{3}z)\\{\frac {d^{3}Y}{dx^{3}}}&={\frac {6}{h^{3}}}a_{3}\end{aligned}}$
• Convolution coefficients for the missing first and last points can also be easily obtained.[15] This is also equivalent to fitting the first (m + 1)/2 points with the same polynomial, and similarly for the last points.

Weighting the data

It is implicit in the above treatment that the data points are all given equal weight. Technically, the objective function $U=\sum _{i}w_{i}(Y_{i}-y_{i})^{2}$ being minimized in the least-squares process has unit weights, wi = 1. When the weights are not all the same, the normal equations become $\mathbf {a} =\left(\mathbf {J^{T}W} \mathbf {J} \right)^{-1}\mathbf {J^{T}W} \mathbf {y} ,\qquad W_{i,i}\neq 1$. If the same set of diagonal weights is used for all data subsets, $W={\text{diag}}(w_{1},w_{2},...,w_{m})$, an analytical solution to the normal equations can be written down. For example, with a quadratic polynomial, $\mathbf {J^{T}WJ} ={\begin{pmatrix}\sum w_{i}&\sum w_{i}z_{i}&\sum w_{i}z_{i}^{2}\\\sum w_{i}z_{i}&\sum w_{i}z_{i}^{2}&\sum w_{i}z_{i}^{3}\\\sum w_{i}z_{i}^{2}&\sum w_{i}z_{i}^{3}&\sum w_{i}z_{i}^{4}\end{pmatrix}}$ An explicit expression for the inverse of this matrix can be obtained using Cramer's rule. A set of convolution coefficients may then be derived as $\mathbf {C} =\left(\mathbf {J^{T}W} \mathbf {J} \right)^{-1}\mathbf {J^{T}W} .$ Alternatively, the coefficients, C, could be calculated in a spreadsheet, employing a built-in matrix inversion routine to obtain the inverse of the normal equations matrix. This set of coefficients, once calculated and stored, can be used with all calculations in which the same weighting scheme applies. A different set of coefficients is needed for each different weighting scheme.
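A sketch of the weighted computation (NumPy); the triangular weights used here are purely illustrative:

```python
import numpy as np

# Weighted coefficients C = (J^T W J)^{-1} J^T W for a 5-point quadratic fit
m = 5
z = np.arange(-2, 3)
J = np.vander(z, 3, increasing=True)        # columns 1, z, z^2
W = np.diag([1.0, 2.0, 3.0, 2.0, 1.0])      # hypothetical weighting scheme

C = np.linalg.inv(J.T @ W @ J) @ J.T @ W

# Whatever the weights, C J = I, so the smoothing row still sums to 1
print(round(C[0].sum(), 6))
```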
It was shown that the Savitzky–Golay filter can be improved by introducing weights that decrease at the ends of the fitting interval.[16]

Two-dimensional convolution coefficients

Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels.[17][18] Such a grid is referred to as a kernel, and the data points that constitute the kernel are referred to as nodes. The trick is to transform the rectangular kernel into a single row by a simple ordering of the indices of the nodes. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable z to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of the values at the m × n kernel nodes. The following example, for a bivariate polynomial of total degree 3, m = 7, and n = 5, illustrates the process, which parallels the process for the one-dimensional case, above.[19]

$v={\frac {x-{\bar {x}}}{h(x)}};\quad w={\frac {y-{\bar {y}}}{h(y)}}$

$Y=a_{00}+a_{10}v+a_{01}w+a_{20}v^{2}+a_{11}vw+a_{02}w^{2}+a_{30}v^{3}+a_{21}v^{2}w+a_{12}vw^{2}+a_{03}w^{3}$

The rectangular kernel of 35 data values, d1 − d35,

         v
  w    −3    −2    −1     0     1     2     3
 −2    d1    d2    d3    d4    d5    d6    d7
 −1    d8    d9    d10   d11   d12   d13   d14
  0    d15   d16   d17   d18   d19   d20   d21
  1    d22   d23   d24   d25   d26   d27   d28
  2    d29   d30   d31   d32   d33   d34   d35

becomes a vector when the rows are placed one after another.

d = (d1 ... d35)T

The Jacobian has 10 columns, one for each of the parameters a00 − a03, and 35 rows, one for each pair of v and w values.
Each row has the form $J_{\text{row}}=1\quad v\quad w\quad v^{2}\quad vw\quad w^{2}\quad v^{3}\quad v^{2}w\quad vw^{2}\quad w^{3}$ The convolution coefficients are calculated as $\mathbf {C} =\left(\mathbf {J} ^{T}\mathbf {J} \right)^{-1}\mathbf {J} ^{T}$ The first row of C contains 35 convolution coefficients, which can be multiplied with the 35 data values, respectively, to obtain the polynomial coefficient $a_{00}$, which is the smoothed value at the central node of the kernel (i.e. at the 18th node of the above table). Similarly, other rows of C can be multiplied with the 35 values to obtain other polynomial coefficients, which, in turn, can be used to obtain smoothed values and different smoothed partial derivatives at different nodes. Nikitas and Pappa-Louisi showed that, depending on the form of the polynomial used, the quality of smoothing may vary significantly.[20] They recommend using a polynomial of the form $Y=\sum _{i=0}^{p}\sum _{j=0}^{q}a_{ij}v^{i}w^{j}$ because such polynomials can achieve good smoothing both in the central and in the near-boundary regions of a kernel, and can therefore be used confidently in smoothing both at the internal and at the near-boundary data points of a sampled domain. In order to avoid ill-conditioning when solving the least-squares problem, p < m and q < n. For software that calculates the two-dimensional coefficients and for a database of such C's, see the section on multi-dimensional convolution coefficients, below.

Multi-dimensional convolution coefficients

The idea of two-dimensional convolution coefficients can be extended to higher spatial dimensions as well, in a straightforward manner,[17][21] by arranging the multidimensional distribution of kernel nodes in a single row.
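The 7 × 5 example above can be assembled and solved in a few lines; a sketch using NumPy:

```python
import numpy as np

# 7 x 5 kernel, bivariate polynomial of total degree 3 (10 parameters)
v, w = np.meshgrid(np.arange(-3, 4), np.arange(-2, 3))
v, w = v.ravel(), w.ravel()                 # the 35 nodes, row after row

# One Jacobian row per node: 1, v, w, v^2, vw, w^2, v^3, v^2 w, v w^2, w^3
J = np.column_stack([np.ones(35), v, w, v**2, v * w, w**2,
                     v**3, v**2 * w, v * w**2, w**3])

C = np.linalg.inv(J.T @ J) @ J.T            # 10 x 35

# Row 0 smooths at the central node; its coefficients sum to 1
print(round(C[0].sum(), 6))
```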
Following the aforementioned finding of Nikitas and Pappa-Louisi[20] in the two-dimensional case, use of the following form of the polynomial is recommended in multidimensional cases: $Y=\sum _{i_{1}=0}^{p_{1}}\sum _{i_{2}=0}^{p_{2}}\cdots \sum _{i_{D}=0}^{p_{D}}(a_{i_{1}i_{2}\cdots i_{D}}\times u_{1}^{i_{1}}u_{2}^{i_{2}}\cdots u_{D}^{i_{D}})$ where D is the dimension of the space, the $a$'s are the polynomial coefficients, and the u's are the coordinates in the different spatial directions. Algebraic expressions for partial derivatives of any order, whether mixed or otherwise, can be easily derived from the above expression.[21] Note that C depends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial are arranged when preparing the Jacobian. Accurate computation of C in multidimensional cases becomes challenging, because the precision of the standard floating-point numbers available in computer programming languages is no longer sufficient. The insufficient precision causes the floating-point truncation errors to become comparable to the magnitudes of some elements of C, which, in turn, severely degrades its accuracy and renders it useless. Chandra Shekhar has released two open-source programs, Advanced Convolution Coefficient Calculator (ACCC) and Precise Convolution Coefficient Calculator (PCCC), which handle these accuracy issues adequately. ACCC performs the computation iteratively, using floating-point numbers.[22] The precision of the floating-point numbers is gradually increased in each iteration, using GNU MPFR. Once the C's obtained in two consecutive iterations agree to a pre-specified number of significant digits, convergence is assumed to have been reached. If that number is sufficiently large, the computation yields a highly accurate C.
PCCC employs rational-number calculations, using the GNU Multiple Precision Arithmetic Library, and yields a fully accurate C in rational-number format.[23] In the end, these rational numbers are converted into floating-point numbers with a pre-specified number of significant digits. A database of C's calculated using ACCC, for symmetric kernels and both symmetric and asymmetric polynomials, on unity-spaced kernel nodes, in 1-, 2-, 3-, and 4-dimensional spaces, is made available.[24] Chandra Shekhar has also laid out a mathematical framework that describes the use of C calculated on unity-spaced kernel nodes to perform filtering and partial differentiation (of various orders) on non-uniformly spaced kernel nodes,[21] allowing the C's provided in the aforementioned database to be used. Although this method yields approximate results only, they are acceptable in most engineering applications, provided that the non-uniformity of the kernel nodes is weak.

Some properties of convolution

1. The sum of convolution coefficients for smoothing is equal to one. The sum of coefficients for odd derivatives is zero.[25]
2. The sum of squared convolution coefficients for smoothing is equal to the value of the central coefficient.[26]
3. Smoothing of a function leaves the area under the function unchanged.[25]
4. Convolution of a symmetric function with even-derivative coefficients conserves the centre of symmetry.[25]
5. Properties of derivative filters.[27]

Signal distortion and noise reduction

It is inevitable that the signal will be distorted in the convolution process. From property 3 above, when data which has a peak is smoothed the peak height will be reduced and the half-width will be increased.
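Properties 1 and 2 above can be checked directly on the standard 5-point coefficients, (−3, 12, 17, 12, −3)/35 for smoothing and (1, −8, 0, 8, −1)/12 for the first derivative:

```python
# 5-point quadratic/cubic smoothing and cubic/quartic 1st-derivative coefficients
smooth = [c / 35 for c in (-3, 12, 17, 12, -3)]
deriv1 = [c / 12 for c in (1, -8, 0, 8, -1)]

assert abs(sum(smooth) - 1) < 1e-12          # property 1: smoothing sums to one
assert abs(sum(deriv1)) < 1e-12              # property 1: odd derivative sums to zero
assert abs(sum(c * c for c in smooth) - 17 / 35) < 1e-12   # property 2
```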
Both the extent of the distortion and the S/N (signal-to-noise ratio) improvement:

• decrease as the degree of the polynomial increases
• increase as the width, m, of the convolution function increases

For example, if the noise in all data points is uncorrelated and has a constant standard deviation, σ, the standard deviation of the noise will be decreased by convolution with an m-point smoothing function to[26][note 5]

polynomial degree 0 or 1: ${\sqrt {1 \over m}}\sigma $ (moving average)
polynomial degree 2 or 3: ${\sqrt {\frac {3(3m^{2}-7)}{4m(m^{2}-4)}}}\sigma $.

These functions are shown in the plot at the right. For example, with a 9-point linear function (moving average) two-thirds of the noise is removed, and with a 9-point quadratic/cubic smoothing function only about half the noise is removed. Most of the remaining noise is low-frequency noise (see Frequency characteristics of convolution filters, below). Although the moving average function gives better noise reduction, it is unsuitable for smoothing data which has curvature over m points. A quadratic filter function is unsuitable for getting a derivative of a data curve with an inflection point because a quadratic polynomial does not have one. The optimal choice of polynomial order and number of convolution coefficients will be a compromise between noise reduction and distortion.[28]

Multipass filters

One way to mitigate distortion and improve noise removal is to use a filter of smaller width and perform more than one convolution with it. For two passes of the same filter this is equivalent to one pass of a filter obtained by convolution of the original filter with itself.[29] For example, 2 passes of the filter with coefficients (1/3, 1/3, 1/3) are equivalent to 1 pass of the filter with coefficients (1/9, 2/9, 3/9, 2/9, 1/9). The disadvantage of multipassing is that the equivalent filter width for $n$ passes of an $m$–point function is $n(m-1)+1$, so multipassing is subject to greater end-effects.
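The self-convolution described above is a one-liner:

```python
import numpy as np

# Two passes of a 3-point moving average = one pass of its self-convolution
f = np.array([1, 1, 1]) / 3
g = np.convolve(f, f)
print(g * 9)   # ~ [1, 2, 3, 2, 1]: the (1/9, 2/9, 3/9, 2/9, 1/9) filter,
               # of equivalent width n(m - 1) + 1 = 5
```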
Nevertheless, multipassing has been used to great advantage. For instance, some 40–80 passes on data with a signal-to-noise ratio of only 5 gave useful results.[30] The noise reduction formulae given above do not apply because correlation between calculated data points increases with each pass. Frequency characteristics of convolution filters Convolution maps to multiplication in the Fourier co-domain. The discrete Fourier transform of a convolution filter is a real-valued function which can be represented as $FT(\theta )=\sum _{j={\tfrac {1-m}{2}}}^{\tfrac {m-1}{2}}C_{j}\cos(j\theta )$ θ runs from 0 to 180 degrees, after which the function merely repeats itself. The plot for a 9-point quadratic/cubic smoothing function is typical. At very low angle, the plot is almost flat, meaning that low-frequency components of the data will be virtually unchanged by the smoothing operation. As the angle increases the value decreases so that higher frequency components are more and more attenuated. This shows that the convolution filter can be described as a low-pass filter: the noise that is removed is primarily high-frequency noise and low-frequency noise passes through the filter.[31] Some high-frequency noise components are attenuated more than others, as shown by undulations in the Fourier transform at large angles. This can give rise to small oscillations in the smoothed data[32] and phase reversal, i.e., high-frequency oscillations in the data get inverted by Savitzky–Golay filtering.[33] Convolution and correlation Convolution affects the correlation between errors in the data. The effect of convolution can be expressed as a linear transformation. 
$Y_{j}\ =\sum _{i=-{\frac {m-1}{2}}}^{\frac {m-1}{2}}C_{i}\,y_{j+i}$ By the law of error propagation, the variance-covariance matrix of the data, A will be transformed into B according to $\mathbf {B} =\mathbf {C} \mathbf {A} \mathbf {C} ^{T}$ To see how this applies in practice, consider the effect of a 3-point moving average on the first three calculated points, Y2 − Y4, assuming that the data points have equal variance and that there is no correlation between them. A will be an identity matrix multiplied by a constant, σ2, the variance at each point. $\mathbf {B} ={\sigma ^{2} \over 9}{\begin{pmatrix}1&1&1&0&0\\0&1&1&1&0\\0&0&1&1&1\end{pmatrix}}{\begin{pmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{pmatrix}}{\begin{pmatrix}1&0&0\\1&1&0\\1&1&1\\0&1&1\\0&0&1\end{pmatrix}}={\sigma ^{2} \over 9}{\begin{pmatrix}3&2&1\\2&3&2\\1&2&3\end{pmatrix}}$ In this case the correlation coefficients, $\rho _{ij}={\frac {B_{ij}}{\sqrt {B_{ii}B_{jj}}}},\ (i\neq j)$ between calculated points i and j will be $\rho _{i,i+1}={2 \over 3}=0.66,\,\rho _{i,i+2}={1 \over 3}=0.33$ In general, the calculated values are correlated even when the observed values are not correlated. The correlation extends over m − 1 calculated points at a time.[34] Multipass filters To illustrate the effect of multipassing on the noise and correlation of a set of data, consider the effects of a second pass of a 3-point moving average filter. For the second pass[note 6] $\mathbf {CBC^{T}} ={\sigma ^{2} \over 81}{\begin{pmatrix}1&1&1&0&0\\0&1&1&1&0\\0&0&1&1&1\end{pmatrix}}{\begin{pmatrix}3&2&1&0&0\\2&3&2&0&0\\1&2&3&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix}}{\begin{pmatrix}1&0&0\\1&1&0\\1&1&1\\0&1&1\\0&0&1\end{pmatrix}}={\sigma ^{2} \over 81}{\begin{pmatrix}19&16&10&4&1\\16&19&16&10&4\\10&16&19&16&10\\4&10&16&19&16\\1&4&10&16&19\end{pmatrix}}$ After two passes, the standard deviation of the central point has decreased to ${\sqrt {\tfrac {19}{81}}}\sigma =0.48\sigma $, compared to 0.58σ for one pass. 
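Both the single-pass variance-covariance matrix and the two-pass noise figure can be reproduced numerically:

```python
import numpy as np

# One pass of a 3-point moving average over 5 uncorrelated points (sigma = 1)
C = np.array([[1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1]]) / 3
B = C @ np.eye(5) @ C.T            # B = C A C^T with A = I

print(B * 9)                       # ~ [[3, 2, 1], [2, 3, 2], [1, 2, 3]]
rho = B[0, 1] / np.sqrt(B[0, 0] * B[1, 1])
print(rho)                         # ~ 2/3: neighbouring smoothed points correlate

# Two passes are equivalent to the (1, 2, 3, 2, 1)/9 filter; the variance of a
# central smoothed point then drops to 19/81, i.e. a standard deviation of 0.48 sigma
g = np.array([1, 2, 3, 2, 1]) / 9
print(np.sum(g ** 2))              # ~ 19/81
```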
The noise reduction is a little less than would be obtained with one pass of a 5-point moving average which, under the same conditions, would result in the smoothed points having the smaller standard deviation of 0.45σ. Correlation now extends over a span of 4 sequential points with correlation coefficients $\rho _{i,i+1}={16 \over 19}=0.84,\rho _{i,i+2}={10 \over 19}=0.53,\rho _{i,i+3}={4 \over 19}=0.21,\rho _{i,i+4}={1 \over 19}=0.05$ The advantage obtained by performing two passes with the narrower smoothing function is that it introduces less distortion into the calculated data. Comparison with other filters and alternatives Compared with other smoothing filters, e.g. convolution with a Gaussian or multi-pass moving-average filtering, Savitzky–Golay filters have an initially flatter response and sharper cutoff in the frequency domain, especially for high orders of the fit polynomial (see frequency characteristics). For data with limited signal bandwidth, this means that Savitzky–Golay filtering can provide better signal-to-noise ratio than many other filters; e.g., peak heights of spectra are better preserved than for other filters with similar noise suppression. 
Disadvantages of the Savitzky–Golay filters are comparatively poor suppression of some high frequencies (poor stopband suppression) and artifacts when using polynomial fits for the first and last points.[16] Alternative smoothing methods that share the advantages of Savitzky–Golay filters and mitigate at least some of their disadvantages are Savitzky–Golay filters with properly chosen fitting weights, Whittaker–Henderson smoothing (a method closely related to smoothing splines), and convolution with a windowed sinc function.[16]

See also

• Kernel smoother – Different terminology for many of the same processes, used in statistics
• Local regression — the LOESS and LOWESS methods
• Numerical differentiation – Application to differentiation of functions
• Smoothing spline
• Stencil (numerical analysis) – Application to the solution of differential equations

Appendix

Tables of selected convolution coefficients

Consider a set of data points $(x_{j},y_{j})_{1\leq j\leq n}$. The Savitzky–Golay tables refer to the case that the step $x_{j}-x_{j-1}$ is constant, h. Examples of the use of the so-called convolution coefficients, with a cubic polynomial and a window size, m, of 5 points, are as follows.

Smoothing: $Y_{j}={\frac {1}{35}}(-3\times y_{j-2}+12\times y_{j-1}+17\times y_{j}+12\times y_{j+1}-3\times y_{j+2})$ ;
1st derivative: $Y'_{j}={\frac {1}{12h}}(1\times y_{j-2}-8\times y_{j-1}+0\times y_{j}+8\times y_{j+1}-1\times y_{j+2})$ ;
2nd derivative: $Y''_{j}={\frac {1}{7h^{2}}}(2\times y_{j-2}-1\times y_{j-1}-2\times y_{j}-1\times y_{j+1}+2\times y_{j+2})$.
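Because these coefficients derive from a least-squares cubic fit, data lying exactly on a cubic pass through unchanged and the derivative formula is exact; a quick check with h = 1:

```python
# y_j = j^3 for j = 1..7; apply the m = 5 cubic coefficients at the centre, j = 4
y = [j ** 3 for j in range(1, 8)]

Y4 = (-3 * y[1] + 12 * y[2] + 17 * y[3] + 12 * y[4] - 3 * y[5]) / 35
dY4 = (1 * y[1] - 8 * y[2] + 0 * y[3] + 8 * y[4] - 1 * y[5]) / 12

assert Y4 == 4 ** 3        # smoothing returns the exact value, 64
assert dY4 == 3 * 4 ** 2   # first derivative of x^3 at x = 4, i.e. 48
```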
Selected values of the convolution coefficients for polynomials of degree 1, 2, 3, 4 and 5 are given in the following tables.[note 7] The values were calculated using the PASCAL code provided in Gorry.[15]

Coefficients for smoothing

  Degree          2 or 3 (quadratic/cubic)    4 or 5 (quartic/quintic)
  Window size       5     7     9               7     9
  −4                           −21                    15
  −3                     −2     14               5   −55
  −2               −3     3     39             −30    30
  −1               12     6     54              75   135
   0               17     7     59             131   179
   1               12     6     54              75   135
   2               −3     3     39             −30    30
   3                     −2     14               5   −55
   4                           −21                    15
  Normalisation    35    21    231             231   429

Coefficients for 1st derivative

  Degree          1 or 2 (linear/quadratic)   3 or 4 (cubic/quartic)
  Window size       3     5     7     9         5     7      9
  −4                                 −4                     86
  −3                           −3    −3              22   −142
  −2                     −2    −2    −2         1   −67   −193
  −1               −1    −1    −1    −1        −8   −58   −126
   0                0     0     0     0         0     0      0
   1                1     1     1     1         8    58    126
   2                      2     2     2        −1    67    193
   3                            3     3             −22    142
   4                                  4                    −86
  Normalisation     2    10    28    60        12   252   1188

Coefficients for 2nd derivative

  Degree          2 or 3 (quadratic/cubic)    4 or 5 (quartic/quintic)
  Window size       5     7     9               5     7      9
  −4                            28                         −126
  −3                      5      7                   −13    371
  −2                2     0     −8             −1     67    151
  −1               −1    −3    −17             16    −19   −211
   0               −2    −4    −20            −30    −70   −370
   1               −1    −3    −17             16    −19   −211
   2                2     0     −8             −1     67    151
   3                      5      7                   −13    371
   4                            28                         −126
  Normalisation     7    42    462             12    132   1716

Coefficients for 3rd derivative

  Degree          3 or 4 (cubic/quartic)      5 or 6 (quintic/sextic)
  Window size       5     7     9               7      9
  −4                           −14                    100
  −3                     −1      7               1   −457
  −2               −1     1     13              −8    256
  −1                2     1      9              13    459
   0                0     0      0               0      0
   1               −2    −1     −9             −13   −459
   2                1    −1    −13               8   −256
   3                      1     −7              −1    457
   4                            14                   −100
  Normalisation     2     6    198               8   1144

Coefficients for 4th derivative

  Degree          4 or 5 (quartic/quintic)
  Window size       7     9
  −4                     14
  −3                3   −21
  −2               −7   −11
  −1                1     9
   0                6    18
   1                1     9
   2               −7   −11
   3                3   −21
   4                     14
  Normalisation    11   143

Notes

1. With even values of m, z will run from 1 − m to m − 1 in steps of 2.
2. The simple moving average is a special case with k = 0, Y = a0. In this case all convolution coefficients are equal to 1/m.
3. Smoothing using the moving average is equivalent, with equally spaced points, to local fitting with a (sloping) straight line.
4. The expressions given here are different from those of Madden, which are given in terms of the variable m′ = (m − 1)/2.
5.
The expressions under the square root sign are the same as the expression for the convolution coefficient with z=0 6. The same result is obtained with one pass of the equivalent filter with coefficients (1/9, 2/9, 3/9, 2/9, 1/9) and an identity variance-covariance matrix 7. More extensive tables and the method to calculate additional coefficients were originally provided by Savitzky and Golay.[3] References 1. Whittaker, E.T; Robinson, G (1924). The Calculus Of Observations. Blackie & Son. pp. 291–6. OCLC 1187948.. "Graduation Formulae obtained by fitting a Polynomial." 2. Guest, P.G. (2012) [1961]. "Ch. 7: Estimation of Polynomial Coefficients". Numerical Methods of Curve Fitting. Cambridge University Press. pp. 147–. ISBN 978-1-107-64695-7. 3. Savitzky, A.; Golay, M.J.E. (1964). "Smoothing and Differentiation of Data by Simplified Least Squares Procedures". Analytical Chemistry. 36 (8): 1627–39. Bibcode:1964AnaCh..36.1627S. doi:10.1021/ac60214a047. 4. Savitzky, Abraham (1989). "A Historic Collaboration". Analytical Chemistry. 61 (15): 921A–3A. doi:10.1021/ac00190a744. 5. Steinier, Jean; Termonia, Yves; Deltour, Jules (1972). "Smoothing and differentiation of data by simplified least square procedure". Analytical Chemistry. 44 (11): 1906–9. doi:10.1021/ac60319a045. PMID 22324618. 6. Larive, Cynthia K.; Sweedler, Jonathan V. (2013). "Celebrating the 75th Anniversary of the ACS Division of Analytical Chemistry: A Special Collection of the Most Highly Cited Analytical Chemistry Papers Published between 1938 and 2012". Analytical Chemistry. 85 (9): 4201–2. doi:10.1021/ac401048d. PMID 23647149. 7. Riordon, James; Zubritsky, Elizabeth; Newman, Alan (2000). "Top 10 Articles". Analytical Chemistry. 72 (9): 24 A–329 A. doi:10.1021/ac002801q. 8. Talsky, Gerhard (1994-10-04). Derivative Spectrophotometry. Wiley. ISBN 978-3527282944. 9. Abbaspour, Abdolkarim; Khajehzadeha, Abdolreza (2012). 
"End point detection of precipitation titration by scanometry method without using indicator". Anal. Methods. 4 (4): 923–932. doi:10.1039/C2AY05492B. 10. Li, N; Li, XY; Zou, XZ; Lin, LR; Li, YQ (2011). "A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples". Analyst. 136 (13): 2802–10. Bibcode:2011Ana...136.2802L. doi:10.1039/c0an00751j. PMID 21594244. 11. Dixit, L.; Ram, S. (1985). "Quantitative Analysis by Derivative Electronic Spectroscopy". Applied Spectroscopy Reviews. 21 (4): 311–418. Bibcode:1985ApSRv..21..311D. doi:10.1080/05704928508060434. 12. Giese, Arthur T.; French, C. Stacey (1955). "The Analysis of Overlapping Spectral Absorption Bands by Derivative Spectrophotometry". Appl. Spectrosc. 9 (2): 78–96. Bibcode:1955ApSpe...9...78G. doi:10.1366/000370255774634089. S2CID 97784067. 13. Madden, Hannibal H. (1978). "Comments on the Savitzky–Golay convolution method for least-squares-fit smoothing and differentiation of digital data" (PDF). Anal. Chem. 50 (9): 1383–6. doi:10.1021/ac50031a048. 14. Gans 1992, pp. 153–7, "Repeated smoothing and differentiation" 15. A., Gorry (1990). "General least-squares smoothing and differentiation by the convolution (Savitzky–Golay) method". Analytical Chemistry. 62 (6): 570–3. doi:10.1021/ac00205a007. 16. Schmid, Michael; Rath, David; Diebold, Ulrike (2022). "Why and how Savitzky–Golay filters should be replaced". ACS Measurement Science Au. 2 (2): 185–196. doi:10.1021/acsmeasuresciau.1c00054. PMC 9026279. PMID 35479103. 17. Thornley, David J. Anisotropic Multidimensional Savitzky Golay kernels for Smoothing, Differentiation and Reconstruction (PDF) (Technical report). Imperial College Department of Computing. 2066/8. 18. Ratzlaff, Kenneth L.; Johnson, Jean T. (1989). "Computation of two-dimensional polynomial least-squares convolution smoothing integers". Anal. Chem. 61 (11): 1303–5. doi:10.1021/ac00186a026. 
19. Krumm, John. "Savitzky–Golay filters for 2D Images". Microsoft Research, Redmond. 20. Nikitas and Pappa-Louisi (2000). "Comments on the two-dimensional smoothing of data". Analytica Chimica Acta. 415 (1–2): 117–125. doi:10.1016/s0003-2670(00)00861-8. 21. Shekhar, Chandra (2015). On Simplified Application of Multidimensional Savitzky-Golay Filters and Differentiators. Progress in Applied Mathematics in Science and Engineering. AIP Conference Proceedings. Vol. 1705. p. 020014. Bibcode:2016AIPC.1705b0014S. doi:10.1063/1.4940262. 22. Chandra, Shekhar (2017-08-02). "Advanced Convolution Coefficient Calculator". Zenodo. doi:10.5281/zenodo.835283. 23. Chandra, Shekhar (2018-06-02). "Precise Convolution Coefficient Calculator". Zenodo. doi:10.5281/zenodo.1257898. 24. Shekhar, Chandra. "Convolution Coefficient Database for Multidimensional Least-Squares Filters". 25. Gans, 1992 & Appendix 7 harvnb error: no target: CITEREFGans1992Appendix_7 (help) 26. Ziegler, Horst (1981). "Properties of Digital Smoothing Polynomial (DISPO) Filters". Applied Spectroscopy. 35 (1): 88–92. Bibcode:1981ApSpe..35...88Z. doi:10.1366/0003702814731798. S2CID 97777604. 27. Luo, Jianwen; Ying, Kui; He, Ping; Bai, Jing (2005). "Properties of Savitzky–Golay digital differentiators" (PDF). Digital Signal Processing. 15 (2): 122–136. doi:10.1016/j.dsp.2004.09.008. 28. Gans, Peter; Gill, J. Bernard (1983). "Examination of the Convolution Method for Numerical Smoothing and Differentiation of Spectroscopic Data in Theory and in Practice". Applied Spectroscopy. 37 (6): 515–520. Bibcode:1983ApSpe..37..515G. doi:10.1366/0003702834634712. S2CID 97649068. 29. Gans 1992, pp. 153 30. Procter, Andrew; Sherwood, Peter M.A. (1980). "Smoothing of digital x-ray photoelectron spectra by an extended sliding least-squares approach". Anal. Chem. 52 (14): 2315–21. doi:10.1021/ac50064a018. 31. Gans 1992, pp. 207 32. Bromba, Manfred U.A; Ziegler, Horst (1981). 
"Application hints for Savitzky–Golay digital smoothing filters". Anal. Chem. 53 (11): 1583–6. doi:10.1021/ac00234a011. 33. Marchand, P.; Marmet, L. (1983). "Binomial smoothing filter: A way to avoid some pitfalls of least‐squares polynomial smoothing". Review of Scientific Instruments. 54 (8): 1034–1041. doi:10.1063/1.1137498. 34. Gans 1992, pp. 157 • Gans, Peter (1992). Data fitting in the chemical sciences: By the method of least squares. ISBN 9780471934127. External links Wikimedia Commons has media related to Savitzky–Golay filter. • Advanced Convolution Coefficient Calculator (ACCC) for multidimensional least-squares filters • Savitzky–Golay filter in Fundamentals of Statistics • A wider range of coefficients for a range of data set sizes, orders of fit, and offsets from the centre point
Sawtooth wave

The sawtooth wave (or saw wave) is a kind of non-sinusoidal waveform. It is so named based on its resemblance to the teeth of a plain-toothed saw with a zero rake angle. A single sawtooth, or an intermittently triggered sawtooth, is called a ramp waveform.

[Figure: a bandlimited sawtooth wave[1] pictured in the time domain (top) and frequency domain (bottom); the fundamental is at 220 Hz (A3).]

General definition: $x(t)=2\left(t-\left\lfloor t+{\tfrac {1}{2}}\right\rfloor \right),\quad t-{\tfrac {1}{2}}\notin \mathbb {Z} $
Fields of application: electronics, synthesizers
Domain: $\mathbb {R} \setminus \left\{n-{\tfrac {1}{2}}\right\},n\in \mathbb {Z} $
Codomain: $\left(-1,1\right)$
Parity: odd
Period: 1
Root: $\mathbb {Z} $
Fourier series: $x(t)=-{\frac {2}{\pi }}\sum _{k=1}^{\infty }{\frac {{\left(-1\right)}^{k}}{k}}\sin \left(2\pi kt\right)$

The convention is that a sawtooth wave ramps upward and then sharply drops. In a reverse (or inverse) sawtooth wave, the wave ramps downward and then sharply rises. It can also be considered the extreme case of an asymmetric triangle wave.[2] The equivalent piecewise linear functions $x(t)=t-\lfloor t\rfloor $ and $x(t)=t{\bmod {1}}$, based on the floor function of time t, are examples of a sawtooth wave with period 1. A more general form, in the range −1 to 1 and with period p, is $2\left({\frac {t}{p}}-\left\lfloor {\frac {1}{2}}+{\frac {t}{p}}\right\rfloor \right)$ This sawtooth function has the same phase as the sine function. While a square wave is constructed from only odd harmonics, a sawtooth wave's sound is harsh and clear and its spectrum contains both even and odd harmonics of the fundamental frequency.
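The naive (non-bandlimited) definition and the Fourier series above can be compared directly; a minimal sketch in Python:

```python
import math

def saw(t):
    # Ideal sawtooth with period 1 and range (-1, 1): x(t) = 2(t - floor(t + 1/2))
    return 2 * (t - math.floor(t + 0.5))

def saw_fourier(t, n):
    # Partial Fourier sum: -(2/pi) * sum_{k=1..n} (-1)^k sin(2 pi k t) / k
    return -(2 / math.pi) * sum(
        (-1) ** k * math.sin(2 * math.pi * k * t) / k for k in range(1, n + 1))

# Away from the discontinuities the partial sums converge to the ideal wave,
# though only at a rate of O(1/n)
print(abs(saw_fourier(0.25, 10000) - saw(0.25)))
```

Near the discontinuities the partial sums overshoot (the Gibbs phenomenon), which is one reason bandlimited synthesis is preferred in audio work.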
Because it contains all the integer harmonics, it is one of the best waveforms to use for subtractive synthesis of musical sounds, particularly bowed string instruments like violins and cellos, since the slip-stick behavior of the bow drives the strings with a sawtooth-like motion.[3] A sawtooth can be constructed using additive synthesis. For period p and amplitude a, the following infinite Fourier series converge to a sawtooth and a reverse (inverse) sawtooth wave: $f={\frac {1}{p}}$ $x_{\text{sawtooth}}(t)=a\left({\frac {1}{2}}-{\frac {1}{\pi }}\sum _{k=1}^{\infty }{(-1)}^{k}{\frac {\sin(2\pi kft)}{k}}\right)$ $x_{\text{reverse sawtooth}}(t)={\frac {2a}{\pi }}\sum _{k=1}^{\infty }{(-1)}^{k}{\frac {\sin(2\pi kft)}{k}}$ In digital synthesis, these series are only summed over k such that the highest harmonic, Nmax, is less than the Nyquist frequency (half the sampling frequency). This summation can generally be more efficiently calculated with a fast Fourier transform. If the waveform is digitally created directly in the time domain using a non-bandlimited form, such as y = x − floor(x), infinite harmonics are sampled and the resulting tone contains aliasing distortion. An audio demonstration of a sawtooth played at 440 Hz (A4) and 880 Hz (A5) and 1,760 Hz (A6) is available below. Both bandlimited (non-aliased) and aliased tones are presented. Applications • Sawtooth waves are known for their use in music. The sawtooth and square waves are among the most common waveforms used to create sounds with subtractive analog and virtual analog music synthesizers. • Sawtooth waves are used in switched-mode power supplies. In the regulator chip the feedback signal from the output is continuously compared to a high frequency sawtooth to generate a new duty cycle PWM signal on the output of the comparator. 
• In the field of computer science, particularly in automation and robotics, the function $\operatorname {sawtooth} (\theta )=2\arctan(\tan(\theta /2))$ allows one to calculate sums and differences of angles while avoiding discontinuities at 360° and 0°.
• The sawtooth wave is the form of the vertical and horizontal deflection signals used to generate a raster on CRT-based television or monitor screens. Oscilloscopes also use a sawtooth wave for their horizontal deflection, though they typically use electrostatic deflection.
• On the wave's "ramp", the magnetic field produced by the deflection yoke drags the electron beam across the face of the CRT, creating a scan line.
• On the wave's "cliff", the magnetic field suddenly collapses, causing the electron beam to return to its resting position as quickly as possible.
• The current applied to the deflection yoke is adjusted by various means (transformers, capacitors, center-tapped windings) so that the half-way voltage on the sawtooth's cliff is at the zero mark, meaning that a negative current will cause deflection in one direction, and a positive current deflection in the other; thus, a center-mounted deflection yoke can use the whole screen area to depict a trace. (The frequency is 15.734 kHz on NTSC, 15.625 kHz for PAL and SECAM.)
• The vertical deflection system operates the same way as the horizontal, though at a much lower frequency (59.94 Hz on NTSC, 50 Hz for PAL and SECAM).
• The ramp portion of the wave must appear as a straight line. Otherwise, it indicates that the current isn't increasing linearly, and therefore that the magnetic field produced by the deflection yoke is not linear. As a result, the electron beam will accelerate during the non-linear portions. This would result in a television image "squished" in the direction of the non-linearity. Extreme cases will show marked brightness increases, since the electron beam spends more time on that side of the picture.
• The first television receivers had controls allowing users to adjust the picture's vertical or horizontal linearity. Such controls were not present on later sets as the stability of electronic components had improved. See also • List of periodic functions • Sine wave • Square wave • Triangle wave • Pulse wave • Sound • Wave • Zigzag References 1. Kraft, Sebastian; Zölzer, Udo (5 September 2017). "LP-BLIT: Bandlimited Impulse Train Synthesis of Lowpass-filtered Waveforms". Proceedings of the 20th International Conference on Digital Audio Effects (DAFx-17). Edinburgh. pp. 255–259. 2. "Fourier Series—Triangle Wave". Wolfram MathWorld. 2012-07-02. Retrieved 2012-07-11. 3. Benson, Dave. "Music: A Mathematical Offering" (PDF). Homepages.abdn.ac.uk. p. 42. Retrieved 26 November 2021. External links • Montgomery, Hugh L.; Vaughan, Robert C. (2007). Multiplicative Number Theory I. Classical Theory. Cambridge Tracts in Advanced Mathematics. Vol. 97. pp. 536–537. ISBN 978-0-521-84903-6.
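The bandlimited additive synthesis described above can be sketched in Python with NumPy; the sample rate and duration are assumed parameters for this illustration:

```python
import numpy as np

fs = 48_000   # sample rate in Hz (assumed for this sketch)
f = 440.0     # fundamental frequency in Hz
a = 1.0       # amplitude
t = np.arange(fs) / fs  # one second of samples

# Sum harmonics of the sawtooth Fourier series, stopping below Nyquist.
x = np.zeros_like(t)
k = 1
while k * f < fs / 2:
    x += (-1) ** k * np.sin(2 * np.pi * k * f * t) / k
    k += 1
x = a * (0.5 - x / np.pi)

# The truncated series takes the mid-amplitude value a/2 at t = 0.
assert np.isclose(x[0], a / 2)
# Every retained harmonic lies below the Nyquist frequency.
assert (k - 1) * f < fs / 2 <= k * f
```

Because the series is truncated at the Nyquist limit, the resulting tone is free of aliasing distortion, at the cost of a small Gibbs ripple near each discontinuity.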
Wikipedia
Sayre equation In crystallography, the Sayre equation, named after David Sayre, who introduced it in 1952, is a mathematical relationship that allows one to calculate probable values for the phases of some diffracted beams. It is used when employing direct methods to solve a structure. Its formulation is the following: $F_{hkl}=\sum _{h'k'l'}F_{h'k'l'}F_{h-h',k-k',l-l'}$ which states how the structure factor for a beam can be calculated as the sum of the products of pairs of structure factors whose indices sum to the desired values of $h,k,l$. Since weak diffracted beams contribute little to the sum, this method can be a powerful way of finding the phases of related beams when some of the initial phases are already known by other methods. In particular, for three such related beams in a centrosymmetric structure, the phases can only be 0 or $\pi $, and the Sayre equation reduces to the triplet relationship: $S_{h}\approx S_{h'}S_{h-h'}$ where $S$ indicates the sign of the structure factor (positive if the phase is 0 and negative if it is $\pi $) and the $\approx $ sign indicates that there is a certain degree of probability that the relationship is true, which becomes higher the stronger the beams are. References • Sayre, D. (1952). "The squaring method: A new method for phase determination". Acta Crystallographica. 5: 60–65. doi:10.1107/S0365110X52000137. • Massa, Werner (2004). Crystal Structure Determination. Springer. p. 102. ISBN 3540206442.
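The equation rests on the convolution theorem behind Sayre's squaring method: for a density built of identical point-like atoms, squaring the density leaves its peaks in place, and the Fourier coefficients of the squared density are the self-convolution of the structure factors. A toy numerical check of that self-convolution identity (Python with NumPy; the 1-D sampled "density" is an arbitrary illustration, not crystallographic data):

```python
import numpy as np

# Toy 1-D "electron density" sampled on N grid points (arbitrary values).
rng = np.random.default_rng(0)
N = 8
rho = rng.random(N)

F = np.fft.fft(rho)         # discrete "structure factors" of rho
lhs = np.fft.fft(rho ** 2)  # structure factors of the squared density

# Self-convolution of F over all index pairs summing to k (mod N),
# mirroring F_k = sum_{k'} F_{k'} F_{k-k'}, up to the normalization 1/N.
conv = np.array([sum(F[j] * F[(k - j) % N] for j in range(N))
                 for k in range(N)]) / N

assert np.allclose(lhs, conv)
```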
Sazonov's theorem In mathematics, Sazonov's theorem, named after Vyacheslav Vasilievich Sazonov (Вячесла́в Васи́льевич Сазо́нов), is a theorem in functional analysis. It states that a bounded linear operator between two Hilbert spaces is γ-radonifying if it is a Hilbert–Schmidt operator. The result is also important in the study of stochastic processes and the Malliavin calculus, since results concerning probability measures on infinite-dimensional spaces are of central importance in these fields. Sazonov's theorem also has a converse: if the map is not Hilbert–Schmidt, then it is not γ-radonifying. Statement of the theorem Let G and H be two Hilbert spaces and let T : G → H be a bounded operator from G to H. Recall that T is said to be γ-radonifying if the push forward of the canonical Gaussian cylinder set measure on G is a bona fide measure on H. Recall also that T is said to be a Hilbert–Schmidt operator if there is an orthonormal basis { ei : i ∈ I } of G such that $\sum _{i\in I}\|T(e_{i})\|_{H}^{2}<+\infty .$ Then Sazonov's theorem is that T is γ-radonifying if it is a Hilbert–Schmidt operator. The proof uses Prokhorov's theorem. Remarks The canonical Gaussian cylinder set measure on an infinite-dimensional Hilbert space can never be a bona fide measure; equivalently, the identity function on such a space cannot be γ-radonifying. See also • Cameron–Martin theorem – Theorem of measure theory • Girsanov theorem • Radonifying function References • Schwartz, Laurent (1973), Radon measures on arbitrary topological spaces and cylindrical measures., Tata Institute of Fundamental Research Studies in Mathematics, London: Oxford University Press, pp. 
xii+393, MR 0426084
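In finite dimensions every operator is Hilbert–Schmidt, and the defining sum Σᵢ ‖T eᵢ‖² is just the squared Frobenius norm, independent of the chosen orthonormal basis. A small numerical sketch (the matrix and rotation angle are arbitrary examples):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # arbitrary operator on a 2-D Hilbert space

# Hilbert–Schmidt sum over the standard orthonormal basis {e_i}.
hs_sq = sum(np.linalg.norm(T @ e) ** 2 for e in np.eye(2))
assert np.isclose(hs_sq, np.linalg.norm(T, 'fro') ** 2)

# The sum is basis-independent: repeat it in a rotated orthonormal basis.
c, s = np.cos(0.3), np.sin(0.3)
Q = np.array([[c, -s],
              [s, c]])
assert np.isclose(sum(np.linalg.norm(T @ q) ** 2 for q in Q.T), hs_sq)
```

The infinite-dimensional condition in the theorem is exactly the requirement that this sum stays finite as the basis index runs over an infinite set.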
Vector projection For more general concepts, see Projection (linear algebra) and Projection (mathematics). The vector projection of a vector a on (or onto) a nonzero vector b, sometimes denoted by $\operatorname {proj} _{\mathbf {b} }\mathbf {a} $ (also known as the vector component or vector resolution of a in the direction of b), is the orthogonal projection of a onto a straight line parallel to b. It is a vector parallel to b, defined as $\mathbf {a} _{1}=a_{1}\mathbf {\hat {b}} $ where $a_{1}$ is a scalar, called the scalar projection of a onto b, and b̂ is the unit vector in the direction of b. In turn, the scalar projection is defined as[1] $a_{1}=\left\|\mathbf {a} \right\|\cos \theta =\mathbf {a} \cdot \mathbf {\hat {b}} $ where the operator ⋅ denotes a dot product, ‖a‖ is the length of a, and θ is the angle between a and b. This finally gives: $\mathbf {a} _{1}=\left(\mathbf {a} \cdot \mathbf {\hat {b}} \right)\mathbf {\hat {b}} ={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|}}{\frac {\mathbf {b} }{\left\|\mathbf {b} \right\|}}={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|^{2}}}{\mathbf {b} }={\frac {\mathbf {a} \cdot \mathbf {b} }{\mathbf {b} \cdot \mathbf {b} }}{\mathbf {b} }~.$ The scalar projection is equal to the length of the vector projection, with a minus sign if the direction of the projection is opposite to the direction of b. The vector component or vector resolute of a perpendicular to b, sometimes also called the vector rejection of a from b (denoted $\operatorname {oproj} _{\mathbf {b} }\mathbf {a} $),[2] is the orthogonal projection of a onto the plane (or, in general, hyperplane) orthogonal to b. Both the projection a1 and rejection a2 of a vector a are vectors, and their sum is equal to a, which implies that the rejection is given by: $\mathbf {a} _{2}=\mathbf {a} -\mathbf {a} _{1}.$ Notation Typically, a vector projection is denoted in a bold font (e.g.
a1), and the corresponding scalar projection with normal font (e.g. a1). In some cases, especially in handwriting, the vector projection is also denoted using a diacritic above or below the letter (e.g., ${\vec {a}}_{1}$ or a1). The vector projection of a on b and the corresponding rejection are sometimes denoted by a∥b and a⊥b, respectively. Definitions based on angle θ Scalar projection Main article: Scalar projection The scalar projection of a on b is a scalar equal to $a_{1}=\left\|\mathbf {a} \right\|\cos \theta ,$ where θ is the angle between a and b. A scalar projection can be used as a scale factor to compute the corresponding vector projection. Vector projection The vector projection of a on b is a vector whose magnitude is the scalar projection of a on b with the same direction as b. Namely, it is defined as $\mathbf {a} _{1}=a_{1}\mathbf {\hat {b}} =(\left\|\mathbf {a} \right\|\cos \theta )\mathbf {\hat {b}} $ where $a_{1}$ is the corresponding scalar projection, as defined above, and $\mathbf {\hat {b}} $ is the unit vector with the same direction as b: $\mathbf {\hat {b}} ={\frac {\mathbf {b} }{\left\|\mathbf {b} \right\|}}$ Vector rejection By definition, the vector rejection of a on b is: $\mathbf {a} _{2}=\mathbf {a} -\mathbf {a} _{1}$ Hence, $\mathbf {a} _{2}=\mathbf {a} -\left(\left\|\mathbf {a} \right\|\cos \theta \right)\mathbf {\hat {b}} $ Definitions in terms of a and b When θ is not known, the cosine of θ can be computed in terms of a and b, by the following property of the dot product a ⋅ b ${\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|}}=\cos \theta $ Scalar projection By the above-mentioned property of the dot product, the definition of the scalar projection becomes:[1] $a_{1}=\left\|\mathbf {a} \right\|\cos \theta ={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|}}.$ In two dimensions, this becomes $a_{1}={\frac {\mathbf {a} _{x}\mathbf {b} _{x}+\mathbf {a} _{y}\mathbf 
{b} _{y}}{\left\|\mathbf {b} \right\|}}.$ Vector projection Similarly, the definition of the vector projection of a onto b becomes:[1] $\mathbf {a} _{1}=a_{1}\mathbf {\hat {b}} ={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|}}{\frac {\mathbf {b} }{\left\|\mathbf {b} \right\|}},$ which is equivalent to either $\mathbf {a} _{1}=\left(\mathbf {a} \cdot \mathbf {\hat {b}} \right)\mathbf {\hat {b}} ,$ or[3] $\mathbf {a} _{1}={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|^{2}}}{\mathbf {b} }={\frac {\mathbf {a} \cdot \mathbf {b} }{\mathbf {b} \cdot \mathbf {b} }}{\mathbf {b} }~.$ Scalar rejection In two dimensions, the scalar rejection is equivalent to the projection of a onto $\mathbf {b} ^{\perp }={\begin{pmatrix}-\mathbf {b} _{y}&\mathbf {b} _{x}\end{pmatrix}}$, which is $\mathbf {b} ={\begin{pmatrix}\mathbf {b} _{x}&\mathbf {b} _{y}\end{pmatrix}}$ rotated 90° to the left. Hence, $a_{2}=\left\|\mathbf {a} \right\|\sin \theta ={\frac {\mathbf {a} \cdot \mathbf {b} ^{\perp }}{\left\|\mathbf {b} \right\|}}={\frac {\mathbf {a} _{y}\mathbf {b} _{x}-\mathbf {a} _{x}\mathbf {b} _{y}}{\left\|\mathbf {b} \right\|}}.$ Such a dot product is called the "perp dot product."[4] Vector rejection By definition, $\mathbf {a} _{2}=\mathbf {a} -\mathbf {a} _{1}$ Hence, $\mathbf {a} _{2}=\mathbf {a} -{\frac {\mathbf {a} \cdot \mathbf {b} }{\mathbf {b} \cdot \mathbf {b} }}{\mathbf {b} }.$ Properties Scalar projection Main article: Scalar projection The scalar projection of a on b is a scalar, which has a negative sign if 90° < θ ≤ 180°. It coincides with the length ‖a1‖ of the vector projection if the angle is smaller than 90°. More exactly: • a1 = ‖a1‖ if 0° ≤ θ ≤ 90°, • a1 = −‖a1‖ if 90° < θ ≤ 180°. Vector projection The vector projection of a on b is a vector a1 which is either null or parallel to b.
More exactly: • a1 = 0 if θ = 90°, • a1 and b have the same direction if 0° ≤ θ < 90°, • a1 and b have opposite directions if 90° < θ ≤ 180°. Vector rejection The vector rejection of a on b is a vector a2 which is either null or orthogonal to b. More exactly: • a2 = 0 if θ = 0° or θ = 180°, • a2 is orthogonal to b if 0° < θ < 180°. Matrix representation The orthogonal projection can be represented by a projection matrix. To project a vector onto the unit vector a = (ax, ay, az), the vector must be multiplied by this projection matrix: $P_{\mathbf {a} }=\mathbf {a} \mathbf {a} ^{\textsf {T}}={\begin{bmatrix}a_{x}\\a_{y}\\a_{z}\end{bmatrix}}{\begin{bmatrix}a_{x}&a_{y}&a_{z}\end{bmatrix}}={\begin{bmatrix}a_{x}^{2}&a_{x}a_{y}&a_{x}a_{z}\\a_{x}a_{y}&a_{y}^{2}&a_{y}a_{z}\\a_{x}a_{z}&a_{y}a_{z}&a_{z}^{2}\\\end{bmatrix}}$ Uses The vector projection is an important operation in the Gram–Schmidt orthonormalization of vector space bases. It is also used in the separating axis theorem to detect whether two convex shapes intersect. Generalizations Since the notions of vector length and angle between vectors can be generalized to any n-dimensional inner product space, this is also true for the notions of orthogonal projection of a vector, projection of a vector onto another, and rejection of a vector from another. In some cases, the inner product coincides with the dot product. Whenever they do not coincide, the inner product is used instead of the dot product in the formal definitions of projection and rejection. For a three-dimensional inner product space, the notions of projection of a vector onto another and rejection of a vector from another can be generalized to the notions of projection of a vector onto a plane, and rejection of a vector from a plane.[5] The projection of a vector on a plane is its orthogonal projection on that plane. The rejection of a vector from a plane is its orthogonal projection on a straight line which is orthogonal to that plane.
Both are vectors. The first is parallel to the plane, the second is orthogonal to it. For a given vector and plane, the sum of projection and rejection is equal to the original vector. Similarly, for inner product spaces with more than three dimensions, the notions of projection onto a vector and rejection from a vector can be generalized to the notions of projection onto a hyperplane, and rejection from a hyperplane. In geometric algebra, they can be further generalized to the notions of projection and rejection of a general multivector onto/from any invertible k-blade. See also • Scalar projection • Vector notation References 1. "Scalar and Vector Projections". www.ck12.org. Retrieved 2020-09-07. 2. Perwass, G. (2009). Geometric Algebra With Applications in Engineering. p. 83. ISBN 9783540890676. 3. "Dot Products and Projections". 4. Hill, F. S. Jr. (1994). Graphics Gems IV. San Diego: Academic Press. pp. 138–148. 5. Baker, M. J. (2012). "Projection of a Vector onto a Plane". www.euclideanspace.com.
External links • Projection of a vector onto a plane
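A minimal numerical check of the projection, rejection, and projection-matrix formulas above (Python with NumPy; the vectors are arbitrary examples):

```python
import numpy as np

a = np.array([3.0, 4.0, 5.0])
b = np.array([2.0, 0.0, 0.0])  # any nonzero vector

b_hat = b / np.linalg.norm(b)
proj = (a @ b) / (b @ b) * b   # vector projection of a onto b
rej = a - proj                 # vector rejection of a from b

# The two formulations of the projection agree.
assert np.allclose(proj, (a @ b_hat) * b_hat)
# Projection and rejection recompose a; the rejection is orthogonal to b.
assert np.allclose(proj + rej, a)
assert np.isclose(rej @ b, 0.0)

# Matrix representation: P = b_hat b_hat^T projects onto the line along b.
P = np.outer(b_hat, b_hat)
assert np.allclose(P @ a, proj)
```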
Scalar field theory In theoretical physics, scalar field theory can refer to a relativistically invariant classical or quantum theory of scalar fields. A scalar field is invariant under any Lorentz transformation.[1] The only fundamental scalar quantum field that has been observed in nature is the Higgs field. However, scalar quantum fields feature in the effective field theory descriptions of many physical phenomena. An example is the pion, which is actually a pseudoscalar.[2] Since they do not involve polarization complications, scalar fields are often the simplest setting in which to appreciate second quantization. For this reason, scalar field theories are often used to introduce novel concepts and techniques.[3] The signature of the metric employed below is (+, −, −, −). Classical scalar field theory A general reference for this section is Ramond, Pierre (2001-12-21). Field Theory: A Modern Primer (Second Edition). USA: Westview Press. ISBN 0-201-30450-3, Ch. 1. Linear (free) theory The most basic scalar field theory is the linear theory. Through the Fourier decomposition of the fields, it represents the normal modes of an infinity of coupled oscillators, where the continuum limit of the oscillator index i is now denoted by x. The action for the free relativistic scalar field theory is then ${\begin{aligned}{\mathcal {S}}&=\int \mathrm {d} ^{D-1}x\mathrm {d} t{\mathcal {L}}\\&=\int \mathrm {d} ^{D-1}x\mathrm {d} t\left[{\frac {1}{2}}\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right]\\[6pt]&=\int \mathrm {d} ^{D-1}x\mathrm {d} t\left[{\frac {1}{2}}(\partial _{t}\phi )^{2}-{\frac {1}{2}}\delta ^{ij}\partial _{i}\phi \partial _{j}\phi -{\frac {1}{2}}m^{2}\phi ^{2}\right],\end{aligned}}$ where ${\mathcal {L}}$ is known as the Lagrangian density; d^{D−1}x is the spatial volume element (for D = 4 it is dx ⋅ dy ⋅ dz ≡ dx1 ⋅ dx2 ⋅ dx3, over the three spatial coordinates); δij is the Kronecker delta; and ∂ρ = ∂/∂xρ for the ρ-th coordinate xρ.
This is an example of a quadratic action, since each of the terms is quadratic in the field, φ. The term proportional to m2 is sometimes known as a mass term, due to its subsequent interpretation, in the quantized version of this theory, in terms of particle mass. The equation of motion for this theory is obtained by extremizing the action above. It takes the following form, linear in φ, $\eta ^{\mu \nu }\partial _{\mu }\partial _{\nu }\phi +m^{2}\phi =\partial _{t}^{2}\phi -\nabla ^{2}\phi +m^{2}\phi =0~,$ where ∇2 is the Laplace operator. This is the Klein–Gordon equation, with the interpretation as a classical field equation, rather than as a quantum-mechanical wave equation. Nonlinear (interacting) theory The most common generalization of the linear theory above is to add a scalar potential V(φ) to the Lagrangian, where typically, in addition to a mass term, V is a polynomial in φ. Such a theory is sometimes said to be interacting, because the Euler–Lagrange equation is now nonlinear, implying a self-interaction. The action for the most general such theory is ${\begin{aligned}{\mathcal {S}}&=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t{\mathcal {L}}\\[3pt]&=\int \mathrm {d} ^{D-1}x\mathrm {d} t\left[{\frac {1}{2}}\eta ^{\mu \nu }\partial _{\mu }\phi \partial _{\nu }\phi -V(\phi )\right]\\[3pt]&=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t\left[{\frac {1}{2}}(\partial _{t}\phi )^{2}-{\frac {1}{2}}\delta ^{ij}\partial _{i}\phi \partial _{j}\phi -{\frac {1}{2}}m^{2}\phi ^{2}-\sum _{n=3}^{\infty }{\frac {1}{n!}}g_{n}\phi ^{n}\right]\end{aligned}}$ The n! factors in the expansion are introduced because they are useful in the Feynman diagram expansion of the quantum theory, as described below.
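The Euler–Lagrange equation for such an action can also be derived mechanically; a sympy sketch restricted to 1+1 dimensions and the quartic interaction (both simplifying assumptions for brevity):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x, m, g = sp.symbols('t x m g')
phi = sp.Function('phi')(t, x)

# Lagrangian density in 1+1 dimensions with only the n = 4 potential term.
L = (sp.Rational(1, 2) * phi.diff(t) ** 2
     - sp.Rational(1, 2) * phi.diff(x) ** 2
     - sp.Rational(1, 2) * m ** 2 * phi ** 2
     - g / sp.factorial(4) * phi ** 4)

(eq,) = euler_equations(L, [phi], [t, x])
expr = sp.expand(eq.lhs - eq.rhs)

# The result is the nonlinear Klein–Gordon equation
#   phi_tt - phi_xx + m^2 phi + (g/3!) phi^3 = 0  (up to an overall sign).
target = sp.expand(phi.diff(t, 2) - phi.diff(x, 2)
                   + m ** 2 * phi + g / 6 * phi ** 3)
assert sp.simplify(expr - target) == 0 or sp.simplify(expr + target) == 0
```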
The corresponding Euler–Lagrange equation of motion is now $\eta ^{\mu \nu }\partial _{\mu }\partial _{\nu }\phi +V'(\phi )=\partial _{t}^{2}\phi -\nabla ^{2}\phi +V'(\phi )=0.$ Dimensional analysis and scaling Physical quantities in these scalar field theories may have dimensions of length, time or mass, or some combination of the three. However, in a relativistic theory, any quantity t, with dimensions of time, can be readily converted into a length, l = ct, by using the velocity of light, c. Similarly, any length l is equivalent to an inverse mass, using Planck's constant, ħ: from ħ = lmc, the corresponding mass is m = ħ/(lc). In natural units, one thinks of a time as a length, or either time or length as an inverse mass. In short, one can think of the dimensions of any physical quantity as defined in terms of just one independent dimension, rather than in terms of all three. This is most often termed the mass dimension of the quantity. Knowing the dimensions of each quantity allows one to uniquely restore conventional dimensions from a natural units expression in terms of this mass dimension, by simply reinserting the requisite powers of ħ and c required for dimensional consistency. One conceivable objection is that this theory is classical, and therefore it is not obvious how Planck's constant should be a part of the theory at all. If desired, one could indeed recast the theory without mass dimensions at all; however, this would be at the expense of slightly obscuring the connection with the quantum scalar field. Given that one has dimensions of mass, Planck's constant is thought of here as an essentially arbitrary fixed reference quantity of action (not necessarily connected to quantization), hence with dimensions appropriate to convert between mass and inverse length.
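As a concrete instance of trading a length for an inverse mass, the combination ħc ≈ 197.327 MeV·fm converts femtometres into MeV; the electron's reduced Compton wavelength is used below as the worked example:

```python
# hbar * c in MeV·fm (CODATA value, rounded)
HBAR_C_MEV_FM = 197.3269804

def length_to_mass_mev(length_fm: float) -> float:
    """Mass (in MeV) corresponding to the inverse of a length (in fm)."""
    return HBAR_C_MEV_FM / length_fm

# The electron's reduced Compton wavelength, ~386.16 fm, maps back to the
# electron mass of ~0.511 MeV.
assert abs(length_to_mass_mev(386.159) - 0.511) < 1e-3
```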
Scaling dimension The classical scaling dimension, or mass dimension, Δ, of φ describes the transformation of the field under a rescaling of coordinates: $x\rightarrow \lambda x$ $\phi \rightarrow \lambda ^{-\Delta }\phi ~.$ The units of action are the same as the units of ħ, and so the action itself has zero mass dimension. This fixes the scaling dimension of the field φ to be $\Delta ={\frac {D-2}{2}}.$ Scale invariance There is a specific sense in which some scalar field theories are scale-invariant. While the actions above are all constructed to have zero mass dimension, not all actions are invariant under the scaling transformation $x\rightarrow \lambda x$ $\phi \rightarrow \lambda ^{-\Delta }\phi ~.$ The reason that not all actions are invariant is that one usually thinks of the parameters m and gn as fixed quantities, which are not rescaled under the transformation above. The condition for a scalar field theory to be scale invariant is then quite obvious: all of the parameters appearing in the action should be dimensionless quantities. In other words, a scale invariant theory is one without any fixed length scale (or equivalently, mass scale) in the theory. For a scalar field theory with D spacetime dimensions, the only dimensionless parameter gn satisfies n = 2D⁄(D − 2) . For example, in D = 4, only g4 is classically dimensionless, and so the only classically scale-invariant scalar field theory in D = 4 is the massless φ4 theory. Classical scale invariance, however, normally does not imply quantum scale invariance, because of the renormalization group involved – see the discussion of the beta function below. Conformal invariance A transformation $x\rightarrow {\tilde {x}}(x)$ is said to be conformal if the transformation satisfies ${\frac {\partial {\tilde {x^{\mu }}}}{\partial x^{\rho }}}{\frac {\partial {\tilde {x^{\nu }}}}{\partial x^{\sigma }}}\eta _{\mu \nu }=\lambda ^{2}(x)\eta _{\rho \sigma }$ for some function λ(x). 
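The condition n = 2D⁄(D − 2) for a classically dimensionless coupling gn, stated above, is easy to tabulate for a few spacetime dimensions:

```python
from fractions import Fraction

def dimensionless_power(D: int) -> Fraction:
    """Power n for which the coupling g_n is classically dimensionless
    in D spacetime dimensions, from n = 2D / (D - 2)."""
    return Fraction(2 * D, D - 2)

assert dimensionless_power(4) == 4  # massless phi^4 theory in D = 4
assert dimensionless_power(6) == 3  # phi^3 theory in D = 6
assert dimensionless_power(3) == 6  # phi^6 theory in D = 3
```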
The conformal group contains as subgroups the isometries of the metric $\eta _{\mu \nu }$ (the Poincaré group) and also the scaling transformations (or dilatations) considered above. In fact, the scale-invariant theories in the previous section are also conformally-invariant. φ4 theory Massive φ4 theory illustrates a number of interesting phenomena in scalar field theory. The Lagrangian density is ${\mathcal {L}}={\frac {1}{2}}(\partial _{t}\phi )^{2}-{\frac {1}{2}}\delta ^{ij}\partial _{i}\phi \partial _{j}\phi -{\frac {1}{2}}m^{2}\phi ^{2}-{\frac {g}{4!}}\phi ^{4}.$ Spontaneous symmetry breaking This Lagrangian has a ℤ₂ symmetry under the transformation φ→ −φ. This is an example of an internal symmetry, in contrast to a space-time symmetry. If m2 is positive, the potential $V(\phi )={\frac {1}{2}}m^{2}\phi ^{2}+{\frac {g}{4!}}\phi ^{4}$ has a single minimum, at the origin. The solution φ=0 is clearly invariant under the ℤ₂ symmetry. Conversely, if m2 is negative, then one can readily see that the potential $\,V(\phi )={\frac {1}{2}}m^{2}\phi ^{2}+{\frac {g}{4!}}\phi ^{4}\!$ has two minima. This is known as a double well potential, and the lowest energy states (known as the vacua, in quantum field theoretical language) in such a theory are not invariant under the ℤ₂ symmetry of the action (in fact it maps each of the two vacua into the other). In this case, the ℤ₂ symmetry is said to be spontaneously broken. Kink solutions The φ4 theory with a negative m2 also has a kink solution, which is a canonical example of a soliton. Such a solution is of the form $\phi ({\vec {x}},t)=\pm {\frac {m}{2{\sqrt {\frac {g}{4!}}}}}\tanh \left[{\frac {m(x-x_{0})}{\sqrt {2}}}\right]$ where x is one of the spatial variables (φ is taken to be independent of t, and the remaining spatial variables). The solution interpolates between the two different vacua of the double well potential. 
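A symbolic check (sympy; written for one spatial dimension, with m² = −μ² and the kink amplitude m⁄(2√(g/4!)) = μ√(6/g) as given above) that this profile solves the static equation of motion φ″ = V′(φ):

```python
import sympy as sp

x = sp.symbols('x', real=True)
mu, g = sp.symbols('mu g', positive=True)  # mu^2 = -m^2 > 0

v = mu * sp.sqrt(6 / g)                 # kink amplitude mu * sqrt(6/g)
phi = v * sp.tanh(mu * x / sp.sqrt(2))  # kink profile (x0 = 0 assumed)

# Static equation of motion: phi'' = m^2 phi + (g/3!) phi^3, m^2 = -mu^2.
residual = phi.diff(x, 2) - (-mu ** 2 * phi + g / 6 * phi ** 3)
assert sp.simplify(sp.expand(residual)) == 0
```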
It is not possible to deform the kink into a constant solution without passing through a solution of infinite energy, and for this reason the kink is said to be stable. For D > 2 (i.e., theories with more than one spatial dimension), this solution is called a domain wall. Another well-known example of a scalar field theory with kink solutions is the sine-Gordon theory. Complex scalar field theory In a complex scalar field theory, the scalar field takes values in the complex numbers, rather than the real numbers. The complex scalar field represents spin-0 particles and antiparticles with charge. The action considered normally takes the form ${\mathcal {S}}=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t{\mathcal {L}}=\int \mathrm {d} ^{D-1}x\,\mathrm {d} t\left[\eta ^{\mu \nu }\partial _{\mu }\phi ^{*}\partial _{\nu }\phi -V(|\phi |^{2})\right]$ This has a U(1), equivalently O(2), symmetry, whose action on the space of fields rotates $\phi \rightarrow e^{i\alpha }\phi $, for some real phase angle α. As for the real scalar field, spontaneous symmetry breaking is found if m2 is negative. This gives rise to Goldstone's Mexican hat potential, which is the double-well potential of a real scalar field revolved through 2π radians about the V(φ) axis. The symmetry breaking takes place in one higher dimension, i.e., the choice of vacuum breaks a continuous U(1) symmetry instead of a discrete one. The two components of the scalar field are reconfigured as a massive mode and a massless Goldstone boson. O(N) theory One can express the complex scalar field theory in terms of two real fields, φ1 = Re φ and φ2 = Im φ, which transform in the vector representation of the U(1) = O(2) internal symmetry. Although such fields transform as a vector under the internal symmetry, they are still Lorentz scalars. This can be generalised to a theory of N scalar fields transforming in the vector representation of the O(N) symmetry.
The Lagrangian for an O(N)-invariant scalar field theory is typically of the form ${\mathcal {L}}={\frac {1}{2}}\eta ^{\mu \nu }\partial _{\mu }\phi \cdot \partial _{\nu }\phi -V(\phi \cdot \phi )$ using an appropriate O(N)-invariant inner product. The theory can also be expressed for complex vector fields, i.e. for $\phi \in \mathbb {C} ^{n}$, in which case the symmetry group is the Lie group SU(N). Gauge-field couplings When the scalar field theory is coupled in a gauge-invariant way to the Yang–Mills action, one obtains the Ginzburg–Landau theory of superconductors. The topological solitons of that theory correspond to vortices in a superconductor; the minimum of the Mexican hat potential corresponds to the order parameter of the superconductor. Quantum scalar field theory A general reference for this section is Ramond, Pierre (2001-12-21). Field Theory: A Modern Primer (Second Edition). USA: Westview Press. ISBN 0-201-30450-3, Ch. 4. In quantum field theory, the fields, and all observables constructed from them, are replaced by quantum operators on a Hilbert space. This Hilbert space is built on a vacuum state, and dynamics are governed by a quantum Hamiltonian, a positive semi-definite operator that annihilates the vacuum. A construction of a quantum scalar field theory is detailed in the canonical quantization article, which relies on canonical commutation relations among the fields. Essentially, the infinity of classical oscillators repackaged in the scalar field as its (decoupled) normal modes, above, are now quantized in the standard manner, so the respective quantum operator field describes an infinity of quantum harmonic oscillators acting on a respective Fock space. In brief, the basic variables are the quantum field φ and its canonical momentum π. Both these operator-valued fields are Hermitian.
At spatial points x→, y→ and at equal times, their canonical commutation relations are given by ${\begin{aligned}\left[\phi \left({\vec {x}}\right),\phi \left({\vec {y}}\right)\right]=\left[\pi \left({\vec {x}}\right),\pi \left({\vec {y}}\right)\right]&=0,\\\left[\phi \left({\vec {x}}\right),\pi \left({\vec {y}}\right)\right]&=i\delta \left({\vec {x}}-{\vec {y}}\right),\end{aligned}}$ while the free Hamiltonian is, similarly to above, $H=\int d^{3}x\left[{1 \over 2}\pi ^{2}+{1 \over 2}(\nabla \phi )^{2}+{m^{2} \over 2}\phi ^{2}\right].$ A spatial Fourier transform leads to momentum space fields ${\begin{aligned}{\widetilde {\phi }}({\vec {k}})&=\int d^{3}xe^{-i{\vec {k}}\cdot {\vec {x}}}\phi ({\vec {x}}),\\{\widetilde {\pi }}({\vec {k}})&=\int d^{3}xe^{-i{\vec {k}}\cdot {\vec {x}}}\pi ({\vec {x}})\end{aligned}}$ which resolve to annihilation and creation operators ${\begin{aligned}a({\vec {k}})&=\left(E{\widetilde {\phi }}({\vec {k}})+i{\widetilde {\pi }}({\vec {k}})\right),\\a^{\dagger }({\vec {k}})&=\left(E{\widetilde {\phi }}({\vec {k}})-i{\widetilde {\pi }}({\vec {k}})\right),\end{aligned}}$ where $E={\sqrt {k^{2}+m^{2}}}$ . These operators satisfy the commutation relations ${\begin{aligned}\left[a({\vec {k}}_{1}),a({\vec {k}}_{2})\right]=\left[a^{\dagger }({\vec {k}}_{1}),a^{\dagger }({\vec {k}}_{2})\right]&=0,\\\left[a({\vec {k}}_{1}),a^{\dagger }({\vec {k}}_{2})\right]&=(2\pi )^{3}2E\delta ({\vec {k}}_{1}-{\vec {k}}_{2}).\end{aligned}}$ The state $|0\rangle $ annihilated by all of the operators a is identified as the bare vacuum, and a particle with momentum k→ is created by applying $a^{\dagger }({\vec {k}})$ to the vacuum. Applying all possible combinations of creation operators to the vacuum constructs the relevant Hilbert space: This construction is called Fock space. 
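The ladder-operator algebra above can be illustrated numerically. A minimal sketch with numpy for a single mode in a truncated Fock space; the continuum normalization $[a({\vec k}_1), a^{\dagger }({\vec k}_2)]=(2\pi )^{3}2E\delta ({\vec k}_1-{\vec k}_2)$ is replaced here by the single-oscillator convention $[a,a^{\dagger }]=1$:

```python
import numpy as np

N = 8  # Fock-space truncation: states |0>, ..., |N-1>

# Annihilation operator for a single mode: a|n> = sqrt(n) |n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# Canonical commutator [a, a^dagger] = 1, exact except in the last
# truncated state (an artifact of the finite cutoff)
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True

# The vacuum |0> is annihilated by a
vacuum = np.zeros(N)
vacuum[0] = 1.0
print(np.allclose(a @ vacuum, 0))  # True
```

Applying `adag` repeatedly to `vacuum` generates the (truncated) Fock basis, mirroring the construction described above.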
The vacuum is annihilated by the Hamiltonian $H=\int {d^{3}k \over (2\pi )^{3}}{\frac {1}{2}}a^{\dagger }({\vec {k}})a({\vec {k}}),$ where the zero-point energy has been removed by Wick ordering. (See canonical quantization.) Interactions can be included by adding an interaction Hamiltonian. For a φ4 theory, this corresponds to adding a Wick ordered term g:φ4:/4! to the Hamiltonian, and integrating over x. Scattering amplitudes may be calculated from this Hamiltonian in the interaction picture. These are constructed in perturbation theory by means of the Dyson series, which gives the time-ordered products, or n-particle Green's functions $\langle 0|{\mathcal {T}}\{\phi (x_{1})\cdots \phi (x_{n})\}|0\rangle $ as described in the Dyson series article. The Green's functions may also be obtained from a generating function that is constructed as a solution to the Schwinger–Dyson equation. Feynman path integral The Feynman diagram expansion may be obtained also from the Feynman path integral formulation.[4] The time ordered vacuum expectation values of polynomials in φ, known as the n-particle Green's functions, are constructed by integrating over all possible fields, normalized by the vacuum expectation value with no external fields, $\langle 0|{\mathcal {T}}\{\phi (x_{1})\cdots \phi (x_{n})\}|0\rangle ={\frac {\int {\mathcal {D}}\phi \phi (x_{1})\cdots \phi (x_{n})e^{i\int d^{4}x\left({1 \over 2}\partial ^{\mu }\phi \partial _{\mu }\phi -{m^{2} \over 2}\phi ^{2}-{g \over 4!}\phi ^{4}\right)}}{\int {\mathcal {D}}\phi e^{i\int d^{4}x\left({1 \over 2}\partial ^{\mu }\phi \partial _{\mu }\phi -{m^{2} \over 2}\phi ^{2}-{g \over 4!}\phi ^{4}\right)}}}.$ All of these Green's functions may be obtained by expanding the exponential in J(x)φ(x) in the generating function $Z[J]=\int {\mathcal {D}}\phi e^{i\int d^{4}x\left({1 \over 2}\partial ^{\mu }\phi \partial _{\mu }\phi -{m^{2} \over 2}\phi ^{2}-{g \over 4!}\phi ^{4}+J\phi \right)}=Z[0]\sum _{n=0}^{\infty }{\frac 
{i^{n}}{n!}}\int d^{4}x_{1}\cdots d^{4}x_{n}\,J(x_{1})\cdots J(x_{n})\langle 0|{\mathcal {T}}\{\phi (x_{1})\cdots \phi (x_{n})\}|0\rangle .$ A Wick rotation may be applied to make time imaginary. Changing the signature to (++++) then turns the Feynman integral into a statistical mechanics partition function in Euclidean space, $Z[J]=\int {\mathcal {D}}\phi e^{-\int d^{4}x\left[{1 \over 2}(\nabla \phi )^{2}+{m^{2} \over 2}\phi ^{2}+{g \over 4!}\phi ^{4}+J\phi \right]}.$ Normally, this is applied to the scattering of particles with fixed momenta, in which case a Fourier transform is useful, giving instead ${\tilde {Z}}[{\tilde {J}}]=\int {\mathcal {D}}{\tilde {\phi }}e^{-\int {d^{4}p \over (2\pi )^{4}}\left({1 \over 2}(p^{2}+m^{2}){\tilde {\phi }}^{2}-{\tilde {J}}{\tilde {\phi }}+{g \over 4!}{\int {d^{4}p_{1} \over (2\pi )^{4}}{d^{4}p_{2} \over (2\pi )^{4}}{d^{4}p_{3} \over (2\pi )^{4}}\delta (p-p_{1}-p_{2}-p_{3}){\tilde {\phi }}(p){\tilde {\phi }}(p_{1}){\tilde {\phi }}(p_{2}){\tilde {\phi }}(p_{3})}\right)},$ where $\delta (x)$ is the Dirac delta function. The standard trick to evaluate this functional integral is to write it as a product of exponential factors, schematically, ${\tilde {Z}}[{\tilde {J}}]=\int {\mathcal {D}}{\tilde {\phi }}\prod _{p}\left[e^{-(p^{2}+m^{2}){\tilde {\phi }}^{2}/2}e^{-g/4!\int {d^{4}p_{1} \over (2\pi )^{4}}{d^{4}p_{2} \over (2\pi )^{4}}{d^{4}p_{3} \over (2\pi )^{4}}\delta (p-p_{1}-p_{2}-p_{3}){\tilde {\phi }}(p){\tilde {\phi }}(p_{1}){\tilde {\phi }}(p_{2}){\tilde {\phi }}(p_{3})}e^{{\tilde {J}}{\tilde {\phi }}}\right].$ The latter two exponential factors can be expanded as power series, and the combinatorics of this expansion can be represented graphically through Feynman diagrams of the quartic interaction. 
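The expansion of the interaction exponential as a power series can be made concrete in a zero-dimensional toy model (a sketch only, not the field theory itself): for a single Gaussian "mode" the normalized partition function is $Z(g)/Z(0)=\sum _{n}{\frac {(-g/4!)^{n}}{n!}}\langle \phi ^{4n}\rangle $, and the Gaussian moment $\langle \phi ^{4n}\rangle =(4n-1)!!$ counts exactly the Feynman pairings:

```python
import math
import numpy as np

g = 0.1

# "Exact" Z(g)/Z(0) by brute-force quadrature of the full weight;
# the grid spacing cancels in the ratio of Riemann sums
phi = np.linspace(-10.0, 10.0, 200001)
exact = np.exp(-phi**2 / 2 - g * phi**4 / 24).sum() / np.exp(-phi**2 / 2).sum()

def odd_double_factorial(m):
    # m!! for odd m >= -1, with (-1)!! = 1; counts pairings of m+1 points
    return math.prod(range(m, 0, -2)) if m > 0 else 1

# Truncated perturbative series: sum_n (-g/4!)^n (4n-1)!! / n!
series = sum((-g / 24)**n * odd_double_factorial(4 * n - 1) / math.factorial(n)
             for n in range(5))

print(abs(series - exact) < 1e-4)  # True at weak coupling; the series
                                   # is only asymptotic as g grows
```

The alternating, eventually divergent character of this series is the zero-dimensional shadow of the asymptotic nature of the diagrammatic expansion.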
The integral with g = 0 can be treated as a product of infinitely many elementary Gaussian integrals: the result may be expressed as a sum of Feynman diagrams, calculated using the following Feynman rules:
• Each field ${\tilde {\phi }}(p)$ in the n-point Euclidean Green's function is represented by an external line (half-edge) in the graph, associated with momentum p.
• Each vertex is represented by a factor −g.
• At a given order gk, all diagrams with n external lines and k vertices are constructed such that the momenta flowing into each vertex sum to zero. Each internal line is represented by a propagator 1/(q2 + m2), where q is the momentum flowing through that line.
• Any unconstrained momenta are integrated over all values.
• The result is divided by a symmetry factor, which is the number of ways the lines and vertices of the graph can be rearranged without changing its connectivity.
• Do not include graphs containing "vacuum bubbles", i.e. connected subgraphs with no external lines.
The last rule accounts for the division by ${\tilde {Z}}[0]$. The Minkowski-space Feynman rules are similar, except that each vertex is represented by −ig, while each internal line is represented by a propagator i/(q2−m2+iε), where the ε term represents the small Wick rotation needed to make the Minkowski-space Gaussian integral converge. Renormalization The integrals over unconstrained momenta, called "loop integrals", in the Feynman graphs typically diverge. This is normally handled by renormalization, which is a procedure of adding divergent counter-terms to the Lagrangian in such a way that the diagrams constructed from the original Lagrangian and the counter-terms are finite.[5] A renormalization scale must be introduced in the process, and the coupling constant and mass become dependent upon it. 
The dependence of a coupling constant g on the scale λ is encoded by a beta function, β(g), defined by $\beta (g)=\lambda \,{\frac {\partial g}{\partial \lambda }}~.$ This dependence on the energy scale is known as "the running of the coupling parameter", and the theory of this systematic scale dependence in quantum field theory is described by the renormalization group. Beta functions are usually computed in an approximation scheme, most commonly perturbation theory, where one assumes that the coupling constant is small. One can then make an expansion in powers of the coupling parameters and truncate the higher-order terms (also known as higher loop contributions, due to the number of loops in the corresponding Feynman graphs). The β-function at one loop (the first perturbative contribution) for the φ4 theory is $\beta (g)={\frac {3}{16\pi ^{2}}}g^{2}+O\left(g^{3}\right)~.$ The fact that the sign in front of the lowest-order term is positive suggests that the coupling constant increases with energy. If this behavior persisted at large couplings, it would indicate the presence of a Landau pole at finite energy, arising from quantum triviality. However, the question can only be answered non-perturbatively, since it involves strong coupling. A quantum field theory is said to be trivial when the renormalized coupling, computed through its beta function, goes to zero as the ultraviolet cutoff is removed. Consequently, the propagator becomes that of a free particle and the field is no longer interacting. For a φ4 interaction, Michael Aizenman proved that the theory is indeed trivial for space-time dimension D ≥ 5.[6] For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for it. This fact is important, as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass. 
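The one-loop running can be integrated in closed form. A sketch assuming only the one-loop β-function above (higher orders ignored): solving dg/d ln λ = 3g2/16π2 gives 1/g(λ) = 1/g0 − (3/16π2) ln(λ/λ0), which grows with energy and blows up at the Landau pole:

```python
import numpy as np

def g_one_loop(g0, t):
    # Closed-form solution of dg/dt = 3 g^2 / (16 pi^2), t = ln(lambda/lambda0)
    return g0 / (1.0 - 3.0 * g0 * t / (16.0 * np.pi**2))

g0 = 0.5
print(g_one_loop(g0, 10.0) > g0)  # True: the coupling grows with the scale

# Cross-check the closed form against a crude Euler integration
t_grid = np.linspace(0.0, 10.0, 100001)
g = g0
for dt in np.diff(t_grid):
    g += dt * 3.0 * g**2 / (16.0 * np.pi**2)
print(abs(g - g_one_loop(g0, 10.0)) < 1e-4)  # True

# One-loop Landau pole: the log-scale at which 1/g(lambda) reaches zero
t_pole = 16.0 * np.pi**2 / (3.0 * g0)
print(round(t_pole, 1))  # 105.3 e-folds above lambda0
```

Of course, by the time g(λ) is large the one-loop formula has left its domain of validity, which is exactly why the triviality question is non-perturbative.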
This can also lead to a predictable Higgs mass in asymptotic safety scenarios.[7] See also • Renormalization • Quantum triviality • Landau pole • Scale invariance (CFT description) • Scalar electrodynamics Notes 1. i.e., it transforms under the trivial (0, 0)-representation of the Lorentz group, leaving the value of the field at any spacetime point unchanged, in contrast to a vector or tensor field, or more generally, spinor-tensors, whose components mix under Lorentz transformations. Since particle or field spin by definition is determined by the Lorentz representation under which it transforms, all scalar (and pseudoscalar) fields and particles have spin zero, and are as such bosonic by the spin–statistics theorem. See Weinberg 1995, Chapter 5 2. This means it is not invariant under parity transformations, which invert the spatial directions, distinguishing it from a true scalar, which is parity-invariant. See Weinberg 1998, Chapter 19 3. Brown, Lowell S. (1994). Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-46946-3. Ch 3. 4. A general reference for this section is Ramond, Pierre (2001-12-21). Field Theory: A Modern Primer (Second ed.). USA: Westview Press. ISBN 0-201-30450-3. 5. See the previous reference, or for more detail, Itzykson, Claude; Zuber, Jean-Bernard (2006-02-24). Quantum Field Theory. Dover. ISBN 0-07-032071-3. 6. Aizenman, M. (1981). "Proof of the Triviality of ϕ4d Field Theory and Some Mean-Field Features of Ising Models for d > 4". Physical Review Letters. 47 (1): 1–4. Bibcode:1981PhRvL..47....1A. doi:10.1103/PhysRevLett.47.1. 7. Callaway, D. J. E. (1988). "Triviality Pursuit: Can Elementary Scalar Particles Exist?". Physics Reports. 167 (5): 241–320. Bibcode:1988PhR...167..241C. doi:10.1016/0370-1573(88)90008-7. References • Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0201503975. • Weinberg, S. (1995). The Quantum Theory of Fields. Vol. I. 
Cambridge University Press. ISBN 0-521-55001-7. • Weinberg, S. (1998). The Quantum Theory of Fields. Vol. II. Cambridge University Press. ISBN 0-521-55002-5. • Srednicki, M. (2007). Quantum Field Theory. Cambridge University Press. ISBN 9780521864497. • Zinn-Justin, J (2002). Quantum Field Theory and Critical Phenomena. Oxford University Press. ISBN 978-0198509233. External links • The Conceptual Basis of Quantum Field Theory Click on the link for Chap. 3 to find an extensive, simplified introduction to scalars in relativistic quantum mechanics and quantum field theory.
Wikipedia
Scalar multiplication In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra[1][2][3] (or more generally, a module in abstract algebra[4][5]). In common geometrical contexts, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term "scalar" itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is the multiplication of a vector by a scalar (where the product is a vector), and is to be distinguished from the inner product of two vectors (where the product is a scalar). Definition In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K × V to V. The result of applying this function to k in K and v in V is denoted kv. Properties Scalar multiplication obeys the following rules (vector in boldface): • Additivity in the scalar: (c + d)v = cv + dv; • Additivity in the vector: c(v + w) = cv + cw; • Compatibility of product of scalars with scalar multiplication: (cd)v = c(dv); • Multiplying by 1 does not change a vector: 1v = v; • Multiplying by 0 gives the zero vector: 0v = 0; • Multiplying by −1 gives the additive inverse: (−1)v = −v. Here, + is addition either in the field or in the vector space, as appropriate; and 0 is the additive identity in either. Juxtaposition indicates either scalar multiplication or the multiplication operation in the field. Interpretation Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space. A geometric interpretation of scalar multiplication is that it stretches or contracts vectors by a constant factor. 
As a result, it produces a vector in the same or opposite direction of the original vector but of a different length.[6] As a special case, V may be taken to be K itself and scalar multiplication may then be taken to be simply the multiplication in the field. When V is Kn, scalar multiplication is equivalent to multiplication of each component with the scalar, and may be defined as such. The same idea applies if K is a commutative ring and V is a module over K. K can even be a rig, but then there is no additive inverse. If K is not commutative, the distinct operations left scalar multiplication cv and right scalar multiplication vc may be defined. Scalar multiplication of matrices Main article: Matrix (mathematics) The left scalar multiplication of a matrix A with a scalar λ gives another matrix of the same size as A. It is denoted by λA, whose entries are defined by $(\lambda \mathbf {A} )_{ij}=\lambda \left(\mathbf {A} \right)_{ij}\,,$ explicitly: $\lambda \mathbf {A} =\lambda {\begin{pmatrix}A_{11}&A_{12}&\cdots &A_{1m}\\A_{21}&A_{22}&\cdots &A_{2m}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nm}\\\end{pmatrix}}={\begin{pmatrix}\lambda A_{11}&\lambda A_{12}&\cdots &\lambda A_{1m}\\\lambda A_{21}&\lambda A_{22}&\cdots &\lambda A_{2m}\\\vdots &\vdots &\ddots &\vdots \\\lambda A_{n1}&\lambda A_{n2}&\cdots &\lambda A_{nm}\\\end{pmatrix}}\,.$ Similarly, even though there is no widely accepted definition, the right scalar multiplication of a matrix A with a scalar λ could be defined to be $(\mathbf {A} \lambda )_{ij}=\left(\mathbf {A} \right)_{ij}\lambda \,,$ explicitly: $\mathbf {A} \lambda ={\begin{pmatrix}A_{11}&A_{12}&\cdots &A_{1m}\\A_{21}&A_{22}&\cdots &A_{2m}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nm}\\\end{pmatrix}}\lambda ={\begin{pmatrix}A_{11}\lambda &A_{12}\lambda &\cdots &A_{1m}\lambda \\A_{21}\lambda &A_{22}\lambda &\cdots &A_{2m}\lambda \\\vdots &\vdots &\ddots &\vdots \\A_{n1}\lambda &A_{n2}\lambda &\cdots 
&A_{nm}\lambda \\\end{pmatrix}}\,.$ When the entries of the matrix and the scalars are from the same commutative field, for example, the real number field or the complex number field, these two multiplications are the same, and can be simply called scalar multiplication. For matrices over a more general ring that is not commutative, such as the quaternions, they may not be equal. For a real scalar and matrix: $\lambda =2,\quad \mathbf {A} ={\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}$ $2\mathbf {A} =2{\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}={\begin{pmatrix}2\!\cdot \!a&2\!\cdot \!b\\2\!\cdot \!c&2\!\cdot \!d\\\end{pmatrix}}={\begin{pmatrix}a\!\cdot \!2&b\!\cdot \!2\\c\!\cdot \!2&d\!\cdot \!2\\\end{pmatrix}}={\begin{pmatrix}a&b\\c&d\\\end{pmatrix}}2=\mathbf {A} 2.$ For quaternion scalars and matrices: $\lambda =i,\quad \mathbf {A} ={\begin{pmatrix}i&0\\0&j\\\end{pmatrix}}$ $i{\begin{pmatrix}i&0\\0&j\\\end{pmatrix}}={\begin{pmatrix}i^{2}&0\\0&ij\\\end{pmatrix}}={\begin{pmatrix}-1&0\\0&k\\\end{pmatrix}}\neq {\begin{pmatrix}-1&0\\0&-k\\\end{pmatrix}}={\begin{pmatrix}i^{2}&0\\0&ji\\\end{pmatrix}}={\begin{pmatrix}i&0\\0&j\\\end{pmatrix}}i\,,$ where i, j, k are the quaternion units. Because quaternion multiplication is not commutative, ij = +k cannot be replaced with ji = −k. See also • Dot product • Matrix multiplication • Multiplication of vectors • Product (mathematics) References 1. Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4. 2. Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6. 3. Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2. 4. Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9. 5. Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X. 6. Weisstein, Eric W. "Scalar Multiplication". mathworld.wolfram.com. Retrieved 2020-09-06. 
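The quaternion counterexample above can be checked numerically. A minimal Python sketch (the tuple representation and the `qmul` helper are ad hoc for this illustration, not a standard library API):

```python
# Quaternions as (w, x, y, z) tuples; qmul is Hamilton's product
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
zero = (0, 0, 0, 0)

print(qmul(i, j) == k)              # True:  ij = +k
print(qmul(j, i) == (0, 0, 0, -1))  # True:  ji = -k

# Entrywise left vs right scalar multiplication of A = [[i, 0], [0, j]] by i
A = [[i, zero], [zero, j]]
left = [[qmul(i, a) for a in row] for row in A]
right = [[qmul(a, i) for a in row] for row in A]
print(left == right)  # False: the (2,2) entries are k and -k respectively
```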
Scalar projection In mathematics, the scalar projection of a vector $\mathbf {a} $ on (or onto) a vector $\mathbf {b} ,$ also known as the scalar resolute of $\mathbf {a} $ in the direction of $\mathbf {b} ,$ is given by: $s=\left\|\mathbf {a} \right\|\cos \theta =\mathbf {a} \cdot \mathbf {\hat {b}} ,$ where the operator $\cdot $ denotes a dot product, ${\hat {\mathbf {b} }}$ is the unit vector in the direction of $\mathbf {b} ,$ $\left\|\mathbf {a} \right\|$ is the length of $\mathbf {a} ,$ and $\theta $ is the angle between $\mathbf {a} $ and $\mathbf {b} $. The term scalar component sometimes refers to the scalar projection, since, in Cartesian coordinates, the components of a vector are the scalar projections in the directions of the coordinate axes. The scalar projection is a scalar, equal to the length of the orthogonal projection of $\mathbf {a} $ on $\mathbf {b} $, with a negative sign if the projection has an opposite direction with respect to $\mathbf {b} $. Multiplying the scalar projection of $\mathbf {a} $ on $\mathbf {b} $ by $\mathbf {\hat {b}} $ converts it into the above-mentioned orthogonal projection, also called the vector projection of $\mathbf {a} $ on $\mathbf {b} $. 
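The defining formulas (via the angle, and via the dot product) agree; a quick numerical sketch with numpy:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 0.0])  # direction: the x-axis

# s = a . b / ||b||  (no angle needed)
s = np.dot(a, b) / np.linalg.norm(b)
print(s)  # 3.0, the x-component of a

# Equivalently s = ||a|| cos(theta)
theta = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(np.isclose(s, np.linalg.norm(a) * np.cos(theta)))  # True

# Multiplying by the unit vector recovers the vector projection
b_hat = b / np.linalg.norm(b)
print(np.allclose(s * b_hat, [3.0, 0.0]))  # True
```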
Definition based on angle θ If the angle $\theta $ between $\mathbf {a} $ and $\mathbf {b} $ is known, the scalar projection of $\mathbf {a} $ on $\mathbf {b} $ can be computed using $s=\left\|\mathbf {a} \right\|\cos \theta .$ ($s=\left\|\mathbf {a} _{1}\right\|$ in the figure) Definition in terms of a and b When $\theta $ is not known, the cosine of $\theta $ can be computed in terms of $\mathbf {a} $ and $\mathbf {b} ,$ by the following property of the dot product $\mathbf {a} \cdot \mathbf {b} $: ${\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|}}=\cos \theta $ By this property, the definition of the scalar projection $s$ becomes: $s=\left\|\mathbf {a} _{1}\right\|=\left\|\mathbf {a} \right\|\cos \theta =\left\|\mathbf {a} \right\|{\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|}}={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|}}\,$ Properties The scalar projection has a negative sign if $90^{\circ }<\theta \leq 180^{\circ }$. It coincides with the length of the corresponding vector projection if the angle is smaller than 90°. More exactly, if the vector projection is denoted $\mathbf {a} _{1}$ and its length $\left\|\mathbf {a} _{1}\right\|$: $s=\left\|\mathbf {a} _{1}\right\|$ if $0^{\circ }\leq \theta \leq 90^{\circ },$ $s=-\left\|\mathbf {a} _{1}\right\|$ if $90^{\circ }<\theta \leq 180^{\circ }.$ See also • Scalar product • Cross product • Vector projection
Diagonal matrix In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is $\left[{\begin{smallmatrix}3&0\\0&2\end{smallmatrix}}\right]$, while an example of a 3×3 diagonal matrix is $\left[{\begin{smallmatrix}6&0&0\\0&0&0\\0&0&0\end{smallmatrix}}\right]$. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix. A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values. Definition As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (di,j) with n columns and n rows is diagonal if $\forall i,j\in \{1,2,\ldots ,n\},i\neq j\implies d_{i,j}=0.$ However, the main diagonal entries are unrestricted. The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form di,i being zero. For example: ${\begin{bmatrix}1&0&0\\0&4&0\\0&0&-3\\0&0&0\\\end{bmatrix}}$ or ${\begin{bmatrix}1&0&0&0&0\\0&4&0&0&0\\0&0&-3&0&0\end{bmatrix}}$ More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so it can also be called a symmetric diagonal matrix. The following matrix is a square diagonal matrix: ${\begin{bmatrix}1&0&0\\0&4&0\\0&0&-2\end{bmatrix}}$ If the entries are real numbers or complex numbers, then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices". 
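A quick numpy sketch of the statements above (scaling behaviour, and the determinant as the product of the diagonal values):

```python
import numpy as np

D = np.diag([6.0, 1.0, -3.0])  # a 3x3 diagonal matrix

# The determinant is the product of the diagonal values
print(np.isclose(np.linalg.det(D), 6.0 * 1.0 * (-3.0)))  # True

# As a scaling matrix: each coordinate is rescaled independently
v = np.array([1.0, 1.0, 1.0])
print(np.allclose(D @ v, [6.0, 1.0, -3.0]))  # True

# All off-diagonal entries are zero; the diagonal is unrestricted
print(np.count_nonzero(D - np.diag(np.diag(D))))  # 0
```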
Vector-to-matrix diag operator A diagonal matrix $\mathbf {D} $ can be constructed from a vector $\mathbf {a} ={\begin{bmatrix}a_{1}&\dotsm &a_{n}\end{bmatrix}}^{\textsf {T}}$ using the $\operatorname {diag} $ operator: $\mathbf {D} =\operatorname {diag} (a_{1},\dots ,a_{n})$ This may be written more compactly as $\mathbf {D} =\operatorname {diag} (\mathbf {a} )$. The same operator is also used to represent block diagonal matrices as $\mathbf {A} =\operatorname {diag} (A_{1},\dots ,A_{n})$ where each argument $A_{i}$ is a matrix. The $\operatorname {diag} $ operator may be written as: $\operatorname {diag} (\mathbf {a} )=\left(\mathbf {a} \mathbf {1} ^{\textsf {T}}\right)\circ \mathbf {I} $ where $\circ $ represents the Hadamard product and $\mathbf {1} $ is a constant vector with elements 1. Matrix-to-vector diag operator The inverse matrix-to-vector $\operatorname {diag} $ operator is sometimes denoted by the identically named $\operatorname {diag} (\mathbf {D} )={\begin{bmatrix}a_{1}&\dotsm &a_{n}\end{bmatrix}}^{\textsf {T}}$ where the argument is now a matrix and the result is a vector of its diagonal entries. The following property holds: $\operatorname {diag} (\mathbf {A} \mathbf {B} )=\sum _{j}\left(\mathbf {A} \circ \mathbf {B} ^{\textsf {T}}\right)_{ij}=\left(\mathbf {A} \circ \mathbf {B} ^{\textsf {T}}\right)\mathbf {1} $ Scalar matrix A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λ of the identity matrix I. Its effect on a vector is scalar multiplication by λ. 
For example, a 3×3 scalar matrix has the form: ${\begin{bmatrix}\lambda &0&0\\0&\lambda &0\\0&0&\lambda \end{bmatrix}}\equiv \lambda {\boldsymbol {I}}_{3}$ The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size.[lower-alpha 1] By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix $\mathbf {D} =\operatorname {diag} (a_{1},\dots ,a_{n})$ has $a_{i}\neq a_{j},$ then given a matrix $\mathbf {M} $ with $m_{ij}\neq 0,$ the $(i,j)$ terms of the products are: $(\mathbf {D} \mathbf {M} )_{ij}=a_{i}m_{ij}$ and $(\mathbf {M} \mathbf {D} )_{ij}=m_{ij}a_{j},$ and $a_{j}m_{ij}\neq m_{ij}a_{i}$ (since one can divide by $m_{ij}$), so they do not commute unless the off-diagonal terms are zero.[lower-alpha 2] Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices.[1] For an abstract vector space V (rather than the concrete vector space $K^{n}$), the analog of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra End(M) (algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map $R\to \operatorname {End} (M),$ (from a scalar λ to its corresponding scalar transformation, multiplication by λ) exhibiting End(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, invertible transforms are the center of the general linear group GL(V). The former is more generally true for free modules $M\cong R^{n}$, for which the endomorphism algebra is isomorphic to a matrix algebra. 
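The centralizer statements above can be spot-checked numerically; a sketch with numpy, where a seeded random matrix stands in for a "generic" M:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

# A scalar matrix commutes with everything
S = 5.0 * np.eye(3)
print(np.allclose(S @ M - M @ S, 0))  # True

# A diagonal matrix with distinct entries does not commute with a
# generic matrix: (DM)_{ij} = a_i m_{ij} but (MD)_{ij} = m_{ij} a_j
D = np.diag([1.0, 2.0, 3.0])
print(np.allclose(D @ M - M @ D, 0))  # False

# ...but it does commute with every other diagonal matrix
D2 = np.diag([4.0, 5.0, 6.0])
print(np.allclose(D @ D2, D2 @ D))  # True
```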
Vector operations Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix $\mathbf {D} =\operatorname {diag} (a_{1},\dots ,a_{n})$ and a vector $\mathbf {v} ={\begin{bmatrix}x_{1}&\dotsm &x_{n}\end{bmatrix}}^{\textsf {T}}$, the product is: $\mathbf {D} \mathbf {v} =\operatorname {diag} (a_{1},\dots ,a_{n}){\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}\\&\ddots \\&&a_{n}\end{bmatrix}}{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}x_{1}\\\vdots \\a_{n}x_{n}\end{bmatrix}}.$ This can be expressed more compactly by using a vector instead of a diagonal matrix, $\mathbf {d} ={\begin{bmatrix}a_{1}&\dotsm &a_{n}\end{bmatrix}}^{\textsf {T}}$, and taking the Hadamard product of the vectors (entrywise product), denoted $\mathbf {d} \circ \mathbf {v} $: $\mathbf {D} \mathbf {v} =\mathbf {d} \circ \mathbf {v} ={\begin{bmatrix}a_{1}\\\vdots \\a_{n}\end{bmatrix}}\circ {\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}x_{1}\\\vdots \\a_{n}x_{n}\end{bmatrix}}.$ This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF,[2] since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.[3] Matrix operations The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write diag(a1, ..., an) for a diagonal matrix whose diagonal entries starting in the upper left corner are a1, ..., an. Then, for addition, we have diag(a1, ..., an) + diag(b1, ..., bn) = diag(a1 + b1, ..., an + bn) and for matrix multiplication, diag(a1, ..., an) diag(b1, ..., bn) = diag(a1b1, ..., anbn). 
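Both families of identities above (the entrywise vector product, and the entrywise addition and multiplication rules) are easy to confirm in numpy:

```python
import numpy as np

d = np.array([2.0, -1.0, 0.5])
v = np.array([3.0, 4.0, 8.0])

# D v equals the Hadamard (entrywise) product d o v, with no need to
# materialize the sparse diagonal matrix
D = np.diag(d)
print(np.allclose(D @ v, d * v))  # True: both give [6., -4., 4.]

# diag(a) + diag(b) = diag(a + b)  and  diag(a) diag(b) = diag(a * b)
a, b = np.array([2.0, 3.0, 4.0]), np.array([5.0, 6.0, 7.0])
print(np.allclose(np.diag(a) + np.diag(b), np.diag(a + b)))  # True
print(np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b)))  # True
```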
The diagonal matrix diag(a1, ..., an) is invertible if and only if the entries a1, ..., an are all nonzero. In this case, we have diag(a1, ..., an)−1 = diag(a1−1, ..., an−1). In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices. Multiplying an n-by-n matrix A from the left with diag(a1, ..., an) amounts to multiplying the i-th row of A by ai for all i; multiplying the matrix A from the right with diag(a1, ..., an) amounts to multiplying the i-th column of A by ai for all i. Operator matrix in eigenbasis Main articles: Transformation matrix § Finding the matrix of a transformation, and Eigenvalues and eigenvectors As explained in determining coefficients of operator matrix, there is a special basis, e1, ..., en, for which the matrix $\mathbf {A} $ takes the diagonal form. Hence, in the defining equation $ \mathbf {A} \mathbf {e} _{j}=\sum _{i}a_{i,j}\mathbf {e} _{i}$, all coefficients $a_{i,j}$ with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, $a_{i,i}$, are known as eigenvalues and designated with $\lambda _{i}$ in the equation, which reduces to $\mathbf {A} \mathbf {e} _{i}=\lambda _{i}\mathbf {e} _{i}$. The resulting equation is known as the eigenvalue equation[4] and used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors. In other words, the eigenvalues of diag(λ1, ..., λn) are λ1, ..., λn with associated eigenvectors e1, ..., en. Properties • The determinant of diag(a1, ..., an) is the product a1⋯an. • The adjugate of a diagonal matrix is again diagonal. • Where all matrices are square: • A matrix is diagonal if and only if it is triangular and normal. • A matrix is diagonal if and only if it is both upper- and lower-triangular. • A diagonal matrix is symmetric. • The identity matrix In and zero matrix are diagonal. • A 1×1 matrix is always diagonal. Applications Diagonal matrices occur in many areas of linear algebra. 
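The invertibility rule and the row/column scaling behaviour described earlier can be verified directly (a numpy sketch):

```python
import numpy as np

a = np.array([2.0, 3.0, 4.0])

# The inverse of diag(a) is diag of the entrywise reciprocals
# (requires every entry to be nonzero)
print(np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a)))  # True

# Left multiplication scales the rows of A; right multiplication
# scales the columns
A = np.arange(9.0).reshape(3, 3)
print(np.allclose(np.diag(a) @ A, a[:, None] * A))  # True
print(np.allclose(A @ np.diag(a), A * a[None, :]))  # True
```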
Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix. In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is a matrix X such that X−1AX is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable. Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if AA∗ = A∗A then there exists a unitary matrix U such that UAU∗ is diagonal). Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U∗AV is diagonal with positive entries. Operator theory In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates—in the language of operators, an integral transform—which changes the basis to an eigenbasis of eigenfunctions: which makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation. Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function–the values of the function at each point correspond to the diagonal entries of a matrix. 
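The similarity criterion above can be illustrated with numpy, whose `np.linalg.eig` returns the eigenvector matrix X directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # normal (symmetric), hence diagonalizable

evals, X = np.linalg.eig(A)  # columns of X are eigenvectors

# X^{-1} A X is diagonal, with the eigenvalues on the diagonal
D = np.linalg.inv(X) @ A @ X
print(np.allclose(D, np.diag(evals)))  # True

# For this A the eigenvalues are 3 and 1 (eigenvectors (1,1) and (1,-1))
print(np.allclose(sorted(evals), [1.0, 3.0]))  # True
```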
See also

• Anti-diagonal matrix • Banded matrix • Bidiagonal matrix • Diagonally dominant matrix • Diagonalizable matrix • Jordan normal form • Multiplication operator • Tridiagonal matrix • Toeplitz matrix • Toral Lie algebra • Circulant matrix

Notes

1. Proof: given the matrix unit $e_{ij}$, the product $Me_{ij}$ has as its only nonzero column the j-th, equal to the i-th column of M, while $e_{ij}M$ has as its only nonzero row the i-th, equal to the j-th row of M. Equating the two forces the off-diagonal entries of M to be zero and the i-th diagonal entry to equal the j-th.
2. Over more general rings, this does not hold, because one cannot always divide.

References

1. "Do Diagonal Matrices Always Commute?". Stack Exchange. March 15, 2016. Retrieved August 4, 2018.
2. Sahami, Mehran (2009-06-15). Text Mining: Classification, Clustering, and Applications. CRC Press. p. 14. ISBN 9781420059458.
3. "Element-wise vector-vector multiplication in BLAS?". stackoverflow.com. 2011-10-01. Retrieved 2020-08-30.
4. Nearing, James (2010). "Chapter 7.9: Eigenvalues and Eigenvectors" (PDF). Mathematical Tools for Physics. ISBN 978-0486482125. Retrieved January 1, 2012.
Sources

• Horn, Roger Alan; Johnson, Charles Royal (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
Scale-free network

A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as

$P(k)\ \sim \ k^{-\gamma }$

where $\gamma $ is a parameter whose value is typically in the range $ 2<\gamma <3$ (wherein the second moment (scale parameter) of $k^{-\gamma }$ is infinite but the first moment is finite), although occasionally it may lie outside these bounds.[1][2] The name
"scale-free" means that some moments of the degree distribution are not defined, so that the network does not have a characteristic scale or "size". Many networks have been reported to be scale-free, although statistical analysis has refuted many of these claims and seriously questioned others.[3][4] Additionally, some have argued that simply knowing that a degree-distribution is fat-tailed is more important than knowing whether a network is scale-free according to statistically rigorous definitions.[5][6] Preferential attachment and the fitness model have been proposed as mechanisms to explain conjectured power law degree distributions in real networks. Alternative models such as super-linear preferential attachment and second-neighbour preferential attachment may appear to generate transient scale-free networks, but the degree distribution deviates from a power law as networks become very large.[7][8] History In studies of the networks of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of links to papers—i.e., the number of citations they receive—had a heavy-tailed distribution following a Pareto distribution or power law, and thus that the citation network is scale-free. He did not however use the term "scale-free network", which was not coined until some decades later. In a later paper in 1976, Price also proposed a mechanism to explain the occurrence of power laws in citation networks, which he called "cumulative advantage" but which is today more commonly known under the name preferential attachment. Recent interest in scale-free networks started in 1999 with work by Albert-László Barabási and Réka Albert at the University of Notre Dame who mapped the topology of a portion of the World Wide Web,[9] finding that some nodes, which they called "hubs", had many more connections than others and that the network as a whole had a power-law distribution of the number of links connecting to a node. 
After finding that a few other networks, including some social and biological networks, also had heavy-tailed degree distributions, Barabási and Albert coined the term "scale-free network" to describe the class of networks that exhibit a power-law degree distribution. However, studying seven examples of networks in social, economic, technological, biological, and physical systems, Amaral et al. were not able to find a scale-free network among them. Only one of these examples, the movie-actor network, had a degree distribution P(k) following a power-law regime for moderate k, though eventually this power-law regime was followed by a sharp cutoff showing exponential decay for large k.[10] Barabási and Albert proposed a generative mechanism to explain the appearance of power-law distributions, which they called "preferential attachment" and which is essentially the same as that proposed by Price. Analytic solutions for this mechanism (also similar to the solution of Price) were presented in 2000 by Dorogovtsev, Mendes and Samukhin[11] and independently by Krapivsky, Redner, and Leyvraz, and later rigorously proved by mathematician Béla Bollobás.[12] Notably, however, this mechanism only produces a specific subset of networks in the scale-free class, and many alternative mechanisms have been discovered since.[13] The history of scale-free networks also includes some disagreement. On an empirical level, the scale-free nature of several networks has been called into question. For instance, the Faloutsos brothers reported, on the basis of traceroute data, that the Internet had a power-law degree distribution; however, it has been suggested that this is a layer-3 illusion created by routers, which appear as high-degree nodes while concealing the internal layer-2 structure of the ASes they interconnect.[14] On a theoretical level, refinements to the abstract definition of scale-free have been proposed. For example, Li et al.
(2005) offered a potentially more precise "scale-free metric". Briefly, let G be a graph with edge set E, and denote the degree of a vertex $v$ (that is, the number of edges incident to $v$) by $\deg(v)$. Define

$s(G)=\sum _{(u,v)\in E}\deg(u)\cdot \deg(v).$

This is maximized when high-degree nodes are connected to other high-degree nodes. Now define

$S(G)={\frac {s(G)}{s_{\max }}},$

where smax is the maximum value of s(H) for H in the set of all graphs with a degree distribution identical to that of G. This gives a metric between 0 and 1, where a graph G with small S(G) is "scale-rich" and a graph G with S(G) close to 1 is "scale-free". This definition captures the notion of self-similarity implied in the name "scale-free".

Overview

There are two major components that explain the emergence of the scale-free property in complex networks: growth and preferential attachment.[15] By "growth" is meant a growth process where, over an extended period of time, new nodes join an already existing system, a network (like the World Wide Web, which has grown by billions of web pages over 10 years). By "preferential attachment" is meant that new nodes prefer to connect to nodes that already have a high number of links. Thus, there is a higher probability that more and more nodes will link themselves to the one that already has many links, eventually turning that node into a hub.[9] Depending on the network, the hubs might either be assortative or disassortative. Assortativity would be found in social networks in which well-connected/famous people would tend to know each other. Disassortativity would be found in technological (Internet, World Wide Web) and biological (protein interaction, metabolism) networks.[15]

Characteristics

The most notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average.
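The quantity s(G) in Li et al.'s metric can be computed directly from an edge list; a minimal plain-Python sketch (the toy graphs and function name are illustrative; the normalizer s_max, a maximum over all graphs with the same degree sequence, is expensive in general and is not computed here):

```python
# s(G) = sum over edges (u,v) of deg(u)*deg(v): large when high-degree
# nodes attach to other high-degree nodes.
from collections import Counter

def s_metric(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg[u] * deg[v] for u, v in edges)

star = [(0, 1), (0, 2), (0, 3)]  # one hub with three leaves
path = [(0, 1), (1, 2), (2, 3)]  # a chain with the same number of edges
print(s_metric(star))  # 3 edges of weight 3*1 each -> 9
print(s_metric(path))  # 1*2 + 2*2 + 2*1 -> 8
```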
The highest-degree nodes are often called "hubs", and are thought to serve specific purposes in their networks, although this depends greatly on the domain.

Clustering

Another important characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. This implies that the low-degree nodes belong to very dense sub-graphs and those sub-graphs are connected to each other through hubs. Consider a social network in which nodes are people and links are acquaintance relationships between people. It is easy to see that people tend to form communities, i.e., small groups in which everyone knows everyone (one can think of such a community as a complete graph). In addition, the members of a community also have a few acquaintance relationships with people outside that community. Some people, however, are connected to a large number of communities (e.g., celebrities, politicians). Those people may be considered the hubs responsible for the small-world phenomenon. At present, the more specific characteristics of scale-free networks vary with the generative mechanism used to create them. For instance, networks generated by preferential attachment typically place the high-degree vertices in the middle of the network, connecting them together to form a core, with progressively lower-degree nodes making up the regions between the core and the periphery. The random removal of even a large fraction of vertices impacts the overall connectedness of the network very little, suggesting that such topologies could be useful for security, while targeted attacks destroy the connectedness very quickly. Other scale-free networks, which place the high-degree vertices at the periphery, do not exhibit these properties. Similarly, the clustering coefficient of scale-free networks can vary significantly depending on other topological details.
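The robustness contrast described above (random failures versus targeted attacks on hubs) can be illustrated with a short simulation. Everything here is an illustrative sketch, not from the cited studies: the growth routine uses the standard "repeated nodes" trick for degree-proportional sampling, and the 20% removal fraction is arbitrary.

```python
import random
from collections import defaultdict

def grow_pa_graph(n, m=2, seed=1):
    """Grow an n-node graph; each new node attaches to m existing nodes
    chosen with probability proportional to degree."""
    random.seed(seed)
    adj = defaultdict(set)
    targets = list(range(m))  # the first new node links to the seed nodes
    repeated = []             # each node appears once per link it holds
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
            repeated += [new, t]
        targets = random.sample(repeated, m)  # degree-proportional picks
    return adj

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

n = 200
adj = grow_pa_graph(n)
k = n // 5  # remove 20% of the nodes either way
hubs = set(sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k])
rand = set(random.sample(sorted(adj), k))
print("after random removal:  ", largest_component(adj, rand))
print("after targeted removal:", largest_component(adj, hubs))
```

On preferential-attachment graphs the targeted (hub) removal typically fragments the network far more than the random removal of the same number of nodes.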
Immunization

The question of how to efficiently immunize scale-free networks that represent realistic networks, such as the Internet and social networks, has been studied extensively. One such strategy is to immunize the largest-degree nodes, i.e., targeted (intentional) attacks, since in this case the critical threshold $p_{c}$ is relatively high and fewer nodes need to be immunized. However, in many realistic cases the global structure is not available and the largest-degree nodes are not known. Properties of random graphs may change or remain invariant under graph transformations. Mashaghi A. et al., for example, demonstrated that a transformation which converts random graphs to their edge-dual graphs (or line graphs) produces an ensemble of graphs with nearly the same degree distribution, but with degree correlations and a significantly higher clustering coefficient. Scale-free graphs, as such, remain scale-free under such transformations.[16]

Examples

Although many real-world networks are thought to be scale-free, the evidence often remains inconclusive, primarily due to the developing awareness of more rigorous data analysis techniques.[3] As such, the scale-free nature of many networks is still being debated by the scientific community. A few examples of networks claimed to be scale-free include:

• Some social networks, including collaboration networks. Two examples that have been studied extensively are the collaboration of movie actors in films and the co-authorship by mathematicians of papers.
• Many kinds of computer networks, including the internet and the webgraph of the World Wide Web.
• Some financial networks, such as interbank payment networks.[17][18]
• Protein–protein interaction networks.
• Semantic networks.[19]
• Airline networks.
Scale-free topology has also been found in high-temperature superconductors.[20] The qualities of a high-temperature superconductor (a compound in which electrons obey the laws of quantum physics, and flow in perfect synchrony, without friction) appear linked to the fractal arrangements of seemingly random oxygen atoms and lattice distortion.[21] A space-filling cellular structure, the weighted planar stochastic lattice (WPSL), has recently been proposed whose coordination number distribution follows a power law. It implies that the lattice has a few blocks which have an astonishingly large number of neighbors with whom they share common borders. Its construction starts with an initiator, say a square of unit area, and a generator that divides it randomly into four blocks. The generator is thereafter applied sequentially, over and over again, to only one of the available blocks, picked preferentially with respect to their areas. This results in the partitioning of the square into ever smaller mutually exclusive rectangular blocks. The dual of the WPSL (DWPSL), which is obtained by replacing each block with a node at its center and each common border between blocks with an edge joining the two corresponding vertices, emerges as a network whose degree distribution follows a power law.[22][23] The reason is that it grows following the mediation-driven attachment rule, which also embodies the preferential attachment rule, but in disguise.

Generative models

Scale-free networks do not arise by chance alone. Erdős and Rényi (1960) studied a model of growth for graphs in which, at each step, two nodes are chosen uniformly at random and a link is inserted between them. The properties of these random graphs are different from the properties found in scale-free networks, and therefore a model for this growth process is needed.
The most widely known generative model for a subset of scale-free networks is Barabási and Albert's (1999) "rich get richer" generative model, in which each new Web page creates links to existing Web pages with a probability distribution which is not uniform, but proportional to the current in-degree of Web pages. This mechanism was originally proposed by Derek J. de Solla Price in 1976 under the term "cumulative advantage", but did not reach popularity until Barabási rediscovered the results under its current name (BA Model). According to this process, a page with many in-links will attract more in-links than a regular page. This generates a power law, but the resulting graph differs from the actual Web graph in other properties, such as the presence of small, tightly connected communities. More general models and network characteristics have been proposed and studied. For example, Pachon et al. (2018) proposed a variant of the rich-get-richer generative model which takes into account two different attachment rules: a preferential attachment mechanism and a uniform choice only for the most recent nodes.[24] For a review, see the book by Dorogovtsev and Mendes. Some mechanisms, such as super-linear preferential attachment and second-neighbour attachment, generate networks which are transiently scale-free, but deviate from a power law as networks grow large.[7][8] A somewhat different generative model for Web links has been suggested by Pennock et al. (2002). They examined communities with interests in a specific topic such as the home pages of universities, public companies, newspapers or scientists, and discarded the major hubs of the Web. In this case, the distribution of links was no longer a power law but resembled a normal distribution. Based on these observations, the authors proposed a generative model that mixes preferential attachment with a baseline probability of gaining a link.
Another generative model is the copy model studied by Kumar et al.[25] (2000), in which new nodes choose an existing node at random and copy a fraction of the links of that node. This also generates a power law. The growth of the network (adding new nodes) is not a necessary condition for creating a scale-free network (see Dangalchev[26]). One possibility (Caldarelli et al. 2002) is to consider the structure as static and draw a link between vertices according to a particular property of the two vertices involved. Once the statistical distribution of these vertex properties (fitnesses) is specified, it turns out that in some circumstances static networks also develop scale-free properties.

Generalized scale-free model

There has been a burst of activity in the modeling of scale-free complex networks. The recipe of Barabási and Albert[27] has been followed by several variations and generalizations[28][29][30][31][24] and the revamping of previous mathematical works.[32] As long as there is a power-law distribution in a model, it is a scale-free network, and a model of that network is a scale-free model.

Features

Many real networks are (approximately) scale-free and hence require scale-free models to describe them. In Price's scheme, there are two ingredients needed to build up a scale-free model:

1. Adding or removing nodes. Usually we concentrate on growing the network, i.e., adding nodes.
2. Preferential attachment: the probability $\Pi $ that new nodes will be connected to an "old" node.

Note that some models (see Dangalchev[26] and the Fitness model below) can work also statically, without changing the number of nodes. It should also be kept in mind that the fact that "preferential attachment" models give rise to scale-free networks does not prove that this is the mechanism underlying the evolution of real-world scale-free networks, as there might exist different mechanisms at work in real-world systems that nevertheless give rise to scaling.
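The copy model mentioned above can be sketched in a few lines. This is an illustrative simplification, not Kumar et al.'s exact formulation: the seed graph, the rewiring probability p, and the helper names are assumptions.

```python
import random

def copy_model(n, p=0.5, seed=3):
    """Grow a graph by copying: each new node imitates the link list of a
    random existing 'prototype' node, replacing each copied link with a
    uniformly random target with probability p."""
    random.seed(seed)
    neighbors = {0: [1], 1: [0]}  # seed graph: a single link
    for new in range(2, n):
        proto = random.randrange(new)  # existing node to imitate
        neighbors[new] = []
        for t in list(neighbors[proto]):  # copy over a list snapshot
            if random.random() < p:
                t = random.randrange(new)  # uniform rewiring
            if t != new and t not in neighbors[new]:
                neighbors[new].append(t)
                neighbors[t].append(new)
    return neighbors

g = copy_model(2000)
degs = sorted((len(v) for v in g.values()), reverse=True)
print("max degree:", degs[0], "median degree:", degs[len(degs) // 2])
```

Copying the links of a prototype implicitly favors nodes that are already well connected, which is why this mechanism also produces heavy-tailed degree distributions.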
Examples

There have been several attempts to generate scale-free network properties. Here are some examples:

The Barabási–Albert model

The Barabási–Albert model, an undirected version of Price's model, has a linear preferential attachment $\Pi (k_{i})={\frac {k_{i}}{\sum _{j}k_{j}}}$ and adds one new node at every time step. (Note that another general feature of $\Pi (k)$ in real networks is that $\Pi (0)\neq 0$, i.e. there is a nonzero probability that a new node attaches to an isolated node. Thus in general $\Pi (k)$ has the form $\Pi (k)=A+k^{\alpha }$, where $A$ is the initial attractiveness of the node.)

Two-level network model

Dangalchev (see [26]) builds a 2-L model by considering the importance of each of the neighbours of a target node in preferential attachment. The attractiveness of a node in the 2-L model depends not only on the number of nodes linked to it but also on the number of links in each of these nodes:

$\Pi (k_{i})={\frac {k_{i}+C\sum _{(i,j)}k_{j}}{\sum _{j}k_{j}+C\sum _{j}k_{j}^{2}}},$

where C is a coefficient between 0 and 1. A variant of the 2-L model, the k2 model, where first and second neighbour nodes contribute equally to a target node's attractiveness, demonstrates the emergence of transient scale-free networks.[8] In the k2 model, the degree distribution appears approximately scale-free as long as the network is relatively small, but significant deviations from the scale-free regime emerge as the network grows larger. This results in the relative attractiveness of nodes with different degrees changing over time, a feature also observed in real networks.

Mediation-driven attachment (MDA) model

In the mediation-driven attachment (MDA) model, a new node coming with $m$ edges picks an existing connected node at random and then connects itself, not with that one, but with $m$ of its neighbors, also chosen at random.
The probability $\Pi (i)$ that node $i$ of the existing nodes is chosen is

$\Pi (i)={\frac {k_{i}}{N}}{\frac {\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}{k_{i}}}.$

The factor ${\frac {\sum _{j=1}^{k_{i}}{\frac {1}{k_{j}}}}{k_{i}}}$ is the inverse of the harmonic mean (IHM) of the degrees of the $k_{i}$ neighbors of node $i$. Extensive numerical investigation suggests that for approximately $m>14$ the mean IHM value in the large-$N$ limit becomes a constant, which means $\Pi (i)\propto k_{i}$. This implies that the more links (the higher the degree) a node has, the higher its chance of gaining more links, since they can be reached in a larger number of ways through mediators, which essentially embodies the intuitive idea of the rich-get-richer mechanism (or the preferential attachment rule of the Barabási–Albert model). Therefore, the MDA network can be seen to follow the PA rule, but in disguise.[33] However, for $m=1$ it describes the winner-takes-all mechanism, as we find that almost $99\%$ of the nodes have degree one while one node is super-rich in degree. As the value of $m$ increases, the disparity between the super-rich and the poor decreases, and as $m>14$ we find a transition from the rich-get-super-richer to the rich-get-richer mechanism.

Non-linear preferential attachment

The Barabási–Albert model assumes that the probability $\Pi (k)$ that a node attaches to node $i$ is proportional to the degree $k$ of node $i$. This assumption involves two hypotheses: first, that $\Pi (k)$ depends on $k$, in contrast to random graphs in which $\Pi (k)=p$, and second, that the functional form of $\Pi (k)$ is linear in $k$. In non-linear preferential attachment, the form of $\Pi (k)$ is not linear, and recent studies have demonstrated that the degree distribution depends strongly on the shape of the function $\Pi (k)$. Krapivsky, Redner, and Leyvraz[30] demonstrate that the scale-free nature of the network is destroyed for nonlinear preferential attachment.
The only case in which the topology of the network is scale-free is that in which the preferential attachment is asymptotically linear, i.e. $\Pi (k_{i})\sim a_{\infty }k_{i}$ as $k_{i}\to \infty $. In this case the rate equation leads to

$P(k)\sim k^{-\gamma }{\text{ with }}\gamma =1+{\frac {\mu }{a_{\infty }}}.$

In this way the exponent of the degree distribution can be tuned to any value between 2 and $\infty $.

Hierarchical network model

Hierarchical network models are, by design, scale-free and have high clustering of nodes.[34] The iterative construction leads to a hierarchical network. Starting from a fully connected cluster of five nodes, we create four identical replicas, connecting the peripheral nodes of each replica to the central node of the original cluster. From this, we get a network of 25 nodes (N = 25). Repeating the same process, we can create four more replicas of the original cluster, with the four peripheral nodes of each one connecting to the central node of the cluster created in the first step. This gives N = 125, and the process can continue indefinitely.

Fitness model

The idea is that the link between two vertices is assigned not randomly, with a probability p equal for all pairs of vertices. Rather, for every vertex j there is an intrinsic fitness xj, and a link between vertices i and j is created with a probability $p(x_{i},x_{j})$.[35] In the case of the World Trade Web it is possible to reconstruct all the properties by using the countries' GDPs as fitnesses and taking

$p(x_{i},x_{j})={\frac {\delta x_{i}x_{j}}{1+\delta x_{i}x_{j}}}.$[36]

Hyperbolic geometric graphs

Assuming that a network has an underlying hyperbolic geometry, one can use the framework of spatial networks to generate scale-free degree distributions.
This heterogeneous degree distribution then simply reflects the negative curvature and metric properties of the underlying hyperbolic geometry.[37]

Edge dual transformation to generate scale-free graphs with desired properties

Starting with scale-free graphs with low degree correlation and clustering coefficient, one can generate new graphs with much higher degree correlations and clustering coefficients by applying the edge-dual transformation.[16]

Uniform-preferential-attachment model (UPA model)

The UPA model is a variant of the preferential attachment model (proposed by Pachon et al.) which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1−p) that stresses the rich-get-richer system, and a uniform choice (with probability p) for the most recent nodes. This modification is interesting for studying the robustness of the scale-free behavior of the degree distribution. It has been proved analytically that the asymptotic power-law degree distribution is preserved.[24]

Scale-free ideal networks

In the context of network theory, a scale-free ideal network is a random network with a degree distribution following the scale-free ideal gas density distribution. These networks are able to reproduce city-size distributions and electoral results by unraveling the size distribution of social groups with information theory on complex networks, when a competitive cluster growth process is applied to the network.[38][39] In models of scale-free ideal networks it is possible to demonstrate that Dunbar's number is the cause of the phenomenon known as the "six degrees of separation".
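The UPA attachment rule described above can be sketched as follows, under an assumed interpretation: with probability p the new node attaches uniformly to one of the most recent nodes, and otherwise preferentially by degree. The recency window size, the one-link-per-node simplification, and all parameter values are illustrative choices, not from Pachon et al.

```python
import random

def upa_graph(n, p=0.2, window=10, seed=7):
    """Each new node brings one link: with probability p it attaches
    uniformly to one of the `window` most recent nodes, otherwise
    preferentially by degree (via the repeated-nodes pool)."""
    random.seed(seed)
    degree = [1, 1]    # seed graph: nodes 0 and 1 joined by one link
    repeated = [0, 1]  # pool for degree-proportional sampling
    for new in range(2, n):
        if random.random() < p:
            t = random.randrange(max(0, new - window), new)  # recent, uniform
        else:
            t = random.choice(repeated)                      # preferential
        degree.append(1)
        degree[t] += 1
        repeated += [new, t]
    return degree

deg = upa_graph(5000)
print("max degree:", max(deg), "mean degree:", sum(deg) / len(deg))
```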
Novel characteristics

For a scale-free network with $n$ nodes and power-law exponent $\gamma >3$, the induced subgraph constructed by vertices with degrees larger than $\log {n}\times \log ^{*}{n}$ is a scale-free network with $\gamma '=2$, almost surely.[40]

Estimating the power-law exponent

Estimating the power-law exponent $\gamma $ of a scale-free network is typically done using maximum likelihood estimation with the degrees of a few uniformly sampled nodes.[3] However, since uniform sampling does not obtain enough samples from the important heavy tail of the power-law degree distribution, this method can yield large bias and variance. It has recently been proposed to sample random friends (i.e., random ends of random links), who are more likely to come from the tail of the degree distribution as a result of the friendship paradox.[41][42] Theoretically, maximum likelihood estimation with random friends leads to a smaller bias and a smaller variance compared to the classical approach based on uniform sampling.[42]

See also

• Random graph – Graph generated by a random process
• Erdős–Rényi model – Two closely related models for generating random graphs
• Non-linear preferential attachment
• Bose–Einstein condensation (network theory) – Model in network science
• Scale invariance – Features that do not change if length or energy scales are multiplied by a common factor
• Complex network – Network with non-trivial topological features
• Webgraph – Graph of connected web pages
• Barabási–Albert model – Algorithm for generating random networks
• Bianconi–Barabási model – Model in network science

References

1. Onnela, J.-P.; Saramaki, J.; Hyvonen, J.; Szabo, G.; Lazer, D.; Kaski, K.; Kertesz, J.; Barabasi, A.-L. (2007). "Structure and tie strengths in mobile communication networks".
Proceedings of the National Academy of Sciences. 104 (18): 7332–7336. arXiv:physics/0610104. Bibcode:2007PNAS..104.7332O. doi:10.1073/pnas.0610245104. PMC 1863470. PMID 17456605. 2. Choromański, K.; Matuszak, M.; MiȩKisz, J. (2013). "Scale-Free Graph with Preferential Attachment and Evolving Internal Vertex Structure". Journal of Statistical Physics. 151 (6): 1175–1183. Bibcode:2013JSP...151.1175C. doi:10.1007/s10955-013-0749-1. 3. Clauset, Aaron; Cosma Rohilla Shalizi; M. E. J Newman (2009). "Power-law distributions in empirical data". SIAM Review. 51 (4): 661–703. arXiv:0706.1062. Bibcode:2009SIAMR..51..661C. doi:10.1137/070710111. S2CID 9155618. 4. Broido, Anna; Aaron Clauset (2019-03-04). "Scale-free networks are rare". Nature Communications. 10 (1): 1017. arXiv:1801.03400. Bibcode:2019NatCo..10.1017B. doi:10.1038/s41467-019-08746-5. PMC 6399239. PMID 30833554. 5. Holme, Petter (December 2019). "Rare and everywhere: Perspectives on scale-free networks". Nature Communications. 10 (1): 1016. Bibcode:2019NatCo..10.1016H. doi:10.1038/s41467-019-09038-8. PMC 6399274. PMID 30833568. 6. Stumpf, M. P. H.; Porter, M. A. (10 February 2012). "Critical Truths About Power Laws". Science. 335 (6069): 665–666. Bibcode:2012Sci...335..665S. doi:10.1126/science.1216142. PMID 22323807. S2CID 206538568. 7. Krapivsky, Paul; Krioukov, Dmitri (21 August 2008). "Scale-free networks as preasymptotic regimes of superlinear preferential attachment". Physical Review E. 78 (2): 026114. arXiv:0804.1366. Bibcode:2008PhRvE..78b6114K. doi:10.1103/PhysRevE.78.026114. PMID 18850904. S2CID 14292535. 8. Falkenberg, Max; Lee, Jong-Hyeok; Amano, Shun-ichi; Ogawa, Ken-ichiro; Yano, Kazuo; Miyake, Yoshihiro; Evans, Tim S.; Christensen, Kim (18 June 2020). "Identifying time dependence in network growth". Physical Review Research. 2 (2): 023352. arXiv:2001.09118. Bibcode:2020PhRvR...2b3352F. doi:10.1103/PhysRevResearch.2.023352. 9. Barabási, Albert-László; Albert, Réka. (October 15, 1999). 
"Emergence of scaling in random networks". Science. 286 (5439): 509–512. arXiv:cond-mat/9910332. Bibcode:1999Sci...286..509B. doi:10.1126/science.286.5439.509. MR 2091634. PMID 10521342. S2CID 524106. 10. Among the seven examples studied by Amaral et al, six of them were single-scale and only example iii, the movie-actor network, had a power law regime followed by a sharp cutoff. None of Amaral et al's examples obeyed the power law regime for large k, i.e. none of these seven examples were shown to be scale-free. See especially the beginning of the discussion section of Amaral LAN, Scala A, Barthelemy M, Stanley HE (2000). "Classes of small-world networks". PNAS. 97 (21): 11149–52. arXiv:cond-mat/0001458. Bibcode:2000PNAS...9711149A. doi:10.1073/pnas.200327197. PMC 17168. PMID 11005838. 11. Dorogovtsev, S.; Mendes, J.; Samukhin, A. (2000). "Structure of Growing Networks with Preferential Linking". Physical Review Letters. 85 (21): 4633–4636. arXiv:cond-mat/0004434. Bibcode:2000PhRvL..85.4633D. doi:10.1103/PhysRevLett.85.4633. PMID 11082614. S2CID 118876189. 12. Bollobás, B.; Riordan, O.; Spencer, J.; Tusnády, G. (2001). "The degree sequence of a scale-free random graph process". Random Structures and Algorithms. 18 (3): 279–290. doi:10.1002/rsa.1009. MR 1824277. S2CID 1486779. 13. Dorogovtsev, S. N.; Mendes, J. F. F. (2002). "Evolution of networks". Advances in Physics. 51 (4): 1079–1187. arXiv:cond-mat/0106144. Bibcode:2002AdPhy..51.1079D. doi:10.1080/00018730110112519. S2CID 429546. 14. Willinger, Walter; David Alderson; John C. Doyle (May 2009). "Mathematics and the Internet: A Source of Enormous Confusion and Great Potential" (PDF). Notices of the AMS. American Mathematical Society. 56 (5): 586–599. Archived (PDF) from the original on 2011-05-15. Retrieved 2011-02-03. 15. Barabási, Albert-László; Oltvai, Zoltán N. (2004). "Network biology: understanding the cell's functional organization". Nature Reviews Genetics. 5 (2): 101–113. doi:10.1038/nrg1272. 
PMID 14735121. S2CID 10950726. 16. Ramezanpour, A.; Karimipour, V.; Mashaghi, A. (2003). "Generating correlated networks from uncorrelated ones". Phys. Rev. E. 67 (4): 046107. arXiv:cond-mat/0212469. Bibcode:2003PhRvE..67d6107R. doi:10.1103/PhysRevE.67.046107. PMID 12786436. S2CID 33054818. 17. De Masi, Giulia; et al. (2006). "Fitness model for the Italian interbank money market". Physical Review E. 74 (6): 066112. arXiv:physics/0610108. Bibcode:2006PhRvE..74f6112D. doi:10.1103/PhysRevE.74.066112. PMID 17280126. S2CID 30814484. 18. Soramäki, Kimmo; et al. (2007). "The topology of interbank payment flows". Physica A: Statistical Mechanics and Its Applications. 379 (1): 317–333. Bibcode:2007PhyA..379..317S. doi:10.1016/j.physa.2006.11.093. hdl:10419/60649. 19. Steyvers, Mark; Joshua B. Tenenbaum (2005). "The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth". Cognitive Science. 29 (1): 41–78. arXiv:cond-mat/0110012. doi:10.1207/s15516709cog2901_3. PMID 21702767. S2CID 6000627. 20. Fratini, Michela; Poccia, Nicola; Ricci, Alessandro; Campi, Gaetano; Burghammer, Manfred; Aeppli, Gabriel; Bianconi, Antonio (2010). "Scale-free structural organization of oxygen interstitials in La2CuO4+y". Nature. 466 (7308): 841–4. arXiv:1008.2015. Bibcode:2010Natur.466..841F. doi:10.1038/nature09260. PMID 20703301. S2CID 4405620. 21. Poccia, Nicola; Ricci, Alessandro; Campi, Gaetano; Fratini, Michela; Puri, Alessandro; Di Gioacchino, Daniele; Marcelli, Augusto; Reynolds, Michael; Burghammer, Manfred; Saini, Naurang L.; Aeppli, Gabriel; Bianconi, Antonio (2012). "Optimum inhomogeneity of local lattice distortions in La2CuO4+y". PNAS. 109 (39): 15685–15690. arXiv:1208.0101. Bibcode:2012PNAS..10915685P. doi:10.1073/pnas.1208492109. PMC 3465392. PMID 22961255. 22. Hassan, M. K.; Hassan, M. Z.; Pavel, N. I. (2010). "Scale-free network topology and multifractality in a weighted planar stochastic lattice". New Journal of Physics. 12 (9): 093045. 
arXiv:1008.4994. Bibcode:2010NJPh...12i3045H. doi:10.1088/1367-2630/12/9/093045. 23. Hassan, M. K.; Hassan, M. Z.; Pavel, N. I. (2010). "Scale-free coordination number disorder and multifractal size disorder in weighted planar stochastic lattice". J. Phys.: Conf. Ser. 297: 01. 24. Pachon, Angelica; Sacerdote, Laura; Yang, Shuyi (2018). "Scale-free behavior of networks with the copresence of preferential and uniform attachment rules". Physica D: Nonlinear Phenomena. 371: 1–12. arXiv:1704.08597. Bibcode:2018PhyD..371....1P. doi:10.1016/j.physd.2018.01.005. S2CID 119320331. 25. Kumar, Ravi; Raghavan, Prabhakar (2000). Stochastic Models for the Web Graph (PDF). Foundations of Computer Science, 41st Annual Symposium on. pp. 57–65. doi:10.1109/SFCS.2000.892065. Archived (PDF) from the original on 2016-03-03. Retrieved 2016-02-10. 26. Dangalchev, Chavdar (July 2004). "Generation models for scale-free networks". Physica A: Statistical Mechanics and Its Applications. 338 (3–4): 659–671. Bibcode:2004PhyA..338..659D. doi:10.1016/j.physa.2004.01.056. 27. Barabási, A.-L. and R. Albert, Science 286, 509 (1999). 28. R. Albert and A.-L. Barabási, Phys. Rev. Lett. 85, 5234 (2000). 29. S. N. Dorogovtsev, J. F. F. Mendes, and A. N. Samukhin, cond-mat/0011115. 30. P.L. Krapivsky, S. Redner, and F. Leyvraz, Phys. Rev. Lett. 85, 4629 (2000). 31. B. Tadic, Physica A 293, 273 (2001). 32. S. Bornholdt and H. Ebel, cond-mat/0008465; H.A. Simon, Biometrika 42, 425 (1955). 33. Hassan, M. K.; Islam, Liana; Arefinul Haque, Syed (2017). "Degree distribution, rank-size distribution, and leadership persistence in mediation-driven attachment networks". Physica A. 469: 23–30. arXiv:1411.3444. Bibcode:2017PhyA..469...23H. doi:10.1016/j.physa.2016.11.001. S2CID 51976352. 34. Ravasz, E.; Barabási (2003). "Hierarchical organization in complex networks". Phys. Rev. E. 67 (2): 026112. arXiv:cond-mat/0206130. Bibcode:2003PhRvE..67b6112R. doi:10.1103/physreve.67.026112. PMID 12636753. S2CID 17777155. 35. 
Caldarelli, G.; et al. (2002). "Scale-free networks from varying vertex intrinsic fitness" (PDF). Phys. Rev. Lett. 89 (25): 258702. Bibcode:2002PhRvL..89y8702C. doi:10.1103/physrevlett.89.258702. PMID 12484927. 36. Garlaschelli, D.; et al. (2004). "Fitness-Dependent Topological Properties of the World Trade Web". Phys. Rev. Lett. 93 (18): 188701. arXiv:cond-mat/0403051. Bibcode:2004PhRvL..93r8701G. doi:10.1103/physrevlett.93.188701. PMID 15525215. S2CID 16367275. 37. Krioukov, Dmitri; Papadopoulos, Fragkiskos; Kitsak, Maksim; Vahdat, Amin; Boguñá, Marián (2010). "Hyperbolic geometry of complex networks". Physical Review E. 82 (3): 036106. arXiv:1006.5169. Bibcode:2010PhRvE..82c6106K. doi:10.1103/PhysRevE.82.036106. PMID 21230138. S2CID 6451908. 38. A. Hernando; D. Villuendas; C. Vesperinas; M. Abad; A. Plastino (2009). "Unravelling the size distribution of social groups with information theory on complex networks". arXiv:0905.3704 [physics.soc-ph]., submitted to European Physical Journal B 39. André A. Moreira; Demétrius R. Paula; Raimundo N. Costa Filho; José S. Andrade, Jr. (2006). "Competitive cluster growth in complex networks". Physical Review E. 73 (6): 065101. arXiv:cond-mat/0603272. Bibcode:2006PhRvE..73f5101M. doi:10.1103/PhysRevE.73.065101. PMID 16906890. S2CID 45651735. 40. Heydari, H.; Taheri, S.M.; Kaveh, K. (2018). "Distributed Maximal Independent Set on Scale-Free Networks". arXiv:1804.02513 [cs.DC]. 41. Eom, Young-Ho; Jo, Hang-Hyun (2015-05-11). "Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks". Scientific Reports. 5 (1): 9752. arXiv:1411.6871. Bibcode:2015NatSR...5E9752E. doi:10.1038/srep09752. ISSN 2045-2322. PMC 4426729. PMID 25959097. 42. Nettasinghe, Buddhika; Krishnamurthy, Vikram (2021-05-19). "Maximum Likelihood Estimation of Power-law Degree Distributions via Friendship Paradox-based Sampling". ACM Transactions on Knowledge Discovery from Data. 15 (6): 1–28. doi:10.1145/3451166. 
ISSN 1556-4681. Further reading • Albert R.; Barabási A.-L. (2002). "Statistical mechanics of complex networks". Rev. Mod. Phys. 74 (1): 47–97. arXiv:cond-mat/0106096. Bibcode:2002RvMP...74...47A. doi:10.1103/RevModPhys.74.47. S2CID 60545. • Amaral LAN, Scala A, Barthelemy M, Stanley HE (2000). "Classes of small-world networks". PNAS. 97 (21): 11149–52. arXiv:cond-mat/0001458. Bibcode:2000PNAS...9711149A. doi:10.1073/pnas.200327197. PMC 17168. PMID 11005838. • Barabási, Albert-László (2004). Linked: How Everything is Connected to Everything Else. Perseus Pub. ISBN 0-452-28439-2. • Barabási, Albert-László; Bonabeau, Eric (May 2003). "Scale-Free Networks" (PDF). Scientific American. 288 (5): 50–9. Bibcode:2003SciAm.288e..60B. doi:10.1038/scientificamerican0503-60. PMID 12701331. • Dan Braha; Yaneer Bar-Yam (2004). "Topology of Large-Scale Engineering Problem-Solving Networks" (PDF). Phys. Rev. E. 69 (1): 016113. Bibcode:2004PhRvE..69a6113B. doi:10.1103/PhysRevE.69.016113. PMID 14995673. S2CID 1001176. • Caldarelli G. "Scale-Free Networks" Oxford University Press, Oxford (2007). • Caldarelli G.; Capocci A.; De Los Rios P.; Muñoz M.A. (2002). "Scale-free networks from varying vertex intrinsic fitness". Physical Review Letters. 89 (25): 258702. arXiv:cond-mat/0207366. Bibcode:2002PhRvL..89y8702C. doi:10.1103/PhysRevLett.89.258702. PMID 12484927. • Dangalchev, Ch. (2004). "Generation models for scale-free networks". Physica A. 338 (3–4): 659–671. Bibcode:2004PhyA..338..659D. doi:10.1016/j.physa.2004.01.056. • Dorogovtsev, S.N.; Mendes, J.F.F.; Samukhin, A.N. (2000). "Structure of Growing Networks: Exact Solution of the Barabási—Albert's Model". Phys. Rev. Lett. 85 (21): 4633–6. arXiv:cond-mat/0004434. Bibcode:2000PhRvL..85.4633D. doi:10.1103/PhysRevLett.85.4633. PMID 11082614. S2CID 118876189. • Dorogovtsev, S.N.; Mendes, J.F.F. (2003). Evolution of Networks: from biological networks to the Internet and WWW. Oxford University Press. ISBN 0-19-851590-1. 
• Dorogovtsev, S.N.; Goltsev A.V.; Mendes, J.F.F. (2008). "Critical phenomena in complex networks". Rev. Mod. Phys. 80 (4): 1275–1335. arXiv:0705.0010. Bibcode:2008RvMP...80.1275D. doi:10.1103/RevModPhys.80.1275. S2CID 3174463. • Dorogovtsev, S.N.; Mendes, J.F.F. (2002). "Evolution of networks". Advances in Physics. 51 (4): 1079–1187. arXiv:cond-mat/0106144. Bibcode:2002AdPhy..51.1079D. doi:10.1080/00018730110112519. S2CID 429546. • Erdős, P.; Rényi, A. (1960). On the Evolution of Random Graphs (PDF). Vol. 5. Publication of the Mathematical Institute of the Hungarian Academy of Science. pp. 17–61. • Faloutsos, M.; Faloutsos, P.; Faloutsos, C. (1999). "On power-law relationships of the internet topology". Comp. Comm. Rev. 29 (4): 251–262. doi:10.1145/316194.316229. • Li, L.; Alderson, D.; Tanaka, R.; Doyle, J.C.; Willinger, W. (2005). "Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications (Extended Version)". arXiv:cond-mat/0501169. • Kumar, R.; Raghavan, P.; Rajagopalan, S.; Sivakumar, D.; Tomkins, A.; Upfal, E. (2000). "Stochastic models for the web graph" (PDF). Proceedings of the 41st Annual Symposium on Foundations of Computer Science (FOCS). Redondo Beach, CA: IEEE CS Press. pp. 57–65. • Matlis, Jan (November 4, 2002). "Scale-Free Networks". • Newman, Mark E.J. (2003). "The structure and function of complex networks". SIAM Review. 45 (2): 167–256. arXiv:cond-mat/0303516. Bibcode:2003SIAMR..45..167N. doi:10.1137/S003614450342480. S2CID 221278130. • Pastor-Satorras, R.; Vespignani, A. (2004). Evolution and Structure of the Internet: A Statistical Physics Approach. Cambridge University Press. ISBN 0-521-82698-5. • Pennock, D.M.; Flake, G.W.; Lawrence, S.; Glover, E.J.; Giles, C.L. (2002). "Winners don't take all: Characterizing the competition for links on the web". PNAS. 99 (8): 5207–11. Bibcode:2002PNAS...99.5207P. doi:10.1073/pnas.032085699. PMC 122747. PMID 16578867. • Robb, John. Scale-Free Networks and Terrorism, 2004. 
• Keller, E.F. (2005). "Revisiting "scale-free" networks". BioEssays. 27 (10): 1060–8. doi:10.1002/bies.20294. PMID 16163729. Archived from the original on 2011-08-13. • Onody, R.N.; de Castro, P.A. (2004). "Complex Network Study of Brazilian Soccer Player". Phys. Rev. E. 70 (3): 037103. arXiv:cond-mat/0409609. Bibcode:2004PhRvE..70c7103O. doi:10.1103/PhysRevE.70.037103. PMID 15524675. S2CID 31653489. • Kasthurirathna, D.; Piraveenan, M. (2015). "Emergence of scale-free characteristics in socio-ecological systems with bounded rationality". Sci. Rep. 5: 10448.
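The friend-based estimation procedure from the "Estimating the power law exponent" section can be sketched in a few lines. This is a minimal illustration, not code from the cited papers: the toy graph, the function names, and the sampling loop are assumptions, while the estimator itself is the standard continuous maximum-likelihood formula, gamma = 1 + n / sum(ln(k / k_min)).

```python
import random
from math import log

def mle_gamma(degrees, k_min=1):
    # Continuous maximum-likelihood estimate of the power-law exponent:
    # gamma = 1 + n / sum(ln(k / k_min)) over degrees k >= k_min.
    ks = [k for k in degrees if k >= k_min]
    return 1.0 + len(ks) / sum(log(k / k_min) for k in ks)

def sample_friend_degrees(edges, degree, m, rng):
    # Friendship-paradox sampling: take a random end of a random link.
    # This samples nodes with probability proportional to their degree,
    # so it reaches the heavy tail more often than uniform node sampling.
    return [degree[rng.choice(rng.choice(edges))] for _ in range(m)]

# Toy illustration: a 4-leaf star plus a triangle (an assumed example graph).
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (5, 6), (6, 7), (7, 5)]
degree = {0: 4, 1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2}
rng = random.Random(42)
friend_degs = sample_friend_degrees(edges, degree, 100, rng)
print(mle_gamma(friend_degs, k_min=1))
```

On a real network the friend-sampled degrees over-represent hubs, which is exactly what makes the tail of the degree distribution easier to estimate.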
Wikipedia
Scale (analytical tool) In the study of complex systems and hierarchy theory, the concept of scale refers to the combination of (1) the level of analysis (for example, analyzing the whole or a specific component of the system); and (2) the level of observation (for example, observing a system as an external viewer or as an internal participant).[1] The scale of analysis encompasses both the analytical choice of how to observe a given system or object of study, and the role of the observer in determining the identity of the system.[2][3] This analytical tool is central to multi-scale analysis (see for example, MuSIASEM, land-use analysis).[4] For example, at the scale of analysis of a given population of zebras, the number of predators (e.g. lions) determines the number of prey that survive hunting, while at the scale of analysis of the ecosystem, the availability of prey determines how many predators can survive in a given area. The semantic categories of "prey" and "predator" are not given, but are defined by the observer. See also • Overview effect References 1. Ahl, Valerie; Allen, Timothy F. H. (1996). Hierarchy theory: a vision, vocabulary, and epistemology. New York: Columbia University Press. ISBN 0231084803. OCLC 34149766. 2. Giampietro, Mario; Allen, Timothy F. H.; Mayumi, Kozo (December 2006). "The epistemological predicament associated with purposive quantitative analysis". Ecological Complexity. 3 (4): 307–327. doi:10.1016/j.ecocom.2007.02.005. 3. Kovacic, Zora; Giampietro, Mario (December 2015). "Empty promises or promising futures? The case of smart grids". Energy. 93 (Part 1): 67–74. doi:10.1016/j.energy.2015.08.116. 4. Serrano-Tovar, Tarik; Giampietro, Mario (January 2014). "Multi-scale integrated analysis of rural Laos: studying metabolic patterns of land uses across different levels and scales". Land Use Policy. 36: 155–170. doi:10.1016/j.landusepol.2013.08.003.
Scale (ratio) The scale ratio of a model represents the proportional ratio of a linear dimension of the model to the same feature of the original. Examples include a 3-dimensional scale model of a building or the scale drawings of the elevations or plans of a building.[1] In such cases the scale is dimensionless and exact throughout the model or drawing. The scale can be expressed in four ways: in words (a lexical scale), as a ratio, as a fraction and as a graphical (bar) scale. Thus on an architect's drawing one might read 'one centimeter to one meter', 1:100, or 1/100. A bar scale would also normally appear on the drawing. The colon may also be replaced with the specific, slightly raised ratio symbol U+2236 ∶ RATIO (&ratio;), i.e. "1∶100". General representation In general a representation may involve more than one scale at the same time. For example, a drawing showing a new road in elevation might use different horizontal and vertical scales. An elevation of a bridge might be annotated with arrows with a length proportional to a force loading, as in 1 cm to 1000 newtons: this is an example of a dimensional scale. A weather map at some scale may be annotated with wind arrows at a dimensional scale of 1 cm to 20 mph. In maps Map scales require careful discussion. A town plan may be constructed as an exact scale drawing, but for larger areas a map projection is necessary and no projection can represent the Earth's surface at a uniform scale. In general the scale of a projection depends on position and direction. The variation of scale may be considerable in small scale maps which may cover the globe. In large scale maps of small areas the variation of scale may be insignificant for most purposes but it is always present. The scale of a map projection must be interpreted as a nominal scale. (The usage large and small in relation to map scales relates to their expressions as fractions. 
The fraction 1/10,000 used for a local map is much larger than the 1/100,000,000 used for a global map. There is no fixed dividing line between small and large scales.) A scale model is a representation or copy of an object that is larger or smaller than the actual size of the object being represented. Very often the scale model is smaller than the original and used as a guide to making the object in full size. Mathematics In mathematics, the idea of geometric scaling can be generalized. The scale between two mathematical objects need not be a fixed ratio but may vary in some systematic way; this is part of mathematical projection, which generally defines a point by point relationship between two mathematical objects. (Generally, these may be mathematical sets and may not represent geometric objects.) See also • Aspect ratio • List of scale model sizes • Scale (analytical tool) • Scale invariance • Scale space • Spatial scale References 1. "What is a Ratio Scale?". www.rasch.org. Retrieved 2017-11-19.
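The ratio representation discussed above amounts to simple multiplication, and the "large scale" versus "small scale" usage is a comparison of fractions. A minimal sketch (the function name and sample values are assumptions made for illustration):

```python
from fractions import Fraction

def real_length(drawing_length, scale_denominator):
    # For a 1:N scale, a length measured on the drawing corresponds to
    # N times that length on the real object (in the same units).
    return drawing_length * scale_denominator

# 3 cm measured on a 1:100 architect's drawing is 300 cm (3 m) in reality.
print(real_length(3, 100))

# "Large scale" vs "small scale" compares the scales written as fractions:
# 1/10,000 (local map) is a larger number than 1/100,000,000 (global map).
print(Fraction(1, 10_000) > Fraction(1, 100_000_000))
```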
Scale co-occurrence matrix Scale co-occurrence matrix (SCM) is a method for image feature extraction within scale space after wavelet transformation, proposed by Wu Jun and Zhao Zhongming (Institute of Remote Sensing Application, China). In practice, we first apply the discrete wavelet transformation to one gray image and obtain sub images at different scales. Then we construct a series of scale-based co-occurrence matrices, each matrix describing the gray-level variation between two adjacent scales. Last we use selected functions (such as the Harris statistical approach) to calculate measurements with the SCM and do feature extraction and classification. One basis of the method is the fact that the way texture information changes from one scale to another can represent that texture to some extent, and thus can be used as a criterion for feature extraction. The matrix captures the relation of features between different scales rather than the features within a single scale space, which can represent the scale property of texture better. Also, several experiments show that it can obtain more accurate results for texture classification than traditional texture classification methods.[1] Background Texture can be regarded as a similarity grouping in an image. Traditional texture analysis can be divided into four major issues: feature extraction, texture discrimination, texture classification and shape from texture (to reconstruct 3D surface geometry from texture information). For traditional feature extraction, approaches are usually categorized as structural, statistical, model based and transform.[2] Wavelet transformation is a popular method in numerical analysis and functional analysis, which captures both frequency and location information. The gray level co-occurrence matrix provides an important basis for SCM construction. The SCM based on the discrete wavelet frame transformation makes use of both correlations and feature information, so that it combines structural and statistical benefits. 
Discrete wavelet frame (DWF) In order to compute the SCM we first have to apply the discrete wavelet frame (DWF) transformation to obtain a series of sub images. The discrete wavelet frame is nearly identical to the standard wavelet transform,[3] except that one upsamples the filters rather than downsampling the image. Given an image, the DWF decomposes its channel using the same method as the wavelet transform, but without the subsampling process. This results in four filtered images with the same size as the input image. The decomposition is then continued in the LL channels only, as in the wavelet transform, but since the image is not subsampled, the filter has to be upsampled by inserting zeros in between its coefficients. The number of channels, hence the number of features, for the DWF is given by 3 × l − 1.[4] The one-dimensional discrete wavelet frame decomposes the image in this way: $d_{i}(k)=[[g_{i}]^{T}x],\quad (i=1,\ldots ,N)$ Example If there are two sub images X1 and X0 from the parent image X (in practice X = X0), with X1 = [1 1; 1 2] and X0 = [1 1; 1 4], the grayscale is 4 so that we can get k = 1, G = 4. X1(1,1), (1,2) and (2,1) are 1, while X0(1,1), (1,2) and (2,1) are 1, thus Φ1(1,1) = 3; similarly, Φ1(2,4) = 1. The SCM is as follows (G = 4):

             Gray level 0  Gray level 1  Gray level 2  Gray level 3  Gray level 4
Gray level 0      0             0             0             0             0
Gray level 1      3             0             0             0             0
Gray level 2      0             0             0             0             0
Gray level 3      0             0             0             0             0
Gray level 4      0             0             1             0             0

External links • Tao Chen; Kai-Kuang Ma; Li-Hui Chen (1998). "Discrete wavelet frame representations of color texture features for image query". 1998 IEEE Second Workshop on Multimedia Signal Processing (Cat. No.98EX175). ieeexplore.ieee.org. pp. 45–50. doi:10.1109/MMSP.1998.738911. ISBN 0-7803-4919-9. S2CID 1833240. • co-occurrence-matrix MATLAB tutorial • Co-occurrence Matrix References 1. Wu, Jun; Zhao, Zhongming (Mar 2001). "Scale Co-occurrence Matrix for Texture Analysis using Wavelet Transformation". Journal of Remote Sensing. 5 (2): 100. 2. Duda, R.O. (1973-02-09). 
Pattern Classification and Scene Analysis. ISBN 978-0471223610. 3. Lund, Kevin; Burgess, Curt (June 1996). "Producing high-dimensional semantic spaces from lexical co-occurrence". Behavior Research Methods. 28 (2): 203–208. 4. Mallat, S.G. (1989). "A theory for multiresolution signal decomposition: The wavelet representation". IEEE Transactions on Pattern Analysis and Machine Intelligence. 11 (7): 674–693. Bibcode:1989ITPAM..11..674M. doi:10.1109/34.192463.
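The worked example in the Example section can be reproduced with a short sketch. This is a minimal illustration (the function name is an assumption): the matrix Φ simply counts co-occurring gray levels at the same pixel position in two adjacent-scale sub images.

```python
def scale_cooccurrence(sub_a, sub_b, levels):
    # phi[a][b] counts pixels whose gray level is a in the first sub image
    # and b in the second, for gray levels 0..levels.
    phi = [[0] * (levels + 1) for _ in range(levels + 1)]
    for row_a, row_b in zip(sub_a, sub_b):
        for va, vb in zip(row_a, row_b):
            phi[va][vb] += 1
    return phi

# The article's example: X1 = [1 1; 1 2], X0 = [1 1; 1 4], G = 4.
X1 = [[1, 1], [1, 2]]
X0 = [[1, 1], [1, 4]]
phi1 = scale_cooccurrence(X1, X0, levels=4)
print(phi1[1][1], phi1[2][4])  # prints 3 1, matching Phi1(1,1)=3 and Phi1(2,4)=1
```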
Scale (descriptive set theory) In the mathematical discipline of descriptive set theory, a scale is a certain kind of object defined on a set of points in some Polish space (for example, a scale might be defined on a set of real numbers). Scales were originally isolated as a concept in the theory of uniformization,[1] but have found wide applicability in descriptive set theory, with applications such as establishing bounds on the possible lengths of wellorderings of a given complexity, and showing (under certain assumptions) that there are largest countable sets of certain complexities. Formal definition Given a pointset A contained in some product space $A\subseteq X=X_{0}\times X_{1}\times \ldots \times X_{m-1}$ where each Xk is either the Baire space or a countably infinite discrete set, we say that a norm on A is a map from A into the ordinal numbers. Each norm has an associated prewellordering, where one element of A precedes another element if the norm of the first is less than the norm of the second. A scale on A is a countably infinite collection of norms $(\phi _{n})_{n<\omega }$ with the following properties: If the sequence xi is such that xi is an element of A for each natural number i, and xi converges to an element x in the product space X, and for each natural number n there is an ordinal λn such that φn(xi)=λn for all sufficiently large i, then x is an element of A, and for each n, φn(x)≤λn.[2] By itself, at least granted the axiom of choice, the existence of a scale on a pointset is trivial, as A can be wellordered and each φn can simply enumerate A. To make the concept useful, a definability criterion must be imposed on the norms (individually and together). Here "definability" is understood in the usual sense of descriptive set theory; it need not be definability in an absolute sense, but rather indicates membership in some pointclass of sets of reals. 
The norms φn themselves are not sets of reals, but the corresponding prewellorderings are (at least in essence). The idea is that, for a given pointclass Γ, we want the prewellorderings below a given point in A to be uniformly represented both as a set in Γ and as one in the dual pointclass of Γ, relative to the "larger" point being an element of A. Formally, we say that the φn form a Γ-scale on A if they form a scale on A and there are ternary relations S and T such that, if y is an element of A, then $\forall n\forall x(\varphi _{n}(x)\leq \varphi _{n}(y)\iff S(n,x,y)\iff T(n,x,y))$ where S is in Γ and T is in the dual pointclass of Γ (that is, the complement of T is in Γ).[3] Note here that we think of φn(x) as being ∞ whenever x∉A; thus the condition φn(x)≤φn(y), for y∈A, also implies x∈A. The definition does not imply that the collection of norms is in the intersection of Γ with the dual pointclass of Γ. This is because the three-way equivalence is conditional on y being an element of A. For y not in A, it might be the case that one or both of S(n,x,y) or T(n,x,y) fail to hold, even if x is in A (and therefore automatically φn(x)≤φn(y)=∞). Applications Scale property The scale property is a strengthening of the prewellordering property. For pointclasses of a certain form, it implies that relations in the given pointclass have a uniformization that is also in the pointclass. Notes 1. Kechris and Moschovakis 2008:28 2. Kechris and Moschovakis 2008:37 3. Kechris and Moschovakis 2008:37, with harmless reworking References • Moschovakis, Yiannis N. (1980), Descriptive Set Theory, North Holland, ISBN 0-444-70199-0 • Kechris, Alexander S.; Moschovakis, Yiannis N. (2008), "Notes on the theory of scales", in Kechris, Alexander S.; Benedikt Löwe; Steel, John R. (eds.), Games, Scales and Suslin Cardinals: The Cabal Seminar, Volume I, Cambridge University Press, pp. 28–74, ISBN 978-0-521-89951-2
Scaled inverse chi-squared distribution The scaled inverse chi-squared distribution is the distribution for x = 1/s2, where s2 is a sample mean of the squares of ν independent normal random variables that have mean 0 and inverse variance 1/σ2 = τ2. The distribution is therefore parametrised by the two quantities ν and τ2, referred to as the number of chi-squared degrees of freedom and the scaling parameter, respectively. (Plots of the probability density function and cumulative distribution function are omitted here.) Summary of the distribution:
• Parameters: $\nu >0\,$, $\tau ^{2}>0\,$
• Support: $x\in (0,\infty )$
• PDF: ${\frac {(\tau ^{2}\nu /2)^{\nu /2}}{\Gamma (\nu /2)}}~{\frac {\exp \left[{\frac {-\nu \tau ^{2}}{2x}}\right]}{x^{1+\nu /2}}}$
• CDF: $\Gamma \left({\frac {\nu }{2}},{\frac {\tau ^{2}\nu }{2x}}\right)\left/\Gamma \left({\frac {\nu }{2}}\right)\right.$
• Mean: ${\frac {\nu \tau ^{2}}{\nu -2}}$ for $\nu >2\,$
• Mode: ${\frac {\nu \tau ^{2}}{\nu +2}}$
• Variance: ${\frac {2\nu ^{2}\tau ^{4}}{(\nu -2)^{2}(\nu -4)}}$ for $\nu >4\,$
• Skewness: ${\frac {4}{\nu -6}}{\sqrt {2(\nu -4)}}$ for $\nu >6\,$
• Ex. kurtosis: ${\frac {12(5\nu -22)}{(\nu -6)(\nu -8)}}$ for $\nu >8\,$
• Entropy: ${\frac {\nu }{2}}\!+\!\ln \left({\frac {\tau ^{2}\nu }{2}}\Gamma \left({\frac {\nu }{2}}\right)\right)\!-\!\left(1\!+\!{\frac {\nu }{2}}\right)\psi \left({\frac {\nu }{2}}\right)$
• MGF: ${\frac {2}{\Gamma ({\frac {\nu }{2}})}}\left({\frac {-\tau ^{2}\nu t}{2}}\right)^{\!\!{\frac {\nu }{4}}}\!\!K_{\frac {\nu }{2}}\left({\sqrt {-2\tau ^{2}\nu t}}\right)$
• CF: ${\frac {2}{\Gamma ({\frac {\nu }{2}})}}\left({\frac {-i\tau ^{2}\nu t}{2}}\right)^{\!\!{\frac {\nu }{4}}}\!\!K_{\frac {\nu }{2}}\left({\sqrt {-2i\tau ^{2}\nu t}}\right)$
This family of scaled inverse chi-squared distributions is closely related to two other distribution families, those of the inverse-chi-squared distribution and the inverse-gamma distribution. 
Compared to the inverse-chi-squared distribution, the scaled distribution has an extra parameter τ2, which scales the distribution horizontally and vertically, representing the inverse-variance of the original underlying process. Also, the scaled inverse chi-squared distribution is presented as the distribution for the inverse of the mean of ν squared deviates, rather than the inverse of their sum. The two distributions thus have the relation that if $X\sim {\mbox{Scale-inv-}}\chi ^{2}(\nu ,\tau ^{2})$   then   ${\frac {X}{\tau ^{2}\nu }}\sim {\mbox{inv-}}\chi ^{2}(\nu )$ Compared to the inverse gamma distribution, the scaled inverse chi-squared distribution describes the same data distribution, but using a different parametrization, which may be more convenient in some circumstances. Specifically, if $X\sim {\mbox{Scale-inv-}}\chi ^{2}(\nu ,\tau ^{2})$   then   $X\sim {\textrm {Inv-Gamma}}\left({\frac {\nu }{2}},{\frac {\nu \tau ^{2}}{2}}\right)$ Either form may be used to represent the maximum entropy distribution for a fixed first inverse moment $(E(1/X))$ and first logarithmic moment $(E(\ln(X)))$. The scaled inverse chi-squared distribution also has a particular use in Bayesian statistics, somewhat unrelated to its use as a predictive distribution for x = 1/s2. Specifically, the scaled inverse chi-squared distribution can be used as a conjugate prior for the variance parameter of a normal distribution. In this context the scaling parameter is denoted by σ02 rather than by τ2, and has a different interpretation. The application has been more usually presented using the inverse-gamma distribution formulation instead; however, some authors, following in particular Gelman et al. (1995/2004), argue that the inverse chi-squared parametrisation is more intuitive. 
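The relation to the inverse-chi-squared distribution above gives a simple way to draw samples: if Z ~ χ²ν, then ν τ²/Z ~ Scale-inv-χ²(ν, τ²). A minimal standard-library sketch (the function name is an assumption); it uses the fact that a χ²ν variate is a Gamma variate with shape ν/2 and scale 2:

```python
import random

def scale_inv_chi2_sample(nu, tau2, rng):
    # Z ~ chi-squared with nu degrees of freedom is Gamma(shape=nu/2, scale=2);
    # then nu * tau2 / Z ~ Scale-inv-chi2(nu, tau2).
    z = rng.gammavariate(nu / 2.0, 2.0)
    return nu * tau2 / z

rng = random.Random(1)
draws = [scale_inv_chi2_sample(10, 1.0, rng) for _ in range(100_000)]
# The sample mean should be near nu*tau2/(nu-2) = 10/8 = 1.25 for nu=10, tau2=1.
print(sum(draws) / len(draws))
```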
Characterization The probability density function of the scaled inverse chi-squared distribution extends over the domain $x>0$ and is $f(x;\nu ,\tau ^{2})={\frac {(\tau ^{2}\nu /2)^{\nu /2}}{\Gamma (\nu /2)}}~{\frac {\exp \left[{\frac {-\nu \tau ^{2}}{2x}}\right]}{x^{1+\nu /2}}}$ where $\nu $ is the degrees of freedom parameter and $\tau ^{2}$ is the scale parameter. The cumulative distribution function is $F(x;\nu ,\tau ^{2})=\Gamma \left({\frac {\nu }{2}},{\frac {\tau ^{2}\nu }{2x}}\right)\left/\Gamma \left({\frac {\nu }{2}}\right)\right.$ $=Q\left({\frac {\nu }{2}},{\frac {\tau ^{2}\nu }{2x}}\right)$ where $\Gamma (a,x)$ is the incomplete gamma function, $\Gamma (x)$ is the gamma function and $Q(a,x)$ is a regularized gamma function. The characteristic function is $\varphi (t;\nu ,\tau ^{2})=$ ${\frac {2}{\Gamma ({\frac {\nu }{2}})}}\left({\frac {-i\tau ^{2}\nu t}{2}}\right)^{\!\!{\frac {\nu }{4}}}\!\!K_{\frac {\nu }{2}}\left({\sqrt {-2i\tau ^{2}\nu t}}\right),$ where $K_{\frac {\nu }{2}}(z)$ is the modified Bessel function of the second kind. Parameter estimation The maximum likelihood estimate of $\tau ^{2}$ is $\tau ^{2}=n/\sum _{i=1}^{n}{\frac {1}{x_{i}}}.$ The maximum likelihood estimate of ${\frac {\nu }{2}}$ can be found using Newton's method on: $\ln \left({\frac {\nu }{2}}\right)-\psi \left({\frac {\nu }{2}}\right)={\frac {1}{n}}\sum _{i=1}^{n}\ln \left(x_{i}\right)-\ln \left(\tau ^{2}\right),$ where $\psi (x)$ is the digamma function. An initial estimate can be found by taking the formula for mean and solving it for $\nu .$ Let ${\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}$ be the sample mean. Then an initial estimate for $\nu $ is given by: ${\frac {\nu }{2}}={\frac {\bar {x}}{{\bar {x}}-\tau ^{2}}}.$ Bayesian estimation of the variance of a normal distribution The scaled inverse chi-squared distribution has a second important application, in the Bayesian estimation of the variance of a Normal distribution. 
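The estimation formulas above can be sketched as follows. This is a minimal standard-library illustration (function names are assumptions): the digamma-based Newton refinement for ν is omitted, and the initial ν estimate is only meaningful when the sample mean exceeds the τ² estimate (i.e. when the fitted mean exists, ν > 2).

```python
def tau2_mle(xs):
    # Maximum likelihood estimate of the scale: tau^2 = n / sum(1/x_i).
    return len(xs) / sum(1.0 / x for x in xs)

def nu_initial(xs, tau2):
    # Initial estimate from solving the mean formula nu*tau^2/(nu-2) = xbar
    # for nu, i.e. nu/2 = xbar / (xbar - tau^2).
    xbar = sum(xs) / len(xs)
    return 2.0 * xbar / (xbar - tau2)

xs = [1.0, 2.0, 4.0]        # assumed toy data
t2 = tau2_mle(xs)           # 3 / (1 + 1/2 + 1/4) = 12/7
print(t2, nu_initial(xs, t2))
```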
According to Bayes' theorem, the posterior probability distribution for quantities of interest is proportional to the product of a prior distribution for the quantities and a likelihood function: $p(\sigma ^{2}|D,I)\propto p(\sigma ^{2}|I)\;p(D|\sigma ^{2})$ where D represents the data and I represents any initial information about σ2 that we may already have. The simplest scenario arises if the mean μ is already known; or, alternatively, if it is the conditional distribution of σ2 that is sought, for a particular assumed value of μ. Then the likelihood term L(σ2|D) = p(D|σ2) has the familiar form ${\mathcal {L}}(\sigma ^{2}|D,\mu )={\frac {1}{\left({\sqrt {2\pi }}\sigma \right)^{n}}}\;\exp \left[-{\frac {\sum _{i}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right]$ Combining this with the rescaling-invariant prior p(σ2|I) = 1/σ2, which can be argued (e.g. following Jeffreys) to be the least informative possible prior for σ2 in this problem, gives a combined posterior probability $p(\sigma ^{2}|D,I,\mu )\propto {\frac {1}{\sigma ^{n+2}}}\;\exp \left[-{\frac {\sum _{i}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right]$ This form can be recognised as that of a scaled inverse chi-squared distribution, with parameters $\nu =n$ and $\tau ^{2}=s^{2}={\frac {1}{n}}\sum _{i}^{n}(x_{i}-\mu )^{2}$ Gelman et al. remark that the re-appearance of this distribution, previously seen in a sampling context, may seem remarkable; but given the choice of prior the "result is not surprising".[1] In particular, the choice of a rescaling-invariant prior for σ2 has the result that the probability for the ratio σ2 / s2 has the same form (independent of the conditioning variable) when conditioned on s2 as when conditioned on σ2: $p({\tfrac {\sigma ^{2}}{s^{2}}}|s^{2})=p({\tfrac {\sigma ^{2}}{s^{2}}}|\sigma ^{2})$ In the sampling-theory case, conditioned on σ2, the probability distribution for (1/s2) is a scaled inverse chi-squared distribution; and so the probability distribution for σ2 conditioned on s2, given a scale-agnostic prior, is 
also a scaled inverse chi-squared distribution. Use as an informative prior If more is known about the possible values of σ2, a distribution from the scaled inverse chi-squared family, such as Scale-inv-χ2(n0, s02) can be a convenient form to represent a more informative prior for σ2, as if from the result of n0 previous observations (though n0 need not necessarily be a whole number): $p(\sigma ^{2}|I^{\prime },\mu )\propto {\frac {1}{\sigma ^{n_{0}+2}}}\;\exp \left[-{\frac {n_{0}s_{0}^{2}}{2\sigma ^{2}}}\right]$ Such a prior would lead to the posterior distribution $p(\sigma ^{2}|D,I^{\prime },\mu )\propto {\frac {1}{\sigma ^{n+n_{0}+2}}}\;\exp \left[-{\frac {ns^{2}+n_{0}s_{0}^{2}}{2\sigma ^{2}}}\right]$ which is itself a scaled inverse chi-squared distribution. The scaled inverse chi-squared distributions are thus a convenient conjugate prior family for σ2 estimation. Estimation of variance when mean is unknown If the mean is not known, the most uninformative prior that can be taken for it is arguably the translation-invariant prior p(μ|I) ∝ const., which gives the following joint posterior distribution for μ and σ2, ${\begin{aligned}p(\mu ,\sigma ^{2}\mid D,I)&\propto {\frac {1}{\sigma ^{n+2}}}\exp \left[-{\frac {\sum _{i}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right]\\&={\frac {1}{\sigma ^{n+2}}}\exp \left[-{\frac {\sum _{i}^{n}(x_{i}-{\bar {x}})^{2}}{2\sigma ^{2}}}\right]\exp \left[-{\frac {n(\mu -{\bar {x}})^{2}}{2\sigma ^{2}}}\right]\end{aligned}}$ The marginal posterior distribution for σ2 is obtained from the joint posterior distribution by integrating out over μ, ${\begin{aligned}p(\sigma ^{2}|D,I)\;\propto \;&{\frac {1}{\sigma ^{n+2}}}\;\exp \left[-{\frac {\sum _{i}^{n}(x_{i}-{\bar {x}})^{2}}{2\sigma ^{2}}}\right]\;\int _{-\infty }^{\infty }\exp \left[-{\frac {n(\mu -{\bar {x}})^{2}}{2\sigma ^{2}}}\right]d\mu \\=\;&{\frac {1}{\sigma ^{n+2}}}\;\exp \left[-{\frac {\sum _{i}^{n}(x_{i}-{\bar {x}})^{2}}{2\sigma ^{2}}}\right]\;{\sqrt {2\pi \sigma 
^{2}/n}}\\\propto \;&(\sigma ^{2})^{-(n+1)/2}\;\exp \left[-{\frac {(n-1)s^{2}}{2\sigma ^{2}}}\right]\end{aligned}}$ This is again a scaled inverse chi-squared distribution, with parameters $\scriptstyle {n-1}\;$ and $\scriptstyle {s^{2}=\sum (x_{i}-{\bar {x}})^{2}/(n-1)}$. Related distributions • If $X\sim {\mbox{Scale-inv-}}\chi ^{2}(\nu ,\tau ^{2})$ then $kX\sim {\mbox{Scale-inv-}}\chi ^{2}(\nu ,k\tau ^{2})\,$ • If $X\sim {\mbox{inv-}}\chi ^{2}(\nu )\,$ (inverse-chi-squared distribution) then $X\sim {\mbox{Scale-inv-}}\chi ^{2}(\nu ,1/\nu )\,$ • If $X\sim {\mbox{Scale-inv-}}\chi ^{2}(\nu ,\tau ^{2})$ then ${\frac {X}{\tau ^{2}\nu }}\sim {\mbox{inv-}}\chi ^{2}(\nu )\,$ (inverse-chi-squared distribution) • If $X\sim {\mbox{Scale-inv-}}\chi ^{2}(\nu ,\tau ^{2})$ then $X\sim {\textrm {Inv-Gamma}}\left({\frac {\nu }{2}},{\frac {\nu \tau ^{2}}{2}}\right)$ (inverse-gamma distribution) • The scaled inverse chi-squared distribution is a special case of the type 5 Pearson distribution. References • Gelman A. et al (1995), Bayesian Data Analysis, pp 474–475; also pp 47, 480 1. 
Gelman et al (1995), Bayesian Data Analysis (1st ed), p.68
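The identification Scale-inv-χ²(ν, τ²) = Inv-Gamma(ν/2, ντ²/2) listed among the related distributions can be verified numerically by comparing the two density formulas directly. A short self-contained check (function names are illustrative):

```python
import math

def scaled_inv_chi2_pdf(x, nu, tau2):
    # f(x; nu, tau^2) = (nu*tau^2/2)^(nu/2) / Gamma(nu/2)
    #                   * exp(-nu*tau^2/(2x)) / x^(1 + nu/2)
    a = nu * tau2 / 2.0
    return a**(nu / 2) / math.gamma(nu / 2) * math.exp(-a / x) / x**(1 + nu / 2)

def inv_gamma_pdf(x, alpha, beta):
    # Inverse-gamma density: beta^alpha / Gamma(alpha) * x^(-alpha-1) * exp(-beta/x)
    return beta**alpha / math.gamma(alpha) * x**(-alpha - 1) * math.exp(-beta / x)

# Scale-inv-chi^2(nu, tau^2) coincides with Inv-Gamma(nu/2, nu*tau^2/2).
nu, tau2 = 5.0, 1.3
for x in (0.2, 1.0, 3.7):
    assert abs(scaled_inv_chi2_pdf(x, nu, tau2)
               - inv_gamma_pdf(x, nu / 2, nu * tau2 / 2)) < 1e-12
```

Substituting α = ν/2 and β = ντ²/2 into the inverse-gamma density reproduces the scaled inverse chi-squared density term by term.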
Bipyramid A (symmetric) n-gonal bipyramid or dipyramid is a polyhedron formed by joining an n-gonal pyramid and its mirror image base-to-base.[3][4] An n-gonal bipyramid has 2n triangle faces, 3n edges, and 2 + n vertices. Set of dual-uniform n-gonal bipyramids (example: dual-uniform hexagonal bipyramid, n = 6): • Type: dual-uniform in the sense of dual-semiregular polyhedron • Schläfli symbol: { } + {n} [1] • Faces: 2n congruent isosceles triangles • Edges: 3n • Vertices: 2 + n • Face configuration: V4.4.n • Symmetry group: Dnh, [n,2], (*n22), order 4n • Rotation group: Dn, [n,2]+, (n22), order 2n • Dual polyhedron: (convex) uniform n-gonal prism • Properties: convex, face-transitive, regular vertices[2] • Net: (example: net of pentagonal bipyramid, n = 5) The "n-gonal" in the name of a bipyramid does not refer to a face but to the internal polygon base, lying in the mirror plane that connects the two pyramid halves. (If it were a face, then each of its edges would connect three faces instead of two.) "Regular", right bipyramids A "regular" bipyramid has a regular polygon base. It is usually implied to be also a right bipyramid. A right bipyramid has its two apices right above and right below the center or the centroid of its polygon base. A "regular" right (symmetric) n-gonal bipyramid has Schläfli symbol { } + {n}. A right (symmetric) bipyramid has Schläfli symbol { } + P, for polygon base P. The "regular" right (thus face-transitive) n-gonal bipyramid with regular vertices[2] is the dual of the n-gonal uniform (thus right) prism, and has congruent isosceles triangle faces. A "regular" right (symmetric) n-gonal bipyramid can be projected on a sphere or globe as a "regular" right (symmetric) n-gonal spherical bipyramid: n equally spaced lines of longitude going from pole to pole, and an equator line bisecting them. 
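The face, edge, and vertex counts given above satisfy Euler's polyhedron formula V − E + F = 2 for every n, which is easy to confirm:

```python
def bipyramid_counts(n):
    """Vertex, edge, and face counts of an n-gonal bipyramid (n >= 2)."""
    return 2 + n, 3 * n, 2 * n   # V, E, F

# Every n-gonal bipyramid satisfies Euler's formula V - E + F = 2.
for n in range(2, 20):
    v, e, f = bipyramid_counts(n)
    assert v - e + f == 2
```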
"Regular" right (symmetric) n-gonal bipyramids include the digonal, triangular (see: J12), square (see: O), pentagonal (see: J13), hexagonal, heptagonal, octagonal, enneagonal, and decagonal bipyramids, and so on up to the apeirogonal bipyramid, with face configurations V2.4.4, V3.4.4, V4.4.4, V5.4.4, V6.4.4, V7.4.4, V8.4.4, V9.4.4, V10.4.4, ..., V∞.4.4. Equilateral triangle bipyramids Only three kinds of bipyramids can have all edges of the same length (which implies that all faces are equilateral triangles, and thus that the bipyramid is a deltahedron): the "regular" right (symmetric) triangular, tetragonal, and pentagonal bipyramids. The tetragonal or square bipyramid with equal-length edges, or regular octahedron, counts among the Platonic solids; the triangular and pentagonal bipyramids with equal-length edges count among the Johnson solids (J12 and J13). Kaleidoscopic symmetry A "regular" right (symmetric) n-gonal bipyramid has dihedral symmetry group Dnh, of order 4n, except in the case of a regular octahedron, which has the larger octahedral symmetry group Oh, of order 48, which has three versions of D4h as subgroups. The rotation group is Dn, of order 2n, except in the case of a regular octahedron, which has the larger rotation group O, of order 24, which has three versions of D4 as subgroups. Note: Every "regular" right (symmetric) n-gonal bipyramid has the same (dihedral) symmetry group as the dual-uniform n-gonal bipyramid, for n ≠ 4. 
The 4n triangle faces of a "regular" right (symmetric) 2n-gonal bipyramid, projected as the 4n spherical triangle faces of a "regular" right (symmetric) 2n-gonal spherical bipyramid, represent the fundamental domains of dihedral symmetry in three dimensions: Dnh, [n,2], (*n22), of order 4n. These domains can be shown as alternately colored spherical triangles: • across a reflection plane through cocyclic edges, mirror image domains are in different colors (indirect isometry); • about an n-fold or a 2-fold rotation axis through opposite vertices, a domain and its image are in the same color (direct isometry). An n-gonal (symmetric) bipyramid can be seen as the Kleetope of the "corresponding" n-gonal dihedron. Fundamental domains of dihedral symmetry in three dimensions exist for each group in the series D1h, D2h, D3h, D4h, D5h, D6h, ..., Dnh. Volume Volume of a (symmetric) bipyramid: $V={\frac {2}{3}}Bh,$ where B is the area of the base and h the height from the base plane to any apex. This works for any shape of the base, and for any location of the apices, provided that h is measured as the perpendicular distance from the base plane to any apex. Hence: Volume of a (symmetric) bipyramid whose base is a regular n-sided polygon with side length s and whose height is h: $V={\frac {n}{6}}hs^{2}\cot {\frac {\pi }{n}}.$ Oblique bipyramids Non-right bipyramids are called oblique bipyramids. Concave bipyramids A concave bipyramid has a concave polygon base. Its base has no obvious center, but if its apices are right above and right below the centroid of its base, then it is a right bipyramid. (For example, a bipyramid on a concave quadrilateral base is a concave octahedron.) Asymmetric/inverted right bipyramids An asymmetric right bipyramid joins two right pyramids with congruent bases but unequal heights, base-to-base. An inverted right bipyramid joins two right pyramids with congruent bases but unequal heights, base-to-base, but on the same side of their common base. 
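The volume formulas in the Volume section above can be checked numerically. The sketch below verifies V = (2/3)Bh and the regular-base form against the known volume √2/3 of a unit-edge regular octahedron (a square bipyramid with s = 1 and apex height 1/√2); the helper names are illustrative.

```python
import math

def bipyramid_volume(base_area, height):
    # V = (2/3) * B * h, where h is the perpendicular distance
    # from the base plane to either apex.
    return 2.0 / 3.0 * base_area * height

def regular_bipyramid_volume(n, s, h):
    # Base is a regular n-gon with side s: B = (n/4) s^2 cot(pi/n),
    # so V = (n/6) h s^2 cot(pi/n).
    return n / 6.0 * h * s * s / math.tan(math.pi / n)

# Regular octahedron with edge length 1: square base (s = 1, B = 1),
# apex height h = 1/sqrt(2); its volume is sqrt(2)/3.
h = 1.0 / math.sqrt(2)
v = regular_bipyramid_volume(4, 1.0, h)
assert abs(v - math.sqrt(2) / 3) < 1e-12
assert abs(v - bipyramid_volume(1.0, h)) < 1e-12
```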
The dual of an asymmetric/inverted right n-gonal bipyramid is an n-gonal frustum. A "regular" asymmetric/inverted right n-gonal bipyramid has symmetry group Cnv, of order 2n. Examples: "regular" asymmetric/inverted right hexagonal bipyramids (n = 6): Asymmetric Inverted Scalene triangle bipyramids An "isotoxal" right (symmetric) di-n-gonal bipyramid is a right (symmetric) 2n-gonal bipyramid with an isotoxal flat polygon base: its 2n basal vertices are coplanar, but alternate in two radii. All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of a right "symmetric" di-n-gonal scalenohedron, with an isotoxal flat polygon base. An "isotoxal" right (symmetric) di-n-gonal bipyramid has n two-fold rotation axes through opposite basal vertices, n reflection planes through opposite apical edges, an n-fold rotation axis through apices, a reflection plane through base, and an n-fold rotation-reflection axis through apices,[4] representing symmetry group Dnh, [n,2], (*22n), of order 4n. (The reflection about the base plane corresponds to the 0° rotation-reflection. If n is even, then there is an inversion symmetry about the center, corresponding to the 180° rotation-reflection.) Example with 2n = 2×3: An "isotoxal" right (symmetric) ditrigonal bipyramid has three similar vertical planes of symmetry, intersecting in a (vertical) 3-fold rotation axis; perpendicular to them is a fourth plane of symmetry (horizontal); at the intersection of the three vertical planes with the horizontal plane are three similar (horizontal) 2-fold rotation axes; there is no center of inversion symmetry,[5] but there is a center of symmetry: the intersection point of the four axes. 
Example with 2n = 2×4: An "isotoxal" right (symmetric) ditetragonal bipyramid has four vertical planes of symmetry of two kinds, intersecting in a (vertical) 4-fold rotation axis; perpendicular to them is a fifth plane of symmetry (horizontal); at the intersection of the four vertical planes with the horizontal plane are four (horizontal) 2-fold rotation axes of two kinds, each perpendicular to a plane of symmetry; two vertical planes bisect the angles between two horizontal axes; and there is a centre of inversion symmetry.[6] Note: For at most two particular values of zA = |zA'|, the faces of such a scalene triangle bipyramid may be isosceles. Double example: • The bipyramid with isotoxal 2×2-gon base vertices: U = (1,0,0), U′ = (−1,0,0), V = (0,2,0), V′ = (0,−2,0), and with "right" symmetric apices: A = (0,0,1), A′ = (0,0,−1), has its faces isosceles. Indeed: upper apical edge lengths: AU = AU′ = ${\sqrt {2}},$ AV = AV′ = ${\sqrt {5}};$ base edge length: UV = VU′ = U′V' = V′U = ${\sqrt {5}};$ lower apical edge lengths = upper ones. • The bipyramid with same base vertices, but with "right" symmetric apices: A = (0,0,2), A′ = (0,0,−2), also has its faces isosceles. Indeed: upper apical edge lengths: AU = AU′ = ${\sqrt {5}},$ AV = AV′ = 2${\sqrt {2}};$ base edge length = previous one = ${\sqrt {5}};$ lower apical edge lengths = upper ones. In crystallography, "isotoxal" right (symmetric) "didigonal" (*) (8-faced), ditrigonal (12-faced), ditetragonal (16-faced), and dihexagonal (24-faced) bipyramids exist.[4][3] (*) The smallest geometric di-n-gonal bipyramids have eight faces, and are topologically identical to the regular octahedron. In this case (2n = 2×2): an "isotoxal" right (symmetric) "didigonal" bipyramid is called a rhombic bipyramid,[4][3] although all its faces are scalene triangles, because its flat polygon base is a rhombus. 
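The first case of the double example above (base vertices U, U′, V, V′ with apices at height ±1) can be verified with a few lines of coordinate arithmetic:

```python
import math

def dist(p, q):
    return math.dist(p, q)   # Euclidean distance (Python 3.8+)

# Isotoxal 2x2-gon base and "right" symmetric apices, as in the
# first case of the double example.
U, Up = (1, 0, 0), (-1, 0, 0)
V, Vp = (0, 2, 0), (0, -2, 0)
A, Ap = (0, 0, 1), (0, 0, -1)

assert math.isclose(dist(A, U), math.sqrt(2))   # upper apical edge AU
assert math.isclose(dist(A, V), math.sqrt(5))   # upper apical edge AV
assert math.isclose(dist(U, V), math.sqrt(5))   # base edge UV
# AV = UV, so each face such as AUV is isosceles, as stated.
assert math.isclose(dist(A, V), dist(U, V))
```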
Scalenohedra A "regular" right "symmetric" di-n-gonal scalenohedron is defined by a regular zigzag skew 2n-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each basal edge to each apex. It has two apices and 2n basal vertices, 4n faces, and 6n edges; it is topologically identical to a 2n-gonal bipyramid, but its 2n basal vertices alternate in two rings above and below the center.[3] All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of a right "symmetric" di-n-gonal bipyramid, with a regular zigzag skew polygon base. A "regular" right "symmetric" di-n-gonal scalenohedron has n two-fold rotation axes through opposite basal mid-edges, n reflection planes through opposite apical edges, an n-fold rotation axis through apices, and a 2n-fold rotation-reflection axis through apices (about which n rotation-reflections globally preserve the solid),[4] representing symmetry group Dnv = Dnd, [2+,2n], (2*n), of order 4n. (If n is odd, then there is an inversion symmetry about the center, corresponding to the 180° rotation-reflection.) Example with 2n = 2×3: A "regular" right "symmetric" ditrigonal scalenohedron has three similar vertical planes of symmetry inclined to one another at 60° and intersecting in a (vertical) 3-fold rotation axis, three similar horizontal 2-fold rotation axes, each perpendicular to a plane of symmetry, a center of inversion symmetry,[7] and a vertical 6-fold rotation-reflection axis. Example with 2n = 2×2: A "regular" right "symmetric" "didigonal" scalenohedron has only one vertical and two horizontal 2-fold rotation axes, two vertical planes of symmetry, which bisect the angles between the horizontal pair of axes, and a vertical 4-fold rotation-reflection axis;[8] it has no center of inversion symmetry. Note: For at most two particular values of zA = |zA'|, the faces of such a scalenohedron may be isosceles. 
Double example: • The scalenohedron with regular zigzag skew 2×2-gon base vertices: U = (3,0,2), U' = (−3,0,2), V = (0,3,−2), V' = (0,−3,−2), and with "right" symmetric apices: A = (0,0,3), A' = (0,0,−3), has its faces isosceles. Indeed: upper apical edge lengths: AU = AU' = ${\sqrt {10}},$ AV = AV' = ${\sqrt {34}};$ base edge length: UV = VU' = U'V' = V'U = ${\sqrt {34}};$ lower apical edge lengths = (swapped) upper ones. • The scalenohedron with same base vertices, but with "right" symmetric apices: A = (0,0,7), A' = (0,0,−7), also has its faces isosceles. Indeed: upper apical edge lengths: AU = AU' = ${\sqrt {34}},$ AV = AV' = 3${\sqrt {10}};$ base edge length = previous one = ${\sqrt {34}};$ lower apical edge lengths = (swapped) upper ones. In crystallography, "regular" right "symmetric" "didigonal" (8-faced) and ditrigonal (12-faced) scalenohedra exist.[4][3] The smallest geometric scalenohedra have eight faces, and are topologically identical to the regular octahedron. In this case (2n = 2×2), in crystallography, a "regular" right "symmetric" "didigonal" (8-faced) scalenohedron is called a tetragonal scalenohedron.[4][3] Let us temporarily focus on the "regular" right "symmetric" 8-faced scalenohedra with h = r, i.e. zA = |zA'| = xU = |xU'| = yV = |yV'|. Their two apices can be represented as A = (0,0,1), A' = (0,0,−1), and their four basal vertices as U = (1,0,z), U' = (−1,0,z), V = (0,1,−z), V' = (0,−1,−z), where z is a parameter between 0 and 1. At z = 0, it is a regular octahedron; at z = 1, it has four pairs of coplanar faces, and merging these into four congruent isosceles triangles makes it a disphenoid; for z > 1, it is concave. Example: geometric variations with "regular" right "symmetric" 8-faced scalenohedra: z = 0.1 z = 0.25 z = 0.5 z = 0.95 z = 1.5 Note: If the 2n-gon base is both isotoxal in-out and zigzag skew, then not all faces of the "isotoxal" right "symmetric" scalenohedron are congruent. 
Example with five different edge lengths: The scalenohedron with isotoxal in-out zigzag skew 2×2-gon base vertices: U = (1,0,1), U′ = (−1,0,1), V = (0,2,−1), V′ = (0,−2,−1), and with "right" symmetric apices: A = (0,0,3), A′ = (0,0,−3), has congruent scalene upper faces, and congruent scalene lower faces, but not all its faces are congruent. Indeed: upper apical edge lengths: AU = AU′ = ${\sqrt {5}},$ AV = AV′ = 2${\sqrt {5}};$ base edge length: UV = VU′ = U′V' = V′U = 3; lower apical edge lengths: A′U = A′U′ = ${\sqrt {17}},$ A′V = A′V′ = 2${\sqrt {2}}.$ Note: For some particular values of zA = |zA'|, half the faces of such a scalenohedron may be isosceles or equilateral. Example with three different edge lengths: The scalenohedron with isotoxal in-out zigzag skew 2×2-gon base vertices: U = (3,0,2), U' = (−3,0,2), V = (0,${\sqrt {65}}$,−2), V' = (0,−${\sqrt {65}}$,−2), and with "right" symmetric apices: A = (0,0,7), A' = (0,0,−7), has congruent scalene upper faces, and congruent equilateral lower faces; thus not all its faces are congruent. Indeed: upper apical edge lengths: AU = AU' = ${\sqrt {34}},$ AV = AV' = ${\sqrt {146}};$ base edge length: UV = VU' = U'V' = V'U = 3${\sqrt {10}};$ lower apical edge length(s): A'U = A'U' = 3${\sqrt {10}},$ A'V = A'V' = 3${\sqrt {10}}.$ "Regular" star bipyramids A self-intersecting or star bipyramid has a star polygon base. A "regular" right symmetric star bipyramid is defined by a regular star polygon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each basal edge to each apex. A "regular" right symmetric star bipyramid has congruent isosceles triangle faces, and is isohedral. Note: For at most one particular value of zA = |zA'|, the faces of such a "regular" star bipyramid may be equilateral. A p/q-bipyramid has Coxeter diagram . 
Examples of "regular" right symmetric star bipyramids, by star polygon base: 5/2-gon, 7/2-gon, 7/3-gon, 8/3-gon, 9/2-gon, 9/4-gon, 10/3-gon, 11/2-gon, 11/3-gon, 11/4-gon, 11/5-gon, 12/5-gon. Scalene triangle star bipyramids An "isotoxal" right symmetric 2p/q-gonal star bipyramid is defined by an isotoxal in-out star 2p/q-gon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each basal edge to each apex. An "isotoxal" right symmetric 2p/q-gonal star bipyramid has congruent scalene triangle faces, and is isohedral. It can be seen as another type of a 2p/q-gonal right "symmetric" star scalenohedron, with an isotoxal in-out star polygon base. Note: For at most two particular values of zA = |zA'|, the faces of such a scalene triangle star bipyramid may be isosceles. Example of an "isotoxal" right symmetric 2p/q-gonal star bipyramid: star polygon base an isotoxal in-out 8/3-gon. Star scalenohedra A "regular" right "symmetric" 2p/q-gonal star scalenohedron is defined by a regular zigzag skew star 2p/q-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each basal edge to each apex. A "regular" right "symmetric" 2p/q-gonal star scalenohedron has congruent scalene triangle faces, and is isohedral. It can be seen as another type of a right "symmetric" 2p/q-gonal star bipyramid, with a regular zigzag skew star polygon base. Note: For at most two particular values of zA = |zA'|, the faces of such a star scalenohedron may be isosceles. 
Example of a "regular" right "symmetric" 2p/q-gonal star scalenohedron: Star polygon base Regular zigzag skew 8/3-gon Star scalenohedron image Note: If the star 2p/q-gon base is both isotoxal in-out and zigzag skew, then not all faces of the "isotoxal" right "symmetric" star scalenohedron are congruent. Example of an "isotoxal" right "symmetric" 2p/q-gonal star scalenohedron: Star polygon base Isotoxal in-out zigzag skew 8/3-gon Star scalenohedron image Note: For some particular values of zA = |zA'|, half the faces of such a star scalenohedron may be isosceles or equilateral. Example with four different edge lengths: The star scalenohedron with isotoxal in-out zigzag skew 8/3-gon base vertices: U0 = (1,0,1), U1 = (0,1,1), U2 = (−1,0,1), U3 = (0,−1,1), V0 = (2,2,−1), V1 = (−2,2,−1), V2 = (−2,−2,−1), V3 = (2,−2,−1), and with "right" symmetric apices: A = (0,0,3), A′ = (0,0,−3), has congruent scalene upper faces, and congruent isosceles lower faces; thus not all its faces are congruent. Indeed: upper apical edge lengths: AU0 = AU1 = AU2 = AU3 = ${\sqrt {5}},$ AV0 = AV1 = AV2 = AV3 = 2${\sqrt {6}};$ base edge length: U0V1 = V1U3 = U3V0 = V0U2 = U2V3 = V3U1 = U1V2 = V2U0 = ${\sqrt {17}};$ lower apical edge lengths: A′U0 = A′U1 = A′U2 = A′U3 = ${\sqrt {17}},$ A′V0 = A′V1 = A′V2 = A′V3 = 2${\sqrt {3}}.$ Example with three different edge lengths: The star scalenohedron with isotoxal in-out zigzag skew 8/3-gon base vertices: U0 = (4,0,${\sqrt {2}}$), U1 = (0,4,${\sqrt {2}}$), U2 = (−4,0,${\sqrt {2}}$), U3 = (0,−4,${\sqrt {2}}$), V0 = (6,6,−${\sqrt {2}}$), V1 = (−6,6,−${\sqrt {2}}$), V2 = (−6,−6,−${\sqrt {2}}$), V3 = (6,−6,−${\sqrt {2}}$), and with "right" symmetric apices: A = (0,0,7${\sqrt {2}}$), A' = (0,0,−7${\sqrt {2}}$), has congruent scalene upper faces, and congruent equilateral lower faces; thus not all its faces are congruent. 
Indeed: upper apical edge lengths: AU0 = AU1 = AU2 = AU3 = 2${\sqrt {22}},$ AV0 = AV1 = AV2 = AV3 = 10${\sqrt {2}};$ base edge length: U0V1 = V1U3 = U3V0 = V0U2 = U2V3 = V3U1 = U1V2 = V2U0 = 12; lower apical edge length(s): A'U0 = A'U1 = A'U2 = A'U3 = 12, A'V0 = A'V1 = A'V2 = A'V3 = 12. 4-polytopes with bipyramidal cells The dual of the rectification of each convex regular 4-polytopes is a cell-transitive 4-polytope with bipyramidal cells. In the following, the apex vertex of the bipyramid is A and an equator vertex is E. The distance between adjacent vertices on the equator EE = 1, the apex to equator edge is AE and the distance between the apices is AA. The bipyramid 4-polytope will have VA vertices where the apices of NA bipyramids meet. It will have VE vertices where the type E vertices of NE bipyramids meet. NAE bipyramids meet along each type AE edge. NEE bipyramids meet along each type EE edge. CAE is the cosine of the dihedral angle along an AE edge. CEE is the cosine of the dihedral angle along an EE edge. As cells must fit around an edge, NEE cos−1(CEE) ≤ 2π, NAE cos−1(CAE) ≤ 2π. 
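The edge-fitting conditions just stated can be checked numerically against individual table rows; for instance the rectified 5-cell row (NAE = NEE = 3, CAE = CEE = −1/7) and the rectified 600-cell row (NAE = NEE = 3, CAE = CEE = −(11 + 4√5)/41):

```python
import math

# Edge-fitting conditions: N_EE * arccos(C_EE) <= 2*pi
# and N_AE * arccos(C_AE) <= 2*pi.

# Rectified 5-cell row: three cells around each edge, dihedral cosine -1/7.
assert 3 * math.acos(-1 / 7) <= 2 * math.pi

# Rectified 600-cell row: the fit is much tighter but still valid.
c = -(11 + 4 * math.sqrt(5)) / 41
assert 3 * math.acos(c) <= 2 * math.pi
```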
4-polytopes with bipyramidal cells (each is the dual of the named rectified polytope; per row: cells, VA, VE, NA, NE, NAE, NEE, cell shape, AA, AE**, CAE, CEE): • Dual of rectified 5-cell: 10 cells; VA = 5, VE = 5; NA = 4, NE = 6; NAE = 3, NEE = 3; triangular bipyramid; AA = $ {\frac {2}{3}}$; AE = 0.667; CAE = $ -{\frac {1}{7}}$; CEE = $ -{\frac {1}{7}}$ • Dual of rectified tesseract: 32 cells; VA = 16, VE = 8; NA = 4, NE = 12; NAE = 3, NEE = 4; triangular bipyramid; AA = $ {\frac {\sqrt {2}}{3}}$; AE = 0.624; CAE = $ -{\frac {2}{5}}$; CEE = $ -{\frac {1}{5}}$ • Dual of rectified 24-cell: 96 cells; VA = 24, VE = 24; NA = 8, NE = 12; NAE = 4, NEE = 3; triangular bipyramid; AA = $ {\frac {2{\sqrt {2}}}{3}}$; AE = 0.745; CAE = $ {\frac {1}{11}}$; CEE = $ -{\frac {5}{11}}$ • Dual of rectified 120-cell: 1200 cells; VA = 600, VE = 120; NA = 4, NE = 30; NAE = 3, NEE = 5; triangular bipyramid; AA = $ {\frac {{\sqrt {5}}-1}{3}}$; AE = 0.613; CAE = $ -{\frac {10+9{\sqrt {5}}}{61}}$; CEE = $ -{\frac {7-12{\sqrt {5}}}{61}}$ • Dual of rectified 16-cell: 24 cells*; VA = 8, VE = 16; NA = 6, NE = 6; NAE = 3, NEE = 3; square bipyramid; AA = $ {\sqrt {2}}$; AE = 1; CAE = $ -{\frac {1}{3}}$; CEE = $ -{\frac {1}{3}}$ • Dual of rectified cubic honeycomb: ∞ cells; VA = ∞, VE = ∞; NA = 6, NE = 12; NAE = 3, NEE = 4; square bipyramid; AA = $ 1$; AE = 0.866; CAE = $ -{\frac {1}{2}}$; CEE = $ 0$ • Dual of rectified 600-cell: 720 cells; VA = 120, VE = 600; NA = 12, NE = 6; NAE = 3, NEE = 3; pentagonal bipyramid; AA = $ {\frac {5+3{\sqrt {5}}}{5}}$; AE = 1.447; CAE = $ -{\frac {11+4{\sqrt {5}}}{41}}$; CEE = $ -{\frac {11+4{\sqrt {5}}}{41}}$ * The rectified 16-cell is the regular 24-cell and vertices are all equivalent – octahedra are regular bipyramids. ** Given numerically due to more complex form. Other dimensions In general, a bipyramid can be seen as an n-polytope constructed with a (n − 1)-polytope in a hyperplane with two points in opposite directions and equal perpendicular distances from the hyperplane. If the (n − 1)-polytope is a regular polytope, it will have identical pyramidal facets. A 2-dimensional ("regular") right symmetric (digonal) bipyramid is formed by joining two congruent isosceles triangles base-to-base; its outline is a rhombus, {}+{}. Polyhedral bipyramids A polyhedral bipyramid is a 4-polytope with a polyhedron base and two apex points. An example is the 16-cell, which is an octahedral bipyramid, {}+{3,4}, and more generally an n-orthoplex is an (n − 1)-orthoplex bipyramid, {}+{3^{n-2},4}. 
Other bipyramids include the tetrahedral bipyramid, {}+{3,3}, icosahedral bipyramid, {}+{3,5}, and dodecahedral bipyramid, {}+{5,3}; the first two have all regular cells, and they are also Blind polytopes. See also • Trapezohedron References Citations 1. N.W. Johnson: Geometries and Transformations, (2018) ISBN 978-1-107-10340-5 Chapter 11: Finite symmetry groups, 11.3 Pyramids, Prisms, and Antiprisms, Figure 11.3c 2. "duality". maths.ac-noumea.nc. Retrieved 5 November 2020. 3. "The 48 Special Crystal Forms". 18 September 2013. Archived from the original on 18 September 2013. Retrieved 18 November 2020. 4. "Crystal Form, Zones, Crystal Habit". Tulane.edu. Retrieved 16 September 2017. 5. Spencer 1911, p. 581, or p. 603 on Wikisource, CRYSTALLOGRAPHY, 6. HEXAGONAL SYSTEM, Rhombohedral Division, DITRIGONAL BIPYRAMIDAL CLASS. 6. Spencer 1911, p. 577, or p. 599 on Wikisource, CRYSTALLOGRAPHY, 2. TETRAGONAL SYSTEM, HOLOSYMMETRIC CLASS, FIG. 46. 7. Spencer 1911, p. 580, or p. 602 on Wikisource, CRYSTALLOGRAPHY, 6. HEXAGONAL SYSTEM, Rhombohedral Division, HOLOSYMMETRIC CLASS, FIG. 68. 8. Spencer 1911, p. 577, or p. 599 on Wikisource, CRYSTALLOGRAPHY, 2. TETRAGONAL SYSTEM, SCALENOHEDRAL CLASS, FIG. 51. General references • Anthony Pugh (1976). Polyhedra: A visual approach. California: University of California Press Berkeley. ISBN 0-520-03056-7. Chapter 4: Duals of the Archimedean polyhedra, prisms and antiprisms • Spencer, Leonard James (1911). "Crystallography". In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 07 (11th ed.). Cambridge University Press. pp. 569–591. External links • Weisstein, Eric W. "Dipyramid". MathWorld. • Weisstein, Eric W. "Isohedron". MathWorld. • The Uniform Polyhedra • Virtual Reality Polyhedra: The Encyclopedia of Polyhedra • VRML models (George Hart) • Conway Notation for Polyhedra Try: "dPn", where n = 3, 4, 5, 6, ... Example: "dP4" is an octahedron. 
Uniform polytope In geometry, a uniform polytope of dimension three or higher is a vertex-transitive polytope bounded by uniform facets. The uniform polytopes in two dimensions are the regular polygons (the definition is different in 2 dimensions to exclude vertex-transitive even-sided polygons that alternate two different lengths of edges). Examples of convex uniform polytopes: in 2D, the truncated triangle or uniform hexagon; in 3D, the truncated octahedron; in 4D, the truncated 16-cell; in 5D, the truncated 5-orthoplex. This is a generalization of the older category of semiregular polytopes, but also includes the regular polytopes. Further, star regular faces and vertex figures (star polygons) are allowed, which greatly expand the possible solutions. A strict definition requires uniform polytopes to be finite, while a more expansive definition allows uniform honeycombs (2-dimensional tilings and higher dimensional honeycombs) of Euclidean and hyperbolic space to be considered polytopes as well. Operations Nearly every uniform polytope can be generated by a Wythoff construction, and represented by a Coxeter diagram. Notable exceptions include the great dirhombicosidodecahedron in three dimensions and the grand antiprism in four dimensions. The terminology for the convex uniform polytopes used in the uniform polyhedron, uniform 4-polytope, uniform 5-polytope, uniform 6-polytope, uniform tiling, and convex uniform honeycomb articles was coined by Norman Johnson. Equivalently, the Wythoffian polytopes can be generated by applying basic operations to the regular polytopes in that dimension. This approach was first used by Johannes Kepler, and is the basis of the Conway polyhedron notation. Rectification operators Regular n-polytopes have n orders of rectification. The zeroth rectification is the original form. The (n−1)-th rectification is the dual. 
A rectification reduces edges to vertices, a birectification reduces faces to vertices, a trirectification reduces cells to vertices, a quadirectification reduces 4-faces to vertices, a quintirectification reduces 5-faces to vertices, and so on. An extended Schläfli symbol can be used for representing rectified forms, with a single subscript: • k-th rectification = tk{p1, p2, ..., pn-1} = kr. Truncation operators Truncation operations can be applied to regular n-polytopes in any combination. The resulting Coxeter diagram has two ringed nodes, and the operation is named for the distance between them. Truncation cuts vertices, cantellation cuts edges, runcination cuts faces, and sterication cuts cells. Each higher operation also cuts the lower ones, so a cantellation also truncates vertices. 1. t0,1 or t: Truncation - applied to polygons and higher. A truncation removes vertices and inserts a new facet in place of each former vertex. Faces are truncated, doubling their edges. (The term, coined by Kepler, comes from Latin truncare 'to cut off'.) • There are also higher truncations: bitruncation t1,2 or 2t, tritruncation t2,3 or 3t, quadritruncation t3,4 or 4t, quintitruncation t4,5 or 5t, etc. 2. t0,2 or rr: Cantellation - applied to polyhedra and higher. It can be seen as rectifying a rectification. A cantellation truncates both vertices and edges and replaces them with new facets. Cells are replaced by topologically expanded copies of themselves. (The term, coined by Johnson, is derived from the verb cant, like bevel, meaning to cut with a slanted face.) • There are also higher cantellations: bicantellation t1,3 or r2r, tricantellation t2,4 or r3r, quadricantellation t3,5 or r4r, etc. • t0,1,2 or tr: Cantitruncation - applied to polyhedra and higher. It can be seen as truncating a rectification. A cantitruncation truncates both vertices and edges and replaces them with new facets. Cells are replaced by topologically expanded copies of themselves.
(The composite term combines cantellation and truncation.) • There are also higher cantitruncations: bicantitruncation t1,2,3 or t2r, tricantitruncation t2,3,4 or t3r, quadricantitruncation t3,4,5 or t4r, etc. 3. t0,3: Runcination - applied to uniform 4-polytopes and higher. Runcination truncates vertices, edges, and faces, replacing each with new facets. 4-faces are replaced by topologically expanded copies of themselves. (The term, coined by Johnson, is derived from Latin runcina 'carpenter's plane'.) • There are also higher runcinations: biruncination t1,4, triruncination t2,5, etc. 4. t0,4 or 2r2r: Sterication - applied to uniform 5-polytopes and higher. It can be seen as birectifying a birectification. Sterication truncates vertices, edges, faces, and cells, replacing each with new facets. 5-faces are replaced by topologically expanded copies of themselves. (The term, coined by Johnson, is derived from Greek stereos 'solid'.) • There are also higher sterications: bisterication t1,5 or 2r3r, tristerication t2,6 or 2r4r, etc. • t0,2,4 or 2t2r: Stericantellation - applied to uniform 5-polytopes and higher. It can be seen as bitruncating a birectification. • There are also higher stericantellations: bistericantellation t1,3,5 or 2t3r, tristericantellation t2,4,6 or 2t4r, etc. 5. t0,5: Pentellation - applied to uniform 6-polytopes and higher. Pentellation truncates vertices, edges, faces, cells, and 4-faces, replacing each with new facets. 6-faces are replaced by topologically expanded copies of themselves. (Pentellation is derived from Greek pente 'five'.) • There are also higher pentellations: bipentellation t1,6, tripentellation t2,7, etc. 6. t0,6 or 3r3r: Hexication - applied to uniform 7-polytopes and higher. It can be seen as trirectifying a trirectification. Hexication truncates vertices, edges, faces, cells, 4-faces, and 5-faces, replacing each with new facets. 7-faces are replaced by topologically expanded copies of themselves.
(Hexication is derived from Greek hex 'six'.) • There are also higher hexications: bihexication t1,7 or 3r4r, trihexication t2,8 or 3r5r, etc. • t0,3,6 or 3t3r: Hexiruncination - applied to uniform 7-polytopes and higher. It can be seen as tritruncating a trirectification. • There are also higher hexiruncinations: bihexiruncination t1,4,7 or 3t4r, trihexiruncination t2,5,8 or 3t5r, etc. 7. t0,7: Heptellation - applied to uniform 8-polytopes and higher. Heptellation truncates vertices, edges, faces, cells, 4-faces, 5-faces, and 6-faces, replacing each with new facets. 8-faces are replaced by topologically expanded copies of themselves. (Heptellation is derived from Greek hepta 'seven'.) • There are also higher heptellations: biheptellation t1,8, triheptellation t2,9, etc. 8. t0,8 or 4r4r: Octellation - applied to uniform 9-polytopes and higher. 9. t0,9: Ennecation - applied to uniform 10-polytopes and higher. In addition, combinations of truncations can be performed that also generate new uniform polytopes. For example, a runcitruncation is a runcination and truncation applied together. If all truncations are applied at once, the operation can be more generally called an omnitruncation. Alternation One special operation, called alternation, removes alternate vertices from a polytope with only even-sided faces. An alternated omnitruncated polytope is called a snub. The resulting polytopes can always be constructed, but they are not generally reflective and do not in general have uniform solutions. The set of polytopes formed by alternating the hypercubes are known as demicubes. In three dimensions, this produces a tetrahedron; in four dimensions, this produces a 16-cell, or demitesseract. Vertex figure Uniform polytopes can be constructed from their vertex figure, the arrangement of edges, faces, cells, etc. around each vertex.
Uniform polytopes represented by a Coxeter diagram, marking active mirrors by rings, have reflectional symmetry and can be constructed simply by recursive reflections of the vertex figure. A smaller number of nonreflectional uniform polytopes have a single vertex figure but are not generated by simple reflections; most of these can be represented by operations such as alternation of other uniform polytopes. Vertex figures for single-ringed Coxeter diagrams can be constructed from the diagram by removing the ringed node and ringing the neighboring nodes. Such vertex figures are themselves vertex-transitive. Vertex figures of multiringed polytopes can be constructed by a slightly more complicated process, and the result is in general not itself a uniform polytope. For example, the vertex figure of a truncated regular polytope (with 2 rings) is a pyramid. An omnitruncated polytope (all nodes ringed) will always have an irregular simplex as its vertex figure. Circumradius Uniform polytopes have equal edge lengths, and all vertices are an equal distance from the center, called the circumradius. Uniform polytopes whose circumradius is equal to the edge length can be used as vertex figures for uniform honeycombs. For example, the regular hexagon divides into 6 equilateral triangles and is the vertex figure for the regular triangular tiling. Also, the cuboctahedron divides into 8 regular tetrahedra and 6 square pyramids (half-octahedra), and it is the vertex figure for the alternated cubic honeycomb. Uniform polytopes by dimension It is useful to classify the uniform polytopes by dimension. This is equivalent to the number of nodes on the Coxeter diagram, or the number of hyperplanes in the Wythoffian construction. Because (n+1)-dimensional polytopes are tilings of n-dimensional spherical space, tilings of n-dimensional Euclidean and hyperbolic space are also considered to be (n+1)-dimensional. Hence, the tilings of two-dimensional space are grouped with the three-dimensional solids.
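The circumradius condition above can be checked numerically. The following Python sketch (illustrative, not from the article) generates the cuboctahedron's vertices as the permutations of (±1, ±1, 0) and confirms that its circumradius equals its edge length, as required of a honeycomb vertex figure:

```python
from itertools import permutations, combinations
import math

# Vertices of a cuboctahedron: all distinct permutations of (+-1, +-1, 0).
verts = sorted({p for signs in [(1, 1), (1, -1), (-1, 1), (-1, -1)]
                  for p in permutations((signs[0], signs[1], 0))})
assert len(verts) == 12

# All vertices are equidistant from the center, so any one gives the circumradius.
circumradius = math.dist(verts[0], (0, 0, 0))
# The edge length is the smallest pairwise vertex distance.
edge = min(math.dist(a, b) for a, b in combinations(verts, 2))
print(circumradius, edge)  # both sqrt(2): the ratio is 1
```

Since the ratio is 1, rings of tetrahedra and square pyramids with this common edge length can close up around a vertex, which is exactly how the alternated cubic honeycomb arises.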
One dimension The only one-dimensional polytope is the line segment. It corresponds to the Coxeter family A1. Two dimensions In two dimensions, there is an infinite family of convex uniform polytopes, the regular polygons, the simplest being the equilateral triangle. Truncated regular polygons become bicolored, geometrically quasiregular polygons of twice as many sides, t{p} = {2p}. The first few regular polygons (and quasiregular forms) are:
• Triangle (2-simplex): {3}
• Square (2-orthoplex, 2-cube): {4} = t{2}
• Pentagon: {5}
• Hexagon: {6} = t{3}
• Heptagon: {7}
• Octagon: {8} = t{4}
• Enneagon: {9}
• Decagon: {10} = t{5}
• Hendecagon: {11}
• Dodecagon: {12} = t{6}
• Tridecagon: {13}
• Tetradecagon: {14} = t{7}
• Pentadecagon: {15}
• Hexadecagon: {16} = t{8}
• Heptadecagon: {17}
• Octadecagon: {18} = t{9}
• Enneadecagon: {19}
• Icosagon: {20} = t{10}
There is also an infinite set of star polygons (one for each rational number greater than 2), but these are non-convex. The simplest example is the pentagram, which corresponds to the rational number 5/2. Regular star polygons, {p/q}, can be truncated into semiregular star polygons, t{p/q} = {2p/q}, but these become double-coverings if q is even. A truncation can also be made with a reverse-orientation polygon, t{p/(p-q)} = {2p/(p-q)}; for example, t{5/3} = {10/3}. The first few are:
• Pentagram: {5/2}
• Heptagrams: {7/2}, {7/3}
• Octagram: {8/3} = t{4/3}
• Enneagrams: {9/2}, {9/4}
• Decagram: {10/3} = t{5/3}
• ...n-agrams: {p/q}
Regular polygons are represented by the Schläfli symbol {p} for a p-gon. Regular polygons are self-dual, so rectification produces the same polygon. The uniform truncation operation doubles the sides to {2p}. The snub operation, alternating the truncation, restores the original polygon {p}. Thus all uniform polygons are also regular.
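The relation t{p} = {2p} can be illustrated computationally. In this Python sketch (illustrative, not from the article), truncating an equilateral triangle at the one-third points of each edge yields six vertices with all edges of equal length, i.e. a regular hexagon:

```python
import math

# t{3} = {6}: cut each edge of an equilateral triangle at its 1/3 and 2/3 points.
p = 3
tri = [(math.cos(2 * math.pi * k / p), math.sin(2 * math.pi * k / p)) for k in range(p)]

hexagon = []
for k in range(p):
    a, b = tri[k], tri[(k + 1) % p]
    hexagon.append((a[0] + (b[0] - a[0]) / 3, a[1] + (b[1] - a[1]) / 3))
    hexagon.append((a[0] + 2 * (b[0] - a[0]) / 3, a[1] + 2 * (b[1] - a[1]) / 3))

# Consecutive points around the perimeter are the hexagon's edges.
edges = [math.dist(hexagon[i], hexagon[(i + 1) % len(hexagon)])
         for i in range(len(hexagon))]
print(len(hexagon), max(edges) - min(edges))  # 6 vertices, spread ~0
```

Both the within-edge segments and the corner-cutting segments come out to one third of the original side length, so the truncation is uniform.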
The following operations can be performed on regular polygons to derive the uniform polygons, which are also regular polygons:
• Parent: {p} = t0{p}, giving {p}; symmetry [p] (order 2p)
• Rectified (dual): r{p} = t1{p}, giving {p}; symmetry [p] (order 2p)
• Truncated: t{p} = t0,1{p}, giving {2p}; symmetry [[p]] = [2p] (order 4p)
• Half: h{2p}, giving {p}; symmetry [1+,2p] = [p] (order 2p)
• Snub: s{p}, giving {p}; symmetry [[p]]+ = [p] (order 2p)
Three dimensions Main article: Uniform polyhedron In three dimensions, the situation gets more interesting. There are five convex regular polyhedra, known as the Platonic solids:
• Tetrahedron (3-simplex, pyramid) {3,3}: 4 faces {3}, 6 edges, 4 vertices {3}; symmetry Td; self-dual
• Cube (3-cube, hexahedron) {4,3}: 6 faces {4}, 12 edges, 8 vertices {3}; symmetry Oh; dual: octahedron
• Octahedron (3-orthoplex) {3,4}: 8 faces {3}, 12 edges, 6 vertices {4}; symmetry Oh; dual: cube
• Dodecahedron {5,3}: 12 faces {5}, 30 edges, 20 vertices {3}; symmetry Ih; dual: icosahedron
• Icosahedron {3,5}: 20 faces {3}, 30 edges, 12 vertices {5}; symmetry Ih; dual: dodecahedron
In addition to these, there are also 13 semiregular polyhedra, or Archimedean solids, which can be obtained via Wythoff constructions, or by performing operations such as truncation on the Platonic solids, as summarized below (listing parent, truncated, rectified, bitruncated (truncated dual), birectified (dual), cantellated, omnitruncated (cantitruncated), snub):
• Tetrahedral 3-3-2: {3,3}, (3.6.6), (3.3.3.3), (3.6.6), {3,3}, (3.4.3.4), (4.6.6), (3.3.3.3.3)
• Octahedral 4-3-2: {4,3}, (3.8.8), (3.4.3.4), (4.6.6), {3,4}, (3.4.4.4), (4.6.8), (3.3.3.3.4)
• Icosahedral 5-3-2: {5,3}, (3.10.10), (3.5.3.5), (5.6.6), {3,5}, (3.4.5.4), (4.6.10), (3.3.3.3.5)
There is also the infinite set of prisms, one for each regular polygon, and a corresponding set of antiprisms.
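The face, edge, and vertex counts of the Platonic solids follow from the Schläfli symbol {p,q} alone. This Python sketch (the closed-form counts are standard; the function name is ours) recomputes the table above and checks Euler's formula V - E + F = 2:

```python
# Counts for the Platonic solid {p,q}, valid whenever 4 - (p-2)(q-2) > 0.
def platonic_counts(p, q):
    d = 4 - (p - 2) * (q - 2)
    return 4 * p // d, 2 * p * q // d, 4 * q // d  # V, E, F

for p, q in [(3, 3), (4, 3), (3, 4), (5, 3), (3, 5)]:
    V, E, F = platonic_counts(p, q)
    assert V - E + F == 2  # Euler's formula for convex polyhedra
    print(f"{{{p},{q}}}: V={V} E={E} F={F}")
```

The positivity condition 4 - (p-2)(q-2) > 0 is exactly what limits the convex regular polyhedra to these five symbols.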
The prisms and antiprisms are given by: • Prism P2p: tr{2,p} • Antiprism Ap: sr{2,p}. The uniform star polyhedra include a further 4 regular star polyhedra, the Kepler-Poinsot polyhedra, and 53 semiregular star polyhedra. There are also two infinite sets, the star prisms (one for each star polygon) and star antiprisms (one for each rational number greater than 3/2). Constructions The Wythoffian uniform polyhedra and tilings can be defined by their Wythoff symbol, which specifies the fundamental region of the object. An extension of Schläfli notation, also used by Coxeter, applies to all dimensions; it consists of the letter 't', followed by a series of subscripted numbers corresponding to the ringed nodes of the Coxeter diagram, followed by the Schläfli symbol of the regular seed polytope. For example, the truncated octahedron is represented by the notation t0,1{3,4}. The operations on a regular polyhedron {p,q} are:
• Parent: {p,q} = t0{p,q}; Wythoff symbol q | 2 p
• Birectified (or dual): {q,p} = t2{p,q}; Wythoff symbol p | 2 q
• Truncated: t{p,q} = t0,1{p,q}; Wythoff symbol 2 q | p
• Bitruncated (or truncated dual): t{q,p} = t1,2{p,q}; Wythoff symbol 2 p | q
• Rectified: r{p,q} = t1{p,q}; Wythoff symbol 2 | p q
• Cantellated (or expanded): rr{p,q} = t0,2{p,q}; Wythoff symbol p q | 2
• Cantitruncated (or omnitruncated): tr{p,q} = t0,1,2{p,q}; Wythoff symbol 2 p q |
• Snub rectified: sr{p,q}; Wythoff symbol | 2 p q
• Snub: s{p,2q} = ht0,1{p,2q}
Four dimensions Main article: Uniform 4-polytope In four dimensions, there
are 6 convex regular 4-polytopes, 17 prisms on the Platonic and Archimedean solids (excluding the cube prism, which has already been counted as the tesseract), and two infinite sets: the prisms on the convex antiprisms, and the duoprisms. There are also 41 convex semiregular 4-polytopes, including the non-Wythoffian grand antiprism and the snub 24-cell. Both of these special 4-polytopes are composed of subgroups of the vertices of the 600-cell. The four-dimensional uniform star polytopes have not all been enumerated. The ones that have include the 10 regular star (Schläfli-Hess) 4-polytopes and 57 prisms on the uniform star polyhedra, as well as three infinite families: the prisms on the star antiprisms, the duoprisms formed by multiplying two star polygons, and the duoprisms formed by multiplying an ordinary polygon with a star polygon. There is an unknown number of 4-polytopes that do not fit into the above categories; over one thousand have been discovered so far. Every regular polytope can be seen as the images of a fundamental region in a small number of mirrors. In a 4-dimensional polytope (or 3-dimensional cubic honeycomb) the fundamental region is bounded by four mirrors. A mirror in 4-space is a three-dimensional hyperplane, but it is more convenient for our purposes to consider only its two-dimensional intersection with the three-dimensional surface of the hypersphere; thus the mirrors form an irregular tetrahedron. Each of the sixteen regular 4-polytopes is generated by one of four symmetry groups, as follows: • group [3,3,3]: the 5-cell {3,3,3}, which is self-dual; • group [3,3,4]: the 16-cell {3,3,4} and its dual, the tesseract {4,3,3}; • group [3,4,3]: the 24-cell {3,4,3}, self-dual; • group [3,3,5]: the 600-cell {3,3,5}, its dual the 120-cell {5,3,3}, and their ten regular stellations; • group [31,1,1]: contains only repeated members of the [3,3,4] family. (The groups are named in Coxeter notation.)
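As a concrete check on one of these regular 4-polytopes, the following Python sketch (illustrative, not from the article) builds the 16-cell {3,3,4} from its 8 vertices ±e_i in R^4 and counts its 24 edges, which join every non-antipodal pair of vertices:

```python
from itertools import combinations

# The 16-cell {3,3,4}: vertices at +-e_i in R^4.
verts = []
for i in range(4):
    for s in (1, -1):
        v = [0, 0, 0, 0]
        v[i] = s
        verts.append(tuple(v))

# Edges join non-antipodal pairs (squared distance 2); antipodal pairs have
# squared distance 4 and are diagonals, not edges.
edges = [(a, b) for a, b in combinations(verts, 2)
         if sum((x - y) ** 2 for x, y in zip(a, b)) == 2]
print(len(verts), len(edges))  # 8 vertices, 24 edges
```

The count matches C(8,2) - 4 = 24: all pairs minus the four antipodal diagonals.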
Eight of the convex uniform honeycombs in Euclidean 3-space are analogously generated from the cubic honeycomb {4,3,4}, by applying the same operations used to generate the Wythoffian uniform 4-polytopes. For a given symmetry simplex, a generating point may be placed on any of the four vertices, 6 edges, 4 faces, or the interior volume. On each of these 15 elements there is a point whose images, reflected in the four mirrors, are the vertices of a uniform 4-polytope. The extended Schläfli symbols are made by a t followed by inclusion of one to four subscripts 0,1,2,3. If there is one subscript, the generating point is on a corner of the fundamental region, i.e. a point where three mirrors meet. These corners are notated as • 0: vertex of the parent 4-polytope (center of the dual's cell) • 1: center of the parent's edge (center of the dual's face) • 2: center of the parent's face (center of the dual's edge) • 3: center of the parent's cell (vertex of the dual) (For the two self-dual 4-polytopes, "dual" means a similar 4-polytope in dual position.) Two or more subscripts mean that the generating point is between the corners indicated. Constructive summary The 15 constructive forms by family are summarized below. The self-dual families are listed in one column, and others as two columns with shared entries on the symmetric Coxeter diagrams. The final 10th row lists the snub 24-cell constructions. This includes all nonprismatic uniform 4-polytopes, except for the non-Wythoffian grand antiprism, which has no Coxeter family. 
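The 15 generating-point positions correspond to the nonempty subsets of the four mirrors. This Python sketch (illustrative; the name table follows the conventions used in this article) enumerates them and attaches the conventional operation names:

```python
from itertools import combinations

# Operation names keyed by which mirrors (subscripts 0..3) carry the
# generating point, following the t-subscript conventions in the text.
names = {(0,): "parent", (1,): "rectified", (2,): "birectified", (3,): "dual",
         (0, 1): "truncated", (1, 2): "bitruncated", (2, 3): "tritruncated",
         (0, 2): "cantellated", (1, 3): "bicantellated", (0, 3): "runcinated",
         (0, 1, 2): "cantitruncated", (1, 2, 3): "bicantitruncated",
         (0, 1, 3): "runcitruncated", (0, 2, 3): "runcicantellated",
         (0, 1, 2, 3): "omnitruncated"}

forms = [s for k in range(1, 5) for s in combinations(range(4), k)]
assert len(forms) == 15  # 2^4 - 1 nonempty subsets of four mirrors
for s in forms:
    print("t" + ",".join(map(str, s)) + "{p,q,r}", "=", names[s])
```

This is why the count is 15 regardless of the family: it is simply the number of nonempty subsets of four nodes.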
Families (Coxeter groups): A4 [3,3,3], BC4 [4,3,3], D4 [3,31,1], F4 [3,4,3], H4 [5,3,3].
• Regular: 5-cell {3,3,3}; 16-cell {3,3,4}; tesseract {4,3,3}; demitesseract {3,31,1}; 24-cell {3,4,3}; 600-cell {3,3,5}; 120-cell {5,3,3}
• Rectified: rectified 5-cell r{3,3,3}; rectified 16-cell r{3,3,4}; rectified tesseract r{4,3,3}; rectified demitesseract r{3,31,1}; rectified 24-cell r{3,4,3}; rectified 600-cell r{3,3,5}; rectified 120-cell r{5,3,3}
• Truncated: truncated 5-cell t{3,3,3}; truncated 16-cell t{3,3,4}; truncated tesseract t{4,3,3}; truncated demitesseract t{3,31,1}; truncated 24-cell t{3,4,3}; truncated 600-cell t{3,3,5}; truncated 120-cell t{5,3,3}
• Cantellated: cantellated 5-cell rr{3,3,3}; cantellated 16-cell rr{3,3,4}; cantellated tesseract rr{4,3,3}; cantellated demitesseract 2r{3,31,1}; cantellated 24-cell rr{3,4,3}; cantellated 600-cell rr{3,3,5}; cantellated 120-cell rr{5,3,3}
• Runcinated: runcinated 5-cell t0,3{3,3,3}; runcinated 16-cell t0,3{3,3,4}; runcinated tesseract t0,3{4,3,3}; runcinated 24-cell t0,3{3,4,3}; runcinated 600-cell = runcinated 120-cell t0,3{3,3,5}
• Bitruncated: bitruncated 5-cell t1,2{3,3,3}; bitruncated 16-cell 2t{3,3,4}; bitruncated tesseract 2t{4,3,3}; cantitruncated demitesseract 2t{3,31,1}; bitruncated 24-cell 2t{3,4,3}; bitruncated 600-cell = bitruncated 120-cell 2t{3,3,5}
• Cantitruncated: cantitruncated 5-cell tr{3,3,3}; cantitruncated 16-cell tr{3,3,4}; cantitruncated tesseract tr{4,3,3}; omnitruncated demitesseract tr{3,31,1}; cantitruncated 24-cell tr{3,4,3}; cantitruncated 600-cell tr{3,3,5}; cantitruncated 120-cell tr{5,3,3}
• Runcitruncated: runcitruncated 5-cell t0,1,3{3,3,3}; runcitruncated 16-cell t0,1,3{3,3,4}; runcitruncated tesseract t0,1,3{4,3,3}; runcicantellated demitesseract rr{3,31,1}; runcitruncated 24-cell t0,1,3{3,4,3}; runcitruncated 600-cell t0,1,3{3,3,5}; runcitruncated 120-cell t0,1,3{5,3,3}
• Omnitruncated: omnitruncated 5-cell t0,1,2,3{3,3,3}; omnitruncated 16-cell t0,1,2,3{3,3,4}; omnitruncated tesseract t0,1,2,3{4,3,3}; omnitruncated 24-cell t0,1,2,3{3,4,3}; omnitruncated 120-cell = omnitruncated 600-cell t0,1,2,3{5,3,3}
• Snub: alternated cantitruncated 16-cell sr{3,3,4}; snub demitesseract sr{3,31,1}; Alternated
truncated 24-cell s{3,4,3}. Truncated forms The following list defines all 15 forms. Each truncation form can have from one to four cell types, located in positions 0, 1, 2, 3 as defined above. The cells are labeled by polyhedral truncation notation; an n-gonal prism is represented as {n}×{ }. Cells are listed by position (3), (2), (1), (0):
• Parent: {p,q,r} = t0{p,q,r}; cells (3): {p,q}
• Rectified: r{p,q,r} = t1{p,q,r}; cells (3): r{p,q}, (0): {q,r}
• Birectified (or rectified dual): 2r{p,q,r} = r{r,q,p} = t2{p,q,r}; cells (3): {q,p}, (0): r{q,r}
• Trirectified (or dual): 3r{p,q,r} = {r,q,p} = t3{p,q,r}; cells (0): {r,q}
• Truncated: t{p,q,r} = t0,1{p,q,r}; cells (3): t{p,q}, (0): {q,r}
• Bitruncated: 2t{p,q,r} = t1,2{p,q,r}; cells (3): t{q,p}, (0): t{q,r}
• Tritruncated (or truncated dual): 3t{p,q,r} = t{r,q,p} = t2,3{p,q,r}; cells (3): {q,p}, (0): t{r,q}
• Cantellated: rr{p,q,r} = t0,2{p,q,r}; cells (3): rr{p,q}, (1): { }×{r}, (0): r{q,r}
• Bicantellated (or cantellated dual): r2r{p,q,r} = rr{r,q,p} = t1,3{p,q,r}; cells (3): r{p,q}, (2): {p}×{ }, (0): rr{q,r}
• Runcinated (or expanded): e{p,q,r} = t0,3{p,q,r}; cells (3): {p,q}, (2): {p}×{ }, (1): { }×{r}, (0): {r,q}
• Cantitruncated: tr{p,q,r} = t0,1,2{p,q,r}; cells (3): tr{p,q}, (1): { }×{r}, (0): t{q,r}
• Bicantitruncated (or cantitruncated dual): t2r{p,q,r} = tr{r,q,p} = t1,2,3{p,q,r}; cells (3): t{q,p}, (2): {p}×{ }, (0): tr{q,r}
• Runcitruncated: et{p,q,r} = t0,1,3{p,q,r}; cells (3): t{p,q}, (2): {2p}×{ }, (1): { }×{r}, (0): rr{q,r}
• Runcicantellated (or runcitruncated dual): e3t{p,q,r} = et{r,q,p} = t0,2,3{p,q,r}; cells (3): tr{p,q}, (2): {p}×{ }, (1): { }×{2r}, (0): t{r,q}
• Runcicantitruncated (or omnitruncated): o{p,q,r} = t0,1,2,3{p,q,r}; cells (3): tr{p,q}, (2): {2p}×{ }, (1): { }×{2r}, (0): tr{q,r}
Half forms Half constructions exist with holes rather than ringed nodes. Branches neighboring holes and inactive nodes must be even-order. Half constructions have the vertices of an identically ringed construction.
The half forms, with cells listed by position (3), (2), (1), (0):
• Half (alternated): h{p,2q,r} = ht0{p,2q,r}; cells (3): h{p,2q}
• Alternated rectified: hr{2p,2q,r} = ht1{2p,2q,r}; cells (3): hr{2p,2q}, (0): h{2q,r}
• Snub (alternated truncation): s{p,2q,r} = ht0,1{p,2q,r}; cells (3): s{p,2q}, (0): h{2q,r}
• Bisnub (alternated bitruncation): 2s{2p,q,2r} = ht1,2{2p,q,2r}; cells (3): s{q,2p}, (0): s{q,2r}
• Snub rectified (alternated truncated rectified): sr{p,q,2r} = ht0,1,2{p,q,2r}; cells (3): sr{p,q}, (1): s{2,2r}, (0): s{q,2r}
• Omnisnub (alternated omnitruncation): os{p,q,r} = ht0,1,2,3{p,q,r}; cells (3): sr{p,q}, (2): {p}×{ }, (1): { }×{r}, (0): sr{q,r}
Five and higher dimensions In five and higher dimensions, there are 3 regular polytopes: the hypercube, the simplex, and the cross-polytope. They are generalisations of the three-dimensional cube, tetrahedron, and octahedron, respectively. There are no regular star polytopes in these dimensions. Most uniform higher-dimensional polytopes are obtained by modifying the regular polytopes, or by taking the Cartesian product of polytopes of lower dimensions. In six, seven and eight dimensions, the exceptional simple Lie groups E6, E7 and E8 come into play. By placing rings on a nonzero number of nodes of the Coxeter diagrams, one can obtain 39 new 6-polytopes, 127 new 7-polytopes and 255 new 8-polytopes. A notable example is the 421 polytope. Uniform honeycombs Related to the subject of finite uniform polytopes are uniform honeycombs in Euclidean and hyperbolic spaces. Euclidean uniform honeycombs are generated by affine Coxeter groups and hyperbolic honeycombs are generated by hyperbolic Coxeter groups. Two affine Coxeter groups can be multiplied together. There are two classes of hyperbolic Coxeter groups, compact and paracompact. Uniform honeycombs generated by compact groups have finite facets and vertex figures, and exist in 2 through 4 dimensions. Paracompact groups have affine or hyperbolic subgraphs and infinite facets or vertex figures, and exist in 2 through 10 dimensions.
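The alternation (half) operation can be carried out directly on hypercube vertex coordinates. This Python sketch (illustrative, not from the article) keeps the n-cube vertices with an even coordinate sum, producing the demicube family:

```python
from itertools import product

# Alternation of the n-cube {0,1}^n: keep vertices with an even number of 1s.
def demicube_vertices(n):
    return [v for v in product((0, 1), repeat=n) if sum(v) % 2 == 0]

for n in range(3, 7):
    print(n, len(demicube_vertices(n)))  # 2^(n-1) vertices in each dimension
# n=3 gives 4 vertices (a tetrahedron); n=4 gives 8 (the 16-cell, or demitesseract)
```

Exactly half of the 2^n vertices survive, matching the statement that alternating the 3-cube gives a tetrahedron and alternating the tesseract gives the 16-cell.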
See also • Schläfli symbol References • H.S.M. Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, ISBN 978-0-486-40919-1 (Chapter 3: Wythoff's Construction for Uniform Polytopes) • Norman Johnson, Uniform Polytopes, manuscript (1991) • N.W. Johnson, The Theory of Uniform Polytopes and Honeycombs, Ph.D. dissertation, University of Toronto, 1966 • A. Boole Stott, Geometrical deduction of semiregular from regular polytopes and space fillings, Verhandelingen der Koninklijke Akademie van Wetenschappen te Amsterdam, Eerste Sectie 11.1, Amsterdam, 1910 • H.S.M. Coxeter, M.S. Longuet-Higgins and J.C.P. Miller, Uniform Polyhedra, Philosophical Transactions of the Royal Society of London, 246 A, 401-450, 1954 (extended Schläfli notation used) • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover, New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, Math. Zeit. 46 (1940) 380-407, MR 2,10 • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, Math. Zeit. 188 (1985) 559-591 • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, Math. Zeit. 200 (1988) 3-45 • Marco Möller, Vierdimensionale Archimedische Polytope, Dissertation, Universität Hamburg, Hamburg (2004) (in German) External links • Olshevsky, George. "Uniform polytope". Glossary for Hyperspace. Archived from the original on 4 February 2007.
• Uniform, convex polytopes in four dimensions, Marco Möller (in German)
Scan statistic In statistics, a scan statistic or window statistic is a problem relating to the clustering of randomly positioned points. An example of a typical problem is the maximum size of a cluster of points on a line, or the longest series of successes recorded by a moving window of fixed length.[1] Joseph Naus first published on the problem in the 1960s,[2] and he has been called the "father of the scan statistic" in honour of his early contributions.[3] The results can be applied in epidemiology, public health and astronomy to find unusual clusters of events.[4] The method was extended by Martin Kulldorff to multidimensional settings and varying window sizes in a 1997 paper,[5] which is (as of 11 October 2015) the most cited article in its journal, Communications in Statistics – Theory and Methods.[6] This work led to the creation of the software SaTScan, a program trademarked by Martin Kulldorff that applies his methods to data. Recent results have shown that using scale-dependent critical values for the scan statistic makes it possible to attain asymptotically optimal detection simultaneously for all signal lengths, thereby improving on the traditional scan; however, this procedure has been criticized for losing too much power for short signals. Walther and Perry (2022) considered the problem of detecting an elevated mean on an interval with unknown location and length in the univariate Gaussian sequence model.[7] They explain this discrepancy by showing that such asymptotic optimality results are necessarily too imprecise to discern the performance of scan statistics in a practically relevant way, even in a large-sample context. Instead, they propose to assess the performance with a new finite-sample criterion, and they present three new calibration techniques for scan statistics that perform well across a range of relevant signal lengths and improve the power for short signals.
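A minimal form of the scan statistic is easy to compute directly. The sketch below (a one-dimensional count-based version, not Kulldorff's likelihood-ratio spatial statistic; the function name and data are illustrative) slides a window of fixed length along sorted points and reports the maximum number of points any window covers:

```python
from bisect import bisect_right

# Discrete scan statistic on a line: the largest number of points covered by
# any window of length w. It suffices to consider windows anchored at points.
def scan_statistic(points, w):
    pts = sorted(points)
    return max(bisect_right(pts, x + w) - i for i, x in enumerate(pts))

events = [0.1, 0.15, 0.2, 0.9, 1.0, 3.2, 3.25, 3.3, 3.35, 7.0]
print(scan_statistic(events, 0.25))  # -> 4, the cluster between 3.2 and 3.35
```

Assessing whether such a maximum is surprisingly large under a null model of uniformly scattered points is exactly the distributional question treated in the references above.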
Scan-statistic-based methods have also been developed specifically to detect rare-variant associations in the noncoding genome, especially in intergenic regions. Compared with fixed-size sliding-window analysis, scan-statistic-based methods use a data-adaptive dynamic window to scan the genome continuously, and they increase the analysis power by flexibly selecting the locations and sizes of the signal regions.[8] Some examples of these methods are Q-SCAN,[9] SCANG,[10] and WGScan.[11] References 1. Naus, J. I. (1982). "Approximations for Distributions of Scan Statistics". Journal of the American Statistical Association. 77 (377): 177–183. doi:10.1080/01621459.1982.10477783. JSTOR 2287786. 2. Naus, Joseph Irwin (1964). Clustering of random points in line and plane (Ph.D.). Retrieved 6 January 2014. 3. Wallenstein, S. (2009). "Joseph Naus: Father of the Scan Statistic". Scan Statistics. pp. 1–25. doi:10.1007/978-0-8176-4749-0_1. ISBN 978-0-8176-4748-3. 4. Glaz, J.; Naus, J.; Wallenstein, S. (2001). "Introduction". Scan Statistics. Springer Series in Statistics. pp. 3–9. doi:10.1007/978-1-4757-3460-7_1. ISBN 978-1-4419-3167-2. 5. Kulldorff, Martin (1997). "A spatial scan statistic" (PDF). Communications in Statistics – Theory and Methods. 26 (6): 1481–1496. doi:10.1080/03610929708831995. 6. "Most Cited Articles". Communications in Statistics – Theory and Methods. Retrieved 11 October 2015. 7. Walther, Guenther; Perry, Andrew (November 2022). "Calibrating the scan statistic: Finite sample performance versus asymptotics". Journal of the Royal Statistical Society: Series B (Statistical Methodology). 84 (5): 1608–1639. doi:10.1111/rssb.12549. ISSN 1369-7412. S2CID 221713232. 8.
Li, Zilin; Li, Xihao; Zhou, Hufeng; Gaynor, Sheila M.; Selvaraj, Margaret Sunitha; Arapoglou, Theodore; Quick, Corbin; Liu, Yaowu; Chen, Han; Sun, Ryan; Dey, Rounak; Arnett, Donna K.; Auer, Paul L.; Bielak, Lawrence F.; Bis, Joshua C.; Blackwell, Thomas W.; Blangero, John; Boerwinkle, Eric; Bowden, Donald W.; Brody, Jennifer A.; Cade, Brian E.; Conomos, Matthew P.; Correa, Adolfo; Cupples, L. Adrienne; Curran, Joanne E.; de Vries, Paul S.; Duggirala, Ravindranath; Franceschini, Nora; Freedman, Barry I.; Goring, Harald H.H.; Guo, Xiuqing; Kalyani, Rita R.; Kooperberg, Charles; Kral, Brian G.; Lange, Leslie A.; Lin, Bridget; Manichaikul, Ani; Martin, Lisa W.; Mathias, Rasika A.; Meigs, James B.; Mitchell, Braxton D.; Montasser, May E.; Morrison, Alanna C.; Naseri, Take; O'Connell, Jeffrey R.; Palmer, Nicholette D.; Reupena, Muagututi'a Sefuiva; Rice, Kenneth M.; Rich, Stephen S.; Smith, Jennifer A.; Taylor, Kent D.; Taub, Margaret A.; Vasan, Ramachandran S.; Weeks, Daniel E.; Wilson, James G.; Yanek, Lisa R.; Zhao, Wei; NHLBI Trans-Omics for Precision Medicine (TOPMed) Consortium; TOPMed Lipids Working Group; Rotter, Jerome I.; Willer, Cristen; Natarajan, Pradeep; Peloso, Gina M.; Lin, Xihong (2022). "A framework for detecting noncoding rare-variant associations of large-scale whole-genome sequencing studies". Nature Methods. 19 (12): 1599–1611. doi:10.1038/s41592-022-01640-x. PMC 10008172. PMID 36303018. S2CID 243873361. 9. Li, Zilin; Liu, Yaowu; Lin, Xihong (2022). "Simultaneous Detection of Signal Regions Using Quadratic Scan Statistics With Applications to Whole Genome Association Studies". Journal of the American Statistical Association. 117 (538): 823–834. doi:10.1080/01621459.2020.1822849. PMC 9285665. PMID 35845434. 10. Li, Zilin; Li, Xihao; Liu, Yaowu; Shen, Jincheng; Chen, Han; Zhou, Hufeng; Morrison, Alanna C.; Boerwinkle, Eric; Lin, Xihong (2019).
"Dynamic Scan Procedure for Detecting Rare-Variant Association Regions in Whole-Genome Sequencing Studies". American Journal of Human Genetics. 104 (5): 802–814. doi:10.1016/j.ajhg.2019.03.002. PMC 6507043. PMID 30982610. 11. He, Zihuai; Xu, Bin; Buxbaum, Joseph; Ionita-Laza, Iuliana (2019). "A genome-wide scan statistic framework for whole-genome sequence data analysis". Nature Communications. 10 (1): 3018. doi:10.1038/s41467-019-11023-0. PMC 6616627. PMID 31289270. External links • SaTScan free software for the spatial, temporal and space-time scan statistics
Scattered order In mathematical order theory, a scattered order is a linear order that contains no densely ordered subset with more than one element.[1] A characterization due to Hausdorff states that the class of all scattered orders is the smallest class of linear orders that contains the singleton orders and is closed under well-ordered and reverse well-ordered sums. Laver's theorem (generalizing a conjecture of Roland Fraïssé on countable orders) states that the embedding relation on the class of countable unions of scattered orders is a well-quasi-order.[2] The order topology of a scattered order is scattered. The converse implication does not hold, as witnessed by the lexicographic order on $\mathbb {Q} \times \mathbb {Z} $. References 1. Egbert Harzheim (2005). "6.6 Scattered sets". Ordered Sets. Springer. pp. 193–201. ISBN 0-387-24219-8. 2. Harzheim, Theorem 6.17, p. 201; Laver, Richard (1971). "On Fraïssé's order type conjecture". Annals of Mathematics. 93 (1): 89–111. doi:10.2307/1970754. JSTOR 1970754.
Scattered space In mathematics, a scattered space is a topological space X that contains no nonempty dense-in-itself subset.[1][2] Equivalently, every nonempty subset A of X contains a point isolated in A. A subset of a topological space is called a scattered set if it is a scattered space with the subspace topology. Examples • Every discrete space is scattered. • Every ordinal number with the order topology is scattered. Indeed, every nonempty subset A contains a minimum element, and that element is isolated in A. • A space X with the particular point topology, in particular the Sierpinski space, is scattered. This is an example of a scattered space that is not a T1 space. • The closure of a scattered set is not necessarily scattered. For example, in the Euclidean plane $\mathbb {R} ^{2}$ take a countably infinite discrete set A in the unit disk, with the points getting denser and denser as one approaches the boundary. For example, take the union of the vertices of a series of n-gons centered at the origin, with radius getting closer and closer to 1. Then the closure of A will contain the whole circle of radius 1, which is dense-in-itself. Properties • In a topological space X the closure of a dense-in-itself subset is a perfect set. So X is scattered if and only if it does not contain any nonempty perfect set. • Every subset of a scattered space is scattered. Being scattered is a hereditary property. • Every scattered space X is a T0 space. (Proof: Given two distinct points x, y in X, at least one of them, say x, will be isolated in $\{x,y\}$. That means there is a neighborhood of x in X that does not contain y.) • In a T0 space the union of two scattered sets is scattered.[3][4] Note that the T0 assumption is necessary here. For example, if $X=\{a,b\}$ with the indiscrete topology, $\{a\}$ and $\{b\}$ are both scattered, but their union, $X$, is not scattered as it has no isolated point. • Every T1 scattered space is totally disconnected. 
(Proof: If C is a nonempty connected subset of X, it contains a point x isolated in C. So the singleton $\{x\}$ is both open in C (because x is isolated) and closed in C (because of the T1 property). Because C is connected, it must be equal to $\{x\}$. This shows that every connected component of X has a single point.) • Every second countable scattered space is countable.[5] • Every topological space X can be written in a unique way as the disjoint union of a perfect set and a scattered set.[6][7] • Every second countable space X can be written in a unique way as the disjoint union of a perfect set and a countable scattered open set. (Proof: Use the perfect + scattered decomposition and the fact above about second countable scattered spaces, together with the fact that a subset of a second countable space is second countable.) Furthermore, every closed subset of a second countable X can be written uniquely as the disjoint union of a perfect subset of X and a countable scattered subset of X.[8] This holds in particular in any Polish space, which is the content of the Cantor–Bendixson theorem. Notes 1. Steen & Seebach, p. 33 2. Engelking, p. 59 3. See proposition 2.8 in Al-Hajri, Monerah; Belaid, Karim; Belaid, Lamia Jaafar (2016). "Scattered Spaces, Compactifications and an Application to Image Classification Problem". Tatra Mountains Mathematical Publications. 66: 1–12. doi:10.1515/tmmp-2016-0015. S2CID 199470332. 4. "General topology - in a $T_0$ space the union of two scattered sets is scattered". 5. "General topology - Second countable scattered spaces are countable". 6. Willard, problem 30E, p. 219 7. "General topology - Uniqueness of decomposition into perfect set and scattered set". 8. "Real analysis - is Cantor-Bendixson theorem right for a general second countable space?". References • Engelking, Ryszard, General Topology, Heldermann Verlag Berlin, 1989. ISBN 3-88538-006-4 • Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978]. 
Counterexamples in Topology (Dover reprint of 1978 ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-486-68735-3. MR 0507446. • Willard, Stephen (2004) [1970], General Topology (Dover reprint of 1970 ed.), Addison-Wesley
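The defining condition ("every nonempty subset contains a point isolated in it") can be checked by brute force on a finite space. A minimal sketch, assuming the space is given as its set of points together with a list of its open sets (names and examples are illustrative, matching the Sierpinski and indiscrete spaces discussed above):

```python
from itertools import chain, combinations

def is_scattered(points, opens):
    """Brute-force check that a finite space is scattered: every nonempty
    subset A must contain a point x isolated in A, i.e. some open set U
    with U & A == {x}."""
    def has_isolated_point(A):
        return any(U & A == {x} for x in A for U in opens)
    subsets = chain.from_iterable(
        combinations(points, r) for r in range(1, len(points) + 1))
    return all(has_isolated_point(set(s)) for s in subsets)

# Sierpinski space on {a, b} (particular point topology): scattered.
sierpinski = [set(), {"a"}, {"a", "b"}]
# Two-point indiscrete space: not scattered (no subset has an isolated point
# except the singletons -- and even those fail, since the only opens are trivial).
indiscrete = [set(), {"a", "b"}]

print(is_scattered({"a", "b"}, sierpinski))   # True
print(is_scattered({"a", "b"}, indiscrete))   # False
```

The exponential enumeration of subsets is only feasible for tiny examples; it is meant to make the definition concrete, not to be efficient.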
Scatter plot A scatter plot (also called a scatterplot, scatter graph, scatter chart, scattergram, or scatter diagram)[3] is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data. If the points are coded (color/shape/size), one additional variable can be displayed. The data are displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.[4] Not to be confused with Correlogram or Scatter matrix. The scatter plot is one of the Seven Basic Tools of Quality; it was first described by John Herschel.[1] Its purpose is to identify the type of relationship (if any) between two quantitative variables. Overview A scatter plot can be used either when one continuous variable is under the control of the experimenter and the other depends on it or when both continuous variables are independent. If a parameter exists that is systematically incremented and/or decremented by the other, it is called the control parameter or independent variable and is customarily plotted along the horizontal axis. The measured or dependent variable is customarily plotted along the vertical axis. If no dependent variable exists, either type of variable can be plotted on either axis and a scatter plot will illustrate only the degree of correlation (not causation) between two variables. A scatter plot can suggest various kinds of correlations between variables with a certain confidence interval. For example, with weight and height data, weight would be on the y-axis and height would be on the x-axis. Correlations may be positive (rising), negative (falling), or null (uncorrelated). If the dots' pattern slopes from lower left to upper right, it indicates a positive correlation between the variables being studied. If the pattern of dots slopes from upper left to lower right, it indicates a negative correlation. 
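The direction of such a correlation, and the trendline through the cloud of points, can be computed directly from the data. A minimal sketch with made-up height/weight data (all numbers are illustrative):

```python
import numpy as np

# Illustrative data: height (cm) on the x-axis, weight (kg) on the y-axis.
height = np.array([155.0, 162.0, 168.0, 175.0, 181.0, 190.0])
weight = np.array([52.0, 58.0, 63.0, 70.0, 76.0, 85.0])

# Pearson correlation coefficient: positive r means the dots slope
# from lower left to upper right (a positive, "rising" correlation).
r = np.corrcoef(height, weight)[0, 1]

# Line of best fit (trendline) by ordinary least squares: weight ~ a*height + b.
a, b = np.polyfit(height, weight, deg=1)

print(f"r = {r:.3f}, trendline slope a = {a:.3f}")
```

With a plotting library the same arrays would be passed to a scatter routine (e.g. `matplotlib.pyplot.scatter(height, weight)`), with the trendline overlaid.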
A line of best fit (alternatively called 'trendline') can be drawn to study the relationship between the variables. An equation for the correlation between the variables can be determined by established best-fit procedures. For a linear correlation, the best-fit procedure is known as linear regression and is guaranteed to generate a correct solution in a finite time. No universal best-fit procedure is guaranteed to generate a correct solution for arbitrary relationships. A scatter plot is also very useful when we wish to see how two comparable data sets agree and to show nonlinear relationships between variables. The ability to do this can be enhanced by adding a smooth line such as LOESS.[5] Furthermore, if the data are represented by a mixture model of simple relationships, these relationships will be visually evident as superimposed patterns. The scatter diagram is one of the seven basic tools of quality control.[6] Scatter charts can be built in the form of bubble, marker, and/or line charts.[7] Example For example, to display a link between a person's lung capacity, and how long that person could hold their breath, a researcher would choose a group of people to study, then measure each one's lung capacity (first variable) and how long that person could hold their breath (second variable). The researcher would then plot the data in a scatter plot, assigning "lung capacity" to the horizontal axis, and "time holding breath" to the vertical axis. A person with a lung capacity of 400 cl who held their breath for 21.7 s would be represented by a single dot on the scatter plot at the point (400, 21.7) in the Cartesian coordinates. The scatter plot of all the people in the study would enable the researcher to obtain a visual comparison of the two variables in the data set and will help to determine what kind of relationship there might be between the two variables. Scatter plot matrices For a set of data variables (dimensions) X1, X2, ... 
, Xk, the scatter plot matrix shows all the pairwise scatter plots of the variables on a single view with multiple scatterplots in a matrix format. For k variables, the scatterplot matrix will contain k rows and k columns. A plot located on the intersection of the i-th row and j-th column is a plot of variables Xi versus Xj.[8] This means that each row and column is one dimension, and each cell plots a scatter plot of two dimensions. A generalized scatter plot matrix[9] offers a range of displays of paired combinations of categorical and quantitative variables. A mosaic plot, fluctuation diagram, or faceted bar chart may be used to display two categorical variables. Other plots are used for one categorical and one quantitative variable. See also • Data and information visualization • Rug plot • Bar graph • Line chart • Scagnostics • Dot plot (statistics) References 1. Friendly, Michael; Denis, Dan (2005). "The early origins and development of the scatterplot". Journal of the History of the Behavioral Sciences. 41 (2): 103–130. doi:10.1002/jhbs.20078. PMID 15812820. 2. Visualizations that have been created with VisIt at wci.llnl.gov. Last updated: November 8, 2007. 3. Jarrell, Stephen B. (1994). Basic Statistics (Special pre-publication ed.). Dubuque, Iowa: Wm. C. Brown Pub. p. 492. ISBN 978-0-697-21595-6. When we search for a relationship between two quantitative variables, a standard graph of the available data pairs (X,Y), called a scatter diagram, frequently helps... 4. Utts, Jessica M. Seeing Through Statistics 3rd Edition, Thomson Brooks/Cole, 2005, pp 166-167. ISBN 0-534-39402-7 5. Cleveland, William (1993). Visualizing data. Murray Hill, N.J. Summit, N.J: At & T Bell Laboratories Published by Hobart Press. ISBN 978-0963488404. 6. Nancy R. Tague (2004). "Seven Basic Quality Tools". The Quality Toolbox. Milwaukee, Wisconsin: American Society for Quality. p. 15. Retrieved 2010-02-05. 7. "Scatter Chart - AnyChart JavaScript Chart Documentation". AnyChart. 
Archived from the original on 1 February 2016. Retrieved 3 February 2016. 8. Scatter Plot Matrix at itl.nist.gov. 9. Emerson, John W.; Green, Walton A.; Schoerke, Barret; Crowley, Jason (2013). "The Generalized Pairs Plot". Journal of Computational and Graphical Statistics. 22 (1): 79–91. doi:10.1080/10618600.2012.694762. S2CID 28344569. External links • Media related to Scatterplots at Wikimedia Commons • What is a scatterplot? Archived 2020-08-07 at the Wayback Machine • Correlation scatter-plot matrix for ordered-categorical data – Explanation and R code • Density scatterplot for large datasets (hundreds of millions of points)
Scenario optimization The scenario approach or scenario optimization approach is a technique for obtaining solutions to robust optimization and chance-constrained optimization problems based on a sample of the constraints. It also relates to inductive reasoning in modeling and decision-making. The technique has existed for decades as a heuristic approach and has more recently been given a systematic theoretical foundation. In optimization, robustness features translate into constraints that are parameterized by the uncertain elements of the problem. In the scenario method,[1][2][3] a solution is obtained by looking at only a random sample of constraints (heuristic approach), called scenarios, and a deeply-grounded theory tells the user how "robust" the corresponding solution is with respect to the other constraints. This theory justifies the use of randomization in robust and chance-constrained optimization. Data-driven optimization At times, scenarios are obtained as random extractions from a model. More often, however, scenarios are instances of the uncertain constraints that are obtained as observations (data-driven science). In this latter case, no model of uncertainty is needed to generate scenarios. Most remarkably, scenario optimization is accompanied by a full-fledged theory in this case too, because all scenario optimization results are distribution-free and can therefore be applied even when a model of uncertainty is not available. Theoretical results For constraints that are convex (e.g. in semidefinite problems, involving LMIs (Linear Matrix Inequalities)), a deep theoretical analysis has been established which shows that the probability that a new constraint is not satisfied follows a distribution that is dominated by a Beta distribution. 
This result is tight since it is exact for a whole class of convex problems.[3] More generally, various empirical levels have been shown to follow a Dirichlet distribution, whose marginals are beta distributions.[4] The scenario approach with $L_{1}$ regularization has also been considered,[5] and handy algorithms with reduced computational complexity are available.[6] Extensions to more complex, non-convex, set-ups are still the object of active investigation. Along the scenario approach, it is also possible to pursue a risk-return trade-off.[7][8] Moreover, a full-fledged method can be used to apply this approach to control.[9] First, $N$ constraints are sampled, and then the user starts removing some of the constraints in succession. This can be done in different ways, even according to greedy algorithms. After the elimination of each constraint, the optimal solution is updated and the corresponding optimal value is determined. As this procedure moves on, the user constructs an empirical "curve of values", i.e. the curve representing the value achieved after the removal of an increasing number of constraints. The scenario theory provides precise evaluations of how robust the various solutions are. A remarkable advance in the theory has been established by the recent wait-and-judge approach:[10] one assesses the complexity of the solution (as precisely defined in the referenced article) and from its value formulates precise evaluations of the robustness of the solution. These results shed light on deeply-grounded links between the concepts of complexity and risk. 
A related approach, named "Repetitive Scenario Design" aims at reducing the sample complexity of the solution by repeatedly alternating a scenario design phase (with reduced number of samples) with a randomized check of the feasibility of the ensuing solution.[11] Example Consider a function $R_{\delta }(x)$ which represents the return of an investment; it depends on our vector of investment choices $x$ and on the market state $\delta $ which will be experienced at the end of the investment period. Given a stochastic model for the market conditions, we consider $N$ of the possible states $\delta _{1},\dots ,\delta _{N}$ (randomization of uncertainty). Alternatively, the scenarios $\delta _{i}$ can be obtained from a record of observations. We set out to solve the scenario optimization program $\max _{x}\min _{i=1,\dots ,N}R_{\delta _{i}}(x).\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$ This corresponds to choosing a portfolio vector x so as to obtain the best possible return in the worst-case scenario.[12][13] After solving (1), an optimal investment strategy $x^{\ast }$ is achieved along with the corresponding optimal return $R^{\ast }$. While $R^{\ast }$ has been obtained by looking at $N$ possible market states only, the scenario theory tells us that the solution is robust up to a level $\varepsilon $, that is, the return $R^{\ast }$ will be achieved with probability $1-\varepsilon $ for other market states. In quantitative finance, the worst-case approach can be overconservative. One alternative is to discard some odd situations to reduce pessimism;[7] moreover, scenario optimization can be applied to other risk-measures including CVaR – Conditional Value at Risk – so adding to the flexibility of its use.[14] Application fields Fields of application include: prediction, systems theory, regression analysis (Interval Predictor Models in particular), Actuarial science, optimal control, financial mathematics, machine learning, decision making, supply chain, and management. 
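Program (1) can be approximated numerically once the scenarios are drawn. A minimal sketch, assuming linear returns $R_{\delta}(x)=\delta \cdot x$ over long-only portfolios (weights sum to 1, no short positions) and using a crude random search in place of a proper max-min solver; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 4                                    # N sampled scenarios, d assets
deltas = rng.normal(0.05, 0.10, size=(N, d))    # synthetic per-asset scenario returns

# Program (1): max_x min_i delta_i . x over the simplex (sum x = 1, x >= 0).
# Crude random search: sample many candidate portfolios, keep the best worst case.
X = rng.dirichlet(np.ones(d), size=20_000)      # candidate portfolios on the simplex
worst = (X @ deltas.T).min(axis=1)              # worst-case return of each candidate
best = worst.argmax()
x_star, R_star = X[best], worst[best]           # scenario solution and its value

print("x* =", np.round(x_star, 3), " worst-case return R* =", round(R_star, 4))
```

In practice the max-min program with linear $R_{\delta}$ is a linear program (maximize $t$ subject to $t \le \delta_i \cdot x$ for all $i$) and would be handed to an LP solver; the random search above only illustrates the structure of the scenario program.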
References 1. Calafiore, Giuseppe; Campi, M.C. (2005). "Uncertain convex programs: Randomized solutions and confidence levels". Mathematical Programming. 102: 25–46. doi:10.1007/s10107-003-0499-y. S2CID 1063933. 2. Calafiore, G.C.; Campi, M.C. (2006). "The Scenario Approach to Robust Control Design". IEEE Transactions on Automatic Control. 51 (5): 742–753. doi:10.1109/TAC.2006.875041. S2CID 49263. 3. Campi, M. C.; Garatti, S. (2008). "The Exact Feasibility of Randomized Solutions of Uncertain Convex Programs". SIAM Journal on Optimization. 19 (3): 1211–1230. doi:10.1137/07069821X. 4. Carè, A.; Garatti, S.; Campi, M. C. (2015). "Scenario Min-Max Optimization and the Risk of Empirical Costs". SIAM Journal on Optimization. 25 (4): 2061–2080. doi:10.1137/130928546. hdl:11311/979283. 5. Campi, M. C.; Carè, A. (2013). "Random Convex Programs with L1-Regularization: Sparsity and Generalization". SIAM Journal on Control and Optimization. 51 (5): 3532–3557. doi:10.1137/110856204. 6. Carè, Algo; Garatti, Simone; Campi, Marco C. (2014). "FAST—Fast Algorithm for the Scenario Technique". Operations Research. 62 (3): 662–671. doi:10.1287/opre.2014.1257. hdl:11311/937164. 7. Campi, M. C.; Garatti, S. (2011). "A Sampling-and-Discarding Approach to Chance-Constrained Optimization: Feasibility and Optimality". Journal of Optimization Theory and Applications. 148 (2): 257–280. doi:10.1007/s10957-010-9754-6. S2CID 7856112. 8. Calafiore, Giuseppe Carlo (2010). "Random Convex Programs". SIAM Journal on Optimization. 20 (6): 3427–3464. doi:10.1137/090773490. 9. "Modulating robustness in control design: Principles and algorithms". IEEE Control Systems Magazine. 33 (2): 36–51. 2013. doi:10.1109/MCS.2012.2234964. S2CID 24072721. 10. Campi, M. C.; Garatti, S. (2018). "Wait-and-judge scenario optimization". Mathematical Programming. 167: 155–189. doi:10.1007/s10107-016-1056-9. hdl:11311/1002492. S2CID 39523265. 11. Calafiore, Giuseppe C. (2017). "Repetitive Scenario Design". 
IEEE Transactions on Automatic Control. 62 (3): 1125–1137. arXiv:1602.03796. doi:10.1109/TAC.2016.2575859. S2CID 47572451. 12. Pagnoncelli, B. K.; Reich, D.; Campi, M. C. (2012). "Risk-Return Trade-off with the Scenario Approach in Practice: A Case Study in Portfolio Selection". Journal of Optimization Theory and Applications. 155 (2): 707–722. doi:10.1007/s10957-012-0074-x. S2CID 1509645. 13. Calafiore, Giuseppe Carlo (2013). "Direct data-driven portfolio optimization with guaranteed shortfall probability". Automatica. 49 (2): 370–380. doi:10.1016/j.automatica.2012.11.012. 14. Ramponi, Federico Alessandro; Campi, Marco C. (2018). "Expected shortfall: Heuristics and certificates". European Journal of Operational Research. 267 (3): 1003–1013. doi:10.1016/j.ejor.2017.11.022. S2CID 3553018.
Schaefer's dichotomy theorem In computational complexity theory, a branch of computer science, Schaefer's dichotomy theorem states necessary and sufficient conditions under which a finite set S of relations over the Boolean domain yields polynomial-time or NP-complete problems when the relations of S are used to constrain some of the propositional variables.[1] It is called a dichotomy theorem because the complexity of the problem defined by S is either in P or NP-complete, as opposed to being one of the classes of intermediate complexity that are known to exist (assuming P ≠ NP) by Ladner's theorem. Special cases of Schaefer's dichotomy theorem include the NP-completeness of SAT (the Boolean satisfiability problem) and its two popular variants 1-in-3 SAT and not-all-equal 3SAT (often denoted by NAE-3SAT). In fact, for these two variants of SAT, Schaefer's dichotomy theorem shows that their monotone versions (where negations of variables are not allowed) are also NP-complete. Original presentation Schaefer defines a decision problem that he calls the Generalized Satisfiability problem for S (denoted by SAT(S)), where $S=\{R_{1},\ldots ,R_{m}\}$ is a finite set of relations over propositional variables. An instance of the problem is an S-formula, i.e. a conjunction of constraints of the form $R_{j}(x_{i_{1}},\ldots ,x_{i_{n}})$ where $R_{j}\in S$ and the $x_{i_{j}}$ are propositional variables. The problem is to determine whether the given formula is satisfiable, in other words whether the variables can be assigned values such that they satisfy all the constraints as given by the relations from S. Schaefer identifies six classes of sets of Boolean relations for which SAT(S) is in P and proves that all other sets of relations generate an NP-complete problem. A finite set of relations S over the Boolean domain defines a polynomial time computable satisfiability problem if any one of the following conditions holds: 1. 
all relations which are not constantly false are true when all their arguments are true; 2. all relations which are not constantly false are true when all their arguments are false; 3. all relations are equivalent to a conjunction of binary clauses; 4. all relations are equivalent to a conjunction of Horn clauses; 5. all relations are equivalent to a conjunction of dual-Horn clauses; 6. all relations are equivalent to a conjunction of affine formulae.[2] Otherwise, the problem SAT(S) is NP-complete. Modern presentation A modern, streamlined presentation of Schaefer's theorem is given in an expository paper by Hubie Chen.[3][4] In modern terms, the problem SAT(S) is viewed as a constraint satisfaction problem over the Boolean domain. In this area, it is standard to denote the set of relations by Γ and the decision problem defined by Γ as CSP(Γ). This modern understanding uses algebra, in particular, universal algebra. For Schaefer's dichotomy theorem, the most important concept in universal algebra is that of a polymorphism. An operation $f:D^{m}\to D$ is a polymorphism of a relation $R\subseteq D^{k}$ if, for any choice of m tuples $(t_{11},\dotsc ,t_{1k}),\dotsc ,(t_{m1},\dotsc ,t_{mk})$ from R, it holds that the tuple obtained from these m tuples by applying f coordinate-wise, i.e. $(f(t_{11},\dotsc ,t_{m1}),\dotsc ,f(t_{1k},\dotsc ,t_{mk}))$, is in R. That is, an operation f is a polymorphism of R if R is closed under f: applying f to any tuples in R yields another tuple inside R. A set of relations Γ is said to have a polymorphism f if every relation in Γ has f as a polymorphism. This definition allows for the algebraic formulation of Schaefer's dichotomy theorem. Let Γ be a finite constraint language over the Boolean domain. The problem CSP(Γ) is decidable in polynomial-time if Γ has one of the following six operations as a polymorphism: 1. the constant unary operation 0; 2. the constant unary operation 1; 3. the binary AND operation ∧; 4. 
the binary OR operation ∨; 5. the ternary majority operation $\operatorname {Majority} (x,y,z)=(x\wedge y)\vee (x\wedge z)\vee (y\wedge z);$ 6. the ternary minority operation $\operatorname {Minority} (x,y,z)=x\oplus y\oplus z.$ Otherwise, the problem CSP(Γ) is NP-complete. In this formulation, it is easy to check if any of the tractability conditions hold. Properties of Polymorphisms Given a set Γ of relations, there is a surprisingly close connection between its polymorphisms and the computational complexity of CSP(Γ). A relation R is called primitive positive definable, or pp-definable for short, from a set Γ of relations if R(v1, ... , vk) ⇔ ∃x1 ... xm. C holds for some conjunction C of constraints from Γ and equations over the variables {v1,...,vk, x1,...,xm}. For example, if Γ consists of the ternary relation nae(x,y,z) holding if x,y,z are not all equal, and R(x,y,z) is x∨y∨z, then R can be pp-defined by R(x,y,z) ⇔ ∃a. nae(0,x,a) ∧ nae(y,z,¬a); this reduction has been used to prove that NAE-3SAT is NP-complete. The set of all relations which are pp-definable from Γ is denoted by ≪Γ≫. If Γ' ⊆ ≪Γ≫ for some finite constraint sets Γ and Γ', then CSP(Γ') reduces to CSP(Γ).[5] Given a set Γ of relations, Pol(Γ) denotes the set of polymorphisms of Γ. Conversely, if O is a set of operations, then Inv(O) denotes the set of relations having all operations in O as a polymorphism. Pol and Inv together build a Galois connection. For any finite set Γ of relations over a finite domain, ≪Γ≫ = Inv(Pol(Γ)) holds, that is, the set of relations pp-definable from Γ can be derived from the polymorphisms of Γ.[6] Moreover, if Pol(Γ) ⊆ Pol(Γ') for two finite relation sets Γ and Γ', then Γ' ⊆ ≪Γ≫ and CSP(Γ') reduces to CSP(Γ). 
As a consequence, two relation sets having the same polymorphisms lead to the same computational complexity.[7] Generalizations The analysis was later fine-tuned: CSP(Γ) is either solvable in co-NLOGTIME, L-complete, NL-complete, ⊕L-complete, P-complete or NP-complete, and given Γ, one can decide in polynomial time which of these cases holds.[8] Schaefer's dichotomy theorem was recently generalized to a larger class of relations.[9] Related work If the problem is to count the number of solutions, which is denoted by #CSP(Γ), then a similar result by Creignou and Hermann holds.[10] Let Γ be a finite constraint language over the Boolean domain. The problem #CSP(Γ) is computable in polynomial time if Γ has a Mal'tsev operation as a polymorphism. Otherwise, the problem #CSP(Γ) is #P-complete. A Mal'tsev operation m is a ternary operation that satisfies $m(x,y,y)=m(y,y,x)=x.$ An example of a Mal'tsev operation is the Minority operation given in the modern, algebraic formulation of Schaefer's dichotomy theorem above. Thus, when Γ has the Minority operation as a polymorphism, it is not only possible to decide CSP(Γ) in polynomial time, but to compute #CSP(Γ) in polynomial time. There are a total of 4 Mal'tsev operations on Boolean variables, determined by the values of $m(T,F,T)$ and $m(F,T,F)$. An example of a less symmetric one is given by $m(x,y,z)=(x\wedge z)\vee (\neg y\wedge (x\vee z))$. On other domains, such as groups, examples of Mal'tsev operations include $x-y+z$ and $xy^{-1}z.$ For larger domains, even for a domain of size three, the existence of a Mal'tsev polymorphism for Γ is no longer a sufficient condition for the tractability of #CSP(Γ). However, the absence of a Mal'tsev polymorphism for Γ still implies the #P-hardness of #CSP(Γ). See also • Max/min CSP/Ones classification theorems, a similar set of constraints for optimization problems References 1. Schaefer, Thomas J. (1978). "The Complexity of Satisfiability Problems". STOC 1978. pp. 216–226. 
doi:10.1145/800133.804350. 2. Schaefer (1978, p.218 left) defines an affine formula to be of the form x1 ⊕ ... ⊕ xn = c, where each xi is a variable, c is a constant, i.e. true or false, and "⊕" denotes XOR, i.e. addition in a Boolean ring. 3. Chen, Hubie (December 2009). "A Rendezvous of Logic, Complexity, and Algebra". ACM Computing Surveys. 42 (1): 1–32. arXiv:cs/0611018. doi:10.1145/1592451.1592453. 4. Chen, Hubie (December 2006). "A Rendezvous of Logic, Complexity, and Algebra". SIGACT News (Logic Column). arXiv:cs/0611018. 5. Chen (2006), p.8, Proposition 3.9; Chen uses polynomial-time many-one reduction 6. Chen (2006), p.9, Theorem 3.13 7. Chen (2006), p.11, Theorem 3.15 8. Allender, Eric; Bauland, Michael; Immerman, Neil; Schnoor, Henning; Vollmer, Heribert (June 2009). "The complexity of satisfiability problems: Refining Schaefer's theorem" (PDF). Journal of Computer and System Sciences. 75 (4): 245–254. doi:10.1016/j.jcss.2008.11.001. Retrieved 2013-09-19. 9. Bodirsky, Manuel; Pinsker, Michael (2015). "Schaefer's Theorem for Graphs". J. ACM. 62 (3): 19:1–19:52. arXiv:1011.2894. doi:10.1145/2764899. 10. Creignou, Nadia; Hermann, Miki (1996). "Complexity of generalized satisfiability counting problems". Information and Computation. 125 (1): 1–12. doi:10.1006/inco.1996.0016. ISSN 0890-5401.
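The Mal'tsev condition discussed under Related work can be verified, and the count of four Boolean Mal'tsev operations confirmed, by exhaustive enumeration (an illustrative sketch added here, not from the article; `is_maltsev` and `asym` are names chosen for this sketch):

```python
from itertools import product

def is_maltsev(m):
    # A Mal'tsev operation must satisfy m(x, y, y) == m(y, y, x) == x.
    return all(m(x, y, y) == x and m(y, y, x) == x
               for x, y in product((0, 1), repeat=2))

minority = lambda x, y, z: x ^ y ^ z
# The "less symmetric" example from the text, with 1 - y playing NOT y:
asym = lambda x, y, z: (x & z) | ((1 - y) & (x | z))
print(is_maltsev(minority), is_maltsev(asym))  # True True

# Enumerate all 256 ternary Boolean operations via their truth tables;
# the Mal'tsev identities pin down every entry except m(1,0,1) and m(0,1,0),
# so exactly 4 operations survive.
count = sum(
    is_maltsev(lambda x, y, z, t=table: t[4 * x + 2 * y + z])
    for table in product((0, 1), repeat=8)
)
print(count)  # 4
```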
Wikipedia
Alice T. Schafer Prize The Alice T. Schafer Mathematics Prize is given annually to an undergraduate woman for excellence in mathematics by the Association for Women in Mathematics (AWM). The prize, which carries a monetary award, is named for former AWM president and founding member Alice T. Schafer; it was first awarded in 1990.[1] Recipients The recipients of the Alice T. Schafer Mathematics Prize are:[2][3] • 1990: Linda Green, Elizabeth Wilmer • 1991: Jeanne Nielsen • 1992: Zvezdelina E. Stankova • 1993: Catherine O'Neil, Dana Pascovici • 1994: Jing-Rebecca Li • 1995: Ruth Britto-Pacumio • 1996: Ioana Dumitriu • 1997: No prize awarded (due to calendar change) • 1998: Sharon Ann Lozano, Jessica A. Shepherd • 1999: Caroline Klivans • 2000: Mariana E. Campbell • 2001: Jaclyn (Kohles) Anderson • 2002: Kay Kirkpatrick, Melanie Wood • 2003: Kate Gruher • 2004: Kimberly Spears • 2005: Melody Chan • 2006: Alexandra Ovetsky • 2007: Ana Caraiani • 2008: Galyna Dobrovolska, Alison Miller • 2009: Maria Monks • 2010: Hannah Alpert, Charmaine Sia • 2011: Sherry Gong • 2012: Fan Wei • 2013: MurphyKate Montee • 2014: Sarah Peluse • 2015: Sheela Devadas • 2016: Mackenzie Simper • 2017: Hannah Larson • 2018: Libby Taylor[4] • 2019: Naomi Sweeting • 2020: Natalia Pacheco-Tallaj[5] • 2021: Elena Kim[6] • 2022: Letong (Carina) Hong[7] • 2023: Faye Jackson[8] See also • List of mathematics awards References 1. Recognizing excellence in the mathematical sciences: an international compilation of awards, prizes, and recipients. Jaguszewski, Janice M. Greenwich, Conn.: JAI Press. 1997. ISBN 0762302356. OCLC 37513025. 2. "Past Schafer Prize Recipients - AWM Association for Women in Mathematics". sites.google.com. Retrieved 2018-05-04. 3. Gallian, Joseph A. (June–July 2019), "The First Twenty-Five Winners of the AWM Alice T. Schafer Prize" (PDF), Notices of the American Mathematical Society, 66 (6): 870–874 4. "Libby Taylor wins 2018 Alice T. 
Schafer Mathematics Prize". Georgia Tech College of Sciences. Retrieved 2018-09-14. 5. "Natalia Pacheco-Tallaj awarded Alice T. Schafer Prize". Harvard Mathematics. Retrieved 2020-05-13. 6. "Elena Kim '21 Awarded 2021 Alice T. Shafer Mathematics Prize". Pomona College. Retrieved 2021-01-11. 7. "Awards for Carina Hong and Alexandra Hoey – Women In Math". math.mit.edu. Retrieved 2022-11-07. 8. "Schafer Prize 2023". Association for Women in Mathematics. Retrieved 2022-12-06.
Schanuel's lemma In mathematics, especially in the area of algebra known as module theory, Schanuel's lemma, named after Stephen Schanuel, allows one to compare how far modules depart from being projective. It is useful in defining the Heller operator in the stable category, and in giving elementary descriptions of dimension shifting. Statement Schanuel's lemma is the following statement: If 0 → K → P → M → 0 and 0 → K′ → P′ → M → 0 are short exact sequences of R-modules and P and P′ are projective, then K ⊕ P′ is isomorphic to K′ ⊕ P. Proof Define the following submodule of P ⊕ P′, where φ : P → M and φ′ : P′ → M: $X=\{(p,q)\in P\oplus P':\phi (p)=\phi '(q)\}.$ The map π : X → P, where π is defined as the projection of the first coordinate of X into P, is surjective. Since φ′ is surjective, for any p in P, one may find a q in P′ such that φ(p) = φ′(q). This gives (p,q) $\in $ X with π(p,q) = p. Now examine the kernel of the map π: ${\begin{aligned}\ker \pi &=\{(0,q):(0,q)\in X\}\\&=\{(0,q):\phi '(q)=0\}\\&\cong \ker \phi '\cong K'.\end{aligned}}$ We may conclude that there is a short exact sequence $0\rightarrow K'\rightarrow X\rightarrow P\rightarrow 0.$ Since P is projective this sequence splits, so X ≅ K′ ⊕ P. Similarly, we can write another map π : X → P′, and the same argument as above shows that there is another short exact sequence $0\rightarrow K\rightarrow X\rightarrow P'\rightarrow 0,$ and so X ≅ P′ ⊕ K. Combining the two equivalences for X gives the desired result. Long exact sequences The above argument may also be generalized to long exact sequences.[1] Origins Stephen Schanuel discovered the argument in Irving Kaplansky's homological algebra course at the University of Chicago in Autumn of 1958. Kaplansky writes: Early in the course I formed a one-step projective resolution of a module, and remarked that if the kernel was projective in one resolution it was projective in all. 
I added that, although the statement was so simple and straightforward, it would be a while before we proved it. Steve Schanuel spoke up and told me and the class that it was quite easy, and thereupon sketched what has come to be known as "Schanuel's lemma." [2] Notes 1. Lam, T.Y. (1999). Lectures on Modules and Rings. Springer. ISBN 0-387-98428-3. pgs. 165–167. 2. Kaplansky, Irving (1972). Fields and Rings. Chicago Lectures in Mathematics (2nd ed.). University Of Chicago Press. pp. 165–168. ISBN 0-226-42451-0. Zbl 1001.16500.
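As a concrete illustration of the statement (an example added here, not part of the original article): over $R=\mathbb{Z}$, the module $M=\mathbb{Z}/n\mathbb{Z}$ admits two different projective resolutions, and Schanuel's lemma relates their kernels.

```latex
% Two projective resolutions of M = Z/nZ over R = Z:
%   0 -> Z --(multiplication by n)--> Z -> Z/nZ -> 0        (K = Z,  P = Z)
%   0 -> K' -> Z^2 --((a,b) |-> a+b mod n)--> Z/nZ -> 0     (P' = Z^2)
\[
  K' \;=\; \{(a,b)\in\mathbb{Z}^{2} : n \mid a+b\}
      \;=\; \mathbb{Z}(1,-1)\,\oplus\,\mathbb{Z}(0,n)
      \;\cong\; \mathbb{Z}^{2},
\]
% and Schanuel's lemma asserts K + P' isomorphic to K' + P, i.e.
\[
  \mathbb{Z}\oplus\mathbb{Z}^{2} \;\cong\; \mathbb{Z}^{2}\oplus\mathbb{Z},
\]
% both sides being free of rank 3, as expected.
```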
Schanuel's conjecture In mathematics, specifically transcendental number theory, Schanuel's conjecture is a conjecture made by Stephen Schanuel in the 1960s concerning the transcendence degree of certain field extensions of the rational numbers. Statement The conjecture is as follows: Given any n complex numbers z1, ..., zn that are linearly independent over the rational numbers $\mathbb {Q} $, the field extension $\mathbb {Q} $(z1, ..., zn, ez1, ..., ezn) has transcendence degree at least n over $\mathbb {Q} $. The conjecture can be found in Lang (1966).[1] Consequences The conjecture, if proven, would generalize most known results in transcendental number theory. The special case where the numbers z1,...,zn are all algebraic is the Lindemann–Weierstrass theorem. If, on the other hand, the numbers are chosen so as to make exp(z1),...,exp(zn) all algebraic then one would prove that linearly independent logarithms of algebraic numbers are algebraically independent, a strengthening of Baker's theorem. The Gelfond–Schneider theorem follows from this strengthened version of Baker's theorem, as does the currently unproven four exponentials conjecture. Schanuel's conjecture, if proved, would also settle whether numbers such as e + π and ee are algebraic or transcendental, and prove that e and π are algebraically independent simply by setting z1 = 1 and z2 = πi, and using Euler's identity. Euler's identity states that eπi + 1 = 0. 
If Schanuel's conjecture is true then this is, in some precise sense involving exponential rings, the only relation between e, π, and i over the complex numbers.[2] Although ostensibly a problem in number theory, the conjecture has implications in model theory as well. Angus Macintyre and Alex Wilkie, for example, proved that the theory of the real field with exponentiation, $\mathbb {R} $exp, is decidable provided Schanuel's conjecture is true.[3] In fact they only needed the real version of the conjecture, defined below, to prove this result, which would be a positive solution to Tarski's exponential function problem. Related conjectures and results The converse Schanuel conjecture[4] is the following statement: Suppose F is a countable field with characteristic 0, and e : F → F is a homomorphism from the additive group (F,+) to the multiplicative group (F,·) whose kernel is cyclic. Suppose further that for any n elements x1,...,xn of F which are linearly independent over $\mathbb {Q} $, the extension field $\mathbb {Q} $(x1,...,xn,e(x1),...,e(xn)) has transcendence degree at least n over $\mathbb {Q} $. Then there exists a field homomorphism h : F → $\mathbb {C} $ such that h(e(x)) = exp(h(x)) for all x in F. A version of Schanuel's conjecture for formal power series, also by Schanuel, was proven by James Ax in 1971.[5] It states: Given any n formal power series f1,...,fn in $t\mathbb {C} [[t]]$ which are linearly independent over $\mathbb {Q} $, the field extension $\mathbb {C} $(t,f1,...,fn,exp(f1),...,exp(fn)) has transcendence degree at least n over $\mathbb {C} $(t). As stated above, the decidability of $\mathbb {R} $exp follows from the real version of Schanuel's conjecture, which is as follows:[6] Suppose x1,...,xn are real numbers and the transcendence degree of the field $\mathbb {Q} $(x1,...,xn, exp(x1),...,exp(xn)) is strictly less than n; then there are integers m1,...,mn, not all zero, such that m1x1 +...+ mnxn = 0. 
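The e and π argument mentioned under Consequences can be spelled out as a short derivation (an illustration added here, not from the original article):

```latex
% Take z_1 = 1 and z_2 = \pi i: these are linearly independent over Q,
% since \pi is irrational. Schanuel's conjecture would then give
\[
  \operatorname{trdeg}_{\mathbb{Q}}
    \mathbb{Q}\bigl(1,\; \pi i,\; e,\; e^{\pi i}\bigr) \;\geq\; 2.
\]
% By Euler's identity e^{\pi i} = -1, this field is Q(\pi i, e), so
% \pi i and e (hence \pi and e, since i is algebraic) would be
% algebraically independent over Q. In particular, e + \pi and e\pi
% would then both be transcendental.
```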
A related conjecture called the uniform real Schanuel's conjecture essentially says the same but puts a bound on the integers mi. The uniform real version of the conjecture is equivalent to the standard real version.[6] Macintyre and Wilkie showed that a consequence of Schanuel's conjecture, which they dubbed the Weak Schanuel's conjecture, was equivalent to the decidability of $\mathbb {R} $exp. This conjecture states that there is a computable upper bound on the norm of non-singular solutions to systems of exponential polynomials; this is, non-obviously, a consequence of Schanuel's conjecture for the reals.[3] It is also known that Schanuel's conjecture would be a consequence of conjectural results in the theory of motives. In this setting Grothendieck's period conjecture for an abelian variety A states that the transcendence degree of its period matrix is the same as the dimension of the associated Mumford–Tate group, and what is known by work of Pierre Deligne is that the dimension is an upper bound for the transcendence degree. Bertolin has shown how a generalised period conjecture includes Schanuel's conjecture.[7] Zilber's pseudo-exponentiation While a proof of Schanuel's conjecture seems a long way off,[8] connections with model theory have prompted a surge of research on the conjecture. In 2004, Boris Zilber systematically constructed exponential fields Kexp that are algebraically closed and of characteristic zero, and such that one of these fields exists for each uncountable cardinality.[9] He axiomatised these fields and, using Hrushovski's construction and techniques inspired by work of Shelah on categoricity in infinitary logics, proved that this theory of "pseudo-exponentiation" has a unique model in each uncountable cardinal. Schanuel's conjecture is part of this axiomatisation, and so the natural conjecture that the unique model of cardinality continuum is actually isomorphic to the complex exponential field implies Schanuel's conjecture. 
In fact, Zilber showed that this conjecture holds if and only if both Schanuel's conjecture and another unproven condition on the complex exponentiation field, which Zilber calls exponential-algebraic closedness, hold.[10] As this construction can also give models with counterexamples of Schanuel's conjecture, this method cannot prove Schanuel's conjecture.[11] References 1. Lang, Serge (1966). Introduction to Transcendental Numbers. Addison–Wesley. pp. 30–31. 2. Terzo, Giuseppina (2008). "Some consequences of Schanuel's conjecture in exponential rings". Communications in Algebra. 36 (3): 1171–1189. doi:10.1080/00927870701410694. S2CID 122764821. 3. Macintyre, A. & Wilkie, A. J. (1996). "On the decidability of the real exponential field". In Odifreddi, Piergiorgio (ed.). Kreiseliana: About and Around Georg Kreisel. Wellesley: Peters. pp. 441–467. ISBN 978-1-56881-061-4. 4. Scott W. Williams, Million Bucks Problems 5. Ax, James (1971). "On Schanuel's conjectures". Annals of Mathematics. 93 (2): 252–268. doi:10.2307/1970774. JSTOR 1970774. 6. Kirby, Jonathan & Zilber, Boris (2006). "The uniform Schanuel conjecture over the real numbers". Bull. London Math. Soc. 38 (4): 568–570. CiteSeerX 10.1.1.407.5667. doi:10.1112/S0024609306018510. S2CID 122077474. 7. Bertolin, Cristiana (2002). "Périodes de 1-motifs et transcendance". Journal of Number Theory. 97 (2): 204–221. doi:10.1016/S0022-314X(02)00002-1. hdl:2318/103562. 8. Waldschmidt, Michel (2000). Diophantine approximation on linear algebraic groups. Berlin: Springer. ISBN 978-3-662-11569-5. 9. Zilber, Boris (2004). "Pseudo-exponentiation on algebraically closed fields of characteristic zero". Annals of Pure and Applied Logic. 132 (1): 67–95. doi:10.1016/j.apal.2004.07.001. 10. Zilber, Boris (2002). "Exponential sums equations and the Schanuel conjecture". J. London Math. Soc. 65 (2): 27–44. doi:10.1112/S0024610701002861. S2CID 123143365. 11. Bays, Martin; Kirby, Jonathan (2018). 
"Pseudo-exponential maps, variants, and quasiminimality". Algebra Number Theory. 12 (3): 493–549. arXiv:1512.04262. doi:10.2140/ant.2018.12.493. S2CID 119602079. External links • Weisstein, Eric W. "Schanuel's Conjecture". MathWorld.
Martin Scharlemann Martin George Scharlemann (born 6 December 1948) is an American topologist who is a professor at the University of California, Santa Barbara.[1] He obtained his Ph.D. from the University of California, Berkeley under the guidance of Robion Kirby in 1974.[2] A conference in his honor was held in 2009 at the University of California, Davis.[3] He is a Fellow of the American Mathematical Society, for his "contributions to low-dimensional topology and knot theory."[4] Abigail Thompson was a student of his.[2] Together they solved the graph planarity problem: There is an algorithm to decide whether a finite graph in 3-space can be moved in 3-space into a plane.[5] He gave the first proof of the classical theorem that knots with unknotting number one are prime. He used hard combinatorial arguments for this. Simpler proofs are now known.[6][7] Selected publications • "Producing reducible 3-manifolds by surgery on a knot" Topology 29 (1990), no. 4, 481–500. • with Abigail Thompson, "Heegaard splittings of (surface) x I are standard" Mathematische Annalen 295 (1993), no. 3, 549–564. • "Sutured manifolds and generalized Thurston norms", Journal of Differential Geometry 29 (1989), no. 3, 557–614. • with J. Hyam Rubinstein, "Comparing Heegaard splittings of non-Haken 3-manifolds" Topology 35 (1996), no. 4, 1005–1026 • "Unknotting number one knots are prime", Inventiones mathematicae 82 (1985), no. 1, 37–55. • with Maggy Tomova, "Alternate Heegaard genus bounds distance" Geometry & Topology 10 (2006), 593–617. 
• "Local detection of strongly irreducible Heegaard splittings" Topology and its Applications, 1998 • with Abigail Thompson – "Link genus and the Conway moves" Commentarii Mathematici Helvetici, 1989 • "Smooth spheres in $\mathbb {R} ^{4}$ with four critical points are standard" Inventiones mathematicae, 1985 • "Tunnel number one knots satisfy the Poenaru conjecture" Topology and its Applications, 1984 • with A Thompson – "Detecting unknotted graphs in 3-space" Journal of Differential Geometry, 1991 • with A Thompson – "Thin position and Heegaard splittings of the 3-sphere" J. Differential Geom, 1994 References 1. "Curriculum Vitae – Martin Scharlemann". 2. "The Mathematics Genealogy Project – Martin Scharlemann". 3. "Geometric Topology in Dimensions 3 and 4". 4. https://www.ams.org/profession/ams-fellows/fellows2014.pdf 5. Scharlemann, Martin; Thompson, Abigail (1991). "Detecting unknotted graphs in 3-space". Journal of Differential Geometry. 34 (2): 539–560. doi:10.4310/jdg/1214447220. 6. Lackenby, Marc (1997-08-01). "Surfaces, surgery and unknotting operations". Mathematische Annalen. 308 (4): 615–632. doi:10.1007/s002080050093. ISSN 0025-5831. S2CID 121512073. 7. Zhang, Xingru (1991-01-01). "Unknotting Number One Knots are Prime: A New Proof". Proceedings of the American Mathematical Society. 113 (2): 611–612. doi:10.2307/2048550. JSTOR 2048550.
Robert Schatten Robert Schatten (January 28, 1911 – August 26, 1977) was an American mathematician. Robert Schatten Born: January 28, 1911, Lwów, Poland Died: August 26, 1977 (aged 66), New York City Nationality: American Alma mater: Jan Kazimierz University of Lwów; Columbia University Known for: Schatten norm, Schatten class operator Scientific career Fields: Mathematics Institutions: University of Kansas; City University of New York Thesis: On the Direct Product of Banach Spaces (1943) Doctoral advisor: Francis Joseph Murray Doctoral students: Elliott Ward Cheney, Jr. Robert Schatten was born to a Jewish family in Lviv. His intellectual origins were in the Lwów School of Mathematics, particularly well known for fundamental contributions to functional analysis. His entire family was murdered during World War II; he himself emigrated to the United States. In 1933 he received his magister degree from the Jan Kazimierz University of Lwów, and in 1939 he received his master's degree from Columbia University. Supervised by Francis Joseph Murray, he received his doctorate in 1942 for the thesis "On the Direct Product of Banach Spaces". Shortly after being appointed to a junior professorship, he joined the United States Army, where during training he suffered a back injury which affected him for the remainder of his life. In 1943 he was appointed to an assistant professorship at the University of Vermont. At the National Research Council, he worked for two years with John von Neumann and Nelson Dunford. In 1946, he went to the University of Kansas, first as extraordinary professor until 1952 and then as ordinary professor until 1961. He stayed at the Institute for Advanced Study in 1950 and 1952–1953, at the University of Southern California in 1960–1961, and at the State University of New York in 1961–1962. In 1962 he became a professor at Hunter College, where he stayed until his death. Schatten widely studied tensor products of Banach spaces. 
In functional analysis, he is the namesake of the Schatten norm and the Schatten class operators. His doctoral students included Elliott Ward Cheney, Jr. at the University of Kansas, and Peter Falley and Charles Masiello at the City University of New York. Schatten died in New York City in 1977. Further reading • A Theory of Cross-Spaces. Annals of Mathematics Studies, ISBN 0-691-08396-7 • Norm Ideals of Completely Continuous Operators.[1] Ergebnisse der Mathematik und ihrer Grenzgebiete, 2. Folge, ISBN 3-540-04806-5 References 1. Fan, Ky (1961). "Review: Norm ideals of completely continuous operators". Bull. Amer. Math. Soc. 67 (6): 532–533. doi:10.1090/s0002-9904-1961-10668-x. External links • Robert Schatten at the Mathematics Genealogy Project
Schatten class operator In mathematics, specifically functional analysis, a pth Schatten-class operator is a bounded linear operator on a Hilbert space with finite pth Schatten norm. The space of pth Schatten-class operators is a Banach space with respect to the Schatten norm. Via polar decomposition, one can prove that the space of pth Schatten class operators is an ideal in B(H). Furthermore, the Schatten norm satisfies a type of Hölder inequality: $\|ST\|_{S_{1}}\leq \|S\|_{S_{p}}\|T\|_{S_{q}}\ {\mbox{if}}\ S\in S_{p},\ T\in S_{q}{\mbox{ and }}1/p+1/q=1.$ If we denote by $S_{\infty }$ the Banach space of compact operators on H with respect to the operator norm, the above Hölder-type inequality even holds for $p\in [1,\infty ]$. From this it follows that $\phi :S_{p}\rightarrow S_{q}'$, $T\mapsto \mathrm {tr} (T\cdot )$ is a well-defined contraction. (Here the prime denotes (topological) dual.) Observe that the 2nd Schatten class is in fact the Hilbert space of Hilbert–Schmidt operators. Moreover, the 1st Schatten class is the space of trace class operators.
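Numerically, the Schatten p-norm is the ℓ^p norm of the vector of singular values, and the Hölder-type inequality above can be spot-checked on random matrices. A hedged sketch using NumPy (the helper name `schatten_norm` is chosen for this sketch):

```python
import numpy as np

def schatten_norm(a, p):
    """Schatten p-norm: the l^p norm of the vector of singular values."""
    s = np.linalg.svd(a, compute_uv=False)  # singular values of a
    return float(s.max()) if np.isinf(p) else float((s ** p).sum() ** (1.0 / p))

rng = np.random.default_rng(0)
for p in (1.5, 2.0, 3.0):
    q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1
    for _ in range(100):
        s_mat = rng.standard_normal((5, 5))
        t_mat = rng.standard_normal((5, 5))
        lhs = schatten_norm(s_mat @ t_mat, 1)           # trace norm of ST
        rhs = schatten_norm(s_mat, p) * schatten_norm(t_mat, q)
        assert lhs <= rhs * (1 + 1e-12)                 # Hoelder-type bound
print("Hoelder-type inequality held in all random trials")
```

For p = 2 this recovers the Hilbert–Schmidt (Frobenius) norm, and for p = 1 the trace norm.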
Schauder estimates In mathematics, and more precisely, in functional analysis and PDEs, the Schauder estimates are a collection of results due to Juliusz Schauder (1934, 1937) concerning the regularity of solutions to linear, uniformly elliptic partial differential equations. The estimates say that when the equation has appropriately smooth terms and appropriately smooth solutions, then the Hölder norm of the solution can be controlled in terms of the Hölder norms for the coefficient and source terms. Since these estimates assume by hypothesis the existence of a solution, they are called a priori estimates. There is both an interior result, giving a Hölder condition for the solution in interior domains away from the boundary, and a boundary result, giving the Hölder condition for the solution in the entire domain. The former bound depends only on the spatial dimension, the equation, and the distance to the boundary; the latter depends on the smoothness of the boundary as well. The Schauder estimates are a necessary precondition to using the method of continuity to prove the existence and regularity of solutions to the Dirichlet problem for elliptic PDEs. This result says that when the coefficients of the equation and the nature of the boundary conditions are sufficiently smooth, there is a smooth classical solution to the PDE. Notation The Schauder estimates are given in terms of weighted Hölder norms; the notation will follow that given in the text of D. Gilbarg and Neil Trudinger (1983). 
The supremum norm of a continuous function $f\in C(\Omega )$ is given by $|f|_{0;\Omega }=\sup _{x\in \Omega }|f(x)|$ For a function which is Hölder continuous with exponent $\alpha $, that is to say $f\in C^{0,\alpha }(\Omega )$, the usual Hölder seminorm is given by $[f]_{0,\alpha ;\Omega }=\sup _{x,y\in \Omega }{\frac {|f(x)-f(y)|}{|x-y|^{\alpha }}}.$ The sum of the two is the full Hölder norm of f $|f|_{0,\alpha ;\Omega }=|f|_{0;\Omega }+[f]_{0,\alpha ;\Omega }=\sup _{x\in \Omega }|f(x)|+\sup _{x,y\in \Omega }{\frac {|f(x)-f(y)|}{|x-y|^{\alpha }}}.$ For differentiable functions u, it is necessary to consider the higher order norms, involving derivatives. The norm in the space of functions with k continuous derivatives, $C^{k}(\Omega )$, is given by $|u|_{k;\Omega }=\sum _{|\beta |\leq k}\sup _{x\in \Omega }|D^{\beta }u(x)|$ where $\beta $ ranges over all multi-indices of appropriate orders. 
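These unweighted norms are easy to approximate on a grid. The following sketch (illustrative, not from the article; `holder_seminorm` is a name chosen here) estimates the supremum norm and the Hölder seminorm for $f(x)={\sqrt {x}}$ on $[0,1]$, which is Hölder continuous with exponent 1/2 but not Lipschitz:

```python
import numpy as np

def holder_seminorm(x, fx, alpha):
    """Brute-force estimate of [f]_{0,alpha} over all pairs of grid points."""
    dx = np.abs(x[:, None] - x[None, :])   # |x - y| for all pairs
    df = np.abs(fx[:, None] - fx[None, :]) # |f(x) - f(y)| for all pairs
    mask = dx > 0                          # exclude the diagonal x == y
    return float((df[mask] / dx[mask] ** alpha).max())

x = np.linspace(0.0, 1.0, 400)
f = np.sqrt(x)
print(np.abs(f).max())              # sup norm |f|_0 = 1
print(holder_seminorm(x, f, 0.5))   # about 1: the 1/2-Hölder seminorm is finite
print(holder_seminorm(x, f, 1.0))   # large: Lipschitz quotients blow up near 0
```

The difference quotients with exponent 1/2 are largest for pairs involving x = 0, where they equal exactly 1, while the Lipschitz (exponent 1) quotients grow without bound as the grid is refined.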
For functions with kth order derivatives which are Hölder continuous with exponent $\alpha $, the appropriate seminorm is given by $[u]_{k,\alpha ;\Omega }=\sup _{\stackrel {x,y\in \Omega }{|\beta |=k}}{\frac {|D^{\beta }u(x)-D^{\beta }u(y)|}{|x-y|^{\alpha }}}$ which gives a full norm of $|u|_{k,\alpha ;\Omega }=|u|_{k;\Omega }+[u]_{k,\alpha ;\Omega }=\sum _{|\beta |\leq k}\sup _{x\in \Omega }|D^{\beta }u(x)|+\sup _{\stackrel {x,y\in \Omega }{|\beta |=k}}{\frac {|D^{\beta }u(x)-D^{\beta }u(y)|}{|x-y|^{\alpha }}}.$ For the interior estimates, the norms are weighted by the distance to the boundary $d_{x}=d(x,\partial \Omega )$ raised to the same power as the derivative, and the seminorms are weighted by $d_{x,y}=\min(d_{x},d_{y})$ raised to the appropriate power. 
The resulting weighted interior norm for a function is given by $|u|_{k,\alpha ;\Omega }^{*}=|u|_{k;\Omega }^{*}+[u]_{k,\alpha ;\Omega }^{*}=\sum _{|\beta |\leq k}\sup _{x\in \Omega }|d_{x}^{|\beta |}D^{\beta }u(x)|+\sup _{\stackrel {x,y\in \Omega }{|\beta |=k}}d_{x,y}^{k+\alpha }{\frac {|D^{\beta }u(x)-D^{\beta }u(y)|}{|x-y|^{\alpha }}}$ It is occasionally necessary to add "extra" powers of the weight, denoted by $|u|_{k,\alpha ;\Omega }^{(m)}=|u|_{k;\Omega }^{(m)}+[u]_{k,\alpha ;\Omega }^{(m)}=\sum _{|\beta |\leq k}\sup _{x\in \Omega }|d_{x}^{|\beta |+m}D^{\beta }u(x)|+\sup _{\stackrel {x,y\in \Omega }{|\beta |=k}}d_{x,y}^{m+k+\alpha }{\frac {|D^{\beta }u(x)-D^{\beta }u(y)|}{|x-y|^{\alpha }}}.$ Formulation The formulations in this section are taken from the text of D. Gilbarg and Neil Trudinger (1983). Interior estimates Consider a bounded solution $u\in C^{2,\alpha }(\Omega )$ on the domain $\Omega $ to the elliptic, second order, partial differential equation $\sum _{i,j}a_{i,j}(x)D_{i}D_{j}u(x)+\sum _{i}b_{i}(x)D_{i}u(x)+c(x)u(x)=f(x)$ where the source term satisfies $f\in C^{\alpha }(\Omega )$. 
If there exists a constant $\lambda >0$ such that the $a_{i,j}$ are strictly elliptic, $\sum a_{i,j}(x)\xi _{i}\xi _{j}\geq \lambda |\xi |^{2}$ for all $x\in \Omega ,\xi \in \mathbb {R} ^{n}$ and the relevant norms of the coefficients are all bounded by another constant $\Lambda $, $|a_{i,j}|_{0,\alpha ;\Omega },|b_{i}|_{0,\alpha ;\Omega }^{(1)},|c|_{0,\alpha ;\Omega }^{(2)}\leq \Lambda ,$ then the weighted $C^{2,\alpha }$ norm of u is controlled by the supremum of u and the Hölder norm of f: $|u|_{2,\alpha ;\Omega }^{*}\leq C(n,\alpha ,\lambda ,\Lambda )(|u|_{0;\Omega }+|f|_{0,\alpha ;\Omega }^{(2)}).$ Boundary estimates Let $\Omega $ be a $C^{2,\alpha }$ domain (that is to say, about any point on the boundary of the domain the boundary hypersurface can be realized, after an appropriate rotation of coordinates, as a $C^{2,\alpha }$ function), with Dirichlet boundary data that coincides with a function $\phi (x)$ which is also at least $C^{2,\alpha }$. Then, subject to analogous conditions on the coefficients as in the case of the interior estimate, the unweighted Hölder norm of u is controlled by the unweighted norms of the source term, the boundary data, and the supremum norm of u: $|u|_{2,\alpha ;\Omega }\leq C(n,\alpha ,\lambda ,\Lambda ,\Omega )(|u|_{0;\Omega }+|f|_{0,\alpha ;\Omega }+|\phi |_{2,\alpha ;\partial \Omega }).$ When the solution u satisfies the maximum principle, the first term on the right hand side can be dropped. 
Sources • Gilbarg, D.; Trudinger, Neil (1983), Elliptic Partial Differential Equations of Second Order, New York: Springer, ISBN 3-540-41160-7 • Schauder, Juliusz (1934), "Über lineare elliptische Differentialgleichungen zweiter Ordnung", Mathematische Zeitschrift (in German), Berlin, Germany: Springer-Verlag, vol. 38, no. 1, pp. 257–282, doi:10.1007/BF01170635, S2CID 120461752 MR1545448 • Schauder, Juliusz (1937), "Numerische Abschätzungen in elliptischen linearen Differentialgleichungen" (PDF), Studia Mathematica (in German), Lwów, Poland: Polska Akademia Nauk. Instytut Matematyczny, vol. 5, pp. 34–42 Further reading • Courant, Richard; Hilbert, David (1989), Methods of Mathematical Physics, vol. 2 (1st English ed.), New York: Wiley-Interscience, ISBN 0-471-50439-4 • Han, Qing; Lin, Fanghua (1997), Elliptic Partial Differential Equations, New York: Courant Institute of Mathematical Sciences, ISBN 0-9658703-0-8, OCLC 38168365 MR1669352
Schaum's Outlines Schaum's Outlines (/ʃɔːm/) is a series of supplementary texts for American high school, AP, and college-level courses, currently published by McGraw-Hill Education Professional, a subsidiary of McGraw-Hill Education. The outlines cover a wide variety of academic subjects including mathematics, engineering and the physical sciences, computer science, biology and the health sciences, accounting, finance, economics, grammar and vocabulary, and other fields.[1] In most subject areas the full title of each outline starts with Schaum's Outline of Theory and Problems of, but on the cover this has been shortened to simply Schaum's Outlines followed by the subject name in more recent texts. Background and description The series was originally developed in the 1930s by Daniel Schaum (November 13, 1913 – August 22, 2008), son of eastern European immigrants. McGraw-Hill purchased Schaum Publishing Company in 1967.[2] Titles are continually revised to reflect current educational standards in their fields, including updates with new information, additional examples, use of new technology (calculators and computers), and so forth. New titles are also introduced in emerging fields such as computer graphics. Many titles feature noted authors in their respective fields, such as Murray R. Spiegel and Seymour Lipschutz. Originally designed for college-level students as a supplement to standard course textbooks, each chapter of a typical Outline begins with only a terse explanation of relevant topics, followed by many fully worked examples to illustrate common problem-solving techniques, and ends with a set of further exercises where usually only brief answers are given and not full solutions. Despite being marketed as a supplement, several titles have become widely used as primary textbooks for courses (the Discrete Mathematics and Statistics titles are examples). 
This is particularly true in settings where an important factor in the selection of a text is the price, such as in community colleges. Easy Outlines Condensed versions of the full Schaum's Outlines called "Easy Outlines" started to appear in the late 1990s, aimed primarily at high-school students, especially those taking AP courses. These typically feature the same explanatory material as their full-size counterparts, sometimes edited to omit advanced topics, but contain greatly reduced sets of worked examples and usually lack any supplementary exercises. As a result, they are less suited to self-study for those learning a subject for the first time, unless they are used alongside a standard textbook or other resource. They cost about half the price of the full outlines, however, and their smaller size makes them more portable. Comparison with other series Schaum's Outlines are part of the educational supplements niche of book publishing. They are a staple in the educational sections of retail bookstores, where books on subjects such as chemistry and calculus may be found. Many titles on advanced topics are also available, such as complex variables and topology, but these may be harder to find in retail stores. Schaum's Outlines are frequently seen alongside the Barron's "Easy Way" series and McGraw-Hill's own "Demystified" series. The "Demystified" series is introductory in nature, for middle and high school students, favoring more in-depth coverage of introductory material at the expense of fewer topics. The "Easy Way" series is a middle ground: more rigorous and detailed than the "Demystified" books, but not as rigorous and terse as the Schaum's series. Schaum's originally occupied the niche of college supplements, and the titles tend to be more advanced and rigorous. With the expansion of AP classes in high schools, Schaum's Outlines are positioned as AP supplements. The outline format makes explanations more terse than any other supplement. 
Schaum's has a much wider range of titles than any other series, including even some graduate-level titles. See also • Barron's Educational Series • CliffsNotes • SparkNotes References 1. "Viewing All Products in Schaums Outlines". McGraw-Hill Professional. Archived from the original on 6 May 2015. 2. "The McGraw-Hill Companies, Inc". International Directory of Company Histories. 2003. Retrieved 18 July 2016. External links • Official website
Eric Schechter
Eric Schechter (born August 1, 1950) is an American mathematician, retired from Vanderbilt University with the title of professor emeritus. His interests started primarily in analysis but moved into mathematical logic. Schechter is best known for his 1996 book Handbook of Analysis and its Foundations, which provides a novel approach to mathematical analysis and related topics at the graduate level.

Eric Schechter
• Born: August 1, 1950
• Nationality: American
• Alma mater: University of Chicago
• Fields: Mathematics
• Institutions: Vanderbilt University
• Doctoral advisor: Jerry L. Bona

Important works
Schechter has authored a number of articles in analysis, differential equations, mathematical logic, and set theory. He is best known for writing two textbooks covering advanced material but written at an introductory level:
• (2005) Classical and Nonclassical Logics (ISBN 0-691-12279-2)
• (1996) Handbook of Analysis and its Foundations (ISBN 0126227608)

Handbook of Analysis and its Foundations was reviewed at length in SIAM Review, the journal of the Society for Industrial and Applied Mathematics, which wrote: Every once in a while a book comes along that so effectively redefines an educational enterprise -- in this case, graduate mathematical training -- and so effectively reexamines the hegemony of ideas prevailing in a discipline -- in this case, mathematical analysis -- that it deserves our careful attention. This is such a book.[1]

Schechter also maintains two webpages that are frequently cited:
• Common Errors in Undergraduate Mathematics
• A Home Page for the Axiom of Choice

Politics
Schechter is involved in political activism of the democratic socialist variety.
His mathematical homepage includes a few anti-war statements,[2] and his political home page includes a long essay about progressive ideology.[3] He has worked as an organizer for the Nashville Peace Coalition, protesting the wars in Iraq and Afghanistan.[4] At a meeting for the living wage movement on Vanderbilt's campus, he remarked that it is hard to bring up politics in a non-political environment, and expressed that people did not talk much about politics in the mathematics department at Vanderbilt.[5] His father, Henry Schechter, was a deputy of the AFL-CIO.[6] In 2010, Schechter ran for Tennessee's 5th Congressional District seat against incumbent congressman Jim Cooper,[7] but was defeated in the Democratic primary. Schechter describes himself as "a different kind of Democrat."[8]

References
1. S.I.A.M. Review, Volume 40, Number 2, pp. 421–426.
2. Eric Schechter's Mathematical Home Page, Accessed Apr. 16, 2009.
3. Eric Schechter's Political Home Page, Accessed June 6, 2009.
4. "Students, others to protest war outside Petraeus presentation", The Vanderbilt Hustler, Mar. 1, 2010.
5. Allison Malone, "NEWS: LIVE enlists support of Greeks, student government and faculty to bolster campaign", Jan. 25, 2007.
6. "Obituaries: AFL-CIO Deputy Henry Schechter", Washington Post, July 22, 2005.
7. "Political discord inspires hundreds to enter TN races", The Tennessean, Apr. 2, 2010.
8. "Eric Schechter for U.S. Congress in 2010, Tennessee's 5th district". Eric Schechter. Archived from the original on November 11, 2010. Retrieved Oct 17, 2011.
External links • Eric Schechter's Mathematical Home Page • Eric Schechter's Political Philosophy Home Page • Eric Schechter's later Political Philosophy Home Page • Congressional Campaign Website • Additional Campaign Website • Eric Schechter at the Mathematics Genealogy Project
Jacobi method In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi. Not to be confused with Jacobi eigenvalue algorithm. Description Let $A\mathbf {x} =\mathbf {b} $ be a square system of n linear equations, where: $A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\end{bmatrix}},\qquad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\qquad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{n}\end{bmatrix}}.$ When $A$ and $\mathbf {b} $ are known, and $\mathbf {x} $ is unknown, we can use the Jacobi method to approximate $\mathbf {x} $. The vector $\mathbf {x} ^{(0)}$ denotes our initial guess for $\mathbf {x} $ (often $\mathbf {x} _{i}^{(0)}=0$ for $i=1,2,...,n$). We denote $\mathbf {x} ^{(k)}$ as the k-th approximation or iteration of $\mathbf {x} $, and $\mathbf {x} ^{(k+1)}$ is the next (or k+1) iteration of $\mathbf {x} $. 
Matrix-based formula
The matrix A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U: $A=D+L+U\qquad {\text{where}}\qquad D={\begin{bmatrix}a_{11}&0&\cdots &0\\0&a_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &a_{nn}\end{bmatrix}}{\text{ and }}L+U={\begin{bmatrix}0&a_{12}&\cdots &a_{1n}\\a_{21}&0&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &0\end{bmatrix}}.$ The solution is then obtained iteratively via $\mathbf {x} ^{(k+1)}=D^{-1}(\mathbf {b} -(L+U)\mathbf {x} ^{(k)}).$

Element-based formula
The element-based formula for each row $i$ is thus: $x_{i}^{(k+1)}={\frac {1}{a_{ii}}}\left(b_{i}-\sum _{j\neq i}a_{ij}x_{j}^{(k)}\right),\quad i=1,2,\ldots ,n.$ The computation of $x_{i}^{(k+1)}$ requires each element in $\mathbf {x} ^{(k)}$ except itself. Unlike the Gauss–Seidel method, we can't overwrite $x_{i}^{(k)}$ with $x_{i}^{(k+1)}$, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size n.

Algorithm
Input: initial guess x(0) to the solution, (diagonally dominant) matrix A, right-hand side vector b, convergence criterion
Output: solution when convergence is reached
Comments: pseudocode based on the element-based formula above

k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + aij xj(k)
            end
        end
        xi(k+1) = (bi − σ) / aii
    end
    increment k
end

Convergence
The standard convergence condition (for any iterative method) is when the spectral radius of the iteration matrix is less than 1: $\rho (D^{-1}(L+U))<1.$ A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant.
Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of other terms: $\left|a_{ii}\right|>\sum _{j\neq i}{\left|a_{ij}\right|}.$ The Jacobi method sometimes converges even if these conditions are not satisfied. Note that the Jacobi method does not converge for every symmetric positive-definite matrix. For example, $A={\begin{pmatrix}29&2&1\\2&6&1\\1&1&{\frac {1}{5}}\end{pmatrix}}\quad \Rightarrow \quad D^{-1}(L+U)={\begin{pmatrix}0&{\frac {2}{29}}&{\frac {1}{29}}\\{\frac {1}{3}}&0&{\frac {1}{6}}\\5&5&0\end{pmatrix}}\quad \Rightarrow \quad \rho (D^{-1}(L+U))\approx 1.0661\,.$ Examples Example 1 A linear system of the form $Ax=b$ with initial estimate $x^{(0)}$ is given by $A={\begin{bmatrix}2&1\\5&7\\\end{bmatrix}},\ b={\begin{bmatrix}11\\13\\\end{bmatrix}}\quad {\text{and}}\quad x^{(0)}={\begin{bmatrix}1\\1\\\end{bmatrix}}.$ We use the equation $x^{(k+1)}=D^{-1}(b-(L+U)x^{(k)})$, described above, to estimate $x$. First, we rewrite the equation in a more convenient form $D^{-1}(b-(L+U)x^{(k)})=Tx^{(k)}+C$, where $T=-D^{-1}(L+U)$ and $C=D^{-1}b$. 
From the known values $D^{-1}={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}},\ L={\begin{bmatrix}0&0\\5&0\\\end{bmatrix}}\quad {\text{and}}\quad U={\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}.$ we determine $T=-D^{-1}(L+U)$ as $T={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}}\left\{{\begin{bmatrix}0&0\\-5&0\\\end{bmatrix}}+{\begin{bmatrix}0&-1\\0&0\\\end{bmatrix}}\right\}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}.$ Further, $C$ is found as $C={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}}{\begin{bmatrix}11\\13\\\end{bmatrix}}={\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}.$ With $T$ and $C$ calculated, we estimate $x$ as $x^{(1)}=Tx^{(0)}+C$: $x^{(1)}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}{\begin{bmatrix}1\\1\\\end{bmatrix}}+{\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}={\begin{bmatrix}5.0\\8/7\\\end{bmatrix}}\approx {\begin{bmatrix}5\\1.143\\\end{bmatrix}}.$ The next iteration yields $x^{(2)}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}{\begin{bmatrix}5.0\\8/7\\\end{bmatrix}}+{\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}={\begin{bmatrix}69/14\\-12/7\\\end{bmatrix}}\approx {\begin{bmatrix}4.929\\-1.714\\\end{bmatrix}}.$ This process is repeated until convergence (i.e., until $\|Ax^{(n)}-b\|$ is small). The solution after 25 iterations is $x={\begin{bmatrix}7.111\\-3.222\end{bmatrix}}.$ Example 2 Suppose we are given the following linear system: ${\begin{aligned}10x_{1}-x_{2}+2x_{3}&=6,\\-x_{1}+11x_{2}-x_{3}+3x_{4}&=25,\\2x_{1}-x_{2}+10x_{3}-x_{4}&=-11,\\3x_{2}-x_{3}+8x_{4}&=15.\end{aligned}}$ If we choose (0, 0, 0, 0) as the initial approximation, then the first approximate solution is given by ${\begin{aligned}x_{1}&=(6+0-(2*0))/10=0.6,\\x_{2}&=(25+0+0-(3*0))/11=25/11=2.2727,\\x_{3}&=(-11-(2*0)+0+0)/10=-1.1,\\x_{4}&=(15-(3*0)+0)/8=1.875.\end{aligned}}$ Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximated solutions after five iterations. 
$x_{1}$      $x_{2}$      $x_{3}$      $x_{4}$
0.6          2.27272      -1.1         1.875
1.04727      1.7159       -0.80522     0.88522
0.93263      2.05330      -1.0493      1.13088
1.01519      1.95369      -0.9681      0.97384
0.98899      2.0114       -1.0102      1.02135

The exact solution of the system is (1, 2, −1, 1).

Python example

import numpy as np

ITERATION_LIMIT = 1000

# initialize the matrix
A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
# initialize the RHS vector
b = np.array([6., 25., -11., 15.])

# prints the system
print("System:")
for i in range(A.shape[0]):
    row = [f"{A[i, j]}*x{j + 1}" for j in range(A.shape[1])]
    print(f'{" + ".join(row)} = {b[i]}')
print()

x = np.zeros_like(b)
for it_count in range(ITERATION_LIMIT):
    if it_count != 0:
        print(f"Iteration {it_count}: {x}")
    x_new = np.zeros_like(x)
    for i in range(A.shape[0]):
        s1 = np.dot(A[i, :i], x[:i])
        s2 = np.dot(A[i, i + 1:], x[i + 1:])
        x_new[i] = (b[i] - s1 - s2) / A[i, i]
    if np.allclose(x, x_new, atol=1e-10, rtol=0.):
        break
    x = x_new

print("Solution: ")
print(x)
error = np.dot(A, x) - b
print("Error:")
print(error)

Weighted Jacobi method
The weighted Jacobi iteration uses a parameter $\omega $ to compute the iteration as $\mathbf {x} ^{(k+1)}=\omega D^{-1}(\mathbf {b} -(L+U)\mathbf {x} ^{(k)})+\left(1-\omega \right)\mathbf {x} ^{(k)}$ with $\omega =2/3$ being the usual choice.[1] From the relation $L+U=A-D$, this may also be expressed as $\mathbf {x} ^{(k+1)}=\omega D^{-1}\mathbf {b} +\left(I-\omega D^{-1}A\right)\mathbf {x} ^{(k)}$.

Convergence in the symmetric positive definite case
If the system matrix $A$ is symmetric positive-definite, one can show convergence. Let $C=C_{\omega }=I-\omega D^{-1}A$ be the iteration matrix. Then, convergence is guaranteed for $\rho (C_{\omega })<1\quad \Longleftrightarrow \quad 0<\omega <{\frac {2}{\lambda _{\text{max}}(D^{-1}A)}}\,,$ where $\lambda _{\text{max}}$ is the maximal eigenvalue.
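The weighted update needs only a one-line change to the plain iteration. A minimal sketch, reusing the matrix of Example 2 with the usual choice ω = 2/3 (illustrative only; this matrix also converges under plain Jacobi):

```python
import numpy as np

# Matrix and RHS of Example 2; symmetric and strictly diagonally dominant.
A = np.array([[10., -1., 2., 0.],
              [-1., 11., -1., 3.],
              [2., -1., 10., -1.],
              [0., 3., -1., 8.]])
b = np.array([6., 25., -11., 15.])

omega = 2.0 / 3.0            # the usual choice of weight
D = np.diag(A)               # diagonal entries as a vector
x = np.zeros_like(b)

for _ in range(100):
    # plain Jacobi step: D^{-1}(b - (L+U)x), using (L+U)x = Ax - Dx
    x_plain = (b - A @ x + D * x) / D
    # damped update: omega * (Jacobi step) + (1 - omega) * previous iterate
    x = omega * x_plain + (1.0 - omega) * x

print(x)  # approaches the exact solution (1, 2, -1, 1)
```

The damping leaves the fixed point unchanged (if x is the solution, x_plain equals x), so only the rate and range of convergence differ from the plain method.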
The spectral radius can be minimized for a particular choice of $\omega =\omega _{\text{opt}}$ as follows $\min _{\omega }\rho (C_{\omega })=\rho (C_{\omega _{\text{opt}}})=1-{\frac {2}{\kappa (D^{-1}A)+1}}\quad {\text{for}}\quad \omega _{\text{opt}}:={\frac {2}{\lambda _{\text{min}}(D^{-1}A)+\lambda _{\text{max}}(D^{-1}A)}}\,,$ where $\kappa $ is the matrix condition number. See also • Gauss–Seidel method • Successive over-relaxation • Iterative method § Linear systems • Gaussian Belief Propagation • Matrix splitting References 1. Saad, Yousef (2003). Iterative Methods for Sparse Linear Systems (2nd ed.). SIAM. p. 414. ISBN 0898715342. External links • This article incorporates text from the article Jacobi_method on CFD-Wiki that is under the GFDL license. • Black, Noel; Moore, Shirley & Weisstein, Eric W. "Jacobi method". MathWorld. • Jacobi Method from www.math-linux.com
Sheffer stroke
In Boolean functions and propositional calculus, the Sheffer stroke denotes a logical operation that is equivalent to the negation of the conjunction operation, expressed in ordinary language as "not both". It is also called non-conjunction, alternative denial (since it says in effect that at least one of its operands is false), or NAND ("not and"). In digital electronics, it corresponds to the NAND gate. It is named after Henry Maurice Sheffer and written as $\mid $ or as $\uparrow $ or as ${\overline {\wedge }}$ or as $Dpq$ in Polish notation by Łukasiewicz (but not as ||, often used to represent disjunction).

Sheffer stroke (NAND)
• Definition: ${\overline {x\cdot y}}$
• Truth table: $(0111)$
• Logic gate: NAND
• Normal forms: disjunctive ${\overline {x}}+{\overline {y}}$; conjunctive ${\overline {x}}+{\overline {y}}$
• Zhegalkin polynomial: $1\oplus xy$
• Post's lattices: 0-preserving: no; 1-preserving: no; monotone: no; affine: no

Its dual is the NOR operator (also known as the Peirce arrow, Quine dagger or Webb operator). Like its dual, NAND can be used by itself, without any other logical operator, to constitute a logical formal system (making NAND functionally complete). This property makes the NAND gate crucial to modern digital electronics, including its use in computer processor design.

Definition
The non-conjunction is a logical operation on two logical values. It produces a value of true if and only if at least one of the propositions is false.

Truth table
The truth table of $P\uparrow Q$ is as follows.
$P$      $Q$      $P\uparrow Q$
True     True     False
True     False    True
False    True     True
False    False    True

Logical equivalences
The Sheffer stroke of $P$ and $Q$ is the negation of their conjunction:

$P\uparrow Q\ \Leftrightarrow \ \neg (P\land Q)$

By De Morgan's laws, this is also equivalent to the disjunction of the negations of $P$ and $Q$:

$P\uparrow Q\ \Leftrightarrow \ \neg P\lor \neg Q$

Alternative notations and names
Peirce was the first to show the functional completeness of non-conjunction (representing this as ${\overline {\curlywedge }}$) but did not publish his result.[1][2] Peirce's editor added ${\overline {\curlywedge }}$ for non-disjunction.[2] In 1911, Stamm was the first to publish a proof of the functional completeness of non-conjunction, representing it with $\sim $ (the Stamm hook),[3] and the first to treat non-disjunction in print, showing the functional completeness of both operators.[4] In 1913, Sheffer described non-disjunction using $\mid $ and showed its functional completeness. Sheffer also used $\wedge $ for non-disjunction. Many people, beginning with Nicod in 1917, and followed by Whitehead, Russell and many others, mistakenly thought Sheffer had described non-conjunction using $\mid $, naming this the Sheffer stroke. In 1928, Hilbert and Ackermann described non-conjunction with the operator $/$.[5][6] In 1929, Łukasiewicz used $D$ in $Dpq$ for non-conjunction in his Polish notation.[7] An alternative notation for non-conjunction is $\uparrow $. It is not clear who first introduced this notation, although the corresponding $\downarrow $ for non-disjunction was used by Quine in 1940.[8]
History
The stroke is named after Henry Maurice Sheffer, who in 1913 published a paper in the Transactions of the American Mathematical Society[9] providing an axiomatization of Boolean algebras using the stroke, and proved its equivalence to a standard formulation thereof by Huntington employing the familiar operators of propositional logic (AND, OR, NOT). Because of the self-duality of Boolean algebras, Sheffer's axioms are equally valid for either of the NAND or NOR operations in place of the stroke. Sheffer interpreted the stroke as a sign for non-disjunction (NOR) in his paper, mentioning non-conjunction only in a footnote and without a special sign for it. It was Jean Nicod who first used the stroke as a sign for non-conjunction (NAND), in a paper of 1917, and this usage has since become current practice.[10][11] Russell and Whitehead used the Sheffer stroke in the 1927 second edition of Principia Mathematica and suggested it as a replacement for the "OR" and "NOT" operations of the first edition. Charles Sanders Peirce (1880) had discovered the functional completeness of NAND or NOR more than 30 years earlier, using the term ampheck (for 'cutting both ways'), but he never published his finding. Two years before Sheffer, Edward Stamm also described the NAND and NOR operators and showed that the other Boolean operations could be expressed by them.[4]

Properties
NAND does not possess any of the following five properties, each of which is required to be absent from, and the absence of all of which is sufficient for, at least one member of a set of functionally complete operators: truth-preservation, falsity-preservation, linearity, monotonicity, self-duality. (An operator is truth- (or falsity-)preserving if its value is truth (falsity) whenever all of its arguments are truth (falsity).) Therefore {NAND} is a functionally complete set. This can also be realized as follows: All three elements of the functionally complete set {AND, OR, NOT} can be constructed using only NAND.
Thus the set {NAND} must be functionally complete as well.

Other Boolean operations in terms of the Sheffer stroke
Expressed in terms of NAND $\uparrow $, the usual operators of propositional logic are:

$\neg P\ \Leftrightarrow \ P\uparrow P$
$P\rightarrow Q\ \Leftrightarrow \ P\uparrow (Q\uparrow Q)\ \Leftrightarrow \ P\uparrow (P\uparrow Q)$
$P\leftrightarrow Q\ \Leftrightarrow \ (P\uparrow Q)\uparrow ((P\uparrow P)\uparrow (Q\uparrow Q))$
$P\land Q\ \Leftrightarrow \ (P\uparrow Q)\uparrow (P\uparrow Q)$
$P\lor Q\ \Leftrightarrow \ (P\uparrow P)\uparrow (Q\uparrow Q)$

Formal system based on the Sheffer stroke
The following is an example of a formal system based entirely on the Sheffer stroke, yet having the functional expressiveness of the propositional logic:

Symbols
pn for natural numbers n: ( | )

The Sheffer stroke commutes but does not associate (e.g., (T | T) | F = T, but T | (T | F) = F). Hence any formal system including the Sheffer stroke as an infix symbol must also include a means of indicating grouping (grouping is automatic if the stroke is used as a prefix, thus: || TTF = T and | T | TF = F). We shall employ '(' and ')' to this effect. We also write p, q, r, … instead of p0, p1, p2, ….

Syntax
Construction rule I: For each natural number n, the symbol pn is a well-formed formula (WFF), called an atom.
Construction rule II: If X and Y are WFFs, then (X | Y) is a WFF.
Closure rule: Any formulae which cannot be constructed by means of the first two construction rules are not WFFs.

The letters U, V, W, X, and Y are metavariables standing for WFFs.
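The NAND expressions for the standard connectives given earlier in this section can be checked by brute force over all four truth assignments; a minimal sketch in Python, modelling the stroke as a Boolean function:

```python
from itertools import product

def nand(p, q):
    """Sheffer stroke: true unless both arguments are true."""
    return not (p and q)

for p, q in product([False, True], repeat=2):
    assert nand(p, p) == (not p)                        # negation
    assert nand(nand(p, q), nand(p, q)) == (p and q)    # conjunction
    assert nand(nand(p, p), nand(q, q)) == (p or q)     # disjunction
    assert nand(p, nand(q, q)) == ((not p) or q)        # implication
    # biconditional: (P|Q) | ((P|P) | (Q|Q))
    assert nand(nand(p, q), nand(nand(p, p), nand(q, q))) == (p == q)

print("all NAND identities hold on every assignment")
```

Since each identity is a propositional tautology, checking the four assignments is a complete proof.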
A decision procedure for determining whether a formula is well-formed goes as follows: "deconstruct" the formula by applying the construction rules backwards, thereby breaking the formula into smaller subformulae. Then repeat this recursive deconstruction process to each of the subformulae. Eventually the formula should be reduced to its atoms, but if some subformula cannot be so reduced, then the formula is not a WFF. Calculus All WFFs of the form ((U | (V | W)) | ((Y | (Y | Y)) | ((X | V) | ((U | X) | (U | X))))) are axioms. Instances of $(U|(V|W)),U\vdash W$ are inference rules. Simplification Since the only connective of this logic is |, the symbol | could be discarded altogether, leaving only the parentheses to group the letters. A pair of parentheses must always enclose a pair of WFFs. Examples of theorems in this simplified notation are (p(p(q(q((pq)(pq)))))), (p(p((qq)(pp)))). The notation can be simplified further, by letting (U) := (UU) $((U))\equiv U$ for any U. This simplification causes the need to change some rules: 1. More than two letters are allowed within parentheses. 2. Letters or WFFs within parentheses are allowed to commute. 3. Repeated letters or WFFs within a same set of parentheses can be eliminated. The result is a parenthetical version of the Peirce existential graphs. Another way to simplify the notation is to eliminate parentheses by using Polish notation (PN). For example, the earlier examples with only parentheses could be rewritten using only strokes as follows (p(p(q(q((pq)(pq)))))) becomes | p | p | q | q || pq | pq, and (p(p((qq)(pp)))) becomes, | p | p || qq | pp. This follows the same rules as the parenthesis version, with the opening parenthesis replaced with a Sheffer stroke and the (redundant) closing parenthesis removed. 
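The deconstruction procedure is, in effect, recursive-descent parsing. A minimal sketch of such a well-formedness check (restricted, for brevity, to the atoms p, q, r; the function names are ours):

```python
def parse(s, i=0):
    """Return the index just past one WFF starting at position i, or -1 on failure."""
    if i < len(s) and s[i] in "pqr":      # rule I: an atom is a WFF
        return i + 1
    if i < len(s) and s[i] == "(":        # rule II: (X | Y)
        j = parse(s, i + 1)               # parse X
        if j == -1 or j >= len(s) or s[j] != "|":
            return -1
        k = parse(s, j + 1)               # parse Y
        if k == -1 or k >= len(s) or s[k] != ")":
            return -1
        return k + 1
    return -1                             # closure rule: anything else fails

def is_wff(s):
    """A string is a WFF iff exactly one formula consumes the whole string."""
    return parse(s) == len(s)

print(is_wff("(p|(q|r))"))   # True
print(is_wff("(p|q|r)"))     # False: the stroke does not associate, grouping needed
```

Failure of the recursion at any position corresponds exactly to a subformula that "cannot be so reduced" in the text's decision procedure.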
Or (for some formulas) one could omit both parentheses and strokes and allow the order of the arguments to determine the order of function application so that for example, applying the function from right to left (reverse Polish notation – any other unambiguous convention based on ordering would do) ${\begin{aligned}&pqr\equiv (p\mid (q\mid r)),{\text{ whereas}}\\&rqp\equiv (r\mid (q\mid p)).\end{aligned}}$ See also • Boolean domain • CMOS • Gate equivalent (GE) • Laws of Form • Logical graph • Minimal axioms for Boolean algebra • NAND flash memory • NAND logic • Peirce's law • Peirce arrow = NOR • Sole sufficient operator References 1. Peirce, C. S. (1933) [1880]. "A Boolian Algebra with One Constant". In Hartshorne, C.; Weiss, P. (eds.). Collected Papers of Charles Sanders Peirce, Volume IV The Simplest Mathematics. Massachusetts: Harvard University Press. pp. 13–18. 2. Peirce, C. S. (1933) [1902]. "The Simplest Mathematics". In Hartshorne, C.; Weiss, P. (eds.). Collected Papers of Charles Sanders Peirce, Volume IV The Simplest Mathematics. Massachusetts: Harvard University Press. pp. 189–262. 3. Zach, R. (2023-02-18). "Sheffer stroke before Sheffer: Edward Stamm". Retrieved 2023-07-02. 4. Stamm, Edward Bronisław (1911). "Beitrag zur Algebra der Logik". Monatshefte für Mathematik und Physik (in German). 22 (1): 137–149. doi:10.1007/BF01742795. 5. Hilbert, D.; Ackermann, W. (1928). Grundzüge der theoretischen Logik (in German) (1 ed.). Berlin: Verlag von Julius Springer. p. 9. 6. Hilbert, D.; Ackermann, W. (1950). Luce, R. E. (ed.). Principles of Mathematical Logic. Translated by Hammond, L. M.; Leckie, G. G.; Steinhardt, F. New York: Chelsea Publishing Company. p. 11. 7. Łukasiewicz, J. (1958) [1929]. Elementy logiki matematycznej (in Polish) (2 ed.). Warszawa: Państwowe Wydawnictwo Naukowe. 8. Quine, W. V. (1981) [1940]. Mathematical Logic (Revised ed.). Cambridge, London, New York, New Rochelle, Melbourne and Sydney: Harvard University Press. p. 45.
9. Sheffer, Henry Maurice (1913). "A set of five independent postulates for Boolean algebras, with application to logical constants". Transactions of the American Mathematical Society. 14: 481–488. doi:10.2307/1988701. JSTOR 1988701. 10. Nicod, Jean George Pierre (1917). "A Reduction in the Number of Primitive Propositions of Logic". Proceedings of the Cambridge Philosophical Society. 19: 32–41. 11. Church, Alonzo (1956). Introduction to mathematical logic. Vol. 1. Princeton University Press. p. 134. Further reading • Bocheński, Józef Maria; Menne, Albert Heinrich [in German] (1960). Precis of Mathematical Logic. Translated by Bird, Otto (revised ed.). Dordrecht, South Holland, Netherlands: D. Reidel. (NB. Edited and translated from the French and German editions: Précis de logique mathématique) • Peirce, Charles Sanders (1931–1935) [1880]. "A Boolian Algebra with One Constant". In Hartshorne, Charles; Weiss, Paul (eds.). Collected Papers of Charles Sanders Peirce. Vol. 4. Cambridge: Harvard University Press. pp. 12–20. External links Wikimedia Commons has media related to Sheffer stroke. 
• Sheffer Stroke article in the Internet Encyclopedia of Philosophy • http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html • Implementations of 2- and 4-input NAND gates • Proofs of some axioms by Stroke function by Yasuo Setô @ Project Euclid
Scheinerman's conjecture In mathematics, Scheinerman's conjecture, now a theorem, states that every planar graph is the intersection graph of a set of line segments in the plane. This conjecture was formulated by E. R. Scheinerman in his Ph.D. thesis (1984), following earlier results that every planar graph could be represented as the intersection graph of a set of simple curves in the plane (Ehrlich, Even & Tarjan 1976). It was proven by Jeremie Chalopin and Daniel Gonçalves (2009). For instance, the graph G shown below to the left may be represented as the intersection graph of the set of segments shown below to the right. Here, vertices of G are represented by straight line segments and edges of G are represented by intersection points. Scheinerman also conjectured that segments with only three directions would be sufficient to represent 3-colorable graphs, and West (1991) conjectured that analogously every planar graph could be represented using four directions. If a graph is represented with segments having only k directions and no two segments belong to the same line, then the graph can be colored using k colors, one color for each direction. Therefore, if every planar graph can be represented in this way with only four directions, then the four color theorem follows. Hartman, Newman & Ziv (1991) and de Fraysseix, Ossona de Mendez & Pach (1991) proved that every bipartite planar graph can be represented as an intersection graph of horizontal and vertical line segments; for this result see also Czyzowicz, Kranakis & Urrutia (1998). De Castro et al. (2002) proved that every triangle-free planar graph can be represented as an intersection graph of line segments having only three directions; this result implies Grötzsch's theorem (Grötzsch 1959) that triangle-free planar graphs can be colored with three colors. 
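The notion of an intersection graph of segments is easy to make concrete. A small sketch that builds the intersection graph of a given set of segments, assuming general position (the helper names and example segments are ours):

```python
from itertools import combinations

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(s, t):
    """True if segments s and t cross (assumes general position, no collinearity)."""
    p, q = s
    r, u = t
    return (orient(p, q, r) != orient(p, q, u)
            and orient(r, u, p) != orient(r, u, q))

def intersection_graph(segments):
    """Edge set of the intersection graph: one vertex per segment,
    one edge per crossing pair."""
    return {(i, j) for (i, s), (j, t) in combinations(enumerate(segments), 2)
            if segments_cross(s, t)}

# three segments: segment 0 crosses both others, segments 1 and 2 do not cross
segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((0, 1), (2, 1))]
print(intersection_graph(segs))  # {(0, 1), (0, 2)}
```

The theorem asserts the converse direction: for every planar graph, some set of segments realizes it as such an intersection graph.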
de Fraysseix & Ossona de Mendez (2005) proved that if a planar graph G can be 4-colored in such a way that no separating cycle uses all four colors, then G has a representation as an intersection graph of segments. Chalopin, Gonçalves & Ochem (2007) proved that planar graphs are in 1-STRING, the class of intersection graphs of simple curves in the plane that intersect each other in at most one crossing point per pair. This class is intermediate between the intersection graphs of segments appearing in Scheinerman's conjecture and the intersection graphs of unrestricted simple curves from the result of Ehrlich et al. It can also be viewed as a generalization of the circle packing theorem, which shows the same result when curves are allowed to intersect in a tangent. The proof of the conjecture by Chalopin & Gonçalves (2009) was based on an improvement of this result. References • De Castro, N.; Cobos, F. J.; Dana, J. C.; Márquez, A. (2002), "Triangle-free planar graphs as segment intersection graphs" (PDF), Journal of Graph Algorithms and Applications, 6 (1): 7–26, doi:10.7155/jgaa.00043, MR 1898201. • Chalopin, J.; Gonçalves, D. (2009), "Every planar graph is the intersection graph of segments in the plane" (PDF), ACM Symposium on Theory of Computing. • Chalopin, J.; Gonçalves, D.; Ochem, P. (2007), "Planar graphs are in 1-STRING", Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ACM and SIAM, pp. 609–617. • Czyzowicz, J.; Kranakis, E.; Urrutia, J. (1998), "A simple proof of the representation of bipartite planar graphs as the contact graphs of orthogonal straight line segments", Information Processing Letters, 66 (3): 125–126, doi:10.1016/S0020-0190(98)00046-5. • de Fraysseix, H.; Ossona de Mendez, P. (2005), "Contact and intersection representations", in Pach, J. (ed.), Graph Drawing, 12th International Symposium, GD 2004, Lecture Notes in Computer Science, vol. 3383, Springer-Verlag, pp. 217–227. 
• de Fraysseix, H.; Ossona de Mendez, P.; Pach, J. (1991), "Representation of planar graphs by segments", Intuitive Geometry, 63: 109–117, MR 1383616. • Ehrlich, G.; Even, S.; Tarjan, R. E. (1976), "Intersection graphs of curves in the plane", Journal of Combinatorial Theory, Series B, 21 (1): 8–20, doi:10.1016/0095-8956(76)90022-8, MR 0505857. • Grötzsch, Herbert (1959), "Zur Theorie der diskreten Gebilde, VII: Ein Dreifarbensatz für dreikreisfreie Netze auf der Kugel", Wiss. Z. Martin-Luther-U., Halle-Wittenberg, Math.-Nat. Reihe, 8: 109–120, MR 0116320. • Hartman, I. B.-A.; Newman, I.; Ziv, R. (1991), "On grid intersection graphs", Discrete Mathematics, 87 (1): 41–52, doi:10.1016/0012-365X(91)90069-E, MR 1090188. • Scheinerman, E. R. (1984), Intersection Classes and Multiple Intersection Parameters of Graphs, Ph.D. thesis, Princeton University. • West, D. (1991), "Open problems #2", SIAM Activity Group Newsletter in Discrete Mathematics, 2 (1): 10–12.
Scheirer–Ray–Hare test
The Scheirer–Ray–Hare (SRH) test is a statistical test that can be used to examine whether a measure is affected by two or more factors. Since it does not require a normal distribution of the data, it is one of the non-parametric methods. It is an extension of the Kruskal–Wallis test, the non-parametric equivalent of one-way analysis of variance (ANOVA), to designs with more than one factor. It is thus a non-parametric alternative to multi-factorial ANOVA analyses. The test is named after James Scheirer, William Ray and Nathan Hare, who published it in 1976.[1]

Test description
The Scheirer–Ray–Hare test is analogous to a parametric multi-factorial ANOVA investigating the influence of two different factors on a measure, where separate samples are available for the factor combinations. As with the parametric analysis of variance, the test can be used to investigate the null hypotheses that the two factors examined have no influence on the location parameter of the samples, and thus on the measure, and that there are no interactions between the two factors. A p-value less than 0.05 for one or more of these three hypotheses leads to their rejection. As with many other non-parametric methods, the analysis relies on the ranks of the observations rather than their actual values. Modifications also allow extending the test to more than two factors.

The statistical power of the Scheirer–Ray–Hare test, i.e. the probability of actually finding a statistically significant result, is considerably lower than that of the parametric multi-factorial ANOVA, so the SRH test is regarded as the more conservative of the two methods.[2] For this reason, and because the method was described later than most other parametric and non-parametric variance analysis tests, it has found little use in textbooks and statistical analysis software.
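Although rarely built into software, the test can be computed on top of a standard two-way ANOVA decomposition applied to the ranks: rank all observations jointly, form the usual sums of squares, divide each by the total mean square of the ranks, and refer the resulting H statistics to chi-square distributions. A minimal sketch with NumPy/SciPy for a balanced two-factor design (the data are invented and the helper name is ours):

```python
import numpy as np
from scipy.stats import rankdata, chi2

# Hypothetical balanced design: factor A (2 levels) x factor B (3 levels),
# 4 replicates per cell; invented data with a location shift across B levels.
rng = np.random.default_rng(0)
data = rng.normal(size=(2, 3, 4)) + np.arange(3)[None, :, None]

a, b_levels, n = data.shape
N = data.size

# Step 1: rank all N observations jointly (rankdata assigns mid-ranks to ties).
r = rankdata(data).reshape(data.shape)
grand = r.mean()

def between_ss(means, group_size):
    """Between-groups sum of squares of the ranks."""
    return (group_size * (means - grand) ** 2).sum()

# Step 2: the usual two-way ANOVA decomposition, applied to the ranks.
ss_a = between_ss(r.mean(axis=(1, 2)), b_levels * n)
ss_b = between_ss(r.mean(axis=(0, 2)), a * n)
ss_ab = between_ss(r.mean(axis=2), n) - ss_a - ss_b

# Step 3: divide each SS by the total mean square of the ranks; each H
# statistic is referred to a chi-square distribution with the ANOVA df.
ms_total = ((r - grand) ** 2).sum() / (N - 1)
results = {}
for name, s, df in [("A", ss_a, a - 1),
                    ("B", ss_b, b_levels - 1),
                    ("A:B", ss_ab, (a - 1) * (b_levels - 1))]:
    h = s / ms_total
    results[name] = (h, chi2.sf(h, df))
    print(name, round(h, 3), round(chi2.sf(h, df), 4))
```

Without ties, ms_total reduces to N(N + 1)/12, which recovers the familiar Kruskal–Wallis scaling in the one-factor case.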
In computer programs that provide parametric multi-factorial ANOVA, however, the Scheirer–Ray–Hare test can be carried out with some additional manual calculation.[2] Since the Scheirer–Ray–Hare test only makes a global statement about differences among all samples considered, it makes sense to follow it with a post-hoc test that compares the individual samples in pairs. Alternative procedures The parametric alternative to the Scheirer–Ray–Hare test is multi-factorial ANOVA, which requires a normal distribution of the data within the samples. The Kruskal–Wallis test, from which the Scheirer–Ray–Hare test is derived, instead investigates the influence of exactly one factor on the measured variable. A non-parametric test comparing exactly two unpaired samples is the Wilcoxon–Mann–Whitney test. References 1. James Scheirer, William S. Ray, Nathan Hare: The Analysis of Ranked Data Derived from Completely Randomized Factorial Designs. In: Biometrics. 32(2)/1976. International Biometric Society, pp. 429–434, doi:10.2307/2529511 2. Scheirer–Ray–Hare test. In: Calvin Dytham: Choosing And Using Statistics: A Biologist's Guide. Wiley-Blackwell, 2003, ISBN 1405102438, pp. 145–150 Literature • Robert R. Sokal, F. James Rohlf: Biometry: The Principles And Practice of Statistics In Biological Research. Third edition. Freeman, New York 1995, ISBN 0-71-672411-1, pp. 
445–447
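The rank-based computation described above (rank all observations together, form the usual two-way ANOVA sums of squares on the ranks, and divide each by the total mean square of the ranks) can be sketched as follows. The function name and the restriction to a balanced design are illustrative assumptions, not part of the original publication:

```python
import numpy as np
from scipy import stats

def scheirer_ray_hare(data):
    """SRH test for a balanced two-factor layout.

    data: array of shape (a, b, n) -- a levels of factor A, b levels
    of factor B, n replicates per cell.  Returns {effect: (H, p)}.
    """
    a, b, n = data.shape
    N = a * b * n
    # Rank all N observations together (ties get average ranks), then
    # form the usual two-way ANOVA sums of squares on the ranks.
    ranks = stats.rankdata(data.ravel()).reshape(a, b, n)
    grand = ranks.mean()
    cell = ranks.mean(axis=2)
    rowm = ranks.mean(axis=(1, 2))
    colm = ranks.mean(axis=(0, 2))
    ss_a = b * n * ((rowm - grand) ** 2).sum()
    ss_b = a * n * ((colm - grand) ** 2).sum()
    ss_ab = n * ((cell - rowm[:, None] - colm[None, :] + grand) ** 2).sum()
    # Total mean square of the ranks; equals N(N+1)/12 when there are no ties.
    ms_total = ((ranks - grand) ** 2).sum() / (N - 1)
    out = {}
    for name, ss, df in [("A", ss_a, a - 1), ("B", ss_b, b - 1),
                         ("AB", ss_ab, (a - 1) * (b - 1))]:
        H = ss / ms_total            # compared to chi-squared with df degrees
        out[name] = (H, stats.chi2.sf(H, df))
    return out

# Toy data: a clear factor-A effect, essentially no factor-B effect.
data = np.array([[[1, 2, 3, 4, 5], [1.5, 2.5, 3.5, 4.5, 5.5]],
                 [[11, 12, 13, 14, 15], [11.5, 12.5, 13.5, 14.5, 15.5]]])
res = scheirer_ray_hare(data)
```

On this toy data the factor-A hypothesis is rejected while factor B and the interaction are not, as expected from how the data were built.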
Morphism of finite type For a homomorphism A → B of commutative rings, B is called an A-algebra of finite type if B is finitely generated as an A-algebra. It is much stronger for B to be a finite A-algebra, which means that B is finitely generated as an A-module. For example, for any commutative ring A and natural number n, the polynomial ring A[x1, ..., xn] is an A-algebra of finite type, but it is not a finite A-module unless A = 0 or n = 0. Another example of a finite-type morphism which is not finite is $\mathbb {C} [t]\to \mathbb {C} [t][x,y]/(y^{2}-x^{3}-t)$. The analogous notion in terms of schemes is: a morphism f: X → Y of schemes is of finite type if Y has a covering by affine open subschemes Vi = Spec Ai such that f−1(Vi) has a finite covering by affine open subschemes Uij = Spec Bij with Bij an Ai-algebra of finite type. One also says that X is of finite type over Y. For example, for any natural number n and field k, affine n-space and projective n-space over k are of finite type over k (that is, over Spec k), while they are not finite over k unless n = 0. More generally, any quasi-projective scheme over k is of finite type over k. The Noether normalization lemma says, in geometric terms, that every affine scheme X of finite type over a field k has a finite surjective morphism to affine space An over k, where n is the dimension of X. Likewise, every projective scheme X over a field has a finite surjective morphism to projective space Pn, where n is the dimension of X. See also • Finitely generated algebra References Bosch, Siegfried (2013). Algebraic Geometry and Commutative Algebra. London: Springer. pp. 360–365. ISBN 9781447148289.
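Why the displayed morphism is of finite type but not finite can be seen by a short dimension argument (a sketch, not taken from the cited source):

```latex
% B is generated as a C[t]-algebra by the two elements x and y,
% so the morphism is of finite type.  Eliminating t gives
\[
  B \;=\; \mathbb{C}[t][x,y]/(y^{2}-x^{3}-t)\;\cong\;\mathbb{C}[x,y],
  \qquad t \longmapsto y^{2}-x^{3}.
\]
% If B were a finite C[t]-module, its Krull dimension could not exceed
% that of C[t]; but dim C[x,y] = 2 while dim C[t] = 1, so B is not
% finite over C[t].
```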
Scherk surface In mathematics, a Scherk surface (named after Heinrich Scherk) is an example of a minimal surface. Scherk described two complete embedded minimal surfaces in 1834;[1] his first surface is doubly periodic, while his second is singly periodic. They were the third and fourth non-trivial examples of minimal surfaces (the first two were the catenoid and helicoid).[2] The two surfaces are conjugates of each other. Scherk surfaces arise in the study of certain limiting minimal surface problems and in the study of harmonic diffeomorphisms of hyperbolic space. Scherk's first surface Scherk's first surface is asymptotic to two infinite families of parallel planes, orthogonal to each other, that meet near z = 0 in a checkerboard pattern of bridging arches. It contains an infinite number of straight vertical lines. Construction of a simple Scherk surface Consider the following minimal surface problem on a square in the Euclidean plane: for a natural number n, find a minimal surface Σn as the graph of some function $u_{n}:\left(-{\frac {\pi }{2}},+{\frac {\pi }{2}}\right)\times \left(-{\frac {\pi }{2}},+{\frac {\pi }{2}}\right)\to \mathbb {R} $ such that $\lim _{y\to \pm \pi /2}u_{n}\left(x,y\right)=+n{\text{ for }}-{\frac {\pi }{2}}<x<+{\frac {\pi }{2}},$ $\lim _{x\to \pm \pi /2}u_{n}\left(x,y\right)=-n{\text{ for }}-{\frac {\pi }{2}}<y<+{\frac {\pi }{2}}.$ That is, un satisfies the minimal surface equation $\mathrm {div} \left({\frac {\nabla u_{n}(x,y)}{\sqrt {1+|\nabla u_{n}(x,y)|^{2}}}}\right)\equiv 0$ and $\Sigma _{n}=\left\{(x,y,u_{n}(x,y))\in \mathbb {R} ^{3}\left|-{\frac {\pi }{2}}<x,y<+{\frac {\pi }{2}}\right.\right\}.$ What, if anything, is the limiting surface as n tends to infinity? The answer was given by H. 
Scherk in 1834: the limiting surface Σ is the graph of $u:\left(-{\frac {\pi }{2}},+{\frac {\pi }{2}}\right)\times \left(-{\frac {\pi }{2}},+{\frac {\pi }{2}}\right)\to \mathbb {R} ,$ $u(x,y)=\log \left({\frac {\cos(x)}{\cos(y)}}\right).$ That is, the Scherk surface over the square is $\Sigma =\left\{\left.\left(x,y,\log \left({\frac {\cos(x)}{\cos(y)}}\right)\right)\in \mathbb {R} ^{3}\right|-{\frac {\pi }{2}}<x,y<+{\frac {\pi }{2}}\right\}.$ More general Scherk surfaces One can consider similar minimal surface problems on other quadrilaterals in the Euclidean plane. One can also consider the same problem on quadrilaterals in the hyperbolic plane. In 2006, Harold Rosenberg and Pascal Collin used hyperbolic Scherk surfaces to construct a harmonic diffeomorphism from the complex plane onto the hyperbolic plane (the unit disc with the hyperbolic metric), thereby disproving the Schoen–Yau conjecture. Scherk's second surface Scherk's second surface looks globally like two orthogonal planes whose intersection consists of a sequence of tunnels in alternating directions. Its intersections with horizontal planes consist of alternating hyperbolas. It has the implicit equation $\sin(z)-\sinh(x)\sinh(y)=0$ It has the Weierstrass–Enneper parameterization $f(z)={\frac {4}{1-z^{4}}}$, $g(z)=iz$ and can be parametrized as:[3] $x(r,\theta )=2\Re (\ln(1+re^{i\theta })-\ln(1-re^{i\theta }))=\ln \left({\frac {1+r^{2}+2r\cos \theta }{1+r^{2}-2r\cos \theta }}\right)$ $y(r,\theta )=\Re (4i\tan ^{-1}(re^{i\theta }))=\ln \left({\frac {1+r^{2}-2r\sin \theta }{1+r^{2}+2r\sin \theta }}\right)$ $z(r,\theta )=\Re (2i(-\ln(1-r^{2}e^{2i\theta })+\ln(1+r^{2}e^{2i\theta })))=2\tan ^{-1}\left({\frac {2r^{2}\sin 2\theta }{r^{4}-1}}\right)$ for $\theta \in [0,2\pi )$ and $r\in (0,1)$. This gives one period of the surface, which can then be extended in the z-direction by symmetry. The surface has been generalised by H. Karcher into the saddle tower family of periodic minimal surfaces. 
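That the graph u(x, y) = log(cos x / cos y) from Scherk's solution above satisfies the minimal surface equation can be checked symbolically; a sketch using sympy (an illustration, not part of the source):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.log(sp.cos(x) / sp.cos(y))      # Scherk's first surface as a graph

ux, uy = sp.diff(u, x), sp.diff(u, y)  # u_x = -tan(x), u_y = tan(y)
W = sp.sqrt(1 + ux**2 + uy**2)
# Left-hand side of the minimal surface equation div(grad u / W) = 0:
mse = sp.diff(ux / W, x) + sp.diff(uy / W, y)

# Spot-check that it vanishes at sample points of (-pi/2, pi/2)^2.
for a, b in [(0.3, 0.5), (1.0, -0.8)]:
    assert abs(float(mse.subs({x: a, y: b}))) < 1e-9
```

Writing the two derivative terms over the common denominator W³ shows the cancellation is exact, not just numerical.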
Somewhat confusingly, this surface is occasionally called Scherk's fifth surface in the literature.[4][5] To minimize confusion it is useful to refer to it as Scherk's singly periodic surface or the Scherk tower. External links • Sabitov, I.Kh. (2001) [1994], "Scherk surface", Encyclopedia of Mathematics, EMS Press • Scherk's first surface in MSRI Geometry • Scherk's second surface in MSRI Geometry • Scherk's minimal surfaces in Mathworld References 1. H.F. Scherk, Bemerkungen über die kleinste Fläche innerhalb gegebener Grenzen, Journal für die reine und angewandte Mathematik, Volume 13 (1835) pp. 185–208 2. "Heinrich Scherk - Biography". 3. Eric W. Weisstein, CRC Concise Encyclopedia of Mathematics, 2nd ed., CRC press 2002 4. Nikolaos Kapouleas, Constructions of minimal surfaces by glueing minimal immersions. In Global Theory of Minimal Surfaces: Proceedings of the Clay Mathematics Institute 2001 Summer School, Mathematical Sciences Research Institute, Berkeley, California, June 25-July 27, 2001 p. 499 5. David Hoffman and William H. Meeks, Limits of minimal surfaces and Scherk's Fifth Surface, Archive for rational mechanics and analysis, Volume 111, Number 2 (1990)
Pompeiu problem In mathematics, the Pompeiu problem is a conjecture in integral geometry, named for Dimitrie Pompeiu, who posed the problem in 1929, as follows. Suppose f is a nonzero continuous function defined on a Euclidean space and K is a simply connected Lipschitz domain such that the integral of f vanishes on every congruent copy of K. The conjecture asserts that the domain must then be a ball. A special case is Schiffer's conjecture. References • Pompeiu, Dimitrie (1929), "Sur certains systèmes d'équations linéaires et sur une propriété intégrale des fonctions de plusieurs variables", Comptes Rendus de l'Académie des Sciences, Série I, 188: 1138–1139 • Ciatti, Paolo (2008), Topics in mathematical analysis, Series on analysis, applications and computation, vol. 3, World Scientific, ISBN 978-981-281-105-9 External links • Pompeiu problem at Department of Geometry, Bolyai Institute, University of Szeged, Hungary • Pompeiu problem at SpringerLink encyclopaedia of mathematics • The Pompeiu problem • Schiffer's conjecture
Schiffler point In geometry, the Schiffler point of a triangle is a triangle center, a point defined from the triangle that is equivariant under Euclidean transformations of the triangle. This point was first defined and investigated by Schiffler et al. (1985). Definition A triangle △ABC with the incenter I has its Schiffler point at the point of concurrence of the Euler lines of the four triangles △BCI, △CAI, △ABI, △ABC. Schiffler's theorem states that these four lines all meet at a single point. Coordinates Trilinear coordinates for the Schiffler point are ${\frac {1}{\cos B+\cos C}}:{\frac {1}{\cos C+\cos A}}:{\frac {1}{\cos A+\cos B}}$ or, equivalently, ${\frac {b+c-a}{b+c}}:{\frac {c+a-b}{c+a}}:{\frac {a+b-c}{a+b}}$ where a, b, c denote the side lengths of triangle △ABC. References • Emelyanov, Lev; Emelyanova, Tatiana (2003). "A note on the Schiffler point". Forum Geometricorum. 3: 113–116. MR 2004116. • Hatzipolakis, Antreas P.; van Lamoen, Floor; Wolk, Barry; Yiu, Paul (2001). "Concurrency of four Euler lines". Forum Geometricorum. 1: 59–68. MR 1891516. • Nguyen, Khoa Lu (2005). "On the complement of the Schiffler point". Forum Geometricorum. 5: 149–164. MR 2195745. • Schiffler, Kurt (1985). "Problem 1018" (PDF). Crux Mathematicorum. 11: 51. • Veldkamp, G. R. & van der Spek, W. A. (1986). "Solution to Problem 1018" (PDF). Crux Mathematicorum. 12: 150–152. • Thas, Charles (2004). "On the Schiffler center". Forum Geometricorum. 4: 85–95. MR 2081772. External links • Weisstein, Eric W. "Schiffler Point". MathWorld.
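Schiffler's theorem can be verified numerically for a concrete triangle: compute the incenter, build the four Euler lines (each through a triangle's circumcenter and centroid), intersect two of them, and check that the other two pass through the same point. A sketch with numpy; the helper names are illustrative:

```python
import numpy as np

def circumcenter(P, Q, R):
    # Solve |X - P|^2 = |X - Q|^2 = |X - R|^2, a 2x2 linear system.
    A = 2.0 * np.array([Q - P, R - P])
    b = np.array([Q @ Q - P @ P, R @ R - P @ P])
    return np.linalg.solve(A, b)

def euler_line(P, Q, R):
    """Euler line as (point, direction): circumcenter towards centroid."""
    O = circumcenter(P, Q, R)
    G = (P + Q + R) / 3.0
    return O, G - O

def incenter(A, B, C):
    a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
    return (a * A + b * B + c * C) / (a + b + c)

def dist_to_line(S, p, d):
    v = S - p
    return abs(v[0] * d[1] - v[1] * d[0]) / np.linalg.norm(d)

# A scalene example triangle (no sub-triangle is equilateral, so all
# four Euler lines are well defined).
A, B, C = map(np.array, [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)])
I = incenter(A, B, C)
lines = [euler_line(*T) for T in [(B, C, I), (C, A, I), (A, B, I), (A, B, C)]]

# Intersect the first and last Euler lines to get the Schiffler point S,
# then check that the remaining lines pass through S as well.
(p1, d1), (p4, d4) = lines[0], lines[3]
t = np.linalg.solve(np.column_stack([d1, -d4]), p4 - p1)[0]
S = p1 + t * d1
residuals = [dist_to_line(S, p, d) for p, d in lines]
```

All four residuals come out at machine precision, in line with the concurrency stated by Schiffler's theorem.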
Schild's ladder In the theory of general relativity, and differential geometry more generally, Schild's ladder is a first-order method for approximating parallel transport of a vector along a curve using only affinely parametrized geodesics. The method is named for Alfred Schild, who introduced the method during lectures at Princeton University. Construction The idea is to identify a tangent vector x at a point $A_{0}$ with a geodesic segment of unit length $A_{0}X_{0}$, and to construct an approximate parallelogram with approximately parallel sides $A_{0}X_{0}$ and $A_{1}X_{1}$ as an approximation of the Levi-Civita parallelogramoid; the new segment $A_{1}X_{1}$ thus corresponds to an approximately parallel translated tangent vector at $A_{1}.$ Formally, consider a curve γ through a point A0 in a Riemannian manifold M, and let x be a tangent vector at A0. Then x can be identified with a geodesic segment A0X0 via the exponential map. This geodesic σ satisfies $\sigma (0)=A_{0}\,$ $\sigma '(0)=x.\,$ The steps of the Schild's ladder construction are: • Let X0 = σ(1), so the geodesic segment $A_{0}X_{0}$ has unit length. • Now let A1 be a point on γ close to A0, and construct the geodesic X0A1. • Let P1 be the midpoint of X0A1 in the sense that the segments X0P1 and P1A1 take an equal affine parameter to traverse. • Construct the geodesic A0P1, and extend it to a point X1 so that the parameter length of A0X1 is double that of A0P1. • Finally construct the geodesic A1X1. The tangent to this geodesic x1 is then the parallel transport of X0 to A1, at least to first order. Approximation This is a discrete approximation of the continuous process of parallel transport. If the ambient space is flat, this is exactly parallel transport, and the steps define parallelograms, which agree with the Levi-Civita parallelogramoid. 
In a curved space, the error is given by holonomy around the triangle $A_{1}A_{0}X_{0},$ which is equal to the integral of the curvature over the interior of the triangle, by the Ambrose-Singer theorem; this is a form of Green's theorem (integral around a curve related to integral over interior), and in the case of Levi-Civita connections on surfaces, of Gauss–Bonnet theorem. Notes 1. Schild's ladder requires not only geodesics but also relative distance along geodesics. Relative distance may be provided by affine parametrization of geodesics, from which the required midpoints may be determined. 2. The parallel transport which is constructed by Schild's ladder is necessarily torsion-free. 3. A Riemannian metric is not required to generate the geodesics. But if the geodesics are generated from a Riemannian metric, the parallel transport which is constructed in the limit by Schild's ladder is the same as the Levi-Civita connection because this connection is defined to be torsion-free. References • Kheyfets, Arkady; Miller, Warner A.; Newton, Gregory A. (2000), "Schild's ladder parallel transport procedure for an arbitrary connection", International Journal of Theoretical Physics, 39 (12): 2891–2898, doi:10.1023/A:1026473418439, S2CID 117503563. • Misner, Charles W.; Thorne, Kip S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 0-7167-0344-0
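The rung construction above can be sketched numerically on the unit sphere, where geodesics (great circles) have closed-form exponential and logarithm maps. Transporting a north-pointing vector along the equator should reproduce exact parallel transport, which leaves it unchanged, up to an error of first order in the segment length; the helper names are illustrative:

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere: geodesic from p with velocity v."""
    t = np.linalg.norm(v)
    if t < 1e-15:
        return p.copy()
    return np.cos(t) * p + np.sin(t) * v / t

def log_map(p, q):
    """Inverse of exp_map: the tangent vector at p pointing to q."""
    c = np.clip(p @ q, -1.0, 1.0)
    t = np.arccos(c)
    if t < 1e-15:
        return np.zeros(3)
    return t * (q - c * p) / np.sin(t)

def geodesic(p, q, t):
    """Point at affine parameter t on the geodesic through p (t=0) and q (t=1)."""
    return exp_map(p, t * log_map(p, q))

def schild_step(a0, a1, x0):
    """One rung of the ladder: transport the segment a0 -> x0 to a1 -> x1."""
    p1 = geodesic(x0, a1, 0.5)    # geodesic midpoint of X0 and A1
    return geodesic(a0, p1, 2.0)  # extend A0 -> P1 to double the parameter

# Transport v = z-hat along the equator from (1,0,0) through angle phi,
# using m rungs and a short identifying segment of length s.
phi, m, s = 1.0, 200, 0.01
curve = [np.array([np.cos(t), np.sin(t), 0.0])
         for t in np.linspace(0.0, phi, m + 1)]
x = exp_map(curve[0], s * np.array([0.0, 0.0, 1.0]))
for a0, a1 in zip(curve, curve[1:]):
    x = schild_step(a0, a1, x)
v_end = log_map(curve[-1], x) / s
# Exact parallel transport along the equator leaves the vector at z-hat;
# the ladder reproduces this up to an error of first order in s.
```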
Schinzel's hypothesis H In mathematics, Schinzel's hypothesis H is one of the best-known open problems in number theory. It is a very broad generalization of long-standing open conjectures such as the twin prime conjecture. The hypothesis is named after Andrzej Schinzel. Statement The hypothesis claims that for every finite collection $\{f_{1},f_{2},\ldots ,f_{k}\}$ of nonconstant irreducible polynomials over the integers with positive leading coefficients, one of the following conditions holds: 1. There are infinitely many positive integers $n$ such that all of $f_{1}(n),f_{2}(n),\ldots ,f_{k}(n)$ are simultaneously prime numbers, or 2. There is an integer $m>1$ (called a "fixed divisor"), depending on the polynomials, which always divides the product $f_{1}(n)f_{2}(n)\cdots f_{k}(n)$. (Equivalently: there exists a prime $p$ such that for every $n$ there is an $i$ such that $p$ divides $f_{i}(n)$.) The second condition is satisfied by sets such as $f_{1}(x)=x+4,f_{2}(x)=x+7$, since $(x+4)(x+7)$ is always divisible by 2. It is easy to see that this condition prevents the first condition from holding. Schinzel's hypothesis essentially claims that condition 2 is the only way condition 1 can fail. No effective technique is known for determining whether the first condition holds for a given set of polynomials, but the second is straightforward to check: let $Q(x)=f_{1}(x)f_{2}(x)\cdots f_{k}(x)$ and compute the greatest common divisor of $\deg(Q)+1$ successive values of $Q(n)$. One can see by finite differences that this divisor also divides all other values of $Q(n)$. Schinzel's hypothesis builds on the earlier Bunyakovsky conjecture for a single polynomial, and on the Hardy–Littlewood conjectures and Dickson's conjecture for multiple linear polynomials. It is in turn extended by the Bateman–Horn conjecture. Examples As a simple example with $k=1$, $x^{2}+1$ has no fixed prime divisor. 
We therefore expect that there are infinitely many primes of the form $n^{2}+1$; this has not been proved, though. It was one of Landau's conjectures and goes back to Euler, who observed in a letter to Goldbach in 1752 that $n^{2}+1$ is often prime for $n$ up to 1500. As another example, take $k=2$ with $f_{1}(x)=x$ and $f_{2}(x)=x+2$. The hypothesis then implies the existence of infinitely many twin primes, a basic and notorious open problem. Variants As proved by Schinzel and Sierpiński,[1] the hypothesis is equivalent to the following: if condition 2 does not hold, then there exists at least one positive integer $n$ such that all $f_{i}(n)$ are simultaneously prime, for any choice of irreducible integral polynomials $f_{i}(x)$ with positive leading coefficients. If the leading coefficients were negative, we could expect negative prime values; this is a harmless restriction. There is probably no real reason to restrict to polynomials with integer coefficients, rather than integer-valued polynomials (such as ${\tfrac {1}{2}}x^{2}+{\tfrac {1}{2}}x+1$, which takes integer values for all integers $x$ even though the coefficients are not integers). Previous results The special case of a single linear polynomial is Dirichlet's theorem on arithmetic progressions, one of the most important results of number theory. In fact, this special case is the only known instance of Schinzel's hypothesis H: we do not know the hypothesis to hold for any given polynomial of degree greater than $1$, nor for any system of more than one polynomial. 
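The fixed-divisor check described above (take the gcd of deg(Q) + 1 successive values of Q) is easy to carry out; a minimal sketch, with an illustrative function name:

```python
from functools import reduce
from math import gcd

def fixed_divisor(Q, deg):
    """gcd of deg+1 successive values of Q(n); by the finite-difference
    argument this divides Q(n) for every integer n."""
    return reduce(gcd, (Q(n) for n in range(1, deg + 2)))

# Condition 2 holds for f1(x) = x + 4, f2(x) = x + 7: one of the two
# values is always even, so the product has fixed divisor 2.
print(fixed_divisor(lambda x: (x + 4) * (x + 7), 2))   # -> 2

# x^2 + 1 (the Landau example) and x(x + 2) (twin primes) have none.
print(fixed_divisor(lambda x: x * x + 1, 2))           # -> 1
print(fixed_divisor(lambda x: x * (x + 2), 2))         # -> 1
```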
Almost-prime approximations to Schinzel's hypothesis have been obtained by many mathematicians. Most notably, Chen's theorem states that there exist infinitely many prime numbers $n$ such that $n+2$ is either a prime or a semiprime,[2] and Iwaniec proved that there exist infinitely many integers $n$ for which $n^{2}+1$ is either a prime or a semiprime.[3] Skorobogatov and Sofos have proved that almost all polynomials of any fixed degree satisfy Schinzel's hypothesis H.[4] Prospects and applications The hypothesis is probably not accessible with current methods in analytic number theory, but it is now quite often used to prove conditional results, for example in Diophantine geometry. This connection is due to Jean-Louis Colliot-Thélène and Jean-Jacques Sansuc.[5] For further explanations and references on this connection see the notes of Swinnerton-Dyer.[6] The conjectured result is so strong in nature that it may turn out to be too much to expect. Extension to include the Goldbach conjecture The hypothesis does not cover Goldbach's conjecture, but a closely related version (hypothesis HN) does. That version requires an extra polynomial $F(x)$, which in the Goldbach problem would simply be $x$, and requires N − F(n) to be a prime number as well. This is cited in Halberstam and Richert, Sieve Methods. The conjecture here takes the form of a statement when N is sufficiently large, and subject to the condition that $f_{1}(n)f_{2}(n)\cdots f_{k}(n)(N-F(n))$ has no fixed divisor > 1. Then we should be able to require the existence of n such that N − F(n) is both positive and a prime number, with all the fi(n) prime numbers. Not many cases of these conjectures are known, but there is a detailed quantitative theory (see Bateman–Horn conjecture). 
Local analysis The condition of having no fixed prime divisor is purely local (depending just on primes, that is). In other words, a finite set of irreducible integer-valued polynomials with no local obstruction to taking infinitely many prime values is conjectured to take infinitely many prime values. An analogue that fails The analogous conjecture with the integers replaced by the one-variable polynomial ring over a finite field is false. For example, Swan noted in 1962 (for reasons unrelated to Hypothesis H) that the polynomial $x^{8}+u^{3}\,$ over the ring F2[u] is irreducible and has no fixed prime polynomial divisor (after all, its values at x = 0 and x = 1 are relatively prime polynomials) but all of its values as x runs over F2[u] are composite. Similar examples can be found with F2 replaced by any finite field; the obstructions in a proper formulation of Hypothesis H over F[u], where F is a finite field, are no longer just local but a new global obstruction occurs with no classical parallel, assuming hypothesis H is in fact correct. References 1. Schinzel, A.; Sierpiński, W. (1958). "Sur certaines hypothèses concernant les nombres premiers". Acta Arithmetica. 4 (3): 185–208. doi:10.4064/aa-4-3-185-208. MR 0106202. Page 188. 2. Chen, J.R. (1973). "On the representation of a larger even integer as the sum of a prime and the product of at most two primes". Sci. Sinica. 16: 157–176. MR 0434997. 3. Iwaniec, H. (1978). "Almost-primes represented by quadratic polynomials". Inventiones Mathematicae. 47 (2): 171–188. Bibcode:1978InMat..47..171I. doi:10.1007/BF01578070. MR 0485740. S2CID 122656097. 4. Skorobogatov, A.N.; Sofos, E. (2022). "Schinzel Hypothesis on average and rational points". Inventiones Mathematicae. 231 (2): 673–739. doi:10.1007/s00222-022-01153-6. MR 4542704. 5. Colliot-Thélène, J.L.; Sansuc, J.J. (1982). "Sur le principe de Hasse et l'approximation faible, et sur une hypothese de Schinzel". Acta Arithmetica. 41 (1): 33–53. 
doi:10.4064/aa-41-1-33-53. MR 0667708. 6. Swinnerton-Dyer, P. (2011). "Topics in Diophantine equations". Arithmetic geometry. Lecture Notes in Math. Vol. 2009. Springer, Berlin. pp. 45–110. MR 2757628. • Crandall, Richard; Pomerance, Carl B. (2005). Prime Numbers: A Computational Perspective (Second ed.). New York: Springer-Verlag. doi:10.1007/0-387-28979-8. ISBN 0-387-25282-7. MR 2156291. Zbl 1088.11001. • Guy, Richard K. (2004). Unsolved problems in number theory (Third ed.). Springer-Verlag. ISBN 978-0-387-20860-2. Zbl 1058.11001. • Pollack, Paul (2008). "An explicit approach to hypothesis H for polynomials over a finite field". In De Koninck, Jean-Marie; Granville, Andrew; Luca, Florian (eds.). Anatomy of integers. Based on the CRM workshop, Montreal, Canada, March 13–17, 2006. CRM Proceedings and Lecture Notes. Vol. 46. Providence, RI: American Mathematical Society. pp. 259–273. ISBN 978-0-8218-4406-9. Zbl 1187.11046. • Swan, R. G. (1962). "Factorization of Polynomials over Finite Fields". Pacific Journal of Mathematics. 12 (3): 1099–1106. doi:10.2140/pjm.1962.12.1099.
Schinzel's theorem In the geometry of numbers, Schinzel's theorem is the following statement: Schinzel's theorem — For any given positive integer $n$, there exists a circle in the Euclidean plane that passes through exactly $n$ integer points. It was originally proved by and named after Andrzej Schinzel.[1][2] Proof Schinzel proved this theorem by the following construction. If $n$ is an even number, with $n=2k$, then the circle given by the following equation passes through exactly $n$ points:[1][2] $\left(x-{\frac {1}{2}}\right)^{2}+y^{2}={\frac {1}{4}}5^{k-1}.$ This circle has radius $5^{(k-1)/2}/2$, and is centered at the point $({\tfrac {1}{2}},0)$. For instance, the figure shows a circle with radius ${\sqrt {5}}/2$ through four integer points. Multiplying both sides of Schinzel's equation by four produces an equivalent equation in integers, $\left(2x-1\right)^{2}+(2y)^{2}=5^{k-1}$. This writes $5^{k-1}$ as a sum of two squares, where the first is odd and the second is even. There are exactly $4k$ ways to write $5^{k-1}$ as a sum of two squares, and half are in the order (odd, even) by symmetry. For example, $5^{1}=(\pm 1)^{2}+(\pm 2)^{2}$, so we have $2x-1=1$ or $2x-1=-1$, and $2y=2$ or $2y=-2$, which produces the four points pictured. On the other hand, if $n$ is odd, with $n=2k+1$, then the circle given by the following equation passes through exactly $n$ points:[1][2] $\left(x-{\frac {1}{3}}\right)^{2}+y^{2}={\frac {1}{9}}5^{2k}.$ This circle has radius $5^{k}/3$, and is centered at the point $({\tfrac {1}{3}},0)$. Properties The circles generated by Schinzel's construction are not the smallest possible circles passing through the given number of integer points,[3] but they have the advantage that they are described by an explicit equation.[2] References 1. Schinzel, André (1958), "Sur l'existence d'un cercle passant par un nombre donné de points aux coordonnées entières", L'Enseignement mathématique (in French), 4: 71–72, MR 0098059 2. 
Honsberger, Ross (1973), "Schinzel's theorem", Mathematical Gems I, Dolciani Mathematical Expositions, vol. 1, Mathematical Association of America, pp. 118–121 3. Weisstein, Eric W., "Schinzel Circle", MathWorld
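Schinzel's construction is easy to check numerically. The sketch below is illustrative (function names are not from the article); it clears denominators as in the proof above and enumerates the integer points by brute force:

```python
import math

def schinzel_equation(n):
    # Schinzel's circle with denominators cleared:
    #   even n = 2k:     (2x - 1)^2 + (2y)^2 = 5^(k-1)
    #   odd  n = 2k + 1: (3x - 1)^2 + (3y)^2 = 5^(2k)
    # Returns (denominator, right-hand side).
    if n % 2 == 0:
        return 2, 5 ** (n // 2 - 1)
    return 3, 5 ** (n - 1)

def integer_points(n):
    # Brute-force enumeration of the integer points on Schinzel's circle for n.
    den, rhs = schinzel_equation(n)
    bound = math.isqrt(rhs) + 1
    return [(x, y)
            for x in range(-bound, bound + 1)
            for y in range(-bound, bound + 1)
            if (den * x - 1) ** 2 + (den * y) ** 2 == rhs]
```

For instance, `integer_points(4)` yields the four points (0, ±1) and (1, ±1) on the circle of radius √5/2 described above.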
Schizophrenic number A schizophrenic number (also known as mock rational number) is an irrational number that displays certain characteristics of rational numbers. Definition The Universal Book of Mathematics defines "schizophrenic number" as: An informal name for an irrational number that displays such persistent patterns in its decimal expansion, that it has the appearance of a rational number. A schizophrenic number can be obtained as follows. For any positive integer n let f(n) denote the integer given by the recurrence f(n) = 10 f(n − 1) + n with the initial value f(0) = 0. Thus, f(1) = 1, f(2) = 12, f(3) = 123, and so on. The square roots of f(n) for odd integers n give rise to a curious mixture appearing to be rational for periods, and then disintegrating into irrationality. This is illustrated by the first 500 digits of √f(49): 1111111111111111111111111.1111111111111111111111 0860 555555555555555555555555555555555555555555555 2730541 66666666666666666666666666666666666666666 0296260347 2222222222222222222222222222222222222 0426563940928819 4444444444444444444444444444444 38775551250401171874 9999999999999999999999999999 808249687711486305338541 66666666666666666666666 5987185738621440638655598958 33333333333333333333 0843460407627608206940277099609374 99999999999999 0642227587555983066639430321587456597 222222222 1863492016791180833081844 ... The repeating strings become progressively shorter and the scrambled strings become larger until eventually the repeating strings disappear. However, by increasing n we can forestall the disappearance of the repeating strings as long as we like. The repeating digits are always 1, 5, 6, 2, 4, 9, 6, 3, 9, 2, ... .[1] The sequence of numbers generated by the recurrence relation f(n) = 10 f(n − 1) + n described above is: 0, 1, 12, 123, 1234, 12345, 123456, 1234567, 12345678, 123456789, 1234567900, ... (sequence A014824 in the OEIS). 
f(49) = 1234567901234567901234567901234567901234567901229 The integer parts of their square roots, 1, 3, 11, 35, 111, 351, 1111, 3513, 11111, 35136, 111111, 351364, 1111111, ... (sequence A068995 in the OEIS), alternate between numbers with irregular digits and numbers with repeating digits, in a similar way to the alternations appearing within the fractional part of each square root. Characteristics The schizophrenic number shown above is the special case of a more general phenomenon that appears in the $b$-ary expansions of square roots of the solutions of the recurrence $f_{b}(n)=bf_{b}(n-1)+n$, for all $b\geq 2$, with initial value $f(0)=0$ taken at odd positive integers $n$. The case $b=10$ and $n=49$ corresponds to the example above. Indeed, Tóth showed that these irrational numbers present schizophrenic patterns within their $b$-ary expansion,[2] composed of blocks that begin with a non-repeating digit block followed by a repeating digit block. When put together in base $b$, these blocks form the schizophrenic pattern. For instance, in base 8, the number ${\sqrt {f_{8}(49)}}$ begins: 1111111111111111111111111.1111111111111111111111 0600 444444444444444444444444444444444444444444444 02144 333333333333333333333333333333333333333333 175124422 666666666666666666666666666666666666666 .... The pattern is due to the Taylor expansion of the square root of the recurrence's solution taken at odd positive integers. The various digit contributions of the Taylor expansion yield the non-repeating and repeating digit blocks that form the schizophrenic pattern. Other properties In some cases, instead of repeating digit sequences we find repeating digit patterns. For instance, the number ${\sqrt {f_{3}(49)}}$: 1111111111111111111111111.1111111111111111111111111111111 01200 202020202020202020202020202020202020202020 11010102 00120012000012001200120012001200120012 0010 21120020211210002112100021121000211210 ... shows repeating digit patterns in base $3$. 
Numbers that are schizophrenic in base $b$ are also schizophrenic in base $b^{m}$ (up to a certain limit, see Tóth). An example is ${\sqrt {f_{3}(49)}}$ above, which is still schizophrenic in base $9$: 1444444444444.4444444444 350 666666666666666666666 4112 0505050505050505050 337506 75307530753075307 40552382 ... History Clifford A. Pickover has said that the schizophrenic numbers were discovered by Kevin Brown. In his book Wonders of Numbers he has so described the history of schizophrenic numbers: The construction and discovery of schizophrenic numbers was prompted by a claim (posted in the Usenet newsgroup sci.math) that the digits of an irrational number chosen at random would not be expected to display obvious patterns in the first 100 digits. It was said that if such a pattern were found, it would be irrefutable proof of the existence of either God or extraterrestrial intelligence. (An irrational number is any number that cannot be expressed as a ratio of two integers. Transcendental numbers like e and π, and noninteger surds such as square root of 2 are irrational.)[3] See also • Almost integer • Normal number • Six nines in pi References 1. Darling, David (2004), The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes, John Wiley & Sons, p. 12, ISBN 9780471667001 2. Tóth, László (2020), "On Schizophrenic Patterns in b-ary Expansions of Some Irrational Numbers", Proceedings of the American Mathematical Society, 148 (1): 461–469, arXiv:2002.06584, Bibcode:2020arXiv200206584T, doi:10.1090/proc/14863, S2CID 211133029 3. Pickover, Clifford A. (2003), "Schizophrenic Numbers", Wonders of Numbers: Adventures in Mathematics, Mind, and Meaning, Oxford University Press, pp. 210–211, ISBN 9780195157994 External links • Mock-Rational Numbers, K. S. Brown, mathpages. 
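The recurrence and the digit patterns above can be reproduced with exact integer arithmetic. The following sketch is illustrative (function names are not from the article); it computes f_b(n) and extracts leading digits of √f(49) with an integer square root, avoiding floating-point error:

```python
import math

def f(n, b=10):
    # f_b(n) = b * f_b(n - 1) + n, with f_b(0) = 0
    v = 0
    for i in range(1, n + 1):
        v = b * v + i
    return v

def sqrt_digits(m, frac_digits):
    # Digits of sqrt(m), carrying frac_digits digits past the decimal point,
    # returned as a digit string (exact integer square root, no floating point).
    return str(math.isqrt(m * 10 ** (2 * frac_digits)))

# Integer parts of sqrt(f(n)) reproduce OEIS A068995: 1, 3, 11, 35, 111, 351, ...
print([math.isqrt(f(n)) for n in range(1, 7)])
# The first 35 significant digits of sqrt(f(49)) are all ones:
print(sqrt_digits(f(49), 10))
```

Raising the second argument of `sqrt_digits` reveals the later blocks (0860, then a run of 5s, and so on) shown above.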
Schläfli orthoscheme In geometry, a Schläfli orthoscheme is a type of simplex. The orthoscheme is the generalization of the right triangle to simplex figures of any number of dimensions. Orthoschemes are defined by a sequence of edges $(v_{0}v_{1}),(v_{1}v_{2}),\dots ,(v_{d-1}v_{d})\,$ that are mutually orthogonal. They were introduced by Ludwig Schläfli, who called them orthoschemes and studied their volume in Euclidean, hyperbolic, and spherical geometries. H. S. M. Coxeter later named them after Schläfli. As right triangles provide the basis for trigonometry, orthoschemes form the basis of a trigonometry of n dimensions, as developed by Schoute who called it polygonometry.[1] J.-P. Sydler and Børge Jessen studied orthoschemes extensively in connection with Hilbert's third problem. Orthoschemes, also called path-simplices in the applied mathematics literature, are a special case of a more general class of simplices studied by Fiedler,[2] and later rediscovered by Coxeter.[3] These simplices are the convex hulls of trees in which all edges are mutually perpendicular. In an orthoscheme, the underlying tree is a path. In three dimensions, an orthoscheme is also called a birectangular tetrahedron (because its path makes two right-angle turns) or a quadrirectangular tetrahedron (because it contains four right angles).[4] Properties • All 2-faces are right triangles. • All facets of a d-dimensional orthoscheme are (d − 1)-dimensional orthoschemes. • The $d$ dihedral angles that are disjoint from edges of the path are acute; the remaining ${\tbinom {d}{2}}$ dihedral angles are all right angles.[3] • The midpoint of the longest edge is the center of the circumscribed sphere. • The case when $|v_{0}v_{1}|=|v_{1}v_{2}|=\cdots =|v_{d-1}v_{d}|$ is a generalized Hill tetrahedron. • Every hypercube in d-dimensional space can be dissected into d! congruent orthoschemes.
A similar dissection into the same number of orthoschemes applies more generally to every hyperrectangle but in this case the orthoschemes may not be congruent. • Every regular polytope can be dissected radially into g congruent orthoschemes that meet at its center, where g is the order of the regular polytope's symmetry group.[5] • In 3- and 4-dimensional Euclidean space, every convex polytope is scissor congruent to an orthoscheme. • Every orthoscheme can be trisected into three smaller orthoschemes.[1] • In 3-dimensional hyperbolic and spherical spaces, the volume of orthoschemes can be expressed in terms of the Lobachevsky function, or in terms of dilogarithms.[6] Dissection into orthoschemes Hugo Hadwiger conjectured in 1956 that every simplex can be dissected into finitely many orthoschemes.[7] The conjecture has been proven in spaces of five or fewer dimensions,[8] but remains unsolved in higher dimensions.[9] Hadwiger's conjecture implies that every convex polytope can be dissected into orthoschemes. Characteristic simplex of the general regular polytope Coxeter identifies various orthoschemes as the characteristic simplexes of the polytopes they generate by reflections.[10] The characteristic simplex is the polytope's fundamental building block. It can be replicated by reflections or rotations to construct the polytope, just as the polytope can be dissected into some integral number of it. The characteristic simplex is chiral (it comes in two mirror-image forms which are different), and the polytope is dissected into an equal number of left- and right-hand instances of it. It has dissimilar edge lengths and faces, instead of the equilateral triangle faces of the regular simplex. When the polytope is regular, its characteristic simplex is an orthoscheme, a simplex with only right triangle faces. 
Every regular polytope has its characteristic orthoscheme which is its fundamental region, the irregular simplex which has exactly the same symmetry characteristics as the regular polytope but captures them all without repetition.[11] For a regular k-polytope, the Coxeter-Dynkin diagram of the characteristic k-orthoscheme is the k-polytope's diagram without the generating point ring. The regular k-polytope is subdivided by its symmetry (k-1)-elements into g instances of its characteristic k-orthoscheme that surround its center, where g is the order of the k-polytope's symmetry group. This is a barycentric subdivision. We proceed to describe the "simplicial subdivision" of a regular polytope, beginning with the one-dimensional case. The segment 𝚷1 is divided into two equal parts by its centre 𝚶1. The polygon 𝚷2 = {p} is divided by its lines of symmetry into 2p right-angled triangles, which join the center 𝚶2 to the simplicially subdivided sides. The polyhedron 𝚷3 = {p, q} is divided by its planes of symmetry into g quadrirectangular tetrahedra (see 5.43), which join the centre 𝚶3 to the simplicially subdivided faces. Analogously, the general regular polytope 𝚷n is divided into a number of congruent simplexes ([orthoschemes]) which join the centre 𝚶n to the simplicially subdivided cells.[5] See also • 3-orthoscheme (tetrahedron with right-triangle faces) • 4-orthoscheme (5-cell with right-triangle faces) • Goursat tetrahedron • Order polytope References 1. Coxeter, H. S. M. (1989), "Trisecting an orthoscheme", Computers and Mathematics with Applications, 17 (1–3): 59–71, doi:10.1016/0898-1221(89)90148-X, MR 0994189 2. Fiedler, Miroslav (1957), "Über qualitative Winkeleigenschaften der Simplexe", Czechoslovak Mathematical Journal, 7 (82): 463–478, doi:10.21136/CMJ.1957.100260, MR 0094740 3. Coxeter, H. S. M. (1991), "Orthogonal trees", in Drysdale, Robert L. 
Scot (ed.), Proceedings of the Seventh Annual Symposium on Computational Geometry, North Conway, NH, USA, June 10–12, 1991, Association for Computing Machinery, pp. 89–97, doi:10.1145/109648.109658, S2CID 18687383 4. Coxeter, H. S. M. (1973), "§4.7 Other honeycombs (characteristic tetrahedra)", Regular Polytopes, pp. 71–72 5. Coxeter, H. S. M. (1973), "§7.6 The symmetry group of the general regular polytope", Regular Polytopes 6. Vinberg, E. B. (1993), "Volumes of non-Euclidean polyhedra", Russian Math. Surveys, 48:2 (2): 15–45, Bibcode:1993RuMaS..48...15V, doi:10.1070/rm1993v048n02abeh001011 7. Hadwiger, Hugo (1956), "Ungelöste Probleme", Elemente der Mathematik, 11: 109–110 8. Tschirpke, Katrin (1994), "The dissection of five-dimensional simplices into orthoschemes", Beiträge zur Algebra und Geometrie, 35 (1): 1–11, MR 1287191 9. Brandts, Jan; Korotov, Sergey; Křížek, Michal; Šolc, Jakub (2009), "On nonobtuse simplicial partitions" (PDF), SIAM Review, 51 (2): 317–335, Bibcode:2009SIAMR..51..317B, doi:10.1137/060669073, MR 2505583. See in particular Conjecture 23, p. 327. 10. Coxeter, H. S. M. (1973), "§11.7 Regular figures and their truncations", Regular Polytopes 11. Coxeter, H. S. M. (1973), "§7.9 The characteristic simplex", Regular Polytopes
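The dissection of the d-dimensional hypercube into d! congruent orthoschemes listed under Properties can be made concrete: the orthoscheme attached to a permutation p is the region 1 ≥ x[p0] ≥ … ≥ x[p(d−1)] ≥ 0, whose path edges are distinct standard basis vectors and hence mutually orthogonal. A small illustrative sketch (function names are hypothetical, not from the article):

```python
import itertools
import random

def orthoscheme_vertices(perm):
    # Path v_0 = 0, v_{k+1} = v_k + e_{perm[k]}: consecutive edges are distinct
    # standard basis vectors, hence mutually orthogonal.
    d = len(perm)
    v = [0] * d
    verts = [tuple(v)]
    for p in perm:
        v[p] = 1
        verts.append(tuple(v))
    return verts

def contains(perm, x):
    # Orthoscheme of perm = {x in [0,1]^d : x[perm[0]] >= ... >= x[perm[d-1]]}.
    vals = [x[p] for p in perm]
    return all(vals[i] >= vals[i + 1] for i in range(len(vals) - 1))

d = 3
for perm in itertools.permutations(range(d)):
    vs = orthoscheme_vertices(perm)
    edges = [tuple(b - a for a, b in zip(vs[i], vs[i + 1])) for i in range(d)]
    # Dot products of distinct path edges vanish:
    assert all(sum(e * g for e, g in zip(edges[i], edges[j])) == 0
               for i in range(d) for j in range(i + 1, d))

random.seed(1)
x = [random.random() for _ in range(d)]
# A generic point of the cube lies in exactly one of the d! = 6 orthoschemes:
assert sum(contains(perm, x) for perm in itertools.permutations(range(d))) == 1
```

Since each region has the same volume by symmetry, this also recovers the volume 1/d! of each orthoscheme in the dissection.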
Schlegel diagram In geometry, a Schlegel diagram is a projection of a polytope from $ \mathbb {R} ^{d}$ into $ \mathbb {R} ^{d-1}$ through a point just outside one of its facets. The resulting entity is a polytopal subdivision of the facet in $ \mathbb {R} ^{d-1}$ that, together with the original facet, is combinatorially equivalent to the original polytope. The diagram is named for Victor Schlegel, who in 1886 introduced this tool for studying combinatorial and topological properties of polytopes. In dimension 3, a Schlegel diagram is a projection of a polyhedron into a plane figure; in dimension 4, it is a projection of a 4-polytope to 3-space. As such, Schlegel diagrams are commonly used as a means of visualizing four-dimensional polytopes. [Figure: various visualizations of the icosahedron: perspective view, net, orthogonal projection, Petrie polygon, Schlegel diagram, vertex figure.] Construction The most elementary Schlegel diagram, that of a polyhedron, was described by Duncan Sommerville as follows:[1] A very useful method of representing a convex polyhedron is by plane projection. If it is projected from any external point, since each ray cuts it twice, it will be represented by a polygonal area divided twice over into polygons. It is always possible by suitable choice of the centre of projection to make the projection of one face completely contain the projections of all the other faces. This is called a Schlegel diagram of the polyhedron. The Schlegel diagram completely represents the morphology of the polyhedron. It is sometimes convenient to project the polyhedron from a vertex; this vertex is projected to infinity and does not appear in the diagram, the edges through it are represented by lines drawn outwards. Sommerville also considers the case of a simplex in four dimensions:[2] "The Schlegel diagram of simplex in S4 is a tetrahedron divided into four tetrahedra."
More generally, a polytope in n-dimensions has a Schlegel diagram constructed by a perspective projection viewed from a point outside of the polytope, above the center of a facet. All vertices and edges of the polytope are projected onto a hyperplane of that facet. If the polytope is convex, a point near the facet will exist which maps the facet outside, and all other facets inside, so no edges need to cross in the projection. Examples [Figure: Schlegel diagrams of the dodecahedron, with 12 pentagonal faces in the plane, and of the 120-cell, with 120 dodecahedral cells in 3-space.] See also • Net (polyhedron) – A different approach for visualization by lowering the dimension of a polytope is to build a net, disconnecting facets, and unfolding until the facets can exist on a single hyperplane. This maintains the geometric scale and shape, but makes the topological connections harder to see. References 1. Duncan Sommerville (1929). Introduction to the Geometry of N Dimensions, p.100. E. P. Dutton. Reprint 1958 by Dover Books. 2. Sommerville (1929), p.101. Further reading • Victor Schlegel (1883) Theorie der homogen zusammengesetzten Raumgebilde, Nova Acta, Ksl. Leop.-Carol. Deutsche Akademie der Naturforscher, Band XLIV, Nr. 4, Druck von E. Blochmann & Sohn in Dresden. • Victor Schlegel (1886) Ueber Projectionsmodelle der regelmässigen vier-dimensionalen Körper, Waren. • Coxeter, H.S.M.; Regular Polytopes, (Methuen and Co., 1948). (p. 242) • Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8 • Grünbaum, Branko (2003), Kaibel, Volker; Klee, Victor; Ziegler, Günter M. (eds.), Convex polytopes (2nd ed.), New York & London: Springer-Verlag, ISBN 0-387-00424-6. External links Wikimedia Commons has media related to Schlegel diagrams. • Weisstein, Eric W. "Schlegel graph". MathWorld. • Weisstein, Eric W. "Skeleton". MathWorld. • George W. Hart: 4D Polytope Projection Models by 3D Printing • Nrich maths – for the teenager. Also useful for teachers.
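The construction described by Sommerville can be sketched directly: project each vertex from a viewpoint just outside one facet onto that facet's plane. The snippet below is a minimal illustration for the unit cube, with the viewpoint chosen above the facet z = 1 (names and the specific viewpoint are illustrative assumptions, not from the article); it checks that the chosen facet's projection contains all the others:

```python
def schlegel_project(vertices, viewpoint, plane_z):
    # Central projection from `viewpoint` onto the horizontal plane z = plane_z:
    # each vertex v maps to the point where the line through viewpoint and v
    # meets that plane.
    cx, cy, cz = viewpoint
    out = []
    for (x, y, z) in vertices:
        t = (plane_z - cz) / (z - cz)
        out.append((cx + t * (x - cx), cy + t * (y - cy)))
    return out

# Unit cube, viewed from a point just outside the facet z = 1.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
proj = schlegel_project(cube, viewpoint=(0.5, 0.5, 1.5), plane_z=1.0)

top = [p for p, v in zip(proj, cube) if v[2] == 1]     # chosen facet: outer square
bottom = [p for p, v in zip(proj, cube) if v[2] == 0]  # opposite facet: inner square
# The opposite facet projects strictly inside the chosen facet, so no edges cross.
assert all(0 < px < 1 and 0 < py < 1 for px, py in bottom)
```

Here the bottom face lands on the inner square with corners at (1/3, 1/3) and (2/3, 2/3), nested inside the unit square, which is exactly the familiar "square within a square" Schlegel diagram of the cube.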
Schlessinger's theorem In algebra, Schlessinger's theorem is a theorem in deformation theory introduced by Schlessinger (1968) that gives conditions for a functor of artinian local rings to be pro-representable, refining an earlier theorem of Grothendieck. Definitions Λ is a complete Noetherian local ring with residue field k, and C is the category of local Artinian Λ-algebras (meaning in particular that as modules over Λ they are finitely generated and Artinian) with residue field k. A small extension in C is a morphism Y→Z in C that is surjective with kernel a 1-dimensional vector space over k. A functor is called representable if it is of the form hX where hX(Y)=hom(X,Y) for some X, and is called pro-representable if it is of the form Y→lim hom(Xi,Y) for a filtered direct limit over i in some filtered ordered set. A morphism of functors F→G from C to sets is called smooth if whenever Y→Z is an epimorphism of C, the map from F(Y) to F(Z)×G(Z)G(Y) is surjective. This definition is closely related to the notion of a formally smooth morphism of schemes. If in addition the map between the tangent spaces of F and G is an isomorphism, then F is called a hull of G. Grothendieck's theorem Grothendieck (1960, proposition 3.1) showed that a functor from the category C of Artinian algebras to sets is pro-representable if and only if it preserves all finite limits. This condition is equivalent to asking that the functor preserves pullbacks and the final object. In fact Grothendieck's theorem applies not only to the category C of Artinian algebras, but to any category with finite limits whose objects are Artinian. By taking the projective limit of the pro-representable functor in the larger category of linearly topologized local rings, one obtains a complete linearly topologized local ring representing the functor. Schlessinger's representation theorem One difficulty in applying Grothendieck's theorem is that it can be hard to check that a functor preserves all pullbacks. 
Schlessinger showed that it is sufficient to check that the functor preserves pullbacks of a special form, which is often easier to check. Schlessinger's theorem also gives conditions under which the functor has a hull, even if it is not representable. Schlessinger's theorem gives conditions for a set-valued functor F on C to be representable by a complete local Λ-algebra R with maximal ideal m such that R/mn is in C for all n. Schlessinger's theorem states that a functor from C to sets with F(k) a 1-element set is representable by a complete Noetherian local algebra if it has the following properties, and has a hull if it has the first three properties: • H1: The map F(Y×XZ)→F(Y)×F(X)F(Z) is surjective whenever Z→X is a small extension in C and Y→X is some morphism in C. • H2: The map in H1 is a bijection whenever Z→X is the small extension k[x]/(x2)→k. • H3: The tangent space of F is a finite-dimensional vector space over k. • H4: The map in H1 is a bijection whenever Y=Z is a small extension of X and the maps from Y and Z to X are the same. See also • Formal moduli • Artin's criterion References • Grothendieck (1960), Technique de descente et théorèmes d'existence en géométrie algébrique, II. Le théorème d'existence en théorie formelle des modules, Séminaire Bourbaki, vol. 12 • Schlessinger, Michael (1968), "Functors of Artin rings", Transactions of the American Mathematical Society, 130: 208–222, doi:10.2307/1994967, ISSN 0002-9947, JSTOR 1994967, MR 0217093
Martin Schlichenmaier Martin Schlichenmaier is a German-Luxembourgish mathematician whose research deals with algebraic, geometric, and analytic methods, in part related to theoretical and mathematical physics. Martin Schlichenmaier Born: October 9, 1952, Backnang, Germany Alma mater: University of Karlsruhe, Germany Occupation: mathematician Website: math.uni.lu/schlichenmaier/ Life and work In 1990 Schlichenmaier earned a doctoral degree in mathematics[1] at the University of Mannheim under Rainer Weissauer with the thesis Verallgemeinerte Krichever-Novikov Algebren und deren Darstellungen (Generalized Krichever-Novikov algebras and their representations).[2] His research topics include, among other fields, the geometric foundations of quantization, e.g. Berezin-Toeplitz quantization, and infinite-dimensional Lie algebras of geometric origin, such as the algebras of Krichever-Novikov type.[3] From 1986 until 2003 he worked at the University of Mannheim. In 1996 he habilitated with the thesis Zwei Anwendungen algebraisch-geometrischer Methoden in der theoretischen Physik: Berezin-Toeplitz-Quantisierung und globale Algebren der zweidimensionalen konformen Feldtheorie (Two applications of algebraic-geometric methods in theoretical physics: Berezin-Toeplitz quantization and global algebras of two-dimensional conformal field theory).[4] Since 2003 he has been a professor at the University of Luxembourg,[5] most recently as emeritus. From 2005 until 2017 he was director of the Mathematical Research Unit, Department of Mathematics,[6] at the University of Luxembourg. He is a member of the editorial boards of the journals Journal of Lie Theory[7] and Analysis and Mathematical Physics.[8] He is president of the Luxembourgish Mathematical Society (SML).[9] He received the Grand Prix 2016 en sciences mathématiques de l'Institut Grand-Ducal - prix de la Bourse de Luxembourg.[10] In 2019 he was appointed a full member of the Institut Grand-Ducal, Section des Sciences.[11] Selected publications Books: • Schlichenmaier, Martin (2014), Krichever-Novikov Type Algebras. Theory and Applications, Studies in Mathematics, vol.
53, Berlin/Boston: de Gruyter, doi:10.1515/9783110279641, ISBN 978-3-11-026517-0. • Schlichenmaier, Martin (2007), An Introduction to Riemann Surfaces, Algebraic Curves and Moduli Spaces, 2nd enlarged edition, Theoretical and Mathematical Physics, Berlin/Heidelberg: Springer, doi:10.1007/978-3-540-71175-9, ISBN 978-3-540-71174-2. Articles: • Schlichenmaier, Martin (2003), "Local cocycles and central extensions for multi-point algebras of Krichever-Novikov type", J. reine angew. Math., 2003 (559): 53–94, arXiv:math/0112116, doi:10.1515/crll.2003.052, MR 1989644, S2CID 14518508. • Karabegov, Alexander; Schlichenmaier, Martin (2001), "Identification of Berezin-Toeplitz deformation quantization" (PDF), J. reine angew. Math., 2001 (540): 49–76, doi:10.1515/crll.2001.086, MR 1868597, S2CID 9408910. • Schlichenmaier, Martin; Sheinman, Oleg K. (2004), "Knizhnik-Zamolodchikov equations for positive genus and Krichever-Novikov algebras", Russ. Math. Surv., 59 (4): 737–770, Bibcode:2004RuMaS..59..737S, doi:10.1070/RM2004v059n04ABEH000760, MR 2106647, S2CID 250825111. • Bordemann, Martin; Meinrenken, Eckhard; Schlichenmaier, Martin (1994), "Toeplitz quantization of Kähler manifolds and gl(N), N to infinity limits", Communications in Mathematical Physics, 165 (2): 281–296, arXiv:hep-th/9309134, doi:10.1007/BF02099772, MR 1301849, S2CID 117257261 References 1. Martin Schlichenmaier at the Mathematics Genealogy Project 2. Martin Schlichenmaier (1990). Verallgemeinerte Krichever-Novikov Algebren und deren Darstellungen. Dissertation, University of Mannheim, June 1990 (Thesis) (in German). University of Mannheim, Mannheim, Germany. 3. Autoren-Profil Martin Schlichenmaier in the database zbMATH 4. Martin Schlichenmaier (1996). Zwei Anwendungen algebraisch-geometrischer Methoden in der theoretischen Physik: Berezin-Toeplitz-Quantisierung und globale Algebren der konformen Feldtheorie. Habilitationsschrift, University of Mannheim, June 1996 (Thesis) (in German).
University of Mannheim, Mannheim, Germany. 5. Université du Luxembourg. "Martin Schlichenmaier". 6. Department of Mathematics. "DMATH". 7. "Journal of Lie Theory". 8. "Analysis and Mathematical Physics". 9. "Luxembourg Mathematical Society, Executive Board". 10. https://www.igdss.lu/wp-content/uploads/2020/02/Conf%C3%A9rences-donn%C3%A9es-de-2008-%C3%A0-2018.pdf 11. "INSTITUT GRAND-DUCAL | Section des sciences". External links • Martin Schlichenmaiers Homepage at the Universität Luxemburg • Autoren-Profil Martin Schlichenmaier in the database zbMATH • Martin Schlichenmaier at the Mathematics Genealogy Project
Planar Riemann surface In mathematics, a planar Riemann surface (or schlichtartig Riemann surface) is a Riemann surface sharing the topological properties of a connected open subset of the Riemann sphere. They are characterized by the topological property that the complement of every closed Jordan curve in the Riemann surface has two connected components. An equivalent characterization is the differential geometric property that every closed differential 1-form of compact support is exact. Every simply connected Riemann surface is planar. The class of planar Riemann surfaces was studied by Koebe who proved in 1910, as a generalization of the uniformization theorem, that every such surface is conformally equivalent to either the Riemann sphere or the complex plane with slits parallel to the real axis removed. Elementary properties Main article: Differential forms on a Riemann surface § Integration of 1-forms along paths • A closed 1-form ω is exact if and only if ∫γ ω = 0 for every closed Jordan curve γ.[1] This follows from the Poincaré lemma for 1-forms and the fact that ∫δ df = f(δ(b)) – f(δ(a)) for a path δ parametrized by [a, b] and f a smooth function defined on an open neighbourhood of δ([a, b]). This formula for ∫δ df extends by continuity to continuous paths, and hence vanishes for a closed path. Conversely if ∫γ ω = 0 for every closed Jordan curve γ, then a function f(z) can be defined on X by fixing a point w and taking any piecewise smooth path δ from w to z and set f(z) = ∫δ ω. The assumption implies that f is independent of the path. To check that df = ω, it suffices to check this locally. Fix z0 and take a path δ1 from w to z0. Near z0 the Poincaré lemma implies ω = dg for some smooth function g defined in a neighbourhood of z0. If δ2 is a path from z0 to z, then f(z) = ∫δ1 ω + ∫δ2 ω = ∫δ1 ω + g(z) − g(z0), so f differs from g by a constant near z0. Hence df = dg = ω near z0. 
• A closed Jordan curve γ on a Riemann surface separates the surface into two disjoint connected regions if and only if ∫γ ω = 0 for every closed 1-form ω of compact support.[2] If the closed Jordan curve γ separates the surface, it is homotopic to a smooth Jordan curve δ (with non-vanishing derivative) that separates the surface into two halves. The integral of dω over each half equals ± ∫δ ω by Stokes' theorem. Since dω = 0, it follows that ∫δ ω = 0. Hence ∫γ ω = 0. Conversely suppose γ is a Jordan curve that does not separate the Riemann surface. Replacing γ by a homotopic curve, it may be assumed that γ is smooth with non-vanishing derivative. Since γ does not separate the surface, there is a smooth Jordan curve δ (with non-vanishing derivative) which cuts γ transversely at only one point. An open neighbourhood of γ ∪ δ is diffeomorphic to an open neighbourhood of corresponding Jordan curves in a torus. A model for this can be taken as the square [−π,π]×[−π,π] in R2 with opposite sides identified; the transverse Jordan curves γ and δ correspond to the x and y axes. Let ω = a(x) dx with a ≥ 0 supported near 0 with ∫ a = 1. Thus ω is a closed 1-form supported in an open neighbourhood of δ with ∫γ ω = 1 ≠ 0. • A Riemann surface is planar if and only if every closed 1-form of compact support is exact.[3] Let ω be a closed 1-form of compact support on a planar Riemann surface. If γ is a closed Jordan curve on the surface, then it separates the surface. Hence ∫γ ω = 0. Since this is true for all closed Jordan curves, ω must be exact. Conversely suppose that every closed 1-form of compact support is exact. Let γ be a closed Jordan curve and let ω be a closed 1-form of compact support. Because ω must be exact, ∫γ ω = 0. Since this holds for every such ω, γ separates the surface into two disjoint connected regions. So the surface is planar. • Every connected open subset of a planar Riemann surface is planar. This is immediate from the characterization in terms of 1-forms.
• Every simply connected Riemann surface is planar.[4] If ω is a closed 1-form of compact support, the integral ∫γ ω is independent of the homotopy class of γ. In a simply connected Riemann surface, every closed curve is homotopic to a constant curve for which the integral is zero. Hence a simply connected Riemann surface is planar. • If ω is a closed 1-form on a simply connected Riemann surface, ∫γ ω = 0 for every closed Jordan curve γ.[5] This is the so-called "monodromy property." Covering the path with disks and using the Poincaré lemma for ω, by the fundamental theorem of calculus successive parts of the integral can be computed as f(γ(ti)) − f(γ(ti − 1)). Since the curve is closed, γ(tN) = γ(t0), so that the sums cancel. Uniformization theorem Koebe's Theorem. A compact planar Riemann surface X is conformally equivalent to the Riemann sphere. A non-compact planar Riemann surface X is conformally equivalent either to the complex plane or to the complex plane with finitely many closed intervals parallel to the real axis removed.[6][7] • The harmonic function U. If X is a Riemann surface and P is a point on X with local coordinate z, there is a unique real-valued harmonic function U on X \ {P} such that U(z) – Re z−1 is harmonic near z = 0 (the point P) and dU is square integrable on the complement of a neighbourhood of P. Moreover, if h is any real-valued smooth function on X vanishing in a neighbourhood of P with ||dh||2 = ∫X dh∧∗dh < ∞, then (dU,dh) = ∫X dU ∧ *dh = 0. This is an immediate consequence of Dirichlet's principle in a planar surface; it can also be proved using Weyl's method of orthogonal projection in the space of square integrable 1-forms. • The conjugate harmonic function V.[8] There is a harmonic function V on X \ {P} such that ∗dU = dV. In the local coordinate z, V(z) − Im z−1 is harmonic near z = 0. The function V is uniquely determined up to the addition of a real constant.
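In the local coordinate z = x + iy the singular parts of U and V prescribed above can be written out explicitly, and the relation ∗dU = dV reduces near P to the Cauchy-Riemann equations; the following routine verification is added here for concreteness (it is not part of the original article):

```latex
% Singular parts of U and V near P, in the local coordinate z = x + iy:
U_{0} = \operatorname{Re}\frac{1}{z} = \frac{x}{x^{2}+y^{2}},
\qquad
V_{0} = \operatorname{Im}\frac{1}{z} = \frac{-y}{x^{2}+y^{2}}.
% A direct computation gives the Cauchy-Riemann equations U_x = V_y, U_y = -V_x:
\partial_{x}U_{0} = \frac{y^{2}-x^{2}}{(x^{2}+y^{2})^{2}} = \partial_{y}V_{0},
\qquad
\partial_{y}U_{0} = \frac{-2xy}{(x^{2}+y^{2})^{2}} = -\partial_{x}V_{0},
% so U_0 + iV_0 = 1/z and dU_0 + i dV_0 = d(z^{-1}), matching the singular term
% of the meromorphic differential df.
```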
The function U and its harmonic conjugate V satisfy the Cauchy-Riemann equations Ux = Vy and Uy = − Vx. It suffices to prove that ∫C ∗dU = 0 for any piecewise smooth Jordan curve in X \ {P}. Since X is planar, the complement of C in X has two open components S1 and S2 with P lying in S2. There is an open neighborhood N of C made up of a union of a finite number of disks and a smooth function 0 ≤ h ≤ 1 such that h equals 1 on S1 and equals 0 on S2 away from N. Thus (dU,dh) = 0. By Stokes' theorem, this condition can be rewritten as ∫C ∗dU = 0. So ∗dU is exact and therefore has the form dV. • The meromorphic function f. The meromorphic differential df = dU + idV is holomorphic everywhere except for a double pole at P with singular term d(z−1) in the local coordinate z. • Koebe's separation argument.[9] Let φ and ψ be smooth bounded real-valued functions on R with bounded first derivatives such that φ'(t) > 0 for all t ≠ 0 and φ vanishes to infinite order at t = 0, while ψ(t) > 0 for t in (a,b) and ψ(t) ≡ 0 for t outside (a,b) (here a = −∞ and b = +∞ are allowed). Let X be a Riemann surface and W an open connected subset with a holomorphic function g = u + iv differing from f by a constant such that g(W) lies in the strip a < Im z < b. Define a real-valued function by h = φ(u)ψ(v) on W and 0 off W. Then h, so defined, cannot be a smooth function; for if so $(dh,dh)=\int _{X}dh\wedge \star dh=\int _{W}(\varphi ^{\prime }(u)^{2}\psi (v)^{2}+\varphi (u)^{2}\psi ^{\prime }(v)^{2})\,du\wedge \star du\leq 2M^{2}\,\|dU\|^{2}<\infty ,$ where M = sup (|φ|, |φ'|, |ψ|, |ψ'|), and $(dU,dh)=\int _{X}dU\wedge \star dh=\int _{W}\varphi ^{\prime }(u)\psi (v)\,du\wedge \star du>0,$ contradicting the orthogonality condition on U. • Connectivity and level curves. (1) A level curve for V divides X into two open connected regions. (2) The open set between two level curves of V is connected.
(3) The level curves for U and V through any regular point of f divide X into four open connected regions, each containing the regular point and the pole of f in their closures. (1) Since V is only defined up to a constant, it suffices to prove this for the level curve V = 0, i.e. that V = 0 divides the surface into two connected open regions.[10] If not, there is a connected component W of the complement of V = 0 not containing P in its closure. Take g = f, with a = 0 and b = +∞ if V > 0 on W, and a = −∞ and b = 0 if V < 0 on W. The boundary of W lies on the level curve V = 0. Since ψ(v) vanishes to infinite order when v = 0, h is a smooth function, so Koebe's argument gives a contradiction. (2) It suffices to show that the open set defined by a < V < b is connected.[11] If not, this open set has a connected component W not containing P in its closure. Take g = f in this case. The boundary of W lies on the level curves V = a and V = b. Since ψ(v) vanishes to infinite order when v = a or b, h is a smooth function, so Koebe's argument gives a contradiction. (3) Translating f by a constant if necessary, it suffices to show that if U = 0 = V at a regular point of f, then the two level curves U = 0 and V = 0 divide the surface into 4 connected regions.[12] The level curves U = 0, V = 0 divide the Riemann surface into the four disjoint open sets on which ±u > 0 and ±v > 0. If one of these open sets is not connected, then it has an open connected component W not containing P in its closure. If v > 0 on W, take a = 0 and b = +∞; if v < 0 on W, set a = −∞ and b = 0. Take g = f in this case. The boundary of W lies on the union of the level curves U = 0 and V = 0. Since φ and ψ vanish to infinite order at 0, h is a smooth function, so Koebe's argument gives a contradiction.
Finally, using f as a local coordinate, the level curves divide an open neighbourhood of the regular point into four disjoint connected open sets; in particular each of the four regions is non-empty and contains the regular point in its closure; similar reasoning applies at the pole of f using f(z)−1 as a local coordinate. • Univalence of f at regular points. The function f takes different values at distinct regular points (where df ≠ 0). Suppose that f takes the same value at two regular points z and w and has a pole at ζ. Translating f by a constant if necessary, it can be assumed that f(z) = 0 = f(w). The points z, w and ζ lie in the closure of each of the four regions into which the level curves U = 0 and V = 0 divide the surface. The points z and w can be joined by a Jordan curve lying, apart from its endpoints, in the region U > 0, V > 0. Similarly they can be joined by a Jordan curve lying, apart from its endpoints, in the region U < 0, V < 0, with the curve transverse to the boundary. Together these curves give a closed Jordan curve γ passing through z and w. Since the Riemann surface X is planar, this Jordan curve must divide the surface into two open connected regions. The pole ζ must lie in one of these regions, Y say. Since each of the connected open regions U > 0, V < 0 and U < 0, V > 0 is disjoint from γ and intersects a neighbourhood of ζ, both must be contained in Y. On the other hand, using f to define coordinates near z (or w), the curve lies in two opposite quadrants and the other two open quadrants lie in different components of the complement of the curve, a contradiction.[13] • Regularity of f. The meromorphic function f is regular at every point except the pole. If f is not regular at a point, in local coordinates f has the expansion f(z) = a + b zm (1 + c1z + c2z2 + ⋅⋅⋅) with b ≠ 0 and m > 1.
By the argument principle (or by taking the mth root of 1 + c1z + c2z2 + ⋅⋅⋅), away from 0 this map is m-to-one, a contradiction.[14] • The complement of the image of f. Either the image of f is the whole Riemann sphere C ∪ {∞}, in which case the Riemann surface is compact and f gives a conformal equivalence with the Riemann sphere; or the complement of the image is a union of closed intervals and isolated points, in which case the Riemann surface is conformally equivalent to a horizontal slit region. Considered as a holomorphic mapping from the Riemann surface X to the Riemann sphere, f is regular everywhere including at infinity. So its image Ω is open in the Riemann sphere. Since it is one-one, the inverse mapping of f is holomorphic from the image onto the Riemann surface. In particular the two are homeomorphic. If the image is the whole sphere then the first statement follows. In this case the Riemann surface is compact. Conversely if the Riemann surface is compact, its image is compact so closed. But then the image is open and closed and hence the whole Riemann sphere by connectivity. If f is not onto, the complement of the image is a closed non-empty subset of the Riemann sphere. So it is a compact subset of the Riemann sphere. It does not contain ∞. So the complement of the image is a compact subset of the complex plane. Now on the Riemann surface the open subsets a < V < b are connected. So the open set of points w in Ω with a < Im w < b is connected and hence path connected. To prove that Ω is a horizontal slit region, it is enough to show that every connected component of C \ Ω is either a single point or a compact interval parallel to the x axis. This follows once it is known that two points in the complement with different imaginary parts lie in different connected components. Suppose then that w1 = u1 + iv1 and w2 = u2 + iv2 are points in C \ Ω with v1 < v2. Take a point in the strip v1 < Im z < v2, say w.
By compactness of C \ Ω, this set is contained in the interior of a circle of radius R and centre w. The points w ± R lie in the intersection of Ω and the strip, which is open and connected. So they can be joined by a piecewise linear curve in the intersection. This curve and one of the semicircles between w + R and w − R give a Jordan curve enclosing w1 with w2 in its exterior. But then w1 and w2 lie on different connected components of C \ Ω. Finally the connected components of C \ Ω must be closed, so compact; and the connected compact subsets of a line parallel to the x axis are just isolated points or closed intervals.[15] Since G does not contain ∞, the construction can equally well be applied to e−iθG, taking $\mathbb {C} $ with horizontal slits removed as the target, to give a uniformizer fθ. The uniformizer eiθfθ(e−iθz) then takes G to $\mathbb {C} $ with parallel slits removed at an angle of θ to the x-axis. In particular θ = π/2 leads to a uniformizer fπ/2(z) for $\mathbb {C} $ with vertical slits removed. By uniqueness fθ(z) = eiθ [cos θ f0(z) − i sin θ fπ/2(z)].[16][17][18] Classification of simply connected Riemann surfaces Main article: Riemann mapping theorem Theorem. Any simply connected Riemann surface is conformally equivalent to either (1) the Riemann sphere (elliptic), (2) the complex plane (parabolic) or (3) the unit disk (hyperbolic).[19][20][21] Simple connectedness of the extended sphere with k > 1 points or closed intervals removed can be excluded on purely topological grounds, using the Seifert-van Kampen theorem; for in this case the fundamental group is isomorphic to the free group with (k − 1) generators and its Abelianization, the singular homology group, is isomorphic to Zk − 1. A short direct proof is also possible using complex function theory. The Riemann sphere is compact, whereas neither the complex plane nor the unit disk is, so there is not even a homeomorphism of (1) onto (2) or (3).
A conformal equivalence of (2) onto (3) would result in a bounded holomorphic function on the complex plane: by Liouville's theorem, it would have to be a constant, a contradiction. The "slit realisation" of the unit disk as the extended complex plane with [−1,1] removed comes from the mapping z = (w + w−1)/2.[22] On the other hand the map (z + 1)/(z − 1) carries the extended plane with [−1,1] removed onto the complex plane with (−∞,0] removed. Taking the principal value of the square root gives a conformal mapping of the extended sphere with [−1,1] removed onto the right half-plane. The Möbius transformation (t − 1)/(t + 1) carries the right half-plane onto the unit disk. Composition of these mappings results in the conformal mapping z − (z2 − 1)1/2, thus solving z = (w + w−1)/2.[23] To show that there can be only one closed interval, suppose, reductio ad absurdum, that there are at least two (either could degenerate to a single point). The two points a and b can be assumed to be on different intervals. There will then be a piecewise smooth closed curve C such that b lies in the interior of C and a in the exterior. Let ω = dz(z − b)−1 − dz(z − a)−1, a closed holomorphic form on X. By simple connectivity ∫C ω = 0. On the other hand by Cauchy's integral formula, (2iπ)−1 ∫C ω = 1, a contradiction.[24] Corollary (Riemann mapping theorem). Any connected and simply connected open domain in the complex plane with at least two boundary points is conformally equivalent to the unit disk.[25][26] This is an immediate consequence of the theorem. Applications Koebe's uniformization theorem for planar Riemann surfaces implies the uniformization theorem for simply connected Riemann surfaces. Indeed, the slit domain is either the whole Riemann sphere; or the Riemann sphere less a point, so the complex plane after applying a Möbius transformation to move the point to infinity; or the Riemann sphere less a closed interval parallel to the real axis.
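The inversion formula z − (z2 − 1)1/2 obtained above can be sanity-checked numerically. The sketch below avoids branch-cut bookkeeping by using only the quadratic relation w2 − 2zw + 1 = 0 satisfied by the preimages, keeping the root inside the unit circle (the two roots multiply to 1, so exactly one lies inside when z is off [−1,1]):

```python
import cmath

def disk_preimage(z):
    # The Joukowski map J(w) = (w + 1/w)/2 sends the punctured unit disk
    # conformally onto the plane with [-1, 1] removed; invert it by solving
    # w^2 - 2 z w + 1 = 0 and keeping the root inside the unit circle.
    s = cmath.sqrt(z * z - 1)
    return z - s if abs(z - s) < 1 else z + s

for z in [3, -2.5, 2j, -1 + 1j, 0.3 + 0.4j]:
    w = disk_preimage(z)
    assert abs(w) < 1                        # the preimage lands in the unit disk
    assert abs((w + 1 / w) / 2 - z) < 1e-9   # J(w) recovers z
```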
After applying a Möbius transformation, the closed interval can be mapped to [−1,1]. The domain is therefore conformally equivalent to the unit disk, since the conformal mapping g(z) = (z + z−1)/2 maps the unit disk onto the extended plane with [−1,1] removed. For a domain G obtained by excising finitely many disjoint closed disks from $\mathbb {C} $ ∪ {∞}, the conformal mapping onto a horizontal or vertical slit domain can be made explicit and presented in closed form. Thus the Poisson kernel on any of the disks can be used to solve the Dirichlet problem on the boundary of the disk, as described in Katznelson (2004). Elementary properties such as the maximum principle and the Schwarz reflection principle apply as described in Ahlfors (1978). For a specific disk, the group of Möbius transformations stabilizing the boundary, a copy of SU(1,1), acts equivariantly on the corresponding Poisson kernel. For a fixed a in G, the Dirichlet problem with boundary value log |z − a| can be solved using the Poisson kernels. It yields a harmonic function h(z) on G. The difference g(z,a) = h(z) − log |z − a| is called the Green's function with pole at a. It has the important symmetry property that g(z,w) = g(w,z), so it is harmonic in both variables where this makes sense. Hence, if a = u + i v, the harmonic function ∂u g(z,a) has harmonic conjugate − ∂v g(z,a). On the other hand, by the Dirichlet problem, for each ∂Di there is a unique harmonic function ωi on G equal to 1 on ∂Di and 0 on ∂Dj for j ≠ i (the so-called harmonic measure of ∂Di). The ωi's sum to 1. The harmonic function ∂v g(z,a) on G \ {a} is multi-valued: its argument changes by an integer multiple of 2π around each of the boundary disks Di. The problem of multi-valuedness is resolved by choosing λi's so that ∂v g(z,a) + Σ λi ∂v ωi(z) has no change in argument around every ∂Dj. By construction the horizontal slit mapping p(z) = (∂u + i ∂v) [g(z,a) + Σ λi ωi(z)] is holomorphic in G except at a where it has a pole with residue 1.
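For the unit disk, where the Green's function has the elementary closed form g(z,a) = log |1 − āz| − log |z − a|, its defining properties and the symmetry g(z,w) = g(w,z) can be checked directly. This is an illustrative sketch only; the multiply connected domain G above admits no such simple formula:

```python
import cmath, math

def green_disk(z, a):
    # Green's function of the unit disk with pole at a (standard closed form).
    return math.log(abs(1 - a.conjugate() * z)) - math.log(abs(z - a))

z, w = 0.3 + 0.2j, -0.5 + 0.1j
# Symmetry g(z, w) = g(w, z): |1 - conj(w) z| = |1 - conj(z) w| exactly.
assert abs(green_disk(z, w) - green_disk(w, z)) < 1e-9
# Positive inside the disk, zero on the boundary circle |z| = 1.
assert green_disk(z, w) > 0
b = cmath.exp(0.7j)
assert abs(green_disk(b, w)) < 1e-9
```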
Similarly the vertical slit mapping is obtained by setting q(z) = (− ∂v + i ∂u) [g(z,a) + Σ μi ωi(z)]; the mapping q(z) is holomorphic except for a pole at a with residue 1.[27] Koebe's theorem also implies that every finitely connected bounded region in the plane is conformally equivalent to the open unit disk with finitely many smaller disjoint closed disks removed, or equivalently the extended complex plane with finitely many disjoint closed disks removed. This result is known as Koebe's "Kreisnormierung" theorem. Following Goluzin (1969) it can be deduced from the parallel slit theorem using a variant of Carathéodory's kernel theorem and Brouwer's theorem on invariance of domain. Goluzin's method is a simplification of Koebe's original argument. In fact every conformal mapping of such a circular domain onto another circular domain is necessarily given by a Möbius transformation. To see this, it can be assumed that both domains contain the point ∞ and that the conformal mapping f carries ∞ onto ∞. The mapping functions can be continued continuously to the boundary circles. Successive inversions in these boundary circles generate Schottky groups. The union of the domains under the action of both Schottky groups defines dense open subsets of the Riemann sphere. By the Schwarz reflection principle, f can be extended to a conformal map between these open dense sets. Their complements are the limit sets of the Schottky groups. They are compact and have measure zero. The Koebe distortion theorem can then be used to prove that f extends continuously to a conformal map of the Riemann sphere onto itself. Consequently, f is given by a Möbius transformation.[28] Now the space of circular domains with n circles has dimension 3n − 2 (fixing a point on one circle), as does the space of parallel slit domains with n parallel slits (fixing an endpoint of one slit). Both spaces are path connected. The parallel slit theorem gives a map from one space to the other.
It is one-one because conformal maps between circular domains are given by Möbius transformations. It is continuous by the convergence theorem for kernels. By invariance of domain, the map carries open sets onto open sets. The convergence theorem for kernels can be applied to the inverse of the map: it proves that if a sequence of slit domains is realisable by circular domains and the slit domains tend to a slit domain, then the corresponding sequence of circular domains converges to a circular domain; moreover the associated conformal mappings also converge. So the map must be onto by path connectedness of the target space.[29] An account of Koebe's original proof of uniformization by circular domains can be found in Bieberbach (1953). Uniformization can also be proved using the Beltrami equation. Schiffer & Hawley (1962) constructed the conformal mapping to a circular domain by minimizing a nonlinear functional, a method that generalized the Dirichlet principle.[30] Koebe also described two iterative schemes for constructing the conformal mapping onto a circular domain; these are described in Gaier (1964) and Henrici (1986), and, rediscovered by aeronautical engineers (Halsey (1979)), they are highly efficient. In fact suppose a region on the Riemann sphere is given by the exterior of n disjoint Jordan curves and that ∞ is an exterior point. Let f1 be the Riemann mapping sending the outside of the first curve onto the outside of the unit disk, fixing ∞. The Jordan curves are transformed by f1 to n new curves. Now do the same for the second curve to get f2 with another new set of n curves. Continue in this way until fn has been defined. Then restart the process on the first of the new curves and continue.
The curves gradually tend to fixed circles and for large N the map fN approaches the identity; and the compositions fN ∘ fN−1 ∘ ⋅⋅⋅ ∘ f2 ∘ f1 tend uniformly on compacta to the uniformizing map.[31] Uniformization by parallel slit domains and by circular domains was proved by variational principles, starting with Richard Courant in 1910, and is described in Courant (1977). Uniformization by parallel slit domains holds for arbitrary connected open domains in C; Koebe (1908) conjectured (Koebe's "Kreisnormierungsproblem") that a similar statement was true for uniformization by circular domains. He & Schramm (1993) proved Koebe's conjecture when the number of boundary components is countable; although proved for wide classes of domains, the conjecture remains open when the number of boundary components is uncountable. Koebe (1936) also considered the limiting case of osculating or tangential circles, which has continued to be actively studied in the theory of circle packing. See also • Carathéodory's theorem (conformal mapping) • Jordan curve theorem • Schoenflies problem Notes 1. Kodaira 2007, pp. 257, 293 2. Napier & Ramachandran 2011, pp. 267, 335 3. Napier & Ramachandran 2011, p. 267 4. Kodaira 2007, pp. 320–321 5. Kodaira 2007, pp. 314–315 6. Kodaira 2007, p. 322 7. Springer 1957, p. 223 8. Springer 1957, pp. 219–220 9. See: • Koebe 1910b • Weyl 1955, pp. 161–162 • Springer 1957, p. 221 • Kodaira 2007, pp. 324–325 10. Weyl 1955, pp. 161–162 11. Kodaira 2007, pp. 324–325 12. Springer 1957, pp. 220–222 13. Springer 1957, p. 223 14. Springer 1957, p. 223 15. Kodaira 2007, pp. 328–329 16. Nehari 1952, pp. 338–339 17. Ahlfors 1978, pp. 259–261 18. Koebe 1910a, Koebe 1916, Koebe 1918 19. Springer 1957, pp. 224–225 20. Kodaira 2007, pp. 329–330 21. Weyl 1955, pp. 165–167 22. Weyl 1955, p. 165 23. Kodaira 2007, p. 331 24.
Kodaira 2007, p. 330 25. Springer 1957, p. 225 26. Kodaira 2007, p. 332 27. Ahlfors 1978, pp. 162–171, 251–261 28. Goluzin 1969, pp. 234–237 29. Goluzin 1969, pp. 237–241 30. Henrici 1986, pp. 488–496 31. Henrici 1986, pp. 497–504 References • Ahlfors, Lars V.; Sario, Leo (1960), Riemann surfaces, Princeton Mathematical Series, vol. 26, Princeton University Press • Ahlfors, Lars V. (1978), Complex analysis. An introduction to the theory of analytic functions of one complex variable, International Series in Pure and Applied Mathematics (3rd ed.), McGraw-Hill, ISBN 0070006571 • Bieberbach, L. (1953), Conformal mapping, translated by F. Steinhardt, Chelsea • Courant, Richard (1977), Dirichlet's principle, conformal mapping, and minimal surfaces, Springer, ISBN 0-387-90246-5 • Gaier, Dieter (1959a), "Über ein Extremalproblem der konformen Abbildung", Math. Z. (in German), 71: 83–88, doi:10.1007/BF01181387, S2CID 120833385 • Gaier, Dieter (1959b), "Untersuchungen zur Durchführung der konformen Abbildung mehrfach zusammenhängender Gebiete", Arch. Rational Mech. Anal. (in German), 3: 149–178, doi:10.1007/BF00284172, S2CID 121587700 • Gaier, Dieter (1964), Konstruktive Methoden der konformen Abbildung, Springer • Goluzin, G. M. (1969), Geometric theory of functions of a complex variable, Translations of Mathematical Monographs, vol. 26, American Mathematical Society • Grunsky, Helmut (1978), Lectures on theory of functions in multiply connected domains, Vandenhoeck & Ruprecht, ISBN 3-525-40142-6 • Halsey, N.D. (1979), "Potential flow analysis of multi-element airfoils using conformal mapping", AIAA J., 17 (12): 1281–1288, doi:10.2514/3.61308 • He, Zheng-Xu; Schramm, Oded (1993), "Fixed points, Koebe uniformization and circle packings", Ann.
of Math., 137 (2): 369–406, doi:10.2307/2946541, JSTOR 2946541 • Henrici, Peter (1986), Applied and computational complex analysis, Wiley-Interscience, ISBN 0-471-08703-3 • Katznelson, Yitzhak (2004), An Introduction to Harmonic Analysis, Cambridge University Press, ISBN 978-0-521-54359-0 • Kodaira, Kunihiko (2007), Complex analysis, Cambridge Studies in Advanced Mathematics, vol. 107, Cambridge University Press, ISBN 9780521809375 • Koebe, Paul (1908), "Über die Uniformisierung beliebiger analytischer Kurven, III", Göttingen Nachrichten: 337–358 • Koebe, Paul (1910), "Über die konforme Abbildung mehrfach-zusammenhängender Bereiche", Jahresber. Deut. Math. Ver., 19: 339–348 • Koebe, Paul (1910a), "Über die Uniformisierung beliebiger analytischer Kurven", Journal für die reine und angewandte Mathematik, 138: 192–253, doi:10.1515/crll.1910.138.192, S2CID 120198686 • Koebe, Paul (1910b), "Über die Hilbertsche Uniformisierungsmethode" (PDF), Göttinger Nachrichten: 61–65 • Koebe, Paul (1916), "Abhandlungen zur Theorie der konformen Abbildung. IV. Abbildung mehrfach zusammenhängender schlichter Bereiche auf Schlitzbereiche", Acta Math., 41: 305–344, doi:10.1007/BF02422949, S2CID 124229696 • Koebe, Paul (1918), "Abhandlungen zur Theorie der konformen Abbildung: V. Abbildung mehrfach zusammenhängender schlichter Bereiche auf Schlitzbereiche", Math. Z., 2: 198–236, doi:10.1007/BF01212905, S2CID 124045767 • Koebe, Paul (1920), "Abhandlungen zur Theorie der konformen Abbildung VI. Abbildung mehrfach zusammenhängender schlichter Bereiche auf Kreisbereiche. Uniformisierung hyperelliptischer Kurven. (Iterationsmethoden)", Math. Z., 7: 235–301, doi:10.1007/BF01199400, S2CID 125679472 • Koebe, Paul (1936), "Kontaktprobleme der konformen Abbildung", Berichte Verhande. Sächs. Akad. Wiss. Leipzig, 88: 141–164 • Kühnau, R. (2005), "Canonical conformal and quasiconformal mappings", Handbook of complex analysis, Volume 2, Elsevier, pp.
131–163 • Napier, Terrence; Ramachandran, Mohan (2011), An introduction to Riemann surfaces, Birkhäuser, ISBN 978-0-8176-4693-6 • Nehari, Zeev (1952), Conformal mapping, Dover Publications, ISBN 9780486611372 • Nevanlinna, Rolf (1953), Uniformisierung, Die Grundlehren der Mathematischen Wissenschaften, vol. 64, Springer • Pfluger, Albert (1957), Theorie der Riemannschen Flächen, Springer • Schiffer, Menahem; Spencer, Donald C. (1954), Functionals of finite Riemann surfaces, Princeton University Press • Schiffer, M. (1959), "Fredholm eigenvalues of multiply connected domains", Pacific J. Math., 9: 211–269, doi:10.2140/pjm.1959.9.211 • Schiffer, Menahem; Hawley, N. S. (1962), "Connections and conformal mapping", Acta Math., 107 (3–4): 175–274, doi:10.1007/bf02545790 • Simha, R. R. (1989), "The uniformisation theorem for planar Riemann surfaces", Arch. Math., 53 (6): 599–603, doi:10.1007/bf01199820, S2CID 119590093 • Springer, George (1957), Introduction to Riemann surfaces, Addison–Wesley, MR 0092855 • Stephenson, Kenneth (2005), Introduction to circle packing, Cambridge University Press, ISBN 0-521-82356-0 • Weyl, Hermann (1913), Die Idee der Riemannschen Fläche (1997 reprint of the 1913 German original), Teubner, ISBN 3-8154-2096-2 • Weyl, Hermann (1955), The concept of a Riemann surface, translated by Gerald R. MacLane (3rd ed.), Addison–Wesley, MR 0069903
Schläfli graph In the mathematical field of graph theory, the Schläfli graph, named after Ludwig Schläfli, is a 16-regular undirected graph with 27 vertices and 216 edges. It is a strongly regular graph with parameters srg(27, 16, 10, 8). Its basic parameters: 27 vertices, 216 edges, radius 2, diameter 2, girth 3, 51840 automorphisms, chromatic number 9; it is strongly regular, claw-free and Hamiltonian. Construction The intersection graph of the 27 lines on a cubic surface is a locally linear graph that is the complement of the Schläfli graph. That is, two vertices are adjacent in the Schläfli graph if and only if the corresponding pair of lines are skew.[1] The Schläfli graph may also be constructed from the system of eight-dimensional vectors (1, 0, 0, 0, 0, 0, 1, 0), (1, 0, 0, 0, 0, 0, 0, 1), and (−1/2, −1/2, 1/2, 1/2, 1/2, 1/2, 1/2, 1/2), and the 24 other vectors obtained by permuting the first six coordinates of these three vectors. These 27 vectors correspond to the vertices of the Schläfli graph; two vertices are adjacent if and only if the corresponding two vectors have 1 as their inner product.[2] Alternatively, this graph can be seen as the complement of the collinearity graph of the generalized quadrangle GQ(2, 4). Subgraphs and neighborhoods The neighborhood of any vertex in the Schläfli graph forms a 16-vertex subgraph in which each vertex has 10 neighbors (the numbers 16 and 10 coming from the parameters of the Schläfli graph as a strongly regular graph). These subgraphs are all isomorphic to the complement graph of the Clebsch graph.[1][3] Since the Clebsch graph is triangle-free, the Schläfli graph is claw-free. It plays an important role in the structure theory for claw-free graphs by Chudnovsky & Seymour (2005). Any two skew lines of these 27 belong to a unique Schläfli double six configuration, a set of 12 lines whose intersection graph is a crown graph in which the two lines have disjoint neighborhoods.
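The vector construction is small enough to verify by brute force. The following sketch builds the 27 vectors (scaled by 2 to keep the arithmetic in integers, so inner product 1 becomes dot product 4) and confirms the strongly regular parameters srg(27, 16, 10, 8):

```python
from itertools import permutations

# Seed vectors scaled by 2; permute the first six coordinates of each.
seeds = [(2, 0, 0, 0, 0, 0, 2, 0),
         (2, 0, 0, 0, 0, 0, 0, 2),
         (-1, -1, 1, 1, 1, 1, 1, 1)]
verts = sorted({p + s[6:] for s in seeds for p in permutations(s[:6])})

# Adjacency: inner product 1 for the original vectors, i.e. 4 after scaling.
adj = [[sum(x * y for x, y in zip(u, v)) == 4 for v in verts] for u in verts]

def common(i, j):
    # Number of common neighbours of vertices i and j.
    return sum(a and b for a, b in zip(adj[i], adj[j]))

degrees = {sum(row) for row in adj}
lambdas = {common(i, j) for i in range(27) for j in range(i) if adj[i][j]}
mus = {common(i, j) for i in range(27) for j in range(i) if not adj[i][j]}
print(len(verts), degrees, lambdas, mus)  # 27 {16} {10} {8}
```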
Correspondingly, in the Schläfli graph, each edge uv belongs uniquely to a subgraph in the form of a Cartesian product of complete graphs K6 $\square $ K2 in such a way that u and v belong to different K6 subgraphs of the product. The Schläfli graph has a total of 36 subgraphs of this form, one of which consists of the zero-one vectors in the eight-dimensional representation described above.[2] Ultrahomogeneity A graph is defined to be k-ultrahomogeneous if every isomorphism between two of its induced subgraphs of at most k vertices can be extended to an automorphism of the whole graph. If a graph is 5-ultrahomogeneous, it is ultrahomogeneous for every k; the only finite connected graphs of this type are complete graphs, Turán graphs, 3 × 3 rook's graphs, and the 5-cycle. The infinite Rado graph is countably ultrahomogeneous. There are only two connected graphs that are 4-ultrahomogeneous but not 5-ultrahomogeneous: the Schläfli graph and its complement. The proof relies on the classification of finite simple groups.[4] See also • Gosset graph - contains the Schläfli graph as an induced subgraph of the neighborhood of any vertex. Notes 1. Holton & Sheehan (1993). 2. Bussemaker & Neumaier (1992). 3. Cameron & van Lint (1991). Note that Cameron and van Lint use an alternative definition of these graphs in which both the Schläfli graph and the Clebsch graph are complemented from their definitions here. 4. Buczak (1980); Cameron (1980); Devillers (2002). References • Buczak, J. M. J. (1980), Finite Group Theory, D.Phil. thesis, University of Oxford. As cited by Devillers (2002). • Bussemaker, F. C.; Neumaier, A. (1992), "Exceptional graphs with smallest eigenvalue-2 and related problems", Mathematics of Computation, 59 (200): 583–608, doi:10.1090/S0025-5718-1992-1134718-6. • Cameron, Peter Jephson (1980), "6-transitive graphs", Journal of Combinatorial Theory, Series B, 28 (2): 168–179, doi:10.1016/0095-8956(80)90063-5. As cited by Devillers (2002). 
• Cameron, Peter Jephson; van Lint, Jacobus Hendricus (1991), Designs, graphs, codes and their links, London Mathematical Society student texts, vol. 22, Cambridge University Press, p. 35, ISBN 978-0-521-41325-1. • Chudnovsky, Maria; Seymour, Paul (2005), "The structure of claw-free graphs", Surveys in combinatorics 2005 (PDF), London Math. Soc. Lecture Note Ser., vol. 327, Cambridge: Cambridge Univ. Press, pp. 153–171, MR 2187738. • Devillers, Alice (2002), Classification of some homogeneous and ultrahomogeneous structures, Ph.D. thesis, Université Libre de Bruxelles. • Holton, D. A.; Sheehan, J. (1993), The Petersen Graph, Cambridge University Press, pp. 270–271. External links • Weisstein, Eric W. "Schläfli Graph". MathWorld. • Andries E. Brouwer page.
Schläfli symbol In geometry, the Schläfli symbol is a notation of the form $\{p,q,r,...\}$ that defines regular polytopes and tessellations. The Schläfli symbol is named after the 19th-century Swiss mathematician Ludwig Schläfli,[1]: 143  who generalized Euclidean geometry to more than three dimensions and discovered all their convex regular polytopes, including the six that occur in four dimensions. Definition The Schläfli symbol is a recursive description,[1]: 129  starting with {p} for a p-sided regular polygon that is convex. For example, {3} is an equilateral triangle, {4} is a square, {5} a convex regular pentagon, etc. Regular star polygons are not convex, and their Schläfli symbols {p/q} contain irreducible fractions p/q, where p is the number of vertices, and q is their turning number. Equivalently, {p/q} is created from the vertices of {p} by connecting every qth vertex. For example, {5⁄2} is a pentagram; {5⁄1} is a pentagon. A regular polyhedron that has q regular p-sided polygon faces around each vertex is represented by {p,q}. For example, the cube has 3 squares around each vertex and is represented by {4,3}. A regular 4-dimensional polytope, with r {p,q} regular polyhedral cells around each edge is represented by {p,q,r}. For example, a tesseract, {4,3,3}, has 3 cubes, {4,3}, around an edge. In general, a regular polytope {p,q,r,...,y,z} has z {p,q,r,...,y} facets around every peak, where a peak is a vertex in a polyhedron, an edge in a 4-polytope, a face in a 5-polytope, and an (n-3)-face in an n-polytope. Properties A regular polytope has a regular vertex figure. The vertex figure of a regular polytope {p,q,r,...,y,z} is {q,r,...,y,z}. Regular polytopes can have star polygon elements, like the pentagram, with symbol {5⁄2}, represented by the vertices of a pentagon but connected alternately.
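The "connect every qth vertex" recipe is easy to make concrete. A short sketch, labelling the vertices of {p} by 0, …, p − 1:

```python
from math import gcd

def star_polygon_edges(p, q):
    # {p/q}: join vertex k of the regular p-gon to vertex k + q (mod p).
    return [(k, (k + q) % p) for k in range(p)]

# {5/2}, the pentagram: one closed circuit visiting all 5 vertices.
print(star_polygon_edges(5, 2))  # [(0, 2), (1, 3), (2, 4), (3, 0), (4, 1)]

# The recipe yields a single (self-intersecting) polygon exactly when
# gcd(p, q) = 1; otherwise it gives a compound of gcd(p, q) smaller polygons.
assert gcd(5, 2) == 1
assert gcd(6, 2) == 2  # {6/2} degenerates to a compound of two triangles
```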
The Schläfli symbol can represent a finite convex polyhedron, an infinite tessellation of Euclidean space, or an infinite tessellation of hyperbolic space, depending on the angle defect of the construction. A positive angle defect allows the vertex figure to fold into a higher dimension and loop back into itself as a polytope. A zero angle defect tessellates space of the same dimension as the facets. A negative angle defect cannot exist in ordinary space, but can be constructed in hyperbolic space. Usually, a facet or a vertex figure is assumed to be a finite polytope, but can sometimes itself be considered a tessellation. A regular polytope also has a dual polytope, represented by the Schläfli symbol elements in reverse order. A self-dual regular polytope will have a symmetric Schläfli symbol. In addition to describing Euclidean polytopes, Schläfli symbols can be used to describe spherical polytopes or spherical honeycombs.[1]: 138  History and variations Schläfli's work was almost unknown in his lifetime, and his notation for describing polytopes was rediscovered independently by several others. In particular, Thorold Gosset rediscovered the Schläfli symbol, which he wrote as | p | q | r | ... | z | rather than with brackets and commas as Schläfli did.[1]: 144  Gosset's form has greater symmetry: the number of dimensions is the number of vertical bars, and the symbol exactly includes the sub-symbols for facet and vertex figure. Gosset regarded | p as an operator, which can be applied to | q | ... | z | to produce a polytope with p-gonal faces whose vertex figure is | q | ... | z |. Cases Symmetry groups Schläfli symbols are closely related to (finite) reflection symmetry groups, which correspond precisely to the finite Coxeter groups and are specified with the same indices, but square brackets instead [p,q,r,...]. Such groups are often named by the regular polytopes they generate.
For example, [3,3] is the Coxeter group for reflective tetrahedral symmetry, [3,4] is reflective octahedral symmetry, and [3,5] is reflective icosahedral symmetry. Regular polygons (plane) The Schläfli symbol of a convex regular polygon with p edges is {p}. For example, a regular pentagon is represented by {5}. For nonconvex star polygons, the constructive notation {p⁄q} is used, where p is the number of vertices and q−1 is the number of vertices skipped when drawing each edge of the star. For example, {5⁄2} represents the pentagram. Regular polyhedra (3 dimensions) The Schläfli symbol of a regular polyhedron is {p,q} if its faces are p-gons, and each vertex is surrounded by q faces (the vertex figure is a q-gon). For example, {5,3} is the regular dodecahedron. It has pentagonal (5 edges) faces, and 3 pentagons around each vertex. See the 5 convex Platonic solids, the 4 nonconvex Kepler-Poinsot polyhedra. Topologically, a regular 2-dimensional tessellation may be regarded as similar to a (3-dimensional) polyhedron, but such that the angular defect is zero. Thus, Schläfli symbols may also be defined for regular tessellations of Euclidean or hyperbolic space in a similar way as for polyhedra. The analogy holds for higher dimensions. For example, the hexagonal tiling is represented by {6,3}. Regular 4-polytopes (4 dimensions) The Schläfli symbol of a regular 4-polytope is of the form {p,q,r}. Its (two-dimensional) faces are regular p-gons ({p}), the cells are regular polyhedra of type {p,q}, the vertex figures are regular polyhedra of type {q,r}, and the edge figures are regular r-gons (type {r}). See the six convex regular and 10 regular star 4-polytopes. For example, the 120-cell is represented by {5,3,3}. It is made of dodecahedron cells {5,3}, and has 3 cells around each edge. There is one regular tessellation of Euclidean 3-space: the cubic honeycomb, with a Schläfli symbol of {4,3,4}, made of cubic cells and 4 cubes around each edge. 
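The angle-defect trichotomy for {p,q} described under Properties can be checked in a few lines. Comparing the vertex angle sum q(1 − 2/p)·180° with 360° reduces, in exact integer arithmetic, to comparing (p − 2)(q − 2) with 4:

```python
def classify(p, q):
    # q regular p-gons meet at each vertex; the interior angle of {p} is
    # (1 - 2/p)*180 degrees, so the angle sum is below / equal to / above
    # 360 degrees exactly as (p - 2)*(q - 2) is below / equal to / above 4.
    d = (p - 2) * (q - 2)
    if d < 4:
        return "polyhedron"        # positive defect: a Platonic solid {p,q}
    if d == 4:
        return "Euclidean tiling"  # zero defect: tiles the plane
    return "hyperbolic tiling"     # negative defect: hyperbolic plane only

pairs = [(p, q) for p in range(3, 7) for q in range(3, 7)]
solids = [pq for pq in pairs if classify(*pq) == "polyhedron"]
tilings = [pq for pq in pairs if classify(*pq) == "Euclidean tiling"]
print(solids)   # [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]: the 5 Platonic solids
print(tilings)  # [(3, 6), (4, 4), (6, 3)]: the 3 regular Euclidean tilings
```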
There are also 4 regular compact hyperbolic tessellations, including {5,3,4}, the order-4 dodecahedral honeycomb, which fills hyperbolic space with dodecahedron cells. If a 4-polytope's symbol is palindromic (e.g. {3,3,3} or {3,4,3}), its bitruncation will only have truncated forms of the vertex figure as cells.

Regular n-polytopes (higher dimensions)

For higher-dimensional regular polytopes, the Schläfli symbol is defined recursively as {p1,p2,...,pn−1} if the facets have Schläfli symbol {p1,p2,...,pn−2} and the vertex figures have Schläfli symbol {p2,p3,...,pn−1}. A vertex figure of a facet of a polytope and a facet of a vertex figure of the same polytope are the same: {p2,p3,...,pn−2}. There are only 3 regular polytopes in 5 dimensions and above: the simplex, {3,3,3,...,3}; the cross-polytope, {3,3,...,3,4}; and the hypercube, {4,3,3,...,3}. There are no nonconvex regular polytopes above 4 dimensions.

Dual polytopes

If a polytope of dimension n ≥ 2 has Schläfli symbol {p1,p2,...,pn−1}, then its dual has Schläfli symbol {pn−1,...,p2,p1}. If the sequence is palindromic, i.e. the same forwards and backwards, the polytope is self-dual. Every regular polytope in 2 dimensions (polygon) is self-dual.

Prismatic polytopes

Uniform prismatic polytopes can be defined and named as a Cartesian product (with operator "×") of lower-dimensional regular polytopes.
• In 0D, a point is represented by ( ). Its Coxeter diagram is empty. Its Coxeter notation symmetry is ][.
• In 1D, a line segment is represented by { }. Its symmetry is [ ].
• In 2D, a rectangle is represented as { } × { }. Its symmetry is [2].
• In 3D, a p-gonal prism is represented as { } × {p}. Its symmetry is [2,p].
• In 4D, a uniform {p,q}-hedral prism is represented as { } × {p,q}. Its symmetry is [2,p,q].
• In 4D, a uniform p-q duoprism is represented as {p} × {q}.
Its symmetry is [p,2,q].

The prismatic duals, or bipyramids, can be represented as composite symbols, but with the addition operator "+".
• In 2D, a rhombus is represented as { } + { }. Its symmetry is [2].
• In 3D, a p-gonal bipyramid is represented as { } + {p}. Its symmetry is [2,p].
• In 4D, a {p,q}-hedral bipyramid is represented as { } + {p,q}. Its symmetry is [p,q].
• In 4D, a p-q duopyramid is represented as {p} + {q}. Its symmetry is [p,2,q].

Pyramidal polytopes containing vertices orthogonally offset can be represented using the join operator "∨". Every pair of vertices between joined figures is connected by edges.

In 2D, an isosceles triangle can be represented as ( ) ∨ { } = ( ) ∨ [( ) ∨ ( )].

In 3D:
• A digonal disphenoid can be represented as { } ∨ { } = [( ) ∨ ( )] ∨ [( ) ∨ ( )].
• A p-gonal pyramid is represented as ( ) ∨ {p}.

In 4D:
• A p-q-hedral pyramid is represented as ( ) ∨ {p,q}.
• A 5-cell is represented as ( ) ∨ [( ) ∨ {3}] or [( ) ∨ ( )] ∨ {3} = { } ∨ {3}.
• A square pyramidal pyramid is represented as ( ) ∨ [( ) ∨ {4}] or [( ) ∨ ( )] ∨ {4} = { } ∨ {4}.

When mixing operators, the order of operations from highest to lowest is ×, +, ∨.

Axial polytopes containing vertices on parallel offset hyperplanes can be represented by the || operator. A uniform prism is {n}||{n} and an antiprism is {n}||r{n}.

Extension of Schläfli symbols

Polygons and circle tilings

A truncated regular polygon doubles in sides. A regular polygon with even sides can be halved. An altered even-sided regular 2n-gon generates a star figure compound, 2{n}.
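The product and join operators above act in a simple way on vertex and edge counts: under ×, vertices multiply and each edge is copied across the other factor's vertices; under ∨, vertices add and every cross pair of vertices gains an edge. A combinatorial sketch (function names are illustrative, and only counts are tracked, not geometry):

```python
def cartesian_product(v1, e1, v2, e2):
    """A x B: vertex counts multiply; each edge of one factor is
    duplicated once per vertex of the other factor."""
    return v1 * v2, e1 * v2 + e2 * v1

def join(v1, e1, v2, e2):
    """A v B: vertices add, and every cross pair of vertices gains an edge."""
    return v1 + v2, e1 + e2 + v1 * v2

# { } x {4}: a 4-gonal prism, i.e. the cube (8 vertices, 12 edges).
print(cartesian_product(2, 1, 4, 4))  # (8, 12)
# ( ) v {4}: the square pyramid (5 vertices, 8 edges).
print(join(1, 0, 4, 4))               # (5, 8)
# { } v {3}: the 5-cell (5 vertices, 10 edges).
print(join(2, 1, 3, 3))               # (5, 10)
```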
Operations on a regular polygon {p} (example column for {6}):
• Regular: {p}; symmetry [p]; example hexagon.
• Truncated: t{p} = {2p}; symmetry [[p]] = [2p]; example truncated hexagon (dodecagon).
• Altered and holosnubbed: a{2p} = β{p}; symmetry [2p]; example altered hexagon (hexagram).
• Half and snubbed: h{2p} = s{p} = {p}; symmetry [1+,2p] = [p]; example half hexagon (triangle).

Polyhedra and tilings

Coxeter expanded his usage of the Schläfli symbol to quasiregular polyhedra by adding a vertical dimension to the symbol. It was a starting point toward the more general Coxeter diagram. Norman Johnson simplified the notation for vertical symbols with an r prefix. The t-notation is the most general and directly corresponds to the rings of the Coxeter diagram. Symbols have a corresponding alternation, replacing rings with holes in a Coxeter diagram, with an h prefix standing for half; the construction is limited by the requirement that neighboring branches must be even-ordered, and it cuts the symmetry order in half. A related operator, a for altered, is shown with two nested holes and represents a compound polyhedron with both alternated halves, retaining the original full symmetry. A snub is a half form of a truncation, and a holosnub is both halves of an alternated truncation.
Operations on a regular polyhedron {p,q} (example column for {4,3}):
• Regular: {p,q} = t0{p,q}; symmetry [p,q] or [(p,q,2)]; example cube.
• Truncated: t{p,q} = t0,1{p,q}; example truncated cube.
• Bitruncated (truncated dual): 2t{p,q} = t1,2{p,q}; example truncated octahedron.
• Rectified (quasiregular): r{p,q} = t1{p,q}; example cuboctahedron.
• Birectified (regular dual): 2r{p,q} = t2{p,q}; example octahedron.
• Cantellated (rectified rectified): rr{p,q} = t0,2{p,q}; example rhombicuboctahedron.
• Cantitruncated (truncated rectified): tr{p,q} = t0,1,2{p,q}; example truncated cuboctahedron.

Alternations, quarters and snubs

Alternations have half the symmetry of the Coxeter groups and are represented by unfilled rings. There are two choices possible on which half of the vertices are taken, but the symbol does not imply which one. Quarter forms are shown here with a + inside a hollow ring to imply they are two independent alternations.
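The prefix notation in the table above maps directly onto ring-index sets of the t-notation. A small illustrative lookup (the names and helper are mine, not a standard API):

```python
# Operator prefix -> index set of t_{i,...}{p,q}, as in the table above.
T_NOTATION = {
    "":   (0,),        # regular {p,q}        = t0{p,q}
    "t":  (0, 1),      # truncated            = t0,1{p,q}
    "2t": (1, 2),      # bitruncated          = t1,2{p,q}
    "r":  (1,),        # rectified            = t1{p,q}
    "2r": (2,),        # birectified (dual)   = t2{p,q}
    "rr": (0, 2),      # cantellated          = t0,2{p,q}
    "tr": (0, 1, 2),   # cantitruncated       = t0,1,2{p,q}
}

def t_symbol(prefix: str, p: int, q: int) -> str:
    """Render the t-notation string for an operator applied to {p,q}."""
    idx = ",".join(str(i) for i in T_NOTATION[prefix])
    return f"t{idx}{{{p},{q}}}"

print(t_symbol("tr", 4, 3))  # t0,1,2{4,3}, the truncated cuboctahedron
```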
Alternations (example column for {4,3}):
• Alternated (half) regular: h{2p,q} = ht0{2p,q}; symmetry [1+,2p,q]; example demicube (tetrahedron).
• Snub regular: s{p,2q} = ht0,1{p,2q}; symmetry [p+,2q].
• Snub dual regular: s{q,2p} = ht1,2{2p,q}; symmetry [2p,q+]; example snub octahedron (icosahedron).
• Alternated rectified (p and q are even): hr{p,q} = ht1{p,q}; symmetry [p,1+,q].
• Alternated rectified rectified (p and q are even): hrr{p,q} = ht0,2{p,q}; symmetry [(p,q,2+)].
• Quartered (p and q are even): q{p,q} = ht0ht2{p,q}; symmetry [1+,p,q,1+].
• Snub rectified (snub quasiregular): sr{p,q} = ht0,1,2{p,q}; symmetry [p,q]+; example snub cuboctahedron (snub cube).

Altered and holosnubbed

Altered and holosnubbed forms have the full symmetry of the Coxeter group and are represented by double unfilled rings, but may be represented as compounds.
• Altered regular: a{p,q} = at0{p,q}; symmetry [p,q]; example stellated octahedron.
• Holosnub dual regular: ß{q,p} = at0,1{q,p}; symmetry [p,q]; example compound of two icosahedra.

ß, looking similar to the Greek letter beta (β), is the German letter eszett.
Polychora and honeycombs

Linear families (example column for {4,3,3}):
• Regular: {p,q,r} = t0{p,q,r}; example tesseract.
• Truncated: t{p,q,r} = t0,1{p,q,r}; example truncated tesseract.
• Rectified: r{p,q,r} = t1{p,q,r}; example rectified tesseract.
• Bitruncated: 2t{p,q,r} = t1,2{p,q,r}; example bitruncated tesseract.
• Birectified (rectified dual): 2r{p,q,r} = r{r,q,p} = t2{p,q,r}; example rectified 16-cell.
• Tritruncated (truncated dual): 3t{p,q,r} = t{r,q,p} = t2,3{p,q,r}; example bitruncated tesseract.
• Trirectified (dual): 3r{p,q,r} = {r,q,p} = t3{p,q,r}; example 16-cell.
• Cantellated: rr{p,q,r} = t0,2{p,q,r}; example cantellated tesseract.
• Cantitruncated: tr{p,q,r} = t0,1,2{p,q,r}; example cantitruncated tesseract.
• Runcinated (expanded): e3{p,q,r} = t0,3{p,q,r}; example runcinated tesseract.
• Runcitruncated: t0,1,3{p,q,r}; example runcitruncated tesseract.
• Omnitruncated: t0,1,2,3{p,q,r}; example omnitruncated tesseract.

Alternations, quarters and snubs (example column for {4,3,3}):
• Half (p even): h{p,q,r} = ht0{p,q,r}; example 16-cell.
• Quarter (p and r even): q{p,q,r} = ht0ht3{p,q,r}.
• Snub (q even): s{p,q,r} = ht0,1{p,q,r}; example snub 24-cell.
• Snub rectified (r even): sr{p,q,r} = ht0,1,2{p,q,r}; example snub 24-cell.
• Alternated duoprism: s{p}s{q} = ht0,1,2,3{p,2,q}; example great duoantiprism.

Bifurcating families
• Quasiregular: {p,q1,1} = t0{p,q1,1}; example demitesseract (16-cell).
• Truncated: t{p,q1,1}
= t0,1{p,q1,1}; example truncated demitesseract (truncated 16-cell).
• Rectified: r{p,q1,1} = t1{p,q1,1}; example rectified demitesseract (24-cell).
• Cantellated: rr{p,q1,1} = t0,2,3{p,q1,1}; example cantellated demitesseract (cantellated 16-cell).
• Cantitruncated: tr{p,q1,1} = t0,1,2,3{p,q1,1}; example cantitruncated demitesseract (cantitruncated 16-cell).
• Snub rectified: sr{p,q1,1} = ht0,1,2,3{p,q1,1}; example snub demitesseract (snub 24-cell).
• Quasiregular: {r,/q\,p} = t0{r,/q\,p}; example tetrahedral-octahedral honeycomb.
• Truncated: t{r,/q\,p} = t0,1{r,/q\,p}; example truncated tetrahedral-octahedral honeycomb.
• Rectified: r{r,/q\,p} = t1{r,/q\,p}; example rectified tetrahedral-octahedral honeycomb (rectified cubic honeycomb).
• Cantellated: rr{r,/q\,p} = t0,2,3{r,/q\,p}; example cantellated cubic honeycomb.
• Cantitruncated: tr{r,/q\,p} = t0,1,2,3{r,/q\,p}; example cantitruncated cubic honeycomb.
• Snub rectified: sr{p,/q\,r} = ht0,1,2,3{p,/q\,r}; example snub rectified cubic honeycomb (non-uniform, but near-miss).
References

1. Coxeter, H.S.M. (1973). Regular Polytopes (3rd ed.). New York: Dover.

Sources
• Coxeter, Harold Scott MacDonald (1973) [1948]. Regular Polytopes (3rd ed.). Dover Publications. pp. 14, 69, 149. ISBN 0-486-61480-8. OCLC 798003.
• Sherk, F. Arthur; McMullen, Peter; Thompson, Anthony C.; Weiss, Asia Ivic, eds. (1995). Kaleidoscopes: Selected Writings of H.S.M. Coxeter. Wiley. ISBN 978-0-471-01003-6.
• (Paper 22) pp. 251–278 Coxeter, H.S.M. (1940). "Regular and Semi Regular Polytopes I". Math. Zeit. 46: 380–407. doi:10.1007/BF01181449. S2CID 186237114. Zbl 0022.38305. MR 2,10
• (Paper 23) pp. 279–312 — (1985). "Regular and Semi-Regular Polytopes II". Math. Zeit. 188 (4): 559–591. doi:10.1007/BF01161657. S2CID 120429557. Zbl 0547.52005.
• (Paper 24) pp. 313–358 — (1988). "Regular and Semi-Regular Polytopes III". Math. Zeit. 200 (1): 3–45. doi:10.1007/BF01161745. S2CID 186237142. Zbl 0633.52006.

External links
• Weisstein, Eric W. "Schläfli Symbol". MathWorld. Retrieved December 28, 2019.
• Starck, Maurice (April 13, 2012). "Polyhedral Names and Notations". A Ride Through the Polyhedra World. Retrieved December 28, 2019.
Wikipedia
Schlömilch's series

Schlömilch's series is a Fourier series–type expansion of a twice continuously differentiable function on the interval $(0,\pi )$ in terms of the Bessel function of the first kind, named after the German mathematician Oskar Schlömilch, who derived the series in 1857.[1][2][3][4][5] The real-valued function $f(x)$ has the following expansion: $f(x)=a_{0}+\sum _{n=1}^{\infty }a_{n}J_{0}(nx),$ where ${\begin{aligned}a_{0}&=f(0)+{\frac {1}{\pi }}\int _{0}^{\pi }\int _{0}^{\pi /2}uf'(u\sin \theta )\ d\theta \ du,\\a_{n}&={\frac {2}{\pi }}\int _{0}^{\pi }\int _{0}^{\pi /2}u\cos nu\ f'(u\sin \theta )\ d\theta \ du.\end{aligned}}$

Examples

Some examples of Schlömilch's series are the following:
• Null functions on the interval $(0,\pi )$ can be expressed by a Schlömilch series, $0={\frac {1}{2}}+\sum _{n=1}^{\infty }(-1)^{n}J_{0}(nx)$, a representation that cannot occur for a Fourier series. This is particularly interesting because the null function is represented by a series expansion in which not all the coefficients are zero. The series converges only when $0<x<\pi $; it oscillates at $x=0$ and diverges at $x=\pi $. This theorem is generalized so that $0={\frac {1}{2\Gamma (\nu +1)}}+\sum _{n=1}^{\infty }(-1)^{n}J_{\nu }(nx)/(nx/2)^{\nu }$ when $-1/2<\nu \leq 1/2$ and $0<x<\pi $, and also when $\nu >1/2$ and $0<x\leq \pi $. These properties were identified by Niels Nielsen.[6]
• $x={\frac {\pi ^{2}}{4}}-2\sum _{n=1,3,...}^{\infty }{\frac {J_{0}(nx)}{n^{2}}},\quad 0<x<\pi .$
• $x^{2}={\frac {2\pi ^{2}}{3}}+8\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}}}J_{0}(nx),\quad -\pi <x<\pi .$
• ${\frac {1}{x}}+\sum _{m=1}^{k}{\frac {2}{\sqrt {x^{2}-4m^{2}\pi ^{2}}}}={\frac {1}{2}}+\sum _{n=1}^{\infty }J_{0}(nx),\quad 2k\pi <x<2(k+1)\pi .$
• If $(r,z)$ are cylindrical polar coordinates, then the series $1+\sum _{n=1}^{\infty }e^{-nz}J_{0}(nr)$ is a solution of Laplace's equation for $z>0$.

See also
• Kapteyn series

References
1. Schlömilch, O. (1857).
On Bessel's function. Zeitschrift für Mathematik und Physik, 2, 155–158. 2. Whittaker, E. T., & Watson, G. N. (1996). A Course of Modern Analysis. Cambridge University Press. 3. Lord Rayleigh (1911). LXII. On a physical interpretation of Schlömilch's theorem in Bessel's functions. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 21(124), 567–571. 4. Watson, G. N. (1995). A Treatise on the Theory of Bessel Functions. Cambridge University Press. 5. Chapman, S. (1911). On the general theory of summability, with application to Fourier's and other series. Quarterly Journal, 43, 1–52. 6. Nielsen, N. (1904). Handbuch der Theorie der Cylinderfunktionen. B. G. Teubner.
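The $x^{2}$ expansion in the examples above is easy to check numerically. A sketch, assuming nothing beyond the series itself: $J_{0}$ is evaluated from its integral representation $J_{0}(z)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(z\sin \theta )\,d\theta $ by Simpson's rule, and the partial sum is compared with $x^{2}$ at $x=1$:

```python
import math

def j0(z: float, steps: int = 2000) -> float:
    """Bessel J0 via J0(z) = (1/pi) * integral_0^pi cos(z sin t) dt,
    using composite Simpson's rule (steps must be even)."""
    h = math.pi / steps
    total = math.cos(z * math.sin(0.0)) + math.cos(z * math.sin(math.pi))
    for k in range(1, steps):
        w = 4.0 if k % 2 == 1 else 2.0
        total += w * math.cos(z * math.sin(k * h))
    return total * h / (3.0 * math.pi)

def schlomilch_x_squared(x: float, terms: int = 60) -> float:
    """Partial sum of x^2 = 2*pi^2/3 + 8 * sum_n (-1)^n J0(n x) / n^2."""
    s = 2.0 * math.pi ** 2 / 3.0
    for n in range(1, terms + 1):
        s += 8.0 * (-1) ** n * j0(n * x) / n ** 2
    return s

print(schlomilch_x_squared(1.0))  # close to 1.0
```

The terms decay like $n^{-5/2}$, so a few dozen terms already reproduce $x^{2}$ to about two decimal places.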
Subspace theorem

In mathematics, the subspace theorem says that points of small height in projective space lie in a finite number of hyperplanes. It is a result obtained by Wolfgang M. Schmidt (1972).

Statement

The subspace theorem states that if L1,...,Ln are linearly independent linear forms in n variables with algebraic coefficients and if ε>0 is any given real number, then the non-zero integer points x with $|L_{1}(x)\cdots L_{n}(x)|<|x|^{-\epsilon }$ lie in a finite number of proper subspaces of Qn. A quantitative form of the theorem, bounding the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was generalised by Schlickewei (1977) to allow more general absolute values on number fields.

Applications

The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and the solution of the S-unit equation.[1]

A corollary on Diophantine approximation

The following corollary to the subspace theorem is often itself referred to as the subspace theorem. If a1,...,an are algebraic numbers such that 1,a1,...,an are linearly independent over Q and ε>0 is any given real number, then there are only finitely many rational n-tuples (x1/y,...,xn/y) with $|a_{i}-x_{i}/y|<y^{-(1+1/n+\epsilon )},\quad i=1,\ldots ,n.$ The specialization n = 1 gives the Thue–Siegel–Roth theorem. One may also note that the exponent 1+1/n+ε is best possible by Dirichlet's theorem on diophantine approximation.

References
1. Bombieri & Gubler (2006) pp. 176–230.
• Bombieri, Enrico; Gubler, Walter (2006). Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge: Cambridge University Press. ISBN 978-0-521-71229-3. MR 2216774. Zbl 1130.11034.
• Schlickewei, Hans Peter (1977). "On norm form equations". J. Number Theory. 9 (3): 370–380. doi:10.1016/0022-314X(77)90072-5. MR 0444562.
• Schmidt, Wolfgang M. (1972). "Norm form equations". Annals of Mathematics. Second Series. 96 (3): 526–551.
doi:10.2307/1970824. JSTOR 1970824. MR 0314761. • Schmidt, Wolfgang M. (1980). Diophantine approximation. Lecture Notes in Mathematics. Vol. 785 (1996 with minor corrections ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-540-38645-2. ISBN 3-540-09762-7. MR 0568710. Zbl 0421.10019. • Schmidt, Wolfgang M. (1991). Diophantine approximations and Diophantine equations. Lecture Notes in Mathematics. Vol. 1467. Berlin: Springer-Verlag. doi:10.1007/BFb0098246. ISBN 3-540-54058-X. MR 1176315. S2CID 118143570. Zbl 0754.11020.
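The n = 1 case of the corollary (the Thue–Siegel–Roth theorem) can be probed numerically. The sketch below, with illustrative function names, takes a = √2 and ε = 1/2 and searches for denominators y admitting an approximation better than y^(−2.5); consistent with the theorem, only a few small denominators appear, and raising the search bound adds no more:

```python
import math

def good_denominators(alpha: float, eps: float, y_max: int):
    """Denominators y <= y_max with some x satisfying
    |alpha - x/y| < y^(-(2 + eps))."""
    hits = []
    for y in range(1, y_max + 1):
        x = round(alpha * y)  # the best numerator for this denominator
        if abs(alpha - x / y) < y ** -(2.0 + eps):
            hits.append(y)
    return hits

print(good_denominators(math.sqrt(2), 0.5, 20000))  # [1, 2, 5]
```

The convergents of √2 approximate it to within about 1/(2√2·y²), so the stricter exponent 2.5 can only be met while y^0.5 < 2√2, i.e. for tiny denominators.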
Schmidt–Kalman filter

This article is about the reduced-order Kalman filter. For the linearized Kalman filter, see Extended Kalman filter.

The Schmidt–Kalman filter is a modification of the Kalman filter for reducing the dimensionality of the state estimate, while still considering the effects of the additional states in the calculation of the covariance matrix and the Kalman gains.[1] A common application is to account for the effects of nuisance parameters such as sensor biases without increasing the dimensionality of the state estimate. This ensures that the covariance matrix accurately represents the distribution of the errors. The primary advantage of using the Schmidt–Kalman filter instead of increasing the dimensionality of the state space is the reduction in computational complexity, which can enable the use of filtering in real-time systems. The Schmidt–Kalman filter is also useful when residual biases are unobservable, that is, when the effect of the bias cannot be separated out from the measurement. In this case, the Schmidt–Kalman filter is a robust way to avoid estimating the value of the bias while still keeping track of the effect of the bias on the true error distribution. For use in non-linear systems, the observation and state transition models may be linearized around the current mean and covariance estimate, in a method analogous to the extended Kalman filter.

Naming and historical development

Stanley F. Schmidt developed the Schmidt–Kalman filter as a method to account for unobservable biases while maintaining the low dimensionality required for implementation in real-time systems.

See also
• Kalman filter
• Extended Kalman filter

References
1. Schmidt, S. (1966). "Applications of State-space Methods to Navigation Problems". In Leondes, C. (ed.). Advances in Control Systems. Vol. 3. New York, NY: Academic Press. pp. 293–340.
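The measurement update can be sketched in the scalar case. This is an illustrative reconstruction of the standard "consider" (Schmidt) update, not code from the cited reference; the measurement model z = Hx·x + Hb·b + v with noise variance R is an assumption of the example. The gain for the bias is forced to zero, so the bias estimate and its variance Pbb never change, while its correlation with x and its effect on the covariance are retained:

```python
def schmidt_update(xhat, Pxx, Pxb, Pbb, z, R, Hx=1.0, Hb=1.0):
    """One scalar Schmidt-Kalman (consider) measurement update.

    Measurement model: z = Hx*x + Hb*b + v, Var(v) = R, with the
    bias b 'considered' (zero-mean prior, never estimated).
    """
    # Innovation variance includes the bias contribution.
    S = Hx * Pxx * Hx + 2.0 * Hx * Pxb * Hb + Hb * Pbb * Hb + R
    Kx = (Pxx * Hx + Pxb * Hb) / S      # gain for the estimated state
    # Kb = 0 by construction: the bias itself is not updated.
    xhat = xhat + Kx * (z - Hx * xhat)  # bias prior mean assumed zero
    Pxx = Pxx - Kx * (Hx * Pxx + Hb * Pxb)
    Pxb = Pxb - Kx * (Hx * Pxb + Hb * Pbb)
    return xhat, Pxx, Pxb, Pbb          # Pbb is untouched

x, Pxx, Pxb, Pbb = schmidt_update(0.0, 4.0, 0.0, 1.0, z=1.0, R=0.25)
print(Pxx, Pxb, Pbb)  # Pxx shrinks, Pxb goes negative, Pbb stays 1.0
```

Note how the bias variance Pbb inflates the innovation variance S, so the state gain Kx is smaller than a plain Kalman filter would use: the filter "knows" part of the residual is unremovable bias.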
Gyrobifastigium

In geometry, the gyrobifastigium is the 26th Johnson solid (J26). It can be constructed by joining two face-regular triangular prisms along corresponding square faces, giving a quarter-turn to one prism.[1] It is the only Johnson solid that can tile three-dimensional space.[2][3]

Gyrobifastigium (J25 – J26 – J27):
• Type: Johnson solid
• Faces: 4 triangles, 4 squares
• Edges: 14
• Vertices: 8
• Vertex configuration: 4(3.4²), 4(3.4.3.4)
• Symmetry group: D2d
• Dual polyhedron: elongated tetragonal disphenoid
• Properties: convex, honeycomb

It is also the vertex figure of the nonuniform p-q duoantiprism (if p and q are greater than 2). Despite the fact that p = q = 3 would yield a geometrically identical equivalent to the Johnson solid, it lacks a circumscribed sphere that touches all vertices, except for the case p = 5, q = 5/3, which represents a uniform great duoantiprism. Its dual, the elongated tetragonal disphenoid, can be found as cells of the duals of the p-q duoantiprisms.

History and name

A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.[4] The name of the gyrobifastigium comes from the Latin fastigium, meaning a sloping roof.[5] In the standard naming convention of the Johnson solids, bi- means two solids connected at their bases, and gyro- means the two halves are twisted with respect to each other. The gyrobifastigium's place in the list of Johnson solids, immediately before the bicupolas, is explained by viewing it as a digonal gyrobicupola. Just as the other regular cupolas have an alternating sequence of squares and triangles surrounding a single polygon at the top (triangle, square or pentagon), each half of the gyrobifastigium consists of just alternating squares and triangles, connected at the top only by a ridge.
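The construction just described can be checked in coordinates: place the shared unit square in the z = 0 plane, put one ridge at height √3/2 (the height of a unit equilateral triangle) parallel to the y-axis, and the quarter-turned ridge at −√3/2 parallel to the x-axis. A short sketch verifying the vertex and edge counts and the volume (two unit triangular prisms of volume √3/4 each):

```python
import itertools
import math

h = math.sqrt(3) / 2  # height of a unit equilateral triangle

vertices = (
    [(sx * 0.5, sy * 0.5, 0.0) for sx in (-1, 1) for sy in (-1, 1)]  # shared square
    + [(0.0, sy * 0.5, h) for sy in (-1, 1)]    # top ridge, parallel to y
    + [(sx * 0.5, 0.0, -h) for sx in (-1, 1)]   # bottom ridge, quarter-turned
)

# For a unit-edge polyhedron the edges are exactly the vertex pairs at distance 1.
edges = [(a, b) for a, b in itertools.combinations(vertices, 2)
         if math.isclose(math.dist(a, b), 1.0)]

# Volume: two unit triangular prisms, each (sqrt(3)/4) * 1.
volume = 2 * (math.sqrt(3) / 4)

print(len(vertices), len(edges))  # 8 14
print(round(volume, 5))           # 0.86603
```

The counts match the solid's 8 vertices and 14 edges, and the volume agrees with the formula (√3/2)a³ quoted below for edge length a = 1.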
Honeycomb

The gyrated triangular prismatic honeycomb can be constructed by packing together large numbers of identical gyrobifastigiums. The gyrobifastigium is one of five convex polyhedra with regular faces capable of space-filling (the others being the cube, truncated octahedron, triangular prism, and hexagonal prism) and it is the only Johnson solid capable of doing so.[2][3]

Cartesian coordinates

Cartesian coordinates for the gyrobifastigium with regular faces and unit edge lengths may easily be derived from the formula for the height of a unit equilateral triangle, $h={\frac {\sqrt {3}}{2}},$[6] as follows: $\left(\pm {\frac {1}{2}},\pm {\frac {1}{2}},0\right),\left(0,\pm {\frac {1}{2}},{\frac {\sqrt {3}}{2}}\right),\left(\pm {\frac {1}{2}},0,-{\frac {\sqrt {3}}{2}}\right).$ To calculate formulae for the surface area and volume of a gyrobifastigium with regular faces and with edge length a, one may simply adapt the corresponding formulae for the triangular prism:[7] $A=\left(4+{\sqrt {3}}\right)a^{2}\approx 5.73205a^{2},$[8] $V=\left({\frac {\sqrt {3}}{2}}\right)a^{3}\approx 0.86603a^{3}.$[9]

Topologically equivalent polyhedra

Schmitt–Conway–Danzer biprism

The Schmitt–Conway–Danzer biprism (also called a SCD prototile[10]) is a polyhedron topologically equivalent to the gyrobifastigium, but with parallelogram and irregular triangle faces instead of squares and equilateral triangles. Like the gyrobifastigium, it can fill space, but only aperiodically or with a screw symmetry, not with a full three-dimensional group of symmetries.
Thus, it provides a partial solution to the three-dimensional einstein problem.[11][12]

Dual

The dual polyhedron of the gyrobifastigium is the elongated tetragonal disphenoid:
• Type: Johnson dual
• Faces: 8 triangles, 4 parallelograms
• Edges: 14
• Vertices: 8
• Symmetry group: D2d
• Dual polyhedron: gyrobifastigium

This dual has 8 faces: 4 isosceles triangles, corresponding to the valence-3 vertices of the gyrobifastigium, and 4 parallelograms corresponding to the valence-4 equatorial vertices.

See also
• Elongated gyrobifastigium
• Elongated octahedron

References
1. Darling, David (2004), The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes, John Wiley & Sons, p. 169, ISBN 9780471667001.
2. Alam, S. M. Nazrul; Haas, Zygmunt J. (2006), "Coverage and Connectivity in Three-dimensional Networks", Proceedings of the 12th Annual International Conference on Mobile Computing and Networking (MobiCom '06), New York, NY, USA: ACM, pp. 346–357, arXiv:cs/0609069, doi:10.1145/1161089.1161128, ISBN 1-59593-286-0, S2CID 3205780.
3. Kepler, Johannes (2010), The Six-Cornered Snowflake, Paul Dry Books, Footnote 18, p. 146, ISBN 9781589882850.
4. Johnson, Norman W. (1966), "Convex polyhedra with regular faces", Canadian Journal of Mathematics, 18: 169–200, doi:10.4153/cjm-1966-021-8, MR 0185507, Zbl 0132.14603.
5. Rich, Anthony (1875), "Fastigium", in Smith, William (ed.), A Dictionary of Greek and Roman Antiquities, London: John Murray, pp. 523–524.
6. Weisstein, Eric W. "Equilateral Triangle". mathworld.wolfram.com. Retrieved 2020-04-13.
7. Weisstein, Eric W. "Triangular Prism". mathworld.wolfram.com. Retrieved 2020-04-13.
8. Wolfram Research, Inc. (2020). "Wolfram|Alpha Knowledgebase". Champaign, IL. PolyhedronData[{"Johnson", 26}, "SurfaceArea"].
9. Wolfram Research, Inc. (2020). "Wolfram|Alpha Knowledgebase". Champaign, IL. PolyhedronData[{"Johnson", 26}, "Volume"].
10.
Socolar, Joshua E. S.; Taylor, Joan M. (2011), "Forcing Nonperiodicity with a Single Tile".
11. Senechal, Marjorie (1996), "7.2 The SCD (Schmitt–Conway–Danzer) tile", Quasicrystals and Geometry, Cambridge University Press, pp. 209–213, ISBN 9780521575416.
12. "Tiling Space with a Schmitt–Conway Biprism", Wolfram Demonstrations Project.

External links
• Eric W. Weisstein, Gyrobifastigium (Johnson solid) at MathWorld.
Arboricity The arboricity of an undirected graph is the minimum number of forests into which its edges can be partitioned. Equivalently it is the minimum number of spanning forests needed to cover all the edges of the graph. The Nash-Williams theorem provides necessary and sufficient conditions for when a graph is k-arboric. Example The figure shows the complete bipartite graph K4,4, with the colors indicating a partition of its edges into three forests. K4,4 cannot be partitioned into fewer forests, because any forest on its eight vertices has at most seven edges, while the overall graph has sixteen edges, more than double the number of edges in a single forest. Therefore, the arboricity of K4,4 is three. Arboricity as a measure of density Main article: Nash-Williams theorem The arboricity of a graph is a measure of how dense the graph is: graphs with many edges have high arboricity, and graphs with high arboricity must have a dense subgraph. In more detail, as any n-vertex forest has at most n-1 edges, the arboricity of a graph with n vertices and m edges is at least $\lceil m/(n-1)\rceil $. Additionally, the subgraphs of any graph cannot have arboricity larger than the graph itself, or equivalently the arboricity of a graph must be at least the maximum arboricity of any of its subgraphs. Nash-Williams proved that these two facts can be combined to characterize arboricity: if we let nS and mS denote the number of vertices and edges, respectively, of any subgraph S of the given graph, then the arboricity of the graph equals $\max _{S}\{\lceil m_{S}/(n_{S}-1)\rceil \}.$ Any planar graph with $n$ vertices has at most $3n-6$ edges, from which it follows by Nash-Williams' formula that planar graphs have arboricity at most three. Schnyder used a special decomposition of a planar graph into three forests called a Schnyder wood to find a straight-line embedding of any planar graph into a grid of small area. 
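The K4,4 argument above is just the Nash-Williams lower bound ⌈m/(n−1)⌉ applied to the whole graph (which, for a complete bipartite graph, attains the maximum in the formula). A direct sketch:

```python
import math

def nash_williams_lower_bound(n: int, m: int) -> int:
    """Lower bound ceil(m / (n - 1)) on the arboricity of a graph with
    n vertices and m edges: any forest on n vertices has at most n - 1
    edges, so at least this many forests are needed to cover m edges."""
    return math.ceil(m / (n - 1))

# Complete bipartite graph K_{4,4}: 8 vertices, 16 edges.
print(nash_williams_lower_bound(4 + 4, 4 * 4))  # 3, the arboricity of K4,4
```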
Algorithms

The arboricity of a graph can be expressed as a special case of a more general matroid partitioning problem,[1] in which one wishes to express a set of elements of a matroid as a union of a small number of independent sets. As a consequence, the arboricity can be calculated by a polynomial-time algorithm (Gabow & Westermann 1992). The current best exact algorithm computes the arboricity in $O(m{\sqrt {m}})$ time, where $m$ is the number of edges in the graph. Approximations to the arboricity of a graph can be computed faster: there are linear-time 2-approximation algorithms,[2][3] and a near-linear-time algorithm with an additive error of 2.[4]

Related concepts

The anarboricity of a graph is the maximum number of edge-disjoint nonacyclic subgraphs into which the edges of the graph can be partitioned.

The star arboricity of a graph is the minimum number of star forests (forests in which every tree is a star, i.e. a tree with at most one non-leaf node) into which the edges of the graph can be partitioned. If a tree is not a star itself, its star arboricity is two, as can be seen by partitioning the edges into two subsets at odd and even distances from the tree root respectively. Therefore, the star arboricity of any graph is at least equal to the arboricity, and at most equal to twice the arboricity.

The linear arboricity of a graph is the minimum number of linear forests (collections of paths) into which the edges of the graph can be partitioned. The linear arboricity of a graph is closely related to its maximum degree and its slope number.

The pseudoarboricity of a graph is the minimum number of pseudoforests into which its edges can be partitioned. Equivalently, it is the maximum ratio of edges to vertices in any subgraph of the graph, rounded up to an integer. As with the arboricity, the pseudoarboricity has a matroid structure allowing it to be computed efficiently (Gabow & Westermann 1992).

The subgraph density of a graph is the density of its densest subgraph.
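The odd/even-distance argument for the star arboricity of a tree can be checked on a small example. The sketch below (with illustrative names) assigns the edge between depths d and d+1 to class d mod 2, then verifies that each class is a star forest, using the fact that a set of edges is a star forest exactly when no edge has both endpoints of degree ≥ 2 within the set:

```python
from collections import defaultdict

# A small rooted tree given as child -> parent (root is 0).
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 5, 7: 5, 8: 6}

def depth(v):
    d = 0
    while v in parent:
        v = parent[v]
        d += 1
    return d

# Assign each edge (parent[v], v) to a class by the depth of its upper end.
classes = defaultdict(list)
for v, p in parent.items():
    classes[depth(p) % 2].append((p, v))

def is_star_forest(edge_set):
    deg = defaultdict(int)
    for u, v in edge_set:
        deg[u] += 1
        deg[v] += 1
    # A star forest has no edge whose endpoints both have degree >= 2.
    return all(deg[u] < 2 or deg[v] < 2 for u, v in edge_set)

print(all(is_star_forest(e) for e in classes.values()))  # True
```

Each class consists of stars centered at the even-depth (respectively odd-depth) vertices, which is exactly the two-subset partition described above.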
The thickness of a graph is the minimum number of planar subgraphs into which its edges can be partitioned. As any planar graph has arboricity at most three, the thickness of any graph is at least equal to a third of the arboricity, and at most equal to the arboricity. The degeneracy of a graph is the maximum, over all induced subgraphs of the graph, of the minimum degree of a vertex in the subgraph. The degeneracy of a graph with arboricity $a$ is at least equal to $a$, and at most equal to $2a-1$. The coloring number of a graph, also known as its Szekeres-Wilf number (Szekeres & Wilf 1968) is always equal to its degeneracy plus 1 (Jensen & Toft 1995, p. 77f.). The strength of a graph is a fractional value whose integer part gives the maximum number of disjoint spanning trees that can be drawn in a graph. It is the packing problem that is dual to the covering problem raised by the arboricity. The two parameters have been studied together by Tutte and Nash-Williams. The fractional arboricity is a refinement of the arboricity, as it is defined for a graph $G$ as $\max\{m_{S}/(n_{S}-1)\mid S\subseteq G\}.$ In other terms, the arboricity of a graph is the ceiling of the fractional arboricity. The (a,b)-decomposability generalizes the arboricity. A graph is $(a,b)$-decomposable if its edges can be partitioned into $a+1$ sets, each inducing a forest except one, which induces a graph with maximum degree $b$. A graph with arboricity $a$ is $(a,0)$-decomposable. The tree number is the minimal number of trees covering the edges of a graph. Special appearances Arboricity appears in the Goldberg–Seymour conjecture. References • Alon, N. (1988). "The linear arboricity of graphs". Israel Journal of Mathematics. 62 (3): 311–325. doi:10.1007/BF02783300. MR 0955135. • Chen, B.; Matsumoto, M.; Wang, J.; Zhang, Z.; Zhang, J. (1994). "A short proof of Nash-Williams' theorem for the arboricity of a graph". Graphs and Combinatorics. 10 (1): 27–28. doi:10.1007/BF01202467. MR 1273008. 
• Erdős, P.; Hajnal, A. (1966). "On chromatic number of graphs and set-systems". Acta Mathematica Hungarica. 17 (1–2): 61–99. doi:10.1007/BF02020444. MR 0193025. • Gabow, H. N.; Westermann, H. H. (1992). "Forests, frames, and games: Algorithms for matroid sums and applications". Algorithmica. 7 (1): 465–497. doi:10.1007/BF01758774. MR 1154585. • Hakimi, S. L.; Mitchem, J.; Schmeichel, E. E. (1996). "Star arboricity of graphs". Discrete Mathematics. 149 (1–3): 93–98. doi:10.1016/0012-365X(94)00313-8. MR 1375101. • Jensen, T. R.; Toft, B. (1995). Graph Coloring Problems. New York: Wiley-Interscience. ISBN 0-471-02865-7. MR 1304254. • C. St. J. A. Nash-Williams (1961). "Edge-disjoint spanning trees of finite graphs". Journal of the London Mathematical Society. 36 (1): 445–450. doi:10.1112/jlms/s1-36.1.445. MR 0133253. • C. St. J. A. Nash-Williams (1964). "Decomposition of finite graphs into forests". Journal of the London Mathematical Society. 39 (1): 12. doi:10.1112/jlms/s1-39.1.12. MR 0161333. • W. Schnyder (1990). "Embedding planar graphs on the grid". Proc. 1st ACM/SIAM Symposium on Discrete Algorithms (SODA). pp. 138–148. • Szekeres, G.; Wilf, H. S. (1968). "An inequality for the chromatic number of a graph". Journal of Combinatorial Theory. 4: 1–3. doi:10.1016/s0021-9800(68)80081-x. MR 0218269. • Tutte, W. T. (1961). "On the problem of decomposing a graph into n connected factors". Journal of the London Mathematical Society. 36 (1): 221–230. doi:10.1112/jlms/s1-36.1.221. MR 0140438. 1. Edmonds, Jack (1965), "Minimum partition of a matroid into independent subsets", Journal of Research of the National Bureau of Standards Section B, 69B: 67, doi:10.6028/jres.069B.004 2. Eppstein, David (1994), "Arboricity and bipartite subgraph listing algorithms", Inf. Process. Lett., 51 (4): 207–211, doi:10.1016/0020-0190(94)90121-X 3. Arikati, Srinivasa Rao; Maheshwari, Anil; Zaroliagis, Christos D. 
(1997), "Efficient computation of implicit representations of sparse graphs", Discrete Appl. Math., 78 (1–3): 1–16, doi:10.1016/S0166-218X(97)00007-3 4. Blumenstock, Markus; Fischer, Frank (2020), "A constructive arboricity approximation scheme", 46th International Conference on Current Trends in Theory and Practice of Informatics
Wikipedia
Schoch line In geometry, the Schoch line is a line defined from an arbelos and named by Peter Woo after Thomas Schoch, who had studied it in conjunction with the Schoch circles. Construction An arbelos is a shape bounded by three mutually-tangent semicircular arcs with collinear endpoints, with the two smaller arcs nested inside the larger one; let the endpoints of these three arcs be (in order along the line containing them) A, B, and C. Let K1 and K2 be two more arcs, centered at A and C, respectively, with radii AB and CB, so that these two arcs are tangent at B; let K3 be the largest of the three arcs of the arbelos. A circle, with the center A1, is then created tangent to the arcs K1, K2, and K3. This circle is congruent to Archimedes' twin circles, making it an Archimedean circle; it is one of the Schoch circles. The Schoch line is perpendicular to the line AC and passes through the point A1. It is also the locus of the centers of infinitely many Archimedean circles, e.g. the Woo circles.[1] Radius and center of A1 If r = AB/AC, and AC = 1, then the radius of A1 is $\rho ={\frac {1}{2}}r\left(1-r\right)$ and the center is $\left({\frac {1}{2}}r\left(-1+3r-2r^{2}\right)~,~r\left(1-r\right){\sqrt {\left(1+r\right)\left(2-r\right)}}\right).$[1] References 1. Dodge, Clayton W.; Schoch, Thomas; Woo, Peter Y.; Yiu, Paul (1999), "Those ubiquitous Archimedean circles" (PDF), Mathematics Magazine, 72 (3): 202–213, doi:10.2307/2690883, JSTOR 2690883, MR 1706441. Further reading • Okumura, Hiroshi; Watanabe, Masayuki (2004), "The Archimedean circles of Schoch and Woo" (PDF), Forum Geometricorum, 4: 27–34, MR 2057752. External links • van Lamoen, Floor. "Schoch Line". From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. Retrieved 2008-04-11.
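The stated radius and center can be checked numerically against the three tangency conditions. The sketch below assumes the coordinates are taken with B at the origin, so that A = (−r, 0), C = (1 − r, 0), and K3 is centered at the midpoint of AC with radius 1/2 (the helper names are ours):

```python
from math import sqrt, isclose

def schoch_circle(r):
    """Radius and center of the circle tangent to K1, K2, K3,
    with B at the origin and AC = 1."""
    rho = 0.5 * r * (1 - r)
    center = (0.5 * r * (-1 + 3 * r - 2 * r ** 2),
              r * (1 - r) * sqrt((1 + r) * (2 - r)))
    return rho, center

def dist(p, q):
    return sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

r = 0.3
rho, c = schoch_circle(r)
A, C, K3 = (-r, 0.0), (1 - r, 0.0), ((1 - 2 * r) / 2, 0.0)
assert isclose(dist(c, A), r + rho)        # externally tangent to K1
assert isclose(dist(c, C), (1 - r) + rho)  # externally tangent to K2
assert isclose(dist(c, K3), 0.5 - rho)     # internally tangent to K3
```

The three assertions encode external tangency to K1 and K2 and internal tangency to K3, which together determine the circle.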
Schoen–Yau conjecture In mathematics, the Schoen–Yau conjecture is a disproved conjecture in hyperbolic geometry, named after the mathematicians Richard Schoen and Shing-Tung Yau. It was inspired by a theorem of Erhard Heinz (1952). One method of disproof is the use of Scherk surfaces, as used by Harold Rosenberg and Pascal Collin (2006). Setting and statement of the conjecture Let $\mathbb {C} $ be the complex plane considered as a Riemannian manifold with its usual (flat) Riemannian metric. Let $\mathbb {H} $ denote the hyperbolic plane, i.e. the unit disc $\mathbb {H} :=\{(x,y)\in \mathbb {R} ^{2}|x^{2}+y^{2}<1\}$ endowed with the hyperbolic metric $\mathrm {d} s^{2}=4{\frac {\mathrm {d} x^{2}+\mathrm {d} y^{2}}{(1-(x^{2}+y^{2}))^{2}}}.$ E. Heinz proved in 1952 that there can exist no harmonic diffeomorphism $f:\mathbb {H} \to \mathbb {C} .\,$ In light of this theorem, Schoen conjectured that there exists no harmonic diffeomorphism $g:\mathbb {C} \to \mathbb {H} .\,$ (It is not clear how Yau's name became associated with the conjecture: in unpublished correspondence with Harold Rosenberg, both Schoen and Yau identify Schoen as having postulated the conjecture.) The Schoen(–Yau) conjecture has since been disproved. Comments The emphasis is on the existence or non-existence of a harmonic diffeomorphism, and on the fact that this property is a "one-way" property. In more detail: suppose that we consider two Riemannian manifolds M and N (with their respective metrics), and write $M\sim N\,$ if there exists a diffeomorphism from M onto N (in the usual terminology, M and N are diffeomorphic). Write $M\propto N$ if there exists a harmonic diffeomorphism from M onto N. It is not difficult to show that $\sim $ (being diffeomorphic) is an equivalence relation on the objects of the category of Riemannian manifolds. 
In particular, $\sim $ is a symmetric relation: $M\sim N\iff N\sim M.$ It can be shown that the hyperbolic plane and (flat) complex plane are indeed diffeomorphic: $\mathbb {H} \sim \mathbb {C} ,$ so the question is whether or not they are "harmonically diffeomorphic". However, as the truth of Heinz's theorem and the falsity of the Schoen–Yau conjecture demonstrate, $\propto $ is not a symmetric relation: $\mathbb {C} \propto \mathbb {H} {\text{ but }}\mathbb {H} \not \propto \mathbb {C} .$ Thus, being "harmonically diffeomorphic" is a much stronger property than simply being diffeomorphic, and can be a "one-way" relation. References • Heinz, Erhard (1952). "Über die Lösungen der Minimalflächengleichung". Nachr. Akad. Wiss. Göttingen. Math.-Phys. Kl. Math.-Phys.-Chem. Abt. 1952: 51–56. • Collin, Pascal; Rosenberg, Harold (2010). "Construction of harmonic diffeomorphisms and minimal graphs". Ann. of Math. 2. 172 (3): 1879–1906. arXiv:math/0701547. doi:10.4007/annals.2010.172.1879. ISSN 0003-486X. MR 2726102.
Scholz's reciprocity law In mathematics, Scholz's reciprocity law is a reciprocity law for quadratic residue symbols of real quadratic number fields discovered by Theodor Schönemann (1839) and rediscovered by Arnold Scholz (1929). Statement Suppose that p and q are rational primes congruent to 1 mod 4 such that the Legendre symbol (p/q) is 1. Then the ideal (p) factorizes in the ring of integers of Q(√q) as (p)=𝖕𝖕' and similarly (q)=𝖖𝖖' in the ring of integers of Q(√p). Write εp and εq for the fundamental units in these quadratic fields. Then Scholz's reciprocity law says that [εp/𝖖] = [εq/𝖕] where [] is the quadratic residue symbol in a quadratic number field. References • Lemmermeyer, Franz (2000), Reciprocity laws. From Euler to Eisenstein, Springer Monographs in Mathematics, Springer-Verlag, Berlin, ISBN 3-540-66957-4, MR 1761696, Zbl 0949.11002 • Scholz, Arnold (1929), "Zwei Bemerkungen zum Klassenkörperturm.", Journal für die reine und angewandte Mathematik (in German), 161: 201–207, doi:10.1515/crll.1929.161.201, ISSN 0075-4102, JFM 55.0103.06 • Schönemann, Theodor (1839), "Ueber die Congruenz x² + y² ≡ 1 (mod p)", Journal für die reine und angewandte Mathematik, 19: 93–112, doi:10.1515/crll.1839.19.93, ISSN 0075-4102, ERAM 019.0611cj
Peter Scholze Peter Scholze (German pronunciation: [ˈpeːtɐ ˈʃɔlt͡sə]; born 11 December 1987[2]) is a German mathematician known for his work in arithmetic geometry. He has been a professor at the University of Bonn since 2012 and director at the Max Planck Institute for Mathematics since 2018. He has been called one of the leading mathematicians in the world.[3][4][5][6] He won the Fields Medal in 2018, which is regarded as the highest professional honor in mathematics.[7][8][9] Born: 11 December 1987, Dresden, East Germany. Nationality: German. Alma mater: University of Bonn. Known for: introduction of perfectoid spaces and diamonds; prismatic cohomology; condensed mathematics; geometrization of the local Langlands conjectures. Children: 1. Awards: Prix and Cours Peccot (2012); SASTRA Ramanujan Prize (2013); Clay Research Award (2014); Cole Prize (2015); Fermat Prize (2015); Ostrowski Prize (2015); EMS Prize (2016); Leibniz Prize (2016); Berlin-Brandenburg Academy Award (2016); Fields Medal (2018). Fields: mathematics, arithmetic geometry, algebraic geometry, algebraic number theory. Institutions: University of Bonn; Max Planck Institute for Mathematics; Academy of Sciences Leopoldina; University of California, Berkeley; Clay Mathematics Institute. Thesis: Perfectoid Spaces (2011). Doctoral advisor: Michael Rapoport.[1] Early life and education Scholze was born in Dresden and grew up in Berlin.[10] His father is a physicist, his mother a computer scientist, and his sister studied chemistry.[11] He attended the Heinrich-Hertz-Gymnasium in Berlin-Friedrichshain, a gymnasium devoted to mathematics and science.[12] As a student, Scholze participated in the International Mathematical Olympiad, winning three gold medals and one silver medal.[13] He studied at the University of Bonn and completed his bachelor's degree in three semesters and his master's degree in two further semesters.[14] He obtained his Ph.D. 
in 2012 under the supervision of Michael Rapoport.[1] Career From July 2011 until 2016, Scholze was a Research Fellow of the Clay Mathematics Institute in New Hampshire.[15] In 2012, shortly after completing his PhD, he was made full professor at the University of Bonn, becoming at the age of 24 the youngest full professor in Germany.[3][14][16][17] In fall 2014, Scholze was appointed the Chancellor's Professor at University of California, Berkeley, where he taught a course on p-adic geometry.[18][19] In 2018, Scholze was appointed as a director of the Max Planck Institute for Mathematics in Bonn.[20] Work Scholze's work has concentrated on purely local aspects of arithmetic geometry such as p-adic geometry and its applications. He presented in a more compact form some of the previous fundamental theories pioneered by Gerd Faltings, Jean-Marc Fontaine and later by Kiran Kedlaya. His PhD thesis on perfectoid spaces[21] yields the solution to a special case of the weight-monodromy conjecture.[22] Scholze and Bhargav Bhatt have developed a theory of prismatic cohomology, which has been described as progress towards motivic cohomology by unifying singular cohomology, de Rham cohomology, ℓ-adic cohomology, and crystalline cohomology.[23][24] Scholze and Dustin Clausen proposed a program for condensed mathematics—a project to unify various mathematical subfields, including topology, geometry, functional analysis and number theory. 
Awards In 2012, he was awarded the Prix and Cours Peccot.[25] He was awarded the 2013 SASTRA Ramanujan Prize.[26] In 2014, he received the Clay Research Award.[27] In 2015, he was awarded the Frank Nelson Cole Prize in Algebra,[28] and the Ostrowski Prize.[29][30] He received the Fermat Prize 2015 from the Institut de Mathématiques de Toulouse.[31] In 2016, he was awarded the Leibniz Prize 2016 by the German Research Foundation.[32] He declined the $100,000 "New Horizons in Mathematics Prize" of the 2016 Breakthrough Prizes.[33] His turning down of the prize received little media attention.[34] In 2017 he became a member of the German Academy of Sciences Leopoldina.[35] In 2018, at thirty years old, Scholze, who was at the time serving as a mathematics professor at the University of Bonn, became one of the youngest mathematicians ever to be awarded the Fields Medal[36][37] for "transforming arithmetic algebraic geometry over p-adic fields through his introduction of perfectoid spaces, with application to Galois representations, and for the development of new cohomology theories".[38] In 2019, Scholze received the Great Cross of Merit of the Order of Merit of the Federal Republic of Germany.[39][40][41] In 2022 he became a foreign member of the Royal Society[42] and was awarded the Pius XI Medal from the Pontifical Academy of Sciences.[43] Personal life Scholze is married to a fellow mathematician[44] and has a daughter.[45] References 1. Peter Scholze at the Mathematics Genealogy Project. 2. "Prof. Dr. Peter Scholze". Hausdorff Center for Mathematics. Archived from the original on 26 April 2017. Retrieved 1 August 2018. 3. "Mathematiker Peter Scholze (24) nimmt Ruf nach Bonn an – als jüngster deutscher W3-Professor". Informationsdienst Wissenschaft (in German). 15 October 2012. Retrieved 1 August 2018. 4. "Leibniz Prize 2016: Professor Dr. Peter Scholze". Deutsche Forschungsgemeinschaft. Retrieved 1 August 2018. 5. Klarreich, Erica (1 August 2018). 
"A Master of Numbers and Shapes Who Is Rewriting Arithmetic". Quanta Magazine. Retrieved 2 August 2018. 6. Kaschel, Helena (23 July 2016). "Don't call me a prodigy: the rising stars of European mathematics". Deutsche Welle. Retrieved 2 August 2018. 7. Ball, Philip (12 August 2014). "Iranian is first woman to nab highest prize in maths". Nature. doi:10.1038/nature.2014.15686. S2CID 180573813. Retrieved 4 November 2018. 8. "Fields Medal". School of Mathematics and Statistics – University of St Andrews, Scotland. Retrieved 29 March 2018. 9. "Fields Medal". The University of Chicago. Retrieved 29 March 2018. 10. "Zwei Forscher der Uni Bonn erhalten den Leibniz-Preis" (in German). Rheinische Friedrich-Wilhelms-Universität Bonn. 10 December 2015. Retrieved 1 August 2018. Peter Scholze wurde 1987 in Dresden geboren und wuchs auf in Berlin. 11. Centre International de Rencontres Mathématiques (29 June 2015). "Interview at CIRM: Peter Scholze". YouTube. Retrieved 23 December 2018. 12. "Mit ihm kann man rechnen". Der Tagesspiegel (in German). 3 August 2005. Retrieved 1 August 2018. 13. Dolinar, Gregor. "International Mathematical Olympiad". International Mathematical Olympiad. Retrieved 1 August 2018. 14. "Mathe-Genie: 24-Jähriger wird Deutschlands jüngster Professor". Spiegel Online (in German). 16 October 2012. Retrieved 1 August 2018. 15. "Peter Scholze". Clay Mathematics Institute. Retrieved 1 August 2018. 16. "Peter Scholze ist erst 24 Jahre alt: Mathe-Genie wird Deutschlands jüngster Professor". Bild (in German). 15 October 2012. Retrieved 1 August 2018. 17. Harmsen, Torsten (15 October 2012). "Hochschule: Mathematikgenie aus Berlin". Berliner Zeitung (in German). Retrieved 1 August 2018. 18. "Peter Scholze". Department of Mathematics at University of California Berkeley. Retrieved 1 August 2018. 19. Scholze, Peter; Weinstein, Jared (October 2018). "Berkeley lectures on p-adic geometry" (PDF). University of Bonn. Retrieved 4 November 2018. 20. 
"Peter Scholze new director at the Max Planck Institute for Mathematics". Max Planck Institute for Mathematics. 2 July 2018. Retrieved 11 July 2018. 21. Scholze, Peter (November 2012). "Perfectoid Spaces". Publications Mathématiques de l'IHÉS. 116 (1): 245–313. doi:10.1007/s10240-012-0042-x. S2CID 15227588. 22. Scholze, Peter. "Perfectoid spaces: A survey" (PDF). University of Bonn. Retrieved 4 November 2018. 23. Sury, B. (2019). "ICM Awards 2018". Resonance. 24 (5): 597–605. doi:10.1007/s12045-019-0813-5. ISSN 0971-8044. S2CID 199675280. 24. Tao, Terence (19 March 2019). "Prismatic cohomology". Terence Tao's blog. Retrieved 21 March 2021. 25. "LISTE CHRONOLOGIQUE DES INTITULES DES COURS PECCOT DEPUIS 1899" (PDF) (in French). College de France. 26. "Peter Scholze to receive 2013 Sastra Ramanujan Prize". Shanmugha Arts, Science, Technology & Research Academy. Retrieved 1 August 2018. 27. "2014 Clay Research Awards". Clay Mathematics Institute. 14 July 2014. Retrieved 1 August 2018. 28. "Peter Scholze to Receive 2015 AMS Cole Prize in Algebra". American Mathematical Society. 4 December 2014. Retrieved 1 August 2018. 29. "The prize and the prize winners". Ostrowski Foundation. Retrieved 1 August 2018. 30. "Ostrowski Prize 2015: Peter Scholze" (PDF). Ostrowski Foundation. Retrieved 1 August 2018. 31. "Lauréats du Prix Fermat" (in French). Institut de Mathématiques de Toulouse. Retrieved 1 August 2018. 32. "Leibniz Prizes 2016: DFG Honours Ten Researchers". Deutsche Forschungsgemeinschaft. 10 December 2015. Retrieved 3 January 2016. 33. "2016 Breakthrough Prizes". breakthroughprize.org. Retrieved 15 November 2015. 34. Woit, Peter (9 November 2015). "2016 Breakthrough Prizes". Not Even Wrong. Department of Mathematics at Columbia University. Retrieved 3 August 2018. 35. "Peter Scholze". German Academy of Sciences Leopoldina. Retrieved 26 May 2021. 36. Chang, Kenneth (1 August 2018). "Fields Medals Awarded to 4 Mathematicians". The New York Times. Retrieved 1 August 2018. 
37. Dambeck, Holger (1 August 2018). "Fields-Medaille: Peter Scholze bekommt weltweit höchste Auszeichnung für Mathematiker". Spiegel Online (in German). Retrieved 1 August 2018. 38. "Fields Medals 2018". International Mathematical Union. Retrieved 2 August 2018. 39. "Bundesverdienstkreuz (Great Cross of Merit) for Peter Scholze". Max Planck Institute for Mathematics. 15 October 2019. Retrieved 2 April 2020. 40. "Bekanntgabe vom 1. Oktober 2019" (in German). Bundespräsidialamt. Retrieved 2 April 2020. 41. "Peter Scholze erhält das "Große Verdienstkreuz"" (in German). Hausdorff Center for Mathematics. 2 October 2019. Retrieved 2 April 2020. 42. "Peter Scholze". The Royal Society. 43. "Pius XI-Medaille an Peter Scholze verliehen". University of Bonn (in German). 9 September 2022. 44. Schmundt, Hilmar (23 April 2016). "Bildung: Lieber Mathe als Rockband". Der Spiegel (in German). Retrieved 15 August 2018. 45. Klarreich, Erica (28 June 2016). "The Oracle of Arithmetic". Quanta Magazine. Retrieved 15 August 2018. External links • Prof. Dr. Peter Scholze, University of Bonn (in German) • Prof. Dr. Peter Scholze, Hausdorff Center for Mathematics • Prof. Dr. Peter Scholze, Academy of Sciences Leopoldina (in German) • Peter Scholze's results at International Mathematical Olympiad • Klarreich, Erica (28 June 2016), "The Oracle of Arithmetic", Quanta Magazine ("Peter Scholze And The Future of Arithmetic Geometry") • Scopus preview – Scholze, Peter – Author details – Scopus • Hesse, Michael (16 August 2018), "Interview mit Peter Scholze "Mathematiker brauchen eine hohe Frustrationstoleranz"", Berliner Zeitung • Annual Report 2012: Interview with Research Fellow Peter Scholze (PDF), Clay Mathematics Institute, 2012, pp. 
12–14
Schoof's algorithm Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve. The algorithm was published by René Schoof in 1985 and it was a theoretical breakthrough, as it was the first deterministic polynomial time algorithm for counting points on elliptic curves. Before Schoof's algorithm, approaches to counting points on elliptic curves such as the naive and baby-step giant-step algorithms were, for the most part, tedious and had an exponential running time. This article explains Schoof's approach, laying emphasis on the mathematical ideas underlying the structure of the algorithm. Introduction Let $E$ be an elliptic curve defined over the finite field $\mathbb {F} _{q}$, where $q=p^{n}$ for $p$ a prime and $n$ an integer $\geq 1$. Over a field of characteristic $\neq 2,3$ an elliptic curve can be given by a (short) Weierstrass equation $y^{2}=x^{3}+Ax+B$ with $A,B\in \mathbb {F} _{q}$. The set of points defined over $\mathbb {F} _{q}$ consists of the solutions $(a,b)\in \mathbb {F} _{q}^{2}$ satisfying the curve equation and a point at infinity $O$. Using the group law on elliptic curves restricted to this set one can see that this set $E(\mathbb {F} _{q})$ forms an abelian group, with $O$ acting as the zero element. In order to count points on an elliptic curve, we compute the cardinality of $E(\mathbb {F} _{q})$. Schoof's approach to computing the cardinality $\#E(\mathbb {F} _{q})$ makes use of Hasse's theorem on elliptic curves along with the Chinese remainder theorem and division polynomials. 
Hasse's theorem Main article: Hasse's theorem on elliptic curves Hasse's theorem states that if $E/\mathbb {F} _{q}$ is an elliptic curve over the finite field $\mathbb {F} _{q}$, then $\#E(\mathbb {F} _{q})$ satisfies $\mid q+1-\#E(\mathbb {F} _{q})\mid \leq 2{\sqrt {q}}.$ This powerful result, given by Hasse in 1934, simplifies our problem by narrowing down $\#E(\mathbb {F} _{q})$ to a finite (albeit large) set of possibilities. Defining $t$ to be $q+1-\#E(\mathbb {F} _{q})$, and making use of this result, we now have that computing the value of $t$ modulo $N$ where $N>4{\sqrt {q}}$, is sufficient for determining $t$, and thus $\#E(\mathbb {F} _{q})$. While there is no efficient way to compute $t{\pmod {N}}$ directly for general $N$, it is possible to compute $t{\pmod {l}}$ for $l$ a small prime, rather efficiently. We choose $S=\{l_{1},l_{2},...,l_{r}\}$ to be a set of distinct primes such that $\prod l_{i}=N>4{\sqrt {q}}$. Given $t{\pmod {l_{i}}}$ for all $l_{i}\in S$, the Chinese remainder theorem allows us to compute $t{\pmod {N}}$. In order to compute $t{\pmod {l}}$ for a prime $l\neq p$, we make use of the theory of the Frobenius endomorphism $\phi $ and division polynomials. Note that considering primes $l\neq p$ is no loss, since we can always pick a bigger prime to take its place to ensure the product is big enough. In any case Schoof's algorithm is most frequently used in addressing the case $q=p$, since there are more efficient, so-called $p$-adic algorithms for small-characteristic fields. The Frobenius endomorphism Given the elliptic curve $E$ defined over $\mathbb {F} _{q}$ we consider points on $E$ over ${\bar {\mathbb {F} }}_{q}$, the algebraic closure of $\mathbb {F} _{q}$; i.e. we allow points with coordinates in ${\bar {\mathbb {F} }}_{q}$. The Frobenius endomorphism of ${\bar {\mathbb {F} }}_{q}$ over $\mathbb {F} _{q}$ extends to the elliptic curve by $\phi :(x,y)\mapsto (x^{q},y^{q})$. 
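The prime-selection and Chinese-remainder recombination just described can be sketched in a few lines (the helper names are ours; the per-prime residues of $t$ come from the computations described in the following sections):

```python
from math import isqrt

def choose_primes(q, p):
    """Smallest odd primes (excluding the characteristic p) whose
    product N exceeds 4*sqrt(q)."""
    primes, N, n = [], 1, 3
    while N * N <= 16 * q:          # N <= 4*sqrt(q)  <=>  N^2 <= 16*q
        if n != p and all(n % d for d in range(2, isqrt(n) + 1)):
            primes.append(n)
            N *= n
        n += 2
    return primes, N

def crt(residues, moduli):
    """x mod prod(moduli) with x = r_i (mod m_i), moduli pairwise coprime."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        x += M * ((r - x) * pow(M, -1, m) % m)
        M *= m
    return x % M

def recover_t(x, N):
    """Lift t mod N to the integer t, using |t| <= 2*sqrt(q) < N/2."""
    return x if x <= N // 2 else x - N
```

Since $|t|\leq 2{\sqrt {q}}<N/2$, the residue of $t$ modulo $N$ determines $t$ itself, which is what `recover_t` exploits.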
This map is the identity on $E(\mathbb {F} _{q})$ and one can extend it to the point at infinity $O$, making it a group morphism from $E({\bar {\mathbb {F} _{q}}})$ to itself. The Frobenius endomorphism satisfies a quadratic polynomial which is linked to the cardinality of $E(\mathbb {F} _{q})$ by the following theorem: Theorem: The Frobenius endomorphism given by $\phi $ satisfies the characteristic equation $\phi ^{2}-t\phi +q=0,$ where $t=q+1-\#E(\mathbb {F} _{q})$ Thus we have for all $P=(x,y)\in E$ that $(x^{q^{2}},y^{q^{2}})+q(x,y)=t(x^{q},y^{q})$, where + denotes addition on the elliptic curve and $q(x,y)$ and $t(x^{q},y^{q})$ denote scalar multiplication of $(x,y)$ by $q$ and of $(x^{q},y^{q})$ by $t$. One could try to symbolically compute these points $(x^{q^{2}},y^{q^{2}})$, $(x^{q},y^{q})$ and $q(x,y)$ as functions in the coordinate ring $\mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B)$ of $E$ and then search for a value of $t$ which satisfies the equation. However, the degrees get very large and this approach is impractical. Schoof's idea was to carry out this computation restricted to points of order $l$ for various small primes $l$. Fixing an odd prime $l$, we now move on to solving the problem of determining $t_{l}$, defined as $t{\pmod {l}}$, for a given prime $l\neq 2,p$. If a point $(x,y)$ is in the $l$-torsion subgroup $E[l]=\{P\in E({\bar {\mathbb {F} _{q}}})\mid lP=O\}$, then $qP={\bar {q}}P$ where ${\bar {q}}$ is the unique integer such that $q\equiv {\bar {q}}{\pmod {l}}$ and $\mid {\bar {q}}\mid <l/2$. Note that $\phi (O)=O$ and that for any integer $r$ we have $r\phi (P)=\phi (rP)$. Thus $\phi (P)$ will have the same order as $P$. Thus for $(x,y)$ belonging to $E[l]$, we also have $t(x^{q},y^{q})={\bar {t}}(x^{q},y^{q})$ if $t\equiv {\bar {t}}{\pmod {l}}$. 
Hence we have reduced our problem to solving the equation $(x^{q^{2}},y^{q^{2}})+{\bar {q}}(x,y)\equiv {\bar {t}}(x^{q},y^{q}),$ where ${\bar {t}}$ and ${\bar {q}}$ have integer values in $[-(l-1)/2,(l-1)/2]$. Computation modulo primes The lth division polynomial is such that its roots are precisely the x coordinates of points of order l. Thus, to restrict the computation of $(x^{q^{2}},y^{q^{2}})+{\bar {q}}(x,y)$ to the l-torsion points means computing these expressions as functions in the coordinate ring of E and modulo the lth division polynomial. I.e. we are working in $\mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B,\psi _{l})$. This means in particular that the degree of X and Y defined via $(X(x,y),Y(x,y)):=(x^{q^{2}},y^{q^{2}})+{\bar {q}}(x,y)$ is at most 1 in y and at most $(l^{2}-3)/2$ in x. The scalar multiplication ${\bar {q}}(x,y)$ can be done either by double-and-add methods or by using the ${\bar {q}}$th division polynomial. The latter approach gives: ${\bar {q}}(x,y)=(x_{\bar {q}},y_{\bar {q}})=\left(x-{\frac {\psi _{{\bar {q}}-1}\psi _{{\bar {q}}+1}}{\psi _{\bar {q}}^{2}}},{\frac {\psi _{2{\bar {q}}}}{2\psi _{\bar {q}}^{4}}}\right)$ where $\psi _{n}$ is the nth division polynomial. Note that $y_{\bar {q}}/y$ is a function in x only and denote it by $\theta (x)$. We must split the problem into two cases: the case in which $(x^{q^{2}},y^{q^{2}})\neq \pm {\bar {q}}(x,y)$, and the case in which $(x^{q^{2}},y^{q^{2}})=\pm {\bar {q}}(x,y)$. Note that these equalities are checked modulo $\psi _{l}$. Case 1: $(x^{q^{2}},y^{q^{2}})\neq \pm {\bar {q}}(x,y)$ By using the addition formula for the group $E(\mathbb {F} _{q})$ we obtain: $X(x,y)=\left({\frac {y^{q^{2}}-y_{\bar {q}}}{x^{q^{2}}-x_{\bar {q}}}}\right)^{2}-x^{q^{2}}-x_{\bar {q}}.$ Note that this computation fails in case the assumption of inequality was wrong. We are now able to use the x-coordinate to narrow down the choice of ${\bar {t}}$ to two possibilities, namely the positive and negative case. 
Using the y-coordinate one later determines which of the two cases holds. We first show that X is a function in x alone. Consider $(y^{q^{2}}-y_{\bar {q}})^{2}=y^{2}(y^{q^{2}-1}-y_{\bar {q}}/y)^{2}$. Since $q^{2}-1$ is even, replacing $y^{2}$ by $x^{3}+Ax+B$ gives $(y^{q^{2}}-y_{\bar {q}})^{2}=(x^{3}+Ax+B)\left((x^{3}+Ax+B)^{\frac {q^{2}-1}{2}}-\theta (x)\right)^{2},$ which is a function of x alone; since $x^{q^{2}}$ and $x_{\bar {q}}$ are also functions of x alone, so is $X$, computed modulo $\psi _{l}(x)$. Now if $X\equiv x_{\bar {t}}^{q}{\bmod {\psi }}_{l}(x)$ for one ${\bar {t}}\in [0,(l-1)/2]$ then ${\bar {t}}$ satisfies $\phi ^{2}(P)\mp {\bar {t}}\phi (P)+{\bar {q}}P=O$ for all l-torsion points P. As mentioned earlier, using Y and $y_{\bar {t}}^{q}$ we are now able to determine which of the two values of ${\bar {t}}$ (${\bar {t}}$ or $-{\bar {t}}$) works. This gives the value of $t\equiv {\bar {t}}{\pmod {l}}$. Schoof's algorithm stores the values of ${\bar {t}}{\pmod {l}}$ in a variable $t_{l}$ for each prime l considered. Case 2: $(x^{q^{2}},y^{q^{2}})=\pm {\bar {q}}(x,y)$ We begin with the assumption that $(x^{q^{2}},y^{q^{2}})={\bar {q}}(x,y)$. Since l is an odd prime it cannot be that ${\bar {q}}(x,y)=-{\bar {q}}(x,y)$ and thus ${\bar {t}}\neq 0$. The characteristic equation yields that ${\bar {t}}\phi (P)=2{\bar {q}}P$, and consequently that ${\bar {t}}^{2}{\bar {q}}\equiv (2q)^{2}{\pmod {l}}$. This implies that q is a square modulo l. Let $q\equiv w^{2}{\pmod {l}}$. Compute $w\phi (x,y)$ in $\mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B,\psi _{l})$ and check whether ${\bar {q}}(x,y)=w\phi (x,y)$. If so, $t_{l}$ is $\pm 2w{\pmod {l}}$ depending on the y-coordinate. If q turns out not to be a square modulo l or if the equation does not hold for either of w and $-w$, our assumption that $(x^{q^{2}},y^{q^{2}})=+{\bar {q}}(x,y)$ is false, and thus $(x^{q^{2}},y^{q^{2}})=-{\bar {q}}(x,y)$. The characteristic equation then gives $t_{l}=0$. 
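For concreteness, the affine group law and the double-and-add scalar multiplication used for ${\bar {q}}(x,y)$ can be sketched over a prime field, here applied to actual points rather than symbolically in the coordinate ring (`None` stands for the point at infinity $O$; the curve and point below are our own toy example):

```python
def ec_add(P, Q, A, p):
    """Affine addition on y^2 = x^3 + A*x + B over F_p; None is O."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                          # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, A, p):
    """Double-and-add computation of k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, A, p)
        P = ec_add(P, P, A, p)
        k >>= 1
    return R

# Toy example: E: y^2 = x^3 + 2x + 3 over F_7 has 6 points; P = (2, 1)
# generates the group, so 3*P = (6, 0) has order 2 and 6*P = O.
print(ec_mul(3, (2, 1), 2, 7), ec_mul(6, (2, 1), 2, 7))  # (6, 0) None
```

In Schoof's algorithm proper these formulas are evaluated with the coordinates as residues modulo $\psi _{l}$ rather than as field elements, but the group law is the same.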
Additional case $l=2$ Recall that our initial considerations omitted the case $l=2$. Since we assume q to be odd, $q+1-t\equiv t{\pmod {2}}$ and in particular, $t_{2}\equiv 0{\pmod {2}}$ if and only if $E(\mathbb {F} _{q})$ has an element of order 2. By definition of addition in the group, any element of order 2 must be of the form $(x_{0},0)$. Thus $t_{2}\equiv 0{\pmod {2}}$ if and only if the polynomial $x^{3}+Ax+B$ has a root in $\mathbb {F} _{q}$, which holds if and only if $\gcd(x^{q}-x,x^{3}+Ax+B)\neq 1$. The algorithm Input: 1. An elliptic curve $E:y^{2}=x^{3}+Ax+B$. 2. A prime power $q=p^{b}$, $b\geq 1$, defining the finite field $\mathbb {F} _{q}$. Output: The number of points of E over $\mathbb {F} _{q}$.
1. Choose a set of odd primes S not containing p such that $N=\prod _{l\in S}l>4{\sqrt {q}}$.
2. Put $t_{2}=0$ if $\gcd(x^{q}-x,x^{3}+Ax+B)\neq 1$, else $t_{2}=1$.
3. For $l\in S$ do (first computing the division polynomial $\psi _{l}$; all computations below are performed in the ring $\mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B,\psi _{l})$):
   (a) Let ${\bar {q}}$ be the unique integer such that $q\equiv {\bar {q}}{\pmod {l}}$ and $|{\bar {q}}|<l/2$.
   (b) Compute $(x^{q},y^{q})$, $(x^{q^{2}},y^{q^{2}})$ and $(x_{\bar {q}},y_{\bar {q}})$.
   (c) If $x^{q^{2}}\neq x_{\bar {q}}$, compute $(X,Y)$ and, for $1\leq {\bar {t}}\leq (l-1)/2$: if $X=x_{\bar {t}}^{q}$, then set $t_{l}={\bar {t}}$ if $Y=y_{\bar {t}}^{q}$, else $t_{l}=-{\bar {t}}$.
   (d) Else, if q is a square modulo l, compute w with $q\equiv w^{2}{\pmod {l}}$ and compute $w(x^{q},y^{q})$; if $w(x^{q},y^{q})=(x^{q^{2}},y^{q^{2}})$ then $t_{l}=2w$; else if $w(x^{q},y^{q})=(x^{q^{2}},-y^{q^{2}})$ then $t_{l}=-2w$; else $t_{l}=0$.
   (e) Else $t_{l}=0$.
4. Use the Chinese Remainder Theorem to compute t modulo N from the congruences $t\equiv t_{l}{\pmod {l}}$, $l\in S$.
5. Output $q+1-t$.
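The outer steps of the algorithm (the l = 2 test, the Hasse bound, and the CRT recombination) can be exercised on a toy example. The Python sketch below is our own illustration; the curve, the prime set and all function names are our choices, and the residues t mod l are taken from a brute-force point count rather than from the Frobenius computation, so it demonstrates only the framing of Schoof's algorithm, not its core.

```python
def count_points(A, B, p):
    """Brute-force #E(F_p) for E: y^2 = x^3 + Ax + B, p an odd prime."""
    n = 1                                      # the point at infinity
    for x in range(p):
        rhs = (x ** 3 + A * x + B) % p
        if rhs == 0:
            n += 1                             # the single point (x, 0)
        elif pow(rhs, (p - 1) // 2, p) == 1:
            n += 2                             # rhs a square: points (x, +/-y)
    return n

def crt_recover_t(residues, q):
    """Recover t from t mod l (l in S) via CRT, centered into the Hasse window."""
    N, t = 1, 0
    for l, tl in residues.items():
        while t % l != tl % l:                 # incremental CRT (fine at toy sizes)
            t += N
        N *= l
    assert N * N > 16 * q                      # the condition prod l > 4*sqrt(q)
    return t - N if t > N // 2 else t          # centered lift: |t| <= 2*sqrt(q) < N/2

q, A, B = 23, 1, 1                             # toy curve (our choice)
t = q + 1 - count_points(A, B, q)
assert t * t <= 4 * q                          # Hasse bound |t| <= 2*sqrt(q)

# the l = 2 step: t is even iff x^3 + Ax + B has a root in F_q
has_root = any((x ** 3 + A * x + B) % q == 0 for x in range(q))
assert (t % 2 == 0) == has_root

# CRT recombination from the residues t mod l, l in S = {3, 5, 7}
residues = {l: t % l for l in (3, 5, 7)}
assert crt_recover_t(residues, q) == t
```

The point of the centered lift is that N > 4√q forces the Hasse window [−2√q, 2√q] to contain exactly one residue class modulo N, so t is determined uniquely.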
Complexity Most of the computation is taken by the evaluation of $\phi (P)$ and $\phi ^{2}(P)$, for each prime $l$, that is computing $x^{q}$, $y^{q}$, $x^{q^{2}}$, $y^{q^{2}}$ for each prime $l$. This involves exponentiation in the ring $R=\mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B,\psi _{l})$ and requires $O(\log q)$ multiplications. Since the degree of $\psi _{l}$ is ${\frac {l^{2}-1}{2}}$, each element in the ring is a polynomial of degree $O(l^{2})$. By the prime number theorem, there are around $O(\log q)$ primes of size $O(\log q)$, giving that $l$ is $O(\log q)$ and we obtain that $O(l^{2})=O(\log ^{2}q)$. Thus each multiplication in the ring $R$ requires $O(\log ^{4}q)$ multiplications in $\mathbb {F} _{q}$ which in turn requires $O(\log ^{2}q)$ bit operations. In total, the number of bit operations for each prime $l$ is $O(\log ^{7}q)$. Given that this computation needs to be carried out for each of the $O(\log q)$ primes, the total complexity of Schoof's algorithm turns out to be $O(\log ^{8}q)$. Using fast polynomial and integer arithmetic reduces this to ${\tilde {O}}(\log ^{5}q)$. Improvements to Schoof's algorithm Main article: Schoof–Elkies–Atkin algorithm In the 1990s, Noam Elkies, followed by A. O. L. Atkin, devised improvements to Schoof's basic algorithm by restricting the set of primes $S=\{l_{1},\ldots ,l_{s}\}$ considered before to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime $l$ is called an Elkies prime if the characteristic equation: $\phi ^{2}-t\phi +q=0$ splits over $\mathbb {F} _{l}$, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. 
In order to do so, we make use of modular polynomials, which come from the study of modular forms and an interpretation of elliptic curves over the complex numbers as lattices. Once we have determined which case we are in, instead of using division polynomials, we are able to work with a polynomial that has lower degree than the corresponding division polynomial: $O(l)$ rather than $O(l^{2})$. For efficient implementation, probabilistic root-finding algorithms are used, which makes this a Las Vegas algorithm rather than a deterministic algorithm. Under the heuristic assumption that approximately half of the primes up to an $O(\log q)$ bound are Elkies primes, this yields an algorithm that is more efficient than Schoof's, with an expected running time of $O(\log ^{6}q)$ using naive arithmetic, and ${\tilde {O}}(\log ^{4}q)$ using fast arithmetic. Although this heuristic assumption is known to hold for most elliptic curves, it is not known to hold in every case, even under the GRH. Implementations Several algorithms were implemented in C++ by Mike Scott and are available with source code. The implementations are free (no terms, no conditions), and make use of the MIRACL library which is distributed under the AGPLv3. • Schoof's algorithm implementation for $E(\mathbb {F} _{p})$ with prime $p$. • Schoof's algorithm implementation for $E(\mathbb {F} _{2^{m}})$. See also • Elliptic curve cryptography • Counting points on elliptic curves • Division Polynomials • Frobenius endomorphism References • R. Schoof: Elliptic Curves over Finite Fields and the Computation of Square Roots mod p. Math. Comp., 44(170):483–494, 1985. Available at http://www.mat.uniroma2.it/~schoof/ctpts.pdf • R. Schoof: Counting Points on Elliptic Curves over Finite Fields. J. Theor. Nombres Bordeaux 7:219–254, 1995. Available at http://www.mat.uniroma2.it/~schoof/ctg.pdf • G. Musiker: Schoof's Algorithm for Counting Points on $E(\mathbb {F} _{q})$. 
Available at http://www.math.umn.edu/~musiker/schoof.pdf • V. Müller: Die Berechnung der Punktanzahl von elliptischen Kurven über endlichen Primkörpern. Master's Thesis. Universität des Saarlandes, Saarbrücken, 1991. Available at http://lecturer.ukdw.ac.id/vmueller/publications.php • A. Enge: Elliptic Curves and their Applications to Cryptography: An Introduction. Kluwer Academic Publishers, Dordrecht, 1999. • L. C. Washington: Elliptic Curves: Number Theory and Cryptography. Chapman & Hall/CRC, New York, 2003. • N. Koblitz: A Course in Number Theory and Cryptography, Graduate Texts in Mathematics, No. 114, Springer-Verlag, 1987. Second edition, 1994.
Wikipedia
School Mathematics Project The School Mathematics Project arose in the United Kingdom as part of the new mathematics educational movement of the 1960s.[1] It is a developer of mathematics textbooks for secondary schools, formerly based in Southampton in the UK. Now generally known as SMP, it began as a research project inspired by a 1961 conference chaired by Bryan Thwaites at the University of Southampton, which itself was precipitated by calls to reform mathematics teaching in the wake of the Sputnik launch by the Soviet Union, the same circumstances that prompted the wider New Math movement. It maintained close ties with the former Collaborative Group for Research in Mathematics Education at the university. Instead of dwelling on 'traditional' areas such as arithmetic and geometry, SMP dwelt on subjects such as set theory, graph theory and logic, non-cartesian co-ordinate systems, matrix mathematics, affine transforms, Euclidean vectors, and non-decimal number systems. Course books SMP, Book 1 This was published in 1965. It was aimed at entry-level pupils at secondary school, and was the first book in a series of four preparing pupils for the Elementary Mathematics Examination at 'O' level.[2] SMP, Book 3 The computer paper tape motif on early educational material reads "THE SCHOOL MATHEMATICS PROJECT DIRECTED BY BRYAN THWAITES". The code for this tape is introduced in Book 3 as part of the notional computer system described there. Simpol programming language The Simpol language was devised by The School Mathematics Project[3] in the 1960s to introduce secondary pupils (typically aged 13) to what was then the novel concept of computer programming.
It runs on the fictitious Simon computer. An interpreter for the Simpol language (that will run on a present-day PC) can be downloaded from the University of Southampton, at their SMP 2.0 website.[4] References 1. Walmsley, Angela Lynn Evans (2003). A History of the "new Mathematics" Movement and Its Relationship with Current Mathematical Reform. University Press of America. p. 60. ISBN 978-0-7618-2512-8. 2. "Book reviews" (PDF). Cambridge Core. Cambridge University. Retrieved 4 January 2021. 3. School Mathematics Project (SMP) Book 3 [Metric]. Cambridge University Press. 1970. p. 248. 4. University of Southampton, Simpol. External links • Manning, Godfrey (n.d.). "The simpol interpreter, manual and simpol code can be downloaded here". Simpol – Towards a School Mathematics Project 2.0. Southampton Education School; University of Southampton.
Department of Mathematics, University of Manchester The Department of Mathematics at the University of Manchester is one of the largest unified mathematics departments in the United Kingdom, with over 90 academic staff and an undergraduate intake of roughly 400 students per year (including students studying mathematics with a minor in another subject) and approximately 200 postgraduate students in total.[1][2] The School of Mathematics was formed in 2004 by the merger of the mathematics departments of the University of Manchester Institute of Science and Technology (UMIST) and the Victoria University of Manchester (VUM). In July 2007 the department moved into a purpose-designed building (the first three floors of the Alan Turing Building) on Upper Brook Street. In a Faculty restructure in 2019 the School of Mathematics reverted to the Department of Mathematics. It is one of five departments that make up the School of Natural Sciences, which together with the School of Engineering now constitutes the Faculty of Science and Engineering at Manchester. Organization The current head of the department is Andrew Hazel. The department is divided into three groups: Pure Mathematics (Head: Charles Eaton), Applied Mathematics (Head: David Sylvester), and Probability and Statistics (Head: Korbinian Strimmer). The director of research is William Parnell. The Manchester Institute for Mathematical Sciences (MIMS) is a unit of the department focusing on organising mathematical colloquia and conferences and on hosting research visitors. MIMS is headed by Nick Higham. Other high-profile mathematicians at Manchester include Martin Taylor and Jeff Paris. Since its formation, the department has made some influential appointments, including the topologist Viktor Buchstaber and the model theorist Alex Wilkie. The numerical analyst Jack Dongarra, one of the authors of LINPACK, was appointed in 2007 as Turing Fellow. In the autumn of 2007, Albert Shiryaev was appointed to a 20% chair.
Shiryaev is known for his work on probability theory (he was a student of Andrey Kolmogorov) and for his work on financial mathematics. Research As might be expected from its size (about 30 academic staff in Probability & Statistics, 30 in Pure Mathematics and 45 in Applied Mathematics), the department has a wide range of research interests, including the traditionally pure areas of algebra, analysis, noncommutative geometry, ergodic theory, mathematical logic, number theory, geometry and topology; and the more applied dynamical systems, fluid dynamics, solid mechanics, inverse problems, mathematical finance, wave propagation and scattering. The department also has a strong tradition in numerical analysis and well-established groups in probability theory and mathematical statistics. Manchester mathematicians have a long tradition of applying mathematics to industrial problems. Nowadays this involves not only the traditional applications in engineering and the physical sciences, but also applications in the life sciences and the financial sector. Some of the recent industrial partners include Qinetiq, Hewlett Packard, NAG, MathWorks, Comsol, Philips Labs, Thales Underwater Systems, Rapiscan Systems and Schlumberger. Research Assessment Exercise (2008) The Department of Mathematics entered research into three units of assessment. In Pure Mathematics 20% of submissions from 27 FTE category A staff were rated 4* (World Class) and 40% 3* (Internationally Excellent). In Applied Mathematics 25% of submissions from 28.8 FTE category A staff were rated 4* and 35% 3*. And in Statistics and Operational Research, 20% of submissions from 10.9 FTE category A staff were rated 4* and 35% 3*.[3] History At the time of merger the two departments that came together to form the school were of roughly equal sizes and academic strengths, and already had a substantial record of collaboration, including shared research seminar programmes and fourth-year undergraduate and MSc programmes.
Many famous mathematicians have worked at the precursor departments to the department. In 1885 Horace Lamb, famous for his contributions to fluid dynamics, accepted a chair at the VUM, and under his leadership the department grew rapidly. Newman wrote: 'His lecture courses were numerous, and his books provide a record of his methods. Many of his students were engineers, and they found in him a sympathetic guide, one who understood their difficulties and shared their interest in applications of mathematics to mechanics.' In 1907 the famous analyst and number theorist John Edensor Littlewood was appointed to the Richardson Lectureship, which he held for three years. During 1912–1913 the pioneer of weather forecasting and numerical analysis Lewis Fry Richardson worked at Manchester College of Science and Technology (later to become UMIST). The number theorist Louis J. Mordell joined the College in 1920. During this time he discovered the result for which he is best known, namely the finite basis theorem (or Mordell–Weil theorem), which proved a conjecture of Henri Poincaré. Mordell then went on to become Fielden Reader in Pure Mathematics at VUM in 1922 and then held the Fielden Chair in 1923. Mordell built up the department, offering posts to a number of outstanding mathematicians who had been forced from posts on the continent of Europe. He brought in Reinhold Baer, G. Billing, Paul Erdős, Chao Ko, Kurt Mahler, and Beniamino Segre. He also recruited J. A. Todd, Patrick du Val, Harold Davenport, L. C. Young, and invited distinguished visitors. Although Manchester was later to be known as the birthplace of the electronic computer, Douglas Hartree made an earlier contribution, building a differential analyser in 1933. The machine was used for ballistics calculations as well as for calculating railway timetables.
Mordell was succeeded in 1945 by the famous topologist and cryptanalyst Max Newman, who, as head of department, transformed it into a centre of international renown.[4] Undergraduate numbers increased from eight per year to 40 and then 60. In 1948 Newman recruited Alan Turing as Reader in the department, and he worked there until his death in 1954, completing some of his profound work on the foundations of computer science, including Computing Machinery and Intelligence. Newman retired in 1964. From 1949 to 1960 M. S. Bartlett held the first chair in mathematical statistics at VUM; he is known for his contributions to the analysis of data with spatial and temporal patterns, the theory of statistical inference, and multivariate analysis. At Manchester he developed an interest in epidemiology, building a strong group in mathematical statistics and strengthening the department. The fluid dynamicist Sydney Goldstein held the Beyer Chair of Applied Mathematics from 1945 to 1950, and was succeeded from 1950 to 1959 by James Lighthill, also a fluid dynamicist. In pure mathematics, Bernhard Neumann, an influential group theorist, joined the department at VUM in 1948, leaving as a Reader in 1961 to take a chair in Australia. In 1969, VUM's Mathematics Tower, an 18-storey skyscraper on Oxford Road, was completed. Up until the 1950s, UMIST's Mathematics Department taught largely service courses for the engineering and applied science courses, and despite stars such as Richardson, Mordell and, in 1958–1963, the group theorist Hanna Neumann, did not have a strong focus on research. Neumann was later to be the first woman appointed to a Professorial Chair of Mathematics in Australia. With the rapid expansion of higher education and the starting of an undergraduate mathematics degree this changed, and by 1968 the 15-storey Maths and Social Sciences Building (MSS) was completed on the UMIST campus to house the growing department.
In 1960 Robin Bullough joined the UMIST department, initiating four decades of mathematical physics focusing especially on solitons. The statistics group also grew in strength, with an emphasis on time series, led by Maurice Priestley and Tata Subba Rao. In 1986 pure mathematics at UMIST was strengthened by the appointment of Martin J. Taylor FRS, famous for his work on the properties and structures of algebraic numbers. Another renowned topologist, Frank Adams, succeeded Newman in the Fielden Chair, which he held from 1964 to 1970. The VUM Mathematics Tower was demolished in 2005, with most of the staff moving to temporary buildings, the pure mathematicians to one named after Newman and the applied to one named after Lamb. The history of the department entered a new phase in July 2007 with the move to the Alan Turing Building. The department was known as the School of Mathematics until a 2019 faculty-wide restructuring.[5] In 2013, the Sir Horace Lamb Chair was founded in memory of Sir Horace Lamb.[6] The chair was inaugurated in May 2013 with the appointment of Professor Oliver Jensen, who already held a personal chair in the school. See also • Mathematics section in People Associated with the University of Manchester • Richardson Professor of Applied Mathematics • Fielden Professor of Pure Mathematics • Beyer Professor of Applied Mathematics References and notes 1. Certainly the Faculty of Mathematics, University of Cambridge is larger. Exact figures for Cambridge are hard to come by, as the faculty is divided into DPMMS and DAMTP (which includes some physicists). In the 2001 RAE Cambridge returned 60 applied mathematicians and 38 pure mathematicians as lecturers and professors. By any measure Cambridge is bigger. Oxford's 2001 RAE return "HERO - Higher Education & Research Opportunities in the UK: RAE 2001 : Submissions". Archived from the original on 3 December 2008. Retrieved 15 April 2007.
lists 43 pure, 32 applied and 12 statisticians, making it slightly larger, and the size may have increased. Probably the next biggest after Manchester is Leeds, with about 70 academic staff over pure, applied and statistics. 2. League Tables of UK Mathematics Departments gives more details of size. 3. "Research Assessment Exercise 2008". Retrieved 27 January 2012. 4. Walter Ledermann, Encounters of a Mathematician, 2009. ISBN 978-1-4092-8267-9. 5. "FAQs - Structure". Archived from the original on 17 November 2019. Retrieved 17 November 2019. 6. "New Chair to honour Mathematics pioneer Sir Horace Lamb". The University of Manchester. Retrieved 31 May 2013. External links • Department of Mathematics home page. • VUM Mathematics Tower on syskcrapernews.com • MSS building syskcrapernews.com
Prime form In algebraic geometry, the Schottky–Klein prime form E(x,y) of a compact Riemann surface X depends on two elements x and y of X, and vanishes if and only if x = y. The prime form E is not quite a holomorphic function on X × X, but is a section of a holomorphic line bundle over this space. Prime forms were introduced by Friedrich Schottky and Felix Klein. Prime forms can be used to construct meromorphic functions on X with given poles and zeros. If $\sum n_{i}a_{i}$ is a divisor linearly equivalent to 0, then $\prod E(x,a_{i})^{n_{i}}$ is a meromorphic function with the given poles and zeros. See also • Fay's trisecant identity References • Fay, John D. (1973), "The prime form", Theta functions on Riemann surfaces, Lecture Notes in Mathematics, vol. 352, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0060090, ISBN 978-3-540-06517-3, MR 0335789 • Baker, Henry Frederick (1995) [1897], Abelian functions, Cambridge Mathematical Library, Cambridge University Press, ISBN 978-0-521-49877-7, MR 1386644 • Mumford, David (1984), Tata lectures on theta. II, Progress in Mathematics, vol. 43, Boston, MA: Birkhäuser Boston, doi:10.1007/978-0-8176-4578-6, ISBN 978-0-8176-3110-9, MR 0742776
Schottky group In mathematics, a Schottky group is a special sort of Kleinian group, first studied by Friedrich Schottky (1877). Definition Fix some point p on the Riemann sphere. Each Jordan curve not passing through p divides the Riemann sphere into two pieces; we call the piece containing p the "exterior" of the curve, and the other piece its "interior". Suppose there are 2g disjoint Jordan curves A1, B1,..., Ag, Bg in the Riemann sphere with disjoint interiors. If there are Möbius transformations Ti taking the exterior of Ai onto the interior of Bi, then the group generated by these transformations is a Kleinian group. A Schottky group is any Kleinian group that can be constructed in this way. Properties By work of Maskit (1967), a finitely generated Kleinian group is Schottky if and only if it is free, has nonempty domain of discontinuity, and all its non-trivial elements are loxodromic. A fundamental domain for the action of a Schottky group G on its regular points Ω(G) in the Riemann sphere is given by the exterior of the Jordan curves defining it. The corresponding quotient space Ω(G)/G is given by joining up the Jordan curves in pairs, so it is a compact Riemann surface of genus g. This is the boundary of the 3-manifold obtained by taking the quotient (H∪Ω(G))/G of 3-dimensional hyperbolic space H plus the regular set Ω(G) by the Schottky group G, which is a handlebody of genus g. Conversely, any compact Riemann surface of genus g can be obtained from some Schottky group of genus g. Classical and non-classical Schottky groups A Schottky group is called classical if all the disjoint Jordan curves corresponding to some set of generators can be chosen to be circles. Marden (1974, 1977) gave an indirect and non-constructive proof of the existence of non-classical Schottky groups, and Yamamoto (1991) gave an explicit example of one.
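The circle-pairing definition can be made concrete with a small numerical experiment. In the Python sketch below (our own toy construction, not taken from the references cited here), each generator has the form z ↦ c' + r²/(z − c), which maps the exterior of the circle |z − c| = r onto the interior of |z − c'| = r. With four pairwise disjoint circles, the ping-pong argument shows the two generators produce a classical Schottky group of genus 2, and images of a base point under long reduced words fall inside the four defining disks.

```python
import random

R = 1.0
PAIRS = [(-6, 6), (-2, 2)]            # paired circle centers, all circles radius 1

def make_map(c_src, c_dst):
    # z -> c_dst + R^2/(z - c_src): exterior of |z-c_src|=R -> interior of |z-c_dst|=R
    return lambda z: c_dst + R * R / (z - c_src)

# generators and inverses, labeled (index, sign) so reduced words can be formed
gens = {}
for i, (a, b) in enumerate(PAIRS):
    gens[(i, +1)] = make_map(a, b)    # g_{i+1}
    gens[(i, -1)] = make_map(b, a)    # g_{i+1}^{-1}

centers = [c for pair in PAIRS for c in pair]

def in_some_disk(z, eps=1e-9):
    return any(abs(z - c) <= R + eps for c in centers)

# 1) pairing property: points outside the circle |z+6|=1 map strictly inside |z-6|=1
for z in (0.0, 3 + 1j, -2.5, 10j):
    assert abs(gens[(0, +1)](z) - 6) < R

# 2) images of a base point under long reduced words lie in the four disks,
#    approximating points of the limit set
random.seed(1)
for _ in range(200):
    word, last = [], None
    while len(word) < 12:
        s = random.choice(list(gens))
        if last is not None and s == (last[0], -last[1]):
            continue                  # keep the word reduced (no g g^{-1})
        word.append(s); last = s
    z = 0.0                           # base point in the common exterior
    for s in reversed(word):          # apply the rightmost letter first
        z = gens[s](z)
    assert in_some_disk(z)
```

Plotting the final values of z for many random words would render the familiar Cantor-dust picture of the limit set of a genus-2 classical Schottky group.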
It has been shown by Doyle (1988) that all finitely generated classical Schottky groups have limit sets of Hausdorff dimension bounded above strictly by a universal constant less than 2. Conversely, Hou (2010) proved that there exists a universal lower bound on the Hausdorff dimension of the limit sets of all non-classical Schottky groups. Limit sets of Schottky groups The limit set of a Schottky group, the complement of Ω(G), always has Lebesgue measure zero, but can have positive d-dimensional Hausdorff measure for d < 2. It is perfect and nowhere dense, with positive logarithmic capacity. The statement on Lebesgue measure follows for classical Schottky groups from the existence of the Poincaré series $\displaystyle {P(z)=\sum (c_{i}z+d_{i})^{-4}.}$ Poincaré showed that the series $\sum |c_{i}|^{-4}$ is summable over the non-identity elements of the group. In fact, taking a closed disk in the interior of the fundamental domain, its images under different group elements are disjoint and contained in a fixed disk about 0, so the sum of the areas is finite. By the change of variables formula, each such area is greater than a constant times $|c_{i}|^{-4}$.[1] A similar argument implies that the limit set has Lebesgue measure zero.[2] For the limit set is contained in the complement of the union of the images of the fundamental region under group elements of word length bounded by n; this complement is a finite union of disks, so it has finite area. That area is bounded above by a constant times the contribution to the Poincaré sum of elements of word length n, so it decreases to 0. Schottky space Schottky space (of some genus g ≥ 2) is the space of marked Schottky groups of genus g, in other words the space of sets of g elements of PSL2(C) that generate a Schottky group, up to equivalence under Möbius transformations (Bers 1975). It is a complex manifold of complex dimension 3g−3. It contains classical Schottky space as the subset corresponding to classical Schottky groups.
Schottky space of genus g is not simply connected in general, but its universal covering space can be identified with Teichmüller space of compact genus g Riemann surfaces. See also • Beltrami equation • Riley slice Notes 1. Lehner 1964, p. 159 2. Akaza 1964 References • Akaza, Tohru (1964), "Poincaré theta series and singular sets of Schottky groups", Nagoya Math. J., 24: 43–65 • Bers, Lipman (1975), "Automorphic forms for Schottky groups", Advances in Mathematics, 16: 332–361, doi:10.1016/0001-8708(75)90117-6, ISSN 0001-8708, MR 0377044 • Chuckrow, Vicki (1968), "On Schottky groups with applications to kleinian groups", Annals of Mathematics, Second Series, 88: 47–61, doi:10.2307/1970555, ISSN 0003-486X, JSTOR 1970555, MR 0227403 • Doyle, Peter (1988), "On the bass note of a Schottky group", Acta Mathematica, 160: 249–284, doi:10.1007/bf02392277, MR 0945013 • Fricke, Robert; Klein, Felix (1897), Vorlesungen über die Theorie der automorphen Functionen. Erster Band; Die gruppentheoretischen Grundlagen. (in German), Leipzig: B. G. Teubner, ISBN 978-1-4297-0551-6, JFM 28.0334.01 • Fricke, Robert; Klein, Felix (1912), Vorlesungen über die Theorie der automorphen Functionen. Zweiter Band: Die funktionentheoretischen Ausführungen und die Anwendungen. 1. Lieferung: Engere Theorie der automorphen Funktionen. (in German), Leipzig: B. G. 
Teubner., ISBN 978-1-4297-0552-3, JFM 32.0430.01 • Gilman, Jane, A Survey of Schottky Groups (PDF) • Hou, Yong (2010), "Kleinian groups of small Hausdorff dimension are classical Schottky groups I", Geometry & Topology, 14: 473–519, arXiv:math/0610458, doi:10.2140/gt.2010.14.473 • Hou, Yong, All finitely generated Kleinian groups of small Hausdorff dimension are classical Schottky groups, arXiv:1307.2677, Bibcode:2013arXiv1307.2677H • Jørgensen, T.; Marden, A.; Maskit, Bernard (1979), "The boundary of classical Schottky space", Duke Mathematical Journal, 46 (2): 441–446, doi:10.1215/s0012-7094-79-04619-2, ISSN 0012-7094, MR 0534060 • Lehner, Joseph (1964), Discontinuous Groups and Automorphic Functions, Mathematical Surveys and Monographs, vol. 8, American Mathematical Society, ISBN 0-8218-1508-3 • Marden, Albert (1974), "The geometry of finitely generated kleinian groups", Annals of Mathematics, Second Series, 99: 383–462, doi:10.2307/1971059, ISSN 0003-486X, JSTOR 1971059, MR 0349992, Zbl 0282.30014 • Marden, A. (1977), "Geometrically finite Kleinian groups and their deformation spaces", in Harvey, W. J. (ed.), Discrete groups and automorphic functions (Proc. Conf., Cambridge, 1975), Boston, MA: Academic Press, pp. 259–293, ISBN 978-0-12-329950-5, MR 0494117 • Maskit, Bernard (1967), "A characterization of Schottky groups", Journal d'Analyse Mathématique, 19: 227–230, doi:10.1007/BF02788719, ISSN 0021-7670, MR 0220929 • Maskit, Bernard (1988), Kleinian groups, Grundlehren der Mathematischen Wissenschaften, vol. 287, Berlin, New York: Springer-Verlag, ISBN 978-3-540-17746-3, MR 0959135 • David Mumford, Caroline Series, and David Wright, Indra's Pearls: The Vision of Felix Klein, Cambridge University Press, 2002 ISBN 0-521-35253-3 • Schottky, F. 
(1877), "Ueber die conforme Abbildung mehrfach zusammenhängender ebener Flächen", Journal für die reine und angewandte Mathematik, 83: 300–351, doi:10.1515/crll.1877.83.300, ISSN 0075-4102 • Yamamoto, Hiro-o (1991), "An example of a nonclassical Schottky group", Duke Mathematical Journal, 63 (1): 193–197, doi:10.1215/S0012-7094-91-06308-8, ISSN 0012-7094, MR 1106942 External links • Three transformations generating a Schottky group from (Fricke & Klein 1897, p. 442).
Schottky's theorem

In mathematical complex analysis, Schottky's theorem, introduced by Schottky (1904), is a quantitative version of Picard's theorem. It states that for a holomorphic function f in the open unit disk that does not take the values 0 or 1, the value of |f(z)| can be bounded in terms of z and f(0). Schottky's original theorem did not give an explicit bound for f. Ostrowski (1931, 1933) gave some weak explicit bounds. Ahlfors (1938, theorem B) gave a strong explicit bound, showing that if f is holomorphic in the open unit disk and does not take the values 0 or 1, then

$\log |f(z)|\leq {\frac {1+|z|}{1-|z|}}\left(7+\max(0,\log |f(0)|)\right).$

Several authors, such as Jenkins (1955), have given variations of Ahlfors's bound with better constants; in particular, Hempel (1980) gave some bounds whose constants are in some sense the best possible.

References

• Ahlfors, Lars V. (1938), "An Extension of Schwarz's Lemma", Transactions of the American Mathematical Society, 43 (3): 359–364, doi:10.2307/1990065, ISSN 0002-9947, JSTOR 1990065 • Hempel, Joachim A. (1980), "Precise bounds in the theorems of Schottky and Picard", Journal of the London Mathematical Society, 21 (2): 279–286, doi:10.1112/jlms/s2-21.2.279, ISSN 0024-6107, MR 0575385 • Jenkins, J. A. (1955), "On explicit bounds in Schottky's theorem", Canadian Journal of Mathematics, 7: 76–82, doi:10.4153/CJM-1955-010-4, ISSN 0008-414X, MR 0066460 • Ostrowski, A. M. (1931), Studien über den schottkyschen satz, Basel, B. Wepf & cie. • Ostrowski, Alexander (1933), "Asymptotische Abschätzung des absoluten Betrages einer Funktion, die die Werte 0 und 1 nicht annimmt", Commentarii Mathematici Helvetici, 5: 55–87, doi:10.1007/bf01297506, ISSN 0010-2571, S2CID 119852055 • Schottky, F. (1904), "Über den Picardschen Satz und die Borelschen Ungleichungen", Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin: 1244–1263
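Ahlfors's bound is simple to evaluate. The helper below (the function name is hypothetical, for illustration) computes the right-hand side of the inequality above.

```python
import math

def ahlfors_schottky_bound(abs_z, abs_f0):
    """Upper bound on log|f(z)| from Ahlfors (1938, theorem B), valid for f
    holomorphic on the open unit disk and omitting the values 0 and 1."""
    assert 0.0 <= abs_z < 1.0, "z must lie in the open unit disk"
    return (1.0 + abs_z) / (1.0 - abs_z) * (7.0 + max(0.0, math.log(abs_f0)))

# At z = 0 with |f(0)| <= 1 the bound reduces to the constant 7.
print(ahlfors_schottky_bound(0.0, 1.0))   # 7.0
```

Note that the bound grows like (1 + |z|)/(1 − |z|), so it blows up as z approaches the boundary of the disk, as any such bound must.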
Abacus

"Abaci" and "Abacuses" redirect here. For the Turkish surname, see Abacı. For the medieval book, see Liber Abaci.

The abacus (pl.: abaci or abacuses), also called a counting frame, is a hand-operated calculating tool of unknown origin used since ancient times in the ancient Near East, Europe, China, and Russia, millennia before the adoption of the Hindu-Arabic numeral system.[1]

The abacus consists of multiple columns of slidable beads (or similar objects). In the earliest designs, the columns of beads could be loose on a flat surface or could slide in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Each column typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some designs allow simple fractional components (e.g. 1⁄2, 1⁄4, and 1⁄12 in the Roman abacus), and a decimal point can be imagined for fixed-point arithmetic.

Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or used as the starting number for subsequent operations).

In the ancient world, abacuses were a practical calculating tool. Although calculators and computers are commonly used today instead, abacuses remain in everyday use in some countries. The abacus has the advantage of requiring neither a writing implement and paper (needed for algorism) nor an electric power source.
Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus because of a visual impairment that prevents the use of a calculator.[1] The abacus is still used to teach the fundamentals of mathematics to children in most countries.

Etymology

The word abacus dates to at least AD 1387, when a Middle English work borrowed from Latin a word describing a sandboard abacus. The Latin word is derived from the ancient Greek ἄβαξ (abax), which means something without a base, and colloquially, any piece of rectangular material.[2][3][4] Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust"[5] or "drawing-board covered with dust (for the use of mathematics)"[6] (the exact shape of the Latin perhaps reflects the genitive form of the Greek word, ἄβακoς (abakos)). While the "table strewn with dust" definition is popular, some argue the evidence is insufficient for that conclusion.[7][nb 1] Greek ἄβαξ was probably borrowed from a Northwest Semitic language such as Phoenician, as evidenced by a cognate with the Hebrew word ʾābāq (אבק‎), "dust" (in the post-Biblical sense "sand used as a writing surface").[8]

Both abacuses[9] and abaci[9] are used as plurals. The user of an abacus is called an abacist.[10]

History

Mesopotamia

The Sumerian abacus appeared between 2700 and 2300 BC.
It held a table of successive columns which delimited the successive orders of magnitude of the Sumerians' sexagesimal (base 60) number system.[11] Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus.[12] Scholars of Old Babylonian mathematics,[13] such as Ettore Carruccio, believe that the Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".[14]

Egypt

The Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument have yet to be discovered.[15]

Persia

Around 600 BC, Persians first began to use the abacus, during the Achaemenid Empire.[16] Under the Parthian, Sassanian, and later Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them (India, China, and the Roman Empire), which is how the abacus may have been exported to other countries.

Greece

The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC.[17] Demosthenes (384 BC–322 BC) complained that the need to use pebbles for calculations was too difficult.[18][19] A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, stating "that men that sometimes stood for more and sometimes for less", like the pebbles on an abacus.[19] The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal, used for mathematical calculations.
This Greek abacus saw use in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution.

A tablet found on the Greek island of Salamis in 1846 (the Salamis Tablet) dates to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble 149 cm (59 in) in length, 75 cm (30 in) wide, and 4.5 cm (2 in) thick, on which are 5 groups of markings. In the tablet's center is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line.[20] Also from this time frame, the Darius Vase was unearthed in 1851. It was covered with pictures, including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other.[18]

Rome

Main article: Roman abacus

The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (Latin: calculi) were used. Marked lines indicated units, fives, tens, etc., as in the Roman numeral system. Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.[21]

One example of archaeological evidence of the Roman abacus, shown nearby in reconstruction, dates to the 1st century AD. It has eight long grooves containing up to five beads each and eight shorter grooves having either one or no beads each. The groove marked I indicates units, X tens, and so on up to millions.
The beads in the shorter grooves denote fives (five units, five tens, etc.), resembling a bi-quinary coded decimal system related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e. fractions).

Medieval Europe

The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century.[22] Wealthy abacists used decorative minted counters, called jetons. Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century.[23][24] It used beads on wires, unlike the traditional Roman counting boards, which meant the abacus could be used much faster and was more easily moved.[25]

China

Main article: Suanpan

(Traditional Chinese: 算盤; Simplified Chinese: 算盘; literally "calculating tray". Transcriptions: Mandarin pinyin suànpán, IPA [swân.pʰǎn]; Cantonese Yale syun-pùhn, Jyutping syun3-pun4, IPA [syːn˧pʰuːn˧˥]; Hokkien POJ sǹg-pôaⁿ, Tâi-lô sǹg-puânn.)

The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.[26] The Chinese abacus, also known as the suanpan (算盤/算盘, lit. "calculating tray"), comes in various lengths and widths, depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads on each rod in the bottom one, representing numbers in a bi-quinary coded decimal-like system. The beads are usually rounded and made of hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not.[27] Each top bead is worth 5, and each bottom bead is worth 1. Each rod has a number under it, showing the place value. The suanpan can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center.
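The bi-quinary encoding of a digit on a rod can be sketched as follows (a toy model with hypothetical helper names, not an account of actual finger technique): a digit d is shown with d // 5 upper-deck beads (worth 5 each) and d % 5 lower-deck beads (worth 1 each) pushed toward the beam.

```python
def to_suanpan(n):
    """Encode a non-negative integer as per-rod (heaven, earth) bead counts,
    most significant rod first; each digit d = 5*heaven + 1*earth."""
    return [(int(d) // 5, int(d) % 5) for d in str(n)]

def from_suanpan(rods):
    """Read the number back off the rods."""
    return int("".join(str(5 * h + e) for h, e in rods))

print(to_suanpan(1984))   # [(0, 1), (1, 4), (1, 3), (0, 4)]
assert from_suanpan(to_suanpan(1984)) == 1984
```

Only one upper and four lower beads are needed for decimal digits; the suanpan's extra beads (two upper, five lower) accommodate intermediate carries during calculation and computation in bases larger than ten.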
The prototype of the Chinese abacus appeared during the Han Dynasty, and its beads were oval. The Song Dynasty and earlier used the 1:4 type, a four-bead abacus similar to the modern abacus, including in the shape of its beads, commonly known as the Japanese-style abacus. In the early Ming Dynasty, the abacus began to appear in a 1:5 ratio: the upper deck had one bead and the bottom had five beads.[28] In the late Ming Dynasty, abacus styles appeared in a 2:5 ratio:[28] the upper deck had two beads, and the bottom had five. Various calculation techniques were devised for the suanpan, enabling efficient calculations. Some schools teach students how to use it.

In the long scroll Along the River During the Qingming Festival, painted by Zhang Zeduan during the Song dynasty (960–1279), a suanpan is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao).

The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China. However, no direct connection has been demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese ones) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2. Incidentally, this allows use with a hexadecimal numeral system (or any base up to 18), which may have been used for traditional Chinese measures of weight. (Instead of running on wires as in the Chinese, Korean, and Japanese models, the Roman model used grooves, presumably making arithmetic calculations much slower.)

Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a placeholder.
The zero was probably introduced to the Chinese in the Tang dynasty (618–907), when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians.

India

The Abhidharmakośabhāṣya of Vasubandhu (316–396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one, while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus.[29] Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus.[30]

Japan

Main article: Soroban

In Japan, the abacus is called soroban (算盤, そろばん, lit. "counting tray"). It was imported from China in the 14th century.[31] It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes.[32] The 1:4 abacus, which removes the seldom-used second and fifth beads, became popular in the 1940s.

Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It has one bead on the upper deck and four beads on the bottom. The top bead on the upper deck is equal to five, and the bottom beads are similar to those of the Chinese or Korean abacus; decimal numbers can be expressed, so the abacus is designed as a 1:4 device. The beads are always in the shape of a diamond. Quotient division is generally used instead of the traditional division method, so that multiplication and division can be performed with consistent digit manipulations.
Later, Japan had a 3:5 abacus called 天三算盤, which is now in the Ize Rongji collection of Shansi Village in Yamagata City. Japan also used a 2:5 type abacus. The four-bead abacus spread and became common around the world. Improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads was used. The file is next to the four beads, and pressing the "clearing" button puts the upper bead in the upper position and the lower bead in the lower position.

The abacus is still manufactured in Japan, even with the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery of a soroban, one can complete a calculation as quickly as with a physical instrument.[33]

Korea

The Chinese abacus migrated from China to Korea around 1400 AD.[18][34][35] Koreans call it jupan (주판), supan (수판) or jusan (주산).[36] The four-bead abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty.

Native America

Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture.[37] This Mesoamerican abacus used a 5-digit base-20 system.[38] The word Nepōhualtzintzin (Nahuatl pronunciation: [nepoːwaɬˈt͡sint͡sin]) comes from Nahuatl, formed by the roots: Ne, personal; pōhual or pōhualli (Nahuatl pronunciation: [ˈpoːwalːi]), the account; and tzintzin (Nahuatl pronunciation: [ˈt͡sint͡sin]), small similar elements. Its complete meaning was taken as: counting with small similar elements. Its use was taught in the Calmecac to the temalpouhqueh (Nahuatl pronunciation: [temaɬˈpoʍkeʔ]), students dedicated from childhood to taking the accounts of the skies.

The Nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. In the left part were four beads.
Beads in the first row have unitary values (1, 2, 3, and 4), and on the right side, three beads had values of 5, 10, and 15, respectively. To know the value of the respective beads of the upper rows, it is enough to multiply by 20 (for each row) the value of the corresponding count in the first row.

The device featured 13 rows with 7 beads each, 91 in total. This was a basic number for this culture, with a close relation to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two Nepōhualtzintzin (182) the number of days of the corn's cycle from sowing to harvest, three Nepōhualtzintzin (273) the number of days of a baby's gestation, and four Nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the Nepōhualtzintzin amounted to a range from 10 to the 18th power in floating point, which precisely calculated large and small amounts, although round-off was not allowed.

The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo,[39] who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc.[40] Very old Nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures.

Sanchez wrote in Arithmetic in Maya that another abacus, in base 5 and base 4, had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand 0, 1, 2, 3, and 4 were used, and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles.

The quipu of the Incas was a system of colored knotted cords used to record numerical data,[41] like advanced tally sticks, but not used to perform calculations.
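Under the reading described above, a Nepōhualtzintzin bead configuration can be valued as follows. This is a sketch under the stated assumptions (left beads worth 1 each, right beads worth 5 each, each row scaled by a factor of 20); historical practice may have differed, and the function name is hypothetical.

```python
def nepohualtzintzin_value(rows):
    """Value of a bead configuration: rows are (left, right) counts of active
    beads, least significant row first; left beads are worth 1 each (max 4),
    right beads 5 each (max 3), and row k is scaled by 20**k."""
    total = 0
    for k, (left, right) in enumerate(rows):
        assert 0 <= left <= 4 and 0 <= right <= 3
        total += (left + 5 * right) * 20 ** k
    return total

# A full first row (4 + 15 = 19) plus one unit bead in the second row:
print(nepohualtzintzin_value([(4, 3), (1, 0)]))   # 19 + 20 = 39
```

With all 13 rows full, the device would hold the value 19 × (20⁰ + 20¹ + … + 20¹²), which is consistent with the large dynamic range mentioned above.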
Calculations were carried out using a yupana (Quechua for "counting tool"; see figure), which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 the Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and powers of 10, 20, and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.[42]

Russia

The Russian abacus, the schoty (Russian: счёты, plural from Russian: счёт, counting), usually has a single slanted deck, with ten beads on each wire (except one wire with four beads for quarter-ruble fractions). The 4-bead wire was introduced for quarter-kopeks, which were minted until 1916.[43] The Russian abacus is used vertically, with each wire running horizontally. The wires are usually bowed upward in the center, to keep the beads pinned to either side. It is cleared when all the beads are moved to the right. During manipulation, beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th beads) are usually of a different color from the other eight. Likewise, the left bead of the thousands wire (and the millions wire, if present) may have a different color.

The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s.[44][45] Even the 1874 invention of a mechanical calculator, the Odhner arithmometer, did not displace it in Russia, according to Yakov Perelman.
Some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator.[46] Likewise, the mass production of Felix arithmometers from 1924 on did not significantly reduce abacus use in the Soviet Union.[47] The Russian abacus began to lose popularity only after the mass production of domestic microcalculators in 1974.

The Russian abacus was brought to France around 1820 by the mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia.[48] The abacus had fallen out of use in western Europe in the 16th century with the rise of decimal notation and algorismic methods. To Poncelet's French contemporaries, it was something new. Poncelet used it not for any applied purpose, but as a teaching and demonstration aid.[49] The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians.[50]

School abacus

Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic. In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common (see image). The wire frame may be used either with positional notation like other abacuses (thus the 10-wire version may represent numbers up to 9,999,999,999), or each bead may represent one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In the bead frame shown, the gap between the 5th and 6th wires, corresponding to the color change between the 5th and 6th beads on each wire, suggests the latter use. Teaching multiplication, e.g. 6 times 7, may be represented by shifting 7 beads on 6 wires. The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons.
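The unit-mode reading described above can be sketched directly (toy helpers with hypothetical names, for illustration only).

```python
def unit_frame(n, wires=10, beads_per_wire=10):
    """Represent n on a school bead frame in unit mode: shift every bead on
    whole wires, then the remainder on the next wire. Returns the number of
    shifted beads per used wire."""
    assert 0 <= n <= wires * beads_per_wire
    full, rest = divmod(n, beads_per_wire)
    return [beads_per_wire] * full + ([rest] if rest else [])

def times_on_frame(a, b):
    """Model 'a times b' as shifting b beads on each of a wires, then counting."""
    return sum([b] * a)

print(unit_frame(74))          # [10, 10, 10, 10, 10, 10, 10, 4]
print(times_on_frame(6, 7))    # 42
```

This matches the text's example: 74 is all beads on 7 wires plus 4 beads on the 8th, and 6 times 7 is read off as the total of 7 beads shifted on each of 6 wires.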
The twenty-bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework.[51]

Feynman vs the abacus

The physicist Richard Feynman was noted for his facility in mathematical calculations. He wrote about an encounter in Brazil with a Japanese abacus expert, who challenged him to speed contests between Feynman's pen and paper and the abacus. The abacus was much faster for addition, somewhat faster for multiplication, but Feynman was faster at division. When the abacus was used for a really difficult challenge, i.e. cube roots, Feynman won easily. However, the number chosen at random was close to a number Feynman happened to know was an exact cube, allowing him to use approximate methods.[52]

Neurological analysis

Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and more effectively connected neural pathways.[53][54] They are able to retrieve memory to deal with complex processes.[55] AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads.[56] Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.[56]

Binary abacus

The binary abacus is used to explain how computers manipulate numbers.[57] The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of a series of beads on parallel wires arranged in three separate rows.
The beads represent a switch on the computer in either an "on" or "off" position.

Visually impaired users

An adapted abacus, invented by Tim Cranmer and called a Cranmer abacus, is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them. The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root.[58] Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades.[59] Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics), but large multiplication and long division problems are tedious. The abacus gives these students a tool to compute mathematical problems that equals the speed and mathematical knowledge required of their sighted peers using pencil and paper. Many blind people find this number machine a useful tool throughout life.[58]

See also

• Chinese Zhusuan • Chisanbop • Logical abacus • Mental abacus • Napier's bones • Sand table • Slide rule • Soroban • Suanpan

Notes

1. Both C. J. Gadd, a keeper of the Egyptian and Assyrian Antiquities at the British Museum, and Jacob Levy, a Jewish historian who wrote Neuhebräisches und chaldäisches wörterbuch über die Talmudim und Midraschim [Neo-Hebrew and Chaldean dictionary on the Talmuds and Midrashim], disagree with the "dust table" theory.[7]

Footnotes

1. Boyer & Merzbach 1991, pp. 252–253 2. de Stefani 1909, p. 2 3. Gaisford 1962, p. 2 4. Lasserre & Livadaras 1976, p. 4 5. Klein 1966, p. 1 6. Onions, Friedrichsen & Burchfield 1967, p. 2 7. Pullan 1968, p. 17 8. Huehnergard 2011, p. 2 9. Brown 1993, p. 2 10. Gove 1976, p. 1 11. Ifrah 2001, p. 11 12. Crump 1992, p. 188 13. Melville 2001 14. Carruccio 2006, p. 14 15. Smith 1958, pp. 157–160 16. Carr 2014 17. 
Ifrah 2001, p. 15 18. Williams 1997, p. 55 19. Pullan 1968, p. 16 20. Williams 1997, pp. 55–56 21. Ifrah 2001, p. 18 22. Pullan 1968, p. 18 23. Brown 2010, pp. 81–82 24. Brown 2011 25. Huff 1993, p. 50 26. Ifrah 2001, p. 17 27. Fernandes 2003 28. "中国算盘 | 清华大学科学博物馆". Department of the History of Science, Tsinghua University (in Chinese). August 22, 2020. Archived from the original on August 8, 2021. Retrieved August 8, 2021. 29. Körner 1996, p. 232 30. Mollin 1998, p. 3 31. Gullberg 1997, p. 169 32. Williams 1997, p. 65 33. Murray 1982 34. Anon 2002 35. Jami 1998, p. 4 36. Anon 2013 37. Sanyal 2008 38. Anon 2004 39. Hidalgo 1977, p. 94 40. Hidalgo 1977, pp. 94–101 41. Albree 2000, p. 42 42. Aimi & De Pasquale 2005 43. Sokolov, Viatcheslav; Karelskaia, Svetlana; Zuga, Ekaterina (February 2023). "The schoty (abacus) as the phenomenon of Russian accounting". Accounting History. 28 (1): 90–118. doi:10.1177/10323732221132005. ISSN 1032-3732. S2CID 256789240. 44. Burnett & Ryan 1998, p. 7 45. Hudgins 2004, p. 219 46. Arithmetic for Entertainment, Yakov Perelman, page 51. 47. Leushina 1991, p. 427 48. Trogeman & Ernst 2001, p. 24 49. Flegg 1983, p. 72 50. Williams 1997, p. 64 51. West 2011, p. 49 52. Feynman, Richard (1985). "Lucky Numbers". Surely you're joking, Mr. Feynman!. New York: W.W. Norton. ISBN 978-0-393-31604-9. OCLC 10925248. 53. Hu, Yuzheng; Geng, Fengji; Tao, Lixia; Hu, Nantu; Du, Fenglei; Fu, Kuang; Chen, Feiyan (December 14, 2010). "Enhanced white matter tracts integrity in children with abacus training". Human Brain Mapping. 32 (1): 10–21. doi:10.1002/hbm.20996. ISSN 1065-9471. PMC 6870462. PMID 20235096. 54. Wu, Tung-Hsin; Chen, Chia-Lin; Huang, Yung-Hui; Liu, Ren-Shyan; Hsieh, Jen-Chuen; Lee, Jason J. S. (November 5, 2008). "Effects of long-term practice and task complexity on brain activities when performing abacus-based mental calculations: a PET study". European Journal of Nuclear Medicine and Molecular Imaging. 36 (3): 436–445. 
doi:10.1007/s00259-008-0949-0. ISSN 1619-7070. PMID 18985348. S2CID 9860036. 55. Lee, J.S.; Chen, C.L.; Wu, T.H.; Hsieh, J.C.; Wui, Y.T.; Cheng, M.C.; Huang, Y.H. (2003). "Brain activation during abacus-based mental calculation with fMRI: A comparison between abacus experts and normal subjects". First International IEEE EMBS Conference on Neural Engineering, 2003. Conference Proceedings. pp. 553–556. doi:10.1109/CNE.2003.1196886. ISBN 978-0-7803-7579-6. S2CID 60704352. 56. Chen, C.L.; Wu, T.H.; Cheng, M.C.; Huang, Y.H.; Sheu, C.Y.; Hsieh, J.C.; Lee, J.S. (December 20, 2006). "Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study". Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 569 (2): 567–571. Bibcode:2006NIMPA.569..567C. doi:10.1016/j.nima.2006.08.101. ISSN 0168-9002. 57. Good 1985, p. 34 58. Terlau & Gissoni 2005 59. Presley & D'Andrea 2009 References • Aimi, Antonio; De Pasquale, Nicolino (2005). "Andean Calculators" (PDF). translated by Del Bianco, Franca. Archived (PDF) from the original on May 3, 2015. Retrieved July 31, 2014. • Albree, Joe (2000). Hessenbruch, Arne (ed.). Reader's Guide to the History of Science. London, UK: Fitzroy Dearborn Publishers. ISBN 978-1-884964-29-9. • Anon (September 12, 2002). "Abacus middle ages, region of origin Middle East". The History of Computing Project. Archived from the original on May 9, 2014. Retrieved July 31, 2014. • Anon (2004). "Nepohualtzintzin, The Pre Hispanic Computer". Iberamia 2004. Archived from the original on May 3, 2015. Retrieved July 31, 2014. • Anon (2013). 주판 [Abacus]. enc.daum.net (in Korean). Archived from the original on July 7, 2012. Retrieved July 31, 2014. • Boyer, Carl B.; Merzbach, Uta C. (1991). A History of Mathematics (2nd ed.). John Wiley & Sons, Inc. ISBN 978-0-471-54397-8. • Brown, Lesley, ed. (1993). "abacus". 
Shorter Oxford English Dictionary on Historical Principles. Vol. 2: A-K (5th ed.). Oxford, UK: Oxford University Press. ISBN 978-0-19-860575-1. • Brown, Nancy Marie (2010). The Abacus and the Cross: The Story of the Pope Who Brought the Light of Science to the Dark Ages. Philadelphia, PA: Basic Books. ISBN 978-0-465-00950-3. • Brown, Nancy Marie (January 2, 2011). "Everything You Think You Know About the Dark Ages is Wrong". rd magazine (Interview). USC Annenberg. Archived from the original on August 8, 2014. • Burnett, Charles; Ryan, W. F. (1998). "Abacus (Western)". In Bud, Robert; Warner, Deborah Jean (eds.). Instruments of Science: An Historical Encyclopedia. Garland Encyclopedias in the History of Science. New York, NY: Garland Publishing, Inc. pp. 5–7. ISBN 978-0-8153-1561-2. • Carr, Karen (2014). "West Asian Mathematics". Kidipede. History for Kids!. Archived from the original on July 3, 2014. Retrieved Jun 19, 2014. • Carruccio, Ettore (2006). Mathematics and Logic In History and In Contemporary Thought. translated by Quigly, Isabel. Aldine Transaction. ISBN 978-0-202-30850-0. • Crump, Thomas (1992). The Japanese Numbers Game: The Use and Understanding of Numbers in Modern Japan. The Nissan Institute/Routledge Japanese Studies Series. Routledge. ISBN 978-0-415-05609-0. • de Stefani, Aloysius, ed. (1909). Etymologicum Gudianum quod vocatur; recensuit et apparatum criticum indicesque adiecit. Vol. I. Leipzig, Germany: Teubner. LCCN 23016143. • Fernandes, Luis (November 27, 2003). "A Brief Introduction to the Abacus". ee.ryerson.ca. Archived from the original on December 26, 2014. Retrieved July 31, 2014. • Flegg, Graham (1983). Numbers: Their History and Meaning. Dover Books on Mathematics. Mineola, NY: Courier Dover Publications. ISBN 978-0-233-97516-0. • Gaisford, Thomas, ed. (1962) [1848]. 
Etymologicon Magnum seu verius Lexicon Saepissime vocabulorum origines indagans ex pluribus lexicis scholiastis et grammaticis anonymi cuiusdam opera concinnatum [The Great Etymologicon: Which Contains the Origins of the Lexicon of Words from a Large Number or Rather with a Great Amount of Research Lexicis Scholiastis and Connected Together by the Works of Anonymous Grammarians] (in Latin). Amsterdam, the Netherlands: Adolf M. Hakkert. • Good, Robert C. Jr. (Fall 1985). "The Binary Abacus: A Useful Tool for Explaining Computer Operations". Journal of Computers in Mathematics and Science Teaching. 5 (1): 34–37. • Gove, Philip Babcock, ed. (1976). "abacist". Websters Third New International Dictionary (17th ed.). Springfield, MA: G. & C. Merriam Company. ISBN 978-0-87779-101-0. • Gullberg, Jan (1997). Mathematics: From the Birth of Numbers. Illustrated by Pär Gullberg. New York, NY: W. W. Norton & Company. ISBN 978-0-393-04002-9. • Hidalgo, David Esparza (1977). Nepohualtzintzin: Computador Prehispánico en Vigencia [The Nepohualtzintzin: An Effective Pre-Hispanic Computer] (in Spanish). Tlacoquemécatl, Mexico: Editorial Diana. • Hudgins, Sharon (2004). The Other Side of Russia: A Slice of Life in Siberia and the Russian Far East. Eugenia & Hugh M. Stewart '26 Series on Eastern Europe. Texas A&M University Press. ISBN 978-1-58544-404-5. • Huehnergard, John, ed. (2011). "Appendix of Semitic Roots, under the root ʾbq.". American Heritage Dictionary of the English Language (5th ed.). Houghton Mifflin Harcourt Trade. ISBN 978-0-547-04101-8. • Huff, Toby E. (1993). The Rise of Early Modern Science: Islam, China and the West (1st ed.). Cambridge, UK: Cambridge University Press. ISBN 978-0-521-43496-6. • Ifrah, Georges (2001). The Universal History of Computing: From the Abacus to the Quantum Computer. New York, NY: John Wiley & Sons, Inc. ISBN 978-0-471-39671-0. • Jami, Catherine (1998). "Abacus (Eastern)". In Bud, Robert; Warner, Deborah Jean (eds.). 
Instruments of Science: An Historical Encyclopedia. New York, NY: Garland Publishing, Inc. ISBN 978-0-8153-1561-2. • Klein, Ernest, ed. (1966). "abacus". A Comprehensive Etymological Dictionary of the English Language. Vol. I: A-K. Amsterdam: Elsevier Publishing Company. • Körner, Thomas William (1996). The Pleasures of Counting. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-56823-4. • Lasserre, Franciscus; Livadaras, Nicolaus, eds. (1976). Etymologicum Magnum Genuinum: Symeonis Etymologicum: Una Cum Magna Grammatica (in Greek and Latin). Vol. Primum: α — άμωσϒέπωϛ. Rome, Italy: Edizioni dell'Ateneo. LCCN 77467964. • Leushina, A. M. (1991). The development of elementary mathematical concepts in preschool children. National Council of Teachers of Mathematics. ISBN 978-0-87353-299-0. • Melville, Duncan J. (May 30, 2001). "Chronology of Mesopotamian Mathematics". St. Lawrence University. It.stlawu.edu. Archived from the original on January 12, 2014. Retrieved Jun 19, 2014. • Mish, Frederick C., ed. (2003). "abacus". Merriam-Webster's Collegiate Dictionary (11th ed.). Merriam-Webster, Inc. ISBN 978-0-87779-809-5. • Mollin, Richard Anthony (September 1998). Fundamental Number Theory with Applications. Discrete Mathematics and its Applications. Boca Raton, FL: CRC Press. ISBN 978-0-8493-3987-5. • Murray, Geoffrey (July 20, 1982). "Ancient calculator is a hit with Japan's newest generation". The Christian Science Monitor. CSMonitor.com. Archived from the original on December 2, 2013. Retrieved July 31, 2014. • Onions, C. T.; Friedrichsen, G. W. S.; Burchfield, R. W., eds. (1967). "abacus". The Oxford Dictionary of English Etymology. Oxford, UK: Oxford at the Clarendon Press. • Presley, Ike; D'Andrea, Frances Mary (2009). Assistive Technology for Students who are Blind Or Visually Impaired: A Guide to Assessment. American Foundation for the Blind. p. 61. ISBN 978-0-89128-890-9. • Pullan, J. M. (1968). The History of the Abacus. New York, NY: Frederick A. 
Praeger, Inc., Publishers. ISBN 978-0-09-089410-9. LCCN 72075113. • Reilly, Edwin D., ed. (2004). Concise Encyclopedia of Computer Science. New York, NY: John Wiley and Sons, Inc. ISBN 978-0-470-09095-4. • Sanyal, Amitava (July 6, 2008). "Learning by Beads". Hindustan Times. • Smith, David Eugene (1958). History of Mathematics. Dover Books on Mathematics. Vol. 2: Special Topics of Elementary Mathematics. Courier Dover Publications. ISBN 978-0-486-20430-7. • Stearns, Peter N.; Langer, William Leonard, eds. (2001). "The Encyclopedia of World History: Ancient, Medieval, and Modern, Chronologically Arranged". The Encyclopedia of World History (6th ed.). New York, NY: Houghton Mifflin Harcourt. ISBN 978-0-395-65237-4. • Terlau, Terrie; Gissoni, Fred (March 2005). "Abacus = Pencil and Paper When Calculating". APH News. American Printing House for the Blind. Archived from the original on December 2, 2013. • Trogeman, Georg; Ernst, Wolfgang (2001). Trogeman, Georg; Nitussov, Alexander Y.; Ernst, Wolfgang (eds.). Computing in Russia: The History of Computer Devices and Information Technology Revealed. Braunschweig/Wiesbaden: Vieweg+Teubner Verlag. ISBN 978-3-528-05757-2. • West, Jessica F. (2011). Number Sense Routines: Building Numerical Literacy Every Day in Grades K-3. Portland, Me.: Stenhouse Publishers. ISBN 978-1-57110-790-9. • Williams, Michael R. (1997). Baltes, Cheryl (ed.). A History of Computing Technology (2nd ed.). Los Alamitos, CA: IEEE Computer Society Press. ISBN 978-0-8186-7739-7. LCCN 96045232. • Yoke, Ho Peng (2000). Li, Qi and Shu: An Introduction to Science and Civilization in China. Dover Science Books. Courier Dover Publications. ISBN 978-0-486-41445-4. Further reading • Fernandes, Luis (2013). "The Abacus: A Brief History". ee.ryerson.ca. Archived from the original on July 2, 2014. Retrieved July 31, 2014. • Menninger, Karl W.
(1969), Number Words and Number Symbols: A Cultural History of Numbers, MIT Press, ISBN 978-0-262-13040-0 • Kojima, Takashi (1954), The Japanese Abacus: its Use and Theory, Tokyo: Charles E. Tuttle Co., Inc., ISBN 978-0-8048-0278-9 • Kojima, Takashi (1963), Advanced Abacus: Japanese Theory and Practice, Tokyo: Charles E. Tuttle Co., Inc., ISBN 978-0-8048-0003-7 • Stephenson, Stephen Kent (July 7, 2010), Ancient Computers, IEEE Global History Network, arXiv:1206.4349, Bibcode:2012arXiv1206.4349S, retrieved July 2, 2011 • Stephenson, Stephen Kent (2013), Ancient Computers, Part I - Rediscovery (2nd ed.), ISBN 978-1-4909-6437-9 External links • Texts on Wikisource: • "Abacus", from A Dictionary of Greek and Roman Antiquities, 3rd ed., 1890. • "Abacus". Encyclopædia Britannica. Vol. I (9th ed.). 1878. p. 4. • "Abacus". Encyclopædia Britannica (11th ed.). 1911. Tutorials • Heffelfinger, Totton & Gary Flom, Abacus: Mystery of the Bead - an Abacus Manual • Min Multimedia • Stephenson, Stephen Kent (2009), How to use a Counting Board Abacus History • Esaulov, Vladimir (2019), History of Abacus and Ancient Computing • The Abacus: a Brief History Curiosities • Schreiber, Michael (2007), Abacus, The Wolfram Demonstrations Project • Abacus in Various Number Systems at cut-the-knot • Java applet of Chinese, Japanese and Russian abaci • An atomic-scale abacus • Examples of Abaci • Aztec Abacus • Indian Abacus
Wikipedia
Schouten–Nijenhuis bracket In differential geometry, the Schouten–Nijenhuis bracket, also known as the Schouten bracket, is a type of graded Lie bracket defined on multivector fields on a smooth manifold extending the Lie bracket of vector fields. There are two different versions, both rather confusingly called by the same name. The most common version is defined on alternating multivector fields and makes them into a Gerstenhaber algebra, but there is also another version defined on symmetric multivector fields, which is more or less the same as the Poisson bracket on the cotangent bundle. It was invented by Jan Arnoldus Schouten (1940, 1953) and its properties were investigated by his student Albert Nijenhuis (1955). It is related to but not the same as the Nijenhuis–Richardson bracket and the Frölicher–Nijenhuis bracket. Definition and properties An alternating multivector field is a section of the exterior algebra ∧∗TM over the tangent bundle of a manifold M. The alternating multivector fields form a graded supercommutative ring with the product of a and b written as ab (some authors use a∧b). This is dual to the usual algebra of differential forms Ω∗M by the pairing on homogeneous elements: $\omega (a_{1}a_{2}\dots a_{p})=\left\{{\begin{matrix}\omega (a_{1},\dots ,a_{p})&(\omega \in \Omega ^{p}M)\\0&(\omega \not \in \Omega ^{p}M)\end{matrix}}\right.$ The degree of a multivector A in $\Lambda ^{p}TM$ is defined to be |A| = p. The skew symmetric Schouten–Nijenhuis bracket is the unique extension of the Lie bracket of vector fields to a graded bracket on the space of alternating multivector fields that makes the alternating multivector fields into a Gerstenhaber algebra. 
It is given in terms of the Lie bracket of vector fields by $[a_{1}\cdots a_{m},b_{1}\cdots b_{n}]=\sum _{i,j}(-1)^{i+j}[a_{i},b_{j}]a_{1}\cdots a_{i-1}a_{i+1}\cdots a_{m}b_{1}\cdots b_{j-1}b_{j+1}\cdots b_{n}$ for vector fields ai, bj and $[f,a_{1}\cdots a_{m}]=-\iota _{df}(a_{1}\cdots a_{m})$ for vector fields $a_{i}$ and smooth function $f$, where $\iota _{df}$ is the common interior product operator. It has the following properties. • |ab| = |a| + |b| (The product has degree 0) • |[a,b]| = |a| + |b| − 1 (The Schouten–Nijenhuis bracket has degree −1) • (ab)c = a(bc), ab = (−1)|a||b|ba (the product is associative and (super) commutative) • [a, bc] = [a, b]c + (−1)|b|(|a| − 1)b[a, c] (Poisson identity) • [a,b] = −(−1)(|a| − 1)(|b| − 1) [b,a] (Antisymmetry of Schouten–Nijenhuis bracket) • [[a,b],c] = [a,[b,c]] − (−1)(|a| − 1)(|b| − 1)[b,[a,c]] (Jacobi identity for Schouten–Nijenhuis bracket) • If f and g are functions (multivectors homogeneous of degree 0), then [f,g] = 0. • If a is a vector field, then [a,b] = Lab is the usual Lie derivative of the multivector field b along a, and in particular if a and b are vector fields then the Schouten–Nijenhuis bracket is the usual Lie bracket of vector fields. The Schouten–Nijenhuis bracket makes the multivector fields into a Lie superalgebra if the grading is changed to the one of opposite parity (so that the even and odd subspaces are switched), though with this new grading it is no longer a supercommutative ring. Accordingly, the Jacobi identity may also be expressed in the symmetrical form $(-1)^{(|a|-1)(|c|-1)}[a,[b,c]]+(-1)^{(|b|-1)(|a|-1)}[b,[c,a]]+(-1)^{(|c|-1)(|b|-1)}[c,[a,b]]=0.\,$ Generalizations There is a common generalization of the Schouten–Nijenhuis bracket for alternating multivector fields and the Frölicher–Nijenhuis bracket due to Vinogradov (1990). A version of the Schouten–Nijenhuis bracket can also be defined for symmetric multivector fields in a similar way. 
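A small worked example (added here for illustration; it is not part of the original text) makes the signs in the defining formula concrete. On R³ take a = ∂x ∧ ∂y (so |a| = 2) and b = x ∂z (so |b| = 1):

```latex
% Defining sum with m = 2, n = 1 (juxtaposition denotes the wedge product):
[\partial_x \wedge \partial_y,\; x\,\partial_z]
  = (-1)^{1+1}[\partial_x,\, x\,\partial_z]\,\partial_y
  + (-1)^{2+1}[\partial_y,\, x\,\partial_z]\,\partial_x
  = \partial_z \wedge \partial_y - 0
  = -\,\partial_y \wedge \partial_z .
% Cross-check with the listed properties: since b = x\,\partial_z is a
% vector field, [b, a] = L_b\, a, and
% L_{x\,\partial_z}(\partial_x \wedge \partial_y)
%   = [x\,\partial_z, \partial_x] \wedge \partial_y
%   + \partial_x \wedge [x\,\partial_z, \partial_y]
%   = -\,\partial_z \wedge \partial_y = \partial_y \wedge \partial_z ,
% in agreement with the graded antisymmetry
% [a, b] = -(-1)^{(|a|-1)(|b|-1)}[b, a] = -[b, a].
```

Both routes give the same answer, as the antisymmetry property requires.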
The symmetric multivector fields can be identified with functions on the cotangent space T*(M) of M that are polynomial in the fiber, and under this identification the symmetric Schouten–Nijenhuis bracket corresponds to the Poisson bracket of functions on the symplectic manifold T*(M). There is a common generalization of the Schouten–Nijenhuis bracket for symmetric multivector fields and the Frölicher–Nijenhuis bracket due to Dubois-Violette and Peter W. Michor (1995). References • Dubois-Violette, Michel; Michor, Peter W. (1995). "A common generalization of the Frölicher–Nijenhuis bracket and the Schouten bracket for symmetric multi vector fields". Indag. Math. 6 (1): 51–66. arXiv:alg-geom/9401006. doi:10.1016/0019-3577(95)98200-u. • Marle, Charles-Michel (1997). "The Schouten-Nijenhuis bracket and interior products" (PDF). Journal of Geometry and Physics. 23 (3–4): 350–359. Bibcode:1997JGP....23..350M. CiteSeerX 10.1.1.27.5358. doi:10.1016/s0393-0440(97)80009-5. • Nijenhuis, A. (1955). "Jacobi-type identities for bilinear differential concomitants of certain tensor fields I". Indagationes Mathematicae. 17: 390–403. doi:10.1016/S1385-7258(55)50054-0. hdl:10338.dmlcz/102420. • Schouten, J. A. (1940). "Über Differentialkonkomitanten zweier kontravarianten Grössen". Indag. Math. 2: 449–452. • Schouten, J. A. (1953). "On the differential operators of the first order in tensor calculus". In Cremonese (ed.). Convegno Int. Geom. Diff. Italia. pp. 1–7. • Vinogradov, A. M. (1990). "Unification of Schouten–Nijenhuis and Frölicher–Nijenhuis brackets, cohomology and super differential operators". Sov. Math. Zametki. 47. External links • Nicola Ciccoli Schouten–Nijenhuis bracket in notes on From Poisson to Quantum Geometry
Schramm–Loewner evolution In probability theory, the Schramm–Loewner evolution with parameter κ, also known as stochastic Loewner evolution (SLEκ), is a family of random planar curves that have been proven to be the scaling limit of a variety of two-dimensional lattice models in statistical mechanics. Given a parameter κ and a domain U in the complex plane, it gives a family of random curves in U, with κ controlling how much the curve turns. There are two main variants of SLE: chordal SLE, which gives a family of random curves connecting two fixed boundary points, and radial SLE, which gives a family of random curves from a fixed boundary point to a fixed interior point. These curves are defined to satisfy conformal invariance and a domain Markov property. It was discovered by Oded Schramm (2000) as a conjectured scaling limit of the planar uniform spanning tree (UST) and the planar loop-erased random walk (LERW) probabilistic processes, and developed by him together with Greg Lawler and Wendelin Werner in a series of joint papers. Besides UST and LERW, the Schramm–Loewner evolution is conjectured or proven to describe the scaling limit of various stochastic processes in the plane, such as critical percolation, the critical Ising model, the double-dimer model, self-avoiding walks, and other critical statistical mechanics models that exhibit conformal invariance. The SLE curves are the scaling limits of interfaces and other non-self-intersecting random curves in these models. The main idea is that the conformal invariance and a certain Markov property inherent in such stochastic processes together make it possible to encode these planar curves into a one-dimensional Brownian motion running on the boundary of the domain (the driving function in Loewner's differential equation). This way, many important questions about the planar models can be translated into exercises in Itô calculus. 
Indeed, several mathematically non-rigorous predictions made by physicists using conformal field theory have been proven using this strategy. The Loewner equation Main article: Loewner differential equation If D is a simply connected, open complex domain not equal to C, and γ is a simple curve in D starting on the boundary (a continuous function with γ(0) on the boundary of D and γ((0, ∞)) a subset of D), then for each t ≥ 0, the complement Dt of γ([0, t]) is simply connected and therefore conformally isomorphic to D by the Riemann mapping theorem. If ƒt is a suitably normalized isomorphism from D to Dt, then it satisfies a differential equation found by Loewner (1923, p. 121) in his work on the Bieberbach conjecture. Sometimes it is more convenient to use the inverse function gt of ƒt, which is a conformal mapping from Dt to D. In Loewner's equation, z is in the domain D, t ≥ 0, and the boundary values at time t = 0 are ƒ0(z) = z or g0(z) = z. The equation depends on a driving function ζ(t) taking values in the boundary of D. If D is the unit disk and the curve γ is parameterized by "capacity", then Loewner's equation is ${\frac {\partial f_{t}(z)}{\partial t}}=-zf_{t}^{\prime }(z){\frac {\zeta (t)+z}{\zeta (t)-z}}$   or   ${\dfrac {\partial g_{t}(z)}{\partial t}}=g_{t}(z){\dfrac {\zeta (t)+g_{t}(z)}{\zeta (t)-g_{t}(z)}}.$ When D is the upper half plane the Loewner equation differs from this by changes of variable and is ${\frac {\partial f_{t}(z)}{\partial t}}={\frac {2f_{t}^{\prime }(z)}{\zeta (t)-z}}$   or   ${\dfrac {\partial g_{t}(z)}{\partial t}}={\dfrac {2}{g_{t}(z)-\zeta (t)}}.$ The driving function ζ and the curve γ are related by $f_{t}(\zeta (t))=\gamma (t){\text{ or }}\zeta (t)=g_{t}(\gamma (t))$ where $f_{t}$ and $g_{t}$ are extended by continuity. Example Let D be the upper half plane and consider an SLE0, so the driving function ζ is a Brownian motion of diffusivity zero. 
The function ζ is thus identically zero almost surely and $f_{t}(z)={\sqrt {z^{2}-4t}}$ $g_{t}(z)={\sqrt {z^{2}+4t}}$ $\gamma (t)=2i{\sqrt {t}}$ $D_{t}$ is the upper half-plane with the line from 0 to $2i{\sqrt {t}}$ removed. Schramm–Loewner evolution Schramm–Loewner evolution is the random curve γ given by the Loewner equation as in the previous section, for the driving function $\zeta (t)={\sqrt {\kappa }}B(t)$ where B(t) is Brownian motion on the boundary of D, scaled by some real κ. In other words, Schramm–Loewner evolution is a probability measure on planar curves, given as the image of Wiener measure under this map. In general the curve γ need not be simple, and the domain Dt is not the complement of γ([0,t]) in D, but is instead the unbounded component of the complement. There are two versions of SLE, using two families of curves, each depending on a non-negative real parameter κ: • Chordal SLEκ, which is related to curves connecting two points on the boundary of a domain (usually the upper half plane, with the points being 0 and infinity). • Radial SLEκ, which is related to curves joining a point on the boundary of a domain to a point in the interior (often curves joining 1 and 0 in the unit disk). SLE depends on a choice of Brownian motion on the boundary of the domain, and there are several variations depending on what sort of Brownian motion is used: for example it might start at a fixed point, or start at a uniformly distributed point on the unit circle, or might have a built-in drift, and so on. The parameter κ controls the rate of diffusion of the Brownian motion, and the behavior of SLE depends critically on its value. The two domains most commonly used in Schramm–Loewner evolution are the upper half plane and the unit disk. Although the Loewner differential equation looks different in these two cases, the two are equivalent up to changes of variables, as the unit disk and the upper half plane are conformally equivalent. 
However, a conformal equivalence between them does not preserve the Brownian motion on their boundaries used to drive Schramm–Loewner evolution. Special values of κ • For 0 ≤ κ < 4 the curve γ(t) is simple (with probability 1). • For 4 < κ < 8 the curve γ(t) intersects itself and every point is contained in a loop but the curve is not space-filling (with probability 1). • For κ ≥ 8 the curve γ(t) is space-filling (with probability 1). • κ = 2 corresponds to the loop-erased random walk, or equivalently, branches of the uniform spanning tree. • For κ = 8/3, SLEκ has the restriction property and is conjectured to be the scaling limit of self-avoiding random walks. A version of it is the outer boundary of Brownian motion. • κ = 3 is the limit of interfaces for the Ising model. • κ = 4 corresponds to the path of the harmonic explorer and contour lines of the Gaussian free field. • For κ = 6, SLEκ has the locality property. This arises in the scaling limit of critical percolation on the triangular lattice and conjecturally on other lattices. • κ = 8 corresponds to the path separating the uniform spanning tree from its dual tree. When SLE corresponds to some conformal field theory, the parameter κ is related to the central charge c of the conformal field theory by $c={\frac {(8-3\kappa )(\kappa -6)}{2\kappa }}.$ Each value of c < 1 corresponds to two values of κ: one value of κ between 0 and 4, and a "dual" value 16/κ greater than 4 (see Bauer & Bernard (2002a), Bauer & Bernard (2002b)). Beffara (2008) showed that the Hausdorff dimension of the paths (with probability 1) is equal to min(2, 1 + κ/8). 
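Because the whole curve is encoded by the scalar driving function ζ(t) = √κ B(t), SLE traces can be simulated directly from the half-plane Loewner equation. The following Python sketch (an illustration added here, using the standard piecewise-constant discretization; the function name is ours) freezes the driving function on each small time interval, for which the equation integrates to an explicit slit map, and recovers the tip of the curve by composing the inverse slit maps in reverse order.

```python
import cmath
import random

def sle_trace(kappa, T=1.0, n=1000, seed=0):
    """Approximate the chordal SLE_kappa trace in the upper half plane.

    The driving function zeta(t) = sqrt(kappa) * B(t) is sampled as a
    Gaussian random walk.  On each interval of length dt it is frozen at
    a constant U_k, for which Loewner's equation integrates to the slit
    map g(z) = U_k + sqrt((z - U_k)^2 + 4*dt).  The tip gamma(t_k) is
    recovered by composing the inverse slit maps in reverse order.
    """
    rng = random.Random(seed)
    dt = T / n
    # piecewise-constant samples of the driving function
    U, u = [], 0.0
    for _ in range(n):
        u += (kappa * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        U.append(u)
    trace = []
    for k in range(n):
        w = complex(U[k], 0.0)       # image of the tip at time t_{k+1}
        for j in range(k, -1, -1):   # apply inverse slit maps backwards
            s = cmath.sqrt((w - U[j]) ** 2 - 4 * dt)
            if s.imag < 0:           # branch mapping into the half plane
                s = -s
            w = U[j] + s
        trace.append(w)
    return trace
```

For κ = 0 the driving function vanishes and the computed trace reproduces the explicit solution γ(t) = 2i√t of the example above; the naive composition scheme costs O(n²) in the number of time steps.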
Left passage probability formulas for SLEκ The probability of chordal SLEκ γ passing to the left of a fixed point $x_{0}+iy_{0}=z_{0}\in \mathbb {H} $ was computed by Schramm (2001a)[1] $\mathbb {P} [\gamma {\text{ passes to the left of }}z_{0}]={\frac {1}{2}}+{\frac {\Gamma ({\frac {4}{\kappa }})}{{\sqrt {\pi }}\,\Gamma ({\frac {8-\kappa }{2\kappa }})}}{\frac {x_{0}}{y_{0}}}\,_{2}F_{1}\left({\frac {1}{2}},{\frac {4}{\kappa }},{\frac {3}{2}},-\left({\frac {x_{0}}{y_{0}}}\right)^{2}\right)$ where $\Gamma $ is the Gamma function and $_{2}F_{1}(a,b;c;z)$ is the hypergeometric function. This was derived by using the martingale property of $h(x,y):=\mathbb {P} [\gamma {\text{ passes to the left of }}x+iy]$ and Itô's lemma to obtain the following partial differential equation for $w:={\tfrac {x}{y}}$ ${\frac {\kappa }{2}}\partial _{ww}h(w)+{\frac {4w}{w^{2}+1}}\partial _{w}h=0.$ For κ = 4, the RHS is $1-{\tfrac {1}{\pi }}\arg(z_{0})$, which was used in the construction of the harmonic explorer,[2] and for κ = 6, we obtain Cardy's formula, which was used by Smirnov to prove conformal invariance in percolation.[3] Applications Lawler, Schramm & Werner (2001b) used SLE6 to prove the conjecture of Mandelbrot (1982) that the boundary of planar Brownian motion has fractal dimension 4/3. Critical percolation on the triangular lattice was proved to be related to SLE6 by Stanislav Smirnov.[4] Combined with earlier work of Harry Kesten,[5] this led to the determination of many of the critical exponents for percolation.[6] This breakthrough, in turn, allowed further analysis of many aspects of this model.[7][8] Loop-erased random walk was shown to converge to SLE2 by Lawler, Schramm and Werner.[9] This allowed derivation of many quantitative properties of loop-erased random walk (some of which were derived earlier by Richard Kenyon[10]). 
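Schramm's left-passage formula is easy to evaluate numerically. The Python sketch below (added for illustration; the function names are ours) sums the Gauss hypergeometric series directly, which converges for |x0/y0| < 1; for κ = 4 the result reduces to the closed form 1 − arg(z0)/π quoted above, which gives a convenient check.

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    """Gauss hypergeometric series 2F1(a, b; c; z), valid for |z| < 1."""
    total, coeff = 0.0, 1.0
    for n in range(terms):
        total += coeff
        coeff *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def left_passage(kappa, x0, y0):
    """Schramm's formula: P[SLE_kappa trace passes to the left of x0 + i*y0].

    The series implementation of 2F1 used here requires |x0/y0| < 1.
    """
    w = x0 / y0
    pref = math.gamma(4 / kappa) / (
        math.sqrt(math.pi) * math.gamma((8 - kappa) / (2 * kappa))
    )
    return 0.5 + pref * w * hyp2f1(0.5, 4 / kappa, 1.5, -w * w)
```

For κ = 4 the prefactor is Γ(1)/(√π · Γ(1/2)) = 1/π and the hypergeometric factor collapses to arctan(x0/y0)/(x0/y0), recovering 1/2 + arctan(x0/y0)/π; points on the imaginary axis give probability exactly 1/2 by symmetry.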
The related random Peano curve outlining the uniform spanning tree was shown to converge to SLE8.[9] Rohde and Schramm showed that κ is related to the fractal dimension of a curve by the following relation $d=1+{\frac {\kappa }{8}}.$ Simulation Computer programs (MATLAB) for simulating Schramm–Loewner evolution planar curves are available in a public GitHub repository. References 1. Schramm, Oded (2001a), "Percolation formula.", Electron. Comm., 33 (6): 115–120, arXiv:math/0107096, Bibcode:2001math......7096S, JSTOR 3481779 2. Schramm, Oded; Sheffield, Scott (2005), "Harmonic explorer and its convergence to SLE4.", Annals of Probability, 33 (6): 2127–2148, arXiv:math/0310210, doi:10.1214/009117905000000477, JSTOR 3481779, S2CID 9055859 3. Smirnov, Stanislav (2001). "Critical percolation in the plane: conformal invariance, Cardy's formula, scaling limits". Comptes Rendus de l'Académie des Sciences, Série I. 333 (3): 239–244. arXiv:0909.4499. Bibcode:2001CRASM.333..239S. doi:10.1016/S0764-4442(01)01991-7. ISSN 0764-4442. 4. Smirnov, Stanislav (2001). "Critical percolation in the plane". Comptes Rendus de l'Académie des Sciences. 333 (3): 239–244. arXiv:0909.4499. Bibcode:2001CRASM.333..239S. doi:10.1016/S0764-4442(01)01991-7. 5. Kesten, Harry (1987). "Scaling relations for 2D-percolation". Comm. Math. Phys. 109 (1): 109–156. Bibcode:1987CMaPh.109..109K. doi:10.1007/BF01205674. S2CID 118713698. 6. Smirnov, Stanislav; Werner, Wendelin (2001). "Critical exponents for two-dimensional percolation". Math. Res. Lett. 8 (6): 729–744. arXiv:math/0109120. doi:10.4310/mrl.2001.v8.n6.a4. S2CID 6837772. 7. Schramm, Oded; Steif, Jeffrey E. (2010). "Quantitative noise sensitivity and exceptional times for percolation". Ann. of Math. 171 (2): 619–672. arXiv:math/0504586. doi:10.4007/annals.2010.171.619. S2CID 14742163. 8. Garban, Christophe; Pete, Gábor; Schramm, Oded (2013). "Pivotal, cluster and interface measures for critical planar percolation". J. Amer. Math. Soc. 26 (4): 939–1024. 
arXiv:1008.1378. doi:10.1090/S0894-0347-2013-00772-9. S2CID 119677336. 9. Lawler, Gregory F.; Schramm, Oded; Werner, Wendelin (2004). "Conformal invariance of planar loop-erased random walks and uniform spanning trees". Ann. Probab. 32 (1B): 939–995. arXiv:math/0112234. doi:10.1214/aop/1079021469. 10. Kenyon, Richard (2000). "Long range properties of spanning trees". J. Math. Phys. 41 (3): 1338–1363. Bibcode:2000JMP....41.1338K. CiteSeerX 10.1.1.39.7560. doi:10.1063/1.533190. Further reading • Beffara, Vincent (2008), "The dimension of the SLE curves", The Annals of Probability, 36 (4): 1421–1452, arXiv:math/0211322, doi:10.1214/07-AOP364, MR 2435854, S2CID 226992 • Cardy, John (2005), "SLE for theoretical physicists", Annals of Physics, 318 (1): 81–118, arXiv:cond-mat/0503313, Bibcode:2005AnPhy.318...81C, doi:10.1016/j.aop.2005.04.001, S2CID 17747133 • Goluzina, E.G. (2001) [1994], "Löwner method", Encyclopedia of Mathematics, EMS Press • Gutlyanskii, V.Ya. (2001) [1994], "Löwner equation", Encyclopedia of Mathematics, EMS Press • Kager, Wouter; Nienhuis, Bernard (2004), "A Guide to Stochastic Loewner Evolution and its Applications", J. Stat. Phys., 115 (5/6): 1149–1229, arXiv:math-ph/0312056, Bibcode:2004JSP...115.1149K, doi:10.1023/B:JOSS.0000028058.87266.be, S2CID 7239233 • Lawler, Gregory F. (2004), "An introduction to the stochastic Loewner evolution", in Kaimanovich, Vadim A. (ed.), Random walks and geometry, Walter de Gruyter GmbH & Co. KG, Berlin, pp. 261–293, ISBN 978-3-11-017237-9, MR 2087784, archived from the original on September 18, 2009 • Lawler, Gregory F. (2005), Conformally invariant processes in the plane, Mathematical Surveys and Monographs, vol. 114, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3677-4, MR 2129588 • Lawler, Gregory F. (2007), "Schramm–Loewner Evolution", arXiv:0712.3256 [math.PR] • Lawler, Gregory F., Stochastic Loewner Evolution • Lawler, Gregory F. 
(2009), "Conformal invariance and 2D statistical physics", Bull. Amer. Math. Soc., 46: 35–54, doi:10.1090/S0273-0979-08-01229-9 • Lawler, Gregory F.; Schramm, Oded; Werner, Wendelin (2001b), "The dimension of the planar Brownian frontier is 4/3", Mathematical Research Letters, 8 (4): 401–411, arXiv:math/0010165, doi:10.4310/mrl.2001.v8.n4.a1, MR 1849257, S2CID 5877745 • Loewner, C. (1923), "Untersuchungen über schlichte konforme Abbildungen des Einheitskreises. I" (PDF), Math. Ann., 89 (1–2): 103–121, doi:10.1007/BF01448091, JFM 49.0714.01, S2CID 121752388 • Mandelbrot, Benoît (1982), The Fractal Geometry of Nature, W. H. Freeman, ISBN 978-0-7167-1186-5 • Norris, J. R. (2010), Introduction to Schramm–Loewner evolutions (PDF) • Pommerenke, Christian (1975), Univalent functions, with a chapter on quadratic differentials by Gerd Jensen, Studia Mathematica/Mathematische Lehrbücher, vol. 15, Vandenhoeck & Ruprecht (Chapter 6 treats the classical theory of Loewner's equation) • Schramm, Oded (2000), "Scaling limits of loop-erased random walks and uniform spanning trees", Israel Journal of Mathematics, 118: 221–288, arXiv:math.PR/9904022, doi:10.1007/BF02803524, MR 1776084, S2CID 17164604 Schramm's original paper, introducing SLE • Schramm, Oded (2007), "Conformally invariant scaling limits: an overview and a collection of problems", International Congress of Mathematicians. Vol. I, Eur. Math. Soc., Zürich, pp. 513–543, arXiv:math/0602151, Bibcode:2006math......2151S, doi:10.4171/022-1/20, ISBN 978-3-03719-022-7, MR 2334202 • Werner, Wendelin (2004), "Random planar curves and Schramm–Loewner evolutions", Lectures on probability theory and statistics, Lecture Notes in Math., vol. 1840, Berlin, New York: Springer-Verlag, pp. 
107–195, arXiv:math.PR/0303354, doi:10.1007/b96719, ISBN 978-3-540-21316-1, MR 2079672 • Werner, Wendelin (2005), "Conformal restriction and related questions", Probability Surveys, 2: 145–190, doi:10.1214/154957805100000113, MR 2178043 • Bauer, Michel; Bernard, Denis (2002a), $SLE_\kappa$ growth processes and conformal field theories, arXiv:math-ph/0206028 • Bauer, Michel; Bernard, Denis (2002b), Conformal Field Theories of Stochastic Loewner Evolutions, arXiv:hep-th/0210015 External links • Lawler; Schramm; Werner (2001), Tutorial: SLE, Lawrence Hall of Science, University of California, Berkeley (video of MSRI lecture) • Schramm, Oded (2001), Conformally Invariant Scaling Limits and SLE, MSRI (Slides from a talk.)
• Martingale • Differences • Local • Sub- • Super- • Random dynamical system • Regenerative process • Renewal process • Stochastic chains with memory of variable length • White noise Fields and other • Dirichlet process • Gaussian random field • Gibbs measure • Hopfield model • Ising model • Potts model • Boolean network • Markov random field • Percolation • Pitman–Yor process • Point process • Cox • Poisson • Random field • Random graph Time series models • Autoregressive conditional heteroskedasticity (ARCH) model • Autoregressive integrated moving average (ARIMA) model • Autoregressive (AR) model • Autoregressive–moving-average (ARMA) model • Generalized autoregressive conditional heteroskedasticity (GARCH) model • Moving-average (MA) model Financial models • Binomial options pricing model • Black–Derman–Toy • Black–Karasinski • Black–Scholes • Chan–Karolyi–Longstaff–Sanders (CKLS) • Chen • Constant elasticity of variance (CEV) • Cox–Ingersoll–Ross (CIR) • Garman–Kohlhagen • Heath–Jarrow–Morton (HJM) • Heston • Ho–Lee • Hull–White • LIBOR market • Rendleman–Bartter • SABR volatility • Vašíček • Wilkie Actuarial models • Bühlmann • Cramér–Lundberg • Risk process • Sparre–Anderson Queueing models • Bulk • Fluid • Generalized queueing network • M/G/1 • M/M/1 • M/M/c Properties • Càdlàg paths • Continuous • Continuous paths • Ergodic • Exchangeable • Feller-continuous • Gauss–Markov • Markov • Mixing • Piecewise-deterministic • Predictable • Progressively measurable • Self-similar • Stationary • Time-reversible Limit theorems • Central limit theorem • Donsker's theorem • Doob's martingale convergence theorems • Ergodic theorem • Fisher–Tippett–Gnedenko theorem • Large deviation principle • Law of large numbers (weak/strong) • Law of the iterated logarithm • Maximal ergodic theorem • Sanov's theorem • Zero–one laws (Blumenthal, Borel–Cantelli, Engelbert–Schmidt, Hewitt–Savage, Kolmogorov, Lévy) Inequalities • Burkholder–Davis–Gundy • Doob's martingale • Doob's 
upcrossing • Kunita–Watanabe • Marcinkiewicz–Zygmund Tools • Cameron–Martin formula • Convergence of random variables • Doléans-Dade exponential • Doob decomposition theorem • Doob–Meyer decomposition theorem • Doob's optional stopping theorem • Dynkin's formula • Feynman–Kac formula • Filtration • Girsanov theorem • Infinitesimal generator • Itô integral • Itô's lemma • Karhunen–Loève theorem • Kolmogorov continuity theorem • Kolmogorov extension theorem • Lévy–Prokhorov metric • Malliavin calculus • Martingale representation theorem • Optional stopping theorem • Prokhorov's theorem • Quadratic variation • Reflection principle • Skorokhod integral • Skorokhod's representation theorem • Skorokhod space • Snell envelope • Stochastic differential equation • Tanaka • Stopping time • Stratonovich integral • Uniform integrability • Usual hypotheses • Wiener space • Classical • Abstract Disciplines • Actuarial mathematics • Control theory • Econometrics • Ergodic theory • Extreme value theory (EVT) • Large deviations theory • Mathematical finance • Mathematical statistics • Probability theory • Queueing theory • Renewal theory • Ruin theory • Signal processing • Statistics • Stochastic analysis • Time series analysis • Machine learning • List of topics • Category
Wikipedia
Schreier conjecture In finite group theory, the Schreier conjecture asserts that the outer automorphism group of every finite simple group is solvable. It was proposed by Otto Schreier in 1926, and is now known to be true as a result of the classification of finite simple groups, but no simpler proof is known. References • Dixon, John D.; Mortimer, Brian (1996), Permutation Groups, Graduate Texts in Mathematics, vol. 163, Springer-Verlag, p. 133, ISBN 978-0-387-94599-6.
Schreier coset graph In the area of mathematics called combinatorial group theory, the Schreier coset graph is a graph associated with a group G, a generating set {xi : i in I } of G, and a subgroup H ≤ G. The Schreier graph encodes the abstract structure of the group modulo the equivalence relation given by the right cosets of H. The graph is named after Otto Schreier, who used the term “Nebengruppenbild” (German: “coset picture”).[1] An equivalent definition appears in an early paper of Todd and Coxeter.[2] Description The vertices of the graph are the right cosets Hg = {hg : h in H } for g in G. The edges of the graph are of the form (Hg, Hgxi). The Cayley graph of the group G with {xi : i in I } is the Schreier coset graph for H = {1G} (Gross & Tucker 1987, p. 73). A spanning tree of a Schreier coset graph corresponds to a Schreier transversal, as in Schreier's subgroup lemma (Conder 2003). The book "Categories and Groupoids" listed below relates this to the theory of covering morphisms of groupoids. A subgroup H of a group G determines a covering morphism of groupoids $p:K\rightarrow G$ and if X is a generating set for G then its inverse image under p is the Schreier graph of (G, X). Applications The graph is useful for understanding coset enumeration and the Todd–Coxeter algorithm. Coset graphs can be used to form large permutation representations of groups and were used by Graham Higman to show that the alternating groups of large enough degree are Hurwitz groups (Conder 2003). Every vertex-transitive graph is a coset graph. References 1. Schreier, Otto (December 1927). "Die Untergruppen der freien Gruppen". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 5 (1): 161–183. doi:10.1007/BF02952517. 2. Todd, J.A; Coxeter, H.S.M. (October 1936). "A practical method for enumerating cosets of a finite abstract group". Proceedings of the Edinburgh Mathematical Society. 5 (1): 26–34. doi:10.1017/S0013091500008221. Retrieved 2018-03-05. • Magnus, W.; Karrass, A.; Solitar, D. 
(1976), Combinatorial Group Theory, Dover • Conder, Marston (2003), "Group actions on graphs, maps and surfaces with maximum symmetry", Groups St. Andrews 2001 in Oxford. Vol. I, London Math. Soc. Lecture Note Ser., vol. 304, Cambridge University Press, pp. 63–91, MR 2051519 • Gross, Jonathan L.; Tucker, Thomas W. (1987), Topological graph theory, Wiley-Interscience Series in Discrete Mathematics and Optimization, New York: John Wiley & Sons, ISBN 978-0-471-04926-5, MR 0898434 • D'Angeli, Daniele; Donno, Alfredo; Matter, Michel; Nagnibeda, Tatiana, Schreier graphs of the Basilica group • Higgins, Philip J., Categories and Groupoids, van Nostrand, New York, Lecture Notes, 1971; republished as TAC Reprint, 2005
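The definition above can be made concrete for a small group. The following sketch (plain Python; the permutation encoding and names are illustrative, not from any source above) builds the Schreier coset graph of S3 relative to the subgroup generated by a transposition, using permutations of {0, 1, 2} written as tuples.

```python
from itertools import permutations

# Permutations of {0, 1, 2} as tuples; mult(p, q) applies p first, then q.
def mult(p, q):
    return tuple(q[p[i]] for i in range(len(p)))

G = list(permutations(range(3)))        # S3, order 6
gens = [(1, 0, 2), (1, 2, 0)]           # a transposition and a 3-cycle generate S3
H = [(0, 1, 2), (1, 0, 2)]              # subgroup generated by the transposition

def coset(g):
    """Right coset Hg = {hg : h in H}, as a hashable canonical object."""
    return frozenset(mult(h, g) for h in H)

vertices = {coset(g) for g in G}                               # |G : H| = 3 cosets
edges = {(coset(g), coset(mult(g, x))) for g in G for x in gens}
# Edges (Hg, Hgx) are well defined because right multiplication
# by a generator sends a whole coset to a whole coset.
```

Taking H = {1G} instead, the same code produces the Cayley graph, in line with the remark from Gross & Tucker quoted above.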
Schreier domain In abstract algebra, a Schreier domain, named after Otto Schreier, is an integrally closed domain where every nonzero element is primal; i.e., whenever x divides yz, x can be written as x = x1 x2 so that x1 divides y and x2 divides z. An integral domain is said to be pre-Schreier if every nonzero element is primal. A GCD domain is an example of a Schreier domain. The term "Schreier domain" was introduced by P. M. Cohn in the 1960s. The term "pre-Schreier domain" is due to Muhammad Zafrullah. In general, an irreducible element is primal if and only if it is a prime element. Consequently, in a Schreier domain, every irreducible is prime. In particular, an atomic Schreier domain is a unique factorization domain; this generalizes the fact that an atomic GCD domain is a UFD. References • Cohn, P. M., Bezout rings and their subrings, 1968. • Zafrullah, Muhammad, On a property of pre-Schreier domains, 1987.
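The integers make the primal property concrete: since Z is a GCD domain, a splitting x = x1·x2 can be computed with a gcd. A minimal sketch (the function name is illustrative):

```python
from math import gcd

def primal_split(x, y, z):
    """Given x | y*z in Z, return (x1, x2) with x = x1*x2, x1 | y and x2 | z.
    Take x1 = gcd(x, y); then x2 = x // x1 is coprime to y // x1, so x2 | z."""
    assert (y * z) % x == 0, "precondition: x must divide y*z"
    x1 = gcd(x, y)
    return x1, x // x1

# 12 divides 8 * 9 = 72: split 12 = 4 * 3 with 4 | 8 and 3 | 9
x1, x2 = primal_split(12, 8, 9)
```

The same gcd argument is what fails in a non-Schreier domain, where an element can divide a product without admitting any such splitting.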
Schreier refinement theorem In mathematics, the Schreier refinement theorem of group theory states that any two subnormal series of subgroups of a given group have equivalent refinements, where two series are equivalent if there is a bijection between their factor groups that sends each factor group to an isomorphic one. The theorem is named after the Austrian mathematician Otto Schreier who proved it in 1928. It provides an elegant proof of the Jordan–Hölder theorem. It is often proved using the Zassenhaus lemma. Baumslag (2006) gives a short proof by intersecting the terms in one subnormal series with those in the other series. Example Consider $\mathbb {Z} _{2}\times S_{3}$, where $S_{3}$ is the symmetric group of degree 3. The alternating group $A_{3}$ is a normal subgroup of $S_{3}$, so we have the two subnormal series $\{0\}\times \{(1)\}\;\triangleleft \;\mathbb {Z} _{2}\times \{(1)\}\;\triangleleft \;\mathbb {Z} _{2}\times S_{3},$ $\{0\}\times \{(1)\}\;\triangleleft \;\{0\}\times A_{3}\;\triangleleft \;\mathbb {Z} _{2}\times S_{3},$ with respective factor groups $(\mathbb {Z} _{2},S_{3})$ and $(A_{3},\mathbb {Z} _{2}\times \mathbb {Z} _{2})$. The two subnormal series are not equivalent, but they have equivalent refinements: $\{0\}\times \{(1)\}\;\triangleleft \;\mathbb {Z} _{2}\times \{(1)\}\;\triangleleft \;\mathbb {Z} _{2}\times A_{3}\;\triangleleft \;\mathbb {Z} _{2}\times S_{3}$ with factor groups isomorphic to $(\mathbb {Z} _{2},A_{3},\mathbb {Z} _{2})$ and $\{0\}\times \{(1)\}\;\triangleleft \;\{0\}\times A_{3}\;\triangleleft \;\{0\}\times S_{3}\;\triangleleft \;\mathbb {Z} _{2}\times S_{3}$ with factor groups isomorphic to $(A_{3},\mathbb {Z} _{2},\mathbb {Z} _{2})$. References • Baumslag, Benjamin (2006), "A simple way of proving the Jordan-Hölder-Schreier theorem", American Mathematical Monthly, 113 (10): 933–935, doi:10.2307/27642092, JSTOR 27642092
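The example can be checked at the level of orders: by Lagrange's theorem the factor groups of a chain have orders given by successive index quotients, and the two refinements yield the same multiset of orders. A small bookkeeping sketch (it verifies orders only, not the isomorphisms themselves; the chain encodings are illustrative):

```python
# Subgroup chains in Z2 x S3 (order 12), recorded by subgroup orders.
refined1 = [1, 2, 6, 12]   # {0}x{1} < Z2x{1} < Z2xA3 < Z2xS3
refined2 = [1, 3, 6, 12]   # {0}x{1} < {0}xA3 < {0}xS3 < Z2xS3

def factor_orders(chain):
    """Orders of the successive factor groups |G_{i+1}| / |G_i|."""
    return [b // a for a, b in zip(chain, chain[1:])]

# factor_orders gives (2, 3, 2) and (3, 2, 2): equal as multisets,
# consistent with the equivalence of the two refinements.
```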
Schröder–Bernstein theorem In set theory, the Schröder–Bernstein theorem states that, if there exist injective functions f : A → B and g : B → A between the sets A and B, then there exists a bijective function h : A → B. In terms of the cardinality of the two sets, this classically implies that if |A| ≤ |B| and |B| ≤ |A|, then |A| = |B|; that is, A and B are equipotent. This is a useful feature in the ordering of cardinal numbers. The theorem is named after Felix Bernstein and Ernst Schröder. It is also known as the Cantor–Bernstein theorem or Cantor–Schröder–Bernstein theorem, after Georg Cantor, who first published it (albeit without proof). Proof The following proof is attributed to Julius König.[1] Assume without loss of generality that A and B are disjoint. For any a in A or b in B we can form a unique two-sided sequence of elements that are alternately in A and B, by repeatedly applying $f$ and $g^{-1}$ to go from A to B and $g$ and $f^{-1}$ to go from B to A (where defined; the inverses $f^{-1}$ and $g^{-1}$ are understood as partial functions.) $\cdots \rightarrow f^{-1}(g^{-1}(a))\rightarrow g^{-1}(a)\rightarrow a\rightarrow f(a)\rightarrow g(f(a))\rightarrow \cdots $ For any particular a, this sequence may terminate to the left or not, at a point where $f^{-1}$ or $g^{-1}$ is not defined. By the fact that $f$ and $g$ are injective functions, each a in A and b in B is in exactly one such sequence to within identity: if an element occurs in two sequences, all elements to the left and to the right must be the same in both, by the definition of the sequences. Therefore, the sequences form a partition of the (disjoint) union of A and B. Hence it suffices to produce a bijection between the elements of A and B in each of the sequences separately, as follows: Call a sequence an A-stopper if it stops at an element of A, or a B-stopper if it stops at an element of B. Otherwise, call it doubly infinite if all the elements are distinct or cyclic if it repeats. 
See the picture for examples. • For an A-stopper, the function $f$ is a bijection between its elements in A and its elements in B. • For a B-stopper, the function $g$ is a bijection between its elements in B and its elements in A. • For a doubly infinite sequence or a cyclic sequence, either $f$ or $g$ will do ($g$ is used in the picture). Another proof Let $f:A\to B$ and $g:B\to A$ be the two injective functions. Then define the sets: $C_{0}=A\setminus g(B)\quad $ and $\quad C_{k+1}=g(f(C_{k}))$ for $k\in \{0,1,2,3,...\}$ and finally $\quad C=\bigcup _{k=0}^{\infty }C_{k}$ Now $\;h(x):={\begin{cases}f(x),&\mathrm {for} \ x\in C\\g^{-1}(x),&\mathrm {for} \ x\in A\setminus C\end{cases}}\quad $ defines a bijective function $\quad h:A\to B\quad $. For the proof let's first observe that $g(f(C))=g(f(\bigcup _{k=0}^{\infty }C_{k}))=\bigcup _{k=0}^{\infty }g(f(C_{k}))=\bigcup _{k=0}^{\infty }C_{k+1}=\bigcup _{k=1}^{\infty }C_{k}$ • $h$ is indeed a function: If $x\in C\subset A\quad $ then $h(x)=f(x)\in B\quad $ and if $\quad x\in A\setminus C\subset A\setminus C_{0}=g(B)\quad $ then $\quad h(x)=g^{-1}(x)$ exists and is one element from the set $B$ since $g:B\to A$ is injective. • $h$ is injective: Assume $h(x_{1})=h(x_{2})$ • If $x_{1},x_{2}\in C$ then $h(x_{1})=f(x_{1})=f(x_{2})=h(x_{2})$ and hence $x_{1}=x_{2}$ as $f$ is injective. • and if $x_{1},x_{2}\in A\setminus C$ then $g^{-1}(x_{1})=g^{-1}(x_{2})$ that implies $g(g^{-1}(x_{1}))=x_{1}=x_{2}=g(g^{-1}(x_{2}))$ • $x_{1}\in C,x_{2}\in A\setminus C$ is impossible since $h(x_{1})=h(x_{2})$ would mean $f(x_{1})=g^{-1}(x_{2})$ and hence $x_{2}=g(g^{-1}(x_{2}))=g(f(x_{1}))\in g(f(C))=\bigcup _{k=1}^{\infty }C_{k}\subset C$ (!) And for $x_{2}\in C,x_{1}\in A\setminus C$ one also gets a contradiction. • $h$ is surjective: Let $y\in B$ • if $y\in f(C)$ then there is a $x\in C$ with $h(x)=f(x)=y$ • if $y\in B\setminus f(C)$ then $g(y)\notin g(f(C))$ since $g$ is injective. 
Now $g(y)\notin g(f(C))=\bigcup _{k=1}^{\infty }C_{k}$ but $g(y)\in g(B)$ and hence $g(y)\notin C_{0}=A\setminus g(B)$. So together $g(y)\notin C$ and therefore $h(g(y))=g^{-1}(g(y))=y$. So $x=g(y)$ has the property $h(x)=y$. Q.E.D.[2][3] Examples Bijective function from $[0,1]\to [0,1)$ Note: $[0,1)$ is the half-open interval from 0 to 1, including the endpoint 0 and excluding the endpoint 1. Let $f:[0,1]\to [0,1)\;$ with $f(x)=x/2;\;$ and $g:[0,1)\to [0,1]\;$ with $g(x)=x;\;$ be the two injective functions as in the proof procedure above. In line with that procedure $C_{0}=\{1\},\;C_{k}=\{2^{-k}\},\;C=\bigcup _{k=0}^{\infty }C_{k}=\{1,{\tfrac {1}{2}},{\tfrac {1}{4}},{\tfrac {1}{8}},...\}$ Then $\;h(x)={\begin{cases}{\frac {x}{2}},&\mathrm {for} \ x\in C\\x,&\mathrm {for} \ x\in [0,1]\setminus C\end{cases}}\;$ is a bijective function from $[0,1]\to [0,1)$. Bijective function from $[0,2)\to [0,1)^{2}$ Let $f:[0,2)\to [0,1)^{2}\;$ with $f(x)=(x/2;0);\;$ Then for $(x;y)\in [0,1)^{2}\;$ one can use the expansions $\;x=\sum _{k=1}^{\infty }a_{k}\cdot 10^{-k}\;$ and $\;y=\sum _{k=1}^{\infty }b_{k}\cdot 10^{-k}\;$ with $\;a_{k},b_{k}\in \{0,1,...,9\}\;$ and now one can set $g(x;y)=\sum _{k=1}^{\infty }(10\cdot a_{k}+b_{k})\cdot 10^{-2k}$ which defines an injective function $[0,1)^{2}\to [0,2)\;$. (Example: $g({\tfrac {1}{3}};{\tfrac {2}{3}})=0.363636...={\tfrac {12}{33}}$) And therefore a bijective function $h$ can be constructed with the use of $f(x)$ and $g^{-1}(x)$. In this case $C_{0}=[1,2)$ is still easy but already $C_{1}=g(f(C_{0}))=g(\{(x;0)|x\in [{\tfrac {1}{2}},1)\,\})\;$ gets quite complicated. Note: there is, of course, a simpler way: use the (already bijective) function definition $g_{2}(x;y)=2\cdot \sum _{k=1}^{\infty }(10\cdot a_{k}+b_{k})\cdot 10^{-2k}$. Then $C$ would be the empty set and $h(x)=g_{2}^{-1}(x)$ for all x. History The traditional name "Schröder–Bernstein" is based on two proofs published independently in 1898. 
Cantor is often added because he first stated the theorem in 1887, while Schröder's name is often omitted because his proof turned out to be flawed while the name of Richard Dedekind, who first proved it, is not connected with the theorem. According to Bernstein, Cantor had suggested the name equivalence theorem (Äquivalenzsatz).[4] • 1887 Cantor publishes the theorem, however without proof.[5][4] • 1887 On July 11, Dedekind proves the theorem (not relying on the axiom of choice)[6] but neither publishes his proof nor tells Cantor about it. Ernst Zermelo discovered Dedekind's proof and in 1908[7] he publishes his own proof based on the chain theory from Dedekind's paper Was sind und was sollen die Zahlen?[4][8] • 1895 Cantor states the theorem in his first paper on set theory and transfinite numbers. He obtains it as an easy consequence of the linear order of cardinal numbers.[9][10][11] However, he could not prove the latter theorem, which is shown in 1915 to be equivalent to the axiom of choice by Friedrich Moritz Hartogs.[4][12] • 1896 Schröder announces a proof (as a corollary of a theorem by Jevons).[13] • 1897 Bernstein, a 19-year-old student in Cantor's Seminar, presents his proof.[14][15] • 1897 Almost simultaneously, but independently, Schröder finds a proof.[14][15] • 1897 After a visit by Bernstein, Dedekind independently proves the theorem a second time. • 1898 Bernstein's proof (not relying on the axiom of choice) is published by Émile Borel in his book on functions.[16] (Communicated by Cantor at the 1897 International Congress of Mathematicians in Zürich.) In the same year, the proof also appears in Bernstein's dissertation.[17][4] • 1898 Schröder publishes his proof[18] which, however, is shown to be faulty by Alwin Reinhold Korselt in 1902 (just before Schröder's death),[19] (confirmed by Schröder),[4][20] but Korselt's paper is published only in 1911. Both proofs of Dedekind are based on his famous 1888 memoir Was sind und was sollen die Zahlen? 
and derive it as a corollary of a proposition equivalent to statement C in Cantor's paper,[9] which reads A ⊆ B ⊆ C and |A| = |C| implies |A| = |B| = |C|. Cantor observed this property as early as 1882/83 during his studies in set theory and transfinite numbers and was therefore (implicitly) relying on the Axiom of Choice. Prerequisites The 1895 proof by Cantor relied, in effect, on the axiom of choice by inferring the result as a corollary of the well-ordering theorem.[10][11] However, König's proof given above shows that the result can also be proved without using the axiom of choice. On the other hand, König's proof uses the principle of excluded middle to draw a conclusion through case analysis. As such, the above proof is not a constructive one. In fact, in a constructive set theory such as intuitionistic set theory ${\mathsf {IZF}}$, which adopts the full axiom of separation but dispenses with the principle of excluded middle, assuming the Schröder–Bernstein theorem implies the latter.[21] In turn, there is no proof of König's conclusion in this or weaker constructive theories. Therefore, intuitionists do not accept the statement of the Schröder–Bernstein theorem.[22] There is also a proof which uses Tarski's fixed point theorem.[23] See also • Myhill isomorphism theorem • Netto's theorem, according to which the bijections constructed by the Schröder–Bernstein theorem between spaces of different dimensions cannot be continuous • Schröder–Bernstein theorem for measurable spaces • Schröder–Bernstein theorems for operator algebras • Schröder–Bernstein property Notes 1. J. König (1906). "Sur la théorie des ensembles". Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences. 143: 110–112. 2. Thanks to the German and French Wikipedia sister articles, see for instance fr:Théorème de Cantor-Bernstein#Première démonstration 3. "CSM25 - The Cantor-Schröder-Bernstein Theorem" (PDF). www.cs.swan.ac.uk. Retrieved 4 December 2022. 
(Actually, the proof starts on page 3.) 4. Felix Hausdorff (2002), Egbert Brieskorn; Srishti D. Chatterji; et al. (eds.), Grundzüge der Mengenlehre (1. ed.), Berlin/Heidelberg: Springer, p. 587, ISBN 978-3-540-42224-2 – Original edition (1914) 5. Georg Cantor (1887), "Mitteilungen zur Lehre vom Transfiniten", Zeitschrift für Philosophie und philosophische Kritik, 91: 81–125 Reprinted in: Georg Cantor (1932), Adolf Fraenkel (Lebenslauf); Ernst Zermelo (eds.), Gesammelte Abhandlungen mathematischen und philosophischen Inhalts, Berlin: Springer, pp. 378–439 Here: p.413 bottom 6. Richard Dedekind (1932), Robert Fricke; Emmy Noether; Øystein Ore (eds.), Gesammelte mathematische Werke, vol. 3, Braunschweig: Friedr. Vieweg & Sohn, pp. 447–449 (Ch.62) 7. Ernst Zermelo (1908), Felix Klein; Walther von Dyck; David Hilbert; Otto Blumenthal (eds.), "Untersuchungen über die Grundlagen der Mengenlehre I", Mathematische Annalen, 65 (2): 261–281, here: p.271–272, doi:10.1007/bf01449999, ISSN 0025-5831, S2CID 120085563 8. Richard Dedekind (1888), Was sind und was sollen die Zahlen? (2., unchanged (1893) ed.), Braunschweig: Friedr. Vieweg & Sohn 9. Georg Cantor (1932), Adolf Fraenkel (Lebenslauf); Ernst Zermelo (eds.), Gesammelte Abhandlungen mathematischen und philosophischen Inhalts, Berlin: Springer, pp. 285 ("Satz B") 10. Georg Cantor (1895). "Beiträge zur Begründung der transfiniten Mengenlehre (1)". Mathematische Annalen. 46 (4): 481–512 (Theorem see "Satz B", p.484). doi:10.1007/bf02124929. S2CID 177801164. 11. (Georg Cantor (1897). "Beiträge zur Begründung der transfiniten Mengenlehre (2)". Mathematische Annalen. 49 (2): 207–246. doi:10.1007/bf01444205. S2CID 121665994.) 12. Friedrich M. Hartogs (1915), Felix Klein; Walther von Dyck; David Hilbert; Otto Blumenthal (eds.), "Über das Problem der Wohlordnung", Mathematische Annalen, 76 (4): 438–443, doi:10.1007/bf01458215, ISSN 0025-5831, S2CID 121598654 13. Ernst Schröder (1896). "Über G. Cantorsche Sätze". 
Jahresbericht der Deutschen Mathematiker-Vereinigung. 5: 81–82. 14. Oliver Deiser (2010), Einführung in die Mengenlehre – Die Mengenlehre Georg Cantors und ihre Axiomatisierung durch Ernst Zermelo, Springer-Lehrbuch (3rd, corrected ed.), Berlin/Heidelberg: Springer, pp. 71, 501, doi:10.1007/978-3-642-01445-1, ISBN 978-3-642-01444-4 15. Patrick Suppes (1972), Axiomatic Set Theory (1. ed.), New York: Dover Publications, pp. 95 f, ISBN 978-0-486-61630-8 16. Émile Borel (1898), Leçons sur la théorie des fonctions, Paris: Gauthier-Villars et fils, pp. 103 ff 17. Felix Bernstein (1901), Untersuchungen aus der Mengenlehre, Halle a. S.: Buchdruckerei des Waisenhauses Reprinted in: Felix Bernstein (1905), Felix Klein; Walther von Dyck; David Hilbert (eds.), "Untersuchungen aus der Mengenlehre", Mathematische Annalen, 61 (1): 117–155, (Theorem see "Satz 1" on p.121), doi:10.1007/bf01457734, ISSN 0025-5831, S2CID 119658724 18. Ernst Schröder (1898), Kaiserliche Leopoldino-Carolinische Deutsche Akademie der Naturforscher (ed.), "Ueber zwei Definitionen der Endlichkeit und G. Cantor'sche Sätze", Nova Acta, 71 (6): 303–376 (proof: p.336–344) 19. Alwin R. Korselt (1911), Felix Klein; Walther von Dyck; David Hilbert; Otto Blumenthal (eds.), "Über einen Beweis des Äquivalenzsatzes", Mathematische Annalen, 70 (2): 294–296, doi:10.1007/bf01461161, ISSN 0025-5831, S2CID 119757900 20. Korselt (1911), p.295 21. Pradic, Pierre; Brown, Chad E. (2019). "Cantor-Bernstein implies Excluded Middle". arXiv:1904.09193 [math.LO]. 22. Ettore Carruccio (2006). Mathematics and Logic in History and in Contemporary Thought. Transaction Publishers. p. 354. ISBN 978-0-202-30850-0. 23. R. Uhl, "Tarski's Fixed Point Theorem", from MathWorld–a Wolfram Web Resource, created by Eric W. Weisstein. (Example 3) References • Martin Aigner & Gunter M. 
Ziegler (1998) Proofs from THE BOOK, § 3 Analysis: Sets and functions, Springer books MR1723092, fifth edition 2014 MR3288091, sixth edition 2018 MR3823190 • Hinkis, Arie (2013), Proofs of the Cantor-Bernstein theorem. A mathematical excursion, Science Networks. Historical Studies, vol. 45, Heidelberg: Birkhäuser/Springer, doi:10.1007/978-3-0348-0224-6, ISBN 978-3-0348-0223-9, MR 3026479 • Searcóid, Míchaél Ó (2013). "On the history and mathematics of the equivalence theorem". Mathematical Proceedings of the Royal Irish Academy. 113A (2): 151–68. doi:10.1353/mpr.2013.0006. JSTOR 42912521. S2CID 245841055. External links • Weisstein, Eric W. "Schröder-Bernstein Theorem". MathWorld. • Cantor-Schroeder-Bernstein theorem at the nLab • Cantor-Bernstein’s Theorem in a Semiring by Marcel Crabbé. • This article incorporates material from the Citizendium article "Schröder-Bernstein_theorem", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
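The article's first worked example, the bijection $h:[0,1]\to [0,1)$ built from $f(x)=x/2$, $g(x)=x$ and $C=\{1,{\tfrac {1}{2}},{\tfrac {1}{4}},...\}$, can be evaluated numerically. A minimal Python sketch (it relies on the fact that powers of 1/2 are represented exactly in binary floating point, so the membership test for C is reliable):

```python
def h(x):
    """Bijection [0, 1] -> [0, 1): h(x) = x/2 on C = {1, 1/2, 1/4, ...},
    and h(x) = x elsewhere, following the example's C_k construction."""
    if x > 0:
        r = 1 / x
        # x lies in C exactly when 1/x is a power of two (including 2**0 = 1)
        if r == int(r) and int(r) & (int(r) - 1) == 0:
            return x / 2
    return x

values = [h(x) for x in (0.0, 0.3, 0.5, 1.0)]   # e.g. h(1) = 0.5, h(0.3) = 0.3
```

On C the map shifts each point one step down the chain 1, 1/2, 1/4, ..., which is exactly how the proof absorbs the "extra" endpoint 1.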
Schröder–Bernstein theorems for operator algebras The Schröder–Bernstein theorem from set theory has analogs in the context of operator algebras. This article discusses such operator-algebraic results. For von Neumann algebras Suppose M is a von Neumann algebra and E, F are projections in M. Let ~ denote the Murray–von Neumann equivalence relation on M. Define a partial order « on the family of projections by E « F if E ~ F' for some projection F' ≤ F. In other words, E « F if there exists a partial isometry U ∈ M such that U*U = E and UU* ≤ F. For closed subspaces M and N whose projections PM and PN, onto M and N respectively, are elements of M, write M « N if PM « PN. The Schröder–Bernstein theorem states that if M « N and N « M, then M ~ N. A proof, similar in spirit to the set-theoretic argument, can be sketched as follows. Colloquially, N « M means that N can be isometrically embedded in M. So $M=M_{0}\supset N_{0}$ where N0 is an isometric copy of N in M. By assumption, N, and therefore N0, contains an isometric copy M1 of M. Therefore, one can write $M=M_{0}\supset N_{0}\supset M_{1}.$ By induction, $M=M_{0}\supset N_{0}\supset M_{1}\supset N_{1}\supset M_{2}\supset N_{2}\supset \cdots .$ It is clear that $R=\cap _{i\geq 0}M_{i}=\cap _{i\geq 0}N_{i}.$ Let $M\ominus N{\stackrel {\mathrm {def} }{=}}M\cap (N)^{\perp }.$ So $M=\oplus _{i\geq 0}(M_{i}\ominus N_{i})\quad \oplus \quad \oplus _{j\geq 0}(N_{j}\ominus M_{j+1})\quad \oplus R$ and $N_{0}=\oplus _{i\geq 1}(M_{i}\ominus N_{i})\quad \oplus \quad \oplus _{j\geq 0}(N_{j}\ominus M_{j+1})\quad \oplus R.$ Notice $M_{i}\ominus N_{i}\sim M\ominus N\quad {\mbox{for all}}\quad i.$ The theorem now follows from the countable additivity of ~. Representations of C*-algebras There is also an analog of Schröder–Bernstein for representations of C*-algebras. If A is a C*-algebra, a representation of A is a *-homomorphism φ from A into L(H), the bounded operators on some Hilbert space H. 
If there exists a projection P in L(H) where P φ(a) = φ(a) P for every a in A, then a subrepresentation σ of φ can be defined in a natural way: σ(a) is φ(a) restricted to the range of P. So φ then can be expressed as a direct sum of two subrepresentations φ = φ' ⊕ σ. Two representations φ1 and φ2, on H1 and H2 respectively, are said to be unitarily equivalent if there exists a unitary operator U: H2 → H1 such that φ1(a)U = Uφ2(a), for every a. In this setting, the Schröder–Bernstein theorem reads: If two representations ρ and σ, on Hilbert spaces H and G respectively, are each unitarily equivalent to a subrepresentation of the other, then they are unitarily equivalent. A proof that resembles the previous argument can be outlined. The assumption implies that there exist surjective partial isometries from H to G and from G to H. Fix two such partial isometries for the argument. One has $\rho =\rho _{1}\simeq \rho _{1}'\oplus \sigma _{1}\quad {\mbox{where}}\quad \sigma _{1}\simeq \sigma .$ In turn, $\rho _{1}\simeq \rho _{1}'\oplus (\sigma _{1}'\oplus \rho _{2})\quad {\mbox{where}}\quad \rho _{2}\simeq \rho .$ By induction, $\rho _{1}\simeq \rho _{1}'\oplus \sigma _{1}'\oplus \rho _{2}'\oplus \sigma _{2}'\cdots \simeq (\oplus _{i\geq 1}\rho _{i}')\oplus (\oplus _{i\geq 1}\sigma _{i}'),$ and $\sigma _{1}\simeq \sigma _{1}'\oplus \rho _{2}'\oplus \sigma _{2}'\cdots \simeq (\oplus _{i\geq 2}\rho _{i}')\oplus (\oplus _{i\geq 1}\sigma _{i}').$ Now each additional summand in the direct sum expression is obtained using one of the two fixed partial isometries, so $\rho _{i}'\simeq \rho _{j}'\quad {\mbox{and}}\quad \sigma _{i}'\simeq \sigma _{j}'\quad {\mbox{for all}}\quad i,j\;.$ This proves the theorem. See also • Schröder–Bernstein theorem for measurable spaces • Schröder–Bernstein property References • B. Blackadar, Operator Algebras, Springer, 2006. 
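Both arguments above mirror the classical set-theoretic Schröder–Bernstein construction: partition one side according to the back-and-forth chain of embeddings, then use one injection on part of the domain and the inverse of the other injection on the rest. A minimal Python sketch of that underlying construction, on finite sets for concreteness (the sets, maps, and function name are illustrative, not from the articles above):

```python
def schroder_bernstein(A, B, f, g):
    """Build a bijection A -> B from injections f: A -> B and g: B -> A,
    following the classical back-and-forth partition."""
    g_image = {g(b) for b in B}
    g_inverse = {g(b): b for b in B}
    # Elements of A not hit by g are the "orphans"; closing them under
    # g o f gives the part of A on which f must be used (the analogue
    # of the pieces M_i (-) N_i in the proof sketch above).
    use_f = set(A) - g_image
    frontier = set(use_f)
    while frontier:
        frontier = {g(f(a)) for a in frontier} - use_f
        use_f |= frontier
    return {a: (f(a) if a in use_f else g_inverse[a]) for a in A}

# Example: two four-element sets with injections in both directions.
A = {0, 1, 2, 3}
B = {'a', 'b', 'c', 'd'}
f = lambda a: 'abcd'[a]
g = lambda b: {'a': 1, 'b': 2, 'c': 3, 'd': 0}[b]
h = schroder_bernstein(A, B, f, g)
assert sorted(h.values()) == sorted(B)  # h is a bijection
```

On finite sets of equal size any injection is already a bijection, so the partition here is degenerate; the point is only to make the orphan-chasing bookkeeping of the proof concrete.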
Schröder–Hipparchus number In combinatorics, the Schröder–Hipparchus numbers form an integer sequence that can be used to count the number of plane trees with a given set of leaves, the number of ways of inserting parentheses into a sequence, and the number of ways of dissecting a convex polygon into smaller polygons by inserting diagonals. These numbers begin 1, 1, 3, 11, 45, 197, 903, 4279, 20793, 103049, ... (sequence A001003 in the OEIS). They are also called the super-Catalan numbers, the little Schröder numbers, or the Hipparchus numbers, after Eugène Charles Catalan and his Catalan numbers, Ernst Schröder and the closely related Schröder numbers, and the ancient Greek mathematician Hipparchus, who appears from evidence in Plutarch to have known of these numbers. Combinatorial enumeration applications The Schröder–Hipparchus numbers may be used to count several closely related combinatorial objects:[1][2][3][4] • The nth number in the sequence counts the number of different ways of subdividing a polygon with n + 1 sides into smaller polygons by adding diagonals of the original polygon. • The nth number counts the number of different plane trees with n leaves and with all internal vertices having two or more children. • The nth number counts the number of different ways of inserting parentheses into a sequence of n + 1 symbols, with each pair of parentheses surrounding two or more symbols or parenthesized groups, and without any parentheses surrounding the entire sequence. 
• The nth number counts the number of faces of all dimensions of an associahedron Kn + 1 of dimension n − 1, including the associahedron itself as a face, but not including the empty set. For instance, the two-dimensional associahedron K4 is a pentagon; it has five vertices, five edges, and the whole associahedron itself, for a total of 11 faces. As the figure shows, there is a simple combinatorial equivalence between these objects: a polygon subdivision has a plane tree as a form of its dual graph, the leaves of the tree correspond to the symbols in a parenthesized sequence, and the internal nodes of the tree other than the root correspond to parenthesized groups. The parenthesized sequence itself may be written around the perimeter of the polygon with its symbols on the sides of the polygon and with parentheses at the endpoints of the selected diagonals. This equivalence provides a bijective proof that all of these kinds of objects are counted by a single integer sequence.[2] The same numbers also count the number of double permutations (sequences of the numbers from 1 to n, each number appearing twice, with the first occurrences of each number in sorted order) that avoid the permutation patterns 12312 and 121323.[5] Related sequences The closely related large Schröder numbers are equal to twice the Schröder–Hipparchus numbers, and may also be used to count several types of combinatorial objects including certain kinds of lattice paths, partitions of a rectangle into smaller rectangles by recursive slicing, and parenthesizations in which a pair of parentheses surrounding the whole sequence of elements is also allowed. 
The Catalan numbers also count closely related sets of objects including subdivisions of a polygon into triangles, plane trees in which all internal nodes have exactly two children, and parenthesizations in which each pair of parentheses surrounds exactly two symbols or parenthesized groups.[3] The sequence of Catalan numbers and the sequence of Schröder–Hipparchus numbers, viewed as infinite-dimensional vectors, are the unique eigenvectors for the first two in a sequence of naturally defined linear operators on number sequences.[6][7] More generally, the kth sequence in this sequence of integer sequences is (x1, x2, x3, ...) where the numbers xn are calculated as the sums of Narayana numbers multiplied by powers of k. This can be expressed as a hypergeometric function: $x_{n}=\sum _{i=1}^{n}N(n,i)\,k^{i-1}=\sum _{i=1}^{n}{\frac {1}{n}}{n \choose i}{n \choose i-1}k^{i-1}={}_{2}F_{1}(1-n,-n;2;k).$ Substituting k = 1 into this formula gives the Catalan numbers and substituting k = 2 into this formula gives the Schröder–Hipparchus numbers.[7] In connection with the property of Schröder–Hipparchus numbers of counting faces of an associahedron, the number of vertices of the associahedron is given by the Catalan numbers. The corresponding numbers for the permutohedron are respectively the ordered Bell numbers and the factorials. Recurrence As well as the summation formula above, the Schröder–Hipparchus numbers may be defined by a recurrence relation: $x_{n}={\frac {1}{n}}\left((6n-9)x_{n-1}-(n-3)x_{n-2}\right).$ Stanley proves this fact using generating functions[8] while Foata and Zeilberger provide a direct combinatorial proof.[9] History Plutarch's dialogue Table Talk contains the line: Chrysippus says that the number of compound propositions that can be made from only ten simple propositions exceeds a million. 
(Hipparchus, to be sure, refuted this by showing that on the affirmative side there are 103,049 compound statements, and on the negative side 310,952.)[8] This statement went unexplained until 1994, when David Hough, a graduate student at George Washington University, observed that there are 103049 ways of inserting parentheses into a sequence of ten items.[1][8][10] The historian of mathematics Fabio Acerbi (CNRS) has shown that a similar explanation can be provided for the other number: it is very close to the average of the tenth and eleventh Schröder–Hipparchus numbers, 310954, and counts bracketings of ten terms together with a negative particle.[10] The problem of counting parenthesizations was introduced to modern mathematics by Schröder (1870).[11] Plutarch's recounting of Hipparchus's two numbers records a disagreement between Hipparchus and the earlier Stoic philosopher Chrysippus, who had claimed that the number of compound propositions that can be made from 10 simple propositions exceeds a million. Contemporary philosopher Susanne Bobzien (2011) has argued that Chrysippus's calculation was the correct one, based on her analysis of Stoic logic. Bobzien reconstructs the calculations of both Chrysippus and Hipparchus, and says that showing how Hipparchus got his mathematics correct but his Stoic logic wrong can cast new light on the Stoic notions of conjunctions and assertibles.[12] References 1. Stanley, Richard P. (1997, 1999), Enumerative Combinatorics, Cambridge University Press. Exercise 1.45, vol. I, p. 51; vol. II, pp. 176–178 and p. 213. 2. Shapiro, Louis W.; Sulanke, Robert A. (2000), "Bijections for the Schröder numbers", Mathematics Magazine, 73 (5): 369–376, doi:10.2307/2690814, JSTOR 2690814, MR 1805263. 3. Etherington, I. M. H. (1940), "Some problems of non-associative combinations (I)", Edinburgh Mathematical Notes, 32: 1–6, doi:10.1017/S0950184300002639. 4. 
Holtkamp, Ralf (2006), "On Hopf algebra structures over free operads", Advances in Mathematics, 207 (2): 544–565, arXiv:math/0407074, doi:10.1016/j.aim.2005.12.004, MR 2271016, S2CID 15908662. 5. Chen, William Y. C.; Mansour, Toufik; Yan, Sherry H. F. (2006), "Matchings avoiding partial patterns", Electronic Journal of Combinatorics, 13 (1): Research Paper 112, 17 pp. (electronic), doi:10.37236/1138, MR 2274327. 6. Bernstein, M.; Sloane, N. J. A. (1995), "Some canonical sequences of integers", Linear Algebra and Its Applications, 226/228: 57–72, arXiv:math/0205301, doi:10.1016/0024-3795(94)00245-9, MR 1344554, S2CID 14672360. 7. Coker, Curtis (2004), "A family of eigensequences", Discrete Mathematics, 282 (1–3): 249–250, doi:10.1016/j.disc.2003.12.008, MR 2059525. 8. Stanley, Richard P. (1997), "Hipparchus, Plutarch, Schröder, and Hough" (PDF), American Mathematical Monthly, 104 (4): 344–350, doi:10.2307/2974582, JSTOR 2974582, MR 1450667. 9. Foata, Dominique; Zeilberger, Doron (1997), "A classic proof of a recurrence for a very classical sequence", Journal of Combinatorial Theory, Series A, 80 (2): 380–384, arXiv:math/9805015, doi:10.1006/jcta.1997.2814, MR 1485153, S2CID 537142. 10. Acerbi, F. (2003), "On the shoulders of Hipparchus: A reappraisal of ancient Greek combinatorics" (PDF), Archive for History of Exact Sciences, 57: 465–502, doi:10.1007/s00407-003-0067-0, S2CID 122758966, archived from the original (PDF) on 2011-07-21. 11. Schröder, Ernst (1870), "Vier kombinatorische Probleme", Zeitschrift für Mathematik und Physik, 15: 361–376. 12. Bobzien, Susanne (Summer 2011), "The Combinatorics of Stoic Conjunction: Hipparchus refuted, Chrysippus vindicated" (PDF), Oxford Studies in Ancient Philosophy, XL: 157–188. 
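The Narayana-number formula and the recurrence given earlier are easy to confirm numerically, and the check reproduces Hipparchus's 103049 as the count for ten terms. A minimal Python sketch (function names are ours; note that the recurrence, as stated in the article, indexes the sequence so that x1 = x2 = 1, one step behind the closed form):

```python
from math import comb

def via_recurrence(n_max):
    # x_1 = x_2 = 1, then x_n = ((6n - 9) x_{n-1} - (n - 3) x_{n-2}) / n
    x = [1, 1]
    for n in range(3, n_max + 1):
        x.append(((6 * n - 9) * x[-1] - (n - 3) * x[-2]) // n)
    return x

def via_narayana(n, k=2):
    # Sum of Narayana numbers N(n, i) times k^(i-1); each term
    # C(n, i) C(n, i-1) is divisible by n, so the division is exact.
    # k = 2 gives the Schröder-Hipparchus numbers, k = 1 the Catalan numbers.
    return sum(comb(n, i) * comb(n, i - 1) * k ** (i - 1) for i in range(1, n + 1)) // n

xs = via_recurrence(10)
print(xs)  # [1, 1, 3, 11, 45, 197, 903, 4279, 20793, 103049]
assert all(via_narayana(n) == xs[n] for n in range(1, 10))  # formula agrees, shifted by one
assert via_narayana(9) == 103049   # Hipparchus: bracketings of ten terms
assert via_narayana(4, k=1) == 14  # k = 1 recovers the Catalan number C_4
```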
External links • Weisstein, Eric W., "Super Catalan Number", MathWorld • The Hipparchus Operad, The n-Category Café, April 1, 2013
Schröder–Bernstein theorem for measurable spaces The Cantor–Bernstein–Schroeder theorem of set theory has a counterpart for measurable spaces, sometimes called the Borel Schroeder–Bernstein theorem, since measurable spaces are also called Borel spaces. This theorem, whose proof is quite easy, is instrumental when proving that two measurable spaces are isomorphic. The general theory of standard Borel spaces contains very strong results about isomorphic measurable spaces, see Kuratowski's theorem. However, (a) the latter theorem is very difficult to prove, (b) the former theorem is satisfactory in many important cases (see Examples), and (c) the former theorem is used in the proof of the latter theorem. The theorem Let $X$ and $Y$ be measurable spaces. If there exist injective, bimeasurable maps $f:X\to Y,$ $g:Y\to X,$ then $X$ and $Y$ are isomorphic (the Schröder–Bernstein property). Comments The phrase "$f$ is bimeasurable" means that, first, $f$ is measurable (that is, the preimage $f^{-1}(B)$ is measurable for every measurable $B\subset Y$), and second, the image $f(A)$ is measurable for every measurable $A\subset X$. (Thus, $f(X)$ must be a measurable subset of $Y,$ not necessarily the whole $Y.$) An isomorphism (between two measurable spaces) is, by definition, a bimeasurable bijection. If it exists, these measurable spaces are called isomorphic. Proof First, one constructs a bijection $h:X\to Y$ out of $f$ and $g$ exactly as in the proof of the Cantor–Bernstein–Schroeder theorem. Second, $h$ is measurable, since it coincides with $f$ on a measurable set and with $g^{-1}$ on its complement. Similarly, $h^{-1}$ is measurable. Examples Example 1 The open interval (0, 1) and the closed interval [0, 1] are evidently non-isomorphic as topological spaces (that is, not homeomorphic). However, they are isomorphic as measurable spaces. Indeed, the closed interval is evidently isomorphic to a shorter closed subinterval of the open interval. 
Also the open interval is evidently isomorphic to a part of the closed interval (just itself, for instance). Example 2 The real line $\mathbb {R} $ and the plane $\mathbb {R} ^{2}$ are isomorphic as measurable spaces. It is immediate to embed $\mathbb {R} $ into $\mathbb {R} ^{2}.$ The converse, an embedding of $\mathbb {R} ^{2}$ into $\mathbb {R} $ (as measurable spaces, of course, not as topological spaces), can be made by a well-known trick with interspersed digits; for example, g(π, 100e) = g(3.14159 265…, 271.82818 28…) = 20731.184218 51982 2685…. The map $g:\mathbb {R} ^{2}\to \mathbb {R} $ is clearly injective. It is easy to check that it is bimeasurable. (However, it is not bijective; for example, the number $1/11=0.090909\dots $ is not of the form $g(x,y)$.) References • S.M. Srivastava, A Course on Borel Sets, Springer, 1998. See Proposition 3.3.6 (on page 96), and the first paragraph of Section 3.3 (on page 94).
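The interspersed-digit trick can be made concrete. A small Python sketch working on fixed-precision decimal strings (illustrative only: the genuine map acts on full decimal expansions, and the function name and precision are our assumptions):

```python
def interleave(x, y, digits=12):
    """Interleave the decimal digits of two positive reals, taking one
    digit from x, then one from y, as in the embedding R^2 -> R above."""
    sx, sy = f"{x:.{digits}f}", f"{y:.{digits}f}"
    ix, fx = sx.split('.')
    iy, fy = sy.split('.')
    # Pad the integer parts to a common width so digit pairs align.
    width = max(len(ix), len(iy))
    ix, iy = ix.zfill(width), iy.zfill(width)
    int_part = ''.join(a + b for a, b in zip(ix, iy))
    frac_part = ''.join(a + b for a, b in zip(fx, fy))
    return float(int_part + '.' + frac_part)

print(interleave(3.14159265, 271.8281828))  # ≈ 20731.18421851982..., as in the example
```

As the article notes, the resulting map is injective but not surjective, since decimal expansions ending in repeating patterns such as 0.0909... are never produced.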
Schröder's equation Schröder's equation,[1][2][3] named after Ernst Schröder, is a functional equation with one independent variable: given the function h, find the function Ψ such that $\forall x\;\;\;\Psi {\big (}h(x){\big )}=s\Psi (x).$ Not to be confused with Schrödinger's equation. Schröder's equation is an eigenvalue equation for the composition operator Ch that sends a function f to f(h(.)). If a is a fixed point of h, meaning h(a) = a, then either Ψ(a) = 0 (or ∞) or s = 1. Thus, provided that Ψ(a) is finite and Ψ′(a) does not vanish or diverge, the eigenvalue s is given by s = h′(a). Functional significance For a = 0, if h is analytic on the unit disk, fixes 0, and 0 < |h′(0)| < 1, then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) Ψ satisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf. Koenigs function. Equations such as Schröder's are suitable to encoding self-similarity, and have thus been extensively utilized in studies of nonlinear dynamics (often referred to colloquially as chaos theory). It is also used in studies of turbulence, as well as the renormalization group.[4][5] An equivalent transpose form of Schröder's equation for the inverse Φ = Ψ−1 of Schröder's conjugacy function is h(Φ(y)) = Φ(sy). The change of variables α(x) = log(Ψ(x))/log(s) (the Abel function) further converts Schröder's equation to the older Abel equation, α(h(x)) = α(x) + 1. Similarly, the change of variables Ψ(x) = log(φ(x)) converts Schröder's equation to Böttcher's equation, φ(h(x)) = (φ(x))s. Moreover, for the velocity,[5] β(x) = Ψ/Ψ′,   Julia's equation,   β(f(x)) = f′(x)β(x), holds. The n-th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue sn, instead. 
In the same vein, for an invertible solution Ψ(x) of Schröder's equation, the (non-invertible) function Ψ(x) k(log Ψ(x)) is also a solution, for any periodic function k(x) with period log(s). All solutions of Schröder's equation are related in this manner. Solutions Schröder's equation was solved analytically if a is an attracting (but not superattracting) fixed point, that is 0 < |h′(a)| < 1 by Gabriel Koenigs (1884).[6][7] In the case of a superattracting fixed point, |h′(a)| = 0, Schröder's equation is unwieldy, and had best be transformed to Böttcher's equation.[8] There are a good number of particular solutions dating back to Schröder's original 1870 paper.[1] The series expansion around a fixed point and the relevant convergence properties of the solution for the resulting orbit and its analyticity properties are cogently summarized by Szekeres.[9] Several of the solutions are furnished in terms of asymptotic series, cf. Carleman matrix. Applications See also: Rational difference equation It is used to analyse discrete dynamical systems by finding a new coordinate system in which the system (orbit) generated by h(x) looks simpler, a mere dilation. More specifically, a system for which a discrete unit time step amounts to x → h(x), can have its smooth orbit (or flow) reconstructed from the solution of the above Schröder's equation, its conjugacy equation. That is, h(x) = Ψ−1(s Ψ(x)) ≡ h1(x). In general, all of its functional iterates (its regular iteration group, see iterated function) are provided by the orbit $h_{t}(x)=\Psi ^{-1}{\big (}s^{t}\Psi (x){\big )},$ for t real — not necessarily positive or integer. (Thus a full continuous group.) The set of hn(x), i.e., of all positive integer iterates of h(x) (semigroup) is called the splinter (or Picard sequence) of h(x). 
However, all iterates (fractional, infinitesimal, or negative) of h(x) are likewise specified through the coordinate transformation Ψ(x) determined to solve Schröder's equation: a holographic continuous interpolation of the initial discrete recursion x → h(x) has been constructed;[10] in effect, the entire orbit. For instance, the functional square root is h1/2(x) = Ψ−1(s1/2 Ψ(x)), so that h1/2(h1/2(x)) = h(x), and so on. For example,[11] special cases of the logistic map such as the chaotic case h(x) = 4x(1 − x) were already worked out by Schröder in his original article[1] (p. 306), Ψ(x) = (arcsin √x)2, s = 4, and hence ht(x) = sin2(2t arcsin √x). In fact, this solution is seen to result as motion dictated by a sequence of switchback potentials,[12] V(x) ∝ x(x − 1) (nπ + arcsin √x)2, a generic feature of continuous iterates effected by Schröder's equation. A nonchaotic case he also illustrated with his method, h(x) = 2x(1 − x), yields Ψ(x) = −1/2ln(1 − 2x), and hence ht(x) = −1/2((1 − 2x)2t − 1). Likewise, for the Beverton–Holt model, h(x) = x/(2 − x), one readily finds[10] Ψ(x) = x/(1 − x), so that[13] $h_{t}(x)=\Psi ^{-1}{\big (}2^{-t}\Psi (x){\big )}={\frac {x}{2^{t}+x(1-2^{t})}}.$ See also • Böttcher's equation References 1. Schröder, Ernst (1870). "Ueber iterirte Functionen". Math. Ann. 3 (2): 296–322. doi:10.1007/BF01443992. 2. Carleson, Lennart; Gamelin, Theodore W. (1993). Complex Dynamics. Textbook series: Universitext: Tracts in Mathematics. Springer-Verlag. ISBN 0-387-97942-5. 3. Kuczma, Marek (1968). Functional equations in a single variable. Monografie Matematyczne. Warszawa: PWN – Polish Scientific Publishers. ISBN 978-0-02-848110-4. OCLC 489667432. 4. Gell-Mann, M.; Low, F.E. (1954). "Quantum Electrodynamics at Small Distances" (PDF). Physical Review. 95 (5): 1300–1312. Bibcode:1954PhRv...95.1300G. doi:10.1103/PhysRev.95.1300. 5. Curtright, T.L.; Zachos, C.K. (March 2011). "Renormalization Group Functional Equations". Physical Review D. 
83 (6): 065019. arXiv:1010.5174. Bibcode:2011PhRvD..83f5019C. doi:10.1103/PhysRevD.83.065019. 6. Koenigs, G. (1884). "Recherches sur les intégrales de certaines équations fonctionelles" (PDF). Annales Scientifiques de l'École Normale Supérieure. 1 (3, Supplément): 3–41. doi:10.24033/asens.247. 7. Erdős, Paul; Jabotinsky, Eri (1960). "On Analytic Iteration". Journal d'Analyse Mathématique. 8 (1): 361–376. doi:10.1007/BF02786856. 8. Böttcher, L. E. (1904). "The principal laws of convergence of iterates and their application to analysis". Izv. Kazan. Fiz.-Mat. Obshch. (Russian). 14: 155–234. 9. Szekeres, G. (1958). "Regular iteration of real and complex functions". Acta Mathematica. 100 (3–4): 361–376. doi:10.1007/BF02559539. 10. Curtright, T.L.; Zachos, C. K. (2009). "Evolution Profiles and Functional Equations". Journal of Physics A. 42 (48): 485208. arXiv:0909.2424. Bibcode:2009JPhA...42V5208C. doi:10.1088/1751-8113/42/48/485208. 11. Curtright, T. L. Evolution surfaces and Schröder functional methods. 12. Curtright, T. L.; Zachos, C. K. (2010). "Chaotic Maps, Hamiltonian Flows, and Holographic Methods". Journal of Physics A. 43 (44): 445101. arXiv:1002.0104. Bibcode:2010JPhA...43R5101C. doi:10.1088/1751-8113/43/44/445101. 13. Skellam, J.G. (1951). "Random dispersal in theoretical populations". Biometrika. 38 (1–2): 196−218. doi:10.1093/biomet/38.1-2.196. JSTOR 2332328. See equations 41, 42.
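Schröder's solution for the chaotic logistic map quoted above lends itself to a direct numerical check of the continuous iterates ht(x) = sin²(2t arcsin √x), including the functional square root. A minimal Python sketch (function name is ours):

```python
from math import asin, sin, sqrt

def h_t(x, t):
    # h_t(x) = Psi^{-1}(s^t Psi(x)) with Psi(x) = (arcsin sqrt(x))^2 and s = 4,
    # i.e. the continuous iterate of the logistic map h(x) = 4x(1 - x).
    return sin(2 ** t * asin(sqrt(x))) ** 2

x = 0.3
h = 4 * x * (1 - x)                      # one full step of the logistic map
assert abs(h_t(x, 1) - h) < 1e-12        # t = 1 recovers h itself
half = h_t(x, 0.5)                       # functional square root h_{1/2}
assert abs(h_t(half, 0.5) - h) < 1e-12   # h_{1/2}(h_{1/2}(x)) = h(x)
```

The composition check is valid as long as 2^t arcsin √x stays below π/2 (it does for this x); for larger arguments the branch bookkeeping encoded by the switchback potentials mentioned above becomes relevant.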
Schröder number In mathematics, the Schröder number $S_{n},$ also called a large Schröder number or big Schröder number, describes the number of lattice paths from the southwest corner $(0,0)$ of an $n\times n$ grid to the northeast corner $(n,n),$ using only single steps north, $(0,1);$ northeast, $(1,1);$ or east, $(1,0),$ that do not rise above the SW–NE diagonal.[1] The first few Schröder numbers are 1, 2, 6, 22, 90, 394, 1806, 8558, ... (sequence A006318 in the OEIS), where $S_{0}=1$ and $S_{1}=2.$ They were named after the German mathematician Ernst Schröder. Examples The following figure shows the 6 such paths through a $2\times 2$ grid: Related constructions A Schröder path of length $n$ is a lattice path from $(0,0)$ to $(2n,0)$ with steps northeast, $(1,1);$ east, $(2,0);$ and southeast, $(1,-1),$ that do not go below the $x$-axis. The $n$th Schröder number is the number of Schröder paths of length $n$.[2] The following figure shows the 6 Schröder paths of length 2. Similarly, the Schröder numbers count the number of ways to divide a rectangle into $n+1$ smaller rectangles using $n$ cuts through $n$ points given inside the rectangle in general position, each cut intersecting one of the points and dividing only a single rectangle in two (i.e., the number of structurally-different guillotine partitions). This is similar to the process of triangulation, in which a shape is divided into nonoverlapping triangles instead of rectangles. 
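The lattice-path definition can be verified by brute-force enumeration on small grids. A short Python sketch (memoized recursion; the function name is ours), counting paths from (0,0) to (n,n) with steps north, northeast, and east that never rise above the diagonal y = x:

```python
from functools import lru_cache

def big_schroder(n):
    """Count lattice paths (0,0) -> (n,n) with steps N, NE, E that
    stay on or below the SW-NE diagonal (y <= x)."""
    @lru_cache(maxsize=None)
    def paths(x, y):
        if (x, y) == (n, n):
            return 1
        total = 0
        for dx, dy in ((1, 0), (1, 1), (0, 1)):  # E, NE, N
            nx, ny = x + dx, y + dy
            if nx <= n and ny <= n and ny <= nx:
                total += paths(nx, ny)
        return total
    return paths(0, 0)

print([big_schroder(n) for n in range(5)])  # [1, 2, 6, 22, 90]
```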
The following figure shows the 6 such dissections of a rectangle into 3 rectangles using two cuts: Pictured below are the 22 dissections of a rectangle into 4 rectangles using three cuts: The Schröder number $S_{n}$ also counts the separable permutations of length $n-1.$ Related sequences Schröder numbers are sometimes called large or big Schröder numbers because there is another Schröder sequence: the little Schröder numbers, also known as the Schröder-Hipparchus numbers or the super-Catalan numbers. The connections between these paths can be seen in a few ways: • Consider the paths from $(0,0)$ to $(n,n)$ with steps $(1,1),$ $(2,0),$ and $(1,-1)$ that do not rise above the main diagonal. There are two types of paths: those that have movements along the main diagonal and those that do not. The (large) Schröder numbers count both types of paths, and the little Schröder numbers count only the paths that only touch the diagonal but have no movements along it.[3] • Just as there are (large) Schröder paths, a little Schröder path is a Schröder path that has no horizontal steps on the $x$-axis.[4] • If $S_{n}$ is the $n$th Schröder number and $s_{n}$ is the $n$th little Schröder number, then $S_{n}=2s_{n}$ for $n>0$ $(S_{0}=s_{0}=1).$[4] Schröder paths are similar to Dyck paths but allow the horizontal step instead of just diagonal steps. Another similar path is the type of path that the Motzkin numbers count; the Motzkin paths allow the same diagonal paths but allow only a single horizontal step, (1,0), and count such paths from $(0,0)$ to $(n,0)$.[5] There is also a triangular array associated with the Schröder numbers that provides a recurrence relation[6] (though not just with the Schröder numbers). The first few terms are 1, 1, 2, 1, 4, 6, 1, 6, 16, 22, .... (sequence A033877 in the OEIS). 
It is easier to see the connection with the Schröder numbers when the sequence is in its triangular form:

n \ k   0    1    2    3    4     5     6
0       1
1       1    2
2       1    4    6
3       1    6    16   22
4       1    8    30   68   90
5       1    10   48   146  304   394
6       1    12   70   264  714   1412  1806

Then the Schröder numbers are the diagonal entries, i.e. $S_{n}=T(n,n)$ where $T(n,k)$ is the entry in row $n$ and column $k$. The recurrence relation given by this arrangement is $T(n,k)=T(n,k-1)+T(n-1,k-1)+T(n-1,k)$ with $T(n,0)=1$ and $T(n,k)=0$ for $k>n$.[6] Another interesting observation to make is that the sum of the $n$th row is the $(n+1)$st little Schröder number; that is, $\sum _{k=0}^{n}T(n,k)=s_{n+1}$. Recurrence relations With $S_{0}=1$, $S_{1}=2$, [7] $S_{n}=3S_{n-1}+\sum _{k=1}^{n-2}S_{k}S_{n-k-1}$ for $n\geq 2$ and also $S_{n}={\frac {6n-3}{n+1}}S_{n-1}-{\frac {n-2}{n+1}}S_{n-2}$ for $n\geq 2$ Generating function The generating function $G(x)$ of $(S_{n})$ is $G(x)={\frac {1-x-{\sqrt {x^{2}-6x+1}}}{2x}}=\sum _{n=0}^{\infty }S_{n}x^{n}$.[7] Uses One topic of combinatorics is tiling shapes, and one particular instance of this is domino tilings; the question in this instance is, "How many dominoes (that is, $1\times 2$ or $2\times 1$ rectangles) can we arrange on some shape such that none of the dominoes overlap, the entire shape is covered, and none of the dominoes stick out of the shape?" The shape that the Schröder numbers have a connection with is the Aztec diamond. Shown below for reference is an Aztec diamond of order 4 with a possible domino tiling. 
It turns out that the determinant of the $n\times n$ Hankel matrix of the Schröder numbers, that is, the square matrix whose $(i,j)$th entry is $S_{i+j-1},$ is the number of domino tilings of the order $n$ Aztec diamond, which is $2^{n(n+1)/2}.$[8] That is, ${\begin{vmatrix}S_{1}&S_{2}&\cdots &S_{n}\\S_{2}&S_{3}&\cdots &S_{n+1}\\\vdots &\vdots &\ddots &\vdots \\S_{n}&S_{n+1}&\cdots &S_{2n-1}\end{vmatrix}}=2^{n(n+1)/2}.$ For example: • ${\begin{vmatrix}2\end{vmatrix}}=2=2^{1(2)/2}$ • ${\begin{vmatrix}2&6\\6&22\end{vmatrix}}=8=2^{2(3)/2}$ • ${\begin{vmatrix}2&6&22\\6&22&90\\22&90&394\end{vmatrix}}=64=2^{3(4)/2}$ See also • Delannoy number • Motzkin number • Narayana number • Schröder–Hipparchus number • Catalan number References 1. Sloane, N. J. A. (ed.). "Sequence A006318 (Large Schröder numbers (or large Schroeder numbers, or big Schroeder numbers).)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 5 March 2018. 2. Ardila, Federico (2015). "Algebraic and geometric methods in enumerative combinatorics". Handbook of enumerative combinatorics. Boca Raton, FL: CRC Press. pp. 3–172. 3. Sloane, N. J. A. (ed.). "Sequence A001003 (Schroeder's second problem (generalized parentheses); also called super-Catalan numbers or little Schroeder numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 5 March 2018. 4. Drake, Dan (2010). "Bijections from weighted Dyck paths to Schröder paths". arXiv:1006.1959 [math.CO]. 5. Deng, Eva Y. P.; Yan, Wei-Jun (2008). "Some identities on the Catalan, Motzkin, and Schröder numbers". Discrete Applied Mathematics. 156: 2781–2789. doi:10.1016/j.dam.2007.11.014. 6. Sloane, N. J. A. "Triangular array associated with Schroeder numbers". The On-Line Encyclopedia of Integer Sequences. Retrieved 5 March 2018. 7. Qi, Feng; Guo, Bai-Ni (2017). "Some explicit and recursive formulas of the large and little Schröder numbers". Arab Journal of Mathematical Sciences. 
23: 141–147. doi:10.1016/j.ajmsc.2016.06.002. 8. Eu, Sen-Peng; Fu, Tung-Shan (2005). "A simple proof of the Aztec diamond theorem". Electronic Journal of Combinatorics. 12: Research Paper 18, 8 pp. doi:10.37236/1915. S2CID 5978643. Further reading • Weisstein, Eric W. "Schröder Number". MathWorld. • Stanley, Richard P.: Catalan addendum to Enumerative Combinatorics, Volume 2
Wikipedia
Composition of relations In the mathematics of binary relations, the composition of relations is the forming of a new binary relation R; S from two given binary relations R and S. In the calculus of relations, the composition of relations is called relative multiplication,[1] and its result is called a relative product.[2]: 40  Function composition is the special case of composition of relations where all relations involved are functions. The word uncle indicates a compound relation: for a person to be an uncle, he must be the brother of a parent. In algebraic logic it is said that the relation of Uncle ($xUz$) is the composition of relations "is a brother of" ($xBy$) and "is a parent of" ($yPz$). $U=BP\quad {\text{ is equivalent to: }}\quad xByPz{\text{ if and only if }}xUz.$ Beginning with Augustus De Morgan,[3] the traditional form of reasoning by syllogism has been subsumed by relational logical expressions and their composition.[4] Definition If $R\subseteq X\times Y$ and $S\subseteq Y\times Z$ are two binary relations, then their composition $R;S$ is the relation $R;S=\{(x,z)\in X\times Z:{\text{ there exists }}y\in Y{\text{ such that }}(x,y)\in R{\text{ and }}(y,z)\in S\}.$ In other words, $R;S\subseteq X\times Z$ is defined by the rule that says $(x,z)\in R;S$ if and only if there is an element $y\in Y$ such that $x\,R\,y\,S\,z$ (that is, $(x,y)\in R$ and $(y,z)\in S$).[5]: 13  Notational variations The semicolon as an infix notation for composition of relations dates back to Ernst Schröder's textbook of 1895.[6] Gunther Schmidt has renewed the use of the semicolon, particularly in Relational Mathematics (2011).[2]: 40 [7] The use of the semicolon coincides with the notation for function composition used (mostly by computer scientists) in category theory,[8] as well as the notation for dynamic conjunction within linguistic dynamic semantics.[9] A small circle $(R\circ S)$ has been used for the infix notation of composition of relations by John M. 
Howie in his books considering semigroups of relations.[10] However, the small circle is widely used to represent composition of functions $g(f(x))=(g\circ f)(x)$, which reverses the order in which the operations are written. The small circle was used in the introductory pages of Graphs and Relations[5]: 18  until it was dropped in favor of juxtaposition (no infix notation). Juxtaposition $(RS)$ is commonly used in algebra to signify multiplication, so it can likewise signify relative multiplication. Further, with the circle notation, subscripts may be used. Some authors[11] prefer to write $\circ _{l}$ and $\circ _{r}$ explicitly when necessary, depending on whether the left or the right relation is the first one applied. A further variation encountered in computer science is the Z notation: $\circ $ is used to denote the traditional (right) composition, but ⨾ (U+2A3E ⨾ Z NOTATION RELATIONAL COMPOSITION) denotes left composition.[12][13] The binary relations $R\subseteq X\times Y$ are sometimes regarded as the morphisms $R:X\to Y$ in a category Rel which has the sets as objects. In Rel, composition of morphisms is exactly composition of relations as defined above. The category Set of sets is a subcategory of Rel that has the same objects but fewer morphisms. 
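The definition can be modeled directly in a few lines, representing a relation as a set of ordered pairs (an illustrative sketch; the helper names and the sample data, echoing the uncle example, are ours):

```python
def compose(R, S):
    """R ; S = {(x, z) : there is a y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    """R^T = {(y, x) : (x, y) in R}."""
    return {(y, x) for (x, y) in R}

# "is a brother of" ; "is a parent of"  =  "is an uncle of"
B = {("Sam", "Ann"), ("Sam", "Tom")}   # Sam is a brother of Ann and of Tom
P = {("Ann", "Eve")}                   # Ann is a parent of Eve
assert compose(B, P) == {("Sam", "Eve")}   # Sam is an uncle of Eve

# Associativity and the converse law (R ; S)^T = S^T ; R^T
R = {(1, "a"), (2, "b")}
S = {("a", True), ("b", False)}
T = {(True, 3.0)}
assert compose(compose(R, S), T) == compose(R, compose(S, T))
assert converse(compose(R, S)) == compose(converse(S), converse(R))
```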
• If $R$ and $S$ are injective, then $R\,;S$ is injective, which conversely implies only the injectivity of $R.$ • If $R$ and $S$ are surjective, then $R\,;S$ is surjective, which conversely implies only the surjectivity of $S.$ • The set of binary relations on a set $X$ (that is, relations from $X$ to $X$) together with (left or right) relation composition forms a monoid with zero, where the identity map on $X$ is the neutral element, and the empty set is the zero element. Composition in terms of matrices Finite binary relations are represented by logical matrices. The entries of these matrices are either zero or one, depending on whether the relation represented is false or true for the row and column corresponding to compared objects. Working with such matrices involves the Boolean arithmetic with $1+1=1$ and $1\times 1=1.$ An entry in the matrix product of two logical matrices will be 1, then, only if the row and column multiplied have a corresponding 1. Thus the logical matrix of a composition of relations can be found by computing the matrix product of the matrices representing the factors of the composition. "Matrices constitute a method for computing the conclusions traditionally drawn by means of hypothetical syllogisms and sorites."[14] Heterogeneous relations Consider a heterogeneous relation $R\subseteq A\times B;$ that is, where $A$ and $B$ are distinct sets. Then using composition of relation $R$ with its converse $R^{\textsf {T}},$ there are homogeneous relations $RR^{\textsf {T}}$ (on $A$) and $R^{\textsf {T}}R$ (on $B$). 
If for all $x\in A$ there exists some $y\in B$ such that $xRy$ (that is, $R$ is a (left-)total relation), then for all $x$, $xRR^{\textsf {T}}x$, so that $RR^{\textsf {T}}$ is a reflexive relation or $I\subseteq RR^{\textsf {T}}$ where I is the identity relation $\{xIx:x\in A\}.$ Similarly, if $R$ is a surjective relation then $R^{\textsf {T}}R\supseteq I=\{xIx:x\in B\}.$ In this case $R\subseteq RR^{\textsf {T}}R.$ The opposite inclusion occurs for a difunctional relation. The composition ${\bar {R}}^{\textsf {T}}R$ is used to distinguish relations of Ferrers type, which satisfy $R{\bar {R}}^{\textsf {T}}R=R.$ Example Let $A=$ { France, Germany, Italy, Switzerland } and $B=$ { French, German, Italian } with the relation $R$ given by $aRb$ when $b$ is a national language of $a.$ Since both $A$ and $B$ are finite, $R$ can be represented by a logical matrix, assuming rows (top to bottom) and columns (left to right) are ordered alphabetically: ${\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\\1&1&1\end{pmatrix}}.$ The converse relation $R^{\textsf {T}}$ corresponds to the transposed matrix, and the relation composition $R^{\textsf {T}};R$ corresponds to the matrix product $R^{\textsf {T}}R$ when summation is implemented by logical disjunction. It turns out that the $3\times 3$ matrix $R^{\textsf {T}}R$ contains a 1 at every position, while the reversed matrix product computes as: $RR^{\textsf {T}}={\begin{pmatrix}1&0&0&1\\0&1&0&1\\0&0&1&1\\1&1&1&1\end{pmatrix}}.$ This matrix is symmetric, and represents a homogeneous relation on $A.$ Correspondingly, $R^{\textsf {T}}\,;R$ is the universal relation on $B,$ hence any two languages share a nation where they both are spoken (in fact: Switzerland). 
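This computation can be reproduced with logical matrices and the Boolean matrix product (a sketch; the helper names are ours, and the alphabetical orderings follow the example):

```python
def bool_mul(A, B):
    """Logical matrix product: entry (i, j) is 1 iff some k has A[i][k] = B[k][j] = 1."""
    return [[int(any(a & b for a, b in zip(row, col)))
             for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Rows: France, Germany, Italy, Switzerland; columns: French, German, Italian
R = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1],
     [1, 1, 1]]

Rt = transpose(R)
# R^T ; R is the universal relation on the languages: any two of them
# are spoken in a common country (Switzerland).
assert bool_mul(Rt, R) == [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
# R ; R^T relates two countries exactly when they share a national language.
assert bool_mul(R, Rt) == [[1, 0, 0, 1],
                           [0, 1, 0, 1],
                           [0, 0, 1, 1],
                           [1, 1, 1, 1]]
```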
Vice versa, the question whether two given nations share a language can be answered using $R\,;R^{\textsf {T}}.$ Schröder rules For a given set $V,$ the collection of all binary relations on $V$ forms a Boolean lattice ordered by inclusion $(\subseteq ).$ Recall that complementation reverses inclusion: $A\subseteq B{\text{ implies }}B^{\complement }\subseteq A^{\complement }.$ In the calculus of relations[15] it is common to represent the complement of a set by an overbar: ${\bar {A}}=A^{\complement }.$ If $S$ is a binary relation, let $S^{\textsf {T}}$ represent the converse relation, also called the transpose. Then the Schröder rules are $QR\subseteq S\quad {\text{ is equivalent to }}\quad Q^{\textsf {T}}{\bar {S}}\subseteq {\bar {R}}\quad {\text{ is equivalent to }}\quad {\bar {S}}R^{\textsf {T}}\subseteq {\bar {Q}}.$ Verbally, one equivalence can be obtained from another: select the first or second factor and transpose it; then complement the other two relations and permute them.[5]: 15–19  Though this transformation of an inclusion of a composition of relations was detailed by Ernst Schröder, in fact Augustus De Morgan first articulated the transformation as Theorem K in 1860.[4] He wrote[16] $LM\subseteq N{\text{ implies }}{\bar {N}}M^{\textsf {T}}\subseteq {\bar {L}}.$ With Schröder rules and complementation one can solve for an unknown relation $X$ in relation inclusions such as $RX\subseteq S\quad {\text{and}}\quad XR\subseteq S.$ For instance, by Schröder rule $RX\subseteq S{\text{ implies }}R^{\textsf {T}}{\bar {S}}\subseteq {\bar {X}},$ and complementation gives $X\subseteq {\overline {R^{\textsf {T}}{\bar {S}}}},$ which is called the left residual of $S$ by $R$. Quotients Just as composition of relations is a type of multiplication resulting in a product, so some operations compare to division and produce quotients. Three quotients are exhibited here: left residual, right residual, and symmetric quotient. 
The left residual of two relations is defined presuming that they have the same domain (source), and the right residual presumes the same codomain (range, target). The symmetric quotient presumes two relations share a domain and a codomain. Definitions: • Left residual: $A\backslash B\mathrel {:=} {\overline {A^{\textsf {T}}{\bar {B}}}}$ • Right residual: $D/C\mathrel {:=} {\overline {{\bar {D}}C^{\textsf {T}}}}$ • Symmetric quotient: $\operatorname {syq} (E,F)\mathrel {:=} {\overline {E^{\textsf {T}}{\bar {F}}}}\cap {\overline {{\bar {E}}^{\textsf {T}}F}}$ Using Schröder's rules, $AX\subseteq B$ is equivalent to $X\subseteq A\backslash B.$ Thus the left residual is the greatest relation satisfying $AX\subseteq B.$ Similarly, the inclusion $YC\subseteq D$ is equivalent to $Y\subseteq D/C,$ and the right residual is the greatest relation satisfying $YC\subseteq D.$[2]: 43–6  One can practice the logic of residuals with Sudoku. Join: another form of composition A fork operator $(<)$ has been introduced to fuse two relations $c:H\to A$ and $d:H\to B$ into $c\,(<)\,d:H\to A\times B.$ The construction depends on projections $a:A\times B\to A$ and $b:A\times B\to B,$ understood as relations, meaning that there are converse relations $a^{\textsf {T}}$ and $b^{\textsf {T}}.$ Then the fork of $c$ and $d$ is given by[17] $c\,(<)\,d~\mathrel {:=} ~c;a^{\textsf {T}}\cap \ d;b^{\textsf {T}}.$ Another form of composition of relations, which applies to general $n$-place relations for $n\geq 2,$ is the join operation of relational algebra. The usual composition of two binary relations as defined here can be obtained by taking their join, leading to a ternary relation, followed by a projection that removes the middle component. For example, in the query language SQL there is the operation Join (SQL). See also • Demonic composition • Friend of a friend – Human contact that exists because of a mutual friend Notes 1. 
Bjarni Jónsson (1984) "Maximal Algebras of Binary Relations", in Contributions to Group Theory, K.I. Appel editor American Mathematical Society ISBN 978-0-8218-5035-0 2. Gunther Schmidt (2011) Relational Mathematics, Encyclopedia of Mathematics and its Applications, vol. 132, Cambridge University Press ISBN 978-0-521-76268-7 3. A. De Morgan (1860) "On the Syllogism: IV and on the Logic of Relations" 4. Daniel D. Merrill (1990) Augustus De Morgan and the Logic of Relations, page 121, Kluwer Academic ISBN 9789400920477 5. Gunther Schmidt & Thomas Ströhlein (1993) Relations and Graphs, Springer books 6. Ernst Schröder (1895) Algebra und Logik der Relative 7. Paul Taylor (1999). Practical Foundations of Mathematics. Cambridge University Press. p. 24. ISBN 978-0-521-63107-5. A free HTML version of the book is available at http://www.cs.man.ac.uk/~pt/Practical_Foundations/ 8. Michael Barr & Charles Wells (1998) Category Theory for Computer Scientists Archived 2016-03-04 at the Wayback Machine, page 6, from McGill University 9. Rick Nouwen and others (2016) Dynamic Semantics §2.2, from Stanford Encyclopedia of Philosophy 10. John M. Howie (1995) Fundamentals of Semigroup Theory, page 16, LMS Monograph #12, Clarendon Press ISBN 0-19-851194-9 11. Kilp, Knauer & Mikhalev, p. 7 12. ISO/IEC 13568:2002(E), p. 23 13. Unicode character: Z Notation relational composition from FileFormat.info 14. Irving Copilowish (December 1948) "Matrix development of the calculus of relations", Journal of Symbolic Logic 13(4): 193–203 Jstor link, quote from page 203 15. Vaughn Pratt The Origins of the Calculus of Relations, from Stanford University 16. De Morgan indicated contraries by lower case, conversion as M−1, and inclusion with )), so his notation was $nM^{-1}))\ l.$ 17. Gunther Schmidt and Michael Winter (2018): Relational Topology, page 26, Lecture Notes in Mathematics vol. 2208, Springer books, ISBN 978-3-319-74451-3 References • M. Kilp, U. Knauer, A.V. 
Mikhalev (2000) Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, ISBN 3-11-015248-7.
Schubert calculus In mathematics, Schubert calculus is a branch of algebraic geometry introduced in the nineteenth century by Hermann Schubert, in order to solve various counting problems of projective geometry (part of enumerative geometry). It was a precursor of several more modern theories, for example characteristic classes, and in particular its algorithmic aspects are still of current interest. The phrase "Schubert calculus" is sometimes used to mean the enumerative geometry of linear subspaces, roughly equivalent to describing the cohomology ring of Grassmannians, and sometimes used to mean the more general enumerative geometry of nonlinear varieties. Even more generally, "Schubert calculus" is often understood to encompass the study of analogous questions in generalized cohomology theories. The objects introduced by Schubert are the Schubert cells, which are locally closed sets in a Grassmannian defined by conditions of incidence of a linear subspace in projective space with a given flag. For details see Schubert variety. The intersection theory of these cells, which can be seen as the product structure in the cohomology ring of the Grassmannian of associated cohomology classes, in principle allows the prediction of the cases where intersections of cells results in a finite set of points, which are potentially concrete answers to enumerative questions. A supporting theoretical result is that the Schubert cells (or rather, their classes) span the whole cohomology ring. In detailed calculations the combinatorial aspects enter as soon as the cells have to be indexed. Lifted from the Grassmannian, which is a homogeneous space, to the general linear group that acts on it, similar questions are involved in the Bruhat decomposition and classification of parabolic subgroups (by block matrix). Putting Schubert's system on a rigorous footing is Hilbert's fifteenth problem. 
Construction Schubert calculus can be constructed using the Chow ring of the Grassmannian where the generating cycles are represented by geometrically meaningful data.[1] Denote $G(k,V)$ as the Grassmannian of $k$-planes in a fixed $n$-dimensional vector space $V$, and $A^{*}(G(k,V))$ its Chow ring; note that sometimes the Grassmannian is denoted as $G(k,n)$ if the vector space isn't explicitly given. Associated to an arbitrary complete flag ${\mathcal {V}}$ $0\subset V_{1}\subset \cdots \subset V_{n-1}\subset V_{n}=V$ and a decreasing $k$-tuple of integers $\mathbf {a} =(a_{1},\ldots ,a_{k})$ where $n-k\geq a_{1}\geq a_{2}\geq \cdots \geq a_{k}\geq 0$ there are Schubert cycles (which are called Schubert cells when considering cellular homology instead of the Chow ring) $\Sigma _{\mathbf {a} }({\mathcal {V}})\subset G(k,V)$ defined as $\Sigma _{\mathbf {a} }({\mathcal {V}})=\{\Lambda \in G(k,V):\dim(V_{n-k+i-a_{i}}\cap \Lambda )\geq i{\text{ for all }}i\geq 1\}$ Since the class $[\Sigma _{\mathbf {a} }({\mathcal {V}})]\in A^{*}(G(k,V))$ does not depend on the complete flag, the class can be written as $\sigma _{\mathbf {a} }:=[\Sigma _{\mathbf {a} }]\in A^{*}(G(k,V))$ which are called Schubert classes. It can be shown that these classes generate the Chow ring, and the associated intersection theory is called Schubert calculus. Note that given a sequence $\mathbf {a} =(a_{1},\ldots ,a_{j},0,\ldots ,0)$ the Schubert class $\sigma _{(a_{1},\ldots ,a_{j},0,\ldots ,0)}$ is typically denoted as just $\sigma _{(a_{1},\ldots ,a_{j})}$. Also, the Schubert classes given by a single integer, $\sigma _{a_{1}}$, are called special classes. Using the Giambelli formula below, all of the Schubert classes can be generated from these special classes. Explanation In order to explain the definition, consider a generic $k$-plane $\Lambda \subset V$: it will have only a zero intersection with $V_{j}$ for $j\leq n-k$, whereas $\dim(V_{j}\cap \Lambda )=i$ for $j=n-k+i\geq n-k$. 
For example, in $G(4,9)$, a $4$-plane $\Lambda $ is the solution space of a system of five independent homogeneous linear equations. These equations will generically remain independent when restricted to a subspace $V_{j}$ with $j=\dim V_{j}\leq 5=9-4$, in which case the solution space (the intersection of $V_{j}$ with $\Lambda $) will consist only of the zero vector. However, once $\dim(V_{j})+\dim(\Lambda )>n=9$, then $V_{j}$ and $\Lambda $ will necessarily have nonzero intersection. For example, the expected dimension of intersection of $V_{6}$ and $\Lambda $ is $1$, the intersection of $V_{7}$ and $\Lambda $ has expected dimension $2$, and so on. The definition of a Schubert cycle states that the first value of $j$ with $\dim(V_{j}\cap \Lambda )\geq i$ is generically smaller than the expected value $n-k+i$ by the parameter $a_{i}$. The $k$-planes $\Lambda \subset V$ given by these constraints then define special subvarieties of $G(k,n)$.[1] Inclusion There is a partial ordering on all $k$-tuples where $\mathbf {a} \geq \mathbf {b} $ if $a_{i}\geq b_{i}$ for every $i$. This gives the inclusion of Schubert cycles $\Sigma _{\mathbf {a} }\subset \Sigma _{\mathbf {b} }\iff \mathbf {a} \geq \mathbf {b} $ showing that an increase of the indices corresponds to an even greater specialization of subvarieties. Codimension formula A Schubert cycle $\Sigma _{\mathbf {a} }$ has codimension $\sum a_{i}$ which is stable under inclusions of Grassmannians. That is, the inclusion $i:G(k,n)\hookrightarrow G(k+1,n+1)$ given by adding the extra basis element $e_{n+1}$ to each $k$-plane, giving a $(k+1)$-plane, has the property $i^{*}(\sigma _{\mathbf {a} })=\sigma _{\mathbf {a} }$ Also, the inclusion $j:G(k,n)\hookrightarrow G(k,n+1)$ given by inclusion of the $k$-plane has the same pullback property. Intersection product The intersection product was first established using the Pieri and Giambelli formulas. 
Pieri formula In the special case $\mathbf {b} =(b,0,\ldots ,0)$, there is an explicit formula of the product of $\sigma _{b}$ with an arbitrary Schubert class $\sigma _{a_{1},\ldots ,a_{k}}$ given by $\sigma _{b}\cdot \sigma _{a_{1},\ldots ,a_{k}}=\sum _{\begin{matrix}|c|=|a|+b\\a_{i}\leq c_{i}\leq a_{i-1}\end{matrix}}\sigma _{\mathbf {c} }$ Note $|\mathbf {a} |=a_{1}+\cdots +a_{k}$. This formula is called the Pieri formula and can be used to determine the intersection product of any two Schubert classes when combined with the Giambelli formula. For example $\sigma _{1}\cdot \sigma _{4,2,1}=\sigma _{5,2,1}+\sigma _{4,3,1}+\sigma _{4,2,2}+\sigma _{4,2,1,1}$ and $\sigma _{2}\cdot \sigma _{4,3}=\sigma _{4,3,2}+\sigma _{4,4,1}+\sigma _{5,3,1}+\sigma _{5,4}+\sigma _{6,3}$ Giambelli formula Schubert classes with tuples of length two or more can be described as a determinantal equation using the classes of only one tuple. The Giambelli formula reads as the equation $\sigma _{(a_{1},\ldots ,a_{k})}={\begin{vmatrix}\sigma _{a_{1}}&\sigma _{a_{1}+1}&\sigma _{a_{1}+2}&\cdots &\sigma _{a_{1}+k-1}\\\sigma _{a_{2}-1}&\sigma _{a_{2}}&\sigma _{a_{2}+1}&\cdots &\sigma _{a_{2}+k-2}\\\sigma _{a_{3}-2}&\sigma _{a_{3}-1}&\sigma _{a_{3}}&\cdots &\sigma _{a_{3}+k-3}\\\vdots &\vdots &\vdots &\ddots &\vdots \\\sigma _{a_{k}-k+1}&\sigma _{a_{k}-k+2}&\sigma _{a_{k}-k+3}&\cdots &\sigma _{a_{k}}\end{vmatrix}}$ given by the determinant of a $k\times k$ matrix. For example, $\sigma _{2,2}={\begin{vmatrix}\sigma _{2}&\sigma _{3}\\\sigma _{1}&\sigma _{2}\end{vmatrix}}=\sigma _{2}^{2}-\sigma _{1}\cdot \sigma _{3}$ and $\sigma _{2,1,1}={\begin{vmatrix}\sigma _{2}&\sigma _{3}&\sigma _{4}\\\sigma _{0}&\sigma _{1}&\sigma _{2}\\0&\sigma _{0}&\sigma _{1}\end{vmatrix}}$ Relation with Chern classes There is an easy description of the cohomology ring, or the Chow ring, of the Grassmannian using the Chern classes of two natural vector bundles over the Grassmannian $G(k,n)$. 
There is a sequence of vector bundles $0\to T\to {\underline {V}}\to Q\to 0$ where ${\underline {V}}$ is the trivial vector bundle of rank $n$, the fiber of the "tautological bundle" $T$ over $\Lambda \in G(k,n)$ is the subspace $\Lambda \subset V$, and $Q$ is the quotient vector bundle (which exists since the rank is constant on each of the fibers). The Chern classes of these two associated bundles are $c_{i}(T)=(-1)^{i}\sigma _{(1,\ldots ,1)}$ where $(1,\ldots ,1)$ is an $i$-tuple and $c_{i}(Q)=\sigma _{i}$ The tautological sequence then gives the presentation of the Chow ring as $A^{*}(G(k,n))={\frac {\mathbb {Z} [c_{1}(T),\ldots ,c_{k}(T),c_{1}(Q),\ldots ,c_{n-k}(Q)]}{(c(T)c(Q)-1)}}$ G(2,4) One of the classical examples analyzed is the Grassmannian $G(2,4)$ since it parameterizes lines in $\mathbb {P} ^{3}$. Schubert calculus can be used to find the number of lines on a cubic surface. Chow ring The Chow ring has the presentation $A^{*}(G(2,4))={\frac {\mathbb {Z} [\sigma _{1},\sigma _{1,1},\sigma _{2}]}{((1-\sigma _{1}+\sigma _{1,1})(1+\sigma _{1}+\sigma _{2})-1)}}$ and as a graded Abelian group it is given by ${\begin{aligned}A^{0}(G(2,4))&=\mathbb {Z} \cdot 1\\A^{2}(G(2,4))&=\mathbb {Z} \cdot \sigma _{1}\\A^{4}(G(2,4))&=\mathbb {Z} \cdot \sigma _{2}\oplus \mathbb {Z} \cdot \sigma _{1,1}\\A^{6}(G(2,4))&=\mathbb {Z} \cdot \sigma _{2,1}\\A^{8}(G(2,4))&=\mathbb {Z} \cdot \sigma _{2,2}\\\end{aligned}}$[2] Lines on a cubic surface This Chow ring can be used to compute the number of lines on a cubic surface.[1] Recall a line in $\mathbb {P} ^{3}$ gives a dimension two subspace of $\mathbb {A} ^{4}$, hence $\mathbb {G} (1,3)\cong G(2,4)$. Also, the equation of a line can be given as a section of $\Gamma (\mathbb {G} (1,3),T^{*})$. Since a cubic surface $X$ is given as a generic homogeneous cubic polynomial, this is given as a generic section $s\in \Gamma (\mathbb {G} (1,3),{\text{Sym}}^{3}(T^{*}))$. 
Then, a line $L\subset \mathbb {P} ^{3}$ is a subvariety of $X$ if and only if the section vanishes on $[L]\in \mathbb {G} (1,3)$. Therefore, the Euler class of ${\text{Sym}}^{3}(T^{*})$ can be integrated over $\mathbb {G} (1,3)$ to get the number of points where the generic section vanishes on $\mathbb {G} (1,3)$. In order to get the Euler class, the total Chern class of $T^{*}$ must be computed, which is given as $c(T^{*})=1+\sigma _{1}+\sigma _{1,1}$ Then, the splitting formula reads as the formal equation ${\begin{aligned}c(T^{*})&=(1+\alpha )(1+\beta )\\&=1+\alpha +\beta +\alpha \cdot \beta \end{aligned}}$ where $c({\mathcal {L}})=1+\alpha $ and $c({\mathcal {M}})=1+\beta $ for formal line bundles ${\mathcal {L}},{\mathcal {M}}$. The splitting equation gives the relations $\sigma _{1}=\alpha +\beta $ and $\sigma _{1,1}=\alpha \cdot \beta $. Since ${\text{Sym}}^{3}(T^{*})$ can be read as the direct sum of formal vector bundles ${\text{Sym}}^{3}(T^{*})={\mathcal {L}}^{\otimes 3}\oplus ({\mathcal {L}}^{\otimes 2}\otimes {\mathcal {M}})\oplus ({\mathcal {L}}\otimes {\mathcal {M}}^{\otimes 2})\oplus {\mathcal {M}}^{\otimes 3}$ whose total Chern class is $c({\text{Sym}}^{3}(T^{*}))=(1+3\alpha )(1+2\alpha +\beta )(1+\alpha +2\beta )(1+3\beta )$ hence ${\begin{aligned}c_{4}({\text{Sym}}^{3}(T^{*}))&=3\alpha (2\alpha +\beta )(\alpha +2\beta )3\beta \\&=9\alpha \beta (2(\alpha +\beta )^{2}+\alpha \beta )\\&=9\sigma _{1,1}(2\sigma _{1}^{2}+\sigma _{1,1})\\&=27\sigma _{2,2}\end{aligned}}$ using the fact $\sigma _{1,1}\cdot \sigma _{1}^{2}=\sigma _{2,1}\sigma _{1}=\sigma _{2,2}$ and $\sigma _{1,1}\cdot \sigma _{1,1}=\sigma _{2,2}$ Then, the integral is $\int _{\mathbb {G} (1,3)}27\sigma _{2,2}=27$ since $\sigma _{2,2}$ is the top class. Therefore there are $27$ lines on a cubic surface. 
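The Pieri formula admits a direct implementation. The sketch below (pure Python; the function name is ours) works formally, without the bound $a_1 \le n-k$, and enumerates the tuples $c$ with $|c|=|a|+b$ and $a_{i}\leq c_{i}\leq a_{i-1}$, reproducing the sample products given earlier:

```python
def pieri(b, a):
    """Expand sigma_b * sigma_a: all c with |c| = |a| + b and
    a_i <= c_i <= a_{i-1} (with no upper bound on c_1)."""
    a = tuple(a) + (0,)               # the partition may grow by one row
    out = []
    def build(i, rem, prefix):
        if i == len(a):
            if rem == 0:
                out.append(tuple(p for p in prefix if p > 0))
            return
        hi = a[i - 1] if i > 0 else a[0] + rem
        for ci in range(a[i], min(hi, a[i] + rem) + 1):
            build(i + 1, rem - (ci - a[i]), prefix + [ci])
    build(0, b, [])
    return sorted(out, reverse=True)

# sigma_1 . sigma_{4,2,1}: the four ways of adding one box
assert pieri(1, (4, 2, 1)) == [(5, 2, 1), (4, 3, 1), (4, 2, 2), (4, 2, 1, 1)]
# sigma_2 . sigma_{4,3}: the five horizontal strips of size two
assert pieri(2, (4, 3)) == [(6, 3), (5, 4), (5, 3, 1), (4, 4, 1), (4, 3, 2)]
```

Inside a fixed Grassmannian $G(k,n)$, terms with more than $k$ rows or a first part exceeding $n-k$ would simply be discarded.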
See also • Enumerative geometry • Chow ring • Intersection theory • Grassmannian • Giambelli's formula • Pieri's formula • Chern class • Quintic threefold • Mirror symmetry conjecture References 1. 3264 and All That (PDF). pp. 132, section 4.1, 200, section 6.2.1. 2. Katz, Sheldon. Enumerative Geometry and String Theory. p. 96. • Summer school notes http://homepages.math.uic.edu/~coskun/poland.html • Phillip Griffiths and Joseph Harris (1978), Principles of Algebraic Geometry, Chapter 1.5 • Kleiman, Steven (1976). "Rigorous foundations of Schubert's enumerative calculus". In Felix E. Browder (ed.). Mathematical Developments Arising from Hilbert Problems. Proceedings of Symposia in Pure Mathematics. Vol. XXVIII.2. American Mathematical Society. pp. 445–482. ISBN 0-8218-1428-1. • Steven Kleiman and Dan Laksov (1972). "Schubert calculus" (PDF). American Mathematical Monthly. 79: 1061–1082. doi:10.2307/2317421. • Sottile, Frank (2001) [1994], "Schubert calculus", Encyclopedia of Mathematics, EMS Press • David Eisenbud and Joseph Harris (2016), "3264 and All That: A Second Course in Algebraic Geometry".
Wikipedia
Schubert polynomial In mathematics, Schubert polynomials are generalizations of Schur polynomials that represent cohomology classes of Schubert cycles in flag varieties. They were introduced by Lascoux & Schützenberger (1982) and are named after Hermann Schubert. Background Lascoux (1995) described the history of Schubert polynomials. The Schubert polynomials ${\mathfrak {S}}_{w}$ are polynomials in the variables $x_{1},x_{2},\ldots $ depending on an element $w$ of the infinite symmetric group $S_{\infty }$ of all permutations of $\mathbb {N} $ fixing all but a finite number of elements. They form a basis for the polynomial ring $\mathbb {Z} [x_{1},x_{2},\ldots ]$ in infinitely many variables. The cohomology of the flag manifold ${\text{Fl}}(m)$ is $\mathbb {Z} [x_{1},x_{2},\ldots ,x_{m}]/I,$ where $I$ is the ideal generated by homogeneous symmetric functions of positive degree. The Schubert polynomial ${\mathfrak {S}}_{w}$ is the unique homogeneous polynomial of degree $\ell (w)$ representing the Schubert cycle of $w$ in the cohomology of the flag manifold ${\text{Fl}}(m)$ for all sufficiently large $m.$ Properties • If $w_{0}$ is the permutation of longest length in $S_{n}$ then ${\mathfrak {S}}_{w_{0}}=x_{1}^{n-1}x_{2}^{n-2}\cdots x_{n-1}^{1}$ • $\partial _{i}{\mathfrak {S}}_{w}={\mathfrak {S}}_{ws_{i}}$ if $w(i)>w(i+1)$, where $s_{i}$ is the transposition $(i,i+1)$ and where $\partial _{i}$ is the divided difference operator taking $P$ to $(P-s_{i}P)/(x_{i}-x_{i+1})$. Schubert polynomials can be calculated recursively from these two properties. In particular, this implies that ${\mathfrak {S}}_{w}=\partial _{w^{-1}w_{0}}x_{1}^{n-1}x_{2}^{n-2}\cdots x_{n-1}^{1}$. Other properties are • ${\mathfrak {S}}_{id}=1$ • If $s_{i}$ is the transposition $(i,i+1)$, then ${\mathfrak {S}}_{s_{i}}=x_{1}+\cdots +x_{i}$. 
• If $w(i)<w(i+1)$ for all $i\neq r$, then ${\mathfrak {S}}_{w}$ is the Schur polynomial $s_{\lambda }(x_{1},\ldots ,x_{r})$ where $\lambda $ is the partition $(w(r)-r,\ldots ,w(2)-2,w(1)-1)$. In particular all Schur polynomials (of a finite number of variables) are Schubert polynomials. • Schubert polynomials have positive coefficients. A conjectural rule for their coefficients was put forth by Richard P. Stanley, and proven in two papers, one by Sergey Fomin and Stanley and one by Sara Billey, William Jockusch, and Stanley. • The Schubert polynomials can be seen as a generating function over certain combinatorial objects called pipe dreams or rc-graphs. These are in bijection with reduced Kogan faces, (introduced in the PhD thesis of Mikhail Kogan) which are special faces of the Gelfand-Tsetlin polytope. • Schubert polynomials also can be written as a weighted sum of objects called bumpless pipe dreams. As an example ${\mathfrak {S}}_{24531}(x)=x_{1}x_{3}^{2}x_{4}x_{2}^{2}+x_{1}^{2}x_{3}x_{4}x_{2}^{2}+x_{1}^{2}x_{3}^{2}x_{4}x_{2}.$ Multiplicative structure constants Since the Schubert polynomials form a $\mathbb {Z} $-basis, there are unique coefficients $c_{\beta \gamma }^{\alpha }$ such that ${\mathfrak {S}}_{\beta }{\mathfrak {S}}_{\gamma }=\sum _{\alpha }c_{\beta \gamma }^{\alpha }{\mathfrak {S}}_{\alpha }.$ These can be seen as a generalization of the Littlewood−Richardson coefficients described by the Littlewood–Richardson rule. For algebro-geometric reasons (Kleiman's transversality theorem of 1974), these coefficients are non-negative integers and it is an outstanding problem in representation theory and combinatorics to give a combinatorial rule for these numbers. 
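The recursion from the two defining properties above can be carried out numerically: the divided difference of a polynomial function is again a polynomial function, so it can be evaluated pointwise. A small sketch for $S_{3}$ (pure Python; the divisions are exact for small integer inputs, and the expected values $x_{1}x_{2}$, $x_{1}^{2}$, $x_{1}$, $x_{1}+x_{2}$, $1$ are the known Schubert polynomials):

```python
def ddiff(P, i):
    """Divided difference operator: (P - s_i P)/(x_i - x_{i+1}),
    acting on a function P(x1, ..., xn); i is 1-based."""
    def Q(*x):
        y = list(x)
        y[i - 1], y[i] = y[i], y[i - 1]            # apply the transposition s_i
        return (P(*x) - P(*y)) / (x[i - 1] - x[i])
    return Q

# S_{321} = x1^2 x2 for the longest element w0 = 321 of S_3
S321 = lambda x1, x2, x3: x1 ** 2 * x2

S231 = ddiff(S321, 1)    # 321*s1 = 231, expected x1*x2
S312 = ddiff(S321, 2)    # 321*s2 = 312, expected x1^2
S213 = ddiff(S231, 2)    # 231*s2 = 213, expected x1
S132 = ddiff(S312, 1)    # 312*s1 = 132, expected x1 + x2
Sid  = ddiff(S213, 1)    # 213*s1 = 123, expected 1

pt = (2, 3, 5)           # any point with distinct coordinates works
assert S231(*pt) == 2 * 3
assert S312(*pt) == 2 ** 2
assert S213(*pt) == 2
assert S132(*pt) == 2 + 3
assert Sid(*pt) == 1
```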
Double Schubert polynomials Double Schubert polynomials ${\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )$ are polynomials in two infinite sets of variables, parameterized by an element w of the infinite symmetric group, which become the usual Schubert polynomials when all the variables $y_{i}$ are $0$. The double Schubert polynomials ${\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )$ are characterized by the properties • ${\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )=\prod \limits _{i+j\leq n}(x_{i}-y_{j})$ when $w$ is the permutation on $1,\ldots ,n$ of longest length. • $\partial _{i}{\mathfrak {S}}_{w}={\mathfrak {S}}_{ws_{i}}$ if $w(i)>w(i+1)$. The double Schubert polynomials can also be defined as ${\mathfrak {S}}_{w}(x,y)=\sum _{w=v^{-1}u{\text{ and }}\ell (w)=\ell (u)+\ell (v)}{\mathfrak {S}}_{u}(x){\mathfrak {S}}_{v}(-y)$. Quantum Schubert polynomials Fomin, Gelfand & Postnikov (1997) introduced quantum Schubert polynomials, which have the same relation to the (small) quantum cohomology of flag manifolds that ordinary Schubert polynomials have to the ordinary cohomology. Universal Schubert polynomials Fulton (1999) introduced universal Schubert polynomials, which generalize classical and quantum Schubert polynomials. He also described universal double Schubert polynomials generalizing double Schubert polynomials. See also • Stanley symmetric function • Kostant polynomial • Monk's formula gives the product of a linear Schubert polynomial and a Schubert polynomial. • nil-Coxeter algebra References • Bernstein, I. N.; Gelfand, I. M.; Gelfand, S. I. (1973), "Schubert cells, and the cohomology of the spaces G/P", Russian Math.
Surveys, 28 (3): 1–26, Bibcode:1973RuMaS..28....1B, doi:10.1070/RM1973v028n03ABEH001557 • Fomin, Sergey; Gelfand, Sergei; Postnikov, Alexander (1997), "Quantum Schubert polynomials", Journal of the American Mathematical Society, 10 (3): 565–596, doi:10.1090/S0894-0347-97-00237-3, ISSN 0894-0347, MR 1431829 • Fulton, William (1992), "Flags, Schubert polynomials, degeneracy loci, and determinantal formulas", Duke Mathematical Journal, 65 (3): 381–420, doi:10.1215/S0012-7094-92-06516-1, ISSN 0012-7094, MR 1154177 • Fulton, William (1997), Young tableaux, London Mathematical Society Student Texts, vol. 35, Cambridge University Press, ISBN 978-0-521-56144-0, MR 1464693 • Fulton, William (1999), "Universal Schubert polynomials", Duke Mathematical Journal, 96 (3): 575–594, arXiv:alg-geom/9702012, doi:10.1215/S0012-7094-99-09618-7, ISSN 0012-7094, MR 1671215, S2CID 10546579 • Lascoux, Alain (1995), "Polynômes de Schubert: une approche historique", Discrete Mathematics, 139 (1): 303–317, doi:10.1016/0012-365X(95)93984-D, ISSN 0012-365X, MR 1336845 • Lascoux, Alain; Schützenberger, Marcel-Paul (1982), "Polynômes de Schubert", Comptes Rendus de l'Académie des Sciences, Série I, 294 (13): 447–450, ISSN 0249-6291, MR 0660739 • Lascoux, Alain; Schützenberger, Marcel-Paul (1985), "Schubert polynomials and the Littlewood-Richardson rule", Letters in Mathematical Physics. A Journal for the Rapid Dissemination of Short Contributions in the Field of Mathematical Physics, 10 (2): 111–124, Bibcode:1985LMaPh..10..111L, doi:10.1007/BF00398147, ISSN 0377-9017, MR 0815233, S2CID 119654656 • Macdonald, I. G. (1991), "Schubert polynomials", in Keedwell, A. D. (ed.), Surveys in combinatorics, 1991 (Guildford, 1991), London Math. Soc. Lecture Note Ser., vol. 166, Cambridge University Press, pp. 73–99, ISBN 978-0-521-40766-3, MR 1161461 • Macdonald, I.G. (1991b), Notes on Schubert polynomials, Publications du Laboratoire de combinatoire et d'informatique mathématique, vol. 
6, Laboratoire de combinatoire et d'informatique mathématique (LACIM), Université du Québec à Montréal, ISBN 978-2-89276-086-6 • Manivel, Laurent (2001) [1998], Symmetric functions, Schubert polynomials and degeneracy loci, SMF/AMS Texts and Monographs, vol. 6, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2154-1, MR 1852463 • Sottile, Frank (2001) [1994], "Schubert polynomials", Encyclopedia of Mathematics, EMS Press
Schubert variety In algebraic geometry, a Schubert variety is a certain subvariety of a Grassmannian, usually with singular points. Like a Grassmannian, it is a kind of moduli space, whose points correspond to certain kinds of subspaces V, specified using linear algebra, inside a fixed vector space W. Here W may be a vector space over an arbitrary field, though most commonly over the complex numbers. A typical example is the set X whose points correspond to those 2-dimensional subspaces V of a 4-dimensional vector space W, such that V non-trivially intersects a fixed (reference) 2-dimensional subspace W2: $X\ =\ \{V\subset W\mid \dim(V)=2,\,\dim(V\cap W_{2})\geq 1\}.$ Over the real number field, this can be pictured in usual xyz-space as follows. Replacing subspaces with their corresponding projective spaces, and intersecting with an affine coordinate patch of $\mathbb {P} (W)$, we obtain an open subset X° ⊂ X. This is isomorphic to the set of all lines L (not necessarily through the origin) which meet the x-axis. Each such line L corresponds to a point of X°, and continuously moving L in space (while keeping contact with the x-axis) corresponds to a curve in X°. Since there are three degrees of freedom in moving L (moving the point on the x-axis, rotating, and tilting), X is a three-dimensional real algebraic variety. However, when L is equal to the x-axis, it can be rotated or tilted around any point on the axis, and this excess of possible motions makes L a singular point of X. More generally, a Schubert variety is defined by specifying the minimal dimension of intersection of a k-dimensional V with each of the spaces in a fixed reference flag $W_{1}\subset W_{2}\subset \cdots \subset W_{n}=W$, where $\dim W_{j}=j$. (In the example above, this would mean requiring certain intersections of the line L with the x-axis and the xy-plane.)
In even greater generality, given a semisimple algebraic group G with a Borel subgroup B and a standard parabolic subgroup P, it is known that the homogeneous space X = G/P, which is an example of a flag variety, consists of finitely many B-orbits that may be parametrized by certain elements of the Weyl group W. The closure of the B-orbit associated to an element w of the Weyl group is denoted by Xw and is called a Schubert variety in G/P. The classical case corresponds to G = SLn and P being the kth maximal parabolic subgroup of G. Significance Schubert varieties form one of the most important and best studied classes of singular algebraic varieties. A certain measure of singularity of Schubert varieties is provided by Kazhdan–Lusztig polynomials, which encode their local Goresky–MacPherson intersection cohomology. The algebras of regular functions on Schubert varieties have deep significance in algebraic combinatorics and are examples of algebras with a straightening law. (Co)homology of the Grassmannian, and more generally, of more general flag varieties, has a basis consisting of the (co)homology classes of Schubert varieties, the Schubert cycles. The study of the intersection theory on the Grassmannian was initiated by Hermann Schubert and continued by Zeuthen in the 19th century under the heading of enumerative geometry. This area was deemed by David Hilbert important enough to be included as the fifteenth of his celebrated 23 problems. The study continued in the 20th century as part of the general development of algebraic topology and representation theory, but accelerated in the 1990s beginning with the work of William Fulton on the degeneracy loci and Schubert polynomials, following up on earlier investigations of Bernstein–Gelfand–Gelfand and Demazure in representation theory in the 1970s, Lascoux and Schützenberger in combinatorics in the 1980s, and of Fulton and MacPherson in intersection theory of singular algebraic varieties, also in the 1980s. 
See also • Schubert calculus • Bruhat decomposition • Bott–Samelson resolution References • P.A. Griffiths, J.E. Harris, Principles of algebraic geometry, Wiley (Interscience) (1978) • A.L. Onishchik (2001) [1994], "Schubert variety", Encyclopedia of Mathematics, EMS Press • H. Schubert, Lösung des Charakteristiken-Problems für lineare Räume beliebiger Dimension Mitt. Math. Gesellschaft Hamburg, 1 (1889) pp. 134–155
Schuette–Nesbitt formula In mathematics, the Schuette–Nesbitt formula is a generalization of the inclusion–exclusion principle. It is named after Donald R. Schuette and Cecil J. Nesbitt. The probabilistic version of the Schuette–Nesbitt formula has practical applications in actuarial science, where it is used to calculate the net single premium for life annuities and life insurances based on the general symmetric status. Combinatorial versions Consider a set Ω and subsets A1, ..., Am. Let $N(\omega )=\sum _{n=1}^{m}1_{A_{n}}(\omega ),\qquad \omega \in \Omega ,$ (1) denote the number of subsets to which ω ∈ Ω belongs, where we use the indicator functions of the sets A1, ..., Am. Furthermore, for each k ∈ {0, 1, ..., m}, let $N_{k}(\omega )=\sum _{\scriptstyle J\subset \{1,\ldots ,m\} \atop \scriptstyle |J|=k}1_{\cap _{j\in J}A_{j}}(\omega ),\qquad \omega \in \Omega ,$ (2) denote the number of intersections of exactly k sets out of A1, ..., Am, to which ω belongs, where the intersection over the empty index set is defined as Ω, hence N0 = 1Ω. Let V denote a vector space over a field R such as the real or complex numbers (or more generally a module over a ring R with multiplicative identity). Then, for every choice of c0, ..., cm ∈ V, $\sum _{n=0}^{m}1_{\{N=n\}}c_{n}=\sum _{k=0}^{m}N_{k}\sum _{l=0}^{k}(-1)^{k-l}{\binom {k}{l}}c_{l},$ (3) where 1{N=n} denotes the indicator function of the set of all ω ∈ Ω with N(ω) = n, and $\textstyle {\binom {k}{l}}$ is a binomial coefficient. Equality (3) says that the two V-valued functions defined on Ω are the same. Proof of (3) We prove that (3) holds pointwise. Take ω ∈ Ω and define n = N(ω). Then the left-hand side of (3) equals cn. Let I denote the set of all those indices i ∈ {1, ..., m} such that ω ∈ Ai, hence I contains exactly n indices. Given J ⊂ {1, ..., m} with k elements, then ω belongs to the intersection ∩j∈JAj if and only if J is a subset of I. 
By the combinatorial interpretation of the binomial coefficient, there are Nk = $\textstyle {\binom {n}{k}}$ such subsets (the binomial coefficient is zero for k > n). Therefore the right-hand side of (3) evaluated at ω equals $\sum _{k=0}^{m}{\binom {n}{k}}\sum _{l=0}^{k}(-1)^{k-l}{\binom {k}{l}}c_{l}=\sum _{l=0}^{m}\underbrace {\sum _{k=l}^{n}(-1)^{k-l}{\binom {n}{k}}{\binom {k}{l}}} _{=:\,(*)}c_{l},$ where we used that the first binomial coefficient is zero for k > n. Note that the sum (*) is empty and therefore defined as zero for n < l. Using the factorial formula for the binomial coefficients, it follows that ${\begin{aligned}(*)&=\sum _{k=l}^{n}(-1)^{k-l}{\frac {n!}{k!\,(n-k)!}}\,{\frac {k!}{l!\,(k-l)!}}\\&=\underbrace {\frac {n!}{l!\,(n-l)!}} _{={\binom {n}{l}}}\underbrace {\sum _{k=l}^{n}(-1)^{k-l}{\frac {(n-l)!}{(n-k)!\,(k-l)!}}} _{=:\,(**)}\\\end{aligned}}$ Rewriting (**) with the summation index j = k − l and using the binomial formula for the third equality shows that ${\begin{aligned}(**)&=\sum _{j=0}^{n-l}(-1)^{j}{\frac {(n-l)!}{(n-l-j)!\,j!}}\\&=\sum _{j=0}^{n-l}(-1)^{j}{\binom {n-l}{j}}=(1-1)^{n-l}=\delta _{ln},\end{aligned}}$ which is the Kronecker delta. Substituting this result into the above formula and noting that n choose l equals 1 for l = n, it follows that the right-hand side of (3) evaluated at ω also reduces to cn. Representation in the polynomial ring As a special case, take for V the polynomial ring R[x] with the indeterminate x. Then (3) can be rewritten in a more compact way as $\sum _{n=0}^{m}1_{\{N=n\}}x^{n}=\sum _{k=0}^{m}N_{k}(x-1)^{k}.$ (4) This is an identity for two polynomials whose coefficients depend on ω, which is implicit in the notation. Proof of (4) using (3): Substituting cn = xn for n ∈ {0, ..., m} into (3) and using the binomial formula shows that $\sum _{n=0}^{m}1_{\{N=n\}}x^{n}=\sum _{k=0}^{m}N_{k}\underbrace {\sum _{l=0}^{k}{\binom {k}{l}}(-1)^{k-l}x^{l}} _{=\,(x-1)^{k}},$ which proves (4).
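Identity (3) is a finite pointwise statement, so it can be brute-force checked for small m with arbitrary sets and coefficients. A sketch in Python (the choice of Ω, the sets, and the coefficients is arbitrary):

```python
import random
from math import comb
from itertools import combinations

random.seed(1)
m = 4
Omega = range(8)
# arbitrary subsets A_1, ..., A_m of Omega and arbitrary c_0, ..., c_m
A = [set(random.sample(range(8), random.randint(0, 8))) for _ in range(m)]
c = [random.randint(-5, 5) for _ in range(m + 1)]

def N(w):                      # number of the sets A_1, ..., A_m containing w
    return sum(w in Aj for Aj in A)

def N_k(w, k):                 # number of k-fold intersections containing w
    if k == 0:
        return 1               # the empty intersection is Omega
    return sum(all(w in A[j] for j in J) for J in combinations(range(m), k))

for w in Omega:                # identity (3), checked pointwise
    lhs = c[N(w)]
    rhs = sum(N_k(w, k) * sum((-1) ** (k - l) * comb(k, l) * c[l]
                              for l in range(k + 1))
              for k in range(m + 1))
    assert lhs == rhs
```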
Representation with shift and difference operators Consider the linear shift operator E and the linear difference operator Δ, which we define here on the sequence space of V by ${\begin{aligned}E:V^{\mathbb {N} _{0}}&\to V^{\mathbb {N} _{0}},\\E(c_{0},c_{1},c_{2},c_{3},\ldots )&\mapsto (c_{1},c_{2},c_{3},\ldots ),\\\end{aligned}}$ and ${\begin{aligned}\Delta :V^{\mathbb {N} _{0}}&\to V^{\mathbb {N} _{0}},\\\Delta (c_{0},c_{1},c_{2},c_{3}\ldots )&\mapsto (c_{1}-c_{0},c_{2}-c_{1},c_{3}-c_{2},\ldots ).\\\end{aligned}}$ Substituting x = E in (4) shows that $\sum _{n=0}^{m}1_{\{N=n\}}E^{n}=\sum _{k=0}^{m}N_{k}\Delta ^{k},$ (5) where we used that Δ = E – I with I denoting the identity operator. Note that E0 and Δ0 equal the identity operator I on the sequence space, while Ek and Δk denote the k-fold composition. Direct proof of (5) by the operator method To prove (5), we first want to verify the equation $\sum _{n=0}^{m}1_{\{N=n\}}E^{n}=\prod _{j=1}^{m}(1_{A_{j}^{\mathrm {c} }}I+1_{A_{j}}E)$ (✳) involving indicator functions of the sets A1, ..., Am and their complements with respect to Ω. Suppose an ω from Ω belongs to exactly k sets out of A1, ..., Am, where k ∈ {0, ..., m}; for simplicity of notation say that ω only belongs to A1, ..., Ak. Then the left-hand side of (✳) is Ek. On the right-hand side of (✳), the first k factors equal E, the remaining ones equal I, and their product is also Ek, hence the formula (✳) is true. Note that ${\begin{aligned}1_{A_{j}^{\mathrm {c} }}I+1_{A_{j}}E&=I-1_{A_{j}}I+1_{A_{j}}E\\&=I+1_{A_{j}}(E-I)=I+1_{A_{j}}\Delta ,\qquad j\in \{1,\ldots ,m\}.\end{aligned}}$ Inserting this result into equation (✳) and expanding the product gives $\sum _{n=0}^{m}1_{\{N=n\}}E^{n}=\sum _{k=0}^{m}\sum _{\scriptstyle J\subset \{1,\ldots ,m\} \atop \scriptstyle |J|=k}1_{\cap _{j\in J}A_{j}}\Delta ^{k},$ because the product of indicator functions is the indicator function of the intersection. Using the definition (2), the result (5) follows.
Let (Δkc)0 denote the 0th component of the k-fold composition Δk applied to c = (c0, c1, ..., cm, ...), where Δ0 denotes the identity. Then (3) can be rewritten in a more compact way as $\sum _{n=0}^{m}1_{\{N=n\}}c_{n}=\sum _{k=0}^{m}N_{k}(\Delta ^{k}c)_{0}.$ (6) Probabilistic versions Consider arbitrary events A1, ..., Am in a probability space (Ω, F, $\mathbb {P} $) and let E denote the expectation operator. Then N from (1) is the random number of these events which occur simultaneously. Using Nk from (2), define $S_{k}=\mathbb {E} [N_{k}]=\sum _{\scriptstyle J\subset \{1,\ldots ,m\} \atop \scriptstyle |J|=k}\mathbb {P} {\biggl (}\bigcap _{j\in J}A_{j}{\biggr )},\qquad k\in \{0,\ldots ,m\},$ (7) where the intersection over the empty index set is again defined as Ω, hence S0 = 1. If the ring R is also an algebra over the real or complex numbers, then taking the expectation of the coefficients in (4) and using the notation from (7), $\sum _{n=0}^{m}\mathbb {P} (N=n)x^{n}=\sum _{k=0}^{m}S_{k}(x-1)^{k}$ (4') in R[x]. If R is the field of real numbers, then this is the probability-generating function of the probability distribution of N. Similarly, (5) and (6) yield $\sum _{n=0}^{m}\mathbb {P} (N=n)E^{n}=\sum _{k=0}^{m}S_{k}\Delta ^{k}$ (5') and, for every sequence c = (c0, c1, c2, c3, ..., cm, ...), $\sum _{n=0}^{m}\mathbb {P} (N=n)\,c_{n}=\sum _{k=0}^{m}S_{k}\,(\Delta ^{k}c)_{0}.$ (6') The quantity on the left-hand side of (6') is the expected value of cN. Remarks 1. In actuarial science, the name Schuette–Nesbitt formula refers to equation (6'), where V denotes the set of real numbers. 2. The left-hand side of equation (5') is a convex combination of the powers of the shift operator E, it can be seen as the expected value of random operator EN. Accordingly, the left-hand side of equation (6') is the expected value of random component cN. 
Note that both have a discrete probability distribution with finite support, hence expectations are just the well-defined finite sums. 3. The probabilistic version of the inclusion–exclusion principle can be derived from equation (6') by choosing the sequence c = (0, 1, 1, ...): the left-hand side reduces to the probability of the event {N ≥ 1}, which is the union of A1, ..., Am, and the right-hand side is S1 – S2 + S3 – ... – (–1)mSm, because (Δ0c)0 = 0 and (Δkc)0 = –(–1)k for k ∈ {1, ..., m}. 4. Equations (5), (5'), (6) and (6') are also true when the shift operator and the difference operator are considered on a subspace like the ℓp spaces. 5. If desired, the formulae (5), (5'), (6) and (6') can be considered in finite dimensions, because only the first m + 1 components of the sequences matter. Hence, represent the linear shift operator E and the linear difference operator Δ as mappings of the (m + 1)-dimensional Euclidean space into itself, given by the (m + 1) × (m + 1)-matrices $E={\begin{pmatrix}0&1&0&\cdots &0\\0&0&1&\ddots &\vdots \\\vdots &\ddots &\ddots &\ddots &0\\0&\cdots &0&0&1\\0&\cdots &0&0&0\end{pmatrix}},\qquad \Delta ={\begin{pmatrix}-1&1&0&\cdots &0\\0&-1&1&\ddots &\vdots \\\vdots &\ddots &\ddots &\ddots &0\\0&\cdots &0&-1&1\\0&\cdots &0&0&-1\end{pmatrix}},$ and let I denote the (m + 1)-dimensional identity matrix. Then (6) and (6') hold for every vector c = (c0, c1, ..., cm)T in (m + 1)-dimensional Euclidean space, where the exponent T in the definition of c denotes the transpose. 6. Equations (5) and (5') hold for an arbitrary linear operator E as long as Δ is the difference of E and the identity operator I. 7. The probabilistic versions (4'), (5') and (6') can be generalized to every finite measure space. For textbook presentations of the probabilistic Schuette–Nesbitt formula (6') and their applications to actuarial science, cf. Gerber (1997), Chapter 8, or Bowers et al. (1997), Chapter 18 and the Appendix, pp. 577–578.
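The finite-dimensional matrix representation in remark 5 can be checked directly: with the (m + 1) × (m + 1) matrices above, Δ = E − I, and the 0th component of Δkc reproduces the k-th forward difference of the truncated sequence. A pure-Python sketch (the sample sequence is arbitrary):

```python
from math import comb

m = 4
n = m + 1
# shift matrix E and identity I, as in remark 5
E = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
I = [[int(i == j) for j in range(n)] for i in range(n)]
D = [[E[i][j] - I[i][j] for j in range(n)] for i in range(n)]   # Delta = E - I

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

c = [1, 4, 9, 16, 25]          # arbitrary sample values c_0, ..., c_m
v = c
for k in range(n):
    # (Delta^k c)_0 must equal the k-th forward difference of c at index 0
    direct = sum((-1) ** (k - j) * comb(k, j) * c[j] for j in range(k + 1))
    assert v[0] == direct
    v = matvec(D, v)
```

Only the 0th component is compared: as remark 5 notes, the truncation at the matrix boundary corrupts the tail entries, but the first components agree for k ≤ m.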
History For independent events, the formula (6') appeared in a discussion of Robert P. White and T.N.E. Greville's paper by Donald R. Schuette and Cecil J. Nesbitt, see Schuette & Nesbitt (1959). In the two-page note Gerber (1979), Hans U. Gerber called it the Schuette–Nesbitt formula and generalized it to arbitrary events. Christian Buchta, see Buchta (1994), noticed the combinatorial nature of the formula and published the elementary combinatorial proof of (3). Cecil J. Nesbitt, PhD, F.S.A., M.A.A.A., received his mathematical education at the University of Toronto and the Institute for Advanced Study in Princeton. He taught actuarial mathematics at the University of Michigan from 1938 to 1980. He served the Society of Actuaries from 1985 to 1987 as Vice-President for Research and Studies. Professor Nesbitt died in 2001. (Short CV taken from Bowers et al. (1997), page xv.) Donald Richard Schuette was a PhD student of C. Nesbitt; he later became professor at the University of Wisconsin–Madison. The probabilistic version of the Schuette–Nesbitt formula (6') generalizes much older formulae of Waring, which express the probability of the events {N = n} and {N ≥ n} in terms of S1, S2, ..., Sm. More precisely, with $\textstyle {\binom {k}{n}}$ denoting the binomial coefficient, $\mathbb {P} (N=n)=\sum _{k=n}^{m}(-1)^{k-n}{\binom {k}{n}}S_{k},\qquad n\in \{0,\ldots ,m\},$ (8) and $\mathbb {P} (N\geq n)=\sum _{k=n}^{m}(-1)^{k-n}{\binom {k-1}{n-1}}S_{k},\qquad n\in \{1,\ldots ,m\},$ (9) see Feller (1968), Sections IV.3 and IV.5, respectively. To see that these formulae are special cases of the probabilistic version of the Schuette–Nesbitt formula, note that by the binomial theorem $\Delta ^{k}=(E-I)^{k}=\sum _{j=0}^{k}{\binom {k}{j}}(-1)^{k-j}E^{j},\qquad k\in \mathbb {N} _{0}.$ Applying this operator identity to the sequence c = (0, ..., 0, 1, 0, 0, ...)
with n leading zeros and noting that (E jc)0 = 1 if j = n and (E jc)0 = 0 otherwise, the formula (8) for {N = n} follows from (6'). Applying the identity to c = (0, ..., 0, 1, 1, 1, ...) with n leading zeros and noting that (E jc)0 = 1 if j ≥ n and (E jc)0 = 0 otherwise, equation (6') implies that $\mathbb {P} (N\geq n)=\sum _{k=n}^{m}S_{k}\sum _{j=n}^{k}{\binom {k}{j}}(-1)^{k-j}.$ Expanding (1 – 1)k using the binomial theorem and using equation (11) of the formulas involving binomial coefficients, we obtain $\sum _{j=n}^{k}{\binom {k}{j}}(-1)^{k-j}=-\sum _{j=0}^{n-1}{\binom {k}{j}}(-1)^{k-j}=(-1)^{k-n}{\binom {k-1}{n-1}}.$ Hence, we have the formula (9) for {N ≥ n}. Applications In actuarial science Problem: Suppose there are m persons aged x1, ..., xm with remaining random (but independent) lifetimes T1, ..., Tm. Suppose the group signs a life insurance contract which pays them after t years the amount cn if exactly n persons out of m are still alive after t years. How high is the expected payout of this insurance contract in t years? Solution: Let Aj denote the event that person j survives t years, which means that Aj = {Tj > t}. In actuarial notation the probability of this event is denoted by t pxj and can be taken from a life table. Use independence to calculate the probability of intersections. Calculate S1, ..., Sm and use the probabilistic version of the Schuette–Nesbitt formula (6') to calculate the expected value of cN. In probability theory Let σ be a random permutation of the set {1, ..., m} and let Aj denote the event that j is a fixed point of σ, meaning that Aj = {σ(j) = j}. When the numbers in J, which is a subset of {1, ..., m}, are fixed points, then there are (m – |J|)! 
ways to permute the remaining m – |J| numbers, hence $\mathbb {P} {\biggl (}\bigcap _{j\in J}A_{j}{\biggr )}={\frac {(m-|J|)!}{m!}}.$ By the combinatorial interpretation of the binomial coefficient, there are $\textstyle {\binom {m}{k}}$ different choices of a subset J of {1, ..., m} with k elements, hence (7) simplifies to $S_{k}={\binom {m}{k}}{\frac {(m-k)!}{m!}}={\frac {1}{k!}}.$ Therefore, using (4'), the probability-generating function of the number N of fixed points is given by $\mathbb {E} [x^{N}]=\sum _{k=0}^{m}{\frac {(x-1)^{k}}{k!}},\qquad x\in \mathbb {R} .$ This is the partial sum of the infinite series giving the exponential function at x – 1, which in turn is the probability-generating function of the Poisson distribution with parameter 1. Therefore, as m tends to infinity, the distribution of N converges to the Poisson distribution with parameter 1. See also • Rencontres numbers References • Bowers, Newton L.; Gerber, Hans U.; Hickman, James C.; Jones, Donald A.; Nesbitt, Cecil J. (1997), Actuarial Mathematics (2nd ed.), The Society of Actuaries, ISBN 0-938959-46-8, Zbl 0634.62107 • Buchta, Christian (1994), "An elementary proof of the Schuette–Nesbitt formula", Mitteilungen der Schweiz. Vereinigung der Versicherungsmathematiker, 1994 (2): 219–220, Zbl 0825.62745 • Feller, William (1968) [1950], An Introduction to Probability Theory and Its Applications, Wiley Series in Probability and Mathematical Statistics, vol. I (revised printing, 3rd ed.), New York, London, Sydney: John Wiley and Sons, ISBN 0-471-25708-7, Zbl 0155.23101 • Gerber, Hans U. (1979), "A proof of the Schuette–Nesbitt formula for dependent events" (PDF), Actuarial Research Clearing House, 1: 9–10 • Gerber, Hans U. (1997) [1986], Life Insurance Mathematics (3rd ed.), Berlin: Springer-Verlag, ISBN 3-540-62242-X, Zbl 0869.62072 • Schuette, Donald R.; Nesbitt, Cecil J. (1959), "Discussion of the preceding paper by Robert P. White and T.N.E.
Greville" (PDF), Transactions of Society of Actuaries, 11 (29AB): 97–99 External links • Cecil J. Nesbitt at the Mathematics Genealogy Project • Donald R. Schuette at the Mathematics Genealogy Project
Logic optimization Logic optimization is the process of finding an equivalent representation of a specified logic circuit under one or more specified constraints. This process is part of logic synthesis applied in digital electronics and integrated circuit design. Generally, the circuit is constrained to a minimum chip area while meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one.[1] Usually, the smaller circuit with the same function is cheaper,[2] takes less space, consumes less power, has shorter latency, and minimizes risks of unexpected cross-talk, hazards of delayed signal processing, and other issues present at the nano-scale level of metallic structures on an integrated circuit. In terms of Boolean algebra, the optimization of a complex Boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original one. Motivation The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element takes up physical space and costs time and money to produce. Circuit minimization may be one form of logic optimization used to reduce the area of complex logic in integrated circuits.
With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA) industry was to find the simplest circuit representation of the given design description.[nb 1] While two-level logic optimization had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic minimizer, the rapidly improving chip densities, and the wide adoption of hardware description languages for circuit description, formalized the logic optimization domain as it exists today, including Logic Friday (graphical interface), Minilog, and ESPRESSO-IISOJS (many-valued logic).[3] Methods The methods of logic circuit simplification are equally applicable to Boolean expression minimization. Classification Today, logic optimization is divided into various categories: • Based on circuit representation: two-level logic optimization; multi-level logic optimization. • Based on circuit characteristics: sequential logic optimization; combinational logic optimization. • Based on type of execution: graphical optimization methods; tabular optimization methods; algebraic optimization methods. Graphical methods Graphical methods represent the required logical function by a diagram representing the logic variables and value of the function. By manipulating or inspecting a diagram, much tedious calculation may be eliminated. Graphical minimization methods for two-level logic include: • Euler diagram (aka Eulerian circle) (1768) by Leonhard P. Euler (1707–1783) • Venn diagram (1880) by John Venn (1834–1923) • Karnaugh map (1953) by Maurice Karnaugh Boolean expression minimization The same methods of Boolean expression minimization (simplification) listed below may be applied to circuit optimization.
For the case when the Boolean function is specified by a circuit (that is, we want to find an equivalent circuit of minimum size possible), the unbounded circuit minimization problem was long conjectured to be $\Sigma _{2}^{P}$-complete in time complexity, a result finally proved in 2008,[4] but there are effective heuristics such as Karnaugh maps and the Quine–McCluskey algorithm that facilitate the process. Boolean function minimizing methods include: • Quine–McCluskey algorithm • Petrick's method Optimal multi-level methods Methods which find optimal circuit representations of Boolean functions are often referred to as "exact synthesis" in the literature. Due to the computational complexity, exact synthesis is tractable only for small Boolean functions. Recent approaches map the optimization problem to a Boolean satisfiability problem.[5][6] This allows finding optimal circuit representations using a SAT solver. Heuristic methods A heuristic method uses established rules that solve a practically useful subset of the much larger possible set of problems. The heuristic method may not produce the theoretically optimal solution, but if useful, will provide most of the optimization desired with a minimum of effort. An example of a computer system that uses heuristic methods for logic optimization is the Espresso heuristic logic minimizer. Two-level versus multi-level representations While a two-level circuit representation of circuits strictly refers to the flattened view of the circuit in terms of SOPs (sum-of-products) — which is more applicable to a PLA implementation of the design — a multi-level representation is a more generic view of the circuit in terms of arbitrarily connected SOPs, POSs (product-of-sums), factored form etc. Logic optimization algorithms generally work either on the structural (SOPs, factored form) or functional (binary decision diagrams, algebraic decision diagrams (ADDs)) representation of the circuit.
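As an illustration of the tabular approach mentioned above, the merging step of the Quine–McCluskey algorithm (generating prime implicants; the covering-table/Petrick's-method step is omitted) can be sketched in a few lines. The encoding of implicants as strings over '0', '1', '-' is a common convention, assumed here for illustration:

```python
from itertools import combinations

def merge(t1, t2):
    """Merge two implicants (strings over '0', '1', '-') if they
    differ in exactly one non-dash position; otherwise return None."""
    out, diff = [], 0
    for x, y in zip(t1, t2):
        if x == y:
            out.append(x)
        elif x != '-' and y != '-':
            diff += 1
            out.append('-')
        else:
            return None
    return ''.join(out) if diff == 1 else None

def prime_implicants(minterms, nbits):
    terms = {format(v, '0%db' % nbits) for v in minterms}
    primes = set()
    while terms:
        used, nxt = set(), set()
        for t1, t2 in combinations(sorted(terms), 2):
            c = merge(t1, t2)
            if c is not None:
                nxt.add(c)
                used.update((t1, t2))
        primes |= terms - used       # terms that merged with nothing are prime
        terms = nxt
    return primes

# XOR admits no merging: both minterms are already prime implicants.
assert prime_implicants({0b01, 0b10}, 2) == {'01', '10'}
# f = sum of minterms 4..7 over (a, b, c) collapses to the single cube a.
assert prime_implicants({4, 5, 6, 7}, 3) == {'1--'}
```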
In sum-of-products (SOP) form, AND gates form the smallest unit and are stitched together using ORs, whereas in product-of-sums (POS) form it is the opposite. POS form requires parentheses to group the OR terms together under AND gates, because OR has lower precedence than AND. Both SOP and POS forms translate nicely into circuit logic. If we have two functions F1 and F2:

$F_{1}=AB+AC+AD,\,$
$F_{2}=A'B+A'C+A'E.\,$

The above two-level representation takes six product terms and 24 transistors in a CMOS implementation. A functionally equivalent multi-level representation can be:

P = B + C
F1 = AP + AD
F2 = A'P + A'E

While the number of levels here is 3, the total number of product terms and literals is reduced because of the sharing of the term B + C.

Similarly, we distinguish between sequential and combinational circuits, whose behavior can be described in terms of finite-state machine state tables/diagrams or by Boolean functions and relations, respectively. Combinational circuits are time-independent circuits whose outputs do not depend on previous inputs. Examples: priority encoder, binary decoder, multiplexer, demultiplexer. Sequential circuits depend on clock cycles and on past as well as present inputs to generate any output. Examples: flip-flops, counters.

Example

While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function. The Boolean function carried out by the circuit is directly related to the algebraic expression from which the function is implemented.[7] Consider the circuit used to represent $(A\wedge {\bar {B}})\vee ({\bar {A}}\wedge B)$. It is evident that two negations, two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need two inverters, two AND gates, and an OR gate.
The circuit can be simplified (minimized) by applying laws of Boolean algebra or by using intuition. Since the example states that $A$ is true when $B$ is false and the other way around, one can conclude that this simply means $A\neq B$. In terms of logic gates, inequality simply means an XOR gate (exclusive or). Therefore, $(A\wedge {\bar {B}})\vee ({\bar {A}}\wedge B)\iff A\neq B$. Then the two circuits shown below are equivalent, as can be checked using a truth table:

A B | (A∧¬B)∨(¬A∧B) | A≠B
F F |       F       |  F
F T |       T       |  T
T F |       T       |  T
T T |       F       |  F

See also
• Binary decision diagram (BDD)
• Don't care condition
• Prime implicant
• Circuit complexity — on estimation of the circuit complexity
• Function composition
• Function decomposition
• Gate underutilization
• Logic redundancy
• Harvard minimizing chart

Notes
1. The netlist size can be used to measure simplicity.

References
1. Maxfield, Clive "Max" (2008-01-01). "Chapter 5: "Traditional" Design Flows". In Maxfield, Clive "Max" (ed.). FPGAs. Instant Access. Burlington: Newnes / Elsevier Inc. pp. 75–106. doi:10.1016/B978-0-7506-8974-8.00005-3. ISBN 978-0-7506-8974-8. Retrieved 2021-10-04.
2. Balasanyan, Seyran; Aghagulyan, Mane; Wuttke, Heinz-Dietrich; Henke, Karsten (2018-05-16). "Digital Electronics" (PDF). Bachelor Embedded Systems - Year Group. Tempus. DesIRE. Archived (PDF) from the original on 2021-10-04. Retrieved 2021-10-04. (101 pages)
3. Theobald, M.; Nowick, S. M. (November 1998). "Fast heuristic and exact algorithms for two-level hazard-free logic minimization". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 17 (11): 1130–1147. doi:10.1109/43.736186.
4. Buchfuhrer, David; Umans, Christopher (January 2011). "The complexity of Boolean formula minimization" (PDF). Journal of Computer and System Sciences (JCSS). Computer Science Department, California Institute of Technology, Pasadena, California, USA: Elsevier Inc.
77 (1): 142–153. doi:10.1016/j.jcss.2010.06.011. This is an extended version of the conference paper: Buchfuhrer, David; Umans, Christopher (2008). "The Complexity of Boolean Formula Minimization". Proceedings of Automata, Languages and Programming (PDF). pp. 24–35. doi:10.1007/978-3-540-70575-8_3. ISBN 978-3-540-70574-1. Archived (PDF) from the original on 2018-01-14. Retrieved 2018-01-14.
5. Haaswijk, Winston. "SAT-Based Exact Synthesis: Encodings, Topology Families, and Parallelism" (PDF). EPFL. Retrieved 2022-12-07.
6. Haaswijk, Winston. "SAT-Based Exact Synthesis for Multi-Level Logic Networks" (PDF). EPFL. Retrieved 2022-12-07.
7. Mano, M. Morris; Kime, Charles R. (2014). Logic and Computer Design Fundamentals (4th new international ed.). Pearson Education Limited. p. 54. ISBN 978-1-292-02468-4.

Further reading
• Lind, Larry Frederick; Nelson, John Christopher Cunliffe (1977). Analysis and Design of Sequential Digital Systems. Macmillan Press. ISBN 0-33319266-4. (146 pages)
• De Micheli, Giovanni (1994). Synthesis and Optimization of Digital Circuits. McGraw-Hill. ISBN 0-07-016333-2. (NB. Chapters 7–9 cover combinatorial two-level, combinatorial multi-level, and respectively sequential circuit optimization.)
• Hachtel, Gary D.; Somenzi, Fabio (2006) [1996]. Logic Synthesis and Verification Algorithms. Springer Science & Business Media. ISBN 978-0-387-31005-3.
• Kohavi, Zvi; Jha, Niraj K. (2009). "4–6". Switching and Finite Automata Theory (3rd ed.). Cambridge University Press. ISBN 978-0-521-85748-2.
• Rutenbar, Rob A. Multi-level minimization, Part I: Models & Methods (PDF) (lecture slides). Carnegie Mellon University (CMU). Lecture 7. Archived (PDF) from the original on 2018-01-15. Retrieved 2018-01-15; Rutenbar, Rob A. Multi-level minimization, Part II: Cube/Cokernel Extract (PDF) (lecture slides). Carnegie Mellon University (CMU). Lecture 8. Archived (PDF) from the original on 2018-01-15. Retrieved 2018-01-15.
Wikipedia
Schulz–Zimm distribution

The Schulz–Zimm distribution is a special case of the gamma distribution. It is widely used to model the polydispersity of polymers. In this context it was introduced in 1939 by Günter Victor Schulz[1] and in 1948 by Bruno H. Zimm.[2]

Schulz–Zimm
Parameters: $k$ (shape parameter)
Support: $x\in \mathbb {R} _{>0}$
PDF: ${\frac {k^{k}x^{k-1}e^{-kx}}{\Gamma (k)}}$
Mean: $1$
Variance: ${\frac {1}{k}}$

This distribution has only a shape parameter k, the scale being fixed at θ = 1/k. Accordingly, the probability density function is

$f(x)={\frac {k^{k}x^{k-1}e^{-kx}}{\Gamma (k)}}.$

When applied to polymers, the variable x is the relative mass or chain length $x=M/M_{n}$. Accordingly, the mass distribution $f(M)$ is just a gamma distribution with scale parameter $\theta =M_{n}/k$. This explains why the Schulz–Zimm distribution is rarely encountered by name outside its conventional application domain.

The distribution has mean 1 and variance 1/k. The polymer dispersity is $\langle x^{2}\rangle /\langle x\rangle =1+1/k$. For large k the Schulz–Zimm distribution approaches a Gaussian distribution. In algorithms where one needs to draw samples $x\geq 0$, the Schulz–Zimm distribution is to be preferred over a Gaussian because the latter requires an arbitrary cut-off to prevent negative x.

References
1. G. V. Schulz (1939), Z. Phys. Chem. 43B, 25–46. - Eq. (27a) with -ln(a), k+1 in place of our x, k.
2. B. H. Zimm (1948), J. Chem. Phys. 16, 1099. - Proposes a two-parameter variant of Eq. (13) without derivation and without reference to Schulz or anyone else. One of the two parameters can be eliminated by the requirement <n>=1.
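A sketch of how the sampling remark above might look in code, using only Python's standard library: `random.gammavariate(alpha, beta)` takes a shape and a *scale* parameter, so shape k with scale 1/k gives the Schulz–Zimm distribution (function names are ours):

```python
import math
import random

def schulz_zimm_pdf(x, k):
    # f(x) = k^k x^(k-1) e^(-kx) / Gamma(k): a gamma pdf with shape k, scale 1/k
    return k ** k * x ** (k - 1) * math.exp(-k * x) / math.gamma(k)

def schulz_zimm_sample(k, rng=random):
    # shape k, scale 1/k => mean 1, variance 1/k; never negative,
    # unlike a Gaussian, so no cut-off is needed
    return rng.gammavariate(k, 1.0 / k)
```

Samples drawn this way have mean 1 and variance 1/k; to sample the mass distribution one would rescale the result by Mn.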
Permutation polynomial

In mathematics, a permutation polynomial (for a given ring) is a polynomial that acts as a permutation of the elements of the ring, i.e. the map $x\mapsto g(x)$ is a bijection. In case the ring is a finite field, the Dickson polynomials, which are closely related to the Chebyshev polynomials, provide examples. Over a finite field, every function, so in particular every permutation of the elements of that field, can be written as a polynomial function.

In the case of finite rings Z/nZ, such polynomials have also been studied and applied in the interleaver component of error detection and correction algorithms.[1][2]

Single variable permutation polynomials over finite fields

Let Fq = GF(q) be the finite field of characteristic p, that is, the field having q elements where q = p^e for some prime p. A polynomial f with coefficients in Fq (symbolically written as f ∈ Fq[x]) is a permutation polynomial of Fq if the function from Fq to itself defined by $c\mapsto f(c)$ is a permutation of Fq.[3]

Due to the finiteness of Fq, this definition can be expressed in several equivalent ways:[4]
• the function $c\mapsto f(c)$ is onto (surjective);
• the function $c\mapsto f(c)$ is one-to-one (injective);
• f(x) = a has a solution in Fq for each a in Fq;
• f(x) = a has a unique solution in Fq for each a in Fq.

A characterization of which polynomials are permutation polynomials is given by Hermite's criterion:[5][6] f ∈ Fq[x] is a permutation polynomial of Fq if and only if the following two conditions hold:
1. f has exactly one root in Fq;
2. for each integer t with 1 ≤ t ≤ q − 2 and $t\not \equiv 0\!{\pmod {p}}$, the reduction of f(x)^t mod (x^q − x) has degree ≤ q − 2.

If f(x) is a permutation polynomial defined over the finite field GF(q), then so is g(x) = a f(x + b) + c for all a ≠ 0, b and c in GF(q).
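For small prime fields the definition can be checked by brute force. A minimal sketch (the helper name is ours, and q is restricted to a prime so that arithmetic mod q is field arithmetic):

```python
def is_perm_poly(coeffs, q):
    # coeffs[i] is the coefficient of x^i; f is a permutation polynomial of
    # GF(q) iff c -> f(c) takes q distinct values
    f = lambda c: sum(a * pow(c, i, q) for i, a in enumerate(coeffs)) % q
    return len({f(c) for c in range(q)}) == q

# x^3 permutes GF(5) (gcd(3, 4) = 1) but not GF(7) (7 ≡ 1 mod 3)
assert is_perm_poly([0, 0, 0, 1], 5)
assert not is_perm_poly([0, 0, 0, 1], 7)

# the closure property: g(x) = 2*f(x) + 4 is again a permutation of GF(5)
assert is_perm_poly([4, 0, 0, 2], 5)
```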
The permutation polynomial g(x) is in normalized form if a, b and c are chosen so that g(x) is monic, g(0) = 0 and (provided the characteristic p does not divide the degree n of the polynomial) the coefficient of x^(n−1) is 0. There are many open questions concerning permutation polynomials defined over finite fields.[7][8]

Small degree

Hermite's criterion is computationally intensive and can be difficult to use in making theoretical conclusions. However, Dickson was able to use it to find all permutation polynomials of degree at most five over all finite fields. These results are:[9][6]

Normalized permutation polynomial of Fq | q
$x$ | any $q$
$x^{2}$ | $q\equiv 0\!{\pmod {2}}$
$x^{3}$ | $q\not \equiv 1\!{\pmod {3}}$
$x^{3}-ax$ ($a$ not a square) | $q\equiv 0\!{\pmod {3}}$
$x^{4}\pm 3x$ | $q=7$
$x^{4}+a_{1}x^{2}+a_{2}x$ (if its only root in Fq is 0) | $q\equiv 0\!{\pmod {2}}$
$x^{5}$ | $q\not \equiv 1\!{\pmod {5}}$
$x^{5}-ax$ ($a$ not a fourth power) | $q\equiv 0\!{\pmod {5}}$
$x^{5}+ax\,(a^{2}=2)$ | $q=9$
$x^{5}\pm 2x^{2}$ | $q=7$
$x^{5}+ax^{3}\pm x^{2}+3a^{2}x$ ($a$ not a square) | $q=7$
$x^{5}+ax^{3}+5^{-1}a^{2}x$ ($a$ arbitrary) | $q\equiv \pm 2\!{\pmod {5}}$
$x^{5}+ax^{3}+3a^{2}x$ ($a$ not a square) | $q=13$
$x^{5}-2ax^{3}+a^{2}x$ ($a$ not a square) | $q\equiv 0\!{\pmod {5}}$

A list of all monic permutation polynomials of degree six in normalized form can be found in Shallue & Wanless (2013).[10]

Some classes of permutation polynomials

Beyond the above examples, the following list, while not exhaustive, contains almost all of the known major classes of permutation polynomials over finite fields.[11]
• x^n permutes GF(q) if and only if n and q − 1 are coprime (notationally, (n, q − 1) = 1).[12]
• If a is in GF(q) and n ≥ 1 then the Dickson polynomial (of the first kind) Dn(x,a) is defined by $D_{n}(x,a)=\sum _{j=0}^{\lfloor n/2\rfloor }{\frac {n}{n-j}}{\binom {n-j}{j}}(-a)^{j}x^{n-2j}.$ These can also be obtained from the recursion $D_{n}(x,a)=xD_{n-1}(x,a)-aD_{n-2}(x,a),$ with the initial conditions
$D_{0}(x,a)=2$ and $D_{1}(x,a)=x$. The first few Dickson polynomials are:
• $D_{2}(x,a)=x^{2}-2a$
• $D_{3}(x,a)=x^{3}-3ax$
• $D_{4}(x,a)=x^{4}-4ax^{2}+2a^{2}$
• $D_{5}(x,a)=x^{5}-5ax^{3}+5a^{2}x.$
If a ≠ 0 and n > 1 then Dn(x, a) permutes GF(q) if and only if (n, q^2 − 1) = 1.[13] If a = 0 then Dn(x, 0) = x^n and the previous result holds.
• If GF(q^r) is an extension of GF(q) of degree r, then the linearized polynomial $L(x)=\sum _{s=0}^{r-1}\alpha _{s}x^{q^{s}},$ with αs in GF(q^r), is a linear operator on GF(q^r) over GF(q). A linearized polynomial L(x) permutes GF(q^r) if and only if 0 is the only root of L(x) in GF(q^r).[12] This condition can be expressed algebraically as[14] $\det \left(\alpha _{i-j}^{q^{j}}\right)\neq 0\quad (i,j=0,1,\ldots ,r-1).$ The linearized polynomials that are permutation polynomials over GF(q^r) form a group under the operation of composition modulo $x^{q^{r}}-x$, which is known as the Betti–Mathieu group, isomorphic to the general linear group GL(r, Fq).[14]
• If g(x) is in the polynomial ring Fq[x] and g(x^s) has no nonzero root in GF(q) when s divides q − 1, and r > 1 is relatively prime (coprime) to q − 1, then $x^{r}(g(x^{s}))^{(q-1)/s}$ permutes GF(q).[6]
• Only a few other specific classes of permutation polynomials over GF(q) have been characterized. Two of these, for example, are: $x^{(q+m-1)/m}+ax$ where m divides q − 1, and $x^{r}\left(x^{d}-a\right)^{\left(p^{n}-1\right)/d}$ where d divides p^n − 1.

Exceptional polynomials

An exceptional polynomial over GF(q) is a polynomial in Fq[x] which is a permutation polynomial on GF(q^m) for infinitely many m.[15] A permutation polynomial over GF(q) of degree at most q^(1/4) is exceptional over GF(q).[16] Every permutation of GF(q) is induced by an exceptional polynomial.[16] If a polynomial with integer coefficients (i.e., in ℤ[x]) is a permutation polynomial over GF(p) for infinitely many primes p, then it is the composition of linear and Dickson polynomials.[17] (See Schur's conjecture below.)
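The Dickson recursion above and the permutation criterion (n, q² − 1) = 1 can be verified directly for a small prime field. A short sketch (function names are ours), taking q = p = 7 and a = 1:

```python
import math

def dickson(n, x, a, p):
    # D_0 = 2, D_1 = x, D_n = x*D_{n-1} - a*D_{n-2}, computed mod p
    d0, d1 = 2 % p, x % p
    if n == 0:
        return d0
    for _ in range(n - 1):
        d0, d1 = d1, (x * d1 - a * d0) % p
    return d1

def is_permutation(f, p):
    # f permutes GF(p) iff its value set has p distinct elements
    return len({f(c) for c in range(p)}) == p

p, a = 7, 1
for n in (2, 3, 4, 5):
    # D_n(x, a) permutes GF(q) iff gcd(n, q^2 - 1) = 1; here q^2 - 1 = 48,
    # so among n = 2..5 only D_5 is a permutation
    assert is_permutation(lambda c: dickson(n, c, a, p), p) \
        == (math.gcd(n, p * p - 1) == 1)
```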
Geometric examples

Main article: Oval (projective plane)

In finite geometry, coordinate descriptions of certain point sets can provide examples of permutation polynomials of higher degree. In particular, the points forming an oval in a finite projective plane, PG(2,q) with q a power of 2, can be coordinatized in such a way that the relationship between the coordinates is given by an o-polynomial, which is a special type of permutation polynomial over the finite field GF(q).

Computational complexity

The problem of testing whether a given polynomial over a finite field is a permutation polynomial can be solved in polynomial time.[18]

Permutation polynomials in several variables over finite fields

A polynomial $f\in \mathbb {F} _{q}[x_{1},\ldots ,x_{n}]$ is a permutation polynomial in n variables over $\mathbb {F} _{q}$ if the equation $f(x_{1},\ldots ,x_{n})=\alpha $ has exactly $q^{n-1}$ solutions in $\mathbb {F} _{q}^{n}$ for each $\alpha \in \mathbb {F} _{q}$.[19]

Quadratic permutation polynomials (QPP) over finite rings

For the finite ring Z/nZ one can construct quadratic permutation polynomials. This is possible if and only if n is divisible by p^2 for some prime number p. The construction is surprisingly simple, yet it can produce permutations with certain good properties. That is why it has been used in the interleaver component of turbo codes in the 3GPP Long Term Evolution mobile telecommunication standard (see 3GPP technical specification 36.212,[20] e.g. page 14 in version 8.8.0).

Simple examples

Consider $g(x)=2x^{2}+x$ for the ring Z/4Z. One sees: $g(0)=0$; $g(1)=3$; $g(2)=2$; $g(3)=1$, so the polynomial defines the permutation ${\begin{pmatrix}0&1&2&3\\0&3&2&1\end{pmatrix}}.$ Consider the same polynomial $g(x)=2x^{2}+x$ for the other ring Z/8Z.
One sees: $g(0)=0$; $g(1)=3$; $g(2)=2$; $g(3)=5$; $g(4)=4$; $g(5)=7$; $g(6)=6$; $g(7)=1$, so the polynomial defines the permutation ${\begin{pmatrix}0&1&2&3&4&5&6&7\\0&3&2&5&4&7&6&1\end{pmatrix}}.$

Rings Z/p^kZ

Consider $g(x)=ax^{2}+bx+c$ for the ring Z/p^kZ.

Lemma: for k = 1 (i.e. Z/pZ) such a polynomial defines a permutation only in the case a = 0 and b not equal to zero. So the polynomial is not quadratic, but linear.

Lemma: for k > 1, p > 2 (Z/p^kZ) such a polynomial defines a permutation if and only if $a\equiv 0{\pmod {p}}$ and $b\not \equiv 0{\pmod {p}}$.

Rings Z/nZ

Consider $n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{l}^{k_{l}}$, where pt are prime numbers.

Lemma: any polynomial $ g(x)=a_{0}+\sum _{0<i\leq M}a_{i}x^{i}$ defines a permutation for the ring Z/nZ if and only if each of the polynomials $ g_{p_{t}}(x)=a_{0,p_{t}}+\sum _{0<i\leq M}a_{i,p_{t}}x^{i}$ defines a permutation for the ring $Z/p_{t}^{k_{t}}Z$, where $a_{j,p_{t}}$ are the remainders of $a_{j}$ modulo $p_{t}^{k_{t}}$.

As a corollary, one can construct plenty of quadratic permutation polynomials using the following simple construction. Consider $n=p_{1}^{k_{1}}p_{2}^{k_{2}}\dots p_{l}^{k_{l}}$, and assume that k1 > 1. Consider $ax^{2}+bx$, such that $a=0{\bmod {p}}_{1}$ but $a\neq 0{\bmod {p}}_{1}^{k_{1}}$; assume that $a=0{\bmod {p}}_{i}^{k_{i}}$ for i > 1; and assume that $b\neq 0{\bmod {p}}_{i}$ for all i = 1, ..., l. (For example, one can take $a=p_{1}p_{2}^{k_{2}}\cdots p_{l}^{k_{l}}$ and $b=1$.) Then such a polynomial defines a permutation. To see this, we observe that for all primes pi, i > 1, the reduction of this quadratic polynomial modulo pi is actually a linear polynomial, and hence a permutation for trivial reasons. For the first prime we use the lemma discussed previously to see that it defines a permutation. For example, consider Z/12Z and the polynomial $6x^{2}+x$.
It defines a permutation ${\begin{pmatrix}0&1&2&3&4&5&6&7&8&\cdots \\0&7&2&9&4&11&6&1&8&\cdots \end{pmatrix}}$

Higher degree polynomials over finite rings

A polynomial g(x) for the ring Z/p^kZ is a permutation polynomial if and only if it permutes the finite field Z/pZ and $g'(x)\neq 0{\bmod {p}}$ for all x in Z/p^kZ, where g′(x) is the formal derivative of g(x).[21]

Schur's conjecture

Let K be an algebraic number field with R the ring of integers. The term "Schur's conjecture" refers to the assertion that, if a polynomial f defined over K is a permutation polynomial on R/P for infinitely many prime ideals P, then f is the composition of Dickson polynomials, degree-one polynomials, and polynomials of the form x^k. In fact, Schur did not make any conjecture in this direction. The notion that he did is due to Fried,[22] who gave a flawed proof of a false version of the result. Correct proofs have been given by Turnwald[23] and Müller.[24]

Notes
1. Takeshita, Oscar (2006). "Permutation Polynomial Interleavers: An Algebraic-Geometric Perspective". IEEE Transactions on Information Theory. 53: 2116–2132. arXiv:cs/0601048. doi:10.1109/TIT.2007.896870.
2. Takeshita, Oscar (2005). "A New Construction for LDPC Codes using Permutation Polynomials over Integer Rings". arXiv:cs/0506091.
3. Mullen & Panario 2013, p. 215
4. Lidl & Niederreiter 1997, p. 348
5. Lidl & Niederreiter 1997, p. 349
6. Mullen & Panario 2013, p. 216
7. Lidl & Mullen (1988)
8. Lidl & Mullen (1993)
9. Dickson 1958, pg. 63
10. Mullen & Panario 2013, p. 217
11. Lidl & Mullen 1988, p. 244
12. Lidl & Niederreiter 1997, p. 351
13. Lidl & Niederreiter 1997, p. 356
14. Lidl & Niederreiter 1997, p. 362
15. Mullen & Panario 2013, p. 236
16. Mullen & Panario 2013, p. 238
17. Mullen & Panario 2013, p. 239
18. Kayal, Neeraj (2005). "Recognizing permutation functions in polynomial time". Electronic Colloquium on Computational Complexity. ECCC TR05-008.
For earlier research on this problem, see: Ma, Keju; von zur Gathen, Joachim (1995). "The computational complexity of recognizing permutation functions". Computational Complexity. 5 (1): 76–97. doi:10.1007/BF01277957. MR 1319494. Shparlinski, I. E. (1992). "A deterministic test for permutation polynomials". Computational Complexity. 2 (2): 129–132. doi:10.1007/BF01202000. MR 1190826. 19. Mullen & Panario 2013, p. 230 20. 3GPP TS 36.212 21. Sun, Jing; Takeshita, Oscar (2005). "Interleaver for Turbo Codes Using Permutation Polynomials Over Integer Rings". IEEE Transactions on Information Theory. 51 (1): 102. 22. Fried, M. (1970). "On a conjecture of Schur". Michigan Math. J.: 41–55. 23. Turnwald, G. (1995). "On Schur's conjecture". J. Austral. Math. Soc.: 312–357. 24. Müller, P. (1997). "A Weil-bound free proof of Schur's conjecture". Finite Fields and Their Applications: 25–32. References • Dickson, L. E. (1958) [1901]. Linear Groups with an Exposition of the Galois Field Theory. New York: Dover. • Lidl, Rudolf; Mullen, Gary L. (March 1988). "When Does a Polynomial over a Finite Field Permute the Elements of the Field?". The American Mathematical Monthly. 95 (3): 243–246. doi:10.2307/2323626. • Lidl, Rudolf; Mullen, Gary L. (January 1993). "When Does a Polynomial over a Finite Field Permute the Elements of the Field?, II". The American Mathematical Monthly. 100 (1): 71–74. doi:10.2307/2324822. • Lidl, Rudolf; Niederreiter, Harald (1997). Finite fields. Encyclopedia of Mathematics and Its Applications. Vol. 20 (2nd ed.). Cambridge University Press. ISBN 0-521-39231-4. Zbl 0866.11069. Chapter 7. • Mullen, Gary L.; Panario, Daniel (2013). Handbook of Finite Fields. CRC Press. ISBN 978-1-4398-7378-6. Chapter 8. • Shallue, C.J.; Wanless, I.M. (March 2013). "Permutation polynomials and orthomorphism polynomials of degree six". Finite Fields and Their Applications. 20: 84–92. doi:10.1016/j.ffa.2012.12.003.
Schur's theorem

In discrete mathematics, Schur's theorem is any of several theorems of the mathematician Issai Schur. In differential geometry, Schur's theorem is a theorem of Axel Schur. In functional analysis, Schur's theorem is often called Schur's property, also due to Issai Schur.

Ramsey theory

In Ramsey theory, Schur's theorem states that for any partition of the positive integers into a finite number of parts, one of the parts contains three integers x, y, z with $x+y=z.$ For every positive integer c, S(c) denotes the smallest number S such that for every partition of the integers $\{1,\ldots ,S(c)\}$ into c parts, one of the parts contains integers x, y, and z with $x+y=z$. Schur's theorem ensures that S(c) is well-defined for every positive integer c. The numbers of the form S(c) are called Schur numbers. Folkman's theorem generalizes Schur's theorem by stating that there exist arbitrarily large sets of integers, all of whose nonempty sums belong to the same part.

The only known Schur numbers are S(n) = 2, 5, 14, 45, and 161 (OEIS: A030126). The proof that S(5) = 161 was announced in 2017 and took up 2 petabytes of space.[1][2]

Combinatorics

In combinatorics, Schur's theorem gives the number of ways of expressing a given number as a (non-negative, integer) linear combination of a fixed set of relatively prime numbers.
In particular, if $\{a_{1},\ldots ,a_{n}\}$ is a set of integers such that $\gcd(a_{1},\ldots ,a_{n})=1$, then the number of different tuples of non-negative integers $(c_{1},\ldots ,c_{n})$ with $x=c_{1}a_{1}+\cdots +c_{n}a_{n}$ is, as $x$ goes to infinity,

${\frac {x^{n-1}}{(n-1)!a_{1}\cdots a_{n}}}(1+o(1)).$

As a result, for every set of relatively prime numbers $\{a_{1},\ldots ,a_{n}\}$ there exists a value of $x$ such that every larger number is representable as a linear combination of $\{a_{1},\ldots ,a_{n}\}$ in at least one way. This consequence of the theorem can be recast in the familiar context of changing an amount of money using a set of coins. If the denominations of the coins are relatively prime numbers (such as 2 and 5) then any sufficiently large amount can be changed using only these coins. (See Coin problem.)

Differential geometry

In differential geometry, Schur's theorem compares the distance between the endpoints of a space curve $C^{*}$ to the distance between the endpoints of a corresponding plane curve $C$ of less curvature.

Suppose $C(s)$ is a plane curve with curvature $\kappa (s)$ which makes a convex curve when closed by the chord connecting its endpoints, and $C^{*}(s)$ is a curve of the same length with curvature $\kappa ^{*}(s)$. Let $d$ denote the distance between the endpoints of $C$ and $d^{*}$ denote the distance between the endpoints of $C^{*}$. If $\kappa ^{*}(s)\leq \kappa (s)$ then $d^{*}\geq d$. Schur's theorem is usually stated for $C^{2}$ curves, but John M. Sullivan has observed that it applies to curves of finite total curvature (with a slightly different statement).

Linear algebra

Main article: Schur decomposition

In linear algebra, Schur's theorem refers to either the triangularization of a square matrix with complex entries, or that of a square matrix with real entries and real eigenvalues.
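Returning to the combinatorial statement above: the number of representations can be counted by brute force and compared with the asymptotic x^(n−1)/((n−1)! a₁⋯aₙ). A sketch with denominations 2 and 5 (the helper name is ours):

```python
def representations(x, denoms):
    # count tuples (c_1, ..., c_n) of non-negative integers with
    # c_1*a_1 + ... + c_n*a_n == x
    def count(rest, i):
        if i == len(denoms) - 1:
            return 1 if rest % denoms[i] == 0 else 0
        return sum(count(rest - k * denoms[i], i + 1)
                   for k in range(rest // denoms[i] + 1))
    return count(x, 0)

# for {2, 5} the asymptotic count is x / (1! * 2 * 5) = x/10;
# the exact count at x = 100 is 11
assert representations(100, [2, 5]) == 11

# 3 is not representable, but every amount >= 4 is (coin problem)
assert representations(3, [2, 5]) == 0
assert all(representations(x, [2, 5]) > 0 for x in range(4, 80))
```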
Functional analysis

In functional analysis and the study of Banach spaces, Schur's theorem, due to I. Schur, often refers to Schur's property: for certain spaces, weak convergence implies convergence in norm.

Number theory

In number theory, Issai Schur showed in 1912 that for every nonconstant polynomial p(x) with integer coefficients, if S is the set of nonzero values $\{p(n):n\in \mathbb {N} ,\ p(n)\neq 0\}$, then the set of primes that divide some member of S is infinite.

See also
• Schur's lemma (from Riemannian geometry)

References
1. Heule, Marijn J. H. (2017). "Schur Number Five". arXiv:1711.08076.
2. "Schur Number Five". www.cs.utexas.edu. Retrieved 2021-10-06.
• Herbert S. Wilf (1994). generatingfunctionology. Academic Press.
• Shiing-Shen Chern (1967). Curves and Surfaces in Euclidean Space. In Studies in Global Geometry and Analysis. Prentice-Hall.
• Issai Schur (1912). Über die Existenz unendlich vieler Primzahlen in einigen speziellen arithmetischen Progressionen. Sitzungsberichte der Berliner Math.

Further reading
• Dany Breslauer and Devdatt P. Dubhashi (1995). Combinatorics for Computer Scientists.
• John M. Sullivan (2006). Curves of Finite Total Curvature. arXiv.
Schur–Zassenhaus theorem

The Schur–Zassenhaus theorem is a theorem in group theory which states that if $G$ is a finite group, and $N$ is a normal subgroup whose order is coprime to the order of the quotient group $G/N$, then $G$ is a semidirect product (or split extension) of $N$ and $G/N$. An alternative statement of the theorem is that any normal Hall subgroup $N$ of a finite group $G$ has a complement in $G$. Moreover, if either $N$ or $G/N$ is solvable, then the Schur–Zassenhaus theorem also states that all complements of $N$ in $G$ are conjugate. The assumption that either $N$ or $G/N$ is solvable can be dropped, as it is always satisfied, but all known proofs of this require the use of the much harder Feit–Thompson theorem.

The Schur–Zassenhaus theorem at least partially answers the question: "In a composition series, how can we classify groups with a certain set of composition factors?" The other part, where the composition factors do not have coprime orders, is tackled in extension theory.

History

The Schur–Zassenhaus theorem was introduced by Zassenhaus (1937, 1958, Chapter IV, section 7). Theorem 25, which he credits to Issai Schur, proves the existence of a complement, and theorem 27 proves that all complements are conjugate under the assumption that $N$ or $G/N$ is solvable. It is not easy to find an explicit statement of the existence of a complement in Schur's published works, though the results of Schur (1904, 1907) on the Schur multiplier imply the existence of a complement in the special case when the normal subgroup is in the center. Zassenhaus pointed out that the Schur–Zassenhaus theorem for non-solvable groups would follow if all groups of odd order were solvable, which was later proved by Feit and Thompson.
Ernst Witt showed that it would also follow from the Schreier conjecture (see Witt (1998, p. 277) for Witt's unpublished 1937 note about this), but the Schreier conjecture has only been proved using the classification of finite simple groups, which is far harder than the Feit–Thompson theorem.

Examples

If we do not impose the coprime condition, the theorem is not true: consider for example the cyclic group $C_{4}$ and its normal subgroup $C_{2}$. Then if $C_{4}$ were a semidirect product of $C_{2}$ and $C_{4}/C_{2}\cong C_{2}$ then $C_{4}$ would have to contain two elements of order 2, but it only contains one. Another way to explain this impossibility of splitting $C_{4}$ (i.e. expressing it as a semidirect product) is to observe that the automorphism group of $C_{2}$ is trivial, so the only possible [semi]direct product of $C_{2}$ with itself is the direct product (which gives rise to the Klein four-group, a group that is not isomorphic to $C_{4}$).

An example where the Schur–Zassenhaus theorem does apply is the symmetric group on 3 symbols, $S_{3}$, which has a normal subgroup of order 3 (isomorphic to $C_{3}$) which in turn has index 2 in $S_{3}$ (in agreement with the theorem of Lagrange), so $S_{3}/C_{3}\cong C_{2}$. Since 2 and 3 are relatively prime, the Schur–Zassenhaus theorem applies and $S_{3}\cong C_{3}\rtimes C_{2}$. Note that the automorphism group of $C_{3}$ is $C_{2}$ and the automorphism of $C_{3}$ used in the semidirect product that gives rise to $S_{3}$ is the non-trivial automorphism that permutes the two non-identity elements of $C_{3}$. Furthermore, the three subgroups of order 2 in $S_{3}$ (any of which can serve as a complement to $C_{3}$ in $S_{3}$) are conjugate to each other.

The non-triviality of the (additional) conjugacy conclusion can be illustrated with the Klein four-group $V$ as the non-example.
Any of the three proper subgroups of $V$ (all of which have order 2) is normal in $V$; fixing one of these subgroups, any of the other two remaining (proper) subgroups complements it in $V$, but none of these three subgroups of $V$ is a conjugate of any other one, because $V$ is abelian. The quaternion group has normal subgroups of order 4 and 2 but is not a [semi]direct product. Schur's papers at the beginning of the 20th century introduced the notion of central extension to address examples such as $C_{4}$ and the quaternions. Proof The existence of a complement to a normal Hall subgroup H of a finite group G can be proved in the following steps: 1. By induction on the order of G, we can assume that it is true for any smaller group. 2. If H is abelian, then the existence of a complement follows from the fact that the cohomology group H2(G/H,H) vanishes (as H and G/H have coprime orders) and the fact that all complements are conjugate follows from the vanishing of H1(G/H,H). 3. If H is solvable, it has a nontrivial abelian subgroup A that is characteristic in H and therefore normal in G. Applying the Schur–Zassenhaus theorem to G/A reduces the proof to the case when H=A is abelian which has been done in the previous step. 4. If the normalizer N=NG(P) of every p-Sylow subgroup P of H is equal to G, then H is nilpotent, and in particular solvable, so the theorem follows by the previous step. 5. If the normalizer N=NG(P) of some p-Sylow subgroup P of H is smaller than G, then by induction the Schur–Zassenhaus theorem holds for N, and a complement of N∩H in N is a complement for H in G because G=NH. References • Rotman, Joseph J. (1995). An Introduction to the Theory of Groups. Graduate Texts in Mathematics. Vol. 148 (Fourth ed.). New York: Springer–Verlag. doi:10.1007/978-1-4612-4176-8. ISBN 978-0-387-94285-8. MR 1307623. • Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (Third ed.). Hoboken, NJ: John Wiley & Sons, Inc. ISBN 978-0-471-43334-7. 

MR 2286236. • Gaschütz, Wolfgang (1952), "Zur Erweiterungstheorie der endlichen Gruppen", J. Reine Angew. Math., 1952 (190): 93–107, doi:10.1515/crll.1952.190.93, MR 0051226, S2CID 116597116 • Rose, John S. (1978). A Course on Group Theory. Cambridge-New York-Melbourne: Cambridge University Press. ISBN 0-521-21409-2. MR 0498810. • Isaacs, I. Martin (2008). Finite Group Theory. Graduate Studies in Mathematics. Vol. 92. Providence, RI: American Mathematical Society. doi:10.1090/gsm/092. ISBN 978-0-8218-4344-4. MR 2426855. • Kurzweil, Hans; Stellmacher, Bernd (2004). The Theory of Finite Groups: An Introduction. Universitext. New York: Springer-Verlag. doi:10.1007/b97433. ISBN 0-387-40510-0. MR 2014408. • Humphreys, James E. (1996). A Course in Group Theory. Oxford Science Publications. New York: The Clarendon Press, Oxford University Press. ISBN 0-19-853459-0. MR 1420410. • Schur, Issai (1904). "Über die Darstellung der endlichen Gruppen durch gebrochene lineare Substitutionen". Journal für die reine und angewandte Mathematik. 127: 20–50. • Schur, Issai (1907). "Untersuchungen über die Darstellung der endlichen Gruppen durch gebrochene lineare Substitutionen". Journal für die reine und angewandte Mathematik. 132: 85–137. • Witt, Ernst (1998), Kersten, Ina (ed.), Collected papers. Gesammelte Abhandlungen, Springer Collected Works in Mathematics, Berlin, New York: Springer-Verlag, doi:10.1007/978-3-642-41970-6, ISBN 978-3-540-57061-5, MR 1643949 • Zassenhaus, Hans (1937). Lehrbuch der Gruppentheorie. Hamburger Mathematische Einzelschriften. Vol. 21. Leipzig and Berlin: Teubner.. English translation:Zassenhaus, Hans J. (1958) [1949], The theory of groups. (2nd ed.), New York: Chelsea Publishing Company, MR 0091275
Wikipedia
Schur-convex function In mathematics, a Schur-convex function, also known as an S-convex, isotonic, or order-preserving function, is a function $f:\mathbb {R} ^{d}\rightarrow \mathbb {R} $ such that $f(x)\leq f(y)$ whenever $x\in \mathbb {R} ^{d}$ is majorized by $y\in \mathbb {R} ^{d}$. Named after Issai Schur, Schur-convex functions are used in the study of majorization. Every function that is convex and symmetric is also Schur-convex. The converse does not hold, but every Schur-convex function is symmetric (under permutations of the arguments).[1] Schur-concave function A function f is Schur-concave if its negative, −f, is Schur-convex. Schur–Ostrowski criterion If f is symmetric and all first partial derivatives exist, then f is Schur-convex if and only if $(x_{i}-x_{j})\left({\frac {\partial f}{\partial x_{i}}}-{\frac {\partial f}{\partial x_{j}}}\right)\geq 0$ holds for all $x\in \mathbb {R} ^{d}$ and all 1 ≤ i ≠ j ≤ d.[2] Examples • $f(x)=\min(x)$ is Schur-concave while $f(x)=\max(x)$ is Schur-convex. This can be seen directly from the definition. • The Shannon entropy function $\sum _{i=1}^{d}{P_{i}\cdot \log _{2}{\frac {1}{P_{i}}}}$ is Schur-concave. • The Rényi entropy function is also Schur-concave. • $\sum _{i=1}^{d}{x_{i}^{k}},k\geq 1$ is Schur-convex. • The function $f(x)=\prod _{i=1}^{d}x_{i}$ is Schur-concave when all $x_{i}>0$; in the same way, all the elementary symmetric functions are Schur-concave when $x_{i}>0$. • A natural interpretation of majorization is that if $x\succ y$ then $x$ is more spread out than $y$, so it is natural to ask whether statistical measures of variability are Schur-convex. The variance and standard deviation are Schur-convex functions, while the median absolute deviation is not. • If $g$ is a convex function defined on a real interval, then $\sum _{i=1}^{n}g(x_{i})$ is Schur-convex. 
• A probability example: If $X_{1},\dots ,X_{n}$ are exchangeable random variables, then the function ${\text{E}}\prod _{j=1}^{n}X_{j}^{a_{j}}$ is Schur-convex as a function of $a=(a_{1},\dots ,a_{n})$, assuming that the expectations exist. • The Gini coefficient is strictly Schur convex. References 1. Roberts, A. Wayne; Varberg, Dale E. (1973). Convex functions. New York: Academic Press. p. 258. ISBN 9780080873725. 2. E. Peajcariaac, Josip; L. Tong, Y. (3 June 1992). Convex Functions, Partial Orderings, and Statistical Applications. Academic Press. p. 333. ISBN 9780080925226. See also • Quasiconvex function
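Several of the examples above can be sanity-checked numerically. A small pure-Python sketch (the `majorized` helper and the floating-point tolerance are ad hoc choices, not part of the article):

```python
def majorized(x, y):
    """True if x is majorized by y: equal totals, dominated partial sums."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if abs(sum(x) - sum(y)) > 1e-12:
        return False
    partial = 0.0
    for a, b in zip(xs, ys):
        partial += a - b
        if partial > 1e-12:
            return False
    return True

x = [2.0, 2.0, 2.0]        # same total as y, but less spread out
y = [3.0, 2.0, 1.0]
assert majorized(x, y) and not majorized(y, x)

power_sum = lambda v: sum(t ** 2 for t in v)   # the k = 2 power-sum example
assert power_sum(x) <= power_sum(y)            # Schur-convex: 12 ≤ 14
assert max(x) <= max(y)                        # max is Schur-convex
assert min(x) >= min(y)                        # min is Schur-concave
prod = lambda v: v[0] * v[1] * v[2]
assert prod(x) >= prod(y)                      # product is Schur-concave: 8 ≥ 6
```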

Schur complement method In numerical analysis, the Schur complement method, named after Issai Schur, is the earliest and most basic version of the non-overlapping domain decomposition method, also called iterative substructuring. A finite element problem is split into non-overlapping subdomains, and the unknowns in the interiors of the subdomains are eliminated. The remaining Schur complement system on the unknowns associated with subdomain interfaces is solved by the conjugate gradient method. (This method should not be confused with the Schur complement of a matrix as such.) The method and implementation Suppose we want to solve the Poisson equation $-\Delta u=f,\qquad u|_{\partial \Omega }=0$ on some domain Ω. When we discretize this problem we get an N-dimensional linear system AU = F. The Schur complement method splits up the linear system into sub-problems. To do so, divide Ω into two subdomains Ω1, Ω2 which share an interface Γ. Let U1, U2 and UΓ be the degrees of freedom associated with each subdomain and with the interface. We can then write the linear system as $\left[{\begin{matrix}A_{11}&0&A_{1\Gamma }\\0&A_{22}&A_{2\Gamma }\\A_{\Gamma 1}&A_{\Gamma 2}&A_{\Gamma \Gamma }\end{matrix}}\right]\left[{\begin{matrix}U_{1}\\U_{2}\\U_{\Gamma }\end{matrix}}\right]=\left[{\begin{matrix}F_{1}\\F_{2}\\F_{\Gamma }\end{matrix}}\right],$ where F1, F2 and FΓ are the components of the load vector in each region. The Schur complement method proceeds by noting that we can find the values on the interface by solving the smaller system $\Sigma U_{\Gamma }=F_{\Gamma }-A_{\Gamma 1}A_{11}^{-1}F_{1}-A_{\Gamma 2}A_{22}^{-1}F_{2},$ for the interface values UΓ, where we define the Schur complement matrix $\Sigma =A_{\Gamma \Gamma }-A_{\Gamma 1}A_{11}^{-1}A_{1\Gamma }-A_{\Gamma 2}A_{22}^{-1}A_{2\Gamma }.$ The important point is that the computation of any quantity involving $A_{11}^{-1}$ or $A_{22}^{-1}$ amounts to solving decoupled Dirichlet problems on the two subdomains, and these can be solved in parallel. 
Consequently, we need not store the Schur complement matrix explicitly; it is sufficient to know how to multiply a vector by it. Once we know the values on the interface, we can find the interior values from the two relations $A_{11}U_{1}=F_{1}-A_{1\Gamma }U_{\Gamma },\qquad A_{22}U_{2}=F_{2}-A_{2\Gamma }U_{\Gamma },$ which can both be solved in parallel. The multiplication of a vector by the Schur complement is a discrete version of the Poincaré–Steklov operator, also called the Dirichlet-to-Neumann mapping. Advantages There are two benefits of this method. First, the elimination of the interior unknowns on the subdomains, that is, the solution of the Dirichlet problems, can be done in parallel. Second, passing to the Schur complement reduces the condition number and thus tends to decrease the number of iterations. For second-order problems, such as the Laplace equation or linear elasticity, the matrix of the system has a condition number of order $1/h^{2}$, where h is the characteristic element size; the Schur complement, however, has a condition number only of order $1/h$. In practice, the Schur complement method is combined with preconditioning, at least with a diagonal preconditioner. The Neumann–Neumann method and the Neumann–Dirichlet method are the Schur complement method with particular kinds of preconditioners. 
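The two-subdomain algorithm above can be sketched in a few lines of numpy. The five-node 1D Poisson matrix, the splitting into Ω1 = {0,1}, Ω2 = {3,4} with interface Γ = {2}, and the right-hand side of ones are illustrative choices, not taken from the article:

```python
import numpy as np

# 1D Poisson: tridiag(-1, 2, -1) on five interior nodes, reordered (U1, U2, UΓ).
A11 = np.array([[2., -1.], [-1., 2.]])
A22 = A11.copy()
A1G = np.array([[0.], [-1.]]); AG1 = A1G.T
A2G = np.array([[-1.], [0.]]); AG2 = A2G.T
AGG = np.array([[2.]])
F1 = np.ones(2); F2 = np.ones(2); FG = np.ones(1)

# Σ = AΓΓ − AΓ1 A11⁻¹ A1Γ − AΓ2 A22⁻¹ A2Γ; each solve with A11 or A22
# is an independent Dirichlet subdomain problem.
S = AGG - AG1 @ np.linalg.solve(A11, A1G) - AG2 @ np.linalg.solve(A22, A2G)
rhs = FG - AG1 @ np.linalg.solve(A11, F1) - AG2 @ np.linalg.solve(A22, F2)
UG = np.linalg.solve(S, rhs)

# Back-substitute for the interior unknowns (parallelizable):
U1 = np.linalg.solve(A11, F1 - (A1G @ UG))
U2 = np.linalg.solve(A22, F2 - (A2G @ UG))

# Compare against a direct solve of the assembled 5×5 system:
A = np.block([[A11, np.zeros((2, 2)), A1G],
              [np.zeros((2, 2)), A22, A2G],
              [AG1, AG2, AGG]])
U = np.linalg.solve(A, np.concatenate([F1, F2, FG]))
assert np.allclose(np.concatenate([U1, U2, UG]), U)
```

Here the tiny interface system Σ is formed explicitly; in a realistic computation one would only supply the matrix-vector product with Σ to a conjugate gradient iteration.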
Schur complement In linear algebra and the theory of matrices, the Schur complement of a block matrix is defined as follows. Suppose p, q are nonnegative integers, and suppose A, B, C, D are respectively p × p, p × q, q × p, and q × q matrices of complex numbers. Let $M=\left[{\begin{matrix}A&B\\C&D\end{matrix}}\right]$ so that M is a (p + q) × (p + q) matrix. If D is invertible, then the Schur complement of the block D of the matrix M is the p × p matrix defined by $M/D:=A-BD^{-1}C.$ If A is invertible, the Schur complement of the block A of the matrix M is the q × q matrix defined by $M/A:=D-CA^{-1}B.$ In the case that A or D is singular, substituting a generalized inverse for the inverses on M/A and M/D yields the generalized Schur complement. The Schur complement is named after Issai Schur who used it to prove Schur's lemma, although it had been used previously.[1] Emilie Virginia Haynsworth was the first to call it the Schur complement.[2] The Schur complement is a key tool in the fields of numerical analysis, statistics, and matrix analysis. Background The Schur complement arises when performing a block Gaussian elimination on the matrix M. In order to eliminate the elements below the block diagonal, one multiplies the matrix M by a block lower triangular matrix on the right as follows: ${\begin{aligned}&M={\begin{bmatrix}A&B\\C&D\end{bmatrix}}\quad \to \quad {\begin{bmatrix}A&B\\C&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\-D^{-1}C&I_{q}\end{bmatrix}}={\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}},\end{aligned}}$ where Ip denotes a p×p identity matrix. As a result, the Schur complement $M/D=A-BD^{-1}C$ appears in the upper-left p×p block. 
Continuing the elimination process beyond this point (i.e., performing a block Gauss–Jordan elimination), ${\begin{aligned}&{\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}}\quad \to \quad {\begin{bmatrix}I_{p}&-BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}}={\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}},\end{aligned}}$ leads to an LDU decomposition of M, which reads ${\begin{aligned}M&={\begin{bmatrix}A&B\\C&D\end{bmatrix}}={\begin{bmatrix}I_{p}&BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\D^{-1}C&I_{q}\end{bmatrix}}.\end{aligned}}$ Thus, the inverse of M may be expressed involving D−1 and the inverse of Schur's complement, assuming it exists, as ${\begin{aligned}M^{-1}={\begin{bmatrix}A&B\\C&D\end{bmatrix}}^{-1}={}&\left({\begin{bmatrix}I_{p}&BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\D^{-1}C&I_{q}\end{bmatrix}}\right)^{-1}\\={}&{\begin{bmatrix}I_{p}&0\\-D^{-1}C&I_{q}\end{bmatrix}}{\begin{bmatrix}\left(A-BD^{-1}C\right)^{-1}&0\\0&D^{-1}\end{bmatrix}}{\begin{bmatrix}I_{p}&-BD^{-1}\\0&I_{q}\end{bmatrix}}\\[4pt]={}&{\begin{bmatrix}\left(A-BD^{-1}C\right)^{-1}&-\left(A-BD^{-1}C\right)^{-1}BD^{-1}\\-D^{-1}C\left(A-BD^{-1}C\right)^{-1}&D^{-1}+D^{-1}C\left(A-BD^{-1}C\right)^{-1}BD^{-1}\end{bmatrix}}\\[4pt]={}&{\begin{bmatrix}\left(M/D\right)^{-1}&-\left(M/D\right)^{-1}BD^{-1}\\-D^{-1}C\left(M/D\right)^{-1}&D^{-1}+D^{-1}C\left(M/D\right)^{-1}BD^{-1}\end{bmatrix}}.\end{aligned}}$ The above relationship comes from the elimination operations that involve D−1 and M/D. An equivalent derivation can be done with the roles of A and D interchanged. By equating the expressions for M−1 obtained in these two different ways, one can establish the matrix inversion lemma, which relates the two Schur complements of M: M/D and M/A (see "Derivation from LDU decomposition" in Woodbury matrix identity § Alternative proofs). 
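The block LDU factorization and the resulting inverse formula can be verified numerically. A sketch assuming numpy; the random test blocks are arbitrary, and D is shifted by 5I only to keep it comfortably invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 3
A = rng.standard_normal((p, p)); B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p)); D = rng.standard_normal((q, q)) + 5 * np.eye(q)

M = np.block([[A, B], [C, D]])
MD = A - B @ np.linalg.solve(D, C)          # Schur complement M/D

# LDU factorization obtained from the block elimination:
L = np.block([[np.eye(p), B @ np.linalg.inv(D)],
              [np.zeros((q, p)), np.eye(q)]])
Dblk = np.block([[MD, np.zeros((p, q))], [np.zeros((q, p)), D]])
U = np.block([[np.eye(p), np.zeros((p, q))],
              [np.linalg.inv(D) @ C, np.eye(q)]])
assert np.allclose(L @ Dblk @ U, M)

# Block inverse of M in terms of (M/D)⁻¹ and D⁻¹:
MDi, Di = np.linalg.inv(MD), np.linalg.inv(D)
Minv = np.block([[MDi, -MDi @ B @ Di],
                 [-Di @ C @ MDi, Di + Di @ C @ MDi @ B @ Di]])
assert np.allclose(Minv, np.linalg.inv(M))
```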
Properties • If p and q are both 1 (i.e., A, B, C and D are all scalars), we get the familiar formula for the inverse of a 2-by-2 matrix: $M^{-1}={\frac {1}{AD-BC}}\left[{\begin{matrix}D&-B\\-C&A\end{matrix}}\right]$ provided that AD − BC is non-zero. • In general, if A is invertible, then ${\begin{aligned}M&={\begin{bmatrix}A&B\\C&D\end{bmatrix}}={\begin{bmatrix}I_{p}&0\\CA^{-1}&I_{q}\end{bmatrix}}{\begin{bmatrix}A&0\\0&D-CA^{-1}B\end{bmatrix}}{\begin{bmatrix}I_{p}&A^{-1}B\\0&I_{q}\end{bmatrix}},\\[4pt]M^{-1}&={\begin{bmatrix}A^{-1}+A^{-1}B(M/A)^{-1}CA^{-1}&-A^{-1}B(M/A)^{-1}\\-(M/A)^{-1}CA^{-1}&(M/A)^{-1}\end{bmatrix}}\end{aligned}}$ whenever this inverse exists. • (Schur's formula) When A, respectively D, is invertible, the determinant of M is also clearly seen to be given by $\det(M)=\det(A)\det \left(D-CA^{-1}B\right)$, respectively $\det(M)=\det(D)\det \left(A-BD^{-1}C\right)$, which generalizes the determinant formula for 2 × 2 matrices. • (Guttman rank additivity formula) If D is invertible, then the rank of M is given by $\operatorname {rank} (M)=\operatorname {rank} (D)+\operatorname {rank} \left(A-BD^{-1}C\right)$ • (Haynsworth inertia additivity formula) If A is invertible, then the inertia of the block matrix M is equal to the inertia of A plus the inertia of M/A. • (Quotient identity) $A/B=((A/C)/(B/C))$.[3] • The Schur complement of a Laplacian matrix is also a Laplacian matrix.[4] Application to solving linear equations The Schur complement arises naturally in solving a system of linear equations such as[5] ${\begin{bmatrix}A&B\\C&D\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}u\\v\end{bmatrix}}$. Assuming that the submatrix $A$ is invertible, we can eliminate $x$ from the equations, as follows. $x=A^{-1}(u-By)$. Substituting this expression into the second equation yields $\left(D-CA^{-1}B\right)y=v-CA^{-1}u$. We refer to this as the reduced equation obtained by eliminating $x$ from the original equation. 
The matrix appearing in the reduced equation is called the Schur complement of the first block $A$ in $M$: $S\ {\overset {\underset {\mathrm {def} }{}}{=}}\ D-CA^{-1}B$. Solving the reduced equation, we obtain $y=S^{-1}\left(v-CA^{-1}u\right)$. Substituting this into the first equation yields $x=\left(A^{-1}+A^{-1}BS^{-1}CA^{-1}\right)u-A^{-1}BS^{-1}v$. We can express the above two equation as: ${\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1}&-A^{-1}BS^{-1}\\-S^{-1}CA^{-1}&S^{-1}\end{bmatrix}}{\begin{bmatrix}u\\v\end{bmatrix}}$. Therefore, a formulation for the inverse of a block matrix is: ${\begin{bmatrix}A&B\\C&D\end{bmatrix}}^{-1}={\begin{bmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1}&-A^{-1}BS^{-1}\\-S^{-1}CA^{-1}&S^{-1}\end{bmatrix}}={\begin{bmatrix}I_{p}&-A^{-1}B\\&I_{q}\end{bmatrix}}{\begin{bmatrix}A^{-1}&\\&S^{-1}\end{bmatrix}}{\begin{bmatrix}I_{p}&\\-CA^{-1}&I_{q}\end{bmatrix}}$. In particular, we see that the Schur complement is the inverse of the $2,2$ block entry of the inverse of $M$. In practice, one needs $A$ to be well-conditioned in order for this algorithm to be numerically accurate. In electrical engineering this is often referred to as node elimination or Kron reduction. Applications to probability theory and statistics Suppose the random column vectors X, Y live in Rn and Rm respectively, and the vector (X, Y) in Rn + m has a multivariate normal distribution whose covariance is the symmetric positive-definite matrix $\Sigma =\left[{\begin{matrix}A&B\\B^{\mathsf {T}}&C\end{matrix}}\right],$ where $ A\in \mathbb {R} ^{n\times n}$ is the covariance matrix of X, $ C\in \mathbb {R} ^{m\times m}$ is the covariance matrix of Y and $ B\in \mathbb {R} ^{n\times m}$ is the covariance matrix between X and Y. 
Then the conditional covariance of X given Y is the Schur complement of C in $ \Sigma $:[6] ${\begin{aligned}\operatorname {Cov} (X\mid Y)&=A-BC^{-1}B^{\mathsf {T}}\\\operatorname {E} (X\mid Y)&=\operatorname {E} (X)+BC^{-1}(Y-\operatorname {E} (Y))\end{aligned}}$ If we take the matrix $\Sigma $ above to be, not a covariance of a random vector, but a sample covariance, then it may have a Wishart distribution. In that case, the Schur complement of C in $\Sigma $ also has a Wishart distribution. Conditions for positive definiteness and semi-definiteness Let X be a symmetric matrix of real numbers given by $X=\left[{\begin{matrix}A&B\\B^{\mathsf {T}}&C\end{matrix}}\right].$ Then • If A is invertible, then X is positive definite if and only if A and its complement X/A are both positive definite: • $X\succ 0\Leftrightarrow A\succ 0,X/A=C-B^{\mathsf {T}}A^{-1}B\succ 0.$[7] • If C is invertible, then X is positive definite if and only if C and its complement X/C are both positive definite: • $X\succ 0\Leftrightarrow C\succ 0,X/C=A-BC^{-1}B^{\mathsf {T}}\succ 0.$ • If A is positive definite, then X is positive semi-definite if and only if the complement X/A is positive semi-definite: • ${\text{If }}A\succ 0,{\text{ then }}X\succeq 0\Leftrightarrow X/A=C-B^{\mathsf {T}}A^{-1}B\succeq 0.$[7] • If C is positive definite, then X is positive semi-definite if and only if the complement X/C is positive semi-definite: • ${\text{If }}C\succ 0,{\text{ then }}X\succeq 0\Leftrightarrow X/C=A-BC^{-1}B^{\mathsf {T}}\succeq 0.$ The first and third statements can be derived[5] by considering the minimizer of the quantity $u^{\mathsf {T}}Au+2v^{\mathsf {T}}B^{\mathsf {T}}u+v^{\mathsf {T}}Cv,\,$ as a function of v (for fixed u). 
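A small numerical illustration of the first and third criteria, using eigenvalues to test definiteness (numpy; the concrete matrices are ad hoc examples):

```python
import numpy as np

def is_pd(S):
    return bool(np.all(np.linalg.eigvalsh(S) > 0))   # S assumed symmetric

A = np.array([[2., 1.], [1., 2.]])                   # positive definite
B = np.array([[1.], [0.]])
C = np.array([[3.]])
X = np.block([[A, B], [B.T, C]])

XA = C - B.T @ np.linalg.solve(A, B)                 # X/A = C − Bᵀ A⁻¹ B
assert is_pd(A) and is_pd(XA) and is_pd(X)           # A ≻ 0, X/A ≻ 0 ⇔ X ≻ 0

# Break definiteness through the complement: shrink C until X/A has a
# negative eigenvalue; by the criterion X must fail to be PD as well.
C_bad = np.array([[0.5]])
X_bad = np.block([[A, B], [B.T, C_bad]])
XA_bad = C_bad - B.T @ np.linalg.solve(A, B)         # = 0.5 − 2/3 < 0
assert not is_pd(XA_bad) and not is_pd(X_bad)
```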
Furthermore, since $\left[{\begin{matrix}A&B\\B^{\mathsf {T}}&C\end{matrix}}\right]\succ 0\Longleftrightarrow \left[{\begin{matrix}C&B^{\mathsf {T}}\\B&A\end{matrix}}\right]\succ 0$ and similarly for positive semi-definite matrices, the second (respectively fourth) statement is immediate from the first (resp. third) statement. There is also a sufficient and necessary condition for the positive semi-definiteness of X in terms of a generalized Schur complement.[1] Precisely, • $X\succeq 0\Leftrightarrow A\succeq 0,C-B^{\mathsf {T}}A^{g}B\succeq 0,\left(I-AA^{g}\right)B=0\,$ and • $X\succeq 0\Leftrightarrow C\succeq 0,A-BC^{g}B^{\mathsf {T}}\succeq 0,\left(I-CC^{g}\right)B^{\mathsf {T}}=0,$ where $A^{g}$ denotes a generalized inverse of $A$. See also • Woodbury matrix identity • Quasi-Newton method • Haynsworth inertia additivity formula • Gaussian process • Total least squares References 1. Zhang, Fuzhen (2005). Zhang, Fuzhen (ed.). The Schur Complement and Its Applications. Numerical Methods and Algorithms. Vol. 4. Springer. doi:10.1007/b105056. ISBN 0-387-24271-6. 2. Haynsworth, E. V., "On the Schur Complement", Basel Mathematical Notes, #BNB 20, 17 pages, June 1968. 3. Crabtree, Douglas E.; Haynsworth, Emilie V. (1969). "An identity for the Schur complement of a matrix". Proceedings of the American Mathematical Society. 22 (2): 364–366. doi:10.1090/S0002-9939-1969-0255573-1. ISSN 0002-9939. S2CID 122868483. 4. Devriendt, Karel (2022). "Effective resistance is more than distance: Laplacians, Simplices and the Schur complement". Linear Algebra and Its Applications. 639: 24–49. arXiv:2010.04521. doi:10.1016/j.laa.2022.01.002. S2CID 222272289. 5. Boyd, S. and Vandenberghe, L. (2004), "Convex Optimization", Cambridge University Press (Appendix A.5.5) 6. von Mises, Richard (1964). "Chapter VIII.9.3". Mathematical theory of probability and statistics. Academic Press. ISBN 978-1483255385. 7. Zhang, Fuzhen (2005). The Schur Complement and Its Applications. Springer. p. 34.
Schur functor In mathematics, especially in the field of representation theory, Schur functors (named after Issai Schur) are certain functors from the category of modules over a fixed commutative ring to itself. They generalize the constructions of exterior powers and symmetric powers of a vector space. Schur functors are indexed by Young diagrams in such a way that the horizontal diagram with n cells corresponds to the nth symmetric power functor, and the vertical diagram with n cells corresponds to the nth exterior power functor. If a vector space V is a representation of a group G, then $\mathbb {S} ^{\lambda }V$ also has a natural action of G for any Schur functor $\mathbb {S} ^{\lambda }(-)$. Definition Schur functors are indexed by partitions and are described as follows. Let R be a commutative ring, E an R-module and λ a partition of a positive integer n. Let T be a Young tableau of shape λ, thus indexing the factors of the n-fold direct product, E × E × ... × E, with the boxes of T. Consider those maps of R-modules $\varphi :E^{\times n}\to M$ satisfying the following conditions (1) $\varphi $ is multilinear, (2) $\varphi $ is alternating in the entries indexed by each column of T, (3) $\varphi $ satisfies an exchange condition stating that if $I\subset \{1,2,\dots ,n\}$ are numbers from column i of T then $\varphi (x)=\sum _{x'}\varphi (x')$ where the sum is over n-tuples x' obtained from x by exchanging the elements indexed by I with any $|I|$ elements indexed by the numbers in column $i-1$ (in order). The universal R-module $\mathbb {S} ^{\lambda }E$ that extends $\varphi $ to a mapping of R-modules ${\tilde {\varphi }}:\mathbb {S} ^{\lambda }E\to M$ is the image of E under the Schur functor indexed by λ. For an example of the condition (3) placed on $\varphi $ suppose that λ is the partition $(2,2,1)$ and the tableau T is numbered such that its entries are 1, 2, 3, 4, 5 when read top-to-bottom (left-to-right). 
Taking $I=\{4,5\}$ (i.e., the numbers in the second column of T) we have $\varphi (x_{1},x_{2},x_{3},x_{4},x_{5})=\varphi (x_{4},x_{5},x_{3},x_{1},x_{2})+\varphi (x_{4},x_{2},x_{5},x_{1},x_{3})+\varphi (x_{1},x_{4},x_{5},x_{2},x_{3}),$ while if $I=\{5\}$ then $\varphi (x_{1},x_{2},x_{3},x_{4},x_{5})=\varphi (x_{5},x_{2},x_{3},x_{4},x_{1})+\varphi (x_{1},x_{5},x_{3},x_{4},x_{2})+\varphi (x_{1},x_{2},x_{5},x_{4},x_{3}).$ Examples Fix a vector space V over a field of characteristic zero. We identify partitions and the corresponding Young diagrams. The following descriptions hold:[1] • For a partition λ = (n) the Schur functor Sλ(V) = Symn(V). • For a partition λ = (1, ..., 1) (repeated n times) the Schur functor Sλ(V) = Λn(V). • For a partition λ = (2, 1) the Schur functor Sλ(V) is the cokernel of the comultiplication map of exterior powers Λ3(V) → Λ2(V) ⊗ V. • For a partition λ = (2, 2) the Schur functor Sλ(V) is the quotient of Λ2(V) ⊗ Λ2(V) by the images of two maps. One is the composition Λ3(V) ⊗ V → Λ2(V) ⊗ V ⊗ V → Λ2(V) ⊗ Λ2(V), where the first map is the comultiplication along the first coordinate. The other map is a comultiplication Λ4(V) → Λ2(V) ⊗ Λ2(V). • For a partition λ = (n, 1, ..., 1), with 1 repeated m times, the Schur functor Sλ(V) is the quotient of Λn(V) ⊗ Symm(V) by the image of the composition of the comultiplication in exterior powers and the multiplication in symmetric powers: $\Lambda ^{n+1}(V)\otimes \mathrm {Sym} ^{m-1}(V){\xrightarrow {\Delta \otimes \mathrm {id} }}\Lambda ^{n}(V)\otimes V\otimes \mathrm {Sym} ^{m-1}(V){\xrightarrow {\mathrm {id} \otimes \cdot }}\Lambda ^{n}(V)\otimes \mathrm {Sym} ^{m}(V)$ Applications Let V be a complex vector space of dimension k. It's a tautological representation of its automorphism group GL(V). If λ is a diagram where each row has no more than k cells, then Sλ(V) is an irreducible GL(V)-representation of highest weight λ. 
In fact, any rational representation of GL(V) is isomorphic to a direct sum of representations of the form Sλ(V) ⊗ det(V)⊗m, where λ is a Young diagram with each row strictly shorter than k, and m is any (possibly negative) integer. In this context Schur–Weyl duality states that as a $GL(V)$-module $V^{\otimes n}=\bigoplus _{\lambda \vdash n:\ell (\lambda )\leq k}(\mathbb {S} ^{\lambda }V)^{\oplus f^{\lambda }}$ where $f^{\lambda }$ is the number of standard Young tableaux of shape λ. More generally, we have the decomposition of the tensor product as $GL(V)\times {\mathfrak {S}}_{n}$-bimodule $V^{\otimes n}=\bigoplus _{\lambda \vdash n:\ell (\lambda )\leq k}(\mathbb {S} ^{\lambda }V)\otimes \operatorname {Specht} (\lambda )$ where $\operatorname {Specht} (\lambda )$ is the Specht module indexed by λ. Schur functors can also be used to describe the coordinate ring of certain flag varieties. Plethysm Main article: Plethysm For two Young diagrams λ and μ consider the composition of the corresponding Schur functors Sλ(Sμ(-)). This composition is called a plethysm of λ and μ. From the general theory it is known[1] that, at least for vector spaces over a characteristic zero field, the plethysm is isomorphic to a direct sum of Schur functors. The problem of determining which Young diagrams occur in that description and how to calculate their multiplicities is open, aside from some special cases like Symm(Sym2(V)). See also • Young symmetrizer • Schur polynomial • Littlewood–Richardson rule • Polynomial functor References 1. Weyman, Jerzy (2003). Cohomology of Vector Bundles and Syzygies. Cambridge University Press. doi:10.1017/CBO9780511546556. ISBN 9780511546556. • J. Towber, Two new functors from modules to algebras, J. Algebra 47 (1977), 80-104. doi:10.1016/0021-8693(77)90211-3 • W. Fulton, Young Tableaux, with Applications to Representation Theory and Geometry. Cambridge University Press, 1997, ISBN 0-521-56724-6. External links • Schur Functors | The n-Category Café
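The Schur–Weyl decomposition above implies the dimension count $\sum _{\lambda \vdash n}\dim(\mathbb {S} ^{\lambda }V)\,f^{\lambda }=(\dim V)^{n}$, which can be checked with the hook length formula for $f^{\lambda }$ and the hook content formula for $\dim \mathbb {S} ^{\lambda }(\mathbb {C} ^{k})$ (standard facts not derived in this article). A pure-Python sketch:

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def cells(lam):
    return [(i, j) for i, row in enumerate(lam) for j in range(row)]

def hook(lam, i, j):
    col = sum(1 for row in lam if row > j)       # conjugate part λ'_j
    return lam[i] - j + col - i - 1

def f(lam):                                      # hook length formula for f^λ
    return factorial(sum(lam)) // prod(hook(lam, i, j) for i, j in cells(lam))

def dim_schur(lam, k):                           # hook content formula, dim S^λ(C^k)
    num = prod(k + j - i for i, j in cells(lam))
    den = prod(hook(lam, i, j) for i, j in cells(lam))
    return num // den

# Partitions with more than k rows contribute 0 automatically, since the
# cell (k, 0) has content k + 0 - k = 0.
for n, k in [(3, 2), (4, 3)]:
    assert sum(dim_schur(lam, k) * f(lam) for lam in partitions(n)) == k ** n
```

For n = 3, k = 2 this reproduces the examples in the text: Sym³ has dimension 4 with f = 1, the shape (2,1) gives dimension 2 with f = 2, and Λ³ of a 2-dimensional space vanishes, so 4·1 + 2·2 + 0 = 8 = 2³.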
Central simple algebra In ring theory and related areas of mathematics a central simple algebra (CSA) over a field K is a finite-dimensional associative K-algebra A which is simple, and for which the center is exactly K. (Note that not every simple algebra is a central simple algebra over its center: for instance, if K is a field of characteristic 0, then the Weyl algebra $K[X,\partial _{X}]$ is a simple algebra with center K, but is not a central simple algebra over K as it has infinite dimension as a K-module.) For example, the complex numbers C form a CSA over themselves, but not over the real numbers R (the center of C is all of C, not just R). The quaternions H form a 4-dimensional CSA over R, and in fact represent the only non-trivial element of the Brauer group of the reals (see below). Given two central simple algebras A ~ M(n,S) and B ~ M(m,T) over the same field F, A and B are called similar (or Brauer equivalent) if their division rings S and T are isomorphic. The set of all equivalence classes of central simple algebras over a given field F, under this equivalence relation, can be equipped with a group operation given by the tensor product of algebras. The resulting group is called the Brauer group Br(F) of the field F.[1] It is always a torsion group.[2] Properties • According to the Artin–Wedderburn theorem a finite-dimensional simple algebra A is isomorphic to the matrix algebra M(n,S) for some division ring S. Hence, there is a unique division algebra in each Brauer equivalence class.[3] • Every automorphism of a central simple algebra is an inner automorphism (this follows from the Skolem–Noether theorem). 
• The dimension of a central simple algebra as a vector space over its centre is always a square: the degree is the square root of this dimension.[4] The Schur index of a central simple algebra is the degree of the equivalent division algebra:[5] it depends only on the Brauer class of the algebra.[6] • The period or exponent of a central simple algebra is the order of its Brauer class as an element of the Brauer group. It is a divisor of the index,[7] and the two numbers are composed of the same prime factors.[8][9][10] • If S is a simple subalgebra of a central simple algebra A then dimF S divides dimF A. • Every 4-dimensional central simple algebra over a field F is isomorphic to a quaternion algebra; in fact, it is either a two-by-two matrix algebra, or a division algebra. • If D is a central division algebra over K for which the index has prime factorisation $\mathrm {ind} (D)=\prod _{i=1}^{r}p_{i}^{m_{i}}\ $ then D has a tensor product decomposition $D=\bigotimes _{i=1}^{r}D_{i}\ $ where each component Di is a central division algebra of index $p_{i}^{m_{i}}$, and the components are uniquely determined up to isomorphism.[11] Splitting field We call a field E a splitting field for A over K if A⊗E is isomorphic to a matrix ring over E. Every finite dimensional CSA has a splitting field: indeed, in the case when A is a division algebra, then a maximal subfield of A is a splitting field. 
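Concretely, $\mathbf {C} $ is a splitting field for the quaternions $\mathbf {H} $ over $\mathbf {R} $, and the splitting turns the reduced norm and reduced trace into an ordinary determinant and trace. A numerical sketch (numpy; the sample coefficients are arbitrary, and `embed` is an ad hoc name for the standard embedding):

```python
import numpy as np

def embed(t, x, y, z):
    """Image of t + xi + yj + zk under the embedding H -> M2(C)."""
    return np.array([[t + x * 1j, y + z * 1j],
                     [-y + z * 1j, t - x * 1j]])

t, x, y, z = 1.0, 2.0, -3.0, 0.5
q = embed(t, x, y, z)

# Reduced norm = det = t² + x² + y² + z², reduced trace = 2t:
assert np.isclose(np.linalg.det(q).real, t**2 + x**2 + y**2 + z**2)
assert np.isclose(np.trace(q).real, 2 * t)

# Multiplicativity of the reduced norm, via det(ab) = det(a)det(b):
r = embed(0.0, 1.0, 1.0, 1.0)
assert np.isclose(np.linalg.det(q @ r), np.linalg.det(q) * np.linalg.det(r))
```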
In general, by theorems of Wedderburn and Koethe, there is a splitting field which is a separable extension of K of degree equal to the index of A, and this splitting field is isomorphic to a subfield of A.[12][13] As an example, the field C splits the quaternion algebra H over R with $t+x\mathbf {i} +y\mathbf {j} +z\mathbf {k} \leftrightarrow \left({\begin{array}{*{20}c}t+xi&y+zi\\-y+zi&t-xi\end{array}}\right).$ We can use the existence of the splitting field to define the reduced norm and reduced trace for a CSA A.[14] Map A to a matrix ring over a splitting field and define the reduced norm and trace to be the composite of this map with determinant and trace respectively. For example, in the quaternion algebra H, the splitting above shows that the element t + x i + y j + z k has reduced norm $t^{2}+x^{2}+y^{2}+z^{2}$ and reduced trace 2t. The reduced norm is multiplicative and the reduced trace is additive. An element a of A is invertible if and only if its reduced norm is non-zero: hence a CSA is a division algebra if and only if the reduced norm is non-zero on the non-zero elements.[15] Generalization CSAs over a field K are a non-commutative analog to extension fields over K – in both cases, they have no non-trivial 2-sided ideals, and have a distinguished field in their center, though a CSA can be non-commutative and need not have inverses (need not be a division algebra). This is of particular interest in noncommutative number theory as generalizations of number fields (extensions of the rationals Q); see noncommutative number field. See also • Azumaya algebra, generalization of CSAs where the base field is replaced by a commutative local ring • Severi–Brauer variety • Posner's theorem References 1. Lorenz (2008) p.159 2. Lorenz (2008) p.194 3. Lorenz (2008) p.160 4. Gille & Szamuely (2006) p.21 5. Lorenz (2008) p.163 6. Gille & Szamuely (2006) p.100 7. Jacobson (1996) p.60 8. Jacobson (1996) p.61 9. Gille & Szamuely (2006) p.104 10. Cohn, Paul M. (2003). 
Further Algebra and Applications. Springer-Verlag. p. 208. ISBN 1852336676.
11. Gille & Szamuely (2006) p. 105
12. Jacobson (1996) pp. 27–28
13. Gille & Szamuely (2006) p. 101
14. Gille & Szamuely (2006) pp. 37–38
15. Gille & Szamuely (2006) p. 38

• Cohn, P.M. (2003). Further Algebra and Applications (2nd ed.). Springer. ISBN 1852336676. Zbl 1006.00001.
• Jacobson, Nathan (1996). Finite-dimensional division algebras over fields. Berlin: Springer-Verlag. ISBN 3-540-57029-2. Zbl 0874.16002.
• Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023.
• Lorenz, Falko (2008). Algebra. Volume II: Fields with Structure, Algebras and Advanced Topics. Springer. ISBN 978-0-387-72487-4. Zbl 1130.12001.

Further reading

• Albert, A.A. (1939). Structure of Algebras. Colloquium Publications. Vol. 24 (7th revised reprint ed.). American Mathematical Society. ISBN 0-8218-1024-3. Zbl 0023.19901.
• Gille, Philippe; Szamuely, Tamás (2006). Central simple algebras and Galois cohomology. Cambridge Studies in Advanced Mathematics. Vol. 101. Cambridge: Cambridge University Press. ISBN 0-521-86103-9. Zbl 1137.12001.
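As an illustrative sketch (not drawn from the references above), the splitting map for the quaternion algebra H and the stated reduced norm and trace can be checked numerically; the helper names (`qmul`, `to_matrix`, `nrd`, `trd`) are our own choices:

```python
# Sketch: the splitting map  t + xi + yj + zk -> [[t+xi, y+zi], [-y+zi, t-xi]]
# is an algebra homomorphism, and the reduced norm/trace are the determinant
# and trace of the image, as described in the text.

def qmul(p, q):
    # Hamilton product of quaternions written as tuples (t, x, y, z)
    t1, x1, y1, z1 = p
    t2, x2, y2, z2 = q
    return (t1*t2 - x1*x2 - y1*y2 - z1*z2,
            t1*x2 + x1*t2 + y1*z2 - z1*y2,
            t1*y2 - x1*z2 + y1*t2 + z1*x2,
            t1*z2 + x1*y2 - y1*x2 + z1*t2)

def to_matrix(q):
    # image of q in M2(C) under the splitting map
    t, x, y, z = q
    return [[t + x*1j, y + z*1j],
            [-y + z*1j, t - x*1j]]

def matmul(a, b):
    # product of 2x2 complex matrices
    return [[sum(a[r][m] * b[m][c] for m in range(2)) for c in range(2)]
            for r in range(2)]

def nrd(q):
    # reduced norm = determinant of the image
    m = to_matrix(q)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def trd(q):
    # reduced trace = trace of the image
    m = to_matrix(q)
    return m[0][0] + m[1][1]

p, q = (0.5, -1.0, 2.0, 3.0), (1.0, 2.0, -3.0, 0.5)

# the splitting map respects multiplication
lhs, rhs = matmul(to_matrix(p), to_matrix(q)), to_matrix(qmul(p, q))
assert all(abs(lhs[r][c] - rhs[r][c]) < 1e-9 for r in range(2) for c in range(2))

# reduced norm t^2 + x^2 + y^2 + z^2 and reduced trace 2t, as in the text
t, x, y, z = q
assert abs(nrd(q) - (t*t + x*x + y*y + z*z)) < 1e-9
assert abs(trd(q) - 2*t) < 1e-9
```

In particular, since the reduced norm is the determinant of the image, its multiplicativity is immediate from det(AB) = det(A)det(B).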
Wikipedia
Frobenius–Schur indicator

In mathematics, and especially the discipline of representation theory, the Schur indicator, named after Issai Schur, or Frobenius–Schur indicator describes what invariant bilinear forms a given irreducible representation of a compact group on a complex vector space has. It can be used to classify the irreducible representations of compact groups on real vector spaces.

Definition

If a finite-dimensional continuous complex representation of a compact group G has character χ, its Frobenius–Schur indicator is defined to be

$\int _{g\in G}\chi (g^{2})\,d\mu $

for Haar measure μ with μ(G) = 1. When G is finite it is given by

${1 \over |G|}\sum _{g\in G}\chi (g^{2}).$

If χ is irreducible, then its Frobenius–Schur indicator is 1, 0, or −1. It provides a criterion for deciding whether an irreducible representation of G is real, complex or quaternionic, in a specific sense defined below. Much of the content below discusses the case of finite groups, but the general compact case is analogous.

Real irreducible representations

Main article: Real representation

There are three types of irreducible real representations of a finite group on a real vector space V, as Schur's lemma implies that the endomorphism ring commuting with the group action is a real associative division algebra, which by the Frobenius theorem can only be isomorphic to the real numbers, the complex numbers, or the quaternions.

• If the ring is the real numbers, then V⊗C is an irreducible complex representation with Schur indicator 1, also called a real representation.
• If the ring is the complex numbers, then V has two different conjugate complex structures, giving two irreducible complex representations with Schur indicator 0, sometimes called complex representations.
• If the ring is the quaternions, then choosing a subring of the quaternions isomorphic to the complex numbers makes V into an irreducible complex representation of G with Schur indicator −1, called a quaternionic representation.

Moreover, every irreducible representation on a complex vector space can be constructed from a unique irreducible representation on a real vector space in one of the three ways above, so knowing the irreducible representations on complex spaces and their Schur indicators allows one to read off the irreducible representations on real spaces. Real representations can be complexified to give a complex representation of the same dimension, and complex representations can be converted into real representations of twice the dimension by treating the real and imaginary components separately. Also, since every finite-dimensional complex representation can be made unitary, the dual of a unitary representation is also its (complex) conjugate representation, because the Hilbert space norm gives an antilinear bijective map from the representation to its dual. Self-dual complex irreducible representations correspond either to real irreducible representations of the same dimension or to real irreducible representations of twice the dimension, called quaternionic representations (but not both), while non-self-dual complex irreducible representations correspond to real irreducible representations of twice the dimension; in the latter case, both the complex irreducible representation and its dual give rise to the same real irreducible representation. An example of a quaternionic representation is the four-dimensional real irreducible representation of the quaternion group Q8.
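The Q8 example can be verified directly from the definition: averaging χ(g²) over the group gives −1 for the two-dimensional complex irreducible representation, marking it as quaternionic. A minimal sketch (the matrix names `one`, `qi`, `qj`, `qk` are our own):

```python
# Sketch: Frobenius–Schur indicator of the 2-dimensional irreducible
# representation of Q8, computed as (1/|G|) * sum over g of chi(g^2).

def matmul(a, b):
    # product of 2x2 complex matrices
    return [[sum(a[r][m] * b[m][c] for m in range(2)) for c in range(2)]
            for r in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

def neg(a):
    return [[-entry for entry in row] for row in a]

# the quaternion units 1, i, j, k represented as 2x2 complex matrices
one = [[1, 0], [0, 1]]
qi = [[1j, 0], [0, -1j]]
qj = [[0, 1], [-1, 0]]
qk = [[0, 1j], [1j, 0]]

Q8 = [one, qi, qj, qk] + [neg(m) for m in (one, qi, qj, qk)]

# chi(g^2) = trace of rho(g)^2, since rho is a homomorphism
fs = sum(trace(matmul(g, g)) for g in Q8) / len(Q8)
print(fs.real)  # -1.0: the representation is quaternionic
```

Only ±1 square to the identity in Q8, while the other six elements square to −1, so the average is (2·2 + 6·(−2))/8 = −1.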
Definition in terms of the symmetric and alternating square

See also: Symmetric and alternating squares

If V is the underlying vector space of a representation of a group G, then the tensor product representation $V\otimes V$ can be decomposed as the direct sum of two subrepresentations: the symmetric square, denoted $\operatorname {Sym} ^{2}(V)$ (also often denoted by $V\otimes _{S}V$ or $V\odot V$), and the alternating square, $\operatorname {Alt} ^{2}(V)$ (also often denoted by $\wedge ^{2}V$, $V\otimes _{A}V$, or $V\wedge V$).[1] In terms of these square representations, the indicator has the following alternative definition:

$\iota \chi _{V}={\begin{cases}1&{\text{if }}W_{\text{triv}}{\text{ is a subrepresentation of }}\operatorname {Sym} ^{2}(V)\\-1&{\text{if }}W_{\text{triv}}{\text{ is a subrepresentation of }}\operatorname {Alt} ^{2}(V)\\0&{\text{otherwise}}\end{cases}}$

where $W_{\text{triv}}$ is the trivial representation. To see this, note that the term $\chi (g^{2})$ naturally arises in the characters of these representations; to wit, we have

$\chi _{V}(g^{2})=\chi _{V}(g)^{2}-2\chi _{\wedge ^{2}V}(g)$ and $\chi _{V}(g^{2})=2\chi _{\operatorname {Sym} ^{2}(V)}(g)-\chi _{V}(g)^{2}$.[2]

Substituting either of these formulae, the Frobenius–Schur indicator takes on the structure of the natural G-invariant inner product on class functions:

$\iota \chi _{V}={\begin{cases}1&{\text{if }}\langle \chi _{\text{triv}},\chi _{\operatorname {Sym} ^{2}(V)}\rangle =1\\-1&{\text{if }}\langle \chi _{\text{triv}},\chi _{\operatorname {Alt} ^{2}(V)}\rangle =1\\0&{\text{otherwise}}\end{cases}}$

The inner product counts the multiplicities of direct summands; the equivalence of the definitions then follows immediately.

Applications

Let V be an irreducible complex representation of a group G (or equivalently, an irreducible $\mathbb {C} [G]$-module, where $\mathbb {C} [G]$ denotes the group ring). Then 1. There exists a nonzero G-invariant bilinear form on V if and only if $\iota \chi \neq 0$ 2.
There exists a nonzero G-invariant symmetric bilinear form on V if and only if $\iota \chi =1$ 3. There exists a nonzero G-invariant skew-symmetric bilinear form on V if and only if $\iota \chi =-1$.[3] The above is a consequence of the universal properties of the symmetric algebra and exterior algebra, which are the underlying vector spaces of the symmetric and alternating square. Additionally, 1. $\iota \chi =0$ if and only if $\chi $ is not real-valued (these are complex representations), 2. $\iota \chi =1$ if and only if $\chi $ can be realized over $\mathbb {R} $ (these are real representations), and 3. $\iota \chi =-1$ if and only if $\chi $ is real-valued but cannot be realized over $\mathbb {R} $ (these are quaternionic representations).[4]

Higher Frobenius–Schur indicators

Just as the averaged operator ${\frac {1}{|G|}}\sum _{g\in G}\rho (g)$ is a self-intertwiner of any complex representation ρ, so, for any integer n, is ${\frac {1}{|G|}}\sum _{g\in G}\rho (g^{n})$. By Schur's lemma, this is a multiple of the identity for irreducible representations. The trace of this self-intertwiner is called the nth Frobenius–Schur indicator. The original case of the Frobenius–Schur indicator is that for n = 2. The zeroth indicator is the dimension of the irreducible representation; the first indicator is 1 for the trivial representation and 0 for the other irreducible representations. It resembles the Casimir invariants for Lie algebra irreducible representations. In fact, since any representation of G can be thought of as a module for C[G] and vice versa, we can look at the center of C[G]. This is analogous to looking at the center of the universal enveloping algebra of a Lie algebra. It is simple to check that $\sum _{g\in G}g^{n}$ belongs to the center of C[G], which is simply the subspace of class functions on G.

References

1. Serre 1977, p. 9.
2. Fulton, William; Harris, Joe (1991). Axler, S.; Gehring, F. W.; Ribet, K. (eds.).
Representation Theory: A First Course. Springer Graduate Texts in Mathematics 129. New York: Springer. p. 13. ISBN 3-540-97527-6.
3. James 2001, p. 274, Theorem 23.16.
4. James 2001, p. 277, Corollary 23.17.

• G. Frobenius & I. Schur, Über die reellen Darstellungen der endlichen Gruppen (1906), Frobenius Gesammelte Abhandlungen vol. III, 354–377.
• Serre, Jean-Pierre (1977). Linear Representations of Finite Groups. Springer-Verlag. ISBN 0-387-90190-6. OCLC 2202385.
• James, Gordon Douglas (2001). Representations and characters of groups. Liebeck, Martin W. (2nd ed.). Cambridge, UK: Cambridge University Press. pp. 272–278. ISBN 052100392X. OCLC 52220683.
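The higher indicators described above are easy to compute for an abelian group, where each irreducible representation is one-dimensional and its character is the representation itself. A small sketch, using the cyclic group Z/4 and the character χ(g) = i^g (our own illustrative example):

```python
# Sketch: the nth Frobenius–Schur indicator (1/|G|) * sum over g of chi(g^n)
# for a 1-dimensional character of the cyclic group Z/4.  In additive
# notation, g^n corresponds to n*g modulo the group order.

def indicator(n, order, chi):
    return sum(chi((n * g) % order) for g in range(order)) / order

chi = lambda g: 1j ** g  # a faithful irreducible character of Z/4

assert indicator(0, 4, chi) == 1  # zeroth indicator = dimension of the rep
assert indicator(1, 4, chi) == 0  # first indicator: chi is not trivial
assert indicator(2, 4, chi) == 0  # classical FS indicator: chi is complex
```

The value 0 at n = 2 matches the criterion above: χ takes the non-real value i, so the representation is of complex type.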
Schur's inequality

In mathematics, Schur's inequality, named after Issai Schur, establishes that for all non-negative real numbers x, y, z and t > 0,

$\sum _{cyc}x^{t}(x-y)(x-z)=x^{t}(x-y)(x-z)+y^{t}(y-z)(y-x)+z^{t}(z-x)(z-y)\geq 0$

with equality if and only if x = y = z, or two of them are equal and the other is zero. When t is an even positive integer, the inequality holds for all real numbers x, y and z. When $t=1$, the following well-known special case can be derived:

$x^{3}+y^{3}+z^{3}+3xyz\geq xy(x+y)+xz(x+z)+yz(y+z)$

Proof

Since the inequality is symmetric in $x,y,z$, we may assume without loss of generality that $x\geq y\geq z$. Then the inequality

$(x-y)[x^{t}(x-z)-y^{t}(y-z)]+z^{t}(x-z)(y-z)\geq 0$

clearly holds, since every term on the left-hand side is non-negative. This rearranges to Schur's inequality.

Extensions

A generalization of Schur's inequality is the following: suppose a, b, c are positive real numbers. If the triples (a,b,c) and (x,y,z) are similarly sorted, then the following inequality holds:

$a(x-y)(x-z)+b(y-z)(y-x)+c(z-x)(z-y)\geq 0.$

In 2007, the Romanian mathematician Valentin Vornicu showed that a yet more general form of Schur's inequality holds: consider $a,b,c,x,y,z\in \mathbb {R} $, where $a\geq b\geq c$, and either $x\geq y\geq z$ or $z\geq y\geq x$. Let $k\in \mathbb {Z} ^{+}$, and let $f:\mathbb {R} \rightarrow \mathbb {R} _{0}^{+}$ be either convex or monotonic. Then

${f(x)(a-b)^{k}(a-c)^{k}+f(y)(b-a)^{k}(b-c)^{k}+f(z)(c-a)^{k}(c-b)^{k}\geq 0}.$

The standard form of Schur's inequality is the case of this inequality where x = a, y = b, z = c, k = 1, and $f(m)=m^{t}$.[1]

Another possible extension states that if the non-negative real numbers $x\geq y\geq z\geq v$ and the positive real number t are such that x + v ≥ y + z, then[2]

$x^{t}(x-y)(x-z)(x-v)+y^{t}(y-x)(y-z)(y-v)+z^{t}(z-x)(z-y)(z-v)+v^{t}(v-x)(v-y)(v-z)\geq 0.$

Notes

1. Vornicu, Valentin; Olimpiada de Matematica...
de la provocare la experienta; GIL Publishing House; Zalau, Romania.
2. Finta, Béla (2015). "A Schur Type Inequality for Five Variables". Procedia Technology. 19: 799–801. doi:10.1016/j.protcy.2015.02.114.
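As a numerical sanity check (a sketch of our own, not from the cited references), the inequality can be tested on random non-negative triples for several exponents t > 0:

```python
# Sketch: verify Schur's inequality numerically on random samples.
import random

def schur_lhs(x, y, z, t):
    # the cyclic sum x^t (x-y)(x-z) + y^t (y-z)(y-x) + z^t (z-x)(z-y)
    return (x**t * (x - y) * (x - z)
            + y**t * (y - z) * (y - x)
            + z**t * (z - x) * (z - y))

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(0.0, 10.0) for _ in range(3))
    for t in (0.5, 1, 2, 3):
        assert schur_lhs(x, y, z, t) >= -1e-6  # non-negative up to rounding

# equality when x = y = z
assert schur_lhs(4.0, 4.0, 4.0, 1) == 0.0
```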