Runcinated 6-orthoplexes In six-dimensional geometry, a runcinated 6-orthplex is a convex uniform 6-polytope with 3rd order truncations (runcination) of the regular 6-orthoplex. 6-cube Runcinated 6-cube Biruncinated 6-cube Runcinated 6-orthoplex 6-orthoplex Runcitruncated 6-cube Biruncitruncated 6-cube Runcicantellated 6-orthoplex Runcicantellated 6-cube Biruncitruncated 6-orthoplex Runcitruncated 6-orthoplex Runcicantitruncated 6-cube Biruncicantitruncated 6-cube Runcicantitruncated 6-orthoplex Orthogonal projections in BC6 Coxeter plane There are 12 unique runcinations of the 6-orthoplex with permutations of truncations, and cantellations. Half are expressed relative to the dual 6-cube. Runcinated 6-orthoplex Alternate names • Small prismatohexacontatetrapeton (spog) (Jonathan Bowers)[1] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Runcicantellated 6-orthoplex Alternate names • Prismatorhombated hexacontatetrapeton (prog) (Jonathan Bowers)[2] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Runcitruncated 6-orthoplex Alternate names • Prismatotruncated hexacontatetrapeton (potag) (Jonathan Bowers)[3] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Biruncicantellated 6-cube Alternate names • Great biprismated hexeractihexacontatetrapeton (gobpoxog) (Jonathan Bowers)[4] Images orthographic projections Coxeter plane B6 B5 B4 Graph Dihedral symmetry [12] [10] [8] Coxeter plane B3 B2 Graph Dihedral symmetry [6] [4] Coxeter plane A5 A3 Graph Dihedral symmetry [6] [4] Related polytopes These polytopes are from a set of 63 uniform 6-polytopes generated from the B6 Coxeter plane, including the regular 6-cube or 6-orthoplex. B6 polytopes β6 t1β6 t2β6 t2γ6 t1γ6 γ6 t0,1β6 t0,2β6 t1,2β6 t0,3β6 t1,3β6 t2,3γ6 t0,4β6 t1,4γ6 t1,3γ6 t1,2γ6 t0,5γ6 t0,4γ6 t0,3γ6 t0,2γ6 t0,1γ6 t0,1,2β6 t0,1,3β6 t0,2,3β6 t1,2,3β6 t0,1,4β6 t0,2,4β6 t1,2,4β6 t0,3,4β6 t1,2,4γ6 t1,2,3γ6 t0,1,5β6 t0,2,5β6 t0,3,4γ6 t0,2,5γ6 t0,2,4γ6 t0,2,3γ6 t0,1,5γ6 t0,1,4γ6 t0,1,3γ6 t0,1,2γ6 t0,1,2,3β6 t0,1,2,4β6 t0,1,3,4β6 t0,2,3,4β6 t1,2,3,4γ6 t0,1,2,5β6 t0,1,3,5β6 t0,2,3,5γ6 t0,2,3,4γ6 t0,1,4,5γ6 t0,1,3,5γ6 t0,1,3,4γ6 t0,1,2,5γ6 t0,1,2,4γ6 t0,1,2,3γ6 t0,1,2,3,4β6 t0,1,2,3,5β6 t0,1,2,4,5β6 t0,1,2,4,5γ6 t0,1,2,3,5γ6 t0,1,2,3,4γ6 t0,1,2,3,4,5γ6 Notes 1. Klitzing, (x3o3o3x3o4o - spog) 2. Klitzing, (x3o3x3x3o4o - prog) 3. Klitzing, (x3x3o3x3o4o - potag) 4. Klitzing, (o3x3x3x3x4o - gobpoxog) References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. 
• Klitzing, Richard. "6D uniform polytopes (polypeta)". x3o3o3x3o4o - spog, x3o3x3x3o4o - prog, x3x3o3x3o4o - potag, o3x3x3x3x4o - gobpoxog External links • Weisstein, Eric W. "Hypercube". MathWorld. • Polytopes of Various Dimensions • Multi-dimensional Glossary Fundamental convex regular and uniform polytopes in dimensions 2–10 Family An Bn I2(p) / Dn E6 / E7 / E8 / F4 / G2 Hn Regular polygon Triangle Square p-gon Hexagon Pentagon Uniform polyhedron Tetrahedron Octahedron • Cube Demicube Dodecahedron • Icosahedron Uniform polychoron Pentachoron 16-cell • Tesseract Demitesseract 24-cell 120-cell • 600-cell Uniform 5-polytope 5-simplex 5-orthoplex • 5-cube 5-demicube Uniform 6-polytope 6-simplex 6-orthoplex • 6-cube 6-demicube 122 • 221 Uniform 7-polytope 7-simplex 7-orthoplex • 7-cube 7-demicube 132 • 231 • 321 Uniform 8-polytope 8-simplex 8-orthoplex • 8-cube 8-demicube 142 • 241 • 421 Uniform 9-polytope 9-simplex 9-orthoplex • 9-cube 9-demicube Uniform 10-polytope 10-simplex 10-orthoplex • 10-cube 10-demicube Uniform n-polytope n-simplex n-orthoplex • n-cube n-demicube 1k2 • 2k1 • k21 n-pentagonal polytope Topics: Polytope families • Regular polytope • List of regular polytopes and compounds
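The count of "63 uniform 6-polytopes" quoted in the Related polytopes section above is simply the number of ways to ring a non-empty subset of the six nodes of the B6 Coxeter diagram: 2^6 − 1 = 63. A minimal Python sketch of that enumeration (the node numbering and the t-subscript labelling are the usual conventions; nothing here comes from a specific library):

```python
from itertools import combinations

# The B6 Coxeter diagram has six nodes, numbered 0..5 here.  Each uniform
# polytope of the family corresponds to ringing a non-empty subset of the
# nodes; t_{i,j,...} in the table above names exactly that ringed subset.
nodes = range(6)
forms = [s for r in range(1, 7) for s in combinations(nodes, r)]

print(len(forms))            # 63 = 2**6 - 1 non-empty ring subsets
print(forms[0], forms[-1])   # (0,) ... the omnitruncation (0, 1, 2, 3, 4, 5)
```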
Order-5 cubic honeycomb In hyperbolic geometry, the order-5 cubic honeycomb is one of four compact regular space-filling tessellations (or honeycombs) in hyperbolic 3-space. With Schläfli symbol {4,3,5}, it has five cubes {4,3} around each edge, and 20 cubes around each vertex. It is dual with the order-4 dodecahedral honeycomb. Order-5 cubic honeycomb Poincaré disk models TypeHyperbolic regular honeycomb Uniform hyperbolic honeycomb Schläfli symbol{4,3,5} Coxeter diagram Cells{4,3} (cube) Faces{4} (square) Edge figure{5} (pentagon) Vertex figure icosahedron Coxeter groupBH3, [4,3,5] DualOrder-4 dodecahedral honeycomb PropertiesRegular A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. Description One cell, centered in Poincare ball model Main cells Cells with extended edges to ideal boundary Symmetry It has a radial subgroup symmetry construction with dodecahedral fundamental domains: Coxeter notation: [4,(3,5)*], index 120. Related polytopes and honeycombs The order-5 cubic honeycomb has a related alternated honeycomb, ↔ , with icosahedron and tetrahedron cells. The honeycomb is also one of four regular compact honeycombs in 3D hyperbolic space: Four regular compact honeycombs in H3 {5,3,4} {4,3,5} {3,5,3} {5,3,5} There are fifteen uniform honeycombs in the [5,3,4] Coxeter group family, including the order-5 cubic honeycomb as the regular form: [5,3,4] family honeycombs {5,3,4} r{5,3,4} t{5,3,4} rr{5,3,4} t0,3{5,3,4} tr{5,3,4} t0,1,3{5,3,4} t0,1,2,3{5,3,4} {4,3,5} r{4,3,5} t{4,3,5} rr{4,3,5} 2t{4,3,5} tr{4,3,5} t0,1,3{4,3,5} t0,1,2,3{4,3,5} The order-5 cubic honeycomb is in a sequence of regular polychora and honeycombs with icosahedral vertex figures. {p,3,5} polytopes Space S3 H3 Form Finite Compact Paracompact Noncompact Name {3,3,5} {4,3,5} {5,3,5} {6,3,5} {7,3,5} {8,3,5} ... {∞,3,5} Image Cells {3,3} {4,3} {5,3} {6,3} {7,3} {8,3} {∞,3} It is also in a sequence of regular polychora and honeycombs with cubic cells. The first polytope in the sequence is the tesseract, and the second is the Euclidean cubic honeycomb. {4,3,p} regular honeycombs Space S3 E3 H3 Form Finite Affine Compact Paracompact Noncompact Name {4,3,3} {4,3,4} {4,3,5} {4,3,6} {4,3,7} {4,3,8} ... {4,3,∞} Image Vertex figure {3,3} {3,4} {3,5} {3,6} {3,7} {3,8} {3,∞} Rectified order-5 cubic honeycomb Rectified order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolr{4,3,5} or 2r{5,3,4} 2r{5,31,1} Coxeter diagram ↔ Cellsr{4,3} {3,5} Facestriangle {3} square {4} Vertex figure pentagonal prism Coxeter group${\overline {BH}}_{3}$, [4,3,5] ${\overline {DH}}_{3}$, [5,31,1] PropertiesVertex-transitive, edge-transitive The rectified order-5 cubic honeycomb, , has alternating icosahedron and cuboctahedron cells, with a pentagonal prism vertex figure. 
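Stepping back to the lead's claim that {4,3,5} is one of only four compact regular honeycombs of hyperbolic 3-space: this can be checked with the classical angle criterion, under which {p,q,r} is hyperbolic when sin(π/p)·sin(π/r) < cos(π/q), and a hyperbolic honeycomb is compact exactly when both its cell {p,q} and its vertex figure {q,r} are finite (spherical) polyhedra. The Python sketch below applies that test; the function names are illustrative only, and the floating-point tolerance is an implementation choice, not part of the mathematics:

```python
from math import sin, cos, pi

def tiling_type(p, q):
    # {p,q} is a finite (spherical) polyhedron, a Euclidean tiling, or a
    # hyperbolic tiling according to whether 1/p + 1/q is >, =, or < 1/2.
    s = 1 / p + 1 / q
    if abs(s - 0.5) < 1e-12:
        return "euclidean"
    return "spherical" if s > 0.5 else "hyperbolic"

def honeycomb_type(p, q, r):
    # Compare sin(pi/p)*sin(pi/r) with cos(pi/q) to place {p,q,r} in
    # spherical, Euclidean or hyperbolic space.
    lhs, rhs = sin(pi / p) * sin(pi / r), cos(pi / q)
    if abs(lhs - rhs) < 1e-12:
        return "Euclidean honeycomb"
    if lhs > rhs:
        return "spherical (a finite regular 4-polytope)"
    # Hyperbolic: compact when cell and vertex figure are both finite,
    # paracompact when either of them is a Euclidean tiling.
    cell, vertex_figure = tiling_type(p, q), tiling_type(q, r)
    if cell == "spherical" and vertex_figure == "spherical":
        return "compact hyperbolic honeycomb"
    if "euclidean" in (cell, vertex_figure):
        return "paracompact hyperbolic honeycomb"
    return "noncompact hyperbolic honeycomb"

for pqr in [(4, 3, 5), (5, 3, 4), (3, 5, 3), (5, 3, 5),
            (4, 3, 4), (4, 3, 3), (4, 3, 6)]:
    print(pqr, honeycomb_type(*pqr))
```

Running it lists {4,3,5}, {5,3,4}, {3,5,3} and {5,3,5} as the compact hyperbolic cases, classifies {4,3,4} as Euclidean and {4,3,3} (the tesseract) as spherical, and marks {4,3,6} as paracompact.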
Related honeycomb There are four rectified compact regular honeycombs: Four rectified regular compact honeycombs in H3 Image Symbols r{5,3,4} r{4,3,5} r{3,5,3} r{5,3,5} Vertex figure r{p,3,5} Space S3 H3 Form Finite Compact Paracompact Noncompact Name r{3,3,5} r{4,3,5} r{5,3,5} r{6,3,5} r{7,3,5} ... r{∞,3,5} Image Cells {3,5} r{3,3} r{4,3} r{5,3} r{6,3} r{7,3} r{∞,3} Truncated order-5 cubic honeycomb Truncated order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolt{4,3,5} Coxeter diagram Cellst{4,3} {3,5} Facestriangle {3} octagon {8} Vertex figure pentagonal pyramid Coxeter group${\overline {BH}}_{3}$, [4,3,5] PropertiesVertex-transitive The truncated order-5 cubic honeycomb, , has truncated cube and icosahedron cells, with a pentagonal pyramid vertex figure. It can be seen as analogous to the 2D hyperbolic truncated order-5 square tiling, t{4,5}, with truncated square and pentagonal faces: It is similar to the Euclidean (order-4) truncated cubic honeycomb, t{4,3,4}, which has octahedral cells at the truncated vertices. Related honeycombs Four truncated regular compact honeycombs in H3 Image Symbols t{5,3,4} t{4,3,5} t{3,5,3} t{5,3,5} Vertex figure Bitruncated order-5 cubic honeycomb The bitruncated order-5 cubic honeycomb is the same as the bitruncated order-4 dodecahedral honeycomb. Cantellated order-5 cubic honeycomb Cantellated order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolrr{4,3,5} Coxeter diagram Cellsrr{4,3} r{3,5} {}x{5} Facestriangle {3} square {4} pentagon {5} Vertex figure wedge Coxeter group${\overline {BH}}_{3}$, [4,3,5] PropertiesVertex-transitive The cantellated order-5 cubic honeycomb, , has rhombicuboctahedron, icosidodecahedron, and pentagonal prism cells, with a wedge vertex figure. Related honeycombs It is similar to the Euclidean (order-4) cantellated cubic honeycomb, rr{4,3,4}: Four cantellated regular compact honeycombs in H3 Image Symbols rr{5,3,4} rr{4,3,5} rr{3,5,3} rr{5,3,5} Vertex figure Cantitruncated order-5 cubic honeycomb Cantitruncated order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symboltr{4,3,5} Coxeter diagram Cellstr{4,3} t{3,5} {}x{5} Facessquare {4} pentagon {5} hexagon {6} octagon {8} Vertex figure mirrored sphenoid Coxeter group${\overline {BH}}_{3}$, [4,3,5] PropertiesVertex-transitive The cantitruncated order-5 cubic honeycomb, , has truncated cuboctahedron, truncated icosahedron, and pentagonal prism cells, with a mirrored sphenoid vertex figure. Related honeycombs It is similar to the Euclidean (order-4) cantitruncated cubic honeycomb, tr{4,3,4}: Four cantitruncated regular compact honeycombs in H3 Image Symbols tr{5,3,4} tr{4,3,5} tr{3,5,3} tr{5,3,5} Vertex figure Runcinated order-5 cubic honeycomb Runcinated order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Semiregular honeycomb Schläfli symbolt0,3{4,3,5} Coxeter diagram Cells{4,3} {5,3} {}x{5} Facessquare {4} pentagon {5} Vertex figure irregular triangular antiprism Coxeter group${\overline {BH}}_{3}$, [4,3,5] PropertiesVertex-transitive The runcinated order-5 cubic honeycomb or runcinated order-4 dodecahedral honeycomb , has cube, dodecahedron, and pentagonal prism cells, with an irregular triangular antiprism vertex figure. 
It is analogous to the 2D hyperbolic rhombitetrapentagonal tiling, rr{4,5}, with square and pentagonal faces: Related honeycombs It is similar to the Euclidean (order-4) runcinated cubic honeycomb, t0,3{4,3,4}: Three runcinated regular compact honeycombs in H3 Image Symbols t0,3{4,3,5} t0,3{3,5,3} t0,3{5,3,5} Vertex figure Runcitruncated order-5 cubic honeycomb Runctruncated order-5 cubic honeycomb Runcicantellated order-4 dodecahedral honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolt0,1,3{4,3,5} Coxeter diagram Cellst{4,3} rr{5,3} {}x{5} {}x{8} Facestriangle {3} square {4} pentagon {5} octagon {8} Vertex figure isosceles-trapezoidal pyramid Coxeter group${\overline {BH}}_{3}$, [4,3,5] PropertiesVertex-transitive The runcitruncated order-5 cubic honeycomb or runcicantellated order-4 dodecahedral honeycomb, , has truncated cube, rhombicosidodecahedron, pentagonal prism, and octagonal prism cells, with an isosceles-trapezoidal pyramid vertex figure. Related honeycombs It is similar to the Euclidean (order-4) runcitruncated cubic honeycomb, t0,1,3{4,3,4}: Four runcitruncated regular compact honeycombs in H3 Image Symbols t0,1,3{5,3,4} t0,1,3{4,3,5} t0,1,3{3,5,3} t0,1,3{5,3,5} Vertex figure Runcicantellated order-5 cubic honeycomb The runcicantellated order-5 cubic honeycomb is the same as the runcitruncated order-4 dodecahedral honeycomb. Omnitruncated order-5 cubic honeycomb Omnitruncated order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Semiregular honeycomb Schläfli symbolt0,1,2,3{4,3,5} Coxeter diagram Cellstr{5,3} tr{4,3} {10}x{} {8}x{} Facessquare {4} hexagon {6} octagon {8} decagon {10} Vertex figure irregular tetrahedron Coxeter group${\overline {BH}}_{3}$, [4,3,5] PropertiesVertex-transitive The omnitruncated order-5 cubic honeycomb or omnitruncated order-4 dodecahedral honeycomb, , has truncated icosidodecahedron, truncated cuboctahedron, decagonal prism, and octagonal prism cells, with an irregular tetrahedral vertex figure. Related honeycombs It is similar to the Euclidean (order-4) omnitruncated cubic honeycomb, t0,1,2,3{4,3,4}: Three omnitruncated regular compact honeycombs in H3 Image Symbols t0,1,2,3{4,3,5} t0,1,2,3{3,5,3} t0,1,2,3{5,3,5} Vertex figure Alternated order-5 cubic honeycomb Alternated order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolh{4,3,5} Coxeter diagram ↔ Cells{3,3} {3,5} Facestriangle {3} Vertex figure icosidodecahedron Coxeter group${\overline {DH}}_{3}$, [5,31,1] PropertiesVertex-transitive, edge-transitive, quasiregular In 3-dimensional hyperbolic geometry, the alternated order-5 cubic honeycomb is a uniform compact space-filling tessellation (or honeycomb). With Schläfli symbol h{4,3,5}, it can be considered a quasiregular honeycomb, alternating icosahedra and tetrahedra around each vertex in an icosidodecahedron vertex figure. Related honeycombs It has 3 related forms: the cantic order-5 cubic honeycomb, , the runcic order-5 cubic honeycomb, , and the runcicantic order-5 cubic honeycomb, . Cantic order-5 cubic honeycomb Cantic order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolh2{4,3,5} Coxeter diagram ↔ Cellsr{5,3} t{3,5} t{3,3} Facestriangle {3} pentagon {5} hexagon {6} Vertex figure rectangular pyramid Coxeter group${\overline {DH}}_{3}$, [5,31,1] PropertiesVertex-transitive The cantic order-5 cubic honeycomb is a uniform compact space-filling tessellation (or honeycomb), with Schläfli symbol h2{4,3,5}. 
It has icosidodecahedron, truncated icosahedron, and truncated tetrahedron cells, with a rectangular pyramid vertex figure. Runcic order-5 cubic honeycomb Runcic order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolh3{4,3,5} Coxeter diagram ↔ Cells{5,3} rr{5,3} {3,3} Facestriangle {3} square {4} pentagon {5} Vertex figure triangular frustum Coxeter group${\overline {DH}}_{3}$, [5,31,1] PropertiesVertex-transitive The runcic order-5 cubic honeycomb is a uniform compact space-filling tessellation (or honeycomb), with Schläfli symbol h3{4,3,5}. It has dodecahedron, rhombicosidodecahedron, and tetrahedron cells, with a triangular frustum vertex figure. Runcicantic order-5 cubic honeycomb Runcicantic order-5 cubic honeycomb TypeUniform honeycombs in hyperbolic space Schläfli symbolh2,3{4,3,5} Coxeter diagram ↔ Cellst{5,3} tr{5,3} t{3,3} Facestriangle {3} square {4} hexagon {6} decagon {10} Vertex figure irregular tetrahedron Coxeter group${\overline {DH}}_{3}$, [5,31,1] PropertiesVertex-transitive The runcicantic order-5 cubic honeycomb is a uniform compact space-filling tessellation (or honeycomb), with Schläfli symbol h2,3{4,3,5}. It has truncated dodecahedron, truncated icosidodecahedron, and truncated tetrahedron cells, with an irregular tetrahedron vertex figure. See also • Convex uniform honeycombs in hyperbolic space • Regular tessellations of hyperbolic 3-space References • Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. ISBN 0-486-61480-8. (Tables I and II: Regular polytopes and honeycombs, pp. 294-296) • Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 ISBN 0-486-40919-8 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II,III,IV,V, p212-213) • Norman Johnson Uniform Polytopes, Manuscript • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 • N.W. Johnson: Geometries and Transformations, (2015) Chapter 13: Hyperbolic Coxeter groups
Order-5 hexagonal tiling honeycomb In the field of hyperbolic geometry, the order-5 hexagonal tiling honeycomb arises as one of 11 regular paracompact honeycombs in 3-dimensional hyperbolic space. It is paracompact because it has cells composed of an infinite number of faces. Each cell consists of a hexagonal tiling whose vertices lie on a horosphere, a flat plane in hyperbolic space that approaches a single ideal point at infinity. Order-5 hexagonal tiling honeycomb Perspective projection view from center of Poincaré disk model TypeHyperbolic regular honeycomb Paracompact uniform honeycomb Schläfli symbol{6,3,5} Coxeter-Dynkin diagrams ↔ Cells{6,3} Faceshexagon {6} Edge figurepentagon {5} Vertex figureicosahedron DualOrder-6 dodecahedral honeycomb Coxeter group${\overline {HV}}_{3}$, [5,3,6] PropertiesRegular The Schläfli symbol of the order-5 hexagonal tiling honeycomb is {6,3,5}. Since that of the hexagonal tiling is {6,3}, this honeycomb has five such hexagonal tilings meeting at each edge. Since the Schläfli symbol of the icosahedron is {3,5}, the vertex figure of this honeycomb is an icosahedron. Thus, 20 hexagonal tilings meet at each vertex of this honeycomb.[1] A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. Symmetry A lower-symmetry construction of index 120, [6,(3,5)*], exists with regular dodecahedral fundamental domains, and an icosahedral Coxeter-Dynkin diagram with 6 axial infinite-order (ultraparallel) branches. Images The order-5 hexagonal tiling honeycomb is similar to the 2D hyperbolic regular paracompact order-5 apeirogonal tiling, {∞,5}, with five apeirogonal faces meeting around every vertex. Related polytopes and honeycombs The order-5 hexagonal tiling honeycomb is a regular hyperbolic honeycomb in 3-space, and one of 11 which are paracompact. 11 paracompact regular honeycombs {6,3,3} {6,3,4} {6,3,5} {6,3,6} {4,4,3} {4,4,4} {3,3,6} {4,3,6} {5,3,6} {3,6,3} {3,4,4} There are 15 uniform honeycombs in the [6,3,5] Coxeter group family, including this regular form, and its regular dual, the order-6 dodecahedral honeycomb. [6,3,5] family honeycombs {6,3,5} r{6,3,5} t{6,3,5} rr{6,3,5} t0,3{6,3,5} tr{6,3,5} t0,1,3{6,3,5} t0,1,2,3{6,3,5} {5,3,6} r{5,3,6} t{5,3,6} rr{5,3,6} 2t{5,3,6} tr{5,3,6} t0,1,3{5,3,6} t0,1,2,3{5,3,6} The order-5 hexagonal tiling honeycomb has a related alternation honeycomb, represented by ↔ , with icosahedron and triangular tiling cells. It is a part of sequence of regular hyperbolic honeycombs of the form {6,3,p}, with hexagonal tiling facets: {6,3,p} honeycombs Space H3 Form Paracompact Noncompact Name {6,3,3} {6,3,4} {6,3,5} {6,3,6} {6,3,7} {6,3,8} ... {6,3,∞} Coxeter Image Vertex figure {3,p} {3,3} {3,4} {3,5} {3,6} {3,7} {3,8} {3,∞} It is also part of a sequence of regular polychora and honeycombs with icosahedral vertex figures: {p,3,5} polytopes Space S3 H3 Form Finite Compact Paracompact Noncompact Name {3,3,5} {4,3,5} {5,3,5} {6,3,5} {7,3,5} {8,3,5} ... 
{∞,3,5} Image Cells {3,3} {4,3} {5,3} {6,3} {7,3} {8,3} {∞,3} Rectified order-5 hexagonal tiling honeycomb Rectified order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolsr{6,3,5} or t1{6,3,5} Coxeter diagrams ↔ Cells{3,5} r{6,3} or h2{6,3} Facestriangle {3} hexagon {6} Vertex figure pentagonal prism Coxeter groups${{\overline {HV}}_{3}}$, [5,3,6] ${{\overline {HP}}_{3}}$, [5,3[3]] PropertiesVertex-transitive, edge-transitive The rectified order-5 hexagonal tiling honeycomb, t1{6,3,5}, has icosahedron and trihexagonal tiling facets, with a pentagonal prism vertex figure. It is similar to the 2D hyperbolic infinite-order square tiling, r{∞,5} with pentagon and apeirogonal faces. All vertices are on the ideal surface. r{p,3,5} Space S3 H3 Form Finite Compact Paracompact Noncompact Name r{3,3,5} r{4,3,5} r{5,3,5} r{6,3,5} r{7,3,5} ... r{∞,3,5} Image Cells {3,5} r{3,3} r{4,3} r{5,3} r{6,3} r{7,3} r{∞,3} Truncated order-5 hexagonal tiling honeycomb Truncated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolt{6,3,5} or t0,1{6,3,5} Coxeter diagram Cells{3,5} t{6,3} Facestriangle {3} dodecagon {12} Vertex figure pentagonal pyramid Coxeter groups${\overline {HV}}_{3}$, [5,3,6] PropertiesVertex-transitive The truncated order-5 hexagonal tiling honeycomb, t0,1{6,3,5}, has icosahedron and truncated hexagonal tiling facets, with a pentagonal pyramid vertex figure. Bitruncated order-5 hexagonal tiling honeycomb Bitruncated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbol2t{6,3,5} or t1,2{6,3,5} Coxeter diagram ↔ Cellst{3,6} t{3,5} Facespentagon {5} hexagon {6} Vertex figure digonal disphenoid Coxeter groups${\overline {HV}}_{3}$, [5,3,6] ${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive The bitruncated order-5 hexagonal tiling honeycomb, t1,2{6,3,5}, has hexagonal tiling and truncated icosahedron facets, with a digonal disphenoid vertex figure. Cantellated order-5 hexagonal tiling honeycomb Cantellated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolrr{6,3,5} or t0,2{6,3,5} Coxeter diagram Cellsr{3,5} rr{6,3} {}x{5} Facestriangle {3} square {4} pentagon {5} hexagon {6} Vertex figure wedge Coxeter groups${\overline {HV}}_{3}$, [5,3,6] PropertiesVertex-transitive The cantellated order-5 hexagonal tiling honeycomb, t0,2{6,3,5}, has icosidodecahedron, rhombitrihexagonal tiling, and pentagonal prism facets, with a wedge vertex figure. Cantitruncated order-5 hexagonal tiling honeycomb Cantitruncated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symboltr{6,3,5} or t0,1,2{6,3,5} Coxeter diagram Cellst{3,5} tr{6,3} {}x{5} Facessquare {4} pentagon {5} hexagon {6} dodecagon {12} Vertex figure mirrored sphenoid Coxeter groups${\overline {HV}}_{3}$, [5,3,6] PropertiesVertex-transitive The cantitruncated order-5 hexagonal tiling honeycomb, t0,1,2{6,3,5}, has truncated icosahedron, truncated trihexagonal tiling, and pentagonal prism facets, with a mirrored sphenoid vertex figure. 
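A worked check of the paracompactness claimed in the lead, using the standard classification criterion for a regular honeycomb {p,q,r}: since $\sin {\tfrac {\pi }{6}}\,\sin {\tfrac {\pi }{5}}\approx 0.294<\cos {\tfrac {\pi }{3}}={\tfrac {1}{2}}$, the honeycomb {6,3,5} is hyperbolic; and since ${\tfrac {1}{6}}+{\tfrac {1}{3}}={\tfrac {1}{2}}$, its cell {6,3} is a Euclidean tiling rather than a finite polyhedron, which is exactly what makes {6,3,5} paracompact rather than compact.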
Runcinated order-5 hexagonal tiling honeycomb Runcinated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolt0,3{6,3,5} Coxeter diagram Cells{6,3} {5,3} {}x{6} {}x{5} Facessquare {4} pentagon {5} hexagon {6} Vertex figure irregular triangular antiprism Coxeter groups${\overline {HV}}_{3}$, [5,3,6] PropertiesVertex-transitive The runcinated order-5 hexagonal tiling honeycomb, t0,3{6,3,5}, has dodecahedron, hexagonal tiling, pentagonal prism, and hexagonal prism facets, with an irregular triangular antiprism vertex figure. Runcitruncated order-5 hexagonal tiling honeycomb Runcitruncated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolt0,1,3{6,3,5} Coxeter diagram Cellst{6,3} rr{5,3} {}x{5} {}x{12} Facestriangle {3} square {4} pentagon {5} dodecagon {12} Vertex figure isosceles-trapezoidal pyramid Coxeter groups${\overline {HV}}_{3}$, [5,3,6] PropertiesVertex-transitive The runcitruncated order-5 hexagonal tiling honeycomb, t0,1,3{6,3,5}, has truncated hexagonal tiling, rhombicosidodecahedron, pentagonal prism, and dodecagonal prism cells, with an isosceles-trapezoidal pyramid vertex figure. Runcicantellated order-5 hexagonal tiling honeycomb The runcicantellated order-5 hexagonal tiling honeycomb is the same as the runcitruncated order-6 dodecahedral honeycomb. Omnitruncated order-5 hexagonal tiling honeycomb Omnitruncated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolt0,1,2,3{6,3,5} Coxeter diagram Cellstr{6,3} tr{5,3} {}x{10} {}x{12} Facessquare {4} hexagon {6} decagon {10} dodecagon {12} Vertex figure irregular tetrahedron Coxeter groups${\overline {HV}}_{3}$, [5,3,6] PropertiesVertex-transitive The omnitruncated order-5 hexagonal tiling honeycomb, t0,1,2,3{6,3,5}, has truncated trihexagonal tiling, truncated icosidodecahedron, decagonal prism, and dodecagonal prism facets, with an irregular tetrahedral vertex figure. Alternated order-5 hexagonal tiling honeycomb Alternated order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Semiregular honeycomb Schläfli symbolh{6,3,5} Coxeter diagram ↔ Cells{3[3]} {3,5} Facestriangle {3} Vertex figure truncated icosahedron Coxeter groups${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive, edge-transitive, quasiregular The alternated order-5 hexagonal tiling honeycomb, h{6,3,5}, ↔ , has triangular tiling and icosahedron facets, with a truncated icosahedron vertex figure. It is a quasiregular honeycomb. Cantic order-5 hexagonal tiling honeycomb Cantic order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolh2{6,3,5} Coxeter diagram ↔ Cellsh2{6,3} t{3,5} r{5,3} Facestriangle {3} pentagon {5} hexagon {6} Vertex figure triangular prism Coxeter groups${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive The cantic order-5 hexagonal tiling honeycomb, h2{6,3,5}, ↔ , has trihexagonal tiling, truncated icosahedron, and icosidodecahedron facets, with a triangular prism vertex figure. 
Runcic order-5 hexagonal tiling honeycomb Runcic order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolh3{6,3,5} Coxeter diagram ↔ Cells{3[3]} rr{5,3} {5,3} {}x{3} Facestriangle {3} square {4} pentagon {5} Vertex figure triangular cupola Coxeter groups${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive The runcic order-5 hexagonal tiling honeycomb, h3{6,3,5}, ↔ , has triangular tiling, rhombicosidodecahedron, dodecahedron, and triangular prism facets, with a triangular cupola vertex figure. Runcicantic order-5 hexagonal tiling honeycomb Runcicantic order-5 hexagonal tiling honeycomb TypeParacompact uniform honeycomb Schläfli symbolh2,3{6,3,5} Coxeter diagram ↔ Cellsh2{6,3} tr{5,3} t{5,3} {}x{3} Facestriangle {3} square {4} hexagon {6} decagon {10} Vertex figure rectangular pyramid Coxeter groups${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive The runcicantic order-5 hexagonal tiling honeycomb, h2,3{6,3,5}, ↔ , has trihexagonal tiling, truncated icosidodecahedron, truncated dodecahedron, and triangular prism facets, with a rectangular pyramid vertex figure. See also • Convex uniform honeycombs in hyperbolic space • Regular tessellations of hyperbolic 3-space • Paracompact uniform honeycombs References 1. Coxeter The Beauty of Geometry, 1999, Chapter 10, Table III • Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. ISBN 0-486-61480-8. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296) • The Beauty of Geometry: Twelve Essays (1999), Dover Publications, LCCN 99-35678, ISBN 0-486-40919-8 (Chapter 10, Regular Honeycombs in Hyperbolic Space) Table III • Jeffrey R. Weeks The Shape of Space, 2nd edition ISBN 0-8247-0709-5 (Chapter 16-17: Geometries on Three-manifolds I,II) • Norman Johnson Uniform Polytopes, Manuscript • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 • N.W. Johnson: Geometries and Transformations, (2018) Chapter 13: Hyperbolic Coxeter groups
Order-6 cubic honeycomb The order-6 cubic honeycomb is a paracompact regular space-filling tessellation (or honeycomb) in hyperbolic 3-space. It is paracompact because it has vertex figures composed of an infinite number of facets, with all vertices as ideal points at infinity. With Schläfli symbol {4,3,6}, the honeycomb has six ideal cubes meeting along each edge. Its vertex figure is an infinite triangular tiling. Its dual is the order-4 hexagonal tiling honeycomb. Order-6 cubic honeycomb Perspective projection view within Poincaré disk model TypeHyperbolic regular honeycomb Paracompact uniform honeycomb Schläfli symbol{4,3,6} {4,3[3]} Coxeter diagram ↔ ↔ Cells{4,3} Facessquare {4} Edge figurehexagon {6} Vertex figure triangular tiling Coxeter group${\overline {BV}}_{3}$, [4,3,6] ${\overline {BP}}_{3}$, [4,3[3]] DualOrder-4 hexagonal tiling honeycomb PropertiesRegular, quasiregular A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. Images One cell viewed outside of the Poincaré sphere model The order-6 cubic honeycomb is analogous to the 2D hyperbolic infinite-order square tiling, {4,∞} with square faces. All vertices are on the ideal surface. Symmetry A half-symmetry construction of the order-6 cubic honeycomb exists as {4,3[3]}, with two alternating types (colors) of cubic cells. This construction has Coxeter-Dynkin diagram ↔ . Another lower-symmetry construction, [4,3*,6], of index 6, exists with a non-simplex fundamental domain, with Coxeter-Dynkin diagram . This honeycomb contains that tile 2-hypercycle surfaces, similar to the paracompact order-3 apeirogonal tiling, : Related polytopes and honeycombs The order-6 cubic honeycomb is a regular hyperbolic honeycomb in 3-space, and one of 11 which are paracompact. 11 paracompact regular honeycombs {6,3,3} {6,3,4} {6,3,5} {6,3,6} {4,4,3} {4,4,4} {3,3,6} {4,3,6} {5,3,6} {3,6,3} {3,4,4} It has a related alternation honeycomb, represented by ↔ . This alternated form has hexagonal tiling and tetrahedron cells. There are fifteen uniform honeycombs in the [6,3,4] Coxeter group family, including the order-6 cubic honeycomb itself. [6,3,4] family honeycombs {6,3,4} r{6,3,4} t{6,3,4} rr{6,3,4} t0,3{6,3,4} tr{6,3,4} t0,1,3{6,3,4} t0,1,2,3{6,3,4} {4,3,6} r{4,3,6} t{4,3,6} rr{4,3,6} 2t{4,3,6} tr{4,3,6} t0,1,3{4,3,6} t0,1,2,3{4,3,6} The order-6 cubic honeycomb is part of a sequence of regular polychora and honeycombs with cubic cells. {4,3,p} regular honeycombs Space S3 E3 H3 Form Finite Affine Compact Paracompact Noncompact Name {4,3,3} {4,3,4} {4,3,5} {4,3,6} {4,3,7} {4,3,8} ... {4,3,∞} Image Vertex figure {3,3} {3,4} {3,5} {3,6} {3,7} {3,8} {3,∞} It is also part of a sequence of honeycombs with triangular tiling vertex figures. Hyperbolic uniform honeycombs: {p,3,6} Form Paracompact Noncompact Name {3,3,6} {4,3,6} {5,3,6} {6,3,6} {7,3,6} {8,3,6} ... 
{∞,3,6} Image Cells {3,3} {4,3} {5,3} {6,3} {7,3} {8,3} {∞,3} Rectified order-6 cubic honeycomb Rectified order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolsr{4,3,6} or t1{4,3,6} Coxeter diagrams ↔ ↔ ↔ Cellsr{3,4} {3,6} Facestriangle {3} square {4} Vertex figure hexagonal prism Coxeter groups${\overline {BV}}_{3}$, [4,3,6] ${\overline {DV}}_{3}$, [6,31,1] ${\overline {BP}}_{3}$, [4,3[3]] ${\overline {DP}}_{3}$, [3[]×[]] PropertiesVertex-transitive, edge-transitive The rectified order-6 cubic honeycomb, r{4,3,6}, has cuboctahedral and triangular tiling facets, with a hexagonal prism vertex figure. It is similar to the 2D hyperbolic tetraapeirogonal tiling, r{4,∞}, alternating apeirogonal and square faces: r{p,3,6} Space H3 Form Paracompact Noncompact Name r{3,3,6} r{4,3,6} r{5,3,6} r{6,3,6} r{7,3,6} ... r{∞,3,6} Image Cells {3,6} r{3,3} r{4,3} r{5,3} r{6,3} r{7,3} r{∞,3} Truncated order-6 cubic honeycomb Truncated order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolst{4,3,6} or t0,1{4,3,6} Coxeter diagrams ↔ Cellst{4,3} {3,6} Facestriangle {3} octagon {8} Vertex figure hexagonal pyramid Coxeter groups${\overline {BV}}_{3}$, [4,3,6] ${\overline {BP}}_{3}$, [4,3[3]] PropertiesVertex-transitive The truncated order-6 cubic honeycomb, t{4,3,6}, has truncated cube and triangular tiling facets, with a hexagonal pyramid vertex figure. It is similar to the 2D hyperbolic truncated infinite-order square tiling, t{4,∞}, with apeirogonal and octagonal (truncated square) faces: Bitruncated order-6 cubic honeycomb The bitruncated order-6 cubic honeycomb is the same as the bitruncated order-4 hexagonal tiling honeycomb. Cantellated order-6 cubic honeycomb Cantellated order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolsrr{4,3,6} or t0,2{4,3,6} Coxeter diagrams ↔ Cellsrr{4,3} r{3,6} {}x{6} Facestriangle {3} square {4} hexagon {6} Vertex figure wedge Coxeter groups${\overline {BV}}_{3}$, [4,3,6] ${\overline {BP}}_{3}$, [4,3[3]] PropertiesVertex-transitive The cantellated order-6 cubic honeycomb, rr{4,3,6}, has rhombicuboctahedron, trihexagonal tiling, and hexagonal prism facets, with a wedge vertex figure. Cantitruncated order-6 cubic honeycomb Cantitruncated order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolstr{4,3,6} or t0,1,2{4,3,6} Coxeter diagrams ↔ Cellstr{4,3} t{3,6} {}x{6} Facessquare {4} hexagon {6} octagon {8} Vertex figure mirrored sphenoid Coxeter groups${\overline {BV}}_{3}$, [4,3,6] ${\overline {BP}}_{3}$, [4,3[3]] PropertiesVertex-transitive The cantitruncated order-6 cubic honeycomb, tr{4,3,6}, has truncated cuboctahedron, hexagonal tiling, and hexagonal prism facets, with a mirrored sphenoid vertex figure. Runcinated order-6 cubic honeycomb The runcinated order-6 cubic honeycomb is the same as the runcinated order-4 hexagonal tiling honeycomb. Runcitruncated order-6 cubic honeycomb Runcitruncated order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolst0,1,3{4,3,6} Coxeter diagrams Cellst{4,3} rr{3,6} {}x{6} {}x{8} Facestriangle {3} square {4} hexagon {6} octagon {8} Vertex figure isosceles-trapezoidal pyramid Coxeter groups${\overline {BV}}_{3}$, [4,3,6] PropertiesVertex-transitive The runcitruncated order-6 cubic honeycomb, t0,1,3{4,3,6}, has truncated cube, rhombitrihexagonal tiling, hexagonal prism, and octagonal prism facets, with an isosceles-trapezoidal pyramid vertex figure. 
Runcicantellated order-6 cubic honeycomb The runcicantellated order-6 cubic honeycomb is the same as the runcitruncated order-4 hexagonal tiling honeycomb. Omnitruncated order-6 cubic honeycomb The omnitruncated order-6 cubic honeycomb is the same as the omnitruncated order-4 hexagonal tiling honeycomb. Alternated order-6 cubic honeycomb Alternated order-6 cubic honeycomb TypeParacompact uniform honeycomb Semiregular honeycomb Schläfli symbolh{4,3,6} Coxeter diagram ↔ ↔ ↔ ↔ Cells{3,3} {3,6} Facestriangle {3} Vertex figure trihexagonal tiling Coxeter group${\overline {DV}}_{3}$, [6,31,1] ${\overline {DP}}_{3}$, [3[]x[]] PropertiesVertex-transitive, edge-transitive, quasiregular In three-dimensional hyperbolic geometry, the alternated order-6 cubic honeycomb is a paracompact uniform space-filling tessellation (or honeycomb). As an alternation, with Schläfli symbol h{4,3,6} and Coxeter-Dynkin diagram or , it can be considered a quasiregular honeycomb, alternating triangular tilings and tetrahedra around each vertex in a trihexagonal tiling vertex figure. Symmetry A half-symmetry construction from the form {4,3[3]} exists, with two alternating types (colors) of triangular tiling cells. This form has Coxeter-Dynkin diagram ↔ . Another lower-symmetry form of index 6, [4,3*,6], exists with a non-simplex fundamental domain, with Coxeter-Dynkin diagram . Related honeycombs The alternated order-6 cubic honeycomb is part of a series of quasiregular polychora and honeycombs. Quasiregular polychora and honeycombs: h{4,p,q} Space Finite Affine Compact Paracompact Schläfli symbol h{4,3,3} h{4,3,4} h{4,3,5} h{4,3,6} h{4,4,3} h{4,4,4} $\left\{3,{3 \atop 3}\right\}$ $\left\{3,{4 \atop 3}\right\}$ $\left\{3,{5 \atop 3}\right\}$ $\left\{3,{6 \atop 3}\right\}$ $\left\{4,{4 \atop 3}\right\}$ $\left\{4,{4 \atop 4}\right\}$ Coxeter diagram ↔ ↔ ↔ ↔ ↔ ↔ ↔ ↔ Image Vertex figure r{p,3} It also has 3 related forms: the cantic order-6 cubic honeycomb, h2{4,3,6}, ; the runcic order-6 cubic honeycomb, h3{4,3,6}, ; and the runcicantic order-6 cubic honeycomb, h2,3{4,3,6}, . Cantic order-6 cubic honeycomb Cantic order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolh2{4,3,6} Coxeter diagram ↔ ↔ ↔ Cellst{3,3} r{6,3} t{3,6} Facestriangle {3} hexagon {6} Vertex figure rectangular pyramid Coxeter group${\overline {DV}}_{3}$, [6,31,1] ${\overline {DP}}_{3}$, [3[]x[]] PropertiesVertex-transitive The cantic order-6 cubic honeycomb is a paracompact uniform space-filling tessellation (or honeycomb) with Schläfli symbol h2{4,3,6}. It is composed of truncated tetrahedron, trihexagonal tiling, and hexagonal tiling facets, with a rectangular pyramid vertex figure. Runcic order-6 cubic honeycomb Runcic order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolh3{4,3,6} Coxeter diagram ↔ Cells{3,3} {6,3} rr{6,3} Facestriangle {3} square {4} hexagon {6} Vertex figure triangular cupola Coxeter group${\overline {DV}}_{3}$, [6,31,1] PropertiesVertex-transitive The runcic order-6 cubic honeycomb is a paracompact uniform space-filling tessellation (or honeycomb) with Schläfli symbol h3{4,3,6}. It is composed of tetrahedron, hexagonal tiling, and rhombitrihexagonal tiling facets, with a triangular cupola vertex figure. 
Runcicantic order-6 cubic honeycomb Runcicantic order-6 cubic honeycomb TypeParacompact uniform honeycomb Schläfli symbolh2,3{4,3,6} Coxeter diagram ↔ Cellst{6,3} tr{6,3} t{3,3} Facestriangle {3} square {4} hexagon {6} dodecagon {12} Vertex figure mirrored sphenoid Coxeter group${\overline {DV}}_{3}$, [6,31,1] PropertiesVertex-transitive The runcicantic order-6 cubic honeycomb is a paracompact uniform space-filling tessellation (or honeycomb), with Schläfli symbol h2,3{4,3,6}. It is composed of truncated hexagonal tiling, truncated trihexagonal tiling, and truncated tetrahedron facets, with a mirrored sphenoid vertex figure. See also • Convex uniform honeycombs in hyperbolic space • Regular tessellations of hyperbolic 3-space • Paracompact uniform honeycombs References • Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. ISBN 0-486-61480-8. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296) • The Beauty of Geometry: Twelve Essays (1999), Dover Publications, LCCN 99-35678, ISBN 0-486-40919-8 (Chapter 10, Regular Honeycombs in Hyperbolic Space) Table III • Jeffrey R. Weeks The Shape of Space, 2nd edition ISBN 0-8247-0709-5 (Chapter 16-17: Geometries on Three-manifolds I,II) • Norman Johnson Uniform Polytopes, Manuscript • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 • N.W. Johnson: Geometries and Transformations, (2018) Chapter 13: Hyperbolic Coxeter groups
Order-6 dodecahedral honeycomb The order-6 dodecahedral honeycomb is one of 11 paracompact regular honeycombs in hyperbolic 3-space. It is paracompact because it has vertex figures composed of an infinite number of faces, with all vertices as ideal points at infinity. It has Schläfli symbol {5,3,6}, with six ideal dodecahedral cells surrounding each edge of the honeycomb. Each vertex is ideal, and surrounded by infinitely many dodecahedra. The honeycomb has a triangular tiling vertex figure. Order-6 dodecahedral honeycomb Perspective projection view within Poincaré disk model TypeHyperbolic regular honeycomb Paracompact uniform honeycomb Schläfli symbol{5,3,6} {5,3[3]} Coxeter diagram ↔ Cells{5,3} Facespentagon {5} Edge figurehexagon {6} Vertex figure triangular tiling DualOrder-5 hexagonal tiling honeycomb Coxeter group${\overline {HV}}_{3}$, [5,3,6] ${\overline {HP}}_{3}$, [5,3[3]] PropertiesRegular, quasiregular A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. Symmetry A half symmetry construction exists as with alternately colored dodecahedral cells. Images The model is cell-centered within the Poincaré disk model, with the viewpoint then placed at the origin. The order-6 dodecahedral honeycomb is similar to the 2D hyperbolic infinite-order pentagonal tiling, {5,∞}, with pentagonal faces, and with vertices on the ideal surface. Related polytopes and honeycombs The order-6 dodecahedral honeycomb is a regular hyperbolic honeycomb in 3-space, and one of 11 which are paracompact. 11 paracompact regular honeycombs {6,3,3} {6,3,4} {6,3,5} {6,3,6} {4,4,3} {4,4,4} {3,3,6} {4,3,6} {5,3,6} {3,6,3} {3,4,4} There are 15 uniform honeycombs in the [5,3,6] Coxeter group family, including this regular form, and its regular dual, the order-5 hexagonal tiling honeycomb. [6,3,5] family honeycombs {6,3,5} r{6,3,5} t{6,3,5} rr{6,3,5} t0,3{6,3,5} tr{6,3,5} t0,1,3{6,3,5} t0,1,2,3{6,3,5} {5,3,6} r{5,3,6} t{5,3,6} rr{5,3,6} 2t{5,3,6} tr{5,3,6} t0,1,3{5,3,6} t0,1,2,3{5,3,6} The order-6 dodecahedral honeycomb is part of a sequence of regular polychora and honeycombs with triangular tiling vertex figures: Hyperbolic uniform honeycombs: {p,3,6} Form Paracompact Noncompact Name {3,3,6} {4,3,6} {5,3,6} {6,3,6} {7,3,6} {8,3,6} ... {∞,3,6} Image Cells {3,3} {4,3} {5,3} {6,3} {7,3} {8,3} {∞,3} It is also part of a sequence of regular polytopes and honeycombs with dodecahedral cells: {5,3,p} polytopes Space S3 H3 Form Finite Compact Paracompact Noncompact Name {5,3,3} {5,3,4} {5,3,5} {5,3,6} {5,3,7} {5,3,8} ... 
{5,3,∞} Image Vertex figure {3,3} {3,4} {3,5} {3,6} {3,7} {3,8} {3,∞} Rectified order-6 dodecahedral honeycomb Rectified order-6 dodecahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolsr{5,3,6} t1{5,3,6} Coxeter diagrams ↔ Cellsr{5,3} {3,6} Facestriangle {3} pentagon {5} Vertex figure hexagonal prism Coxeter groups${\overline {HV}}_{3}$, [5,3,6] ${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive, edge-transitive The rectified order-6 dodecahedral honeycomb, t1{5,3,6} has icosidodecahedron and triangular tiling cells connected in a hexagonal prism vertex figure. Perspective projection view within Poincaré disk model It is similar to the 2D hyperbolic pentaapeirogonal tiling, r{5,∞} with pentagon and apeirogonal faces. r{p,3,6} Space H3 Form Paracompact Noncompact Name r{3,3,6} r{4,3,6} r{5,3,6} r{6,3,6} r{7,3,6} ... r{∞,3,6} Image Cells {3,6} r{3,3} r{4,3} r{5,3} r{6,3} r{7,3} r{∞,3} Truncated order-6 dodecahedral honeycomb Truncated order-6 dodecahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolst{5,3,6} t0,1{5,3,6} Coxeter diagrams ↔ Cellst{5,3} {3,6} Facestriangle {3} decagon {10} Vertex figure hexagonal pyramid Coxeter groups${\overline {HV}}_{3}$, [5,3,6] ${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive The truncated order-6 dodecahedral honeycomb, t0,1{5,3,6} has truncated dodecahedron and triangular tiling cells connected in a hexagonal pyramid vertex figure. Bitruncated order-6 dodecahedral honeycomb The bitruncated order-6 dodecahedral honeycomb is the same as the bitruncated order-5 hexagonal tiling honeycomb. Cantellated order-6 dodecahedral honeycomb Cantellated order-6 dodecahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolsrr{5,3,6} t0,2{5,3,6} Coxeter diagrams ↔ Cellsrr{5,3} rr{6,3} {}x{6} Facestriangle {3} square {4} pentagon {5} hexagon {6} Vertex figure wedge Coxeter groups${\overline {HV}}_{3}$, [5,3,6] ${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive The cantellated order-6 dodecahedral honeycomb, t0,2{5,3,6}, has rhombicosidodecahedron, trihexagonal tiling, and hexagonal prism cells, with a wedge vertex figure. Cantitruncated order-6 dodecahedral honeycomb Cantitruncated order-6 dodecahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolstr{5,3,6} t0,1,2{5,3,6} Coxeter diagrams ↔ Cellstr{5,3} t{3,6} {}x{6} Facessquare {4} hexagon {6} decagon {10} Vertex figure mirrored sphenoid Coxeter groups${\overline {HV}}_{3}$, [5,3,6] ${\overline {HP}}_{3}$, [5,3[3]] PropertiesVertex-transitive The cantitruncated order-6 dodecahedral honeycomb, t0,1,2{5,3,6} has truncated icosidodecahedron, hexagonal tiling, and hexagonal prism facets, with a mirrored sphenoid vertex figure. Runcinated order-6 dodecahedral honeycomb The runcinated order-6 dodecahedral honeycomb is the same as the runcinated order-5 hexagonal tiling honeycomb. Runcitruncated order-6 dodecahedral honeycomb Runcitruncated order-6 dodecahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolst0,1,3{5,3,6} Coxeter diagrams Cellst{5,3} rr{6,3} {}x{10} {}x{6} Facessquare {4} hexagon {6} decagon {10} Vertex figure isosceles-trapezoidal pyramid Coxeter groups${\overline {HV}}_{3}$, [5,3,6] PropertiesVertex-transitive The runcitruncated order-6 dodecahedral honeycomb, t0,1,3{5,3,6} has truncated dodecahedron, rhombitrihexagonal tiling, decagonal prism, and hexagonal prism facets, with an isosceles-trapezoidal pyramid vertex figure. 
Runcicantellated order-6 dodecahedral honeycomb The runcicantellated order-6 dodecahedral honeycomb is the same as the runcitruncated order-5 hexagonal tiling honeycomb. Omnitruncated order-6 dodecahedral honeycomb The omnitruncated order-6 dodecahedral honeycomb is the same as the omnitruncated order-5 hexagonal tiling honeycomb. See also • Convex uniform honeycombs in hyperbolic space • Regular tessellations of hyperbolic 3-space • Paracompact uniform honeycombs References • Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. ISBN 0-486-61480-8. (Tables I and II: Regular polytopes and honeycombs, pp. 294–296) • The Beauty of Geometry: Twelve Essays (1999), Dover Publications, LCCN 99-35678, ISBN 0-486-40919-8 (Chapter 10, Regular Honeycombs in Hyperbolic Space) Table III • Jeffrey R. Weeks The Shape of Space, 2nd edition ISBN 0-8247-0709-5 (Chapter 16-17: Geometries on Three-manifolds I,II) • Norman Johnson Uniform Polytopes, Manuscript • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 • N.W. Johnson: Geometries and Transformations, (2018) Chapter 13: Hyperbolic Coxeter groups
Order-6 tetrahedral honeycomb In hyperbolic 3-space, the order-6 tetrahedral honeycomb is a paracompact regular space-filling tessellation (or honeycomb). It is paracompact because it has vertex figures composed of an infinite number of faces, and has all vertices as ideal points at infinity. With Schläfli symbol {3,3,6}, the order-6 tetrahedral honeycomb has six ideal tetrahedra around each edge. All vertices are ideal, with infinitely many tetrahedra existing around each vertex in a triangular tiling vertex figure.[1] Order-6 tetrahedral honeycomb Perspective projection view within Poincaré disk model TypeHyperbolic regular honeycomb Paracompact uniform honeycomb Schläfli symbols{3,3,6} {3,3[3]} Coxeter diagrams ↔ Cells{3,3} Facestriangle {3} Edge figurehexagon {6} Vertex figure triangular tiling DualHexagonal tiling honeycomb Coxeter groups${\overline {V}}_{3}$, [3,3,6] ${\overline {P}}_{3}$, [3,3[3]] PropertiesRegular, quasiregular A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions. Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space. Symmetry constructions The order-6 tetrahedral honeycomb has a second construction as a uniform honeycomb, with Schläfli symbol {3,3[3]}. This construction contains alternating types, or colors, of tetrahedral cells. In Coxeter notation, this half symmetry is represented as [3,3,6,1+] ↔ [3,((3,3,3))], or [3,3[3]]: ↔ . Related polytopes and honeycombs The order-6 tetrahedral honeycomb is similar to the two-dimensional infinite-order triangular tiling, {3,∞}. Both tessellations are regular, and only contain triangles and ideal vertices. The order-6 tetrahedral honeycomb is also a regular hyperbolic honeycomb in 3-space, and one of 11 which are paracompact. 11 paracompact regular honeycombs {6,3,3} {6,3,4} {6,3,5} {6,3,6} {4,4,3} {4,4,4} {3,3,6} {4,3,6} {5,3,6} {3,6,3} {3,4,4} This honeycomb is one of 15 uniform paracompact honeycombs in the [6,3,3] Coxeter group, along with its dual, the hexagonal tiling honeycomb. [6,3,3] family honeycombs {6,3,3} r{6,3,3} t{6,3,3} rr{6,3,3} t0,3{6,3,3} tr{6,3,3} t0,1,3{6,3,3} t0,1,2,3{6,3,3} {3,3,6} r{3,3,6} t{3,3,6} rr{3,3,6} 2t{3,3,6} tr{3,3,6} t0,1,3{3,3,6} t0,1,2,3{3,3,6} The order-6 tetrahedral honeycomb is part of a sequence of regular polychora and honeycombs with tetrahedral cells. {3,3,p} polytopes Space S3 H3 Form Finite Paracompact Noncompact Name {3,3,3} {3,3,4} {3,3,5} {3,3,6} {3,3,7} {3,3,8} ... {3,3,∞} Image Vertex figure {3,3} {3,4} {3,5} {3,6} {3,7} {3,8} {3,∞} It is also part of a sequence of honeycombs with triangular tiling vertex figures. Hyperbolic uniform honeycombs: {p,3,6} and {p,3[3]} Form Paracompact Noncompact Name {3,3,6} {3,3[3]} {4,3,6} {4,3[3]} {5,3,6} {5,3[3]} {6,3,6} {6,3[3]} {7,3,6} {7,3[3]} {8,3,6} {8,3[3]} ... 
{∞,3,6} {∞,3[3]} Image Cells {3,3} {4,3} {5,3} {6,3} {7,3} {8,3} {∞,3} Rectified order-6 tetrahedral honeycomb Rectified order-6 tetrahedral honeycomb TypeParacompact uniform honeycomb Semiregular honeycomb Schläfli symbolsr{3,3,6} or t1{3,3,6} Coxeter diagrams ↔ Cellsr{3,3} {3,6} Facestriangle {3} Vertex figure hexagonal prism Coxeter groups${\overline {V}}_{3}$, [3,3,6] ${\overline {P}}_{3}$, [3,3[3]] PropertiesVertex-transitive, edge-transitive The rectified order-6 tetrahedral honeycomb, t1{3,3,6} has octahedral and triangular tiling cells arranged in a hexagonal prism vertex figure. Perspective projection view within Poincaré disk model r{p,3,6} Space H3 Form Paracompact Noncompact Name r{3,3,6} r{4,3,6} r{5,3,6} r{6,3,6} r{7,3,6} ... r{∞,3,6} Image Cells {3,6} r{3,3} r{4,3} r{5,3} r{6,3} r{7,3} r{∞,3} Truncated order-6 tetrahedral honeycomb Truncated order-6 tetrahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolst{3,3,6} or t0,1{3,3,6} Coxeter diagrams ↔ Cellst{3,3} {3,6} Facestriangle {3} hexagon {6} Vertex figure hexagonal pyramid Coxeter groups${\overline {V}}_{3}$, [3,3,6] ${\overline {P}}_{3}$, [3,3[3]] PropertiesVertex-transitive The truncated order-6 tetrahedral honeycomb, t0,1{3,3,6} has truncated tetrahedron and triangular tiling cells arranged in a hexagonal pyramid vertex figure. Bitruncated order-6 tetrahedral honeycomb The bitruncated order-6 tetrahedral honeycomb is equivalent to the bitruncated hexagonal tiling honeycomb. Cantellated order-6 tetrahedral honeycomb Cantellated order-6 tetrahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolsrr{3,3,6} or t0,2{3,3,6} Coxeter diagrams ↔ Cellsr{3,3} r{3,6} {}x{6} Facestriangle {3} square {4} hexagon {6} Vertex figure isosceles triangular prism Coxeter groups${\overline {V}}_{3}$, [3,3,6] ${\overline {P}}_{3}$, [3,3[3]] PropertiesVertex-transitive The cantellated order-6 tetrahedral honeycomb, t0,2{3,3,6} has cuboctahedron, trihexagonal tiling, and hexagonal prism cells arranged in an isosceles triangular prism vertex figure. Cantitruncated order-6 tetrahedral honeycomb Cantitruncated order-6 tetrahedral honeycomb TypeParacompact uniform honeycomb Schläfli symbolstr{3,3,6} or t0,1,2{3,3,6} Coxeter diagrams ↔ Cellstr{3,3} t{3,6} {}x{6} Facessquare {4} hexagon {6} Vertex figure mirrored sphenoid Coxeter groups${\overline {V}}_{3}$, [3,3,6] ${\overline {P}}_{3}$, [3,3[3]] PropertiesVertex-transitive The cantitruncated order-6 tetrahedral honeycomb, t0,1,2{3,3,6} has truncated octahedron, hexagonal tiling, and hexagonal prism cells connected in a mirrored sphenoid vertex figure. Runcinated order-6 tetrahedral honeycomb The runcinated order-6 tetrahedral honeycomb is equivalent to the runcinated hexagonal tiling honeycomb. Runcitruncated order-6 tetrahedral honeycomb The runcitruncated order-6 tetrahedral honeycomb is equivalent to the runcicantellated hexagonal tiling honeycomb. Runcicantellated order-6 tetrahedral honeycomb The runcicantellated order-6 tetrahedral honeycomb is equivalent to the runcitruncated hexagonal tiling honeycomb. Omnitruncated order-6 tetrahedral honeycomb The omnitruncated order-6 tetrahedral honeycomb is equivalent to the omnitruncated hexagonal tiling honeycomb. See also • Convex uniform honeycombs in hyperbolic space • Regular tessellations of hyperbolic 3-space • Paracompact uniform honeycombs References 1. Coxeter The Beauty of Geometry, 1999, Chapter 10, Table III • Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. ISBN 0-486-61480-8. 
(Tables I and II: Regular polytopes and honeycombs, pp. 294–296) • The Beauty of Geometry: Twelve Essays (1999), Dover Publications, LCCN 99-35678, ISBN 0-486-40919-8 (Chapter 10, Regular Honeycombs in Hyperbolic Space) Table III • Jeffrey R. Weeks The Shape of Space, 2nd edition ISBN 0-8247-0709-5 (Chapter 16-17: Geometries on Three-manifolds I,II) • Norman Johnson Uniform Polytopes, Manuscript • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 • N.W. Johnson: Geometries and Transformations, (2018) Chapter 13: Hyperbolic Coxeter groups
Runge's theorem In complex analysis, Runge's theorem (also known as Runge's approximation theorem) is named after the German mathematician Carl Runge, who first proved it in 1885. It states the following: Denoting by C the set of complex numbers, let K be a compact subset of C and let f be a function which is holomorphic on an open set containing K. If A is a set containing at least one complex number from every bounded connected component of C\K, then there exists a sequence $(r_{n})_{n\in \mathbb {N} }$ of rational functions which converges uniformly to f on K and such that all the poles of the functions $(r_{n})_{n\in \mathbb {N} }$ are in A. Note that not every complex number in A needs to be a pole of every rational function of the sequence $(r_{n})_{n\in \mathbb {N} }$. We merely know that for all members of $(r_{n})_{n\in \mathbb {N} }$ that do have poles, those poles lie in A. One aspect that makes this theorem so powerful is that one can choose the set A arbitrarily. In other words, one can choose any complex numbers from the bounded connected components of C\K and the theorem guarantees the existence of a sequence of rational functions with poles only amongst those chosen numbers. For the special case in which C\K is a connected set (in particular when K is simply-connected), the set A in the theorem can clearly be taken to be empty. Since rational functions with no poles are simply polynomials, we get the following corollary: If K is a compact subset of C such that C\K is a connected set, and f is a holomorphic function on an open set containing K, then there exists a sequence of polynomials $(p_{n})$ that approaches f uniformly on K (the assumptions can be relaxed, see Mergelyan's theorem). Runge's theorem generalises as follows: one can take A to be a subset of the Riemann sphere C∪{∞} and require that A intersect also the unbounded connected component of C\K (which now contains ∞). That is, in the formulation given above, the rational functions may turn out to have a pole at infinity, while in the more general formulation the pole can be chosen instead anywhere in the unbounded connected component of C\K. Proof An elementary proof, given in Sarason (1998), proceeds as follows. There is a closed piecewise-linear contour Γ in the open set, containing K in its interior. By Cauchy's integral formula $f(w)={1 \over 2\pi i}\int _{\Gamma }{f(z)\,dz \over z-w}$ for w in K. Riemann approximating sums can be used to approximate the contour integral uniformly over K. Each term in the sum is a scalar multiple of (z − w)−1 for some point z on the contour. This gives a uniform approximation by a rational function with poles on Γ. To modify this to an approximation with poles at specified points in each component of the complement of K, it is enough to check this for terms of the form (z − w)−1. If z0 is the specified point of A in the same component of C\K as z, take a piecewise-linear path from z to z0. If two points are sufficiently close on the path, any rational function with poles only at the first point can be expanded as a Laurent series about the second point. That Laurent series can be truncated to give a rational function with poles only at the second point uniformly close to the original function on K. Proceeding by steps along the path from z to z0, the original function (z − w)−1 can be successively modified to give a rational function with poles only at z0. 
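The first step of this argument, replacing the Cauchy integral by a Riemann sum, already produces a rational function of w whose poles are the sample points on Γ, and the uniform convergence on K is easy to observe numerically. A minimal sketch (the choice of f, of K as the closed unit disk, and of a circular contour are my own illustrative assumptions, not taken from Sarason):

```python
import numpy as np

f = np.exp    # entire, so holomorphic on an open set containing K
# Sample points filling K = closed unit disk.
K = np.array([r * np.exp(1j * t) for r in np.linspace(0, 1, 25)
                                 for t in np.linspace(0, 2 * np.pi, 50)])

def riemann_sum_approx(w, n):
    # Riemann sum for Cauchy's formula over the circle |z| = 2 with n nodes.
    # As a function of w this is rational, with all its poles on the contour.
    z = 2 * np.exp(2j * np.pi * np.arange(n) / n)   # contour sample points
    dz = np.roll(z, -1) - z                         # chord increments
    return np.sum(f(z) * dz / (z - w[..., None]), axis=-1) / (2j * np.pi)

for n in (8, 16, 32, 64):
    print(n, np.max(np.abs(f(K) - riemann_sum_approx(K, n))))
```

The printed maximum error over K shrinks rapidly as n grows, which is exactly the uniform approximation by rational functions with poles on Γ that the proof starts from.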
If z0 is the point at infinity, then by the above procedure the rational function $(z-w)^{-1}$ can first be approximated by a rational function g with poles at a real point R > 0, where R is so large that K lies in the disc |w| < R. The Taylor series expansion of g about 0 can then be truncated to give a polynomial approximation on K. See also • Mergelyan's theorem • Oka–Weil theorem • Behnke–Stein theorem on Stein manifolds References • Conway, John B. (1997), A Course in Functional Analysis (2nd ed.), Springer, ISBN 0-387-97245-5 • Greene, Robert E.; Krantz, Steven G. (2002), Function Theory of One Complex Variable (2nd ed.), American Mathematical Society, ISBN 0-8218-2905-X • Sarason, Donald (1998), Notes on complex function theory, Texts and Readings in Mathematics, vol. 5, Hindustan Book Agency, pp. 108–115, ISBN 81-85931-19-4 External links • "Runge theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Runge's phenomenon In the mathematical field of numerical analysis, Runge's phenomenon (German: [ˈʁʊŋə]) is a problem of oscillation at the edges of an interval that occurs when using polynomial interpolation with polynomials of high degree over a set of equispaced interpolation points. It was discovered by Carl David Tolmé Runge (1901) when exploring the behavior of errors when using polynomial interpolation to approximate certain functions.[1] The discovery was important because it shows that going to higher degrees does not always improve accuracy. The phenomenon is similar to the Gibbs phenomenon in Fourier series approximations. Introduction The Weierstrass approximation theorem states that for every continuous function f(x) defined on an interval [a,b], there exists a set of polynomial functions Pn(x) for n=0, 1, 2, …, each of degree at most n, that approximates f(x) with uniform convergence over [a,b] as n tends to infinity, that is, $\lim _{n\rightarrow \infty }\left(\sup _{a\leq x\leq b}\left|f(x)-P_{n}(x)\right|\right)=0.$ Consider the case where one desires to interpolate through n+1 equispaced points of a function f(x) using the n-degree polynomial Pn(x) that passes through those points. Naturally, one might expect from Weierstrass' theorem that using more points would lead to a more accurate reconstruction of f(x). However, this particular set of polynomial functions Pn(x) is not guaranteed to have the property of uniform convergence; the theorem only states that a set of polynomial functions exists, without providing a general method of finding one. The Pn(x) produced in this manner may in fact diverge away from f(x) as n increases; this typically occurs in an oscillating pattern that magnifies near the ends of the interpolation points. The discovery of this phenomenon is attributed to Runge.[2] Problem Consider the Runge function $f(x)={\frac {1}{1+25x^{2}}}\,$ (a scaled version of the Witch of Agnesi). Runge found that if this function is interpolated at equidistant points xi between −1 and 1 such that: $x_{i}={\frac {2i}{n}}-1,\quad i\in \left\{0,1,\dots ,n\right\}$ with a polynomial Pn(x) of degree ≤ n, the resulting interpolation oscillates toward the end of the interval, i.e. close to −1 and 1. It can even be proven that the interpolation error increases (without bound) when the degree of the polynomial is increased: $\lim _{n\rightarrow \infty }\left(\sup _{-1\leq x\leq 1}|f(x)-P_{n}(x)|\right)=\infty .$ This shows that high-degree polynomial interpolation at equidistant points can be troublesome. Reason Runge's phenomenon is the consequence of two properties of this problem. • The magnitude of the n-th order derivatives of this particular function grows quickly when n increases. • The equidistance between points leads to a Lebesgue constant that increases quickly when n increases. The phenomenon is graphically obvious because both properties combine to increase the magnitude of the oscillations. The error between the generating function and the interpolating polynomial of order n is given by $f(x)-P_{n}(x)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}\prod _{i=0}^{n}(x-x_{i})$ for some $\xi $ in (−1, 1). Thus, $\max _{-1\leq x\leq 1}|f(x)-P_{n}(x)|\leq \max _{-1\leq x\leq 1}{\frac {\left|f^{(n+1)}(x)\right|}{(n+1)!}}\max _{-1\leq x\leq 1}\prod _{i=0}^{n}|x-x_{i}|$. Denote by $w_{n}(x)$ the nodal function $w_{n}(x)=(x-x_{0})(x-x_{1})\cdots (x-x_{n})$ and let $W_{n}$ be the maximum of the magnitude of the $w_{n}$ function: $W_{n}=\max _{-1\leq x\leq 1}|w_{n}(x)|$. 
It is elementary to prove that with equidistant nodes $W_{n}\leq n!h^{n+1}$ where $h=2/n$ is the step size. Moreover, assume that the (n+1)-th derivative of $f$ is bounded, i.e. $\max _{-1\leq x\leq 1}|f^{(n+1)}(x)|\leq M_{n+1}$. Therefore, $\max _{-1\leq x\leq 1}|f(x)-P_{n}(x)|\leq M_{n+1}{\frac {h^{n+1}}{(n+1)}}$. But the magnitude of the (n+1)-th derivative of Runge's function grows rapidly as n increases: one has $M_{n+1}\leq (n+1)!\,5^{n+1}$, with equality attained at x = 0 whenever n+1 is even. The consequence is that the resulting upper bound, $(10/n)^{n+1}n!$, tends to infinity when n tends to infinity. Although often used to explain the Runge phenomenon, the fact that the upper bound of the error goes to infinity does not necessarily imply, of course, that the error itself also diverges with n. Mitigations Change of interpolation points The oscillation can be minimized by using nodes that are distributed more densely towards the edges of the interval, specifically, with asymptotic density (on the interval [−1, 1]) given by the formula[3] $1/{\sqrt {1-x^{2}}}$. A standard example of such a set of nodes is Chebyshev nodes, for which the maximum error in approximating the Runge function is guaranteed to diminish with increasing polynomial order; a numerical sketch of this contrast with equidistant nodes is given below. S-Runge algorithm without resampling When equidistant samples must be used because resampling on well-behaved sets of nodes is not feasible, the S-Runge algorithm can be considered.[4] In this approach, the original set of nodes is mapped on the set of Chebyshev nodes, providing a stable polynomial reconstruction. The peculiarity of this method is that there is no need to resample at the mapped nodes, which are also called fake nodes. A Python implementation of this procedure can be found here. Use of piecewise polynomials The problem can be avoided by using spline curves which are piecewise polynomials. When trying to decrease the interpolation error one can increase the number of polynomial pieces which are used to construct the spline instead of increasing the degree of the polynomials used. Constrained minimization One can also fit a polynomial of higher degree (for instance, with $n$ points use a polynomial of order $N=n^{2}$ instead of $n+1$), and fit an interpolating polynomial whose first (or second) derivative has minimal $L^{2}$ norm. A similar approach is to minimize a constrained version of the $L^{p}$ distance between the polynomial's $m$-th derivative and the mean value of its $m$-th derivative. Explicitly, to minimize $V_{p}=\int _{a}^{b}\left|{\frac {\mathrm {d} ^{m}P_{N}(x)}{\mathrm {d} x^{m}}}-{\frac {1}{b-a}}\int _{a}^{b}{\frac {\mathrm {d} ^{m}P_{N}(z)}{\mathrm {d} z^{m}}}\mathrm {d} z\right|^{p}\mathrm {d} x-\sum _{i=1}^{n}\lambda _{i}\,\left(P_{N}(x_{i})-f(x_{i})\right),$ where $N\geq n-1$ and $m<N$, with respect to the polynomial coefficients and the Lagrange multipliers, $\lambda _{i}$. When $N=n-1$, the constraint equations generated by the Lagrange multipliers reduce $P_{N}(x)$ to the minimum polynomial that passes through all $n$ points. At the opposite end, $\lim _{N\to \infty }P_{N}(x)$ will approach a form very similar to a piecewise polynomial approximation. When $m=1$, in particular, $\lim _{N\to \infty }P_{N}(x)$ approaches the linear piecewise polynomials, i.e. connecting the interpolation points with straight lines. The role played by $p$ in the process of minimizing $V_{p}$ is to control the importance of the size of the fluctuations away from the mean value. The larger $p$ is, the more large fluctuations are penalized compared to small ones.
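As a numerical sketch of the contrast referred to under Change of interpolation points above (assuming NumPy; the helper below evaluates the interpolant in Lagrange form, which is an implementation choice of this example rather than part of the article):

    import numpy as np

    def lagrange_eval(xs, ys, t):
        # Evaluate the interpolating polynomial through (xs, ys) at the points t,
        # using the Lagrange form (adequate for the small degrees used here).
        t = np.asarray(t, dtype=float)
        result = np.zeros_like(t)
        for i, xi in enumerate(xs):
            li = np.ones_like(t)
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (t - xj) / (xi - xj)
            result += ys[i] * li
        return result

    f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)     # Runge's function
    t = np.linspace(-1.0, 1.0, 2001)              # dense grid on which the error is measured

    for n in (5, 10, 15, 20):
        equi = np.linspace(-1.0, 1.0, n + 1)                               # equidistant nodes
        cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev nodes
        err_equi = np.max(np.abs(f(t) - lagrange_eval(equi, f(equi), t)))
        err_cheb = np.max(np.abs(f(t) - lagrange_eval(cheb, f(cheb), t)))
        print(n, err_equi, err_cheb)

The maximum error over [−1, 1] with equidistant nodes grows as n increases, while with Chebyshev nodes it decreases, matching the behaviour described above.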
The greatest advantage of the Euclidean norm, $p=2$, is that it allows for analytic solutions and it guarantees that $V_{p}$ will only have a single minimum. When $p\neq 2$ there can be multiple minima in $V_{p}$, making it difficult to ensure that a particular minimum found will be the global minimum instead of a local one. Least squares fitting Another method is fitting a polynomial of lower degree using the method of least squares. Generally, when using $m$ equidistant points, if $N<2{\sqrt {m}}$ then least squares approximation $P_{N}(x)$ is well-conditioned.[5] Bernstein polynomial Using Bernstein polynomials, one can uniformly approximate every continuous function in a closed interval, although this method is rather computationally expensive. External fake constraints interpolation This method proposes to optimally stack a dense distribution of constraints of the type P″(x) = 0 on nodes positioned externally near the endpoints of each side of the interpolation interval, where P"(x) is the second derivative of the interpolation polynomial. Those constraints are called External Fake Constraints as they do not belong to the interpolation interval and they do not match the behaviour of the Runge function. The method has demonstrated that it has a better interpolation performance than Piecewise polynomials (splines) to mitigate the Runge phenomenon.[6] Related statements from the approximation theory For every predefined table of interpolation nodes there is a continuous function for which the sequence of interpolation polynomials on those nodes diverges.[7] For every continuous function there is a table of nodes on which the interpolation process converges. Chebyshev interpolation (i.e., on Chebyshev nodes) converges uniformly for every absolutely continuous function. See also • Chebyshev nodes • Compare with the Gibbs phenomenon for sinusoidal basis functions • Occam's razor argues for simpler models • Schwarz lantern, another example of failure of convergence • Stone–Weierstrass theorem • Taylor series • Wilkinson's polynomial References 1. Runge, Carl (1901), "Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten", Zeitschrift für Mathematik und Physik, 46: 224–243. available at www.archive.org 2. Epperson, James (1987). "On the Runge example". Amer. Math. Monthly. 94: 329–341. doi:10.2307/2323093. 3. Berrut, Jean-Paul; Trefethen, Lloyd N. (2004), "Barycentric Lagrange interpolation", SIAM Review, 46 (3): 501–517, CiteSeerX 10.1.1.15.5097, doi:10.1137/S0036144502417715, ISSN 1095-7200 4. De Marchi, Stefano; Marchetti, Francesco; Perracchione, Emma; Poggiali, Davide (2020), "Polynomial interpolation via mapped bases without resampling", J. Comput. Appl. Math., 364, doi:10.1016/j.cam.2019.112347, ISSN 0377-0427 5. Dahlquist, Germund; Björk, Åke (1974), "4.3.4. Equidistant Interpolation and the Runge Phenomenon", Numerical Methods, pp. 101–103, ISBN 0-13-627315-7 6. Belanger, Nicolas (2017), External Fake Constraints Interpolation: the end of Runge phenomenon with high degree polynomials relying on equispaced nodes – Application to aerial robotics motion planning (PDF), Proceedings of the 5th Institute of Mathematics and its Applications Conference on Mathematics in Defence 7. Cheney, Ward; Light, Will (2000), A Course in Approximation Theory, Brooks/Cole, p. 19, ISBN 0-534-36224-9
Runge–Kutta method (SDE) In mathematics of stochastic systems, the Runge–Kutta method is a technique for the approximate numerical solution of a stochastic differential equation. It is a generalisation of the Runge–Kutta method for ordinary differential equations to stochastic differential equations (SDEs). Importantly, the method does not involve knowing derivatives of the coefficient functions in the SDEs. Most basic scheme Consider the Itō diffusion $X$ satisfying the following Itō stochastic differential equation $dX_{t}=a(X_{t})\,dt+b(X_{t})\,dW_{t},$ with initial condition $X_{0}=x_{0}$, where $W_{t}$ stands for the Wiener process, and suppose that we wish to solve this SDE on some interval of time $[0,T]$. Then the basic Runge–Kutta approximation to the true solution $X$ is the Markov chain $Y$ defined as follows:[1] • partition the interval $[0,T]$ into $N$ subintervals of width $\delta =T/N>0$: $0=\tau _{0}<\tau _{1}<\dots <\tau _{N}=T;$ • set $Y_{0}:=x_{0}$; • recursively compute $Y_{n}$ for $1\leq n\leq N$ by $Y_{n+1}:=Y_{n}+a(Y_{n})\delta +b(Y_{n})\Delta W_{n}+{\frac {1}{2}}\left(b({\hat {\Upsilon }}_{n})-b(Y_{n})\right)\left((\Delta W_{n})^{2}-\delta \right)\delta ^{-1/2},$ where $\Delta W_{n}=W_{\tau _{n+1}}-W_{\tau _{n}}$ and ${\hat {\Upsilon }}_{n}=Y_{n}+a(Y_{n})\delta +b(Y_{n})\delta ^{1/2}.$ The random variables $\Delta W_{n}$ are independent and identically distributed normal random variables with expected value zero and variance $\delta $. This scheme has strong order 1, meaning that the approximation error of the actual solution at a fixed time scales with the time step $\delta $. It has also weak order 1, meaning that the error on the statistics of the solution scales with the time step $\delta $. See the references for complete and exact statements. The functions $a$ and $b$ can be time-varying without any complication. The method can be generalized to the case of several coupled equations; the principle is the same but the equations become longer. Variation of the Improved Euler is flexible A newer Runge—Kutta scheme also of strong order 1 straightforwardly reduces to the improved Euler scheme for deterministic ODEs.[2] Consider the vector stochastic process ${\vec {X}}(t)\in \mathbb {R} ^{n}$ that satisfies the general Ito SDE $d{\vec {X}}={\vec {a}}(t,{\vec {X}})\,dt+{\vec {b}}(t,{\vec {X}})\,dW,$ where drift ${\vec {a}}$ and volatility ${\vec {b}}$ are sufficiently smooth functions of their arguments. Given time step $h$, and given the value ${\vec {X}}(t_{k})={\vec {X}}_{k}$, estimate ${\vec {X}}(t_{k+1})$ by ${\vec {X}}_{k+1}$ for time $t_{k+1}=t_{k}+h$ via ${\begin{array}{l}{\vec {K}}_{1}=h{\vec {a}}(t_{k},{\vec {X}}_{k})+(\Delta W_{k}-S_{k}{\sqrt {h}}){\vec {b}}(t_{k},{\vec {X}}_{k}),\\{\vec {K}}_{2}=h{\vec {a}}(t_{k+1},{\vec {X}}_{k}+{\vec {K}}_{1})+(\Delta W_{k}+S_{k}{\sqrt {h}}){\vec {b}}(t_{k+1},{\vec {X}}_{k}+{\vec {K}}_{1}),\\{\vec {X}}_{k+1}={\vec {X}}_{k}+{\frac {1}{2}}({\vec {K}}_{1}+{\vec {K}}_{2}),\end{array}}$ • where $\Delta W_{k}={\sqrt {h}}Z_{k}$ for normal random $Z_{k}\sim N(0,1)$; • and where $S_{k}=\pm 1$, each alternative chosen with probability $1/2$. The above describes only one time step. Repeat this time step $(t_{m}-t_{0})/h$ times in order to integrate the SDE from time $t=t_{0}$ to $t=t_{m}$. The scheme integrates Stratonovich SDEs to $O(h)$ provided one sets $S_{k}=0$ throughout (instead of choosing $\pm 1$). Higher order Runge-Kutta schemes Higher-order schemes also exist, but become increasingly complex. 
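A direct transcription of the most basic scheme above for a scalar equation (assuming NumPy; geometric Brownian motion is used here only as an assumed test case, not one taken from the article):

    import numpy as np

    def rk_sde(a, b, x0, T, N, rng):
        # Basic strong-order-1 stochastic Runge-Kutta scheme described above, for the
        # scalar Ito SDE dX = a(X) dt + b(X) dW on [0, T], using N equal steps.
        delta = T / N
        Y = np.empty(N + 1)
        Y[0] = x0
        for n in range(N):
            dW = rng.normal(0.0, np.sqrt(delta))                        # Delta W_n ~ N(0, delta)
            upsilon = Y[n] + a(Y[n]) * delta + b(Y[n]) * np.sqrt(delta)
            Y[n + 1] = (Y[n] + a(Y[n]) * delta + b(Y[n]) * dW
                        + 0.5 * (b(upsilon) - b(Y[n])) * (dW ** 2 - delta) / np.sqrt(delta))
        return Y

    # Assumed example: geometric Brownian motion dX = mu X dt + sigma X dW.
    mu, sigma = 1.5, 0.4
    rng = np.random.default_rng(0)
    path = rk_sde(lambda x: mu * x, lambda x: sigma * x, x0=1.0, T=1.0, N=1000, rng=rng)
    print(path[-1])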
Rößler developed many schemes for Ito SDEs,[3][4] whereas Komori developed schemes for Stratonovich SDEs.[5][6][7] Rackauckas extended these schemes to allow for adaptive-time stepping via Rejection Sampling with Memory (RSwM), resulting in orders of magnitude efficiency increases in practical biological models,[8] along with coefficient optimization for improved stability.[9] References 1. P. E. Kloeden and E. Platen. Numerical solution of stochastic differential equations, volume 23 of Applications of Mathematics. Springer--Verlag, 1992. 2. Roberts, A. J. (Oct 2012). "Modify the Improved Euler scheme to integrate stochastic differential equations". arXiv:1210.0933. {{cite journal}}: Cite journal requires |journal= (help) 3. Rößler, A. (2009). "Second Order Runge–Kutta Methods for Itô Stochastic Differential Equations". SIAM Journal on Numerical Analysis. 47 (3): 1713–1738. doi:10.1137/060673308. 4. Rößler, A. (2010). "Runge–Kutta Methods for the Strong Approximation of Solutions of Stochastic Differential Equations". SIAM Journal on Numerical Analysis. 48 (3): 922–952. doi:10.1137/09076636X. 5. Komori, Y. (2007). "Multi-colored rooted tree analysis of the weak order conditions of a stochastic Runge–Kutta family". Applied Numerical Mathematics. 57 (2): 147–165. doi:10.1016/j.apnum.2006.02.002. S2CID 49220399. 6. Komori, Y. (2007). "Weak order stochastic Runge–Kutta methods for commutative stochastic differential equations". Journal of Computational and Applied Mathematics. 203: 57–79. doi:10.1016/j.cam.2006.03.010. 7. Komori, Y. (2007). "Weak second-order stochastic Runge–Kutta methods for non-commutative stochastic differential equations". Journal of Computational and Applied Mathematics. 206: 158–173. doi:10.1016/j.cam.2006.06.006. 8. Rackauckas, Christopher; Nie, Qing (2017). "Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory". Discrete and Continuous Dynamical Systems - Series B. 22 (7): 2731–2761. doi:10.3934/dcdsb.2017133. PMC 5844583. PMID 29527134. 9. Rackauckas, Christopher; Nie, Qing (2018). "Stability-optimized high order methods and stiffness detection for pathwise stiff stochastic differential equations". arXiv:1804.04344 [math.NA].
Runge–Kutta–Fehlberg method In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the large class of Runge–Kutta methods. The novelty of Fehlberg's method is that it is an embedded method from the Runge–Kutta family, meaning that identical function evaluations are used in conjunction with each other to create methods of varying order and similar error constants. The method presented in Fehlberg's 1969 paper has been dubbed the RKF45 method, and is a method of order O(h^4) with an error estimator of order O(h^5).[1] By performing one extra calculation, the error in the solution can be estimated and controlled by using the higher-order embedded method that allows for an adaptive stepsize to be determined automatically. Butcher tableau for Fehlberg's 4(5) method Any Runge–Kutta method is uniquely identified by its Butcher tableau. The embedded pair proposed by Fehlberg[2] is

0      |
1/4    | 1/4
3/8    | 3/32        9/32
12/13  | 1932/2197   -7200/2197   7296/2197
1      | 439/216     -8           3680/513     -845/4104
1/2    | -8/27       2            -3544/2565   1859/4104     -11/40
-------+---------------------------------------------------------------------
       | 16/135      0            6656/12825   28561/56430   -9/50     2/55
       | 25/216      0            1408/2565    2197/4104     -1/5      0

The first row of coefficients at the bottom of the table gives the fifth-order accurate method, and the second row gives the fourth-order accurate method. Implementing an RK4(5) Algorithm The coefficients found by Fehlberg for Formula 1 (derivation with his parameter α2=1/3) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages: COEFFICIENTS FOR RK4(5), FORMULA 1, Table II in Fehlberg[2]

K   A(K)   B(K,1)    B(K,2)     B(K,3)   B(K,4)   B(K,5)   C(K)    CH(K)    CT(K)
1   0                                                      1/9     47/450   1/150
2   2/9    2/9                                             0       0        0
3   1/3    1/12      1/4                                   9/20    12/25    -3/100
4   3/4    69/128    -243/128   135/64                     16/45   32/225   16/75
5   1      -17/12    27/4       -27/5    16/15             1/12    1/30     1/20
6   5/6    65/432    -5/16      13/16    4/27     5/144    0       6/25     -6/25

Fehlberg[2] outlines a procedure for solving a system of n differential equations of the form: ${\operatorname {d} \!y_{i} \over \operatorname {d} \!x}=f_{i}(x,y_{1},y_{2},...,y_{n}),i=1,2,...,n$ to iteratively solve for $y_{i}(x+h),i=1,2,...,n$ where h is an adaptive stepsize to be determined algorithmically: The solution is the weighted average of six increments, where each increment is the product of the size of the interval, $ h$, and an estimated slope specified by function f on the right-hand side of the differential equation. $k_{1}=h\cdot f(x+A(1)\cdot h,y)$ $k_{2}=h\cdot f(x+A(2)\cdot h,y+B(2,1)\cdot k_{1})$ $k_{3}=h\cdot f(x+A(3)\cdot h,y+B(3,1)\cdot k_{1}+B(3,2)\cdot k_{2})$ $k_{4}=h\cdot f(x+A(4)\cdot h,y+B(4,1)\cdot k_{1}+B(4,2)\cdot k_{2}+B(4,3)\cdot k_{3})$ $k_{5}=h\cdot f(x+A(5)\cdot h,y+B(5,1)\cdot k_{1}+B(5,2)\cdot k_{2}+B(5,3)\cdot k_{3}+B(5,4)\cdot k_{4})$ $k_{6}=h\cdot f(x+A(6)\cdot h,y+B(6,1)\cdot k_{1}+B(6,2)\cdot k_{2}+B(6,3)\cdot k_{3}+B(6,4)\cdot k_{4}+B(6,5)\cdot k_{5})$ Then the weighted average is: $y(x+h)=y(x)+CH(1)\cdot k_{1}+CH(2)\cdot k_{2}+CH(3)\cdot k_{3}+CH(4)\cdot k_{4}+CH(5)\cdot k_{5}+CH(6)\cdot k_{6}$ The estimate of the truncation error is: $TE=|CT(1)\cdot k_{1}+CT(2)\cdot k_{2}+CT(3)\cdot k_{3}+CT(4)\cdot k_{4}+CT(5)\cdot k_{5}+CT(6)\cdot k_{6}|$ At the completion of the step, a new stepsize is calculated:[3] $h_{\mathrm {new} }=0.9\cdot h\cdot \left({\frac {\epsilon }{TE}}\right)^{1/5}$ If $ TE>\epsilon $, then replace $ h$ with $ h_{\mathrm {new} }$ and repeat the step. If $ TE\leqslant \epsilon $, then the step is completed. Replace $ h$ with $ h_{\mathrm {new} }$ for the next step.
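One adaptive step of this procedure might be organized as follows (a sketch, not Fehlberg's own code; NumPy is assumed, the arrays A, B, CH, CT hold the Formula 1 or Formula 2 coefficients tabulated in this article, and all names are illustrative):

    import numpy as np

    def rkf45_step(f, x, y, h, A, B, CH, CT, eps):
        # One adaptive RKF4(5) step. A, CH, CT are length-6 arrays and B is a 6x5
        # array of the tabulated coefficients (the 1-based indices K, L in the text
        # correspond to 0-based indices here). Returns (accepted, x, y, h_new).
        k = np.zeros((6,) + np.shape(y))
        for i in range(6):
            yi = y + sum(B[i, j] * k[j] for j in range(i))
            k[i] = h * f(x + A[i] * h, yi)
        y_new = y + sum(CH[i] * k[i] for i in range(6))            # weighted average of the increments
        TE = np.max(np.abs(sum(CT[i] * k[i] for i in range(6))))   # truncation error estimate
        h_new = 0.9 * h * (eps / max(TE, 1e-16)) ** 0.2            # new step size (guarded against TE = 0)
        if TE > eps:
            return False, x, y, h_new      # reject: repeat the step from (x, y) with h_new
        return True, x + h, y_new, h_new   # accept: advance, and use h_new for the next step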
The coefficients found by Fehlberg for Formula 2 (derivation with his parameter α2=3/8) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages: COEFFICIENTS FOR RK4(5), FORMULA 2, Table III in Fehlberg[2]

K   A(K)    B(K,1)      B(K,2)       B(K,3)       B(K,4)      B(K,5)   C(K)        CH(K)         CT(K)
1   0                                                                  25/216      16/135        -1/360
2   1/4     1/4                                                        0           0             0
3   3/8     3/32        9/32                                           1408/2565   6656/12825    128/4275
4   12/13   1932/2197   -7200/2197   7296/2197                         2197/4104   28561/56430   2197/75240
5   1       439/216     -8           3680/513     -845/4104            -1/5        -9/50         -1/50
6   1/2     -8/27       2            -3544/2565   1859/4104   -11/40   0           2/55          -2/55

In another table in Fehlberg,[2] coefficients for an RKF4(5) derived by D. Sarafyan are given: COEFFICIENTS FOR Sarafyan's RK4(5), Table IV in Fehlberg[2]

K   A(K)   B(K,1)    B(K,2)   B(K,3)    B(K,4)   B(K,5)     C(K)   CH(K)     CT(K)
1   0                                                       1/6    1/24      1/8
2   1/2    1/2                                              0      0         0
3   1/2    1/4       1/4                                    2/3    0         2/3
4   1      0         -1       2                             1/6    5/48      1/16
5   2/3    7/27      10/27    0         1/27                0      27/56     -27/56
6   1/5    28/625    -1/5     546/625   54/625   -378/625   0      125/336   -125/336

See also • List of Runge–Kutta methods • Numerical methods for ordinary differential equations • Runge–Kutta methods Notes 1. According to Hairer et al. (1993, §II.4), the method was originally proposed in Fehlberg (1969); Fehlberg (1970) is an extract of the latter publication. 2. Hairer, Nørsett & Wanner (1993, p. 177) refer to Fehlberg (1969) 3. Gurevich, Svetlana (2017). "Appendix A Runge-Kutta Methods" (PDF). Münster Institute for Theoretical Physics. pp. 8–11. Retrieved 4 March 2022. References • Fehlberg, Erwin (1968) Classical fifth-, sixth-, seventh-, and eighth-order Runge-Kutta formulas with stepsize control. NASA Technical Report 287. https://ntrs.nasa.gov/api/citations/19680027281/downloads/19680027281.pdf • Fehlberg, Erwin (1969) Low-order classical Runge-Kutta formulas with stepsize control and their application to some heat transfer problems. Vol. 315. National Aeronautics and Space Administration. • Fehlberg, Erwin (1969). "Klassische Runge-Kutta-Nyström-Formeln fünfter und siebenter Ordnung mit Schrittweiten-Kontrolle". Computing. 4: 93–106. doi:10.1007/BF02234758. S2CID 38715401. • Fehlberg, Erwin (1970) Some experimental results concerning the error propagation in Runge-Kutta type integration formulas. NASA Technical Report R-352. https://ntrs.nasa.gov/api/citations/19700031412/downloads/19700031412.pdf • Fehlberg, Erwin (1970). "Klassische Runge-Kutta-Formeln vierter und niedrigerer Ordnung mit Schrittweiten-Kontrolle und ihre Anwendung auf Wärmeleitungsprobleme," Computing (Arch. Elektron. Rechnen), vol. 6, pp. 61–71. doi:10.1007/BF02241732 • Hairer, Ernst; Nørsett, Syvert; Wanner, Gerhard (1993). Solving Ordinary Differential Equations I: Nonstiff Problems (Second ed.). Berlin: Springer-Verlag. ISBN 3-540-56670-8. • Sarafyan, Diran (1966) Error Estimation for Runge-Kutta Methods Through Pseudo-Iterative Formulas. Technical Report No. 14, Louisiana State University in New Orleans, May 1966. Further reading • Fehlberg, E (1958). "Eine Methode zur Fehlerverkleinerung beim Runge-Kutta-Verfahren". Zeitschrift für Angewandte Mathematik und Mechanik. 38 (11/12): 421–426. Bibcode:1958ZaMM...38..421F. doi:10.1002/zamm.19580381102. • Fehlberg, E (1964). "New high-order Runge-Kutta formulas with step size control for systems of first and second-order differential equations". Zeitschrift für Angewandte Mathematik und Mechanik. 44 (S1): T17–T29. doi:10.1002/zamm.19640441310. • Fehlberg, E (1972).
"Klassische Runge-Kutta-Nystrom-Formeln mit Schrittweiten-Kontrolle fur Differentialgleichungen x.. = f(t,x)". Computing. 10: 305–315. doi:10.1007/BF02242243. S2CID 37369149. • Fehlberg, E (1975). "Klassische Runge-Kutta-Nystrom-Formeln mit Schrittweiten-Kontrolle fur Differentialgleichungen x.. = f(t,x,x.)". Computing. 14: 371–387. doi:10.1007/BF02253548. S2CID 30533090. • Simos, T. E. (1993). "A Runge-Kutta Fehlberg method with phase-lag of order infinity for initial-value problems with oscillating solution". Computers & Mathematics with Applications. 25 (6): 95–101. doi:10.1016/0898-1221(93)90303-D.. • Handapangoda, C. C.; Premaratne, M.; Yeo, L.; Friend, J. (2008). "Laguerre Runge-Kutta-Fehlberg Method for Simulating Laser Pulse Propagation in Biological Tissue". IEEE Journal of Selected Topics in Quantum Electronics. 1 (14): 105–112. Bibcode:2008IJSTQ..14..105H. doi:10.1109/JSTQE.2007.913971. S2CID 13069335.. • Simos, T. E. (1995). "Modified Runge–Kutta–Fehlberg methods for periodic initial-value problems". Japan Journal of Industrial and Applied Mathematics. 12 (1): 109. doi:10.1007/BF03167384. S2CID 120146558.. • Sarafyan, D. (1994). "Approximate Solution of Ordinary Differential Equations and Their Systems Through Discrete and Continuous Embedded Runge-Kutta Formulae and Upgrading Their Order". Computers & Mathematics with Applications. 28 (10–12): 353–384. doi:10.1016/0898-1221(94)00201-0. • Paul, S.; Mondal, S. P.; Bhattacharya, P. (2016). "Numerical solution of Lotka Volterra prey predator model by using Runge–Kutta–Fehlberg method and Laplace Adomian decomposition method". Alexandria Engineering Journal. 55 (1): 613–617. doi:10.1016/j.aej.2015.12.026.
Running angle In mathematics, the running angle is the angle of consecutive vectors $(X_{t},Y_{t})$ with respect to the base line, i.e. $\phi (t)=\arctan \left({\frac {\Delta Y_{t}}{\Delta X_{t}}}\right).$ Usually, it is more informative to compute it using a four-quadrant version of the arctan function in a mathematical software library. See also • Differential geometry • Polar distribution
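A minimal sketch of this computation (assuming NumPy, whose arctan2 is such a four-quadrant arctangent; the sample coordinates are invented for illustration):

    import numpy as np

    # Consecutive points (X_t, Y_t) of a track; the values are invented for illustration.
    X = np.array([0.0, 1.0, 2.0, 2.0, 1.0])
    Y = np.array([0.0, 0.5, 1.5, 2.5, 3.0])

    # Running angle of each step with respect to the base line (the x-axis), using the
    # four-quadrant arctangent so that all directions are distinguished correctly.
    phi = np.arctan2(np.diff(Y), np.diff(X))
    print(np.degrees(phi))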
Tree decomposition In graph theory, a tree decomposition is a mapping of a graph into a tree that can be used to define the treewidth of the graph and speed up solving certain computational problems on the graph. This article is about tree structure of graphs. For decomposition of graphs into trees, see Graph theory § Decomposition problems. For decomposition of trees in nature, see Nurse log. Tree decompositions are also called junction trees, clique trees, or join trees. They play an important role in problems like probabilistic inference, constraint satisfaction, query optimization, and matrix decomposition. The concept of tree decomposition was originally introduced by Rudolf Halin (1976). Later it was rediscovered by Neil Robertson and Paul Seymour (1984) and has since been studied by many other authors.[1] Definition Intuitively, a tree decomposition represents the vertices of a given graph G as subtrees of a tree, in such a way that vertices in G are adjacent only when the corresponding subtrees intersect. Thus, G forms a subgraph of the intersection graph of the subtrees. The full intersection graph is a chordal graph. Each subtree associates a graph vertex with a set of tree nodes. To define this formally, we represent each tree node as the set of vertices associated with it. Thus, given a graph G = (V, E), a tree decomposition is a pair (X, T), where X = {X1, …, Xn} is a family of subsets (sometimes called bags) of V, and T is a tree whose nodes are the subsets Xi, satisfying the following properties:[2] 1. The union of all sets Xi equals V. That is, each graph vertex is associated with at least one tree node. 2. For every edge (v, w) in the graph, there is a subset Xi that contains both v and w. That is, vertices are adjacent in the graph only when the corresponding subtrees have a node in common. 3. If Xi and Xj both contain a vertex v, then all nodes Xk of the tree in the (unique) path between Xi and Xj contain v as well. That is, the nodes associated with vertex v form a connected subset of T. This is also known as coherence, or the running intersection property. It can be stated equivalently that if Xi, Xj and Xk are nodes, and Xk is on the path from Xi to Xj, then $X_{i}\cap X_{j}\subseteq X_{k}$. The tree decomposition of a graph is far from unique; for example, a trivial tree decomposition contains all vertices of the graph in its single root node. A tree decomposition in which the underlying tree is a path graph is called a path decomposition, and the width parameter derived from these special types of tree decompositions is known as pathwidth. A tree decomposition (X, T = (I, F)) of treewidth k is smooth, if for all $i\in I:|X_{i}|=k+1$, and for all $(i,j)\in F:|X_{i}\cap X_{j}|=k$.[3] Treewidth Main article: Treewidth The width of a tree decomposition is the size of its largest set Xi minus one. The treewidth tw(G) of a graph G is the minimum width among all possible tree decompositions of G. In this definition, the size of the largest set is diminished by one in order to make the treewidth of a tree equal to one. Treewidth may also be defined from other structures than tree decompositions, including chordal graphs, brambles, and havens. It is NP-complete to determine whether a given graph G has treewidth at most a given variable k.[4] However, when k is any fixed constant, the graphs with treewidth k can be recognized, and a width k tree decomposition constructed for them, in linear time.[3] The time dependence of this algorithm on k is an exponential function of k3. 
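The three defining properties above can be checked mechanically for a candidate decomposition. A small sketch in plain Python (the example graph, bags, and function name are invented for illustration; the coherence test uses the path-connectivity formulation given above):

    def is_tree_decomposition(vertices, edges, bags, tree_edges):
        # bags is a list of vertex sets; tree_edges joins bag indices and is assumed
        # to describe a tree. The function checks the three properties in turn.
        # 1. Every graph vertex is associated with at least one tree node.
        if set().union(*bags) != set(vertices):
            return False
        # 2. Every graph edge is contained in some bag.
        if not all(any({u, v} <= bag for bag in bags) for u, v in edges):
            return False
        # 3. Coherence: for each vertex, the bags containing it form a connected subtree.
        adj = {i: set() for i in range(len(bags))}
        for i, j in tree_edges:
            adj[i].add(j)
            adj[j].add(i)
        for v in vertices:
            holding = {i for i, bag in enumerate(bags) if v in bag}
            seen, stack = {min(holding)}, [min(holding)]
            while stack:                      # search the tree restricted to bags holding v
                i = stack.pop()
                for j in adj[i] & holding:
                    if j not in seen:
                        seen.add(j)
                        stack.append(j)
            if seen != holding:
                return False
        return True

    # Example: the path a-b-c-d with bags {a,b}, {b,c}, {c,d} arranged in a path-shaped tree.
    print(is_tree_decomposition(
        vertices="abcd",
        edges=[("a", "b"), ("b", "c"), ("c", "d")],
        bags=[{"a", "b"}, {"b", "c"}, {"c", "d"}],
        tree_edges=[(0, 1), (1, 2)],
    ))   # True; the width of this decomposition is 2 - 1 = 1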
Dynamic programming At the beginning of the 1970s, it was observed that a large class of combinatorial optimization problems defined on graphs could be efficiently solved by non-serial dynamic programming as long as the graph had a bounded dimension,[5] a parameter related to treewidth. Later, several authors independently observed, at the end of the 1980s,[6] that many algorithmic problems that are NP-complete for arbitrary graphs may be solved efficiently by dynamic programming for graphs of bounded treewidth, using the tree-decompositions of these graphs. As an example, consider the problem of finding the maximum independent set in a graph of treewidth k. To solve this problem, first choose one of the nodes of the tree decomposition to be the root, arbitrarily. For a node Xi of the tree decomposition, let Di be the union of the sets Xj descending from Xi. For an independent set $S\subset X_{i},$ let A(S,i) denote the size of the largest independent subset I of Di such that $I\cap X_{i}=S.$ Similarly, for an adjacent pair of nodes Xi and Xj, with Xi farther from the root of the tree than Xj, and an independent set $S\subset X_{i}\cap X_{j},$ let B(S,i,j) denote the size of the largest independent subset I of Di such that $I\cap X_{i}\cap X_{j}=S.$ We may calculate these A and B values by a bottom-up traversal of the tree: $A(S,i)=|S|+\sum _{j}\left(B(S\cap X_{j},j,i)-|S\cap X_{j}|\right)$ $B(S,i,j)=\max _{S'\subset X_{i} \atop S=S'\cap X_{j}}A(S',i)$ where the sum in the calculation of $A(S,i)$ is over the children of node Xi. At each node or edge, there are at most 2k sets S for which we need to calculate these values, so if k is a constant then the whole calculation takes constant time per edge or node. The size of the maximum independent set is the largest value stored at the root node, and the maximum independent set itself can be found (as is standard in dynamic programming algorithms) by backtracking through these stored values starting from this largest value. Thus, in graphs of bounded treewidth, the maximum independent set problem may be solved in linear time. Similar algorithms apply to many other graph problems. This dynamic programming approach is used in machine learning via the junction tree algorithm for belief propagation in graphs of bounded treewidth. It also plays a key role in algorithms for computing the treewidth and constructing tree decompositions: typically, such algorithms have a first step that approximates the treewidth, constructing a tree decomposition with this approximate width, and then a second step that performs dynamic programming in the approximate tree decomposition to compute the exact value of the treewidth.[3] See also • Brambles and havens – Two kinds of structures that can be used as an alternative to tree decomposition in defining the treewidth of a graph. • Branch-decomposition – A closely related structure whose width is within a constant factor of treewidth. • Decomposition Method – Tree Decomposition is used in Decomposition Method for solving constraint satisfaction problem. Notes 1. Diestel (2005) pp.354–355 2. Diestel (2005) section 12.3 3. Bodlaender (1996). 4. Arnborg, Corneil & Proskurowski (1987). 5. Bertelé & Brioschi (1972). 6. Arnborg & Proskurowski (1989); Bern, Lawler & Wong (1987); Bodlaender (1988). References • Arnborg, S.; Corneil, D.; Proskurowski, A. (1987), "Complexity of finding embeddings in a k-tree", SIAM Journal on Matrix Analysis and Applications, 8 (2): 277–284, doi:10.1137/0608024. • Arnborg, S.; Proskurowski, A. 
(1989), "Linear time algorithms for NP-hard problems restricted to partial k-trees", Discrete Applied Mathematics, 23 (1): 11–24, doi:10.1016/0166-218X(89)90031-0. • Bern, M. W.; Lawler, E. L.; Wong, A. L. (1987), "Linear-time computation of optimal subgraphs of decomposable graphs", Journal of Algorithms, 8 (2): 216–235, doi:10.1016/0196-6774(87)90039-3. • Bertelé, Umberto; Brioschi, Francesco (1972), Nonserial Dynamic Programming, Academic Press, ISBN 0-12-093450-7. • Bodlaender, Hans L. (1988), "Dynamic programming on graphs with bounded treewidth", Proc. 15th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science, vol. 317, Springer-Verlag, pp. 105–118, doi:10.1007/3-540-19488-6_110, hdl:1874/16258. • Bodlaender, Hans L. (1996), "A linear time algorithm for finding tree-decompositions of small treewidth", SIAM Journal on Computing, 25 (6): 1305–1317, CiteSeerX 10.1.1.113.4539, doi:10.1137/S0097539793251219. • Diestel, Reinhard (2005), Graph Theory (3rd ed.), Springer, ISBN 3-540-26182-6. • Halin, Rudolf (1976), "S-functions for graphs", Journal of Geometry, 8 (1–2): 171–186, doi:10.1007/BF01917434, S2CID 120256194. • Robertson, Neil; Seymour, Paul D. (1984), "Graph minors III: Planar tree-width", Journal of Combinatorial Theory, Series B, 36 (1): 49–64, doi:10.1016/0095-8956(84)90013-3.
Analysis of algorithms In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth.[1] Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms. In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size n of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant. Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called model of computation. A model of computation may be defined in terms of an abstract computer, e.g. Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer. Cost models Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant. 
Two cost models are generally used:[2][3][4][5][6] • the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved • the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved The latter is more cumbersome to use, so it's only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography. A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice and therefore there are algorithms that are faster than what would naively be thought possible.[7] Run-time analysis Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis. Shortcomings of empirical metrics Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms. Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:

n (list size)   Computer A run-time (in nanoseconds)   Computer B run-time (in nanoseconds)
16              8                                      100,000
63              32                                     150,000
250             125                                    200,000
1,000           500                                    250,000

Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:

n (list size)     Computer A run-time (in nanoseconds)   Computer B run-time (in nanoseconds)
16                8                                      100,000
63                32                                     150,000
250               125                                    200,000
1,000             500                                    250,000
...               ...                                    ...
1,000,000         500,000                                500,000
4,000,000         2,000,000                              550,000
16,000,000        8,000,000                              600,000
...               ...                                    ...
63,072 × 10^12    31,536 × 10^12 ns, or 1 year           1,375,000 ns, or 1.375 milliseconds

Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate.
Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it's running an algorithm with a much slower growth rate. Orders of growth Main article: Big O notation Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n^2). Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case — for example, the worst-case scenario for quicksort is O(n^2), but the average-case run-time is O(n log n). Empirical orders of growth Assuming the run-time follows the power rule, t ≈ kn^a, the coefficient a can be found[8] by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)^a so that a = log(t2/t1)/log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line)—but still could serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table:

n (list size)   Computer A run-time (in nanoseconds)   Local order of growth (n^_)   Computer B run-time (in nanoseconds)   Local order of growth (n^_)
15              7                                                                    100,000
65              32                                     1.04                          150,000                                0.28
250             125                                    1.01                          200,000                                0.21
1,000           500                                    1.00                          250,000                                0.16
...             ...                                    ...                           ...                                    ...
1,000,000       500,000                                1.00                          500,000                                0.10
4,000,000       2,000,000                              1.00                          550,000                                0.07
16,000,000      8,000,000                              1.00                          600,000                                0.06
...             ...                                    ...                           ...                                    ...

It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one. Evaluating run-time complexity The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:

1 get a positive integer n from input
2 if n > 10
3    print "This might take a while..."
4 for i = 1 to n
5    for j = 1 to i
6       print i * j
7 print "Done!"

A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. Say that the actions carried out in step 1 are considered to consume time at most T1, step 2 uses time at most T2, and so forth. In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well.
Thus the total amount of time to run steps 1-3 and step 7 is: $T_{1}+T_{2}+T_{3}+T_{7}.\,$ The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute ( n + 1 ) times,[9] which will consume T4( n + 1 ) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time. Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression: $T_{6}+2T_{6}+3T_{6}+\cdots +(n-1)T_{6}+nT_{6}$ which can be factored[10] as $\left[1+2+3+\cdots +(n-1)+n\right]T_{6}=\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}$ The total time required to run the inner loop test can be evaluated similarly: ${\begin{aligned}&2T_{5}+3T_{5}+4T_{5}+\cdots +(n-1)T_{5}+nT_{5}+(n+1)T_{5}\\=\ &T_{5}+2T_{5}+3T_{5}+4T_{5}+\cdots +(n-1)T_{5}+nT_{5}+(n+1)T_{5}-T_{5}\end{aligned}}$ which can be factored as ${\begin{aligned}&T_{5}\left[1+2+3+\cdots +(n-1)+n+(n+1)\right]-T_{5}\\=&\left[{\frac {1}{2}}(n^{2}+n)\right]T_{5}+(n+1)T_{5}-T_{5}\\=&\left[{\frac {1}{2}}(n^{2}+n)\right]T_{5}+nT_{5}\\=&\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}\end{aligned}}$ Therefore, the total run-time for this algorithm is: $f(n)=T_{1}+T_{2}+T_{3}+T_{7}+(n+1)T_{4}+\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}$ which reduces to $f(n)=\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}$ As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n2 is the highest-order term, so one can conclude that f(n) = O(n2). Formally this can be proven as follows: Prove that $\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq cn^{2},\ n\geq n_{0}$ ${\begin{aligned}&\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\\\leq &(n^{2}+n)T_{6}+(n^{2}+3n)T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\ ({\text{for }}n\geq 0)\end{aligned}}$ Let k be a constant greater than or equal to [T1..T7] ${\begin{aligned}&T_{6}(n^{2}+n)+T_{5}(n^{2}+3n)+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq k(n^{2}+n)+k(n^{2}+3n)+kn+5k\\=&2kn^{2}+5kn+5k\leq 2kn^{2}+5kn^{2}+5kn^{2}\ ({\text{for }}n\geq 1)=12kn^{2}\end{aligned}}$ Therefore $\left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq cn^{2},n\geq n_{0}{\text{ for }}c=12k,n_{0}=1$ A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. 
This would mean that the algorithm's run-time breaks down as follows:[11] $4+\sum _{i=1}^{n}i\leq 4+\sum _{i=1}^{n}n=4+n^{2}\leq 5n^{2}\ ({\text{for }}n\geq 1)=O(n^{2}).$ Growth rate analysis of other resources The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:

while file is still open:
    let n = size of file
    for every 100,000 kilobytes of increase in file size
        double the amount of memory reserved

In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources. Relevance Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless. Constant factors Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data. This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant time algorithm results in a larger constant factor, e.g., one may have $K>k\log \log n$ so long as $K/k>6$ and $n<2^{2^{6}}=2^{64}$. For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity $n\log n$), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity $n^{2}$) for small data, as the simpler algorithm is faster on small data.
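The rule a = log(t2/t1)/log(n2/n1) from the Empirical orders of growth subsection above can be applied directly to measured timings. A sketch in plain Python, using the run-times tabulated there:

    from math import log

    # (n, run-time in ns) pairs from the empirical-orders-of-growth table above.
    computer_a = [(15, 7), (65, 32), (250, 125), (1_000, 500), (1_000_000, 500_000)]
    computer_b = [(15, 100_000), (65, 150_000), (250, 200_000), (1_000, 250_000), (1_000_000, 500_000)]

    def local_orders(samples):
        # local order of growth a = log(t2/t1) / log(n2/n1) between consecutive measurements
        return [log(t2 / t1) / log(n2 / n1)
                for (n1, t1), (n2, t2) in zip(samples, samples[1:])]

    print([round(a, 2) for a in local_orders(computer_a)])   # stays near 1.0: linear growth
    print([round(a, 2) for a in local_orders(computer_b)])   # keeps falling: sub-polynomial growth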
See also • Amortized analysis • Analysis of parallel algorithms • Asymptotic computational complexity • Best, worst and average case • Big O notation • Computational complexity theory • Master theorem (analysis of algorithms) • NP-Complete • Numerical analysis • Polynomial time • Program optimization • Profiling (computer programming) • Scalability • Smoothed analysis • Termination analysis — the subproblem of checking whether a program will terminate at all • Time complexity — includes table of orders of growth for common algorithms • Information-based complexity Notes 1. "Knuth: Recent News". 28 August 2016. Archived from the original on 28 August 2016. 2. Alfred V. Aho; John E. Hopcroft; Jeffrey D. Ullman (1974). The design and analysis of computer algorithms. Addison-Wesley Pub. Co. ISBN 9780201000290., section 1.3 3. Juraj Hromkovič (2004). Theoretical computer science: introduction to Automata, computability, complexity, algorithmics, randomization, communication, and cryptography. Springer. pp. 177–178. ISBN 978-3-540-14015-3. 4. Giorgio Ausiello (1999). Complexity and approximation: combinatorial optimization problems and their approximability properties. Springer. pp. 3–8. ISBN 978-3-540-65431-5. 5. Wegener, Ingo (2005), Complexity theory: exploring the limits of efficient algorithms, Berlin, New York: Springer-Verlag, p. 20, ISBN 978-3-540-21045-0 6. Robert Endre Tarjan (1983). Data structures and network algorithms. SIAM. pp. 3–7. ISBN 978-0-89871-187-5. 7. Examples of the price of abstraction?, cstheory.stackexchange.com 8. How To Avoid O-Abuse and Bribes Archived 2017-03-08 at the Wayback Machine, at the blog "Gödel's Lost Letter and P=NP" by R. J. Lipton, professor of Computer Science at Georgia Tech, recounting idea by Robert Sedgewick 9. an extra step is required to terminate the for loop, hence n + 1 and not n executions 10. It can be proven by induction that $1+2+3+\cdots +(n-1)+n={\frac {n(n+1)}{2}}$ 11. This approach, unlike the above approach, neglects the constant time consumed by the loop tests which terminate their respective loops, but it is trivial to prove that such omission does not affect the final result References • Sedgewick, Robert; Flajolet, Philippe (2013). An Introduction to the Analysis of Algorithms (2nd ed.). Addison-Wesley. ISBN 978-0-321-90575-8. • Greene, Daniel A.; Knuth, Donald E. (1982). Mathematics for the Analysis of Algorithms (Second ed.). Birkhäuser. ISBN 3-7643-3102-X. • Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. & Stein, Clifford (2001). Introduction to Algorithms. Chapter 1: Foundations (Second ed.). Cambridge, MA: MIT Press and McGraw-Hill. pp. 3–122. ISBN 0-262-03293-7. • Sedgewick, Robert (1998). Algorithms in C, Parts 1-4: Fundamentals, Data Structures, Sorting, Searching (3rd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-31452-6. • Knuth, Donald. The Art of Computer Programming. Addison-Wesley. • Goldreich, Oded (2010). Computational Complexity: A Conceptual Perspective. Cambridge University Press. ISBN 978-0-521-88473-0. External links • Media related to Analysis of algorithms at Wikimedia Commons
Prince Rupert's cube In geometry, Prince Rupert's cube is the largest cube that can pass through a hole cut through a unit cube without splitting it into two pieces. Its side length is approximately 1.06, 6% larger than the side length 1 of the unit cube through which it passes. The problem of finding the largest square that lies entirely within a unit cube is closely related, and has the same solution. Prince Rupert's cube is named after Prince Rupert of the Rhine, who asked whether a cube could be passed through a hole made in another cube of the same size without splitting the cube into two pieces. A positive answer was given by John Wallis. Approximately 100 years later, Pieter Nieuwland found the largest possible cube that can pass through a hole in a unit cube. Many other convex polyhedra, including all five Platonic solids, have been shown to have the Rupert property: a copy of the polyhedron, of the same size or larger, can be passed through a hole in the polyhedron. It is unknown whether this is true for all convex polyhedra. Solution Place two points on two adjacent edges of a unit cube, each at a distance of 3/4 from the point where the two edges meet, and two more points symmetrically on the opposite face of the cube. Then these four points form a square with side length ${\frac {3{\sqrt {2}}}{4}}\approx 1.0606601.$ One way to see this is to first observe that these four points form a rectangle, by the symmetries of their construction. The lengths of all four sides of this rectangle equal $ (3{\sqrt {2}})/4$, by the Pythagorean theorem or (equivalently) the formula for Euclidean distance in three dimensions. For instance, the first two points, together with the corner of the cube where their two edges meet, form an isosceles right triangle with legs of length $3/4$, and the distance between the first two points is the hypotenuse of the triangle. As a rectangle with four equal sides, the shape formed by these four points is a square. Extruding the square in both directions perpendicularly to itself forms the hole through which a cube larger than the original one, up to side length $ (3{\sqrt {2}})/4$, may pass.[1] The parts of the unit cube that remain, after emptying this hole, form two triangular prisms and two irregular tetrahedra, connected by thin bridges at the four vertices of the square. Each prism has as its six vertices two adjacent vertices of the cube, and four points along the edges of the cube at distance 1/4 from these cube vertices. Each tetrahedron has as its four vertices one vertex of the cube, two points at distance 3/4 from it on two of the three adjacent edges, and one point at distance 3/16 from the cube vertex along the third adjacent edge.[2] History Prince Rupert's cube is named after Prince Rupert of the Rhine. According to a story recounted in 1693 by English mathematician John Wallis, Prince Rupert wagered that a hole could be cut through a cube, large enough to let another cube of the same size pass through it. Wallis showed that in fact such a hole was possible (with some errors that were not corrected until much later), and Prince Rupert won his wager.[3][4] Wallis assumed that the hole would be parallel to a space diagonal of the cube. The projection of the cube onto a plane perpendicular to this diagonal is a regular hexagon, and the best hole parallel to the diagonal can be found by drawing the largest possible square that can be inscribed into this hexagon.
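Both the square from the construction in the Solution section and Wallis's diagonal-hole value can be checked numerically. The following is a small sketch in Python with NumPy; the explicit coordinates are one convenient labeling of the unit cube, chosen here for illustration rather than taken from the cited sources, and the last line simply prints the value allowed by the largest square inscribed in that hexagon, derived in the next sentence.

import numpy as np

# Four points from the Solution section, using one convenient labeling of the
# unit cube [0,1]^3 (an illustrative coordinate choice, not from the sources).
square = np.array([
    [0.75, 0.00, 0.0],   # 3/4 along one edge of the bottom face
    [0.00, 0.75, 0.0],   # 3/4 along the adjacent edge
    [0.25, 1.00, 1.0],   # the two symmetric points on the opposite face
    [1.00, 0.25, 1.0],
])

# Equal sides and equal diagonals: the four points form a square of side 3*sqrt(2)/4.
sides = [np.linalg.norm(square[i] - square[(i + 1) % 4]) for i in range(4)]
diagonals = [np.linalg.norm(square[0] - square[2]),
             np.linalg.norm(square[1] - square[3])]
print(sides)                     # four values, all about 1.0606601
print(diagonals)                 # two equal values, 1.5
print(3 * np.sqrt(2) / 4)        # 1.0606601..., the optimal side length
print(np.sqrt(6) - np.sqrt(2))   # 1.0352761..., Wallis's weaker hexagon-based value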
Calculating the size of this square shows that a cube with side length ${\sqrt {6}}-{\sqrt {2}}\approx 1.03527$, slightly larger than one, is capable of passing through the hole.[3] Approximately 100 years later, Dutch mathematician Pieter Nieuwland found that a better solution may be achieved by using a hole with a different angle than the space diagonal. In fact, Nieuwland's solution is optimal. Nieuwland died in 1794, a year after taking a position as a professor at the University of Leiden, and his solution was published posthumously in 1816 by Nieuwland's mentor, Jean Henri van Swinden.[3][4][5] Since then, the problem has been repeated in many books on recreational mathematics, in some cases with Wallis' suboptimal solution instead of the optimal solution.[1][2][6][7][8][9][10][11][12] Models The construction of a physical model of Prince Rupert's cube is made challenging by the accuracy with which such a model needs to be measured, and the thinness of the connections between the remaining parts of the unit cube after the hole is cut through it. For the maximally sized inner cube with length ≈1.06 relative to the length 1 outer cube, constructing a model is "mathematically possible but practically impossible".[13] On the other hand, using the orientation of the maximal cube but making a smaller hole, big enough only for a unit cube, leaves additional thickness that allows for structural integrity.[14] For the example using two cubes of the same size, as originally proposed by Prince Rupert, model construction is possible. In a 1950 survey of the problem, D. J. E. Schrek published photographs of a model of a cube passing through a hole in another cube.[15] Martin Raynsford has designed a template for constructing paper models of a cube with another cube passing through it; however, to account for the tolerances of paper construction and not tear the paper at the narrow joints between parts of the punctured cube, the hole in Raynsford's model only lets cubes through that are slightly smaller than the outer cube.[16] Since the advent of 3D printing, construction of a Prince Rupert cube of the full 1:1 ratio has become easy.[17] Generalizations A polyhedron $P$ is said to have the Rupert property if a polyhedron of the same or larger size and the same shape as $P$ can pass through a hole in $P$.[18] All five Platonic solids—the cube, regular tetrahedron, regular octahedron,[19] regular dodecahedron, and regular icosahedron—have the Rupert property. Of the 13 Archimedean solids, it is known that at least these ten have the Rupert property: the cuboctahedron, truncated octahedron, truncated cube, rhombicuboctahedron, icosidodecahedron, truncated cuboctahedron, truncated icosahedron, truncated dodecahedron,[20] and the truncated tetrahedron[21][22], as well as the truncated icosidodecahedron[23][24]. It has been conjectured that all 3-dimensional convex polyhedra have this property[18], but also, to the contrary, that the rhombicosidodecahedron does not have Rupert's property[23][24]. Unsolved problem in mathematics: Do all convex polyhedra have the Rupert property? (more unsolved problems in mathematics) Cubes and all rectangular solids have Rupert passages in every direction that is not parallel to any of their faces.[25] Another way to express the same problem is to ask for the largest square that lies within a unit cube. More generally, Jerrard & Wetzel (2004) show how to find the largest rectangle of a given aspect ratio that lies within a unit cube. 
As they observe, the optimal rectangle must always be centered at the center of the cube, with its vertices on edges of the cube. Depending on its aspect ratio, the ratio between its long and short sides, there are two cases for how it can be placed within the cube. For an aspect ratio of ${\sqrt {2}}$ or more, the optimal rectangle lies within the rectangle connecting two opposite edges of the cube, which has aspect ratio exactly ${\sqrt {2}}$. For aspect ratios closer to 1 (including aspect ratio 1 for the square of Prince Rupert's cube), two of the four vertices of an optimal rectangle are equidistant from a vertex of the cube, along two of the three edges touching that vertex. The other two rectangle vertices are the reflections of the first two across the center of the cube.[4] If the aspect ratio is not constrained, the rectangle with the largest area that fits within a cube is the one of aspect ratio ${\sqrt {2}}$ that has two opposite edges of the cube as two of its sides, and two face diagonals as the other two sides.[26] For all $n\geq 2$, the $n$-dimensional hypercube also has the Rupert property.[27] Moreover, one may ask for the largest $m$-dimensional hypercube that may be drawn within an $n$-dimensional unit hypercube. The answer is always an algebraic number. For instance, the problem for $(m,n)=(3,4)$ asks for the largest (three-dimensional) cube within a four-dimensional hypercube. After Martin Gardner posed this question in Scientific American, Kay R. Pechenick DeVicci and several other readers showed that the answer for the (3,4) case is the square root of the smaller of two real roots of the polynomial $4x^{4}-28x^{3}-7x^{2}+16x+16$, which works out to approximately 1.007435.[1][28] For $m=2$, the optimal side length of the largest square in an $n$-dimensional hypercube is either $ {\sqrt {n/2}}$ or $ {\sqrt {n/2-3/8}}$, depending on whether $n$ is even or odd respectively.[29] References 1. Gardner, Martin (2001), The Colossal Book of Mathematics: Classic Puzzles, Paradoxes, and Problems : Number Theory, Algebra, Geometry, Probability, Topology, Game Theory, Infinity, and Other Topics of Recreational Mathematics, W. W. Norton & Company, pp. 172–173, ISBN 9780393020236 2. Wells, David (1997), The Penguin Dictionary of Curious and Interesting Numbers (3rd ed.), Penguin, p. 16, ISBN 9780140261493 3. Rickey, V. Frederick (2005), Dürer's Magic Square, Cardano's Rings, Prince Rupert's Cube, and Other Neat Things (PDF), archived from the original (PDF) on 2010-07-05; notes for “Recreational Mathematics: A Short Course in Honor of the 300th Birthday of Benjamin Franklin,” Mathematical Association of America, Albuquerque, NM, August 2–3, 2005 4. Jerrard, Richard P.; Wetzel, John E. (2004), "Prince Rupert's rectangles", The American Mathematical Monthly, 111 (1): 22–31, doi:10.2307/4145012, JSTOR 4145012, MR 2026310 5. Swinden, J. H. Van (1816), Grondbeginsels der Meetkunde (in Dutch) (2nd ed.), Amsterdam: P. den Hengst en zoon, pp. 512–513 6. Ozanam, Jacques (1803), Montucla, Jean Étienne; Hutton, Charles (eds.), Recreations in Mathematics and Natural Philosophy: Containing Amusing Dissertations and Enquiries Concerning a Variety of Subjects the Most Remarkable and Proper to Excite Curiosity and Attention to the Whole Range of the Mathematical and Philosophical Sciences, G. Kearsley, pp. 315–316 7. Dudeney, Henry Ernest (1936), Modern puzzles and how to solve them, p. 149 8. Ogilvy, C. Stanley (1956), Through the Mathescope, Oxford University Press, pp. 54–55. 
Reprinted as Ogilvy, C. Stanley (1994), Excursions in mathematics, New York: Dover Publications Inc., ISBN 0-486-28283-X, MR 1313725 9. Ehrenfeucht, Aniela (1964), The Cube Made Interesting, translated by Zawadowski, Wacław, New York: The Macmillan Co., p. 77, MR 0170242 10. Stewart, Ian (2001), Flatterland: Like Flatland Only More So, Macmillan, pp. 49–50, ISBN 9780333783122 11. Darling, David (2004), The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes, John Wiley & Sons, p. 255, ISBN 9780471667001 12. Pickover, Clifford A. (2009), The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the History of Mathematics, Sterling Publishing Company, Inc., p. 214, ISBN 9781402757969 13. Sriraman, Bharath (2009), "Mathematics and literature (the sequel): imagination as a pathway to advanced mathematical ideas and philosophy", in Sriraman, Bharath; Freiman, Viktor; Lirette-Pitre, Nicole (eds.), Interdisciplinarity, Creativity, and Learning: Mathematics With Literature, Paradoxes, History, Technology, and Modeling, The Montana Mathematics Enthusiast: Monograph Series in Mathematics Education, vol. 7, Information Age Publishing, Inc., pp. 41–54, ISBN 9781607521013 14. Parker, Matt (2015), Things to Make and Do in the Fourth Dimension: A Mathematician's Journey Through Narcissistic Numbers, Optimal Dating Algorithms, at Least Two Kinds of Infinity, and More, New York: Farrar, Straus and Giroux, p. 98, ISBN 978-0-374-53563-6, MR 3753642 15. Schrek, D. J. E. (1950), "Prince Rupert's problem and its extension by Pieter Nieuwland", Scripta Mathematica, 16: 73–80 and 261–267; as cited by Rickey (2005) and Jerrard & Wetzel (2004) 16. Hart, George W. (January 30, 2012), Math Monday: Passing a Cube Through Another Cube, Museum of Mathematics; originally published in Make Online 17. 3geek14, Prince Rupert's Cube, Shapeways, retrieved 2017-02-06 18. Jerrard, Richard P.; Wetzel, John E.; Yuan, Liping (April 2017), "Platonic passages", Mathematics Magazine, Washington, DC: Mathematical Association of America, 90 (2): 87–98, doi:10.4169/math.mag.90.2.87, S2CID 218542147 19. Scriba, Christoph J. (1968), "Das Problem des Prinzen Ruprecht von der Pfalz", Praxis der Mathematik (in German), 10 (9): 241–246, MR 0497615 20. Chai, Ying; Yuan, Liping; Zamfirescu, Tudor (June–July 2018), "Rupert Property of Archimedean Solids", The American Mathematical Monthly, 125 (6): 497–504, doi:10.1080/00029890.2018.1449505, S2CID 125508192 21. Hoffmann, Balazs (2019), "Rupert properties of polyhedra and the generalized Nieuwland constant", Journal for Geometry and Graphics, 23 (1): 29–35 22. Lavau, Gérard (December 2019), "The Truncated Tetrahedron is Rupert", The American Mathematical Monthly, 126 (10): 929–932, doi:10.1080/00029890.2019.1656958, S2CID 213502432 23. Steininger, Jakob; Yurkevich, Sergey (2022), "Extended Abstract for: Solving Rupert's Problem Algorithmically", ACM Commun. Comput. Algebra, 56 (2): 32–35, doi:10.1145/3572867.3572870, S2CID 253802715 24. Steininger, Jakob; Yurkevich, Sergey (December 27, 2021), An algorithmic approach to Rupert's problem, arXiv:2112.13754 25. Bezdek, András; Guan, Zhenyue; Hujter, Mihály; Joós, Antal (2021), "Cubes and boxes have Rupert's passages in every nontrivial direction", The American Mathematical Monthly, 128 (6): 534–542, arXiv:2111.03817, doi:10.1080/00029890.2021.1901461, MR 4265479, S2CID 235234134 26. Thompson, Silvanus P.; Gardner, Martin (1998), Calculus Made Easy (3rd ed.), Macmillan, p. 315, ISBN 9780312185480 27. 
Huber, Greg; Shultz, Kay Pechenick; Wetzel, John E. (June–July 2018), "The n-cube is Rupert", The American Mathematical Monthly, 125 (6): 505–512, doi:10.1080/00029890.2018.1448197, S2CID 51841349 28. Guy, Richard K.; Nowakowski, Richard J. (1997), "Unsolved Problems: Monthly Unsolved Problems, 1969-1997", The American Mathematical Monthly, 104 (10): 967–973, doi:10.2307/2974481, JSTOR 2974481, MR 1543116 29. Weisstein, Eric W., "Cube Square Inscribing", MathWorld External links Wikimedia Commons has media related to Prince Rupert's cube. • Weisstein, Eric W., "Prince Rupert's Cube", MathWorld
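The value quoted in the Generalizations section for the largest three-dimensional cube inside a four-dimensional unit hypercube can also be checked numerically. This is a short sketch using NumPy, with the polynomial coefficients taken from the text above; the tolerance used to pick out the real roots is an arbitrary illustrative choice.

import numpy as np

# 4x^4 - 28x^3 - 7x^2 + 16x + 16, the polynomial from the Generalizations section.
roots = np.roots([4, -28, -7, 16, 16])
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-8)

# The square root of the smaller of the two real roots is the side length of the
# largest three-dimensional cube that fits within a four-dimensional unit hypercube.
print(np.sqrt(real_roots[0]))   # approximately 1.007435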
Rupture field In abstract algebra, a rupture field of a polynomial $P(X)$ over a given field $K$ is a field extension of $K$ generated by a root $a$ of $P(X)$.[1] For instance, if $K=\mathbb {Q} $ and $P(X)=X^{3}-2$ then $\mathbb {Q} [{\sqrt[{3}]{2}}]$ is a rupture field for $P(X)$. The notion is interesting mainly if $P(X)$ is irreducible over $K$. In that case, all rupture fields of $P(X)$ over $K$ are isomorphic, non-canonically, to $K_{P}=K[X]/(P(X))$: if $L=K[a]$ where $a$ is a root of $P(X)$, then the ring homomorphism $f$ defined by $f(k)=k$ for all $k\in K$ and $f(X\mod P)=a$ is an isomorphism. Also, in this case the degree of the extension equals the degree of $P$. A rupture field of a polynomial does not necessarily contain all the roots of that polynomial: in the above example the field $\mathbb {Q} [{\sqrt[{3}]{2}}]$ does not contain the other two (complex) roots of $P(X)$ (namely $\omega {\sqrt[{3}]{2}}$ and $\omega ^{2}{\sqrt[{3}]{2}}$ where $\omega $ is a primitive cube root of unity). For a field containing all the roots of a polynomial, see Splitting field. Examples A rupture field of $X^{2}+1$ over $\mathbb {R} $ is $\mathbb {C} $. It is also a splitting field. The rupture field of $X^{2}+1$ over $\mathbb {F} _{3}$ is $\mathbb {F} _{9}$ since there is no element of $\mathbb {F} _{3}$ which squares to $-1$ (and all quadratic extensions of $\mathbb {F} _{3}$ are isomorphic to $\mathbb {F} _{9}$). See also • Splitting field References 1. Escofier, Jean-Paul (2001). Galois Theory. Springer. pp. 62. ISBN 0-387-98765-7.
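For the example above, the quotient description $K_{P}=K[X]/(P(X))$ can be explored directly with a computer algebra system. The following is a minimal sketch using Python's SymPy; the helper name mod_P is introduced here purely for illustration. It checks that the class of X is a root of P in the quotient, that the degree of the extension equals the degree of P, and that a nonzero element has an inverse, reflecting that the quotient is a field when P is irreducible.

from sympy import symbols, Poly, QQ, rem, invert

x = symbols('x')
P = Poly(x**3 - 2, x, domain=QQ)

# Represent elements of Q[X]/(P) by their remainders on division by P.
def mod_P(f):
    return rem(Poly(f, x, domain=QQ), P)

a = Poly(x, x, domain=QQ)      # the class of X, i.e. a root of P in the quotient
print(mod_P(a**3))             # Poly(2, x, domain='QQ'): a^3 = 2, so P(a) = 0
print(P.degree())              # 3, the degree of Q[X]/(P) over Q

# Since P is irreducible over Q, every nonzero class is invertible;
# for example the inverse of 1 + a is (a^2 - a + 1)/3.
print(invert(1 + x, x**3 - 2))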
Ruriko Yoshida Ruriko (Rudy) Yoshida is a Japanese-American mathematician and statistician whose research topics have ranged from abstract mathematical problems in algebraic combinatorics to optimized camera placement in sensor networks and the phylogenomics of fungi. She works at the Naval Postgraduate School in Monterey, California as a professor of operations research.[1] She was promoted to the rank of professor on July 1, 2023.[2] Early life and education Yoshida grew up in Japan. Despite a love of mathematics that began in middle school, she was discouraged from studying mathematics by her teachers, and in response dropped out of her Japanese high school and took the high school equivalency examination instead. In order to continue her study of mathematics, she moved to the US, and after studying at a junior college, transferred to the University of California, Berkeley. Her parents, who had been supporting her financially, stopped their support when they learned that she was studying mathematics instead of business, and she put herself through school working both as a grader in the mathematics department and in the university's police department.[3] She graduated with a bachelor's degree in mathematics in 2000.[1] She went to the University of California, Davis for graduate study, under the supervision of Jesús A. De Loera. De Loera had been a student of Berkeley professor Bernd Sturmfels, and Yoshida also considers Sturmfels to be an academic mentor. Part of her work there involved implementing a method of Alexander Barvinok for counting integer points in convex polyhedra by decomposing the input into cones,[3] and her 2004 dissertation was Barvinok's Rational Functions: Algorithms and Applications to Optimization, Statistics, and Algebra.[4] Career After completing her doctorate, Yoshida returned to the University of California, Berkeley as a postdoctoral researcher, working with Lior Pachter in the Center for Pure and Applied Mathematics, and then went to Duke University for more postdoctoral research as an assistant research professor of mathematics, working with Mark L. Huber. She became an assistant professor of statistics at the University of Kentucky in 2006, and was promoted to a tenured associate professor in 2012. In 2016 she moved to her present position at the Naval Postgraduate School,[1] moving there in part to be closer to her husband's family in Northern California.[3] She is also regarded as an accomplished teacher of mathematics and statistics, as attested by the provost's announcement[2] and the list of her students.[5] She has also returned to Japan as a visitor to the Institute of Statistical Mathematics.[1] References 1. Curriculum vitae (PDF), October 14, 2020, retrieved 2020-10-24 2. NPS Campus Announcement, April 6, 2023, retrieved 2023-04-06 3. Interview with Yoshida, Math is For All: Parts 1, 2, 3, 4, 5, retrieved 2020-10-24 4. Ruriko Yoshida at the Mathematics Genealogy Project 5. List of Students, July 26, 2023, retrieved 2023-07-26 External links • Home page • Ruriko Yoshida publications indexed by Google Scholar Authority control: Academics • MathSciNet • Mathematics Genealogy Project • ORCID
Ruslan Smelyansky Ruslan Smelyansky (Russian: Русла́н Леони́дович Смеля́нский) (born 1950) is a Russian mathematician, Dr. Sc., a professor at the Faculty of Computer Science of Moscow State University, and a Corresponding Member of the Russian Academy of Sciences.[1] Ruslan Smelyansky Русла́н Смеля́нский Ruslan Smelyansky (2005) Born (1950-11-12) 12 November 1950 Moscow Alma materMoscow State University (1977) Scientific career FieldsMathematics InstitutionsMSU CMC Doctoral advisorLev Korolyov He defended the thesis "Analysis of the performance of multiprocessor systems based on the invariant behavior of programs" for the degree of Doctor of Physical and Mathematical Sciences (1990).[2] He is the author of six books.[3][4] References 1. Annals of the Moscow University (in Russian) 2. MSU CMC (in Russian) 3. Scientific works of Ruslan Smelyansky 4. Scientific works of Ruslan Smelyansky (in English) Bibliography • Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (print run of 1,500 copies). Moscow: Publishing house of Moscow University. Author-compiler Evgeny Grigoriev. 2010. pp. 442–444. ISBN 978-5-211-05838-5. External links • Annals of the Moscow University (in Russian) • MSU CMC (in Russian) • Scientific works of Ruslan Smelyansky • Scientific works of Ruslan Smelyansky (in English)
Russel E. Caflisch Russel E. Caflisch (born 29 April 1954)[2] is an American mathematician. Russel E. Caflisch Caflisch in 2012 Born (1954-04-29) April 29, 1954 NationalityAmerican Alma materMichigan State University (B.S., 1975) New York University, Courant Institute of Mathematics (M.S.; Ph.D.) SpouseCarol Lynn Meylan.[1] Parent(s)Edward G. Caflisch Dorothy G. Caflisch Scientific career FieldsApplied mathematics InstitutionsNew York University University of California, Los Angeles Institute for Pure and Applied Mathematics Doctoral advisorGeorge Papanicolaou Biography Caflisch is Director of the Courant Institute of Mathematical Sciences at New York University (NYU), and a Professor in the Mathematics Department. Russel Edward Caflisch was born in Charleston, West Virginia.[2] He received his bachelor's degree from Michigan State University in 1975. He earned a master's degree and Ph.D. in Mathematics from the Courant Institute of Mathematical Sciences at New York University.[2] His dissertation was titled "The Fluid Dynamic Limit and Shocks for a Model Boltzmann Equation." (1978)[3] He has also held faculty positions at Stanford and NYU. He has served as PhD advisor for 22 students, with 55 descendants.[4] Up until August 2017, Caflisch was the director of the Institute for Pure & Applied Mathematics (IPAM) and a professor in the Mathematics department, where he also held a joint appointment in the department of Materials Science and Engineering.[5] Caflisch was a founding member of California NanoSystems Institute (CNSI). Caflisch's expertise includes topics in the field of applied mathematics, including partial differential equations, fluid dynamics, plasma physics, materials science, Monte Carlo methods, and computational finance. Recognition Caflisch was awarded the Hertz Foundation Graduate Fellowship in 1975[6] and a Sloan Foundation Research Fellowship in 1984. He was named a fellow of the Society for Industrial and Applied Mathematics in 2009,[7] the American Mathematical Society in 2012,[8] and the American Academy of Arts and Sciences in 2013.[9] He was elected a member of the National Academy of Sciences in April 2019.[10] Caflisch was an invited speaker at International Congress of Mathematicians in Madrid in 2006,[11] and at the conference Dynamics, Equations and Applications in Kraków in 2019.[12] Personal life His parents are Edward G. Caflisch and Dorothy G. Caflisch. Russel Caflisch is married to Carol Lynn Meylan. Books • Mathematical Aspects of Vortex Dynamics (Society for Industrial and Applied Mathematics, 1989) ISBN 0898712351 References 1. Carol Meylan, Russel Caflisch - The New York Times, January 7, 2018 2. Katle P.M., Nemeh K.H., Schusterbauer N. (eds.) - American Men & Women of Science. Volume 2 (2005) 3. THE FLUID DYNAMIC LIMIT AND SHOCKS FOR A MODEL BOLTZMANN EQUATION. - ProQuest 4. "Russel Caflisch - The Mathematics Genealogy Project". www.genealogy.ams.org. Retrieved May 18, 2019. 5. "IPAM Directors". Retrieved May 18, 2019. 6. "Hertz Foundation Fellows List". Retrieved May 18, 2019. 7. "SIAM Fellows List". Retrieved May 18, 2019. 8. "AMS Fellows List". Retrieved May 18, 2019. 9. "American Academy of Arts and Sciences Fellows List". Retrieved May 18, 2019. 10. "2019 NAS Election". National Academy of Sciences. April 30, 2019. Retrieved May 18, 2019. 11. Sanz-Solé, Marta; Soria, Javier; Varona, Juan L.; Verdera, Joan (2007-05-15). "International Congress of Mathematicians Madrid 2006". ems.press. Retrieved 2023-03-17. 12. "DEA 2019 Invited Speakers". 
Retrieved 2023-03-17. External links • Personal Webpage • Russel E. Caflisch publications indexed by Google Scholar Authority control National • Israel • United States Academics • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • zbMATH Other • IdRef
Russo–Dye theorem In mathematics, the Russo–Dye theorem is a result in the field of functional analysis. It states that in a unital C*-algebra, the closure of the convex hull of the unitary elements is the closed unit ball.[1]: 44  The theorem was published by B. Russo and H. A. Dye in 1966.[2] Other formulations and generalizations Results similar to the Russo–Dye theorem hold in more general contexts. For example, in a unital *-Banach algebra, the closed unit ball is contained in the closed convex hull of the unitary elements.[1]: 73  A more precise result is true for the C*-algebra of all bounded linear operators on a Hilbert space: If T is such an operator and ||T|| < 1 − 2/n for some integer n > 2, then T is the mean of n unitary operators.[3]: 98  Applications This example is due to Russo & Dye,[2] Corollary 1: If U(A) denotes the unitary elements of a C*-algebra A, then the norm of a linear mapping f from A to a normed linear space B is $\sup _{U\in U(A)}||f(U)||.$ In other words, the norm of an operator can be calculated using only the unitary elements of the algebra. Further reading • An especially simple proof of the theorem is given in: Gardner, L. T. (1984). "An elementary proof of the Russo–Dye theorem". Proceedings of the American Mathematical Society. 90 (1): 171. doi:10.2307/2044692. JSTOR 2044692. Notes 1. Doran, Robert S.; Victor A. Belfi (1986). Characterizations of C*-Algebras: The Gelfand–Naimark Theorems. New York: Marcel Dekker. ISBN 0-8247-7569-4. 2. Russo, B.; H. A. Dye (1966). "A Note on Unitary Operators in C*-Algebras". Duke Mathematical Journal. 33 (2): 413–416. doi:10.1215/S0012-7094-66-03346-1. 3. Pedersen, Gert K. (1989). Analysis Now. Berlin: Springer-Verlag. ISBN 0-387-96788-5. Functional analysis (topics – glossary) Spaces • Banach • Besov • Fréchet • Hilbert • Hölder • Nuclear • Orlicz • Schwartz • Sobolev • Topological vector Properties • Barrelled • Complete • Dual (Algebraic/Topological) • Locally convex • Reflexive • Reparable Theorems • Hahn–Banach • Riesz representation • Closed graph • Uniform boundedness principle • Kakutani fixed-point • Krein–Milman • Min–max • Gelfand–Naimark • Banach–Alaoglu Operators • Adjoint • Bounded • Compact • Hilbert–Schmidt • Normal • Nuclear • Trace class • Transpose • Unbounded • Unitary Algebras • Banach algebra • C*-algebra • Spectrum of a C*-algebra • Operator algebra • Group algebra of a locally compact group • Von Neumann algebra Open problems • Invariant subspace problem • Mahler's conjecture Applications • Hardy space • Spectral theory of ordinary differential equations • Heat kernel • Index theorem • Calculus of variations • Functional calculus • Integral operator • Jones polynomial • Topological quantum field theory • Noncommutative geometry • Riemann hypothesis • Distribution (or Generalized functions) Advanced topics • Approximation property • Balanced set • Choquet theory • Weak topology • Banach–Mazur distance • Tomita–Takesaki theory •  Mathematics portal • Category • Commons
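In the finite-dimensional C*-algebra of n × n complex matrices, the inclusion of the closed unit ball in the closed convex hull of the unitaries can be made very concrete: a matrix of operator norm at most 1 is even the mean of just two unitaries, obtained from its singular value decomposition. That is a stronger, matrix-specific fact, not a statement taken from the sources cited above, and the sketch below (Python with NumPy, with a random matrix used purely for illustration) only verifies it numerically.

import numpy as np

rng = np.random.default_rng(0)

# A random 3x3 complex matrix, rescaled so its operator (spectral) norm is 1/2,
# i.e. a point inside the closed unit ball of the C*-algebra M_3(C).
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = A / (2 * np.linalg.norm(A, 2))

# Write A = (U1 + U2)/2 with U1, U2 unitary: if A = W diag(s) Vh is an SVD,
# each singular value s_j in [0, 1] equals (exp(i t_j) + exp(-i t_j))/2
# with t_j = arccos(s_j).
W, s, Vh = np.linalg.svd(A)
t = np.arccos(s)
U1 = W @ np.diag(np.exp(1j * t)) @ Vh
U2 = W @ np.diag(np.exp(-1j * t)) @ Vh

print(np.allclose(A, (U1 + U2) / 2))              # True: A is a mean of unitaries
print(np.allclose(U1.conj().T @ U1, np.eye(3)))   # True: U1 is unitary
print(np.allclose(U2.conj().T @ U2, np.eye(3)))   # True: U2 is unitary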
Ruth Britto Ruth Alexandra Britto-Pacumio is an American mathematical physicist whose research topics include black holes, Yang–Mills theory, and the theory of Feynman integrals; with Freddy Cachazo, Bo Feng, and Edward Witten she is one of the namesakes of the BCFW recursion relations for computing scattering amplitudes.[1] She is an associate professor in mathematics and theoretical physics at Trinity College Dublin,[2] and is also affiliated with the Institut de Physique Théorique of CEA Saclay. Education and career Britto is originally from Binghamton, New York,[3] where her father, Ronald Britto,[4] was a professor of economics at Binghamton University.[5] As an undergraduate mathematics student at the Massachusetts Institute of Technology, she was the 1994 winner of the Elizabeth Lowell Putnam Prize for the best performance by a female student in the William Lowell Putnam Mathematical Competition, and the 1995 winner of the Alice T. Schafer Prize for Excellence in Mathematics by an Undergraduate Woman, awarded by the Association for Women in Mathematics.[3] She completed her Ph.D. in physics at Harvard University in 2002. Her dissertation, Bound states of supersymmetric black holes, was supervised by Andrew Strominger.[6] She was a researcher at the Institute for Advanced Study, University of Amsterdam, Fermilab, and CEA Paris-Saclay before joining the Trinity College staff in 2014.[2] References 1. Gallian, Joseph A. (June–July 2019), "The First Twenty-Five Winners of the AWM Alice T. Schafer Prize" (PDF), Notices of the American Mathematical Society, 66 (6): 870–874, doi:10.1090/noti1892 2. "Dr. Ruth Britto", Trinity People Finder, Trinity College Dublin, retrieved 2021-03-13 3. Alice T. Schafer Prize for Excellence in Mathematics by an Undergraduate Woman 1995, Association for Women in Mathematics, retrieved 2021-03-13 4. "Vestal spelling bee stings some elementary students", Press and Sun-Bulletin, 29 November 1984 – via Newspapers.com 5. Directory of American Fulbright Scholars, Council for International Exchange of Scholars, 1983, p. 30 6. Ruth Britto at the Mathematics Genealogy Project External links • Home page • How do we explain the universe we observe? Trinity College Dublin Talks Authority control International • VIAF Academics • MathSciNet • Mathematics Genealogy Project • ORCID Other • IdRef
Ruth Charney Ruth Michele Charney (born 1950)[1] is an American mathematician known for her work in geometric group theory and Artin groups. Other areas of research include K-theory and algebraic topology.[2] She holds the Theodore and Evelyn G. Berenson Chair in Mathematics at Brandeis University. She was in the first group of mathematicians named Fellows of the American Mathematical Society.[3][4] She was in the first group of mathematicians named Fellows of the Association for Women in Mathematics.[5][6] She served as president of the Association for Women in Mathematics during 2013–2015,[7] and served as president of the American Mathematical Society for the 2021–2023 term.[8] Ruth Charney Ruth Charney in 1977 Born1950 (age 72–73) Alma materBrandeis University Princeton University Known forGeometric group theory, Artin groups Awards • President of Association for Women in Mathematics • President of the American Mathematical Society • Fellow of the American Mathematical Society • Fellow of the Association for Women in Mathematics Scientific career FieldsMathematics InstitutionsBrandeis University ThesisHomological Stability for the General Linear Group of a Principal Ideal Domain (1977) Doctoral advisorWu-Chung Hsiang Life Charney attended Brandeis University, graduating in mathematics in 1972.[9] She then attended Merce Cunningham Dance Studio for a year, studying modern dance. She received her Ph.D. from Princeton University in 1977 under Wu-Chung Hsiang.[10] Work Following her graduation from Princeton, Charney took a postdoctoral position at University of California, Berkeley, followed by an NSF postdoctoral appointment/assistant professor position at Yale University.[11] She worked for Ohio State University until 2003, when she returned to work at Brandeis University. Charney served as president of the Association for Women in Mathematics during 2013–2015.[9] She emphasized the importance of encouraging young women in mathematics through summer programs, mentorships, and parental involvement.[12] She has served as an editor of the journal Algebraic and Geometric Topology from 2000 to 2007.[13][11] In 2019 she was elected to serve as president of the American Mathematical Society during 2021–2023.[8] She currently serves as the AMS Immediate Past President.[14] Additionally, she was a member at large for the American Mathematical Society from 1992 to 1994.[15] Honors • In 2013 Charney was named a Fellow of the American Mathematical Society in the inaugural class.[3][4] • In 2017 she was selected as a Fellow of the Association for Women in Mathematics in the inaugural class.[5][6] Selected publications • Charney, Ruth; Davis, Michael W. Finite K(π,1)s for Artin groups. Prospects in topology (Princeton, NJ, 1994), 110–124, Ann. of Math. Stud., 138, Princeton Univ. Press, Princeton, NJ, 1995. MR1368655 • Charney, Ruth Geodesic automation and growth functions for Artin groups of finite type. Math. Ann. 301 (1995), no. 2, 307–324. MR1314589 • Charney, Ruth Artin groups of finite type are biautomatic. Math. Ann. 292 (1992), no. 4, 671–683. MR1157320 • Charney, Ruth An introduction to right-angled Artin groups. Geom. Dedicata 125 (2007), 141–158. MR2322545 References 1. Birth date from ISNI authority control file, accessed 2018-11-26. 2. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 2021-10-26. 3. "Inaugural Fellows of the AMS" (PDF). 4. "Fellows of the AMS". American Mathematical Society. Retrieved 2023-01-04. 5. "2018 Inaugural Class of AWM Fellows". 
Association for Women in Mathematics. Retrieved 7 April 2019. 6. "AWM Fellows". AWM Fellows. Retrieved 4 Jan 2023. 7. "Ruth Charney Curriculum Vita" (PDF). Retrieved 18 December 2019. 8. "Ruth Charney Elected AMS President". American Mathematical Society. Retrieved 18 December 2019. 9. Burrows, Leah (October 21, 2013). "Charney makes it all add up: Mathematician (and former dancer) wants to multiply women in math". Brandeis NOW. Brandeis University. Retrieved December 5, 2014. 10. "Ruth Michele Charney". The Mathematics Genealogy Project. NDSU Department of Mathematics. Retrieved December 5, 2014. 11. "Personal Profile of Prof. Ruth Charney". Mathematical Sciences Research Institute. Retrieved December 5, 2014. 12. Suhay, Lisa (March 14, 2014). "Calculating women: How to get more girls into math". Christian Science Monitor. Boston. Retrieved December 5, 2014. 13. "Ruth Charney Theodore and Evelyn Berenson Professor of Mathematics Brandeis University (CV)". Brandeis University. 2019. Retrieved March 25, 2020. 14. "Officers". American Mathematical Society. Retrieved 2023-03-27. 15. "AMS Committees". American Mathematical Society. Retrieved 2023-03-27. External links • Ruth Charney's Author Profile on MathSciNet Presidents of the American Mathematical Society 1888–1900 • John Howard Van Amringe (1888–1890) • Emory McClintock (1891–1894) • George William Hill (1895–1896) • Simon Newcomb (1897–1898) • Robert Simpson Woodward (1899–1900) 1901–1924 • E. H. Moore (1901–1902) • Thomas Fiske (1903–1904) • William Fogg Osgood (1905–1906) • Henry Seely White (1907–1908) • Maxime Bôcher (1909–1910) • Henry Burchard Fine (1911–1912) • Edward Burr Van Vleck (1913–1914) • Ernest William Brown (1915–1916) • Leonard Eugene Dickson (1917–1918) • Frank Morley (1919–1920) • Gilbert Ames Bliss (1921–1922) • Oswald Veblen (1923–1924) 1925–1950 • George David Birkhoff (1925–1926) • Virgil Snyder (1927–1928) • Earle Raymond Hedrick (1929–1930) • Luther P. Eisenhart (1931–1932) • Arthur Byron Coble (1933–1934) • Solomon Lefschetz (1935–1936) • Robert Lee Moore (1937–1938) • Griffith C. Evans (1939–1940) • Marston Morse (1941–1942) • Marshall H. Stone (1943–1944) • Theophil Henry Hildebrandt (1945–1946) • Einar Hille (1947–1948) • Joseph L. Walsh (1949–1950) 1951–1974 • John von Neumann (1951–1952) • Gordon Thomas Whyburn (1953–1954) • Raymond Louis Wilder (1955–1956) • Richard Brauer (1957–1958) • Edward J. McShane (1959–1960) • Deane Montgomery (1961–1962) • Joseph L. Doob (1963–1964) • Abraham Adrian Albert (1965–1966) • Charles B. Morrey Jr. (1967–1968) • Oscar Zariski (1969–1970) • Nathan Jacobson (1971–1972) • Saunders Mac Lane (1973–1974) 1975–2000 • Lipman Bers (1975–1976) • R. H. Bing (1977–1978) • Peter Lax (1979–1980) • Andrew M. Gleason (1981–1982) • Julia Robinson (1983–1984) • Irving Kaplansky (1985–1986) • George Mostow (1987–1988) • William Browder (1989–1990) • Michael Artin (1991–1992) • Ronald Graham (1993–1994) • Cathleen Synge Morawetz (1995–1996) • Arthur Jaffe (1997–1998) • Felix Browder (1999–2000) 2001–2024 • Hyman Bass (2001–2002) • David Eisenbud (2003–2004) • James Arthur (2005–2006) • James Glimm (2007–2008) • George Andrews (2009–2010) • Eric Friedlander (2011–2012) • David Vogan (2013–2014) • Robert Bryant (2015–2016) • Ken Ribet (2017–2018) • Jill Pipher (2019–2020) • Ruth Charney (2021–2022) • Bryna Kra (2023–2024) Presidents of the Association for Women in Mathematics 1971–1990 • Mary W. Gray (1971–1973) • Alice T. 
Schafer (1973–1975) • Lenore Blum (1975–1979) • Judith Roitman (1979–1981) • Bhama Srinivasan (1981–1983) • Linda Preiss Rothschild (1983–1985) • Linda Keen (1985–1987) • Rhonda Hughes (1987–1989) • Jill P. Mesirov (1989–1991) 1991–2010 • Carol S. Wood (1991–1993) • Cora Sadosky (1993–1995) • Chuu-Lian Terng (1995–1997) • Sylvia M. Wiegand (1997–1999) • Jean E. Taylor (1999–2001) • Suzanne Lenhart (2001–2003) • Carolyn S. Gordon (2003–2005) • Barbara Keyfitz (2005–2007) • Cathy Kessel (2007–2009) • Georgia Benkart (2009–2011) 2011–0000 • Jill Pipher (2011–2013) • Ruth Charney (2013–2015) • Kristin Lauter (2015–2017) • Ami Radunskaya (2017–2019) • Ruth Haas (2019–2021) • Kathryn Leonard (2021–2023) • Talitha Washington (2023–2025) Authority control International • ISNI • VIAF National • Germany • Israel • United States Academics • DBLP • Google Scholar • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Ruth Haas Ruth Haas is an American mathematician and professor at the University of Hawaii at Manoa. Previously she was the Achilles Professor of Mathematics at Smith College.[1] She received the M. Gweneth Humphreys Award from the Association for Women in Mathematics (AWM) in 2015 for her mentorship of women in mathematics.[2][1] Haas was named an inaugural AWM Fellow in 2017.[3] She was elected President of the AWM in 2017 and assumed the position on February 1, 2019.[4] Ruth Haas Ruth Haas at AWM Research Symposium, 2019 NationalityAmerican Alma materSwarthmore College Cornell University AwardsM. Gweneth Humphreys Award (2015) Scientific career FieldsMathematics InstitutionsSmith College University of Hawaii at Manoa ThesisDimension and Bases for Certain Classes of Splines: A Combinatorial and Homological Approach (1987) Doctoral advisorLouis Billera Websitemath.hawaii.edu/wordpress/people/rhaas/ Education Haas received her Bachelor of Arts from Swarthmore College, her Master of Science from Cornell University, and her Ph.D. from Cornell University in 1987.[5][6] Prior to becoming a professor at the University of Hawaii, Haas was Achilles Professor of Mathematics and Statistics at Smith College.[7] Career During her many years on the Smith College faculty, Haas helped establish the Center for Women in Mathematics and its post-baccalaureate program. She also developed and supported a range of other academic and community-building initiatives there, including an undergraduate research course, the annual Women In Mathematics In the Northeast (WIMIN) conference, a program for junior visitors, a high school outreach program, and weekly seminars. The AWM cited her record of inspiring undergraduate women to discover and pursue mathematics, and to become mathematicians, in awarding her the 2015 M. Gweneth Humphreys Award.[1] References 1. "Ruth Haas Honored with Humphreys Award". Association for Women in Mathematics. Retrieved 27 April 2019. 2. "Gweneth Humphreys Awards". Association for Women in Mathematics. Retrieved 26 April 2019. 3. "2018 Inaugural Class of AWM Fellows". Association for Women in Mathematics. Retrieved 9 January 2021. 4. "History of the AWM". Association for Women in Mathematics. Retrieved 27 April 2019. 5. "Ruth Haas". Smith College. Retrieved January 12, 2018. 6. "Ruth Haas". Mathematics Genealogy Project. Retrieved 12 January 2018. 7. "Smith Bids Farewell to Retiring Faculty Members". Grécourt Gate. Smith College. May 26, 2017. Retrieved January 12, 2018. External links • Official website Presidents of the Association for Women in Mathematics 1971–1990 • Mary W. Gray (1971–1973) • Alice T. Schafer (1973–1975) • Lenore Blum (1975–1979) • Judith Roitman (1979–1981) • Bhama Srinivasan (1981–1983) • Linda Preiss Rothschild (1983–1985) • Linda Keen (1985–1987) • Rhonda Hughes (1987–1989) • Jill P. Mesirov (1989–1991) 1991–2010 • Carol S. Wood (1991–1993) • Cora Sadosky (1993–1995) • Chuu-Lian Terng (1995–1997) • Sylvia M. Wiegand (1997–1999) • Jean E. Taylor (1999–2001) • Suzanne Lenhart (2001–2003) • Carolyn S.
Gordon (2003–2005) • Barbara Keyfitz (2005–2007) • Cathy Kessel (2007–2009) • Georgia Benkart (2009–2011) 2011–0000 • Jill Pipher (2011–2013) • Ruth Charney (2013–2015) • Kristin Lauter (2015–2017) • Ami Radunskaya (2017–2019) • Ruth Haas (2019–2021) • Kathryn Leonard (2021–2023) • Talitha Washington (2023–2025) Authority control: Academics • MathSciNet • Mathematics Genealogy Project
Ruth I. Michler Memorial Prize The Ruth I. Michler Memorial Prize is an annual prize in mathematics, awarded by the Association for Women in Mathematics to honor outstanding research by a female mathematician who has recently earned tenure. The prize funds the winner to spend a semester as a visiting faculty member at Cornell University, working with the faculty there and presenting a distinguished lecture on their research.[1][2] It is named after Ruth I. Michler (1967–2000), a German-American mathematician born at Cornell, who died young in a construction accident.[3] The award was first offered in 2007. Its winners and their lectures have included:[1][2][4] • Rebecca Goldin (2007), "The Geometry of Polygons" • Irina Mitrea (2008), "Boundary-Value Problems for Higher-Order Elliptic Operators" • Maria Gordina (2009), "Lie's Third Theorem in Infinite Dimensions" • Patricia Hersh (2010), "Regular CS Complexes, Total Positivity and Bruhat Order" • Anna Mazzucato (2011), "The Analysis of Incompressible Fluids at High Reynolds Numbers" • Ling Long (2012), "Atkin and Swinnerton-Dyer Congruences" • Megumi Harada (2013), "Newton-Okounkov bodies and integrable systems" • Sema Salur (2014), "Manifolds with G2 structure and beyond" • Malabika Pramanik (2015), "Needles, Bushes, Hairbrushes, and Polynomials" • Pallavi Dani (2016), "Large-scale geometry of right-angled Coxeter groups" • Julia Gordon (2017), "Wilkie's theorem and (ineffective) uniform bounds" • Julie Bergner (2018), "2-Segal structures and the Waldhausen S-construction" • Anna Skripka (2019), "Untangling noncommutativity with operator integrals" • Shabnam Akhtari (2021), "Representation of integers by binary forms" • Emily E. Witt (2022), "Local cohomology: An algebraic tool capturing geometric data" See also • List of awards honoring women • List of mathematics awards References 1. Ruth I. Michler Memorial Prizes, Association for Women in Mathematics, retrieved 2019-10-26 2. The Ruth I Michler Memorial Prize of the AWM, MacTutor History of Mathematics Archive, retrieved 2019-10-26 3. O'Connor, John J.; Robertson, Edmund F., "Ruth Ingrid Michler", MacTutor History of Mathematics Archive, University of St Andrews 4. Michler Lecture Series, Cornell University, retrieved 2019-10-26. See also Department of Mathematics Michler Lecture Series - Julie Bergner, The University of Virginia. Talk Title: 2-Segal structures and the Waldhausen S-construction, Cornell University, retrieved 2019-10-26
Ruth J. Williams Ruth Jeannette Williams is an Australian-born American mathematician at the University of California, San Diego where she holds the Charles Lee Powell Chair as a Distinguished Professor of Mathematics. Her research concerns probability theory and stochastic processes.[1] Ruth J. Williams Born Australia Alma materUniversity of Melbourne Stanford University Known forProbability theory Stochastic process AwardsJohn von Neumann Theory Prize Scientific career FieldsMathematics InstitutionsUniversity of California, San Diego ThesisBrownian motion in a wedge with oblique reflection at the boundary (1983) Doctoral advisorChung Kai-lai Early life and education Williams was born in Australia and moved to the United States in 1978.[2] Williams graduated from the University of Melbourne with a Bachelor of Sciences, with honors, in 1976 and a Master of Science in mathematics in 1978.[3] Williams went on to earn her Ph.D. from Stanford University in 1983, under the supervision of Chung Kai-lai.[4][5] Recognition Williams was president of the Institute of Mathematical Statistics from 2011 to 2012. Williams is a member of the National Academy of Sciences and a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the American Mathematical Society, the Institute of Mathematical Statistics, the Institute for Operations Research and the Management Sciences,[6] and the Society for Industrial and Applied Mathematics.[7] In 1998 she was an Invited Speaker of the International Congress of Mathematicians in Berlin.[8] Williams was an American Mathematical Society (AMS) Council member at large.[9] Her other awards and honors include: • Alfred P. Sloan Fellow (1988) [10] • Guggenheim Fellow (2001)[11] • Best Publication Award of the INFORMS Applied Probability Society (2007), jointly with Amber L. Puha and H. Christian Gromoll[12] • John von Neumann Theory Prize (2016), jointly with Martin I. Reiman for "seminal research contributions over the past several decades, to the theory and applications of stochastic networks/systems and their heavy traffic approximations".[12][13] • Honorary Doctorate from the University of Melbourne (2018) [2] • Award for the Advancement of Women in Operations Research and the Management Sciences (2017), annual INFORMS meeting[12] • Honorary Doctor of Science degree from La Trobe University in Australia[12] • National Science Foundation Presidential Young Investigator[12] References 1. Ruth Williams, UCSD, retrieved 2014-12-24. 2. Cashin, Kasey (2018-12-11). "Ruth Williams receives Honorary Doctorate". School of Mathematics and Statistics. Retrieved 2019-11-23. 3. "Citation in support of Ruth Williams' nomination for an Honorary Doctorate" (PDF). University of Melbourne. December 11, 2018. Retrieved November 23, 2019. 4. Ruth Jeannette Williams at the Mathematics Genealogy Project 5. Williams, Ruth Jeannette (1983). Brownian motion in a wedge with oblique reflection at the boundary / (Thesis). Stanford University. 6. Faculty profile, UCSD, retrieved 2014-12-24. 7. SIAM Announces Class of 2020 Fellows, SIAM, March 31, 2020, retrieved 2020-06-12 8. Williams, Ruth J. (1998). "Reflecting diffusions and queueing networks". Doc. Math. (Bielefeld) Extra Vol. ICM Berlin, 1998, vol. III. pp. 321–330. 9. "AMS Committees". American Mathematical Society. Retrieved 2023-03-27. 10. "Past Fellows". sloan.org. Retrieved 2019-11-23. 11. "John Simon Guggenheim Foundation | Ruth J. Williams". Retrieved 2019-11-23. 12. 
"Ruth Williams | ARC Centre of Excellence for Mathematical and Statistical Frontiers". acems.org.au. Retrieved 2019-11-23. 13. "Reiman, Williams share von Neumann Prize", INFORMS News, Institute for Operations Research and the Management Sciences, 43 (6), December 2016 John von Neumann Theory Prize 1975–1999 • George Dantzig (1975) • Richard Bellman (1976) • Felix Pollaczek (1977) • John F. Nash / Carlton E. Lemke (1978) • David Blackwell (1979) • David Gale / Harold W. Kuhn / Albert W. Tucker (1980) • Lloyd Shapley (1981) • Abraham Charnes / William W. Cooper / Richard J. Duffin (1982) • Herbert Scarf (1983) • Ralph Gomory (1984) • Jack Edmonds (1985) • Kenneth Arrow (1986) • Samuel Karlin (1987) • Herbert A. Simon (1988) • Harry Markowitz (1989) • Richard Karp (1990) • Richard E. Barlow / Frank Proschan (1991) • Alan J. Hoffman / Philip Wolfe (1992) • Robert Herman (1993) • Lajos Takacs (1994) • Egon Balas (1995) • Peter C. Fishburn (1996) • Peter Whittle (1997) • Fred W. Glover (1998) • R. Tyrrell Rockafellar (1999) 2000–present • Ellis L. Johnson / Manfred W. Padberg (2000) • Ward Whitt (2001) • Donald L. Iglehart / Cyrus Derman (2002) • Arkadi Nemirovski / Michael J. Todd (2003) • J. Michael Harrison (2004) • Robert Aumann (2005) • Martin Grötschel / László Lovász / Alexander Schrijver (2006) • Arthur F. Veinott, Jr. (2007) • Frank Kelly (2008) • Yurii Nesterov / Yinyu Ye (2009) • Søren Asmussen / Peter W. Glynn (2010) • Gérard Cornuéjols (2011) • George Nemhauser / Laurence Wolsey (2012) • Michel Balinski (2013) • Nimrod Megiddo (2014) • Vašek Chvátal / Jean Bernard Lasserre (2015) • Martin I. Reiman / Ruth J. Williams (2016) • Donald Goldfarb / Jorge Nocedal (2017) • Dimitri Bertsekas / John Tsitsiklis (2018) • Dimitris Bertsimas / Jong-Shi Pang (2019) • Adrian Lewis (2020) • Alexander Shapiro (2021) • Vijay Vazirani (2022) Authority control International • ISNI • 2 • VIAF • 2 National • Norway • France • BnF data • Germany • Israel • United States • Latvia • Australia • Netherlands • 2 Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH People • Trove Other • IdRef • 2
Ruth I. Michler Ruth I. Michler (March 8, 1967 to November 1, 2000)[1][2][3][4] was an American-born mathematician of German descent who lived and worked in the United States. She earned her Ph.D. in Mathematics from the University of California, Berkeley,[5] and she was a tenured associate professor at the University of North Texas. She died at the age of 33 while visiting Northeastern University, after which at least three memorial conferences were held in her honor, and the Ruth I. Michler Memorial Prize was established in her memory. Ruth I. Michler BornMarch 8, 1967 Ithaca, New York DiedNovember 1, 2000 NationalityAmerican Alma materUniversity of California, Berkeley Scientific career Fieldscommutative algebra, algebraic geometry InstitutionsUniversity of North Texas Doctoral advisorArthur Ogus, Mariusz Wodzicki Early years Michler was the daughter of German mathematician Gerhard O. Michler and was born in Ithaca, New York while her family was visiting Cornell University from Germany.[1] She grew up in Germany, living in Tübingen, Giessen, and Essen.[2] She completed her undergraduate studies in 1988 at the University of Oxford, graduating summa cum laude.[3][6] Doctoral studies and research Michler earned her Ph.D. in Mathematics in 1993 from the University of California, Berkeley. Her dissertation is titled "Hodge components of cyclic homology of affine hypersurfaces."[5][7] Her advisors were Mariusz Wodzicki and Arthur Ogus. She spent the academic year 1993-1994 as a postdoc at Queen's University working with Leslie Roberts. In 1994, she joined the tenure-track faculty at the University of North Texas where she earned tenure in 2000. She was the author of eleven (11) research articles in commutative algebra and algebraic geometry.[8][9] She organized several special sessions at meetings of the American Mathematical Society.[10][11][12] The session in San Antonio resulted in a conference proceedings which Michler co-edited.[13] In 2000 she was awarded a National Science Foundation POWRE grant to visit Northeastern University.[14] Memorial conferences and prize Michler was killed in an accident in Boston on November 1, 2000, when she was struck by a construction vehicle while riding her bicycle.[4][15][6] Several conferences were organized in her honor.[16][17] Two conferences resulted in a volume of papers dedicated to her memory[18][19] which includes a dedicatory article[20] and an article describing her research.[21] In 2007 the Association for Women in Mathematics inaugurated the Ruth I. Michler Memorial Prize which is "awarded annually to a woman recently promoted to Associate Professor or an equivalent position in the mathematical sciences".[22] References 1. "Ruth Michler biography". www-history.mcs.st-andrews.ac.uk. Retrieved 2019-01-26. 2. "Cornell Math - About Ruth Michler". pi.math.cornell.edu. Retrieved 2019-01-26. 3. "Commemorating Dr. Ruth Michler". web.northeastern.edu. Retrieved 2019-01-26. 4. "The Valuation Theory Home Page: Very Sad News. Includes memorial articles from Boston Globe, Boston Herald, and Texas Star". math.usask.ca. Retrieved 2019-01-26. 5. "Ruth Michler - The Mathematics Genealogy Project". genealogy.math.ndsu.nodak.edu. Retrieved 2019-01-26. 6. "Association for Women in Mathematics Newsletter Jan-Feb 2001, In Memoriam". www.drivehq.com. Retrieved 2019-01-26. 7. "Math Reviews review for 'Hodge-components of cyclic homology of singular affine hypersurfaces.'". MR 2690218. {{cite journal}}: Cite journal requires |journal= (help) 8. 
"Math Reviews author page: Michler, Ruth". mathscinet.ams.org. Retrieved 2019-01-26. 9. "Dr. Ruth Michler: Recent Publications". web.northeastern.edu. Retrieved 2019-01-26. 10. "American Mathematical Society meeting special session Washington, DC". jointmathematicsmeetings.org. Retrieved 2019-01-26. 11. "American Mathematical Society meeting special session San Antonio". jointmathematicsmeetings.org. Retrieved 2019-01-26. 12. "American Mathematical Society meeting special session San Francisco". www.ams.org. Retrieved 2019-01-26. 13. "Math Review: edited volume, 'Singularities in algebraic and analytic geometry'". MR 1792143. {{cite journal}}: Cite journal requires |journal= (help) 14. "NSF Award Search: Award#0075057 - POWRE: Differentials, Singularities and Applications". www.nsf.gov. Retrieved 2019-01-26. 15. "Northeastern University, Department of Mathematics 'Tragic Accident'". mathserver.neu.edu. Retrieved 2019-01-26. 16. "Conferences Commemorating Dr. Ruth Michler". web.northeastern.edu. Retrieved 2019-01-26. 17. "AWM at JMM 2011". Association for Women in Mathematics (AWM). Retrieved 2019-01-26. 18. "Math Review 'Topics in algebraic and noncommutative geometry'". MR 2017395. {{cite journal}}: Cite journal requires |journal= (help) 19. "Topics in Algebraic and Noncommutative Geometry: Proceedings in Memory of Ruth Michler". bookstore.ams.org. Retrieved 2019-01-26. 20. "Math Review 'Dedication [to Dr. Ruth Ingrid Michler]'". MR 2017396. {{cite journal}}: Cite journal requires |journal= (help) 21. "Math Review 'Dr. Ruth I. Michler's research'". MR 1986110. {{cite journal}}: Cite journal requires |journal= (help) 22. "Ruth I. Michler Prize". Association for Women in Mathematics (AWM). Retrieved 2019-01-26. Authority control International • ISNI • VIAF National • France • BnF data • Germany • Israel • United States • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Ruth Silverman Ruth Silverman (born 1936 or 1937, died April 25, 2011)[1] was an American mathematician and computer scientist known for her research in computational geometry. She was one of the original founders of the Association for Women in Mathematics in 1971.[2][3] Ruth Silverman Born 1936 or 1937 Died(2011-04-25)April 25, 2011, age 74 Academic background Alma materUniversity of Washington ThesisDecomposition of plane convex sets Academic work DisciplineMathematics Sub-disciplinecomputational geometry InstitutionsNew Jersey Institute of Technology, Southern Connecticut State College, University of the District of Columbia, University of Maryland, College Park Education and career Silverman completed a Ph.D. in 1970 at the University of Washington.[4] She was a faculty member at the New Jersey Institute of Technology, an associate professor at Southern Connecticut State College,[5] a computer science instructor at the University of the District of Columbia, and a researcher in the Center for Automation Research at the University of Maryland, College Park.[1] Contributions Silverman's dissertation, Decomposition of plane convex sets,[4] concerned the characterization of compact convex sets in the Euclidean plane that cannot be formed as Minkowski sums of simpler sets.[6] She became known for her research in computational geometry and particular for highly cited publications on k-means clustering[KM] and nearest neighbor search.[NN] Other topics in Silverman's research include robust statistics[LT] and small sets of points that meet every line in finite projective planes.[IP] Selected publications IP. Erdős, P.; Silverman, R.; Stein, A. (1983), "Intersection properties of families containing sets of nearly the same size", Ars Combinatoria, 15: 247–259, MR 0706303 NN. Arya, Sunil; Mount, David M.; Netanyahu, Nathan S.; Silverman, Ruth; Wu, Angela Y. (1998), "An optimal algorithm for approximate nearest neighbor searching in fixed dimensions", Journal of the ACM, 45 (6): 891–923, doi:10.1145/293347.293348, MR 1678846, S2CID 8193729 KM. Kanungo, T.; Mount, D. M.; Netanyahu, N. S.; Piatko, C. D.; Silverman, R.; Wu, A. Y. (2002), "An efficient k-means clustering algorithm: analysis and implementation", IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (7): 881–892, doi:10.1109/tpami.2002.1017616 LT. Mount, David M.; Netanyahu, Nathan S.; Piatko, Christine D.; Silverman, Ruth; Wu, Angela Y. (2014), "On the least trimmed squares estimator", Algorithmica, 69 (1): 148–183, doi:10.1007/s00453-012-9721-8, MR 3172284, S2CID 6796756 References 1. "Ruth Silverman (age 74)", Paid death notices, Washington Post, April 28, 2011 2. Blum, Lenore (September 1991), "A Brief History of the Association for Women in Mathematics: The Presidents' Perspectives", Notices of the American Mathematical Society, 38 (7): 738–774, archived from the original on 2017-07-29, retrieved 2018-02-12. See section "What we did ... (In the beginning): Atlantic City". 3. Kenschaft, Patricia C. (2005), Change is Possible: Stories of Women and Minorities in Mathematics, American Mathematical Society, p. 131, ISBN 9780821837481 4. MathSciNet record for Silverman's dissertation: MR2620174 5. "News and Notices", American Mathematical Monthly, 86 (5): 418–420, May 1979, doi:10.1080/00029890.1979.11994820, JSTOR 2321116 6. Schneider, Rolf (2014), Convex bodies: the Brunn-Minkowski theory, Encyclopedia of Mathematics and its Applications, vol. 151 (2nd ed.), Cambridge University Press, Cambridge, pp. 
168–169, ISBN 978-1-107-60101-7, MR 3155183 External links • Case, B.A.; Leggett, A.M. (2016). Complexities: Women in Mathematics. Princeton University Press. p. 81. ISBN 978-1-4008-8016-4.
Ruth Stokes Ruth Wyckliffe Stokes (October 12, 1890 or 1891 – August 27, 1968) was an American mathematician, cryptologist, and astronomer. She earned the first doctorate in mathematics from Duke University, made pioneering contributions to the theory of linear programming, and founded the Pi Mu Epsilon journal. Ruth Stokes Stokes in 1957 Born(1890-10-12)October 12, 1890 DiedAugust 27, 1968(1968-08-27) (aged 77) EducationWinthrop Normal and Industrial College (Bachelor's), Vanderbilt University (Master's), Duke University (Ph.D.) Known formathematician, cryptologist, and astronomer Early life and education Stokes was born on October 12, 1890 or 1891[1] in Mountville, South Carolina, one of six children of William Henry Stokes, a physician and farmer, and his wife Francis Emily Fuller Stokes. She earned a bachelor's degree in 1911 from the Winthrop Normal and Industrial College, a women's college that later became Winthrop University, and began working as a high school mathematics teacher. She was principal of a school in Rock Hill, South Carolina, from 1913 to 1916, and head of mathematics at Synodical College in Fulton, Missouri, from 1916 to 1917.[2][3] She subsequently held two more teaching positions in South Carolina. During this time she also studied mathematics by correspondence through Columbia University, the University of Virginia, and the University of Chicago.[3] She returned to graduate study in 1922 at Vanderbilt University, where she earned a master's degree in mathematics in 1923 with a thesis in the history of mathematics on the fundamental theorem of algebra. She became an instructor at Winthrop College, and began taking summer classes at the University of Wisconsin–Madison, entering more formal doctoral study at Duke University in 1928.[2][3] She completed her Ph.D. in 1931, supervised by Joseph Miller Thomas,[4] becoming the first person to earn a doctorate in mathematics at Duke.[2] Her dissertation, A Geometric Theory of Solution of Linear Inequalities, represented pioneering work in linear programming, following on from the work of Lloyd Dines and Hermann Minkowski.[5] Career and later life Stokes expected her position at Winthrop to be waiting for her on the completion of her doctorate, but David Bancroft Johnson, the president of Winthrop with whom she had made this agreement, died in 1928 and the next president did not hold to the agreement.[3] After continuing at Duke as an instructor for a year, Stokes became a mathematics instructor at North Texas State Teachers College (now the University of North Texas) from 1932 until 1935, when she became head of mathematics at Mitchell College in Statesville, North Carolina.[2] In 1936, Stokes returned once more to Winthrop College where she became a professor of astronomy and mathematics and, later, the head of mathematics. Her astronomical work included an excursion to Florida to observe the solar eclipse of April 7, 1940. As a response to World War II, in 1942, she instituted a program in cryptology, and began teaching navigation and astronomy to pilots in the United States Army Air Corps. 
During this period at Winthrop she also chaired the Southeastern Section of the Mathematical Association of America and was president of the section for mathematics of the South Carolina Education Association.[2][3] Stokes had increasingly found herself in dispute with the Winthrop College administration, and in 1946 she moved to Syracuse University as an assistant professor of mathematics and education, promoted to associate professor in 1953. There, she became founding editor of the Pi Mu Epsilon journal in 1949. She also participated in the International Congress of Mathematicians in 1950, exhibiting a collection of mathematical models. She retired from Syracuse as associate professor emerita in 1959, continuing to teach for one more year as an associate professor at Longwood College in Farmville, Virginia.[2][3] After retirement, Stokes returned to Mountville, South Carolina. She died on August 27, 1968.[2][3] Recognition Stokes was named a Fellow of the American Association for the Advancement of Science in 1950.[6] References 1. Lee gives her birth year as 1891; Green and LaDuke note conflicting sources for the year but conclude that 1890 is more likely. 2. Lee, Susanna O. (2016), "Chapter 21: Dr. Ruth W. Stokes", From Winthrop to Washington, Winthrop College 3. Green, Judy; LaDuke, Jeanne (2008), Pioneering Women in American Mathematics: The Pre-1940 PhD's, History of Mathematics, vol. 34 (1st ed.), American Mathematical Society, The London Mathematical Society, pp. 294–295, ISBN 978-0-8218-4376-5; see also extended biography on pp. 578–579 of the Supplementary Material at the AMS web site for the book. 4. Ruth Stokes at the Mathematics Genealogy Project 5. Kjeldsen, Tinne Hoff (2002), "Different motivations and goals in the historical development of the theory of systems of linear inequalities", Archive for History of Exact Sciences, 56 (6): 469–538, doi:10.1007/s004070200057, JSTOR 41134152, MR 1940044, S2CID 38713659; see in particular pp. 509–510. 6. Historic Fellows, American Association for the Advancement of Science, retrieved 2021-04-19 Authority control: Academics • MathSciNet • Mathematics Genealogy Project
Ruth Lawrence Ruth Elke Lawrence-Neimark (Hebrew: רות אלקה לורנס-נאימרק, born 2 August 1971) is a British–Israeli mathematician and a professor of mathematics at the Einstein Institute of Mathematics, Hebrew University of Jerusalem, and a researcher in knot theory and algebraic topology. In the public eye, she is best known for having been a child prodigy in mathematics. Ruth Lawrence Ruth Lawrence, Berkeley 1991 Born (1971-08-02) 2 August 1971 Brighton, England Alma materUniversity of Oxford (MA, DPhil) Known forBeing a child prodigy Lawrence–Krammer representation AwardsFellow of the American Mathematical Society Scientific career FieldsTopology, knot theory InstitutionsHebrew University of Jerusalem University of Michigan ThesisHomology representations of braid groups (1989) Doctoral advisorMichael Atiyah Early life Ruth Lawrence was born in Brighton, England. Her parents, Harry Lawrence and Sylvia Greybourne, were both computer consultants. When Lawrence was five, her father gave up his job so that he could educate her at home.[1] Education At the age of nine, Lawrence gained an O-level in mathematics, setting a new age record (later surpassed in 2001 when Arran Fernandez successfully sat GCSE mathematics aged five).[2] Also at the age of nine she achieved a Grade A at A-level pure mathematics.[1] In 1981 Lawrence passed the Oxford University entrance examination in mathematics, joining St Hugh's College in 1983 at the age of 12. At Oxford, her father continued to be actively involved in her education, accompanying her to all lectures and some tutorials. Lawrence completed her bachelor's degree in two years, instead of the normal three, and graduated in 1985 at the age of 13 with a starred first and special commendation. Attracting considerable press interest, she became the youngest British person to gain a first-class degree, and the youngest to graduate from the University of Oxford in modern times.[1] Lawrence followed her first degree with a bachelor's degree in physics in 1986 and a Doctor of Philosophy (DPhil) degree in mathematics at Oxford in June 1989, at the age of 17. Her doctoral thesis title was Homology representations of braid groups and her thesis adviser was Sir Michael Atiyah.[3] Academic career Lawrence and her father moved to America for Lawrence's first academic post, which was at Harvard University, where she became a junior fellow in 1990 at the age of 19. In 1993, she moved to the University of Michigan, where she became an associate professor with tenure in 1997. In 1998, Lawrence married Ariyeh Neimark, a mathematician at the Hebrew University of Jerusalem, and adopted the name Ruth Lawrence-Neimark. The following year, she moved to Israel with him and took up the post of associate professor of mathematics at the Einstein Institute of Mathematics, a part of the Hebrew University of Jerusalem.[1] Research Lawrence's 1990 paper, "Homological representations of the Hecke algebra", in Communications in Mathematical Physics, introduced, among other things, certain novel linear representations of the braid group — known as Lawrence–Krammer representations. In papers published in 2000 and 2001, Daan Krammer and Stephen Bigelow established the faithfulness of Lawrence's representation. 
This result goes by the phrase "braid groups are linear."[4] Awards and honors In 2012 she became a fellow of the American Mathematical Society.[5] Selected publications • Lawrence, R.J.,An explicit symmetric DGLA model of a triangle, joint with Itay Griniasty (2018) • Lawrence, R.J.,A formula for topology/deformations and its significance,joint with Dennis Sullivan Fundamenta Mathematica 225 (2014) 229-242. • Lawrence, R.J., Homological representations of the Hecke algebra, Communications in Mathematical Physics, V 135, N 1, pp 141–191 (1990). • Lawrence, R. and Zagier, D., Modular forms and quantum invariants of 3-manifolds. Asian Journal of Mathematics, V 3, N 1, pp 93–108 (1999). • Lawrence, R. and Rozansky, L., Witten–Reshetikhin–Turaev Invariants of Seifert Manifolds. Communications in Mathematical Physics, V. 205, N 2, pp. 287–314 (1999). References 1. "1985: Teenage genius gets a first". BBC. 4 July 1985. Retrieved 6 August 2014. 2. "Boy who broke GCSE record at five is off to Cambridge". London Evening Standard. Archived from the original on 22 April 2013. Retrieved 2 September 2010. 3. Ruth Lawrence at the Mathematics Genealogy Project 4. Bigelow, Stephen (2003), "The Lawrence–Krammer representation", Topology and geometry of manifolds, Proc. Sympos. Pure Math., vol. 71, Providence, RI: Amer. Math. Soc., pp. 51–68, MR 2024629 5. List of Fellows of the American Mathematical Society, retrieved 2013-01-27. External links • Ruth Lawrence's home page at the Hebrew University of Jerusalem Authority control International • ISNI • VIAF National • France • BnF data • Israel • Netherlands Academics • DBLP • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Ruth Moufang Ruth Moufang (10 January 1905 – 26 November 1977) was a German mathematician. Ruth Moufang Born(1905-01-10)10 January 1905 Darmstadt, Germany Died26 November 1977(1977-11-26) (aged 72) Frankfurt am Main, Germany Nationality German Alma materGoethe University Frankfurt Known for • Moufang loop • Moufang plane • Moufang polygon • Moufang–Lie algebra • Moufang set Scientific career FieldsMathematics InstitutionsGoethe University Frankfurt Doctoral advisorMax Dehn Biography Born to German chemist Eduard Moufang and Else Fecht Moufang. Eduard Moufang was the son of Friedrich Carl Moufang (1848-1885) from Mainz, and Elisabeth von Moers from Mainz. Ruth Moufang's mother was Else Fecht, who was the daughter of Alexander Fecht (1848-1913) from Kehl and Ella Scholtz (1847-1921). Ruth was the younger of her parents' two daughters, having an elder sister named Erica.[1] Education and career She studied mathematics at the University of Frankfurt. In 1931 she received her Ph.D. on projective geometry under the direction of Max Dehn, and in 1932 spent a fellowship year in Rome. After her year in Rome, she returned to Germany to lecture at the University of Königsberg and the University of Frankfurt.[1] Denied permission to teach by the minister of education of Nazi Germany, she worked at Research and Development of Krupp (battleships, U-boats, tanks, howitzers, guns, etc.), where she became the first German woman with a doctorate to be employed as an industrial mathematician. At the end of World War II she was leading the Department of Applied mathematics at the arms industry of Krupp. In 1946 she was finally allowed to accept a teaching position at the University of Frankfurt, and in 1957 she became the first woman professor at the university.[2] Research Moufang's research in projective geometry built upon the work of David Hilbert. She was responsible for ground-breaking work on non-associative algebraic structures, including the Moufang loops named after her.[1] In 1933, Moufang showed Desargues's theorem does not hold in the Cayley plane. The Cayley plane uses octonion coordinates which do not satisfy the associative law. Such connections between geometry and algebra had been previously noted by Karl von Staudt and David Hilbert.[1] Ruth Moufang thus initiated a new branch of geometry called Moufang planes.[2] She published 7 papers on this topic, these are Zur Struktur der projectiven Geometrie der Ebene Ⓣ (1931); Die Einführung in der ebenen Geometrie mit Hilfe des Satzes vom vollständigen Vierseit Ⓣ (1931); Die Schnittpunktssätze des projektiven speziellen Fünfecksnetzes in ihrer Abhängigkeit voneinander Ⓣ (1932); Ein Satz über die Schnittpunktsätze des allgemeinen Fünfecksnetzes Ⓣ (1932); Die Desarguesschen Sätze von Rang 10 Ⓣ (1933); Alternativkörper und der Satz vom vollständigen Vierseit D9 Ⓣ (1934); and Zur Struktur von Alternativkörpern Ⓣ (1934). Moufang published only one paper on group theory, Einige Untersuchungen über geordenete Schiefkörper Ⓣ, which appeared in print in 1937.[1] References 1. "Ruth Moufang - Biography". Maths History. Retrieved 2022-05-08. 2. "Ruth Moufang". mathwomen.agnesscott.org. Retrieved 2022-05-08. • O'Connor, John J.; Robertson, Edmund F., "Ruth Moufang", MacTutor History of Mathematics Archive, University of St Andrews • Ruth Moufang at the Mathematics Genealogy Project • "Ruth Moufang", Biographies of Women Mathematicians, Agnes Scott College • Bhama Srinivasan (1984) "Ruth Moufang, 1905—1977" Mathematical Intelligencer 6(2):51–5. 
Ruthmae Sears Ruthmae Sears is a Bahamian-American mathematics educator, focusing on systemic inequities that impede student understanding of mathematics. She is an associate professor for secondary mathematics education in the University of South Florida College of Education.[1] Education and career Sears is originally from the Bahamas,[2] and studied mathematics, statistics, and secondary mathematics at the College of the Bahamas, earning associate of arts and bachelor of education degrees there. She has a master's degree in mathematics education from Indiana University, and a Ph.D. from the University of Missouri.[3] She has taught high school mathematics in the Bahamas, and became an assistant professor at the University of South Florida in 2012, earning tenure as an associate professor in 2018.[3] She is also a member of the board of directors of Pace Bahamas, an educational foundation in the Bahamas.[4] Recognition The Florida Association of Mathematics Teacher Educators named Sears as their 2016 Mathematics Teacher Educator of the Year.[3] Sears was named to the 2021 class of Fellows of the American Association for the Advancement of Science,[5] becoming the first Black faculty member at the University of South Florida to win this honor.[6] References 1. "Dr. Ruthmae Sears", Faculty profiles, University of South Florida College of Education, retrieved 2022-01-31 2. "Ruthmae Sears", Boundless Bulls, University of South Florida, 13 January 2022, retrieved 2022-01-31 3. "Ruthmae Sears", Community member profiles, Science Education Resource Center at Carleton College, retrieved 2022-01-31 4. "Dr. Ruthmae Sears, director", Board of directors, Pace Bahamas, retrieved 2022-01-31 5. 2021 Fellows, American Association for the Advancement of Science, retrieved 2022-01-31 6. "USF celebrates commitment to Black community throughout Black Heritage Month", University News, University of South Florida, 1 February 2022, retrieved 2022-01-31 External links • Ruthmae Sears publications indexed by Google Scholar
Ruziewicz problem In mathematics, the Ruziewicz problem (sometimes Banach–Ruziewicz problem) in measure theory asks whether the usual Lebesgue measure on the n-sphere is characterised, up to proportionality, by its properties of being finitely additive, invariant under rotations, and defined on all Lebesgue measurable sets. This was answered affirmatively and independently for n ≥ 4 by Grigory Margulis and Dennis Sullivan around 1980, and for n = 2 and 3 by Vladimir Drinfeld (published 1984). It fails for the circle. The problem is named after Stanisław Ruziewicz. References • Lubotzky, Alexander (1994), Discrete groups, expanding graphs and invariant measures, Progress in Mathematics, vol. 125, Basel: Birkhäuser Verlag, ISBN 0-8176-5075-X. • Drinfeld, Vladimir (1984), "Finitely-additive measures on S2 and S3, invariant with respect to rotations", Funktsional. Anal. i Prilozhen., 18 (3): 77, MR 0757256. • Margulis, Grigory (1980), "Some remarks on invariant means", Monatshefte für Mathematik, 90 (3): 233–235, doi:10.1007/BF01295368, MR 0596890. • Sullivan, Dennis (1981), "For n > 3 there is only one finitely additive rotationally invariant measure on the n-sphere on all Lebesgue measurable sets", Bulletin of the American Mathematical Society, 4 (1): 121–123, doi:10.1090/S0273-0979-1981-14880-1, MR 0590825. • Survey of the area by Hee Oh
Ruzsa–Szemerédi problem In combinatorial mathematics and extremal graph theory, the Ruzsa–Szemerédi problem or (6,3)-problem asks for the maximum number of edges in a graph in which every edge belongs to a unique triangle. Equivalently it asks for the maximum number of edges in a balanced bipartite graph whose edges can be partitioned into a linear number of induced matchings, or the maximum number of triples one can choose from $n$ points so that every six points contain at most two triples. The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who first proved that its answer is smaller than $n^{2}$ by a slowly-growing (but still unknown) factor.[1] Equivalence between formulations The following questions all have answers that are asymptotically equivalent: they differ by, at most, constant factors from each other.[1] • What is the maximum possible number of edges in a graph with $n$ vertices in which every edge belongs to a unique triangle?[2] The graphs with this property are called locally linear graphs[3] or locally matching graphs.[4] • What is the maximum possible number of edges in a bipartite graph with $n$ vertices on each side of its bipartition, whose edges can be partitioned into $n$ induced subgraphs that are each matchings?[1] • What is the largest possible number of triples of points that one can select from $n$ given points, in such a way that every six points contain at most two of the selected triples?[5] The Ruzsa–Szemerédi problem asks for the answer to these equivalent questions. To convert the bipartite graph induced matching problem into the unique triangle problem, add a third set of $n$ vertices to the graph, one for each induced matching, and add edges from vertices $u$ and $v$ of the bipartite graph to vertex $w$ in this third set whenever bipartite edge $uv$ belongs to induced matching $w$. The result is a balanced tripartite graph with $3n$ vertices and the unique triangle property. In the other direction, an arbitrary graph with the unique triangle property can be made into a balanced tripartite graph by choosing a partition of the vertices into three equal sets randomly and keeping only the triangles that respect the partition. This will retain (in expectation) a constant fraction of the triangles and edges. A balanced tripartite graph with the unique triangle property can be made into a partitioned bipartite graph by removing one of its three subsets of vertices, and making an induced matching on the neighbors of each removed vertex. To convert a graph with a unique triangle per edge into a triple system, let the triples be the triangles of the graph. No six points can include three triangles without either two of the three triangles sharing an edge or all three triangles forming a fourth triangle that shares an edge with each of them. In the other direction, to convert a triple system into a graph, first eliminate any sets of four points that contain two triples. These four points cannot participate in any other triples, and so cannot contribute towards a more-than-linear total number of triples. Then, form a graph connecting any pair of points that both belong to any of the remaining triples. 
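The conversion from induced matchings to a graph in which every edge lies in a unique triangle can be made concrete with a short brute-force sketch (illustrative Python; the small bipartite input and the function names are hypothetical, not from any published implementation):

```python
def tripartite_from_induced_matchings(matchings):
    """Conversion sketched above: left vertex ('L', u), right vertex ('R', v),
    plus one new vertex ('M', w) per induced matching w.  Each bipartite edge
    (u, v) lying in matching w contributes the triangle
    {('L', u), ('R', v), ('M', w)}."""
    edges = set()
    for w, matching in enumerate(matchings):
        for u, v in matching:
            a, b, c = ('L', u), ('R', v), ('M', w)
            edges |= {frozenset({a, b}), frozenset({a, c}), frozenset({b, c})}
    return edges

def edge_triangle_counts(edges):
    """For each edge, count the triangles containing it (brute force)."""
    vertices = set().union(*edges)
    counts = {}
    for e in edges:
        u, v = tuple(e)
        counts[e] = sum(1 for w in vertices
                        if frozenset({u, w}) in edges and frozenset({v, w}) in edges)
    return counts

# A small hypothetical bipartite graph on {0,...,3} x {0,...,3} whose six edges
# are partitioned into three induced matchings.
matchings = [
    [(0, 0), (1, 1)],
    [(2, 2), (3, 3)],
    [(0, 2), (1, 3)],
]
counts = edge_triangle_counts(tripartite_from_induced_matchings(matchings))
assert all(c == 1 for c in counts.values())   # every edge lies in a unique triangle
print(len(counts), "edges, each contained in exactly one triangle")
```

Because each matching is induced in the bipartite graph, the check confirms that every edge of the resulting tripartite graph lies in exactly one triangle, as the argument above requires.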
Lower bound A nearly-quadratic lower bound on the Ruzsa–Szemerédi problem can be derived from a result of Felix Behrend, according to which the numbers modulo an odd prime number $p$ have large Salem–Spencer sets, subsets $A$ of size $|A|=p/e^{O({\sqrt {\log p}})}$ with no three-term arithmetic progressions.[6] Behrend's result can be used to construct tripartite graphs in which each side of the tripartition has $p$ vertices, there are $3|A|p$ edges, and each edge belongs to a unique triangle. Thus, with this construction, $n=3p$ and the number of edges is $n^{2}/e^{O({\sqrt {\log n}})}$.[5] To construct a graph of this form from Behrend's arithmetic-progression-free subset $A$, number the vertices on each side of the tripartition from $0$ to $p-1$, and construct triangles of the form $(x,x+a,x+2a)$ modulo $p$ for each $x$ in the range from $0$ to $p-1$ and each $a$ in $A$. For example, with $p=3$ and $A=\{\pm 1\}$, the result is a nine-vertex balanced tripartite graph with 18 edges, shown in the illustration. The graph formed from the union of these triangles has the desired property that every edge belongs to a unique triangle. For, if not, there would be a triangle $(x,x+a,x+a+b)$ where $a$, $b$, and $c=(a+b)/2$ all belong to $A$, violating the assumption that there be no arithmetic progressions $(a,c,b)$ in $A$.[5] Upper bound The Szemerédi regularity lemma can be used to prove that any solution to the Ruzsa–Szemerédi problem has at most $o(n^{2})$ edges or triples.[5] A stronger form of the graph removal lemma by Jacob Fox implies that the size of a solution is at most $n^{2}/e^{\Omega (\log ^{*}n)}$. Here the $o$ and $\Omega $ are instances of little o and big Omega notation, and $\log ^{*}$ denotes the iterated logarithm. Fox proves that, in any $n$-vertex graph with $O(n^{3-\delta })$ triangles for some $\delta >0$, one can find a triangle-free subgraph by removing at most $n^{2}/e^{\Omega (\log ^{*}n)}$ edges.[7] In a graph with the unique triangle property, there are (naively) $O(n^{2})$ triangles, so this result applies with $\delta =1$. But in this graph, each edge removal eliminates only one triangle, so the number of edges that must be removed to eliminate all triangles is the same as the number of triangles. History The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who studied this problem, in the formulation involving triples of points, in a 1978 publication.[5] However, it had been previously studied by W. G. Brown, Paul Erdős, and Vera T. 
Sós, in two publications in 1973 which proved that the maximum number of triples can be $\Omega (n^{3/2})$,[8] and conjectured that it was $o(n^{2})$.[9] Ruzsa and Szemerédi provided (unequal) nearly-quadratic upper and lower bounds for the problem, significantly improving the previous lower bound of Brown, Erdős, and Sós, and proving their conjecture.[5] Applications The existence of dense graphs that can be partitioned into large induced matchings has been used to construct efficient tests for whether a Boolean function is linear, a key component of the PCP theorem in computational complexity theory.[10] In the theory of property testing algorithms, the known results on the Ruzsa–Szemerédi problem have been applied to show that it is possible to test whether a graph has no copies of a given subgraph $H$, with one-sided error in a number of queries polynomial in the error parameter, if and only if $H$ is a bipartite graph.[11] In the theory of streaming algorithms for graph matching (for instance to match internet advertisers with advertising slots), the quality of matching covers (sparse subgraphs that approximately preserve the size of a matching in all vertex subsets) is closely related to the density of bipartite graphs that can be partitioned into induced matchings. This construction uses a modified form of the Ruzsa-Szemerédi problem in which the number of induced matchings can be much smaller than the number of vertices, but each induced matching must cover most of the vertices of the graph. In this version of the problem, it is possible to construct graphs with a non-constant number of linear-sized induced matchings, and this result leads to nearly-tight bounds on the approximation ratio of streaming matching algorithms.[12][13][14][15] The subquadratic upper bound on the Ruzsa–Szemerédi problem was also used to provide an $o(3^{n})$ bound on the size of cap sets,[16] before stronger bounds of the form $c^{n}$ for $c<3$ were proven for this problem.[17] It also provides the best known upper bound on tripod packing.[18] References 1. Komlós, J.; Simonovits, M. (1996), "Szemerédi's regularity lemma and its applications in graph theory", Combinatorics, Paul Erdős is eighty, Vol. 2 (Keszthely, 1993), Bolyai Soc. Math. Stud., vol. 2, Budapest: János Bolyai Math. Soc., pp. 295–352, CiteSeerX 10.1.1.31.2310, MR 1395865 2. Clark, L. H.; Entringer, R. C.; McCanna, J. E.; Székely, L. A. (1991), "Extremal problems for local properties of graphs" (PDF), The Australasian Journal of Combinatorics, 4: 25–31, MR 1129266 3. Fronček, Dalibor (1989), "Locally linear graphs", Mathematica Slovaca, 39 (1): 3–6, hdl:10338.dmlcz/136481, MR 1016323 4. Larrión, F.; Pizaña, M. A.; Villarroel-Flores, R. (2011), "Small locally nK2 graphs" (PDF), Ars Combinatoria, 102: 385–391, MR 2867738 5. Ruzsa, I. Z.; Szemerédi, E. (1978), "Triple systems with no six points carrying three triangles", Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, Colloq. Math. Soc. János Bolyai, vol. 18, Amsterdam and New York: North-Holland, pp. 939–945, MR 0519318 6. Behrend, F. A. (December 1946), "On sets of integers which contain no three terms in arithmetical progression", Proceedings of the National Academy of Sciences, 32 (12): 331–332, doi:10.1073/pnas.32.12.331, PMC 1078964, PMID 16578230 7. Fox, Jacob (2011), "A new proof of the graph removal lemma", Annals of Mathematics, Second Series, 174 (1): 561–579, arXiv:1006.1300, doi:10.4007/annals.2011.174.1.17, MR 2811609 8. Sós, V. T.; Erdős, P.; Brown, W. 
G. (1973), "On the existence of triangulated spheres in 3-graphs, and related problems" (PDF), Periodica Mathematica Hungarica, 3 (3–4): 221–228, doi:10.1007/BF02018585, MR 0323647 9. Brown, W. G.; Erdős, P.; Sós, V. T. (1973), "Some extremal problems on r-graphs" (PDF), New Directions in the Theory of Graphs (Proc. Third Ann Arbor Conf., Univ. Michigan, Ann Arbor, Mich, 1971), New York: Academic Press, pp. 53–63, MR 0351888 10. Håstad, Johan; Wigderson, Avi (2003), "Simple analysis of graph tests for linearity and PCP" (PDF), Random Structures & Algorithms, 22 (2): 139–160, doi:10.1002/rsa.10068, MR 1954608 11. Alon, Noga (2002), "Testing subgraphs in large graphs" (PDF), Random Structures & Algorithms, 21 (3–4): 359–370, doi:10.1002/rsa.10056, MR 1945375 12. Goel, Ashish; Kapralov, Michael; Khanna, Sanjeev (2012), "On the communication and streaming complexity of maximum bipartite matching" (PDF), Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, New York: ACM, pp. 468–485, MR 3205231 13. Kapralov, Michael (2013), "Better bounds for matchings in the streaming model", Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, Philadelphia, Pennsylvania: SIAM, pp. 1679–1697, arXiv:1206.2269, doi:10.1137/1.9781611973105.121, MR 3203007 14. Konrad, Christian (2015), "Maximum matching in turnstile streams", Algorithms—ESA 2015, Lecture Notes in Comput. Sci., vol. 9294, Heidelberg: Springer, pp. 840–852, arXiv:1505.01460, doi:10.1007/978-3-662-48350-3_70, MR 3446428 15. Fox, Jacob; Huang, Hao; Sudakov, Benny (2017), "On graphs decomposable into induced matchings of linear sizes", Bulletin of the London Mathematical Society, 49 (1): 45–57, arXiv:1512.07852, doi:10.1112/blms.12005, MR 3653100 16. Frankl, P.; Graham, R. L.; Rödl, V. (1987), "On subsets of abelian groups with no 3-term arithmetic progression", Journal of Combinatorial Theory, Series A, 45 (1): 157–161, doi:10.1016/0097-3165(87)90053-7, MR 0883900 17. Ellenberg, Jordan S.; Gijswijt, Dion (2017), "On large subsets of $\mathbb {F} _{q}^{n}$ with no three-term arithmetic progression", Annals of Mathematics, Second Series, 185 (1): 339–343, arXiv:1605.09223, doi:10.4007/annals.2017.185.1.8, MR 3583358 18. Gowers, W. T.; Long, J. (2016), The length of an $s$-increasing sequence of $r$-tuples, arXiv:1609.08688
Ruzsa triangle inequality In additive combinatorics, the Ruzsa triangle inequality, also known as the Ruzsa difference triangle inequality to differentiate it from some of its variants, bounds the size of the difference of two sets in terms of the sizes of both their differences with a third set. It was proven by Imre Ruzsa (1996),[1] and is so named for its resemblance to the triangle inequality. It is an important lemma in the proof of the Plünnecke-Ruzsa inequality. Statement If $A$ and $B$ are subsets of a group, then the sumset notation $A+B$ is used to denote $\{a+b:a\in A,b\in B\}$. Similarly, $A-B$ denotes $\{a-b:a\in A,b\in B\}$. Then, the Ruzsa triangle inequality states the following. Theorem (Ruzsa triangle inequality) — If $A$, $B$, and $C$ are finite subsets of a group, then $|A||B-C|\leq |A-B||A-C|.$ An alternate formulation involves the notion of the Ruzsa distance.[2] Definition. If $A$ and $B$ are finite subsets of a group, then the Ruzsa distance between these two sets, denoted $d(A,B)$, is defined to be $d(A,B)=\log {\frac {|A-B|}{\sqrt {|A||B|}}}.$ Then, the Ruzsa triangle inequality has the following equivalent formulation: Theorem (Ruzsa triangle inequality) — If $A$, $B$, and $C$ are finite subsets of a group, then $d(B,C)\leq d(A,B)+d(A,C).$ This formulation resembles the triangle inequality for a metric space; however, the Ruzsa distance does not define a metric space since $d(A,A)$ is not always zero. Proof To prove the statement, it suffices to construct an injection from the set $A\times (B-C)$ to the set $(A-B)\times (A-C)$. Define a function $\phi $ as follows. For each $x\in B-C$ choose a $b(x)\in B$ and a $c(x)\in C$ such that $x=b(x)-c(x)$. By the definition of $B-C$, this can always be done. Let $\phi :A\times (B-C)\rightarrow (A-B)\times (A-C)$ be the function that sends $(a,x)$ to $(a-b(x),a-c(x))$. For every point $\phi (a,x)=(y,z)$ in the set is $(A-B)\times (A-C)$, it must be the case that $x=z-y$ and $a=y+b(x)$. Hence, $\phi $ maps every point in $A\times (B-C)$ to a distinct point in $(A-B)\times (A-C)$ and is thus an injection. In particular, there must be at least as many points in $(A-B)\times (A-C)$ as in $A\times (B-C)$. Therefore, $|A||B-C|=|A\times (B-C)|\leq |(A-B)\times (A-C)|=|A-B||A-C|,$ completing the proof. Variants of the Ruzsa triangle inequality The Ruzsa sum triangle inequality is a corollary of the Plünnecke-Ruzsa inequality (which is in turn proved using the ordinary Ruzsa triangle inequality). Theorem (Ruzsa sum triangle inequality) — If $A$, $B$, and $C$ are finite subsets of an abelian group, then $|A||B+C|\leq |A+B||A+C|.$ Proof. The proof uses the following lemma from the proof of the Plünnecke-Ruzsa inequality. Lemma. Let $A$ and $B$ be finite subsets of an abelian group $G$. If $X\subseteq A$ is a nonempty subset that minimizes the value of $K'=|X+B|/|X|$, then for all finite subsets $C\subset G,$ $|X+B+C|\leq K'|X+C|.$ If $A$ is the empty set, then the left side of the inequality becomes $0$, so the inequality is true. Otherwise, let $X$ be a subset of $A$ that minimizes $K'=|X+B|/|X|$. Let $K=|A+B|/|A|$. The definition of $X$ implies that $K'\leq K.$ Because $X\subset A$, applying the above lemma gives $|B+C|\leq |X+B+C|\leq K'|X+C|\leq K'|A+C|\leq K|A+C|={\frac {|A+B||A+C|}{|A|}}.$ Rearranging gives the Ruzsa sum triangle inequality. 
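The difference form of the inequality is easy to exercise numerically. The following sketch (illustrative Python, with arbitrarily chosen random integer sets) checks $|A||B-C|\leq |A-B||A-C|$ on many random triples:

```python
import random

def diff(X, Y):
    """Difference set X - Y = {x - y : x in X, y in Y}."""
    return {x - y for x in X for y in Y}

random.seed(0)
for _ in range(1000):
    A = set(random.sample(range(-20, 21), 6))
    B = set(random.sample(range(-20, 21), 6))
    C = set(random.sample(range(-20, 21), 6))
    # Ruzsa triangle inequality: |A| |B - C| <= |A - B| |A - C|
    assert len(A) * len(diff(B, C)) <= len(diff(A, B)) * len(diff(A, C))
print("inequality held in all random trials")
```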
By replacing $B$ and $C$ in the Ruzsa triangle inequality and the Ruzsa sum triangle inequality with $-B$ and $-C$ as needed, a more general result can be obtained: If $A$, $B$, and $C$ are finite subsets of an abelian group then $|A||B\pm C|\leq |A\pm B||A\pm C|,$ where all eight possible configurations of signs hold. These results are also sometimes known collectively as the Ruzsa triangle inequalities. References 1. Ruzsa, I. (1996). "Sums of finite sets". Number Theory: New York Seminar 1991-1995. 2. Tao, T.; Vu, V. (2006). Additive Combinatorics. Cambridge: Cambridge University Press. ISBN 978-0-521-85386-6.
Rvachev function In mathematics, an R-function, or Rvachev function, is a real-valued function whose sign does not change if none of the signs of its arguments change; that is, its sign is determined solely by the signs of its arguments.[1][2] Interpreting positive values as true and negative values as false, an R-function is transformed into a "companion" Boolean function (the two functions are called friends). For instance, the R-function ƒ(x, y) = min(x, y) is one possible friend of the logical conjunction (AND). R-functions are used in computer graphics and geometric modeling in the context of implicit surfaces and the function representation. They also appear in certain boundary-value problems, and are also popular in certain artificial intelligence applications, where they are used in pattern recognition. R-functions were first proposed by Vladimir Logvinovich Rvachev[3] (Russian: Влади́мир Логвинович Рвачёв) in 1963, though the name, "R-functions", was given later on by Ekaterina L. Rvacheva-Yushchenko, in memory of their father, Logvin Fedorovich Rvachev (Russian: Логвин Фёдорович Рвачёв). See also • Function representation • Slesarenko function (S-function) Notes 1. V.L. Rvachev, “On the analytical description of some geometric objects”, Reports of Ukrainian Academy of Sciences, vol. 153, no. 4, 1963, pp. 765–767 (in Russian) 2. V. Shapiro, Semi-analytic geometry with R-Functions, Acta Numerica, Cambridge University Press, 2007, 16: 239-303 3. 75 years to Vladimir L. Rvachev (75th anniversary biographical tribute) References • Meshfree Modeling and Analysis, R-Functions (University of Wisconsin) • Pattern Recognition Methods Based on Rvachev Functions (Purdue University) • Shape Modeling and Computer Graphics with Real Functions
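The defining sign property, and the example ƒ(x, y) = min(x, y) as a friend of conjunction given above, can be illustrated with a short sketch (illustrative Python; taking max(x, y) as a friend of disjunction is an additional, easily verified choice not stated above):

```python
import random

def r_and(x, y):
    """R-function friend of logical AND mentioned above: min(x, y)."""
    return min(x, y)

def r_or(x, y):
    """max(x, y) behaves analogously for logical OR (an illustrative choice)."""
    return max(x, y)

def sign(v):
    # Interpret positive values as True and non-positive values as False.
    return v > 0

random.seed(1)
for _ in range(10000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    # The sign of an R-function depends only on the signs of its arguments:
    assert sign(r_and(x, y)) == (sign(x) and sign(y))
    assert sign(r_or(x, y)) == (sign(x) or sign(y))
print("sign(min) matches AND and sign(max) matches OR on all samples")
```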
Rybicki Press algorithm The Rybicki–Press algorithm is a fast algorithm for inverting a matrix whose entries are given by $A(i,j)=\exp(-a\vert t_{i}-t_{j}\vert )$, where $a\in \mathbb {R} $[1] and where the $t_{i}$ are sorted in order.[2] The key observation behind the Rybicki–Press algorithm is that the matrix inverse of such a matrix is always a tridiagonal matrix (a matrix with nonzero entries only on the main diagonal and the two adjoining ones), and tridiagonal systems of equations can be solved efficiently (to be more precise, in linear time).[1] It is a computational optimization of a general set of statistical methods developed to determine whether two noisy, irregularly sampled data sets are, in fact, dimensionally shifted representations of the same underlying function.[3][4] The most common use of the algorithm is in the detection of periodicity in astronomical observations, such as for detecting quasars.[4] The method has been extended to the Generalized Rybicki-Press algorithm for inverting matrices with entries of the form $A(i,j)=\sum _{k=1}^{p}a_{k}\exp(-\beta _{k}\vert t_{i}-t_{j}\vert )$.[2] The key observation in the Generalized Rybicki-Press (GRP) algorithm is that the matrix $A$ is a semi-separable matrix with rank $p$ (that is, a matrix whose upper half, not including the main diagonal, is that of some matrix with matrix rank $p$ and whose lower half is also that of some possibly different rank $p$ matrix[2]) and so can be embedded into a larger band matrix, whose sparsity structure can be leveraged to reduce the computational complexity. As the matrix $A\in \mathbb {R} ^{n\times n}$ has a semi-separable rank of $p$, the computational complexity of solving the linear system $Ax=b$ or of calculating the determinant of the matrix $A$ scales as ${\mathcal {O}}\left(p^{2}n\right)$, thereby making it attractive for large matrices.[2] The fact that matrix $A$ is a semi-separable matrix also forms the basis for the celerite[5] library, which is a library for fast and scalable Gaussian process regression in one dimension[6] with implementations in C++, Python, and Julia. The celerite method[6] also provides an algorithm for generating samples from a high-dimensional distribution. The method has found attractive applications in a wide range of fields, especially in astronomical data analysis.[7][8] See also • Invertible matrix • Matrix decomposition • Multidimensional signal processing • System of linear equations References 1. Rybicki, George B.; Press, William H. (1995), "Class of fast methods for processing Irregularly sampled or otherwise inhomogeneous one-dimensional data", Physical Review Letters, 74 (7): 1060–1063, arXiv:comp-gas/9405004, Bibcode:1995PhRvL..74.1060R, doi:10.1103/PhysRevLett.74.1060, PMID 10058924, S2CID 17436268 2. Ambikasaran, Sivaram (2015-12-01). "Generalized Rybicki Press algorithm". Numerical Linear Algebra with Applications. 22 (6): 1102–1114. arXiv:1409.7852. doi:10.1002/nla.2003. ISSN 1099-1506. S2CID 1627477. 3. Rybicki, George B.; Press, William H. (October 1992). "Interpolation, realization, and reconstruction of noisy, irregularly sampled data". The Astrophysical Journal. 398: 169. Bibcode:1992ApJ...398..169R. doi:10.1086/171845. 4. MacLeod, C. L.; Brooks, K.; Ivezic, Z.; Kochanek, C. S.; Gibson, R.; Meisner, A.; Kozlowski, S.; Sesar, B.; Becker, A. C. (2011-02-10). "Quasar Selection Based on Photometric Variability". The Astrophysical Journal. 728 (1): 26. arXiv:1009.2081. Bibcode:2011ApJ...728...26M.
doi:10.1088/0004-637X/728/1/26. ISSN 0004-637X. S2CID 28219978. 5. "celerite — celerite 0.3.0 documentation". celerite.readthedocs.io. Retrieved 2018-04-05. 6. Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth (2017). "Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series". The Astronomical Journal. 154 (6): 220. arXiv:1703.09710. Bibcode:2017AJ....154..220F. doi:10.3847/1538-3881/aa9332. ISSN 1538-3881. S2CID 88521913. 7. Foreman-Mackey, Daniel (2018). "Scalable Backpropagation for Gaussian Processes using Celerite". Research Notes of the AAS. 2 (1): 31. arXiv:1801.10156. Bibcode:2018RNAAS...2...31F. doi:10.3847/2515-5172/aaaf6c. ISSN 2515-5172. S2CID 102481482. 8. Parviainen, Hannu (2018). "Bayesian Methods for Exoplanet Science". Handbook of Exoplanets. Springer, Cham. pp. 1–24. arXiv:1711.03329. doi:10.1007/978-3-319-30648-3_149-1. ISBN 9783319306483. External links • Implementation of the Generalized Rybicki Press algorithm • celerite library on GitHub
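The key observation above, that the inverse of the matrix $A(i,j)=\exp(-a\vert t_{i}-t_{j}\vert )$ with sorted $t_{i}$ is tridiagonal, can be checked numerically. The following sketch (illustrative Python with NumPy, using arbitrary parameter values) forms such a matrix and inspects its inverse; it demonstrates only the structural fact, not the linear-time solver built on it:

```python
import numpy as np

rng = np.random.default_rng(42)
a = 1.5
t = np.sort(rng.uniform(0.0, 10.0, size=8))         # sorted sample times
A = np.exp(-a * np.abs(t[:, None] - t[None, :]))     # A(i, j) = exp(-a |t_i - t_j|)

Ainv = np.linalg.inv(A)
# Ainv is symmetric, so it suffices to look above the diagonal: entries more
# than one step away from the diagonal should vanish up to rounding error.
off_band = np.triu(np.abs(Ainv), k=2)
print("largest |entry| outside the tridiagonal band:", off_band.max())
```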
Ryll-Nardzewski fixed-point theorem In functional analysis, a branch of mathematics, the Ryll-Nardzewski fixed-point theorem states that if $E$ is a normed vector space and $K$ is a nonempty convex subset of $E$ that is compact under the weak topology, then every group (or equivalently: every semigroup) of affine isometries of $K$ has at least one fixed point. (Here, a fixed point of a set of maps is a point that is fixed by each map in the set.) This theorem was announced by Czesław Ryll-Nardzewski.[1] Later Namioka and Asplund [2] gave a proof based on a different approach. Ryll-Nardzewski himself gave a complete proof in the original spirit.[3] Applications The Ryll-Nardzewski theorem yields the existence of a Haar measure on compact groups.[4] See also • Fixed-point theorems • Fixed-point theorems in infinite-dimensional spaces • Markov-Kakutani fixed-point theorem - abelian semigroup of continuous affine self-maps on compact convex set in a topological vector space has a fixed point References 1. Ryll-Nardzewski, C. (1962). "Generalized random ergodic theorems and weakly almost periodic functions". Bull. Acad. Polon. Sci. Sér. Sci. Math. Astron. Phys. 10: 271–275. 2. Namioka, I.; Asplund, E. (1967). "A geometric proof of Ryll-Nardzewski's fixed point theorem". Bull. Amer. Math. Soc. 73 (3): 443–445. doi:10.1090/S0002-9904-1967-11779-8. 3. Ryll-Nardzewski, C. (1967). "On fixed points of semi-groups of endomorphisms of linear spaces". Proc. 5th Berkeley Symp. Probab. Math. Stat. Univ. California Press. 2: 1: 55–61. 4. Bourbaki, N. (1981). Espaces vectoriels topologiques. Chapitres 1 à 5. Éléments de mathématique. (New ed.). Paris: Masson. ISBN 2-225-68410-3. • Andrzej Granas and James Dugundji, Fixed Point Theory (2003) Springer-Verlag, New York, ISBN 0-387-00173-5. • A proof written by J. Lurie
Ryser's conjecture In graph theory, Ryser's conjecture is a conjecture relating the maximum matching size and the minimum transversal size in hypergraphs. This conjecture first appeared in 1971 in the Ph.D. thesis of J. R. Henderson, whose advisor was Herbert John Ryser.[1] Preliminaries A matching in a hypergraph is a set of hyperedges such that each vertex appears in at most one of them. The largest size of a matching in a hypergraph H is denoted by $\nu (H)$. A transversal (or vertex cover) in a hypergraph is a set of vertices such that each hyperedge contains at least one of them. The smallest size of a transversal in a hypergraph H is denoted by $\tau (H)$. For every H, $\nu (H)\leq \tau (H)$, since every cover must contain at least one point from each edge in any matching. If H is r-uniform (each hyperedge has exactly r vertices), then $\tau (H)\leq r\cdot \nu (H)$, since the union of the edges from any maximal matching is a set of at most rv vertices that meets every edge. The conjecture Ryser's conjecture is that, if H is not only r-uniform but also r-partite (i.e., its vertices can be partitioned into r sets so that every edge contains exactly one element of each set), then: $\tau (H)\leq (r-1)\cdot \nu (H)$ I.e., the multiplicative factor in the above inequality can be decreased by 1.[2] Extremal hypergraphs An extremal hypergraph to Ryser's conjecture is a hypergraph in which the conjecture holds with equality, i.e., $\tau (H)=(r-1)\cdot \nu (H)$. The existence of such hypergraphs show that the factor r-1 is the smallest possible. An example of an extremal hypergraph is the truncated projective plane - the projective plane of order r-1 in which one vertex and all lines containing it is removed.[3] It is known to exist whenever r-1 is the power of a prime integer. There are other families of such extremal hypergraphs.[4] Special cases In the case r=2, the hypergraph becomes a bipartite graph, and the conjecture becomes $\tau (H)\leq \nu (H)$. This is known to be true by Kőnig's theorem. In the case r=3, the conjecture has been proved by Ron Aharoni.[5] The proof uses the Aharoni-Haxell theorem for matching in hypergraphs. In the cases r=4 and r=5, the following weaker version has been proved by Penny Haxell and Scott:[6] there exists some ε > 0 such that $\tau (H)\leq (r-\varepsilon )\cdot \nu (H)$. Moreover, in the cases r=4 and r=5, Ryser's conjecture has been proved by Tuza (1978) in the special case $\nu (H)=1$, i.e.: $\nu (H)=1\implies \tau (H)\leq r-1$. Fractional variants A fractional matching in a hypergraph is an assignment of a weight to each hyperedge such that the sum of weights near each vertex is at most one. The largest size of a fractional matching in a hypergraph H is denoted by $\nu ^{*}(H)$. A fractional transversal in a hypergraph is an assignment of a weight to each vertex such that the sum of weights in each hyperedge is at least one. The smallest size of a fractional transversal in a hypergraph H is denoted by $\tau ^{*}(H)$. Linear programming duality implies that $\nu ^{*}(H)=\tau ^{*}(H)$. Furedi has proved the following fractional version of Ryser's conjecture: If H is r-partite and r-regular (each vertex appears in exactly r hyperedges), then[7] $\tau ^{*}(H)\leq (r-1)\cdot \nu (H)$. Lovasz has shown that[8] $\tau (H)\leq {\frac {r}{2}}\cdot \nu ^{*}(H)$. References 1. Lin, Bo (2014). "Introduction to Ryser's Conjecture" (PDF). 2. "Ryser's conjecture | Open Problem Garden". www.openproblemgarden.org. Retrieved 2020-07-14. 3. Tuza (1983). 
"Ryser's conjecture on transversals of r-partite hypergraphs". Ars Combinatorica. 4. Abu-Khazneh, Ahmad; Barát, János; Pokrovskiy, Alexey; Szabó, Tibor (2018-07-12). "A family of extremal hypergraphs for Ryser's conjecture". arXiv:1605.06361 [math.CO]. 5. Aharoni, Ron (2001-01-01). "Ryser's Conjecture for Tripartite 3-Graphs". Combinatorica. 21 (1): 1–4. doi:10.1007/s004930170001. ISSN 0209-9683. S2CID 13307018. 6. Haxell, P. E.; Scott, A. D. (2012-01-21). "On Ryser's conjecture". The Electronic Journal of Combinatorics. 19 (1). doi:10.37236/1175. ISSN 1077-8926. 7. Füredi, Zoltán (1981-06-01). "Maximum degree and fractional matchings in uniform hypergraphs". Combinatorica. 1 (2): 155–162. doi:10.1007/bf02579271. ISSN 0209-9683. S2CID 10530732. 8. Lovász, L. (1974), "Minimax theorems for hypergraphs", Hypergraph Seminar, Lecture Notes in Mathematics, Berlin, Heidelberg: Springer Berlin Heidelberg, vol. 411, pp. 111–126, doi:10.1007/bfb0066186, ISBN 978-3-540-06846-4
Computing the permanent In linear algebra, the computation of the permanent of a matrix is a problem that is thought to be more difficult than the computation of the determinant of a matrix despite the apparent similarity of the definitions. The permanent is defined similarly to the determinant, as a sum of products of sets of matrix entries that lie in distinct rows and columns. However, where the determinant weights each of these products with a ±1 sign based on the parity of the set, the permanent weights them all with a +1 sign. While the determinant can be computed in polynomial time by Gaussian elimination, it is generally believed that the permanent cannot be computed in polynomial time. In computational complexity theory, a theorem of Valiant states that computing permanents is #P-hard, and even #P-complete for matrices in which all entries are 0 or 1 Valiant (1979). This puts the computation of the permanent in a class of problems believed to be even more difficult to compute than NP. It is known that computing the permanent is impossible for logspace-uniform ACC0 circuits.(Allender & Gore 1994) The development of both exact and approximate algorithms for computing the permanent of a matrix is an active area of research. Definition and naive algorithm The permanent of an n-by-n matrix A = (ai,j) is defined as $\operatorname {perm} (A)=\sum _{\sigma \in S_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}.$ The sum here extends over all elements σ of the symmetric group Sn, i.e. over all permutations of the numbers 1, 2, ..., n. This formula differs from the corresponding formula for the determinant only in that, in the determinant, each product is multiplied by the sign of the permutation σ while in this formula each product is unsigned. The formula may be directly translated into an algorithm that naively expands the formula, summing over all permutations and within the sum multiplying out each matrix entry. This requires n! n arithmetic operations. Ryser formula The best known[1] general exact algorithm is due to H. J. Ryser (1963). Ryser’s method is based on an inclusion–exclusion formula that can be given[2] as follows: Let $A_{k}$ be obtained from A by deleting k columns, let $P(A_{k})$ be the product of the row-sums of $A_{k}$, and let $\Sigma _{k}$ be the sum of the values of $P(A_{k})$ over all possible $A_{k}$. Then $\operatorname {perm} (A)=\sum _{k=0}^{n-1}(-1)^{k}\Sigma _{k}.$ It may be rewritten in terms of the matrix entries as follows[3] $\operatorname {perm} (A)=(-1)^{n}\sum _{S\subseteq \{1,\dots ,n\}}(-1)^{|S|}\prod _{i=1}^{n}\sum _{j\in S}a_{ij}.$ Ryser's formula can be evaluated using $O(2^{n-1}n^{2})$ arithmetic operations, or $O(2^{n-1}n)$ by processing the sets $S$ in Gray code order.[4] Balasubramanian–Bax–Franklin–Glynn formula Another formula that appears to be as fast as Ryser's (or perhaps even twice as fast) is to be found in the two Ph.D. theses; see (Balasubramanian 1980), (Bax 1998); also (Bax & Franklin 1996). The methods to find the formula are quite different, being related to the combinatorics of the Muir algebra, and to finite difference theory respectively. Another way, connected with invariant theory is via the polarization identity for a symmetric tensor (Glynn 2010). The formula generalizes to infinitely many others, as found by all these authors, although it is not clear if they are any faster than the basic one. See (Glynn 2013). 
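The naive expansion and Ryser's inclusion–exclusion formula can be compared directly on small matrices. The following sketch (illustrative Python, without the Gray-code ordering mentioned above) implements both and checks that they agree on a random 5×5 matrix:

```python
from itertools import permutations, combinations
from math import prod
import random

def perm_naive(A):
    """Permanent straight from the definition: a sum over all permutations."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def perm_ryser(A):
    """Ryser's inclusion-exclusion formula quoted above:
    perm(A) = (-1)^n * sum_{S subset of columns} (-1)^{|S|} prod_i sum_{j in S} a_ij."""
    n = len(A)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total += (-1) ** k * prod(sum(A[i][j] for j in S) for i in range(n))
    return (-1) ** n * total

random.seed(0)
A = [[random.randint(0, 3) for _ in range(5)] for _ in range(5)]
assert perm_naive(A) == perm_ryser(A)
print(perm_naive(A))
```

The naive routine performs on the order of n!·n multiplications, while the Ryser routine loops over the 2^n column subsets, matching the operation counts stated above.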
The simplest known formula of this type (when the characteristic of the field is not two) is $\operatorname {perm} (A)={\frac {1}{2^{n-1}}}\left[\sum _{\delta }\left(\prod _{k=1}^{n}\delta _{k}\right)\prod _{j=1}^{n}\sum _{i=1}^{n}\delta _{i}a_{ij}\right],$ where the outer sum is over all $2^{n-1}$ vectors $\delta =(\delta _{1}=1,\delta _{2},\dots ,\delta _{n})\in \{\pm 1\}^{n}$. Special cases Planar and K3,3-free The number of perfect matchings in a bipartite graph is counted by the permanent of the graph's biadjacency matrix, and the permanent of any 0-1 matrix can be interpreted in this way as the number of perfect matchings in a graph. For planar graphs (regardless of bipartiteness), the FKT algorithm computes the number of perfect matchings in polynomial time by changing the signs of a carefully chosen subset of the entries in the Tutte matrix of the graph, so that the Pfaffian of the resulting skew-symmetric matrix (the square root of its determinant) is the number of perfect matchings. This technique can be generalized to graphs that contain no subgraph homeomorphic to the complete bipartite graph K3,3.[5] George Pólya had asked the question[6] of when it is possible to change the signs of some of the entries of a 01 matrix A so that the determinant of the new matrix is the permanent of A. Not all 01 matrices are "convertible" in this manner; in fact it is known (Marcus & Minc (1961)) that there is no linear map $T$ such that $\operatorname {per} T(A)=\det A$ for all $n\times n$ matrices $A$. The characterization of "convertible" matrices was given by Little (1975) who showed that such matrices are precisely those that are the biadjacency matrix of bipartite graphs that have a Pfaffian orientation: an orientation of the edges such that for every even cycle $C$ for which $G\setminus C$ has a perfect matching, there are an odd number of edges directed along C (and thus an odd number with the opposite orientation). It was also shown that these graphs are exactly those that do not contain a subgraph homeomorphic to $K_{3,3}$, as above. Computation modulo a number Modulo 2, the permanent is the same as the determinant, as $(-1)\equiv 1{\pmod {2}}.$ It can also be computed modulo $2^{k}$ in time $O(n^{4k-3})$ for $k\geq 2$. However, it is UP-hard to compute the permanent modulo any number that is not a power of 2. Valiant (1979) There are various formulae given by Glynn (2010) for the computation modulo a prime p. First, there is one using symbolic calculations with partial derivatives. Second, for p = 3 there is the following formula for an n×n-matrix $A$, involving the matrix's principal minors (Kogan (1996)): $\operatorname {per} (A)=(-1)^{n}\Sigma _{J\subseteq \{1,\dots ,n\}}\det(A_{J})\det(A_{\bar {J}}),$ where $A_{J}$ is the submatrix of $A$ induced by the rows and columns of $A$ indexed by $J$, and ${\bar {J}}$ is the complement of $J$ in $\{1,\dots ,n\}$, while the determinant of the empty submatrix is defined to be 1. The expansion above can be generalized in an arbitrary characteristic p as the following pair of dual identities: ${\begin{aligned}\operatorname {per} (A)&=(-1)^{n}\Sigma _{{J_{1}},\ldots ,{J_{p-1}}}\det(A_{J_{1}})\dotsm \det(A_{J_{p-1}})\\\det(A)&=(-1)^{n}\Sigma _{{J_{1}},\ldots ,{J_{p-1}}}\operatorname {per} (A_{J_{1}})\dotsm \operatorname {per} (A_{J_{p-1}})\end{aligned}}$ where in both formulas the sum is taken over all the (p − 1)-tuples ${J_{1}},\ldots ,{J_{p-1}}$ that are partitions of the set $\{1,\dots ,n\}$ into p − 1 subsets, some of them possibly empty. 
The former formula possesses an analog for the hafnian of a symmetric $A$ and an odd p: $\operatorname {haf} ^{2}(A)=(-1)^{n}\Sigma _{{J_{1}},\ldots ,{J_{p-1}}}\det(A_{J_{1}})\dotsm \det(A_{J_{p-1}})(-1)^{|J_{1}|+\dots +|J_{(p-1)/2}|}$ with the sum taken over the same set of indexes. Moreover, in characteristic zero a similar convolution sum expression involving both the permanent and the determinant yields the Hamiltonian cycle polynomial (defined as $ \operatorname {ham} (A)=\sum _{\sigma \in H_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}$ where $H_{n}$ is the set of n-permutations having only one cycle): $\operatorname {ham} (A)=\Sigma _{J\subseteq \{2,\dots ,n\}}\det(A_{J})\operatorname {per} (A_{\bar {J}})(-1)^{|J|}.$ In characteristic 2 the latter equality turns into $\operatorname {ham} (A)=\Sigma _{J\subseteq \{2,\dots ,n\}}\det(A_{J})\operatorname {det} (A_{\bar {J}})$ what therefore provides an opportunity to polynomial-time calculate the Hamiltonian cycle polynomial of any unitary $U$ (i.e. such that $U^{\textsf {T}}U=I$ where $I$ is the identity n×n-matrix), because each minor of such a matrix coincides with its algebraic complement: $\operatorname {ham} (U)=\operatorname {det} ^{2}(U+I_{/1})$ where $I_{/1}$ is the identity n×n-matrix with the entry of indexes 1,1 replaced by 0. Moreover, it may, in turn, be further generalized for a unitary n×n-matrix $U$ as $\operatorname {ham_{K}} (U)=\operatorname {det} ^{2}(U+I_{/K})$ where $K$ is a subset of {1, ..., n}, $I_{/K}$ is the identity n×n-matrix with the entries of indexes k,k replaced by 0 for all k belonging to $K$, and we define $ \operatorname {ham_{K}} (A)=\sum _{\sigma \in H_{n}(K)}\prod _{i=1}^{n}a_{i,\sigma (i)}$ where $H_{n}(K)$ is the set of n-permutations whose each cycle contains at least one element of $K$. This formula also implies the following identities over fields of characteristic 3: for any invertible $A$ $\operatorname {per} (A^{-1})\operatorname {det} ^{2}(A)=\operatorname {per} (A);$ for any unitary $U$, that is, a square matrix $U$ such that $U^{\textsf {T}}U=I$ where $I$ is the identity matrix of the corresponding size, $\operatorname {per} ^{2}(U)=\det(U+V)\det(-U)$ where $V$ is the matrix whose entries are the cubes of the corresponding entries of $U$. It was also shown (Kogan (1996)) that, if we define a square matrix $A$ as k-semi-unitary when $\operatorname {rank} (A^{\textsf {T}}A-I)=k$, the permanent of a 1-semi-unitary matrix is computable in polynomial time over fields of characteristic 3, while for k > 1 the problem becomes #3-P-complete. (A parallel theory concerns the Hamiltonian cycle polynomial in characteristic 2: while computing it on the unitary matrices is polynomial-time feasible, the problem is #2-P-complete for the k-semi-unitary ones for any k > 0). 
The latter result was essentially extended in 2017 (Knezevic & Cohen (2017)), where it was proven that in characteristic 3 there is a simple formula relating the permanents of a square matrix and its partial inverse (for $A_{11}$ and $A_{22}$ being square, $A_{11}$ being invertible): $\operatorname {per} {\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}=\operatorname {det} ^{2}(A_{11})\operatorname {per} {\begin{pmatrix}A_{11}^{-1}&A_{11}^{-1}A_{12}\\A_{21}A_{11}^{-1}&A_{22}-A_{21}A_{11}^{-1}A_{12}\end{pmatrix}}$ and it allows the computation of the permanent of an n×n-matrix having a subset of k or k − 1 rows expressible as linear combinations of another (disjoint) subset of k rows to be reduced in polynomial time to the computation of the permanent of an (n − k)×(n − k)- or (n − k + 1)×(n − k + 1)-matrix respectively, thus introducing a compression operator (analogous to the Gaussian elimination used for calculating the determinant) that "preserves" the permanent in characteristic 3. (Analogously, it is worth noting that the Hamiltonian cycle polynomial in characteristic 2 possesses invariant matrix compressions as well, taking into account the fact that ham(A) = 0 for any n×n-matrix A having three equal rows or, if n > 2, a pair of indices i,j such that its i-th and j-th rows are identical and its i-th and j-th columns are identical too.) The closure of that operator, defined as the limit of its sequential application together with the transpose transformation (applied each time the operator leaves the matrix intact), is also an operator that, when applied to classes of matrices, maps one class to another. While the compression operator maps the class of 1-semi-unitary matrices to itself and the classes of unitary and 2-semi-unitary ones, the compression-closure of the 1-semi-unitary class (as well as of the class of matrices obtained from unitary ones by replacing one row with an arbitrary row vector — the permanent of such a matrix is, via the Laplace expansion, the sum of the permanents of 1-semi-unitary matrices and is accordingly polynomial-time computable) is as yet unknown and is closely related to the general problem of the permanent's computational complexity in characteristic 3 and to the chief question of P versus NP: as shown in Knezevic & Cohen (2017), if such a compression-closure is the set of all square matrices over a field of characteristic 3 or, at least, contains a matrix class on which the permanent's computation is #3-P-complete (like the class of 2-semi-unitary matrices), then the permanent is computable in polynomial time in this characteristic.
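As a numerical sanity check of the characteristic-3 relation just described, the sketch below verifies $\operatorname {per} (A)\equiv \operatorname {det} ^{2}(A)\operatorname {per} (A^{-1}){\pmod {3}}$ for one small invertible matrix over GF(3) — the degenerate case of the partial-inverse formula in which the block $A_{22}$ is empty. The brute-force expansions and helper names are illustrative assumptions, usable only for tiny matrices; `pow(d, -1, p)` requires Python 3.8+.

```python
from itertools import permutations
from math import prod

P = 3  # the characteristic


def sign(s):
    """Sign of a permutation, via its inversion count."""
    inversions = sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])
    return -1 if inversions % 2 else 1


def perm_mod(a, p=P):
    """Permanent mod p by the defining expansion (only for tiny n)."""
    n = len(a)
    return sum(prod(a[i][s[i]] for i in range(n)) for s in permutations(range(n))) % p


def det_mod(a, p=P):
    """Determinant mod p by the Leibniz formula (only for tiny n)."""
    n = len(a)
    return sum(sign(s) * prod(a[i][s[i]] for i in range(n)) for s in permutations(range(n))) % p


def inv_mod(a, p=P):
    """Inverse over GF(p) via the adjugate; assumes det(a) is nonzero mod p."""
    n = len(a)
    d_inv = pow(det_mod(a, p), -1, p)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [[a[r][c] for c in range(n) if c != j] for r in range(n) if r != i]
            out[j][i] = (-1) ** (i + j) * det_mod(minor, p) * d_inv % p
    return out


if __name__ == "__main__":
    a = [[1, 2, 0, 1],
         [0, 1, 1, 2],
         [2, 0, 1, 1],
         [1, 1, 2, 0]]
    assert det_mod(a) != 0                                   # a is invertible over GF(3)
    lhs = perm_mod(a)
    rhs = det_mod(a) ** 2 * perm_mod(inv_mod(a)) % P         # det^2(A) * per(A^-1) mod 3
    assert lhs == rhs                                        # the identity stated above
    print(lhs, rhs)
```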
Besides, the problem of finding and classifying any possible analogs of the permanent-preserving compressions existing in characteristic 3 for other prime characteristics was formulated (Knezevic & Cohen (2017)), while giving the following identity for an n×n matrix $A$ and two n-vectors (having all their entries from the set {0, ..., p − 1}) $\alpha $ and $\beta $ such that $ {\sum _{i=1}^{n}\alpha _{i}=\sum _{j=1}^{n}\beta _{j}}$, valid in an arbitrary prime characteristic p: $\operatorname {per} (A^{(\alpha ,\beta )})=\det ^{p-1}(A)\operatorname {per} (A^{-1})^{((p-1){\vec {1}}_{n}-\beta ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{i}!\right)\left(\prod _{j=1}^{n}\beta _{j}!\right)(-1)^{n+\sum _{i=1}^{n}\alpha _{i}}$ where for an n×m-matrix $M$, an n-vector $x$ and an m-vector $y$, both vectors having all their entries from the set {0, ..., p − 1}, $M^{(x,y)}$ denotes the matrix received from $M$ via repeating $x_{i}$ times its i-th row for i = 1, ..., n and $y_{j}$ times its j-th column for j = 1, ..., m (if some row's or column's multiplicity equals zero it would mean that the row or column was removed, and thus this notion is a generalization of the notion of submatrix), and ${\vec {1}}_{n}$ denotes the n-vector all whose entries equal unity. This identity is an exact analog of the classical formula expressing a matrix's minor through a minor of its inverse and hence demonstrates (once more) a kind of duality between the determinant and the permanent as relative immanants. (Actually its own analogue for the hafnian of a symmetric $A$ and an odd prime p is $ \operatorname {haf} ^{2}(A^{(\alpha ,\alpha )})=\det ^{p-1}(A)\operatorname {haf} ^{2}(A^{-1})^{((p-1){\vec {1}}_{n}-\alpha ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{i}!\right)^{2}(-1)^{n(p-1)/2+n+\sum _{i=1}^{n}\alpha _{i}}$). And, as an even wider generalization for the partial inverse case in a prime characteristic p, for $A_{11}$, $A_{22}$ being square, $A_{11}$ being invertible and of size ${n_{1}}$x${n_{1}}$, and $ {\sum _{i=1}^{n}\alpha _{i}=\sum _{j=1}^{n}\beta _{j}}$, there holds also the identity $\operatorname {per} {\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}^{(\alpha ,\beta )}={\det }^{p-1}(A_{11})\operatorname {per} {\begin{pmatrix}A_{11}^{-1}&A_{11}^{-1}A_{12}\\A_{21}A_{11}^{-1}&A_{22}-A_{21}A_{11}^{-1}A_{12}\end{pmatrix}}^{((p-1){\vec {1}}_{n}-\beta ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{1,i}!\right)\left(\prod _{j=1}^{n}\beta _{1,j}!\right)(-1)^{n_{1}+\sum _{i=1}^{n}\alpha _{1,i}}$ where the common row/column multiplicity vectors $\alpha $ and $\beta $ for the matrix $A$ generate the corresponding row/column multiplicity vectors $\alpha _{s}$ and $\beta _{t}$, s,t = 1,2, for its blocks (the same concerns $A$'s partial inverse in the equality's right side). Approximate computation When the entries of A are nonnegative, the permanent can be computed approximately in probabilistic polynomial time, up to an error of εM, where M is the value of the permanent and ε > 0 is arbitrary. In other words, there exists a fully polynomial-time randomized approximation scheme (FPRAS) (Jerrum, Sinclair & Vigoda (2001)). The most difficult step in the computation is the construction of an algorithm to sample almost uniformly from the set of all perfect matchings in a given bipartite graph: in other words, a fully polynomial almost uniform sampler (FPAUS). 
This can be done using a Markov chain Monte Carlo algorithm that uses a Metropolis rule to define and run a Markov chain whose distribution is close to uniform, and whose mixing time is polynomial. It is possible to approximately count the number of perfect matchings in a graph via the self-reducibility of the permanent, by using the FPAUS in combination with a well-known reduction from sampling to counting due to Jerrum, Valiant & Vazirani (1986). Let $M(G)$ denote the number of perfect matchings in $G$. Roughly, for any particular edge $e$ in $G$, by sampling many matchings in $G$ and counting how many of them are matchings in $G\setminus e$, one can obtain an estimate of the ratio $ \rho ={\frac {M(G)}{M(G\setminus e)}}$. The number $M(G)$ is then $\rho M(G\setminus e)$, where $M(G\setminus e)$ can be approximated by applying the same method recursively. Another class of matrices for which the permanent can be computed approximately, is the set of positive-semidefinite matrices (the complexity-theoretic problem of approximating the permanent of such matrices to within a multiplicative error is considered open[7]). The corresponding randomized algorithm is based on the model of boson sampling and it uses the tools proper to quantum optics, to represent the permanent of positive-semidefinite matrices as the expected value of a specific random variable. The latter is then approximated by its sample mean.[8] This algorithm, for a certain set of positive-semidefinite matrices, approximates their permanent in polynomial time up to an additive error, which is more reliable than that of the standard classical polynomial-time algorithm by Gurvits.[9] Notes 1. As of 2008, see Rempała & Wesolowski (2008) 2. van Lint & Wilson (2001) p. 99 3. CRC Concise Encyclopedia of Mathematics 4. Nijenhuis & Wilf (1978) 5. Little (1974), Vazirani (1988) 6. Pólya (1913), Reich (1971) 7. See open problem (4) at "Shtetl Optimized: Introducing some British people to P vs. NP". 22 July 2015. 8. Chakhmakhchyan, Levon; Cerf, Nicolas; Garcia-Patron, Raul (2017). "A quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices". Phys. Rev. A. 96 (2): 022329. arXiv:1609.02416. Bibcode:2017PhRvA..96b2329C. doi:10.1103/PhysRevA.96.022329. S2CID 54194194. 9. Gurvits, Leonid (2005). "On the complexity of mixed discriminants and related problems". Mathematical Foundations of Computer Science. Lecture Notes in Computer Science. 3618: 447–458. doi:10.1007/11549345_39. ISBN 978-3-540-28702-5. References • Allender, Eric; Gore, Vivec (1994), "A uniform circuit lower bound for the permanent", SIAM Journal on Computing, 23 (5): 1026–1049, CiteSeerX 10.1.1.51.3546, doi:10.1137/s0097539792233907 • Balasubramanian, K. (1980), Combinatorics and Diagonals of Matrices (PDF), Ph.D. Thesis, Department of Statistics, Loyola College, Madras, India, vol. T073, Indian Statistical Institute, Calcutta • Bax, Eric (1998), Finite-difference Algorithms for Counting Problems, Ph.D. Dissertation, vol. 223, California Institute of Technology • Bax, Eric; Franklin, J. (1996), A finite-difference sieve to compute the permanent, Caltech-CS-TR-96-04, California Institute of Technology • Glynn, David G. (2010), "The permanent of a square matrix", European Journal of Combinatorics, 31 (7): 1887–1891, doi:10.1016/j.ejc.2010.01.010 • Glynn, David G. (2013), "Permanent formulae from the Veronesean", Designs, Codes and Cryptography, 68 (1–3): 39–47, doi:10.1007/s10623-012-9618-1, S2CID 36911503 • Jerrum, M.; Sinclair, A.; Vigoda, E. 
(2001), "A polynomial-time approximation algorithm for the permanent of a matrix with non-negative entries", Proc. 33rd Symposium on Theory of Computing, pp. 712–721, doi:10.1145/380752.380877, ISBN 978-1581133493, S2CID 8368245, ECCC TR00-079 • Jerrum, Mark; Valiant, Leslie; Vazirani, Vijay (1986), "Random generation of combinatorial structures from a uniform distribution", Theoretical Computer Science, 43: 169–188, doi:10.1016/0304-3975(86)90174-X • Kogan, Grigoriy (1996). "Computing permanents over fields of characteristic 3: Where and why it becomes difficult". Proceedings of 37th Conference on Foundations of Computer Science. pp. 108–114. doi:10.1109/SFCS.1996.548469. ISBN 0-8186-7594-2. S2CID 39024286.{{cite book}}: CS1 maint: date and year (link) • Knezevic, Anna; Cohen, Greg (2017), Some facts on Permanents in Finite Characteristics, arXiv:1710.01783, Bibcode:2017arXiv171001783K • van Lint, Jacobus Hendricus; Wilson, Richard Michale (2001), A Course in Combinatorics, Cambridge University Press, ISBN 978-0-521-00601-9 • Little, C. H. C. (1974), "An extension of Kasteleyn's method of enumerating the 1-factors of planar graphs", in Holton, D. (ed.), Proc. 2nd Australian Conf. Combinatorial Mathematics, Lecture Notes in Mathematics, vol. 403, Springer-Verlag, pp. 63–72 • Little, C. H. C. (1975), "A characterization of convertible (0, 1)-matrices", Journal of Combinatorial Theory, Series B, 18 (3): 187–208, doi:10.1016/0095-8956(75)90048-9 • Marcus, M.; Minc, H. (1961), "On the relation between the determinant and the permanent", Illinois Journal of Mathematics, 5 (3): 376–381, doi:10.1215/ijm/1255630882 • Nijenhuis, Albert; Wilf, Herbert S. (1978), Combinatorial Algorithms, Academic Press • Pólya, G. (1913), "Aufgabe 424", Arch. Math. Phys., 20 (3): 27 • Reich, Simeon (1971), "Another solution of an old problem of pólya", American Mathematical Monthly, 78 (6): 649–650, doi:10.2307/2316574, JSTOR 2316574 • Rempała, Grzegorz A.; Wesolowski, Jacek (2008), Symmetric Functionals on Random Matrices and Random Matchings Problems, Springer, p. 4, ISBN 978-0-387-75145-0 • Ryser, Herbert John (1963), Combinatorial Mathematics, The Carus Mathematical Monographs, Vol. 14, Mathematical Association of America • Vazirani, Vijay V. (1988), "NC algorithms for computing the number of perfect matchings in K3,3-free graphs and related problems", Proc. 1st Scandinavian Workshop on Algorithm Theory (SWAT '88), Lecture Notes in Computer Science, vol. 318, Springer-Verlag, pp. 233–242, doi:10.1007/3-540-19487-8_27, hdl:1813/6700, ISBN 978-3-540-19487-3 • Valiant, Leslie G. (1979), "The complexity of computing the permanent", Theoretical Computer Science, Elsevier, 8 (2): 189–201, doi:10.1016/0304-3975(79)90044-6 • "Permanent", CRC Concise Encyclopedia of Mathematics, Chapman & Hall/CRC, 2002 Further reading • Barvinok, A. (2017), "Approximating permanents and hafnians", Discrete Analysis, arXiv:1601.07518, doi:10.19086/da.1244, S2CID 397350.
Ryszard Engelking Ryszard Engelking (born 1935-11-16 in Sosnowiec) is a Polish mathematician. He was working mainly on general topology[1] and dimension theory. He is author of several influential monographs in this field. The 1989 edition of his General Topology is nowadays a standard reference for topology.[2] Scientific work Apart from his books, Ryszard Engelking is known, among other things, for a generalization to an arbitrary topological space of the "Alexandroff double circle",[3][4] for works on completely metrizable spaces, suborderable spaces and generalized ordered spaces.[5] The Engelking–Karlowicz theorem, proved together with Monica Karlowicz, is a statement about the existence of a family of functions from $2^{\mu }$ to $\mu $ with topological[6] and set-theoretical[7] applications. In addition to research papers authored just by himself, he also published jointly with Kazimierz Kuratowski, Roman Sikorski, Aleksander Pełczyński and others. He has published about 60 scientific works reviewed by MathSciNet and Zentralblatt. Translation works Apart from mathematics he is also interested in literature. He translated into Polish French authors: Flaubert's Madame Bovary, and works of Baudelaire, Gérard de Nerval, Auguste de Villiers de L'Isle-Adam, Nicolas Restif de la Bretonne. For these activities he was awarded by Literatura na Świecie (World Literature). Bibliography • R. Engelking (1968). Outline of General Topology. translated from Polish. North-Holland, Amsterdam. • R. Engelking (1977). General Topology. PWN, Warsaw. • R. Engelking (1978). Dimension Theory. North-Holland, Amsterdam. • R. Engelking (1989). General Topology. Revised and completed edition. Heldermann Verlag, Berlin. ISBN 3-88538-006-4. • R. Engelking (1995). Theory of Dimensions: Finite and Infinite. Heldermann Verlag, Berlin. ISBN 3-88538-010-2. Notes 1. Instytut Historii Nauki, Oświaty i Techniki (Polska Akademia Nauki (1980). Kwartalnik historii nauki i techniki (in Polish). Panstwowe Wydawnictwo Naukowe. p. 704. Retrieved 12 June 2011. 2. K.P. Hart, J.-I. Nagata and J.E. Vaughan Editors, Encyclopedia of general Topology, Elsevier 2003, p. vii 3. Haruto Ohta, Special Spaces, Chapter b-13 in Encyclopedia of general Topology. 4. R. Engelking, On the double circumference of Alexandroff, Bull. Acad. Polon. Sci. 16 (1968), 629–634. 5. Encyclopedia of general Topology, pp. 204, 206, 252 and 328 6. Ryszard Engelking and Monica Karlowicz, Some theorems of set theory and their topological consequences, Fundamenta Mathematicae, 57, 275–285, 1965. 7. Uri Abraham and Menachem Magidor, Cardinal Arithmetic, Ch. 14 in Handbook of Set Theory (Matthew Foreman, Akihiro Kanamori, Editors) pp. 1223, 1226. External links • Ryszard Engelking at the Mathematics Genealogy Project Authority control International • ISNI • VIAF National • Norway • France • BnF data • Israel • United States • Czech Republic • Australia • Netherlands • Poland Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH People • Trove Other • IdRef
Rytz's construction Rytz's axis construction is a basic method of descriptive geometry to find the axes, the semi-major and semi-minor axes, and the vertices of an ellipse, starting from two conjugate half-diameters. If the center and the semi-axes of an ellipse are determined, the ellipse can be drawn using an ellipsograph or by hand (see ellipse). Rytz's construction is a classical construction of Euclidean geometry, in which only compass and ruler are allowed as aids. The construction is named after its inventor, David Rytz of Brugg (1801–1868). Conjugate diameters always appear when a circle or an ellipse is projected in parallel (the projection rays are parallel), as images of orthogonal diameters of a circle (see second diagram) or as images of the axes of an ellipse. An essential property of two conjugate diameters $d_{1},\;d_{2}$ is: the tangents to the ellipse at the endpoints of one diameter are parallel to the second diameter (see second diagram). Problem statement and solution The parallel projection (skew or orthographic) of a circle is in general an ellipse (the special case of a line segment as image is omitted). A fundamental task in descriptive geometry is to draw such an image of a circle. The diagram shows a military projection of a cube with 3 circles on 3 faces of the cube. The image plane for a military projection is horizontal. That means the circle on the top appears in its true shape (as a circle). The images of the circles on the other two faces are obviously ellipses with unknown axes. But in any case one recognizes the images of two orthogonal diameters of the circles. These diameters of the ellipses are no longer orthogonal, but as images of orthogonal diameters of the circle they are conjugate (the tangents at the endpoints of one diameter are parallel to the other diameter!). This is a standard situation in descriptive geometry: • Of an ellipse, the center $C$ and two points $P,Q$ on two conjugate diameters are known. • Task: find the axes and semi-axes of the ellipse. Steps of the construction (1) Rotate point $P$ around $C$ by 90°. (2) Determine the center $D$ of the line segment ${\overline {P'Q}}$. (3) Draw the line $P'Q$ and the circle with center $D$ through $C$. Intersect the circle and the line. The intersection points are $A,B$. (4) The lines $CA$ and $CB$ are the axes of the ellipse. (5) The line segment ${\overline {AB}}$ can be considered as a paper strip of length $a+b$ (see ellipse) generating point $Q$. Hence $a=|AQ|$ and $b=|BQ|$ are the semi-axes. (If $a>b$ then $a$ is the semi-major axis.) (6) The vertices and co-vertices are known and the ellipse can be drawn by one of the drawing methods. If one performs a left turn of point $P$, then the configuration shows the 2nd paper strip method (see second diagram in next section) and $a=|AQ|$ and $b=|BQ|$ still hold. Proof of the statement The standard proof is performed geometrically.[1] An alternative proof uses analytic geometry: the proof is complete if one is able to show that • the intersection points $U,V$ of the line $P'Q$ with the axes of the ellipse lie on the circle through $C$ with center $D$, hence $U=A$ and $V=B$, and $|UQ|=a,\quad |VQ|=b\ .$ Proof (1): Any ellipse can be represented in a suitable coordinate system parametrically by ${\vec {p}}(t)=(a\cos t,\;b\sin t)^{T}$. Two points ${\vec {p}}(t_{1}),\ {\vec {p}}(t_{2})$ lie on conjugate diameters if $t_{2}-t_{1}=\pm {\tfrac {\pi }{2}}\ .$ (see Ellipse: conjugate diameters.)
(2): Let $Q=(a\cos t,\;b\sin t)$ and $P=(a\cos(t+{\tfrac {\pi }{2}}),\;b\sin(t+{\tfrac {\pi }{2}}))=(-a\sin t,b\cos t)$ be two points on conjugate diameters. Then $P'=(b\cos t,a\sin t)$ and the midpoint of line segment ${\overline {P'Q}}$ is $D=\left({\tfrac {a+b}{2}}\cos t,{\tfrac {a+b}{2}}\sin t\right)$. (3): Line $P'Q$ has equation $x\sin t+y\cos t=(a+b)\;\cos t\sin t\ .$ The intersection points of this line with the axes of the ellipse are $U=\left(0,\;(a+b)\sin t\right)\ ,\quad V=\left((a+b)\cos t,\;0\right)\ .$ (4): Because of $|UD|=|VD|=|CD|$ the points $U,V,C$ lie on the circle with center $D$ and radius $|CD|\ .$ Hence $A=U,\ B=V\ .$ (5): $|UQ|=a,\quad |VQ|=b\ .$ The proof uses a right turn of point $P$, which leads to a diagram showing the 1st paper strip method. Variations If one performs a left turn of point $P$, then results (4) and (5) are still valid and the configuration now shows the 2nd paper strip method (see diagram). If one uses $P=(a\cos(t{\color {red}-}{\tfrac {\pi }{2}}),\;b\sin(t{\color {red}-}{\tfrac {\pi }{2}}))=(a\sin t,-b\cos t)$, then the construction and proof work as well. Computer aided solution To find the vertices of the ellipse with the help of a computer, • the coordinates of the three points $C,P,Q$ have to be known. A straightforward idea is to write a program that performs the steps described above. A better idea is to use the representation of an arbitrary ellipse parametrically: • ${\vec {x}}={\vec {p}}(t)={\vec {f}}_{0}+{\vec {f}}_{1}\cos t+{\vec {f}}_{2}\sin t\ .$ With ${\vec {f}}_{0}={\vec {OC}}$ (the center) and ${\vec {f}}_{1}={\vec {CP}},\;{\vec {f}}_{2}={\vec {CQ}}$ (two conjugate half-diameters) one is able to calculate points and to draw the ellipse. If necessary: with $\cot(2t_{0})={\tfrac {{\vec {f}}_{1}^{\,2}-{\vec {f}}_{2}^{\,2}}{2{\vec {f}}_{1}\cdot {\vec {f}}_{2}}}$ one gets the four vertices of the ellipse: ${\vec {p}}(t_{0}),\;{\vec {p}}(t_{0}\pm {\frac {\pi }{2}}),\;{\vec {p}}(t_{0}+\pi )\ .$ A small code sketch based on this parametric approach follows the reference list below. References • Rudolf Fucke; Konrad Kirch; Heinz Nickel (2007). Darstellende Geometrie für Ingenieure [Descriptive geometry for engineers] (in German) (17th ed.). München: Carl Hanser. p. 183. ISBN 978-3446411432. Retrieved 2013-05-31. • Klaus Ulshöfer; Dietrich Tilp (2010). "5: Ellipse als orthogonal-affines Bild des Hauptkreises" [5: "Ellipse as the orthogonal affine image of the unit circle"]. Darstellende Geometrie in systematischen Beispielen [Descriptive geometry in systematic collection of examples]. Übungen für die gymnasiale Oberstufe (in German) (1st ed.). Bamberg: C. C. Buchner. ISBN 978-3-7661-6092-8. • Alexander Ostermann; Gerhard Wanner (2012). Geometry by its History. Springer Science & Business Media. pp. 68–69. ISBN 9783642291630. 1. Ulrich Graf, Martin Barner: Darstellende Geometrie. Quelle & Meyer, Heidelberg 1961, ISBN 3-494-00488-9, p.114 External links • animation of Rytz's construction
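Complementing the "Computer aided solution" section above, the following minimal numerical sketch (the function name and test values are illustrative assumptions, not taken from the cited references) recovers the vertices and semi-axes directly from the center $C$ and the endpoints $P,Q$ of two conjugate half-diameters, using the parametric representation and the $\cot(2t_{0})$ condition quoted there.

```python
from math import atan2, cos, sin, pi, hypot


def ellipse_axes_from_conjugate(C, P, Q):
    """Given center C and endpoints P, Q of two conjugate half-diameters,
    return the four vertices and the semi-axes (a, b) with a >= b."""
    f1 = (P[0] - C[0], P[1] - C[1])          # first conjugate half-diameter CP
    f2 = (Q[0] - C[0], Q[1] - C[1])          # second conjugate half-diameter CQ
    dot = f1[0] * f2[0] + f1[1] * f2[1]
    # cot(2 t0) = (|f1|^2 - |f2|^2) / (2 f1.f2)  <=>  2 t0 = atan2(2 f1.f2, |f1|^2 - |f2|^2)
    t0 = 0.5 * atan2(2 * dot, f1[0] ** 2 + f1[1] ** 2 - (f2[0] ** 2 + f2[1] ** 2))

    def point(t):
        return (C[0] + f1[0] * cos(t) + f2[0] * sin(t),
                C[1] + f1[1] * cos(t) + f2[1] * sin(t))

    vertices = [point(t0 + k * pi / 2) for k in range(4)]
    semi = sorted((hypot(v[0] - C[0], v[1] - C[1]) for v in vertices[:2]), reverse=True)
    return vertices, semi                    # semi = [a, b]


if __name__ == "__main__":
    # Conjugate half-diameters of the ellipse x^2/25 + y^2/9 = 1 at parameter t = 0.3:
    t = 0.3
    C = (0.0, 0.0)
    Q = (5 * cos(t), 3 * sin(t))
    P = (-5 * sin(t), 3 * cos(t))
    vertices, (a, b) = ellipse_axes_from_conjugate(C, P, Q)
    print(round(a, 6), round(b, 6))          # ~5.0 and ~3.0
```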
David Rytz David Rytz von Brugg (1 April 1801, in Bucheggberg – 25 March 1868, in Aarau) was a Swiss mathematician and teacher.[1] Life Rytz von Brugg was son of a priest and studied mathematics at Göttingen and Leipzig. He had teaching positions at various cities, one of them 1835 until 1862 at Aarau, where he was „Professor der Mathematik an der Gewerbeschule zu Aarau“.[2] Merits Rytz von Brugg is famous for a geometrical method which is known as Rytz’s axis construction. This classical procedure retrieves the semi-axes of an Ellipse from any pair of conjugate diameters. This method is known since 1845, when it was published within a paper by Leopold Moosbrugger.[3][4] Sources • Siegfried Gottwald, Hans-Joachim Ilgauds und Karl-Heinz Schlote, ed. (1990), Lexikon bedeutender Mathematiker (in German), Thun: Verlag Harri Deutsch, p. 407, ISBN 3-8171-1164-9 MR1089881 • Hans Honsberg (1971), Analytische Geometrie: Mit Anhang "Einführung in die Vektorrechnung", Mathematik für Gymnasien (in German) (3. ed.), München: Bayerischer Schulbuch-Verlag, p. 96, ISBN 3-7627-0677-8 • Emil Müller, Erwin Kruppa (1961), Lehrbuch der darstellenden Geometrie: Unveränderter Neudruck der fünften Auflage (in German) (6. ed.), Wien: Springer Verlag, p. 98 • Alexander Ostermann, Gerhard Wanner (2012), Geometry by Its History, Undergraduate Texts in Mathematics. Readings in Mathematics (in German), Heidelberg, New York, Dordrecht, London: Springer Verlag, p. 69, doi:10.1007/978-3-642-29163-0, ISBN 978-3-642-29162-3 MR2918594 • Guido Walz (Red.) (2002), Lexikon der Mathematik in sechs Bänden: Vierter Band (in German), Heidelberg, Berlin: Spektrum Akademischer Verlag, p. 448, ISBN 3-8274-0436-3 References 1. Ostermann and Wanner mention in „Geometry by Its History“ (S. 69) his firstname as „Daniel“. 2. Alexander Ostermann, Gerhard Wanner: Geometry by Its History. 2012, S. 69 3. Siegfried Gottwald, Hans-Joachim Ilgauds, Karl-Heinz Schlote (Hrsg.): Lexikon bedeutender Mathematiker. 1990, S. 407 4. Emil Müller und Erwin Kruppa Lehrbuch der darstellenden Geometrie. 1961, S. 98 .
Ryszard Syski Ryszard Syski (April 8, 1924 in Płock, Poland – June 11, 2007 in Silver Spring, Maryland) was a Polish-American mathematician whose research was in queueing theory.[1] Ryszard Syski Born April 8, 1924 Płock, Poland Died June 11, 2007 (age 83) Silver Spring, Maryland, United States Alma mater Polish University Abroad, University of London, University of Maryland College Park Scientific career Fields Queueing theory Institutions University of Maryland College Park During World War II he served in the Armia Krajowa along with his parents, taking part in the Warsaw Uprising, being imprisoned in Lamsdorf, Silesia, and in Bavaria (1944), and joining the Polish Second Corps to fight in Italy (1945). His studies in mathematics started in London, at the Polish University Abroad (1946). He joined Automatic Telephone and Electric Co. in London (1952). He received his B.Sc. (1954) and Ph.D. (1961) at the University of London, with the dissertation Stochastic Process in Banach space and its Applications to Congestion Theory.[2] Encouraged by Thomas L. Saaty, he moved to College Park, Maryland, joining the mathematics faculty of the University of Maryland (1961–1999); he founded the journal Stochastic Processes and their Applications (1973) and was a fellow of the Institute of Mathematical Statistics. Syski wrote over forty journal articles, often collaborating with notables such as Félix Pollaczek, Lajos Takács, Julian Keilson and Wim Cohen. Syski died of complications from a brain injury sustained in a fall.[3] Books • Introduction to congestion theory in telephone systems (North-Holland, 1960) • Passage times for Markov chains (1992) References 1. Editorial of special issue, devoted to Syski, Journal of Applied Mathematics and Stochastic Analysis Volume 11 (1998), Issue 3, Pages 219-220 2. Syski, R. Stochastic process in Banach space and its applications to congestion theory (Thesis). Senate House Libraries Catalogue. Retrieved 13 February 2013. 3. obituary in The Washington Post (July 6, 2007)
Rémi Abgrall Rémi Abgrall (born 1961) is a French applied mathematician. He is known for his contributions in computational fluid dynamics, numerical analysis of conservation laws, multiphase flow and Hamilton–Jacobi equations.[1][2] He has been editor in chief of the Journal of Computational Physics[3] since 2015 and is part of the editorial board of several international scientific journals. In 2014 he was invited speaker at the International Congress of Mathematics[4] in Seoul. He is author of more than 100 scientific papers published in international scientific journals.[5] He is editor of 4 books[6][7][8][9] and author of one book[10] on advanced topics concerning computational fluid dynamics, High-resolution scheme and conservation laws. Rémi Abgrall Born Rémi Abgrall 1961 Alma materPierre and Marie Curie University OccupationMathematician. AwardsPrix Blaise-Pascal (2001) Education and career Abgrall received his degree in mathematics at École normale supérieure de Saint-Cloud in 1985 and his PhD at Pierre and Marie Curie University in 1987 with a thesis entitled Conception d’un modèle semi-lagrangien de turbulence bidimensionnelle.[11] He worked at ONERA and then at Inria[12] as a research scientist. From 1996 until 2013 he was professor at University of Bordeaux 1 and then at Institut polytechnique de Bordeaux. Since 2014, he has been a professor of numerical analysis at the University of Zurich.[13] Honours and awards In 2001 he received the Prix Blaise-Pascal[14] from the French Academy of Sciences. He is honorary member of the Institut Universitaire de France.[15] In 2008 he got an advanced CORDIS grant[16] from the European Research Council. He was elected as a Fellow of the Society for Industrial and Applied Mathematics, in the 2022 Class of SIAM Fellows, "for fundamental contributions to the development of numerical methods for conservation laws, in particular for multi-fluid flows and residual distribution schemes".[17] References 1. Rémi Abgrall: Addecco, radically changing models in dynamics 2. Society for Industrial and Applied Mathematics, Fellows Program, nomination for outstanding contributions to the field. 3. "Editorial Board - Journal of Computational Physics - Journal - Elsevier". www.journals.elsevier.com. 4. "International Congress of Mathematicians". www.icm2014.org. 5. American Mathematical Society 6. Flows for Reentry Problems 7. High Order Nonlinear Numerical Schemes for Evolutionary PDEs 8. XVII Handbook of Numerical Methods for Hyperbolic Problems 9. XVIII Handbook of Numerical Methods for Hyperbolic Problems 10. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature 11. "Rémi Abgrall - The Mathematics Genealogy Project". www.mathgenealogy.org. 12. "Rémi Abgrall | Inria". www.inria.fr. 13. "UZH - Institute of Mathematics - Person". www.math.uzh.ch. 14. Prix Blaise-Pascal, Société de Mathématiques Appliquées et Industrielles 15. Institut Universitaire de France 16. Adaptive Schemes for Deterministic and Stochastic Flow Problems, ERC Advanced Grant 17. "SIAM Announces Class of 2022 Fellows". SIAM News. March 31, 2022. Retrieved 2022-03-31. External links • Rémi Abgrall publications indexed by Google Scholar • Publications by Rémi Abgrall at ResearchGate Authority control International • ISNI • VIAF National • Germany • Israel • United States Academics • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID Other • IdRef
Rényi entropy In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events.[1][2] In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.[3] The Rényi entropy is important in ecology and statistics as index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group.[4][5] In theoretical computer science, the min-entropy is used in the context of randomness extractors. Definition The Rényi entropy of order $\alpha $, where $0<\alpha <\infty $ and $\alpha \neq 1$, is defined as[1] $\mathrm {H} _{\alpha }(X)={\frac {1}{1-\alpha }}\log {\Bigg (}\sum _{i=1}^{n}p_{i}^{\alpha }{\Bigg )}.$ It is further defined at $\alpha =0,1,\infty $ as $\mathrm {H} _{\alpha }(X)=\lim _{\gamma \to \alpha }\mathrm {H} _{\gamma }(X).$ Here, $X$ is a discrete random variable with possible outcomes in the set ${\mathcal {A}}=\{x_{1},x_{2},...,x_{n}\}$ and corresponding probabilities $p_{i}\doteq \Pr(X=x_{i})$ for $i=1,\dots ,n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e. If the probabilities are $p_{i}=1/n$ for all $i=1,\dots ,n$, then all the Rényi entropies of the distribution are equal: $\mathrm {H} _{\alpha }(X)=\log n$. In general, for all discrete random variables $X$, $\mathrm {H} _{\alpha }(X)$ is a non-increasing function in $\alpha $. Applications often exploit the following relation between the Rényi entropy and the p-norm of the vector of probabilities: $\mathrm {H} _{\alpha }(X)={\frac {\alpha }{1-\alpha }}\log \left(\|P\|_{\alpha }\right)$ . Here, the discrete probability distribution $P=(p_{1},\dots ,p_{n})$ is interpreted as a vector in $\mathbb {R} ^{n}$ with $p_{i}\geq 0$ and $ \sum _{i=1}^{n}p_{i}=1$. The Rényi entropy for any $\alpha \geq 0$ is Schur concave. Special cases As $\alpha $ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of X. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha $ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability. Hartley or max-entropy Provided the probabilities are nonzero,[6] $\mathrm {H} _{0}$ is the logarithm of the cardinality of the alphabet (${\mathcal {A}}$) of $X$, sometimes called the Hartley entropy of $X$, $\mathrm {H} _{0}(X)=\log n=\log |{\mathcal {A}}|\,$ Shannon entropy The limiting value of $\mathrm {H} _{\alpha }$ as $\alpha \to 1$ is the Shannon entropy:[7] $\mathrm {H} _{1}(X)\equiv \lim _{\alpha \to 1}\mathrm {H} _{\alpha }(X)=-\sum _{i=1}^{n}p_{i}\log p_{i}$ Collision entropy Collision entropy, sometimes just called "Rényi entropy", refers to the case $\alpha =2$, $\mathrm {H} _{2}(X)=-\log \sum _{i=1}^{n}p_{i}^{2}=-\log P(X=Y),$ where X and Y are independent and identically distributed. 
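To make these special cases concrete, here is a minimal sketch (the helper name, the use of natural logarithms, and the example distribution are assumptions for illustration) that evaluates $\mathrm {H} _{\alpha }$ at the orders discussed in this section and checks that the values are non-increasing in $\alpha$.

```python
from math import log, inf


def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (natural log) for a probability vector p."""
    p = [x for x in p if x > 0]               # ignore zero-probability outcomes
    if alpha == 0:                            # Hartley / max-entropy: log of the support size
        return log(len(p))
    if alpha == 1:                            # Shannon entropy as the limiting case
        return -sum(x * log(x) for x in p)
    if alpha == inf:                          # min-entropy
        return -log(max(p))
    return log(sum(x ** alpha for x in p)) / (1 - alpha)


if __name__ == "__main__":
    p = [0.5, 0.25, 0.125, 0.125]
    values = [renyi_entropy(p, a) for a in (0, 0.5, 1, 2, inf)]
    print([round(v, 4) for v in values])
    # Non-increasing in alpha, as discussed below:
    assert all(values[i] >= values[i + 1] - 1e-12 for i in range(len(values) - 1))
```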
The collision entropy is related to the index of coincidence. Min-entropy In the limit as $\alpha \rightarrow \infty $, the Rényi entropy $\mathrm {H} _{\alpha }$ converges to the min-entropy $\mathrm {H} _{\infty }$: $\mathrm {H} _{\infty }(X)\doteq \min _{i}(-\log p_{i})=-(\max _{i}\log p_{i})=-\log \max _{i}p_{i}\,.$ Equivalently, the min-entropy $\mathrm {H} _{\infty }(X)$ is the largest real number b such that all events occur with probability at most $2^{-b}$. The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy. The min-entropy has important applications for randomness extractors in theoretical computer science: Extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task. Inequalities for different orders α The fact that $\mathrm {H} _{\alpha }$ is non-increasing in $\alpha $ for any given distribution of probabilities $p_{i}$ can be proven by differentiation,[8] as $-{\frac {d\mathrm {H} _{\alpha }}{d\alpha }}={\frac {1}{(1-\alpha )^{2}}}\sum _{i=1}^{n}z_{i}\log(z_{i}/p_{i}),$ which is proportional to Kullback–Leibler divergence (which is always non-negative), where $z_{i}=p_{i}^{\alpha }/\sum _{j=1}^{n}p_{j}^{\alpha }$. In particular cases the inequalities can also be proven by Jensen's inequality:[9][10] $\log n=\mathrm {H} _{0}\geq \mathrm {H} _{1}\geq \mathrm {H} _{2}\geq \mathrm {H} _{\infty }.$ For values of $\alpha >1$, inequalities in the other direction also hold. In particular, we have[11] $\mathrm {H} _{2}\leq 2\mathrm {H} _{\infty }.$ On the other hand, the Shannon entropy $\mathrm {H} _{1}$ can be arbitrarily high for a random variable $X$ that has a given min-entropy. An example of this is given by the sequence of random variables $X_{n}\sim \{0,\ldots ,n\}$ for $n\geq 1$ such that $P(X_{n}=0)=1/2$ and $P(X_{n}=x)=1/(2n)$ for $x=1,\dots ,n$, since $\mathrm {H} _{\infty }(X_{n})=\log 2$ but $\mathrm {H} _{1}(X_{n})=(\log 2+\log 2n)/2$. Rényi divergence As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.[12] The Rényi divergence of order α or alpha-divergence of a distribution P from a distribution Q is defined to be $D_{\alpha }(P\|Q)={\frac {1}{\alpha -1}}\log {\Bigg (}\sum _{i=1}^{n}{\frac {p_{i}^{\alpha }}{q_{i}^{\alpha -1}}}{\Bigg )}\,$ when 0 < α < ∞ and α ≠ 1. We can define the Rényi divergence for the special values α = 0, 1, ∞ by taking a limit, and in particular the limit α → 1 gives the Kullback–Leibler divergence. Some special cases: $D_{0}(P\|Q)=-\log Q(\{i:p_{i}>0\})$ : minus the log probability under Q that pi > 0; $D_{1/2}(P\|Q)=-2\log \sum _{i=1}^{n}{\sqrt {p_{i}q_{i}}}$ : minus twice the logarithm of the Bhattacharyya coefficient; (Nielsen & Boltz (2010)) $D_{1}(P\|Q)=\sum _{i=1}^{n}p_{i}\log {\frac {p_{i}}{q_{i}}}$ : the Kullback–Leibler divergence; $D_{2}(P\|Q)=\log {\Big \langle }{\frac {p_{i}}{q_{i}}}{\Big \rangle }$ : the log of the expected ratio of the probabilities; $D_{\infty }(P\|Q)=\log \sup _{i}{\frac {p_{i}}{q_{i}}}$ : the log of the maximum ratio of the probabilities. The Rényi divergence is indeed a divergence, meaning simply that $D_{\alpha }(P\|Q)$ is greater than or equal to zero, and zero only when P = Q.
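A companion sketch (again with illustrative names and natural logarithms) evaluates the Rényi divergence at the special orders listed above for two strictly positive probability vectors; the $\alpha =1$ branch is the Kullback–Leibler divergence.

```python
from math import log, inf


def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) for strictly positive probability vectors."""
    if alpha == 0:
        return -log(sum(qi for pi, qi in zip(p, q) if pi > 0))
    if alpha == 1:                            # Kullback–Leibler divergence
        return sum(pi * log(pi / qi) for pi, qi in zip(p, q))
    if alpha == inf:
        return log(max(pi / qi for pi, qi in zip(p, q)))
    s = sum(pi ** alpha / qi ** (alpha - 1) for pi, qi in zip(p, q))
    return log(s) / (alpha - 1)


if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]
    q = [0.4, 0.4, 0.2]
    for a in (0, 0.5, 1, 2, inf):
        print(a, round(renyi_divergence(p, q, a), 4))   # nondecreasing in alpha
```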
For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order α, and it is continuous on the set of α for which it is finite.[12] The quantity $D_{\alpha }(P\|Q)$ is also called the information of order α obtained if the distribution P is replaced by the distribution Q.[1] Financial interpretation A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows[13] ${\rm {ExpectedRate}}={\frac {1}{R}}\,D_{1}(b\|m)+{\frac {R-1}{R}}\,D_{1/R}(b\|m)\,,$ where $m$ is the distribution defining the official odds (i.e. the "market") for the game, $b$ is the investor-believed distribution and $R$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion). If the true distribution is $p$ (not necessarily coinciding with the investor's belief $b$), the long-term realized rate converges to the true expectation which has a similar mathematical structure[14] ${\rm {RealizedRate}}={\frac {1}{R}}\,{\Big (}D_{1}(p\|m)-D_{1}(p\|b){\Big )}+{\frac {R-1}{R}}\,D_{1/R}(b\|m)\,.$ Properties specific to α = 1 The value α = 1, which gives the Shannon entropy and the Kullback–Leibler divergence, is the only value at which the chain rule of conditional probability holds exactly: $\mathrm {H} (A,X)=\mathrm {H} (A)+\mathbb {E} _{a\sim A}{\big [}\mathrm {H} (X|A=a){\big ]}$ for the absolute entropies, and $D_{\mathrm {KL} }(p(x|a)p(a)\|m(x,a))=D_{\mathrm {KL} }(p(a)\|m(a))+\mathbb {E} _{p(a)}\{D_{\mathrm {KL} }(p(x|a)\|m(x|a))\},$ for the relative entropies. The latter in particular means that if we seek a distribution p(x, a) which minimizes the divergence from some underlying prior measure m(x, a), and we acquire new information which only affects the distribution of a, then the distribution of p(x|a) remains m(x|a), unchanged. The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 co-ordinate transformations, and of combining additively when A and X are independent, so that if p(A, X) = p(A)p(X), then $\mathrm {H} _{\alpha }(A,X)=\mathrm {H} _{\alpha }(A)+\mathrm {H} _{\alpha }(X)\;$ and $D_{\alpha }(P(A)P(X)\|Q(A)Q(X))=D_{\alpha }(P(A)\|Q(A))+D_{\alpha }(P(X)\|Q(X)).$ The stronger properties of the α = 1 quantities allow the definition of conditional information and mutual information from communication theory. Exponential families The Rényi entropies and divergences for an exponential family admit simple expressions[15] $\mathrm {H} _{\alpha }(p_{F}(x;\theta ))={\frac {1}{1-\alpha }}\left(F(\alpha \theta )-\alpha F(\theta )+\log E_{p}[e^{(\alpha -1)k(x)}]\right)$ and $D_{\alpha }(p:q)={\frac {J_{F,\alpha }(\theta :\theta ')}{1-\alpha }}$ where $J_{F,\alpha }(\theta :\theta ')=\alpha F(\theta )+(1-\alpha )F(\theta ')-F(\alpha \theta +(1-\alpha )\theta ')$ is a Jensen difference divergence. Physical meaning The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.)
It can, however, be given an operational meaning through the two-time measurements (also known as full counting statistics) of energy transfers. The limit of the quantum mechanical Rényi entropy as $\alpha \to 1$ is the von Neumann entropy. See also • Diversity indices • Tsallis entropy • Generalized entropy index Notes 1. Rényi (1961) 2. Rioul (2021) 3. Barros, Vanessa; Rousseau, Jérôme (2021-06-01). "Shortest Distance Between Multiple Orbits and Generalized Fractal Dimensions". Annales Henri Poincaré. 22 (6): 1853–1885. arXiv:1912.07516. Bibcode:2021AnHP...22.1853B. doi:10.1007/s00023-021-01039-y. ISSN 1424-0661. S2CID 209376774. 4. Franchini, Its & Korepin (2008) 5. Its & Korepin (2010) 6. RFC 4086, page 6 7. Bromiley, Thacker & Bouhova-Thacker (2004) 8. Beck & Schlögl (1993) 9. $\mathrm {H} _{1}\geq \mathrm {H} _{2}$ holds because $\sum \limits _{i=1}^{M}{p_{i}\log p_{i}}\leq \log \sum \limits _{i=1}^{M}{p_{i}^{2}}$. 10. $\mathrm {H} _{\infty }\leq \mathrm {H} _{2}$ holds because $\log \sum \limits _{i=1}^{n}{p_{i}^{2}}\leq \log \sup _{i}p_{i}\left({\sum \limits _{i=1}^{n}{p_{i}}}\right)=\log \sup _{i}p_{i}$. 11. $\mathrm {H} _{2}\leq 2\mathrm {H} _{\infty }$ holds because $\log \sum \limits _{i=1}^{n}{p_{i}^{2}}\geq \log \sup _{i}p_{i}^{2}=2\log \sup _{i}p_{i}$ 12. Van Erven, Tim; Harremoës, Peter (2014). "Rényi Divergence and Kullback–Leibler Divergence". IEEE Transactions on Information Theory. 60 (7): 3797–3820. arXiv:1206.2459. doi:10.1109/TIT.2014.2320500. S2CID 17522805. 13. Soklakov (2018) 14. Soklakov (2018) 15. Nielsen & Nock (2011) References • Beck, Christian; Schlögl, Friedrich (1993). Thermodynamics of chaotic systems: an introduction. Cambridge University Press. ISBN 0521433673. • Jizba, P.; Arimitsu, T. (2004). "The world according to Rényi: Thermodynamics of multifractal systems". Annals of Physics. 312 (1): 17–59. arXiv:cond-mat/0207707. Bibcode:2004AnPhy.312...17J. doi:10.1016/j.aop.2004.01.002. S2CID 119704502. • Jizba, P.; Arimitsu, T. (2004). "On observability of Rényi's entropy". Physical Review E. 69 (2): 026128. arXiv:cond-mat/0307698. Bibcode:2004PhRvE..69b6128J. doi:10.1103/PhysRevE.69.026128. PMID 14995541. S2CID 39231939. • Bromiley, P.A.; Thacker, N.A.; Bouhova-Thacker, E. (2004), Shannon Entropy, Rényi Entropy, and Information, CiteSeerX 10.1.1.330.9856 • Franchini, F.; Its, A. R.; Korepin, V. E. (2008). "Rényi entropy as a measure of entanglement in quantum spin chain". Journal of Physics A: Mathematical and Theoretical. 41 (25302): 025302. arXiv:0707.2534. Bibcode:2008JPhA...41b5302F. doi:10.1088/1751-8113/41/2/025302. S2CID 119672750. • "Rényi test", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Hero, A. O.; Michael, O.; Gorman, J. (2002). "Alpha-divergences for Classification, Indexing and Retrieval" (PDF). CiteSeerX 10.1.1.373.2763. {{cite journal}}: Cite journal requires |journal= (help) • Its, A. R.; Korepin, V. E. (2010). "Generalized entropy of the Heisenberg spin chain". Theoretical and Mathematical Physics. 164 (3): 1136–1139. Bibcode:2010TMP...164.1136I. doi:10.1007/s11232-010-0091-6. S2CID 119525704. • Nielsen, F.; Boltz, S. (2010). "The Burbea-Rao and Bhattacharyya centroids". IEEE Transactions on Information Theory. 57 (8): 5455–5466. arXiv:1004.5049. doi:10.1109/TIT.2011.2159046. S2CID 14238708. • Nielsen, Frank; Nock, Richard (2012). "A closed-form expression for the Sharma–Mittal entropy of exponential families". Journal of Physics A. 45 (3): 032003. arXiv:1112.4221. Bibcode:2012JPhA...45c2003N. doi:10.1088/1751-8113/45/3/032003. 
S2CID 8653096. • Nielsen, Frank; Nock, Richard (2011). "On Rényi and Tsallis entropies and divergences for exponential families". Journal of Physics A. 45 (3): 032003. arXiv:1105.3259. Bibcode:2012JPhA...45c2003N. doi:10.1088/1751-8113/45/3/032003. S2CID 8653096. • Rényi, Alfréd (1961). "On measures of information and entropy" (PDF). Proceedings of the fourth Berkeley Symposium on Mathematics, Statistics and Probability 1960. pp. 547–561. • Rosso, O. A. (2006). "EEG analysis using wavelet-based information tools". Journal of Neuroscience Methods. 153 (2): 163–182. doi:10.1016/j.jneumeth.2005.10.009. PMID 16675027. S2CID 7134638. • Zachos, C. K. (2007). "A classical bound on quantum entropy". Journal of Physics A. 40 (21): F407–F412. arXiv:hep-th/0609148. Bibcode:2007JPhA...40..407Z. doi:10.1088/1751-8113/40/21/F02. S2CID 1619604. • Nazarov, Y. (2011). "Flows of Rényi entropies". Physical Review B. 84 (10): 205437. arXiv:1108.3537. Bibcode:2015PhRvB..91j4303A. doi:10.1103/PhysRevB.91.104303. S2CID 40312624. • Ansari, Mohammad H.; Nazarov, Yuli V. (2015). "Rényi entropy flows from quantum heat engines". Physical Review B. 91 (10): 104303. arXiv:1408.3910. Bibcode:2015PhRvB..91j4303A. doi:10.1103/PhysRevB.91.104303. S2CID 40312624. • Ansari, Mohammad H.; Nazarov, Yuli V. (2015). "Exact correspondence between Rényi entropy flows and physical flows". Physical Review B. 91 (17): 174307. arXiv:1502.08020. Bibcode:2015PhRvB..91q4307A. doi:10.1103/PhysRevB.91.174307. S2CID 36847902. • Soklakov, A. N. (2020). "Economics of Disagreement—Financial Intuition for the Rényi Divergence". Entropy. 22 (8): 860. arXiv:1811.08308. Bibcode:2020Entrp..22..860S. doi:10.3390/e22080860. PMC 7517462. PMID 33286632. • Ansari, Mohammad H.; van Steensel, Alwin; Nazarov, Yuli V. (2019). "Entropy Production in Quantum is Different". Entropy. 21 (9): 854. arXiv:1907.09241. doi:10.3390/e21090854. S2CID 198148019. • Rioul, Olivier (2021). "This is it: A Primer on Shannon's Entropy and Information" (PDF). Information Theory. Progress in Mathematical Physics. Birkhäuser. 78: 49–86. doi:10.1007/978-3-030-81480-9_2. ISBN 978-3-030-81479-3. S2CID 204783328. Statistical mechanics Theory • Principle of maximum entropy • ergodic theory Statistical thermodynamics • Ensembles • partition functions • equations of state • thermodynamic potential: • U • H • F • G • Maxwell relations Models • Ferromagnetism models • Ising • Potts • Heisenberg • percolation • Particles with force field • depletion force • Lennard-Jones potential Mathematical approaches • Boltzmann equation • H-theorem • Vlasov equation • BBGKY hierarchy • stochastic process • mean-field theory and conformal field theory Critical phenomena • Phase transition • Critical exponents • correlation length • size scaling Entropy • Boltzmann • Shannon • Tsallis • Rényi • von Neumann Applications • Statistical field theory • elementary particle • superfluidity • Condensed matter physics • Complex system • chaos • information theory • Boltzmann machine
Rózsa Péter Rózsa Péter, born Rózsa Politzer, (17 February 1905 – 16 February 1977) was a Hungarian mathematician and logician. She is best known as the "founding mother of recursion theory".[1][2] Rózsa Péter Rózsa Péter Born Rózsa Politzer (1905-02-17)17 February 1905 Budapest, Austria-Hungary Died16 February 1977(1977-02-16) (aged 71) Budapest, Hungary NationalityHungarian Scientific career FieldsMathematics Early life and education Péter was born in Budapest, Hungary, as Rózsa Politzer (Hungarian: Politzer Rózsa). She attended Pázmány Péter University (now Eötvös Loránd University), originally studying chemistry but later switching to mathematics. She attended lectures by Lipót Fejér and József Kürschák. While at university, she met László Kalmár; they would collaborate in future years and Kalmár encouraged her to pursue her love of mathematics.[3] After graduating in 1927, Péter could not find a permanent teaching position although she had passed her exams to qualify as a mathematics teacher. Due to the effects of the Great Depression, many university graduates could not find work and Péter began private tutoring.[4] At this time, she also began her graduate studies. Professional career and research Initially, Péter began her graduate research on number theory. Upon discovering that her results had already been proven by the work of Robert Carmichael and L. E. Dickson, she abandoned mathematics to focus on poetry. However, she was convinced to return to mathematics by her friend László Kalmár, who suggested she research the work of Kurt Gödel on the theory of incompleteness.[3] She prepared her own, different proofs to Gödel's work.[5] Péter presented the results of her paper on recursive theory, "Rekursive Funktionen", to the International Congress of Mathematicians in Zurich, Switzerland in 1932. In the summer of 1933, she worked with Paul Bernays in Göttingen, Germany, for the long chapter on recursive functions in the book Grundlagen der Mathematik that appeared in 1934 under the names of David Hilbert and Bernays. Her main results are summarised in the book and also appeared in several articles in the leading journal of mathematics, the Mathematische Annalen, the first in 1934. Publication was under the name Politzer-Péter as she had changed her Jewish surname Politzer into Péter that same year. For her research, she received her PhD summa cum laude in 1935. In 1936, she presented a paper entitled "Über rekursive Funktionen der zweiten Stufe" to the International Congress of Mathematicians in Oslo.[3] These papers helped to found the modern field of recursive function theory as a separate area of mathematical research.[6][7] In 1937, she was appointed as contributing editor of the Journal of Symbolic Logic.[4] After the passage of the Jewish Laws of 1939 in Hungary, Péter was forbidden to teach because of her Jewish origin and was briefly confined to a ghetto in Budapest. During World War II, she wrote her book Playing with Infinity: Mathematical Explorations and Excursions, a work for lay readers on the topics of number theory and logic. Originally published in Hungarian, it has been translated into English and at least a dozen other languages.[8] With the end of the war in 1945, Péter received her first full-time teaching appointment at the Budapest Teachers' Training College. In 1952, she was the first Hungarian woman to be made an Academic Doctor of Mathematics. After the College closed in 1955, she taught at Eötvös Loránd University until her retirement in 1975. 
She was a popular professor, known as "Aunt Rózsa" to her students.[4] In 1951, she published her key work Rekursive Funktionen,[9] the first book on modern logic by a female author, later translated into English as Recursive Functions.[10] She continued to publish important papers on recursive theory throughout her life. In 1959, she presented a major paper "Über die Verallgemeinerung der Theorie der rekursiven Funktionen für abstrakte Mengen geeigneter Struktur als Definitionsbereiche" to the International Symposium in Warsaw (later published in two parts in 1961[11] and 1962[12]).[3] Beginning in the mid-1950s, Péter applied recursive function theory to computers. Her final book, published in 1976, was Rekursive Funktionen in der Komputer-Theorie (Recursive Functions in Computer Theory). Originally published in Hungarian, it was the second Hungarian mathematical book to be published in the Soviet Union because its subject matter was considered indispensable to the theory of computers. It was translated into English in 1981.[13][8] Honors Péter was awarded the Kossuth Prize in 1951. She received the Manó Beke Prize by the János Bolyai Mathematical Society in 1953, the Silver State Prize in 1970, and the Gold State Prize in 1973. In 1973, she became the first woman to be elected to the Hungarian Academy of Sciences.[3] See also • Ackermann function • Recursive function theory • List of pioneers in computer science References 1. Morris & Harkleroad 1990. 2. Andrásfai 1997. 3. O'Connor & Robertson 2014. 4. dead link 2023. 5. Tamássy 1994. 6. Albers, Alexanderson & Reid 1990. 7. Andrásfai 1986. 8. Riddle 2022. 9. Péter 1957. 10. Péter 1967. 11. Péter 1961. 12. Péter 1962. 13. Péter 1981. Bibliography • Albers, Donald J.; Alexanderson, Gerald L.; Reid, Constance, eds. (1990), "Rozsa Peter 1905–1977", More Mathematical People, Harcourt Brace Jovanovich, p. 149 • Andrásfai, Béla (1986). "Rózsa (Rosa) Péter". Periodica Polytechnica Electrical Engineering. 30 (2–3): 139–145. • Andrásfai, Béla (1997). "Rozsa Peter: Founder of Recursive Function Theory". Women in Science: A Selection of 16 Contributors. San Diego Supercomputer Center. Retrieved 2023-08-13. • O'Connor, J.J.; Robertson, E.F. (2014). MacTutor History of Mathematics Archive (ed.). "Rózsa Péter". School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 2023-08-13. • dead link (2023). "Rózsa Péter". EpiGeneSys. Archived from the original on 2017-03-26. Retrieved 2014-04-14. • Morris, Edie; Harkleroad, Leon (1990). "Rózsa Péter: recursive function theory's founding mother". The Mathematical Intelligencer. 12 (1): 59–64. doi:10.1007/BF03023988. S2CID 120595680. • Péter, Rózsa (1957). Rekursive Funktionen (in German) (2., erw. Ausg., Reprint 2021 ed.). De Gruyter. doi:10.1515/9783112573082. ISBN 9783112573075. • Péter, Rózsa (1961). "Über die Verallgemeinerung der Theorie der rekursiven Funktionen für abstrakte Mengen geeigneter Struktur als Definitionsbereiche". Acta Mathematica Academiae Scientiarum Hungaricae. 12: 271–314. doi:10.1007/BF02023919. • Péter, Rózsa (1962). "Über die Verallgemeinerung der Theorie der rekursiven Funktionen für abstrakte Mengen geeigneter Struktur als Definitionsbereiche (Fortsetzung)". Acta Mathematica Academiae Scientiarum Hungaricae. 13: 1–24. doi:10.1007/BF02033622. • Péter, Rózsa (1967). Recursive Functions (3d revised ed.). Academic Press. ISBN 978-0125526500. • Péter, Rózsa (1981). Recursive Functions in Computer Theory. Ellis Horwood. p. 179. ISBN 9780470271957. 
• Riddle, Larry (2022-01-16). "Rózsa Péter". Biographies of Women Mathematicians. Agnes Scott College. Retrieved 2023-08-13. • Tamássy, István (1994). "Interview with Róza Péter". Modern Logic. 4 (3): 277–280.
Michael Röckner Michael Röckner is a mathematician working in the fields of stochastic analysis and mathematical physics. He obtained his PhD at the University of Bielefeld in 1984 under the supervision of Sergio Albeverio and Christopher John Preston. Together with Claudia Prévôt, he wrote the book A Concise Course on Stochastic Partial Differential Equations.[1] • Prévôt, Claudia; Röckner, Michael (2007). A concise course on stochastic partial differential equations. Berlin New York: Springer. ISBN 978-3-540-70781-3. OCLC 185027082. References 1. Wiesinger, Sven (July 10, 2007). "Michael Röckner". Universität Bielefeld. Retrieved May 7, 2011.
Rūsiņš Mārtiņš Freivalds Rūsiņš Mārtiņš Freivalds (10 November 1942 – 4 January 2016) was a Latvian computer scientist and mathematician. He was a member of the Latvian Academy of Sciences from 1992. He discovered Freivalds' algorithm for checking the correctness of matrix products. He also taught at the University of Latvia, with students including Daina Taimiņa and Andris Ambainis. He was born in Cesvaine and studied at Moscow State University (MSU). Freivalds died from a heart attack[1] on 4 January 2016 in Riga, aged 73.[2] References 1. Fortnow, Lance (6 January 2016). "Rūsiņš Freivalds (1942-2016)". Computational Complexity. Retrieved 11 January 2016. 2. "Mūžībā aizgājis profesors Rūsiņš Mārtiņš Freivalds". delfi.lv (in Latvian). 4 January 2016. Retrieved 11 January 2016.
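As an illustration of the algorithm mentioned above, here is a minimal sketch of Freivalds' randomized verification of a claimed matrix product C = AB: instead of recomputing AB, one multiplies both sides by a random 0/1 vector and compares the results. The function name, the trial count, and the pure-Python matrix-vector routine are choices made for this example, not details from the article.

```python
import random

def freivalds(A, B, C, trials=10):
    """Randomized check that A @ B == C for n x n matrices.

    Each trial draws a random 0/1 vector r and compares A(Br) with Cr using
    only matrix-vector products (O(n^2) per trial).  A single trial accepts a
    wrong C with probability at most 1/2, so after `trials` independent trials
    the error probability is at most 2 ** -trials.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # a witness was found: definitely A @ B != C
    return True            # no witness found: A @ B == C with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))  # True  (correct product)
print(freivalds(A, B, [[19, 22], [43, 51]]))  # False (almost surely detected)
```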
h-cobordism In geometric topology and differential topology, an (n + 1)-dimensional cobordism W between n-dimensional manifolds M and N is an h-cobordism (the h stands for homotopy equivalence) if the inclusion maps $M\hookrightarrow W\quad {\mbox{and}}\quad N\hookrightarrow W$ are homotopy equivalences. The h-cobordism theorem gives sufficient conditions for an h-cobordism to be trivial, i.e., to be C-isomorphic to the cylinder M × [0, 1]. Here C refers to any of the categories of smooth, piecewise linear, or topological manifolds. The theorem was first proved by Stephen Smale for which he received the Fields Medal and is a fundamental result in the theory of high-dimensional manifolds. For a start, it almost immediately proves the generalized Poincaré conjecture. Background Before Smale proved this theorem, mathematicians became stuck while trying to understand manifolds of dimension 3 or 4, and assumed that the higher-dimensional cases were even harder. The h-cobordism theorem showed that (simply connected) manifolds of dimension at least 5 are much easier than those of dimension 3 or 4. The proof of the theorem depends on the "Whitney trick" of Hassler Whitney, which geometrically untangles homologically-tangled spheres of complementary dimension in a manifold of dimension >4. An informal reason why manifolds of dimension 3 or 4 are unusually hard is that the trick fails to work in lower dimensions, which have no room for entanglement. Precise statement of the h-cobordism theorem Let n be at least 5 and let W be a compact (n + 1)-dimensional h-cobordism between M and N in the category C=Diff, PL, or Top such that W, M and N are simply connected, then W is C-isomorphic to M × [0, 1]. The isomorphism can be chosen to be the identity on M × {0}. This means that the homotopy equivalence between M and N (or, between M × [0, 1], W and N × [0, 1]) is homotopic to a C-isomorphism. Lower dimensional versions For n = 4, the h-cobordism theorem is true topologically (proved by Michael Freedman using a 4-dimensional Whitney trick) but is false PL and smoothly (as shown by Simon Donaldson). For n = 3, the h-cobordism theorem for smooth manifolds has not been proved and, due to the 3-dimensional Poincaré conjecture, is equivalent to the hard open question of whether the 4-sphere has non-standard smooth structures. For n = 2, the h-cobordism theorem is equivalent to the Poincaré conjecture stated by Poincaré in 1904 (one of the Millennium Problems[1]) and was proved by Grigori Perelman in a series of three papers in 2002 and 2003,[2][3][4] where he follows Richard S. Hamilton's program using Ricci flow. For n = 1, the h-cobordism theorem is vacuously true, since there is no closed simply-connected 1-dimensional manifold. For n = 0, the h-cobordism theorem is trivially true: the interval is the only connected cobordism between connected 0-manifolds. A proof sketch A Morse function $f:W\to [a,b]$ induces a handle decomposition of W, i.e., if there is a single critical point of index k in $f^{-1}([c,c'])$, then the ascending cobordism $W_{c'}$ is obtained from $W_{c}$ by attaching a k-handle. The goal of the proof is to find a handle decomposition with no handles at all so that integrating the non-zero gradient vector field of f gives the desired diffeomorphism to the trivial cobordism. This is achieved through a series of techniques. 1) Handle rearrangement First, we want to rearrange all handles by order so that lower order handles are attached first. 
The question is thus when can we slide an i-handle off of a j-handle? This can be done by a radial isotopy so long as the i attaching sphere and the j belt sphere do not intersect. We thus want $(i-1)+(n-j)\leq \dim \partial W-1=n-1$ which is equivalent to $i\leq j$. We then define the handle chain complex $(C_{*},\partial _{*})$ by letting $C_{k}$ be the free abelian group on the k-handles and defining $\partial _{k}:C_{k}\to C_{k-1}$ by sending a k-handle $h_{\alpha }^{k}$ to $\sum _{\beta }\langle h_{\alpha }^{k}\mid h_{\beta }^{k-1}\rangle h_{\beta }^{k-1}$, where $\langle h_{\alpha }^{k}\mid h_{\beta }^{k-1}\rangle $ is the intersection number of the k-attaching sphere and the (k − 1)-belt sphere. 2) Handle cancellation Next, we want to "cancel" handles. The idea is that attaching a k-handle $h_{\alpha }^{k}$ might create a hole that can be filled in by attaching a (k + 1)-handle $h_{\beta }^{k+1}$. This would imply that $\partial _{k+1}h_{\beta }^{k+1}=\pm h_{\alpha }^{k}$ and so the $(\alpha ,\beta )$ entry in the matrix of $\partial _{k+1}$ would be $\pm 1$. However, when is this condition sufficient? That is, when can we geometrically cancel handles if this condition is true? The answer lies in carefully analyzing when the manifold remains simply-connected after removing the attaching and belt spheres in question, and finding an embedded disk using the Whitney trick. This analysis leads to the requirement that n must be at least 5. Moreover, during the proof one requires that the cobordism has no 0-,1-,n-, or (n + 1)-handles which is obtained by the next technique. 3) Handle trading The idea of handle trading is to create a cancelling pair of (k + 1)- and (k + 2)-handles so that a given k-handle cancels with the (k + 1)-handle leaving behind the (k + 2)-handle. To do this, consider the core of the k-handle which is an element in $\pi _{k}(W,M)$. This group is trivial since W is an h-cobordism. Thus, there is a disk $D^{k+1}$ which we can fatten to a cancelling pair as desired, so long as we can embed this disk into the boundary of W. This embedding exists if $\dim \partial W-1=n-1\geq 2(k+1)$. Since we are assuming n is at least 5 this means that k is either 0 or 1. Finally, by considering the negative of the given Morse function, −f, we can turn the handle decomposition upside down and also remove the n- and (n + 1)-handles as desired. 4) Handle sliding Finally, we want to make sure that doing row and column operations on $\partial _{k}$ corresponds to a geometric operation. Indeed, it isn't hard to show (best done by drawing a picture) that sliding a k-handle $h_{\alpha }^{k}$ over another k-handle $h_{\beta }^{k}$ replaces $h_{\alpha }^{k}$ by $h_{\alpha }^{k}\pm h_{\beta }^{k}$ in the basis for $C_{k}$. The proof of the theorem now follows: the handle chain complex is exact since $H_{*}(W,M;\mathbb {Z} )=0$. Thus $C_{k}\cong \operatorname {coker} \partial _{k+1}\oplus \operatorname {im} \partial _{k+1}$ since the $C_{k}$ are free. Then $\partial _{k}$, which is an integer matrix, restricts to an invertible morphism which can thus be diagonalized via elementary row operations (handle sliding) and must have only $\pm 1$ on the diagonal because it is invertible. Thus, all handles are paired with a single other cancelling handle yielding a decomposition with no handles. 
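The algebraic endgame of the sketch, diagonalizing the invertible boundary matrix by row and column operations until only $\pm 1$ entries remain on the diagonal, can be illustrated concretely. A minimal sketch, assuming SymPy is available and using a made-up integer matrix in place of $\partial _{k}$; in the simply connected case the coefficients are ordinary integers, and the Smith normal form of an invertible integer matrix is the identity, mirroring the pairing-off of handles:

```python
# Row and column operations over Z (the algebraic counterpart of handle slides)
# bring the boundary matrix to diagonal form; for an invertible integer matrix
# the Smith normal form is the identity, so every handle pairs off with a
# cancelling handle.  The matrix d below is an illustrative stand-in.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

d = Matrix([[2, 3, 1],
            [1, 2, 1],
            [1, 1, 1]])            # det(d) = 1, so d is invertible over Z

assert d.det() in (1, -1)
print(smith_normal_form(d, domain=ZZ))
# Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```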
The s-cobordism theorem If the assumption that M and N are simply connected is dropped, h-cobordisms need not be cylinders; the obstruction is exactly the Whitehead torsion τ (W, M) of the inclusion $M\hookrightarrow W$. Precisely, the s-cobordism theorem (the s stands for simple-homotopy equivalence), proved independently by Barry Mazur, John Stallings, and Dennis Barden, states (assumptions as above but where M and N need not be simply connected): An h-cobordism is a cylinder if and only if Whitehead torsion τ (W, M) vanishes. The torsion vanishes if and only if the inclusion $M\hookrightarrow W$ is not just a homotopy equivalence, but a simple homotopy equivalence. Note that one need not assume that the other inclusion $N\hookrightarrow W$ is also a simple homotopy equivalence—that follows from the theorem. Categorically, h-cobordisms form a groupoid. Then a finer statement of the s-cobordism theorem is that the isomorphism classes of this groupoid (up to C-isomorphism of h-cobordisms) are torsors for the respective[5] Whitehead groups Wh(π), where $\pi \cong \pi _{1}(M)\cong \pi _{1}(W)\cong \pi _{1}(N).$ See also • Semi-s-cobordism Notes 1. "Millennium Problems | Clay Mathematics Institute". www.claymath.org. Retrieved 2016-03-30. 2. Perelman, Grisha (2002-11-11). "The entropy formula for the Ricci flow and its geometric applications". arXiv:math/0211159. 3. Perelman, Grisha (2003-03-10). "Ricci flow with surgery on three-manifolds". arXiv:math/0303109. 4. Perelman, Grisha (2003-07-17). "Finite extinction time for the solutions to the Ricci flow on certain three-manifolds". arXiv:math/0307245. 5. Note that identifying the Whitehead groups of the various manifolds requires that one choose base points $m\in M,n\in N$ and a path in W connecting them. References • Freedman, Michael H; Quinn, Frank (1990). Topology of 4-manifolds. Princeton Mathematical Series. Vol. 39. Princeton, NJ: Princeton University Press. ISBN 0-691-08577-3. (This does the theorem for topological 4-manifolds.) • Milnor, John, Lectures on the h-cobordism theorem, notes by L. Siebenmann and J. Sondow, Princeton University Press, Princeton, NJ, 1965. v+116 pp. This gives the proof for smooth manifolds. • Rourke, Colin Patrick; Sanderson, Brian Joseph, Introduction to piecewise-linear topology, Springer Study Edition, Springer-Verlag, Berlin-New York, 1982. ISBN 3-540-11102-6. This proves the theorem for PL manifolds. • S. Smale, "On the structure of manifolds" Amer. J. Math., 84 (1962) pp. 387–399 • Rudyak, Yu.B. (2001) [1994], "h-cobordism", Encyclopedia of Mathematics, EMS Press
σ-algebra In mathematical analysis and in probability theory, a σ-algebra (also σ-field) on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair $(X,\Sigma )$ is called a measurable space. For an algebraic structure admitting a given signature Σ of operations, see Universal algebra. The σ-algebras are a subset of the set algebras; elements of the latter only need to be closed under the union or intersection of finitely many subsets, which is a weaker condition.[1] The main use of σ-algebras is in the definition of measures; specifically, the collection of those subsets for which a given measure is defined is necessarily a σ-algebra. This concept is important in mathematical analysis as the foundation for Lebesgue integration, and in probability theory, where it is interpreted as the collection of events which can be assigned probabilities. Also, in probability, σ-algebras are pivotal in the definition of conditional expectation. In statistics, (sub) σ-algebras are needed for the formal mathematical definition of a sufficient statistic,[2] particularly when the statistic is a function or a random process and the notion of conditional density is not applicable. If $X=\{a,b,c,d\}$ one possible σ-algebra on $X$ is $\Sigma =\{\varnothing ,\{a,b\},\{c,d\},\{a,b,c,d\}\},$ where $\varnothing $ is the empty set. In general, a finite algebra is always a σ-algebra. If $\{A_{1},A_{2},A_{3},\ldots \},$ is a countable partition of $X$ then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra. A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy). Motivation There are at least three key motivators for σ-algebras: defining measures, manipulating limits of sets, and managing partial information characterized by sets. Measure A measure on $X$ is a function that assigns a non-negative real number to subsets of $X;$ this can be thought of as making precise a notion of "size" or "volume" for sets. We want the size of the union of disjoint sets to be the sum of their individual sizes, even for an infinite sequence of disjoint sets. One would like to assign a size to every subset of $X,$ but in many natural settings, this is not possible. For example, the axiom of choice implies that when the size under consideration is the ordinary notion of length for subsets of the real line, then there exist sets for which no size exists, for example, the Vitali sets. For this reason, one considers instead a smaller collection of privileged subsets of $X.$ These subsets will be called the measurable sets. They are closed under operations that one would expect for measurable sets, that is, the complement of a measurable set is a measurable set and the countable union of measurable sets is a measurable set. Non-empty collections of sets with these properties are called σ-algebras. Limits of sets Many uses of measure, such as the probability concept of almost sure convergence, involve limits of sequences of sets. For this, closure under countable unions and intersections is paramount. Set limits are defined as follows on σ-algebras. 
• The limit supremum or outer limit of a sequence $A_{1},A_{2},A_{3},\ldots $ of subsets of $X$ is $\limsup _{n\to \infty }A_{n}=\bigcap _{n=1}^{\infty }\bigcup _{m=n}^{\infty }A_{m}=\bigcap _{n=1}^{\infty }A_{n}\cup A_{n+1}\cup \cdots .$ It consists of all points $x$ that are in infinitely many of these sets (or equivalently, that are in cofinally many of them). That is, $x\in \limsup _{n\to \infty }A_{n}$ if and only if there exists an infinite subsequence $A_{n_{1}},A_{n_{2}},\ldots $ (where $n_{1}<n_{2}<\cdots $) of sets that all contain $x;$ that is, such that $x\in A_{n_{1}}\cap A_{n_{2}}\cap \cdots .$ • The limit infimum or inner limit of a sequence $A_{1},A_{2},A_{3},\ldots $ of subsets of $X$ is $\liminf _{n\to \infty }A_{n}=\bigcup _{n=1}^{\infty }\bigcap _{m=n}^{\infty }A_{m}=\bigcup _{n=1}^{\infty }A_{n}\cap A_{n+1}\cap \cdots .$ It consists of all points that are in all but finitely many of these sets (or equivalently, that are eventually in all of them). That is, $x\in \liminf _{n\to \infty }A_{n}$ if and only if there exists an index $N\in \mathbb {N} $ such that $A_{N},A_{N+1},\ldots $ all contain $x;$ that is, such that $x\in A_{N}\cap A_{N+1}\cap \cdots .$ The inner limit is always a subset of the outer limit: $\liminf _{n\to \infty }A_{n}~\subseteq ~\limsup _{n\to \infty }A_{n}.$ If these two sets are equal then their limit $\lim _{n\to \infty }A_{n}$ exists and is equal to this common set: $\lim _{n\to \infty }A_{n}:=\liminf _{n\to \infty }A_{n}=\limsup _{n\to \infty }A_{n}.$ Sub σ-algebras In much of probability, especially when conditional expectation is involved, one is concerned with sets that represent only part of all the possible information that can be observed. This partial information can be characterized with a smaller σ-algebra which is a subset of the principal σ-algebra; it consists of the collection of subsets relevant only to and determined only by the partial information. A simple example suffices to illustrate this idea. Imagine you and another person are betting on a game that involves flipping a coin repeatedly and observing whether it comes up Heads ($H$) or Tails ($T$). Since you and your opponent are each infinitely wealthy, there is no limit to how long the game can last. This means the sample space Ω must consist of all possible infinite sequences of $H$ or $T:$ $\Omega =\{H,T\}^{\infty }=\{(x_{1},x_{2},x_{3},\dots ):x_{i}\in \{H,T\},i\geq 1\}.$ However, after $n$ flips of the coin, you may want to determine or revise your betting strategy in advance of the next flip. The observed information at that point can be described in terms of the 2n possibilities for the first $n$ flips. Formally, since you need to use subsets of Ω, this is codified as the σ-algebra ${\mathcal {G}}_{n}=\{A\times \{H,T\}^{\infty }:A\subseteq \{H,T\}^{n}\}.$ Observe that then ${\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}\subseteq {\mathcal {G}}_{3}\subseteq \cdots \subseteq {\mathcal {G}}_{\infty },$ where ${\mathcal {G}}_{\infty }$ is the smallest σ-algebra containing all the others. Definition and properties Definition Let $X$ be some set, and let $P(X)$ represent its power set. Then a subset $\Sigma \subseteq P(X)$ is called a σ-algebra if and only if it satisfies the following three properties:[3] 1. $X$ is in $\Sigma ,$ and $X$ is considered to be the universal set in the following context. 2. $\Sigma $ is closed under complementation: If $A$ is in $\Sigma ,$ then so is its complement, $X\setminus A.$ 3. 
$\Sigma $ is closed under countable unions: If $A_{1},A_{2},A_{3},\ldots $ are in $\Sigma ,$ then so is $A=A_{1}\cup A_{2}\cup A_{3}\cup \cdots .$ From these properties, it follows that the σ-algebra is also closed under countable intersections (by applying De Morgan's laws). It also follows that the empty set $\varnothing $ is in $\Sigma ,$ since by (1) $X$ is in $\Sigma $ and (2) asserts that its complement, the empty set, is also in $\Sigma .$ Moreover, since $\{X,\varnothing \}$ satisfies condition (3) as well, it follows that $\{X,\varnothing \}$ is the smallest possible σ-algebra on $X.$ The largest possible σ-algebra on $X$ is $P(X).$ Elements of the σ-algebra are called measurable sets. An ordered pair $(X,\Sigma ),$ where $X$ is a set and $\Sigma $ is a σ-algebra over $X,$ is called a measurable space. A function between two measurable spaces is called a measurable function if the preimage of every measurable set is measurable. The collection of measurable spaces forms a category, with the measurable functions as morphisms. Measures are defined as certain types of functions from a σ-algebra to $[0,\infty ].$ A σ-algebra is both a π-system and a Dynkin system (λ-system). The converse is true as well, by Dynkin's theorem (see below). Dynkin's π-λ theorem See also: π-λ theorem This theorem (or the related monotone class theorem) is an essential tool for proving many results about properties of specific σ-algebras. It capitalizes on the nature of two simpler classes of sets, namely the following. • A π-system $P$ is a collection of subsets of $X$ that is closed under finitely many intersections, and • A Dynkin system (or λ-system) $D$ is a collection of subsets of $X$ that contains $X$ and is closed under complement and under countable unions of disjoint subsets. Dynkin's π-λ theorem says, if $P$ is a π-system and $D$ is a Dynkin system that contains $P,$ then the σ-algebra $\sigma (P)$ generated by $P$ is contained in $D.$ Since certain π-systems are relatively simple classes, it may not be hard to verify that all sets in $P$ enjoy the property under consideration while, on the other hand, showing that the collection $D$ of all subsets with the property is a Dynkin system can also be straightforward. Dynkin's π-λ Theorem then implies that all sets in $\sigma (P)$ enjoy the property, avoiding the task of checking it for an arbitrary set in $\sigma (P).$ One of the most fundamental uses of the π-λ theorem is to show equivalence of separately defined measures or integrals. For example, it is used to equate a probability for a random variable $X$ with the Lebesgue-Stieltjes integral typically associated with computing the probability: $\mathbb {P} (X\in A)=\int _{A}\,F(dx)$ for all $A$ in the Borel σ-algebra on $\mathbb {R} ,$ where $F(x)$ is the cumulative distribution function for $X,$ defined on $\mathbb {R} ,$ while $\mathbb {P} $ is a probability measure, defined on a σ-algebra $\Sigma $ of subsets of some sample space $\Omega .$ Combining σ-algebras Suppose $\textstyle \left\{\Sigma _{\alpha }:\alpha \in {\mathcal {A}}\right\}$ is a collection of σ-algebras on a space $X.$ Meet The intersection of a collection of σ-algebras is a σ-algebra. To emphasize its character as a σ-algebra, it often is denoted by: $\bigwedge _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }.$ Sketch of Proof: Let $\Sigma ^{*}$ denote the intersection. Since $X$ is in every $\Sigma _{\alpha },\Sigma ^{*}$ is not empty. 
Closure under complement and countable unions for every $\Sigma _{\alpha }$ implies the same must be true for $\Sigma ^{*}.$ Therefore, $\Sigma ^{*}$ is a σ-algebra. Join The union of a collection of σ-algebras is not generally a σ-algebra, or even an algebra, but it generates a σ-algebra known as the join which typically is denoted $\bigvee _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }=\sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).$ A π-system that generates the join is ${\mathcal {P}}=\left\{\bigcap _{i=1}^{n}A_{i}:A_{i}\in \Sigma _{\alpha _{i}},\alpha _{i}\in {\mathcal {A}},\ n\geq 1\right\}.$ Sketch of Proof: By the case $n=1,$ it is seen that each $\Sigma _{\alpha }\subset {\mathcal {P}},$ so $\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\subseteq {\mathcal {P}}.$ This implies $\sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right)\subseteq \sigma ({\mathcal {P}})$ by the definition of a σ-algebra generated by a collection of subsets. On the other hand, ${\mathcal {P}}\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right)$ which, by Dynkin's π-λ theorem, implies $\sigma ({\mathcal {P}})\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).$ σ-algebras for subspaces Suppose $Y$ is a subset of $X$ and let $(X,\Sigma )$ be a measurable space. • The collection $\{Y\cap B:B\in \Sigma \}$ is a σ-algebra of subsets of $Y.$ • Suppose $(Y,\Lambda )$ is a measurable space. The collection $\{A\subseteq X:A\cap Y\in \Lambda \}$ is a σ-algebra of subsets of $X.$ Relation to σ-ring A σ-algebra $\Sigma $ is just a σ-ring that contains the universal set $X.$[4] A σ-ring need not be a σ-algebra, as for example measurable subsets of zero Lebesgue measure in the real line are a σ-ring, but not a σ-algebra since the real line has infinite measure and thus cannot be obtained by their countable union. If, instead of zero measure, one takes measurable subsets of finite Lebesgue measure, those are a ring but not a σ-ring, since the real line can be obtained by their countable union yet its measure is not finite. Typographic note σ-algebras are sometimes denoted using calligraphic capital letters, or the Fraktur typeface. Thus $(X,\Sigma )$ may be denoted as $\scriptstyle (X,\,{\mathcal {F}})$ or $\scriptstyle (X,\,{\mathfrak {F}}).$ Particular cases and examples Separable σ-algebras A separable $\sigma $-algebra (or separable $\sigma $-field) is a $\sigma $-algebra ${\mathcal {F}}$ that is a separable space when considered as a metric space with metric $\rho (A,B)=\mu (A{\mathbin {\triangle }}B)$ for $A,B\in {\mathcal {F}}$ and a given measure $\mu $ (and with $\triangle $ being the symmetric difference operator).[5] Note that any $\sigma $-algebra generated by a countable collection of sets is separable, but the converse need not hold. For example, the Lebesgue $\sigma $-algebra is separable (since every Lebesgue measurable set is equivalent to some Borel set) but not countably generated (since its cardinality is higher than continuum). A separable measure space has a natural pseudometric that renders it separable as a pseudometric space. The distance between two sets is defined as the measure of the symmetric difference of the two sets. Note that the symmetric difference of two distinct sets can have measure zero; hence the pseudometric as defined above need not to be a true metric. 
However, if sets whose symmetric difference has measure zero are identified into a single equivalence class, the resulting quotient set can be properly metrized by the induced metric. If the measure space is separable, it can be shown that the corresponding metric space is, too. Simple set-based examples Let $X$ be any set. • The family consisting only of the empty set and the set $X,$ called the minimal or trivial σ-algebra over $X.$ • The power set of $X,$ called the discrete σ-algebra. • The collection $\{\varnothing ,A,X\setminus A,X\}$ is a simple σ-algebra generated by the subset $A.$ • The collection of subsets of $X$ which are countable or whose complements are countable is a σ-algebra (which is distinct from the power set of $X$ if and only if $X$ is uncountable). This is the σ-algebra generated by the singletons of $X.$ Note: "countable" includes finite or empty. • The collection of all unions of sets in a countable partition of $X$ is a σ-algebra. Stopping time sigma-algebras A stopping time $\tau $ can define a $\sigma $-algebra ${\mathcal {F}}_{\tau },$ the so-called stopping time sigma-algebra, which in a filtered probability space describes the information up to the random time $\tau $ in the sense that, if the filtered probability space is interpreted as a random experiment, the maximum information that can be found out about the experiment from arbitrarily often repeating it until the time $\tau $ is ${\mathcal {F}}_{\tau }.$[6] σ-algebras generated by families of sets σ-algebra generated by an arbitrary family Let $F$ be an arbitrary family of subsets of $X.$ Then there exists a unique smallest σ-algebra which contains every set in $F$ (even though $F$ may or may not itself be a σ-algebra). It is, in fact, the intersection of all σ-algebras containing $F.$ (See intersections of σ-algebras above.) This σ-algebra is denoted $\sigma (F)$ and is called the σ-algebra generated by $F.$ If $F$ is empty, then $\sigma (\varnothing )=\{\varnothing ,X\}.$ Otherwise $\sigma (F)$ consists of all the subsets of $X$ that can be made from elements of $F$ by a countable number of complement, union and intersection operations. For a simple example, consider the set $X=\{1,2,3\}.$ Then the σ-algebra generated by the single subset $\{1\}$ is $\sigma (\{1\})=\{\varnothing ,\{1\},\{2,3\},\{1,2,3\}\}.$ By an abuse of notation, when a collection of subsets contains only one element, $A,$ $\sigma (A)$ may be written instead of $\sigma (\{A\});$ in the prior example $\sigma (\{1\})$ instead of $\sigma (\{\{1\}\}).$ Indeed, using $\sigma \left(A_{1},A_{2},\ldots \right)$ to mean $\sigma \left(\left\{A_{1},A_{2},\ldots \right\}\right)$ is also quite common. There are many families of subsets that generate useful σ-algebras. Some of these are presented here. 
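Before turning to those families, here is a small computational illustration of generation on a finite set, reproducing the example $\sigma (\{1\})=\{\varnothing ,\{1\},\{2,3\},\{1,2,3\}\}$ above. This is a minimal sketch: on a finite set, countable unions reduce to finite unions, and the helper name `generate_sigma_algebra` is an illustrative choice rather than a standard library function.

```python
def generate_sigma_algebra(X, F):
    """Sigma-algebra on the finite set X generated by the family F of subsets."""
    X = frozenset(X)
    sigma = {frozenset(), X} | {frozenset(A) for A in F}
    changed = True
    while changed:                       # iterate until closed under the operations
        changed = False
        current = list(sigma)
        for A in current:
            candidates = [X - A] + [A | C for C in current]   # complements, unions
            for B in candidates:
                if B not in sigma:
                    sigma.add(B)
                    changed = True
    return sigma

X = {1, 2, 3}
for A in sorted(generate_sigma_algebra(X, [{1}]), key=lambda s: (len(s), sorted(s))):
    print(set(A))
# set(), {1}, {2, 3}, {1, 2, 3} -- matching sigma({1}) in the text
```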
σ-algebra generated by a function If $f$ is a function from a set $X$ to a set $Y$ and $B$ is a $\sigma $-algebra of subsets of $Y,$ then the $\sigma $-algebra generated by the function $f,$ denoted by $\sigma (f),$ is the collection of all inverse images $f^{-1}(S)$ of the sets $S$ in $B.$ That is, $\sigma (f)=\left\{f^{-1}(S)\,:\,S\in B\right\}.$ A function $f$ from a set $X$ to a set $Y$ is measurable with respect to a σ-algebra $\Sigma $ of subsets of $X$ if and only if $\sigma (f)$ is a subset of $\Sigma .$ One common situation, and understood by default if $B$ is not specified explicitly, is when $Y$ is a metric or topological space and $B$ is the collection of Borel sets on $Y.$ If $f$ is a function from $X$ to $\mathbb {R} ^{n}$ then $\sigma (f)$ is generated by the family of subsets which are inverse images of intervals/rectangles in $\mathbb {R} ^{n}:$ $\sigma (f)=\sigma \left(\left\{f^{-1}(\left[a_{1},b_{1}\right]\times \cdots \times \left[a_{n},b_{n}\right]):a_{i},b_{i}\in \mathbb {R} \right\}\right).$ A useful property is the following. Assume $f$ is a measurable map from $\left(X,\Sigma _{X}\right)$ to $\left(S,\Sigma _{S}\right)$ and $g$ is a measurable map from $\left(X,\Sigma _{X}\right)$ to $\left(T,\Sigma _{T}\right).$ If there exists a measurable map $h$ from $\left(T,\Sigma _{T}\right)$ to $\left(S,\Sigma _{S}\right)$ such that $f(x)=h(g(x))$ for all $x,$ then $\sigma (f)\subseteq \sigma (g).$ If $S$ is finite or countably infinite or, more generally, $\left(S,\Sigma _{S}\right)$ is a standard Borel space (for example, a separable complete metric space with its associated Borel sets), then the converse is also true.[7] Examples of standard Borel spaces include $\mathbb {R} ^{n}$ with its Borel sets and $\mathbb {R} ^{\infty }$ with the cylinder σ-algebra described below. Borel and Lebesgue σ-algebras An important example is the Borel algebra over any topological space: the σ-algebra generated by the open sets (or, equivalently, by the closed sets). Note that this σ-algebra is not, in general, the whole power set. For a non-trivial example that is not a Borel set, see the Vitali set or Non-Borel sets. On the Euclidean space $\mathbb {R} ^{n},$ another σ-algebra is of importance: that of all Lebesgue measurable sets. This σ-algebra contains more sets than the Borel σ-algebra on $\mathbb {R} ^{n}$ and is preferred in integration theory, as it gives a complete measure space. Product σ-algebra Let $\left(X_{1},\Sigma _{1}\right)$ and $\left(X_{2},\Sigma _{2}\right)$ be two measurable spaces. The σ-algebra for the corresponding product space $X_{1}\times X_{2}$ is called the product σ-algebra and is defined by $\Sigma _{1}\times \Sigma _{2}=\sigma \left(\left\{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\right\}\right).$ Observe that $\{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\}$ is a π-system. The Borel σ-algebra for $\mathbb {R} ^{n}$ is generated by half-infinite rectangles and by finite rectangles. For example, ${\mathcal {B}}(\mathbb {R} ^{n})=\sigma \left(\left\{(-\infty ,b_{1}]\times \cdots \times (-\infty ,b_{n}]:b_{i}\in \mathbb {R} \right\}\right)=\sigma \left(\left\{\left(a_{1},b_{1}\right]\times \cdots \times \left(a_{n},b_{n}\right]:a_{i},b_{i}\in \mathbb {R} \right\}\right).$ For each of these two examples, the generating family is a π-system. σ-algebra generated by cylinder sets Suppose $X\subseteq \mathbb {R} ^{\mathbb {T} }=\{f:f(t)\in \mathbb {R} ,\ t\in \mathbb {T} \}$ is a set of real-valued functions. 
Let ${\mathcal {B}}(\mathbb {R} )$ denote the Borel subsets of $\mathbb {R} .$ A cylinder subset of $X$ is a finitely restricted set defined as $C_{t_{1},\dots ,t_{n}}(B_{1},\dots ,B_{n})=\left\{f\in X:f(t_{i})\in B_{i},1\leq i\leq n\right\}.$ Each $\left\{C_{t_{1},\dots ,t_{n}}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\right\}$ is a π-system that generates a σ-algebra $\textstyle \Sigma _{t_{1},\dots ,t_{n}}.$ Then the family of subsets ${\mathcal {F}}_{X}=\bigcup _{n=1}^{\infty }\bigcup _{t_{i}\in \mathbb {T} ,i\leq n}\Sigma _{t_{1},\dots ,t_{n}}$ is an algebra that generates the cylinder σ-algebra for $X.$ This σ-algebra is a subalgebra of the Borel σ-algebra determined by the product topology of $\mathbb {R} ^{\mathbb {T} }$ restricted to $X.$ An important special case is when $\mathbb {T} $ is the set of natural numbers and $X$ is a set of real-valued sequences. In this case, it suffices to consider the cylinder sets $C_{n}\left(B_{1},\dots ,B_{n}\right)=\left(B_{1}\times \cdots \times B_{n}\times \mathbb {R} ^{\infty }\right)\cap X=\left\{\left(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots \right)\in X:x_{i}\in B_{i},1\leq i\leq n\right\},$ for which $\Sigma _{n}=\sigma \left(\{C_{n}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\}\right)$ is a non-decreasing sequence of σ-algebras. σ-algebra generated by random variable or vector Suppose $(\Omega ,\Sigma ,\mathbb {P} )$ is a probability space. If $\textstyle Y:\Omega \to \mathbb {R} ^{n}$ is measurable with respect to the Borel σ-algebra on $\mathbb {R} ^{n}$ then $Y$ is called a random variable ($n=1$) or random vector ($n>1$). The σ-algebra generated by $Y$ is $\sigma (Y)=\left\{Y^{-1}(A):A\in {\mathcal {B}}\left(\mathbb {R} ^{n}\right)\right\}.$ σ-algebra generated by a stochastic process Suppose $(\Omega ,\Sigma ,\mathbb {P} )$ is a probability space and $\mathbb {R} ^{\mathbb {T} }$ is the set of real-valued functions on $\mathbb {T} .$ If $\textstyle Y:\Omega \to X\subseteq \mathbb {R} ^{\mathbb {T} }$ is measurable with respect to the cylinder σ-algebra $\sigma \left({\mathcal {F}}_{X}\right)$ (see above) for $X$ then $Y$ is called a stochastic process or random process. The σ-algebra generated by $Y$ is $\sigma (Y)=\left\{Y^{-1}(A):A\in \sigma \left({\mathcal {F}}_{X}\right)\right\}=\sigma \left(\left\{Y^{-1}(A):A\in {\mathcal {F}}_{X}\right\}\right),$ the σ-algebra generated by the inverse images of cylinder sets. See also • Join (sigma algebra) – Algebric structure of set algebraPages displaying short descriptions of redirect targets • Measurable function – Function for which the preimage of a measurable set is measurable • Sample space – Set of all possible outcomes or results of a statistical trial or experiment • Sigma-additive set function – Mapping function • Sigma-ring – Ring closed under countable unions Families ${\mathcal {F}}$ of sets over $\Omega $ Is necessarily true of ${\mathcal {F}}\colon $ or, is ${\mathcal {F}}$ closed under: Directed by $\,\supseteq $ $A\cap B$ $A\cup B$ $B\setminus A$ $\Omega \setminus A$ $A_{1}\cap A_{2}\cap \cdots $ $A_{1}\cup A_{2}\cup \cdots $ $\Omega \in {\mathcal {F}}$ $\varnothing \in {\mathcal {F}}$ F.I.P. 
(A summary table comparing families ${\mathcal {F}}$ of sets over $\Omega $ — π-systems, semirings, semialgebras, monotone classes, 𝜆-systems, rings, δ-rings, 𝜎-rings, algebras, 𝜎-algebras, filters, prefilters, filter subbases, and topologies — recording which of them are closed under finite or countable intersections and unions, relative complements and complements in $\Omega ,$ and which contain $\Omega $ or $\varnothing ,$ appeared here; its rows did not survive extraction.) Additionally, a semiring is a π-system where every complement $B\setminus A$ is equal to a finite disjoint union of sets in ${\mathcal {F}}.$ A semialgebra is a semiring that contains $\Omega .$ $A,B,A_{1},A_{2},\ldots $ are arbitrary elements of ${\mathcal {F}}$ and it is assumed that ${\mathcal {F}}\neq \varnothing .$ References 1. "Probability, Mathematical Statistics, Stochastic Processes". Random. University of Alabama in Huntsville, Department of Mathematical Sciences. Retrieved 30 March 2016. 2. Billingsley, Patrick (2012). Probability and Measure (Anniversary ed.). Wiley. ISBN 978-1-118-12237-2. 3. Rudin, Walter (1987). Real & Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1. 4. Vestrup, Eric M. (2009). The Theory of Measures and Integration. John Wiley & Sons. p. 12. ISBN 978-0-470-31795-2. 5. Džamonja, Mirna; Kunen, Kenneth (1995). "Properties of the class of measure separable compact spaces" (PDF). Fundamenta Mathematicae: 262. If $\mu $ is a Borel measure on $X,$ the measure algebra of $(X,\mu )$ is the Boolean algebra of all Borel sets modulo $\mu $-null sets. If $\mu $ is finite, then such a measure algebra is also a metric space, with the distance between the two sets being the measure of their symmetric difference. Then, we say that $\mu $ is separable if and only if this metric space is separable as a topological space. 6. Fischer, Tom (2013). "On simple representations of stopping times and stopping time sigma-algebras". Statistics and Probability Letters. 83 (1): 345–349. arXiv:1112.1603. doi:10.1016/j.spl.2012.09.024. 7. Kallenberg, Olav (2001). Foundations of Modern Probability (2nd ed.). Springer. p. 7. ISBN 0-387-95313-2. External links • "Algebra of sets", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Sigma Algebra from PlanetMath.
Arithmetic group In mathematics, an arithmetic group is a group obtained as the integer points of an algebraic group, for example $\mathrm {SL} _{2}(\mathbb {Z} ).$ They arise naturally in the study of arithmetic properties of quadratic forms and other classical topics in number theory. They also give rise to very interesting examples of Riemannian manifolds and hence are objects of interest in differential geometry and topology. Finally, these two topics join in the theory of automorphic forms which is fundamental in modern number theory. History One of the origins of the mathematical theory of arithmetic groups is algebraic number theory. The classical reduction theory of quadratic and Hermitian forms by Charles Hermite, Hermann Minkowski and others can be seen as computing fundamental domains for the action of certain arithmetic groups on the relevant symmetric spaces.[1][2] The topic was related to Minkowski's geometry of numbers and the early development of the study of arithmetic invariants of number fields such as the discriminant. Arithmetic groups can be thought of as a vast generalisation of the unit groups of number fields to a noncommutative setting. The same groups also appeared in analytic number theory as the study of classical modular forms and their generalisations developed. Of course the two topics were related, as can be seen for example in Langlands' computation of the volume of certain fundamental domains using analytic methods.[3] This classical theory culminated with the work of Siegel, who showed the finiteness of the volume of a fundamental domain in many cases. For the modern theory to begin, foundational work was needed; it was provided by the work of Armand Borel, André Weil, Jacques Tits and others on algebraic groups.[4][5] Shortly afterwards the finiteness of covolume was proven in full generality by Borel and Harish-Chandra.[6] Meanwhile, there was progress on the general theory of lattices in Lie groups by Atle Selberg, Grigori Margulis, David Kazhdan, M. S. Raghunathan and others.
The state of the art after this period was essentially fixed in Raghunathan's treatise, published in 1972.[7] In the seventies Margulis revolutionised the topic by proving that in "most" cases the arithmetic constructions account for all lattices in a given Lie group.[8] Some limited results in this direction had been obtained earlier by Selberg, but Margulis' methods (the use of ergodic-theoretical tools for actions on homogeneous spaces) were completely new in this context and were to be extremely influential on later developments, effectively renewing the old subject of geometry of numbers and allowing Margulis himself to prove the Oppenheim conjecture; stronger results (Ratner's theorems) were later obtained by Marina Ratner. In another direction the classical topic of modular forms has blossomed into the modern theory of automorphic forms. The driving force behind this effort is mainly the Langlands program initiated by Robert Langlands. One of the main tools used there is the trace formula originating in Selberg's work[9] and developed in the most general setting by James Arthur.[10] Finally, arithmetic groups are often used to construct interesting examples of locally symmetric Riemannian manifolds. A particularly active research topic has been arithmetic hyperbolic 3-manifolds, which as William Thurston wrote,[11] "...often seem to have special beauty." Definition and construction Arithmetic groups If $\mathrm {G} $ is an algebraic subgroup of $\mathrm {GL} _{n}(\mathbb {Q} )$ for some $n$ then we can define an arithmetic subgroup of $\mathrm {G} (\mathbb {Q} )$ as the group of integer points $\Gamma =\mathrm {GL} _{n}(\mathbb {Z} )\cap \mathrm {G} (\mathbb {Q} ).$ In general it is not so obvious how to make precise sense of the notion of "integer points" of a $\mathbb {Q} $-group, and the subgroup defined above can change when we take different embeddings $\mathrm {G} \to \mathrm {GL} _{n}(\mathbb {Q} ).$ Thus a better notion is to take for definition of an arithmetic subgroup of $\mathrm {G} (\mathbb {Q} )$ any group $\Lambda $ which is commensurable (this means that both $\Gamma /(\Gamma \cap \Lambda )$ and $\Lambda /(\Gamma \cap \Lambda )$ are finite sets) with a group $\Gamma $ defined as above (with respect to any embedding into $\mathrm {GL} _{n}$). With this definition, to the algebraic group $\mathrm {G} $ is associated a collection of "discrete" subgroups all commensurable to each other. Using number fields A natural generalisation of the construction above is as follows: let $F$ be a number field with ring of integers $O$ and $\mathrm {G} $ an algebraic group over $F$. If we are given an embedding $\rho :\mathrm {G} \to \mathrm {GL} _{n}$ defined over $F$ then the subgroup $\rho ^{-1}(\mathrm {GL} _{n}(O))\subset \mathrm {G} (F)$ can legitimately be called an arithmetic group. On the other hand, the class of groups thus obtained is not larger than the class of arithmetic groups as defined above. Indeed, if we consider the algebraic group $\mathrm {G} '$ over $\mathbb {Q} $ obtained by restricting scalars from $F$ to $\mathbb {Q} $ and the $\mathbb {Q} $-embedding $\rho ':\mathrm {G} '\to \mathrm {GL} _{dn}$ induced by $\rho $ (where $d=[F:\mathbb {Q} ]$) then the group constructed above is equal to $(\rho ')^{-1}(\mathrm {GL} _{nd}(\mathbb {Z} ))$.
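To make the restriction-of-scalars step concrete, here is a minimal sketch for $F=\mathbb {Q} ({\sqrt {2}})$, $O=\mathbb {Z} [{\sqrt {2}}]$ and $\mathrm {G} =\mathrm {SL} _{2}$, so that $d=2$ and elements of $\mathrm {SL} _{2}(\mathbb {Z} [{\sqrt {2}}])$ become $4\times 4$ integer matrices. SymPy is assumed, and the helper names and the sample element are choices made for this illustration.

```python
# In the basis (1, sqrt(2)) of Z[sqrt(2)], multiplication by a + b*sqrt(2) acts as
# the integer matrix [[a, 2b], [b, a]].  Replacing every entry of a 2x2 matrix over
# Z[sqrt(2)] by such a block realizes SL_2(Z[sqrt(2)]) inside GL_4(Z).
from sympy import Matrix

def block(a, b):
    """2x2 integer matrix of multiplication by a + b*sqrt(2)."""
    return Matrix([[a, 2 * b], [b, a]])

def restrict_scalars(entries):
    """entries: 2x2 nested list of pairs (a, b), each standing for a + b*sqrt(2)."""
    rows = []
    for i in range(2):
        top, bottom = [], []
        for j in range(2):
            blk = block(*entries[i][j])
            top += list(blk.row(0))
            bottom += list(blk.row(1))
        rows += [top, bottom]
    return Matrix(rows)

# g = diag(3 + 2*sqrt(2), 3 - 2*sqrt(2)) has determinant (3 + 2*sqrt(2))(3 - 2*sqrt(2)) = 1.
G = restrict_scalars([[(3, 2), (0, 0)],
                      [(0, 0), (3, -2)]])
print(G)        # a 4x4 matrix with integer entries
print(G.det())  # 1, so the image lies in SL_4(Z)
```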
Examples The classical example of an arithmetic group is $\mathrm {SL} _{n}(\mathbb {Z} )$, or the closely related groups $\mathrm {PSL} _{n}(\mathbb {Z} )$, $\mathrm {GL} _{n}(\mathbb {Z} )$ and $\mathrm {PGL} _{n}(\mathbb {Z} )$. For $n=2$ the group $\mathrm {PSL} _{2}(\mathbb {Z} )$, or sometimes $\mathrm {SL} _{2}(\mathbb {Z} )$, is called the modular group as it is related to the modular curve. Similar examples are the Siegel modular groups $\mathrm {Sp} _{2g}(\mathbb {Z} )$. Other well-known and studied examples include the Bianchi groups $\mathrm {SL} _{2}(O_{-m}),$ where $m>0$ is a square-free integer and $O_{-m}$ is the ring of integers in the field $\mathbb {Q} ({\sqrt {-m}}),$ and the Hilbert–Blumenthal modular groups $\mathrm {SL} _{2}(O_{m})$. Another classical example is given by the integral elements in the orthogonal group of a quadratic form defined over a number field, for example $\mathrm {SO} (n,1)(\mathbb {Z} )$. A related construction is by taking the unit groups of orders in quaternion algebras over number fields (for example the Hurwitz quaternion order). Similar constructions can be performed with unitary groups of hermitian forms; a well-known example is the Picard modular group. Arithmetic lattices in semisimple Lie groups When $G$ is a Lie group one can define an arithmetic lattice in $G$ as follows: for any algebraic group $\mathrm {G} $ defined over $\mathbb {Q} $ such that there is a morphism $\mathrm {G} (\mathbb {R} )\to G$ with compact kernel, the image of an arithmetic subgroup in $\mathrm {G} (\mathbb {Q} )$ is an arithmetic lattice in $G$. Thus, for example, if $G=\mathrm {G} (\mathbb {R} )$ and $G$ is a subgroup of $\mathrm {GL} _{n}$ then $G\cap \mathrm {GL} _{n}(\mathbb {Z} )$ is an arithmetic lattice in $G$ (but there are many more, corresponding to other embeddings); for instance, $\mathrm {SL} _{n}(\mathbb {Z} )$ is an arithmetic lattice in $\mathrm {SL} _{n}(\mathbb {R} )$. The Borel–Harish-Chandra theorem A lattice in a Lie group is usually defined as a discrete subgroup with finite covolume. The terminology introduced above is coherent with this, as a theorem due to Borel and Harish-Chandra states that an arithmetic subgroup in a semisimple Lie group is of finite covolume (the discreteness is obvious). The theorem is more precise: it says that the arithmetic lattice is cocompact if and only if the "form" of $G$ used to define it (i.e. the $\mathbb {Q} $-group $\mathrm {G} $) is anisotropic. For example, the arithmetic lattice associated to a quadratic form in $n$ variables over $\mathbb {Q} $ will be co-compact in the associated orthogonal group if and only if the quadratic form does not vanish at any point in $\mathbb {Q} ^{n}\setminus \{0\}$. Margulis arithmeticity theorem The spectacular result that Margulis obtained is a partial converse to the Borel–Harish-Chandra theorem: for certain Lie groups any lattice is arithmetic. This result is true for all irreducible lattices in semisimple Lie groups of real rank at least two.[12][13] For example, all lattices in $\mathrm {SL} _{n}(\mathbb {R} )$ are arithmetic when $n\geq 3$. The main new ingredient that Margulis used to prove his theorem was the superrigidity of lattices in higher-rank groups that he proved for this purpose.
Irreducibility only plays a role when $G$ has a factor of real rank one (otherwise the theorem always holds) and is not simple: it means that for any product decomposition $G=G_{1}\times G_{2}$ the lattice is not commensurable to a product of lattices in each of the factors $G_{i}$. For example, the lattice $\mathrm {SL} _{2}(\mathbb {Z} [{\sqrt {2}}])$ in $\mathrm {SL} _{2}(\mathbb {R} )\times \mathrm {SL} _{2}(\mathbb {R} )$ is irreducible, while $\mathrm {SL} _{2}(\mathbb {Z} )\times \mathrm {SL} _{2}(\mathbb {Z} )$ is not. The Margulis arithmeticity (and superrigidity) theorem holds for certain rank 1 Lie groups, namely $\mathrm {Sp} (n,1)$ for $n\geqslant 2$ and the exceptional group $F_{4}^{-20}$.[14][15] It is known not to hold in all groups $\mathrm {SO} (n,1)$ for $n\geqslant 2$ (by a construction of Gromov and Piatetski-Shapiro) and for $\mathrm {SU} (n,1)$ when $n=1,2,3$. There are no known non-arithmetic lattices in the groups $\mathrm {SU} (n,1)$ when $n\geqslant 4$. Arithmetic Fuchsian and Kleinian groups Main article: Arithmetic Fuchsian group Main article: Arithmetic hyperbolic 3-manifold An arithmetic Fuchsian group is constructed from the following data: a totally real number field $F$, a quaternion algebra $A$ over $F$ and an order ${\mathcal {O}}$ in $A$. It is asked that for one embedding $\sigma :F\to \mathbb {R} $ the algebra $A^{\sigma }\otimes _{F}\mathbb {R} $ be isomorphic to the matrix algebra $M_{2}(\mathbb {R} )$ and for all others to the Hamilton quaternions. Then the group of units ${\mathcal {O}}^{1}$ is a lattice in $(A^{\sigma }\otimes _{F}\mathbb {R} )^{1}$ which is isomorphic to $\mathrm {SL} _{2}(\mathbb {R} ),$ and it is co-compact in all cases except when $A$ is the matrix algebra over $\mathbb {Q} .$ All arithmetic lattices in $\mathrm {SL} _{2}(\mathbb {R} )$ are obtained in this way (up to commensurability). Arithmetic Kleinian groups are constructed similarly except that $F$ is required to have exactly one complex place and $A$ to be the Hamilton quaternions at all real places. They exhaust all arithmetic commensurability classes in $\mathrm {SL} _{2}(\mathbb {C} ).$ Classification For every semisimple Lie group $G$ it is in theory possible to classify (up to commensurability) all arithmetic lattices in $G$, in a manner similar to the cases $G=\mathrm {SL} _{2}(\mathbb {R} ),\mathrm {SL} _{2}(\mathbb {C} )$ explained above. This amounts to classifying the algebraic groups whose real points are isomorphic up to a compact factor to $G$.[16] The congruence subgroup problem Main article: Congruence subgroup A congruence subgroup is (roughly) a subgroup of an arithmetic group defined by taking all matrices satisfying certain equations modulo an integer, for example the group of 2 by 2 integer matrices with diagonal (respectively off-diagonal) coefficients congruent to 1 (respectively 0) modulo a positive integer. These are always finite-index subgroups and the congruence subgroup problem roughly asks whether all subgroups are obtained in this way. The conjecture (usually attributed to Jean-Pierre Serre) is that this is true for (irreducible) arithmetic lattices in higher-rank groups and false in rank-one groups. It is still open in this generality but there are many results establishing it for specific lattices (in both its positive and negative cases). S-arithmetic groups Instead of taking integral points in the definition of an arithmetic lattice one can take points which are only integral away from a finite number of primes.
This leads to the notion of an $S$-arithmetic lattice (where $S$ stands for the set of primes inverted). The prototypical example is $\mathrm {SL} _{2}\left(\mathbb {Z} \left[{\tfrac {1}{p}}\right]\right)$. They are also naturally lattices in certain topological groups, for example $\mathrm {SL} _{2}\left(\mathbb {Z} \left[{\tfrac {1}{p}}\right]\right)$ is a lattice in $\mathrm {SL} _{2}(\mathbb {R} )\times \mathrm {SL} _{2}(\mathbb {Q} _{p}).$ Definition The formal definition of an $S$-arithmetic group for $S$ a finite set of prime numbers is the same as for arithmetic groups with $\mathrm {GL} _{n}(\mathbb {Z} )$ replaced by $\mathrm {GL} _{n}\left(\mathbb {Z} \left[{\tfrac {1}{N}}\right]\right)$ where $N$ is the product of the primes in $S$. Lattices in Lie groups over local fields The Borel–Harish-Chandra theorem generalizes to $S$-arithmetic groups as follows: if $\Gamma $ is an $S$-arithmetic group in a $\mathbb {Q} $-algebraic group $\mathrm {G} $ then $\Gamma $ is a lattice in the locally compact group $G=\mathrm {G} (\mathbb {R} )\times \prod _{p\in S}\mathrm {G} (\mathbb {Q} _{p})$. Some applications Explicit expander graphs Arithmetic groups with Kazhdan's property (T) or the weaker property ($\tau $) of Lubotzky and Zimmer can be used to construct expander graphs (Margulis), or even Ramanujan graphs(Lubotzky—Phillips—Sarnak[17][18]). Such graphs are known to exist in abundance by probabilistic results but the explicit nature of these constructions makes them interesting. Extremal surfaces and graphs Congruence covers of arithmetic surfaces are known to give rise to surfaces with large injectivity radius.[19] Likewise the Ramanujan graphs constructed by Lubotzky—Phillips—Sarnak have large girth. It is in fact known that the Ramanujan property itself implies that the local girths of the graph are almost always large.[20] Isospectral manifolds Arithmetic groups can be used to construct isospectral manifolds. This was first realised by Marie-France Vignéras[21] and numerous variations on her construction have appeared since. The isospectrality problem is in fact particularly amenable to study in the restricted setting of arithmetic manifolds.[22] Fake projective planes A fake projective plane[23] is a complex surface which has the same Betti numbers as the projective plane $\mathbb {P} ^{2}(\mathbb {C} )$ but is not biholomorphic to it; the first example was discovered by Mumford. By work of Klingler (also proved independently by Yeung) all such are quotients of the 2-ball by arithmetic lattices in $\mathrm {PU} (2,1)$. The possible lattices have been classified by Prasad and Yeung and the classification was completed by Cartwright and Steger who determined, by computer assisted computations, all the fake projective planes in each Prasad-Yeung class. References 1. Borel, Armand (1969). Introduction aux groupes arithmétiques. Hermann. 2. Siegel, Carl Ludwig (1989). Lectures on the geometry of numbers. Springer-Verlag. 3. Langlands, R. P. (1966), "The volume of the fundamental domain for some arithmetical subgroups of Chevalley groups", Algebraic Groups and Discontinuous Subgroups, Proc. Sympos. Pure Math., Providence, R.I.: Amer. Math. Soc., pp. 143–148, MR 0213362 4. Borel, Armand; Tits, Jacques (1965). "Groupes réductifs". Inst. Hautes Études Sci. Publ. Math. 27: 55–150. doi:10.1007/bf02684375. 5. Weil, André (1982). Adèles and algebraic groups. Birkhäuser. p. iii+126. MR 0670072. 6. Borel, Armand; Harish-Chandra (1962). "Arithmetic subgroups of algebraic groups". 
Annals of Mathematics. 75 (3): 485–535. doi:10.2307/1970210. JSTOR 1970210. 7. Raghunathan, M.S. (1972). Discrete subgroups of Lie groups. Springer-Verlag. 8. Margulis, Grigori (1975). "Discrete groups of motions of manifolds of nonpositive curvature". Proceedings of the International Congress of Mathematicians (Vancouver, B.C., 1974), Vol. 2 (in Russian). Canad. Math. Congress. pp. 21–34. 9. Selberg, Atle (1956). "Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series". J. Indian Math. Soc. New Series. 20: 47–87. 10. Arthur, James (2005). "An introduction to the trace formula". Harmonic analysis, the trace formula, and Shimura varieties. Amer. Math. Soc. pp. 1–263. 11. Thurston, William (1982). "Three-dimensional manifolds, Kleinian groups and hyperbolic geometry". Bull. Amer. Math. Soc. (N.S.). 6 (3): 357–381. doi:10.1090/s0273-0979-1982-15003-0. 12. Margulis, Grigori (1991). Discrete subgroups of semisimple Lie groups. Springer-Verlag. 13. Witte-Morris, Dave (2015). "16". Introduction to arithmetic groups. 14. Gromov, Mikhail; Schoen, Richard (1992). "Harmonic maps into singular spaces and p-adic superrigidity for lattices in groups of rank one". Inst. Hautes Études Sci. Publ. Math. 76: 165–246. doi:10.1007/bf02699433. 15. Corlette, Kevin (1992). "Archimedean superrigidity and hyperbolic geometry". Ann. of Math. 135 (1): 165–182. doi:10.2307/2946567. JSTOR 2946567. 16. Witte-Morris, Dave (2015). "18". Introduction to arithmetic groups. 17. Lubotzky, Alexander (1994). Discrete groups, expanding graphs and invariant measures. Birkhäuser. 18. Sarnak, Peter (1990). Some applications of modular forms. Cambridge University Press. 19. Katz, Mikhail G.; Schaps, Mary; Vishne, Uzi (2007), "Logarithmic growth of systole of arithmetic Riemann surfaces along congruence subgroups", Journal of Differential Geometry, 76 (3): 399–422, arXiv:math.DG/0505007, doi:10.4310/jdg/1180135693, MR 2331526 20. Abért, Miklós; Glasner, Yair; Virág, Bálint (2014). "Kesten's theorem for invariant random subgroups". Duke Math. J. 163 (3): 465. arXiv:1201.3399. doi:10.1215/00127094-2410064. MR 3165420. 21. Vignéras, Marie-France (1980). "Variétés riemanniennes isospectrales et non isométriques". Ann. of Math. (in French). 112 (1): 21–32. doi:10.2307/1971319. JSTOR 1971319. 22. Prasad, Gopal; Rapinchuk, Andrei S. (2009). "Weakly commensurable arithmetic groups and isospectral locally symmetric spaces". Publ. Math. Inst. Hautes Études Sci. 109: 113–184. arXiv:0705.2891. doi:10.1007/s10240-009-0019-6. MR 2511587. 23. Rémy, Bertrand (2007–2008), Covolume des groupes S-arithmétiques et faux plans projectifs [d'après Mumford, Prasad, Klingler, Yeung, Prasad-Yeung], Séminaire Bourbaki.
S-equivalence S-equivalence is an equivalence relation on semistable vector bundles on an algebraic curve. Definition Let X be a projective curve over an algebraically closed field k. A vector bundle on X can be considered as a locally free sheaf. Every semistable locally free sheaf E on X admits a Jordan–Hölder filtration with stable subquotients, i.e. $0=E_{0}\subseteq E_{1}\subseteq \ldots \subseteq E_{n}=E$ where $E_{i}$ are locally free sheaves on X and $E_{i}/E_{i-1}$ are stable. Although the Jordan–Hölder filtration is not unique, the subquotients are, which means that $grE=\bigoplus _{i}E_{i}/E_{i-1}$ is unique up to isomorphism. Two semistable locally free sheaves E and F on X are S-equivalent if gr E ≅ gr F.
S-estimator The goal of S-estimators is to have a simple high-breakdown regression estimator which shares the flexibility and nice asymptotic properties of M-estimators. The name "S-estimators" was chosen as they are based on estimators of scale. We will consider estimators of scale defined by a function $\rho $, which satisfies • R1 – $\rho $ is symmetric, continuously differentiable and $\rho (0)=0$. • R2 – there exists $c>0$ such that $\rho $ is strictly increasing on $[0,c]$ and constant on $[c,\infty )$. For any sample $\{r_{1},...,r_{n}\}$ of real numbers, we define the scale estimate $s(r_{1},...,r_{n})$ as the solution of $ {\frac {1}{n}}\sum _{i=1}^{n}\rho (r_{i}/s)=K$, where $K$ is the expectation value of $\rho $ for a standard normal distribution. (If the equation has more than one solution, we take the smallest solution for $s$; if it has no solution, we set $s(r_{1},...,r_{n})=0$.) Definition: Let $(x_{1},y_{1}),...,(x_{n},y_{n})$ be a sample of regression data with p-dimensional $x_{i}$. For each vector $\theta $, we form the residuals $r_{i}(\theta )=y_{i}-x_{i}^{T}\theta $ and compute the scale estimate $s(r_{1}(\theta ),...,r_{n}(\theta ))$ by solving the equation of scale above, where $\rho $ satisfies R1 and R2. The S-estimator ${\hat {\theta }}$ is defined as the minimizer ${\hat {\theta }}=\arg \min _{\theta }\,s(r_{1}(\theta ),...,r_{n}(\theta ))$ and the final scale estimator ${\hat {\sigma }}$ is then ${\hat {\sigma }}=s(r_{1}({\hat {\theta }}),...,r_{n}({\hat {\theta }}))$.[1] References 1. P. Rousseeuw and V. Yohai, "Robust Regression by Means of S-estimators", in Robust and Nonlinear Time Series Analysis, pp. 256–272, 1984
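To make the scale equation above concrete, here is a minimal Python sketch (an illustration, not code from the reference). It uses Tukey's biweight function as one standard choice of ρ satisfying R1 and R2; the tuning constant c, the constant K obtained by numerical integration against the standard normal density, and the helper names are all assumptions made only for this example.

```python
# Minimal sketch: the S-scale estimate s(r_1, ..., r_n) with Tukey's biweight rho,
# which is symmetric, continuously differentiable, rho(0) = 0, strictly increasing
# on [0, c] and constant on [c, infinity).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C = 1.547  # illustrative tuning constant

def rho(u, c=C):
    u = np.minimum(np.abs(u), c)  # rho is constant beyond c
    return (c**2 / 6.0) * (1.0 - (1.0 - (u / c) ** 2) ** 3)

# K = E[rho(Z)] for Z ~ N(0, 1), computed numerically
K = quad(lambda z: rho(z) * np.exp(-z**2 / 2) / np.sqrt(2 * np.pi),
         -np.inf, np.inf)[0]

def s_scale(r):
    """Solve (1/n) * sum_i rho(r_i / s) = K for s by bracketing a root."""
    r = np.asarray(r, dtype=float)
    f = lambda s: np.mean(rho(r / s)) - K
    return brentq(f, 1e-8, 10.0 * np.max(np.abs(r)) + 1.0)

residuals = [0.3, -1.2, 0.8, 2.5, -0.4, 0.1, -0.9, 15.0]  # one gross outlier
print(s_scale(residuals))  # robust scale estimate, barely affected by the outlier
```

The S-estimator itself would then minimize this scale over candidate regression coefficients θ, with the residuals recomputed for each θ.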
s-finite measure In measure theory, a branch of mathematics that studies generalized notions of volumes, an s-finite measure is a special type of measure. An s-finite measure is more general than a finite measure, but allows one to generalize certain proofs for finite measures. The s-finite measures should not be confused with the σ-finite (sigma-finite) measures. Definition Let $(X,{\mathcal {A}})$ be a measurable space and $\mu $ a measure on this measurable space. The measure $\mu $ is called an s-finite measure if it can be written as a countable sum of finite measures $\nu _{n}$ ($n\in \mathbb {N} $),[1] $\mu =\sum _{n=1}^{\infty }\nu _{n}.$ Example The Lebesgue measure $\lambda $ is an s-finite measure. For this, set $B_{n}=(-n,-n+1]\cup [n-1,n)$ and define the measures $\nu _{n}$ by $\nu _{n}(A)=\lambda (A\cap B_{n})$ for all measurable sets $A$. These measures are finite, since $\nu _{n}(A)\leq \nu _{n}(B_{n})=2$ for all measurable sets $A$, and by construction satisfy $\lambda =\sum _{n=1}^{\infty }\nu _{n}.$ Therefore, the Lebesgue measure is s-finite. Properties Relation to σ-finite measures Every σ-finite measure is s-finite, but not every s-finite measure is also σ-finite. To show that every σ-finite measure is s-finite, let $\mu $ be σ-finite. Then there are measurable disjoint sets $B_{1},B_{2},\dots $ with $\mu (B_{n})<\infty $ and $\bigcup _{n=1}^{\infty }B_{n}=X$. Then the measures $\nu _{n}(\cdot ):=\mu (\cdot \cap B_{n})$ are finite and their sum is $\mu $. This approach is just like in the example above. An example of an s-finite measure that is not σ-finite can be constructed on the set $X=\{a\}$ with the σ-algebra ${\mathcal {A}}=\{\{a\},\emptyset \}$. For all $n\in \mathbb {N} $, let $\nu _{n}$ be the counting measure on this measurable space and define $\mu :=\sum _{n=1}^{\infty }\nu _{n}.$ The measure $\mu $ is by construction s-finite (since the counting measure is finite on a set with one element). But $\mu $ is not σ-finite, since $\mu (\{a\})=\sum _{n=1}^{\infty }\nu _{n}(\{a\})=\sum _{n=1}^{\infty }1=\infty .$ So $\mu $ cannot be σ-finite. Equivalence to probability measures For every s-finite measure $\mu =\sum _{n=1}^{\infty }\nu _{n}$, there exists an equivalent probability measure $P$, meaning that $\mu \sim P$.[1] One possible equivalent probability measure is given by $P=\sum _{n=1}^{\infty }2^{-n}{\frac {\nu _{n}}{\nu _{n}(X)}}.$ References 1. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Probability Theory and Stochastic Modelling. Vol. 77. Switzerland: Springer. p. 21. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3. • Falkner, Neil (2009). "Reviews". American Mathematical Monthly. 116 (7): 657–664. doi:10.4169/193009709X458654. ISSN 0002-9890. • Olav Kallenberg (12 April 2017). Random Measures, Theory and Applications. Springer. ISBN 978-3-319-41598-7. • Günter Last; Mathew Penrose (26 October 2017). Lectures on the Poisson Process. Cambridge University Press. ISBN 978-1-107-08801-6. • R.K. Getoor (6 December 2012). Excessive Measures. Springer Science & Business Media. ISBN 978-1-4612-3470-8. 
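The decomposition of the Lebesgue measure in the example above can be checked numerically on intervals. The following Python sketch (written only for this illustration; the helper names are not from any library) represents ν_n of an interval A = (a, b) as the length of A ∩ B_n and confirms that the ν_n sum to λ(A); for a bounded interval only finitely many terms are nonzero.

```python
# Sketch of the s-finite decomposition of Lebesgue measure:
# B_n = (-n, -n+1] U [n-1, n) and nu_n(A) = lambda(A ∩ B_n).
def length(lo, hi):
    """Lebesgue measure of the interval (lo, hi); empty intervals get 0."""
    return max(0.0, hi - lo)

def nu(n, a, b):
    """nu_n of the interval (a, b): overlap lengths with the two pieces of B_n."""
    left = length(max(a, -n), min(b, -n + 1))   # overlap with (-n, -n+1]
    right = length(max(a, n - 1), min(b, n))    # overlap with [n-1, n)
    return left + right

a, b = -2.3, 4.7                                 # sample bounded interval A = (a, b)
total = sum(nu(n, a, b) for n in range(1, 100))  # only finitely many terms are nonzero
print(total, b - a)                              # both equal 7.0 (up to rounding)
```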
S-matrix theory S-matrix theory was a proposal for replacing local quantum field theory as the basic principle of elementary particle physics. It avoided the notion of space and time by replacing it with abstract mathematical properties of the S-matrix. In S-matrix theory, the S-matrix relates the infinite past to the infinite future in one step, without being decomposable into intermediate steps corresponding to time-slices. This program was very influential in the 1960s, because it was a plausible substitute for quantum field theory, which was plagued with the zero interaction phenomenon at strong coupling. Applied to the strong interaction, it led to the development of string theory. S-matrix theory was largely abandoned by physicists in the 1970s, as quantum chromodynamics was recognized to solve the problems of strong interactions within the framework of field theory. But in the guise of string theory, S-matrix theory is still a popular approach to the problem of quantum gravity. The S-matrix theory is related to the holographic principle and the AdS/CFT correspondence by a flat space limit. The analog of the S-matrix relations in AdS space is the boundary conformal theory.[1] The most lasting legacy of the theory is string theory. Other notable achievements are the Froissart bound, and the prediction of the pomeron. History S-matrix theory was proposed as a principle of particle interactions by Werner Heisenberg in 1943,[2] following John Archibald Wheeler's 1937 introduction of the S-matrix.[3] It was developed heavily by Geoffrey Chew, Steven Frautschi, Stanley Mandelstam, Vladimir Gribov, and Tullio Regge. Some aspects of the theory were promoted by Lev Landau in the Soviet Union, and by Murray Gell-Mann in the United States. Basic principles The basic principles are: 1. 
Relativity: The S-matrix is a representation of the Poincaré group; 2. Unitarity: $SS^{\dagger }=1$; 3. Analyticity: integral relations and singularity conditions. The basic analyticity principles were also called analyticity of the first kind, and they were never fully enumerated, but they include 1. Crossing: The amplitudes for antiparticle scattering are the analytic continuation of particle scattering amplitudes. 2. Dispersion relations: the values of the S-matrix can be calculated by integrals over internal energy variables of the imaginary part of the same values. 3. Causality conditions: the singularities of the S-matrix can only occur in ways that don't allow the future to influence the past (motivated by Kramers–Kronig relations) 4. Landau principle: Any singularity of the S-matrix corresponds to production thresholds of physical particles.[4][5] These principles were to replace the notion of microscopic causality in field theory, the idea that field operators exist at each spacetime point, and that spacelike separated operators commute with one another. Bootstrap models The basic principles were too general to apply directly, because they are satisfied automatically by any field theory. So to apply to the real world, additional principles were added. The phenomenological way in which this was done was by taking experimental data and using the dispersion relations to compute new limits. This led to the discovery of some particles, and to successful parameterizations of the interactions of pions and nucleons. This path was mostly abandoned because the resulting equations, devoid of any space-time interpretation, were very difficult to understand and solve. Regge theory The principle behind the Regge theory hypothesis (also called analyticity of the second kind or the bootstrap principle) is that all strongly interacting particles lie on Regge trajectories. This was considered the definitive sign that all the hadrons are composite particles, but within S-matrix theory, they are not thought of as being made up of elementary constituents. The Regge theory hypothesis allowed for the construction of string theories, based on bootstrap principles. The additional assumption was the narrow resonance approximation, which started with stable particles on Regge trajectories, and added interaction loop by loop in a perturbation series. String theory was given a Feynman path-integral interpretation a little while later. The path integral in this case is the analog of a sum over particle paths, not of a sum over field configurations. Feynman's original path integral formulation of field theory also had little need for local fields, since Feynman derived the propagators and interaction rules largely using Lorentz invariance and unitarity. See also • Landau pole • Regge trajectory • Bootstrap model • Pomeron • Dual resonance model • History of string theory Notes 1. Giddings, Steven B. (1999-10-04). "Boundary S-Matrix and the Anti–de Sitter Space to Conformal Field Theory Dictionary". Physical Review Letters. 83 (14): 2707–2710. arXiv:hep-th/9903048. Bibcode:1999PhRvL..83.2707G. doi:10.1103/physrevlett.83.2707. ISSN 0031-9007. 2. Heisenberg, W. (1943). "Die beobachtbaren Größen in der Theorie der Elementarteilchen". Zeitschrift für Physik (in German). Springer Science and Business Media LLC. 120 (7–10): 513–538. Bibcode:1943ZPhy..120..513H. doi:10.1007/bf01329800. ISSN 1434-6001. S2CID 120706757. 3. Wheeler, John A. (1937-12-01). 
"On the Mathematical Description of Light Nuclei by the Method of Resonating Group Structure". Physical Review. American Physical Society (APS). 52 (11): 1107–1122. Bibcode:1937PhRv...52.1107W. doi:10.1103/physrev.52.1107. ISSN 0031-899X. 4. Landau, L.D. (1959). "On analytic properties of vertex parts in quantum field theory". Nuclear Physics. Elsevier BV. 13 (1): 181–192. Bibcode:1959NucPh..13..181L. doi:10.1016/0029-5582(59)90154-3. ISSN 0029-5582. 5. Yuri V. Kovchegov, Eugene Levin, Quantum Chromodynamics at High Energy, Cambridge University Press, 2012, p. 313. References • Steven Frautschi, Regge Poles and S-matrix Theory, New York: W. A. Benjamin, Inc., 1963.
S-object In algebraic topology, an $\mathbb {S} $-object (also called a symmetric sequence) is a sequence $\{X(n)\}$ of objects such that each $X(n)$ comes with an action[note 1] of the symmetric group $\mathbb {S} _{n}$. The category of combinatorial species is equivalent to the category of finite $\mathbb {S} $-sets (roughly because the permutation category is equivalent to the category of finite sets and bijections.)[1] S-module By $\mathbb {S} $-module, we mean an $\mathbb {S} $-object in the category ${\mathsf {Vect}}$ of finite-dimensional vector spaces over a field k of characteristic zero (the symmetric groups act from the right by convention). Then each $\mathbb {S} $-module determines a Schur functor on ${\mathsf {Vect}}$. This definition of $\mathbb {S} $-module shares its name with the considerably better-known model for highly structured ring spectra due to Elmendorf, Kriz, Mandell and May. See also • Highly structured ring spectrum Notes 1. An action of a group G on an object X in a category C is a functor from G viewed as a category with a single object to C that maps the single object to X. Note this functor then induces a group homomorphism $G\to \operatorname {Aut} (X)$; cf. Automorphism group#In category theory. References 1. Getzler & Jones 1994, § 1 • Getzler, Ezra; Jones, J. D. S. (1994-03-08). "Operads, homotopy algebra and iterated integrals for double loop spaces". arXiv:hep-th/9403055. • Loday, Jean-Louis (1996). "La renaissance des opérades". www.numdam.org. Séminaire Nicolas Bourbaki. MR 1423619. Zbl 0866.18007. Retrieved 2018-09-27.
Schur polynomial In mathematics, Schur polynomials, named after Issai Schur, are certain symmetric polynomials in n variables, indexed by partitions, that generalize the elementary symmetric polynomials and the complete homogeneous symmetric polynomials. In representation theory they are the characters of polynomial irreducible representations of the general linear groups. The Schur polynomials form a linear basis for the space of all symmetric polynomials. Any product of Schur polynomials can be written as a linear combination of Schur polynomials with non-negative integral coefficients; the values of these coefficients are given combinatorially by the Littlewood–Richardson rule. More generally, skew Schur polynomials are associated with pairs of partitions and have similar properties to Schur polynomials. Definition (Jacobi's bialternant formula) Schur polynomials are indexed by integer partitions. Given a partition λ = (λ1, λ2, …, λn), where λ1 ≥ λ2 ≥ … ≥ λn, and each λj is a non-negative integer, the functions $a_{(\lambda _{1}+n-1,\lambda _{2}+n-2,\dots ,\lambda _{n})}(x_{1},x_{2},\dots ,x_{n})=\det \left[{\begin{matrix}x_{1}^{\lambda _{1}+n-1}&x_{2}^{\lambda _{1}+n-1}&\dots &x_{n}^{\lambda _{1}+n-1}\\x_{1}^{\lambda _{2}+n-2}&x_{2}^{\lambda _{2}+n-2}&\dots &x_{n}^{\lambda _{2}+n-2}\\\vdots &\vdots &\ddots &\vdots \\x_{1}^{\lambda _{n}}&x_{2}^{\lambda _{n}}&\dots &x_{n}^{\lambda _{n}}\end{matrix}}\right]$ are alternating polynomials by properties of the determinant. A polynomial is alternating if it changes sign under any transposition of the variables. Since they are alternating, they are all divisible by the Vandermonde determinant $a_{(n-1,n-2,\dots ,0)}(x_{1},x_{2},\dots ,x_{n})=\det \left[{\begin{matrix}x_{1}^{n-1}&x_{2}^{n-1}&\dots &x_{n}^{n-1}\\x_{1}^{n-2}&x_{2}^{n-2}&\dots &x_{n}^{n-2}\\\vdots &\vdots &\ddots &\vdots \\1&1&\dots &1\end{matrix}}\right]=\prod _{1\leq j<k\leq n}(x_{j}-x_{k}).$ The Schur polynomials are defined as the ratio $s_{\lambda }(x_{1},x_{2},\dots ,x_{n})={\frac {a_{(\lambda _{1}+n-1,\lambda _{2}+n-2,\dots ,\lambda _{n}+0)}(x_{1},x_{2},\dots ,x_{n})}{a_{(n-1,n-2,\dots ,0)}(x_{1},x_{2},\dots ,x_{n})}}.$ This is known as the bialternant formula of Jacobi. It is a special case of the Weyl character formula. This is a symmetric function because the numerator and denominator are both alternating, and a polynomial since all alternating polynomials are divisible by the Vandermonde determinant. Properties The degree d Schur polynomials in n variables are a linear basis for the space of homogeneous degree d symmetric polynomials in n variables. For a partition λ = (λ1, λ2, ..., λn), the Schur polynomial is a sum of monomials, $s_{\lambda }(x_{1},x_{2},\ldots ,x_{n})=\sum _{T}x^{T}=\sum _{T}x_{1}^{t_{1}}\cdots x_{n}^{t_{n}}$ where the summation is over all semistandard Young tableaux T of shape λ. The exponents t1, ..., tn give the weight of T, in other words each ti counts the occurrences of the number i in T. This can be shown to be equivalent to the definition from the first Giambelli formula using the Lindström–Gessel–Viennot lemma (as outlined on that page). 
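The bialternant formula lends itself to direct computation on small cases. The SymPy sketch below (the helper schur is an ad hoc function written for this illustration, not a library routine) builds the two alternants as determinants and cancels the Vandermonde factor; the printed expansion of s_(2,2) in three variables agrees with the expression for s_(2,2,0) worked out in the Example section below.

```python
# Sketch of Jacobi's bialternant formula: s_lambda = a_(lambda + delta) / a_delta,
# where delta = (n-1, n-2, ..., 0).
from sympy import symbols, Matrix, cancel, expand

def schur(lam, n):
    """Schur polynomial s_lam in n variables as a ratio of alternants."""
    xs = symbols(f'x1:{n + 1}')                      # x1, ..., xn
    lam = list(lam) + [0] * (n - len(lam))           # pad the partition with zeros
    exps = [lam[i] + (n - 1 - i) for i in range(n)]  # row exponents lambda_i + n - i
    num = Matrix(n, n, lambda i, j: xs[j] ** exps[i]).det()
    den = Matrix(n, n, lambda i, j: xs[j] ** (n - 1 - i)).det()  # Vandermonde determinant
    return expand(cancel(num / den))                 # the quotient is a polynomial

print(schur((2, 2), 3))
# equals x1^2 x2^2 + x1^2 x3^2 + x2^2 x3^2 + x1^2 x2 x3 + x1 x2^2 x3 + x1 x2 x3^2
```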
Schur polynomials can be expressed as linear combinations of monomial symmetric functions mμ with non-negative integer coefficients Kλμ called Kostka numbers, $s_{\lambda }=\sum _{\mu }K_{\lambda \mu }m_{\mu }.\ $ The Kostka numbers Kλμ are given by the number of semi-standard Young tableaux of shape λ and weight μ. Jacobi−Trudi identities The first Jacobi−Trudi formula expresses the Schur polynomial as a determinant in terms of the complete homogeneous symmetric polynomials, $s_{\lambda }=\det(h_{\lambda _{i}+j-i})_{i,j=1}^{l(\lambda )}=\det \left[{\begin{matrix}h_{\lambda _{1}}&h_{\lambda _{1}+1}&\dots &h_{\lambda _{1}+n-1}\\h_{\lambda _{2}-1}&h_{\lambda _{2}}&\dots &h_{\lambda _{2}+n-2}\\\vdots &\vdots &\ddots &\vdots \\h_{\lambda _{n}-n+1}&h_{\lambda _{n}-n+2}&\dots &h_{\lambda _{n}}\end{matrix}}\right],$ where hi := s(i).[1] The second Jacobi-Trudi formula expresses the Schur polynomial as a determinant in terms of the elementary symmetric polynomials, $s_{\lambda }=\det(e_{\lambda '_{i}+j-i})_{i,j=1}^{l(\lambda ')}=\det \left[{\begin{matrix}e_{\lambda '_{1}}&e_{\lambda '_{1}+1}&\dots &e_{\lambda '_{1}+l-1}\\e_{\lambda '_{2}-1}&e_{\lambda '_{2}}&\dots &e_{\lambda '_{2}+l-2}\\\vdots &\vdots &\ddots &\vdots \\e_{\lambda '_{l}-l+1}&e_{\lambda '_{l}-l+2}&\dots &e_{\lambda '_{l}}\end{matrix}}\right],$ where ei := s(1i) and λ' is the conjugate partition to λ.[2] In both identities, functions with negative subscripts are defined to be zero. The Giambelli identity Another determinantal identity is Giambelli's formula, which expresses the Schur function for an arbitrary partition in terms of those for the hook partitions contained within the Young diagram. In Frobenius' notation, the partition is denoted $(a_{1},\ldots ,a_{r}\mid b_{1},\ldots ,b_{r})$ where, for each diagonal element in position ii, ai denotes the number of boxes to the right in the same row and bi denotes the number of boxes beneath it in the same column (the arm and leg lengths, respectively). The Giambelli identity expresses the Schur function corresponding to this partition as the determinant $s_{(a_{1},\ldots ,a_{r}\mid b_{1},\ldots ,b_{r})}=\det(s_{(a_{i}\mid b_{j})})$ of those for hook partitions. The Cauchy identity The Cauchy identity for Schur functions (now in infinitely many variables), and its dual state that $\sum _{\lambda }s_{\lambda }(x)s_{\lambda }(y)=\sum _{\lambda }m_{\lambda }(x)h_{\lambda }(y)=\prod _{i,j}(1-x_{i}y_{j})^{-1},$ and $\sum _{\lambda }s_{\lambda }(x)s_{\lambda '}(y)=\sum _{\lambda }m_{\lambda }(x)e_{\lambda }(y)=\prod _{i,j}(1+x_{i}y_{j}),$ where the sum is taken over all partitions λ, and $h_{\lambda }(x)$, $e_{\lambda }(x)$ denote the complete symmetric functions and elementary symmetric functions, respectively. If the sum is taken over products of Schur polynomials in $n$ variables $(x_{1},\dots ,x_{n})$, the sum includes only partitions of length $\ell (\lambda )\leq n$ since otherwise the Schur polynomials vanish. There are many generalizations of these identities to other families of symmetric functions. For example, Macdonald polynomials, Schubert polynomials and Grothendieck polynomials admit Cauchy-like identities. 
Further identities The Schur polynomial can also be computed via a specialization of a formula for Hall–Littlewood polynomials, $s_{\lambda }(x_{1},\dotsc ,x_{n})=\sum _{w\in S_{n}/S_{n}^{\lambda }}w\left(x^{\lambda }\prod _{\lambda _{i}>\lambda _{j}}{\frac {x_{i}}{x_{i}-x_{j}}}\right)$ where $S_{n}^{\lambda }$ is the subgroup of permutations such that $\lambda _{w(i)}=\lambda _{i}$ for all i, and w acts on variables by permuting indices. The Murnaghan−Nakayama rule The Murnaghan–Nakayama rule expresses a product of a power-sum symmetric function with a Schur polynomial, in terms of Schur polynomials: $p_{r}\cdot s_{\lambda }=\sum _{\mu }(-1)^{ht(\mu /\lambda )+1}s_{\mu }$ where the sum is over all partitions μ such that μ/λ is a rim-hook of size r and ht(μ/λ) is the number of rows in the diagram μ/λ. The Littlewood–Richardson rule and Pieri's formula The Littlewood–Richardson coefficients depend on three partitions, say $\lambda ,\mu ,\nu $, of which $\lambda $ and $\mu $ describe the Schur functions being multiplied, and $\nu $ gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients $c_{\lambda ,\mu }^{\nu }$ such that $s_{\lambda }s_{\mu }=\sum _{\nu }c_{\lambda ,\mu }^{\nu }s_{\nu }.$ The Littlewood–Richardson rule states that $c_{\lambda ,\mu }^{\nu }$ is equal to the number of Littlewood–Richardson tableaux of skew shape $\nu /\lambda $ and of weight $\mu $. Pieri's formula is a special case of the Littlewood-Richardson rule, which expresses the product $h_{r}s_{\lambda }$ in terms of Schur polynomials. The dual version expresses $e_{r}s_{\lambda }$ in terms of Schur polynomials. Specializations Evaluating the Schur polynomial sλ in (1, 1, ..., 1) gives the number of semi-standard Young tableaux of shape λ with entries in 1, 2, ..., n. One can show, by using the Weyl character formula for example, that $s_{\lambda }(1,1,\dots ,1)=\prod _{1\leq i<j\leq n}{\frac {\lambda _{i}-\lambda _{j}+j-i}{j-i}}.$ In this formula, λ, the tuple indicating the width of each row of the Young diagram, is implicitly extended with zeros until it has length n. The sum of the elements λi is d. See also the Hook length formula which computes the same quantity for fixed λ. Example The following extended example should help clarify these ideas. Consider the case n = 3, d = 4. Using Ferrers diagrams or some other method, we find that there are just four partitions of 4 into at most three parts. We have $s_{(2,1,1)}(x_{1},x_{2},x_{3})={\frac {1}{\Delta }}\;\det \left[{\begin{matrix}x_{1}^{4}&x_{2}^{4}&x_{3}^{4}\\x_{1}^{2}&x_{2}^{2}&x_{3}^{2}\\x_{1}&x_{2}&x_{3}\end{matrix}}\right]=x_{1}\,x_{2}\,x_{3}\,(x_{1}+x_{2}+x_{3})$ $s_{(2,2,0)}(x_{1},x_{2},x_{3})={\frac {1}{\Delta }}\;\det \left[{\begin{matrix}x_{1}^{4}&x_{2}^{4}&x_{3}^{4}\\x_{1}^{3}&x_{2}^{3}&x_{3}^{3}\\1&1&1\end{matrix}}\right]=x_{1}^{2}\,x_{2}^{2}+x_{1}^{2}\,x_{3}^{2}+x_{2}^{2}\,x_{3}^{2}+x_{1}^{2}\,x_{2}\,x_{3}+x_{1}\,x_{2}^{2}\,x_{3}+x_{1}\,x_{2}\,x_{3}^{2}$ and so on, where $\Delta $ is the Vandermonde determinant $a_{(2,1,0)}(x_{1},x_{2},x_{3})$. Summarizing: 1. $s_{(2,1,1)}=e_{1}\,e_{3}$ 2. $s_{(2,2,0)}=e_{2}^{2}-e_{1}\,e_{3}$ 3. $s_{(3,1,0)}=e_{1}^{2}\,e_{2}-e_{2}^{2}-e_{1}\,e_{3}$ 4. 
$s_{(4,0,0)}=e_{1}^{4}-3\,e_{1}^{2}\,e_{2}+2\,e_{1}\,e_{3}+e_{2}^{2}.$ Every homogeneous degree-four symmetric polynomial in three variables can be expressed as a unique linear combination of these four Schur polynomials, and this combination can again be found using a Gröbner basis for an appropriate elimination order. For example, $\phi (x_{1},x_{2},x_{3})=x_{1}^{4}+x_{2}^{4}+x_{3}^{4}$ is obviously a symmetric polynomial which is homogeneous of degree four, and we have $\phi =s_{(2,1,1)}-s_{(3,1,0)}+s_{(4,0,0)}.\,\!$ Relation to representation theory The Schur polynomials occur in the representation theory of the symmetric groups, general linear groups, and unitary groups. The Weyl character formula implies that the Schur polynomials are the characters of finite-dimensional irreducible representations of the general linear groups, and helps to generalize Schur's work to other compact and semisimple Lie groups. Several expressions arise for this relation, one of the most important being the expansion of the Schur functions $s_{\lambda }$ in terms of the symmetric power functions $p_{k}=\sum _{i}x_{i}^{k}$. If we write $\chi _{\rho }^{\lambda }$ for the character of the representation of the symmetric group indexed by the partition λ evaluated at elements of cycle type indexed by the partition ρ, then $s_{\lambda }=\sum _{\nu }{\frac {\chi _{\nu }^{\lambda }}{z_{\nu }}}p_{\nu }=\sum _{\rho =(1^{r_{1}},2^{r_{2}},3^{r_{3}},\dots )}\chi _{\rho }^{\lambda }\prod _{k}{\frac {p_{k}^{r_{k}}}{r_{k}!k^{r_{k}}}},$ where $\rho =(1^{r_{1}},2^{r_{2}},3^{r_{3}},\dots )$ means that the partition ρ has $r_{k}$ parts of length k. A proof of this can be found in R. Stanley's Enumerative Combinatorics Volume 2, Corollary 7.17.5. The integers $\chi _{\rho }^{\lambda }$ can be computed using the Murnaghan–Nakayama rule. Schur positivity Due to the connection with representation theory, symmetric functions which expand positively in Schur functions are of particular interest. For example, the skew Schur functions expand positively in the ordinary Schur functions, and the coefficients are Littlewood–Richardson coefficients. A special case of this is the expansion of the complete homogeneous symmetric functions $h_{\lambda }$ in Schur functions. This decomposition reflects how a permutation module is decomposed into irreducible representations. Methods for proving Schur positivity There are several approaches to proving Schur positivity of a given symmetric function F. If F is described in a combinatorial manner, a direct approach is to produce a bijection with semi-standard Young tableaux. The Edelman–Greene correspondence and the Robinson–Schensted–Knuth correspondence are examples of such bijections. A bijection with more structure is a proof using so-called crystals. This method can be described as defining a certain graph structure, given by local rules, on the underlying combinatorial objects. A similar idea is the notion of dual equivalence. This approach also uses a graph structure, but on the objects representing the expansion in the fundamental quasisymmetric basis. It is closely related to the RSK correspondence. Generalizations Skew Schur functions Skew Schur functions $s_{\lambda /\mu }$ depend on two partitions λ and μ, and can be defined by the property $\langle s_{\lambda /\mu },s_{\nu }\rangle =\langle s_{\lambda },s_{\mu }s_{\nu }\rangle .$ Here, the inner product is the Hall inner product, for which the Schur polynomials form an orthonormal basis. Similar to the ordinary Schur polynomials, there are numerous ways to compute these. 
The corresponding Jacobi–Trudi identities are $s_{\lambda /\mu }=\det(h_{\lambda _{i}-\mu _{j}-i+j})_{i,j=1}^{l(\lambda )}$ $s_{\lambda '/\mu '}=\det(e_{\lambda _{i}-\mu _{j}-i+j})_{i,j=1}^{l(\lambda )}$ There is also a combinatorial interpretation of the skew Schur polynomials, namely, $s_{\lambda /\mu }$ is a sum over all semi-standard Young tableaux (or column-strict tableaux) of the skew shape $\lambda /\mu $. The skew Schur polynomials expand positively in Schur polynomials. A rule for the coefficients is given by the Littlewood–Richardson rule. Double Schur polynomials The double Schur polynomials[3] can be seen as a generalization of the shifted Schur polynomials. These polynomials are also closely related to the factorial Schur polynomials. Given a partition λ and a sequence $a_{1},a_{2},\ldots $ one can define the double Schur polynomial $s_{\lambda }(x||a)$ as $s_{\lambda }(x||a)=\sum _{T}\prod _{\alpha \in \lambda }(x_{T(\alpha )}-a_{T(\alpha )-c(\alpha )})$ where the sum is taken over all reverse semi-standard Young tableaux T of shape λ, and integer entries in 1, …, n. Here T(α) denotes the value in the box α in T and c(α) is the content of the box. A combinatorial rule for the Littlewood–Richardson coefficients (depending on the sequence a) was given by A. I. Molev.[3] In particular, this implies that the shifted Schur polynomials have non-negative Littlewood–Richardson coefficients. The shifted Schur polynomials $s_{\lambda }^{*}(y)$ can be obtained from the double Schur polynomials by specializing $a_{i}=-i$ and $y_{i}=x_{i}+i$. The double Schur polynomials are special cases of the double Schubert polynomials. Factorial Schur polynomials The factorial Schur polynomials may be defined as follows. Given a partition λ and a doubly infinite sequence $\ldots ,a_{-1},a_{0},a_{1},\ldots $ one can define the factorial Schur polynomial $s_{\lambda }(x|a)$ as $s_{\lambda }(x|a)=\sum _{T}\prod _{\alpha \in \lambda }(x_{T(\alpha )}-a_{T(\alpha )+c(\alpha )})$ where the sum is taken over all semi-standard Young tableaux T of shape λ, and integer entries in 1, …, n. Here T(α) denotes the value in the box α in T and c(α) is the content of the box. There is also a determinant formula, $s_{\lambda }(x|a)={\frac {\det[(x_{j}|a)^{\lambda _{i}+n-i}]_{i,j=1}^{l(\lambda )}}{\prod _{i<j}(x_{i}-x_{j})}}$ where $(y|a)^{k}=(y-a_{1})\cdots (y-a_{k})$. It is clear that if we let $a_{i}=0$ for all i, we recover the usual Schur polynomial $s_{\lambda }$. The double Schur polynomials and the factorial Schur polynomials in n variables are related via the identity $s_{\lambda }(x||a)=s_{\lambda }(x|u)$ where $a_{n-i+1}=u_{i}$. Other generalizations There are numerous generalizations of Schur polynomials: • Hall–Littlewood polynomials • Shifted Schur polynomials • Flagged Schur polynomials • Schubert polynomials • Stanley symmetric functions (also known as stable Schubert polynomials) • Key polynomials (also known as Demazure characters) • Quasi-symmetric Schur polynomials • Row-strict Schur polynomials • Jack polynomials • Modular Schur polynomials • Loop Schur functions • Macdonald polynomials • Schur polynomials for the symplectic and orthogonal group • k-Schur functions • Grothendieck polynomials (K-theoretical analogue of Schur polynomials) • LLT polynomials See also • Schur functor • Littlewood–Richardson rule, where one finds some identities involving Schur polynomials. References • Macdonald, I. G. (1995). Symmetric functions and Hall polynomials. Oxford Mathematical Monographs (2nd ed.). Oxford University Press. ISBN 978-0-19-853489-1. MR 1354144. • Sagan, Bruce E. 
(2001) [1994], "Schur functions in algebraic combinatorics", Encyclopedia of Mathematics, EMS Press • Sturmfels, Bernd (1993). Algorithms in Invariant Theory. Springer. ISBN 978-0-387-82445-1. • Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. 1. Fulton & Harris 1991, Formula A.5 2. Fulton & Harris 1991, Formula A.6 3. Molev, A.I. (June 2009). "Littlewood–Richardson polynomials". Journal of Algebra. 321 (11): 3450–68. arXiv:0704.0065. doi:10.1016/j.jalgebra.2008.02.034.
S-procedure The S-procedure or S-lemma is a mathematical result that gives conditions under which a particular quadratic inequality is a consequence of another quadratic inequality. The S-procedure was developed independently in a number of different contexts[1][2] and has applications in control theory, linear algebra and mathematical optimization. Statement of the S-procedure Let F1 and F2 be symmetric matrices, g1 and g2 be vectors and h1 and h2 be real numbers. Assume that there is some x0 such that the strict inequality $x_{0}^{T}F_{1}x_{0}+2g_{1}^{T}x_{0}+h_{1}<0$ holds. Then the implication $x^{T}F_{1}x+2g_{1}^{T}x+h_{1}\leq 0\Longrightarrow x^{T}F_{2}x+2g_{2}^{T}x+h_{2}\leq 0$ holds if and only if there exists some nonnegative number λ such that $\lambda {\begin{bmatrix}F_{1}&g_{1}\\g_{1}^{T}&h_{1}\end{bmatrix}}-{\begin{bmatrix}F_{2}&g_{2}\\g_{2}^{T}&h_{2}\end{bmatrix}}$ is positive semidefinite.[3] References 1. Frank Uhlig, A recurring theorem about pairs of quadratic forms and extensions: a survey, Linear Algebra and its Applications, Volume 25, 1979, pages 219–237. 2. Imre Pólik and Tamás Terlaky, A Survey of the S-Lemma, SIAM Review, Volume 49, 2007, Pages 371–418. 3. Stephen Boyd and Lieven Vandenberghe Convex Optimization, Cambridge University Press, 2004, p.655. See also • Linear matrix inequality • Finsler's lemma
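The certificate in the statement is straightforward to verify numerically for a concrete instance. In the hypothetical example below (chosen only for illustration, not taken from the references), the constraint x^T x − 1 ≤ 0 implies (x_1 + x_2)^2 − 2 ≤ 0; the strict inequality holds at x_0 = 0, and λ = 2 makes the block matrix positive semidefinite, certifying the implication.

```python
# NumPy check of an S-procedure certificate for
#   x^T x - 1 <= 0   ==>   (x1 + x2)^2 - 2 <= 0,
# i.e. F1 = I, g1 = 0, h1 = -1 and F2 = [[1, 1], [1, 1]], g2 = 0, h2 = -2.
import numpy as np

F1, g1, h1 = np.eye(2), np.zeros((2, 1)), -1.0
F2, g2, h2 = np.array([[1.0, 1.0], [1.0, 1.0]]), np.zeros((2, 1)), -2.0

def block(F, g, h):
    """Assemble the (n+1) x (n+1) block matrix [[F, g], [g^T, h]]."""
    return np.block([[F, g], [g.T, np.array([[h]])]])

lam = 2.0  # candidate nonnegative multiplier
M = lam * block(F1, g1, h1) - block(F2, g2, h2)
eigs = np.linalg.eigvalsh(M)
print(eigs, bool((eigs >= -1e-9).all()))  # all eigenvalues >= 0, so M is PSD
```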
Scheme (mathematics) In mathematics, a scheme is a mathematical structure that enlarges the notion of algebraic variety in several ways, such as taking account of multiplicities (the equations x = 0 and x2 = 0 define the same algebraic variety but different schemes) and allowing "varieties" defined over any commutative ring (for example, Fermat curves are defined over the integers). Scheme theory was introduced by Alexander Grothendieck in 1960 in his treatise "Éléments de géométrie algébrique"; one of its aims was developing the formalism needed to solve deep problems of algebraic geometry, such as the Weil conjectures (the last of which was proved by Pierre Deligne).[1] Strongly based on commutative algebra, scheme theory allows a systematic use of methods of topology and homological algebra. Scheme theory also unifies algebraic geometry with much of number theory, which eventually led to Wiles's proof of Fermat's Last Theorem. Formally, a scheme is a topological space together with commutative rings for all of its open sets, which arises from gluing together spectra (spaces of prime ideals) of commutative rings along their open subsets. In other words, it is a ringed space which is locally a spectrum of a commutative ring. The relative point of view is that much of algebraic geometry should be developed for a morphism X → Y of schemes (called a scheme X over Y), rather than for an individual scheme. For example, in studying algebraic surfaces, it can be useful to consider families of algebraic surfaces over any scheme Y. In many cases, the family of all varieties of a given type can itself be viewed as a variety or scheme, known as a moduli space. For some of the detailed definitions in the theory of schemes, see the glossary of scheme theory. Development The origins of algebraic geometry mostly lie in the study of polynomial equations over the real numbers. By the 19th century, it became clear (notably in the work of Jean-Victor Poncelet and Bernhard Riemann) that algebraic geometry was simplified by working over the field of complex numbers, which has the advantage of being algebraically closed.[2] Two issues gradually drew attention in the early 20th century, motivated by problems in number theory: how can algebraic geometry be developed over any algebraically closed field, especially in positive characteristic? (The tools of topology and complex analysis used to study complex varieties do not seem to apply here.) And what about algebraic geometry over an arbitrary field? Hilbert's Nullstellensatz suggests an approach to algebraic geometry over any algebraically closed field k: the maximal ideals in the polynomial ring k[x1,...,xn] are in one-to-one correspondence with the set kn of n-tuples of elements of k, and the prime ideals correspond to the irreducible algebraic sets in kn, known as affine varieties. Motivated by these ideas, Emmy Noether and Wolfgang Krull developed the subject of commutative algebra in the 1920s and 1930s.[3] Their work generalizes algebraic geometry in a purely algebraic direction: instead of studying the prime ideals in a polynomial ring, one can study the prime ideals in any commutative ring. For example, Krull defined the dimension of any commutative ring in terms of prime ideals. At least when the ring is Noetherian, he proved many of the properties one would want from the geometric notion of dimension. Noether and Krull's commutative algebra can be viewed as an algebraic approach to affine algebraic varieties. 
However, many arguments in algebraic geometry work better for projective varieties, essentially because projective varieties are compact. From the 1920s to the 1940s, B. L. van der Waerden, André Weil and Oscar Zariski applied commutative algebra as a new foundation for algebraic geometry in the richer setting of projective (or quasi-projective) varieties.[4] In particular, the Zariski topology is a useful topology on a variety over any algebraically closed field, replacing to some extent the classical topology on a complex variety (based on the topology of the complex numbers). For applications to number theory, van der Waerden and Weil formulated algebraic geometry over any field, not necessarily algebraically closed. Weil was the first to define an abstract variety (not embedded in projective space), by gluing affine varieties along open subsets, on the model of manifolds in topology. He needed this generality for his construction of the Jacobian variety of a curve over any field. (Later, Jacobians were shown to be projective varieties by Weil, Chow and Matsusaka.) The algebraic geometers of the Italian school had often used the somewhat foggy concept of the generic point of an algebraic variety. What is true for the generic point is true for "most" points of the variety. In Weil's Foundations of Algebraic Geometry (1946), generic points are constructed by taking points in a very large algebraically closed field, called a universal domain.[4] Although this worked as a foundation, it was awkward: there were many different generic points for the same variety. (In the later theory of schemes, each algebraic variety has a single generic point.) In the 1950s, Claude Chevalley, Masayoshi Nagata and Jean-Pierre Serre, motivated in part by the Weil conjectures relating number theory and algebraic geometry, further extended the objects of algebraic geometry, for example by generalizing the base rings allowed. The word scheme was first used in the 1956 Chevalley Seminar, in which Chevalley was pursuing Zariski's ideas.[5] According to Pierre Cartier, it was André Martineau who suggested to Serre the possibility of using the spectrum of an arbitrary commutative ring as a foundation for algebraic geometry.[6] Origin of schemes Grothendieck then gave the decisive definition of a scheme, bringing to a conclusion a generation of experimental suggestions and partial developments.[7] He defined the spectrum X of a commutative ring R as the space of prime ideals of R with a natural topology (known as the Zariski topology), but augmented it with a sheaf of rings: to every open subset U he assigned a commutative ring OX(U). These objects Spec(R) are the affine schemes; a general scheme is then obtained by "gluing together" affine schemes. Much of algebraic geometry focuses on projective or quasi-projective varieties over a field k; in fact, k is often taken to be the complex numbers. Schemes of that sort are very special compared to arbitrary schemes; compare the examples below. Nonetheless, it is convenient that Grothendieck developed a large body of theory for arbitrary schemes. For example, it is common to construct a moduli space first as a scheme, and only later study whether it is a more concrete object such as a projective variety. Also, applications to number theory rapidly lead to schemes over the integers which are not defined over any field. Definition An affine scheme is a locally ringed space isomorphic to the spectrum Spec(R) of a commutative ring R. 
A scheme is a locally ringed space X admitting a covering by open sets Ui, such that each Ui (as a locally ringed space) is an affine scheme.[8] In particular, X comes with a sheaf OX, which assigns to every open subset U a commutative ring OX(U) called the ring of regular functions on U. One can think of a scheme as being covered by "coordinate charts" which are affine schemes. The definition means exactly that schemes are obtained by gluing together affine schemes using the Zariski topology. In the early days, this was called a prescheme, and a scheme was defined to be a separated prescheme. The term prescheme has fallen out of use, but can still be found in older books, such as Grothendieck's "Éléments de géométrie algébrique" and Mumford's "Red Book".[9] A basic example of an affine scheme is affine n-space over a field k, for a natural number n. By definition, An k is the spectrum of the polynomial ring k[x1,...,xn]. In the spirit of scheme theory, affine n-space can in fact be defined over any commutative ring R, meaning Spec(R[x1,...,xn]). The category of schemes Schemes form a category, with morphisms defined as morphisms of locally ringed spaces. (See also: morphism of schemes.) For a scheme Y, a scheme X over Y (or a Y-scheme) means a morphism X → Y of schemes. A scheme X over a commutative ring R means a morphism X → Spec(R). An algebraic variety over a field k can be defined as a scheme over k with certain properties. There are different conventions about exactly which schemes should be called varieties. One standard choice is that a variety over k means an integral separated scheme of finite type over k.[10] A morphism f: X → Y of schemes determines a pullback homomorphism on the rings of regular functions, f*: O(Y) → O(X). In the case of affine schemes, this construction gives a one-to-one correspondence between morphisms Spec(A) → Spec(B) of schemes and ring homomorphisms B → A.[11] In this sense, scheme theory completely subsumes the theory of commutative rings. Since Z is an initial object in the category of commutative rings, the category of schemes has Spec(Z) as a terminal object. For a scheme X over a commutative ring R, an R-point of X means a section of the morphism X → Spec(R). One writes X(R) for the set of R-points of X. In examples, this definition reconstructs the old notion of the set of solutions of the defining equations of X with values in R. When R is a field k, X(k) is also called the set of k-rational points of X. More generally, for a scheme X over a commutative ring R and any commutative R-algebra S, an S-point of X means a morphism Spec(S) → X over R. One writes X(S) for the set of S-points of X. (This generalizes the old observation that given some equations over a field k, one can consider the set of solutions of the equations in any field extension E of k.) For a scheme X over R, the assignment S ↦ X(S) is a functor from commutative R-algebras to sets. It is an important observation that a scheme X over R is determined by this functor of points.[12] The fiber product of schemes always exists. That is, for any schemes X and Z with morphisms to a scheme Y, the fiber product X×YZ (in the sense of category theory) exists in the category of schemes. If X and Z are schemes over a field k, their fiber product over Spec(k) may be called the product X × Z in the category of k-schemes. For example, the product of affine spaces Am and An over k is affine space Am+n over k. 
Since the category of schemes has fiber products and also a terminal object Spec(Z), it has all finite limits. Examples Here and below, all the rings considered are commutative: • Every affine scheme Spec(R) is a scheme. • A polynomial f over a field k, f ∈ k[x1, ..., xn], determines a closed subscheme f = 0 in affine space An over k, called an affine hypersurface. Formally, it can be defined as $\operatorname {Spec} k[x_{1},\ldots ,x_{n}]/(f).$ For example, taking k to be the complex numbers, the equation x2 = y2(y+1) defines a singular curve in the affine plane A2 C , called a nodal cubic curve. • For any commutative ring R and natural number n, projective space Pn R can be constructed as a scheme by gluing n + 1 copies of affine n-space over R along open subsets. This is the fundamental example that motivates going beyond affine schemes. The key advantage of projective space over affine space is that Pn R is proper over R; this is an algebro-geometric version of compactness. A related observation is that complex projective space CPn is a compact space in the classical topology (based on the topology of C), whereas Cn is not (for n > 0). • A homogeneous polynomial f of positive degree in the polynomial ring R[x0, ..., xn] determines a closed subscheme f = 0 in projective space Pn over R, called a projective hypersurface. In terms of the Proj construction, this subscheme can be written as $\operatorname {Proj} R[x_{0},\ldots ,x_{n}]/(f).$ For example, the closed subscheme x3 + y3 = z3 of P2 Q is an elliptic curve over the rational numbers. • The line with two origins (over a field k) is the scheme defined by starting with two copies of the affine line over k, and gluing together the two open subsets A1 − 0 by the identity map. This is a simple example of a non-separated scheme. In particular, it is not affine.[13] • A simple reason to go beyond affine schemes is that an open subset of an affine scheme need not be affine. For example, let X = An − 0, say over the complex numbers C; then X is not affine for n ≥ 2. (The restriction on n is necessary: the affine line minus the origin is isomorphic to the affine scheme Spec(C[x, x−1]). To show that X is not affine, one computes that every regular function on X extends to a regular function on An, when n ≥ 2. (This is analogous to Hartogs's lemma in complex analysis, though easier to prove.) That is, the inclusion f: X → An induces an isomorphism from O(An) = C[x1, ...., xn] to O(X). If X were affine, it would follow that f was an isomorphism. But f is not surjective and hence not an isomorphism. Therefore, the scheme X is not affine.[14] • Let k be a field. Then the scheme $ \operatorname {Spec} \left(\prod _{n=1}^{\infty }k\right)$ is an affine scheme whose underlying topological space is the Stone–Čech compactification of the positive integers (with the discrete topology). In fact, the prime ideals of this ring are in one-to-one correspondence with the ultrafilters on the positive integers, with the ideal $ \prod _{m\neq n}k$ corresponding to the principal ultrafilter associated to the positive integer n.[15] This topological space is zero-dimensional, and in particular, each point is an irreducible component. Since affine schemes are quasi-compact, this is an example of a quasi-compact scheme with infinitely many irreducible components. (By contrast, a Noetherian scheme has only finitely many irreducible components.) 
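As a small computational aside on the nodal cubic from the list of examples above: over the complex numbers, the singular points of the curve x² = y²(y + 1) are the common zeros of the defining polynomial and its two partial derivatives (the Jacobian criterion). The SymPy sketch below is a throwaway illustration of that criterion, not a scheme-theoretic computation; it finds the single node at the origin.

```python
# Singular points of the nodal cubic x^2 = y^2 (y + 1): solve f = df/dx = df/dy = 0.
from sympy import symbols, diff, solve

x, y = symbols('x y')
f = x**2 - y**2 * (y + 1)
print(solve([f, diff(f, x), diff(f, y)], [x, y], dict=True))
# [{x: 0, y: 0}] -- the node at the origin
```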
Examples of morphisms It is also fruitful to consider examples of morphisms as examples of schemes since they demonstrate their technical effectiveness for encapsulating many objects of study in algebraic and arithmetic geometry. Arithmetic surfaces If we consider a polynomial $f\in \mathbb {Z} [x,y]$ then the affine scheme $X=\operatorname {Spec} (\mathbb {Z} [x,y]/(f))$ has a canonical morphism to $\operatorname {Spec} \mathbb {Z} $ and is called an Arithmetic surface. The fibers $X_{p}=X\times _{\operatorname {Spec} (\mathbb {Z} )}\operatorname {Spec} (\mathbb {F} _{p})$ are then algebraic curves over the finite fields $\mathbb {F} _{p}$. If $f(x,y)=y^{2}-x^{3}+ax^{2}+bx+c$ is an Elliptic curve then the fibers over its discriminant locus generated by $\Delta _{f}$ where $\Delta _{f}=-4a^{3}c+a^{2}b^{2}+18abc-4b^{3}-27c^{2}$ [16] are all singular schemes. For example, if $p$ is a prime number and $X=\operatorname {Spec} \left({\frac {\mathbb {Z} [x,y]}{(y^{2}-x^{3}-p)}}\right)$ then its discriminant is $-27p^{2}$. In particular, this curve is singular over the prime numbers $3,p$. Motivation for schemes Here are some of the ways in which schemes go beyond older notions of algebraic varieties, and their significance. • Field extensions. Given some polynomial equations in n variables over a field k, one can study the set X(k) of solutions of the equations in the product set kn. If the field k is algebraically closed (for example the complex numbers), then one can base algebraic geometry on sets such as X(k): define the Zariski topology on X(k), consider polynomial mappings between different sets of this type, and so on. But if k is not algebraically closed, then the set X(k) is not rich enough. Indeed, one can study the solutions X(E) of the given equations in any field extension E of k, but these sets are not determined by X(k) in any reasonable sense. For example, the plane curve X over the real numbers defined by x2 + y2 = −1 has X(R) empty, but X(C) not empty. (In fact, X(C) can be identified with C − 0.) By contrast, a scheme X over a field k has enough information to determine the set X(E) of E-rational points for every extension field E of k. (In particular, the closed subscheme of A2 R defined by x2 + y2 = −1 is a nonempty topological space.) • Generic point. The points of the affine line A1 C , as a scheme, are its complex points (one for each complex number) together with one generic point (whose closure is the whole scheme). The generic point is the image of a natural morphism Spec(C(x)) → A1 C , where C(x) is the field of rational functions in one variable. To see why it is useful to have an actual "generic point" in the scheme, consider the following example. • Let X be the plane curve y2 = x(x−1)(x−5) over the complex numbers. This is a closed subscheme of A2 C . It can be viewed as a ramified double cover of the affine line A1 C by projecting to the x-coordinate. The fiber of the morphism X → A1 over the generic point of A1 is exactly the generic point of X, yielding the morphism $\operatorname {Spec} \mathbf {C} (x)\left({\sqrt {x(x-1)(x-5)}}\right)\to \operatorname {Spec} \mathbf {C} (x).$ This in turn is equivalent to the degree-2 extension of fields $\mathbf {C} (x)\subset \mathbf {C} (x)\left({\sqrt {x(x-1)(x-5)}}\right).$ Thus, having an actual generic point of a variety yields a geometric relation between a degree-2 morphism of algebraic varieties and the corresponding degree-2 extension of function fields. 
This generalizes to a relation between the fundamental group (which classifies covering spaces in topology) and the Galois group (which classifies certain field extensions). Indeed, Grothendieck's theory of the étale fundamental group treats the fundamental group and the Galois group on the same footing. • Nilpotent elements. Let X be the closed subscheme of the affine line A1 C defined by x2 = 0, sometimes called a fat point. The ring of regular functions on X is C[x]/(x2); in particular, the regular function x on X is nilpotent but not zero. To indicate the meaning of this scheme: two regular functions on the affine line have the same restriction to X if and only if they have the same value and first derivative at the origin. Allowing such non-reduced schemes brings the ideas of calculus and infinitesimals into algebraic geometry. • For a more elaborate example, one can describe all the zero-dimensional closed subschemes of degree 2 in a smooth complex variety Y. Such a subscheme consists of either two distinct complex points of Y, or else a subscheme isomorphic to X = Spec C[x]/(x2) as in the previous paragraph. Subschemes of the latter type are determined by a complex point y of Y together with a line in the tangent space TyY.[17] This again indicates that non-reduced subschemes have geometric meaning, related to derivatives and tangent vectors. Coherent sheaves Main article: Coherent sheaf A central part of scheme theory is the notion of coherent sheaves, generalizing the notion of (algebraic) vector bundles. For a scheme X, one starts by considering the abelian category of OX-modules, which are sheaves of abelian groups on X that form a module over the sheaf of regular functions OX. In particular, a module M over a commutative ring R determines an associated OX-module ~M on X = Spec(R). A quasi-coherent sheaf on a scheme X means an OX-module that is the sheaf associated to a module on each affine open subset of X. Finally, a coherent sheaf (on a Noetherian scheme X, say) is an OX-module that is the sheaf associated to a finitely generated module on each affine open subset of X. Coherent sheaves include the important class of vector bundles, which are the sheaves that locally come from finitely generated free modules. An example is the tangent bundle of a smooth variety over a field. However, coherent sheaves are richer; for example, a vector bundle on a closed subscheme Y of X can be viewed as a coherent sheaf on X which is zero outside Y (by the direct image construction). In this way, coherent sheaves on a scheme X include information about all closed subschemes of X. Moreover, sheaf cohomology has good properties for coherent (and quasi-coherent) sheaves. The resulting theory of coherent sheaf cohomology is perhaps the main technical tool in algebraic geometry.[18][19] Generalizations Considered as its functor of points, a scheme is a functor which is a sheaf of sets for the Zariski topology on the category of commutative rings, and which, locally in the Zariski topology, is an affine scheme. This can be generalized in several ways. One is to use the étale topology. Michael Artin defined an algebraic space as a functor which is a sheaf in the étale topology and which, locally in the étale topology, is an affine scheme. Equivalently, an algebraic space is the quotient of a scheme by an étale equivalence relation. 
A powerful result, the Artin representability theorem, gives simple conditions for a functor to be represented by an algebraic space.[20] A further generalization is the idea of a stack. Crudely speaking, algebraic stacks generalize algebraic spaces by having an algebraic group attached to each point, which is viewed as the automorphism group of that point. For example, any action of an algebraic group G on an algebraic variety X determines a quotient stack [X/G], which remembers the stabilizer subgroups for the action of G. More generally, moduli spaces in algebraic geometry are often best viewed as stacks, thereby keeping track of the automorphism groups of the objects being classified. Grothendieck originally introduced stacks as a tool for the theory of descent. In that formulation, stacks are (informally speaking) sheaves of categories.[21] From this general notion, Artin defined the narrower class of algebraic stacks (or "Artin stacks"), which can be considered geometric objects. These include Deligne–Mumford stacks (similar to orbifolds in topology), for which the stabilizer groups are finite, and algebraic spaces, for which the stabilizer groups are trivial. The Keel–Mori theorem says that an algebraic stack with finite stabilizer groups has a coarse moduli space which is an algebraic space. Another type of generalization is to enrich the structure sheaf, bringing algebraic geometry closer to homotopy theory. In this setting, known as derived algebraic geometry or "spectral algebraic geometry", the structure sheaf is replaced by a homotopical analog of a sheaf of commutative rings (for example, a sheaf of E-infinity ring spectra). These sheaves admit algebraic operations which are associative and commutative only up to an equivalence relation. Taking the quotient by this equivalence relation yields the structure sheaf of an ordinary scheme. Not taking the quotient, however, leads to a theory which can remember higher information, in the same way that derived functors in homological algebra yield higher information about operations such as tensor product and the Hom functor on modules. See also • Flat morphism, Smooth morphism, Proper morphism, Finite morphism, Étale morphism • Stable curve • Birational geometry • Étale cohomology, Chow group, Hodge theory • Group scheme, Abelian variety, Linear algebraic group, Reductive group • Moduli of algebraic curves • Gluing schemes Citations 1. Introduction of the first edition of "Éléments de géométrie algébrique". 2. Dieudonné 1985, Chapters IV and V. 3. Dieudonné 1985, sections VII.2 and VII.5. 4. Dieudonné 1985, section VII.4. 5. Chevalley, C. (1955–1956), Les schémas, Séminaire Henri Cartan, vol. 8 6. Cartier 2001, note 29. 7. Dieudonné 1985, sections VII.4, VIII.2, VIII.3. 8. Hartshorne 1997, section II.2. 9. Mumford 1999, Chapter II. 10. Stacks Project, Tag 020D. 11. Hartshorne 1997, Proposition II.2.3. 12. Eisenbud & Harris 1998, Proposition VI-2. 13. Hartshorne 1997, Example II.4.0.1. 14. Hartshorne 1997, Exercises I.3.6 and III.4.3. 15. Arapura 2011, section 1. 16. "Elliptic curves" (PDF). p. 20. 17. Eisenbud & Harris 1998, Example II-10. 18. Dieudonné 1985, sections VIII.2 and VIII.3. 19. Hartshorne 1997, Chapter III. 20. Stacks Project, Tag 07Y1. 21. Vistoli 2005, Definition 4.6. 
References • Arapura, Donu (2011), "Frobenius amplitude, ultraproducts, and vanishing on singular spaces", Illinois Journal of Mathematics, 55 (4): 1367–1384, doi:10.1215/ijm/1373636688, MR 3082873 • Cartier, Pierre (2001), "A mad day's work: from Grothendieck to Connes and Kontsevich. The evolution of concepts of space and symmetry", Bulletin of the American Mathematical Society, 38 (4): 389–408, doi:10.1090/S0273-0979-01-00913-2, MR 1848254 • Dieudonné, Jean (1985), History of Algebraic Geometry, Wadsworth, ISBN 978-0-534-03723-9, MR 0780183 • Eisenbud, David; Harris, Joe (1998). The Geometry of Schemes. Springer-Verlag. ISBN 978-0-387-98637-1. MR 1730819. • Grothendieck, Alexandre; Dieudonné, Jean (1960). "Éléments de géométrie algébrique: I. Le langage des schémas". Publications Mathématiques de l'IHÉS. 4. doi:10.1007/bf02684778. MR 0217083. • Hartshorne, Robin (1997) [1977]. Algebraic Geometry. Springer-Verlag. ISBN 978-0-387-90244-9. MR 0463157. • Igor R. Shafarevich (2013). Basic Algebraic Geometry 2: Schemes and Complex Manifolds. Springer-Verlag. ISBN 978-3642380099. MR 0456457. • Qing Liu (2002). Algebraic Geometry and Arithmetic Curves. Oxford University Press. ISBN 978-0-19-850284-5. MR 1917232. • Mumford, David (1999). The Red Book of Varieties and Schemes: Includes the Michigan Lectures (1974) on Curves and Their Jacobians. Lecture Notes in Mathematics. Vol. 1358 (2nd ed.). Springer-Verlag. doi:10.1007/b62130. ISBN 978-3-540-63293-1. MR 1748380. • Vistoli, Angelo (2005), "Grothendieck topologies, fibered categories and descent theory", Fundamental Algebraic Geometry, Providence, RI: American Mathematical Society, pp. 1–104, arXiv:math/0412512, Bibcode:2004math.....12512V, MR 2223406 External links • David Mumford, Can one explain schemes to biologists? • The Stacks Project Authors, The Stacks Project • https://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/ - the comment section contains some interesting discussion on scheme theory (including the posts from Terence Tao). Authority control: National • France • BnF data • Israel • United States
Wikipedia
S-unit In mathematics, in the field of algebraic number theory, an S-unit generalises the idea of unit of the ring of integers of the field. Many of the results which hold for units are also valid for S-units. Definition Let K be a number field with ring of integers R. Let S be a finite set of prime ideals of R. An element x of K is an S-unit if the principal fractional ideal (x) is a product of primes in S (to positive or negative powers). For the ring of rational integers Z one may take S to be a finite set of prime numbers and define an S-unit to be a rational number whose numerator and denominator are divisible only by the primes in S. Properties The S-units form a multiplicative group containing the units of R. Dirichlet's unit theorem holds for S-units: the group of S-units is finitely generated, with rank (maximal number of multiplicatively independent elements) equal to r + s, where r is the rank of the unit group and s = |S|. S-unit equation The S-unit equation is a Diophantine equation u + v = 1 with u and v restricted to being S-units of K (or more generally, elements of a finitely generated subgroup of the multiplicative group of any field of characteristic zero). The number of solutions of this equation is finite[1] and the solutions are effectively determined using estimates for linear forms in logarithms as developed in transcendental number theory. A variety of Diophantine equations are reducible in principle to some form of the S-unit equation: a notable example is Siegel's theorem on integral points on elliptic curves, and more generally superelliptic curves of the form yn = f(x). A computational solver for S-unit equation is available in the software SageMath.[2] References 1. Beukers, F.; Schlickewei, H. (1996). "The equation x+y=1 in finitely generated groups". Acta Arithmetica. 78 (2): 189–199. doi:10.4064/aa-78-2-189-199. ISSN 0065-1036. 2. "Solve S-unit equation x + y = 1 — Sage Reference Manual v8.7: Algebraic Numbers and Number Fields". doc.sagemath.org. Retrieved 2019-04-16. • Everest, Graham; van der Poorten, Alf; Shparlinski, Igor; Ward, Thomas (2003). Recurrence sequences. Mathematical Surveys and Monographs. Vol. 104. Providence, RI: American Mathematical Society. pp. 19–22. ISBN 0-8218-3387-1. Zbl 1033.11006. • Lang, Serge (1978). Elliptic curves: Diophantine analysis. Grundlehren der mathematischen Wissenschaften. Vol. 231. Springer-Verlag. pp. 128–153. ISBN 3-540-08489-4. • Lang, Serge (1986). Algebraic number theory. Springer-Verlag. ISBN 0-387-94225-4. Chap. V. • Smart, Nigel (1998). The algorithmic resolution of Diophantine equations. London Mathematical Society Student Texts. Vol. 41. Cambridge University Press. Chap. 9. ISBN 0-521-64156-X. • Neukirch, Jürgen (1986). Class field theory. Grundlehren der mathematischen Wissenschaften. Vol. 280. Springer-Verlag. pp. 72–73. ISBN 3-540-15251-2. Further reading • Baker, Alan; Wüstholz, Gisbert (2007). Logarithmic Forms and Diophantine Geometry. New Mathematical Monographs. Vol. 9. Cambridge University Press. ISBN 978-0-521-88268-2. • Bombieri, Enrico; Gubler, Walter (2006). Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge University Press. ISBN 978-0-521-71229-3. Zbl 1130.11034.
Wikipedia
Semyon Aranovich Gershgorin Semyon Aronovich Gershgorin (August 24, 1901 – May 30, 1933) was a Soviet (born in Pruzhany, Belarus, Russian Empire) mathematician. He began as a student at the Petrograd Technological Institute in 1923, became a Professor in 1930, and was given an appointment at the Leningrad Mechanical Engineering Institute in the same year. His contributions include the Gershgorin circle theorem. The spelling of S. A. Gershgorin's name (Семён Аронович Гершгорин) has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gerszgorin, Gershgorin, Gershgeroff, Qureshin, Gershmachnow and from the Yiddish spelling הירשהאָרן to Hirshhorn and Hirschhorn. The authors of his obituary[1] wrote about Gershgorin's death at the very young age of 31: "A vigorous, stressful job weakened Semyon Aranovich's health; he succumbed to an accidental illness." References 1. Obituary: Semyon Aronovich Gershgorin (Russian), Applied Mathematics and Mechanics 1 (1) (1933), 4. External links • O'Connor, John J.; Robertson, Edmund F., "Semyon Aranovich Gershgorin", MacTutor History of Mathematics Archive, University of St Andrews. Authority control International • FAST • ISNI • VIAF National • Germany • Israel • United States Academics • zbMATH
Wikipedia
Stefan Banach Stefan Banach (Polish: [ˈstɛfan ˈbanax] (listen); 30 March 1892 – 31 August 1945) was a Polish mathematician[1] who is generally considered one of the 20th century's most important and influential mathematicians.[2] He was the founder of modern functional analysis,[1] and an original member of the Lwów School of Mathematics. His major work was the 1932 book, Théorie des opérations linéaires (Theory of Linear Operations), the first monograph on the general theory of functional analysis.[3] Stefan Banach Born(1892-03-30)30 March 1892 Kraków, Austria-Hungary (today Poland) Died31 August 1945(1945-08-31) (aged 53) Lviv, Ukrainian SSR, Soviet Union (today Ukraine) Alma materTechnical University of Lwów Known forBanach space Functional analysis Banach algebra Banach measure Banach–Tarski paradox Banach fixed-point theorem Banach–Steinhaus theorem Banach–Mazur theorem Banach–Schauder theorem Hahn–Banach theorem Banach–Alaoglu theorem Banach–Stone theorem Banach manifold Banach bundle Surjection of Fréchet spaces AwardsMembership: Polish Academy of Learning Scientific career FieldsMathematics InstitutionsUniversity of Lwów Doctoral advisorsHugo Steinhaus Kazimierz Twardowski Doctoral studentsStanisław Mazur Other notable studentsJózef Schreier Stanislaw Ulam Born in Kraków to a family of Goral descent, Banach showed a keen interest in mathematics and engaged in solving mathematical problems during school recess. After completing his secondary education, he befriended Hugo Steinhaus, with whom he established the Polish Mathematical Society in 1919 and later published the scientific journal Studia Mathematica. In 1920, he received an assistantship at the Lwów Polytechnic, subsequently becoming a professor in 1922 and a member of the Polish Academy of Learning in 1924. Banach was also a co-founder of the Lwów School of Mathematics, a school of thought comprising some of the most renowned Polish mathematicians of the interwar period (1918–1939). Some of the notable mathematical concepts that bear Banach's name include Banach spaces, Banach algebras, Banach measures, the Banach–Tarski paradox, the Hahn–Banach theorem, the Banach–Steinhaus theorem, the Banach–Mazur game, the Banach–Alaoglu theorem, and the Banach fixed-point theorem. Life Early life Stefan Banach was born on 30 March 1892 at St. Lazarus General Hospital in Kraków, then part of the Austro-Hungarian Empire, into a Góral Roman Catholic family,[4] and was subsequently baptised by his father.[5][6] Banach's parents were Stefan Greczek and Katarzyna Banach, both natives of the Podhale region.[7][8] Greczek was a soldier in the Austro-Hungarian Army stationed in Kraków. Little is known about Banach's mother.[5] According to his baptismal certificate, she was born in Borówna and worked as a domestic helper.[8] Unusually, Stefan's surname was his mother's instead of his father's, though he received his father's given name, Stefan. Military regulations did not permit soldiers of Stefan Greczek's rank to marry; he was a private and as the mother was too poor to support the child, the couple decided that he should be reared by family and friends.[9] Stefan spent the first few years of his life with his grandmother, but when she was taken ill, Greczek arranged for his son to be raised by Franciszka Płowa and her niece Maria Puchalska in Kraków. 
Young Stefan came to regard Franciszka as his foster mother and Maria as his older sister.[10] In his early years Banach was tutored by Juliusz Mien, a French intellectual and friend of the Płowa family, who had emigrated to Poland and supported himself with photography and translations of Polish literature into French. Mien taught Banach French and most likely encouraged him in his early mathematical pursuits.[11] In 1902, Banach, aged 10, enrolled in Kraków's IV Gymnasium (also known as the Goetz Gymnasium). While the school specialized in the humanities, Banach and his best friend Witold Wiłkosz (also a future mathematician) spent most of their time working on mathematics problems during breaks and after school.[12] Later in life Banach credited Dr. Kamil Kraft, the mathematics and physics teacher at the school, with kindling his interests in mathematics.[13] While Banach was a diligent student he did, on occasion, receive low grades (he failed Greek during his first semester at the school) and later spoke critically of the school's math teachers.[14] After obtaining his matura (high school degree) at age 18 in 1910, Banach moved to Lwów (today called Lviv) with the intention of studying at the Lwów Polytechnic. He initially chose engineering as his field of study since at the time he was convinced that there was nothing new to discover in mathematics.[15] At some point he also attended Jagiellonian University in Kraków on a part-time basis. As Banach had to earn money to support his studies it was not until 1914 that he finally, at age 22, passed his high school graduation exams.[16] When World War I broke out, Banach was excused from military service due to his left-handedness and poor vision. When the Russian Army opened its offensive toward Lwów, Banach left for Kraków, where he spent the rest of the war. He made his living as a tutor at the local schools, worked in a bookstore and as a foreman of a road building crew. He attended some lectures at the Jagiellonian University at that time, including those of the famous Polish mathematicians Stanisław Zaremba and Kazimierz Żorawski, but little is known of that period of his life.[17] Discovery by Steinhaus In 1916, in Kraków's Planty park, Banach encountered Professor Hugo Steinhaus, one of the renowned mathematicians of the time. According to Steinhaus, while he was strolling through the gardens he was surprised to overhear the term "Lebesgue integral" (Lebesgue integration was at the time still a fairly new idea in mathematics) and walked over to investigate. As a result, he met Banach, as well as Otto Nikodym.[18] Steinhaus became fascinated with the self-taught young mathematician. The encounter resulted in a long-lasting collaboration and friendship. In fact, soon after the encounter Steinhaus invited Banach to solve some problems he had been working on but which had proven difficult. Banach solved them within a week and the two soon published their first joint work (On the Mean Convergence of Fourier Series). Steinhaus, Banach and Nikodym, along with several other Kraków mathematicians (Władysław Ślebodziński, Leon Chwistek, Alfred Rosenblatt[19] and Włodzimierz Stożek) also established a mathematical society, which eventually became the Polish Mathematical Society.[20] The society was officially founded on 2 April 1919. It was also through Steinhaus that Banach met his future wife, Łucja Braus. Interbellum Steinhaus introduced Banach to academic circles and substantially accelerated his career. 
After Poland regained independence in 1918, Banach was given an assistantship at the Lwów Polytechnic. Steinhaus' backing also allowed him to receive a doctorate without actually graduating from a university. The doctoral thesis, accepted by King John II Casimir University of Lwów in 1920[21] and published in 1922,[22] included the basic ideas of functional analysis, which was soon to become an entirely new branch of mathematics. In his dissertation, written in 1920, he axiomatically defined what is today called a Banach space.[23] The thesis was widely discussed in academic circles and allowed him in 1922 to become a professor at the Lwów Polytechnic. Initially an assistant to Professor Antoni Łomnicki, in 1927, Banach received his own chair. In 1924 he was accepted as a member of the Polish Academy of Learning. At the same time, from 1922, Banach also headed the second Chair of Mathematics at University of Lwów. Young and talented, Banach gathered around him a large group of mathematicians. The group, meeting in the Scottish Café, soon gave birth to the "Lwów School of Mathematics". In 1929 the group began publishing its own journal, Studia Mathematica, devoted primarily to Banach's field of study—functional analysis. Around that time, Banach also began working on his best-known work, the first monograph on the general theory of linear-metric space. First published in Polish in 1931,[24] the next year it was also translated into French and gained wider recognition in European academic circles.[25] The book was also the first in a long series of mathematics monographs edited by Banach and his circle. In 17 June 1924, Banach become a correspondence member of the Polish Academy of Sciences and Fine Arts in Kraków. World War II After the invasion of Poland by Nazi Germany and the Soviet Union, Lwów came under the control of the Soviet Union for almost two years. Banach, from 1939 a corresponding member of the Academy of Sciences of Ukraine, and on good terms with Soviet mathematicians,[5] had to promise to learn Ukrainian to be allowed to keep his chair and continue his academic activities.[26] After the German takeover of Lwów in 1941 during Operation Barbarossa, all universities were closed and Banach, along with many colleagues and his son, was employed as a lice feeder at Professor Rudolf Weigl's Typhus Research Institute. Employment in Weigl's Institute provided many unemployed university professors and their associates protection from random arrest and deportation to Nazi concentration camps. After the Soviet Red Army recaptured Lviv in the Lvov–Sandomierz Offensive of 1944, Banach returned to the University and helped re-establish it after the war years. However, because the Soviets were deporting Poles from annexed formerly Polish eastern territories, Banach began preparing to leave the city and settle in Kraków, Poland, where he had been promised a chair at the Jagiellonian University.[5] He was also considered a candidate for Minister of Education of Poland.[27] In January 1945, he was diagnosed with lung cancer and was permitted to stay in Lwów. He died on 31 August 1945, aged 53. His funeral at the Lychakiv Cemetery (Cmentarz Łyczakowski) was attended by hundreds of people.[27] Contributions Banach's dissertation, completed in 1920 and published in 1922, formally axiomatized the concept of a complete normed vector space and laid the foundations for the area of functional analysis. 
In this work Banach called such spaces "class E-spaces", but in his 1932 book, Théorie des opérations linéaires, he changed terminology and referred to them as "spaces of type B", which most likely contributed to the subsequent eponymous naming of these spaces after him.[28] The theory of what came to be known as Banach spaces had antecedents in the work of the Hungarian mathematician Frigyes Riesz (published in 1916) and contemporaneous contributions from Hans Hahn and Norbert Wiener.[21] For a brief period in fact, complete normed linear spaces were referred to as "Banach–Wiener" spaces in mathematical literature, based on terminology introduced by Wiener himself. However, because Wiener's work on the topic was limited, the established name became just Banach spaces.[28] Likewise, Banach's fixed point theorem, based on earlier methods developed by Charles Émile Picard, was included in his dissertation, and was later extended by his students (for example in the Banach–Schauder theorem) and other mathematicians (in particular Brouwer and Poincaré and Birkhoff). The theorem did not require linearity of the space, and applied to any complete Cauchy space (in particular to any complete metric space).[21] The Hahn–Banach theorem is one of the fundamental theorems of functional analysis.[21] Further theorems related to Banach are: • Banach–Tarski paradox • Banach–Steinhaus theorem • Banach–Alaoglu theorem • Banach–Stone theorem Recognition In 1946, the Stefan Banach Prize (Polish: Nagroda im. Stefana Banacha) was established by the Polish Mathematical Society. In 1992, the Institute of Mathematics of the Polish Academy of Sciences established a special Stefan Banach Medal for outstanding achievements in mathematical sciences.[29] Since 2009, the International Stefan Banach Prize has been conferred by the Polish Mathematical Society to mathematicians for best doctoral dissertations in the mathematical sciences with the objective to "promote and financially support the most promising young researchers".[30] Stefan Banach is the patron of a number of schools and streets including in Warsaw, Lviv, Świdnica, Toruń and Jarosław. 
In 2001, a minor planet 16856 Banach, discovered by Paul Comba in 1997, was named after him.[31] In 2012, the National Bank of Poland celebrated the mathematician's achievements by issuing a series of commemorative coins designed by Robert Kotowicz (golden 200-zloty coin, silver 10-zloty coin and Nordic Gold 2-zloty coin).[32] In 2016, a commemorative bench featuring Banach and Otto Nikodym was unveiled in Kraków's Planty Park on the 100th anniversary of the conversation the two mathematicians held when they first met Hugo Steinhaus, which proved instrumental in the development of his scientific career.[33] In 2021, one of the episodes of Polish documentary TV series Geniusze i marzyciele (Geniuses and Dreamers) aired on TVP1 and TVP Dokument channels was devoted to Stefan Banach.[34] In 2022, Google Doodle commemorated the 100th anniversary of Banach receiving his title of professor.[35] Quotes Stanislaw Ulam, another mathematician of the Lwów School of Mathematics, in his autobiography, quotes Banach as saying: "Good mathematicians see analogies between theorems or theories, the very best ones see analogies between analogies."[36] Hugo Steinhaus said of Banach: "Banach was my greatest scientific discovery."[37] See also • Closed range theorem • International Stefan Banach Prize • List of Poles • List of Polish mathematicians • List of things named after Stefan Banach • Timeline of Polish science and technology References Citations 1. "Stefan Banach - Polish Mathematician". britannica.com. 2. Pitici 2019, p. 23. 3. Chemla, Chorla & Rabouin 2016, pp. 224, 225, 237. 4. "Home Page of Stefan Banach". kielich.amu.edu.pl. Retrieved 19 August 2017. 5. O'Connor, John J.; Robertson, Edmund F., "Stefan Banach", MacTutor History of Mathematics Archive, University of St Andrews 6. Stachura 1999, p. 51. 7. Waksmundzka-Hajnos 2006, p. 16. 8. Duda 2009, p. 29. 9. Kałuża 1996, pp. 2–4 10. Kałuża 1996, pp. 1–3 11. Kałuża 1996, p. 3 12. Kałuża 1996, p. 137 13. Jakimowicz & Miranowicz 2007, p. 4 14. Kałuża 1996, pp. 3–4 15. Jakimowicz & Miranowicz 2007, p.5 16. Kałuża 1996, p. 13 17. Kałuża 1996, p. 16 18. Jakimowicz & Miranowicz 2007, p. 6 19. Ciesielska & Maligranda 2019, pp. 57–108. 20. Kałuża 1996, p. 23 21. Jahnke 2003, p. 402 22. Stefan Banach (1922). "Sur les opérations dans les ensembles abstraits et leur application aux équations integrals (On operations in the abstract sets and their application to integral equations)". Fundamenta Mathematicae (in French and Polish). 3. 23. "Математичний міст між Краковом і Львовом: як Стефан Банах став одним із найвеличніших математиків століття - krakow1.one". 16 November 2022. 24. Stefan Banach: Teoria operacji liniowych. 25. Stefan Banach: Théorie des opérations linéaires (in French; Theory of Linear Operations). 26. Urbanek 2002 27. James 2003, p. 384 28. MacCluer 2008, p. 6 29. Institute of Mathematics: Stefan Banach Medal Polish Academy of Sciences 30. "PIERWSZY LAUREAT "THE INTERNATIONAL BANACH PRIZE"" (in Polish). Archived from the original on 17 June 2015. Retrieved 11 August 2022. 31. Urbanek, Mariusz (2014). Genialni. Lwowska Szkoła Matematyczna. Warsaw: Wydawnictwo Iskry. ISBN 978-83-244-0381-3. 32. "Narodowy Bank Polski. Monety" (PDF) (in Polish). Retrieved 11 August 2022. 33. "Setna rocznica najsłynniejszej matematycznej dyskusji na krakowskich Plantach" (in Polish). Archived from the original on 19 October 2016. Retrieved 11 August 2022. 34. ""Geniusze i marzyciele" – nowy serial dokumentalny w TVP1" (in Polish). Retrieved 11 August 2022. 35. 
"Stefan Banach: Google Doodle celebrates Polish mathematician". 22 July 2022. Retrieved 11 August 2022. 36. Ulam, Stanislaw M. (1976). Adventures of a Mathematician. p. 203. ISBN 9780684143910. 37. Strick, Heinz Klaus (2011). "Stefan Banach (March 30, 1892 – August 8, 1945)". Mathematics in Europe. Translated by Kramer, David. European Mathematical Society. Retrieved 28 March 2018. Sources • Chemla, Karine; Chorla, Renaud; Rabouin, David (2016). The Oxford Handbook of Generality in Mathematics and the Sciences. Oxford: University Press. ISBN 9780198777267. • Ciesielska, Danuta; Maligranda, Lech (2019). "Alfred Rosenblatt (1880–1947). Polish–Peruvian mathematician". Banach Center Publications. Institute of Mathematics, Polish Academy of Sciences. 119: 57–108. doi:10.4064/bc119-4. ISSN 0137-6934. S2CID 213021027. • Duda, Roman (2009). "Facts and Myths about Stefan Banach" (PDF). Newsletter of the European Mathematical Society. EMS (71): 29. • Jahnke, Hans Niels (2003). A History of Analysis. American Mathematical Society. ISBN 0821826239. • Jakimowicz, E.; Miranowicz, A., eds. (2007). Stefan Banach - Remarkable life, Brilliant mathematics. Gdańsk University Press and Adam Mickiewicz University Press. ISBN 978-83-7326-451-9. • James, Ioan (2003). Remarkable Mathematicians: From Euler to von Neumann. Cambridge University Press. ISBN 0521520940. • Kałuża, Roman (1996). Through a Reporter's Eyes: The Life of Stefan Banach. Translated by Wojbor Andrzej Woyczyński and Ann Kostant. Birkhäuser. ISBN 0-8176-3772-9. • Kosiedowski, Stanisław. "Stefan Banach". Mój Lwów. Retrieved 20 May 2008. • Siegmund-Schultze, Reinhard (2003). Jahnke, Hans Niels (ed.). A History of Analysis. American Mathematical Society. ISBN 0-8218-2623-9. • MacCluer, Barbara (2008). Elementary Functional Analysis. Springer. ISBN 978-0387855288. • Urbanek, Mariusz (April 2002). "Geniusz: gen i już". Polityka. 8 (2348). • Pitici, Mircea (2019). The Best Writing on Mathematics 2019. Princeton: University Press. ISBN 978-0691198354. • Stachura, Peter (1999). Poland in the Twentieth Century. Springer. • Waksmundzka-Hajnos, Monika (2006). "Wspomnienie o Stefanie Greczku". Focus. Gdańsk University (11). Further reading • Banach, Stefan (1932). Théorie des Opérations Linéaires [Theory of Linear Operations] (PDF). Monografie Matematyczne (in French). Vol. 1. Warszawa: Subwencji Funduszu Kultury Narodowej. Zbl 0005.20901. Archived from the original (PDF) on 11 January 2014. Retrieved 11 July 2020. External links Wikiquote has quotations related to Stefan Banach • Page devoted to Stefan Banach • Stefan Banach at the Mathematics Genealogy Project Authority control International • FAST • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • Belgium • United States • Sweden • Japan • Czech Republic • Australia • Netherlands • Poland • Russia Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie • Trove Other • Encyclopedia of Modern Ukraine • SNAC • IdRef
Wikipedia
Sarvadaman Chowla Sarvadaman D. S. Chowla (22 October 1907 – 10 December 1995) was an Indian American mathematician, specializing in number theory. Sarvadaman Chowla Born(1907-10-22)22 October 1907 London, England Died10 December 1995(1995-12-10) (aged 88) Laramie, Wyoming, US Alma materCambridge University Government College, Lahore Scientific career FieldsMathematics InstitutionsInstitute for Advanced Study University of Kansas University of Colorado at Boulder Penn State University Government College, Lahore Doctoral advisorJohn Edensor Littlewood Doctoral studentsJohn Friedlander Early life He was born in London, since his father, Gopal Chowla, a professor of mathematics in Lahore, was then studying in Cambridge.[1][2] His family returned to India, where he received his master's degree in 1928 from the Government College in Lahore. In 1931 he received his doctorate from the University of Cambridge, where he studied under J. E. Littlewood. [3]: 594  Career and awards Chowla then returned to India, where he taught at several universities, becoming head of mathematics at Government College, Lahore in 1936.[3]: 594  During the difficulties arising from the partition of India in 1947, he left for the United States.[4] There he visited the Institute for Advanced Study until the fall of 1949, then taught at the University of Kansas in Lawrence until moving to the University of Colorado in 1952.[3]: 594  He moved to Penn State in 1963 as a research professor, where he remained until his retirement in 1976.[3]: 594  He was a member of the Indian National Science Academy.[3]: 595  Among his contributions are a number of results which bear his name. These include the Bruck–Ryser–Chowla theorem, the Ankeny–Artin–Chowla congruence, the Chowla–Mordell theorem, and the Chowla–Selberg formula, and the Mian–Chowla sequence. Works • Chowla, Sarvadaman (2000). James G. Huard; Kenneth S. Williams (eds.). The Collected Papers of Sarvadaman Chowla. Montréal: Centre de Recherches mathématiques, Université de Montréal. OCLC 43730416. • Chowla, S. (1965). Riemann Hypothesis and Hilbert's Tenth Problem. New York: Routledge. ISBN 978-0-677-00140-1. OCLC 15428640. Notes 1. "Sarvadaman Chowla - Biography". Maths History. Retrieved 2021-04-05. 2. Raymond G. Ayoub, James G. Huard, and Kenneth S. Williams (May 1998). "Sarvadaman Chowla (1907–1995)" (PDF). Notices of the AMS. American Mathematical Society. 45 (5).{{cite journal}}: CS1 maint: multiple names: authors list (link) 3. Ayoub, Raymond G.; James G. Huard; Kenneth S. Williams (May 1998). "Sarvadaman Chowla (1907-1995)" (PDF). Notices of the American Mathematical Society. Providence, RI: American Mathematical Society. 45 (5): 594–598. ISSN 0002-9920. OCLC 1480366. Retrieved 2008-05-16. 4. O'Connor, John J.; Robertson, Edmund F., "Sarvadaman Chowla", MacTutor History of Mathematics Archive, University of St Andrews External links • Sarvadaman Chowla at the Mathematics Genealogy Project Authority control International • ISNI • VIAF National • France • BnF data • Germany • Israel • United States • Netherlands Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie Other • IdRef
Wikipedia
Solomon Feferman Solomon Feferman (December 13, 1928 – July 26, 2016)[2] was an American philosopher and mathematician who worked in mathematical logic. In addition to his prolific technical work in proof theory, recursion theory, and set theory, he was known for his contributions to the history of logic (for instance, via biographical writings on figures such as Kurt Gödel, Alfred Tarski, and Jean van Heijenoort) and as a vocal proponent of the philosophy of mathematics known as predicativism, notably from an anti-platonist stance. Solomon Feferman Solomon Feferman at the Association of Symbolic Logic, Pittsburgh, May 2004 Born(1928-12-13)December 13, 1928 The Bronx, New York City, US DiedJuly 26, 2016(2016-07-26) (aged 87) Stanford, California, US Alma materCalifornia Institute of Technology University of California, Berkeley EraContemporary philosophy RegionWestern philosophy SchoolAnalytic Predicativism ThesisFormal Consistency Proofs and Interpretability of Theories (1957) Doctoral advisorAlfred Tarski Doctoral students • Jon Barwise • Carolyn Talcott Main interests Philosophy of mathematics Proof theory Theory of computation Notable ideas Stratified systems for the foundations of category theory[1] Feferman–Schütte ordinal Ordinal collapsing function Explicit mathematics Influences • Kurt Gödel Influenced • Jon Barwise Life Solomon Feferman was born in The Bronx in New York City to working-class parents who had immigrated to the United States after World War I and had met and married in New York. Neither parent had any advanced education. The family moved to Los Angeles, where Feferman graduated from high school at age 16. He received his B.S. from the California Institute of Technology in 1948, and in 1957 his Ph.D. in mathematics from the University of California, Berkeley, under Alfred Tarski,[3] after having been drafted and having served in the U.S. Army from 1953 to 1955. In 1956 he was appointed to the Departments of Mathematics and Philosophy at Stanford University, where he later became the Patrick Suppes Professor of Humanities and Sciences. While the majority of his career was spent at Stanford, he also spent time as a post-doctoral fellow at the Institute for Advanced Study in Princeton, a visiting professor at MIT, and a visiting fellow at the University of Oxford (Wolfson College and All Souls College). [4] Feferman died on 26 July 2016 at his home in Stanford, following an illness that lasted three months and a stroke.[2][5][6] At his death, he had been a member of the MAA for 37 years.[7] Contributions Feferman was editor-in-chief of the five-volume Collected Works of Kurt Gödel, published by Oxford University Press between 2001 and 2013. In 2004, together with his wife Anita Burdman Feferman, he published a biography of Alfred Tarski: Alfred Tarski: Life and Logic.[8] He worked on predicative mathematics, in particular introducing the Feferman–Schütte ordinal as a measure of the strength of certain predicative systems. Recognition Feferman was awarded a Guggenheim Fellowship in 1972 and 1986[9] and the Rolf Schock Prize in logic and philosophy in 2003.[10] He was invited to give the Gödel Lecture in 1997[11] and the Tarski Lectures in 2006.[12] In 2012 he became a fellow of the American Mathematical Society.[13] Publications Papers • Feferman, Solomon; Vaught, Robert L. (1959), "The first order properties of products of algebraic systems", Fund. Math. 47, 57–103. 
• Feferman, Solomon (1975), "A language and axioms for explicit mathematics", Algebra and logic (Fourteenth Summer Res. Inst., Austral. Math. Soc., Monash Univ., Clayton, 1974), pp. 87–139, Lecture Notes in Math., vol. 450, Berlin, Springer. • Feferman, Solomon (1979), "Constructive theories of functions and classes", Logic Colloquium '78 (Mons, 1978), pp. 159–224, Stud. Logic Foundations Math., 97, Amsterdam, New York, North-Holland. • Buchholz, Wilfried; Feferman, Solomon; Pohlers, Wolfram; Sieg, Wilfried (1981), "Iterated inductive definitions and subsystems of analysis: recent proof-theoretical studies", Lecture Notes in Mathematics, 897, Berlin, New York, Springer-Verlag. • Feferman, Solomon; Hellman, Geoffrey (1995), "Predicative foundations of arithmetic", J. Philos. Logic 24 (1), 1–17. • Avigad, Jeremy; Feferman, Solomon (1998), "Gödel's functional (Dialectica) interpretation", Handbook of proof theory, 337–405, Stud. Logic Found. Math., 137, Amsterdam, North-Holland. Books • Feferman, Solomon. (1998). In the Light of Logic. Oxford University Press. ISBN 0-19-508030-0, Logic and Computation in Philosophy series.[14] • Feferman, Anita Burdman; Feferman, Solomon (2004). Alfred Tarski: Life and Logic. Cambridge University Press. ISBN 978-0-521-80240-6. OCLC 54691904.[8] See also • Criticism of non-standard analysis References 1. "Enriched Stratified systems for the Foundations of Category Theory" by Solomon Feferman (2011) 2. "Solomon Feferman (1928-2016)". 3. Solomon Feferman at the Mathematics Genealogy Project 4. "Solomon Feferman's homepage". Archived from the original on October 24, 2017. 5. Lanier Anderson, R. (August 4, 2016). "A tribute to Solomon Feferman (1928–2016)". philosophy.stanford.edu. Archived from the original on September 11, 2016. Retrieved July 24, 2021. 6. "Stanford mathematical logician Solomon Feferman dies at 87". Stanford News. October 7, 2016. Retrieved July 24, 2021.{{cite web}}: CS1 maint: url-status (link) 7. "In Memoriam | Mathematical Association of America". www.maa.org. Retrieved July 24, 2021. 8. Reviews of Alfred Tarski: • Dauben, Joseph W. (2005), Mathematical Reviews, MR 2095748{{citation}}: CS1 maint: untitled periodical (link) • Anellis, Irving H. (2005), "Review", The Review of Modern Logic, 10 (1–2): 117–130 • Davis, Philip J. (March 2005), "A life of logic and the illogic of life", SIAM News • Davis, Martin (March–April 2005), "The Man Who Defined Truth", American Scientist, 93 (2): 175–177, JSTOR 27858554 • Shell-Gellasch, Amy (May 2005), "Review", MAA Reviews • Misiuna, Krystyna (May 2005), History and Philosophy of Logic, 26 (2): 166–168, doi:10.1080/01445340412331313602, S2CID 216590845{{citation}}: CS1 maint: untitled periodical (link) • Mendelson, Elliott (June 2005), Philosophia Mathematica, 13 (2): 231–232, doi:10.1093/philmat/nki020{{citation}}: CS1 maint: untitled periodical (link) • Kilmister, C. W. (July 2005), The Mathematical Gazette, 89 (515): 330–331, doi:10.1017/S0025557200177988, JSTOR 3621256, S2CID 171454519{{citation}}: CS1 maint: untitled periodical (link) • Schmit, Roger (Fall 2005), Archives de Philosophie, 68 (3): 546–547, JSTOR 43038344{{citation}}: CS1 maint: untitled periodical (link) • Maddux, Roger D. 
(December 2005), The Bulletin of Symbolic Logic, 11 (4): 535–540, doi:10.1017/S1079898600003000, JSTOR 3396716, S2CID 124002889{{citation}}: CS1 maint: untitled periodical (link) • Kybernetes, 35 (1/2), January 2006, doi:10.1108/k.2006.06735aae.002{{citation}}: CS1 maint: untitled periodical (link) • Lescanne, Pierre (March 2006), ACM SIGACT News, 37 (1): 27, doi:10.1145/1122480.1122489, S2CID 9529607{{citation}}: CS1 maint: untitled periodical (link) • Carnielli, Walter (March 2006), Logic and Logical Philosophy, 15 (1), doi:10.12775/llp.2006.005{{citation}}: CS1 maint: untitled periodical (link) • Wood, Carol (April 2006), The American Mathematical Monthly, 113 (4): 377–379, doi:10.2307/27641942, JSTOR 27641942{{citation}}: CS1 maint: untitled periodical (link) • Oberdan, Thomas (June 2006), Isis, 97 (2): 362–363, doi:10.1086/507375, JSTOR 10.1086/507375{{citation}}: CS1 maint: untitled periodical (link) • Grattan-Guinness, Ivor (September 2006), The British Journal for the History of Science, 39 (3): 469–470, doi:10.1017/S0007087406438681, JSTOR 4028507{{citation}}: CS1 maint: untitled periodical (link) • Apt, Krzysztof R. (March 2007), "Alfred Tarski: life and logic", The Mathematical Intelligencer, 29 (2): 78–80, doi:10.1007/bf02986214, S2CID 189883846 • Sinaceur, Hourya Benis (September 2007), "Review" (PDF), Notices of the American Mathematical Society, 54 (8): 986–989 • Bassols, Alejandro Tomasini (April 2006), Crítica: Revista Hispanoamericana de Filosofía, 38 (112): 105–111, JSTOR 40104969{{citation}}: CS1 maint: untitled periodical (link) • Brown, Scott H. (March 2009), The Mathematics Teacher, 102 (7): 558, JSTOR 20876430{{citation}}: CS1 maint: untitled periodical (link) • Bremer, Manuel (December 2009), "Review", Philosophy in Review, 29 (6): 404 • Nerode, Anil (March 2010), The American Mathematical Monthly, 117 (3): 286–288, doi:10.4169/000298910x480144, JSTOR 10.4169/000298910x480144, S2CID 218549336{{citation}}: CS1 maint: untitled periodical (link) • Czernecka-Rej, Bożena (2011), Roczniki Filozoficzne, 59 (1): 79–84, JSTOR 43408916{{citation}}: CS1 maint: untitled periodical (link) 9. "John Simon Guggenheim Foundation | Solomon Feferman". 10. "Feferman awarded Rolf Schock Prize in logic and philosophy". 11. "Gödel Lecturers – Association for Symbolic Logic". Retrieved November 8, 2021. 12. "The Tarski Lectures | Department of Mathematics at University of California Berkeley". math.berkeley.edu. Retrieved November 8, 2021. 13. List of Fellows of the American Mathematical Society, retrieved December 2, 2012. 14. Reviews of In the Light of Logic: • Avigad, Jeremy (December 1999), "[Untitled]", The Journal of Philosophy, 96 (12): 638–642, doi:10.2307/2564698, JSTOR 2564698 • Antonelli, G. Aldo (June 2001), The Bulletin of Symbolic Logic, 7 (2): 270–277, doi:10.2307/2687778, JSTOR 2687778, S2CID 122751203{{citation}}: CS1 maint: untitled periodical (link) • Mendelson, E. (2001), Mathematical Reviews, MR 1661162{{citation}}: CS1 maint: untitled periodical (link) External links • Solomon Feferman official website (via Internet Archive) at Stanford University Rolf Schock Prize laureates Logic and philosophy • Willard Van Orman Quine (1993) • Michael Dummett (1995) • Dana Scott (1997) • John Rawls (1999) • Saul Kripke (2001) • Solomon Feferman (2003) • Jaakko Hintikka (2005) • Thomas Nagel (2008) • Hilary Putnam (2011) • Derek Parfit (2014) • Ruth Millikan (2017) • Saharon Shelah (2018) • Dag Prawitz / Per Martin-Löf (2020) • David Kaplan (2022) Mathematics • Elias M. 
Stein (1993) • Andrew Wiles (1995) • Mikio Sato (1997) • Yuri I. Manin (1999) • Elliott H. Lieb (2001) • Richard P. Stanley (2003) • Luis Caffarelli (2005) • Endre Szemerédi (2008) • Michael Aschbacher (2011) • Yitang Zhang (2014) • Richard Schoen (2017) • Ronald Coifman (2018) • Nikolai G. Makarov (2020) • Jonathan Pila (2022) Visual arts • Rafael Moneo (1993) • Claes Oldenburg (1995) • Torsten Andersson (1997) • Herzog & de Meuron (1999) • Giuseppe Penone (2001) • Susan Rothenberg (2003) • SANAA / Kazuyo Sejima + Ryue Nishizawa (2005) • Mona Hatoum (2008) • Marlene Dumas (2011) • Anne Lacaton / Jean-Philippe Vassal (2014) • Doris Salcedo (2017) • Andrea Branzi (2018) • Francis Alÿs (2020) • Rem Koolhaas (2022) Musical arts • Ingvar Lidholm (1993) • György Ligeti (1995) • Jorma Panula (1997) • Kronos Quartet (1999) • Kaija Saariaho (2001) • Anne Sofie von Otter (2003) • Mauricio Kagel (2005) • Gidon Kremer (2008) • Andrew Manze (2011) • Herbert Blomstedt (2014) • Wayne Shorter (2017) • Barbara Hannigan (2018) • György Kurtág (2020) • Víkingur Ólafsson (2022) Authority control International • ISNI • VIAF National • Norway • France • BnF data • Catalonia • Germany • Italy • Israel • United States • Sweden • Czech Republic • Netherlands • Poland Academics • CiNii • DBLP • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie Other • IdRef
Wikipedia
Shigeru Iitaka Shigeru Iitaka (飯高 茂 Iitaka Shigeru, born May 29, 1942, Chiba) is a Japanese mathematician at Gakushuin University working in algebraic geometry who introduced the Kodaira dimension and Iitaka dimension. He was a worldly leader in the field of Algebraic geometry. Shigeru Iitaka Born(1942-05-29)May 29, 1942 Chiba, Japan NationalityJapanese Alma materUniversity of Tokyo Known forKodaira dimension, Iitaka dimension AwardsIyanaga Prize (1980) Japan Academy Prize (1990) Scientific career FieldsMathematics InstitutionsGakushuin University Doctoral advisorKunihiko Kodaira Doctoral studentsYujiro Kawamata He received in 1970 his Ph.D. from the University of Tokyo under Kunihiko Kodaira with thesis「代数多様体のD-次元について」(On D-dimensions of algebraic varieties).[1] He was awarded the Iyanaga Prize of the Mathematical Society of Japan in 1980 and the Japan Academy Prize in 1990. References 1. Shigeru Iitaka at the Mathematics Genealogy Project External links • CV of Shigeru Iitaka Authority control International • ISNI • VIAF National • Norway • Germany • Israel • United States • Japan • Korea • Netherlands Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Wikipedia
Serge Lang Serge Lang (French: [lɑ̃ɡ]; May 19, 1927 – September 12, 2005) was a French-American mathematician and activist who taught at Yale University for most of his career. He is known for his work in number theory and for his mathematics textbooks, including the influential Algebra. He received the Frank Nelson Cole Prize in 1960 and was a member of the Bourbaki group. Serge Lang Serge Lang (1927–2005) Born(1927-05-19)May 19, 1927 Paris, France DiedSeptember 12, 2005(2005-09-12) (aged 78) Berkeley, California CitizenshipFrench American EducationCalifornia Institute of Technology (BA) Princeton University (PhD) Known forWork in number theory AwardsLeroy P. Steele Prize (1999) Cole Prize (1960) Scientific career FieldsMathematics InstitutionsUniversity of Chicago Columbia University Yale University ThesisOn Quasi Algebraic Closure (1951) Doctoral advisorEmil Artin Doctoral studentsMinhyong Kim Stephen Schanuel As an activist, Lang campaigned against the Vietnam War, and also successfully fought against the nomination of the political scientist Samuel P. Huntington to the National Academies of Science. Later in his life, Lang was an HIV/AIDS denialist. He claimed that HIV had not been proven to cause AIDS and protested Yale's research into HIV/AIDS.[1] Biography and mathematical work Lang was born in Saint-Germain-en-Laye, close to Paris, in 1927. He had a twin brother who became a basketball coach and a sister who became an actress.[2] Lang moved with his family to California as a teenager, where he graduated in 1943 from Beverly Hills High School. He subsequently graduated with an AB from the California Institute of Technology in 1946. He then received a PhD in mathematics from Princeton University in 1951. He held faculty positions at the University of Chicago, Columbia University (from 1955, leaving in 1971 in a dispute), and Yale University. Lang studied at Princeton University, writing his thesis titled "On quasi algebraic closure" under the supervision of Emil Artin,[3][4] and then worked on the geometric analogues of class field theory and diophantine geometry. Later he moved into diophantine approximation and transcendental number theory, proving the Schneider–Lang theorem. A break in research while he was involved in trying to meet 1960s student activism halfway caused him (by his own description) difficulties in picking up the threads afterwards. He wrote on modular forms and modular units, the idea of a 'distribution' on a profinite group, and value distribution theory. He made a number of conjectures in diophantine geometry: Mordell–Lang conjecture, Bombieri–Lang conjecture, Lang–Trotter conjecture, and the Lang conjecture on analytically hyperbolic varieties. He introduced the Lang map,[5] the Katz–Lang finiteness theorem, and the Lang–Steinberg theorem (cf. Lang's theorem) in algebraic groups. Mathematical books Lang was a prolific writer of mathematical texts, often completing one on his summer vacation. Most are at the graduate level. He wrote calculus texts and also prepared a book on group cohomology for Bourbaki. Lang's Algebra, a graduate-level introduction to abstract algebra, was a highly influential text that ran through numerous updated editions. His Steele prize citation stated, "Lang's Algebra changed the way graduate algebra is taught...It has affected all subsequent graduate-level algebra books." 
It contained ideas of his teacher, Artin; some of the most interesting passages in Algebraic Number Theory also reflect Artin's influence and ideas that might otherwise not have been published in that or any form. Awards as expositor Lang was noted for his eagerness for contact with students. He was described as a passionate teacher who would throw chalk at students who he believed were not paying attention. One of his colleagues recalled: "He would rant and rave in front of his students. He would say, 'Our two aims are truth and clarity, and to achieve these I will shout in class.'"[6] He won a Leroy P. Steele Prize for Mathematical Exposition (1999) from the American Mathematical Society. In 1960, he won the sixth Frank Nelson Cole Prize in Algebra for his paper "Unramified class field theory over function fields in several variables" (Annals of Mathematics, Series 2, volume 64 (1956), pp. 285–325). Activism Lang spent much of his professional time engaged in political activism. He was a staunch socialist and active in opposition to the Vietnam War, volunteering for the 1966 anti-war campaign of Robert Scheer (the subject of his book The Scheer Campaign). Lang later quit his position at Columbia in 1971 in protest over the university's treatment of anti-war protesters. Lang engaged in several efforts to challenge anyone he believed was spreading misinformation or misusing science or mathematics to further their own goals. He attacked the 1977 Survey of the American Professoriate, an opinion questionnaire that Seymour Martin Lipset and E. C. Ladd had sent to thousands of college professors in the United States. Lang said it contained numerous biased and loaded questions.[7] This led to a public and highly acrimonious conflict as detailed in his book The File : Case Study in Correction (1977-1979).[8] In 1986, Lang mounted what the New York Times described as a "one-man challenge" against the nomination of political scientist Samuel P. Huntington to the National Academy of Sciences.[6] Lang described Huntington's research, in particular his use of mathematical equations to demonstrate that South Africa was a "satisfied society", as "pseudoscience", arguing that it gave "the illusion of science without any of its substance." Despite support for Huntington from the Academy's social and behavioral scientists, Lang's challenge was successful, and Huntington was twice rejected for Academy membership. Huntington's supporters argued that Lang's opposition was political rather than scientific in nature.[9] Lang's detailed description of these events, "Academia, Journalism, and Politics: A Case Study: The Huntington Case", occupies the first 222 pages of his 1998 book Challenges.[10] Lang kept his political correspondence and related documentation in extensive "files". He would send letters or publish articles, wait for responses, engage the writers in further correspondence, collect all these writings together and point out what he considered contradictions. He often mailed these files to mathematicians and other interested parties throughout the world.[2] Some of the files were published in his books Challenges[11] and The File : Case Study in Correction (1977-1979).[8] His extensive file criticizing Nobel laureate David Baltimore was published in the journal Ethics and Behaviour in January 1993[12] and in his book Challenges.[11] Lang fought the decision by Yale University to hire Daniel Kevles, a historian of science, because Lang disagreed with Kevles' analysis in The Baltimore Case. 
Lang's most controversial political stance was as an HIV/AIDS denialist.[13] He maintained that the prevailing scientific consensus that HIV causes AIDS had not been backed up by reliable scientific research, yet for political and commercial reasons further research questioning the current point of view was suppressed. In public he was very outspoken about this point and a portion of Challenges is devoted to this issue.[11] List of books Pregraduate-level textbooks • Lang, Serge (1986). A first course in calculus. Undergraduate Texts in Mathematics (Fifth edition of 1964 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4419-8532-3. ISBN 978-1-4612-6428-6. The 1964 first edition was reprinted as: • Short calculus: the original edition of "A First Course in Calculus". Undergraduate Texts in Mathematics. New York: Springer-Verlag. 2002. doi:10.1007/978-1-4613-0077-9. ISBN 978-0-387-95327-4. • Lang, Serge (1986). Introduction to linear algebra. Undergraduate Texts in Mathematics (Second edition of 1970 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-1070-2. ISBN 978-0-387-96205-4. • Lang, Serge (1987). Calculus of several variables. Undergraduate Texts in Mathematics (Third edition of 1973 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-1068-9. ISBN 978-0-387-96405-8. Originally published as A Second Course in Calculus (1965)[14][15][16] • Lang, Serge (1987). Linear algebra. Undergraduate Texts in Mathematics (Third edition of 1966 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4757-1949-9. ISBN 0-387-96412-6. MR 0874113. • Shakarchi, Rami (1996). Solutions manual for Lang's "Linear Algebra". New York: Springer-Verlag. doi:10.1007/978-1-4612-0755-9. ISBN 0-387-94760-4. MR 1415837. • Lang, Serge (1988). Basic mathematics (Reprint of 1971 original ed.). New York: Springer-Verlag. • Lang, Serge; Murrow, Gene (1988). Geometry: a high school course. New York: Springer-Verlag. doi:10.1007/978-1-4757-2022-8. ISBN 978-0-387-96654-0. • Lang, Serge (1997). Undergraduate analysis. Undergraduate Texts in Mathematics (Second ed.). New York: Springer-Verlag. doi:10.1007/978-1-4757-2698-5. ISBN 0-387-94841-4. MR 1476913. The first edition (1983) of this title was itself the second edition of Analysis I (1968) • Shakarchi, Rami (1998). Problems and solutions for "Undergraduate Analysis". New York: Springer-Verlag. doi:10.1007/978-1-4612-1738-1. ISBN 0-387-98235-3. MR 1488961. • Lang, Serge (1999). Complex analysis. Graduate Texts in Mathematics. Vol. 103 (Fourth edition of 1977 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4757-3083-8. ISBN 0-387-98592-1. MR 1659317. • Shakarchi, Rami (1999). Problems and solutions for "Complex Analysis". New York: Springer-Verlag. doi:10.1007/978-1-4612-1534-9. ISBN 978-0-387-98831-3. MR 1716449. • Lang, Serge (2005). Undergraduate algebra. Undergraduate Texts in Mathematics (Third edition of 1990 original ed.). New York: Springer-Verlag. doi:10.1007/0-387-27475-8. ISBN 0-387-22025-9. The 1990 first edition was itself a second edition of Algebraic Structures (1967) Graduate-level textbooks • Lang, Serge (1966). Introduction to transcendental numbers. Reading, MA–London–Don Mills, Ontario: Addison-Wesley Publishing Co. MR 0214547. • Lang, Serge (1972). Introduction to algebraic geometry (Third printing, with corrections, of 1959 original ed.). Reading, MA: Addison-Wesley Publishing Co. MR 0344244.[17] • Lang, Serge; Trotter, Hale (1976). Frobenius distributions in GL2-extensions. 
Distribution of Frobenius automorphisms in GL2-extensions of the rational numbers. Lecture Notes in Mathematics. Vol. 504. Berlin–New York: Springer-Verlag. doi:10.1007/BFb0082087. ISBN 978-3-540-07550-9. MR 0568299. • Lang, Serge (1978). Elliptic curves: Diophantine analysis. Grundlehren der Mathematischen Wissenschaften. Vol. 231. Berlin–New York: Springer-Verlag. doi:10.1007/978-3-662-07010-9. ISBN 3-540-08489-4. MR 0518817.[18] • Kubert, Daniel S.; Lang, Serge (1981). Modular units. Grundlehren der Mathematischen Wissenschaften. Vol. 244. New York–Berlin: Springer-Verlag. doi:10.1007/978-1-4757-1741-9. ISBN 0-387-90517-0. MR 0648603. • Lang, Serge (1982). Introduction to algebraic and abelian functions. Graduate Texts in Mathematics. Vol. 89 (Second edition of 1972 original ed.). New York–Berlin: Springer-Verlag. doi:10.1007/978-1-4612-5740-0. ISBN 0-387-90710-6. MR 0681120. • Lang, Serge (1983). Abelian varieties (Reprint of 1959 original ed.). New York–Berlin: Springer-Verlag. doi:10.1007/978-1-4419-8534-7. ISBN 0-387-90875-7. MR 0713430. • Lang, Serge (1983). Complex multiplication. Grundlehren der mathematischen Wissenschaften. Vol. 255. New York: Springer-Verlag. doi:10.1007/978-1-4612-5485-0. ISBN 0-387-90786-6. MR 0713612. • Lang, Serge (1983). Fundamentals of Diophantine geometry. New York: Springer-Verlag. doi:10.1007/978-1-4757-1810-2. ISBN 0-387-90837-4. MR 0715605. Second edition of Diophantine Geometry (1962)[19][20] • Fulton, William; Lang, Serge (1985). Riemann–Roch algebra. Grundlehren der mathematischen Wissenschaften. Vol. 277. New York: Springer-Verlag. doi:10.1007/978-1-4757-1858-4. ISBN 978-1-4419-3073-6. MR 0801033. • Lang, Serge (1985). SL2(R). Graduate Texts in Mathematics. Vol. 105 (Reprint of the 1975 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-5142-2. ISBN 0-387-96198-4. MR 0803508.[21] • Lang, Serge (1987). Elliptic functions. Graduate Texts in Mathematics. Vol. 112. With an appendix by J. Tate (Second edition of 1973 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-4752-4. ISBN 0-387-96508-4. MR 0890960.[22] • Lang, Serge (1987). Introduction to complex hyperbolic spaces. New York: Springer-Verlag. doi:10.1007/978-1-4757-1945-1. ISBN 0-387-96447-9. MR 0886677.[23] • Lang, Serge (1988). Introduction to Arakelov theory. New York: Springer-Verlag. doi:10.1007/978-1-4612-1031-3. ISBN 0-387-96793-1. MR 0969124.[24] • Lang, Serge (1990). Cyclotomic fields I and II. Graduate Texts in Mathematics. Vol. 121. With an appendix by Karl Rubin (Combined second edition of 1978/1980 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-0987-4. ISBN 0-387-96671-4. MR 1029028. • Lang, Serge; Cherry, William (1990). Topics in Nevanlinna theory. Lecture Notes in Mathematics. Vol. 1433. With an appendix by Zhuan Ye. Berlin: Springer-Verlag. doi:10.1007/BFb0093846. ISBN 3-540-52785-0. MR 1069755. • Lang, Serge (1993). Real and functional analysis. Graduate Texts in Mathematics. Vol. 142 (Third ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-0897-6. ISBN 0-387-94001-4. MR 1216137. This book is the third edition, previously published under the different titles of Analysis II (1968) and Real Analysis (1983) • Jorgenson, Jay; Lang, Serge (1993). Basic analysis of regularized series and products. Lecture Notes in Mathematics. Vol. 1564. Berlin: Springer-Verlag. doi:10.1007/BFb0077194. ISBN 3-540-57488-3. MR 1284924. • Lang, Serge (1994). Algebraic number theory. Graduate Texts in Mathematics. Vol. 
110 (Second edition of 1970 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-0853-2. ISBN 0-387-94225-4. MR 1282723.[25] The first edition was itself the second edition of Algebraic Numbers (1964) • Lang, Serge (1995). Introduction to Diophantine approximations (Second edition of 1966 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-4220-8. ISBN 0-387-94456-7. MR 1348400. • Lang, Serge (1995). Introduction to modular forms. Grundlehren der mathematischen Wissenschaften. Vol. 222. With appendixes by D. Zagier and Walter Feit (Corrected reprint of the 1976 original ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-642-51447-0. ISBN 3-540-07833-9. MR 1363488.[26] • Lang, Serge (1996). Topics in cohomology of groups. Lecture Notes in Mathematics. Vol. 1625. Chapter X based on letters written by John Tate (Translated from the 1967 French original ed.). Berlin: Springer-Verlag. doi:10.1007/BFb0092624. ISBN 3-540-61181-9. MR 1439508.[27] • Lang, S. (1997), Survey of Diophantine geometry, Berlin: Springer-Verlag, ISBN 3-540-61223-8, Zbl 0869.11051 • Lang, Serge (1999). Fundamentals of differential geometry. Graduate Texts in Mathematics. Vol. 191. New York: Springer-Verlag. doi:10.1007/978-1-4612-0541-8. ISBN 0-387-98593-X. MR 1666820. This book is the fourth edition, previously published under the different titles of Introduction to Differentiable Manifolds (1962), Differential Manifolds (1972), and Differential and Riemannian Manifolds (1995). Lang also published a distinct second edition (preserving the title of the 1962 original) so as to provide a companion volume to Fundamentals of Differential Geometry which covers a portion of the same material, but with the more elementary exposition confined to finite-dimensional manifolds: • Lang, Serge (2002). Introduction to differentiable manifolds. Universitext (Second ed.). New York: Springer-Verlag. doi:10.1007/b97450. ISBN 0-387-95477-5. MR 1931083.[28] • Jorgenson, Jay; Lang, Serge (2001). Spherical inversion on SLn(R). Springer Monographs in Mathematics. New York: Springer-Verlag. doi:10.1007/978-1-4684-9302-3. ISBN 0-387-95115-6. MR 1834111.[29] • Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Vol. 211 (Revised third edition of 1965 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4613-0041-0. ISBN 0-387-95385-X. MR 1878556. • Jorgenson, Jay; Lang, Serge (2005). Posn(R) and Eisenstein series. Lecture Notes in Mathematics. Vol. 1868. Berlin: Springer-Verlag. doi:10.1007/b136063. ISBN 978-3-540-25787-5. MR 2166237. • Jorgenson, Jay; Lang, Serge (2008). The heat kernel and theta inversion on SL2(C). Springer Monographs in Mathematics. New York: Springer. doi:10.1007/978-0-387-38032-2. ISBN 978-0-387-38031-5. MR 2449649. • Jorgenson, Jay; Lang, Serge (2009). Heat Eisenstein series on SLn(C). Memoirs of the American Mathematical Society. Vol. 201. doi:10.1090/memo/0946. ISBN 978-0-8218-4044-3. MR 2548067. Other • Lang, Serge (1981). The file. Case study in correction (1977–1979). New York: Springer-Verlag. • Lang, Serge (1985). The beauty of doing mathematics. Three public dialogues. Translated from the French. New York: Springer-Verlag. doi:10.1007/978-1-4612-1102-0. ISBN 0-387-96149-6. MR 0804668. • Lang, Serge (1985). Math!: Encounters with high school students. New York: Springer-Verlag. doi:10.1007/978-1-4757-1860-7. ISBN 978-0-387-96129-3. • Lang, Serge (1998). Challenges. New York: Springer. doi:10.1007/978-1-4612-1638-4. ISBN 978-0-387-94861-4. • Lang, Serge (1999). 
Math talks for undergraduates. New York: Springer-Verlag. doi:10.1007/978-1-4612-1476-2. ISBN 0-387-98749-5. MR 1697559. • Lang, Serge (2000). Collected papers. I. 1952–1970. Springer Collected Works in Mathematics. New York: Springer. ISBN 0-387-98802-5. MR 1772967. • Lang, Serge (2000). Collected papers. II. 1971–1977. Springer Collected Works in Mathematics. New York: Springer. ISBN 0-387-98803-3. MR 1770235. • Lang, Serge (2000). Collected papers. III. 1978–1990. Springer Collected Works in Mathematics. New York: Springer. ISBN 0-387-98800-9. MR 1781669. • Lang, Serge (2000). Collected papers. IV. 1990–1996. Springer Collected Works in Mathematics. New York: Springer. ISBN 0-387-98804-1. MR 1781668. • Lang, Serge (2001). Collected papers. V. 1993–1999. Springer Collected Works in Mathematics. New York: Springer. ISBN 0-387-95030-3. MR 1781684. References 1. Kalichman, Seth (2009). Denying AIDS: Conspiracy Theories, Pseudoscience, and Human Tragedy. Springer. p. 182. ISBN 9780387794761. Lang descended into HIV/AIDS denialism and protested what he saw as the unjust treatment of Duesberg. He conducted a flawed analysis of Duesberg's grant failings and called into question the entire NIH review process. He also caused a bit of commotion on the Yale campus when AIDS speakers visited. He protested the appointment of former Global AIDS Program Director at the World Health Organization Michael Merson as Yale's Dean of Public Health and launched a series of letter writing campaigns to Yale administrators about the role the university was playing in the global AIDS conspiracy. 2. Jorgenson, Jay; Krantz, Steven G., eds. (May 2006). "Serge Lang, 1927–2005" (PDF). Notices of the American Mathematical Society. 53 (5): 536–553. 3. Lang, Serge (1951). On quasi algebraic closure. Princeton, N.J.: Princeton University. 4. Serge Lang at the Mathematics Genealogy Project 5. Daniel Bump, "The Lang Map" 6. Chang, Kenneth; Warren Leary (September 25, 2005). "Serge Lang, 78, a Gadfly and Mathematical Theorist, Dies". New York Times. Retrieved August 13, 2010. 7. Lang, Serge (18 May 1978). "The Professors: A Survey of a Survey". The New York Review of Books; available online as reprinted in Challenges. 8. Lang, Serge (1981). The File: Case Study in Correction (1977–1979). New York, NY: Springer New York. doi:10.1007/978-1-4613-8145-7. ISBN 978-1-4613-8145-7. Retrieved 4 May 2022. 9. Johnson, George; Laura Mansnerus (May 3, 1987). "Science Academy Rejects Harvard Political Scientist". New York Times. Retrieved August 13, 2010. 10. Lang, Serge (1999). Challenges. New York: Springer. ISBN 978-0-387-94861-4. 11. O'Hara, Michael W.; Lang, Serge (1998). Challenges. Springer Science & Business Media. ISBN 978-0-387-94861-4. Retrieved 4 May 2022. 12. Lang, Serge (1993). "Questions of Scientific Responsibility: The Baltimore Case". Ethics and Behavior. 3 (1): 3–72. 13. Jorgenson, Jay; Krantz, Steven G., eds. (May 2006). "Serge Lang, 1927–2005" (PDF). Notices of the American Mathematical Society. 53 (5): 536–553. 14. Magill, K. D. (1965). "Review of A Second Course in Calculus". The American Mathematical Monthly. 72 (9): 1048–1049. doi:10.2307/2313382. JSTOR 2313382. 15. Meacham, R. C. (1966). "Review of A Second Course in Calculus". Mathematics Magazine. 39 (2): 124. doi:10.2307/2688730. JSTOR 2688730. 16. Niven, Ivan (1970). "Review of A Second Course in Calculus". Mathematics Magazine. 43 (5): 277–278. doi:10.2307/2688750. JSTOR 2688750. 
17. Rosenlicht, M. (1959). "Review: Introduction to algebraic geometry. By Serge Lang" (PDF). Bull. Amer. Math. Soc. 65 (6): 341–342. doi:10.1090/s0002-9904-1959-10361-x. 18. Baker, Alan (1980). "Review: Elliptic curves: Diophantine analysis, by Serge Lang" (PDF). Bull. Amer. Math. Soc. (N.S.). 2 (2): 352–354. doi:10.1090/s0273-0979-1980-14756-4. 19. Mordell, L. J. (1964). "Review: Diophantine geometry. By Serge Lang" (PDF). Bull. Amer. Math. Soc. 70 (4): 491–498. doi:10.1090/s0002-9904-1964-11164-2. 20. Lang, Serge (January 1995). "Mordell's review, Siegel's letter to Mordell, Diophantine geometry, and 20th century mathematics" (PDF). Gazette des mathématiciens (63): 17–36. 21. Langlands, R. P. (1976). "SL2(R), by Serge Lang" (PDF). Bull. Amer. Math. Soc. 82 (5): 688–691. doi:10.1090/s0002-9904-1976-14109-2. 22. Roquette, Peter (1976). "Review: Elliptic functions, by Serge Lang" (PDF). Bull. Amer. Math. Soc. 82 (4): 523–526. doi:10.1090/s0002-9904-1976-14082-7. 23. Green, Mark (1988). "Review: Introduction to complex hyperbolic spaces by Serge Lang". Bull. Amer. Math. Soc. (N.S.). 18 (2): 188–191. doi:10.1090/s0273-0979-1988-15644-3. 24. Silverman, Joseph H. (1989). "Review: Introduction to Arakelov theory, by Serge Lang" (PDF). Bull. Amer. Math. Soc. (N.S.). 21 (1): 171–176. doi:10.1090/s0273-0979-1989-15806-0. 25. Corwin, Lawrence (1972). "Review: Algebraic Number Theory by Serge Lang" (PDF). Bull. Amer. Math. Soc. 78 (5): 690–693. doi:10.1090/s0002-9904-1972-12984-7. 26. Terras, Audrey (1980). "Review: Introduction to modular forms, by Serge Lang" (PDF). Bull. Amer. Math. Soc. (N.S.). 2 (1): 206–214. doi:10.1090/s0273-0979-1980-14722-9. 27. Hochschild, G. (1969). "Review: Rapport sur la cohomologie des groupes by Serge Lang" (PDF). Bull. Amer. Math. Soc. 75 (5): 927–929. doi:10.1090/s0002-9904-1969-12294-9. 28. Abraham, Ralph (1964). "Review: Introduction to differential manifolds. By Serge Lang" (PDF). Bull. Amer. Math. Soc. 70 (2): 225–227. doi:10.1090/s0002-9904-1964-11089-2. 29. Krötz, Bernhard (2002). "Spherical Inversion on SLn(R), by Jay Jorgenson and Serge Lang" (PDF). Bull. Amer. Math. Soc. (N.S.). 40 (1): 137–142. doi:10.1090/s0273-0979-02-00962-x. Sources and further reading • Steele Prize citation and Lang's acceptance (AMS Notices, April 1999) • Jorgenson, Jay; Krantz, Steven G., eds. (May 2006). "Serge Lang, 1927–2005" (PDF). Notices of the American Mathematical Society. 53 (5): 536–553. • Jorgenson, Jay; Krantz, Steven G., eds. (April 2007). "The Mathematical Contributions of Serge Lang" (PDF). Notices of the AMS. 54 (4): 476–497. External links Wikiquote has quotations related to Serge Lang. • O'Connor, John J.; Robertson, Edmund F., "Serge Lang", MacTutor History of Mathematics Archive, University of St Andrews • Serge Lang at the Mathematics Genealogy Project • Obituary from the New York Times • Lang's obituary article in the Yale Daily News
Subbaramiah Minakshisundaram Subbaramiah Minakshisundaram (12 October 1913 – 13 August 1968), also known as Minakshi or SMS, was an Indian mathematician who worked on partial differential equations and heat kernels. In 1946, he worked at the Institute for Advanced Study in Princeton in the United States, where he met Åke Pleijel. In 1949, the two wrote a joint paper, Some properties of the eigenfunctions of the Laplace-operator on Riemannian manifolds, in which they introduced the Minakshisundaram–Pleijel zeta function.[1][2][3] Subbaramiah Minakshisundaram (Minakshisundaram in 1940). Born: 12 October 1913, Trichur, Kerala State, India. Died: 13 August 1968, Visakhapatnam, Andhra Pradesh, India. Nationality: Indian. Other names: Minakshi, SMS, Jeja. Occupation: Mathematician. Years active: 1936–1968. Known for: Minakshisundaram–Pleijel zeta function. On 13 August 1968, Minakshisundaram suffered a heart attack and died at the age of 54. Biography Early life Subbaramiah Minakshisundaram was the first of three children, born in 1913 and called "Jeja" by his parents, meaning God. In his early years he learned Malayalam, but when his family moved to Chennai because of his father's job as a sanitary engineer they spoke Tamil.[1] His family were Hindus, and as a boy Minakshisundaram would chant the Gayatri Japa and perform the Sandhyavandanam with his grandfather every morning and night.[4] In 1919, Minakshi attended the Calavala Ramanujam Chetty High School, where he displayed a "marked aptitude" for mathematics, and graduated in 1929 with a Secondary School Leaving Certificate. After graduating, he studied at Pachaiyappa's College and went to Loyola College in 1931 to pursue a B.A. in mathematics, which he attained in 1934. Career After graduating from Loyola College, Minakshisundaram joined the University of Madras as a research scholar and worked in the library.[3] During his time at the University of Madras, Minakshi was influenced by K Ananda Rau, and began studying the summability of series. His first paper, Tauberian theorems on Dirichlet's series, was published in 1936 and expanded upon Rau's theorem. He followed this with two papers, both published in 1937: On the extension of a theorem of Caratheodory in the theory of Fourier series and The Fourier series of a sequence of functions.[1] Marriage On 10 May 1937, Minakshisundaram married M Parvathi. They had three children together: • K Ramu (born 1943) • K Girija (born 1945) • K Radha (birthdate unknown)[1] Publications • Minakshisundaram, S., & Szasz, O. (1947). On Absolute Convergence of Multiple Fourier Series. Transactions of the American Mathematical Society, 61(1), 36–53. https://doi.org/10.2307/1990288 • Minakshisundaram, S., & Pleijel, Å. (1949). Some Properties of the Eigenfunctions of The Laplace-Operator on Riemannian Manifolds. Canadian Journal of Mathematics, 1(3), 242-256. doi:10.4153/CJM-1949-021-5 • Minakshisundaram, S. (1949). A Generalization of Epstein Zeta Functions. Canadian Journal of Mathematics, 1(4), 320-327. doi:10.4153/CJM-1949-029-3 • Minakshisundaram, S., Chandrasekharan, K. (1952). Typical Means. India: Tata Institute of Fundamental Research, Bombay. • Minakshisundaram, S. (1953). Eigenfunctions on Riemannian Manifolds. The Journal of the Indian Mathematical Society, 17(4), 159–165 • Chandrasekharan, K., & Minakshisundaram, S. (1954). A Note on Typical Means. The Journal of the Indian Mathematical Society, 18(1), 107–114 Posthumous works • Minakshisundaram, S. (2012). Collected Works of S. Minakshisundaram. 
India: Ramanujan Mathematical Society. References 1. "Subbaramiah Minakshisundaram - Biography". Maths History. Retrieved 31 March 2023. 2. "From the Editor's Desk". web.archive.org. 4 July 2016. Retrieved 3 April 2023. 3. Thangavelu, S (January 2003). "S Minakshisundaram: A glimpse into his life and work". Resonance. 8 (1): 41–50. doi:10.1007/BF02834449. ISSN 0971-8044. 4. "Episodes in Minakshisundaram's life". Maths History. Retrieved 31 March 2023. External links • S. Minakshisundaram memorial society
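For reference, the zeta function named in the lead can be stated compactly (this is the standard definition, given here for orientation rather than quoted from the sources above): for a compact Riemannian manifold $M$ with nonzero Laplace–Beltrami eigenvalues $\lambda_1 \le \lambda_2 \le \cdots$, counted with multiplicity, the Minakshisundaram–Pleijel zeta function is $Z(s) = \sum_{n \ge 1} \lambda_n^{-s}$. By Weyl's law the series converges for $\operatorname{Re}(s) > \tfrac{1}{2}\dim M$, and Minakshisundaram and Pleijel showed, using the heat kernel, that it extends to a meromorphic function on the whole complex plane.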
Satyendra Nath Bose Satyendra Nath Bose FRS, MP[3] (/ˈboʊs/;[4][note 1] 1 January 1894 – 4 February 1974) was an Indian mathematician and physicist specializing in theoretical physics. He is best known for his work on quantum mechanics in the early 1920s, in developing the foundation for Bose–Einstein statistics and the theory of the Bose–Einstein condensate. A Fellow of the Royal Society, he was awarded India's second highest civilian award, the Padma Vibhushan, in 1954 by the Government of India.[5][6][7] Satyendra Nath Bose FRS, MP (Bose in 1925). Born: Satyendra Nath Bose, 1 January 1894, Calcutta, Bengal Presidency, British India. Died: 4 February 1974 (aged 80), Calcutta, West Bengal, India.[1] Alma mater: University of Calcutta. Known for: Bose–Einstein condensate • Bose–Einstein statistics • Bose–Einstein distribution • Bose–Einstein correlations • Bose gas • Boson • Ideal Bose Equation of State • Photon gas • Bosonic string theory. Spouse: Ushabati Bose (née Ghosh).[2] Awards: Padma Vibhushan • Fellow of the Royal Society.[3] Fields: Physics, Quantum Mechanics, Mathematics. Institutions: University of Calcutta • University of Dhaka • Visva-Bharati. Academic advisors: Jagadish Chandra Bose • Prafulla Chandra Ray. Doctoral students: Purnima Sinha • Partha Ghose • Siva Brata Bhattacherjee. Other notable students: Mani Lal Bhaumik • Lilabati Bhattacharjee • Asima Chatterjee • Ratan Lal Brahmachary. Member of Parliament, Rajya Sabha, in office 3 April 1952 – 2 April 1960. The class of particles that obey Bose statistics, bosons, was named after Bose by Paul Dirac.[8][9] A polymath, he had a wide range of interests in varied fields, including physics, 
mathematics, chemistry, biology, mineralogy, philosophy, arts, literature, and music. He served on many research and development committees in India after independence.[10] Early life Bose was born in Calcutta (now Kolkata), the eldest of seven children in a Bengali Kayastha[11] family. He was the only son, with six sisters after him. His ancestral home was in the village Bara Jagulia, in the district of Nadia, in the Bengal Presidency. His schooling began at the age of five, near his home. When his family moved to Goabagan, he was admitted into the New Indian School. In his final year of school, he was admitted into the Hindu School. He passed his entrance examination (matriculation) in 1909 and stood fifth in the order of merit. He then joined the intermediate science course at the Presidency College, Calcutta, where his teachers included Jagadish Chandra Bose, Sarada Prasanna Das, and Prafulla Chandra Ray. Bose received a Bachelor of Science in mixed mathematics from Presidency College, standing first in 1913. Then he joined Sir Ashutosh Mukherjee's newly formed Science College where he again stood first in the MSc mixed mathematics exam in 1915. His marks in the MSc examination created a new record in the annals of the University of Calcutta, which is yet to be surpassed.[12] After completing his MSc, Bose joined the Science College, Calcutta University as a research scholar in 1916 and started his studies in the theory of relativity. It was an exciting era in the history of scientific progress. Quantum theory had just appeared on the horizon and significant results had started pouring in.[12] His father, Surendranath Bose, worked in the Engineering Department of the East Indian Railway Company. In 1914, at age 20, Satyendra Nath Bose married Ushabati Ghosh,[2][13] the 11-year-old daughter of a prominent Calcutta physician.[14] They had nine offspring, two of whom died in early childhood. When he died in 1974, he left behind his wife, two sons, and five daughters.[12] As a polyglot, Bose was well versed in several languages such as Bengali, English, French, German and Sanskrit as well as the poetry of Lord Tennyson, Rabindranath Tagore and Kalidasa. He could play the esraj, an Indian instrument similar to a violin.[15] He was actively involved in running night schools that came to be known as the Working Men's Institute.[7][16] Research career Bose attended Hindu School in Calcutta, and later attended Presidency College, also in Calcutta, earning the highest marks at each institution, while fellow student and future astrophysicist Meghnad Saha came second.[7] He came in contact with teachers such as Jagadish Chandra Bose, Prafulla Chandra Ray and Naman Sharma who provided inspiration to aim high in life. From 1916 to 1921, he was a lecturer in the physics department of the Rajabazar Science College under University of Calcutta. Along with Saha, Bose prepared the first book in English based on German and French translations of original papers on Einstein's special and general relativity in 1919. In 1921, Satyendra Nath Bose joined as Reader in the Department of Physics of the recently founded University of Dhaka (in present-day Bangladesh).[17] Bose set up whole new departments, including laboratories, to teach advanced courses for MSc and BSc honours and taught thermodynamics as well as James Clerk Maxwell's theory of electromagnetism.[18] Bose, along with Indian Astrophysicist Meghnad Saha, presented several papers in theoretical physics and pure mathematics from 1918 onwards. 
In 1924, whilst a Reader in the Physics Department of the University of Dhaka, Bose wrote a paper deriving Planck's quantum radiation law without any reference to classical physics by using a novel way of counting states with identical particles. This paper was seminal in creating the important field of quantum statistics.[19] Though not accepted at once for publication, he sent the article directly to Albert Einstein in Germany. Einstein, recognising the importance of the paper, translated it into German himself and submitted it on Bose's behalf to the Zeitschrift für Physik. As a result of this recognition, Bose was able to work for two years in European X-ray and crystallography laboratories, during which he worked with Louis de Broglie, Marie Curie, and Einstein.[7][20][21][22] Bose–Einstein statistics While presenting a lecture[23] at the University of Dhaka on the theory of radiation and the ultraviolet catastrophe, Bose intended to show his students that the contemporary theory was inadequate, because it predicted results not in accordance with experimental results. In the process of describing this discrepancy, Bose for the first time took the position that the Maxwell–Boltzmann distribution would not be true for microscopic particles, where fluctuations due to Heisenberg's uncertainty principle would be significant. Thus he stressed the probability of finding particles in phase space, with each state occupying a volume h³, and discarded the particles' distinct positions and momenta. Bose adapted this lecture into a short article called "Planck's Law and the Hypothesis of Light Quanta" and sent it to Albert Einstein with the following letter:[24] Respected Sir, I have ventured to send you the accompanying article for your perusal and opinion. I am anxious to know what you think of it. You will see that I have tried to deduce the coefficient 8πν²/c³ in Planck's Law independent of classical electrodynamics, only assuming that the ultimate elementary region in the phase-space has the content h³. I do not know sufficient German to translate the paper. If you think the paper worth publication I shall be grateful if you arrange for its publication in Zeitschrift für Physik. Though a complete stranger to you, I do not feel any hesitation in making such a request. Because we are all your pupils though profiting only by your teachings through your writings. I do not know whether you still remember that somebody from Calcutta asked your permission to translate your papers on Relativity in English. You acceded to the request. The book has since been published. I was the one who translated your paper on Generalised Relativity. Einstein agreed with him, translated Bose's paper "Planck's Law and Hypothesis of Light Quanta" into German, and had it published in Zeitschrift für Physik under Bose's name, in 1924.[25] Counting outcomes of flipping two coins: (1) if the coins are treated as indistinguishable, there are only three outcomes (two heads, two tails, one of each), and weighting them equally gives a probability of one-third for two heads; (2) if the coins are distinct, there are four equally likely outcomes (HH, HT, TH, TT), two of which produce a head and a tail, so the probability of two heads is one-quarter. The reason Bose's interpretation produced accurate results was that since photons are indistinguishable from each other, one cannot treat any two photons having equal energy as being two distinct identifiable photons. 
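In modern notation, the law Bose re-derived by this counting is Planck's spectral energy density, $u(\nu,T)\,d\nu = \frac{8\pi\nu^{2}}{c^{3}}\,\frac{h\nu}{e^{h\nu/k_{B}T}-1}\,d\nu$, where the factor $8\pi\nu^{2}/c^{3}$ is the coefficient mentioned in the letter and arises from counting phase-space cells of volume $h^{3}$ (a standard textbook statement, added here for orientation rather than quoted from the article's sources). The coin illustration above can likewise be sketched in a few lines of code; this is an illustrative toy added here, with the two-state "coin" standing in for single-particle states:

```python
# Toy comparison of the two counting rules discussed above (illustrative only).
# Maxwell-Boltzmann: labelled particles, every ordered arrangement counts.
# Bose-Einstein: indistinguishable particles, only occupancy patterns count.
from itertools import product, combinations_with_replacement
from fractions import Fraction

states = ["head", "tail"]

mb_microstates = list(product(states, repeat=2))                  # HH, HT, TH, TT
be_microstates = list(combinations_with_replacement(states, 2))   # HH, HT, TT

def prob_two_heads(microstates):
    hits = sum(1 for m in microstates if m == ("head", "head"))
    return Fraction(hits, len(microstates))

print(prob_two_heads(mb_microstates))  # 1/4 with distinguishable "coins"
print(prob_two_heads(be_microstates))  # 1/3 with boson-like "coins"
```

Labelling the particles gives the familiar 1/4 for two heads, while weighting each occupancy pattern equally, as Bose's counting effectively does, gives 1/3.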
By analogy, if, in an alternate universe, coins were to behave like photons and other bosons, the probability of producing two heads would indeed be one-third (tail-head = head-tail). Bose's interpretation is now called Bose–Einstein statistics. This result derived by Bose laid the foundation of quantum statistics, and especially the revolutionary new philosophical conception of the indistinguishability of particles, as acknowledged by Einstein and Dirac.[25] When Einstein met Bose face-to-face, he asked him whether he had been aware that he had invented a new type of statistics; Bose candidly replied that no, he was not that familiar with Boltzmann's statistics and had not realized that he was doing the calculations differently. He was equally candid with anyone who asked. Bose–Einstein condensate Einstein also did not at first realize how radical Bose's departure was, and in his first paper after Bose, he was guided, like Bose, by the fact that the new method gave the right answer. But after Einstein's second paper using Bose's method, in which Einstein predicted the Bose–Einstein condensate, he started to realize just how radical it was, and he compared it to wave–particle duality, saying that some particles didn't behave exactly like particles. Bose had already submitted his article to the British journal Philosophical Magazine, which rejected it before he sent it to Einstein. It is not known why it was rejected.[27] Einstein adopted the idea and extended it to atoms. This led to the prediction of the existence of phenomena which became known as the Bose–Einstein condensate, a dense collection of bosons (which are particles with integer spin, named after Bose), which was demonstrated to exist by experiment in 1995. Dhaka After his stay in Europe, Bose returned to Dhaka in 1926. He did not have a doctorate, and so ordinarily, under the prevailing regulations, he would not be qualified for the post of Professor he applied for, but Einstein recommended him. He was then made Head of the Department of Physics at Dhaka University. He continued guiding and teaching at Dhaka University and was the Dean of the Faculty of Science there until 1945. Bose designed equipment himself for an X-ray crystallography laboratory. He set up laboratories and libraries to make the department a center of research in X-ray spectroscopy, X-ray diffraction, magnetic properties of matter, optical spectroscopy, wireless, and unified field theories. He also published an equation of state for real gases with Meghnad Saha. 
Calcutta When the partition of India became imminent (1947), he returned to Calcutta (now known as Kolkata) and taught there until 1956. He insisted every student design his own equipment using local materials and local technicians. He was made professor emeritus on his retirement.[20][28][7] He then became Vice-Chancellor of Visva-Bharati University in Santiniketan. He returned to the University of Calcutta to continue research in nuclear physics and complete earlier works in organic chemistry. In subsequent years, he worked in applied research such as the extraction of helium from the hot springs of Bakreshwar.[29] Other fields Apart from physics, he did research in biotechnology and literature (Bengali and English). He made studies in chemistry, geology, zoology, anthropology, engineering and other sciences. Being Bengali, he devoted significant time to promoting Bengali as a teaching language, translating scientific papers into it, and promoting the development of the region.[21][30][6] Honours In 1937, Rabindranath Tagore dedicated his only book on science, Visva–Parichay, to Satyendra Nath Bose. Bose was honoured with the title Padma Vibhushan by the Indian Government in 1954. In 1959, he was appointed as the National Professor, the highest honour in the country for a scholar, a position he held for 15 years. In 1986, the S.N. Bose National Centre for Basic Sciences was established by an act of Parliament, Government of India, in Salt Lake, Calcutta.[31][32] Bose became an adviser to the then newly formed Council of Scientific and Industrial Research. He was the president of the Indian Physical Society and the National Institute of Science. He was elected general president of the Indian Science Congress. He was the vice president and then the president of the Indian Statistical Institute. In 1958, he became a Fellow of the Royal Society. He was nominated as a member of the Rajya Sabha. Partha Ghose has stated that[7] Bose's work stood at the transition between the 'old quantum theory' of Planck, Bohr and Einstein and the new quantum mechanics of Schrödinger, Heisenberg, Born, Dirac and others. Nobel Prize nomination Bose was nominated by K. Banerjee (1956), D.S. Kothari (1959), S.N. Bagchi (1962), and A.K. Dutta (1962) for the Nobel Prize in Physics, for his contribution to Bose–Einstein statistics and the unified field theory. Banerjee, head of the Physics Department, University of Allahabad, in a letter of 12 January 1956 wrote to the Nobel Committee as follows: "(1). He (Bose) made very outstanding contributions to physics by developing the statistics known after his name as Bose statistics. In recent years this statistics is found to be of profound importance in the classifications of fundamental particles and has contributed immensely to the development of nuclear physics. (2). During the period from 1953 to date, he has made a number of highly interesting contributions of far-reaching consequences on the subject of Einstein's Unitary Field Theory." Bose's work was evaluated by an expert of the Nobel Committee, Oskar Klein, who did not consider it worthy of a Nobel Prize.[33][34][35] Legacy Bosons, a class of elementary subatomic particles in particle physics, were named by Dirac after Satyendra Nath Bose to commemorate his contributions to science.[36][37] Although seven Nobel Prizes were awarded for research related to S N Bose's concepts of the boson, Bose–Einstein statistics and Bose–Einstein condensate, Bose himself was not awarded a Nobel Prize. 
In his book The Scientific Edge, physicist Jayant Narlikar observed: SN Bose's work on particle statistics (c. 1922), which clarified the behaviour of photons (the particles of light in an enclosure) and opened the door to new ideas on statistics of Microsystems that obey the rules of quantum theory, was one of the top ten achievements of 20th century Indian science and could be considered in the Nobel Prize class.[38] When Bose himself was once asked that question, he replied, "I have got all the recognition I deserve" - probably because in the realms of science to which he belonged, what is important is not a Nobel, but whether one's name will live on in scientific discussions in the decades to come.[39] One of the main academic buildings of University of Rajshahi, the No 1 science building has been named after him. On 4 June 2022, Google honoured Bose by featuring him on a Google Doodle[40] marking the 98th anniversary of Bose sending his quantum formulations to the German scientist Albert Einstein who recognised it as a significant discovery in quantum mechanics.[41][42] Works (selection) • Bose (1924), "Plancks Gesetz und Lichtquantenhypothese", Zeitschrift für Physik (in German), 26 (1): 178–181, Bibcode:1924ZPhy...26..178B, doi:10.1007/BF01327326, S2CID 186235974. Notes 1. The English pronunciation is from the Hindustani, [səˈtjeːndrə ˈnaːtʰ ˈboːs]. The Bengali pronunciation is [ʃotːendronatʰ boʃu]. References 1. "Satyendra Nath Bose – Bengali physicist". Encyclopædia Britannica. Retrieved 5 December 2015. 2. "S. N. Bose Biography Project". July 2012. 3. Mehra, J. (1975). "Satyendra Nath Bose 1 January 1894 – 4 February 1974". Biographical Memoirs of Fellows of the Royal Society. 21: 116–126. doi:10.1098/rsbm.1975.0002. S2CID 72507392. 4. "Bose, Satyendra Nath". Lexico UK English Dictionary. Oxford University Press. Archived from the original on 18 July 2021. 5. Wali 2009, pp. xv, xxxiv. 6. Barran, Michel, "Bose, Satyendranath (1894–1974)", Science world (biography), Wolfram. 7. Mahanti, Dr Subodh. "Satyendra Nath Bose, The Creator of Quantum Statistics". IN: Vigyan Prasar. 8. Farmelo, Graham, "The Strangest Man", Notes on Dirac's lecture Developments in Atomic Theory at Le Palais de la Découverte, 6 December 1945, UKNATARCHI Dirac Papers, p. 331, note 64, BW83/2/257889. 9. Miller, Sean (18 March 2013). Strung Together: The Cultural Currency of String Theory as a Scientific Imaginary. University of Michigan Press. p. 63. ISBN 978-0-472-11866-3. 10. Wali 2009, p. xl. 11. Santimay Chatterjee; Enakshi Chatterjee (1976). Satyendra Nath Bose. National Book Trust, India. p. 12. Satyendra Nath was born in Calcutta on the first of January, 1894, in a high caste Kayastha family with two generations of English education behind him. 12. Kamble, Dr VB (January 2002). "Vigyan Prasar". 13. Wali 2009, p. xvii. 14. Masters, Barry R. (April 2013). "Satyendra Nath Bose and Bose–Einstein Statistics" (PDF). Optics & Photonics News. 24 (4): 41. Bibcode:2013OptPN..24...40M. doi:10.1364/OPN.24.4.000040. 15. "Vigyan Prasar – SC Bose". www.vigyanprasar.gov.in. Government of India. Retrieved 14 June 2017. 16. Wali 2009, p. xvi. 17. Md Mahbub Murshed (2012), "Bose, Satyendra Nath", in Sirajul Islam and Ahmed A. Jamal (ed.), Banglapedia: National Encyclopedia of Bangladesh (Second ed.), Asiatic Society of Bangladesh 18. Wali 2009, pp. xvii, xviii, xx. 19. Bose, S. N. (1994). "Planck's Law and the Light Quantum Hypothesis" (PDF). Journal of Astrophysics and Astronomy. 15: 3–7. Bibcode:1994JApA...15....3B. 
doi:10.1007/BF03010400. S2CID 121808581. 20. Shanbhag, MR. "Scientist". Personalities. Calcutta web. Archived from the original on 2 August 2002. 21. O'Connor, JJ; Robertson, EF (October 2003). "Satyendranath Bose". The MacTutor History of Mathematics archive. UK: St Andrew's. 22. Wali 2009, pp. xx–xxiii. 23. Shanbhag, MR. "Satyendra Nath Bose (January 1, 1894 – February 4, 1974)". Indian Statistical Institute. 24. Venkataraman, G (1992), Bose And His Statistics, Universities Press, p. 14, ISBN 978-81-7371-036-0 25. Wali 2009, p. 414. 26. "Quantum Physics; Bose Einstein condensate", Image Gallery, NIST, 11 March 2006. 27. A.Douglas Stone, Chapter 24, The Indian Comet, in the book Einstein and the Quantum, Princeton University Press, Princeton, New Jersey, 2013. 28. Wali 2009, pp. xxx, xxiv. 29. Wali 2009, pp. xxxvi, xxxviii. 30. Wali 2009, pp. xxiv, xxxix. 31. Wali 2009, pp. xxxiv, xxxviii. 32. Ghose, Partha (3 January 2012), "Original vision", The Telegraph (Opinion), IN, archived from the original on 25 February 2014. 33. Singh, Rajinder (2016) India's Nobel Prize Nominators and Nominees – The Praxis of Nomination and Geographical Distribution, Shaker Publisher, Aachen, pp. 26–27. ISBN 978-3-8440-4315-0 34. Singh, Rajinder (2016) Die Nobelpreise und die indische Elite, Shaker Verlag, Aachen, pp. 24–25. ISBN 978-3-8440-4429-4 35. Singh, Rajinder (2016) Chemistry and Physics Nobel Prizes – India's Contribution, Shaker Verlag, Aachen. ISBN 978-3-8440-4669-4. 36. Daigle, Katy (10 July 2012). "India: Enough about Higgs, let's discuss the boson". AP News. Retrieved 10 July 2012. 37. Bal, Hartosh Singh (19 September 2012). "The Bose in the Boson". New York Times blog. Retrieved 21 September 2012. 38. Narlikar, Jayant V (2003), The Scientific Edge: The Indian Scientist from Vedic to Modern Times, Penguin Books, p. 127, ISBN 978-0-14-303028-7. The work of other 20th century Indian scientists which Narlikar considered to be of Nobel Prize class were Srinivasa Ramanujan, Chandrasekhara Venkata Raman and Megh Nad Saha. 39. Alikhan, Anvar (16 July 2012). "The Spark in a Crowded Field". Outlook India. Retrieved 10 July 2012. 40. "Google Doodle : বিশ্ব মঞ্চে শ্রেষ্ঠ শিরোপা! বিজ্ঞানী Satyendra Nath Bose-কে সম্মান গুগলের". The Bengali Chronicle (in Bengali). 4 June 2022. Retrieved 10 August 2022. 41. "Celebrating Satyendra Nath Bose". www.google.com. Retrieved 4 June 2022. 42. "Satyendra Nath Bose: Google Pays Tribute To Indian Physicist With Special Doodle". NDTV.com. Retrieved 4 June 2022. External links Wikimedia Commons has media related to Satyendranath Bose. • Works by or about Satyendra Nath Bose at Internet Archive • Satyendra Nath Bose at the Encyclopædia Britannica • Pais, Abraham (1982), Subtle is the Lord...: The Science and Life of Albert Einstein, Oxford and New York: Oxford University Press, pp. 423–34, ISBN 978-0-19-853907-0. • Saha; Srivasthava, Heat and thermodynamics. • Pitaevskii, Lev; Stringari, Sandro (2003), Bose–Einstein Condensation, Oxford: Clarendon Press. • Wali, Kameshwar C (2009), Satyendra Nath Bose: his life and times (selected works with commentary), Singapore: World Scientific, ISBN 978-981-279-070-5 • O'Connor, John J.; Robertson, Edmund F., "Satyendra Nath Bose", MacTutor History of Mathematics Archive, University of St Andrews • "Bosons – The Birds That Flock and Sing Together", Vigyan Prasar, IN, January 2002 (biography of Bose and Bose–Einstein Condensation). • S.N. Bose Scholars Program, Wisc. 
• The Quantum Indians: film on Bose, Raman and Saha on YouTube by Raja Choudhury and produced by PSBT and Indian Public Diplomacy.
S. R. Ranganathan Shiyali Ramamrita Ranganathan[1] (9 August 1892 – 27 September 1972) was a librarian and mathematician from India.[2] His most notable contributions to the field were his five laws of library science and the development of the first major faceted classification system, the colon classification. He is considered to be the father of library science, documentation, and information science in India and is widely known throughout the rest of the world for his fundamental thinking in the field. His birthday is observed every year as the National Librarian Day in India.[3] S. R. Ranganathan (S. R. Ranganathan's portrait at City Central Library, Hyderabad, Chennai). Born: Shiyali Ramamrita Ranganathan, 9 August 1892, Shiyali, Madras Presidency, British India (present-day Tamil Nadu, India). Died: 27 September 1972 (aged 80), Bangalore, Mysore State, India (present-day Karnataka). Occupation: Librarian, author, academic, mathematician. Nationality: Indian. Genre: Library Science, Documentation, Information Science. Notable works: Prolegomena to Library Classification • The Five Laws of Library Science • Colon Classification • Ramanujan: the Man and the Mathematician • Classified Catalogue Code: With Additional Rules for Dictionary Catalogue Code • Library Administration • Indian Library Manifesto • Library Manual for Library Authorities, Librarians, and Library Workers • Classification and Communication • Headings and Canons; Comparative Study of Five Catalogue Codes. Notable awards: Padma Shri (1957). He was a university librarian and professor of library science at Banaras Hindu University (1945–47) and professor of library science at the University of Delhi (1947–55). The last appointment made him director of the first Indian school of librarianship to offer higher degrees. He was president of the Indian Library Association from 1944 to 1953. In 1957 he was elected an honorary member of the International Federation for Information and Documentation (FID) and was made a vice-president for life of the Library Association of Great Britain.[4] Early life and education Ranganathan was born on 9 August 1892 in Shiyali, in the Thanjavur district of present-day Tamil Nadu, into an orthodox Hindu Brahmin family.[5][1] His birth date is sometimes given as 12 August 1892, but he himself gave it as 9 August 1892 in his book The Five Laws of Library Science. Ranganathan began his professional life as a mathematician; he earned B.A. and M.A. degrees in mathematics from Madras Christian College in his home province, and then went on to earn a teaching license. His lifelong goal was to teach mathematics, and he was successively a member of the mathematics faculties at universities in Mangalore, Coimbatore and Madras. As a mathematics professor, he published papers mainly on the history of mathematics. His career as an educator was somewhat hindered by stammering (a difficulty he gradually overcame in his professional life). The Government of India awarded the Padma Shri to Ranganathan in 1957 for his valuable contributions to library science.[6] Early career In 1923, the University of Madras created the post of University Librarian to oversee its poorly organized collection. Among the 900 applicants for the position, none had any formal training in librarianship, and Ranganathan's handful of papers satisfied the search committee's requirement that the candidate should have a research background. His sole knowledge of librarianship came from an Encyclopædia Britannica article he read days before the interview. 
Ranganathan was initially reluctant to pursue the position (he had forgotten about his application by the time he was called for an interview there). To his own surprise, he received the appointment and accepted the position in January 1924.[1] At first, Ranganathan found the solitude of the position intolerable. In a matter of weeks, complaining of total boredom, he went back to the university administration to beg for his teaching position back. A deal was struck that Ranganathan would travel to London to study contemporary Western practices in librarianship, and that, if he returned and still rejected librarianship as a career, the mathematics lectureship would be his again.[7] Ranganathan travelled to University College London, which at that time housed the only graduate degree program in library science in Britain. At University College, he earned marks only slightly above average, but his mathematical genius latched onto the problem of classification, a subject typically taught by rote in library programs of the time. As an outsider, he focused on what he perceived to be flaws in the popular decimal classification, and began to explore new possibilities on his own.[8] He also devised the Acknowledgment of Duplication, which states that any system of classification of information necessarily implies at least two different classifications for any given datum. He anecdotally proved this with the Dewey Decimal Classification (DDC) by taking several books and showing how each might be classified with two totally different resultant DDC numbers.[9] For example, a book on "warfare in India" could be classified under "warfare" or "India". Even a general book on warfare could be classified under "warfare", "history", "social organisation", "Indian essays", or many other headings, depending upon the viewpoint, needs, and prejudices of the classifier. To Ranganathan, a structured, step-by-step system acknowledging each facet of the topic of the work was immensely preferable to the anarchy and "intellectual laziness" (as he termed it) of the DDC. Given the poor technology for information retrieval available at that time, the implementation of this concept was a tremendous step forward for the science of information retrieval. He began drafting the system that was ultimately to become colon classification while in England, and refined it as he returned home, even going so far as to reorder the ship's library on the voyage back to India. He initially got the idea for the system from seeing a set of Meccano in a toy store in London. Ranganathan returned with a great interest in libraries and librarianship and a vision of their importance for the Indian nation. The system remains useful even in modern times. He returned to and held the position of University Librarian at the University of Madras for twenty years. During that time, he helped to found the Madras Library Association, and lobbied actively for the establishment of free public libraries throughout India and for the creation of a comprehensive national library.[9] Ranganathan was considered by many to be a workaholic. During his two decades in Madras, he consistently worked 13-hour days, seven days a week, without taking a vacation for the entire time. Although he married in November 1928, he returned to work the afternoon following the marriage ceremony. A few years later, he and his wife Sarada had a son. The couple remained married until Ranganathan's death. 
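The faceted idea behind the "warfare in India" example above can be illustrated with a small sketch (an illustrative toy added here; the facet names and titles are hypothetical and this is not Ranganathan's actual Colon Classification notation). A work is described by several facets and indexed under each of them, so the same book is retrievable under "warfare" and under "India" rather than being forced into a single pigeonhole:

```python
# Toy faceted index (illustrative only; facet names and titles are made up).
# Each work is described by several facets; the index lets it be found under any one.
from collections import defaultdict

works = {
    "A History of Warfare in India": {"subject": "warfare", "place": "India"},
    "Agriculture in India":          {"subject": "agriculture", "place": "India"},
    "Naval Warfare":                 {"subject": "warfare", "place": None},
}

index = defaultdict(set)
for title, facets in works.items():
    for facet, value in facets.items():
        if value is not None:
            index[(facet, value)].add(title)

print(index[("subject", "warfare")])  # the book is found under "warfare" ...
print(index[("place", "India")])      # ... and the same book under "India"
```

An enumerative scheme would have to pick one of these headings in advance; the faceted approach defers that choice to the reader's query, which is the contrast the passage above draws.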
The first few years of Ranganathan's tenure at Madras were years of deliberation and analysis as he addressed the problems of library administration and classification.[10] It was during this period that he produced what have come to be known as his two greatest legacies: his five laws of library science (1931) and the colon classification system (1933).[11] As for the political climate of the time, Ranganathan took up his position at the University of Madras in 1924; Gandhi had been imprisoned in 1922 and was released around the time Ranganathan took the job. Ranganathan sought to institute massive changes to the library system and to write about such things as open access and education for all, ideas which had the potential to empower the masses and encourage civil discourse (and disobedience). Although there is no evidence that Ranganathan did any of this for political reasons, his changes to the library had the result of educating more people, making information available to all, and even aiding women and minorities in the information-seeking process. The Northern Ireland crisis received an unexpected metaphorical reference in one of Ranganathan's books, which complained of the harmful effects of a low budget on the good functioning of a library as "making an Ulster of the ... law of parsimony". Later career After two decades of serving as librarian at Madras – a post he had intended to keep until his retirement – Ranganathan retired from his position after conflicts with a new university vice-chancellor became intolerable. At the age of 54, he submitted his resignation and, after a brief bout with depression, accepted a professorship in library science at Banaras Hindu University in Varanasi, his last formal academic position, in August 1945. There, he catalogued the university's collection; by the time he left four years later, he had classified over 100,000 items personally. Ranganathan headed the Indian Library Association from 1944 to 1953, but was never a particularly adept administrator, and left amid controversy when the Delhi Public Library chose to use the Dewey Decimal Classification system instead of his own Colon Classification. He held an honorary professorship at Delhi University from 1949 to 1955 and helped build that institution's library science programs with S. Dasgupta, a former student of his.[7] While at Delhi, Ranganathan drafted a comprehensive 30-year plan for the development of an advanced library system for the whole of India.[12] In 1951, Ranganathan released an album on Folkways Records entitled Readings from the Ramayana: In Sanskrit Bhagavad Gita. Ranganathan briefly moved to Zurich, Switzerland, from 1955 to 1957, when his son married a European woman; the unorthodox relationship did not sit well with Ranganathan, although his time in Zurich allowed him to expand his contacts within the European library community, where he gained a significant following. However, he soon returned to India and settled in the city of Bangalore, where he spent the rest of his life. While in Zurich, though, he endowed a professorship at Madras University in honour of his wife of thirty years, largely as an ironic gesture in retaliation for the persecution he suffered for many years at the hands of that university's administration. 
Ranganathan's final major achievement was the establishment of the Documentation Research and Training Centre as a department and research centre in the Indian Statistical Institute in Bangalore in 1962, where he served as honorary director for five years. In 1965, the Indian government honoured him for his contributions to the field with a rare title of "National Research Professor." In the final years of his life, Ranganathan suffered from ill health and was largely confined to his bed. On 27 September 1972, he finally succumbed to complications from bronchitis.[13] Upon the 1992 centenary of his birth, several biographical volumes and collections of essays on Ranganathan's influence were published in his honour. Ranganathan's autobiography, published serially during his life, is titled A Librarian Looks Back. Influence and legacy Ranganathan dedicated his book The Five Laws of Library Science to his maths tutor at Madras Christian College, Edward Burns Ross.[14] His birthday, August 12, has been denoted National Librarians' Day in India.[15] See also • Colon classification • Faceted classification • Five laws of library science • Madras Public Libraries Act • Subject (documents) References 1. Islam, Mohammed Nurul (10 June 2015). "S. R. Ranganathan: library and documentation scientist. Historical Notes" (PDF). Current Science. 108: 2110–2111. Retrieved 24 May 2016. 2. "A Short Biography". Indian Statistical Institute. 3. Broughton, Vanda (2006). "S. R. Ranganathan" in Essential Classification. London, Facet Publishing. ISBN 978-1-85604-514-8. Indian Statistical Institute Library and Sarita Ranganathan Endowment for Library Science. 4. "Ranganathan, Shivala Ramanrita (1892–1972)..." The Hutchinson Unabridged Encyclopedia with Atlas and Weather Guide. Abington: Helicon, 2009. Credo Reference. 5. Raghavan, K. S. (2019). "Shiyali Ramamrita Ranganathan". www.isko.org. 6. Srivastava, p. 125. 7. Garfield, Eugene. "A Tribute to S. R. Ranganathan, the Father of Indian Library Science. Part 1. Life and Works" (PDF). Essays of an Information Scientist. 7: 45–49. 8. Srivastava, p. 46. 9. Srivastava 10. Srivastava, pp. 30–31 11. Kabir, A. (2003). "Ranganathan: A Universal Librarian". Journal of Educational Media & Library Sciences. 40: 453–459. 12. Allen Kent, ed. (1978). "S .R. Ranganathan - A Short Biography" (PDF). Encyclopedia of Library and Information Science. Vol. 25. New York: Marcel Dekker Inc – via Indian Statistical Institute. 13. Srivastava, p. 2. 14. Ross biography. Groups.dcs.st-and.ac.uk. Retrieved on 2018-12-06. 15. India, The Hans (15 August 2015). "National Library Day celebrated". www.thehansindia.com. Retrieved 12 August 2021. Cited sources • Srivastava, A. P. (1977). Ranganathan: A pattern maker. New Delhi: Metropolitan Book Co. External links Wikimedia Commons has media related to S. R. Ranganathan. • Portal on Dr. S R Ranganathan from India • Ranganathan for Information Architects by Mike Steckel • Ranganathan's Monologue on Melvil Dewey, Recorded 1964 – transcript • India's First IT Guru • S.R. Ranganathan (1892–1972): Google Scholar Profile • Ranganathan- Profile in Brief • Full-view works by S.R. Ranganathan at HathiTrust Digital Library. 
S2S (mathematics) In mathematics, S2S is the monadic second order theory with two successors. It is one of the most expressive natural decidable theories known, with many decidable theories interpretable in S2S. Its decidability was proved by Rabin in 1969.[1] Basic properties The first order objects of S2S are finite binary strings. The second order objects are arbitrary sets (or unary predicates) of finite binary strings. S2S has functions s→s0 and s→s1 on strings, and predicate s∈S (equivalently, S(s)) meaning string s belongs to set S. Some properties and conventions: • By default, lowercase letters refer to first order objects, and uppercase to second order objects. • The inclusion of sets makes S2S second order, with "monadic" indicating absence of k-ary predicate variables for k>1. • Concatenation of strings s and t is denoted by st, and is not generally available in S2S, not even s→0s. The prefix relation between strings is definable. • Equality is primitive, or it can be defined as s = t ⇔ ∀S (S(s) ⇔ S(t)) and S = T ⇔ ∀s (S(s) ⇔ T(s)). • In place of strings, one can use (for example) natural numbers with n→2n+1 and n→2n+2 but no other operations. • The set of all binary strings is denoted by {0,1}*, using Kleene star. • Arbitrary subsets of {0,1}* are sometimes identified with trees, specifically as a {0,1}-labeled tree {0,1}*; {0,1}* forms a complete infinite binary tree. • For formula complexity, the prefix relation on strings is typically treated as first order. Without it, not all formulas would be equivalent to Δ12 formulas.[2] • For properties expressible in S2S (viewing the set of all binary strings as a tree), for each node, only O(1) bits can be communicated between the left subtree and the right subtree and the rest (see communication complexity). • For a fixed k, a function from strings to k (i.e. natural numbers below k) can be encoded by a single set. Moreover, s,t ⇒ s01t′ where t′ doubles every character of t is injective, and s ⇒ {s01t′: t∈{0,1}*} is S2S definable (illustrated in the sketch below). By contrast, by a communication complexity argument, in S1S (below) a pair of sets is not encodable by a single set. Weakenings of S2S: Weak S2S (WS2S) requires all sets to be finite (note that finiteness is expressible in S2S using Kőnig's lemma). S1S can be obtained by requiring that '1' does not appear in strings, and WS1S also requires finiteness. Even WS1S can interpret Presburger arithmetic with a predicate for powers of 2, as sets can be used to represent unbounded binary numbers with definable addition. Decision complexity S2S is decidable, and each of S2S, S1S, WS2S, WS1S has a nonelementary decision complexity corresponding to a linearly growing stack of exponentials. For the lower bound, it suffices to consider Σ11 WS1S sentences. A single second order quantifier can be used to propose an arithmetic (or other) computation, which can be verified using first order quantifiers if we can test which numbers are equal. For this, if we appropriately encode numbers 1..m, we can encode a number with binary representation i1i2...im as i1 1 i2 2 ... im m, preceded by a guard. By merging testing of guards and reusing variable names, the number of bits is linear in the number of exponentials. For the upper bound, using the decision procedure (below), sentences with k-fold quantifier alternation can be decided in time corresponding to k+O(1)-fold exponentiation of the sentence length (with uniform constants). 
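The injective pairing mentioned in the list of basic properties above can be made concrete. The following Python sketch is only an illustration (the function names are chosen here, not taken from the literature): it encodes a pair of binary strings (s, t) as s01t′, where t′ doubles every character of t, and recovers the pair again. The split is unique because a second valid split would have to place the separator 01 at an even offset inside the doubled part, where doubling forces 00 or 11.

```python
from itertools import product

def double(t):
    """Double every character: '01' -> '0011'."""
    return "".join(c + c for c in t)

def encode(s, t):
    """Encode the pair (s, t) of binary strings as s + '01' + double(t)."""
    return s + "01" + double(t)

def decode(w):
    """Recover (s, t) from encode(s, t).  The valid split is unique: a second
    split would place '01' at an even offset of the doubled part, where the
    doubling forces '00' or '11'."""
    splits = []
    for i in range(len(w) - 1):
        tail = w[i + 2:]
        if w[i:i + 2] == "01" and len(tail) % 2 == 0 and \
                all(tail[j] == tail[j + 1] for j in range(0, len(tail), 2)):
            splits.append((w[:i], tail[::2]))
    assert len(splits) == 1
    return splits[0]

# Exhaustive check of injectivity on all binary strings of length < 4.
strings = ["".join(bits) for n in range(4) for bits in product("01", repeat=n)]
codes = {}
for s in strings:
    for t in strings:
        w = encode(s, t)
        assert w not in codes and decode(w) == (s, t)
        codes[w] = (s, t)
print(len(codes), "pairs, all encoded injectively")
```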
Axiomatization WS2S can be axiomatized through certain basic properties plus induction schema.[3] S2S can be partially axiomatized by: (1) ∃!s ∀t (t0≠s ∧ t1≠s) (empty string, denoted by ε; ∃!s means "there is unique s") (2) ∀s,t ∀i∈{0,1} ∀j∈{0,1} (si=tj ⇒ s=t ∧ i=j) (the use of i and j is an abbreviation; for i=j, 0 does not equal 1) (3) ∀S (S(ε) ∧ ∀s (S(s) ⇒ S(s0) ∧ S(s1)) ⇒ ∀s S(s)) (induction) (4) ∃S ∀s (S(s) ⇔ φ(s)) (S not free in φ) (4) is the comprehension schema over formulas φ, which always holds for second order logic. As usual, if φ has free variables not shown, we take the universal closure of the axiom. If equality is primitive for predicates, one also adds extensionality S=T ⇔ ∀s (S(s) ⇔ T(s)). Since we have comprehension, induction can be a single statement rather than a schema. The analogous axiomatization of S1S is complete.[4] However, for S2S, completeness is open (as of 2021). While S1S has uniformization, there is no S2S definable (even allowing parameters) choice function that given a non-empty set S returns an element of S,[5] and comprehension schemas are commonly augmented with various forms of the axiom of choice. However, (1)-(4) is complete when extended with a determinacy schema for certain parity games.[6] S2S can also be axiomatized by Π13 sentences (using the prefix relation on strings as a primitive). However, it is not finitely axiomatizable, nor can it be axiomatized by Σ13 sentences even if we add induction schema and a finite set of other sentences (this follows from its connection to Π12-CA0). Theories related to S2S For every finite k, the monadic second order (MSO) theory of countable graphs with treewidth ≤k (and a corresponding tree decomposition) is interpretable in S2S (see Courcelle's theorem). For example, the MSO theory of trees (as graphs) or of series-parallel graphs is decidable. Here (i.e. for bounded tree width), we can also interpret the finiteness quantifier for a set of vertices (or edges), and also count vertices (or edges) in a set modulo a fixed integer. Allowing uncountable graphs does not change the theory. Also, for comparison, S1S can interpret connected graphs of bounded pathwidth. By contrast, for every set of graphs of unbounded treewidth, its existential (i.e. Σ11) MSO theory is undecidable if we allow predicates on both vertices and edges. Thus, in a sense, decidability of S2S is the best possible. Graphs with unbounded treewidth have large grid minors, which can be used to simulate a Turing machine. By reduction to S2S, the MSO theory of countable orders is decidable, as is the MSO theory of countable trees with their Kleene–Brouwer orders. However, the MSO theory of ($\mathbb {R} $, <) is undecidable.[7][8] The MSO theory of ordinals <ω2 is decidable; decidability for ω2 is independent of ZFC (assuming Con(ZFC + weakly compact cardinal)).[9] Also, an ordinal is definable using monadic second order logic on ordinals iff it can be obtained from definable regular cardinals by ordinal addition and multiplication.[10] S2S is useful for decidability of certain modal logics, with Kripke semantics naturally leading to trees. S2S+U (or just S1S+U) is undecidable if U is the unbounding quantifier — UX Φ(X) iff Φ(X) holds for arbitrarily large finite X.[11] However, WS2S+U, even with quantification over infinite paths, is decidable, even with S2S subformulas that do not contain U.[12] Formula complexity A set of binary strings is definable in S2S iff it is regular (i.e. forms a regular language). 
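To make the notion of a regular language concrete, here is a minimal Python sketch (illustrative only; the names are chosen here) of a deterministic finite automaton over binary strings for the language of strings containing an even number of 1s, one of the simplest sets of strings definable in S2S (indeed already in WS1S).

```python
# Deterministic finite automaton for {binary strings with an even number of 1s}.
# State 0: an even number of 1s read so far; state 1: an odd number.
TRANSITIONS = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
START, ACCEPTING = 0, {0}

def accepts(word):
    state = START
    for ch in word:
        state = TRANSITIONS[(state, ch)]
    return state in ACCEPTING

assert accepts("") and accepts("1100") and not accepts("100")
```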
In S1S, a (unary) predicate on sets is (parameter-free) definable iff it is an ω-regular language. For S2S, for formulas that use their free variables only on strings not containing a 1, the expressiveness is the same as for S1S. For every S2S formula φ(S1,...,Sk) (with k free variables) and finite tree of binary strings T, φ(S1∩T,...,Sk∩T) can be computed in time linear in |T| (see Courcelle's theorem), but as noted above, the overhead can be iterated exponential in the formula size (more precisely, the time is $O(|T|k)+2_{O(|\phi |)}^{2}$). For S1S, every formula is equivalent to a Δ11 formula, and to a boolean combination of Π02 arithmetic formulas. Moreover, every S1S formula is equivalent to acceptance by a corresponding ω-automaton of the parameters of the formula. The automaton can be a deterministic parity automaton: A parity automaton has an integer priority for each state, and accepts iff the highest priority seen infinitely often is odd (alternatively, even). For S2S, using tree automata (below), every formula is equivalent to a Δ12 formula. Moreover, every S2S formula is equivalent to a formula with just four quantifiers, ∃S∀T∃s∀t ... (assuming that our formalization has both the prefix relation and the successor functions). For S1S, three quantifiers (∃S∀s∃t) suffice, and for WS2S and WS1S, two quantifiers (∃S∀t) suffice; the prefix relation is not needed here. However, with free second order variables, not every S2S formula can be expressed in second order arithmetic through just Π11 transfinite recursion (see reverse mathematics). RCA0 + (schema) {τ: τ is a true S2S sentence} is equivalent to (schema) {τ: τ is a Π13 sentence provable in Π12-CA0}.[13][14] Over a base theory, the schemas are equivalent to (schema over k) ∀S⊆ω ∃α1<...<αk Lα1(S) ≺Σ1 ... ≺Σ1 Lαk(S) where L is the constructible universe (see also large countable ordinal). Due to limited induction, Π12-CA0 does not prove that all true (under the standard decision procedure) Π13 S2S statements are actually true even though each such sentence is provable in Π12-CA0. Moreover, given sets of binary strings S and T, the following are equivalent: (1) T is S2S definable from some set of binary strings polynomial time computable from S. (2) T can be computed from the set of winning positions for some game whose payoff is a finite boolean combination of Π02(S) sets. (3) T can be defined from S in arithmetic μ-calculus (arithmetic formulas + least fixed-point logic). (4) T is in the least β-model (i.e. an ω-model whose set-theoretic counterpart is transitive) containing S and satisfying all Π13 consequences of Π12-CA0. Proof sketch: See [13] and [15], but briefly the proof runs as follows. (1)⇒(2) follows from the below decidability proof using tree automata. (2)⇒(1) follows by converting the game associated with a binary string s into a tree parity game with a fixed number of priorities and then merging these trees into a single tree (the binary trees can be merged here using s,t ⇒ s01t′ where t′ doubles every character of t). The below determinacy proof (in the decidability section) also leads to (2)⇒(3). Π12-CA0 proves Δ12 monotonic induction, so (3)⇒(4). For (3)⇒(2), define a game where player 1 attempts to show that the desired element s is inside the least fixed point. Player 1 gradually labels elements including s with rational numbers, intended to correspond to ordinal stages of the monotonic induction (any countable ordinal is embeddable into $\mathbb {Q} $). 
Player 2 plays elements with strictly descending labels (and he can pass) and wins iff the sequence is infinite or he wins the last auxiliary game. In the auxiliary game, player 1 attempts to show that the last element picked by player 2 is a valid inductive step using elements with smaller labels. Now, if s is not in the least fixed-point, then the set of labels is ill-founded, or an inductive step is wrong, and (using monotonicity) this can be picked up by player 2. (If player 1 plays a smaller label outside the least fixed point, player 2 can use it (abandoning the auxiliary game), otherwise (using monotonicity) player 2 can use an auxiliary game strategy that assumes that the set of smaller labels in the original game will equal the least fixed point.) For (4)⇒(3), we use monotonic induction to build an initial segment of the constructible hierarchy above a given real number r. This works as long as each ordinal α is identified by some appropriately expressible property of α so that we can encode α by a natural number and continue. Now, suppose that we built Lα(r) and the inductive step (which uses Lα(r) as a parameter) allows examining Lβ(r). If a new Σ1(L(r),∈,r) fact appears between α and β, we can use it to label α and continue. Otherwise, we get the above Σ1 elementary chains whose length corresponds to the nesting depth of the monotonic inductive definitions. For the equivalence of RCA0+S2S with {Π13 φ: Π12-CA0⊢φ}, for each k the positional determinacy with k priorities is provable in Π12-CA0, while the rest (in terms of proving S2S sentences) can be done in a weak base theory. Conversely, RCA0+S2S yields a determinacy schema that gives existence of least fixed points (by a modification of the above (3)⇒(2) and even without requiring positionality; see the reference). In turn, their existence (using (4)⇒(3)) gives the desired Σ1 elementary chains. Models of S1S and S2S In addition to the standard model (which is the unique MSO model for S1S and S2S), there are other models for S1S and S2S, which use some rather than all subsets of the domain (see Henkin semantics). For every S⊆ω, sets recursive in S form an elementary submodel of the standard S1S model, and the same holds for every non-empty collection of subsets of ω closed under Turing join and Turing reducibility.[16] This follows from relative recursiveness of S1S definable sets plus uniformization: - φ(s) (as a function of s) can be computed from the parameters of φ and the values of φ(s′) for a finite set of s′ (with its size bounded by the number of states in a deterministic automaton for φ). - A witness for ∃S φ(S) can be obtained by choosing k and a finite fragment S′ of S, and repeatedly extending S′ such that the highest priority during each extension is k and that the extension can be completed into S satisfying φ without hitting priorities above k (these are permitted only for the initial S′). Also, by using lexicographically least shortest choices, there is an S1S formula φ' such that φ'⇒φ and ∃S φ(S) ⇔ ∃!S φ'(S) (i.e. uniformization; φ may have free variables not shown; φ' depends only on the formula φ). The minimal model of S2S consists of all regular languages on binary strings. It is an elementary submodel of the standard model, so if an S2S parameter-free definable set of trees is non-empty, then it includes a regular tree. A regular language can also be treated as a regular {0,1}-labeled complete infinite binary tree (identified with predicates on strings). 
A labeled tree is regular if it can be obtained by unrolling a vertex-labeled finite directed graph with an initial vertex; a (directed) cycle in the graph reachable from the initial vertex gives an infinite tree. With this interpretation and encoding of regular trees, every true S2S sentence may already be provable in elementary function arithmetic. It is non-regular trees that may require impredicative comprehension for determinacy (below). There are nonregular (i.e. containing nonregular languages) models of S1S (and presumably S2S) (both with and without standard first order part) with a computable satisfaction relation. However, the set of recursive sets of strings does not form a model of S2S due to failure of comprehension and determinacy. Decidability of S2S The proof of decidability is by showing that every formula is equivalent to acceptance by a nondeterministic tree automaton (see tree automaton and infinite-tree automaton). An infinite tree automaton starts at the root and moves up the tree, and accepts iff every tree branch accepts. A nondeterministic tree automaton accepts iff player 1 has a winning strategy, where player 1 chooses an allowed (for the current state and input) pair of new states (p0,p1), while player 2 chooses the branch, with the transition to p0 if 0 is chosen and p1 otherwise. For a co-nondeterministic automaton, all choices are by player 2, while for a deterministic automaton, (p0,p1) is fixed by the state and input; and for a game automaton, the two players play a finite game to set the branch and the state. Acceptance on a branch is based on states seen infinitely often on the branch; parity automata are sufficiently general here. For converting the formulas to automata, the base case is easy, and nondeterminism gives closure under existential quantifiers, so we only need closure under complementation. Using positional determinacy of parity games (which is where we need impredicative comprehension), non-existence of a player 1 winning strategy gives a player 2 winning strategy S, with a co-nondeterministic tree automaton verifying its soundness. The automaton can then be made deterministic (which is where we get an exponential increase in the number of states), and thus existence of S corresponds to acceptance by a non-deterministic automaton. Determinacy: Provably in ZFC, Borel games are determined, and the determinacy proof for boolean combinations of Π02 formulas (with arbitrary real parameters) also gives a strategy here that depends only on the current state and the position in the tree. The proof is by induction on the number of priorities. Assume that there are k priorities, with the highest priority being k, and that k has the right parity for player 2. For each position (tree position + state) assign the least ordinal α (if any) such that player 1 has a winning strategy with all entered (after one or more steps) priority k positions (if any) having labels <α. Player 1 can win if the initial position is labeled: Each time a priority k state is reached, the ordinal is decreased, and moreover in between the decreases, player 1 can use a strategy for k-1 priorities. Player 2 can win if the position is unlabeled: By the determinacy for k-1 priorities, player 2 has a strategy that wins or enters an unlabeled priority k state, in which case player 2 can again use that strategy. 
To make the strategy positional (by induction on k), when playing the auxiliary game, if two chosen positional strategies lead to the same position, continue with the strategy with the lower α, or, for the same α (or for player 2), the lower initial position (so we can switch a strategy finitely many times). Automata determinization: For determinization of co-nondeterministic tree automata, it suffices to consider ω-automata, treating branch choice as input, determinizing the automaton, and using it for the deterministic tree automaton. Note that this does not work for nondeterministic tree automata as the determinization for going left (i.e. s→s0) can depend on the contents of the right branch; in contrast to nondeterministic ones, deterministic tree automata cannot even accept precisely the nonempty sets. To determinize a nondeterministic ω-automaton M (for co-nondeterministic, take the complement, noting that deterministic parity automata are closed under complements), we can use a Safra tree with each node storing a set of possible states of M, and node creation and deletion based on reaching high priority states. For details, see [17] or [18]. Decidability of acceptance: Acceptance by a nondeterministic parity automaton of the empty tree corresponds to a parity game on a finite graph G. Using the above positional (also called memoryless) determinacy, this can be simulated by a finite game that ends when we reach a loop, with the winning condition based on the highest priority state in the loop. A clever optimization gives a quasipolynomial time algorithm,[19] which is polynomial time when the number of priorities is small enough (which occurs commonly in practice). Theory of trees: For decidability of MSO logic on trees (i.e. graphs that are trees), even with finiteness and modular counting quantifiers for first order objects, we can embed countable trees into the complete binary tree and use the decidability of S2S. For example, for a node s, we can represent its children by s1, s01, s001, and so on. For uncountable trees, we can use the Shelah–Stup theorem (below). We can also add a predicate for a set of first order objects having cardinality ω1, and the predicate for cardinality ω2, and so on for infinite regular cardinals. Graphs of bounded tree width are interpretable using trees, and without predicates over edges this also applies to graphs of bounded clique width. Combining S2S with other decidable theories Tree extensions of monadic theories: By the Shelah–Stup theorem,[20][21] if a monadic relational model M is decidable, then so is its tree counterpart. For example, (modulo choice of formalization) S2S is the tree counterpart of {0,1}. In the tree counterpart, the first order objects are finite sequences of elements of M ordered by extension, and an M-relation Pi is mapped to Pi'(vd1,...,vdk) ⇔ Pi(d1,...,dk) with Pi' false otherwise (dj∈M, and v is a (possibly empty) sequence of elements of M). The proof is similar to the S2S decidability proof. At each step, a (nondeterministic) automaton gets a tuple of M objects (possibly second order) as input, and an M formula determines which state transitions are permitted. Player 1 (as above) chooses a mapping child⇒state that is permitted by the formula (given the current state), and player 2 chooses the child (of the node) to continue. 
To witness rejection by a non-deterministic automaton, for each (node, state) pick a set of (child, state) pairs such that for every choice, at least one of the pairs is hit, and such that all the resulting paths lead to rejection. Combining a monadic theory with a first order theory: The Feferman–Vaught theorem extends/applies as follows. If M is an MSO model and N is a first order model, then M remains decidable relative to a (Theory(M), Theory(N)) oracle even if M is augmented with all functions M→N where M is identified with its first order objects, and for each s∈M we use a disjoint copy of N, with the language modified accordingly. For example, if N is ($\mathbb {R} $,0,+,⋅), we can state ∀(function f) ∀s ∃r∈Ns f(s) +Ns r = 0Ns. If M is S2S (or more generally, the tree counterpart of some monadic model), the automata can now use N-formulas, and thereby convert f:M→Nk into a tuple of M sets. Disjointness is necessary as otherwise for every infinite N with equality, the extended S2S or just WS1S is undecidable. Also, for a (possibly incomplete) theory T, the theory TM of M-products of T is decidable relative to a (Theory(M), T) oracle, where a model of TM uses an arbitrary disjoint model Ns of T for each s∈M (as above, M is an MSO model; Theory(Ns) may depend on s). The proof is by induction on formula complexity. Let vs be the list of free Ns variables, including f(s) if function f is free. By induction, one shows that vs is only used through a finite set of N-formulas with |vs| free variables. Thus, we can quantify over all possible outcomes by using N (or T) to answer what is possible, and given a list of possibilities (or constraints), formulate a corresponding sentence in M. Coding into extensions of S2S: Every decidable predicate on strings can be encoded (with linear time encoding and decoding) for decidability of S2S (even with the extensions above) together with the encoded predicate. Proof: Given a nondeterministic infinite tree automaton, we can partition the set of finite binary labeled trees (having labels over which the automaton can operate) into finitely many classes such that if a complete infinite binary tree can be composed of same-class trees, acceptance depends only on the class and the initial state (i.e. the state in which the automaton enters the tree). (Note a rough similarity with the pumping lemma.) For example (for a parity automaton), assign trees to the same class if they have the same predicate that, given initial_state and a set Q of (state, highest_priority_reached) pairs, returns whether player 1 (i.e. nondeterminism) can simultaneously force all branches to correspond to elements of Q. Now, for each k, pick a finite set of trees (suitable for coding) that belong to the same class for automata 1-k, with the choice of class consistent across k. To encode a predicate, encode some bits using k=1, then more bits using k=2, and so on. References 1. Rabin, Michael (1969). "Decidability of second-order theories and automata on infinite trees" (PDF). Transactions of the American Mathematical Society. 141. 2. Janin, David; Lenzi, Giacomo. On the Structure of the Monadic Logic of the Binary Tree. MFCS 1999. doi:10.1007/3-540-48340-3_28. 3. Siefkes, Dirk (1971), An axiom system for the weak monadic second order theory of two successors 4. Riba, Colin (2012). A model theoretic proof of completeness of an axiomatization of monadic second-order logic on infinite words (PDF). TCS 2012. doi:10.1007/978-3-642-33475-7_22. 5. 
Carayol, Arnaud; Löding, Christof (2007), "MSO on the Infinite Binary Tree: Choice and Order" (PDF), Computer Science Logic, Lecture Notes in Computer Science, vol. 4646, pp. 161–176, doi:10.1007/978-3-540-74915-8_15, ISBN 978-3-540-74914-1, S2CID 14580598 6. Das, Anupam; Riba, Colin (2020). "A functional (monadic) second-order theory of infinite trees". Logical Methods in Computer Science. 16 (4). arXiv:1903.05878. doi:10.23638/LMCS-16(4:6)2020. (A preliminary 2015 version erroneously claimed proof of completeness without the determinacy schema.) 7. Gurevich, Yuri; Shelah, Saharon (1984). "The monadic theory and the "next world"". Israel Journal of Mathematics. 49 (1–3): 55–68. doi:10.1007/BF02760646. S2CID 15807840. 8. "What is the Turing degree of the monadic theory of the real line?". MathOverflow. Retrieved November 14, 2022. 9. Gurevich, Yuri; Magidor, Menachem; Shelah, Saharon (1993). "The monadic theory of ω2" (PDF). The Journal of Symbolic Logic. 48 (2): 387–398. doi:10.2307/2273556. JSTOR 2273556. S2CID 120260712. 10. Neeman, Itay (2008), "Monadic definability of ordinals" (PDF), Computational Prospects of Infinity, Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore, vol. 15, pp. 193–205, doi:10.1142/9789812796554_0010, ISBN 978-981-279-654-7 11. Bojańczyk, Mikołaj; Parys, Paweł; Toruńczyk, Szymon (2015), The MSO+U theory of (N, <) is undecidable, arXiv:1502.04578 12. Bojańczyk, Mikołaj (2014), Weak MSO+U with path quantifiers over infinite trees, arXiv:1404.7278 13. Kołodziejczyk, Leszek; Michalewski, Henryk (2016). How unprovable is Rabin's decidability theorem?. LICS '16: 31st Annual ACM/IEEE Symposium on Logic in Computer Science. arXiv:1508.06780. 14. Kołodziejczyk, Leszek (October 19, 2015). "Question on Decidability of S2S". FOM. 15. Heinatsch, Christoph; Möllerfeld, Michael (2010). "The determinacy strength of Π12-comprehension" (PDF). Annals of Pure and Applied Logic. 161 (12): 1462–1470. doi:10.1016/j.apal.2010.04.012. 16. Kołodziejczyk, Leszek; Michalewski, Henryk; Pradic, Pierre; Skrzypczak, Michał (2019). "The logical strength of Büchi's decidability theorem". Logical Methods in Computer Science. 15 (2): 16:1–16:31. 17. Piterman, Nir (2006). From Nondeterministic Buchi and Streett Automata to Deterministic Parity Automata. 21st Annual IEEE Symposium on Logic in Computer Science (LICS'06). pp. 255–264. arXiv:0705.2205. doi:10.1109/LICS.2006.28. 18. Löding, Christof; Pirogov, Anton. Determinization of Büchi Automata: Unifying the Approaches of Safra and Muller-Schupp. ICALP 2019. arXiv:1902.02139. 19. Calude, Cristian; Jain, Sanjay; Khoussainov, Bakhadyr; Li, Wei; Stephan, Frank. Deciding parity games in quasipolynomial time (PDF). STOC 2017. 20. Shelah, Saharon (Nov 1975). "Monadic theory of order" (PDF). Annals of Mathematics. 102 (3): 379–419. doi:10.2307/1971037. JSTOR 1971037. 21. "The generalization of Shelah–Stup theorem" (PDF). Retrieved November 14, 2022. Additional reference: Weyer, Mark (2002). "Decidability of S1S and S2S". Automata, Logics, and Infinite Games. Lecture Notes in Computer Science. Vol. 2500. Springer. pp. 207–230. doi:10.1007/3-540-36387-4_12. ISBN 978-3-540-00388-5.
Interior algebra In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and ordinary propositional logic. Interior algebras form a variety of modal algebras. Definition An interior algebra is an algebraic structure with the signature ⟨S, ·, +, ′, 0, 1, I⟩ where ⟨S, ·, +, ′, 0, 1⟩ is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities: 1. xI ≤ x 2. xII = xI 3. (xy)I = xIyI 4. 1I = 1 xI is called the interior of x. The dual of the interior operator is the closure operator C defined by xC = ((x′)I)′. xC is called the closure of x. By the principle of duality, the closure operator satisfies the identities: 1. xC ≥ x 2. xCC = xC 3. (x + y)C = xC + yC 4. 0C = 0 If the closure operator is taken as primitive, the interior operator can be defined as xI = ((x′)C)′. Thus the theory of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic instances of "Boolean algebras with operators." The early literature on this subject (mainly Polish topology) invoked closure operators, but the interior operator formulation eventually became the norm following the work of Wim Blok. Open and closed elements Elements of an interior algebra satisfying the condition xI = x are called open. The complements of open elements are called closed and are characterized by the condition xC = x. An interior of an element is always open and the closure of an element is always closed. Interiors of closed elements are called regular open and closures of open elements are called regular closed. Elements which are both open and closed are called clopen. 0 and 1 are clopen. An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful additional structure. A special case is the class of trivial interior algebras which are the single element interior algebras characterized by the identity 0 = 1. Morphisms of interior algebras Homomorphisms Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B, a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying Boolean algebras of A and B, that also preserves interiors and closures. Hence: • f(xI) = f(x)I; • f(xC) = f(x)C. Topomorphisms Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f : A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that also preserves the open and closed elements of A. Hence: • If x is open in A, then f(x) is open in B; • If x is closed in A, then f(x) is closed in B. (Such morphisms have also been called stable homomorphisms and closure algebra semi-homomorphisms.) Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homomorphism. 
Boolean homomorphisms Early research often considered mappings between interior algebras which were homomorphisms of the underlying Boolean algebras but which did not necessarily preserve the interior or closure operator. Such mappings were called Boolean homomorphisms. (The terms closure homomorphism or topological homomorphism were used in the case where these were preserved, but this terminology is now redundant as the standard definition of a homomorphism in universal algebra requires that it preserves all operations.) Applications involving countably complete interior algebras (in which countable meets and joins always exist, also called σ-complete) typically made use of countably complete Boolean homomorphisms also called Boolean σ-homomorphisms - these preserve countable meets and joins. Continuous morphisms The earliest generalization of continuity to interior algebras was Sikorski's based on the inverse image map of a continuous map. This is a Boolean homomorphism, preserves unions of sequences and includes the closure of an inverse image in the inverse image of the closure. Sikorski thus defined a continuous homomorphism as a Boolean σ-homomorphism f between two σ-complete interior algebras such that f(x)C ≤ f(xC). This definition had several difficulties: The construction acts contravariantly producing a dual of a continuous map rather than a generalization. On the one hand σ-completeness is too weak to characterize inverse image maps (completeness is required), on the other hand it is too restrictive for a generalization. (Sikorski remarked on using non-σ-complete homomorphisms but included σ-completeness in his axioms for closure algebras.) Later J. Schmid defined a continuous homomorphism or continuous morphism for interior algebras as a Boolean homomorphism f between two interior algebras satisfying f(xC) ≤ f(x)C. This generalizes the forward image map of a continuous map - the image of a closure is contained in the closure of the image. This construction is covariant but not suitable for category theoretic applications as it only allows construction of continuous morphisms from continuous maps in the case of bijections. (C. Naturman returned to Sikorski's approach while dropping σ-completeness to produce topomorphisms as defined above. In this terminology, Sikorski's original "continuous homomorphisms" are σ-complete topomorphisms between σ-complete interior algebras.) Relationships to other areas of mathematics Topology Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X: ⟨P(X), ∩, ∪, ′, ø, X⟩ and extend it to an interior algebra A(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩, where I is the usual topological interior operator. For all S ⊆ X it is defined by SI = ∪ {O | O ⊆ S and O is open in X} For all S ⊆ X the corresponding closure operator is given by SC = ∩ {C | S ⊆ C and C is closed in X} SI is the largest open subset of S and SC is the smallest closed superset of S in X. The open, closed, regular open, regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed and clopen subsets of X respectively in the usual topological sense. Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological space X. Moreover, every interior algebra can be embedded in such an interior algebra giving a representation of an interior algebra as a topological field of sets. 
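As a small concrete check of the construction A(X) and of the identities from the definition, the following Python sketch (an illustration only; the finite topology and the names are chosen here) builds the interior and closure operators from a finite topology and verifies the interior axioms together with the duality xC = ((x′)I)′.

```python
from itertools import combinations

X = frozenset({1, 2, 3})
# A chain topology on X: contains X and the empty set, closed under unions and intersections.
OPENS = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def interior(s):
    # S^I is the largest open subset of S: the union of all open sets contained in S.
    return frozenset().union(*(o for o in OPENS if o <= s))

def closure(s):
    # S^C is the smallest closed superset of S; closed sets are complements of open sets.
    return frozenset.intersection(*(X - o for o in OPENS if s <= X - o))

for a in subsets(X):
    assert interior(a) <= a                                  # x^I <= x
    assert interior(interior(a)) == interior(a)              # x^II = x^I
    assert closure(a) == X - interior(X - a)                 # x^C = ((x')^I)'
    for b in subsets(X):
        assert interior(a & b) == interior(a) & interior(b)  # (xy)^I = x^I y^I
assert interior(X) == X                                      # 1^I = 1
print("The interior axioms hold in A(X) for this finite topology.")
```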
The properties of the structure A(X) are the very motivation for the definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been called topo-Boolean algebras or topological Boolean algebras. Given a continuous map between two topological spaces f : X → Y we can define a complete topomorphism A(f) : A(Y) → A(X) by A(f)(S) = f−1[S] for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete atomic interior algebras and complete topomorphisms then Top and Cit are dually isomorphic and A : Top → Cit is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a continuous open map. Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in particular connectedness properties correspond to irreducibility properties: • X is empty if and only if A(X) is trivial • X is indiscrete if and only if A(X) is simple • X is discrete if and only if A(X) is Boolean • X is almost discrete if and only if A(X) is semisimple • X is finitely generated (Alexandrov) if and only if A(X) is operator complete i.e. its interior and closure operators distribute over arbitrary meets and joins respectively • X is connected if and only if A(X) is directly indecomposable • X is ultraconnected if and only if A(X) is finitely subdirectly irreducible • X is compact ultra-connected if and only if A(X) is subdirectly irreducible Generalized topology The modern formulation of topological spaces in terms of topologies of open subsets, motivates an alternative formulation of interior algebras: A generalized topological space is an algebraic structure of the form ⟨B, ·, +, ′, 0, 1, T⟩ where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B) such that: 1. 0,1 ∈ T 2. T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T) 3. T is closed under finite meets 4. For every element b of B, the join Σ{a ∈T | a ≤ b} exists T is said to be a generalized topology in the Boolean algebra. Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological space ⟨B, ·, +, ′, 0, 1, T⟩ we can define an interior operator on B by bI = Σ{a ∈T | a ≤ b} thereby producing an interior algebra whose open elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras. Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomorphisms of Boolean algebras with added relations, so that standard results from universal algebra apply. Neighbourhood functions and neighbourhood lattices The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra is said to be a neighbourhood of an element x if x ≤ yI. The set of neighbourhoods of x is denoted by N(x) and forms a filter. This leads to another formulation of interior algebras: A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such that: 1. For all x ∈ B, max{y ∈ B | x ∈ N(y)} exists 2. For all x,y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z). 
The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean algebra with underlying set B, we can define an interior operator by xI = max{y ∈ B | x ∈ N(y)} thereby obtaining an interior algebra. $N(x)$ will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior algebras are equivalent to Boolean algebras with specified neighbourhood functions. In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms of open elements x ∈ N(y) if and only if there is an open element z such that y ≤ z ≤ x. Neighbourhood functions may be defined more generally on (meet)-semilattices producing the structures known as neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra. Modal logic Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum–Tarski algebra: L(M) = ⟨M / ~, ∧, ∨, ¬, F, T, □⟩ where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent in M, and M / ~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to ◊ (possibly). This construction is a special case of a more general result for modal algebras and modal logic. The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed elements correspond to those that are only false if they are necessarily false. Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the logician C. I. Lewis, who first proposed the modal logics S4 and S5. Preorders Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets on a set with a single binary relation, called a modal frame. The modal frames corresponding to interior algebras are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal logic. Given a preordered set X = ⟨X, «⟩ we can construct an interior algebra B(X) = ⟨P(X), ∩, ∪, ′, ø, X, I⟩ from the power set Boolean algebra of X where the interior operator I is given by SI = {x ∈ X | for all y ∈ X, x « y implies y ∈ S} for all S ⊆ X. The corresponding closure operator is given by SC = {x ∈ X | there exists a y ∈ S with x « y} for all S ⊆ X. SI is the set of all worlds inaccessible from worlds outside S, and SC is the set of all worlds accessible from some world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set X giving the above-mentioned representation as a field of sets (a preorder field). This construction and representation theorem is a special case of the more general result for modal algebras and modal frames. 
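The operators of B(X) can likewise be computed directly from a preorder. The following Python sketch (illustrative; the worlds and the relation are chosen here) implements SI and SC exactly as in the formulas above for a small preorder and checks that they satisfy the interior algebra identities; reflexivity gives SI ⊆ S ⊆ SC and transitivity gives idempotence of the interior.

```python
from itertools import combinations

WORLDS = ["a", "b", "c"]
# A preorder << on WORLDS (reflexive and transitive): a << b << c plus the reflexive pairs.
REL = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("b", "c"), ("a", "c")}

def interior(s):
    # S^I = {x | for all y, x << y implies y in S}.
    return frozenset(x for x in WORLDS if all(y in s for y in WORLDS if (x, y) in REL))

def closure(s):
    # S^C = {x | there exists y in S with x << y}.
    return frozenset(x for x in WORLDS if any((x, y) in REL for y in s))

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

full = frozenset(WORLDS)
for s in subsets(WORLDS):
    assert interior(s) <= s and closure(s) >= s
    assert interior(interior(s)) == interior(s)
    assert closure(s) == full - interior(full - s)           # duality
    for t in subsets(WORLDS):
        assert interior(s & t) == interior(s) & interior(t)
assert interior(full) == full
print("B(X) on this preorder satisfies the interior algebra identities.")
```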
In this regard, interior algebras are particularly interesting because of their connection to topology. The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological space T(X) whose open sets are: {O ⊆ X | for all x ∈ O and all y ∈ X, x « y implies y ∈ O}. The corresponding closed sets are: {C ⊆ X | for all x ∈ C and all y ∈ X, y « x implies y ∈ C}. In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)). Monadic Boolean algebras Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the variety of interior algebras satisfying the identity xIC = xI. In other words, they are precisely the interior algebras in which every open element is closed, or equivalently, in which every closed element is open. Moreover, such interior algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal logic S5, and so have also been called S5 algebras. In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide an algebraic description) and S5 where the modal operators □ (necessarily) and ◊ (possibly) can be interpreted in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an accessibility relation. Heyting algebras The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra. The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra, and the latter may be chosen to be an interior algebra generated by its open elements - such interior algebras correspond one to one with Heyting algebras (up to isomorphism), being the free Boolean extensions of the latter. Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4 theories closed under necessity. The one to one correspondence between Heyting algebras and interior algebras generated by their open elements reflects the correspondence between extensions of intuitionistic logic and normal extensions of the modal logic S4.Grz. Derivative algebras Given an interior algebra A, the closure operator obeys the axioms of the derivative operator, D. 
Hence we can form a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative operator. Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative algebras satisfying the identity xD ≥ x. Derivative algebras provide the appropriate algebraic semantics for the modal logic WK4. Hence derivative algebras stand to topological derived sets and WK4 as interior/closure algebras stand to topological interiors/closures and S4. Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying Boolean algebra as V, with interior and closure operators defined by xI = x·((x′)D)′ and xC = x + xD, respectively. Thus every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V. Stone duality and representation for interior algebras Stone duality provides a category theoretic duality between Boolean algebras and a class of topological spaces known as Boolean spaces. Building on nascent ideas of relational semantics (later formalized by Kripke) and a result of R. S. Pierce, Jónsson, Tarski and G. Hansoul extended Stone duality to Boolean algebras with operators by equipping Boolean spaces with relations that correspond to the operators via a power set construction. In the case of interior algebras the interior (or closure) operator corresponds to a pre-order on the Boolean space. Homomorphisms between interior algebras correspond to a class of continuous maps between the Boolean spaces known as pseudo-epimorphisms or p-morphisms for short. This generalization of Stone duality to interior algebras based on the Jónsson–Tarski representation was investigated by Leo Esakia and is also known as the Esakia duality for S4-algebras (interior algebras) and is closely related to the Esakia duality for Heyting algebras. Whereas the Jónsson–Tarski generalization of Stone duality applies to Boolean algebras with operators in general, the connection between interior algebras and topology allows for another method of generalizing Stone duality that is unique to interior algebras. An intermediate step in the development of Stone duality is Stone's representation theorem which represents a Boolean algebra as a field of sets. The Stone topology of the corresponding Boolean space is then generated using the field of sets as a topological basis. Building on the topological semantics introduced by Tang Tsao-Chen for Lewis's modal logic, McKinsey and Tarski showed that by generating a topology equivalent to using only the complexes that correspond to open elements as a basis, a representation of an interior algebra is obtained as a topological field of sets - a field of sets on a topological space that is closed with respect to taking interiors or closures. By equipping topological fields of sets with appropriate morphisms known as field maps, C. Naturman showed that this approach can be formalized as a category theoretic Stone duality in which the usual Stone duality for Boolean algebras corresponds to the case of interior algebras having redundant interior operator (Boolean interior algebras). 
The pre-order obtained in the Jónsson–Tarski approach corresponds to the accessibility relation in the Kripke semantics for an S4 theory, while the intermediate field of sets corresponds to a representation of the Lindenbaum–Tarski algebra for the theory using the sets of possible worlds in the Kripke semantics in which sentences of the theory hold. Moving from the field of sets to a Boolean space somewhat obfuscates this connection. By treating fields of sets on pre-orders as a category in its own right this deep connection can be formulated as a category theoretic duality that generalizes Stone representation without topology. R. Goldblatt had shown that with restrictions to appropriate homomorphisms such a duality can be formulated for arbitrary modal algebras and modal frames. Naturman showed that in the case of interior algebras this duality applies to more general topomorphisms and can be factored via a category theoretic functor through the duality with topological fields of sets. The latter represent the Lindenbaum–Tarski algebra using sets of points satisfying sentences of the S4 theory in the topological semantics. The pre-order can be obtained as the specialization pre-order of the McKinsey-Tarski topology. The Esakia duality can be recovered via a functor that replaces the field of sets with the Boolean space it generates. Via a functor that instead replaces the pre-order with its corresponding Alexandrov topology, an alternative representation of the interior algebra as a field of sets is obtained where the topology is the Alexandrov bico-reflection of the McKinsey-Tarski topology. The approach of formulating a topological duality for interior algebras using both the Stone topology of the Jónsson–Tarski approach and the Alexandrov topology of the pre-order to form a bi-topological space has been investigated by G. Bezhanishvili, R.Mines, and P.J. Morandi. The McKinsey-Tarski topology of an interior algebra is the intersection of the former two topologies. Metamathematics Grzegorczyk proved the elementary theory of closure algebras undecidable.[1][2] Naturman demonstrated that the theory is hereditarily undecidable (all its subtheories are undecidable) and demonstrated an infinite chain of elementary classes of interior algebras with hereditarily undecidable theories. Notes 1. Andrzej Grzegorczyk (1951), "Undecidability of some topological theories," Fundamenta Mathematicae 38: 137–52. 2. According to footnote 19 in McKinsey and Tarski, 1944, the result had been proved earlier by S. Jaskowski in 1939, but remained unpublished and not accessible in view of the present [at the time] war conditions. References • Blok, W.A., 1976, Varieties of interior algebras, Ph.D. thesis, University of Amsterdam. • Esakia, L., 2004, "Intuitionistic logic and modality via topology," Annals of Pure and Applied Logic 127: 155-70. • McKinsey, J.C.C. and Alfred Tarski, 1944, "The Algebra of Topology," Annals of Mathematics 45: 141-91. • Naturman, C.A., 1991, Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics. • Bezhanishvili, G., Mines, R. and Morandi, P.J., 2008, Topo-canonical completions of closure algebras and Heyting algebras, Algebra Universalis 58: 1-34. • Schmid, J., 1973, On the compactification of closure algebras, Fundamenta Mathematicae 79: 33-48 • Sikorski R., 1955, Closure homomorphisms and interior mappings, Fundamenta Mathematicae 41: 12-20
Symmetric group In abstract algebra, the symmetric group defined over any set is the group whose elements are all the bijections from the set to itself, and whose group operation is the composition of functions. In particular, the finite symmetric group $\mathrm {S} _{n}$ defined over a finite set of $n$ symbols consists of the permutations that can be performed on the $n$ symbols.[1] Since there are $n!$ ($n$ factorial) such permutation operations, the order (number of elements) of the symmetric group $\mathrm {S} _{n}$ is $n!$. Not to be confused with Symmetry group. Although symmetric groups can be defined on infinite sets, this article focuses on the finite symmetric groups: their applications, their elements, their conjugacy classes, a finite presentation, their subgroups, their automorphism groups, and their representation theory. For the remainder of this article, "symmetric group" will mean a symmetric group on a finite set. The symmetric group is important to diverse areas of mathematics such as Galois theory, invariant theory, the representation theory of Lie groups, and combinatorics. Cayley's theorem states that every group $G$ is isomorphic to a subgroup of the symmetric group on (the underlying set of) $G$. Definition and first properties The symmetric group on a finite set $X$ is the group whose elements are all bijective functions from $X$ to $X$ and whose group operation is that of function composition.[1] For finite sets, "permutations" and "bijective functions" refer to the same operation, namely rearrangement. The symmetric group of degree $n$ is the symmetric group on the set $X=\{1,2,\ldots ,n\}$. The symmetric group on a set $X$ is denoted in various ways, including $\mathrm {S} _{X}$, ${\mathfrak {S}}_{X}$, $\Sigma _{X}$, $X!$, and $\operatorname {Sym} (X)$.[1] If $X$ is the set $\{1,2,\ldots ,n\}$ then the name may be abbreviated to $\mathrm {S} _{n}$, ${\mathfrak {S}}_{n}$, $\Sigma _{n}$, or $\operatorname {Sym} (n)$.[1] Symmetric groups on infinite sets behave quite differently from symmetric groups on finite sets, and are discussed in (Scott 1987, Ch. 11), (Dixon & Mortimer 1996, Ch. 8), and (Cameron 1999).
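As a quick concrete check of this definition, the sketch below (in C; the array encoding of a bijection and the helper names are choices made for this example) enumerates the bijections of a three-element set, confirming that there are 3! = 6 of them, that composition stays inside the set, and that each element has an inverse.

#include <stdio.h>
#include <string.h>

#define N 3   /* degree: bijections of {0, 1, 2} */

typedef struct { int map[N]; } perm;

/* (f o g)(x) = f(g(x)) : function composition, the group operation */
static perm compose(perm f, perm g) {
    perm h;
    for (int x = 0; x < N; x++)
        h.map[x] = f.map[g.map[x]];
    return h;
}

static int same(perm a, perm b) {
    return memcmp(a.map, b.map, sizeof a.map) == 0;
}

int main(void) {
    perm elems[6];
    int count = 0;

    /* enumerate all maps {0,1,2} -> {0,1,2} and keep the bijective ones */
    for (int a = 0; a < N; a++)
        for (int b = 0; b < N; b++)
            for (int c = 0; c < N; c++) {
                if (a == b || b == c || a == c) continue;  /* not injective */
                perm p = {{a, b, c}};
                elems[count++] = p;
            }
    printf("|Sym({0,1,2})| = %d (3! = 6)\n", count);

    /* closure under composition and existence of inverses */
    perm id = {{0, 1, 2}};
    for (int i = 0; i < count; i++) {
        int has_inverse = 0;
        for (int j = 0; j < count; j++) {
            perm h = compose(elems[i], elems[j]);
            int inside = 0;
            for (int k = 0; k < count; k++)
                if (same(h, elems[k])) inside = 1;
            if (!inside) { printf("not closed under composition\n"); return 1; }
            if (same(h, id)) has_inverse = 1;
        }
        if (!has_inverse) { printf("an element has no inverse\n"); return 1; }
    }
    printf("closed under composition; every element has an inverse\n");
    return 0;
}

Associativity needs no separate check here, since the operation is composition of functions, which is always associative.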
The symmetric group on a set of $n$ elements has order $n!$ (the factorial of $n$).[2] It is abelian if and only if $n$ is less than or equal to 2.[3] For $n=0$ and $n=1$ (the empty set and the singleton set), the symmetric groups are trivial (they have order $0!=1!=1$). The group Sn is solvable if and only if $n\leq 4$. This is an essential part of the proof of the Abel–Ruffini theorem that shows that for every $n>4$ there are polynomials of degree $n$ which are not solvable by radicals, that is, the solutions cannot be expressed by performing a finite number of operations of addition, subtraction, multiplication, division and root extraction on the polynomial's coefficients. Applications The symmetric group on a set of size n is the Galois group of the general polynomial of degree n and plays an important role in Galois theory. In invariant theory, the symmetric group acts on the variables of a multi-variate function, and the functions left invariant are the so-called symmetric functions. In the representation theory of Lie groups, the representation theory of the symmetric group plays a fundamental role through the ideas of Schur functors. In the theory of Coxeter groups, the symmetric group is the Coxeter group of type An and occurs as the Weyl group of the general linear group. In combinatorics, the symmetric groups, their elements (permutations), and their representations provide a rich source of problems involving Young tableaux, plactic monoids, and the Bruhat order. Subgroups of symmetric groups are called permutation groups and are widely studied because of their importance in understanding group actions, homogeneous spaces, and automorphism groups of graphs, such as the Higman–Sims group and the Higman–Sims graph. Group properties and special elements The elements of the symmetric group on a set X are the permutations of X. Multiplication The group operation in a symmetric group is function composition, denoted by the symbol ∘ or simply by just a composition of the permutations. The composition f ∘ g of permutations f and g, pronounced "f of g", maps any element x of X to f(g(x)). Concretely, let (see permutation for an explanation of notation): $f=(1\ 3)(4\ 5)={\begin{pmatrix}1&2&3&4&5\\3&2&1&5&4\end{pmatrix}}$ $g=(1\ 2\ 5)(3\ 4)={\begin{pmatrix}1&2&3&4&5\\2&5&4&3&1\end{pmatrix}}.$ Applying f after g maps 1 first to 2 and then 2 to itself; 2 to 5 and then to 4; 3 to 4 and then to 5, and so on. So composing f and g gives $fg=f\circ g=(1\ 2\ 4)(3\ 5)={\begin{pmatrix}1&2&3&4&5\\2&4&5&1&3\end{pmatrix}}.$ A cycle of length L = k · m, taken to the kth power, will decompose into k cycles of length m: For example, (k = 2, m = 3), $(1~2~3~4~5~6)^{2}=(1~3~5)(2~4~6).$ Verification of group axioms To check that the symmetric group on a set X is indeed a group, it is necessary to verify the group axioms of closure, associativity, identity, and inverses.[4] 1. The operation of function composition is closed in the set of permutations of the given set X. 2. Function composition is always associative. 3. The trivial bijection that assigns each element of X to itself serves as an identity for the group. 4. Every bijection has an inverse function that undoes its action, and thus each element of a symmetric group does have an inverse which is a permutation too. Transpositions, sign, and the alternating group Main article: Transposition (mathematics) A transposition is a permutation which exchanges two elements and keeps all others fixed; for example (1 3) is a transposition. 
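The composition worked out above, and the cycle-power identity, are easy to recheck mechanically. A minimal sketch (in C, storing each permutation in one-line notation; the arrays and names are this example's own encoding):

#include <stdio.h>

/* one-line notation, 1-based: f[i] is the image of i (index 0 unused) */
static const int f[6] = {0, 3, 2, 1, 5, 4};           /* f = (1 3)(4 5)   */
static const int g[6] = {0, 2, 5, 4, 3, 1};           /* g = (1 2 5)(3 4) */
static const int fg_expected[6] = {0, 2, 4, 5, 1, 3}; /* (1 2 4)(3 5)     */

int main(void) {
    /* f o g : apply g first, then f */
    for (int x = 1; x <= 5; x++) {
        int y = f[g[x]];
        if (y != fg_expected[x]) { printf("mismatch at %d\n", x); return 1; }
    }
    printf("f o g = (1 2 4)(3 5) confirmed\n");

    /* squaring the 6-cycle (1 2 3 4 5 6): s sends i to i+1 cyclically */
    static const int s[7] = {0, 2, 3, 4, 5, 6, 1};
    for (int x = 1; x <= 6; x++)
        printf("%d -> %d\n", x, s[s[x]]);  /* the square traces (1 3 5)(2 4 6) */
    return 0;
}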
Every permutation can be written as a product of transpositions; for instance, the permutation g from above can be written as g = (1 2)(2 5)(3 4). Since g can be written as a product of an odd number of transpositions, it is then called an odd permutation, whereas f is an even permutation. The representation of a permutation as a product of transpositions is not unique; however, the number of transpositions needed to represent a given permutation is either always even or always odd. There are several short proofs of the invariance of this parity of a permutation. The product of two even permutations is even, the product of two odd permutations is even, and all other products are odd. Thus we can define the sign of a permutation: $\operatorname {sgn} f={\begin{cases}+1,&{\text{if }}f{\mbox{ is even}}\\-1,&{\text{if }}f{\text{ is odd}}.\end{cases}}$ With this definition, $\operatorname {sgn} \colon \mathrm {S} _{n}\rightarrow \{+1,-1\}\ $ is a group homomorphism ({+1, −1} is a group under multiplication, where +1 is e, the neutral element). The kernel of this homomorphism, that is, the set of all even permutations, is called the alternating group An. It is a normal subgroup of Sn, and for n ≥ 2 it has n!/2 elements. The group Sn is the semidirect product of An and any subgroup generated by a single transposition. Furthermore, every permutation can be written as a product of adjacent transpositions, that is, transpositions of the form (a a+1). For instance, the permutation g from above can also be written as g = (4 5)(3 4)(4 5)(1 2)(2 3)(3 4)(4 5). The sorting algorithm bubble sort is an application of this fact. The representation of a permutation as a product of adjacent transpositions is also not unique. Cycles A cycle of length k is a permutation f for which there exists an element x in {1, ..., n} such that x, f(x), f2(x), ..., fk(x) = x are the only elements moved by f; it conventionally is required that k ≥ 2 since with k = 1 the element x itself would not be moved either. The permutation h defined by $h={\begin{pmatrix}1&2&3&4&5\\4&2&1&3&5\end{pmatrix}}$ is a cycle of length three, since h(1) = 4, h(4) = 3 and h(3) = 1, leaving 2 and 5 untouched. We denote such a cycle by (1 4 3), but it could equally well be written (4 3 1) or (3 1 4) by starting at a different point. The order of a cycle is equal to its length. Cycles of length two are transpositions. Two cycles are disjoint if they have disjoint subsets of elements. Disjoint cycles commute: for example, in S6 there is the equality (4 1 3)(2 5 6) = (2 5 6)(4 1 3). Every element of Sn can be written as a product of disjoint cycles; this representation is unique up to the order of the factors, and the freedom present in representing each individual cycle by choosing its starting point. Cycles admit the following conjugation property with any permutation $\sigma $, this property is often used to obtain its generators and relations. $\sigma {\begin{pmatrix}a&b&c&\ldots \end{pmatrix}}\sigma ^{-1}={\begin{pmatrix}\sigma (a)&\sigma (b)&\sigma (c)&\ldots \end{pmatrix}}$ Special elements Certain elements of the symmetric group of {1, 2, ..., n} are of particular interest (these can be generalized to the symmetric group of any finite totally ordered set, but not to that of an unordered set). 
The order reversing permutation is the one given by: ${\begin{pmatrix}1&2&\cdots &n\\n&n-1&\cdots &1\end{pmatrix}}.$ This is the unique maximal element with respect to the Bruhat order and the longest element in the symmetric group with respect to generating set consisting of the adjacent transpositions (i i+1), 1 ≤ i ≤ n − 1. This is an involution, and consists of $\lfloor n/2\rfloor $ (non-adjacent) transpositions $(1\,n)(2\,n-1)\cdots ,{\text{ or }}\sum _{k=1}^{n-1}k={\frac {n(n-1)}{2}}{\text{ adjacent transpositions: }}$ $(n\,n-1)(n-1\,n-2)\cdots (2\,1)(n-1\,n-2)(n-2\,n-3)\cdots ,$ so it thus has sign: $\mathrm {sgn} (\rho _{n})=(-1)^{\lfloor n/2\rfloor }=(-1)^{n(n-1)/2}={\begin{cases}+1&n\equiv 0,1{\pmod {4}}\\-1&n\equiv 2,3{\pmod {4}}\end{cases}}$ which is 4-periodic in n. In S2n, the perfect shuffle is the permutation that splits the set into 2 piles and interleaves them. Its sign is also $(-1)^{\lfloor n/2\rfloor }.$ Note that the reverse on n elements and perfect shuffle on 2n elements have the same sign; these are important to the classification of Clifford algebras, which are 8-periodic. Conjugacy classes The conjugacy classes of Sn correspond to the cycle types of permutations; that is, two elements of Sn are conjugate in Sn if and only if they consist of the same number of disjoint cycles of the same lengths. For instance, in S5, (1 2 3)(4 5) and (1 4 3)(2 5) are conjugate; (1 2 3)(4 5) and (1 2)(4 5) are not. A conjugating element of Sn can be constructed in "two line notation" by placing the "cycle notations" of the two conjugate permutations on top of one another. Continuing the previous example, $k={\begin{pmatrix}1&2&3&4&5\\1&4&3&2&5\end{pmatrix}},$ which can be written as the product of cycles as (2 4). This permutation then relates (1 2 3)(4 5) and (1 4 3)(2 5) via conjugation, that is, $(2~4)\circ (1~2~3)(4~5)\circ (2~4)=(1~4~3)(2~5).$ It is clear that such a permutation is not unique. Conjugacy classes of Sn correspond to integer partitions of n: to the partition μ = (μ1, μ2, ..., μk) with $ n=\sum _{i=1}^{k}\mu _{i}$ and μ1 ≥ μ2 ≥ ... ≥ μk, is associated the set Cμ of permutations with cycles of lengths μ1, μ2, ..., μk. Then Cμ is a conjugacy class of Sn, whose elements are said to be of cycle-type $\mu $. Low degree groups See also: Representation theory of the symmetric group § Special cases The low-degree symmetric groups have simpler and exceptional structure, and often must be treated separately. S0 and S1 The symmetric groups on the empty set and the singleton set are trivial, which corresponds to 0! = 1! = 1. In this case the alternating group agrees with the symmetric group, rather than being an index 2 subgroup, and the sign map is trivial. In the case of S0, its only member is the empty function. S2 This group consists of exactly two elements: the identity and the permutation swapping the two points. It is a cyclic group and is thus abelian. In Galois theory, this corresponds to the fact that the quadratic formula gives a direct solution to the general quadratic polynomial after extracting only a single root. In invariant theory, the representation theory of the symmetric group on two points is quite simple and is seen as writing a function of two variables as a sum of its symmetric and anti-symmetric parts: Setting fs(x, y) = f(x, y) + f(y, x), and fa(x, y) = f(x, y) − f(y, x), one gets that 2⋅f = fs + fa. This process is known as symmetrization. S3 S3 is the first nonabelian symmetric group. 
This group is isomorphic to the dihedral group of order 6, the group of reflection and rotation symmetries of an equilateral triangle, since these symmetries permute the three vertices of the triangle. Cycles of length two correspond to reflections, and cycles of length three are rotations. In Galois theory, the sign map from S3 to S2 corresponds to the resolving quadratic for a cubic polynomial, as discovered by Gerolamo Cardano, while the A3 kernel corresponds to the use of the discrete Fourier transform of order 3 in the solution, in the form of Lagrange resolvents. S4 The group S4 is isomorphic to the group of proper rotations about opposite faces, opposite diagonals and opposite edges, 9, 8 and 6 permutations, of the cube.[5] Beyond the group A4, S4 has a Klein four-group V as a proper normal subgroup, namely the even transpositions {(1), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}, with quotient S3. In Galois theory, this map corresponds to the resolving cubic to a quartic polynomial, which allows the quartic to be solved by radicals, as established by Lodovico Ferrari. The Klein group can be understood in terms of the Lagrange resolvents of the quartic. The map from S4 to S3 also yields a 2-dimensional irreducible representation, which is an irreducible representation of a symmetric group of degree n of dimension below n − 1, which only occurs for n = 4. S5 S5 is the first non-solvable symmetric group. Along with the special linear group SL(2, 5) and the icosahedral group A5 × S2, S5 is one of the three non-solvable groups of order 120, up to isomorphism. S5 is the Galois group of the general quintic equation, and the fact that S5 is not a solvable group translates into the non-existence of a general formula to solve quintic polynomials by radicals. There is an exotic inclusion map S5 → S6 as a transitive subgroup; the obvious inclusion map Sn → Sn+1 fixes a point and thus is not transitive. This yields the outer automorphism of S6, discussed below, and corresponds to the resolvent sextic of a quintic. S6 Unlike all other symmetric groups, S6, has an outer automorphism. Using the language of Galois theory, this can also be understood in terms of Lagrange resolvents. The resolvent of a quintic is of degree 6—this corresponds to an exotic inclusion map S5 → S6 as a transitive subgroup (the obvious inclusion map Sn → Sn+1 fixes a point and thus is not transitive) and, while this map does not make the general quintic solvable, it yields the exotic outer automorphism of S6—see Automorphisms of the symmetric and alternating groups for details. Note that while A6 and A7 have an exceptional Schur multiplier (a triple cover) and that these extend to triple covers of S6 and S7, these do not correspond to exceptional Schur multipliers of the symmetric group. Maps between symmetric groups Other than the trivial map Sn → C1 ≅ S0 ≅ S1 and the sign map Sn → S2, the most notable homomorphisms between symmetric groups, in order of relative dimension, are: • S4 → S3 corresponding to the exceptional normal subgroup V < A4 < S4; • S6 → S6 (or rather, a class of such maps up to inner automorphism) corresponding to the outer automorphism of S6. • S5 → S6 as a transitive subgroup, yielding the outer automorphism of S6 as discussed above. There are also a host of other homomorphisms Sm → Sn where m < n. Relation with alternating group For n ≥ 5, the alternating group An is simple, and the induced quotient is the sign map: An → Sn → S2 which is split by taking a transposition of two elements. 
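The splitting of the sign map mentioned just above can be checked directly in a small case: the two-element subgroup generated by the transposition (1 2) maps onto {+1, −1}, and exactly one of σ and σ·(1 2) is even for every σ. A sketch (in C, for S4, computing the sign as the parity of the inversion count, which is one standard way to evaluate it; the encoding and names are this example's own):

#include <stdio.h>

#define N 4

/* sign of a permutation in one-line notation, via the parity of inversions */
static int sgn(const int *p) {
    int inv = 0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            if (p[i] > p[j]) inv++;
    return (inv % 2 == 0) ? +1 : -1;
}

int main(void) {
    int total = 0, even = 0;

    /* enumerate S_4 as all injective maps of {0,1,2,3} */
    for (int a = 0; a < N; a++)
      for (int b = 0; b < N; b++)
        for (int c = 0; c < N; c++)
          for (int d = 0; d < N; d++) {
              if (a==b || a==c || a==d || b==c || b==d || c==d) continue;
              int q[N] = {a, b, c, d};
              total++;
              if (sgn(q) == +1) even++;

              /* composing with the transposition (0 1) on the right swaps the
                 first two entries of the one-line form and must flip the sign,
                 so exactly one of q and q*(0 1) is even                        */
              int r[N] = {b, a, c, d};
              if (sgn(r) == sgn(q)) { printf("sign did not flip\n"); return 1; }
          }

    printf("|S_4| = %d, even permutations: %d (the index-2 subgroup A_4)\n",
           total, even);
    return 0;
}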
Thus Sn is the semidirect product An ⋊ S2, and has no other proper normal subgroups, as they would intersect An in either the identity (and thus themselves be the identity or a 2-element group, which is not normal), or in An (and thus themselves be An or Sn). Sn acts on its subgroup An by conjugation, and for n ≠ 6, Sn is the full automorphism group of An: Aut(An) ≅ Sn. Conjugation by even elements are inner automorphisms of An while the outer automorphism of An of order 2 corresponds to conjugation by an odd element. For n = 6, there is an exceptional outer automorphism of An so Sn is not the full automorphism group of An. Conversely, for n ≠ 6, Sn has no outer automorphisms, and for n ≠ 2 it has no center, so for n ≠ 2, 6 it is a complete group, as discussed in automorphism group, below. For n ≥ 5, Sn is an almost simple group, as it lies between the simple group An and its group of automorphisms. Sn can be embedded into An+2 by appending the transposition (n + 1, n + 2) to all odd permutations, while embedding into An+1 is impossible for n > 1. Generators and relations The symmetric group on n letters is generated by the adjacent transpositions $\sigma _{i}=(i,i+1)$ that swap i and i + 1.[6] The collection $\sigma _{1},\ldots ,\sigma _{n-1}$ generates Sn subject to the following relations:[7] • $\sigma _{i}^{2}=1,$ • $\sigma _{i}\sigma _{j}=\sigma _{j}\sigma _{i}$ for $|i-j|>1$, and • $(\sigma _{i}\sigma _{i+1})^{3}=1,$ where 1 represents the identity permutation. This representation endows the symmetric group with the structure of a Coxeter group (and so also a reflection group). Other possible generating sets include the set of transpositions that swap 1 and i for 2 ≤ i ≤ n, and a set containing any n-cycle and a 2-cycle of adjacent elements in the n-cycle.[8] Subgroup structure A subgroup of a symmetric group is called a permutation group. Normal subgroups The normal subgroups of the finite symmetric groups are well understood. If n ≤ 2, Sn has at most 2 elements, and so has no nontrivial proper subgroups. The alternating group of degree n is always a normal subgroup, a proper one for n ≥ 2 and nontrivial for n ≥ 3; for n ≥ 3 it is in fact the only nontrivial proper normal subgroup of Sn, except when n = 4 where there is one additional such normal subgroup, which is isomorphic to the Klein four group. The symmetric group on an infinite set does not have a subgroup of index 2, as Vitali (1915[9]) proved that each permutation can be written as a product of three squares. However it contains the normal subgroup S of permutations that fix all but finitely many elements, which is generated by transpositions. Those elements of S that are products of an even number of transpositions form a subgroup of index 2 in S, called the alternating subgroup A. Since A is even a characteristic subgroup of S, it is also a normal subgroup of the full symmetric group of the infinite set. The groups A and S are the only nontrivial proper normal subgroups of the symmetric group on a countably infinite set. This was first proved by Onofri (1929[10]) and independently Schreier–Ulam (1934[11]). For more details see (Scott 1987, Ch. 11.3) or (Dixon & Mortimer 1996, Ch. 8.1). Maximal subgroups The maximal subgroups of Sn fall into three classes: the intransitive, the imprimitive, and the primitive. The intransitive maximal subgroups are exactly those of the form Sk × Sn–k for 1 ≤ k < n/2. 
The imprimitive maximal subgroups are exactly those of the form Sk wr Sn/k, where 2 ≤ k ≤ n/2 is a proper divisor of n and "wr" denotes the wreath product. The primitive maximal subgroups are more difficult to identify, but with the assistance of the O'Nan–Scott theorem and the classification of finite simple groups, (Liebeck, Praeger & Saxl 1988) gave a fairly satisfactory description of the maximal subgroups of this type, according to (Dixon & Mortimer 1996, p. 268). Sylow subgroups The Sylow subgroups of the symmetric groups are important examples of p-groups. They are more easily described in special cases first: The Sylow p-subgroups of the symmetric group of degree p are just the cyclic subgroups generated by p-cycles. There are (p − 1)!/(p − 1) = (p − 2)! such subgroups simply by counting generators. The normalizer therefore has order p⋅(p − 1) and is known as a Frobenius group Fp(p−1) (especially for p = 5), and is the affine general linear group, AGL(1, p). The Sylow p-subgroups of the symmetric group of degree p2 are the wreath product of two cyclic groups of order p. For instance, when p = 3, a Sylow 3-subgroup of Sym(9) is generated by a = (1 4 7)(2 5 8)(3 6 9) and the elements x = (1 2 3), y = (4 5 6), z = (7 8 9), and every element of the Sylow 3-subgroup has the form aixjykzl for $0\leq i,j,k,l\leq 2$. The Sylow p-subgroups of the symmetric group of degree pn are sometimes denoted Wp(n), and using this notation one has that Wp(n + 1) is the wreath product of Wp(n) and Wp(1). In general, the Sylow p-subgroups of the symmetric group of degree n are a direct product of ai copies of Wp(i), where 0 ≤ ai ≤ p − 1 and n = a0 + p⋅a1 + ... + pk⋅ak (the base p expansion of n). For instance, W2(1) = C2 and W2(2) = D8, the dihedral group of order 8, and so a Sylow 2-subgroup of the symmetric group of degree 7 is generated by { (1,3)(2,4), (1,2), (3,4), (5,6) } and is isomorphic to D8 × C2. These calculations are attributed to (Kaloujnine 1948) and described in more detail in (Rotman 1995, p. 176). Note however that (Kerber 1971, p. 26) attributes the result to an 1844 work of Cauchy, and mentions that it is even covered in textbook form in (Netto 1882, §39–40). Transitive subgroups A transitive subgroup of Sn is a subgroup whose action on {1, 2, ,..., n} is transitive. For example, the Galois group of a (finite) Galois extension is a transitive subgroup of Sn, for some n. Cayley's theorem Cayley's theorem states that every group G is isomorphic to a subgroup of some symmetric group. In particular, one may take a subgroup of the symmetric group on the elements of G, since every group acts on itself faithfully by (left or right) multiplication. Cyclic subgroups Cyclic groups are those that are generated by a single permutation. When a permutation is represented in cycle notation, the order of the cyclic subgroup that it generates is the least common multiple of the lengths of its cycles. For example, in S5, one cyclic subgroup of order 5 is generated by (13254), whereas the largest cyclic subgroups of S5 are generated by elements like (123)(45) that have one cycle of length 3 and another cycle of length 2. This rules out many groups as possible subgroups of symmetric groups of a given size. For example, S5 has no subgroup of order 15 (a divisor of the order of S5), because the only group of order 15 is the cyclic group. The largest possible order of a cyclic subgroup (equivalently, the largest possible order of an element in Sn) is given by Landau's function. 
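Since the order of a permutation is the least common multiple of its cycle lengths, the largest order occurring in Sn (Landau's function) can be found for small n by direct search. A sketch (in C, for n = 7; the recursive enumeration and names are this example's own) computes each element's order by repeated composition and reports the maximum, which for S7 is 12, attained by cycle type (3, 4):

#include <stdio.h>

#define N 7

static int used[N + 1], p[N + 1];
static int max_order = 0;

/* order of p: compose p with itself until the identity reappears */
static void record_order(void) {
    int q[N + 1], ord = 1;
    for (int i = 1; i <= N; i++) q[i] = p[i];
    for (;;) {
        int is_identity = 1;
        for (int i = 1; i <= N; i++)
            if (q[i] != i) is_identity = 0;
        if (is_identity) break;
        for (int i = 1; i <= N; i++) q[i] = p[q[i]];   /* q := p o q */
        ord++;
    }
    if (ord > max_order) max_order = ord;
}

/* enumerate all of S_7 in one-line notation */
static void gen(int pos) {
    if (pos > N) { record_order(); return; }
    for (int v = 1; v <= N; v++)
        if (!used[v]) {
            used[v] = 1; p[pos] = v;
            gen(pos + 1);
            used[v] = 0;
        }
}

int main(void) {
    gen(1);
    printf("largest element order in S_%d = %d\n", N, max_order);  /* 12 = lcm(3,4) */
    return 0;
}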
Automorphism group Further information: Automorphisms of the symmetric and alternating groups n Aut(Sn) Out(Sn) Z(Sn) n ≠ 2, 6 Sn C1 C1 n = 2 C1 C1 S2 n = 6 S6 ⋊ C2 C2 C1 For n ≠ 2, 6, Sn is a complete group: its center and outer automorphism group are both trivial. For n = 2, the automorphism group is trivial, but S2 is not trivial: it is isomorphic to C2, which is abelian, and hence the center is the whole group. For n = 6, it has an outer automorphism of order 2: Out(S6) = C2, and the automorphism group is a semidirect product Aut(S6) = S6 ⋊ C2. In fact, for any set X of cardinality other than 6, every automorphism of the symmetric group on X is inner, a result first due to (Schreier & Ulam 1936) according to (Dixon & Mortimer 1996, p. 259). Homology See also: Alternating group § Group homology The group homology of Sn is quite regular and stabilizes: the first homology (concretely, the abelianization) is: $H_{1}(\mathrm {S} _{n},\mathbf {Z} )={\begin{cases}0&n<2\\\mathbf {Z} /2&n\geq 2.\end{cases}}$ The first homology group is the abelianization, and corresponds to the sign map Sn → S2 which is the abelianization for n ≥ 2; for n < 2 the symmetric group is trivial. This homology is easily computed as follows: Sn is generated by involutions (2-cycles, which have order 2), so the only non-trivial maps Sn → Cp are to S2 and all involutions are conjugate, hence map to the same element in the abelianization (since conjugation is trivial in abelian groups). Thus the only possible maps Sn → S2 ≅ {±1} send an involution to 1 (the trivial map) or to −1 (the sign map). One must also show that the sign map is well-defined, but assuming that, this gives the first homology of Sn. The second homology (concretely, the Schur multiplier) is: $H_{2}(\mathrm {S} _{n},\mathbf {Z} )={\begin{cases}0&n<4\\\mathbf {Z} /2&n\geq 4.\end{cases}}$ This was computed in (Schur 1911), and corresponds to the double cover of the symmetric group, 2 · Sn. Note that the exceptional low-dimensional homology of the alternating group ($H_{1}(\mathrm {A} _{3})\cong H_{1}(\mathrm {A} _{4})\cong \mathrm {C} _{3},$ corresponding to non-trivial abelianization, and $H_{2}(\mathrm {A} _{6})\cong H_{2}(\mathrm {A} _{7})\cong \mathrm {C} _{6},$ due to the exceptional 3-fold cover) does not change the homology of the symmetric group; the alternating group phenomena do yield symmetric group phenomena – the map $\mathrm {A} _{4}\twoheadrightarrow \mathrm {C} _{3}$ extends to $\mathrm {S} _{4}\twoheadrightarrow \mathrm {S} _{3},$ and the triple covers of A6 and A7 extend to triple covers of S6 and S7 – but these are not homological – the map $\mathrm {S} _{4}\twoheadrightarrow \mathrm {S} _{3}$ does not change the abelianization of S4, and the triple covers do not correspond to homology either. The homology "stabilizes" in the sense of stable homotopy theory: there is an inclusion map Sn → Sn+1, and for fixed k, the induced map on homology Hk(Sn) → Hk(Sn+1) is an isomorphism for sufficiently high n. This is analogous to the homology of families Lie groups stabilizing. The homology of the infinite symmetric group is computed in (Nakaoka 1961), with the cohomology algebra forming a Hopf algebra. Representation theory Main article: Representation theory of the symmetric group The representation theory of the symmetric group is a particular case of the representation theory of finite groups, for which a concrete and detailed theory can be obtained. 
This has a large area of potential applications, from symmetric function theory to problems of quantum mechanics for a number of identical particles. The symmetric group Sn has order n!. Its conjugacy classes are labeled by partitions of n. Therefore, according to the representation theory of a finite group, the number of inequivalent irreducible representations, over the complex numbers, is equal to the number of partitions of n. Unlike the general situation for finite groups, there is in fact a natural way to parametrize irreducible representation by the same set that parametrizes conjugacy classes, namely by partitions of n or equivalently Young diagrams of size n. Each such irreducible representation can be realized over the integers (every permutation acting by a matrix with integer coefficients); it can be explicitly constructed by computing the Young symmetrizers acting on a space generated by the Young tableaux of shape given by the Young diagram. Over other fields the situation can become much more complicated. If the field K has characteristic equal to zero or greater than n then by Maschke's theorem the group algebra KSn is semisimple. In these cases the irreducible representations defined over the integers give the complete set of irreducible representations (after reduction modulo the characteristic if necessary). However, the irreducible representations of the symmetric group are not known in arbitrary characteristic. In this context it is more usual to use the language of modules rather than representations. The representation obtained from an irreducible representation defined over the integers by reducing modulo the characteristic will not in general be irreducible. The modules so constructed are called Specht modules, and every irreducible does arise inside some such module. There are now fewer irreducibles, and although they can be classified they are very poorly understood. For example, even their dimensions are not known in general. The determination of the irreducible modules for the symmetric group over an arbitrary field is widely regarded as one of the most important open problems in representation theory. See also • Braid group • History of group theory • Signed symmetric group and Generalized symmetric group • Symmetry in quantum mechanics § Exchange symmetry or permutation symmetry • Symmetric inverse semigroup • Symmetric power Notes 1. Jacobson 2009, p. 31 2. Jacobson 2009, p. 32 Theorem 1.1 3. "Symmetric Group is not Abelian/Proof 1". 4. Vasishtha, A.R.; Vasishtha, A.K. (2008). "2. Groups S3 Group Definition". Modern Algebra. Krishna Prakashan Media. p. 49. ISBN 9788182830561. 5. Neubüser, J. (1967). Die Untergruppenverbände der Gruppen der Ordnungen ̤100 mit Ausnahme der Ordnungen 64 und 96 (PhD). Universität Kiel. 6. Sagan, Bruce E. (2001), The Symmetric Group (2 ed.), Springer, p. 4, ISBN 978-0-387-95067-9 7. Björner, Anders; Brenti, Francesco (2005), Combinatorics of Coxeter groups, Springer, p. 4. Example 1.2.3, ISBN 978-3-540-27596-1 8. Artin, Michael (1991), Algebra, Pearson, Exercise 6.6.16, ISBN 978-0-13-004763-2 9. Vitali, G. (1915). "Sostituzioni sopra una infinità numerabile di elementi". Bollettino Mathesis. 7: 29–31. 10. §141, p.124 in Onofri, L. (1929). "Teoria delle sostituzioni che operano su una infinità numerabile di elementi". Annali di Matematica. 7 (1): 103–130. doi:10.1007/BF02409971. S2CID 186219904. 11. Schreier, J.; Ulam, S. (1933). "Über die Permutationsgruppe der natürlichen Zahlenfolge" (PDF). Studia Math. 4 (1): 134–141. 
doi:10.4064/sm-4-1-134-141. References • Cameron, Peter J. (1999), Permutation Groups, London Mathematical Society Student Texts, vol. 45, Cambridge University Press, ISBN 978-0-521-65378-7 • Dixon, John D.; Mortimer, Brian (1996), Permutation groups, Graduate Texts in Mathematics, vol. 163, Springer-Verlag, ISBN 978-0-387-94599-6, MR 1409812 • Jacobson, Nathan (2009), Basic algebra, vol. 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1. • Kaloujnine, Léo (1948), "La structure des p-groupes de Sylow des groupes symétriques finis", Annales Scientifiques de l'École Normale Supérieure, Série 3, 65: 239–276, doi:10.24033/asens.961, ISSN 0012-9593, MR 0028834 • Kerber, Adalbert (1971), Representations of permutation groups. I, Lecture Notes in Mathematics, Vol. 240, vol. 240, Springer-Verlag, doi:10.1007/BFb0067943, ISBN 978-3-540-05693-5, MR 0325752 • Liebeck, M.W.; Praeger, C.E.; Saxl, J. (1988), "On the O'Nan–Scott theorem for finite primitive permutation groups", Journal of the Australian Mathematical Society, 44 (3): 389–396, doi:10.1017/S144678870003216X • Nakaoka, Minoru (March 1961), "Homology of the Infinite Symmetric Group", Annals of Mathematics, 2, 73 (2): 229–257, doi:10.2307/1970333, JSTOR 1970333 • Netto, Eugen (1882), Substitutionentheorie und ihre Anwendungen auf die Algebra (in German), Leipzig. Teubner, JFM 14.0090.01 • Rotman, Joseph J. (1995), "Extensions and Cohomology" (PDF), An Introduction to the Theory of Groups, Graduate Texts in Mathematics, vol. 148, Springer, pp. 154–216, doi:10.1007/978-1-4612-4176-8_7, ISBN 978-1-4612-8686-8 • Scott, W.R. (1987), Group Theory, Dover Publications, pp. 45–46, ISBN 978-0-486-65377-8 • Schur, Issai (1911), "Über die Darstellung der symmetrischen und der alternierenden Gruppe durch gebrochene lineare Substitutionen", Journal für die reine und angewandte Mathematik, 1911 (139): 155–250, doi:10.1515/crll.1911.139.155, S2CID 122809608 • Schreier, Józef; Ulam, Stanislaw (1936), "Über die Automorphismen der Permutationsgruppe der natürlichen Zahlenfolge" (PDF), Fundamenta Mathematicae (in German), 28: 258–260, doi:10.4064/fm-28-1-258-260, Zbl 0016.20301 External links • "Symmetric group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Weisstein, Eric W. "Symmetric group". MathWorld. • Weisstein, Eric W. "Symmetric group graph". MathWorld. • Marcus du Sautoy: Symmetry, reality's riddle (video of a talk) • OEIS Entries dealing with the Symmetric Group
S5 (modal logic) In logic and philosophy, S5 is one of five systems of modal logic proposed by Clarence Irving Lewis and Cooper Harold Langford in their 1932 book Symbolic Logic. It is a normal modal logic, and one of the oldest systems of modal logic of any kind. It is formed with propositional calculus formulas and tautologies, and an inference apparatus with substitution and modus ponens, but extends the syntax with the modal operator necessarily $\Box $ and its dual possibly $\Diamond $.[1][2] The axioms of S5 The following makes use of the modal operators $\Box $ ("necessarily") and $\Diamond $ ("possibly"). S5 is characterized by the axioms: • K: $\Box (A\to B)\to (\Box A\to \Box B)$; • T: $\Box A\to A$, and either: • 5: $\Diamond A\to \Box \Diamond A$; • or both of the following: • 4: $\Box A\to \Box \Box A$, and • B: $A\to \Box \Diamond A$. The (5) axiom restricts the accessibility relation $R$ of the Kripke frame to be Euclidean, i.e. $(wRv\land wRu)\implies vRu$. Kripke semantics In terms of Kripke semantics, S5 is characterized by models where the accessibility relation is an equivalence relation: it is reflexive, transitive, and symmetric. Determining the satisfiability of an S5 formula is an NP-complete problem. The hardness proof is trivial, as S5 includes propositional logic. Membership is proved by showing that any satisfiable formula has a Kripke model where the number of worlds is at most linear in the size of the formula. Applications S5 is useful because it avoids superfluous iteration of qualifiers of different kinds. For example, under S5, if X is necessarily, possibly, necessarily, possibly true, then X is possibly true. All of the qualifiers before the final "possibly" are pruned in S5. While this is useful for keeping propositions reasonably short, it also might appear counter-intuitive in that, under S5, if something is possibly necessary, then it is necessary. Alvin Plantinga has argued that this feature of S5 is not, in fact, counter-intuitive. To justify this, he reasons that if X is possibly necessary, it is necessary in at least one possible world; hence it is necessary in all possible worlds and thus is true in all possible worlds. Such reasoning underpins 'modal' formulations of the ontological argument. S5 is equivalent to the adjunction $\Diamond \dashv \Box $.[3] Leibniz proposed an ontological argument for the existence of God using this axiom. In his words, "If a necessary being is possible, it follows that it exists actually".[4] S5 is also the modal system for the metaphysics of Saint Thomas Aquinas and in particular for the Five Ways.[5] See also • Modal logic • Normal modal logic • Kripke semantics References 1. Chellas, B. F. (1980) Modal Logic: An Introduction. Cambridge University Press. ISBN 0-521-22476-4 2. Hughes, G. E., and Cresswell, M. J. (1996) A New Introduction to Modal Logic. Routledge. ISBN 0-415-12599-5 3. "Steve Awodey. Category Theory. Chapter 10. Monads. 10.4 Comonads and Coalgebras" (PDF). 4. Look, Brandon C. (2020), Zalta, Edward N. (ed.), "Gottfried Wilhelm Leibniz", The Stanford Encyclopedia of Philosophy (Spring 2020 ed.), Metaphysics Research Lab, Stanford University, retrieved 2022-06-03 5. Gianfranco Basti (2017). Logica III: logica filosofica e filosofia formale- Parte I: la riscoperta moderna della logica formale [Logics III: philosophical Logic and formal philosophy - Part I: the modern rediscovery of the formal logic] (PDF) (in Italian). Rome. pp. 106, 108.
Archived from the original (PPT) on 2022-10-07. External links • http://home.utah.edu/~nahaj/logic/structures/systems/s5.html • Modal Logic at the Stanford Encyclopedia of Philosophy
Monadic Boolean algebra In abstract algebra, a monadic Boolean algebra is an algebraic structure A with signature ⟨·, +, ', 0, 1, ∃⟩ of type ⟨2,2,1,0,0,1⟩, where ⟨A, ·, +, ', 0, 1⟩ is a Boolean algebra. The monadic/unary operator ∃ denotes the existential quantifier, which satisfies the identities (using the received prefix notation for ∃): • ∃0 = 0 • ∃x ≥ x • ∃(x + y) = ∃x + ∃y • ∃x∃y = ∃(x∃y). ∃x is the existential closure of x. Dual to ∃ is the unary operator ∀, the universal quantifier, defined as ∀x := (∃x′)′. A monadic Boolean algebra has a dual definition and notation that take ∀ as primitive and ∃ as defined, so that ∃x := (∀x′)′. (Compare this with the definition of the dual Boolean algebra.) Hence, with this notation, an algebra A has signature ⟨·, +, ', 0, 1, ∀⟩, with ⟨A, ·, +, ', 0, 1⟩ a Boolean algebra, as before. Moreover, ∀ satisfies the following dualized version of the above identities: 1. ∀1 = 1 2. ∀x ≤ x 3. ∀(xy) = ∀x∀y 4. ∀x + ∀y = ∀(x + ∀y). ∀x is the universal closure of x. Discussion Monadic Boolean algebras have an important connection to topology. If ∀ is interpreted as the interior operator of topology, (1)–(3) above plus the axiom ∀(∀x) = ∀x make up the axioms for an interior algebra. But ∀(∀x) = ∀x can be proved from (1)–(4). Moreover, an alternative axiomatization of monadic Boolean algebras consists of the (reinterpreted) axioms for an interior algebra, plus ∀(∀x)' = (∀x)' (Halmos 1962: 22). Hence monadic Boolean algebras are the semisimple interior/closure algebras such that: • The universal (dually, existential) quantifier interprets the interior (closure) operator; • All open (or closed) elements are also clopen. A more concise axiomatization of monadic Boolean algebra is (1) and (2) above, plus ∀(x∨∀y) = ∀x∨∀y (Halmos 1962: 21). This axiomatization obscures the connection to topology. Monadic Boolean algebras form a variety. They are to monadic predicate logic what Boolean algebras are to propositional logic, and what polyadic algebras are to first-order logic. Paul Halmos discovered monadic Boolean algebras while working on polyadic algebras; Halmos (1962) reprints the relevant papers. Halmos and Givant (1998) includes an undergraduate treatment of monadic Boolean algebra. Monadic Boolean algebras also have an important connection to modal logic. The modal logic S5, viewed as a theory in S4, is a model of monadic Boolean algebras in the same way that S4 is a model of interior algebra. Likewise, monadic Boolean algebras supply the algebraic semantics for S5. Hence S5-algebra is a synonym for monadic Boolean algebra. See also • Clopen set • Cylindric algebra • Interior algebra • Kuratowski closure axioms • Łukasiewicz–Moisil algebra • Modal logic • Monadic logic References • Paul Halmos, 1962. Algebraic Logic. New York: Chelsea. • ------ and Steven Givant, 1998. Logic as Algebra. Mathematical Association of America.
Hamming weight The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length. For the most typical case, a string of bits, this is the number of 1's in the string, or the digit sum of the binary representation of a given number and the ℓ₁ norm of a bit vector. In this binary case, it is also called the population count,[1] popcount, sideways sum,[2] or bit summation.[3] Examples String Hamming weight 11101 4 11101000 4 00000000 0 678012340567 10 A plot for the population count (Hamming weight for binary numbers) for (decimal) numbers 0 to 256.[4][5][6] History and usage The Hamming weight is named after Richard Hamming although he did not originate the notion.[7] The Hamming weight of binary numbers was already used in 1899 by James W. L. Glaisher to give a formula for the number of odd binomial coefficients in a single row of Pascal's triangle.[8] Irving S. Reed introduced a concept, equivalent to Hamming weight in the binary case, in 1954.[9] Hamming weight is used in several disciplines including information theory, coding theory, and cryptography. Examples of applications of the Hamming weight include: • In modular exponentiation by squaring, the number of modular multiplications required for an exponent e is log2 e + weight(e). This is the reason that the public key value e used in RSA is typically chosen to be a number of low Hamming weight.[10] • The Hamming weight determines path lengths between nodes in Chord distributed hash tables.[11] • IrisCode lookups in biometric databases are typically implemented by calculating the Hamming distance to each stored record. • In computer chess programs using a bitboard representation, the Hamming weight of a bitboard gives the number of pieces of a given type remaining in the game, or the number of squares of the board controlled by one player's pieces, and is therefore an important contributing term to the value of a position. • Hamming weight can be used to efficiently compute find first set using the identity ffs(x) = pop(x ^ (x - 1)). This is useful on platforms such as SPARC that have hardware Hamming weight instructions but no hardware find first set instruction.[12][1] • The Hamming weight operation can be interpreted as a conversion from the unary numeral system to binary numbers.[13] • In implementation of some succinct data structures like bit vectors and wavelet trees. Efficient implementation The population count of a bitstring is often needed in cryptography and other applications. The Hamming distance of two words A and B can be calculated as the Hamming weight of A xor B.[1] The problem of how to implement it efficiently has been widely studied. A single operation for the calculation, or parallel operations on bit vectors are available on some processors. For processors lacking those features, the best solutions known are based on adding counts in a tree pattern. 
For example, to count the number of 1 bits in the 16-bit binary number a = 0110 1100 1011 1010, these operations can be done:

Expression | Binary | Decimal | Comment
a | 01 10 11 00 10 11 10 10 | 27834 | The original number
b0 = (a >> 0) & 01 01 01 01 01 01 01 01 | 01 00 01 00 00 01 00 00 | 1, 0, 1, 0, 0, 1, 0, 0 | Every other bit from a
b1 = (a >> 1) & 01 01 01 01 01 01 01 01 | 00 01 01 00 01 01 01 01 | 0, 1, 1, 0, 1, 1, 1, 1 | The remaining bits from a
c = b0 + b1 | 01 01 10 00 01 10 01 01 | 1, 1, 2, 0, 1, 2, 1, 1 | Count of 1s in each 2-bit slice of a
d0 = (c >> 0) & 0011 0011 0011 0011 | 0001 0000 0010 0001 | 1, 0, 2, 1 | Every other count from c
d2 = (c >> 2) & 0011 0011 0011 0011 | 0001 0010 0001 0001 | 1, 2, 1, 1 | The remaining counts from c
e = d0 + d2 | 0010 0010 0011 0010 | 2, 2, 3, 2 | Count of 1s in each 4-bit slice of a
f0 = (e >> 0) & 00001111 00001111 | 00000010 00000010 | 2, 2 | Every other count from e
f4 = (e >> 4) & 00001111 00001111 | 00000010 00000011 | 2, 3 | The remaining counts from e
g = f0 + f4 | 00000100 00000101 | 4, 5 | Count of 1s in each 8-bit slice of a
h0 = (g >> 0) & 0000000011111111 | 0000000000000101 | 5 | Every other count from g
h8 = (g >> 8) & 0000000011111111 | 0000000000000100 | 4 | The remaining counts from g
i = h0 + h8 | 0000000000001001 | 9 | Count of 1s in entire 16-bit word

Here, the operations are as in C programming language, so X >> Y means to shift X right by Y bits, X & Y means the bitwise AND of X and Y, and + is ordinary addition. The best algorithms known for this problem are based on the concept illustrated above and are given here:[1]

//types and constants used in the functions below
//uint64_t is an unsigned 64-bit integer variable type (defined in C99 version of C language)
const uint64_t m1  = 0x5555555555555555; //binary: 0101...
const uint64_t m2  = 0x3333333333333333; //binary: 00110011..
const uint64_t m4  = 0x0f0f0f0f0f0f0f0f; //binary: 4 zeros, 4 ones ...
const uint64_t m8  = 0x00ff00ff00ff00ff; //binary: 8 zeros, 8 ones ...
const uint64_t m16 = 0x0000ffff0000ffff; //binary: 16 zeros, 16 ones ...
const uint64_t m32 = 0x00000000ffffffff; //binary: 32 zeros, 32 ones
const uint64_t h01 = 0x0101010101010101; //the sum of 256 to the power of 0,1,2,3...

//This is a naive implementation, shown for comparison,
//and to help in understanding the better functions.
//This algorithm uses 24 arithmetic operations (shift, add, and).
int popcount64a(uint64_t x)
{
    x = (x & m1 ) + ((x >>  1) & m1 ); //put count of each 2 bits into those 2 bits
    x = (x & m2 ) + ((x >>  2) & m2 ); //put count of each 4 bits into those 4 bits
    x = (x & m4 ) + ((x >>  4) & m4 ); //put count of each 8 bits into those 8 bits
    x = (x & m8 ) + ((x >>  8) & m8 ); //put count of each 16 bits into those 16 bits
    x = (x & m16) + ((x >> 16) & m16); //put count of each 32 bits into those 32 bits
    x = (x & m32) + ((x >> 32) & m32); //put count of each 64 bits into those 64 bits
    return x;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with slow multiplication.
//This algorithm uses 17 arithmetic operations.
int popcount64b(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits
    x += x >>  8;  //put count of each 16 bits into their lowest 8 bits
    x += x >> 16;  //put count of each 32 bits into their lowest 8 bits
    x += x >> 32;  //put count of each 64 bits into their lowest 8 bits
    return x & 0x7f;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with fast multiplication.
//This algorithm uses 12 arithmetic operations, one of which is a multiply.
int popcount64c(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits
    return (x * h01) >> 56;  //returns left 8 bits of x + (x<<8) + (x<<16) + (x<<24) + ...
}

The above implementations have the best worst-case behavior of any known algorithm. However, when a value is expected to have few nonzero bits, it may instead be more efficient to use algorithms that count these bits one at a time. As Wegner described in 1960,[14] the bitwise AND of x with x − 1 differs from x only in zeroing out the least significant nonzero bit: subtracting 1 changes the rightmost string of 0s to 1s, and changes the rightmost 1 to a 0. If x originally had n bits that were 1, then after only n iterations of this operation, x will be reduced to zero. The following implementation is based on this principle.

//This is better when most bits in x are 0
//This algorithm works the same for all data sizes.
//This algorithm uses 3 arithmetic operations and 1 comparison/branch per "1" bit in x.
int popcount64d(uint64_t x)
{
    int count;
    for (count = 0; x; count++)
        x &= x - 1;
    return count;
}

If greater memory usage is allowed, we can calculate the Hamming weight faster than the above methods. With unlimited memory, we could simply create a large lookup table of the Hamming weight of every 64 bit integer. If we can store a lookup table of the Hamming weight of every 16 bit integer, we can do the following to compute the Hamming weight of every 32 bit integer.

static uint8_t wordbits[65536] = { /* bitcounts of integers 0 through 65535, inclusive */ };

//This algorithm uses 3 arithmetic operations and 2 memory reads.
int popcount32e(uint32_t x)
{
    return wordbits[x & 0xFFFF] + wordbits[x >> 16];
}

//Optionally, the wordbits[] table could be filled using this function
void popcount32e_init(void)
{
    uint32_t i;
    uint16_t x;
    int count;
    for (i = 0; i <= 0xFFFF; i++)
    {
        x = i;
        for (count = 0; x; count++) // borrowed from popcount64d() above
            x &= x - 1;
        wordbits[i] = count;
    }
}

Muła et al.[15] have shown that a vectorized version of popcount64b can run faster than dedicated instructions (e.g., popcnt on x64 processors).

Minimum weight
In error-correcting coding, the minimum Hamming weight, commonly referred to as the minimum weight wmin of a code, is the weight of the lowest-weight non-zero code word. The weight w of a code word is the number of 1s in the word. For example, the word 11001010 has a weight of 4. In a linear block code the minimum weight is also the minimum Hamming distance (dmin) and defines the error correction capability of the code. If wmin = n, then dmin = n and the code will correct up to ⌊(dmin − 1)/2⌋ errors.[16]

Language support
Some C compilers provide intrinsic functions that provide bit counting facilities.
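As a small illustration of the minimum-weight computation just described, the following sketch enumerates the non-zero codewords of a [7,4] Hamming code (the particular generator matrix, packed into 7-bit row words, is an assumption of this example) and takes the smallest weight, using one such intrinsic, GCC/Clang's __builtin_popcount:

#include <stdio.h>

/* Generator matrix of a [7,4] Hamming code, one row per 7-bit word
   (bit 6 = first coordinate): G = [ I4 | P ], P = (110, 101, 011, 111).
   This particular matrix is only an illustrative choice.               */
static const unsigned G[4] = { 0x46, 0x25, 0x13, 0x0F };

int main(void)
{
    int wmin = 7;                        /* weight is at most the code length    */
    for (unsigned m = 1; m < 16u; m++) { /* every non-zero 4-bit message         */
        unsigned c = 0;
        for (int i = 0; i < 4; i++)      /* codeword = GF(2) combination of rows */
            if ((m >> i) & 1)
                c ^= G[i];
        int w = __builtin_popcount(c);   /* Hamming weight of the codeword       */
        if (w < wmin)
            wmin = w;
    }
    printf("w_min = d_min = %d, so up to %d error(s) can be corrected\n",
           wmin, (wmin - 1) / 2);
    return 0;
}

On toolchains without this builtin, any of the portable popcount routines above can be substituted.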
For example, GCC (since version 3.4 in April 2004) includes a builtin function __builtin_popcount that will use a processor instruction if available or an efficient library implementation otherwise.[17] LLVM-GCC has included this function since version 1.5 in June 2005.[18] In the C++ Standard Library, the bit-array data structure bitset has a count() method that counts the number of bits that are set. In C++20, a new header <bit> was added, containing functions std::popcount and std::has_single_bit, taking arguments of unsigned integer types. In Java, the growable bit-array data structure BitSet has a BitSet.cardinality() method that counts the number of bits that are set. In addition, there are Integer.bitCount(int) and Long.bitCount(long) functions to count bits in primitive 32-bit and 64-bit integers, respectively. Also, the BigInteger arbitrary-precision integer class also has a BigInteger.bitCount() method that counts bits. In Python, the int type has a bit_count() method to count the number of bits set. This functionality was introduced in Python 3.10, released in October 2021.[19] In Common Lisp, the function logcount, given a non-negative integer, returns the number of 1 bits. (For negative integers it returns the number of 0 bits in 2's complement notation.) In either case the integer can be a BIGNUM. Starting in GHC 7.4, the Haskell base package has a popCount function available on all types that are instances of the Bits class (available from the Data.Bits module).[20] MySQL version of SQL language provides BIT_COUNT() as a standard function.[21] Fortran 2008 has the standard, intrinsic, elemental function popcnt returning the number of nonzero bits within an integer (or integer array).[22] Some programmable scientific pocket calculators feature special commands to calculate the number of set bits, e.g. #B on the HP-16C[3][23] and WP 43S,[24][25] #BITS[26][27] or BITSUM[28][29] on HP-16C emulators, and nBITS on the WP 34S.[30][31] FreePascal implements popcnt since version 3.0.[32] Processor support • The IBM STRETCH computer in the 1960s calculated the number of set bits as well as the number of leading zeros as a by-product of all logical operations.[1] • Cray supercomputers early on featured a population count machine instruction, rumoured to have been specifically requested by the U.S. government National Security Agency for cryptanalysis applications.[1] • Control Data Corporation's (CDC) 6000 and Cyber 70/170 series machines included a population count instruction; in COMPASS, this instruction was coded as CXi. • The 64-bit SPARC version 9 architecture defines a POPC instruction,[12][1] but most implementations do not implement it, requiring it be emulated by the operating system.[33] • Donald Knuth's model computer MMIX that is going to replace MIX in his book The Art of Computer Programming has an SADD instruction since 1999. SADD a,b,c counts all bits that are 1 in b and 0 in c and writes the result to a. • Compaq's Alpha 21264A, released in 1999, was the first Alpha series CPU design that had the count extension (CIX). • Analog Devices' Blackfin processors feature the ONES instruction to perform a 32-bit population count.[34] • AMD's Barcelona architecture introduced the advanced bit manipulation (ABM) ISA introducing the POPCNT instruction as part of the SSE4a extensions in 2007. • Intel Core processors introduced a POPCNT instruction with the SSE4.2 instruction set extension, first available in a Nehalem-based Core i7 processor, released in November 2008. 
• The ARM architecture introduced the VCNT instruction as part of the Advanced SIMD (NEON) extensions. • The RISC-V architecture introduced the PCNT instruction as part of the Bit Manipulation (B) extension.[35] See also • Two's complement • Fan out References 1. Warren Jr., Henry S. (2013) [2002]. Hacker's Delight (2 ed.). Addison Wesley - Pearson Education, Inc. pp. 81–96. ISBN 978-0-321-84268-8. 0-321-84268-5. 2. Knuth, Donald Ervin (2009). "Bitwise tricks & techniques; Binary Decision Diagrams". The Art of Computer Programming. Vol. 4, Fascicle 1. Addison–Wesley Professional. ISBN 978-0-321-58050-4. (NB. Draft of Fascicle 1b available for download.) 3. Hewlett-Packard HP-16C Computer Scientist Owner's Handbook (PDF). Hewlett-Packard Company. April 1982. 00016-90001. Archived (PDF) from the original on 2017-03-28. Retrieved 2017-03-28. 4. , written in Fōrmulæ. The Fōrmulæ wiki. Retrieved 2019-09-30. 5. A solution to the task Population count. Retrieved 2019-09-30. 6. Rosetta Code. Retrieved 2019-09-30. 7. Thompson, Thomas M. (1983). From Error-Correcting Codes through Sphere Packings to Simple Groups. The Carus Mathematical Monographs #21. The Mathematical Association of America. p. 33. 8. Glaisher, James Whitbread Lee (1899). "On the residue of a binomial-theorem coefficient with respect to a prime modulus". The Quarterly Journal of Pure and Applied Mathematics. 30: 150–156. (NB. See in particular the final paragraph of p. 156.) 9. Reed, Irving Stoy (1954). "A Class of Multiple-Error-Correcting Codes and the Decoding Scheme". IRE Professional Group on Information Theory. Institute of Radio Engineers (IRE). PGIT-4: 38–49. 10. Cohen, Gérard D.; Lobstein, Antoine; Naccache, David; Zémor, Gilles (1998). "How to improve an exponentiation black-box". In Nyberg, Kaisa (ed.). Advances in Cryptology – EUROCRYPT '98, International Conference on the Theory and Application of Cryptographic Techniques, Espoo, Finland, May 31 – June 4, 1998, Proceeding. Lecture Notes in Computer Science. Vol. 1403. Springer. pp. 211–220. doi:10.1007/BFb0054128. 11. Stoica, I.; Morris, R.; Liben-Nowell, D.; Karger, D. R.; Kaashoek, M. F.; Dabek, F.; Balakrishnan, H. (February 2003). "Chord: a scalable peer-to-peer lookup protocol for internet applications". IEEE/ACM Transactions on Networking. 11 (1): 17–32. doi:10.1109/TNET.2002.808407. S2CID 221276912. Section 6.3: "In general, the number of fingers we need to follow will be the number of ones in the binary representation of the distance from node to query." 12. SPARC International, Inc. (1992). "A.41: Population Count. Programming Note". The SPARC architecture manual: version 8 (Version 8 ed.). Englewood Cliffs, New Jersey, USA: Prentice Hall. pp. 231. ISBN 0-13-825001-4. 13. Blaxell, David (1978). Hogben, David; Fife, Dennis W. (eds.). "Record linkage by bit pattern matching". Computer Science and Statistics--Tenth Annual Symposium on the Interface. NBS Special Publication. U.S. Department of Commerce / National Bureau of Standards. 503: 146–156. 14. Wegner, Peter (May 1960). "A technique for counting ones in a binary computer". Communications of the ACM. 3 (5): 322. doi:10.1145/367236.367286. S2CID 31683715. 15. Muła, Wojciech; Kurz, Nathan; Lemire, Daniel (January 2018). "Faster Population Counts Using AVX2 Instructions". Computer Journal. 61 (1): 111–120. arXiv:1611.07612. doi:10.1093/comjnl/bxx046. S2CID 540973. 16. Stern & Mahmoud, Communications System Design, Prentice Hall, 2004, p 477ff. 17. "GCC 3.4 Release Notes". GNU Project. 18. 
"LLVM 1.5 Release Notes". LLVM Project. 19. "What's New In Python 3.10". python.org. 20. "GHC 7.4.1 release notes". GHC documentation. 21. "Chapter 12.11. Bit Functions — MySQL 5.0 Reference Manual". 22. Metcalf, Michael; Reid, John; Cohen, Malcolm (2011). Modern Fortran Explained. Oxford University Press. p. 380. ISBN 978-0-19-960142-4. 23. Schwartz, Jake; Grevelle, Rick (2003-10-20) [1993]. HP16C Emulator Library for the HP48S/SX. 1.20 (1 ed.). Retrieved 2015-08-15. (NB. This library also works on the HP 48G/GX/G+. Beyond the feature set of the HP-16C this package also supports calculations for binary, octal, and hexadecimal floating-point numbers in scientific notation in addition to the usual decimal floating-point numbers.) 24. Bonin, Walter (2019) [2015]. WP 43S Owner's Manual (PDF). 0.12 (draft ed.). p. 135. ISBN 978-1-72950098-9. Retrieved 2019-08-05. (314 pages) 25. Bonin, Walter (2019) [2015]. WP 43S Reference Manual (PDF). 0.12 (draft ed.). pp. xiii, 104, 115, 120, 188. ISBN 978-1-72950106-1. Retrieved 2019-08-05. (271 pages) 26. Martin, Ángel M.; McClure, Greg J. (2015-09-05). "HP16C Emulator Module for the HP-41CX - User's Manual and QRG" (PDF). Archived (PDF) from the original on 2017-04-27. Retrieved 2017-04-27. (NB. Beyond the HP-16C feature set this custom library for the HP-41CX extends the functionality of the calculator by about 50 additional functions.) 27. Martin, Ángel M. (2015-09-07). "HP-41: New HP-16C Emulator available". Archived from the original on 2017-04-27. Retrieved 2017-04-27. 28. Thörngren, Håkan (2017-01-10). "Ladybug Documentation" (release 0A ed.). Retrieved 2017-01-29. 29. "New HP-41 module available: Ladybug". 2017-01-10. Archived from the original on 2017-01-29. Retrieved 2017-01-29. 30. Dale, Paul; Bonin, Walter (2012) [2008]. "WP 34S Owner's Manual" (PDF) (3.1 ed.). Retrieved 2017-04-27. 31. Bonin, Walter (2015) [2008]. WP 34S Owner's Manual (3.3 ed.). CreateSpace Independent Publishing Platform. ISBN 978-1-5078-9107-0. 32. "Free Pascal documentation popcnt". Retrieved 2019-12-07. 33. "JDK-6378821: bitCount() should use POPC on SPARC processors and AMD+10h". Java bug database. 2006-01-30. 34. Blackfin Instruction Set Reference (Preliminary ed.). Analog Devices. 2001. pp. 8–24. Part Number 82-000410-14. 35. Wolf, Claire (2019-03-22). "RISC-V "B" Bit Manipulation Extension for RISC-V, Draft v0.37" (PDF). Github. Further reading • Schroeppel, Richard C.; Orman, Hilarie K. (1972-02-29). "compilation". HAKMEM. By Beeler, Michael; Gosper, Ralph William; Schroeppel, Richard C. (report). Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. MIT AI Memo 239. (Item 169: Population count assembly code for the PDP/6-10.) External links • Aggregate Magic Algorithms. Optimized population count and other algorithms explained with sample code. • Bit Twiddling Hacks Several algorithms with code for counting bits set. • Necessary and Sufficient - by Damien Wintour - Has code in C# for various Hamming Weight implementations. • Best algorithm to count the number of set bits in a 32-bit integer? - Stackoverflow
SARG04 SARG04 (named after Valerio Scarani, Antonio Acin, Gregoire Ribordy, and Nicolas Gisin) is a 2004 quantum cryptography protocol derived from the first protocol of that kind, BB84. Origin Researchers built SARG04 when they noticed that by using the four states of BB84 with a different information encoding they could develop a new protocol which would be more robust, especially against the photon-number-splitting attack, when attenuated laser pulses are used instead of single-photon sources. SARG04 was defined by Scarani et al. in 2004 in Physical Review Letters as a prepare and measure version (in which it is equivalent to BB84 when viewed at the level of quantum processing).[1] An entanglement-based version has been defined as well.[1] Description In the SARG04 scheme, Alice wishes to send a private key to Bob. She begins with two strings of bits, $a$ and $b$, each $n$ bits long. She then encodes these two strings as a string of $n$ qubits, $|\psi \rangle =\bigotimes _{i=1}^{n}|\psi _{a_{i}b_{i}}\rangle .$ $a_{i}$ and $b_{i}$ are the $i^{\mathrm {th} }$ bits of $a$ and $b$, respectively. Together, $a_{i}b_{i}$ give us an index into the following four qubit states: $|\psi _{00}\rangle =|0\rangle $ $|\psi _{10}\rangle =|1\rangle $ $|\psi _{01}\rangle =|+\rangle ={\frac {1}{\sqrt {2}}}|0\rangle +{\frac {1}{\sqrt {2}}}|1\rangle $ $|\psi _{11}\rangle =|-\rangle ={\frac {1}{\sqrt {2}}}|0\rangle -{\frac {1}{\sqrt {2}}}|1\rangle .$ Note that the bit $b_{i}$ is what decides which basis $a_{i}$ is encoded in (either in the computational basis or the Hadamard basis). The qubits are now in states which are not mutually orthogonal, and thus it is impossible to distinguish all of them with certainty without knowing $b$. Alice sends $|\psi \rangle $ over a public quantum channel to Bob. Bob receives a state $\varepsilon \rho =\varepsilon |\psi \rangle \langle \psi |$, where $\varepsilon $ represents the effects of noise in the channel as well as eavesdropping by a third party we'll call Eve. After Bob receives the string of qubits, all three parties, namely Alice, Bob and Eve, have their own states. However, since only Alice knows $b$, it makes it virtually impossible for either Bob or Eve to distinguish the states of the qubits. Bob proceeds to generate a string of random bits $b'$ of the same length as $b$, and uses those bits for his choice of basis when measuring the qubits transmitted by Alice. At this point, Bob announces publicly that he has received Alice's transmission. For each qubit sent, Alice chooses one computational basis state and one Hadamard basis state such that the state of the qubit is one of these two states. Alice then announces those two states. Alice will note whether the state is the computational basis state or the Hadamard basis state; that piece of information makes up the secret bit that Alice wishes to communicate to Bob. Bob now knows that the state of his qubit was one of the two states indicated by Alice. To determine the secret bit, Bob must distinguish between the two candidate states. For each qubit, Bob can check to see whether his measurement is consistent with either possible state. If it is consistent with either state, Bob announces that the bit is invalid, since he cannot distinguish which state was transmitted based on the measurement. If on the other hand, one of the two candidate states was inconsistent with the observed measurement, Bob announces that the bit is valid since he can deduce the state (and therefore the secret bit). 
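The sifting rule can be illustrated with a small classical simulation. The sketch below rests on simplifying assumptions (ideal single-photon channel, no noise, no eavesdropper), and the function and variable names are invented for this example rather than taken from the protocol specification. Bases are encoded as 0 (computational) and 1 (Hadamard), so the state $|\psi _{ab}\rangle $ is represented by the pair (a, b), and the secret bit is the basis b. The prose example that follows traces one such round by hand.

#include <stdio.h>
#include <stdlib.h>

//Measuring |psi_ab> in basis bp: the same basis returns a deterministically,
//the conjugate basis returns a uniformly random bit.
static int measure(int a, int b, int bp)
{
    return (b == bp) ? a : (rand() & 1);
}

int main(void)
{
    int trials = 100000, conclusive = 0, correct = 0;
    for (int t = 0; t < trials; t++) {
        int a  = rand() & 1;   //Alice's bit value
        int b  = rand() & 1;   //Alice's basis; this is the secret bit
        int a2 = rand() & 1;   //bit value of the announced decoy state
        //Alice announces two candidate states, one per basis: (a, b) and (a2, 1-b).
        int cbit[2] = { a, a2 };
        int cbas[2] = { b, 1 - b };
        int bp = rand() & 1;          //Bob's randomly chosen measurement basis
        int r  = measure(a, b, bp);   //Bob's measurement outcome
        //Bob declares the bit valid only if his outcome contradicts the candidate
        //lying in his measurement basis; the other candidate must then be the real
        //state, and its basis is the secret bit.
        for (int j = 0; j < 2; j++) {
            if (cbas[j] == bp && cbit[j] != r) {
                conclusive++;
                if (cbas[1 - j] == b) correct++;
            }
        }
    }
    printf("conclusive: %.3f (expected 0.25), errors: %d\n",
           (double)conclusive / trials, conclusive - correct);
    return 0;
}

In this simulation the conclusive fraction comes out near 1/4, and every conclusive round yields the correct secret bit, consistent with the sifting rule just described.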
Consider for example the scenario that Alice transmits $|\psi _{00}\rangle $ and announces the two states $|\psi _{00}\rangle $ and $|\psi _{01}\rangle $. If Bob measures in the computational basis, his only possible measurement is $|\psi _{00}\rangle $. This outcome is clearly consistent with the state having been $|\psi _{00}\rangle $, but it would also be a possible outcome if the state had been $|\psi _{01}\rangle $. If Bob measures in the Hadamard basis, either $|\psi _{01}\rangle $ or $|\psi _{11}\rangle $ could be measured, each with probability ½. If the outcome is $|\psi _{01}\rangle $ then again this state is consistent with either starting state. On the other hand, an outcome of $|\psi _{11}\rangle $ cannot possibly be observed from a qubit in state $|\psi _{01}\rangle $. Thus in the case that Bob measures in the Hadamard basis and observes state $|\psi _{11}\rangle $ (and only in that case), Bob can deduce which state he was sent and therefore what the secret bit is. From the remaining $k$ bits where both Bob's measurement was conclusive, Alice randomly chooses $k/2$ bits and discloses her choices over the public channel. Both Alice and Bob announce these bits publicly and run a check to see if more than a certain number of them agree. If this check passes, Alice and Bob proceed to use privacy amplification and information reconciliation techniques to create some number of shared secret keys. Otherwise, they cancel and start over. The advantage of this scheme relative to the simpler BB84 protocol is that Alice never announces the basis of her bit. As a result, Eve needs to store more copies of the qubit in order to be able to eventually determine the state than she would if the basis were directly announced. Intended use The intended use of SARG04 is in situations where the information is originated by a Poissonian source producing weak pulses (this means: mean number of photons < 1) and received by an imperfect detector, which is when attenuated laser pulses are used instead of single photons. Such a SARG04 system can be reliable up to a distance of about 10 km.[1] Modus operandi The modus operandi of SARG04 is based on the principle that the hardware must remain the same (as prior protocols) and the only change must be in the protocol itself.[1] In the original "prepare and measure" version, SARG04's two conjugated bases are chosen with equal probability.[1] Double clicks (when both detectors click) are important for comprehending SARG04: double clicks work differently in BB84 and SARG04. In BB84, their item is discarded because there is no way to tell what bit Alice has sent. In SARG04, they are also discarded, "for simplicity", but their occurrence is monitored to prevent eavesdropping. See the paper for a full quantum analysis of the various cases.[1] Security Kiyoshi Tamaki and Hoi-Kwong Lo were successful in proving security for one and two-photon pulses using SARG04.[1] It has been confirmed that SARG04 is more robust than BB84 against incoherent PNS attacks.[1] Unfortunately an incoherent attack has been identified which performs better than a simple phase-covariant cloning machine, and SARG04 has been found to be particularly vulnerable in single-photon implementations when Q >= 14.9%.[1] Comparison with BB84 In single-photon implementations, SARG04 was theorised to be equal with BB84, but experiments have shown that it is inferior.[1] References 1. Cyril Branciard, Nicolas Gisin, Barbara Kraus, Valerio Scarani (2005). 
"Security of two quantum cryptography protocols using the same four qubit states". Physical Review A. 72 (3): 032301. arXiv:quant-ph/0505035. Bibcode:2005PhRvA..72c2301B. doi:10.1103/PhysRevA.72.032301.{{cite journal}}: CS1 maint: uses authors parameter (link) Bibliography • Valerio Scarani; Antonio Acín; Grégoire Ribordy; Nicolas Gisin (2004). "Quantum Cryptography Protocols Robust against Photon Number Splitting Attacks for Weak Laser Pulse Implementations". Physical Review Letters. 92 (5): 057901. arXiv:quant-ph/0211131. Bibcode:2004PhRvL..92e7901S. doi:10.1103/PhysRevLett.92.057901. PMID 14995344. • Chi-Hang Fred Fung; Kiyoshi Tamaki; Hoi-Kwong Lo (2006). "Performance of two quantum-key-distribution protocols". Physical Review A. 73 (1): 012337. arXiv:quant-ph/0510025. Bibcode:2006PhRvA..73a2337F. doi:10.1103/PhysRevA.73.012337. • Branciard, Cyril; Gisin, Nicolas; Kraus, Barbara; Scarani, Valerio (2005). "Security of two quantum cryptography protocols using the same four qubit states". Physical Review A. 72: 32301. arXiv:quant-ph/0505035. Bibcode:2005PhRvA..72c2301B. doi:10.1103/physreva.72.032301. See also • Quantum cryptography • BB84 protocol Quantum information science General • DiVincenzo's criteria • NISQ era • Quantum computing • timeline • Quantum information • Quantum programming • Quantum simulation • Qubit • physical vs. logical • Quantum processors • cloud-based Theorems • Bell's • Eastin–Knill • Gleason's • Gottesman–Knill • Holevo's • Margolus–Levitin • No-broadcasting • No-cloning • No-communication • No-deleting • No-hiding • No-teleportation • PBR • Threshold • Solovay–Kitaev • Purification Quantum communication • Classical capacity • entanglement-assisted • quantum capacity • Entanglement distillation • Monogamy of entanglement • LOCC • Quantum channel • quantum network • Quantum teleportation • quantum gate teleportation • Superdense coding Quantum cryptography • Post-quantum cryptography • Quantum coin flipping • Quantum money • Quantum key distribution • BB84 • SARG04 • other protocols • Quantum secret sharing Quantum algorithms • Amplitude amplification • Bernstein–Vazirani • Boson sampling • Deutsch–Jozsa • Grover's • HHL • Hidden subgroup • Quantum annealing • Quantum counting • Quantum Fourier transform • Quantum optimization • Quantum phase estimation • Shor's • Simon's • VQE Quantum complexity theory • BQP • EQP • QIP • QMA • PostBQP Quantum processor benchmarks • Quantum supremacy • Quantum volume • Randomized benchmarking • XEB • Relaxation times • T1 • T2 Quantum computing models • Adiabatic quantum computation • Continuous-variable quantum information • One-way quantum computer • cluster state • Quantum circuit • quantum logic gate • Quantum machine learning • quantum neural network • Quantum Turing machine • Topological quantum computer Quantum error correction • Codes • CSS • quantum convolutional • stabilizer • Shor • Bacon–Shor • Steane • Toric • gnu • Entanglement-assisted Physical implementations Quantum optics • Cavity QED • Circuit QED • Linear optical QC • KLM protocol Ultracold atoms • Optical lattice • Trapped-ion QC Spin-based • Kane QC • Spin qubit QC • NV center • NMR QC Superconducting • Charge qubit • Flux qubit • Phase qubit • Transmon Quantum programming • OpenQASM-Qiskit-IBM QX • Quil-Forest/Rigetti QCS • Cirq • Q# • libquantum • many others... • Quantum information science • Quantum mechanics topics
Boolean satisfiability problem In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proven to be NP-complete; see Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists; yet this belief has not been proved mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing. Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols,[1] which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design,[2] and automatic theorem proving. Definitions A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory,[3][4] algorithmics, cryptography[5][6] and artificial intelligence.[7] Conjunctive normal form A literal is either a variable (in which case it is called a positive literal) or the negation of a variable (called a negative literal). A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses (or a single clause). For example, x1 is a positive literal, ¬x2 is a negative literal, x1 ∨ ¬x2 is a clause. The formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). 
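As a concrete illustration of these definitions, the following sketch evaluates a CNF formula under a given assignment. The clause encoding (a DIMACS-like convention in which the literal +i denotes xi and −i denotes ¬xi) and the function names are choices made for this example only.

#include <stdbool.h>
#include <stddef.h>

//A clause is true under an assignment iff at least one of its literals is true.
static bool eval_clause(const int *lits, size_t n, const bool *value)
{
    for (size_t k = 0; k < n; k++) {
        int v = lits[k] > 0 ? lits[k] : -lits[k];
        bool lit = lits[k] > 0 ? value[v] : !value[v];
        if (lit) return true;
    }
    return false;
}

//A CNF formula is true under an assignment iff every one of its clauses is true.
static bool eval_cnf(const int *const *clauses, const size_t *len,
                     size_t m, const bool *value)
{
    for (size_t i = 0; i < m; i++)
        if (!eval_clause(clauses[i], len[i], value))
            return false;
    return true;
}

For the formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 above, the clauses would be encoded as {1, −2}, {−1, 2, 3} and {−1}; with value[1] = value[2] = false (and value[3] arbitrary), eval_cnf returns true, in agreement with the evaluation just shown.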
In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a=TRUE or a=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively. For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form R(l1,...,ln) for some Boolean function R and (ordinary) literals li. Different sets of allowed boolean functions lead to different problem versions. As an example, R(¬x,a,b) is a generalized clause, and R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z) is a generalized conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just when exactly one of its arguments is. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2) ∨ ... ∨ (xn∧yn) into conjunctive normal form yields (x1 ∨ x2 ∨ … ∨ xn) ∧ (y1 ∨ x2 ∨ … ∨ xn) ∧ (x1 ∨ y2 ∨ … ∨ xn) ∧ (y1 ∨ y2 ∨ … ∨ xn) ∧ ... ∧ (x1 ∨ x2 ∨ … ∨ yn) ∧ (y1 ∨ x2 ∨ … ∨ yn) ∧ (x1 ∨ y2 ∨ … ∨ yn) ∧ (y1 ∨ y2 ∨ … ∨ yn); while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2n clauses of n variables. However, with use of the Tseytin transformation, we may find an equisatisfiable conjunctive normal form formula with length linear in the size of the original propositional logic formula. Complexity Main article: Cook–Levin theorem SAT was the first known NP-complete problem, as proved by Stephen Cook at the University of Toronto in 1971[8] and independently by Leonid Levin at the Russian Academy of Sciences in 1973.[9] Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class NP can be reduced to the SAT problem for CNF[note 1] formulas, sometimes called CNFSAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments. NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See Algorithms for solving SAT below. 3-satisfiability Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete also; this problem is called 3-SAT, 3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause l1 ∨ ⋯ ∨ ln to a conjunction of n - 2 clauses (l1 ∨ l2 ∨ x2) ∧ (¬x2 ∨ l3 ∨ x3) ∧ (¬x3 ∨ l4 ∨ x4) ∧ ⋯ ∧ (¬xn − 3 ∨ ln − 2 ∨ xn − 2) ∧ (¬xn − 2 ∨ ln − 1 ∨ ln) where x2, ⋯ , xn − 2 are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent, they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original, i.e. 
the length growth is polynomial.[10] 3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are also NP-hard.[note 2] This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses, the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting[note 3] literals from different clauses, cf. picture. The graph has a c-clique if and only if the formula is satisfiable.[11] There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)n where n is the number of variables in the 3-SAT proposition, and succeeds with high probability to correctly decide 3-SAT.[12] The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any $k>2$) in exp(o(n)) time (i.e., fundamentally faster than exponential in n). Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in number recursive calls made by a DPLL algorithm. They identified a phase transition region from almost certainly satisfiable to almost certainly unsatisfiable formulas at the clauses-to-variables ratio at about 4.26.[13] 3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered with each clause containing up to k literals. However, since for any k ≥ 3, this problem can neither be easier than 3-SAT nor harder than SAT, and the latter two are NP-complete, so must be k-SAT. Some authors restrict k-SAT to CNF formulas with exactly k literals. This doesn't lead to a different complexity class either, as each clause l1 ∨ ⋯ ∨ lj with j < k literals can be padded with fixed dummy variables to l1 ∨ ⋯ ∨ lj ∨ dj+1 ∨ ⋯ ∨ dk. After padding all clauses, 2k-1 extra clauses[note 4] have to be appended to ensure that only d1 = ⋯ = dk=FALSE can lead to a satisfying assignment. Since k doesn't depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses, as in ¬x ∨ ¬y ∨ ¬y. Special cases of SAT Conjunctive normal form Conjunctive normal form (in particular with 3 literals per clause) is often considered the canonical representation for SAT formulas. As shown above, the general SAT problem reduces to 3-SAT, the problem of determining satisfiability for formulas in this form. Disjunctive normal form SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are a disjunction of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; for an example exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal forms. 
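A minimal sketch of the linear-time check just described for disjunctive normal form: a DNF formula is satisfiable exactly when some conjunction contains no complementary pair of literals. The literal encoding (+i for xi, −i for ¬xi) and all names are illustrative assumptions, not part of any standard interface.

#include <stdbool.h>
#include <stdlib.h>

//True iff a single conjunction contains no variable both plain and negated.
static bool conj_satisfiable(const int *lits, size_t n, size_t nvars)
{
    signed char *seen = calloc(nvars + 1, sizeof *seen); //0 unseen, +1 positive, -1 negative
    bool ok = (seen != NULL);
    for (size_t k = 0; ok && k < n; k++) {
        size_t v = (size_t)(lits[k] > 0 ? lits[k] : -lits[k]);
        signed char sign = lits[k] > 0 ? 1 : -1;
        if (seen[v] == -sign) ok = false;  //both x and NOT x occur: conjunction unsatisfiable
        else seen[v] = sign;
    }
    free(seen);
    return ok;
}

//A DNF formula is satisfiable iff at least one of its conjunctions is.
static bool dnf_satisfiable(const int *const *conjs, const size_t *len,
                            size_t m, size_t nvars)
{
    for (size_t i = 0; i < m; i++)
        if (conj_satisfiable(conjs[i], len[i], nvars))
            return true;
    return false;
}

Each literal is inspected once, so apart from the per-conjunction table setup the running time is linear in the length of the formula.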
Exactly-1 3-satisfiability A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast, ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability problem is called one-in-three positive 3-SAT. One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem "LO4" in the standard reference, Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. One-in-three 3-SAT was proved to be NP-complete by Thomas Jerome Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete.[14] Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six fresh boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by some setting of the fresh variables if and only if at least one of x, y, or z is TRUE, see picture (left). Thus any 3-SAT instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with 5m clauses and n+6m variables.[15] Another reduction involves only four fresh variables and three clauses: R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z), see picture (right). Not-all-equal 3-satisfiability Another variant is the not-all-equal 3-satisfiability problem (also called NAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine if an assignment to the variables exists such that in no clause all three literals have the same truth value. This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem.[14] Linear SAT A 3-SAT formula is Linear SAT (LSAT) if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. An LSAT formula can be depicted as a set of disjoint semi-closed intervals on a line. Deciding whether an LSAT formula is satisfiable is NP-complete.[16] 2-satisfiability SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT. This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally all OR operations in literals are changed to XOR operations, the result is called exclusive-or 2-satisfiability, which is a problem complete for the complexity class SL = L. Horn-satisfiability Main article: Horn-satisfiability The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or HORN-SAT. 
It can be solved in polynomial time by a single step of the Unit propagation algorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time. [17] Horn clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y, that is, if x1,...,xn are all TRUE, then y needs to be TRUE as well. A generalization of the class of Horn formulae is that of renameable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation. For example, (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3) ∧ ¬x1 by introducing y3 as negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula. XOR-satisfiability Solving an XOR-SAT example by Gaussian elimination Given formula ("⊕" means XOR, the red clause is optional) (a⊕c⊕d) ∧ (b⊕¬c⊕d) ∧ (a⊕b⊕¬d) ∧ (a⊕¬b⊕¬c) ∧ (¬a⊕b⊕c) Equation system ("1" means TRUE, "0" means FALSE) Each clause leads to one equation. a⊕c⊕d= 1 b⊕¬c⊕d= 1 a⊕b⊕¬d= 1 a⊕¬b⊕¬c= 1 ¬a⊕b⊕c ≃ 1 Normalized equation system using properties of Boolean rings (¬x=1⊕x, x⊕x=0) a⊕c⊕d= 1 b⊕c⊕d= 0 a⊕b⊕d= 0 a⊕b⊕c= 1 a⊕b⊕c ≃ 0 (If the red equation is present, it contradicts the last black one, so the system is unsolvable. Therefore, Gauss' algorithm is used only for the black equations.) Associated coefficient matrix   abcdline   1011 1 A 0111 0 B 1101 0 C 1110 1 D Transforming to echelon form   abcdoperation   1011 1 A 1101 0 C 1110 1 D 0111 0 B (swapped)   1011 1 A 0110 1 E = C⊕A 0101 0 F = D⊕A 0111 0 B   1011 1 A 0110 1 E 0011 1 G = F⊕E 0001 1 H = B⊕E Transforming to diagonal form   abcdoperation   1010 0 I = A⊕H 0110 1 E 0010 0 J = G⊕H 0001 1 H   1000 0 K = I⊕J 0100 1 L = E⊕J 0010 0 J 0001 1 H Solution: If the red clause is present:Unsolvable Else:a = 0 = FALSE b = 1 = TRUE c = 0 = FALSE d = 1 = TRUE As a consequence: R(a,c,d) ∧ R(b,¬c,d) ∧ R(a,b,¬d) ∧ R(a,¬b,¬c) ∧ R(¬a,b,c) is not 1-in-3-satisfiable, while (a ∨ c ∨ d) ∧ (b ∨ ¬c ∨ d) ∧ (a ∨ b ∨ ¬d) ∧ (a ∨ ¬b ∨ ¬c) is 3-satisfiable with a=c=FALSE and b=d=TRUE. Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain) OR operators.[note 5] This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by Gaussian elimination;[18] see the box for an example. This recast is based on the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field. Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT, cf. picture. 
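The Gaussian-elimination view described above admits a compact implementation when the number of variables is small enough to pack one equation into a machine word. The following sketch is illustrative only: it assumes at most 64 variables, and the type and function names are invented for this example. Each XOR clause contributes one row consisting of a coefficient bitmask and a right-hand-side bit, exactly as in the worked system above.

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

//One linear equation over GF(2): a coefficient bitmask plus a right-hand-side bit.
typedef struct { uint64_t coeff; int rhs; } xor_eq;

//Gaussian elimination mod 2; returns false iff the system (hence the formula)
//is unsatisfiable.  For a consistent system, a satisfying assignment can be
//read off the pivot rows afterwards.
static bool xor_sat(xor_eq *eq, size_t m, int nvars)
{
    size_t row = 0;
    for (int v = 0; v < nvars && row < m; v++) {
        size_t p = row;
        while (p < m && !((eq[p].coeff >> v) & 1)) p++;  //find a pivot row for variable v
        if (p == m) continue;                            //no remaining equation uses v
        xor_eq tmp = eq[row]; eq[row] = eq[p]; eq[p] = tmp;
        for (size_t i = 0; i < m; i++)                   //eliminate v from every other row
            if (i != row && ((eq[i].coeff >> v) & 1)) {
                eq[i].coeff ^= eq[row].coeff;
                eq[i].rhs   ^= eq[row].rhs;
            }
        row++;
    }
    for (size_t i = row; i < m; i++)                     //a leftover row "0 = 1" is a contradiction
        if (eq[i].coeff == 0 && eq[i].rhs == 1)
            return false;
    return true;
}

Applied to the four black equations of the boxed example it reports the system satisfiable; adding the red equation a⊕b⊕c = 0 makes it return false, matching the contradiction noted there.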
As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable. Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT. Schaefer's dichotomy theorem Main article: Schaefer's dichotomy theorem The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be subformulae in 2CNF. Schaefer's dichotomy theorem states that, for any restriction to Boolean functions that can be used to form these subformulae, the corresponding satisfiability problem is in P or NP-complete. The membership in P of the satisfiability of 2CNF, Horn, and XOR-SAT formulae are special cases of this theorem.[14] The following table summarizes some common variants of SAT. Code Name Restrictions Requirements Class 3SAT 3-satisfiability Each clause contains 3 literals. At least one literal must be true. NP-c 2SAT 2-satisfiability Each clause contains 2 literals. At least one literal must be true. NL-c 1-in-3-SAT Exactly-1 3-SAT Each clause contains 3 literals. Exactly one literal must be true. NP-c 1-in-3-SAT+ Exactly-1 Positive 3-SAT Each clause contains 3 positive literals. Exactly one literal must be true. NP-c NAE3SAT Not-all-equal 3-satisfiability Each clause contains 3 literals. Either one or two literals must be true. NP-c NAE3SAT+ Not-all-equal positive 3-SAT Each clause contains 3 positive literals. Either one or two literals must be true. NP-c PL-SAT Planar SAT The incidence graph (clause-variable graph) is planar. At least one literal must be true. NP-c LSAT Linear SAT Each clause contains 3 literals, intersects at most one other clause, and the intersection is exactly one literal. At least one literal must be true. NP-c HORN-SAT Horn satisfiability Horn clauses (at most one positive literal). At least one literal must be true. P-c XOR-SAT Xor satisfiability Each clause contains XOR operations rather than OR. The XOR of all literals must be true. P Extensions of SAT An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT) that can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions,[19] etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints. The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed to bind the Boolean variables. An example of such an expression would be ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z); it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-called tautology problem is obtained, which is co-NP-complete. If both quantifiers are allowed, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. 
Using highly parallel P systems, QBF-SAT problems can be solved in linear time.[20] Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments: • MAJ-SAT asks if the majority of all assignments make the formula TRUE. It is known to be complete for PP, a probabilistic class. • #SAT, the problem of counting how many variable assignments satisfy a formula, is a counting problem, not a decision problem, and is #P-complete. • UNIQUE SAT[21] is the problem of determining whether a formula has exactly one assignment. It is complete for US,[22] the complexity class describing problems solvable by a non-deterministic polynomial time Turing machine that accepts when there is exactly one nondeterministic accepting path and rejects otherwise. • UNAMBIGUOUS-SAT is the name given to the satisfiability problem when the input is restricted to formulas having at most one satisfying assignment. The problem is also called USAT.[23] A solving algorithm for UNAMBIGUOUS-SAT is allowed to exhibit any behavior, including endless looping, on a formula having several satisfying assignments. Although this problem seems easier, Valiant and Vazirani have shown[24] that if there is a practical (i.e. randomized polynomial-time) algorithm to solve it, then all problems in NP can be solved just as easily. • MAX-SAT, the maximum satisfiability problem, is an FNP generalization of SAT. It asks for the maximum number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation scheme (PTAS) for this problem unless P=NP. • WMSAT is the problem of finding an assignment of minimum weight that satisfy a monotone Boolean formula (i.e. a formula without any negation). Weights of propositional variables are given in the input of the problem. The weight of an assignment is the sum of weights of true variables. That problem is NP-complete (see Th. 1 of [25]). Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, 0-1 integer programming. Finding a satisfying assignment While SAT is a decision problem, the search problem of finding a satisfying assignment reduces to SAT. That is, each algorithm which correctly answers if an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, i.e. Φ with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE, otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the algorithm are required, where n is the number of distinct variables in Φ. This property is used in several theorems in complexity theory: NP ⊆ P/poly ⇒ PH = Σ2   (Karp–Lipton theorem) NP ⊆ BPP ⇒ NP = RP P = NP ⇒ FP = FNP Algorithms for solving SAT Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. 
clauses).[1] Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors,[19] automatic test pattern generation, routing of FPGAs,[26] planning, and scheduling problems, and so on. A SAT-solving engine is also considered to be an essential component in the electronic design automation toolbox. Major techniques used by modern SAT solvers include the Davis–Putnam–Logemann–Loveland algorithm (or DPLL), conflict-driven clause learning (CDCL), and stochastic local search algorithms such as WalkSAT. Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. Recent attempts have been made to learn an instance's satisfiability using deep learning techniques.[27] SAT solvers are developed and compared in SAT-solving contests.[28] Modern SAT solvers are also having significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others. See also • Unsatisfiable core • Satisfiability modulo theories • Counting SAT • Planar SAT • Karloff–Zwick algorithm • Circuit satisfiability Notes 1. The SAT problem for arbitrary formulas is NP-complete, too, since it is easily shown to be in NP, and it cannot be easier than SAT for CNF formulas. 2. i.e. at least as hard as every other problem in NP. A decision problem is NP-complete if and only if it is in NP and is NP-hard. 3. i.e. such that one literal is not the negation of the other 4. viz. all maxterms that can be built with d1,⋯,dk, except d1∨⋯∨dk 5. Formally, generalized conjunctive normal forms with a ternary boolean function R are employed, which is TRUE just if 1 or 3 of its arguments is. An input clause with more than 3 literals can be transformed into an equisatisfiable conjunction of clauses á 3 literals similar to above; i.e. XOR-SAT can be reduced to XOR-3-SAT. External links Wikimedia Commons has media related to Boolean satisfiability problem. • SAT Game: try solving a Boolean satisfiability problem yourself • The international SAT competition website • International Conference on Theory and Applications of Satisfiability Testing • Journal on Satisfiability, Boolean Modeling and Computation • SAT Live, an aggregate website for research on the satisfiability problem • Yearly evaluation of MaxSAT solvers References 1. Ohrimenko, Olga; Stuckey, Peter J.; Codish, Michael (2007), "Propagation = Lazy Clause Generation", Principles and Practice of Constraint Programming – CP 2007, Lecture Notes in Computer Science, vol. 4741, pp. 544–558, CiteSeerX 10.1.1.70.5471, doi:10.1007/978-3-540-74970-7_39, modern SAT solvers can often handle problems with millions of constraints and hundreds of thousands of variables. 2. Hong, Ted; Li, Yanjing; Park, Sung-Boem; Mui, Diana; Lin, David; Kaleq, Ziyad Abdel; Hakim, Nagib; Naeimi, Helia; Gardner, Donald S.; Mitra, Subhasish (November 2010). "QED: Quick Error Detection tests for effective post-silicon validation". 2010 IEEE International Test Conference. pp. 1–10. doi:10.1109/TEST.2010.5699215. ISBN 978-1-4244-7206-2. S2CID 7909084. 3. Karp, Richard M. (1972). "Reducibility Among Combinatorial Problems" (PDF). In Raymond E. Miller; James W. Thatcher (eds.). Complexity of Computer Computations. New York: Plenum. pp. 85–103. ISBN 0-306-30707-3. 
Archived from the original (PDF) on 2011-06-29. Retrieved 2020-05-07. Here: p.86 4. Aho, Alfred V.; Hopcroft, John E.; Ullman, Jeffrey D. (1974). The Design and Analysis of Computer Algorithms. Addison-Wesley. p. 403. ISBN 0-201-00029-6. 5. Massacci, Fabio; Marraro, Laura (2000-02-01). "Logical Cryptanalysis as a SAT Problem". Journal of Automated Reasoning. 24 (1): 165–203. doi:10.1023/A:1006326723002. S2CID 3114247. 6. Mironov, Ilya; Zhang, Lintao (2006). "Applications of SAT Solvers to Cryptanalysis of Hash Functions". In Biere, Armin; Gomes, Carla P. (eds.). Theory and Applications of Satisfiability Testing - SAT 2006. Lecture Notes in Computer Science. Vol. 4121. Springer. pp. 102–115. doi:10.1007/11814948_13. ISBN 978-3-540-37207-3. 7. Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). "Boolean Satisfiability Solvers and Their Applications in Model Checking". Proceedings of the IEEE. 103 (11): 2021–2035. doi:10.1109/JPROC.2015.2455034. S2CID 10190144. 8. Cook, Stephen A. (1971). "The complexity of theorem-proving procedures" (PDF). Proceedings of the third annual ACM symposium on Theory of computing - STOC '71. pp. 151–158. CiteSeerX 10.1.1.406.395. doi:10.1145/800157.805047. S2CID 7573663. Archived (PDF) from the original on 2022-10-09. 9. Levin, Leonid (1973). "Universal search problems (Russian: Универсальные задачи перебора, Universal'nye perebornye zadachi)". Problems of Information Transmission (Russian: Проблемы передачи информа́ции, Problemy Peredachi Informatsii). 9 (3): 115–116. (pdf) (in Russian), translated into English by Trakhtenbrot, B. A. (1984). "A survey of Russian approaches to perebor (brute-force searches) algorithms". Annals of the History of Computing. 6 (4): 384–400. doi:10.1109/MAHC.1984.10036. S2CID 950581. 10. Aho, Hopcroft & Ullman (1974), Theorem 10.4. 11. Aho, Hopcroft & Ullman (1974), Theorem 10.5. 12. Schöning, Uwe (Oct 1999). "A probabilistic algorithm for k-SAT and constraint satisfaction problems" (PDF). 40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039). pp. 410–414. doi:10.1109/SFFCS.1999.814612. ISBN 0-7695-0409-4. S2CID 123177576. Archived (PDF) from the original on 2022-10-09. 13. Selman, Bart; Mitchell, David; Levesque, Hector (1996). "Generating Hard Satisfiability Problems". Artificial Intelligence. 81 (1–2): 17–29. CiteSeerX 10.1.1.37.7362. doi:10.1016/0004-3702(95)00045-3. 14. Schaefer, Thomas J. (1978). "The complexity of satisfiability problems" (PDF). Proceedings of the 10th Annual ACM Symposium on Theory of Computing. San Diego, California. pp. 216–226. CiteSeerX 10.1.1.393.8951. doi:10.1145/800133.804350. 15. Schaefer (1978), p. 222, Lemma 3.5. 16. Arkin, Esther M.; Banik, Aritra; Carmi, Paz; Citovsky, Gui; Katz, Matthew J.; Mitchell, Joseph S. B.; Simakov, Marina (2018-12-11). "Selecting and covering colored points". Discrete Applied Mathematics. 250: 75–86. doi:10.1016/j.dam.2018.05.011. ISSN 0166-218X. 17. Buning, H.K.; Karpinski, Marek; Flogel, A. (1995). "Resolution for Quantified Boolean Formulas". Information and Computation. Elsevier. 117 (1): 12–18. doi:10.1006/inco.1995.1025. 18. Moore, Cristopher; Mertens, Stephan (2011), The Nature of Computation, Oxford University Press, p. 366, ISBN 9780199233212. 19. R. E. Bryant, S. M. German, and M. N. Velev, Microprocessor Verification Using Efficient Decision Procedures for a Logic of Equality with Uninterpreted Functions, in Analytic Tableaux and Related Methods, pp. 1–13, 1999. 20. Alhazov, Artiom; Martín-Vide, Carlos; Pan, Linqiang (2003). 
"Solving a PSPACE-Complete Problem by Recognizing P Systems with Restricted Active Membranes". Fundamenta Informaticae. 58: 67–77. Here: Sect.3, Thm.3.1 21. Blass, Andreas; Gurevich, Yuri (1982-10-01). "On the unique satisfiability problem". Information and Control. 55 (1): 80–88. doi:10.1016/S0019-9958(82)90439-9. ISSN 0019-9958. 22. "Complexity Zoo:U - Complexity Zoo". complexityzoo.uwaterloo.ca. Archived from the original on 2019-07-09. Retrieved 2019-12-05. 23. Kozen, Dexter C. (2006). "Supplementary Lecture F: Unique Satisfiability". Theory of Computation. Texts in Computer Science. Springer. p. 180. ISBN 9781846282973. 24. Valiant, L.; Vazirani, V. (1986). "NP is as easy as detecting unique solutions" (PDF). Theoretical Computer Science. 47: 85–93. doi:10.1016/0304-3975(86)90135-0. 25. Buldas, Ahto; Lenin, Aleksandr; Willemson, Jan; Charnamord, Anton (2017). "Simple Infeasibility Certificates for Attack Trees". In Obana, Satoshi; Chida, Koji (eds.). Advances in Information and Computer Security. Lecture Notes in Computer Science. Vol. 10418. Springer International Publishing. pp. 39–55. doi:10.1007/978-3-319-64200-0_3. ISBN 9783319642000. 26. Gi-Joon Nam; Sakallah, K. A.; Rutenbar, R. A. (2002). "A new FPGA detailed routing approach via search-based Boolean satisfiability" (PDF). IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 21 (6): 674. doi:10.1109/TCAD.2002.1004311. Archived from the original (PDF) on 2016-03-15. Retrieved 2015-09-04. 27. Selsam, Daniel; Lamm, Matthew; Bünz, Benedikt; Liang, Percy; de Moura, Leonardo; Dill, David L. (11 March 2019). "Learning a SAT Solver from Single-Bit Supervision". arXiv:1802.03685 [cs.AI]. 28. "The international SAT Competitions web page". Retrieved 2007-11-15. Sources • This article includes material from https://web.archive.org/web/20070708233347/http://www.sigda.org/newsletter/2006/eNews_061201.html by Prof. Karem Sakallah. Further reading (by date of publication) • Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman. pp. A9.1: LO1–LO7, pp. 259–260. ISBN 0-7167-1045-5. • Marques-Silva, J.; Glass, T. (1999). "Combinational equivalence checking using satisfiability and recursive learning". Design, Automation and Test in Europe Conference and Exhibition, 1999. Proceedings (Cat. No. PR00078) (PDF). p. 145. doi:10.1109/DATE.1999.761110. ISBN 0-7695-0078-1. Archived (PDF) from the original on 2022-10-09. • Clarke, E.; Biere, A.; Raimi, R.; Zhu, Y. (2001). "Bounded Model Checking Using Satisfiability Solving". Formal Methods in System Design. 19: 7–34. doi:10.1023/A:1011276507260. S2CID 2484208. • Giunchiglia, E.; Tacchella, A. (2004). Giunchiglia, Enrico; Tacchella, Armando (eds.). Theory and Applications of Satisfiability Testing. Lecture Notes in Computer Science. Vol. 2919. doi:10.1007/b95238. ISBN 978-3-540-20851-8. S2CID 31129008. • Babic, D.; Bingham, J.; Hu, A. J. (2006). "B-Cubing: New Possibilities for Efficient SAT-Solving" (PDF). IEEE Transactions on Computers. 55 (11): 1315. doi:10.1109/TC.2006.175. S2CID 14819050. • Rodriguez, C.; Villagra, M.; Baran, B. (2007). "Asynchronous team algorithms for Boolean Satisfiability" (PDF). 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems. pp. 66–69. doi:10.1109/BIMNICS.2007.4610083. S2CID 15185219. • Gomes, Carla P.; Kautz, Henry; Sabharwal, Ashish; Selman, Bart (2008). "Satisfiability Solvers". In Harmelen, Frank Van; Lifschitz, Vladimir; Porter, Bruce (eds.). 
Handbook of knowledge representation. Foundations of Artificial Intelligence. Vol. 3. Elsevier. pp. 89–134. doi:10.1016/S1574-6526(07)03002-7. ISBN 978-0-444-52211-5. • Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). "Boolean Satisfiability Solvers and Their Applications in Model Checking". Proceedings of the IEEE. 103 (11): 2021–2035. doi:10.1109/JPROC.2015.2455034. S2CID 10190144. • Knuth, Donald E. (2022). "Chapter 7.2.2.2: Satifiability". The Art of Computer Programming. Vol. 4B: Combinatorial Algorithms, Part 2. Addison-Wesley Professional. pp. 185–369. ISBN 978-0-201-03806-4. Logic • Outline • History Major fields • Computer science • Formal semantics (natural language) • Inference • Philosophy of logic • Proof • Semantics of logic • Syntax Logics • Classical • Informal • Critical thinking • Reason • Mathematical • Non-classical • Philosophical Theories • Argumentation • Metalogic • Metamathematics • Set Foundations • Abduction • Analytic and synthetic propositions • Contradiction • Paradox • Antinomy • Deduction • Deductive closure • Definition • Description • Entailment • Linguistic • Form • Induction • Logical truth • Name • Necessity and sufficiency • Premise • Probability • Reference • Statement • Substitution • Truth • Validity Lists topics • Mathematical logic • Boolean algebra • Set theory other • Logicians • Rules of inference • Paradoxes • Fallacies • Logic symbols •  Philosophy portal • Category • WikiProject (talk) • changes
SBI ring In algebra, an SBI ring is a ring R (with identity) such that every idempotent of R modulo the Jacobson radical can be lifted to R. The abbreviation SBI was introduced by Irving Kaplansky and stands for "suitable for building idempotent elements" (Jacobson 1956, p.53). Examples • Any ring with nil radical is SBI. • Any Banach algebra is SBI: more generally, so is any compact topological ring. • The ring of rational numbers with odd denominator, and more generally, any local ring, is SBI. References • Jacobson, Nathan (1956), Structure of rings, American Mathematical Society, Colloquium Publications, vol. 37, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1037-8, MR 0081264, Zbl 0073.02002 • Kaplansky, Irving (1972), Fields and Rings, Chicago Lectures in Mathematics (2nd ed.), University Of Chicago Press, pp. 124–125, ISBN 0-226-42451-0, Zbl 1001.16500
SC (complexity) In computational complexity theory, SC (Steve's Class, named after Stephen Cook)[1] is the complexity class of problems solvable by a deterministic Turing machine in polynomial time (class P) and polylogarithmic space (class PolyL) (that is, O((log n)k) space for some constant k). It may also be called DTISP(poly, polylog), where DTISP stands for deterministic time and space. Note that the definition of SC differs from P ∩ PolyL, since for the former, it is required that a single algorithm runs in both polynomial time and polylogarithmic space; while for the latter, two separate algorithms will suffice: one that runs in polynomial time, and another that runs in polylogarithmic space. (It is unknown whether SC and P ∩ PolyL are equivalent). DCFL, the strict subset of context-free languages recognized by deterministic pushdown automata, is contained in SC, as shown by Cook in 1979.[2] It is open if all context-free languages can be recognized in SC, although they are known be in P ∩ PolyL.[3] It is open if directed st-connectivity is in SC, although it is known to be in P ∩ PolyL (because of a DFS algorithm and Savitch's theorem). This question is equivalent to NL ⊆ SC. RL and BPL are classes of problems acceptable by probabilistic Turing machines in logarithmic space and polynomial time. Noam Nisan showed in 1992 the weak derandomization result that both are contained in SC.[4] In other words, given polylogarithmic space, a deterministic machine can simulate logarithmic space probabilistic algorithms. References 1. Complexity Zoo: SC 2. S. A. Cook. Deterministic CFL's are accepted simultaneously in polynomial time and log squared space. Proceedings of ACM STOC'79, pp. 338–345. 1979. 3. TCS Stack Exchange: CFG parsing using o(n^2) space 4. Nisan, Noam (1992), "RL ⊆ SC", Proceedings of the 24th ACM Symposium on Theory of computing (STOC '92), Victoria, British Columbia, Canada, pp. 619–623, doi:10.1145/129712.129772{{citation}}: CS1 maint: location missing publisher (link). Important complexity classes Considered feasible • DLOGTIME • AC0 • ACC0 • TC0 • L • SL • RL • NL • NL-complete • NC • SC • CC • P • P-complete • ZPP • RP • BPP • BQP • APX • FP Suspected infeasible • UP • NP • NP-complete • NP-hard • co-NP • co-NP-complete • AM • QMA • PH • ⊕P • PP • #P • #P-complete • IP • PSPACE • PSPACE-complete Considered infeasible • EXPTIME • NEXPTIME • EXPSPACE • 2-EXPTIME • ELEMENTARY • PR • R • RE • ALL Class hierarchies • Polynomial hierarchy • Exponential hierarchy • Grzegorczyk hierarchy • Arithmetical hierarchy • Boolean hierarchy Families of classes • DTIME • NTIME • DSPACE • NSPACE • Probabilistically checkable proof • Interactive proof system List of complexity classes
Séminaire de Géométrie Algébrique du Bois Marie In mathematics, the Séminaire de Géométrie Algébrique du Bois Marie (SGA) was an influential seminar run by Alexander Grothendieck. It was a unique phenomenon of research and publication outside of the main mathematical journals that ran from 1960 to 1969 at the IHÉS near Paris. (The name came from the small wood on the estate in Bures-sur-Yvette where the IHÉS was located from 1962.) The seminar notes were eventually published in twelve volumes, all except one in the Springer Lecture Notes in Mathematics series. Style The material has a reputation of being hard to read for a number of reasons. More elementary or foundational parts were relegated to the EGA series of Grothendieck and Jean Dieudonné, causing long strings of logical dependencies in the statements. The style is very abstract and makes heavy use of category theory. Moreover, an attempt was made to achieve maximally general statements, while assuming that the reader is aware of the motivations and concrete examples. First publication The original notes to SGA were published in fascicles by the IHÉS, most of which went through two or three revisions. These were published as the seminar proceeded, beginning in the early 60's and continuing through most of the decade. They can still be found in large math libraries, but distribution was limited. In the late 60's and early 70's, the original seminar notes were comprehensively revised and rewritten to take into account later developments. In addition, a new volume, SGA 4½, was compiled by Pierre Deligne and published in 1977; it contains simplified and new results by Deligne within the scope of SGA4 as well as some material from SGA5, which had not yet appeared at that time. The revised notes, except for SGA2, were published by Springer in its Lecture Notes in Mathematics series. After a dispute with Springer, Grothendieck refused the permission for reprints of the series. While these later revisions were more widely distributed than the original fascicles, they are still uncommon outside of libraries. References to SGA typically mean the later, revised editions and not the original fascicles; some of the originals were labelled by capital letters, thus for example S.G.A.D. = SGA3 and S.G.A.A. = SGA4. Series titles The volumes of the SGA series are the following: • SGA1 Revêtements étales et groupe fondamental, 1960–1961 (Étale coverings and the fundamental group), Lecture Notes in Mathematics 224, 1971 • SGA2 Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux, 1961–1962 (Local cohomology of coherent sheaves and global and local Lefschetz theorems), North Holland 1968 • SGA3 Schémas en groupes, 1962–1964 (Group schemes), Lecture Notes in Mathematics 151, 152 and 153, 1970 • SGA4 Théorie des topos et cohomologie étale des schémas, 1963–1964 (Topos theory and étale cohomology), Lecture Notes in Mathematics 269, 270 and 305, 1972/3 • SGA4½ Cohomologie étale (Étale cohomology), Lecture Notes in Mathematics 569, 1977. SGA 4½ does not correspond to any of the actual seminars. It is a compilation by Pierre Deligne of some survey articles, new results within the scope of SGA4, and finally material from SGA5. 
• SGA5 Cohomologie l-adique et fonctions L, 1965–1966 (l-adic cohomology and L-functions), Lecture Notes in Mathematics 589, 1977 • SGA6 Théorie des intersections et théorème de Riemann-Roch, 1966–1967 (Intersection theory and the Riemann–Roch theorem), Lecture Notes in Mathematics 225, 1971 • SGA7 Groupes de monodromie en géométrie algébrique, 1967–1969 (Monodromy groups in algebraic geometry), Lecture Notes in Mathematics 288 and 340, 1972/3. • SGA8 was never written. The occasional mentions of SGA8 usually refer to either chapter 8 of SGA1, or Berthelot's work on crystalline cohomology later published outside the SGA series. Re-publishing SGA In the 1990s, it became obvious that the lack of availability of the SGA was becoming more and more of a problem to researchers and graduate students in algebraic geometry: not only are the copies in book form too few for the growing number of researchers, but they are also difficult to read because of the way they are typeset (on an electric typewriter, with mathematical formulae written by hand). Thus, under the impetus of various mathematicians from several countries, a project was formed of re-publishing SGA in a more widely available electronic format and using LaTeX for typesetting; also, various notes are to be added to correct for minor mistakes or obscurities. The result should be published by the Société Mathématique de France. Legal permission to reprint the works was obtained from every author except Alexander Grothendieck himself, who could not be contacted; it was decided to proceed without his explicit agreement on the grounds that his refusal for the SGA to be re-published by Springer-Verlag was an objection against Springer and not one of principle. As a first step, the entire work was scanned and made available on-line (see the links section below) by Frank Calegari, Jim Borger and William Stein. The job of typesetting the text anew and proofreading it was then distributed among dozens of volunteers (most of them junior French mathematicians, because of the required fluency in French and knowledge of algebraic geometry), starting with SGA1 in late 2001. The coordinating editor for the work on SGA1 was Bas Edixhoven from University of Leiden (at the time University of Rennes): the first version was available on the arXiv.org e-print archive on June 20, 2002, and the proof-read version was uploaded on January 4, 2004, and later published in book form by the Société Mathématique de France. Work on SGA2 was started in 2004 with Yves Laszlo as coordinating editor. The LaTeX source file is available on the arXiv.org e-print archive; SGA2 appeared in print in late 2005 by the Société Mathématique de France (see https://web.archive.org/web/20091130171320/http://smf.emath.fr/Publications/DocumentsMathematiques/). Laszlo has also edited SGA4 and recently Philippe Gille and Patrick Polo have uploaded TeXed version of SGA3. In January 2010, however, Grothendieck requested that work cease on republishing SGA. In late 2014, work on republishing SGA resumed and it was restored to the Grothendieck circle site. Bibliographic information SGA 1 • Grothendieck, Alexander; Raynaud, Michèle (2003) [1971], Revêtements étales et groupe fondamental (SGA 1), Documents Mathématiques (Paris) [Mathematical Documents (Paris)], vol. 3, Paris: Société Mathématique de France, arXiv:math/0206203, Bibcode:2002math......6203G, ISBN 978-2-85629-141-2, MR 2017446 • Grothendieck, Alexandre (1971). 
Séminaire de Géométrie Algébrique du Bois Marie - 1960-61 - Revêtements étales et groupe fondamental - (SGA 1) (Lecture notes in mathematics 224). Lecture Notes in Mathematics (in French). Vol. 224. Berlin; New York: Springer-Verlag. xxii+447. doi:10.1007/BFb0058656. ISBN 978-3-540-05614-0. MR 0354651. SGA 2 • Grothendieck, Alexander; Raynaud, Michèle (2005) [1968], Laszlo, Yves (ed.), Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux (SGA 2), Documents Mathématiques (Paris), vol. 4, Paris: Société Mathématique de France, arXiv:math/0511279, Bibcode:2005math.....11279G, ISBN 978-2-85629-169-6, MR 2171939 • Grothendieck, Alexandre; Raynaud, Michèle (1968). Séminaire de Géométrie Algébrique du Bois Marie - 1962 - Cohomologie locale des faisceaux cohérents et théorèmes de Lefschetz locaux et globaux - (SGA 2) (Advanced Studies in Pure Mathematics 2) (in French). Amsterdam: North-Holland Publishing Company. vii+287. MR 0476737. SGA 3 • Demazure, Michel; Alexandre Grothendieck, eds. (1970). Séminaire de Géométrie Algébrique du Bois Marie - 1962-64 - Schémas en groupes - (SGA 3) - vol. 1 (Lecture notes in mathematics 151). Lecture Notes in Mathematics (in French). Vol. 151. Berlin; New York: Springer-Verlag. pp. xv+564. doi:10.1007/BFb0058993. ISBN 978-3-540-05179-4. MR 0274458. • Demazure, Michel; Alexandre Grothendieck, eds. (1970). Séminaire de Géométrie Algébrique du Bois Marie - 1962-64 - Schémas en groupes - (SGA 3) - vol. 2 (Lecture notes in mathematics 152). Lecture Notes in Mathematics (in French). Vol. 152. Berlin; New York: Springer-Verlag. pp. ix+654. doi:10.1007/BFb0059005. ISBN 978-3-540-05180-0. MR 0274459. • Demazure, Michel; Alexandre Grothendieck, eds. (1970). Séminaire de Géométrie Algébrique du Bois Marie - 1962-64 - Schémas en groupes - (SGA 3) - vol. 3. Lecture Notes in Mathematics (in French). Vol. 153. Berlin; New York: Springer-Verlag. vii+529. doi:10.1007/BFb0059027. ISBN 978-3-540-05181-7. MR 0274460. SGA 4 • Artin, Michael; Alexandre Grothendieck; Jean-Louis Verdier, eds. (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1963-64 - Théorie des topos et cohomologie étale des schémas - (SGA 4) - vol. 1. Lecture Notes in Mathematics (in French). Vol. 269. Berlin; New York: Springer-Verlag. xix+525. doi:10.1007/BFb0081551. ISBN 978-3-540-05896-0. MR 0354652. • Artin, Michael; Alexandre Grothendieck; Jean-Louis Verdier, eds. (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1963-64 - Théorie des topos et cohomologie étale des schémas - (SGA 4) - vol. 2. Lecture Notes in Mathematics (in French). Vol. 270. Berlin; New York: Springer-Verlag. pp. iv+418. doi:10.1007/BFb0061319. ISBN 978-3-540-06012-3. MR 0354653. • Artin, Michael; Alexandre Grothendieck; Jean-Louis Verdier, eds. (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1963-64 - Théorie des topos et cohomologie étale des schémas - (SGA 4) - vol. 3 (PDF). Lecture Notes in Mathematics (in French). Vol. 305. Berlin; New York: Springer-Verlag. pp. vi+640. doi:10.1007/BFb0070714. ISBN 978-3-540-06118-2. MR 0354654. SGA 4½ • Deligne, Pierre (1977). Séminaire de Géométrie Algébrique du Bois Marie - Cohomologie étale - (SGA 4½). Lecture Notes in Mathematics (in French). Vol. 569. Berlin; New York: Springer-Verlag. pp. iv+312. doi:10.1007/BFb0091516. ISBN 978-3-540-08066-4. MR 0463174. SGA 5 • Illusie, Luc, ed. (1977). Séminaire de Géométrie Algébrique du Bois Marie - 1965-66 - Cohomologie l-adique et Fonctions L - (SGA 5). Lecture Notes in Mathematics (in French). 
Vol. 589. Berlin; New York: Springer-Verlag. xii+484. doi:10.1007/BFb0096802. ISBN 3-540-08248-4. MR 0491704. SGA 6 • Berthelot, Pierre; Alexandre Grothendieck; Luc Illusie, eds. (1971). Séminaire de Géométrie Algébrique du Bois Marie - 1966-67 - Théorie des intersections et théorème de Riemann-Roch - (SGA 6) (Lecture notes in mathematics 225). Lecture Notes in Mathematics (in French). Vol. 225. Berlin; New York: Springer-Verlag. xii+700. doi:10.1007/BFb0066283. ISBN 978-3-540-05647-8. MR 0354655. SGA 7 • Grothendieck, Alexandre (1972). Séminaire de Géométrie Algébrique du Bois Marie - 1967-69 - Groupes de monodromie en géométrie algébrique - (SGA 7) - vol. 1. Lecture Notes in Mathematics (in French). Vol. 288. Berlin; New York: Springer-Verlag. viii+523. doi:10.1007/BFb0068688. ISBN 978-3-540-05987-5. MR 0354656. • Deligne, Pierre; Nicholas Katz, eds. (1973). Séminaire de Géométrie Algébrique du Bois Marie - 1967-69 - Groupes de monodromie en géométrie algébrique - (SGA 7) - vol. 2. Lecture Notes in Mathematics (in French). Vol. 340. Berlin; New York: Springer-Verlag. pp. x+438. doi:10.1007/BFb0060505. ISBN 978-3-540-06433-6. MR 0354657. See also • Éléments de géométrie algébrique • Fondements de la Géometrie Algébrique External links • Scanned version of SGA hosted by MSRI • SGA, EGA, FGA By Mateo Carmona
SIAM Journal on Numerical Analysis The SIAM Journal on Numerical Analysis (SINUM; until 1965: Journal of the Society for Industrial & Applied Mathematics, Series B: Numerical Analysis) is a peer-reviewed mathematical journal published by the Society for Industrial and Applied Mathematics[1] that covers research on the analysis of numerical methods. The journal was established in 1964 and appears bimonthly.[2] The editor-in-chief is Angela Kunoth. SIAM Journal on Numerical Analysis • Discipline: Numerical analysis • Language: English • Edited by: Angela Kunoth • Former name: Journal of the Society for Industrial & Applied Mathematics, Series B: Numerical Analysis • History: 1964–present • Publisher: Society for Industrial and Applied Mathematics • Frequency: Bimonthly • ISO 4 abbreviation: SIAM J. Numer. Anal. • CODEN: SJNAAM • ISSN: 0036-1429 (print), 1095-7170 (web) References 1. SIAM journal on numerical analysis : a publication of the Society of Industrial and Applied Mathematics. (Journal, magazine, 1966). OCLC 6035777. 2. "SIAM J. on Numerical Analysis – Online access". Society for Industrial and Applied Mathematics. Retrieved 2011-06-03. External links • Official website
SIMPLEC algorithm The SIMPLEC (Semi-Implicit Method for Pressure Linked Equations-Consistent) algorithm, a modified form of the SIMPLE algorithm, is a commonly used numerical procedure in the field of computational fluid dynamics to solve the Navier–Stokes equations. The algorithm was developed by Van Doormaal and Raithby in 1984. It follows the same steps as the SIMPLE algorithm, with the variation that the momentum equations are manipulated, allowing the SIMPLEC velocity correction equations to omit terms that are less significant than those omitted in SIMPLE. This modification attempts to minimize the effects of dropping velocity neighbor correction terms.[1] Algorithm The steps involved are the same as in the SIMPLE algorithm, and the algorithm is iterative in nature. p*, u*, v* are the guessed pressure, x-direction velocity and y-direction velocity respectively; p', u', v' are the corresponding correction terms; p, u, v are the corrected fields; Φ is the property for which we are solving; and the d terms involve the under-relaxation factor. The steps are as follows: 1. Specify the boundary conditions and guess the initial values. 2. Determine the velocity and pressure gradients. 3. Calculate the pseudo-velocities: ${\hat {u}}_{i,J}={\frac {\sum a_{nb}u_{nb}^{*}+b_{i,J}}{a_{i,J}}}$ ${\hat {v}}_{I,j}={\frac {\sum a_{nb}v_{nb}^{*}+b_{I,j}}{a_{I,j}}}$ 4. Solve the pressure equation and obtain p: $a_{I,J}p_{I,J}=a_{I-1,J}p_{I-1,J}+a_{I+1,J}p_{I+1,J}+a_{I,J-1}p_{I,J-1}+a_{I,J+1}p_{I,J+1}+b_{I,J}$ 5. Set p* = p. 6. Using p*, solve the discretized momentum equations to obtain u* and v*: $a_{i,J}u_{i,J}^{*}=\sum a_{nb}u_{nb}^{*}+(p_{I-1,J}^{*}-p_{I,J}^{*})A_{i,J}+b_{i,J}$ $a_{I,j}v_{I,j}^{*}=\sum a_{nb}v_{nb}^{*}+(p_{I,J-1}^{*}-p_{I,J}^{*})A_{I,j}+b_{I,j}$ 7. Solve the pressure correction equation: $a_{I,J}p'_{I,J}=a_{I-1,J}p'_{I-1,J}+a_{I+1,J}p'_{I+1,J}+a_{I,J-1}p'_{I,J-1}+a_{I,J+1}p'_{I,J+1}+b'_{I,J}$ 8. Obtain the pressure correction term and evaluate the corrected velocities, giving p, u, v, Φ*: $u_{i,J}=u_{i,J}^{*}+d_{i,J}(p'_{I-1,J}-p'_{I,J})$ $v_{I,j}=v_{I,j}^{*}+d_{I,j}(p'_{I,J-1}-p'_{I,J})$ 9. Solve all other discretized transport equations: $a_{I,J}\phi _{I,J}=a_{I-1,J}\phi _{I-1,J}+a_{I+1,J}\phi _{I+1,J}+a_{I,J-1}\phi _{I,J-1}+a_{I,J+1}\phi _{I,J+1}+b_{I,J}^{\phi }$ 10. If Φ shows convergence, then STOP; if not, set p* = p, u* = u, v* = v, Φ* = Φ and start the iteration again.[2][3] Peculiar features • The discretized pressure correction equation is the same as in the SIMPLE algorithm, except for the d terms used in the momentum equations. • p = p* + p', which shows that the pressure correction is not under-relaxed in SIMPLEC as it is in SIMPLE. • The SIMPLEC algorithm is seen to converge 1.2–1.3 times faster than the SIMPLE algorithm. • Unlike the SIMPLER algorithm, it does not solve extra equations. • The cost per iteration is the same as in the case of SIMPLE. • Like SIMPLE, a bad pressure field guess will destroy a good velocity field.[4] See also • SIMPLE algorithm • SIMPLER algorithm • Navier–Stokes equations References 1. "Variants of SIMPLE algorithm" (PDF). engineering.purdue.edu. Retrieved 11 November 2014. 2. Versteeg, H.K.; Malalasekera, W. An introduction to Computational Fluid Dynamics- The finite volume method (1st edition, 1995 ed.). Longman Group Ltd. pp. 149–151. 3. Patankar, S. V. (1980). Numerical Heat Transfer and Fluid Flow. Taylor & Francis. ISBN 978-0-89116-522-4. 4. "SIMPLE solver for driven cavity problem" (PDF). engineering.purdue.edu.
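As an illustration of the corrector stage (step 8 above), the following minimal Python/NumPy sketch applies the velocity-correction formulas on a standard staggered grid; the array shapes, the variable names, and the assumption that p' and the d coefficients have already been computed are choices made here purely for illustration, not part of the published method's specification.

import numpy as np

def simplec_velocity_correction(u_star, v_star, p_prime, d_u, d_v):
    # u_star:  (nx+1, ny) guessed x-velocities at the vertical faces (i, J)
    # v_star:  (nx, ny+1) guessed y-velocities at the horizontal faces (I, j)
    # p_prime: (nx, ny)   pressure corrections at the cell centres (I, J)
    # d_u, d_v: the d coefficients of the momentum discretization, stored at
    #           the same staggered locations as u_star and v_star
    u = u_star.copy()
    v = v_star.copy()
    # u_{i,J} = u*_{i,J} + d_{i,J} (p'_{I-1,J} - p'_{I,J}) on interior faces
    u[1:-1, :] += d_u[1:-1, :] * (p_prime[:-1, :] - p_prime[1:, :])
    # v_{I,j} = v*_{I,j} + d_{I,j} (p'_{I,J-1} - p'_{I,J}) on interior faces
    v[:, 1:-1] += d_v[:, 1:-1] * (p_prime[:, :-1] - p_prime[:, 1:])
    return u, v

The distinguishing feature of SIMPLEC is not this update itself but how the d coefficients are formed: they use the consistent denominator a_P − Σ a_nb of the manipulated momentum equations rather than a_P alone, which is the reason the pressure correction needs no under-relaxation.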
Samuel James Patterson Samuel James Patterson (born September 7, 1948 in Belfast)[2] is a Northern Irish mathematician specializing in analytic number theory. He has been a professor at the University of Göttingen since 1981.[3] Samuel James Patterson • Born: September 7, 1948, Belfast • Alma mater: Cambridge University (PhD) • Known for: the Patterson–Sullivan measure; disproving the Kummer conjecture on cubic Gauss sums • Awards: Whitehead Prize (1984) • Fields: discontinuous groups, analytic number theory • Institutions: University of Göttingen • Thesis: The Limit Set of a Fuchsian Group (1975) • Doctoral advisor: Alan Beardon[1] • Website: University of Göttingen: Samuel J. Patterson (Photograph: Samuel Patterson, second from right, in 2004 in Oberwolfach, with, from left, Martin Huxley, Yōichi Motohashi and Matti Jutila.) Biography Patterson was born in Belfast and grew up in the east of the city, attending Grosvenor High School. He went to Clare College, Cambridge, in 1967, received his BA in mathematics in 1970, and his Ph.D. (completed in 1974, awarded in 1975) on "The limit set of a Fuchsian group" under Alan Beardon.[4] He spent 1974–1975 at Göttingen, was back at Cambridge in 1975–1979, and was at Harvard in 1979–1981 as Benjamin Peirce Lecturer. From 1981 to his retirement in 2011 he was professor of mathematics at Göttingen. His 18 PhD students include Jörg Brüdern and Bernd Otto Stratmann.[1] He is the brother of the Northern Irish taxonomist David Joseph Patterson. Mathematics Subjects that Patterson deals with include discontinuous groups (Fuchsian groups), different zeta functions (for example those of Ruelle and Selberg, in particular those associated with certain groups of infinite covolume[5][6][7]), metaplectic groups,[8] generalized theta functions, and exponential sums in analytic number theory. In 1978, together with Roger Heath-Brown, he disproved the Kummer conjecture on cubic Gauss sums.[9][10] He proposed a new conjecture[11] which was based on insights from his determination of the coefficients of the cuspidal Fourier expansions of the metaplectic cubic theta function.[12] This revised conjecture remained open until 2021, when it was finally proved by Alexander Dunn and Maksym Radziwiłł at Caltech.[13][14] In 1976 Patterson introduced what later became known as the Patterson–Sullivan measure.[15] The concept was further developed and extended by Dennis Sullivan starting in 1979.[16] It has proved to be a useful tool in studying Fuchsian and Kleinian groups (and certain generalizations) and their limit sets.[17][18] History of mathematics Patterson is also interested in the history of mathematics. For example, together with Ralf Meyer, he contributed an updated introduction to a new edition of a classic textbook by Hermann Weyl,[19] and an introduction to the classic textbook of Whittaker and Watson.[20] He has collaborated with Norbert Schappacher on elucidating the biography of Kurt Heegner.
Honors and awards In 1984 Patterson received the Whitehead Prize of the London Mathematical Society.[21] He is on the Executive Committee of the Leibniz Archives based in Hannover[22] and has been a member of the Göttingen Academy of Sciences since 1998.[23] From 1982 to 1994 he was an editor of Crelle's Journal.[24] To mark his 60th birthday friends and colleagues in Göttingen organized a three day conference to celebrate his life in July, 2009.[19] Speakers at this gathering included Daniel Bump, Dorian Goldfeld, David Kazhdan, and Andrew Ranicki.[25] A commemorative volume, Contributions in Analytic and Algebraic Number Theory (Springer 2012), edited by Valentin Blomer & Preda Mihăilescu, collecting articles related to or developed at the conference, was issued as a Festschrift for him.[26] Selected papers • ′A lattice-point problem in hyperbolic space′, Mathematika, 22, 81-88 (1975). DOI: 10.1112/S0025579300004526 • ′The limit set of a Fuchsian group′, Acta Math. 136, 241-273 (1976) DOI: 10.1007/BF02392046 • ′The Laplacian operator on a Riemann surface′, Compos. Math. 31, 83-107 (1975) Compos. Math. 32, 71-112 (1976) Compos. Math. 33, 227-259 (1976) • ′A cubic analogue of the theta series′, J. Reine Angew. Math. 296, 125-161 (1977). J. Reine Angew. Math. 296, 217-220 (1977) DOI: 10.1515/crll.1977.296.125, 10.1515/crll.1977.296.217 • ′On the distribution of Kummer sums′, J. Reine Angew. Math. 303/304, 126-143 (1978). DOI: 10.1515/crll.1978.303-304.126 • (With D.R. Heath-Brown)′ The distribution of Kummer sums at prime arguments′, J. Reine Angew. Math. 310, 111-130 (1979). • (With D.A. Kazhdan)′ Metaplectic forms′, Publ. Math., Inst. Hautes Étud. Sci. 59, 35-142 (1984) + 62, 149(1985) DOI: 10.1007/BF02698770, 10.1007/BF02698809 • ′The Hardy-Littlewood method and Diophantine analysis in the light of Igusa'a Work′, Mathematica Goettingensis (1985-11,45pp) • ′The Selberg zeta-function of a Kleinian group. Number theory, trace formulas and discrete groups′, Symp. in Honor of Atle Selberg, Oslo/Norway 1987, 409-441 (1989) • (With P.A. Perry) [Appendix by Charles Epstein] ′The divisor of Selberg’s zeta function for Kleinian groups′, Duke Math. J. 106, No. 2, 321-390 (2001) DOI: 10.1215/S0012-7094-01-10624-8 • (With R. Livné) ′The first moment of cubic exponential sums′, Invent. Math. 148, No. 1, 79-116 (2002) DOI: 10.1007/s002220100189 References 1. Samuel James Patterson at the Mathematics Genealogy Project 2. Author Profile: Samuel James Patterson in zbMATH database 3. Literature by and about Samuel J. Patterson in the German National Library catalogue 4. Patterson, S. J. (1976). "The limit set of a Fuchsian group". Acta Mathematica. 136: 241–273. doi:10.1007/BF02392046. 5. ′The Laplacian operator on a Riemann surface′, Compos. Math. 31, 83-107 (1975) Compos. Math. 32, 71-112 (1976) Compos. Math. 33, 227-259 (1976) 6. ′The Selberg zeta-function of a Kleinian group. Number theory, trace formulas and discrete groups′, Symp. in Honor of Atle Selberg, Oslo/Norway 1987, 409-441 (1989) 7. (With P.A. Perry) [Appendix by Charles Epstein] ′The divisor of Selberg’s zeta function for Kleinian groups′, Duke Math. J. 106, No. 2, 321-390 (2001) DOI: 10.1215/S0012-7094-01-10624-8) 8. Kazhdan, D. A.; Patterson, Samuel James (1984). "Metaplectic forms". Publications Mathématiques de l'IHÉS. 59 (310): 35–142. 9. Heath-Brown, D. Roger; Patterson, Samuel James (1979). "The distribution of Kummer sums at prime arguments". Journal für die reine und angewandte Mathematik. 1979 (310): 111–130. 
doi:10.1515/crll.1979.310.111. MR 0546667. 10. Heath-Brown, D. R. (2000). "Kummer's conjecture for cubic Gauss sums" (PDF). Israel Journal of Mathematics. 120: part A, 97–124. doi:10.1007/s11856-000-1273-y. MR 1815372. 11. ′On the distribution of Kummer sums′, J. Reine Angew. Math. 303/304, 126-143 (1978). DOI: 10.1515/crll.1978.303-304.126 12. ′A cubic analogue of the theta series′, J. Reine Angew. Math. 296, 125-161 (1977). J. Reine Angew. Math. 296, 217-220 (1977) DOI: 10.1515/crll.1977.296.125, 10.1515/crll.1977.296.217 13. After 175 Years, Theorem Finally Has a Proof by Katie Spalding, IFLScience, Aug 26, 2022 14. A Numerical Mystery From the 19th Century Finally Gets Solved by Leila Sloman, Quanta Magazine, August 15, 2022 15. S. Patterson, "The limit set of a Fuchsian group," Acta Math. 136 (1976), pp. 241–273. 16. D. Sullivan, "The density at infinity of a discrete group of hyperbolic motions," Publ. Math. IHÉS 50 (1979), pp. 171–202. 17. D. Sullivan, "Entropy, Hausdorff measures old and new, and limit sets of geometrically finite Kleinian groups," Acta Math. 153 (1984), pp. 259–277 18. Peter J. Nicholls: The ergodic theory of discrete groups, LMS Lecture Note Series, 143 (1989). 19. International Conference on the Occasion of the 60th Birthday of Samuel J. Patterson Göttingen, July 27–29, 2009 20. E.T. Whittaker and G.N. Watson: Modern Analysis, 5th Edition, (Edited and prepared for publication by Victor H. Moll), 2021. 21. List of LMS prize winners The London Mathematical Society 22. Leibniz-Archiv/Leibniz Research Center Hannover 23. Göttingen Academy of Sciences: member Samuel James Patterson 24. "Frontmatter". Journal für die reine und angewandte Mathematik. 737. April 2018. doi:10.1515/crelle-2018-frontmatter737. 25. Mathematics: International Conference on Questions of Number Theory University of Göttingen 26. Festschrift for S. J. Patterson The text that comprises this volume is a collection of surveys and original works from experts in the fields of algebraic number theory, analytic number theory, harmonic analysis, and hyperbolic geometry. A portion of the collected contributions have been developed from lectures given at the "International Conference on the Occasion of the 60th Birthday of S. J. Patterson" External links • Samuel James Patterson at the Mathematics Genealogy Project
SKI combinator calculus The SKI combinator calculus is a combinatory logic system and a computational system. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It can be likened to a reduced version of the untyped lambda calculus. It was introduced by Moses Schönfinkel[1] and Haskell Curry.[2] All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called combinators). Notation Although the most formal representation of the objects in this system requires binary trees, for simpler typesetting they are often represented as parenthesized expressions, as a shorthand for the tree they represent. Any subtrees may be parenthesized, but often only the right-side subtrees are parenthesized, with left associativity implied for any unparenthesized applications. For example, ISK means ((IS)K). Using this notation, a tree whose left subtree is the tree KS and whose right subtree is the tree SK can be written as KS(SK). If more explicitness is desired, the implied parentheses can be included as well: ((KS)(SK)). Informal description Informally, and using programming language jargon, a tree (xy) can be thought of as a function x applied to an argument y. When evaluated (i.e., when the function is "applied" to the argument), the tree "returns a value", i.e., transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed. The evaluation operation is defined as follows: (x, y, and z represent expressions made from the functions S, K, and I, and set values): I returns its argument: Ix = x K, when applied to any argument x, yields a one-argument constant function Kx, which, when applied to any argument y, returns x: Kxy = x S is a substitution operator. It takes three arguments and then returns the first argument applied to the third, which is then applied to the result of the second argument applied to the third. More clearly: Sxyz = xz(yz) Example computation: SKSK evaluates to KK(SK) by the S-rule. Then if we evaluate KK(SK), we get K by the K-rule. As no further rule can be applied, the computation halts here. For all trees x and all trees y, SKxy will always evaluate to y in two steps, Ky(xy) = y, so the ultimate result of evaluating SKxy will always equal the result of evaluating y. We say that SKx and I are "functionally equivalent" because they always yield the same result when applied to any y. From these definitions it can be shown that SKI calculus is not the minimum system that can fully perform the computations of lambda calculus, as all occurrences of I in any expression can be replaced by (SKK) or (SKS) or (SK whatever) and the resulting expression will yield the same result. So the "I" is merely syntactic sugar. Since I is optional, the system is also referred as SK calculus or SK combinator calculus. It is possible to define a complete system using only one (improper) combinator. An example is Chris Barker's iota combinator, which can be expressed in terms of S and K as follows: ιx = xSK It is possible to reconstruct S, K, and I from the iota combinator. Applying ι to itself gives ιι = ιSK = SSKK = SK(KK) which is functionally equivalent to I. 
K can be constructed by applying ι twice to I (which is equivalent to application of ι to itself): ι(ι(ιι)) = ι(ιιSK) = ι(ISK) = ι(SK) = SKSK = K. Applying ι one more time gives ι(ι(ι(ιι))) = ιK = KSK = S. Formal definition The terms and derivations in this system can also be more formally defined: Terms: The set T of terms is defined recursively by the following rules. 1. S, K, and I are terms. 2. If τ1 and τ2 are terms, then (τ1τ2) is a term. 3. Nothing is a term if not required to be so by the first two rules. Derivations: A derivation is a finite sequence of terms defined recursively by the following rules (where α and ι are words over the alphabet {S, K, I, (, )} while β, γ and δ are terms): 1. If Δ is a derivation ending in an expression of the form α(Iβ)ι, then Δ followed by the term αβι is a derivation. 2. If Δ is a derivation ending in an expression of the form α((Kβ)γ)ι, then Δ followed by the term αβι is a derivation. 3. If Δ is a derivation ending in an expression of the form α(((Sβ)γ)δ)ι, then Δ followed by the term α((βδ)(γδ))ι is a derivation. Assuming a sequence is a valid derivation to begin with, it can be extended using these rules. All derivations of length 1 are valid derivations. SKI expressions Self-application and recursion SII is an expression that takes an argument and applies that argument to itself: SIIα = Iα(Iα) = αα This is also known as U combinator, Ux = xx. One interesting property of it is that its self-application is irreducible: SII(SII) = I(SII)(I(SII)) = SII(I(SII)) = SII(SII) Another thing is that it allows one to write a function that applies one thing to the self application of another thing: (S(Kα)(SII))β = Kαβ(SIIβ) = α(Iβ(Iβ)) = α(ββ) or it can be seen as defining yet another combinator directly, Hxy = x(yy). This function can be used to achieve recursion. If β is the function that applies α to the self application of something else, β = Hα = S(Kα)(SII) then the self-application of this β is the fixed point of that α: SIIβ = ββ = α(ββ) = α(α(ββ)) = $\ldots $ If α expresses a "computational step" computed by αρν for some ρ and ν, that assumes ρν' expresses "the rest of the computation" (for some ν' that α will "compute" from ν), then its fixed point ββ expresses the whole recursive computation, since using the same function ββ for the "rest of computation" call (with ββν = α(ββ)ν) is the very definition of recursion: ρν' = ββν' = α(ββ)ν' = ... . α will have to employ some kind of conditional to stop at some "base case" and not make the recursive call then, to avoid divergence. This can be formalized, with β = Hα = S(Kα)(SII) = S(KS)Kα(SII) = S(S(KS)K)(K(SII)) α as Yα = SIIβ = SII(Hα) = S(K(SII))H α = S(K(SII))(S(S(KS)K)(K(SII))) α which gives us one possible encoding of the Y combinator. This becomes much shorter with the use of the B and C combinators, as the equivalent Yα = S(KU)(SB(KU))α = U(BαU) = BU(CBU)α or directly, as Hαβ = α(ββ) = BαUβ = CBUαβ Yα = U(Hα) = BU(CBU)α The reversal expression S(K(SI))K reverses the following two terms: S(K(SI))Kαβ → K(SI)α(Kα)β → SI(Kα)β → Iβ(Kαβ) → Iβα → βα Boolean logic SKI combinator calculus can also implement Boolean logic in the form of an if-then-else structure. An if-then-else structure consists of a Boolean expression that is either true (T) or false (F) and two arguments, such that: Txy = x and Fxy = y The key is in defining the two Boolean expressions. 
The first works just like one of our basic combinators: T = K Kxy = x The second is also fairly simple: F = SK SKxy = Ky(xy) = y Once true and false are defined, all Boolean logic can be implemented in terms of if-then-else structures. Boolean NOT (which returns the opposite of a given Boolean) works the same as the if-then-else structure, with F and T as the second and third values, so it can be implemented as a postfix operation: NOT = (F)(T) = (SK)(K) If this is put in an if-then-else structure, it can be shown that this has the expected result (T)NOT = T(F)(T) = F (F)NOT = F(F)(T) = T Boolean OR (which returns T if either of the two Boolean values surrounding it is T) works the same as an if-then-else structure with T as the second value, so it can be implemented as an infix operation: OR = T = K If this is put in an if-then-else structure, it can be shown that this has the expected result: (T)OR(T) = T(T)(T) = T (T)OR(F) = T(T)(F) = T (F)OR(T) = F(T)(T) = T (F)OR(F) = F(T)(F) = F Boolean AND (which returns T if both of the two Boolean values surrounding it are T) works the same as an if-then-else structure with F as the third value, so it can be implemented as a postfix operation: AND = F = SK If this is put in an if-then-else structure, it can be shown that this has the expected result: (T)(T)AND = T(T)(F) = T (T)(F)AND = T(F)(F) = F (F)(T)AND = F(T)(F) = F (F)(F)AND = F(F)(F) = F Because this defines T, F, NOT (as a postfix operator), OR (as an infix operator), and AND (as a postfix operator) in terms of SKI notation, this proves that the SKI system can fully express Boolean logic. As the SKI calculus is complete, it is also possible to express NOT, OR and AND as prefix operators: NOT = S(SI(KF))(KT) (as S(SI(KF))(KT)x = SI(KF)x(KTx) = Ix(KFx)T = xFT) OR = SI(KT) (as SI(KT)xy = Ix(KTx)y = xTy) AND = SS(K(KF)) (as SS(K(KF))xy = Sx(K(KF)x)y = xy(KFy) = xyF) Connection to intuitionistic logic The combinators K and S correspond to two well-known axioms of sentential logic: AK: A → (B → A), AS: (A → (B → C)) → ((A → B) → (A → C)). Function application corresponds to the rule modus ponens: MP: from A and A → B, infer B. The axioms AK and AS, and the rule MP are complete for the implicational fragment of intuitionistic logic. In order for combinatory logic to have as a model: • The implicational fragment of classical logic, would require the combinatory analog to the law of excluded middle, i.e., Peirce's law; • Complete classical logic, would require the combinatory analog to the sentential axiom F → A. This connection between the types of combinators and the corresponding logical axioms is an instance of the Curry–Howard isomorphism. Examples of reduction There may be multiple ways to do a reduction. 
All are equivalent, as long as you don't break order of operations • ${\textrm {SKI(KIS)}}$ • ${\textrm {SKI(KIS)}}\Rightarrow {\textrm {K(KIS)(I(KIS))}}\Rightarrow {\textrm {K(KIS)x}}\Rightarrow {\textrm {KIS}}\Rightarrow {\textrm {I}}$ • ${\textrm {SKI(KIS)}}\Rightarrow {\textrm {SKII}}\Rightarrow {\textrm {KI(II)}}\Rightarrow {\textrm {KII}}\Rightarrow {\textrm {I}}$ • ${\textrm {KS(I(SKSI))}}$ • ${\textrm {KS(I(SKSI))}}\Rightarrow {\textrm {KS(I(KI(SI)))}}\Rightarrow {\textrm {KS(I(I))}}\Rightarrow {\textrm {KS(II)}}\Rightarrow {\textrm {KSI}}\Rightarrow {\textrm {S}}$ • ${\textrm {KS(I(SKSI))}}\Rightarrow {\textrm {KS(x)}}\Rightarrow {\textrm {S}}$ • ${\textrm {SKIK}}\Rightarrow {\textrm {KK(IK)}}\Rightarrow {\textrm {KKK}}\Rightarrow {\textrm {K}}$ See also • Combinatory logic • B, C, K, W system • Fixed point combinator • Lambda calculus • Functional programming • Unlambda programming language • The Iota and Jot programming languages, designed to be even simpler than SKI. • To Mock a Mockingbird References 1. Schönfinkel, M. (1924). "Über die Bausteine der mathematischen Logik". Mathematische Annalen. 92 (3–4): 305–316. doi:10.1007/BF01448013. S2CID 118507515. Translated by Stefan Bauer-Mengelberg as van Heijenoort, Jean, ed. (2002) [1967]. "On the building blocks of mathematical logic". A Source Book in Mathematical Logic 1879–1931. Harvard University Press. pp. 355–366. ISBN 9780674324497. 2. Curry, Haskell Brooks (1930). "Grundlagen der Kombinatorischen Logik" [Foundations of combinatorial logic]. American Journal of Mathematics (in German). Johns Hopkins University Press. 52 (3): 509–536. doi:10.2307/2370619. JSTOR 2370619. • Smullyan, Raymond (1985). To Mock a Mockingbird. Knopf. ISBN 0-394-53491-3. A gentle introduction to combinatory logic, presented as a series of recreational puzzles using bird watching metaphors. • — (1994). "Ch. 17–20". Diagonalization and Self-Reference. Oxford University Press. ISBN 9780198534501. OCLC 473553893. are a more formal introduction to combinatory logic, with a special emphasis on fixed point results. External links • O'Donnell, Mike "The SKI Combinator Calculus as a Universal System." • Keenan, David C. (2001) "To Dissect a Mockingbird." • Rathman, Chris, "Combinator Birds." • ""Drag 'n' Drop Combinators (Java Applet)." • A Calculus of Mobile Processes, Part I (PostScript) (by Milner, Parrow, and Walker) shows a scheme for combinator graph reduction for the SKI calculus in pages 25–28. • the Nock programming language may be seen as an assembly language based on SK combinator calculus in the same way that traditional assembly language is based on Turing machines. Nock instruction 2 (the "Nock operator") is the S combinator and Nock instruction 1 is the K combinator. The other primitive instructions in Nock (instructions 0,3,4,5, and the pseudo-instruction "implicit cons") are not necessary for universal computation, but make programming more convenient by providing facilities for dealing with binary tree data structures and arithmetic; Nock also provides 5 more instructions (6,7,8,9,10) that could have been built out of these primitives.
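The reduction rules and the Boolean encodings described above can be checked mechanically. The sketch below, in Python, uses an ad hoc term representation (pairs for application, the strings 'S', 'K', 'I' for the combinators) and a leftmost-outermost reduction strategy; all helper names are chosen here for illustration and are not part of any standard library. It reproduces the example reductions SKSK ⇒ K and SKI(KIS) ⇒ I, and verifies the truth tables of the prefix operators NOT, OR and AND given above.

# Terms are either 'S', 'K', 'I' or a pair (f, a) meaning "f applied to a".
def app(*ts):
    # Left-associative application: app('S', 'K', 'S', 'K') is ((SK)S)K.
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

def step(t):
    # One leftmost-outermost reduction step; returns (term, whether a redex fired).
    if isinstance(t, tuple):
        f, a = t
        if f == 'I':                                        # I x -> x
            return a, True
        if isinstance(f, tuple) and f[0] == 'K':            # K x y -> x
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            x, y, z = f[0][1], f[1], a                      # S x y z -> x z (y z)
            return ((x, z), (y, z)), True
        nf, changed = step(f)                               # otherwise reduce inside,
        if changed:                                         # function part first
            return (nf, a), True
        na, changed = step(a)
        return (f, na), changed
    return t, False

def normalize(t, limit=1000):
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    raise RuntimeError('no normal form reached within the step limit')

def show(t):
    if isinstance(t, tuple):
        f, a = t
        return show(f) + ('(' + show(a) + ')' if isinstance(a, tuple) else show(a))
    return t

print(show(normalize(app('S', 'K', 'S', 'K'))))                 # prints K
print(show(normalize(app('S', 'K', 'I', app('K', 'I', 'S')))))  # prints I

# The Boolean encodings from the article, written as terms.
T, F = 'K', app('S', 'K')
NOT = app('S', app(app('S', 'I'), app('K', F)), app('K', T))    # S(SI(KF))(KT)
OR = app(app('S', 'I'), app('K', T))                            # SI(KT)
AND = app(app('S', 'S'), app('K', app('K', F)))                 # SS(K(KF))
assert show(normalize(app(NOT, T))) == show(F)
assert show(normalize(app(NOT, F))) == show(T)
assert all(show(normalize(app(OR, a, b))) == show(T if T in (a, b) else F)
           for a in (T, F) for b in (T, F))
assert all(show(normalize(app(AND, a, b))) == show(T if (a, b) == (T, T) else F)
           for a in (T, F) for b in (T, F))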
Binary tetrahedral group In mathematics, the binary tetrahedral group, denoted 2T or ⟨2,3,3⟩,[2] is a certain nonabelian group of order 24. It is an extension of the tetrahedral group T or (2,3,3) of order 12 by a cyclic group of order 2, and is the preimage of the tetrahedral group under the 2:1 covering homomorphism Spin(3) → SO(3) of the special orthogonal group by the spin group. It follows that the binary tetrahedral group is a discrete subgroup of Spin(3) of order 24. The complex reflection group named 3(24)3 by G.C. Shephard, or 3[3]3 by Coxeter, is isomorphic to the binary tetrahedral group. The binary tetrahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism Spin(3) ≅ Sp(1), where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Elements (Symmetry projections: 8-fold and 12-fold.) 24 quaternion elements: • 1 order-1: 1 • 1 order-2: -1 • 6 order-4: ±i, ±j, ±k • 8 order-6: (+1±i±j±k)/2 • 8 order-3: (-1±i±j±k)/2. Explicitly, the binary tetrahedral group is given as the group of units in the ring of Hurwitz integers. There are 24 such units given by $\{\pm 1,\pm i,\pm j,\pm k,{\tfrac {1}{2}}(\pm 1\pm i\pm j\pm k)\}$ with all possible sign combinations. All 24 units have absolute value 1 and therefore lie in the unit quaternion group Sp(1). The convex hull of these 24 elements in 4-dimensional space forms a convex regular 4-polytope called the 24-cell. Properties The binary tetrahedral group, denoted by 2T, fits into the short exact sequence $1\to \{\pm 1\}\to 2\mathrm {T} \to \mathrm {T} \to 1.$ This sequence does not split, meaning that 2T is not a semidirect product of {±1} by T. In fact, there is no subgroup of 2T isomorphic to T. The binary tetrahedral group is the covering group of the tetrahedral group. Thinking of the tetrahedral group as the alternating group on four letters, T ≅ A4, we thus have the binary tetrahedral group as the covering group, 2T ≅ ${\widehat {\mathrm {A} _{4}}}$. The center of 2T is the subgroup {±1}. The inner automorphism group is isomorphic to A4, and the full automorphism group is isomorphic to S4.[3] The binary tetrahedral group can be written as a semidirect product $2\mathrm {T} =\mathrm {Q} \rtimes \mathrm {C} _{3}$ where Q is the quaternion group consisting of the 8 Lipschitz units and C3 is the cyclic group of order 3 generated by ω = −1/2(1 + i + j + k). The group C3 acts on the normal subgroup Q by conjugation. Conjugation by ω is the automorphism of Q that cyclically rotates i, j, and k. One can show that the binary tetrahedral group is isomorphic to the special linear group SL(2,3) – the group of all 2 × 2 matrices over the finite field F3 with unit determinant, with this isomorphism covering the isomorphism of the projective special linear group PSL(2,3) with the alternating group A4. Presentation The group 2T has a presentation given by $\langle r,s,t\mid r^{2}=s^{3}=t^{3}=rst\rangle $ or equivalently, $\langle s,t\mid (st)^{2}=s^{3}=t^{3}\rangle .$ Generators with these relations are given by $r=i\qquad s={\tfrac {1}{2}}(1+i+j+k)\qquad t={\tfrac {1}{2}}(1+i+j-k),$ with $r^{2}=s^{3}=t^{3}=-1$. Subgroups The quaternion group consisting of the 8 Lipschitz units forms a normal subgroup of 2T of index 3. This group and the center {±1} are the only nontrivial normal subgroups.
All other subgroups of 2T are cyclic groups generated by the various elements, with orders 3, 4, and 6.[4] Higher dimensions Just as the tetrahedral group generalizes to the rotational symmetry group of the n-simplex (as a subgroup of SO(n)), there is a corresponding higher binary group which is a 2-fold cover, coming from the cover Spin(n) → SO(n). The rotational symmetry group of the n-simplex can be considered as the alternating group on n + 1 points, An+1, and the corresponding binary group is a 2-fold covering group. For all higher dimensions except A6 and A7 (corresponding to the 5-dimensional and 6-dimensional simplexes), this binary group is the covering group (maximal cover) and is superperfect, but for dimensions 5 and 6 there is an additional exceptional 3-fold cover, and the binary groups are not superperfect. Usage in theoretical physics The binary tetrahedral group was used in the context of Yang–Mills theory in 1956 by Chen Ning Yang and others.[5] It was first used in flavor physics model building by Paul Frampton and Thomas Kephart in 1994.[6] In 2012 it was shown[7] that a relation between two neutrino mixing angles, derived[8] by using this binary tetrahedral flavor symmetry, agrees with experiment. See also • Binary polyhedral group • Binary cyclic group, ⟨n⟩, order 2n • Binary dihedral group, ⟨2,2,n⟩,[2] order 4n • Binary octahedral group, 2O = ⟨2,3,4⟩,[2] order 48 • Binary icosahedral group, 2I = ⟨2,3,5⟩,[2] order 120 Notes 1. Coxeter, Complex Regular Polytopes, p 109, Fig 11.5E 2. Coxeter&Moser: Generators and Relations for discrete groups: <l,m,n>: Rl = Sm = Tn = RST 3. "Special linear group:SL(2,3)". groupprops. 4. SL2(F3) on GroupNames 5. Case, E.M.; Robert Karplus; C.N. Yang (1956). "Strange Particles and the Conservation of Isotopic Spin". Physical Review. 101 (2): 874–876. Bibcode:1956PhRv..101..874C. doi:10.1103/PhysRev.101.874. S2CID 122544023. 6. Frampton, Paul H.; Thomas W. Kephart (1995). "Simple Nonabelian Finite Flavor Groups and Fermion Masses". International Journal of Modern Physics. A10 (32): 4689–4704. arXiv:hep-ph/9409330. Bibcode:1995IJMPA..10.4689F. doi:10.1142/s0217751x95002187. S2CID 7620375. 7. Eby, David A.; Paul H. Frampton (2012). "Nonzero theta(13) signals nonmaximal atmospheric neutrino mixing". Physical Review. D86 (11): 117–304. arXiv:1112.2675. Bibcode:2012PhRvD..86k7304E. doi:10.1103/physrevd.86.117304. S2CID 118408743. 8. Eby, David A.; Paul H. Frampton; Shinya Matsuzaki (2009). "Predictions for neutrino mixing angles in a T′ Model". Physics Letters. B671 (3): 386–390. arXiv:0801.4899. Bibcode:2009PhLB..671..386E. doi:10.1016/j.physletb.2008.11.074. S2CID 119272452. References • Conway, John H.; Smith, Derek A. (2003). On Quaternions and Octonions. Natick, Massachusetts: AK Peters, Ltd. ISBN 1-56881-134-9. • Coxeter, H. S. M. & Moser, W. O. J. (1980). Generators and Relations for Discrete Groups, 4th edition. New York: Springer-Verlag. ISBN 0-387-09212-9. 6.5 The binary polyhedral groups, p. 68
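The 24 units listed above and their orders can be verified with exact quaternion arithmetic. The short Python check below (the helper names are ad hoc, chosen for illustration) confirms that the set is closed under the Hamilton product, and therefore forms a group of order 24, and reproduces the element orders 1, 2, 3, 4 and 6 in the stated multiplicities.

from itertools import product
from fractions import Fraction
from collections import Counter

def qmul(a, b):
    # Hamilton product of quaternions written as tuples (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

units = set()
for pos in range(4):                        # the 8 units ±1, ±i, ±j, ±k
    for s in (1, -1):
        v = [Fraction(0)] * 4
        v[pos] = Fraction(s)
        units.add(tuple(v))
for signs in product((1, -1), repeat=4):    # the 16 units (±1 ± i ± j ± k)/2
    units.add(tuple(Fraction(s, 2) for s in signs))

assert len(units) == 24
# closed under multiplication, hence a subgroup of the unit quaternions
assert all(qmul(a, b) in units for a in units for b in units)

one = (Fraction(1), Fraction(0), Fraction(0), Fraction(0))
def order(q):
    n, p = 1, q
    while p != one:
        p, n = qmul(p, q), n + 1
    return n

print(Counter(order(q) for q in units))
# expected multiplicities: {1: 1, 2: 1, 3: 8, 4: 6, 6: 8}, matching the element list above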
Binary icosahedral group In mathematics, the binary icosahedral group 2I or ⟨2,3,5⟩[1] is a certain nonabelian group of order 120. It is an extension of the icosahedral group I or (2,3,5) of order 60 by the cyclic group of order 2, and is the preimage of the icosahedral group under the 2:1 covering homomorphism $\operatorname {Spin} (3)\to \operatorname {SO} (3)\,$ of the special orthogonal group by the spin group. It follows that the binary icosahedral group is a discrete subgroup of Spin(3) of order 120. It should not be confused with the full icosahedral group, which is a different group of order 120, and is rather a subgroup of the orthogonal group O(3). The binary icosahedral group is most easily described concretely as a discrete subgroup of the unit quaternions, under the isomorphism $\operatorname {Spin} (3)\cong \operatorname {Sp} (1)$ where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Elements Explicitly, the binary icosahedral group is given as the union of all even permutations of the following vectors: • 8 even permutations of $(\pm 1,0,0,0)$ • 16 even permutations of $(\pm 1/2,\pm 1/2,\pm 1/2,\pm 1/2)$ • 96 even permutations of $(0,\pm 1/2,\pm 1/2\phi ,\pm \phi /2)$ Here $\phi ={\frac {1+{\sqrt {5}}}{2}}$ is the golden ratio. In total there are 120 elements, namely the unit icosians. They all have unit magnitude and therefore lie in the unit quaternion group Sp(1). The 120 elements in 4-dimensional space match the 120 vertices of the 600-cell, a regular 4-polytope. Properties Central extension The binary icosahedral group, denoted by 2I, is the universal perfect central extension of the icosahedral group, and thus is quasisimple: it is a perfect central extension of a simple group. Explicitly, it fits into the short exact sequence $1\to \{\pm 1\}\to 2I\to I\to 1.\,$ This sequence does not split, meaning that 2I is not a semidirect product of { ±1 } by I. In fact, there is no subgroup of 2I isomorphic to I. The center of 2I is the subgroup { ±1 }, so that the inner automorphism group is isomorphic to I. The full automorphism group is isomorphic to S5 (the symmetric group on 5 letters), just as for $I\cong A_{5}$: any automorphism of 2I fixes the non-trivial element of the center ($-1$), hence descends to an automorphism of I, and conversely, any automorphism of I lifts to an automorphism of 2I, since the lift of generators of I are generators of 2I (different lifts give the same automorphism). Superperfect The binary icosahedral group is perfect, meaning that it is equal to its commutator subgroup. In fact, 2I is the unique perfect group of order 120. It follows that 2I is not solvable. Further, the binary icosahedral group is superperfect, meaning abstractly that its first two group homology groups vanish: $H_{1}(2I;\mathbf {Z} )\cong H_{2}(2I;\mathbf {Z} )\cong 0.$ Concretely, this means that its abelianization is trivial (it has no non-trivial abelian quotients) and that its Schur multiplier is trivial (it has no non-trivial perfect central extensions). In fact, the binary icosahedral group is the smallest (non-trivial) superperfect group. The binary icosahedral group is not acyclic, however, as Hn(2I,Z) is cyclic of order 120 for n = 4k+3, and trivial for n > 0 otherwise (Adem & Milgram 1994, p. 279). Isomorphisms Concretely, the binary icosahedral group is a subgroup of Spin(3), and covers the icosahedral group, which is a subgroup of SO(3).
Abstractly, the icosahedral group is isomorphic to the symmetries of the 4-simplex, which is a subgroup of SO(4), and the binary icosahedral group is isomorphic to the double cover of this in Spin(4). Note that the symmetric group $S_{5}$ does have a 4-dimensional representation (its usual lowest-dimensional irreducible representation as the full symmetries of the $(n-1)$-simplex), and that the full symmetries of the 4-simplex are thus $S_{5},$ not the full icosahedral group (these are two different groups of order 120). The binary icosahedral group can be considered as the double cover of the alternating group $A_{5},$ denoted $2\cdot A_{5}\cong 2I;$ this isomorphism covers the isomorphism of the icosahedral group with the alternating group $A_{5}\cong I$. Just as $I$ is a discrete subgroup of $\mathrm {SO} (3)$, $2I$ is a discrete subgroup of the double cover of $\mathrm {SO} (3)$, namely $\mathrm {Spin} (3)\cong \mathrm {SU} (2)$. The 2-1 homomorphism from $\mathrm {Spin} (3)$ to $\mathrm {SO} (3)$ then restricts to the 2-1 homomorphism from $2I$ to $I$. One can show that the binary icosahedral group is isomorphic to the special linear group SL(2,5) — the group of all 2×2 matrices over the finite field F5 with unit determinant; this covers the exceptional isomorphism of $I\cong A_{5}$ with the projective special linear group PSL(2,5). Note also the exceptional isomorphism $PGL(2,5)\cong S_{5},$ which is a different group of order 120, with the commutative square of SL, GL, PSL, PGL being isomorphic to a commutative square of $2\cdot A_{5},2\cdot S_{5},A_{5},S_{5},$ which are isomorphic to subgroups of the commutative square of Spin(4), Pin(4), SO(4), O(4). Presentation The group 2I has a presentation given by $\langle r,s,t\mid r^{2}=s^{3}=t^{5}=rst\rangle $ or equivalently, $\langle s,t\mid (st)^{2}=s^{3}=t^{5}\rangle .$ Generators with these relations are given by $s={\tfrac {1}{2}}(1+i+j+k)\qquad t={\tfrac {1}{2}}(\varphi +\varphi ^{-1}i+j).$ Subgroups The only proper normal subgroup of 2I is the center { ±1 }. By the third isomorphism theorem, there is a Galois connection between subgroups of 2I and subgroups of I, where the closure operator on subgroups of 2I is multiplication by { ±1 }. $-1$ is the only element of order 2, hence it is contained in all subgroups of even order: thus every subgroup of 2I is either of odd order or is the preimage of a subgroup of I. Besides the cyclic groups generated by the various elements (which can have odd order), the only other subgroups of 2I (up to conjugation) are:[2] • binary dihedral groups, Dic5=Q20=⟨2,2,5⟩, order 20 and Dic3=Q12=⟨2,2,3⟩ of order 12 • The quaternion group, Q8=⟨2,2,2⟩, consisting of the 8 Lipschitz units forms a subgroup of index 15, which is also the dicyclic group Dic2; this covers the stabilizer of an edge. • The 24 Hurwitz units form an index 5 subgroup called the binary tetrahedral group; this covers a chiral tetrahedral group. This group is self-normalizing so its conjugacy class has 5 members (this gives a map $2I\to S_{5}$ whose image is $A_{5}$). Relation to 4-dimensional symmetry groups The 4-dimensional analog of the icosahedral symmetry group Ih is the symmetry group of the 600-cell (also that of its dual, the 120-cell).
Just as the former is the Coxeter group of type H3, the latter is the Coxeter group of type H4, also denoted [3,3,5]. Its rotational subgroup, denoted [3,3,5]+ is a group of order 7200 living in SO(4). SO(4) has a double cover called Spin(4) in much the same way that Spin(3) is the double cover of SO(3). Similar to the isomorphism Spin(3) = Sp(1), the group Spin(4) is isomorphic to Sp(1) × Sp(1). The preimage of [3,3,5]+ in Spin(4) (a four-dimensional analogue of 2I) is precisely the product group 2I × 2I of order 14400. The rotational symmetry group of the 600-cell is then [3,3,5]+ = ( 2I × 2I ) / { ±1 }. Various other 4-dimensional symmetry groups can be constructed from 2I. For details, see (Conway and Smith, 2003). Applications The coset space Spin(3) / 2I = S3 / 2I is a spherical 3-manifold called the Poincaré homology sphere. It is an example of a homology sphere, i.e. a 3-manifold whose homology groups are identical to those of a 3-sphere. The fundamental group of the Poincaré sphere is isomorphic to the binary icosahedral group, as the Poincaré sphere is the quotient of a 3-sphere by the binary icosahedral group. See also • binary polyhedral group • binary cyclic group, ⟨n⟩, order 2n • binary dihedral group, ⟨2,2,n⟩, order 4n • binary tetrahedral group, 2T=⟨2,3,3⟩, order 24 • binary octahedral group, 2O=⟨2,3,4⟩, order 48 References • Adem, Alejandro; Milgram, R. James (1994), Cohomology of finite groups, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 309, Berlin, New York: Springer-Verlag, ISBN 978-3-540-57025-7, MR 1317096 • Coxeter, H. S. M. & Moser, W. O. J. (1980). Generators and Relations for Discrete Groups, 4th edition. New York: Springer-Verlag. ISBN 0-387-09212-9. 6.5 The binary polyhedral groups, p. 68 • Conway, John H.; Smith, Derek A. (2003). On Quaternions and Octonions. Natick, Massachusetts: AK Peters, Ltd. ISBN 1-56881-134-9. Notes 1. Coxeter&Moser: Generators and Relations for discrete groups: <l,m,n>: Rl = Sm = Tn = RST 2. $SL_{2}(\mathbb {F} _{5})$ on GroupNames
Sequential linear-quadratic programming Sequential linear-quadratic programming (SLQP) is an iterative method for nonlinear optimization problems where objective function and constraints are twice continuously differentiable. Similarly to sequential quadratic programming (SQP), SLQP proceeds by solving a sequence of optimization subproblems. The difference between the two approaches is that: • in SQP, each subproblem is a quadratic program, with a quadratic model of the objective subject to a linearization of the constraints • in SLQP, two subproblems are solved at each step: a linear program (LP) used to determine an active set, followed by an equality-constrained quadratic program (EQP) used to compute the total step This decomposition makes SLQP suitable to large-scale optimization problems, for which efficient LP and EQP solvers are available, these problems being easier to scale than full-fledged quadratic programs. It may be considered related to, but distinct from, quasi-Newton methods. Algorithm basics Consider a nonlinear programming problem of the form: ${\begin{array}{rl}\min \limits _{x}&f(x)\\{\mbox{s.t.}}&b(x)\geq 0\\&c(x)=0.\end{array}}$ The Lagrangian for this problem is[1] ${\mathcal {L}}(x,\lambda ,\sigma )=f(x)-\lambda ^{T}b(x)-\sigma ^{T}c(x),$ where $\lambda \geq 0$ and $\sigma $ are Lagrange multipliers. LP phase In the LP phase of SLQP, the following linear program is solved: ${\begin{array}{rl}\min \limits _{d}&f(x_{k})+\nabla f(x_{k})^{T}d\\\mathrm {s.t.} &b(x_{k})+\nabla b(x_{k})^{T}d\geq 0\\&c(x_{k})+\nabla c(x_{k})^{T}d=0.\end{array}}$ Let ${\cal {A}}_{k}$ denote the active set at the optimum $d_{\text{LP}}^{*}$ of this problem, that is to say, the set of constraints that are equal to zero at $d_{\text{LP}}^{*}$. Denote by $b_{{\cal {A}}_{k}}$ and $c_{{\cal {A}}_{k}}$ the sub-vectors of $b$ and $c$ corresponding to elements of ${\cal {A}}_{k}$. EQP phase In the EQP phase of SLQP, the search direction $d_{k}$ of the step is obtained by solving the following equality-constrained quadratic program: ${\begin{array}{rl}\min \limits _{d}&f(x_{k})+\nabla f(x_{k})^{T}d+{\tfrac {1}{2}}d^{T}\nabla _{xx}^{2}{\mathcal {L}}(x_{k},\lambda _{k},\sigma _{k})d\\\mathrm {s.t.} &b_{{\cal {A}}_{k}}(x_{k})+\nabla b_{{\cal {A}}_{k}}(x_{k})^{T}d=0\\&c_{{\cal {A}}_{k}}(x_{k})+\nabla c_{{\cal {A}}_{k}}(x_{k})^{T}d=0.\end{array}}$ Note that the term $f(x_{k})$ in the objective functions above may be left out for the minimization problems, since it is constant. See also • Newton's method • Secant method • Sequential linear programming • Sequential quadratic programming Notes 1. Jorge Nocedal and Stephen J. Wright (2006). Numerical Optimization. Springer. ISBN 0-387-30303-0. References • Jorge Nocedal and Stephen J. Wright (2006). Numerical Optimization. Springer. ISBN 0-387-30303-0. 
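A minimal sketch of a single SLQP step in Python, using NumPy and SciPy's linprog for the LP phase and a direct KKT solve for the EQP phase. The infinity-norm trust region that keeps the LP bounded, the active-set tolerance, and all function and variable names are simplifications introduced here for illustration; a practical implementation would add a merit function with a line search or trust-region acceptance test and would guard against degenerate working sets.

import numpy as np
from scipy.optimize import linprog

def slqp_step(x, f_grad, b_val, b_jac, c_val, c_jac, hess_L, radius=1.0):
    # One iteration of SLQP for  min f(x)  s.t.  b(x) >= 0,  c(x) = 0.
    n = x.size
    g = f_grad(x)
    B, C = b_jac(x), c_jac(x)          # Jacobians of b and c at the current iterate
    bv, cv = b_val(x), c_val(x)

    # LP phase:  min g.d  s.t.  b + B d >= 0,  c + C d = 0,  |d|_inf <= radius
    lp = linprog(g,
                 A_ub=-B, b_ub=bv,     # b + B d >= 0  is  -B d <= b
                 A_eq=C, b_eq=-cv,     # c + C d = 0   is   C d = -c
                 bounds=[(-radius, radius)] * n,
                 method="highs")
    d_lp = lp.x
    active = np.where(np.abs(bv + B @ d_lp) < 1e-8)[0]   # working set read off the LP

    # EQP phase:  min g.d + (1/2) d.H.d  with the working set held as equalities
    A = np.vstack([B[active], C])
    r = -np.concatenate([bv[active], cv])
    H = hess_L(x)
    m = A.shape[0]
    KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(KKT, np.concatenate([-g, r]))
    d = sol[:n]                        # total step; the multipliers are sol[n:]
    return x + d

Here the active set is estimated from the LP solution and the EQP reuses the Hessian of the Lagrangian, mirroring the two-phase decomposition described above.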
Science, technology, engineering, and mathematics Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area) and immigration policy, with regard to admitting foreign students and tech workers.[1] There is no universal agreement on which disciplines are included in STEM; in particular whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by organizations such as the National Science Foundation (NSF),[1] the Department of Labor's O*Net online database for job seekers,[2] and the Department of Homeland Security.[3] In the United Kingdom, the social sciences are categorized separately and are instead grouped together with humanities and arts to form another counterpart acronym HASS (Humanities, Arts, and Social Sciences), rebranded in 2020 as SHAPE (Social Sciences, Humanities and the Arts for People and the Economy).[4][5] Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM.[6] Terminology History Previously referred to as SMET by the NSF,[7] in the early 1990s the acronym STEM was used by a variety of educators, including Charles E. Vela, the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE).[8][9][10] Moreover, the CAHSEE started a summer program for talented under-represented students in the Washington, D.C., area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education,[11] Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics and engineering education;[12] it is through this manner that NSF was first introduced to the acronym STEM. One of the first NSF projects to use the acronym was STEMTEC, the Science, Technology, Engineering and Math Teacher Education Collaborative at the University of Massachusetts Amherst, which was founded in 1998.[13] In 2001, at the urging of Dr. Peter Faletra, the Director of Workforce Development for Teachers and Scientists at the Office of Science, the acronym was adopted by Rita Colwell and other science administrators in the National Science Foundation (NSF). The Office of Science was also an early adopter of the STEM acronym.[14] Other variations • A-STEM (arts, science, technology, engineering, and mathematics);[15] more focus and based on humanism and arts. 
• eSTEM (environmental STEM)[16][17] • GEMS (girls in engineering, math, and science); used for programs to encourage women to enter these fields.[18][19] • MINT (mathematics, informatics, natural sciences, and technology)[20] • SHTEAM (science, humanities, technology, engineering, arts, and mathematics)[21] • SMET (science, mathematics, engineering, and technology); previous name[7] • STEAM (science, technology, engineering, arts, and mathematics)[22] • STEAM (science, technology, engineering, agriculture, and mathematics); add agriculture[23] • STEAM (science, technology, engineering, and applied mathematics); more focus on applied mathematics[24] • STEEM (science, technology, engineering, economics, and mathematics); adds economics as a field[25] • STEMIE (science, technology, engineering, mathematics, invention and entrepreneurship); adds Inventing and Entrepreneurship as means to apply STEM to real world problem solving and markets.[26] • STEMM (science, technology, engineering, mathematics, and medicine)[27] • STM (scientific, technical, and mathematics[28] or science, technology, and medicine)[29] • STREAM (science, technology, robotics, engineering, arts, and mathematics); adds robotics and arts as fields[30] Geographic distribution Australia The Australian Curriculum, Assessment and Reporting Authority 2015 report entitled, National STEM School Education Strategy, stated that "A renewed national focus on STEM in school education is critical to ensuring that all young Australians are equipped with the necessary STEM skills and knowledge that they must need to succeed."[31] Its goals were to: • "Ensure all students finish school with strong foundational knowledge in STEM and related skills"[31] • "Ensure that students are inspired to take on more challenging STEM subjects"[31] Events and programs meant to help develop STEM in Australian schools include the Victorian Model Solar Vehicle Challenge, the Maths Challenge (Australian Mathematics Trust),[32] Go Girl Go Global[32] and the Australian Informatics Olympiad.[32] Canada Canada ranks 12th out of 16 peer countries in the percentage of its graduates who studied in STEM programs, with 21.2%, a number higher than the United States, but lower than France, Germany, and Austria. The peer country with the greatest proportion of STEM graduates, Finland, has over 30% of its university graduates coming from science, mathematics, computer science, and engineering programs.[33] SHAD is an annual Canadian summer enrichment program for high-achieving high school students in July. The program focuses on academic learning particularly in STEAM fields.[34] Scouts Canada has taken similar measures to their American counterpart to promote STEM fields to youth. Their STEM program began in 2015.[35] In 2011 Canadian entrepreneur and philanthropist Seymour Schulich established the Schulich Leader Scholarships, $100 million in $60,000 scholarships for students beginning their university education in a STEM program at 20 institutions across Canada. 
Each year 40 Canadian students would be selected to receive the award, two at each institution, with the goal of attracting gifted youth into the STEM fields.[36] The program also supplies STEM scholarships to five participating universities in Israel.[37] China To promote STEM in China, the Chinese government issued a guideline in 2016 on a national innovation-driven development strategy, instructing that by 2020, China should become an innovative country; by 2030, it should be at the forefront of innovative countries; and by 2050, it should become a technology innovation power. In February 2017, the Ministry of Education in China announced that it would officially add STEM education to the primary school curriculum, the first official government recognition of STEM education. Later, in May 2018, the launch ceremony and press conference for the 2029 Action Plan for China's STEM Education were held in Beijing, China. This plan aims to allow as many students as possible to benefit from STEM education and to equip all students with scientific thinking and the ability to innovate. In response to encouraging policies by the government, schools in both the public and private sectors around the country have begun to carry out STEM education programs. However, effective implementation of STEM curricula requires full-time teachers specializing in STEM education as well as relevant content to be taught. Currently, China lacks qualified STEM teachers, and a training system is yet to be established. Several Chinese cities have taken bold measures to add programming as a compulsory course for elementary and middle school students; this is the case in the city of Chongqing. Europe Several European projects have promoted STEM education and careers in Europe. For instance, Scientix[38] is a European collaboration of STEM teachers, education scientists, and policymakers. The SciChallenge[39] project used a social media contest and student-generated content to increase the motivation of pre-university students for STEM education and careers. The Erasmus programme project AutoSTEM[40] used automata to introduce STEM subjects to very young children. Finland In Finland, the LUMA Center is the leading advocate for STEM-oriented education. In the native tongue, luma stands for "luonnontieteellis-matemaattinen" (lit. adj. "scientific-mathematical"). The abbreviation is more or less a direct counterpart of STEM, with engineering fields included by association. Unlike STEM, however, the term is also a portmanteau of lu and ma. France The name of STEM in France is industrial engineering sciences (sciences industrielles or sciences de l'ingénieur). The STEM organization in France is the association UPSTI. Hong Kong STEM education was not promoted among local schools in Hong Kong until recent years. In November 2015, the Education Bureau of Hong Kong released a document titled Promotion of STEM Education,[41] which proposes strategies and recommendations on promoting STEM education. India India is second only to China in STEM graduates, with a ratio of STEM graduates to population of about 1 to 52. The total number of fresh STEM graduates was 2.6 million in 2016.[42] STEM graduates have been contributing to the Indian economy with well-paid salaries, locally and abroad, for the last two decades. The turnaround of the Indian economy, with comfortable foreign exchange reserves, is mainly attributed to the skills of its STEM graduates. In India, women make up an impressive 43% of STEM graduates, the highest percentage worldwide. However, they hold only 14% of STEM-related jobs. 
Additionally, among the 280,000 scientists and engineers working in research and development institutes in the country, women represent a mere 14%.[43] Nigeria In Nigeria, the Association of Professional Women Engineers of Nigeria (APWEN) has involved girls between the ages of 12 and 19 in science-based courses so that they can go on to pursue science-based courses in higher institutions of learning. The National Science Foundation (NSF) in Nigeria has made conscious efforts to encourage girls to innovate, invent and build through the 'invent it, build it challenge' program sponsored by NNPC.[44] Pakistan STEM subjects are taught in Pakistan as part of electives taken in the 9th and 10th grade, culminating in Matriculation exams. These electives are: pure sciences (Physics, Chemistry, Biology), mathematics (Physics, Chemistry, Maths) and computer science (Physics, Chemistry, Computer Science). STEM subjects are also offered as electives taken in the 11th and 12th grade, more commonly referred to as first and second year, culminating in Intermediate exams. These electives are: FSc pre-medical (Physics, Chemistry, Biology), FSc pre-engineering (Physics, Chemistry, Maths) and ICS (Physics/Statistics, Computer Science, Maths). These electives are intended to aid students in pursuing STEM-related careers in the future by preparing them for the study of these courses at university. A STEM education project has been approved by the government[45] to establish STEM labs in public schools. The Ministry of Information Technology and Telecommunication has collaborated with Google to launch Pakistan's first grassroots-level Coding Skills Development Program,[46] based on Google's CS First Program, a global initiative aimed at developing coding skills in children. The aim of the program is to develop applied coding skills using gamification techniques for children between the ages of 9 and 14. The KPITB's Early Age Programming initiative,[47] established in the province of Khyber Pakhtunkhwa, has been successfully introduced in 225 elementary and secondary schools. There are many private organizations working in Pakistan to introduce STEM education in schools. Philippines In the Philippines, STEM is a two-year program and strand used for Senior High School (Grades 11 and 12), as designated by the Department of Education (DepEd). The STEM strand is under the Academic Track, which also includes other strands such as ABM, HUMSS, and GAS.[48][49] The purpose of the STEM strand is to educate students in the fields of science, technology, engineering, and mathematics through an interdisciplinary and applied approach, and to give students advanced knowledge and its application in those fields. After completing the program, students earn a Diploma in Science, Technology, Engineering, and Mathematics. Some colleges and universities require students applying for STEM degrees (such as medicine, engineering, and computer studies) to be STEM graduates; if not, they will need to enter a bridging program. Qatar In Qatar, AL-Bairaq is an outreach program for high-school students with a curriculum that focuses on STEM, run by the Center for Advanced Materials (CAM) at Qatar University. 
Each year around 946 students, from about 40 high schools, participate in AL-Bairaq competitions.[50] AL-Bairaq makes use of project-based learning, encourages students to solve authentic problems, and asks them to work with each other as a team to build real solutions.[51][52] Research has so far shown positive results for the program.[53] Singapore STEM is part of the Applied Learning Programme (ALP) that the Singapore Ministry of Education (MOE) has been promoting since 2013, and currently all secondary schools have such a programme. It is expected that by 2023, all primary schools in Singapore will have an ALP. There are no tests or exams for ALPs. The emphasis is for students to learn through experimentation – they try, fail, try, learn from it and try again. The MOE actively supports schools with ALPs to further enhance and strengthen their capabilities and programmes that nurture innovation and creativity. The Singapore Science Centre established a STEM unit in January 2014, dedicated to igniting students' passion for STEM. To further enrich students' learning experiences, its Industrial Partnership Programme (IPP) creates opportunities for students to get early exposure to real-world STEM industries and careers. Curriculum specialists and STEM educators from the Science Centre will work hand-in-hand with teachers to co-develop STEM lessons, provide training to teachers and co-teach such lessons to give students early exposure to STEM and develop their interest in it. Thailand In 2017, Thai Education Minister Teerakiat Jareonsettasin said after the 49th Southeast Asia Ministers of Education Organisation (SEAMEO) Council Conference in Jakarta that the meeting had approved the establishment of two new SEAMEO regional centres in Thailand. One would be the STEM Education Centre, while the other would be a Sufficient Economy Learning Centre. Teerakiat said that the Thai government had already allocated Bt250 million over five years for the new STEM centre. The centre will be the regional institution responsible for STEM education promotion. It will not only set up policies to improve STEM education, but it will also be the centre for information and experience sharing among the member countries and education experts. According to him, "This is the first SEAMEO regional centre for STEM education, as the existing science education centre in Malaysia only focuses on the academic perspective. Our STEM education centre will also prioritise the implementation and adaptation of science and technology."[54] The Institute for the Promotion of Teaching Science and Technology has initiated a STEM Education Network. Its goals are to promote integrated learning activities and improve student creativity and application of knowledge, and to establish a network of organisations and personnel for the promotion of STEM education in the country.[55] Turkey The Turkish STEM Education Task Force (or FeTeMM—Fen Bilimleri, Teknoloji, Mühendislik ve Matematik) is a coalition of academics and teachers working to increase the quality of education in STEM fields rather than focusing on increasing the number of STEM graduates.[56][57] United States In the United States, the acronym began to be used in education and immigration debates in initiatives to address the perceived lack of qualified candidates for high-tech jobs. 
It also addresses concern that the subjects are often taught in isolation, instead of as an integrated curriculum.[58] Maintaining a citizenry that is well versed in the STEM fields is a key portion of the public education agenda of the United States.[59] The acronym has been widely used in the immigration debate regarding access to United States work visas for immigrants who are skilled in these fields. It has also become commonplace in education discussions as a reference to the shortage of skilled workers and inadequate education in these areas.[60] The term tends not to refer to the non-professional and less visible sectors of the fields, such as electronics assembly line work. National Science Foundation Many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field. The NSF uses a broader definition of STEM subjects that includes subjects in the fields of chemistry, computer and information technology science, engineering, geosciences, life sciences, mathematical sciences, physics and astronomy, social sciences (anthropology, economics, psychology and sociology), and STEM education and learning research.[1][61] The NSF is the only American federal agency whose mission includes support for all fields of fundamental science and engineering, except for medical sciences.[62] Its disciplinary program areas include scholarships, grants, fellowships in fields such as biological sciences, computer and information science and engineering, education and human resources, engineering, environmental research and education, geosciences, international science and engineering, mathematical and physical sciences, social, behavioral and economic sciences, cyberinfrastructure, and polar programs.[61] Immigration policy Why must students pretend they don't dream of staying and working here, and why must the US pretend it doesn't need them? Moumita Das[63] Although many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field, the United States Department of Homeland Security (DHS) has its own functional definition used for immigration policy.[64] In 2012, DHS or ICE announced an expanded list of STEM designated-degree programs that qualify eligible graduates on student visas for an optional practical training (OPT) extension. Under the OPT program, international students who graduate from colleges and universities in the United States can stay in the country and receive up to twelve months of training through work experience. Students who graduate from a designated STEM degree program can stay for an additional seventeen months on an OPT STEM extension.[65][66] As of 2023, the U.S. faces a shortage of high-skilled workers in STEM, and foreign talents must navigate difficult hurdles in order to immigrate. Meanwhile, some other countries, such as Australia, Canada, and the United Kingdom, have introduced programs to attract talents at the expense of the United States.[67] In the case of China, the United States risks losing its edge over a strategic rival.[68] STEM-eligible degrees in US immigration An exhaustive list of STEM disciplines does not exist because the definition varies by organization. The U.S. 
Immigration and Customs Enforcement lists disciplines including[69] architecture, physics, actuarial science, chemistry, biology, mathematics, applied mathematics, statistics, computer science, computational science, psychology, biochemistry, robotics, computer engineering, electrical engineering, electronics, mechanical engineering, industrial engineering, information science, information technology, civil engineering, aerospace engineering, chemical engineering, astrophysics, astronomy, optics, nanotechnology, nuclear physics, mathematical biology, operations research, neurobiology, biomechanics, bioinformatics, acoustical engineering, geographic information systems, atmospheric sciences, educational/instructional technology, software engineering, educational research, and landscape architecture. Education See also: Mathematics education in the United States By cultivating an interest in the natural and social sciences in preschool or immediately following school entry, the chances of STEM success in high school can be greatly improved. STEM supports broadening the study of engineering within each of the other subjects, and beginning engineering at younger grades, even elementary school. It also brings STEM education to all students rather than only the gifted programs. In his 2012 budget, President Barack Obama renamed and broadened the "Mathematics and Science Partnership (MSP)" to award block grants to states for improving teacher education in those subjects.[70] In the 2015 run of the international assessment test the Program for International Student Assessment (PISA), American students came out 35th in mathematics, 24th in reading and 25th in science, out of 109 countries. The United States also ranked 29th in the percentage of 24-year-olds with science or mathematics degrees.[73] STEM education often uses new technologies such as RepRap 3D printers to encourage interest in STEM fields.[74] STEM education can also leverage the combination of new technologies, such as photovoltaics and environmental sensors, with old technologies such as composting systems and irrigation within land lab environments. In 2006 the United States National Academies expressed their concern about the declining state of STEM education in the United States. Its Committee on Science, Engineering, and Public Policy developed a list of 10 actions. Their top three recommendations were to: • Increase America's talent pool by improving K–12 science and mathematics education • Strengthen the skills of teachers through additional training in science, mathematics and technology • Enlarge the pipeline of students prepared to enter college and graduate with STEM degrees[75] The National Aeronautics and Space Administration also has implemented programs and curricula to advance STEM education in order to replenish the pool of scientists, engineers and mathematicians who will lead space exploration in the 21st century.[75] Individual states, such as California, have run pilot after-school STEM programs to learn what the most promising practices are and how to implement them to increase the chance of student success.[76] Another state to invest in STEM education is Florida, where Florida Polytechnic University,[77] Florida's first public university for engineering and technology dedicated to science, technology, engineering and mathematics (STEM), was established.[78] During school, STEM programs have been established for many districts throughout the U.S. 
Some states include New Jersey, Arizona, Virginia, North Carolina, Texas, and Ohio.[79][80] Continuing STEM education has expanded to the post-secondary level through masters programs such as the University of Maryland's STEM Program[81] as well as the University of Cincinnati.[82] Racial gap in STEM fields In the United States, the National Science Foundation found that the average science score on the 2011 National Assessment of Educational Progress was lower for black and Hispanic students than white, Asian, and Pacific Islanders.[84] In 2011, eleven percent of the U.S. workforce was black, while only six percent of STEM workers were black.[85] Though STEM in the U.S. has typically been dominated by white males, there have been considerable efforts to create initiatives to make STEM a more racially and gender diverse field.[86] Some evidence suggests that all students, including black and Hispanic students, have a better chance of earning a STEM degree if they attend a college or university at which their entering academic credentials are at least as high as the average student's.[87] Gender gaps in STEM Although women make up 47% of the workforce[88] in the U.S., they hold only 24% of STEM jobs. Research suggests that exposing girls to female inventors at a young age has the potential to reduce the gender gap in technical STEM fields by half.[89] Campaigns from organizations like the National Inventors Hall of Fame aimed to achieve a 50/50 gender balance in their youth STEM programs by 2020. American Competitiveness Initiative In the State of the Union Address on January 31, 2006, President George W. Bush announced the American Competitiveness Initiative. Bush proposed the initiative to address shortfalls in federal government support of educational development and progress at all academic levels in the STEM fields. In detail, the initiative called for significant increases in federal funding for advanced R&D programs (including a doubling of federal funding support for advanced research in the physical sciences through DOE) and an increase in U.S. higher education graduates within STEM disciplines. The NASA Means Business competition, sponsored by the Texas Space Grant Consortium, furthers that goal. College students compete to develop promotional plans to encourage students in middle and high school to study STEM subjects and to inspire professors in STEM fields to involve their students in outreach activities that support STEM education. The National Science Foundation has numerous programs in STEM education, including some for K–12 students such as the ITEST Program that supports The Global Challenge Award ITEST Program. STEM programs have been implemented in some Arizona schools. They implement higher cognitive skills for students and enable them to inquire and use techniques used by professionals in the STEM fields. Project Lead The Way (PLTW) is a provider of STEM education curricular programs to middle and high schools in the United States. Programs include a high school engineering curriculum called Pathway To Engineering, a high school biomedical sciences program, and a middle school engineering and technology program called Gateway To Technology. PLTW programs have been endorsed by President Barack Obama and United States Secretary of Education Arne Duncan as well as various state, national, and business leaders. 
STEM Education Coalition The Science, Technology, Engineering, and Mathematics (STEM) Education Coalition[90] works to support STEM programs for teachers and students at the U.S. Department of Education, the National Science Foundation, and other agencies that offer STEM-related programs. Activity of the STEM Coalition seems to have slowed since September 2008. Scouting In 2012, the Boy Scouts of America began handing out awards, titled NOVA and SUPERNOVA, for completing specific requirements appropriate to scouts' program level in each of the four main STEM areas. The Girl Scouts of the USA has similarly incorporated STEM into its program through the introduction of merit badges such as "Naturalist" and "Digital Art".[91] SAE is an international organization and solutions provider specializing in supporting education, award, and scholarship programs in STEM, from pre-K to the college level.[92] It also promotes scientific and technological innovation. Department of Defense programs[93] eCybermission is a free, web-based science, mathematics and technology competition for students in grades six through nine sponsored by the U.S. Army. Each webinar is focused on a different step of the scientific method and is presented by an experienced eCybermission CyberGuide. CyberGuides are military and civilian volunteers with a strong background in STEM and STEM education, who are able to provide insight into science, technology, engineering, and mathematics to students and team advisers. STARBASE is an educational program sponsored by the Office of the Assistant Secretary of Defense for Reserve Affairs. Students interact with military personnel to explore careers and make connections with the "real world". The program provides students with 20–25 hours of experience at National Guard, Navy, Marines, Air Force Reserve and Air Force bases across the nation. SeaPerch is an underwater robotics program that trains teachers to teach their students how to build an underwater remotely operated vehicle (ROV) in an in-school or out-of-school setting. Students build the ROV from a kit composed of low-cost, easily accessible parts, following a curriculum that teaches basic engineering and science concepts with a marine engineering theme. NASA NASAStem is a program of the U.S. space agency NASA to increase diversity within its ranks, including age, disability, and gender as well as race/ethnicity.[94] Legislation The America COMPETES Act (P.L. 110–69) became law on August 9, 2007. It is intended to increase the nation's investment in science and engineering research and in STEM education from kindergarten to graduate school and postdoctoral education. The act authorizes funding increases for the National Science Foundation, National Institute of Standards and Technology laboratories, and the Department of Energy (DOE) Office of Science over FY2008–FY2010. Robert Gabrys, Director of Education at NASA's Goddard Space Flight Center, articulated success as increased student achievement, early expression of student interest in STEM subjects, and student preparedness to enter the workforce. 
Jobs In November 2012, the White House announcement before the congressional vote on the STEM Jobs Act put President Obama in opposition to many of the Silicon Valley firms and executives who bankrolled his re-election campaign.[95] The Department of Labor identified 14 sectors that are "projected to add substantial numbers of new jobs to the economy or affect the growth of other industries or are being transformed by technology and innovation requiring new sets of skills for workers."[96] The identified sectors were as follows: advanced manufacturing, automotive, construction, financial services, geospatial technology, homeland security, information technology, transportation, aerospace, biotechnology, energy, healthcare, hospitality, and retail. The Department of Commerce notes that careers in STEM fields are some of the best-paying and have the greatest potential for job growth in the early 21st century. The report also notes that STEM workers play a key role in the sustained growth and stability of the U.S. economy, and that training in STEM fields generally results in higher wages, whether or not the worker is employed in a STEM field.[97] In 2015, there were around 9.0 million STEM jobs in the United States, representing 6.1% of American employment. STEM jobs were increasing by around 9 percent per year.[98] The Brookings Institution found that the demand for competent technology graduates will surpass the number of capable applicants by at least one million individuals. According to the Pew Research Center, a typical STEM worker earns two-thirds more than those employed in other fields.[99] Trajectories of STEM graduates in STEM and non-STEM jobs According to the 2014 US census, "74 percent of those who have a bachelor's degree in science, technology, engineering and math — commonly referred to as STEM — are not employed in STEM occupations."[100][101] Updates In September 2017, a number of large American technology firms collectively pledged to donate $300 million for computer science education in the U.S.[102] Pew findings revealed in 2018 that Americans identified several issues besetting STEM education, including unconcerned parents, disinterested students, obsolete curriculum materials, and too much focus on state parameters. Fifty-seven percent of survey respondents pointed out that a main problem of STEM education is students' lack of concentration in learning.[103] The recent National Assessment of Educational Progress (NAEP) report card[104] made public the technology and engineering literacy scores, which determine whether students have the capability to apply technology and engineering proficiency to real-life scenarios. The report showed a gap of 28 points between low-income students and their high-income counterparts. The same report also indicated a 38-point difference between white and black students.[105] The Smithsonian Science Education Center (SSEC) announced the release of a five-year strategic plan by the Committee on STEM Education of the National Science and Technology Council on December 4, 2018. The plan is entitled "Charting a Course for Success: America's Strategy for STEM Education."[106] The objective is to propose a federal strategy anchored on a vision for the future so that all Americans are given permanent access to premium-quality education in science, technology, engineering, and mathematics. In the end, the United States can emerge as a world leader in STEM mastery, employment, and innovation. 
The goals of this plan are building foundations for STEM literacy; enhancing diversity, equality, and inclusion in STEM; and preparing the STEM workforce for the future.[107] The White House's fiscal 2019 budget proposal supported the funding plan in President Donald Trump's Memorandum on STEM Education, which allocated around $200 million in grant funding to STEM education every year. The budget also supports STEM through a $20 million grant program for career and technical education programs.[108] Events and programs to help develop STEM in US schools • FIRST Tech Challenge • VEX Robotics Competitions • FIRST Robotics Competition Vietnam In Vietnam, beginning in 2012, many private education organizations launched STEM education initiatives. In 2015, the Ministry of Science and Technology and Liên minh STEM organized the first National STEM Day, followed by many similar events across the country. Also in 2015, the Ministry of Education and Training included STEM as an area to be encouraged in the national school-year program. In May 2017, the Prime Minister signed Directive No. 16,[109] stating: "Dramatically change the policies, contents, education and vocational training methods to create a human resource capable of receiving new production technology trends, with a focus on promoting training in science, technology, engineering and mathematics (STEM), foreign languages, information technology in general education" and asking the "Ministry of Education and Training (to): Promote the deployment of science, technology, engineering and mathematics (STEM) education in general education program; Pilot organize in some high schools from 2017 to 2018." Women Main articles: Female education in STEM and Women in STEM fields Women constitute 47% of the U.S. workforce, and hold 24% of STEM-related jobs.[110] In the UK, women hold 13% of STEM-related jobs (2014).[111] In the U.S., women with STEM degrees are more likely than their male counterparts to work in education or healthcare rather than in STEM fields. The gender ratio depends on the field of study. For example, in the European Union in 2012, women made up 47.3% of the total, 51% in the social sciences, business and law, 42% in science, mathematics and computing, 28% in engineering, manufacturing and construction, and 59% of PhD graduates in health and welfare.[112] A study from 2019 showed that part of the success of women in STEM depends on how women in STEM are viewed. When grant applications were evaluated primarily on the proposed project, there was almost no difference between projects from men and from women, but when they were evaluated primarily on the project leader, projects headed by women were awarded grants about four percent less often.[113] Improving the experiences of women in STEM is a major component of increasing the number of women in STEM. One part of this is the need for role models and mentors who are women in STEM. Along with this, having good resources for information and networking opportunities can improve women's ability to flourish in STEM fields.[114] LGBTQ+ People identifying as LGBTQ+ have faced discrimination in STEM fields throughout history. 
Few were openly queer in STEM; however, a couple of well-known people are Alan Turing, the father of computer science, and Sara Josephine Baker, American physician and public-health leader.[115] Despite recent changes in attitudes towards LGBTQ+ people, discrimination still permeates throughout STEM fields.[116][117] A recent study has shown that gay men are less likely to have completed a bachelor's degree in a STEM field and to work in a STEM occupation.[118][119] Along with this, those of sexual minorities overall have been shown to be less likely to remain in STEM majors throughout college.[117] Another study concluded that queer people are more likely to experience exclusion, harassment and other negative impacts while in a STEM career while also having fewer opportunities and resources available to them.[120] Multiple programs and institutions are working towards increasing the inclusion and acceptance of LGBTQ+ people in STEM. In the US, the National Organization of Gay and Lesbian Scientists and Technical Professionals (NOGLSTP) has organized people to address homophobia since the 1980s and now promotes activism and support for queer scientists.[121] Other programs, including 500 Queer Scientists and Pride in STEM, function as visibility campaigns for LGBTQ+ people in STEM worldwide.[121][122] Criticism The focus on increasing participation in STEM fields has attracted criticism. In the 2014 article "The Myth of the Science and Engineering Shortage" in The Atlantic, demographer Michael S. Teitelbaum criticized the efforts of the U.S. government to increase the number of STEM graduates, saying that, among studies on the subject, "No one has been able to find any evidence indicating current widespread labor market shortages or hiring difficulties in science and engineering occupations that require bachelor's degrees or higher", and that "Most studies report that real wages in many—but not all—science and engineering occupations have been flat or slow-growing, and unemployment as high or higher than in many comparably-skilled occupations." Teitelbaum also wrote that the then-current national fixation on increasing STEM participation paralleled previous U.S. government efforts since World War II to increase the number of scientists and engineers, all of which he stated ultimately ended up in "mass layoffs, hiring freezes, and funding cuts"; including one driven by the Space Race of the late 1950s and 1960s, which he wrote led to "a bust of serious magnitude in the 1970s."[123] IEEE Spectrum contributing editor Robert N. 
Charette echoed these sentiments in the 2013 article "The STEM Crisis Is a Myth", also noting that there was a "mismatch between earning a STEM degree and having a STEM job" in the United States, with only around 1⁄4 of STEM graduates working in STEM fields, while less than half of workers in STEM fields have a STEM degree.[124] Economics writer Ben Casselman, in a 2014 study of post-graduation earnings in the United States for FiveThirtyEight, wrote that, based on the data, science should not be grouped with the other three STEM categories, because, while the other three generally result in high-paying jobs, "many sciences, particularly the life sciences, pay below the overall median for recent college graduates."[125] See also • American Indian Science and Engineering Society (AISES) • Craft Academy for Excellence in Science and Mathematics • Hard and soft science • List of African American women in STEM fields • Maker culture • NASA RealWorld-InWorld Engineering Design Challenge • National Society of Black Engineers (NSBE) • Pre-STEM • Science, Technology, Engineering and Mathematics Network • Society of Hispanic Professional Engineers (SHPE) • STEM Academy • STEM.org • STEM pipeline • Underrepresented group References Citations 1. "Science, Technology, Engineering, and Mathematics (STEM) Education: A Primer" (PDF). Fas.org. Archived (PDF) from the original on 2018-10-09. Retrieved 2017-08-21. 2. "Research, Development, Design, and Practitioners STEM Occupations". Onetonline.org. 2021-11-16. Archived from the original on 2021-11-16. Retrieved 2021-12-02. 3. "Archived copy" (PDF). Archived (PDF) from the original on 2021-08-24. Retrieved 2021-11-16.{{cite web}}: CS1 maint: archived copy as title (link) 4. British Academy (2020). "SHAPE". SHAPE. Archived from the original on 25 January 2021. Retrieved 14 January 2021. 5. Black, Julia (2 November 2020). "SHAPE – A Focus on the Human World". Social Science Space. Archived from the original on 15 January 2021. Retrieved 14 January 2021. 6. Reeves, Richard V. (25 September 2022). "Men can HEAL". ofboysandmen.substack.com. Retrieved 27 January 2023. 7. Hallinen, Judith (Oct 21, 2015). "STEM Education Curriculum". ENCYCLOPÆDIA BRITANNICA. Archived from the original on February 25, 2020. Retrieved March 7, 2019. 8. "CAHSEE - About CAHSEE". The Center for the Advancement of Hispanics in Science and Engineering Education. Archived from the original on 2019-02-14. Retrieved 2018-10-03. 9. "STEM Science, Technology, Engineering, Mathematics - Main". stem.ccny.cuny.edu. Archived from the original on 2018-10-04. Retrieved 2018-10-03. 10. Group, Career Communications (1996). Hispanic Engineer & IT. Career Communications Group. Archived from the original on 2020-01-18. Retrieved 2018-10-03. 11. "President Bush Honors Excellence in Science, Mathematics and Engineering Mentoring | NSF - National Science Foundation". National Science Foundation. Archived from the original on 2018-11-06. Retrieved 2018-10-03. 12. "CAHSEE - Founder's Biography". The Center for the Advancement of Hispanics in Science and Engineering Education. Archived from the original on 2018-11-11. Retrieved 2018-10-03. 13. "STEMTEC". Fivecolleges.edu. Archived from the original on 2019-06-05. Retrieved 2016-10-27. The Science, Technology, Engineering, and Mathematics Teacher Education Collaborative (STEMTEC) was a five-year, $5,000,000 project funded by the National Science Foundation in 1998. 
Managed by the STEM Education Institute at UMass and the Five Colleges School Partnership Program, the collaborative included the Five Colleges--Amherst, Hampshire, Mount Holyoke, and Smith Colleges, and UMass Amherst--plus Greenfield, Holyoke, and Springfield Technical Community Colleges, and several regional school districts. 14. "Workforce Development for Teachers and Scientists". 15. Shenzhen City Longgang District Education Bureau, China (27 August 2018). "The Guidance of A-STEM Curriculum Construction in Longgang District Shenzhen City" (PDF). g.gov.cn. Archived from the original (PDF) on 12 July 2019. Retrieved 29 April 2019. 16. eSTEM Academy Archived 2020-05-26 at the Wayback Machine, retrieved 2013-07-02 17. Arbor Height Elementary to implement "eSTEM" curriculum in coming years, West Seattle Herald, 4-30-2013 Archived 2017-01-26 at the Wayback Machine, retrieved 2013-07-02 18. "Girls in Engineering, Math and Science (GEMS)". GRASP lab. 2015-04-06. Archived from the original on 2017-09-09. Retrieved 2017-03-28. 19. "Annual Report - Lee Richardson Zoo" (PDF). Lee Richardson Zoo. Archived (PDF) from the original on 2017-12-08. Retrieved 2017-03-28. 20. Locherer, M.; Hausamann, D.; Schüttler, T. (July 22, 2012). "Practical science education in remote sensing at the DLR_School_Lab Oberpfaffenhofen". 2012 IEEE International Geoscience and Remote Sensing Symposium. pp. 7389–7392. doi:10.1109/IGARSS.2012.6351922. ISBN 978-1-4673-1159-5. S2CID 20426116. Retrieved September 16, 2022. 21. adelphiacademy. "SHTEAM". Adelphi Academy. Archived from the original on 2021-06-12. Retrieved 2021-05-27. 22. "STEAM Rising: Why we need to put the arts into STEM education". Slate. 16 June 2015. Archived from the original on 2018-10-16. Retrieved 2016-11-10. 23. "STEAM: Science, Technology, Engineering, Agriculture and Mathematics Education in Agroecology". Retrieved September 16, 2022. 24. "Virginia Tech and Virginia STEAM Academy form strategic partnership to meet critical education needs". Virginia Tech News. 31 July 2012. Archived from the original on 13 January 2020. Retrieved 27 May 2013. 25. Yordanova Krumova, Milena (September 2021). "STEEM and re-Project-Based Learning Design: A Case Study about Learning Economics by IT Students at School". STEM in Bulgaria, Europe and the World. Retrieved September 16, 2022. 26. "Home". STEMIE Coalition. Archived from the original on May 13, 2020. Retrieved July 25, 2019. 27. "Youth Stemm Award". ysawards.co.uk. Retrieved 13 July 2022. 28. Ken Whistler, Asmus Freytag, AMS (STIX); "Encoding Additional Mathematical Symbols in Unicode (revised)"; 2000-04-09. Math Symbols 2000-04-19 - Unicode Consortium (accessed 2016-10-21 Archived 2019-06-15 at the Wayback Machine 29. Junianto, Erfian; Nurbayanti Shobary, Mayya; Rachman, Rizal (August 7, 2018). "Classification of Science, Technology and Medicine (STM) Domains with PSO and NBC". 2018 6th International Conference on Cyber and IT Service Management (CITSM). pp. 1–6. doi:10.1109/CITSM.2018.8674271. ISBN 978-1-5386-5434-7. S2CID 90263157. Retrieved September 16, 2022. {{cite book}}: |journal= ignored (help) 30. "How to Connect Science, Technology, Engineering, Robotics, Arts, and Math in the Classroom". EDWEB.net. July 19, 2016. Retrieved October 5, 2022. 31. Irene, Tham (11 May 2017). "Add coding to basic skills taught in schools". Add coding to basic skills taught in schools. The Straits Times. Archived from the original on 2 December 2019. Retrieved 3 August 2019. 32. "micro:bit Global Challenge". 
micro:bit Global Challenge. micro:bit. 6 May 2019. Archived from the original on 5 August 2019. Retrieved 3 August 2019. 33. "Graduates in science, math, computer science, and engineering". Conferenceboard.ca. Archived from the original on 8 May 2019. Retrieved 20 August 2017. 34. "SHAD Brochure" (PDF). Archived from the original (PDF) on 2018-08-04. Retrieved 2018-08-03. 35. "Scouts Canada - STEM Activities". Archived from the original on 2014-08-11. Retrieved 2014-06-30. 36. "Toronto philanthropist Schulich unveils $100-million scholarship". Theglobeandmail.com. Archived from the original on 27 January 2017. Retrieved 30 June 2014. 37. "Philanthropist Makes $100 Million Investment In Nation's Future". Shalomlife.com. Archived from the original on 24 September 2015. Retrieved 30 June 2014. 38. "Scientix Project". Archived from the original on 5 January 2019. Retrieved 4 March 2018. 39. Achilleos, Achilleas; Mettouris, Christos; Yeratziotis, Alexandros; Papadopoulos, George; Pllana, Sabri; Huber, Florian; Jaeger, Bernhard; Leitner, Peter; Ocsovszky, Zsofia; Dinnyes, Andras (2019). "SciChallenge: A Social Media Aware Platform for Contest-Based STEM Education and Motivation of Young Students". IEEE Transactions on Learning Technologies. 12: 98–111. doi:10.1109/TLT.2018.2810879. S2CID 65050107. 40. "AutoSTEM". Archived from the original on 27 July 2021. Retrieved 30 August 2021. 41. "Promotion of STEM Education" (PDF). Edb.gov.hk. Archived (PDF) from the original on 2018-10-09. Retrieved 2017-08-21. 42. "How the STEM Crisis is Threatening the Future of Work". 6 January 2020. Archived from the original on 26 January 2020. Retrieved 19 January 2020. 43. 101Reporters (2021-10-11). "Addressing Gender Disparities In STEM". Feminism in India. Retrieved 2023-04-21. 44. kingsley-omoyibo, Queeneth. "APWEN". APWEN. Retrieved 2022-08-18. 45. "PM approves STEM education project | The Express Tribune". tribune.com.pk. 2020-08-21. Archived from the original on 2020-08-23. Retrieved 2020-10-29. 46. "MINISTRY OF INFORMATION TECHNOLOGY & TELECOMMUNICATION". moitt.gov.pk. Archived from the original on 2020-11-01. Retrieved 2020-10-29. 47. Early Programming, KPITB. "Early Age Programming | KPITB | Khyber Pakhtunkhwa Information Technology Board". Archived from the original on 2020-10-23. Retrieved 2020-10-29. 48. "Academic Track | Department of Education". Archived from the original on 2020-07-11. Retrieved 2020-07-09. 49. "A Guide to Choosing the Right Senior High School Strand". TeacherPH. 2018-02-06. Archived from the original on 2020-07-10. Retrieved 2020-07-09. 50. "AlBairaq World - Welcome to Al-Bairaq World". 19 April 2014. Archived from the original on 19 April 2014. Retrieved 20 August 2017.{{cite web}}: CS1 maint: bot: original URL status unknown (link) 51. "Supreme Education Council". Sec.gov.qa. Archived from the original on 2017-06-30. Retrieved 2017-08-20. 52. "The Peninsula Qatar - Al Bairaq holds workshop for high school students". Thepeninsulaqatar.com. Archived from the original on 2016-09-22. Retrieved 2017-08-20. 53. "Al-Ghanim, K.A; Al-Maadeed, M.A and Al-Thani, N.J (Sept. 2014) : IMPACT OF INNOVATIVE LEARNING ENVIRONMENT BASED ON RESEARCH ACTIVITIES ON SECONDARY SCHOOL STUDENTS' ATTITUDE TOWARDS RESEARCH AND THEIR SELF-EFFICACY, EJES, 1(3), 39-57" (PDF). Ejes.eu. Retrieved 2017-08-20. 54. "SEAMEO Secretariat". www.seameo.org. Archived from the original on 2019-10-05. Retrieved 2019-11-18. 55. Boonruang, Sasiwimon (14 January 2015). "A Stem education". Bangkok Post. 
Soboleva modified hyperbolic tangent The Soboleva modified hyperbolic tangent, also known as (parametric) Soboleva modified hyperbolic tangent activation function ([P]SMHTAF),[nb 1] is a special S-shaped function based on the hyperbolic tangent, given by $\operatorname {mtanh} x={\frac {e^{ax}-e^{-bx}}{e^{cx}+e^{-dx}}}.$ This function was originally proposed as "modified hyperbolic tangent"[nb 1] by Elena V. Soboleva (Елена В. Соболева) as a utility function for multi-objective optimization and choice modelling in decision-making.[1][2][3] It has since been introduced into neural network theory and practice.[4] The function was also used to approximate current-voltage characteristics of field-effect transistors and light-emitting diodes,[5] to design antenna feeders,[6] and analyze plasma temperatures and densities in the divertor region of fusion reactors.[7] A family of recurrence-generated parametric Soboleva modified hyperbolic tangent activation functions (NPSMHTAF, FPSMHTAF) was studied with parameters a = c and b = d.[8] With parameters a = b = c = d = 1 the modified hyperbolic tangent function reduces to the conventional tanh(x) function, whereas for a = b = 1 and c = d = 0, the term becomes equal to sinh(x). See also • Activation function • e (mathematical constant) • Equal incircles theorem, based on sinh • Hausdorff distance • Inverse hyperbolic functions • List of integrals of hyperbolic functions • Poinsot's spirals • Sigmoid function Notes 1. Soboleva proposed the name "modified hyperbolic tangent" (mtanh, mth), but since other authors used this name also for other functions, some authors have started to refer to this function as "Soboleva modified hyperbolic tangent". References 1. Soboleva, Elena Vladimirovna; Beskorovainyi, Vladimir Valentinovich (2008). The utility function in problems of structural optimization of distributed objects Функция для оценки полезности альтернатив в задачах структурной оптимизации территориально распределенных объектов. Четверта наукова конференція Харківського університету Повітряних Сил імені Івана Кожедуба, 16–17 квітня 2008 (The fourth scientific conference of the Ivan Kozhedub Kharkiv University of Air Forces, 16–17 April 2008) (in Russian). Kharkiv, Ukraine: Kharkiv University of Air Force (HUPS/ХУПС). p. 121. 2. Soboleva, Elena Vladimirovna (2009). S-образная функция полезности част-ных критериев для многофакторной оценки проектных решений [The S-shaped utility function of individual criteria for multi-objective decision-making in design]. Материалы XIII Международного молодежного форума «Радиоэлектро-ника и молодежь в XXI веке» (Materials of the 13th international youth forum "Radioelectronics and youth in the 21st century") (in Russian). Kharkiv, Ukraine: Kharkiv National University of Radioelectronics (KNURE/ХНУРЕ). p. 247. 3. Beskorovainyi, Vladimir Valentinovich; Soboleva, Elena Vladimirovna (2010). ИДЕНТИФИКАЦИЯ ЧАСТНОй ПОлЕЗНОСТИ МНОГОФАКТОРНЫХ АлЬТЕРНАТИВ С ПОМОЩЬЮ S-ОБРАЗНЫХ ФУНКЦИй [Identification of utility functions in multi-objective choice modelling by using S-shaped functions] (PDF). Problemy Bioniki: Respublikanskij Mežvedomstvennyj Naučno-Techničeskij Sbornik БИОНИКА ИНТЕЛЛЕКТА [Bionics of Intelligence] (in Russian). Vol. 72, no. 1. Kharkiv National University of Radioelectronics (KNURE/ХНУРЕ). pp. 50–54. ISSN 0555-2656. UDK 519.688: 004.896. Archived (PDF) from the original on 2022-06-21. Retrieved 2020-06-19. (5 pages) 4. Malinova, Anna; Golev, Angel; Iliev, Anton; Kyurkchiev, Nikolay (August 2017). 
"A Family Of Recurrence Generating Activation Functions Based On Gudermann Function" (PDF). International Journal of Engineering Researches and Management Studies. Faculty of Mathematics and Informatics, University of Plovdiv "Paisii Hilendarski", Plovdiv, Bulgaria. 4 (8): 38–48. ISSN 2394-7659. Archived (PDF) from the original on 2022-07-14. Retrieved 2020-06-19. (11 pages) 5. Tuev, Vasily I.; Uzhanin, Maxim V. (2009). ПРИМЕНЕНИЕ МОДИФИЦИРОВАННОЙ ФУНКЦИИ ГИПЕРБОЛИЧЕСКОГО ТАНГЕНСА ДЛЯ АППРОКСИМАЦИИ ВОЛЬТАМПЕРНЫХ ХАРАКТЕРИСТИК ПОЛЕВЫХ ТРАНЗИСТОРОВ [Using modified hyperbolic tangent function to approximate the current-voltage characteristics of field-effect transistors] (in Russian). Tomsk, Russia: Tomsk Politehnic University (TPU/ТПУ). pp. 135–138. No. 4/314. Archived from the original on 2017-08-15. Retrieved 2015-11-05. (4 pages) 6. Golev, Angel; Djamiykov, Todor; Kyurkchiev, Nikolay (2017-11-23) [2017-10-09, 2017-08-19]. "Sigmoidal Functions In Antenna-Feeder Technique" (PDF). International Journal of Pure and Applied Mathematics. Faculty of Mathematics and Informatics, University of Plovdiv "Paisii Hilendarski", Plovdiv, Bulgaria / Technical University of Sofia, Sofia, Bulgaria: Academic Publications, Ltd. 116 (4): 1081–1092. doi:10.12732/ijpam.v116i4.23 (inactive 2023-08-01). ISSN 1311-8080. Archived (PDF) from the original on 2020-06-19. Retrieved 2020-06-19.{{cite journal}}: CS1 maint: DOI inactive as of August 2023 (link) (12 pages) 7. Rubino, Giulio (2018-01-15) [2018-01-14]. Power Exhaust Data Analysis and Modeling Of Advanced Divertor Configuration (PDF) (Thesis). Joint Research Doctorate In Fusion Science And Engineering Cycle XXX (in English, Italian, and Portuguese). Padova, Italy: Centro Ricerche Fusione (CRF), Università degli Studi di Padova / Università degli Studi di Napoli Federico II / Instituto Superior Técnico (IST), Universidade de Lisboa. p. 84. ID 10811. Archived (PDF) from the original on 2020-06-19. Retrieved 2020-06-19. (2+viii+3*iii+102 pages) 8. Golev, Angel; Iliev, Anton; Kyurkchiev, Nikolay (June 2017). "A Note on the Soboleva' Modified Hyperbolic Tangent Activation Function" (PDF). International Journal of Innovative Science, Engineering & Technology (JISET). Faculty of Mathematics and Informatics, University of Plovdiv "Paisii Hilendarski", Plovdiv, Bulgaria. 4 (6): 177–182. ISSN 2348-7968. Archived (PDF) from the original on 2020-06-19. Retrieved 2020-06-19. (6 pages) Further reading • Iliev, Anton; Kyurkchiev, Nikolay; Markov, Svetoslav (2017). "A Note on the New Activation Function of Gompertz Type". Biomath Communications. Faculty of Mathematics and Informatics, University of Plovdiv "Paisii Hilendarski", Plovdiv, Bulgaria / Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria: Biomath Forum (BF). 4 (2). doi:10.11145/10.11145/bmc.2017.10.201. ISSN 2367-5233. Archived from the original on 2020-06-20. Retrieved 2020-06-19. (20 pages)
School Mathematics Study Group The School Mathematics Study Group (SMSG) was an American academic think tank focused on reform in mathematics education. Directed by Edward G. Begle and financed by the National Science Foundation, the group was created in the wake of the Sputnik crisis in 1958 and tasked with creating and implementing mathematics curricula for primary and secondary education, which it did until its termination in 1977. The efforts of the SMSG yielded a reform in mathematics education known as New Math, which was promulgated in a series of reports, culminating in a series published by Random House called the New Mathematical Library (Vol. 1 is Ivan Niven's Numbers: Rational and Irrational). In its early years, SMSG also produced a set of draft textbooks in typewritten paperback format for elementary, middle and high school students. Perhaps the most authoritative collection of materials from the School Mathematics Study Group is now housed in the Archives of American Mathematics at the University of Texas at Austin's Center for American History. See also • Foundations of geometry Further reading • 1958 Letter from Ralph A. Raimi to Fred Quigley concerning the New Math • Whatever Happened to the New Math by Ralph A. Raimi • Some Technical Commentaries on Mathematics Education and History by Ralph A. Raimi External links • The SMSG Collection at The Center for American History at UT • Archives of American Mathematics at the Center for American History at UT
SMath Studio SMath Studio is a freeware (free of charge, but not libre), closed-source, mathematical notebook program similar to Mathcad. It is available for Windows, Linux, iOS, Android, Universal Windows Platform, and on some handhelds.
Developer(s): Andrey Ivashov
Initial release: 2006
Stable release: 1.0.8151 / 26 April 2022[1]
Written in: C#
Operating system: Microsoft Windows, Linux, iOS, Android, Universal Windows Platform, and handhelds[2]
Platform: .NET Framework, Mono
Size: 2.28 MB
Available in: 43 languages[3] (Arabic, Belarusian, Bengali, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Finnish, French, German, Greek, Hebrew, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian (Cyrillic), Serbian (Latin), Slovak, Slovenian, Spanish, Swahili, Swedish, Thai, Turkish, Ukrainian, Vietnamese)
Type: Computer algebra system
License: Creative Commons Attribution-NoDerivs (CC-BY-ND)[4]
Website: en.smath.com
Among its capabilities are: • Solving differential equations; • Graphing functions in two or three dimensions; • Symbolic calculations, including solving systems of equations; • Matrix operations, including determinants; • Finding roots of polynomials and functions; • Symbolic and numeric differentiation of functions; • Numeric integration; • Simple multiline looped programs; • User-defined functions; • Units of measurement.
References 1. "New Stable SMath Studio 1.0.8151 is available!". 26 April 2022. 2. "Stable: SMath Studio 0.90". 9 January 2012. 3. "SMath Studio Translator". SMath. 4. "license?". SMath. 17 November 2009.
External links • "SMath Studio: CNET Editors' Review". CNET. 3 December 2009. • "Free math software: SMath Studio". 3D CAD Tips. WTWH Media LLC. 26 April 2010. • Liengme, Bernard V. (1 March 2015). SMath for physics : a primer. San Rafael, California: Morgan & Claypool Publishers (Institute of Physics Publishing). ISBN 978-1-6270-5925-1. • Atkin, Keith (1 September 2021). "Using SMath to solve the time-independent Schrödinger equation". Physics Education. 56 (5): 055018. Bibcode:2021PhyEd..56e5018A. doi:10.1088/1361-6552/ac08ef. S2CID 235621314.
Circle group In mathematics, the circle group, denoted by $\mathbb {T} $ or $\mathbb {S} ^{1}$, is the multiplicative group of all complex numbers with absolute value 1, that is, the unit circle in the complex plane or simply the unit complex numbers[1] $\mathbb {T} =\{z\in \mathbb {C} :|z|=1\}.$ The circle group forms a subgroup of $\mathbb {C} ^{\times }$, the multiplicative group of all nonzero complex numbers. Since $\mathbb {C} ^{\times }$ is abelian, it follows that $\mathbb {T} $ is as well. A unit complex number in the circle group represents a rotation of the complex plane about the origin and can be parametrized by the angle measure $\theta $: $\theta \mapsto z=e^{i\theta }=\cos \theta +i\sin \theta .$ This is the exponential map for the circle group. The circle group plays a central role in Pontryagin duality and in the theory of Lie groups.
The notation $\mathbb {T} $ for the circle group stems from the fact that, with the standard topology (see below), the circle group is a 1-torus. More generally, $\mathbb {T} ^{n}$ (the direct product of $\mathbb {T} $ with itself $n$ times) is geometrically an $n$-torus. The circle group is isomorphic to the special orthogonal group $\mathrm {SO} (2)$. Elementary introduction One way to think about the circle group is that it describes how to add angles, where only angles between 0° and 360° or $\in [0,2\pi )$ or $\in (-\pi ,+\pi ]$ are permitted. For example, consider adding 150° to 270°. The answer is 150° + 270° = 420°, but when thinking in terms of the circle group, we may "forget" the fact that we have wrapped once around the circle. Therefore, we adjust our answer by 360°, which gives 420° ≡ 60° (mod 360°). Another description is in terms of ordinary (real) addition, where only numbers between 0 and 1 are allowed (with 1 corresponding to a full rotation: 360° or $2\pi $), i.e. the real numbers modulo the integers: $\mathbb {T} \cong \mathbb {R} /\mathbb {Z} $. This can be achieved by discarding the integer part, that is, the digits occurring before the decimal point. For example, when we work out 0.4166... + 0.75, the answer is 1.1666..., but we may throw away the leading 1, so the answer (in the circle group) is just $0.1{\bar {6}}\equiv 1.1{\bar {6}}\equiv -0.8{\bar {3}}\;({\text{mod}}\,\mathbb {Z} ),$ with some preference for 0.166..., because $0.1{\bar {6}}\in [0,1)$. Topological and analytic structure The circle group is more than just an abstract algebraic object. It has a natural topology when regarded as a subspace of the complex plane. Since multiplication and inversion are continuous functions on $\mathbb {C} ^{\times }$, the circle group has the structure of a topological group. Moreover, since the unit circle is a closed subset of the complex plane, the circle group is a closed subgroup of $\mathbb {C} ^{\times }$ (itself regarded as a topological group). One can say even more. The circle is a 1-dimensional real manifold, and multiplication and inversion are real-analytic maps on the circle. This gives the circle group the structure of a one-parameter group, an instance of a Lie group. In fact, up to isomorphism, it is the unique 1-dimensional compact, connected Lie group. Moreover, every $n$-dimensional compact, connected, abelian Lie group is isomorphic to $\mathbb {T} ^{n}$. Isomorphisms The circle group shows up in a variety of forms in mathematics. We list some of the more common forms here. Specifically, we show that $\mathbb {T} \cong {\mbox{U}}(1)\cong \mathbb {R} /\mathbb {Z} \cong \mathrm {SO} (2).$ Here the slash (/) denotes the quotient group. The set of all 1×1 unitary matrices clearly coincides with the circle group; the unitary condition is equivalent to the condition that the single entry has absolute value 1. Therefore, the circle group is canonically isomorphic to $\mathrm {U} (1)$, the first unitary group. The exponential function gives rise to a group homomorphism $\exp :\mathbb {R} \to \mathbb {T} $ from the additive real numbers $\mathbb {R} $ to the circle group $\mathbb {T} $ via the map $\theta \mapsto e^{i\theta }=\cos \theta +i\sin \theta .$ The last equality is Euler's formula for the complex exponential. The real number θ corresponds to the angle (in radians) on the unit circle as measured counterclockwise from the positive x axis.
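Before turning to the homomorphism property below, the following short Python sketch (added for illustration; the helper names are not from the article) shows the two elementary descriptions side by side, together with the exponential map θ ↦ e^(iθ):

import cmath, math

def add_angles_deg(a, b):
    # the circle group written additively: angles modulo 360 degrees
    return (a + b) % 360.0

def add_mod_1(a, b):
    # the same group as R/Z: keep only the fractional part
    return (a + b) % 1.0

print(add_angles_deg(150, 270))      # 60.0, since 420 degrees wraps around to 60
print(add_mod_1(0.4166666, 0.75))    # approximately 0.1666..., the integer part is discarded

# The exponential map theta -> e^(i*theta) turns addition of angles into
# multiplication of unit complex numbers.
theta1, theta2 = math.radians(150), math.radians(270)
z = cmath.exp(1j * theta1) * cmath.exp(1j * theta2)
print(abs(z), math.degrees(cmath.phase(z)) % 360)    # 1.0 and approximately 60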
That this map is a homomorphism follows from the fact that the multiplication of unit complex numbers corresponds to addition of angles: $e^{i\theta _{1}}e^{i\theta _{2}}=e^{i(\theta _{1}+\theta _{2})}.$ This exponential map is clearly a surjective function from $\mathbb {R} $ to $\mathbb {T} $. However, it is not injective. The kernel of this map is the set of all integer multiples of $2\pi $. By the first isomorphism theorem we then have that $\mathbb {T} \cong \mathbb {R} /2\pi \mathbb {Z} .$ After rescaling we can also say that $\mathbb {T} $ is isomorphic to $\mathbb {R} /\mathbb {Z} $. If complex numbers are realized as 2×2 real matrices (see complex number), the unit complex numbers correspond to 2×2 orthogonal matrices with unit determinant. Specifically, we have $e^{i\theta }\leftrightarrow {\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}=f\left(e^{i\theta }\right).$ This function shows that the circle group is isomorphic to the special orthogonal group $\mathrm {SO} (2)$ since $f\left(e^{i\theta _{1}}e^{i\theta _{2}}\right)={\begin{bmatrix}\cos(\theta _{1}+\theta _{2})&-\sin(\theta _{1}+\theta _{2})\\\sin(\theta _{1}+\theta _{2})&\cos(\theta _{1}+\theta _{2})\end{bmatrix}}=f\left(e^{i\theta _{1}}\right)\times f\left(e^{i\theta _{2}}\right),$ where $\times $ is matrix multiplication. This isomorphism has the geometric interpretation that multiplication by a unit complex number is a proper rotation in the complex (and real) plane, and every such rotation is of this form. Properties Every compact Lie group $\mathrm {G} $ of dimension > 0 has a subgroup isomorphic to the circle group. This means that, thinking in terms of symmetry, a compact symmetry group acting continuously can be expected to have one-parameter circle subgroups acting; the consequences in physical systems are seen, for example, at rotational invariance and spontaneous symmetry breaking. The circle group has many subgroups, but its only proper closed subgroups consist of roots of unity: For each integer $n>0$, the $n$-th roots of unity form a cyclic group of order $n$, which is unique up to isomorphism. In the same way that the real numbers are a completion of the b-adic rationals $\mathbb {Z} [{\tfrac {1}{b}}]$ for every natural number $b>1$, the circle group is the completion of the Prüfer group $\mathbb {Z} [{\tfrac {1}{b}}]/\mathbb {Z} $ for $b$, given by the direct limit $\varinjlim \mathbb {Z} /b^{n}\mathbb {Z} $. Representations The representations of the circle group are easy to describe. It follows from Schur's lemma that the irreducible complex representations of an abelian group are all 1-dimensional. Since the circle group is compact, any representation $\rho :\mathbb {T} \to \mathrm {GL} (1,\mathbb {C} )\cong \mathbb {C} ^{\times }$ must take values in ${\mbox{U}}(1)\cong \mathbb {T} $. Therefore, the irreducible representations of the circle group are just the homomorphisms from the circle group to itself. For each integer $n$ we can define a representation $\phi _{n}$ of the circle group by $\phi _{n}(z)=z^{n}$. These representations are all inequivalent. The representation $\phi _{-n}$ is conjugate to $\phi _{n}$: $\phi _{-n}={\overline {\phi _{n}}}.$ These representations are just the characters of the circle group.
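Both the rotation-matrix isomorphism and the conjugation relation between characters are easy to confirm numerically. A small illustrative Python sketch (not part of the article; the names f and phi are ad hoc):

import numpy as np

def f(theta):
    # rotation matrix corresponding to the unit complex number e^(i*theta)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.9, 2.3
# f is a homomorphism: the matrix of a product is the product of the matrices
assert np.allclose(f(t1 + t2), f(t1) @ f(t2))

def phi(n, z):
    # the character phi_n of the circle group
    return z ** n

z = np.exp(1j * 0.9)
n = 5
# phi_{-n} agrees with the complex conjugate of phi_n on the unit circle
assert np.isclose(phi(-n, z), np.conjugate(phi(n, z)))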
The character group of $\mathbb {T} $ is clearly an infinite cyclic group generated by $\phi _{1}$: $\operatorname {Hom} (\mathbb {T} ,\mathbb {T} )\cong \mathbb {Z} .$ The irreducible real representations of the circle group are the trivial representation (which is 1-dimensional) and the representations $\rho _{n}(e^{i\theta })={\begin{bmatrix}\cos n\theta &-\sin n\theta \\\sin n\theta &\cos n\theta \end{bmatrix}},\quad n\in \mathbb {Z} ^{+},$ taking values in $\mathrm {SO} (2)$. Here we only have positive integers $n$, since the representation $\rho _{-n}$ is equivalent to $\rho _{n}$. Group structure The circle group $\mathbb {T} $ is a divisible group. Its torsion subgroup is given by the set of all $n$-th roots of unity for all $n$ and is isomorphic to $\mathbb {Q} /\mathbb {Z} $. The structure theorem for divisible groups and the axiom of choice together tell us that $\mathbb {T} $ is isomorphic to the direct sum of $\mathbb {Q} /\mathbb {Z} $ with a number of copies of $\mathbb {Q} $. The number of copies of $\mathbb {Q} $ must be ${\mathfrak {c}}$ (the cardinality of the continuum) in order for the cardinality of the direct sum to be correct. But the direct sum of ${\mathfrak {c}}$ copies of $\mathbb {Q} $ is isomorphic to $\mathbb {R} $, as $\mathbb {R} $ is a vector space of dimension ${\mathfrak {c}}$ over $\mathbb {Q} $. Thus $\mathbb {T} \cong \mathbb {R} \oplus (\mathbb {Q} /\mathbb {Z} ).$ The isomorphism $\mathbb {C} ^{\times }\cong \mathbb {R} \oplus (\mathbb {Q} /\mathbb {Z} )$ can be proved in the same way, since $\mathbb {C} ^{\times }$ is also a divisible abelian group whose torsion subgroup is the same as the torsion subgroup of $\mathbb {T} $. See also • Group of rational points on the unit circle • One-parameter subgroup • n-sphere • Orthogonal group • Phase factor (application in quantum-mechanics) • Rotation number • Solenoid Notes 1. James, Robert C.; James, Glenn (1992). Mathematics Dictionary (Fifth ed.). Chapman & Hall. p. 436. ISBN 9780412990410. a unit complex number is a complex number of unit absolute value. References • James, Robert C.; James, Glenn (1992). Mathematics Dictionary (Fifth ed.). Chapman & Hall. ISBN 9780412990410. Further reading • Hua Luogeng (1981) Starting with the unit circle, Springer Verlag, ISBN 0-387-90589-8. External links • Homeomorphism and the Group Structure on a Circle
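The torsion subgroup described in the group-structure paragraph above can also be seen numerically: roots of unity have finite order, while a unit complex number whose angle is an irrational multiple of 2π never returns exactly to 1. A small illustrative Python sketch (not part of the article; the helper order is ad hoc):

import cmath, math

def order(z, max_n=1000):
    # smallest n >= 1 with z**n == 1 (up to rounding), or None if none is found
    w = z
    for n in range(1, max_n + 1):
        if abs(w - 1) < 1e-9:
            return n
        w *= z
    return None

zeta = cmath.exp(2j * math.pi / 12)                     # a primitive 12th root of unity
print(order(zeta))                                      # 12: a torsion element
print(order(cmath.exp(2j * math.pi * math.sqrt(2))))    # None: an element of infinite order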
Orthogonal group In mathematics, the orthogonal group in dimension $n$, denoted $\operatorname {O} (n)$, is the group of distance-preserving transformations of a Euclidean space of dimension $n$ that preserve a fixed point, where the group operation is given by composing transformations. The orthogonal group is sometimes called the general orthogonal group, by analogy with the general linear group. Equivalently, it is the group of $n\times n$ orthogonal matrices, where the group operation is given by matrix multiplication (an orthogonal matrix is a real matrix whose inverse equals its transpose). The orthogonal group is an algebraic group and a Lie group. It is compact. The orthogonal group in dimension $n$ has two connected components. The one that contains the identity element is a normal subgroup, called the special orthogonal group, and denoted $\operatorname {SO} (n)$. It consists of all orthogonal matrices of determinant 1. This group is also called the rotation group, generalizing the fact that in dimensions 2 and 3, its elements are the usual rotations around a point (in dimension 2) or a line (in dimension 3). In low dimension, these groups have been widely studied, see SO(2), SO(3) and SO(4). The other component consists of all orthogonal matrices of determinant –1. This component does not form a group, as the product of any two of its elements is of determinant 1, and therefore not an element of the component. By extension, for any field $F$, an $n\times n$ matrix with entries in $F$ such that its inverse equals its transpose is called an orthogonal matrix over $F$. The $n\times n$ orthogonal matrices form a subgroup, denoted $\operatorname {O} (n,F)$, of the general linear group $\operatorname {GL} (n,F)$; that is $\operatorname {O} (n,F)=\left\{Q\in \operatorname {GL} (n,F)\mid Q^{\mathsf {T}}Q=QQ^{\mathsf {T}}=I\right\}.$ More generally, given a non-degenerate symmetric bilinear form or quadratic form[1] on a vector space over a field, the orthogonal group of the form is the group of invertible linear maps that preserve the form.
The preceding orthogonal groups are the special case where, on some basis, the bilinear form is the dot product, or, equivalently, the quadratic form is the sum of the square of the coordinates. All orthogonal groups are algebraic groups, since the condition of preserving a form can be expressed as an equality of matrices. Name The name of "orthogonal group" originates from the following characterization of its elements. Given a Euclidean vector space $E$ of dimension $n$, the elements of the orthogonal group $\operatorname {O} (n)$ are, up to a uniform scaling (homothecy), the linear maps from $E$ to $E$ that map orthogonal vectors to orthogonal vectors. In Euclidean geometry The orthogonal group $\operatorname {O} (n)$ is the subgroup of the general linear group $\operatorname {GL} (n,\mathbb {R} )$, consisting of all endomorphisms that preserve the Euclidean norm; that is, endomorphisms $g$ such that $\|g(x)\|=\|x\|.$ Let $\operatorname {E} (n)$ be the group of the Euclidean isometries of a Euclidean space $S$ of dimension $n$. This group does not depend on the choice of a particular space, since all Euclidean spaces of the same dimension are isomorphic. The stabilizer subgroup of a point $x\in S$ is the subgroup of the elements $g\in \operatorname {E} (n)$ such that $g(x)=x$. This stabilizer is (or, more exactly, is isomorphic to) $\operatorname {O} (n)$, since the choice of a point as an origin induces an isomorphism between the Euclidean space and its associated Euclidean vector space. There is a natural group homomorphism $p$ from $\operatorname {E} (n)$ to $\operatorname {O} (n)$, which is defined by $p(g)(y-x)=g(y)-g(x),$ where, as usual, the subtraction of two points denotes the translation vector that maps the second point to the first one. This is a well defined homomorphism, since a straightforward verification shows that, if two pairs of points have the same difference, the same is true for their images by $g$ (for details, see Affine space § Subtraction and Weyl's axioms). The kernel of $p$ is the vector space of the translations. So, the translations form a normal subgroup of $\operatorname {E} (n)$, the stabilizers of two points are conjugate under the action of the translations, and all stabilizers are isomorphic to $\operatorname {O} (n)$. Moreover, the Euclidean group is a semidirect product of $\operatorname {O} (n)$ and the group of translations. It follows that the study of the Euclidean group is essentially reduced to the study of $\operatorname {O} (n)$. Special orthogonal group By choosing an orthonormal basis of a Euclidean vector space, the orthogonal group can be identified with the group (under matrix multiplication) of orthogonal matrices, which are the matrices such that $QQ^{\mathsf {T}}=I.$ It follows from this equation that the square of the determinant of Q equals 1, and thus the determinant of Q is either 1 or –1. The orthogonal matrices with determinant 1 form a subgroup called the special orthogonal group, denoted SO(n), consisting of all direct isometries of O(n), which are those that preserve the orientation of the space. SO(n) is a normal subgroup of O(n), as being the kernel of the determinant, which is a group homomorphism whose image is the multiplicative group {–1, +1}. This implies that the orthogonal group is an internal semidirect product of SO(n) and any subgroup formed with the identity and a reflection. 
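The defining condition and the decomposition just described can be illustrated with a few lines of Python (added here as a sketch; the matrices are randomly generated and the names are illustrative). The Q factor of a QR decomposition of a random real matrix is orthogonal, and multiplying by one fixed reflection moves it into SO(n) when its determinant is −1:

import numpy as np

rng = np.random.default_rng(0)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthogonal matrix

assert np.allclose(Q.T @ Q, np.eye(n))             # Q^T Q = I
assert np.isclose(abs(np.linalg.det(Q)), 1.0)      # det Q is +1 or -1

S = np.diag([-1.0] + [1.0] * (n - 1))              # a fixed reflection (det = -1)
R = S @ Q if np.linalg.det(Q) < 0 else Q           # R always lies in SO(n)
assert np.isclose(np.linalg.det(R), 1.0)
if np.linalg.det(Q) < 0:
    assert np.allclose(Q, S @ R)                   # Q factors as reflection times rotation, since S @ S = I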
The group with two elements {±I} (where I is the identity matrix) is a normal subgroup and even a characteristic subgroup of O(n), and, if n is even, also of SO(n). If n is odd, O(n) is the internal direct product of SO(n) and {±I}. The group SO(2) is abelian (this is not the case of SO(n) for every n > 2). Its finite subgroups are the cyclic group Ck of k-fold rotations, for every positive integer k. All these groups are normal subgroups of O(2) and SO(2). Canonical form For any element of O(n) there is an orthogonal basis, where its matrix has the form ${\begin{bmatrix}{\begin{matrix}R_{1}&&\\&\ddots &\\&&R_{k}\end{matrix}}&0\\0&{\begin{matrix}\pm 1&&\\&\ddots &\\&&\pm 1\end{matrix}}\\\end{bmatrix}},$ where the matrices R1, ..., Rk are 2-by-2 rotation matrices, that is matrices of the form ${\begin{bmatrix}a&b\\-b&a\end{bmatrix}},$ with $a^{2}+b^{2}=1.$ This results from the spectral theorem by regrouping eigenvalues that are complex conjugate, and taking into account that the absolute values of the eigenvalues of an orthogonal matrix are all equal to 1. The element belongs to SO(n) if and only if there are an even number of –1 on the diagonal. The special case of n = 3 is known as Euler's rotation theorem, which asserts that every (non-identity) element of SO(3) is a rotation about a unique axis-angle pair. Reflections Reflections are the elements of O(n) whose canonical form is ${\begin{bmatrix}-1&0\\0&I\end{bmatrix}},$ where I is the (n–1)×(n–1) identity matrix, and the zeros denote row or column zero matrices. In other words, a reflection is a transformation that transforms the space in its mirror image with respect to a hyperplane. In dimension two, every rotation is the product of two reflections. More precisely, a rotation of angle 𝜃 is the product of two reflections whose axes have an angle of 𝜃 / 2. Every element of O(n) is the product of at most n reflections. This results immediately from the above canonical form and the case of dimension two. The Cartan–Dieudonné theorem is the generalization of this result to the orthogonal group of a nondegenerate quadratic form over a field of characteristic different from two. The reflection through the origin (the map v ↦ −v) is an example of an element of O(n) that is not the product of fewer than n reflections. Symmetry group of spheres The orthogonal group O(n) is the symmetry group of the (n − 1)-sphere (for n = 3, this is just the sphere) and all objects with spherical symmetry, if the origin is chosen at the center. The symmetry group of a circle is O(2). The orientation-preserving subgroup SO(2) is isomorphic (as a real Lie group) to the circle group, also known as U(1), the multiplicative group of the complex numbers of absolute value equal to one. This isomorphism sends the complex number exp(φ i) = cos(φ) + i sin(φ) of absolute value 1 to the special orthogonal matrix ${\begin{bmatrix}\cos(\varphi )&-\sin(\varphi )\\\sin(\varphi )&\cos(\varphi )\end{bmatrix}}.$ In higher dimension, O(n) has a more complicated structure (in particular, it is no longer commutative). The topological structures of the n-sphere and O(n) are strongly correlated, and this correlation is widely used for studying both topological spaces. Group structure The groups O(n) and SO(n) are real compact Lie groups of dimension n(n − 1)/2. The group O(n) has two connected components, with SO(n) being the identity component, that is, the connected component containing the identity matrix. 
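The claim above that a plane rotation of angle θ is the product of two reflections whose axes meet at an angle of θ/2 can be checked directly; the following Python sketch (illustrative only, not part of the article) composes a reflection across the x-axis with a reflection across the line at angle θ/2:

import numpy as np

def reflection(alpha):
    # reflection of the plane across the line through the origin at angle alpha
    return np.array([[np.cos(2 * alpha),  np.sin(2 * alpha)],
                     [np.sin(2 * alpha), -np.cos(2 * alpha)]])

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = 1.1
# reflect across the x-axis first, then across the line at angle theta/2
assert np.allclose(reflection(theta / 2) @ reflection(0.0), rotation(theta))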
As algebraic groups The orthogonal group O(n) can be identified with the group of the matrices A such that $A^{\mathsf {T}}A=I.$ Since both members of this equation are symmetric matrices, this provides $\textstyle {\frac {n(n+1)}{2}}$ equations that the entries of an orthogonal matrix must satisfy, and which are not all satisfied by the entries of any non-orthogonal matrix. This proves that O(n) is an algebraic set. Moreover, it can be proved that its dimension is ${\frac {n(n-1)}{2}}=n^{2}-{\frac {n(n+1)}{2}},$ which implies that O(n) is a complete intersection. This implies that all its irreducible components have the same dimension, and that it has no embedded component. In fact, O(n) has two irreducible components, that are distinguished by the sign of the determinant (that is det(A) = 1 or det(A) = –1). Both are nonsingular algebraic varieties of the same dimension n(n – 1) / 2. The component with det(A) = 1 is SO(n). Maximal tori and Weyl groups A maximal torus in a compact Lie group G is a maximal subgroup among those that are isomorphic to Tk for some k, where T = SO(2) is the standard one-dimensional torus.[2] In O(2n) and SO(2n), for every maximal torus, there is a basis on which the torus consists of the block-diagonal matrices of the form ${\begin{bmatrix}R_{1}&&0\\&\ddots &\\0&&R_{n}\end{bmatrix}},$ where each Rj belongs to SO(2). In O(2n + 1) and SO(2n + 1), the maximal tori have the same form, bordered by a row and a column of zeros, and 1 on the diagonal. The Weyl group of SO(2n + 1) is the semidirect product $\{\pm 1\}^{n}\rtimes S_{n}$ of a normal elementary abelian 2-subgroup and a symmetric group, where the nontrivial element of each {±1} factor of {±1}n acts on the corresponding circle factor of T × {1} by inversion, and the symmetric group Sn acts on both {±1}n and T × {1} by permuting factors. The elements of the Weyl group are represented by matrices in O(2n) × {±1}. The Sn factor is represented by block permutation matrices with 2-by-2 blocks, and a final 1 on the diagonal. The {±1}n component is represented by block-diagonal matrices with 2-by-2 blocks either ${\begin{bmatrix}1&0\\0&1\end{bmatrix}}\quad {\text{or}}\quad {\begin{bmatrix}0&1\\1&0\end{bmatrix}},$ with the last component ±1 chosen to make the determinant 1. The Weyl group of SO(2n) is the subgroup $H_{n-1}\rtimes S_{n}<\{\pm 1\}^{n}\rtimes S_{n}$ of that of SO(2n + 1), where Hn−1 < {±1}n is the kernel of the product homomorphism {±1}n → {±1} given by $\left(\epsilon _{1},\ldots ,\epsilon _{n}\right)\mapsto \epsilon _{1}\cdots \epsilon _{n}$; that is, Hn−1 < {±1}n is the subgroup with an even number of minus signs. The Weyl group of SO(2n) is represented in SO(2n) by the preimages under the standard injection SO(2n) → SO(2n + 1) of the representatives for the Weyl group of SO(2n + 1). Those matrices with an odd number of ${\begin{bmatrix}0&1\\1&0\end{bmatrix}}$ blocks have no remaining final −1 coordinate to make their determinants positive, and hence cannot be represented in SO(2n). Topology Low-dimensional topology The low-dimensional (real) orthogonal groups are familiar spaces: • O(1) = S0, a two-point discrete space • SO(1) = {1} • SO(2) is S1 • SO(3) is RP3 [3] • SO(4) is doubly covered by SU(2) × SU(2) = S3 × S3. Fundamental group In terms of algebraic topology, for n > 2 the fundamental group of SO(n, R) is cyclic of order 2,[4] and the spin group Spin(n) is its universal cover. 
For n = 2 the fundamental group is infinite cyclic and the universal cover corresponds to the real line (the group Spin(2) is the unique connected 2-fold cover). Homotopy groups Generally, the homotopy groups πk(O) of the real orthogonal group are related to homotopy groups of spheres, and thus are in general hard to compute. However, one can compute the homotopy groups of the stable orthogonal group (aka the infinite orthogonal group), defined as the direct limit of the sequence of inclusions: $\operatorname {O} (0)\subset \operatorname {O} (1)\subset \operatorname {O} (2)\subset \cdots \subset O=\bigcup _{k=0}^{\infty }\operatorname {O} (k)$ Since the inclusions are all closed, hence cofibrations, this can also be interpreted as a union. On the other hand, Sn is a homogeneous space for O(n + 1), and one has the following fiber bundle: $\operatorname {O} (n)\to \operatorname {O} (n+1)\to S^{n},$ which can be understood as "The orthogonal group O(n + 1) acts transitively on the unit sphere Sn, and the stabilizer of a point (thought of as a unit vector) is the orthogonal group of the perpendicular complement, which is an orthogonal group one dimension lower." Thus the natural inclusion O(n) → O(n + 1) is (n − 1)-connected, so the homotopy groups stabilize, and πk(O(n + 1)) = πk(O(n)) for n > k + 1: thus the homotopy groups of the stable space equal the lower homotopy groups of the unstable spaces. From Bott periodicity we obtain Ω8O ≅ O, therefore the homotopy groups of O are 8-fold periodic, meaning πk + 8(O) = πk(O), and one need only to list the lower 8 homotopy groups: ${\begin{aligned}\pi _{0}(O)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{1}(O)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{2}(O)&=0\\\pi _{3}(O)&=\mathbf {Z} \\\pi _{4}(O)&=0\\\pi _{5}(O)&=0\\\pi _{6}(O)&=0\\\pi _{7}(O)&=\mathbf {Z} \end{aligned}}$ Relation to KO-theory Via the clutching construction, homotopy groups of the stable space O are identified with stable vector bundles on spheres (up to isomorphism), with a dimension shift of 1: πk(O) = πk + 1(BO). Setting KO = BO × Z = Ω−1O × Z (to make π0 fit into the periodicity), one obtains: ${\begin{aligned}\pi _{0}(KO)&=\mathbf {Z} \\\pi _{1}(KO)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{2}(KO)&=\mathbf {Z} /2\mathbf {Z} \\\pi _{3}(KO)&=0\\\pi _{4}(KO)&=\mathbf {Z} \\\pi _{5}(KO)&=0\\\pi _{6}(KO)&=0\\\pi _{7}(KO)&=0\end{aligned}}$ Low-dimensional groups The first few homotopy groups can be calculated by using the concrete descriptions of low-dimensional groups. • π0(O) = π0(O(1)) = Z/2Z, from orientation-preserving/reversing (this class survives to O(2) and hence stably) • π1(O) = π1(SO(3)) = Z/2Z, which is spin comes from SO(3) = RP3 = S3/(Z/2Z). • π2(O) = π2(SO(3)) = 0, which surjects onto π2(SO(4)); this latter thus vanishes. Lie groups From general facts about Lie groups, π2(G) always vanishes, and π3(G) is free (free abelian). Vector bundles From the vector bundle point of view, π0(KO) is vector bundles over S0, which is two points. Thus over each point, the bundle is trivial, and the non-triviality of the bundle is the difference between the dimensions of the vector spaces over the two points, so π0(KO) = Z is dimension. Loop spaces Using concrete descriptions of the loop spaces in Bott periodicity, one can interpret the higher homotopies of O in terms of simpler-to-analyze homotopies of lower order. Using π0, O and O/U have two components, KO = BO × Z and KSp = BSp × Z have countably many components, and the rest are connected. 
Interpretation of homotopy groups In a nutshell:[5] • π0(KO) = Z is about dimension • π1(KO) = Z/2Z is about orientation • π2(KO) = Z/2Z is about spin • π4(KO) = Z is about topological quantum field theory. Let R be any of the four division algebras R, C, H, O, and let LR be the tautological line bundle over the projective line RP1, and [LR] its class in K-theory. Noting that RP1 = S1, CP1 = S2, HP1 = S4, OP1 = S8, these yield vector bundles over the corresponding spheres, and • π1(KO) is generated by [LR] • π2(KO) is generated by [LC] • π4(KO) is generated by [LH] • π8(KO) is generated by [LO] From the point of view of symplectic geometry, π0(KO) ≅ π8(KO) = Z can be interpreted as the Maslov index, thinking of it as the fundamental group π1(U/O) of the stable Lagrangian Grassmannian as U/O ≅ Ω7(KO), so π1(U/O) = π1+7(KO). Whitehead tower The orthogonal group anchors a Whitehead tower: $\ldots \rightarrow \operatorname {Fivebrane} (n)\rightarrow \operatorname {String} (n)\rightarrow \operatorname {Spin} (n)\rightarrow \operatorname {SO} (n)\rightarrow \operatorname {O} (n)$ which is obtained by successively removing (killing) homotopy groups of increasing order. This is done by constructing short exact sequences starting with an Eilenberg–MacLane space for the homotopy group to be removed. The first few entries in the tower are the spin group and the string group, and are preceded by the fivebrane group. The homotopy groups that are killed are in turn π0(O) to obtain SO from O, π1(O) to obtain Spin from SO, π3(O) to obtain String from Spin, and then π7(O) and so on to obtain the higher order branes. Of indefinite quadratic form over the reals Main article: Indefinite orthogonal group Over the real numbers, nondegenerate quadratic forms are classified by Sylvester's law of inertia, which asserts that, on a vector space of dimension n, such a form can be written as the difference of a sum of p squares and a sum of q squares, with p + q = n. In other words, there is a basis on which the matrix of the quadratic form is a diagonal matrix, with p entries equal to 1, and q entries equal to –1. The pair (p, q) called the inertia, is an invariant of the quadratic form, in the sense that it does not depend on the way of computing the diagonal matrix. The orthogonal group of a quadratic form depends only on the inertia, and is thus generally denoted O(p, q). Moreover, as a quadratic form and its opposite have the same orthogonal group, one has O(p, q) = O(q, p). The standard orthogonal group is O(n) = O(n, 0) = O(0, n). So, in the remainder of this section, it is supposed that neither p nor q is zero. The subgroup of the matrices of determinant 1 in O(p, q) is denoted SO(p, q). The group O(p, q) has four connected components, depending on whether an element preserves orientation on either of the two maximal subspaces where the quadratic form is positive definite or negative definite. The component of the identity, whose elements preserve orientation on both subspaces, is denoted SO+(p, q). The group O(3, 1) is the Lorentz group that is fundamental in relativity theory. Here the 3 corresponds to space coordinates, and 1 corresponds to the time coordinate. Of complex quadratic forms Over the field C of complex numbers, every non-degenerate quadratic form in n variables is equivalent to $x_{1}^{2}+\cdots +x_{n}^{2}$. Thus, up to isomorphism, there is only one non-degenerate complex quadratic space of dimension n, and one associated orthogonal group, usually denoted O(n, C). 
It is the group of complex orthogonal matrices, complex matrices whose product with their transpose is the identity matrix. As in the real case, O(n, C) has two connected components. The component of the identity consists of all matrices of determinant 1 in O(n, C); it is denoted SO(n, C). The groups O(n, C) and SO(n, C) are complex Lie groups of dimension n(n − 1)/2 over C (the dimension over R is twice that). For n ≥ 2, these groups are noncompact. As in the real case, SO(n, C) is not simply connected: For n > 2, the fundamental group of SO(n, C) is cyclic of order 2, whereas the fundamental group of SO(2, C) is Z. Over finite fields Characteristic different from two Over a field of characteristic different from two, two quadratic forms are equivalent if their matrices are congruent, that is if a change of basis transforms the matrix of the first form into the matrix of the second form. Two equivalent quadratic forms have clearly the same orthogonal group. The non-degenerate quadratic forms over a finite field of characteristic different from two are completely classified into congruence classes, and it results from this classification that there is only one orthogonal group in odd dimension and two in even dimension. More precisely, Witt's decomposition theorem asserts that (in characteristic different from two) every vector space equipped with a non-degenerate quadratic form Q can be decomposed as a direct sum of pairwise orthogonal subspaces $V=L_{1}\oplus L_{2}\oplus \cdots \oplus L_{m}\oplus W,$ where each Li is a hyperbolic plane (that is there is a basis such that the matrix of the restriction of Q to Li has the form $\textstyle {\begin{bmatrix}0&1\\1&0\end{bmatrix}}$), and the restriction of Q to W is anisotropic (that is, Q(w) ≠ 0 for every nonzero w in W). The Chevalley–Warning theorem asserts that, over a finite field, the dimension of W is at most two. If the dimension of V is odd, the dimension of W is thus equal to one, and its matrix is congruent either to $\textstyle {\begin{bmatrix}1\end{bmatrix}}$ or to $\textstyle {\begin{bmatrix}\varphi \end{bmatrix}},$ where 𝜑 is a non-square scalar. It results that there is only one orthogonal group that is denoted O(2n + 1, q), where q is the number of elements of the finite field (a power of an odd prime).[6] If the dimension of W is two and –1 is not a square in the ground field (that is, if its number of elements q is congruent to 3 modulo 4), the matrix of the restriction of Q to W is congruent to either I or –I, where I is the 2×2 identity matrix. If the dimension of W is two and –1 is a square in the ground field (that is, if q is congruent to 1, modulo 4) the matrix of the restriction of Q to W is congruent to $\textstyle {\begin{bmatrix}1&0\\0&\phi \end{bmatrix}},$ 𝜙 is any non-square scalar. This implies that if the dimension of V is even, there are only two orthogonal groups, depending whether the dimension of W zero or two. They are denoted respectively O+(2n, q) and O−(2n, q).[6] The orthogonal group Oϵ(2, q) is a dihedral group of order 2(q − ϵ), where ϵ = ±. Proof For studying the orthogonal group of Oϵ(2, q), one can suppose that the matrix of the quadratic form is $Q={\begin{bmatrix}1&0\\0&-\omega \end{bmatrix}},$ because, given a quadratic form, there is a basis where its matrix is diagonalizable. A matrix $A={\begin{bmatrix}a&b\\c&d\end{bmatrix}}$ belongs to the orthogonal group if $AQA^{\text{T}}=Q,$ that is, a2 – ωb2 = 1, ac – ωbd = 0, and c2 – ωd2 = –ω. 
As a and b cannot be both zero (because of the first equation), the second equation implies the existence of ϵ in Fq, such that c = ϵωb and d = ϵa. Reporting these values in the third equation, and using the first equation, one gets that ϵ2 = 1, and thus the orthogonal group consists of the matrices ${\begin{bmatrix}a&b\\\epsilon \omega b&\epsilon a\end{bmatrix}},$ where a2 – ωb2 = 1 and ϵ = ±1. Moreover, the determinant of the matrix is ϵ. For further studying the orthogonal group, it is convenient to introduce a square root α of ω. This square root belongs to Fq if the orthogonal group is O+(2, q), and to Fq2 otherwise. Setting x = a + αb, and y = a – αb, one has $xy=1,\qquad a={\frac {x+y}{2}}\qquad b={\frac {x-y}{2\alpha }}.$ If $A_{1}={\begin{bmatrix}a_{1}&b_{1}\\\omega b_{1}&a_{1}\end{bmatrix}}$ and $A_{2}={\begin{bmatrix}a_{2}&b_{2}\\\omega b_{2}&a_{2}\end{bmatrix}}$ are two matrices of determinant one in the orthogonal group then $A_{1}A_{2}={\begin{bmatrix}a_{1}a_{2}+\omega b_{1}b_{2}&a_{1}b_{2}+b_{1}a_{2}\\\omega b_{1}a_{2}+\omega a_{1}b_{2}&\omega b_{1}b_{2}+a_{1}a_{1}\end{bmatrix}}.$ This is an orthogonal matrix ${\begin{bmatrix}a&b\\\omega b&a\end{bmatrix}},$ with a = a1a2 + ωb1b2, and b = a1b2 + b1a2. Thus $a+\alpha b=(a_{1}+\alpha b_{1})(a_{2}+\alpha b_{2}).$ It follows that the map $(a,b)\mapsto a+\alpha b$ is a homomorphism of the group of orthogonal matrices of determinant one into the multiplicative group of Fq2. In the case of O+(2n, q), the image is the multiplicative group of Fq, which is a cyclic group of order q. In the case of O–(2n, q), the above x and y are conjugate, and are therefore the image of each other by the Frobenius automorphism. This meant that $y=x^{-1}=x^{q},$ and thus $x^{q+1}=1.$ For every such x one can reconstruct a corresponding orthogonal matrix. It follows that the map $(a,b)\mapsto a+\alpha b$ is a group isomorphism from the orthogonal matrices of determinant 1 to the group of the (q + 1)-roots of unity. This group is a cyclic group of order q + 1 which consists of the powers of $g^{q-1},$ where g is a primitive element of Fq2, For finishing the proof, it suffices to verify that the group all orthogonal matrices is not abelian, and is the semidirect product of the group {1, –1} and the group of orthogonal matrices of determinant one. The comparison of this proof with the real case may be illuminating. 
Here two group isomorphisms are involved: ${\begin{aligned}\mathbb {Z} /(q+1)\mathbb {Z} &\to T\\k&\mapsto g^{(q-1)k},\end{aligned}}$ where g is a primitive element of Fq2 and T is the multiplicative group of the element of norm one in Fq2 ; ${\begin{aligned}\mathbb {T} &\to \operatorname {SO} ^{+}(2,\mathbf {F} _{q})\\x&\mapsto {\begin{bmatrix}a&b\\\omega b&a\end{bmatrix}},\end{aligned}}$ with $a={\frac {x+x^{-1}}{2}}$ and $b={\frac {x-x^{-1}}{2\alpha }}.$ In the real case, the corresponding isomorphisms are: ${\begin{aligned}\mathbb {R} /2\pi \mathbb {R} &\to C\\\theta &\mapsto e^{i\theta },\end{aligned}}$ where C is the circle of the complex numbers of norm one; ${\begin{aligned}\mathbb {C} &\to \operatorname {SO} (2,\mathbb {R} )\\x&\mapsto {\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}},\end{aligned}}$ with $\cos \theta ={\frac {e^{i\theta }+e^{-i\theta }}{2}}$ and $\sin \theta ={\frac {e^{i\theta }-e^{-i\theta }}{2i}}.$ When the characteristic is not two, the order of the orthogonal groups are[7] $\left|\operatorname {O} (2n+1,q)\right|=2q^{n^{2}}\prod _{i=1}^{n}\left(q^{2i}-1\right),$ $\left|\operatorname {O} ^{+}(2n,q)\right|=2q^{n(n-1)}\left(q^{n}-1\right)\prod _{i=1}^{n-1}\left(q^{2i}-1\right),$ $\left|\operatorname {O} ^{-}(2n,q)\right|=2q^{n(n-1)}\left(q^{n}+1\right)\prod _{i=1}^{n-1}\left(q^{2i}-1\right).$ In characteristic two, the formulas are the same, except that the factor 2 of $\left|\operatorname {O} (2n+1,q)\right|$ must be removed. The Dickson invariant For orthogonal groups, the Dickson invariant is a homomorphism from the orthogonal group to the quotient group Z/2Z (integers modulo 2), taking the value 0 in case the element is the product of an even number of reflections, and the value of 1 otherwise.[8] Algebraically, the Dickson invariant can be defined as D(f) = rank(I − f) modulo 2, where I is the identity (Taylor 1992, Theorem 11.43). Over fields that are not of characteristic 2 it is equivalent to the determinant: the determinant is −1 to the power of the Dickson invariant. Over fields of characteristic 2, the determinant is always 1, so the Dickson invariant gives more information than the determinant. The special orthogonal group is the kernel of the Dickson invariant[8] and usually has index 2 in O(n, F ).[9] When the characteristic of F is not 2, the Dickson Invariant is 0 whenever the determinant is 1. Thus when the characteristic is not 2, SO(n, F ) is commonly defined to be the elements of O(n, F ) with determinant 1. Each element in O(n, F ) has determinant ±1. Thus in characteristic 2, the determinant is always 1. The Dickson invariant can also be defined for Clifford groups and pin groups in a similar way (in all dimensions). Orthogonal groups of characteristic 2 Over fields of characteristic 2 orthogonal groups often exhibit special behaviors, some of which are listed in this section. (Formerly these groups were known as the hypoabelian groups, but this term is no longer used.) • Any orthogonal group over any field is generated by reflections, except for a unique example where the vector space is 4-dimensional over the field with 2 elements and the Witt index is 2.[10] A reflection in characteristic two has a slightly different definition. In characteristic two, the reflection orthogonal to a vector u takes a vector v to v + B(v, u)/Q(u) · u where B is the bilinear form and Q is the quadratic form associated to the orthogonal geometry. 
Compare this to the Householder reflection of odd characteristic or characteristic zero, which takes v to v − 2·B(v, u)/Q(u) · u. • The center of the orthogonal group usually has order 1 in characteristic 2, rather than 2, since I = −I. • In odd dimensions 2n + 1 in characteristic 2, orthogonal groups over perfect fields are the same as symplectic groups in dimension 2n. In fact the symmetric form is alternating in characteristic 2, and as the dimension is odd it must have a kernel of dimension 1, and the quotient by this kernel is a symplectic space of dimension 2n, acted upon by the orthogonal group. • In even dimensions in characteristic 2 the orthogonal group is a subgroup of the symplectic group, because the symmetric bilinear form of the quadratic form is also an alternating form. The spinor norm The spinor norm is a homomorphism from an orthogonal group over a field F to the quotient group F×/(F×)2 (the multiplicative group of the field F up to multiplication by square elements), that takes reflection in a vector of norm n to the image of n in F×/(F×)2.[11] For the usual orthogonal group over the reals, it is trivial, but it is often non-trivial over other fields, or for the orthogonal group of a quadratic form over the reals that is not positive definite. Galois cohomology and orthogonal groups In the theory of Galois cohomology of algebraic groups, some further points of view are introduced. They have explanatory value, in particular in relation with the theory of quadratic forms; but were for the most part post hoc, as far as the discovery of the phenomena is concerned. The first point is that quadratic forms over a field can be identified as a Galois H1, or twisted forms (torsors) of an orthogonal group. As an algebraic group, an orthogonal group is in general neither connected nor simply-connected; the latter point brings in the spin phenomena, while the former is related to the determinant. The 'spin' name of the spinor norm can be explained by a connection to the spin group (more accurately a pin group). This may now be explained quickly by Galois cohomology (which however postdates the introduction of the term by more direct use of Clifford algebras). The spin covering of the orthogonal group provides a short exact sequence of algebraic groups. $1\rightarrow \mu _{2}\rightarrow \mathrm {Pin} _{V}\rightarrow \mathrm {O_{V}} \rightarrow 1$ Here μ2 is the algebraic group of square roots of 1; over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action. The connecting homomorphism from H0(OV), which is simply the group OV(F) of F-valued points, to H1(μ2) is essentially the spinor norm, because H1(μ2) is isomorphic to the multiplicative group of the field modulo squares. There is also the connecting homomorphism from H1 of the orthogonal group, to the H2 of the kernel of the spin covering. The cohomology is non-abelian so that this is as far as we can go, at least with the conventional definitions. Lie algebra The Lie algebra corresponding to Lie groups O(n, F ) and SO(n, F ) consists of the skew-symmetric n × n matrices, with the Lie bracket [ , ] given by the commutator. One Lie algebra corresponds to both groups. It is often denoted by ${\mathfrak {o}}(n,F)$ or ${\mathfrak {so}}(n,F)$, and called the orthogonal Lie algebra or special orthogonal Lie algebra. 
Over real numbers, these Lie algebras for different n are the compact real forms of two of the four families of semisimple Lie algebras: in odd dimension Bk, where n = 2k + 1, while in even dimension Dr, where n = 2r. Since the group SO(n) is not simply connected, the representation theory of the orthogonal Lie algebras includes both representations corresponding to ordinary representations of the orthogonal groups, and representations corresponding to projective representations of the orthogonal groups. (The projective representations of SO(n) are just linear representations of the universal cover, the spin group Spin(n).) The latter are the so-called spin representation, which are important in physics. More generally, given a vector space $V$ (over a field with characteristic not equal to 2) with a nondegenerate symmetric bilinear form $(\cdot ,\cdot )$, the special orthogonal Lie algebra consists of tracefree endomorphisms $\phi $ which are skew-symmetric for this form ($(\phi A,B)+(A,\phi B)=0$). Over a field of characteristic 2 we consider instead the alternating endomorphisms. Concretely we can equate these with the alternating tensors $\Lambda ^{2}V$. The correspondence is given by: $v\wedge w\mapsto (v,\cdot )w-(w,\cdot )v$ This description applies equally for the indefinite special orthogonal Lie algebras ${\mathfrak {so}}(p,q)$ for symmetric bilinear forms with signature $(p,q)$. Over real numbers, this characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation or "curl", hence the name. Related groups The orthogonal groups and special orthogonal groups have a number of important subgroups, supergroups, quotient groups, and covering groups. These are listed below. The inclusions O(n) ⊂ U(n) ⊂ USp(2n) and USp(n) ⊂ U(n) ⊂ O(2n) are part of a sequence of 8 inclusions used in a geometric proof of the Bott periodicity theorem, and the corresponding quotient spaces are symmetric spaces of independent interest – for example, U(n)/O(n) is the Lagrangian Grassmannian. Lie subgroups In physics, particularly in the areas of Kaluza–Klein compactification, it is important to find out the subgroups of the orthogonal group. The main ones are: $\mathrm {O} (n)\supset \mathrm {O} (n-1)$ – preserve an axis $\mathrm {O} (2n)\supset \mathrm {U} (n)\supset \mathrm {SU} (n)$ – U(n) are those that preserve a compatible complex structure or a compatible symplectic structure – see 2-out-of-3 property; SU(n) also preserves a complex orientation. $\mathrm {O} (2n)\supset \mathrm {USp} (n)$ $\mathrm {O} (7)\supset \mathrm {G} _{2}$ Lie supergroups The orthogonal group O(n) is also an important subgroup of various Lie groups: ${\begin{aligned}\mathrm {U} (n)&\supset \mathrm {O} (n)\\\mathrm {USp} (2n)&\supset \mathrm {O} (n)\\\mathrm {G} _{2}&\supset \mathrm {O} (3)\\\mathrm {F} _{4}&\supset \mathrm {O} (9)\\\mathrm {E} _{6}&\supset \mathrm {O} (10)\\\mathrm {E} _{7}&\supset \mathrm {O} (12)\\\mathrm {E} _{8}&\supset \mathrm {O} (16)\end{aligned}}$ Conformal group Main article: Conformal group Being isometries, real orthogonal transforms preserve angles, and are thus conformal maps, though not all conformal linear transforms are orthogonal. In classical terms this is the difference between congruence and similarity, as exemplified by SSS (side-side-side) congruence of triangles and AAA (angle-angle-angle) similarity of triangles. 
The group of conformal linear maps of Rn is denoted CO(n) for the conformal orthogonal group, and consists of the product of the orthogonal group with the group of dilations. If n is odd, these two subgroups do not intersect, and they are a direct product: CO(2k + 1) = O(2k + 1) × R∗, where R∗ = R∖{0} is the real multiplicative group, while if n is even, these subgroups intersect in ±1, so this is not a direct product, but it is a direct product with the subgroup of dilation by a positive scalar: CO(2k) = O(2k) × R+. Similarly one can define CSO(n); this is always: CSO(n) = CO(n) ∩ GL+(n) = SO(n) × R+. Discrete subgroups As the orthogonal group is compact, discrete subgroups are equivalent to finite subgroups.[note 1] These subgroups are known as point groups and can be realized as the symmetry groups of polytopes. A very important class of examples are the finite Coxeter groups, which include the symmetry groups of regular polytopes. Dimension 3 is particularly studied – see point groups in three dimensions, polyhedral groups, and list of spherical symmetry groups. In 2 dimensions, the finite groups are either cyclic or dihedral – see point groups in two dimensions. Other finite subgroups include: • Permutation matrices (the Coxeter group An) • Signed permutation matrices (the Coxeter group Bn); also equals the intersection of the orthogonal group with the integer matrices.[note 2] Covering and quotient groups The orthogonal group is neither simply connected nor centerless, and thus has both a covering group and a quotient group, respectively: • Two covering Pin groups, Pin+(n) → O(n) and Pin−(n) → O(n), • The quotient projective orthogonal group, O(n) → PO(n). These are all 2-to-1 covers. For the special orthogonal group, the corresponding groups are: • Spin group, Spin(n) → SO(n), • Projective special orthogonal group, SO(n) → PSO(n). Spin is a 2-to-1 cover, while in even dimension, PSO(2k) is a 2-to-1 cover, and in odd dimension PSO(2k + 1) is a 1-to-1 cover; i.e., isomorphic to SO(2k + 1). These groups, Spin(n), SO(n), and PSO(n) are Lie group forms of the compact special orthogonal Lie algebra, ${\mathfrak {so}}(n,\mathbb {R} )$ – Spin is the simply connected form, while PSO is the centerless form, and SO is in general neither.[note 3] In dimension 3 and above these are the covers and quotients, while dimension 2 and below are somewhat degenerate; see specific articles for details. Principal homogeneous space: Stiefel manifold Main article: Stiefel manifold The principal homogeneous space for the orthogonal group O(n) is the Stiefel manifold Vn(Rn) of orthonormal bases (orthonormal n-frames). In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthogonal basis to any other orthogonal basis. The other Stiefel manifolds Vk(Rn) for k < n of incomplete orthonormal bases (orthonormal k-frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any k-frame can be taken to any other k-frame by an orthogonal map, but this map is not uniquely determined. 
See also Specific transforms • Coordinate rotations and reflections • Reflection through the origin Specific groups • rotation group, SO(3, R) • SO(8) Related groups • indefinite orthogonal group • unitary group • symplectic group Lists of groups • list of finite simple groups • list of simple Lie groups Representation theory • Representations of classical Lie groups • Brauer algebra Notes 1. Infinite subsets of a compact space have an accumulation point and are not discrete. 2. O(n) ∩ GL(n, Z) equals the signed permutation matrices because an integer vector of norm 1 must have a single non-zero entry, which must be ±1 (if it has two non-zero entries or a larger entry, the norm will be larger than 1), and in an orthogonal matrix these entries must be in different coordinates, which is exactly the signed permutation matrices. 3. In odd dimension, SO(2k + 1) ≅ PSO(2k + 1) is centerless (but not simply connected), while in even dimension SO(2k) is neither centerless nor simply connected. Citations 1. For base fields of characteristic not 2, the definition in terms of a symmetric bilinear form is equivalent to that in terms of a quadratic form, but in characteristic 2 these notions differ. 2. Hall 2015 Theorem 11.2 3. Hall 2015 Section 1.3.4 4. Hall 2015 Proposition 13.10 5. Baez, John. "Week 105". This Week's Finds in Mathematical Physics. Retrieved 2023-02-01. 6. Wilson, Robert A. (2009). The finite simple groups. Graduate Texts in Mathematics. Vol. 251. London: Springer. pp. 69–75. ISBN 978-1-84800-987-5. Zbl 1203.20012. 7. (Taylor 1992, p. 141) 8. Knus, Max-Albert (1991), Quadratic and Hermitian forms over rings, Grundlehren der Mathematischen Wissenschaften, vol. 294, Berlin etc.: Springer-Verlag, p. 224, ISBN 3-540-52117-8, Zbl 0756.11008 9. (Taylor 1992, page 160) 10. (Grove 2002, Theorem 6.6 and 14.16) 11. Cassels 1978, p. 178 References • Cassels, J.W.S. (1978), Rational Quadratic Forms, London Mathematical Society Monographs, vol. 13, Academic Press, ISBN 0-12-163260-1, Zbl 0395.10029 • Grove, Larry C. (2002), Classical groups and geometric algebra, Graduate Studies in Mathematics, vol. 39, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2019-3, MR 1859189 • Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666 • Taylor, Donald E. (1992), The Geometry of the Classical Groups, Sigma Series in Pure Mathematics, vol. 9, Berlin: Heldermann Verlag, ISBN 3-88538-009-9, MR 1189139, Zbl 0767.20001 External links • "Orthogonal group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • John Baez "This Week's Finds in Mathematical Physics" week 105 • John Baez on Octonions • (in Italian) n-dimensional Special Orthogonal Group parametrization
Wikipedia
SO(5) In mathematics, SO(5), also denoted SO5(R) or SO(5,R), is the special orthogonal group of degree 5 over the field R of real numbers, i.e. (isomorphic to) the group of orthogonal 5×5 matrices of determinant 1. Geometric interpretation SO(5) is a subgroup of the direct Euclidean group E+(5), the group of direct isometries, i.e., isometries preserving orientation, of R5, consisting of elements which leave the origin fixed. More precisely, we have: SO(5) $\cong $ E+(5) / T where T is the translational group of R5. Lie group SO(5) is a simple Lie group of dimension 10. See also • Orthogonal matrix • Orthogonal group • Rotation group SO(3) • List of simple Lie groups
Wikipedia
SPIKE algorithm The SPIKE algorithm is a hybrid parallel solver for banded linear systems developed by Eric Polizzi and Ahmed Sameh^ Overview The SPIKE algorithm deals with a linear system AX = F, where A is a banded $n\times n$ matrix of bandwidth much less than $n$, and F is an $n\times s$ matrix containing $s$ right-hand sides. It is divided into a preprocessing stage and a postprocessing stage. Preprocessing stage In the preprocessing stage, the linear system AX = F is partitioned into a block tridiagonal form ${\begin{bmatrix}{\boldsymbol {A}}_{1}&{\boldsymbol {B}}_{1}\\{\boldsymbol {C}}_{2}&{\boldsymbol {A}}_{2}&{\boldsymbol {B}}_{2}\\&\ddots &\ddots &\ddots \\&&{\boldsymbol {C}}_{p-1}&{\boldsymbol {A}}_{p-1}&{\boldsymbol {B}}_{p-1}\\&&&{\boldsymbol {C}}_{p}&{\boldsymbol {A}}_{p}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {X}}_{1}\\{\boldsymbol {X}}_{2}\\\vdots \\{\boldsymbol {X}}_{p-1}\\{\boldsymbol {X}}_{p}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {F}}_{1}\\{\boldsymbol {F}}_{2}\\\vdots \\{\boldsymbol {F}}_{p-1}\\{\boldsymbol {F}}_{p}\end{bmatrix}}.$ Assume, for the time being, that the diagonal blocks Aj (j = 1,...,p with p ≥ 2) are nonsingular. Define a block diagonal matrix D = diag(A1,...,Ap), then D is also nonsingular. Left-multiplying D−1 to both sides of the system gives ${\begin{bmatrix}{\boldsymbol {I}}&{\boldsymbol {V}}_{1}\\{\boldsymbol {W}}_{2}&{\boldsymbol {I}}&{\boldsymbol {V}}_{2}\\&\ddots &\ddots &\ddots \\&&{\boldsymbol {W}}_{p-1}&{\boldsymbol {I}}&{\boldsymbol {V}}_{p-1}\\&&&{\boldsymbol {W}}_{p}&{\boldsymbol {I}}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {X}}_{1}\\{\boldsymbol {X}}_{2}\\\vdots \\{\boldsymbol {X}}_{p-1}\\{\boldsymbol {X}}_{p}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {G}}_{1}\\{\boldsymbol {G}}_{2}\\\vdots \\{\boldsymbol {G}}_{p-1}\\{\boldsymbol {G}}_{p}\end{bmatrix}},$ which is to be solved in the postprocessing stage. Left-multiplication by D−1 is equivalent to solving $p$ systems of the form Aj[Vj Wj Gj] = [Bj Cj Fj] (omitting W1 and C1 for $j=1$, and Vp and Bp for $j=p$), which can be carried out in parallel. Due to the banded nature of A, only a few leftmost columns of each Vj and a few rightmost columns of each Wj can be nonzero. These columns are called the spikes. Postprocessing stage Without loss of generality, assume that each spike contains exactly $m$ columns ($m$ is much less than $n$) (pad the spike with columns of zeroes if necessary). Partition the spikes in all Vj and Wj into ${\begin{bmatrix}{\boldsymbol {V}}_{j}^{(t)}\\{\boldsymbol {V}}_{j}'\\{\boldsymbol {V}}_{j}^{(b)}\end{bmatrix}}$ and ${\begin{bmatrix}{\boldsymbol {W}}_{j}^{(t)}\\{\boldsymbol {W}}_{j}'\\{\boldsymbol {W}}_{j}^{(b)}\\\end{bmatrix}}$ where V (t) j   , V (b) j   , W (t) j   and W (b) j   are of dimensions $m\times m$. 
Partition similarly all Xj and Gj into ${\begin{bmatrix}{\boldsymbol {X}}_{j}^{(t)}\\{\boldsymbol {X}}_{j}'\\{\boldsymbol {X}}_{j}^{(b)}\end{bmatrix}}$ and ${\begin{bmatrix}{\boldsymbol {G}}_{j}^{(t)}\\{\boldsymbol {G}}_{j}'\\{\boldsymbol {G}}_{j}^{(b)}\\\end{bmatrix}}.$ Notice that the system produced by the preprocessing stage can be reduced to a block pentadiagonal system of much smaller size (recall that $m$ is much less than $n$) ${\begin{bmatrix}{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{1}^{(t)}\\{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{(b)}&{\boldsymbol {0}}\\{\boldsymbol {0}}&{\boldsymbol {W}}_{2}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{2}^{(t)}\\&{\boldsymbol {W}}_{2}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{2}^{(b)}&{\boldsymbol {0}}\\&&\ddots &\ddots &\ddots &\ddots &\ddots \\&&&{\boldsymbol {0}}&{\boldsymbol {W}}_{p-1}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{p-1}^{(t)}\\&&&&{\boldsymbol {W}}_{p-1}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{p-1}^{(b)}&{\boldsymbol {0}}\\&&&&&{\boldsymbol {0}}&{\boldsymbol {W}}_{p}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}\\&&&&&&{\boldsymbol {W}}_{p}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {X}}_{1}^{(t)}\\{\boldsymbol {X}}_{1}^{(b)}\\{\boldsymbol {X}}_{2}^{(t)}\\{\boldsymbol {X}}_{2}^{(b)}\\\vdots \\{\boldsymbol {X}}_{p-1}^{(t)}\\{\boldsymbol {X}}_{p-1}^{(b)}\\{\boldsymbol {X}}_{p}^{(t)}\\{\boldsymbol {X}}_{p}^{(b)}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {G}}_{1}^{(t)}\\{\boldsymbol {G}}_{1}^{(b)}\\{\boldsymbol {G}}_{2}^{(t)}\\{\boldsymbol {G}}_{2}^{(b)}\\\vdots \\{\boldsymbol {G}}_{p-1}^{(t)}\\{\boldsymbol {G}}_{p-1}^{(b)}\\{\boldsymbol {G}}_{p}^{(t)}\\{\boldsymbol {G}}_{p}^{(b)}\end{bmatrix}}{\text{,}}$ which we call the reduced system and denote by S̃X̃ = G̃. Once all X (t) j   and X (b) j   are found, all X′j can be recovered with perfect parallelism via ${\begin{cases}{\boldsymbol {X}}_{1}'={\boldsymbol {G}}_{1}'-{\boldsymbol {V}}_{1}'{\boldsymbol {X}}_{2}^{(t)}{\text{,}}\\{\boldsymbol {X}}_{j}'={\boldsymbol {G}}_{j}'-{\boldsymbol {V}}_{j}'{\boldsymbol {X}}_{j+1}^{(t)}-{\boldsymbol {W}}_{j}'{\boldsymbol {X}}_{j-1}^{(b)}{\text{,}}&j=2,\ldots ,p-1{\text{,}}\\{\boldsymbol {X}}_{p}'={\boldsymbol {G}}_{p}'-{\boldsymbol {W}}_{p}{\boldsymbol {X}}_{p-1}^{(b)}{\text{.}}\end{cases}}$ SPIKE as a polyalgorithmic banded linear system solver Despite being logically divided into two stages, computationally, the SPIKE algorithm comprises three stages: 1. factorizing the diagonal blocks, 2. computing the spikes, 3. solving the reduced system. Each of these stages can be accomplished in several ways, allowing a multitude of variants. Two notable variants are the recursive SPIKE algorithm for non-diagonally-dominant cases and the truncated SPIKE algorithm for diagonally-dominant cases. Depending on the variant, a system can be solved either exactly or approximately. In the latter case, SPIKE is used as a preconditioner for iterative schemes like Krylov subspace methods and iterative refinement. Preprocessing stage The first step of the preprocessing stage is to factorize the diagonal blocks Aj. For numerical stability, one can use LAPACK's XGBTRF routines to LU factorize them with partial pivoting. Alternatively, one can also factorize them without partial pivoting but with a "diagonal boosting" strategy. The latter method tackles the issue of singular diagonal blocks. 
In concrete terms, the diagonal boosting strategy is as follows. Let 0ε denote a configurable "machine zero". In each step of LU factorization, we require that the pivot satisfy the condition |pivot| > 0ε‖A‖1. If the pivot does not satisfy the condition, it is then boosted by $\mathrm {pivot} ={\begin{cases}\mathrm {pivot} +\epsilon \lVert {\boldsymbol {A}}_{j}\rVert _{1}&{\text{if }}\mathrm {pivot} \geq 0{\text{,}}\\\mathrm {pivot} -\epsilon \lVert {\boldsymbol {A}}_{j}\rVert _{1}&{\text{if }}\mathrm {pivot} <0\end{cases}}$ where ε is a positive parameter depending on the machine's unit roundoff, and the factorization continues with the boosted pivot. This can be achieved by modified versions of ScaLAPACK's XDBTRF routines. After the diagonal blocks are factorized, the spikes are computed and passed on to the postprocessing stage. The two-partition case In the two-partition case, i.e., when p = 2, the reduced system S̃X̃ = G̃ has the form ${\begin{bmatrix}{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{1}^{(t)}\\{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{(b)}&{\boldsymbol {0}}\\{\boldsymbol {0}}&{\boldsymbol {W}}_{2}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}\\&{\boldsymbol {W}}_{2}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {X}}_{1}^{(t)}\\{\boldsymbol {X}}_{1}^{(b)}\\{\boldsymbol {X}}_{2}^{(t)}\\{\boldsymbol {X}}_{2}^{(b)}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {G}}_{1}^{(t)}\\{\boldsymbol {G}}_{1}^{(b)}\\{\boldsymbol {G}}_{2}^{(t)}\\{\boldsymbol {G}}_{2}^{(b)}\end{bmatrix}}{\text{.}}$ An even smaller system can be extracted from the center: ${\begin{bmatrix}{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{(b)}\\{\boldsymbol {W}}_{2}^{(t)}&{\boldsymbol {I}}_{m}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {X}}_{1}^{(b)}\\{\boldsymbol {X}}_{2}^{(t)}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {G}}_{1}^{(b)}\\{\boldsymbol {G}}_{2}^{(t)}\end{bmatrix}}{\text{,}}$ which can be solved using the block LU factorization ${\begin{bmatrix}{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{(b)}\\{\boldsymbol {W}}_{2}^{(t)}&{\boldsymbol {I}}_{m}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {I}}_{m}\\{\boldsymbol {W}}_{2}^{(t)}&{\boldsymbol {I}}_{m}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{(b)}\\&{\boldsymbol {I}}_{m}-{\boldsymbol {W}}_{2}^{(t)}{\boldsymbol {V}}_{1}^{(b)}\end{bmatrix}}{\text{.}}$ Once X (b) 1   and X (t) 2   are found, X (t) 1   and X (b) 2   can be computed via X (t) 1   = G (t) 1   − V (t) 1   X (t) 2   , X (b) 2   = G (b) 2   − W (b) 2   X (b) 1   . The multiple-partition case Assume that p is a power of two, i.e., p = 2d. Consider a block diagonal matrix D̃1 = diag(D̃ [1] 1   ,...,D̃ [1] p/2   ) where ${\boldsymbol {\tilde {D}}}_{k}^{[1]}={\begin{bmatrix}{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{2k-1}^{(t)}\\{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{2k-1}^{(b)}&{\boldsymbol {0}}\\{\boldsymbol {0}}&{\boldsymbol {W}}_{2k}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}\\&{\boldsymbol {W}}_{2k}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}\end{bmatrix}}$ for k = 1,...,p/2. Notice that D̃1 essentially consists of diagonal blocks of order 4m extracted from S̃. Now we factorize S̃ as S̃ = D̃1S̃2. 
The new matrix S̃2 has the form ${\begin{bmatrix}{\boldsymbol {I}}_{3m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{1}^{[2](t)}\\{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{[2](b)}&{\boldsymbol {0}}\\{\boldsymbol {0}}&{\boldsymbol {W}}_{2}^{[2](t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{2}^{[2](t)}\\&{\boldsymbol {W}}_{2}^{[2](b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{3m}&{\boldsymbol {V}}_{2}^{[2](b)}&{\boldsymbol {0}}\\&&\ddots &\ddots &\ddots &\ddots &\ddots \\&&&{\boldsymbol {0}}&{\boldsymbol {W}}_{p/2-1}^{[2](t)}&{\boldsymbol {I}}_{3m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{p/2-1}^{[2](t)}\\&&&&{\boldsymbol {W}}_{p/2-1}^{[2](b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{p/2-1}^{[2](b)}&{\boldsymbol {0}}\\&&&&&{\boldsymbol {0}}&{\boldsymbol {W}}_{p/2}^{[2](t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}\\&&&&&&{\boldsymbol {W}}_{p/2}^{[2](b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{3m}\end{bmatrix}}{\text{.}}$ Its structure is very similar to that of S̃2, only differing in the number of spikes and their height (their width stays the same at m). Thus, a similar factorization step can be performed on S̃2 to produce S̃2 = D̃2S̃3 and S̃ = D̃1D̃2S̃3. Such factorization steps can be performed recursively. After d − 1 steps, we obtain the factorization S̃ = D̃1⋯D̃d−1S̃d, where S̃d has only two spikes. The reduced system will then be solved via X̃ = S̃ −1 d   D̃ −1 d−1   ⋯D̃ −1 1   G̃ . The block LU factorization technique in the two-partition case can be used to handle the solving steps involving D̃1, ..., D̃d−1 and S̃d for they essentially solve multiple independent systems of generalized two-partition forms. Generalization to cases where p is not a power of two is almost trivial. Truncated SPIKE When A is diagonally-dominant, in the reduced system ${\begin{bmatrix}{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{1}^{(t)}\\{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{(b)}&{\boldsymbol {0}}\\{\boldsymbol {0}}&{\boldsymbol {W}}_{2}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{2}^{(t)}\\&{\boldsymbol {W}}_{2}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{2}^{(b)}&{\boldsymbol {0}}\\&&\ddots &\ddots &\ddots &\ddots &\ddots \\&&&{\boldsymbol {0}}&{\boldsymbol {W}}_{p-1}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}&{\boldsymbol {V}}_{p-1}^{(t)}\\&&&&{\boldsymbol {W}}_{p-1}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{p-1}^{(b)}&{\boldsymbol {0}}\\&&&&&{\boldsymbol {0}}&{\boldsymbol {W}}_{p}^{(t)}&{\boldsymbol {I}}_{m}&{\boldsymbol {0}}\\&&&&&&{\boldsymbol {W}}_{p}^{(b)}&{\boldsymbol {0}}&{\boldsymbol {I}}_{m}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {X}}_{1}^{(t)}\\{\boldsymbol {X}}_{1}^{(b)}\\{\boldsymbol {X}}_{2}^{(t)}\\{\boldsymbol {X}}_{2}^{(b)}\\\vdots \\{\boldsymbol {X}}_{p-1}^{(t)}\\{\boldsymbol {X}}_{p-1}^{(b)}\\{\boldsymbol {X}}_{p}^{(t)}\\{\boldsymbol {X}}_{p}^{(b)}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {G}}_{1}^{(t)}\\{\boldsymbol {G}}_{1}^{(b)}\\{\boldsymbol {G}}_{2}^{(t)}\\{\boldsymbol {G}}_{2}^{(b)}\\\vdots \\{\boldsymbol {G}}_{p-1}^{(t)}\\{\boldsymbol {G}}_{p-1}^{(b)}\\{\boldsymbol {G}}_{p}^{(t)}\\{\boldsymbol {G}}_{p}^{(b)}\end{bmatrix}}{\text{,}}$ the blocks V (t) j   and W (b) j   are often negligible. 
With them omitted, the reduced system becomes block diagonal ${\begin{bmatrix}{\boldsymbol {I}}_{m}\\&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{1}^{(b)}\\&{\boldsymbol {W}}_{2}^{(t)}&{\boldsymbol {I}}_{m}\\&&&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{2}^{(b)}\\&&&\ddots &\ddots &\ddots \\&&&&{\boldsymbol {W}}_{p-1}^{(t)}&{\boldsymbol {I}}_{m}\\&&&&&&{\boldsymbol {I}}_{m}&{\boldsymbol {V}}_{p-1}^{(b)}\\&&&&&&{\boldsymbol {W}}_{p}^{(t)}&{\boldsymbol {I}}_{m}\\&&&&&&&&{\boldsymbol {I}}_{m}\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {X}}_{1}^{(t)}\\{\boldsymbol {X}}_{1}^{(b)}\\{\boldsymbol {X}}_{2}^{(t)}\\{\boldsymbol {X}}_{2}^{(b)}\\\vdots \\{\boldsymbol {X}}_{p-1}^{(t)}\\{\boldsymbol {X}}_{p-1}^{(b)}\\{\boldsymbol {X}}_{p}^{(t)}\\{\boldsymbol {X}}_{p}^{(b)}\end{bmatrix}}={\begin{bmatrix}{\boldsymbol {G}}_{1}^{(t)}\\{\boldsymbol {G}}_{1}^{(b)}\\{\boldsymbol {G}}_{2}^{(t)}\\{\boldsymbol {G}}_{2}^{(b)}\\\vdots \\{\boldsymbol {G}}_{p-1}^{(t)}\\{\boldsymbol {G}}_{p-1}^{(b)}\\{\boldsymbol {G}}_{p}^{(t)}\\{\boldsymbol {G}}_{p}^{(b)}\end{bmatrix}}$ and can be easily solved in parallel . The truncated SPIKE algorithm can be wrapped inside some outer iterative scheme (e.g., BiCGSTAB or iterative refinement) to improve the accuracy of the solution. SPIKE for tridiagonal systems The first SPIKE partitioning and algorithm was presented in and was designed as the means to improve the stability properties of a parallel Givens rotations-based solver for tridiagonal systems. A version of the algorithm, termed g-Spike, that is based on serial Givens rotations applied independently on each block was designed for the NVIDIA GPU . A SPIKE-based algorithm for the GPU that is based on a special block diagonal pivoting strategy is described in . SPIKE as a preconditioner The SPIKE algorithm can also function as a preconditioner for iterative methods for solving linear systems. To solve a linear system Ax = b using a SPIKE-preconditioned iterative solver, one extracts center bands from A to form a banded preconditioner M and solves linear systems involving M in each iteration with the SPIKE algorithm. In order for the preconditioner to be effective, row and/or column permutation is usually necessary to move "heavy" elements of A close to the diagonal so that they are covered by the preconditioner. This can be accomplished by computing the weighted spectral reordering of A. The SPIKE algorithm can be generalized by not restricting the preconditioner to be strictly banded. In particular, the diagonal block in each partition can be a general matrix and thus handled by a direct general linear system solver rather than a banded solver. This enhances the preconditioner, and hence allows better chance of convergence and reduces the number of iterations. Implementations Intel offers an implementation of the SPIKE algorithm under the name Intel Adaptive Spike-Based Solver . Tridiagonal solvers have also been developed for the NVIDIA GPU and the Xeon Phi co-processors. The method in is the basis for a tridiagonal solver in the cuSPARSE library.[1] The Givens rotations based solver was also implemented for the GPU and the Intel Xeon Phi.[2] References 1. NVIDIA, Accessed October 28, 2014. CUDA Toolkit Documentation v. 6.5: cuSPARSE, http://docs.nvidia.com/cuda/cusparse. 2. Venetis, Ioannis; Sobczyk, Aleksandros; Kouris, Alexandros; Nakos, Alexandros; Nikoloutsakos, Nikolaos; Gallopoulos, Efstratios (2015-09-03). "A general tridiagonal solver for coprocessors: Adapting g-Spike for the Intel Xeon Phi" – via ResearchGate. 1. 
^ Polizzi, E.; Sameh, A. H. (2006). "A parallel hybrid banded system solver: the SPIKE algorithm". Parallel Computing. 32 (2): 177–194. doi:10.1016/j.parco.2005.07.005. 2. ^ Polizzi, E.; Sameh, A. H. (2007). "SPIKE: A parallel environment for solving banded linear systems". Computers & Fluids. 36: 113–141. doi:10.1016/j.compfluid.2005.07.005. 3. ^ Mikkelsen, C. C. K.; Manguoglu, M. (2008). "Analysis of the Truncated SPIKE Algorithm". SIAM J. Matrix Anal. Appl. 30 (4): 1500–1519. CiteSeerX 10.1.1.514.8748. doi:10.1137/080719571. 4. ^ Manguoglu, M.; Sameh, A. H.; Schenk, O. (2009). "PSPIKE: A Parallel Hybrid Sparse Linear System Solver". Euro-Par 2009 Parallel Processing. Lecture Notes in Computer Science. Vol. 5704. pp. 797–808. Bibcode:2009LNCS.5704..797M. doi:10.1007/978-3-642-03869-3_74. ISBN 978-3-642-03868-6. 5. ^ "Intel Adaptive Spike-Based Solver - Intel Software Network". Retrieved 2009-03-23. 6. ^ Sameh, A. H.; Kuck, D. J. (1978). "On Stable Parallel Linear System Solvers". Journal of the ACM. 25: 81–91. doi:10.1145/322047.322054. S2CID 17109524. 7. ^ Venetis, I.E.; Kouris, A.; Sobczyk, A.; Gallopoulos, E.; Sameh, A. H. (2015). "A direct tridiagonal solver based on Givens rotations for GPU architectures". Parallel Computing. 25: 101–116. doi:10.1016/j.parco.2015.03.008. 8. ^ Chang, L.-W.; Stratton, J.; Kim, H.; Hwu, W.-M. (2012). "A scalable, numerically stable, high-performance tridiagonal solver using GPUs". Proc. Int'l. Conf. High Performance Computing, Networking Storage and Analysis (SC'12). Los Alamitos, CA, USA: IEEE Computer Soc. Press: 27:1–27:11. ISBN 978-1-4673-0804-5. Further reading • Gallopoulos, E.; Philippe, B.; Sameh, A.H. (2015). Parallelism in Matrix Computations. Springer. ISBN 978-94-017-7188-7. Numerical linear algebra Key concepts • Floating point • Numerical stability Problems • System of linear equations • Matrix decompositions • Matrix multiplication (algorithms) • Matrix splitting • Sparse problems Hardware • CPU cache • TLB • Cache-oblivious algorithm • SIMD • Multiprocessing Software • MATLAB • Basic Linear Algebra Subprograms (BLAS) • LAPACK • Specialized libraries • General purpose software
Wikipedia
SQ-universal group In mathematics, in the realm of group theory, a countable group is said to be SQ-universal if every countable group can be embedded in one of its quotient groups. SQ-universality can be thought of as a measure of largeness or complexity of a group. History Many classic results of combinatorial group theory, going back to 1949, are now interpreted as saying that a particular group or class of groups is (are) SQ-universal. However the first explicit use of the term seems to be in an address given by Peter Neumann to The London Algebra Colloquium entitled "SQ-universal groups" on 23 May 1968. Examples of SQ-universal groups In 1949 Graham Higman, Bernhard Neumann and Hanna Neumann proved that every countable group can be embedded in a two-generator group.[1] Using the contemporary language of SQ-universality, this result says that F2, the free group (non-abelian) on two generators, is SQ-universal. This is the first known example of an SQ-universal group. Many more examples are now known: • Adding two generators and one arbitrary relator to a nontrivial torsion-free group, always results in an SQ-universal group.[2] • Any non-elementary group that is hyperbolic with respect to a collection of proper subgroups is SQ-universal.[3] • Many HNN extensions, free products and free products with amalgamation.[4][5][6] • The four-generator Coxeter group with presentation:[7] $P=\left\langle a,b,c,d\,|\,a^{2}=b^{2}=c^{2}=d^{2}=(ab)^{3}=(bc)^{3}=(ac)^{3}=(ad)^{3}=(cd)^{3}=(bd)^{3}=1\right\rangle $ • Charles F. Miller III's example of a finitely presented SQ-universal group all of whose non-trivial quotients have unsolvable word problem.[8] In addition much stronger versions of the Higmann-Neumann-Neumann theorem are now known. Ould Houcine has proved: For every countable group G there exists a 2-generator SQ-universal group H such that G can be embedded in every non-trivial quotient of H.[9] Some elementary properties of SQ-universal groups A free group on countably many generators h1, h2, ..., hn, ... , say, must be embeddable in a quotient of an SQ-universal group G. If $h_{1}^{*},h_{2}^{*},\dots ,h_{n}^{*}\dots \in G$ are chosen such that $h_{n}^{*}\mapsto h_{n}$ for all n, then they must freely generate a free subgroup of G. Hence: Every SQ-universal group has as a subgroup, a free group on countably many generators. Since every countable group can be embedded in a countable simple group, it is often sufficient to consider embeddings of simple groups. This observation allows us to easily prove some elementary results about SQ-universal groups, for instance: If G is an SQ-universal group and N is a normal subgroup of G (i.e. $N\triangleleft G$) then either N is SQ-universal or the quotient group G/N is SQ-universal. To prove this suppose N is not SQ-universal, then there is a countable group K that cannot be embedded into a quotient group of N. Let H be any countable group, then the direct product H × K is also countable and hence can be embedded in a countable simple group S. Now, by hypothesis, G is SQ-universal so S can be embedded in a quotient group, G/M, say, of G. The second isomorphism theorem tells us: $MN/M\cong N/(M\cap N)$ Now $MN/M\triangleleft G/M$ and S is a simple subgroup of G/M so either: $MN/M\cap S\cong 1$ or: $S\subseteq MN/M\cong N/(M\cap N)$. The latter cannot be true because it implies K ⊆ H × K ⊆ S ⊆ N/(M ∩ N) contrary to our choice of K. 
It follows that S can be embedded in (G/M)/(MN/M), which by the third isomorphism theorem is isomorphic to G/MN, which is in turn isomorphic to (G/N)/(MN/N). Thus S has been embedded into a quotient group of G/N, and since H ⊆ S was an arbitrary countable group, it follows that G/N is SQ-universal. Since every subgroup H of finite index in a group G contains a normal subgroup N also of finite index in G,[10] it easily follows that: If a group G is SQ-universal then so is any finite index subgroup H of G. The converse of this statement is also true.[11] Variants and generalizations of SQ-universality Several variants of SQ-universality occur in the literature. The reader should be warned that terminology in this area is not yet completely stable and should read this section with this caveat in mind. Let ${\mathcal {P}}$ be a class of groups. (For the purposes of this section, groups are defined up to isomorphism) A group G is called SQ-universal in the class ${\mathcal {P}}$ if $G\in {\mathcal {P}}$ and every countable group in ${\mathcal {P}}$ is isomorphic to a subgroup of a quotient of G. The following result can be proved: Let n, m ∈ Z where m is odd, $n>10^{78}$ and m > 1, and let B(m, n) be the free m-generator Burnside group, then every non-cyclic subgroup of B(m, n) is SQ-universal in the class of groups of exponent n. Let ${\mathcal {P}}$ be a class of groups. A group G is called SQ-universal for the class ${\mathcal {P}}$ if every group in ${\mathcal {P}}$ is isomorphic to a subgroup of a quotient of G. Note that there is no requirement that $G\in {\mathcal {P}}$ nor that any groups be countable. The standard definition of SQ-universality is equivalent to SQ-universality both in and for the class of countable groups. Given a countable group G, call an SQ-universal group H G-stable, if every non-trivial factor group of H contains a copy of G. Let ${\mathcal {G}}$ be the class of finitely presented SQ-universal groups that are G-stable for some G then Houcine's version of the HNN theorem that can be re-stated as: The free group on two generators is SQ-universal for ${\mathcal {G}}$. However, there are uncountably many finitely generated groups, and a countable group can only have countably many finitely generated subgroups. It is easy to see from this that: No group can be SQ-universal in ${\mathcal {G}}$. An infinite class ${\mathcal {P}}$ of groups is wrappable if given any groups $F,G\in {\mathcal {P}}$ there exists a simple group S and a group $H\in {\mathcal {P}}$ such that F and G can be embedded in S and S can be embedded in H. The it is easy to prove: If ${\mathcal {P}}$ is a wrappable class of groups, G is an SQ-universal for ${\mathcal {P}}$ and $N\triangleleft G$ then either N is SQ-universal for ${\mathcal {P}}$ or G/N is SQ-universal for ${\mathcal {P}}$. If ${\mathcal {P}}$ is a wrappable class of groups and H is of finite index in G then G is SQ-universal for the class ${\mathcal {P}}$ if and only if H is SQ-universal for ${\mathcal {P}}$. The motivation for the definition of wrappable class comes from results such as the Boone-Higman theorem, which states that a countable group G has soluble word problem if and only if it can be embedded in a simple group S that can be embedded in a finitely presented group F. Houcine has shown that the group F can be constructed so that it too has soluble word problem. 
This together with the fact that taking the direct product of two groups preserves solubility of the word problem shows that: The class of all finitely presented groups with soluble word problem is wrappable. Other examples of wrappable classes of groups are: • The class of finite groups. • The class of torsion free groups. • The class of countable torsion free groups. • The class of all groups of a given infinite cardinality. The fact that a class ${\mathcal {P}}$ is wrappable does not imply that any groups are SQ-universal for ${\mathcal {P}}$. It is clear, for instance, that some sort of cardinality restriction for the members of ${\mathcal {P}}$ is required. If we replace the phrase "isomorphic to a subgroup of a quotient of" with "isomorphic to a subgroup of" in the definition of "SQ-universal", we obtain the stronger concept of S-universal (respectively S-universal for/in ${\mathcal {P}}$). The Higman Embedding Theorem can be used to prove that there is a finitely presented group that contains a copy of every finitely presented group. If ${\mathcal {W}}$ is the class of all finitely presented groups with soluble word problem, then it is known that there is no uniform algorithm to solve the word problem for groups in ${\mathcal {W}}$. It follows, although the proof is not a straightforward as one might expect, that no group in ${\mathcal {W}}$ can contain a copy of every group in ${\mathcal {W}}$. But it is clear that any SQ-universal group is a fortiori SQ-universal for ${\mathcal {W}}$. If we let ${\mathcal {F}}$ be the class of finitely presented groups, and F2 be the free group on two generators, we can sum this up as: • F2 is SQ-universal in ${\mathcal {F}}$ and ${\mathcal {W}}$. • There exists a group that is S-universal in ${\mathcal {F}}$. • No group is S-universal in ${\mathcal {W}}$. The following questions are open (the second implies the first): • Is there a countable group that is not SQ-universal but is SQ-universal for ${\mathcal {W}}$? • Is there a countable group that is not SQ-universal but is SQ-universal in ${\mathcal {W}}$? While it is quite difficult to prove that F2 is SQ-universal, the fact that it is SQ-universal for the class of finite groups follows easily from these two facts: • Every symmetric group on a finite set can be generated by two elements • Every finite group can be embedded inside a symmetric group—the natural one being the Cayley group, which is the symmetric group acting on this group as the finite set. SQ-universality in other categories If ${\mathcal {C}}$ is a category and ${\mathcal {P}}$ is a class of objects of ${\mathcal {C}}$, then the definition of SQ-universal for ${\mathcal {P}}$ clearly makes sense. If ${\mathcal {C}}$ is a concrete category, then the definition of SQ-universal in ${\mathcal {P}}$ also makes sense. As in the group theoretic case, we use the term SQ-universal for an object that is SQ-universal both for and in the class of countable objects of ${\mathcal {C}}$. Many embedding theorems can be restated in terms of SQ-universality. Shirshov's Theorem that a Lie algebra of finite or countable dimension can be embedded into a 2-generator Lie algebra is equivalent to the statement that the 2-generator free Lie algebra is SQ-universal (in the category of Lie algebras). This can be proved by proving a version of the Higman, Neumann, Neumann theorem for Lie algebras.[12] However versions of the HNN theorem can be proved for categories where there is no clear idea of a free object. 
For instance it can be proved that every separable topological group is isomorphic to a topological subgroup of a group having two topological generators (that is, having a dense 2-generator subgroup).[13] A similar concept holds for free lattices. The free lattice in three generators is countably infinite. It has, as a sublattice, the free lattice in four generators, and, by induction, as a sublattice, the free lattice in a countable number of generators.[14] References 1. G. Higman, B.H. Neumann and H. Neumann, 'Embedding theorems for groups', J. London Math. Soc. 24 (1949), 247-254 2. Anton A. Klyachko, 'The SQ-universality of one-relator relative presentation', Arxiv preprint math.GR/0603468, 2006 3. G. Arzhantseva, A. Minasyan, D. Osin, 'The SQ-universality and residual properties of relatively hyperbolic groups', Journal of Algebra 315 (2007), No. 1, pp. 165-177 4. Benjamin Fine, Marvin Tretkoff, 'On the SQ-Universality of HNN Groups', Proceedings of the American Mathematical Society, Vol. 73, No. 3 (Mar., 1979), pp. 283-290 5. P.M. Neumann: The SQ-universality of some finitely presented groups. J. Austral. Math. Soc. 16, 1-6 (1973) 6. K. I. Lossov, 'SQ-universality of free products with amalgamated finite subgroups', Siberian Mathematical Journal Volume 27, Number 6 / November, 1986 7. Muhammad A. Albar, 'On a four-generator Coxeter Group', Internat. J. Math & Math. Sci Vol 24, No 12 (2000), 821-823 8. C. F. Miller. Decision problems for groups -- survey and reflections. In Algorithms and Classification in Combinatorial Group Theory, pages 1--60. Springer, 1991. 9. A.O. Houcine, 'Satisfaction of existential theories in finitely presented groups and some embedding theorems', Annals of Pure and Applied Logic, Volume 142, Issues 1-3 , October 2006, Pages 351-365 10. Lawson, Mark V. (1998) Inverse semigroups: the theory of partial symmetries, World Scientific. ISBN 981-02-3316-7, p. 52 11. P.M. Neumann: The SQ-universality of some finitely presented groups. J. Austral. Math. Soc. 16, 1-6 (1973) 12. A.I. Lichtman and M. Shirvani, 'HNN-extensions of Lie algebras', Proc. American Math. Soc. Vol 125, Number 12, December 1997, 3501-3508 13. Sidney A. Morris and Vladimir Pestov, 'A topological generalization of the Higman-Neumann-Neumann Theorem', Research Report RP-97-222 (May 1997), School of Mathematical and Computing Sciences, Victoria University of Wellington. See also J. Group Theory 1, No.2, 181-187 (1998). 14. L.A. Skornjakov, Elements of Lattice Theory (1977) Adam Hilger Ltd. (see pp.77-78) • Lawson, M.V. (1998). Inverse semigroups: the theory of partial symmetries. World Scientific. ISBN 978-981-02-3316-7.
Wikipedia
S. R. Srinivasa Varadhan Sathamangalam Ranga Iyengar Srinivasa Varadhan, FRS (born 2 January 1940) is an Indian American mathematician. He is known for his fundamental contributions to probability theory and in particular for creating a unified theory of large deviations.[1] He is regarded as one of the fundamental contributors to the theory of diffusion processes with an orientation towards the refinement and further development of Itô’s stochastic calculus.[2] In the year 2007, he became the first Asian to win the Abel Prize.[3][4] Srinivasa Varadhan FRS Srinivasa Varadhan at the 1st Heidelberg Laureate Forum in September 2013 Born (1940-01-02) 2 January 1940 Madras, Madras, British India (Chennai, Tamil Nadu, India) Alma materPresidency College University of Madras Indian Statistical Institute Known forMartingale problems; Large deviation theory AwardsPadma Vibhushan (2023) National Medal of Science (2010) Padma Bhushan (2008) Abel Prize (2007) Steele Prize (1996) Birkhoff Prize (1994) Scientific career FieldsMathematics InstitutionsCourant Institute of Mathematical Sciences (New York University) Doctoral advisorC R Rao Doctoral studentsPeter Friz Jeremy Quastel Early life and education Srinivasa was born into a Hindu Tamil Brahmin Iyengar family in 1940 [5] in Chennai (then Madras). In 1953, his family migrated to Kolkata. He grew up in Chennai and Kolkata.[6] Varadhan received his undergraduate degree in 1959 and his postgraduate degree in 1960 from Presidency College, Chennai. He received his doctorate from ISI in 1963 under C R Rao,[7][8] who arranged for Andrey Kolmogorov to be present at Varadhan's thesis defence.[9] He was one of the "famous four" (the others being R Ranga Rao, K R Parthasarathy, and Veeravalli S Varadarajan) in ISI during 1956–1963.[10] Career Since 1963, he has worked at the Courant Institute of Mathematical Sciences at New York University, where he was at first a postdoctoral fellow (1963–66), strongly recommended by Monroe D Donsker. Here he met Daniel Stroock, who became a close colleague and co-author. In an article in the Notices of the American Mathematical Society, Stroock recalls these early years: Varadhan, whom everyone calls Raghu, came to these shores from his native India in the fall of 1963. He arrived by plane at Idlewild Airport and proceeded to Manhattan by bus. His destination was that famous institution with the modest name, The Courant Institute of Mathematical Sciences, where he had been given a postdoctoral fellowship. Varadhan was assigned to one of the many windowless offices in the Courant building, which used to be a hat factory. Yet despite the somewhat humble surroundings, from these offices flowed a remarkably large fraction of the post-war mathematics of which America is justly proud. Varadhan is currently a professor at the Courant Institute.[11][12] He is known for his work with Daniel W Stroock on diffusion processes, and for his work on large deviations with Monroe D Donsker. 
He has chaired the Mathematical Sciences jury for the Infosys Prize from 2009 and was the chief guest in 2020.[13] Awards and honours Varadhan's awards and honours include the National Medal of Science (2010) from President Barack Obama, "the highest honour bestowed by the United States government on scientists, engineers and inventors".[14] He also received the Birkhoff Prize (1994), the Margaret and Herman Sokol Award of the Faculty of Arts and Sciences, New York University (1995), and the Leroy P Steele Prize for Seminal Contribution to Research (1996) from the American Mathematical Society, awarded for his work with Daniel W Stroock on diffusion processes.[15] He was awarded the Abel Prize in 2007 for his work on large deviations with Monroe D Donsker.[11][16] In 2008, the Government of India awarded him the Padma Bhushan.[17] and in 2023, he was awarded India's second highest civilian honor Padma Vibhushan.[18][19] He also has two honorary degrees from Université Pierre et Marie Curie in Paris (2003) and from Indian Statistical Institute in Kolkata, India (2004). Varadhan is a member of the US National Academy of Sciences (1995),[20] and the Norwegian Academy of Science and Letters (2009).[21] He was elected to Fellow of the American Academy of Arts and Sciences (1988),[22] the Third World Academy of Sciences (1988), the Institute of Mathematical Statistics (1991), the Royal Society (1998),[23] the Indian Academy of Sciences (2004), the Society for Industrial and Applied Mathematics (2009),[24] and the American Mathematical Society (2012).[25] Selected publications • Convolution Properties of Distributions on Topological Groups. Dissertation, Indian Statistical Institute, 1963. • Varadhan, SRS (1966). "Asymptotic probabilities and differential equations". Communications on Pure and Applied Mathematics. 19 (3): 261–286. doi:10.1002/cpa.3160190303. • Stroock, DW; SRS Varadhan (1972). "On the support of diffusion processes with applications to the strong maximum principle". Proc. Of the Sixth Berkeley Symposium on Mathematical Statistics and Probability. 3: 333–359. • (with M D Donsker) Donsker, M. D.; Varadhan, S. R. S. (1975). "On a variational formula for the principal eigenvalues for operators with maximum principle". Proc Natl Acad Sci USA. 72 (3): 780–783. Bibcode:1975PNAS...72..780D. doi:10.1073/pnas.72.3.780. PMC 432403. PMID 16592231. • (with M D Donsker) Asymptotic evaluation of certain Markov process expectations for large time. I, Communications on Pure and Applied Mathematics 28 (1975), pp. 1–47; part II, 28 (1975), pp. 279–301; part III, 29 (1976), pp 389–461; part IV, 36 (1983), pp 183–212. • Varadhan, SRS (2003). "Stochastic analysis and applications". Bull Amer Math Soc. 40 (1): 89–97. doi:10.1090/s0273-0979-02-00968-0. MR 1943135. See also • Varadhan's lemma References 1. Ramachandran, R. (7–20 April 2007). "Science of chance". Frontline. India. Archived from the original on 11 December 2007. 2. Varadhan, S. R. Srinivasa (2020). "Essentials of integration theory for analysis". Springer, [2020] ©2020. 3. "2007: Srinivasa S. R. Varadhan | The Abel Prize". abelprize.no. 4. "Indian wins Norway's Abel Prize for Mathematics". Hindustan Times. 23 March 2007. 5. "Srinivasa Varadhan". Archived from the original on 5 November 2016. 6. interview-with-srinivasa-varadhan/ Interview with Srinivasa Varadhan], http://gonitsora.com 7. S. R. Srinivasa Varadhan at the Mathematics Genealogy Project 8. 
List of degree / diploma / certificate recipients of ISI, web site at the Indian Statistical Institute. Retrieved 22 March 2007.
9. S. R. Srinivasa Varadhan's Biography, Allvoices. Retrieved 1 August 2010.
10. Sinha, Kalyan Bidhan; Rajarama Bhat, B. V. "S. R. Srinivasa Varadhan" (PDF). Louisiana State University.
11. Announcement of the 1996 Steele Prizes at the American Mathematical Society web site. Retrieved 21 February 2007.
12. Srinivasa Varadhan is known as S R S Varadhan for short and Raghu to his friends and colleagues. His father, Ranga Iyengar, was a science teacher who became the Principal of the Board High School in Ponneri. Biography, archived 21 April 2007 at the Wayback Machine (PDF), from the Abel Prize web site. Retrieved 22 March 2007.
13. "Infosys Prize - Jury 2020". www.infosys-science-foundation.com. Retrieved 10 December 2020.
14. "President Obama Honors Nation's Top Scientists and Innovators". whitehouse.gov. 27 September 2011. Retrieved 28 September 2011 – via National Archives.
15. "1996 Steele Prizes" (PDF). Notices of the American Mathematical Society. 43 (11): 1340–1347. November 1996. Retrieved 29 September 2011.
16. "2007: Srinivasa S. R. Varadhan". www.abelprize.no. Retrieved 22 August 2022.
17. "Padma Awards" (PDF). Ministry of Home Affairs, Government of India. 2015. Retrieved 21 July 2015.
18. Padma Awardees.
19. "Padma honours: Full list of awardees". The Times of India. 25 January 2023. Retrieved 25 January 2023.
20. "NAS Membership Directory". U.S. National Academy of Sciences. Retrieved 10 June 2011. Search with Last Name "Varadhan".
21. "Gruppe 1: Matematiske fag" (in Norwegian). Norwegian Academy of Science and Letters. Retrieved 10 June 2011.
22. "Book of Members, 1780–2010: Chapter V" (PDF). American Academy of Arts and Sciences. Retrieved 10 June 2011.
23. "Fellows of the Royal Society" (PDF). Royal Society. Retrieved 10 June 2011.
24. "SIAM Fellows: Class of 2009". Society for Industrial and Applied Mathematics. Retrieved 10 June 2011.
25. List of Fellows of the American Mathematical Society. Retrieved 28 August 2013.

External links
• S. R. Srinivasa Varadhan, home page at the Courant Institute
• O'Connor, John J.; Robertson, Edmund F., "S. R. Srinivasa Varadhan", MacTutor History of Mathematics Archive, University of St Andrews
• S. R. Srinivasa Varadhan at the Mathematics Genealogy Project
Friedman's SSCG function

In mathematics, a simple subcubic graph (SSCG) is a finite simple graph in which each vertex has a degree of at most three. Suppose we have a sequence of simple subcubic graphs G1, G2, ... such that each graph Gi has at most i + k vertices (for some integer k) and for no i < j is Gi homeomorphically embeddable into (i.e. is a graph minor of) Gj. The Robertson–Seymour theorem proves that subcubic graphs (simple or not) are well-quasi-ordered by homeomorphic embeddability, so such a sequence cannot be infinite. Hence, for each value of k, there is a sequence of maximal length. The function SSCG(k)[1] denotes that length for simple subcubic graphs. The function SCG(k)[2] denotes that length for (general) subcubic graphs.

The SCG sequence begins SCG(0) = 6, but then explodes to a value equivalent to $f_{\varepsilon _{2}\cdot 2}$ in the fast-growing hierarchy. The SSCG sequence begins more slowly than SCG, with SSCG(0) = 2 and SSCG(1) = 5, but then grows rapidly:

$\mathrm {SSCG} (2)=3\times 2^{3\times 2^{95}}-8\approx 3.241704\times 10^{35775080127201286522908640065}.$

Its first and last 20 digits are 32417042291246009846...34057047399148290040 (a short computational check of these digits is sketched after the references below). SSCG(3) is much larger than both TREE(3) and $\mathrm {TREE} ^{\mathrm {TREE} (3)}(3)$, that is, the TREE function nested TREE(3) times with 3 at the bottom.

Adam P. Goucher claims there is no qualitative difference between the asymptotic growth rates of SSCG and SCG. He writes: "It's clear that SCG(n) ≥ SSCG(n), but I can also prove SSCG(4n + 3) ≥ SCG(n)."[3]

See also
• Goodstein's theorem
• Paris–Harrington theorem
• Kanamori–McAloon theorem

References
1. [FOM] 274: Subcubic Graph Numbers
2. [FOM] 279: Subcubic Graph Numbers/restated
3. TREE(3) and impartial games | Complex Projective 4-Space
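The quoted first and last 20 digits of SSCG(2) can be checked directly with exact and high-precision arithmetic. Below is a minimal Python sketch (my own illustration, not taken from the cited FOM posts): the last 20 digits come from modular exponentiation, and the first 20 from a high-precision base-10 logarithm.

# Check the quoted digits of SSCG(2) = 3 * 2**(3 * 2**95) - 8.
from decimal import Decimal, getcontext

E = 3 * 2**95                                  # the exponent, about 1.19e29

# Last 20 digits: exact modular arithmetic, valid because SSCG(2) >> 10**20.
last20 = (3 * pow(2, E, 10**20) - 8) % 10**20
print(f"last 20 digits:  {last20:020d}")       # article quotes 34057047399148290040

# First 20 digits: log10(SSCG(2)) ~ log10(3) + E*log10(2); subtracting 8 cannot
# change the leading digits of a number with roughly 3.6e28 digits.
getcontext().prec = 60                         # ample precision for 20 leading digits
log10n = Decimal(3).log10() + Decimal(E) * Decimal(2).log10()
frac = log10n - int(log10n)                    # fractional part of the logarithm
leading = Decimal(10) ** frac                  # a number in [1, 10)
print("first 20 digits:", str(leading).replace(".", "")[:20])   # article quotes 32417042291246009846
print("number of digits:", int(log10n) + 1)

If the quoted values are right, both printed strings should match them; the modular computation is exact, while the leading-digit computation depends only on the stated working precision.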
Symmetric successive over-relaxation

In applied mathematics, symmetric successive over-relaxation (SSOR)[1] is a preconditioner. If a symmetric matrix $A$ is split into its diagonal part $D$ and its strictly lower and upper triangular parts $L$ and $L^{\mathsf {T}}$, so that $A=D+L+L^{\mathsf {T}}$, then the SSOR preconditioner matrix is defined as

$M=(D+L)D^{-1}(D+L)^{\mathsf {T}}$

It can also be parametrised by a relaxation factor $\omega $ as follows:[2]

$M(\omega )={\omega \over {2-\omega }}\left({1 \over \omega }D+L\right)D^{-1}\left({1 \over \omega }D+L\right)^{\mathsf {T}}$

A small construction sketch is given after the references below.

See also
• Successive over-relaxation

References
1. Iterative methods at CFD-Online wiki
2. SSOR preconditioning at Netlib
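To make the formulas concrete, here is a minimal NumPy sketch (an illustration with a test matrix and a value of ω chosen arbitrarily by the editor, not code from the cited references). It builds M(ω) for a small symmetric positive-definite matrix and checks that ω = 1 reproduces the unparametrised form.

# Build the SSOR preconditioner M(omega) for a small SPD matrix.
import numpy as np

n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix (SPD)

D = np.diag(np.diag(A))                  # diagonal part
L = np.tril(A, k=-1)                     # strictly lower triangular part
assert np.allclose(A, D + L + L.T)       # A = D + L + L^T

def ssor_preconditioner(D, L, omega=1.0):
    """M(omega) = omega/(2-omega) * (D/omega + L) D^{-1} (D/omega + L)^T."""
    F = D / omega + L
    return (omega / (2.0 - omega)) * F @ np.linalg.inv(D) @ F.T

# omega = 1 recovers M = (D + L) D^{-1} (D + L)^T
assert np.allclose(ssor_preconditioner(D, L, 1.0),
                   (D + L) @ np.linalg.inv(D) @ (D + L).T)

M = ssor_preconditioner(D, L, omega=1.5)

# A preconditioned iteration applies M^{-1} to residuals; because M factors into
# triangular and diagonal pieces, this is normally done with two triangular
# solves rather than by forming M or M^{-1} explicitly. A dense solve suffices here.
r = np.ones(n)
z = np.linalg.solve(M, r)
print(z)

The value ω = 1.5 above is only a placeholder; choosing the relaxation factor is problem-dependent.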
STEAM fields

STEAM fields are the areas of science, technology, engineering, the arts, and mathematics.[1] STEAM is an educational approach designed to integrate the STEM subjects with the arts across relevant education disciplines.[2] These programs aim to teach students to innovate, to think critically, and to use engineering or technology in imaginative designs or creative approaches to real-world problems, while building on students' mathematics and science base. STEAM programs add the arts to the STEM curriculum by drawing on reasoning and design principles and by encouraging creative solutions.[3][4][5]

STEAM in children's media
• Sesame Street's 43rd season continues to focus on STEM but finds ways to integrate art. They state: "This helps make learning STEM concepts relevant and enticing to young children by highlighting how artists use STEM knowledge to enhance their art or solve problems. It also provides context for the importance of STEM knowledge in careers in the arts (e.g. musician, painter, sculptor, and dancer)."[6]
• MGA Entertainment created the S.T.E.A.M.-based franchise Project Mc2.[7]

Other uses of the STEAM acronym
• Other meanings of the "A" that have been promoted include agriculture, architecture, and applied mathematics.[8][9]
• The Rhode Island School of Design has a STEM to STEAM program and at one point maintained an interactive map that showed global STEAM initiatives.[10] Relevant organizations were able to add themselves to the map, though it is no longer available at the location stated in press releases.[11] John Maeda (president of the Rhode Island School of Design from 2008 to 2013) has been a champion in bringing the initiative to the political forums of educational policy.
• Some programs offer STEAM from a base focus such as mathematics and science.[2]
• SteamHead is a non-profit organization that promotes innovation and accessibility in education, focusing on STEAM fields.
• Wolf Trap's Institute of Education, as part of a $1.5 million Department of Education grant, trains and places teaching artists in preschool and kindergarten classrooms. The artists collaborate with the teachers to integrate math and science into the arts.[12]
• American Lisa La Bonte, CEO of the Arab Youth Venture Foundation based in the United Arab Emirates, uses the STEAM acronym, but her work does not include arts integration.[13] Starting in 2007, La Bonte created and ran high-profile, free public STEAM programs,[14] having added an "A" for "inspired STEM", with the A standing for Aeronautics, Aviation, Astronomy, Aerospace, and Ad Astra!, and using all things "air and space" as a hook for youth to embark on experimentation, studies, and careers in the region's burgeoning space-related industries.[15] One of AYVF's best-known programs,[13] "STEAM@TheMall", served over 200,000 in its first two years at the most popular shopping malls[13] and provided free weekend activity stations such as Mars robotics, science experiments, a SkyLab portable planetarium, art/design, and creative writing.[13] In 2008, Sharjah Sheikha Maisa kicked off the "Design booth for youth for Al Ain Summer S.T.E.A.M. funded by the Foundation created by the Crown Prince of Abu Dhabi".[16]
In 2010, the American Association for the Advancement of Science (AAAS) included a chapter on AYVF's most popular STEAM program in its book, Building Mathematical and Scientific Talent in the Broader Middle East and North Africa (BMENA) Region.[17]

Examples of STEAM jobs

Among others, careers in STEAM include:[18] • Animator • Agriculturist • Archaeologist • Architect • Astronaut • Astrophysicist • Audio developer • Biomedical engineer • Broadcast technician • Civil engineer • Conservator • Electronic engineer • Fashion designer • Forensic psychologist • Graphic designer • Interior designer • Mechanical engineer • Media artist • Medical illustrator • Modern urban planner • Orthopedic technologist • Photographer • Pilot • Product designer • Scientific imaging • Teacher • Sound engineer • Video game designer • Website/app designer

See also
• Arts-based training
• STEM fields

References
1. "STEAM Rising: Why we need to put the arts into STEM education". Slate. Retrieved 2016-11-10.
2. Jolly, Anne (18 November 2014). "STEM vs. STEAM: Do the Arts Belong?". Education Week: Teacher. Retrieved 6 September 2016.
3. Pomeroy, Steven Ross. "From STEM to STEAM: Science and Art Go Hand-in-Hand". blogs.scientificamerican.com. Scientific American. Retrieved 17 November 2016.
4. Jones, Elena (2022-01-11). "STEM Vs STEAM: Making Room For The Arts". Spiral Toys. Retrieved 2022-05-02.
5. Eger, John (31 May 2011). "National Science Foundation Slowly Turning STEM to STEAM". www.huffingtonpost.com. Huffington Post. Retrieved 17 November 2016.
6. Jean-Louis, Rosemary (24 August 2012). "Sesame Street: New Season Focuses on S.T.E.A.M." gpb.org. Retrieved 30 October 2019.
7. MGA Entertainment. "November 8 Is National S.T.E.M./S.T.E.A.M. Day". www.prnewswire.com (Press release). Retrieved 2019-11-06.
8. "Virginia Tech and Virginia STEAM Academy form strategic partnership to meet critical education needs". Virginia Tech News. 31 July 2012.
9. "Public Engagement | Academics | RISD".
10. "Rhode Island School of Design Launches STEAM Map to Demonstrate Global Activity and Support for the Movement". 7 May 2014. Retrieved 23 February 2023.
11. "STEAM Map Debuts on Capitol Hill". 21 May 2014. Retrieved 23 February 2023.
12. Chen, Kelly; Cheers, Imani (31 July 2012). "STEAM Ahead: Merging Arts and Science Education". PBS NewsHour. PBS. Retrieved 7 March 2015.
13. La Bonte, Lisa. "AYVF - Strategic Space & STEM Workforce Pioneer". Retrieved 28 October 2019.
14. "Weekend Getaways!".
15. "STEAM Franchises". AYVF - Strategic Space & STEM Workforce Pioneer. April 17, 2013. Retrieved 28 October 2019.
16. "AYVF awarded prestigious Grant from Emirates Foundation".
17. https://www.aaas.org/sites/default/files/BTC_LaBonte_E.pdf
18. Riley, Susan (2018-09-01). "STEAM careers for the 21st century". The Institute for Arts Integration and STEAM. Retrieved 2019-10-14.

External links
• What is STEAM? (fine arts-based)
• STEM to STEAM
• STEAM Connect
• The Arts Are Not a Luxury
• ISEA2012 Machine Wilderness: Special Media-N edition
• The STEAM Journal (academic journal)
ST type theory

The following system is Mendelson's (1997, 289–293) ST type theory. ST is equivalent to Russell's ramified theory plus the Axiom of reducibility. The domain of quantification is partitioned into an ascending hierarchy of types, with all individuals assigned a type. Quantified variables range over only one type; hence the underlying logic is first-order logic. ST is "simple" (relative to the type theory of Principia Mathematica) primarily because all members of the domain and codomain of any relation must be of the same type.

There is a lowest type, whose individuals have no members and are members of the second lowest type. Individuals of the lowest type correspond to the urelements of certain set theories. Each type has a next higher type, analogous to the notion of successor in Peano arithmetic. While ST is silent as to whether there is a maximal type, a transfinite number of types poses no difficulty. These facts, reminiscent of the Peano axioms, make it convenient and conventional to assign a natural number to each type, starting with 0 for the lowest type. But type theory does not require a prior definition of the naturals.

The symbols peculiar to ST are primed variables and the infix operator $\in $. In any given formula, unprimed variables all have the same type, while primed variables ($x'$) range over the next higher type. The atomic formulas of ST are of two forms, $x=y$ (identity) and $y\in x'$. The infix symbol $\in $ suggests the intended interpretation, set membership.

All variables appearing in the definition of identity and in the axioms Extensionality and Comprehension range over individuals of one of two consecutive types. Only unprimed variables (ranging over the "lower" type) can appear to the left of '$\in $', whereas to its right, only primed variables (ranging over the "higher" type) can appear. The first-order formulation of ST rules out quantifying over types. Hence each pair of consecutive types requires its own axioms of Extensionality and Comprehension, which is possible if Extensionality and Comprehension below are taken as axiom schemata "ranging over" types.

• Identity, defined by $x=y\leftrightarrow \forall z'[x\in z'\leftrightarrow y\in z']$.
• Extensionality. An axiom schema. $\forall x[x\in y'\leftrightarrow x\in z']\rightarrow [y'=z']$.

Let $\Phi (x)$ denote any first-order formula containing the free variable $x$.

• Comprehension. An axiom schema. $\exists z'\forall x[x\in z'\leftrightarrow \Phi (x)]$.

Remark. Any collection of elements of the same type may form an object of the next higher type. Comprehension is schematic with respect to $\Phi (x)$ as well as to types.

• Infinity. There exists a nonempty binary relation $R$ over the individuals of the lowest type that is irreflexive, transitive, and strongly connected: $\forall x,y[x\neq y\rightarrow [xRy\vee yRx]]$, and with codomain contained in domain.

Remark. Infinity is the only true axiom of ST and is entirely mathematical in nature. It asserts that $R$ is a strict total order, with a codomain contained in its domain. If 0 is assigned to the lowest type, the type of $R$ is 3. Infinity can be satisfied only if the (co)domain of $R$ is infinite, thus forcing the existence of an infinite set. If relations are defined in terms of ordered pairs, this axiom requires a prior definition of ordered pair; the Kuratowski definition, adapted to ST, will do.
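To make the type bookkeeping concrete, here is a short worked example (an illustration added here, not part of Mendelson's presentation). Take $x$ and $y$ of the lowest type 0. Comprehension at the lowest pair of types, with $\Phi (z):=(z=x\vee z=y)$, yields the pair $\{x,y\}$ as an object of type 1, and likewise the singleton $\{x\}$. The Kuratowski ordered pair

$\langle x,y\rangle =\{\{x\},\{x,y\}\}$

then lives at type 2, since its members are of type 1. A relation $R$ over type-0 individuals, construed as a set of such ordered pairs, is therefore an object of type 3, which is exactly the type assigned to $R$ in the Infinity axiom above.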
The literature does not explain why the usual axiom of infinity (there exists an inductive set) of ZFC or other set theories could not be married to ST.

ST reveals how type theory can be made very similar to axiomatic set theory. Moreover, the more elaborate ontology of ST, grounded in what is now called the "iterative conception of set", makes for axioms (schemata) that are far simpler than those of conventional set theories, such as ZFC, which have simpler ontologies. Set theories whose point of departure is type theory, but whose axioms, ontology, and terminology differ from the above, include New Foundations and Scott–Potter set theory.

Formulations based on equality

Church's type theory has been extensively studied by two of Church's students, Leon Henkin and Peter B. Andrews. Since ST is a higher-order logic, and in higher-order logics one can define propositional connectives in terms of logical equivalence and quantifiers, in 1963 Henkin developed a formulation of ST based on equality, but in which he restricted attention to propositional types. This was simplified later that year by Andrews in his theory Q0.[1] (A small illustration in this style is sketched after the references below.) In this respect ST can be seen as a particular kind of higher-order logic, classified by P. T. Johnstone in Sketches of an Elephant as having a lambda-signature, that is, a higher-order signature that contains no relations and uses only products and arrows (function types) as type constructors. Furthermore, as Johnstone put it, ST is "logic-free" in the sense that it contains no logical connectives or quantifiers in its formulae.[2]

See also
• Type theory

References
• Mendelson, Elliott (1997). Introduction to Mathematical Logic, 4th ed. Chapman & Hall.
• Farmer, W. (September 2008). "The seven virtues of simple type theory". Journal of Applied Logic 6 (3): 267–286.
1. Stanford Encyclopedia of Philosophy: "Church's Type Theory" – by Peter Andrews (adapted from his book).
2. P. T. Johnstone, Sketches of an Elephant, p. 952.
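As a small illustration of how logical notions can be recovered from equality in such higher-order, equality-based formulations (a standard device in the style of Henkin and Andrews, sketched here for orientation rather than quoted from the sources): truth can be defined as

$\top \;:\equiv \;[(\lambda x.\,x)=(\lambda x.\,x)],$

and universal quantification over a type can then be expressed as an equation between functions,

$\forall x.\,\Phi (x)\;:\equiv \;[(\lambda x.\,\Phi (x))=(\lambda x.\,\top )],$

with the remaining propositional connectives definable along similar lines.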