Curve fitting
Curve fitting[1][2] is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points,[3] possibly subject to constraints.[4][5] Curve fitting can involve either interpolation,[6][7] where an exact fit to the data is required, or smoothing,[8][9] in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis,[10][11] which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization,[12][13] to infer values of a function where no data are available,[14] and to summarize the relationships among two or more variables.[15] Extrapolation refers to the use of a fitted curve beyond the range of the observed data,[16] and is subject to a degree of uncertainty[17] since it may reflect the method used to construct the curve as much as it reflects the observed data.
For linear-algebraic analysis of data, "fitting" usually means trying to find the curve that minimizes the vertical (y-axis) displacement of a point from the curve (e.g., ordinary least squares). However, for graphical and image applications, geometric fitting seeks to provide the best visual fit, which usually means trying to minimize the orthogonal distance to the curve (e.g., total least squares), or to otherwise include both axes of displacement of a point from the curve. Geometric fits are not popular because they usually require non-linear and/or iterative calculations, although they have the advantage of a more aesthetic and geometrically accurate result.[18][19][20]
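As an illustrative sketch of this distinction (a minimal example assuming NumPy is available; the data below are synthetic), an ordinary least-squares line minimizing vertical deviations can be compared with a total least-squares line, whose direction is the principal axis of the centred data and which minimizes orthogonal distances:

```python
import numpy as np

# Synthetic data with noise in both coordinates (illustrative values only).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 40)
x = t + rng.normal(scale=0.2, size=t.size)
y = 2.0 * t + 1.0 + rng.normal(scale=0.2, size=t.size)

# Ordinary least squares: minimize vertical (y-axis) deviations.
a_ols, b_ols = np.polyfit(x, y, deg=1)

# Total least squares for a line: minimize orthogonal distances.
# The best-fit direction is the principal axis of the centred point cloud.
X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
dx, dy = vt[0]                     # direction of largest variance
a_tls = dy / dx                    # slope of the orthogonal-distance fit
b_tls = y.mean() - a_tls * x.mean()

print("OLS slope/intercept:", a_ols, b_ols)
print("TLS slope/intercept:", a_tls, b_tls)
```

With comparable noise in both coordinates the two estimated slopes generally differ, the ordinary least-squares slope being slightly attenuated.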
Algebraic fitting of functions to data points
Most commonly, one fits a function of the form y=f(x).
Fitting lines and polynomial functions to data points
Main article: Polynomial regression
See also: Polynomial interpolation
The first degree polynomial equation
$y=ax+b\;$
is a line with slope a. A line will connect any two points, so a first degree polynomial equation is an exact fit through any two points with distinct x coordinates.
If the order of the equation is increased to a second degree polynomial, the following results:
$y=ax^{2}+bx+c\;.$
This will exactly fit a simple curve to three points.
If the order of the equation is increased to a third degree polynomial, the following is obtained:
$y=ax^{3}+bx^{2}+cx+d\;.$
This will exactly fit four points.
A more general statement would be to say it will exactly fit four constraints. Each constraint can be a point, angle, or curvature (which is the reciprocal of the radius of an osculating circle). Angle and curvature constraints are most often added to the ends of a curve, and in such cases are called end conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. Higher-order constraints, such as "the change in the rate of curvature", could also be added. This, for example, would be useful in highway cloverleaf design to understand the rate of change of the forces applied to a car (see jerk), as it follows the cloverleaf, and to set reasonable speed limits, accordingly.
The first degree polynomial equation could also be an exact fit for a single point and an angle while the third degree polynomial equation could also be an exact fit for two points, an angle constraint, and a curvature constraint. Many other combinations of constraints are possible for these and for higher order polynomial equations.
If there are more than n + 1 constraints (n being the degree of the polynomial), the polynomial curve can still be run through those constraints. An exact fit to all constraints is not certain (but might happen, for example, in the case of a first degree polynomial exactly fitting three collinear points). In general, however, some method is then needed to evaluate each approximation. The least squares method is one way to compare the deviations.
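For example, a minimal least-squares sketch (assuming NumPy is available; the sample data are synthetic) fits first- and second-degree polynomials to more points than either can match exactly and compares their residual sums of squares:

```python
import numpy as np

# Synthetic, noisy data points (illustrative values only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = 1.5 * x + 2.0 + rng.normal(scale=1.0, size=x.size)

# Least-squares fits; np.polyfit minimizes the sum of squared
# vertical (y-axis) deviations from the polynomial.
line = np.polyfit(x, y, deg=1)       # coefficients [a, b] of y = a*x + b
parabola = np.polyfit(x, y, deg=2)   # coefficients [a, b, c] of y = a*x^2 + b*x + c

# Compare the two approximations by their residual sum of squares.
for name, coeffs in [("line", line), ("parabola", parabola)]:
    residuals = y - np.polyval(coeffs, x)
    print(name, coeffs, "RSS =", float(residuals @ residuals))
```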
There are several reasons for preferring an approximate fit when it is possible to simply increase the degree of the polynomial equation and get an exact match:
• Even if an exact match exists, it does not necessarily follow that it can be readily discovered. Depending on the algorithm used there may be a divergent case, where the exact fit cannot be calculated, or it might take too much computer time to find the solution. This situation might require an approximate solution.
• The effect of averaging out questionable data points in a sample, rather than distorting the curve to fit them exactly, may be desirable.
• Runge's phenomenon: high order polynomials can be highly oscillatory. If a curve runs through two points A and B, it would be expected that the curve would run somewhat near the midpoint of A and B, as well. This may not happen with high-order polynomial curves; they may even have values that are very large in positive or negative magnitude. With low-order polynomials, the curve is more likely to fall near the midpoint (it's even guaranteed to exactly run through the midpoint on a first degree polynomial).
• Low-order polynomials tend to be smooth and high-order polynomial curves tend to be "lumpy". To define this more precisely, the maximum number of inflection points possible in a polynomial curve is n-2, where n is the order of the polynomial equation. An inflection point is a location on the curve where it switches from a positive radius of curvature to a negative one. We can also say this is where it transitions from "holding water" to "shedding water". Note that it is only "possible" for high-order polynomials to be lumpy; they could also be smooth, but there is no guarantee of this, unlike with low-order polynomial curves. A fifteenth degree polynomial could have, at most, thirteen inflection points, but could also have eleven, or nine, or any odd number down to one. (Polynomials with even degree could have any even number of inflection points from n - 2 down to zero.)
The degree of the polynomial curve being higher than needed for an exact fit is undesirable for all the reasons listed previously for high order polynomials, but also leads to a case where there are an infinite number of solutions. For example, a first degree polynomial (a line) constrained by only a single point, instead of the usual two, would give an infinite number of solutions. This brings up the problem of how to compare and choose just one solution, which can be a problem for software and for humans, as well. For this reason, it is usually best to choose as low a degree as possible for an exact match on all constraints, and perhaps an even lower degree, if an approximate fit is acceptable.
Fitting other functions to data points
Other types of curves, such as trigonometric functions (for example, sine and cosine), may also be used in certain cases.
In spectroscopy, data may be fitted with Gaussian, Lorentzian, Voigt and related functions.
In biology, ecology, demography, epidemiology, and many other disciplines, the growth of a population, the spread of infectious disease, etc. can be fitted using the logistic function.
In agriculture the inverted logistic sigmoid function (S-curve) is used to describe the relation between crop yield and growth factors. For example, a sigmoid regression of yield data measured on farm land shows that initially, at low soil salinity, the crop yield decreases slowly with increasing soil salinity, while thereafter the decrease progresses faster.
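A minimal sketch of such a sigmoid regression (assuming SciPy is available; the functional form, parameter names, and sample values below are illustrative rather than measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Inverted logistic (S-curve) relating crop yield to soil salinity.
def inverted_logistic(x, ymax, k, x0):
    return ymax / (1.0 + np.exp(k * (x - x0)))

salinity = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
crop_yield = np.array([9.8, 9.6, 9.1, 8.0, 6.2, 4.1, 2.5, 1.6])

# Non-linear least squares; p0 is a rough initial guess of the parameters.
params, _ = curve_fit(inverted_logistic, salinity, crop_yield,
                      p0=[10.0, 1.0, 5.0])
print("fitted ymax, k, x0:", params)
```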
Geometric fitting of plane curves to data points
If a function of the form $y=f(x)$ cannot be postulated, one can still try to fit a plane curve.
Other types of curves, such as conic sections (circular, elliptical, parabolic, and hyperbolic arcs) or trigonometric functions (such as sine and cosine), may also be used, in certain cases. For example, trajectories of objects under the influence of gravity follow a parabolic path, when air resistance is ignored. Hence, matching trajectory data points to a parabolic curve would make sense. Tides follow sinusoidal patterns, hence tidal data points should be matched to a sine wave, or the sum of two sine waves of different periods, if the effects of the Moon and Sun are both considered.
For a parametric curve, it is effective to fit each of its coordinates as a separate function of arc length; assuming that data points can be ordered, the chord distance may be used.[22]
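A short sketch of this approach (assuming NumPy; the ordered data points and the choice of cubic polynomials are illustrative): the cumulative chord length serves as the parameter, and each coordinate is fitted separately as a function of it.

```python
import numpy as np

# Ordered 2D data points along an open curve (illustrative values only).
pts = np.array([[0.0, 0.0], [1.0, 0.8], [2.1, 1.1],
                [3.0, 0.9], [3.9, 0.2], [4.5, -0.7]])

# Cumulative chord length, used as a stand-in for arc length.
seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(seg)])

# Fit each coordinate separately as a polynomial in the parameter s.
px = np.polyfit(s, pts[:, 0], deg=3)
py = np.polyfit(s, pts[:, 1], deg=3)

# Evaluate the fitted parametric curve on a fine grid of s.
s_fine = np.linspace(s[0], s[-1], 100)
curve = np.column_stack([np.polyval(px, s_fine), np.polyval(py, s_fine)])
print(curve[:3])
```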
Fitting a circle by geometric fit
Coope[23] approaches the problem of trying to find the best visual fit of a circle to a set of 2D data points. The method elegantly transforms the ordinarily non-linear problem into a linear problem that can be solved without using iterative numerical methods, and is hence much faster than previous techniques.
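A sketch of the linearization idea (assuming NumPy; this is the standard algebraic reformulation discussed in this context rather than a reproduction of Coope's exact algorithm, and the sample data are made up): writing $|p-c|^{2}=r^{2}$ as $2c_{x}x+2c_{y}y+d=x^{2}+y^{2}$ with $d=r^{2}-|c|^{2}$ makes the unknowns $(c_{x},c_{y},d)$ appear linearly, so an ordinary linear least-squares solve suffices.

```python
import numpy as np

def fit_circle_linear(points):
    """Circle fit by a single linear least-squares solve.

    Solves 2*cx*x + 2*cy*y + d = x^2 + y^2 for (cx, cy, d),
    where d = r^2 - cx^2 - cy^2.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * pts[:, 0], 2.0 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (cx, cy), np.sqrt(d + cx ** 2 + cy ** 2)

# Noisy points near a circle of centre (1, -2) and radius 3 (made-up data).
rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2.0 * np.pi, 50)
pts = np.column_stack([1 + 3 * np.cos(theta), -2 + 3 * np.sin(theta)])
pts += rng.normal(scale=0.05, size=pts.shape)
print(fit_circle_linear(pts))
```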
Fitting an ellipse by geometric fit
The above technique is extended to general ellipses[24] by adding a non-linear step, resulting in a method that is fast, yet finds visually pleasing ellipses of arbitrary orientation and displacement.
Fitting surfaces
See also: Multivariate interpolation and Smoothing
Note that while this discussion was in terms of 2D curves, much of this logic also extends to 3D surfaces, each patch of which is defined by a net of curves in two parametric directions, typically called u and v. A surface may be composed of one or more surface patches in each direction.
Software
Many statistical packages such as R and numerical software such as gnuplot, GNU Scientific Library, MLAB, Maple, MATLAB, TK Solver 6.0, Scilab, Mathematica, GNU Octave, and SciPy include commands for doing curve fitting in a variety of scenarios. There are also programs specifically written to do curve fitting; they can be found in the lists of statistical and numerical-analysis programs as well as in Category:Regression and curve fitting software.
See also
• Calibration curve
• Curve-fitting compaction
• Estimation theory
• Function approximation
• Goodness of fit
• Genetic programming
• Least-squares adjustment
• Levenberg–Marquardt algorithm
• Line fitting
• Linear interpolation
• Mathematical model
• Multi expression programming
• Nonlinear regression
• Overfitting
• Plane curve
• Probability distribution fitting
• Sinusoidal model
• Smoothing
• Splines (interpolating, smoothing)
• Time series
• Total least squares
• Linear trend estimation
References
1. Sandra Lach Arlinghaus, PHB Practical Handbook of Curve Fitting. CRC Press, 1994.
2. William M. Kolb. Curve Fitting for Programmable Calculators. Syntec, Incorporated, 1984.
3. S.S. Halli, K.V. Rao. 1992. Advanced Techniques of Population Analysis. ISBN 0306439972 Page 165 (cf. ... functions are fulfilled if we have a good to moderate fit for the observed data.)
4. The Signal and the Noise: Why So Many Predictions Fail-but Some Don't. By Nate Silver
5. Data Preparation for Data Mining: Text. By Dorian Pyle.
6. Numerical Methods in Engineering with MATLAB®. By Jaan Kiusalaas. Page 24.
7. Numerical Methods in Engineering with Python 3. By Jaan Kiusalaas. Page 21.
8. Numerical Methods of Curve Fitting. By P. G. Guest, Philip George Guest. Page 349.
9. See also: Mollifier
10. Fitting Models to Biological Data Using Linear and Nonlinear Regression. By Harvey Motulsky, Arthur Christopoulos.
11. Regression Analysis By Rudolf J. Freund, William J. Wilson, Ping Sa. Page 269.
12. Visual Informatics. Edited by Halimah Badioze Zaman, Peter Robinson, Maria Petrou, Patrick Olivier, Heiko Schröder. Page 689.
13. Numerical Methods for Nonlinear Engineering Models. By John R. Hauser. Page 227.
14. Methods of Experimental Physics: Spectroscopy, Volume 13, Part 1. By Claire Marton. Page 150.
15. Encyclopedia of Research Design, Volume 1. Edited by Neil J. Salkind. Page 266.
16. Community Analysis and Planning Techniques. By Richard E. Klosterman. Page 1.
17. An Introduction to Risk and Uncertainty in the Evaluation of Environmental Investments. DIANE Publishing. Pg 69
18. Ahn, Sung-Joon (December 2008), "Geometric Fitting of Parametric Curves and Surfaces" (PDF), Journal of Information Processing Systems, 4 (4): 153–158, doi:10.3745/JIPS.2008.4.4.153, archived from the original (PDF) on 2014-03-13
19. Chernov, N.; Ma, H. (2011), "Least squares fitting of quadratic curves and surfaces", in Yoshida, Sota R. (ed.), Computer Vision, Nova Science Publishers, pp. 285–302, ISBN 9781612093994
20. Liu, Yang; Wang, Wenping (2008), "A Revisit to Least Squares Orthogonal Distance Fitting of Parametric Curves and Surfaces", in Chen, F.; Juttler, B. (eds.), Advances in Geometric Modeling and Processing, Lecture Notes in Computer Science, vol. 4975, pp. 384–397, CiteSeerX 10.1.1.306.6085, doi:10.1007/978-3-540-79246-8_29, ISBN 978-3-540-79245-1
21. Calculator for sigmoid regression
22. p.51 in Ahlberg & Nilson (1967) The theory of splines and their applications, Academic Press, 1967
23. Coope, I.D. (1993). "Circle fitting by linear and nonlinear least squares". Journal of Optimization Theory and Applications. 76 (2): 381–388. doi:10.1007/BF00939613. hdl:10092/11104. S2CID 59583785.
24. Paul Sheer, A software assistant for manual stereo photometrology, M.Sc. thesis, 1997
Further reading
• N. Chernov (2010), Circular and linear regression: Fitting circles and lines by least squares, Chapman & Hall/CRC, Monographs on Statistics and Applied Probability, Volume 117 (256 pp.).
Fractal landscape
A fractal landscape or fractal surface is generated using a stochastic algorithm designed to produce fractal behavior that mimics the appearance of natural terrain. In other words, the surface resulting from the procedure is not deterministic, but rather a random surface that exhibits fractal behavior.[1]
Many natural phenomena exhibit some form of statistical self-similarity that can be modeled by fractal surfaces.[2] Moreover, variations in surface texture provide important visual cues to the orientation and slopes of surfaces, and the use of almost self-similar fractal patterns can help create natural looking visual effects.[3] The modeling of the Earth's rough surfaces via fractional Brownian motion was first proposed by Benoit Mandelbrot.[4]
Because the intended result of the process is to produce a landscape, rather than a mathematical function, processes are frequently applied to such landscapes that may affect the stationarity and even the overall fractal behavior of such a surface, in the interests of producing a more convincing landscape.
According to R. R. Shearer, the generation of natural looking surfaces and landscapes was a major turning point in art history, where the distinction between geometric, computer generated images and natural, man made art became blurred.[5] The first use of a fractal-generated landscape in a film was in 1982 for the movie Star Trek II: The Wrath of Khan. Loren Carpenter refined the techniques of Mandelbrot to create an alien landscape.[6]
Behavior of natural landscapes
Whether or not natural landscapes behave in a generally fractal manner has been the subject of some research. Technically speaking, any surface in three-dimensional space has a topological dimension of 2, and therefore any fractal surface in three-dimensional space has a Hausdorff dimension between 2 and 3.[7] Real landscapes, however, have varying behavior at different scales. This means that an attempt to calculate the 'overall' fractal dimension of a real landscape can result in measures of negative fractal dimension, or of fractal dimension above 3. In particular, many studies of natural phenomena, even those commonly thought to exhibit fractal behavior, do not do so over more than a few orders of magnitude. For instance, Richardson's examination of the western coastline of Britain showed fractal behavior of the coastline over only two orders of magnitude.[8] In general, there is no reason to suppose that the geological processes that shape terrain on large scales (for example, plate tectonics) exhibit the same mathematical behavior as those that shape terrain on smaller scales (for instance, soil creep).
Real landscapes also have varying statistical behavior from place to place, so for example sandy beaches don't exhibit the same fractal properties as mountain ranges. A fractal function, however, is statistically stationary, meaning that its bulk statistical properties are the same everywhere. Thus, any real approach to modeling landscapes requires the ability to modulate fractal behavior spatially. Additionally real landscapes have very few natural minima (most of these are lakes), whereas a fractal function has as many minima as maxima, on average. Real landscapes also have features originating with the flow of water and ice over their surface, which simple fractals cannot model.[9]
It is because of these considerations that simple fractal functions are often inappropriate for modeling landscapes. More sophisticated techniques (known as 'multi-fractal' techniques) use different fractal dimensions for different scales, and thus can better model the frequency spectrum behavior of real landscapes.[10]
Generation of fractal landscapes
A way to make such a landscape is to employ the random midpoint displacement algorithm, in which a square is subdivided into four smaller equal squares and the center point is vertically offset by some random amount. The process is repeated on the four new squares, and so on, until the desired level of detail is reached. There are many fractal procedures (such as combining multiple octaves of Simplex noise) capable of creating terrain data; however, the term "fractal landscape" has become more generic over time.
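The following is a compact sketch of the closely related diamond-square variant of midpoint displacement (assuming NumPy; the grid size, roughness parameter, and halving of the random amplitude at each level are illustrative choices):

```python
import numpy as np

def diamond_square(n, roughness=1.0, seed=0):
    """Generate a (2^n + 1) x (2^n + 1) heightmap by midpoint displacement."""
    rng = np.random.default_rng(seed)
    size = 2 ** n + 1
    h = np.zeros((size, size))
    h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.normal(size=4)

    step, amp = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: the centre of each square gets the corner average
        # plus a random offset.
        for i in range(half, size, step):
            for j in range(half, size, step):
                avg = (h[i - half, j - half] + h[i - half, j + half] +
                       h[i + half, j - half] + h[i + half, j + half]) / 4.0
                h[i, j] = avg + rng.normal() * amp
        # Square step: edge midpoints get the average of their set neighbours.
        for i in range(0, size, half):
            for j in range((i + half) % step, size, step):
                nbrs = []
                if i - half >= 0:
                    nbrs.append(h[i - half, j])
                if i + half < size:
                    nbrs.append(h[i + half, j])
                if j - half >= 0:
                    nbrs.append(h[i, j - half])
                if j + half < size:
                    nbrs.append(h[i, j + half])
                h[i, j] = sum(nbrs) / len(nbrs) + rng.normal() * amp
        step, amp = half, amp / 2.0
    return h

print(diamond_square(4).shape)   # (17, 17) heightmap
```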
Fractal plants
Fractal plants can be procedurally generated using L-systems in computer-generated scenes.[11]
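As a small sketch, the string-rewriting core of an L-system is only a few lines; the production rules below are a commonly quoted "fractal plant" example and are shown purely for illustration (the resulting string is interpreted by a separate turtle-graphics renderer, where 'F' draws forward, '+'/'-' turn, and '['/']' push and pop the drawing state):

```python
def expand_l_system(axiom, rules, iterations):
    """Iteratively rewrite a string according to L-system production rules."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A commonly quoted fractal-plant rule set (illustrative; many variants exist).
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(expand_l_system("X", rules, 2))
```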
See also
• Brownian surface
• Bryce
• Diamond-square algorithm
• Fractal-generating software
• Grome
• Heightmap
• Outerra
• Scenery generator
• Terragen
• Octree
• Quadtree
Notes
1. "The Fractal Geometry of Nature".
2. Advances in multimedia modeling: 13th International Multimedia Modeling by Tat-Jen Cham 2007 ISBN 3-540-69428-5 page
3. Human symmetry perception and its computational analysis by Christopher W. Tyler 2002 ISBN 0-8058-4395-7 pages 173–177
4. Dynamics of Fractal Surfaces by Fereydoon Family and Tamas Vicsek 1991 ISBN 981-02-0720-4 page 45
5. Rhonda Roland Shearer "Rethinking Images and Metaphors" in The languages of the brain by Albert M. Galaburda 2002 ISBN 0-674-00772-7 pages 351–359
6. Briggs, John (1992). Fractals: The Patterns of Chaos : a New Aesthetic of Art, Science, and Nature. Simon and Schuster. p. 84. ISBN 978-0671742171. Retrieved 15 June 2014.
7. Lewis
8. Richardson
9. Ken Musgrave, 1993
10. Joost van Lawick van Pabst et al.
11. de la Re, Armando; Abad, Francisco; Camahort, Emilio; Juan, M. C. (2009). "Tools for Procedural Generation of Plants in Virtual Scenes" (PDF). Computational Science – ICCS 2009. Lecture Notes in Computer Science. Vol. 5545. pp. 801–810. doi:10.1007/978-3-642-01973-9_89. ISBN 978-3-642-01972-2. S2CID 33892094.
References
• Lewis, J.P. "Is the Fractal Model Appropriate for Terrain?" (PDF).
• Richardson, L.F. (1961). "The Problem of Continuity". General Systems Yearbook. 6: 139–187.
• van Lawick van Pabst, Joost; Jense, Hans (2001). "Dynamic Terrain Generation Based on Multifractal Techniques" (PDF). Archived from the original (PDF) on 2011-07-24.
• Musgrave, Ken (1993). "Methods for Realistic Landscape Imaging" (PDF).
External links
• A Web-Wide World by Ken Perlin, 1998; a Java applet showing a sphere with a generated landscape.
Surface gradient
In vector calculus, the surface gradient is a vector differential operator that is similar to the conventional gradient. The distinction is that the surface gradient takes effect along a surface.
For a surface $S$ in a scalar field $u$, the surface gradient is defined and notated as
$\nabla _{S}u=\nabla u-\mathbf {\hat {n}} (\mathbf {\hat {n}} \cdot \nabla u)$
where $\mathbf {\hat {n}} $ is a unit normal to the surface.[1] Examining the definition shows that the surface gradient is the (conventional) gradient with the component normal to the surface removed (subtracted), hence this gradient is tangent to the surface. In other words, the surface gradient is the orthographic projection of the gradient onto the surface.
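A minimal numerical sketch of this projection (assuming NumPy; the example gradient and normal are arbitrary):

```python
import numpy as np

def surface_gradient(grad_u, normal):
    """Remove from grad_u its component along the unit surface normal."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)          # ensure the normal has unit length
    g = np.asarray(grad_u, dtype=float)
    return g - n * (n @ g)

# Example: u(x, y, z) = z on the plane z = 0, whose normal is (0, 0, 1).
# The conventional gradient is (0, 0, 1); its tangential part is zero.
print(surface_gradient([0.0, 0.0, 1.0], [0.0, 0.0, 1.0]))
```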
The surface gradient arises whenever the gradient of a quantity over a surface is important. In the study of capillary surfaces, for example, the full three-dimensional gradient of a spatially varying surface tension is not meaningful, but the surface gradient is well defined and serves certain purposes.
See also
• Aspect (geography)
• Geomorphometry#Surface gradient Derivatives
• Grade (slope)
• Spatial gradient
References
1. R. Shankar Subramanian, Boundary Conditions in Fluid Mechanics.
Surface of revolution
A surface of revolution is a surface in Euclidean space created by rotating a curve (the generatrix) one full revolution around an axis of rotation (normally not intersecting the generatrix, except at its endpoints).[1] The volume bounded by the surface created by this revolution is the solid of revolution.
Examples of surfaces of revolution generated by a straight line are cylindrical and conical surfaces, depending on whether or not the line is parallel to the axis. A circle that is rotated around any diameter generates a sphere of which it is then a great circle, and if the circle is rotated around an axis that does not intersect the interior of the circle, then it generates a torus which does not intersect itself (a ring torus).
Properties
The sections of the surface of revolution made by planes through the axis are called meridional sections. Any meridional section can be considered to be the generatrix in the plane determined by it and the axis.[2]
The sections of the surface of revolution made by planes that are perpendicular to the axis are circles.
Some special cases of hyperboloids (of either one or two sheets) and elliptic paraboloids are surfaces of revolution. These may be identified as those quadratic surfaces all of whose cross sections perpendicular to the axis are circular.
Area formula
If the curve is described by the parametric functions x(t), y(t), with t ranging over some interval [a,b], and the axis of revolution is the y-axis, then the area Ay is given by the integral
$A_{y}=2\pi \int _{a}^{b}x(t)\,{\sqrt {\left({dx \over dt}\right)^{2}+\left({dy \over dt}\right)^{2}}}\,dt,$
provided that x(t) is never negative between the endpoints a and b. This formula is the calculus equivalent of Pappus's centroid theorem.[3] The quantity
${\sqrt {\left({dx \over dt}\right)^{2}+\left({dy \over dt}\right)^{2}}}$
comes from the Pythagorean theorem and represents a small segment of the arc of the curve, as in the arc length formula. The quantity 2πx(t) is the path of (the centroid of) this small segment, as required by Pappus' theorem.
Likewise, when the axis of rotation is the x-axis and provided that y(t) is never negative, the area is given by[4]
$A_{x}=2\pi \int _{a}^{b}y(t)\,{\sqrt {\left({dx \over dt}\right)^{2}+\left({dy \over dt}\right)^{2}}}\,dt.$
If the continuous curve is described by the function y = f(x), a ≤ x ≤ b, then the integral becomes
$A_{x}=2\pi \int _{a}^{b}y{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}}\,dx=2\pi \int _{a}^{b}f(x){\sqrt {1+{\big (}f'(x){\big )}^{2}}}\,dx$
for revolution around the x-axis, and
$A_{y}=2\pi \int _{a}^{b}x{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}}\,dx$
for revolution around the y-axis (provided a ≥ 0). These come from the above formula.[5]
For example, the spherical surface with unit radius is generated by the curve y(t) = sin(t), x(t) = cos(t), when t ranges over [0,π]. Its area is therefore
${\begin{aligned}A&{}=2\pi \int _{0}^{\pi }\sin(t){\sqrt {{\big (}\cos(t){\big )}^{2}+{\big (}\sin(t){\big )}^{2}}}\,dt\\&{}=2\pi \int _{0}^{\pi }\sin(t)\,dt\\&{}=4\pi .\end{aligned}}$
For the case of the spherical curve with radius r, $y(x)={\sqrt {r^{2}-x^{2}}}$, rotated about the x-axis
${\begin{aligned}A&{}=2\pi \int _{-r}^{r}{\sqrt {r^{2}-x^{2}}}\,{\sqrt {1+{\frac {x^{2}}{r^{2}-x^{2}}}}}\,dx\\&{}=2\pi r\int _{-r}^{r}\,{\sqrt {r^{2}-x^{2}}}\,{\sqrt {\frac {1}{r^{2}-x^{2}}}}\,dx\\&{}=2\pi r\int _{-r}^{r}\,dx\\&{}=4\pi r^{2}\,\end{aligned}}$
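A brief numerical check of the parametric area formula (a sketch assuming SciPy is available) integrates the unit-sphere case above and recovers $4\pi $:

```python
import numpy as np
from scipy.integrate import quad

# Area of revolution about the x-axis for (x(t), y(t)) = (cos t, sin t),
# t in [0, pi], i.e. the unit sphere.
def integrand(t):
    dx, dy = -np.sin(t), np.cos(t)
    return 2.0 * np.pi * np.sin(t) * np.hypot(dx, dy)

area, _ = quad(integrand, 0.0, np.pi)
print(area, 4.0 * np.pi)   # both approximately 12.566
```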
A minimal surface of revolution is the surface of revolution of the curve between two given points which minimizes surface area.[6] A basic problem in the calculus of variations is finding the curve between two points that produces this minimal surface of revolution.[6]
There are only two minimal surfaces of revolution (surfaces of revolution which are also minimal surfaces): the plane and the catenoid.[7]
Coordinate expressions
A surface of revolution given by rotating a curve described by $y=f(x)$ around the x-axis may be most simply described by $y^{2}+z^{2}=f(x)^{2}$. This yields the parametrization in terms of $x$ and $\theta $ as $(x,f(x)\cos(\theta ),f(x)\sin(\theta ))$. If instead we revolve the curve around the y-axis, then the curve is described by $y=f({\sqrt {x^{2}+z^{2}}})$, yielding the expression $(x\cos(\theta ),f(x),x\sin(\theta ))$ in terms of the parameters $x$ and $\theta $.
If x and y are defined in terms of a parameter $t$, then we obtain a parametrization in terms of $t$ and $\theta $. If $x$ and $y$ are functions of $t$, then the surface of revolution obtained by revolving the curve around the x-axis is described by $(x(t),y(t)\cos(\theta ),y(t)\sin(\theta ))$, and the surface of revolution obtained by revolving the curve around the y-axis is described by $(x(t)\cos(\theta ),y(t),x(t)\sin(\theta ))$.
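A short sketch of the first of these parametrizations (assuming NumPy; the catenoid example, grid, and resolution are arbitrary choices) samples points $(x,f(x)\cos(\theta ),f(x)\sin(\theta ))$ on a mesh:

```python
import numpy as np

def revolve_about_x_axis(f, x_vals, n_theta=64):
    """Sample the surface obtained by revolving y = f(x) around the x-axis,
    using the parametrization (x, f(x) cos(theta), f(x) sin(theta))."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    X, T = np.meshgrid(x_vals, theta, indexing="ij")
    R = f(X)
    return X, R * np.cos(T), R * np.sin(T)

# Example: a catenoid, obtained by revolving y = cosh(x) around the x-axis.
X, Y, Z = revolve_about_x_axis(np.cosh, np.linspace(-1.0, 1.0, 40))
print(X.shape, Y.shape, Z.shape)
```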
Geodesics
Meridians are always geodesics on a surface of revolution. Other geodesics are governed by Clairaut's relation.[8]
Toroids
Main article: Toroid
A surface of revolution with a hole in it, where the axis of revolution does not intersect the surface, is called a toroid.[9] For example, when a rectangle is rotated around an axis parallel to one of its edges, then a hollow square-section ring is produced. If the revolved figure is a circle, then the object is called a torus.
Applications
The use of surfaces of revolution is essential in many fields of physics and engineering. When certain objects are designed digitally, revolutions like these can be used to determine surface area without having to measure the length and radius of the object being designed.
See also
• Channel surface, a generalisation of a surface of revolution
• Gabriel's Horn
• Generalized helicoid
• Lemon (geometry), surface of revolution of a circular arc
• Liouville surface, another generalization of a surface of revolution
• Spheroid
• Surface integral
• Translation surface (differential geometry)
References
1. Middlemiss; Marks; Smart. "15-4. Surfaces of Revolution". Analytic Geometry (3rd ed.). p. 378. LCCN 68015472.
2. Wilson, W.A.; Tracey, J.I. (1925), Analytic Geometry (Revised ed.), D.C. Heath and Co., p. 227
3. Thomas, George B. "6.7: Area of a Surface of Revolution; 6.11: The Theorems of Pappus". Calculus (3rd ed.). pp. 206–209, 217–219. LCCN 69016407.
4. Singh, R.R. (1993). Engineering Mathematics (6 ed.). Tata McGraw-Hill. p. 6.90. ISBN 0-07-014615-2.
5. Swokowski, Earl W. (1983), Calculus with analytic geometry (Alternate ed.), Prindle, Weber & Schmidt, p. 617, ISBN 0-87150-341-7
6. Weisstein, Eric W. "Minimal Surface of Revolution". MathWorld.
7. Weisstein, Eric W. "Catenoid". MathWorld.
8. Pressley, Andrew. “Chapter 9 - Geodesics.” Elementary Differential Geometry, 2nd ed., Springer, London, 2012, pp. 227–230.
9. Weisstein, Eric W. "Toroid". MathWorld.
External links
• Weisstein, Eric W. "Surface of Revolution". MathWorld.
• "Surface de révolution". Encyclopédie des Formes Mathématiques Remarquables (in French).
Surface subgroup conjecture
In mathematics, the surface subgroup conjecture of Friedhelm Waldhausen states that the fundamental group of every closed, irreducible 3-manifold with infinite fundamental group has a surface subgroup. By "surface subgroup" we mean the fundamental group of a closed surface other than the 2-sphere. This problem is listed as Problem 3.75 in Robion Kirby's problem list.[1]
Assuming the geometrization conjecture, the only open case was that of closed hyperbolic 3-manifolds. A proof of this case was announced in the summer of 2009 by Jeremy Kahn and Vladimir Markovic and outlined in a talk on August 4, 2009 at the FRG (Focused Research Group) Conference hosted by the University of Utah. A preprint appeared on the arXiv server in October 2009.[2] Their paper was published in the Annals of Mathematics in 2012.[2] In June 2012, Kahn and Markovic were given the Clay Research Awards by the Clay Mathematics Institute at a ceremony in Oxford.
See also
• Virtually Haken conjecture
• Ehrenpreis conjecture
References
1. Robion Kirby, Problems in low-dimensional topology
2. Kahn, J.; Markovic, V. (2012). "Immersing almost geodesic surfaces in a closed hyperbolic three manifold". Annals of Mathematics. 175 (3): 1127. arXiv:0910.5501. doi:10.4007/annals.2012.175.3.4.
Surface of class VII
In mathematics, surfaces of class VII are non-algebraic complex surfaces studied by (Kodaira 1964, 1968) that have Kodaira dimension −∞ and first Betti number 1. Minimal surfaces of class VII (those with no rational curves with self-intersection −1) are called surfaces of class VII0. Every class VII surface is birational to a unique minimal class VII surface, and can be obtained from this minimal surface by blowing up points a finite number of times.
The name "class VII" comes from (Kodaira 1964, theorem 21), which divided minimal surfaces into 7 classes numbered I0 to VII0. However Kodaira's class VII0 did not have the condition that the Kodaira dimension is −∞, but instead had the condition that the geometric genus is 0. As a result, his class VII0 also included some other surfaces, such as secondary Kodaira surfaces, that are no longer considered to be class VII as they do not have Kodaira dimension −∞. The minimal surfaces of class VII are the class numbered "7" on the list of surfaces in (Kodaira 1968, theorem 55).
Invariants
The irregularity q is 1, and h1,0 = 0. All plurigenera are 0.
Hodge diamond:
1
01
0b20
10
1
Examples
Hopf surfaces are quotients of C2−(0,0) by a discrete group G acting freely, and have vanishing second Betti numbers. The simplest example is to take G to be the integers, acting as multiplication by powers of 2; the corresponding Hopf surface is diffeomorphic to S1×S3.
Inoue surfaces are certain class VII surfaces whose universal cover is C×H where H is the upper half plane (so they are quotients of this by a group of automorphisms). They have vanishing second Betti numbers.
Inoue–Hirzebruch surfaces, Enoki surfaces, and Kato surfaces give examples of type VII surfaces with b2 > 0.
Classification and global spherical shells
The minimal class VII surfaces with second Betti number b2=0 have been classified by Bogomolov (1976, 1982), and are either Hopf surfaces or Inoue surfaces. Those with b2=1 were classified by Nakamura (1984b) under the additional assumption that the surface has a curve; this assumption was later proved to hold by Teleman (2005).
A global spherical shell (Kato 1978) is a smooth 3-sphere in the surface with connected complement, with a neighbourhood biholomorphic to a neighbourhood of a sphere in C2. The global spherical shell conjecture claims that all class VII0 surfaces with positive second Betti number have a global spherical shell. The manifolds with a global spherical shell are all Kato surfaces which are reasonably well understood, so a proof of this conjecture would lead to a classification of the type VII surfaces.
A class VII surface with positive second Betti number b2 has at most b2 rational curves, and has exactly this number if it has a global spherical shell. Conversely Georges Dloussky, Karl Oeljeklaus, and Matei Toma (2003) showed that if a minimal class VII surface with positive second Betti number b2 has exactly b2 rational curves then it has a global spherical shell.
For type VII surfaces with vanishing second Betti number, the primary Hopf surfaces have a global spherical shell, but secondary Hopf surfaces and Inoue surfaces do not because their fundamental groups are not infinite cyclic. Blowing up points on the latter surfaces gives non-minimal class VII surfaces with positive second Betti number that do not have spherical shells.
References
• Barth, Wolf P.; Hulek, Klaus; Peters, Chris A.M.; Van de Ven, Antonius (2004), Compact Complex Surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge., vol. 4, Springer-Verlag, Berlin, ISBN 978-3-540-00832-3, MR 2030225
• Bogomolov, Fedor A. (1976), "Classification of surfaces of class VII0 with b2=0", Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya, 10 (2): 273–288, ISSN 0373-2436, MR 0427325
• Bogomolov, Fedor A. (1982), "Surfaces of class VII0 and affine geometry", Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya, 46 (4): 710–761, Bibcode:1983IzMat..21...31B, doi:10.1070/IM1983v021n01ABEH001640, ISSN 0373-2436, MR 0670164
• Dloussky, Georges; Oeljeklaus, Karl; Toma, Matei (2003), "Class VII0 surfaces with b2 curves", The Tohoku Mathematical Journal, Second Series, 55 (2): 283–309, arXiv:math/0201010, doi:10.2748/tmj/1113246942, ISSN 0040-8735, MR 1979500
• Kato, Masahide (1978), "Compact complex manifolds containing "global" spherical shells. I", Proceedings of the International Symposium on Algebraic Geometry (Kyoto Univ., Kyoto, 1977), Tokyo: Kinokuniya Book Store, pp. 45–84, MR 0578853
• Kodaira, Kunihiko (1964), "On the structure of compact complex analytic surfaces. I", American Journal of Mathematics, The Johns Hopkins University Press, 86 (4): 751–798, doi:10.2307/2373157, ISSN 0002-9327, JSTOR 2373157, MR 0187255
• Kodaira, Kunihiko (1968), "On the structure of complex analytic surfaces. IV", American Journal of Mathematics, The Johns Hopkins University Press, 90 (4): 1048–1066, doi:10.2307/2373289, ISSN 0002-9327, JSTOR 2373289, MR 0239114
• Nakamura, Iku (1984a), "On surfaces of class VII0 with curves", Inventiones Mathematicae, 78 (3): 393–443, Bibcode:1984InMat..78..393N, doi:10.1007/BF01388444, ISSN 0020-9910, MR 0768987
• Nakamura, Iku (1984b), "Classification of non-Kähler complex surfaces", Mathematical Society of Japan. Sugaku (Mathematics), 36 (2): 110–124, ISSN 0039-470X, MR 0780359
• Nakamura, I. (2008), "Survey on VII0 surfaces", Recent Developments in NonKaehler Geometry, Sapporo (PDF)
• Teleman, Andrei (2005), "Donaldson theory on non-Kählerian surfaces and class VII surfaces with b2=1", Inventiones Mathematicae, 162 (3): 493–521, arXiv:0704.2638, Bibcode:2005InMat.162..493T, doi:10.1007/s00222-005-0451-2, ISSN 0020-9910, MR 2198220
Surgery exact sequence
In the mathematical surgery theory the surgery exact sequence is the main technical tool to calculate the surgery structure set of a compact manifold in dimension $>4$. The surgery structure set ${\mathcal {S}}(X)$ of a compact $n$-dimensional manifold $X$ is a pointed set which classifies $n$-dimensional manifolds within the homotopy type of $X$.
The basic idea is that in order to calculate ${\mathcal {S}}(X)$ it is enough to understand the other terms in the sequence, which are usually easier to determine. These are, on one hand, the normal invariants, which form generalized cohomology groups, and hence one can use standard tools of algebraic topology to calculate them, at least in principle. On the other hand, there are the L-groups, which are defined algebraically in terms of quadratic forms or in terms of chain complexes with quadratic structure. A great deal is known about these groups. Another part of the sequence consists of the surgery obstruction maps from normal invariants to the L-groups. For these maps there are certain characteristic class formulas, which make it possible to calculate them in some cases. Knowledge of these three components, that means the normal maps, the L-groups, and the surgery obstruction maps, is enough to determine the structure set (at least up to extension problems).
In practice one has to proceed case by case; for each manifold $X$ it is a separate task to determine the surgery exact sequence (see some examples below). Also note that there are versions of the surgery exact sequence depending on the category of manifolds we work with: smooth (DIFF), PL, or topological manifolds, and on whether we take Whitehead torsion into account or not (decorations $s$ or $h$).
The original 1962 work of Browder and Novikov on the existence and uniqueness of manifolds within a simply-connected homotopy type was reformulated by Sullivan in 1966 as a surgery exact sequence. In 1970 Wall developed non-simply-connected surgery theory and the surgery exact sequence for manifolds with arbitrary fundamental group.
Definition
The surgery exact sequence is defined as
$\cdots \to {\mathcal {N}}_{\partial }(X\times I)\to L_{n+1}(\pi _{1}(X))\to {\mathcal {S}}(X)\to {\mathcal {N}}(X)\to L_{n}(\pi _{1}(X))$
where:
the entries ${\mathcal {N}}_{\partial }(X\times I)$ and ${\mathcal {N}}(X)$ are the abelian groups of normal invariants,
the entries $L_{n+1}(\pi _{1}(X))$ and $L_{n}(\pi _{1}(X))$ are the L-groups associated to the group ring $\mathbb {Z} [\pi _{1}(X)]$,
the maps $\theta \colon {\mathcal {N}}_{\partial }(X\times I)\to L_{n+1}(\pi _{1}(X))$ and $\theta \colon {\mathcal {N}}(X)\to L_{n}(\pi _{1}(X))$ are the surgery obstruction maps,
the arrows $\partial \colon L_{n+1}(\pi _{1}(X))\to {\mathcal {S}}(X)$ and $\eta \colon {\mathcal {S}}(X)\to {\mathcal {N}}(X)$ will be explained below.
Versions
There are various versions of the surgery exact sequence. One can work in either of the three categories of manifolds: differentiable (smooth), PL, topological. Another possibility is to work with the decorations $s$ or $h$.
The entries
Normal invariants
Main article: Normal invariants
A degree one normal map $(f,b)\colon M\to X$ consists of the following data: an $n$-dimensional oriented closed manifold $M$, a map $f$ which is of degree one (that means $f_{*}([M])=[X]$), and a bundle map $b\colon TM\oplus \varepsilon ^{k}\to \xi $ from the stable tangent bundle of $M$ to some bundle $\xi $ over $X$. Two such maps are equivalent if there exists a normal bordism between them (that means a bordism of the sources covered by suitable bundle data). The equivalence classes of degree one normal maps are called normal invariants.
When defined like this the normal invariants ${\mathcal {N}}(X)$ are just a pointed set, with the base point given by $(id,id)$. However the Pontrjagin-Thom construction gives ${\mathcal {N}}(X)$ a structure of an abelian group. In fact we have a non-natural bijection
${\mathcal {N}}(X)\cong [X,G/O]$
where $G/O$ denotes the homotopy fiber of the map $J\colon BO\to BG$, which is an infinite loop space and hence maps into it define a generalized cohomology theory. There are corresponding identifications of the normal invariants with $[X,G/PL]$ when working with PL-manifolds and with $[X,G/TOP]$ when working with topological manifolds.
L-groups
Main article: L-theory
The $L$-groups are defined algebraically in terms of quadratic forms or in terms of chain complexes with quadratic structure. See the main article for more details. Here only the properties of the L-groups described below will be important.
Surgery obstruction maps
Main article: Surgery obstruction
The map $\theta \colon {\mathcal {N}}(X)\to L_{n}(\pi _{1}(X))$ is in the first instance a set-theoretic map (that means not necessarily a homomorphism) with the following property (when $n\geq 5$):
A degree one normal map $(f,b)\colon M\to X$ is normally cobordant to a homotopy equivalence if and only if the image $\theta (f,b)=0$ in $L_{n}(\mathbb {Z} [\pi _{1}(X)])$.
The normal invariants arrow $\eta \colon {\mathcal {S}}(X)\to {\mathcal {N}}(X)$
Any homotopy equivalence $f\colon M\to X$ defines a degree one normal map.
The surgery obstruction arrow $\partial \colon L_{n+1}(\pi _{1}(X))\to {\mathcal {S}}(X)$
This arrow describes in fact an action of the group $L_{n+1}(\pi _{1}(X))$ on the set ${\mathcal {S}}(X)$ rather than just a map. The definition is based on the realization theorem for the elements of the $L$-groups which reads as follows:
Let $M$ be an $n$-dimensional manifold with $\pi _{1}(M)\cong \pi _{1}(X)$ and let $x\in L_{n+1}(\pi _{1}(X))$. Then there exists a degree one normal map of manifolds with boundary
$(F,B)\colon (W,M,M')\to (M\times I,M\times 0,M\times 1)$
with the following properties:
1. $\theta (F,B)=x\in L_{n+1}(\pi _{1}(X))$
2. $F_{0}\colon M\to M\times 0$ is a diffeomorphism
3. $F_{1}\colon M'\to M\times 1$ is a homotopy equivalence of closed manifolds
Let $f\colon M\to X$ represent an element in ${\mathcal {S}}(X)$ and let $x\in L_{n+1}(\pi _{1}(X))$. Then $\partial (f,x)$ is defined as $f\circ F_{1}\colon M'\to X$.
The exactness
Recall that the surgery structure set is only a pointed set and that the surgery obstruction map $\theta $ might not be a homomorphism. Hence it is necessary to explain what is meant when talking about the "exact sequence". So the surgery exact sequence is an exact sequence in the following sense:
For a normal invariant $z\in {\mathcal {N}}(X)$ we have $z\in \mathrm {Im} (\eta )$ if and only if $\theta (z)=0$. For two manifold structures $x_{1},x_{2}\in {\mathcal {S}}(X)$ we have $\eta (x_{1})=\eta (x_{2})$ if and only if there exists $u\in L_{n+1}(\pi _{1}(X))$ such that $\partial (u,x_{1})=x_{2}$. For an element $u\in L_{n+1}(\pi _{1}(X))$ we have $\partial (u,\mathrm {id} )=\mathrm {id} $ if and only if $u\in \mathrm {Im} (\theta )$.
Versions revisited
In the topological category the surgery obstruction map can be made into a homomorphism. This is achieved by putting an alternative abelian group structure on the normal invariants as described here. Moreover, the surgery exact sequence can be identified with the algebraic surgery exact sequence of Ranicki which is an exact sequence of abelian groups by definition. This gives the structure set ${\mathcal {S}}(X)$ the structure of an abelian group. Note, however, that there is to this date no satisfactory geometric description of this abelian group structure.
Classification of manifolds
The answer to the organizing questions of the surgery theory can be formulated in terms of the surgery exact sequence. In both cases the answer is given in the form of a two-stage obstruction theory.
The existence question. Let $X$ be a finite Poincaré complex. It is homotopy equivalent to a manifold if and only if the following two conditions are satisfied. Firstly, $X$ must have a vector bundle reduction of its Spivak normal fibration. This condition can be also formulated as saying that the set of normal invariants ${\mathcal {N}}(X)$ is non-empty. Secondly, there must be a normal invariant $x\in {\mathcal {N}}(X)$ such that $\theta (x)=0$. Equivalently, the surgery obstruction map $\theta \colon {\mathcal {N}}(X)\rightarrow L_{n}(\pi _{1}(X))$ hits $0\in L_{n}(\pi _{1}(X))$.
The uniqueness question. Let $f\colon M\to X$ and $f'\colon M'\to X$ represent two elements in the surgery structure set ${\mathcal {S}}(X)$. The question of whether they represent the same element can be answered in two stages as follows. First there must be a normal cobordism between the degree one normal maps induced by $f$ and $f'$; this means $\eta (f)=\eta (f')$ in ${\mathcal {N}}(X)$. Denote the normal cobordism $(F,B)\colon (W,M,M')\to (X\times I,X\times 0,X\times 1)$. If the surgery obstruction $\theta (F,B)$ in $L_{n+1}(\pi _{1}(X))$ to making this normal cobordism into an h-cobordism (or s-cobordism) relative to the boundary vanishes, then $f$ and $f'$ in fact represent the same element in the surgery structure set.
Quinn's surgery fibration
In his thesis written under the guidance of Browder, Frank Quinn introduced a fiber sequence so that the surgery long exact sequence is the induced sequence on homotopy groups.[1]
Examples
1. Homotopy spheres
This is an example in the smooth category, $n\geq 5$.
The idea of the surgery exact sequence is implicitly present already in the original article of Kervaire and Milnor on the groups of homotopy spheres. In the present terminology we have
${\mathcal {S}}^{DIFF}(S^{n})=\Theta ^{n}$
${\mathcal {N}}^{DIFF}(S^{n})=\Omega _{n}^{alm}$, the cobordism group of almost framed $n$-manifolds, and ${\mathcal {N}}_{\partial }^{DIFF}(S^{n}\times I)=\Omega _{n+1}^{alm}$
$L_{n}(1)=\mathbb {Z} ,0,\mathbb {Z} _{2},0$ where $n\equiv 0,1,2,3$ mod $4$ (recall the $4$-periodicity of the L-groups)
The surgery exact sequence in this case is an exact sequence of abelian groups. In addition to the above identifications we have
$bP^{n+1}=\mathrm {ker} (\eta \colon {\mathcal {S}}^{DIFF}(S^{n})\to {\mathcal {N}}^{DIFF}(S^{n}))=\mathrm {coker} (\theta \colon {\mathcal {N}}_{\partial }^{DIFF}(S^{n}\times I)\to L_{n+1}(1))$
Because the odd-dimensional L-groups are trivial one obtains these exact sequences:
$0\to \Theta ^{4i}\to \Omega _{4i}^{alm}\to \mathbb {Z} \to bP^{4i}\to 0$
$0\to \Theta ^{4i-2}\to \Omega _{4i-2}^{alm}\to \mathbb {Z} /2\to bP^{4i-2}\to 0$
$0\to bP^{2j}\to \Theta ^{2j-1}\to \Omega _{2j-1}^{alm}\to 0$
The results of Kervaire and Milnor are obtained by studying the middle map in the first two sequences and by relating the groups $\Omega _{i}^{alm}$ to stable homotopy theory.
2. Topological spheres
The generalized Poincaré conjecture in dimension $n$ can be phrased as saying that ${\mathcal {S}}^{TOP}(S^{n})=0$. It has been proved for any $n$ by the work of Smale, Freedman and Perelman. From the surgery exact sequence for $S^{n}$ for $n\geq 5$ in the topological category we see that
$\theta \colon {\mathcal {N}}^{TOP}(S^{n})\to L_{n}(1)$
is an isomorphism. (In fact this can be extended to $n\geq 1$ by some ad-hoc methods.)
3. Complex projective spaces in the topological category
The complex projective space $\mathbb {C} P^{n}$ is a $(2n)$-dimensional topological manifold with $\pi _{1}(\mathbb {C} P^{n})=1$. In addition it is known that in the case $\pi _{1}(X)=1$ in the topological category the surgery obstruction map $\theta $ is always surjective. Hence we have
$0\to {\mathcal {S}}^{TOP}(\mathbb {C} P^{n})\to {\mathcal {N}}^{TOP}(\mathbb {C} P^{n})\to L_{2n}(1)\to 0$
From the work of Sullivan one can calculate
${\mathcal {N}}(\mathbb {C} P^{n})\cong \oplus _{i=1}^{\lfloor n/2\rfloor }\mathbb {Z} \oplus \oplus _{i=1}^{\lfloor (n+1)/2\rfloor }\mathbb {Z} _{2}$ and hence ${\mathcal {S}}(\mathbb {C} P^{n})\cong \oplus _{i=1}^{\lfloor (n-1)/2\rfloor }\mathbb {Z} \oplus \oplus _{i=1}^{\lfloor n/2\rfloor }\mathbb {Z} _{2}$
4. Aspherical manifolds in the topological category
An aspherical $n$-dimensional manifold $X$ is an $n$-manifold such that $\pi _{i}(X)=0$ for $i\geq 2$. Hence the only non-trivial homotopy group is $\pi _{1}(X)$.
One way to state the Borel conjecture is to say that for such $X$ we have that the Whitehead group $Wh(\pi _{1}(X))$ is trivial and that
${\mathcal {S}}(X)=0$
This conjecture was proven in many special cases - for example when $\pi _{1}(X)$ is $\mathbb {Z} ^{n}$, when it is the fundamental group of a negatively curved manifold or when it is a word-hyperbolic group or a CAT(0)-group.
The statement is equivalent to showing that the surgery obstruction map to the right of the surgery structure set is injective and the surgery obstruction map to the left of the surgery structure set is surjective. Most of the proofs of the above-mentioned results are done by studying these maps or by studying the assembly maps with which they can be identified. See more details in Borel conjecture, Farrell-Jones Conjecture.
References
1. Quinn, Frank (1971), A geometric formulation of surgery (PDF), Topology of Manifolds, Proc. Univ. Georgia 1969, 500-511 (1971)
• Browder, William (1972), Surgery on simply-connected manifolds, Berlin, New York: Springer-Verlag, MR 0358813
• Lück, Wolfgang (2002), A basic introduction to surgery theory (PDF), ICTP Lecture Notes Series 9, Band 1, of the school "High-dimensional manifold theory" in Trieste, May/June 2001, Abdus Salam International Centre for Theoretical Physics, Trieste 1-224
• Ranicki, Andrew (1992), Algebraic L-theory and topological manifolds (PDF), Cambridge Tracts in Mathematics, vol. 102, Cambridge University Press
• Ranicki, Andrew (2002), Algebraic and Geometric Surgery (PDF), Oxford Mathematical Monographs, Clarendon Press, ISBN 978-0-19-850924-0, MR 2061749
• Wall, C. T. C. (1999), Surgery on compact manifolds, Mathematical Surveys and Monographs, vol. 69 (2nd ed.), Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0942-6, MR 1687388
L-theory
In mathematics, algebraic L-theory is the K-theory of quadratic forms; the term was coined by C. T. C. Wall, with L being used as the letter after K. Algebraic L-theory, also known as "Hermitian K-theory", is important in surgery theory.[1]
Definition
One can define L-groups for any ring with involution R: the quadratic L-groups $L_{*}(R)$ (Wall) and the symmetric L-groups $L^{*}(R)$ (Mishchenko, Ranicki).
Even dimension
The even-dimensional L-groups $L_{2k}(R)$ are defined as the Witt groups of ε-quadratic forms over the ring R with $\epsilon =(-1)^{k}$. More precisely,
$L_{2k}(R)$
is the abelian group of equivalence classes $[\psi ]$ of non-degenerate ε-quadratic forms $\psi \in Q_{\epsilon }(F)$ over R, where the underlying R-modules F are finitely generated free. The equivalence relation is given by stabilization with respect to hyperbolic ε-quadratic forms:
$[\psi ]=[\psi ']\Longleftrightarrow \exists \,n,n'\in {\mathbb {N} }_{0}:\psi \oplus H_{(-1)^{k}}(R)^{n}\cong \psi '\oplus H_{(-1)^{k}}(R)^{n'}$.
The addition in $L_{2k}(R)$ is defined by
$[\psi _{1}]+[\psi _{2}]:=[\psi _{1}\oplus \psi _{2}].$
The zero element is represented by $H_{(-1)^{k}}(R)^{n}$ for any $n\in {\mathbb {N} }_{0}$. The inverse of $[\psi ]$ is $[-\psi ]$.
Odd dimension
Defining odd-dimensional L-groups is more complicated; further details and the definition of the odd-dimensional L-groups can be found in the references mentioned below.
Examples and applications
The L-groups of a group $\pi $ are the L-groups $L_{*}(\mathbf {Z} [\pi ])$ of the group ring $\mathbf {Z} [\pi ]$. In the applications to topology $\pi $ is the fundamental group $\pi _{1}(X)$ of a space $X$. The quadratic L-groups $L_{*}(\mathbf {Z} [\pi ])$ play a central role in the surgery classification of the homotopy types of $n$-dimensional manifolds of dimension $n>4$, and in the formulation of the Novikov conjecture.
The distinction between symmetric L-groups and quadratic L-groups, indicated by upper and lower indices, reflects the usage in group homology and cohomology. The group cohomology $H^{*}$ of the cyclic group $\mathbf {Z} _{2}$ deals with the fixed points of a $\mathbf {Z} _{2}$-action, while the group homology $H_{*}$ deals with the orbits of a $\mathbf {Z} _{2}$-action; compare $X^{G}$ (fixed points) and $X_{G}=X/G$ (orbits, quotient) for upper/lower index notation.
The quadratic L-groups: $L_{n}(R)$ and the symmetric L-groups: $L^{n}(R)$ are related by a symmetrization map $L_{n}(R)\to L^{n}(R)$ which is an isomorphism modulo 2-torsion, and which corresponds to the polarization identities.
The quadratic and the symmetric L-groups are 4-fold periodic (the comment of Ranicki, page 12, on the non-periodicity of the symmetric L-groups refers to another type of L-groups, defined using "short complexes").
In view of the applications to the classification of manifolds there are extensive calculations of the quadratic $L$-groups $L_{*}(\mathbf {Z} [\pi ])$. For finite $\pi $ algebraic methods are used, and mostly geometric methods (e.g. controlled topology) are used for infinite $\pi $.
More generally, one can define L-groups for any additive category with a chain duality, as in Ranicki (section 1).
Integers
The simply connected L-groups are also the L-groups of the integers, as $L(e):=L(\mathbf {Z} [e])=L(\mathbf {Z} )$ for both $L$ = $L^{*}$ or $L_{*}.$ For quadratic L-groups, these are the surgery obstructions to simply connected surgery.
The quadratic L-groups of the integers are:
${\begin{aligned}L_{4k}(\mathbf {Z} )&=\mathbf {Z} &&{\text{signature}}/8\\L_{4k+1}(\mathbf {Z} )&=0\\L_{4k+2}(\mathbf {Z} )&=\mathbf {Z} /2&&{\text{Arf invariant}}\\L_{4k+3}(\mathbf {Z} )&=0.\end{aligned}}$
In doubly even dimension (4k), the quadratic L-groups detect the signature; in singly even dimension (4k+2), the L-groups detect the Arf invariant (topologically the Kervaire invariant).
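Both invariants are easy to evaluate on explicit forms. The sketch below (assuming NumPy; the matrix is a standard E8 Cartan matrix, and the Arf invariant is computed via the "democratic" characterisation that Arf(q) is the value q takes on the majority of vectors) illustrates the signature in the doubly even case and the Arf invariant in the singly even case:

```python
import itertools
import numpy as np

# Doubly even case: the E8 form is even, unimodular and positive definite
# of rank 8, so its signature is 8; under the signature/8 isomorphism it
# represents a generator of L_0(Z).
E8 = np.array([
    [ 2, -1,  0,  0,  0,  0,  0,  0],
    [-1,  2, -1,  0,  0,  0,  0,  0],
    [ 0, -1,  2, -1,  0,  0,  0,  0],
    [ 0,  0, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0, -1],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2,  0],
    [ 0,  0,  0,  0, -1,  0,  0,  2]])
eig = np.linalg.eigvalsh(E8)
signature = int(np.sum(eig > 0) - np.sum(eig < 0))
print("signature of E8:", signature, "-> class", signature // 8, "in L_0(Z)")

# Singly even case: Arf invariant of a quadratic form q over Z/2,
# computed as the value q assumes on the majority of vectors.
def arf(q, n_vars):
    values = [q(v) % 2 for v in itertools.product((0, 1), repeat=n_vars)]
    return 0 if values.count(0) > values.count(1) else 1

print(arf(lambda v: v[0] * v[1], 2))                 # hyperbolic plane: 0
print(arf(lambda v: v[0] + v[0] * v[1] + v[1], 2))   # Arf invariant 1
```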
The symmetric L-groups of the integers are:
${\begin{aligned}L^{4k}(\mathbf {Z} )&=\mathbf {Z} &&{\text{signature}}\\L^{4k+1}(\mathbf {Z} )&=\mathbf {Z} /2&&{\text{de Rham invariant}}\\L^{4k+2}(\mathbf {Z} )&=0\\L^{4k+3}(\mathbf {Z} )&=0.\end{aligned}}$
In doubly even dimension (4k), the symmetric L-groups, as with the quadratic L-groups, detect the signature; in dimension (4k+1), the L-groups detect the de Rham invariant.
References
1. "L-theory, K-theory and involutions, by Levikov, Filipp, 2013, On University of Aberdeen(ISNI:0000 0004 2745 8820)".
• Lück, Wolfgang (2002), "A basic introduction to surgery theory" (PDF), Topology of high-dimensional manifolds, No. 1, 2 (Trieste, 2001), ICTP Lect. Notes, vol. 9, Abdus Salam Int. Cent. Theoret. Phys., Trieste, pp. 1–224, MR 1937016
• Ranicki, Andrew A. (1992), Algebraic L-theory and topological manifolds (PDF), Cambridge Tracts in Mathematics, vol. 102, Cambridge University Press, ISBN 978-0-521-42024-2, MR 1211640
• Wall, C. T. C. (1999) [1970], Ranicki, Andrew (ed.), Surgery on compact manifolds (PDF), Mathematical Surveys and Monographs, vol. 69 (2nd ed.), Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0942-6, MR 1687388
Surgery obstruction
In mathematics, specifically in surgery theory, the surgery obstructions define a map $\theta \colon {\mathcal {N}}(X)\to L_{n}(\pi _{1}(X))$ from the normal invariants to the L-groups which is in the first instance a set-theoretic map (that is, not necessarily a homomorphism) with the following property when $n\geq 5$:
A degree-one normal map $(f,b)\colon M\to X$ is normally cobordant to a homotopy equivalence if and only if $\theta (f,b)=0$ in $L_{n}(\mathbb {Z} [\pi _{1}(X)])$.
Sketch of the definition
The surgery obstruction of a degree-one normal map has a relatively complicated definition.
Consider a degree-one normal map $(f,b)\colon M\to X$. To decide whether it is normally cobordant to a homotopy equivalence, the idea is to systematically improve $(f,b)$ so that the map $f$ becomes $m$-connected (that means the homotopy groups $\pi _{*}(f)=0$ for $*\leq m$) for high $m$. It is a consequence of Poincaré duality that if this can be achieved for $m>\lfloor n/2\rfloor $ then the map $f$ already is a homotopy equivalence. The word systematically refers to the fact that one tries to do surgeries on $M$ to kill elements of $\pi _{i}(f)$. In fact it is more convenient to use the homology of the universal covers to observe how connected the map $f$ is. More precisely, one works with the surgery kernels $K_{i}({\tilde {M}}):=\mathrm {ker} \{f_{*}\colon H_{i}({\tilde {M}})\rightarrow H_{i}({\tilde {X}})\}$, which one views as $\mathbb {Z} [\pi _{1}(X)]$-modules. If all these vanish, then the map $f$ is a homotopy equivalence. As a consequence of Poincaré duality on $M$ and $X$ there is a Poincaré duality of $\mathbb {Z} [\pi _{1}(X)]$-modules $K^{n-i}({\tilde {M}})\cong K_{i}({\tilde {M}})$, so one only has to watch half of them, namely those with $i\leq \lfloor n/2\rfloor $.
Any degree-one normal map can be made $\lfloor n/2\rfloor $-connected by a process called surgery below the middle dimension, which kills the elements of $K_{i}({\tilde {M}})$ for $i<\lfloor n/2\rfloor $. After this is done there are two cases.
1. If $n=2k$ then the only nontrivial homology group is the kernel $K_{k}({\tilde {M}}):=\mathrm {ker} \{f_{*}\colon H_{k}({\tilde {M}})\rightarrow H_{k}({\tilde {X}})\}$. It turns out that the cup-product pairings on $M$ and $X$ induce a cup-product pairing on $K_{k}({\tilde {M}})$. This defines a symmetric bilinear form in case $k=2l$ and a skew-symmetric bilinear form in case $k=2l+1$. It turns out that these forms can be refined to $\varepsilon $-quadratic forms, where $\varepsilon =(-1)^{k}$. These $\varepsilon $-quadratic forms define elements in the L-groups $L_{n}(\pi _{1}(X))$.
2. If $n=2k+1$ the definition is more complicated. Instead of a quadratic form one obtains from the geometry a quadratic formation, which is a kind of automorphism of quadratic forms. Such a thing defines an element in the odd-dimensional L-group $L_{n}(\pi _{1}(X))$.
If the element $\theta (f,b)$ is zero in the L-group, surgery can be done on $M$ to modify $f$ into a homotopy equivalence.
Geometrically the reason why this is not always possible is that performing surgery in the middle dimension to kill an element in $K_{k}({\tilde {M}})$ possibly creates an element in $K_{k-1}({\tilde {M}})$ when $n=2k$ or in $K_{k}({\tilde {M}})$ when $n=2k+1$. So this possibly destroys what has already been achieved. However, if $\theta (f,b)$ is zero, surgeries can be arranged in such a way that this does not happen.
Example
In the simply connected case the following happens.
If $n=2k+1$ there is no obstruction.
If $n=4l$ then the surgery obstruction can be calculated as the difference of the signatures of M and X.
If $n=4l+2$ then the surgery obstruction is the Arf invariant of the associated kernel quadratic form over $\mathbb {Z} _{2}$.
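In formulas (this is a summary under the standard identification $L_{4l}(\mathbf {Z} )\cong \mathbf {Z} $ given by one eighth of the signature, not a quotation from the references below):
${\begin{aligned}n=4l:&\quad \theta (f,b)={\tfrac {1}{8}}{\bigl (}\sigma (M)-\sigma (X){\bigr )}\in L_{4l}(\mathbf {Z} )\cong \mathbf {Z} ,\\n=4l+2:&\quad \theta (f,b)=\operatorname {Arf} (q)\in L_{4l+2}(\mathbf {Z} )\cong \mathbf {Z} /2,\end{aligned}}$
where $q$ denotes the kernel quadratic form over $\mathbb {Z} _{2}$ mentioned above.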
References
• Browder, William (1972), Surgery on simply-connected manifolds, Berlin, New York: Springer-Verlag, MR 0358813
• Lück, Wolfgang (2002), "A basic introduction to surgery theory" (PDF), Topology of high-dimensional manifolds, No. 1, 2 (Trieste, 2001), ICTP Lect. Notes, vol. 9, Abdus Salam Int. Cent. Theoret. Phys., Trieste, pp. 1–224
• Ranicki, Andrew (2002), Algebraic and Geometric Surgery, Oxford Mathematical Monographs, Clarendon Press, ISBN 978-0-19-850924-0, MR 2061749
• Wall, C. T. C. (1999), Surgery on compact manifolds, Mathematical Surveys and Monographs, vol. 69 (2nd ed.), Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0942-6, MR 1687388
| Wikipedia |
Surjective function
In mathematics, a surjective function (also known as a surjection, or onto function /ˈɒn.tuː/) is a function f such that, for every element y of its codomain, there is at least one element x of its domain with f(x) = y. In other words, every element of the function's codomain is the image of at least one element of its domain.[1][2] It is not required that x be unique; the function f may map one or more elements of its domain to the same element of its codomain.
The term surjective and the related terms injective and bijective were introduced by Nicolas Bourbaki,[3][4] a group of mainly French 20th-century mathematicians who, under this pseudonym, wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. The French word sur means over or above, and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain.
Any function induces a surjection by restricting its codomain to the image of its domain. Every surjective function has a right inverse assuming the axiom of choice, and every function with a right inverse is necessarily a surjection. The composition of surjective functions is always surjective. Any function can be decomposed into a surjection and an injection.
Definition
Further information on notation: Function (mathematics) § Notation
A surjective function is a function whose image is equal to its codomain. Equivalently, a function $f$ with domain $X$ and codomain $Y$ is surjective if for every $y$ in $Y$ there exists at least one $x$ in $X$ with $f(x)=y$.[1] Surjections are sometimes denoted by a two-headed rightwards arrow (U+21A0 ↠ RIGHTWARDS TWO HEADED ARROW),[5] as in $f\colon X\twoheadrightarrow Y$.
Symbolically,
If $f\colon X\rightarrow Y$, then $f$ is said to be surjective if
$\forall y\in Y,\,\exists x\in X,\;\;f(x)=y$.[2][6]
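For finite sets, surjectivity can be checked directly by comparing the image of the domain with the codomain. The following Python sketch (the function and set names are illustrative, not drawn from the sources cited here) does exactly that for two of the examples discussed below.

```python
# A minimal sketch for finite sets: f is surjective onto the codomain
# exactly when the image of the domain equals the codomain.
def is_surjective(f, domain, codomain):
    """Return True if every element of the codomain is the image of some element of the domain."""
    return set(map(f, domain)) == set(codomain)

# f(n) = n mod 2 maps {0, ..., 9} onto {0, 1}, so it is surjective:
print(is_surjective(lambda n: n % 2, range(10), {0, 1}))                  # True
# g(x) = x**2 from {-2, ..., 2} to {-1, 0, 1, 2, 3, 4} misses -1, 2 and 3:
print(is_surjective(lambda x: x * x, range(-2, 3), {-1, 0, 1, 2, 3, 4}))  # False
```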
Examples
For more examples, see § Gallery.
• For any set X, the identity function idX on X is surjective.
• The function f : Z → {0, 1} defined by f(n) = n mod 2 (that is, even integers are mapped to 0 and odd integers to 1) is surjective.
• The function f : R → R defined by f(x) = 2x + 1 is surjective (and even bijective), because for every real number y, we have an x such that f(x) = y: such an appropriate x is (y − 1)/2.
• The function f : R → R defined by f(x) = x3 − 3x is surjective, because the pre-image of any real number y is the solution set of the cubic polynomial equation x3 − 3x − y = 0, and every cubic polynomial with real coefficients has at least one real root. However, this function is not injective (and hence not bijective), since, for example, the pre-image of y = 2 is {x = −1, x = 2}. (In fact, the pre-image of every y with −2 ≤ y ≤ 2 has more than one element.)
• The function g : R → R defined by g(x) = x2 is not surjective, since there is no real number x such that x2 = −1. However, the function g : R → R≥0 defined by g(x) = x2 (with the restricted codomain) is surjective, since for every y in the nonnegative real codomain Y, there is at least one x in the real domain X such that x2 = y.
• The natural logarithm function ln : (0, +∞) → R is surjective and even bijective (it maps the set of positive real numbers onto the set of all real numbers). Its inverse, the exponential function, if defined with the set of real numbers as the domain, is not surjective (as its range is the set of positive real numbers).
• The matrix exponential is not surjective when seen as a map from the space of all n×n matrices to itself. It is, however, usually defined as a map from the space of all n×n matrices to the general linear group of degree n (that is, the group of all n×n invertible matrices). Under this definition, the matrix exponential is surjective for complex matrices, although still not surjective for real matrices.
• The projection from a cartesian product A × B to one of its factors is surjective, unless the other factor is empty.
• In a 3D video game, vectors are projected onto a 2D flat screen by means of a surjective function.
Properties
A function is bijective if and only if it is both surjective and injective.
If (as is often done) a function is identified with its graph, then surjectivity is not a property of the function itself, but rather a property of the mapping,[7] that is, of the function together with its codomain. Unlike injectivity, surjectivity cannot be read off of the graph of the function alone.
Surjections as right invertible functions
The function g : Y → X is said to be a right inverse of the function f : X → Y if f(g(y)) = y for every y in Y (g can be undone by f). In other words, g is a right inverse of f if the composition f o g of g and f in that order is the identity function on the domain Y of g. The function g need not be a complete inverse of f because the composition in the other order, g o f, may not be the identity function on the domain X of f. In other words, f can undo or "reverse" g, but cannot necessarily be reversed by it.
Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a right inverse is equivalent to the axiom of choice.
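For finite sets the construction of a right inverse needs no appeal to the axiom of choice and can be carried out explicitly. The following Python sketch (illustrative names only, not drawn from the sources cited here) picks, for each value in the image, one preimage.

```python
# A minimal sketch: build a right inverse g of a surjection f on a finite domain
# by remembering one preimage for each value that f takes.
def right_inverse(f, domain):
    choice = {}
    for x in domain:
        choice.setdefault(f(x), x)   # keep the first preimage found for each value
    return lambda y: choice[y]       # defined on every y in the image of f

f = lambda n: n % 3                  # a surjection from {0, ..., 8} onto {0, 1, 2}
g = right_inverse(f, range(9))
print(all(f(g(y)) == y for y in {0, 1, 2}))   # True: f o g is the identity on {0, 1, 2}
```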
If f : X → Y is surjective and B is a subset of Y, then f(f −1(B)) = B. Thus, B can be recovered from its preimage f −1(B).
For example, in the first illustration above, there is some function g such that g(C) = 4. There is also some function f such that f(4) = C. It doesn't matter that g(C) can also equal 3; it only matters that f "reverses" g.
Surjections as epimorphisms
A function f : X → Y is surjective if and only if it is right-cancellative:[8] given any functions g,h : Y → Z, whenever g o f = h o f, then g = h. This property is formulated in terms of functions and their composition and can be generalized to the more general notion of the morphisms of a category and their composition. Right-cancellative morphisms are called epimorphisms. Specifically, surjective functions are precisely the epimorphisms in the category of sets. The prefix epi is derived from the Greek preposition ἐπί meaning over, above, on.
Any morphism with a right inverse is an epimorphism, but the converse is not true in general. A right inverse g of a morphism f is called a section of f. A morphism with a right inverse is called a split epimorphism.
Surjections as binary relations
Any function with domain X and codomain Y can be seen as a left-total and right-unique binary relation between X and Y by identifying it with its function graph. A surjective function with domain X and codomain Y is then a binary relation between X and Y that is right-unique and both left-total and right-total.
Cardinality of the domain of a surjection
The cardinality of the domain of a surjective function is greater than or equal to the cardinality of its codomain: If f : X → Y is a surjective function, then X has at least as many elements as Y, in the sense of cardinal numbers. (The proof appeals to the axiom of choice to show that a function g : Y → X satisfying f(g(y)) = y for all y in Y exists. g is easily seen to be injective, thus the formal definition of |Y| ≤ |X| is satisfied.)
Specifically, if both X and Y are finite with the same number of elements, then f : X → Y is surjective if and only if f is injective.
Given two sets X and Y, the notation X ≤* Y is used to say that either X is empty or that there is a surjection from Y onto X. Using the axiom of choice one can show that X ≤* Y and Y ≤* X together imply that |Y| = |X|, a variant of the Schröder–Bernstein theorem.
Composition and decomposition
The composition of surjective functions is always surjective: If f and g are both surjective, and the codomain of g is equal to the domain of f, then f o g is surjective. Conversely, if f o g is surjective, then f is surjective (but g, the function applied first, need not be). These properties generalize from surjections in the category of sets to any epimorphisms in any category.
Any function can be decomposed into a surjection and an injection: For any function h : X → Z there exist a surjection f : X → Y and an injection g : Y → Z such that h = g o f. To see this, define Y to be the set of preimages h−1(z) where z is in h(X). These preimages are disjoint and partition X. Then f carries each x to the element of Y which contains it, and g carries each element of Y to the point in Z to which h sends its points. Then f is surjective since it is a projection map, and g is injective by definition.
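The decomposition described above can be carried out concretely for finite sets. In the following Python sketch (illustrative names, not a quotation of any source cited here), each x is sent to its preimage class under h, and each class is sent to the common value that h takes on it.

```python
# A minimal sketch of the factorization h = g o f on a finite domain:
# f is a surjection onto the set of preimage classes, g an injection into the codomain.
def decompose(h, domain):
    fibres = {}
    for x in domain:
        fibres.setdefault(h(x), []).append(x)
    cls = {z: frozenset(xs) for z, xs in fibres.items()}  # Y = set of preimage classes
    f = lambda x: cls[h(x)]                               # surjection from the domain onto Y
    g = {y: z for z, y in cls.items()}                    # injection from Y into the codomain
    return f, g

h = lambda n: n // 3                                      # from {0, ..., 8} into the integers
f, g = decompose(h, range(9))
print(all(g[f(x)] == h(x) for x in range(9)))             # True: h factors as g o f
```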
Induced surjection and induced bijection
Any function induces a surjection by restricting its codomain to its range. Any surjective function induces a bijection defined on a quotient of its domain by collapsing all arguments mapping to a given fixed image. More precisely, every surjection f : A → B can be factored as a projection followed by a bijection as follows. Let A/~ be the equivalence classes of A under the following equivalence relation: x ~ y if and only if f(x) = f(y). Equivalently, A/~ is the set of all preimages under f. Let P(~) : A → A/~ be the projection map which sends each x in A to its equivalence class [x]~, and let fP : A/~ → B be the well-defined function given by fP([x]~) = f(x). Then f = fP o P(~).
Space of surjections
Given fixed A and B, one can form the set of surjections A ↠ B. The cardinality of this set is one of the twelve aspects of Rota's Twelvefold way, and is given by $ |B|!{\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}$, where $ {\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}$ denotes a Stirling number of the second kind.
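The count can also be computed directly by inclusion–exclusion, which makes the formula above concrete. The following Python sketch (illustrative; it assumes Python 3.8+ for math.comb) counts the surjections from an a-element set onto a b-element set.

```python
from math import comb

# A minimal sketch: by inclusion-exclusion, the number of surjections from an
# a-element set onto a b-element set is sum_j (-1)^j C(b, j) (b - j)^a,
# which equals b! times the Stirling number of the second kind S(a, b).
def surjections(a, b):
    return sum((-1) ** j * comb(b, j) * (b - j) ** a for j in range(b + 1))

print(surjections(4, 2))   # 14, which is 2! * S(4, 2) with S(4, 2) = 7
print(surjections(3, 4))   # 0: there is no surjection onto a strictly larger finite set
```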
Gallery
• A non-injective surjective function (surjection, not a bijection)
• An injective surjective function (bijection)
• An injective non-surjective function (injection, not a bijection)
• A non-injective non-surjective function (also not a bijection)
• Surjective composition: the first function need not be surjective.
• Non-surjective functions in the Cartesian plane. Although some parts of the function are surjective, where elements y in Y do have a value x in X such that y = f(x), some parts are not. Left: There is y0 in Y, but there is no x0 in X such that y0 = f(x0). Right: There are y1, y2 and y3 in Y, but there are no x1, x2, and x3 in X such that y1 = f(x1), y2 = f(x2), and y3 = f(x3).
• Interpretation for surjective functions in the Cartesian plane, defined by the mapping f : X → Y, where y = f(x), X = domain of function, Y = range of function. Every element in the range is mapped onto from an element in the domain, by the rule f. There may be a number of domain elements which map to the same range element. That is, every y in Y is mapped from an element x in X, and more than one x can map to the same y. Left: Only one domain is shown, which makes f surjective. Right: two possible domains X1 and X2 are shown.
See also
• Bijection, injection and surjection
• Cover (algebra)
• Covering map
• Enumeration
• Fiber bundle
• Index set
• Section (category theory)
References
1. "Injective, Surjective and Bijective". www.mathsisfun.com. Retrieved 2019-12-07.
2. "Bijection, Injection, And Surjection | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-12-07.
3. Miller, Jeff, "Injection, Surjection and Bijection", Earliest Uses of Some of the Words of Mathematics, Tripod.
4. Mashaal, Maurice (2006). Bourbaki. American Mathematical Soc. p. 106. ISBN 978-0-8218-3967-6.
5. "Arrows – Unicode" (PDF). Retrieved 2013-05-11.
6. Farlow, S. J. "Injections, Surjections, and Bijections" (PDF). math.umaine.edu. Retrieved 2019-12-06.
7. T. M. Apostol (1981). Mathematical Analysis. Addison-Wesley. p. 35.
8. Goldblatt, Robert (2006) [1984]. Topoi, the Categorial Analysis of Logic (Revised ed.). Dover Publications. ISBN 978-0-486-45026-1. Retrieved 2009-11-25.
Further reading
• Bourbaki, N. (2004) [1968]. Theory of Sets. Elements of Mathematics. Vol. 1. Springer. doi:10.1007/978-3-642-59309-3. ISBN 978-3-540-22525-6. LCCN 2004110815.
| Wikipedia |
Surjunctive group
In mathematics, a surjunctive group is a group such that every injective cellular automaton with the group elements as its cells is also surjective. Surjunctive groups were introduced by Gottschalk (1973). It is unknown whether every group is surjunctive.
Definition
A cellular automaton consists of a regular system of cells, each containing a symbol from a finite alphabet, together with a uniform rule called a transition function for updating all cells simultaneously based on the values of neighboring cells. Most commonly the cells are arranged in the form of a line or a higher-dimensional integer grid, but other arrangements of cells are also possible. What is required of the cells is that they form a structure in which every cell "looks the same as" every other cell: there is a symmetry of both the arrangement of cells and the rule set that takes any cell to any other cell. Mathematically, this can be formalized by the notion of a group, a set of elements together with an associative and invertible binary operation. The elements of the group can be used as the cells of an automaton, with symmetries generated by the group operation. For instance, a one-dimensional line of cells can be described in this way as the additive group of the integers, and the higher-dimensional integer grids can be described as the free abelian groups.
The collection of all possible states of a cellular automaton over a group can be described as the functions that map each group element to one of the symbols in the alphabet. As a finite set, the alphabet has a discrete topology, and the collection of states can be given the product topology (called a prodiscrete topology because it is the product of discrete topologies). To be the transition function of a cellular automaton, a function from states to states must be a continuous function for this topology, and must also be equivariant with the group action, meaning that shifting the cells prior to applying the transition function produces the same result as applying the function and then shifting the cells. For such functions, the Curtis–Hedlund–Lyndon theorem ensures that the value of the transition function at each group element depends on the previous state of only a finite set of neighboring elements.
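The ingredients above (an alphabet, a group acting on the cells, and an equivariant transition function given by a local rule) can be illustrated concretely. The following Python sketch restricts attention, as an illustrative simplification, to periodic configurations over the group of integers; the names are not taken from the sources cited below.

```python
# A minimal sketch over the group of integers, restricted to periodic configurations
# (represented as tuples): a local rule is applied to every cell simultaneously,
# and the resulting transition function commutes with the shift action of the group.

def transition(config, rule):
    """Apply a two-cell local rule to every cell of a periodic configuration at once."""
    n = len(config)
    return tuple(rule(config[i], config[(i + 1) % n]) for i in range(n))

def shift(config, k=1):
    """The action of the group element k on a configuration: translate by k cells."""
    n = len(config)
    return tuple(config[(i + k) % n] for i in range(n))

xor_rule = lambda a, b: a ^ b        # a simple local rule on the alphabet {0, 1}
config = (0, 1, 1, 0, 1)

# Equivariance: updating a shifted configuration equals shifting the updated one.
print(transition(shift(config), xor_rule) == shift(transition(config, xor_rule)))   # True
```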
A state transition function is a surjective function when every state has a predecessor (there can be no Garden of Eden). It is an injective function when no two states have the same successor. A surjunctive group is a group with the property that, when its elements are used as the cells of cellular automata, every injective transition function of a cellular automaton is also surjective. Equivalently, summarizing the definitions above, a group $G$ is surjunctive if, for every finite set $S$, every continuous equivariant injective function $f:S^{G}\to S^{G}$ is also surjective.[1] The implication from injectivity to surjectivity is a form of the Garden of Eden theorem, and the cellular automata defined from injective and surjective transition functions are reversible.
Examples
Examples of surjunctive groups include all locally residually finite groups,[2] all free groups,[2] all subgroups of surjunctive groups,[3] all abelian groups,[2] all sofic groups,[4] and every locally surjunctive group.[3]
When he introduced surjunctive groups in 1973, Gottschalk observed that there were no known examples of non-surjunctive groups. As of 2014, it is still unknown whether every group is surjunctive.[5]
See also
• Ax–Grothendieck theorem, an analogous result for polynomials
Notes
1. Ceccherini-Silberstein & Coornaert (2010) p.57
2. Ceccherini-Silberstein & Coornaert (2010) p.60
3. Ceccherini-Silberstein & Coornaert (2010) p.58
4. Ceccherini-Silberstein & Coornaert (2010) p.276
5. Šunić (2014).
References
• Ceccherini-Silberstein, Tullio; Coornaert, Michel (2010), "Surjunctive groups", Cellular Automata and Groups, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, doi:10.1007/978-3-642-14034-1_3, ISBN 978-3-642-14033-4, MR 2683112, Zbl 1218.37004
• Gottschalk, Walter (1973), "Some general dynamical notions", Recent Advances in Topological Dynamics (Proc. Conf. Topological Dynamics, Yale Univ., New Haven, Conn., 1972; in honor of Gustav Arnold Hedlund), Lecture Notes in Math., vol. 318, Berlin, New York: Springer-Verlag, pp. 120–125, doi:10.1007/BFb0061728, ISBN 978-3-540-06187-8, MR 0407821, Zbl 0255.54035
• Šunić, Zoran (2014), "Cellular automata and groups, by Tullio Ceccherini-Silberstein and Michel Coornaert (book review)", Bulletin of the American Mathematical Society, 51 (2): 361–366, doi:10.1090/S0273-0979-2013-01425-3.
| Wikipedia |
Survey methodology
Survey methodology is "the study of survey methods".[1] As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.
Researchers carry out statistical surveys with a view towards making statistical inferences about the population being studied; such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses all exemplify quantitative research that uses survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, like questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, such as marketing research, psychology, health-care provision and sociology.
Overview
A single survey is made of at least a sample (or full population in the case of a census), a method of data collection (e.g., a questionnaire) and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research is dependent on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The persons replying to a survey are called respondents, and depending on the questions asked their answers may represent themselves as individuals, their households, employers, or other organization they represent.
Survey methodology as a scientific field seeks to identify principles about sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost. Cost constraints are sometimes framed as improving quality within cost constraints, or alternatively, reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field focus on survey errors empirically and others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.[2]
The most important methodological challenges of a survey methodologist include making decisions on how to:[2]
• Identify and select potential sample members.
• Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond)
• Evaluate and test questions.
• Select the mode for posing questions and collecting responses.
• Train and supervise interviewers (if they are involved).
• Check data files for accuracy and internal consistency.
• Adjust survey estimates to correct for identified errors.
Selecting samples
The sample is chosen from the sampling frame, which consists of a list of all members of the population of interest.[3] The goal of a survey is not to describe the sample, but the larger population. This generalizing ability is dependent on the representativeness of the sample, as stated above. Each member of the population is termed an element. There are frequent difficulties one encounters while choosing a representative sample. One common error that results is selection bias, which arises when the procedures used to select a sample result in over-representation or under-representation of some significant aspect of the population. For instance, if the population of interest consists of 75% females and 25% males, and the sample consists of 40% females and 60% males, females are under-represented while males are over-represented. In order to minimize selection bias, stratified random sampling is often used. This is when the population is divided into sub-populations called strata, and random samples are drawn from each of the strata, or elements are drawn for the sample on a proportional basis.
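The following Python sketch illustrates proportional stratified random sampling as described above; the data layout and function names are illustrative assumptions rather than part of any standard survey-sampling library.

```python
import random

# A minimal sketch of proportional stratified random sampling: draw a simple random
# sample from every stratum, with each stratum's sample size proportional to its
# share of the population (rounding can make the total differ slightly from n).
def stratified_sample(population, stratum_of, sample_size, seed=0):
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for members in strata.values():
        k = round(sample_size * len(members) / len(population))
        sample.extend(rng.sample(members, k))
    return sample

# A population that is 75% female and 25% male, sampled proportionally (n = 40):
population = [("F", i) for i in range(750)] + [("M", i) for i in range(250)]
sample = stratified_sample(population, lambda unit: unit[0], sample_size=40)
print(sum(1 for s in sample if s[0] == "F"),
      sum(1 for s in sample if s[0] == "M"))   # 30 10
```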
Modes of data collection
There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including
1. costs,
2. coverage of the target population,
3. flexibility of asking questions,
4. respondents' willingness to participate and
5. response accuracy.
Different methods create mode effects that change how respondents answer, and different methods have different advantages. The most common modes of administration can be summarized as:[4]
• Telephone
• Mail (post)
• Online surveys
• Personal in-home surveys
• Personal mall or street intercept survey
• Hybrids of the above.
Research designs
There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies.[3]
Cross-sectional studies
In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once.[3] A cross-sectional study describes characteristics of that population at one time, but cannot give any insight as to the causes of population characteristics because it is a predictive, correlational design.
Successive independent samples studies
A successive independent samples design draws multiple random samples from a population at one or more times.[3] This design can study changes within a population, but not changes within individuals because the same individuals are not surveyed more than once. Such studies cannot, therefore, reliably identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population, and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than time. In addition, the questions must be asked in the same way so that responses can be compared directly.
Longitudinal studies
Longitudinal studies take measure of the same random sample at multiple time points.[3] Unlike with a successive independent samples design, this design measures the differences in individual participants' responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce that cannot be tested experimentally.
However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than a 15-minute interview, and participants frequently leave the study before the final assessment. In addition, such studies sometimes require data collection to be confidential or anonymous, which creates additional difficulty in linking participants' responses over time. One potential solution is the use of a self-generated identification code (SGIC).[5] These codes usually are created from elements like 'month of birth' and 'first letter of the mother's middle name.' Some recent anonymous SGIC approaches have also attempted to minimize use of personalized data even further, instead using questions like 'name of your first pet'.[6][7] Depending on the approach used, the ability to match some portion of the sample can be lost.
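As an illustration of the idea, the following Python sketch combines a few stable, non-identifying answers into a matching code. The particular questions and the hashing step are illustrative assumptions, not a published SGIC protocol.

```python
import hashlib

# A minimal sketch: answers that a respondent can reproduce at every wave are
# normalized and hashed into a short code, so responses can be linked over time
# without storing names or contact details.
def self_generated_code(birth_month, mother_middle_initial, first_pet_name):
    raw = f"{birth_month:02d}|{mother_middle_initial.upper()}|{first_pet_name.strip().lower()}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:10]

# The same answers given at wave 1 and wave 2 yield the same code:
print(self_generated_code(4, "r", "Biscuit") == self_generated_code(4, "R", " biscuit "))  # True
```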
In addition, the overall attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those that did not, to see if they are statistically different populations. Respondents may also try to be self-consistent in spite of changes to survey answers.
Questionnaires
Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately.[3] Questionnaires should produce valid and reliable measures of demographic variables and should yield valid and reliable measures of the individual differences that self-report scales capture.[3]
Questionnaires as tools
A variable category that is often measured in survey research are demographic variables, which are used to depict the characteristics of the people surveyed in the sample.[3] Demographic variables include such measures as ethnicity, socioeconomic status, race, and age.[3] Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale.[3] Self-report scales are also used to examine the disparities among people on scale items.[3] These self-report scales, which are usually presented in questionnaire form, are one of the most used instruments in psychology, and thus it is important that the measures be constructed carefully, while also being reliable and valid.[3]
Reliability and validity of self-report measures
Reliable measures of self-report are defined by their consistency.[3] Thus, a reliable self-report measure produces consistent results every time it is executed.[3] A test's reliability can be measured a few ways.[3] First, one can calculate a test-retest reliability.[3] A test-retest reliability entails conducting the same questionnaire to a large sample at two different times.[3] For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test, but rather their position in the score distribution should be similar for both the test and the retest.[3] Self-report measures will generally be more reliable when they have many items measuring a construct.[3] Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample that are being tested.[3] Finally, there will be greater reliability when instructions for the completion of the questionnaire are clear and when there are limited distractions in the testing environment.[3] Contrastingly, a questionnaire is valid if what it measures is what it had originally planned to measure.[3] Construct validity of a measure is the degree to which it measures the theoretical construct that it was originally supposed to measure.[3]
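In practice, test-retest reliability is often summarized by the correlation between the scores from the two administrations. The following Python sketch uses made-up scores and the statistics module (Python 3.10+); it is an illustration, not a complete reliability analysis.

```python
from statistics import correlation   # available in Python 3.10+

# A minimal sketch: the same respondents answer the same questionnaire twice,
# and test-retest reliability is summarized by the correlation of the two score lists.
test_scores   = [12, 15, 9, 20, 17, 11, 14]   # first administration
retest_scores = [13, 14, 10, 19, 18, 10, 15]  # second administration, same respondents

print(round(correlation(test_scores, retest_scores), 3))   # values near 1 indicate consistency
```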
Composing a questionnaire
Six steps can be employed to construct a questionnaire that will produce reliable and valid results.[3] First, one must decide what kind of information should be collected.[3] Second, one must decide how to conduct the questionnaire.[3] Thirdly, one must construct a first draft of the questionnaire.[3] Fourth, the questionnaire should be revised.[3] Next, the questionnaire should be pretested.[3] Finally, the questionnaire should be edited and the procedures for its use should be specified.[3]
Guidelines for the effective wording of questions
The way that a question is phrased can have a large impact on how a research participant will answer the question.[3] Thus, survey researchers must be conscious of their wording when writing survey questions.[3] It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another.[3] There are two different types of questions that survey researchers use when writing a questionnaire: free response questions and closed questions.[3] Free response questions are open-ended, whereas closed questions are usually multiple choice.[3] Free response questions are beneficial because they allow the responder greater flexibility, but they are also very difficult to record and score, requiring extensive coding.[3] Contrastingly, closed questions can be scored and coded more easily, but they diminish expressivity and spontaneity of the responder.[3] In general, the vocabulary of the questions should be very simple and direct, and most should be less than twenty words.[3] Each question should be edited for "readability" and should avoid leading or loaded questions.[3] Finally, if multiple items are being used to measure one construct, some of the items should be worded in the opposite direction to avoid response bias.[3]
A respondent's answer to an open-ended question can be coded into a response scale afterwards,[4] or analysed using more qualitative methods.
Order of questions
Survey researchers should carefully construct the order of questions in a questionnaire.[3] For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end.[3] Contrastingly, if a survey is being administered over the telephone or in person, demographic questions should be administered at the beginning of the interview to boost the respondent's confidence.[3] Another reason to be mindful of question order is that it may cause a survey response effect, in which one question affects how people respond to subsequent questions as a result of priming.
Nonresponse reduction
The following ways have been recommended for reducing nonresponse[8] in telephone and face-to-face surveys:[9]
• Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic will be described. Last, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions on the survey.
• Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers and making schedules for callbacks to respondents who were not reached.
• Short introduction. The interviewer should always start with a short introduction about themselves, giving their name, the institute they are working for, the length of the interview, and the goal of the interview. It can also be useful to make clear that the interviewer is not selling anything: this has been shown to lead to a slightly higher response rate.[10]
• Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy to respond to for the subjects under study.
Brevity is also often cited as increasing response rate. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.[11] A 2010 study looking at 100,000 online surveys found response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with drop-off slowing (for example, only 10% reduction at 40 questions).[12] Other studies showed that quality of response degraded toward the end of long surveys.[13]
Some researchers have also discussed the recipient's role or profession as a potential factor affecting how nonresponse is managed. For example, faxes are not commonly used to distribute surveys, but in a recent study were sometimes preferred by pharmacists, since they frequently receive faxed prescriptions at work but may not always have access to a generally-addressed piece of mail.[14]
Interviewer effects
Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. Main interviewer traits that have been demonstrated to influence survey responses are race,[15] gender,[16] and relative body weight (BMI).[17] These interviewer effects are particularly operant when questions are related to the interviewer trait. Hence, race of interviewer has been shown to affect responses to measures regarding racial attitudes,[18] interviewer sex responses to questions involving gender issues,[19] and interviewer BMI answers to eating and dieting-related questions.[20] While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys and in video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking questions. Interviewer effects are one example of survey response effects.
See also
• Data Documentation Initiative
• Enterprise feedback management (EFM)
• Likert scale
• Official statistics
• Paid survey
• Quantitative marketing research
• Questionnaire construction
• Ratio estimator
• Social research
• Total survey error
References
1. Groves, Robert M.; Fowler, Floyd J.; Couper, Mick P.; Lepkowski, James M.; Singer, Eleanor; Tourangeau, Roger (2004). "An introduction to survey methodology". Survey Methodology. Wiley Series in Survey Methodology. Vol. 561 (2 ed.). Hoboken, New Jersey: John Wiley & Sons (published 2009). p. 3. ISBN 9780470465462. Retrieved 27 August 2020. [...] survey methodology is the study of survey methods. It is the study of sources of error in surveys and how to make the numbers produced by the surveys as accurate as possible.
2. Groves, R.M.; Fowler, F. J.; Couper, M.P.; Lepkowski, J.M.; Singer, E.; Tourangeau, R. (2009). Survey Methodology. New Jersey: John Wiley & Sons. ISBN 978-1-118-21134-2.
3. Shaughnessy, J.; Zechmeister, E.; Jeanne, Z. (2011). Research methods in psychology (9th ed.). New York, NY: McGraw Hill. pp. 161–175. ISBN 9780078035180.
4. Mellenbergh, G.J. (2008). Chapter 9: Surveys. In H.J. Adèr & G.J. Mellenbergh (Eds.) (with contributions by D.J. Hand), Advising on Research Methods: A consultant's companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing.
5. Audette, Lillian M.; Hammond, Marie S.; Rochester, Natalie K. (February 2020). "Methodological Issues With Coding Participants in Anonymous Psychological Longitudinal Studies". Educational and Psychological Measurement. 80 (1): 163–185. doi:10.1177/0013164419843576. ISSN 0013-1644. PMC 6943988. PMID 31933497.
6. Agley, Jon; Tidd, David; Jun, Mikyoung; Eldridge, Lori; Xiao, Yunyu; Sussman, Steve; Jayawardene, Wasantha; Agley, Daniel; Gassman, Ruth; Dickinson, Stephanie L. (February 2021). "Developing and Validating a Novel Anonymous Method for Matching Longitudinal School-Based Data". Educational and Psychological Measurement. 81 (1): 90–109. doi:10.1177/0013164420938457. ISSN 0013-1644. PMC 7797962. PMID 33456063.
7. Calatrava, Maria; de Irala, Jokin; Osorio, Alfonso; Benítez, Edgar; Lopez-del Burgo, Cristina (2021-08-12). "Matched and Fully Private? A New Self-Generated Identification Code for School-Based Cohort Studies to Increase Perceived Anonymity". Educational and Psychological Measurement. 82 (3): 465–481. doi:10.1177/00131644211035436. ISSN 0013-1644. PMC 9014735. PMID 35444340. S2CID 238718313.
8. Lynn, P. (2008) "The problem of non-response", chapter 3, 35-55, in International Handbook of Survey Methodology (ed.s Edith de Leeuw, Joop Hox & Don A. Dillman). Erlbaum. ISBN 0-8058-5753-2
9. Dillman, D.A. (1978) Mail and telephone surveys: The total design method. Wiley. ISBN 0-471-21555-4
10. De Leeuw, E.D. (2001). "I am not selling anything: Experiments in telephone introductions". Kwantitatieve Methoden, 22, 41–48.
11. Bogen, Karen (1996). "THE EFFECT OF QUESTIONNAIRE LENGTH ON RESPONSE RATES -- A REVIEW OF THE LITERATURE" (PDF). Proceedings of the Section on Survey Research Methods. American Statistical Association: 1020–1025. Retrieved 2013-03-19.
12. "Does Adding One More Question Impact Survey Completion Rate?". 2010-12-10. Retrieved 2017-11-08.
13. "Respondent engagement and survey length: the long and the short of it". research. April 7, 2010. Retrieved 2013-10-03.
14. Agley, Jon; Meyerson, Beth; Eldridge, Lori; Smith, Carriann; Arora, Prachi; Richardson, Chanel; Miller, Tara (February 2019). "Just the fax, please: Updating electronic/hybrid methods for surveying pharmacists". Research in Social and Administrative Pharmacy. 15 (2): 226–227. doi:10.1016/j.sapharm.2018.10.028. PMID 30416040. S2CID 53281364.
15. Hill, M.E (2002). "Race of the interviewer and perception of skin color: Evidence from the multi-city study of urban inequality". American Sociological Review. 67 (1): 99–108. doi:10.2307/3088935. JSTOR 3088935.
16. Flores-Macias, F.; Lawson, C. (2008). "Effects of interviewer gender on survey responses: Findings from a household survey in Mexico" (PDF). International Journal of Public Opinion Research. 20 (1): 100–110. doi:10.1093/ijpor/edn007. S2CID 33820854. Archived from the original (PDF) on 2019-03-07.
17. Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B.; Van Strien, T. (2011). "BMI of interviewer effects". International Journal of Public Opinion Research. 23 (4): 530–543. doi:10.1093/ijpor/edr026.
18. Anderson, B.A.; Silver, B.D.; Abramson, P.R. (1988). "The effects of the race of the interviewer on race-related attitudes of black respondents in SRC/CPS national election studies". Public Opinion Quarterly. 52 (3): 1–28. doi:10.1086/269108.
19. Kane, E.W.; MacAulay, L.J. (1993). "Interviewer gender and gender attitudes". Public Opinion Quarterly. 57 (1): 1–28. doi:10.1086/269352.
20. Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B. (2011). "Interviewer BMI effects on under- and over-reporting of restrained eating. Evidence from a national Dutch face-to-face survey and a postal follow-up". International Journal of Public Health. 57 (3): 643–647. doi:10.1007/s00038-011-0323-z. PMC 3359459. PMID 22116390.
Further reading
• Abramson, J. J. and Abramson, Z. H. (1999). Survey Methods in Community Medicine: Epidemiological Research, Programme Evaluation, Clinical Trials (5th edition). London: Churchill Livingstone/Elsevier Health Sciences ISBN 0-443-06163-7
• Adèr, H. J., Mellenbergh, G. J., and Hand, D. J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
• Andres, Lesley (2012). "Designing and Doing Survey Research". London: Sage.
• Dillman, D.A. (1978) Mail and telephone surveys: The total design method. New York: Wiley. ISBN 0-471-21555-4
• Engel. U., Jann, B., Lynn, P., Scherpenzeel, A. and Sturgis, P. (2014). Improving Survey Methods: Lessons from Recent Research. New York: Routledge. ISBN 978-0-415-81762-2
• Groves, R.M. (1989). Survey Errors and Survey Costs Wiley. ISBN 0-471-61171-9
• Griffith, James. (2014) "Survey Research in Military Settings." in Routledge Handbook of Research Methods in Military Studies edited by Joseph Soeters, Patricia Shields and Sebastiaan Rietjens.pp. 179–193. New York: Routledge.
• Leung, Wai-Ching (2001) "Conducting a Survey", in Student BMJ, (British Medical Journal, Student Edition), May 2001
• Ornstein, M.D. (1998). "Survey Research." Current Sociology 46(4): iii-136.
• Prince, S. a, Adamo, K. B., Hamel, M., Hardt, J., Connor Gorber, S., & Tremblay, M. (2008). A comparison of direct versus self-report measures for assessing physical activity in adults: a systematic review. International Journal of Behavioral Nutrition and Physical Activity, 5(1), 56. http://doi.org/10.1186/1479-5868-5-56
• Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (Seventh Edition ed.). McGraw–Hill Higher Education. ISBN 0-07-111655-9 (pp. 143–192)
• Singh, S. (2003). Advanced Sampling Theory with Applications: How Michael Selected Amy. Kluwer Academic Publishers, The Netherlands.
• Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan.(2014). Routledge Handbook of Research Methods in Military Studies New York: Routledge.
• Surveys at Curlie
• Shackman, G. What is Program Evaluation? A Beginners Guide 2018
| Wikipedia |
Susan Brown (mathematician)
Susan North Brown (22 December 1937 – 11 August 2017) was a professor of mathematics at University College London[1] and a leading researcher in the field of fluid mechanics.[2]
Susan Brown
Born: 22 December 1937, Southampton
Died: 11 August 2017 (aged 79), London
Alma mater: University of Oxford (BA, 1959); University of Oxford and University of Durham (DPhil)
Scientific career
Fields: Mathematics, Fluid Mechanics
Institutions
• University of Durham
• University of Newcastle
• University College London
Doctoral advisor: George Frederick James Temple[1]
Doctoral students: Peter Daniels[3]
Background and employment
An exact timeline for Susan Brown's career has been difficult to pin down, but a newsletter published by the Department of Mathematical and Physical Sciences at UCL shortly after her death offers a framework for her career achievements and highlights the esteem in which she was held by colleagues and students.[1] Her undergraduate degree in mathematics was from St Hilda's College, Oxford. For about two more years she continued her studies at Oxford, in theoretical fluid mechanics, and then moved to the University of Durham to complete her DPhil in 1964. During this time she held temporary Lectureships at both Durham and Newcastle, and in 1964 she began a Lectureship that started her long association with UCL. From her Lectureship she advanced to a Readership in 1971 and was appointed to a Professorship in 1986. The aforementioned departmental newsletter expresses the belief that Brown was the first woman in the UK to be appointed to a professorship in Mathematics, although Joan E. Walsh was promoted to a professorship in mathematics at the University of Manchester in 1974.[4]
Research
Brown's department described her as an outstanding teacher and someone with an international reputation for her research.[1] She had a productive partnership in fluid dynamics with UCL colleague Keith Stewartson, who also arrived at UCL in 1964. Quoting from the aforementioned departmental newsletter, "Together they published 29 papers and pioneered early developments of 'triple-deck' theory, which, in turn, enabled resolution of long-standing questions in steady and unsteady trailing-edge flows, and addressed associated important aerodynamic applications. Another area for which Brown was especially renowned was a series of discussions of critical layers, especially effects of viscosity and nonlinearity and applications to geophysical flows such as atmospheric jets."[4][1] Google Scholar lists numerous papers for the pair "SN Brown, K Stewartson", several of which are listed below.
Death
Brown died on 11 August 2017, aged 79, in London.[5]
Selected publications
• Brown, S. N.; Stewartson, K. (1969). "Laminar Separation". Annual Review of Fluid Mechanics. 1 (1): 45–72. Bibcode:1969AnRFM...1...45B. doi:10.1146/annurev.fl.01.010169.000401.
• Hocking, L. M.; Stewartson, K.; Stuart, J. T.; Brown, S. N. (1972). "A nonlinear instability burst in plane parallel flow". Journal of Fluid Mechanics. 51 (4): 705. Bibcode:1972JFM....51..705H. doi:10.1017/S0022112072001326. S2CID 122808105.
• Brown, S. N.; Stewartson, K. (1978). "On finite amplitude Benard convection in a cylindrical container". Proceedings of the Royal Society of London A. 360 (1703): 455–469. Bibcode:1978RSPSA.360..455B. doi:10.1098/rspa.1978.0079. S2CID 122462197.
• Brown, S. N.; Stewartson, K. (1978). "The evolution of the critical layer of a rossby wave. Part II". Geophysical & Astrophysical Fluid Dynamics. 10 (1): 1–24. Bibcode:1978GApFD..10....1B. doi:10.1080/03091927808242627.
References
1. "Professor Susan Brown". University College London. 2018-01-02.
2. "Fluid Mechanics". UCL. University College London. 24 May 2018.
3. "Peter Daniels Obituary". The Times.
4. "UCL Women in Mathematics". 1 May 2015. Archived from the original on 16 May 2015.
5. "Susan North Brown's Obituary on The Times". The Times.
| Wikipedia |
Susan Empson
Susan Baker Empson is an American scholar of mathematics education whose work includes longitudinal studies of children's mathematical development, the use of Cognitively Guided Instruction in mathematics education, analyses of childhood understanding of the concept of fractions, and research on the professional development of mathematics educators. She is a professor emerita in the Department of Learning, Teaching, and Curriculum at the University of Missouri, where she held the Richard Miller endowed chair of mathematics education.[1]
Education and career
Empson majored in art at Queens College, Charlotte in North Carolina, with a minor in mathematics; she graduated summa cum laude in 1983. After two years teaching mathematics in Morocco through the Peace Corps, she became a mathematics teacher at A. Philip Randolph Campus High School in New York City in 1987. While in New York, she also went to Teachers College, Columbia University for a master's degree in mathematics education and a minor in educational technology, completed in 1988.[2]
In 1990, she moved to the University of Wisconsin–Madison for continuing graduate study in mathematics education. She completed her Ph.D. there in 1994, with a minor in cognitive science in education.[2] Her dissertation, The Development of Children's Fraction Thinking in a First-grade Classroom, was supervised by Thomas P. Carpenter.[3] She also worked at the university as a lecturer from 1993 to 1995, and stayed on as a post-doctoral researcher from 1994 to 1996.[2]
In 1996, she took a faculty position at the University of Texas at Austin, as an assistant professor in the Department of Curriculum and Instruction. She remained there, progressing through the faculty ranks, until retiring in 2016 as professor emerita. In that year she moved to the University of Missouri, as a professor in the Department of Learning, Teaching, and Curriculum, Richard Miller endowed chair of mathematics education, and associate director of the Institute for Reimagining and Researching STEM Education.[2] She has since retired again, as professor emerita.[1]
Selected publications
Books
• Carpenter, T. P.; Fennema, E.; Franke, M.; Levi, L.; Empson, S. B. (1999), Children’s Mathematics: Cognitively Guided Instruction, Heinemann; 2nd ed., 2015[4]
• Empson, S. B.; Levi, L. (2011), Extending Children's Mathematics: Fractions and Decimals, Heinemann[5]
Articles
• Fennema, Elizabeth; Carpenter, Thomas P.; Franke, Megan L.; Levi, Linda; Jacobs, Victoria R.; Empson, Susan B. (July 1996), "A longitudinal study of learning to use children's thinking in mathematics instruction", Journal for Research in Mathematics Education, 27 (4): 403–434, doi:10.5951/jresematheduc.27.4.0403, JSTOR 749875
• Carpenter, Thomas P.; Franke, Megan L.; Jacobs, Victoria R.; Fennema, Elizabeth; Empson, Susan B. (January 1998), "A longitudinal study of invention and understanding in children's multidigit addition and subtraction", Journal for Research in Mathematics Education, 29 (1): 3–20, doi:10.5951/jresematheduc.29.1.0003, JSTOR 749715
• Empson, Susan B. (September 1999), "Equal sharing and shared meaning: the development of fraction concepts in a first-grade classroom", Cognition and Instruction, 17 (3): 283–342, doi:10.1207/s1532690xci1703_3, JSTOR 3233836
• Empson, Susan B. (July 2003), "Low-performing students and teaching fractions for understanding: an interactional analysis", Journal for Research in Mathematics Education, 34 (4): 305, doi:10.2307/30034786, JSTOR 30034786
• Roschelle, Jeremy; Shechtman, Nicole; Tatar, Deborah; Hegedus, Stephen; Hopkins, Bill; Empson, Susan; Knudsen, Jennifer; Gallagher, Lawrence P. (December 2010), "Integration of technology, curriculum, and professional development for advancing middle school mathematics", American Educational Research Journal, 47 (4): 833–878, doi:10.3102/0002831210367426, JSTOR 40928357
• Jacobs, Victoria R.; Empson, Susan B. (August 2015), "Responding to children's mathematical thinking in the moment: an emerging framework of teaching moves", ZDM, 48 (1–2): 185–197, doi:10.1007/s11858-015-0717-0
References
1. "Susan Empson, emerita", People, University of Missouri College of Education & Human Development, retrieved 2023-03-23
2. Curriculum vitae, retrieved 2023-03-23
3. Susan Empson at the Mathematics Genealogy Project
4. Reviews of Children’s Mathematics:
• Alexander, Nancy P. (August 2000), McCracken, Janet Brown (ed.), "Professional Books", Childhood Education, 76 (5): 333, doi:10.1080/00094056.2000.10522127
• Fadness, Judy (December 1999), Teaching Children Mathematics, 6 (4): 268, JSTOR 41197411
• Holt, Sheila, "Resources for a new school year", Teaching Children Mathematics, 23 (2): 117, doi:10.5951/teacchilmath.23.2.0116, JSTOR 10.5951/teacchilmath.23.2.0116
5. Reviews of Extending Children’s Mathematics:
• Damas, Rebecca (May 2012), "Books", Mathematics Teaching in the Middle School, 17 (9): 571, doi:10.5951/mathteacmiddscho.17.9.0570, JSTOR 10.5951/mathteacmiddscho.17.9.0570
• Willman, Lisa (September 2013), "For your consideration", Teaching Children Mathematics, 20 (2): 124–126, doi:10.5951/teacchilmath.20.2.0120, JSTOR 10.5951/teacchilmath.20.2.0120
| Wikipedia |
Susan Friedlander
Susan Jean Friedlander (née Poate; born January 26, 1946) is an American mathematician. Her research concerns mathematical fluid dynamics, the Euler equations and the Navier-Stokes equations.
Susan Friedlander
Citizenship: American and British
Alma mater: University College London
Known for: Mathematical Fluid Dynamics
Awards
• Kennedy Scholarship
• Medal of the Institute Henri Poincare
Scientific career
Institutions
• University of Southern California
• University of Illinois, Chicago
• Princeton University
Website: www-bcf.usc.edu/~susanfri/
Education
Friedlander graduated from University College, London with a BS in Mathematics in 1967. She was awarded a Kennedy Scholarship to study at MIT, where she earned an MS in 1970. She completed her doctorate in 1972 from Princeton University under the supervision of Louis Norberg Howard.[1][2]
Career
From 1972–1974, Friedlander was a Visiting Member at the Courant Institute of Mathematical Sciences, followed by a year as an instructor at Princeton University. In 1975, she joined the faculty in the Mathematics department at the University of Illinois at Chicago. In 2007, she moved to the University of Southern California where she is Professor of Mathematics and the Director of the Center for Applied Mathematical Sciences.
Service
From 1996–2010, Friedlander served as an officer of the American Mathematical Society in the role of Associate Secretary. In 2005, she was appointed the first female Editor-in-Chief of the Bulletin of the American Mathematical Society.[1][3] Her other leadership activities include membership of the Scientific Advisory Committee of the Mathematical Sciences Research Institute (2001–2006), the Board of Mathematical Sciences and their Applications (2008–2011), the Section A Steering Committee of the American Association for the Advancement of Science (2013–2015), and the MIT Mathematics Department Visiting Committee (2013–2021). She is currently the Chair of the Mathematical Council of the Americas.
Honors and awards
• 1967–1969 Kennedy Scholarship, MIT
• 1993 N.S.F Visiting Professorship for Women, Brown University
• 1995 Elected Honorary Member, Moscow Mathematical Society
• 1998 Medal of the Institute Henri Poincare
• 2003 Senior Scholar Award, University of Illinois at Chicago
• 2012 Fellow, Society for Industrial and Applied Mathematics[4]
• 2012 Fellow, the American Association for the Advancement of Science[5]
• 2012 Fellow, the American Mathematical Society[6]
Personal
Friedlander is married to mathematician Eric Friedlander.[3]
References
1. Curriculum vitae, retrieved 2014-12-18.
2. Susan Jean Friedlander at the Mathematics Genealogy Project
3. UIC Mathematician First Female to Edit Influential Journal, University of Illinois at Chicago, January 26, 2005.
4. List of Fellows of the Society for Industrial and Applied Mathematics.
5. List of Fellows of the American Association for the Advancement of Science.
6. List of Fellows of the American Mathematical Society, retrieved 2014-12-18.
External links
• Home page
| Wikipedia |
Susan Hermiller
Susan Marie Hermiller is an American mathematician specializing in the computational, combinatorial, and geometric theory of groups. She is a Willa Cather Professor of Mathematics and a former Graduate Chair for Mathematics at the University of Nebraska–Lincoln.
Education and career
Hermiller earned a bachelor's degree in mathematics and physics from Ohio State University in 1984. She went to Cornell University for graduate study in mathematics, earning a master's degree in 1987 and completing her Ph.D. in 1992.[1] Her doctoral advisor was Kenneth Brown, and her dissertation was Rewriting Systems for Coxeter Groups.[2]
After postdoctoral research at the Mathematical Sciences Research Institute and the University of Melbourne, she became an assistant professor of mathematics at New Mexico State University in 1994. She moved to the University of Nebraska–Lincoln in 1999.[1]
Service
Hermiller was a founding member of the Committee on Women in Mathematics of the American Mathematical Society in 2013.[3] She also served as the American Mathematical Society representative on the Joint Committee on Women in the Mathematical Sciences from 2011 through 2013.[4] She is also a former AMS Council member at large.[5]
Recognition
Hermiller became the Willa Cather Professor in 2017.[6] She was included in the 2019 class of fellows of the American Mathematical Society "for contributions to combinatorial and geometric group theory and for service to the profession, particularly in support of underrepresented groups".[7]
References
1. Curriculum vitae (PDF), December 3, 2017, retrieved 2018-11-08
2. Susan Hermiller at the Mathematics Genealogy Project
3. Committee on Women in Mathematics (CoWIM) Past Members, American Mathematical Society, retrieved 2018-11-08
4. Joint Committee On Women In the Mathematical Sciences Past Members, American Mathematical Society, retrieved 2018-11-07
5. "AMS Committees". American Mathematical Society. Retrieved 2023-03-27.
6. "Six faculty earn professorships", Nebraska Today, University of Nebraska–Lincoln, March 29, 2017, retrieved 2018-11-08
7. 2019 Class of the Fellows of the AMS, American Mathematical Society, retrieved 2018-11-07
External links
• Home page
| Wikipedia |
Susan Jane Cunningham
Susan Jane Cunningham (March 23, 1842 – January 24, 1921) was an American mathematician instrumental in the founding and development of Swarthmore College.[1] She was born in Maryland, and studied mathematics and astronomy with Maria Mitchell at Vassar College as a special student during 1866–67.[1] She also studied those subjects during several summers at Harvard University, Princeton University, Newnham College, Cambridge, the Royal Observatory, Greenwich, and Williams College.[1]
Susan Jane Cunningham
Susan Jane Cunningham, "A Woman of the Century"
Born: March 23, 1842, Harford County, Maryland, US
Died: January 24, 1921 (aged 78)
Nationality: American
Alma mater: Vassar College
Known for: founding and development of Swarthmore College
Scientific career
Fields: Mathematics, Astronomy
Institutions: Swarthmore College
Early life and education
Susan Jane Cunningham was born in Harford County, Maryland, on March 23, 1842. On her mother's side she was of Quaker descent. Her mother died in 1845, and Susan was left to the care of her grandparents.[2]
She attended a Friends' school until she was fifteen years old, when it was decided that she should prepare for the work of teaching. She was sent to a Friends' boarding-school in Montgomery County, Maryland, for a year, until family cares required her to return home, and she continued her studies in the school nearby.[2]
At nineteen, she became a teacher, and she taught thereafter, with the exception of two years, one of which she spent in the Friends' school in Leghorne, or Attleboro, and the other in Vassar College. She spent her summer vacations in study. She studied at the Harvard College Observatory in the summers of 1874 and 1876, at the Princeton observatory in 1881, at Williamstown in 1883 and 1884 under Prof. Truman Henry Safford, and in Cambridge, England, in 1877, 1878, 1879 and 1882 under a private tutor. In 1887, she studied at the Cambridge Observatory, England, and in 1891 she spent the summer at the Greenwich observatory in England.[2]
Career
In 1869, she became one of the founders of the mathematics and astronomy departments at Swarthmore, and she headed both those divisions until her retirement in 1906.[3] She was Swarthmore's first professor of astronomy, and was professor of mathematics at the college beginning in 1871.[1][4] By 1888, she was Mathematics Department Chair, and that year she was given permission to plan and equip the first observatory in Swarthmore, which housed the astronomy department, and in which she lived until her retirement; it was known as Cunningham Observatory.[1][4] The building still exists on the campus although it is no longer used as an observatory, and is now simply known as the Cunningham Building.[1][4] In 1888, Cunningham was awarded the first honorary doctorate of science ever given by Swarthmore.[3] In 1891, she became one of the first six women to join the New York Mathematical Society, which later became the American Mathematical Society.[5] The very first was Charlotte Angas Scott, and the other four were Mary E. Byrd of Smith College, Mary Watson Whitney of Vassar, Ellen Hayes of Wellesley, and Amy Rayson, who taught mathematics and physics at a private school in New York City.[5] Cunningham was also a member of the Astronomical Society of the Pacific as early as 1891.[6] She was also a founder member of the British Astronomical Association in 1890, resigning in September 1908. She was named a Fellow of the American Association for the Advancement of Science in 1901.[7]
Death
Cunningham died on January 24, 1921, from heart failure. Her funeral service was held on-campus in the Swarthmore College Meeting House, and was attended by many notable figures such as then-Pennsylvania governor William C. Sproul and Pennsylvania State Commissioner of Health Edward Martin.[8]
References
1. "Susan Jane Cunningham". Biographies of Women Mathematicians. Agnes Scott College. Retrieved 24 October 2012.
2. Willard & Livermore 1893, p. 221.
3. Schlup, Leonard C.; Ryan, James G. (2003). Historical Dictionary of the Gilded Age. Armonk, N.Y.: M.E. Sharpe. p. 546. ISBN 978-0-7656-0331-9.
4. Weber, Elizabeth (15 November 1996). "The Cunningham Building: Swarthmore's Other Observatory". The Phoenix. Swarthmore College. Retrieved 24 October 2012.
5. Duren, Peter L.; Askey, Richard; Merzbach, Uta C. (1990). A Century of Mathematics in America (2. [Dr.] ed.). Providence, RI: American Mathemat. Soc. p. 382. ISBN 978-0-8218-0130-7.
6. "List of Members of the Astronomical Society of the Pacific". Publications of the Astronomical Society of the Pacific. The University of Chicago Press. 3 (13): 1–8. 1 January 1891. JSTOR 40666804.
7. "Historic Fellows". American Association for the Advancement of Science. Retrieved 21 April 2021.
8. "Funeral of Dr. Cunningham". The Phoenix. 8 February 1921. Archived from the original on 27 March 2014. Retrieved 27 March 2014.
Attribution
• This article incorporates text from this source, which is in the public domain: Willard, Frances Elizabeth; Livermore, Mary Ashton Rice (1893). "Susan J. Cunningham". A Woman of the Century: Fourteen Hundred-seventy Biographical Sketches Accompanied by Portraits of Leading American Women in All Walks of Life (Public domain ed.). Charles Wells Moulton.
External links
• Works related to Woman of the Century/Susan J. Cunningham at Wikisource
| Wikipedia |
Susan Loepp
Susan Renee Loepp (born 1967)[1] is an American mathematician who works as a professor of mathematics at Williams College.[2] Her research concerns commutative algebra.[3]
Professional career
Loepp graduated from Bethel College (Kansas) in 1989,[3][4] and earned her Ph.D. in 1994 from the University of Texas at Austin, under the supervision of Raymond Heitmann.[5] After postdoctoral studies at the University of Nebraska she took her present faculty position at Williams.[3][4] She has publications in Journal of Algebra and Journal of Pure and Applied Algebra.[3]
Book
With William Wootters, she is the co-author of the book Protecting Information: From Classical Error Correction to Quantum Cryptography (Cambridge University Press, 2006).[4][6][7] The book covers topics in quantum cryptography and quantum computing and the potential impacts of quantum physics. These potential impacts include quantum computers which, if built, could crack our currently used public-key cryptosystems, and quantum cryptography which promises to provide an alternative to these cryptosystems.[8]
Awards and honors
In 2007, Loepp won the Young Alumnus Award from Bethel College.[9] In 2010, she won the Northeastern Section of the Mathematical Association of America’s Teaching Award.[10] In 2012, she won the Deborah and Franklin Tepper Haimo Award for Distinguished College or University Teaching of Mathematics of the Mathematical Association of America, which honors “college or university teachers who have been widely recognized as extraordinarily successful and whose teaching effectiveness has been shown to have had influence beyond their own institutions.”[3][11] In 2013, she was elected as one of the inaugural fellows of the American Mathematical Society.[12]
Loepp was an American Mathematical Society (AMS) Council member at large from 2019 to 2021.[13]
References
1. Birth year from WorldCat identities, retrieved 2019-01-13.
2. Faculty listing, Williams College, retrieved 2014-12-25.
3. Susan Loepp Wins National Award for Excellence in Teaching Mathematics, Williams College, January 27, 2012, retrieved 2014-12-25.
4. Susan Loepp, The National Alliance for Doctoral Studies in the Mathematical Sciences, retrieved 2014-12-25.
5. Susan Renee Loepp at the Mathematics Genealogy Project
6. Review of Protecting Information by Fan Junjie (March 29, 2012), International Association for Cryptologic Research.
7. Review of Protecting Information by Darren Glass (April 5, 2007), MAA Reviews, Mathematical Association of America.
8. Loepp, Susan; Wootters, William K. (2006). Protecting Information: From Classical Error Correction to Quantum Cryptography. doi:10.1017/CBO9780511813719. ISBN 9780521827409. Retrieved 2020-01-17.
9. "Previous Recipients | Bethel College". www.bethelks.edu. Retrieved 2020-01-17.
10. "Awards -- Northeastern Section of the MAA". sections.maa.org. Retrieved 2020-01-17.
11. Deborah and Franklin Tepper Haimo Award - List of Recipients, MAA, retrieved 2014-12-25.
12. List of Fellows of the American Mathematical Society, retrieved 2014-12-25.
13. "AMS Committees". American Mathematical Society. Retrieved 2023-03-29.
| Wikipedia |
Mary Rees
Susan Mary Rees, FRS (born 31 July 1953[1]) is a British mathematician who has been an emeritus professor of mathematics at the University of Liverpool since 2018, specialising in complex dynamical systems.[2][3]
Career
Rees was born in Cambridge. After obtaining her BA in 1974 and MSc in 1975 at St Hugh's College, Oxford, she did research in mathematics under the direction of Bill Parry at the University of Warwick, obtaining a PhD in 1978. Her first postdoctoral position was at the Institute for Advanced Study from 1978 to 1979. Later she worked at the Institut des hautes études scientifiques and the University of Minnesota. Following this she worked at the University of Liverpool until her retirement. She became professor of mathematics in 2002 and retired in 2018, becoming an emeritus professor.
She was awarded a Whitehead Prize of the London Mathematical Society in 1988. The citation[4] notes that, in particular,
Her most spectacular theorem[5] has been to show that in the space of rational maps of the Riemann sphere of degree d ≥ 2 those maps that are ergodic with respect to Lebesgue measure and leave invariant an absolutely continuous probability measure form a set of positive measure.
She also spoke at the ICM at Kyoto in 1990.[6] In recent years, much of Rees' work has focused on the dynamics of quadratic rational maps; i.e. rational maps of the Riemann sphere of degree two, including an extensive monograph.[7] In 2004, she also presented an alternative proof of the Ending Laminations Conjecture of Thurston,[8] which had been proved by Brock, Canary and Minsky shortly before.[9]
FRS
She was elected to a Fellowship of the Royal Society in 2002.
Family
Her father David Rees was also a distinguished mathematician, who worked on Enigma in Hut 6 at Bletchley Park. Her sister Sarah Rees is also a mathematician.[6]
Works
• Mary Rees (2010) "Multiple equivalent matings with the aeroplane polynomial". Ergodic Theory and Dynamical Systems, pp. 20
• Mary Rees (2008) "William Parry FRS 1934–2006". Biographical Memoirs of the Royal Society, 54, pp. 229–243
• Mary Rees (2004) "Teichmüller distance is not $C^{2+\varepsilon }$". Proc. London Math. Soc., 88, pp. 114–134
• Mary Rees (2003) "Views of Parameter Space: Topographer and Resident". Asterisque, 288, pp. 1–418
• Mary Rees (2002) "Teichmüller distance for analytically finite surfaces is $C^{2}$". Proc. London Math. Soc., 85, pp. 686–716
References
1. GRO Register of Births: SEP 1953 4a 294 CAMBRIDGE – Susan M. Rees, mmn = Cushen
2. Mary Rees profile Archived 23 January 2010 at the Wayback Machine, University of Liverpool
3. "Dr. Mary Rees". University of Liverpool. 18 January 2008. Retrieved 2 January 2014.
4. Bulletin of the London Mathematical Society 20 (1988), no. 6, p. 639.
5. Positive measure sets of ergodic rational maps, Ann. Sci. École Norm. Sup. 19 (1986), no. 3, 383–407.
6. EWM. "Mary Reese". European Women in Mathematics. Archived from the original on 16 June 2018. Retrieved 25 February 2018.
7. Views of parameter space: Topographer and resident, Asterisque 288 (2003)
8. The Ending Laminations Theorem direct from Teichmüller geodesics, Preprint, 2004
9. The classification of Kleinian surface groups, II: The Ending Lamination Conjecture, Preprint, 2004
| Wikipedia |
Susan Miller Rambo
Susan Miller Rambo (April 3, 1883 – January 7, 1977) was the second woman awarded a Ph.D. from the University of Michigan and had a long teaching career at Smith College.[1]
Susan Miller Rambo
Born: April 3, 1883, Easton, Pennsylvania
Died: January 7, 1977 (aged 93), Northampton, Massachusetts
Nationality: American
Alma mater: Smith College; University of Michigan
Scientific career
Fields: Mathematics
Institutions: Smith College
Doctoral advisor: Walter Burton Ford
Biography
Born in Easton, Pennsylvania, Susan Rambo was the eldest child of George and Annie Rambo. Her father was a wholesale grocer. She graduated high school in Easton, then entered Smith College, located in Northampton, Massachusetts. After graduating from Smith, she taught high school mathematics in Hoosick Falls, New York until 1908.
Susan Rambo joined the mathematics department at her alma mater in 1908 as an assistant in mathematics and remained there the remainder of her career with promotions to instructor in 1911, assistant professor in 1918, associate professor in 1922 and professor in 1937. She was department chairman from 1934 to 1940 and retired in 1948 as professor emeritus. One of her students was Mabel Gweneth Humphreys.
Susan Rambo never married. From 1918 she shared a house with her colleague Suzan Rose Benedict until the latter’s death in 1942. In 1945, Susan relinquished her life tenure on the house and the proceeds from its sale went to Smith College to be used for scholarships. She died in a Northampton nursing home in 1977.[2]
Graduate education
Starting early in her career at Smith, Susan Rambo began taking graduate courses and was awarded her master's degree in 1913. Her thesis was “A comparative study of analytic and synthetic projective geometry”. In 1916 she took a leave of absence from Smith and studied for her PhD over the next two years at the University of Michigan.[2] Her dissertation, “The point at infinity as a regular point of certain linear difference equations of the second order”,[3] was directed by Walter Burton Ford. In 1920 she received her PhD, two years after returning to Smith.[2]
Memberships
American Mathematical Society. In 1928 Susan Rambo was a delegate from the society to the International Congress of Mathematicians in Bologna, Italy.[2]
Mathematical Association of America[4]
Publications
• 1905 A Defense of Immigration.[5]
• 1946 Review of College Mathematics: A General Introduction, by C. H.Sisam. Science n.s., 104:169.[2]
References
1. Green, Judy; LaDuke, Jeanne (2009). Pioneering Women in American Mathematics — The Pre-1940 PhD's. History of Mathematics. Vol. 34. American Mathematical Society, The London Mathematical Society. p. 10. ISBN 978-0-8218-4376-5.
2. Judy Green and Jeanne LaDuke, “Supplementary Material for Pioneering Women in American Mathematics: The Pre-1940 PhD’s,” 504-505: http://www.ams.org/publications/authors/books/postpub/hmath-34-PioneeringWomen.pdf
3. “Notes”, (Bulletin of the American Mathematical Society, Volume 27, Published by the Society, Lancaster, PA and New York, 1921) p. 92
4. "The Sixth Summer Meeting of the Society," (The American Mathematical Monthly, Volume 28, No. 10, Oct., 1921), 351. https://www.jstor.org/stable/2972156
5. "In Defence of Immigration,” (Smith College Monthly, Volume 12, Number 8, May 1905) p. 324. https://archive.org/details/smith0506smit
External links
• Mathematics Genealogy Project
• “Susan M Rambo Fund,” (SUMMER RESEARCH FELLOWS PROGRAM 2007 Clark Science Center - Smith College RESEARCH OPPORTUNITIES
| Wikipedia |
Susan Montgomery
M. Susan Montgomery (born 2 April 1943 in Lansing, MI) is a distinguished American mathematician whose current research interests concern noncommutative algebras: in particular, Hopf algebras, their structure and representations, and their actions on other algebras. Her early research was on group actions on rings.
M. Susan Montgomery
Born: April 2, 1943, Lansing, Michigan
Nationality: American
Alma mater: B.A., University of Michigan, 1965; Ph.D., University of Chicago, 1969
Known for: Structure and representations of Hopf algebras
Scientific career
Fields: Mathematics
Institutions: USC
Doctoral advisor: Israel Nathan Herstein
Education
Montgomery received her B.A. in 1965 from the University of Michigan and her Ph.D. in Mathematics from the University of Chicago in 1969 under the supervision of I. N. Herstein.
Career
Upon receiving her Ph.D. from Chicago, Montgomery spent one year on the faculty at DePaul University. Montgomery joined the faculty of the University of Southern California (USC) in 1970 and was promoted to the rank of Professor in 1982. She was chair of the Department of Mathematics at USC from 1996 to 1999.[1] Montgomery has spent sabbaticals at the Hebrew University of Jerusalem, the University of Leeds, the University of Wisconsin, the University of Munich, the University of New South Wales, the Mittag-Leffler Institute, and the Mathematical Sciences Research Institute.
Montgomery wrote about a hundred research articles and several books, of which Hopf algebras and their actions on rings is her most cited work. This book includes a discussion of Hopf-Galois theory, an area to which Montgomery has significantly contributed, and an introduction to quantum group theory.
Honors
Montgomery was awarded a Guggenheim Foundation[2] Fellowship in 1984 and a Raubenheimer Outstanding Faculty Award by USC in 1987.
She gave an American Mathematical Society (AMS) Invited Address at the Joint Mathematics Meetings in 1984. In 1995 she gave an Invited Address at the Joint AMS-Israel Math Union Meeting in Jerusalem.[3] In 2009, she gave a plenary lecture at the summer meeting of the Canadian Mathematical Society.[4] She has also given numerous lectures at meetings and universities around the world.
Montgomery was the Principal Lecturer at the Conference Board of the Mathematical Sciences (CBMS) 1992 Conference on Hopf Algebras. Her CBMS monograph Hopf Algebras and their Actions on Rings[5] is highly cited. She has written one other book and has edited five collections of research articles.
She served as an editor for the Journal of Algebra for over 20 years. She was also an editor for the AMS Proceedings, AMS Mathematical Surveys and Monographs, and Advances in Mathematics, and currently is on the editorial boards of Algebras and Representation Theory[6] and of Algebra and Number Theory.[7]
Montgomery has been very active in the American Mathematical Society, serving on the Board of Trustees from 1986 to 1996.[8] She has also served on the Council, the Policy Committee on Publications,[9] and on the Nominating Committee.[10]
In 2013 she was elected to a 3-year term as a Vice-President of the American Mathematical Society.[11] She was also a member of the National Research Council's Board on Mathematical Sciences and Their Applications (BMSA), serving one year on the Executive Committee.
In 2012 she was selected a Fellow of the American Mathematical Society[12][13] and a Fellow of the AAAS.[14][15]
References
1. "USC Chair". Retrieved 19 March 2013.
2. "Guggenheim Fellows Lists". Archived from the original on 2013-03-05.
3. "International Joint Meeting". Retrieved 9 March 2013.
4. "Canadian Math Society Plenary Addresses 2009". Canadian Math Society. Retrieved 11 March 2013.
5. Montgomery, Susan (1993). Hopf Algebras and Their Actions on Rings. Providence, RI: American Mathematical Society. p. 238. ISBN 978-0-8218-0738-5. MR 1243637. Retrieved 16 April 2021.
6. "Algebras and Representation Theory Editorial Board". Springer. Retrieved 9 March 2013.
7. "Algebra and Number Theory Editorial Board". Mathematical Sciences Publishers. Retrieved 9 March 2013.
8. "AMS Board of Trustees" (PDF). American Mathematical Society. Retrieved 20 March 2013.
9. "AMS Publications Committee Members" (PDF). American Mathematical Society. Retrieved 20 March 2013.
10. "AMS Nominating Committee Members" (PDF). American Mathematical Society. Retrieved 20 March 2013.
11. "AMS Election Results".
12. "AMS Fellows List". Retrieved 9 March 2013.
13. "USC Announces AMS Fellows". Retrieved 19 March 2013.
14. "AAAS 2012 Fellows List". Archived from the original on 2013-03-23. Retrieved 2013-03-19.
15. "USC Announces AAAS Fellows". Retrieved 19 March 2013.
External links
• M. Susan Montgomery's Web Site
• Susan Montgomery publications indexed by Google Scholar
• Susan Montgomery's Author Profile Page on MathSciNet
• Kashina, Yevgenia (March 2023). "Susan Montgomery: A Journey in Noncommutative Algebra" (PDF). Notices of the American Mathematical Society. 70 (3): 368–379. doi:10.1090/noti2641.
| Wikipedia |
Susan Morey
Susan Morey is an American mathematician and a professor and chair of the Mathematics department at Texas State University in San Marcos, Texas.[1]
Education and career
Morey received a B.S. in mathematics with Honors from the University of Missouri in 1990 and a Ph.D. in mathematics from Rutgers University in 1995. Her dissertation, The Equations of Rees Algebras of Ideals of Low Codimension, was supervised by Wolmer Vasconcelos.[2] After receiving her Ph.D., Morey held a postdoctoral position at the University of Texas at Austin. She became an assistant professor at Texas State (then Southwest Texas State University) in 1997. Morey was awarded tenure and promotion to associate professor in 2001[3] and promotion to full professor in 2010. She became chair of the mathematics department in 2015.[1] She received the Everette Swinney Excellence in Teaching Award from Texas State in 2016.[4]
Morey is known for her work in commutative algebra, in particular, for work on normal rings and algebraic and combinatorial properties of edge ideals of graphs and hypergraphs. Her work is published in the Journal of Pure and Applied Algebra,[5] the Journal of Algebraic Combinatorics,[6] Communications in Algebra,[7][8] Progress in Commutative Algebra,[9] the Proceedings of the American Mathematical Society,[10] and other journals.
Morey was selected a Fellow of the Association for Women in Mathematics in the Class of 2021 "for inspiring and mentoring several generations of women mathematicians, whom she has helped and encouraged to reach their full potential; and for support of graduate students through the Stokes Alliance for Minority Participation".[11]
References
1. "Faculty Profiles: Susan Morey". Texas State University. Retrieved 7 November 2020.
2. Susan Morey at the Mathematics Genealogy Project
3. "SWT announces faculty promotions and tenures". Texas State University. Retrieved 7 November 2020.
4. "Everette Swinney Faculty Senate Excellence in Teaching Award". Faculty Senate. Texas State University. Retrieved 7 November 2020.
5. Morey, Susan (1996-06-10). "Equations of blowups of ideals of codimension two and three". Journal of Pure and Applied Algebra. 109 (2): 197–211. doi:10.1016/0022-4049(95)00087-9. ISSN 0022-4049.
6. Fouli, Louiza; Morey, Susan (2015-11-01). "A lower bound for depths of powers of edge ideals". Journal of Algebraic Combinatorics. 42 (3): 829–848. arXiv:1409.7020. doi:10.1007/s10801-015-0604-3. ISSN 1572-9192. S2CID 117362461.
7. Morey, Susan (1999). "Stability of associated primes and equality of ordinary and symbolic powers of ideals". Communications in Algebra. 27 (7): 3221–3231. doi:10.1080/00927879908826624. ISSN 0092-7872.
8. Morey, Susan (2010-11-15). "Depths of Powers of the Edge Ideal of a Tree". Communications in Algebra. 38 (11): 4042–4055. arXiv:0908.0553. doi:10.1080/00927870903286900. ISSN 0092-7872. S2CID 8430167.
9. Morey, Susan; Villarreal, Rafael H. (2012). "Edge Ideals: Algebraic and Combinatorial Properties" (PDF). Progress in Commutative Algebra. 1: 85–126.
10. Morey, Susan; Ulrich, Bernd (1996). "Rees algebras of ideals with low codimension". Proceedings of the American Mathematical Society. 124 (12): 3653–3661. doi:10.1090/S0002-9939-96-03470-3. ISSN 0002-9939.
11. "The AWM Fellows Program: 2021 Class of AWM Fellows". Association for Women in Mathematics. Retrieved 7 November 2020.
External links
• Susan Morey's Author Profile Page on MathSciNet
• Susan Morey's Faculty Profile at Texas State University
• Official website
| Wikipedia |
Susanne Brenner
Susanne Cecelia Brenner is an American mathematician whose research concerns the finite element method and related techniques for the numerical solution of differential equations. She is a Boyd Professor[1] at Louisiana State University. Previously, she held the Nicholson Professorship of Mathematics and the Michael F. and Roberta Nesbit McDonald Professorship at Louisiana State University.[2] She currently chairs the editorial committee of the journal Mathematics of Computation.[3] During 2021–2022 she served as President of the Society for Industrial and Applied Mathematics (SIAM).
Education and career
Brenner did her undergraduate studies in mathematics and German at West Chester State College and received a master's degree in mathematics from SUNY Stony Brook.[4] She obtained her Ph.D. from the University of Michigan in 1988 under the joint supervision of Jeffrey Rauch and L. Ridgway Scott; her thesis was entitled "Multigrid Methods for Nonconforming Finite Elements".[5]
She held faculty positions at Clarkson University and the University of South Carolina before moving to Louisiana State University in 2006.[4][6]
Selected publications
• $C^{0}$ interior penalty methods for fourth order elliptic boundary value problems on polygonal domains. J. Sci. Comput. 22/23 (2005), 83–118.
• Korn's inequalities for piecewise vector fields. Math. Comp. 73 (2004), no. 247, 1067–1087.
• Poincaré-Friedrichs inequalities for piecewise $H^{1}$ functions. SIAM J. Numer. Anal. 41 (2003), no. 1, 306–324.
• with L. R. Scott, The Mathematical Theory of Finite Element Methods (Springer-Verlag, 1994; 3rd edition, 2008).
Recognition
She is a fellow of the Society for Industrial and Applied Mathematics,[7] the American Mathematical Society,[8] and the American Association for the Advancement of Science.[9] The Association for Women in Mathematics has included her in the 2020 class of AWM Fellows for "being a role model nationally and internationally due to her widely-known work in finite element methods; for her promotion of women in mathematics via the Women in Numerical Analysis and Scientific Computing network, as mentor of Ph.D.s, and as advisor of graduate and undergraduate students".[10] Brenner was also awarded a Humboldt Forschungspreis (Humboldt Research Award) from the Alexander von Humboldt Foundation in 2005.[11]
She is included in a deck of playing cards featuring notable women mathematicians published by the Association of Women in Mathematics.[12]
References
1. "Susanne C. Brenner named Boyd Professor | LSUMath". www.math.lsu.edu. Retrieved 2019-11-18.
2. Brenner Named Michael F. and Roberta Nesbit McDonald Professor, LSU Mathematics, August 16, 2010. Retrieved 2013-10-15.
3. Mathematics of Computation Editorial Board. Retrieved 2013-10-15.
4. Curriculum vitae. Retrieved 2013-10-15.
5. Susanne Brenner at the Mathematics Genealogy Project
6. "Louisiana State's Susanne Brenner Named President of the Society for Industrial and Applied Mathematics". Women In Academia Report. 2020-01-02. Retrieved 2020-11-20.
7. SIAM Fellows: Class of 2010. Retrieved 2013-10-15.
8. List of Fellows of the American Mathematical Society. Retrieved 2013-10-15.
9. 2012 Fellows, AAAS. Retrieved 2013-10-15.
10. 2020 Class of AWM Fellows, Association for Women in Mathematics, retrieved 2019-11-08
11. "Prof. Dr. Susanne Brenner". Alexander von Humboldt-Foundation. 2017-12-10. Retrieved 2022-11-29.
12. "Mathematicians of EvenQuads Deck 1". awm-math.org. Retrieved 2022-06-18.{{cite web}}: CS1 maint: url-status (link)
| Wikipedia |
Susanne Dierolf
Susanne Dierolf (16 July 1942 – 24 April 2009)[1] was a German mathematician specializing in the theory of topological vector spaces.[2] She was a professor for many years at the University of Trier.[3]
Life
Dierolf was born on 16 July 1942[1] in Bratislava, at the time under German occupation and administered as part of Lower Austria.[4]
She completed her doctorate in 1974 at the Ludwig Maximilian University of Munich, with the dissertation Über Vererbbarkeitseigenschaften in topologischen Vektorräumen supervised by Walter Roelcke.[5] She continued at Munich as an assistant, earning her habilitation there in 1985. She became a Privatdozent at Trier in 1985, and außerplanmäßiger Professor in 1991.[6]
She died on 24 April 2009.[1][3]
Research
Dierolf published 71 mathematics papers and was the advisor to ten doctoral students. Highlights of her research contributions include the solution of four problems of Alexander Grothendieck and of a conjecture of Dmitriĭ A. Raĭkov. Her work often involved the construction of counterexamples, for which she became known as "Mrs. Counterexample".[2]
Beyond the main part of her work on topological vector spaces, she was also a coauthor of a book on topological group theory, Uniform structures on topological groups and their quotients (with Walter Roelcke, McGraw-Hill, 1981).[7]
Recognition
A special volume of the journal Functiones et Approximatio Commentarii Mathematici was published in Dierolf's memory in 2011.[1]
References
1. Bonet, Jose; Domański, Paweł (March 2011), "Susanne Dierolf (16.07.1942 – 24.04.2009)", Functiones et Approximatio Commentarii Mathematici, 44 (1): 5–6
2. Frerick, Leonhard; Wengenroth, Jochen (March 2011), "The mathematical work of Susanne Dierolf", Functiones et Approximatio Commentarii Mathematici, 44 (1): 7–31, doi:10.7169/facm/1301497744, MR 2807896
3. Wengenroth, J. (2009), "Im Garten der Mathematik: Ein Nachruf auf Prof. Dr. Susanne Dierolf", Unijournal (in German), University of Trier (2): 68
4. Birthplace from German National Library catalog, retrieved 2021-11-03
5. Susanne Dierolf at the Mathematics Genealogy Project
6. Curriculum vitae, retrieved 2021-12-03
7. Reviews of Uniform structures on topological groups and their quotients: W. W. Comfort, MR0644485; B. Gelbaum, Zbl 0489.22001
External links
• Archived copy of home page
| Wikipedia |
Susanne Teschl
Susanne Teschl (née Timischl, born 1971)[1] is a biomathematician and professor of mathematics at the University of Applied Sciences Technikum Wien in Vienna, Austria. She is known for her research on the mathematical modeling of breath analysis.
Education and career
Teschl earned a diploma in mathematical physics at the University of Graz in 1995,[2] and completed her Ph.D. there in 1998. Her dissertation, A Global Model for the Cardiovascular and Respiratory System, was supervised by Franz Kappel.[3]
After working for the Austrian Science Fund, she joined the University of Applied Sciences Technikum Wien in 2001, and headed the Department of Applied Mathematics and Natural Sciences there from 2007 to 2010.[2]
Personal life
Teschl is the daughter of Wolfgang Timischl, an Austrian mathematics teacher and textbook author. Her husband, Gerald Teschl, is a mathematical physicist at the University of Vienna.[1][2]
References
1. German National Library catalog entry, retrieved 2021-09-01
2. Susanne Teschl (nee Timischl), University of Applied Sciences Technikum Wien, retrieved 2021-09-01
3. Susanne Teschl at the Mathematics Genealogy Project
External links
• Susanne Teschl publications indexed by Google Scholar
| Wikipedia |
Susie W. Håkansson
Susie Wong Håkansson (born July 15, 1940) is known for her work in mathematics education, teacher preparation and professional development. Since 1999, she has been Executive Director of the California Mathematics Project.
Susie W. Håkansson
Born: July 15, 1940 (age 83), Los Angeles, California
Nationality: American
Alma mater: University of California, Santa Barbara
Scientific career
Fields: Mathematics education
Institutions: University of California, Los Angeles
Doctoral advisor: Noreen Webb
Life
Susie (Susan) Wong Håkansson is a native Southern Californian, born and raised in Los Angeles. Her parents were first-generation Chinese immigrants who owned a small business in the Larchmont area.
After gaining her master's degree, Håkansson taught for several years at Huntington Park High School in the Los Angeles Unified School District and was involved in several projects to improve the teaching and learning of mathematics. She served as a head track and field coach and official specializing in the pole vault, and worked at the 1984 Olympics in Los Angeles. In 1984, she joined Center X in the UCLA Graduate School of Education and Information Studies and served as the site director of the UCLA Mathematics Project. She took on the position of statewide Executive Director for the California Mathematics Project in 1999.
Education
After completing her schooling at Los Angeles High School, she went on to receive her bachelor's and master's degrees in mathematics and a teaching credential from the University of California, Santa Barbara, and her doctorate in education from the University of California, Los Angeles. Her graduate advisor was Dr. Noreen Webb and her thesis was titled The effects of daily problem solving on problem-solving performance, attitudes towards mathematics, and mathematics achievement.[1]
Awards
• Robert Sorgenfrey Distinguished Teaching Award, UCLA, 2009
• Walter Denham Memorial Award (Advocacy for Mathematics Education), California Mathematics Council, 2009[2][3]
References
1. Hakansson, Susan Wong. 1990. The effects of daily problem solving on problem-solving performance, attitudes towards mathematics, and mathematics achievement. Thesis (Ph. D.) – University of California, Los Angeles, 1990.
2. "CAMTE Advisory Board Member Receives Walter Denham Award". Retrieved 5 October 2012.
3. "The Walter Denham Memorial Award". Retrieved 5 October 2012.
| Wikipedia |
Suslin algebra
In mathematics, a Suslin algebra is a Boolean algebra that is complete, atomless, countably distributive, and satisfies the countable chain condition. They are named after Mikhail Yakovlevich Suslin.[1]
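To unpack the terminology (a standard restatement in Boolean-algebra notation, assumed here rather than quoted from a particular source): completeness means that every subset of the algebra has a supremum; atomlessness means that below every nonzero element there is a strictly smaller nonzero element; the countable chain condition means that every family of pairwise disjoint nonzero elements is at most countable; and countable distributivity is the distributive law for countably many suprema,
$\bigwedge _{n<\omega }\bigvee _{i\in I_{n}}b_{n,i}=\bigvee _{f\in \prod _{n<\omega }I_{n}}\ \bigwedge _{n<\omega }b_{n,f(n)}.$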
The existence of Suslin algebras is independent of the axioms of ZFC, and is equivalent to the existence of Suslin trees or Suslin lines.[2]
See also
• Andrei Suslin
References
1. Jech, Thomas (2013-06-29). Set Theory. Springer Science & Business Media. ISBN 978-3-662-22400-7.
2. "The mathematics of Andrei Suslin". www.ams.org. Retrieved 2021-08-08.
| Wikipedia |
Suslin cardinal
In mathematics, a cardinal λ < Θ is a Suslin cardinal if there exists a set P ⊂ 2^ω such that P is λ-Suslin but P is not λ'-Suslin for any λ' < λ. It is named after the Russian mathematician Mikhail Yakovlevich Suslin (1894–1919).[1]
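Here "λ-Suslin" is used in its standard descriptive-set-theoretic sense (an assumed background convention, stated below for $\omega ^{\omega }$ although the article works with subsets of 2^ω, which makes no essential difference): a set $A$ is λ-Suslin if it is the projection of a tree $T$ on $\omega \times \lambda $, that is,
$A=p[T]=\{x\in \omega ^{\omega }:\exists f\in \lambda ^{\omega }\ \forall n<\omega \ (x\restriction n,f\restriction n)\in T\}.$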
See also
• Suslin representation
• Suslin line
• AD+
References
1. Akihiro Kanamori, Tenenbaum and Set theory (PDF), p. 2
• Howard Becker, The restriction of a Borel equivalence relation to a sparse set, Arch. Math. Logic 42, 335–347 (2003), doi:10.1007/s001530200142
| Wikipedia |
Rigidity (K-theory)
In mathematics, rigidity of K-theory encompasses results relating algebraic K-theory of different rings.
Suslin rigidity
Suslin rigidity, named after Andrei Suslin, refers to the invariance of mod-n algebraic K-theory under the base change between two algebraically closed fields: Suslin (1983) showed that for an extension
$E/F$
of algebraically closed fields, and an algebraic variety X / F, there is an isomorphism
$K_{i}(X,\mathbf {Z} /n)\cong K_{i}(X\times _{F}E,\mathbf {Z} /n),\ i\geq 0$
between the mod-n K-theory of coherent sheaves on X, respectively its base change to E. A textbook account of this fact in the case X = F, including the resulting computation of K-theory of algebraically closed fields in characteristic p, is in Weibel (2013).
This result has stimulated various other papers. For example Röndigs & Østvær (2008) show that the base change functor for the mod-n stable A1-homotopy category
$\mathrm {SH} (F,\mathbf {Z} /n)\to \mathrm {SH} (E,\mathbf {Z} /n)$
is fully faithful. A similar statement for non-commutative motives has been established by Tabuada (2018).
Gabber rigidity
Another type of rigidity relates the mod-n K-theory of a henselian ring A to that of its residue field A/m. This rigidity result is referred to as Gabber rigidity, in view of the work of Gabber (1992) who showed that there is an isomorphism
$K_{*}(A,\mathbf {Z} /n)=K_{*}(A/m,\mathbf {Z} /n)$
provided that n≥1 is an integer which is invertible in A.
If n is not invertible in A, the result as above still holds, provided that K-theory is replaced by the fiber of the trace map between K-theory and topological cyclic homology. This was shown by Clausen, Mathew & Morrow (2021).
Applications
Jardine (1993) used Gabber's and Suslin's rigidity result to reprove Quillen's computation of K-theory of finite fields.
References
• Clausen, Dustin; Mathew, Akhil; Morrow, Matthew (2021), "K-theory and topological cyclic homology of henselian pairs", J. Amer. Math. Soc., 34: 411–473, arXiv:1803.10897
• Gabber, Ofer (1992), "K-theory of Henselian local rings and Henselian pairs", Algebraic K-theory, commutative algebra, and algebraic geometry (Santa Margherita Ligure, 1989), Contemp. Math., vol. 126, pp. 59–70, doi:10.1090/conm/126/00509, MR 1156502
• Jardine, J. F. (1993), "The K-theory of finite fields, revisited", K-Theory, 7 (6): 579–595, doi:10.1007/BF00961219, MR 1268594
• Röndigs, Oliver; Østvær, Paul Arne (2008), "Rigidity in motivic homotopy theory", Mathematische Annalen, 341 (3): 651–675, doi:10.1007/s00208-008-0208-5, MR 2399164
• Suslin, Andrei (1983), "On the K-theory of algebraically closed fields", Inventiones Mathematicae, 73 (2): 241–245, doi:10.1007/BF01394024, MR 0714090
• Tabuada, Gonçalo (2018), "Noncommutative rigidity", Mathematische Zeitschrift, 289 (3–4): 1281–1298, arXiv:1703.10599, doi:10.1007/s00209-017-1998-5, MR 3830249
• Weibel, Charles A. (2013), The K-book, Graduate Studies in Mathematics, vol. 145, American Mathematical Society, Providence, RI, ISBN 978-0-8218-9132-2, MR 3076731
| Wikipedia |
Suslin tree
In mathematics, a Suslin tree is a tree of height ω1 such that every branch and every antichain is at most countable. They are named after Mikhail Yakovlevich Suslin.
Every Suslin tree is an Aronszajn tree.
The existence of a Suslin tree is independent of ZFC, and is equivalent to the existence of a Suslin line (shown by Kurepa (1935)) or a Suslin algebra. The diamond principle, a consequence of V=L, implies that there is a Suslin tree, and Martin's axiom MA(ℵ1) implies that there are no Suslin trees.
More generally, for any infinite cardinal κ, a κ-Suslin tree is a tree of height κ such that every branch and antichain has cardinality less than κ. In particular a Suslin tree is the same as an ω1-Suslin tree. Jensen (1972) showed that if V=L then there is a κ-Suslin tree for every infinite successor cardinal κ. Whether the Generalized Continuum Hypothesis implies the existence of an ℵ2-Suslin tree is a longstanding open problem.
See also
• Glossary of set theory
• Kurepa tree
• List of statements independent of ZFC
• List of unsolved problems in set theory
• Suslin's problem
References
• Thomas Jech, Set Theory, 3rd millennium ed., 2003, Springer Monographs in Mathematics, Springer, ISBN 3-540-44085-2
• Jensen, R. Björn (1972), "The fine structure of the constructible hierarchy.", Ann. Math. Logic, 4 (3): 229–308, doi:10.1016/0003-4843(72)90001-0, MR 0309729 erratum, ibid. 4 (1972), 443.
• Kunen, Kenneth (2011), Set theory, Studies in Logic, vol. 34, London: College Publications, ISBN 978-1-84890-050-9, Zbl 1262.03001
• Kurepa, G. (1935), "Ensembles ordonnés et ramifiés", Publ. Math. Univ. Belgrade, 4: 1–138, JFM 61.0980.01, Zbl 0014.39401
| Wikipedia |
Suspension (dynamical systems)
Suspension is a construction passing from a map to a flow. Namely, let $X$ be a metric space, $f:X\to X$ be a continuous map and $r:X\to \mathbb {R} ^{+}$ be a function (roof function or ceiling function) bounded away from 0. Consider the quotient space:
$X_{r}=\{(x,t):0\leq t\leq r(x),x\in X\}/(x,r(x))\sim (f(x),0).$
The suspension of $(X,f)$ with roof function $r$ is the semiflow[1] $f_{t}:X_{r}\to X_{r}$ induced by the time translation $T_{t}:X\times \mathbb {R} \to X\times \mathbb {R} ,(x,s)\mapsto (x,s+t)$.
If $r(x)\equiv 1$, then the quotient space is also called the mapping torus of $(X,f)$.
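The construction above can be simulated directly: a point of $X_{r}$ is represented by a pair $(x,s)$ with $0\leq s<r(x)$, and the time translation increases the height $s$ until it reaches the roof, at which point the identification $(x,r(x))\sim (f(x),0)$ sends the orbit back to the base and applies $f$. The Python sketch below is not taken from any reference here; the base map (the doubling map on $[0,1)$) and the roof function are assumptions chosen only to make the construction concrete.

```python
import math

def f(x):
    return (2.0 * x) % 1.0                       # the base map f : X -> X (doubling map)

def r(x):
    return 1.5 + math.cos(2.0 * math.pi * x)     # roof function, bounded below by 0.5

def suspension_flow(x, s, t):
    """Flow the point (x, s) of X_r forward by time t >= 0.

    The time translation adds t to the height s; whenever the height reaches
    the roof r(x), the identification (x, r(x)) ~ (f(x), 0) returns the orbit
    to the base and applies f.
    """
    s += t
    while s >= r(x):
        s -= r(x)
        x = f(x)
    return x, s

if __name__ == "__main__":
    x, s = 0.1, 0.0
    for t in (0.3, 1.0, 5.0):
        print(t, suspension_flow(x, s, t))
```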
References
1. M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge University Press, 2002.
| Wikipedia |
Suspension (topology)
In topology, a branch of mathematics, the suspension of a topological space X is intuitively obtained by stretching X into a cylinder and then collapsing both end faces to points. One views X as "suspended" between these end points. The suspension of X is denoted by SX[1] or susp(X).[2]: 76
There is a variation of the suspension for pointed space, which is called the reduced suspension and denoted by ΣX. The "usual" suspension SX is sometimes called the unreduced suspension, unbased suspension, or free suspension of X, to distinguish it from ΣX.
Free suspension
The (free) suspension $SX$ of a topological space $X$ can be defined in several ways.
1. $SX$ is the quotient space $(X\times [0,1])/(X\times \{0\},X\times \{1\})$. In other words, it can be constructed as follows:
• Construct the cylinder $X\times [0,1]$.
• Consider the entire set $X\times \{0\}$ as a single point ("glue" all its points together).
• Consider the entire set $X\times \{1\}$ as a single point ("glue" all its points together).
2. Another way to write this is:
$SX:=v_{0}\cup _{p_{0}}(X\times [0,1])\cup _{p_{1}}v_{1}\ =\ \varinjlim _{i\in \{0,1\}}{\bigl (}(X\times [0,1])\hookleftarrow (X\times \{i\})\xrightarrow {p_{i}} v_{i}{\bigr )},$
where $v_{0},v_{1}$ are two points, and for each i in {0,1}, $p_{i}$ is the projection to the point $v_{i}$ (the function that maps everything to $v_{i}$). That means the suspension $SX$ is the result of constructing the cylinder $X\times [0,1]$ and then attaching it by its faces, $X\times \{0\}$ and $X\times \{1\}$, to the points $v_{0},v_{1}$ along the projections $p_{i}:{\bigl (}X\times \{i\}{\bigr )}\to v_{i}$.
3. One can view $SX$ as two cones on X, glued together at their base.
4. $SX$ can also be defined as the join $X\star S^{0},$ where $S^{0}$ is a discrete space with two points.[2]: 76
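For finite simplicial complexes, definition 4 (the join $X\star S^{0}$) becomes a purely combinatorial operation: every simplex of $X$ is kept, and each simplex is additionally coned off to two new vertices. The Python sketch below illustrates this under that assumption (a finite complex given as a set of vertex sets); the helper names are hypothetical and not taken from any library.

```python
from itertools import combinations

def suspension(complex_simplices, north="n", south="s"):
    """Simplicial suspension of a complex given as a set of frozensets of
    vertices (closed under taking faces): keep every simplex and cone each
    one off to the two new vertices of S^0."""
    sx = set(complex_simplices)
    sx.add(frozenset([north]))
    sx.add(frozenset([south]))
    for sigma in complex_simplices:
        sx.add(sigma | {north})
        sx.add(sigma | {south})
    return sx

def full_complex(top_simplices):
    """All nonempty faces of the given top-dimensional simplices."""
    faces = set()
    for sigma in top_simplices:
        verts = tuple(sigma)
        for k in range(1, len(verts) + 1):
            for face in combinations(verts, k):
                faces.add(frozenset(face))
    return faces

if __name__ == "__main__":
    # S^0 is two points; its suspension is a 4-cycle (a triangulated S^1),
    # and suspending again gives the octahedral triangulation of S^2.
    s0 = full_complex([frozenset([0]), frozenset([1])])
    s1 = suspension(s0)
    s2 = suspension(s1, north="n2", south="s2")
    print(len(s1), len(s2))  # 8 simplices for the 4-cycle, 26 for the octahedron
```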
Properties
In rough terms, S increases the dimension of a space by one: for example, it takes an n-sphere to an (n + 1)-sphere for n ≥ 0.
Given a continuous map $f:X\rightarrow Y,$ there is a continuous map $Sf:SX\rightarrow SY$ defined by $Sf([x,t]):=[f(x),t],$ where square brackets denote equivalence classes. This makes $S$ into a functor from the category of topological spaces to itself.
Reduced suspension
If X is a pointed space with basepoint x0, there is a variation of the suspension which is sometimes more useful. The reduced suspension or based suspension ΣX of X is the quotient space:
$\Sigma X=(X\times I)/(X\times \{0\}\cup X\times \{1\}\cup \{x_{0}\}\times I)$.
This is equivalent to taking SX and collapsing the line ({x0} × I) joining the two ends to a single point. The basepoint of the pointed space ΣX is taken to be the equivalence class of (x0, 0).
One can show that the reduced suspension of X is homeomorphic to the smash product of X with the unit circle S1.
$\Sigma X\cong S^{1}\wedge X$
For well-behaved spaces, such as CW complexes, the reduced suspension of X is homotopy equivalent to the unbased suspension.
Adjunction of reduced suspension and loop space functors
Σ gives rise to a functor from the category of pointed spaces to itself. An important property of this functor is that it is left adjoint to the functor $\Omega $ taking a pointed space $X$ to its loop space $\Omega X$. In other words, we have a natural isomorphism
$\operatorname {Maps} _{*}\left(\Sigma X,Y\right)\cong \operatorname {Maps} _{*}\left(X,\Omega Y\right)$
where $X$ and $Y$ are pointed spaces and $\operatorname {Maps} _{*}$ stands for continuous maps that preserve basepoints. This adjunction can be understood geometrically, as follows: $\Sigma X$ arises out of $X$ if a pointed circle is attached to every non-basepoint of $X$, and the basepoints of all these circles are identified and glued to the basepoint of $X$. Now, to specify a pointed map from $\Sigma X$ to $Y$, we need to give pointed maps from each of these pointed circles to $Y$. This is to say we need to associate to each element of $X$ a loop in $Y$ (an element of the loop space $\Omega Y$), and the trivial loop should be associated to the basepoint of $X$: this is a pointed map from $X$ to $\Omega Y$. (The continuity of all involved maps needs to be checked.)
The adjunction is thus akin to currying, taking maps on cartesian products to their curried form, and is an example of Eckmann–Hilton duality.
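The currying analogy can be written out in code. The sketch below shows only the set-level analogy — a map out of a product corresponds to a map into a space of maps — and is not itself a statement about topology.

```python
def curry(f):
    """Turn f : (A x B) -> C into A -> (B -> C)."""
    return lambda a: (lambda b: f(a, b))

def uncurry(g):
    """Turn g : A -> (B -> C) back into (A x B) -> C."""
    return lambda a, b: g(a)(b)

if __name__ == "__main__":
    f = lambda a, b: a + 2 * b
    g = curry(f)
    # The two forms carry the same information, mirroring
    # Maps(Sigma X, Y) = Maps(X, Omega Y).
    assert g(1)(3) == f(1, 3) == uncurry(g)(1, 3)
```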
This adjunction is a special case of the adjunction explained in the article on smash products.
Applications
The reduced suspension can be used to construct a homomorphism of homotopy groups, to which the Freudenthal suspension theorem applies. In homotopy theory, the phenomena which are preserved under suspension, in a suitable sense, make up stable homotopy theory.
Examples
Some examples of suspensions are:[3]: 77, Exercise.1
• The suspension of an n-ball is homeomorphic to the (n+1)-ball.
Desuspension
Main article: Desuspension
Desuspension is an operation partially inverse to suspension.[4]
See also
• Double suspension theorem
• Cone (topology)
• Join (topology)
References
1. Allen Hatcher, Algebraic topology. Cambridge University Presses, Cambridge, 2002. xii+544 pp. ISBN 0-521-79160-X and ISBN 0-521-79540-0
2. Matoušek, Jiří (2007). Using the Borsuk-Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry (2nd ed.). Berlin-Heidelberg: Springer-Verlag. ISBN 978-3-540-00362-5. Written in cooperation with Anders Björner and Günter M. Ziegler
3. Matoušek, Jiří (2007). Using the Borsuk-Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry (2nd ed.). Berlin-Heidelberg: Springer-Verlag. ISBN 978-3-540-00362-5. Written in cooperation with Anders Björner and Günter M. Ziegler , Section 4.3
4. Wolcott, Luke. "Imagining Negative-Dimensional Space" (PDF). forthelukeofmath.com. Retrieved 2015-06-23.
• This article incorporates material from Suspension on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
| Wikipedia |
Suspension of a ring
In algebra, more specifically in algebraic K-theory, the suspension $\Sigma R$ of a ring R is given by[1] $\Sigma (R)=C(R)/M(R)$ where $C(R)$ is the ring of all infinite matrices with coefficients in R having only finitely many nonzero elements in each row or column and $M(R)$ is its ideal of matrices having only finitely many nonzero elements. It is an analog of suspension in topology.
One then has: $K_{i}(R)\simeq K_{i+1}(\Sigma R)$.
References
1. Weibel, III, Ex. 1.15
• C. Weibel "The K-book: An introduction to algebraic K-theory"
| Wikipedia |
Susskind–Glogower operator
The Susskind–Glogower operator, first proposed by Leonard Susskind and J. Glogower,[1] is an operator by which a quantum phase is introduced through an approximate polar decomposition of the creation and annihilation operators.
It is defined as
$V={\frac {1}{\sqrt {aa^{\dagger }}}}a$,
and its adjoint
$V^{\dagger }=a^{\dagger }{\frac {1}{\sqrt {aa^{\dagger }}}}$.
Their commutation relation is
$[V,V^{\dagger }]=|0\rangle \langle 0|$,
where $|0\rangle $ is the vacuum state of the harmonic oscillator.
They may be regarded as (the exponential of) a phase operator because
$Va^{\dagger }aV^{\dagger }=a^{\dagger }a+1$,
where $a^{\dagger }a$ is the number operator. So the exponential of the phase operator displaces the number operator in the same fashion as the momentum operator acts as the generator of translations in quantum mechanics: $\exp \left(i{\frac {{\hat {p}}x_{0}}{\hbar }}\right){\hat {x}}\exp \left(-i{\frac {{\hat {p}}x_{0}}{\hbar }}\right)={\hat {x}}+x_{0}$.
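These identities can be checked numerically on a truncated Fock space. In the sketch below the truncation size N is an assumption, the operators are built by hand as plain matrices (V acts as the lower shift, $V|n\rangle =|n-1\rangle$, which is what the polar-decomposition formula gives away from the vacuum), and the comparison is restricted to the block far from the truncation edge, where the finite matrices faithfully represent the infinite-dimensional operators.

```python
import numpy as np

N = 12
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)       # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T
number = adag @ a                      # number operator a^dagger a = diag(0, 1, ..., N-1)

V = np.diag(np.ones(N - 1), k=1)       # Susskind-Glogower V: V|n> = |n-1>, V|0> = 0
Vdag = V.conj().T

vacuum = np.zeros((N, N))
vacuum[0, 0] = 1.0                     # projector |0><0|

comm = V @ Vdag - Vdag @ V             # should equal |0><0| (away from the edge)
shifted = V @ number @ Vdag            # should equal a^dagger a + 1 (away from the edge)

k = N - 2                              # ignore the last two levels, which feel the cutoff
print(np.allclose(comm[:k, :k], vacuum[:k, :k]))                        # True
print(np.allclose(shifted[:k, :k], (number + np.eye(N))[:k, :k]))       # True
```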
They may be used to solve problems such as atom-field interactions,[2] level-crossings [3] or to define some class of non-linear coherent states,[4] among others.
References
1. Susskind, L.; Glogower, J. (1964). "Quantum mechanical phase and time operator". Physica. 1: 49.
2. Rodríguez-Lara, B. M.; Moya-Cessa, H.M. (2013). "Exact solution of generalized Dicke models via Susskind-Glogower operators". Journal of Physics A. 46 (9): 095301. arXiv:1207.6551. Bibcode:2013JPhA...46i5301R. doi:10.1088/1751-8113/46/9/095301. S2CID 118671292.
3. Rodríguez-Lara, B.M.; Rodríguez-Méndez, D.; Moya-Cessa, H. (2011). "Solution to the Landau-Zener problem via Susskind-Glogower operators". Physics Letters A. 375 (43): 3770–3774. arXiv:1105.4013. Bibcode:2011PhLA..375.3770R. doi:10.1016/j.physleta.2011.08.051. S2CID 118486579.
4. León-Montiel, J.; Moya-Cessa, H.; Soto-Eguibar, F. (2011). "Nonlinear coherent states for the Susskind-Glogower operators" (PDF). Revista Mexicana de Física. 57: 133. arXiv:1303.2516.
| Wikipedia |
Suzan Kahramaner
Suzan Kahramaner (May 21, 1913 – February 22, 2006) was one of the first female mathematicians in Turkish academia.
Born: May 21, 1913, Istanbul, Ottoman Empire
Died: February 22, 2006 (aged 92), Istanbul, Turkey
Nationality: Turkish
Alma mater: Istanbul University
Awards: Jyvaskyla University medal; War of Independence Sword
Fields: Mathematics
Institutions: Istanbul University
Doctoral advisor: Kerim Erim
Education
Kahramaner was born in Üsküdar, in Istanbul. Her mother was Müzeyyen Hanım, the daughter of Halep's district treasurer, and her father was surgeon Dr. Rifki Osman Bey. She studied at the Moda Nümune Inas primary school between 1919 and 1924. After enrolling in Notre Dame De Sion in 1924, she completed her secondary education and obtained her French bachelor's degree in 1934.
In the aftermath of the higher education reforms conducted in 1933, Istanbul Darülfunun, which was the only institution of higher education in the country, was modernized and renamed Istanbul University. Kahramaner began her studies in 1934 in the Mathematics-Astronomy Department of Istanbul University. In addition to its renewed curricula and evolving faculty, Istanbul University housed the scientific research of many famous German academics who had fled pre-World War II Germany.
During her undergraduate studies, she took classes taught by many famous mathematicians, including Ali Yar, Kerim Erim, Richard von Mises, Hilda Geiringer and William Prager.
In 1939, Kahramaner graduated from the Department of Mathematics and Astronomy at Istanbul University, which had built a strong academic reputation through its scholars. She undertook research projects in the field of physics between 1939 and 1940.
In 1943, she started her doctoral studies on coefficient problems in the theory of complex functions under the advisor Kerim Erim, the first mathematician in Turkey with a doctoral degree, who had completed his own doctorate at Friedrich-Alexander University in Germany under his advisor Adolf Hurwitz. Kerim Erim was also the first scientist to direct a doctoral study in mathematics in Turkey. Kahramaner's doctoral thesis was entitled Sur les fonctions analytiques qui prénnent la même valeur ou des valeurs donnés (ou en m points donnés).
Career
At the beginning of the 1940–1941 academic year, since teachers at the time were not appointed to Istanbul but instead to other cities in Turkey, she started working as an assistant teacher at Çamlıca High School for Girls and worked as a mathematics teacher there until 1943. In 1943, she worked as a teaching assistant for the Analysis I and Analysis II courses in the Mathematics Department of the Faculty of Science at Istanbul University. After her doctoral thesis was approved, Kahramaner continued her scientific and academic studies at Istanbul University as one of the first women mathematicians in Turkey with a PhD in mathematics.
She wrote the thesis Sur l'argument des fonctions univalentes for her assistant professorship and, after passing the necessary exams, was granted the title of assistant professor the same year. In January 1957 she was sent to Rolf Nevanlinna at Helsinki University for a year in order to do research on the theory of complex functions. In August of the same year she participated in the Scandinavian Congress of Mathematicians, International Colloquium on the Theory of Functions, in Helsinki, where she had the opportunity to meet famous mathematicians such as Ernst Hölder, Wilhelm Blaschke, Lars Valerian Ahlfors, Paul Montel, Olli Lehto, Mieczysław Biernacki, Alexander Gelfond, Albert Pfluger, Wilfred Kaplan, Walter Hayman and Paul Erdős.
In November 1957, she went to Zurich to continue her scientific research for approximately a year at Zurich University, where Rolf Nevanlinna was lecturing. In August 1958, she attended the International Congress of Mathematicians in Edinburgh held by the International Mathematical Union, where the Fields Medals were awarded to their recipients.
She returned to Istanbul University at the end of 1958. In the autumn of 1959, she won the NATO Scholarship, to which she had applied with a reference from Rolf Nevanlinna and with this scholarship; she worked at Zurich University during 1959–1960.
Afterwards, she conducted scientific research at Stanford University for a month. She continued her research the same year in September at Helsinki University. She returned to her duty in Istanbul University at the end of October 1960.
She participated in the International Congress of Mathematicians held in Stockholm in August 1962, and did research at Helsinki University and Zurich University in September and October of the same year. In August 1966 she was invited to the II Rolf Nevanlinna Colloquium and attended the International Congress of Mathematicians (ICM) in Moscow. After the congress, she carried out her studies at Helsinki University in September and October in order to complete her professorship thesis.
Her professorship thesis, entitled Sur les singularites d'une application différentiable was accepted in 1968 and she received the title of professor the same year. She conducted scientific studies at various universities in London, Paris, Zurich and Nice in 1970. She attended the International Congress of Mathematicians in Nice in 1970.
She also contributed to the founding of the Balkan Union of Mathematicians, which was established in the same year with the participation of Romania, Yugoslavia, Greece, Bulgaria and Turkey. She took part in the Balkan Union of Mathematicians meeting in Athens in 1971 and in the congress organized by the union in September 1971.
In May and July 1973, O. Lehto, Menahem Max Schiffer, O. Tammi, Cevdet Kocak and H. Minc visited her and conducted scientific research with her. In 1976 she joined the Seminar and International Symposium on Function Theory of the Silivri Institute of Research on Mathematics, at which Rolf Nevanlinna was given the title of doctor honoris causa. In the same year, she was awarded the Jyvaskyla University (Finland) medal. She joined the conference organized in Varna in 1977 by the Balkan Union of Mathematicians, and in 1978 she participated in the International Congress of Mathematicians in Helsinki and the Rolf Nevanlinna Colloquium in Joensuu. She was the head of the Department of Mathematics at Istanbul University between 1978 and 1979.
Kahramaner was the PhD supervisor of Ahmet Dernek, Rıfkı Kahramaner and Yasar Polatoglu, and co-supervisor of Semin Akdogan.
In the beginning of 1983, Kahramaner retired from Istanbul University after forty years in academia due to her age. During her retirement, she continued her scientific research. In August 1987, she attended the Rolf Nevanlinna Colloquium in Leningrad.
Selected publications
Kahramaner, who was proficient in English, French, German and Arabic, wrote numerous scientific papers, some of which are:
• Sur les fonctions analytiques qui prénnent la meme valeur ou des valeurs données en deux points donnés (ou en m points donnés), Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 20, 1955.
• Ein verzerrungssatz des argumentes der schlichten funktionen, (with Nazim Terzioglu) Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 20, 1955.
• Über das argument der anlytischen funktionen, (with Nazim Terzioglu) Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 21, 1956.
• Sur le comportement d'une représentation presque-conforme dans le voisinage d'un point singulier, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 22, 1957.
• Sur les applications différentiables du plan complexe, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 26, 1961.
• Sur les coefficients des fonctions univalents, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 28, 1962.
• Modern Mathematical Methods and Models Volume I: Multicomponent Methods (A Book of Experimental Text Materials), (Translation from The Dartmouth College Writing Group; E.J. Cogan, R.L. Davis, J. G. Kemeny, R.Z. Norman, J.L. Snell and G.L. Thompson) Malloy Inc., Ann Arbor, Michigan, ABD, 1958.
• Sur l'argument des fonctions univalentes, Revue de la faculté des sciences de l'université d'Istanbul, Série A, Vol. 32, 1967.
Awards and honors
Kahramaner was awarded the War of Independence Sword by the Halic Rotary Club at the 75th anniversary celebration of the Turkish Republic.
Personal
Kahramaner died in Istanbul, on Wednesday, February 22, 2006.
Kahramaner's son H. Rifki Kahramaner and her daughter-in-law Yasemin Kahramaner are also both mathematics professors.
References
• Suzan Kahramaner - The Mathematics Genealogy Project
• 9th International Symposium on Geometric Function Theory and Applications Symposium, GFTA2013 (Dedicated to Suzan Kahramaner) Abstract Book.
External links
• Photo Movie of Suzan Kahramaner on YouTube
| Wikipedia |
Suzanne Dorée
Suzanne Ingrid Dorée is a professor of mathematics at Augsburg University, where she is also chair of the Department of Mathematics, Statistics, and Computer Science.[1] She is chair of the Congress of the Mathematical Association of America and, as such, serves on its board of directors and the Section Visitors Program (Invited Speakers).[2] Her doctoral research concerned group theory;[3] she has also published in mathematics education.
Education and career
Dorée grew up near New York City, and did her undergraduate studies at the University of Delaware.[1] She joined the Augsburg University faculty in 1989,[4] and did her graduate studies at the University of Wisconsin–Madison, completing her Ph.D. there in 1996; her dissertation, supervised by Martin Isaacs, was Subgroups with the Character Restriction Property and Normal Complements.[3]
Recognition
In 2004, Dorée won a Distinguished Teaching Award from the Mathematical Association of America.[1][5] In 2019, Dorée won a Deborah and Franklin Tepper Haimo Award from the Mathematical Association of America.[6]
References
1. "Suzanne I. Doree", Faculty, Augsburg University, 2018-02-21
2. Council and Committees List, Mathematical Association of America, retrieved 2018-02-21
3. Suzanne Dorée at the Mathematics Genealogy Project
4. Author biography from Arett, Danielle; Dorée, Suzanne (2010), "Coloring and counting on the Tower of Hanoi graphs", Mathematics Magazine, 83 (3): 200–209, doi:10.4169/002557010X494841, MR 2668333, S2CID 120868360
5. Sizer, Wally (September 2004), "Secretary's Report", North Central Mathematical Bulletin, North Central Section of the Mathematical Association of America, 7 (2)
6. "Deborah and Franklin Tepper Haimo Award | Mathematical Association of America".
| Wikipedia |
Suzanne Weekes
Suzanne L. Weekes is the Executive Director of the Society for Industrial and Applied Mathematics.[1] She is also Professor of Mathematical Sciences at Worcester Polytechnic Institute (WPI). She is a co-founder of the Mathematical Sciences Research Institute Undergraduate Program.[2]
Nationality: American, Trinidadian
Alma mater: University of Michigan
Known for: MSRI Undergraduate Program, PIC Math Program
Fields: Mathematics
Institutions: Society for Industrial and Applied Mathematics, Worcester Polytechnic Institute
Doctoral advisor: E. Harabetian
Education
Weekes is Caribbean-American, and was born and raised in Trinidad and Tobago.[2] She graduated in 1989 from Indiana University with a major in mathematics and a minor in computer science.[2] She went on to get an MS in applied mathematics in 1990 and a PhD in Mathematics and Scientific Computing in 1995 at the University of Michigan.[3]
Career
Weekes is the co-director of the Preparation for Industrial Careers in Mathematical Sciences program, which helps faculty in the U.S. engage their students with industrial math research. She is a professor of mathematical sciences at Worcester Polytechnic Institute as well as a cofounder of MSRI-UP, a research experience for undergraduates that aims to increase underrepresented groups in math programs by providing them with research opportunities.[2] In July 2019, she became Interim Associate Dean of Undergraduate Studies at WPI.[4] In December 2019, she was elected to the executive committee of the Association for Women in Mathematics as an at-large member.[5]
Awards and recognition
In 2015, Weekes received the Denise Nicoletti Trustees' Award for Service to Community.[6] Weekes was recognized by Mathematically Gifted & Black as a Black History Month 2017 Honoree.[2] She received the 2019 M. Gweneth Humphreys Award for mentorship from the Association for Women in Mathematics.[7] She won the Deborah and Franklin Tepper Haimo Award for Distinguished College or University Teaching of Mathematics from the Mathematical Association of America in 2020.[8] She was honored as the 2022 AWM-MAA Etta Zuber Falconer Lecturer.[9]
References
1. "Suzanne L. Weekes Named SIAM Executive Director" (PDF). AWM Newsletter: 15. November–December 2020.
2. "Suzanne L. Weekes". Mathematically Gifted & Black: Black History Month 2017 Honoree. Retrieved 2018-12-03.
3. "Suzanne L. Weekes @ WPI". users.wpi.edu. Retrieved 2017-04-08.
4. "Weekes Aims to Expand Undergraduate Research Opportunities". WPI. Retrieved 2020-02-01.
5. "AWM Election Results". Association for Women in Mathematics (AWM). 2019-12-24. Retrieved 2020-02-01.
6. "Awards for Excellence in Teaching, Research, Advising, and Community Service Presented at Honors Convocation | News | WPI". www.wpi.edu. Retrieved 2017-04-08.
7. "2019 M. Gweneth Humphreys Award Winner". Association for Women in Mathematics. Retrieved 24 October 2018.
8. "Deborah and Franklin Tepper Haimo Award | Mathematical Association of America". www.maa.org. Retrieved 2020-02-01.
9. "SIAM Executive Director Suzanne L. Weekes Named 2022 AWM-MAA Etta Zuber Falconer Lecturer". SIAM News. Retrieved 2022-05-28.
| Wikipedia |
Ree group
In mathematics, a Ree group is a group of Lie type over a finite field constructed by Ree (1960, 1961) from an exceptional automorphism of a Dynkin diagram that reverses the direction of the multiple bonds, generalizing the Suzuki groups found by Suzuki using a different method. They were the last of the infinite families of finite simple groups to be discovered.
Unlike the Steinberg groups, the Ree groups are not given by the points of a connected reductive algebraic group defined over a finite field; in other words, there is no "Ree algebraic group" related to the Ree groups in the same way that (say) unitary groups are related to Steinberg groups. However, there are some exotic pseudo-reductive algebraic groups over non-perfect fields whose construction is related to the construction of Ree groups, as they use the same exotic automorphisms of Dynkin diagrams that change root lengths.
Tits (1960) defined Ree groups over infinite fields of characteristics 2 and 3. Tits (1989) and Hée (1990) introduced Ree groups of infinite-dimensional Kac–Moody algebras.
Construction
If X is a Dynkin diagram, Chevalley constructed split algebraic groups corresponding to X, in particular giving groups X(F) with values in a field F. These groups have the following automorphisms:
• Any endomorphism σ of the field F induces an endomorphism ασ of the group X(F)
• Any automorphism π of the Dynkin diagram induces an automorphism απ of the group X(F).
The Steinberg and Chevalley groups can be constructed as fixed points of an endomorphism of X(F) for F the algebraic closure of a field. For the Chevalley groups, the automorphism is the Frobenius endomorphism of F, while for the Steinberg groups the automorphism is the Frobenius endomorphism times an automorphism of the Dynkin diagram.
Over fields of characteristic 2 the groups B2(F) and F4(F) and over fields of characteristic 3 the groups G2(F) have an endomorphism whose square is the endomorphism αφ associated to the Frobenius endomorphism φ of the field F. Roughly speaking, this endomorphism απ comes from the order 2 automorphism of the Dynkin diagram where one ignores the lengths of the roots.
Suppose that the field F has an endomorphism σ whose square is the Frobenius endomorphism: σ^2 = φ. Then the Ree group is defined to be the group of elements g of X(F) such that απ(g) = ασ(g). If the field F is perfect then απ and αφ are automorphisms, and the Ree group is the group of fixed points of the involution αφ/απ of X(F).
In the case when F is a finite field of order p^k (with p = 2 or 3) there is an endomorphism with square the Frobenius exactly when k = 2n + 1 is odd, in which case it is unique. So this gives the finite Ree groups as subgroups of B2(2^(2n+1)), F4(2^(2n+1)), and G2(3^(2n+1)) fixed by an involution.
Chevalley groups, Steinberg groups, and Ree groups
The relation between Chevalley groups, Steinberg groups, and Ree groups is roughly as follows. Given a Dynkin diagram X, Chevalley constructed a group scheme over the integers Z whose values over finite fields are the Chevalley groups. In general one can take the fixed points of an endomorphism α of X(F) where F is the algebraic closure of a finite field, such that some power of α is some power of the Frobenius endomorphism φ. The three cases are as follows:
• For Chevalley groups, α = φ^n for some positive integer n. In this case the group of fixed points is also the group of points of X defined over a finite field.
• For Steinberg groups, α^m = φ^n for some positive integers m, n with m dividing n and m > 1. In this case the group of fixed points is also the group of points of a twisted (quasisplit) form of X defined over a finite field.
• For Ree groups, α^m = φ^n for some positive integers m, n with m not dividing n. In practice m = 2 and n is odd. Ree groups are not given as the points of some connected algebraic group with values in a field: they are the fixed points of an order m = 2 automorphism of a group defined over a field of order p^n with n odd, and there is no corresponding field of order p^(n/2) (although some authors like to pretend there is in their notation for the groups).
Ree groups of type 2B2
The Ree groups of type 2B2 were first found by Suzuki (1960) using a different method, and are usually called Suzuki groups. Ree noticed that they could be constructed from the groups of type B2 using a variation of the construction of Steinberg (1959). Ree realized that a similar construction could be applied to the Dynkin diagrams F4 and G2, leading to two new families of finite simple groups.
Ree groups of type 2G2
The Ree groups of type 2G2(3^(2n+1)) were introduced by Ree (1960), who showed that they are all simple except for the first one 2G2(3), which is isomorphic to the automorphism group of SL2(8). Wilson (2010) gave a simplified construction of the Ree groups, as the automorphisms of a 7-dimensional vector space over the field with 3^(2n+1) elements preserving a bilinear form, a trilinear form, and a product satisfying a twisted linearity law.
The Ree group has order q^3(q^3 + 1)(q − 1) where q = 3^(2n+1).
The Schur multiplier is trivial for n ≥ 1 and for 2G2(3)′.
The outer automorphism group is cyclic of order 2n + 1.
The Ree group is also occasionally denoted by Ree(q), R(q), or E2*(q).
The Ree group 2G2(q) has a doubly transitive permutation representation on q^3 + 1 points, and more precisely acts as automorphisms of an S(2, q+1, q^3+1) Steiner system. It also acts on a 7-dimensional vector space over the field with q elements as it is a subgroup of G2(q).
The Sylow 2-subgroups of the Ree groups are elementary abelian of order 8. Walter's theorem shows that the only other non-abelian finite simple groups with abelian Sylow 2-subgroups are the projective special linear groups in dimension 2 and the Janko group J1. These groups also played a role in the discovery of the first modern sporadic group. They have involution centralizers of the form Z/2Z × PSL2(q), and by investigating groups with an involution centralizer of the similar form Z/2Z × PSL2(5) Janko found the sporadic group J1. Kleidman (1988) determined their maximal subgroups.
The Ree groups of type 2G2 are exceptionally hard to characterize. Thompson (1967, 1972, 1977) studied this problem, and was able to show that the structure of such a group is determined by a certain automorphism σ of a finite field of characteristic 3, and that if the square of this automorphism is the Frobenius automorphism then the group is the Ree group. He also gave some complicated conditions satisfied by the automorphism σ. Finally Bombieri (1980) used elimination theory to show that Thompson's conditions implied that σ^2 = 3 in all but 178 small cases, which were eliminated using a computer by Odlyzko and Hunt. Bombieri found out about this problem after reading an article about the classification by Gorenstein (1979), who suggested that someone from outside group theory might be able to help solve it. Enguehard (1986) gave a unified account of the solution of this problem by Thompson and Bombieri.
Ree groups of type 2F4
The Ree groups of type 2F4(2^(2n+1)) were introduced by Ree (1961). They are simple except for the first one 2F4(2), which Tits (1964) showed has a simple subgroup of index 2, now known as the Tits group. Wilson (2010b) gave a simplified construction of the Ree groups as the symmetries of a 26-dimensional space over the field of order 2^(2n+1) preserving a quadratic form, a cubic form, and a partial multiplication.
The Ree group 2F4(2^(2n+1)) has order q^12 (q^6 + 1) (q^4 − 1) (q^3 + 1) (q − 1) where q = 2^(2n+1). The Schur multiplier is trivial. The outer automorphism group is cyclic of order 2n + 1.
These Ree groups have the unusual property that the Coxeter group of their BN pair is not crystallographic: it is the dihedral group of order 16. Tits (1983) showed that all Moufang octagons come from Ree groups of type 2F4.
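The order formulas quoted in the two subsections above are easy to check for the smallest cases mentioned in this article. The following sketch is an illustration, not taken from the references; the value 1512 for the order of the automorphism group of SL2(8) and 17,971,200 for the order of the Tits group are outside facts used only for the comparison.

```python
# Order formulas for the small and large Ree groups, evaluated at the
# smallest admissible field sizes.

def order_2G2(q):          # q = 3^(2n+1)
    return q**3 * (q**3 + 1) * (q - 1)

def order_2F4(q):          # q = 2^(2n+1)
    return q**12 * (q**6 + 1) * (q**4 - 1) * (q**3 + 1) * (q - 1)

if __name__ == "__main__":
    print(order_2G2(3))                 # 1512, the order of Aut(SL2(8)), as stated above
    print(order_2F4(2))                 # 35942400
    print(order_2F4(2) // 2)            # 17971200: the simple subgroup of index 2 (Tits group)
```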
See also
• List of finite simple groups
References
• Carter, Roger W. (1989) [1972], Simple groups of Lie type, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-50683-6, MR 0407163
• Bombieri, Enrico (1980), appendices by Andrew Odlyzko and D. Hunt, "Thompson's problem (σ^2 = 3)", Inventiones Mathematicae, 58 (1): 77–100, doi:10.1007/BF01402275, ISSN 0020-9910, MR 0570875, S2CID 122867511
• Enguehard, Michel (1986), "Caractérisation des groupes de Ree", Astérisque (142): 49–139, ISSN 0303-1179, MR 0873958
• Gorenstein, D. (1979), "The classification of finite simple groups. I. Simple groups and local analysis", Bulletin of the American Mathematical Society, New Series, 1 (1): 43–199, doi:10.1090/S0273-0979-1979-14551-8, ISSN 0002-9904, MR 0513750
• Hée, Jean-Yves (1990), "Construction de groupes tordus en théorie de Kac-Moody", Comptes Rendus de l'Académie des Sciences, Série I, 310 (3): 77–80, ISSN 0764-4442, MR 1044619
• Kleidman, Peter B. (1988), "The maximal subgroups of the Chevalley groups G2(q) with q odd, the Ree groups 2G2(q), and their automorphism groups", Journal of Algebra, 117 (1): 30–71, doi:10.1016/0021-8693(88)90239-6, ISSN 0021-8693, MR 0955589
• Ree, Rimhak (1960), "A family of simple groups associated with the simple Lie algebra of type (G2)", Bulletin of the American Mathematical Society, 66 (6): 508–510, doi:10.1090/S0002-9904-1960-10523-X, ISSN 0002-9904, MR 0125155
• Ree, Rimhak (1961), "A family of simple groups associated with the simple Lie algebra of type (F4)", Bulletin of the American Mathematical Society, 67: 115–116, doi:10.1090/S0002-9904-1961-10527-2, ISSN 0002-9904, MR 0125155
• Steinberg, Robert (1959), "Variations on a theme of Chevalley", Pacific Journal of Mathematics, 9 (3): 875–891, doi:10.2140/pjm.1959.9.875, ISSN 0030-8730, MR 0109191
• Steinberg, Robert (1968), Lectures on Chevalley groups, Yale University, New Haven, Conn., MR 0466335, archived from the original on 2012-09-10
• Steinberg, Robert (1968), Endomorphisms of linear algebraic groups, Memoirs of the American Mathematical Society, No. 80, Providence, R.I.: American Mathematical Society, ISBN 9780821812808, MR 0230728
• Suzuki, Michio (1960), "A new type of simple groups of finite order", Proceedings of the National Academy of Sciences of the United States of America, 46 (6): 868–870, doi:10.1073/pnas.46.6.868, ISSN 0027-8424, JSTOR 70960, MR 0120283, PMC 222949, PMID 16590684
• Thompson, John G. (1967), "Toward a characterization of E2*(q)", Journal of Algebra, 7 (3): 406–414, doi:10.1016/0021-8693(67)90080-4, ISSN 0021-8693, MR 0223448
• Thompson, John G. (1972), "Toward a characterization of E2*(q) . II", Journal of Algebra, 20 (3): 610–621, doi:10.1016/0021-8693(72)90074-9, ISSN 0021-8693, MR 0313377
• Thompson, John G. (1977), "Toward a characterization of E2*(q) . III", Journal of Algebra, 49 (1): 162–166, doi:10.1016/0021-8693(77)90276-9, ISSN 0021-8693, MR 0453858
• Tits, Jacques (1960), "Les groupes simples de Suzuki et de Ree", Séminaire Bourbaki, Vol. 6, Paris: Société Mathématique de France, pp. 65–82, MR 1611778
• Tits, Jacques (1964), "Algebraic and abstract simple groups", Annals of Mathematics, Second Series, 80 (2): 313–329, doi:10.2307/1970394, ISSN 0003-486X, JSTOR 1970394, MR 0164968
• Tits, Jacques (1983), "Moufang octagons and the Ree groups of type 2F4", American Journal of Mathematics, 105 (2): 539–594, doi:10.2307/2374268, ISSN 0002-9327, JSTOR 2374268, MR 0701569
• Tits, Jacques (1989), "Groupes associés aux algèbres de Kac-Moody", Astérisque, Séminaire Bourbaki (177): 7–31, ISSN 0303-1179, MR 1040566
• Wilson, Robert A. (2010), "Another new approach to the small Ree groups", Archiv der Mathematik, 94 (6): 501–510, CiteSeerX 10.1.1.156.9909, doi:10.1007/s00013-010-0130-4, ISSN 0003-9268, MR 2653666, S2CID 122724281
• Wilson, Robert A. (2010b), "A simple construction of the Ree groups of type 2F4", Journal of Algebra, 323 (5): 1468–1481, doi:10.1016/j.jalgebra.2009.11.015, ISSN 0021-8693, MR 2584965
External links
• ATLAS: Ree group R(27)
| Wikipedia |
Aida Yasuaki
Aida Yasuaki (会田 安明, February 10, 1747 – October 26, 1817) also known as Aida Ammei, was a Japanese mathematician in the Edo period.[1]
Born: 10 February 1747
Died: 26 October 1817 (aged 70)
Nationality: Japanese
Fields: Mathematics
He made significant contributions to the fields of number theory and geometry, and furthered methods for simplifying continued fractions.
Aida created an original symbol for "equal". This was the first appearance of a notation for equality in East Asia.[2]
Selected works
In a statistical overview derived from writings by and about Aida Yasuaki, OCLC/WorldCat encompasses roughly 50 works in 50+ publications in 1 language and 50+ library holdings.[3]
• 1784 — Shoyaku konʼitsujutsu (諸約混一術) OCLC 22057343766
• 1785 — Kaisei sanpō (改精算法) OCLC 22049703851, Counter-arguments with seiyo sampō[2]
• 1787 — Kaisei sanpō kaiseiron (改精算法改正論) OCLC 22056510030, Counter-arguments with seiyo sampō, new edition[2]
• 1788 — Kaiwaku sanpō (解惑筭法) OCLC 22056510044[2]
• 1797 — Sanpō kakujo (筭法廓如) OCLC 22057185824[2]
• 1801 — Sanpō hi hatsuran (筭法非撥亂) OCLC 22057185770[2]
• 1811 — Sanpō tensei-ho shinan (算法天生法指南, Mathematical Introduction of 'Tensei-ho)[2]
See also
• Sangaku, the custom of presenting mathematical problems, carved in wood tablets, to the public in shinto shrines
• Soroban, a Japanese abacus
• Japanese mathematics
Notes
1. Smith, David. (1914). A History of Japanese Mathematics, p. 188, at Google Books
2. Jochi, Shigeru. (1997). "Aida, Yasuaki," Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures, p. 38, at Google Books
3. WorldCat Identities: 会田安明 1747–1817
References
• Endō Toshisada (1896). History of mathematics in Japan (大日本數學史, Dai Nihon sūgakushi). Tōkyō: _____. OCLC 122770600
• Restivo, Sal P. (1992). Mathematics in Society and History: Sociological Inquiries. Dordrecht: Kluwer Academic Publishers. ISBN 978-0-7923-1765-4; OCLC 25709270
• Selin, Helaine. (1997). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Dordrecht: Kluwer/Springer. ISBN 978-0-7923-4066-9; OCLC 186451909
• Shimodaira, Kazuo. (1970). "Aida Yasuaki", Dictionary of Scientific Biography. New York: Charles Scribner's Sons. ISBN 0-684-10114-9
• David Eugene Smith and Yoshio Mikami. (1914). A History of Japanese Mathematics. Chicago: Open Court Publishing. OCLC 1515528– note alternate online, full-text copy at archive.org
External links
• O'Connor, John J.; Robertson, Edmund F., "Aida Yasuaki", MacTutor History of Mathematics Archive, University of St Andrews
| Wikipedia |
Suzuki sporadic group
In the area of modern algebra known as group theory, the Suzuki group Suz or Sz is a sporadic simple group of order
2^13 · 3^7 · 5^2 · 7 · 11 · 13 = 448,345,497,600 ≈ 4×10^11.
History
Suz is one of the 26 sporadic groups and was discovered by Suzuki (1969) as a rank 3 permutation group on 1782 points with point stabilizer G2(4). It is not related to the Suzuki groups of Lie type. The Schur multiplier has order 6 and the outer automorphism group has order 2.
Complex Leech lattice
The 24-dimensional Leech lattice has a fixed-point-free automorphism of order 3. Identifying this with a complex cube root of 1 makes the Leech lattice into a 12 dimensional lattice over the Eisenstein integers, called the complex Leech lattice. The automorphism group of the complex Leech lattice is the universal cover 6 · Suz of the Suzuki group. This makes the group 6 · Suz · 2 into a maximal subgroup of Conway's group Co0 = 2 · Co1 of automorphisms of the Leech lattice, and shows that it has two complex irreducible representations of dimension 12. The group 6 · Suz acting on the complex Leech lattice is analogous to the group 2 · Co1 acting on the Leech lattice.
Suzuki chain
The Suzuki chain or Suzuki tower is the following tower of rank 3 permutation groups from (Suzuki 1969), each of which is the point stabilizer of the next.
• G2(2) = U(3, 3) · 2 has a rank 3 action on 36 = 1 + 14 + 21 points with point stabilizer PSL(3, 2) · 2
• J2 · 2 has a rank 3 action on 100 = 1 + 36 + 63 points with point stabilizer G2(2)
• G2(4) · 2 has a rank 3 action on 416 = 1 + 100 + 315 points with point stabilizer J2 · 2
• Suz · 2 has a rank 3 action on 1782 = 1 + 416 + 1365 points with point stabilizer G2(4) · 2
Maximal subgroups
Wilson (1983) found the 17 conjugacy classes of maximal subgroups of Suz as follows:
Maximal subgroup | Order | Index
G2(4) | 251,596,800 | 1782
3₂ · U(4, 3) · 2₃ | 19,595,520 | 22,880
U(5, 2) | 13,685,760 | 32,760
2^(1+6) · U(4, 2) | 3,317,760 | 135,135
3^5 : M11 | 1,924,560 | 232,960
J2 : 2 | 1,209,600 | 370,656
2^(4+6) : 3A6 | 1,105,920 | 405,405
(A4 × L3(4)) : 2 | 483,840 | 926,640
2^(2+8) : (A5 × S3) | 368,640 | 1,216,215
M12 : 2 | 190,080 | 2,358,720
3^(2+4) : 2 · (A4 × 2^2) · 2 | 139,968 | 3,203,200
(A6 × A5) · 2 | 43,200 | 10,378,368
(A6 × 3^2 : 4) · 2 | 25,920 | 17,297,280
L3(3) : 2 | 11,232 | 39,916,800
L2(25) | 7,800 | 57,480,192
A7 | 2,520 | 177,914,880
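Since the order of a subgroup times its index equals the order of the group, the table can be checked mechanically. The sketch below is an illustration only; the subgroup names are carried along merely as labels, and the orders and indexes are copied from the table rather than computed from group theory.

```python
# Consistency check: order * index must equal |Suz| = 2^13 * 3^7 * 5^2 * 7 * 11 * 13.

SUZ_ORDER = 2**13 * 3**7 * 5**2 * 7 * 11 * 13

table = [
    ("G2(4)",                     251_596_800,       1_782),
    ("3_2 . U(4,3) . 2_3",         19_595_520,      22_880),
    ("U(5,2)",                     13_685_760,      32_760),
    ("2^(1+6) . U(4,2)",            3_317_760,     135_135),
    ("3^5 : M11",                   1_924_560,     232_960),
    ("J2 : 2",                      1_209_600,     370_656),
    ("2^(4+6) : 3A6",               1_105_920,     405_405),
    ("(A4 x L3(4)) : 2",              483_840,     926_640),
    ("2^(2+8) : (A5 x S3)",           368_640,   1_216_215),
    ("M12 : 2",                       190_080,   2_358_720),
    ("3^(2+4) : 2.(A4 x 2^2).2",      139_968,   3_203_200),
    ("(A6 x A5) . 2",                  43_200,  10_378_368),
    ("(A6 x 3^2:4) . 2",               25_920,  17_297_280),
    ("L3(3) : 2",                      11_232,  39_916_800),
    ("L2(25)",                          7_800,  57_480_192),
    ("A7",                              2_520, 177_914_880),
]

assert SUZ_ORDER == 448_345_497_600
for name, order, index in table:
    assert order * index == SUZ_ORDER, name
print("all", len(table), "rows consistent")
```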
References
• Conway, J. H.; Curtis, R. T.; Norton, S. P.; Parker, R. A.; and Wilson, R. A.: "Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups." Oxford, England 1985.
• Griess, Robert L. Jr. (1998), Twelve sporadic groups, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-62778-4, MR 1707296
• Suzuki, Michio (1969), "A simple group of order 448,345,497,600", in Brauer, R.; Sah, Chih-han (eds.), Theory of Finite Groups (Symposium, Harvard Univ., Cambridge, Mass., 1968), Benjamin, New York, pp. 113–119, MR 0241527
• Wilson, Robert A. (1983), "The complex Leech lattice and maximal subgroups of the Suzuki group", Journal of Algebra, 84 (1): 151–188, doi:10.1016/0021-8693(83)90074-1, ISSN 0021-8693, MR 0716777
• Wilson, Robert A. (2009), The finite simple groups, Graduate Texts in Mathematics 251, vol. 251, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-84800-988-2, ISBN 978-1-84800-987-5, Zbl 1203.20012
External links
• MathWorld: Suzuki group
• Atlas of Finite Group Representations: Suzuki group
| Wikipedia |
Svante Janson
Carl Svante Janson[3] (born 21 May 1955) is a Swedish mathematician. A member of the Royal Swedish Academy of Sciences since 1994, Janson has been the chaired professor of mathematics at Uppsala University since 1987.
Born: 21 May 1955
Citizenship: Swedish
Alma mater: Uppsala University
Known for: Janson's inequality[1] (probability); random graphs (Hoeffding decomposition and U-statistics); the "birth of the giant component" (with coauthors)
Awards: Royal Swedish Academy of Sciences (KVA); 1978 Sparre Award (KVA); Royal Society of Sciences in Uppsala; 1994 Göran Gustafsson Prize; 2009 Gårding Prize (Royal Physiographic Society, Lund)
Fields: Mathematical analysis; mathematical statistics
Institutions: Uppsala University (1980–1984, 1985–present); Mittag-Leffler Institute (1978–1980); University of Chicago (1980–1981); Stockholm University (1984–1985)
Doctoral advisors: Lennart Carleson (mathematics, 1977); Carl-Gustav Esseen (mathematical statistics, 1984)
Influences: Béla Bollobás; Persi Diaconis; Donald Knuth[2]
In mathematical analysis, Janson has publications in functional analysis (especially harmonic analysis) and probability theory. In mathematical statistics, Janson has made contributions to the theory of U-statistics.[4][5] In combinatorics, Janson has publications in probabilistic combinatorics, particularly random graphs, and in the analysis of algorithms; in the study of random graphs, Janson introduced U-statistics and the Hoeffding decomposition.[6]
Janson has published four books and over 300 academic papers (as of 2017). He has an Erdős number of 1.[7]
Biography
Svante Janson has had a long career in mathematics, because he started research at a very young age.
From prodigy to docent
A child prodigy in mathematics, Janson took high-school and even university classes while in primary school. He was admitted in 1968 to Gothenburg University at age 12. After his 1968 matriculation at Uppsala University at age 13,[8] Janson obtained the following degrees in mathematics: a "candidate of philosophy" (roughly an "honours" B.S. with a thesis) at age 14 (in 1970) and a doctor of philosophy at age 21–22 (in 1977). Janson's Ph.D. was awarded on his 22nd birthday.[8] Janson's doctoral dissertation was supervised by Lennart Carleson,[10] who had himself received his doctoral degree when he was 22 years old.[11]
After having earned his doctorate, Janson was a postdoc with the Mittag-Leffler Institute from 1978 to 1980. Thereafter he worked at Uppsala University. Janson's ongoing research earned him another PhD from Uppsala University in 1984 – this second doctoral degree being in mathematical statistics;[12] the supervisor was Carl-Gustav Esseen.[13]
In 1984, Janson was hired by Stockholm University as docent (roughly associate professor in the USA).[8]
Professorships
In 1985 Janson returned to Uppsala University, where he was named the chaired professor in mathematical statistics. In 1987 Janson became the chaired professor of mathematics at Uppsala University.[8] Traditionally in Sweden, the chaired professor has had the role of a "professor ordinarius" in a German university (roughly combining the roles of research professor and director of graduate studies at a research university in the USA).
Awards
Besides being a member of the Royal Swedish Academy of Sciences (KVA), Svante Janson is a member of the Royal Society of Sciences in Uppsala. His thesis received the 1978 Sparre Award from the KVA. He received the 1994 Swedish medal for the best young mathematical scientist, the Göran Gustafsson Prize. Janson's former doctoral student, Ola Hössjer, received the Göran Gustafsson prize in 2009, becoming the first statistician so honored.[15]
In December 2009, Janson received the Eva & Lars Gårding prize from the Royal Physiographic Society in Lund.[8] In 2021, Janson received the Flajolet Lecture Prize. He will deliver the Flajolet Lecture at the 2022 AofA conference.
Works by Janson
Books
• Barbour, A. D.; Holst, Lars; Janson, Svante (1992). Poisson Approximation. Oxford, UK: Oxford University Press. ISBN 0-19-852235-5. MR 1163825.
• Janson, Svante (1994). "Orthogonal decompositions and functional limit theorems for random graph statistics". Memoirs of the American Mathematical Society. Providence, Rhode Island: American Mathematical Society. 111 (534): vi+78. doi:10.1090/memo/0534. ISBN 0-8218-2595-X. MR 1219708.
• Janson, Svante (1997). Gaussian Hilbert spaces. Cambridge Tracts in Mathematics. Vol. 129. Cambridge: Cambridge University Press. pp. x, 340. ISBN 0-521-56128-0. MR 1474726.
• Janson, Svante; Łuczak, Tomasz; Rucinski, Andrzej (2000). Random graphs. Wiley-Interscience Series in Discrete Mathematics and Optimization. New York: Wiley-Interscience. pp. xii+333. ISBN 0-471-17541-2. MR 1782847.
Selected articles
• Janson, Svante (1990). "Poisson approximation for large deviations". Random Structures and Algorithms. 1 (2): 221–229. doi:10.1002/rsa.3240010209. MR 1138428. (Janson's inequality)
• Janson, Svante; Knuth, Donald E.; Luczak, Tomasz; Pittel, Boris (1993). "The birth of the giant component". Random Structures and Algorithms. 4 (3): 231–358. arXiv:math/9310236. doi:10.1002/rsa.3240040303. MR 1220220. S2CID 206454812.
• Janson, Svante; Nowicki, Krzysztof (1991). "The asymptotic distributions of generalized U-statistics with applications to random graphs". Probability Theory and Related Fields. 90 (3): 341–375. doi:10.1007/BF01193750. MR 1133371. S2CID 120249197.
References
1. Alon, Noga; Spencer, Joel (2008). The probabilistic method. Wiley-Interscience Series in Discrete Mathematics and Optimization (third ed.). Hoboken, NJ: John Wiley and Sons. pp. 87, 110, 115–119, 120–121, 123, 128, 157–148, 160 (Second edition). ISBN 978-0-470-17020-5. MR 2437651.
2. "Oral History of Donald Knuth. Interviewed by Edward Feigenbaum (March 14 and 21, 2007)" (PDF). Mountain View, California.
3. Page 647 in Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete mathematics: A foundation for computer science (Second ed.). Reading, MA: Addison–Wesley Publishing Company. pp. xiv, 657. ISBN 0-201-55802-5. MR 1397498.
4. Page 508 in Koroljuk, V. S.; Borovskich, Yu. V. (1994). Theory of U-statistics. Mathematics and its Applications. Vol. 273 (Translated by P. V. Malyshev and D. V. Malyshev from the 1989 Russian original ed.). Dordrecht: Kluwer Academic Publishers Group. pp. x, 552. ISBN 0-7923-2608-3. MR 1472486.
5. Pages 381–382 in Borovskikh, Yu. V. (1996). U-statistics in Banach spaces. Utrecht: VSP. pp. xii, 420. ISBN 90-6764-200-2. MR 1419498.
6. Page xii in Kwapień, Stanisƚaw; Woyczyński, Wojbor A. (1992). Random series and stochastic integrals: Single and multiple. Probability and its Applications. Boston, MA: Birkhäuser Boston, Inc. pp. xvi+360. ISBN 0-8176-3572-6. MR 1167198.
7. "MR: Collaboration Distance".
8. Curriculum Vitæ for Svante Janson, read 18 december 2009
9. Raussen, Martin; Skau, Christian (February 2007). "Interview with Abel Prize Recipient Lennart Carleson" (PDF). Notices of the American Mathematical Society. 54 (2): 223–229. Retrieved 2008-01-16.
10. Janson, Svante (1977). On BMO and related spaces. Uppsala: Department of Mathematics, Uppsala University.
11. his thesis studied harmonic analysis, particularly Hardy spaces of bounded mean oscillation (BMO), and Raussen, Martin; Skau, Christian (February 2007). "Interview with Abel Prize Recipient Lennart Carleson" (PDF). Notices of the American Mathematical Society. 54 (2): 223–229. Retrieved 2008-01-16.
12. Janson, Svante (1984). Random coverings and related problems. Uppsala.{{cite book}}: CS1 maint: location missing publisher (link)
13. The Mathematics Genealogy Project: Svante Janson, read 1 May 2010
14. Diaconis, Persi (2009). "Book review: Probabilistic symmetries and invariance principles (Olav Kallenberg, Springer, New York, 2005)". Bulletin of the American Mathematical Society. New Series. 46 (4): 691–696. doi:10.1090/S0273-0979-09-01262-2. MR 2525743.
15. "Göran Gustafsson prize awarded to Svante Janson in 1994 (and to Janson's student Ola Hössjer in 2009)". 2010-06-28. Archived from the original on 2012-08-04.
• Svante Janson's homepage at Uppsala University. Accessed 2010-06-27.
• Curriculum Vitæ for Svante Janson. Accessed 2010-06-27.
• Mathematical works by Svante Janson, Department of Mathematics, Uppsala University. Accessed 2010-06-27.
• Details of seminar given by Janson on May 7th 2010 to the Microsoft Research Theory Group. Accessed 2010-06-27.
• Member record for Svante Janson. Swedish Academy of Sciences. Accessed 2010-06-27.
External links
• Svante Janson at the Mathematics Genealogy Project
• "Svante Janson". Google Scholar. Retrieved 25 July 2022.
• Mathematical Reviews. "Svante Janson". Retrieved 2010-06-29.
Authority control
International
• ISNI
• VIAF
National
• Norway
• Germany
• Israel
• United States
• Sweden
• Netherlands
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• zbMATH
Other
• IdRef
| Wikipedia |
Sven Erlander
Sven Bertil Erlander (25 May 1934 – 13 June 2021)[1][2] was a Swedish mathematician and academic.
Sven Erlander
Sven Erlander, 2014
Born(1934-05-25)25 May 1934
Halmstad, Sweden
Died13 June 2021(2021-06-13) (aged 87)
Linköping, Sweden
NationalitySwedish
Occupation(s)Professor of mathematics, Rector of Linköping University
Academic background
Doctoral advisorUlf Grenander
Biography
Erlander was the son of Tage Erlander, who was the Prime Minister of Sweden from 1946 to 1969. He published several of his father's diaries.[3]
He received his Ph.D. in mathematics from Stockholm University in 1968. In 1971 he became a professor of optimisation at Linköping University. In Linköping he was head of the Department of Mathematics from 1973 to 1976, and dean of the Institute of Technology between 1978 and 1983.
From 1983 to 1995 he was the rector of Linköping University.[4]
Awards
• Member of the Royal Swedish Academy of Engineering Sciences, 1983
• Honorary Doctorate, Gdańsk University, Poland
Publications
• Cost-Minimizing Choice Behavior in Transportation Planning, monograph, Springer Verlag, 2010.
References
1. "In memoriam: Professor Sven Erlander – ERSA".
2. "Sven Erlander har avlidit".
3. Article in Norrköpings Tidningar Archived 7 September 2012 at archive.today (in Swedish)
4. CV at Linköping University Archived 24 July 2011 at the Wayback Machine
External links
• Prof. Sven Erlander's CV
• Sven Erlander at the Mathematics Genealogy Project
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Germany
• Israel
• United States
• Sweden
• Czech Republic
• Netherlands
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
| Wikipedia |
Sven Leyffer
Sven Leyffer is an American computational mathematician specializing in nonlinear optimization. He is a Senior Computational Mathematician in the Laboratory for Applied Mathematics, Numerical Software, and Statistics at Argonne National Laboratory.
Education
Leyffer received a Vordiplom in Pure and Applied Mathematics from the University of Hamburg in 1989. He obtained his Ph.D. in 1994 from the University of Dundee under doctoral advisor Roger Fletcher. His dissertation was Deterministic Methods in Mixed Integer Nonlinear Programming.[1]
Recognition
In 2006, Leyffer was awarded, alongside Roger Fletcher and Philippe L. Toint, the Lagrange Prize from the Mathematical Programming Society (MPS) and the Society for Industrial and Applied Mathematics (SIAM).[2][3]
In 2009, Leyffer was named a Fellow of the Society for Industrial and Applied Mathematics (SIAM) for contributions to large-scale nonlinear optimization.[4][5]
Service
From 2017 to 2021, Leyffer was Editor-in-Chief of the journal Mathematical Programming B.[6]
Leyffer is president-elect (2023–2024) of the Society for Industrial and Applied Mathematics (SIAM).[7][8]
References
1. Leyffer, Sven, Deterministic methods for mixed integer nonlinear programming, University of Dundee, CiteSeerX 10.1.1.132.8932
2. SIAM Awards Lagrange Prize to Roger Fletcher, Sven Leyffer and Philippe L. Toint, BrightSurf Science News, retrieved 2022-04-30
3. Math News, Dm.unito.it, archived from the original on 20 March 2012, retrieved 26 July 2014
4. Leyffer named SIAM fellow, Argonne, retrieved 2022-04-30
5. Class of 2009, SIAM, retrieved 2022-04-30
6. Mathematical Programming Editors, Springer, retrieved 2022-04-30
7. Sven Leyffer named President-Elect of SIAM, ICERM, retrieved 2022-04-30
8. Leyffer named SIAM president-elect, Argonne, retrieved 2022-04-30
External links
• Home page
Authority control
International
• ISNI
• VIAF
National
• Norway
• Israel
• United States
Academics
• DBLP
• Google Scholar
• Mathematics Genealogy Project
• ORCID
• Scopus
Other
• IdRef
| Wikipedia |
Svetlana Jitomirskaya
Svetlana Yakovlevna Jitomirskaya (born June 4, 1966) is a Soviet-born American mathematician working on dynamical systems and mathematical physics.[1][2] She is a distinguished professor of mathematics at Georgia Tech and UC Irvine.[3] She is best known for solving the ten martini problem along with mathematician Artur Avila.[4][5]
Svetlana Jitomirskaya
Born (1966-06-04) June 4, 1966
Kharkiv
Alma materMoscow State University
Known forTen martini problem
AwardsRuth Lyttle Satter Prize in Mathematics (2005)
Dannie Heineman Prize for Mathematical Physics (2020)
Olga Ladyzhenskaya Prize (2022)
Scientific career
FieldsMathematics
Institutions
• UC Irvine
• Georgia Tech
ThesisSpectral and Statistical Properties of Lattice Hamiltonians (1991)
Doctoral advisorYakov Sinai
InfluencesVladimir Arnold
Education and career
Jitomirskaya was born and grew up in Kharkiv. Both her mother, Valentina Borok, and her father, Yakov Zhitomirskii, were professors of mathematics.[1]
Her undergraduate studies were at Moscow State University, where she was a student of, among others, Vladimir Arnold and Yakov Sinai.[1] She obtained her Ph.D. from Moscow State University in 1991 under the supervision of Yakov Sinai.[6] She joined the mathematics department at the University of California, Irvine in 1991 as a lecturer, and she became an assistant professor there in 1994 and a full professor in 2000.[2]
Honors
In 2005, she was awarded the Ruth Lyttle Satter Prize in Mathematics, "for her pioneering work on non-perturbative quasiperiodic localization".[7]
She was an invited speaker at the 2002 International Congress of Mathematicians, in Beijing.[8] She was a plenary speaker at the 2022 International Congress of Mathematicians, originally scheduled for Saint Petersburg.[9] After the Russian invasion of Ukraine in February 2022, congress organizers changed plans, moving some events online and others to Helsinki, Finland.[10] Jitomirskaya's July 14 plenary address, Small denominators and multiplicative Jensen's formula, is available online.[11]
She received a Sloan Fellowship in 1996.[12]
In 2018 she was named to the American Academy of Arts and Sciences.[13]
Jitomirskaya is the 2020 winner of the Dannie Heineman Prize for Mathematical Physics, becoming the second woman to win the prize and the first woman to be the sole winner of the prize. The award citation credited her "for work on the spectral theory of almost-periodic Schrödinger operators and related questions in dynamical systems. In particular, for her role in the solution of the Ten Martini problem, concerning the Cantor set nature of the spectrum of all almost Mathieu operators and in the development of the fundamental mathematical aspects of the localization and metal-insulator transition phenomena."[5]
In 2022, she was elected to the National Academy of Sciences. On July 2, 2022, she received the inaugural Ladyzhenskaya Prize in Mathematical Physics (OAL Prize) "for her seminal and deep contributions to the spectral theory of almost periodic Schrödinger operators" (https://2022.worldwomeninmaths.org/OAL-prize-winner).
Jitomirskaya was elected as an American Mathematical Society (AMS) Council member at large for the term from February 1, 2023, to January 31, 2024.[14]
Selected publications
• Jitomirskaya, Svetlana Ya. (1999), "Metal-insulator transition for the almost Mathieu operator", Annals of Mathematics, Second Series, 150 (3): 1159–1175, arXiv:math/9911265, Bibcode:1999math.....11265J, doi:10.2307/121066, JSTOR 121066, MR 1740982, S2CID 10641385.
• Avila, Artur; Jitomirskaya, Svetlana (2009), "The Ten Martini Problem", Annals of Mathematics, Second Series, 170 (1): 303–342, arXiv:math/0503363, doi:10.4007/annals.2009.170.303, MR 2521117.
• Jitomirskaya, Svetlana; Last, Yoram (1999), "Power-law subordinacy and singular spectra. I. Half-line operators", Acta Mathematica, 183 (2): 171–189, doi:10.1007/BF02392827, MR 1738043.
References
1. O'Connor, John J.; Robertson, Edmund F., "Svetlana Jitomirskaya", MacTutor History of Mathematics Archive, University of St Andrews
2. Jitomirskaya's CV
3. "Distinguished professors". University of California, Irvine. Retrieved 2020-03-06.
4. Avila, Artur; Jitomirskaya, Svetlana (2006). "Solving the Ten Martini Problem". Mathematical Physics of Quantum Mechanics. Lecture Notes in Physics. Vol. 690. pp. 5–16. arXiv:math/0503363. doi:10.1007/3-540-34273-7_2. ISBN 978-3-540-31026-6. S2CID 55259301.
5. Pignataro, Anthony (October 23, 2019), "UC Irvine mathematics professor makes history with award", OC Weekly
6. Svetlana Jitomirskaya at the Mathematics Genealogy Project
7. "2005 Satter Prize" (PDF), Notices of the American Mathematical Society, 52 (4): 447–448, April 2005.
8. "International Mathematical Union (IMU)". www.mathunion.org. Archived from the original on 2012-01-11.
9. "ICM Plenary speakers".
10. "Virtual ICM 2022". International Mathematical Union (IMU). 2022-07-14. Retrieved 2022-09-05.
11. "Svetlana Jitomirskaya: Small denominators and multiplicative Jensen's formula". YouTube. 2022-07-14. Retrieved 2022-09-05.
12. "Past Fellows".
13. "Around Town: UC Irvine professor named fellow of American Academy of Arts and Sciences", Los Angeles Times, April 21, 2018
14. "Council". American Mathematical Society. Retrieved 2023-03-27.
External links
• Home page of Svetlana Jitomirskaya
• Riddle, Larry (January 10, 2014), "Svetlana Jitomirskaya", Biographies of Women Mathematicians, Agnes Scott College, retrieved 2015-10-22.
• UCI Distinguished Mid-Career Award for Research 2004–2005 at the Wayback Machine (archived September 3, 2004)
• "In Math and Life, Svetlana Jitomirskaya Stares Down Complexity". Quanta Magazine. 2022-11-01.
Chaos theory
Concepts
Core
• Attractor
• Bifurcation
• Fractal
• Limit set
• Lyapunov exponent
• Orbit
• Periodic point
• Phase space
• Anosov diffeomorphism
• Arnold tongue
• axiom A dynamical system
• Bifurcation diagram
• Box-counting dimension
• Correlation dimension
• Conservative system
• Ergodicity
• False nearest neighbors
• Hausdorff dimension
• Invariant measure
• Lyapunov stability
• Measure-preserving dynamical system
• Mixing
• Poincaré section
• Recurrence plot
• SRB measure
• Stable manifold
• Topological conjugacy
Theorems
• Ergodic theorem
• Liouville's theorem
• Krylov–Bogolyubov theorem
• Poincaré–Bendixson theorem
• Poincaré recurrence theorem
• Stable manifold theorem
• Takens's theorem
Theoretical
branches
• Bifurcation theory
• Control of chaos
• Dynamical system
• Ergodic theory
• Quantum chaos
• Stability theory
• Synchronization of chaos
Chaotic
maps (list)
Discrete
• Arnold's cat map
• Baker's map
• Complex quadratic map
• Coupled map lattice
• Duffing map
• Dyadic transformation
• Dynamical billiards
• outer
• Exponential map
• Gauss map
• Gingerbreadman map
• Hénon map
• Horseshoe map
• Ikeda map
• Interval exchange map
• Irrational rotation
• Kaplan–Yorke map
• Langton's ant
• Logistic map
• Standard map
• Tent map
• Tinkerbell map
• Zaslavskii map
Continuous
• Double scroll attractor
• Duffing equation
• Lorenz system
• Lotka–Volterra equations
• Mackey–Glass equations
• Rabinovich–Fabrikant equations
• Rössler attractor
• Three-body problem
• Van der Pol oscillator
Physical
systems
• Chua's circuit
• Convection
• Double pendulum
• Elastic pendulum
• FPUT problem
• Hénon–Heiles system
• Kicked rotator
• Multiscroll attractor
• Population dynamics
• Swinging Atwood's machine
• Tilt-A-Whirl
• Weather
Chaos
theorists
• Michael Berry
• Rufus Bowen
• Mary Cartwright
• Chen Guanrong
• Leon O. Chua
• Mitchell Feigenbaum
• Peter Grassberger
• Celso Grebogi
• Martin Gutzwiller
• Brosl Hasslacher
• Michel Hénon
• Svetlana Jitomirskaya
• Bryna Kra
• Edward Norton Lorenz
• Aleksandr Lyapunov
• Benoît Mandelbrot
• Hee Oh
• Edward Ott
• Henri Poincaré
• Mary Rees
• Otto Rössler
• David Ruelle
• Caroline Series
• Yakov Sinai
• Oleksandr Mykolayovych Sharkovsky
• Nina Snaith
• Floris Takens
• Audrey Terras
• Mary Tsingou
• Marcelo Viana
• Amie Wilkinson
• James A. Yorke
• Lai-Sang Young
Related
articles
• Butterfly effect
• Complexity
• Edge of chaos
• Predictability
• Santa Fe Institute
Ruth Lyttle Satter Prize in Mathematics recipients
• 1991 Dusa McDuff
• 1993 Lai-Sang Young
• 1995 Sun-Yung Alice Chang
• 1997 Ingrid Daubechies
• 1999 Bernadette Perrin-Riou
• 2001 Karen E. Smith & Sijue Wu
• 2003 Abigail Thompson
• 2005 Svetlana Jitomirskaya
• 2007 Claire Voisin
• 2009 Laure Saint-Raymond
• 2011 Amie Wilkinson
• 2013 Maryam Mirzakhani
• 2015 Hee Oh
• 2017 Laura DeMarco
• 2019 Maryna Viazovska
• 2021 Kaisa Matomäki
• 2023 Panagiota Daskalopoulos & Nataša Šešum
Authority control: Academics
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• Scopus
• zbMATH
| Wikipedia |
Svetlana Katok
Svetlana Katok (born May 1, 1947)[1] is a Russian-American mathematician and a professor of mathematics at Pennsylvania State University.[2]
Education and career
Katok grew up in Moscow, and earned a master's degree from Moscow State University in 1969; however, due to the anti-Semitic and anti-intelligentsia policies of the time, she was denied admission to the doctoral program there and instead worked for several years in the area of early and secondary mathematical education.[2] She immigrated to the US in 1978,[2] and earned her doctorate from the University of Maryland, College Park in 1983 under the supervision of Don Zagier.[2][3] She joined the Pennsylvania State University faculty in 1990.[2]
Katok founded the Electronic Research Announcements of the American Mathematical Society in 1995; it was renamed in 2007 to the Electronic Research Announcements in Mathematical Sciences, and she remains its managing editor.[4]
Katok was an American Mathematical Society (AMS) Council member at large.[5]
Books
Katok is the author of:
• Fuchsian Groups, Chicago Lectures in Mathematics, University of Chicago Press, 1992.[6] Russian edition, Faktorial Press, Moscow, 2002.
• p-adic Analysis Compared with Real, Student Mathematical Library, vol. 37, American Math. Soc., 2007.[7] Russian edition, MCCME Press, Moscow, 2004.
Additionally, she coedited the book MASS Selecta: Teaching and learning advanced undergraduate mathematics (American Math. Soc., 2003).[8]
Awards and honors
Katok was the 2004 Emmy Noether Lecturer of the Association for Women in Mathematics.[2] In 2012 she and her husband, mathematician Anatole Katok, both became fellows of the American Mathematical Society.[9]
References
1. Svetlana Katok, Eugene B. Dynkin Collection of Mathematics Interviews, Cornell University, January 1981, retrieved October 16, 2013.
2. Svetlana Katok, Association for Women in Mathematics, 2005, retrieved October 16, 2013.
3. Svetlana Katok at the Mathematics Genealogy Project
4. Electronic Research Announcements, retrieved October 16, 2013.
5. "AMS Committees". American Mathematical Society. Retrieved March 27, 2023.
6. Review of Fuchsian Groups by Irwin Kra, 1993, MR1177168.
7. Review of p-adic Analysis by Daniel Barsky, 2008, MR2298943.
8. Unsigned review of MASS Selecta, 2004, MR2027171.
9. List of Fellows of the American Mathematical Society. Retrieved October 16, 2013. American Mathematical Society.
Authority control
International
• ISNI
• VIAF
National
• France
• BnF data
• Germany
• Israel
• United States
• Poland
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
| Wikipedia |
Svetlozar Rachev
Svetlozar (Zari) Todorov Rachev is a professor at Texas Tech University who works in the field of mathematical finance, probability theory, and statistics. He is known for his work in probability metrics, derivative pricing, financial risk modeling, and econometrics. In the practice of risk management, he is the originator of the methodology behind the flagship product of FinAnalytica.
Life and work
Rachev earned an MSc degree from the Faculty of Mathematics at Sofia University in 1974, a PhD degree from Lomonosov Moscow State University under the supervision of Vladimir Zolotarev in 1979, and a Dr Sci degree from the Steklov Mathematical Institute in 1986 under the supervision of Leonid Kantorovich (a Nobel Prize winner in economic sciences), Andrey Kolmogorov and Yuri Prokhorov.[1] Currently, he is Professor of Financial Mathematics at Texas Tech University.[2]
In mathematical finance, Rachev is known for his work on application of non-Gaussian models for risk assessment, option pricing, and the applications of such models in portfolio theory.[3] He is also known for the introduction of a new risk-return ratio, the "Rachev Ratio", designed to measure the reward potential relative to tail risk in a non-Gaussian setting.[4][5][6]
In probability theory, his books on probability metrics and mass-transportation problems are widely cited.[7]
FinAnalytica
Rachev's academic work on non-Gaussian models in mathematical finance was inspired by the difficulties of common classical Gaussian-based models in capturing empirical properties of financial data.[3][4] Rachev and his daughter, Borjana Racheva-Iotova, established Bravo Group in 1999, a company with the goal of developing software based on Rachev's research on fat-tailed models. The company was later acquired by FinAnalytica. The company has won the Waters Rankings "Best Market Risk Solution Provider" award in 2010, 2012, and 2015, and also the "Most Innovative Specialist Vendor" Risk Award in 2014.[8][9]
Awards and honors
• Fellow of the Institute of Mathematical Statistics[10]
• Humboldt Research Award for Foreign Scholars (1995)[11]
• Honorary Doctor of Science at Saint Petersburg State Institute of Technology (1992)[12]
• Foreign Member of the Russian Academy of Natural Sciences[13]
Selected publications
Books
• Rachev, S.T. (1991). Probability Metrics and the Stability of Stochastic Models. New York: Wiley. ISBN 978-0471928775.
• Rachev, S.T.; Rueschendorf, L. (1998). Mass Transportation Problems, Vol I: Theory. Springer. ISBN 978-1475785258.
• Rachev, S.T.; Rueschendorf, L. (1999). Mass Transportation Problems, Vol II: Applications. Springer. ISBN 978-0387983523.
• Rachev, S.T.; Mittnik, S. (2000). Stable Paretian Models in Finance. Wiley. ISBN 978-0471953142.
• Rachev, S.T.; Kim, Y.; Bianchi, M.L.; Fabozzi, F.J. (2011). Financial Models with Levy Processes and Volatility Clustering. New York: Springer. ISBN 978-0470482353.
• Rachev, S.T.; Klebanov, Lev; Stoyanov, S.V.; Fabozzi, F.J. (2013). The Methods of Distances in the Theory of Probability and Statistics. Springer. ISBN 978-1461448686.
Articles
• Rachev, S.T.; Sengupta, A. (1993). "Laplace-Weibull mixtures for modelling price changes". Management Science. 39 (8): 1029–1038. doi:10.1287/mnsc.39.8.1029.
• Mittnik, S.; Rachev, S.T. (1993). "Modeling asset returns with alternative stable distributions". Econometric Reviews. 12 (3): 261–330. doi:10.1080/07474939308800266.
• Mittnik, S.; Paollela, M.; Rachev, S.T. (2000). "Diagnosing and treating the fat tails in financial returns data". Journal of Empirical Finance. 7 (3–4): 389–416. doi:10.1016/S0927-5398(00)00019-0.
• Mittnik, S.; Paollela, M.; Rachev, S.T. (2002). "Stationarity of stable power-GARCH process". Journal of Econometrics. 106 (1): 97–107. doi:10.1016/S0304-4076(01)00089-6.
• Biglova, A.; Ortobelli, S.; Rachev, S.T.; Stoyanov, S.V. (2004). "Different Approaches to Risk Estimation in Portfolio Theory". Journal of Portfolio Management. 31 (1): 103–112. doi:10.3905/jpm.2004.443328.
• Stoyanov, S.V.; Rachev, S.T.; Fabozzi, F.J. (2007). "Optimal financial portfolios". Applied Mathematical Finance. 14 (5): 401–436. doi:10.1080/13504860701255292.
• Bierbrauer, M.; Menn, C.; Rachev, S.T.; Türck, S. (2007). "Spot and derivative pricing in the EEX power market". Journal of Banking & Finance. 31 (11): 3462–3485. doi:10.1016/j.jbankfin.2007.04.011.
• Stoyanov, S.V.; Rachev, S.T.; Racheva-Iotova, B.; Fabozzi, F.J. (2011). "Fat-tailed models for risk estimation". Journal of Portfolio Management. 37 (2): 107–117. doi:10.3905/jpm.2011.37.2.107.
References
1. "Meet the team". www.finanalytica.com. FinAnalytica. Retrieved 15 August 2015.
2. "Department of Mathematics & Statistics". Retrieved 31 December 2017.
3. Baird, Jane (2009-05-25). "Assessing the risk of a cataclysm". Reuters. Retrieved May 25, 2009.
4. Fehr, Benedikt. "Beyond the Normal Distribution" (PDF). Frankfurter Allgemeine Zeitung. Retrieved 16 March 2006.
5. Cheridito, P.; Kromer, E. (2013). "Reward-Risk Ratios". Journal of Investment Strategies. 3 (1): 3–18. doi:10.21314/JOIS.2013.022.
6. Farinelli, S.; Ferreira, M.; Rossello, D.; Thoeny, M.; Tibiletti, L. (2008). "Beyond Sharpe ratio: Optimal asset allocation using different performance ratios". Journal of Banking and Finance. 32 (10): 2057–2063. doi:10.1016/j.jbankfin.2007.12.026.
7. Villani, Cedric (2009). Optimal Transport: Old and New. Springer. pp. 9, 236, 41–43, 80, 93, 161–163, 409. ISBN 978-3-540-71050-9.
8. "FinAnalytica Wins 'Best Market Risk Solution Provider' Award in 2015 Waters Rankings". www.reuters.com. Reuters. Retrieved 15 August 2015.
9. "Waters Rankings 2015: Best Market Risk Solution Provider ─ FinAnalytica". www.waterstechnology.com. Waterstechnology. 2015-08-05. Retrieved 15 August 2015.
10. "Honored IMS Fellows". Institute of Mathematical Statistics. Archived from the original on 2 March 2014. Retrieved 13 August 2015.
11. Foundation, Humboldt (May 1995). "Humboldt Awards Announced" (PDF). Notices of the AMS. Vol. 42, no. 5. American Mathematical Society. Retrieved 13 August 2015.
12. "Honorary Doctors and Distinguished Alumni". St. Petersburg Technical University. Retrieved 13 August 2015.
13. "Stable Paretian Models in Finance: Author Information". www.wiley.com. Wiley. Retrieved 15 August 2015.
External links
• A definition of the Rachev Ratio
• FinAnalytica Inc
Authority control
International
• ISNI
• VIAF
National
• Catalonia
• Germany
• Israel
• United States
• Sweden
• Czech Republic
• Netherlands
• Poland
Academics
• MathSciNet
• Mathematics Genealogy Project
• Scopus
• zbMATH
Other
• IdRef
| Wikipedia |
Svitlana Mayboroda
Svitlana Mayboroda (born 1981) is a Ukrainian mathematician who works as a professor of mathematics at the University of Minnesota[1] and ETH Zurich.[2]
Research
Mayboroda's research concerns harmonic analysis and partial differential equations, including boundary value problems for elliptic partial differential equations.[3] Her work has provided a new mathematical approach to Anderson localization, a phenomenon in physics in which waves are confined to a local region rather than propagating throughout a medium, and with this explanation she can predict the regions in which waves will be confined.[4]
Education and career
Mayboroda was born on June 2, 1981, in Kharkiv. She earned the Ukrainian equivalent of two master's degrees, one in finance and one in applied mathematics, from the University of Kharkiv in 2001, and completed her Ph.D. in 2005 from the University of Missouri under the supervision of Marius Mitrea.[1][5] After visiting positions at the Australian National University, Ohio State University, and Brown University, she joined the Purdue University faculty in 2008, and moved to the University of Minnesota in 2011.[1] In 2023, she joined the ETH Zurich faculty.[2]
Recognition
Mayboroda was a Sloan Research Fellow for 2010–2015.[1] In 2013, she became the inaugural winner of the Sadosky Research Prize in Analysis of the Association for Women in Mathematics.[3] In 2015 she was elected as a fellow of the American Mathematical Society.[6] In 2016, she was awarded the first Northrop Professorship at the University of Minnesota.[7] She is an invited speaker at the 2018 International Congress of Mathematicians, speaking in the section on Analysis and Operator Algebras.[8]
References
1. Curriculum vitae: Svitlana Mayboroda (PDF), retrieved 2015-11-18.
2. "Svitlana Mayboroda newly appointed professor". math.ethz.ch. 2023-08-04. Retrieved 2023-08-11.
3. AWM-Sadosky Research Prize in Analysis, Association for Women in Mathematics, retrieved 2015-11-18.
4. Hartnett, Kevin (August 22, 2017), "Mathematician Tames Rogue Waves, Illuminating Future of LED Lighting", Quanta Magazine
5. Svitlana Mayboroda at the Mathematics Genealogy Project
6. 2016 Class of the Fellows of the AMS, American Mathematical Society, retrieved 2015-11-18.
7. "News - math.umn.edu". www2.math.umn.edu. Archived from the original on 21 April 2016. Retrieved 22 May 2022.
8. "Speakers", ICM 2018, archived from the original on 2017-12-15, retrieved 2018-02-24
External links
• Home page
Authority control
International
• VIAF
National
• Norway
• Germany
• Israel
• United States
Academics
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• Scopus
• zbMATH
Other
• IdRef
| Wikipedia |
Satellite knot
In the mathematical theory of knots, a satellite knot is a knot that contains an incompressible, non-boundary-parallel torus in its complement.[1] Every knot is either a hyperbolic knot, a torus knot, or a satellite knot. The class of satellite knots includes composite knots, cable knots, and Whitehead doubles. A satellite link is one that orbits a companion knot K in the sense that it lies inside a regular neighborhood of the companion.[2]: 217
A satellite knot $K$ can be picturesquely described as follows: start by taking a nontrivial knot $K'$ lying inside an unknotted solid torus $V$. Here "nontrivial" means that the knot $K'$ is not allowed to sit inside of a 3-ball in $V$ and $K'$ is not allowed to be isotopic to the central core curve of the solid torus. Then tie up the solid torus into a nontrivial knot.
This means there is a non-trivial embedding $f\colon V\to S^{3}$ and $K=f\left(K'\right)$. The central core curve of the solid torus $V$ is sent to a knot $H$, which is called the "companion knot" and is thought of as the planet around which the "satellite knot" $K$ orbits. The construction ensures that $f(\partial V)$ is a non-boundary parallel incompressible torus in the complement of $K$. Composite knots contain a certain kind of incompressible torus called a swallow-follow torus, which can be visualized as swallowing one summand and following another summand.
Since $V$ is an unknotted solid torus, $S^{3}\setminus V$ is a tubular neighbourhood of an unknot $J$. The 2-component link $K'\cup J$ together with the embedding $f$ is called the pattern associated to the satellite operation.
A convention: people usually demand that the embedding $f\colon V\to S^{3}$ is untwisted in the sense that $f$ must send the standard longitude of $V$ to the standard longitude of $f(V)$. Said another way, given any two disjoint curves $c_{1},c_{2}\subset V$, $f$ preserves their linking numbers i.e.: $\operatorname {lk} (f(c_{1}),f(c_{2}))=\operatorname {lk} (c_{1},c_{2})$.
Basic families
When $K'\subset \partial V$ is a torus knot, $K$ is called a cable knot. Examples 3 and 4 are cable knots. The cable constructed with given winding numbers (m,n) from another knot K is often called the (m,n) cable of K.
If $K'$ is a non-trivial knot in $S^{3}$ and if a compressing disc for $V$ intersects $K'$ in precisely one point, then $K$ is called a connect-sum. Another way to say this is that the pattern $K'\cup J$ is the connect-sum of a non-trivial knot $K'$ with a Hopf link.
If the link $K'\cup J$ is the Whitehead link, $K$ is called a Whitehead double. If $f$ is untwisted, $K$ is called an untwisted Whitehead double.
Examples
• Example 1: A connect-sum of a trefoil and figure-8 knot.
• Example 2: The Whitehead double of the figure-8.
• Example 3: A cable of a connect-sum.
• Example 4: A cable of a trefoil.
• Example 5: A knot which is a 2-fold satellite i.e.: it has non-parallel swallow-follow tori.
• Example 6: A knot which is a 2-fold satellite i.e.: it has non-parallel swallow-follow tori.
Examples 5 and 6 are variants on the same construction. They both have two non-parallel, non-boundary-parallel incompressible tori in their complements, splitting the complement into the union of three manifolds. In 5, those manifolds are: the Borromean rings complement, trefoil complement, and figure-8 complement. In 6, the figure-8 complement is replaced by another trefoil complement.
Origins
In 1949,[3] Horst Schubert proved that every oriented knot in $S^{3}$ decomposes as a connect-sum of prime knots in a unique way, up to reordering, making the monoid of oriented isotopy-classes of knots in $S^{3}$ a free commutative monoid on countably infinitely many generators. Shortly after, he realized he could give a new proof of his theorem by a close analysis of the incompressible tori present in the complement of a connect-sum. This led him to study general incompressible tori in knot complements in his epic work Knoten und Vollringe,[4] where he defined satellite and companion knots.
Follow-up work
Schubert's demonstration that incompressible tori play a major role in knot theory was one of several early insights leading to the unification of 3-manifold theory and knot theory. It attracted Waldhausen's attention, who later used incompressible surfaces to show that a large class of 3-manifolds are homeomorphic if and only if their fundamental groups are isomorphic.[5] Waldhausen conjectured what is now the Jaco–Shalen–Johannson decomposition of 3-manifolds, which is a decomposition of 3-manifolds along spheres and incompressible tori. This later became a major ingredient in the development of geometrization, which can be seen as a partial classification of 3-dimensional manifolds. The ramifications for knot theory were first described in the long-unpublished manuscript of Bonahon and Siebenmann.[6]
Uniqueness of satellite decomposition
In Knoten und Vollringe, Schubert proved that in some cases, there is essentially a unique way to express a knot as a satellite. But there are also many known examples where the decomposition is not unique.[7] With a suitably enhanced notion of satellite operation called splicing, the JSJ decomposition gives a proper uniqueness theorem for satellite knots.[8][9]
See also
• Hyperbolic knot
• Torus knot
References
1. Colin Adams, The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots, (2001), ISBN 0-7167-4219-5
2. Menasco, William; Thistlethwaite, Morwen, eds. (2005). Handbook of Knot Theory. Elsevier. ISBN 0080459544. Retrieved 2014-08-18.
3. Schubert, H. Die eindeutige Zerlegbarkeit eines Knotens in Primknoten. S.-B Heidelberger Akad. Wiss. Math.-Nat. Kl. 1949 (1949), 57–104.
4. Schubert, H. Knoten und Vollringe. Acta Math. 90 (1953), 131–286.
5. Waldhausen, F. On irreducible 3-manifolds which are sufficiently large.Ann. of Math. (2) 87 (1968), 56–88.
6. F.Bonahon, L.Siebenmann, New Geometric Splittings of Classical Knots, and the Classification and Symmetries of Arborescent Knots,
7. Motegi, K. Knot Types of Satellite Knots and Twisted Knots. Lectures at Knots '96. World Scientific.
8. Eisenbud, D. Neumann, W. Three-dimensional link theory and invariants of plane curve singularities. Ann. of Math. Stud. 110
9. Budney, R. JSJ-decompositions of knot and link complements in S^3. L'enseignement Mathematique 2e Serie Tome 52 Fasc. 3–4 (2006). arXiv:math.GT/0506523
Knot theory (knots and links)
Hyperbolic
• Figure-eight (41)
• Three-twist (52)
• Stevedore (61)
• 62
• 63
• Endless (74)
• Carrick mat (818)
• Perko pair (10161)
• (−2,3,7) pretzel (12n242)
• Whitehead ($5_{1}^{2}$)
• Borromean rings ($6_{2}^{3}$)
• L10a140
• Conway knot (11n34)
Satellite
• Composite knots
• Granny
• Square
• Knot sum
Torus
• Unknot (01)
• Trefoil (31)
• Cinquefoil (51)
• Septafoil (71)
• Unlink ($0_{1}^{2}$)
• Hopf ($2_{1}^{2}$)
• Solomon's ($4_{1}^{2}$)
Invariants
• Alternating
• Arf invariant
• Bridge no.
• 2-bridge
• Brunnian
• Chirality
• Invertible
• Crosscap no.
• Crossing no.
• Finite type invariant
• Hyperbolic volume
• Khovanov homology
• Genus
• Knot group
• Link group
• Linking no.
• Polynomial
• Alexander
• Bracket
• HOMFLY
• Jones
• Kauffman
• Pretzel
• Prime
• list
• Stick no.
• Tricolorability
• Unknotting no. and problem
Notation
and operations
• Alexander–Briggs notation
• Conway notation
• Dowker–Thistlethwaite notation
• Flype
• Mutation
• Reidemeister move
• Skein relation
• Tabulation
Other
• Alexander's theorem
• Berge
• Braid theory
• Conway sphere
• Complement
• Double torus
• Fibered
• Knot
• List of knots and links
• Ribbon
• Slice
• Sum
• Tait conjectures
• Twist
• Wild
• Writhe
• Surgery theory
• Category
• Commons
| Wikipedia |
Swap regret
Swap regret is a concept from game theory. It is a generalization of regret in a repeated, n-decision game.
Definition
A player's swap-regret is defined to be the following:
${\mbox{swap-regret}}=\sum _{i=1}^{n}\max _{j\leq n}{\frac {1}{T}}\sum _{t=1}^{T}x_{i}^{t}\cdot (p_{j}^{t}-p_{i}^{t}).$
Intuitively, it is how much a player could improve by switching each occurrence of decision i to the best decision j possible in hindsight. The swap regret is always nonnegative. Swap regret is useful for computing correlated equilibria.
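The quantity in the definition can be estimated directly from a play history. The sketch below is only an illustration: the article does not spell out the symbols, so reading $x_{i}^{t}$ as the probability the player places on action $i$ at round $t$ and $p_{i}^{t}$ as the payoff that action $i$ would have earned at round $t$ is an interpretation, and the function name is a hypothetical choice.

```python
import numpy as np

def swap_regret(x, p):
    """Empirical swap regret of a play history.

    x[t, i]: probability placed on action i at round t (rows sum to 1).
    p[t, i]: payoff action i would have earned at round t.
    Returns sum_i max_j (1/T) sum_t x[t, i] * (p[t, j] - p[t, i]).
    """
    T, n = x.shape
    total = 0.0
    for i in range(n):
        # Average weighted gain from rerouting every play of action i to action j.
        gains = (x[:, i, None] * (p - p[:, i, None])).mean(axis=0)  # one entry per j
        total += gains.max()  # the choice j = i contributes 0, so this is never negative
    return total

# Two actions, three rounds: the player always plays action 0, but action 1 pays more.
x = np.array([[1.0, 0.0]] * 3)
p = np.array([[0.0, 1.0]] * 3)
print(swap_regret(x, p))  # 1.0: switching every play of action 0 to action 1 gains 1 per round
```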
References
• Blum, Avrim; Mansour, Yishay (2007), "From external to internal regret", Journal of Machine Learning Research, 8: 1307–1324, MR 2332433.
| Wikipedia |
Coalgebra
In mathematics, coalgebras or cogebras are structures that are dual (in the category-theoretic sense of reversing arrows) to unital associative algebras. The axioms of unital associative algebras can be formulated in terms of commutative diagrams. Turning all arrows around, one obtains the axioms of coalgebras. Every coalgebra, by (vector space) duality, gives rise to an algebra, but not in general the other way. In finite dimensions, this duality goes in both directions (see below).
Coalgebras occur naturally in a number of contexts (for example, representation theory, universal enveloping algebras and group schemes).
There are also F-coalgebras, with important applications in computer science.
Informal discussion
One frequently recurring example of coalgebras occurs in representation theory, and in particular, in the representation theory of the rotation group. A primary task, of practical use in physics, is to obtain combinations of systems with different states of angular momentum and spin. For this purpose, one uses the Clebsch–Gordan coefficients. Given two systems $A,B$ with angular momenta $j_{A}$ and $j_{B}$, a particularly important task is to find the total angular momentum $j_{A}+j_{B}$ given the combined state $|A\rangle \otimes |B\rangle $. This is provided by the total angular momentum operator, which extracts the needed quantity from each side of the tensor product. It can be written as an "external" tensor product
$\mathbf {J} \equiv \mathbf {j} \otimes 1+1\otimes \mathbf {j} $
The word "external" appears here, in contrast to the "internal" tensor product of a tensor algebra. A tensor algebra comes with a tensor product (the internal one); it can also be equipped with a second tensor product, the "external" one, or the coproduct, having the form above. That they are two different products is emphasized by recalling that the internal tensor product of a vector and a scalar is just simple scalar multiplication. The external product keeps them separated. In this setting, the coproduct is the map
$\Delta :J\to J\otimes J$
that takes
$\Delta :\mathbf {j} \mapsto \mathbf {j} \otimes 1+1\otimes \mathbf {j} $
For this example, $J$ can be taken to be one of the spin representations of the rotation group, with the fundamental representation being the common-sense choice. This coproduct can be lifted to all of the tensor algebra, by a simple lemma that applies to free objects: the tensor algebra is a free algebra; therefore, any homomorphism defined on a subset can be extended to the entire algebra. Examining the lifting in detail, one observes that the coproduct behaves as the shuffle product, essentially because the two factors above, the left and right $\mathbf {j} $, must be kept in sequential order during products of multiple angular momenta (rotations are not commutative).
The peculiar form of having the $\mathbf {j} $ appear only once in the coproduct, rather than (for example) defining $\mathbf {j} \mapsto \mathbf {j} \otimes \mathbf {j} $ is in order to maintain linearity: for this example, (and for representation theory in general), the coproduct must be linear. As a general rule, the coproduct in representation theory is reducible; the factors are given by the Littlewood–Richardson rule. (The Littlewood–Richardson rule conveys the same idea as the Clebsch–Gordan coefficients, but in a more general setting).
The formal definition of the coalgebra, below, abstracts away this particular special case, and its requisite properties, into a general setting.
Formal definition
Formally, a coalgebra over a field K is a vector space C over K together with K-linear maps Δ: C → C ⊗ C and ε: C → K such that
1. $(\mathrm {id} _{C}\otimes \Delta )\circ \Delta =(\Delta \otimes \mathrm {id} _{C})\circ \Delta $
2. $(\mathrm {id} _{C}\otimes \varepsilon )\circ \Delta =\mathrm {id} _{C}=(\varepsilon \otimes \mathrm {id} _{C})\circ \Delta $.
(Here ⊗ refers to the tensor product over K and id is the identity function.)
Equivalently, the following two diagrams commute:
In the first diagram, C ⊗ (C ⊗ C) is identified with (C ⊗ C) ⊗ C; the two are naturally isomorphic.[1] Similarly, in the second diagram the naturally isomorphic spaces C, C ⊗ K and K ⊗ C are identified.[2]
The first diagram is the dual of the one expressing associativity of algebra multiplication (called the coassociativity of the comultiplication); the second diagram is the dual of the one expressing the existence of a multiplicative identity. Accordingly, the map Δ is called the comultiplication (or coproduct) of C and ε is the counit of C.
Examples
Take an arbitrary set S and form the K-vector space C = K(S) with basis S, as follows. The elements of this vector space C are those functions from S to K that map all but finitely many elements of S to zero; identify the element s of S with the function that maps s to 1 and all other elements of S to 0. Define
Δ(s) = s ⊗ s and ε(s) = 1 for all s in S.
By linearity, both Δ and ε can then uniquely be extended to all of C. The vector space C becomes a coalgebra with comultiplication Δ and counit ε.
As a second example, consider the polynomial ring K[X] in one indeterminate X. This becomes a coalgebra (the divided power coalgebra[3][4]) if for all n ≥ 0 one defines:
$\Delta (X^{n})=\sum _{k=0}^{n}{\dbinom {n}{k}}X^{k}\otimes X^{n-k},$
$\varepsilon (X^{n})={\begin{cases}1&{\mbox{if }}n=0\\0&{\mbox{if }}n>0\end{cases}}$
Again, because of linearity, this suffices to define Δ and ε uniquely on all of K[X]. Now K[X] is both a unital associative algebra and a coalgebra, and the two structures are compatible. Objects like this are called bialgebras, and in fact most of the important coalgebras considered in practice are bialgebras.
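The divided power comultiplication can be computed mechanically from the formula above. The following sketch is only an illustration: the encoding of a polynomial as a dictionary from exponents to coefficients, and of an elementary tensor $X^{k}\otimes X^{n-k}$ as the exponent pair (k, n-k), are choices made for the example, not part of the definition.

```python
from math import comb
from collections import defaultdict

def comultiply(poly):
    """Divided power comultiplication on K[X].

    poly maps an exponent n to the coefficient of X^n; the result maps a
    pair (k, n-k), standing for X^k (tensor) X^(n-k), to its coefficient,
    using Delta(X^n) = sum_k C(n, k) X^k (tensor) X^(n-k).
    """
    result = defaultdict(int)
    for n, coeff in poly.items():
        for k in range(n + 1):
            result[(k, n - k)] += coeff * comb(n, k)
    return dict(result)

def counit(poly):
    """epsilon(X^n) = 1 if n = 0 and 0 otherwise, extended linearly."""
    return poly.get(0, 0)

# Delta(X^2) = X^0 ⊗ X^2 + 2 X^1 ⊗ X^1 + X^2 ⊗ X^0
print(comultiply({2: 1}))    # {(0, 2): 1, (1, 1): 2, (2, 0): 1}
print(counit({0: 5, 3: 7}))  # 5
```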
Examples of coalgebras include the tensor algebra, the exterior algebra, Hopf algebras and Lie bialgebras. Unlike the polynomial case above, none of these are commutative. Therefore, the coproduct becomes the shuffle product, rather than the divided power structure given above. The shuffle product is appropriate, because it preserves the order of the terms appearing in the product, as is needed by non-commutative algebras.
The singular homology of a topological space forms a graded coalgebra whenever the Künneth isomorphism holds, e.g. if the coefficients are taken to be a field.[5]
If C is the K-vector space with basis {s, c}, consider Δ: C → C ⊗ C is given by
Δ(s) = s ⊗ c + c ⊗ s
Δ(c) = c ⊗ c − s ⊗ s
and ε: C → K is given by
ε(s) = 0
ε(c) = 1
In this situation, (C, Δ, ε) is a coalgebra known as trigonometric coalgebra.[6][7]
For a locally finite poset P with set of intervals J, define the incidence coalgebra C with J as basis. The comultiplication and counit are defined as
$\Delta [x,z]=\sum _{y\in [x,z]}[x,y]\otimes [y,z]{\text{ for }}x\leq z\ .$
$\varepsilon [x,y]={\begin{cases}1&{\text{if }}x=y,\\0&{\text{if }}x\neq y.\end{cases}}$
The intervals of length zero correspond to points of P and are group-like elements.[8]
Finite dimensions
In finite dimensions, the duality between algebras and coalgebras is closer: the dual of a finite-dimensional (unital associative) algebra is a coalgebra, while the dual of a finite-dimensional coalgebra is a (unital associative) algebra. In general, the dual of an algebra may not be a coalgebra.
The key point is that in finite dimensions, (A ⊗ A)∗ and A∗ ⊗ A∗ are isomorphic.
To distinguish these: in general, algebra and coalgebra are dual notions (meaning that their axioms are dual: reverse the arrows), while for finite dimensions, they are also dual objects (meaning that a coalgebra is the dual object of an algebra and conversely).
If A is a finite-dimensional unital associative K-algebra, then its K-dual A∗ consisting of all K-linear maps from A to K is a coalgebra. The multiplication of A can be viewed as a linear map A ⊗ A → A, which when dualized yields a linear map A∗ → (A ⊗ A)∗. In the finite-dimensional case, (A ⊗ A)∗ is naturally isomorphic to A∗ ⊗ A∗, so this defines a comultiplication on A∗. The counit of A∗ is given by evaluating linear functionals at 1.
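In coordinates, this dualization amounts to reading the structure constants of A backwards. The sketch below is only an illustration and assumes the algebra is presented by its structure constants $m_{ij}^{k}$, meaning $e_{i}e_{j}=\sum _{k}m_{ij}^{k}e_{k}$, together with the coordinates of its unit; the function name and this encoding are choices made for the example.

```python
import numpy as np

def dual_coalgebra(m, unit):
    """Comultiplication and counit of A* for a finite-dimensional algebra A.

    m[i, j, k]: structure constants, e_i e_j = sum_k m[i, j, k] e_k.
    unit[k]:    coordinates of 1_A in the basis (e_k).
    On the dual basis (e^k): Delta(e^k) = sum_{i,j} m[i, j, k] e^i (tensor) e^j,
    since Delta(e^k)(e_i (tensor) e_j) = e^k(e_i e_j); and epsilon(e^k) = e^k(1_A).
    """
    delta = np.transpose(m, (2, 0, 1))  # delta[k, i, j] = coefficient of e^i ⊗ e^j in Delta(e^k)
    epsilon = unit.copy()               # epsilon[k] = coordinate of 1_A along e_k
    return delta, epsilon

# Example: A = K[x]/(x^2) with basis (1, x): e_0 e_0 = e_0, e_0 e_1 = e_1 e_0 = e_1, e_1 e_1 = 0.
m = np.zeros((2, 2, 2))
m[0, 0, 0] = m[0, 1, 1] = m[1, 0, 1] = 1.0
delta, epsilon = dual_coalgebra(m, unit=np.array([1.0, 0.0]))
print(delta[1])  # Delta(e^1) = e^0 ⊗ e^1 + e^1 ⊗ e^0
print(epsilon)   # [1. 0.]
```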
Sweedler notation
When working with coalgebras, a certain notation for the comultiplication simplifies the formulas considerably and has become quite popular. Given an element c of the coalgebra (C, Δ, ε), there exist elements $c_{(1)}^{(i)}$ and $c_{(2)}^{(i)}$ in C such that
$\Delta (c)=\sum _{i}c_{(1)}^{(i)}\otimes c_{(2)}^{(i)}$
Note that neither the number of terms in this sum, nor the exact values of each $c_{(1)}^{(i)}$ or $c_{(2)}^{(i)}$, are uniquely determined by $c$; there is only a promise that there are finitely many terms, and that the full sum of all these terms $c_{(1)}^{(i)}\otimes c_{(2)}^{(i)}$ has the right value $\Delta (c)$.
In Sweedler's notation,[9] (so named after Moss Sweedler), this is abbreviated to
$\Delta (c)=\sum _{(c)}c_{(1)}\otimes c_{(2)}.$
The fact that ε is a counit can then be expressed with the following formula
$c=\sum _{(c)}\varepsilon (c_{(1)})c_{(2)}=\sum _{(c)}c_{(1)}\varepsilon (c_{(2)}).\;$
Here it is understood that the sums have the same number of terms, and the same lists of values for $c_{(1)}$ and $c_{(2)}$, as in the previous sum for $\Delta (c)$.
The coassociativity of Δ can be expressed as
$\sum _{(c)}c_{(1)}\otimes \left(\sum _{(c_{(2)})}(c_{(2)})_{(1)}\otimes (c_{(2)})_{(2)}\right)=\sum _{(c)}\left(\sum _{(c_{(1)})}(c_{(1)})_{(1)}\otimes (c_{(1)})_{(2)}\right)\otimes c_{(2)}.$
In Sweedler's notation, both of these expressions are written as
$\sum _{(c)}c_{(1)}\otimes c_{(2)}\otimes c_{(3)}.$
Some authors omit the summation symbols as well; in this sumless Sweedler notation, one writes
$\Delta (c)=c_{(1)}\otimes c_{(2)}$
and
$c=\varepsilon (c_{(1)})c_{(2)}=c_{(1)}\varepsilon (c_{(2)}).\;$
Whenever a variable with lowered and parenthesized index is encountered in an expression of this kind, a summation symbol for that variable is implied.
Further concepts and facts
A coalgebra (C, Δ, ε) is called co-commutative if $\sigma \circ \Delta =\Delta $, where σ: C ⊗ C → C ⊗ C is the K-linear map defined by σ(c ⊗ d) = d ⊗ c for all c, d in C. In Sweedler's sumless notation, C is co-commutative if and only if
$c_{(1)}\otimes c_{(2)}=c_{(2)}\otimes c_{(1)}$
for all c in C. (It's important to understand that the implied summation is significant here: it is not required that all the summands are pairwise equal, only that the sums are equal, a much weaker requirement.)
A group-like element (or set-like element) is an element x such that Δ(x) = x ⊗ x and ε(x) = 1. Contrary to what this naming convention suggests, the group-like elements do not always form a group; in general they only form a set. The group-like elements of a Hopf algebra do form a group. A primitive element is an element x that satisfies Δ(x) = x ⊗ 1 + 1 ⊗ x. The primitive elements of a Hopf algebra form a Lie algebra.[10][11]
If (C1, Δ1, ε1) and (C2, Δ2, ε2) are two coalgebras over the same field K, then a coalgebra morphism from C1 to C2 is a K-linear map f : C1 → C2 such that $(f\otimes f)\circ \Delta _{1}=\Delta _{2}\circ f$ and $\epsilon _{2}\circ f=\epsilon _{1}$. In Sweedler's sumless notation, the first of these properties may be written as:
$f(c_{(1)})\otimes f(c_{(2)})=f(c)_{(1)}\otimes f(c)_{(2)}.$
The composition of two coalgebra morphisms is again a coalgebra morphism, and the coalgebras over K together with this notion of morphism form a category.
A linear subspace I in C is called a coideal if I ⊆ ker(ε) and Δ(I) ⊆ I ⊗ C + C ⊗ I. In that case, the quotient space C/I becomes a coalgebra in a natural fashion.
A subspace D of C is called a subcoalgebra if Δ(D) ⊆ D ⊗ D; in that case, D is itself a coalgebra, with the restriction of ε to D as counit.
The kernel of every coalgebra morphism f : C1 → C2 is a coideal in C1, and the image is a subcoalgebra of C2. The common isomorphism theorems are valid for coalgebras, so for instance C1/ker(f) is isomorphic to im(f).
If A is a finite-dimensional unital associative K-algebra, then A∗ is a finite-dimensional coalgebra, and indeed every finite-dimensional coalgebra arises in this fashion from some finite-dimensional algebra (namely from the coalgebra's K-dual). Under this correspondence, the commutative finite-dimensional algebras correspond to the cocommutative finite-dimensional coalgebras. So in the finite-dimensional case, the theories of algebras and of coalgebras are dual; studying one is equivalent to studying the other. However, relations diverge in the infinite-dimensional case: while the K-dual of every coalgebra is an algebra, the K-dual of an infinite-dimensional algebra need not be a coalgebra.
Every coalgebra is the sum of its finite-dimensional subcoalgebras, something that is not true for algebras. Abstractly, coalgebras are generalizations, or duals, of finite-dimensional unital associative algebras.
Corresponding to the concept of representation for algebras is a corepresentation or comodule.
See also
• Cofree coalgebra
• Measuring coalgebra
• Dialgebra
References
1. Yokonuma (1992). "Prop. 1.7". Tensor spaces and exterior algebra. p. 12.
2. Yokonuma (1992). "Prop. 1.4". Tensor spaces and exterior algebra. p. 10.
3. See also Dăscălescu, Năstăsescu & Raianu (2001). Hopf Algebras: An introduction. p. 3.
4. See also Raianu, Serban. Coalgebras from Formulas Archived 2010-05-29 at the Wayback Machine, p. 2.
5. "Lecture notes for reference" (PDF). Archived from the original (PDF) on 2012-02-24. Retrieved 2008-10-31.
6. See also Dăscălescu, Năstăsescu & Raianu (2001). Hopf Algebras: An introduction. p. 4., and Dăscălescu, Năstăsescu & Raianu (2001). Hopf Algebras: An introduction. p. 55., Ex. 1.1.5.
7. Raianu, Serban. Coalgebras from Formulas Archived 2010-05-29 at the Wayback Machine, p. 1.
8. Montgomery (1993) p.61
9. Underwood (2011) p.35
10. Mikhalev, Aleksandr Vasilʹevich; Pilz, Günter, eds. (2002). The Concise Handbook of Algebra. Springer-Verlag. p. 307, C.42. ISBN 0792370724.
11. Abe, Eiichi (2004). Hopf Algebras. Cambridge Tracts in Mathematics. Vol. 74. Cambridge University Press. p. 59. ISBN 0-521-60489-3.
Further reading
• Block, Richard E.; Leroux, Pierre (1985), "Generalized dual coalgebras of algebras, with applications to cofree coalgebras", Journal of Pure and Applied Algebra, 36 (1): 15–21, doi:10.1016/0022-4049(85)90060-X, ISSN 0022-4049, MR 0782637, Zbl 0556.16005
• Dăscălescu, Sorin; Năstăsescu, Constantin; Raianu, Șerban (2001), Hopf Algebras: An introduction, Pure and Applied Mathematics, vol. 235 (1st ed.), New York, NY: Marcel Dekker, ISBN 0-8247-0481-9, Zbl 0962.16026.
• Gómez-Torrecillas, José (1998), "Coalgebras and comodules over a commutative ring", Revue Roumaine de Mathématiques Pures et Appliquées, 43: 591–603
• Hazewinkel, Michiel (2003), "Cofree coalgebras and multivariable recursiveness", Journal of Pure and Applied Algebra, 183 (1): 61–103, doi:10.1016/S0022-4049(03)00013-6, ISSN 0022-4049, MR 1992043, Zbl 1048.16022
• Montgomery, Susan (1993), Hopf algebras and their actions on rings, Regional Conference Series in Mathematics, vol. 82, Providence, RI: American Mathematical Society, ISBN 0-8218-0738-2, Zbl 0793.16029
• Underwood, Robert G. (2011), An introduction to Hopf algebras, Berlin: Springer-Verlag, ISBN 978-0-387-72765-3, Zbl 1234.16022
• Yokonuma, Takeo (1992), Tensor spaces and exterior algebra, Translations of mathematical monographs, vol. 108, American Mathematical Society, ISBN 0-8218-4564-0, Zbl 0754.15028
• Chapter III, section 11 in Bourbaki, Nicolas (1989). Algebra. Springer-Verlag. ISBN 0-387-19373-1.
External links
• William Chin: A brief introduction to coalgebra representation theory
| Wikipedia |
Sweep line algorithm
In computational geometry, a sweep line algorithm or plane sweep algorithm is an algorithmic paradigm that uses a conceptual sweep line or sweep surface to solve various problems in Euclidean space. It is one of the critical techniques in computational geometry.
The idea behind algorithms of this type is to imagine that a line (often a vertical line) is swept or moved across the plane, stopping at some points. Geometric operations are restricted to geometric objects that either intersect or are in the immediate vicinity of the sweep line whenever it stops, and the complete solution is available once the line has passed over all objects.
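As a toy illustration of the paradigm (this is not the Shamos–Hoey or Bentley–Ottmann algorithm mentioned below, only the one-dimensional skeleton of the idea), one can sweep a point along a line, stopping at interval endpoints, to find the maximum number of intervals covering any single position:

```python
def max_overlap(intervals):
    """Sweep a point along the line, stopping at interval endpoints,
    and track how many intervals currently contain the sweep point."""
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))  # interval begins: one more active interval
        events.append((hi, -1))  # interval ends: one fewer active interval
    # At equal coordinates, process endings before beginnings so that
    # intervals that merely touch are not counted as overlapping.
    events.sort(key=lambda e: (e[0], e[1]))
    active = best = 0
    for _, delta in events:
        active += delta
        best = max(best, active)
    return best

print(max_overlap([(0, 3), (1, 4), (2, 5), (6, 7)]))  # 3
```

The geometric operations at each stop involve only the intervals crossing the sweep point; two- and higher-dimensional sweep line algorithms follow the same event-driven pattern with richer data structures maintained along the sweep line.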
History
This approach may be traced to scanline algorithms of rendering in computer graphics, and was later exploited in early algorithms of integrated circuit layout design, in which a geometric description of an IC was processed in parallel strips because the entire description could not fit into memory.
Applications
Application of this approach led to a breakthrough in the computational complexity of geometric algorithms when Shamos and Hoey presented algorithms for line segment intersection in the plane; in particular, they described how combining the scanline approach with efficient data structures (self-balancing binary search trees) makes it possible to detect whether there are intersections among N segments in the plane in time complexity of O(N log N).[1] The closely related Bentley–Ottmann algorithm uses a sweep line technique to report all K intersections among any N segments in the plane in time complexity of O((N + K) log N) and space complexity of O(N).[2]
Since then, this approach has been used to design efficient algorithms for a number of problems, such as the construction of the Voronoi diagram (Fortune's algorithm) and the Delaunay triangulation or boolean operations on polygons.
Generalizations and extensions
Topological sweeping is a form of plane sweep with a simple ordering of processing points, which avoids the necessity of completely sorting the points; it allows some sweep line algorithms to be performed more efficiently.
The rotating calipers technique for designing geometric algorithms may also be interpreted as a form of the plane sweep, in the projective dual of the input plane: a form of projective duality transforms the slope of a line in one plane into the x-coordinate of a point in the dual plane, so the progression through lines in sorted order by their slope as performed by a rotating calipers algorithm is dual to the progression through points sorted by their x-coordinates in a plane sweep algorithm.
The sweeping approach may be generalised to higher dimensions.[3]
References
1. Shamos, Michael I.; Hoey, Dan (1976), "Geometric intersection problems", Proc. 17th IEEE Symp. Foundations of Computer Science (FOCS '76), pp. 208–215, doi:10.1109/SFCS.1976.16, S2CID 124804.
2. Souvaine, Diane (2008), Line Segment Intersection Using a Sweep Line Algorithm (PDF).
3. Sinclair, David (2016-02-11). "A 3D Sweep Hull Algorithm for computing Convex Hulls and Delaunay Triangulation". arXiv:1602.04707 [cs.CG].
Well-known computer science algorithms
Categories
• Minimax
• Sorting
• Search
• Streaming
Paradigms
• Backtracking
• Brute-force search
• Divide and conquer
• Dynamic programming
• Greedy
• Prune and search
• Sweep line
• Recursion
Other
• Binary search
• Breadth-first search
• Depth-first search
• Topological sorting
• List of algorithms
| Wikipedia |
Swendsen–Wang algorithm
The Swendsen–Wang algorithm is the first non-local or cluster algorithm for Monte Carlo simulation of large systems near criticality. It was introduced by Robert Swendsen and Jian-Sheng Wang in 1987 at Carnegie Mellon.
The original algorithm was designed for the Ising and Potts models, and it was later generalized to other systems as well, such as the XY model (via the Wolff algorithm) and particles of fluids. The key ingredient was the random cluster model, a representation of the Ising or Potts model through percolation models of connecting bonds, due to Fortuin and Kasteleyn. It has been generalized by Barbu and Zhu[1] to arbitrary sampling probabilities by viewing it as a Metropolis–Hastings algorithm and computing the acceptance probability of the proposed Monte Carlo move.
Motivation
The problem of the critical slowing-down affecting local processes is of fundamental importance in the study of second-order phase transitions (like the ferromagnetic transition in the Ising model), as increasing the size of the system in order to reduce finite-size effects has the disadvantage of requiring a far larger number of moves to reach thermal equilibrium. Indeed, the correlation time $\tau $ usually increases as $L^{z}$ with $z\simeq 2$ or greater; since, to be accurate, the simulation time must be $t\gg \tau $, this is a major limitation in the size of the systems that can be studied through local algorithms. The SW algorithm was the first to produce unusually small values for the dynamical critical exponents: $z=0.35$ for the 2D Ising model ($z=2.125$ for standard simulations); $z=0.75$ for the 3D Ising model, as opposed to $z=2.0$ for standard simulations.
Description
The algorithm is non-local in the sense that a single sweep updates a collection of spin variables based on the Fortuin–Kasteleyn representation. The update is done on a "cluster" of spin variables connected by open bond variables that are generated through a percolation process, based on the interaction states of the spins.
Consider a typical ferromagnetic Ising model with only nearest-neighbor interaction.
• Starting from a given configuration of spins, we associate to each pair of nearest neighbours on sites $n,m$ a random variable $b_{n,m}\in \lbrace 0,1\rbrace $ which is interpreted in the following way: if $b_{n,m}=0$ then there is no link between the sites $n$ and $m$ (the bond is closed); if $b_{n,m}=1$ then there is a link connecting the spins $\sigma _{n}{\text{ and }}\sigma _{m}$(the bond is open). These values are assigned according to the following (conditional) probability distribution:
$P\left[b_{n,m}=0|\sigma _{n}\neq \sigma _{m}\right]=1$;
$P\left[b_{n,m}=1|\sigma _{n}\neq \sigma _{m}\right]=0$;
$P\left[b_{n,m}=0|\sigma _{n}=\sigma _{m}\right]=e^{-2\beta J_{nm}}$;
$P\left[b_{n,m}=1|\sigma _{n}=\sigma _{m}\right]=1-e^{-2\beta J_{nm}}$;
where $J_{nm}>0$ is the ferromagnetic coupling strength.
This probability distribution has been derived in the following way: the Hamiltonian of the Ising model is
$H[\sigma ]=\sum \limits _{<i,j>}-J_{i,j}\sigma _{i}\sigma _{j}$,
and the partition function is
$Z=\sum \limits _{\lbrace \sigma \rbrace }e^{-\beta H[\sigma ]}$.
Consider the interaction between a pair of selected sites $n$ and $m$ and eliminate it from the total Hamiltonian, defining $H_{nm}[\sigma ]=\sum \limits _{<i,j>\neq <n,m>}-J_{i,j}\sigma _{i}\sigma _{j}.$
Define also the restricted sums:
$Z_{n,m}^{same}=\sum \limits _{\lbrace \sigma \rbrace }e^{-\beta H_{nm}[\sigma ]}\delta _{\sigma _{n},\sigma _{m}}$;
$Z_{n,m}^{diff}=\sum \limits _{\lbrace \sigma \rbrace }e^{-\beta H_{nm}[\sigma ]}\left(1-\delta _{\sigma _{n},\sigma _{m}}\right).$
In terms of these restricted sums, $Z=e^{\beta J_{nm}}Z_{n,m}^{same}+e^{-\beta J_{nm}}Z_{n,m}^{diff}.$
Introduce the quantity
$Z_{nm}^{ind}=Z_{n,m}^{same}+Z_{n,m}^{diff}$;
the partition function can be rewritten as
$Z=\left(e^{\beta J_{nm}}-e^{-\beta J_{nm}}\right)Z_{n,m}^{same}+e^{-\beta J_{nm}}Z_{n,m}^{ind}.$
Since the first term contains a restriction on the spin values whereas there is no restriction in the second term, the weighting factors (properly normalized) can be interpreted as probabilities of forming/not forming a link between the sites: $P_{<n,m>\;link}=1-e^{-2\beta J_{nm}}.$ The process can be easily adapted to antiferromagnetic spin systems, as it is sufficient to eliminate $Z_{n,m}^{same}$ in favor of $Z_{n,m}^{diff}$ (as suggested by the change of sign in the interaction constant).
• After assigning the bond variables, we identify the same-spin clusters formed by connected sites and flip all the spins in each cluster together with probability 1/2. At the following time step we have a new starting Ising configuration, which will produce a new clustering and a new collective spin-flip.
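For concreteness, the following Python sketch implements one Swendsen–Wang sweep for the two-dimensional ferromagnetic Ising model with uniform coupling J and periodic boundaries. The use of a union–find structure for the cluster search, and the lattice size, temperature and number of sweeps in the usage example, are implementation choices for this illustration rather than part of the original algorithm.

```python
import numpy as np

def sw_sweep(spins, beta, J=1.0, rng=None):
    """One Swendsen-Wang update of a 2D Ising configuration (periodic boundaries)."""
    if rng is None:
        rng = np.random.default_rng()
    L = spins.shape[0]
    parent = list(range(L * L))            # union-find forest over the lattice sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    p_open = 1.0 - np.exp(-2.0 * beta * J)  # bond-opening probability for aligned spins
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):            # visit each bond once (right, down)
                nx, ny = (x + dx) % L, (y + dy) % L
                if spins[x, y] == spins[nx, ny] and rng.random() < p_open:
                    union(x * L + y, nx * L + ny)

    flip = {}                                          # one coin toss per cluster root
    for i in range(L * L):
        root = find(i)
        if root not in flip:
            flip[root] = rng.random() < 0.5
        if flip[root]:
            spins[i // L, i % L] *= -1                 # flip the whole cluster together
    return spins

# usage: a few sweeps at the 2D critical coupling
beta_c = 0.5 * np.log(1.0 + np.sqrt(2.0))
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(20):
    sw_sweep(spins, beta_c, rng=rng)
print("magnetization per spin:", spins.mean())
```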
Correctness
It can be shown that this algorithm leads to equilibrium configurations. To show this, we interpret the algorithm as a Markov chain, and show that the chain is both ergodic (when used together with other algorithms) and satisfies detailed balance, such that the equilibrium Boltzmann distribution is equal to the stationary distribution of the chain.
Ergodicity means that it is possible to move from any initial state to any final state in a finite number of updates. It has been shown that the SW algorithm is not ergodic in general (in the thermodynamic limit).[2] Thus, in practice, the SW algorithm is usually used in conjunction with single spin-flip algorithms such as the Metropolis–Hastings algorithm to achieve ergodicity.
The SW algorithm does, however, satisfy detailed balance. To show this, we note that every transition between two Ising spin states must pass through some bond configuration in the percolation representation. Fix a particular bond configuration: in comparing the probabilities associated with it, what matters is the number of factors $q=e^{-2\beta J}$, one for each closed (missing) bond between neighbouring spins with the same value; the probability of going to any particular Ising configuration compatible with a given bond configuration is uniform (say $p$). The ratio of the transition probabilities of going from one state to another is therefore
${\frac {P_{\lbrace \sigma \rbrace \rightarrow \lbrace \sigma '\rbrace }}{P_{\lbrace \sigma '\rbrace \rightarrow \lbrace \sigma \rbrace }}}={\frac {Pr\left(\lbrace \sigma '\rbrace |B.C.\right)Pr\left(B.C.|\lbrace \sigma \rbrace \right)}{Pr\left(\lbrace \sigma \rbrace |B.C.\right)Pr\left(B.C.|\lbrace \sigma '\rbrace \right)}}={\frac {p\cdot \exp \left[-2\beta \sum \limits _{<l,m>}\delta _{\sigma _{l},\sigma _{m}}J_{lm}\right]}{p\cdot \exp \left[-2\beta \sum \limits _{<l,m>}\delta _{\sigma '_{l},\sigma '_{m}}J_{lm}\right]}}=e^{-\beta \Delta E}$
since $\Delta E=-\sum \limits _{<l,m>}J_{lm}\left(\sigma '_{l}\sigma '_{m}-\sigma _{l}\sigma _{m}\right)=-\sum \limits _{<l,m>}J_{lm}\left[\delta _{\sigma '_{l},\sigma '_{m}}-\left(1-\delta _{\sigma '_{l},\sigma '_{m}}\right)-\delta _{\sigma _{l},\sigma _{m}}+\left(1-\delta _{\sigma _{l},\sigma _{m}}\right)\right]=-2\sum \limits _{<l,m>}J_{lm}\left(\delta _{\sigma '_{l},\sigma '_{m}}-\delta _{\sigma _{l},\sigma _{m}}\right)$.
This is valid for every bond configuration the system can pass through during its evolution, so detailed balance is satisfied for the total transition probability. This proves that the algorithm is correct.
Efficiency
Although not analytically clear from the original paper, the reason why all the values of z obtained with the SW algorithm are much lower than the exact lower bound for single-spin-flip algorithms ($z\geq \gamma /\nu $) is that the correlation length divergence is strictly related to the formation of percolation clusters, which are flipped together. In this way the relaxation time is significantly reduced. Another way to view this is through the correspondence between the spin statistics and cluster statistics in the Edwards-Sokal representation.[3]
Generalizations
The algorithm is not efficient in simulating frustrated systems, because the correlation length of the clusters is larger than the correlation length of the spin model in the presence of frustrated interactions.[4] There are currently two main approaches to this problem, which extend the efficiency of cluster algorithms to frustrated systems.
The first approach is to extend the bond-formation rules to more non-local cells, and the second is to generate clusters based on a more relevant order parameter. In the first case, we have the KBD algorithm for the fully-frustrated Ising model, where the decision to open bonds is made on each plaquette, arranged in a checkerboard pattern on the square lattice.[5] In the second case, we have the replica cluster move for low-dimensional spin glasses, where the clusters are generated based on spin overlaps, which is believed to be the relevant order parameter.
See also
• Random cluster model
• Monte Carlo method
• Wolff algorithm
• http://www.hpjava.org/theses/shko/thesis_paper/node69.html
• http://www-fcs.acs.i.kyoto-u.ac.jp/~harada/monte-en.html
References
1. Barbu, Adrian; Zhu, Song-Chun (August 2005). "Generalizing Swendsen-Wang to sampling arbitrary posterior probabilities". IEEE Transactions on Pattern Analysis and Machine Intelligence. 27 (8): 1239–1253. doi:10.1109/TPAMI.2005.161. ISSN 0162-8828. PMID 16119263. S2CID 410716.
2. Gore, Vivek K.; Jerrum, Mark R. (1999-10-01). "The Swendsen–Wang Process Does Not Always Mix Rapidly". Journal of Statistical Physics. 97 (1): 67–86. Bibcode:1999JSP....97...67G. doi:10.1023/A:1004610900745. ISSN 1572-9613. S2CID 189821827.
3. Edwards, Robert G.; Sokal, Alan D. (1988-09-15). "Generalization of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm". Physical Review D. 38 (6): 2009–2012. Bibcode:1988PhRvD..38.2009E. doi:10.1103/PhysRevD.38.2009. PMID 9959355.
4. Cataudella, V.; Franzese, G.; Nicodemi, M.; Scala, A.; Coniglio, A. (1994-03-07). "Critical clusters and efficient dynamics for frustrated spin models". Physical Review Letters. 72 (10): 1541–1544. Bibcode:1994PhRvL..72.1541C. doi:10.1103/PhysRevLett.72.1541. hdl:2445/13250. PMID 10055635.
5. Kandel, Daniel; Ben-Av, Radel; Domany, Eytan (1990-08-20). "Cluster dynamics for fully frustrated systems". Physical Review Letters. 65 (8): 941–944. Bibcode:1990PhRvL..65..941K. doi:10.1103/PhysRevLett.65.941. PMID 10043065.
• Swendsen, Robert H.; Wang, Jian-Sheng (1987-01-12). "Nonuniversal critical dynamics in Monte Carlo simulations". Physical Review Letters. American Physical Society (APS). 58 (2): 86–88. Bibcode:1987PhRvL..58...86S. doi:10.1103/physrevlett.58.86. ISSN 0031-9007. PMID 10034599.
• Kasteleyn, P. W.; Fortuin, C. M. (1969). J. Phys. Soc. Jpn. Suppl. 26s: 11.
• Fortuin, C.M.; Kasteleyn, P.W. (1972). "On the random-cluster model". Physica. Elsevier BV. 57 (4): 536–564. Bibcode:1972Phy....57..536F. doi:10.1016/0031-8914(72)90045-6. ISSN 0031-8914.
• Wang, Jian-Sheng; Swendsen, Robert H. (1990). "Cluster Monte Carlo algorithms". Physica A: Statistical Mechanics and Its Applications. Elsevier BV. 167 (3): 565–579. Bibcode:1990PhyA..167..565W. doi:10.1016/0378-4371(90)90275-w. ISSN 0378-4371.
• Barbu, A. (2005). "Generalizing Swendsen-Wang to sampling arbitrary posterior probabilities". IEEE Transactions on Pattern Analysis and Machine Intelligence. Institute of Electrical and Electronics Engineers (IEEE). 27 (8): 1239–1253. doi:10.1109/tpami.2005.161. ISSN 0162-8828. PMID 16119263. S2CID 410716.
Swift–Hohenberg equation
The Swift–Hohenberg equation (named after Jack B. Swift and Pierre Hohenberg) is a partial differential equation noted for its pattern-forming behaviour. It takes the form
${\frac {\partial u}{\partial t}}=ru-(1+\nabla ^{2})^{2}u+N(u)$
where u = u(x, t) or u = u(x, y, t) is a scalar function defined on the line or the plane, r is a real bifurcation parameter, and N(u) is some smooth nonlinearity.
The equation is named after the authors of the paper in which it was derived from the equations for thermal convection.[1]
The webpage of Michael Cross[2] contains some numerical integrators which demonstrate the behaviour of several Swift–Hohenberg-like systems.
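As an illustration of such an integration, the sketch below evolves a one-dimensional Swift–Hohenberg equation with the commonly used cubic nonlinearity N(u) = −u³ (an assumption of this sketch, since N is left unspecified above), using a semi-implicit pseudo-spectral scheme: the linear term is treated implicitly in Fourier space and the nonlinearity explicitly. The domain size, grid, time step and value of r are arbitrary demonstration choices.

```python
import numpy as np

# Semi-implicit pseudo-spectral integrator for u_t = r*u - (1 + d2/dx2)^2 u - u^3
# on a periodic domain of length Lx.
N, Lx, r, dt, steps = 256, 32 * np.pi, 0.2, 0.05, 4000
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)   # wavenumbers
Lop = r - (1.0 - k**2) ** 2                     # Fourier symbol of the linear operator

u = 0.1 * np.random.default_rng(1).standard_normal(N)   # small random initial condition
for _ in range(steps):
    u_hat = np.fft.fft(u)
    nl_hat = np.fft.fft(-u**3)                  # explicit treatment of the nonlinearity
    u_hat = (u_hat + dt * nl_hat) / (1.0 - dt * Lop)   # implicit step for the linear part
    u = np.real(np.fft.ifft(u_hat))

print("pattern amplitude max|u| after integration:", np.abs(u).max())
```

Starting from small random data, the field organises itself into a nearly periodic pattern with wavelength close to 2π, which is the pattern-forming behaviour the equation is noted for.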
References
1. Swift, J.; Hohenberg, P. C. (1977). "Hydrodynamic fluctuations at the convective instability". Phys. Rev. A. 15 (1): 319–328. Bibcode:1977PhRvA..15..319S. doi:10.1103/PhysRevA.15.319.
2. Java applet demonstrations
Swing equation
A power system consists of a number of synchronous machines operating in synchronism under all operating conditions. Under normal operating conditions, the relative position of the rotor axis and the resultant magnetic field axis is fixed. The angle between the two is known as the power angle or torque angle. During any disturbance, the rotor decelerates or accelerates with respect to the synchronously rotating air-gap magnetomotive force, creating relative motion. The equation describing this relative motion is known as the swing equation; it is a non-linear second-order differential equation that describes the swing of the rotor of a synchronous machine. The power exchange between the mechanical rotor and the electrical grid due to the rotor swing (acceleration and deceleration) is called the inertial response.
Derivation
A synchronous generator is driven by a prime mover. The equation governing the rotor motion is given by:
$J{\frac {d^{2}{\theta _{\text{m}}}}{dt^{2}}}=T_{a}=T_{\text{m}}-T_{\text{e}}$ N-m
Where:
• $J$ is the total moment of inertia of the rotor mass in kg-m2
• $\theta _{\text{m}}$ is the angular position of the rotor with respect to a stationary axis in (rad)
• $t$ is time in seconds (s)
• $T_{\text{m}}$ is the mechanical torque supplied by the prime mover in N-m
• $T_{\text{e}}$ is the electrical torque output of the alternator in N-m
• $T_{a}$ is the net accelerating torque, in N-m
Neglecting losses, the difference between the mechanical and electrical torque gives the net accelerating torque Ta. In the steady state, the electrical torque is equal to the mechanical torque and hence the accelerating power is zero.[1] During this period the rotor moves at synchronous speed ωs in rad/s. The electric torque Te corresponds to the net air-gap power in the machine and thus accounts for the total output power of the generator plus I2R losses in the armature winding.
The angular position θ is measured with a stationary reference frame. Representing it with respect to the synchronously rotating frame gives:
$\theta _{\text{m}}=\omega _{\text{s}}t+\delta _{\text{m}}$
where δm is the angular position in rad with respect to the synchronously rotating reference frame. The derivative of the above equation with respect to time is:
${\frac {d\theta _{\text{m}}}{dt}}=\omega _{\text{s}}+{\frac {d\delta _{\text{m}}}{dt}}$
The above equations show that the rotor angular speed is equal to the synchronous speed only when dδm/dt is equal to zero. Therefore, the term dδm/dt represents the deviation of the rotor speed from synchronism in rad/s.
By taking the second order derivative of the above equation it becomes:
${\frac {d^{2}\theta _{\text{m}}}{dt^{2}}}={\frac {d^{2}\delta _{\text{m}}}{dt^{2}}}$
Substituting the above equation in the equation of rotor motion gives:
$J{\frac {d^{2}{\delta _{\text{m}}}}{dt^{2}}}=T_{a}=T_{\text{m}}-T_{\text{e}}$ N-m
Introducing the angular velocity $\omega _{\text{m}}={\frac {d\theta _{\text{m}}}{dt}}$ of the rotor for notational convenience, and multiplying both sides by ωm,
$J\omega _{\text{m}}{\frac {d^{2}{\delta _{\text{m}}}}{dt^{2}}}=P_{a}=P_{\text{m}}-P_{\text{e}}$ W
where Pm, Pe and Pa are, respectively, the mechanical, electrical and accelerating power in MW.
The coefficient Jωm is the angular momentum of the rotor: at synchronous speed ωs, it is denoted by M and called the inertia constant of the machine. Normalizing it as
$H={\frac {\text{stored kinetic energy in mega joules at synchronous speed}}{\text{machine rating in MVA}}}={\frac {J\omega _{\text{s}}^{2}}{2S_{\text{rated}}}}$ MJ/MVA
where Srated is the three phase rating of the machine in MVA. Substituting in the above equation
$2H{\frac {S_{\text{rated}}}{\omega _{\text{s}}^{2}}}\omega _{\text{m}}{\frac {d^{2}{\delta _{\text{m}}}}{dt^{2}}}=P_{\text{m}}-P_{\text{e}}=P_{a}$.
In the steady state, the machine angular speed is equal to the synchronous speed, and hence ωm can be replaced in the above equation by ωs. Since Pm, Pe and Pa are given in MW, dividing them by the generator MVA rating Srated expresses these quantities in per unit. Dividing both sides of the above equation by Srated gives
${\frac {2H}{\omega _{\text{s}}}}{\frac {d^{2}{\delta }}{dt^{2}}}=P_{\text{m}}-P_{e}=P_{a}$ per unit
The above equation describes the behaviour of the rotor dynamics and hence is known as the swing equation. The angle δ is the angle of the internal EMF of the generator and it dictates the amount of power that can be transferred. This angle is therefore called the load angle.
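As an illustration, the per-unit swing equation can be integrated numerically once H, Pm and Pe(δ) are specified. The sketch below assumes the classical single-machine model Pe = Pmax sin δ, a step in mechanical power, and illustrative values (H = 5 MJ/MVA on a 50 Hz system); none of these numbers come from the cited source.

```python
import numpy as np

# Classical single-machine swing equation in per unit:
#   (2H / w_s) * d^2(delta)/dt^2 = Pm - Pmax * sin(delta)
H = 5.0                            # inertia constant (MJ/MVA), illustrative
w_s = 2.0 * np.pi * 50.0           # synchronous speed of a 50 Hz system (rad/s)
Pm0, Pm1, Pmax = 0.8, 1.0, 1.2     # initial loading, stepped loading, maximum power (pu)

delta = np.arcsin(Pm0 / Pmax)      # start at the equilibrium load angle for Pm0
dw = 0.0                           # speed deviation d(delta)/dt (rad/s)
dt, T, t = 1e-3, 3.0, 0.0

while t < T:
    Pa = Pm1 - Pmax * np.sin(delta)       # accelerating power after the step in Pm (pu)
    dw += (w_s / (2.0 * H)) * Pa * dt     # semi-implicit Euler: update the speed deviation ...
    delta += dw * dt                      # ... then advance the load angle with the new speed
    t += dt

print("new equilibrium angle: %.3f rad" % np.arcsin(Pm1 / Pmax))
print("load angle after %.1f s: %.3f rad" % (T, delta))
```

Because this model has no damping term, the load angle oscillates indefinitely about the new equilibrium rather than settling.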
References
1. Grainger, John J.; Stevenson, William D. (1 January 1994). Power system analysis. McGraw-Hill. ISBN 978-0-07-061293-8.
Swinging Atwood's machine
The swinging Atwood's machine (SAM) is a mechanism that resembles a simple Atwood's machine except that one of the masses is allowed to swing in a two-dimensional plane, producing a dynamical system that is chaotic for some system parameters and initial conditions.
Specifically, it comprises two masses (the pendulum, mass m and counterweight, mass M) connected by an inextensible, massless string suspended on two frictionless pulleys of zero radius such that the pendulum can swing freely around its pulley without colliding with the counterweight.[1]
The conventional Atwood's machine allows only "runaway" solutions (i.e. either the pendulum or counterweight eventually collides with its pulley), except for $M=m$. However, the swinging Atwood's machine with $M>m$ has a large parameter space of conditions that lead to a variety of motions that can be classified as terminating or non-terminating, periodic, quasiperiodic or chaotic, bounded or unbounded, singular or non-singular[1][2] due to the pendulum's reactive centrifugal force counteracting the counterweight's weight.[1] Research on the SAM started as part of a 1982 senior thesis entitled Smiles and Teardrops (referring to the shape of some trajectories of the system) by Nicholas Tufillaro at Reed College, directed by David J. Griffiths.[3]
Equations of motion
The swinging Atwood's machine is a system with two degrees of freedom. We may derive its equations of motion using either Hamiltonian mechanics or Lagrangian mechanics. Let the swinging mass be $m$ and the non-swinging mass be $M$. The kinetic energy of the system, $T$, is:
${\begin{aligned}T&={\frac {1}{2}}Mv_{M}^{2}+{\frac {1}{2}}mv_{m}^{2}\\&={\frac {1}{2}}M{\dot {r}}^{2}+{\frac {1}{2}}m\left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)\end{aligned}}$
where $r$ is the distance of the swinging mass to its pivot, and $\theta $ is the angle of the swinging mass relative to pointing straight downwards. The potential energy $U$ is solely due to the acceleration due to gravity:
${\begin{aligned}U&=Mgr-mgr\cos {\theta }\end{aligned}}$
We may then write down the Lagrangian, ${\mathcal {L}}$, and the Hamiltonian, ${\mathcal {H}}$ of the system:
${\begin{aligned}{\mathcal {L}}&=T-U\\&={\frac {1}{2}}M{\dot {r}}^{2}+{\frac {1}{2}}m\left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)-Mgr+mgr\cos {\theta }\\{\mathcal {H}}&=T+U\\&={\frac {1}{2}}M{\dot {r}}^{2}+{\frac {1}{2}}m\left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)+Mgr-mgr\cos {\theta }\end{aligned}}$
We can then express the Hamiltonian in terms of the canonical momenta, $p_{r}$, $p_{\theta }$:
${\begin{aligned}p_{r}&={\frac {\partial {\mathcal {L}}}{\partial {\dot {r}}}}={\frac {\partial T}{\partial {\dot {r}}}}=(M+m){\dot {r}}\\p_{\theta }&={\frac {\partial {\mathcal {L}}}{\partial {\dot {\theta }}}}={\frac {\partial T}{\partial {\dot {\theta }}}}=mr^{2}{\dot {\theta }}\\\therefore {\mathcal {H}}&={\frac {p_{r}^{2}}{2(M+m)}}+{\frac {p_{\theta }^{2}}{2mr^{2}}}+Mgr-mgr\cos {\theta }\end{aligned}}$
Lagrange analysis can be applied to obtain two second-order coupled ordinary differential equations in $r$ and $\theta $. First, the $\theta $ equation:
${\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial \theta }}&={\frac {d}{dt}}\left({\frac {\partial {\mathcal {L}}}{\partial {\dot {\theta }}}}\right)\\-mgr\sin {\theta }&=2mr{\dot {r}}{\dot {\theta }}+mr^{2}{\ddot {\theta }}\\r{\ddot {\theta }}+2{\dot {r}}{\dot {\theta }}+g\sin {\theta }&=0\end{aligned}}$
And the $r$ equation:
${\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial r}}&={\frac {d}{dt}}\left({\frac {\partial {\mathcal {L}}}{\partial {\dot {r}}}}\right)\\mr{\dot {\theta }}^{2}-Mg+mg\cos {\theta }&=(M+m){\ddot {r}}\end{aligned}}$
We simplify the equations by defining the mass ratio $\mu ={\frac {M}{m}}$. The above then becomes:
$(\mu +1){\ddot {r}}-r{\dot {\theta }}^{2}+g(\mu -\cos {\theta })=0$
Hamiltonian analysis may also be applied to determine four first order ODEs in terms of $r$, $\theta $ and their corresponding canonical momenta $p_{r}$ and $p_{\theta }$:
${\begin{aligned}{\dot {r}}&={\frac {\partial {\mathcal {H}}}{\partial {p_{r}}}}={\frac {p_{r}}{M+m}}\\{\dot {p_{r}}}&=-{\frac {\partial {\mathcal {H}}}{\partial {r}}}={\frac {p_{\theta }^{2}}{mr^{3}}}-Mg+mg\cos {\theta }\\{\dot {\theta }}&={\frac {\partial {\mathcal {H}}}{\partial {p_{\theta }}}}={\frac {p_{\theta }}{mr^{2}}}\\{\dot {p_{\theta }}}&=-{\frac {\partial {\mathcal {H}}}{\partial {\theta }}}=-mgr\sin {\theta }\end{aligned}}$
Notice that in both of these derivations, if one sets $\theta $ and angular velocity ${\dot {\theta }}$ to zero, the resulting special case is the regular non-swinging Atwood machine:
${\ddot {r}}=g{\frac {1-\mu }{1+\mu }}=g{\frac {m-M}{m+M}}$
The swinging Atwood's machine has a four-dimensional phase space defined by $r$, $\theta $ and their corresponding canonical momenta $p_{r}$ and $p_{\theta }$. However, due to energy conservation, the phase space is constrained to three dimensions.
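A minimal numerical sketch of Hamilton's equations above is given below, using a fixed-step fourth-order Runge–Kutta integrator; the mass ratio (here the integrable case M = 3m), the release from rest at r = 1, θ = π/2, the value of g and the step size are illustrative choices only.

```python
import numpy as np

def sam_rhs(state, m=1.0, M=3.0, g=9.81):
    """Hamilton's equations of the swinging Atwood's machine (as written above)."""
    r, theta, p_r, p_theta = state
    dr = p_r / (M + m)
    dtheta = p_theta / (m * r**2)
    dp_r = p_theta**2 / (m * r**3) - M * g + m * g * np.cos(theta)
    dp_theta = -m * g * r * np.sin(theta)
    return np.array([dr, dtheta, dp_r, dp_theta])

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# release from rest at r = 1, theta = pi/2 with mu = M/m = 3
state = np.array([1.0, np.pi / 2.0, 0.0, 0.0])
h, steps = 1e-4, 30000              # note: the equations are singular as r -> 0, so the
r_min, r_max = state[0], state[0]   # step size would need care for near-collision orbits
for _ in range(steps):
    state = rk4_step(sam_rhs, state, h)
    r_min, r_max = min(r_min, state[0]), max(r_max, state[0])

print("radial range over the first 3 s: %.3f to %.3f" % (r_min, r_max))
```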
System with massive pulleys
If the pulleys in the system are taken to have moment of inertia $I$ and radius $R$, the Hamiltonian of the SAM is then:[4]
${\mathcal {H}}\left(r,\theta ,{\dot {r}},{\dot {\theta }}\right)=\underbrace {{\frac {1}{2}}M_{t}\left(R{\dot {\theta }}-{\dot {r}}\right)^{2}+{\frac {1}{2}}mr^{2}{\dot {\theta }}^{2}} _{T}+\underbrace {gr\left(M-m\cos {\theta }\right)+gR\left(m\sin {\theta }-M\theta \right)} _{U},$
Where Mt is the effective total mass of the system,
$M_{t}=M+m+{\frac {I}{R^{2}}}$
This reduces to the version above when $R$ and $I$ become zero. The equations of motion are now:[4]
${\begin{aligned}\mu _{t}({\ddot {r}}-R{\ddot {\theta }})&=r{\dot {\theta }}^{2}+g(\cos {\theta }-\mu )\\r{\ddot {\theta }}&=-2{\dot {r}}{\dot {\theta }}+R{\dot {\theta }}^{2}-g\sin {\theta }\\\end{aligned}}$
where $\mu _{t}=M_{t}/m$.
Integrability
Hamiltonian systems can be classified as integrable and nonintegrable. SAM is integrable when the mass ratio $\mu =M/m=3$.[5] The system also appears close to regular for $\mu =4n^{2}-1=3,15,35,...$, but $\mu =3$ is the only mass ratio known to be integrable. It has been shown that the system is not integrable for $\mu \in (0,1)\cup (3,\infty )$.[6] For many other values of the mass ratio (and initial conditions) SAM displays chaotic motion.
Numerical studies indicate that when the orbit is singular (initial conditions: $r=0,{\dot {r}}=v,\theta =\theta _{0},{\dot {\theta }}=0$), the pendulum executes a single symmetrical loop and returns to the origin, regardless of the value of $\theta _{0}$. When $\theta _{0}$ is small (near vertical), the trajectory describes a "teardrop", when it is large, it describes a "heart". These trajectories can be exactly solved algebraically, which is unusual for a system with a non-linear Hamiltonian.[7]
Trajectories
The swinging mass of the swinging Atwood's machine undergoes interesting trajectories or orbits when subject to different initial conditions, and for different mass ratios. These include periodic orbits and collision orbits.
Nonsingular orbits
For certain initial conditions, the system exhibits complex harmonic motion.[1] The orbit is called nonsingular if the swinging mass does not touch the pulley.
Selection of nonsingular orbits
• An orbit of the swinging Atwood's machine for $\mu =2$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =3$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =5$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =6$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =16$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =19$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =21$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =24$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
Periodic orbits
When the different harmonic components in the system are in phase, the resulting trajectory is simple and periodic, such as the "smile" trajectory, which resembles that of an ordinary pendulum, and various loops.[3][8] In general a periodic orbit exists when the following is satisfied:[1]
$r(t+\tau )=r(t),\,\theta (t+\tau )=\theta (t)$
The simplest case of periodic orbits is the "smile" orbit, which Tufillaro termed Type A orbits in his 1984 paper.[1]
Selection of periodic orbits
• A "smile" orbit of the swinging Atwood's machine for $\mu =1.665$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =2.394$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =1.1727$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =1.555$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
Singular orbits
The motion is singular if at some point, the swinging mass passes through the origin. Since the system is invariant under time reversal and translation, it is equivalent to say that the pendulum starts at the origin and is fired outwards:[1]
$r(0)=0$
The region close to the pivot is singular, since $r$ is close to zero and the equations of motion require dividing by $r$. As such, special techniques must be used to rigorously analyze these cases.[9]
The following are plots of arbitrarily selected singular orbits.
Selection of singular orbits
• An orbit of the swinging Atwood's machine for $\mu =10$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
• An orbit of the swinging Atwood's machine for $\mu =25$, $\theta _{0}={\frac {\pi }{2}}$, and zero initial velocity.
Collision orbits
Collision (or terminating singular) orbits are subset of singular orbits formed when the swinging mass is ejected from its pivot with an initial velocity, such that it returns to the pivot (i.e. it collides with the pivot):
$r(\tau )=r(0)=0,\,\tau >0$
The simplest case of collision orbits are the ones with a mass ratio of 3, which will always return symmetrically to the origin after being ejected from the origin, and were termed Type B orbits in Tufillaro's initial paper.[1] They were also referred to as teardrop, heart, or rabbit-ear orbits because of their appearance.[3][7][8][9]
When the swinging mass returns to the origin, the counterweight mass, $M$ must instantaneously change direction, causing an infinite tension in the connecting string. Thus we may consider the motion to terminate at this time.[1]
Boundedness
For any initial position, it can be shown that the swinging mass is bounded by a curve that is a conic section.[2] The pivot is always a focus of this bounding curve. The equation for this curve can be derived by analyzing the energy of the system, and using conservation of energy. Let us suppose that $m$ is released from rest at $r=r_{0}$ and $\theta =\theta _{0}$. The total energy of the system is therefore:
$E={\frac {1}{2}}M{\dot {r}}^{2}+{\frac {1}{2}}m\left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)+Mgr-mgr\cos {\theta }=Mgr_{0}-mgr_{0}\cos {\theta _{0}}$
However, notice that in the boundary case, the velocity of the swinging mass is zero.[2] Hence we have:
$Mgr-mgr\cos {\theta }=Mgr_{0}-mgr_{0}\cos {\theta _{0}}$
To see that it is the equation of a conic section, we isolate for $r$:
${\begin{aligned}r&={\frac {h}{1-{\frac {\cos {\theta }}{\mu }}}}\\h&=r_{0}\left(1-{\frac {\cos {\theta _{0}}}{\mu }}\right)\end{aligned}}$
Note that the numerator is a constant dependent only on the initial position in this case, as we have assumed the initial condition to be at rest. However, the energy constant $h$ can also be calculated for nonzero initial velocity, and the equation still holds in all cases.[2] The eccentricity of the conic section is ${\frac {1}{\mu }}$. For $\mu >1$, this is an ellipse, and the system is bounded and the swinging mass always stays within the ellipse. For $\mu =1$, it is a parabola and for $\mu <1$ it is a hyperbola; in either of these cases, it is not bounded. As $\mu $ gets arbitrarily large, the bounding curve approaches a circle. The region enclosed by the curve is known as the Hill's region.[2]
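As a small worked example with illustrative numbers, take μ = 3 and a release from rest at r0 = 1, θ0 = π/2. Then h = r0(1 − cos θ0/μ) = 1, the eccentricity is 1/μ = 1/3 < 1, and the Hill's region is bounded by an ellipse; the snippet below evaluates the boundary radius at a few angles.

```python
import numpy as np

mu, r0, theta0 = 3.0, 1.0, np.pi / 2.0          # illustrative values
h = r0 * (1.0 - np.cos(theta0) / mu)            # energy constant; equals 1 here
for theta in (0.0, np.pi / 2.0, np.pi):
    r_bound = h / (1.0 - np.cos(theta) / mu)    # radius of the bounding ellipse
    print("theta = %.2f rad -> r_bound = %.3f" % (theta, r_bound))
# eccentricity 1/mu = 1/3 < 1, so the region is bounded (r_bound = 1.5 at theta = 0, 0.75 at theta = pi)
```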
Recent three dimensional extension
A new integrable case for the problem of three dimensional Swinging Atwood Machine (3D-SAM) was announced in 2016.[10] Like the 2D version, the problem is integrable when $M=3m$.
References
1. Tufillaro, Nicholas B.; Abbott, Tyler A.; Griffiths, David J. (1984). "Swinging Atwood's Machine". American Journal of Physics. 52 (10): 895–903. Bibcode:1984AmJPh..52..895T. doi:10.1119/1.13791.
2. Tufillaro, Nicholas B.; Nunes, A.; Casasayas, J. (1988). "Unbounded orbits of a swinging Atwood's machine". American Journal of Physics. 56: 1117. Bibcode:1988AmJPh..56.1117T. doi:10.1119/1.15774.
3. Tufillaro, Nicholas B. (1982). Smiles and Teardrops (Thesis). Reed College.
4. Pujol, Olivier; Perez, J.P.; Simo, C.; Simon, S.; Weil, J.A. (2010). "Swinging Atwood's Machine: Experimental and numerical results, and a theoretical study". Physica D. 239 (12): 1067–1081. arXiv:0912.5168. Bibcode:2010PhyD..239.1067P. doi:10.1016/j.physd.2010.02.017.
5. Tufillaro, Nicholas B. (1986). "Integrable motion of a swinging Atwood's machine". American Journal of Physics. 54 (2): 142. Bibcode:1986AmJPh..54..142T. doi:10.1119/1.14710.
6. Casasayas, J.; Nunes, A.; Tufillaro, N. (1990). "Swinging Atwood's Machine : integrability and dynamics". Journal de Physique. 51 (16): 1693–1702. doi:10.1051/jphys:0199000510160169300. ISSN 0302-0738.
7. Tufillaro, Nicholas B. (1994). "Teardrop and heart orbits of a swinging Atwoods machine,". American Journal of Physics. 62 (3): 231–233. arXiv:chao-dyn/9302006. Bibcode:1994AmJPh..62..231T. doi:10.1119/1.17602.
8. Tufillaro, Nicholas B. (1985). "Motions of a swinging Atwood's machine". Journal de Physique. 46 (9): 1495–1500. doi:10.1051/jphys:019850046090149500.
9. Tufillaro, Nicholas B. (1985). "Collision orbits of a swinging Atwood's machine" (PDF). Journal de Physique. 46: 2053–2056. doi:10.1051/jphys:0198500460120205300.
10. Elmandouh, A.A. (2016). "On the integrability of the motion of 3D-Swinging Atwood machine and related problems". Physics Letters A. 380: 989. Bibcode:2016PhLA..380..989E. doi:10.1016/j.physleta.2016.01.021.
Further reading
• Almeida, M.A., Moreira, I.C. and Santos, F.C. (1998) "On the Ziglin-Yoshida analysis for some classes of homogeneous hamiltonian systems", Brazilian Journal of Physics Vol.28 n.4 São Paulo Dec.
• Barrera, Emmanuel Jan (2003) Dynamics of a Double-Swinging Atwood's machine, B.S. Thesis, National Institute of Physics, University of the Philippines.
• Babelon, O, M. Talon, MC Peyranere (2010), "Kowalevski's analysis of a swinging Atwood's machine," Journal of Physics A: Mathematical and Theoretical Vol. 43 (8).
• Bruhn, B. (1987) "Chaos and order in weakly coupled systems of nonlinear oscillators," Physica Scripta Vol.35(1).
• Casasayas, J., N. B. Tufillaro, and A. Nunes (1989) "Infinity manifold of a swinging Atwood's machine," European Journal of Physics Vol.10(10), p173.
• Casasayas, J, A. Nunes, and N. B. Tufillaro (1990) "Swinging Atwood's machine: integrability and dynamics," Journal de Physique Vol.51, p1693.
• Chowdhury, A. Roy and M. Debnath (1988) "Swinging Atwood Machine. Far- and near-resonance region", International Journal of Theoretical Physics, Vol. 27(11), p1405-1410.
• Griffiths, D. J. and T. A. Abbott (1992) "Comment on 'A surprising mechanics demonstration'," American Journal of Physics Vol.60(10), p951-953.
• Moreira, I.C. and M.A. Almeida (1991) "Noether symmetries and the Swinging Atwood Machine", Journal of Physics II France 1, p711-715.
• Nunes, A., J. Casasayas, and N. B. Tufillaro (1995) "Periodic orbits of the integrable swinging Atwood's machine," American Journal of Physics Vol.63(2), p121-126.
• Ouazzani-T.H., A. and Ouzzani-Jamil, M., (1995) "Bifurcations of Liouville tori of an integrable case of swinging Atwood's machine," Il Nuovo Cimento B Vol. 110 (9).
• Olivier, Pujol, JP Perez, JP Ramis, C. Simo, S. Simon, JA Weil (2010), "Swinging Atwood's Machine: Experimental and numerical results, and a theoretical study," Physica D 239, pp. 1067–1081.
• Sears, R. (1995) "Comment on 'A surprising mechanics demonstration'," American Journal of Physics, Vol. 63(9), p854-855.
• Yehia, H.M., (2006) "On the integrability of the motion of a heavy particle on a tilted cone and the swinging Atwood machine", Mechanics Research Communications Vol. 33 (5), p711–716.
External links
• Example of use in undergraduate research: symplectic integrators
• Imperial College Course
• Oscilaciones en la máquina de Atwood
• "Smiles and Teardrops" (1982)
• 2007 Workshop
• 2010 Videos of an experimental Swinging Atwood's Machine
• Update on a Swinging Atwood's Machine at 2010 APS Meeting, 8:24 AM, Friday 19 March 2010, Portland, OR
• Interactive web application of the Swinging Atwood's Machine
• Open source Java code for running the Swinging Atwood's Machine
HP-42S
The HP-42S RPN Scientific is a programmable RPN scientific hand-held calculator introduced by Hewlett-Packard in 1988. It has advanced functions suitable for applications in mathematics, linear algebra, statistical analysis, computer science and other fields.
HP-42S
• Type: Programmable scientific
• Manufacturer: Hewlett-Packard
• Introduced: 1988
• Discontinued: 1995
Calculator
• Entry mode: RPN
• Precision: 12 display digits (15 digits internally),[1] exponent ±499
• Display type: LCD dot-matrix
• Display size: 2 lines, 22 characters, 131×16 pixels
CPU
• Processor: Saturn (Lewis)
Programming
• Programming language(s): RPN keystroke (fully merged)
• Firmware memory: 64 KB of ROM
• Program steps: 7200
Interfaces
• Ports: IR (infrared) printing
Other
• Power supply: 3×1.5 V button cell batteries (Panasonic LR44, Duracell PX76A/675A or Energizer 357/303)
• Weight: 6 oz (170 g)
• Dimensions: 148×80×15 mm
Overview
The HP-42S appears to have been intended as a replacement for the aging HP-41 series, as it was designed to be compatible with all programs written for the HP-41. However, since it lacked expandability and any real I/O capability, both key features of the HP-41 series, it was marketed as an HP-15C replacement.
The 42S, however, has a much smaller form factor than the 41 and features many more built-in functions, such as a matrix editor, complex number support, an equation solver, user-defined menus, and basic graphing capabilities (the 42S can draw graphs only under program control). Additionally, it features a two-line dot-matrix display, which makes stack manipulation easier to follow.
Production of the 42S ended in 1995.[2] Because the calculator is regarded as among the best ever made in terms of quality, keystroke feel, ease of programming, and daily usability for engineers,[3] the 42S has become famous in the HP calculator community for fetching high prices in online auctions, up to several times its introduction price, leaving it scarce for end users who simply want to use it.
Specifications
• Series: Pioneer
• Code Name: Davinci
• Introduction: 1988-10-31
• 64 KB of ROM
• 8 KB of RAM
• Functions: Over 350
• Expandability: Officially no other than IR printing (32 KB memory upgrade[4] and over-clocking hardware[5] hacks are possible)
• Peripherals: HP 82240A infrared printer
Features
• All basic scientific functions (including hyperbolic functions)
• Statistics (including curve fitting and forecasting)
• Probability (including factorial, random numbers and Gamma function)
• Equation solver (root finder) that can solve for any variable in an equation
• Numerical integration for calculating definite integrals
• Matrix operations (including a matrix editor, dot product, cross product and solver for simultaneous linear equations)
• Complex numbers (including polar coordinates representation)
• Vector functions
• Named variables, registers and binary flags
• Graphic display with graphics functions and adjustable contrast
• Menus with submenus and mode settings (also custom programmable) that use the bottom line of the display to label the top row of keys
• Sound (piezoelectric beeper)
• Base conversion, integer arithmetic and binary and logic manipulation of numbers in binary, octal, decimal and hexadecimal systems
• Catalogs for reviewing and using items stored in memory
• Programmability (keystroke programming with branching, loops, tests and flags)
• The ability to run programs written for the HP-41C series of calculators
Programming
Main article: Focal (HP-41)
The HP-42S is keystroke-programmable, meaning that it can remember and later execute sequences of keystrokes to solve particular problems of interest to the user. The HP-42S uses a superset of the HP-41CX FOCAL language.
The HP-42S supports indirect addressing with which it is possible to implement a Universal Turing machine and therefore the programming model of the HP-42S can be considered Turing-complete.
Sample program
This is a sample program which computes the factorial of an input integer number (ignoring the calculator's built-in factorial function). The program consumes 18 bytes. No memory registers are used.
Step  Instruction   Comment
01    LBL "FAC"     Start of program "FAC"
02    1             1 is put into X, hence the value to be calculated upon (which was initially in X) is lifted (pushed) into stack register Y
03    LBL 00        Define label 00
04    RCL× ST Y     Recall stack register Y and multiply with X
05    DSE ST Y      Decrement stack register Y and if not zero ...
06    GTO 00        ... go back to label 00
07    END or RTN    Returns control (and result in X) to either the user or to a calling program.
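For readers unfamiliar with keystroke programming, the following Python sketch mirrors the control flow of the program above, with the variables x and y standing in for stack registers X and Y; it is an illustration only, not HP code.

```python
def fac(x):
    """Same loop structure as the keystroke program FAC above."""
    y = x              # step 02: keying in 1 lifts the argument into stack register Y ...
    x = 1              #          ... and places 1 in X
    while True:        # step 03: LBL 00
        x = x * y      # step 04: RCL× ST Y - multiply X by Y
        y = y - 1      # step 05: DSE ST Y  - decrement Y ...
        if y <= 0:     #          ... and skip the branch once it reaches zero
            break
        # step 06: GTO 00 - otherwise loop back to label 00
    return x           # step 07: END / RTN - the result is left in X

print(fac(5))          # prints 120
```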
Legacy
In May 2017, SwissMicros released pre-production samples of an RPN calculator closely resembling the HP-42S, the DM42. The final product was released on the 9 December 2017. Even though slightly smaller (144×77×13 mm, 170 g) than the original HP-42S (148×80×15 mm, 170 g), the calculator comes with an additional top row of keys for soft menus, a keyboard layout supporting direct alpha character input, a much larger high-contrast display (Sharp low power transflective memory LCD with a resolution of 400×240, protected by Gorilla Glass) showing all four stack levels at once (configurable), ca. 75 KB usable RAM, a beeper, a callable real-time clock as well as an infrared port for HP 82240A/HP 82240B printer support and a USB interface (with Micro-B connector) emulating a FAT16-formatted USB mass storage device for easy program transfer and state backup / transfer as well as for firmware updates. The calculator, which comes in a stainless steel case with matte black physical vapor deposition (PVD) coating, supports keyboard overlays and is based on a modified version of Thomas Okken's GPLed Free42 simulator with Intel's decimal floating-point math library for higher precision (decimal128) running on an STM32L476RG processor (ARM Cortex-M4 core, 128 KB RAM, 1 MB internal flash) with another 8 MB of external QSPI flash (of which ca. 6 MB are available to users). It is powered by a CR2032 coin cell or via USB and clocked dynamically at 24-80 MHz. The DM42 is also the hardware basis for the community-developed WP 43S calculator,[6][7] a successor to the WP 34S.
An open-source software version of the HP-42S (Free42) was developed by Thomas Okken that runs on iOS, Android, Windows, MacOS, and Linux. Its source code has been released under the GNU General Public License.
See also
• FOCAL character set
• Comparison of HP graphing calculators
• HP calculators
• List of Hewlett-Packard pocket calculators
References
1. HP-42S RPN Scientific Calculator - Owner's Manual (PDF) (1 ed.). Corvallis, OR, USA: Hewlett-Packard Co. June 1988. p. 3. 00042-90001. Archived (PDF) from the original on 2017-09-17. Retrieved 2017-09-17.
2. "HP-42S". Museum of HP Calculators. Retrieved 2016-10-27.
3. "HP's best scientific calculator?".
4. Hosoda, Takayuki (2007-10-10). "Upgrading the memory of the HP 42S to 32KB". Archived from the original on 2017-09-17. Retrieved 2011-08-12.
5. HP 42S Easy Double Speed / Turbo Mode for Calculator and Programs, retrieved 2022-08-05
6. Bonin, Walter (2019) [2015]. WP 43S Owner's Manual (PDF). 0.13 (draft ed.). ISBN 978-1-72950098-9. Retrieved 2019-10-31. (314 pages)
7. Bonin, Walter (2019) [2015]. WP 43S Reference Manual (PDF). 0.13 (draft ed.). ISBN 978-1-72950106-1. Retrieved 2019-10-31. (271 pages)
Further reading
• HP-42S RPN Scientific Calculator - Owner's Manual (PDF) (1 ed.). Corvallis, OR, USA: Hewlett-Packard Co. June 1988. 00042-90001. Archived (PDF) from the original on 2017-09-17. Retrieved 2017-09-17.
• HP-42S RPN Scientific Calculator - Programming Examples and Techniques (PDF) (1 ed.). Hewlett-Packard. July 1988. 00042-90020, 00042-90019. Archived (PDF) from the original on 2017-12-19. Retrieved 2017-12-19.
• Strapasson, José Lauro; Jones, Russ (January 2010). An Alternative HP-42S/Free42 Manual (PDF). 0.7. Archived (PDF) from the original on 2017-09-17. Retrieved 2017-09-17.
• HP-42S Quick Reference Guide (1 ed.). Corvallis, OR, USA, Dex Smith. October 1988. 00042-92222E.
• Horn, Joseph K. (2017-08-23) [1988-11-09]. "HP-42S Owner's Manual Addendum: Hidden Matrix Functions". Archived from the original on 2017-09-17. Retrieved 2017-09-17.
• "DM42 User Manual". 3.17. SwissMicros GmbH. 2020-10-21 [2016]. Archived from the original on 2020-10-05. Retrieved 2020-10-21.
External links
• SwissMicros DM42
• HP-42S intro on hpcc.org
• HP-42S page on hpmuseum.org
• HP-42S resources on hp42s.com (defunct as of July 2017)
• HP-42S description on rskey.org
• HP-42S description on thimet.de
• Free42 for Android by Thomas Okken, an Open Source project.
• Okken, Thomas (2011-04-20). "Free42, A HP-42S Calculator Simulator". Retrieved 2011-08-12.
Swiss cheese (mathematics)
In mathematics, a Swiss cheese is a compact subset of the complex plane obtained by removing from a closed disc some countable union of open discs, usually with some restriction on the centres and radii of the removed discs. Traditionally the deleted discs should have pairwise disjoint closures which are subsets of the interior of the starting disc, the sum of the radii of the deleted discs should be finite, and the Swiss cheese should have empty interior. This is the type of Swiss cheese originally introduced by the Swiss mathematician Alice Roth.
More generally, a Swiss cheese may be all or part of Euclidean space Rn – or of an even more complicated manifold – with "holes" in it.
Boolean function
In mathematics, a Boolean function is a function whose arguments and result assume values from a two-element set (usually {true, false}, {0,1} or {-1,1}).[1][2] Alternative names are switching function, used especially in older computer science literature,[3][4] and truth function (or logical function), used in logic. Boolean functions are the subject of Boolean algebra and switching theory.[5]
Not to be confused with Binary function.
A Boolean function takes the form $f:\{0,1\}^{k}\to \{0,1\}$, where $\{0,1\}$ is known as the Boolean domain and $k$ is a non-negative integer called the arity of the function. In the case where $k=0$, the function is a constant element of $\{0,1\}$. A Boolean function with multiple outputs, $f:\{0,1\}^{k}\to \{0,1\}^{m}$ with $m>1$ is a vectorial or vector-valued Boolean function (an S-box in symmetric cryptography).[6]
There are $2^{2^{k}}$ different Boolean functions with $k$ arguments, equal to the number of distinct truth tables with $2^{k}$ entries.
Every $k$-ary Boolean function can be expressed as a propositional formula in $k$ variables $x_{1},...,x_{k}$, and two propositional formulas are logically equivalent if and only if they express the same Boolean function.
Examples
See also: Truth table
The rudimentary symmetric Boolean functions (logical connectives or logic gates) are:
• NOT, negation or complement - which receives one input and returns true when that input is false ("not")
• AND or conjunction - true when all inputs are true ("both")
• OR or disjunction - true when any input is true ("either")
• XOR or exclusive disjunction - true when one of its inputs is true and the other is false ("not equal")
• NAND or Sheffer stroke - true when it is not the case that all inputs are true ("not both")
• NOR or logical nor - true when none of the inputs are true ("neither")
• XNOR or logical equality - true when both inputs are the same ("equal")
An example of a more complicated function is the majority function (of an odd number of inputs).
Representation
A Boolean function may be specified in a variety of ways:
• Truth table: explicitly listing its value for all possible values of the arguments
• Marquand diagram: truth table values arranged in a two-dimensional grid (used in a Karnaugh map)
• Binary decision diagram, listing the truth table values at the bottom of a binary tree
• Venn diagram, depicting the truth table values as a colouring of regions of the plane
Algebraically, as a propositional formula using rudimentary Boolean functions:
• Negation normal form, an arbitrary mix of AND and ORs of the arguments and their complements
• Disjunctive normal form, as an OR of ANDs of the arguments and their complements
• Conjunctive normal form, as an AND of ORs of the arguments and their complements
• Canonical normal form, a standardized formula which uniquely identifies the function:
• Algebraic normal form or Zhegalkin polynomial, as a XOR of ANDs of the arguments (no complements allowed)
• Full (canonical) disjunctive normal form, an OR of ANDs each containing every argument or complement (minterms)
• Full (canonical) conjunctive normal form, an AND of ORs each containing every argument or complement (maxterms)
• Blake canonical form, the OR of all the prime implicants of the function
Boolean formulas can also be displayed as a graph:
• Propositional directed acyclic graph
• Digital circuit diagram of logic gates, a Boolean circuit
• And-inverter graph, using only AND and NOT
In order to optimize electronic circuits, Boolean formulas can be minimized using the Quine–McCluskey algorithm or Karnaugh map.
Analysis
See also: Analysis of Boolean functions
Properties
A Boolean function can have a variety of properties:[7]
• Constant: Is always true or always false regardless of its arguments.
• Monotone: for every combination of argument values, changing an argument from false to true can only cause the output to switch from false to true and not from true to false. A function is said to be unate in a certain variable if it is monotone with respect to changes in that variable.
• Linear: for each variable, flipping the value of the variable either always makes a difference in the truth value or never makes a difference (a parity function).
• Symmetric: the value does not depend on the order of its arguments.
• Read-once: Can be expressed with conjunction, disjunction, and negation with a single instance of each variable.
• Balanced: if its truth table contains an equal number of zeros and ones. The Hamming weight of the function is the number of ones in the truth table.
• Bent: its derivatives are all balanced (the autocorrelation spectrum is zero)
• Correlation immune to mth order: if the output is uncorrelated with all (linear) combinations of at most m arguments
• Evasive: if evaluation of the function always requires the value of all arguments
• A Boolean function is a Sheffer function if it can be used to create (by composition) any arbitrary Boolean function (see functional completeness)
• The algebraic degree of a function is the order of the highest order monomial in its algebraic normal form
Circuit complexity attempts to classify Boolean functions with respect to the size or depth of circuits that can compute them.
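As a small illustration of two of the properties listed above, the sketch below checks whether a function, given as a truth table indexed by the integer encoding of its argument vector (bit i of the index corresponding to the (i+1)-th argument, a convention assumed here), is balanced and monotone; the three-argument majority function is used as a test case.

```python
def is_balanced(tt):
    """Balanced: the truth table contains as many zeros as ones."""
    return tt.count(0) == tt.count(1)

def is_monotone(tt, k):
    """Monotone: switching any argument from 0 to 1 never switches the output from 1 to 0."""
    for x in range(2**k):
        for i in range(k):
            if not (x >> i) & 1 and tt[x] > tt[x | (1 << i)]:
                return False
    return True

# 3-argument majority function: true when at least two inputs are true
k = 3
maj = [1 if bin(x).count("1") >= 2 else 0 for x in range(2**k)]
print(is_balanced(maj), is_monotone(maj, k))   # True True
```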
Derived functions
A Boolean function may be decomposed using Boole's expansion theorem in positive and negative Shannon cofactors (Shannon expansion), which are the (k-1)-ary functions resulting from fixing one of the arguments (to zero or one). The general (k-ary) functions obtained by imposing a linear constraint on a set of inputs (a linear subspace) are known as subfunctions.[8]
The Boolean derivative of the function to one of the arguments is a (k-1)-ary function that is true when the output of the function is sensitive to the chosen input variable; it is the XOR of the two corresponding cofactors. A derivative and a cofactor are used in a Reed–Muller expansion. The concept can be generalized as a k-ary derivative in the direction dx, obtained as the difference (XOR) of the function at x and x + dx.[8]
The Möbius transform (or Boole–Möbius transform) of a Boolean function is the set of coefficients of its polynomial (algebraic normal form), as a function of the monomial exponent vectors. It is a self-inverse transform. It can be calculated efficiently using a butterfly algorithm ("fast Möbius transform"), analogous to the fast Fourier transform.[9] Coincident Boolean functions are equal to their Möbius transform, i.e. their truth table (minterm) values equal their algebraic (monomial) coefficients.[10] There are $2^{2^{k-1}}$ coincident functions of $k$ arguments.[11]
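A compact sketch of this butterfly is shown below; it maps a truth table to its algebraic normal form coefficients in place, and applying it a second time recovers the original table. The indexing convention (bit i of the table index corresponds to the variable x_(i+1)) is an assumption of this sketch.

```python
def mobius_transform(tt):
    """XOR butterfly: truth table -> ANF coefficients (the map is its own inverse)."""
    f = list(tt)
    n = len(f)                       # must be 2**k
    step = 1
    while step < n:
        for block in range(0, n, 2 * step):
            for i in range(block, block + step):
                f[i + step] ^= f[i]  # fold the lower half of each block onto the upper half
        step *= 2
    return f

# 3-argument majority function, truth table indexed by the bits of x
maj = [0, 0, 0, 1, 0, 1, 1, 1]
anf = mobius_transform(maj)
print(anf)                            # [0, 0, 0, 1, 0, 1, 1, 0] -> x1*x2 + x1*x3 + x2*x3 (mod 2)
print(mobius_transform(anf) == maj)   # True: the transform is an involution
```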
Cryptographic analysis
The Walsh transform of a Boolean function is a k-ary integer-valued function giving the coefficients of a decomposition into linear functions (Walsh functions), analogous to the decomposition of real-valued functions into harmonics by the Fourier transform. Its square is the power spectrum or Walsh spectrum. The Walsh coefficient of a single bit vector is a measure for the correlation of that bit with the output of the Boolean function. The maximum (in absolute value) Walsh coefficient is known as the linearity of the function.[8] The highest number of bits (order) for which all Walsh coefficients are 0 (i.e. the subfunctions are balanced) is known as resiliency, and the function is said to be correlation immune to that order.[8] The Walsh coefficients play a key role in linear cryptanalysis.
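The Walsh spectrum can be computed with the standard fast Walsh–Hadamard butterfly applied to the sign vector (−1)^f(x), as in the sketch below (same truth-table indexing convention as the Möbius sketch above); the largest coefficient in absolute value is the linearity just described.

```python
def walsh_spectrum(tt):
    """Walsh coefficients W(a) = sum over x of (-1)^(f(x) XOR a.x), via the fast WHT."""
    w = [1 - 2 * v for v in tt]          # map output 0 -> +1 and 1 -> -1
    n = len(w)
    step = 1
    while step < n:
        for block in range(0, n, 2 * step):
            for i in range(block, block + step):
                a, b = w[i], w[i + step]
                w[i], w[i + step] = a + b, a - b
        step *= 2
    return w

maj = [0, 0, 0, 1, 0, 1, 1, 1]           # 3-argument majority function
spec = walsh_spectrum(maj)
print(spec)                              # [0, 4, 4, 0, 4, 0, 0, -4]
print("linearity:", max(abs(c) for c in spec))   # 4
```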
The autocorrelation of a Boolean function is a k-ary integer-valued function giving the correlation between a certain set of changes in the inputs and the function output. For a given bit vector it is related to the Hamming weight of the derivative in that direction. The maximal autocorrelation coefficient (in absolute value) is known as the absolute indicator.[7][8] If all autocorrelation coefficients are 0 (i.e. the derivatives are balanced) for a certain number of bits then the function is said to satisfy the propagation criterion to that order; if they are all zero then the function is a bent function.[12] The autocorrelation coefficients play a key role in differential cryptanalysis.
The Walsh coefficients of a Boolean function and its autocorrelation coefficients are related by the equivalent of the Wiener–Khinchin theorem, which states that the autocorrelation and the power spectrum are a Walsh transform pair.[8]
Linear approximation table
These concepts can be extended naturally to vectorial Boolean functions by considering their output bits (coordinates) individually, or more thoroughly, by looking at the set of all linear functions of output bits, known as its components.[6] The set of Walsh transforms of the components is known as a Linear Approximation Table (LAT)[13][14] or correlation matrix;[15][16] it describes the correlation between different linear combinations of input and output bits. The set of autocorrelation coefficients of the components is the autocorrelation table,[14] related by a Walsh transform of the components[17] to the more widely used Difference Distribution Table (DDT)[13][14] which lists the correlations between differences in input and output bits (see also: S-box).
Real polynomial form
On the unit hypercube
Any Boolean function $f(x):\{0,1\}^{n}\rightarrow \{0,1\}$ can be uniquely extended (interpolated) to the real domain by a multilinear polynomial in $\mathbb {R} ^{n}$, constructed by summing the truth table values multiplied by indicator polynomials:
$f^{*}(x)=\sum _{a\in {\{0,1\}}^{n}}f(a)\prod _{i:a_{i}=1}x_{i}\prod _{i:a_{i}=0}(1-x_{i})$
For example, the extension of the binary XOR function $x\oplus y$ is
$0(1-x)(1-y)+1x(1-y)+1(1-x)y+0xy$
which equals
$x+y-2xy$
Some other examples are negation ($1-x$), AND ($xy$) and OR ($x+y-xy$). When all operands are independent (share no variables) a function's polynomial form can be found by repeatedly applying the polynomials of the operators in a Boolean formula. When the coefficients are calculated modulo 2 one obtains the algebraic normal form (Zhegalkin polynomial). Direct expressions for the coefficients of the polynomial can be derived by taking an appropriate derivative:
${\begin{array}{lcl}f^{*}(00)&=&(f^{*})(00)&=&f(00)\\f^{*}(01)&=&(\partial _{1}f^{*})(00)&=&-f(00)+f(01)\\f^{*}(10)&=&(\partial _{2}f^{*})(00)&=&-f(00)+f(10)\\f^{*}(11)&=&(\partial _{1}\partial _{2}f^{*})(00)&=&f(00)-f(01)-f(10)+f(11)\\\end{array}}$
this generalizes as the Möbius inversion of the partially ordered set of bit vectors:
$f^{*}(m)=\sum _{a\subseteq m}(-1)^{|a|+|m|}f(a)$
where $|a|$ denotes the weight of the bit vector $a$. Taken modulo 2, this is the Boolean Möbius transform, giving the algebraic normal form coefficients:
${\hat {f}}(m)=\bigoplus _{a\subseteq m}f(a)$
In both cases, the sum is taken over all bit-vectors a covered by m, i.e. the "one" bits of a form a subset of the one bits of m.
When the domain is restricted to the n-dimensional hypercube $[0,1]^{n}$, the polynomial $f^{*}(x):[0,1]^{n}\rightarrow [0,1]$ gives the probability of a positive outcome when the Boolean function f is applied to n independent random (Bernoulli) variables, with individual probabilities x. A special case of this fact is the piling-up lemma for parity functions. The polynomial form of a Boolean function can also be used as its natural extension to fuzzy logic.
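A quick numerical check of this statistical interpretation for the XOR extension x + y − 2xy, with arbitrarily chosen probabilities:

```python
import numpy as np

p, q, n = 0.3, 0.7, 200_000                     # illustrative probabilities and sample size
rng = np.random.default_rng(0)
x = rng.random(n) < p                           # independent Bernoulli variables
y = rng.random(n) < q
empirical = np.mean(x ^ y)                      # observed frequency of XOR = 1
predicted = p + q - 2 * p * q                   # multilinear extension evaluated at (p, q)
print(round(empirical, 3), round(predicted, 3)) # both close to 0.58
```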
On the symmetric hypercube
Often, the Boolean domain is taken as $\{-1,1\}$, with false ("0") mapping to 1 and true ("1") to -1 (see Analysis of Boolean functions). The polynomial corresponding to $g(x):\{-1,1\}^{n}\rightarrow \{-1,1\}$ is then given by:
$g^{*}(x)=\sum _{a\in {\{-1,1\}}^{n}}g(a)\prod _{i:a_{i}=-1}{\frac {1-x_{i}}{2}}\prod _{i:a_{i}=1}{\frac {1+x_{i}}{2}}$
Using the symmetric Boolean domain simplifies certain aspects of the analysis, since negation corresponds to multiplying by -1 and linear functions are monomials (XOR is multiplication). This polynomial form thus corresponds to the Walsh transform (in this context also known as Fourier transform) of the function (see above). The polynomial also has the same statistical interpretation as the one in the standard Boolean domain, except that it now deals with the expected values $E(X)=P(X=1)-P(X=-1)\in [-1,1]$ (see piling-up lemma for an example).
Applications
Boolean functions play a basic role in questions of complexity theory as well as the design of processors for digital computers, where they are implemented in electronic circuits using logic gates.
The properties of Boolean functions are critical in cryptography, particularly in the design of symmetric key algorithms (see substitution box).
In cooperative game theory, monotone Boolean functions are called simple games (voting games); this notion is applied to solve problems in social choice theory.
See also
• Pseudo-Boolean function
• Boolean-valued function
• Boolean algebra topics
• Algebra of sets
• Decision tree model
• Indicator function
• Signed set
References
1. "Boolean function - Encyclopedia of Mathematics". encyclopediaofmath.org. Retrieved 2021-05-03.
2. Weisstein, Eric W. "Boolean Function". mathworld.wolfram.com. Retrieved 2021-05-03.
3. "switching function". TheFreeDictionary.com. Retrieved 2021-05-03.
4. Davies, D. W. (December 1957). "Switching Functions of Three Variables". IRE Transactions on Electronic Computers. EC-6 (4): 265–275. doi:10.1109/TEC.1957.5222038. ISSN 0367-9950.
5. McCluskey, Edward J. (2003-01-01), "Switching theory", Encyclopedia of Computer Science, GBR: John Wiley and Sons Ltd., pp. 1727–1731, ISBN 978-0-470-86412-8, retrieved 2021-05-03
6. Carlet, Claude. "Vectorial Boolean Functions for Cryptography" (PDF). University of Paris. Archived (PDF) from the original on 2016-01-17.
7. "Boolean functions — Sage 9.2 Reference Manual: Cryptography". doc.sagemath.org. Retrieved 2021-05-01.
8. Tarannikov, Yuriy; Korolev, Peter; Botev, Anton (2001). "Autocorrelation Coefficients and Correlation Immunity of Boolean Functions". In Boyd, Colin (ed.). Advances in Cryptology — ASIACRYPT 2001. Lecture Notes in Computer Science. Vol. 2248. Berlin, Heidelberg: Springer. pp. 460–479. doi:10.1007/3-540-45682-1_27. ISBN 978-3-540-45682-7.
9. Carlet, Claude (2010), "Boolean Functions for Cryptography and Error-Correcting Codes" (PDF), Boolean Models and Methods in Mathematics, Computer Science, and Engineering, Encyclopedia of Mathematics and its Applications, Cambridge: Cambridge University Press, pp. 257–397, ISBN 978-0-521-84752-0, retrieved 2021-05-17
10. Pieprzyk, Josef; Wang, Huaxiong; Zhang, Xian-Mo (2011-05-01). "Mobius transforms, coincident Boolean functions and non-coincidence property of Boolean functions". International Journal of Computer Mathematics. 88 (7): 1398–1416. doi:10.1080/00207160.2010.509428. ISSN 0020-7160. S2CID 9580510.
11. Nitaj, Abderrahmane; Susilo, Willy; Tonien, Joseph (2017-10-01). "Dirichlet product for boolean functions". Journal of Applied Mathematics and Computing. 55 (1): 293–312. doi:10.1007/s12190-016-1037-4. ISSN 1865-2085. S2CID 16760125.
12. Canteaut, Anne; Carlet, Claude; Charpin, Pascale; Fontaine, Caroline (2000-05-14). "Propagation characteristics and correlation-immunity of highly nonlinear boolean functions". Proceedings of the 19th International Conference on Theory and Application of Cryptographic Techniques. EUROCRYPT'00. Bruges, Belgium: Springer-Verlag: 507–522. ISBN 978-3-540-67517-4.
13. Heys, Howard M. "A Tutorial on Linear and Differential Cryptanalysis" (PDF). Archived (PDF) from the original on 2017-05-17.
14. "S-Boxes and Their Algebraic Representations — Sage 9.2 Reference Manual: Cryptography". doc.sagemath.org. Retrieved 2021-05-04.
15. Daemen, Joan; Govaerts, René; Vandewalle, Joos (1995). Preneel, Bart (ed.). "Correlation matrices". Fast Software Encryption. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. 1008: 275–285. doi:10.1007/3-540-60590-8_21. ISBN 978-3-540-47809-6.
16. Daemen, Joan (10 June 1998). "Chapter 5: Propagation and Correlation - Annex to AES Proposal Rijndael" (PDF). NIST. Archived (PDF) from the original on 2018-07-23.
17. Nyberg, Kaisa (December 1, 2019). "The Extended Autocorrelation and Boomerang Tables and Links Between Nonlinearity Properties of Vectorial Boolean Functions" (PDF). Archived (PDF) from the original on 2020-11-02.
Further reading
• Crama, Yves; Hammer, Peter L. (2011), Boolean Functions: Theory, Algorithms, and Applications, Cambridge University Press, doi:10.1017/CBO9780511852008, ISBN 9780511852008
• "Boolean function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Janković, Dragan; Stanković, Radomir S.; Moraga, Claudio (November 2003). "Arithmetic expressions optimisation using dual polarity property". Serbian Journal of Electrical Engineering. 1 (71–80, number 1): 71–80. doi:10.2298/SJEE0301071J.
• Arnold, Bradford Henry (1 January 2011). Logic and Boolean Algebra. Courier Corporation. ISBN 978-0-486-48385-6.
• Mano, M. M.; Ciletti, M. D. (2013), Digital Design, Pearson
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
| Wikipedia |
Switching circuit theory
Switching circuit theory is the mathematical study of the properties of networks of idealized switches. Such networks may be strictly combinational logic, in which the output state is a function only of the present state of the inputs; or they may also contain sequential elements, in which the output depends on the present inputs as well as on past states; in that sense, sequential circuits are said to include "memory" of past states. An important class of sequential circuits is that of state machines. Switching circuit theory is applicable to the design of telephone systems, computers, and similar systems. It provided the mathematical foundations and tools for digital system design in almost all areas of modern technology.[1]
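To make the combinational/sequential distinction concrete, here is a minimal Python sketch (my own illustration, not from the article): the first function depends only on its present inputs, while the flip-flop class keeps one bit of state and so "remembers" a past input.

```python
def combinational(a: bool, b: bool, c: bool) -> bool:
    """Output depends only on the present inputs, e.g. f = (a AND b) OR (NOT c)."""
    return (a and b) or (not c)

class DFlipFlop:
    """A clocked D flip-flop: the simplest sequential element ('memory')."""
    def __init__(self) -> None:
        self.state = False          # remembered value from earlier clock ticks

    def tick(self, d: bool) -> bool:
        out = self.state            # output reflects the *previous* input
        self.state = d              # the present input becomes the next state
        return out

ff = DFlipFlop()
print([ff.tick(d) for d in (True, False, True, True)])  # [False, True, False, True]
```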
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[2] During 1880–1881 he showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but this work remained unpublished until 1933.[3] The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called Sheffer stroke; the logical NOR is sometimes called Peirce's arrow.[4] Consequently, these gates are sometimes called universal logic gates.[5]
In 1898, Martin Boda described a switching theory for signalling block systems.[6][7]
Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can be used as a logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).
From 1934 to 1936, NEC engineer Akira Nakashima,[8] Claude Shannon[9] and Victor Shestakov[10] published a series of papers showing, each independently of the others, that two-valued Boolean algebra can describe the operation of switching circuits.[7][11][12][13][1]
Ideal switches are considered as having only two exclusive states, for example, open or closed. In some analyses, the state of a switch can be considered to have no influence on the output of the system and is designated as a "don't care" state. In complex networks it is also necessary to account for the finite switching time of physical switches; where two or more different paths in a network may affect the output, these delays may result in a "logic hazard" or "race condition", in which the output state changes due to the different propagation times through the network.
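The following toy simulation (my own example, with an assumed one-step inverter delay and instantaneous AND/OR gates) shows a classic static hazard in f = AB + A′C with B = C = 1: the output is logically constant 1, yet it glitches to 0 for one time step when A falls.

```python
def simulate(a, delay=1):
    """f(t) = (A AND B) OR (NOT_A AND C) with B = C = 1 and a delayed inverter."""
    out = []
    for t in range(len(a)):
        not_a_delayed = 1 - a[max(t - delay, 0)]   # inverter output lags A by `delay`
        out.append((a[t] & 1) | (not_a_delayed & 1))
    return out

A = [1, 1, 1, 0, 0, 0]        # A falls from 1 to 0 at t = 3
print(simulate(A))             # [1, 1, 1, 0, 1, 1] -- the momentary 0 is the hazard
```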
See also
• Circuit switching
• Message switching
• Packet switching
• Fast packet switching
• Network switching subsystem
• 5ESS Switching System
• Number One Electronic Switching System
• Boolean circuit
• C-element
• Circuit complexity
• Circuit minimization
• Karnaugh map
• Logic design
• Logic gate
• Logic in computer science
• Nonblocking minimal spanning switch
• Programmable logic controller – computer software mimics relay circuits for industrial applications
• Quine–McCluskey algorithm
• Relay – an early kind of logic device
• Switching lemma
• Unate function
References
1. Stanković, Radomir S. [in German]; Astola, Jaakko Tapio [in Finnish], eds. (2008). Reprints from the Early Days of Information Sciences: TICSP Series on the Contributions of Akira Nakashima to Switching Theory (PDF). Tampere International Center for Signal Processing (TICSP) Series. Vol. 40. Tampere University of Technology, Tampere, Finland. ISBN 978-952-15-1980-2. ISSN 1456-2774. Archived from the original (PDF) on 2021-03-08. (3+207+1 pages)
2. Peirce, Charles Saunders (1993) [1886]. Letter, Peirce to A. Marquand. pp. 421–423. See also: Burks, Arthur Walter (1978). "Review: Charles S. Peirce, The new elements of mathematics". Bulletin of the American Mathematical Society (review). 84 (5): 913–918 [917]. doi:10.1090/S0002-9904-1978-14533-9.
3. Peirce, Charles Saunders (1933) [Winter of 1880–1881]. A Boolian Algebra with One Constant. paragraphs 12–20. Reprinted in Writings of Charles S. Peirce. Vol. 4 (reprint ed.). 1989. pp. 218–221. ISBN 9780253372017. ark:/13960/t11p5r61f. See also: Roberts, Don D. (2009). The Existential Graphs of Charles S. Peirce. p. 131.
4. Kleine Büning, Hans; Lettmann, Theodor (1999). Propositional logic: deduction and algorithms. Cambridge University Press. p. 2. ISBN 978-0-521-63017-7.
5. Bird, John (2007). Engineering mathematics. Newnes. p. 532. ISBN 978-0-7506-8555-9.
6. Boda, Martin (1898). "Die Schaltungstheorie der Blockwerke" [The switching theory of block systems]. Organ für die Fortschritte des Eisenbahnwesens in technischer Beziehung – Fachblatt des Vereins deutscher Eisenbahn-Verwaltungen (in German). Wiesbaden, Germany: C. W. Kreidel's Verlag. Neue Folge XXXV (1–7): 1–7, 29–34, 49–53, 71–75, 91–95, 111–115, 133–138. (NB. This series of seven articles was republished in a 91-pages book in 1899 with a foreword by Georg Barkhausen.)
7. Klir, George Jiří (May 1972). "Reference Notations to Chapter 1". Introduction to the Methodology of Switching Circuits (1 ed.). Binghamton, New York, USA: Litton Educational Publishing, Inc. / D. van Nostrand Company. p. 19. ISBN 0-442-24463-0. LCCN 72-181095. C4463-000-3. p. 19: Although the possibility of establishing a switching theory was recognized by M. Boda[A] as early as in the 19th century, the first important works on this subject were published by A. Nakashima[B] and C. E. Shannon[C] shortly before World War II. (xvi+573+1 pages)
8. Nakashima [中嶋], Akira [章] (May 1936). "Theory of Relay Circuit Composition". Nippon Electrical Communication Engineering (3): 197–226. (NB. Translation of an article which originally appeared in Japanese in the Journal of the Institute of Telegraph and Telephone Engineers of Japan (JITTEJ) September 1935, 150 731–752.)
9. Shannon, Claude Elwood (1938). "A Symbolic Analysis of Relay and Switching Circuits". Transactions of the American Institute of Electrical Engineers. American Institute of Electrical Engineers (AIEE). 57 (12): 713–723. doi:10.1109/T-AIEE.1938.5057767. hdl:1721.1/11173. S2CID 51638483. (NB. Based on Shannon's master thesis of the same title at Massachusetts Institute of Technology in 1937.)
10. Shestakov [Шестаков], Victor Ivanovich [Виктор Иванович] (1938). Некоторые математические методы кон-струирования и упрощения двухполюсных электрических схем класса А [Some mathematical methods for the construction and simplification of two-terminal electrical networks of class A] (PhD thesis) (in Russian). Lomonosov State University.
11. Yamada [山田], Akihiko [彰彦] (2004). "History of Research on Switching Theory in Japan". IEEJ Transactions on Fundamentals and Materials. Institute of Electrical Engineers of Japan. 124 (8): 720–726. Bibcode:2004IJTFM.124..720Y. doi:10.1541/ieejfms.124.720. Archived from the original on 2022-07-10. Retrieved 2022-10-26.
12. "Switching Theory/Relay Circuit Network Theory/Theory of Logical Mathematics". IPSJ Computer Museum. Information Processing Society of Japan. 2012. Archived from the original on 2021-03-22. Retrieved 2021-03-28.
13. Stanković, Radomir S. [in German]; Astola, Jaakko Tapio [in Finnish]; Karpovsky, Mark G. (2007). Some Historical Remarks on Switching Theory (PDF). Niš, Serbia; Tampere, Finland; Boston, Massachusetts, USA. CiteSeerX 10.1.1.66.1248. S2CID 10029339. Archived (PDF) from the original on 2022-10-25. Retrieved 2022-10-25. (8 pages)
Further reading
• Keister, William; Ritchie, Alistair E.; Washburn, Seth H. (1951). The Design of Switching Circuits. The Bell Telephone Laboratories Series (1 ed.). D. Van Nostrand Company, Inc. p. 147. Archived from the original on 2020-05-09. Retrieved 2020-05-09. (2+xx+556+2 pages)
• Caldwell, Samuel Hawks (1958-12-01) [February 1958]. Written at Watertown, Massachusetts, USA. Switching Circuits and Logical Design. 5th printing September 1963 (1st ed.). New York, USA: John Wiley & Sons Inc. ISBN 0-47112969-0. LCCN 58-7896. (xviii+686 pages)
• Perkowski, Marek A.; Grygiel, Stanislaw (1995-11-20). "6. Historical Overview of the Research on Decomposition". A Survey of Literature on Function Decomposition (PDF). Version IV. Functional Decomposition Group, Department of Electrical Engineering, Portland University, Portland, Oregon, USA. CiteSeerX 10.1.1.64.1129. Archived (PDF) from the original on 2021-03-28. Retrieved 2021-03-28. (188 pages)
• Stanković, Radomir S. [in German]; Sasao, Tsutomu; Astola, Jaakko Tapio [in Finnish] (August 2001). "Publications in the First Twenty Years of Switching Theory and Logic Design" (PDF). Tampere International Center for Signal Processing (TICSP) Series. Tampere University of Technology / TTKK, Monistamo, Finland. ISSN 1456-2774. S2CID 62319288. #14. Archived from the original (PDF) on 2017-08-09. Retrieved 2021-03-28. (4+60 pages)
• Stanković, Radomir S. [in German]; Astola, Jaakko Tapio [in Finnish] (2011). Written at Niš, Serbia & Tampere, Finland. From Boolean Logic to Switching Circuits and Automata: Towards Modern Information Technology. Studies in Computational Intelligence. Vol. 335 (1 ed.). Berlin & Heidelberg, Germany: Springer-Verlag. doi:10.1007/978-3-642-11682-7. ISBN 978-3-642-11681-0. ISSN 1860-949X. LCCN 2011921126. Retrieved 2022-10-25. (xviii+212 pages)
Digital electronics
Components
• Transistor
• Resistor
• Inductor
• Capacitor
• Printed electronics
• Printed circuit board
• Electronic circuit
• Flip-flop
• Memory cell
• Combinational logic
• Sequential logic
• Logic gate
• Boolean circuit
• Integrated circuit (IC)
• Hybrid integrated circuit (HIC)
• Mixed-signal integrated circuit
• Three-dimensional integrated circuit (3D IC)
• Emitter-coupled logic (ECL)
• Erasable programmable logic device (EPLD)
• Macrocell array
• Programmable logic array (PLA)
• Programmable logic device (PLD)
• Programmable Array Logic (PAL)
• Generic array logic (GAL)
• Complex programmable logic device (CPLD)
• Field-programmable gate array (FPGA)
• Field-programmable object array (FPOA)
• Application-specific integrated circuit (ASIC)
• Tensor Processing Unit (TPU)
Theory
• Digital signal
• Boolean algebra
• Logic synthesis
• Logic in computer science
• Computer architecture
• Digital signal
• Digital signal processing
• Circuit minimization
• Switching circuit theory
• Gate equivalent
Design
• Logic synthesis
• Place and route
• Placement
• Routing
• Transaction-level modeling
• Register-transfer level
• Hardware description language
• High-level synthesis
• Formal equivalence checking
• Synchronous logic
• Asynchronous logic
• Finite-state machine
• Hierarchical state machine
Applications
• Computer hardware
• Hardware acceleration
• Digital audio
• radio
• Digital photography
• Digital telephone
• Digital video
• cinematography
• television
• Electronic literature
Design issues
• Metastability
• Runt pulse
| Wikipedia |
Sybilla Beckmann
Sybilla Beckmann is a Josiah Meigs Distinguished Teaching Professor of Mathematics, Emeritus, at the University of Georgia and a recipient of the Association for Women in Mathematics Louise Hay Award.
Sybilla Beckmann
Nationality: American
Title: Josiah Meigs Distinguished Teaching Professor of Mathematics
Awards: Louise Hay Award
Academic background
Alma mater: University of Pennsylvania; Brown University
Thesis: Fields of Definition of Solvable Branched Coverings (1986)
Doctoral advisor: David Harbater
Academic work
Discipline: Mathematics
Institutions: University of Georgia; Yale University
Main interests: Mathematical cognition; Mathematical education of teachers; Mathematics content for grades pre-K–8
Biography
Sybilla Beckmann received her Sc.B. in Mathematics from Brown University in 1980[1] and her Ph.D. in Mathematics from the University of Pennsylvania under the supervision of David Harbater in 1986.[2] She taught at Yale University as a J.W. Gibbs Instructor of Mathematics, before becoming a Josiah Meigs Distinguished Teaching Professor of Mathematics at the University of Georgia.[3] She retired in 2020.[1]
Beckmann's main interests include mathematical cognition, mathematical education of teachers, and mathematics content for pre-Kindergarten through Grade 8.[4]
Publications
Beckmann's publications include the following.[5][6]
• Mathematics for Elementary Teachers: Making Sense by "Explaining Why", in Proceedings of the Second International Conference on the Teaching of Mathematics at the Undergraduate Level, J. Wiley & Sons, Inc., (2002).[7]
• What mathematicians should know about teaching math for elementary teachers. Mathematicians and Education Reform Newsletter, Spring 2004. Volume 16, number 2.
• Solving Algebra and Other Story Problems with Simple Diagrams: a Method Demonstrated in Grade 4 – 6 Texts Used in Singapore, The Mathematics Educator, 14, (1), pp. 42 – 46 (2004).[8]
• With Karen Fuson. Focal Points: Grades 5 and 6. Teaching Children Mathematics. May 2008. Volume 14, issue 9, pages 508 – 517.
• Focus in Grade 5, Teaching with Curriculum Focal Points. (2009). National Council of Teachers of Mathematics. This book elaborates on the Focal Points at grade 5, including discussions of the necessary foundations at grades 3 and 4.
• Thomas J. Cooney, Sybilla Beckmann, and Gwendolyn M. Lloyd. (2010). Developing Essential Understanding of Functions for Teaching Mathematics in Grades 9 – 12. National Council of Teachers of Mathematics.[9]
• Karen C. Fuson, Douglas Clements, and Sybilla Beckmann. (2010). Focus in Prekindergarten: Teaching with Curriculum Focal Points. National Council of Teachers of Mathematics.
• Karen C. Fuson, Douglas Clements, and Sybilla Beckmann. (2010). Focus in Kindergarten: Teaching with Curriculum Focal Points. National Council of Teachers of Mathematics.
• Karen C. Fuson, Douglas Clements, and Sybilla Beckmann. (2010). Focus in Grade 1: Teaching with Curriculum Focal Points. National Council of Teachers of Mathematics.[10]
• Karen C. Fuson, Douglas Clements, and Sybilla Beckmann. (2011). Focus in Grade 2: Teaching with Curriculum Focal Points. National Council of Teachers of Mathematics.
• Fuson, K. C. & Beckmann, S. (Fall/Winter, 2012–2013). Standard algorithms in the Common Core State Standards. National Council of Supervisors of Mathematics Journal of Mathematics Education Leadership, 14 (2), 14–30.[11]
• Mathematics for Elementary Teachers with Activities, 4th edition, published by Pearson Education, copyright 2014, publication date January 2013.[12]
• Beckmann, S., & Izsák, A. (2014). Variable parts: A new perspective on proportional relationships and linear functions. In Nicol, C., Liljedahl, P., Oesterle, S., & Allan, D. (Eds.) Proceedings of the Joint Meeting of Thirty-Eighth Conference of the International meeting of the Psychology of Mathematics Education and the Thirty-Sixth meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, Vol. 2, pp. 113–120. Vancouver, Canada: PME.
• Beckmann, S. & Izsák, A. (2014). Why is slope hard to teach? American Mathematical Society Blog on Teaching and Learning Mathematics.[13]
• Beckmann, S., & Izsák, A. (2015). Two perspectives on proportional relationships: Extending complementary origins of multiplication in terms of quantities. Journal for Research in Mathematics Education 46(1), pp. 17–38.
• Beckmann, S., Izsák, A., & Ölmez, İ. B. (2015). From multiplication to proportional relationships. In X. Sun, B. Kaur, J. Novotna (Eds.), Conference proceedings of ICMI Study 23: Primary mathematics study on whole numbers, pp. 518 – 525. Macau, China: University of Macau.[14]
Awards
• Association for Women in Mathematics twenty-fourth annual Louise Hay Award (2014).[15]
• Mathematical Association of America fourth annual Mary P. Dolciani Award (2015).[16]
References
1. "About Me". Sybilla Beckmann, PhD. Retrieved October 10, 2022.
2. Sybilla Beckmann at the Mathematics Genealogy Project
3. "Sybilla Beckmann-Kazez". University of Georgia. Retrieved October 10, 2022.
4. "Biography | Sybilla Beckmann". faculty.franklin.uga.edu. Retrieved 2016-11-05.
5. "temrrg". temrrg. Retrieved 2016-11-05.
6. "Sybilla Beckmann".
7. Beckmann, Sybilla. "Mathematics for Elementary Teachers" (PDF).
8. "TME – Volume 14 Number 1". math.coe.uga.edu. Retrieved 2016-11-05.
9. "NCTM Store: Developing Essential Understanding of Functions for Teaching Mathematics in Grades 9-12". www.nctm.org. Retrieved 2017-04-05.
10. "NCTM Store: Focus in Grade 1: Teaching with Curriculum Focal Points". www.nctm.org. Retrieved 2017-04-05.
11. "Standard Algorithms in the Common Core State Standards" (PDF).
12. "Mathematics for Elementary Teachers with Activities, 4/e by Sybilla Beckmann | Pearson". www.pearsonhighered.com. Retrieved 2016-11-05.
13. "Why is Slope Hard to Teach? | On Teaching and Learning Mathematics". blogs.ams.org. Retrieved 2016-11-05.
14. "Primary Mathematics Study on Whole Numbers" (PDF).
15. "Sybilla Beckmann – AWM Association for Women in Mathematics". sites.google.com. Retrieved 2016-11-05.
16. "Dolciani Award | Mathematical Association of America". www.maa.org. Retrieved 2020-09-27.
Authority control
International
• ISNI
• VIAF
National
• United States
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
| Wikipedia |
Jean-Pierre Sydler
Jean-Pierre Sydler (1921-1988) was a Swiss mathematician and a librarian, well known for his work in geometry, most notably on Hilbert's third problem.
Biography
Sydler was born in 1921 in Neuchâtel, Switzerland. He graduated from ETH Zürich in 1943 and received a doctorate in 1947. In 1950 he became a librarian at the ETH while continuing to publish mathematical papers in his spare time. In 1960 he received a prize of the Danish Academy of Sciences for his work on scissors congruence. In 1963 he became director of the ETH library and pioneered the use of automation there. He continued serving as director until his retirement in 1986. He died in 1988 in Zürich.
References
• Greg N. Frederickson, Dissections: Plane and Fancy, Cambridge University Press, 2003.
External links
• Jean-Pierre Sydler at the Mathematics Genealogy Project
Authority control
International
• FAST
• ISNI
• VIAF
National
• Germany
• United States
Academics
• Mathematics Genealogy Project
Other
• IdRef
| Wikipedia |
Sydney Chapman (mathematician)
Sydney Chapman FRS (29 January 1888 – 16 June 1970)[1] was a British mathematician and geophysicist.[4] His work on the kinetic theory of gases, solar-terrestrial physics, and the Earth's ozone layer has inspired a broad range of research over many decades.[2][5][6][7][8]
Sydney Chapman
Sydney Chapman 1888–1970
Born: 29 January 1888, Eccles, Greater Manchester, England
Died: 16 June 1970 (aged 82), Boulder, Colorado, U.S.
Alma mater: University of Manchester; University of Cambridge
Known for: Chapman cycle; Chapman function; Chapman–Kolmogorov equation; Chapman–Enskog theory
Awards: Fellow of the Royal Society (1919)[1]
Smith's Prize (1913)
Adams Prize (1928)
Royal Medal (1934)
Chree Medal and Prize (1941)
De Morgan Medal (1944)
William Bowie Medal (1962)
Copley Medal (1964)
Symons Gold Medal (1965)
Scientific career
Institutions: University of Manchester; University of Cambridge; Imperial College London; University of Oxford; The Queen's College, Oxford; Royal Observatory, Greenwich; University of Colorado
Academic advisors: G. H. Hardy[2]
Doctoral students
• Franz Kahn[3]
• George Frederick James Temple[2]
• Syun-Ichi Akasofu
Education and early life
Chapman was born in Eccles, near Salford in England and began his advanced studies at a technical institute, now the University of Salford, in 1902.[9] In 1904 at age 16, Chapman entered the University of Manchester. He competed for a scholarship to the university offered by his home county, and was the last student selected. Chapman later reflected, "I sometimes wonder what would have happened if I'd hit one place lower."[5] He initially studied engineering in the department headed by Osborne Reynolds. Chapman was taught mathematics by Horace Lamb, the Beyer professor of mathematics, and J. E. Littlewood, who came from Cambridge in Chapman's final year at Manchester. Although he graduated with an engineering degree, Chapman had become so enthusiastic for mathematics that he stayed for one further year to take a mathematics degree. Following Lamb's suggestion, Chapman applied for a scholarship to Trinity College, Cambridge. He was at first awarded only a partial scholarship as a sizar (meaning that he obtained financial support by acting as a servant to other students), but from his second year onwards he received a full scholarship. He graduated as a wrangler in 1910.[6] He began his research in pure mathematics under G. H. Hardy, but later that year was asked by Sir Frank Dyson to be his chief assistant at the Royal Greenwich Observatory.
Career and research
From 1914 to 1919, Chapman returned to Cambridge as a lecturer in mathematics and a fellow of Trinity. He held the Beyer Chair of Applied Mathematics at Manchester from 1919 to 1924, the same position as had been held by Lamb, and then moved to Imperial College London. During the Second World War he was Deputy Scientific Advisor to the Army Council.[6]
In 1946, Chapman was elected to the Sedleian Chair of Natural Philosophy at Oxford, and was appointed fellow of The Queen's College, Oxford. In 1953, on his retirement from Oxford, Chapman took research and teaching opportunities all over the world,[4] including at the University of Alaska and the University of Colorado, but also as far afield as Istanbul, Cairo, Prague, and Tokyo. As the Advisory Scientific Director of the University of Alaska Geophysical Institute from 1951 to 1970, he spent three months of the year in Alaska, usually during winter for research into auroras.[10] Much of the remainder of the year he spent at the High Altitude Observatory in Boulder, Colorado.[11]
Chapman's most noted mathematical accomplishments were in the field of stochastic processes (random processes), especially Markov processes. In his study of Markovian stochastic processes and their generalizations, Chapman and the Russian Andrey Kolmogorov independently developed the pivotal set of equations in the field, the Chapman–Kolmogorov equations. Chapman is credited with working out, in 1930, the photochemical mechanisms that give rise to the ozone layer.[11]
Chapman is recognised as one of the pioneers of solar-terrestrial physics.[4] This interest stemmed from his early work on the kinetic theory of gases. Chapman studied magnetic storms and aurorae, developing theories to explain their relation to the interaction of the Earth's magnetic field with the solar wind. He disputed and ridiculed the work of Kristian Birkeland and Hannes Alfvén, later adopting Birkeland's theories as his own.[12][13] Chapman and his first graduate student, V. C. A. Ferraro, predicted the presence of the magnetosphere in the early 1930s. They also predicted characteristics of the magnetosphere that were confirmed 30 years later by the Explorer 12 satellite.[5]
In 1940, Chapman and a German colleague, Julius Bartels, published a book in two volumes[14][15] on geomagnetism, which was to become the standard textbook for the next two decades.[5] In 1946 Chapman coined the term aeronomy, which is used today to describe the scientific field of high-altitude research into atmosphere/space interaction.
From 1951 to 1954, Chapman was president of the International Union of Geodesy and Geophysics (IUGG).
Chapman was President of the Special Committee for the International Geophysical Year (IGY). The idea of the IGY stemmed from a discussion in 1950 between Chapman and scientists including James Van Allen. The IGY was held in 1957–58, and resulted in great progress in fields including Earth and space sciences, as well as leading to the first satellite launches.
Honours and awards
Chapman was bestowed many honours over his career, including Smith's Prize in 1913,[6] election as a Fellow of the Royal Society in 1919,[1] Invited Speaker of the ICM in 1924,[16] Royal Society Bakerian lecturer in 1931, Royal Society Royal Medal in 1934, London Mathematical Society De Morgan Medal in 1944. In 1949, he was awarded the Gold Medal of the Royal Astronomical Society and was elected as a Fellow of the Royal Society of Edinburgh in 1953. In 1964, he was awarded the Copley Medal of the Royal Society and in 1965 the Symons Gold Medal of the Royal Meteorological Society. He was elected to the National Academies of Science of the United States, Norway, Sweden and Finland.[6] He served as president of the London Mathematical Society during 1929–1931 and the Royal Meteorological Society 1932–1933.
The lunar Crater Chapman is named in his honour, as is the Sydney Chapman Building on the campus of the University of Alaska Fairbanks. This building served as the first permanent home of the University of Alaska Geophysical Institute, and it now contains the Department of Computer Science and the Department of Mathematics and Statistics.[17] The American Geophysical Union organises "Chapman Conferences," which are small, topical meetings intended to foster innovative research in key areas.[18] The Royal Astronomical Society founded the Chapman Medal in his memory.[19]
Personal life
In 1970, Chapman died in Boulder, Colorado, at the age of 82.[6]
References
1. Cowling, T. G. (1971). "Sydney Chapman 1888–1970". Biographical Memoirs of Fellows of the Royal Society. 17: 53–89. doi:10.1098/rsbm.1971.0003.
2. Sydney Chapman at the Mathematics Genealogy Project
3. Kahn, Franz Daniel (1950). Some problems concerning the luminosity and other properties of the upper atmosphere. ethos.bl.uk (DPhil thesis). University of Oxford.
4. Akasofu, S. I. (1970). "In memoriam Sydney Chapman". Space Science Reviews. 11 (5): 599. Bibcode:1970SSRv...11..599A. doi:10.1007/BF00177026. S2CID 120617892.
5. Akasofu, S. I. (2011). "The scientific legacy of Sydney Chapman". Eos, Transactions American Geophysical Union. 92 (34): 281–282. Bibcode:2011EOSTr..92..281A. doi:10.1029/2011EO340001.
6. O'Connor, John J.; Robertson, Edmund F., "Sydney Chapman (mathematician)", MacTutor History of Mathematics Archive, University of St Andrews
7. Finding aid to papers of Sydney Chapman, Niels Bohr Library and Archives, accessed 7 September 2008
8. Sydney Chapman page at the Geophysical Institute of the University of Alaska, Fairbanks Archived 30 August 2009 at the Wayback Machine includes sections from Sydney Chapman, Eighty, From His Friends, accessed 4 October 2008
9. Hockey, Thomas (2009). The Biographical Encyclopedia of Astronomers. Springer Publishing. ISBN 978-0-387-31022-0. Retrieved 22 August 2012.
10. Keith B. Mather. "Introduction to Sydney Chapman". Geophysical Institute. Archived from the original on 7 December 2010. Retrieved 20 December 2010.
11. Sydney Chapman, eighty: From His Friends By Sydney Chapman, Syun-Ichi Akasofu, Benson Fogle, Bernhard Haurwitz, University of Alaska (College). Geophysical Institute, National Center for Atmospheric Research (U.S.) Published by National Center for Atmospheric Research, 1968
12. Lucy Jago (2001). The Northern Lights. New York: Alfred A. Knopf. ISBN 0-375-40980-7
13. Schuster, A. (1911). "The Origin of Magnetic Storms". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 85 (575): 44–50. Bibcode:1911RSPSA..85...44S. doi:10.1098/rspa.1911.0019.
14. Sydney Chapman; J. Bartels (1940). Geomagnetism, Vol. I, Geomagnetic and Related Phenomena. Oxford Univ. Press.
15. Sydney Chapman; J. Bartels (1940). Geomagnetism, Vol. II, Analysis and Physical Interpretation of the Phenomena. Oxford Univ. Press.
16. Chapman, S.; Whitehead, T. T. "The influence of electromagnetic induction within the earth upon terrestrial magnetic storms" (PDF). In: Proceedings of the International Congress of Mathematicians in Toronto, August 11–16. 1924. Vol. 2. pp. 313–338. Archived from the original (PDF) on 1 December 2017.
17. Kieth B. Mather (1970). "Sydney Chapman (obit)". University of Alaska.
18. "Chapman Conferences". Archived from the original on 5 August 2011. Retrieved 28 August 2011.
19. Tayler, R.J. (1987). History of the Royal Astronomical Society: Volume 2 1920–1980. Oxford: Blackwell. p. 202. ISBN 0-632-01792-9.
Copley Medallists (1951–2000)
• David Keilin (1951)
• Paul Dirac (1952)
• Albert Kluyver (1953)
• E. T. Whittaker (1954)
• Ronald Fisher (1955)
• Patrick Blackett (1956)
• Howard Florey (1957)
• John Edensor Littlewood (1958)
• Macfarlane Burnet (1959)
• Harold Jeffreys (1960)
• Hans Krebs (1961)
• Cyril Norman Hinshelwood (1962)
• Paul Fildes (1963)
• Sydney Chapman (1964)
• Alan Hodgkin (1965)
• Lawrence Bragg (1966)
• Bernard Katz (1967)
• Tadeusz Reichstein (1968)
• Peter Medawar (1969)
• Alexander R. Todd (1970)
• Norman Pirie (1971)
• Nevill Francis Mott (1972)
• Andrew Huxley (1973)
• W. V. D. Hodge (1974)
• Francis Crick (1975)
• Dorothy Hodgkin (1976)
• Frederick Sanger (1977)
• Robert Burns Woodward (1978)
• Max Perutz (1979)
• Derek Barton (1980)
• Peter D. Mitchell (1981)
• John Cornforth (1982)
• Rodney Robert Porter (1983)
• Subrahmanyan Chandrasekhar (1984)
• Aaron Klug (1985)
• Rudolf Peierls (1986)
• Robin Hill (1987)
• Michael Atiyah (1988)
• César Milstein (1989)
• Abdus Salam (1990)
• Sydney Brenner (1991)
• George Porter (1992)
• James D. Watson (1993)
• Frederick Charles Frank (1994)
• Frank Fenner (1995)
• Alan Cottrell (1996)
• Hugh Huxley (1997)
• James Lighthill (1998)
• John Maynard Smith (1999)
• Alan Battersby (2000)
De Morgan Medallists
• Arthur Cayley (1884)
• James Joseph Sylvester (1887)
• Lord Rayleigh (1890)
• Felix Klein (1893)
• S. Roberts (1896)
• William Burnside (1899)
• A. G. Greenhill (1902)
• H. F. Baker (1905)
• J. W. L. Glaisher (1908)
• Horace Lamb (1911)
• J. Larmor (1914)
• W. H. Young (1917)
• E. W. Hobson (1920)
• P. A. MacMahon (1923)
• A. E. H. Love (1926)
• Godfrey Harold Hardy (1929)
• Bertrand Russell (1932)
• E. T. Whittaker (1935)
• J. E. Littlewood (1938)
• Louis Mordell (1941)
• Sydney Chapman (1944)
• George Neville Watson (1947)
• A. S. Besicovitch (1950)
• E. C. Titchmarsh (1953)
• G. I. Taylor (1956)
• W. V. D. Hodge (1959)
• Max Newman (1962)
• Philip Hall (1965)
• Mary Cartwright (1968)
• Kurt Mahler (1971)
• Graham Higman (1974)
• C. Ambrose Rogers (1977)
• Michael Atiyah (1980)
• K. F. Roth (1983)
• J. W. S. Cassels (1986)
• D. G. Kendall (1989)
• Albrecht Fröhlich (1992)
• W. K. Hayman (1995)
• R. A. Rankin (1998)
• J. A. Green (2001)
• Roger Penrose (2004)
• Bryan John Birch (2007)
• Keith William Morton (2010)
• John Griggs Thompson (2013)
• Timothy Gowers (2016)
• Andrew Wiles (2019)
Sedleian Professors of Natural Philosophy
• Edward Lapworth (1621)
• John Edwards (1638)
• Joshua Crosse (1648)
• Thomas Willis (1660)
• Thomas Millington (1675)
• James Fayrer (1704)
• Charles Bertie (1719/20)
• Joseph Browne (1746/7)
• Benjamin Wheeler (1767)
• Thomas Hornsby (1782)
• George Leigh Cooke (1810)
• Bartholomew Price (1853)
• Augustus Edward Hough Love (1899)
• Sydney Chapman (1946)
• George Frederick James Temple (1953)
• Albert E. Green (1968)
• Brooke Benjamin (1979)
• John M. Ball (1996)
• Jonathan Keating (2019)
University of Oxford portal
Authority control
International
• FAST
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Catalonia
• Germany
• Israel
• United States
• Sweden
• Japan
• Czech Republic
• Australia
• Netherlands
• Poland
Academics
• CiNii
• Leopoldina
• MathSciNet
• Mathematics Genealogy Project
• Scopus
• zbMATH
People
• Deutsche Biographie
• Trove
Other
• SNAC
• IdRef
| Wikipedia |
Sidney Holgate
Sidney Holgate, CBE (9 September 1918 – 17 May 2003) was a British mathematician and academic.
Holgate was schooled at Henry Mellish School and won a scholarship to Hatfield College, Durham, where he studied Mathematics and eventually became Senior Man.[1] He was also President of the Durham Union for the Michaelmas term of 1940.[2] Having been found unfit for wartime service on medical grounds, he instead taught for a year at Nottingham High School, before returning to Durham and completing his doctorate in 1945.[1]
He was Master of Grey College, Durham from its foundation in 1959 to 1980. He served as Pro-Vice-Chancellor and Sub-Warden of Durham University from 1964 to 1969.[1][3]
References
1. "Sidney Holgate". The Times. 10 June 2003. Retrieved 11 March 2018.
2. Campbell, P. D. A. (1952). A Short History of the Durham Union Society. Durham County Press. p. 17.
3. "Dr Sidney Holgate 1918–2003: First Master of Grey College". News. Durham University. Retrieved 14 April 2014.
Authority control: Academics
• MathSciNet
• zbMATH
| Wikipedia |
Syllabical and Steganographical Table
Syllabical and Steganographical Table (French: Tableau syllabique et stéganographique) is an eighteenth-century cryptographical work by P. R. Wouves. Published by Benjamin Franklin Bache in 1797, it provided a method for representing pairs of letters by numbers. It may have been the first chart for cryptographic purposes to have been printed in the United States.[1][2]
References
1. Kane, Joseph Nathan (1997). Famous First Facts: A Record of First Happenings, Discoveries and Inventions in the United States, Fifth Edition. H. W. Wilson. p. 329. ISBN 0-8242-0930-3. The first cryptography chart was P.R. Wouves's A Syllabical and Steganographical Table, a chart 27 by 19.25 inches, with a list of syllables and words in English and French intended for secret correspondence.
2. Sheldon, Rose Mary. "The Friedman Collection: An Analytical Guide, Item 265" (PDF). George C. Marshall Foundation. Retrieved August 7, 2015. William F. Friedman library collection Item 265 Wouves, P.R., A Syllabical and Steganographical Table, Philadelphia: Printed by Benjamin Franklin Bache, 1797. Photostat negative. According to William F. Friedman, an item of very great historical interest and importance. An explanatory note accompanies the item written by the book dealer in Philadelphia from whom Friedman purchased this copy. See Galland, p. 205 for explanatory details. In French and English, 1786-98: ―By means of which any sort of writings taken from either the French, English, Dutch, Spanish, Portuguese or Italian languages or any languages which use the same alphabet can be converted into numerical figures. According to the extract from a letter from W.D. Witt, Book Dealer on Carlisle Street, Philadelphia, the Library of Congress does NOT have a copy of this. The New York Public Library has a copy signed by Wouves and the Philadelphia Historical Society has an unsigned copy. De Witt thought Wouves was a pseudonym for Benjamin Franklin. The publisher, Benjamin Franklin Bach was his grandson. We know Franklin made use of ciphers. P.R.=Poor Richard?
| Wikipedia |
Sylow theorems
In mathematics, specifically in the field of finite group theory, the Sylow theorems are a collection of theorems named after the Norwegian mathematician Peter Ludwig Sylow[1] that give detailed information about the number of subgroups of fixed order that a given finite group contains. The Sylow theorems form a fundamental part of finite group theory and have very important applications in the classification of finite simple groups.
Algebraic structure → Group theory
Group theory
Basic notions
• Subgroup
• Normal subgroup
• Quotient group
• (Semi-)direct product
Group homomorphisms
• kernel
• image
• direct sum
• wreath product
• simple
• finite
• infinite
• continuous
• multiplicative
• additive
• cyclic
• abelian
• dihedral
• nilpotent
• solvable
• action
• Glossary of group theory
• List of group theory topics
Finite groups
• Cyclic group Zn
• Symmetric group Sn
• Alternating group An
• Dihedral group Dn
• Quaternion group Q
• Cauchy's theorem
• Lagrange's theorem
• Sylow theorems
• Hall's theorem
• p-group
• Elementary abelian group
• Frobenius group
• Schur multiplier
Classification of finite simple groups
• cyclic
• alternating
• Lie type
• sporadic
• Discrete groups
• Lattices
• Integers ($\mathbb {Z} $)
• Free group
Modular groups
• PSL(2, $\mathbb {Z} $)
• SL(2, $\mathbb {Z} $)
• Arithmetic group
• Lattice
• Hyperbolic group
Topological and Lie groups
• Solenoid
• Circle
• General linear GL(n)
• Special linear SL(n)
• Orthogonal O(n)
• Euclidean E(n)
• Special orthogonal SO(n)
• Unitary U(n)
• Special unitary SU(n)
• Symplectic Sp(n)
• G2
• F4
• E6
• E7
• E8
• Lorentz
• Poincaré
• Conformal
• Diffeomorphism
• Loop
Infinite dimensional Lie group
• O(∞)
• SU(∞)
• Sp(∞)
Algebraic groups
• Linear algebraic group
• Reductive group
• Abelian variety
• Elliptic curve
For a prime number $p$, a Sylow p-subgroup (sometimes p-Sylow subgroup) of a group $G$ is a maximal $p$-subgroup of $G$, i.e., a subgroup of $G$ that is a p-group (meaning its cardinality is a power of $p,$ or equivalently, the order of every group element is a power of $p$) that is not a proper subgroup of any other $p$-subgroup of $G$. The set of all Sylow $p$-subgroups for a given prime $p$ is sometimes written ${\text{Syl}}_{p}(G)$.
The Sylow theorems assert a partial converse to Lagrange's theorem. Lagrange's theorem states that for any finite group $G$ the order (number of elements) of every subgroup of $G$ divides the order of $G$. The Sylow theorems state that for every prime factor $p$ of the order of a finite group $G$, there exists a Sylow $p$-subgroup of $G$ of order $p^{n}$, the highest power of $p$ that divides the order of $G$. Moreover, every subgroup of order $p^{n}$ is a Sylow $p$-subgroup of $G$, and the Sylow $p$-subgroups of a group (for a given prime $p$) are conjugate to each other. Furthermore, the number of Sylow $p$-subgroups of a group for a given prime $p$ is congruent to 1 (mod $p$).
Theorems
Motivation
The Sylow theorems are a powerful statement about the structure of groups in general, but are also powerful in applications of finite group theory. This is because they give a method for using the prime decomposition of the cardinality of a finite group $G$ to give statements about the structure of its subgroups: essentially, it gives a technique to transport basic number-theoretic information about a group to its group structure. From this observation, classifying finite groups becomes a game of finding which combinations/constructions of groups of smaller order can be applied to construct a group. For example, a typical application of these theorems is in the classification of finite groups of some fixed cardinality, e.g. $|G|=60$.[2]
Statement
Collections of subgroups that are each maximal in one sense or another are common in group theory. The surprising result here is that in the case of $\operatorname {Syl} _{p}(G)$, all members are actually isomorphic to each other and have the largest possible order: if $|G|=p^{n}m$ with $n>0$ where p does not divide m, then every Sylow p-subgroup P has order $|P|=p^{n}$. That is, P is a p-group and ${\text{gcd}}(|G:P|,p)=1$. These properties can be exploited to further analyze the structure of G.
The following theorems were first proposed and proven by Ludwig Sylow in 1872, and published in Mathematische Annalen.
Theorem (1) — For every prime factor p with multiplicity n of the order of a finite group G, there exists a Sylow p-subgroup of G, of order $p^{n}$.
The following weaker version of theorem 1 was first proved by Augustin-Louis Cauchy, and is known as Cauchy's theorem.
Corollary — Given a finite group G and a prime number p dividing the order of G, then there exists an element (and thus a cyclic subgroup generated by this element) of order p in G.[3]
Theorem (2) — Given a finite group G and a prime number p, all Sylow p-subgroups of G are conjugate to each other. That is, if H and K are Sylow p-subgroups of G, then there exists an element $g\in G$ with $g^{-1}Hg=K$.
Theorem (3) — Let p be a prime factor with multiplicity n of the order of a finite group G, so that the order of G can be written as $p^{n}m$, where $n>0$ and p does not divide m. Let $n_{p}$ be the number of Sylow p-subgroups of G. Then the following hold:
• $n_{p}$ divides m, which is the index of the Sylow p-subgroup in G.
• $n_{p}\equiv 1{\bmod {p}}$
• $n_{p}=|G:N_{G}(P)|$, where P is any Sylow p-subgroup of G and $N_{G}$ denotes the normalizer.
Consequences
The Sylow theorems imply that for a prime number $p$ every Sylow $p$-subgroup is of the same order, $p^{n}$. Conversely, if a subgroup has order $p^{n}$, then it is a Sylow $p$-subgroup, and so is conjugate to every other Sylow $p$-subgroup. Due to the maximality condition, if $H$ is any $p$-subgroup of $G$, then $H$ is a subgroup of a $p$-subgroup of order $p^{n}$.
A very important consequence of Theorem 2 is that the condition $n_{p}=1$ is equivalent to saying that the Sylow $p$-subgroup of $G$ is a normal subgroup. However, there are groups that have normal subgroups but no normal Sylow subgroups, such as $S_{4}$.
Sylow theorems for infinite groups
There is an analogue of the Sylow theorems for infinite groups. One defines a Sylow p-subgroup in an infinite group to be a p-subgroup (that is, every element in it has p-power order) that is maximal for inclusion among all p-subgroups in the group. Let $\operatorname {Cl} (K)$ denote the set of conjugates of a subgroup $K\subset G$.
Theorem — If K is a Sylow p-subgroup of G, and $n_{p}=|\operatorname {Cl} (K)|$ is finite, then every Sylow p-subgroup is conjugate to K, and $n_{p}\equiv 1{\bmod {p}}$.
Examples
A simple illustration of Sylow subgroups and the Sylow theorems is the dihedral group of the n-gon, $D_{2n}$. For n odd, $2=2^{1}$ is the highest power of 2 dividing the order, and thus subgroups of order 2 are Sylow subgroups. These are the groups generated by a reflection, of which there are n, and they are all conjugate under rotations; geometrically the axes of symmetry pass through a vertex and a side.
By contrast, if n is even, then 4 divides the order of the group, and the subgroups of order 2 are no longer Sylow subgroups, and in fact they fall into two conjugacy classes, geometrically according to whether they pass through two vertices or through the midpoints of two sides. These are related by an outer automorphism, which can be represented by rotation through π/n, half the minimal rotation in the dihedral group.
Another example is given by the Sylow p-subgroups of $\operatorname {GL} _{2}(\mathbf {F} _{q})$, where p and q are primes ≥ 3 with $q\equiv 1{\pmod {p}}$; these are all abelian. The order of $\operatorname {GL} _{2}(\mathbf {F} _{q})$ is $(q^{2}-1)(q^{2}-q)=q(q+1)(q-1)^{2}$. Writing $q=p^{n}m+1$ with p not dividing m, the order of $\operatorname {GL} _{2}(\mathbf {F} _{q})$ is $p^{2n}m'$ with p not dividing $m'$. Thus by Theorem 1, the order of the Sylow p-subgroups is $p^{2n}$.
One such subgroup P is the set of diagonal matrices ${\begin{bmatrix}x^{im}&0\\0&x^{jm}\end{bmatrix}}$, where x is any primitive root of $\mathbf {F} _{q}$. Since the multiplicative group of $\mathbf {F} _{q}$ has order q − 1, its primitive roots have order q − 1, which implies that $x^{(q-1)/p^{n}}=x^{m}$ and all its powers have order a power of p. So P is a subgroup in which every element has order a power of p. There are $p^{n}$ choices for each of the two diagonal exponents i and j, making $|P|=p^{2n}$. This means P is a Sylow p-subgroup, which is abelian, as all diagonal matrices commute; and because Theorem 2 states that all Sylow p-subgroups are conjugate to each other, the Sylow p-subgroups of $\operatorname {GL} _{2}(\mathbf {F} _{q})$ are all abelian.
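As a numerical sanity check (my own, using p = 3 and q = 7, so that q ≡ 1 (mod p), and taking 3 as a primitive root modulo 7), the following Python sketch builds the diagonal subgroup described above and confirms that its size equals the largest power of p dividing the order of the group.

```python
p, q = 3, 7                                       # q ≡ 1 (mod p), both primes ≥ 3
order_gl2 = (q**2 - 1) * (q**2 - q)               # |GL2(F_q)| = 2016

def p_part(n, p):
    """Largest power of p dividing n."""
    power = 1
    while n % p == 0:
        n //= p
        power *= p
    return power

x = 3                                             # a primitive root modulo 7
n_exp, t = 0, q - 1
while t % p == 0:                                 # extract the exact power of p in q - 1
    t //= p
    n_exp += 1
m = (q - 1) // p**n_exp                           # q - 1 = p**n_exp * m, p does not divide m

# Diagonal entries of the matrices diag(x**(i*m), x**(j*m)) modulo q
P = {(pow(x, i * m, q), pow(x, j * m, q))
     for i in range(p**n_exp) for j in range(p**n_exp)}

print(p_part(order_gl2, p), len(P))               # both print 9 = p**(2*n_exp)
```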
Example applications
Since Sylow's theorem ensures the existence of p-subgroups of a finite group, it's worthwhile to study groups of prime power order more closely. Most of the examples use Sylow's theorem to prove that a group of a particular order is not simple. For groups of small order, the congruence condition of Sylow's theorem is often sufficient to force the existence of a normal subgroup.
Example-1
Groups of order pq, p and q primes with p < q.
Example-2
Groups of order 30, groups of order 20, and groups of order $p^{2}q$, where p and q are distinct primes, are among the applications.
Example-3
(Groups of order 60): If the order |G| = 60 and G has more than one Sylow 5-subgroup, then G is simple.
Cyclic group orders
Some non-prime numbers n are such that every group of order n is cyclic. One can show that n = 15 is such a number using the Sylow theorems: Let G be a group of order 15 = 3 · 5 and $n_{3}$ be the number of Sylow 3-subgroups. Then $n_{3}\mid 5$ and $n_{3}\equiv 1{\pmod {3}}$. The only value satisfying these constraints is 1; therefore, there is only one subgroup of order 3, and it must be normal (since it has no distinct conjugates). Similarly, $n_{5}$ must divide 3, and $n_{5}$ must equal 1 (mod 5); thus it must also have a single normal subgroup of order 5. Since 3 and 5 are coprime, the intersection of these two subgroups is trivial, and so G must be the internal direct product of groups of order 3 and 5, that is the cyclic group of order 15. Thus, there is only one group of order 15 (up to isomorphism).
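A short sketch of this counting (my own helper, not a standard library routine): it lists the candidates for $n_{p}$ allowed by Theorem 3, namely the divisors of |G|/p^k that are congruent to 1 modulo p, and shows that for |G| = 15 the only candidate is 1 for both primes.

```python
def possible_np(group_order, p):
    """Candidates for n_p allowed by Theorem 3: divisors of |G|/p^k that are 1 mod p."""
    m = group_order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(possible_np(15, 3), possible_np(15, 5))   # [1] [1] -> both Sylow subgroups are normal
```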
Small groups are not simple
A more complex example involves the order of the smallest simple group that is not cyclic. Burnside's $p^{a}q^{b}$ theorem states that if the order of a group is the product of one or two prime powers, then it is solvable, and so the group is not simple, or is of prime order and is cyclic. This rules out every group up to order 30 (= 2 · 3 · 5).
If G is simple, and |G| = 30, then $n_{3}$ must divide 10 (= 2 · 5), and $n_{3}$ must equal 1 (mod 3). Therefore, $n_{3}=10$, since neither 4 nor 7 divides 10, and if $n_{3}=1$ then, as above, G would have a normal subgroup of order 3, and could not be simple. G then has 10 distinct cyclic subgroups of order 3, each of which has 2 elements of order 3 (plus the identity). This means G has at least 20 distinct elements of order 3.
As well, $n_{5}=6$, since $n_{5}$ must divide 6 (= 2 · 3), and $n_{5}$ must equal 1 (mod 5). So G also has 24 distinct elements of order 5. But the order of G is only 30, so a simple group of order 30 cannot exist.
Next, suppose |G| = 42 = 2 · 3 · 7. Here $n_{7}$ must divide 6 (= 2 · 3) and $n_{7}$ must equal 1 (mod 7), so $n_{7}=1$. So, as before, G can not be simple.
On the other hand, for |G| = 60 = $2^{2}$ · 3 · 5, having $n_{3}=10$ and $n_{5}=6$ is perfectly possible. And in fact, the smallest simple non-cyclic group is $A_{5}$, the alternating group over 5 elements. It has order 60, and has 24 cyclic permutations of order 5, and 20 of order 3.
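The arithmetic in these counting arguments can be checked mechanically. In the sketch below (my own), the counts simply add because Sylow subgroups of prime order intersect pairwise only in the identity, so each contributes p − 1 new elements of order p.

```python
def elements_of_prime_order(sylow_counts):
    """Non-identity elements contributed by n_p Sylow subgroups of prime order p each."""
    return sum(n_p * (p - 1) for p, n_p in sylow_counts.items())

print(elements_of_prime_order({3: 10, 5: 6}), "needed, but only", 30 - 1, "available")  # 44 vs 29
print(elements_of_prime_order({3: 10, 5: 6}), "needed, and", 60 - 1, "available")       # 44 vs 59
```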
Wilson's theorem
Part of Wilson's theorem states that
$(p-1)!\equiv -1{\pmod {p}}$
for every prime p. One may easily prove this theorem by Sylow's third theorem. Indeed, observe that the number $n_{p}$ of Sylow p-subgroups in the symmetric group $S_{p}$ is (p − 2)!. On the other hand, $n_{p}\equiv 1{\pmod {p}}$. Hence, (p − 2)! ≡ 1 (mod p). So, (p − 1)! ≡ −1 (mod p).
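A brute-force check of these counts for p = 5 (my own script; it represents permutations as tuples and identifies each Sylow 5-subgroup of $S_{5}$ with the set of its elements):

```python
from itertools import permutations
from math import factorial

def count_sylow_p_subgroups_of_Sp(p):
    """Count the distinct subgroups of order p in S_p (p prime), by brute force."""
    subgroups = set()
    for perm in permutations(range(p)):
        x, length = perm[0], 1                 # length of the cycle through 0
        while x != 0:
            x = perm[x]
            length += 1
        if length != p:                        # keep only the p-cycles
            continue
        powers, g = set(), tuple(range(p))
        for _ in range(p):                     # the cyclic subgroup generated by perm
            g = tuple(perm[i] for i in g)
            powers.add(g)
        subgroups.add(frozenset(powers))
    return len(subgroups)

p = 5
n_p = count_sylow_p_subgroups_of_Sp(p)
print(n_p, factorial(p - 2), n_p % p, factorial(p - 1) % p)   # 6 6 1 4  (4 ≡ -1 mod 5)
```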
Fusion results
Frattini's argument shows that a Sylow subgroup of a normal subgroup provides a factorization of a finite group. A slight generalization known as Burnside's fusion theorem states that if G is a finite group with Sylow p-subgroup P and two subsets A and B normalized by P, then A and B are G-conjugate if and only if they are $N_{G}(P)$-conjugate. The proof is a simple application of Sylow's theorem: If $B=A^{g}$, then the normalizer of B contains not only P but also $P^{g}$ (since $P^{g}$ is contained in the normalizer of $A^{g}$). By Sylow's theorem P and $P^{g}$ are conjugate not only in G, but in the normalizer of B. Hence $gh^{-1}$ normalizes P for some h that normalizes B, and then $A^{gh^{-1}}=B^{h^{-1}}=B$, so that A and B are $N_{G}(P)$-conjugate. Burnside's fusion theorem can be used to give a more powerful factorization called a semidirect product: if G is a finite group whose Sylow p-subgroup P is contained in the center of its normalizer, then G has a normal subgroup K of order coprime to |P|, G = PK and P ∩ K = {1}, that is, G is p-nilpotent.
Less trivial applications of the Sylow theorems include the focal subgroup theorem, which studies the control a Sylow p-subgroup of the derived subgroup has on the structure of the entire group. This control is exploited at several stages of the classification of finite simple groups, and for instance defines the case divisions used in the Alperin–Brauer–Gorenstein theorem classifying finite simple groups whose Sylow 2-subgroup is a quasi-dihedral group. These rely on J. L. Alperin's strengthening of the conjugacy portion of Sylow's theorem to control what sorts of elements are used in the conjugation.
Proof of the Sylow theorems
The Sylow theorems have been proved in a number of ways, and the history of the proofs themselves is the subject of many papers, including Waterhouse,[4] Scharlau,[5] Casadio and Zappa,[6] Gow,[7] and to some extent Meo.[8]
One proof of the Sylow theorems exploits the notion of group action in various creative ways. The group G acts on itself or on the set of its p-subgroups in various ways, and each such action can be exploited to prove one of the Sylow theorems. The following proofs are based on combinatorial arguments of Wielandt.[9] In the following, we use $a\mid b$ as notation for "a divides b" and $a\nmid b$ for the negation of this statement.
Theorem (1) — A finite group G whose order $|G|$ is divisible by a prime power pk has a subgroup of order pk.
Proof
Let $|G|=p^{k}m=p^{k+r}u$ such that $p\nmid u$, and let Ω denote the set of subsets of G of size $p^{k}$. G acts on Ω by left multiplication: for g ∈ G and ω ∈ Ω, g⋅ω = { gx | x ∈ ω }. For a given set ω ∈ Ω, write $G_{\omega }$ for its stabilizer subgroup { g ∈ G | g⋅ω = ω } and $G\omega $ for its orbit { g⋅ω | g ∈ G } in Ω.
The proof will show the existence of some ω ∈ Ω for which $G_{\omega }$ has $p^{k}$ elements, providing the desired subgroup. This is the maximal possible size of a stabilizer subgroup $G_{\omega }$, since for any fixed element α ∈ ω ⊆ G, the right coset $G_{\omega }\alpha $ is contained in ω; therefore, $|G_{\omega }|=|G_{\omega }\alpha |\leq |\omega |=p^{k}$.
By the orbit-stabilizer theorem we have $|G_{\omega }|\,|G\omega |=|G|$ for each ω ∈ Ω, and therefore using the additive p-adic valuation νp, which counts the number of factors p, one has $\nu _{p}(|G_{\omega }|)+\nu _{p}(|G\omega |)=\nu _{p}(|G|)=k+r$. This means that for those ω with $|G_{\omega }|=p^{k}$, the ones we are looking for, one has $\nu _{p}(|G\omega |)=r$, while for any other ω one has $\nu _{p}(|G\omega |)>r$ (as $0<|G_{\omega }|<p^{k}$ implies $\nu _{p}(|G_{\omega }|)<k$). Since |Ω| is the sum of $|G\omega |$ over all distinct orbits $G\omega $, one can show the existence of ω of the former type by showing that $\nu _{p}(|\Omega |)=r$ (if none existed, that valuation would exceed r). This is an instance of Kummer's theorem (since in base p notation the number |G| ends with precisely k + r digits zero, subtracting $p^{k}$ from it involves a carry in r places), and can also be shown by a simple computation:
$|\Omega |={p^{k}m \choose p^{k}}=\prod _{j=0}^{p^{k}-1}{\frac {p^{k}m-j}{p^{k}-j}}=m\prod _{j=1}^{p^{k}-1}{\frac {p^{k-\nu _{p}(j)}m-j/p^{\nu _{p}(j)}}{p^{k-\nu _{p}(j)}-j/p^{\nu _{p}(j)}}}$
and no power of p remains in any of the factors inside the product on the right. Hence νp(|Ω|) = νp(m) = r, completing the proof.
It may be noted that conversely every subgroup H of order $p^{k}$ gives rise to sets ω ∈ Ω for which $G_{\omega }=H$, namely any one of the m distinct cosets Hg.
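The key valuation identity used above, $\nu _{p}{\tbinom {p^{k}m}{p^{k}}}=\nu _{p}(m)$, is easy to spot-check numerically; the following Python snippet (an illustrative check only) does so for a few values:

```python
from math import comb

def vp(n, p):
    """p-adic valuation: the exponent of the largest power of p dividing n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

# Check that v_p(C(p^k * m, p^k)) = v_p(m), as in the proof of Theorem 1.
p, k = 3, 2
for m in (1, 2, 3, 6, 9, 10, 45):
    assert vp(comb(p**k * m, p**k), p) == vp(m, p)
print("valuation identity verified for the sampled values")
```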
Lemma — Let H be a finite p-group, let Ω be a finite set acted on by H, and let Ω0 denote the set of points of Ω that are fixed under the action of H. Then |Ω| ≡ |Ω0| (mod p).
Proof
Any element x ∈ Ω not fixed by H lies in an orbit of size |H|/|Hx| (where Hx denotes the stabilizer of x); this size is greater than 1 and divides |H|, hence is a multiple of p since H is a p-group. The result follows immediately by writing |Ω| as the sum of the orbit sizes over all distinct orbits and reducing mod p: the orbits of size 1 are exactly the fixed points, and every other orbit contributes a multiple of p.
Theorem (2) — If H is a p-subgroup of G and P is a Sylow p-subgroup of G, then there exists an element g in G such that g−1Hg ≤ P. In particular, all Sylow p-subgroups of G are conjugate to each other (and therefore isomorphic), that is, if H and K are Sylow p-subgroups of G, then there exists an element g in G with g−1Hg = K.
Proof
Let Ω be the set of left cosets of P in G and let H act on Ω by left multiplication. Applying the Lemma to H on Ω, we see that |Ω0| ≡ |Ω| = [G : P] (mod p). Now $p\nmid [G:P]$ by definition so $p\nmid |\Omega _{0}|$, hence in particular |Ω0| ≠ 0 so there exists some gP ∈ Ω0. With this gP, we have hgP = gP for all h ∈ H, so g−1HgP = P and therefore g−1Hg ≤ P. Furthermore, if H is a Sylow p-subgroup, then |g−1Hg| = |H| = |P| so that g−1Hg = P.
Theorem (3) — Let q denote the order of any Sylow p-subgroup P of a finite group G. Let np denote the number of Sylow p-subgroups of G. Then (a) np = [G : NG(P)] (where NG(P) is the normalizer of P), (b) np divides |G|/q, and (c) np ≡ 1 (mod p).
Proof
Let Ω be the set of all Sylow p-subgroups of G and let G act on Ω by conjugation. Let P ∈ Ω be a Sylow p-subgroup. By Theorem 2, the orbit of P has size np, so by the orbit-stabilizer theorem np = [G : GP]. For this group action, the stabilizer GP is given by {g ∈ G | gPg−1 = P} = NG(P), the normalizer of P in G. Thus, np = [G : NG(P)], and it follows that this number is a divisor of [G : P] = |G|/q.
Now let P act on Ω by conjugation, and again let Ω0 denote the set of fixed points of this action. Let Q ∈ Ω0 and observe that then Q = xQx−1 for all x ∈ P so that P ≤ NG(Q). By Theorem 2, P and Q are conjugate in NG(Q) in particular, and Q is normal in NG(Q), so then P = Q. It follows that Ω0 = {P} so that, by the Lemma, |Ω| ≡ |Ω0| = 1 (mod p).
Algorithms
The problem of finding a Sylow subgroup of a given group is an important problem in computational group theory.
One proof of the existence of Sylow p-subgroups is constructive: if H is a p-subgroup of G and the index [G:H] is divisible by p, then the normalizer N = NG(H) of H in G is also such that [N : H] is divisible by p. In other words, a polycyclic generating system of a Sylow p-subgroup can be found by starting from any p-subgroup H (including the identity) and taking elements of p-power order contained in the normalizer of H but not in H itself. The algorithmic version of this (and many improvements) is described in textbook form in Butler,[10] including the algorithm described in Cannon.[11] These versions are still used in the GAP computer algebra system.
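The normalizer-growing idea behind this constructive proof can be illustrated with a deliberately naive Python sketch for tiny groups given as sets of permutation tuples. This is an assumption-laden toy (all names are ad hoc, and it enumerates whole subgroups), not the polycyclic-generating-system algorithms of Butler and Cannon nor the polynomial-time methods described in the next paragraph; it merely mirrors the argument: while p divides [G:H], pick an element of p-power order in the normalizer of H but not in H, and enlarge H.

```python
from itertools import permutations

def compose(a, b):
    """Compose permutation tuples: (a∘b)(i) = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, v in enumerate(a):
        inv[v] = i
    return tuple(inv)

def order(a, e):
    """Order of the permutation a (e is the identity)."""
    n, x = 1, a
    while x != e:
        x, n = compose(x, a), n + 1
    return n

def generated(gens, e):
    """Subgroup generated by gens; in a finite group, closure under products suffices."""
    elems, grew = {e}, True
    while grew:
        grew = False
        for a in list(elems):
            for g in gens:
                c = compose(a, g)
                if c not in elems:
                    elems.add(c)
                    grew = True
    return elems

def is_p_power(n, p):
    while n % p == 0:
        n //= p
    return n == 1

def sylow_subgroup(G, p, e):
    """Grow a p-subgroup H inside its normalizer until p no longer divides [G:H]."""
    H = {e}
    while (len(G) // len(H)) % p == 0:
        N = {g for g in G
             if all(compose(compose(g, h), inverse(g)) in H for h in H)}  # N_G(H)
        g = next(x for x in N - H if is_p_power(order(x, e), p))
        H = generated(H | {g}, e)  # <H, g> is a strictly larger p-group
    return H

S4 = set(permutations(range(4)))
e = tuple(range(4))
print(len(sylow_subgroup(S4, 2, e)))  # 8: a (dihedral) Sylow 2-subgroup of S_4
```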
In permutation groups, it has been proven, in Kantor[12][13][14] and Kantor and Taylor,[15] that a Sylow p-subgroup and its normalizer can be found in time polynomial in the input size (the degree of the group times the number of generators). These algorithms are described in textbook form in Seress,[16] and are now becoming practical as the constructive recognition of finite simple groups becomes a reality. In particular, versions of this algorithm are used in the Magma computer algebra system.
See also
• Frattini's argument
• Hall subgroup
• Maximal subgroup
• p-group
Notes
1. Sylow, L. (1872). "Théorèmes sur les groupes de substitutions". Math. Ann. (in French). 5 (4): 584–594. doi:10.1007/BF01442913. JFM 04.0056.02. S2CID 121928336.
2. Gracia–Saz, Alfonso. "Classification of groups of order 60" (PDF). math.toronto.edu. Archived (PDF) from the original on 28 October 2020. Retrieved 8 May 2021.
3. Fraleigh, John B. (2004). A First Course In Abstract Algebra. with contribution by Victor J. Katz. Pearson Education. p. 322. ISBN 9788178089973.
4. Waterhouse 1980.
5. Scharlau 1988.
6. Casadio & Zappa 1990.
7. Gow 1994.
8. Meo 2004.
9. Wielandt 1959.
10. Butler 1991, Chapter 16.
11. Cannon 1971.
12. Kantor 1985a.
13. Kantor 1985b.
14. Kantor 1990.
15. Kantor & Taylor 1988.
16. Seress 2003.
References
Proofs
• Casadio, Giuseppina; Zappa, Guido (1990). "History of the Sylow theorem and its proofs". Boll. Storia Sci. Mat. (in Italian). 10 (1): 29–75. ISSN 0392-4432. MR 1096350. Zbl 0721.01008.
• Gow, Rod (1994). "Sylow's proof of Sylow's theorem". Irish Math. Soc. Bull.. 0033 (33): 55–63. doi:10.33232/BIMS.0033.55.63. ISSN 0791-5578. MR 1313412. Zbl 0829.01011.
• Kammüller, Florian; Paulson, Lawrence C. (1999). "A formal proof of Sylow's theorem. An experiment in abstract algebra with Isabelle HOL" (PDF). J. Automat. Reason.. 23 (3): 235–264. doi:10.1023/A:1006269330992. ISSN 0168-7433. MR 1721912. S2CID 1449341. Zbl 0943.68149. Archived from the original (PDF) on 2006-01-03.
• Meo, M. (2004). "The mathematical life of Cauchy's group theorem". Historia Math. 31 (2): 196–221. doi:10.1016/S0315-0860(03)00003-X. ISSN 0315-0860. MR 2055642. Zbl 1065.01009.
• Scharlau, Winfried (1988). "Die Entdeckung der Sylow-Sätze". Historia Math. (in German). 15 (1): 40–52. doi:10.1016/0315-0860(88)90048-1. ISSN 0315-0860. MR 0931678. Zbl 0637.01006.
• Waterhouse, William C. (1980). "The early proofs of Sylow's theorem". Arch. Hist. Exact Sci.. 21 (3): 279–290. doi:10.1007/BF00327877. ISSN 0003-9519. MR 0575718. S2CID 123685226. Zbl 0436.01006.
• Wielandt, Helmut [in German] (1959). "Ein Beweis für die Existenz der Sylowgruppen". Arch. Math. (in German). 10 (1): 401–402. doi:10.1007/BF01240818. ISSN 0003-9268. MR 0147529. S2CID 119816392. Zbl 0092.02403.
Algorithms
• Butler, G. (1991). Fundamental Algorithms for Permutation Groups. Lecture Notes in Computer Science. Vol. 559. Berlin, New York City: Springer-Verlag. doi:10.1007/3-540-54955-2. ISBN 9783540549550. MR 1225579. S2CID 395110. Zbl 0785.20001.
• Cannon, John J. (1971). "Computing local structure of large finite groups". Computers in Algebra and Number Theory (Proc. SIAM-AMS Sympos. Appl. Math., New York City, 1970). SIAM-AMS Proc.. Vol. 4. Providence RI: AMS. pp. 161–176. ISSN 0160-7634. MR 0367027. Zbl 0253.20027.
• Kantor, William M. (1985a). "Polynomial-time algorithms for finding elements of prime order and Sylow subgroups" (PDF). J. Algorithms. 6 (4): 478–514. CiteSeerX 10.1.1.74.3690. doi:10.1016/0196-6774(85)90029-X. ISSN 0196-6774. MR 0813589. Zbl 0604.20001.
• Kantor, William M. (1985b). "Sylow's theorem in polynomial time". J. Comput. Syst. Sci.. 30 (3): 359–394. doi:10.1016/0022-0000(85)90052-2. ISSN 1090-2724. MR 0805654. Zbl 0573.20022.
• Kantor, William M.; Taylor, Donald E. (1988). "Polynomial-time versions of Sylow's theorem". J. Algorithms. 9 (1): 1–17. doi:10.1016/0196-6774(88)90002-8. ISSN 0196-6774. MR 0925595. Zbl 0642.20019.
• Kantor, William M. (1990). "Finding Sylow normalizers in polynomial time". J. Algorithms. 11 (4): 523–563. doi:10.1016/0196-6774(90)90009-4. ISSN 0196-6774. MR 1079450. Zbl 0731.20005.
• Seress, Ákos (2003). Permutation Group Algorithms. Cambridge Tracts in Mathematics. Vol. 152. Cambridge University Press. ISBN 9780521661034. MR 1970241. Zbl 1028.20002.
External links
• "Sylow theorems", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Abstract Algebra/Group Theory/The Sylow Theorems at Wikibooks
• Weisstein, Eric W. "Sylow p-Subgroup". MathWorld.
• Weisstein, Eric W. "Sylow Theorems". MathWorld.
| Wikipedia |
Hall subgroup
In mathematics, specifically group theory, a Hall subgroup of a finite group G is a subgroup whose order is coprime to its index. They were introduced by the group theorist Philip Hall (1928).
Definitions
A Hall divisor (also called a unitary divisor) of an integer n is a divisor d of n such that d and n/d are coprime. The easiest way to find the Hall divisors is to write the prime power factorization of the number in question and take any subset of the factors. For example, to find the Hall divisors of 60, its prime power factorization is 2² × 3 × 5, so one takes any product of 3, 2² = 4, and 5. Thus, the Hall divisors of 60 are 1, 3, 4, 5, 12, 15, 20, and 60.
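For illustration, a brute-force Python one-off (the function name is ad hoc) that lists the Hall divisors of an integer directly from the definition:

```python
from math import gcd

def hall_divisors(n):
    """Divisors d of n such that d and n // d are coprime (unitary divisors)."""
    return [d for d in range(1, n + 1) if n % d == 0 and gcd(d, n // d) == 1]

print(hall_divisors(60))  # [1, 3, 4, 5, 12, 15, 20, 60]
```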
A Hall subgroup of G is a subgroup whose order is a Hall divisor of the order of G. In other words, it is a subgroup whose order is coprime to its index.
If π is a set of primes, then a Hall π-subgroup is a subgroup whose order is a product of primes in π, and whose index is not divisible by any primes in π.
Examples
• Any Sylow subgroup of a group is a Hall subgroup.
• The alternating group A4 of order 12 is solvable but has no subgroups of order 6 even though 6 divides 12, showing that Hall's theorem (see below) cannot be extended to all divisors of the order of a solvable group.
• If G = A5, the only simple group of order 60, then 15 and 20 are Hall divisors of the order of G, but G has no subgroups of these orders.
• The simple group of order 168 has two different conjugacy classes of Hall subgroups of order 24 (though they are connected by an outer automorphism of G).
• The simple group of order 660 has two Hall subgroups of order 12 that are not even isomorphic (and so certainly not conjugate, even under an outer automorphism). The normalizer of a Sylow 2-subgroup of order 4 is isomorphic to the alternating group A4 of order 12, while the normalizer of a subgroup of order 2 or 3 is isomorphic to the dihedral group of order 12.
Hall's theorem
Hall (1928) proved that if G is a finite solvable group and π is any set of primes, then G has a Hall π-subgroup, and any two Hall π-subgroups are conjugate. Moreover, any subgroup whose order is a product of primes in π is contained in some Hall π-subgroup. This result can be thought of as a generalization of Sylow's Theorem to Hall subgroups, but the examples above show that such a generalization is false when the group is not solvable.
The existence of Hall subgroups can be proved by induction on the order of G, using the fact that every finite solvable group has a normal elementary abelian subgroup. More precisely, fix a minimal normal subgroup A, which is either a π-group or a π′-group as G is π-separable. By induction there is a subgroup H of G containing A such that H/A is a Hall π-subgroup of G/A. If A is a π-group then H is a Hall π-subgroup of G. On the other hand, if A is a π′-group, then by the Schur–Zassenhaus theorem A has a complement in H, which is a Hall π-subgroup of G.
A converse to Hall's theorem
Any finite group that has a Hall π-subgroup for every set of primes π is solvable. This is a generalization of Burnside's theorem that any group whose order is of the form $p^{a}q^{b}$ for primes p and q is solvable, because Sylow's theorem implies that all Hall subgroups exist. This does not (at present) give another proof of Burnside's theorem, because Burnside's theorem is used to prove this converse.
Sylow systems
A Sylow system is a set of Sylow p-subgroups Sp for each prime p such that SpSq = SqSp for all p and q. If we have a Sylow system, then the subgroup generated by the groups Sp for p in π is a Hall π-subgroup. A more precise version of Hall's theorem says that any solvable group has a Sylow system, and any two Sylow systems are conjugate.
Normal Hall subgroups
Any normal Hall subgroup H of a finite group G possesses a complement, that is, there is some subgroup K of G that intersects H trivially and such that HK = G (so G is a semidirect product of H and K). This is the Schur–Zassenhaus theorem.
See also
• Formation
References
• Gorenstein, Daniel (1980), Finite groups, New York: Chelsea Publishing Co., ISBN 0-8284-0301-5, MR 0569209.
• Hall, Philip (1928), "A note on soluble groups", Journal of the London Mathematical Society, 3 (2): 98–105, doi:10.1112/jlms/s1-3.2.98, JFM 54.0145.01, MR 1574393
| Wikipedia |
Sylvester's law of inertia
Sylvester's law of inertia is a theorem in matrix algebra about certain properties of the coefficient matrix of a real quadratic form that remain invariant under a change of basis. Namely, if $A$ is the symmetric matrix that defines the quadratic form, and $S$ is any invertible matrix such that $D=SAS^{T}$ is diagonal, then the number of negative elements in the diagonal of $D$ is always the same, for all such $S$; and the same goes for the number of positive elements.
This property is named after James Joseph Sylvester who published its proof in 1852.[1][2]
Statement
Let $A$ be a symmetric square matrix of order $n$ with real entries. Any non-singular matrix $S$ of the same size is said to transform $A$ into another symmetric matrix $B=SAS^{T}$, also of order $n$, where $S^{T}$ is the transpose of $S$. It is also said that matrices $A$ and $B$ are congruent. If $A$ is the coefficient matrix of some quadratic form of $\mathbb {R} ^{n}$, then $B$ is the matrix for the same form after the change of basis defined by $S$.
A symmetric matrix $A$ can always be transformed in this way into a diagonal matrix $D$ which has only entries $0$, $+1$, $-1$ along the diagonal. Sylvester's law of inertia states that the number of diagonal entries of each kind is an invariant of $A$, i.e. it does not depend on the matrix $S$ used.
The number of $+1$s, denoted $n_{+}$, is called the positive index of inertia of $A$, and the number of $-1$s, denoted $n_{-}$, is called the negative index of inertia. The number of $0$s, denoted $n_{0}$, is the dimension of the null space of $A$, known as the nullity of $A$. These numbers satisfy an obvious relation
$n_{0}+n_{+}+n_{-}=n.$
The difference, $\mathrm {sgn} (A)=n_{+}-n_{-}$, is usually called the signature of $A$. (However, some authors use that term for the triple $(n_{0},n_{+},n_{-})$ consisting of the nullity and the positive and negative indices of inertia of $A$; for a non-degenerate form of a given dimension these are equivalent data, but in general the triple yields more data.)
If the matrix $A$ has the property that every principal upper left $k\times k$ minor $\Delta _{k}$ is non-zero then the negative index of inertia is equal to the number of sign changes in the sequence
$\Delta _{0}=1,\Delta _{1},\ldots ,\Delta _{n}=\det A.$
Statement in terms of eigenvalues
The law can also be stated as follows: two symmetric square matrices of the same size have the same number of positive, negative and zero eigenvalues if and only if they are congruent[3] ($B=SAS^{T}$, for some non-singular $S$).
The positive and negative indices of a symmetric matrix $A$ are also the number of positive and negative eigenvalues of $A$. Any symmetric real matrix $A$ has an eigendecomposition of the form $QEQ^{T}$ where $E$ is a diagonal matrix containing the eigenvalues of $A$, and $Q$ is an orthonormal square matrix containing the eigenvectors. The matrix $E$ can be written $E=WDW^{T}$ where $D$ is diagonal with entries $0,+1,-1$, and $W$ is diagonal with $W_{ii}={\sqrt {|E_{ii}|}}$. The matrix $S=QW$ transforms $D$ to $A$.
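The invariance is easy to observe numerically. The sketch below (using NumPy; illustrative only) computes the inertia triple from eigenvalues, checks that a random congruence preserves it, and reconstructs the congruence $A=(QW)D(QW)^{T}$ described above.

```python
import numpy as np

def inertia(M, tol=1e-10):
    """(n_+, n_-, n_0) computed from the eigenvalues of a symmetric matrix."""
    w = np.linalg.eigvalsh(M)
    return (int((w > tol).sum()), int((w < -tol).sum()), int((np.abs(w) <= tol).sum()))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                          # a random real symmetric matrix
S = rng.standard_normal((5, 5))            # generically invertible
print(inertia(A), inertia(S @ A @ S.T))    # congruent matrices: the triples agree

# Reconstruct A = (QW) D (QW)^T with D diagonal having entries in {0, +1, -1}
w, Q = np.linalg.eigh(A)
W = np.diag(np.sqrt(np.abs(w)))
D = np.diag(np.sign(w))
print(np.allclose(Q @ W @ D @ W @ Q.T, A))  # True
```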
Law of inertia for quadratic forms
In the context of quadratic forms, a real quadratic form $Q$ in $n$ variables (or on an $n$-dimensional real vector space) can by a suitable change of basis (by non-singular linear transformation from $x$ to $y$) be brought to the diagonal form
$Q(x_{1},x_{2},\ldots ,x_{n})=\sum _{i=1}^{n}a_{i}x_{i}^{2}$
with each $a_{i}\in \{0,1,-1\}$. Sylvester's law of inertia states that the number of coefficients of a given sign is an invariant of $Q$, i.e., does not depend on a particular choice of diagonalizing basis. Expressed geometrically, the law of inertia says that all maximal subspaces on which the restriction of the quadratic form is positive definite (respectively, negative definite) have the same dimension. These dimensions are the positive and negative indices of inertia.
Generalizations
Sylvester's law of inertia is also valid if $A$ and $B$ have complex entries. In this case, it is said that $A$ and $B$ are $*$-congruent if and only if there exists a non-singular complex matrix $S$ such that $B=SAS^{*}$, where $*$ denotes the conjugate transpose. In the complex scenario, a way to state Sylvester's law of inertia is that if $A$ and $B$ are Hermitian matrices, then $A$ and $B$ are $*$-congruent if and only if they have the same inertia, the definition of which is still valid as the eigenvalues of Hermitian matrices are always real numbers.
Ostrowski proved a quantitative generalization of Sylvester's law of inertia:[4][5] if $A$ and $B$ are $*$-congruent with $B=SAS^{*}$, then their eigenvalues $\lambda _{i}$ are related by
$\lambda _{i}(B)=\theta _{i}\lambda _{i}(A),\quad i=1,\ldots ,n$
where $\theta _{i}$ are such that $\lambda _{n}(SS^{*})\leq \theta _{i}\leq \lambda _{1}(SS^{*})$.
A theorem due to Ikramov generalizes the law of inertia to any normal matrices $A$ and $B$:[6] If $A$ and $B$ are normal matrices, then $A$ and $B$ are congruent if and only if they have the same number of eigenvalues on each open ray from the origin in the complex plane.
See also
• Metric signature
• Morse theory
• Cholesky decomposition
• Haynsworth inertia additivity formula
References
1. Sylvester, James Joseph (1852). "A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitutions to the form of a sum of positive and negative squares" (PDF). Philosophical Magazine. 4th Series. 4 (23): 138–142. doi:10.1080/14786445208647087. Retrieved 2008-06-27.
2. Norman, C.W. (1986). Undergraduate algebra. Oxford University Press. pp. 360–361. ISBN 978-0-19-853248-4.
3. Carrell, James B. (2017). Groups, Matrices, and Vector Spaces: A Group Theoretic Approach to Linear Algebra. Springer. p. 313. ISBN 978-0-387-79428-0.
4. Ostrowski, Alexander M. (1959). "A quantitative formulation of Sylvester's law of inertia" (PDF). Proceedings of the National Academy of Sciences. 45 (5): 740–744. Bibcode:1959PNAS...45..740O. doi:10.1073/pnas.45.5.740. PMC 222627. PMID 16590437.
5. Higham, Nicholas J.; Cheng, Sheung Hun (1998). "Modifying the inertia of matrices arising in optimization". Linear Algebra and Its Applications. 275–276: 261–279. doi:10.1016/S0024-3795(97)10015-5.
6. Ikramov, Kh. D. (2001). "On the inertia law for normal matrices". Doklady Mathematics. 64: 141–142.
• Garling, D. J. H. (2011). Clifford algebras. An introduction. London Mathematical Society Student Texts. Vol. 78. Cambridge: Cambridge University Press. ISBN 978-1-107-09638-7. Zbl 1235.15025.
External links
• Sylvester's law at PlanetMath.
• Sylvester's law of inertia and *-congruence
| Wikipedia |
Sylvester's determinant identity
In matrix theory, Sylvester's determinant identity is an identity useful for evaluating certain types of determinants. It is named after James Joseph Sylvester, who stated this identity without proof in 1851.[1]
Given an n-by-n matrix $A$, let $\det(A)$ denote its determinant. Choose a pair
$u=(u_{1},\dots ,u_{m}),v=(v_{1},\dots ,v_{m})\subset (1,\dots ,n)$
of m-element ordered subsets of $(1,\dots ,n)$, where m ≤ n. Let $A_{v}^{u}$ denote the (n−m)-by-(n−m) submatrix of $A$ obtained by deleting the rows in $u$ and the columns in $v$. Define the auxiliary m-by-m matrix ${\tilde {A}}_{v}^{u}$ whose elements are equal to the following determinants
$({\tilde {A}}_{v}^{u})_{ij}:=\det(A_{v[{\hat {v}}_{j}]}^{u[{\hat {u}}_{i}]}),$
where $u[{\hat {u_{i}}}]$, $v[{\hat {v_{j}}}]$ denote the m−1 element subsets of $u$ and $v$ obtained by deleting the elements $u_{i}$ and $v_{j}$, respectively. Then the following is Sylvester's determinantal identity (Sylvester, 1851):
$\det(A)(\det(A_{v}^{u}))^{m-1}=\det({\tilde {A}}_{v}^{u}).$
When m = 2, this is the Desnanot-Jacobi identity (Jacobi, 1851).
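The m = 2 case is easy to test numerically. The following NumPy sketch (illustrative; helper names are ad hoc) checks the identity for a random 5 × 5 matrix with two chosen rows u and columns v:

```python
import numpy as np

def minor_det(A, rows, cols):
    """Determinant of A with the given rows and columns deleted."""
    keep_r = [i for i in range(A.shape[0]) if i not in rows]
    keep_c = [j for j in range(A.shape[1]) if j not in cols]
    return np.linalg.det(A[np.ix_(keep_r, keep_c)])

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
u, v = (0, 4), (1, 3)   # m = 2 chosen rows and columns

lhs = np.linalg.det(A) * minor_det(A, u, v)        # exponent m - 1 = 1
tilde = np.array([[minor_det(A, [u[1 - i]], [v[1 - j]]) for j in range(2)]
                  for i in range(2)])
print(np.isclose(lhs, np.linalg.det(tilde)))       # True (Desnanot-Jacobi case)
```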
See also
• Weinstein–Aronszajn identity, which is sometimes attributed to Sylvester
References
1. Sylvester, James Joseph (1851). "On the relation between the minor determinants of linearly equivalent quadratic functions". Philosophical Magazine. 1: 295–305.
Cited in Akritas, A. G.; Akritas, E. K.; Malaschonok, G. I. (1996). "Various proofs of Sylvester's (determinant) identity". Mathematics and Computers in Simulation. 42 (4–6): 585. doi:10.1016/S0378-4754(96)00035-3.
| Wikipedia |
List of things named after James Joseph Sylvester
The mathematician J. J. Sylvester was known for his ability to coin new names and new notation for mathematical objects,[1] not based on his own name. Nevertheless, many objects and results in mathematics have come to be named after him:[2]
• The Sylvester–Gallai theorem, on the existence of a line with only two of n given points.[3]
• Sylvester–Gallai configuration, a set of points and lines without any two-point lines.
• Sylvester matroid, a matroid without any two-point lines.[4]
• Sylvester's determinant identity.
• Sylvester's matrix theorem, a.k.a. Sylvester's formula, for a matrix function in terms of eigenvalues.
• Sylvester's theorem on the product of k consecutive integers > k, that generalizes Bertrand's postulate.
• Sylvester's law of inertia a.k.a. Sylvester's rigidity theorem, about the signature of a quadratic form.
• Sylvester's identity about determinants of submatrices.[5]
• Sylvester's criterion, a characterization of positive-definite Hermitian matrices.
• Sylvester domain.
• The Sylvester matrix for two polynomials.
• Sylvester's sequence, where each term is the product of previous terms plus one.
• Sylvester cyclotomic numbers.
• The Sylvester equation, AX + XB = C where A, B, C are given matrices and X is an unknown matrix.
• Sylvester's "four point problem" of geometric probability.
• The Sylvester expansion or Fibonacci–Sylvester expansion of a rational number, a representation as a sum of unit fractions found by a greedy algorithm.
• Sylvester's rank inequality rank(A) + rank(B) − n ≤ rank(AB) on the rank of the product of an m × n matrix A and an n × p matrix B.
• Sylver coinage, a number-theoretic game.[6]
Other things named after Sylvester
• Sylvester (crater), an impact crater on the Moon
• Sylvester Medal, given by the Royal Society for the encouragement of mathematical research[7]
• Sylvester (javascript library), a vector, matrix and geometry library for JavaScript
See also
• Sylvester's closed solution for the Frobenius coin problem when there are only two coins.
• Sylvester's construction for an arbitrarily large Hadamard matrix.
• Scientific equations named after people
References
1. Franklin, Fabian (1897), "James Joseph Sylvester", Bulletin of the American Mathematical Society, 3 (9): 299–309, doi:10.1090/S0002-9904-1897-00424-4, MR 1557527.
2. MathSciNet lists over 500 mathematics articles with "Sylvester" in their titles, most of which concern mathematical subjects named after Sylvester.
3. Borwein, P.; Moser, W. O. J. (1990), "A survey of Sylvester's problem and its generalizations", Aequationes Mathematicae, 40 (1): 111–135, CiteSeerX 10.1.1.218.8616, doi:10.1007/BF02112289, S2CID 122052678.
4. Murty, U. S. R. (1969), "Sylvester matroids", Recent Progress in Combinatorics (Proc. Third Waterloo Conf. on Combinatorics, 1968), New York: Academic Press, pp. 283–286, MR 0255432.
5. Erwin H. Bareiss (1968), Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination. Mathematics of Computation, Vol. 22, No. 103, pp. 565–578
6. Berlekamp, Elwyn R.; Conway, John H.; Guy, Richard K. (1982), "Sylver Coinage", Winning Ways for your Mathematical Plays, Vol. 2: Games in Particular, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], pp. 576, 606, MR 0654502.
7. Cantor, Geoffrey (2004), "Creating the Royal Society's Sylvester Medal" (PDF), British Journal for the History of Science, 37 (1(132)): 75–92, doi:10.1017/S0007087403005132, MR 2128208, S2CID 143307164
| Wikipedia |
Sylvester's criterion
In mathematics, Sylvester’s criterion is a necessary and sufficient criterion to determine whether a Hermitian matrix is positive-definite. It is named after James Joseph Sylvester.
Sylvester's criterion states that an n × n Hermitian matrix M is positive-definite if and only if all the following matrices have a positive determinant:
• the upper left 1-by-1 corner of M,
• the upper left 2-by-2 corner of M,
• the upper left 3-by-3 corner of M,
• ${}\quad \vdots $
• M itself.
In other words, all of the leading principal minors must be positive. By using appropriate permutations of rows and columns of M, it can also be shown that the positivity of any nested sequence of n principal minors of M is equivalent to M being positive-definite.[1]
An analogous theorem holds for characterizing positive-semidefinite Hermitian matrices, except that it is no longer sufficient to consider only the leading principal minors: a Hermitian matrix M is positive-semidefinite if and only if all principal minors of M are nonnegative.[2][3]
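A small NumPy check (illustrative only) comparing the leading-principal-minor test with a direct eigenvalue test:

```python
import numpy as np

def leading_minors_positive(M):
    """Sylvester's criterion: are all leading principal minors positive?"""
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, M.shape[0] + 1))

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A_pd = B.T @ B + 0.1 * np.eye(4)            # positive definite by construction
A_bad = A_pd.copy()
A_bad[0, 0] = -1.0                          # spoils positive definiteness

for A in (A_pd, A_bad):
    print(leading_minors_positive(A), bool(np.all(np.linalg.eigvalsh(A) > 0)))
# prints "True True" then "False False": the two tests agree
```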
Simple proof for special case
Suppose $M_{n}$ is an $n\times n$ Hermitian matrix, $M_{n}^{\dagger }=M_{n}$. Let $M_{k}$, $k=1,\ldots ,n$, be its leading principal submatrices, i.e. the $k\times k$ upper left corner matrices. It will be shown that if $M_{n}$ is positive definite, then the leading principal minors are positive; that is, $\det M_{k}>0$ for all $k$.
$M_{k}$ is positive definite. Indeed, choosing
$x=\left({\begin{array}{c}x_{1}\\\vdots \\x_{k}\\0\\\vdots \\0\end{array}}\right)=\left({\begin{array}{c}{\vec {x}}\\0\\\vdots \\0\end{array}}\right)$
we see that $0<x^{\dagger }M_{n}x={\vec {x}}^{\dagger }M_{k}{\vec {x}}$ for every nonzero ${\vec {x}}$. Hence the eigenvalues of $M_{k}$ are positive, and this implies that $\det M_{k}>0$ since the determinant is the product of the eigenvalues.
To prove the reverse implication, we use induction. The general form of an $(n+1)\times (n+1)$ Hermitian matrix is
$M_{n+1}=\left({\begin{array}{cc}M_{n}&{\vec {v}}\\{\vec {v}}^{\dagger }&d\end{array}}\right)\qquad (*)$,
where $M_{n}$ is an $n\times n$ Hermitian matrix, ${\vec {v}}$ is a vector and $d$ is a real constant.
Suppose the criterion holds for matrices of order $n$, and assume that all the leading principal minors of $M_{n+1}$ are positive. Then $\det M_{n+1}>0$ and $\det M_{n}>0$, and $M_{n}$ is positive definite by the inductive hypothesis. Denote
$x=\left({\begin{array}{c}{\vec {x}}\\x_{n+1}\end{array}}\right)$
then
$x^{\dagger }M_{n+1}x={\vec {x}}^{\dagger }M_{n}{\vec {x}}+x_{n+1}{\vec {x}}^{\dagger }{\vec {v}}+{\bar {x}}_{n+1}{\vec {v}}^{\dagger }{\vec {x}}+d|x_{n+1}|^{2}$
By completing the squares, this last expression is equal to
$({\vec {x}}^{\dagger }+{\vec {v}}^{\dagger }M_{n}^{-1}{\bar {x}}_{n+1})M_{n}({\vec {x}}+x_{n+1}M_{n}^{-1}{\vec {v}})-|x_{n+1}|^{2}{\vec {v}}^{\dagger }M_{n}^{-1}{\vec {v}}+d|x_{n+1}|^{2}$
$=({\vec {x}}+{\vec {c}})^{\dagger }M_{n}({\vec {x}}+{\vec {c}})+|x_{n+1}|^{2}(d-{\vec {v}}^{\dagger }M_{n}^{-1}{\vec {v}})$
where ${\vec {c}}=x_{n+1}M_{n}^{-1}{\vec {v}}$ (note that $M_{n}^{-1}$ exists because the eigenvalues of $M_{n}$ are all positive). The first term is nonnegative because $M_{n}$ is positive definite by the inductive hypothesis. We now examine the sign of the second term. By using the block matrix determinant formula
$\det \left({\begin{array}{cc}A&B\\C&D\end{array}}\right)=\det A\det(D-CA^{-1}B)$
on $(*)$ we obtain
$\det M_{n+1}=\det M_{n}(d-{\vec {v}}^{\dagger }M_{n}^{-1}{\vec {v}})>0$, which implies $d-{\vec {v}}^{\dagger }M_{n}^{-1}{\vec {v}}>0$.
Consequently, for any nonzero $x$: if $x_{n+1}\neq 0$ the second term is positive, and if $x_{n+1}=0$ then ${\vec {c}}=0$, ${\vec {x}}\neq 0$, and the first term is positive. In either case $x^{\dagger }M_{n+1}x>0$, so $M_{n+1}$ is positive definite.
Proof for general case
The previous proof applies only to nonsingular Hermitian matrices with coefficients in $\mathbb {R} $, and therefore only to nonsingular real-symmetric matrices.
Theorem I: A real-symmetric matrix A has nonnegative eigenvalues if and only if A can be factored as A = BTB, and all eigenvalues are positive if and only if B is nonsingular.[4]
Proof:
Forward implication: If A ∈ Rn×n is symmetric, then, by the spectral theorem, there is an orthogonal matrix P such that A = PDPT , where D = diag(λ1, λ2, . . . , λn) is a real diagonal matrix whose entries are the eigenvalues of A, and the columns of P are corresponding eigenvectors of A. If λi ≥ 0 for each i, then D1/2 exists, so A = PDPT = PD1/2D1/2PT = BTB for B = D1/2PT, and λi > 0 for each i if and only if B is nonsingular.
Reverse implication: Conversely, if A can be factored as A = BTB, then all eigenvalues of A are nonnegative because for any eigenpair (λ, x):
$\lambda ={\frac {x^{T}Ax}{x^{T}x}}={\frac {x^{T}B^{T}Bx}{x^{T}x}}={\frac {\|Bx\|^{2}}{\|x\|^{2}}}\geq 0.$
Theorem II (The Cholesky decomposition): The symmetric matrix A possesses positive pivots if and only if A can be uniquely factored as A = RTR, where R is an upper-triangular matrix with positive diagonal entries. This is known as the Cholesky decomposition of A, and R is called the Cholesky factor of A.[5]
Proof:
Forward implication: If A possesses positive pivots (therefore A possesses an LU factorization: A = L·U'), then, it has an LDU factorization A = LDU = LDLT in which D = diag(u11, u22, . . . , unn) is the diagonal matrix containing the pivots uii > 0.
${\begin{aligned}\mathbf {A} &=LU'={\begin{bmatrix}1&0&\cdots &0\\\ell _{12}&1&\cdots &0\\\vdots &\vdots &&\vdots \\\ell _{1n}&\ell _{2n}&\cdots &1\end{bmatrix}}{\begin{bmatrix}u_{11}&u_{12}&\cdots &u_{1n}\\0&u_{22}&\cdots &u_{2n}\\\vdots &\vdots &&\vdots \\0&0&\cdots &u_{nn}\end{bmatrix}}\\[8pt]&=LDU={\begin{bmatrix}1&0&\cdots &0\\\ell _{12}&1&\cdots &0\\\vdots &\vdots &&\vdots \\\ell _{1n}&\ell _{2n}&\cdots &1\end{bmatrix}}{\begin{bmatrix}u_{11}&0&\cdots &0\\0&u_{22}&\cdots &0\\\vdots &\vdots &&\vdots \\0&0&\cdots &u_{nn}\end{bmatrix}}{\begin{bmatrix}1&u_{12}/u_{11}&\cdots &u_{1n}/u_{11}\\0&1&\cdots &u_{2n}/u_{22}\\\vdots &\vdots &&\vdots \\0&0&\cdots &1\end{bmatrix}}\end{aligned}}$
By a uniqueness property of the LDU decomposition, the symmetry of A yields: U = LT, consequently A = LDU = LDLT. Setting R = D1/2LT where D1/2 = diag(${\sqrt {u_{11}}},{\sqrt {u_{22}}},\ldots ,{\sqrt {u_{nn}}}$) yields the desired factorization, because A = LD1/2D1/2LT = RTR, and R is upper triangular with positive diagonal entries.
Reverse implication: Conversely, if A = RRT, where R is lower triangular with a positive diagonal, then factoring the diagonal entries out of R is as follows:
$\mathbf {R} =LD={\begin{bmatrix}1&0&\cdots &0\\r_{12}/r_{11}&1&\cdots &0\\\vdots &\vdots &&\vdots \\r_{1n}/r_{11}&r_{2n}/r_{22}&\cdots &1\end{bmatrix}}{\begin{bmatrix}r_{11}&0&\cdots &0\\0&r_{22}&\cdots &0\\\vdots &\vdots &&\vdots \\0&0&\cdots &r_{nn}\end{bmatrix}}.$
R = LD, where L is a lower triangular matrix with a unit diagonal and D is the diagonal matrix whose diagonal entries are the rii ’s. Hence D has a positive diagonal and hence D is non-singular. Hence D2 is a non-singular diagonal matrix. Also, LT is an upper triangular matrix with a unit diagonal. Consequently, A = LD2LT is the LDU factorization for A, and thus the pivots must be positive because they are the diagonal entries in D2.
Uniqueness of the Cholesky decomposition: If we have another Cholesky decomposition A = R1R1T of A, where R1 is lower triangular with a positive diagonal, then similar to the above we may write R1 = L1D1, where L1 is a lower triangular matrix with a unit diagonal and D1 is a diagonal matrix whose diagonal entries are the same as the corresponding diagonal entries of R1. Consequently, A = L1D12L1T is an LDU factorization for A. By the uniqueness of the LDU factorization of A, we have L1 = L and D12 = D2. As both D1 and D are diagonal matrices with positive diagonal entries, we have D1 = D. Hence R1 = L1D1 = LD = R. Hence A has a unique Cholesky decomposition.
Theorem III: Let Ak be the k × k leading principal submatrix of An×n. If A has an LU factorization A = LU, where L is a lower triangular matrix with a unit diagonal, then det(Ak) = u11u22 · · · ukk, and the k-th pivot is ukk = det(A1) = a11 for k = 1, ukk = det(Ak)/det(Ak−1) for k = 2, 3, . . . , n, where ukk is the (k, k)-th entry of U for all k = 1, 2, . . . , n.[6]
Combining Theorem II with Theorem III yields:
Statement I: If the symmetric matrix A can be factored as A=RTR where R is an upper-triangular matrix with positive diagonal entries, then all the pivots of A are positive (by Theorem II), therefore all the leading principal minors of A are positive (by Theorem III).
Statement II: If the nonsingular n × n symmetric matrix A can be factored as $A=B^{T}B$, then the QR decomposition (closely related to the Gram–Schmidt process) of B (B = QR) yields $A=B^{T}B=R^{T}Q^{T}QR=R^{T}R$, where Q is an orthogonal matrix and R is an upper triangular matrix.
As A is non-singular and $A=R^{T}R$, it follows that all the diagonal entries of R are non-zero. Let rjj be the (j, j)-th entry of R for all j = 1, 2, . . . , n. Then rjj ≠ 0 for all j = 1, 2, . . . , n.
Let F be a diagonal matrix, and let fjj be the (j, j)-th entry of F for all j = 1, 2, . . . , n. For all j = 1, 2, . . . , n, we set fjj = 1 if rjj > 0, and we set fjj = -1 if rjj < 0. Then $F^{T}F=I_{n}$, the n × n identity matrix.
Let S=FR. Then S is an upper-triangular matrix with all diagonal entries being positive. Hence we have $A=R^{T}R=R^{T}F^{T}FR=S^{T}S$, for some upper-triangular matrix S with all diagonal entries being positive.
Note that Statement II requires the non-singularity of the symmetric matrix A.
Combining Theorem I with Statement I and Statement II yields:
Statement III: If the real-symmetric matrix A is positive definite, then A possesses a factorization of the form A = BTB, where B is nonsingular (Theorem I); the expression A = BTB implies that A possesses a factorization of the form A = RTR where R is an upper-triangular matrix with positive diagonal entries (Statement II); therefore all the leading principal minors of A are positive (Statement I).
In other words, Statement III proves the "only if" part of Sylvester's Criterion for non-singular real-symmetric matrices.
Sylvester's Criterion: The real-symmetric matrix A is positive definite if and only if all the leading principal minors of A are positive.
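The pivot–minor relation of Theorem III can also be observed numerically. A brief NumPy sketch (illustrative only): for a positive-definite A, the squared diagonal entries of the upper-triangular Cholesky factor are the pivots, and their running products reproduce the leading principal minors.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = B.T @ B + np.eye(4)                 # symmetric positive definite

R = np.linalg.cholesky(A).T             # A = R^T R, with R upper triangular
pivots = np.diag(R) ** 2                # the u_kk of the LDU factorization
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 5)]

# det(A_k) = u_11 u_22 ... u_kk, so cumulative products of pivots give the minors
print(np.allclose(np.cumprod(pivots), minors))   # True
```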
Notes
1. Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6. See Theorem 7.2.5.
2. Carl D. Meyer, Matrix Analysis and Applied Linear Algebra. See section 7.6 Positive Definite Matrices, page 566
3. Prussing, John E. (1986), "The Principal Minor Test for Semidefinite Matrices" (PDF), Journal of Guidance, Control, and Dynamics, 9 (1): 121–122, Bibcode:1986JGCD....9..121P, doi:10.2514/3.20077, archived from the original (PDF) on 2017-01-07, retrieved 2017-09-28
4. Carl D. Meyer, Matrix Analysis and Applied Linear Algebra. See section 7.6 Positive Definite Matrices, page 558
5. Carl D. Meyer, Matrix Analysis and Applied Linear Algebra. See section 3.10 The LU Factorization, Example 3.10.7, page 154
6. Carl D. Meyer, Matrix Analysis and Applied Linear Algebra. See section 6.1 Determinants, Exercise 6.1.16, page 474
References
• Gilbert, George T. (1991), "Positive definite matrices and Sylvester's criterion", The American Mathematical Monthly, Mathematical Association of America, 98 (1): 44–46, doi:10.2307/2324036, ISSN 0002-9890, JSTOR 2324036.
• Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6. Theorem 7.2.5.
• Carl D. Meyer (June 2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 0-89871-454-0.
| Wikipedia |
Sylvester domain
In mathematics, a Sylvester domain, named after James Joseph Sylvester by Dicks & Sontag (1978), is a ring in which Sylvester's law of nullity holds. This means that if A is an m by n matrix, and B is an n by s matrix over R, then
ρ(AB) ≥ ρ(A) + ρ(B) – n
where ρ is the inner rank of a matrix. The inner rank of an m by n matrix is the smallest integer r such that the matrix is a product of an m by r matrix and an r by n matrix.
Sylvester (1884) showed that fields satisfy Sylvester's law of nullity and are, therefore, Sylvester domains.
References
• Dicks, Warren; Sontag, Eduardo D. (1978), "Sylvester domains", Journal of Pure and Applied Algebra, 13 (3): 243–275, doi:10.1016/0022-4049(78)90011-7, ISSN 0022-4049, MR 0509164
• Sylvester, James Joseph (1884), "On involutants and other allied species of invariants to matrix systems", Johns Hopkins University Circulars, III: 9–12, 34–35, Reprinted in collected papers volume IV, paper 15
| Wikipedia |
Sylvester graph
The Sylvester graph is the unique distance-regular graph with intersection array $\{5,4,2;1,1,4\}$.[1] It is a subgraph of the Hoffman–Singleton graph.
Sylvester graph
• Vertices: 36
• Edges: 90
• Radius: 3
• Diameter: 3
• Girth: 5
• Automorphisms: 1440
• Chromatic number: 4
• Chromatic index: 5
• Properties: distance-regular, Hamiltonian
References
1. Brouwer, A. E.; Cohen, A. M.; Neumaier, A. (1989), Distance-regular graphs, Springer-Verlag, Theorem 13.1.2
External links
• A.E. Brouwer's website: the Sylvester graph
| Wikipedia |
Sylvester matrix
In mathematics, a Sylvester matrix is a matrix associated to two univariate polynomials with coefficients in a field or a commutative ring. The entries of the Sylvester matrix of two polynomials are coefficients of the polynomials. The determinant of the Sylvester matrix of two polynomials is their resultant, which is zero when the two polynomials have a common root (in case of coefficients in a field) or a non-constant common divisor (in case of coefficients in an integral domain).
Sylvester matrices are named after James Joseph Sylvester.
Definition
Formally, let p and q be two nonzero polynomials, respectively of degree m and n. Thus:
$p(z)=p_{0}+p_{1}z+p_{2}z^{2}+\cdots +p_{m}z^{m},\;q(z)=q_{0}+q_{1}z+q_{2}z^{2}+\cdots +q_{n}z^{n}.$
The Sylvester matrix associated to p and q is then the $(n+m)\times (n+m)$ matrix constructed as follows:
• if n > 0, the first row is:
${\begin{pmatrix}p_{m}&p_{m-1}&\cdots &p_{1}&p_{0}&0&\cdots &0\end{pmatrix}}.$
• the second row is the first row, shifted one column to the right; the first element of the row is zero.
• the following n − 2 rows are obtained the same way, shifting the coefficients one column to the right each time and setting the other entries in the row to be 0.
• if m > 0 the (n + 1)th row is:
${\begin{pmatrix}q_{n}&q_{n-1}&\cdots &q_{1}&q_{0}&0&\cdots &0\end{pmatrix}}.$
• the following rows are obtained the same way as before.
Thus, if m = 4 and n = 3, the matrix is:
$S_{p,q}={\begin{pmatrix}p_{4}&p_{3}&p_{2}&p_{1}&p_{0}&0&0\\0&p_{4}&p_{3}&p_{2}&p_{1}&p_{0}&0\\0&0&p_{4}&p_{3}&p_{2}&p_{1}&p_{0}\\q_{3}&q_{2}&q_{1}&q_{0}&0&0&0\\0&q_{3}&q_{2}&q_{1}&q_{0}&0&0\\0&0&q_{3}&q_{2}&q_{1}&q_{0}&0\\0&0&0&q_{3}&q_{2}&q_{1}&q_{0}\end{pmatrix}}.$
If one of the degrees is zero (that is, the corresponding polynomial is a nonzero constant polynomial), then there are zero rows consisting of coefficients of the other polynomial, and the Sylvester matrix is a diagonal matrix of dimension the degree of the non-constant polynomial, with all diagonal coefficients equal to the constant polynomial. If m = n = 0, then the Sylvester matrix is the empty matrix with zero rows and zero columns.
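The construction is straightforward to code. The following Python sketch (illustrative; the helper name is ad hoc) builds the Sylvester matrix from coefficient lists given leading coefficient first, and uses its determinant and rank for the two polynomials p = (x − 1)(x − 2) and q = (x − 2)(x − 3), which share a root; the rank computation anticipates the gcd-degree formula in the Applications section below.

```python
import numpy as np

def sylvester_matrix(p, q):
    """Sylvester matrix of p and q, given as coefficient lists [p_m, ..., p_1, p_0]."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                     # n rows of shifted coefficients of p
        S[i, i:i + m + 1] = p
    for i in range(m):                     # m rows of shifted coefficients of q
        S[n + i, i:i + n + 1] = q
    return S

p = [1, -3, 2]   # (x - 1)(x - 2)
q = [1, -5, 6]   # (x - 2)(x - 3)
S = sylvester_matrix(p, q)
print(round(np.linalg.det(S), 6))          # 0.0: the resultant vanishes (common root 2)
print(np.linalg.matrix_rank(S))            # 3, so deg gcd = m + n - rank = 1
```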
A variant
The above defined Sylvester matrix appears in a Sylvester paper of 1840. In a paper of 1853, Sylvester introduced the following matrix, which is, up to a permutation of the rows, the Sylvester matrix of p and q, which are both considered as having degree max(m, n).[1] This is thus a $2\max(n,m)\times 2\max(n,m)$-matrix containing $\max(n,m)$ pairs of rows. Assuming $m>n,$ it is obtained as follows:
• the first pair is:
${\begin{pmatrix}p_{m}&p_{m-1}&\cdots &p_{n}&\cdots &p_{1}&p_{0}&0&\cdots &0\\0&\cdots &0&q_{n}&\cdots &q_{1}&q_{0}&0&\cdots &0\end{pmatrix}}.$
• the second pair is the first pair, shifted one column to the right; the first elements in the two rows are zero.
• the remaining $\max(n,m)-2$ pairs of rows are obtained the same way as above.
Thus, if m = 4 and n = 3, the matrix is:
${\begin{pmatrix}p_{4}&p_{3}&p_{2}&p_{1}&p_{0}&0&0&0\\0&q_{3}&q_{2}&q_{1}&q_{0}&0&0&0\\0&p_{4}&p_{3}&p_{2}&p_{1}&p_{0}&0&0\\0&0&q_{3}&q_{2}&q_{1}&q_{0}&0&0\\0&0&p_{4}&p_{3}&p_{2}&p_{1}&p_{0}&0\\0&0&0&q_{3}&q_{2}&q_{1}&q_{0}&0\\0&0&0&p_{4}&p_{3}&p_{2}&p_{1}&p_{0}\\0&0&0&0&q_{3}&q_{2}&q_{1}&q_{0}\\\end{pmatrix}}.$
The determinant of the 1853 matrix is, up to sign, the product of the determinant of the Sylvester matrix (which is called the resultant of p and q) by $p_{m}^{m-n}$ (still supposing $m\geq n$).
Applications
These matrices are used in commutative algebra, e.g. to test if two polynomials have a (non-constant) common factor. In such a case, the determinant of the associated Sylvester matrix (which is called the resultant of the two polynomials) equals zero. The converse is also true.
The solutions of the simultaneous linear equations
${S_{p,q}}^{\mathrm {T} }\cdot {\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}0\\0\end{pmatrix}}$
where $x$ is a vector of size $n$ and $y$ has size $m$, comprise the coefficient vectors of those and only those pairs $x,y$ of polynomials (of degrees $n-1$ and $m-1$, respectively) which fulfill
$x(z)\cdot p(z)+y(z)\cdot q(z)=0,$
where polynomial multiplication and addition is used. This means the kernel of the transposed Sylvester matrix gives all solutions of the Bézout equation where $\deg x<\deg q$ and $\deg y<\deg p$.
Consequently the rank of the Sylvester matrix determines the degree of the greatest common divisor of p and q:
$\deg(\gcd(p,q))=m+n-\operatorname {rank} S_{p,q}.$
Moreover, the coefficients of this greatest common divisor may be expressed as determinants of submatrices of the Sylvester matrix (see Subresultant).
See also
• Transfer matrix
• Bézout matrix
References
1. Akritas, A.G.; Malaschonok, G.I.; Vigklas, P.S. (2014). "Sturm Sequences and Modified Subresultant Polynomial Remainder Sequences". Serdica Journal of Computing. 8 (1): 29–46.
• Weisstein, Eric W. "Sylvester Matrix". MathWorld.
| Wikipedia |
Sylvester matroid
In matroid theory, a Sylvester matroid is a matroid in which every pair of elements belongs to a three-element circuit (a triangle) of the matroid.[1][2]
Example
The $n$-point line (i.e., the rank 2 uniform matroid on $n$ elements, $U{}_{n}^{2}$) is a Sylvester matroid because every pair of elements is a basis and every triple is a circuit.
A Sylvester matroid of rank three may be formed from any Steiner triple system, by defining the lines of the matroid to be the triples of the system. Sylvester matroids of rank three may also be formed from Sylvester–Gallai configurations, configurations of points and lines (in non-Euclidean spaces) with no two-point line. For example, the Fano plane and the Hesse configuration give rise to Sylvester matroids with seven and nine elements respectively, and may be interpreted either as Steiner triple systems or as Sylvester–Gallai configurations.
Properties
A Sylvester matroid with rank $r$ must have at least $2^{r}-1$ elements; this bound is tight only for the projective spaces over GF(2), of which the Fano plane is an example.[3]
In a Sylvester matroid, every independent set can be augmented by one more element to form a circuit of the matroid.[1][4]
Sylvester matroids (other than $U{}_{n}^{2}$) cannot be represented over the real numbers (this is the Sylvester–Gallai theorem), nor can they be oriented.[5]
History
Sylvester matroids were studied and named by Murty (1969) after James Joseph Sylvester, because they violate the Sylvester–Gallai theorem (for points and lines in the Euclidean plane, or in higher-dimensional Euclidean spaces) that for every finite set of points there is a line containing only two of the points.
References
1. Murty, U. S. R. (1969), "Sylvester matroids", Recent Progress in Combinatorics (Proc. Third Waterloo Conf. on Combinatorics, 1968), New York: Academic Press, pp. 283–286, MR 0255432.
2. Welsh, D. J. A. (2010), Matroid Theory, Courier Dover Publications, p. 297, ISBN 9780486474397.
3. Murty, U. S. R. (1970), "Matroids with Sylvester property", Aequationes Mathematicae, 4 (1–2): 44–50, doi:10.1007/BF01817744, MR 0265186, S2CID 189832452.
4. Bryant, V. W.; Dawson, J. E.; Perfect, Hazel (1978), "Hereditary circuit spaces", Compositio Mathematica, 37 (3): 339–351, MR 0511749.
5. Ziegler, Günter M. (1991), "Some minimal non-orientable matroids of rank three", Geometriae Dedicata, 38 (3): 365–371, doi:10.1007/BF00181199, MR 1112674, S2CID 14993704.
| Wikipedia |
Quaternary cubic
In mathematics, a quaternary cubic form is a degree 3 homogeneous polynomial in four variables. The zeros form a cubic surface in 3-dimensional projective space.
Invariants
Salmon (1860) and Clebsch (1861, 1861b) studied the ring of invariants of a quaternary cubic, which is a ring generated by invariants of degrees 8, 16, 24, 32, 40, 100. The generators of degrees 8, 16, 24, 32, 40 generate a polynomial ring. The generator of degree 100 is a skew invariant, whose square is a polynomial in the other generators given explicitly by Salmon. Salmon also gave an explicit formula for the discriminant as a polynomial in the generators, though Edge (1980) pointed out that the formula has a widely copied misprint in it.
Sylvester pentahedron
A generic quaternary cubic can be written as a sum of 5 cubes of linear forms, unique up to multiplication by cube roots of unity. This was conjectured by Sylvester in 1851, and proven 10 years later by Clebsch. The union of the 5 planes where these 5 linear forms vanish is called the Sylvester pentahedron.
See also
• Ternary cubic
• Ternary quartic
• Invariants of a binary form
References
• Clebsch, A. (1861), "Zur Theorie der algebraischer Flächen", Journal für die reine und angewandte Mathematik, 58: 93–108, ISSN 0075-4102
• Clebsch, A. (1861), "Ueber eine Transformation der homogenen Funktionen dritter Ordnung mit vier Veränderlichen", Journal für die reine und angewandte Mathematik, 58: 109–126, doi:10.1515/crll.1861.58.109, ISSN 0075-4102
• Edge, W. L. (1980), "The Discriminant of a Cubic Surface", Proceedings of the Royal Irish Academy, Royal Irish Academy, 80A (1): 75–78, ISSN 0035-8975, JSTOR 20489083
• Salmon, George (1860), "On Quaternary Cubics", Philosophical Transactions of the Royal Society, The Royal Society, 150: 229–239, doi:10.1098/rstl.1860.0015, ISSN 0080-4614, JSTOR 108770
• Schmitt, Alexander (1997), "Quaternary cubic forms and projective algebraic threefolds", L'Enseignement Mathématique, 2e Série, 43 (3): 253–270, ISSN 0013-8584, MR 1489885
| Wikipedia |
Sylvester's sequence
In number theory, Sylvester's sequence is an integer sequence in which each term is the product of the previous terms, plus one. The first few terms of the sequence are
2, 3, 7, 43, 1807, 3263443, 10650056950807, 113423713055421844361000443 (sequence A000058 in the OEIS).
Sylvester's sequence is named after James Joseph Sylvester, who first investigated it in 1880. Its values grow doubly exponentially, and the sum of its reciprocals forms a series of unit fractions that converges to 1 more rapidly than any other series of unit fractions. The recurrence by which it is defined allows the numbers in the sequence to be factored more easily than other numbers of the same magnitude, but, due to the rapid growth of the sequence, complete prime factorizations are known only for a few of its terms. Values derived from this sequence have also been used to construct finite Egyptian fraction representations of 1, Sasakian Einstein manifolds, and hard instances for online algorithms.
Formal definitions
Formally, Sylvester's sequence can be defined by the formula
$s_{n}=1+\prod _{i=0}^{n-1}s_{i}.$
The product of the empty set is 1, so s0 = 2.
Alternatively, one may define the sequence by the recurrence
$\displaystyle s_{i}=s_{i-1}(s_{i-1}-1)+1,$ with s0 = 2.
It is straightforward to show by induction that this is equivalent to the other definition.
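Either definition translates directly into code. A minimal Python sketch (illustrative only) generating the sequence by the quadratic recurrence and checking it against the product formula:

```python
from math import prod

def sylvester(k):
    """First k terms of Sylvester's sequence via s_i = s_{i-1}(s_{i-1} - 1) + 1."""
    seq = [2]
    while len(seq) < k:
        seq.append(seq[-1] * (seq[-1] - 1) + 1)
    return seq

terms = sylvester(8)
print(terms)
# [2, 3, 7, 43, 1807, 3263443, 10650056950807, 113423713055421844361000443]

# Each term is one more than the product of all previous terms.
assert all(s == 1 + prod(terms[:i]) for i, s in enumerate(terms))
```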
Closed form formula and asymptotics
The Sylvester numbers grow doubly exponentially as a function of n. Specifically, it can be shown that
$s_{n}=\left\lfloor E^{2^{n+1}}+{\frac {1}{2}}\right\rfloor ,\!$
for a number E that is approximately 1.26408473530530...[1] (sequence A076393 in the OEIS). This formula has the effect of the following algorithm:
s0 is the nearest integer to $E^{2}$; s1 is the nearest integer to $E^{4}$; s2 is the nearest integer to $E^{8}$; in general, to obtain sn, take $E^{2}$, square it n more times, and take the nearest integer.
This would only be a practical algorithm if we had a better way of calculating E to the requisite number of places than calculating sn and taking its repeated square root.
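In the other direction, the constant can be estimated from the known terms: by the displayed formula, $(s_{n}-{\tfrac {1}{2}})^{1/2^{n+1}}$ approaches E. A quick illustrative computation in Python:

```python
terms = [2, 3, 7, 43, 1807, 3263443, 10650056950807]
for n, s in enumerate(terms):
    # (s_n - 1/2)^(1 / 2^(n+1)) gives successive approximations to E
    print(n, (s - 0.5) ** (1 / 2 ** (n + 1)))
# the printed values approach E ≈ 1.2640847353...
```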
The double-exponential growth of the Sylvester sequence is unsurprising if one compares it to the sequence of Fermat numbers Fn ; the Fermat numbers are usually defined by a doubly exponential formula, $2^{2^{n}}\!+1$, but they can also be defined by a product formula very similar to that defining Sylvester's sequence:
$F_{n}=2+\prod _{i=0}^{n-1}F_{i}.$
Connection with Egyptian fractions
The unit fractions formed by the reciprocals of the values in Sylvester's sequence generate an infinite series:
$\sum _{i=0}^{\infty }{\frac {1}{s_{i}}}={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{7}}+{\frac {1}{43}}+{\frac {1}{1807}}+\cdots .$
The partial sums of this series have a simple form,
$\sum _{i=0}^{j-1}{\frac {1}{s_{i}}}=1-{\frac {1}{s_{j}-1}}={\frac {s_{j}-2}{s_{j}-1}}.$
This may be proved by induction, or more directly by noting that the recursion implies that
${\frac {1}{s_{i}-1}}-{\frac {1}{s_{i+1}-1}}={\frac {1}{s_{i}}},$
so the sum telescopes
$\sum _{i=0}^{j-1}{\frac {1}{s_{i}}}=\sum _{i=0}^{j-1}\left({\frac {1}{s_{i}-1}}-{\frac {1}{s_{i+1}-1}}\right)={\frac {1}{s_{0}-1}}-{\frac {1}{s_{j}-1}}=1-{\frac {1}{s_{j}-1}}.$
Since this sequence of partial sums (sj − 2)/(sj − 1) converges to one, the overall series forms an infinite Egyptian fraction representation of the number one:
$1={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{7}}+{\frac {1}{43}}+{\frac {1}{1807}}+\cdots .$
One can find finite Egyptian fraction representations of one, of any length, by truncating this series and subtracting one from the last denominator:
$1={\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{6}},\quad 1={\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{7}}+{\tfrac {1}{42}},\quad 1={\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{7}}+{\tfrac {1}{43}}+{\tfrac {1}{1806}},\quad \dots .$
The sum of the first k terms of the infinite series provides the closest possible underestimate of 1 by any k-term Egyptian fraction.[2] For example, the first four terms add to 1805/1806, and therefore any Egyptian fraction for a number in the open interval (1805/1806, 1) requires at least five terms.
It is possible to interpret the Sylvester sequence as the result of a greedy algorithm for Egyptian fractions, that at each step chooses the smallest possible denominator that makes the partial sum of the series be less than one. Alternatively, the terms of the sequence after the first can be viewed as the denominators of the odd greedy expansion of 1/2.
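These partial-sum identities are easy to verify with exact rational arithmetic; the following Python sketch (illustrative only) checks both the closed form for the partial sums and the five-term representation obtained by the truncation trick:

```python
from fractions import Fraction

s, total = [2], Fraction(0)
for _ in range(5):
    total += Fraction(1, s[-1])
    s.append(s[-1] * (s[-1] - 1) + 1)
    assert total == 1 - Fraction(1, s[-1] - 1)   # partial sum = 1 - 1/(s_j - 1)

print(total)   # 3263441/3263442 after the first five terms

# Truncating and subtracting one from the last denominator gives an exact unit-fraction sum:
assert sum(Fraction(1, d) for d in (2, 3, 7, 43, 1806)) == 1
```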
Uniqueness of quickly growing series with rational sums
As Sylvester himself observed, Sylvester's sequence seems to be unique in having such quickly growing values, while simultaneously having a series of reciprocals that converges to a rational number. This sequence provides an example showing that double-exponential growth is not enough to cause an integer sequence to be an irrationality sequence.[3]
To make this more precise, it follows from results of Badea (1993) that, if a sequence of integers $a_{n}$ grows quickly enough that
$a_{n}\geq a_{n-1}^{2}-a_{n-1}+1,$
and if the series
$A=\sum {\frac {1}{a_{i}}}$
converges to a rational number A, then, for all n after some point, this sequence must be defined by the same recurrence
$a_{n}=a_{n-1}^{2}-a_{n-1}+1$
that can be used to define Sylvester's sequence.
Erdős & Graham (1980) conjectured that, in results of this type, the inequality bounding the growth of the sequence could be replaced by a weaker condition,
$\lim _{n\rightarrow \infty }{\frac {a_{n}}{a_{n-1}^{2}}}=1.$
Badea (1995) surveys progress related to this conjecture; see also Brown (1979).
Divisibility and factorizations
If i < j, it follows from the definition that sj ≡ 1 (mod si ). Therefore, every two numbers in Sylvester's sequence are relatively prime. The sequence can be used to prove that there are infinitely many prime numbers, as any prime can divide at most one number in the sequence. More strongly, no prime factor of a number in the sequence can be congruent to 5 modulo 6, and the sequence can be used to prove that there are infinitely many primes congruent to 7 modulo 12.[4]
Unsolved problem in mathematics:
Are all the terms in Sylvester's sequence squarefree?
Much remains unknown about the factorization of the numbers in Sylvester's sequence. For instance, it is not known if all numbers in the sequence are squarefree, although all the known terms are.
As Vardi (1991) describes, it is easy to determine which Sylvester number (if any) a given prime p divides: simply compute the recurrence defining the numbers modulo p until finding either a number that is congruent to zero (mod p) or finding a repeated modulus. Using this technique he found that 1166 out of the first three million primes are divisors of Sylvester numbers,[5] and that none of these primes has a square that divides a Sylvester number. The set of primes which can occur as factors of Sylvester numbers is of density zero in the set of all primes:[6] indeed, the number of such primes less than x is $O(\pi (x)/\log \log \log x)$.[7]
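A minimal sketch of this divisibility test (our own illustration, not Vardi's code; the function name and the iteration cutoff are arbitrary choices) looks as follows.

```python
def sylvester_divisor_index(p, limit=10_000):
    """Return the first n with p dividing s_n, or None if the residues cycle first."""
    seen = {}
    s = 2 % p
    for n in range(limit):
        if s == 0:
            return n                  # p divides s_n
        if s in seen:
            return None               # repeated residue: p divides no Sylvester number
        seen[s] = n
        s = (s * s - s + 1) % p       # the recurrence, reduced modulo p
    return None

print(sylvester_divisor_index(13))    # 4, since s_4 = 1807 = 13 * 139
print(sylvester_divisor_index(5))     # None: the residues cycle without reaching 0
```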
The following table shows known factorizations of these numbers (except the first four, which are all prime):[8]
n Factors of sn
4 13 × 139
5 3263443, which is prime
6 547 × 607 × 1033 × 31051
7 29881 × 67003 × 9119521 × 6212157481
8 5295435634831 × 31401519357481261 × 77366930214021991992277
9 181 × 1987 × 112374829138729 × 114152531605972711 × 35874380272246624152764569191134894955972560447869169859142453622851
10 2287 × 2271427 × 21430986826194127130578627950810640891005487 × P156
11 73 × C416
12 2589377038614498251653 × 2872413602289671035947763837 × C785
13 52387 × 5020387 × 5783021473 × 401472621488821859737 × 287001545675964617409598279 × C1600
14 13999 × 74203 × 9638659 × 57218683 × 10861631274478494529 × C3293
15 17881 × 97822786011310111 × 54062008753544850522999875710411 × C6618
16 128551 × C13335
17 635263 × 1286773 × 21269959 × C26661
18 50201023123 × 139263586549 × 60466397701555612333765567 × C53313
19 775608719589345260583891023073879169 × C106685
20 352867 × 6210298470888313 × C213419
21 387347773 × 1620516511 × C426863
22 91798039513 × C853750
As is customary, Pn and Cn denote prime numbers and unfactored composite numbers n digits long.
Applications
Boyer, Galicki & Kollár (2005) use the properties of Sylvester's sequence to define large numbers of Sasakian Einstein manifolds having the differential topology of odd-dimensional spheres or exotic spheres. They show that the number of distinct Sasakian Einstein metrics on a topological sphere of dimension 2n − 1 is at least proportional to sn and hence has double exponential growth with n.
As Galambos & Woeginger (1995) describe, Brown (1979) and Liang (1980) used values derived from Sylvester's sequence to construct lower bound examples for online bin packing algorithms. Seiden & Woeginger (2005) similarly use the sequence to lower bound the performance of a two-dimensional cutting stock algorithm.[9]
Znám's problem concerns sets of numbers such that each number in the set divides but is not equal to the product of all the other numbers, plus one. Without the inequality requirement, the values in Sylvester's sequence would solve the problem; with that requirement, it has other solutions derived from recurrences similar to the one defining Sylvester's sequence. Solutions to Znám's problem have applications to the classification of surface singularities (Brenton and Hill 1988) and to the theory of nondeterministic finite automata.[10]
D. R. Curtiss (1922) describes an application of the closest approximations to one by k-term sums of unit fractions, in lower-bounding the number of divisors of any perfect number, and Miller (1919) uses the same property to upper bound the size of certain groups.
See also
• Cahen's constant
• Primary pseudoperfect number
• Leonardo number
Notes
1. Graham, Knuth & Patashnik (1989) set this as an exercise; see also Golomb (1963).
2. This claim is commonly attributed to Curtiss (1922), but Miller (1919) appears to be making the same statement in an earlier paper. See also Rosenman & Underwood (1933), Salzer (1947), and Soundararajan (2005).
3. Guy (2004).
4. Guy & Nowakowski (1975).
5. This appears to be a typo, as Andersen finds 1167 prime divisors in this range.
6. Jones (2006).
7. Odoni (1985).
8. All prime factors p of Sylvester numbers sn with p < 5×107 and n ≤ 200 are listed by Vardi. Ken Takusagawa lists the factorizations up to s9 and the factorization of s10. The remaining factorizations are from a list of factorizations of Sylvester's sequence maintained by Jens Kruse Andersen. Retrieved 2014-06-13.
9. In their work, Seiden and Woeginger refer to Sylvester's sequence as "Salzer's sequence" after the work of Salzer (1947) on closest approximation.
10. Domaratzki et al. (2005).
References
• Badea, Catalin (1993). "A theorem on irrationality of infinite series and applications". Acta Arithmetica. 63 (4): 313–323. doi:10.4064/aa-63-4-313-323. MR 1218459.
• Badea, Catalin (1995). "On some criteria for irrationality for series of positive rationals: a survey" (PDF). Archived from the original (PDF) on 2008-09-11.
• Boyer, Charles P.; Galicki, Krzysztof; Kollár, János (2005). "Einstein metrics on spheres". Annals of Mathematics. 162 (1): 557–580. arXiv:math.DG/0309408. doi:10.4007/annals.2005.162.557. MR 2178969. S2CID 13945306.
• Brenton, Lawrence; Hill, Richard (1988). "On the Diophantine equation 1=Σ1/ni + 1/Πni and a class of homologically trivial complex surface singularities". Pacific Journal of Mathematics. 133 (1): 41–67. doi:10.2140/pjm.1988.133.41. MR 0936356.
• Brown, D. J. (1979). "A lower bound for on-line one-dimensional bin packing algorithms". Tech. Rep. R-864. Coordinated Science Lab., Univ. of Illinois, Urbana-Champaign. {{cite journal}}: Cite journal requires |journal= (help)
• Curtiss, D. R. (1922). "On Kellogg's diophantine problem". American Mathematical Monthly. 29 (10): 380–387. doi:10.2307/2299023. JSTOR 2299023.
• Domaratzki, Michael; Ellul, Keith; Shallit, Jeffrey; Wang, Ming-Wei (2005). "Non-uniqueness and radius of cyclic unary NFAs". International Journal of Foundations of Computer Science. 16 (5): 883–896. doi:10.1142/S0129054105003352. MR 2174328.
• Erdős, Paul; Graham, Ronald L. (1980). Old and new problems and results in combinatorial number theory. Monographies de L'Enseignement Mathématique, No. 28, Univ. de Genève. MR 0592420.
• Galambos, Gábor; Woeginger, Gerhard J. (1995). "On-line bin packing — A restricted survey". Mathematical Methods of Operations Research. 42 (1): 25. doi:10.1007/BF01415672. MR 1346486. S2CID 26692460.
• Golomb, Solomon W. (1963). "On certain nonlinear recurring sequences". American Mathematical Monthly. 70 (4): 403–405. doi:10.2307/2311857. JSTOR 2311857. MR 0148605.
• Graham, R.; Knuth, D. E.; Patashnik, O. (1989). Concrete Mathematics (2nd ed.). Addison-Wesley. Exercise 4.37. ISBN 0-201-55802-5.
• Guy, Richard K. (2004). "E24 Irrationality sequences". Unsolved Problems in Number Theory (3rd ed.). Springer-Verlag. p. 346. ISBN 0-387-20860-7. Zbl 1058.11001.
• Guy, Richard; Nowakowski, Richard (1975). "Discovering primes with Euclid". Delta (Waukesha). 5 (2): 49–63. MR 0384675.
• Jones, Rafe (2006). "The density of prime divisors in the arithmetic dynamics of quadratic polynomials". Journal of the London Mathematical Society. 78 (2): 523–544. arXiv:math.NT/0612415. Bibcode:2006math.....12415J. doi:10.1112/jlms/jdn034. S2CID 15310955.
• Liang, Frank M. (1980). "A lower bound for on-line bin packing". Information Processing Letters. 10 (2): 76–79. doi:10.1016/S0020-0190(80)90077-0. MR 0564503.
• Miller, G. A. (1919). "Groups possessing a small number of sets of conjugate operators". Transactions of the American Mathematical Society. 20 (3): 260–270. doi:10.2307/1988867. JSTOR 1988867.
• Odoni, R. W. K. (1985). "On the prime divisors of the sequence wn+1 =1+w1⋯wn". Journal of the London Mathematical Society. Series II. 32: 1–11. doi:10.1112/jlms/s2-32.1.1. Zbl 0574.10020.
• Rosenman, Martin; Underwood, F. (1933). "Problem 3536". American Mathematical Monthly. 40 (3): 180–181. doi:10.2307/2301036. JSTOR 2301036.
• Salzer, H. E. (1947). "The approximation of numbers as sums of reciprocals". American Mathematical Monthly. 54 (3): 135–142. doi:10.2307/2305906. JSTOR 2305906. MR 0020339.
• Seiden, Steven S.; Woeginger, Gerhard J. (2005). "The two-dimensional cutting stock problem revisited". Mathematical Programming. 102 (3): 519–530. doi:10.1007/s10107-004-0548-1. MR 2136225. S2CID 35815524.
• Soundararajan, K. (2005). "Approximating 1 from below using n Egyptian fractions". arXiv:math.CA/0502247. {{cite journal}}: Cite journal requires |journal= (help)
• Sylvester, J. J. (1880). "On a point in the theory of vulgar fractions". American Journal of Mathematics. 3 (4): 332–335. doi:10.2307/2369261. JSTOR 2369261.
• Vardi, Ilan (1991). Computational Recreations in Mathematica. Addison-Wesley. pp. 82–89. ISBN 0-201-52989-0.
External links
• Irrationality of Quadratic Sums, from K. S. Brown's MathPages.
• Weisstein, Eric W. "Sylvester's Sequence". MathWorld.
Sylvestre François Lacroix
Sylvestre François Lacroix (28 April 1765 – 24 May 1843) was a French mathematician.
Sylvestre François Lacroix
Born: 28 April 1765, Paris, France
Died: 24 May 1843, Paris, France (aged 78)
Scientific career
Fields: Mathematics
Academic advisors: Gaspard Monge
Life
He was born in Paris, and was raised in a poor family that still managed to obtain a good education for their son. Lacroix's path to mathematics started with the novel Robinson Crusoe, which gave him an interest in sailing and thus in navigation. Geometry then captured his interest, and the rest of mathematics followed. He took courses with Antoine-René Mauduit at the Collège Royal de France and Joseph-François Marie at the Collège Mazarin of the University of Paris. In 1779 he obtained some lunar observations of Pierre Charles Le Monnier and began to calculate the variables of lunar theory. The next year he followed some lectures of Gaspard Monge.
In 1782 at the age of 17 he became an instructor in mathematics at the École de Gardes de la Marine in Rochefort. Monge was the students' examiner and Lacroix's supervisor there until 1795. Returning to Paris, Condorcet hired Lacroix to fill in for him as instructor of gentlemen at a Paris lycée. In 1787 he began to teach at École Royale Militaire de Paris and he married Marie Nicole Sophie Arcambal.
In Besançon, from 1788, he taught courses at the École Royale d'Artillerie under examiner Pierre-Simon Laplace. The posting in Besançon lasted until 1793 when Lacroix returned to Paris.
It was the best of times and the worst of times: Lavoisier had opened inquiry into "new chemistry", a subject Lacroix studied with Jean Henri Hassenfratz. He also joined the Société Philomatique de Paris, which provided a journal in which to communicate his findings. On the other hand, Paris was in the grip of the Reign of Terror. In 1794 Lacroix became director of the Executive Committee for Public Instruction. In this position he promoted the École Normale and the system of Écoles Centrales. In 1795 he taught at the École Centrale des Quatre-Nations.
The first volume Traité du Calcul Différentiel et du Calcul Intégral was published in 1797. Legendre predicted that it "will make itself conspicuous by the choice of methods, their generality, and the rigor of the demonstrations."[1]: 140 In hindsight Ivor Grattan-Guinness observed:[1]: 183
The Traite is by far the most comprehensive work of its kind for that time. The extent of its circulation is not known and it may not have been very large...But it is as well known as any other treatise of its time, and certainly more worth reading than any other, especially for the emerging generation.
In 1799, he became professor of analysis at École Polytechnique.
Lacroix was the author of at least 17 biographies contributed to the Biographie Universelle compiled by Louis Gabriel Michaud.
In 1809, he was admitted to Faculté des Sciences de Paris.
In 1812, he began teaching at the Collège de France, and was appointed chair of mathematics in 1815.
When a second edition of the Traité du Calcul Différentiel et du Calcul Intégral was published in three volumes in 1810, 1814, and 1819, Lacroix renewed the text:
New material, recording many of the advances made during the new century, were introduced throughout the text, which was rounded off by a long list of "Corrections and additions" and a splendid "Table of contents". In addition, the structure of the work was changed somewhat, especially the third volume on series and differences. But the general impression is still that the main streams and directions of the calculus had been amplified and enriched, rather than changed in any substantial way.[1]: 267
During his career, he produced a number of important textbooks in mathematics. Translations of these books into the English language were used in British universities, and the books remained in circulation for nearly 50 years.[2][3]
In 1812, Babbage set up the Analytical Society for the translation of the Differential and Integral Calculus, and the book was translated into English in 1816 by George Peacock.[4]
He died on 24 May 1843 in Paris.
Lacroix crater on the Moon was named for him.
Publications
• Traité du Calcul Différentiel et du Calcul Intégral, Courcier, Paris, 1797-1800.
• 1797: Premier Tome, link from Internet Archive.
• 1798: Tome Second, link from Internet Archive.
• 1800: Tome 3: Traité des Differences et des Séries, link from Internet Archive.
• 1802: Traité Élémentaire du Calcul Différentiel et du Calcul Intégral, link from HathiTrust.
• Revised and re-published several times; the 9th edition appeared in 1881.
• 1804: Complément des Élémens d'algèbre, à l'usage de l'École Centrale des Quatre-Nations, Courcier, Paris, 5th edition (1825), link from Internet Archive.
• 1814: Eléments de Géométrie à l'usage de l'École Centrale des Quatre-Nations, 10th edition, link from Hathi Trust.
• 1816: Traité élémentaire de calcul des probabilités, Paris, Mallet-Bachelier, link from HathiTrust.
• 1816: Essais sur l'Enseignement en Général, et sur celui des Mathématiques en Particulier, link from Internet Archive.
References
1. Ivor Grattan-Guinness (1990). Convolutions in French Mathematics, 1800–1840, §2.5.4 "Lacroix: scientific educator", pp. 113–114, Science Networks: Historical Studies v. 2, Birkhäuser ISBN 3-7643-2240-3
2. For example, John Farrar's translation Elements of Algebra, 3rd edition, 1831 Boston
3. S. F. Lacroix (1861). Traité du Calcul Différentiel et du Calcul Intégral, Premier Tome, 6th edition
4. Charles Babbage - MacTutor History of Mathematics Archived 2007-09-27 at the Wayback Machine
Further reading
• João Caramalho Domingues (2008) Lacroix and the Calculus, Science Networks: Historical Studies, v. 35, Birkhäuser ISBN 978-3-7643-8638-2.
External links
• O'Connor, John J.; Robertson, Edmund F., "Sylvestre François Lacroix", MacTutor History of Mathematics Archive, University of St Andrews
Sylvia Chin-Pi Lu
Sylvia Chin-Pi Lu (1928–2014) was a Taiwanese-American mathematician specializing in commutative algebra who was an invited speaker at the 1990 International Congress of Mathematicians in Kyoto.[1] Less than 5% of ICM speakers in algebra and number theory have been women, placing Lu in a rarefied group in this "hall of fame for mathematics".[2] Lu's most highly cited papers are on the properties of prime submodules.[3]
Sylvia Chin-Pi Lu
Born: 1928
Died: 2014 (aged 85–86)
Alma mater: PhD, The Pennsylvania State University, 1963
Scientific career
Fields: Commutative Algebra
Institutions: University of Colorado Denver
Doctoral advisor: Raymond Ayoub
Education
Lu completed her dissertation at The Pennsylvania State University in 1963, under the direction of Raymond Ayoub.[4]
References
1. "Sylvia Chin-Pi Lu (1928-2014)". American Mathematical Society. Retrieved June 28, 2019.
2. Mihaljević, Helena; Roy, Marie-Françoise (2019). "A data analysis of women's trails among ICM speakers". arXiv:1903.02543 [math.HO].
3. "Lu, Chin-Pi". MathSciNet: Mathematical Reviews. American Mathematical Society. Retrieved June 28, 2019.
4. Lu, Chin-Pi (1963). The Ring of Formal Power Series in a Countably Infinite Number of Indeterminates (dissertation). The Pennsylvania State University.
Sylvia Serfaty
Sylvia Serfaty (born 1975)[1] is a French mathematician working in the United States. She won the 2004 EMS Prize for her contributions to the Ginzburg–Landau theory, the Henri Poincaré Prize in 2012, and the Mergier–Bourdeix Prize of the French Academy of Sciences in 2013.[2]
Sylvia Serfaty
Serfaty at the ICM 2018
Born: 1975 (age 47–48)
Nationality: French
Alma mater: Paris-Sud 11 University
Awards
• EMS Prize (2004)
• Henri Poincaré Prize (2012)
• Mergier–Bourdeix Prize (2013)
Scientific career
Fields: Mathematics
Institutions: New York University
Doctoral advisor: Fabrice Bethuel
Early life and education
Serfaty was born and raised in Paris.[3] She has been interested in mathematics since high school.
Serfaty earned her doctorate from Paris-Sud 11 University in 1999, under the supervision of Fabrice Bethuel.[4] She then held a teaching position (agrégé préparateur) at the École Normale Supérieure de Cachan. Since 2007 she has held a professorship at the Courant Institute of Mathematical Sciences of NYU.
Research
Serfaty's research is part of the field of partial differential equations and mathematical physics. Her work particularly focuses on the Ginzburg-Landau model of superconductivity and quantum vortexes in the Ginzburg–Landau theory. She has also worked on the statistical mechanics of Coulomb-type systems.
In 2007 she published a book on the Ginzburg-Landau theory with Étienne Sandier, Vortices in the Magnetic Ginzburg-Landau Model.[5] She was an invited plenary speaker at the 2018 International Congress of Mathematicians.[6]
She was elected to the American Academy of Arts and Sciences in 2019.[7]
She is one of the editors-in-chief of the scientific journal Probability and Mathematical Physics.[8]
Awards
• European Mathematical Society Prize in 2004[3]
• Henri Poincaré Prize in 2012[3]
References
1. Birth year from ISNI authority control file, retrieved 2018-12-02.
2. Sylvia Serfaty de nouveau couronnée avec le grand prix Mergier-Bourdeix de l'Académie des Sciences (in French), UPMC, July 12, 2013, archived from the original on 2013-09-01, retrieved 2017-04-04
3. "Sylvia Serfaty on Mathematical Truth and Frustration". Quanta Magazine. Retrieved 2020-05-02.
4. Sylvia Serfaty at the Mathematics Genealogy Project
5. Roberts, Siobhan (February 21, 2017), "In Mathematics, 'You Cannot Be Lied To': For Sylvia Serfaty, mathematics is all about truth and beauty and building scientific and human connections", Quanta Magazine.
6. "Plenary lectures", ICM 2018, archived from the original on 2018-12-29, retrieved 2018-08-08
7. "New 2019 Academy Members Announced". American Academy of Arts and Sciences. April 17, 2019.
8. "Probability and Mathematical Physics". msp.org. Retrieved 2020-05-02.
External links
• Website at NYU
• EMS Prize Laudatio
• Author Page on MathSciNet
• "Systems of points with Coulomb interactions – Sylvia Serfaty – ICM2018". YouTube. 19 September 2018.
Sylvia Wiegand
Sylvia Margaret Wiegand (born March 8, 1945) is an American mathematician.[1]
Sylvia Margaret Wiegand
Born: March 8, 1945, Cape Town, South Africa
Alma mater: University of Wisconsin-Madison
Scientific career
Fields: Commutative algebra, math education, history of math
Thesis: Galois Theory of Essential Expansions of Modules and Vanishing Tensor Powers (1972)
Doctoral advisor: Lawrence S. Levy
Doctoral students: Christina Eubanks-Turner
Early life and education
Wiegand was born in Cape Town, South Africa. She is the daughter of mathematician Laurence Chisholm Young and through him the grand-daughter of mathematicians Grace Chisholm Young and William Henry Young.[2] Her family moved to Wisconsin in 1949, and she graduated from Bryn Mawr College in 1966 after three years of study.[1] In 1971 Wiegand earned her Ph.D. from the University of Wisconsin-Madison.[3] Her dissertation was titled Galois Theory of Essential Expansions of Modules and Vanishing Tensor Powers.[3]
Career
In 1987, she was named full professor at the University of Nebraska; at the time Wiegand was the only female professor in the department.[1] In 1988 Sylvia headed a search committee for two new jobs in the math department, for which two women were hired, although one stayed only a year and another left after four years.[4] In 1996 Sylvia and her husband, Roger Wiegand, established a fellowship for graduate student research at the university in honor of Sylvia's grandparents.[5]
From 1997 until 2000, Wiegand was president of the Association for Women in Mathematics.[6][7]
Wiegand has been an editor for Communications in Algebra and the Rocky Mountain Journal of Mathematics.[2] She was on the board of directors of the Canadian Mathematical Society from 1997 to 2000.[2]
Wiegand was an American Mathematical Society (AMS) Council member at large.[8]
Awards and recognition
Wiegand is featured in the book Notable Women in Mathematics: A Biographical Dictionary, edited by Charlene Morrow and Teri Perl, published in 1998.[1] For her work in improving the status of women in mathematics, she was awarded the University of Nebraska's Outstanding Contribution to the Status of Women Award in 2000.[4] In May 2005, the University of Nebraska hosted the Nebraska Commutative Algebra Conference: WiegandFest "in celebration of the many important contributions of Sylvia and her husband Roger Wiegand."[1]
In 2012 she became a fellow of the AMS.[9]
In 2017, she was selected as a fellow of the Association for Women in Mathematics in the inaugural class.[10]
References
1. "Sylvia Wiegand". Agnesscott.edu. 1945-03-08. Retrieved 2012-10-31.
2. "Sylvia Wiegand". www.agnesscott.edu. Retrieved 2018-10-06.
3. Sylvia Wiegand at the Mathematics Genealogy Project
4. "OCWW | Vol 32, Issue 3-4 | Features". Aacu.org. Archived from the original on 2003-11-10. Retrieved 2012-10-31.
5. PO BOX 880130 (2010-11-18). "UNL | Arts & Sciences | Math | Department | Awards | Graduate Student Awards". Math.unl.edu. Retrieved 2012-10-31.
6. "Sylvia Wiegand's Homepage". Math.unl.edu. Retrieved 2012-10-31.
7. "AWM Profile" (PDF). Ams.org. Retrieved 2012-10-31.
8. "AMS Committees". American Mathematical Society. Retrieved 2023-03-27.
9. List of Fellows of the American Mathematical Society, retrieved 2013-09-01.
10. "2018 Inaugural Class of AWM Fellows". Association for Women in Mathematics. Retrieved 9 January 2021.
External links
• Sylvia Wiegand's homepage
• Sylvia Wiegand's Author profile on MathSciNet
Presidents of the Association for Women in Mathematics
1971–1990
• Mary W. Gray (1971–1973)
• Alice T. Schafer (1973–1975)
• Lenore Blum (1975–1979)
• Judith Roitman (1979–1981)
• Bhama Srinivasan (1981–1983)
• Linda Preiss Rothschild (1983–1985)
• Linda Keen (1985–1987)
• Rhonda Hughes (1987–1989)
• Jill P. Mesirov (1989–1991)
1991–2010
• Carol S. Wood (1991–1993)
• Cora Sadosky (1993–1995)
• Chuu-Lian Terng (1995–1997)
• Sylvia M. Wiegand (1997–1999)
• Jean E. Taylor (1999–2001)
• Suzanne Lenhart (2001–2003)
• Carolyn S. Gordon (2003–2005)
• Barbara Keyfitz (2005–2007)
• Cathy Kessel (2007–2009)
• Georgia Benkart (2009–2011)
2011–present
• Jill Pipher (2011–2013)
• Ruth Charney (2013–2015)
• Kristin Lauter (2015–2017)
• Ami Radunskaya (2017–2019)
• Ruth Haas (2019–2021)
• Kathryn Leonard (2021–2023)
• Talitha Washington (2023–2025)
Sylvia de Neymet
Sylvia de Neymet Urbina (aka Silvia de Neymet de Christ, 1939 – 13 January 2003) was a Mexican mathematician, the first woman to earn a doctorate in mathematics in Mexico, and the first female professor in the faculty of sciences of the National Autonomous University of Mexico (UNAM).[1][2]
Early life and education
De Neymet was born in Mexico City in 1939.[3] Her mother had been orphaned in the Mexican Revolution of 1910, studied art at La Esmeralda, and became a teacher; she encouraged De Neymet in her studies. Her father's mother was also a teacher, and her father was a civil engineer. In 1955 she began studying at the Universidad Femenina de México, a women's school founded by Adela Formoso de Obregón Santacilia, and in her fourth year there she was hired as a mathematics teacher herself, despite the fact that many of her students would be older than her.[2]
After two years of mathematical study in Paris, at the Institut Henri Poincaré, from 1959 to 1961,[2][3] de Neymet returned to Mexico and was given a degree in mathematics in 1961, with a thesis on differential equations supervised by Solomon Lefschetz, who by this time was regularly wintering at UNAM.[1][2][4] At around the same time, CINVESTAV (the Center for Research and Advanced Studies of the National Polytechnic Institute) was founded; de Neymet became one of the first students there, and the first doctoral student of Samuel Gitler Hammer, one of the founders of CINVESTAV.[1][4] She married Michael Christ, a French physician, in 1962,[1][2] and while still finishing her doctorate became a teacher at the Escuela Superior de Física y Matemáticas of the Instituto Politécnico Nacional, founded four years earlier.[2][4] She completed her doctorate under Gitler's supervision[5] in 1966, becoming one of the first seven people to earn a mathematics doctorate in Mexico, and the first Mexican woman to do so.[1][3]
Career and later life
After completing her doctorate, she joined the faculty of sciences of UNAM, one of only three full-time mathematicians there (with Víctor Neumann-Lara and Arturo Fregoso Urbina). After continuing her career at UNAM for many years,[4] she died on 13 January 2003.[2][3]
Her book Introducción a los grupos topológicos de transformaciones [Introduction to topological transformation groups] was published posthumously in 2005.[6]
References
1. "Sylvia de Neymet (1939–2003)", Matemáticos en México (in Spanish), National Autonomous University of Mexico, retrieved 2021-09-27
2. de la Paz Álvarez Scherer, Ma. (12 March 2019), "Tejiendo destellos: Imágenes de la vida de Sylvia de Neymet", Mujeres con Ciencia (in Spanish), retrieved 2021-09-27
3. Fallece Silvia de Neymet Urbina (in Spanish), Museo de la Mujer, retrieved 2021-09-27
4. Gómez Wulschner, Claudia (2010), "Ecos del pasado . . . luces del presente: Nuestras primeras matemáticas" (PDF), Miscelánea Matemática (in Spanish), 51: 41–57, archived from the original (PDF) on 2019-08-28; see in particular the biography of de Neymet on pp. 48–49
5. Sylvia de Neymet at the Mathematics Genealogy Project
6. Review of Introducción a los grupos topológicos de transformaciones: Xabier Domínguez, Zbl 1231.54002
Sylvie Benzoni
Sylvie Benzoni-Gavage (born 1967)[1] is a French mathematician known for her research in partial differential equations, fluid dynamics, traffic flow, shock waves, and phase transitions. In 2017 she was named as the director of the Institut Henri Poincaré.[2][3]
Education and career
Benzoni was a student at the École normale supérieure de Saint-Cloud.[3] She completed her Ph.D. in 1991 at the Claude Bernard University Lyon 1; her dissertation, supervised by Denis Serre, was Analyse numérique des modèles hydrodynamiques d'écoulements diphasiques instationnaires dans les réseaux de production pétrolière.[4]
She became a researcher at CNRS in 1992, and in 2003 became a professor at Claude Bernard University. After five years as assistant director at the Camille Jordan Institute in Lyon, she became director there in 2016.[3]
Contributions
With her advisor Denis Serre, Benzoni is the author of Multi-dimensional Hyperbolic Partial Differential Equations: First-Order Systems and Applications (Oxford University Press, 2007)[5] and the editor of Hyperbolic Problems: Theory, Numerics, Applications (Springer, 2008). She is the author of a French textbook on differential calculus and differential equations, Calcul différentiel et équations différentielles: Cours et exercices corrigés (Dunod, 2010; 2nd ed., 2014).
Benzoni is also active in communicating mathematics to the public, through her work with the European Mathematical Society, and is a supporter of open access publishing of research.[3]
References
1. Birth year from idRef authority control file, accessed 2018-11-26.
2. Sylvie Benzoni named Director of the Institut Henri Poincaré, European Mathematical Society, December 16, 2017
3. En ce moment à l'IHP (in French), retrieved December 16, 2017
4. Sylvie Benzoni at the Mathematics Genealogy Project
5. Review of Multi-dimensional Hyperbolic Partial Differential Equations: Kenneth H. Karlsen (2008), Mathematical Reviews, MR2284507
Sylvie Roelly
Sylvie Roelly (born 1960) is a French mathematician specializing in probability theory, including the study of particle systems, Gibbs measure, diffusion, and branching processes. She is a professor of mathematics in the Institute of Mathematics at the University of Potsdam in Germany.
Education and career
Roelly was born in 1960 in Paris,[1] and studied mathematics from 1979 to 1984 at the École normale supérieure de jeunes filles in Paris.[1] She earned a diploma in mathematics in 1980 through the Paris Diderot University, and an agrégation in 1982.[2] She completed her Ph.D. in 1984 through Pierre and Marie Curie University, with the dissertation Processus de diffusion à valeurs mesures multiplicatifs supervised by Nicole El Karoui.[2][3] She also earned her habilitation in 1991 through Pierre and Marie Curie University.[2]
After a year of lecturing at the École normale supérieure, she became a researcher for the French National Centre for Scientific Research (CNRS) in 1985. She came to Germany as a Humboldt Fellow at Bielefeld University from 1990 to 1994, and was a researcher at the Weierstrass Institute in Berlin from 2001 to 2003, before taking her professorship at Potsdam in 2003.[2]
At Potsdam, she was head of the Institute of Mathematics from 2011 to 2015, and vice-dean of the Faculty of Science from 2016 to 2019.[1] Along with her research interest in probability, she has organized in Potsdam several events concerning the history of Jewish mathematicians.[4][5]
Recognition
In 2007, Roelly and Michèle Thieullen won the Itô Prize of the Bernoulli Society for their work on Brownian diffusion.[6] She was named mathematician of the month for April 2015 by the German Mathematical Society.[4][5]
References
1. "Prof. Dr. Sylvie Roelly", Invited speakers for Days of Ukraine in Berlin and Brandenburg, September 2021, retrieved 2021-11-09
2. Curriculum vitae (PDF), 2018, retrieved 2021-11-09
3. Sylvie Roelly at the Mathematics Genealogy Project; note that this lists the Ph.D. as being through Paris Diderot University, but her curriculum vitae lists Pierre and Marie Curie University, consistent with other students of El Karoui.
4. Sylvie Roelly (in German), German Mathematical Society, retrieved 2021-11-09
5. Eröffnung der Ausstellung zu Jüdischen Mathematikern in Potsdam [Opening of the exhibition on Jewish mathematicians in Potsdam] (in German), German Mathematical Society, April 2015, retrieved 2021-11-09
6. Itô Prize – Previous Prize Recipients, Bernoulli Society, retrieved 2021-11-09
External links
• Home page
• Sylvie Roelly publications indexed by Google Scholar
Symbolic Cholesky decomposition
In the mathematical subfield of numerical analysis the symbolic Cholesky decomposition is an algorithm used to determine the non-zero pattern for the $L$ factors of a symmetric sparse matrix when applying the Cholesky decomposition or variants.
Algorithm
Let $A=(a_{ij})\in \mathbb {K} ^{n\times n}$ be a sparse symmetric positive definite matrix with elements from a field $\mathbb {K} $, which we wish to factorize as $A=LL^{T}\,$.
In order to implement an efficient sparse factorization it has been found necessary to determine the non-zero structure of the factors before doing any numerical work. To write the algorithm down we use the following notation:
• Let ${\mathcal {A}}_{i}$ and ${\mathcal {L}}_{j}$ be sets representing the non-zero patterns of columns i and j (below the diagonal only, and including diagonal elements) of matrices A and L respectively.
• Take $\min {\mathcal {L}}_{j}$ to mean the smallest element of ${\mathcal {L}}_{j}$.
• Use a parent function $\pi (i)\,\!$ to define the elimination tree within the matrix.
The following algorithm gives an efficient symbolic factorization of A :
${\begin{aligned}&\pi (i):=0~{\mbox{for all}}~i\\&{\mbox{For}}~i:=1~{\mbox{to}}~n\\&\qquad {\mathcal {L}}_{i}:={\mathcal {A}}_{i}\\&\qquad {\mbox{For all}}~j~{\mbox{such that}}~\pi (j)=i\\&\qquad \qquad {\mathcal {L}}_{i}:=({\mathcal {L}}_{i}\cup {\mathcal {L}}_{j})\setminus \{j\}\\&\qquad \pi (i):=\min({\mathcal {L}}_{i}\setminus \{i\})\end{aligned}}$
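The following Python sketch (an illustration, not part of the article; the input format and helper names are our own) carries out the same column-by-column pattern computation, representing each $\mathcal{A}_i$ and $\mathcal{L}_i$ as a set of row indices at or below the diagonal.

```python
def symbolic_cholesky(A_pattern):
    """A_pattern[i] = set of row indices of non-zeros in column i of A (diagonal included)."""
    n = len(A_pattern)
    L_pattern = [set(col) for col in A_pattern]
    parent = [None] * n                           # elimination-tree parent, pi(i)
    children = [[] for _ in range(n)]             # all j with pi(j) == i
    for i in range(n):
        for j in children[i]:
            L_pattern[i] |= L_pattern[j] - {j}    # L_i := (L_i u L_j) \ {j}
        below_diag = L_pattern[i] - {i}
        if below_diag:
            parent[i] = min(below_diag)           # pi(i) := min(L_i \ {i})
            children[parent[i]].append(i)
    return L_pattern, parent

# Small example: the pattern of column 1 gains a fill-in entry at row 3.
A = [{0, 1, 3}, {1, 2}, {2, 3}, {3}]
L, parent = symbolic_cholesky(A)
print(L)        # [{0, 1, 3}, {1, 2, 3}, {2, 3}, {3}]
print(parent)   # [1, 2, 3, None]
```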
Symbolic data analysis
Symbolic data analysis (SDA) is an extension of standard data analysis where symbolic data tables are used as input and symbolic objects are produced as output. The data units are called symbolic since they are more complex than standard ones: they not only contain values or categories, but also include internal variation and structure. SDA is based on four spaces: the space of individuals, the space of concepts, the space of descriptions, and the space of symbolic objects. The space of descriptions models individuals, while the space of symbolic objects models concepts.[1][2]
References
1. Diday, Edwin; Esposito, Floriana (December 2003). "An introduction to symbolic data analysis and the SODAS software". Intelligent Data Analysis. 7 (6): 583–601. doi:10.3233/IDA-2003-7606.
2. Lynne Billard; Edwin Diday (14 May 2012). Symbolic Data Analysis: Conceptual Statistics and Data Mining. John Wiley & Sons. ISBN 978-0-470-09017-6.
Further reading
• Diday, Edwin; Noirhomme-Fraiture, Monique (2008). Symbolic Data Analysis and the SODAS Software. Wiley–Blackwell. ISBN 9780470018835.
External links
• Symbolic Data Analysis: Conceptual Statistics and Data Mining
• An introduction to symbolic data analysis and its Application to the Sodas Project by Edwin Diday
• R2S: An R package to transform relational data into symbolic data
Symbolic dynamics
In mathematics, symbolic dynamics is the practice of modeling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator. Formally, a Markov partition is used to provide a finite cover for the smooth system; each set of the cover is associated with a single symbol, and the sequences of symbols result as a trajectory of the system moves from one covering set to another.
History
The idea goes back to Jacques Hadamard's 1898 paper on the geodesics on surfaces of negative curvature.[1] It was applied by Marston Morse in 1921 to the construction of a nonperiodic recurrent geodesic. Related work was done by Emil Artin in 1924 (for the system now called Artin billiard), Pekka Myrberg, Paul Koebe, Jakob Nielsen, G. A. Hedlund.
The first formal treatment was developed by Morse and Hedlund in their 1938 paper.[2] George Birkhoff, Norman Levinson and the pair Mary Cartwright and J. E. Littlewood have applied similar methods to qualitative analysis of nonautonomous second order differential equations.
Claude Shannon used symbolic sequences and shifts of finite type in his 1948 paper A mathematical theory of communication that gave birth to information theory.
During the late 1960s the method of symbolic dynamics was developed to hyperbolic toral automorphisms by Roy Adler and Benjamin Weiss,[3] and to Anosov diffeomorphisms by Yakov Sinai who used the symbolic model to construct Gibbs measures.[4] In the early 1970s the theory was extended to Anosov flows by Marina Ratner, and to Axiom A diffeomorphisms and flows by Rufus Bowen.
A spectacular application of the methods of symbolic dynamics is Sharkovskii's theorem about periodic orbits of a continuous map of an interval into itself (1964).
Examples
Concepts such as heteroclinic orbits and homoclinic orbits have a particularly simple representation in symbolic dynamics.
Itinerary
The itinerary of a point with respect to the partition is the sequence of symbols recording which element of the partition each iterate of the point lies in. It describes the dynamics of the point.[5]
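As a small illustration (not from the article), the doubling map $x\mapsto 2x{\bmod {1}}$ with the two-set partition $[0,1/2)\mapsto 0$ and $[1/2,1)\mapsto 1$ assigns to each point a binary itinerary; the sketch below uses exact rational arithmetic to avoid floating-point drift.

```python
from fractions import Fraction

def itinerary(x, steps):
    symbols = []
    for _ in range(steps):
        symbols.append('0' if x < Fraction(1, 2) else '1')
        x = (2 * x) % 1                       # one step of the doubling map
    return ''.join(symbols)

print(itinerary(Fraction(1, 3), 8))   # '01010101'  -- a period-2 orbit
print(itinerary(Fraction(3, 7), 9))   # '011011011' -- a period-3 orbit
```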
Applications
Symbolic dynamics originated as a method to study general dynamical systems; now its techniques and ideas have found significant applications in data storage and transmission, linear algebra, the motions of the planets and many other areas. The distinct feature in symbolic dynamics is that time is measured in discrete intervals. So at each time interval the system is in a particular state. Each state is associated with a symbol and the evolution of the system is described by an infinite sequence of symbols—represented effectively as strings. If the system states are not inherently discrete, then the state vector must be discretized, so as to get a coarse-grained description of the system.
See also
• Measure-preserving dynamical system
• Combinatorics and dynamical systems
• Shift space
• Shift of finite type
• Complex dynamics
• Arithmetic dynamics
References
1. Hadamard, J. (1898). "Les surfaces à courbures opposées et leurs lignes géodésiques" (PDF). J. Math. Pures Appl. 5 (4): 27–73.
2. Morse, M.; Hedlund, G. A. (1938). "Symbolic Dynamics". American Journal of Mathematics. 60 (4): 815–866. doi:10.2307/2371264. JSTOR 2371264.
3. Adler, R.; Weiss, B. (1967). "Entropy, a complete metric invariant for automorphisms of the torus". PNAS. 57 (6): 1573–1576. Bibcode:1967PNAS...57.1573A. doi:10.1073/pnas.57.6.1573. JSTOR 57985. PMC 224513. PMID 16591564.
4. Sinai, Y. (1968). "Construction of Markov partitionings". Funkcional. Anal. I Priložen. 2 (3): 70–80.
5. Mathematics of Complexity and Dynamical Systems by Robert A. Meyers. Springer Science & Business Media, 2011, ISBN 1461418054, 9781461418054
Further reading
• Hao, Bailin (1989). Elementary Symbolic Dynamics and Chaos in Dissipative Systems. World Scientific. ISBN 9971-5-0682-3. Archived from the original on 2009-12-05. Retrieved 2009-12-02.
• Bruce Kitchens, Symbolic dynamics. One-sided, two-sided and countable state Markov shifts. Universitext, Springer-Verlag, Berlin, 1998. x+252 pp. ISBN 3-540-62738-3 MR1484730
• Lind, Douglas; Marcus, Brian (1995). An introduction to symbolic dynamics and coding. Cambridge University Press. ISBN 0-521-55124-2. MR 1369092. Zbl 1106.37301.
• G. A. Hedlund, Endomorphisms and automorphisms of the shift dynamical system. Math. Systems Theory, Vol. 3, No. 4 (1969) 320–3751
• Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
• "Symbolic dynamics". Scholarpedia.
External links
• ChaosBook.org Chapter "Transition graphs"
• A simulation of the three-bumper billiard system and its symbolic dynamics, from Chaos V: Duhem's Bull
Symbolic integration
In calculus, symbolic integration is the problem of finding a formula for the antiderivative, or indefinite integral, of a given function f(x), i.e. to find a differentiable function F(x) such that
${\frac {dF}{dx}}=f(x).$
This is also denoted
$F(x)=\int f(x)\,dx.$
Discussion
The term symbolic is used to distinguish this problem from that of numerical integration, where the value of F is sought at a particular input or set of inputs, rather than a general formula for F.
Both problems were held to be of practical and theoretical importance long before the time of digital computers, but they are now generally considered the domain of computer science, as computers are most often used currently to tackle individual instances.
Finding the derivative of an expression is a straightforward process for which it is easy to construct an algorithm. The reverse question of finding the integral is much more difficult. Many expressions which are relatively simple do not have integrals that can be expressed in closed form. See antiderivative and nonelementary integral for more details.
A procedure called the Risch algorithm exists which is capable of determining whether the integral of an elementary function (a function built from a finite number of exponentials, logarithms, constants, and nth roots through composition and combinations using the four elementary operations) is elementary and returning it if it is. In its original form, the Risch algorithm was not suitable for direct implementation, and its complete implementation took a long time. It was first implemented in Reduce in the case of purely transcendental functions; the case of purely algebraic functions was solved and implemented in Reduce by James H. Davenport; the general case was solved by Manuel Bronstein, who implemented almost all of it in Axiom, though to date there is no implementation of the Risch algorithm which can deal with all of the special cases and branches in it.[1][2]
However, the Risch algorithm applies only to indefinite integrals, while most of the integrals of interest to physicists, theoretical chemists, and engineers are definite integrals often related to Laplace transforms, Fourier transforms, and Mellin transforms. Lacking a general algorithm, the developers of computer algebra systems have implemented heuristics based on pattern-matching and the exploitation of special functions, in particular the incomplete gamma function.[3] Although this approach is heuristic rather than algorithmic, it is nonetheless an effective method for solving many definite integrals encountered by practical engineering applications. Earlier systems such as Macsyma had a few definite integrals related to special functions within a look-up table. However this particular method, involving differentiation of special functions with respect to its parameters, variable transformation, pattern matching and other manipulations, was pioneered by developers of the Maple[4] system and then later emulated by Mathematica, Axiom, MuPAD and other systems.
Recent advances
The main problem in the classical approach to symbolic integration is that, if a function is represented in closed form, then, in general, its antiderivative does not have a similar representation. In other words, the class of functions that can be represented in closed form is not closed under antiderivation.
Holonomic functions are a large class of functions which is closed under antiderivation and for which integration and many other operations of calculus can be implemented algorithmically on computers.
More precisely, a holonomic function is a solution of a homogeneous linear differential equation with polynomial coefficients. Holonomic functions are closed under addition and multiplication, derivation, and antiderivation. They include algebraic functions, exponential function, logarithm, sine, cosine, inverse trigonometric functions, inverse hyperbolic functions. They include also most common special functions such as Airy function, error function, Bessel functions and all hypergeometric functions.
A fundamental property of holonomic functions is that the coefficients of their Taylor series at any point satisfy a linear recurrence relation with polynomial coefficients, and that this recurrence relation may be computed from the differential equation defining the function. Conversely given such a recurrence relation between the coefficients of a power series, this power series defines a holonomic function whose differential equation may be computed algorithmically. This recurrence relation allows a fast computation of the Taylor series, and thus of the value of the function at any point, with an arbitrary small certified error.
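As a toy illustration of this principle (our own example, not from the article), the differential equation $y'=y$ satisfied by the exponential function yields the coefficient recurrence $(n+1)c_{n+1}=c_{n}$, from which the Taylor coefficients can be generated directly, without any symbolic integration.

```python
from fractions import Fraction

def exp_taylor_coeffs(order):
    c = [Fraction(1)]                  # c_0 = y(0) = 1, the initial condition
    for n in range(order):
        c.append(c[n] / (n + 1))       # (n + 1) * c_{n+1} = c_n
    return c

print([str(c_n) for c_n in exp_taylor_coeffs(5)])   # ['1', '1', '1/2', '1/6', '1/24', '1/120']
```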
These closure and recurrence properties make most operations of calculus algorithmic when restricted to holonomic functions represented by their differential equation and initial conditions. This includes the computation of antiderivatives and definite integrals (which amounts to evaluating the antiderivative at the endpoints of the interval of integration). It also includes the computation of the asymptotic behavior of the function at infinity, and thus of definite integrals on unbounded intervals.
All these operations are implemented in the algolib library for Maple.[5] See also the Dynamic Dictionary of Mathematical functions.[6]
Example
For example:
$\int x^{2}\,dx={\frac {x^{3}}{3}}+C$
is a symbolic result for an indefinite integral (here C is a constant of integration),
$\int _{-1}^{1}x^{2}\,dx=\left[{\frac {x^{3}}{3}}\right]_{-1}^{1}={\frac {1^{3}}{3}}-{\frac {(-1)^{3}}{3}}={\frac {2}{3}}$
is a symbolic result for a definite integral, and
$\int _{-1}^{1}x^{2}\,dx\approx 0.6667$
is a numerical result for the same definite integral.
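For instance, the three results above can be reproduced with a computer algebra system; the following is an illustrative sketch assuming the SymPy library (the article itself does not prescribe any particular system).

```python
import sympy as sp

x = sp.Symbol('x')

print(sp.integrate(x**2, x))                  # x**3/3  (the constant of integration is omitted)
print(sp.integrate(x**2, (x, -1, 1)))         # 2/3
print(sp.N(sp.integrate(x**2, (x, -1, 1))))   # 0.666666666666667
```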
See also
• Definite integral – Operation in mathematical calculus
• Elementary function – Mathematical function
• Indefinite integral – Concept in calculus
• Lists of integrals
• Operational calculus – Technique to solve differential equations
• Risch algorithm – Method for evaluating indefinite integrals
• Symbolic computation – Scientific area at the interface between computer science and mathematics
• Meijer G-function – Generalization of the hypergeometric function
• Fox H-function – Generalization of the Meijer G-function and the Fox–Wright function
References
1. Bronstein, Manuel (September 5, 2003). "Manuel Bronstein on Axiom's Integration Capabilities". groups.google.com. Retrieved 2023-02-10.
2. "integration - Does there exist a complete implementation of the Risch algorithm?". MathOverflow. Oct 15, 2020. Retrieved 2023-02-10.
3. K.O Geddes, M.L. Glasser, R.A. Moore and T.C. Scott, Evaluation of Classes of Definite Integrals Involving Elementary Functions via Differentiation of Special Functions, AAECC (Applicable Algebra in Engineering, Communication and Computing), vol. 1, (1990), pp. 149–165,
4. K.O. Geddes and T.C. Scott, Recipes for Classes of Definite Integrals Involving Exponentials and Logarithms, Proceedings of the 1989 Computers and Mathematics conference, (held at MIT June 12, 1989), edited by E. Kaltofen and S.M. Watt, Springer-Verlag, New York, (1989), pp. 192–201.
5. http://algo.inria.fr/libraries/ algolib
6. http://ddmf.msr-inria.inria.fr Dynamic Dictionary of Mathematical functions
• Bronstein, Manuel (1997), Symbolic Integration 1 (transcendental functions) (2 ed.), Springer-Verlag, ISBN 3-540-60521-5
• Moses, Joel (March 23–25, 1971), "Symbolic integration: the stormy decade", Proceedings of the Second ACM Symposium on Symbolic and Algebraic Manipulation, Los Angeles, California: 427–440
External links
• Bhatt, Bhuvanesh. "Risch Algorithm". MathWorld.
• Wolfram Integrator — Free online symbolic integration with Mathematica
Symbolic method
In mathematics, the symbolic method in invariant theory is an algorithm developed by Arthur Cayley,[1] Siegfried Heinrich Aronhold,[2] Alfred Clebsch,[3] and Paul Gordan[4] in the 19th century for computing invariants of algebraic forms. It is based on treating the form as if it were a power of a degree one form, which corresponds to embedding a symmetric power of a vector space into the symmetric elements of a tensor product of copies of it.
Symbolic notation
The symbolic method uses a compact, but rather confusing and mysterious notation for invariants, depending on the introduction of new symbols a, b, c, ... (from which the symbolic method gets its name) with apparently contradictory properties.
Example: the discriminant of a binary quadratic form
These symbols can be explained by the following example from Gordan.[5] Suppose that
$\displaystyle f(x)=A_{0}x_{1}^{2}+2A_{1}x_{1}x_{2}+A_{2}x_{2}^{2}$
is a binary quadratic form with an invariant given by the discriminant
$\displaystyle \Delta =A_{0}A_{2}-A_{1}^{2}.$
The symbolic representation of the discriminant is
$\displaystyle 2\Delta =(ab)^{2}$
where a and b are the symbols. The meaning of the expression (ab)2 is as follows. First of all, (ab) is a shorthand form for the determinant of a matrix whose rows are a1, a2 and b1, b2, so
$\displaystyle (ab)=a_{1}b_{2}-a_{2}b_{1}.$
Squaring this we get
$\displaystyle (ab)^{2}=a_{1}^{2}b_{2}^{2}-2a_{1}a_{2}b_{1}b_{2}+a_{2}^{2}b_{1}^{2}.$
Next we pretend that
$\displaystyle f(x)=(a_{1}x_{1}+a_{2}x_{2})^{2}=(b_{1}x_{1}+b_{2}x_{2})^{2}$
so that
$\displaystyle A_{i}=a_{1}^{2-i}a_{2}^{i}=b_{1}^{2-i}b_{2}^{i}$
and we ignore the fact that this does not seem to make sense if f is not a power of a linear form. Substituting these values gives
$\displaystyle (ab)^{2}=A_{2}A_{0}-2A_{1}A_{1}+A_{0}A_{2}=2\Delta .$
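The substitution step can be checked mechanically. The following SymPy sketch (our own illustration; the helper name umbral_evaluate and the monomial-by-monomial encoding of the rule $a_{1}^{2-i}a_{2}^{i}\mapsto A_{i}$ are just one possible implementation) verifies that $(ab)^{2}$ evaluates to $2\Delta $.

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')
A0, A1, A2 = sp.symbols('A0 A1 A2')
A = [A0, A1, A2]

bracket = (a1*b2 - a2*b1)**2               # (ab)^2

def umbral_evaluate(expr):
    """Replace a1^(2-i)*a2^i by A_i and b1^(2-j)*b2^j by A_j in every monomial."""
    result = 0
    for monom, coeff in sp.Poly(sp.expand(expr), a1, a2, b1, b2).terms():
        p, q, r, s = monom                 # exponents of a1, a2, b1, b2
        assert p + q == 2 and r + s == 2   # each symbol pair has total degree 2
        result += coeff * A[q] * A[s]
    return sp.expand(result)

delta = A0*A2 - A1**2
print(umbral_evaluate(bracket))                         # 2*A0*A2 - 2*A1**2
print(sp.simplify(umbral_evaluate(bracket) - 2*delta))  # 0
```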
Higher degrees
More generally if
$\displaystyle f(x)=A_{0}x_{1}^{n}+{\binom {n}{1}}A_{1}x_{1}^{n-1}x_{2}+\cdots +A_{n}x_{2}^{n}$
is a binary form of higher degree, then one introduces new variables a1, a2, b1, b2, c1, c2, with the properties
$f(x)=(a_{1}x_{1}+a_{2}x_{2})^{n}=(b_{1}x_{1}+b_{2}x_{2})^{n}=(c_{1}x_{1}+c_{2}x_{2})^{n}=\cdots .$
What this means is that the following two vector spaces are naturally isomorphic:
• The vector space of homogeneous polynomials in A0,...An of degree m
• The vector space of polynomials in 2m variables a1, a2, b1, b2, c1, c2, ... that have degree n in each of the m pairs of variables (a1, a2), (b1, b2), (c1, c2), ... and are symmetric under permutations of the m symbols a, b, ....,
The isomorphism is given by mapping $a_{1}^{n-j}a_{2}^{j}$, $b_{1}^{n-j}b_{2}^{j}$, ... to $A_{j}$. This mapping does not preserve products of polynomials.
More variables
The extension to a form f in more than two variables x1, x2, x3,... is similar: one introduces symbols a1, a2, a3 and so on with the properties
$f(x)=(a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}+\cdots )^{n}=(b_{1}x_{1}+b_{2}x_{2}+b_{3}x_{3}+\cdots )^{n}=(c_{1}x_{1}+c_{2}x_{2}+c_{3}x_{3}+\cdots )^{n}=\cdots .$
Symmetric products
The rather mysterious formalism of the symbolic method corresponds to embedding a symmetric product Sn(V) of a vector space V into a tensor product of n copies of V, as the elements preserved by the action of the symmetric group. In fact this is done twice, because the invariants of degree n of a quantic of degree m are the invariant elements of SnSm(V), which gets embedded into a tensor product of mn copies of V, as the elements invariant under a wreath product of the two symmetric groups. The brackets of the symbolic method are really invariant linear forms on this tensor product, which give invariants of SnSm(V) by restriction.
See also
• Umbral calculus
References
• Gordan, Paul (1987) [1887]. Kerschensteiner, Georg (ed.). Vorlesungen über Invariantentheorie (2nd ed.). New York: AMS Chelsea Publishing. ISBN 9780828403283. MR 0917266.
Footnotes
1. Cayley, Arthur (1846). "On linear transformations". Cambridge and Dublin Mathematical Journal: 104–122.
2. Aronhold, Siegfried Heinrich (1858). "Theorie der homogenen Functionen dritten Grades von drei Veränderlichen". Journal für die reine und angewandte Mathematik (in German). 1858 (55): 97–191. doi:10.1515/crll.1858.55.97. ISSN 0075-4102. S2CID 122247157.
3. Clebsch, A. (1861). "Ueber symbolische Darstellung algebraischer Formen". Journal für die Reine und Angewandte Mathematik (in German). 1861 (59): 1–62. doi:10.1515/crll.1861.59.1. ISSN 0075-4102. S2CID 119389672.
4. Gordan 1887.
5. Gordan 1887, v. 2, p.g. 1-3.
Further reading
• Dieudonné, Jean; Carrell, James B. (1970). "Invariant theory, old and new". Advances in Mathematics. 4: 1–80. doi:10.1016/0001-8708(70)90015-0. pp. 32–7, "Invariants of n-ary forms: the symbolic method. Reprinted as Dieudonné, Jean; Carrell, James B. (1971). Invariant theory, old and new. Academic Press. ISBN 0-12-215540-8.
• Dolgachev, Igor (2003). Lectures on invariant theory. London Mathematical Society Lecture Note Series. Vol. 296. Cambridge University Press. doi:10.1017/CBO9780511615436. ISBN 978-0-521-52548-0. MR 2004511. S2CID 118144995.
• Grace, John Hilton; Young, Alfred (1903), The Algebra of invariants, Cambridge University Press
• Hilbert, David (1993) [1897]. Theory of algebraic invariants. Cambridge University Press. ISBN 9780521444576. MR 1266168.
• Koh, Sebastian S., ed. (2009) [1987]. Invariant Theory. Lecture Notes in Mathematics. Vol. 1278. Springer. ISBN 9783540183600.
• Kung, Joseph P. S.; Rota, Gian-Carlo (1984). "The invariant theory of binary forms". Bulletin of the American Mathematical Society. New Series. 10 (1): 27–85. doi:10.1090/S0273-0979-1984-15188-7. ISSN 0002-9904. MR 0722856.
| Wikipedia |
Symbolic power of an ideal
In algebra and algebraic geometry, given a commutative Noetherian ring $R$ and an ideal $I$ in it, the n-th symbolic power of $I$ is the ideal
$I^{(n)}=\bigcap _{P\in \operatorname {Ass} (R/I)}\varphi _{P}^{-1}(I^{n}R_{P})$
where $R_{P}$ is the localization of $R$ at $P$, $\varphi _{P}:R\to R_{P}$ is the canonical map from the ring to its localization, and the intersection runs over all of the associated primes of $R/I$.
Though this definition does not require $I$ to be prime, this assumption is often made, because in the case of a prime ideal the symbolic power can be equivalently defined as the $I$-primary component of $I^{n}$. Very roughly, it consists of functions with zeros of order n along the variety defined by $I$. We have: $I^{(1)}=I$ and if $I$ is a maximal ideal, then $I^{(n)}=I^{n}$.
Symbolic powers induce the following chain of ideals:
$I^{(0)}=R\supset I=I^{(1)}\supset I^{(2)}\supset I^{(3)}\supset I^{(4)}\supset \cdots $
Uses
The study and use of symbolic powers has a long history in commutative algebra. Krull’s famous proof of his principal ideal theorem uses them in an essential way. They first arose after primary decompositions were proved for Noetherian rings. Zariski used symbolic powers in his study of the analytic normality of algebraic varieties. Chevalley's famous lemma comparing topologies states that in a complete local domain the symbolic powers topology of any prime is finer than the m-adic topology. A crucial step in the vanishing theorem on local cohomology of Hartshorne and Lichtenbaum uses that for a prime $I$ defining a curve in a complete local domain, the powers of $I$ are cofinal with the symbolic powers of $I$. This important property of being cofinal was further developed by Schenzel in the 1970s.[1]
In algebraic geometry
Though generators for ordinary powers of $I$ are well understood when $I$ is given in terms of its generators as $I=(f_{1},\ldots ,f_{k})$, it is still very difficult in many cases to determine the generators of symbolic powers of $I$. But in the geometric setting, there is a clear geometric interpretation in the case when $I$ is a radical ideal over an algebraically closed field of characteristic zero.
If $X$ is an irreducible variety whose ideal of vanishing is $I$, then the n-th differential power of $I$ consists of all the functions in $R$ that vanish to order ≥ n on $X$, i.e.
$I^{\langle n\rangle }:=\{f\in R\mid f{\text{ vanishes to order}}\geq n{\text{ on all of }}X\}.$
Or equivalently, if $\mathbf {m} _{p}$ is the maximal ideal for a point $p\in X$, $I^{\langle n\rangle }=\bigcap _{p\in X}\mathbf {m} _{p}^{n}$.
Theorem (Nagata, Zariski)[2] Let $I$ be a prime ideal in a polynomial ring $K[x_{1},\ldots ,x_{N}]$ over an algebraically closed field. Then
$I^{(m)}=I^{\langle m\rangle }$
This result can be extended to any radical ideal.[3] This formulation is very useful because, in characteristic zero, we can compute the differential powers in terms of generators as:
$I^{\langle m\rangle }=\left\langle f\mid {\frac {\partial ^{\mathbf {a} }f}{\partial x^{\mathbf {a} }}}\in I{\text{ for all }}\mathbf {a} \in \mathbb {N} ^{N}{\text{ where }}|\mathbf {a} |=\sum _{i=1}^{N}a_{i}\leq m-1\right\rangle $
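For the radical ideal of a finite point set, membership in the differential power $I^{\langle m\rangle }$ amounts to checking that all partial derivatives of total order at most m − 1 vanish at every point of the variety. The following is a small SymPy sketch of that check (an illustration only; the function and variable names are ours, not from the literature), using the ideal I = (x, y) of the origin in two variables.

```python
# SymPy sketch (illustration only): test membership in the differential power
# I^<m> for the radical ideal of a finite point set, using the criterion that
# all partial derivatives of total order <= m-1 vanish on the variety.
import itertools
import sympy as sp

x, y = sp.symbols('x y')
variables = (x, y)

def vanishes_to_order(f, points, m):
    """True iff every partial derivative of f of total order <= m-1
    vanishes at every point of the (finite) variety."""
    for order in range(m):
        for combo in itertools.combinations_with_replacement(variables, order):
            df = sp.diff(f, *combo) if combo else f
            if any(df.subs(dict(zip(variables, p))) != 0 for p in points):
                return False
    return True

origin = [(0, 0)]                                  # X = {(0,0)}, so I = (x, y)
print(vanishes_to_order(x**2 + x*y, origin, 2))    # True:  lies in I^<2> = (x, y)^2
print(vanishes_to_order(x, origin, 2))             # False: x vanishes only to order 1
```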
For another formulation, we can consider the case when the base ring is a polynomial ring over a field. In this case, we can interpret the n-th symbolic power as the sheaf of all function germs over $X=\operatorname {Spec} (R)$ vanishing to order ≥ n at $Z=V(I)$. In fact, if $X$ is a smooth variety over a perfect field, then
$I^{(n)}=\{f\in R\mid f\in \mathbf {m} ^{n}{\text{ for every closed point }}\mathbf {m} \in Z\}$[1]
Containments
It is natural to consider whether or not symbolic powers agree with ordinary powers, i.e. does $I^{n}=I^{(n)}$ hold? In general this is not the case. One example of this is the prime ideal $P=(x^{4}-yz,\,y^{2}-xz,\,x^{3}y-z^{2})\subseteq K[x,y,z]$. Here we have that $P^{2}\neq P^{(2)}$.[1] However, $P^{2}\subset P^{(2)}$ does hold, and the generalization of this inclusion is well understood. Indeed, the containment $I^{n}\subseteq I^{(n)}$ follows from the definition. Further, it is known that $I^{r}\subseteq I^{(m)}$ if and only if $m\leq r$. The proof follows from Nakayama's lemma.[4]
There has been extensive study into the other containment, when symbolic powers are contained in ordinary powers of ideals, referred to as the Containment Problem. Once again this has an easily stated answer, summarized in the following theorem. It was developed by Ein, Lazarsfeld, and Smith in characteristic zero[5] and was expanded to positive characteristic by Hochster and Huneke.[6] Their papers both build upon the results of Irena Swanson in Linear Equivalence of Ideal Topologies (2000).[7]
Theorem (Ein, Lazarsfeld, Smith; Hochster, Huneke) Let $I\subset K[x_{1},x_{2},\ldots ,x_{N}]$ be a homogeneous ideal. Then the inclusion
$I^{(m)}\subset I^{r}$ holds for all $m\geq Nr.$
It was later verified that the bound of $N$ in the theorem cannot be tightened for general ideals.[8] However, following a question posed[8] by Bocci, Harbourne, and Huneke, it was discovered that a better bound exists in some cases.
Theorem The inclusion $I^{(m)}\subseteq I^{r}$ for all $m\geq Nr-N+1$ holds
1. for arbitrary ideals in characteristic 2;[9]
2. for monomial ideals in arbitrary characteristic[4]
3. for ideals of d-stars[8]
4. for ideals of general points in $\mathbb {P} ^{2}{\text{ and }}\mathbb {P} ^{3}$[10][11]
References
1. Dao, Hailong; De Stefani, Alessandro; Grifo, Eloísa; Huneke, Craig; Núñez-Betancourt, Luis (2017-08-09). "Symbolic powers of ideals". arXiv:1708.03010 [math.AC].
2. David Eisenbud. Commutative Algebra: with a view toward algebraic geometry, volume 150. Springer Science & Business Media, 2013.
3. Sidman, Jessica; Sullivant, Seth (2006). "Prolongations and computational algebra". arXiv:math/0611696.
4. Bauer, Thomas; Di Rocco, Sandra; Harbourne, Brian; Kapustka, Michał; Knutsen, Andreas; Syzdek, Wioletta; Szemberg, Tomasz (2009). "A primer on Seshadri constants". In Bates, Daniel J.; Besana, GianMario; Di Rocco, Sandra; Wampler, Charles W. (eds.). Interactions of classical and numerical algebraic geometry: Papers from the conference in honor of Andrew Sommese held at the University of Notre Dame, Notre Dame, IN, May 22–24, 2008. Contemporary Mathematics. Vol. 496. Providence, Rhode Island: American Mathematical Society. pp. 33–70. doi:10.1090/conm/496/09718. MR 2555949.
5. Lawrence Ein, Robert Lazarsfeld, and Karen E Smith. Uniform bounds and symbolic powers on smooth varieties. Inventiones mathematicae, 144(2):241–252, 2001
6. Melvin Hochster and Craig Huneke. Comparison of symbolic and ordinary powers of ideals. Inventiones mathematicae, 147(2):349–369, 2002.
7. Irena Swanson. Linear equivalence of ideal topologies. Mathematische Zeitschrift, 234(4):755–775, 2000
8. Bocci, Cristiano; Harbourne, Brian (2007). "Comparing powers and symbolic powers of ideals". arXiv:0706.3707 [math.AG].
9. Tomasz Szemberg and Justyna Szpond. On the containment problem. Rendiconti del Circolo Matematico di Palermo Series 2, pages 1–13, 2016.
10. Marcin Dumnicki. Containments of symbolic powers of ideals of generic points in P 3 . Proceedings of the American Mathematical Society, 143(2):513–530, 2015.
11. Harbourne, Brian; Huneke, Craig (2011). "Are symbolic powers highly evolved?". arXiv:1103.5809 [math.AC].
External links
• Melvin Hochster, Math 711: Lecture of September 7, 2007
| Wikipedia |
Formal proof
In logic and mathematics, a formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. It differs from a natural language argument in that it is rigorous, unambiguous and mechanically verifiable.[1] If the set of assumptions is empty, then the last sentence in a formal proof is called a theorem of the formal system. The notion of theorem is not in general effective; therefore, there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concepts of Fitch-style proof, sequent calculus and natural deduction are generalizations of the concept of proof.[2][3]
The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it must be the result of applying a rule of the deductive apparatus (of some formal system) to the previous well-formed formulas in the proof sequence.
Formal proofs often are constructed with the help of computers in interactive theorem proving (e.g., through the use of a proof checker and an automated theorem prover).[4] Significantly, these proofs can be checked automatically, also by computer. Checking formal proofs is usually simple, while the problem of finding proofs (automated theorem proving) is usually computationally intractable and/or only semi-decidable, depending upon the formal system in use.
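To make the mechanical checkability concrete, the following is a minimal sketch (a toy illustration, not the design of any actual proof assistant) of a checker for a propositional system whose only inference rule is modus ponens; formulas are represented as nested tuples, and the representation and function names are ours.

```python
# Minimal proof-checker sketch (illustration only): formulas are nested tuples,
# e.g. ('->', 'p', 'q') stands for p -> q. Each proof line must be an axiom, an
# assumption, or follow from two earlier lines by modus ponens.
def follows_by_modus_ponens(formula, earlier):
    """True iff some earlier pair has the shape A and ('->', A, formula)."""
    return any(e == ('->', a, formula) for e in earlier for a in earlier)

def check_proof(lines, axioms, assumptions=()):
    verified = []
    for formula in lines:
        ok = (formula in axioms
              or formula in assumptions
              or follows_by_modus_ponens(formula, verified))
        if not ok:
            return False
        verified.append(formula)
    return True

# Example: from the assumptions p and p -> q, derive q.
p, q = 'p', 'q'
proof = [p, ('->', p, q), q]
print(check_proof(proof, axioms=set(), assumptions={p, ('->', p, q)}))  # True
```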
Background
Formal language
Main article: Formal language
A formal language is a set of finite sequences of symbols. Such a language can be defined without reference to any meanings of any of its expressions; it can exist before any interpretation is assigned to it – that is, before it has any meaning. Formal proofs are expressed in some formal languages.
Formal grammar
Main articles: Formal grammar and Formation rule
A formal grammar (also called formation rules) is a precise description of the well-formed formulas of a formal language. It is synonymous with the set of strings over the alphabet of the formal language which constitute well formed formulas. However, it does not describe their semantics (i.e. what they mean).
Formal systems
Main article: Formal system
A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions.
Interpretations
Main articles: Formal semantics (logic) and Interpretation (logic)
An interpretation of a formal system is the assignment of meanings to the symbols, and truth values to the sentences of a formal system. The study of interpretations is called formal semantics. Giving an interpretation is synonymous with constructing a model.
See also
• Axiomatic system
• Formal verification
• Mathematical proof
• Proof assistant
• Proof calculus
• Proof theory
• Proof (truth)
• De Bruijn factor
References
1. Kassios, Yannis (February 20, 2009). "Formal Proof" (PDF). cs.utoronto.ca. Retrieved 2019-12-12.
2. The Cambridge Dictionary of Philosophy, deduction
3. Barwise, Jon; Etchemendy, John (1999). Language, Proof and Logic (1st ed.). Seven Bridges Press and CSLI.
4. Harrison, John (December 2008). "Formal Proof—Theory and Practice" (PDF). ams.org. Retrieved 2019-12-12.
External links
• "A Special Issue on Formal Proof". Notices of the American Mathematical Society. December 2008.
• 2πix.com: Logic Part of a series of articles covering mathematics and logic.
• Archive of Formal Proofs
• Mizar Home Page
| Wikipedia |
Symbolic regression
Symbolic regression (SR) is a type of regression analysis that searches the space of mathematical expressions to find the model that best fits a given dataset, both in terms of accuracy and simplicity.
No particular model is provided as a starting point for symbolic regression. Instead, initial expressions are formed by randomly combining mathematical building blocks such as mathematical operators, analytic functions, constants, and state variables. Usually, a subset of these primitives is specified by the person applying the method, but this is not a requirement of the technique. The symbolic regression problem for mathematical functions has been tackled with a variety of methods, most commonly by recombining equations using genetic programming,[1] as well as more recent methods utilizing Bayesian methods[2] and neural networks.[3] Another non-classical alternative method to SR is called Universal Functions Originator (UFO), which has a different mechanism, search-space, and building strategy.[4] Further methods such as Exact Learning attempt to transform the fitting problem into a moments problem in a natural function space, usually built around generalizations of the Meijer G-function.[5]
By not requiring a priori specification of a model, symbolic regression isn't affected by human bias, or unknown gaps in domain knowledge. It attempts to uncover the intrinsic relationships of the dataset, by letting the patterns in the data itself reveal the appropriate models, rather than imposing a model structure that is deemed mathematically tractable from a human perspective. The fitness function that drives the evolution of the models takes into account not only error metrics (to ensure the models accurately predict the data), but also special complexity measures,[6] thus ensuring that the resulting models reveal the data's underlying structure in a way that's understandable from a human perspective. This facilitates reasoning and favors the odds of getting insights about the data-generating system, as well as improving generalisability and extrapolation behaviour by preventing overfitting. Accuracy and simplicity may be left as two separate objectives of the regression—in which case the optimum solutions form a Pareto front—or they may be combined into a single objective by means of a model selection principle such as minimum description length.
It has been proven that symbolic regression is an NP-hard problem, in the sense that one cannot always find the best possible mathematical expression to fit to a given dataset in polynomial time.[7] Nevertheless, if the sought-for equation is not too complex it is possible to solve the symbolic regression problem exactly by generating every possible function (built from some predefined set of operators) and evaluating them on the dataset in question.[8]
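The exhaustive approach mentioned above can be sketched in a few lines. The toy example below (an illustration only; it is not how production systems work, and all names are ours) enumerates small expression trees over the primitives {x, 1, +, ×} and keeps the one with the lowest squared error on the data.

```python
# Toy exhaustive symbolic-regression sketch (illustration only): enumerate
# small expression trees over a fixed set of primitives and keep the one with
# the lowest squared error. Real systems use genetic programming or other
# search strategies instead of brute force.
import itertools
import numpy as np

x = np.linspace(-2, 2, 50)
y = x**2 + 1.0                      # hidden target the search should recover

terminals = {'x': x, '1': np.ones_like(x)}
binary_ops = {'+': np.add, '*': np.multiply}

def depth1_expressions():
    """All expressions of the form (a op b) with a, b terminals."""
    for (na, a), (nb, b) in itertools.product(terminals.items(), repeat=2):
        for op_name, op in binary_ops.items():
            yield f'({na} {op_name} {nb})', op(a, b)

def depth2_expressions():
    """All expressions combining two depth-1 expressions with one operator."""
    level1 = list(depth1_expressions())
    for (na, a), (nb, b) in itertools.product(level1, repeat=2):
        for op_name, op in binary_ops.items():
            yield f'({na} {op_name} {nb})', op(a, b)

best = min(itertools.chain(depth1_expressions(), depth2_expressions()),
           key=lambda item: np.mean((item[1] - y) ** 2))
print(best[0])   # e.g. '((x * x) + (1 * 1))', equivalent to x**2 + 1
```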
Difference from classical regression
While conventional regression techniques seek to optimize the parameters for a pre-specified model structure, symbolic regression avoids imposing prior assumptions, and instead infers the model from the data. In other words, it attempts to discover both model structures and model parameters.
This approach has the disadvantage of having a much larger space to search, because not only is the search space in symbolic regression infinite, but there are infinitely many models that will perfectly fit a finite data set (provided that the model complexity is not artificially limited). This means that it will possibly take a symbolic regression algorithm longer to find an appropriate model and parametrization than traditional regression techniques would. This can be attenuated by limiting the set of building blocks provided to the algorithm, based on existing knowledge of the system that produced the data; but in the end, using symbolic regression is a decision that has to be balanced with how much is known about the underlying system.
Nevertheless, this characteristic of symbolic regression also has advantages: because the evolutionary algorithm requires diversity in order to effectively explore the search space, the result is likely to be a selection of high-scoring models (and their corresponding set of parameters). Examining this collection could provide better insight into the underlying process, and allows the user to identify an approximation that better fits their needs in terms of accuracy and simplicity.
Benchmarking
SRBench
In 2021, SRBench[9] was proposed as a large benchmark for symbolic regression. At its inception, SRBench featured 14 symbolic regression methods, 7 other ML methods, and 252 datasets from PMLB. The benchmark is intended to be a living project: it encourages the submission of improvements, new datasets, and new methods, to keep track of the state of the art in SR.
SRBench Competition 2022
In 2022, SRBench announced the competition Interpretable Symbolic Regression for Data Science, which was held at the GECCO conference in Boston, MA. The competition pitted nine leading symbolic regression algorithms against each other on a novel set of data problems and considered different evaluation criteria. The competition was organized in two tracks, a synthetic track and a real-world data track.[10]
Synthetic Track
In the synthetic track, methods were compared according to five properties: re-discovery of exact expressions; feature selection; resistance to local optima; extrapolation; and sensitivity to noise. Rankings of the methods were:
1. QLattice
2. PySR (Python Symbolic Regression)
3. uDSR (Deep Symbolic Optimization)
Real-world Track
In the real-world track, methods were trained to build interpretable predictive models for 14-day forecast counts of COVID-19 cases, hospitalizations, and deaths in New York State. These models were reviewed by a subject expert, assigned trust ratings, and evaluated for accuracy and simplicity. The ranking of the methods was:
1. uDSR (Deep Symbolic Optimization)
2. QLattice
3. geneticengine (Genetic Engine)
Non-Standard Methods
Most symbolic regression algorithms prevent combinatorial explosion by implementing evolutionary algorithms that iteratively improve the best-fit expression over many generations. Recently, researchers have proposed algorithms utilizing other tactics in AI.
Silviu-Marian Udrescu and Max Tegmark developed the "AI Feynman" algorithm,[11][12] which attempts symbolic regression by training a neural network to represent the mystery function, then runs tests against the neural network to attempt to break up the problem into smaller parts. For example, if $f(x_{1},...,x_{i},x_{i+1},...,x_{n})=g(x_{1},...,x_{i})+h(x_{i+1},...,x_{n})$, tests against the neural network can recognize the separation and proceed to solve for $g$ and $h$ separately and with different variables as inputs. This is an example of divide and conquer, which reduces the size of the problem to be more manageable. AI Feynman also transforms the inputs and outputs of the mystery function in order to produce a new function which can be solved with other techniques, and performs dimensional analysis to reduce the number of independent variables involved. The algorithm was able to "discover" 100 equations from The Feynman Lectures on Physics, while a leading software using evolutionary algorithms, Eureqa, solved only 71. AI Feynman, in contrast to classic symbolic regression methods, requires a very large dataset in order to first train the neural network and is naturally biased towards equations that are common in elementary physics.
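The divide-and-conquer step can be illustrated with a purely numerical separability test (a sketch of the underlying idea only; in AI Feynman such tests are run against the trained neural-network surrogate rather than directly on raw data, and the function names here are ours): a function is additively separable exactly when every mixed difference f(a,c) − f(a,d) − f(b,c) + f(b,d) vanishes.

```python
# Numerical sketch (illustration only) of an additive-separability test:
# f(x1, x2) = g(x1) + h(x2) holds exactly when all mixed differences
# f(a,c) - f(a,d) - f(b,c) + f(b,d) vanish.
import numpy as np

rng = np.random.default_rng(0)

def is_additively_separable(f, n_trials=1000, tol=1e-9):
    a, b, c, d = rng.uniform(-3, 3, size=(4, n_trials))
    mixed = f(a, c) - f(a, d) - f(b, c) + f(b, d)
    return np.max(np.abs(mixed)) < tol

print(is_additively_separable(lambda x1, x2: np.sin(x1) + x2**2))  # True
print(is_additively_separable(lambda x1, x2: np.sin(x1 * x2)))     # False
```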
Software
End-user software
• QLattice is a quantum-inspired simulation and machine learning technology that helps search through an infinite list of potential mathematical models to solve a problem.[13][14]
• uDSR is a deep learning framework for symbolic optimization tasks[15]
• dCGP, differentiable Cartesian Genetic Programming in python (free, open source) [16][17]
• HeuristicLab, a software environment for heuristic and evolutionary algorithms, including symbolic regression (free, open source)
• GeneXProTools, an implementation of the Gene expression programming technique for various problems including symbolic regression (commercial)
• Multi Expression Programming X, an implementation of Multi expression programming for symbolic regression and classification (free, open source)
• Eureqa, evolutionary symbolic regression software (commercial), and software library
• TuringBot, symbolic regression software based on simulated annealing (commercial)
• PySR,[18] symbolic regression environment written in Python and Julia, using regularized evolution, simulated annealing, and gradient-free optimization (free, open source);[19] a brief usage sketch follows this list
• GP-GOMEA, fast (C++ back-end) evolutionary symbolic regression with Python scikit-learn-compatible interface, achieved one of the best trade-offs between accuracy and simplicity of discovered models on SRBench in 2021 (free, open source)
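For PySR, listed above, a hedged usage sketch looks roughly as follows; the parameter names follow the PySR documentation at the time of writing and should be checked against the current release, and the synthetic target function is just an example.

```python
# Hedged PySR usage sketch (check the current PySR docs before relying on it).
import numpy as np
from pysr import PySRRegressor

X = np.random.uniform(-3, 3, size=(200, 2))
y = 2.0 * np.cos(X[:, 0]) + X[:, 1] ** 2          # example target to rediscover

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "*"],
    unary_operators=["cos"],
)
model.fit(X, y)          # evolutionary search over the expression space
print(model)             # discovered expressions ranked by accuracy/complexity
```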
See also
• Closed-form expression § Conversion from numerical forms
• Genetic programming
• Gene expression programming
• Kolmogorov complexity
• Linear genetic programming
• Mathematical optimization
• Multi expression programming
• Regression analysis
• Reverse mathematics
• Discovery system (AI research)[3]
References
1. Michael Schmidt; Hod Lipson (2009). "Distilling free-form natural laws from experimental data". Science. American Association for the Advancement of Science. 324 (5923): 81–85. Bibcode:2009Sci...324...81S. CiteSeerX 10.1.1.308.2245. doi:10.1126/science.1165893. PMID 19342586. S2CID 7366016.
2. Ying Jin; Weilin Fu; Jian Kang; Jiadong Guo; Jian Guo (2019). "Bayesian Symbolic Regression". arXiv:1910.08892 [stat.ME].
3. Silviu-Marian Udrescu; Max Tegmark (2020). "AI Feynman: A physics-inspired method for symbolic regression". Science_Advances. American Association for the Advancement of Science. 6 (16): eaay2631. Bibcode:2020SciA....6.2631U. doi:10.1126/sciadv.aay2631. PMC 7159912. PMID 32426452.
4. Ali R. Al-Roomi; Mohamed E. El-Hawary (2020). "Universal Functions Originator". Applied Soft Computing. Elsevier B.V. 94: 106417. doi:10.1016/j.asoc.2020.106417. ISSN 1568-4946. S2CID 219743405.
5. Benedict W. J. Irwin (2021). "Exact Learning" (PDF). doi:10.21203/rs.3.rs-149856/v1. S2CID 234014141.
6. Ekaterina J. Vladislavleva; Guido F. Smits; Dick Den Hertog (2009). "Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic programming" (PDF). IEEE Transactions on Evolutionary Computation. 13 (2): 333–349. doi:10.1109/tevc.2008.926486. S2CID 12072764.
7. Virgolin, Marco; Pissis, Solon P. (2022-07-05). "Symbolic Regression is NP-hard". arXiv:2207.01018 [cs.NE].
8. Bartlett, Deaglan; Desmond, Harry; Ferreira, Pedro (2023). "Exhaustive Symbolic Regression". IEEE Transactions on Evolutionary Computation: 1. arXiv:2211.11461. doi:10.1109/TEVC.2023.3280250. S2CID 253735380.
9. La Cava, William; Orzechowski, Patryk; Burlacu, Bogdan; de Franca, Fabricio; Virgolin, Marco; Jin, Ying; Kommenda, Michael; Moore, Jason (2021). "Contemporary Symbolic Regression Methods and their Relative Performance". Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks. 1. arXiv:2107.14351.
10. Michael Kommenda; William La Cava; Maimuna Majumder; Fabricio Olivetti de França; Marco Virgolin. "SRBench Competition 2022: Interpretable Symbolic Regression for Data Science".
11. Udrescu, Silviu-Marian; Tegmark, Max (2020-04-17). "AI Feynman: A physics-inspired method for symbolic regression". Science Advances. 6 (16): eaay2631. arXiv:1905.11481. Bibcode:2020SciA....6.2631U. doi:10.1126/sciadv.aay2631. ISSN 2375-2548. PMC 7159912. PMID 32426452.
12. Udrescu, Silviu-Marian; Tan, Andrew; Feng, Jiahai; Neto, Orisvaldo; Wu, Tailin; Tegmark, Max (2020-12-16). "AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity". arXiv:2006.10782 [cs.LG].
13. "Feyn is a Python module for running the QLattice". June 22, 2022.
14. Kevin René Broløs; Meera Vieira Machado; Chris Cave; Jaan Kasak; Valdemar Stentoft-Hansen; Victor Galindo Batanero; Tom Jelen; Casper Wilstrup (2021-04-12). "An Approach to Symbolic Regression Using Feyn". arXiv:2104.05417 [cs.LG].
15. "Deep symbolic optimization". GitHub. June 22, 2022.
16. "Differentiable Cartesian Genetic Programming, v1.6 Documentation". June 10, 2022.
17. Izzo, Dario; Biscani, Francesco; Mereta, Alessio (2016). "Differentiable genetic programming". Proceedings of the European Conference on Genetic Programming. arXiv:1611.04766.
18. "High-Performance Symbolic Regression in Python". GitHub. 18 August 2022.
19. "'Machine Scientists' Distill the Laws of Physics From Raw Data". Quanta Magazine. May 10, 2022.
Further reading
• Mark J. Willis; Hugo G. Hiden; Ben McKay; Gary A. Montague; Peter Marenbach (1997). "Genetic programming: An introduction and survey of applications" (PDF). IEE Conference Publications. IEE. pp. 314–319.
• Wouter Minnebo; Sean Stijven (2011). "Chapter 4: Symbolic Regression" (PDF). Empowering Knowledge Computing with Variable Selection (M.Sc. thesis). University of Antwerp.
• John R. Koza; Martin A. Keane; James P. Rice (1993). "Performance improvement of machine learning via automatic discovery of facilitating functions as applied to a problem of symbolic system identification" (PDF). IEEE International Conference on Neural Networks. San Francisco: IEEE. pp. 191–198.
External links
• Ivan Zelinka (2004). "Symbolic regression — an overview".
• Hansueli Gerber (1998). "Simple Symbolic Regression Using Genetic Programming". (Java applet) — approximates a function by evolving combinations of simple arithmetic operators, using algorithms developed by John Koza.
• Katya Vladislavleva. "Symbolic Regression: Function Discovery & More". Archived from the original on 2014-12-18.
| Wikipedia |
Symlet
In applied mathematics, symlet wavelets are a family of wavelets. They are a modified version of Daubechies wavelets with increased symmetry.[1][2][3]
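In practice, symlets are available in standard wavelet libraries. The sketch below assumes the PyWavelets package (`pywt`), where the family is exposed under the names sym2, sym3, and so on; it is an illustration rather than a recommendation of any particular settings.

```python
# PyWavelets sketch (illustration only): inspect a symlet filter bank and run
# a multilevel discrete wavelet decomposition with it.
import numpy as np
import pywt

wavelet = pywt.Wavelet('sym4')          # symlet with 4 vanishing moments
print(wavelet.dec_lo)                   # low-pass decomposition filter taps

signal = np.sin(np.linspace(0, 8 * np.pi, 256))
coeffs = pywt.wavedec(signal, 'sym4', level=3)   # 3-level DWT using sym4
print([len(c) for c in coeffs])
```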
References
1. Daubechies, Ingrid (2009-12-31). "Orthonormal Bases of Compactly Supported Wavelets". Fundamental Papers in Wavelet Theory. Princeton University Press. pp. 564–652. doi:10.1515/9781400827268.564. ISBN 978-1-4008-2726-8. Retrieved 2021-11-27.
2. Gao, Robert X.; Yan, Ruqiang (2010-12-07). Wavelets: Theory and Applications for Manufacturing. Springer Science & Business Media. ISBN 978-1-4419-1545-0.
3. Arfaoui, Sabrine; Mabrouk, Anouar Ben; Cattani, Carlo (2021-04-20). Wavelet Analysis: Basic Concepts and Applications. CRC Press. ISBN 978-1-000-36954-0.
| Wikipedia |
Symmedian
In geometry, symmedians are three particular lines associated with every triangle. They are constructed by taking a median of the triangle (a line connecting a vertex with the midpoint of the opposite side), and reflecting the line over the corresponding angle bisector (the line through the same vertex that divides the angle there in half). The angle formed by the symmedian and the angle bisector has the same measure as the angle between the median and the angle bisector, but it is on the other side of the angle bisector.
The three symmedians meet at a triangle center called the Lemoine point. Ross Honsberger has called its existence "one of the crown jewels of modern geometry".[1]
Isogonality
Many times in geometry, if we take three special lines through the vertices of a triangle, or cevians, then their reflections about the corresponding angle bisectors, called isogonal lines, will also have interesting properties. For instance, if three cevians of a triangle intersect at a point P, then their isogonal lines also intersect at a point, called the isogonal conjugate of P.
The symmedians illustrate this fact.
• In the diagram, the medians (in black) intersect at the centroid G.
• Because the symmedians (in red) are isogonal to the medians, the symmedians also intersect at a single point, L.
This point is called the triangle's symmedian point, or alternatively the Lemoine point or Grebe point.
The dotted lines are the angle bisectors; the symmedians and medians are symmetric about the angle bisectors (hence the name "symmedian.")
Construction of the symmedian
Let △ABC be a triangle. Construct a point D by intersecting the tangents from B and C to the circumcircle. Then AD is the symmedian of △ABC.[2]
first proof. Let the reflection of AD across the angle bisector of ∠BAC meet BC at M'. Then:
${\frac {|BM'|}{|M'C|}}={\frac {|AM'|{\frac {\sin \angle {BAM'}}{\sin \angle {ABM'}}}}{|AM'|{\frac {\sin \angle {CAM'}}{\sin \angle {ACM'}}}}}={\frac {\sin \angle {BAM'}}{\sin \angle {ACD}}}{\frac {\sin \angle {ABD}}{\sin \angle {CAM'}}}={\frac {\sin \angle {CAD}}{\sin \angle {ACD}}}{\frac {\sin \angle {ABD}}{\sin \angle {BAD}}}={\frac {|CD|}{|AD|}}{\frac {|AD|}{|BD|}}=1$
second proof. Define D' as the isogonal conjugate of D. It is easy to see that the reflection of CD about the bisector is the line through C parallel to AB. The same is true for BD, and so, ABD'C is a parallelogram. AD' is clearly the median, because a parallelogram's diagonals bisect each other, and AD is its reflection about the bisector.
third proof. Let ω be the circle with center D passing through B and C, and let O be the circumcenter of △ABC. Say lines AB, AC intersect ω at P, Q, respectively. Since ∠ABC = ∠AQP, triangles △ABC and △AQP are similar. Since
$\angle PBQ=\angle BQC+\angle BAC={\frac {\angle BDC+\angle BOC}{2}}=90^{\circ },$
we see that PQ is a diameter of ω and hence passes through D. Let M be the midpoint of BC. Since D is the midpoint of PQ, the similarity implies that ∠BAM = ∠QAD, from which the result follows.
fourth proof. Let S be the midpoint of the arc BC. Then |BS| = |SC|, so AS is the angle bisector of ∠BAC. Let M be the midpoint of BC; it follows that D is the inverse of M with respect to the circumcircle. From this, we know that the circumcircle is an Apollonian circle with foci M and D. So AS is the bisector of the angle ∠DAM, and the result follows.
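The construction can also be checked numerically for a particular triangle (a sanity check, not a proof): since the symmedians concur at the Lemoine point, whose barycentric coordinates are a² : b² : c², the point D obtained from the two tangents must be collinear with A and that point. The sketch below assumes NumPy, and the specific triangle is an arbitrary example.

```python
# Numerical sanity check (illustration only) of the tangent construction: for a
# sample triangle, the intersection D of the tangents at B and C lies on the
# line through A and the Lemoine point K (barycentrics a^2 : b^2 : c^2).
import numpy as np

A = np.array([0.0, 0.0]); B = np.array([5.0, 1.0]); C = np.array([2.0, 4.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

# Circumcenter O: solve |P - A|^2 = |P - B|^2 = |P - C|^2 as a 2x2 linear system.
M = 2 * np.array([B - A, C - A])
rhs = np.array([B @ B - A @ A, C @ C - A @ A])
O = np.linalg.solve(M, rhs)

# D = intersection of the tangents at B and C (tangent at P is perpendicular to OP).
def tangent(P):
    n = P - O                     # normal direction of the tangent line at P
    return n, n @ P               # the line n . X = n . P

(nB, dB), (nC, dC) = tangent(B), tangent(C)
D = np.linalg.solve(np.array([nB, nC]), np.array([dB, dC]))

# Lemoine point K from barycentric coordinates a^2 : b^2 : c^2.
K = (a**2 * A + b**2 * B + c**2 * C) / (a**2 + b**2 + c**2)

u, v = D - A, K - A
print(u[0] * v[1] - u[1] * v[0])   # ~0: A, K, D are collinear, so AD is the symmedian
```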
Tetrahedra
The concept of a symmedian point extends to (irregular) tetrahedra. Given a tetrahedron ABCD two planes P, Q through AB are isogonal conjugates if they form equal angles with the planes ABC and ABD. Let M be the midpoint of the side CD. The plane containing the side AB that is isogonal to the plane ABM is called a symmedian plane of the tetrahedron. The symmedian planes can be shown to intersect at a point, the symmedian point. This is also the point that minimizes the squared distance from the faces of the tetrahedron.[3]
References
1. Honsberger, Ross (1995), "Chapter 7: The Symmedian Point", Episodes in Nineteenth and Twentieth Century Euclidean Geometry, Washington, D.C.: Mathematical Association of America.
2. Yufei, Zhao (2010). Three Lemmas in Geometry (PDF). p. 5.
3. Sadek, Jawad; Bani-Yaghoub, Majid; Rhee, Noah (2016), "Isogonal Conjugates in a Tetrahedron" (PDF), Forum Geometricorum, 16: 43–50.
External links
• Symmedian and Antiparallel at cut-the-knot
• Symmedian and 2 Antiparallels at cut-the-knot
• Symmedian and the Tangents at cut-the-knot
• An interactive Java applet for the symmedian point
• Isogons and Isogonic Symmetry
| Wikipedia |
Symmetry
Symmetry (from Ancient Greek συμμετρία (summetría) 'agreement in dimensions, due proportion, arrangement')[1] in everyday language refers to a sense of harmonious and beautiful proportion and balance.[2][3][lower-alpha 1] In mathematics, the term has a more precise definition and is usually used to refer to an object that is invariant under some transformations, such as translation, reflection, rotation, or scaling. Although these two meanings of the word can sometimes be told apart, they are intricately related, and hence are discussed together in this article.
Mathematical symmetry may be observed with respect to the passage of time; as a spatial relationship; through geometric transformations; through other kinds of functional transformations; and as an aspect of abstract objects, including theoretic models, language, and music.[4][lower-alpha 2]
This article describes symmetry from three perspectives: in mathematics, including geometry, the most familiar type of symmetry for many people; in science and nature; and in the arts, covering architecture, art, and music.
The opposite of symmetry is asymmetry, which refers to the absence or a violation of symmetry.
In mathematics
In geometry
Main article: Symmetry (geometry)
A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion.[5] This means that an object is symmetric if there is a transformation that moves individual pieces of the object, but doesn't change the overall shape. The type of symmetry is determined by the way the pieces are organized, or by the type of transformation:
• An object has reflectional symmetry (line or mirror symmetry) if there is a line (or in 3D a plane) going through it which divides it into two pieces that are mirror images of each other.[6]
• An object has rotational symmetry if the object can be rotated about a fixed point (or in 3D about a line) without changing the overall shape.[7]
• An object has translational symmetry if it can be translated (moving every point of the object by the same distance) without changing its overall shape.[8]
• An object has helical symmetry if it can be simultaneously translated and rotated in three-dimensional space along a line known as a screw axis.[9]
• An object has scale symmetry if it does not change shape when it is expanded or contracted.[10] Fractals also exhibit a form of scale symmetry, where smaller portions of the fractal are similar in shape to larger portions.[11]
• Other symmetries include glide reflection symmetry (a reflection followed by a translation) and rotoreflection symmetry (a combination of a rotation and a reflection[12]).
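The transformation view of symmetry just listed lends itself to a simple computational check. The sketch below (an illustration, not drawn from the article's sources) models a figure as a finite set of points and tests whether a given linear transformation maps the set onto itself; all names and tolerances are ours.

```python
# Sketch (illustration only): a finite point set is symmetric under a linear
# transformation when the transformed set coincides with the original set.
import numpy as np

square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]   # vertices of a square

def is_symmetry(points, matrix, tol=1e-9):
    """True iff the transformation maps the point set onto itself."""
    pts = np.array(points)
    image = pts @ matrix.T
    # every transformed point must coincide with some original point
    return all(np.min(np.linalg.norm(pts - q, axis=1)) < tol for q in image)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

reflection_x = np.array([[1.0, 0.0], [0.0, -1.0]])   # mirror across the x-axis

print(is_symmetry(square, rotation(np.pi / 2)))   # True: 90-degree rotational symmetry
print(is_symmetry(square, rotation(np.pi / 3)))   # False: 60 degrees is not a symmetry
print(is_symmetry(square, reflection_x))          # True: reflection symmetry
```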
In logic
A dyadic relation R ⊆ S × S is symmetric if for all elements a, b in S, whenever it is true that Rab, it is also true that Rba.[13] Thus, the relation "is the same age as" is symmetric, for if Paul is the same age as Mary, then Mary is the same age as Paul.
In propositional logic, symmetric binary logical connectives include and (∧, or &), or (∨, or |) and if and only if (↔), while the connective if (→) is not symmetric.[14] Other symmetric logical connectives include nand (not-and, or ⊼), xor (not-biconditional, or ⊻), and nor (not-or, or ⊽).
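The symmetry of a binary connective can be verified mechanically by comparing truth tables, as in the brief sketch below (an illustration only; the connective definitions are the standard ones).

```python
# Truth-table sketch (illustration only): a binary connective is symmetric when
# swapping its arguments never changes the truth value.
from itertools import product

connectives = {
    'and':  lambda p, q: p and q,
    'or':   lambda p, q: p or q,
    'iff':  lambda p, q: p == q,
    'if':   lambda p, q: (not p) or q,
    'nand': lambda p, q: not (p and q),
    'xor':  lambda p, q: p != q,
    'nor':  lambda p, q: not (p or q),
}

for name, f in connectives.items():
    symmetric = all(f(p, q) == f(q, p) for p, q in product([False, True], repeat=2))
    print(name, symmetric)        # only 'if' comes out asymmetric
```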
Other areas of mathematics
Main article: Symmetry in mathematics
Generalizing from geometrical symmetry in the previous section, one can say that a mathematical object is symmetric with respect to a given mathematical operation, if, when applied to the object, this operation preserves some property of the object.[15] The set of operations that preserve a given property of the object form a group.
In general, every kind of structure in mathematics will have its own kind of symmetry. Examples include even and odd functions in calculus, symmetric groups in abstract algebra, symmetric matrices in linear algebra, and Galois groups in Galois theory. In statistics, symmetry also manifests as symmetric probability distributions, and as skewness—the asymmetry of distributions.[16]
In science and nature
Further information: Patterns in nature
In physics
Symmetry in physics has been generalized to mean invariance—that is, lack of change—under any kind of transformation, for example arbitrary coordinate transformations.[17] This concept has become one of the most powerful tools of theoretical physics, as it has become evident that practically all laws of nature originate in symmetries. In fact, this role inspired the Nobel laureate PW Anderson to write in his widely read 1972 article More is Different that "it is only slightly overstating the case to say that physics is the study of symmetry."[18] See Noether's theorem (which, in greatly simplified form, states that for every continuous mathematical symmetry, there is a corresponding conserved quantity such as energy or momentum; a conserved current, in Noether's original language);[19] and also, Wigner's classification, which says that the symmetries of the laws of physics determine the properties of the particles found in nature.[20]
Important symmetries in physics include continuous symmetries and discrete symmetries of spacetime; internal symmetries of particles; and supersymmetry of physical theories.
In biology
In biology, the notion of symmetry is mostly used explicitly to describe body shapes. Bilateral animals, including humans, are more or less symmetric with respect to the sagittal plane which divides the body into left and right halves.[21] Animals that move in one direction necessarily have upper and lower sides, head and tail ends, and therefore a left and a right. The head becomes specialized with a mouth and sense organs, and the body becomes bilaterally symmetric for the purpose of movement, with symmetrical pairs of muscles and skeletal elements, though internal organs often remain asymmetric.[22]
Plants and sessile (attached) animals such as sea anemones often have radial or rotational symmetry, which suits them because food or threats may arrive from any direction. Fivefold symmetry is found in the echinoderms, the group that includes starfish, sea urchins, and sea lilies.[23]
In biology, the notion of symmetry is also used as in physics, that is to say to describe the properties of the objects studied, including their interactions. A remarkable property of biological evolution is the changes of symmetry corresponding to the appearance of new parts and dynamics.[24][25]
In chemistry
Symmetry is important to chemistry because it undergirds essentially all specific interactions between molecules in nature (i.e., via the interaction of natural and human-made chiral molecules with inherently chiral biological systems). The control of the symmetry of molecules produced in modern chemical synthesis contributes to the ability of scientists to offer therapeutic interventions with minimal side effects. A rigorous understanding of symmetry explains fundamental observations in quantum chemistry, and in the applied areas of spectroscopy and crystallography. The theory and application of symmetry to these areas of physical science draws heavily on the mathematical area of group theory.[26]
In psychology and neuroscience
For a human observer, some symmetry types are more salient than others, in particular the most salient is a reflection with a vertical axis, like that present in the human face. Ernst Mach made this observation in his book "The analysis of sensations" (1897),[27] and this implies that perception of symmetry is not a general response to all types of regularities. Both behavioural and neurophysiological studies have confirmed the special sensitivity to reflection symmetry in humans and also in other animals.[28] Early studies within the Gestalt tradition suggested that bilateral symmetry was one of the key factors in perceptual grouping. This is known as the Law of Symmetry. The role of symmetry in grouping and figure/ground organization has been confirmed in many studies. For instance, detection of reflectional symmetry is faster when this is a property of a single object.[29] Studies of human perception and psychophysics have shown that detection of symmetry is fast, efficient and robust to perturbations. For example, symmetry can be detected with presentations between 100 and 150 milliseconds.[30]
More recent neuroimaging studies have documented which brain regions are active during perception of symmetry. Sasaki et al.[31] used functional magnetic resonance imaging (fMRI) to compare responses for patterns with symmetrical or random dots. A strong activity was present in extrastriate regions of the occipital cortex but not in the primary visual cortex. The extrastriate regions included V3A, V4, V7, and the lateral occipital complex (LOC). Electrophysiological studies have found a late posterior negativity that originates from the same areas.[32] In general, a large part of the visual system seems to be involved in processing visual symmetry, and these areas involve similar networks to those responsible for detecting and recognising objects.[33]
In social interactions
People observe the symmetrical nature, often including asymmetrical balance, of social interactions in a variety of contexts. These include assessments of reciprocity, empathy, sympathy, apology, dialogue, respect, justice, and revenge. Reflective equilibrium is the balance that may be attained through deliberative mutual adjustment among general principles and specific judgments.[34] Symmetrical interactions send the moral message "we are all the same" while asymmetrical interactions may send the message "I am special; better than you." Peer relationships, such as can be governed by the golden rule, are based on symmetry, whereas power relationships are based on asymmetry.[35] Symmetrical relationships can to some degree be maintained by simple (game theory) strategies seen in symmetric games such as tit for tat.[36]
In the arts
Further information: Mathematics and art
There exists a list of journals and newsletters known to deal, at least in part, with symmetry and the arts.[37]
In architecture
Further information: Mathematics and architecture
Symmetry finds its ways into architecture at every scale, from the overall external views of buildings such as Gothic cathedrals and The White House, through the layout of the individual floor plans, and down to the design of individual building elements such as tile mosaics. Islamic buildings such as the Taj Mahal and the Lotfollah mosque make elaborate use of symmetry both in their structure and in their ornamentation.[38][39] Moorish buildings like the Alhambra are ornamented with complex patterns made using translational and reflection symmetries as well as rotations.[40]
It has been said that only bad architects rely on a "symmetrical layout of blocks, masses and structures";[41] Modernist architecture, starting with International style, relies instead on "wings and balance of masses".[41]
In pottery and metal vessels
Since the earliest uses of pottery wheels to help shape clay vessels, pottery has had a strong relationship to symmetry. Pottery created using a wheel acquires full rotational symmetry in its cross-section, while allowing substantial freedom of shape in the vertical direction. Upon this inherently symmetrical starting point, potters from ancient times onwards have added patterns that modify the rotational symmetry to achieve visual objectives.
Cast metal vessels lacked the inherent rotational symmetry of wheel-made pottery, but otherwise provided a similar opportunity to decorate their surfaces with patterns pleasing to those who used them. The ancient Chinese, for example, used symmetrical patterns in their bronze castings as early as the 17th century BC. Bronze vessels exhibited both a bilateral main motif and a repetitive translated border design.[42]
In carpets and rugs
A long tradition of the use of symmetry in carpet and rug patterns spans a variety of cultures. American Navajo Indians used bold diagonals and rectangular motifs. Many Oriental rugs have intricate reflected centers and borders that translate a pattern. Not surprisingly, rectangular rugs have typically the symmetries of a rectangle—that is, motifs that are reflected across both the horizontal and vertical axes (see Klein four-group § Geometry).[43][44]
In quilts
As quilts are made from square blocks (usually 9, 16, or 25 pieces to a block) with each smaller piece usually consisting of fabric triangles, the craft lends itself readily to the application of symmetry.[45]
In other arts and crafts
Symmetries appear in the design of objects of all kinds. Examples include beadwork, furniture, sand paintings, knotwork, masks, and musical instruments. Symmetries are central to the art of M.C. Escher and the many applications of tessellation in art and craft forms such as wallpaper, ceramic tilework such as in Islamic geometric decoration, batik, ikat, carpet-making, and many kinds of textile and embroidery patterns.[46]
Symmetry is also used in designing logos.[47] By creating a logo on a grid and using the theory of symmetry, designers can organize their work, create a symmetric or asymmetrical design, determine the space between letters, determine how much negative space is required in the design, and how to accentuate parts of the logo to make it stand out.
In music
Symmetry is not restricted to the visual arts. Its role in the history of music touches many aspects of the creation and perception of music.
Musical form
Symmetry has been used as a formal constraint by many composers, such as the arch (swell) form (ABCBA) used by Steve Reich, Béla Bartók, and James Tenney. In classical music, Bach used the symmetry concepts of permutation and invariance.[48]
Pitch structures
Symmetry is also an important consideration in the formation of scales and chords, traditional or tonal music being made up of non-symmetrical groups of pitches, such as the diatonic scale or the major chord. Symmetrical scales or chords, such as the whole tone scale, augmented chord, or diminished seventh chord (diminished-diminished seventh), are said to lack direction or a sense of forward motion, are ambiguous as to the key or tonal center, and have a less specific diatonic functionality. However, composers such as Alban Berg, Béla Bartók, and George Perle have used axes of symmetry and/or interval cycles in an analogous way to keys or non-tonal tonal centers.[49] George Perle explains "C–E, D–F♯, [and] Eb–G, are different instances of the same interval … the other kind of identity. … has to do with axes of symmetry. C–E belongs to a family of symmetrically related dyads as follows:"[49]
D D♯ E F F♯ G G♯
D C♯ C B A♯ A G♯
Thus in addition to being part of the interval-4 family, C–E is also a part of the sum-4 family (with C equal to 0).[49]
+ 2 3 4 5 6 7 8
2 1 0 11 10 9 8
4 4 4 4 4 4 4
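The sum-4 family in the tables above can be generated by simple modular arithmetic: a dyad belongs to it exactly when its two pitch-class numbers sum to 4 modulo 12 (with C = 0). A small sketch, for illustration only:

```python
# Pitch-class sketch (illustration only): list the dyads whose pitch-class
# numbers sum to 4 modulo 12, i.e. the "sum-4" family (C = 0, C# = 1, ..., B = 11).
names = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
sum_4_dyads = [(i, (4 - i) % 12) for i in range(12)]
print([(names[i], names[j]) for i, j in sum_4_dyads])   # includes ('C', 'E')
```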
Interval cycles are symmetrical and thus non-diatonic. However, a seven pitch segment of C5 (the cycle of fifths, which are enharmonic with the cycle of fourths) will produce the diatonic major scale. Cyclic tonal progressions in the works of Romantic composers such as Gustav Mahler and Richard Wagner form a link with the cyclic pitch successions in the atonal music of Modernists such as Bartók, Alexander Scriabin, Edgard Varèse, and the Vienna school. At the same time, these progressions signal the end of tonality.[49][50]
The first extended composition consistently based on symmetrical pitch relations was probably Alban Berg's Quartet, Op. 3 (1910).[50]
Equivalency
Tone rows or pitch class sets which are invariant under retrograde are horizontally symmetrical, under inversion vertically. See also Asymmetric rhythm.
In aesthetics
The relationship of symmetry to aesthetics is complex. Humans find bilateral symmetry in faces physically attractive;[51] it indicates health and genetic fitness.[52][53] Opposed to this is the tendency for excessive symmetry to be perceived as boring or uninteresting. Rudolf Arnheim suggested that people prefer shapes that have some symmetry, and enough complexity to make them interesting.[54]
In literature
Symmetry can be found in various forms in literature, a simple example being the palindrome where a brief text reads the same forwards or backwards. Stories may have a symmetrical structure, such as the rise and fall pattern of Beowulf.[55]
See also
• Automorphism
• Burnside's lemma
• Chirality
• Even and odd functions
• Fixed points of isometry groups in Euclidean space – center of symmetry
• Isotropy
• Palindrome
• Spacetime symmetries
• Spontaneous symmetry breaking
• Symmetry-breaking constraints
• Symmetric relation
• Symmetries of polyiamonds
• Symmetries of polyominoes
• Symmetry group
• Wallpaper group
Notes
1. For example, Aristotle ascribed spherical shape to the heavenly bodies, attributing this formally defined geometric measure of symmetry to the natural order and perfection of the cosmos.
2. Symmetric objects can be material, such as a person, crystal, quilt, floor tiles, or molecule, or it can be an abstract structure such as a mathematical equation or a series of tones (music).
References
1. Harper, Douglas. "symmetry". Online Etymology Dictionary.
2. Zee, A. (2007). Fearful Symmetry. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-13482-6.
3. Hill, C. T.; Lederman, L. M. (2005). Symmetry and the Beautiful Universe. Prometheus Books.
4. Mainzer, Klaus (2005). Symmetry and Complexity: The Spirit and Beauty of Nonlinear Science. World Scientific. ISBN 981-256-192-7.
5. E. H. Lockwood, R. H. Macmillan, Geometric Symmetry, London: Cambridge Press, 1978
6. Weyl, Hermann (1982) [1952]. Symmetry. Princeton: Princeton University Press. ISBN 0-691-02374-3.
7. Singer, David A. (1998). Geometry: Plane and Fancy. Springer Science & Business Media.
8. Stenger, Victor J. (2000) and Mahou Shiro (2007). Timeless Reality. Prometheus Books. Especially chapter 12. Nontechnical.
9. Bottema, O, and B. Roth, Theoretical Kinematics, Dover Publications (September 1990)
10. Tian Yu Cao Conceptual Foundations of Quantum Field Theory Cambridge University Press p.154-155
11. Gouyet, Jean-François (1996). Physics and fractal structures. Paris/New York: Masson Springer. ISBN 978-0-387-94153-0.
12. "Rotoreflection Axis". TheFreeDictionary.com. Retrieved 2019-11-12.
13. Josiah Royce, Ignas K. Skrupskelis (2005) The Basic Writings of Josiah Royce: Logic, loyalty, and community (Google eBook) Fordham Univ Press, p. 790
14. Gao, Alice (2019). "Propositional Logic: Introduction and Syntax" (PDF). University of Waterloo — School of Computer Science. Retrieved 2019-11-12.
15. Christopher G. Morris (1992) Academic Press Dictionary of Science and Technology Gulf Professional Publishing
16. Petitjean, M. (2003). "Chirality and Symmetry Measures: A Transdisciplinary Review". Entropy. 5 (3): 271–312 (see section 2.9). Bibcode:2003Entrp...5..271P. doi:10.3390/e5030271.
17. Costa, Giovanni; Fogli, Gianluigi (2012). Symmetries and Group Theory in Particle Physics: An Introduction to Space-Time and Internal Symmetries. Springer Science & Business Media. p. 112.
18. Anderson, P.W. (1972). "More is Different" (PDF). Science. 177 (4047): 393–396. Bibcode:1972Sci...177..393A. doi:10.1126/science.177.4047.393. PMID 17796623. S2CID 34548824.
19. Kosmann-Schwarzbach, Yvette (2010). The Noether theorems: Invariance and conservation laws in the twentieth century. Sources and Studies in the History of Mathematics and Physical Sciences. Springer-Verlag. ISBN 978-0-387-87867-6.
20. Wigner, E. P. (1939), "On unitary representations of the inhomogeneous Lorentz group", Annals of Mathematics, 40 (1): 149–204, Bibcode:1939AnMat..40..149W, doi:10.2307/1968551, JSTOR 1968551, MR 1503456, S2CID 121773411
21. Valentine, James W. "Bilateria". AccessScience. Archived from the original on 18 January 2008. Retrieved 29 May 2013.
22. Hickman, Cleveland P.; Roberts, Larry S.; Larson, Allan (2002). "Animal Diversity (Third Edition)" (PDF). Chapter 8: Acoelomate Bilateral Animals. McGraw-Hill. p. 139. Archived from the original (PDF) on May 17, 2016. Retrieved October 25, 2012.
23. Stewart, Ian (2001). What Shape is a Snowflake? Magical Numbers in Nature. Weidenfeld & Nicolson. pp. 64–65.
24. Longo, Giuseppe; Montévil, Maël (2016). Perspectives on Organisms: Biological time, Symmetries and Singularities. Springer. ISBN 978-3-662-51229-6.
25. Montévil, Maël; Mossio, Matteo; Pocheville, Arnaud; Longo, Giuseppe (2016). "Theoretical principles for biology: Variation". Progress in Biophysics and Molecular Biology. From the Century of the Genome to the Century of the Organism: New Theoretical Approaches. 122 (1): 36–50. doi:10.1016/j.pbiomolbio.2016.08.005. PMID 27530930. S2CID 3671068.
26. Lowe, John P; Peterson, Kirk (2005). Quantum Chemistry (Third ed.). Academic Press. ISBN 0-12-457551-X.
27. Mach, Ernst (1897). Symmetries and Group Theory in Particle Physics: An Introduction to Space-Time and Internal Symmetries. Open Court Publishing House.
28. Wagemans, J. (1997). "Characteristics and models of human symmetry detection". Trends in Cognitive Sciences. 1 (9): 346–352. doi:10.1016/S1364-6613(97)01105-4. PMID 21223945. S2CID 2143353.
29. Bertamini, M. (2010). "Sensitivity to reflection and translation is modulated by objectness". Perception. 39 (1): 27–40. doi:10.1068/p6393. PMID 20301844. S2CID 22451173.
30. Barlow, H.B.; Reeves, B.C. (1979). "The versatility and absolute efficiency of detecting mirror symmetry in random dot displays". Vision Research. 19 (7): 783–793. doi:10.1016/0042-6989(79)90154-8. PMID 483597. S2CID 41530752.
31. Sasaki, Y.; Vanduffel, W.; Knutsen, T.; Tyler, C.W.; Tootell, R. (2005). "Symmetry activates extrastriate visual cortex in human and nonhuman primates". Proceedings of the National Academy of Sciences of the USA. 102 (8): 3159–3163. Bibcode:2005PNAS..102.3159S. doi:10.1073/pnas.0500319102. PMC 549500. PMID 15710884.
32. Makin, A.D.J.; Rampone, G.; Pecchinenda, A.; Bertamini, M. (2013). "Electrophysiological responses to visuospatial regularity". Psychophysiology. 50 (10): 1045–1055. doi:10.1111/psyp.12082. PMID 23941638.
33. Bertamini, M.; Silvanto, J.; Norcia, A.M.; Makin, A.D.J.; Wagemans, J. (2018). "The neural basis of visual symmetry and its role in middle and high-level visual processing". Annals of the New York Academy of Sciences. 132 (1): 280–293. Bibcode:2018NYASA1426..111B. doi:10.1111/nyas.13667. PMID 29604083.
34. Daniels, Norman (2003-04-28). "Reflective Equilibrium". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
35. Emotional Competency: Symmetry
36. Lutus, P. (2008). "The Symmetry Principle". Retrieved 28 September 2015.
37. Bouissou, C.; Petitjean, M. (2018). "Asymmetric Exchanges". Journal of Interdisciplinary Methodologies and Issues in Science. 4: 1–18. doi:10.18713/JIMIS-230718-4-1. (see appendix 1)
38. Williams: Symmetry in Architecture. Members.tripod.com (1998-12-31). Retrieved on 2013-04-16.
39. Aslaksen: Mathematics in Art and Architecture. Math.nus.edu.sg. Retrieved on 2013-04-16.
40. Derry, Gregory N. (2002). What Science Is and How It Works. Princeton University Press. pp. 269–. ISBN 978-1-4008-2311-6.
41. Dunlap, David W. (31 July 2009). "Behind the Scenes: Edgar Martins Speaks". New York Times. Retrieved 11 November 2014. "My starting point for this construction was a simple statement which I once read (and which does not necessarily reflect my personal views): 'Only a bad architect relies on symmetry; instead of symmetrical layout of blocks, masses and structures, Modernist architecture relies on wings and balance of masses.'"
42. The Art of Chinese Bronzes Archived 2003-12-11 at the Wayback Machine. Chinavoc (2007-11-19). Retrieved on 2013-04-16.
43. Marla Mallett Textiles & Tribal Oriental Rugs. The Metropolitan Museum of Art, New York.
44. Dilucchio: Navajo Rugs. Navajocentral.org (2003-10-26). Retrieved on 2013-04-16.
45. Quate: Exploring Geometry Through Quilts Archived 2003-12-31 at the Wayback Machine. Its.guilford.k12.nc.us. Retrieved on 2013-04-16.
46. Cucker, Felipe (2013). Manifold Mirrors: The Crossing Paths of the Arts and Mathematics. Cambridge University Press. pp. 77–78, 83, 89, 103. ISBN 978-0-521-72876-8.
47. "How to Design a Perfect Logo with Grid and Symmetry".
48. see ("Fugue No. 21," pdf or Shockwave)
49. Perle, George (1992). "Symmetry, the twelve-tone scale, and tonality". Contemporary Music Review. 6 (2): 81–96. doi:10.1080/07494469200640151.
50. Perle, George (1990). The Listening Composer. University of California Press. p. 21. ISBN 978-0-520-06991-6.
51. Grammer, K.; Thornhill, R. (1994). "Human (Homo sapiens) facial attractiveness and sexual selection: the role of symmetry and averageness". Journal of Comparative Psychology. Washington, D.C. 108 (3): 233–42. doi:10.1037/0735-7036.108.3.233. PMID 7924253. S2CID 1205083.
52. Rhodes, Gillian; Zebrowitz, Leslie A. (2002). Facial Attractiveness - Evolutionary, Cognitive, and Social Perspectives. Ablex. ISBN 1-56750-636-4.
53. Jones, B. C., Little, A. C., Tiddeman, B. P., Burt, D. M., & Perrett, D. I. (2001). Facial symmetry and judgements of apparent health: Support for a 'good genes' explanation of the attractiveness–symmetry relationship. 22, 417–429.
54. Arnheim, Rudolf (1969). Visual Thinking. University of California Press.
55. Jenny Lea Bowman (2009). "Symmetrical Aesthetics of Beowulf". University of Tennessee, Knoxville.
Further reading
• The Equation That Couldn't Be Solved: How Mathematical Genius Discovered the Language of Symmetry, Mario Livio, Souvenir Press 2006, ISBN 0-285-63743-6
External links
Look up symmetry in Wiktionary, the free dictionary.
Wikimedia Commons has media related to Symmetry.
Wikiquote has quotations related to Symmetry.
• International Symmetry Association (ISA)
• Dutch: Symmetry Around a Point in the Plane Archived 2004-01-02 at the Wayback Machine
• Chapman: Aesthetics of Symmetry
• ISIS Symmetry
• Symmetry, BBC Radio 4 discussion with Fay Dowker, Marcus du Sautoy & Ian Stewart (In Our Time, Apr. 19, 2007)
Mathematics and art
Concepts
• Algorithm
• Catenary
• Fractal
• Golden ratio
• Hyperboloid structure
• Minimal surface
• Paraboloid
• Perspective
• Camera lucida
• Camera obscura
• Plastic number
• Projective geometry
• Proportion
• Architecture
• Human
• Symmetry
• Tessellation
• Wallpaper group
Forms
• Algorithmic art
• Anamorphic art
• Architecture
• Geodesic dome
• Islamic
• Mughal
• Pyramid
• Vastu shastra
• Computer art
• Fiber arts
• 4D art
• Fractal art
• Islamic geometric patterns
• Girih
• Jali
• Muqarnas
• Zellij
• Knotting
• Celtic knot
• Croatian interlace
• Interlace
• Music
• Origami
• Sculpture
• String art
• Tiling
Artworks
• List of works designed with the golden ratio
• Continuum
• Mathemalchemy
• Mathematica: A World of Numbers... and Beyond
• Octacube
• Pi
• Pi in the Sky
Buildings
• Cathedral of Saint Mary of the Assumption
• Hagia Sophia
• Pantheon
• Parthenon
• Pyramid of Khufu
• Sagrada Família
• Sydney Opera House
• Taj Mahal
Artists
Renaissance
• Paolo Uccello
• Piero della Francesca
• Leonardo da Vinci
• Vitruvian Man
• Albrecht Dürer
• Parmigianino
• Self-portrait in a Convex Mirror
19th–20th
Century
• William Blake
• The Ancient of Days
• Newton
• Jean Metzinger
• Danseuse au café
• L'Oiseau bleu
• Giorgio de Chirico
• Man Ray
• M. C. Escher
• Circle Limit III
• Print Gallery
• Relativity
• Reptiles
• Waterfall
• René Magritte
• La condition humaine
• Salvador Dalí
• Crucifixion
• The Swallow's Tail
• Crockett Johnson
Contemporary
• Max Bill
• Martin and Erik Demaine
• Scott Draves
• Jan Dibbets
• John Ernest
• Helaman Ferguson
• Peter Forakis
• Susan Goldstine
• Bathsheba Grossman
• George W. Hart
• Desmond Paul Henry
• Anthony Hill
• Charles Jencks
• Garden of Cosmic Speculation
• Andy Lomas
• Robert Longhurst
• Jeanette McLeod
• Hamid Naderi Yeganeh
• István Orosz
• Hinke Osinga
• Antoine Pevsner
• Tony Robbin
• Alba Rojo Cama
• Reza Sarhangi
• Oliver Sin
• Hiroshi Sugimoto
• Daina Taimiņa
• Roman Verostko
• Margaret Wertheim
Theorists
Ancient
• Polykleitos
• Canon
• Vitruvius
• De architectura
Renaissance
• Filippo Brunelleschi
• Leon Battista Alberti
• De pictura
• De re aedificatoria
• Piero della Francesca
• De prospectiva pingendi
• Luca Pacioli
• De divina proportione
• Leonardo da Vinci
• A Treatise on Painting
• Albrecht Dürer
• Vier Bücher von Menschlicher Proportion
• Sebastiano Serlio
• Regole generali d'architettura
• Andrea Palladio
• I quattro libri dell'architettura
Romantic
• Samuel Colman
• Nature's Harmonic Unity
• Frederik Macody Lund
• Ad Quadratum
• Jay Hambidge
• The Greek Vase
Modern
• Owen Jones
• The Grammar of Ornament
• Ernest Hanbury Hankin
• The Drawing of Geometric Patterns in Saracenic Art
• G. H. Hardy
• A Mathematician's Apology
• George David Birkhoff
• Aesthetic Measure
• Douglas Hofstadter
• Gödel, Escher, Bach
• Nikos Salingaros
• The 'Life' of a Carpet
Publications
• Journal of Mathematics and the Arts
• Lumen Naturae
• Making Mathematics with Needlework
• Rhythm of Structure
• Viewpoints: Mathematical Perspective and Fractal Geometry in Art
Organizations
• Ars Mathematica
• The Bridges Organization
• European Society for Mathematics and the Arts
• Goudreau Museum of Mathematics in Art and Science
• Institute For Figuring
• Mathemalchemy
• National Museum of Mathematics
Related
• Droste effect
• Mathematical beauty
• Patterns in nature
• Sacred geometry
• Category
Patterns in nature
Patterns
• Crack
• Dune
• Foam
• Meander
• Phyllotaxis
• Soap bubble
• Symmetry
• in crystals
• Quasicrystals
• in flowers
• in biology
• Tessellation
• Vortex street
• Wave
• Widmanstätten pattern
Causes
• Pattern formation
• Biology
• Natural selection
• Camouflage
• Mimicry
• Sexual selection
• Mathematics
• Chaos theory
• Fractal
• Logarithmic spiral
• Physics
• Crystal
• Fluid dynamics
• Plateau's laws
• Self-organization
People
• Plato
• Pythagoras
• Empedocles
• Fibonacci
• Liber Abaci
• Adolf Zeising
• Ernst Haeckel
• Joseph Plateau
• Wilson Bentley
• D'Arcy Wentworth Thompson
• On Growth and Form
• Alan Turing
• The Chemical Basis of Morphogenesis
• Aristid Lindenmayer
• Benoît Mandelbrot
• How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension
Related
• Pattern recognition
• Emergence
• Mathematics and art
| Wikipedia |
Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally,
$A{\text{ is symmetric}}\iff A=A^{\textsf {T}}.$
This article is about a matrix symmetric about its diagonal. For a matrix symmetric about its center, see Centrosymmetric matrix.
For matrices with symmetry over the complex number field, see Hermitian matrix.
Because equal matrices have equal dimensions, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if $a_{ij}$ denotes the entry in the $i$th row and $j$th column then
$A{\text{ is symmetric}}\iff {\text{ for every }}i,j,\quad a_{ji}=a_{ij}$
for all indices $i$ and $j.$
Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
In linear algebra, a real symmetric matrix represents a self-adjoint operator[1] represented in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
Example
The following $3\times 3$ matrix is symmetric:
$A={\begin{bmatrix}1&7&3\\7&4&5\\3&5&1\end{bmatrix}}$
It equals its own transpose, $A=A^{\textsf {T}}$.
Properties
Basic properties
• The sum and difference of two symmetric matrices is symmetric.
• This is not always true for the product: given symmetric matrices $A$ and $B$, the product $AB$ is symmetric if and only if $A$ and $B$ commute, i.e., if $AB=BA$.
• For any integer $n$, $A^{n}$ is symmetric if $A$ is symmetric.
• If $A^{-1}$ exists, it is symmetric if and only if $A$ is symmetric.
• The rank of a real symmetric matrix $A$ is equal to the number of non-zero eigenvalues of $A$ (counted with multiplicity).
Decomposition into symmetric and skew-symmetric
Any square matrix can uniquely be written as sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let ${\mbox{Mat}}_{n}$ denote the space of $n\times n$ matrices. If ${\mbox{Sym}}_{n}$ denotes the space of $n\times n$ symmetric matrices and ${\mbox{Skew}}_{n}$ the space of $n\times n$ skew-symmetric matrices then ${\mbox{Mat}}_{n}={\mbox{Sym}}_{n}+{\mbox{Skew}}_{n}$ and ${\mbox{Sym}}_{n}\cap {\mbox{Skew}}_{n}=\{0\}$, i.e.
${\mbox{Mat}}_{n}={\mbox{Sym}}_{n}\oplus {\mbox{Skew}}_{n},$
where $\oplus $ denotes the direct sum. Let $X\in {\mbox{Mat}}_{n}$ then
$X={\frac {1}{2}}\left(X+X^{\textsf {T}}\right)+{\frac {1}{2}}\left(X-X^{\textsf {T}}\right).$
Notice that $ {\frac {1}{2}}\left(X+X^{\textsf {T}}\right)\in {\mbox{Sym}}_{n}$ and $ {\frac {1}{2}}\left(X-X^{\textsf {T}}\right)\in \mathrm {Skew} _{n}$. This is true for every square matrix $X$ with entries from any field whose characteristic is different from 2.
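As an added illustration (not part of the original article, and assuming NumPy is available), the decomposition is immediate to compute for an arbitrary example matrix:
import numpy as np

def toeplitz_decomposition(X):
    """Split a square matrix X into its symmetric and skew-symmetric parts."""
    X = np.asarray(X, dtype=float)
    sym = (X + X.T) / 2           # lies in Sym_n
    skew = (X - X.T) / 2          # lies in Skew_n
    return sym, skew

X = np.array([[1.0, 2.0, 0.0],
              [4.0, 3.0, 5.0],
              [7.0, 8.0, 6.0]])
S, K = toeplitz_decomposition(X)
assert np.allclose(S, S.T)        # symmetric part
assert np.allclose(K, -K.T)       # skew-symmetric part
assert np.allclose(S + K, X)      # the two parts sum back to X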
A symmetric $n\times n$ matrix is determined by ${\tfrac {1}{2}}n(n+1)$ scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by ${\tfrac {1}{2}}n(n-1)$ scalars (the number of entries above the main diagonal).
Matrix congruent to a symmetric matrix
Any matrix congruent to a symmetric matrix is again symmetric: if $X$ is a symmetric matrix, then so is $AXA^{\mathrm {T} }$ for any matrix $A$.
Symmetry implies normality
A (real-valued) symmetric matrix is necessarily a normal matrix.
Real symmetric matrices
Denote by $\langle \cdot ,\cdot \rangle $ the standard inner product on $\mathbb {R} ^{n}$. The real $n\times n$ matrix $A$ is symmetric if and only if
$\langle Ax,y\rangle =\langle x,Ay\rangle \quad \forall x,y\in \mathbb {R} ^{n}.$
Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.
The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: For every real symmetric matrix $A$ there exists a real orthogonal matrix $Q$ such that $D=Q^{\mathrm {T} }AQ$ is a diagonal matrix. Every real symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.
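As an added numerical illustration of the spectral theorem (assuming NumPy), numpy.linalg.eigh returns the eigenvalues of a real symmetric matrix together with an orthogonal matrix $Q$ of eigenvectors, so that $Q^{\mathsf {T}}AQ$ is diagonal:
import numpy as np

A = np.array([[1.0, 7.0, 3.0],
              [7.0, 4.0, 5.0],
              [3.0, 5.0, 1.0]])          # the symmetric example matrix from above

eigenvalues, Q = np.linalg.eigh(A)       # eigh is specialized for symmetric/Hermitian input
D = Q.T @ A @ Q                          # numerically diagonal

assert np.allclose(Q.T @ Q, np.eye(3))        # Q is orthogonal
assert np.allclose(D, np.diag(eigenvalues))   # Q^T A Q holds the eigenvalues on its diagonal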
If $A$ and $B$ are $n\times n$ real symmetric matrices that commute, then they can be simultaneously diagonalized: there exists a basis of $\mathbb {R} ^{n}$ such that every element of the basis is an eigenvector for both $A$ and $B$.
Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix $D$ (above), and therefore $D$ is uniquely determined by $A$ up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.
Complex symmetric matrices
A complex symmetric matrix can be 'diagonalized' using a unitary matrix: thus if $A$ is a complex symmetric matrix, there is a unitary matrix $U$ such that $UAU^{\mathrm {T} }$ is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians.[2][3] In fact, the matrix $B=A^{\dagger }A$ is Hermitian and positive semi-definite, so there is a unitary matrix $V$ such that $V^{\dagger }BV$ is diagonal with non-negative real entries. Thus $C=V^{\mathrm {T} }AV$ is complex symmetric with $C^{\dagger }C$ real. Writing $C=X+iY$ with $X$ and $Y$ real symmetric matrices, $C^{\dagger }C=X^{2}+Y^{2}+i(XY-YX)$. Thus $XY=YX$. Since $X$ and $Y$ commute, there is a real orthogonal matrix $W$ such that both $WXW^{\mathrm {T} }$ and $WYW^{\mathrm {T} }$ are diagonal. Setting $U=WV^{\mathrm {T} }$ (a unitary matrix), the matrix $UAU^{\mathrm {T} }$ is complex diagonal. Pre-multiplying $U$ by a suitable diagonal unitary matrix (which preserves unitarity of $U$), the diagonal entries of $UAU^{\mathrm {T} }$ can be made to be real and non-negative as desired. To construct this matrix, we express the diagonal matrix as $UAU^{\mathrm {T} }=\operatorname {diag} (r_{1}e^{i\theta _{1}},r_{2}e^{i\theta _{2}},\dots ,r_{n}e^{i\theta _{n}})$. The matrix we seek is simply given by $D=\operatorname {diag} (e^{-i\theta _{1}/2},e^{-i\theta _{2}/2},\dots ,e^{-i\theta _{n}/2})$. Clearly $DUAU^{\mathrm {T} }D=\operatorname {diag} (r_{1},r_{2},\dots ,r_{n})$ as desired, so we make the modification $U'=DU$. Since their squares are the eigenvalues of $A^{\dagger }A$, they coincide with the singular values of $A$. (Note, about the eigen-decomposition of a complex symmetric matrix $A$, the Jordan normal form of $A$ may not be diagonal, therefore $A$ may not be diagonalized by any similarity transformation.)
Decomposition
Using the Jordan normal form, one can prove that every square real matrix can be written as a product of two real symmetric matrices, and every square complex matrix can be written as a product of two complex symmetric matrices.[4]
Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely.
Cholesky decomposition states that every real positive-definite symmetric matrix $A$ is a product of a lower-triangular matrix $L$ and its transpose,
$A=LL^{\textsf {T}}.$
If the matrix is symmetric indefinite, it may still be decomposed as $PAP^{\textsf {T}}=LDL^{\textsf {T}}$ where $P$ is a permutation matrix (arising from the need to pivot), $L$ a lower unit triangular matrix, and $D$ is a direct sum of symmetric $1\times 1$ and $2\times 2$ blocks; this is called the Bunch–Kaufman decomposition.[5]
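Both factorizations are available in standard numerical libraries; the sketch below (an added illustration, assuming NumPy and SciPy, with matrices invented for the example) uses numpy.linalg.cholesky for the positive-definite case and scipy.linalg.ldl, a Bunch–Kaufman-style routine, for the indefinite case:
import numpy as np
from scipy.linalg import ldl

# Cholesky factorization of a positive-definite symmetric matrix
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 3.0, 1.0],
              [2.0, 1.0, 3.0]])
L = np.linalg.cholesky(A)                # lower triangular factor
assert np.allclose(L @ L.T, A)

# LDL^T factorization of a symmetric indefinite matrix (pivoting is forced by the zero pivot)
B = np.array([[0.0, 1.0, 2.0],
              [1.0, -1.0, 3.0],
              [2.0, 3.0, 0.0]])
lu, d, perm = ldl(B)                     # B == lu @ d @ lu.T up to rounding
assert np.allclose(lu @ d @ lu.T, B)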
A general (complex) symmetric matrix may be defective and thus not be diagonalizable. If $A$ is diagonalizable it may be decomposed as
$A=Q\Lambda Q^{\textsf {T}}$
where $Q$ is an orthogonal matrix $QQ^{\textsf {T}}=I$, and $\Lambda $ is a diagonal matrix of the eigenvalues of $A$. In the special case that $A$ is real symmetric, then $Q$ and $\Lambda $ are also real. To see orthogonality, suppose $\mathbf {x} $ and $\mathbf {y} $ are eigenvectors corresponding to distinct eigenvalues $\lambda _{1}$, $\lambda _{2}$. Then
$\lambda _{1}\langle \mathbf {x} ,\mathbf {y} \rangle =\langle A\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,A\mathbf {y} \rangle =\lambda _{2}\langle \mathbf {x} ,\mathbf {y} \rangle .$
Since $\lambda _{1}$ and $\lambda _{2}$ are distinct, we have $\langle \mathbf {x} ,\mathbf {y} \rangle =0$.
Hessian
Symmetric $n\times n$ matrices of real functions appear as the Hessians of twice differentiable functions of $n$ real variables (continuity of the second derivatives is not needed, contrary to a common belief[6]).
Every quadratic form $q$ on $\mathbb {R} ^{n}$ can be uniquely written in the form $q(\mathbf {x} )=\mathbf {x} ^{\textsf {T}}A\mathbf {x} $ with a symmetric $n\times n$ matrix $A$. Because of the above spectral theorem, one can then say that every quadratic form, up to the choice of an orthonormal basis of $\mathbb {R} ^{n}$, "looks like"
$q\left(x_{1},\ldots ,x_{n}\right)=\sum _{i=1}^{n}\lambda _{i}x_{i}^{2}$
with real numbers $\lambda _{i}$. This considerably simplifies the study of quadratic forms, as well as the study of the level sets $\left\{\mathbf {x} :q(\mathbf {x} )=1\right\}$ which are generalizations of conic sections.
This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem.
Symmetrizable matrix
An $n\times n$ matrix $A$ is said to be symmetrizable if there exists an invertible diagonal matrix $D$ and symmetric matrix $S$ such that $A=DS.$
The transpose of a symmetrizable matrix is symmetrizable, since $A^{\mathrm {T} }=(DS)^{\mathrm {T} }=SD=D^{-1}(DSD)$ and $DSD$ is symmetric. A matrix $A=(a_{ij})$ is symmetrizable if and only if the following conditions are met:
1. $a_{ij}=0$ implies $a_{ji}=0$ for all $1\leq i\leq j\leq n.$
2. $a_{i_{1}i_{2}}a_{i_{2}i_{3}}\dots a_{i_{k}i_{1}}=a_{i_{2}i_{1}}a_{i_{3}i_{2}}\dots a_{i_{1}i_{k}}$ for any finite sequence $\left(i_{1},i_{2},\dots ,i_{k}\right).$
See also
Other types of symmetry or pattern in square matrices have special names; see for example:
• Skew-symmetric matrix (also called antisymmetric or antimetric)
• Centrosymmetric matrix
• Circulant matrix
• Covariance matrix
• Coxeter matrix
• GCD matrix
• Hankel matrix
• Hilbert matrix
• Persymmetric matrix
• Sylvester's law of inertia
• Toeplitz matrix
• Transpositions matrix
See also symmetry in mathematics.
Notes
1. Jesús Rojo García (1986). Álgebra lineal (in Spanish) (2nd ed.). Editorial AC. ISBN 84-7288-120-2.
2. Horn, R.A.; Johnson, C.R. (2013). Matrix analysis (2nd ed.). Cambridge University Press. pp. 263, 278. MR 2978290.
3. See:
• Autonne, L. (1915), "Sur les matrices hypohermitiennes et sur les matrices unitaires", Ann. Univ. Lyon, 38: 1–77
• Takagi, T. (1925), "On an algebraic problem related to an analytic theorem of Carathéodory and Fejér and on an allied theorem of Landau", Jpn. J. Math., 1: 83–93, doi:10.4099/jjm1924.1.0_83
• Siegel, Carl Ludwig (1943), "Symplectic Geometry", American Journal of Mathematics, 65 (1): 1–86, doi:10.2307/2371774, JSTOR 2371774, Lemma 1, page 12
• Hua, L.-K. (1944), "On the theory of automorphic functions of a matrix variable I–geometric basis", Amer. J. Math., 66 (3): 470–488, doi:10.2307/2371910, JSTOR 2371910
• Schur, I. (1945), "Ein Satz über quadratische Formen mit komplexen Koeffizienten", Amer. J. Math., 67 (4): 472–480, doi:10.2307/2371974, JSTOR 2371974
• Benedetti, R.; Cragnolini, P. (1984), "On simultaneous diagonalization of one Hermitian and one symmetric form", Linear Algebra Appl., 57: 215–226, doi:10.1016/0024-3795(84)90189-7
4. Bosch, A. J. (1986). "The factorization of a square matrix into two symmetric matrices". American Mathematical Monthly. 93 (6): 462–464. doi:10.2307/2323471. JSTOR 2323471.
5. G.H. Golub, C.F. van Loan. (1996). Matrix Computations. The Johns Hopkins University Press, Baltimore, London.
6. Dieudonné, Jean A. (1969). Foundations of Modern Analysis (Enlarged and Corrected printing ed.). Academic Press. pp. Theorem (8.12.2), p. 180. ISBN 978-1443724265.
References
• Horn, Roger A.; Johnson, Charles R. (2013), Matrix analysis (2nd ed.), Cambridge University Press, ISBN 978-0-521-54823-6
External links
• "Symmetric matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• A brief introduction and proof of eigenvalue properties of the real symmetric matrix
• How to implement a Symmetric Matrix in C++
Matrix classes
Explicitly constrained entries
• Alternant
• Anti-diagonal
• Anti-Hermitian
• Anti-symmetric
• Arrowhead
• Band
• Bidiagonal
• Bisymmetric
• Block-diagonal
• Block
• Block tridiagonal
• Boolean
• Cauchy
• Centrosymmetric
• Conference
• Complex Hadamard
• Copositive
• Diagonally dominant
• Diagonal
• Discrete Fourier Transform
• Elementary
• Equivalent
• Frobenius
• Generalized permutation
• Hadamard
• Hankel
• Hermitian
• Hessenberg
• Hollow
• Integer
• Logical
• Matrix unit
• Metzler
• Moore
• Nonnegative
• Pentadiagonal
• Permutation
• Persymmetric
• Polynomial
• Quaternionic
• Signature
• Skew-Hermitian
• Skew-symmetric
• Skyline
• Sparse
• Sylvester
• Symmetric
• Toeplitz
• Triangular
• Tridiagonal
• Vandermonde
• Walsh
• Z
Constant
• Exchange
• Hilbert
• Identity
• Lehmer
• Of ones
• Pascal
• Pauli
• Redheffer
• Shift
• Zero
Conditions on eigenvalues or eigenvectors
• Companion
• Convergent
• Defective
• Definite
• Diagonalizable
• Hurwitz
• Positive-definite
• Stieltjes
Satisfying conditions on products or inverses
• Congruent
• Idempotent or Projection
• Invertible
• Involutory
• Nilpotent
• Normal
• Orthogonal
• Unimodular
• Unipotent
• Unitary
• Totally unimodular
• Weighing
With specific applications
• Adjugate
• Alternating sign
• Augmented
• Bézout
• Carleman
• Cartan
• Circulant
• Cofactor
• Commutation
• Confusion
• Coxeter
• Distance
• Duplication and elimination
• Euclidean distance
• Fundamental (linear differential equation)
• Generator
• Gram
• Hessian
• Householder
• Jacobian
• Moment
• Payoff
• Pick
• Random
• Rotation
• Seifert
• Shear
• Similarity
• Symplectic
• Totally positive
• Transformation
Used in statistics
• Centering
• Correlation
• Covariance
• Design
• Doubly stochastic
• Fisher information
• Hat
• Precision
• Stochastic
• Transition
Used in graph theory
• Adjacency
• Biadjacency
• Degree
• Edmonds
• Incidence
• Laplacian
• Seidel adjacency
• Tutte
Used in science and engineering
• Cabibbo–Kobayashi–Maskawa
• Density
• Fundamental (computer vision)
• Fuzzy associative
• Gamma
• Gell-Mann
• Hamiltonian
• Irregular
• Overlap
• S
• State transition
• Substitution
• Z (chemistry)
Related terms
• Jordan normal form
• Linear independence
• Matrix exponential
• Matrix representation of conic sections
• Perfect matrix
• Pseudoinverse
• Row echelon form
• Wronskian
• Mathematics portal
• List of matrices
• Category:Matrices
| Wikipedia |
Symmetric Boolean function
In mathematics, a symmetric Boolean function is a Boolean function whose value does not depend on the order of its input bits, i.e., it depends only on the number of ones (or zeros) in the input.[1] For this reason they are also known as Boolean counting functions.[2]
There are $2^{n+1}$ symmetric n-ary Boolean functions. Instead of the truth table, traditionally used to represent Boolean functions, one may use a more compact representation for an n-variable symmetric Boolean function: the (n + 1)-vector, whose i-th entry (i = 0, ..., n) is the value of the function on an input vector with i ones. Mathematically, the symmetric Boolean functions correspond one-to-one with the functions that map n+1 elements to two elements, $f:\{0,1,...,n\}\rightarrow \{0,1\}$.
Symmetric Boolean functions are used to classify Boolean satisfiability problems.
Special cases
A number of special cases are recognized:[1]
• Majority function: their value is 1 on input vectors with more than n/2 ones
• Threshold functions: their value is 1 on input vectors with k or more ones for a fixed k
• All-equal and not-all-equal function: their value is 1 when the inputs do (not) all have the same value
• Exact-count functions: their value is 1 on input vectors with k ones for a fixed k
• One-hot or 1-in-n function: their value is 1 on input vectors with exactly one one
• One-cold function: their value is 1 on input vectors with exactly one zero
• Congruence functions: their value is 1 on input vectors with the number of ones congruent to k mod m for fixed k, m
• Parity function: their value is 1 if the input vector has odd number of ones
The n-ary versions of AND, OR, XOR, NAND, NOR and XNOR are also symmetric Boolean functions.
Properties
In the following, $f_{k}$ denotes the value of the function $f:\{0,1\}^{n}\rightarrow \{0,1\}$ when applied to an input vector of weight $k$.
Weight
The weight of the function can be calculated from its value vector:
$|f|=\sum _{k=0}^{n}{\binom {n}{k}}f_{k}$
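For example, the weight can be computed directly from the value vector; the small sketch below (an added illustration using only Python's standard library) gives 4 for the three-variable majority function with value vector (0, 0, 1, 1), since ${\binom {3}{2}}+{\binom {3}{3}}=4$:
from math import comb

def weight(value_vector):
    """Number of inputs on which the symmetric Boolean function takes the value 1."""
    n = len(value_vector) - 1
    return sum(comb(n, k) for k, f_k in enumerate(value_vector) if f_k)

print(weight([0, 0, 1, 1]))   # three-variable majority -> 4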
Algebraic normal form
The algebraic normal form either contains all monomials of certain order $m$, or none of them; i.e. the Möbius transform ${\hat {f}}$ of the function is also a symmetric function. It can thus also be described by a simple (n+1) bit vector, the ANF vector ${\hat {f}}_{m}$. The ANF and value vectors are related by a Möbius relation:
${\hat {f}}_{m}=\bigoplus _{k_{2}\subseteq m_{2}}f_{k}$
where $k_{2}\subseteq m_{2}$ denotes all the weights k whose base-2 representation is covered by the base-2 representation of m (a consequence of Lucas’ theorem).[3] Effectively, an n-variable symmetric Boolean function corresponds to a log(n)-variable ordinary Boolean function acting on the base-2 representation of the input weight.
For example, for three-variable functions:
${\begin{array}{lcl}{\hat {f}}_{0}&=&f_{0}\\{\hat {f}}_{1}&=&f_{0}\oplus f_{1}\\{\hat {f}}_{2}&=&f_{0}\oplus f_{2}\\{\hat {f}}_{3}&=&f_{0}\oplus f_{1}\oplus f_{2}\oplus f_{3}\end{array}}$
So the three variable majority function with value vector (0, 0, 1, 1) has ANF vector (0, 0, 1, 0), i.e.:
${\text{Maj}}(x,y,z)=xy\oplus xz\oplus yz$
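The Möbius relation can be evaluated with a bitwise subset test, since the base-2 representation of $k$ is covered by that of $m$ exactly when k & m == k. A short sketch (illustrative, plain Python) recovers the ANF vector of the majority function:
def anf_vector(value_vector):
    """Möbius transform of a symmetric Boolean function's value vector."""
    n = len(value_vector) - 1
    anf = []
    for m in range(n + 1):
        bit = 0
        for k in range(m + 1):
            if (k & m) == k:          # base-2 representation of k covered by that of m
                bit ^= value_vector[k]
        anf.append(bit)
    return anf

print(anf_vector([0, 0, 1, 1]))       # majority -> [0, 0, 1, 0]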
Unit hypercube polynomial
The coefficients of the real polynomial agreeing with the function on $\{0,1\}^{n}$ are given by:
$f_{m}^{*}=\sum _{k=0}^{m}(-1)^{|k|+|m|}{\binom {m}{k}}f_{k}$
For example, the three variable majority function polynomial has coefficients (0, 0, 1, -2):
${\text{Maj}}(x,y,z)=(xy+xz+yz)-2(xyz)$
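The same coefficients can be computed mechanically from the value vector; the sketch below (an added illustration using Python's standard library, reading $|k|$ and $|m|$ in the formula as the integers $k$ and $m$) reproduces (0, 0, 1, -2) for the majority function:
from math import comb

def hypercube_coefficients(value_vector):
    """Coefficients f*_m of the real polynomial agreeing with f on {0,1}^n."""
    return [sum((-1) ** (k + m) * comb(m, k) * value_vector[k] for k in range(m + 1))
            for m in range(len(value_vector))]

print(hypercube_coefficients([0, 0, 1, 1]))   # majority -> [0, 0, 1, -2]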
Examples
The 16 symmetric Boolean functions of three variables
Function value (for inputs with 0, 1, 2, 3 ones), Value vector, Weight, Name, Colloquial description, ANF vector
F F F F (0, 0, 0, 0) 0 Constant false "never" (0, 0, 0, 0)
F F F T (0, 0, 0, 1) 1 Three-way AND, Threshold(3), Count(3) "all three" (0, 0, 0, 1)
F F T F (0, 0, 1, 0) 3 Count(2), One-cold "exactly two" (0, 0, 1, 1)
F F T T (0, 0, 1, 1) 4 Majority, Threshold(2) "most", "at least two" (0, 0, 1, 0)
F T F F (0, 1, 0, 0) 3 Count(1), One-hot "exactly one" (0, 1, 0, 1)
F T F T (0, 1, 0, 1) 4 Three-way XOR, (odd) parity "one or three" (0, 1, 0, 0)
F T T F (0, 1, 1, 0) 6 Not-all-equal "one or two" (0, 1, 1, 0)
F T T T (0, 1, 1, 1) 7 Three-way OR, Threshold(1) "any", "at least one" (0, 1, 1, 1)
T F F F (1, 0, 0, 0) 1 Three-way NOR, Count(0) "none" (1, 1, 1, 1)
T F F T (1, 0, 0, 1) 2 All-equal "all or none" (1, 1, 1, 0)
T F T F (1, 0, 1, 0) 4 Three-way XNOR, even parity "none or two" (1, 1, 0, 0)
T F T T (1, 0, 1, 1) 5 "not exactly one" (1, 1, 0, 1)
T T F F (1, 1, 0, 0) 4 (Horn clause) "at most one" (1, 0, 1, 0)
T T F T (1, 1, 0, 1) 5 "not exactly two" (1, 0, 1, 1)
T T T F (1, 1, 1, 0) 7 Three-way NAND "at most two" (1, 0, 0, 1)
T T T T (1, 1, 1, 1) 8 Constant true "always" (1, 0, 0, 0)
See also
• Symmetric function
References
1. Ingo Wegener, "The Complexity of Symmetric Boolean Functions", in: Computation Theory and Logic, Lecture Notes in Computer Science, vol. 270, 1987, pp. 433–442
2. "BooleanCountingFunction—Wolfram Language Documentation". reference.wolfram.com. Retrieved 2021-05-25.
3. Canteaut, A.; Videau, M. (2005). "Symmetric Boolean functions". IEEE Transactions on Information Theory. 51 (8): 2791–2811. doi:10.1109/TIT.2005.851743. ISSN 1557-9654.
| Wikipedia |
Orthogonal symmetric Lie algebra
In mathematics, an orthogonal symmetric Lie algebra is a pair $({\mathfrak {g}},s)$ consisting of a real Lie algebra ${\mathfrak {g}}$ and an automorphism $s$ of ${\mathfrak {g}}$ of order $2$ such that the eigenspace ${\mathfrak {u}}$ of s corresponding to 1 (i.e., the set ${\mathfrak {u}}$ of fixed points) is a compact subalgebra. If "compactness" is omitted, it is called a symmetric Lie algebra. An orthogonal symmetric Lie algebra is said to be effective if ${\mathfrak {u}}$ intersects the center of ${\mathfrak {g}}$ trivially. In practice, effectiveness is often assumed; we do this in this article as well.
The canonical example is the Lie algebra of a symmetric space, $s$ being the differential of a symmetry.
Let $({\mathfrak {g}},s)$ be an effective orthogonal symmetric Lie algebra, and let ${\mathfrak {p}}$ denote the −1 eigenspace of $s$. We say that $({\mathfrak {g}},s)$ is of compact type if ${\mathfrak {g}}$ is compact and semisimple. If instead ${\mathfrak {g}}$ is noncompact and semisimple, and ${\mathfrak {g}}={\mathfrak {u}}+{\mathfrak {p}}$ is a Cartan decomposition, then $({\mathfrak {g}},s)$ is of noncompact type. If ${\mathfrak {p}}$ is an Abelian ideal of ${\mathfrak {g}}$, then $({\mathfrak {g}},s)$ is said to be of Euclidean type.
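For a concrete illustration (an added example, not from the original text, though standard), take ${\mathfrak {g}}={\mathfrak {sl}}(2,\mathbb {R} )$, the traceless real $2\times 2$ matrices, with the Cartan involution
$s(X)=-X^{\mathsf {T}},\qquad {\mathfrak {u}}=\{X\in {\mathfrak {g}}:X^{\mathsf {T}}=-X\}={\mathfrak {so}}(2),\qquad {\mathfrak {p}}=\{X\in {\mathfrak {g}}:X^{\mathsf {T}}=X,\ \operatorname {tr} X=0\}.$
Here ${\mathfrak {u}}={\mathfrak {so}}(2)$ is a compact subalgebra, ${\mathfrak {g}}$ is semisimple and noncompact, and ${\mathfrak {g}}={\mathfrak {u}}+{\mathfrak {p}}$ is a Cartan decomposition, so $({\mathfrak {g}},s)$ is of noncompact type; it is the orthogonal symmetric Lie algebra of the hyperbolic plane $\mathrm {SL} (2,\mathbb {R} )/\mathrm {SO} (2)$.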
Every effective, orthogonal symmetric Lie algebra decomposes into a direct sum of ideals ${\mathfrak {g}}_{0}$, ${\mathfrak {g}}_{-}$ and ${\mathfrak {g}}_{+}$, each invariant under $s$ and orthogonal with respect to the Killing form of ${\mathfrak {g}}$, and such that if $s_{0}$, $s_{-}$ and $s_{+}$ denote the restrictions of $s$ to ${\mathfrak {g}}_{0}$, ${\mathfrak {g}}_{-}$ and ${\mathfrak {g}}_{+}$, respectively, then $({\mathfrak {g}}_{0},s_{0})$, $({\mathfrak {g}}_{-},s_{-})$ and $({\mathfrak {g}}_{+},s_{+})$ are effective orthogonal symmetric Lie algebras of Euclidean type, compact type and noncompact type, respectively.
References
• Helgason, Sigurdur (2001). Differential Geometry, Lie Groups, and Symmetric Spaces. American Mathematical Society. ISBN 978-0-8218-2848-9.
| Wikipedia |
SL (complexity)
In computational complexity theory, SL (Symmetric Logspace or Sym-L) is the complexity class of problems log-space reducible to USTCON (undirected s-t connectivity), which is the problem of determining whether there exists a path between two vertices in an undirected graph, otherwise described as the problem of determining whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice.
USTCON is a special case of STCON (directed reachability), the problem of determining whether a directed path between two vertices in a directed graph exists, which is complete for NL. Because USTCON is SL-complete, most advances that impact USTCON have also impacted SL. Thus they are connected, and discussed together.
In October 2004 Omer Reingold showed that SL = L.
Origin
SL was first defined in 1982 by Harry R. Lewis and Christos Papadimitriou,[1] who were looking for a class in which to place USTCON, which until this time could, at best, be placed only in NL, despite seeming not to require nondeterminism. They defined the symmetric Turing machine, used it to define SL, showed that USTCON was complete for SL, and proved that
${\mathsf {L}}\subseteq {\mathsf {SL}}\subseteq {\mathsf {NL}}$
where L is the more well-known class of problems solvable by an ordinary deterministic Turing machine in logarithmic space, and NL is the class of problems solvable by nondeterministic Turing machines in logarithmic space. The result of Reingold, discussed later, shows that in fact, when limited to log space, the symmetric Turing machine is equivalent in power to the deterministic Turing machine.
Complete problems
By definition, USTCON is complete for SL (all problems in SL reduce to it, including itself). Many more interesting complete problems were found, most by reducing directly or indirectly from USTCON, and a compendium of them was made by Àlvarez and Greenlaw.[2] Many of the problems are graph theory problems on undirected graphs. Some of the simplest and most important SL-complete problems they describe include:
• USTCON
• Simulation of symmetric Turing machines: does an STM accept a given input in a certain space, given in unary?
• Vertex-disjoint paths: are there k paths between two vertices, sharing vertices only at the endpoints? (a generalization of USTCON, equivalent to asking whether a graph is k-connected)
• Is a given graph a bipartite graph, or equivalently, does it have a graph coloring using 2 colors?
• Do two undirected graphs have the same number of connected components?
• Does a graph have an even number of connected components?
• Given a graph, is there a cycle containing a given edge?
• Do the spanning forests of two graphs have the same number of edges?
• Given a graph where all its edges have distinct weights, is a given edge in the minimum weight spanning forest?
• Exclusive or 2-satisfiability: given a formula requiring that $x_{i}$ or $x_{j}$ hold for a number of pairs of variables $(x_{i},x_{j})$, is there an assignment to the variables that makes it true?
The complements of all these problems are in SL as well, since, as we will see, SL is closed under complement.
From the fact that L = SL, it follows that many more problems are SL-complete with respect to log-space reductions: every non-trivial problem in L or in SL is SL-complete; moreover, even if the reductions are in some smaller class than L, L-completeness is equivalent to SL-completeness. In this sense this class has become somewhat trivial.
Important results
There are well-known classical algorithms such as depth-first search and breadth-first search which solve USTCON in linear time and space. Their existence, shown long before SL was defined, proves that SL is contained in P. It's also not difficult to show that USTCON, and so SL, is in NL, since we can just nondeterministically guess at each vertex which vertex to visit next in order to discover a path if one exists.
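For concreteness, a linear-time, linear-space solution of USTCON by breadth-first search might look like the following sketch (an added illustration; it assumes the graph is given as an adjacency list and is unrelated to the log-space algorithms discussed next):
from collections import deque

def ustcon_bfs(adj, s, t):
    """Deterministic linear-time check: is t reachable from s in the undirected graph adj?"""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(ustcon_bfs(adj, 0, 2), ustcon_bfs(adj, 0, 3))   # True False
The catch is the linear working memory for the visited set, which is why such algorithms place SL in P but say nothing about logarithmic space.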
The first nontrivial result for SL, however, was Savitch's theorem, proved in 1970, which provided an algorithm that solves USTCON in $\log ^{2}n$ space. Unlike depth-first search, however, this algorithm is impractical for most applications because of its potentially superpolynomial running time. One consequence of this is that USTCON, and so SL, is in ${\mathsf {DSPACE}}(\log ^{2}n)$.[3] (Actually, Savitch's theorem gives the stronger result that NL is in ${\mathsf {DSPACE}}(\log ^{2}n)$.)
Although there were no (uniform) deterministic space improvements on Savitch's algorithm for 22 years, a highly practical probabilistic log-space algorithm was found in 1979 by Aleliunas et al.: simply start at one vertex and perform a random walk until you find the other one (then accept) or until |V|3 time has passed (then reject).[4] False rejections are made with a small bounded probability that shrinks exponentially the longer the random walk is continued. This showed that SL is contained in RLP, the class of problems solvable in polynomial time and logarithmic space with probabilistic machines that reject incorrectly less than 1/3 of the time. By replacing the random walk by a universal traversal sequence, Aleliunas et al. also showed that SL is contained in L/poly, a non-uniform complexity class of the problems solvable deterministically in logarithmic space with polynomial advice.
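The random-walk algorithm itself is only a few lines; the sketch below (an added illustration, assuming an adjacency-list input and Python's random module, with the graph held in ordinary memory rather than on a read-only input tape) accepts as soon as the walk reaches t and rejects after $|V|^{3}$ steps:
import random

def ustcon_random_walk(adj, s, t):
    """Accept if a random walk of length |V|^3 starting at s reaches t."""
    u = s
    for _ in range(len(adj) ** 3):
        if u == t:
            return True                  # t was reached, so s and t are connected
        if not adj[u]:                   # isolated vertex: the walk cannot move on
            break
        u = random.choice(adj[u])        # step to a uniformly random neighbour
    return u == t                        # otherwise reject (wrong only with small probability)

adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(ustcon_random_walk(adj, 0, 2))     # True with high probability
print(ustcon_random_walk(adj, 0, 3))     # False, since no path exists
As in the original analysis, only the walk's current position and step counter need logarithmic space, and repeating the walk drives the one-sided error down exponentially.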
In 1989, Borodin et al. strengthened this result by showing that the complement of USTCON, determining whether two vertices are in different connected components, is also in RLP.[5] This placed USTCON, and SL, in co-RLP and in the intersection of RLP and co-RLP, which is ZPLP, the class of problems which have log-space, expected polynomial-time, no-error randomized algorithms.
In 1992, Nisan, Szemerédi, and Wigderson finally found a new deterministic algorithm to solve USTCON using only $\log ^{1.5}n$ space.[6] This was improved slightly, but there would be no more significant gains until Reingold.
In 1995, Nisan and Ta-Shma showed the surprising result that SL is closed under complement, which at the time was believed by many to be false; that is, SL = co-SL.[7] Equivalently, if a problem can be solved by reducing it to a graph and asking if two vertices are in the same component, it can also be solved by reducing it to another graph and asking if two vertices are in different components. However, Reingold's paper would later make this result redundant.
One of the most important corollaries of SL = co-SL is that LSL = SL; that is, a deterministic, log-space machine with an oracle for SL can solve problems in SL (trivially) but cannot solve any other problems. This means it does not matter whether we use Turing reducibility or many-one reducibility to show a problem is in SL; they are equivalent.
A breakthrough October 2004 paper by Omer Reingold showed that USTCON is in fact in L.[8] Since USTCON is SL-complete, this implies that SL = L, essentially eliminating the usefulness of consideration of SL as a separate class. A few weeks later, graduate student Vladimir Trifonov showed that USTCON could be solved deterministically using $O(\log n\log \log n)$ space, a weaker result obtained with different techniques.[9] There has not been a substantial effort to turn Reingold's algorithm for USTCON into a practical formulation. It is explicit in his paper (and those leading up to it) that they are primarily concerned with asymptotics; as a result, the algorithm he describes would actually take $64^{32}\,\log N$ memory, and $O(N^{64^{32}})$ time. This means that even for $N=2$, the algorithm would require more memory than contained on all computers in the world (a kiloexaexaexabyte).
Consequences of L = SL
The collapse of L and SL has a number of significant consequences. Most obviously, all SL-complete problems are now in L, and can be gainfully employed in the design of deterministic log-space and polylogarithmic-space algorithms. In particular, we have a new set of tools to use in log-space reductions. It is also now known that a problem is in L if and only if it is log-space reducible to USTCON.
Footnotes
1. Lewis, Harry R.; Papadimitriou, Christos H. (1980), "Symmetric space-bounded computation", Proceedings of the Seventh International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science, vol. 85, Berlin: Springer, pp. 374–384, doi:10.1007/3-540-10003-2_85, MR 0589018. Journal version published as Lewis, Harry R.; Papadimitriou, Christos H. (1982), "Symmetric space-bounded computation", Theoretical Computer Science, 19 (2): 161–187, doi:10.1016/0304-3975(82)90058-5, MR 0666539
2. Àlvarez, Carme; Greenlaw, Raymond (2000), "A compendium of problems complete for symmetric logarithmic space", Computational Complexity, 9 (2): 123–145, doi:10.1007/PL00001603, MR 1809688.
3. Savitch, Walter J. (1970), "Relationships between nondeterministic and deterministic tape complexities", Journal of Computer and System Sciences, 4: 177–192, doi:10.1016/S0022-0000(70)80006-X, hdl:10338.dmlcz/120475, MR 0266702.
4. Aleliunas, Romas; Karp, Richard M.; Lipton, Richard J.; Lovász, László; Rackoff, Charles (1979), "Random walks, universal traversal sequences, and the complexity of maze problems", Proceedings of 20th Annual Symposium on Foundations of Computer Science, New York: IEEE, pp. 218–223, doi:10.1109/SFCS.1979.34, MR 0598110.
5. Borodin, Allan; Cook, Stephen A.; Dymond, Patrick W.; Ruzzo, Walter L.; Tompa, Martin (1989), "Two applications of inductive counting for complementation problems", SIAM Journal on Computing, 18 (3): 559–578, CiteSeerX 10.1.1.394.1662, doi:10.1137/0218038, MR 0996836.
6. Nisan, Noam; Szemerédi, Endre; Wigderson, Avi (1992), "Undirected connectivity in $O(\log ^{1.5}n)$ space", Proceedings of 33rd Annual Symposium on Foundations of Computer Science, pp. 24–29, doi:10.1109/SFCS.1992.267822.
7. Nisan, Noam; Ta-Shma, Amnon (1995), "Symmetric logspace is closed under complement", Chicago Journal of Theoretical Computer Science, Article 1, MR 1345937, ECCC TR94-003.
8. Reingold, Omer (2008), "Undirected connectivity in log-space", Journal of the ACM, 55 (4): 1–24, doi:10.1145/1391289.1391291, MR 2445014.
9. Trifonov, Vladimir (2008), "An O(log n log log n) space algorithm for undirected st-connectivity", SIAM Journal on Computing, 38 (2): 449–483, doi:10.1137/050642381, MR 2411031.
References
• C. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. ISBN 0-201-53082-1.
• Michael Sipser. Introduction to the Theory of Computation. PWS Publishing Co., Boston 1997 ISBN 0-534-94728-X.
Important complexity classes
Considered feasible
• DLOGTIME
• AC0
• ACC0
• TC0
• L
• SL
• RL
• NL
• NL-complete
• NC
• SC
• CC
• P
• P-complete
• ZPP
• RP
• BPP
• BQP
• APX
• FP
Suspected infeasible
• UP
• NP
• NP-complete
• NP-hard
• co-NP
• co-NP-complete
• AM
• QMA
• PH
• ⊕P
• PP
• #P
• #P-complete
• IP
• PSPACE
• PSPACE-complete
Considered infeasible
• EXPTIME
• NEXPTIME
• EXPSPACE
• 2-EXPTIME
• ELEMENTARY
• PR
• R
• RE
• ALL
Class hierarchies
• Polynomial hierarchy
• Exponential hierarchy
• Grzegorczyk hierarchy
• Arithmetical hierarchy
• Boolean hierarchy
Families of classes
• DTIME
• NTIME
• DSPACE
• NSPACE
• Probabilistically checkable proof
• Interactive proof system
List of complexity classes
| Wikipedia |
Symmetric Turing machine
A symmetric Turing machine is a Turing machine which has a configuration graph that is undirected (that is, configuration i yields configuration j if and only if j yields i).
Definition of symmetric Turing machines
Formally, we define a variant of Turing machines with a set of transitions of the form $(p,ab,D,cd,q)$, where p,q are states, ab,cd are pairs of symbols and D is a direction. If D is left, then the head of a machine in state p above a tape symbol b preceded by a symbol a can be transitioned by moving the head left, changing the state to q and replacing the symbols a,b by c,d. The opposite transition $(q,cd,-D,ab,p)$ can always be applied. If D is right the transition is analogous. The ability to peek at two symbols and change both at a time is non-essential, but makes the definition easier.
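To make the reversal condition concrete, the toy sketch below (purely illustrative; the tuple encoding of transitions is an assumption made for the example, not part of the formal definition) closes a transition set under the map $(p,ab,D,cd,q)\mapsto (q,cd,-D,ab,p)$, which is exactly what makes the configuration graph undirected:
def close_under_reversal(transitions):
    """Add the reverse of every transition (p, ab, D, cd, q) -> (q, cd, -D, ab, p)."""
    closed = set(transitions)
    for (p, ab, direction, cd, q) in transitions:
        closed.add((q, cd, -direction, ab, p))
    return closed

# Directions encoded as +1 (right) and -1 (left); states and symbol pairs as strings.
delta = {("p", "ab", -1, "cd", "q")}
print(sorted(close_under_reversal(delta)))
# [('p', 'ab', -1, 'cd', 'q'), ('q', 'cd', 1, 'ab', 'p')]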
Such machines were first defined in 1982 by Harry R. Lewis and Christos Papadimitriou,[1][2] who were looking for a class in which to place USTCON, the problem asking whether there is a path between two given vertices s,t in an undirected graph. Until this time, it could be placed only in NL, despite seeming not to require nondeterminism (the asymmetric variant STCON was known to be complete for NL). Symmetric Turing machines are a kind of Turing machine with limited nondeterministic power, and were shown to be at least as powerful as deterministic Turing machines, giving an interesting case in between.
${\mathsf {STIME}}(T(n))$ is the class of the languages accepted by a symmetric Turing machine running in time $O(T(n))$. It can easily be proved that ${\mathsf {STIME}}(T)={\mathsf {NTIME}}(T)$ by limiting the nondeterminism of any machine in ${\mathsf {NTIME}}(T)$ to an initial stage where a string of symbols is nondeterministically written, followed by deterministic computations.
SL=L
Main article: SL (complexity)
SSPACE(S(n)) is the class of the languages accepted by a symmetric Turing machine running in space $O(S(n))$ and SL=SSPACE(log(n)).
SL can equivalently be defined as the class of problems logspace reducible to USTCON. Lewis and Papadimitriou showed this by constructing a nondeterministic machine for USTCON with properties they proved sufficient for building an equivalent symmetric Turing machine. They then observed that every language in SL is logspace reducible to USTCON, since by the properties of symmetric computation the special configurations and the transitions between them can be viewed as the vertices and undirected edges of a graph.
In 2004, Omer Reingold proved that SL=L by showing a deterministic algorithm for USTCON running in logarithmic space,[3] for which he received the 2005 Grace Murray Hopper Award and (together with Avi Wigderson and Salil Vadhan) the 2009 Gödel Prize. The proof uses the zig-zag product to efficiently construct expander graphs.
Notes
1. Jesper Jansson. Deterministic Space-Bounded Graph Connectivity Algorithms. Manuscript. 1998.
2. Harry R. Lewis and Christos H. Papadimitriou. Symmetric space-bounded computation. Theoretical Computer Science. pp.161-187. 1982.
3. Reingold, Omer (2008), "Undirected connectivity in log-space", Journal of the ACM, 55 (4): 1–24, doi:10.1145/1391289.1391291, MR 2445014, S2CID 207168478, ECCC TR04-094
References
• Lecture Notes :CS369E: Expanders in Computer Science By Cynthia Dwork & Prahladh Harsha
• Lecture Notes
• Sharon Bruckner Lecture Notes
• Deterministic Space Bounded Graph connectivity Algorithms Jesper Janson
| Wikipedia |
Symmetric bilinear form
In mathematics, a symmetric bilinear form on a vector space is a bilinear map from two copies of the vector space to the field of scalars such that the order of the two vectors does not affect the value of the map. In other words, it is a bilinear function $B$ that maps every pair $(u,v)$ of elements of the vector space $V$ to the underlying field such that $B(u,v)=B(v,u)$ for every $u$ and $v$ in $V$. They are also referred to more briefly as just symmetric forms when "bilinear" is understood.
Symmetric bilinear forms on finite-dimensional vector spaces precisely correspond to symmetric matrices given a basis for V. Among bilinear forms, the symmetric ones are important because they are the ones for which the vector space admits a particularly simple kind of basis known as an orthogonal basis (at least when the characteristic of the field is not 2).
Given a symmetric bilinear form B, the function q(x) = B(x, x) is the associated quadratic form on the vector space. Moreover, if the characteristic of the field is not 2, B is the unique symmetric bilinear form associated with q.
Formal definition
Let V be a vector space of dimension n over a field K. A map $B:V\times V\rightarrow K$ is a symmetric bilinear form on the space if:
• $B(u,v)=B(v,u)\ \quad \forall u,v\in V$
• $B(u+v,w)=B(u,w)+B(v,w)\ \quad \forall u,v,w\in V$
• $B(\lambda v,w)=\lambda B(v,w)\ \quad \forall \lambda \in K,\forall v,w\in V$
The last two axioms only establish linearity in the first argument, but the first axiom (symmetry) then immediately implies linearity in the second argument as well.
Examples
Let V = Rn, the n dimensional real vector space. Then the standard dot product is a symmetric bilinear form, B(x, y) = x ⋅ y. The matrix corresponding to this bilinear form (see below) on a standard basis is the identity matrix.
Let V be any vector space (including possibly infinite-dimensional), and assume T is a linear function from V to the field. Then the function defined by B(x, y) = T(x)T(y) is a symmetric bilinear form.
Let V be the vector space of continuous single-variable real functions. For $f,g\in V$ one can define $\textstyle B(f,g)=\int _{0}^{1}f(t)g(t)dt$. By the properties of definite integrals, this defines a symmetric bilinear form on V. This is an example of a symmetric bilinear form which is not associated to any symmetric matrix (since the vector space is infinite-dimensional).
Matrix representation
Let $C=\{e_{1},\ldots ,e_{n}\}$ be a basis for V. Define the n × n matrix A by $A_{ij}=B(e_{i},e_{j})$. The matrix A is a symmetric matrix exactly due to symmetry of the bilinear form. If we let the n×1 matrix x represent the vector v with respect to this basis, and similarly let the n×1 matrix y represent the vector w, then $B(v,w)$ is given by :
$x^{\mathsf {T}}Ay=y^{\mathsf {T}}Ax.$
Suppose $C'$ is another basis for V, with ${\begin{bmatrix}e'_{1}&\cdots &e'_{n}\end{bmatrix}}={\begin{bmatrix}e_{1}&\cdots &e_{n}\end{bmatrix}}S$ for some invertible n×n matrix S. Now the new matrix representation for the symmetric bilinear form is given by
$A'=S^{\mathsf {T}}AS.$
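A quick numerical check of these two formulas (an added illustration assuming NumPy; the particular matrices are invented for the example):
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # matrix of B on the basis C, with A_ij = B(e_i, e_j)

def B(v, w):
    return v @ A @ w                    # x^T A y

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
assert np.isclose(B(x, y), B(y, x))     # symmetry: x^T A y == y^T A x

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])              # invertible change-of-basis matrix
A_prime = S.T @ A @ S                   # matrix of B on the new basis C'
assert np.allclose(A_prime, A_prime.T)  # the new representation is still symmetric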
Orthogonality and singularity
Two vectors v and w are defined to be orthogonal with respect to the bilinear form B if B(v, w) = 0, which, for a symmetric bilinear form, is equivalent to B(w, v) = 0.
The radical of a bilinear form B is the set of vectors orthogonal with every vector in V. That this is a subspace of V follows from the linearity of B in each of its arguments. When working with a matrix representation A with respect to a certain basis, v, represented by x, is in the radical if and only if
$Ax=0\Longleftrightarrow x^{\mathsf {T}}A=0.$
The matrix A is singular if and only if the radical is nontrivial.
If W is a subset of V, then its orthogonal complement W⊥ is the set of all vectors in V that are orthogonal to every vector in W; it is a subspace of V. When B is non-degenerate, the radical of B is trivial and the dimension of W⊥ is dim(W⊥) = dim(V) − dim(W).
Orthogonal basis
A basis $C=\{e_{1},\ldots ,e_{n}\}$ is orthogonal with respect to B if and only if :
$B(e_{i},e_{j})=0\ \forall i\neq j.$
When the characteristic of the field is not two, V always has an orthogonal basis. This can be proven by induction.
A basis C is orthogonal if and only if the matrix representation A is a diagonal matrix.
Signature and Sylvester's law of inertia
In a more general form, Sylvester's law of inertia says that, when working over an ordered field, the numbers of diagonal elements in the diagonalized form of a matrix that are positive, negative and zero respectively are independent of the chosen orthogonal basis. These three numbers form the signature of the bilinear form.
Real case
When working in a space over the reals, one can go a bit further. Let $C=\{e_{1},\ldots ,e_{n}\}$ be an orthogonal basis.
We define a new basis $C'=\{e'_{1},\ldots ,e'_{n}\}$
$e'_{i}={\begin{cases}e_{i}&{\text{if }}B(e_{i},e_{i})=0\\{\frac {e_{i}}{\sqrt {B(e_{i},e_{i})}}}&{\text{if }}B(e_{i},e_{i})>0\\{\frac {e_{i}}{\sqrt {-B(e_{i},e_{i})}}}&{\text{if }}B(e_{i},e_{i})<0\end{cases}}$
Now, the new matrix representation A will be a diagonal matrix with only 0, 1 and −1 on the diagonal. Zeroes will appear if and only if the radical is nontrivial.
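By Sylvester's law of inertia, the resulting counts of +1, −1 and 0 entries (the signature) can be read off from the eigenvalues of any matrix representation; a sketch, assuming NumPy and a numerical tolerance for "zero":
import numpy as np

def signature(A, tol=1e-12):
    """(n_+, n_-, n_0) for the symmetric bilinear form with real symmetric matrix A."""
    eigenvalues = np.linalg.eigvalsh(A)             # real, since A is symmetric
    n_plus = int(np.sum(eigenvalues > tol))
    n_minus = int(np.sum(eigenvalues < -tol))
    n_zero = len(eigenvalues) - n_plus - n_minus
    return n_plus, n_minus, n_zero

A = np.array([[1.0, 0.0, 0.0],
              [0.0, -2.0, 0.0],
              [0.0, 0.0, 0.0]])
print(signature(A))                                 # (1, 1, 1)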
Complex case
When working in a space over the complex numbers, one can go further as well and it is even easier. Let $C=\{e_{1},\ldots ,e_{n}\}$ be an orthogonal basis.
We define a new basis $C'=\{e'_{1},\ldots ,e'_{n}\}$ :
$e'_{i}={\begin{cases}e_{i}&{\text{if }}\;B(e_{i},e_{i})=0\\e_{i}/{\sqrt {B(e_{i},e_{i})}}&{\text{if }}\;B(e_{i},e_{i})\neq 0\\\end{cases}}$
Now the new matrix representation A will be a diagonal matrix with only 0 and 1 on the diagonal. Zeroes will appear if and only if the radical is nontrivial.
Orthogonal polarities
Let B be a symmetric bilinear form with a trivial radical on the space V over the field K with characteristic not 2. One can now define a map from D(V), the set of all subspaces of V, to itself:
$\alpha :D(V)\rightarrow D(V):W\mapsto W^{\perp }.$
This map is an orthogonal polarity on the projective space PG(W). Conversely, one can prove all orthogonal polarities are induced in this way, and that two symmetric bilinear forms with trivial radical induce the same polarity if and only if they are equal up to scalar multiplication.
References
• Adkins, William A.; Weintraub, Steven H. (1992). Algebra: An Approach via Module Theory. Graduate Texts in Mathematics. Vol. 136. Springer-Verlag. ISBN 3-540-97839-9. Zbl 0768.00003.
• Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 73. Springer-Verlag. ISBN 3-540-06009-X. Zbl 0292.10016.
• Weisstein, Eric W. "Symmetric Bilinear Form". MathWorld.
| Wikipedia |